How to prove that ${l \choose a_1,...,a_n}\le n^{l-1} $ , when $a_1+...+a_n=l$. In the proof of (Corollary 8 chap. 3 ) in the book "Sobolev Spaces on Domains" by Burenkov the following inequality is used : given $a_1,...,a_n \in \mathbb{N}$ such that $a_1+...+a_n=l$, then $${l \choose a_1,...,a_n}\le n^{l-1} $$ where $${l \choose a_1,...,a_n}=\frac{l!}{a_1!a_2!\cdots a_n!}$$ is the multinomial coefficient. How can one prove this fact? I was able to prove using multinomial theorem that $${l \choose a_1,...,a_n}\le n^{l}$$ but I couldn't prove the sharper inequality.
Assuming that $a_1,\ldots,a_n$ are distinct integers, $$n\binom{l}{a_1,\ldots,a_n}\\=\binom{l}{a_1,a_2,\ldots,a_{n-1},a_n}+\binom{l}{a_2,a_3,\ldots,a_{n},a_1}+\ldots+\binom{l}{a_n,a_1,\ldots,a_{n-2},a_{n-1}},$$ a sum over the $n$ cyclic shifts of $(a_1,\ldots,a_n)$. This sum is at most the sum of all multinomial coefficients $\binom{l}{x_1,x_2,\ldots,x_{n-1},x_n}$ with $x_1+x_2+\ldots+x_{n-1}+x_n=l$, which equals $n^l$ by the multinomial theorem; hence $\binom{l}{a_1,\ldots,a_n}\le n^{l-1}$. The same argument, applied to all $n!$ permutations instead of just the cyclic shifts, gives the stronger: $$\binom{l}{a_1,\ldots,a_n} \leq \frac{n^l}{n!}.$$ We may deal with the cases in which $a_i=a_j$ by inclusion-exclusion.
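As a quick sanity check, both bounds can be verified by brute force for small $n$ and $l$; here is a minimal Python sketch (the helper name is ad hoc):

```python
from math import factorial
from itertools import product

def multinomial(parts):
    # l! / (a_1! * ... * a_n!) where l = sum of the parts
    out = factorial(sum(parts))
    for a in parts:
        out //= factorial(a)
    return out

for n in range(2, 5):
    for l in range(1, 9):
        for parts in product(range(l + 1), repeat=n):
            if sum(parts) != l:
                continue
            m = multinomial(parts)
            assert m <= n ** (l - 1)                  # the desired bound
            if len(set(parts)) == n:                  # distinct parts
                assert m <= n ** l / factorial(n)     # the stronger bound
print("all checks passed")
```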
{ "language": "en", "url": "https://math.stackexchange.com/questions/1804703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How is the entropy of the normal distribution derived? Wikipedia says the entropy of the normal distribution is $\frac{1}2 \ln(2\pi e\sigma^2)$. I could not find any proof for that, though. I found some proofs showing that the maximum entropy is $\frac{1}2+\ln(\sqrt{2\pi}\sigma)$, and while I see that this can be rewritten as $\frac{1}2\ln(e\sigma\sqrt{2\pi})$, I do not see how to get rid of the square root or how to move the extra $\sigma$ into the $\ln$. It is clear that an additional summand $\frac{1}2\ln(\sigma\sqrt{2\pi})$ would help, but where do we get it from? Probably I am just thinking in the wrong way here... So, what is the proof of this formula for the entropy of the normal distribution?
You have already gotten some good answers. I thought I could add something of use which is not really an answer, but may help if you find differential entropy to be a strange concept. Since we cannot store a real or continuous number exactly, entropy for continuous distributions conceptually means something different than entropy for discrete distributions: it measures the information required beyond that needed to specify the resolution of representation. Take for example the uniform distribution on $[0,2^a-1]$ for an integer $a$. At integer resolution it will have $2^a$ equiprobable states, which gives $a$ bits of entropy. Also, the differential entropy is $\log(2^a-0)$, which happens to be the same. But if we want another resolution, fewer or more bits are of course required. Doubling the resolution ($\pm 0.5$) would require 1 more bit (on average).
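To make the resolution picture concrete, here is a small numerical illustration (a sketch; the choice $a=4$ is arbitrary):

```python
import numpy as np

a = 4                                   # uniform distribution on [0, 2^a - 1]
for extra_bits in [0, 1, 2]:            # cell widths 1, 0.5, 0.25
    n_cells = 2**a * 2**extra_bits      # equiprobable cells at this resolution
    p = np.full(n_cells, 1.0 / n_cells)
    H = -np.sum(p * np.log2(p))         # discrete entropy in bits
    print(f"cell width 2^-{extra_bits}: {H:.0f} bits")
print("differential entropy:", np.log2(2**a - 0), "bits")   # log(2^a - 0)
```

At integer resolution the two notions agree, and each halving of the cell width costs exactly one extra bit, as described above.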
{ "language": "en", "url": "https://math.stackexchange.com/questions/1804805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Suppose $0<a<b$; prove that $0<\sqrt[n]{a}<\sqrt[n]{b}$ for all $n\geq 2$ Let $a$ and $b$ be real numbers, and suppose $0<a<b$. Prove for all $n\geq 2$, $0< \sqrt[n]a< \sqrt[n]b$. Proof: Suppose there exists an $n\geq 2$ such that $0 \geq \sqrt[n]a \text{ or } \sqrt[n]a \geq\sqrt[n]b$. Case $0 \geq \sqrt[n]a$: Then $0 \geq a$, which contradicts our assumption that $0<a<b$. Case $\sqrt[n]a \geq\sqrt[n]b$: Then $a \geq b$, which contradicts our assumption that $0<a<b$. In all cases, there is a contradiction with the assumption that $0<a<b$. Therefore, the proposition holds. Is the above proof correct? Can this statement be proven by induction on the natural number $n$?
You can prove this by showing that $\sqrt[n]{x}$ is monotonically increasing when $n\geq 2$. Let $f(x)=\sqrt[n]{x}$, then $f'(x)=\frac{1}{n}x^{\frac{1-n}{n}}>0$ when $n\geq 2$, so $f(x)$ is monotonically increasing when $n\geq 2$. Also, $f(x)>0$ when $x>0$. Therefore, for all $n \geq 2, 0<\sqrt[n]{a}<\sqrt[n]{b}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1804905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove that $\frac {e^{b^2-1}}{b^2} \ge 1$ How can one prove that $$\frac {e^{b^2-1}}{b^2} \ge 1?$$ Should I use a logarithm, a limit, or something else? Or do we have to assume the inequality and work backwards? And how can it be proven forwards, that is, without assuming it is true?
First, let's make a substitution: $x=b^2$. The expression now becomes $\displaystyle\frac{e^{x-1}}{x}$. Next, let's take the derivative of this expression: $\displaystyle \frac{d}{dx}\frac{e^{x-1}}{x}=\frac{(x-1)e^{x-1}}{{x^2}}$ We know that local maxima/minima of this expression occur at values of $x$ for which $\displaystyle\frac{(x-1)e^{x-1}}{{x^2}}=0$ We also know that $x$ must be positive (since $x=b^2$ and the expression is undefined at $x=0$), so we only have to consider positive solutions to the above equation. The only positive solution is $x=1$. Since the derivative $\displaystyle\frac{(x-1)e^{x-1}}{{x^2}}$ is positive for $x>1$ and negative for $0< x<1$, a minimum (as opposed to a maximum) must occur at $x=1$. Ergo, the smallest value of $\displaystyle\frac{e^{b^2-1}}{b^2}$ occurs when $b^2=1$, and that smallest value of $\displaystyle\frac{e^{b^2-1}}{{b^2}}$ is $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1804985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
What's the most efficient algorithm to check the number of cycles of length 4 in an undirected graph? What's the most efficient algorithm to check the number of cycles of length 4 in an undirected, unweighted graph?
I don't know whether this is the fastest possible, but this is how I'd do it if I had to do it fast: Use a hash map that maps pairs of vertices to the number of paths of length $2$ between them. Iterate over the vertices, adding $1$ to the entry for each pair of neighbours of the vertex. Then iterate over the entries, summing $\binom n2$ for each of the counts $n$. Divide the total by $2$, since you've counted each $4$-cycle twice. (Of course if you're really after speed you'd sum $n(n-1)$ and divide by $4$ in the end.)
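A sketch of that procedure in Python (the adjacency-list representation of the graph is an assumption):

```python
from collections import defaultdict
from itertools import combinations

def count_4_cycles(adj):
    # adj: dict mapping each vertex to the set of its neighbours
    paths2 = defaultdict(int)       # (u, w) -> number of length-2 paths u-?-w
    for v in adj:
        for u, w in combinations(sorted(adj[v]), 2):
            paths2[(u, w)] += 1
    # each unordered pair of length-2 paths between u and w closes a 4-cycle,
    # and every 4-cycle is counted once for each of its two diagonals
    return sum(n * (n - 1) // 2 for n in paths2.values()) // 2

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # the 4-cycle 0-1-2-3-0
print(count_4_cycles(adj))                            # 1
```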
{ "language": "en", "url": "https://math.stackexchange.com/questions/1805067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Group extension that doesn't realize a coupling Let $E$ be an extension of $N$ by $G$: $$N \hookrightarrow E \twoheadrightarrow G$$ If $N$ is abelian, then $E$ uniquely defines an action of $G$ on $N$. More generally, it defines a unique homomorphism $\chi$ of $G$ into: $$\text{Out}(N) = \text{Aut}(N)/\text{Inn}(N)$$ We call the pair $(G, \chi)$ a coupling of $G$ to $N$. Robinson says: [...] principal aims of the theory of group extensions may be summarized as follows: (i) to decide which couplings of $G$ to $N$ give rise to an extension of $N$ by $G$; Unfortunately, I'm failing to find a counterexample, a coupling of $G$ to $N$ that does not give rise to an extension of $N$ by $G$. So far, I've looked only at finite, abelian groups $N$. Can someone point me to such a counterexample?
I know one example of this, but it might not be the smallest. Let $N = {\rm SL}(2,9)$, which is isomorphic to a double cover $2.A_6$ of $A_6$. Then ${\rm Out}(N) \cong C_2 \times C_2$. Let $G = {\rm Out}(N)$ with $\chi$ the identity map. Then there is no extension $E$ that induces this coupling. In the ATLAS of Finite Groups, if you look under $A_6$, you will find that there is no extension of the form $2.A_6.2_3 = 2.M_{10}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1805162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to evaluate $\int_{0}^{1} \frac{\ln x}{x+1} dx$ I want to evaluate: $$\int_{0}^{1} \frac{\ln x}{x+1} dx$$ If I were asked to evaluate: $$\int_{0}^{1} \frac{\ln x}{x-1} dx$$ that would be easy, because if I use the Taylor series for $\ln x$ centered at $1$ then things cancel out and leave me with an easy integral. So how about this integral? My guess is to use the Taylor series centered around $-1$. But even with that thought in mind, it does not take me anywhere, because $\ln (-1)$ is not defined. Can someone help?
We know that $$\int_0^1 x^a~dx=\frac{1}{a+1}$$ Then we can differentiate with respect to $a$: $$\int_0^1 x^a \ln x ~dx=-\frac{1}{(a+1)^2}$$ Now we can use the geometric series: $$\sum_{a=0}^\infty (-1)^a x^a=\frac{1}{1+x},~~~|x|<1$$ $$\sum_{a=0}^\infty (-1)^a \int_0^1 x^a \ln x ~dx=- \sum_{a=0}^\infty (-1)^a \frac{1}{(a+1)^2}$$ (Now we interchange integration and summation on the left hand side) $$\int_0^1 \frac{\ln x}{1+x} ~dx=-\sum_{a=0}^\infty \frac{(-1)^a}{(a+1)^2}=\sum_{a=1}^\infty \frac{(-1)^a}{a^2}=-\frac{\pi^2}{12}$$
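The interchange of summation and integration can also be backed up by a direct numerical check of the result (a quick sketch using scipy):

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda x: np.log(x) / (1 + x), 0, 1)
print(value, -np.pi**2 / 12)   # both approximately -0.822467
```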
{ "language": "en", "url": "https://math.stackexchange.com/questions/1805230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
How do curves consist of points? According to Euclid, a point is something which has no dimensions. And we know that all curves of any type consists of points. Now this thing bothers me because if a point has no dimensions, i.e. in other words there is nothing, then how is it possible to draw any curve? The thing I could imagine is that maybe a point is not as Euclid thought. I mean a point can be thought of as a small line segment whose length is approaching zero but never becomes exactly zero. In this way, we can say that curves consist of points (the line segment with length approaching zero). But then it breaks the fact as given by Euclid that a point has no dimensions. Please help me to get out of this dilemma.
A curve is completely determined by two facts: * *Knowledge of all of the points lying on the curve *Knowledge that the curve is drawn on the Euclidean plane When it's said that a curve is made out of points, one really means to include the latter fact too, or something similar (e.g. a topology or a metric on the collection of points). There are more sophisticated geometric techniques (e.g. tangent spaces, halos, germs, stalks) that probe the "infinitesimal" shape of the curve at the point. For example, studying the tangent space to the curve would indeed allow you to say that, at each point where the curve is smooth, it consists of an infinitesimal line. But just to reinforce my initial point, the shape of that infinitesimal line can be ascertained simply by knowing the curve is being drawn on the plane along with which 'nearby' points lie on the curve.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1805301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 9, "answer_id": 2 }
Showing that $\text{exp}(\left \lvert z \right \rvert/(\left\lvert z \right\rvert - 1))\leq \left \lvert 1 + z \right\rvert$ Suppose $z \in \mathbb{C}$ with $\left\lvert{z}\right\rvert < 1$. I want to prove that that $$\exp\left(\frac{|z|}{|z| - 1}\right)\leq |1 + z|.$$ I tried writing $z$ in cartesian form as $x + iy$ and in polar form as $re^{i\theta}$, but neither form seemed viable. I tried squaring the inequality, but that didn't seem useful either. Any ideas on how I can get started on this?
Let $z = re^{i\theta}$, $0 < r < 1$, $\theta \in [0,2\pi]$. The desired inequality becomes $$ \exp\left(\frac{r}{r-1}\right) \leq |1+re^{i\theta}|. $$ The RHS is: $$ |1+re^{i\theta}| = \sqrt{(1+re^{i\theta})(1+re^{-i\theta})} = \sqrt{1+2r\cos\theta + r^2}. $$ Now the desired inequality is equivalent to $$ \frac{r}{r-1} \leq \frac12\ln(1+2r\cos\theta + r^2). $$ For any $r$, the least the RHS can be is when $\theta = \pi$, so it is enough to show that $$ \frac{r}{r-1} \leq \frac12\ln(1-2r + r^2) = \ln(1-r). $$ The last inequality can be shown using calculus.
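Both the original inequality and the reduced one are easy to spot-check numerically (a sketch):

```python
import numpy as np

for r in np.linspace(0.01, 0.99, 50):
    # the reduced inequality r/(r-1) <= ln(1-r)
    assert r / (r - 1) <= np.log(1 - r) + 1e-12
    for theta in np.linspace(0, 2 * np.pi, 60):
        z = r * np.exp(1j * theta)
        # the original inequality exp(|z|/(|z|-1)) <= |1+z|
        assert np.exp(r / (r - 1)) <= abs(1 + z) + 1e-12
print("inequality verified on the grid")
```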
{ "language": "en", "url": "https://math.stackexchange.com/questions/1805376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Given a matrix $A$ with $\operatorname{tr} (A) = 0$, prove that there is a B such that $\forall 1\leq i\leq n :(B^{-1}AB)_{i,i}=0$ I've tried using some matrices $B^{-1}$ that switch the rows, but the $B$ at the end placed the elements back in the diagonal (in different order) so I couldn't find a rule.
Lemma Given any matrix $A$ of order at least $2$ which is not a scalar multiple of the identity, there exists an invertible matrix $E$ such that $(E^{-1}AE)_{11} = 0$. (For a scalar matrix the conclusion clearly fails, since every matrix similar to $aI$ is $aI$ itself; this is why the hypothesis is needed.) Proof: Let $A = [a_{ij}]_{n \times n}$, $n \ge 2$. If $a_{11} = 0$, the result holds with $E = I$, the identity matrix of order $n$. If $a_{11} \neq 0$ and $a_{12} = 0$, then since $A$ is not scalar there is a vector $v$ with $Av$ not a scalar multiple of $v$; extend $v, Av$ to a basis and let $E$ be the invertible matrix having these basis vectors as its columns. The first column of $E^{-1}AE$ is then $(0,1,0,\ldots,0)^T$, so the $(1,1)$ entry is zero. Suppose $a_{11}$ and $a_{12}$ are non-zero. Define the $n \times n$ matrix \begin{equation*} E = \left[\begin{array}{cc|c} a_{12} & 0 & \mathbf 0_{1 \times (n-2)}\\ -a_{11} & 1 & \mathbf 0_{1 \times (n-2)}\\ \hline \mathbf 0_{(n-2) \times 1} & \mathbf 0_{(n-2) \times 1} & I_{n-2} \end{array}\right]. \end{equation*} The inverse of $E$ is easily verified to be \begin{equation*} E^{-1} = \left[\begin{array}{cc|c} 1/a_{12} & 0 & \mathbf 0_{1 \times (n-2)}\\ a_{11}/a_{12} & 1 & \mathbf 0_{1 \times (n-2)}\\ \hline \mathbf 0_{(n-2) \times 1} & \mathbf 0_{(n-2) \times 1} & I_{n-2} \end{array}\right]. \end{equation*} Then the $(1,1)$ entry of $AE$ is $a_{11} a_{12} - a_{12} a_{11} = 0$. Multiplication on the left by $E^{-1}$ results in a matrix whose first row is $1/a_{12}$ times the first row of $AE$ (with the other rows affected in various ways). Therefore, $E^{-1}AE$ also has $(1,1)$ entry zero. $\qquad \square$ Now, we know that similar matrices have the same trace, so $\operatorname{trace}(E^{-1}AE) = \operatorname{trace}(A)$. Thus we have the following corollary. Corollary Every non-scalar square matrix of order at least $2$ is similar to a matrix with the same trace, and having $(1,1)$ entry zero. Proposition Every square matrix $A$ with $\operatorname{trace}(A)=0$ is similar to a matrix with the last diagonal entry equal to $\operatorname{trace}(A)=0$, and all preceding diagonal entries zero. Proof: If $A$ is scalar, then $\operatorname{trace}(A)=0$ forces $A=0$ and there is nothing to prove, so assume $A$ is not scalar. We prove by induction on $k$, $1 \le k \le n - 1$, that $A$ is similar to a matrix with the first $k$ diagonal entries $0$, and having the same trace as $A$ due to similarity. Then the result follows by letting $k = n - 1$. From the above corollary, the result is true for $k = 1$. That is, there is a matrix similar to $A$ and having the same trace as $A$, with its first diagonal entry equal to zero. Suppose the result to be true for $k$. Thus, without loss of generality, assume that the first $k$ diagonal entries of $A$ are all zero. Let $A$ be partitioned as given below (with matrices $X$ and $Y$ of appropriate sizes). \begin{equation*} A = \left[\begin{array}{c|c} Z_{k \times k} & X\\ \hline Y & B_{(n - k) \times (n - k)} \end{array}\right] \end{equation*} Then $Z$ has a zero diagonal, and $\operatorname{trace}(B) = \operatorname{trace}(A) = 0$. If $B$ is scalar, then $\operatorname{trace}(B)=0$ forces $B=0$, and the first $k+1$ (indeed all $n$) diagonal entries are already zero. Otherwise, by the above lemma, we have a matrix $E$ of order $n - k$ such that $E^{-1}BE$ has $(1,1)$ entry zero. Define an $n \times n$ matrix \begin{equation*} F = \left[\begin{array}{c|c} I_k & \mathbf 0'\\ \hline \mathbf 0 & E \end{array}\right] \end{equation*} with the zero matrices $\mathbf 0$ and $\mathbf 0'$ of the same sizes as $Y$ and $X$ respectively. 
Then \begin{align*} F^{-1}AF & = \left[\begin{array}{c|c} I_k & \mathbf 0'\\ \hline \mathbf 0 & E^{-1} \end{array}\right] \left[\begin{array}{c|c} Z& X\\ \hline Y & B \end{array}\right] \left[\begin{array}{c|c} I_k & \mathbf 0'\\ \hline \mathbf 0 & E \end{array}\right] \\ & = \left[\begin{array}{c|c} I_k & \mathbf 0'\\ \hline \mathbf 0 & E^{-1} \end{array}\right] \left[\begin{array}{c|c} Z & XE\\ \hline Y & BE \end{array}\right] \\ & = \left[\begin{array}{c|c} Z & XE\\ \hline E^{-1}Y & E^{-1}BE \end{array}\right] \end{align*} Thus, $F^{-1}AF$ is a matrix similar to $A$ (and hence with the same, zero, trace) with the first $k + 1$ diagonal entries zero (since the first diagonal entry of $E^{-1}BE$ is zero). By induction, the result is true for all $k \le n - 1$. $\qquad \square$ Since the last diagonal entry then equals the trace, which is zero, we have the main result. Theorem Every matrix $A$ with zero trace is similar to a matrix with zero diagonal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1805500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
A curious approximation to $\cos (\alpha/3)$ The following curious approximation $\cos\left ( \frac{\alpha}{3} \right ) \approx \frac{1}{2}\sqrt{\frac{2\cos\alpha}{\sqrt{\cos\alpha+3}}+3}$ is accurate for an angle $\alpha$ between $0^\circ$ and $120^\circ$ In fact, for $\alpha = 90^\circ$, the result is exact. How can we derive it?
Close but not exact even with regard to the linear term. With $x$ and $y$ as per Paramanand Singh's answer, if $x=1-\delta$, then, ignoring $o(\delta)$ throughout, $$\begin{align} y&=(1-\delta)[4(1-2\delta)-3]\\ &=(1-\delta)(4-8\delta-3)\\ &=(1-\delta)(1-8\delta)\\ &=1-9\delta. \end{align}$$ By Paramanand Singh's equation (2), $$\begin{align} x&\approx \frac12\sqrt{\frac{2(1-9\delta)}{\sqrt{1-9\delta+3}}+3}\\ &\approx \frac12\sqrt{\frac{2(1-9\delta)}{\sqrt{4-9\delta}}+3}\\ &\approx \frac12\sqrt{\frac{1-9\delta}{\sqrt{1-9\delta/4}}+3}\\ &\approx \frac12\sqrt{\frac{1-72\delta/8}{1-9\delta/8}+3}\\ &\approx \frac12\sqrt{1-63\delta/8+3}\\ &\approx \frac12\sqrt{4-63\delta/8}\\ &\approx \sqrt{1-63\delta/32}\\ &\approx 1-63\delta/64. \end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1805643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find all the natural numbers $n$ such that $\sigma(n)=15$ Find all the natural numbers $n$ such that $\sigma(n)=15$ Where $\sigma (n)$ is the sum-of-divisors function My attempt: $$n=p_1^{\alpha_1}\cdots p_s^{\alpha_s}$$ $$\sigma(n)=\frac{p_1^{\alpha_1+1}-1}{p_1-1}\cdots \frac{p_s^{\alpha_s+1}-1}{p_s-1}=15$$ $1.\quad \text{ for }s=1:$ $$n=p^\alpha,\quad \frac{p^{\alpha+1}-1}{p-1}=15$$ $$p^{\alpha+1}-1=15p-15\\ p^{\alpha+1}=15p-14$$ I don't know what I should do now.
Here is a way to finish the $s = 1$ case in the way you started it, continuing from your last line: $(p^\alpha - 15)p = -14$. Since $p$ is prime and divides $14$, we must have either $p = 2$ or $p = 7$. So we are left with two cases: $2^\alpha - 15 = -7$ and $7^\alpha - 15 = -2$. It is easy to see that only in the first case $\alpha$ is an integer ($\alpha = 3$), which means that there is exactly one solution to the $s = 1$ case, namely $n = 2^3 = 8$. Time to move on to $s = 2$. Here it is useful that there are not that many ways to write $15$ as the product of $2$ integers. So we can either solve the system of two equations $(p^{\alpha + 1} - 1)/(p-1) = 15$ and $(q^{\beta + 1} - 1)/(q-1) = 1$, or the system of two equations $(p^{\alpha + 1} - 1)/(p-1) = 3$ and $(q^{\beta + 1} - 1)/(q-1) = 5$. Of course all of them take some work, and after solving this we have to move on to $s = 3$, but as Daniel points out above, we won't be busy for ever. You will find that in order to solve the $s = 3$ case you are looking at equations you already studied in the $s = 2$ case and so on. (Note for instance that of the four equations needed in the $s = 2$ case, the first has already been studied when working on $s = 1$.) Good luck! (With many thanks to Daniel for below comment!)
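Since $\sigma(n)\ge n+1$ for $n>1$, any solution satisfies $n\le 14$, so a brute-force search is a handy companion to the case analysis (a sketch):

```python
def sigma(n):
    # sum of the positive divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

print([n for n in range(1, 15) if sigma(n) == 15])   # [8]
```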
{ "language": "en", "url": "https://math.stackexchange.com/questions/1805743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find the range of the function $f(x) = 4x + 8$ for the given domain $D = \{-5, -1, 0, 6, 10\}$ The question is to find the range of each function for the given domain $f(x)=4x+8$, $D=\{-5, -1, 0,6, 10\}$. Is the range just $R= \{-12,4,8,32,48\}$ or am I mistaken? Could you elaborate why my answers are correct?
Sure. The range (or image) of a function is just the set of all possible values of the function that you can get by plugging in values in the domain. If the domain is a finite list of numbers, you can find the range just by plugging in every number in the list, and removing duplicates.
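In code this recipe is a one-liner (a sketch):

```python
def f(x):
    return 4 * x + 8

D = [-5, -1, 0, 6, 10]
print(sorted({f(x) for x in D}))   # [-12, 4, 8, 32, 48]; the set drops duplicates
```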
{ "language": "en", "url": "https://math.stackexchange.com/questions/1805823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Discontinuity and Dirac's Delta Function Can someone help me understand how he came up with Dirac's function to differentiate that discontinuous periodic function? I am familiar with Dirac's function, but I don't understand where it came from in this case. Thanks
The function in the figure can be constructed by the sum of $i(t)=-t/T$ {which is a straight line at -45 degree slope through the origin} and $j(t)=\sum{} h(t-nT)$ {which is a 'staircase' function comprised of the sum of Heaviside step functions} . The derivative of $i(t)+j(t)$ yields your results. So the Dirac distribution 'spikes' come from differentiating the Heaviside step functions in $j(t)$ since the derivative of $h(x-a)$ is $\delta(x-a)$ .
{ "language": "en", "url": "https://math.stackexchange.com/questions/1806052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
about non normality of Sorgenfrey plane There is a classical theorem saying that a regular space with a countable basis is normal. The Sorgenfrey plane is regular since it is the product of two regular spaces (which in fact are normal) and it has a countable basis since it is separable which would say that it should be normal, however, it is NOT. What am I thinking wrong? Thanks in advance.
Countable basis $\Longrightarrow$ separable holds in general topological spaces. Separable $\Longrightarrow$ countable basis holds only in metrizable topological spaces. The Sorgenfrey plane is not metrizable. It is separable, but does not admit a countable basis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1806171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving $\int_{0}^{\infty}\frac{4x^2}{(x^4+2x^2+2)^2}dx\stackrel?=\frac{\pi}{4}\sqrt{5\sqrt2-7}$ $$\int_{0}^{\infty}\frac{4x^2}{(x^4+2x^2+2)^2}dx=\frac{\pi}{15}$$ $$\int_{0}^{\infty}\frac{4x^2}{[1+(1+x^2)^2]^2}dx=\frac{\pi}{15}$$ $u=\tan(z)$ $\rightarrow$ $du=\sec^2(z)\,dz$; as $u \rightarrow \infty$, $z \rightarrow \frac{\pi}{2}$, and as $u \rightarrow 0$, $z \rightarrow 0$. $$\int_{0}^{\pi \over 2}\frac{4\tan^2(z)}{[1+(1+\tan^2(z))^2]^2}\frac{du}{\sec^2(z)}=\frac{\pi}{15}$$ $1+\tan^2(z)=\sec^2(z)$ $$\int_{0}^{\pi \over 2}\frac{4\sin^2(z)}{[1+\sec^4(z)]^2}dz=\frac{\pi}{15}$$ $$\int_{0}^{\pi \over 2}\frac{4\sin^2(z)}{[(1+i\sec^2(z))(1-i\sec^2(z))]^2}dz=\frac{\pi}{15}$$ Hopeless! Any suggestion? Try again: $$\int_{0}^{\pi \over 2}\frac{4\sin^2(z)}{[1+\sec^4(z)]^2}dz=\frac{\pi}{15}$$ $$\int_{0}^{\pi \over 2}\frac{\sin^2(2z)\cos^6(z)}{(1+\cos^4(z))^2}dz=\frac{\pi}{15}$$ $\sin^2(2z)=\frac{1-\cos(4z)}{2}$ No more; I gave up. Any hints?
Hint. One may write $$ \begin{align} \int_{0}^{\infty}\frac{4x^2}{(x^4+2x^2+2)^2}dx&=4\int_0^{\infty}\frac1{\left(\left(x-\frac{\sqrt{2}}x\right)^2+2+2\sqrt{2}\right)^2}\:\frac{dx}{x^2} \\\\&=2\sqrt{2}\int_0^{\infty}\frac1{\left(\left(x-\frac{\sqrt{2}}x\right)^2+2+2\sqrt{2}\right)^2}\:dx \quad (x \to \sqrt{2}/x) \\\\&=\sqrt{2}\int_{0}^{\infty}\frac{1+\frac{\sqrt{2}}{x^2}}{\left(\left(x-\frac{\sqrt{2}}x\right)^2+2+2\sqrt{2}\right)^2}\:dx \quad \text{(averaging the two lines above)} \\\\&=\sqrt{2}\int_{0}^{\infty}\frac{d\left(x-\frac{\sqrt{2}}x\right)}{\left(\left(x-\frac{\sqrt{2}}x\right)^2+2+2\sqrt{2}\right)^2} \\\\&=\sqrt{2}\int_{-\infty}^{\infty}\frac{du}{(u^2+2+2\sqrt{2})^2} \\\\&=\color{red}{\frac14 \sqrt{5\sqrt{2}-7} \:\pi} \\\\ \end{align} $$ since $u=x-\frac{\sqrt{2}}x$ increases from $-\infty$ to $\infty$ as $x$ runs over $(0,\infty)$; for the last step we have made $u:=\sqrt{2+2\sqrt{2}}\:\sinh v$.
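A numerical check of the closed form (a sketch using scipy):

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda x: 4 * x**2 / (x**4 + 2 * x**2 + 2)**2, 0, np.inf)
print(value, np.pi / 4 * np.sqrt(5 * np.sqrt(2) - 7))   # both ~ 0.209396
```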
{ "language": "en", "url": "https://math.stackexchange.com/questions/1806291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Prove $\ \sin(x) < x \ \ \ \forall x \in(0, 2\pi)$ Problem : Prove $\sin(x) < x \ \ \ \forall x \in(0, 2\pi)$ Now I have a possible solution for this, using limits and the first derivatives of $\sin(x)$ and $x$, but I don't feel it's a very rigorous or succinct way to prove this. Can any of you find better ways to prove this? My proof is listed below. Possible Proof First we take the limits of both $\sin(x)$ and $x$ as $x \to 0^{+}$. $$ \lim_{x \to \ 0^{+}}\ \sin(x) = \sin(0) = 0$$ $$ \lim_{x \to \ 0^{+}}\ x = 0$$ Next we take the derivatives of $\sin(x)$ and $x$. To see how they increase/descrease over the interval $(0, 2\pi)$ $$\frac{d}{dx} \ \sin(x)\ = \cos(x)$$ $$\frac{d}{dx} \ x\ = 1$$ $$\text{However} \ \ \cos(x) < 1, \ \forall x \in (0, 2\pi)$$ $$\implies \frac{d}{dx} \ \sin(x) < \frac{d}{dx} \ x\ ,\ \ \forall x \in (0, 2\pi)$$ This shows that the magnitude at which $\sin(x)$ is increasing is less than that of $x$ over the interval $(0, 2\pi)$, therefore if $\sin(x) \not> x$ as $x \to 0^{+}$, $\sin(x) < x, \ \forall x \in (0, 2\pi)$ $$Q.E.D$$ Would you say that this is a satisfactory proof? It doesn't seem particularly satisfactory to me, and it doesn't seem rigorous enough or all that succinct. Are there better or more efficient/clearer ways to prove this, or problems of these sort of nature? Also if you have any comments about my proof-writing skills please leave them below.
If you know that $\cos x < 1$ for $0 < x < 2\pi$, then $$ 0 < \int_0^{x} 1-\cos(t) \, dt = x -\sin(x) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1806387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
How to show that $\sup(\mathbb{Q} \cap (a,b)) = b$? Let $a<b$ be two real numbers. Show that $\sup(\mathbb{Q} \cap (a,b)) = b$ and $\inf(\mathbb{Q} \cap (a,b)) = a$. This intuitively makes sense: since there are rationals arbitrarily close to $a$ and to $b$, it is plausible that the statement is true. How do I prove it rigorously?
Let $A=\Bbb{Q}\cap (a,b)$. Of course, $b\geq x$ for all $x\in A$. Suppose that the supremum of $A$ is $m<b$. By density, there exists $r\in\Bbb{Q}\cap(m,b)$. Hence, $r\in A$ and $r>m$, which contradicts the fact that $m$ is an upper bound of $A$. The proof that $\inf(\Bbb{Q}\cap(a,b))=a$ is analogous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1806463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Normal subgroups of matrices Let $G$ be the group of matrices $\begin{bmatrix}1&a\\0&b\end{bmatrix}$ with $a,b\in\mathbb C$ and $b\ne0$. I need to prove that $G$ has infinitely many normal subgroups. I attempted to do this by constructing some family of normal subgroups, but I keep failing, as most of the things I try aren't even subgroups.
Let $g$ and $h$ be two elements in $G$. Calculate $ghg^{-1}$. You will immediately see under what conditions $ghg^{-1}$ is "of the same form" as $h$. That is, $ghg^{-1} \in H$ where $H$ is a subgroup of $G$. This condition $ghg^{-1} \in H$ means that $H$ is normal. This method will give you infinitely many subgroups.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1806603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to verify $(1+\frac{1}{n})^2(1-\frac{1}{n^2})^{n-1}\geq \exp(\frac{1}{n})$ How to verify this inequality? Assuming that $n\in \mathbb{N}^+$, we have: $$\left(1+\frac{1}{n}\right)^2\left(1-\frac{1}{n^2}\right)^{n-1}\geq \exp\left(\frac{1}{n}\right).$$
Consider $$A_n=\left(1+\frac{1}{n}\right)^2\times\left(1-\frac{1}{n^2}\right)^{n-1}$$ Take logarithms $$\log(A_n)=2\log\left(1+\frac{1}{n}\right)+(n-1)\log\left(1-\frac{1}{n^2}\right)$$ Now, use the Taylor series $$\log(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\frac{x^5}{5}+O\left(x^6\right)$$ and replace $x$ by $\frac 1n$ in the first term and by $-\frac 1{n^2}$ in the second term. This should give $$\log(A_n)=\frac{1}{n}+\frac{1}{6 n^3}+\frac{1}{15 n^5}+O\left(\frac{1}{n^6}\right) >\frac 1n$$ You can check, using limits, that $A_1=4 > e$. Edit Taking more terms, the expansion of $\log(A_n)$ does not seem to contain any negative coefficient (I did stop at $O\left(\frac{1}{n^{1000}}\right)$, using, for sure, a CAS for this last part). A closer look at the coefficients reveals that $$\log(A_n)=\sum_{k=1}^\infty \frac{1}{k (2 k-1)}\frac 1 {n^{2 k-1}}$$
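One can check the inequality, and the size of the gap, numerically (a sketch):

```python
import numpy as np

for n in range(1, 15):
    A = (1 + 1/n)**2 * (1 - 1/n**2)**(n - 1)
    assert A >= np.exp(1/n)
    # the gap log(A_n) - 1/n should approach the leading term 1/(6 n^3)
    print(n, np.log(A) - 1/n, 1 / (6 * n**3))
```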
{ "language": "en", "url": "https://math.stackexchange.com/questions/1806723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$\exp(x)$ as defined by a net Motivation: So, I had an idle thought last week, and I thought I would ask it here before I forget about it. It is well known that we can define $$ e^x = \lim_{n \to \infty} \left(1 + \frac x{n} \right)^{n} $$ Where $x$ here can be taken as either a number or a linear operator. This is often intuitively explained as stating that $e^x$ is the multiplication which is generated by the infinitesimal perturbation of $1$ in the direction of $x$. Or, if you prefer, $e^x$ is the "continuous interest rate" that is generated by the periodic interest rate $x$. In either case, it is "suspicious" that we've broken the $n$th product down into $n$ identical pieces, so perhaps we can come up with a more "robust" definition. In that vein: Problem Statement Let $\lambda$ denote a tuple $(\lambda_1,\lambda_2,\dots,\lambda_n)$. Let $\Lambda$ denote the set of all such (finite) tuples of positive $\lambda_i$ satisfying $\sum_i\lambda_i = 1$ with the partial order $$ (\lambda_{1,1},\dots,\lambda_{1,k_1},\lambda_{2,1},\dots,\lambda_{2,k_2}, \dots \dots \dots,\lambda_{n,1},\dots,\lambda_{n,k_n}) \succeq\\ ([\lambda_{1,1}+\cdots+\lambda_{1,k_1}],[\lambda_{2,1}+\cdots+\lambda_{2,k_2}], \dots,[\lambda_{n,1}+\cdots+\lambda_{n,k_n}]) $$ So, for example, $(1/2,1/2) \preceq (1/4,1/4,1/2) \preceq (1/8,1/8,1/4,1/2)$. Define the net $(e_{\lambda}^x)_{\lambda \in \Lambda}$ by $$ e_{\lambda}^x = \prod_{i =1}^n \left( 1 + \lambda_i x\right) $$ Conjecture: $\lim_{\lambda \in \Lambda} e_\lambda^x = e^x$ Is this statement correct? Has this been done before? Is this demonstrably useless? Let me know.
Having come across one of my old questions, I've decided to leave an answer (following Daniel's hint in the comment). Fix $x \in \Bbb C$. The Taylor expansion for $\log(1 + z)$ (with the Peano form of the remainder) tells us that $$ \log(1 + \mu x) - \mu x = [1 + \zeta(\mu x)]\frac {\mu^2 x^2}2 $$ where $\zeta(z) \to 0$ as $z \to 0$. For any $\epsilon > 0$, we may select a sufficiently small $\delta > 0$ with $\delta |x|^2 < \epsilon$ such that $|\log(1 + \mu x) - \mu x| \leq \mu^2 |x|^2$ whenever $0 < \mu < \delta$. If $\lambda = (\lambda_1,\dots,\lambda_n)$ is chosen such that $\max_k(\lambda_k) < \delta$, then we have $$ |x - \log e_{\lambda}^x| \leq \sum_{k=1}^n |\lambda_k x - \log(1 + \lambda_k x)| \leq \sum_{k=1}^n\lambda_k^2 |x|^2 \leq \max_k(\lambda_k)\,|x|^2 < \delta |x|^2 < \epsilon $$ (using $\sum_k \lambda_k = 1$ in the third step). Now, if we select $N$ with $1/N < \delta$, then the above shows that $$ \lambda \succeq (1/N, \dots, 1/N) \implies |x - \log e^x_{\lambda}| < \epsilon $$ Thus, we have shown that the net $\log e^x_{\lambda}$ converges to $x$, which means that the net $e_{\lambda}^x$ converges to $e^x$, which was the desired conclusion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1806855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Which elements of $\mathbb{R}$ make sense as representatives for cosets of $\mathbb{Q}$ in the group $\mathbb{R/Q}$ I am trying to better understand the group $\mathbb{R/Q}$. It's unclear to me when two irrational numbers will give the same coset of $\mathbb{Q}$, but I know that this must happen since, for example, $\pi+1 = (\pi-1)+2$, meaning the cosets $\pi+\mathbb{Q}$ and $(\pi-1)+\mathbb{Q}$ share an element and thus are equal. Can we describe a set of irrational numbers that gives each coset of $\mathbb{Q}$ exactly once, in a way that, given an irrational number, we would be able to say whether or not it's in the set? Edit: I am also interested in understanding this group in other ways. What is its order? Are there any groups it's isomorphic to?
$\mathbb{R/Z}$ is isomorphic to the unit circle $S^1$, the set of all complex numbers of absolute value $1$, seen as a multiplicative group. $\mathbb{Q/Z}$ is isomorphic to the torsion subgroup of $S^1$, formed by the elements of finite order, that is, all roots of unity. $\mathbb{R/Q}$ is thus isomorphic to $S^1/tor(S^1)$, a very large and complicated group. For instance, the powers of every element form a dense subset. Another approach is to consider $\mathbb{R}$ as a vector space over $\mathbb{Q}$. The dimension here is the cardinality of $\mathbb{R}$ and so $\mathbb{R/Q}$ is essentially the same as $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1806926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why can we choose a greatest ordinal $\beta$, such that $\omega^\beta\leq \alpha$? I am reading a proof of Cantor's normal form theorem. In it, I read: for arbitrary $\alpha>0$ let $\beta $ be the greatest ordinal such that $\omega^\beta \leq \alpha$. Why should such an ordinal exist?
There are ordinals $\varepsilon$ such that $\omega^{\varepsilon} > \alpha$. By the well-ordering of the ordinals, there is hence a smallest ordinal $\gamma$ with $\omega^{\gamma} > \alpha$. For limit ordinals $\lambda$, we have $$\omega^{\lambda} = \bigcup_{\delta\in \lambda} \omega^{\delta} = \sup \{ \omega^{\delta} : \delta \in \lambda\}.$$ Since $\omega^{\delta} \leqslant \alpha$ for all $\delta \in \gamma$ by definition of $\gamma$, we have $$\sup \{ \omega^{\delta} : \delta \in \gamma\} \leqslant \alpha < \omega^{\gamma},$$ hence $\gamma$ is not a limit ordinal, and thus there is a $\beta$ with $\gamma = \beta + 1$. This is then the largest ordinal with $\omega^{\beta} \leqslant \alpha$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1807017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
coordinate geometry in polar coordinates Let $G= \{(x, f(x)) \mid x \text{ lies between } 0 \text{ and } 1 \}$, and let $(1,0)$ belong to $G$. It is given that the tangent vector to $G$ at any point is perpendicular to the radius vector at that point. Is $G$ a parabola or an ellipse?
Well, you are given that $(1,f'(x)) \perp (x,f(x))$ and $f(1) = 0$. Thus, $x + f'(x)f(x) = 0$. We can rewrite this as $$ \frac{d}{dx} f^2(x) = -2x. $$ Integrating, we get $f^2(x) = -x^2 + C$ and by plugging in $f(1) = 0$ we see that $C = 1$. Thus, $f(x) = \pm \sqrt{1 - x^2}$ and thus $G$ is (part of) a circle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1807116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Set of marginals is convex Let $[n] = \{1,2,\ldots,n\}$ and $[m] = \{1,2,\ldots,m\}$. Let $Z_{1,2}$ denote the set of all probability distributions on the Cartesian product $[m]\times [n]$. Let $S_{1,2}$ denote a convex closed subset of $Z_{1,2}$. Let $S_1$ denote the set of probability distributions on the set $[n]$ obtained by marginalizing every probability distribution in the set $S_{1,2}$. Is $S_1$ a convex and closed set?
Think of this as a linear map $\phi: Z_{1,2} \subset \mathbb{R}^{mn} \to \mathbb{R}^n$, where $Z_{1,2}$ is your compact set and $\phi$ is the marginalizing map. Since $\phi$ is linear, it is continuous, and since $S_{1,2}$ is compact, the image $\phi (S_{1,2})$ is compact (and therefore closed). On the other hand, since $\phi$ is linear, it preserves convex combinations. Thus the image is convex. Note: The marginalizing map is the map that sends the joint probability vector to its marginal on $[n]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1807185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that the $7$-th cyclotomic extension $\mathbb{Q}(\zeta_7)$ contains $\sqrt{-7}$ Prove that the $7$-th cyclotomic extension $\mathbb{Q}(\zeta_7)$ contains $\sqrt{-7}$ I thought that the definition of the $n$-th cyclotomic extension was: $\mathbb{Q}(\zeta_n)=\{\mathbb{Q}, \sqrt{-n}\}$. Is this correct? How could I prove the statement? Could we consider the polynomial $X^2+7$ (which is irreducible by Eisenstein's criterion with $p=7$)?
I'm sure there are more elegant ways to observe this, but here's one possible way: * *First, show that the extension is Galois (not too hard) and that $Gal(\mathbb{Q}(\zeta_7)/\mathbb{Q})\cong Z_6$. This has subgroups of order 2 and 3. *Now find the fixed field of the subgroup of order 3. This is a degree 2 extension. To find the generator, find a generator of the Galois group, write the subgroup in terms of powers of the generator, and find a generator of the subgroup. Look at the orbit of that subgroup on $\zeta_7$ and sum it. We guess that this sum will generate a degree 2 extension. *To show that, take powers of that generator and try to find the minimal polynomial. It should be degree 2. The discriminant of the quadratic will be $-7$, so its roots involve $\sqrt{-7}$, and you will be done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1807278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Irrationality of the concatenation of the rightmost nonzero digits in $n!$ Surfing the internet I bumped into a very interesting problem, which I tried to solve, but got no results. The problem is the following: let $h_n$ be the rightmost non-zero digit of $n!$; for example, $10!=3628800$, so $h_{10}=8$. The task is to prove that the decimal fraction $0.h_1h_2\ldots h_n\ldots$ is irrational.
To show that $0.h_1h_2\ldots$ is irrational, it's enough to show that the sequence $h_n$ is not eventually periodic. Assume on the contrary that $h_n$ is eventually periodic. Then $h_n$ is eventually periodic as a sequence modulo $5$. Let $\mathbb Q^+$ denote the multiplicative group of positive rationals. The set of prime numbers constitutes a basis of $\mathbb Q^+$ regarded as $\mathbb Z$-module. Consequently, there exists a unique multiplicative group homomorphism $\lambda:\mathbb Q^+\to\mathbb Q^+$ such that: $$\lambda(p)=\begin{cases}p&p\neq 5\\\frac 12&p=5\end{cases}$$ ($p$ prime number). Then we have $h_n\equiv \lambda(n!)\pmod{10}$ for each $n>0$, hence $$h_n\equiv\lambda(n!)\equiv \prod_{k=1}^n\lambda(k)\pmod{5}$$ Note that $\lambda(k)=k$ if $5\nmid k$. Lemma. Let $u_n(n\in\mathbb N)$ be a sequence in a multiplicative group $G$. If $\prod_k u_k$ is eventually periodic, then $u_n$ is eventually periodic. For if $$\prod_{k=0}^{n+T}u_k=\prod_{k=0}^{n}u_k$$ for each $n\geq N$, then $$\prod_{k=n+1}^{n+T}u_k=1$$ for each $n\geq N$, hence $$\prod_{k=n+1}^{n+T}u_k=1=\prod_{k=n+2}^{n+1+T}u_k$$ from which $u_{n+1}=u_{n+1+T}$ for each $n\geq N$. Thus if $h_n$ is eventually periodic, then also $\lambda(k)$ is eventually periodic in the group of units of $\mathbb Z\diagup 5\mathbb Z$, that is, $\lambda(n+T)\equiv\lambda(n)\pmod 5$ for each $n\geq N$. Write $T=5^vT'$ with $5\nmid T'$; for $n\geq N$ and $n>v$ we have $\lambda(5^n)\equiv 3^n\pmod 5$ and $$\lambda(5^n+T)=\lambda(5^n+5^vT')=3^v\lambda(5^{n-v}+T')\equiv 3^v(5^{n-v}+T')\equiv 3^vT'\pmod 5$$ thus $T'\equiv 3^{n-v}\pmod 5$ for each $n$ large enough, a contradiction which concludes the proof.
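For experimentation, the sequence $h_n$ is easy to generate (a sketch; exact integer arithmetic keeps it correct for all $n$):

```python
from math import factorial

def h(n):
    # rightmost non-zero decimal digit of n!
    f = factorial(n)
    while f % 10 == 0:
        f //= 10
    return f % 10

print([h(n) for n in range(1, 21)])
# [1, 2, 6, 4, 2, 2, 4, 2, 8, 8, 8, 6, 8, 2, 8, 8, 6, 8, 2, 4]
```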
{ "language": "en", "url": "https://math.stackexchange.com/questions/1807380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove that random variables are independent $X_1,X_2,X_3$ are independent random variables, each with exponential distribution with parameter $\lambda=1$. I'd like to prove that the variables $\frac{X_1}{X_2 +X_1}, \frac{X_1 +X_2}{X_1 + X_2 +X_3},X_1 + X_2 +X_3$ are independent too. I can calculate the distributions of those variables (for example, $\frac{X_1}{X_2 +X_1}$ is uniform on $[0,1]$, with density $1_{[0,1]}(x)$), but I can't really go any further. I know I have to prove that the joint density of the vector $\left(A,B,C\right)=\left(\frac{X_1}{X_2 +X_1}, \frac{X_1 +X_2}{X_1 + X_2 +X_3},X_1 + X_2 +X_3\right)$ is a product of the densities of the coordinates, and surely it is possible to do it with the Fubini theorem: $P(A <t, B<s, C< w) = P( (X_1,X_2,X_3) \in D \subset R^{3})$, using the independence of $(X_1,X_2,X_3)$; but it gets very messy, and thus I believe there must be a nicer way to do it. Thank you for any hints.
Use the Jacobian formula. If $X, Y, Z$ are iid exponential($\lambda$), then we obtain $U,V,W$ by applying the transformation $(x,y,z)\mapsto(u,v,w)$ defined by $$\begin{align} u&=\frac x{x+y}\\ v&=\frac{x+y}{x+y+z}\\ w&=x+y+z.\end{align}$$ Invert this mapping to obtain $$\begin{align}x&=uvw\\y&=(1-u)vw\\ z&=(1-v)w.\end{align} $$ Next, compute the Jacobian determinant $$ J(u,v,w):=\det\frac{\partial(x,y,z)}{\partial(u,v,w)}=\left| \begin{array}{lll} vw&-vw&0\\ uw&(1-u)w&-w\\ uv&(1-u)v&1-v\\ \end{array} \right|=vw^2 $$ so the joint density of $(U,V,W)$ is $$ f_{U,V,W}(u,v,w)=f_{X,Y,Z}(x,y,z)\left|J(u,v,w)\right|=\lambda^3e^{-\lambda(x+y+z)}\cdot vw^2=1\cdot 2v\cdot\frac{\lambda^3}2w^2e^{-\lambda w} $$ which factors into a product of the marginal densities. Note that $U$ and $V$ take values in the unit interval $[0,1]$ while $W>0$. [Specifically, $W$ has Gamma($k=3,\lambda$) distribution.]
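A Monte Carlo spot-check of the conclusion (a sketch; vanishing correlations and matching moments are only necessary conditions for independence, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.exponential(1.0, size=(3, 200_000))   # lambda = 1
u = x / (x + y)
v = (x + y) / (x + y + z)
w = x + y + z

print(np.corrcoef([u, v, w]).round(3))   # off-diagonal entries near 0
# theoretical means: E[U] = 1/2, E[V] = 2/3 (density 2v), E[W] = 3
print(u.mean(), v.mean(), w.mean())
```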
{ "language": "en", "url": "https://math.stackexchange.com/questions/1807438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that $(a,b)+(c,d) = (a+c,b+d)$ Let $a<b$ and $c<d$ be real numbers. Show that $(a,b)+(c,d) = (a+c,b+d)$. I don't understand the question. Since $(a,b)$ and $(c,d)$ are intervals, what does it mean to add them?
In terms of sets and set notation: $(a,b)$ = all the points of $\mathbb R$ that are strictly between $a$ and $b$ $=\{x\in \mathbb R\mid a < x < b\}$. If $A$ and $B$ are sets, then we say $A + B =\{x+y\mid x \in A; y \in B \}$. So the statement $(a,b)+(c,d) = (a+c,b+d)$ means that $(a,b) + (c,d)=\{x+y\mid a <x <b;\ c <y<d\}$ is the same as $(a+c, b+d) = \{z\mid a+c <z <b+d\}$. Proof: 1) if $z=x+y \in (a,b) + (c,d)$, that is, $a <x <b$ and $c <y <d$, then $a+c <x+y <b+d$, so $(a,b)+(c,d) \subset (a+c,b+d)$. That was easy. 2) Conversely, suppose $a+c < z < b+d$. As $x$ ranges over $(a,b)$, the difference $z-x$ ranges over the interval $(z-b,\,z-a)$. Since $z>a+c$ we have $z-a>c$, and since $z<b+d$ we have $z-b<d$; hence the open intervals $(z-b,\,z-a)$ and $(c,d)$ overlap. Pick $y$ in their intersection and set $x=z-y$. Then $c<y<d$, and because $z-b<y<z-a$ we also have $a<x<b$. Thus $z=x+y\in(a,b)+(c,d)$, so $(a+c,b+d) \subset (a,b)+(c,d)$. 1 and 2 together mean $(a+c,b+d)=(a,b)+(c,d)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1807527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show that $T^{-1}:Y \to X$ exists and is bounded. Let $T$ be a bounded linear operator from a normed space $X$ onto a normed space $Y$. If there is a positive $b$ such that $$||Tx||\ge b||x||,$$ for all $x \in X$, show that $T^{-1}:Y \to X$ exists and is bounded. My attempt: Suppose $Tx =0$; then clearly $||Tx||=0 \iff || x||=0$, and so $T$ is injective, that is, $T^{-1}$ exists. We now need to show that $T^{-1}$ is bounded. This is where I am stuck. I have the following trail of thought, but I am not sure if this is correct: Since they say $T$ is a b.l.o. from $X$ onto $Y$, we can say that $T$ is surjective. Now, for every $y \in Y$ there exists an $x=T^{-1}y \in X$, and so \begin{align}||T(T^{-1}y)|| &\ge b||T^{-1}y|| \\ \therefore ||y|| &\geq b||T^{-1}y|| \\ \therefore \frac{1}{b}||y|| &\ge ||T^{-1}y||,\end{align} for every $y \in Y$. That is, $T^{-1}$ is bounded. Is this correct?
As I said in a comment, you should also check that $T^{-1}$ is linear but the rest seems fine. Here is the proof. Let $y_1, y_2 \in Y$ and $\lambda, \mu \in \mathbb{C}$. Using the linearity of $T$ you've got $$ \lambda y_1 + \mu y_2 = \lambda TT^{-1}(y_1)+ \mu TT^{-1}(y_2) \\= T\Big(\lambda T^{-1}(y_1) + \mu T^{-1}(y_2)\Big).$$ Then apply $T^{-1}$ on both sides : $$ T^{-1}\big(\lambda y_1 + \mu y_2\big) = \lambda T^{-1}(y_1) + \mu T^{-1}(y_2).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1807612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
I'm stuck in a logarithm question: $4^{y+3x} = 64$ and $\log_x(x+12)- 3 \log_x4= -1$ If $4^{y+3x} = 64$ and $\log_x(x+12)- 3 \log_x4= -1$ so $x + 2y= ?$ I've tried this far, and I'm stuck $$\begin{align}4^{y+3x}&= 64 \\ 4^{y+3x} &= 4^3 \\ y+3x &= 3 \end{align}$$ $$\begin{align}\log_x (x+12)- 3 \log_x 4 &= -1 \\ \log_x (x+12)- \log_x 4^3 &= -1 \\ \log_x(x+12)- \log_x 64 &= -1 \end{align}$$ then I substituted $4^{y+3x} = 64$ $\log_x (x+12) - \log_x 4^{y+3x} = -1$ I don't know what should I do next. any ideas?
You're right up to $y+3x=3$. Now consider the other statement $\log_x(x+12)-3\log_x 4=-1$ $\log_x{x+12 \over 64 }=-1$ ${x+12 \over 64 }={1 \over x}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1807717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
proof that $\int_{a}^{x} f = \int_{x}^{b} f$ I want to show that if $f$ is a continuous function on the interval $[a,b]$ then there must exist some $x \in [a,b]$ such that: $$\int_{a}^{x} f(t)\,dt = \int_{x}^{b} f(t)\,dt$$ Intuitively this seems very easy and I can see why it is true; it's just the structure of the proof that I'm confused about. Is it sufficient to say that since $f$ is continuous, $\int_{a}^{x} f(t)\,dt$ exists $\forall x \in [a,b]$, and in particular we can find some $x_{0}$ such that: $$\int_{a}^{x_{0}} f(t)\,dt = \frac{1}{2} \int_{a}^{b} f(t)\,dt$$ then since: $$\int_{a}^{x_{0}} f(t)\,dt+\int_{x_{0}}^{b} f(t)\,dt = \int_{a}^{b} f(t)\,dt$$ we get that: $$\int_{x_{0}}^{b} f(t)\,dt = \int_{a}^{b} f(t)\,dt - \int_{a}^{x_{0}} f(t)\,dt$$ $$\int_{x_{0}}^{b} f(t)\,dt = \int_{a}^{b} f(t)\,dt - \frac{1}{2} \int_{a}^{b} f(t)\,dt$$ so: $$\int_{x_{0}}^{b} f(t)\,dt = \frac{1}{2} \int_{a}^{b} f(t)\,dt$$ and we get that: $$\int_{a}^{x_{0}} f(t)\,dt = \int_{x_{0}}^{b} f(t)\,dt$$ Is this rigorous enough? Should I try a different method, maybe using partitions and upper/lower sums? Is there a sort of mean value theorem equivalent for integrals, with area instead of the derivative? Thanks guys.
We know that there exists a continuous function on the interval $x \in [a, b]$, $F(x)$, such that: $$ \int_a^bf(t)dt = F(b) - F(a) \\ \int_a^xf(t)dt = F(x) - F(a) \\ \int_x^bf(t)dt = F(b) - F(x) \\ $$ Set the last two expressions equal: $$ F(x) - F(a) = F(b) - F(x) \longrightarrow F(x) = \frac{F(a)+ F(b)}{2} $$ Using the Intermediate Value Theorem, such an $x$ must exist on the interval $x \in [a, b]$, since the average of $F(a)$ and $F(b)$ certainly lies between the two values $F(a)$ and $F(b)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1807798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
If $f(x)$ has a minimum at (3,2) what does $y = \frac{5}{3+f(x)}$ have at x = 3? No calculus allowed. I can get the value by substituting 2 for $f(3)$, and that it should be a maximum turning point as $f(x+\delta h)$ and $f(x -\delta h) > f(x)$ but am not sure how to proceed further.
Let $g(x)=y=5/(3+f(x))$. For any $x$, we have $$ 3+f(x)\geq3+f(3)=3+2=5>0\implies g(x)=\frac{5}{3+f(x)}\leq\frac{5}{3+f(3)}=g(3). $$ And so $y=g(x)$ is everywhere defined (because $3+f(x)>0$ always; see above) and has a global maximum at $x=3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1807915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Calculate the normal cone of a convex set at a point Let $C$ be a convex set in $\mathbb{R}^d$ and $\overline{x}\in C$. We define the normal cone of $C$ at $\overline{x}$ by \begin{equation} N_C(\overline{x}) = \{ y \in \mathbb{R}^d : \langle y ,c-\overline{x}\rangle \leq 0 \ \forall c \in C \}. \end{equation} I found in a book on nonsmooth analysis that, using this definition, the normal cone of \begin{equation} C= \{ (x,0) \in \mathbb{R}^2 : 0\leq x \leq 1 \} \end{equation} at $\overline{x} = (0,0)$ is the set \begin{equation} N_C(0,0) = \{ (y_1,y_2) \in \mathbb{R}^2 : y_1\leq 0 , y_2 \in \mathbb{R} \}. \end{equation} My question is: How do I calculate the normal cone? I tried to use the definition, but I couldn't obtain the result! Thank you very much.
In your case \begin{align} N_C(0,0) &= \{ (y_1,y_2)\in \mathbb{R}^2|y_1c_1+y_2c_2\le 0, \forall (c_1,c_2)\in C\}\\ &=\{ (y_1,y_2)\in \mathbb{R}^2|y_1c_1\le 0, \forall\, c_1\in [0,1]\}\\ &= \{ (y_1,y_2)\in \mathbb{R}^2 | y_1\le 0\} \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1808136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How work out the length of this side? This is probably so basic, but I just cannot see it. If you do not know that the left side is $5x$ and are only given $3x$ and $4x$, how do you deduce $5x$?
As Emilio Novati implies, all right triangles, with given angle $\theta$, are "similar triangles". So since the legs have ratio $\frac{3x}{4x}$, the hypotenuse must be a multiple of the hypotenuse with legs of length 3 and 4 which is, of course, 5. The hypotenuse must have length 5x.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1808234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Uniform bound for a Riemann-like sum and improper integral For any $h>0$, suppose $\{(y_i,y_{i+1}]\mid i\in \mathbb{Z}\}$ is a uniform partition of $\mathbb{R}$ with mesh size $h$. I am considering under what condition for a continuous transition density function $p(h,x,y)$ of a stochastic process we can have $$\sum_i p(h,x,\xi_i)h^3\le Mh^\alpha$$ uniformly in $h,x$. Let's take the Gaussian density function as an example. That is, given the Gaussian density function $$p(h,x,y)=\frac{1}{\sqrt{2\pi h}}e^{-\frac{(x-y)^2}{2h}},$$ I am considering the following two questions: * *Whether there exist $M,h_0>0$ independent of $x$, such that for $h\in (0,h_0)$ we have $$\sum_i p(h,x,\xi_i)h^3\le Mh^\alpha,\tag{1}$$ for some $\alpha>0$. *Whether the Riemann-like sum can be approximated by the corresponding improper integral, in the sense that for any $\epsilon>0$ there exists $h_0$ independent of $x$, such that for any $0<h<h_0,x\in \mathbb{R}$, we have $$\left|\sum_i p(h,x,\xi_i)h- \int_\mathbb{R}p(h,x,y)\,dy\right|<\epsilon .\tag{2}$$ Here $\xi_i\in (y_i,y_{i+1})$ satisfies $p(h,x,\xi_i)=\sup_{y\in [y_i,y_{i+1}]}p(h,x,y)$. I am really sorry that I edited the question; I added more background information so that I can express my original question in a clearer way. Thanks to Kirvich Entracus for the answer below. Generalizing the idea in the answer: if $$p(h,x,y)\le \dfrac{M}{\sqrt{h}}$$ uniformly, and if for fixed $h$ there exists $\delta(h)>0$ such that, for any fixed $x$, $p(h,x,y)$ decreases as $|y-x|$ increases whenever $|y-x|>\delta(h)$, then we can get $$\sum_i p(h,x,\xi_i)h\le \dfrac{M}{\sqrt{h}} \delta(h)+1$$ uniformly for small $h$, by comparing the sum with the improper integral.
Looks not that complicated, if I understood the statements correctly. For simplicity let me set $x=0$, $y_0=0$. For $i\geq 0$: $$\sum_{i\geq 0} p(h,0,\xi _i)h\leq \sqrt{\tfrac{h}{2\pi}}+ \sum_{i\geq 0} q(h,0,\xi_i)h\leq\sqrt{\tfrac{h}{2\pi}}+\int _{y\geq0}p(h,0,y)\,dy\leq \sqrt{\tfrac{h}{2\pi}}+\frac{1}{2},$$ where $q(h,0,\xi_i)=\inf\{p(h,0,y)\mid y\in (y_i,y_{i+1}]\}$; the first inequality holds because $p(h,0,\cdot)$ is monotone on $[0,\infty)$, so the differences $p(h,0,\xi_i)-q(h,0,\xi_i)$ telescope to at most the peak value $\frac{1}{\sqrt{2\pi h}}$. For $i<0$, symmetrically: $$\sum_{i< 0} p(h,0,\xi _i)h\leq \sqrt{\tfrac{h}{2\pi}}+ \sum_{i< 0} q(h,0,\xi_i)h\leq\sqrt{\tfrac{h}{2\pi}}+\int _{y<0}p(h,0,y)\,dy\leq\sqrt{\tfrac{h}{2\pi}}+\frac{1}{2},$$ with $q$ defined in the same way. So for some constant $C>0$: $$\sum_ip(h,0,\xi_i)h\leq Ch^{\frac{1}{2}}+1, \qquad \sum_ip(h,0,\xi_i)h^3\leq (Ch^{\frac{1}{2}}+1)h^2.$$ So we can choose $M=(Ch_0^{\frac{1}{2}}+1)$ and $\alpha=2$. When $x$ is not zero, slight modifications give the same result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1808414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
What are some examples of when Mathematics 'accidentally' discovered something about the world? I do not remember precisely what the equations or who the relevant mathematicians and physicists were, but I recall being told the following story. I apologise in advance if I have misunderstood anything, or just have it plain wrong. The story is as follows. A quantum physicist created some equations to model what we already know about sub-atomic particles. His equations and models are amazingly accurate, but they only seem to be able to hold true if a mysterious particle, currently unknown to humanity, exists. More experiments are run and lo and behold, that 'mysterious particle' in actual fact exists! It was found to be a quark/dark-matter/anti-matter, or something of the sort. What similar occurrences in history have occurred, where the mathematical model was so accurate/good, that it 'accidentally' led to the discovery of something previously unknown? If you have an answer, could you please provide the specific equation(s), or the name of the equation(s), that directly led to this? I can recall one other example. Maxwell's equations predicted the existence of radio waves, which were then found by Hertz.
The Science News article "Accidental Astrophysicists" (13 June 2008) explains how a mathematical proof became a physics proof of gravitational lensing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1808523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "243", "answer_count": 26, "answer_id": 4 }
Simple rocket model I have a problem creating a model for a horizontal rocket flight. I want to model a rocket with constant thrust, a drag constant, and gravity. I also have to account for a changing mass and drag. I know I could calculate the movement of this rocket by something like the Runge-Kutta algorithm, but I want to know if it is possible to create an equation that would give me a position (height) for a given time. Basically those are my functions: $$F_G (t) = - g \, m (t)$$ $$F_D (t) = - c \, v^2 (t)$$ $$F (t) = F_T (t) + F_G (t) + F_D (t)$$ (the minus signs are already built into $F_G$ and $F_D$), where $F_T$ is the (constant) thrust, $F_G$ is gravity, and $F_D$ is drag. I'm kinda stuck here, so any help would be appreciated.
Assuming that your rocket is still not too fast. By Newton's second law (like gt6989 said): $m\ddot{x}+\dot{m}\dot{x}=F(t)-gm-c\dot{x}^2$ or $$m\ddot{x}+\dot{m}\dot{x}+c\dot{x}^2=F(t)-gm$$ You still need to give models for the thrust and also for the mass which is lost. An explicit solution will not be that easy. You could also introduce the substitution $\dot{x}=z$: $$m\dot{z}+\dot{m}z+cz^2=F(t)-gm$$ This is a Riccati differential equation. You might be lucky if there is a general explicit solution. $$\dot{z}=\frac{m_d}{m}z-\frac{c}{m}z^2+\frac{F(t)}{m}-g$$ $$\dot{z}=\frac{m_d}{m_0 - m_d t}z-\frac{c}{m_0 - m_d t}z^2+\frac{F_0}{m_0 - m_d t}-g$$ (here the mass is modeled as $m(t)=m_0-m_d t$, so $\dot m=-m_d$). Maple gives an explicit solution with Bessel functions.
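If a closed form stays out of reach, numerical integration is straightforward; here is a minimal sketch with scipy, where the thrust value, burn time, and the linear mass model $m(t)=m_0-m_d t$ are illustrative assumptions:

```python
from scipy.integrate import solve_ivp

F0, m0, md, c, g, t_burn = 1000.0, 50.0, 1.0, 0.2, 9.81, 30.0  # made-up values

def rhs(t, state):
    x, v = state
    m = m0 - md * t                 # mass during the burn (stays positive here)
    # from m*x'' + m'*x' = F - g*m - c*x'^2 with m' = -md:
    return [v, (F0 - g * m - c * v**2 + md * v) / m]

sol = solve_ivp(rhs, (0.0, t_burn), [0.0, 0.0], max_step=0.1)
print(sol.y[0, -1], sol.y[1, -1])   # position and speed at burnout
```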
{ "language": "en", "url": "https://math.stackexchange.com/questions/1808546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $n \in \mathbb{N}$. Proving that $13$ divides $(4^{2n+1} + 3^{n+2})$ Let $n \in \mathbb{N}$. Prove that $13 \mid (4^{2n+1} + 3^{n+2} ). $ Attempt: I wanted to show that $(4^{2n+1} + 3^{n+2} ) \mod 13 = 0. $ For the first term, I have $4^{2n+1} \mod 13 = (4^{2n} \cdot 4) \mod 13 = \bigg( ( 4^{2n} \mod 13) \cdot ( 4 \mod 13 ) \bigg) \mod 13. $ But still I don't know how to simplify the first term in the large bracket. Any help/suggestions?
By the binomial theorem, $(13+3)^n = 13a + 3^n$ for some integer $a$, so $$ 4^{2n+1} + 3^{n+2} =4\cdot 16^n+9\cdot 3^n =4\cdot (13+3)^n+9\cdot 3^n =4(13a+3^n)+9\cdot 3^n =52a+13\cdot 3^n =13(4a+3^n) $$
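Not part of the proof, but a quick empirical sanity check of the claim costs only a couple of lines:

```python
# Sanity check: 13 divides 4**(2n+1) + 3**(n+2) for the first few n
for n in range(0, 21):
    assert (4**(2*n + 1) + 3**(n + 2)) % 13 == 0
print("holds for n = 0..20")
```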
{ "language": "en", "url": "https://math.stackexchange.com/questions/1808629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
2-norm of matrix How to prove that for a symmetric matrix A with eigenvalues $\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_n$ it holds that $$\vert\vert A \vert\vert = \text{max}(-\lambda_1, \lambda_n)$$ where $\vert\vert \cdot\vert\vert$ denotes the 2-norm? I am familiar with the usual matrix norm, i.e. $\text{max}(\lambda_i)$, but the $-\lambda_1$ confuses me.
If $v_i$ are the corresponding orthonormal eigenvectors of $A$ then you have $$ ||Av||^2 = \left< A\left( \sum_{i=1}^n \left< v, v_i \right>v_i \right), A \left( \sum_{i=1}^n \left< v, v_j \right> v_j\right) \right> = \left< \sum_{i=1}^n \left<v, v_i \right> \lambda_i v_i, \sum_{j=1}^n \left< v, v_j \right> \lambda_j v_j \right> \\ = \sum_{i=1}^n \lambda_i^2 \left<v, v_i \right>^2 \leq \max_{i} \lambda_i^2 \sum_{i=1}^n \left<v, v_i \right>^2 = \max(-\lambda_1, \lambda_n)^2 ||v||^2. $$ In the equations above we used the fact that $v_i$ is an orthonormal basis and so $\left< v_i, v_j \right> = \delta_{ij}$ and the fact that the eigenvalues are ordered and so $\max_{i} {\lambda_i^2} = \max (\lambda_1^2, \lambda_n^2) = \max(-\lambda_1, \lambda_n)^2. $
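A numerical illustration of the statement (a sketch with a random symmetric matrix): the 2-norm, i.e. the largest singular value, coincides with $\max(-\lambda_1,\lambda_n)$, which is the largest eigenvalue in absolute value.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                       # symmetric test matrix

lam = np.sort(np.linalg.eigvalsh(A))    # lambda_1 <= ... <= lambda_n
two_norm = np.linalg.norm(A, 2)         # largest singular value
print(two_norm, max(-lam[0], lam[-1]))  # these agree up to rounding error
```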
{ "language": "en", "url": "https://math.stackexchange.com/questions/1808712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find the range of the given function? Find the range of $$f(x)=\dfrac{x^2+14x+9}{x^2+2x+3}$$ where $x\in \mathbb R$. I thought of finding the derivative but this will get too complicated, so I am completely blank. Thanks in advance!
I think you'd have to find the derivative for this problem. But don't worry, finding the derivative is not as cumbersome as you think. Just use the quotient rule and you'll get something reasonable. You should get $-\frac{12(x^2+x-2)}{(x^2+2x+3)^2}$ and setting that equal to 0 and solving for $x$ should be straightforward.
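For completeness, here is a short symbolic sketch of the full computation: the critical points $x=1$ and $x=-2$ give $f(1)=4$ and $f(-2)=-5$, and together with the limit value $1$ at $\pm\infty$ this pins the range down to $[-5,4]$.

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = (x**2 + 14*x + 9) / (x**2 + 2*x + 3)

crit = sp.solve(sp.Eq(sp.diff(f, x), 0), x)   # roots of x**2 + x - 2: [-2, 1]
vals = [f.subs(x, c) for c in crit]           # [-5, 4]
lim = sp.limit(f, x, sp.oo)                   # 1
print(crit, vals, lim)                        # the range is [-5, 4]
```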
{ "language": "en", "url": "https://math.stackexchange.com/questions/1808805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 0 }
How did we derive this general term for the series? We have this series of numbers: $1, 3, 6, 10, 15$. The general term can be described with $\frac{r(r + 1)}{2}$. Apparently the following series: $1, 4, 10, 20, 35$ can be described with $\frac{r(r + 1)(r + 2)}{6}$ based on the first series. But I am not clear how this is derived. Can someone please explain?
Note that from the given terms alone you are not able to derive a unique expression for the general term: there are infinitely many solutions. Some are more obvious than others (see JMoravitz), but there is no mathematical definition of "more obvious" as far as I know. E.g. a polynomial of 6th degree can fit all the given points, and so can a polynomial of 7th degree, and so on: every polynomial of degree higher than 6 admits an appropriate choice of coefficients so that all the given points lie on the polynomial.
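Under the standard reading of the pattern, the second series lists the partial sums of the first (tetrahedral numbers built from triangular numbers), which is the hockey-stick identity $\sum_{j=1}^{r}\binom{j+1}{2}=\binom{r+2}{3}$; a few lines of code make this concrete:

```python
from math import comb

# Triangular numbers C(r+1, 2) = r(r+1)/2 and their partial sums,
# the tetrahedral numbers C(r+2, 3) = r(r+1)(r+2)/6.
tri = [comb(r + 1, 2) for r in range(1, 6)]   # [1, 3, 6, 10, 15]
tet = [comb(r + 2, 3) for r in range(1, 6)]   # [1, 4, 10, 20, 35]
print(tri, tet)

# Each tetrahedral number is the running sum of triangular numbers:
assert all(tet[k] == sum(tri[:k + 1]) for k in range(5))
```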
{ "language": "en", "url": "https://math.stackexchange.com/questions/1808864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
$\sin\alpha + \sin\beta + \sin\gamma = 4\cos{\frac{\alpha}{2}}\cos{\frac{\beta}{2}}\cos{\frac{\gamma}{2}}$ when $\alpha + \beta + \gamma = \pi$ Assume: $\alpha + \beta + \gamma = \pi$ (Say, angles of a triangle) Prove: $\sin\alpha + \sin\beta + \sin\gamma = 4\cos{\frac{\alpha}{2}}\cos{\frac{\beta}{2}}\cos{\frac{\gamma}{2}}$ There is already a solution on Math-SE, however I want to avoid using the sum-to-product identity because technically the book I go by hasn't covered it yet. So, is there a way to prove it with identities only as advanced as $\sin\frac{\alpha}{2}$? Edit: Just giving a hint will probably be adequate (i.e. what identity I should manipulate).
You may go the other way around: $$ \cos\frac{\gamma}{2}=\cos\frac{\pi-\alpha-\beta}{2}= \sin\frac{\alpha+\beta}{2}= \sin\frac{\alpha}{2}\cos\frac{\beta}{2}+ \cos\frac{\alpha}{2}\sin\frac{\beta}{2} $$ so the right hand side becomes $$ 4\cos\frac{\alpha}{2}\cos\frac{\beta}{2} \sin\frac{\alpha}{2}\cos\frac{\beta}{2}+ 4\cos\frac{\alpha}{2}\cos\frac{\beta}{2} \cos\frac{\alpha}{2}\sin\frac{\beta}{2} $$ Recalling the duplication formula for the sine we get $$ 2\sin\alpha\cos^2\frac{\beta}{2}+2\sin\beta\cos^2\frac{\alpha}{2} $$ and we can recall $$ 2\cos^2\frac{\delta}{2}=1+\cos\delta $$ to get $$ \sin\alpha+\sin\alpha\cos\beta+\sin\beta+\sin\beta\cos\alpha = \sin\alpha+\sin\beta+\sin(\alpha+\beta)= \sin\alpha+\sin\beta+\sin\gamma $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1808948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Least Squares Alternates- approximating functions I was given this least squares problem to solve: Find a linear function $\ell(x)$ such that $\displaystyle\int_0^1(e^x-\ell(x))^2{\rm d}x$ is minimized. As an answer, I got $\ell(x)=0.5876+0.5519x$, which I am pretty sure but not positive that it is right. I am supposed to also find the approximate function $\ell(x)$ two other ways, which is where I need help. a) Find $\ell(x)$ such that $\int_0^1|e^x-\ell(x)|{\rm d}x$ is minimized. (How do I even integrate an absolute value?) b) Find $\ell(x)$ such that $\displaystyle\max_{0\leq x \leq 1}|e^x-\ell(x)|$ is minimized.
About the main problem, since $$ \int_{0}^{1} e^{x}\,dx = (e-1), \qquad \int_{0}^{1} e^{x} P_1(2x-1)\,dx = (3-e) $$ the best $L^2$ approximation is given by $\color{red}{(4e-10)+(18-6e)x}$ as already pointed by Winther in the comments. About $(a)$ and $(b)$, given the convexity of $e^x$ it is quite trivial that the best $L^1$ and $L^\infty$ approximations are given by two lines through $(x_1,e^{x_1})$ and $(x_2,e^{x_2})$ with $0<x_1<x_2<1$. $L^1$ case: let $g(x) = \frac{1}{x_1-x_2}\left(e^{x_2}(x_1-x)-e^{x_1}(x_2-x)\right)$ our candidate best approximation. $$ \int_{0}^{1}\left| e^{x}-g(x)\right|\,dx = \int_{0}^{1}(e^x-g(x))\,dx +2\int_{x_1}^{x_2}(g(x)-e^{x})\,dx $$ gives us a (horrible) function of $x_1,x_2$ to minimize. $L^{\infty}$ case: let $g(x) = \frac{1}{x_1-x_2}\left(e^{x_2}(x_1-x)-e^{x_1}(x_2-x)\right)$ our candidate best approximation. $f(x)-g(x)$ is a convex function with a stationary point at $\log\left(\frac{e^{x_2}-e^{x_1}}{x_2-x_1}\right)$. We have to solve: $$ f(0)-g(0)=\frac{e^{x_2}-e^{x_1}}{x_2-x_1}-g\left(\log\left(\frac{e^{x_2}-e^{x_1}}{x_2-x_1}\right)\right)=f(1)-g(1). $$ By letting $g(x)=a+bx$ and imposing $f(0)-g(0)=f(1)-g(1)$ we get $b=e-1$. So, if the slope of $\ell$ is $e-1$, the stationary point is located at $\log(e-1)$ and $a$ can be found by solving: $$ 1-a = a+(e-1)\log(e-1)-(e-1) $$ and finding: $$\color{red}{ g(x) = \frac{e+(1-e)\log(e-1)}{2}+(e-1)x}.$$ By trial and error we may check that the $L^2$ best approximation is not so bad at all as a $L^\infty$ approximation (as predicted by Chebishev's theorems), and that the error of the best $L^\infty$ approximation is around $\frac{1}{10}$. Probably the exercise is intended as a lesson on how practical the least square method is, and how beautiful $L^2$ is, compared to $L^1$ and $L^{\infty}$.
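Not from the original answer, but a quick numerical cross-check of the $L^2$ coefficients, using the normal equations for the basis $\{1,x\}$ on $[0,1]$:

```python
import numpy as np
from scipy.integrate import quad

# Normal equations for min_{a,b} int_0^1 (e^x - (a + b x))^2 dx:
#   [int 1   int x  ] [a]   [int e^x  ]
#   [int x   int x^2] [b] = [int x e^x]
G = np.array([[1.0, 0.5], [0.5, 1.0 / 3.0]])
rhs = np.array([quad(np.exp, 0, 1)[0],
                quad(lambda x: x * np.exp(x), 0, 1)[0]])
a, b = np.linalg.solve(G, rhs)
e = np.e
print(a, 4 * e - 10)   # both ~0.8731
print(b, 18 - 6 * e)   # both ~1.6903
```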
{ "language": "en", "url": "https://math.stackexchange.com/questions/1809035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Show every irreducible subset of a topological space $X$ is contained in a maximal irreducible subset Let $X$ be a topological space. A subset $A$ is irreducible if for every open $U,V\subseteq A$, we have $U\cap V\neq\varnothing$. Show that any irreducible subset $A\subseteq X$ is contained in a maximal irreducible set. So here's basically what I want to do: let $A$ be an irreducible subset of $X$ and $\hat A$ be the union of all irreducible subsets containing $A$. I think this is the maximal irreducible subset I'm looking for. To show this, let $U,V\subseteq$ be open in $A$. I want to show $U\cap V\neq\varnothing$ but I'm not sure how to do this. It's clear that an open subset of an irreducible set is irreducible, so I could show this if I knew that $U$ and $V$ were both contained in an irreducible set. But I don't know if this is even true. Any hints?
First, let me note that there might be more than one maximal irreducible set containing $A$. For instance, let $X=\{a,b,c\}$ with $\{b\}$ and $\{c\}$ as the only nontrivial open sets. Then $A=\{a\}$ is irreducible, but $\{a,b\}$ and $\{a,c\}$ are two different maximal irreducible sets containing it. In particular, in this example, your $\hat{A}$ would be all of $X$, which is not irreducible. So your approach will not work. More generally, you should not expect there to be any canonical way to construct a maximal irreducible set, because of this non-uniqueness. So instead, you need to do something nonconstructive. I would suggest trying out Zorn's lemma.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1809159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Strange PDE solution Given the linear equation $$u_t -xt u_x = x$$ $x\in\mathbb{R}$, $t>0$, with IVP $u(x,0)=u_0(x)$, my solution comes to $u(x,t) = u_0(xe^{t^2/2})+xt$, but Maple gives a much more complicated solution to this IVP. I would appreciate if someone could please point out what I might not be doing right. Assuming the parametrization $x=x(s), t=t(s), u=u(s)$, and applying the method of characteristics, we get: $$t_s=1, x_s=-xt,u_s=x,$$ so $$u_x = -\frac{1}{t}, u_t=x,$$ thus $$u=xt+c_2, x=e^{t^2/2}c_3,$$ where $c_2, c_3$ are constants. So, $u-xt=c_2, c_3 = xe^{t^2/2}$, $G(xe^{t^2/2})=u-xt$, and $G(x) = u_0(x)$, and we get $$u(x,t) = u_0(xe^{t^2/2})+xt$$ But Maple gives me this:
Follow the method in http://en.wikipedia.org/wiki/Method_of_characteristics#Example: $\dfrac{dt}{ds}=1$ , letting $t(0)=0$ , we have $t=s$ $\dfrac{dx}{ds}=-xt=-xs$ , letting $x(0)=x_0$ , we have $x=x_0e^{-\frac{s^2}{2}}=x_0e^{-\frac{t^2}{2}}$ $\dfrac{du}{ds}=x=x_0e^{-\frac{s^2}{2}}$ , letting $u(0)=f(x_0)$ , we have $u(x,t)=f(x_0)+\int_0^sx_0e^{-\frac{\tau^2}{2}}~d\tau=f(xe^\frac{t^2}{2})+\int_0^txe^\frac{t^2-\tau^2}{2}~d\tau$ $u(x,0)=u_0(x)$ : $f(x)=u_0(x)$ $\therefore u(x,t)=u_0(xe^\frac{t^2}{2})+\int_0^txe^\frac{t^2-\tau^2}{2}~d\tau$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1809316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Linearising shallow-water wave equations We are given the equations $$\frac{\partial{u}}{\partial{t}}+u\frac{\partial{u}}{\partial{x}}+g\frac{\partial{h}}{\partial{x}}=0$$ and $$\frac{\partial{h}}{\partial{t}}+\frac{\partial{(hu)}}{\partial{x}} = 0$$ We are then asked By linearising these equations about a uniform mean flow of speed $u_{0}$ and uniform thickness $h_0$, derive expressions for the phase and group speeds of linear shallow water waves. Are these waves dispersive? Having attempted $u=u_0 + u'$ and $h=h_0 + h'$ I arrived at $$\frac{\partial{u'}}{\partial{t}}+u_0\frac{\partial{u'}}{\partial{x}}+g\frac{\partial{h'}}{\partial{x}}=0$$ and $$\frac{\partial{h'}}{\partial{t}}+u_0\frac{\partial{h'}}{\partial{x}}+h_0\frac{\partial{u'}}{\partial{x}}=0$$ Which I don't see how they come out to be proper wave equations from which I can get the velocities.
Following a suggestion from Semiclassical on the chat and in comment. We can write the systems in the following form: $$\begin{align*}\frac{\partial u^{\prime}}{\partial t} &= -u_{0}\frac{\partial u^{\prime}}{\partial x}-g\frac{\partial h^{\prime}}{\partial x} \\ \frac{\partial h^{\prime}}{\partial t}&= -u_{0}\frac{\partial h^{\prime}}{\partial x} - h_{0}\frac{\partial u^{\prime}}{\partial x}\end{align*}$$ This can be written in the following matrix form: $$\begin{pmatrix}\frac{\partial u^{\prime}}{\partial t} \\ \frac{\partial h^{\prime}}{\partial t}\end{pmatrix} = -\begin{pmatrix}u_{0} & g \\ h_{0} & u_{0}\end{pmatrix}\begin{pmatrix}\frac{\partial u^{\prime}}{\partial x} \\ \frac{\partial h^{\prime}}{\partial x}\end{pmatrix}$$ We can diagonalise the matrix to give us: $$\begin{pmatrix}\frac{\partial u^{\prime}}{\partial t} \\ \frac{\partial h^{\prime}}{\partial t}\end{pmatrix} = \begin{pmatrix}\sqrt{\frac{g}{h_{0}}} & -\sqrt{\frac{g}{h_{0}}} \\ 1 & 1\end{pmatrix}\begin{pmatrix}-\sqrt{gh_{0}}-u_{0} & 0 \\ 0 & \sqrt{gh_{0}}-u_{0}\end{pmatrix}\begin{pmatrix}\sqrt{\frac{g}{h_{0}}} & -\sqrt{\frac{g}{h_{0}}} \\ 1 & 1\end{pmatrix}^{-1}\begin{pmatrix}\frac{\partial u^{\prime}}{\partial x} \\ \frac{\partial h^{\prime}}{\partial x}\end{pmatrix}$$ Premultiplying by the inverse eigenvector matrix: $$\begin{pmatrix}\sqrt{\frac{h_{0}}{g}} & 1 \\ -\sqrt{\frac{h_{0}}{g}} & 1\end{pmatrix}\begin{pmatrix}\frac{\partial u^{\prime}}{\partial t} \\ \frac{\partial h^{\prime}}{\partial t}\end{pmatrix} = \begin{pmatrix}-\sqrt{gh_{0}}-u_{0} & 0 \\ 0 & \sqrt{gh_{0}}-u_{0}\end{pmatrix}\begin{pmatrix}\sqrt{\frac{h_{0}}{g}} & 1 \\ -\sqrt{\frac{h_{0}}{g}} & 1\end{pmatrix}\begin{pmatrix}\frac{\partial u^{\prime}}{\partial x} \\ \frac{\partial h^{\prime}}{\partial x}\end{pmatrix}$$ This leads to two decoupled partial differential equations: $$\begin{align}\frac{\partial}{\partial t}\left(\sqrt{\frac{h_{0}}{g}}u^{\prime} + h^{\prime}\right)&=-\left(\sqrt{gh_{0}}+u_{0}\right)\frac{\partial}{\partial x}\left(\sqrt{\frac{h_{0}}{g}}u^{\prime} + h^{\prime}\right) \\ \frac{\partial}{\partial t}\left(h^{\prime}-\sqrt{\frac{h_{0}}{g}}u^{\prime}\right) &= \left(\sqrt{gh_{0}}-u_{0}\right)\frac{\partial}{\partial x}\left(h^{\prime}-\sqrt{\frac{h_{0}}{g}}u^{\prime}\right)\end{align}$$ We now can assume the two ansätze expressions: $$\sqrt{\frac{h_{0}}{g}}u^{\prime} + h^{\prime} = Ae^{i(k_{1}x - \omega_{1} t)},\quad h^{\prime}-\sqrt{\frac{h_{0}}{g}}u^{\prime} = Be^{i(k_{2}x - \omega_{2} t)}$$ From this and our two derived equations we find: $$-\omega_{1} = -(\sqrt{gh_{0}}+u_{0})k_{1} \implies \frac{\omega_{1}}{k_{1}} = \frac{\partial \omega_{1}}{\partial k_{1}} = \sqrt{gh_{0}}+u_{0}$$ And: $$-\omega_{2} = (\sqrt{gh_{0}}-u_{0})k_{2} \implies \frac{\omega_{2}}{k_{2}} = \frac{\partial \omega_{2}}{\partial k_{2}} = u_{0} - \sqrt{gh_{0}}$$ I'm not sure if this is correct, as this talks about the velocity of propagation of the normal modes of vibration, so I would appreciate it if anyone has any comments as to the validity of this approach.
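A quick numerical cross-check of the diagonalisation (with made-up values of $u_0$, $g$, $h_0$): the eigenvalues of the coefficient matrix are exactly the two wave speeds $u_0\pm\sqrt{gh_0}$, and since $\omega/k=\partial\omega/\partial k$ on both branches, the waves are non-dispersive.

```python
import numpy as np

u0, g, h0 = 2.0, 9.81, 1.5                  # made-up values
M = np.array([[u0, g], [h0, u0]])           # coefficient matrix from above

speeds = np.sort(np.linalg.eigvals(M).real)
print(speeds)                                # [u0 - sqrt(g h0), u0 + sqrt(g h0)]
print(u0 - np.sqrt(g * h0), u0 + np.sqrt(g * h0))
```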
{ "language": "en", "url": "https://math.stackexchange.com/questions/1809452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Support Vector Machines: Hype or Hallelujah? - what is alpha? I am at the moment trying to understand how SVMs work with the help of this paper. The paper itself explains things pretty well, but there is an alpha term which doesn't seem to be documented anywhere. Could any of you elaborate on what it means, and what effect it has? The alpha term is first seen in equation (1).
$c$ (resp. $d$) is a point on the convex hull of the points in Class 1 (resp. Class 2). Therefore, $c$ (resp. $d$) can be represented as a convex combination of the points in Class 1 (resp. Class 2). More specifically, $$ c = \sum_{y_i \in \text{ Class 1}} \alpha_i x_i\quad \text{ for some }\quad \sum_{y_i \in \text{ Class 1}}\alpha_i = 1\quad \text{ and } \quad \alpha_i \geq 0 $$ and $$ d = \sum_{y_i \in \text{ Class 2}} \alpha_i x_i\quad \text{ for some }\quad \sum_{y_i \in \text{ Class 2}}\alpha_i = 1\quad \text{ and } \quad \alpha_i \geq 0 $$
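A tiny numeric illustration of what the $\alpha_i$ do (with made-up points): any point of the convex hull of a class is a weighted average of that class's points, with non-negative weights summing to $1$.

```python
import numpy as np

X = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # Class 1 points (made up)
alpha = np.array([0.5, 0.25, 0.25])                  # alpha_i >= 0, sum = 1

c = alpha @ X           # a point c in the convex hull of Class 1
print(alpha.sum(), c)   # 1.0 [0.5 0.5]
```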
{ "language": "en", "url": "https://math.stackexchange.com/questions/1809567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
100 pieces of paper in a box, one of which has a black dot on it. Probability Question. There are $100$ pieces of paper in a box, one of which has a black dot on it. If $100$ people go up one by one and pick a paper from the box, which one has the lowest probability of getting the black dot, and which one has the highest probability of getting the black dot?
The probability $p_k$ that the $k$-th person, $1\le k\le 100$ has the black dot is equal to $$p_k=\frac{99}{100}\cdot\frac{98}{99}\cdot\;\cdots\;\cdot\frac{100-(k-1)}{100-(k-1)+1}\cdot\frac{1}{100-k+1}=\frac1{100}$$ so it is independent of $k$ and equal for every one.
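The symmetry is easy to confirm by simulation; the sketch below just shuffles the box and records who draws the marked paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 100, 100_000

box = np.zeros(n, dtype=int)
box[0] = 1                          # one paper carries the black dot
wins = np.zeros(n, dtype=int)
for _ in range(trials):
    order = rng.permutation(box)    # people draw in this shuffled order
    wins[np.argmax(order)] += 1     # who got the dot this round

freq = wins / trials
print(freq.min(), freq.max())       # both close to 1/100 = 0.01
```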
{ "language": "en", "url": "https://math.stackexchange.com/questions/1809698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Why is L'Hopital's rule only proved for indeterminate forms? In this proof of L'Hopital's rule, $\lim\limits_{x\to a}\, f(x)=0$ and $g(x)=0$ seem to have no role to play. So what goes wrong when the limits mentioned before are not equal to $0$? (My guess is that you can't assume that $f(a)=g(a)=0$ without making the functions discontinuous when those limits are not equal to $0$. Is that right?) Thanks!
The proof requires an application of Rolle's theorem to $$h(x) = f(x) - \frac{f(b)}{g(b)}g(x).$$ Since $h(b) = 0$, this requires $h(a) = 0$ for all $b >a$. This is satisfied if $f(a) = g(a) = 0$ either outright or by continuous extension using $f(x),g(x) \to 0$ as $x \to a+$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1809776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Solve Poisson Equation Using FFT I am trying to solve the Poisson equation using the FFT. The issue appears at wavenumber $k = 0$ when I want to apply the inverse Laplacian, which means division by zero. We have ${\nabla ^2}\phi = f$. Taking the FFT of both sides we get: $-k^2\hat\phi = \hat f $ or $\hat\phi = \frac{\hat f}{-k^2} $. Assuming that we want to solve this equation in a periodic domain using the DFT via the FFTW package, at $k=0$ we have a division by zero. Does anybody know how to deal with this singularity?
This is conceptually similar to the integrating constants that show up when you are solving a differential equation by other methods. Usually these integrating constants are decided by your boundary conditions. Instead of doing any division you can simply rewrite it as a least-norm problem: $$\|-k^2\hat \phi +\hat f\|_2^2+\text{any more terms you may want}$$ If your $\hat \phi,\hat f$ are stored in vectors, the $k^2$ in the expression above will be a diagonal weight matrix which multiplies the $\hat \phi$ vector. Oops, sorry, I forgot where the FFTW comes into place: you can use it to calculate the matrix-vector product; $F$ below is the Fourier transform matrix. $$\|-k^2 \hat \phi + F f\|_2^2$$ So $Ff$ means multiplying the vector $f$ by $F$, and it will be the output of calling your FFTW library with the vector $f$ as the input. For the other terms you may have, it might be $F^{-1}$ you want to multiply with, but that is simply the corresponding IFFT routine in your library.
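A common practical shortcut, in the same spirit as the least-norm formulation above, is simply to zero out the $k=0$ mode of $\hat\phi$: on a periodic domain a solution exists only when $\hat f(0)=0$ (zero-mean source), and zeroing $\hat\phi(0)$ picks the zero-mean member of the solution family $\phi+\text{const}$. A 1D sketch with NumPy's FFT:

```python
import numpy as np

# Solve phi'' = f on the periodic interval [0, 2*pi) with the FFT.
N = 256
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
f = np.cos(3 * x)                    # zero-mean source; exact phi = -cos(3x)/9

k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi   # integer wavenumbers
f_hat = np.fft.fft(f)

phi_hat = np.zeros_like(f_hat)
nz = k != 0
phi_hat[nz] = f_hat[nz] / (-k[nz] ** 2)   # invert -k^2 away from k = 0
phi_hat[~nz] = 0.0                        # fix the free constant: mean(phi) = 0

phi = np.fft.ifft(phi_hat).real
print(np.max(np.abs(phi - (-np.cos(3 * x) / 9))))    # ~1e-15
```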
{ "language": "en", "url": "https://math.stackexchange.com/questions/1809871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Find $\lim_{n \to +\infty} \frac{1}{n}\sqrt[n]{\frac{(2n)!}{n!}} $ using Riemann integral Wonder how to determine this limit by the use of Riemann integral. The limit is as follows: $$\lim_{n \to +\infty} \frac{1}{n}\sqrt[n]{\frac{(2n)!}{n!}} $$ My instructor told me that the usage of Riemann integral gives spectacular result. Checked Rudin, but did not find any valuable references. I am very interested in seeing how this "spectacular result" emanates. Help/advices/solutions very, very appreciated!
I will repeat basically the same approach as in this answer: Find the value of : $\lim\limits_{n\to \infty} \sqrt [n]{\frac{(3n)!}{n!(2n+1)!}} $ It is also the same approach as suggested in Paramanand Singh's comment. (I see that the OP asks specifically about a proof using Riemann's integral, but this seems interesting enough to be mentioned, too.) We will use this fact (see the linked answer for references): Let $(a_n)$ be a sequence of positive real numbers. If $\lim\limits_{n\to\infty}\frac{a_{n+1}}{a_n}=L$, then $\sqrt[n]{a_n}$ converges too and $\lim\limits_{n\to\infty}\sqrt[n]{a_n}=L$. We use the above for $a_n = \frac{(2n!)}{n!n^n}$ and we get $$\lim\limits_{n\to\infty} \frac{a_{n+1}}{a_n} = \lim\limits_{n\to\infty} \frac{(2n+1)(2n+2)}{(n+1)^2}\cdot\left(\frac{n}{n+1}\right)^n = \frac4e.$$ This implies that also $$\lim\limits_{n\to\infty} \frac1n\sqrt[n]{\frac{(2n)!}{n!}} = \lim\limits_{n\to\infty} \sqrt[n]{a_n} = \frac4e.$$
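A quick numeric sanity check of the limit $4/e\approx 1.4715$, using the log-gamma function to avoid overflow:

```python
from math import lgamma, exp, e

def a(n):
    # (1/n) * ((2n)!/n!)**(1/n), computed via log-gamma
    log_ratio = lgamma(2 * n + 1) - lgamma(n + 1)
    return exp(log_ratio / n) / n

for n in (10, 100, 1000, 10000):
    print(n, a(n))
print("4/e =", 4 / e)
```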
{ "language": "en", "url": "https://math.stackexchange.com/questions/1809965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 0 }
Using the digits of $\pi$ to generate random numbers. Let's say I've been captured by Russian operatives and am locked in a room with only one object: a book listing the digits of $\pi$. I'm told to generate a sequence of binary digits. If this sequence is random, they will cut off one of my arms and let me free; if this sequence is not random, however, I will be killed. My first solution was to take the digits of $\pi \ \text{mod} \ 2$, so that: $$3.1415926535897...$$ $$\downarrow$$ $$1.1011100111011...$$ And I would read the digits from left-to-right of the second number. My Question Is there any way to prove that the bits I generate are random (no discernible pattern)? Are the digits of $p \ \text{mod} \ 2$ random for any transcendental $p$? How about any irrational $p$? I feel like this should be a really easy question (with an affirmative answer), but I don't know how to show it.
There are long lists of digits for $\pi$ online. You could write a computer program that calculates different types of statistics to test your hypothesis. One interesting statistic could be to measure $$P(X_{i+1}=1|X_{i},\cdots,X_{i-n})$$ That is: the conditional probability of one bit being $1$ given that we know the binary number of length $n$ preceding it. This conditional probability should converge to the stationary $P(X_{i+1}=1)$ for all such $n$-bit numbers if there is no short-term memory in our source.
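A rough version of this test takes only a few lines (assuming you trust mpmath's digits of $\pi$); here we take the decimal digits mod 2, as in the question, and estimate the one-step conditional frequencies:

```python
from mpmath import mp

mp.dps = 10_050                          # working precision in decimal digits
digits = mp.nstr(mp.pi, 10_000).replace('.', '')[:10_000]
bits = [int(d) % 2 for d in digits]      # decimal digits of pi, mod 2

pairs = list(zip(bits, bits[1:]))
for b in (0, 1):
    nxt = [y for prev, y in pairs if prev == b]
    print(f"P(next=1 | current={b}) ~ {sum(nxt) / len(nxt):.4f}")
print(f"overall frequency of 1s: {sum(bits) / len(bits):.4f}")
```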
{ "language": "en", "url": "https://math.stackexchange.com/questions/1810020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 2 }
Difference between viewer and camera in a 3d projection I have been programming a 3D graphics library for myself and I have used the following wikipedia page to help me. https://en.wikipedia.org/wiki/3D_projection#Perspective_projection The article references both a camera position and a viewers position. I assume after finishing my implementation that the viewer has something to do with the field of view but it makes no effort to explain how. It simply states at the beginning "The camera's position, orientation, and field of view control the behavior of the projection transformation." The article describes the use of the camera's position and orientation but never clarifies where field of view comes into play. It later uses the coordinates of a viewer in the final projection, but it is unclear to me what these values mean. So then my question is: what is the difference between a viewer and a camera in a 3d projection of an image to a 2d plane? And how do I use this knowledge to manipulate the field of view?
It should be roughly like this: * *The camera projects the 3D world onto its 2D display. *The viewer (observer) takes a look at that display. *The field of view is a property of the camera, describing what part of a sphere around the camera is visible to it
{ "language": "en", "url": "https://math.stackexchange.com/questions/1810135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove or disprove: there exists a basis $p_0, p_1, p_2, p_3 \in P_3(F)$ such that none of the polynomials $p_0, p_1, p_2, p_3$ has degree 2 Prove or disprove: there exists a basis $p_0, p_1, p_2, p_3 \in P_3(F)$ such that none of the polynomials $p_0, p_1, p_2, p_3$ has degree 2 This is a repeat of Does there exist a basis $(p_0,p_1,p_2,p_3)\in P_3(\Bbb F)$ such that none of the polynomials $p_0,p_1,p_2,p_3$ has degree $2$? But I just have a question in regards to the supposed basis vectors. My conclusion was that it could not occur because in order to characterize all of the polynomials of degree 3, you will need a polynomial of degree 2. But the solution said otherwise, particularly how are $x^2 + x^3, x^2$ going to be basis vectors. Do these not have a polynomial of degreee 2? Which is what we are trying to show cannot occur?
Suppose we have a list of vectors $(v_1,v_2,v_3,v_4)$, and that this list forms a basis for the space $V$. Our task is to prove that $$(v_1+v_4,v_2+v_4,v_3+v_4,v_4)$$ also forms a basis of $V$. Our first step is to prove that $(v_1+v_4,v_2+v_4,v_3+v_4,v_4)$ is linearly independent. To do so, we'll examine the following: $$a(v_1+v_4)+b(v_2+v_4)+c(v_3+v_4)+dv_4=\ av_1+bv_2+cv_3+(a+b+c+d)v_4 =\ 0$$ $$\iff$$ $$a=b=c=d=0$$ $(v_1+v_4,v_2+v_4,v_3+v_4,v_4)$ only produces the zero vector when the coefficients are all zero, so this list is linearly independent. Now we can show that all of the original elements that formed the basis of $V$ can be represented as a linear combination of the potential basis. This isn't as hard as it might sound. For instance $$v_1=(v_1+v_4)-v_4$$ $$v_2=(v_2+v_4) - v_4$$ $$v_3=(v_3+v_4)-v_4$$ $$v_4=v_4$$ Et voila! New basis proven. If you substitute $v_1 = 1$, $v_2=x$, $v_3=x^2$, and $v_4=x^3$, then you will have essentially proven that you can represent $\mathscr{P}_3(\mathbb{F})$ with a basis of polynomials none of which has degree $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1810250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
What is the classifying space G/Top? I simply can't find the definition(except in one book on surgery where a definition was not actually given but instead they alluded to what the definition is) and I have spent an hour and half looking. My professor used the notation when talking me today. I would be grateful for a reference. My background is evident from my stackexchange posts.
A reference available online is Rudyak's survey Piecewise Linear Structures on Topological Manifolds. Beware that what you are calling $G$ he calls $F$ (I think both notations are common --and awful). $G/TOP$ can be defined as the homotopy fiber of the canonical map $BTOP \to BG$, where $BTOP$ is the classifying space for stable topological bundles and $BG$ is the classifying space for stable spherical fibrations. $BTOP$ can be defined as follows: let $TOP_n$ denote the topological group of self homeomorphisms of $\mathbb{R}^n$ fixing the origin, and let $BTOP_n$ be its classifying space. Each $TOP_n$ can be included in the next by sending $h : \mathbb{R}^n \to \mathbb{R}^n$ to $h \times \mathrm{id} : \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$. You get induced maps $BTOP_n \to BTOP_{n+1}$ and define $BTOP$ to be the colimit of all the $BTOP_n$. $BG$ can be defined as follows: let $G_n$ be the topological monoid of pointed homotopy self equivalences of $S^n$ and $BG_n$ its classifying space (note that homotopy equivalences don't strictly speaking have an inverse under composition, so $G_n$ is not a group, just a monoid). Suspension gives a map $G_n \to G_{n+1}$ and you set $BG$ to the colimit of the corresponding sequence of maps $BG_n \to BG_{n+1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1810367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Isomorphism of two graphs using adjacency matrix How can I show that the following two graphs are isomorphic: Steps: The given graphs can be written as:
I am keeping this answer as simple as I can, so kindly pardon the layman's language. Observe the two graphs: G2 can be obtained from G1 by interchanging the first and second rows of the drawing of G1. So the graphs are definitely isomorphic (the map is bijective and satisfies the edge-adjacency property). The mapping would be v6 -> w1, v1 -> w5 and v2 -> w6, the rest being a pretty straightforward mapping. These mappings satisfy the isomorphism property, hence the graphs are isomorphic.
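The row/column-swap argument can be made precise with a permutation matrix: the graphs are isomorphic under a relabelling exactly when $P A_1 P^{\top}=A_2$ for the corresponding permutation matrix $P$. A sketch with small hypothetical adjacency matrices (the question's own matrices are not reproduced here):

```python
import numpy as np

# Hypothetical example: a 3-vertex path labelled two different ways.
A1 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])       # path with middle vertex 1
A2 = np.array([[0, 1, 1],
               [1, 0, 0],
               [1, 0, 0]])       # same path with middle vertex 0

pi = [1, 0, 2]                   # vertex a of G2 corresponds to vertex pi[a] of G1
P = np.eye(3, dtype=int)[pi]     # permutation matrix built from pi
print(np.array_equal(P @ A1 @ P.T, A2))   # True -> the relabelling works
```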
{ "language": "en", "url": "https://math.stackexchange.com/questions/1810454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Sums involving binomial coefficients in a finite field Consider the field $\mathbb{F}_q$ where $q=p^k$ for some prime $p$. I have some identities related to binomial coefficients over such a field, which I wish to prove. So, can someone tell me a source where I could read up on these? An example of the identities that I am looking out for is the following. For some $a$, such that $0 \leq a < q$, $${q(q-1)\choose(q-a)(q-1)} + {(q-1)(q-1)\choose(q-a)(q-1)} + {(q-2)(q-1)\choose(q-a)(q-1)} + . . . + {(q-a)(q-1)\choose(q-a)(q-1)} = 1$$ in $\mathbb{F}_q$. I have some more similar identities that I wish to prove and I would like it if someone could give me a hint / a strategy / a reference for the same. (Please comment if you wish to see more context)
I found the solution to the above problem. A general term of the above sum looks like ${(q-t)(q-1) \choose (q-a)(q-1)}$ which is the coefficient of $x^{(q-a)(q-1)}$ in $(1+x)^{(q-t)(q-1)}$. Hence, the sum is the coefficient of $x^{(q-a)(q-1)}$ in $(1+x)^{q(q-1)}+(1+x)^{(q-1)^2}+...+(1+x)^{(q-a)(q-1)}$. We can continue this sum up till 1 as none of those terms will alter any coefficient of $x^{(q-a)(q-1)}$. So, we are looking for the coefficient of $x^{(q-a)(q-1)}$ in: $$(1+x)^{q(q-1)}+(1+x)^{(q-1)^2}+...+(1+x)^{(q-a)(q-1)} + (1+x)^{(q-a-1)(q-1)} + . . . + 1$$ $$=\frac{(1+x)^{(q-1)(q+1)}-1}{(1+x)^{q-1}-1}$$ $$=\frac{\frac{(1+x)^{q^2}}{1+x}-1}{\frac{(1+x)^q}{1+x}-1}$$ $$=\frac{\frac{1+x^{q^2}}{1+x}-1}{\frac{1+x^q}{1+x}-1}$$ $$=\frac{x^{q^2}-x}{x^q-x}$$ $$=\frac{x^{q^2-1}-1}{x^{q-1}-1}$$ $$=1+x^{q-1}+x^{2(q-1)}+...+x^{q(q-1)}$$ And hence, the required coefficient is $1$.
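The identity is also easy to spot-check computationally for prime $q$ (for prime powers one would reduce modulo $p$ instead of $q$); Python's exact integers make the large binomials harmless:

```python
from math import comb

def check(q, a):
    # sum_{t=0}^{a} C((q-t)(q-1), (q-a)(q-1))  should be 1 in F_q
    total = sum(comb((q - t) * (q - 1), (q - a) * (q - 1)) for t in range(a + 1))
    return total % q == 1

# Spot checks for prime q and all 0 <= a < q
print(all(check(q, a) for q in (3, 5, 7, 11) for a in range(q)))   # True
```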
{ "language": "en", "url": "https://math.stackexchange.com/questions/1810562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a rigorous proof of this combinatorial identity? Theorem: For any pair of positive integers $n$ and $k$, the number of $k$-tuples of positive integers whose sum is $n$ is equal to the number of $(k − 1)$-element subsets of a set with $n − 1$ elements. Does anyone know of a rigorous mathematical proof to this theorem? All the examples I have seen thus far just use the "stars and bars" explanation.
Stars and bars will explain it, but suppose we go for an inductive proof. So let $S(n,k)$ mean that the number of solutions to $$x_1+x_2+ \cdots +x_k=n \tag{1}$$ with positive $x_j$ (order mattering) is given by $\binom{n-1}{k-1}.$ A few base cases are easily established, so we turn to breaking up the solutions of (1) into (A) those in which $x_k=1$ and (B) those in which $x_k>1.$ On subtracting the final $1,$ the type (A) solutions become those of $$x_1+x_2+ \cdots +x_{k-1}=n-1, \tag{2}$$ of which there are by induction $\binom{n-2}{k-2}$ solutions, while subtracting $1$ in case (B) leaves the last term still positive at $x_k-1$ since $x_k>1$ in case (B). That is, this case gives $$x_1+x_2+ \cdots +(x_k-1)=n-1 \tag{3}$$ which, using the inductive hypothesis again, has $\binom{n-2}{k-1}$ solutions. Then adding these two binomials for cases (A) and (B) results in $\binom{n-1}{k-1}$ using the Pascal identity. I don't know if this approach is really any more rigorous than stars and bars, but it does give another approach to the count, one relying heavily on already knowing what the formula should be.
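Brute force agrees with the formula for small cases, which is reassuring even if it proves nothing:

```python
from math import comb
from itertools import product

def compositions(n, k):
    """Count k-tuples of positive integers summing to n (brute force)."""
    return sum(1 for t in product(range(1, n + 1), repeat=k) if sum(t) == n)

for n in range(1, 8):
    for k in range(1, n + 1):
        assert compositions(n, k) == comb(n - 1, k - 1)
print("stars and bars verified for n <= 7")
```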
{ "language": "en", "url": "https://math.stackexchange.com/questions/1810625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Group generated by $x,y$. With relations $x^3=y^2=(xy)^2=1$. Let $G$ be a group generated by $x,y$ with the relations $x^3=y^2=(xy)^2=1$. Then show that the order of $G$ is 6. My attempt: So writing down the elements of $G$ we have $\{1,x,x^2,y,\}$. Other elements include $\{xy, xy^2, x^2y\}$ it seems I am counting more than $6$. Are some of these equal? how do I prove that?
One group presentation for the dihedral group $D_n$ is $\langle x,y|x^2=1,y^n=1,(xy)^2=1\rangle $. Hence the group is indeed isomorphic to $D_3$. Here $x$ with $x^2=1$ corresponds to a reflection, and $y$ with $y^3=1$ to a rotation by $120$ degrees. Finally we have $xyx^{-1}=y^{-1}=y^2$, which is how rotation and reflection interact. So all elements are given by $\{1,y,y^2,x,xy,xy^2\}$. Sorry, I have interchanged $x$ and $y$ relative to the question here.
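A concrete model makes the count of six tangible: in $S_3$, a 3-cycle and a transposition satisfy exactly these relations and generate a group of order 6. A sketch with SymPy:

```python
from sympy.combinatorics import Permutation, PermutationGroup

x = Permutation([1, 2, 0])    # the 3-cycle (0 1 2): x**3 is the identity
y = Permutation([1, 0, 2])    # the transposition (0 1): y**2 is the identity

assert (x**3).is_identity and (y**2).is_identity and ((x*y)**2).is_identity
G = PermutationGroup([x, y])
print(G.order())              # 6
```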
{ "language": "en", "url": "https://math.stackexchange.com/questions/1810729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Clarification about injectivity of a function Let $f:{X\to Y}$ be a function. Show: $f$ is injective $\Leftrightarrow \exists \space h:{Y\to X} \ni h\circ f=id_X$. If we have $f(x)=f(y)$ then $x=y$. My problem is how can we have the function $h$? f is injective but not surjective this means that there might be values of $Y$ that don't correspond to anything in $X$. By the definition of a function h must map all values from $Y$ to $X$. How can I find $h\circ f=id_X$ I am thinking about restriction function here, $ h:{Y/{A}\to X}$ but the problem asks for $Y$.
You can define how $h$ acts on those values $\alpha \in Y$ for which there exists an $x \in X$ with $f(x)=\alpha$, and separately on the rest, in this way: $h(\alpha)=x$ if $f(x)=\alpha$ (this is well defined precisely because $f$ is injective), and $h(\beta)=z$ for some fixed $z \in X$ if there is no $x \in X$ with $f(x)=\beta$. This function does what we want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1810849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why $h$ has zero topological degree? I am trying to prove that $f,g : M^n \to S^n$, both $C^1$ (indeed just $C^0$ is enough) with the same topological degree are homotopic. I saw on a book that the trick is as follows: Take $W = M\times [0,1]$ and define $h(x,0) = f(x),$ and $h(x,1) = g(x).$ Then the map $h : \partial W \to S^n$ has topological degree null, why?? If it is true, then the claim follows from the Hopf theorem, but for the part I stated before, how can I conclude that $h$ has null degree?
The boundary of $M\times [0,1]$ is the disjoint union of two copies $M_1=M\times\{0\}$ and $M_2=M\times\{1\}$ of $M$, carrying opposite orientations. Thus the two boundary components contribute $\deg(f)$ and $-\deg(g)$ to the degree of $h$, so $\deg(h)=\deg(f)-\deg(g)=0$ when $f$ and $g$ have the same degree.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1810955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Help with sum and product of roots. I'm having trouble with a question from my textbook relating to roots of an equation. This is it: Let a and b be the roots of the equation: $x^2-x-5=0$ Find the value of $(a^2+4b-1)(b^2+4a-1)$, without calculating values of $a$ and $b$. What I do know however, is that the book has hinted towards sum and product of roots in which I have determined that the SUM OF ROOTS is $1$ and the PRODUCT OF ROOTS is $-5$. So really I'm just having difficulty finding the value. I don't want a direct answer can I have a few hints to get myself closer to getting the answer?
HInt: You have $a+b=1$ and $ab=-5$. Now expand the product $(a^2+4b-1)(b^2+4a-1)=(ab)^2+4(a^3+b^3)+16ab-4(a+b)-(a^2+b^2)+1$, then try to express the terms $a^3+b^3$ and $a^2+b^2$ in terms of $ab$ and $a+b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1811043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Maximum likelihood estimators for gamma distribution I'm having trouble with an exercise about maximum likelihood estimators. Specifically, the exercise gives me values of a protein which was found in 50 adults. We assumed that the data follow a gamma distribution: $X \sim \Gamma(r,\lambda)= \frac {\lambda^{r}}{\Gamma(r)}x^{r-1}e^{-\lambda x} $ if $x\ge0$. It asks me to find the maximum likelihood estimators of parameters $\lambda$ and $r$. How can I find those parameters given that from the data I have $E(X),Var(X)$?
We know that the density is $f(x;r,\lambda)= \frac {1}{\Gamma(r)}\lambda^{r}x^{r-1}e^{-\lambda x}$ for $x\ge0$. In this case the likelihood function $L$ is $$\prod_i f(x_i;r,\lambda)=\frac{1}{\Gamma(r)^{n}}\lambda^{nr}x_1^{r-1}x_2^{r-1}\cdots x_n^{r-1}e^{-\lambda T}$$ where $T=x_1+\dots+x_n$. By applying the logarithm to $L$ we simplify the problem, so $$\log L=(r-1)\sum_i\log x_i-\lambda T +nr\log\lambda -n\log\Gamma(r)$$ and now we must find the maximum point of $\log L$: $\frac{\partial \log L}{\partial\lambda}= -T+\frac{nr}{\lambda}=0$, which has the solution $\hat\lambda = \frac{nr}{T}$. With the same method you can obtain the estimate for $r$: setting $\frac{\partial \log L}{\partial r}=\sum_i\log x_i+n\log\lambda-n\psi(r)=0$, where $\psi$ is the digamma function. This equation has no closed-form solution and is solved numerically.
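In practice the equation for $r$ is solved numerically. For instance SciPy, which parameterises the gamma distribution by shape $a=r$ and scale $1/\lambda$, does exactly this; the sketch below uses simulated data as a stand-in for the 50 protein measurements:

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
r_true, lam_true = 3.0, 2.0
data = rng.gamma(shape=r_true, scale=1 / lam_true, size=50)  # stand-in sample

# floc=0 pins the location parameter so only (r, scale) are fitted by MLE
r_hat, loc, scale_hat = gamma.fit(data, floc=0)
lam_hat = 1 / scale_hat
print(r_hat, lam_hat)   # should be near (3, 2) up to sampling noise
```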
{ "language": "en", "url": "https://math.stackexchange.com/questions/1811228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If $\varphi \in E'$ and $A$ is convex and open then $\varphi (A)$ is an open interval Let $E$ be a real normed space and $\varphi \in E'$, $\varphi \neq 0$. Suppose that $A \subset E$ is an open convex not empty subset. Show that $\varphi(A)$ is an open interval. Since $A$ is connected and $\varphi$ is continuous, $\varphi(A)$ is an interval. How could I show that $\varphi(A)$ is open? I tried to apply the geometric form of Hahn Banach theorem but it didn't work. I appreciate if you could give some hints. Thanks.
Since $\varphi(A)$ is an $\mathbb{R}$-interval, it is of the form $(a,b)$, $[a,b]$, $(a,b]$, $[a,b)$, with $-\infty\le a,b\le\infty$. Suppose $\varphi(A)$ is not open. Then, WLOG we can assume $\varphi(A)=[a,b)$. Then, there is $x\in A$ such that $\varphi(x)=a$. Since $A$ is open, there is $r>0$ such that if $\|x-y\|<r$ we have $y\in A$. Now, if $\|z\|<r$ we have $\|x-(x+z)\|<r$ and thus $a\le \varphi(x+z)=a+\varphi(z)$. Therefore $\varphi(z)\ge 0$ for all $z$ with $\|z\|<r$. Fix $z_0$, $\|z_0\|<r$. Then $\|-z_0\|<r$. We have $0\le\varphi(-z_0)=-\varphi(z_0)\le 0$. Thus, $\varphi(z)=0$ for all $z$ with $\|z\|<r$. But the last implies, by linearity of $\varphi$, that $\varphi=0$ (because every element of the space is a scalar multiple of an element of norm $<r$). This contradiction shows $\varphi(A)\neq [a,b)$ for all $a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1811328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing that $\int_{0}^{1}{x(x-1)(x+2)\over (x+1)^3}\cdot{1\over \ln(x)}dx={\ln{\pi\over 2}-{7\zeta(3)\over 4\pi^2}}$ Showing that $$\int_{0}^{1}{x(x-1)(x+2)\over (x+1)^3}\cdot{1\over \ln(x)}dx=\color{brown}{\ln{\pi\over 2}-{7\zeta(3)\over 4\pi^2}}$$ Applying substitution $u=\ln(x)\rightarrow du={1\over x}dx$ and $x=e^u$ $x=1\rightarrow u=0$ and $x=0\rightarrow u=-\infty$ Then $$I=\int_{-\infty}^{0}{e^{2u}(e^u-1)(e^u+2)\over u(e^u+1)^3}du$$ $${e^{2u}(e^u-1)(e^u+2)\over u(e^u+1)^3}=e^u-2+{e^{2u}+5e^u+2\over (e^u+1)^3}$$ Substitute back in $$I=\int_{-\infty}^{0}{e^u-2\over u}+{e^{2u}+5e^u+2\over u(e^u+1)^3}du$$ $e^{2u}+5e^u+2=A(e^u+1)+Bu(e^u+1)^2+Cu(e^u+1)+Du$ $u=0\rightarrow A=1$ I think it is impossible to find the values of B,C and D. This method is not working. Can anyone give me a hint on another method?
$$\int_{0}^{1}\frac{1-x}{(1+x)\log x}\,dx =\log\frac{\pi}{2}\tag{1}$$ is a straightforward consequence of Frullani's theorem and Wallis' product. So it is enough to compute: $$\begin{eqnarray*} \int_{0}^{1}\frac{1-x}{(1+x)^3\log x}\,dx &=& \int_{0}^{+\infty}\frac{e^{-x}-e^{-2x}}{x(1+e^{-x})^3}\,dx\\&=&-7\cdot\zeta'(-2)\tag{2}\end{eqnarray*}$$ always by Frullani's theorem. The reflection formula for the $\zeta$ function finishes the job.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1811405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does $\mathrm{d}l=v\,\mathrm{d}t$ imply that $\frac{\mathrm{d}}{\mathrm{d}t}=v\frac{\mathrm{d}}{\mathrm{d}l}$? If we have a lengthening pendulum and the length $l$ of the pendulum at time $t$ is $$l=l_0+vt$$ where $l_0$ is the initial length of the pendulum and $v$ is the velocity for which the pendulum's length is increasing. Then the differential change in the length of the pendulum is given by $$\mathrm{d}l=v\,\mathrm{d}t$$ this implies that $$\frac{1}{\mathrm{d}t}=v\frac{1}{\mathrm{d}l}\tag{1}$$ My question is why does it now follow that $$\frac{\mathrm{d}}{\mathrm{d}t}=v\frac{\mathrm{d}}{\mathrm{d}l}$$ Do we simply multiply both sides of $(1)$ by $\mathrm{d}$ or is there more to it than that?
The chain rule gives $$\begin{align} \left(\frac{d}{dt}\right)f(l(t))&=\frac{df(l)}{dl}\frac{dl}{dt}\\\\ &=\left(v\frac{d}{dl}\right)f(l) \end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1811471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is span $\{[1,0],[0,1]\}$ a vector space? I can't figure this out. I would think that it is a vector space because it has the zero vector, and it seems to me to be closed under addition and scalar multiplication. But $[1,0]+[0,1] = [1,1]$ which is definitely not in the set. Can someone clarify? Is the span a vector space, or not?
Yes, the (linear) span is a vector space. By definition it is the smallest vector space that contains all the elements in the set. In particular it will contain all linear combinations of those elements (and will in fact contain exactly all linear combinations that can be formed with those elements). So in your example, if we consider $\{[1,0],[0,1]\} \subset \mathbb{R}^{2}$ we will get $$span\{[1,0],[0,1]\} = \mathbb{R}^{2}$$ because for every $[x,y] \in \mathbb{R}^{2}$ we can find a linear combination of $[1,0],[0,1]$ that represents $[x,y]$: $$x\cdot[1,0] + y\cdot[0,1] = [x,y].$$ With similar arguments we get $$span\{[1,0]\} = \{[x,0] \in \mathbb{R}^{2}: x\in\mathbb{R}\}.$$ Wikipedia is also a good starting point for more on linear spans.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1811579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Locally symmetric spaces and the curvature tensor Let $M$ be a Riemannian manifold. Suppose $\nabla R=0$ where $R$ is the curvature tensor (we then say $M$ is locally symmetric). Then if $\gamma$ is a geodesic of $M$ and $X,Y,Z$ are parallel vector fields along $\gamma$ then $R(X,Y)Z$ is a parallel field along $\gamma$. However, I do not fully understand what does it mean $\nabla R=0$. Is it to say that $\nabla_ZR(X,Y,W,T)=0$ for all vector fields $Z,X,Y,W,T$? How can we use this to prove $\dfrac{D}{dt}R(X,Y)Z=0$? The only thing I can think is maybe we can use that if $V(t)=Y(c(t))$ (i.e. $V$ is induced by a vector field) then $\dfrac{DV}{dt}=\nabla_{c'}Y$. Any hints would be appreciated.
The covariant differential $\nabla T$ of a tensor $T$ of order $r$ is defined as: $$\nabla T (Y_1, \ldots, Y_r, Z) = Z(T(Y_1, \ldots, Y_r)) - T(\nabla_Z Y_1, \ldots, Y_r) - \cdots - T(Y_1, \ldots, Y_{r-1}, \nabla_Z Y_r)$$ In this case $T = R$ and $r = 4$ because the Riemann tensor takes 4 arguments $R(X,Y,Z,W)$. So we have $$(\nabla R)(X,Y,Z,W,V)=V(R(X,Y,Z,W))-R(\nabla_V X, Y, Z, W)-R(X,\nabla_VY,Z,W)-R(X,Y,\nabla_VZ,W)-R(X,Y,Z,\nabla_VW)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1811663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Books on graph/network theory with linear algebra focus I am interested on getting feed back on books that are graph theory with focusing on linear algebra(have taken several courses on Linear Algebra) I have gone through * *Introductory Graph Theory by Gary Chartrand *Graph Theory and Complex Networks: An Introduction by Maarten van Steen *Graphs and Matrices by Ravindra B. Bapat This is for personal learning to help me understand graph/network theory and how it interacts with geography. I have applications that do all the work around network theory but I want to actually learn it. thanks for any feedback
This question is very similar to the question Textbook on Graph Theory using Linear Algebra, except for the stress on networks. I could find Book recommendation for network theory, but I could not find a question on network theory and linear algebra; the closest related issue is what separates network theory from graph theory, namely that the former considers graphs with terminals (vertices that cannot fail). Anyway, instead of focusing too much on network theory, since the term "network" is used with so many different meanings, I suggest the thread First book on algebraic graph theory? as a starter, where several authors' books are suggested.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1811728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that if $ p | x^p + y^p $ where $p$ is a prime number greater than $2$, then $p^2 | x^p + y^p$ I was trying to solve the following problem recently: Prove that if $ p | x^p + y^p $ where $p$ is a prime number greater than $2$, then $p^2 | x^p + y^p$. Here $x$ and $y$ are both integers. $a|b$ reads $a$ divides $b$ or $b$ is divisible by $a$. I was able to solve the problem, but through arguably tedious means, my solutions is here. I feel like there are more elegant solutions available, but I cannot think of any at present, which is why I decided to post this as a question here. Thanks
We are given that $p \mid x^p+y^p$. Since $p$ is a prime greater than $2$, it is odd, and for any odd exponent $p$ we have the factorization $$x^p+y^p=(x+y)\left(x^{p-1}-x^{p-2}y+\cdots+y^{p-1}\right).$$ Using Fermat's little theorem, $$x^p \equiv x \mod p,\quad y^p \equiv y\mod p,\quad\text{so}\quad x+y \equiv x^p+y^p \equiv 0\mod p$$ since $p\mid x^p+y^p$. Therefore $p\mid(x+y)$, i.e. $x \equiv -y \pmod p$. Now consider the second factor $S = \sum_{i=0}^{p-1}(-1)^i x^{p-1-i}y^{i}$. Since $x \equiv -y \pmod p$, each term satisfies $(-1)^i x^{p-1-i} y^i \equiv (-1)^i(-y)^{p-1-i} y^i = (-1)^{p-1} y^{p-1} = y^{p-1} \pmod p$ (recall $p-1$ is even), and there are $p$ terms, so $S \equiv p\,y^{p-1} \equiv 0 \pmod p$. Hence $p\mid(x+y)$ and $p\mid S$, so $p^2 \mid (x+y)S = x^p+y^p$. Proved
{ "language": "en", "url": "https://math.stackexchange.com/questions/1811850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Optimization inside integral I want to maximize the integral $$\int_a^b \left( 2 cx y(x) - e y(x)^2 \right) \, \mathrm{d}x$$ with respect to $y(x)$. If I discretize the problem, I get $$ \frac{b-a}{n}\sum_{i=1}^n \left( 2c\left(a+\tfrac{i}{n}(b-a)\right)y_i-e y_i^2 \right)$$ If I take the derivative with respect to each $y_i$, I find, in the undiscretized version, $y(x)=\frac{cx}{e}$. If I plug in the values I get my optimization result. I can understand the dynamics of the problem, but my calculus is a bit rusty. Is there anyone who can pinpoint which chapter of a calculus book I should skim in order to get a more formal explanation? Thanks in advance
No fancy calculus is required. Since there are no constraints and no derivatives of $y$ occur in the integral, only $y$ itself, the integral is directly maximised by maximising the integrand at each point separately. Setting the derivative of $2cxy-ey^2$ with respect to $y$ to $0$ yields $2cx-2ey=0$ and thus $y=\frac{cx}e$.
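A discretised numerical check of this pointwise argument (with made-up values of $a$, $b$, $c$, $e$): each summand is a downward parabola in its own $y_i$, maximised at $y_i=cx_i/e$ independently of the others, so perturbing away from $y=cx/e$ can only lower the objective.

```python
import numpy as np

a, b, c, e = 0.0, 1.0, 2.0, 3.0        # made-up constants, e > 0
n = 1000
x = a + (b - a) * (np.arange(1, n + 1) / n)

# Each summand 2*c*x_i*y_i - e*y_i**2 is maximised at y_i = c*x_i/e.
y_opt = c * x / e

def obj(y):
    return (b - a) / n * np.sum(2 * c * x * y - e * y**2)

rng = np.random.default_rng(0)
print(obj(y_opt) >= obj(y_opt + 0.1 * rng.standard_normal(n)))   # True
```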
{ "language": "en", "url": "https://math.stackexchange.com/questions/1811933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Complex numbers inequalities and optimisation I'm now aware that you can't readily say that one complex number is greater than another. But what about imaginary numbers? Is $5i > 3i$? Is $i>-i$? Is it possible to optimise (find the minimum or maximum of) a complex-valued function? I assume if it's not ordered you can't do that, as when we optimise over real numbers the question is which output is greatest or least. But if you can't say one output is greater than another, then how would you optimise it? Thanks
We can't say that $5i>3i$: the complex numbers, and hence the purely imaginary numbers, carry no order relation compatible with their arithmetic. What we can compare are the moduli: $|5i|=5$ and $|3i|=3$, so $|5i|>|3i|$, but that is a comparison of real numbers, not of the imaginary numbers themselves. Similarly $|i|=|-i|=1$, so comparing $i$ and $-i$ by modulus does not even distinguish them. For optimisation one therefore works with a real-valued quantity attached to the function, most commonly the modulus $|f(z)|$, and minimises or maximises that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1812007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating $\int_{0}^\infty \frac{\log x \, dx}{\sqrt x(x^2+a^2)^2}$ using contour integration I need your help with this integral: $$\int_{0}^\infty \frac{\log x \, dx}{\sqrt x(x^2+a^2)^2}$$ where $a>0$. I have tried some complex integration methods, but none seems adequate for this particular one. Is there a specific method for this kind of integrals? What contour should I use?
$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Leftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\, #2 \,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ \begin{align} &\color{#f00}{% \int_{0}^{\infty}{\ln\pars{x} \over \root{x}\pars{x^{2} + a^{2}}^{2}}\,\dd x} \,\,\,\stackrel{x\ \to\ x^{1/2}}{=}\,\,\, {1 \over 4}\int_{0}^{\infty} {x^{-3/4}\,\ln\pars{x} \over \pars{x + a^{2}}^{2}}\,\dd x \\[5mm] = &\ -\,{1 \over 4}\,\partiald{}{\pars{a^{2}}}\int_{0}^{\infty} {x^{-3/4}\,\ln\pars{x} \over x + a^{2}}\,\dd x = -\,{1 \over 8\verts{a}}\,\partiald{}{\verts{a}}\int_{0}^{\infty} {x^{-3/4}\,\ln\pars{x} \over x + a^{2}}\,\dd x \\[5mm] = &\ -\,{1 \over 8\verts{a}}\,\partiald{}{\verts{a}}\bracks{% \lim_{\mu \to -3/4}\,\,\partiald{}{\mu} \int_{0}^{\infty}{x^{\mu} \over x + a^{2}}\,\dd x}\tag{1} \end{align} With the branch-cut $\ds{z^{\mu} = \verts{z}^{\mu}\exp\pars{\ic\,\mathrm{arg}\pars{z}\mu}\,,\ 0 < \mathrm{arg}\pars{z} < 2\pi\,,\ z \not = 0}$, the integral is performed along a key-hole contour. Namely, \begin{align} 2\pi\ic\,\verts{a}^{2\mu}\exp\pars{\ic\pi\mu} & = \int_{0}^{\infty}{x^{\mu} \over x + a^{2}}\,\dd x + \int_{\infty}^{0}{x^{\mu}\exp\pars{2\pi\mu\ic} \over x + a^{2}}\,\dd x \\[3mm] & = -\exp\pars{\ic\pi\mu}\bracks{2\ic\sin\pars{\pi\mu}} \int_{0}^{\infty}{x^{\mu} \over x + a^{2}}\,\dd x \\[5mm] \imp\ \int_{0}^{\infty}{x^{\mu} \over x + a^{2}}\,\dd x & = -\pi\,\verts{a}^{2\mu}\csc\pars{\pi\mu} \end{align} Plug this result in $\pars{1}$: $$ \color{#f00}{% \int_{0}^{\infty}{\ln\pars{x} \over \root{x}\pars{x^{2} + a^{2}}^{2}}\,\dd x} = \color{#f00}{{\root{2} \over 16}\,\pi\,{% 6\ln\pars{\verts{a}} - 3\pi - 4 \over \verts{a}^{7/2}}} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1812121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Area of a triangle whose vertices lie on a parallelogram In the parallelogram $ABCD$, $X$ and $Y$ are the midpoints of $BC$ and $CD$. Then prove that $$Ar(\triangle AXY) = \frac {3}{8} Ar(ABCD)$$ My Attempt : Construction; Joining $BY$ and $AC$, I got $$\triangle AYC=\triangle BCY$$. But I couldn't move further from here.
Use $$S_{ABC}=\frac12 a\cdot h_a$$ and $$S_{ABCD}= a\cdot h_a$$ Let $Ar(ABCD)=S$, then $Ar(\triangle ABX)=\frac14S$, $Ar(\triangle ADY)=\frac14S$, $Ar(\triangle CXY)=\frac18S$ Then $Ar(\triangle AXY)=S-\frac14S-\frac14S-\frac18S=\frac38S$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1812190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Convergence under a Hilbert space Let $\{\varphi_n\}_{n=1}^\infty$ be an orthonormal sequence (not necessarily a basis) in a Hilbert space. Let $\{\lambda_n\}_{n=1}^\infty$ be a sequence of numbers. Define $T:H\to H$ by $Tx= \sum_{n=1}^{\infty} \lambda_n\langle x, \varphi_n\rangle \varphi_n$. Show that if $\sup_n|\lambda_n| < \infty$ then $\sum_{n=1}^{\infty} \lambda_n\langle x, \varphi_n\rangle \varphi_n$ converges for all $x \in H$. Well, I have no clue where to begin. Does it have something to do with the Gram–Schmidt process?
Hint: Since $(\varphi_n)_{n=1}^{\infty}$ is an orthonormal sequence, the expression $$ \sum_{n=1}^{\infty}|\langle x, \varphi_n\rangle |^2 $$ always converges. This follows from Bessel's inequality. Using the fact that $\quad\sup_n|\lambda_n|<\infty\quad$, there's an $M>0$ such that $|\lambda_n|<M$ for all $n\in\Bbb N$. It is not hard to see that $$ \sum_{n=1}^{\infty}M^2|\langle x, \varphi_n\rangle |^2 $$ is convergent for any $x\in H$. What does this say about the convergence of $\sum_{n=1}^{\infty}|\lambda_n\langle x, \varphi_n\rangle |^2$? There's a theorem saying that for any orthonormal sequence $(e_n)_{n=1}^{\infty}$ and any scalars $(c_n)_{n=1}^{\infty}$, $$ \sum_{n=1}^{\infty}|c_n|^2 \quad\text{converges iff}\quad \sum_{n=1}^{\infty}c_n e_n\quad\text{converges.} $$ Now all you have to do is put them all together (take $c_n = \lambda_n\langle x,\varphi_n\rangle$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1812313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Calculating $\lim \limits_{x \to \infty} \frac{x+\frac12\cos x}{x-\frac12\sin x}$ using the sandwich theorem Calculating $\lim \limits_{x \to \infty} \dfrac{x+\frac12\cos x}{x-\frac12\sin x}$ Correct me if I'm wrong: $\cos x$ and $\sin x$ are bounded so that $$|\cos x|\le 1,\qquad |\sin x|\le1$$ Therefore I can say: $$ \frac{x-\frac12}{x+\frac12}\le \frac{x+\frac12\cos x}{x-\frac12\sin x}\le \frac{x+\frac12}{x-\frac12} $$ the limits of the left and right side are equal to 1, therefore the the limit I'm looking for is also equal to 1 . The answer is correct, but what I'm not sure is $$ \frac{x-\frac12}{x+\frac12}\le\frac{x+\frac12\cos x}{x-\frac12\sin x} $$ was this step correct?
Yes, your solution is correct. Indeed: $|\cos x| \leq 1 \Leftrightarrow -1 \leq \cos x \leq 1 $ and $|\sin x| \leq 1 \Leftrightarrow -1 \leq \sin x \leq 1$. This means that, for large $x$ (so that all the numerators and denominators involved are positive), your fraction is bounded between the values it takes at the extreme values of $\cos x$ and $\sin x$; as $x\to\infty$ both bounding fractions tend to $1$. Take note that the Squeeze (Sandwich) Theorem is usually stated for $x \to a$ where $a \in \mathbb R$ (or $\mathbb R^n$ in higher-dimensional spaces), and there is an analogous version for sequences. A similar statement holds for infinite intervals: for example, if $I=(0,\infty)$, then the conclusion holds, taking the limits as $x\to \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1812388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Integrate $\int^a_0x^3 - x \mathop{\mathrm{d}x}$ to find the area (a) Solve the equation: $$\int^a_0x^3 - x \mathop{\mathrm{d}x} = 0, a > 0$$ (b) For this value of $a$, find the total area enclosed between the $x$-axis and the curve $y=x^3 - x$ for $0 \leq x \leq a$. I can quite easily solve the first part but for part (b) when I substitute $\sqrt{2}$ I just get $0$, what am I doing wrong?
Getting zero at $a=\sqrt 2$ only confirms part (a): the signed integral vanishes because the region below the axis on $(0,1)$ exactly cancels the region above it on $(1,\sqrt 2)$. For part (b) you want the total (unsigned) area, so split the integral at the root $x=1$ and count each piece positively: $$\int_0^1 (x-x^3)\,dx + \int_1^{\sqrt 2}(x^3-x)\,dx = \frac14 + \frac14 = \frac12.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1812475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Are standalone statements conventionally considered to imply truth? From what I understand, the statement $\exists x(p(x) \vee q(x))$ in the English language sounds something like this: "There exists $x$ such that $p(x)$ or $q(x)$". But this sounds like an incomplete claim; "There exists $x$ such that $p(x)$ or $q(x)$ what? Are true? Are false?
Logical formulae like this are always meant to be true, and if what they express is false, so is the formula. That is, the formula you gave essentially says "It is true that there is an x which is p or q", but since the "It is true that..."-part is already part of the semantics of logical formulae anyway, you wouldn't explicitly spell this out when translating predicate logic into natural language - just like in natural language you don't say "It is true that the sun shines" but simply "The sun shines." The reason why "... such that p(x) or q(x)" sounds incomplete may also be due to "p(x)" and "q(x)" not actually being full sentences with a verb etc. as we would expect, but just abbreviations; if we substituted e.g. "dog" and "hungry" for "p" and "q" (and the choice of variable and constant names is irrelevant to the semantics of the formula, as long as we stick to a unique interpretation of them), we could translate the formula into "There exists an x such that x is a dog or x is hungry", which doesn't sound incomplete at all. So the reason why the formula may sound like an incomplete term to you is probably the poor level of translation into natural language sentences (wording it as "such that p(x)", which isn't a "complete" sentence in the sense of lacking a verb etc., instead of wording it e.g. "such that x is p"), rather than the formula itself being incomplete. The default assumption is always that what the formula expresses is true, and if this is not the case, the formula is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1812549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Uniqueness of cyclic groups up to isomorphism. I am reading something about abstract algebra. Can anyone please tell me if the following statements are saying: a cyclic group can only be isomorphic to either $\mathbb{Z}/n\mathbb{Z}$ if it is finite or $\mathbb{Z}$ if it is infinite? Thanks a lot. The group $\mathbb{Z}$ is the only infinite cyclic group, up to isomorphism. The group $\mathbb{Z}/n\mathbb{Z}$ is the only cyclic group of order $n$, up to isomorphism
Yes, that is correct. If a group $G$ is cyclic, then $G$ is generated by (the positive and negative powers of) a single element $a$, i.e. there exists an $a \in G$ such that $G=\{a^n: n \in \mathbb{Z} \}$. If $G$ is finite of order $n$, then $a^k \mapsto k \bmod n$ gives an isomorphism from $G$ to $\mathbb{Z}/n\mathbb{Z}$, and if $G$ is infinite, then the powers of $a$ are all distinct and $a^k \mapsto k$ gives an isomorphism from $G$ to $\mathbb{Z}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1812847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Finding all pairs of integers that satisfy a bilinear Diophantine equation The problem asks to "find all pairs of integers $(x,y)$ that satisfy the equation $xy - 2x + 7y = 49$. So far, I've got \begin{align} xy - 2x + 7y &= 49 \\ x\left(y - 2\right) + 7y &= 49 \\ y &\leq 49 \end{align} I can't get any further. Any help?
hint: $xy+7y = 2x+49 \implies (x+7)y = 2x+49 \implies y = \dfrac{2x+49}{x+7}= 2 + \dfrac{35}{x+7}\implies (x+7) \mid 35\implies x+7 = \pm 1, \pm 5, \pm 7, \pm 35$
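For completeness, a short brute-force check of the hint (the divisor enumeration follows directly from $(x+7)\mid 35$):

```python
# Enumerate the divisors d of 35, set x = d - 7 and y = 2 + 35 // d,
# and verify each pair against the original equation.
solutions = []
for d in (1, 5, 7, 35, -1, -5, -7, -35):
    x, y = d - 7, 2 + 35 // d   # exact divisions, so floor division is safe
    assert x * y - 2 * x + 7 * y == 49
    solutions.append((x, y))
print(solutions)  # eight integer pairs in total
```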
{ "language": "en", "url": "https://math.stackexchange.com/questions/1812936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is the function an entire function? Is the following an entire function? (Here $z\in \mathbb{C}$) $$\sum_{n=0}^\infty \frac{2^n}{n!}z^{3n}$$ ($***$) So, here I first note that the function is a sum of powers of $z$. Now if I show that the sum converges for all $z\in \mathbb{C}$, the problem will be solved, right? Using the ratio test I get $\frac{a_{n+1}}{a_n}=\frac{2z^3}{n+1}$, which goes to $0$ as $n$ goes to $\infty$, showing that the series converges for every $z$. The only problem is: is my statement $(***)$ correct? Thanks in advance.
The function defined by the sum is entire, yes. For this, it suffices to check that the radius of convergence is infinite (which you have done); the result then follows, since an everywhere-convergent power series converges uniformly on every compact subset of $\mathbb{C}$, and such a locally uniform limit of polynomials is holomorphic on all of $\mathbb{C}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1814021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Condition on $a$ for $(x^2+x)^2+a(x^2+x)+4=0$ Find the set of values of $a$ if $$(x^2+x)^2+a(x^2+x)+4=0$$ has $(i)$ all four roots real and distinct, $(ii)$ four roots of which only two are real and distinct, $(iii)$ all four roots imaginary, $(iv)$ four real roots of which exactly two are equal. Now if I set $x^2+x=t$, then even if $t^2+at+4=0$ has real roots, it is not necessary that $(x^2+x)^2+a(x^2+x)+4=0$ will have real roots too. So how do I derive the condition on $a$? Could someone give me some direction?
Alternatively, graphing it: let $a=y$; then solving $(x^2+x)^2+y(x^2+x)+4=0$ for $y$ gives $\displaystyle y=\frac{1}{4}-\left( x+\frac{1}{2} \right)^{2}+ \frac{4}{\frac{1}{4}-\left( x+\frac{1}{2} \right)^{2}}$, and for each value of $a$ the number of real roots is the number of intersections of this graph with the horizontal line $y=a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1814099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Prove the polynomial $P_a=X^5 + a$ is reducible over a field Let $(K, +, \cdot)$ be a finite field such that the polynomial $P=X^2-5$ is irreducible. Prove that: a) $1+1 \ne 0$ b) The polynomial $P_a=X^5 + a$ is reducible $\forall a \in K$ a) This is the easy part. Suppose $1+1=0$. Then $5=1$, and therefore $P=X^2-1=(X-1)(X+1)$, contradiction. b) I think the idea is to show that the equation $x^5 + a=0$ has a solution $\forall a \in K$, but I failed to prove it. Any hint for an elementary solution is appreciated.
If $a=0$, then $x^5$ is visibly reducible, so we're done. So suppose $a\in F^\times$. Let $p=\operatorname{char} F$; note $p\neq 2$ by the previous part, and $p\neq 5$ since otherwise $x^2-5=x^2$ would be reducible. Write $F=\Bbb{F}_{p^k}$. If $k$ were even, then $\Bbb{F}_{p^2}\subseteq F$. However, $X^2-5$ splits over $\Bbb{F}_{p^2}$, as the splitting field of a degree-two polynomial over $\Bbb{F}_p$ has degree at most two, and $\Bbb{F}_{p^2}$ is the only degree-two extension of $\Bbb{F}_p$. Contradiction. So $k$ is odd. That $x^2-5$ has no root means $5$ is not a quadratic residue in $\Bbb{F}_p$. Then, by the quadratic reciprocity law (note $5\equiv 1\pmod 4$), $p$ is not a quadratic residue in $\Bbb{F}_5$, i.e. $p\equiv\pm 2\pmod 5$. Such a $p$ has order $4$ in $\Bbb{F}_5^\times$, which does not divide the odd number $k$, so $5\nmid p^k-1$. Choose a generator $g\in F^\times$ and write $-a=g^{m}$. As $5\nmid p^k-1$, $5$ divides one of $m,\,m+(p^k-1),\,m+2(p^k-1),\,m+3(p^k-1),\,m+4(p^k-1)$, say $m+b(p^k-1)$. Then $g^\frac{m+b(p^k-1)}5$ is a root of $x^5+a$, so the quintic has a linear factor, as desired.
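A small sanity check of the mechanism over concrete prime fields (a sketch; it only tests $k=1$, with sample primes chosen so that $5$ is a non-residue):

```python
# For primes p with 5 not a square mod p, x -> x^5 should be a bijection
# on F_p, so x^5 + a has a root for every a and X^5 + a is reducible.
for p in (3, 7, 13, 17, 23, 37):
    if any(x * x % p == 5 % p for x in range(p)):
        continue  # x^2 - 5 is reducible mod p; hypothesis not satisfied
    assert {pow(x, 5, p) for x in range(p)} == set(range(p))
    print(p, "x -> x^5 is onto F_p, so X^5 + a always has a root")
```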
{ "language": "en", "url": "https://math.stackexchange.com/questions/1814310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
why cannot the limits be $-1$ and $-2$ I came across a problem on definite integrals: Evaluate $$I=\int_{0}^{3} x\sqrt{1+x}\:dx$$ With the substitution $1+x=t^2$, the book has given the lower and upper limits as $t=1$ and $t=2$, obtained from $t^2=1$ $\implies$ $t=1$ and $t^2=4$ $\implies$ $t=2$, and we get $$I=\int_{1}^{2} (t^2-1)(t)(2t)dt=2\int_{1}^{2}(t^4-t^2)dt=\frac{116}{15}$$ But why can't we take the limits as $t=-1$ and $t=-2$? Then we get $$I=2\int_{-1}^{-2}(t^4-t^2)dt$$ and since $t^4-t^2$ is an even function we have $$I=\frac{-116}{15}$$ What is the mistake in my analysis?
The book does a bad service to students, in my opinion. The substitution should be $t=\sqrt{1+x}$, making it clear that $t\ge0$ and also providing for automatic substitution of the bounds. Of course you can also do the substitution $t=-\sqrt{1+x}$, so the bounds are $-1$ and $-2$, but the integral becomes $$ \int_{-1}^{-2}-(t^2-1)t\cdot 2t\,dt $$ so the final result is the same as with $t=\sqrt{1+x}$. The problem is that the function $t\mapsto t^2-1$ is not injective, so you have to choose a branch where it is; for instance $t\ge0$ or $t\le0$. The substitution spelled out as $t=\sqrt{1+x}$ or $$ \begin{cases} x+1=t^2\\ t\ge0 \end{cases} $$ avoids the problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1814417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
uniform rod revolving around a vertical axis with given angular velocity and given length of rod A uniform rod of given length and given angular velocity is revolving around a vertical axis. Clearly it can do so in a horizontal plane with respect to the vertical axis. At what other angle can it do so? That is, what is the angle of inclination with the vertical axis, given that the inclination angle is constant? The answer in the book is $\theta=\cos^{-1}\!\left(\dfrac{3g}{2\omega^{2}L}\right)$, where $g$ is the acceleration due to gravity, $\omega$ the angular velocity and $L$ the length of the rod.
The torque about the pivot due to gravity must balance the torque due to the centrifugal (inertial) force in the rotating frame. Let's call the coordinate along the rod $x$, varying from $0$ to $L$. A small piece, of length $dx$, has a mass of $\frac{m}{L} dx$. The torque due to gravity of this small piece is $$d\tau_g=\frac{m}{L} dx\, g x \sin \theta$$ The total torque due to gravity is given by $$\tau_g=\int_0^L\frac{m}{L} dx\, g x \sin \theta=\frac{1}{2}mgL\sin\theta$$ The centrifugal force on the same small piece is given by $$dF=\frac{m}{L} dx\, \omega^2 x\sin\theta$$ $\omega$ is the angular velocity. If the rod is at an angle $\theta$ with respect to the vertical, the rotation radius of the small piece at distance $x$ along the rod is $x\sin\theta$. The torque due to this force is $$d\tau_F=dF\, x \cos\theta=\frac{m}{L} dx\, \omega^2 x^2\sin\theta\cos\theta$$ Integrating it we get $$\tau_F=\frac{1}{3}m\omega^2L^2\sin\theta\cos\theta$$ Now from $\tau_g=\tau_F$ we get: $$ \frac{1}{2}mgL\sin\theta=\frac{1}{3}m\omega^2L^2\sin\theta\cos\theta$$ We have two solutions. $\theta=0$ (i.e. $\sin\theta=0$) means that the rod is vertical, rotating around the vertical axis. The other solution is the one you were looking for: $$\cos\theta=\frac{3g}{2\omega^2L}$$ Note that the rod cannot be horizontal: $\theta=\frac{\pi}{2}$ would need $\cos\theta=0$, but gravity still exerts a torque that must be balanced.
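A numeric check of the final torque balance, with sample values chosen here for illustration (SI units):

```python
import math

# At theta = arccos(3g / (2 w^2 L)) the two torques should coincide.
g, L, m, w = 9.81, 1.0, 1.0, 6.0   # assumed sample values; need 3g < 2 w^2 L
theta = math.acos(3 * g / (2 * w**2 * L))
tau_gravity = 0.5 * m * g * L * math.sin(theta)
tau_rotation = (1 / 3) * m * w**2 * L**2 * math.sin(theta) * math.cos(theta)
print(theta, tau_gravity, tau_rotation)  # the last two numbers agree
```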
{ "language": "en", "url": "https://math.stackexchange.com/questions/1814516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence in probability Can anyone tell me how they got the regions $0<\epsilon<\theta$ and $\epsilon >\theta$? Also, to clarify: is the last step, where it says $\lim_{n \to \infty} P(|Y_n-\theta|>\epsilon)=0$, correct?
(Here $Y_n=\max(X_1,\dots,X_n)$ with $X_1,\dots,X_n$ i.i.d. uniform on $(0,\theta)$, so that $F_{Y_n}(y)=(y/\theta)^n$ for $0\le y\le\theta$, $F_{Y_n}(y)=0$ for $y<0$, and $F_{Y_n}(y)=1$ for $y>\theta$.) \begin{align} \mathbb{P}(|Y_n - \theta | \geq \epsilon) &= 1- \mathbb{P}(-\epsilon \leq Y_n - \theta \leq \epsilon) \\ &= 1 - \mathbb{P}(Y_n \leq \theta + \epsilon) + \mathbb{P}(Y_n \leq \theta - \epsilon) \\ &= 1 - F_{Y_n}(\theta + \epsilon) + F_{Y_n}(\theta-\epsilon) \\ &= F_{Y_n}(\theta- \epsilon). \end{align} Note that the last equality follows from the fact that $\theta + \epsilon \geq \theta$ for any $\epsilon>0$ and consequently $F_{Y_n}(\theta+ \epsilon) = 1$. According to the piecewise definition of the distribution function of $Y_n$, there are two cases to be considered, and this is exactly where the two regions come from: (1) Assume $0<\epsilon<\theta$; then $0 < \theta-\epsilon < \theta$ and consequently $$\mathbb{P}(|Y_n - \theta| \geq \epsilon) = \left(\frac{\theta-\epsilon}{\theta}\right)^n \to 0,$$ as $n \to \infty$. (2) Assume $\epsilon \geq \theta$; then $\theta - \epsilon \leq 0$ and consequently $F_{Y_n}(\theta- \epsilon) = 0$ for all $n \geq 1$. In both cases $\lim_{n\to\infty}\mathbb{P}(|Y_n-\theta|\geq\epsilon)=0$, so the last step is correct, and the result holds in the limit: $Y_n\to\theta$ in probability.
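A Monte Carlo illustration of the conclusion (assuming, as above, that $Y_n$ is the maximum of $n$ i.i.d. uniforms on $(0,\theta)$):

```python
import random

# Empirical P(|Y_n - theta| >= eps) should track (1 - eps/theta)^n -> 0.
theta, eps, trials = 1.0, 0.1, 100_000
for n in (5, 20, 80):
    hits = sum(
        abs(max(random.uniform(0, theta) for _ in range(n)) - theta) >= eps
        for _ in range(trials)
    )
    print(n, hits / trials, (1 - eps / theta) ** n)
```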
{ "language": "en", "url": "https://math.stackexchange.com/questions/1814589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How many solutions has the equation $\sin x= \frac{x}{100}$ ? How many solutions has the equation $\sin x= \frac{x}{100}$ ? Usually when I was asked to solve this type of problem, I would solve it graphically but this one seems to be trickier. It doesn't seem wise to put $f(x)=\sin x$ and $g(x)=\frac{x}{100}$ in the same graph and then counting all the intersection points. What would be some algebraic methods to solve this?
First, we may suppose $x\ge 0$, since both sides are odd functions; note that $x=0$ is a root, and that any root satisfies $\left|\frac x{100}\right|\le 1$, i.e. $0\le x\le 100$. Near $0$ we have $\sin x>\frac x{100}$ (the sine leaves the origin with slope $1>\frac1{100}$), so the line crosses the first hump $(0,\pi)$ exactly once, and by the Intermediate Value Theorem it crosses each of the humps $(2k\pi,(2k+1)\pi)$, $k=1,\dots,15$, exactly twice (note $31\pi\approx 97.4<100$, and the line stays below $1$ there), while there are no roots where $\sin x<0<\frac x{100}$. Hence there are $1+2\cdot 15=31$ positive roots, i.e. $32$ non-negative ones. By symmetry, the total number of roots is $2\cdot 31+1=\;\color{red}{63}$.
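A brute-force numeric count confirms this (a sanity check only; the grid size is chosen even so that the exact root $x=0$ does not land on a grid point and get miscounted):

```python
import numpy as np

# Count sign changes of sin(x) - x/100 on [-110, 110] (all roots lie in [-100, 100]).
xs = np.linspace(-110, 110, 2_000_000)
g = np.sin(xs) - xs / 100
print(np.sum(g[:-1] * g[1:] < 0))  # prints 63
```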
{ "language": "en", "url": "https://math.stackexchange.com/questions/1814707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove that the diagonals of a quadrilateral are perpendicular if $AB^2+CD^2=BC^2+AD^2$ Prove that the diagonals of a quadrilateral are perpendicular if $AB^2+CD^2=BC^2+AD^2$. My attempt: we know that if the diagonals of a quadrilateral are perpendicular, then $AB^2+CD^2=BC^2+AD^2$. But how do I prove the converse?
Assume the intersection of the two diagonals is $O$. Let $|OA|=a,|OB|=b,|OC|=c,|OD|=d$. Assume $\angle AOB=\gamma$. Then $$|AB|^2=a^2+b^2-2ab \cos\gamma,$$ $$|CD|^2=c^2+d^2-2cd \cos\gamma,$$ $$|BC|^2=b^2+c^2-2bc \cos(\pi-\gamma)=b^2+c^2+2bc \cos\gamma,$$ $$|AD|^2=a^2+d^2-2ad \cos(\pi-\gamma)=a^2+d^2+2ad \cos\gamma.$$ From the condition, we have $-2ab \cos\gamma-2cd \cos\gamma=2bc \cos\gamma+2ad \cos\gamma$. Thus, $(a+c)(b+d)\cos\gamma=0\Rightarrow \cos\gamma=0\Rightarrow \gamma=\frac{\pi}{2}$. Thus, $AC\perp BD$.
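A quick randomized check of the identity in the perpendicular case (the direction the OP already quoted), using the same $a,b,c,d$ notation with $O$ at the origin:

```python
import random

def d2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

# With perpendicular diagonals through the origin, AB^2 + CD^2 = BC^2 + AD^2.
for _ in range(5):
    a, b, c, d = (random.uniform(0.1, 5) for _ in range(4))
    A, B, C, D = (a, 0), (0, b), (-c, 0), (0, -d)
    print(abs(d2(A, B) + d2(C, D) - d2(B, C) - d2(A, D)) < 1e-9)  # True
```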
{ "language": "en", "url": "https://math.stackexchange.com/questions/1814809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Expression for Euler characteristic in differential geometry Is the Euler-Poincare$ ^{\prime}$ characteristic $\chi$ a bending invariant? If so, how should it be isometrically expressed in terms of first fundamental form coefficients? Is not stretching invariant better as terminology? If so how can it be expressed in terms of the first and second form coefficients ? and/or in any other way? $\chi$
The Euler characteristic is a homotopy invariant, and that is a more precise statement than the one you want. The Gauss-Bonnet theorem states that the Euler characteristic of a closed orientable surface is equal, up to normalization, to the integral of the Gaussian curvature. More generally the Chern-Gauss-Bonnet theorem does the same thing for closed orientable manifolds of even dimension. Now you do not integrate the curvature itself but something derived from it. (The Euler characteristic of closed odd-dimensional manifolds is zero, so nothing interesting can be said there.) This is very well explained in Spivak's book on differential geometry.
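Concretely, for a closed orientable surface $M$ the Gauss–Bonnet theorem reads $$\int_M K\, dA = 2\pi\,\chi(M),$$ where $K$ is the Gaussian curvature. Since $K$ is intrinsic by the Theorema Egregium (expressible in the coefficients of the first fundamental form alone, e.g. via the Brioschi formula), the left-hand side, and hence $\chi(M)$, depends only on the first fundamental form; this is the precise sense in which $\chi$ is a bending invariant.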
{ "language": "en", "url": "https://math.stackexchange.com/questions/1814943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I calculate the gradient of a function in an $n$-dimensional space? $q(x)=x^TAx+b^Tx+c$ $A$ is a matrix, $x,b\in \mathbb{R}^n$ and $c\in \mathbb{R}$. I really don't know how to calculate it for this function.
$$q(x+h)=(x+h)^TA(x+h)+b^T(x+h)+c=\\=x^TAx+b^Tx+c+\color{blue}{h^TAx}+x^TAh+b^Th+h^TAh=\\ =\color{red}{x^TAx+b^Tx+c}+\color{blue}{x^TA^Th}+x^TAh+b^Th+\color{brown}{h^TAh}=\\=\color{red}{q(x)}+x^T(A^T+A)h+b^Th+\color{brown}{O(\lVert h\rVert^2)}$$ Hence, $$\nabla_xq=x^T(A^T+A)+b^T$$
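A numeric spot check of the result (written here as a column vector, $\nabla q = (A+A^T)x + b$, the transpose of the row form above):

```python
import numpy as np

# Compare the closed-form gradient with central finite differences.
rng = np.random.default_rng(0)
n = 4
A, b, x = rng.normal(size=(n, n)), rng.normal(size=n), rng.normal(size=n)
q = lambda v: v @ A @ v + b @ v          # the constant c drops out of the gradient
grad = (A + A.T) @ x + b
eps = 1e-6
num = np.array([(q(x + eps * e) - q(x - eps * e)) / (2 * eps) for e in np.eye(n)])
print(np.allclose(grad, num, atol=1e-5))  # True
```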
{ "language": "en", "url": "https://math.stackexchange.com/questions/1815026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Prove that $\lambda(f) = o(2^n)$ for almost all boolean functions How to prove that $\lambda(f) = o(2^n)$ for almost all boolean functions $f$ of $n$ variables? Here $\lambda(f)$ denotes minimal length (i.e. count of terms) of all possible disjunctive normal forms (DNFs) of $f$.
This is the Korshunov–Kuznetsov Theorem. Quoting [1], which restates the theorem: Theorem. (Korshunov–Kuznetsov, 1983) The optimal DNF size for a random Boolean function is $(K + o(1))\frac{2^n}{\log n \log\log n}$, where $1\leq K\leq 1.54169$ (and the $\log$ is in base $2$). They refer to the article of Pippenger [2], which (re)proves the lower bound, and improves the constant of the upper bound (which is the quantity you are interested in). From the abstract of [2]: Our main result is a new upper bound $l(n) \leq (1 + o(1)) H(n) \frac{2^n}{\log n \log \log n}$, where $H(n)$ is a function that oscillates between 1.38826... and 1.54169.... The best previous upper bound, due to Korshunov, had a similar form, but with a function oscillating between 1.581411... and 2.621132.... The main ideas in our new bound are (1) the use of Rödl's "nibble" technique for solving packing and covering problems, (2) the use of correlation inequalities due to Harris and Janson to bound the effects of weakly dependent random variables, and (3) the solution of an optimization problem that determines the sizes of "nibbles" and larger "bites" to be taken at various stages of the construction. In particular, since $\frac{2^n}{\log n \log\log n}=o(2^n)$, this is much stronger than the $\lambda(f)=o(2^n)$ you asked about, for almost all $f$. [1] Approximating Boolean functions with depth-2 circuits. Eric Blais and Li-Yang Tan, CCC'13. http://eccc.hpi-web.de/report/2013/051/ [2] The shortest disjunctive normal form of a random Boolean function, Nicholas Pippenger. Random Structures & Algorithms, 22: 161–186, 2003. http://dl.acm.org/citation.cfm?id=770315
{ "language": "en", "url": "https://math.stackexchange.com/questions/1815173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Disprove: $f\circ g = f \circ h \implies g=h$ for a surjective function $f$ I tried using a very specific counterexample here where I select a surjective function for which the compositions are equal but the functions within are not. This is probably off-base, but it's what I've got so far. Assume $f \circ g = f \circ h$. Consider the surjective function $f:\mathbb{R} \rightarrow \mathbb{R}$ given by $f(x) = x\sin(x)$. Should I prove this is surjective before proceeding? Suppose for the sake of contradiction that $g \neq h$, given by $g(x) = 0$ and $h(x) = 2\pi$. Can I choose these constant functions? Do I need to define domains and codomains? $(f \circ g)(x) = f(g(x)) = f(0) = 0 \cdot \sin(0) = 0$ $(f \circ h)(x) = f(h(x)) = f(2\pi) = 2\pi \cdot \sin(2\pi) = 0$ Observe that $f \circ g = f \circ h$ and $g \neq h$. Thus we have given a counterexample to disprove the statement. Thus surjectivity of $f$ is not a sufficient condition for the statement to be true. I understand the proof completely now and understand I have it correct, thank you for your responses.
Let $f: \mathbb{R} \rightarrow [-1,1]$, $f(x)=\sin(x)$. Then let $g(x)=x$, $h(x)=x+2 \pi$. $f$ is surjective and $f \circ h= f \circ g$, but we clearly don't have $h=g$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1815275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Expected value of $\max\{X_1,\ldots,X_n\}$ where $X_i$ are iid uniform. Let $X_i\sim\mathrm{Uniform}(0,\theta)$ be iid, what is $E[\max\{X_1,\ldots,X_n\}]$? Apparently the answer is $$\frac{n}{n+1}\theta,$$ but I do not see why? It seems intuitive in that you would "expect" them to be spaced out evenly, hence the maximum would be $\frac{n}{n+1}$-of-the-way along the interval, but how can we prove this mathematically? I feel like it should be simple, but evidently $$E[\max\{X_1,\ldots,X_n\}]\neq\max\{E[X_1],\ldots,E[X_n]\}.$$ Thanks.
Your intuition is correct. To see this mathematically, suppose $X_1, \ldots, X_n$ are independent and uniformly distributed and $M_n = \max(X_1,X_2,\ldots,X_n).$ The distribution function of the maximum is the joint probability that $X_k \leq x$ for all $k.$ This is a product of marginal probabilities since the variables are independent. $$ F_M(x)=P(M_n \leq x) =P(X_1 \leq x,\ldots,X_n \leq x)=(x/\theta)^n$$ for $0 \leq x \leq \theta$. Also $F_M(x) = 0$ for $x < 0$ and $F_M(x) = 1$ for $x > \theta$. Hence, the probability density function on $[0,\theta]$ is $$f_M(x)=F'_M(x)=nx^{n-1}\theta^{-n}$$ and the expected value is $$E(M_n) = \theta^{-n}\int_0^{\theta}xnx^{n-1}\, dx=\frac{n}{n+1}\theta.$$
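A quick Monte Carlo check of the formula:

```python
import random

# Estimate E[max(X_1, ..., X_n)] and compare with n/(n+1) * theta.
theta, n, trials = 2.0, 5, 200_000
est = sum(
    max(random.uniform(0, theta) for _ in range(n)) for _ in range(trials)
) / trials
print(est, n / (n + 1) * theta)  # both close to 5/3
```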
{ "language": "en", "url": "https://math.stackexchange.com/questions/1815355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Given $a,b_0,\ldots,b_n$, there exists a polynomial of degree $\le n$ s.t. the derivatives $f^{(i)}(a)=b_i$ Just exploring some maths problems from a book until I came across this question. Let $a, b_0, . . . , b_n ∈ R$. Show that there exists a polynomial $f(x)$ of degree at most n such that $$f(a) = b_0, f' (a) = b_1, f''(a) = b_2,\ldots , f^{(n)} (a) = b_n$$ I am not sure how to approach this problem, can someone give me a guide of how to do this problem?
Consider the general polynomial of degree at most $n$ $$ f(x) = a_0 + a_1x + \ldots + a_nx^n. $$ Since a polynomial of degree at most $n$ is equal to (and uniquely determined by) its Taylor expansion of order $n$, consider the Taylor expansion of $f(x)$ about $a$, which is $$ f(x) = f(a) + \frac{(x - a)f'(a)}{1!} + \frac{(x - a)^2f''(a)}{2!} + \ldots + \frac{(x - a)^nf^{(n)}(a)}{n!}. $$ So simply define $$ f(x) = \sum_{k=0}^{n} \frac{b_k}{k!}(x-a)^k: $$ differentiating $j$ times and evaluating at $x=a$ kills every term except $k=j$ and yields $f^{(j)}(a)=b_j$, which gives the required polynomial in terms of the prescribed derivatives at $a$ and powers of $(x - a)$.
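A symbolic check of the construction with SymPy, on sample data made up for illustration:

```python
import sympy as sp

# Build f(x) = sum b_i / i! * (x - a)^i and verify the prescribed derivatives.
x = sp.symbols('x')
a, bs = 2, [3, -1, 4, 0, 5]   # assumed sample values a, b_0, ..., b_n with n = 4
f = sum(b / sp.factorial(i) * (x - a) ** i for i, b in enumerate(bs))
print([sp.diff(f, x, i).subs(x, a) for i in range(len(bs))])  # [3, -1, 4, 0, 5]
```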
{ "language": "en", "url": "https://math.stackexchange.com/questions/1815460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find the center of the circle through the points $(-1,0,0),(0,2,0),(0,0,3).$ Find the center of the circle through the points $(-1,0,0),(0,2,0),(0,0,3).$ Suppose the circle lies on the sphere $x^2+y^2+z^2+2ux+2vy+2wz+d=0$ and in the plane $Ax+By+Cz+D=0$. Then every sphere through the circle has the equation $x^2+y^2+z^2+2ux+2vy+2wz+d+\lambda(Ax+By+Cz+D)=0$ As it passes through $(-1,0,0)$, $1-2u+d+\lambda(-A+D)=0$ As it passes through $(0,2,0)$, $4+4v+d+\lambda(2B+D)=0$ As it passes through $(0,0,3)$, $9+6w+d+\lambda(3C+D)=0$ Here I am stuck. I cannot find the radius of the circle. Is my method correct or not? Is a simpler method possible? Please help.
Follow this process (carried out numerically below):

* find the plane passing through the three points (since the three points lie on the coordinate axes, this is very simple)
* take a normal vector to the plane
* consider any two sides of the triangle made by the three points and take their midpoints
* take the cross product of a side's direction vector with the plane's normal vector; this gives the direction of the in-plane perpendicular to that side
* take the line through the midpoint of that side in this direction
* do the same for the second side
* the point where these two lines intersect is the center of the circle

We have just intersected two perpendicular bisectors of the sides.
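Here is the recipe carried out numerically for the given points (a sketch; the least-squares solve is just a convenient way to intersect the two lines):

```python
import numpy as np

A, B, C = map(np.array, [(-1.0, 0, 0), (0, 2.0, 0), (0, 0, 3.0)])
n = np.cross(B - A, C - A)                        # normal of the plane ABC
d1, d2 = np.cross(B - A, n), np.cross(C - B, n)   # in-plane bisector directions
m1, m2 = (A + B) / 2, (B + C) / 2                 # midpoints of two sides
# Intersect m1 + s*d1 = m2 + t*d2 (a consistent 3x2 linear system).
st, *_ = np.linalg.lstsq(np.column_stack([d1, -d2]), m2 - m1, rcond=None)
center = m1 + st[0] * d1
print(center)                                     # approx. (-0.1327, 0.8163, 1.3776)
print([np.linalg.norm(center - P) for P in (A, B, C)])  # equal distances
```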
{ "language": "en", "url": "https://math.stackexchange.com/questions/1815547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Prove that $X^4+X^3+X^2+X+1$ is irreducible in $\mathbb{Q}[X]$, but that it has two different irreducible factors in $\mathbb{R}[X]$ Prove that $X^4+X^3+X^2+X+1$ is irreducible in $\mathbb{Q}[X]$, but it has two different irreducible factors in $\mathbb{R}[X]$. I've tried to use the cyclotomic polynomial as: $$X^5-1=(X-1)(X^4+X^3+X^2+X+1)$$ So I have that my polynomial is $$\frac{X^5-1}{X-1}$$ and now I have to prove that it is irreducible. A linear change of variables is OK (I don't know why), so I substitute $X+1$ for $X$; then I have: $$\frac{(X+1)^5-1}{X}=\frac{X^5+5X^4+10X^3+10X^2+5X}{X}=X^4+5X^3+10X^2+10X+5$$ And now we can apply the Eisenstein criterion with $p=5$. So my polynomial is irreducible in $\mathbb{Q}$. Now let's prove that it has two different irreducible factors in $\mathbb{R}$. I've tried this way: $X^4+X^3+X^2+X+1=(X^2+AX+B)(X^2+CX+D)$ and solve the system. But solving the system is quite difficult. Is there another way?
let $$P(x)=x^4+x^3+x^2+x+1$$ If $x=\frac{a}{b}$ (in lowest terms) is a root of $P(x)$, then $b\mid 1$ and $a\mid 1$; in other words $a=\pm 1$ and $b=\pm 1$. But $P(1)=5$ and $P(-1)=1$, so $P$ has no rational root, hence no factorization over $\mathbb{Q}$ with a linear factor. Thus we let $$P(x)=(x^2+ax+b)(x^2+cx+d)$$ and compare coefficients: \begin{align} & bd=1 \\ & ad+bc=1 \\ & b+d+ac=1 \\ & a+c=1 \\ \end{align} This system has no solution in $\mathbb{Q}$. Indeed, $$d(ad+bc)=d\times\,1\to\,ad^2+c=d$$ (using $bd=1$). On the other hand $\,c=1-a$, thus $$ad^2+1-a=d\to\,a(d^2-1)=d-1.$$ This implies $d=1$ or $a(d+1)=1$. If $d=1$, then $b=1$ and $\left\{\begin{matrix} a+c=1 \\ ac=-1 \\ \end{matrix}\right.$, so $a,c$ are the roots of $t^2-t-1$, namely $\frac{1\pm\sqrt 5}{2}$, which are irrational; so there is no rational factorization in this case. If $a(d+1)=1$, then $a=\frac{1}{d+1}=\frac{b}{b+1}$ (since $d=\frac1b$), and $b+d+ac=1$ becomes $$b+\frac{1}{b}+\frac{b}{b+1}\left(1-\frac{b}{b+1}\right)=1,$$ i.e. $$\frac{(b+1)^2}{b}+\frac{b}{(b+1)^2}=3.$$ Writing $u=\frac{(b+1)^2}{b}=b+2+\frac1b$, a real $b$ gives $u\ge 4$ (if $b>0$) or $u\le 0$ (if $b<0$), whereas $u+\frac1u=3$ forces $u=\frac{3\pm\sqrt5}{2}\in(0,4)$; so this case has no solution even in $\mathbb{R}$. Hence $P$ is irreducible over $\mathbb{Q}$. Over $\mathbb{R}$, however, the first case does give a factorization: taking $b=d=1$, $a=\frac{1+\sqrt 5}{2}$, $c=\frac{1-\sqrt 5}{2}$ yields $$P(x)=\left(x^2+\tfrac{1+\sqrt 5}{2}\,x+1\right)\left(x^2+\tfrac{1-\sqrt 5}{2}\,x+1\right),$$ and both quadratic factors are irreducible over $\mathbb{R}$, since their discriminants $\left(\tfrac{1\pm\sqrt 5}{2}\right)^2-4$ are negative.
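Both claims can be confirmed with a computer algebra system, e.g. SymPy:

```python
import sympy as sp

x = sp.symbols('x')
P = x**4 + x**3 + x**2 + x + 1
print(sp.factor(P))                         # unchanged: irreducible over Q
print(sp.factor(P, extension=sp.sqrt(5)))   # the two quadratic factors over R
```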
{ "language": "en", "url": "https://math.stackexchange.com/questions/1815662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Ten digit numbers divisible by 3. I came across an interesting property of 10-digit numbers that are constructed using each digit only once: e.g. $9867534210$. These numbers are exactly divisible by $3$; each and every one of the $10!$ arrangements is divisible by $3$. But I have no idea why this property emerges. Can somebody explain why this happens?
You have $10$ digits $[0,1,2,3,4,5,6,7,8,9]$, and if you construct every possible number by taking each digit exactly once, you'll get $10!$ numbers. A number is divisible by $3$ if (and only if) the sum of the digits of the number is divisible by $3$. For all these numbers, the sum of digits is $0+1+2+3+4+5+6+7+8+9=45$, which is divisible by $3$. So all these numbers are divisible by $3$.
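A quick spot check in Python:

```python
import random

# Random arrangements of the digits 0..9 are all multiples of 3.
digits = list(range(10))
for _ in range(1000):
    random.shuffle(digits)
    assert int(''.join(map(str, digits))) % 3 == 0
print("all sampled arrangements are divisible by 3")
```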
{ "language": "en", "url": "https://math.stackexchange.com/questions/1815742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How to prove that the condition is sufficient? I was solving a question at a programming website. The question specifies that a person is standing at point $(a, b)$ in an infinite 2D grid. He wants to know if he can reach point $(x, y)$ or not. The only operation allowed is to move to point $(a, a+b)$ or $(a+b, b)$ or $(a-b, b)$ or $(a, a-b)$. He can perform any number of these operations. Now, I know that $\gcd(a + m*b, b)$ is same as $\gcd(a,b)$(I have proved it). Therefore, at the end $\gcd(x,y)$ should be equal to $\gcd(a,b)$. But how do I prove it to be sufficient?
Let $d = \gcd(a, b) = \gcd(x, y)$. For simplicity, I am assuming that $a, b, x, y$ are nonnegative. Otherwise, you can easily modify them until they are. You can use the euclidean algorithm to get from $(a, b)$ to $(x, y)$: The algorithm looks like this (quoting Wikipedia):

function gcd(a, b)
    while a ≠ b
        if a > b
            a := a − b;
        else
            b := b − a;
    return a;

You can easily visualize this as an algorithm that repeatedly replaces $(a, b)$ either with $(a - b, b)$ or with $(a, b - a)$, until it reaches the point $(d, d)$. Similarly, you can use this algorithm to get from $(x, y)$ to $(d, d)$. Now, we can take this sequence of steps, and reverse it. When doing so, an operation that takes $(a, b)$ to $(a - b, b)$ turns into an operation that takes $(o, p)$ to $(o + p, p)$ and an operation that takes $(a, b)$ to $(a, b - a)$ turns into an operation that takes $(o, p)$ to $(o, o + p)$. So after reversing this path from $(x, y)$ to $(d, d)$, we get a path from $(d, d)$ to $(x, y)$. Concatenating this to the first path, we get a path from $(a, b)$ to $(x, y)$! Edit: To clarify what you do if $a, b, x, y$ are negative: If $a < 0$, you can go from $(a, b)$ to $(a + b, b)$ to $(a + b, b - (a + b)) = (a + b, -a)$ to $(b, -a)$. If $b < 0$ as well, you can repeat this process to get to $(-a, -b)$. If $x$ or $y$ are negative, you can use the same method to go from $(x, y)$ to some point with nonnegative coordinates, and then reverse that path.
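The argument translates directly into code (a sketch for positive integer inputs; 'L'/'R' record which coordinate was reduced, and the '+' entries are the reversed, additive replay):

```python
# Record the subtractive-Euclid path from a point down to (d, d).
def path_down(a, b):
    assert a > 0 and b > 0
    steps = []
    while a != b:
        if a > b:
            a -= b; steps.append('L')   # (a, b) -> (a - b, b)
        else:
            b -= a; steps.append('R')   # (a, b) -> (a, b - a)
    return steps, (a, b)

def reachable_path(a, b, x, y):
    down1, meet1 = path_down(a, b)
    down2, meet2 = path_down(x, y)
    assert meet1 == meet2               # equal gcds guarantee the paths meet
    # Walk down from (a, b) to (d, d), then undo the (x, y) path with additions.
    return down1 + ['+' + s for s in reversed(down2)]

print(reachable_path(6, 4, 10, 8))      # e.g. ['L', 'R', '+R', '+R', '+R', '+L']
```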
{ "language": "en", "url": "https://math.stackexchange.com/questions/1815856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Dual space with $W=\ker f$ Let $f \in V^*$ with $V$ a vector space and $W=\ker f$. If $v_0 \in V$ is a vector such that $f(v_0)\ne 0$, then for every $v \in V$ there exist a unique $w \in W$ and a unique scalar $c$ such that $v=cv_0+w$. How can I prove this? I don't understand it; please, if someone can help me. Thanks for your time and help.
Since there exists $v_0 \in V$ such that $f(v_0) \neq 0$, we know $f$ is not trivial. Since $V$ is a vector space over a field $F$, and $f:V \rightarrow F$ is linear, we must then have $f$ surjective, since $F$, as a vector space over itself, has no nonzero proper subspaces (this is a property of fields). Then we have by the first isomorphism theorem: $V/ \ker(f) \cong F$, so then $V/\ker(f)$ is a one-dimensional vector space, spanned by the nonzero coset $v_0+\ker(f)$, i.e. for every $v \in V$ there is a $c \in F$ such that $v+ \ker(f)=cv_0+\ker(f)$. But then $v+h=cv_0+k$ for some $h,k \in \ker(f)$, so letting $w=k-h$, $w \in \ker(f)$ and $v=cv_0+w$, as desired. For uniqueness: if $c_1v_0+w_1=c_2v_0+w_2$ with $w_1,w_2\in\ker(f)$, then applying $f$ gives $c_1f(v_0)=c_2f(v_0)$, so $c_1=c_2$ (because $f(v_0)\neq 0$), and then $w_1=w_2$. If you don't want to use the first isomorphism theorem and $V$ is finite-dimensional, use rank-nullity: $\dim(\operatorname{im}(f))+\dim(\ker(f))= \dim(V)$, so $\dim(\ker(f))= \dim(V)- 1$, i.e. $\dim(V/\ker(f))=1$, and the same result follows.
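For what it's worth, there is also a direct construction that avoids quotients entirely: given $v \in V$, set $$c=\frac{f(v)}{f(v_0)}, \qquad w=v-cv_0.$$ Then $f(w)=f(v)-\frac{f(v)}{f(v_0)}f(v_0)=0$, so $w \in \ker(f)=W$ and $v=cv_0+w$, with uniqueness exactly as above.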
{ "language": "en", "url": "https://math.stackexchange.com/questions/1815959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }