| Q | A | meta |
|---|---|---|
Cauchy nets in a metric space Say that a net $a_i$ in a metric space is Cauchy if for every $\epsilon > 0$ there exists $I$ such that for all $i, j \geq I$ one has $d(a_i,a_j) \leq \epsilon$. If the metric space is complete, does it hold (and in either case why) that every Cauchy net converges?
| Yes, it’s true.
Suppose that the metric space $\langle X,d\rangle$ is complete, and let $\langle x_i:i\in I\rangle$ be a Cauchy net in $X$. Pick $i(0)\in I$ such that $d(x_i,x_j)\le 1$ whenever $i,j\ge i(0)$. Given $i(n)\in I$ such that $d(x_i,x_j)\le 2^{-n}$ whenever $i,j\ge i(n)$, choose $i(n+1)\in I$ such that $i(n+1)\ge i(n)$ and $d(x_i,x_j)\le 2^{-(n+1)}$ whenever $i,j\ge i(n+1)$. Then the sequence $\langle x_{i(k)}:k\in\Bbb N\rangle$ is $d$-Cauchy and therefore converges to some $x\in X$. Fix $\epsilon>0$; there is an $m_0\in\Bbb N$ such that $d(x_{i(n)},x)<\epsilon/2$ whenever $n\ge m_0$, and there is an $m_1\in\Bbb N$ such that $d(x_i,x_j)<\epsilon/2$ whenever $i,j\ge i(m_1)$. Let $m=\max\{m_0,m_1\}$; then $d(x_i,x)<\epsilon$ whenever $i\ge i(m)$, so $\langle x_i:i\in I\rangle$ converges to $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/187703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 1
} |
Can I prove that this polynomial is irreducible? The task is to prove that the following polynomial is irreducible over $\mathbb{Q}$:
$$y^3-3y +1 = 0.$$
I don't know an easy way to attack other similar problems.
Thanks.
| Suppose that $y^3-3y+1$ is not irreducible over $\mathbb Q$. Then it can be factored into a linear and a quadratic term over $\mathbb Q$. Thus it has a rational root (the root of the linear term). Yet by the rational root theorem, any rational root of $y^3-3y+1$ is $\pm 1$ and it is straightforward to check that neither of these works.
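If you want to run the rational-root check mechanically, here is a short Python sketch (the polynomial is hard-coded from the question; exact rational arithmetic via `fractions` avoids any floating-point doubt):

```python
from fractions import Fraction

def p(y):
    """The cubic from the question: y^3 - 3y + 1."""
    return y**3 - 3*y + 1

# By the rational root theorem, any rational root has numerator dividing the
# constant term 1 and denominator dividing the leading coefficient 1, so the
# only candidates are +1 and -1.
candidates = [Fraction(1), Fraction(-1)]
roots = [c for c in candidates if p(c) == 0]
print(roots)  # -> []: no rational root, hence no linear factor over Q
```

Since a cubic with no rational root has no linear factor over $\mathbb Q$, the empty list confirms irreducibility.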
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/187753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Foot of a dropped perpendicular in $\mathbb{R}^3$ Given a point $P\in\mathbb{R}^3$, is there a way to compute the foot of the perpendicular dropped onto the line $l = \vec{a} + x\vec{b}$ with $x\in\mathbb{R}$?
There is one constraint here: I want to compute it with a computer program, so solving a system of linear equations should be avoided.
My own thoughts so far:
A line has no unique perpendicular in $\mathbb{R}^3$. That's why we can't easily construct a perpendicular from the get-go. But we can construct a helping plane $H$ that's perpendicular to $l$: $$H: \left(\vec{r}\cdot\frac{\vec{b}}{\left\|\vec{b}\right\|}\right) - d = 0$$ with $d = \vec{a}\cdot\frac{\vec{b}}{\left\|\vec{b}\right\|}$.
We could now substitute the line into this equation:
$$H: \left(\left(\vec{a} + x\vec{b}\right)\cdot\frac{\vec{b}}{\left\|\vec{b}\right\|}\right) - d = 0$$
Here I am completely stuck because I can't come up with a computational solution to solve for $x$.
| You seem to go through a lot of trouble when the naive way works:
So you are looking for a point $Q$ on the line, say $Q=a+qb$, such that $P-Q$ is orthogonal to $b$, i.e. $\left<P-Q,b\right>=0$. Plugging things together we get:
$$0=\left<P-Q,b\right>=\left<P-a-qb,b\right>=\left<P-a,b\right>-q\left<b,b\right>$$
or equivalently
$$q=\frac{\left<P-a,b\right>}{\left<b,b\right>}$$
So just compute this $q$ and then the foot will be $Q=a+qb$.
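This translates directly into code, which is exactly the "computer program" form asked for (NumPy here; the point and line are made-up sample data, not from the question):

```python
import numpy as np

def foot_of_perpendicular(P, a, b):
    """Foot of the perpendicular from point P onto the line x -> a + x*b."""
    q = np.dot(P - a, b) / np.dot(b, b)
    return a + q * b

# Sample data: line through the origin along the x-axis.
P = np.array([3.0, 4.0, 5.0])
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
Q = foot_of_perpendicular(P, a, b)
print(Q)                  # -> [3. 0. 0.]
print(np.dot(P - Q, b))   # -> 0.0, i.e. P - Q is orthogonal to b, as required
```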
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/187813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Finding Lyapunov functions I came across the following question in one of my professor's past exams:
Find a Lyapunov function for $(0,0)$ in the system:
$$
\left\{
\begin{array}{ll}
\dot{x} = -x -2y + x^2\\
\dot{y} = x - 4y + xy
\end{array}\right.
$$
I know there is no formula for finding Lyapunov functions for a system, so how do I start solving such problems?
Thanks!
| For many ODEs, a good bet is to try a polynomial as a Lyapunov function candidate. In fact, in practice, a common method is to search for a sum-of-squares polynomial, i.e., a polynomial that can be given as $\sum_{i=1}^k p_i(x)^2$, with $p_1, \dots, p_k$ polynomial.
I haven't tried this example, but a good approach might be to try something of the form $V(x,y) = (x+ay+b)^2 + cy^2 + d$, with parameters $a,b,c,d$.
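As an aside (my own check, not part of the suggestion above): for this particular system even the most naive candidate $V(x,y)=x^2+y^2$ already works locally, which a CAS makes quick to verify. A SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
xdot = -x - 2*y + x**2
ydot = x - 4*y + x*y

V = x**2 + y**2  # the simplest candidate, tried before anything fancier
Vdot = sp.expand(sp.diff(V, x)*xdot + sp.diff(V, y)*ydot)
print(Vdot)  # -2x^2 - 2xy - 8y^2 plus cubic terms

# The quadratic part -2x^2 - 2xy - 8y^2 has matrix [[-2, -1], [-1, -8]],
# which is negative definite (negative trace, positive determinant), so
# Vdot < 0 near the origin and V certifies local asymptotic stability.
M = sp.Matrix([[-2, -1], [-1, -8]])
print(M.det(), M.trace())  # -> 15 -10
```

The cubic terms are dominated by the quadratic part in a neighborhood of $(0,0)$, so no parameter search is needed here.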
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/187900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Solving improper integrals and u-substitution on infinite series convergent tests This is the question:
Use the integral test to determine the convergence of $\sum_{n=1}^{\infty}\frac{1}{1+2n}$.
I started by writing:
$$\int_1^\infty\frac{1}{1+2x}dx=\lim_{a \rightarrow \infty}\left(\int_1^a\frac{1}{1+2x}dx\right)$$
I then decided to use u-substitution with $u=1+2n$ to solve the improper integral. I got the answer wrong and resorted to my answer book and this is where they went after setting $u=1+2n$:
$$\lim_{a \rightarrow \infty}\left(\frac{1}{2}\int_{3}^{1+2a}\frac{1}{u}du\right)$$
And the answer goes on...
What I can't figure out is where the $\frac{1}{2}$ came from when the u-substitution began and also, why the lower bound of the integral was changed to 3.
Can someone tell me?
| You have $\int_{1}^{\infty}{\frac{1}{1+2n}\:dn}=\lim_{a\to\infty}\left(\int_{1}^{a}{\frac{1}{1+2n}\:dn}\right)$. Using the substitution you mentioned $u=1+2n$, we have our lower bound as $u(1)=1+2(1)=3$ and our upper bound as $u(a)=1+2(a)=1+2a$.
We also have $\frac{du}{dn}=2\implies dn=\frac{du}{2}$, therefore we have:
$$\int_{3}^{1+2a}{\frac{1}{u}\frac{du}{2}}=\frac{1}{2}\int_{3}^{1+2a}{\frac{1}{u}\:du}$$
Taking our previous limit, we have:
$$\lim_{a\to\infty}{\left(\frac{1}{2}\int_{3}^{1+2a}\frac{1}{u}\:du\right)}$$
Which is what you have in your answer book.
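For completeness (not asked, but it closes the loop on the integral test): the improper integral diverges, which SymPy confirms directly, so the series diverges too.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
integral = sp.integrate(1/(1 + 2*x), (x, 1, sp.oo))
print(integral)  # -> oo, so by the integral test the series sum 1/(1+2n) diverges
```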
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/187959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
$\mathbb{Z}$ has no torsion? What does is mean to say that $\mathbb{Z}$ has no torsion?
Is this an important fact for any course?
Thanks, I heard that in my field theory course, but I don't know what it is.
| In general, given an $R$-module $M$, an element $m \in M$ is called a torsion element if there exists some non zero $r \in R$ such that $rm=0$. Here the set of torsion elements of $M$ is denoted $\text{Tor}(M)$, although I have also seen it denoted by $T(M)$. If $\text{Tor}(M)=0$, then $M$ is said to be torsion free.
For any ring $R$, one can think of $R$ as an $R$-module over itself, where scalar multiplication is simply the ring multiplication. In your case you are looking at the ring $\mathbb{Z}$, which is certainly a $\mathbb{Z}$-module. So the torsion elements of $\mathbb{Z}$ would be the set $\text{Tor}(\mathbb{Z})$. If $a \in \text{Tor}(\mathbb{Z})$, then there exists some $n\in \mathbb{Z}$ such that $na=0$. Since $\mathbb{Z}$ is an integral domain (the prototypical one in fact), the equation $na=0$ forces either $n=0$ or $a=0$. Thus $\text{Tor}(\mathbb{Z})=\{0\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/187988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Trying to find angle at which projectile was fired. So let's say I have a parabolic function that describes the displacement of some projectile without air resistance. Let's say it's
$$y=-4.9x^2+V_0x.$$
I want to know at what angle the projectile was fired. I notice that
$$\tan \theta_0=f'(x_0)$$ so the angle should be
$$\theta_0 = \arctan(f'(0)).$$ or
$$\theta_0 = \arctan(V_0).$$
Is this correct? I can't work out why it wouldn't be, but it doesn't look right when I plot curves.
| As you only have one equation to denote the position I assume you are working in one dimension. If you are working in one dimension, the initial angle is $0$ or $\frac{\pi}{2}$, if your $y$ denotes the height of the position. For a 2 dimensional trajectory you need two equations to denote the position of a projectile. If you shoot a projectile at speed $v_0$ with an angle $\alpha$, the position $(x,y)$ is $$x=x_0+v_0t\cos\alpha$$
$$y=y_0+v_0t\sin\alpha+a{t}^{2}$$
where $(x_0,y_0)$ is the initial position, $a$ is the gravitational term, and $t$ is the time.
If you know $v_x=v_0\cos\alpha$ and $v_y=v_0\sin\alpha$, the initial angle $\alpha$ is $\arctan\frac{v_y}{v_x}$. You could define the angle as a function of time, $\alpha(t)=\arctan\frac{v_0\sin\alpha+2at}{v_0\cos\alpha}$. The angle at the vertex of the parabola described by the projectile should be $0^\circ$, as $v_y$ is zero there. So $\alpha(\frac{-v_0\sin\alpha}{2a})=0$. If $x_0=0$ and $y_0=0$, then $\alpha(\frac{-v_0\sin\alpha}{a})=-\arctan\frac{v_y}{v_x}$. Summarizing, with your equation the trajectory is vertical, so the angle $\alpha$ would be $\lim_{x\to 0}\arctan\frac{V_0}{x V_0} = \frac{\pi}{2}$, as $x$ does not change.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
$A\in M_{2n+1}(\mathbb{R})$, with $AA^t=I$, prove that either $1$ or $-1$ is an eigenvalue. So basically the question is in the title. I did it this way:
Use the fact that if $\lambda$ is an eigenvalue (with eigen vector $x$) of a normal operator $T$, then $x$ is an eigenvector corresponding to $\bar{\lambda}$ of $T^*$.
Since $AA^t=I$, and our entries are in the reals, we know that if $\lambda$ is an eigenvalue of $A$, then it is also an eigenvalue of $A^t$ (with the same eigenvector).
To prove the existence of such an eigenvalue we see that the char poly of $A$ is going to have odd degree (here we use the $2n+1$ assumption), and hence a root, so at least one eigenvalue exists.
Hence, say $P$ is the change of coordinate matrix such that $PAP^{-1}=B$ where $B$ has as a first column $(\lambda,0,....,0)^t$, by the above remarks we know that this is also the first column of $B^t$, so $$BB^t=PAP^{-1}PA^tP^{-1}=PIP^{-1}=I$$the $(1,1)$ entry of the LHS is $\lambda^2$ and the RHS is $1$, so $\lambda^2=1$, and we are done.
I'm trying to find other ways to do this problem. I ask because I missed the session and I saw that this was one of the problems done, but they haven't learned the first claim (the one about normal operators) that I used. (I believe they were talking about minimal polynomials that day.) If anyone knows another way it would be appreciated.
Thanks.
| $Av=\lambda v\implies||Av||=||\lambda v||\implies\langle Av,Av\rangle=\langle\lambda v,\lambda v\rangle$
now using the fact the over $\mathbb{R}$ it holds that $A^{t}=A^{*}$
and since $A$ is orthogonal we have $\langle v,v\rangle=|\lambda|^{2}\langle v,v\rangle\implies|\lambda|=1$.
Since, as you stated, the char poly of $A$ is of odd degree it has
a real root $\lambda$ and the claims follow since $|\lambda|=1$
and $\lambda\in\mathbb{R}$ imply $\lambda\in\{-1,1\}$.
Note: I used that $v\neq 0$ since it is an eigenvector to deduce $\langle v,v\rangle\neq 0$
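A quick numerical illustration of the claim (not a proof): generate a random odd-dimensional orthogonal matrix and check that $1$ or $-1$ shows up among its eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # odd dimension
# QR decomposition of a random matrix yields an orthogonal factor Q
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
assert np.allclose(Q @ Q.T, np.eye(n))

w = np.linalg.eigvals(Q)
# all eigenvalues lie on the unit circle...
assert np.allclose(np.abs(w), 1.0)
# ...and, since n is odd, at least one of them is +1 or -1
dist = np.min(np.minimum(np.abs(w - 1), np.abs(w + 1)))
print(dist)  # numerically zero
```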
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to complete this proof regarding the closed form of the Tower of Hanoi problem?
Possible Duplicate:
can one derive the $n^{th}$ term for the series, $u_{n+1}=2u_{n}+1$,$u_{0}=0$, $n$ is a non-negative integer
I'm trying to learn induction through practise and I'm dealing with the classic tower of hanoi question.
$T_{n} = 2T_{n-1} + 1$
for $ n > 0$
I've come up with a closed form equation:
$T_{n} = 2^{n}-1$
Now I'm trying to prove the second statement inductively and this is what I have thus far, but I think I'm missing the mark a bit.
$T_{n} = 2T_{n-1}+1=2(2^{n-1}-1)+1 = ...$
I'm sure that this expands out to $2^{n} - 1$ but I can't seem to do it. Can someone please show me how?
| What you really want to show is that the function $f(n)=2^n-1$ that expresses your closed form solution satisfies the recurrence and initial conditions of the Tower of Hanoi problem. The recurrence should be $T_n=2T_{n-1}+1$ for $n>0$. You want to prove by induction that $T_n$, defined by this recurrence with initial condition $T_1=1$, is equal to $f(n)$ for every positive integer $n$.
First we get the induction off the ground: $f(1)=2^1-1=1=T_1$, so all’s well. Now assume that $T_n=f(n)$ for some $n\ge 1$; that’s your induction hypothesis, and you want to use it to prove that $T_{n+1}=f(n+1)$. The computation is pretty straightforward:
$$\begin{align*}
T_{n+1}&\overset{(1)}=2T_n+1\\
&\overset{(2)}=2f(n)+1\\
&\overset{(3)}=2\left(2^n-1\right)+1\\
&\overset{(4)}=2\cdot2^n-2+1\\
&\overset{(5)}=2^{n+1}-1\\
&\overset{(6)}=f(n+1)
\end{align*}$$
Here $(1)$ is by virtue of the recurrence defining the numbers $T_k$, $(2)$ uses the induction hypothesis that $T_n=f(n)$, $(3)$ replaces $f(n)$ by its definition, $(4)$ and $(5)$ are algebra, and $(6)$ is the definition of $f$ again.
Since you’ve checked that $T_1=f(1)$ and that $T_n=f(n)$ implies that $T_{n+1}=f(n+1)$, it follows from the principle of mathematical induction that $T_n=f(n)=2^n-1$ for all positive integers $n$.
Of course this is an excessively careful presentation of the argument; in practice both the calculation and the verbiage can be reduced a bit.
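The induction can also be sanity-checked mechanically by iterating the recurrence, which is essentially the same base-case-plus-step structure in code:

```python
T = 0  # T_0 = 0 (zero moves for zero disks); then T_1 = 2*0 + 1 = 1, as above
for n in range(1, 31):
    T = 2 * T + 1          # the recurrence T_n = 2*T_{n-1} + 1
    assert T == 2**n - 1   # the closed form f(n) = 2^n - 1
print(T)  # -> 1073741823 == 2**30 - 1
```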
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to prove $ \int_0^\infty e^{-x^2} \; dx = \frac{\sqrt{\pi}}2$ without changing into polar coordinates? How can one prove $ \int_0^\infty e^{-x^2} \; dx = \frac{\sqrt{\pi}}2$ other than by changing into polar coordinates? Is it possible to prove it using infinite series?
| We use the change of variables $y=x^2$.
$$ \int_0^\infty e^{-x^2} \; dx = \frac{1}{2}\int_{0}^{\infty} y^{-1/2} {\rm e}^{-y} \,dy = \frac{1}{2} \Gamma(\frac{1}{2}) \,,$$
where $\Gamma(x)$ is defined by the integral,
$$\Gamma( x ) = \int_{0}^{\infty} y^{x-1} {\rm e}^{-y} \,dy \,. $$
Series Method
If you are interested in a series method, you can use Watson's lemma.
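If you want to check the computation, SymPy reproduces both the integral and the Gamma-function value symbolically:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.exp(-x**2), (x, 0, sp.oo)))  # -> sqrt(pi)/2
print(sp.gamma(sp.Rational(1, 2)))                 # -> sqrt(pi)
```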
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 7,
"answer_id": 2
} |
Characterising continuous maps between metric spaces Let $f:(X,d)\to (Y,\rho)$.
Prove that $f$ is continuous if and only if $f$ is continuous restricted to all compact subsets of $(X,d)$.
I could do the left to right implication but couldn't do the reverse implication.
Please help.
Thanks in advance!!!
| Just as @Siminore pointed out in comments.
To be more precise:
assume that $f$ is continuous on every compact subset of $X$.
Suppose that a sequence $x_n$ of elements of $X$ is such that $x_n\rightarrow x$ for some $x\in X$.
Observe that the set $\{x_n:n\in\mathbb{N}\}\cup\{x\}$ is a compact subset of the space $X$. Thus, by our assumption, $f$ is continuous on that set, and this simply means that $f(x_n)\rightarrow f(x)$. Since the choice of the sequence was arbitrary we have proved that $f:X\rightarrow Y$ is a continuous function. Q.E.D.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is an application of the dual space? Does somebody know any application of the dual space in physics, chemistry, biology, computer science or economics?
(I would like to add that to the german wikipedia article about the dual space.)
| When you are dealing with a vector space $X$ its dual $X^*$ is there, whether you are willing to apply it or not. It is a fact that the presence of a scalar product in $X$ tends to obscure the difference between $X$ and $X^*$. This is the case when $X$ is our geometrical or physical space ${\mathbb R}^3$ with the "standard scalar product", i.e., is a euclidean space where an orthonormal basis has been chosen.
An example: When $f$ is a differentiable scalar function of ${\bf x}$ then for "small" increment vectors ${\bf X}$ one has
$$f({\bf x}+{\bf X})=f({\bf x})+ \omega.{\bf X}+ o\bigl(|{\bf X}|\bigr)\qquad ({\bf X}\to{\bf 0})\ ,$$
where $\omega.{\bf X}$ depends linearly on ${\bf X}$. That is to say: The symbol $\omega:=df({\bf x})$ denotes an element of $X^*$. Now the availability of a scalar product $\bullet$ in $X$ allows one to write the above equation in the form
$$f({\bf x}+{\bf X})=f({\bf x})+ \nabla f({\bf x})\bullet{\bf X}+ o\bigl(|{\bf X}|\bigr)\qquad ({\bf X}\to{\bf 0})\ ,$$
where $\nabla f({\bf x})$ denotes the gradient of $f$ at ${\bf x}$ and is a bona fide vector in $X$.
There are situations in physics or economics where the variables are not "geometric" variables $x_i$ but $p$, $V$, $T$ for pressure, volume, and temperature (or similar). In such a case it doesn't make sense to introduce a scalar product making the "norm" of a state $s$ equal to $\|s\|:=\sqrt{p^2+V^2+T^2}$. As a consequence a linear function of the state variables $p$, $V$, $T$ is a genuine element of $X^*$ and cannot be represented by some "gradient vector" belonging to the state space $X$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Sigma notation only for odd iterations $ \sum_{i=0}^{5}{i^2} = 0^2+1^2+2^2+3^2+4^2+5^2 = 55 $
How to write this Sigma notation only for odd numbers: $ 1^2+3^2+5^2 = 35 $ ?
| You could write
$$
\sum_{i=1}^{3} f(2i-1).
$$
Otherwise it is allowed to write
$$
\sum_{1 \leq i\leq 5, i \text{ odd}} f(i).
$$
(Here in your example $f(i) = i^2$ of course).
So in general whatever condition you have on the index, you can write that underneath the sum. In general you will find some people prefer one thing over another.
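Both notations describe the same computation; in code the two styles look like this (with $f(i)=i^2$ as in the example):

```python
# Re-indexed range: the odd numbers 1, 3, 5 are 2i - 1 for i = 1, 2, 3
s1 = sum((2*i - 1)**2 for i in range(1, 4))
# Filtered range: all odd i between 1 and 5
s2 = sum(i**2 for i in range(1, 6) if i % 2 == 1)
print(s1, s2)  # -> 35 35
```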
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Why are the rows of a parity-check matrix linearly independent? I've been reading about error-correcting codes, and I came across the following definition for a parity-check matrix:
''There is an $(n-k) \times n$ matrix $H$, called a parity check matrix for an $[n,k]$ linear code $\mathcal{C}$, defined by
$$\mathcal{C} = \{x \in \mathbb{F}^{n}_{q} | Hx^{T} = 0 \}.'' $$
Then, the book states that the rows of $H$ will also be linearly independent, but I am having trouble seeing why this is true.
As of now, I've noticed that the map $x \mapsto Hx^{T}$ is a linear transformation from $\mathbb{F}^{n}_{q}$ to $\mathbb{F}^{n-k}_{q}$ with $\mathcal{C}$ being the kernel of this linear transformation, but I can't make the connection to linear independence.
Can anyone help?
| I think Dilip has it (in a comment above). Remember the Rank-Nullity theorem for this one. ($\mathcal{C}$ is the kernel of the linear transformation described by $H$)
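To make the rank–nullity argument concrete, here is a small check using one standard choice of parity-check matrix for the binary $[7,4]$ Hamming code (my example, not from the question): the rows are independent, so the rank is $n-k=3$, and brute force confirms the kernel has $q^k = 2^4 = 16$ elements.

```python
from itertools import product

# A common parity-check matrix of the [7,4] binary Hamming code
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def rank_gf2(rows):
    """Row rank over GF(2) via Gaussian elimination on 7-bit bitmasks."""
    masks = [int(''.join(map(str, r)), 2) for r in rows]
    rank = 0
    for col in range(6, -1, -1):
        bit = 1 << col
        idx = next((i for i, m in enumerate(masks) if m & bit), None)
        if idx is None:
            continue
        pivot = masks.pop(idx)
        masks = [m ^ pivot if m & bit else m for m in masks]
        rank += 1
    return rank

print(rank_gf2(H))  # -> 3, i.e. n - k: full row rank

# The code C = { x : H x^T = 0 } has 2^4 = 16 words, matching rank-nullity
code = [x for x in product((0, 1), repeat=7)
        if all(sum(h*c for h, c in zip(row, x)) % 2 == 0 for row in H)]
print(len(code))  # -> 16
```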
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Differential Equation Past Paper Question
a) Find the general solution to the differential equation: $$2\frac{d^{2}y}{dx^{2}}-4\frac{dy}{dx}+20y=0.$$ b) Find the general solution to the differential equation: $$x^{2}\frac{d^{2}y}{dx^{2}}+4x\frac{dy}{dx}+2y=\frac{1}{x}.$$
I am having trouble with these two differential equations in a past paper I am going through.
Thanks in advance for any replies!
| I'll just try to expand on the comments.
These two equations are linear ODEs, the first one being homogeneous.
To solve the first one, guess a solution of the form $y = e^{ax}$ and solve for $a$. You should get two values of $a$, say $a_1$ and $a_2$, hence two solutions: $e^{a_1x}$ and $e^{a_2x}$. The theory of ODEs will tell you that for $x \in \mathbb R$, there are at most $2$ linearly independent solutions, so you know that $c_1 e^{a_1 x} + c_2 e^{a_2 x}$ is your complete solution. Well, you must know that $e^{a_1 x}$ and $e^{a_2 x}$ are linearly independent before making the last conclusion. This is always the case when $a_1 \ne a_2$ (which is true here).
To solve the second one, first solve its homogeneous version: $x^2 \frac{d^2 y}{dx^2} + 4x \frac{dy}{dx} + 2y = 0$. This can be solved by first guessing the form of solution $y = x^a$ and solving for $a$. (This is equivalent to setting $x = e^t$ in Robert's comment.) You'll get two values for $a$, say $a_1$ and $a_2$, hence two solutions $x^{a_1}$ and $x^{a_2}$. The theory of ODEs, again, tells you that for $x > 0$, $y = c_1 x^{a_1} + c_2 x^{a_2}$ is the complete solution to the homogeneous equation. To deal with the non-homogeneous term, use variation of parameters: Assume $y = u_1(x)x^{a_1} + u_2(x)x^{a_2}$. Solve the system
$$
\begin{pmatrix}
x^{a_1} & x^{a_2} \\
a_1 x^{a_1 - 1} & a_2 x^{a_2 - 1}
\end{pmatrix}
\begin{pmatrix}
u_1'(x) \\ u_2'(x)
\end{pmatrix} =
\begin{pmatrix}
0 \\
1/x^3
\end{pmatrix}
$$
for $u_1'$ and $u_2'$, then integrate to get $u_1$ and $u_2$ (keep the two constants that appear). Substitute back into $y = u_1(x)x^{a_1} + u_2(x)x^{a_2}$ to get your final answer.
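If you want to check your hand computation afterwards, SymPy solves both equations directly (a verification aid, not the method the exam expects):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# a) 2y'' - 4y' + 20y = 0: characteristic roots a = 1 +/- 3i
ode_a = 2*y(x).diff(x, 2) - 4*y(x).diff(x) + 20*y(x)
sol_a = sp.dsolve(ode_a, y(x))
print(sol_a)  # typically y(x) = (C1*sin(3*x) + C2*cos(3*x))*exp(x)

# b) x^2 y'' + 4x y' + 2y = 1/x: an Euler equation with exponents -1 and -2
ode_b = sp.Eq(x**2*y(x).diff(x, 2) + 4*x*y(x).diff(x) + 2*y(x), 1/x)
sol_b = sp.dsolve(ode_b, y(x))
print(sol_b)  # homogeneous part C1/x + C2/x**2 plus a log(x)/x particular term

# checkodesol substitutes each solution back into its equation
assert sp.checkodesol(ode_a, sol_a)[0]
assert sp.checkodesol(ode_b, sol_b)[0]
```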
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
how can one find the value of the expression, $(1^2+2^2+3^2+\cdots+n^2)$
Possible Duplicate:
Proof that $\sum\limits_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6}$?
Summation of natural number set with power of $m$
How to get to the formula for the sum of squares of first n numbers?
Let,
$T_{2}(n)=1^2+2^2+3^2+\cdots+n^2$
$T_{2}(n)=(1^2+n^2)+(2^2+(n-1)^2)+\cdots$
$T_{2}(n)=((n+1)^2-2(1)(n))+((n+1)^2-2(2)(n-1))+\cdots$
| If you can find or sketch some 3D blocks, there is a fun geometric proof.
Fix some $n$. If you are doing this with real blocks, $n=3$ or $4$ should be convincing. Let's take $n=4$ for now.
Make a $4\times 4\times 1$ base, laid flat, which of course has volume $4^2$.
Now make a $3\times 3 \times 1$ brick and place it, laid flat, above the base with some corners aligned. Continue in this way up to the top, making smaller squares and always aligning with the same corner. You now have a 3D corner of stairs whose volume is $1^2+\cdots +n^2$.
Now the fun part. Make 5 more of these "stair corners", for a total of 6. These six toys can be turned sideways and upside down, and then pieced together to make an $n\times(n+1)\times(2n+1)$ rectangular solid.
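The six-staircase picture is exactly the closed form $1^2+\cdots+n^2 = \frac{n(n+1)(2n+1)}{6}$, which is easy to spot-check:

```python
# 6 staircases of volume (1^2 + ... + n^2) tile an n x (n+1) x (2n+1) box
for n in range(1, 50):
    assert sum(k*k for k in range(1, n + 1)) == n*(n + 1)*(2*n + 1)//6
print(sum(k*k for k in range(1, 5)))  # -> 30, and 4*5*9//6 == 30
```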
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Why is $a^n - b^n$ divisible by $a-b$? I did some mathematical induction problems on divisibility
*
*$9^n - 2^n$ is divisible by 7.
*$4^n - 1$ is divisible by 3.
*$9^n - 4^n$ is divisible by 5.
Can these be generalized as
$a^n - b^n = (a-b)N$, where $N$ is an integer?
But why is $a^n - b^n = (a-b)N$?
I also see that $6^n - 5n + 4$ is divisible by $5$, which is $6-5+4$, and $7^n + 3n + 8$ is divisible by $9$, which is $7+3+8=18=9\cdot2$.
Are they just a coincidence or is there a theory behind?
Is it about modular arithmetic?
| It can be generalized as:
$$a^n-b^n = (a-b)(a^{n-1}+a^{n-2}b+\cdots +b^{n-1})$$
If you are interested in a modular arithmetic point of view, since $a \equiv b \pmod{a-b},$ $a^n \equiv b^n \pmod{a-b}.$
Your last two examples are true because what you are essentially doing is plugging in $n=1.$
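The factorization is easy to verify numerically for arbitrary integers, including the three cases from the question:

```python
def cofactor(a, b, n):
    """N = a^(n-1) + a^(n-2) b + ... + b^(n-1) from the factorization above."""
    return sum(a**(n - 1 - k) * b**k for k in range(n))

for a, b, n in [(9, 2, 5), (4, 1, 7), (9, 4, 3), (10, 3, 6)]:
    assert a**n - b**n == (a - b) * cofactor(a, b, n)
    assert (a**n - b**n) % (a - b) == 0
print(cofactor(9, 2, 2))  # -> 11: indeed 9^2 - 2^2 = 77 = 7 * 11
```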
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45",
"answer_count": 8,
"answer_id": 3
} |
What is the $(\lg n)$-th root of $n$? I am looking for the value of the $(\lg n)$-th root of $n$, that is, $\sqrt[\lg n]{n}$. What is the answer and what log property should I use here? Please assume the base is $2$ and $n$ is a natural number.
| Take the $\log$ of this expression, to get
$$\log (n^{1/\log(n)}) = \frac{1}{\log(n)} \log(n) = 1.$$
This means that:
$$n^{1/\log(n)}=2$$
or generally, the base of your $\log$.
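A one-line numerical check (floating point, so expect tiny rounding error):

```python
import math

for n in [2, 10, 1000, 10**6]:
    print(n ** (1 / math.log2(n)))  # -> 2.0 (up to rounding) for every n > 1
```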
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Fréchet derivative I have been doing some self study in multivariable calculus.
I get the geometric idea behind looking at a derivative as a linear function, but analytically how does one prove this?
I mean if $f'(c)$ is the derivative of a function between two Euclidean spaces at a point $c$ in the domain... then is it possible to explicitly prove that $$f'(c)[ah_1+ bh_2] = af'(c)[h_1] + bf'(c)[h_2]\ ?$$ I tried but I guess I am missing something simple.
Also how does the expression
$f(c+h)-f(c)=f'(c)h + o(\|h\|)$
translate into saying that $f'(c)$ is linear?
| Assume $f\colon\mathbb{R}^n\to\mathbb{R}^m$ has continuous partial derivatives of the first order. Then it is a standard result that $f$ is Fréchet differentiable, i.e., $$f(x+h)=f(x)+f'(x)h+o(\lvert h\rvert)$$ where $f'(x)$ is a linear map $\mathbb{R}^n\to\mathbb{R}^m$. Moreover, the matrix of this map is what you would expect, consisting of the partial derivatives of the component functions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
how to evaluate the integral $\frac{64}{\pi^3} \int_0^\infty \frac{ (\ln x)^2 (15-2x)}{(x^4+1)(x^2+1)}\ dx$ How can one evaluate the following integral, possibly using real methods?
$$\frac{64}{\pi^3} \int_0^\infty \frac{ (\ln x)^2 (15-2x)}{(x^4+1)(x^2+1)}\ dx$$
| Start by evaluating, for $-1<\Re(s)<4$, the following integral:
$$
I(s) = \int_0^\infty \frac{x^s}{(1+x^2)(1+x^4)} \mathrm{d}x = \frac{1}{2} \int_0^\infty x^s \left( \frac{1}{1+x^2} + \frac{1-x^2}{1+x^4} \right) \mathrm{d}x
$$
When $-1<\Re(s)<1$ we can rewrite this as a sum of integrals:
$$
I(s) = \frac{\pi}{4} \frac{1}{\cos\left(\frac{\pi}{2} s\right)} + \frac{\pi}{8} \frac{1}{\sin\left( \frac{\pi}{4} (s+1)\right)} - \frac{\pi}{8} \frac{1}{\sin\left( \frac{\pi}{4} (s+3)\right)}
$$
Where the following result was used:
$$
\int_0^\infty \frac{x^s}{1+x^n} \mathrm{d} x = \frac{\pi}{n} \frac{1}{\sin\left( \frac{\pi}{n} (s+1)\right)}
$$
The expression given above remains true for all $-1<\Re(s)<4$ by the principle of analytic continuation. Now, we recover the original integral as:
$$
\frac{64}{\pi^3} \left( 15 I^{\prime\prime}(0) - 2 I^{\prime\prime}(1) \right)
$$
which gives the result.
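The analytic continuation is easy to validate numerically. The sketch below (mpmath; my own check, not part of the answer) uses the sine form $\int_0^\infty \frac{x^s}{1+x^n}\,dx = \frac{\pi}{n}\csc\!\big(\frac{\pi}{n}(s+1)\big)$ and stays away from $s=1$, where two terms have individually divergent but mutually canceling poles:

```python
from mpmath import mp, quad, diff, pi, cos, sin, log, inf

mp.dps = 30

def I_num(s):
    # direct numerical evaluation of I(s)
    return quad(lambda x: x**s / ((1 + x**2) * (1 + x**4)), [0, 1, inf])

def I_ana(s):
    # the closed form, with csc rather than sec in the n = 4 terms
    return (pi/4 / cos(pi*s/2)
            + pi/8 / sin(pi*(s + 1)/4)
            - pi/8 / sin(pi*(s + 3)/4))

for s in (0, 0.5, 2.5):  # avoid s = 1 (canceling poles)
    assert abs(I_num(s) - I_ana(s)) < 1e-15

# I''(0) is the moment against (log x)^2 that enters the final formula
lhs = quad(lambda x: log(x)**2 / ((1 + x**2) * (1 + x**4)), [0, 1, inf])
assert abs(lhs - diff(I_ana, 0, 2)) < 1e-10
print(I_ana(0))  # -> pi/4, since the two csc terms cancel at s = 0
```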
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\int_0^{\frac\pi2}\frac{\ln{(\sin x)}\ \ln{(\cos x})}{\tan x}\ dx$ I need to solve
$$
\int_0^{\Large\frac\pi2}\frac{\ln{(\sin x)}\ \ln{(\cos x})}{\tan x}\ dx
$$
I tried to use symmetric properties of the trigonometric functions as is commonly used to compute
$$
\int_0^{\Large\frac\pi2}\ln\sin x\ dx = -\frac{\pi}{2}\ln2
$$
but never succeeded. (see this for example)
| $$\begin{align*}
I &= \int_0^{\frac\pi2} \frac{\log(\cos(x)) \log(\sin(x))}{\tan(x)} \, dx \\[1ex]
&= \int_0^\infty \frac{\log\left(\frac1{\sqrt{x^2+1}}\right) \log\left(\frac x{\sqrt{x^2+1}}\right)}{x} \, \frac{dx}{1+x^2} \tag{1} \\[1ex]
&= \frac14 \int_0^\infty \frac{\log^2(x^2+1)-\log(x^2)\log(x^2+1)}{x} \, \frac{dx}{1+x^2} \\[1ex]
&= \frac18 \int_0^\infty \frac{\log^2(x+1)-\log(x)\log(x+1)}{x(1+x)} \, dx \tag{2} \\[1ex]
&= \frac18 \int_0^1 \left(\frac{\log^2(x+1)}{x} - \frac{\log(x)\log(x+1)}x\right) \, dx \tag{3} \\[1ex]
&= \frac14 \int_0^1 \frac{\log(x)\log(x+1)}x \, dx - \frac12 \int_0^1 \frac{\log(x+1)\log(x)}{x+1} \, dx \tag{4} \\[1ex]
&= -\frac14 \sum_{n=1}^\infty \frac{(-1)^n}n \int_0^1 x^{n-1} \log(x) \, dx + \frac12 \sum_{n=1}^\infty (-1)^n H_n \int_0^1 x^n \log(x) \, dx \tag{5} \\[1ex]
&= -\frac14 \sum_{n=1}^\infty \frac{(-1)^n}{n^3} + \frac12 \sum_{n=1}^\infty \frac{(-1)^n H_n}{(n+1)^2} \tag{4} \\[1ex]
&= \boxed{\frac{\zeta(3)}8} \tag{6}
\end{align*}$$
*
*$(1)$ : substitute $x\mapsto\arctan(x)$
*$(2)$ : substitute $x\mapsto\sqrt x$
*$(3)$ : split up the integral at $x=1$; substitute $x\mapsto\frac1x$ in the integral over $[1,\infty)$; partial fractions
*$(4)$ : integrate by parts
*$(5)$ : recall the power series $-\log(1-x)=\sum\limits_{n\ge1}\frac{x^n}n$ and $-\frac{\log(1-x)}{1-x}=\sum\limits_{n\ge1} H_n x^n$, where $H_n$ is the $n^{\rm th}$ harmonic number
*$(6)$ : see e.g. section $7$ for a contour-integration method to evaluating the Euler sum; I'm sure there are more efficient means
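A numerical cross-check of the boxed value (SciPy; the integrand tends to $0$ at both endpoints, so ordinary quadrature suffices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

f = lambda x: np.log(np.sin(x)) * np.log(np.cos(x)) / np.tan(x)
val, err = quad(f, 0, np.pi/2)
print(val, zeta(3)/8)  # both approximately 0.150257...
```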
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 5,
"answer_id": 4
} |
What am I losing if I decide to perform all math by computer? I solve mathematical problems every day, some by hand and some with the computer. I wonder: what will I lose if I start doing mathematical problems only by computer? I've read this text, and the author says that as technological progress happens, we should shift our focus to things that are more important. (You can see his suggestions on what is more important in the suggested text.)
I'm in search of something a little deeper than the conventional cataclysmic hypothesis ("What will you do when an EMP hits us?!"), and I am also considering an environment without exams, which excludes the "computers are not allowed in exams" answer.
This is of crucial importance to me because as I'm building some mathematical knowledge, I want to have a safe house in the future. I'm also open to references on this, I've asked something similar before and got a few indications.
| Perhaps you'd be interested in the TED Talk by Wolfram.
http://www.youtube.com/watch?v=60OVlfAUPJg
There are more materials on the Wolfram web site.
HTH, as this is an area I have interest in and have often wondered how to change the way we explore, learn and interact with Mathematics.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/188945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 6,
"answer_id": 1
} |
A couple of asymptotics exercises Recently I've been following the chapter on asymptotics in Concrete Mathematics. The subject matter of it is relatively new to me though and I'm having some difficulties dealing with asymptotic quantities, as I'm still trying to sort out the fundamentals.
What follows are a few sample exercises from the aforementioned book and my attempts at solving them:
9.2 Which function grows faster:
*
*$n^{\ln n}$ or $(\ln n)^n$
*$n^{\ln \ln \ln n}$ or $(\ln n)!$
*$(n!)!$ or $((n-1)!)!(n-1)!^{n!}$
*$F^2_{\lceil H_n \rceil}$ or $H_{F_n}$ (here $F_n$ denotes the $n$th element of the Fibonacci sequence, while $H_n$ is the $n$th harmonic number)
In order to formally justify that either function is little-$o$ of the other one, I'd show that their quotient converges to $0$ as $n$ approaches $\infty$. This has let me prove that $n^{\ln n} = e^{(\ln n)^2} \prec e^{n \ln c} \prec e^{n \ln\ln n} = (\ln n)^n$. I don't know how to approach the other pairs of functions though. Those involving the Pi function (I don't have any experience in dealing with it) or sequences don't seem to be as straightforward as the first example.
9.13 Evaluate $(n + 2 + O(n^{-1}))^n$ with relative error $O(n^{-1})$.
Ok, so I'm working towards expressing the formula in the form $f(n)(1 + O(n^{-1}))$.
\begin{equation} \begin{split}
(n + 2 + O(n^{-1}))^n &= (n(1 + 2n^{-1} + O(n^{-2})))^n = n^n(1 + 2n^{-1} + O(n^{-2}))^n \\
&= n^n \exp(n\ln(1 + 2n^{-1} + O(n^{-2}))) \\
&= n^n \exp(n(2n^{-1} + O(n^{-2}))) \hspace{12pt} \text{// by the $\ln(1+z)$ approx.}\\
&= n^n \exp(2 + O(n^{-1})) = n^n e^2 e^{O(n^{-1})} \\
&= n^n e^2 (1 + O(n^{-1})) \hspace{12pt} \text{// by the $e^x$ approx.}
\end{split} \end{equation}
Are all these transformations legal and correct? From what I know, we can apply an asymptotic expansion (as in $\ln(1 + z)$ in this case) when $z \rightarrow 0$ as $n \rightarrow \infty$. Am I right in concluding that an approximated expression with an absolute error that doesn't converge to $0$ cannot be expanded this way (e.g. $\ln(1 + n^{-1} + O(n))$)?
9.14 Prove that $(n + a)^{n + b} = n^{n + b}e^{a}(1 + a(b - \dfrac{a}{2})n^{-1} + O(n^{-2}))$.
I transform the LHS so as to make it easier to compare with what we have on the RHS.
\begin{equation} \begin{split}
(n+a)^{n+b} &= (n(1 + \dfrac{a}{n}))^{n+b} = n^{n+b}(1 + \dfrac{a}{n})^{n+b} = n^{n+b} \exp((n+b) \ln(1 + \dfrac{a}{n})) \\
&= n^{n+b}\exp((n+b)(\dfrac{a}{n} - \dfrac{a^2}{2n^2} + O(n^{-3}))) \hspace{12pt} \text{// by the $\ln(1+z)$ approx.} \\
&= n^{n+b} \exp(a - \dfrac{a^2}{2n} + O(n^{-2}) + \dfrac{ab}{n} - \dfrac{a^2b}{2n^2} + O(n^{-3})) \\
&= n^{n+b} e^a \exp(\dfrac{-a^2}{2n} + \dfrac{ab}{n} - \dfrac{a^2b}{2n^2} + O(n^{-2}))\\
&= \ldots
\end{split} \end{equation}
(First I'd like to make sure that the step from the 3rd to the 4th line is formally correct - is it enough to say that $O(n^{-3}) \subset O(n^{-2})$)? Anyway, I don't really know how to move from here. One attempt would be to use the $e^z$ expansion, but the result doesn't look very nice. Or I have made a mistake I can't yet see.
It looks like I've written a pretty long post. :) I know some of my questions may sound trivial (or there may be some basic mistakes above), but I'm trying to make head and tail of that whole asymptotics and need a little assistance :)
| 9.13: Right. The point is that $\ln(1 + z + o(z)) = z + o(z)$ is a statement that is true as $z \to 0$, not as $z \to \infty$. Similarly for any function $f$ that is differentiable at $a$, $f(a+z+o(z)) = f(a) + f'(a) z + o(z)$ as $z \to 0$.
9.14: You're fine up to $$ n^{n+b} \exp(a(1 - \dfrac{a}{2n} + \dfrac{b}{n} - \dfrac{ab}{2n^2} + O(n^{-2})))$$
except that you can absorb the $-ab/(2n^2)$ in the $O(n^{-2})$. Then that becomes
$$ \eqalign{n^{n+b}& e^a \exp\left(\frac{ab-a^2/2}{n} + O(n^{-2})\right)\cr
&= n^{n+b} e^a\left(1 + \frac{ab-a^2/2}{n} + O(n^{-2})\right)}$$
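To double-check 9.14 numerically (illustration only; $a$ and $b$ are arbitrary sample values, and logarithms are used to avoid overflow):

```python
import math

# Numerically check (n+a)^(n+b) = n^(n+b) e^a (1 + a(b - a/2)/n + O(1/n^2)):
# the scaled error (ratio - predicted) * n^2 should stay bounded as n grows.
a, b = 1.5, 0.7
for n in [10, 100, 1000]:
    log_lhs = (n + b) * math.log(n + a)          # log of (n+a)^(n+b)
    log_main = (n + b) * math.log(n) + a         # log of n^(n+b) e^a
    ratio = math.exp(log_lhs - log_main)
    predicted = 1 + a * (b - a / 2) / n
    print(n, ratio, predicted, (ratio - predicted) * n ** 2)
```

The last printed column stays bounded, which is exactly what the $O(n^{-2})$ relative error asserts.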
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How can one prove that $\lim\limits_{\theta\to 0} \frac{\sin\theta}{\theta}=1$
Possible Duplicate:
Proving $\lim_{\theta \to 0}{\frac{\sin \theta}{\theta}}=1$ using $\frac{1}{2}r^2(\theta-\sin \theta)$
How can one prove that $$\lim_{\theta \to 0} \frac{\sin\theta}{\theta}=1$$
Note that $\sin(0) = 0$, so the limit has the indeterminate form $0/0$.
| $\lim_{\theta \rightarrow 0} \frac{\sin(\theta)}{\theta} = \lim_{\theta \rightarrow 0} \cos(\theta) = 1$ by L'Hôpital's rule.
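Not a proof, but the limit is easy to see numerically, and the deviation shrinks like $\theta^2/6$, matching the expansion $\sin\theta = \theta - \theta^3/6 + \cdots$:

```python
import math

# sin(θ)/θ approaches 1 as θ → 0; the deviation behaves like θ²/6
for theta in [0.1, 0.01, 0.001, 1e-6]:
    ratio = math.sin(theta) / theta
    print(theta, ratio, (1 - ratio) / theta ** 2)   # last column ≈ 1/6
```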
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proving inequality $\frac{a}{b}+\frac{b}{c}+\frac{c}{a}+\frac{3\sqrt[3]{abc}}{a+b+c} \geq 4$ I started to study inequalities - I try to solve a lot of inequalities and read interesting solutions. I have a good pdf, which you can view from here. The inequality which I tried to solve and didn't manage to find a solution for can be found in that pdf, but I will write it here to be more explicit.
Exercise 1.3.4(a) Let $a,b,c$ be positive real numbers. Prove that
$$\displaystyle \frac{a}{b}+\frac{b}{c}+\frac{c}{a}+\frac{3\sqrt[3]{abc}}{a+b+c} \geq 4.$$
(b) For real numbers $a,b,c \gt0$ and $n \leq3$ prove that:
$$\displaystyle \frac{a}{b}+\frac{b}{c}+\frac{c}{a}+n\left(\frac{3\sqrt[3]{abc}}{a+b+c} \right)\geq 3+n.$$
| Following the motivation above and applying AM-GM three times:
\begin{align}
&\frac13\left(\frac ab+\frac ab+\frac bc\right)+\frac13\left(\frac bc+\frac bc+\frac ca\right)+\frac13\left(\frac ca+\frac ca+\frac ab\right)+\frac{3\sqrt[3]{abc}}{a+b+c}\\
&\ge \frac{a}{\sqrt[3]{abc}}+\frac{b}{\sqrt[3]{abc}}+\frac{c}{\sqrt[3]{abc}}+\frac{3\sqrt[3]{abc}}{a+b+c}\\
&=\frac{a+b+c}{3\sqrt[3]{abc}}+\frac{a+b+c}{3\sqrt[3]{abc}}+\frac{a+b+c}{3\sqrt[3]{abc}}+\frac{3\sqrt[3]{abc}}{a+b+c}\\
&\ge4\left(\left(\frac{a+b+c}{3\sqrt[3]{abc}}\right)^2\right)^\frac14\\
&\ge4.
\end{align}
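A quick random spot-check of part (a) (an illustration only, not a proof); equality holds at $a=b=c$:

```python
import random

def lhs(a, b, c):
    # left-hand side of the inequality in part (a)
    return a / b + b / c + c / a + 3 * (a * b * c) ** (1 / 3) / (a + b + c)

random.seed(0)
for _ in range(10_000):
    a, b, c = (random.uniform(0.01, 100) for _ in range(3))
    assert lhs(a, b, c) >= 4 - 1e-9   # allow tiny floating-point slack

print(lhs(1, 1, 1))  # equality case a = b = c: exactly 4
```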
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
How to construct a one-to-one correspondence between $\left [ 0,1 \right ]\bigcup \left [ 2,3 \right ]\bigcup \ldots$ and $\left [ 0,1 \right ]$ How can I construct a one-to-one correspondence between the set $\left [ 0,1 \right ]\bigcup \left [ 2,3 \right ]\bigcup\left [ 4,5 \right ] \ldots $ and the set $\left [ 0,1 \right ]$? I know that they have the same cardinality.
| Suppose that you had a bijection $f:[0,1]\to(0,1]$. Then you could decompose $[0,1]$ as
$$\begin{align*}
[0,1]&=\left[0,\frac12\right]\cup\left(\frac34,1\right]\cup\left(\frac58,\frac34\right]\cup\left(\frac9{16},\frac58\right]\cup\dots\\
&=\left[0,\frac12\right]\cup\bigcup_{n\ge 1}\left(\frac{2^n+1}{2^{n+1}},\frac{2^{n-1}+1}{2^n}\right]\;,
\end{align*}$$
map $[0,1]$ to $\left[0,\frac12\right]$ in the obvious way, and for $n\ge 1$ map $[2n,2n+1]$ to $\left(\frac{2^n+1}{2^{n+1}},\frac{2^{n-1}+1}{2^n}\right]$ using straightforward modifications of $f$ for each ‘piece’. I’ll leave that part to you unless you get stuck and ask me to expand; the hard part is finding $f$. Here’s one way:
$$f:[0,1]\to(0,1]:x\mapsto\begin{cases}
\frac12,&\text{if }x=0\\\\
\frac1{2^{n+1}},&\text{if }x=\frac1{2^n}\text{ for some }n\ge 1\\\\
x,&\text{otherwise}\;.
\end{cases}$$
In other words, $f$ is the identity map except on the set $\displaystyle{\{0\}\cup\left\{\frac1{2^n}:n\ge 1\right\}}$, which it shifts one place ‘forward’ like this:
$$0\overset{f}\mapsto\frac12\overset{f}\mapsto\frac14\overset{f}\mapsto\frac18\overset{f}\mapsto\frac1{16}\overset{f}\mapsto\dots\;.$$
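The map $f$ above is easy to implement directly; here is a sketch (the power-of-two test is exact for floats, since $1/2^n$ is exactly representable):

```python
def f(x):
    """The bijection [0,1] -> (0,1] described above: shift 0, 1/2, 1/4, ...
    one place forward and leave every other point fixed."""
    if x == 0:
        return 0.5
    inv = 1 / x
    # x = 1/2^n for some n >= 1 exactly when inv is an integer power of 2, >= 2
    if inv >= 2 and inv == int(inv) and int(inv) & (int(inv) - 1) == 0:
        return x / 2
    return x

print(f(0), f(0.5), f(0.25), f(0.3), f(1))  # 0.5 0.25 0.125 0.3 1
```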
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Closed form representation of an irrational number Can an arbitrary non-terminating and non-repeating decimal be represented in any other way? For example if I construct such a number like 0.1 01 001 0001 ... (which is irrational by definition), can it be represented in a closed form using algebraic operators? Can it have any other representation for that matter?
| Some irrational numbers can be expressed in a closed form using algebraic operations; $\sqrt7$ is a very simple example. Some can be expressed in other ways, like $\pi$ for which a multitude of formulas is known. Most real numbers however cannot be expressed (using a finite amount of information, but that is implicit in "expressing") at all, since there are just too many of them.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Consider the group $G=\mathbb Q/\mathbb Z$. For $n>0$, is there a cyclic subgroup of order $n$? Consider the group $G=\mathbb Q/\mathbb Z$. Let $n$ be a positive integer. Then is there a cyclic subgroup of order $n$?
*
*not necessarily.
*yes, a unique one.
*yes, but not necessarily a unique one.
*never
| Another hint to prove uniqueness: suppose $\overline {a/b}$ has order $n$, $0 < a < b$, and $\gcd(a,b)=1$; then $na=kb$ for some $k \in \mathbb{Z}$, hence $a$ divides $k$ and $n=bk'$. It follows that $\overline {a/b} = \overline {ak'/n}$, so ...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Prime elements in $\mathbb{Z}/n\mathbb{Z}$ I'm trying to determine the prime elements in the ring $\mathbb{Z}/n\mathbb{Z}$.
| Hint $\rm\:\ (p,n) = 1\!\!\!\overset{\rm Bezout\!\!}\iff p\:$ is a unit in $\rm\: \Bbb Z/n.\:$ Else $\rm\:\color{#c00}{p\:|\:n}\:$ therefore
$$\rm\: p\:|\: ab\ in\ \Bbb Z/n\:\Rightarrow\: \color{#c00}pq = ab + k\color{#c00}n\ in\ \Bbb Z\:\Rightarrow\:p\:|\:ab\ in\ \Bbb Z\:\Rightarrow\:p\:|\:a\,\ or\,\ p\:|\:b\ in\ \Bbb Z,\ so\ also\ in\ \Bbb Z/n$$
This is a special case of a general correspondence principle between prime ideals in a ring $R$ and its quotient rings.
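For a concrete small case, the definition of a prime element can be checked by brute force; a sketch for $n = 12$ (slow and purely illustrative, with divisibility in $\mathbb{Z}/n\mathbb{Z}$ meaning $\exists k:\ pk \equiv y \pmod n$):

```python
def is_prime_element(p, n):
    """Check the definition directly in Z/nZ: p is a nonzero non-unit
    such that p | ab implies p | a or p | b."""
    def divides(x, y):
        return any((x * k) % n == y % n for k in range(n))
    if p % n == 0 or any((p * k) % n == 1 for k in range(n)):
        return False          # zero or a unit
    return all(divides(p, a) or divides(p, b)
               for a in range(n) for b in range(n)
               if divides(p, (a * b) % n))

n = 12
primes_mod_12 = [p for p in range(1, n) if is_prime_element(p, n)]
print(primes_mod_12)  # the classes dividing 12 via the primes 2 and 3, and their associates
```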
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
If the sum of three primes is equal to a prime ... Does anyone know how to always get a prime from the sum of three primes?
For example: $5+7+11=23$, $17+29+43=89$, etc.
| It has long been conjectured that every odd number greater than $7$ is the sum of $3$ primes. It is known that every large enough odd number is the sum of three primes. I do not know of any extra information known if the target odd number is itself prime.
If your question has to do with an efficient algorithm for finding the $3$ primes, I know very little. But it turns out that for large odd $n$, there seems to be a large number of representations of $n$ as a sum of three primes, so an efficiently conducted search works reasonably well.
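For small odd numbers a brute-force search finds a representation almost instantly; a minimal sketch (trial-division primality, so only suitable for small $n$):

```python
def is_prime(k):
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def three_primes(n):
    """First triple of primes p <= q <= r with p + q + r = n, by brute force."""
    for p in range(2, n):
        if not is_prime(p):
            continue
        for q in range(p, n - p):
            r = n - p - q
            if r >= q and is_prime(q) and is_prime(r):
                return p, q, r

print(three_primes(23))  # (2, 2, 19)
print(three_primes(89))  # (3, 3, 83)
```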
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Can $ \int_0^{\pi/2} \ln ( \sin(x)) \; dx$ be evaluated with "complex method"? Can the following integral be evaluated using complex method by substituting $\sin(x) = {e^{ix}-e^{-ix} \over 2i}$?
$$ I=\int_0^{\pi/2} \ln ( \sin(x)) \; dx = - {\pi \ln(2) \over 2}$$
| Note first that
\begin{align*}
\int_0^{2\pi} \ln(\sin t)\,dt
&= 2\int_0^{\pi/2} \ln(\sin t)\,dt + 2\int_0^{\pi/2} \ln(-\sin t)\,dt \\
&= 2\int_0^{\pi/2} \ln(\sin t)\,dt + 2\int_0^{\pi/2} \Bigl(\ln(\sin t) + i\pi\Bigr)\,dt \\
&= 4I + \pi^2i
\end{align*}
and, using the substitution $z=e^{it}$
\begin{align*}
\int_0^{2\pi} \ln(\sin t)\,dt
&= \int_{|z|=1} \ln\left(\frac{z-1/z}{2i}\right)\,\frac{dz}{iz} \\
&= \int_{|z|=1} \frac{\ln(1-z^2)}{iz}\,dz - \int_{|z|=1} \frac{\ln z}{iz}\,dz + \int_{|z|=1} \frac{\ln(i/2)}{iz}\,dz.
\end{align*}
The first integral on the right is zero $\dots$ this is because $\ln(1-z^2)$ is analytic inside the unit disk and vanishes at $z=0$ so the singularity in the integrand there is removable (you need to shrink the circle of integration slightly to avoid the points $z=\pm1$ and then let the radius go to 1).
The second integral we do directly
$$ \int_{|z|=1} \frac{\ln z}{iz}\,dz = \left[\frac{1}{2i}\bigl(\ln z\bigr)^2\right]_{-1^+}^{-1^-} = \frac{1}{2i}\Bigl[(\pi i)^2 - (-\pi i)^2 \Bigr] = 0
$$
where $-1^\pm$ indicate points just above and below $-1$ and on either side of the branch cut of $\ln$ along the negative real axis.
The third integral is just $2\pi i\ln(i/2)/i = 2\pi\bigl(-\ln2 + \pi i/2\bigr)$.
Therefore $4I + \pi^2i = -2\pi\ln2 + \pi^2i$ and $I = -\frac{\pi}{2}\ln2$.
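The closed form is easy to confirm numerically; a midpoint-rule check (the logarithmic singularity at $0$ is integrable, so the midpoint rule still converges):

```python
import math

N = 200_000
h = (math.pi / 2) / N
# midpoint rule for I = ∫₀^{π/2} ln(sin x) dx
approx = h * sum(math.log(math.sin((k + 0.5) * h)) for k in range(N))
exact = -math.pi * math.log(2) / 2
print(approx, exact)  # both ≈ -1.0888
```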
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 0
} |
Intuition behind gradient VS curvature In Newton's method, one computes the gradient of a cost function (the 'slope') as well as its Hessian matrix (i.e., the second derivative of the cost function, or 'curvature'). I understand the intuition that the less 'curved' the cost landscape is at some specific weight, the bigger the step we want to take on the landscape terrain. (This is why it is somewhat superior to simple gradient ascent/descent.)
Here is the equation:
$$
\mathbf{w}_{n+1} = \mathbf{w}_n - \alpha\left[\frac{\partial^2 J(\mathbf{w})}{\partial \mathbf{w}^2}\right]^{-1}\frac{\partial J(\mathbf{w})}{\partial \mathbf{w}}
$$
What I am having a hard time visualizing is the intuition behind 'curvature'. I get that it is the 'curviness of a function', but isn't that the slope? Obviously it is not; that is what the gradient measures. So what is the intuition behind curvature?
Thanks!
| The gradient and the radius of curvature at a point x are both normal to the equipotential surface through x.
The radius of curvature is the radius of the largest sphere that is tangent to the surface at that point. It has nothing to do with the slope, but only with the equipotential surface. Curvature is the reciprocal of the radius of curvature. It is used rather than the radius of curvature because the radius of curvature is infinite when curvature is 0. 0 is more convenient to use in computations than infinity.
The gradient is a vector that points "up hill" and has magnitude equal to the slope.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Proving that one quantity *eventually* surpasses the other I want to prove the following; $$\forall t>0\ \ \forall m\in \mathbb{N} \ \ \exists N \in \mathbb{N} \ \ \forall n\geq N: \ (1+t)^n > n^m.$$
For readers who hate quantifiers, here's the version in words: "$(1+t)^n$ eventually surpasses $n^m$ for some $t>0$ and $m\in \mathbb{N}$".
Though this sounds simple enough, I couldn't manage to prove it using only very elementary statements about the real numbers, like the Archimedean property etc. (so no derivatives and so on involved).
My questions are:
1) Is there a simple (as described above) proof for this ? (If there isn't, I would also be happy with a proof using more of the analysis machinery.)
2) Is there a way to express the least $N$, which satisfies the above, in a closed form ?
3) I have somewhere heard of a theorem, that two real convex functions can have at most two intersection points (and the above statement seems closely related to this theorem), so I would be very happy, if someone could also give me a reference for this theorem.
| For your question 1:
(1) Show $2^n > n$ for all $n$ immediately by induction.
(2) Moreover, given $c$ clearly $2^n> c$ for almost all $n$.
Lemma:
For $c>0$ and $m\in \mathbb N$, we have $2^n > c n^m$ for almost all $n$.
Proof by induction on $m$.
The case $m=0$ is just (2).
Assume the statement is true for some $m$.
We want to show that $2^n> c n^{m+1}$ for almost all $n$.
By induction assumption, there is $N$ such that $k>N$ implies $2^k>2^{m+2} c k^m$.
If $n>2N$ is even, write $n=2k$ with $k>N$. Then $2^n = 2^k\cdot 2^k>2^{m+2}c k^m\cdot k = 2c\cdot (2k)^{m+1} > c n^{m+1}$ (we use (1) in this).
If $n>2N$ is odd, write $n=2k-1$ with $k>N$. Then $2^n = \frac12\cdot2^{2k}>c\cdot (2k)^{m+1}>c n^{m+1}$. $\square$
Finally, to change the base from 2 to $1+t$, select $r\ge\frac 1t$.
Then $(1+t)^r \ge 1+r t \ge 2$.
Hence if $n=k r - s$ with $0\le s < r$, then $(1+t)^n>\frac12 2^k> \frac12\cdot(2r^m) k^m=(rk)^m\ge n^m$ for almost all $n$.
--
For your question 3:
$x\mapsto x^4$ and $x\mapsto x^2$ are convex, but $x^4-x^2=0$ has three solutions: $-1$, $0$, and $1$.
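Regarding question 2: I don't know a closed form for the least $N$, but it is easy to locate numerically. The function $g(n) = n\ln(1+t) - m\ln n$ is convex and increasing for $n > m/\ln(1+t)$, so once $g$ is positive past its minimum it stays positive, and it suffices to scan up to such a point. A sketch:

```python
import math

def least_N(t, m):
    """Least N with (1+t)^n > n^m for all n >= N (numeric search, not closed form)."""
    log1t = math.log(1 + t)
    n = int(2 * m / log1t) + 10      # start past the minimum of g(n) = n log(1+t) - m log n
    while n * log1t <= m * math.log(n):
        n *= 2                        # ensure g(n) > 0; g is increasing from here on
    last_fail = 0
    for k in range(2, n + 1):
        if k * log1t <= m * math.log(k):
            last_fail = k
    return last_fail + 1

print(least_N(0.1, 2))  # 96: (1.1)^96 > 96^2 while (1.1)^95 <= 95^2
```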
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does $(\mathbf A+\epsilon \mathbf I)^{-1}$ always exist? Why? Does $(\mathbf A+\epsilon \mathbf I)^{-1}$ always exist, given that $\mathbf A$ is a square and positive (and possibly singular) matrix and $\epsilon$ is a small positive number? I want to use this to regularize a sample covariance matrix ($\mathbf A = \Sigma$) in practice, so that I can compute the inverse, which I need to calculate a Mahalanobis distance between two samples. In practice, my covariance matrix is often singular. I know the term $(\mathbf A+\epsilon \mathbf I)^{-1}$ often appears in the context of least squares problems involving Tikhonov regularization (ridge regression). However, I've never seen a statement, proof, or reference which says that the expression is always invertible.
Can any of you help me with a proof or reference?
| If $A$ is symmetric positive semidefinite, and $\epsilon > 0$, then $A + \epsilon I$ is symmetric positive definite (and hence invertible).
To see this, note that if $x \neq 0$ then
\begin{align}
x^T(A + \epsilon I) x &= x^T A x + \epsilon x^T x \\
&= x^T A x + \epsilon \|x\|^2 \\
& > 0.
\end{align}
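For the covariance use case, a quick numerical illustration (random sample data of my choosing; the sample covariance is rank-deficient because there are fewer samples than dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 10))      # 3 samples in 10 dimensions
S = np.cov(X, rowvar=False)           # 10x10 sample covariance, rank <= 2
print(np.linalg.matrix_rank(S))       # 2: singular, so inv(S) is unreliable

S_reg = S + 1e-6 * np.eye(10)         # A + εI is symmetric positive definite
S_inv = np.linalg.inv(S_reg)          # no LinAlgError; usable for Mahalanobis
print(np.allclose(S_reg @ S_inv, np.eye(10)))  # True
```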
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 1
} |
Hydrogen atom in partial differential equations For the hydrogen atom, if
$$\int |u|^2 ~dx = 1,$$
at $t = 0$,
I am trying to show that this is true at all later times.
What I need help is with differentiating the integral with respect to $t$, and taking care about the solution being complex valued. Except that my notation is getting me mixed up. I think this might get me there.
Following Ben's hint, here is what I have:
*
*Change $|u|^ 2$ into $u^* u$.
*Bring the derivative inside the integral.
*Apply the product rule.
*Apply the Schrödinger equation and try to show that the result is zero.
$$\int u^* u ~dx = 1 $$
and to bring the derivative inside the integral, isn't $dx$ already inside?
From Schrödinger equation I have:
$$-i\hslash u_t = \sum_{i=1}^n \frac{\hslash^2}{2m_i}(u_{x_i x_i} + u_{y_i y_i} + u_{z_i z_i}) + V(x_1,\ldots,z_n)u$$
for $n$ particles and the potential would $V$ depend on all of the $3n$ coordinates.
I'm not sure how to extend it to even 2 dimensions with the notation below
| For a beginning, define $3n$ orthonormal basis vectors with respect to an inner product "$\cdot$", $\hat x_j, \hat y_j, \hat z_j$ in a real vector space, and define
$$ \nabla = \sum_j \frac{\hbar}{\sqrt{2 m_j}}\left(\hat x_j \frac{\partial}{\partial x_j}+ \hat y_j \frac{\partial}{\partial y_j}+ \hat z_j\frac{\partial}{\partial z_j} \right)
$$
Then $$\nabla \cdot \nabla u = \sum_j \frac{\hbar^2}{2 m_j}\left(u_{x_i x_i} + u_{y_i y_i} + u_{z_i z_i} \right) $$
The Schrodinger equation and its complex conjugate, multiplied with $u^*$ and $u$ respectively gives
$$-i \hbar u_t u^* = (\nabla \cdot \nabla u ) u^* + V u u^* $$ and $$ i\hbar u_t^* u = (\nabla \cdot \nabla u^*)u + V u^* u $$. From this,
$$u_t u^* + u_t^* u = \frac{1}{i \hbar} \left( -(\nabla \cdot \nabla u)u^* + (\nabla \cdot \nabla u^*)u \right). $$
Right hand side can be written
$$\frac{1}{i \hbar} \nabla \cdot ((\nabla u)u^* + (\nabla u^*)u) =\nabla \cdot F(u).$$
When $W$ is a volume in $3n$-dimensional space independent of time, then
$$ \frac{\partial}{\partial t}\int_Wu^* u \;\; d w = \int_W u_t u^* + u u_t^* \;\; dw = \int_W \nabla \cdot F(u) \;\; dw .$$
From the divergence theorem, assuming that the surface $A$ of the volume $W$ is so far out that the wavefunction $u$ is zero there, and then also $F(u),$ the last expression equals
$$ \int_A F(u) \cdot da = 0.$$
That is, $ \int_W |u|^2 dw$ is independent of time.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Inverse Laplace transform - using the table I am trying to find the inverse Laplace transform $(g(t))$ of
$$ G(s) = \frac{2s}{(s+1)^2+4}$$
I know about the inverse transforms $e^{a t}\cos(\omega t)$ and $\mathrm{e}^{at}\sin(\omega t)$ however I am trying to get the inverse transforms without these, as these are not on the table of standard transforms for our course.
I was also attempting to use $\mathrm{e}^{at} f(t) \leftrightarrow F(s-a)$, but I'm not sure how to go about this, either.
Any help would be appreciated. :)
| If you have $(s+1)^2$ in the denominator then make it appear in the numerator by adding zero:
$$ G(s) = \frac{2s}{(s+1)^2+4} = \frac{2(s+1-1)}{(s+1)^2+4} = 2 \frac{s+1}{(s+1)^2+4}-\frac{2}{(s+1)^2+4} $$
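If you then apply the shift rule $\mathrm{e}^{at} f(t) \leftrightarrow F(s-a)$ mentioned in the question to each term, you get $g(t) = 2e^{-t}\cos 2t - e^{-t}\sin 2t$. As a numerical spot-check (no transform tables involved), the forward integral $\int_0^\infty g(t)e^{-st}\,dt$ should reproduce $G(s)$:

```python
import math

def g(t):
    # candidate inverse transform obtained from the partial fractions above
    return math.exp(-t) * (2 * math.cos(2 * t) - math.sin(2 * t))

def laplace_g(s, T=30.0, N=100_000):
    # midpoint rule on [0, T]; the tail beyond T is negligible (decay e^{-(s+1)t})
    h = T / N
    return h * sum(g((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(N))

for s in (1.0, 2.0):
    print(s, laplace_g(s), 2 * s / ((s + 1) ** 2 + 4))  # the two columns agree
```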
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Plane intersection by a mapping and different cases In $\mathbb{R}^3$ say we have the 2 planes $A=\{z=1\}$ and $B=\{x=1\}$. A line through 0 meeting $A$ at $(x,y,1)$ meets $B$ at $(1,y/x,1/x).$ Consider the map $\phi: A \rightarrow B$ defined by $(x,y) \mapsto (y' = y/x, z' = 1/x)$.
I'm trying to figure out the image under $\phi$ of
1) the line $ax = y + b$; the pencil of parallel lines $ax = y + b$ with fixed $a$ and variable $b$;
2) circles $(x-1)^2 + y^2 = c$ for variable $c,$ distinguishing the 3 cases $c>1, c = 1,$ and $c< 1$.
and to imagine the above as a perspective drawing by an artist sitting at $(0,0,0)$ and drawing figures from the plane $A$ on the plane $B$.
What happens to the points of the 2 planes where $\phi$ and $\phi^{-1}$ are undefined? Thanks!
| To answer your last question first, notice that a line through $0$ meeting $A$ at $(0,y,1)$ does not meet $B$ at all. This explains why $\phi$ is undefined in such cases. Correspondingly, pick any point on $B$ with $z = 0$ and any line through the origin and that point is wholly within the $xz$-plane, so will never hit $x = 1$, so is not the projection of any point on $A$, so $\phi^{-1}$ is undefined.
To understand how lines on $A$ work, think of lines as the intersection of planes. More specifically, for each line $\lambda$ in $A$ there is a unique plane $C$ through the origin such that $\lambda$ is the intersection of $A$ with $C$. Then the image under $\phi$ must be the intersection of $C$ with $B$ (since any "projection ray" from the origin through $\lambda$ lies in the plane $C$). Now, this intersection will be a line in $B$ (assuming the line was not $\{x = 0, z = 1\}$, in which case there is no intersection). So lines project to lines. Once we have that fact, it's easy to compute which line it is: just project any two points of $\lambda$, and join them up. If you really need an explicit formula, just ask.
Circles are a little trickier. Substitute $x=1/z'$ and $y=y'/z'$ into the equation and (dropping the primes from here on) get: \[\frac{1}{z^2}(y^2 + (1-z)^2)=c.\] What does this actually mean? Well, let's rearrange a little: \[\begin{align}
\frac{1}{z^2}(y^2 + 1 - 2z + z^2) &= c \\
y^2 + 1 - 2z + z^2 &= cz^2 \\
y^2 - 2z + (1-c)z^2 &= -1
\end{align}\]. At this point I want to divide by $1-c$ to complete the square, so I'm going to have to distinguish the $c=1$ case. In that case, we get \[\frac{1}{2}(y^2 + 1)=z\], which is a parabola. Otherwise: \[\begin{align}
y^2 + (1-c)(z^2 - \textstyle{\frac{2}{1-c}}z) &= -1 \\
y^2 + (1-c)((z-\textstyle{\frac{1}{1-c}})^2 - \textstyle{\frac{1}{(1-c)^2}}) &= -1 \\
y^2 + (1-c)(z-\textstyle{\frac{1}{1-c}})^2 &= \textstyle{\frac{1}{1-c}} - 1 \\
y^2 + (1-c)(z-\textstyle{\frac{1}{1-c}})^2 &= \textstyle{\frac{c}{1-c}}
\end{align}\]. For $c < 1$, this is an ellipse, while for $c > 1$, it is a hyperbola.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
solving for a coefficient term of a factored polynomial.
Given: the coefficient of $x^2$ in the expansion of $(1+2x+ax^2)(2-x)^6$ is $48,$ find the value of the constant $a.$
I expanded it and got
$64-64\,x-144\,{x}^{2}+320\,{x}^{3}-260\,{x}^{4}+108\,{x}^{5}-23\,{x}^{6}+2\,{x}^{7}+64\,a{x}^{2}-192\,a{x}^{3}+240\,a{x}^{4}-160\,a{x}^{5}+60\,a{x}^{6}-12\,a{x}^{7}+a{x}^{8}$
because of the given info, $48\,x^2=64\,a\,x^2-144\,x^2$; solving for $a$ gives $a=3$.
Correct?
P.S. is there an easier method other than expanding the terms?
I have tried using the binomial expansion; however, one still needs to multiply the terms and expand $(2-x)^6$, which is not very fast.
| All you need of the expansion of $(2-x)^6$ is the first three terms, $$(2-x)^6=2^6-(6)2^5x+(15)2^4x^2+\cdots=64-192x+240x^2+\cdots$$ Then multiplying by $1+2x+ax^2$ you can pick out the coefficient of $x^2$ as $$(1)(240)-(2)(192)+64a$$
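The same computation can be verified by multiplying coefficient lists directly (lowest degree first); only the first three coefficients of $(2-x)^6$ and `result[2]` matter:

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

p = [1]
for _ in range(6):
    p = poly_mul(p, [2, -1])          # build (2 - x)^6

print(p[:3])                          # [64, -192, 240]: the only terms needed
a = 3
result = poly_mul([1, 2, a], p)       # (1 + 2x + a x^2)(2 - x)^6
print(result[2])                      # 48, as required
```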
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/189990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
If a stochastic process is path-wise continuous then its filtration is right-continuous? Let $(\Omega,\mathcal{F})$ be a measurable space, carrying a stochastic process $X=X_{t≥0}$ with state space $(\mathbb{R},\mathcal{B}(\mathbb{R}))$. Let $\mathcal{F}_t = \sigma(X_s:s≤t)$. Assume that trajectories $t\mapsto X_t(w)$ are continuous for all $w\in\Omega$. Prove or disprove : $(\mathcal{F}_t)$ is right continuous.
| The filtration $(\mathcal F_t)_{t\geqslant0}$ may be discontinuous. For an example, assume that $X_t=t\cdot U$ for some random variable $U$. Then $t\mapsto X_t$ is almost surely continuous, $\mathcal F_t=\sigma(U)$ for every $t\gt0$, and $\mathcal F_0=\{\varnothing,\Omega\}$, hence $\mathcal F_{0+}\ne\mathcal F_0$ as soon as $U$ is not almost surely constant.
The filtration $(\mathcal F_t)_{t\geqslant0}$ may be continuous. For an example, assume that $X_t=\mathrm e^t\cdot U$ for some random variable $U$. Then $t\mapsto X_t$ is almost surely continuous, $\mathcal F_0=\mathcal F_t=\sigma(U)$ for every $t\gt0$, hence $\mathcal F_{0+}=\mathcal F_0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to check if a point is inside a rectangle? There is a point $(x,y)$, and a rectangle $a(x_1,y_1),b(x_2,y_2),c(x_3,y_3),d(x_4,y_4)$, how can one check if the point inside the rectangle?
| My first thought was to divide the rectangle into two triangles and apply some optimized method twice. It seemed more efficient to me than @lab bhattacharjee's answer.
http://www.blackpawn.com/texts/pointinpoly/default.html
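A minimal sketch of that idea, using the cross-product ("same side") test from the linked article; the vertices $a,b,c,d$ are assumed to be given in order around the rectangle, and points on an edge count as inside:

```python
def cross(o, p, q):
    # z-component of (p - o) x (q - o): twice the signed area of triangle o,p,q
    return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

def in_triangle(pt, a, b, c):
    d1, d2, d3 = cross(pt, a, b), cross(pt, b, c), cross(pt, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)   # all areas share a sign (0 = on an edge)

def in_rectangle(pt, a, b, c, d):
    # split the rectangle a,b,c,d (vertices in order) into two triangles
    return in_triangle(pt, a, b, c) or in_triangle(pt, a, c, d)

r = [(0, 0), (4, 0), (4, 2), (0, 2)]
print(in_rectangle((1, 1), *r), in_rectangle((5, 1), *r))  # True False
```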
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "229",
"answer_count": 24,
"answer_id": 9
} |
Lemma from PDE book This is Lemma 6.1 from Gilbarg - Trudinger. It states "Let $\textbf{P}$ be a constant matrix which defines a nonsingular linear transformation $y=x\textbf{P}$ from $\mathbb{R}^n \rightarrow \mathbb{R}^n$. Letting $u(x) \rightarrow \tilde{u}(y)$ under this transformation one verifies easily that $A^{ij}D_{ij}u(x) = \tilde{A}^{ij}D_{ij}\tilde{u}(y)$, where $\tilde{\textbf{A}} = \textbf{P}^t\textbf{A}\textbf{P}$."
Here, we are using the summation convention, and $A^{ij}$ is a constant matrix with $A^{ij} = A^{ji}$.
I have no idea where this is coming from. It says it's an easy verification... but even trying to do this with 2 by 2 matrices gives a huge mess. Second, what exactly does $u(x) \rightarrow \tilde{u}(y)$ mean? Is $\tilde{u}$ the same function but just a different variable? Why not call it $u(y)$ then? Any help is appreciated.
| The idea is to perform a "rotation and stretching" $(P)$ of coordinates which transforms $u$ (defined on $\Omega$) into a function $\tilde{u}$ defined on $P(\Omega)$ so that $\tilde{u}$ satisfies a nice equation.
Computationally, we have $u(x) = \tilde{u}(Px)$. The general formula (by the Chain Rule) for $D^2u$ is
$$D^2u(x) = P^T D^2\tilde{u}(Px) P.$$
Thus, we have
$$0 = tr(AD^2u(x)) = tr(AP^T D^2\tilde{u}(Px) P) = tr(PAP^T D^2\tilde{u}(Px)).$$
For a simple example try $u_{xx} + u_{xy} + u_{yy} = 0$, say defined on $B_1$. By rotating coordinates to the $(1,1)$ and $(1,-1)$ directions we can write the equation without mixed derivatives, and by stretching in one direction and squeezing in the other we obtain harmonic $\tilde{u}$ defined on some rotated ellipse.
The idea of changing coordinates is also very useful in scaling arguments for PDE, where we have some estimate in $B_1$ which we would like to apply at all scales.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
"Show" that the direction cosines of a vector satisfies... "Show" that the direction cosines of a vector satisfies
$$\cos^2 \alpha + \cos^2 \beta + \cos^2 \gamma = 1$$
I am stumped on these things:
*
*"SHOW" that the direction cosines corresponds to a given vector to satisfy the relation above. ----> How do you "show" this? What does this mean? Does this mean to use the direction cosines of a vector?
*
*I'm sure this is a proof but I don't know what the end result would look like or better, what I am expected to learn from this proof.
I am not looking for a mere answer but really an in-depth explanation of the problem.
Greatly appreciated. :)
| If $$x^2+y^2+z^2=r^2,$$ then $$\frac{x^2}{r^2}+\frac{y^2}{r^2}+\frac{z^2}{r^2}=1,$$ or $$\cos^2 \alpha + \cos^2 \beta + \cos^2 \gamma=1.$$
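A quick numerical illustration with the concrete vector $(x,y,z) = (1,2,2)$, for which $r = 3$:

```python
import math

x, y, z = 1.0, 2.0, 2.0
r = math.sqrt(x * x + y * y + z * z)           # r = 3
cos_a, cos_b, cos_g = x / r, y / r, z / r      # the three direction cosines
print(cos_a ** 2 + cos_b ** 2 + cos_g ** 2)    # ≈ 1.0
```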
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Extension of the Birthday Problem How do you find the expected number of people (or the expected number of pairs) among the n that share their birthday within r days of each other?
For the regular birthday problem, it's $n\left(1-(1-1/N)^{n-1}\right)$ expected people or ${n\left(1-(1-1/N)^{n-1}\right) \choose 2}$ pairs (see https://math.stackexchange.com/a/35798/39038). In this link, is it correct to derive the expected number of people among the n that share their bday within r days of each other using the same steps, but just with replacing $\frac{1}{N}$ with $\frac{1+2r}{N}$ ? In other words, is $$n\left(1-(1-(2r+1)/N)^{n-1}\right) \choose 2$$ correct for the expected number of pairs?
| Expectation is linear, so if you can calculate the probability that a given person shares his birthday with anyone else (within $\pm r$ days, in your case), then you can multiply that by $n$ and find the expected number of such people. The probability that no-one else has their birthday within the excluded $2r+1$ days is given by
$$
\left(1-\frac{2r+1}{N}\right)^{n-1},
$$
and so the expected number of people with at least one birthday partner is
$$
n - n\left(1-\frac{2r+1}{N}\right)^{n-1}
$$
as you stated. However, the expected number of pairs is not the same as the number of pairs among the expected number of people: for one thing, not every pair of people with birthday partners are birthday partners with each other; and even if they were, expectation is not distributive. The exact expected number of pairs is just $n(n-1)/2$ times the probability that a given pair is partnered, which is $(2r+1)/N$. So the expected number of pairs is
$$
\frac{n(n-1)(2r+1)}{2N}.
$$
Both of these expressions are of order $1$ in the same regime, viz., $n \sim \sqrt {N/r}$.
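A Monte Carlo spot-check of the exact pair formula, treating the year as circular with $N = 365$ (an illustration only; uniform birthdays, no leap years):

```python
import random

def near_pairs(bdays, r, N):
    # count pairs whose circular distance is at most r days
    c = 0
    for i in range(len(bdays)):
        for j in range(i + 1, len(bdays)):
            d = abs(bdays[i] - bdays[j])
            if min(d, N - d) <= r:
                c += 1
    return c

random.seed(1)
n, r, N, trials = 30, 3, 365, 5_000
avg = sum(near_pairs([random.randrange(N) for _ in range(n)], r, N)
          for _ in range(trials)) / trials
print(avg, n * (n - 1) * (2 * r + 1) / (2 * N))  # both ≈ 8.34
```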
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to prove that the Torus and a Ball have the same Cardinality How to prove that the Torus and a Ball have the same Cardinality ?
The proof is using the Cantor Berenstein Theorem.
I know that they are subsets of $\mathbb{R}^{3}$, so I can write $\leq \aleph$, but I do not know how to prove $\geqslant \aleph$.
Thanks
| I'm assuming that you're talking about solid balls and tori in $\mathbb R^3$.
To prove that these sets are equipollent using the Cantor–Bernstein–Schröder theorem, all you need to do is to show that:
*
*there exists an injection $f$ from the ball $B$ into the torus $T$, and
*there exists an injection $g$ from the torus $T$ into the ball $B$.
The CBS theorem then says that there exists a bijection between $B$ and $T$.
If you're given explicit geometric definitions of $B$ and $T$, you can use these directly to construct invertible affine maps $f$ and $g$ such that $f(B) \subset T$ and $g(T) \subset B$.
Another, more general way to do this is to note that both of these subsets of $\mathbb R^3$ are bounded and have non-empty interior. Thus there exist open balls $I_B, O_B, I_T, O_T$ such that $I_B \subset B \subset O_B$ and $I_T \subset T \subset O_T$. Then choosing $f$ and $g$ (again as affine maps) such that $f(O_B) = I_T$ and $g(O_T) = I_B$ will do the trick.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Is it morally right and pedagogically right to google answers to homework? This is a soft question that I have been struggling with lately.
My professor sets tough questions for homework (around 10 per week).
The difficulty is such that if I attempt the questions entirely on my own, I usually get stuck for over 2 hours per question, with no guarantee of succeeding.
Some of the questions are in fact theorems proved by famous mathematicians like Gauss, or results from research papers.
As much as I dislike to search for answers on the internet, I am often forced to by time constraints if I even expect to complete the homework in time for submission. (I am taking 2 other modules and writing an undergraduate thesis too).
My school does not have explicit rules against googling for homework, so I guess it is not a legal issue.
However, it often goes against my conscience, and I wonder if this practice is counterproductive for my mathematical development.
Any suggestions and experience dealing with this?
| First of all, relax and take things easier. If some problems are hard
and you cannot solve them, then I see no problem with asking for help, as long as you
want to understand and learn the ways of tackling the problems, not only to hand in a solution. The real fact is that some teachers do really poorly in their classes and expect a lot from their students. I don't know how things are in your
case, but if you like mathematics you can learn a lot on your own. Moreover, if you study the problems and the solutions posted on this site you'll learn a lot!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "69",
"answer_count": 12,
"answer_id": 2
} |
If $G$ is a finite group and $g \in G$, then $O(\langle g\rangle)$ is a divisor of $O(G)$ Does this result mean:
*
*Given any finite group, if we are able to find a cyclic subgroup of it, then the order of the cyclic subgroup will be a divisor of the order of the original group.
If I am interpreting it correctly, can someone suggest an example highlighting this? And also help me understand the possible practical uses of this result. It surely looks interesting.
Thanks
Soham
| An application of this result is the formula
$$
\sum_{d\mid n} \phi(d) = n
$$
which can be established by considering the cyclic group of order $n$: every element in this group has an order which is a divisor of $n$, and for every divisor $d$ of $n$ there are exactly $\phi(d)$ elements of order $d$.
A consequence of this formula is that finite multiplicative subgroups of a field are cyclic. In particular, the multiplicative group of a finite field is cyclic.
A simpler but very important consequence of the theorem is that groups of prime order are cyclic.
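Both facts in the first paragraph can be checked directly for small $n$; here is a stdlib-only sketch (my own illustration) that counts elements of each order in $\Bbb Z/n\Bbb Z$:

```python
from math import gcd

def phi(n):
    """Euler's totient, straight from the definition."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def order_counts(n):
    """Number of elements of each (additive) order in the cyclic group Z/nZ.
    The order of a is n / gcd(a, n)."""
    counts = {}
    for a in range(1, n + 1):
        d = n // gcd(a, n)
        counts[d] = counts.get(d, 0) + 1
    return counts

for n in range(1, 50):
    # sum of phi(d) over divisors d of n equals n ...
    assert sum(phi(d) for d in range(1, n + 1) if n % d == 0) == n
    # ... and Z/nZ has exactly phi(d) elements of order d
    assert all(v == phi(d) for d, v in order_counts(n).items())
```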
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is $x^y$ - $a^b$ divisible by $z$, where $y$ is large? The exact problem I'm looking at is:
Is $4^{1536} - 9^{4824}$ divisible by $35$?
But in general, how do you determine divisibility if the exponents are large?
| I use $\equiv$ to denote equivalence mod $35$; just keep tossing out $35$s. The key observation is that
$$4^6 = 4096 = 117\cdot 35 + 1 \equiv 1 \qquad\text{and}\qquad 9^6 = 531441 = 15184\cdot 35 + 1 \equiv 1.$$
Since $1536 = 6\cdot 256$ and $4824 = 6\cdot 804$,
$$4^{1536} = (4^6)^{256} \equiv 1 \qquad\text{and}\qquad 9^{4824} = (9^6)^{804} \equiv 1,$$
so $4^{1536} - 9^{4824} \equiv 0$: the difference is divisible by $35$. In general, reduce the base mod $z$, find its multiplicative order (a divisor of $\varphi(z)$; here $\varphi(35) = 24$), and reduce the exponent modulo that order.
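A quick machine check (my own addition): Python's three-argument `pow` performs modular exponentiation by repeated squaring, so the huge exponents cost nothing.

```python
r1 = pow(4, 1536, 35)
r2 = pow(9, 4824, 35)
divisible = (r1 - r2) % 35 == 0

# Why both residues come out the same: 4 and 9 each satisfy b**6 ≡ 1 (mod 35),
# and 6 divides both exponents.
assert pow(4, 6, 35) == 1 and pow(9, 6, 35) == 1
assert 1536 % 6 == 0 and 4824 % 6 == 0
```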
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
In Group theory proofs what is meant by "well defined" What is exactly meant or required for a mapping to be well defined? I was reading the First Isomorphism theorem (link), and the first thing the proof does is define a map and find out if it's well defined.
Intuitively it makes sense, but what are the requirements for a map to be well defined? For example in the link given, I understand they show one-one relationship as being well defined and later on they again prove it's injective.
What have I understood wrongly?
| Say you have an equivalence relation $\equiv$ which defines equivalence classes $[a]=\{b | b\equiv a\}$.
A function $F$ defined on elements will be well-defined as a function on the equivalence classes if $F(a)=F(b)$ whenever $a\equiv b$.
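A toy illustration of this criterion (my own example, not from the post): on $\Bbb Z/6\Bbb Z$, the rule $[a]\mapsto a \bmod 3$ respects the equivalence, while $[a]\mapsto a \bmod 4$ does not, so only the first is well defined on the classes.

```python
def respects_classes(f, n, reps=4):
    """Finite spot-check that a -> f(a) is constant on residue classes mod n,
    i.e. that f(a) == f(a + k*n) for small k (necessary for well-definedness)."""
    return all(f(a) == f(a + k * n) for a in range(n) for k in range(1, reps))

# a mod 3 is constant on classes mod 6 (because 3 divides 6) ...
assert respects_classes(lambda a: a % 3, 6)
# ... but a mod 4 is not: 1 and 7 represent the same class mod 6,
# yet 1 % 4 == 1 while 7 % 4 == 3.
assert not respects_classes(lambda a: a % 4, 6)
```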
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 6,
"answer_id": 0
} |
Finding disjoint neighborhoods of two points in $\Bbb R$ Let $x$ and $y$ be distinct real numbers. How do you prove that there exists a neighborhood $P$ of $x$ and a neighborhood $Q$ of $y$ such that $P \cap Q = \emptyset$?
| Without loss of generality, assume $x < y$. Then
$x\in(-\infty, (x + y)/2)$ and $y \in ((x+y)/2,\infty)$, and these two open intervals are disjoint.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Needing to find a variance from a data set below I am trying to get the variance of the data set below. Applying $\frac12(2.5)^2 + 1(3)^2 + 0.5(4.5)^2 + 1(5)^2 + 1(6)^2 - (17.5)^2$ gives me a negative and clearly wrong number.
Please use the data from the link below; the problem is as given, and the sample space is what I also got. But I got a mean of 17.5 and am not sure how to do the variance.
http://answers.yahoo.com/question/index?qid=20100120103118AAld5Yx
| The problem is to consider all possible samples of size 2 from the set of values 1,2,2,4,8 and find the distribution, mean and variance of the number of values >3. The set of possible samples are:
[1,2], [1,2], [1,4], [1,8], [2,2], [2,4], [2,8], [2,4], [2,8] and [4,8]. These 10 samples are equally likely. The values of the random variable $X$ = # of cases > 3 are
0, 0, 1, 1, 0, 1, 1, 1, 1, 2. The mean is $0.8$ and the variance is
$$\frac{3(0-0.8)^2+6(1-0.8)^2+(2-0.8)^2}{10} = \frac{3(0.64)+6(0.04)+1.44}{10} = \frac{1.92+0.24+1.44}{10} = \frac{3.60}{10} = 0.36.$$
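The enumeration is small enough to verify mechanically; here is a sketch (my own) that treats the two 2's as distinct items by sampling index pairs, then recomputes the mean and variance:

```python
from itertools import combinations

data = [1, 2, 2, 4, 8]
pairs = list(combinations(range(len(data)), 2))      # all C(5,2) = 10 samples
X = [sum(1 for i in p if data[i] > 3) for p in pairs]

mean = sum(X) / len(X)
var = sum((x - mean) ** 2 for x in X) / len(X)
```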
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What's the answer of (10+13) ≡? As to the modulus operation, I have only seen this form:
(x + y) mod z ≡ K
So I can't understand the question. By the way, the answer choices are:
a) 8 (mod 12)
b) 9 (mod 12)
c) 10 (mod 12)
d) 11 (mod 12)
e) None of the above
| It is d).
$23 \equiv 11 \text{ mod } 12$ since $(12)(1) + 11 = 23$. In other words, the remainder upon division by $12$ is $11$.
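For a machine check, Python's built-in `divmod` returns the quotient and remainder at once (a trivial sketch of my own):

```python
q, r = divmod(10 + 13, 12)
assert (q, r) == (1, 11)        # 23 = 12*1 + 11, so 23 ≡ 11 (mod 12)
assert (10 + 13) % 12 == 11
```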
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Finding $E(\bar{X^2}|\bar{X})$
Possible Duplicate:
Finding $E\Bigl(\overline{Y^2}\Bigm|\overline{Y\vphantom{Y^2}}\Bigr)$ by Basu's theorem?
Suppose $X_1,\ldots,X_n$ are a random sample of $N(\theta,1)$. if $\bar{X^2}=\displaystyle\frac{1}{n}\sum_{i=1}^n X_i^2$, how can I find $E(\bar{X^2}|\bar{X})$?
| It's late, so I'll just do the case $\theta = 0$. Thus $X = (X_1,\ldots,X_n)^T$ has a multivariate normal distribution with mean $0$ and covariance matrix $I$.
Let $U$ be an $n \times n$ orthogonal matrix whose first row is $(1,\ldots,1)/\sqrt{n}$.
Then $W = U X$ also has a multivariate normal distribution with mean $0$ and covariance matrix $I$. Note that $W_1 = \frac{1}{\sqrt{n}} \sum_{i=1}^n X_i = \sqrt{n} \overline{X}$, while
$$\overline{X^2} = \frac{1}{n}\sum_{i=1}^n X_i^2 = \frac{1}{n} X^T X = \frac{1}{n} W^T W = \frac{1}{n}\sum_{i=1}^n W_i^2$$ Since $W_i$ and $W_j$ are independent for $i \ne j$,
$$ E [\overline{X^2} | \overline{X}] =
\frac{1}{n} E \left[ \sum_{i=1}^n W_i^2 | W_1 \right] = \frac{n-1}{n} E[W_i^2] + \frac{1}{n} W_1^2 = \frac{n-1}{n} + \overline{X}^2$$
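The identity says that $\overline{X^2}-\overline{X}^2 = \frac1n\sum_i (X_i-\overline X)^2$ has conditional mean $\frac{n-1}{n}$ no matter what $\overline X$ is; equivalently, it is uncorrelated with $\overline X$. A Monte Carlo sketch (my own, for the $\theta=0$ case worked above):

```python
import random

rng = random.Random(0)
n, trials = 5, 40000
gaps, xbars = [], []
for _ in range(trials):
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    x2bar = sum(x * x for x in xs) / n
    xbars.append(xbar)
    gaps.append(x2bar - xbar ** 2)       # = (1/n) * sum (X_i - Xbar)^2

mean_gap = sum(gaps) / trials            # should be close to (n-1)/n = 0.8

mg = mean_gap
mx = sum(xbars) / trials
cov = sum((g - mg) * (x - mx) for g, x in zip(gaps, xbars)) / trials
corr = cov / ((sum((g - mg) ** 2 for g in gaps) / trials) ** 0.5
              * (sum((x - mx) ** 2 for x in xbars) / trials) ** 0.5)
```

The near-zero correlation is the numerical shadow of the independence argument (the $W_i$, $i\ge 2$, are independent of $W_1$).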
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Partitions of binary numbers into binary numbers with fixed digits? If we are to have (two, for example) binary numbers, such that their sum is $100111010_2$, and given that the first number has 5 ones, and the second number has 3 ones, can I find the numbers that when added together gave $100111010_2$ (the two numbers are $11010110_2$ and $01100100_2$, by the way)?
My theory is that, the number of partitions of a binary number $n_2$ into $k$ binary parts with the given constraint of ones present in each, there can at most be $k$ ways in which $n_2$ can be partitioned (without order, not necessarily distinct). Anyone here to prove (or disprove) this?
| Not true. For instance, suppose you want to write the number with $kn$ binary digits that are all one as a sum of $k$ binary numbers each having $n$ ones. For each digit you can let all the summands except one have a zero, and you can choose the nonzero summand independently for each digit. There are $k^{kn}$ ways to make these choices. Even if you treat permutations of the summands as equivalent, there are still at least $k^{kn}/k!$ distinct ways to form the sum, which is much greater than $k$ (exponentially so as $n$ increases).
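Here is a small explicit instance (my own) confirming the disproof: already for $k=2$ and the four-digit target $1111_2 = 15$, there are three unordered ways, more than $k$:

```python
def popcount(x):
    return bin(x).count("1")

def partitions(target, ones_a, ones_b):
    """Unordered pairs (a, b) with a <= b, a + b = target and the
    prescribed numbers of binary ones."""
    return sorted((a, target - a) for a in range(target + 1)
                  if a <= target - a
                  and popcount(a) == ones_a
                  and popcount(target - a) == ones_b)

ways = partitions(0b1111, 2, 2)   # 15 as a sum of two numbers with two ones each
```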
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/190968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How do I evaluate $\int \limits_{-\infty}^{a} e^{−t^2}dt$? I know that
$$I \equiv \int \limits_{-\infty}^\infty e^{−t^2} \, dt=\sqrt{\pi},\text{ and }\int \limits_{-\infty}^0 e^{−t^2} \, dt=\frac{\sqrt{\pi}}{2}.$$
However, I don't understand if (or how) I can find a similar solution for $\int \limits_{-\infty}^{a}e^{−t^2}dt, a \neq 0$, given that the error function actually does not yield closed form solutions.
Any help is greatly appreciated!
| Since $e^{-t^2}$ is an even function, and since you know $\int_{-\infty}^{\infty}$, if you were able to find $\int_{-\infty}^b$ then you would be able to find $\int_a^b$ for any $a,b \in \mathbb{R}$. The fact that $e^{-t^2}$ does not have an elementary primitive should suggest to you that you probably can't find $\int_a^b$ explicitly.
As an A-level student (UK 16-18 pre-university), I always wondered why we had to use a table of normal distribution values to get approximate probabilities. Now I know that the probability density function is basically a stretched and translated version of $e^{-t^2}$.
Food for thought: the natural logarithm, $\ln x$, has an integral definition:
$$ \ln x := \int_1^x \frac{dt}{t} $$
because $t^{-1}$ has no primitive among the previously known elementary functions, yet we call $\ln x$ an elementary function, while $\operatorname{erf} \, x$ is not.
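Concretely, the closed form the question asks about is exactly what $\operatorname{erf}$ packages: since $\int_{-\infty}^0 e^{-t^2}\,dt=\frac{\sqrt\pi}{2}$ and $\operatorname{erf}(a)=\frac{2}{\sqrt\pi}\int_0^a e^{-t^2}\,dt$, we get $\int_{-\infty}^a e^{-t^2}\,dt=\frac{\sqrt\pi}{2}\bigl(1+\operatorname{erf}(a)\bigr)$. A quick numerical cross-check (my own sketch):

```python
import math

def integral_upto(a):
    """∫_{-∞}^{a} e^{-t²} dt = (√π/2)(1 + erf(a))."""
    return math.sqrt(math.pi) / 2 * (1 + math.erf(a))

def quad(a, lo=-10.0, n=100_000):
    """Crude midpoint quadrature; the tail below t = -10 is negligible."""
    h = (a - lo) / n
    return h * sum(math.exp(-((lo + (i + 0.5) * h) ** 2)) for i in range(n))

a = 0.7
```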
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Is $[0,4,4]^T$ in the plane in $\mathbb{R}^3$ spanned by the columns of $A$? Let $u$ be the $3\times 1$ matrix
$$\begin{bmatrix}
0\\
4\\
4
\end{bmatrix}$$
and $A$ the $3\times2$ matrix:
$$\begin{bmatrix}
3 & -5\\
-2 & 6\\
1 & 1
\end{bmatrix}.$$
Here's where the title comes in: is $u$ in the plane in $\mathbb R^3$ spanned by the columns of $A$? Why or why not? I know that it is, I just don't know why. This is the chapter before we learn about linear dependence and independence, so I doubt it has to do with either of these; beyond that I haven't a clue.
| Assuming that the columns of your matrix $A$ are $v=\begin{bmatrix}3&-2&1\end{bmatrix}^T$ and $w=\begin{bmatrix}-5&6&1\end{bmatrix}^T$, so that $A=[v\quad w]$, your (linearly independent) vectors $v$ and $w$ obviously span a plane $P=\operatorname{span}\{v,w\}\subset\mathbb{R}^3$ with $\dim(P) = 2$.
Any vector $x\in P$ has the form
$$x=\lambda v+\mu w,$$
i.e., it can be written as a linear combination of the two vectors $v,w$ that span $P$. If you want to check whether $u=\begin{bmatrix}0&4&4\end{bmatrix}^T$ is in $P$, you have to find $\lambda,\mu$ such that this combination of $v,w$ equals $u$. If there is no solution, then the vector is not part of the plane.
Basically you solve the following system of equations:
$$\begin{bmatrix}3&-5\\-2&6\\1&1\end{bmatrix}\cdot\begin{bmatrix}\lambda\\\mu\end{bmatrix}=\begin{bmatrix}0\\4\\4\end{bmatrix}$$
Hint: This system has a unique solution.
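A sketch of the elimination with exact arithmetic (`fractions` from the stdlib; my own addition):

```python
from fractions import Fraction as F

# The system:  3λ - 5μ = 0,   -2λ + 6μ = 4,   λ + μ = 4.
# From the first equation λ = (5/3)μ; substituting into the third
# gives (8/3)μ = 4.
mu = F(4, 1) / F(8, 3)        # μ = 3/2
lam = F(5, 3) * mu            # λ = 5/2
consistent = (-2 * lam + 6 * mu == 4)   # the middle equation must also hold
```

Since all three equations are satisfied, $u$ does lie in the plane.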
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Is the constant of Poincaré inequality related with the measure of the set? If I'm not mistaken the constant of the Poincaré inequality is related to the measure of set. For example in a ball. I'd like that someone told me up or indicate a reference for me. I'll be grateful, thanks.
| The Poincaré inequality works if the open set $\Omega$ we are working with has finite extent in some direction, namely if there is a unit vector $v$ such that $S:=\{\lambda\in\mathbb{R} : \lambda v\in\Omega\}$ has finite measure. In this case we have the Poincaré inequality. Indeed, after rotating $\Omega$ we may assume $v=e_n$; then we show it for test functions:
$$u(x)=\int_{-\infty}^{x_n}\partial_{x_n}u(x_1,\dots,x_{n-1},t)dt$$
hence
$$\int_{\Omega}|u(x)|^2dx\leq |S|^2\int_{\Omega}|\nabla u|^2dx.$$
Then we conclude by density.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Is this a typo in my math book? I am doing homework which is submitted online. I came across a question asking if two functions are equal. $f(x)=3x+4$ and $g(x)=14+(8/x)+b(x-4)$. I set the two functions equal to each other and got 7 for an answer, but the book says the answer is $\frac {7}{3}$.
Here is an image of the solution in the book:
| If you set $f(-3) = g(-3)$, you end up with:
$$-5 = 14 -(8/3) -7b$$
If you multiply by 3 to remove fractions, you are left with
$$-15 = 42 - 8 -21b$$
Collecting like terms, we are left with
$$21b = 49$$
reducing yields: $$b = 7/3$$
Can you find your error?
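A two-line numeric check (my own) that pinpoints where $b=7$ goes wrong:

```python
f = lambda x: 3 * x + 4
g = lambda x, b: 14 + 8 / x + b * (x - 4)

# Matching f and g at x = -3: only b = 7/3 works; b = 7 misses badly.
err_correct = abs(f(-3) - g(-3, 7 / 3))
err_wrong = abs(f(-3) - g(-3, 7))
```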
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Perimeter of an ellipse How can I calculate the perimeter of an ellipse? What is the general method of finding out the perimeter of any closed curve?
| For a general closed curve given parametrically by $(x(t),y(t))$, $t\in[0,2\pi]$, the perimeter is $\int_0^{2\pi}\sqrt{x'(t)^2+y'(t)^2}\,dt$; in polar coordinates $r=r(\theta)$ this reads $\int_0^{2\pi}\sqrt{r^2+(dr/d\theta)^2}\,d\theta$. (Note that $\int_0^{2\pi} r\,d\theta$ is not the arc length; the related formula $\frac12\int_0^{2\pi} r^2\,d\theta$ gives the enclosed area.)
For the ellipse, take $x=a\cos\theta$, $y=b\sin\theta$, so that
perimeter of ellipse $= \int_0^{2\pi}\sqrt {a^2\sin^2\theta+b^2\cos^2\theta}\,d\theta = \int_0^{2\pi}\sqrt {a^2\cos^2\theta+b^2\sin^2\theta}\,d\theta$, the two being equal by the substitution $\theta\mapsto\pi/2-\theta$ and periodicity.
This integral has no elementary closed form: it is a complete elliptic integral of the second kind (perimeter $= 4aE(e)$, where $e$ is the eccentricity and $a\ge b$), but you can use numerical methods to compute it.
Generally, people use an approximate formula for arc length of ellipse = $2\pi\sqrt{\frac{a^2+b^2}{2}}$
you can also visit this link : http://pages.pacificcoast.net/~cazelais/250a/ellipse-length.pdf
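A numerical sketch (my own) comparing the exact integral with the quoted approximation, using Simpson's rule on the arc-length integrand:

```python
import math

def ellipse_perimeter(a, b, n=20_000):
    """Simpson's rule on ∫₀^{2π} √(a²cos²θ + b²sin²θ) dθ (n must be even)."""
    f = lambda t: math.hypot(a * math.cos(t), b * math.sin(t))
    h = 2 * math.pi / n
    s = f(0.0) + f(2 * math.pi)
    s += 4 * sum(f(i * h) for i in range(1, n, 2))
    s += 2 * sum(f(i * h) for i in range(2, n, 2))
    return s * h / 3

def rough(a, b):
    """The approximate formula 2π√((a²+b²)/2)."""
    return 2 * math.pi * math.sqrt((a * a + b * b) / 2)
```

For $a=2$, $b=1$ the integral gives about $9.6884$ while the approximation gives about $9.9346$, roughly a 2.5% overshoot; for a circle both are exact.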
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Example of a sequence with countably many cluster points Can someone give a concrete example of a sequence of reals that has countably infinitely many cluster points?
| -- For $\,n\,$ odd, i.e. $\,n\equiv 1 \pmod 2\,$ , define $\,a_n=1\,$
-- For $\,n\equiv 2 \pmod 4\,$ , i.e. $\,n=2,6,10,\dots\,$ , define $\,a_n=2\,$
-- For $\,n\equiv 4 \pmod 8\,$ , i.e. $\,n=4,12,20,\dots\,$ , define $\,a_n=3\,$
...............................
-- For $\,n\equiv 2^k \pmod{2^{k+1}}\,$ , define $\,a_n=k+1$
...............
Now just take the sequence $\,\left\{a_n\right\}_{n=1}^\infty\,$
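The rule above is exactly "one plus the number of factors of 2 in $n$" (the ruler sequence); a short sketch (my own) that generates it:

```python
def a(n):
    """a_n = (number of factors of 2 in n) + 1; this matches the bullets:
    a_n = k + 1 exactly when n ≡ 2^k (mod 2^(k+1))."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v + 1

seq = [a(n) for n in range(1, 21)]
# 1,2,1,3,1,2,1,4,1,2,1,3,1,2,1,5,1,2,1,3: every value k recurs infinitely
# often (along n ≡ 2^(k-1) mod 2^k), so every positive integer is a cluster point.
```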
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Need Help With Calc Homework Question Suppose that $x^*$ means $5x^2 - x$. Then what does $(y+3)^*$ mean?
| By 5x2 do you mean $5x^2$? In such a case, simply replace every instance of $x$ with $(y+3)$, and then expand.
For example, $f(x) = 2x+2 \Longrightarrow f(y+3) = 2(y+3)+2 = 2y+6+2 = 2y+8$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Sum of Cauchy sequences is Cauchy in an Abelian Topological Group Let $G$ be a topological abelian group and suppose $0$ has a countable
fundamental system of neighborhoods. Let $(x_n),(y_n)$ be Cauchy sequences
of $G$. Why is it true that $(x_n+y_n)$ is a Cauchy sequence?
I tried to generalize the case of real sequences: my problem is that if
$U$ is a neighborhood of $0$, then i would need to use something like
$\frac{1}{2} U$, but obviously this does not make sense.
I also looked at this relevant question
Sum of Cauchy Sequences Cauchy?
however it was not very helpful, since it refers to metric topological groups.
Thanks.
| You had the right idea, so let me spell out the argument with the fix I suggested in the comments:
Suppose that for every $0$-neighborhood $U$ there is a $0$-neighborhood $V$ such that $V + V \subset U$. Since $(x_n)$, and $(y_n)$ are Cauchy, there is $N$ such that $x_n - x_m \in V$ and $y_n - y_m \in V$ for all $m,n \geq N$ and hence $(x_n+y_n)-(x_m+y_m) = (x_n-x_m)+(y_n-y_m) \in V+V \subset U$ for all $m,n \geq N$, showing that $(x_n+y_n)$ is Cauchy.
To see that our hypothesis is in fact true, we can argue as follows: Since the addition map $a\colon G \times G \to G, (g,h) \mapsto g+h$ is continuous, we know that $a^{-1}(U) \subset G \times G$ is open. Also, $(0,0) \in a^{-1}(U)$, so there are open $0$-neighborhoods $V_1, V_2 \subset G$ such that $V_1 \times V_2 \subset a^{-1}(U)$ by the definition of the product topology. Now $V = V_1 \cap V_2$ is a $0$-neighborhood such that $V + V \subset U$, since $V \times V \subset V_1 \times V_2 \subset a^{-1}(U)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Random walking and the expected value I was asked this question at an interview, and I didn't know how to solve it. Was curious if anyone could help me.
Let's say we have a square with vertices 1, 2, 3, 4. I can randomly walk to each neighbouring vertex with equal probability. My goal is to start at 1 and get back to 1. On average, how many steps will I take before I return to 1?
| After an even number of steps, you are either at 1 or at 3; after an odd number of steps, you are either at 2 or at 4. Hence you cannot return home after an odd number of steps. To return home for the first time after $2n$ steps, you must have had "bad luck" at the even times $2, 4, \dots, 2n-2$ (i.e. have arrived at 3 instead of 1, each with probability $1/2$) and then arrived at 1 at time $2n$ (probability $1/2$); the choices made while standing at 3 do not matter. Therefore $P(X=2n) = 2^{-n}$ and $E(X) = \sum_{n=1}^\infty 2n\, P(X=2n) = \sum_{n=1}^\infty n\, 2^{1-n}$.
In case you don't recognize that series in an interview situation, observe that $2E(X) = \sum_{n=1}^\infty n\,2^{2-n} = \sum_{n=0}^\infty (n+1)\,2^{1-n}$, so $E(X) = 2E(X)-E(X) = \sum_{n=0}^\infty (n+1) 2^{1-n}-\sum_{n=1}^\infty n 2^{1-n} = \sum_{n=0}^\infty 2^{1-n} = 4$.
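A simulation sketch (my own) of the walk on the 4-cycle confirms both the distribution and the mean:

```python
import random

def return_time(rng):
    """Steps of a simple random walk on the 4-cycle until first return to 0."""
    pos, steps = 0, 0
    while True:
        pos = (pos + rng.choice((1, -1))) % 4
        steps += 1
        if pos == 0:
            return steps

rng = random.Random(2)
times = [return_time(rng) for _ in range(20000)]
mean_time = sum(times) / len(times)       # should be close to 4
p_two = times.count(2) / len(times)       # P(X = 2) = 1/2
```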
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Prove that $i^i$ is a real number According to WolframAlpha, $i^i=e^{-\pi/2}$ but I don't know how I can prove it.
| Here's a proof that I absolutely do not believe: take its complex conjugate, which is $\bigl({\bar i}\bigr)^{\bar i}=(1/i)^{-i}=i^i$. Since complex conjugation leaves it fixed, it’s real!
EDIT: In answer to @Isaac’s comment, I think that to justify the formula above, you have to go through exactly the same arguments that most of the other answerers did. For complex numbers $u$ and $v$, we define $u^v=\exp(v\log u)$. Now, the exponential and the logarithm are defined by series with all real coefficients; alternatively you can say that they are analytic, sending reals to reals. Thus $\overline{\exp u}=\exp(\bar u)$ and $\overline{\log(u)}=\log\bar u$. The result follows, always sweeping under the rug the fact that the logarithm is not well defined.
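A one-line machine check (my own): Python's complex power uses the principal branch, so it lands exactly on $e^{-\pi/2}$.

```python
import math

z = 1j ** 1j        # principal value: exp(i · Log i) = exp(i · iπ/2) = exp(-π/2)
```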
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "83",
"answer_count": 6,
"answer_id": 4
} |
Evaluating $\int_0^\infty\frac{\sin(x)}{x^2+1}\, dx$ I have seen $$\int_0^\infty \frac{\cos(x)}{x^2+1} \, dx=\frac{\pi}{2e}$$ evaluated in various ways.
It's rather popular when studying CA.
But, what about $$\int_0^\infty \frac{\sin(x)}{x^2+1} \, dx\,\,?$$
This appears to be trickier and more challenging.
I found that it has a closed form of
$$\cosh(1)\operatorname{Shi}(1)-\sinh(1)\operatorname{Chi}(1)\,\,,\,\operatorname{Shi}(1)=\int_0^1 \frac{\sinh(x)}{x}dx\,\,,\,\, \operatorname{Chi}(1)=\gamma+\int_0^1 \frac{\cosh(x)-1}{x} \, dx$$
which are the hyperbolic sine and cosine integrals, respectively.
It's an odd function, so
$$\int_{-\infty}^\infty \frac{\sin(x)}{x^2+1} \, dx=0$$
But, does anyone know how the former case can be done? Thanks a bunch.
| Here is another solution:
Consider the integral
$$I(\alpha) = \int_{0}^{\infty} \frac{\sin (\alpha x)}{1+x^2} \, dx = \int_{0}^{\infty} \frac{\alpha \sin x}{\alpha^2+x^2} \, dx.$$
Differentiating $I(\alpha)$ with the first equality, we have
\begin{align*}
I'(\alpha)
&= \int_{0}^{\infty} \frac{x \cos (\alpha x)}{1+x^2} \, dx
= \int_{0}^{\infty} \frac{x \cos x}{\alpha^2+x^2} \, dx.
\end{align*}
Differentiating once again, we have
\begin{align*}
I''(\alpha)
&= -\int_{0}^{\infty} \frac{2\alpha x \cos x}{(\alpha^2+x^2)^2} \, dx
= \left[ \frac{\alpha \cos x}{\alpha^2+x^2} \right]_{0}^{\infty} + \int_{0}^{\infty} \frac{\alpha \sin x}{\alpha^2+x^2} \, dx \\
&= -\frac{1}{\alpha} + I(\alpha).
\end{align*}
Thus $I$ satisfies the differential equation
$$ I'' - I = -\frac{1}{\alpha}. \tag{1}$$
To solve this equation, we let
$$ I(\alpha) = u e^{\alpha}. $$
Plugging this to $(1)$ and multiplying $e^{\alpha}$ to both sides, we obtain
$$ (u'e^{2\alpha})' = -\frac{1}{\alpha}e^{\alpha}. $$
Thus integrating both sides, we have
$$ u'e^{2\alpha} = -\mathrm{Ei}(\alpha) - \frac{c_{1}}{2}, $$
where
$$\mathrm{Ei}(\alpha) = PV \int_{-\infty}^{\alpha} \frac{e^{t}}{t} \, dt$$
is the exponential integral function. Then
$$ u' = -e^{-2\alpha}\mathrm{Ei}(\alpha) - \frac{c_{1}}{2}e^{-2\alpha} $$
and hence
\begin{align*}
u
&= \int \left( -e^{-2\alpha}\mathrm{Ei}(\alpha) - \frac{c_{1}}{2}e^{-2\alpha} \right) \, d\alpha \\
&= \frac{1}{2}e^{-2\alpha} \mathrm{Ei}(\alpha) - \int \frac{e^{-\alpha}}{2\alpha} \, d\alpha + c_{1}e^{-2\alpha} + c_{2} \\
&= \frac{1}{2}e^{-2\alpha} \mathrm{Ei}(\alpha) - \frac{1}{2}\mathrm{Ei}(-\alpha) + c_{1}e^{-2\alpha} + c_{2}.
\end{align*}
Therefore it follows that
$$ I(\alpha) = \frac{e^{-\alpha} \mathrm{Ei}(\alpha) - e^{\alpha}\mathrm{Ei}(-\alpha)}{2} + c_{1}e^{-\alpha} + c_{2} e^{\alpha} $$
for some $c_1$ and $c_2$. To determine $c_1$ and $c_2$, observe that
$$\mathrm{Ei}(\alpha) \sim c + \log |\alpha|$$
near $\alpha = 0$. (In fact, we have $c = \gamma$.) Thus taking $\alpha \to 0$,
$$ 0 = I(0) = c_1 + c_2. $$
This shows that we may write
$$ I(\alpha) = \frac{e^{-\alpha} \mathrm{Ei}(\alpha) - e^{\alpha}\mathrm{Ei}(-\alpha)}{2} + c \sinh \alpha. $$
But L'hospital's rule shows that
$$ \mathrm{Ei}(\alpha) \sim \frac{e^{\alpha}}{\alpha} $$
as $|\alpha| \to \infty$. Thus $ I(\alpha) \sim c \sinh \alpha$ as $\alpha \to \infty$. But it is clear that $I(\alpha)$ is bounded:
$$ \left|I(\alpha)\right| \leq \int_{0}^{\infty} \frac{1}{1+x^2} \, dx = \frac{\pi}{2}. $$
Therefore $c = 0$ and we have
$$ \int_{0}^{\infty} \frac{\sin (\alpha x)}{1+x^2} \, dx = \frac{e^{-\alpha} \mathrm{Ei}(\alpha) - e^{\alpha}\mathrm{Ei}(-\alpha)}{2}. $$
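The closed form can be cross-checked numerically. The sketch below (my own; it assumes the standard power series $\operatorname{Ei}(x)=\gamma+\ln|x|+\sum_{n\ge1}\frac{x^n}{n\cdot n!}$, valid at $x=\pm1$) compares it against direct quadrature of the integral, summed half-period by half-period so the truncated tail is a tiny alternating series:

```python
import math

def Ei(x, terms=60):
    """Ei(x) = γ + ln|x| + Σ_{n≥1} x^n / (n·n!), plenty accurate for |x| ≈ 1."""
    gamma = 0.5772156649015329
    s, term = 0.0, 1.0
    for n in range(1, terms):
        term *= x / n                  # term = x^n / n!
        s += term / n
    return gamma + math.log(abs(x)) + s

def closed_form(alpha=1.0):
    return (math.exp(-alpha) * Ei(alpha) - math.exp(alpha) * Ei(-alpha)) / 2

def numeric(alpha=1.0, half_periods=2000, m=20):
    """Simpson's rule on each half-period of sin(αx)/(1+x²)."""
    f = lambda x: math.sin(alpha * x) / (1 + x * x)
    L = math.pi / alpha
    h = L / m
    total = 0.0
    for k in range(half_periods):
        a = k * L
        s = f(a) + f(a + L)
        s += 4 * sum(f(a + i * h) for i in range(1, m, 2))
        s += 2 * sum(f(a + i * h) for i in range(2, m, 2))
        total += s * h / 3
    return total
```

Both sides come out near $0.6468$ for $\alpha = 1$.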
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 4,
"answer_id": 2
} |
If $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ has coordinates $f^1 \ldots f^m$, and each $f^i$ is differentiable at $0$... ...then does it follow that $f$ is differentiable at 0?
My motivation for asking this is as follows: in Spivak's Calculus on Manifolds, in theorem 2.9, he uses this with the additional condition that each $f^i$ is continuously differentiable in a neighborhood of 0 to conclude that $f$ is differentiable, and I don't think that condition is necessary.
Namely, if each $f^i$ has derivative $Df^i$, I claim the matrix with $i^{th}$ row $Df^i$ will serve as $Df$. Indeed, let $v_j$ be a sequence tending to 0 in $\mathbb{R}^n$; we have (by the triangle inequality, if you wish)$$\frac{| f(v_j) - f(0) - \sum_i Df^i(v_j)e_i |}{|v_j|} \leqslant \frac{\sum_i |f^i(v_j) - f^i(0) - Df^i(v_j)|}{|v_j|}$$Taking the limit as $j \rightarrow \infty$, each summand goes to 0 by the differentiability of $f^i$ (and there are only $m$ of them), hence the limit is 0.
Is this wrong? Thanks in advance!
EDIT: btw, conditional on the above proof being right and/or the claim being right, does anyone know maybe what Spivak was going for?
| In case this is useful to anyone else, let me record the comments of Georges Elencwajg and Dylan Moreland-the answer is yes it's true, and the condition in Spivak is superfluous.
The proof is the one liner I wrote above, albeit with better notation: If $\lambda^i$ are the derivatives of the $f^i$ at 0, I claim the matrix with $i^{th}$ row $\lambda^i$ will serve as $Df(0)$. Indeed we have $$\frac{|f(v) - f(0) - \lambda^i(v)e_i|}{|v|} \leqslant \sum_i \frac{|f^i(v) - f^i(0) - \lambda^i(v)|}{|v|}$$Taking any $v_i \rightarrow 0$, applying the above inequality, and using the differentiability of each $f^i$ at 0 gives the result.
Thanks all for the assistance!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Existence of T-invariant complement of T-invariant subspace when T is diagonalisable Let $V$ be a complex linear space of dimension $n$. Let $T \in End(V)$ such that $T$ is diagonalisable. Prove that each $T$-invariant subspace $W$ of $V$ has a complementary $T$-invariant subspace $W'$ such that $V= W \oplus W'$.
Note: Let $\{e_1,...e_n\}$ be the set of eigenvectors together with eigenspaces $V_{\lambda_1},...V_{\lambda_n}$ of $T$. It's sufficient to show that every $T$-invariant subspace $W$ must be a direct sum of eigenspaces, then it'll be trivial to find $W'$ (just take the rest eigenspaces not in the direct sum and glue them to $W$).. But how to prove $W$ is a direct sum of eigenspaces?
| The minimal polynomial is of the form
\begin{equation}
p=(x-c_1)(x-c_2)\cdots (x-c_k),
\end{equation}
where $c_1,c_2,\ldots,c_k$ are the distinct eigenvalues of $T$.
By primary decomposition
\begin{equation}
V=W_1 \oplus W_2 \oplus \cdots \oplus W_k,
\end{equation}
where $W_i$ is the eigenspace corresponding to $c_i$, $1\leq i \leq k$.
From Hoffman & Kunze, Page 226, Exercise 10, one should be able to see that
\begin{equation}
W=(W\cap W_1) \oplus (W\cap W_2) \oplus \cdots \oplus (W\cap W_k).
\end{equation}
Clearly, $W\cap W_i$ is $T$-invariant, $1\leq i \leq k$.
Let $\{\alpha_1,\alpha_2,\ldots,\alpha_{r_i} \}$ be an ordered basis for $W\cap W_i$. Since $W\cap W_i$ is a subspace of the eigenspace $W_i$,
$\{\alpha_1,\alpha_2,\ldots,\alpha_{r_i} \}$ can be extended to
$\{\alpha_1,\alpha_2,\ldots,\alpha_{r_i},\alpha_{r_i+1},\ldots,\alpha_{s_i} \}$, a basis for $W_i$. Let $V_i$ be the subspace spanned by $\{\alpha_{r_i+1},\ldots,\alpha_{s_i} \}$. Then $W_i=(W\cap W_i)\oplus V_i$.
Hence
\begin{equation}
V=(W\cap W_1)\oplus V_1 \oplus (W\cap W_2)\oplus V_2 \oplus \cdots
\oplus (W\cap W_k)\oplus V_k,
\end{equation}
i.e., $W$ has $T$-invariant complementary subspace of $V$,
$V_1\oplus V_2 \oplus \cdots \oplus V_k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Expansion of $\bigl(\sum x_i\bigr)^4$ Show that $$(\sum_{i=1}^n X_i)^4=\sum_{i=1}^n X_i^4+4\sum_{i\neq j}^n X_i^3X_j+3\sum_{i\neq j}^n X_i^2X_j^2+6\sum_{i\neq j\neq k}^n X_i^2X_jX_k+\sum_{i\neq j\neq k\neq l}^n X_iX_jX_kX_l$$ Please show it step by step. Thanks in advance.
| More directly, if you prefer:
$$\begin{align}
\left(\sum_i X_i\right)^4&=\left(\sum_i X_i\right)\left(\sum_j X_j\right)\left(\sum_k X_k\right)\left(\sum_\ell X_\ell\right)\\
&=\sum_i\sum_j\sum_k\sum_\ell X_iX_jX_kX_\ell\\
&=\sum_{i,j,k,\ell} X_iX_jX_kX_\ell\\
\end{align}$$
Now just break apart the 4-tuples according to multiplicities.
$$\begin{align}
&=\sum_{i=j=k=\ell} X_iX_jX_kX_\ell\\
&+\left(\sum_{i=j=k\neq\ell} X_iX_jX_kX_\ell+\sum_{i=j=\ell\neq k} X_iX_jX_kX_\ell+\sum_{i=\ell=k\neq j} X_iX_jX_kX_\ell+\sum_{\ell=j=k\neq i} X_iX_jX_kX_\ell\right)\\
&+\left(\sum_{i=j\neq k=\ell} X_iX_jX_kX_\ell+\sum_{i=k\neq j=\ell} X_iX_jX_kX_\ell+\sum_{i=\ell\neq k=j}X_iX_jX_kX_\ell\right)\\
&+\left(\text{six such things with two equal indices and a third and fourth distinct index}\right)\\
&+\sum_{i, j, k, \ell \text{ distinct}} X_iX_jX_kX_\ell\end{align}$$
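The identity (with each index-sum running over ordered tuples of distinct indices) can be machine-checked for small $n$ (my own sketch):

```python
from itertools import permutations

def rhs(xs):
    idx = range(len(xs))
    s1 = sum(xs[i] ** 4 for i in idx)
    s2 = 4 * sum(xs[i] ** 3 * xs[j] for i, j in permutations(idx, 2))
    s3 = 3 * sum(xs[i] ** 2 * xs[j] ** 2 for i, j in permutations(idx, 2))
    s4 = 6 * sum(xs[i] ** 2 * xs[j] * xs[k] for i, j, k in permutations(idx, 3))
    s5 = sum(xs[i] * xs[j] * xs[k] * xs[l] for i, j, k, l in permutations(idx, 4))
    return s1 + s2 + s3 + s4 + s5

for xs in [(2, 3), (2, 3, 5), (1, 1, 1, 1), (1, -2, 4, 7, 3)]:
    assert sum(xs) ** 4 == rhs(xs)
```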
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Complex numbers Here's the question:
z is a complex, and if $z^5 + z^4 + z^3 + z^2 + z + 1 = 0$ then $z^6=1$.
use this fact to calculate how many answers is there for:
$$z^5 + z^4 + z^3 + z^2 + z + 1 = 0$$
Thanks.
| $z^6 = 1$ if and only if $z^6 - 1 = (z-1)(z^5+z^4+z^3+z^2+z+1) = 0$. So the roots of $z^5+z^4+z^3+z^2+z+1$ consist of the roots of $z^6 = 1$ excluding the root $z = 1$, which leaves $5$ roots.
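A numerical sanity check of this root count (my addition, not part of the answer):

```python
import cmath

quintic = lambda z: z**5 + z**4 + z**3 + z**2 + z + 1

# the six solutions of z^6 = 1
sixth_roots = [cmath.exp(2j * cmath.pi * k / 6) for k in range(6)]

# every sixth root of unity except z = 1 kills the quintic
on_quintic = [z for z in sixth_roots if abs(quintic(z)) < 1e-9]
assert len(on_quintic) == 5
assert abs(quintic(1)) == 6   # z = 1 is excluded
```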
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does knowing the totient of a number help factoring it? Edit: The quoted question addresses only numbers of the form $p^a q^b$, I asked a general question for arbitrary $n$.
If $n$ is a prime or a product of 2 primes then knowing its totient $\varphi(n)$ allows us immediately to find the prime factorization of $n$.
How about a general case? Does knowing $\varphi(n)$
*
*give us a way how to find the prime factorization of $n$,
*help as find a prime factor of $n$, or at least
*help at in finding any factor of $n$? (This turns out to be obvious.)
| In fact, it is a well-known observation in cryptography that just knowing a multiple of $\phi(n)$ helps greatly in factoring $n$, regardless of the number of prime factors (if there's only one or two primes dividing $n$ then this has already been covered).
Suppose $m$ is a multiple of $\phi(n)$. Then if you factor out enough powers of $2$ from $m$, there must exist a divisor $t = m/(2^r)$ such that $\lambda(n) \mid 2t$ but $\lambda(n) \nmid t$. (Here, $\lambda(n)$ is the Carmichael function, which will necessarily be even, unless $n$ is trivially small.)
It will then be the case that for some bases $b$, $b$ will be a quadratic residue for some prime $p \mid n$ but not for a different $q \mid n$. In this case, taking the GCD $(b^t-1,n)$ will produce a non-trivial factor of $n$.
One simply has to randomly try different values of $b$ (the expected number of tries is finite), as well as different choices of $t$ (there are at most $\log(n)$ possibilities, and one can use successive squaring to efficiently cover them).
To get all the prime factors of $n$, just keep repeating the process (we still have a multiple of $\phi$ for each of factors found above).
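Here is a sketch of that procedure in code. This is my own illustration of the well-known reduction, not taken from a specific source; it assumes $n$ is odd with at least two distinct prime factors, and it is randomized, so it only succeeds with high probability per trial:

```python
import math
import random

def split_with_phi_multiple(n, m, tries=100):
    """Given a multiple m of phi(n), try to find a nontrivial factor of n."""
    s, r = m, 0
    while s % 2 == 0:              # write m = 2^r * s with s odd
        s //= 2
        r += 1
    for _ in range(tries):
        b = random.randrange(2, n - 1)
        g = math.gcd(b, n)
        if g > 1:
            return g               # stumbled on a shared factor directly
        x = pow(b, s, n)
        for _ in range(r):
            y = (x * x) % n
            if y == 1 and x not in (1, n - 1):
                # x is a nontrivial square root of 1 mod n
                return math.gcd(x - 1, n)
            x = y
    return None
```

For example, with $n = 105 = 3\cdot5\cdot7$ and $m = 2\varphi(105) = 96$, this quickly returns a proper divisor of $105$.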
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
A question about a closed set Let $X = C([0, 1])$. For all $f, g \in X$, we define the metric $d$ by
$d(f, g) = \sup_x |f(x) - g(x)|$. Show that $S := \{ f\in X : f(0) = 0 \}$ is closed in $(X, d)$.
I am trying to show that $X \setminus S$ is open but I don't know where to start showing that.
I wanna add something more, I have not much knowledge about analysis and I am just self taught of it, what I have learnt so far is just some basic topology and open/closed sets.
| Here's a direct argument: Suppose $f\in X\setminus S$. Let $r=|f(0)|\ne 0$. Then the open ball of radius $r$ around $f$ is disjoint from $S$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Solving differential equation for an expanding bubble I need to solve the equation
\begin{eqnarray}
R^3 \frac{d } {dt} \left [ \frac{4}{3} \rho_{\rm ext} \left ( \frac{dR}{dt} \right )^2 \right ]+ 4 p R^2 \frac{d R} {dt} =\frac{F_E}{4\pi}
\end{eqnarray}
Could you please help in this regard?
| Let $X=\dfrac{dR}{dt}$ ,
Then $R^3\dfrac{d}{dt}\left[\dfrac{4}{3}\rho_{\rm ext}X^2\right]+4pR^2X=\dfrac{F_E}{4\pi}$
$R^3\dfrac{d}{dR}\left[\dfrac{4}{3}\rho_{\rm ext}X^2\right]\dfrac{dR}{dt}+4pR^2X=\dfrac{F_E}{4\pi}$
$\dfrac{8\rho_{\rm ext}R^3X^2}{3}\dfrac{dX}{dR}=\dfrac{F_E}{4\pi}-4pR^2X$
Let $Y=\dfrac{1}{R^2}$ ,
Then $\dfrac{dX}{dR}=\dfrac{dX}{dY}\dfrac{dY}{dR}=-\dfrac{2}{R^3}\dfrac{dX}{dY}$
$\therefore-\dfrac{16\rho_{\rm ext}X^2}{3}\dfrac{dX}{dY}=\dfrac{F_E}{4\pi}-\dfrac{4pX}{Y}$
$-\dfrac{64\pi\rho_{\rm ext}X^2}{3}\dfrac{dX}{dY}=\dfrac{F_EY-16\pi pX}{Y}$
$(F_EY-16\pi pX)\dfrac{dY}{dX}=-\dfrac{64\pi\rho_{\rm ext}X^2Y}{3}$
This belongs to an Abel equation of the second kind.
Let $U=Y-\dfrac{16\pi pX}{F_E}$ ,
Then $Y=U+\dfrac{16\pi pX}{F_E}$
$\dfrac{dY}{dX}=\dfrac{dU}{dX}+\dfrac{16\pi p}{F_E}$
$\therefore F_EU\left(\dfrac{dU}{dX}+\dfrac{16\pi p}{F_E}\right)=-\dfrac{64\pi\rho_{\rm ext}X^2}{3}\left(U+\dfrac{16\pi pX}{F_E}\right)$
$F_EU\dfrac{dU}{dX}+16\pi pU=-\dfrac{64\pi\rho_{\rm ext}X^2U}{3}-\dfrac{1024\pi^2p\rho_{\rm ext}X^3}{3F_E}$
$F_EU\dfrac{dU}{dX}=-\left(\dfrac{64\pi\rho_{\rm ext}X^2}{3}+16\pi p\right)U-\dfrac{1024\pi^2p\rho_{\rm ext}X^3}{3F_E}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/191987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Are there any famous number systems completely independent of the real number system that have shown their significance in math history? I know that the binary number system and the complex number system are each tied to the real number system, sharing some of its conditions and operation properties.
My question is: are there any famous number systems completely independent of the real number system that have shown their significance in math history?
| To expand on William's comment:
${\bf Z}/n{\bf Z}$ is the integers modulo $n$. You probably know about modular arithmetic and if you don't you can find tons of information about it on the web and in texts about Number Theory and/or Discrete Mathematics. It doesn't contain the reals and it's not contained in the reals and the operations are not the operations in the reals.
Now pick a prime number $p$. Every non-zero rational number $a/b$ can be written as $(r/s)p^t$ where $r,s,t$ are integers and $p$ divides neither $r$ nor $s$. Define a sort of absolute value on the rationals by $|a/b|_p=p^{-t}$. This extends to a distance on the rationals by defining the distance $d(x,y)$ between $x$ and $y$ to be $|x-y|_p$. Now if you know how to get the reals from the rationals by putting in all the limits of convergent sequences, you can do the same thing but using $|\ |_p$ instead of the usual absolute value, and what you get is the $p$-adic numbers. Again, it's not a subset of the reals, nor does it contain the reals, and its distance structure is very different from that on the reals. Again, tons of info on the web and in (somewhat more advanced) Number Theory texts.
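To make the definition concrete, here is a small illustrative implementation of the $p$-adic absolute value on the rationals (my addition):

```python
from fractions import Fraction

def p_adic_abs(x: Fraction, p: int) -> Fraction:
    """|x|_p = p^(-t), where x = (r/s) * p^t with p dividing neither r nor s."""
    if x == 0:
        return Fraction(0)          # |0|_p = 0 by convention
    num, den, t = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        t += 1
    while den % p == 0:
        den //= p
        t -= 1
    return Fraction(p) ** (-t)

# the induced distance, very different from the usual one on the rationals
d = lambda x, y, p: p_adic_abs(x - y, p)
```

For instance $|12|_2 = 1/4$ while $|1/12|_2 = 4$, so a number is $2$-adically small exactly when it is divisible by a high power of $2$.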
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $\sqrt{x-4} + 10 = \sqrt{x+4}$ Solve: $$\sqrt{x-4} + 10 = \sqrt{x+4}$$
Little help here? >.<
| As others have said, there are no solutions within the usual rules. However, once we get to $\sqrt {x-4}=-4.6$ we can remember that square roots can be negative (despite the convention that $\sqrt x \ge 0$). Then we can square and add $4$ to find $x=25.16$. Checking, we find $\sqrt {x+4}=5.4$, $\sqrt{x-4}=-4.6$, and the difference is truly $10$. You can decide if this is better than no answer at all.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
How to show the limit of the derivative is positive infinity, as x approaches positive infinity? Here is an exercise for which I cannot find a proof at present: Let $f:[0,\infty)\to\mathbb{R}$ be a differentiable function satisfying $\lim_{x\to+\infty}\frac{f(x)}{x^2}=1, \text{ and } \lim_{x\to+\infty}f'(x)=L$. Show that $L=+\infty$.
This exercise is extracted from Giaquinta & Modica's textbook: Mathematical Analysis, Foundations of One Variable, page 143, 3.86.
I have tried like the following: If $L\in\mathbb{R}$, then there exist $M>0, X>0,$ such that $|f'(x)|\leq M, \forall x\geq X.$ Thus for each $x>X$, $\exists \eta\in (X,x)$, such that $f(x)-f(X)=f'(\eta)(x-X), $ and then
$$\frac{f(x)}{x^2}=\frac{f(X)}{x^2}+(\frac{1}{x}-\frac{X}{x^2})f'(\eta)\to 0, \text{ as } x\to+\infty,$$
which contradicts the condition $\frac{f(x)}{x^2}\to 1 \text{ as } x\to+\infty.$ Therefore $L\in\{+\infty, -\infty\}.$ But how to conclude that $L\not=-\infty$?
| If $L = -\infty$, then for sufficiently large $x$, $f'(x) < -1$, and so $f(x) < k - x$ where $k$ is some constant. This implies $\frac{f(x)}{x^2} < \frac{k - x}{x^2}$ for large $x$. Therefore, $\limsup_{x\to\infty} \frac{f(x)}{x^2} \le 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
limit at infinity $f(x)=x+ax \sin(x)$ Let $f:\Bbb R\rightarrow \Bbb R$ be defined by $f(x)= x+ ax\sin x$.
I would like to show that if $|a| < 1$, then $\lim\limits_{x\rightarrow\pm \infty}f(x)=\pm \infty$.
Thanks for your time.
| HINT: Rewrite the function as $f(x)=(1+a\sin x)x$. Now, what is $\inf_{x\in\Bbb R}(1+a\sin x)$? What does this tell you about the smallest possible value of $|f(x)|$ in terms of $x$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
The product of all elements in $G$ cannot belong to $H$
Let $G$ be a finite group and $H\leq G$ a subgroup of odd order such that $[G:H]=2$. Prove that the product of all the elements of $G$ cannot belong to $H$.
I assume $|H|=m$ so $|G|=2m$. Since $[G:H]=2$ so $H\trianglelefteq G$ and that; half of the elements of the group are in $H$. Any Hints? Thanks.
| Since the index is $2$ and $|H|$ is odd, $G$ has an even number of elements, exactly half of which are in $H$ and half in $gH$ (an odd number of elements in each coset).
Consider $$G = \{ g_1, g_2, ... , g_{k}, h_1, h_2, ..., h_{k}\}$$ and let $$x = g_1 \times g_2 \times ... \times g_k \times h_1 \times h_2 \times ... \times h_k.$$
The order in which we multiply the $g_i$s and $h_i$s doesn't matter for which coset the product lands in. Each factor $h_i$ keeps us in the same coset, while each factor $g_i$ switches us between $H$ and $gH$; since there is an odd number of $g_i$s, we switch an odd number of times, hence we always end up in $gH.$
Lastly, we know that $xH = gH \iff x \in gH$.
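A brute-force check of this in the smallest example, $G=S_3$ with $H=A_3$ (my addition; here the sign of a product of permutations does not depend on the multiplication order, so the product of all six elements is always odd, i.e. never in $A_3$):

```python
from itertools import permutations
import random

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations stored as tuples."""
    return tuple(p[i] for i in q)

def sign(p):
    """+1 for even permutations (the subgroup A_3), -1 for odd ones."""
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inv % 2 else 1

G = list(permutations(range(3)))      # S_3, |G| = 6, [G : A_3] = 2, |A_3| = 3 odd
random.seed(0)
for _ in range(50):
    random.shuffle(G)                 # multiply all elements in a random order
    prod = (0, 1, 2)                  # identity
    for g in G:
        prod = compose(prod, g)
    assert sign(prod) == -1           # the product is never in H = A_3
```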
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 4
} |
How may I prove this inequality? Let $a, b, c$ be positive real, $abc = 1$. Prove that:
$$\frac{1}{1+a+b} + \frac{1}{1+b+c} + \frac{1}{1+c+a} \le \frac{1}{2+a} + \frac{1}{2+b}+\frac{1}{2+c}$$
I thought of Cauchy and AM-GM, but I don't see how to successfully use them to prove the inequality. Any hint, suggestion will be welcome. Thanks.
| $\frac{2}{1+a+b}-(\frac{1}{2+a}+\frac{1}{2+b})$
$=\frac{1}{1+a+b}-\frac{1}{2+a}+\frac{1}{1+a+b}-\frac{1}{2+b}$
$=\frac{1}{1+a+b}(\frac{1-b}{2+a}+\frac{1-a}{2+b})$
$=\frac{1}{(1+a+b)(2+a)(2+b)}((1-b)(2+b)+(1-a)(2+a))$
$≤\frac{1}{1\cdot 2\cdot 2}(4-(a+b+a^2+b^2))$ as $a,b>0,2+a>2$ and $a+b+1>1$
$\sum(\frac{2}{1+a+b}-(\frac{1}{2+a}+\frac{1}{2+b}))≤\frac{1}{4}(3\cdot 4-2(a+b+c)-2(a^2+b^2+c^2))≤0$ as $a^n+b^n+c^n≥3(abc)^{\frac{n}{3}}=3$ for any positive number $n$.
$\implies \sum2(\frac{1}{1+a+b})≤ \sum2(\frac{1}{2+a})$
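Not a substitute for a proof, but a quick numerical sanity check of the inequality itself under the constraint $abc=1$ (my addition):

```python
import random

random.seed(0)
for _ in range(10_000):
    a = random.uniform(0.05, 20)
    b = random.uniform(0.05, 20)
    c = 1 / (a * b)                      # enforce abc = 1
    lhs = 1/(1+a+b) + 1/(1+b+c) + 1/(1+c+a)
    rhs = 1/(2+a) + 1/(2+b) + 1/(2+c)
    assert lhs <= rhs + 1e-12

# equality at a = b = c = 1, where both sides equal 1
```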
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Finding $E\Bigl(\overline{Y^2}\Bigm|\overline{Y\vphantom{Y^2}}\Bigr)$ by Basu's theorem? Suppose $Y_1,\ldots,Y_n$ are a random sample of normal distribution $\mathcal{N}(\mu,1)$. If $\overline{Y^2}=\displaystyle\frac{1}{n}\sum_{i=1}^n Y_i^2$, how can I find $E\Bigl(\overline{Y^2}\Bigm|\overline{Y\vphantom{Y^2}}\Bigr)$ by Basu's theorem?
| By Basu's theorem $\frac{1}{n-1}\sum_{i=1}^n (Y_i- \bar{Y})^2$ is independent of $\bar{Y}$. Hence
$$ \mathbb{E} \left(\frac{1}{n-1}\sum_{i=1}^n (Y_i- \bar{Y})^2 \mid \bar{Y} \right) = \mathbb{E} \left(\frac{1}{n-1}\sum_{i=1}^n (Y_i- \bar{Y})^2 \right)$$
Develop both sides. Your expression will appear on the left-hand side.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
There exists an injection from $X$ to $Y$ if and only if there exists a surjection from $Y$ to $X$. Theorem. Let $X$ and $Y$ be sets with $X$ nonempty. Then (P) there exists an injection $f:X\rightarrow Y$ if and only if (Q) there exists a surjection $g:Y\rightarrow X$.
For the P $\implies$ Q part, I know you can get a surjection $Y\to X$ by mapping $y$ to $x$ if $y=f(x)$ for some $x\in X$ and mapping $y$ to some arbitrary $\alpha\in X$ if $y\in Y\setminus f(X)$. But I don't know about the Q $\implies$ P part.
Could someone give an elementary proof of the theorem?
| Suppose that $g$ is a surjection from $Y$ to $X$. For every $x$ in $X$, let $Y_x$ be the set of all $y$ such that $g(y)=x$. So $Y_x=g^{-1}(\{x\})$: $Y_x$ is the preimage of $x$. Since $g$ is a surjection, $Y_x$ is non-empty for every $x\in X$.
By the Axiom of Choice, there is a set $Y_c$ such that $Y_c\cap Y_x$ is a $1$-element set for every $x$. Informally, the set $Y_c$ chooses (simultaneously) an element $y_x$ from every $Y_x$.
Define $f(x)$ by $f(x)=y_x$. Then $f$ is an injection from $X$ to $Y$.
Remark: Fairly elementary, I guess, but definitely non-constructive. It can be shown that for general $X$, $Y$, and $g$, the result cannot be proved in ZF. So we really cannot do better.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 3,
"answer_id": 0
} |
Sum of the sequence What is the sum of the following sequence
$$\begin{align*}
(2^1 - 1) &+ \Big((2^1 - 1) + (2^2 - 1)\Big)\\
&+ \Big((2^1 - 1) + (2^2 - 1) + (2^3 - 1) \Big)+\ldots\\
&+\Big( (2^1 - 1)+(2^2 - 1)+(2^3 - 1)+\ldots+(2^n - 1)\Big)
\end{align*}$$
I tried to solve this. I reduced the equation into the following equation
$$n(2^1) + (n-1)\cdot2^2 + (n-2)\cdot2^3 +\ldots$$
but I'm not able to simplify it further. Can anyone help me solve it? By the way, it's not a homework problem; the expression arose from a puzzle.
Thanks in advance
| Others have given the correct answer; here’s how you could have simplified your incorrect expression.
$$\begin{align*}
n(2^1) + (n-1)\cdot2^2 + (n-2)\cdot2^3 +\ldots&=\sum_{k=1}^n(n-k+1)2^k\\
&=(n+1)\sum_{k=1}^n2^k-\sum_{k=1}^nk2^k\\
&=(n+1)\left(2^{n+1}-2\right)-\sum_{k=1}^n\sum_{i=1}^k2^k\\
&=(n+1)\left(2^{n+1}-2\right)-\sum_{i=1}^n\sum_{k=i}^n2^k\\
&=(n+1)\left(2^{n+1}-2\right)-\sum_{i=1}^n\left(2^{n+1}-2^i\right)\\
&=(n+1)\left(2^{n+1}-2\right)-n2^{n+1}+\sum_{i=1}^n2^i\\
&=(n+1)\left(2^{n+1}-2\right)-n2^{n+1}+2^{n+1}-2\\
&=2\cdot2^{n+1}-2n-4\\
&=2^{n+2}-2n-4
\end{align*}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
The distance function is continuous. Let $S\subset\Bbb R$ not empty, define $f:\Bbb R\rightarrow\Bbb R$ such that $f(x)=
\inf\{|x-s| ;s\in S\}$
then, prove that $|f(x)-f(y)|\le|x-y| $ for any $x,y \in \Bbb R$
| $|x - s| \le |x - y| + |y - s|$ for every $s \in S$.
Hence $f(x) \le |x - y| + f(y)$.
Similarly $f(y) \le |x - y| + f(x)$.
Hence $|f(x) - f(y)| \le |x - y|$.
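The inequality says $f$ is $1$-Lipschitz; here is a quick numerical illustration for a finite $S$ (my addition):

```python
import random

random.seed(0)
S = [random.uniform(-5, 5) for _ in range(25)]
f = lambda x: min(abs(x - s) for s in S)      # f(x) = dist(x, S)

for _ in range(5_000):
    x = random.uniform(-20, 20)
    y = random.uniform(-20, 20)
    assert abs(f(x) - f(y)) <= abs(x - y) + 1e-12
```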
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Does this number have an expression as square root or log of something If this ends up being a ridiculous question I will delete it. Forgive me if this is ridiculous but this number has me stumped.
$$1.52360679774998$$
The continued fraction calculator gives $1, 1, 1, 10, 11, 11, 11, 11, 11, 11 ...$ which makes me think this number should have a nice expression as a root or log of something or be related to some special number like $\phi$. But I've been unable to tease out any such expression. I appreciate it if someone has more insight into this.
| If we assume that the continued fraction expansion that you quoted continues in that pattern forever, we can do the calculation by hand. For let $x$ be the value of the continued fraction $\langle 0;11,11,11,\dots\rangle$. Then $x=\dfrac{1}{11+x}$, i.e. $x^2+11x-1=0$. This quadratic equation has positive root $x=\dfrac{5\sqrt{5}-11}{2}$.
Now we can claw our way to the top. For example, $\langle 0;10,11,11,11,\dots\rangle=\dfrac{1}{10+x}=\dfrac{2}{5\sqrt{5}+9}$. Continue, resisting the urge to rationalize the denominator. At the end we get $\dfrac{15\sqrt{5}+31}{10\sqrt{5}+20}$. Finally, rationalize the denominator. We get $\dfrac{13+\sqrt{5}}{10}$.
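A numerical check (my addition) that the periodic continued fraction $[1;1,1,10,\overline{11}]$ really equals $\frac{13+\sqrt{5}}{10}$, i.e. the mystery number:

```python
from math import sqrt

def cf_value(terms):
    """Evaluate a finite continued fraction [a0; a1, ..., ak] from the back."""
    x = terms[-1]
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x

approx = cf_value([1, 1, 1, 10] + [11] * 30)   # truncate the periodic tail
exact = (13 + sqrt(5)) / 10

assert abs(approx - exact) < 1e-12
assert abs(exact - 1.52360679774998) < 1e-13
```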
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
$p$ is a polynomial; the set is a bounded open set with at most $n$ components Assume $p$ is a non-constant polynomial of degree $n$. Prove that the set $\{z:|p(z)| \lt 1\}$ is a bounded open set with at most $n$ connected components. Give an example to show the number of components can be less than $n$.
thanks.
EDIT:Thanks,I meant connected components.
| For bounding the number of connected components, you can show that every connected component contains at least one root of your polynomial. To obtain that, you can apply the minimum modulus principle to the connected component, just like in a proof of the fundamental theorem of algebra.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
How to solve $x^3=-1$? How to solve $x^3=-1$? I got following:
$x^3=-1$
$x=(-1)^{\frac{1}{3}}$
$x=\frac{(-1)^{\frac{1}{2}}}{(-1)^{\frac{1}{6}}}=\frac{i}{(-1)^{\frac{1}{6}}}$...
| $$x^3=-1$$
$$x^3+1=0$$
$$(x+1)(x^2+1-x)=0$$
$$x=-1 \quad\text{or}\quad x^2-x+1=0$$
When $$x^2-x+1=0, \quad x= \frac{1\pm\sqrt{-3}}{2}$$
$$x= \frac{1+\sqrt{3}i}{2} \quad\text{and}\quad \frac{1-\sqrt{3}i}{2}$$
which are equal to $e^{i\pi/3}$ and $e^{-i\pi/3}$ respectively.
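As a numerical check (my addition), note that the three cube roots of $-1$ are $-1$ and $\frac{1\pm\sqrt{3}i}{2}=e^{\pm i\pi/3}$:

```python
import cmath

roots = [-1, (1 + 1j * 3**0.5) / 2, (1 - 1j * 3**0.5) / 2]
for z in roots:
    assert abs(z**3 + 1) < 1e-12      # each really satisfies z^3 = -1

# polar form of the complex pair
assert abs(roots[1] - cmath.exp(1j * cmath.pi / 3)) < 1e-12
assert abs(roots[2] - cmath.exp(-1j * cmath.pi / 3)) < 1e-12
```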
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 8,
"answer_id": 3
} |
Topology induced by the completion of a topological group Let $G$ be an abelian topological group and let $\hat{G}$ be its completion, i.e. the group containing the equivalence classes of all Cauchy sequences of $G$. What exactly is the topology of $\hat{G}$?
| For each neighborhood $N$ of zero in $G$, define a neighborhood $\hat{N}$ in $\hat{G}$ consisting of those equivalence classes for which all sequences in the class are eventually in $N$. This is a base (of neighborhoods of zero) for the new topology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 1,
"answer_id": 0
} |
Proving norm inequality There is a brief proof in my textbook that I have one question about.
We are supposed to prove that $||x||_{1} \leq n||x||_{\infty}$ for $x \in \mathbb{R}^n$
The book writes the following:
$||x||_{1} = \sum_{i=1}^{n} |x_i| \leq \sum_{i=1}^{n}\{\max_{1 \leq j \leq n} |x_j| \} \leq \sum_{i=1}^{n} ||x||_{\infty} = n||x||_{\infty}$
The one thing I don't quite follow is when the book writes:
$\sum_{i=1}^{n}\{\max_{1 \leq j \leq n} |x_j| \} \leq \sum_{i=1}^{n} ||x||_{\infty}$. In my book the definition of $||x||_{\infty}$ is given as:
$$||x||_{\infty} = \max_{1 \leq j \leq n}|x_j|$$
So shouldn't the inequality $\sum_{i=1}^{n}\{\max_{1 \leq j \leq n} |x_j| \} \leq \sum_{i=1}^{n} ||x||_{\infty}$ actually be an equality?
I would really appreciate it if someone could explain this to me!
| It would also be correct with equality, but it’s not incorrect as it stands. After all, if $a=b$, then it’s also true that $a\le b$, and the inequality is all that’s actually needed here.
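A numerical illustration of the inequality (my addition):

```python
import random

def norm1(x):
    return sum(abs(t) for t in x)

def norminf(x):
    return max(abs(t) for t in x)

random.seed(0)
for _ in range(2_000):
    n = random.randint(1, 8)
    x = [random.uniform(-10, 10) for _ in range(n)]
    assert norm1(x) <= n * norminf(x) + 1e-9

# equality holds when all coordinates share the same absolute value
assert norm1([3, -3, 3]) == 3 * norminf([3, -3, 3])
```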
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solve sum for theta Is there any way to solve the following sum of trigonometric functions for theta without using a solver?
$$25\sin(\theta)-1.5\cos(\theta)=20$$
| $$
25\sin\theta-1.5\cos\theta = \sqrt{25^2+1.5^2}\left( \frac{25}{\sqrt{25^2+1.5^2}}\sin\theta - \frac{1.5}{\sqrt{25^2+1.5^2}}\cos\theta \right)
$$
$$
= \sqrt{25^2+1.5^2}(\cos\varphi\sin\theta-\sin\varphi\cos\theta) = \sqrt{25^2+1.5^2} \sin(\theta-\varphi).
$$
So you want
$$
\sin(\theta-\varphi)=\frac{20}{\sqrt{25^2+1.5^2}}.
$$
Take arcsines.
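Carrying the computation through numerically (my addition), using the identity $25\sin\theta-1.5\cos\theta=R\sin(\theta-\varphi)$ with $R=\sqrt{25^2+1.5^2}$, $\cos\varphi=25/R$ and $\sin\varphi=1.5/R$:

```python
from math import asin, atan2, cos, pi, sin, sqrt

R = sqrt(25**2 + 1.5**2)
phi = atan2(1.5, 25)            # cos(phi) = 25/R, sin(phi) = 1.5/R

theta = phi + asin(20 / R)      # one solution of sin(theta - phi) = 20/R
assert abs(25 * sin(theta) - 1.5 * cos(theta) - 20) < 1e-9

# the other branch of the arcsine gives a second solution
theta2 = phi + (pi - asin(20 / R))
assert abs(25 * sin(theta2) - 1.5 * cos(theta2) - 20) < 1e-9
```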
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/192988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Probability of throwing multiple dice of at least a given face with a set of dice I know how to calculate the probability of throwing at least one die of a given face with a set of dice, but can someone tell me how to calculate more than one (e.g., at least two)?
For example, I know that the probability of throwing at least one roll of 4 or higher with three 6-sided dice is 189/216, or 1 - (3/6 x 3/6 x 3/6). How do I calculate throwing at least two 4s with four 6-sided dice?
| The probability of no 4 with four 6-sided dice$(p_1)=(\frac{5}{6})^4$
The probability of exactly one 4 with four 6-sided dice$(p_2)$
$=4\frac{1}{6}(\frac{5}{6})^3$ as here the combinations are $4XXX$ or $X4XX$ or $XX4X$ or $XXX4$ where $X$ is some other face$≠4$
So, the probability of at least two 4s with four 6-sided dice$=1-p_1-p_2$
$=1-((\frac{5}{6})^4+4\frac{1}{6}(\frac{5}{6})^3)$
$=1-(\frac{5}{6})^3(\frac{5}{6}+\frac{4}{6})=1-\frac{125}{144}=\frac{19}{144}$
The probability of throwing at least a 4 with one 6-sided die
$=\frac{3}{6}=\frac{1}{2}$
The possible combinations are $XXYY$, $XYXY$, $XYYX$, $YXXY$, $YXYX$, $YYXX$
where $1≤Y≤3,4≤X≤6$
So, the required probability of throwing exactly two occurrences of at least 4 is $^4C_2\frac{1}{2}\frac{1}{2}(1-\frac{1}{2})(1-\frac{1}{2})=\frac{3}{8}$ using Binomial Distribution.
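Both results can be verified by exhaustively enumerating all $6^4=1296$ outcomes (my addition):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=4))
total = Fraction(len(outcomes))                 # 6^4 = 1296

# at least two dice showing exactly 4
p_two_fours = Fraction(sum(1 for o in outcomes if o.count(4) >= 2)) / total
assert p_two_fours == Fraction(19, 144)

# exactly two dice showing 4 or more
p_two_high = Fraction(sum(1 for o in outcomes if sum(d >= 4 for d in o) == 2)) / total
assert p_two_high == Fraction(3, 8)
```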
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Metric is locally constant If $(M, g)$ be a Riemannian manifold. My doubt is the following:
For each $p\in M$, there is a coordinate chart $(U, (x_1,x_2,.., x_n))$ such that
$g = \sum_i dx_i\otimes dx_i $
| No. Locally, you can always find an orthonormal frame $\{X_1\ldots X_n\}$, but its vectors need not be coordinate vectors relative to a chart, because in general you will have $[X_i, X_j]\ne 0$. The property you state is equivalent to the manifold being flat.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $x^2 + 5xy+7y^2 \ge 0$ for all $x,y \in\mathbb{R}$ This is probably really easy for all of you, but my question is how do I prove that $x^2 + 5xy+7y^2 \ge 0$ for all $x,y\in\mathbb{R}$
Thanks for the help!
| You can’t prove that it’s always greater than $0$, because when $x=y=0$ it is $0$, but you can prove that it’s always greater than or equal to $0$ by completing the square:
$$x^2+5xy+7y^2=\left(x+\frac{5y}2\right)^2+\frac34y^2\;.$$
Since squares are always at least $0$, so is their sum.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
finding the significant digits for relative error How exactly do you go about finding the number of significant digits?
From what I've found I am suppose to find t where
relative error (Re) $ \le$ 5*10^-(t)
But I don't understand how you find t.
For example, let pi be the exact value, and 3 the approximation. So I found Re= 0.04507. How do I get the number of significant numbers from this?
| Please take a look at the University of Waterloo link about Significant Digits first. As per your question,
$$RE = \frac{\left | 3-\pi \right |}{\left | \pi \right |} \approx 0.04507 $$
In the case where:
$$Re = 0.5*(10)^{-t}$$
so we can say that:
$$ 0.04507= 0.5*(10)^{-t}$$
so,
$$ \frac{0.04507}{0.5} = (10)^{-t}$$
taking the log, this leads to:
$$t=1.04508244627$$
Now, you can write the first equation as:
$$RE = \frac{\left | 3-\pi \right |}{\left | \pi \right |} \approx 0.04507 ={0.5} \times (10)^{-1.04508244627} $$
This tells us that the value $3$ you have calculated for $\pi$ is good to $1$ significant position, because we are interested only in integer values of $t$.
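The same computation in a few lines (my addition):

```python
from math import floor, log10, pi

re = abs(3 - pi) / abs(pi)       # relative error of approximating pi by 3
t = -log10(re / 0.5)             # solve re = 0.5 * 10**(-t) for t

assert abs(re - 0.045070) < 1e-5
assert abs(t - 1.045) < 0.001
assert floor(t) == 1             # 3 approximates pi to 1 significant digit
```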
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
When is the set statement: (A⊕B) = (A ∪ B) true? "When is the set statement:
(A⊕B) = (A ∪ B)
a true statement? Is it true sometimes, never, or always? If it is sometimes, state the cases where it is."
How would you go about finding the answer to the question or ones like this one?
Thanks for your time!
| If you define $A \bigoplus B$ as Kevin did, we see that it is true exactly when $A \cap B = \emptyset$. This is because $$(A \cap B^{c}) \cup (B \cap A^{c}) = (A \cup B) \setminus (A \cap B),$$ which equals $A \cup B$ precisely when $A \cap B = \emptyset$. This tells us that for $A \cap B \neq \emptyset$ they are not equal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
A matrix is diagonalizable, so what? I mean, you can say it's similar to a diagonal matrix, it has $n$ independent eigenvectors, etc., but what's the big deal of having diagonalizability? Can I solidly perceive the differences between two linear transformation one of which is diagonalizable and the other is not, either by visualization or figurative description?
For example, invertibility can be perceived. Because non-invertible transformation must compress the space in one or more certain direction to $0$. Like crashing a space flat.
| Diagonalization is shearing, from any source of origin, the x-directional and y-directional matrix elements of rows and columns, from a source of, for example, radiation of light rays, into squeezing along the hypotenuse of a right-angled triangle's sides, with the resultant an a+ib module of shearing. This happens as a result of stretching. It really bends a light ray by its refractive index along input and output planes. On this line of thought, this may play the role of light-ray invisibility by the Einstein gravity of bending, typically applicable in quantum mechanics. A sort of squeezing or shearing along the hypotenuse side. When all the elements are diagonally shifted the potential becomes zero, and not so when more than zero above the diagonalization of directional matrix elements. This may also be called a twisting along axial planes as a function of the twisting angle of ray transfer matrices, paving the way for magnification as divergence, convergence, as well as for invisibility cloaking dynamics using laser beams.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 5,
"answer_id": 3
} |
Rudin versus Pugh as a Textbook for Self-Studying a First Course in Real Analysis I have shortlisted two books, irrespective of their price:
1) Real Mathematical Analysis by Charles Chapman Pugh, and 2) Principles of Mathematical Analysis by Walter Rudin. I wish to self-study analysis (I am in high school) and I was wondering which book is better. Any insight into this is appreciated.
UPDATE: I shall accept an answer soon.New opinions are welcome.
| You are definitely ambitious! Both books are rather hard for a beginner. I learned mathematical analysis on Rudin's book, since I am older than Pugh's book.
I'd suggest reading Pugh's book first, since it is probably less pedantic. However, I still judge Baby Rudin a better book than Pugh's.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Detect when a point belongs to a bounding box with distances I have a box with known bounding coordinates (latitudes and longitudes): latN, latS, lonW, lonE.
I have a mystery point P with unknown coordinates. The only data available is the distance from P to any chosen point p: dist(p, P).
I need a function that tells me whether this point is inside or outside the box.
| Measure distances from the four corners of the rectangle.
Consider the four triangles formed from P and the four sides of the rectangle.
Apply the Law of Cosines to measure the angles of the four triangles next to the sides of the rectangle.
If any is larger than 90 degrees then you are outside. Otherwise you are inside.
EDIT 1:
Suppose your rectangle is $EFGH$ where the sides are $d(EF)=d(HG)=x$ and $d(FG)=d(EH)=y$.
Let the distance of $P$ from $E,F,G,H$ be $e,f,g,h$ respectively.
Now by Law of Cosines applied to angle $EHP=\alpha$ we have $\cos (\alpha) = (y^2+h^2-e^2)/(2yh)$. If the angle is larger than $90^\circ$ then $y^2+h^2-e^2<0$. We need to apply a similar check 8 times. If any of the expressions is negative then we are outside. Else we are inside (or on boundary). So you have to check the sign of 8 expressions.
If any of the following
$y^2+h^2-e^2$, $y^2+e^2-h^2$, $x^2+e^2-f^2$, $x^2+f^2-e^2$, $y^2+f^2-g^2$, $y^2+g^2-f^2$, $x^2+g^2-h^2$, $x^2+h^2-g^2$
is negative, then $P$ is outside.
EDIT 2:
In case efficiency is a consideration: To test that $P$ is inside, or on the border, checking with respect to 3 sides will do. So it takes 6 inequalities. For example
$y^2+h^2-e^2\ge 0$, and $y^2+e^2-h^2\ge 0$, and
$x^2+e^2-f^2\ge 0$, and $x^2+f^2-e^2\ge 0$, and
$y^2+f^2-g^2\ge 0$, and $y^2+g^2-f^2\ge 0$.
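A sketch of the six-inequality test as a function (my own translation of the above; E, F, G, H and the side lengths are as labelled in EDIT 1):

```python
def inside_rect(x, y, e, f, g, h, tol=1e-9):
    """True if P lies inside (or on) rectangle EFGH, given the side
    lengths d(EF) = d(HG) = x and d(FG) = d(EH) = y, and the measured
    distances e, f, g, h from P to the corners E, F, G, H."""
    checks = (y*y + h*h - e*e, y*y + e*e - h*h,
              x*x + e*e - f*f, x*x + f*f - e*e,
              y*y + f*f - g*g, y*y + g*g - f*f)
    return all(c >= -tol for c in checks)
```

For the unit square with $E=(0,0)$, $F=(1,0)$, $G=(1,1)$, $H=(0,1)$, the centre $(0.5,0.5)$ passes the test while the external point $(2,0.5)$ fails it.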
EDIT 3:
A similar approach will work for checking with respect to a convex polygon.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 7,
"answer_id": 5
} |
Proving that $\sum_{j=1}^{n}|z_{j}|^{2}\le 1$ when $|\sum_{j=1}^{n}z_{j}w_{j}| \le 1$ for all $\sum_{j=1}^{n}|w_{j}|^{2}\le 1$ The problem is like this:
Fix $n$ a positive integer.
Suppose that $z_{1},\cdots,z_{n} \in \mathbb C$ are complex numbers satisfying $|\sum_{j=1}^{n}z_{j}w_{j}| \le 1$ for all $w_{1},\cdots,w_{n} \in \mathbb C$ such that $\sum_{j=1}^{n}|w_{j}|^{2}\le 1$.
Prove that $\sum_{j=1}^{n}|z_{j}|^{2}\le 1$.
For this problem, I so far have that $|z_{i}|^{2}\le 1$ for all $i$ by plugging $(0, \cdots,0,1,0,\cdots,0)$ for $w=(w_{1},\cdots,w_{n} )$
Also, by plugging $(1/\sqrt{n},\cdots,1/\sqrt{n})$ for $w=(w_{1},\cdots,w_{n} )$
we could have $|z_{1}+\cdots+z_{n}|\le \sqrt{n}$
I wish we can conclude that $|z_{i}|\le 1/\sqrt{n}$ for each $i$.
Am I in the right direction?
Any comment would be appreciated!
| Let $w_j:=\frac{\bar z_j}{\sqrt{\sum_{k=1}^n|z_k|^2}}$ (when $\sum_j|z_j|^2\neq 0$). Then $|w_j|^2=\frac{|z_j|^2}{\sum_{k=1}^n|z_k|^2}$ and we get $\sqrt{\sum_{j=1}^n|z_j|^2}\leq 1$, hence the result.
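A quick numerical illustration of why this witness works (the variable names and the random test vector are mine): for any nonzero $z$, the vector $w_j=\bar z_j/\|z\|$ has unit norm and $\left|\sum_j z_j w_j\right|=\sqrt{\sum_j|z_j|^2}$, so the hypothesis $|\sum_j z_j w_j|\le 1$ forces $\sqrt{\sum_j|z_j|^2}\le 1$.

```python
import math
import random

random.seed(0)
n = 4
z = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]

norm_z = math.sqrt(sum(abs(zj)**2 for zj in z))
# the witness from the answer: w_j = conj(z_j) / ||z||
w = [zj.conjugate() / norm_z for zj in z]

norm_w = math.sqrt(sum(abs(wj)**2 for wj in w))
inner = sum(zj * wj for zj, wj in zip(z, w))

print(abs(norm_w - 1.0) < 1e-9)         # w is a unit vector
print(abs(abs(inner) - norm_z) < 1e-9)  # |sum z_j w_j| equals ||z||
```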
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
A young limit $\lim_{n\to\infty} \frac{{(n+1)}^{n+1}}{n^n} - \frac{{n}^{n}}{{(n-1)}^{n-1}} =e$ These days I saw a really interesting limit as I was reading more information on Napier's constant here : http://mathworld.wolfram.com/e.html. It seems a pretty young limit since it appears under the name and year "Brothers and Knox 1998". It's also new for me and I'd like to know more approaching ways for it.
$$\lim_{n\to\infty} \frac{{(n+1)}^{n+1}}{n^n} - \frac{{n}^{n}}{{(n-1)}^{n-1}} =e$$
A possible way to go is to use the first converse of the Stolz–Cesàro theorem, but since $\lim_{n\to\infty}\frac{b_{n}}{b_{n+1}}=1$ we can at most check the given result, because the theorem may or may not apply in this case. When $\lim_{n\to\infty}\frac{b_{n}}{b_{n+1}}=1$, more research is required to be sure that the first converse theorem can safely be applied. I'll make an update as soon as things are clarified, so that I may turn this into a rigorous proof.
$$\lim_{n\to\infty} \frac{{(n+1)}^{n+1}}{n^n} - \frac{{n}^{n}}{{(n-1)}^{n-1}}=$$
$$\lim_{n\to\infty} f(n+1) - f(n)=$$
By the first converse of the Stolz–Cesàro theorem we have
$$\lim_{n\to\infty} \frac{f(n+1) - f(n)}{n+1-n}=$$
$$\lim_{n\to\infty} \frac{f(n)}{n}=$$
$$\lim_{n\to\infty} \frac{n^n}{(n-1)^{n-1}}\cdot \frac{1}{n}=$$
$$\lim_{n\to\infty} \frac{n^{n-1}}{(n-1)^{n-1}}=\lim_{n\to\infty}\left(1+\frac{1}{n-1}\right)^{n-1}=e.$$
What else can we do here? Thanks.
| $$\eqalign{(n+1)^{n+1} = \exp\left((n+1) \ln(n+1)\right) &= \exp\left((n+1) \left(\ln n + \frac{1}{n} - \frac{1}{2n^2} + O(n^{-3})\right)\right)\cr
&= \exp\left( n \ln n + \ln n + 1 + \frac{1}{2n} + O(n^{-2})\right) \cr &= n^{n+1} e \left(1 + \frac{1}{2n}+ O(n^{-2})\right)}$$
so
$$ \dfrac{(n+1)^{n+1}}{n^n} = n e + \frac{e}{2} + O(1/n)$$
Replacing $n$ by $n-1$,
$$ \dfrac{n^n}{(n-1)^{n-1}} = (n-1) e + \frac{e}{2} + O(1/(n-1)) = ne - \frac{e}{2} + O(1/n)$$
So
$$ \dfrac{(n+1)^{n+1}}{n^n} - \dfrac{n^n}{(n-1)^{n-1}} = e + O(1/n)$$
If you include more terms, you get
$$ \dfrac{(n+1)^{n+1}}{n^n} - \dfrac{n^n}{(n-1)^{n-1}} = e + \dfrac{e}{24 n^2} + \dfrac{11\; e}{640 n^4} + \ldots $$
Hmm, the denominators of the coefficients seem to be almost the same as OEIS sequence A118051
http://oeis.org/A118051 which comes from the asymptotics of the inverse harmonic numbers.
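The asymptotic expansion $e + \frac{e}{24n^2} + O(n^{-4})$ is easy to check numerically. A short Python sketch (working in logs to avoid overflow for larger $n$; the helper name is mine):

```python
import math

def term(n):
    """(n+1)^(n+1)/n^n - n^n/(n-1)^(n-1), computed via logs for stability."""
    a = math.exp((n + 1) * math.log(n + 1) - n * math.log(n))
    b = math.exp(n * math.log(n) - (n - 1) * math.log(n - 1))
    return a - b

for n in (10, 100, 1000):
    # the last column should approach e/24 ≈ 0.11326
    print(n, term(n), (term(n) - math.e) * n**2)
```

The scaled error column settles near $e/24$, matching the leading correction term in the expansion.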
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\geq \frac{9}{a+b+c} : (a, b, c) > 0$ Please help me for prove this inequality:
$$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\geq \frac{9}{a+b+c} : (a, b, c) > 0$$
| This inequality can also be rewritten as
$$\frac{a+b+c}{3} \geq \frac{3}{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}} \,,$$
which is just the AM-HM inequality.
A more direct proof would be to simply multiply:
$$(a+b+c)(\frac{1}{a}+\frac{1}{b}+\frac{1}{c})=3+ (\frac{a}{b}+\frac{b}{a})+(\frac{a}{c}+\frac{c}{a})+(\frac{c}{b}+\frac{b}{c}) \geq 3+2+2+2 =9$$
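For readers who want to see the inequality in action, here is a minimal Python check over random positive triples (the function name and tolerance are mine); equality holds exactly when $a=b=c$:

```python
import random

def holds(a, b, c, tol=1e-12):
    """Check 1/a + 1/b + 1/c >= 9/(a+b+c) for positive a, b, c,
    with a small tolerance for floating-point rounding."""
    return 1/a + 1/b + 1/c >= 9/(a + b + c) - tol

rng = random.Random(0)
samples = [(rng.uniform(1e-3, 100), rng.uniform(1e-3, 100), rng.uniform(1e-3, 100))
           for _ in range(10_000)]
print(all(holds(a, b, c) for a, b, c in samples))   # True

# equality case: a = b = c gives 3/a on both sides
print(abs((1/5 + 1/5 + 1/5) - 9/15) < 1e-12)        # True
```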
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 4
} |
What's the probability that the other side of the coin is gold? 4 coins are in a bucket: 1 is gold on both sides, 1 is silver on both sides, and 2 are gold on one side and silver on the other side.
I randomly grab a coin from the bucket and see that the side facing me is gold. What is the probability that the other side of the coin is gold?
I had thought that the probability is $\frac{1}{3}$ because there are 3 coins with at least one side of gold, and only 1 of these 3 coins can be gold on the other side. However, I suspect that the sides might be unique, which derails my previous logic.
| If one is careful, one can find an informal but correct argument that gives the right answer. However, it is useful to know how to do a formal conditional probability calculation. One reason is that the intuition can be treacherous.
Let $S$ be the event that the side we see is gold, and let $D$ be the event we have drawn the double-gold coin. We want $\Pr(D|S)$. By a standard formula, we have
$$\Pr(D|S)\Pr(S)=\Pr(D\cap S).\tag{$1$}$$
We first find $\Pr(S)$. The event $S$ can happen in two ways: (i) We drew the double-gold, and the side we see is gold or (ii) We drew a "mixed" coin, and the side we see is gold.
To find the probability of (i), note that with probability $1/4$ we draw the double-gold. If this is the case, then the side we see is gold with probability $1$. So the probability of (i) is $(1/4)(1)$.
To find the probability of (ii), note that with probability $2/4$ we draw a mixed coin. Given that we do, the probability of seeing the gold side is $1/2$. So the probability of (ii) is $(2/4)(1/2)$.
Thus $\Pr(S)=(1/4)(1)+(2/4)(1/2)=1/2$.
Note that $\Pr(D\cap S)$ is just the probability of (i), which is $(1/4)(1)$.
Now from Equation $(1)$ we find that $\Pr(D|S)=\frac{1/4}{1/2}=1/2$.
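The intuition really is treacherous here, so a Monte Carlo simulation is a useful cross-check of the $1/2$ answer. A minimal Python sketch (the setup and names are mine): draw a coin, pick a random face to show, and condition on the shown face being gold.

```python
import random

def simulate(trials=200_000, seed=42):
    """Estimate P(other side gold | shown side gold) for the 4-coin bucket."""
    rng = random.Random(seed)
    coins = [("G", "G"), ("S", "S"), ("G", "S"), ("G", "S")]
    gold_seen = gold_both = 0
    for _ in range(trials):
        coin = rng.choice(coins)
        side = rng.randrange(2)          # which face is toward us
        if coin[side] == "G":
            gold_seen += 1
            if coin[1 - side] == "G":
                gold_both += 1
    return gold_both / gold_seen

print(simulate())   # ≈ 0.5, not 1/3
```

Conditioning on the *shown face* rather than on the *coin* is exactly what the formal calculation above does, and it is what the naive $1/3$ argument misses.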
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 2
} |
How can I get sequence $4,4,2,4,4,2,4,4,2\ldots$ into equation? How can I write an equation that expresses the nth term of the sequence:
$$4, 4, 2, 4, 4, 2, 4, 4, 2, 4, 4, 2,\ldots$$
| $$4-2\cdot\mathbf 1_{3\mid n}\qquad\text{or}\qquad 2+2\cdot\mathbf 1_{\gcd(3,n)=1}$$
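Both closed forms are one-liners in code, since an indicator function is just a boolean. A quick Python check (function names are mine; $n$ starts at 1):

```python
import math

def a(n):
    """4 - 2·1_{3|n}: equals 2 when 3 divides n, else 4."""
    return 4 - 2 * (n % 3 == 0)

def a_gcd(n):
    """2 + 2·1_{gcd(3,n)=1}: the second closed form from the answer."""
    return 2 + 2 * (math.gcd(3, n) == 1)

print([a(n) for n in range(1, 13)])
# [4, 4, 2, 4, 4, 2, 4, 4, 2, 4, 4, 2]
```

In Python, `True` and `False` behave as `1` and `0` in arithmetic, which makes the indicator notation translate literally.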
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/193897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 17,
"answer_id": 10
} |