$\operatorname{lcm}(n,m,p)\times \gcd(m,n) \times \gcd(n,p) \times \gcd(n,p)= nmp \times \gcd(n,m,p)$, solve for $n,m,p$? $\newcommand{\lcm}{\operatorname{lcm}}$
I saw this in the first Moscow Olympiad of Mathematics (1935), the equation was :
$$\lcm(n,m,p)\times \gcd(m,n) \times \gcd(n,p)^2 = nmp \times \gcd(n,m,p)$$
My attempt :
I've multiplied both sides of the equation by $\frac{1}{\gcd(n,m,p)}$ to get this (I don't know why I did):
$$\frac{\lcm(n,m,p)}{\gcd(n,m,p)}\times \gcd(m,n) \times \gcd(n,p)^2 = nmp$$
then I've multiplied both sides by $\gcd(n,m,p)$; I got this, but I actually get stuck here:
$$\frac{nmp\times \cancel{nmp}\times \gcd(m,n) \times \gcd(n,p)^2}{\gcd(n,m,p)}=\cancel{nmp}$$
Finally:
$$\gcd(m,n)\times \gcd(n,p)^2\times nmp=\gcd(n,m,p).$$
|
Using the standard trick: $$d=\gcd(n, m, p), u=\frac{\gcd(n, m)}{\gcd(n, m, p)}, v=\frac{\gcd(n, p)}{\gcd(n, m, p)}, w=\frac{\gcd(m, p)}{\gcd(n, m, p)}$$
we may write$\newcommand{\lcm}{\operatorname{lcm}}$
$$n=duvn_1, m=duwm_1, p=dvwp_1$$
where
$$\gcd(vn_1, wm_1)=\gcd(un_1, wp_1)=\gcd(um_1, vp_1)=1$$
This gives
$$\gcd(u, v)=\gcd(u, w)=\gcd(v, w)=1$$
$$\gcd(n_1, m_1)=\gcd(n_1, p_1)=\gcd(m_1, p_1)=1$$
$$\gcd(u, p_1)=\gcd(v, m_1)=\gcd(w, n_1)=1$$
Then we have $$\lcm(n, m, p)=duvwn_1m_1p_1, \gcd(m, n)=du, \gcd(n, p)=dv, nmp=d^3u^2v^2w^2n_1m_1p_1$$
Thus the equation becomes
$$(duvwn_1m_1p_1)(du)(dv)^2=(d^3u^2v^2w^2n_1m_1p_1)d$$
Cancelling the common factor $d^4u^2v^2wn_1m_1p_1$ from both sides, this reduces to
$$v=w$$
Since $\gcd(v, w)=1$, $v=w=1$. Conversely if $v=w=1$ and all the $\gcd$ conditions above hold, then the given equation holds.
Thus all solutions are given by
$$n=dun_1, m=dum_1, p=dp_1$$
where $d, u, n_1, m_1, p_1$ are any positive integers satisfying
$$\gcd(u, p_1)=\gcd(n_1, m_1)=\gcd(n_1, p_1)=\gcd(m_1, p_1)=1$$
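As a sanity check of this characterization, here is a minimal brute-force sketch (not part of the original answer; the bound $24$ is an arbitrary choice). It verifies that the equation holds exactly when $\gcd(n,p)=\gcd(m,p)=\gcd(n,m,p)$, i.e. $v=w=1$:

```python
# Brute-force check: equation holds iff gcd(n,p) = gcd(m,p) = gcd(n,m,p).
from math import gcd
from itertools import product

def lcm3(a, b, c):
    l = a * b // gcd(a, b)
    return l * c // gcd(l, c)

for n, m, p in product(range(1, 25), repeat=3):
    d = gcd(gcd(n, m), p)
    lhs = lcm3(n, m, p) * gcd(m, n) * gcd(n, p) ** 2
    rhs = n * m * p * d
    assert (lhs == rhs) == (gcd(n, p) == d and gcd(m, p) == d)
print("verified for all n, m, p up to 24")
```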
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/790526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Equivalent definitions of vector field There are two definitions of a vector field on a smooth manifold $M$.
* A smooth map $V:M \rightarrow TM$ with $V(p) \in T_p M$ for all $p \in M$.
* A linear map $V:C^{\infty}(M) \rightarrow C^{\infty}(M)$ with $V(fg)=fV(g)+gV(f)$ for all $f,g\in C^{\infty}(M)$.
I can't understand why they are equivalent. We must somehow build $2$ maps and show that their composition is $\mathrm{id}$, but I don't have any ideas how. Please help.
|
This depends heavily on your definition of the tangent space $T_{p}M$, and thus the tangent bundle $TM$. There are several equivalent ways of defining it. Which book are you following?
If your definition of the tangent space $T_{p}M$ is a vector space of linear maps $V : C^{\infty}(p)\to\mathbb{R}$ that satisfy the Leibniz rule, i.e. $$V(fg)=f(p)V(g)+g(p)V(f),$$ where $C^{\infty}(p)$ is defined as
\begin{align*}
C^{\infty}(p)=\{f:U\to\mathbb{R}\,\,|\,\,f\,\,\mathrm{is}\,\,\mathrm{smooth}\,\,\mathrm{at}\,\,p\in U\,\,\mathrm{and}\,\,U\subseteq M\,\,\mathrm{is}\,\,\mathrm{open}\},
\end{align*}
as is usually done, then this exercise is pretty straightforward.
Get a map $\Psi:(1)\to (2)$ as follows. For each $V$ satisfying $(1)$ assign a linear map $\Psi(V)$ satisfying $(2)$ by taking $\Psi(V)(f)(p)=V(p)(f)$ for all $f\in C^{\infty}(p)$ and $p\in M$. Show that this is one-to-one and onto, or alternatively define an inverse $\Phi:(2)\to (1)$ by assigning for each $V$ satisfying $(2)$ a smooth map $\Phi(V)$ satisfying $(1)$ by taking $\Phi(V)(p)(f)=V(f)(p)$ for all $p\in M$ and $f\in C^{\infty}(p)$. So you get that the two definitions are equivalent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/790626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
}
|
simplify $\sqrt[3]{11+\sqrt{57}}$ I read in a book (A Synopsis of Elementary Results in Pure and Applied Mathematics) that the condition to simplify the expression $\sqrt[3]{a+\sqrt{b}}$ is that $a^2-b$ must be a perfect cube.
For example $\sqrt[3]{10+6\sqrt{3}}$ where $a^2-b=(10)^2-(6 \sqrt{3})^2=100-108=-8$ and $\sqrt[3]{-8} = -2$.
So the condition is satisfied and indeed $\sqrt[3]{10+6\sqrt{3}}=\sqrt[3]{(\sqrt{3}+1)^3}=\sqrt{3}+1$.
But consider the example $\sqrt[3]{11+\sqrt{57}}$, where $a^2-b = (11)^2-57=121-57=64$ and $\sqrt[3]{64}=4$, so the condition is satisfied.
But I can't simplify this expression. Please help me solve this problem. Note: we face this situation in many examples.
|
It is not a sufficient condition (I don't know if it's necessary). Not all expressions of the form $\sqrt[3]{a+\sqrt{b}}$, satisfying the condition that $a^2-b$ is a perfect cube, can be simplified.
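To back this up numerically, here is a small brute-force sketch (the search ranges are arbitrary assumptions): if $\sqrt[3]{11+\sqrt{57}}$ simplified to $x+y\sqrt{57}$ with rational $x,y$, expanding the cube would force the two equations checked below, and no small rational pair satisfies them:

```python
# If (x + y*sqrt(57))**3 = 11 + sqrt(57), then matching rational and
# irrational parts forces the two equations tested below.
from fractions import Fraction
from itertools import product

hits = []
for p, q, r, s in product(range(-12, 13), range(1, 7), range(-12, 13), range(1, 7)):
    x, y = Fraction(p, q), Fraction(r, s)
    # (x + y*sqrt(57))**3 = (x**3 + 171*x*y**2) + (3*x**2*y + 57*y**3)*sqrt(57)
    if x**3 + 171 * x * y**2 == 11 and 3 * x**2 * y + 57 * y**3 == 1:
        hits.append((x, y))
print(hits)  # []: no small rational x, y work, unlike the 10 + 6*sqrt(3) case
```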
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/790738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
How to evaluate $\cos(\frac{5\pi}{8})$? I'm sorry, I don't know the way to input pi (3.14); I don't have the symbol on my PC.
|
Recall the identity
$$\cos 2\theta=2\cos^2 \theta-1.\tag{1}$$
Let $\theta=\frac{5\pi}{8}$. Then $2\theta=\frac{5\pi}{4}$, so $\cos 2\theta=-\frac{1}{\sqrt{2}}$, and $(1)$ gives $\cos^2\theta=\frac{1+\cos 2\theta}{2}=\frac{2-\sqrt{2}}{4}$. To finish, note that $\cos(5\pi/8)$ is negative, so $\cos(5\pi/8)=-\frac{\sqrt{2-\sqrt{2}}}{2}$.
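A quick numeric check of the closed form (a small Python sketch):

```python
# Both values should agree.
import math

print(math.cos(5 * math.pi / 8))          # -0.3826834...
print(-math.sqrt(2 - math.sqrt(2)) / 2)   # -0.3826834...
```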
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/790803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
How do I prove $\csc^4 x-\cot^4x=(1+\cos^2x)/\sin^2x$ How do I prove $\csc^4 x-\cot^4x=\dfrac{(1+\cos^2x)}{\sin^2x}$
Do you start from the RHS or the LHS? I get stuck after the first few steps.
|
The LHS:
$$\frac{1}{\sin^4 x}-\frac{\cos^4 x}{\sin^4 x}=\frac{1-\cos^4 x}{\sin^4 x}=\frac{(1-\cos^2 x)(1+\cos^2 x)}{\sin^4 x}=\frac{1+\cos^2 x}{\sin^2 x}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/790921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Why are cosine and sine used to solve this differential equation? $$
\frac{d^2 u}{dt^2}+\lambda u =0
$$
Why are cosine and sine used to solve this differential equation of second order?
|
Apart from the mathematical theory, and assuming $\lambda >0$ since otherwise no sine/cosine is involved, this is a rough argument: you are looking for a function whose second derivative is a negative multiple of the function itself. Since the second derivative of the sine is minus the sine and the second derivative of the cosine is minus the cosine, it is natural to manipulate $\sin (\cdot)$ and $\cos (\cdot)$ to find a true solution.
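For a concrete check (a sketch, assuming $\lambda>0$ and writing $\omega=\sqrt{\lambda}$): the general solution built from sine and cosine satisfies the equation term by term,
$$u(t)=A\cos(\omega t)+B\sin(\omega t),\qquad u''(t)=-\omega^{2}u(t)=-\lambda\,u(t).$$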
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/791016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
What is the radius of the circle?
Please help with this grade nine math problem. How does one calculate the radius if the two legs of the right triangle are each 85 cm? The sides of the triangle are tangent to the circle.
|
It's useful to realize that the "left" and "right" radii, as drawn in the above picture, will be parallel to the respective legs.
Then you get:
$$C=\sqrt{A^2+A^2}=\sqrt{2}A$$
The height of the triangle is then:
$$h=\sqrt{A^2-\left(\frac{C}{2}\right)^2}=\sqrt{A^2-\frac{A^2}{2}}=\frac{1}{\sqrt{2}}A$$
Define x-axis along the base of the triangle and y-axis along the height.
Unit vectors at a 45° angle to the x-axis are given by:
$$\vec{u}_1=\frac{1}{\sqrt{2}}\left({1}\atop{1}\right)~~~~~,~~~~~\vec{u}_2=\frac{1}{\sqrt{2}}\left({1}\atop{-1}\right)$$
You can check $\vec{u}\cdot\vec{u}=1$.
Now use that the distance from any of the two 45° angles to the two nearest spots where the circle touches the triangle is the same, namely $C/2=A/\sqrt{2}$.
With this you can establish a vectorial relation between the following vectors:
$$h\vec{e}_y+\left(A-\frac{C}{2}\right)\vec{u}_2=R\vec{e}_y+R\vec{u}_1$$
Where $\vec{e}_y=(0,1)$ is the unit vector along the y-axis. This gives you two equations.
The y-axis equation is:
$$h-\frac{1}{\sqrt{2}}\left(A-\frac{C}{2}\right)=R+\frac{1}{\sqrt{2}}R\\\frac{A}{\sqrt{2}}-\frac{1}{\sqrt{2}}\left(A-\frac{A}{\sqrt{2}}\right)=\left(1+\frac{1}{\sqrt{2}}\right)R\\\frac{A}{2}=\left(1+\frac{1}{\sqrt{2}}\right)R\\R=\frac{A}{2+\sqrt{2}}$$
The x-axis equation is:
$$\frac{1}{\sqrt{2}}\left(A-\frac{C}{2}\right)=\frac{1}{\sqrt{2}}R\\\left(1-\frac{1}{\sqrt{2}}\right)A=R\\R=\frac{\left(1-\frac{1}{\sqrt{2}}\right)\left(2+\sqrt{2}\right)}{2+\sqrt{2}}A\\R=\frac{2-\sqrt{2}+\sqrt{2}-1}{2+\sqrt{2}}A\\R=\frac{A}{2+\sqrt{2}}$$
Both answers properly agree, so that the world is a happy and sunny place.
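As one more cross-check, a short sketch comparing the result above with the standard incircle formula $r=(a+b-c)/2$ for a right triangle (which is not part of the derivation above, just a known shortcut):

```python
# Both formulas should give the same radius for legs A = 85 cm.
import math

A = 85.0                           # each leg, in cm
c = math.sqrt(2) * A               # hypotenuse
r_above = A / (2 + math.sqrt(2))   # result derived above
r_incircle = (A + A - c) / 2       # standard incircle formula
print(r_above, r_incircle)         # both ~24.896 cm
```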
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/791227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Corollary of identity theorem without connectedness assumption The identity theorem has a corollary, which is often stated as "If $U$ is a connected domain, $f,g$ are analytic in $U$, and the set of points where $f$ and $g$ coincide has a limit point in $U$, then $f=g.$"
The proof runs by showing that the set $L$ of limit points of $\{z \in U : f(z) = 0\}$ is closed and open. Since $U$ is connected, $L$ is either empty or all of $U$. Since $L$ is nonempty (there is a limit point in $U$), we conclude that $L = U$; that is, $f(z) = 0$ on all of $U$. By replacing "$f(z)$" with "$(f-g)(z)$", the conclusion follows.
However, I have also seen this corollary stated without the connectedness assumption; that is, "If $f,g$ are analytic on some $U$, and the set of points where $f$ and $g$ coincide has a limit point in $U$, then $f=g.$" Wikipedia seems to confirm this in the "An Improvement" section: http://en.wikipedia.org/wiki/Identity_theorem
However, I'm having a hard time proving the corollary without the usual assumption of connectedness. Also, I don't understand Wikipedia's proof, because it just seems to show (in the end) that $f$ and $g$ must be equal in a neighborhood of the limit point in $U$ (by showing that $f^{(k)}(c) = g^{(k)}(c)$ for all $k \ge 0$, where $c$ is the limit point).
Any help would be appreciated.
|
Where have you seen it stated without connectedness? As far as I understand, a domain = an open connected set. "Specifically, if two holomorphic functions f and g on a domain D agree on a set S which has an accumulation point c in D then f = g on all of D." You're still using connectedness, but instead of $f$ and $g$ being equal on a whole subset, they're equal on a sequence with a limit point.
http://en.wikipedia.org/wiki/Domain_%28mathematical_analysis%29
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/791279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
combinatorial acquaintanceship problem A group of people went hiking.
It's given that if we pick any four of them, then at least one of the four knows everybody in that quad.
We have to prove that in the group everybody knows everybody else, except for at most 3 people.
I tried to sketch the problem for the case when the group size is 5, to get some idea of how it might be possible to prove it. The statement holds, but I really don't know a way to prove it.
I would appreciate some insights, thanks.
|
Hint:
You cannot have four distinct people with "$A$ and $B$ do not know each other" and "$C$ and $D$ do not know each other", as this contradicts "if we pick any four of them, then at least one knows everybody in that quad".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/791341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
$f\circ g=g\circ f$ + increasing $\Rightarrow$ common fixed point.
Let $f,g:[a,b] \to [a,b]$ be monotonically increasing functions
such that $f\circ g=g\circ f$
Prove that $f$ and $g$ have a common fixed point.
I found this problem in a problem set, it's quite similar to this Every increasing function from a certain set to itself has at least one fixed point but I can't solve it.
I think it's one of those tricky problems where you need to consider a given set and use the LUB... I think $\{x \in [a,b]\,/\, x < f(x) \text{ and } x< g(x) \}$ is a good one.
Any hint ?
|
Let $A=\{x \in [a,b]\,/\, x \leq f(x) \; \text{and} \; x \leq g(x) \}$
* $a\in A$
* Let $u=\sup A$
* Let us prove that $f(u)$ and $g(u)$ are upper bounds for $A$
Indeed let $x\in A$.
Then $x\leq u$. Hence $f(x) \leq f(u)$, thus $x\leq f(x) \leq f(u)$ and finally $x\leq f(u)$
In the same way, $x\leq g(u)$
* Therefore, by the LUB definition, $\color{red}{ u\leq f(u)}$ and $\color{red}{ u\leq g(u)}$
* Then, applying $f$ to $u\leq g(u)$ and using $f\circ g=g\circ f$: $f(u) \leq f(g(u))=g(f(u))$
* In the same way, applying $f$ to $u\leq f(u)$: $f(u)\leq f(f(u))$
Therefore, $f(u) \in A$ (see the inequalities in the last two bulleted points)
Similarly, $g(u) \in A$.
* By the LUB definition, $\color{red}{ f(u) \leq u}$ and $\color{red}{ g(u) \leq u}$
Combining the red inequalities, $f(u)=u=g(u)$: $u$ is a common fixed point.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/791535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Projection onto the column space of an orthogonal matrix The projection of a vector $v$ onto the column space of A is
$$A(A^T A)^{-1}A^T v$$
If the columns of $A$ are orthogonal, does the projection just become $A^Tv$? I think it should because geometrically you just want to take the dot product with each of the columns of $A$. But how can I show this is true?
|
No. If the columns of $A$ are orthonormal, then $A^T A=I$, the identity matrix, so you get the solution as $A A^T v$.
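A minimal numpy sketch of this reduction (the random test matrix is an arbitrary choice):

```python
# For a matrix Q with orthonormal columns, the general projector
# Q (Q^T Q)^{-1} Q^T reduces to Q Q^T.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))  # orthonormal columns
v = rng.standard_normal(5)

p_general = Q @ np.linalg.inv(Q.T @ Q) @ Q.T @ v
p_short = Q @ (Q.T @ v)
print(np.allclose(p_general, p_short))  # True
```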
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/791657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
The inequality about recurrence sequence Sequence $(x_n)$ is difined
$x_1=\frac {1}{100}, x_n=-{x_{n-1}}^2+2x_{n-1}, n\ge2$
Prove that
$$\sum_{n=1}^\infty [(x_{n+1}-x_n)^2+(x_{n+1}-x_n)(x_{n+2}-x_{n+1})]\lt \frac {1}{3} $$
I found relation $(1-x_n)=(1-x_{n-1})^2$
I don't know what to do next.
There is a real number which is less than $\frac {1}{3}$?
I need your help.
|
A direct proof (note that I've shifted indices from starting at 1 to 0):
First, notice that $x_n\to1$ is the only possible limit. ($x=-x^2+2x \implies (x-1)^2=0$)
[edit]
The obvious mistake in my algebra was pointed out -- $x^2=x$ so $x=0$ or $1$. Recentering about $x=0$ doesn't change the recurrence, but recentering about $x=1$, as you can see below, yields a relation that can be solved by inspection.[/edit]
When in doubt, recenter your system about the fixed point; define $a_n=1-x_n$. Then,
$$ a_{n+1} = 1-x_{n+1} = 1-[-x_n^2+2x_n] = 1+x_n^2-2x_n=(1-x_n)^2=a_n^2. $$
This allows us to solve the system exactly; $a_n = a_0^{2^n}$, or $1-x_n=(1-x_0)^{2^n}$.
It should now be a matter of algebra to evaluate the sum.
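A numerical sanity check of the bound (a sketch, not a proof; 60 iterations is an arbitrary cutoff, since the sequence converges double-exponentially):

```python
# Evaluate the sum for x_1 = 1/100; a_n = 1 - x_n satisfies a_{n+1} = a_n**2.
x = 1 / 100
xs = [x]
for _ in range(60):
    x = -x * x + 2 * x
    xs.append(x)
d = [xs[i + 1] - xs[i] for i in range(len(xs) - 1)]
s = sum(d[i] ** 2 + d[i] * d[i + 1] for i in range(len(d) - 1))
print(s, s < 1 / 3)   # ~0.3218 True
```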
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/791751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How and in what context are polynomials considered equal? There's two notions of equivalent polynomials floating around, one saying that $f = g$ iff they're equivalent as maps, and the other saying $f = g$ iff they're equal on each coefficient when written in standard form.
I'm interested in polynomials over a finite field, irreducible polynomials and factoring so what type of equivalence should I use? For instance if we take map equivalence, then there are only a finite number of polynomials. And that makes a huge difference!
Please explain when it's okay to use what.
|
In abstract algebra polynomials over a ring $R$ are defined as formal sums
$$
\sum_{k=0}^N a_k X^k
$$
where $X$ is a formal variable and all $a_k\in R$. To make this precise, we can also model polynomials as sequences $(a_0, a_1, \dots)$ where all but finitely many $a_i$ are zero. Addition and multiplications is then given by
$$
(a_0, a_1, \dots) + (b_0, b_1, \dots) := (a_0+b_0, a_1+b_1, \dots)
$$
and
$$(a_0, a_1, \dots) \cdot (b_0, b_1, \dots) := (c_0, c_1, \dots),\quad\text{where}\ c_k = \sum_{i=0}^k a_i b_{k-i}.$$
This corresponds to the usual coefficient-wise addition of polynomials and the Cauchy product formula
$$
\left(\sum_{k=0}^\infty a_k X^k \right) \cdot \left(\sum_{k=0}^\infty b_k X^k \right) = \sum_{k=0}^\infty c_k X^k,\quad\mathrm{where}\ c_k=\sum_{i=0}^k a_i b_{k-i}.
$$
Letting $X=(0,1,0,\dots)$ we obtain the usual representation of polynomials. So two polynomials are equal if the underlying tuples are equal which is the case if and only if all coefficients are equal.
Note that we can easily drop the "all but finitely many $a_i=0$" requirement and still get a ring structure with the above operations. This is the ring $R[[X]]$ of formal power series, while the ring of polynomials is usually denoted $R[X]$.
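Regarding the finite-field part of the question, here is a tiny sketch of why the two notions differ there: over $\mathbb F_2$ the polynomial $t^2+t$ is zero as a map but not as a formal sum.

```python
# Over GF(2), t**2 + t vanishes at every point of the field...
p = 2
f = lambda t: (t**2 + t) % p
print([f(t) for t in range(p)])  # [0, 0]: the zero map on GF(2)
# ...yet its coefficient tuple (0, 1, 1) is not zero, so t**2 + t is a
# nonzero element of GF(2)[t] even though it induces the zero function.
```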
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/791845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Richardson Extrapolation Matlab Code: Example and try-out code included. Currently I am studying Numerical Methods in Matlab, and I need a Matlab code which would calculate a Richardson extrapolation using data given in a table, respectively for x and f(x).
For example: Use the table below to calculate f'(0) as accurately as possible.
Our professor suggested to use the find function in Matlab, as I did, but so far I only get an empty matrix as a result.
function G = richEx(x,y,xp)
h=x(2)-x(1);
h2=h;
h1=h2*2;
g1 = (-3 * (find(y==(xp))) + 4 * (find(y==(xp + h1))) - (find(y==(xp + 2*h1)))) / 2*h1;
g2 = (-3 * (find(y==(xp))) + 4 * (find(y==(xp + h2))) - (find(y==(xp + 2*h2)))) / 2 * h2;
G=(4*g2-g1)/3;
Can someone help me? Thank you.
|
Your find method is returning empty matrices because it's looking for values of y equal to some condition.
For instance, find(y == xp) looks for values of y equal to xp and it returns the index. You haven't told us what xp is, but chances are, there aren't any values of y that equal xp.
Furthermore, find returns the index of the value. So find([4 5 6]==4) returns 1.
I would re-visit the Richardson extrapolation algorithm, because your code does not reflect what the algorithm does.
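For reference, here is a hedged Python sketch of what the intended computation seems to be (the helper name rich_ex and the index lookup are my assumptions, not the poster's code): look up the index of xp in the x table instead of searching y, then combine the $h$ and $2h$ forward-difference estimates.

```python
# Richardson extrapolation of a 3-point forward difference on a uniform table.
import numpy as np

def rich_ex(x, y, xp):
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = x[1] - x[0]
    i = int(np.argmin(np.abs(x - xp)))   # index of xp in the x table
    def fwd(step):                       # 3-point forward difference, spacing step*h
        return (-3 * y[i] + 4 * y[i + step] - y[i + 2 * step]) / (2 * step * h)
    g1, g2 = fwd(2), fwd(1)              # coarse (2h) and fine (h) estimates
    return (4 * g2 - g1) / 3             # Richardson combination

# e.g. f(x) = x**3 tabulated on [0, 1]: f'(0) = 0
x = np.linspace(0, 1, 11)
print(rich_ex(x, x**3, 0.0))             # ~0 (up to rounding)
```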
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/791949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding the eigenvalue and eigenvector of a matrix Confirm by multiplication that x is an eigenvector of A, and find the corresponding eigenvalue.
Given:
\begin{align}
A = \begin{pmatrix} 1&2\\3&2\\\end{pmatrix}, &&
x = \begin{pmatrix} 1\\-1\\\end{pmatrix}
\end{align}
I know: $Ax = \lambda x$
My work:
I know $\lambda I - A$
\begin{pmatrix} \lambda - 1&-2\\-3&\lambda - 2\\\end{pmatrix}
From there I know the characteristic polynomial is $\lambda^2 - 3\lambda - 4 = 0$ through ad-bc (since this is a 2 x 2 matrix)
I can successively try out each factor of the constant term (which is $-4$): positive and negative $1, 2, 4$.
$4$ turns out to work. So $\lambda - 4 = 0$, and thus $\lambda = 4$.
I also know I can divide the characteristic polynomial by $\lambda - 4$ and get $\lambda + 1$; setting $\lambda + 1 = 0$ gives $\lambda = -1$.
Answer: So I got two eigenvalues, which are $-1$ and $4$.
Dilemma I am having with eigenvector:
The problem is I am not sure if the given eigenvector applies for both the left and right side of the equation Ax = $\lambda$x. Or is it just the left side?
Work I have done using the given eigenvector x:
I know Ax = $\lambda$x
\begin{align}
\begin{pmatrix} 1&2\\3&2\\\end{pmatrix} \cdot \begin{pmatrix} 1\\-1\\\end{pmatrix} = \begin{pmatrix} 1(1) + 2(-1)\\3(1)+2(-1)\\\end{pmatrix} = \begin{pmatrix}
-1\\1\\\end{pmatrix} = Ax.
\end{align}
Problem I am facing:
What do I do after this step? Do I use the given value of the eigenvector $x$ on the right side of the equation $Ax = \lambda x$ along with the eigenvalue I find to see if the equation satisfies itself? How do I know if the given eigenvector is actually correct?
|
The directions are confirm by multiplication. All you need do is compute $Ax$ for the given $A$ and $x$, then compare that result to the given $x$: here $Ax = \begin{pmatrix}-1\\1\end{pmatrix} = -x$, so $x$ is indeed an eigenvector, with eigenvalue $\lambda=-1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/792012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
How to finish this integration? I'm working with the integral below, but not sure how to finish it...
$$\int \frac{3x^3}{\sqrt[3]{x^4+1}}\,dx = \int \frac{3x^3}{\sqrt[3]{A}}\cdot \frac{dA}{4x^3} = \frac{3}{4} \int \frac{dA}{\sqrt[3]{A}} = \frac{3}{4}\cdot\quad???$$
where $A=x^4+1$ and so $dA=4x^3\,dx$
|
$$\dfrac{1}{\sqrt[\large 3]{A}} = \dfrac 1{A^{1/3}} = A^{-1/3}$$
Now use the power rule.
$$ \frac{3}{4} \int A^{-1/3}\,dA = \frac 34 \dfrac {A^{2/3}}{\frac 23} + C = \dfrac 98 A^{2/3} + C = \dfrac 98 \left(x^4+1\right)^{2/3} + C$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/792086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Integral $\int \operatorname{sech}^4 x \, dx$ How we can solve this?$\newcommand{\sech}{\operatorname{sech}}$
$$
\int \sech^4 x \, dx.
$$
I know we can solve the simple case
$$
\int \sech \, dx=\int\frac{dx}{\cosh x}=\int\frac{dx\cosh x}{\cosh ^2x}=\int\frac{d(\sinh x)}{1+\sinh^2 x}=\int \frac{du}{1+u^2}=\tan^{-1}\sinh x+C.
$$
I am stuck with the $\sech^4$ though. Thank you
|
Note that
$$
\int \DeclareMathOperator{sech}{sech}{\sech}^4x\,dx=\int{\sech}^2{x}\cdot(1-\tanh^2x)\,dx
$$
Letting $u=\tanh x$ gives $du={\sech}^2x\,dx$, so
$$
\int{\sech}^4x\,dx=\int(1-u^2)\,du=u-\frac{u^3}{3}+C=\tanh x-\frac{1}{3}\tanh^3x+C
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/792160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
If the sum of two independent random variables is in $L^2$, is it true that both of them are in $L^1$? Let $X$ and $Y$ be two independent random variables. If $\mathbb E(X+Y)^2 < \infty$, do we have $\mathbb E |X| < \infty$ and $\mathbb E |Y| < \infty$?
What I actually want is that $X$ and $Y$ are both in $L^2$, i.e., $\mathbb E X^2 < \infty$ and $\mathbb E Y^2 < \infty$. But this can be reduced to showing $\mathbb E |X| < \infty$ and $\mathbb E |Y| < \infty$. Thus it suffices to show that $X$ and $Y$ are in $L^1$.
It might be helpful to see that since $|X| \le |Y| + |X+Y|$ (and by symmetry), either both of $X$ and $Y$ are in $L^1$ or neither is. So we may assume $\mathbb E |X| = \infty$ and $\mathbb E |Y| = \infty$ and try to find a contradiction. But here is where I got stuck.
|
If they have finite means $x=E X < \infty$ and $y= E Y <\infty$, then yes:
$E(X+Y)^2 = E(X^2)+E(Y^2)+2 E(X)E(Y) = E(X^2)+E(Y^2)+2xy < \infty$, using independence. If not, I am afraid one can find a pathological case where this is not true.
Let me update myself. I think the answer given by David Giraudo is correct; let me expand it a little. $E(X+y)^2<\infty$ for some constant $y$ means, by Cauchy-Schwarz on a probability space,
that $E|X+y|<\infty$, i.e. $E(X)+y<\infty$ for this $y$. This means that $E(X)<\infty$. This means that $E(X+y)^2=E(X^2)+2E(X)y+y^2 <\infty$. So $E(X^2)<\infty$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/792246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Why is $\cos (90)=-0.4$ in WebGL? I'm a graphical artist who is completely out of my depth on this site.
However, I'm dabbling in WebGL (3D software for internet browsers) and trying to animate a bouncing ball.
Apparently we can use trigonometry to create nice smooth curves.
Unfortunately, I just cannot see why.
I can accept this diagram:
However, running some calculations just do not make sense to me:
Let's set $\alpha$ to 45 (around where it appears to be in the diagram) and find the cosine value, thus giving us the green line. $$\cos(45) = 0.5$$
Fair enough. $\cos(\alpha)$ / the green line is $0.5$ units.
But now this is where it all falls apart. I would have thought if we set $\alpha$ to $90$, $\cos$ would become $0$. Do you see why I think this? Look at the diagram, isn't that reasonable to think? Similarly, $\cos(0)$ I would have said should equal $1$ (twice that of $\cos(45)$)
While $\cos(0)$ does equal $1$, this does not check out: $$\cos(90) = -0.4$$
I just do not get that $0.4$. Could someone explain? That just makes no sense to me. None.
I'm using the google calculator and I would stress I have not touched maths for about $6$ years (ever since I left school!) so please lots of examples and words to explain!
|
Wrong unit.
You talk about degrees, while the function expects radians: $\cos(90\text{ rad})\approx-0.448$, which is the $-0.4$ you saw. Convert first, and you get what you expected: $\cos(90^\circ)=\cos(90\cdot\pi/180)=0$.
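A quick check of the unit confusion (Python's math.cos, like most math libraries and WebGL shader functions, expects radians):

```python
import math

print(math.cos(90))                    # -0.448...: this is 90 *radians*
print(math.cos(math.radians(90)))      # ~0.0: 90 degrees, converted
print(math.cos(math.radians(45)))      # ~0.707: cos of 45 degrees
```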
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/792365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "123",
"answer_count": 9,
"answer_id": 8
}
|
Word for "openness"/"closedness" of an interval What word properly completes the phrase
the radius of convergence does not depend on the $\text{______}$ of the interval
to mean that it doesn't matter whether $(a, b)$, $[a, b)$, $(a, b]$, or $[a, b]$ is the correct answer?
* Openness and closedness don't really seem to work because the interval doesn't have to be either (it could be half-open, or, in $\mathbb{R}^n$, include any subset of its limit points).
* Strictness makes sense, because you can say that $2$, and not $3$, is "strictly between" $1$ and $3$. However, this only really makes sense (to me) once you know the meaning; if I saw the word strictness I wouldn't really know what it meant.
* Boundary and endpoints don't work because the boundary does matter—we care what $a$ and $b$ are, just not whether they're included in the interval.
This is for a Calculus II class, so topology, etc. are outside the scope of the curriculum.
Thoughts?
|
"The radius of convergence does not depend on whether the interval is open, closed, or neither."
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/792485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Curse of Dimensionality ... as illustrated by Christopher Bishop I'm reading Christopher Bishop's book "Neural Networks for Pattern Recognition". I'm on pg 7 about curse of dimensionality.
Here is the relevant part:
For simplicity assume the dimensionality we are working with is 3. Now "divide the input variables $x_1, \dots x_d$ into M intervals, so that the value of a variable can be specified approximately by saying in which interval it lies. This leads to a division of the whole input space into a large number of [3D] boxes or cells ...
Each of the training examples corresponds to a point in one of the cells , and carries an associated value of the output variable $y$. ... [To find $y$ for a given point], by finding which cell the point falls in, and then returning the average value of y for all training points that lie in that cell."
The claim is that if each input variable is divided into $M$ divisions, then the total number of cells is $M^d$.
First and foremost, why is this true? Why do we need $M^d$ cells? Secondly, what does it mean to divide an input variable into intervals or divisions? (I assume an input variable is $x_i$ from $x_1, \dots, x_d$ for $1\leq i \leq d$.)
I'm interested in making sure that I understand this because this seems (at least to me) a clever way of thinking about the curse of dimensionality.
|
Let's be explicit and consider $M=3$. If $d=1$, then you divide a segment into three subsegments. For example you divide the interval $[0,1]$ into $[0,1/3)$, $(1/3,2/3]$, and $(2/3,1]$.
If $d=2$, then you divide a square into thirds horizontally and also vertically, so there are $9$ areas. If $d=3$, then you divide a cube into thirds in all three directions, like a Rubik's cube, leading to $3^3=27$ volumes. That's why you need $M^d$ cells.
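A tiny illustration of the $M^d$ count (a sketch: each cell is labeled by a $d$-tuple of interval indices, one index per input variable):

```python
# There are M choices of interval per coordinate, hence M**d cells in total.
from itertools import product

M, d = 3, 3
cells = list(product(range(M), repeat=d))
print(len(cells), M**d)  # 27 27
```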
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/792559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Taylor series of a division-by-zero equation I need to calculate taylor series of $(\frac1{t^3}+\frac3{t^2})^{1/3} - \sqrt{(\frac1{t^2}-\frac2{t})}$ at $t = 0$
to calculate limit $(\frac1{t^3}+\frac3{t^2})^{1/3} - \sqrt{(\frac1{t^2}-\frac2{t})}$ as $t \rightarrow 0$
I got a division-by-zero error at $t = 0$. However, other algebra tools such as WolframAlpha and Symbolab give me an answer. (Please take a look at the link below.)
http://www.wolframalpha.com/input/?i=taylor+%28%281%2Ft%5E3%29%2B3%2Ft%5E2%29%5E%281%2F3%29+-+%281%2Ft%5E2-2%2Ft%29%5E%281%2F2%29+at+t+%3D+0
Does anyone know how to get the result?
Thanks for reading the question.
|
Remember that when $x$ is small compared to $1$, $(1+x)^n \simeq (1+n~x)$. So $$(1+3t)^{1/3} \simeq 1+t$$ $$(1-2t)^{1/2} \simeq 1-t$$ and then $$\frac{1}{t}(1+3t)^{1/3}-\frac{1}{t}(1-2t)^{1/2} \simeq \frac{1}{t} (1+t)-\frac{1}{t} (1-t)=2$$.
If you had needed to go further, you could have used the binomial expansion $$(1+x)^n=1+nx+ \frac {n(n-1)}{2!}x^2+\cdots$$ Applied to your expressions,
$$(1+3t)^{1/3} \simeq 1 + t - t^2$$ $$(1-2t)^{1/2} \simeq 1-t-\frac{t^2}{2}$$ $$\frac{1}{t}(1+3t)^{1/3}-\frac{1}{t}(1-2t)^{1/2} \simeq \frac{1}{t} (1+t-t^2)-\frac{1}{t} (1-t-\frac{t^2}{2})=2-\frac{t}{2}$$.
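A hedged cross-check with sympy (this assumes sympy is available; it is a sketch, not part of the answer above):

```python
# Verify the limit and the first-order expansion symbolically.
from sympy import symbols, sqrt, cbrt, limit, series

t = symbols('t', positive=True)
expr = cbrt(1/t**3 + 3/t**2) - sqrt(1/t**2 - 2/t)
print(limit(expr, t, 0))       # 2
print(series(expr, t, 0, 2))   # 2 - t/2 + O(t**2)
```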
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/792664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Convolution (space) over compact Lie groups Let $G$ be a compact Lie group. Is there any way one can characterize the functions $\phi$ of the form $\phi=\psi\ast \psi^\ast$ in $C^\infty(G)$ where $\psi\in C^\infty(G)$? Here as usual $\psi^*(x)=\overline{\psi(x^{-1})}$.
Another (perhaps easier) question is whether the above convolutions span the vector space $C^\infty(G)$ of smooth functions on $G$.
Both questions have puzzled me for a while, and I wonder if they are well known to the experts. (I have assumed that $G$ is compact for simplicity. Of course similar questions can be asked about compactly supported functions of any order over $G$.)
|
The only thing that comes to my mind about this type of functions is as follows:
Given $f\in L^2(G)$, the function $f\ast \tilde{f}$, where $\tilde{f}(g)=\overline{f(g^{-1})}$, is a function of positive type associated with the left regular representation of $G$. For the involved terminology and definitions see Appendix C of the following book:
Kazhdan's Property (T)., by B. Bekka, P. de la Harpe, A. Valette.
It is available here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/792737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Unbounded sequence with convergent subsequence I'm just wondering if anyone knows any nice sequences that are unbounded themselves, but have one or more convergent sub-sequences?
|
There are plenty.
Take any convergent sequence, say $a_n \to a \in \mathbb R$. Then take any unbounded sequence, say $b_n \to \infty$. Then define $$ c_n = \begin{cases} a_n & \text{n even} \\ b_n & \text{n odd.} \end{cases}$$
Then $c_n$ is unbounded, but has a convergent subsequence. Notice that you can generalize this: given any finite number of convergent sequences, you can make an unbounded sequence with the convergent sequences as subsequences.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/792840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Why are two statements about a polynomial equivalent? I am reading a claim that the following two statements are equivalent.
* One of the roots of a polynomial $v(t)$ is a $2^j$-th root of unity, for some $j$.
* The polynomial $v(t)$ is divisible either by $1-t$ or by $1+t^{2^{j-1}}$, for some $j$.
We know that the coefficients of $v(t)$ are from $\{-1,0,1\}$.
I am not sure exactly how to interpret this. For which $j$ is this true?
Should $j$ be a positive integer such that $2^j$ is at most the degree of $v(t)$ for example or should there be some other restriction on $j$ or is it really true with no restriction on $j$?
Erratum: Fixed exponent in $1+t^{2^{j-1}}$.
|
A $2^j$-th root of unity is a root of a polynomial $P$ if and only if the minimal polynomial of that root is a factor of $P$. The minimal polynomials of roots of unity are called cyclotomic polynomials, and it's easy to see that for $j=0$, this is $1-t$, and for $j > 0$, it is $1 + t^{2^{j-1}}$ :
$$\Phi_{2^j} = \prod_{\zeta^{2^j} = 1, \zeta^{2^{j-1}} \neq 1} (\zeta - t) = \frac{\prod_{\zeta^{2^j} = 1} (\zeta - t) }{\prod_{\zeta^{2^{j-1}} = 1} (\zeta - t) } = \frac{1-t^{2^j} }{ 1-t^{2^{j-1}}} = 1+t^{2^{j-1}}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/792906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why does $ x^2+y^2=r^2 $ have uncountably many real solutions? What is exactly the reason the equation of a cirle of radius $ r $ and centered at the origin has uncountably many solutions in $\mathbb { R} $?
|
The mapping $f\colon\mathbb{R}\to\mathbb{R}^2$ defined by
$$
f(t)=\left(r\frac{1-t^2}{1+t^2},r\frac{2t}{1+t^2}\right)
$$
is injective and its image is the circle with center at the origin and radius $r$, except for the point $(-r,0)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
average number of rolls of a die between appearance of a side I saw this might have been duplicated in places here -- I think this might be a variation on the coupon collector problem -- but I wanted to be sure and understand how to do the calculation.
I have an $n$-sided die. I want to know the average number of rolls between appearances of a given side $k$.
I thought that the binomial distribution would be appropriate here. The way I originally approached it was to say that we have a $1/n$ chance of getting the number. The chance of getting any other number is $(n-1)/n$. I know that the odds of getting the same number several times in a row are $\left(\frac{1}{n}\right)^m$ with $m$ being the number of rolls, but beyond that I was a bit stumped. I know that there's a binomial distribution or a Harmonic number involved somehow, and I read the coupon collector's problem, but honestly that explanation seemed to make things less clear rather than more.
Anyhow, if someone could point me to either a duplicate question or a better explanation that would be much appreciated.
|
Let n denote any face number other than k. At the outset or after a $k$ has turned up, we roll the die until a $k$ reappears. The possibilities are:
$$k, nk, nnk, nnnk, ...$$
If $p$ is the probability of $k$ and $q = (1-p)$ is the probability of $n$, then the expected waiting time is
$$E(N) = \sum_{j=1}^{\infty} jq^{j-1}p = p \frac{d}{dq} \sum_{j=0}^{\infty} q^j = \frac{p}{(1-q)^2} = \frac{1}{p}$$
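A quick Monte Carlo sanity check of $E(N)=1/p=n$ (a sketch; the trial count is an arbitrary choice):

```python
# Simulate the waiting time for a fixed face of an n-sided die.
import random

n, trials = 6, 100_000
total = 0
for _ in range(trials):
    rolls = 1
    while random.randint(1, n) != 1:   # wait for a fixed face, say face 1
        rolls += 1
    total += rolls
print(total / trials)  # ~6.0 for a six-sided die
```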
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Finding determinant for matrix using upper triangle method Here is an example of a matrix, and I'm trying to evaluate its determinant:
$$
\begin{pmatrix}
1 & 3 & 2 & 1 \\
0 & 1 & 4 & -4 \\
2 & 5 & -2 & 9 \\
3 & 7 & 0 & 1 \\
\end{pmatrix}
$$
When applying first row operation i get:
$$
\begin{pmatrix}
1 & 3 & 2 & 1 \\
0 & 1 & 4 & -4 \\
0 & -1 & -6 & 7 \\
0 & -2 & -6 & -2 \\
\end{pmatrix}
$$
Now, if I continue doing row operations until i get the upper triangle, the determinant will be 14 (which is said to be the correct one).
$$
\begin{pmatrix}
1 & 3 & 2 & 1 \\
0 & 1 & 4 & -4 \\
0 & 0 & -2 & 3 \\
0 & 0 & 0 & -7 \\
\end{pmatrix}
$$
However, if I instead apply this certain operation, R4 --> (1/-2)R4...
$$
\begin{pmatrix}
1 & 3 & 2 & 1 \\
0 & 1 & 4 & -4 \\
0 & -1 & -6 & 7 \\
0 & 1 & 3 & 1 \\
\end{pmatrix}
$$
...and then carry on with operations, I get a different final answer: The determinant will be 7 in this case!
$$
\begin{pmatrix}
1 & 3 & 2 & 1 \\
0 & 1 & 4 & -4 \\
0 & 0 & -1 & 5 \\
0 & 0 & 0 & -7 \\
\end{pmatrix}
$$
Could someone explain this to me - is this operation illegal?
R4 --> (1/-2)R4
How so? Because I always tend to use it, just to simplify things a little.
|
You just multiplied a row by $\frac {1}{-2}$! This will change the value of the determinant. What you can do is take $-2$ common from a row and write it outside.
Consider a $1\times 1$ matrix $A=[1]$.
$det(A)=1$
Apply $R_1\to2R_1$
$A=[2]$
$det(A)=2$
Can you see why you cannot do it?
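A quick numerical check of both computations (a sketch using numpy):

```python
# Scaling a row by 1/-2 scales the determinant by 1/-2.
import numpy as np

A = np.array([[1, 3, 2, 1],
              [0, 1, 4, -4],
              [2, 5, -2, 9],
              [3, 7, 0, 1]], dtype=float)
print(np.linalg.det(A))   # ~14.0

B = A.copy()
B[3] *= -0.5              # R4 -> (1/-2) R4
print(np.linalg.det(B))   # ~-7.0 = 14 * (-1/2)
```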
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Find all the singularities of $f(z)= \frac{1}{z^4+1}$ and the associated residue for each singularity I know that there are poles at
$$\Large{z=e^{\frac{i\pi}{4}}},$$
$$\Large{z=-e^{\frac{i\pi}{4}}},$$
$$\Large{z=e^{\frac{i3\pi}{4}}},\text{ and}$$
$$\Large{z=-e^{\frac{i3\pi}{4}}}$$
I am having trouble with the residues for each one. Are the answers just the poles but all divided by $4$? can someone help? Thanks!
|
Like N3buchadnezzar just said the residues are given by
$$\mathrm{Res}\left(\frac{f(z)}{g(z)},z_k\right) = \frac{f(z_k)}{g'(z_k)}$$
In your case the algebra involved in the calculation may lead to many errors if you consider the residues as you listed them. I suggest you write the singularities of $\frac{1}{z^4+1}$ as
\begin{align*}z_1 &= \frac{1+i}{\sqrt{2}} & z_2&=\frac{-1+i}{\sqrt{2}}\\
z_3&=\frac{1-i}{\sqrt{2}} & z_4&=\frac{-1-i}{\sqrt{2}}
\end{align*}
You can find these singularities just by simple geometric reasoning.
Now I think it's easier to evaluate the residues. For instance, using $(\sqrt{2})^3=2\sqrt{2}$, you get
\begin{align*}
\mathrm{Res}\left(\frac{1}{z^4+1},z_1\right) &= \mathrm{Res}\left(\frac{1}{z^4+1},\frac{1+i}{\sqrt{2}}\right)\\
&=\frac{1}{4z^3}\Big|_{\frac{1+i}{\sqrt{2}}}\\
&=\frac{1}{\frac{4(1+i)^3}{2\sqrt{2}}}\\
&=\frac{1}{\frac{4(-2+2i)}{2\sqrt{2}}}\\
&=\frac{1}{\sqrt{2}\,(-2+2i)}\cdot \frac{(-2-2i)}{(-2-2i)}\\
&=\frac{-2-2i}{8\sqrt{2}}=-\frac{1+i}{4\sqrt{2}}
\end{align*}
You can evaluate the other residues very similarly.
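A hedged cross-check with sympy (assuming sympy is available; it confirms the value $-\frac{1+i}{4\sqrt{2}}$):

```python
# Compute the residue at z1 = (1+i)/sqrt(2) symbolically.
from sympy import I, sqrt, symbols, residue, simplify

z = symbols('z')
z1 = (1 + I) / sqrt(2)
print(simplify(residue(1 / (z**4 + 1), z, z1)))  # sqrt(2)*(-1 - I)/8, i.e. -(1+i)/(4*sqrt(2))
```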
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Why does $1 \cdot 0=0$ not stand? A set $G$ together with an operation $*$ is called group when it satisfies the following properties:
* $a*(b*c)=(a*b)*c, \forall a,b,c \in G$
* $\exists e \in G: e*a=a*e=a, \forall a \in G$
* $\forall a \in G \ \exists a' \in G: a'*a=e=a*a'$
$$(\mathbb{Z}, \cdot ) \text{ is not a group}$$
The property $(1)$ is satisfied.
For the property$(2)$ we take $e=1$. But while $1 \cdot a = a \cdot 1 =a, \forall a \in \mathbb{Z} \setminus \{0 \}$, it does not stand that $1 \cdot 0=0$.
I haven't understood why $1 \cdot 0=0$ does not stand...Could you explain it to me?
|
There isn't any failure in terms of property $(2)$: $1$ is certainly the identity, and it does stand that $0\times 1 = 1\times 0 = 0$.
But zero creates another problem:
Consider property $(3)$, asserting that for every element in a group, its inverse exists and is in the group too. This is where things "go bad" for $0$, and essentially for every element of $\mathbb Z$ other than $-1$ and $1$:
Is there any $a$ such that $a \cdot 0 = 0\cdot a = e = 1$?
Is there any $a \in \mathbb Z$ such that $a \cdot 3 = 3\cdot a = 1$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
}
|
Dividing into 2 teams In how many ways can $22$ people be divided into $ 2 $ cricket teams to play each other?
Actual answer : $\large \dfrac{1}{2} \times \dbinom{22}{11}$
My approach :
Each team consists of $11$ members. Number of ways to select a team of $11$ members = $ \dbinom{22}{11}$
Number of ways by which $22$ people can be divided into $2$ cricket teams = $\dbinom{22}{11} \times 1$ (since the remaining 11 members will automatically fall into the 2nd team).
I'd appreciate it if somebody would be able to explicate the role of $ \large \dfrac{1}{2} $ here.
|
This is because when you choose $ \large11 $ people out of $ \large 22 $ people, there is a complementary team formed on the other side, that is, the other $ \large 11 $ people also form a team. So we overcount by a factor of $ \large 2 $, that is, we count every time twice.
For example, let $ \large 1, 2, 3, 4 $ be the group of people to choose from. We can choose $ \large 2 $ players in $ \large \dbinom{4}{2} $ ways but we overcount by a factor of $ \large 2 $.
Here, if we choose $\large 2 $ teams, we get
(1, 2)
(1, 3)
(1, 4)
(2, 3)
(2, 4)
(3, 4)
The complementary teams in every case are
(1, 2) (3, 4)
(1, 3) (2, 4)
(1, 4) (2, 3)
(2, 3) (1, 4)
(2, 4) (1, 3)
(3, 4) (1, 2)
We notice that we have counted every time twice. For example, $ \large (1, 2) $ and $ \large (3, 4) $ should not be counted separately, as when $ \large (1, 2) $ occurs, we automatically get $ \large (3, 4) $.
We can extrapolate this for the $\large 22 $ players.
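A short computational check of the factor $\frac{1}{2}$ on a small case, $6$ people split into two teams of $3$ (a sketch; the small size is an arbitrary choice):

```python
# Count unordered splits into two equal teams: C(6,3)/2.
from itertools import combinations
from math import comb

people = range(6)
splits = set()
for team in combinations(people, 3):
    other = tuple(sorted(set(people) - set(team)))
    splits.add(frozenset([team, other]))
print(len(splits), comb(6, 3) // 2)   # 10 10
```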
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Convergence $I=\int_0^\infty \frac{\sin x}{x^s}dx$ Hi, I am trying to find out for what values of the real parameter $s$ the integral
$$
I=\int_0^\infty \frac{\sin x}{x^s}dx
$$
is (a) convergent and (b) absolutely convergent.
I know that the integral is convergent if $s=1$ since
$$
\int_0^\infty \frac{\sin x}{x}dx=\frac{\pi}{2}.
$$
For $s=0$ it is easy to see the integral diverges, since $\int_0^\infty \sin x\, dx$ is divergent. However I am stuck on figuring out when it is convergent and/or absolutely convergent.
I know that to check an arbitrary series $\sum_{n=0}^\infty a_n$ for absolute convergence I can consider
$$
\sum_{n=0}^\infty |a_n|.
$$
If it helps, also $$\sin x=\sum_{n=0}^\infty \frac{(-1)^{n}}{(2n+1)!} {x^{2n+1}}.$$
Thank you all
|
$$\varphi_1(\alpha) =\int_0^\infty \frac{\sin t}{t^\alpha}\,dt\tag{I}$$
case $\alpha\gt 0$
Near $t=0$, $\sin t\approx t$, which yields $\frac{\sin t}{t^{\alpha}}\approx \frac{1}{t^{\alpha -1}}$, and the convergence of the integral in (I) near $t=0$ holds if and only if $\alpha<2$.
Now let us take into account the case where $t$ is large (still with $\alpha>0$).
Employing integration by parts (the boundary terms vanish, since $\cos(\pi/2)=0$ and $t^{-\alpha}\cos t\to 0$ as $t\to\infty$ for $\alpha>0$),
\begin{eqnarray*}
\Big| \int_{\frac{\pi}{2}}^\infty \frac{\sin t}{t^\alpha}\,dt\Big| &= & \Big| -\alpha \int_{\frac{\pi}{2}}^\infty \frac{\cos t}{t^{\alpha+1}}\,dt\Big|\\
%
&\leq & \alpha \int_{\frac{\pi}{2}}^\infty \frac{ 1 }{t^{\alpha+1}}\,dt< \infty \qquad\text{since} \qquad \alpha +1>1~~\text{with} ~~\alpha >0.
\end{eqnarray*}
Thus for $\alpha>0 $
$\varphi_1(\alpha)$ exists if and only if $0<\alpha<2$.
We will see later that these are the only values of $\alpha$ which guarantee the existence of $\varphi_1$. For now let us have a look at the absolute integrability of the function in (I). In order to see that, one can quickly check the following
$$ \mathbb{R}_+ = \bigcup_{n\in\mathbb{N}} [n\pi, (n+1)\pi).$$
Then,
$$\int_0^\infty \frac{|\sin t|}{t^\alpha}\,dt = \int_{0}^{\pi} \frac{\sin t}{{t}^\alpha} \,dt+ \sum_{n=1}^{\infty} \int_{n\pi}^{(n+1)\pi} \frac{|\sin t|}{t^\alpha}\,dt \\:= \int_{0}^{\pi} \frac{\sin t}{{t}^\alpha} \,dt+\sum_{n=1}^{\infty} a_n$$
With a suitable change of variable ($u = t-n\pi$) we get
\begin{eqnarray*}
a_n &=& \int_{0}^{\pi} \frac{\sin t}{{(t+n\pi)}^\alpha} \,dt\qquad\text{since } \sin(t+n\pi)= (-1)^n\sin t
\end{eqnarray*}
On the other hand, it is also easy to check that
\begin{eqnarray}
\frac{2}{((n+1)\pi)^\alpha} \leq a_n \leq \frac{2}{(n\pi)^\alpha}.
\end{eqnarray}
These inequalities, together with comparison to Riemann $p$-series, show that the series with general term $a_n$ converges if and only if $\alpha>1.$ Moreover, we have seen from the foregoing that
$$\int_{0}^{\pi} \frac{\sin t}{{t}^\alpha} \,dt$$ converges only for $\alpha <2$.
Taking advantage of the tricks above, we get the result for the case $\alpha \leq 0$ as follows:
$$\int_0^\infty \frac{\sin t}{t^\alpha}\,dt = \int_{0}^{\pi} \frac{\sin t}{{t}^\alpha} \,dt+ \sum_{n=1}^{\infty} \int_{n\pi}^{(n+1)\pi} \frac{\sin t}{t^\alpha}\,dt \\:= \int_{0}^{\pi} \frac{\sin t}{{t}^\alpha} \,dt+\sum_{n=1}^{\infty} a'_n $$
With
\begin{eqnarray*}
|a'_n| &=&\left|\int_{n\pi}^{(n+1)\pi} \frac{\sin t}{t^\alpha} \,dt\right|= \left|\int_{0}^{\pi} \frac{\sin t}{{(t+n\pi)}^\alpha} \,dt\right| \geq \frac{2}{(n\pi)^\alpha} \qquad\text{since } \sin(t+n\pi) = (-1)^n\sin t~\text{ and }~(t+n\pi)^{-\alpha}\geq (n\pi)^{-\alpha}~\text{ for }~\alpha\leq 0,
\end{eqnarray*}
and equality holds when $\alpha = 0.$ Therefore,
$$\lim |a'_n|= \begin{cases}
2 &\text{if } \alpha = 0, \\
\infty & \text{if } \alpha <0.
\end{cases}$$
This proves the divergence of the series $\sum\limits_{n=0}^{\infty} a'_n$, since $a_n'\not\to 0$. Consequently the integral $\int_0^\infty \frac{\sin t}{t^\alpha}\,dt$ always diverges in this case, since $\int_{0}^{\pi} \frac{\sin t}{{t}^\alpha} \,dt$ converges for $\alpha\leq 0.$
Conclusion: $\int_0^\infty \frac{\sin t}{t^\alpha}\,dt$ converges for $0<\alpha<2$ and converges absolutely for $1<\alpha <2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 3
}
|
Is infinity a real or complex quantity? Since I was interested in maths, I have a question. Is infinity a real or complex quantity? Or it isn't real or complex?
|
The question is a bit meaningless. "The infinite" is a philosophical concept. There are a wide variety of very different mathematical objects that are used to represent "the infinite", and now that we're in the realm of mathematics and not philosophy, I can make the concrete mathematical claim that no, those objects are neither real numbers nor complex numbers.
For a rundown on what different mathematical objects can represent infinity, I think the linked questions in Asaf's comment under your question are a fine place to start.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Product of CW complexes question I am having trouble understanding the product of CW complexes. I know how to actually do the computations and all, I just don't understand how exactly it works.
So here's my question specifically: if $X,Y$ are CW-complexes and $e,f$ are $p$- and $q$-cells of $X,Y$ respectively, then we know that $e \times f$ will be a $(p+q)$-cell in $X \times Y$. But this cell we have to think of as $D^{p+q}$ with some identification on the boundary sphere, whereas what we have here is $D^p \times D^q$. So I guess we need to know that we have a homeomorphism of pairs $(D^{p+q},S^{p+q-1}) \cong (D^p \times D^q, S^{p-1} \times D^q \cup D^p \times S^{q-1})$. But that's what I do not get. How exactly does this homeomorphism work?
I kinda see it in the case $p=q=1$ (it's just that a square and a disc are homeomorphic with canonical identifications in the boundaries). But I am having trouble defining it or actually visualizing it in higher dimensions. Any help?
Thanks!
|
You can think of $D^n$ as the homeomorphic cube $I^n$. This way, the product
$$\left(D^k\times D^l,\ \partial D^k× D^l\cup D^k×∂D^l\right)\\
\cong\left(I^k×I^l,\ ∂I^k×I^l\cup I^k×∂I^l\right)\\
=\left(I^{k+l},∂\left(I^k×I^l\right)\right)\\
\cong \left(D^{k+l},∂\left(D^k×D^l\right)\right)$$
The homeomorphism $D^k\cong I^k$ is given by
$$x\mapsto \dfrac{x\cdot||x||_2}{||x||_\infty}$$
The characteristic map is $\Phi_{\alpha,\beta}=Φ_{α}×Φ_β:D^k×D^l\to X×Y$
* Note that since $X,Y$ are Hausdorff, so is $X\times Y$.
* The images of the interiors $Φ_{α,β}\left(\text{int}D^{k+l}\cong\text{int}D^k×\text{int}D^l\right)$ partition $X\times Y$.
* For each product cell $e_α×e_β$, the image of $∂\left(D^k×D^l\right)= ∂I^k×I^l\cup I^k×∂I^l$ is contained in finitely many cells of dimension less than $k+l$.
* If the product topology on $X×Y$ is such that a set $A$ is closed whenever $A\cap \overline{e_{α,β}}$ is closed in $\overline{e_{α,β}}$ for each cell, then all conditions in the implicit definition are satisfied, so $X×Y$ will be a CW complex.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Sufficient conditions for a meromorphic function to be rational I know that rational functions are meromorphic, but under what conditions are meromorphic functions rational? I know that the automorphisms of the Riemann sphere are rational, but are there any more general conditions that ensure rationality?
|
The given function should have a finite number of poles on the Riemann sphere with the counting done with multiplicity given by the order of the pole.
First consider the complex plane, i.e., the Riemann sphere without the point at infinity. Then multiplying the meromorphic function $f(z)$ by an entire function $q(z)=\prod_{a_i\in \text{poles}}(z-a_i)^{m_i}$ that vanishes with suitable multiplicity $m_i$ at each pole $a_i$ removes the poles in the complex plane. You get an entire function $p(z)=f(z) q(z)$ with no poles in the complex plane. The order of the pole at infinity is given to be finite. Thus, you need to argue that $p(z)$ is a polynomial in $z$. Then, you have shown that $f(z)=p(z)/q(z)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Gift advice: present for high school graduate interested in math I am a PhD student in mathematics who recently found out that I will be attending my girlfriend's cousin's high school graduation party. I have never met the cousin, but hear that he is very interested in mathematics and is hoping to major in mathematics in college. He is taking Calculus BC (the equivalent of Calculus 2 at most colleges) now and is apparently doing quite well.
I am considering giving him a math book as a graduation present. The following texts immediately came to mind as decent candidates:
* Elementary Number Theory by Underwood Dudley
* Calculus by Michael Spivak
* How to Prove it by Daniel Velleman
I think that they are all at about the right level. Further, these texts were instrumental in my early mathematical development. They provide for entertaining reads while still being substantive.
However, I would like some advice on the following matters,
* Is a math text an appropriate graduation present?
* What other math texts might I consider?
* Would it be better to give a popular text such as Derbyshire's Prime Obsession?
|
I second Kaj Hansen's suggestion of "What is Mathematics?" and I'd suggest also "Gödel, Escher, Bach: an Eternal Golden Braid"; it deals with very interesting topics (Gödel's incompleteness theorem, formal systems and similar things) in a very accessible and entertaining way.
I read both of them 2 years ago, when I was 17 and they made me love maths!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 26,
"answer_id": 3
}
|
Summation Notation Confusion I am unclear about what the following summation means, given $\lambda_i$ for each $i \in \{1,2,\ldots, n\}$:
$\mu_{4:4} = \sum\limits_{i=1}^{4} \lambda_i + \mathop{\sum\sum}_{1\leq i_1 < i_2 \leq 4}(\lambda_{i_1} + \lambda_{i_2}) + \mathop{\sum\sum\sum}_{1\leq i_1 < i_2 <i_3 \leq 4}(\lambda_{i_1} + \lambda_{i_2} + \lambda_{i_3})$
I understand how this term expands:
$\sum\limits_{i=1}^{4} \lambda_i = \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4$.
But, I don't understand how this term expands
$\mathop{\sum\sum}_{\substack{1\leq i_1 < i_2 \leq 4}}(\lambda_{i_1} + \lambda_{i_2})$
Nor do I understand how this term expands
$\mathop{\sum\sum\sum}_{\substack{1\leq i_1 < i_2 <i_3 \leq 4}}(\lambda_{i_1} + \lambda_{i_2} + \lambda_{i_3})
$
Any help in these matters would be appreciated.
|
I understand how this term expands
$\sum\limits_{i=1}^{4} \lambda_i = \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4$.
But, I don't understand how this term expands
$\mathop{\sum\sum}_{1\leq i_1 < i_2 \leq 4}(\lambda_{i_1} + \lambda_{i_2})$
The subscript is just another way of indicating the domain of the indices.
Like so: $\displaystyle\quad\quad \sum\limits_{i=1}^{4} \lambda_i = \sum\limits_{1\leq i \leq 4} \lambda_i$
Thus, $1\leq i_1 < i_2 \leq 4$ means: $i_1\in[1\,..\,(i_2-1)], i_2\in[(i_1+1)\,..\,4]$
Hence:
$$\mathop{\sum\sum}_{1\leq i_1 < i_2 \leq 4}(\lambda_{i_1} + \lambda_{i_2}) \\ = \sum\limits_{i_1=1}^{3}\left(\sum\limits_{i_2=i_1+1}^4(\lambda_{i_1} + \lambda_{i_2})\right) \\ = ((\lambda_1+\lambda_2)+(\lambda_1+\lambda_3)+(\lambda_1+\lambda_4))+((\lambda_2+\lambda_3)+(\lambda_2+\lambda_4))+((\lambda_3+\lambda_4)) \\ = 3(\lambda_1 +\lambda_2+\lambda_3+\lambda_4) \\ = 3\sum_{i=1}^4 \lambda_i$$
Nor do I understand how this term expands
$\mathop{\sum\sum\sum}_{1\leq i_1 < i_2 <i_3 \leq 4}(\lambda_{i_1} + \lambda_{i_2} + \lambda_{i_3})$
$$\mathop{\sum\sum\sum}_{1\leq i_1 < i_2 <i_3 \leq 4}(\lambda_{i_1} + \lambda_{i_2} + \lambda_{i_3}) \\ = \sum_{i_1=1}^2\left(\sum_{i_2=i_1+1}^{3}\left(\sum_{i_3=i_2+1}^{4} (\lambda_{i_1} + \lambda_{i_2} + \lambda_{i_3})\right)\right) \\ = (\lambda_1\!+\!\lambda_2\!+\!\lambda_3)\!+\!(\lambda_1\!+\!\lambda_2\!+\!\lambda_4)\!+\!(\lambda_1\!+\!\lambda_3\!+\!\lambda_4)\!+\!(\lambda_2\!+\!\lambda_3\!+\!\lambda_4) \\ = 3( \lambda_1 + \lambda_2+\lambda_3+\lambda_4)\\ = 3\sum_{i=1}^4 \lambda_i$$
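A small check of this reading with itertools (the numeric values of the $\lambda_i$ are arbitrary stand-ins):

```python
# The condition 1 <= i1 < i2 <= 4 is exactly "combinations of size 2".
from itertools import combinations

lam = [10, 20, 30, 40]  # stand-ins for lambda_1..lambda_4
pair_sum = sum(lam[i] + lam[j] for i, j in combinations(range(4), 2))
triple_sum = sum(lam[i] + lam[j] + lam[k] for i, j, k in combinations(range(4), 3))
print(pair_sum, 3 * sum(lam))    # 300 300
print(triple_sum, 3 * sum(lam))  # 300 300
```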
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/793992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Find all ordered triples $(x,y,z)$ of prime numbers satisfying equation $x(x+y)=z+120$ This question was from my Math Challenge II Number Theory packet, and I don't get how to do it. I know you can distribute to get $x^2+xy=z+120$, and $x^2+xy-z=120$, but that's as far as I got. Can someone explain step by step?
|
If $x = 2$, the left side is even - hence, $z $ must also be $2$.
If $x$ is an odd prime and $y$ is also odd, the left side is again even, implying that $z = 2$; but then $x(x+y)=122=2\cdot 61$ would force $x=61$ and $x+y=2$, which is impossible.
So the interesting case is when $x$ is an odd prime and $y = 2$; in this case, we have that
$$x(x + 2) = z + 120$$
Upon adding $1$ to both sides and factoring, we have
$$x^2 + 2x + 1 = z + 121 \implies (x + 1)^2 - 11^2 = z \implies (x + 12)(x - 10) = z$$
Since $z$ is prime and $x+12>1$, we need $x-10=1$; hence $x = 11$ and $z = 23$.
So the solutions are $(2, 59, 2)$ and $(11, 2, 23)$.
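A quick brute-force confirmation over small primes (a sketch; the bound 200 is an arbitrary choice):

```python
# Search prime triples (x, y, z) with x*(x+y) = z + 120.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

primes = [p for p in range(2, 200) if is_prime(p)]
sols = [(x, y, z) for x in primes for y in primes for z in primes
        if x * (x + y) == z + 120]
print(sols)  # [(2, 59, 2), (11, 2, 23)]
```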
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/794124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Let $\sum^{\infty}_{n=1} \frac {z^{n+1}} {n(n+1)}$ be a power series. Show sum-function $g(z)$ is continuous on $|z|\le 1$. Let $\sum^{\infty}_{n=1} \frac {z^{n+1}} {n(n+1)}$ be a power series.
I've shown that radius of convergence is $R=1$.
I've a theorem saying that the sum-function $g(z)=\sum^{\infty}_{n=1} \frac {z^{n+1}} {n(n+1)}$ is continuous for $|z| < R$.
How can I show that $g(z)$ is continuous on the set $|z|\le 1$ ?
|
This is only a partial answer taking into account comments and answers to comments.
I suppose that you noticed that $$f(z)=\sum^{\infty}_{n=1} \frac {z^{n+1}} {n(n+1)}$$ is the antiderivative of $$\sum^{\infty}_{n=1} \frac {z^{n}} {n}=-\log (1-z)$$ So, integration by parts leads to $$f(z)=z+(1-z) \log (1-z)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/794225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Product measure with a Dirac delta marginal Let $(S,\mathcal F)$ be a measurable space, and let $\nu \in\mathcal P(S,\mathcal F)$ be a probability measure on $(S,\mathcal F)$. Fix some $x\in S$ and consider Dirac measure $\delta_x$. Would like to prove
If $\mu \in \mathcal P(S×S,\mathcal F\otimes \mathcal F)$ and has marginals $ν$ and $δ_x$ $then$ $μ=ν×δ_x$
So we should show $μ(A×B)=ν(A)δ_x(B)$ for $∀A,B∈\mathcal F$.
If $x∉B$ then right-hand side is $0$ but so is the left-hand side since $μ(A×B)≤μ(S×B)=δ_x(B)=0$.
How to deal with the case $x∈B$ ?
|
For any measurable set $B\subset S$, $\mu(S\times B)=\delta_x(B)=\mathbb{1}_B(x)$. In particular, $\mu(S\times\{x\})=1$, and if $x\notin B$,
$$\mu(A\times B)=0=\nu(A)\delta_x(B),\qquad A\in\mathcal{F}$$
for $\mu(A\times B)\leq\mu(A\times(S\setminus\{x\}))=0$.
Suppose now that $x\in B$. Then, $\delta_x(B)=1$, and $\mu(A\times(S\setminus B))=0$. Consequently,
$$\begin{align}
\nu(A)\delta_x(B)=\nu(A)&=\mu(A\times S)=\mu((A\times B) \cup (A\times(S\setminus B)))\\
&=\mu(A\times B)+\mu(A\times(S\setminus B))=\mu(A\times B)
\end{align}$$
Putting things together, we have that
$$\mu(A\times B)=\nu(A)\times\delta_x(B),\qquad A,B\in\mathcal{F}$$
From this (why?) it follows that $\mu=\nu\otimes\delta_x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/794299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
}
|
Use power series $\sum^{\infty}_{n=1} \frac {z^{n+1}} {n(n+1)}$ to show $\sum^{\infty}_{n=1} \frac {1} {n(n+1)} =1$. Consider $\sum^{\infty}_{n=1} \frac {z^{n+1}} {n(n+1)}$ (power series). I've found that the sum-function $g(z) := \sum^{\infty}_{n=1} \frac {z^{n+1}} {n(n+1)}$ is defined and continuous on $|z| \le 1$.
Let $f$ be the restriction of $g$ to $[-1,1]$. I've shown that $f(x) = (1-x)\log(1-x)+x$.
How can I use these results to show $\sum^{\infty}_{n=1} \frac {1} {n(n+1)} =1$ ?
I'm well aware that $\frac 1 n - \frac 1 {n+1} = \frac 1 {n(n+1)}$, but I think I should use what I've proved instead of looking at the partial sums.
Does it hold to say $\lim_{x \rightarrow 1} f(x) = \lim_{x \rightarrow 1} (1-x)\log(1-x)+x$ ? I know, since $f$ is continuous $\lim_{x \rightarrow 1} f(x) = f(1)$, but $(1-x)\log(1-x)+x$ is not defined for $x=1$ ?
|
Hint
When $x$ goes to $0$, $x\log(x)$ has a limit of $0$, so, when $x$ goes to $1$, $(1-x)\log(1-x)$ also has a limit of $0$. Hence $\lim_{x \to 1} f(x) = 0 + 1 = 1$, and by continuity $f(1)=\sum^{\infty}_{n=1} \frac {1} {n(n+1)} = 1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/794394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What remainder does $34!$ leave when divided by $71$? What is the remainder of $34!$ when divided by $71$?
Is there an objective way of solving this?
I came across a solution which straight away starts by stating that
$69!$ mod $71$ equals $1$ and I lost it right there.
|
The starting point is Wilson's theorem: $70!\equiv -1\pmod{71}$, and since $70\equiv -1\pmod{71}$, cancelling the factor $70$ gives $69!\equiv 1\pmod{71}$ (this is the step the solution you saw took for granted). Moreover $69!=34!\cdot(35\cdot 36\cdots 69)$, and replacing each factor $k$ with $k-71$ gives $35\cdot 36\cdots 69\equiv(-36)(-35)\cdots(-2)=(-1)^{35}\,36!=-36!\pmod{71}$. Hence $$69!\equiv 1\pmod{71}\Rightarrow 34!\,36!\equiv -1\pmod{71}.$$ Multiplying both sides by $4$ (note $36!=36\cdot 35\cdot 34!$) and using $35\cdot 2\equiv -1\pmod{71}$ and $36\cdot 2\equiv 1\pmod{71}$, we get $$(34!)^2\equiv 4\pmod{71},\quad\text{i.e.}\quad x^2\equiv 4\pmod{71}$$ where $x\equiv 34!\pmod{71}$. So $$71\mid(x-2)(x+2)\Rightarrow x=2\ \text{or}\ x=69,$$ since $1\le x\le 70$.
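Since the quadratic argument alone cannot tell which of the two roots occurs, here is a three-line computation that settles it:

```python
# Compute 34! mod 71 directly by accumulating the product mod 71.
x = 1
for k in range(2, 35):
    x = x * k % 71
print(x, x * x % 71)  # 69 4, so 34! = 69 (mod 71)
```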
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/794470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 0
}
|
$6^{(n+2)} + 7^{(2n+1)}$ is divisible by $43$ for $n \ge 1$ Use mathematical induction to prove that $6^{n+2} + 7^{2n+1}$ is divisible by $43$ for $n \ge 1$.
So start with $n = 1$:
$6^{1+2} + 7^{2(1)+1} = 6^3 + 7^3 = 216 + 343 = 559$, and $559/43 = 13$. So the claim holds for $n=1$.
Let $P(k)$: $43 \mid 6^{k+2}+7^{2k+1}$, where $k\ge 1$.
Show that $P(k+1)$: $43 \mid 6^{(k+1)+2} + 7^{2(k+1)+1}$ is true.
$$6^{(k+1)+2} + 7^{2(k+1)+1} = 6^{k+3} + 7^{2k+3}$$
I'm unsure where to go from here, I've tried several directions after this but have got nowhere. I don't know how to get the 43 or 559 out the front.
Any help would be great
|
The sequence $A_n = 6^{n+2} + 7^{2n+1}$ satisfies a two term linear recurrence relation with integer coefficients. Specifically it satisfies $A_n = 55A_{n-1} - 294A_{n-2}$ but we don't actually care what the relation is. If $43$ divides $A_n$ for two consecutive $n$ then by induction it must divide every term after that. So it is enough to check this for say $n = 0$ and $n=1$.
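A quick script makes both claims concrete: the divisibility by $43$, and the recurrence itself (which, for the record, comes from the characteristic roots $6$ and $49$):

```python
# Check 43 | A_n and the recurrence A_n = 55*A_{n-1} - 294*A_{n-2}.
A = lambda n: 6 ** (n + 2) + 7 ** (2 * n + 1)

assert all(A(n) % 43 == 0 for n in range(50))
assert all(A(n) == 55 * A(n - 1) - 294 * A(n - 2) for n in range(2, 50))
print("both checks pass")
```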
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/794536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|
A question about optimal codes Recall that a code attaining any bound is called an optimal code. Is the dual code of an optimal code also an optimal code?
|
It depends on the bound and on the code: a code is said to be optimal with respect to a particular bound.
For example, the dual of a linear MDS code is another linear MDS code, so the dual and the original linear code both meet the singleton bound (recall a MDS code is one which meets the singleton bound and thus is optimal with respect to the singleton bound). This is a standard homework problem.
On the other hand, look at non-trivial linear codes which meet the Hamming bound (i.e. perfect codes -- which are precisely codes with the parameters of a binary/ternary Golay code or Hamming code), and you see that the dual codes are not within the respective families - for example, the dual of the [7,4] Hamming code is a simplex code which does not share the parameters with any Golay or Hamming code. The characterization of perfect codes is a result of van Lint and Tietäväinen - see MacWilliams and Sloane's Theory of Error Correcting Codes, Chapter 6 for more details.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/794595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Prove that if $ u \cdot v = u \cdot w $ then $v = w$ I've tried putting it up as:
$$ [u_1 v_1 + \ldots + u_n v_n] = [u_1 w_1 + \ldots + u_n w_n] $$
But this doesn't make it immediately clear...I can't simply divide by $u_1 + \ldots + u_n$ as these ($u$, $v$ and $w$) are vectors...
Any hints?
|
$$
u\cdot v=u\cdot w
$$
Others have shown how to show that $v=w$ if one assumes the above for all values of $u$.
To show that it's not true if one just assumes $u$, $v$, $w$ are some particular vectors, let's look at the circumstances in which it would fail. Recall that $u\cdot v = \|u\| \|v\|\cos\theta$ where $\theta$ is the angle between the vectors $u$ and $v$.
Thus one circumstance in which the conclusion does not hold is when $v$ and $w$ are of equal lengths, i.e. $\|v\|=\|w\|$, and both are at the same angle with $u$. Just draw a picture. One can rotate $v$ about an axis in which the vector $u$ lies and get many vectors $w$ having the same length as $v$ and making the same angles with $u$.
Another circumstance in which it fails is this: picture $u$ and $v$ as arrow pointing out from the origin, and draw a plane or hyperplane at right angles to $u$ passing through the endpoint of the arrowhead of $v$. Choose an arbitrary point in that hyperplane, and draw an arrow from the origin to that point. Call that vector $w$. Then show that $u\cdot v=u\cdot w$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/794709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
}
|
Is there a way in matrix math notation to show the 'flip up-down', and 'flip left-right' of a matrix? Title says it all - is there an accepted mathematical way in matrix notation to show those operations on a matrix?
Thanks.
|
$$\begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix} = \begin{pmatrix} d & c \\ b & a\end{pmatrix}.$$
In general, left-multiplying by the anti-diagonal identity matrix (often called the exchange matrix and denoted $J$) reverses the order of the rows, i.e. flips the matrix up-down. Right-multiplying reverses the order of the columns, i.e. flips it left-right.
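If it helps, here is a small numpy demonstration of both flips (the variable names are mine):

```python
import numpy as np

A = np.arange(1, 13).reshape(4, 3)           # any 4x3 test matrix
J4 = np.fliplr(np.eye(4))                    # 4x4 exchange matrix
J3 = np.fliplr(np.eye(3))                    # 3x3 exchange matrix

assert np.array_equal(J4 @ A, np.flipud(A))  # left-multiply: flip up-down
assert np.array_equal(A @ J3, np.fliplr(A))  # right-multiply: flip left-right
print("both flips agree")
```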
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/794799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Convergence of the series $\sum \frac{(-1)^{\sqrt{n}}}{n}.$ I'm looking for some help to show that:
$$\sum {(-1)^{\lfloor \sqrt{n}\rfloor}\over n} < \infty$$
|
After clarification, it seems that the goal is to prove that the sequence $(S_n)$ converges, where, for every $n\geqslant1$,
$$
S_n=\sum_{k=1}^n\frac{(-1)^{\lfloor \sqrt k\rfloor}}k.
$$
To do so, consider, for every $n\geqslant1$,
$$
T_n=\sum_{k=n^2}^{(n+1)^2-1}\frac{(-1)^{\lfloor \sqrt k\rfloor}}k=(-1)^n\sum_{k=n^2}^{(n+1)^2-1}\frac1k.
$$
For every $n$,
$$
|T_n|\leqslant\sum_{k=n^2}^{(n+1)^2-1}\frac1{n^2}=\frac{2n+1}{n^2}\leqslant\frac3n,
$$
hence $T_n\to0$. Furthermore, the signs of the terms $T_n$ alternate, hence, if the sequence $|T_n|$ is nonincreasing, the series
$$
\sum_{n\geqslant1}T_n
$$
is an alternating series and, as such, its sums converge to some limit $\ell$. For every $n$, there exists some $k$ such that $k^2\leqslant n\lt (k+1)^2$ hence
$$
\sum_{i=1}^kT_i\leqslant S_n\leqslant \sum_{i=1}^{k+1}T_i\quad\text{or}\quad\sum_{i=1}^{k+1}T_i\leqslant S_n\leqslant \sum_{i=1}^{k}T_i,
$$
depending on the parity of $k$. This proves that $S_n\to\ell$.
To conclude, it remains to show that indeed $|T_{n+1}|\leqslant|T_n|$ for every $n$. Can you do that?
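This proves convergence but says nothing about the value of $\ell$; for what it is worth, a direct numerical experiment (which of course proves nothing) shows the slow, oscillating approach of the partial sums:

```python
# Partial sums of sum_{k>=1} (-1)^floor(sqrt(k)) / k at a few checkpoints.
from math import isqrt

s, checkpoints = 0.0, (10**4, 10**5, 10**6)
for k in range(1, max(checkpoints) + 1):
    s += (-1) ** isqrt(k) / k
    if k in checkpoints:
        print(k, s)
```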
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/794878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Evaluating the following integral: $\int\frac1{x^3+1}\,\mathrm{d}x$ How to integrate
$$\int\frac1{x^3+1}~\mathrm{d}x$$
Is it possible to use Taylor expansion?
|
If $x^3 + 1=0$ then $x^3=-1$ so $x=-1$, at least if $x$ is real.
If you plug $-1$ in for $x$ in a polynomial and get $0$, then $x-(-1)$ is a factor of that polynomial.
So you have $x^3+1=(x+1)(\cdots\cdots\cdots\cdots)$.
The second factor can be found by long division or other means. It is $x^2-x+1$.
Can that be factored? Solving the quadratic equation $x^2-x+1=0$ yields two non-real solutions, complex conjugates of each other.
Doing arithmetic or algebra with complex numbers is in many ways just like doing the same things with real numbers. But doing calculus with complex numbers opens some cans of worms that get dealt with in more advanced courses. With real numbers, the quadratic polynomial $x^2-x+1$ is irreducible, i.e. cannot be factored. The quickest way to see that is by observing that the discriminant $b^2-4ac$ is negative. A way that's not as quick but that may give some insight is completing the square:
$$
x^2-x+1 = \left( x-\frac12\right)^2 + \frac 3 4.
$$
Obviously this can never be $0$ when $x$ is real, so this can't be factored with real numbers.
So
$$
\frac{1}{x^3+1} = \frac{A}{x+1} + \frac{Bx+C}{x^2-x+1}
$$
and then you have to find $A$, $B$ and $C$ (solving gives $A=\frac13$, $B=-\frac13$, $C=\frac23$).
Now another difficulty comes along: How to find
$$
\int \frac{Bx+C}{x^2-x+1} \,dx\text{ ?}
$$
Let $u=x^2-x+1$ so that $du = (2x-1)\,dx$ and you get part of it:
$$
(Bx+C)\,dx = \underbrace{\frac B 2 (2x-1)\,dx} + \left(C + \frac B2\right)\,dx.
$$
The substitution handles the part over the $\underbrace{\text{underbrace}}$. What about the other part? You have
$$
\text{constant}\cdot\int \frac{dx}{x^2-x+1}.
$$
Complete the square:
$$
\int \frac{dx}{\left(x - \frac 12\right)^2 + \frac 3 4}.
$$
Starts to remind you of an arctangent, but you have $3/4$ where you need $1$.
$$
\int \frac{dx}{\left(x - \frac 12\right)^2 + \frac 3 4} = \frac 4 3 \int \frac{dx}{\frac43\left(x-\frac12\right)^2+1}
$$
Now
$$
\frac43\left(x-\frac12\right)^2+1 = \left(\frac{2x-1}{\sqrt{3}}\right)^2+1
$$
So let $w=\dfrac{2x-1}{\sqrt{3}}$, so that $dx=\dfrac{\sqrt{3}}{2}\,dw$, and then you're almost done.
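For reference, carrying all the steps through gives $$\int\frac{dx}{x^3+1}=\frac13\ln|x+1|-\frac16\ln(x^2-x+1)+\frac{1}{\sqrt3}\arctan\frac{2x-1}{\sqrt3}+C,$$ which a CAS confirms, e.g.:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(1 / (x**3 + 1), x)
# sympy returns an antiderivative equivalent to the closed form above;
# differentiating it back recovers the integrand:
print(sp.simplify(sp.diff(F, x) - 1 / (x**3 + 1)))  # 0
```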
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/794956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Suppose $A$, $B$, and $C$ are sets, and $A - B \subseteq C$. Then $A - C \subseteq B$. I know how to prove it by contradiction, but I am wondering if it's possible to prove it directly. I tried doing that, but so far no results. Is it not possible to prove it directly?
Thanks.
|
We have
$$A-B=A\cap B^c\subset C\Rightarrow C^c\subset A^c\cup B$$
hence
$$A-C=A\cap C^c\subset A\cap (A^c\cup B)=A\cap B\subset B$$
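Not a proof, of course, but a randomized check over small sets is a nice way to convince yourself before writing the argument; a sketch:

```python
# Test the implication A - B ⊆ C  =>  A - C ⊆ B on random subsets of {0..9}.
from random import randint, sample

U = sorted(range(10))
for _ in range(10_000):
    A = set(sample(U, randint(0, 10)))
    B = set(sample(U, randint(0, 10)))
    C = set(sample(U, randint(0, 10)))
    if A - B <= C:
        assert A - C <= B
print("no counterexample found")
```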
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/795088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
Divergence test for $\sum_{n=1}^{\infty}\ln (1+\frac{1}{n})^n$. I am trying to prove that this is divergent
$$\sum_{n=1}^{\infty} \left(1+\dfrac{1}{n}\right)^n$$
by finding the limit of
$$\ln \left(1+\dfrac{1}{n}\right)^n$$
I know its $e$ and I am trying to arrive at that value by this
$$\ln y = n \ln(1 + \dfrac{1}{n})\\= \dfrac{\ln(1 + \dfrac{1}{n})}{\dfrac{1}{n}}$$
and I am already lost at this indeterminate form.
|
But if you just want to prove divergence, isn't there an easier route:
$$\sum \left(1+\frac1n\right)\leq \sum \left(1+\frac1n\right)^n\,?$$
The left-hand side diverges since every term is greater than $1$ (and the inequality $1+\frac1n\le\left(1+\frac1n\right)^n$ follows from Bernoulli's inequality).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/795152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
How to go about proving that $\cos^2 x$ is everywhere differentiable? My first line of reasoning was to try directly evaluating $$\lim\limits_{h \to 0}\frac{\cos^2 (x+h) - \cos^2 (x)}{h}$$ and showing such a limit existed for any x, but when $\cos^2(x)$ evaluates to zero (e.g. when $x = \frac{\pi}{2}$), then directly evaluating the limit yields the indeterminate form $\frac{0}{0}$. The same thing happens even after transforming $\cos^2 x$ to $\frac{1}{2} \cos (2x)$, so I don't think this is the correct strategy. I considered using L'Hopitals rule, but don't think that is the route that the question intended, as it is suggested to use the identity $\lim\limits_{x \to 0} \frac{\sin x}{x} = 1$ along with standard trigonometric identities. I am not sure where to begin. What would be a step in the right direction?
|
HINT:
$$\cos^2B-\cos^2A=1-\sin^2B-(1-\sin^2A)$$
Using Prove $ \sin(A+B)\sin(A-B)=\sin^2A-\sin^2B $, this becomes $$\sin^2A-\sin^2B=\sin(A+B)\sin(A-B)$$
So, $$\cos^2(x+h)-\cos^2x=\sin(2x+h)\sin(-h)=-\sin(2x+h)\sin(h).$$ Dividing by $h$, the difference quotient is $-\sin(2x+h)\,\dfrac{\sin h}{h}\to-\sin 2x$ as $h\to0$, so the limit exists for every $x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/795255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
A matrix $M$ that commutes with any matrix is of the form $M=\alpha I$ I feel like this is probably a simple proof but I can't quite come up with it in an elegant way nor could I find it here.
Prove that if a matrix $M$ commutes with any matrix then $M$ is of the form $M=\alpha I$.
Proving the contrapositive seems like the natural way to go where we can logically transform $\lnot \forall A(MA = AM)$ into $\exists A (MA \neq AM)$ but assuming that $M \neq \alpha I$ immediately becomes messy. Is there a nice way out of this or is it inevitably going to get messy?
|
Here's somewhat of an overkill answer for what it is worth.
A normal matrix is a matrix that is unitarily similar to a diagonal matrix. Another characterization is that a matrix $M$ is normal iff $M^* M = M M^*$.
If $M$ commutes with all matrices then it is clear it is normal. From this we have $M = UDU^*$ for some unitary matrix $U$. Write $MU = UDU^* U = UD$ and then use commutativity to write $UM = UD$. Finally cancel the $U$'s to conclude that $M$ must be diagonal.
Then if $P$ is the elementary matrix that swaps rows $i$ and $j$ when applied to $M$ as $PM$, since $MP$ swaps columns and equals $PM$ by hypothesis, we conclude each diagonal element is the same.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/795326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
}
|
Crossings in an Eulerian Trail Exercise 11.2 in Graph Theory by Harary says
Every plane eulerian graph contains an eulerian trial that never crosses itself.
What does it mean for a trail to not cross itself? The book does not give a formal definition of this notion.
|
I don't know the "formal" definition, but informally it means just what you would think. If you regard the Eulerian trail as a curve in the plane, the curve does not cross itself, in the sense that the graphs of $y=0$ and $y=x$ cross at the origin, but the graphs of $y=0$ and $y=x^2$ touch without crossing.
For instance, consider the plane Eulerian graph with vertices $v=(0,0)$, $w=(1,0)$, $x=(1,1)$, $y=(-1,-1)$, $z=(-1,0)$, and straight-line edges $vw,wx,xv,vy,yz,zv$. The Eulerian trail $z,v,w,x,v,y,z$ crosses itself at $v$, but the Eulerian trail $z,v,x,w,v,y,z$ does not cross itself.
P.S. Following a suggestion by the OP, here's an attempt at defining a self-crossing for an Eulerian trail in a plane graph $G$. The trail crosses itself at a vertex $v$ if, among the edges of $G$ that are incident with $v$, there are four distinct edges $a,b,c,d$ which occur in the cyclic order $(a,b,c,d)$ in the clockwise ordering of the edges incident with $v$, and such that the edges $a$ and $c$ are traversed consecutively (though not necessarily in that order) in the trail, and the same goes for the edges $b$ and $d$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/795415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Conjugate to the Permutation How many elements in $S_{12}$ are conjugate to the permutation $$\sigma=(6,2,4,8)(3,5,1)(10,11,7)(9,12)?$$
How many elements commute with $\sigma$ in $S_{12}$?
I believe I use the equation $n!/|K_{\sigma}|$ for the second question, but I'm not sure. Is anyone aware of how to do these?
|
Two permutations are conjugate if they have the same cycle structure (same number of cycles with same lengths).
Given a permutation with this cycle structure, you won't change it if you rotate each cycle as much as you want.
So, there are $4$ "rotated cycles" for the first one, $3$ for the next two cycles, and $2$ for the last. So there are $4\times3\times3\times2$ equivalent permutations. But you can also swap the second and the third cycles, so there are actually $(4\times3\times3\times2)\times2=144$ equivalent permutations, for each given permutation.
So there are $12!/144= 3326400$ permutations conjugate to yours. (Equivalently, by orbit-stabilizer, the centralizer of $\sigma$ has order $144$, so exactly $144$ elements of $S_{12}$ commute with $\sigma$; this answers the second question.)
A precision about what I mean by "rotated cycle".
Since you have
$$\tau^{-1}\sigma\tau = \left(\tau(6), \tau(2), \tau(4), \tau(8)\right)\left(\tau(3), \tau(5), \tau(1)\right)\left(\tau(10), \tau(11), \tau(7)\right)\left(\tau(9), \tau(12)\right)$$
A priori all permutation $\tau \in S_n$ would give you a conjugate of $\sigma$ by this operation, but there are double counts.
If, for the first cycle, the image of $(\tau(6), \tau(2), \tau(4), \tau(8))$ is, say, $(1,2,3,4)$, then you will get the same cycle if instead the image is $(2,3,4,1)$, $(3,4,1,2)$ or $(4,1,2,3)$. So each given conjugate can be associated with $4$ permutations $\tau$ that give the same cycle $(1,2,3,4)$. But you have to consider also the other cycles. And you need also to consider that the two 3-cycles may be switched.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/795505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
How do I evaluate this definite integral which blows up at lower limit? I have an integral of the form
$$\int^{\infty}_{0}{\frac{2a^2-x^{2} }{a^{2}+x^{2}}e^{\frac{-x^{2}}{b^2}}xdx}.$$
On substitution of $x^2=t$ and simplifying, I get integral of the form $$\int^{\infty}_{0}{\frac{e^{-t}}{t}dt}$$ which blows up as $ t \to 0$. Is there any way to approximate it? Is there some cutoff at the lower limit I can use?
|
That integral is a well known special function, the Exponential Integral. You didn't do the substitution right, though.
If $b$ is $1$, the integral is $\int_0^\infty \frac{2a^2-x^2}{a^2+x^2} e^{-x^2} x\, dx = \frac12\int_0^\infty \frac{2a^2 - t}{a^2+t} e^{-t}\, dt$, since $t=x^2$ gives $x\,dx=\frac12\,dt$. Then, you $u$-substitute $u=a^2+t$ and shift the bounds from $u=a^2$ to $u=\infty$ and get the integral $\frac12\int_{a^2}^\infty \frac{2a^2 - (u - a^2)}{u} e^{-u} e^{a^2}\, du = \frac{e^{a^2}}{2} \left(3 a^2 \int_{a^2}^\infty \frac{e^{-u}}{u}\, du - \int_{a^2}^\infty e^{-u}\, du\right)$. The second integral is simply $e^{-a^2}$, and the first integral is the exponential integral evaluated at $a^2$, which is finite for $a \neq 0$.
You can adjust to the case where $b \neq 1$ easily, by factoring out a $b^2$ from the numerator and denominator of the fraction and then substituting $t=\frac{x^2}{b^2}$, and then the rest is similar.
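A numerical cross-check of the $b=1$ case (note the factor $\frac12$ from $x\,dx=\frac12\,dt$, so the closed form is $\frac{3}{2}a^2e^{a^2}E_1(a^2)-\frac12$; the value $a=1.3$ is an arbitrary test point):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1  # E1, the exponential integral

a = 1.3
lhs, _ = quad(lambda x: (2 * a**2 - x**2) / (a**2 + x**2) * np.exp(-x**2) * x,
              0, np.inf)
rhs = 1.5 * a**2 * np.exp(a**2) * exp1(a**2) - 0.5
print(lhs, rhs)  # should agree to quadrature accuracy
```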
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/795578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Possibility of publishing First, a little background. I have a master's degree in mathematics. I then began a PhD, but after some years I dropped out (for personal reasons). Now I am returning to mathematics. I have a job, but I do mathematics in my free time.
Is it possible to publish an article in some journal if I am not affiliated with any university or research institute? (I am aware of the review process in mathematical journals.)
|
Of course you can. But since you want to publish something maybe it would be better to consider re-entering a PhD program in order to spend more time doing math. There you will have more opportunities to do research and publish your work.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/795860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Differentiability and Lipschitz Continuity I've seen some questions on this site that are similar to the following, but not precisely the same.
Let $f:[a,b]\to\mathbb{R}$ be a differentiable function and assume $f'$ is continuous in $[a,b]$. Prove that $f$ is Lipschitz continuous. What is the best possible Lipschitz constant?
My work so far:
Assume $f$ is differentiable on $[a,b]$ and $f'$ is continuous on $[a,b]$. Since $f$ is differentiable on $[a,b]$, $f$ is continuous on $[a,b]$. Consider any $x_1,x_2\in[a,b]$ with $x_1<x_2$. Since $[x_1,x_2]\subseteq[a,b]$, $f$ is differentiable and continuous on each such $[x_1,x_2]$, and $f'$ is continuous on $[x_1,x_2]$. Then, by the Mean Value Theorem, there exists $c\in(x_1,x_2)$ such that $f'(c)=(f(x_2)-f(x_1))/(x_2-x_1).$ Since $f'$ is continuous on the compact set $[a,b]$, by the extreme value theorem, $f'$ achieves its minimum and maximum values at some $x_m,x_M\in[a,b]$. Let $A=\max\{|f'(x_m)|,|f'(x_M)|\}.$ Then$$|f'(c)|=|(f(x_2)-f(x_1))/(x_2-x_1)|\leq A\implies|f(x_2)-f(x_1)|\leq A|x_2-x_1|$$
$\forall x_1,x_2\in[a,b]$
Is this correct? I appreciate the help...
|
More or less... More directly, $|f(x_2)-f(x_1)| = |f'(c)| |x_2-x_1| \leq \max_{c \in [a,b]} |f'(c)| |x_2-x_1|$. Hence $A=\|f'\|_\infty = \max_{c \in [a,b]} |f'(c)|$ is one possible Lipschitz constant.
On the other hand, if $f(x)=mx+q$, then $\|f'\|_\infty=|m|$, and clearly $|m|$ is also the best Lipschitz constant.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/795950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why the geodesic curvature is invariant under isometric transformations? As I know the geodesic curvature
$$
\kappa_g = \sqrt{\det~g} \begin{vmatrix} \frac{du^1}{ds} & \frac{d^2u^1}{ds^2} + \Gamma^1_{\alpha\beta} \frac{du^\alpha}{ds} \frac{du^\beta}{ds} \\ \frac{du^2}{ds} & \frac{d^2u^2}{ds^2} + \Gamma^2_{\alpha\beta} \frac{du^\alpha}{ds} \frac{du^\beta}{ds} \end{vmatrix},
$$
where $g$ is the metric tensor, $\Gamma^v_{\alpha\beta}$ is the Christoffel symbols of the second kind.
And the first fundamental form of the surface $I = (du^1, du^2) g (du^1, du^2)^T$. I think $I$ is invariant under isometric transformations but not the metric tensor $g$. So why $\kappa_g$ is invariant under isometric transformations?
|
$\kappa_g$ depends purely on the coefficients of the first fundamental form (FFF) of surface theory and their derivatives; the coefficients of the second fundamental form (SFF) are not involved.
It is therefore invariant under isometric mappings (bending transformations), like lengths, angles, the Gauss curvature $K$, integral curvature, etc. Liouville's formula gives the explicit expression; see standard textbooks on differential geometry.
$K$ is the notable case: it can be computed from the determinants of the SFF and FFF, yet Gauss's Theorema Egregium shows it depends on the FFF alone.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/796042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
order of groups Question: Suppose $\operatorname{ord}(g)=20$. Find elements $h,k\in G$ such that $\operatorname{ord}(h)=4$, $\operatorname{ord}(k)=5$, and $hk^{-1}=k^{-1}h=g$.
I can't seem to find anything in my notes on how to complete this question. Can someone help hint how to find the solution to this question please?
|
Hint: What are the orders of $g^5$ and $g^4$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/796141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Interesting "real life" applications of serious theorems As student in mathematics, one sometimes encounters exercises which ask you to solve a rather funny "real life problem", e.g. I recall an exercise on the Krein-Milman theorem which was something like:
"You have a great circular pizza with $n$ toppings. Show that you can divide the pizza equitably among $k$ persons, which means every person gets a piece of pizza with exactly $\frac{1}{k}$ of any of the $n$ topping on it."
Are there more examples which are particular interesting or instructive?
EDIT: Since this is turning into a list of mathematical jokes or sophisticated proofs for simple facts, I may have to be more precise what I was asking for: a "real life example didactically used to motivate a mathematical theorem" (thanks to Lord_Gestalter for this great wording).
|
The $n=2$ case of the Borsuk-Ulam theorem can be visualized by saying there exists some pair of antipodal points on the Earth with equal temperatures and barometric pressures. Of course, this is assuming that temperature and pressure vary continuously.
Ramsey's theorem says that, if given a sufficiently large complete graph that has been arbitrarily colored with $n$ colors, then one can find a monochromatic complete subgraph of a particular size. One example is as follows: Given any $2$-coloring on $K_6$, then we are guaranteed to find a monochromatic subgraph of size $3$. This has an interesting real-life interpretation: If we invite $6$ people to a party, then at least $3$ of them must be mutual acquaintances, or at least $3$ of them must be mutual strangers.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/796236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64",
"answer_count": 17,
"answer_id": 4
}
|
Expectation of Continuous variable. Given the probability density function
$$
f(x) =
\begin{cases}
\frac{cx}{3}, & 0 \leq x < 3, \\
c, & 3 \leq x \leq 4, \\
0 & \text{ otherwise}
\end{cases}
$$
I have found $c$ to be $0.4$ and $E(X)$ to be $2.6$. But I'm being asked to find $E(3X - 5)$ and I'm unsure of what to do.
|
$$\mathbf E(3\mathbf X-5)=3\,\mathbf E(\mathbf X)-5=3(2.6)-5=2.8$$
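If you want to double-check both numbers, a quick numerical integration (splitting at the discontinuity $x=3$) reproduces them:

```python
from scipy.integrate import quad

c = 0.4
f = lambda x: c * x / 3 if x < 3 else c
EX, _ = quad(lambda x: x * f(x), 0, 4, points=[3])
print(EX, 3 * EX - 5)  # 2.6 2.8 (up to rounding)
```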
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/796297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Identically distributed and same characteristic function If $X,Y$ are identically distributed random variables, then I know that their characteristic functions $\phi_X$ and $\phi_Y$ are the same. Does the converse also hold?
|
Yes, it is true as a consequence of the inversion formula
$$\mu\big((a,b)\big) +\frac 12\mu(\{a,b\}) = \frac 1{2\pi}\lim_{T\to +\infty}\int_{-T}^T\frac{e^{-ita} -e^{-itb} }{it}\,\varphi_\mu (t)\,\mathrm dt,$$
valid for $a\lt b$.
If $\mu$ and $\nu$ have the same characteristic function, the inversion formula shows that $\mu$ and $\nu$ agree on every interval $(a,b)$ whose endpoints are atoms of neither measure. Since all but countably many points are atoms of neither, these intervals form a $\pi$-system generating the Borel $\sigma$-algebra, and this characterizes Borel probability measures on the real line.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/796475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to prove ${\bf u}\cdot{\bf v}=|{\bf u}|\cdot|{\bf v}|\cos\theta$, if $\theta$ is the angle between $|{\bf u}|$ and $|{\bf v}|$ This is a snippet from my book.
How did they get from $|{\bf u}|^2={\bf u}\cdot{\bf v}=|{\bf u}||{\bf v}|\frac{|{\bf u}|}{|{\bf v}|}$?
|
I don't understand the comments from your book, but I assume the question is how to prove the equivalence of the two definitions of the dot product, $\mathbf{u}\cdot \mathbf{v}=u_1v_1+u_2v_2+u_3 v_3$, and the equation $\mathbf{u}\cdot \mathbf{v}=|\mathbf{u}||\mathbf{v}| \cos \theta$.
This is a consequence of the law of cosines: if we take the triangle with sides $\mathbf{u}$, $\mathbf{v}$ and $\mathbf{u}-\mathbf{v}$, then the law of cosines reads
$$|\mathbf{u}-\mathbf{v}|^2=|\mathbf{u}|^2+|\mathbf{v}|^2-2|\mathbf{u}||\mathbf{v}|\cos \theta$$
using
$$|\mathbf{u}-\mathbf{v}|^2=|\mathbf{u}|^2+|\mathbf{v}|^2 -2\,\mathbf{u}\cdot \mathbf{v}$$ we get the desired result.
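A quick numerical spot-check of the identity used in the last step (random vectors, names mine):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.normal(size=3), rng.normal(size=3)
lhs = np.linalg.norm(u - v) ** 2
rhs = np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2 - 2 * np.dot(u, v)
print(np.isclose(lhs, rhs))  # True
```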
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/796574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Direct proof that $n!$ divides $(n+1)(n+2)\cdots(2n)$ I've recently run across a false direct proof that $n!$ divides $(n+1)(n+2)\cdots (2n)$ here on math.stackexchange. The proof is here prove that $\frac{(2n)!}{(n!)^2}$ is even if $n$ is a positive integer (it is the one by user pedja, which got 11 upvotes). The proof is wrong because it claims that one can rewrite $(n+1)\cdots (2n)$ as
$$ (n+1)(n+2)\cdots (2n-2)(2n-1)(2n) = 2\cdots 2\cdot n!\cdot (2n-1)(2n-3)\cdots (n+1).$$
In other words, it claims that the product of the factors $2n$, $2(n-1)$, $2(n-2)$, $\ldots$, all of which are in $(n+1)\cdots(2n)$, amounts to $2^kn!$, but this is not true since the factors $2m$ under scrutiny do not start from $m=1$ but from values greater than $n$. For instance, for $n=4$, we have $(8)(7)(6)(5)=2\cdot 2\cdot 4\cdot 3\cdot 5\cdot 7$, not $(8)(7)(6)(5)=2\cdot 2\cdot 4!\cdot 5\cdot 7$. This makes me wonder two things:
(1) What is a valid direct proof?
(2) How many wrong proofs go undetected here? (How many false proofs receive 10+ upvotes?)
NB Not interested in any proof that uses binomial coefficients and/or the relationship $\binom{2n}{n}=\frac{(2n)!}{n!n!}$.
|
Here is a more direct number-theoretical proof that if each $a_i \ge 0$ and $a_1+a_2+\cdots+a_r = n$, then $\frac{n!}{a_1!a_2! \cdots a_r!}$ is an integer.
By Legendre's formula, the exponent of a prime $p$ in $m!$ is $\sum_i \left\lfloor \frac{m}{p^i} \right\rfloor$, so this reduces to proving $\sum_i \left \lfloor \frac{n}{p^i} \right \rfloor \ge \sum_i \left \lfloor \frac{a_1}{p^i} \right \rfloor + \sum_i \left \lfloor \frac{a_2}{p^i} \right \rfloor + \cdots + \sum_i \left \lfloor \frac{a_r}{p^i} \right\rfloor$ for every prime $p$.
Lemma: $\lfloor x \rfloor + \lfloor y \rfloor \le \lfloor x+y \rfloor$ if $x,y$ are real numbers.
This can be easily proved by writing $x,y$ in terms of their integer and fractional parts.
Applying this to the $RHS$, term by term in $i$, gives $RHS \le \sum_i\left \lfloor \frac{a_1+a_2+ \cdots + a_r}{p^i} \right \rfloor = \sum_i\left\lfloor \frac{n}{p^i} \right\rfloor$, and doing this over all primes $p$ gives the desired result.
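For the skeptical, Legendre's formula and the resulting integrality are easy to test by machine; a sketch (the helper `vp` computes $\sum_i\lfloor n/p^i\rfloor$ by repeated floor division):

```python
from math import factorial
from random import randint

def vp(n: int, p: int) -> int:
    v = 0
    while n:
        n //= p
        v += n
    return v

for _ in range(1000):
    parts = [randint(0, 30) for _ in range(4)]
    n = sum(parts)
    for p in (2, 3, 5, 7, 11):
        assert vp(n, p) >= sum(vp(a, p) for a in parts)

# equivalently, a multinomial coefficient is an integer:
assert factorial(20) % (factorial(7) * factorial(13)) == 0
print("all checks pass")
```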
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/796743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 3
}
|
$\mathbb{Q}(\sqrt[3]{2}, \zeta_{9})$ Galois group How do I calculate the degree of $\mathbb{Q}(\sqrt[3]{2}, \zeta_{9})$ over $\mathbb{Q}$. Should it be 18, as $[\mathbb{Q}(\sqrt[3]{2}):\mathbb{Q}] = 3$, and $[\mathbb{Q}(\zeta_{9}):\mathbb{Q}] = 6$?
However $(\sqrt[3]{2})^{3} \in \mathbb{Q}(\zeta_{9})$, how this affect the calculation?
Thanks
|
$\newcommand{\Q}{\mathbb{Q}}\newcommand{\Size}[1]{\lvert #1 \rvert}$$\sqrt[3]{2}$ has minimal polynomial
$f = x^{3}-2$ over $\Q$. You have to show that $f$ is also the minimal
polynomial over $F = \Q(\zeta_{9})$, that is, that $f$ is irreducible in
$F[x]$, and since $f$ has degree $3$, it is enough to show
that $f$ has no roots in $F$. From this it will follow that
$$
\Size{\Q(\sqrt[3]{2}, \zeta_{9}) : \Q}
=
\Size{F(\sqrt[3]{2}) : F} \cdot \Size{F : \Q}
=
3 \cdot 6 = 18.
$$
Since $F/\Q$ is Galois, if it contains a root of the polynomial $f \in
\Q[x]$, which is irreducible over $\Q$, then it contains all the roots
of $f$, and thus it contains a splitting field $E$ of $f$ over $\Q$.
But the Galois group of $E/\Q$ is nonabelian, while that of $F/\Q$ is
abelian, so this rules out the possibility that $E \subseteq F$.
Alternatively, since the Galois group of $F/\Q$ is abelian of order $6$, there is
only one intermediate field $\Q \subset L
\subset F$ with $\Size{L : \Q} = 3$, and so in $F$ you cannot find the
three distinct extensions $\Q(\sqrt[3]{2}), \Q(\omega \sqrt[3]{2}),
\Q(\omega^{2} \sqrt[3]{2})$ of degree $3$ over $\Q$. Here $\omega$ is
a primitive third root of unity, and $\sqrt[3]{2}, \omega \sqrt[3]{2},
\omega^{2} \sqrt[3]{2}$ are the three roots of $f$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/796820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
How find this minimum of the value $f(1)+f(2)+\cdots+f(100)$ Give the positive integer set $A=\{1,2,3,\cdots,100\}$, and define function
$f:A\to A$ and
(1): for any $1\le i\le 99$, we have
$$|f(i)-f(i+1)|\le 1$$
(2): for any $1\le i\le 100$, we have $$f(f(i))=100$$
Find the minimum of the value
$$f(1)+f(2)+f(3)+f(4)+\cdots+f(99)+f(100)$$
Maybe this is a nice problem; I want to use $|f(i)-f(i+1)|\le 1$, but I can't make it work. Thank you.
It is said that the answer is $8350$.
|
Claim: to achieve the minimum, f(n) is a non decreasing function. Suppose not, take the natural construction $f^*(n)$ where we smooth out the decreasing part, show that it satisfies the conditions and has a smaller sum.
Claim: Suppose that The image of $f(n)$ consists of $k$ elements. Then, because we have a non decreasing function, we see that the minimum sum occurs when we have $f(1)=\ldots=f(100-2k+2)=100-k+1, f(100-2k +j)=100-k+j-1$ for j from 3 to k and $f(100-k+1)=\ldots=f(100)=100$.
It remains to verify that the minimum sum is achieved at $k=34$; this is easily done by computing the sum for each $k$ and comparing.
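A short script evaluating the sum of this configuration for each $k$ confirms the minimum $8350$ at $k=34$ (the function below just transcribes the construction above):

```python
def S(k: int) -> int:
    head = (102 - 2 * k) * (101 - k)                   # f(1..102-2k) = 101-k
    middle = sum(99 - k + j for j in range(3, k + 1))  # ramp up to 99
    tail = 100 * k                                     # f(101-k..100) = 100
    return head + middle + tail

best = min(range(2, 51), key=S)
print(best, S(best))  # 34 8350
```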
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/797850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 2
}
|
Inverse Laplace Through Complex Roots I have been asked to apply inverse laplace to this:
$$ \frac{(4s+5)}{s^2 + 5s +18.5} $$
What I have done is;
I found the roots of denominator which are : $$ (-5-7i)/2 $$ and $$ (-5+7i)/2 $$
Then I factorized the denominator as :
$$ \frac{(4s+5)}{(s+\frac{(5+7i)}{2})(s + \frac{(5-7i)}{2})} $$
Then i split this fraction to sum of two different fractions through;
$$ \frac{(A)}{(s+\frac{(5+7i)}{2})} $$
$$ \frac{(B)}{(s+\frac{(5-7i)}{2})} $$
Then I found A and B as ; $$ A = 2 + \frac{35i}{49} $$
$$ B = 2 - \frac{35i}{49} $$
At the and , Inverse Laplace took this form ;
$$ (2 + \frac{35i}{49})(\mathcal{L}^{-1}(\frac{1}{s+\frac{5+7i}{2}})) + (2 - \frac{35i}{49})(\mathcal{L}^{-1}(\frac{1}{s+\frac{5-7i}{2}}))$$
When I took the inverse laplace of these, the result was;
$$ (2 + \frac{35i}{49})( (e)^{(-3.5i - 2.5)t} ) + (2 - \frac{35i}{49}) ( (e)^{(3.5i-2.5)t} ) $$
I verified this result from Wolfram Alpha and Mathematica. But my guest professor insists this is not the solution and he gave me 0 points. He insists the solution includes cosines and sines. I explained him if he uses Euler Identity on these exponents the result will become his result but he refuses and says only way to solve this is to use Laplace tables.
I do agree making the denominator a full square and use Laplace table is the easier and cleaner solution. But isn't this also a solution? Thanks.
|
I checked your calculations carefully and they are perfectly correct! Congratulations. Maybe your professor would prefer $\frac{35}{49}$ to be reduced to $\frac{5}{7}$!!
You are totally correct with the fact that sines and cosines would be obtained using Euler's identity. So, factor $e^{-\frac {5}{2}t}$ and give $$\frac{2}{7} e^{-5 t/2} \left(14 \cos \left(\frac{7 t}{2}\right)-5 \sin \left(\frac{7 t}{2}\right)\right)$$ in which everything is real. Probably, your professor does not enjoy complex numbers in real-valued functions.
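One more way to arbitrate the dispute: a CAS. Something along these lines (sympy should return the same real form, possibly multiplied by a Heaviside factor):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = (4 * s + 5) / (s**2 + 5 * s + sp.Rational(37, 2))  # 18.5 = 37/2
f = sp.inverse_laplace_transform(F, s, t)
print(sp.simplify(f))
# expect 2*(14*cos(7*t/2) - 5*sin(7*t/2))*exp(-5*t/2)/7, up to formatting
```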
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/797926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Do prime numbers satisfy this? Is this true that $n\log\left(\frac{p_n}{p_{n+1}}\right)$ is bounded, where $p_n$ is the $n$-th prime number?
|
Seems unbounded:
Let $g_n = p_{n+1} - p_n$ be the prime gap, then Westzynthius's result (see link below) states that $\lim\sup \left[ g_n/(\log p_n) \right] = \infty$, hence
$$\limsup n \log(p_{n+1}/p_n) = \limsup n \log (1 + g_n/p_n) = \limsup \frac{n g_n}{p_n} = \limsup \frac{g_n}{\log n} = \infty,$$
using $\log(1+x)\sim x$ as $x\to 0$ and $p_n\sim n\log n$ (so that $n/p_n\sim 1/\log n$ and $\log p_n\sim\log n$).
http://en.wikipedia.org/wiki/Cram%C3%A9r%27s_conjecture
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/798002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
From any list of $131$ positive integers with prime factor at most $41$, $4$ can always be chosen such that their product is a perfect square Author's note:I don't want the whole answer,but a guide as to how I should think about this problem.
BdMO 2010
In a set of $131$ natural numbers, no number has a prime factor greater than 42. Prove that it
is possible to choose four numbers from this set such that their product is a perfect square.
The above question obviously has "PIGEONHOLE" written all over its face. However, finding the pigeons and holes is the hard part. The first thing to realize is that there are $13$ primes below $42$. We can write any number of our list using those primes. Now, since we only care about whether the exponents of the primes are even or odd, there are a total of $2^{13}$ types of integers. I have done a lot more computation but I am unable to see how this is leading us to pigeonholing (okay, I will admit that I should be a little more open-minded, but I am certain that this uses pigeonholing). Also, the only way four integers can add up to an even number is if there are an even number of integers of the same parity. In that case, there are a total of $8$ cases for every quadruple of exponents (counting permutations).
A hint will be appreciated. Also, any mention of how you came up with your own solution will be very much appreciated.
EDIT: I have just found this
EDIT: I may have taken a step towards a solution. We can rewrite each of the integers as $x^2m$ where $m$ is squarefree. The number of possible $m$'s is $2^{13}$. Now, if we show that there is another integer with the same $m$ part, and argue the same way with another pair of integers, and then multiply them together, our solution will be complete. However, I don't think that the claim is at all true.
|
HINT - though I haven't followed through a solution ...
Finding four numbers all at once could be hard. Sometimes divide and conquer goes along with pigeonhole - using pigeonhole to find (disjoint) pairs which give the right parity on some fixed subset of the prime factors, and then using it again to find two pairs which match on the rest - but whether that works depends on whether you can find the right number of pairs for the second stage.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/798083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Question about calculating at Uniform distribution A train come to the station $X$ minuets after 9:00, $X\sim U(0,30)$.
The train stay at the station for 5 minutes and then leave.
A person reaches to the station at 9:20.
Addition:
There was no train when the person came to the station
What is the probability that he didn't miss the train?
Please help me to calculate it, and if you can, please explain (if you understand) why the detail that the train stays at the station for 5 minutes is necessary?
It should be:
$$P(X<15)\;?$$
Thank you!
|
Hint: figure out for which values of $X$ will the condition (not missing the train) work.
For example, if the train waits 30 minutes, then the probability (not missing)
is 1.
If it's hard to think in continuous terms, imagine that the train comes at an integer time, $0 \le i \le 30$. What happens then?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/798153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Nuking the Mosquito — ridiculously complicated ways to achieve very simple results Here is a toned down example of what I'm looking for:
Integration by solving for the unknown integral of $f(x)=x$:
$$\int x \, dx=x^2-\int x \, dx$$
$$2\int x \, dx=x^2$$
$$\int x \, dx=\frac{x^2}{2}$$
Can anyone think of any more examples?
P.S. This question was inspired by a question on MathOverflow that I found out about here. This question is meant to be more general, accepting things like solving integrals and using complex numbers to evaluate simple problems.
|
Here is a major number-theoretical nuking. By a result of Gronwall (1913) the Generalized Riemann Hypothesis (GRH) implies that the only quadratic number fields $\,K$ whose integers have unique factorization are $\,\Bbb Q[\sqrt {-d}],\,$ for $\,d\in \{1,2,3,7,11,19,43,67,163\}.\,$ Therefore, if $\,K$ is not in this list then it has an integer with a nonunique factorization into irreducibles.
But that can be proved much more simply in any particular case, e.g. the classic elementary proof that $\,2\cdot 3 = (1-\sqrt{-5})(1+\sqrt{-5})\,$ is a nonunique factorization into irreducibles in $\,\Bbb Z[\sqrt{-5}],\,$ which can easily be comprehended by a bright high-school student.
Similarly, other sledgehammers arise by applying general classification theorems to elementary problems, e.g. classifications of (finite) (abelian) (simple) groups. Examples of such sledgehammers can be found here and on MathOverflow by keyword searches.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/798215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66",
"answer_count": 21,
"answer_id": 3
}
|
Second derivative of $\arctan(x^2)$ Given that $y=\arctan(x^2)$ find $\ \dfrac{d^2y}{dx^2}$.
I got
$$\frac{dy}{dx}=\frac{2x}{1+x^4}.$$
Using low d high minus high d low over low squared, I got
$$\frac{d^2y}{dx^2}=\frac{(1+x)^4 \cdot 2 - 2x \cdot 4(1+x)^3}{(1+x^4)^2}.$$
I tried to simplify this but didn't get the answer which is
$$\frac{d^2y}{dx^2}=\frac{2(1-3x^4)}{(1+x^4)^2}.$$
Where am I going wrong?
|
Alternatively,
$ \large \tan (y) = x^2 \Rightarrow \sec^2 (y) \cdot \frac { \mathrm{d}y}{\mathrm{d}x} = 2x $
$ \large \Rightarrow \frac { \mathrm{d}y}{\mathrm{d}x} = \frac {2x}{\sec^2 (y)} = \frac {2x}{\tan^2 (y) + 1} = \frac {2x}{x^4 + 1} $
$ \large \Rightarrow \frac { \mathrm{d^2}y}{\mathrm{d}x^2} = \frac {2(x^4+ 1) - 2x(4x^3)}{(x^4+1)^2} = \frac {-6x^4 + 2}{(x^4 + 1)^2} $
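(A CAS check of the final expression, for the doubtful:)

```python
import sympy as sp

x = sp.symbols('x')
d2 = sp.diff(sp.atan(x**2), x, 2)
print(sp.simplify(d2 - (2 - 6 * x**4) / (x**4 + 1) ** 2))  # 0
```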
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/798277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
How to prove such an elementary inequality The inequality is the following:
$a^ \theta b^ {1-\theta}$ $\leq$ $[\theta ^ \theta (1-\theta)^ {1-\theta}]^{1/p}(a^p+b^p)^{1/p}$, where $\theta \in [0,1]$, $a,b$ are nonnegative.
This inequality is used to give a sharper constant in the proof of an embedding theorem in Sobolev spaces. Here is the link https://www.math.ucdavis.edu/~hunter/pdes/ch3.pdf. On page 66, the author used the inequality to give a sharper estimate, but he didn't give a proof of this inequality. Actually,
$a^ \theta b^ {1-\theta}$ $\leq$ $(a^p+b^p)^{1/p}$ is obvious (this is my first try), and enough to prove the embedding theorem, but it is always interesting to give a sharper inequality.
I tried to prove this seemingly elementary inequality, but I'm really not good at it. Can anyone give a smart answer? Any hint would be appreciated.
|
By the AM/GM inequality,
$$ \left(\frac a\theta\right)^\theta \left(\frac b{1-\theta}\right)^{1-\theta}
\le \theta\left(\frac a\theta\right) + (1-\theta)\left(\frac b{1-\theta}\right)
= a+b
$$
Now replace $a$ and $b$ with $a^p$ and $b^p$.
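A randomized spot-check (here with $p=2$; any $p\ge 1$ works the same way):

```python
from random import random

for _ in range(100_000):
    a, b, th, p = random(), random(), random(), 2
    lhs = a**th * b**(1 - th)
    rhs = (th**th * (1 - th)**(1 - th))**(1 / p) * (a**p + b**p)**(1 / p)
    assert lhs <= rhs + 1e-12
print("inequality holds on all samples")
```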
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/798464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
}
|
Please, help with this integration problem Consider the region bounded by the curves $y=e^x$, $y=e^{-x}$, and $x=1$. Use the method of cylindrical shells to find the volume of the solid obtained by rotating this region about the y-axis.
I drew the corresponding graph. I'm confused: until now I have only solved volumes where the region rotates about one of its own bounding lines (such as $x=1$), but here it rotates about the $y$-axis, which lies to the left of the bounding line. How does this change the procedure?
Thank you!
|
Rotating about the vertical line $x=1$ is in principle no different than rotating about the $y$-axis, which after all is just the vertical line $x=0$. The radius of a shell when rotating about the $y$-axis is the distance of $x$ from $0$, which is $r=|x-0|=|x|$; this further simplifies to $|x|=x$ if $0\leq x$. The only modification required when rotating about a different vertical line such as $x=1$ is that now the radius of a shell is going to be the distance of $x$ from $1$, which is $r=|x-1|$, which will further simplify to $|x-1|=1-x$ if $x\leq 1$, as is the case for your problem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/798543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Definition of Cyclic subgroup
The above is a theorem from my book. What I don't understand is the second sentence when it says $b$ generates $H$ with $n/d$ elements. I thought that since $b = a^s$ generates $H$, it would have $s$ elements, meaning $H = \{e, a, a^2, \dots, a^s \}$? I've found some counterexamples to convince myself I am wrong, but I don't understand why my original idea is wrong either.
|
Another proof
Let $k=o\langle b\rangle=o\langle a^s\rangle$ be the order of H
and according to the definition,
$$a^n=e$$
and
$$b^k=(a^s)^k=a^{ks}=e$$
so that
$$n\mid ks,$$
where $k$ is the smallest positive integer with
$$ks\equiv 0 \pmod n.$$
Writing $n=d\,n'$ and $s=d\,s'$ with $d=\gcd(n,s)$ and $\gcd(n',s')=1$, the condition $n\mid ks$ becomes $n'\mid k$, so the smallest such $k$ is
$$k=\frac{n}{\gcd(n,s)}=n/d$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/798694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Tell whether $\dfrac{10^{91}-1}{9}$ is prime or not? I really have no idea how to start. The only theorem about prime numbers I know of is Fermat's little theorem, and maybe it's related to the binomial theorem.
Any help will be appreciated.
|
Just think through the actual number.
$10^{91}$ is a $1$ with $91$ $0$'s after it.
$10^{91}-1$ is therefore $91$ $9$'s in a row.
$\frac{10^{91}-1}{9}$ is therefore $91$ $1$'s in a row.
Due to the form of this number, $x$ $1$'s in a row will divide it, where $x$ is a divisor of $91$.
For example $1111111$ ($7$ ones) is a divisor, and so is $1111111111111$ ($13$ ones), since $91=7\cdot 13$.
Hence the number is not prime.
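With exact integer arithmetic this takes two lines to confirm (`R(n)` below denotes the repunit with $n$ ones):

```python
R = lambda n: (10**n - 1) // 9  # the repunit with n ones

# 91 = 7 * 13, so the repunits with 7 and 13 ones both divide R(91):
print(R(91) % R(7), R(91) % R(13))  # 0 0, hence R(91) is composite
```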
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/798778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
}
|
$\text{Im}(z)$ in equation I'm having trouble with this equation:
$$\text{Im}(-z+i) = (z+i)^2$$
After a bit of algebra i've gotten:
$$1-\text{Im}(z) = z^2 + 2iz - 1$$
But i have no clue where to go from here, how do i get rid of the "$\text{Im}$"?
|
Hint
Write $z=a+i~b$ in which $a$ and $b$ are real numbers. So $$Im(-z+i)=Im(-a+i(1-b))=1-b$$
Since John's answer came while I was typing, just continue the way he suggests (this is what I was about to write).
Continuation of my answer
The right hand side is $$(z+i)^2=(a+i(b+1))^2=a^2-(b+1)^2+2a(b+1)i$$ So the equation is $$1-b=a^2-(b+1)^2+2a(b+1)i$$ Now, the method consists in the identification of real and imaginary parts. This means that we have two equations $$1-b=a^2-(b+1)^2$$ $$2a(b+1)=0$$ that is to say two equations for two unknowns $a$ and $b$.
I am sure that you can take from here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/798868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
b such that Ax = b has no solution having found column space $A:=\begin{bmatrix}
2 & 6 & 0 \\
3 & 1 & 3 \\
1 & 0 & 0 \\
4 & 8 & 1
\end{bmatrix}$
I've found the basis for the column space by doing row reduction (i.e. basis is just the columns vectors of A in this case), and the null space only has the trivial solution.
Question
Find a basis for $B = \{b \in R^4 \ | \ Ax = b \ has \ no\ solution\}$.
Is there a quick way of doing this? I know I can augment A with $b = (b_1, b_2, b_3, b_4)^T$, do row reduction, and then look at the row of zeros, but that seems quite laborious?
|
Denote $A\in\mathbb{F}^{m\times n}$ by
$$
A=\begin{bmatrix}a_{11} & a_{12} & . & . & . & & & a_{1n}\\
a_{21} & & & & & & & a_{2n}\\
. & & . & & & & & .\\
. & & & . & & & & .\\
. & & & & . & & & .\\
\\
\\
a_{m1} & a_{m2} & . & . & . & & & a_{mn}
\end{bmatrix}
$$
and
$$
x=\begin{bmatrix}x_{1}\\
x_{2}\\
.\\
.\\
.\\
\\
\\
x_{n}
\end{bmatrix}
$$
then
$$
Ax=\begin{bmatrix}a_{11}x_{1}+a_{12}x_{2}+...+a_{1n}x_{n}\\
a_{21}x_{1}+a_{22}x_{2}+...+a_{2n}x_{n}\\
.\\
.\\
.\\
\\
\\
a_{m1}x_{1}+a_{m2}x_{2}+...+a_{mn}x_{n}
\end{bmatrix}=\begin{bmatrix}a_{11}\\
a_{21}\\
.\\
.\\
.\\
\\
\\
a_{m1}
\end{bmatrix}x_{1}+\begin{bmatrix}a_{12}\\
a_{22}\\
.\\
.\\
.\\
\\
\\
a_{m2}
\end{bmatrix}x_{2}+...+\begin{bmatrix}a_{1n}\\
a_{2n}\\
.\\
.\\
.\\
\\
\\
a_{mn}
\end{bmatrix}x_{n}
$$
Note that the column matrices multiplied by the $x_{i}$ are the columns
of $A$; hence any vector $b$ of the form $b=Ax$ is a linear combination
of the columns of $A$ and thus lies in the span of the columns of $A$.
On the other hand, denote the columns of $A$ as $C_{1},C_{2},...,C_{n}$.
If $b$ is a linear combination of the columns of $A$ this precisely
means that there are scalars $\alpha_{1},\alpha_{2},...\alpha_{n}$
s.t
$$
b=\alpha_{1}C_{1}+\alpha_{2}C_{2}+...+\alpha_{n}C_{n}
$$
and we see (using the above about $Ax$) that choosing $x_{i}=\alpha_{i}$
would give us $Ax=b$.
We conclude that the span of the columns of $A$ is precisely the
set of solutions $Ax=b$, thus you are looking for all the vectors
in the space that are not in that spanned subspace
Note: Since $0\not\in B$ (because $Ax=0$ has the solution $x=0$),
$B$ is not a subspace and therefore we can't talk about a basis
for this set.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/798930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Limits in cofinite topology/why is the limit of x_n = n equal to 1 in the cofinite topology. Just reading about topological spaces for my exam, and I was wondering if anybody could explain exactly how limits work in the cofinite topology. So I am aware of the topological definition of a limit:
Let $(X, \tau)$ be a topological space, and let $(x_n)$ be a sequence in $X$. Then $(x_n)$ is convergent if $\exists L$ such that $\forall\,U \in \tau$ with $L \in U$, $\exists N$ such that $n\geq N \implies x_n \in U$.
I just can not see how to apply this to the cofinite topology. My lectures claim that $x_n = n \rightarrow 1. $ I have looked elsewhere for answers, but I can't really grasp what they are trying to say. Hopefully, explaining this example will make it clear to me.
Thanks, MSE!
|
Take any open neighbourhood $U$ of $1$. Since $\Bbb R\setminus U$ must be finite, $U$ contains all but a finite number of terms of the sequence $x_n$. Therefore $x_n\to 1$. You can see $1$ is not special at all.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/799093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Colloquialisms in Math Terminology What are some of your favorite colloquial sounding names for mathematical objects, proofs, and so on? For example, manifolds are often described using an atlas and a neighborhood describes a small set surrounding a point.
|
I can't believe no one has yet mentioned the following. These were quite common, at least at one time (1970s).
abstract nonsense -- refers to (Eilenberg/MacLane) category theory
invariant under change of notation -- refers to differential geometry, especially when tensors are involved
proof by diagram chasing -- refers to a proof involving commutative diagrams, especially when the proof is essentially solving a path puzzle or hedge maze puzzle
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/799172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Probability of drawing 6 specific letters from a set of 144 with 21 draws I know similar questions have been asked before, but I am struggling to find a method to calculate the probability for the situation below:
I have a set of 144 scrabble tiles with the following letter distribution.
2 x J, K, Q, X, Z
3 x B, C, F, H, M, P, V, W, Y
4 x G
5 x L
6 x D, S, U
8 x N
9 x T, R
11 x O
12 x I
13 x A
18 x E
I am trying to calculate the probability of holding all of the letters "I S N E A T", in any order, when 21 tiles are picked from the set of 144.
I would appreciate any help on methods to calculate this
Thanks in advance - Gary
|
There are a total of ${144\choose 21}=8943919960348862933558400$ ways to choose 21 tiles from the 144 available.
Let $I=\{12,6,8,18,13,9\}$ be the counts of tiles corresponding
to the letters I,S,N,E,A,T. Then the number of bad samples, i.e, missing one
of the required letters, of size 21 can be worked out using the inclusion-exclusion principle,
giving $$\sum_{A\subseteq I, A\neq\emptyset} {144-S(A)\choose 21}(-1)^{|A|+1}
=6947914061108945063687691. $$ Here $S(A)=\sum_{x\in A}x$ gives the number of
tiles corresponding to the set $A$. For instance, when $A=\{12,6\}$
we have $S(A)=18$, since there are eighteen tiles labelled I or S.
Subtracting gives the number of good samples $1996005899239917869870709$.
Dividing by the total number of samples gives
the chance of getting all six letters I,S,N,E,A,T:
$${1996005899239917869870709\over 8943919960348862933558400}\approx .22317.$$
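Exact integer arithmetic makes this straightforward to reproduce; a sketch of the inclusion-exclusion above:

```python
from math import comb
from itertools import combinations

counts = [12, 6, 8, 18, 13, 9]  # tiles for I, S, N, E, A, T
total = comb(144, 21)

bad = sum((-1) ** (r + 1) * comb(144 - sum(sub), 21)
          for r in range(1, 7)
          for sub in combinations(counts, r))

good = total - bad
print(good / total)  # ~0.22317, matching the figure above
```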
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/799267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to calculate the square root of a number? By searching I found a few methods, but all of them involve guessing, which is not what I want.
I need to know how to calculate the square root using a formula or something. In other words how does the calculator calculate the square root?
|
The easiest way to find $\sqrt[n]{a}$ for integer $n$ and $a>0$ efficiently is to use the Newton-Raphson approximation to invert the function $f : x \mapsto x^n - a$. But one must be careful with choosing the right starting point, so that the iteration will converge quadratically. Quadratic convergence means that at each step the error becomes approximately a constant times its square, which is equivalent to the error being proportional to $c^{2^k}$ after $k$ steps, for some $c \in (0,1)$
Let $x_0$ be such that $x_0 \in \sqrt[n]{a}[1,1+\frac{1}{4n})$
For each natural $k$ from $0$ to $\infty$:
Let $x_{k+1} = x_k - \dfrac{f(x_k)}{f'(x_k)} = x_k - \dfrac{{x_k}^n-a}{n{x_k}^{n-1}} = \dfrac{(n-1){x_k}^n-a}{n{x_k}^{n-1}}$
Then $( x_k : k \in \mathbb{N} )$ converges quadratically to $\sqrt[n]{a}$ uniformly for all $a>0$
General Case
For any real function $f$ such that $f(r) = 0$ and $f' \ne 0$ and $f''$ exists and $\left|\frac{f''}{2f'(r)}\right| \le m$ for some $m$:
Let $a = f'(r) \ne 0$
Then $f(r+d) = a d + g(d) d^2$ for any $d$ for some function $g$ such that:
$g(d) \in a [-m,m]$ for any $d$
Also $f'(r+d) = a + h(d) d$ for any $d$ for some function $h$ such that:
$h(d) \in a [-m,m]$ for any $d$
Let $( x_k : k \in \mathbb{N} )$ be such that:
$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$ for any natural $k$
$|x_0-r| \le \frac{1}{6m}$
For each natural $k$ from $0$ to $\infty$:
$x_k = r + d_k$ for some $d_k$
$|d_k| \le |d_0| \le \frac{1}{6m}$ by invariance
$x_{k+1} = (r+d_k) - \dfrac{ad_k+g(d_k){d_k}^2}{a+h(d_k){d_k}} \in (r+d_k) - \dfrac{d_k+[-m,m]{d_k}^2}{1+[-m,m]{d_k}}$
Thus $d_{k+1} \in d_k - (d_k+[-m,m]{d_k}^2) (1-[-m,m]{d_k}+[0,2]([-m,m]{d_k})^2)$ because:
$\frac{1}{1+t} \in 1-t+[0,2]t^2$ for any $t \ge -\frac{1}{2}$
Thus $d_{k+1} \in d_k - (d_k+[-m,m]{d_k}^2) (1+[-m,m]{d_k}+\frac{1}{3}[-m,m]d_k) \\ \quad \subseteq \frac{7}{3}[-m,m]{d_k}^2 + \frac{4}{3}[-m,m]^2{d_k}^3 \subseteq \frac{7}{3}[-m,m]{d_k}^2 + \frac{7}{18}[-m,m]{d_k}^2 \\ \quad \subset 3[-m,m]{d_k}^2 \subset [-1,1]d_k$
Thus the invariance is preserved
Also $3 m |d_{k+1}| < ( 3 m |d_k| )^2$
Therefore $3 m |d_k| < ( 3 m |d_0| )^{2^k} \le 2^{-2^k}$ for any natural $k$
Thus $x_k \to r$ quadratically as $k \to \infty$
Notes
In the case of finding $r = \sqrt[n]{a}$, the function $f : x \mapsto x^n - a$ has $\frac{f''}{2f'(r)}$ being $x \mapsto \frac{(n-1)x^{n-2}}{2r^{n-1}}$ which is bounded on $r[1,1+\frac{1}{4n})$ by $m = \frac{2n}{3r}$ because $\frac{n}{2r} (\frac{x}{r})^{n-2} \le \frac{n}{2r} (1+\frac{1}{4n})^n < \frac{n}{2r} e^{1/4} < m$. Thus $|x_0-r| < \frac{r}{4n} = \frac{1}{6m}$.
The procedure to find $x_0$ for efficient arbitrary precision arithmetic can be as follows:
Find the minimum integer $d$ such that $(2^d)^n \ge a$
Binary search on $[2^{d-1},2^d]$ to find $r$ until within an error of $\frac{2^{d-1}}{4n}$
Return the upper bound when the upper and lower bounds are within the error margin
The upper bound is between $r$ and $r+\frac{2^{d-1}}{4n} < r+\frac{r}{4n}$
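A minimal Python sketch of this procedure, assuming ordinary floating-point rather than arbitrary-precision arithmetic (the fixed count of 8 Newton steps is a convenient choice for doubles, not part of the analysis above):

    def nth_root(a, n, newton_steps=8):
        """Approximate a**(1/n) for a > 0 via Newton's method on x**n - a."""
        if a <= 0 or n < 1:
            raise ValueError("need a > 0 and integer n >= 1")
        # Find the minimal integer d with (2**d)**n >= a.
        d = 0
        while (2.0 ** d) ** n < a:
            d += 1
        while (2.0 ** (d - 1)) ** n >= a:
            d -= 1
        # Bisect on [2**(d-1), 2**d] down to the error margin 2**(d-1)/(4n).
        lo, hi = 2.0 ** (d - 1), 2.0 ** d
        tol = 2.0 ** (d - 1) / (4 * n)
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if mid ** n >= a:
                hi = mid
            else:
                lo = mid
        x = hi  # the upper bound lies in [r, r*(1 + 1/(4n)))
        for _ in range(newton_steps):
            x = ((n - 1) * x ** n + a) / (n * x ** (n - 1))
        return x

    print(nth_root(2, 2))   # ~1.4142135623730951
    print(nth_root(10, 3))  # ~2.154434690031884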
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/799339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Isn't this wrong? This worksheet
This question:
$$w^2 - w \leq 0$$
This answer:
$$(-\infty, -1] \cup [0, 1]$$
Isn't this wrong? At $w = -2$ it becomes $(-2)^2 - (-2)$, which is $4 + 2 = 6 > 0$, so $w=-2$ shouldn't satisfy the inequality. But it might be that I'm wrong somewhere. Please correct me. Thanks.
|
$w^2-w\le 0$
$w(w-1)\le 0$
$0\le w\le 1$, since a product of two real factors is $\le 0$ exactly when one is $\le 0$ and the other is $\ge 0$; here that forces $w\ge 0$ and $w-1\le 0$.
The answer given is wrong.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/799414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Algebraic solution for the intersection point(s) of two parabolas I recently ran through an algebraic solution for the intersection point(s) of two parabolas $ax^2 + bx + c$ and $dx^2 + ex + f$ so that I could write a program that solved for them. The math goes like this:
$$
ax^2 - dx^2 + bx - ex + c - f = 0 \\
x^2(a - d) + x(b - e) = f - c \\
x^2(a - d) + x(b - e) + \frac{(b - e)^2}{4(a - d)} = f - c + \frac{(b - e)^2}{4(a - d)} \\
(x\sqrt{a - d} + \frac{b - e}{2\sqrt{a - d}})^2 = f - c + \frac{(b - e)^2}{4(a - d)} \\
(a - d)(x + \frac{b - e}{2(a - d)})^2 = f - c + \frac{(b - e)^2}{4(a - d)} \\
x + \frac{b - e}{2(a - d)} = \sqrt{\frac{f - c + \frac{(b - e)^2}{a - d}}{a - d}} \\
x = \pm\sqrt{\frac{f - c + \frac{(b - e)^2}{a - d}}{a - d}} - \frac{b - e}{2(a - d)} \\
$$
Then solving for $y$ is as simple as plugging $x$ into one of the equations.
$$
y = ax^2 + bx + c
$$
Is my solution for $x$ and $y$ correct? Is there a better way to solve for the intersection points?
|
You lost a factor $4$ somewhere. You can simply rewrite your problem as
$$(a-d)x^2+(b-e)x+(c-f)=0$$
and use the standard formula for a quadratic equation, i.e.
$$x=-\frac{b-e}{2(a-d)}\pm\sqrt{\frac{(b-e)^2}{4(a-d)^2}-\frac{c-f}{a-d}}$$
Before evaluating this equation, you need to check if $a-d=0$, in which case
$$x=\frac{f-c}{b-e}$$
In this case you of course need to check if $b-e=0$.
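Since the original goal was a program, here is one possible Python translation of the above, with the degenerate cases handled first (the function name and the exact-zero comparisons are illustrative choices, not the only reasonable ones):

    import math

    def parabola_intersections(a, b, c, d, e, f):
        """Intersection points of y = a*x^2 + b*x + c and y = d*x^2 + e*x + f.

        Returns a list of (x, y) pairs; a sketch only, using exact float
        comparisons against zero for brevity.
        """
        A, B, C = a - d, b - e, c - f
        if A == 0:
            if B == 0:
                return []  # curves identical (C == 0) or never meet (C != 0)
            x = -C / B     # linear case
            return [(x, a * x * x + b * x + c)]
        disc = B * B - 4 * A * C
        if disc < 0:
            return []
        roots = {(-B + math.sqrt(disc)) / (2 * A),
                 (-B - math.sqrt(disc)) / (2 * A)}
        return [(x, a * x * x + b * x + c) for x in sorted(roots)]

    print(parabola_intersections(1, 0, 0, -1, 0, 2))  # x^2 vs 2-x^2 -> (-1,1), (1,1)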
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/799519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Fourier transform of 1/cosh How do you take the Fourier transform of
$$
f(x) = \frac{1}{\cosh x}
$$
This is for a complex class so I tried expanding the denominator and calculating a residue by using the rectangular contour that goes from $-\infty$ to $\infty$ along the real axis and $i \pi +\infty$ to $i \pi - \infty$ to close the contour (with vertical sides that go to 0). Therefore, I tried to calculate the residue at $\frac{i \pi}{2}$ of
$$
\frac{e^{-ikx}}{e^x + e^{-x}} $$ which will give me the answer, but I don't know how to do this. Thanks for the help!
|
To calculate the residue, we use the formula
\begin{equation*}
\text{Res}_{z_0}f=\lim_{z\rightarrow z_0}(z-z_0)f(z)
\end{equation*}
Then we take $z_0=i/2$: the computation below uses the normalization $f(z)=1/\cosh(\pi z)=2/(e^{\pi z}+e^{-\pi z})$ with the kernel $e^{-2\pi iz\xi}$, for which the relevant pole is at $z_0=i/2$, where $e^{2\pi z_0}=-1$.
\begin{equation*}
\begin{split}
(z-z_0)f(z)&=e^{-2\pi iz\xi}\frac{2(z-z_0)}{e^{\pi z}+e^{-\pi z}} \\ &=2e^{-2\pi iz\xi}e^{\pi z}\frac{z-z_0}{e^{2\pi z}-e^{2\pi z_0}}
\end{split}
\end{equation*}
\begin{equation*}
\lim_{z\rightarrow z_0}(z-z_0)f=2e^{-2\pi iz_0\xi}e^{\pi z_0}\frac{1}{2\pi e^{2\pi z_0}}=\frac{e^{\pi\xi}}{\pi i}
\end{equation*}
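As a numerical sanity check of where this residue computation is headed: under the $e^{-2\pi i x\xi}$ convention used above, $1/\cosh(\pi x)$ is well known to be its own Fourier transform. A rough midpoint-rule check in Python (the truncation length and step count below are arbitrary choices):

    import math

    def sech(t):
        return 1.0 / math.cosh(t)

    def ft(xi, L=10.0, n=20000):
        # Midpoint rule for the integral of sech(pi x) * cos(2 pi x xi)
        # over [-L, L]; the tail beyond |x| = 10 is ~1e-14.
        h = 2 * L / n
        total = 0.0
        for k in range(n):
            x = -L + (k + 0.5) * h
            total += sech(math.pi * x) * math.cos(2 * math.pi * x * xi)
        return total * h

    for xi in (0.0, 0.5, 1.0):
        print(ft(xi), sech(math.pi * xi))  # the two columns agree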
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/799616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 3
}
|
Definite Trig Integrals: Changing Limits of Integration $$\int_0^{\pi/4} \sec^4 \theta \tan^4 \theta\; d\theta$$
I used the substitution: let $u = \tan \theta$ ... then $du = \sec^2 \theta \; d\theta$.
I know that now I have to change the limits of integration, but am stuck as to how I should proceed.
Should I sub the original limits into $\tan \theta$ or should I let $\tan \theta$ equal the original limits and then get the new limits?
And if it helps, the answer to the definite integral is supposed to be $0$.
Thanks in advance.
|
$\sec^4\theta = (1 + \tan^2\theta)\cdot \sec^2\theta$, then substitute $u = \tan\theta$. The new limits come from plugging the original limits into the substitution: $u = \tan 0 = 0$ and $u = \tan\frac{\pi}{4} = 1$. This gives:
$$I = \displaystyle \int_{0}^1 (u^6 + u^4) du = \left.\dfrac{u^7}{7} + \dfrac{u^5}{5}\right|_{0}^1 = \dfrac{12}{35}.$$
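A quick numerical cross-check that the substituted integral agrees with the original one (a hand-rolled Simpson's rule here is just a convenient choice):

    import math

    def simpson(g, lo, hi, n=1000):
        """Composite Simpson's rule with an even number n of subintervals."""
        h = (hi - lo) / n
        s = g(lo) + g(hi)
        for i in range(1, n):
            s += (4 if i % 2 else 2) * g(lo + i * h)
        return s * h / 3

    original = simpson(lambda t: math.tan(t) ** 4 / math.cos(t) ** 4, 0, math.pi / 4)
    substituted = simpson(lambda u: u ** 6 + u ** 4, 0, 1)
    print(original, substituted, 12 / 35)  # all ~0.3428571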
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/799711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Describe the Galois Group of a field extension I'm struggling to understand the basics of Galois theory. One of the things I don't understand is how to actually derive automorphisms of a field extension. Let's say you had a simple problem:
$x^2-3$ over $\mathbb{Q}$ has splitting field $\mathbb{Q}(\sqrt{3})$ correct?
Once I have this how would I go about finding the Galois group in this problem? Isn't there an unlimited number of ways I can assign elements of the field? If the Galois group is the set of ALL automorphisms aren't I just limited by my imagination?
|
It is important to remember that these automorphisms fix the base field. In particular, they fix the polynomial $x^2 - 3$, so they must send roots of that polynomial to other roots.
They are also field maps and, since they fix the base field, they are in particular linear over the base field, $\mathbb{Q}$ in your example. Since the splitting field is generated by $1$ and $\sqrt{3}$ over $\mathbb{Q}$ (as a vector space), knowing where $1$ and $\sqrt{3}$ are sent determines the entire map. But $1$ must be sent to $1$ and $\sqrt{3}$ to another root of your polynomial. So you can see that there are only a few possibilities.
It is not always the case that you can just send roots to other roots - there may be subtle relationships among the roots that prevent some permutations of the roots from being automorphisms. One trick for determining the automorphisms (which happens frequently enough to be worth remembering) is that Galois theory tells you that in the Galois case there are exactly as many automorphisms as the degree of the extension (which is $2$ in this case, since $x^2-3$ is irreducible over $\mathbb{Q}$). If you can show that there are at most that many possible automorphisms by the consideration in the previous paragraph and exhibit them (here the identity and $\sqrt{3} \mapsto -\sqrt{3}$), then you are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/799797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Conditions under which a group homomorphisms between products of groups arises as a product of homomorphisms Let $\phi: G\times H\to G\times H$ be a group homomorphism. Under what conditions can we write $\phi=(f,g)$, where $f:G\to G$ and $g:H\to H$, where $f$, $g$ are group homomorphisms?
|
$\phi : G \times H \to G \times H$ is determined uniquely by its values on the natural embeddings of $G$ and $H$, since $\phi(g,h) = \phi(g,1) \phi(1,h)$.
We restrict $\phi$ to these natural embeddings to get homomorphisms $f_G : G \to G \times H$ and $f_H : H \to G \times H$. Then the map $G\times H \to G \times H$ defined by $(g,h) \mapsto f_G(g)\,f_H(h)$ (the product taken in $G\times H$) agrees with $\phi$, so is equal to $\phi$.
So $\phi$ always decomposes in this loose sense. For the stricter form in the question, $\phi=(f,g)$ with $f:G\to G$ and $g:H\to H$, one needs exactly that $\phi(G\times\{1\})\subseteq G\times\{1\}$ and $\phi(\{1\}\times H)\subseteq\{1\}\times H$; the swap $(g,h)\mapsto(h,g)$ on $G\times G$ shows this can fail.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/799897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Real number comparisons: must a number be less than or equal to or greater than another number? I've been reading Knuth's Surreal Numbers recently and came up with this question about real numbers.
Is it true that, of the three relationships ($=$, $>$, $<$), exactly one must hold between any two real numbers? If this is true, how does one prove it?
|
Yes - this is called the trichotomy property: http://en.wikipedia.org/wiki/Trichotomy_%28mathematics%29
It can be easier to see depending on how you define the real numbers. For instance: http://en.wikipedia.org/wiki/Dedekind_cut (see the ordering of cuts part way down the page).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/799970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Proof of power series uniform convergence on compact set I proved:
If a power series converges pointwise on a compact set then it converges uniformly.
Please could somebody check my proof?
My idea is to use Abel's theorem:
Let $g(x) = \sum_{n=0}^\infty a_n x^n$ be a power series that converges at the point $x=R > 0$. Then the series converges uniformly on $[0,R]$. A similar result holds for $x=-R$.
Let $K$ be a compact set. Let $M = \max K, m = \min K$. Let $p(x)=\sum_{n=0}^\infty a_n x^n$ be pointwise convergent on $K$.
If $m < 0 < M$ then, since $m, M \in K$, the series converges at $x=M$ and at $x=m$, so by Abel's theorem $p$ converges uniformly on $[0,M]$ and, by the negative case, uniformly on $[m,0]$; therefore it converges uniformly on $K \subseteq [m,M]$. The argument is similar if $0 \le m$ or $M \le 0$.
|
Your proof is correct. Presentation may be improved by preceding it with a lemma: if a series converges uniformly on each of the sets $E_1,\dots,E_m$, then it converges uniformly on $\bigcup_{i=1}^m E_i$. (That is, uniformity of convergence is preserved under finite unions.)
Then you have $K\subseteq E_1\cup E_2$ where $E_1$ is the closed interval with endpoints $0,m$ and $E_2$ is the closed interval with endpoints $0,M$.
Aside: I can't figure out if the statement remains true in the complex plane.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/800195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Show that a map is not an automorphism in an infinite field How should I show that a map $f(x) = x^{-1}$ for $x \neq 0$ and $f(0) = 0$ is not an automorphism for an infinite field?
Thanks for any hints.
Kuba
|
An elementary way:
Assume that $\phi$, defined as above, is an automorphism of $F$. Notice that if $\phi(x)=x$ then $x=1$ or $x=-1$ or $x=0$ (for $x\neq 0$, $\phi(x)=x$ means $x^2=1$).
Now let $r$ be any nonzero element of $F$ and set $x=r+\dfrac 1r$.
So we have, $\phi(x)=\phi(r+\dfrac 1r)=\phi(r)+\phi(\dfrac 1r)=\dfrac 1r+r=x$ which means that
$r$ must be a root of $$r+\dfrac 1r=0$$
$$r+\dfrac 1r=1$$
$$r+\dfrac 1r=-1$$
Each of these is a quadratic equation in $r$ (clear denominators), so there are only finitely many possible $r$, contradicting the fact that $F$ is infinite.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/800245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
matrix representation of linear transformation For a set $N$ let $id_N:N \rightarrow N$ be the identical transformation. Be $V:=\mathbb{R}[t]_{\le d}$. Determine the matrix representation $A:=M_B^A(id_V)$ of $id_V$ regarding to the basis $A=\{1,t,...,t^d\}$ and $B=\{1,(t-a),...,(t-a)^d\}$.
I know, that i have to write the $t^i$ as a linear combination of $(t-a)^j$. So for
$t^0 = 1*(t-a)^0$
$t^1 = a*(t-a)^0 + 1*(t-a)^1$
$t^2 = (-a^2 + 2a^2) * (t-a)^0 + 2a*(t-a)^1 + 1*(t-a)^2$
$t^3= a^3(t-a)^0 +3a^2(t-a)^1+3a(t-a)^2+1(t-a)^3$
$...$
How can I figure out a system for the general case?
So the matrix representation is
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 &\dots & 0\\
a & 1 & 0 & 0 & 0 &\dots & 0\\
a^2 & \binom{2}{1} a^{2-1} & 1 & 0 & 0 &\dots & 0\\
\vdots & \dots & & \ddots & \vdots\\
a^d & \binom{d}{1} a^{d-1} &\dots &\binom{d}{i} a^{d-i} &\dots & &1
\end{pmatrix}
? Can someone please tell me if this is correct?
|
Do the binomial expansion
$$
(z + a)^{k}
=
a^{k} + \binom{k}{1} a^{k-1} z + \dots + \binom{k}{i} a^{k-i} z^{i} + \dots + z^{k},
$$
and then substitute $t = z + a$.
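If it helps to see the pattern concretely, here is a small Python sketch (the function name is mine) that builds the matrix directly from the binomial expansion; column $k$ holds the $B$-coordinates of $t^k$:

    from math import comb

    def change_of_basis(d, a):
        """Matrix of id on R[t]_{<=d} from {1, t, ..., t^d} to {1, (t-a), ..., (t-a)^d}.

        Column k holds the coefficients of t^k = sum_i C(k, i) a^(k-i) (t-a)^i,
        so entry (i, k) is C(k, i) * a**(k - i) for i <= k, and 0 above the diagonal.
        """
        return [[comb(k, i) * a ** (k - i) if i <= k else 0 for k in range(d + 1)]
                for i in range(d + 1)]

    for row in change_of_basis(3, 2):
        print(row)
    # [1, 2, 4, 8]
    # [0, 1, 4, 12]
    # [0, 0, 1, 6]
    # [0, 0, 0, 1]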
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/800326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Example of a bounded lattice that is NOT complete I know that every complete lattice is bounded. Is there a simple example for a bounded lattice that is not complete?
Thank you
|
Update: My answer below is wrong! (Thanks to bof for pointing that out.) I will leave the answer here because I think my mistake and bof's comment could maybe be instructive.
Let $\mathbb{N}$ be the set of natural numbers. Let $\mathcal{P}_{fin}(\mathbb{N})$ denote the collection of finite subsets of $\mathbb{N}$. Then $L=\mathcal{P}_{fin}(\mathbb{N})\cup\{\mathbb{N}\}$ is a bounded lattice under inclusion. However, it is not complete since $\{0\}\cup\{2\}\cup\{4\}\cup\{6\}\cup\ldots$ is not in $L$. (The gap in that argument: the family of finite sets of even numbers does have a least upper bound in $L$, namely $\mathbb{N}$, which is its only upper bound in $L$; in fact $L$ is complete.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/800429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
}
|
If $\sum{a_n}$ converges does that imply that $\sum{\frac{a_n}{n}}$ converges? I know if $\sum{a_n}$ converges absolutely then $\sum{\frac{a_n}{n}}$ converges since $0\le \frac{|a_n|}{n} \le |a_n| $ for all $n$ so it converges absolutely by the basic comparison test and therefore converges. However, I cannot prove the convergence of $\sum \frac{a_n}{n}$ if $\sum{a_n}$ converges but not absolutely even though I suspect it to be true. Can you give me a proof or a counterexample for this?
|
Yes.
A theorem found in "Baby" Rudin's book: If $\sum a_{n}$ converges and $\lbrace b_{n} \rbrace$ is monotonic and bounded, then $\sum a_{n}b_{n}$ converges. See:
Prob. 8, Chap. 3 in Baby Rudin: If $\sum a_n$ converges and $\left\{b_n\right\}$ is monotonic and bounded, then $\sum a_n b_n$ converges.
Here, we take $b_{n} = \frac{1}{n}.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/800490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 2
}
|
Extrema of $x+y+z$ subject to $x^2 - y^2 = 1$ and $2x + z = 1$ using Lagrange Multipliers
Find the extrema of $x+y+z$ subject to $x^2 - y^2 = 1$ and $2x + z = 1$ using Lagrange multipliers.
So I set it up:
$$
1 = 2x\lambda_1 + 2\lambda_2 \\
1 = -2y\lambda_1 \\
1 = \lambda_2
$$
Plug in for $\lambda_2$:
$$
1 = 2x\lambda_1 + 2 \\
1 = -2y\lambda_1 \\
$$
So we work with:
$$
1 = 2x\lambda_1 + 2 \\
1 = -2y\lambda_1 \\
1 = x^2 - y^2 \\
1 = 2x + z
$$
After some algebra I got $x = y$ as a solution but that's impossible because of the constraint $1 = x^2 - y^2$. What am I missing?
|
The constraints define two curves in ${\mathbb R}^3$ as follows: The constraint $x^2-y^2=1$ defines a hyperbolic cylinder $Z$ consisting of two sheets, which can be parametrized as follows:
$$(t,z)\mapsto(\pm\cosh t,\sinh t, z)\qquad(-\infty<t<\infty, \ -\infty<z<\infty)\ .$$
Intersecting these two sheets with the plane $z=1-2x$ produces the two curves
$$\gamma_\pm:\ t\mapsto(\pm\cosh t,\sinh t, 1\mp2\cosh t)\qquad(-\infty<t<\infty)\ .$$
The pullback of $f(x,y,z):=x+y+z$ to $\gamma_\pm$ computes to
$$\phi(t)=1+\sinh t\mp\cosh t\ ,$$
so that we obtain
$$\phi'(t)=\cosh t\mp\sinh t=e^{\mp t}>0\qquad(-\infty<t<\infty)\ .$$
This shows that there are no conditionally stationary points of $f$ on the two curves, and explains why Lagrange's method didn't produce any solutions.
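One can confirm symbolically that the Lagrange system really has no solutions; a quick check with SymPy, assuming it is installed:

    import sympy as sp

    x, y, z, l1, l2 = sp.symbols('x y z lambda1 lambda2', real=True)
    eqs = [
        sp.Eq(1, 2 * x * l1 + 2 * l2),  # stationarity in x
        sp.Eq(1, -2 * y * l1),          # stationarity in y
        sp.Eq(1, l2),                   # stationarity in z
        sp.Eq(x ** 2 - y ** 2, 1),      # first constraint
        sp.Eq(2 * x + z, 1),            # second constraint
    ]
    print(sp.solve(eqs, [x, y, z, l1, l2]))  # [] : the system is inconsistent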
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/800575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Linear independence of two functions, how to solve I have a problem. I need to show that $\{f,g\}$ is a linearly independent set in the vector space of all functions from $\mathbb{R}^{+}$ to $\mathbb{R}$, where
$$f(x)=x$$
$$g(x)=\frac1{x}$$
First (and least important), is there a standard notation for this vector space?
Second, I didn't know how to answer this question, so I cheated and checked the solution manual. There, it basically showed that letting $x=1$ first and $x=2$ second and setting $h(x)=af(x)+bg(x)=0$ gives a system of equations. Solving shows $a=b=0$, and thus linearly independent.
Not having any real exposure to this particular vector space: why are we able to pick values for $x$? We don't do that for $\mathbb{R}^n$ or $\mathbb{F}_n[x]$. I guess I'm just not accustomed to solving problems in this vector space and am unsure how to proceed. Can someone explain to me why this is the correct method?
|
The standard notation for the vector space of all functions $\Bbb R^+\to\Bbb R$ is $\Bbb R^{\Bbb R^+}$. This is sort of awkward and I've also seen the notation $\mathcal F(0,\infty)$ used. As long as you define your notation properly I don't think notation matters much here.
As for showing that $f(x)$ and $g(x)$ are linearly independent in $\mathcal F(0,\infty)$, we can use the definition. Seeking a contradiction, suppose that there exist scalars $a$ and $b$ not both zero such that
$$
af(x)+bg(x)=0\tag{1}
$$
for all $x>0$. Then
$$
ax=-\frac{b}{x}\tag{2}
$$
for all $x>0$. In particular, plugging $x=1$ and $x=2$ into (2) gives
$$
a=-b\tag{3}
$$
and
$$
2a=-\frac{b}{2}\tag{4}
$$
Plugging (3) into (4) gives
$$
2a=\frac{a}{2}
$$
which implies $a=0$. But then (3) implies $b=0$ too, a contradiction. Hence $f(x)$ and $g(x)$ are linearly independent in $\mathcal F(0,\infty)$.
The key point here is that the equation (1) holds for all $x>0$ which is what allows us to arrive at our desired contradiction. The reason (1) is assumed to hold for all $x>0$ is because the elements of $\mathcal F(0,\infty)$ are functions which are equal if and only if they are equal at every point of $\Bbb R^+$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/800828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Probability question - (Probably) Bayes' Rule and Total Probability Theorem I just took a probability final exam and was fairly confident in my solution of 28/31, but I wanted to be sure... because according to http://www.stat.tamu.edu/~derya/stat211/SummerII02/Final.Summer02.doc which has it as the second question, the answer is .6627. What's discerning is that they have the decimal equivalent of 28/31 as one of their answers which makes it seem like they know something I don't...
"Seventy percent of all cattle are treated by an injected vaccine to combat a serious disease. The probability of recovery from the disease is 1 in 20 if untreated and 1 in 5 if treated. Given that an infected cow has recovered, what is the probability that the cow received the preventive vaccine?"
Here's my solution: Let A be the event a cow recovered, let B be the event a cow received the vaccine.
We are given:
P(A|B) = 1/5
P(A|~B) = 1/20
P(B) = 7/10
We want to find P(B|A), so use Bayes' rule and the total probability theorem to find
P(B|A) = P(A|B) x P(B) / (P(A|B) x P(B) + P(A|~B) x P(~B) ).
Plugging in the values from what's given above, we get (.2 x .7) / (.2 x .7 + .05 x .3) which gives 28/31.
If I'm wrong, I'd love to be pointed in the right direction haha
Thank you!
|
I get $.68$, probably up to rounding. What happened is that you went wrong right at the start, and the graders accepted your term definitions, which are incorrect: the intended values are $P(A\mid B)=1/4$, $P(B)=7/10$ and $P(A\mid{\sim}B)=1/10$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/800925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Does the series $\sum_{n\ge0}\frac{x^n\sin({nx})}{n!}$ converge uniformly on $\Bbb R$? The series $$\sum_{n\ge0}\frac{x^n\sin({nx})}{n!}$$ converges uniformly on each closed interval $[a,b]$ by Weierstrass' M-test because $$\left|\frac{x^n\sin({nx})}{n!}\right|\le\frac{\max{(|a|^n,|b|^n)}}{n!}.$$
But does this series converge uniformly on $\Bbb R$?
|
To rephrase Paul's answer: if the series converges uniformly, then the sequence of functions $(f_n)_{n\geq1}$ with $f_n(x)=\dfrac{x^n\sin(nx)}{n!}$ converges uniformly to zero.
If $n\geq1$, let $\xi_n=\pi\left(2n+\frac1{2n}\right)$. Then $\xi_n>n$, so that $(\xi_n)^n>n!$, and $\sin(n\xi_n)= \sin \left(\pi \left(2n^2+\tfrac12 \right)\right)=1$: we see that $f_n(\xi_n)>1$.
It follows at once that the sequence $(f_n)_{n\geq1}$ does not converge uniformly to zero.
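A quick numerical illustration in Python (evaluating in log space to avoid overflow): $f_n(\xi_n)$ not only stays above $1$ but grows rapidly, so the terms cannot tend to zero uniformly:

    import math

    # f_n(xi_n) = xi_n**n * sin(n * xi_n) / n!, computed via logarithms.
    for n in (1, 5, 20, 100):
        xi = math.pi * (2 * n + 1 / (2 * n))
        value = math.exp(n * math.log(xi) - math.lgamma(n + 1)) * math.sin(n * xi)
        print(n, value)  # every value exceeds 1, and they blow up with n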
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/801110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Matrices and Complex Numbers Given this set:
$$
S=\left\{\begin{bmatrix}a&-b\\b&a\end{bmatrix}\middle|\,a,b\in\Bbb R\right\}
$$
Part I:
Why is this set equivalent to the set of all complex numbers a+bi (when both are under multiplication?)
There is one matrix that corresponds to a specific complex number. Can this example be found and how can it be demonstrated to give equivalent answers?
Part II:
What is a formula for the multiplicative inverse of the matrix shown in the set, using knowledge on inverses of complex numbers?
|
Looking at the comments posted since I wrote this, the following answer appears to expand on Jyrki's idea.
There exists a homomorphism $\phi:S \rightarrow \mathbb{C}$ defined as follows:
$$\begin{bmatrix}
a & -b \\[0.3em]
b & a \\[0.3em]
\end{bmatrix} \mapsto (a + bi)$$
Of course, you will want to prove that this is indeed a homomorphism by checking the following conditions:
*$\phi$ maps the multiplicative identity in $S$ to the multiplicative identity in $\mathbb{C}$.
*$\phi(xy) = \phi(x)\phi(y)$ for any $x, y \in S$.
*$\phi(x + y) = \phi(x) + \phi(y)$ for any $x, y \in S$.
Once you have done this, then show that $Im(\phi) = \mathbb{C}$, and $\ker(\phi) = \{0\}$, where $0$ is the additive identity in $S$. From here, you can apply the isomorphism theorem to show that $S$ is isomorphic to $\mathbb{C}$.
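If you want to see the correspondence in action before proving it, here is a small Python sketch (plain lists instead of any matrix library; the names are mine) checking multiplication and the inverse formula asked about in Part II:

    def mat(a, b):
        """The matrix in S corresponding to the complex number a + b*i."""
        return [[a, -b], [b, a]]

    def matmul(m, n):
        return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    # Multiplication matches: (1+2i)(3-4i) = 11+2i.
    print(complex(1, 2) * complex(3, -4))   # (11+2j)
    print(matmul(mat(1, 2), mat(3, -4)))    # [[11, -2], [2, 11]] = mat(11, 2)

    # Part II: 1/(a+bi) = (a-bi)/(a^2+b^2), so the inverse is mat(a/r2, -b/r2).
    a, b = 1, 2
    r2 = a * a + b * b
    print(matmul(mat(a, b), mat(a / r2, -b / r2)))  # identity, up to rounding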
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/801204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Estimate the number of roots of an analytic function Let $f : \mathbb{C}\longrightarrow \mathbb{C}$ be analytic with $0 \not = f(0)$. Suppose we have normalized $f$ such that $|f(0)| = 1$. Suppose that $f$ has $n$ roots (including repeated roots) and they are all in $B_{\frac 12}(0)$. Is it possible to estimate $n$ in terms the maximum value of $f$ on the unit circle !?
|
By Jensen's formula, $\frac{1}{2\pi}\int_0^{2\pi}\log|f(e^{i\theta})|\,d\theta = \log|f(0)| + \sum_{k=1}^n \log\frac{1}{|a_k|}$, where $a_1,\dots,a_n$ are the roots. Since $|f(0)|=1$ and each $|a_k|<\frac12$, the right-hand side exceeds $n\log 2$, while the left-hand side is at most $\log M$. Hence $n\leq \frac{\log M}{\log2}$, where $M = \sup\limits_{\vert z\vert =1}\vert f(z)\vert$.
https://en.wikipedia.org/wiki/Jensen%27s_formula
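As an illustration (the roots, sampling density, and normalization below are arbitrary choices), one can check the bound numerically for a polynomial with all roots in $B_{1/2}(0)$:

    import cmath
    import math

    roots = [0.3, -0.2 + 0.3j, 0.1j]  # all inside B_{1/2}(0)

    def f(z):
        val = 1.0
        for r in roots:
            val *= (z - r)
        return val

    scale = 1 / abs(f(0))  # normalize so that |f(0)| = 1
    M = max(abs(scale * f(cmath.exp(2j * math.pi * k / 1000))) for k in range(1000))
    print(len(roots), math.log(M) / math.log(2))  # n on the left, bound on the right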
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/801312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|