| Q | A | meta |
|---|---|---|
Product (arbitrary) of open functions is open. Let $f_{\alpha}\colon X_{\alpha}\to Y_{\alpha}$ be open, for all $\alpha \in J$. Then $\prod_{\alpha} f_{\alpha}\colon \prod_{\alpha}X_{\alpha} \to \prod_{\alpha}Y_{\alpha}$ is open? Both $ \prod_{\alpha}X_{\alpha}$ and $ \prod_{\alpha}Y_{\alpha}$ have the product topology.
What if every $f_{\alpha}$ is also surjective?
|
2.3.29. Proposition. The Cartesian product $f=\prod_{s\in S} f_s$, where $f_s \colon X_s \to Y_s$ and $X_s\ne\emptyset$ for $s\in S$, is open if and only if all mappings $f_s$ are open and there exists a finite set $S_0\subset S$ such that $f_s(X_s)=Y_s$ for $s\in S\setminus S_0$.
Proof. From 1.4.14 and the equality $f(\prod_{s\in S}W_s)=\prod_{s\in S}f_s(W_s)$ it follows that if the mappings $f_s$ satisfy the above conditions, then $f$ is open.
Conversely, suppose that $f$ is an open mapping. Take an $s_0\in S$ and a non-empty open set $U\subset X_{s_0}$. The set $U\times \prod_{s\in S\setminus\{s_0\}} X_s$ is non-empty and open in $\prod_{s\in S} X_s$, so that the set
$$p_{s_0} f(U\times\prod_{s\in S\setminus\{s_0\}} X_s)=f_{s_0}(U)$$
is open in $Y_{s_0}$, because the projection $p_{s_0}$ is an open mapping. This implies that $f_{s_0}$ is open. As $\prod_{s\in S} X_s\ne\emptyset$, the set $f(\prod_{s\in S} X_s) = \prod_{s\in S} f_s(X_s)$ is a non-empty open subset of $\prod_{s\in S} Y_s$, so that it contains a set of the form $\prod_{s\in S} W_s$, where $W_s\ne Y_s$ only for $s$ in a finite set $S_0\subset S$; then for $s\in S\setminus S_0$ we have $f_s(X_s)=Y_s$. $\square$
(cited from “General Topology” by Ryszard Engelking. )
1.4.14. Theorem. A continuous mapping $f\colon X\to Y$ is open if and only if there exists a base $\mathcal B$ for $X$ such that $f(U)$ is open in $Y$ for every $U\in\mathcal B$. $\square$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1299024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Cauchy integral formula Can someone please help me answer this question as I cannot seem to get to the answer.
Please note that the Cauchy integral formula must be used in order to solve it.
Many thanks in advance!
\begin{equation*}
\int_{|z|=3}\frac{e^{zt}}{z^2+4}\,dz=\pi i\sin(2t).
\end{equation*}
Also $|z| = 3$ is given the counterclockwise direction.
|
Write the integrand in the form $\frac{e^{zt}}{(z-2i)(z+2i)}$. Then split the contour into two parts such that the interior of each part contains only one of the points $2i$ or $-2i$.
Then apply Cauchy integral formula for each contour.
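A quick numerical sanity check of the stated identity (only a sketch, separate from the Cauchy-formula argument; the value $t=1$ and the sample count are arbitrary choices): parametrize $|z|=3$ as $z=3e^{i\theta}$ and sum the integrand over the period.

```python
import numpy as np

def contour_integral(t, samples=4096):
    # Parametrize |z| = 3 counterclockwise: z = 3 e^{i theta}, dz = 3i e^{i theta} dtheta
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = 3.0 * np.exp(1j * theta)
    dz = 3j * np.exp(1j * theta)
    integrand = np.exp(z * t) / (z**2 + 4.0) * dz
    # Equispaced sum over a full period is spectrally accurate for smooth integrands
    return integrand.sum() * (2.0 * np.pi / samples)

t = 1.0
approx = contour_integral(t)
exact = np.pi * 1j * np.sin(2.0 * t)
print(approx, exact)  # the two values agree to many digits
```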
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1299127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
How to improve visualization skills (Graphing) Okay, so my problem is, that I have difficulty visualizing graphs of functions.
For example, if we have to calculate the area bounded by multiple curves, I face difficulty in visualizing how the graph would look. And in integral calculus, it is very important to plot the graph in some problems, so how can I improve my visualization skills? What can I implement in my daily math problem solving to improve these skills?
Edit: I am not talking about normal functions, simple ones like the signum function, the absolute value function, etc. I am talking about more complex ones like
\begin{equation*}
y=e^{|x-1|+|x+2|}.
\end{equation*}
|
A friend of mine sent me a link to a graph paper site: http://www.printablepaper.net/category/graph
It gives you a choice of squares per inch and whether there are bolder lines each inch. Draw stuff. Do not imagine drawing things. Actually do so. If you want to solve $3 |x| = e^x,$ print out some graph paper, draw $y = 3|x|$ and $y=e^x$ on the same page and see what happens.
There is a tradition in, let us call it, neuroscience, possibly a small minority opinion, that we have intelligence precisely because we have hands; one aspect of this is http://en.wikipedia.org/wiki/Homo_faber I see lots of students on this site who have no ability to visualize in either two or three dimensions because they have never drawn any pictures or built any models of polyhedra. Part of this is that software such as mathcad took over early in many engineering and architectural fields; the people involved are the poorer for it. This has some references: http://www.waldorfresearchinstitute.org/pdf/Hand-Movements-Create-Intelligence.pdf
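In the same spirit, and purely as an illustrative sketch (the interval and grid size below are my own arbitrary choices, not part of the answer): one can tabulate $3|x|-e^x$ on a grid, exactly as one would on graph paper, and see where it changes sign.

```python
import numpy as np

# Tabulate h(x) = 3|x| - e^x on a fine grid, like plotting both curves on paper
x = np.linspace(-2.0, 3.0, 10001)
h = 3.0 * np.abs(x) - np.exp(x)

# A sign change between neighbouring grid points brackets an intersection
sign_changes = np.nonzero(np.sign(h[:-1]) != np.sign(h[1:]))[0]
roots = [(x[i] + x[i + 1]) / 2 for i in sign_changes]
print(roots)  # three crossings, near -0.26, 0.62 and 1.51
```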
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1299225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
What does a linear equation with more than 2 variables represent? A linear equation with 2 variables, say $Ax+By+C = 0$, represents a line on a plane but what does a linear equation with 3 variables $Ax+By+Dz+c=0$ represent? A line in space, or something else?
On a general note, what does a linear equation with $n$ variables represent?
|
In Geometry: the linear equation: $Ax+By+Dz+c=0$ in three variables $x$, $y$ & $z$ generally represents a plane in 3-D co-ordinate system having three orthogonal axes X, Y & Z.
The constants $A$, $B$, $D$ give the $\color{#0ae}{\text{direction ratios}}$ of the vector normal to the plane $Ax+By+Dz+c=0$. Its direction cosines are given as $$\cos\alpha=\frac{A}{\sqrt{A^2+B^2+D^2}}, \quad\cos\beta=\frac{B}{\sqrt{A^2+B^2+D^2}} \quad \text{&} \quad\cos\gamma=\frac{D}{\sqrt{A^2+B^2+D^2}}$$
It can also be written in the $\color{#0ae}{\text{intercept form}}$ as follows $$\frac{x}{\left(\frac{-c}{A}\right)}+\frac{y}{\left(\frac{-c}{B}\right)}+\frac{z}{\left(\frac{-c}{D}\right)}=1$$ where $\left(\frac{-c}{A}\right)$, $\left(\frac{-c}{B}\right)$ & $\left(\frac{-c}{D}\right)$ are the intercepts of the plane with the three orthogonal axes X, Y & Z respectively in space.
In Linear Algebra: the linear equation $Ax+By+Dz+c=0$ represents one of the three linear equations of a system having a unique solution, infinitely many solutions, or no solution. In general, a linear equation in $n$ variables represents an $(n-1)$-dimensional hyperplane in $n$-dimensional space.
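As a concrete sketch (the coefficients below are my own illustrative choices): any two points lying in the plane differ by a vector orthogonal to the normal $(A,B,D)$.

```python
import numpy as np

# Illustrative plane x + 2y + 3z + 4 = 0, i.e. A, B, D, c = 1, 2, 3, 4
A, B, D, c = 1.0, 2.0, 3.0, 4.0
normal = np.array([A, B, D])

def point_on_plane(x, y):
    # Solve A x + B y + D z + c = 0 for z
    return np.array([x, y, -(A * x + B * y + c) / D])

p, q = point_on_plane(0.0, 0.0), point_on_plane(1.0, -2.0)
# The difference of two points in the plane is orthogonal to the normal vector
print(np.dot(q - p, normal))  # 0.0 up to rounding
```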
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1299281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Area of an ellipse. I need to find the area of the image of a circle centred at the origin with radius 3 under the transformation:
$
\begin{pmatrix}
3 & 0\\
0 & \frac{1}{3}
\end{pmatrix}
$
The image is the ellipse $ \frac{x^2}{81}+y^2=1$. It would appear that it has the same area as the original circle i.e. $9\pi$. Is this because the matrix has some special property such as being its own inverse?
|
It is a known formula that the area enclosed in the ellipse with semi-axes $a$ and $b$ is $\pi ab$, as may be seen from the orthogonal affinity that transforms a circle into an ellipse.
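The special property at work is the determinant, not being self-inverse (this matrix's inverse is $\operatorname{diag}(\frac13, 3)$): a linear map scales every area by $|\det T|$, and here $\det T = 3\cdot\frac13 = 1$. A small numerical sketch, assuming that standard area-scaling fact:

```python
import numpy as np

T = np.array([[3.0, 0.0], [0.0, 1.0 / 3.0]])
r = 3.0

# A linear map scales areas by |det T|; here det T = 3 * (1/3) = 1
det = np.linalg.det(T)

circle_area = np.pi * r**2     # 9 pi
a, b = 3.0 * r, r / 3.0        # semi-axes of the image ellipse: 9 and 1
ellipse_area = np.pi * a * b   # pi a b = 9 pi, matching circle_area * |det T|
print(det, circle_area, ellipse_area)
```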
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1299359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Algebraic and geometric multiplicity and the way it affects the matrix Given a matrix $A$. Suppose $A$ has eigenvalues $\lambda_1,\dots,\lambda_n$, each with geometric multiplicity $g_i$ and algebraic multiplicity $r_i$, where $g_i\leq r_i$.
Given this information alone, can I understand what the matrix might look like?
And more generally, how do the algebraic and geometric multiplicities affect the matrix?
|
The difference between algebraic and geometric multiplicity comes from the number of linearly independent eigenvectors.
Geometric multiplicity is strictly less than algebraic multiplicity for some eigenvalue if and only if the number of linearly independent eigenvectors is less than $n$.
The eigenvectors then do not span the space and do not give a basis.
This means that $A$ cannot be diagonalized.
Eigenvalue multiplicities are basis independent, so they do not say much about the appearance of $A$ itself, since any change of basis leaves them unchanged. They do constrain the Jordan normal form, though the multiplicities alone do not fully specify it: each Jordan block contributes one linearly independent eigenvector, so the number of blocks for an eigenvalue is its geometric multiplicity, while the block sizes for that eigenvalue sum to its algebraic multiplicity.
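A minimal numerical illustration (my own example, not from the answer): the Jordan block $\begin{pmatrix}2&1\\0&2\end{pmatrix}$ has eigenvalue $2$ with algebraic multiplicity $2$ but geometric multiplicity $1$, so it is not diagonalizable.

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 2.0]])  # a 2x2 Jordan block

eigenvalues = np.linalg.eigvals(A)  # [2, 2]: algebraic multiplicity 2
# Geometric multiplicity = dim ker(A - 2I) = 2 - rank(A - 2I)
geometric = 2 - np.linalg.matrix_rank(A - 2.0 * np.eye(2))
print(eigenvalues, geometric)  # geometric multiplicity 1 < algebraic 2
```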
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1299586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
$A$ is diagonalizable if $A^8+A^2=I$ Given a matrix $A\in M_{n}(\mathbb{C})$ such that $A^8+A^2=I$, prove that $A$ is diagonalizable.
So let $p(x)=x^8+x^2-1$ and we know that $p(A)=0$.
The next step would be to show that the algebraic and geometric multiplicities of all the eigenvalues are equal.
But this polynomial is reducible in a very unpleasant way, so even finding the minimal polynomial is not an option.
What can I do differently?
|
It suffices to show that all the roots of $p$ are simple: then $A$ is annihilated by a polynomial with distinct roots, so its minimal polynomial has distinct roots and $A$ is diagonalizable. If $\lambda$ is a root of $p$ with multiplicity $\ge2$ then we have
$$\lambda^8+\lambda^2-1=0\tag 1$$
and
$$8\lambda^7+2\lambda=0$$
but clearly $0$ isn't an eigenvalue so
$$\lambda^6=-\frac14\tag2$$
so from $(1)$, using $\lambda^8=\lambda^6\lambda^2=-\frac14\lambda^2$, we get $$-\frac14\lambda^2+\lambda^2-1=\frac34\lambda^2-1=0\iff \lambda^2=\frac{4}{3},$$
but then $\lambda^6=\left(\frac43\right)^3=\frac{64}{27}$, which contradicts $(2)$. Conclude.
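The no-repeated-roots claim can also be checked symbolically (a sketch using SymPy; the package choice is mine): a polynomial has a repeated root exactly when it shares a factor with its derivative.

```python
import sympy as sp

x = sp.symbols('x')
p = x**8 + x**2 - 1

# p has a repeated root iff gcd(p, p') is non-constant
g = sp.gcd(p, sp.diff(p, x))
print(g)  # 1, so all roots of p are simple and A is diagonalizable
```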
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1299681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
integrate this double integral by any method you can. I'm having trouble with this double integral:
$$\int_0^2\int_0^{2-x} \exp\left(\frac{x-y}{x+y}\right)\,dy\,dx$$
|
The integral is one over the $2$-simplex $\Delta_2(2) = \{ (x,y ) : x, y \ge 0, x+ y \le 2 \}$.
One standard trick to deal with integral over $d$-simplex of the form
$$\Delta_d(L) = \{ (x_1, x_2, \ldots, x_d ) : x_i \ge 0, \sum_{i=1}^d x_i \le L \}$$
is to convert it to one over the $d$-cuboid $[0,L] \times [0,1]^{d-1}$ through the following change of variables
$$\begin{align}
\lambda &= x_1 + x_2 + \cdots + x_d\\
\lambda\mu_1 &= x_1 + x_2 + \cdots + x_{d-1}\\
\lambda\mu_1\mu_2 &= x_1 + x_2 + \cdots + x_{d-2}\\
&\;\;\vdots\\
\lambda\mu_1\mu_2\cdots\mu_{d-1} &= x_1
\end{align}
$$
Under such change of variables, an integral of $(x_1,\ldots,x_d)$ over $\Delta_d(L)$ becomes an integral of $(\lambda,\mu_1,\ldots,\mu_{d-1})$ over
$[0,L] \times [0,1]^{d-1}$.
For the integral at hand, let $\lambda = x + y$ and $x = \mu\lambda$.
The area element can be rewritten as
$$dx \wedge dy = dx \wedge d(x+y) = d(\mu \lambda) \wedge d\lambda = \lambda d\mu \wedge d\lambda$$
So the integral becomes
$$\int_{\Delta_2(2)} \exp\left(\frac{x-y}{x+y}\right)dxdy
= \int_0^2 \int_0^1 e^{2\mu - 1} \lambda d\mu d\lambda
= \left(\int_0^2 \lambda d\lambda \right)\left(\int_0^1 e^{2\mu-1} d\mu\right)\\
= 2 \times \frac{1}{2e}(e^2 - 1)
= 2\sinh(1)
\approx 2.35040238728760291376476370119120163
$$
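The change of variables can be sanity-checked numerically against the original $(x,y)$ integral (a sketch; SciPy and the tolerance are my own choices):

```python
import math
from scipy.integrate import dblquad

# Integrate exp((x - y)/(x + y)) over the triangle x, y >= 0, x + y <= 2
val, err = dblquad(lambda y, x: math.exp((x - y) / (x + y)),
                   0.0, 2.0,           # x from 0 to 2
                   lambda x: 0.0,      # y from 0 ...
                   lambda x: 2.0 - x)  # ... to 2 - x
print(val, 2.0 * math.sinh(1.0))  # both about 2.3504
```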
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1299798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Show that the subset $S$ of $\mathbb{R}^3$ is a subspace. Show that the subset $S$ of $\mathbb{R}^3$ defined by $S=\{(a,b,c) \in \mathbb{R}^3 \text{ such that } a+b=c \}$ is a subspace.
I'm having trouble adapting the definition of subspace with the part $a+b=c$.
|
You want to show that
$$ S = \left\{ (a,b,c) \in \Bbb R^3 \; : \; a+b = c \right\} \subset \Bbb R^3$$
is a subspace of $\Bbb R^3$.
First, note that $(0,0,0) \in S$, since $0 + 0 = 0$, so $S \neq \emptyset$.
Next, let $v_1 := (a_1, b_1, c_1), \, v_2 := (a_2, b_2, c_2) \in S$. We have to show that $v_1 + v_2 = (a_1 + a_2, b_1 + b_2, c_1 + c_2) \in S$. Since $v_1 \in S$, we have $a_1 + b_1 = c_1$, and since $v_2 \in S$, we have $a_2 + b_2 = c_2$. So we see that
$$(a_1 + a_2) + (b_1 + b_2) = c_1 + c_2 \; ,$$
which means that
$$ v_1 + v_2 = (a_1 + a_2, b_1 + b_2, c_1 + c_2) \in S \; .$$
Finally, let $\alpha \in \Bbb R$ and $v := (a,b,c) \in S$. We need to show that $\alpha v \in S$. Since $a+b = c$, we have $\alpha a + \alpha b = \alpha c$, which means that
$$ \alpha v = (\alpha a, \alpha b, \alpha c) \in S \; .$$
This shows that $S$ is a subspace of $\Bbb R^3$.
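The two closure properties can be illustrated numerically (an illustrative sketch with randomly chosen vectors; the tolerance is a floating-point allowance):

```python
import random

def in_S(v):
    # Membership test for S = {(a, b, c) : a + b = c}
    a, b, c = v
    return abs(a + b - c) < 1e-9

def random_element():
    # Build an element of S directly: pick a, b freely and set c = a + b
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    return (a, b, a + b)

v1, v2 = random_element(), random_element()
alpha = random.uniform(-10, 10)
s = tuple(x + y for x, y in zip(v1, v2))  # v1 + v2
t = tuple(alpha * x for x in v1)          # alpha * v1
print(in_S(s), in_S(t))  # True True
```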
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1299867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show that $\frac {\sin x} {\cos 3x} + \frac {\sin 3x} {\cos 9x} + \frac {\sin 9x} {\cos 27x} = \frac 12 (\tan 27x - \tan x)$ The question asks to prove that -
$$\frac {\sin x} {\cos 3x} + \frac {\sin 3x} {\cos 9x} + \frac {\sin 9x} {\cos 27x} = \frac 12 (\tan 27x - \tan x) $$
I tried combining the first two or the last two fractions on the L.H.S to allow me to use the double angle formula and get $\sin 6x$ or $\sin 18x$ but that did not help at all.
I'm pretty sure that if I express everything in terms of $x$, the answer will ultimately appear but I'm also certain that there must be another simpler way.
|
@MayankJain @user,
I don't know that the case for $n=1$ is trivial, especially for someone in a trig class currently. I will offer a proof without induction using substitution instead. Mayank, to see that this is the case for $n=1$, convert the RHS of the equation as follows:
$$\dfrac{1}{2}\left(\dfrac{\sin 3x}{\cos 3x} - \dfrac{\sin x}{\cos x}\right) \overset{(1)}{=} \dfrac{1}{2}\left(\dfrac{\sin 3x \cos x - \cos 3x \sin x}{\cos 3x \cos x}\right) \overset{(2)}{=} \dfrac{1}{2} \cdot \dfrac{\sin 2x}{\cos 3x \cos x} \overset{(3)}{=} \dfrac{1}{2} \cdot \dfrac{2\sin x \cos x}{\cos 3x \cos x} \overset{(4)}{=} \dfrac{\sin x}{\cos 3x}$$
This is pretty heavy on trig identities. We get equivalence (1) by multiplying out the fraction, equivalence (2) because $\sin(u - v) = \sin u \cos v - \cos u \sin v$, equivalence (3) because $\sin 2x = 2\sin x \cos x$, and, finally, (4) by cancellation.
Now, if you are unfamiliar with induction, substitution will help here as an alternative method. Since you can now derive the equivalence for the 'trivial' case, set $3x = u$ for the second case, and $9x = v$ for the third. Then you already know $\dfrac{\sin u}{\cos 3u} = \dfrac{1}{2} \cdot (\tan 3u - \tan u)$, and similarly, $\dfrac{\sin v}{\cos 3v} = \dfrac{1}{2} \cdot (\tan 3v - \tan v)$. So now we can re-write the original expression $\dfrac{\sin x}{\cos 3x} + \dfrac{\sin u}{\cos 3u} + \dfrac{\sin v}{\cos 3v}$ as $\dfrac{1}{2} \cdot (\tan(3x) - \tan x + \tan(9x) - \tan(3x) + \tan(27x) - \tan(9x))$ and everything cancels out except the desired expression: $\dfrac{1}{2} \cdot (\tan(27x) - \tan(x))$.
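A quick numerical check of the full identity at an arbitrary point (the value $x=0.1$ is my choice):

```python
import math

x = 0.1
lhs = (math.sin(x) / math.cos(3 * x)
       + math.sin(3 * x) / math.cos(9 * x)
       + math.sin(9 * x) / math.cos(27 * x))
rhs = 0.5 * (math.tan(27 * x) - math.tan(x))
print(lhs, rhs)  # the two sides agree
```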
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1299957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Is there any cyclic subgroup of order 6 in in $ S_6$? Is there any cyclic subgroup of order 6 in $ S_6$?
Attempt:
$|S_6|=6!=720$
Let $H$ be a subgroup of $S_6$ ,$H$ cyclic $\iff\langle H \rangle=\{e,h,h^2,...,h^{n-1}\}=S_6$
|
Yes: no need of great theorems. The subgroup generated by $(1,2,3,4,5,6)$ does the job.
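This can be confirmed with SymPy's permutation tools (a sketch; the list below is the same 6-cycle written 0-indexed, as SymPy expects):

```python
from sympy.combinatorics import Permutation

# The 6-cycle (1 2 3 4 5 6), 0-indexed: i -> i + 1 mod 6
sigma = Permutation([1, 2, 3, 4, 5, 0])
print(sigma.order())  # 6, so <sigma> is a cyclic subgroup of S_6 of order 6
```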
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1300033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Area Between Intersecting Lines - Elegant Solution? I am running simulations, and the output will be a line y = mx+b. I am interested in the area below the line between x=0 and x=1. I am only interested in the area that is below the diagonal y = x.
I have figured out how to determine this area by finding areas of triangles. But to do so, I have to define 6 cases. This requires many if-else statements in my computer program, and is inefficient.
I was wondering if there is an elegant solution to this problem which will not require such a complex program?
In the diagram below, the diagonal line is solid, my line of interest is the dotted line.
|
Let $y = mx+b$ be the equation of your line, and then find the point of intersection with $y=x$, which will be $(p,p)$.
Then you need to find two integrals: the integral from $0$ to $p$ of $x-(mx+b)$ and the integral from $p$ to $1$ of the same function. Then pick whichever one is nonnegative!
EDIT: Just realized the lines don't have to intersect. If that happens, just take the integral from $0$ to $1$ of $x-(mx+b)$ and if it's negative, return 0.
I guess I only simplified a few cases.
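One branch-free numerical sketch of this idea (my own interpretation: assuming the quantity wanted is the area between $x=0$ and $x=1$ where the diagonal lies above the line; the grid size is arbitrary):

```python
import numpy as np

def area_below_diagonal(m, b, samples=100001):
    # Integrate max(x - (m x + b), 0) over [0, 1]: wherever the line sits above
    # the diagonal the integrand is 0, so no case analysis is needed
    x = np.linspace(0.0, 1.0, samples)
    gap = np.maximum(x - (m * x + b), 0.0)
    return np.sum((gap[:-1] + gap[1:]) / 2) * (x[1] - x[0])  # trapezoidal rule

print(area_below_diagonal(0.0, 0.0))  # 0.5, the whole triangle under y = x
```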
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1300122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to evaluate $\cot(2\arctan(2))$? How do you evaluate the above?
I know that $\cot(2\tan^{-1}(2)) = \cot(2\cot^{-1}\left(\frac{1}{2}\right))$, but I'm lost as to how to simplify this further.
|
Let $y=\cot(2\arctan 2)$. You can use the definition $\cot x\equiv \frac{1}{\tan x} $ and the identity for $\tan 2x\equiv\frac{2\tan x}{1-\tan^2 x}$ to find $\frac 1y=\tan(2 \arctan 2)$ in terms of $\tan(\arctan 2) = 2$.
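Following the hint, $\tan(2\arctan 2)=\frac{2\cdot 2}{1-2^2}=-\frac43$, so $\cot(2\arctan 2)=-\frac34$; a one-line numerical check:

```python
import math

value = 1.0 / math.tan(2.0 * math.atan(2.0))  # cot(2 arctan 2)
print(value)  # approximately -0.75
```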
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1300208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
How can I simplify $1\times 2 + 2 \times 3 + \cdots + (n-1) \times n$ progression? I have a progression that goes like this:
$$1\times 2 + 2 \times 3 + \cdots + (n-1) \times n$$
Is there a way I can simplify it?
|
Hint. You have
$$
(n^3-n)-\left((n-1)^3-(n-1)\right)=3\times(n-1)\times n
$$ and the sum is telescoping.
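Summing the hint's identity for $2,\dots,n$ telescopes to $n^3-n$, so $\sum_{k=2}^{n}(k-1)k=\frac{n^3-n}{3}$; a quick check:

```python
def direct_sum(n):
    # 1*2 + 2*3 + ... + (n-1)*n computed term by term
    return sum((k - 1) * k for k in range(2, n + 1))

def closed_form(n):
    # From the telescoping identity: the sum equals (n^3 - n)/3
    return (n**3 - n) // 3

print(direct_sum(10), closed_form(10))  # 330 330
```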
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1300289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
How does one prove that $\mathbb{Z}\left[\frac{1 + \sqrt{-43}}{2}\right]$ is a unique factorization domain? By extending the Euclidean algorithm one can show that $\mathbb{Z}[i]$ has unique factorization.
This logic extends to show $\mathbb{Z}\left[\frac{1 + \sqrt{-3}}{2}\right]$, $\mathbb{Z}\left[\frac{1 + \sqrt{-7}}{2}\right]$ and $\mathbb{Z}\left[\frac{1 + \sqrt{-11}}{2}\right]$ have unique factorization.
However, there are principal ideal domains which are not Euclidean. How do we check that $\mathbb{Z}\left[\frac{1 + \sqrt{-19}}{2}\right]$ and $\mathbb{Z}\left[\frac{1 + \sqrt{-43}}{2}\right]$ have unique factorization?
There are two other interesting rings: $\mathbb{Z}\left[\frac{1 + \sqrt{-41}}{2}\right]$ and $\mathbb{Z}[\sqrt{-41}]$.
|
The first thing to keep in mind is that almost all complex quadratic integer rings are non-UFDs. Put another way, if a random $d$ is negative, then you can be almost certain that the ring of algebraic integers of $\mathbb{Q}(\sqrt{d})$ (often denoted $\mathcal{O}_{\mathbb{Q}(\sqrt{d})}$) has class number greater than $1$. If $d < -7$ is odd, then $2$ is irreducible but $1 - d = (1 - \sqrt{d})(1 + \sqrt{d})$. For example, in $\textbf{Z}[\sqrt{-21}]$, we have $22 = 2 \times 11 = (1 - \sqrt{-21})(1 + \sqrt{-21})$, yet neither of the last two factors is divisible by $2$ or $11$.
In domains like $\mathcal{O}_{\mathbb{Q}(\sqrt{-19})}$ and a very few cases you've probably already seen in your textbooks (e.g., Heegner numbers), things get a lot more interesting. It is true that $$20 = 2^2 \times 5 = 2^2 \left(\frac{1 - \sqrt{-19}}{2}\right) \left(\frac{1 + \sqrt{-19}}{2}\right),$$ but it turns out that $$\left(\frac{1 - \sqrt{-19}}{2}\right) \left(\frac{1 + \sqrt{-19}}{2}\right) = 5.$$ This means that $2^2 \times 5$ is an incomplete factorization of $20$ in this ring.
And as you already know, this ring is not Euclidean, so that shortcut for proving unique factorization is not available. It is worth reviewing the fact that every principal ideal domain is a unique factorization domain (Theorem $2.3$ in Peric & Vukovic). If you can prove that the ring at hand (no pun intended) is a principal ideal domain, then you have also proven that it is UFD. There is a paper where they do just that, but I can't remember at the moment how I found it (it's on the Internet, I can tell you that much).
Peric & Vukovic take a different approach: they prove that $\mathcal{O}_{\mathbb{Q}(\sqrt{-19})}$ is an "almost Euclidean domain" (Definition $2.3$, Theorem $3.3$) and that all such domains are also UFDs (Theorem $2.2$).
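The $\textbf{Z}[\sqrt{-21}]$ example above can be verified with the multiplicative norm $N(a+b\sqrt{-21})=a^2+21b^2$: a proper factor of $2$ (norm $4$) would need norm $2$, and a proper factor of $11$ (norm $121$) would need norm $11$; a brute-force sketch shows neither norm is attained.

```python
# In Z[sqrt(-21)], N(a + b*sqrt(-21)) = a^2 + 21 b^2 is multiplicative, so a
# non-trivial factor of 2 would have norm 2 and one of 11 would have norm 11.
def has_element_of_norm(n, bound=50):
    # a^2 <= n already forces a < bound; the search range is generous
    return any(a * a + 21 * b * b == n
               for a in range(bound) for b in range(bound))

print(has_element_of_norm(2), has_element_of_norm(11))  # False False
```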
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1300399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Distance between line and a point Consider the points (1,2,-1) and (2,0,3).
(a) Find a vector equation of the line through these points in parametric form.
(b) Find the distance between this line and the point (1,0,1). (Hint: Use the parametric form of the equation and the dot product)
I have solved (a), Forming:
Vector equation: (1,2,-1)+t(1,-2,4)
x=1+t
y=2-2t
z=-1+4t
However, I'm a little stumped on how to solve (b).
|
You can use a formula, although I think it's not too difficult to just go through the steps. I would draw a picture first:
You are given that $\vec{p}_0 = (1,0,1)$ and you already found $\vec{m} = (1, -2, 4)$ and $\vec{l}_0 = (1,2,-1)$. Now it's a matter of writing an expression for $\vec{l}(t) - \vec{p}_0$:
\begin{align}
\vec{l}(t) - \vec{p}_0 =&\ (\ (t + 1) - 1\ ,\ (-2t + 2) - 0\ ,\ (4t - 1) - 1\ )\\
=&\ (\ t\ ,\ -2t + 2\ ,\ 4t - 2\ )
\end{align}
Now you dot this with the original direction of the line (note that $\vec{l}(t) - \vec{p}_0$ is the direction of the line segment connecting the point and the line). When this dot product equals zero, you have found $t_0$ and thus $\vec{x}_0$:
\begin{align}
\vec{m} \circ (\vec{l}(t) - \vec{p}_0) =&\ (1,-2,4)\circ(\ t\ ,\ -2t + 2\ ,\ 4t - 2\ ) \\
=&\ t + 4t - 4 + 16t - 8 \\
=&\ 21t - 12
\end{align}
Setting this to $0$ gives that $21t_0 - 12 = 0 \rightarrow t_0 = \frac{4}{7}$. This gives the point $\vec{x}_0$ as:
\begin{align}
\vec{x}_0 =&\ \vec{l}(t_0) = (\ \frac{4}{7} + 1\ ,\ -\frac{8}{7} + 2\ ,\ \frac{16}{7} - 1\ ) \\
=&\ \frac{1}{7}(11, 6, 9)
\end{align}
So finally the distance would be the distance from $\vec{p}_0$ to $\vec{x}_0$:
\begin{align}
d =&\ \sqrt{\left(\frac{11}{7} - 1\right)^2 + \left(\frac{6}{7} - 0\right)^2 + \left(\frac{9}{7} - 1\right)^2}\\
=&\ \sqrt{\left(\frac{4}{7}\right)^2 + \left(\frac{6}{7}\right)^2 + \left(\frac{2}{7}\right)^2} \\
=&\ \frac{1}{7}\sqrt{4^2 + 6^2 + 2^2}\\
=&\ \frac{1}{7}\sqrt{56} \\
=&\ \frac{2}{7}\sqrt{14}
\end{align}
...or perhaps $\sqrt{\frac{8}{7}}$ is more appealing.
Extra Info
There's no need to worry about whether or not my 2D picture is really representative--it is. No matter how high the dimensions of the problem, the problem itself can always be mapped to exactly 2 dimensions unless the point is on the line--then it's a 1 dimensional problem--which of course we can represent in 2 dimensions just as we can represent this 2 dimensional problem in much higher ones.
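The whole computation can be verified numerically (a sketch following exactly the steps above):

```python
import numpy as np

p0 = np.array([1.0, 0.0, 1.0])   # the given point
l0 = np.array([1.0, 2.0, -1.0])  # a point on the line
m = np.array([1.0, -2.0, 4.0])   # direction of the line

# Foot of the perpendicular: solve m . (l0 + t m - p0) = 0 for t
t0 = np.dot(p0 - l0, m) / np.dot(m, m)  # 12/21 = 4/7
x0 = l0 + t0 * m                         # (11/7, 6/7, 9/7)
d = np.linalg.norm(p0 - x0)
print(t0, d)  # 4/7 and 2*sqrt(14)/7, about 1.069
```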
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1300484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Showing Uniform convergence of $\frac{n x}{1 + n \sin(x)}$ I want to prove for all $a\in \left(0,\frac{\pi}{2}\right]$, $ \ f_n\to f$ uniformly on $\left[a,\frac{\pi}{2}\right]$.
Also, how is this different from $f_n \to f$ uniformly on $\left(0, \frac{\pi}{2}\right]$ ?
I made the crucial error of omitting that first line, when I asked this question before.
Let us define:
$$f_n(x) = \frac{n x}{1 + n \sin(x)}$$
which converges pointwise to the function $f$ given by:
$$f(0) = 0 \quad\text{and}\quad f(x) = \frac{x}{\sin(x)} \ \text{ for } x \in \left(0, \frac{\pi}{2}\,\right]$$
Here is my attempt at the problem:
If $x \in \left(0, \frac{\pi}{2}\right]$ then
$$\left|\, f_n(x) - f(x) \right| =
\left| \frac{nx}{1 + n\sin(x)} - \frac{x}{\sin(x)}\right|
\\
= \left|\frac{nx \sin(x) - x\, \big(1 + n \sin(x)\big)}{\big(1+ n \sin(x)\big)\sin(x)} \right|
\\
= \frac{x}{\sin(x) + n \sin^2(x)} \leq \frac{1}{n},
$$
is this line correct?
So $\forall \epsilon > 0$, we may choose $N \geq \frac{1}{\epsilon}$ such that when $n \geq N \implies \left|\,f_n(x)-f(x)\right| \leq \epsilon \quad \forall x \in \left(0, \frac{\pi}{2}\right]$
I am not satisfied or confident in my answer, may anyone else suggest improvements?
|
The non-uniform convergence on $(0,\pi/2]$ was addressed (partially) in another post. It follows from $\displaystyle \lim_{n \to \infty}|f_n(x_n) - f(x_n)| = 1$ for a suitable sequence with $x_n \to 0$, e.g. $x_n = 1/n^2$.
Convergence is uniform on $[a,\pi/2]$ for $0 < a < \pi/2$.
Note that as $n \to \infty$ we have for all $x \in [a,\pi/2]$,
$$|f_n(x) - f(x)| = \frac{x}{\sin x + n \sin^2 x} \leqslant \frac{\pi}{2n \sin^2 a}\to 0.$$
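The uniform bound can be sanity-checked on a grid (a sketch; the choices $a=0.5$ and $n=100$ are arbitrary):

```python
import numpy as np

a, n = 0.5, 100
x = np.linspace(a, np.pi / 2, 2001)

# |f_n(x) - f(x)| in the simplified form x / (sin x + n sin^2 x)
diff = x / (np.sin(x) + n * np.sin(x)**2)
bound = np.pi / (2 * n * np.sin(a)**2)
print(diff.max(), bound)  # the sup over [a, pi/2] stays below the bound
```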
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1300570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Why do disks on planes grow more quickly with radius than disks on spheres? In the book, Mr. Tompkins in Wonderland, there is written something like this:
On a sphere the area within a given radius grows more slowly with the radius than on a plane.
Could you explain this to me? I think that formulas shows something totally different:
The area of a sphere is $4 \pi r^2$, but the area of a disk on the plane is $\pi r^2$.
UPDATE:
Later in this book there is:
If, for example, you are on the north pole, the circle with the radius equal to a half meridian is the equator, and the area included is the northern hemisphere. Increase the radius twice and you will get in all the earth's surface; the area will increase only twice instead of four times if it were on a plane.
Could someone explain me this?
|
Theorem: Let $0 < r \leq 2R$ be real numbers, $S_{1}$ a Euclidean sphere of radius $R$, $S_{2}$ a Euclidean sphere of radius $r$ centered at a point $O$ of $S_{1}$, and $D_{r}$ the portion of $S_{1}$ inside $S_{2}$.
The area of $D_{r}$ is $\pi r^{2}$.
Now let $D'_{r}$ be the disk of radius $r$ in $S_{1}$, i.e., the portion of $S_{1}$ at distance at most $r$ from $O$, measured along the surface of $S_{1}$. Since a chord of length $r$ subtends an arc of length strictly greater than $r$, the disk $D_{r}'$ is strictly smaller than $D_{r}$, and therefore has smaller area.
Proof of Theorem: The region $D_{r}$ (bold arc) is a zone on a sphere of radius $R$. To calculate its width, consider a longitudinal section:
Let $\theta$ be the angle subtended at the center of $S_{1}$ by a chord of length $r$. The indicated right triangles share an interior angle, and consequently are similar, with $\frac{\theta}{2}$ as interior angle. Reading from the right triangle whose hypotenuse is horizontal,
$$
R\sin\tfrac{\theta}{2} = \tfrac{r}{2}.
$$
On the other hand, reading from the right triangle with the dashed vertical side shows that $D_{r}$ has width $h = r\sin\frac{\theta}{2}$. By Archimedes' theorem, the area of $D_{r}$ is
$$
2\pi Rh = 2\pi R \cdot r\sin\tfrac{\theta}{2}
= 2\pi r \cdot R\sin\tfrac{\theta}{2}
= \pi r^{2}.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1300648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
What exactly is wrong with this argument (Lucas-Penrose fallacy) Argument
"For every computer system, there is a sentence which is undecidable for the computer, but the human sees that it is true, therefore proving the sentence via some non-algorithmic method."
|
The basic fallacy here comes by regarding the original Gödelian proof. First, one assumes the consistency of a theory T, and from T shows that the sentence G is undecidable. Now, to show (formally) that G is nonetheless true in the standard model (there are models in which G is false!), one requires a more powerful theory P. Now, a human can see that G is true in the same way. So, the correct statement is: G is undecidable in T, but can be shown with P to be true in the standard model. Thus the need for a non-algorithmic method is not shown so far.

Further: the assumption that a human would be able to decide every sentence is an unwarranted assumption. It is not too difficult to imagine that the human will also be stumped at one point. Indeed, there are undecidable sentences for which there is no standard model, and so no "correct" decision until you add some more axioms: the mathematical literature is full of them. The most infamous one is: can a human decide the Continuum Hypothesis on the basis of the standard ZFC axioms? No. Here the human and the computer are on equal footing.

(I am being a little loose here with the Church-Turing thesis in my interchanging mathematical proof with computability, but one could clean up the argument to do without it. For example, see https://math.stanford.edu/~feferman/papers/penrose.pdf)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1300727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Empty set does not belong to any cartesian product? I am reading from Halmos naive set theory, for Cartesian product, defined as:
$$
A\times B=\left\{x: x \in P(P(A\cup B))\,\wedge\,\exists a \in A,\exists b \in B,\, x=(a,b)\right\}
$$
The empty set belongs to $P(P(A\cup B))$, but because of the existential quantifier in "$x=(a,b)$ for some $a$ in $A$ and some $b$ in $B$", the empty set doesn't belong to $A\times B$, right? But still, the empty set could be an ordered pair.
In Halmos, $(a,b)=\{\{a\},\{a,b\}\}$, so the empty set is also an ordered pair, but no Cartesian product has it as an element. Am I correct?
|
With the definition,
$$A \times B = \{ X \in P(P (A \cup B)) : \exists a \in A, \exists b \in B[X = (a,b)]\}$$
one should note that no ordered pair $(a,b)=\{\{a\},\{a,b\}\}$ is empty, since it always contains $\{a\}$ as an element; hence the empty set is never an element of a Cartesian product. (On the other hand, $\{\varnothing\} \in A$ is certainly possible, i.e. the set containing the empty set can be an element; there are even set-theoretic definitions of $0$ such as $\{\varnothing\}$.) However, the empty set is still a subset of $A \times B$, which differs from being an element.
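The element-versus-subset distinction can be made concrete with frozensets (an illustrative sketch using Kuratowski's definition; the sample sets are my own):

```python
def pair(a, b):
    # Kuratowski ordered pair (a, b) = {{a}, {a, b}}
    return frozenset({frozenset({a}), frozenset({a, b})})

A, B = {1, 2}, {3}
product = {pair(a, b) for a in A for b in B}

# Every pair contains {a}, so no pair is the empty set ...
print(frozenset() in product)  # False
# ... yet the empty set is a subset of A x B, as it is of any set
print(set() <= product)        # True
```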
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1300825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Distance between points Suppose I have two matrices each containing coordinates of $m$ and $n$ points in 2 D. Is there an easy way using linear algebra to calculate the euclidean distance between all points (i.e., the results should be a $m$ by $n$ matrix)?
|
We start with
$$
M = (u_1, \ldots, u_m) \in \mathbb{R}^{2\times m} \\
N = (v_1, \ldots, v_n) \in \mathbb{R}^{2\times n}
$$
and want
$$
D = (d_{ij}) \in \mathbb{R}^{m\times n}
$$
with
\begin{align}
d_{ij}
&= \lVert u_i - v_j \rVert \\
&= \sqrt{(u_i - v_j)\cdot (u_i - v_j)} \\
&= \sqrt{u_i \cdot u_i + v_j \cdot v_j - 2(u_i \cdot v_j)} \\
&= \sqrt{(M e_i - N e_j)\cdot (M e_i - N e_j)} \\
&= \sqrt{M e_i \cdot M e_i + N e_j \cdot N e_j - 2 (M e_i \cdot N e_j)} \\
&= \sqrt{(M^t M)_{ii} + (N^tN)_{jj}-2(M^tN)_{ij}} \\
\end{align}
I do not see an advantage of the matrix formulation here. Besides, one has to take the square roots anyway.
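In practice, though, the last line of the derivation is exactly how vectorized libraries compute all pairwise distances at once; a NumPy sketch (the helper name is my own):

```python
import numpy as np

def pairwise_distances(M, N):
    """All Euclidean distances between columns of M (2 x m) and N (2 x n)."""
    G_MM = np.sum(M * M, axis=0)  # (M^t M)_{ii} for each i
    G_NN = np.sum(N * N, axis=0)  # (N^t N)_{jj} for each j
    G_MN = M.T @ N                # (M^t N)_{ij}
    sq = G_MM[:, None] + G_NN[None, :] - 2.0 * G_MN
    return np.sqrt(np.maximum(sq, 0.0))  # clip tiny negatives from rounding

M = np.array([[0.0, 1.0, 2.0], [0.0, 0.0, 2.0]])  # m = 3 points as columns
N = np.array([[3.0, 0.0], [4.0, 1.0]])            # n = 2 points as columns
print(pairwise_distances(M, N))  # entry (0, 0) is dist((0,0),(3,4)) = 5
```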
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1301012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to analytically evaluate $\cos(\pi/4+\text{atan}(2))$ This equals $\cos(\pi/4)\cos(\text{atan}(2))-\sin(\pi/4)\sin(\text{atan}(2))$.
I'm just not sure how to evaluate $\cos(\text{atan}(2))$
|
Hint: $\sin(\arctan(2))=\frac{\tan(\arctan(2))}{\sec(\arctan(2))}=\frac2{\sqrt5}$ and $\cos(\arctan(2))=\frac1{\sec(\arctan(2))}=\frac1{\sqrt5}$
Recall that $\sec^2(\theta)=\tan^2(\theta)+1$
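Putting the hint together, $\cos(\pi/4+\arctan 2)=\frac1{\sqrt2}\cdot\frac1{\sqrt5}-\frac1{\sqrt2}\cdot\frac2{\sqrt5}=-\frac1{\sqrt{10}}$; a quick numerical confirmation:

```python
import math

lhs = math.cos(math.pi / 4 + math.atan(2))
rhs = -1 / math.sqrt(10)
assert abs(lhs - rhs) < 1e-12
```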
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1301099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
$f(n) = n^{\log(n)}$, $g(n) = \log(n)^{n}$ is $f\in O(g(n))$? $$f(n) = n^{\log(n)}$$
$$g(n) = \log(n)^n$$
$$f\in O(g(n))\text{ or }f \notin O(g(n))$$
why? I do not seem to get this one in particular
For O (big O)
Thanks!
|
Hint: $n^{\log(n)}=e^{(\log n)^2}$ and $\log(n)^n=e^{n \log(\log(n))}$. Which of the two grows faster?
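Comparing the two exponents numerically makes the answer apparent — $n\log\log n$ eventually dwarfs $(\log n)^2$, so $f\in O(g)$ but not conversely (a rough sketch, taking logs to avoid overflow):

```python
import math

def log_f(n):   # log of n^(log n)  =  (log n)^2
    return math.log(n) ** 2

def log_g(n):   # log of (log n)^n  =  n * log(log n)
    return n * math.log(math.log(n))

for n in (10**2, 10**4, 10**6):
    assert log_g(n) > log_f(n)
# and the gap keeps widening:
assert log_g(10**6) / log_f(10**6) > log_g(10**3) / log_f(10**3)
```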
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1301173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving with integration by parts: $\int \frac 1 {x\ln^2x}dx$
Solving: $$\int \frac 1 {x\ln^2x}dx$$ with parts.
$$\int \frac 1 {x\ln^2x}dx= \int \frac {(\ln x)'} {\ln^2x}dx \overset{parts} = \frac {1} {\ln x}-\int \frac {(\ln x)} {(\ln^2x)'}dx$$
$$\int \frac {(\ln x)} {(\ln^2x)'}dx=\int \frac {(x\ln x)} {(2\ln x)}dx=\frac {x^2} 4$$
So $$\int \frac 1 {x\ln^2x}dx= \frac {1} {\ln x} - \frac {x^2} 4$$
But the answer should be $-\frac {1} {\ln x}$ and I can't find which step is wrong...
http://www.integral-calculator.com/#expr=1%2F%28x%28lnx%29%5E2%29
|
As pointed out in previous answers, this can be found most easily using the substitution $u=\ln x$.
Using integration by parts, though, with $u=(\ln x)^{-2}$ and $dv=\frac{1}{x}dx$, so $du=-2(\ln x)^{-3}dx$ and $v=\ln x$,
gives $\displaystyle\int\frac{1}{x(\ln x)^2}dx=(\ln x)^{-1}-(-2)\int(\ln x)^{-2}\frac{1}{x}dx=(\ln x)^{-1}+2\int\frac{1}{x(\ln x)^2}dx$, so
$\displaystyle-\int\frac{1}{x(\ln x)^2}dx=(\ln x)^{-1}+C\;\;$ and $\;\;\displaystyle\int\frac{1}{x(\ln x)^2}dx=-(\ln x)^{-1}+C$
(Notice that you have a mistake in the 3rd line of your answer where you get $\frac{x^2}{4}$.)
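One can double-check the antiderivative numerically (a throwaway sketch): the derivative of $-1/\ln x$ matches the integrand $1/(x\ln^2x)$ at sample points.

```python
import math

f = lambda x: 1 / (x * math.log(x) ** 2)   # the integrand
F = lambda x: -1 / math.log(x)             # claimed antiderivative

h = 1e-6
for x in (2.0, 3.0, 10.0):
    num_deriv = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert abs(num_deriv - f(x)) < 1e-6
```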
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1301242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Prove that $|\sin^{−1}(a)−\sin^{−1}(b)|≥|a−b|$ Question:
Using the Mean Value Theorem, prove that $$|\sin^{−1}(a)−\sin^{−1}(b)|≥|a−b|$$ for all $a,b∈(1/2,1)$. Here, $\sin^{−1}$ denotes the inverse of the sine function.
Attempt:
I think I know how to do this but I want to make sure that I am as detailed as possible so I get all the marks. Here is my attempt:
Define $f:[-1,1] \rightarrow [-\pi/2,\pi/2]$ by $f(x)=\sin^{-1}(x)$. This is a differentiable function on $(-1,1)$ and continuous on $[-1,1]$. 'Without loss of generality' assume $a<b$. Our $f$ is continuous on $[a,b]$ and differentiable on $(a,b)$ since $[a,b] \subset [-1,1]$.
By MVT, there exists $c \in (-1,1)$ such that $$\frac{f(b)-f(a)}{b-a}=\frac{\sin^{−1}(a)−\sin^{−1}(b)}{b-a}=f'(c)=\frac1{\sqrt{1-c^2}}\geq 1$$ which gives us: $\sin^{−1}(a)−\sin^{−1}(b) \geq b-a$ and then giving the desired result by putting modulus on both sides.
My concern is that i said $a<b$. Am i allowed to do that?
And more importantly I let $c \in(-1,1)$ and not $(a,b)$. Is that wrong? If I did let it in $(a,b)$ then its impossible to say that it is $\geq 1$...
|
We can do this without calculus. First we will show an equivalent inequality: $$|\sin t - \sin s| \le |t-s|.\tag 1$$ You can show $(1)$ using the interpretation that $(\cos t , \sin t)$ gives the coordinates of the terminal point on the unit circle corresponding to the signed arc length $t$ measured from $(1,0)$.
Let $$P = (\cos t, \sin t), \quad Q = (\cos s, \sin s).$$
Since an arc is at least as long as the chord joining its endpoints, $|t-s| = \operatorname{arc} PQ \ge PQ = \sqrt{(\sin t - \sin s)^2 +(\cos t - \cos s)^2} \ge |\sin t - \sin s|$.
Suppose $$\sin^{-1} (a) = t, \quad \sin^{-1}(b) = s.$$ Then $$-\pi/2 \le t, s \le \pi/2, \quad \sin t = a, \quad \sin s = b.$$ Putting these in $(1),$ we have $$ |a-b| \le |\sin^{-1} (a) -\sin^{-1}(b)|.$$
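A brute-force numerical check of the final inequality on the interval $(1/2,1)$ from the question (the sample grid is my own choice):

```python
import math

pts = [0.5 + 0.005 * i for i in range(1, 100)]   # grid inside (1/2, 1)
for a in pts:
    for b in pts:
        # |asin(a) - asin(b)| >= |a - b|, with a hair of float slack
        assert abs(math.asin(a) - math.asin(b)) >= abs(a - b) - 1e-15
```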
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1301414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
}
|
Uniqueness of basis vectors Say I have 2 vectors $v_1$ and $v_2$ as basis of a subspace. Then is it true that $kv_1$ and $mv_2$ where $k$ and $m$ are real numbers, are also basis for that subspace?
|
Start with the definition of a basis of a vector space. A basis is a set of linearly independent vectors that spans the vector space.
Now $\{v_1,v_2\}$ is a spanning set of vectors for a vector space $V$ over a field $F$ if and only if every vector $v\in V$ can be written as a linear combination of the basis vectors. That is, for all $v\in V$ there exist some scalars $a_v,b_v \in F$ such that
$$
v=a_vv_1+b_vv_2.
$$
Then, if $k$ and $m$ in $F$ are non zero,
$$
v=(a_v/k)(kv_1)+(b_v/m)(mv_2)
$$
so that the set $\{kv_1, mv_2\}$ spans $V$.
On the other hand, $\{v_1,v_2\}$ is a linearly independent set of vectors if and only if the equation
$$
av_1+bv_2=0
$$
implies that $a=b=0$.
Then
$$
a(kv_1)+b(mv_2)=(ak)v_1+(bm)v_2=0
$$
implies that $a$ and $b$ are zero (since $k$ and $m$ are assumed to be nonzero).
It follows that $\{kv_1, mv_2\}$ is a basis for $V$.
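A small numerical illustration in the plane (the vectors and scalars are hypothetical): scaling multiplies the determinant by $km$, so independence is preserved exactly when $k,m\ne0$.

```python
def det2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def scale(c, u):
    return (c * u[0], c * u[1])

v1, v2 = (1.0, 2.0), (3.0, 1.0)
k, m = 2.0, -0.5

assert det2(v1, v2) != 0                          # {v1, v2} is a basis of R^2
assert det2(scale(k, v1), scale(m, v2)) == k * m * det2(v1, v2)
assert det2(scale(k, v1), scale(m, v2)) != 0      # {k v1, m v2} still a basis
```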
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1301493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Laplace transform of the wave equation I started of with the wave equation $$\frac{\partial^2 u}{\partial x^2}=\frac{\partial^2 u}{\partial t^2}$$
with boundary conditions $u=0$ at $x=0$ and $x=1$, and initial conditions $u=\sin(\pi x)$ and $\frac{\partial u}{\partial t}=0$ when $t=0$
I have gotten to the stage $$\frac{d^2 v}{dx^2}-s^2v=-s\sin(\pi x)$$
Where do I go from here?
My notes say to find the complementary function using the auxiliary equation $m^2-s^2=0$
to get $$v_c=Ae^{sx}+Be^{-sx}$$ Then it says to find the particular integral for the right hand side using the trial function $$v_p=a\sin(\pi x)+b\cos(\pi x)$$
But I do not quite understand this.
Why do we need the complementary function?
What is the trial function and why do we need it?
|
$$\frac{d^2 v}{dx^2}-s^2v=-s\sin(\pi x)$$
Once you have this, you have a differential equation of variable $x$. So you will solve it assuming $s$ is a constant. What you have is a linear second order constant coefficient inhomogeneous ordinary differential equation. It will have a complementary (homogeneous) and a particular component.
The auxiliary (characteristic) equation, obtained by substituting $v=e^{mx}$ into the homogeneous equation, is $m^2-s^2=0$. Its roots are $\pm s$, so the homogeneous solution is a linear combination of the corresponding exponentials $e^{sx}$ and $e^{-sx}$; $A$ and $B$ are constant coefficients.
For the particular solution, the method you describe is using the method of undetermined coefficients. It is pointless to explain it here, there are quite a few things you need to be careful about and you need to study it properly. You can't solve partial differential equations without knowing such fundamentals about ordinary differential equations. But to find $a$ and $b$, you will insert $v_p$ into the equation (keep in mind $s$ is constant) and you will equate coefficients of $\sin$'s and $\cos$'s
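To make the undetermined-coefficients step concrete: substituting $v_p=a\sin(\pi x)+b\cos(\pi x)$ into $v''-s^2v=-s\sin(\pi x)$ gives $-(\pi^2+s^2)\,(a\sin(\pi x)+b\cos(\pi x))=-s\sin(\pi x)$, so $a=s/(\pi^2+s^2)$ and $b=0$. A quick numeric check, treating $s$ as a fixed constant (the value $1.7$ is arbitrary):

```python
import math

s = 1.7
a = s / (math.pi ** 2 + s ** 2)          # from matching the sin coefficients
v_p = lambda x: a * math.sin(math.pi * x)

h = 1e-4
for x in (0.1, 0.4, 0.9):
    v_pp = (v_p(x + h) - 2 * v_p(x) + v_p(x - h)) / h ** 2   # approx v''
    residual = v_pp - s ** 2 * v_p(x) + s * math.sin(math.pi * x)
    assert abs(residual) < 1e-6          # v_p solves the inhomogeneous ODE
```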
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1301570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
What is $X\cap\mathcal P(X)$? Does the powerset of $X$ contain $X$ as a subset, and thus $X\cap \mathcal{P}(X)=X$, or is $X\cap \mathcal{P}(X)=\emptyset$ since $X$ is a member of the
$\mathcal{P}(X)$, and not a subset?
|
It depends.
For "normal" sets, this intersection will be empty, so in general, your second thought is the correct one.
However, if we have something like:
$X = \{1,2,3, \{2,3\}\}$ then intersecting $X$ with its own power set will actually have an element - $\{2,3\}$
The distinction to be made here is subsets vs elements.
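This is easy to experiment with in Python (a toy sketch using `frozenset` for the nested set, since set elements must be hashable):

```python
from itertools import combinations

def powerset(X):
    # all subsets of X, as frozensets
    return {frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)}

X = frozenset({1, 2, 3, frozenset({2, 3})})
assert X & powerset(X) == {frozenset({2, 3})}   # {2,3} is element AND subset

Y = frozenset({1, 2, 3})
assert Y & powerset(Y) == set()                  # the "normal" case
```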
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1301688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
}
|
Why is the chain rule applied to derivatives of trigonometric functions? I'm having trouble to understand why is the Chain rule applied to trigonometric functions, like:
$$\frac{d}{dx}\cos 2x=[2x]'*[\cos 2x]'=-2 \sin 2x$$
Why isn't it like in other variable derivatives? Like in:
$$ \frac{d}{dx} 3x^2=[3x^2]'=6x $$
Does it means it is the derivative of the trig function times the derivative of the angle?
Thanks once again.
|
A more appropriate example would be
$$\frac{d}{dx} (3x)^2=2(3x) \cdot (3x)' = 18x$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1301756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Why do we need $\sup$ and $\inf$ when we have $\max$ and $\min$. In my analysis text, it seems that $\max$ and $\min$ are replaced by $\sup$ and $\inf$ for 1D single variable function, why is this the case?
|
Well there isn't always a maximum. Consider $S=\{r\in\Bbb Q:r<2\}$. This is a bounded set with no maximum element. But we can say that $2$ is the smallest number which is an upper bound of $S$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1301835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
A bounded sequence in a Banach space Let $X$ be a Banach space and $\langle x_n\rangle $ be a sequence in $X$.
If $(f(x_n))$ is a bounded sequence for any bounded linear functional $f$ on $X$, then $(x_n)$ is a bounded sequence in $X$.
I have to prove this fact. I first thought it would be simple, but it turns out to be trickier...
Could anyone help me with this?
|
As PhoemueX said: the statement follows from the Uniform Boundedness Principle applied to linear maps $\phi_n:X^*\to \mathbb{K}$ that are defined by $\phi_n(f)=f(x_n)$. ($\mathbb{K}$ stands for $\mathbb{R}$ or $\mathbb{C}$.) The principle applies here because $X^*$ is complete even when $X$ isn't.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1301951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What is the relationship betweeen a pdf and cdf? I am learning stats. On page 20, my book, All of Statistics 1e, defines a CDF as function that maps x to the probability that a random variable, X, is less than x.
$F_{X}(x) = P(X\leq x)$
On page 23 it gives a function
$P(a < X < b ) = \int_{a}^{b}f_{X}(x)\,dx$
and then says that "the function $f_{X}$ is called the probability density function. We have that..."
$F_{X}(x) = \int_{-\infty}^{x}f_{X}(t)\,dt$
I am a little confused about how to characterize the most important difference between them. The equation above says that the cdf is the integral of the pdf from negative infinity to x. Is it fair to say that the cdf is the integral of the pdf from negative infinity to x?
|
The cumulative distribution function $F_X$ of any random variable $X$ is defined by
$$
F_X(x)=P(X\le x)
$$
for $x\in\mathbb R$. If $X$ is a continuous random variable, then the cumulative distribution function $F_X$ can be expressed as
$$
F_X(x)=\int_{-\infty}^xf(t)\,\mathrm dt
$$
for $x\in\mathbb R$, where $f$ is the probability density function of a random variable $X$. If a random variable $X$ is not continuous, it does not have a probability density function (for example, a random variable with Bernoulli distribution).
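A numerical illustration with one concrete continuous distribution (the standard exponential, chosen arbitrarily): integrating the pdf $f(t)=e^{-t}$ from $0$ to $x$ reproduces the cdf $F(x)=1-e^{-x}$.

```python
import math

f = lambda t: math.exp(-t)        # pdf of Exp(1), supported on [0, inf)
F = lambda x: 1 - math.exp(-x)    # its cdf

def integrate(g, a, b, n=20000):  # simple midpoint rule
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

for x in (0.5, 1.0, 3.0):
    assert abs(integrate(f, 0.0, x) - F(x)) < 1e-6
```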
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1302015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
}
|
Finding the mode of the negative binomial distribution The negative binomial distribution is as follows: $\displaystyle f_X(k)=\binom{k-1}{n-1}p^n(1-p)^{k-n}.$
* To find its mode, we want to find the $k$ with the highest probability.
* So we want to find $k$ such that $P(X=k-1)\leq P(X=k) \geq P(X=k+1).$
I'm getting stuck working with the following:
If $P(X=k-1)\leq P(X=k)$ then $$1 \leq \frac{P(X=k)}{P(X=k-1)}=\frac{\binom{k-1}{n-1}p^n(1-p)^{k-n}}{\binom{k-2}{n-1}p^{n}(1-p)^{k-n-1}}.$$
First of all, I'm wondering if I'm on the right track. Also, I'm having problems simplifying the binomial terms.
|
You are on the right track (except for typos in the notation now corrected, see @Henry's Comment and my response). Express binomial coefficients
in terms of factorials. Some factorials will cancel exactly. Others
will have factors in common: for example, 10!/9! = 10.
For a simple start, you might try the case where $p = 1/2.$
Answer: The mode is at the integer part
of $t = 1 + (n-1)/p,$ if $t$ is not an integer. For integer $t,$ there
is a 'double mode' at $t-1$ and $t.$
Examples: Below are three examples that illustrate this formula
(4-place accuracy):
n = 2; p = 1/2; t = 3, mode at 2 & 3
k : 2 3 4 5
p(k): 0.2500 0.2500 0.1875 0.1250
n = 2; p = 1/3; t = 4, mode at 3 & 4
k : 2 3 4 5
p(k): 0.1111 0.1481 0.1481 0.1317
n = 2; p = .4; t = 3.5, mode at 3
k : 2 3 4 5
p(k): 0.1600 0.1920 0.1728 0.1382
Note: Wikipedia and many advanced texts use a different form of the negative
binomial distribution where only failures before the $n$th
success are counted. Hence $X$ takes values $0, 1, 2, \dots.$
For the mode according to that formulation, see Wikipedia.
(It is noted later in the article, to add $n$ to the mode
given at the head of the article for your formulation.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1302106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How to not feel bad for doing math? I have a MsC and want to take a PhD in algebraic topology. Probably very few people in the world will have any interest of my thesis. They will pay me for doing my hobby. Its the only job I can think of which doesnt contribute to bettering the world somehow. I feel like I should get a real job like doctor or garbageman. I get bad conscience. How can I feel better for pursuing my dream?
|
Simple - you contribute to humankind according to your ability to do your job. Your ability to do your job is fundamentally affected by your enthusiasm for it. It stands to reason that people should take jobs doing things they want to do anyway, regardless of pay, perks, etc.
If you are making great contributions to algebraic topology, then that is what you should be doing. The small number of people being interested in your field means that opportunities might be limited. But if this is what you are determined to do, then you should be able to contribute. (Of course, always have a Plan B.)
If you pursue the medical or the sanitation arts despite showing no interest in either, you will not be able to contribute all of which you are capable. That robs all of us of your potential. Worse, you might get depressed over your inability to pursue your dream. Depression is expensive and does you no good. Better that you strive to be an algebraic topologist, where your contributions will be great and your happiness unbounded.
Never feel guilty about doing what you want to do for a living. It is a gift from a successful society.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1302318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
n points can be equidistant from each other only in dimensions $\ge n-1$? 2 points are from equal distance to each other in dimensions 1,2,3,...
3 points can be equidistant from each other in 2,3,... dimensions
4 points can be equidistant from each other only in dimensions 3,4,...
What is the property of number dimensions that relates the number of points that can be equidistant to all other points?
|
For one direction of this, there is a standard picture. In $\mathbb R^n,$ take the $n$ points
$$ (1,0,0,\ldots,0), $$
$$ (0,1,0,\ldots,0), $$
$$ \cdots $$
$$ (0,0,0,\ldots,1). $$
These are all at pairwise distance $\sqrt 2$ apart.
At the same time, they lie in the $(n-1)$-dimensional plane
$$ x_1 + x_2 + \cdots + x_n = 1. $$ If you wish to work at it, you can rotate this into $\mathbb R^{n-1};$ in any case, $n$ points in $\mathbb R^{n-1}.$
If you prefer, you can keep $\mathbb R^n$ and place a point numbered $(n+1)$ at
$$ (-t,-t,-t, \ldots, -t) $$
for a special value of $t > 0$ that makes all the distances $\sqrt 2.$ I get
$$ n t^2 + 2 t - 1 = 0 $$ or, with $t>0,$
$$ t = \frac{\sqrt {n+1} - 1}{n } = \frac{1}{\sqrt {n+1} + 1} $$
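This construction is easy to verify numerically (a throwaway script): the $n$ standard basis points together with $(-t,\dots,-t)$ give $n+1$ points at pairwise distance $\sqrt2$.

```python
import math

def simplex_points(n):
    # n standard basis points of R^n plus (-t, ..., -t),
    # with t = (sqrt(n+1) - 1) / n as derived above.
    pts = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    t = (math.sqrt(n + 1) - 1) / n
    pts.append([-t] * n)
    return pts

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

for n in (2, 3, 7):
    pts = simplex_points(n)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            assert abs(dist(pts[i], pts[j]) - math.sqrt(2)) < 1e-12
```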
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1302395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
how to prove the sequence based definition of a closure in metric spaces Let $A$ be a subset of a metric space $\Omega$. By definition, the closure of $A$ is the smallest closed set that contains $A$. How to prove that alternativelly, the closure is given by
(1): $\bar A = \{a_* \vert a_* = \lim_{n\rightarrow\infty} a_n; \forall a_n \in A\}$
i.e. that $\bar A$ is given by limits of all converging sequences in $A$.
I know how to prove that
(2): A set $F\subset\Omega$ is closed iff the limits of all convergent sequences from $F$ are in $F$.
I feel that (1) and (2) should be related but how to prove (1)?
All the help appreciated.
EDIT: I was thinking about trying this, but it is too engineering-like in spirit, I am not sure whether it would work. Take the set of all possible converging sequences which are obtained by taking elements in $A$, and call it $S$.
$S = \{a_* \vert a_* \text{ as in Eq. (1)}\}$
Then there are two types of sequences, the ones that "saturate" i.e. where the same element starts repeating itself, call them $S_0$, and the ones that are "genuine" in the sense that all elements of the sequence are different, call it $S_*$. Thus
$S=S_0\cup S_*$
The saturating ones all represent $A$. Thus is should be that,
$S_0=A$
What is left, is the border $\partial A\equiv\bar A\setminus A$, and this border should be somehow made of genuine sequences, i.e. one should somehow prove that
$S_* = \partial A$
e.g. I presume by exploiting (2). Would a proof based on this strategy be possible?
|
Consider the set $X=\{x\,|\,a_n\to x;\;\forall n\, a_n\in A\}$.
Clearly $A\subset X$. You should be able to prove that $X\subset\Omega$ is closed. Now suppose that $Y\subset\Omega$ is closed, $Y\ne X$ and $A\subset Y\subset X$. Then there must be some $x\in X$ such that $x\notin Y$. Work from there to show that no such $Y$ exists and then you are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1302483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to solve the difference equation $u_n = u_{n-1} + u_{n-2}+1$ Given that:
$$
\begin{equation}
u_n=\begin{cases}
1, & \text{if $0\leq n\leq1$}\\
u_{n-1} + u_{n-2}+1, & \text{if $n>1$}
\end{cases}
\end{equation}
$$
How do you solve this difference equation?
Thanks
EDIT:
From @marwalix's answer:
$$
u_n=v_n-1
$$
$$
\begin{equation}
v_n=\begin{cases}
2, & \text{if $0\leq n\leq1$}\\
v_{n-1} + v_{n-2}, & \text{if $n>1$}
\end{cases}
\end{equation}
$$
Characteristic equation of $v_n$ is
$$
r^2=r+1
$$
Therefore,
$$
r=\frac{1\pm\sqrt{5}}{2}
$$
Therefore, the general solution for $v_n$ is
$$
v_n=A\left(\frac{1+\sqrt{5}}{2}\right)^n+B\left(\frac{1-\sqrt{5}}{2}\right)^n
$$
When $n=0$,
$$
2=A+B
$$
When $n=1$,
$$
2=A\left(\frac{1+\sqrt{5}}{2}\right)+B\left(\frac{1-\sqrt{5}}{2}\right)
$$
Therefore,
$$
A=\frac{5+\sqrt{5}}{5}
$$
$$
B=\frac{5-\sqrt{5}}{5}
$$
Therefore,
$$
u_n=\frac{5+\sqrt{5}}{5}\left(\frac{1+\sqrt{5}}{2}\right)^n+\frac{5-\sqrt{5}}{5}\left(\frac{1-\sqrt{5}}{2}\right)^n-1
$$
|
Write $u_n=v_n+a$ where $a$ is a constant. In that case the recurrence reads as follows
$$v_n+a=v_{n-1}+v_{n-2}+2a+1$$
So if we chose $a=-1$ we are left with
$$v_n=v_{n-1}+v_{n-2}$$
And we're back to a Fibonacci-type recurrence, and in this case we have $v_0=v_1=2$
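The closed form derived in the question's edit can be checked against the recurrence directly:

```python
import math

def u_closed(n):
    r5 = math.sqrt(5)
    phi, psi = (1 + r5) / 2, (1 - r5) / 2
    return (5 + r5) / 5 * phi ** n + (5 - r5) / 5 * psi ** n - 1

u = [1, 1]                                   # u_0 = u_1 = 1
for n in range(2, 20):
    u.append(u[n - 1] + u[n - 2] + 1)        # the original recurrence

for n in range(20):
    assert abs(u_closed(n) - u[n]) < 1e-6
```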
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1302599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Method for proving polynomial inequalities
Let $x\in\mathbb{R}$. Prove that
$\text{(a) }x^{10}-x^7+x^4-x^2+1>0\\
\text{(b) }x^4-x^2-3x+5>0$
Possibly it can be proved in a few different ways, but I have first tried to prove it by reducing it to a sum of squares. After many attempts, using trial and error, I got
$$x^4\left({x^3-\frac12}\right)^2+\left({\frac12x^2-1}\right)^2+\frac12x^4>0$$
for $\text{(a)}$. My question is: is there any easier method to prove this for any polynomial which is always positive? Also, I am wondering if there is any simpler method than reducing to a sum of squares.
|
Just another way for $(a)$ is using the AM-GMs:
$$\frac12x^{10}+\frac12x^4 \ge x^7, \quad \frac12x^4+\frac12 \ge x^2$$
$$\implies x^{10}-x^7+x^4-x^2+1 \ge \frac12+\frac12x^{10}>0$$
and similarly for $(b)$:
$$\frac12x^4+\frac12 \ge x^2, \quad \frac12x^4+\frac32+\frac32+\frac32 \ge 2\times 3^{3/4}x> 3x$$
Not always applicable, of course.
Also you may want to know that for univariate polynomials theoretically you can always express as a sum of squares, and while tedious, somehow the end result is more satisfying...
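A grid scan is of course not a proof, but it is a quick sanity check of both inequalities and of the lower bound $\frac12+\frac12x^{10}$ from the first AM-GM step:

```python
def p_a(x):
    return x**10 - x**7 + x**4 - x**2 + 1

def p_b(x):
    return x**4 - x**2 - 3*x + 5

for i in range(601):              # grid on [-3, 3]; beyond it, x^10/x^4 dominate
    x = -3 + 0.01 * i
    assert p_a(x) > 0 and p_b(x) > 0
    assert p_a(x) >= 0.5 + 0.5 * x**10 - 1e-9   # bound from the AM-GM step
```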
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1302670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Eigenvalues of different symmetric $(2n+1)\times(2n+1)$ matrix
I've looked at other similar posts but I could not find help in them.
|
The rank of your matrix is $2$, which implies that $\lambda=0$ is an eigenvalue with multiplicity at least $2n+1-\operatorname{rank}(A)=2n-1$.
Now, if $\lambda_1, \lambda_2$ are the remaining eigenvalues, since $\operatorname{tr}(A)=0$ is the sum of the eigenvalues, you get $\lambda_1+\lambda_2=0$. [I assume the row and column have the same index.] This yields $\lambda_2=-\lambda_1$.
Finally, the eigenvalues of $A^2$ are $0,0,.., 0, \lambda_1^2, \lambda_2^2$, thus
$$
tr(A^2)=2 \lambda_1^2 \,.
$$
note that
$$A^2 =\begin{bmatrix}
0 & 0 & 0 & ..& 1 & ... &0 \\
0 & 0 & 0 & ..& 1 & ... &0 \\
0 & 0 & 0 & ..& 1 & ... &0 \\
... & ... & ... & ...& ... & ... &... \\
1 & 1 & 1 & ..& 0 & ... &1 \\
... & ... & ... & ...& ... & ... &... \\
0 & 0 & 0 & ..& 1 & .. &0 \\
\end{bmatrix}
\begin{bmatrix}
0 & 0 & 0 & ..& 1 & ... &0 \\
0 & 0 & 0 & ..& 1 & ... &0 \\
0 & 0 & 0 & ..& 1 & ... &0 \\
... & ... & ... & ...& ... & ... &... \\
1 & 1 & 1 & ..& 0 & ... &1 \\
... & ... & ... & ...& ... & ... &... \\
0 & 0 & 0 & ..& 1 & .. &0 \\
\end{bmatrix}
=\begin{bmatrix}
1 & 1 & 1 & ..& 0 & ... &1 \\
1 & 1 & 1 & ..& 0 & ... &1 \\
1 & 1 & 1 & ..& 0 & ... &1 \\
... & ... & ... & ...& ... & ... &... \\
0 & 0 & 0 & ..& 2n& ... &0 \\
... & ... & ... & ...& ... & ... &... \\
1& 1 & 1 & ..& 0 & .. &1 \\
\end{bmatrix}$$
Therefore
$$2 \lambda_1^2=tr(A^2)=4n$$
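This can be confirmed without a linear-algebra library (a sketch; I take the matrix to be all zeros except that row and column $k$ consist of ones, with a zero diagonal, matching the $A^2$ computed above): the vector with $1$'s off position $k$ and $\sqrt{2n}$ at position $k$ is an eigenvector for $\lambda_1=\sqrt{2n}$, and $\operatorname{tr}(A^2)=4n$.

```python
import math

n = 4
size, k = 2 * n + 1, n
A = [[0.0] * size for _ in range(size)]
for i in range(size):
    if i != k:
        A[i][k] = A[k][i] = 1.0

lam = math.sqrt(2 * n)
v = [1.0] * size
v[k] = lam
Av = [sum(A[i][j] * v[j] for j in range(size)) for i in range(size)]
assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(size))  # A v = λ v

tr_A2 = sum(A[i][j] * A[j][i] for i in range(size) for j in range(size))
assert tr_A2 == 4 * n                        # tr(A^2) = 2 λ1^2 = 4n
```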
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1302766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
Why is cross product not commutative? Why, conceptually, is the cross product not commutative? Obviously I could simply take a look at the formula for computing cross product from vector components to prove this, but I'm interested in why it makes logical sense for the resultant vector to be either going in the negative or positive direction depending on the order of the cross operation. I don't have any formal experience in linear algebra, so I would appreciate if an answer would take that into account (I'm merely learning vectors as part of a 3D game math education).
|
The magnitude of the resulting vector is a function of the angle between the vectors you are multiplying. The key issue is that the angle between two vectors is always measured in the same direction (by convention, counterclockwise).
Try holding your left thumb and index finger in an L shape. Measuring counterclockwise, the angle between your thumb and index finger is roughly 90 degrees. If you measure from your index finger to your thumb (still must be done counterclockwise!) you have roughly a 270 degree angle.
One way to calculate a cross product is to take the determinant of a matrix whose top row contains the component unit vectors, and the next two rows are the scalar components of each vector. Changing the order of multiplication is akin to interchanging the two bottom rows in this matrix. It is a theorem of linear algebra that interchanging rows results in multiplying the determinant by -1.
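Both points are easy to see in code (a small sketch): swapping the operands negates the result, matching the determinant's row-interchange rule.

```python
def cross(u, v):
    # determinant expansion along the top row of [i j k; u; v]
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
assert cross(u, v) == (-3.0, 6.0, -3.0)
assert cross(v, u) == (3.0, -6.0, 3.0)        # anti-commutative
```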
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1302878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 2
}
|
Axiomatic definition of sin and cos? I am looking for a possibility to define sin/cos through algebraic relations, without involving power series, integrals, differential equations or geometric intuition.
Is it possible to define sin and cos through some axioms?
Like:
$$\sin 0 = 0, \cos 0 = 1$$
$$\sin \pi/2 = 1, \cos \pi/2 = 0$$
$$\sin^2 x + \cos^2 x = 1$$
$$\sin(x+2\pi n) = \sin x, \cos(x+2\pi n) = \cos x$$
$$\sin(-x)=-\sin x, \cos(-x) = \cos x \text{ for } x \in [-\pi;0]$$
$$\sin(x+y)=\sin x \cos y + \sin y \cos x$$
and be able to prove trigonometric school equations?
What additions are required to prove continuity and uniqueness of such functions and analysis properties like:
$$\lim_{x \to 0}\frac{\sin x}{x} = 1$$
or
$$\sin ' x = \cos x$$
or
$$\int \frac{dx}{\sqrt {1-x^2}} = \arcsin x$$
PS In Walter Rudin's book "Principles of Mathematical Analysis", sin and cos are introduced through power series.
In Solomon Feferman's book "The Number Systems: Foundations of Algebra and Analysis" I see a system derived from an integral definition.
|
From what you do not want that pretty much leaves functional equations.
There seems to be a system of two functional equations for sine and cosine: (link)
$$
\Theta(x+y)=\Theta(x)\Theta(y)-\Omega(x)\Omega(y) \\
\Omega(x+y)=\Theta(x)\Omega(y)+\Omega(x)\Theta(y)
$$
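Sanity check that the usual cosine and sine satisfy this system — the two equations are exactly the addition formulas:

```python
import math

T, O = math.cos, math.sin        # Θ = cos, Ω = sin
for x in (0.0, 0.7, 2.5):
    for y in (-1.2, 0.3, 4.0):
        assert abs(T(x + y) - (T(x) * T(y) - O(x) * O(y))) < 1e-12
        assert abs(O(x + y) - (T(x) * O(y) + O(x) * T(y))) < 1e-12
```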
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1303044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36",
"answer_count": 4,
"answer_id": 3
}
|
Can two perfect squares average to a third perfect square? My question is does there exist a triple of integers, $a<b<c$ such that $b^2 = \frac{a^2+c^2}{2}$
I suspect that the answer to this is no but I have not been able to prove it yet. I realize this is very similar to the idea of Pythagorean triples but I am not versed enough in this subject to try and modify the theory for this case. One simple observation is that in order to have any hope of this working is that $a$ and $c$ must be of the same parity. Furthermore if such a triple exists we can build an infinite sequence since $(2b)^2 = \frac{(2a)^2+(2c)^2}{2}$ if and only if $b^2 = \frac{a^2+c^2}{2}$
Any help on determining this to be either true or false would be much appreciated. I am hoping it does end up being impossible, so if someone does find a desired triple I would next move up to cubes instead of squares
Edit: Thank you for the comments, I have foolishly overlooked a simple example and see that there are many solutions to this based on the theory of diophantine equations. However this is unfortunate for me because i was hoping NOT to be able to solve this. This question arose while studying a certain type of graph labeling. What I desire is to be able to create a sequence, $S$, of arbitrary length (since each member of the sequence is to be a label of a vertex in the graph) such that for every $x \in S$, $|x-s_i| \neq |x-s_j|$ for $ i \neq j$. I was naively hoping that the sequence of squares was sufficient to satisfy this condition.
Further edit, I have found that the sequence $2^n$ works but it would be nice if I could find a polynomial sequence.
|
Parametric solution:
$$ a = |x^2 - 2 x y - y^2|, \ b = x^2 + y^2,\ c = x^2 + 2 x y - y^2 $$
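The identity behind this parametrization is $(u-v)^2+(u+v)^2=2(u^2+v^2)$ with $u=x^2-y^2$, $v=2xy$, so $a^2+c^2=2(x^2+y^2)^2=2b^2$; a quick exhaustive check over small parameters:

```python
for x in range(1, 20):
    for y in range(1, 20):
        a = abs(x * x - 2 * x * y - y * y)
        b = x * x + y * y
        c = x * x + 2 * x * y - y * y
        assert a * a + c * c == 2 * b * b
# e.g. x = 2, y = 1 gives (a, b, c) = (1, 5, 7): (1 + 49) / 2 = 25 = 5^2
```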
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1303173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 5,
"answer_id": 4
}
|
Find the sequence of partial sums for the series $a_n = (-1)^n$ Does this series converge? Find the sequence of partial sums for the series $$ \sum_{n=0}^\infty (-1)^n = 1 -1 + 1 -1 + 1 - \cdots$$
Does this series converge ?
My answer is that the sequence $= 0.5 + 0.5(-1)^n$. This makes a sequence that alternates between $1$ and $0$.
I know that the sequence does not converge since it is not monotone. But how can I prove this?
|
$$ \sum\limits_{n=0}^\infty (-1)^n = \lim\limits_{m\to\infty}\sum\limits_{n=0}^m (-1)^n $$
$$ = \lim\limits_{m\to\infty} \frac12\left((-1)^m + 1\right) $$
Note that the sequence
$$a_m=\frac12\left((-1)^m + 1\right) $$
is convergent (with limit $L$) if and only if
$$ \lim\limits_{m\to\infty} a_{2m} = \lim\limits_{m\to\infty} a_{2m+1} = L$$
However, in this case we have
$$ \lim\limits_{m\to\infty} \frac12\left((-1)^{2m}+1\right) =\frac22= 1 $$
And
$$ \lim\limits_{m\to\infty} \frac12\left((-1)^{2m+1}+1\right) =\frac02= 0 $$
Therefore $a_m$ is a divergent sequence.
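The oscillation of the partial sums is immediate to see:

```python
partial, sums = 0, []
for n in range(8):
    partial += (-1) ** n
    sums.append(partial)
assert sums == [1, 0, 1, 0, 1, 0, 1, 0]   # alternates forever, so no limit
```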
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1303265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Is there a constructive discontinuous exponential function? It is well-known that the only continuous functions $f\colon\mathbb R\to\mathbb R^+$ satisfying $f(x+y)=f(x)f(y)$ for all $x,y\in\mathbb R$ are the familiar exponential functions. (Prove $f(x)=f(1)^x$ successively for integers $x$, rationals $x$, and then use continuity to get all reals.)
The usual example to show that the identity $f(x+y)=f(x)f(y)$ alone doesn't characterize the exponentials requires the axiom of choice. (Define $f$ arbitrarily on the elements of a Hamel basis for $\mathbb R$ over $\mathbb Q$, then extend to satisfy the identity.)
Is there an explicit construction of a discontinuous function satisfying the identity? On the other hand, does the existence of such a function imply the axiom of choice or some relative?
|
Let $g(x)=\ln f(x)$, so that $g(x+y)=g(x)+g(y)$. The solutions to this equation are precisely the additive maps from $\mathbb R$ to $\mathbb R$ (equivalently, the $\mathbb Q$-linear maps), and they are in bijection with solutions to your original equation. Since $\mathbb R$ is an infinite dimensional $\mathbb Q$-vector space, there are many such maps beyond $x\mapsto cx$. They are determined by their values on a Hamel basis for $\mathbb R$ over $\mathbb Q$.
Does this answer your question?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1303395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Trigonometric relationship in a triangle If in a triangle $ABC$, $$3\sin^2B+4\sin A\sin B+16\sin^2A-8\sin B-20\sin A+9=0$$ find the angles of the triangle. I am unable to manipulate the given expression. Thanks.
|
Multiplying the equation by $16$, it can be rearranged as
$$(16\sin A+2\sin B-10)^2+44(\sin B-1)^2=0\ .$$
The only way this can happen is if
$$16\sin A+2\sin B-10=0\ ,\quad \sin B-1=0\ ;$$
these equations are easily solved to give
$$\sin A=\frac12\ ,\quad \sin B=1\ ,$$
and since $A,B$ are angles in a triangle we have
$$A=\frac\pi6\ ,\quad B=\frac\pi2\ ,\quad C=\pi-A-B=\frac\pi3\ .$$
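Both the rearrangement and the solution can be machine-checked: expanding shows the sum of squares equals $16$ times the original left-hand side (so the two vanish together), and $\sin A=\frac12$, $\sin B=1$ indeed solves the equation.

```python
import random

def lhs(a, b):   # a = sin A, b = sin B; the original left-hand side
    return 3*b*b + 4*a*b + 16*a*a - 8*b - 20*a + 9

def sos(a, b):   # the sum-of-squares form
    return (16*a + 2*b - 10) ** 2 + 44 * (b - 1) ** 2

random.seed(0)
for _ in range(200):
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    assert abs(sos(a, b) - 16 * lhs(a, b)) < 1e-9

assert lhs(0.5, 1.0) == 0.0 and sos(0.5, 1.0) == 0.0
```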
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1303454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What situations should $\oint$ be used? Sometimes people put a circle through the integral symbol: $\oint$
What does this mean, and when should we use this integration symbol?
|
This symbol is used to indicate a line integral along a closed loop.
If the loop is the boundary of a compact region $\Omega$, we also use the symbol
$
\int_{\partial \Omega}
$
This notation generalizes to the boundary of a region in an $n$-dimensional space, and if $\Omega$ is an orientable manifold we have the generalized Stokes' theorem
$$
\int_{\partial \Omega}\omega=\int_ \Omega d\omega
$$
which is a beautiful generalization of the fundamental theorem of calculus.
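As a concrete illustration (a numerical sketch added for this answer, not part of the original), one can check the planar case of this formula, Green's theorem, on the unit disk: with $P=-y/2$, $Q=x/2$, the boundary integral $\oint_{\partial\Omega}(P\,dx+Q\,dy)$ equals the area of $\Omega$, here $\pi$.

```python
import math

# Green's theorem: the closed line integral of (-y dx + x dy)/2 around the
# boundary of a region equals its area.  For the unit disk both sides are pi.
N = 100_000
line_integral = 0.0
for k in range(N):
    t0, t1 = 2*math.pi*k/N, 2*math.pi*(k+1)/N
    x0, y0 = math.cos(t0), math.sin(t0)
    x1, y1 = math.cos(t1), math.sin(t1)
    # trapezoid step of (x dy - y dx)/2 along one small boundary segment
    line_integral += ((x0 + x1)*(y1 - y0) - (y0 + y1)*(x1 - x0)) / 4

assert abs(line_integral - math.pi) < 1e-6
```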
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1303631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Questions on Kolmogorov Zero-One Law Proof in Williams Here is the proof of the Kolmogorov Zero-One Law and the lemmas used to prove it in Williams' Probability book:
Here are my questions:
*
*Why exactly are $\mathfrak{K}_{\infty}$ and $\mathfrak{T}$ independent? I get that $\mathfrak{K}_{\infty}$ is a countable union of $\sigma$-algebras that each are independent with $\mathfrak{T}$, but I don't see how exactly that means $\mathfrak{K}_{\infty}$ and $\mathfrak{T}$ are independent kind of like here.
*How exactly does one show that $\mathfrak{T} \subseteq \mathfrak{X}_{\infty}$?
That is, how exactly does one show that $\bigcap_{n \geq 0} \sigma(X_{n+1}, X_{n+2}, ...) \subseteq \sigma [\sigma(X_1) \cup \sigma(X_1, X_2) \cup ...]$?
Intuitively, I get it. I just wonder how to prove it rigorously.
What I tried:
Suppose $A \in \mathfrak{T}$. Then A is in the preimage of...I don't know. Help please?
|
On 1):
Let $A\in \mathfrak{K}_{\infty}$ and $B\in\mathfrak{T}$.
Then $A\in \mathfrak{X}_n$ for some $n$, and since $\mathfrak{X}_n$ and $\mathfrak{T}$ are independent $\sigma$-algebras we are allowed to conclude that $P(A\cap B)=P(A)\times P(B)$.
This proves that $\mathfrak{K}_{\infty}$ and $\mathfrak{T}$ are independent. Here $\mathfrak{K}_{\infty}$ is an algebra (hence closed under intersections) and $\mathfrak{T}$ is a $\sigma$-algebra. Based on this it can be shown (for instance via Dynkin's $\pi$-$\lambda$ theorem) that $\sigma(\mathfrak{K}_{\infty})$ and $\mathfrak{T}$ are independent $\sigma$-algebras.
On 2):
By definition $\mathfrak{T}\subseteq\sigma(X_1,X_2,\ldots)=\mathfrak{X}_{\infty}$: the tail $\sigma$-algebra is the intersection of the $\sigma$-algebras $\sigma(X_{n+1},X_{n+2},\ldots)$, each of which is contained in $\sigma(X_1,X_2,\ldots)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1303735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Number of points on an elliptic curve and its twist over $\mathbb{F}_p$. I have another probably very trivial question about elliptic curves. This wikipedia article gives the following formula $|E|+|E^d|=2p+2$ where $E$ is an elliptic curve over $\mathbb{F}_p$ and $E^d$ is its quadratic twist. At the same time the paragraph preceding this statement seems to argue why this should be true, making it seem easy, but I seem to be missing a step or two. I'll try to reconstruct the proof and finish up by asking about the step I'm unable to get. In all that follows I assume $p$ is a prime greater than $3$ and all my elliptic curves are over $\mathbb{F}_p$.
Let us have $E:y^2=x^3+ax+b$ and its quadratic twist $E^d:dy^2=x^3+ax+b$ where $d$ is a non-square in $\mathbb{F}_p$. Then for every $x\in\mathbb{F}_p$ we have that $x^3+ax+b$ or $d^{-1}(x^3+ax+b)$ is a square, and so there is a $y$ such that $(x,y)\in E$ or $(x,y)\in E^d$. Furthermore if $x^3+ax+b\neq 0$ then there exist exactly two such $y$ (namely $\pm\sqrt{x^3+ax+b}$, since if $y=-y$ we get $y+y=0$, and since $p>3$ we get $y=0$).
This gives us almost the result we want, except we might have up to $3$ values of $x$ where $x^3+ax+b=0$, which give only $1$ solution each. How do we fix this to get the wanted result?
|
If $x^3+ax+b = 0$, then the point $(x,0)$ lies on each of $E$ and $E^d$, for a total of $2$ solutions.
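The identity $|E|+|E^d|=2p+2$ (counting the point at infinity on each curve) is easy to check by brute force for small parameters; the values of $p$, $a$, $b$ below are illustrative choices, not from the original post:

```python
# Count points (including the point at infinity) on E: y^2 = x^3 + ax + b
# and on its quadratic twist d*y^2 = x^3 + ax + b over F_p, and verify
# |E| + |E^d| = 2p + 2.
p, a, b = 13, 2, 3

# pick a quadratic non-residue d mod p
squares = {x * x % p for x in range(p)}
d = next(x for x in range(2, p) if x not in squares)

def count(curve_d):
    # number of affine points of curve_d * y^2 = x^3 + ax + b, plus infinity
    n = 1  # the point at infinity
    for x in range(p):
        rhs = (x**3 + a*x + b) % p
        n += sum(1 for y in range(p) if (curve_d * y * y - rhs) % p == 0)
    return n

assert count(1) + count(d) == 2 * p + 2
```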
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1303807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Convergence of a subsequence in $(C(\mathbb{T}), \|\cdot\|_2)$ Problem: Define, $ \mathbb{T} := \mathbb{R}/{2\pi\mathbb{Z}} $.
Consider a sequence of functions $(g_n)_{n\in \mathbb{N}} \in C^4(\mathbb{T})$ such that, $ \sup_{n \in \mathbb{N}}(\| g_n \|_2 + \| g'_n \|_2 + \| g''_n \|_2) < \infty $. Prove that $(g_n)_{n\in \mathbb{N}}$ has a subsequence converging in $(C(\mathbb{T}), \|\cdot\|_2)$, where, $\|\cdot\|_2$ is the norm,
$\|f\|_2 = (\frac{1}{2\pi}\int_0^{2\pi}|f(t)|^2dt)^{1/2}$.
Background: First course in analysis. This part of the course covers Fourier series, Fejér's theorem, and convergence in the $L^\infty$ and $L^2$ norms. I know that the $C^4$ condition on the sequence of functions guarantees that the Fourier series approximations of each function $g_n$ and its derivatives converge in both norms.
I don't really see how to approach the problem, unless there's some way to show that the space is compact?
Thanks.
|
Yes, you need to prove compactness. This is an instance of Rellich–Kondrachov theorem, but presumably the point of the problem is to show compactness directly. To get uniform convergence, it suffices to have Fourier coefficients converging in $\ell^1(\mathbb{Z})$, so let's look at those.
Let $g_n^{(k)}$ be the $k$th coefficient of $g_n$. Then $g_n''$ has coefficients $-k^2 g_n^{(k)}$. So, the assumption that both $g_n$ and $g_n''$ are bounded in $L^2$ norm implies the existence of a constant $M$ such that
$$\sum_{k\in\mathbb{Z}}(k^2+1)^2|g_n^{(k)}|^2\le M$$
Hence, $|g_n^{(k)}|\le \sqrt M/(k^2+1)$. This is the main step of the proof: we got a summable dominating sequence $a_k=\sqrt M/(k^2+1)$. For the rest, only this fact matters.
There are various ways to finish. One could use diagonalization to get coordinate-wise convergent subsequence and apply the dominated convergence theorem. Or one could prove the compactness of the set $A=\{c\in \ell^1: |c_k|\le a_k\ \forall k\}$ directly. Indeed, it is obviously closed in $\ell^1$, hence complete. To show total boundedness: given $\epsilon>0$, pick $N$ such that $\sum_{|k|>N}a_k<\epsilon/2$, and observe that $A$ is contained in the $\epsilon/2$-neighborhood of the set
$$A_N=\{c\in \ell^1: |c_k|\le a_k\ \forall k,\text{ and } c_k=0 \text{ for }|k|>N\}$$
Since $A_N$ is compact, it admits a finite $\epsilon/2$ net. This net serves as an $\epsilon$-net of $A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1303897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Differentiating both sides of an inequality with monotonic functions If $f(x)\le g(x)$ for all real $x$ for monotonic functions $f$ and $g$ (say, both increasing), does it follow that $f'(x)\le g'(x)$?
(Note: I've seen several questions asking the same thing without the condition of monotonicity, but the counterexamples given always involve a non-monotonic function, and it seems to me that this condition might be sufficient; I haven't been able to come up with any counterexamples myself.)
If not, is the stronger condition that $f^{(n)}(x)$ and $g^{(n)}(x)$ are monotone for either all natural $n$ or all $n\le N$ for some $N$ sufficient?
|
Another, more explicit, counterexample:
$$
f(x)=x-\frac\pi2+\arctan x,\quad g(x)=x.
$$
$$
f'(x)=1+\frac{1}{1+x^2},\quad g'(x)=1.
$$
Both functions are increasing and $f(x)\le g(x)$ for all $x$ (since $\arctan x<\frac\pi2$), yet $f'(x)>g'(x)$ everywhere.
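A short numerical check (an editorial sketch) confirms the counterexample: $f$ stays below $g$, while $f'$ exceeds $g'$ at every point.

```python
import math

f = lambda x: x - math.pi/2 + math.atan(x)
g = lambda x: x
fp = lambda x: 1 + 1/(1 + x*x)   # f'(x)
gp = lambda x: 1.0               # g'(x)

for x in (-100.0, -1.0, 0.0, 1.0, 100.0):
    assert f(x) <= g(x)    # f <= g everywhere (arctan x < pi/2)
    assert fp(x) > gp(x)   # yet f' > g' everywhere
```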
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1304089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Distance between two points on a sphere. Say there is a sphere on which there is an ant and the ant wants to go to another point. The ant can't definitely travel through the sphere. So it has to travel along a curve. My question is what is the least distance between the two points i.e. distance between 2 points on a sphere.
|
If $a = (a_{1}, a_{2}, a_{3})$ and $b = (b_{1}, b_{2}, b_{3})$ are points on a sphere of radius $r > 0$ centered at the origin of Euclidean $3$-space, the distance from $a$ to $b$ along the surface of the sphere is
$$
d(a, b) = r \arccos\left(\frac{a \cdot b}{r^{2}}\right)
= r \arccos\left(\frac{a_{1}b_{1} + a_{2}b_{2} + a_{3}b_{3}}{r^{2}}\right).
$$
To see this, consider the plane through $a$, $b$, and the origin. If $\theta$ is the angle between the vectors $a$ and $b$, then $a \cdot b = r^{2} \cos\theta$, and the short arc joining $a$ and $b$ has length $r\theta$.
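The formula translates directly into code; the helper below is an illustrative sketch (the function name and test points are my own choices), with a clamp to guard against floating-point rounding pushing the cosine argument outside $[-1,1]$.

```python
import math

def great_circle_distance(a, b, r):
    """Distance along a sphere of radius r (centered at the origin)
    between surface points a and b, via d = r * arccos(a.b / r^2)."""
    dot = sum(ai * bi for ai, bi in zip(a, b))
    return r * math.acos(max(-1.0, min(1.0, dot / (r * r))))

# quarter of a great circle: from the north pole to a point on the equator
r = 2.0
d = great_circle_distance((0, 0, r), (r, 0, 0), r)
assert abs(d - math.pi * r / 2) < 1e-12
```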
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1304169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
}
|
solving second order non-homogeneous differential equation 3 Help me to solve this non-homogeneous differential equation :
$
y''+y=\tan x
$
$
0<x<\dfrac{\pi}{2}
$
I could reach $y_{c}=c_{1}\cos x + c_{2}\sin x$, but the particular solution is where I stopped.
|
Knowing a solution of the homogeneous ODE, for example $c\cos(x)$, replace the constant $c$ by a function $Y(x)$, i.e. make the change of function $y=Y(x)\cos(x)$ (reduction of order). This reduces the second-order ODE to a first-order ODE that is easier to solve.
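This reduction (or, equivalently, variation of parameters) leads to the standard particular solution $y_p=-\cos x\,\ln(\sec x+\tan x)$ on $(0,\pi/2)$. That closed form is a well-known result quoted here for illustration; a quick finite-difference check (a sketch) confirms it satisfies $y''+y=\tan x$:

```python
import math

def y_p(x):
    # candidate particular solution on (0, pi/2)
    sec, tan = 1 / math.cos(x), math.tan(x)
    return -math.cos(x) * math.log(sec + tan)

def second_derivative(f, x, h=1e-5):
    # central finite-difference approximation of f''(x)
    return (f(x + h) - 2*f(x) + f(x - h)) / h**2

for x in (0.3, 0.7, 1.2):
    residual = second_derivative(y_p, x) + y_p(x) - math.tan(x)
    assert abs(residual) < 1e-5
```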
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1304354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Are there different ways to find (the) residue(s) for a function with one simple pole vs. a function with several simple poles? Regarding evaluation of residuals for functions with simple poles.
Let's say $m$ represents the order of the pole, then in order to find the residual at each pole/the pole (if only one pole) we have the following equation:
$$
\text{where } m=1,2,\ldots\qquad\underset{z=z_0}{\operatorname{Res}} f(z) = \frac 1 {(m-1)!} \lim_{z\to z_0} \left( \frac{d^{m-1}}{dz^{m-1}}[(z-z_0)^m f(z)] \right)
$$
If the order of the pole $m$ is equal to $1$ then the above equation reduces to:
$$
\text{simple pole }(m=1) \qquad \underset{z=z_0}{\operatorname{Res}} f(z) = \lim_{z\to z_0} (z-z_0) f(z).
$$
Now I've seen the following equation for finding the residue at a simple pole for a function $f(z)$ given by:
$$
f(z) = \frac{p(z)}{q(z)} \qquad \underset{z=z_0}{\operatorname{Res}} f(z) = \frac{p(z_0)}{q'(z_0)}.
$$
I'm eager to get this clarified once and for all, because it seems a bit confusing, seeing different ways to solve the below in different places, just wondering if it doesn't matter or if it actually does make a difference.
Now my question is if the function you are taking into account has one simple pole, would you use the same equation as in the case of a function with several simple poles?
In other words, is there any difference for finding residue(s) when you have one simple pole (if only one pole) versus a function with several simple poles, (and a third scenario would be a function with several poles, only one of which is simple)?
Or will any of the equations above return the same result in any of the mentioned cases?
|
You should be aware that a simple pole is a point at which the Laurent series of $f$ is
$$
f(z)=\frac{a_{-1}}{z-z_0}+a_0+a_1(z-z_0)+a_2(z-z_0)^2+\ldots
$$
The residue is $a_{-1}$, the coefficient of $1/(z-z_0)$.
Now, we have
$$
(z-z_0)f(z)=a_{-1}+a_0(z-z_0)+a_1(z-z_0)^2+a_2(z-z_0)^3+\ldots
$$
that gives $a_{-1}$ when you do the limit $z\to z_0$.
When $f$ is a rational function, ratio of two polynomials $p$ and $q$, with a single pole, $q$ should be of the form
$$
q(z)=(z-z_0)Q_1(z)
$$
with $Q_1(z_0)\neq0$, and
$$
q'(z)=Q_1(z)+(z-z_0)Q_1'(z)
$$
so that the limit for $z\to z_0$ is $q'(z_0)=Q_1(z_0)$.
At this point
$$
(z-z_0)f(z)=(z-z_0)\frac{p(z)}{(z-z_0)Q_1(z)}=\frac{p(z)}{Q_1(z)}
$$
and for $z\to z_0$ we have
$$
\frac{p(z_0)}{Q_1(z_0)}=\frac{p(z_0)}{q'(z_0)}
$$
I add that you can use the same formula at each simple pole separately, also when the function has more than one simple pole.
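As a numerical illustration (an editorial sketch with an example not from the original post), the limit form and the $p(z_0)/q'(z_0)$ form agree for $f(z)=1/(z^2+1)$ at the simple pole $z_0=i$, where the residue is $1/(2i)=-i/2$:

```python
# Check Res_{z=i} 1/(z^2+1) two ways: the limit of (z - z0) f(z),
# and p(z0)/q'(z0) with p(z) = 1, q(z) = z^2 + 1.
z0 = 1j

def f(z):
    return 1 / (z**2 + 1)

h = 1e-6
limit_form = h * f(z0 + h)   # (z - z0) f(z) evaluated close to the pole
pq_form = 1 / (2 * z0)       # p(z0)/q'(z0), since q'(z) = 2z

assert abs(limit_form - pq_form) < 1e-5
assert abs(pq_form - (-0.5j)) < 1e-12
```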
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1304444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$\sin x + c_1 = \cos x + c_2$ While working a physics problem I ran into a seemingly simple trig equation I couldn't solve. I'm curious if anyone knows a way to solve the equation:
$\sin(x)+c_1 = \cos(x)+c_2$
(where $c_1$ and $c_2$ are constants) for $x$ without using Newton's method or some other form of approximation.
|
$$
A\sin x + B\cos x = \sqrt{A^2+B^2}\left( \frac A {\sqrt{A^2+B^2}}\sin x+ \frac B {\sqrt{A^2+B^2}}\cos x \right)
$$
Notice that the sum of the squares of the coefficients above is $1$; hence they are the coordinates of some point on the unit circle; hence there is some number $\varphi$ such that
$$
\cos\varphi = \frac A {\sqrt{A^2+B^2}}\quad\text{and}\quad\sin\varphi=\frac B {\sqrt{A^2+B^2}}.
$$
And notice that $\tan\varphi=\dfrac B A$, so finding $\varphi$ is computing an arctangent.
Now we have
$$
A\sin x + B\cos x = \sqrt{A^2+B^2}(\cos\varphi \sin x+ \sin\varphi \cos x) = \sqrt{A^2+B^2} \sin(x+\varphi).
$$
Apply this to $\sin x - \cos x$, in which $A=1$ and $B=-1$, and you get
$$
\sqrt{2} \sin(x+\varphi) = c_2-c_1.
$$
So you just need to find an arcsine and an arctangent. And the arctangent is easy in this case.
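A quick numerical check (a sketch added for illustration) of the recipe, with $A=1$, $B=-1$ and an arbitrary value of $c=c_2-c_1$:

```python
import math

# Solve sin x - cos x = c in closed form: sin x - cos x = sqrt(2) sin(x + phi)
# with cos(phi), sin(phi) proportional to A = 1, B = -1, so phi = -pi/4.
c = 0.5
phi = math.atan2(-1, 1)              # phi = -pi/4
x = math.asin(c / math.sqrt(2)) - phi

assert abs((math.sin(x) - math.cos(x)) - c) < 1e-12
```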
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1304520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
What are the basic rules for manipulating diverging infinite series? This is something that I played around with in Calc II, and it really confuses me:
$s = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + \ldots = \infty$
$s - s = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + \ldots $
$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - (1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + \ldots) $
$s - s = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + \ldots$
$0s \ \ \ \ \ = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + \ldots = \infty$
$\therefore 0 (\infty) = \infty$
I'm aware the $0\cdot\infty$ is indeterminate, so what am I doing wrong? What are the basic rules for manipulating these diverging infinite series?
|
Basically, you don't.
The only way to manipulate them is if you know precisely what you're doing, using renormalization techniques or analytic continuation or something similar, but those techniques won't be taught for a while and come with specific rules of their own.
Long story short, you don't manipulate them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1304625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Complex analysis, residues
Find the residue at $z=0$ of $f(z)=\dfrac{\sinh z}{z^4(1-z^2)}$.
I did
\begin{align}
\frac{\sinh z}{z^4(1-z^2)} & =\frac{1}{z^4}\left[\left(\sum_{n=0}^\infty \frac{z^{2n+1}}{(2n+1)!}\right)\left(\sum_{n=0}^\infty z^{2n}\right)\right] \\[8pt]
& =\frac{1}{z^4}\left[\left(z+\frac{z^3}{6}+\cdots\right)(1+z^2+z^4+\cdots)\right]
\end{align}
Then $\operatorname{Res}_{z=0}=1+1+\frac{1}{6}=\frac{13}{6}$
But the solutions say that $\operatorname{Res}_{z=0}=\frac{7}{6}$
Anyone can help me?
|
The residue at $z=0$ is the coefficient of the $\frac1z$ term in the expansion
$$
\frac{\sinh(z)}{z^4(1-z^2)}=\frac1{z^4}\left(z+\frac{z^3}6+\frac{z^5}{120}+\dots\right)\left(1+z^2+z^4+\dots\right)
$$
That is the coefficient of $z^3$ term in the expansion
$$
\left(\color{#C00000}{z}+\color{#00A000}{\frac{z^3}6}+\frac{z^5}{120}+\dots\right)\left(\color{#00A000}{1}+\color{#C00000}{z^2}+z^4+\dots\right)
$$
which is
$$
\color{#C00000}{z\cdot z^2}+\color{#00A000}{\frac{z^3}6\cdot1}=\frac76z^3
$$
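The coefficient extraction can be reproduced with exact rational arithmetic (an illustrative sketch): multiply the truncated Taylor series of $\sinh z$ and $1/(1-z^2)$ and read off the coefficient of $z^3$.

```python
from fractions import Fraction
from math import factorial

# Taylor coefficients (in z) up to degree 5:
# sinh(z) = z + z^3/3! + z^5/5! + ...,  1/(1 - z^2) = 1 + z^2 + z^4 + ...
N = 6
sinh_c = [Fraction(1, factorial(k)) if k % 2 == 1 else Fraction(0) for k in range(N)]
geom_c = [Fraction(1) if k % 2 == 0 else Fraction(0) for k in range(N)]

# coefficient of z^3 in sinh(z)/(1 - z^2), i.e. the coefficient of 1/z
# in f(z) = sinh(z) / (z^4 (1 - z^2)): the residue at 0
res = sum(sinh_c[j] * geom_c[3 - j] for j in range(4))
assert res == Fraction(7, 6)
```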
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1304709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Breaking down the equation of a plane Could someone explain the individual parts of a plane equation?
For example:
$3x + y + z = 7$
When I see this I can't imagine what it's supposed to look like.
|
Consider the collection of vectors $\{\vec x:\vec x\perp (3\vec i+\vec j+\vec k)\}$. The endpoints of these form a plane through the origin. If you shift this plane upwards $7$ units, you get the plane in question.
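Concretely (a small sketch with two sample points chosen for illustration), the difference of any two points on the plane $3x+y+z=7$ is perpendicular to the normal vector $(3,1,1)$:

```python
# n = (3, 1, 1) is normal to the plane 3x + y + z = 7: the difference of
# any two points on the plane is perpendicular to n.
n = (3, 1, 1)
p1 = (1, 2, 2)   # 3*1 + 2 + 2 = 7
p2 = (2, 0, 1)   # 3*2 + 0 + 1 = 7

diff = tuple(a - b for a, b in zip(p1, p2))
dot = sum(a * b for a, b in zip(n, diff))
assert dot == 0
```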
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1304823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Why does $\frac{49}{64}\cos^2 \theta + \cos^2 \theta$ equal $\frac{113}{64}\cos^2 \theta $? I have an example:
$$ \frac{49}{64}\cos^2 \theta + \cos^2 \theta = 1 $$
Then what happens next:
$$ \frac{113}{64}\cos^2 \theta = 1 $$
Where has the other cosine disappeared to? What operation happened here? Any hints please.
|
$\dfrac{49}{64}\cos^2 \theta + \cos^2 \theta = \left(\dfrac{49}{64} +1\right) \cos^2 \theta = \dfrac{113}{64}\cos^2 \theta$
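The step is just factoring out the common $\cos^2\theta$; a tiny check with exact fractions (a sketch):

```python
from fractions import Fraction
import math

# factoring out cos^2(theta): (49/64 + 1) cos^2(theta) = (113/64) cos^2(theta)
assert Fraction(49, 64) + 1 == Fraction(113, 64)

theta = 0.8  # arbitrary test angle
c2 = math.cos(theta) ** 2
assert abs((49/64) * c2 + c2 - (113/64) * c2) < 1e-15
```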
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1304916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Is $\sqrt{2t \log{\log{\frac{1}{t}}}}$ increasing for $t\in(0,a)$, for a suitable $a>0$? I believe it's obviously, but I tried a lot, and have no clue how can I show that
$$\sqrt{2t \log{\log{\frac{1}{t}}}}$$
is increasing for $t\in(0,a)$ for a suitable $a>0$?
|
Taking the derivative of $f(t)=\sqrt{2t\ln{\ln{\dfrac{1}{t}}}}$ with respect to $t$:
$$f'(t)=\dfrac{1}{2\sqrt{2t\ln{\ln{\dfrac{1}{t}}}}}\cdot\left(2\ln{\ln{\dfrac{1}{t}}}+2t\cdot\dfrac{1}{\ln{\dfrac{1}{t}}}\cdot t\cdot\dfrac{-1}{t^2}\right)$$
$$\therefore f'(t)=\dfrac{\ln{\ln{\frac{1}{t}}}-\dfrac{1}{\ln{\frac{1}{t}}}}{\sqrt{2t\ln{\ln{\frac{1}{t}}}}}$$
$f(t)$ is increasing when $f'(t)>0$:
$$\large 0<t< e^{-e^{\mathcal{W}(1)}}: f'(t)>0$$
where $\mathcal{W}$ is the Lambert W function, which gives (numerically):
$$\large e^{-e^{\mathcal{W}(1)}}\simeq 0.17149128425$$
Working out $f'(t)=0$:
$$\ln{\ln{\frac{1}{t}}}-\dfrac{1}{\ln{\frac{1}{t}}}=0\Longleftrightarrow\ln{\dfrac{1}{\ln{\frac{1}{t}}}}+\dfrac{1}{\ln{\frac{1}{t}}}=0\Longleftrightarrow\dfrac{1}{\ln{\frac{1}{t}}}\cdot e^{\frac{1}{\ln{\frac{1}{t}}}}=1\Longleftrightarrow\dfrac{1}{\ln{\frac{1}{t}}}=\mathcal{W}(1)$$$$\large\therefore t=e^{-\frac{1}{\mathcal{W}(1)}}=e^{-e^{\mathcal{W}(1)}}\simeq 0.17149128425$$
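The critical value can be checked numerically without a CAS (an editorial sketch): $W(1)$ is the omega constant, the fixed point of $w=e^{-w}$, and at $t_\ast=e^{-e^{W(1)}}$ the numerator of $f'(t)$ vanishes.

```python
import math

# W(1) is the omega constant: w * e^w = 1, equivalently w = e^{-w}.
w = 0.5
for _ in range(200):
    w = math.exp(-w)

t_star = math.exp(-math.exp(w))          # t* = e^{-e^{W(1)}}
assert abs(t_star - 0.17149128425) < 1e-8

# at t* the numerator of f'(t) vanishes
num = math.log(math.log(1 / t_star)) - 1 / math.log(1 / t_star)
assert abs(num) < 1e-9
```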
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1305102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
}
|
Rolling two dice, what is the probability of getting 6 on either of them, but not both? Rolling two dice, what is the probability of getting 6 on one of them, but not both?
|
If the first die shows $6$ and the second shows anything but $6$ that can happen $5$ ways (6-1, 6-2, 6-3, 6-4, 6-5). Similarly if the second die shows $6$ and the first anything but $6$ that can happen another $5$ ways (1-6, 2-6, 3-6, 4-6, 5-6).
There are $36$ possible rolls of the two dice in total.
Hence the probability of exactly one die showing $6$ is
$${10 \over 36} = {5 \over 18}$$
Alternatively:
If
*
*$A$ is the event that the first die is $6$ and the second anything
*$B$ is the event that the second die is $6$ and the first anything
then the desired probability is
$$P(\text{One 6}) = P(\text{at least one 6}) - P(\text{two 6s}) $$ $$ \hspace{5 mm} = P(A \cup B) - P(A \cap B)$$ $$ = {11 \over 36} - {1 \over 36} \hspace{15 mm}$$ $$ = {10 \over 36} \hspace{27 mm}$$
This is also equal to
$$P(\text{ first is 6 but not the second }) + P(\text{ second is 6 but not the first })$$
$$ = (P(A) - P(A \cap B)) + (P(B)- P(A \cap B))$$
$$ = P(A) + P(B) - 2P(A\cap B)$$
$$= {6 \over 36} + {6 \over 36} - 2 \cdot {1 \over 36}$$
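Brute-force enumeration of all $36$ outcomes confirms the count (a small sketch added for illustration):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))
# exactly one die shows 6 (XOR of the two conditions)
exactly_one_six = [(a, b) for a, b in outcomes if (a == 6) != (b == 6)]

assert len(outcomes) == 36
assert len(exactly_one_six) == 10
assert Fraction(len(exactly_one_six), len(outcomes)) == Fraction(5, 18)
```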
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1305186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Homogeneous Markov chains with general state space I found in the book Markov Chains by Revuz the following definition of a Markov chain. In the following $(X_n)_{n \in \mathbb{N}}$ is a sequence of random variables on a probability space $(\Omega,\mathcal{F},P_0)$ and the range is a measurable space $(E,\Sigma)$, and $\mathcal{F}_n^0 =\sigma(X_m , m \leq n)$
Definition.1 $(X_n)_{n \in \mathbb{N}}$ is a Markov chain if for every
$B \in \Sigma$ we have $$P_0 [ X_m \in A \mid \mathcal{F}_n^0 ] = P_0[
X_m \in A \mid X_n] $$ for all $m\geq n$
For a transition probability $P : E \times \Sigma \to [0,1]$, i.e $x \to p(x,A)$ is measurable for all $A\in \Sigma$ and $A\to p(x,A)$ is a probability measure for all $x\in E$ he defines a family of transition probabilities $(P_n)_{n \geq 0}$ by $$ P_{n} (x,A) = \int_E p(y,A) \, P_{n-1}(x,dy) $$
and $$ P_0 := P.$$ Then he defines a homogeneous Markov chain by
Definition 2. $(X_n)_{n \in \mathbb{N}}$ is called a homogeneous
Markov chain with respect to the transition probability $P$ if $$ P_0[X_m
\in A \mid \mathcal{F}_n^0] = P_{m-n-1}(X_n, A)\quad\text{for all } m>n.$$
I would like to know if $(X_n)$ is a Markov chain with
$$ P_0[X_n \in B \mid X_0 \in A] = P_0[X_{n+m} \in B \mid X_m \in A] $$
for all $n,m\geq 0$ and $A,B \in \Sigma$, does this imply that one can find a transition probability $P$ such that $(X_n)$ is a homogeneous Markov chain with respect to $P$?
|
Thankfully, YES :)
You can very simply define the needed transition probability by:
\begin{equation}
P : x, \mathcal{A} \mapsto P_0[X_1 \in \mathcal{A} | X_0=x]
\end{equation}
You can then deduce from the definition of the family $(P_n)$ that :
\begin{equation}
P_n ( x, \mathcal{A} ) = P_0[X_{n+1} \in \mathcal{A} | X_0=x]
\end{equation}
Then verify that your Markov chain is homogeneous with respect to $(P_n)$, using the definition of a Markov chain together with the stationarity property you assumed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1305247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Decomposition into irreducibles of a Noetherian topological space I'm struggling with the proof that says a Noetherian topological space $X$ is the finite union of closed irreducible subsets. In particular with this part:
First observe that every nonempty set of closed subsets of $X$ has a minimal element, since otherwise it would contain an infinite strictly descending chain.
I get that a chain $Y_1\supsetneq Y_2\supsetneq\ldots$ must terminate since $X$ is Noetherian. But why is it not possible to have a nonempty set of closed subsets in which $Y_i\nsupseteq Y_j$ for all $i,j$?
|
Suppose that we have a family of closed sets $\mathcal{F}$ without a minimal element. Pick $F_0 \in \mathcal{F}$. Then some $F_1 \in \mathcal{F}$ exists such that $F_1 \subsetneq F_0$, as otherwise $F_0$ would have been minimal. As $F_1$ is not minimal either, some $F_2 \in \mathcal{F}$ exists with $F_2 \subsetneq F_1$ as well, and by recursion we get a sequence $(F_n)$ in $\mathcal{F}$ with $F_{n+1} \subsetneq F_n$ for all $n$. But this sequence contradicts $X$ being Noetherian. So such a family cannot exist. (Note that incomparability is no obstruction here: the argument only needs that every non-minimal element has some strictly smaller element below it.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1305329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Why do all parabolas have an axis of symmetry? And if that's just part of the definition of a parabola, I guess my question becomes why is the graph of any quadratic a parabola?
My attempt at explaining:
The way I understand it after some thought is that any quadratic can be written by completing the square as a perfect square + a constant, for ex:
$f(x) = x^2 + x$ can be written as $f(x) = (x+\frac{1}{2})^2 - \frac{1}{4}$.
So essentially, any quadratic is a displaced version of $f(x) = x^2$, and it's pretty obvious why $f(x) = x^2$ has an axis of symmetry and why it's at the vertex.
Is my reasoning correct? And if you have a different way to think about it and explain it, whether geometric, algebraic or other, I would love to see it.
|
Suppose $f(x)=(x+a)^2+b$.
Then we have $$f(-a-x)=f(-a+x)=x^2+b$$ for all $x\in \mathbb R$, so the graph is symmetric about the vertical line $x=-a$.
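A quick numeric check of this symmetry for the question's own example $f(x)=x^2+x=(x+\frac12)^2-\frac14$ (a small sketch):

```python
# Symmetry of f(x) = (x + a)^2 + b about the line x = -a, checked numerically
a, b = 0.5, -0.25   # completing the square for f(x) = x^2 + x

def f(x):
    return (x + a)**2 + b

for x in (0.1, 1.0, 2.5, 7.0):
    assert abs(f(-a - x) - f(-a + x)) < 1e-12   # mirror symmetry about x = -a
    assert abs(f(x) - (x**2 + x)) < 1e-12       # same function as x^2 + x
```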
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1305386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 3
}
|
Book for Module Theory I want a book to cover the following topics in Module Theory:
Modules, Submodules, Quotient modules, Morphisms Exact sequences, three lemma, four lemma, five lemma, Product and Co products, Free modules, Projective modules, Injective modules, Direct sum of Projective modules, Direct product of Injective modules.
Divisible groups, Embedding of a module in an injective module, tensor product of modules, Noetherian and Artinian Modules, Finitely generated modules, Jordan Holder Theorem, Indecomposable modules, Krull Schmidt theorem, Semi simple modules, homomorphic images of semi simple modules.
I want a book that covers these topics. It is not that a single book should contain all of these but it would be better if it is so. The book should be interesting, have some good problems to work on (would be better if it is provided with hints to hard problems).
Comments and suggestions are needed.
|
As it was suggested before, Module Theory: An Approach to Linear Algebra by T. S. Blyth is an awesome title which covers almost every basic topic of Module theory in a very elegant, clear and efficient way. It is hands down my favorite text in the subject, but unfortunately it has been long out of print and therefore it is expensive and hard to obtain.
I contacted T. S. Blyth a few months ago looking at the possibility of a new edition, or at least, for a Dover reprint or something like that. Recently, Tom told me that he just finished an electronic edition, and I inmediately helped him with a careful and exhausting typo hunting.
With great joy, I shall let you all know that this venerable text is now available for free!
Download the book here!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1305486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Is there a way to write this expression differently: $\arctan 1/(1+n+n^2)$? $$ \arctan\left(\frac{1}{1+n+n^2}\right)$$
My professor wrote this as
$$\arctan(n+1) - \arctan(n)$$
I don't understand how this expression is right?
|
We have $$\tan^{-1}\left( \frac{1}{1+n+n^2}\right)$$ $$=\tan^{-1}\left( \frac{(n+1)-n}{1+n(n+1)}\right)$$ Now, let $\color{blue}{n+1=\tan\alpha}$ ($\implies \color{blue}{\alpha=\tan^{-1}(n+1)}$) & $\color{blue}{n=\tan\beta}$ ($\implies \color{blue}{\beta=\tan^{-1}(n)}$), with $\alpha>\beta$.
Thus, by substituting the corresponding values in the above expression, we get $$\tan^{-1}\left( \frac{\tan \alpha-\tan\beta}{1+\tan\alpha\tan \beta}\right)$$ $$=\tan^{-1}\left( \tan(\alpha-\beta)\right)$$$$=\alpha-\beta $$ $$=\color{green}{\tan^{-1}(n+1)-\tan^{-1}(n)}$$
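The identity is easy to confirm numerically for nonnegative integers $n$ (a sketch added for illustration):

```python
import math

# arctan(1/(1 + n + n^2)) = arctan(n+1) - arctan(n), checked numerically
for n in range(50):
    lhs = math.atan(1 / (1 + n + n * n))
    rhs = math.atan(n + 1) - math.atan(n)
    assert abs(lhs - rhs) < 1e-12
```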
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1305576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
}
|
How to show that $\lim_{x \to 0} x^p (\ln x)^r = 0$ I want to show that $\lim_{x \to 0} x^p (\ln x)^r = 0$ if $p > 1$ and $r \in \mathbb{N}$.
To show this, I wanted to use that $\lim_{x \to 0} x \ln x = 0$, and in fact if $p \geq r$ we can write that
$$\lim_{x \to 0} x^p (\ln x)^r = \lim_{x \to 0} x^{p-r}((x \ln x)^r)$$
and both terms of the product goes to 0 as $x \to 0$. Thus this part is okay.
I have trouble if $p < r$. I tried to do the same, but I have no result so far. Maybe by considering $(x \ln x)^p (\ln x)^{r-p}$ and then setting $y = \ln x$. Thus we obtain
$$
\begin{align*}
\lim_{x \to 0} (x \ln x)^p (\ln x)^{r-p}
&= \lim_{e^y \to 0} (e^y \cdot y)^p \cdot y^{r-p} \\
&= \lim_{y \to - \infty} (e^y \cdot y)^p \cdot y^{r-p} \\
&= \lim_{y \to - \infty} e^{py} y^r\\
&= 0
\end{align*}
$$
because the exponential goes "faster" to $0$ than a polynom diverges to infinity? Does it seem correct to you?
Edit:
I will assume that we know that for $r \in \mathbb{N}$,
$$\lim_{y \to \infty} \frac{y^r}{e^y} = 0.$$
Step 1: I will prove that for $p > 1$ and $r \in \mathbb{N}$ (for my exercise),
$$\lim_{x \to \infty} \frac{\ln^r x}{x^p} = 0.$$
Proof: Let $y = \ln x$. Then $x = e^y$ and thus
$$
\begin{align*}
\lim_{x \to \infty} \frac{\ln^r x}{x^p}
&= \lim_{e^y \to \infty} \frac{y^r}{e^{py}} & e^{px} \geq e^x \text{ as } p> 1 \text{ and } x \geq 0\\
&\leq \lim_{y \to \infty} \frac{y^r}{e^y}\\
&= 0.
\end{align*}
$$
Step 2: Now I can prove that under the same hypothesis for $p$ and $r$,
$$\lim_{x \to 0} x^p \ln^r x = 0.$$
Proof: Let $y = \frac{1}{x}$, thus $x = \frac{1}{y}$ and
$$
\begin{align*}
\lim_{x \to 0} x^p \ln^r x &= \lim_{\frac1y \to 0} \frac{1}{y^p} \cdot \ln \left( \frac{1}{y} \right)^r\\
&= \lim_{y \to \infty} \frac{- \ln^r y}{y^p}\\
&= 0,
\end{align*}
$$
which concludes.
I took $p > 1$ and $r \in \mathbb{N}$ because it is only what I needed for my exercise, but if you feel like editing and proving this for all $p, r > 0$, go ahead.
|
First let us examine the following. Given \begin{equation} l > 0 \end{equation} we prove that
\begin{equation}
\lim_{x \to 0} \ln(x)\,x^l = 0.
\end{equation}
Doing this comes straight from L'Hospital's rule. We put the limit in the indeterminate form and get the following.
\begin{equation}
\lim_{x \to \infty}\frac{\ln(1/x)}{x^l}= \lim_{x \to \infty}\frac{-1}{x} \cdot\frac{1}{lx^{l-1}}=\lim_{x \to \infty} \frac{-1}{lx^{l}} = 0
\end{equation}
since \begin{equation} l>0. \end{equation}
We now have a very general statement. From here you can see that if you substitute $l = p/r$ (with $p, r > 0$), you can raise our new-found limit to the power $r$ and still get $0$ as your answer. More importantly, since $(x^{p/r}\ln x)^r = x^p(\ln x)^r$, we get the wanted limit like so.
\begin{equation}
0=\lim_{x \to 0}\left(x^{\frac{p}{r}}\ln(x)\right)^r =\lim_{x\to 0}x^p(\ln(x))^r
\end{equation}
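The decay is also easy to see numerically (a sketch added for illustration), here for $p=2$, $r=3$:

```python
import math

# x^p (ln x)^r -> 0 as x -> 0+, illustrated for p = 2, r = 3
values = [x**2 * math.log(x)**3 for x in (1e-2, 1e-4, 1e-6, 1e-8)]

# magnitudes shrink toward zero as x decreases
assert all(abs(v1) > abs(v2) for v1, v2 in zip(values, values[1:]))
assert abs(values[-1]) < 1e-10
```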
Hope this helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1305690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
limits and infinity I'm having trouble wrapping my head around some of the 'rules' of limits. For example,
$$
\lim_{x\to \infty} \sqrt{x^2 -2} - \sqrt{x^2 + 1}
$$
becomes
$$
\sqrt{\lim_{x\to \infty} (x^2) -2} - \sqrt{\lim_{x\to \infty}(x^2) + 1}
$$
which, after graphing, seems to approach zero. My question is how do you know for sure the answer is zero without graphing? Thanks!
|
Hint: $$\left[\sqrt {x^2 - 2} - \sqrt{x^2 + 1}\right]\cdot \frac{\sqrt {x^2 - 2} + \sqrt{x^2 + 1}}{\sqrt {x^2 - 2} + \sqrt{x^2 + 1}} = \frac{x^2 - 2 - (x^2 + 1)}{\sqrt {x^2 - 2} + \sqrt{x^2 + 1}}$$
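Numerically (an illustrative sketch), the conjugate form agrees with the original expression and its magnitude decays roughly like $3/(2x)$, so the limit is $0$:

```python
import math

def f(x):
    return math.sqrt(x*x - 2) - math.sqrt(x*x + 1)

def conjugate_form(x):
    # after multiplying by the conjugate: -3 / (sqrt(x^2-2) + sqrt(x^2+1))
    return -3 / (math.sqrt(x*x - 2) + math.sqrt(x*x + 1))

for x in (1e2, 1e4, 1e6):
    assert abs(f(x) - conjugate_form(x)) < 1e-9
    assert abs(f(x)) < 2 / x   # tends to 0, roughly like -3/(2x)
```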
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1305782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Misunderstanding of $\epsilon$-neighborhood I am given the following definition of an $\epsilon$-neighborhood:
Given a real number $a\in \mathbb{R}$ and a positive number $\epsilon>0$, the set
$$\{x \in \mathbb{R}: |x-a|<\epsilon \}$$
is called the $\epsilon$-neighborhood of a.
The book says that the $\epsilon$-neighborhood of a consists of all those points whose distance from a is less than $\epsilon$. I don't understand how $|x-a|<\epsilon$ says that though. To my understanding $|x-a|<\epsilon$ says that
$$-\epsilon<x-a<\epsilon$$
$$-x-\epsilon<-a<\epsilon-x$$
$$x+\epsilon>a>x-\epsilon$$
$$x-\epsilon<a<x+\epsilon$$
Could someone please reconcile my understanding?
Thanks!
|
You might want to write it this way: $$|x-a|<\epsilon \iff -\epsilon < x-a < \epsilon \iff a-\epsilon < x < a+\epsilon$$And the last expression means that $x \in \left]a-\epsilon,a+\epsilon\right[$. The neighbourhood is centered at $a$, not $x$. The moral of the story is that $|x-a|<\epsilon$ if and only if $x \in \left]a-\epsilon,a+\epsilon\right[$. So: $$\{x \in \Bbb R \mid |x-a|<\epsilon\} = \left]a-\epsilon,a+\epsilon\right[. $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1305857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Abelian group and element order Suppose that $G$ is an Abelian group of order 35 and every element of $G$ satisfies the equation $x^{35} = e.$ Prove that G is cyclic.
I know that since every element of $G$ satisfies the equation $x^{35} = e$, the possible orders of elements are $1,5,7,35$. I know how to prove the claim when some element has order $35$, or when one element has order $5$ and another has order $7$; but I don't know how to handle the case where every non-identity element has order $5$, or every one has order $7$. Is this possible? If it is, how can $G$ be cyclic?
|
A very basic proof:
If $G$ has an element of order $5$ and an element of order $7$, say $a$ and $b$, then since $G$ is abelian, $ab$ has order $35$ and $G$ is cyclic.
So we want to rule out the possibility that $G$ has no elements of order $5$, or alternatively, no elements of order $7$, without recourse to Cauchy's Theorem.
Elements of order $5$ occur "$4$ at a time" (any element of order $5$ generates a subgroup of order $5$, and any two such subgroups intersect in just the identity, so that gives $4$ elements of order $5$ in each distinct subgroup of order $5$), and similarly, elements of order $7$ occur "$6$ at a time".
Now $G$ has $34$ non-identity elements, which is neither a multiple of $4$, nor $6$, so we cannot have elements of only one order among the non-identity elements.
P.S.: the condition that $G$ be abelian is unnecessary-any group of order $35$ is actually cyclic, but a proof of this will have to wait until you have access to more advanced theorems.
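The counting argument can be checked concretely on the cyclic model $\mathbb{Z}/35\mathbb{Z}$ (a Python sketch, just to see the "$4$ at a time" / "$6$ at a time" counts):

```python
from math import gcd

n = 35
# In the additive group Z/35, the order of k is 35 / gcd(35, k).
orders = [n // gcd(n, k) for k in range(n)]

count5 = orders.count(5)
count7 = orders.count(7)
count35 = orders.count(35)
print(count5, count7, count35)  # -> 4 6 24
```

Indeed $34$ non-identity elements split as $4 + 6 + 24$, and $34$ is neither a multiple of $4$ nor of $6$.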
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1305950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Pre-calculus - Deriving position from acceleration Suppose an object is dropped from the tenth floor of a building whose roof is 50 feet above the point of release. Derive the formula for the position of the object $t$ seconds after its release, if distance is measured from the roof. The positive direction of distance is downward and time is measured from the instant the object is dropped. What is the distance fallen by the object in $t$ seconds?
The problem gives you acceleration as being $a=32$. I integrate to velocity, getting $v=32t + C$. At the time of release, $v=0$, so the equation is $v=32t$. Integrate again for position and I get $$s=16t^2 + C$$
Here is where I get stuck. Do I add 50 to the position function because the height is measured from the roof? What is the next step?
|
Yes, add $50$ft to the function, because when $t=0$, the object is $50$ft away from the roof.
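So $s(t) = 16t^2 + 50$, and the distance fallen in $t$ seconds is $s(t) - s(0) = 16t^2$. A quick numeric cross-check (a Python sketch stepping the constant-acceleration motion directly; $T = 3$ s is an arbitrary sample time):

```python
a = 32.0           # ft/s^2, with downward taken as positive
s, v = 50.0, 0.0   # released from rest, 50 ft below the roof
T, steps = 3.0, 3000
dt = T / steps
for _ in range(steps):
    # this update is exact for constant acceleration
    s += v * dt + 0.5 * a * dt * dt
    v += a * dt
print(s)  # compare with 16*T**2 + 50
```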
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1306032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
$\mathbb{Q}\left ( \sqrt{2},\sqrt{3},\sqrt{5} \right )=\mathbb{Q}\left ( \sqrt{2}+\sqrt{3}+\sqrt{5} \right )$ Prove that
$$\mathbb{Q}\left ( \sqrt{2},\sqrt{3},\sqrt{5} \right )=\mathbb{Q}\left ( \sqrt{2}+\sqrt{3}+\sqrt{5} \right )$$
I proved this for two elements, e.g. $\mathbb{Q}\left ( \sqrt{2},\sqrt{3}\right )=\mathbb{Q}\left ( \sqrt{2}+\sqrt{3} \right )$, but I can't do it by a similar method.
|
It is worth noting from alex.jordan and achille hui's responses that if $\theta = \sqrt{2} + \sqrt{3} + \sqrt{5}$, we explicitly have $$\begin{align*} \sqrt{2} &= \tfrac{5}{3} \theta - \tfrac{7}{72} \theta^3 - \tfrac{7}{144} \theta^5 + \tfrac{1}{576} \theta^7, \\ \sqrt{3} &= \tfrac{15}{4} \theta - \tfrac{61}{24} \theta^3 + \tfrac{37}{96} \theta^5 - \tfrac{1}{96} \theta^7, \\ \sqrt{5} &= -\tfrac{53}{12} \theta + \tfrac{95}{36} \theta^3 - \tfrac{97}{288} \theta^5 + \tfrac{5}{576} \theta^7. \end{align*}$$
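These identities are easy to sanity-check numerically (a Python sketch using plain floating point):

```python
from math import sqrt

t = sqrt(2) + sqrt(3) + sqrt(5)

# the three polynomial expressions in theta quoted above
r2 = (5/3)*t - (7/72)*t**3 - (7/144)*t**5 + (1/576)*t**7
r3 = (15/4)*t - (61/24)*t**3 + (37/96)*t**5 - (1/96)*t**7
r5 = -(53/12)*t + (95/36)*t**3 - (97/288)*t**5 + (5/576)*t**7

print(r2 - sqrt(2), r3 - sqrt(3), r5 - sqrt(5))  # all should be ~0
```

In particular each of $\sqrt2,\sqrt3,\sqrt5$ lies in $\mathbb{Q}(\theta)$, which gives the nontrivial inclusion directly.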
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1306082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Why the set of all maximum points of a continuous function is a closed set? Suppose $f(x)$ is a continuous function on domain $\Omega$ and its maximum value is $m$. Let $M={\{x|f(x)=m\}}$, then how do I prove $M$ is a closed set? Thank you!
|
Suppose $x \notin M$, then $f(x) \neq m$ and since $f$ is continuous, there is some open $U$ containing $x$ such that $f(y) \neq m$ for all $y \in U$, hence $M^c$ is open.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1306170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Does rotation of a rectangle keep it rectangular? If I rotate a rectangle by 45°, does it stay rectangular or become something else? I mean do 90° angles stay 90°?
I am asking this question because I have some results where the rotated rectangle becomes not so rectangular ... I think I have a problem.
Problem: I think the problem comes from scaling.... I draw the resulting data after rotation in another image, and I have got the rectangle correctly rotated. Maybe it is because I used imagesc to draw the background or axis image in MATLAB...?
|
I found the solution.
Well, I was drawing the image with different scalings on the X- and Y-axes. The solution is to set DataAspectRatio to [1 1 1].
To achieve this I used axis image, but only after calling imagesc(image)!
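To confirm the underlying math is fine (rotation is an isometry, so right angles and side lengths are preserved; any distortion you see is a display-scaling artifact), here is a small Python sketch with a $4\times2$ rectangle rotated by $45^\circ$:

```python
import math

def rotate(points, deg):
    th = math.radians(deg)
    c, s = math.cos(th), math.sin(th)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def edge(p, q):
    return (q[0] - p[0], q[1] - p[1])

# corners of a 4 x 2 rectangle, listed in order
rect = [(0, 0), (4, 0), (4, 2), (0, 2)]
rot = rotate(rect, 45)

dots, lengths = [], []
for i in range(4):
    e1 = edge(rot[i], rot[(i + 1) % 4])
    e2 = edge(rot[(i + 1) % 4], rot[(i + 2) % 4])
    dots.append(e1[0] * e2[0] + e1[1] * e2[1])  # 0 for perpendicular edges
    lengths.append(math.hypot(*e1))
print(dots, lengths)
```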
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1306301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
}
|
Inequality involving lengths and triangles I was quite sure this would have been asked before but I couldn't find it, so here goes:
If $\displaystyle BC<AC<AB \hspace{5pt} (\alpha<\beta<\gamma)$, show $\displaystyle PA+PB+PC<\beta+\gamma \hspace{3pt}$ (where $P$ arbitrary in the interior of the triangle).
I haven't actually been able to make a simple yet convincing attempt (although I have got a messy solution). I know that there is a nice and elegant way to do it, but it isn't coming to me.
|
The Fermat-Torricelli point of a triangle $ABC$ is the point $P$ such that $PA+PB+PC$ is minimized. It is usually found through the following lines: assume, for now, that $ABC$ has no angle greater than $120^\circ$ and take a point $Q$ inside $ABC$. Let $Q'$ be the image of $Q$ under a rotation of $60^\circ$ around $B$, and let $A'$ be the image of $A$ under the same rotation:
Then $Q'A'=QA$ and $BQ=BQ'=QQ'$, hence:
$$ QA+QB+QC = A'Q'+Q'Q+QC \geq CA', $$
and the Fermat-Torricelli point lies in the intersection of the lines through a vertex of $ABC$ and the corresponding vertex of an equilateral triangle built on the opposite side, externally to $ABC$.
However, if $ABC$ has an angle $\geq 120^\circ$, the Fermat-Torricelli point is just the opposite vertex to the longest side.
Can you finish from there?
Update: I also have a one-shot-one-kill solution. The function that maps $P$ to $PA+PB+PC$ is a convex function, as a sum of convex functions. It follows that $PA+PB+PC$ attains its maximum at a vertex of $ABC$, since $ABC$ is a compact convex set. If $BC<AC<AB$, this maximum is obviously attained at $A$, and the claim follows.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1306473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
existence of holomorphic functions Let $D= \{ z \in \Bbb C :|z|<1 \}$. Then there exists a holomorphic function $f:D\to \overline D$ with $f(0)=0$ with the property
a) $f'(0)={1\over 2}$
b) $|f(1/3)|={1\over 4}$
c) $f(1/3)={1\over 2}$
d) $|f'(0)|=\sec(\pi/6)$
when $f(z)={1\over 2}z$ then $a)$ is true
when $f(z)={3\over 4}z$ then $b)$ is true (a constant like $f(z)={1\over 4}$ would violate $f(0)=0$)
To solve the problem, do we have to find such examples? Or is there any other way?
|
Hint:
If $f(z)$ is a holomorphic function from $D$ into itself, where $D=\{z\in\mathbb{C}:|z|<1\}$, and $f(0)=0$, then $|f(z)|\le|z|$ for all $z\in D$ and $|f'(0)|\le1$, by the Schwarz lemma.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1306581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Complex analysis, find the residue
Find the residue of $f(z)=\frac{1}{z^2\sin z}$ at $z_0=0$
What I tried
Let $g(z)=1$ and $h(z)=z^2\sin z$; both are analytic, but $h$ has a zero of order $3$ at $z_0$ while $g(z_0)\neq 0$, so $f(z)$ does not have a removable singularity at $z_0$.
Then I tried to use the fact that if $\lim_{z\rightarrow z_0}(z-z_0)^kf(z)$ exists and is non-zero, then $f(z)$ has a pole of order $k$ at $z_0$, but I'm stuck on this, because if for example I take $k=2$
$$\lim_{z\rightarrow 0}z^2\cdot\frac{1}{z^2\sin z}=\lim_{z\rightarrow 0}\frac{1}{\sin z}=\infty$$
Maybe I need to do some transformation in the function, but I can not see such a transformation.
|
You may write, as $z \to 0$,
$$
\begin{align}
\frac1{z^2\sin z}&=\frac1{z^2\left(z-\dfrac1{3!}z^3+O(z^5)\right)}\\\\
&=\frac1{z^3\left(1-\dfrac16z^2+O(z^4)\right)}\\\\
&=\frac1{z^3}(1+\frac16z^2+O(z^4))\\\\
&=\frac1{z^3}+\frac1{6z}+O(z)\\\\
\end{align}
$$ and the desired residue is equal to $\dfrac16$.
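The value $\frac16$ can also be checked numerically via the defining contour integral $\operatorname{Res}_{z=0} f = \frac{1}{2\pi i}\oint f\,dz$ over a small circle (a Python sketch; the radius $0.5$ and point count are arbitrary choices):

```python
import cmath
import math

def f(z):
    return 1.0 / (z * z * cmath.sin(z))

# residue = (1 / 2*pi*i) * contour integral around a small circle;
# the trapezoid rule on a periodic integrand converges extremely fast
N, r = 2000, 0.5
res = sum(f(r * cmath.exp(2j * math.pi * k / N)) * r * cmath.exp(2j * math.pi * k / N)
          for k in range(N)) / N
print(res)  # should be close to 1/6
```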
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1306654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
How to evaluate $\lim\limits_{n\to +\infty}\frac{1}{n^2}\sum\limits_{i=1}^n \log \binom{n}{i} $ I have to compute:
$$ \lim_{n\to +\infty}\frac{1}{n^2}\sum_{i=1}^{n}\log\binom{n}{i}.$$
I have tried this problem and hit the wall.
|
We have:
$$ \prod_{i=1}^{n}\binom{n}{i} = \frac{n!^n}{\left(\prod_{i=1}^{n}i!\right)\cdot\left(\prod_{i=1}^{n-1}i!\right)}\tag{1}$$
hence:
$$ \frac{1}{n^2}\sum_{i=1}^{n}\log\binom{n}{i}=\frac{1}{n^2}\left((n-1)\log\Gamma(n+1)-2\sum_{i=1}^{n-1}\log\Gamma(i+1)\right)\tag{2}$$
and by Stirling's approximation:
$$ \log\Gamma(z+1) = \left(z+\frac{1}{2}\right)\log z-z+O(1) \tag{3}$$
and partial summation we get:
$$ 2\sum_{i=1}^{n-1}\log\Gamma(i+1) = n^2\left(\log n-\frac{3}{2}\right)+O(n\log n)\tag{4}$$
so:
$$ \sum_{i=1}^{n}\log\binom{n}{i} = \frac{n^2}{2}+O(n\log n)\tag{5} $$
and our limit is just $\color{red}{\frac{1}{2}}.$
A simpler approach is given by the identity:
$$\begin{eqnarray*}\sum_{i=1}^{n}\log\binom{n}{i}&=&(n-1)\log n!-2\sum_{i=1}^{n-1}\sum_{k=1}^{i}\log k \\ &=&(n-1)\log n+\sum_{k=1}^{n-1}(2k-n-1)\log k\tag{6}\end{eqnarray*}$$
hence partial summation gives:
$$\begin{eqnarray*}\sum_{i=1}^{n}\log\binom{n}{i}&=&(1-n)\log\left(1-\frac{1}{n}\right)-\sum_{k=1}^{n-2}(k^2-kn)\log\left(1+\frac{1}{k}\right)\\&=&O(1)+\sum_{k=1}^{n-2}(n-k)+O\left(\sum_{k=1}^{n-2}\frac{n-k}{k}\right)\\&=&\frac{n^2}{2}+O(n\log n).\tag{7}\end{eqnarray*} $$
Still another approach through summation by parts:
$$\begin{eqnarray*}\sum_{i=1}^{n}\log\binom{n}{i}&=&\sum_{i=1}^{n-1}i\cdot\log(i+1)-\sum_{i=1}^{n-1}(n-i)\log(i)\\&=&O(n)+\sum_{i=1}^{n-1}(n-2i)\log(n-i)\\&=&O(n)+\sum_{i=1}^{n-1}(n-2i)\log\left(1-\frac{i}{n}\right)\tag{8}\end{eqnarray*}$$
gives that the limit is provided by a Riemann sum:
$$\begin{eqnarray*}\lim_{n\to +\infty}\frac{1}{n^2}\sum_{i=1}^{n}\log\binom{n}{i}&=&\int_{0}^{1}(1-2x)\log(1-x)\,dx\\&=&\int_{0}^{1}\frac{x-x^2}{1-x}\,dx\\&=&\int_{0}^{1}x\,dx=\color{red}{\frac{1}{2}}.\tag{9}\end{eqnarray*}$$
This limit can be related with the entropy of the binomial distribution through the Kullback-Leibler divergence, for instance.
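A direct numerical check of the limit (a Python sketch using `math.lgamma` to evaluate $\log\binom{n}{i}$ stably; the sample sizes are arbitrary):

```python
from math import lgamma

def log_binom(n, i):
    # log C(n, i) via log-Gamma, avoiding huge integers
    return lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)

def scaled_sum(n):
    return sum(log_binom(n, i) for i in range(1, n + 1)) / n**2

for n in (100, 1000, 5000):
    print(n, scaled_sum(n))
```

The values creep toward $\tfrac12$ at the slow $O(\log n / n)$ rate predicted by the error term.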
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1306719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What does the "$\cdot$" mean in an equation I am trying to solve an equation for a project that I am undertaking. The equation is very long and its probably not necessary to show it all here. Most of the equation is fairly straightforward; i.e., $1+(\frac{w}{W})(\frac{d}{t})$, etc. but at the very end it reads $\left(1+\frac{w}{W}\right)\left(\frac{D}{T}\right) \cdot \frac{1}{T}$.
$w$, $W$, $D$, and $T$ are all either constants or variables that I have solved already.
My question is, what is the "$\cdot$" symbol asking me to do?
In the equation in the book, the "$\cdot$" is in the centre of the dividing line between $1$ and $T$, as opposed to down low like a period or full stop.
I appreciate that I have phrased this question in an awkward manner, but as is obvious, I am no maths expert.
Here is a photo of the equation in the book:
|
Like this?
$$ \left(1+\frac{w}{W}\right)\left(\frac DT\right)\cdot\frac1T $$
That means multiply.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1306798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Use Laplace's method with $\int_{0}^{\infty} e^{x(3u-u^3)}du$ as $x\rightarrow \infty$ Use Laplace's method with $\int_{0}^{\infty} e^{x(3u-u^3)}du$ as $x\rightarrow \infty$.
I'm confused about how to Taylor expand about $u=1$. How do I continue? Obviously, first of all, I have converted it to:
$$\int_{1-\epsilon}^{1+\epsilon}e^{x(3u-u^3)}du$$
but what now?
|
Let's see the where the usual reasoning for Laplace's method leads us.
Let me write the integrand as $\exp{(x f(u))}$, where $f(u)$ has a global maximum at $u_* = 1$. We can anticipate that the major contribution to the integral comes from the neighbourhood of $u = u_*$ as $x \to \infty$. If we expand the argument of the exponential about $u = u_*$ we find:
$$ I(x) = \int^\infty_0 \mathrm{e}^{xf(u)} \, \mathrm{d}u \approx \int^\infty_0 \mathrm{e}^{x \left[ f(u_*) + \frac{1}{2}f''(u_*)(u-u_*)^2 \right] } \, \mathrm{d}u, $$ where $f(u_*) = 2$ and $f''(u_*) = -6$ (of course, $f'(u_*)=0$). Now, we call Mathematica to solve the last integral to come up with:
$$I(x) \sim \sqrt{\frac{\pi }{12 x}} \mathrm{e}^{2 x} \left(1+\text{erf}\left( \sqrt{3x}\right)\right), \quad x \to \infty $$ where $\mathrm{erf}$ is the error function. We have obtained two terms in the asymptotic expansion of $I(x)$.
Hope this helps
Note that if you do not retain the second-order term in the expansion of $f$, you end up with a divergent integral in $u$, which cannot represent $I(x)$ for large $x$ — not precisely what we desire.
Here's a comparison between the numerical integration of $I(x)$ (black solid line) and the two-term asymptotic expansion (dashed line):
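The same comparison is easy to reproduce with the standard library alone (a Python sketch; $x=20$ is an arbitrary sample value, and Simpson's rule on $[0,3]$ is plenty since the integrand is negligible beyond $u=3$):

```python
from math import exp, sqrt, pi, erf

x = 20.0

def integrand(u):
    return exp(x * (3 * u - u**3))

# Simpson's rule on [0, 3] (n must be even)
a, b, n = 0.0, 3.0, 6000
h = (b - a) / n
s = integrand(a) + integrand(b)
for k in range(1, n):
    s += (4 if k % 2 else 2) * integrand(a + k * h)
numeric = s * h / 3

asymptotic = sqrt(pi / (12 * x)) * exp(2 * x) * (1 + erf(sqrt(3 * x)))
print(numeric / asymptotic)  # should be close to 1
```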
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1306882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Show that $T$ is a linear operator - Linear Transformations in Linear Algebra We are asked:
Consider the operator $T:\mathbb{R^2} \rightarrow \mathbb{R^2}$ where $T(x_1, x_2) = (x_1 + kx_2, -x_2)$ for every $(x_1, x_2) \in \mathbb{R^2}$. Here $k \in \mathbb{R}$ is fixed.
a) Show that $T$ is a linear operator
b) Show that $T$ is one-to-one
We know in part (a) that a linear operator is one that satisfies the conditions of linearity
$$T(x_1 + x_2) = T(x_1) + T(x_2)$$
$$T(c\,x_1) = c\,T(x_1)$$ (writing the scalar as $c$ to avoid clashing with the fixed $k$ in the definition of $T$)
How do I apply that to the problem given?
We know that in part (b) a function is one-to-one whenever
$$T(x) = T(y)$$
for some $x$ and $y$ we must have $x=y$ (this is the property of being one-to-one, i.e. injective — not 'onto').
I am almost certain that these responses are not proper solutions to my questions.
|
Hint:
there is some confusion between components and vectors (or points) in your notation.
For part a) you have to prove:
$$
T(\vec x+\vec y)=T(\vec x)+T(\vec y)
$$
with: $\vec x=(x_1,x_2)$ and $\vec y=(y_1,y_2)$
and
$$
T(k \vec x)=kT(\vec x)
$$
with: $\vec x=(x_1,x_2)$
Use the definition of T for $T \vec x$ and $T \vec y$ and you find the proof.
In the same way you can prove b).
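For intuition (not a proof — the exercise asks for the algebraic argument above), both claims can be spot-checked numerically; here is a Python sketch with $k=3$ as an arbitrary sample value:

```python
import random

k = 3.0  # arbitrary fixed real k

def T(v):
    x1, x2 = v
    return (x1 + k * x2, -x2)

def add(u, w):
    return (u[0] + w[0], u[1] + w[1])

def scale(c, u):
    return (c * u[0], c * u[1])

def close(u, w, tol=1e-9):
    return abs(u[0] - w[0]) < tol and abs(u[1] - w[1]) < tol

random.seed(0)
ok = True
for _ in range(100):
    u = (random.uniform(-5, 5), random.uniform(-5, 5))
    w = (random.uniform(-5, 5), random.uniform(-5, 5))
    c = random.uniform(-5, 5)
    ok = ok and close(T(add(u, w)), add(T(u), T(w)))   # additivity
    ok = ok and close(T(scale(c, u)), scale(c, T(u)))  # homogeneity

# one-to-one: the matrix of T is [[1, k], [0, -1]], determinant -1 != 0
det = 1 * (-1) - k * 0
print(ok, det)
```

The nonzero determinant is exactly why b) holds for every $k$.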
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1306989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Represent $\cos(x)$ as a sum of sines - where is my mistake Let's look at $f(x)=\cos(x)$ defined on the interval $[0,\pi]$.
We know that for any function $g$ defined on $[0,\pi]$ we have:
$g(x)=\sum_{k=1}^{\infty}B_k\sin(kx)$ where $B_k=\frac{2}{\pi}\int_{0}^{\pi}g(x)\sin(kx)dx$. And $f$ is no different. So in our case:
$B_k=\frac{2}{\pi}\int_{0}^{\pi}f(x)\sin(kx)dx=\frac{2}{\pi}\int_{0}^{\pi}\cos(x)\sin(kx)dx$
It can be shown that $\int\cos(x)\sin(kx)dx=\frac{\sin(x)\sin(kx)+k\cos(x)\cos(kx)}{1-k^2}$, so:
$B_k=\frac{2}{\pi}\int_{0}^{\pi}\cos(x)\sin(kx)dx=\frac{2}{\pi}\frac{-k\cos(k\pi)-k}{1-k^2}=\frac{2}{\pi}\frac{(-1)^{k+1}k-k}{1-k^2}$
So overall we should have $f(x)=\cos(x)=\sum_{k=1}^{\infty}\frac{2}{\pi}\frac{(-1)^{k+1}k-k}{1-k^2}\sin(kx)$ But that clearly can't be true, because $\cos(0)=1$ but that sum is equal to $0$ at $x=0$ since $\sin(0)=0$.
Where is the mistake? and not only that, we seem to have a big problem when $k=1$
|
Your formula after "it can be shown that" is clearly not valid for $k=1$,
so you simply have to compute $B_1$ using some different method. (For example, $\int \cos x \sin x \, dx= \frac12 \sin^2 x + C$.)
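A numeric check of both facts (a Python sketch using Simpson's rule): the $k=1$ integral really is $0$, while for $k=2$ the general formula predicts $\int_0^\pi \cos x \sin 2x \, dx = \frac{(-1)^{3}\cdot 2 - 2}{1-4} = \frac43$.

```python
from math import cos, sin, pi

def simpson(g, a, b, n=2000):  # n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

I1 = simpson(lambda x: cos(x) * sin(1 * x), 0, pi)  # k = 1: handled separately
I2 = simpson(lambda x: cos(x) * sin(2 * x), 0, pi)  # k = 2: formula gives 4/3
print(I1, I2)
```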
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1307069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Determine whether or not the limit exists: $\lim_{(x,y)\to(0,0)}\frac{(x+y)^2}{x^2+y^2}$
Determine whether or not the limit $$\lim_{(x,y)\to(0,0)}\frac{(x+y)^2}{x^2+y^2}$$
exists. If it does, then calculate its value.
My attempt:
$$\begin{align}\lim \frac{(x+y)^2}{x^2+y^2} &= \lim \frac{x^2+y^2}{x^2+y^2} + \lim \frac {2xy}{x^2+y^2} =\\&= 1 + \lim \frac 2{xy^{-1}+yx^{-1}} = 1+ 2\cdot\lim \frac 1{xy^{-1}+yx^{-1}}\end{align}$$
But $\lim_{x\to 0^+} x^{-1} = +\infty$ and $\lim_{x\to 0^-} x^{-1} = -\infty$
Likewise, $\lim_{y\to 0^+} y^{-1} = +\infty$ and $\lim_{y\to 0^-} y^{-1} = -\infty$
So the left hand and right hand limits cannot be equal, and therefore the limit does not exist.
|
Your final answer is correct, but the way there is slightly misled. The main problem is that you separate your limit into a sum of limits. The first, $$\lim_{(x,y)\to(0,0)} \frac{x^2+y^2}{x^2+y^2}$$
converges and is equal to $1$, as you note. The second though, $$\lim_{(x,y)\to(0,0)} \frac {2xy}{x^2+y^2}$$
diverges.
The first problem (but probably not the most important) is that it doesn't diverge for the reasons you state (if you look closely, your argument actually concludes that the above limit is $0$). A proof of its divergence would be to consider the limit through points $(x,x)$ vs. through $(0,y)$, as suggested in another answer.
The more important problem though, is that this reasoning cannot conclude that the original limit doesn't exist! The equality (compactly stated) $$\lim f+g = \lim f + \lim g$$
holds only if both the limits of $f$ and of $g$ exist. In that case we can conclude that the original limit exists and is equal to blah blah. However there is nothing to be said in the case that either limit doesn't exist.
You could modify your method by keeping the expansions inside the limit operand: $$\lim_{(x,y)\to(0,0)}\frac{(x+y)^2}{x^2+y^2} = \lim_{(x,y)\to(0,0)}\frac{x^2+y^2}{x^2+y^2} + \frac{2xy}{x^2+y^2}$$
and conclude by the "path-dependence" argument.
Just to make clear what I mean, here's an example where the same argument would conclude that a very evidently convergent limit, doesn't exist!
$$0 = \lim_{x\to +\infty} \sin x-\sin x \stackrel{?}{=}\lim_{x\to +\infty}\sin x - \lim_{x\to +\infty}\sin x$$
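The path-dependence argument is easy to see numerically (a Python sketch): along $y=x$ the quotient is identically $2$, along $x=0$ it is identically $1$.

```python
def f(x, y):
    return (x + y)**2 / (x**2 + y**2)

# along the diagonal y = x the quotient is identically 2 ...
diag = [f(t, t) for t in (0.1, 0.01, 0.001)]
# ... but along the axis x = 0 it is identically 1
axis = [f(0.0, t) for t in (0.1, 0.01, 0.001)]
print(diag, axis)
```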
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1307149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 4
}
|
Show the set of all functions from $\mathbb{N}$ to $\{0, 1\}$ is uncountable using a contradiction. This is what I have written:
By contradiction, assume it is countable. Write $S=\{\text{all functions } \mathbb{N} \rightarrow \{0,1\} \}$. Then, we can find a bijection $\mathcal{H}: S \rightarrow \mathbb{N}$. Now, I would like to check how to incorporate Cantor's method to find the contradiction. Would it be right to think of each function as a binary representation (because they map to either $0$ or $1$)? So, I will write
$f(1) \mapsto a_{11}a_{12}a_{13}...$
$f(2) \mapsto a_{21}a_{22}a_{23}...$
$f(3) \mapsto a_{31}a_{32}a_{33}...$
where $a_{ij} \in \{0,1\}$.
and so on. So for example, the row for $f(1)$ lists its values $f(1)(1), f(1)(2), \ldots$, each of which is a $0$ or a $1$.
Then, I define a function $g \in S$ by $g(n) = 1 - a_{nn}$; that is, $g(n)$ is $0$ if the $n$-th diagonal value is $1$, and $1$ if it is $0$. Then $g$ differs from every $f(n)$ at the input $n$, contradicting surjectivity.
I have one more question: what is meant by the notation $\{0,1\}^{\mathbb{N}}$?
Thank you.
|
Your idea is correct. Using the binary representation actually makes the explanation much easier: you can always produce a binary string that is not in the list and obtain a contradiction using Cantor's diagonal method. As for the notation, $\{0,1\}^{\mathbb{N}}$ denotes exactly the set of all functions from $\mathbb{N}$ to $\{0,1\}$ — your set $S$.
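A small finite illustration of the diagonal construction (a Python sketch; the $5\times5$ table is an arbitrary toy "enumeration", each row giving the values of one function):

```python
# pretend this list enumerates "all" functions N -> {0,1};
# row i gives f_i(0), f_i(1), ...
table = [
    [0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1],
    [0, 1, 1, 1, 1],
]

# flip the diagonal: g(n) = 1 - f_n(n)
g = [1 - table[n][n] for n in range(len(table))]
print(g)  # -> [1, 0, 0, 1, 0]

# g differs from every row at the diagonal position
misses = [g[n] != table[n][n] for n in range(len(table))]
```

The same flip works for any claimed enumeration, finite or not, which is the heart of the contradiction.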
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1307247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
}
|
Finding the square root of a big number, like 676? I am having trouble understanding and finding the square roots of large numbers. How would I go about finding this number efficiently?
|
The square root of $676$ is the number that, multiplied by itself, gives you $676$.
This number is denoted as $\sqrt{676}$.
Note that, generally speaking, for a positive real number $p$ there are two numbers whose square is $p$; they differ only in sign. To ensure uniqueness of the square root, the convention is that $\sqrt p$ denotes the positive one.
If you want to calculate a square root of a large number, it is often helpful to represent this large number as a product of smaller numbers, e.g.
$$
676 = 4\cdot 169 = (2\cdot 2) \cdot (13\cdot 13) = 2^2\cdot 13^2
$$
Therefore
$$
\sqrt{676} = \sqrt{4\cdot 169 } = \sqrt{2^2}\cdot \sqrt{ 13^2} = 26
$$
EDIT:
Thanks to the comments, I am able to get rid of unclarity in my original post.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1307312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
find the minimum value of $\sqrt{x^2+4} + \sqrt{y^2+9}$ Question:
Given $x + y = 12$, find the minimum value of $\sqrt{x^2+4} + \sqrt{y^2+9}$?
Key:
I substitute $y = 12 - x$ into the expression and differentiate,
which gives
$$\frac{x}{\sqrt{x^2+4}} + \frac{x-12}{\sqrt{x^2-24x+153}} = f'(x).$$
However, after that I don't know what to do next in order to find the minimum value. Please help!
|
The constraint is $x + y = 12.$ At a local extremum of $\sqrt{x^2 + 4} + \sqrt{y^2 + 9},$ the critical numbers satisfy $$dx + dy =0,\quad \frac{x\, dx}{\sqrt{x^2 + 4}} + \frac{y\, dy}{\sqrt{y^2 +9}} = 0 \to x\sqrt{y^2 + 9}=y\sqrt{x^2 + 4}. $$ Squaring the last equation we have $$9x^2 =4y^2 \to y = \pm\frac32x;\quad x + y = 12 \implies x=24/5,\ y = 36/5,\\ \sqrt{x^2 + 4} + \sqrt{y^2+9} = 13.$$ Since the objective tends to $+\infty$ as $|x| \to \infty$, this critical point gives the global minimum: $$\sqrt{x^2 + 4} + \sqrt{y^2+9} \text{ is minimized at } 13.$$
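A brute-force check of the minimum (a Python sketch scanning a fine grid after the substitution $y = 12 - x$):

```python
from math import sqrt

def f(x):
    # substitute y = 12 - x into sqrt(x^2 + 4) + sqrt(y^2 + 9)
    return sqrt(x * x + 4) + sqrt((12 - x)**2 + 9)

# coarse grid search over a generous range
xs = [i / 1000 for i in range(-5000, 17001)]
best_x = min(xs, key=f)
print(best_x, f(best_x))  # minimum 13 at x = 24/5 = 4.8
```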
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1307398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
}
|
Fourier series convergence Let $f_n \rightarrow f$ be a sequence of $2\pi$-periodic functions, where the convergence is in $L^1({\mathbb R}/2\pi{\mathbb Z})$.
Then the Fourier-coefficients satisfy $|F(f_n) -F(f)| \rightarrow 0 $ uniformly.
Now, I was wondering: does this imply that the Fourier series $$h_k(x):=\sum_{n \in \mathbb{Z}} F(f_k)(n)e^{-inx} $$ converges to $$h(x):=\sum_{n \in \mathbb{Z}} F(f)(n)e^{-inx} $$ pointwise, where we assume that everything exists (i.e., we assume that the sums $h_k, h$ converge for all $x$)?
|
Let $\{f_n\}$ be the typewriter sequence. $f_n$ converges to $f\equiv0$ in $L^1$ but not pointwise (not even pointwise almost everywhere). The Fourier coefficients of $f$ are all equal to $0$, so its Fourier series converges to $0$. The Fourier series of $f_n$ converges to $f_n$ at all points except two.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1307463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Notation for integral of a vector function over an ellipsoid For a short proof, I need to write a point $\pmb y\in\mathbb{R}^p$ as
an integral over the surface of the ellipsoid $\pmb x^{\top}\pmb Q\pmb x=c$, where $\pmb Q$ is a $p$ by $p$ PSD matrix (for now $\pmb y$ is defined in words). What is the formal way to write this? Can we do better than:
$$\pmb y=\int_{\pmb x\in\mathbb{R}^p:\pmb x^{\top}\pmb Q\pmb x=c} \pmb x d(\pmb x)$$
Also, I am not a professional mathematician so I am not too sure about the $d(\pmb x)$ part.
|
You have lots of options here, but a typical choice is to denote the ellipsoid itself by a symbol, say,
$$E := \{{\bf x} \in \Bbb R^p : {\bf x}^T {\bf Q} {\bf x} = c\},$$
and then write the integral of a real-valued function $f$ as
$$\iint_E f \, dS.$$
Here, $dS$ is the infinitesimal area element of the surface $E$.
In our case, we're integrating a vector function (in fact, just the identity function ${\bf x} \mapsto {\bf x}$), and we can use essentially the same syntax and write
$${\bf y} = \iint_E {\bf x} \, dS.$$
(Of course, our $E$ is symmetric w.r.t. reflection through the origin, so in our case the integral is automatically zero.)
The double integral symbol simply reminds us that we're integrating over a surface (as to evaluate such an integral, often we parameterize the surface and then evaluate the appropriate double integral over the domain of the parameterization), but this emphasis is optional, as one will occasionally see $$\int_E f \, dS$$ instead. The ellipsoid is a closed surface, and some authors will remind you of this by decorating the double integral $\iint$ with a loop that overlaps both of the integral symbols.
(This symbol can produced with $\texttt{\oiint}$ with various $\LaTeX$ packages, but to my knowledge it is not supported by MathJax, which is used on this site for rendering.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1307537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to solve inequality for : $|7x - 9| \ge x +3$ How to solve inequality for : $|7x - 9| \ge x + 3$
There is a $x$ on both side that's make me confused...
|
We can solve the absolute value inequality by squaring both sides, then solving the resulting quadratic inequality.
\begin{align*}
|7x - 9| & \geq x + 3\\
|7x - 9|^2 & \geq (x + 3)^2\\
(7x - 9)^2 & \geq (x + 3)^2\\
49x^2 - 126x + 81 & \geq x^2 + 6x + 9\\
48x^2 - 132x + 72 & \geq 0\\
4x^2 - 11x + 6 & \geq 0\\
4x^2 - 8x - 3x + 6 & \geq 0\\
4x(x - 2) - 3(x - 2) & \geq 0\\
(4x - 3)(x - 2) & \geq 0
\end{align*}
Equality holds when $x = 3/4$ or $x = 2$. Since $(4x - 3)(x - 2)$ is continuous, the sign of the product can only change at one of the roots. We perform a line analysis.
The sign of the product is the product of the signs of the factors. Hence $(4x - 3)(x - 2) \geq 0$ when $x \leq 3/4$ or when $x \geq 2$. Since $|7x - 9| = \sqrt{(7x - 9)^2}$, the steps are reversible. Hence, the solution set of the inequality $(4x - 3)(x - 2) \geq 0$ is the solution set of the absolute value inequality $|7x - 9| \geq x + 3$. Therefore, the solution set of the absolute value inequality is
$$S = \left(-\infty, \frac{3}{4}\right] \cup [2, \infty)$$
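Since squaring-both-sides proofs are easy to get wrong, here is a quick brute-force verification (a Python sketch) that the inequality holds exactly on $(-\infty, 3/4] \cup [2, \infty)$:

```python
def holds(x):
    return abs(7 * x - 9) >= x + 3

def in_solution_set(x):
    return x <= 0.75 or x >= 2

# sample a fine grid and check the two predicates agree everywhere
xs = [i / 100 for i in range(-1000, 1001)]
agree = all(holds(x) == in_solution_set(x) for x in xs)
print(agree)
```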
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1307619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Application of Green's Theorem when undefined at origin Problem:
Let $P={-y \over x^2+y^2}$ and $Q={x \over x^2+y^2}$ for $(x,y)\ne(0,0)$.
Show that $\oint_{\partial \Omega}(Pdx + Qdy)=2\pi$ if $\Omega$ is any open set containing $(0,0)$ and with a sufficiently regular boundary.
Working:
Clearly, we cannot immediately apply Green's Theorem, because $P$ and $Q$ are not continuous at $(0,0)$. So, we can create a new region $\Omega_\epsilon$ which is $\Omega$ with a disc of radius $\epsilon$ centered at the origin excised from it.
We then note ${\partial Q \over \partial x} - {\partial P \over \partial y} = 0$ and apply Green's Theorem over $\Omega_\epsilon$. Furthermore, $\oint_C(Pdx + Qdy)=2\pi$ if $C$ is any positively oriented circle centered at the origin.
I get the general scheme of how to approach this problem, however I am unsure of how to argue it in a rigorous manner.
|
If your boundary is smooth, then you may parametrize it by $x(t) = r(t) \cos t$ and $y(t) = r(t) \sin t$, with $t \in [0,2\pi]$. Then your differential form $P \Bbb d x + Q \Bbb d y$ becomes
$$\frac {-r(t) \sin t \Big (r'(t) \cos t - r(t) \sin t \Big) + r(t) \cos t \Big( r'(t) \sin t + r(t) \cos t \Big)} {r^2 (t)} = 1$$
which, integrated from $0$ to $2\pi$, will give $2 \pi$.
Please note that the application of Green's theorem on $\Omega _\varepsilon$ will tell you that the integral on $C$ is equal to the integral on the small circle of radius $\varepsilon$ surrounding the origin, but won't calculate things for you. You'll still have to parametrize the circle and do an explicit integral.
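As a concrete illustration that the answer does not depend on the particular curve around the origin, here is a Python sketch evaluating the line integral over an ellipse (not a circle); the trapezoid rule on a smooth periodic integrand converges very fast:

```python
from math import cos, sin, pi

# ellipse x = 2 cos t, y = sin t enclosing the origin
def integrand(t):
    x, y = 2 * cos(t), sin(t)
    dx, dy = -2 * sin(t), cos(t)
    return (-y * dx + x * dy) / (x * x + y * y)  # P dx + Q dy along the curve

N = 2000
total = sum(integrand(2 * pi * k / N) for k in range(N)) * (2 * pi / N)
print(total, 2 * pi)
```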
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1307738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
6 periods and 5 subjects Question : There are 6 periods in each working day of a school. In how many ways can one organize 5 subjects such that each subject is allowed at least one period? Is the answer 1800 or 3600 ? I am confused.
Initially this appeared to be a simple question. By googling a bit, I got stuck between two answers. Different sites give different answers and I am unable to decide which is right.
Approach 1 (Source)
we have 5 subjects and 6 periods, so their arrangement is 6P5, and now we have 1 period left which we can fill with any of the 5 subjects, so 5C1
6P5*5C1=3600
Approach 2 (Source)
subjects can be arranged in 6 periods in 6P5 ways.
Remaining 1 period can be arranged in 5P1 ways.
Two subjects are alike in each of the arrangement. So we need to divide by 2! to avoid overcounting.
Total number of arrangements = (6P5 x 5P1)/2! = 1800
Alternatively this can be derived using the following approach.
5 subjects can be selected in 5C5 ways.
Remaining 1 subject can be selected in 5C1 ways.
These 6 subjects can be arranged themselves in 6! ways.
Since two subjects are same, we need to divide by 2!
Total number of arrangements = (5C5 × 5C1 × 6!)/2! = 1800
Is any of these approach is right or is the answer different?
|
5 subjects on 5 periods: first arrange the 5 distinct subjects in 5 of the 6 periods, which can be done in 6P5 = 720 ways.
Selecting a place for remaining one subject: the one period still empty can be given to any of the 5 subjects, in 5 ways.
Final answer: (720 × 5)/2! = 1800.
Over-counting check: in every such timetable one subject occupies two periods; swapping those two periods produces the same timetable, so each arrangement is counted twice — hence the division by 2!.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1307879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
How many $3$ digit even numbers can be formed by using digits $1,2,3,4,5,6,7$, if no digits are repeated? How many $3$ digit even numbers can be formed by using digits $1,2,3,4,5,6,7$, if no digits are repeated?
ATTEMPT
There are three places to be filled in _ _ _
I wrote it like this
_ _ $2$
_ _ $4$
_ _ $6$
Now each of the two blanks can be filled in $P(6,2)$ ways. So adding the results of the three cases I have $3 \cdot P(6,2)$, which gives me $90$ ways. But the textbook states $60$ ways. Can someone explain?
Thanks.
|
For the last digit you have only $3$ possibilities: $2$, $4$ or $6$. For the next digit, having chosen the last one, you have $6$ possibilities left. For the remaining digit, having chosen two digits, you have $5$ possibilities left. In total you have $3 \times 6 \times 5 = 90$ possibilities, so your count of $90$ is correct and the textbook's $60$ appears to be a misprint.
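A quick brute-force check (a sketch added here, not part of the original answer) confirms the product rule count of $3 \times 6 \times 5$:

```python
# Enumerate all 3-digit arrangements of distinct digits from 1..7
# and count those ending in an even digit.
from itertools import permutations

count = sum(1 for d in permutations(range(1, 8), 3) if d[2] % 2 == 0)
print(count)  # 90
```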
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1307997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A compact Hausdorff space It is known that every finite space is compact. I am wondering whether there exists a compact Hausdorff space $X$ whose cardinality is $\omega_0$, i.e., a countably infinite one.
Does anyone know about it?
|
Consider the set formed by the sequence $\{1/n : n \in \mathbb{N}\}$ together with the point $0$. This is a closed and bounded subset of the real numbers, so it's compact. As the range of a sequence plus one extra point, it's countably infinite. It's also Hausdorff since it's a subset of a Hausdorff space (the reals).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1308070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Prove that $G = \langle x,y\ |\ x^2=y^2 \rangle $ is torsion-free.
Here $x^2$ is central as $x^2y=yx^2$ similarly $y^2$ is central. Apart from this I do not know how to proceed. Taking any arbitrary word in $G$ does not help.
|
As noticed here, $G= \langle x,y \mid x^2=y^2 \rangle$ is the fundamental group of the Klein bottle $K$.
Argument 1: Any subgroup $H \leq G$ comes from a cover $C \to K$ where $\pi_1(C) \simeq H$. But $C$ is a surface, and, by looking at the abelianizations of the closed surface groups, we deduce that $C$ cannot be compact. On the other hand, it is known that the fundamental group of any non-compact surface is free. Consequently, $\pi_1(C) \simeq H$ cannot be a finite cyclic group. For more information, see here.
Argument 2: The universal cover of $K$ is the plane $\mathbb{R}^2$, so that $K$ is aspherical: $K$ is a classifying space of $G$. Because $K$ is finite-dimensional, $G$ must be torsion-free.
Argument 3: As noticed here, using the fact that there exists a two-sheeted covering $\mathbb{T}^2 \to K$, we deduce that $\langle x^2, xy^{-1} \rangle$ is a subgroup of index two in $G$ isomorphic to $\mathbb{Z}^2$. Since $$G / \langle x^2,xy^{-1} \rangle = \langle y \mid y^2=1 \rangle,$$ it is sufficient to prove that $y$ has infinite order in $G$. As Derek Holt did, one may consider the quotient $G/ \langle \langle xy^{-1} \rangle \rangle$, which is infinite cyclic.
Argument 4: $G$ is an amalgamated product $\mathbb{Z} \underset{\mathbb{Z}}{\ast} \mathbb{Z}$ of torsion-free groups, so it is torsion-free.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1308202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 2
}
|
Limit of $n\left(e-\left(1+\frac{1}{n}\right)^n\right)$ I want to find the value of $$\lim\limits_{n \to \infty}n\left(e-\left(1+\frac{1}{n}\right)^n\right)$$
I have already tried using L'Hôpital's rule, only to find a seemingly more daunting limit.
|
Let $P_n = (1+1/n)^n$. Then
$$\log{P_n} = n \log{\left ( 1+\frac1{n} \right )} = n \left (\frac1{n} - \frac1{2 n^2} + \frac1{3 n^3} - \cdots \right) = 1-\frac1{2 n} + \frac1{3 n^2} - \cdots$$
$$\therefore P_n = e^1e^{-1/(2 n)+ 1/(3n^2)-\cdots} = e \left (1-\frac1{2 n} + \frac{11}{24 n^2} \cdots \right ) $$
Thus
$$\lim_{n \to \infty} n (e-P_n) = \lim_{n \to \infty} e \left(\frac12-\frac{11}{24n} \cdots\right)=\frac{e}{2} $$
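As a numerical sanity check (my own addition, not part of the answer), one can evaluate $n\,(e - (1+1/n)^n)$ for growing $n$ and watch it approach $e/2 \approx 1.35914$. Computing $(1+1/n)^n$ as $\exp(n\,\mathrm{log1p}(1/n))$ keeps the result accurate for large $n$.

```python
# Evaluate n * (e - (1 + 1/n)^n) for increasing n and compare with e/2.
import math

for n in [10**3, 10**5, 10**7]:
    p_n = math.exp(n * math.log1p(1.0 / n))   # (1 + 1/n)^n, computed stably
    print(n, n * (math.e - p_n))
# the printed values approach e/2; the deviation shrinks like 11e/(24n)
```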
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1308303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
Is a subset of an inner product space also an inner product space? My question may seem trivial but it's important that I know this. I know for a fact that a subspace of an inner product space is also an inner product space, but how about an arbitrary subset? Could I argue that since we are allowed to pick any subset, we could pick one that is not a subspace and since it is not a subspace it is not eligible as an inner product space?
Thank you!
|
A subset of an inner product space does not have to be an inner product space.
As you said, we can pick a subset that is not a subspace; such a subset is not even a vector space, so it cannot be an inner product space.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1308369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How to compute the determinant of an $n \times n$ matrix? I have this example:
$$\left|\begin{matrix}
-1 & 2 & 2 & \cdots & 2\\
2 & -1 & 2 & \cdots & 2\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
2 & 2 & 2 & \cdots & -1\end{matrix}\right|$$
When first row is multiplied by $2$ and added to second, to $nth$ row, determinant is:
$$\left|\begin{matrix}
-1 & 2 & 2 & \cdots & 2\\
0 & 3 & 6 & \cdots & 6\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
0 & 6 & 6 & \cdots & 3\end{matrix}\right|$$
Now using laplace expansion on first column:
$$-\left|\begin{matrix}
3 & 6 & \cdots & 6\\
\vdots & \vdots & \ddots & \vdots \\
6 & 6 & \cdots & 3\end{matrix}\right|$$
Is it possible to get recursive relation?
What to do next?
Thanks for replies.
|
Your recursive approach is fine; just follow it through. Let $D_n(a,b)$ be the determinant of the matrix with diagonal elements $a$ and all other elements $b$; clearly $D_1(a,b)=a$. For $n>1$, multiplying the first row by $-b/a$ and adding it to every other row gives $0$'s in the first column (except for an $a$ in the upper left), $a - b^2/a$ along the rest of the diagonal, and $b - b^2/a$ everywhere else. Using the Laplace expansion on the first column gives
$$
D_n(a,b)=a D_{n-1}\left(a-\frac{b^2}{a}, b-\frac{b^2}{a}\right)=aD_{n-1}\left(\frac{(a-b)(a+b)}{a},\frac{b(a-b)}{a}\right).
$$
Now, rescaling all $n-1$ remaining rows by $a/(a-b)$ gives
$$
D_n(a,b)=\frac{(a-b)^{n-1}}{a^{n-2}}D_{n-1}(a+b,b).
$$
Expanding this out, then,
$$
D_n(a,b)=\frac{(a-b)^{n-1}}{a^{n-2}}\cdot\frac{a^{n-2}}{(a+b)^{n-3}}\cdot\frac{(a+b)^{n-3}}{(a+2b)^{n-4}}\cdots \frac{(a + (n-3)b)}{1}D_1(a+(n-1)b, b) \\
= (a-b)^{n-1}\left(a + (n-1)b\right),
$$
since the product telescopes and all terms cancel except the first numerator and the final $D_1$ factor. The solution to the original problem is
$$
D_n(-1,2)=(-1-2)^{n-1}\left(-1 + (n-1)2\right)=(-3)^{n-1}(2n-3).
$$
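The closed form can be cross-checked numerically; here is a small sketch of mine (not part of the answer) that computes $D_n(-1,2)$ by exact rational Gaussian elimination and compares it with $(-3)^{n-1}(2n-3)$.

```python
# Verify D_n(-1, 2) = (-3)^(n-1) * (2n - 3) for small n,
# using exact arithmetic to avoid floating-point rounding.
from fractions import Fraction

def det(matrix):
    """Determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in matrix]
    n, result = len(m), Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:                      # row swap flips the sign
            m[col], m[pivot] = m[pivot], m[col]
            result = -result
        result *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return result

for n in range(1, 8):
    a = [[-1 if i == j else 2 for j in range(n)] for i in range(n)]
    assert det(a) == (-3) ** (n - 1) * (2 * n - 3)
print("formula verified for n = 1..7")
```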
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1308502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Compactness of a group with a bounded left-invariant metric Let $G$ be a group equipped with a left-invariant metric $d$: that is, $(G,d)$ is a metric space and $d(xy,xz) = d(y,z)$ for all $x,y,z \in G$. Suppose further that $(G,d)$ is connected, locally compact, and bounded. Must $(G,d)$ be compact?
I can show from the local compactness that $(G,d)$ is complete. I don't see how to get it to be totally bounded, but I also can't think of a counterexample.
If it helps, you can assume that $(G,d)$ is a topological group (i.e. right translation and inversion are homeomorphisms; we already know that left translation is an isometry). I would even be interested in the case where $G$ is a finite-dimensional Lie group and $d$ induces the manifold topology, but I do not want to assume that $d$ is induced by a left-invariant Riemannian metric.
|
The answer is no. Consider $\mathbb R$ with the bounded left invariant metric
$$d(x,y)=\min(|x-y|,1)$$
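A quick numerical check (my own addition, not part of the answer) that the truncated metric really satisfies the required axioms: symmetry, boundedness, the triangle inequality, and invariance under translation (which is left invariance for the group $(\mathbb R, +)$), tested on a small grid of sample points.

```python
# Sanity-check the axioms of d(x, y) = min(|x - y|, 1) on R.
def d(x, y):
    return min(abs(x - y), 1.0)

samples = [0.7 * i - 3 for i in range(10)]
for x in samples:
    for y in samples:
        assert d(x, y) == d(y, x) <= 1.0                    # symmetric, bounded
        assert abs(d(x + 5.3, y + 5.3) - d(x, y)) < 1e-9    # translation invariant
        for z in samples:
            assert d(x, z) <= d(x, y) + d(y, z) + 1e-12     # triangle inequality
print("metric axioms hold on the sample grid")
```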
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1308598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Statistics: Why doesn't the probability of an accurate medical test equal the probability of you having disease? Suppose there is a test for Disease A that is correct 90% of the time. You had this test done, and it came out positive. I understand that the chance that this test is right is 90%, but I thought this would mean the chance that you have the disease should be 90% too. However, according to Bayes' rule, your chance of disease depends on the percentage of the population that has this disease. It sounds absurd: if the test is correct then you have it, and if it's not then you don't, 90% of the time, so there should be a 90% chance that the results are right for you...
But on other hand, say 100% of population has it. Then regardless of the chance the test says you have it, let it be 90% or 30%, your chance is still 100%... now all of a sudden it doesn't sound absurd.
Please avoid using heavy notation as I'm not a statistics expert. It just obscures the insight for me.
|
Let's say 1 in 1000 people have the disease and we test 1000 people with a test that has a 10% fail rate.
Of those 1000 people, about 1 actually has the disease and 999 don't. The test flags the 1 sick person, but it also wrongly flags about 10% of the 999 healthy people: roughly 100 false positives. So among the roughly 101 positive results, only 1 belongs to someone who is actually sick. Given a positive test, your chance of having the disease is about 1%, not 90%.
In this instance (1000 people) I haven't shown any false negatives. If we were showing 10,000 people we could include one.
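The same base-rate calculation can be written out with Bayes' rule directly (a sketch I am adding, using this answer's numbers: prevalence 1/1000, test wrong 10% of the time):

```python
# P(disease | positive test) via Bayes' rule.
prevalence = 1 / 1000          # P(disease)
accuracy = 0.9                 # P(test correct), for sick and healthy alike

true_positive = accuracy * prevalence                 # sick and flagged
false_positive = (1 - accuracy) * (1 - prevalence)    # healthy but flagged

p_disease_given_positive = true_positive / (true_positive + false_positive)
print(p_disease_given_positive)  # ~0.0089: under 1%, despite a "90% accurate" test
```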
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1308656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 12,
"answer_id": 11
}
|
Proof of absolute inequality I am new to proofs and would like some help understanding how to prove the following abs inequality.
$$| -x-y | \leq |x| + |y|.$$
I think I should take out the negative in the left absolute value function.? Then prove for the triangle inequailty.
|
$$|-x-y| = |-1(x+y)| = |-1|\,|x+y| = |x+y|,$$
so the statement to prove is exactly the triangle inequality
$$|x+y|\leq |x| + |y|.$$
I think that you can take it from here as long as you use the fact that $|x| = \max(x, -x)$
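As a small numerical sanity check (my own addition, not part of the proof), the inequality holds on a grid of values, with equality whenever $x$ and $y$ have the same sign:

```python
# Check |-x - y| <= |x| + |y| on a grid, and equality when x*y >= 0.
vals = [0.5 * i for i in range(-6, 7)]
for x in vals:
    for y in vals:
        assert abs(-x - y) <= abs(x) + abs(y)
        if x * y >= 0:                       # same sign, or one is zero
            assert abs(-x - y) == abs(x) + abs(y)
print("inequality holds on the grid")
```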
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1308798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Dividing an elliptic curve point by an integer Can we solve the equation $nP = Q$, where $P$ and $Q$ are rational points on an elliptic curve ($P$ is unknown), and $n$ is an integer?
|
If $E$ is an elliptic curve, the set $E(\Bbb C)$ of its complex points can be identified via Weierstrass theory with the quotient $\Bbb C/\Lambda$ where $\Lambda$ is some lattice, i.e. $\Lambda=\Bbb Z z_1\oplus\Bbb Z z_2$ for some complex numbers $z_1$, $z_2$ which are linearly independent over $\Bbb R$.
Thus, if $Q\in E$ is a point, the equation $nP=Q$ always has $n^2$ solutions over the complex numbers, which under the above identification can be written as
$$
P=\frac1nQ+t,\qquad t\in\frac{n^{-1}\Lambda}{\Lambda}.
$$
If $Q$ is a rational point (meaning that it has rational coordinates in some Weierstrass model of $E$), you cannot expect any of the points $P$ to be rational too, though. In general, the coordinates of $P$ will lie in some number field of degree up to $n^2$ over $\Bbb Q$.
This is also clear from the Mordell-Weil theorem. It says that the group $E(\Bbb Q)$ of rational points of $E$ is finitely generated, i.e. abstractly isomorphic to a group of the form $T\oplus\Bbb Z^r$ where $T$ is a finite abelian group. Now, no finitely generated abelian group is divisible. Concretely, if $Q$ happens to be, for instance, one of the generators of the free part, no point $P\in E(\Bbb Q)$ has the property that $nP=Q$ for any $n>1$.
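Over $\Bbb Q$ there is nothing to search exhaustively, but as an illustration only (my own sketch; the curve $y^2 = x^3 + 2x + 3$, the prime $97$, and $n = 2$ are arbitrary choices of mine) one can brute-force $nP = Q$ over a finite field and see that the solution set is either empty or a coset of the $n$-torsion subgroup:

```python
# Brute-force 2P = Q on y^2 = x^3 + 2x + 3 over F_97.
p, a, b = 97, 2, 3
O = None                    # point at infinity, the group identity

def add(P, Q):
    """Affine chord-and-tangent addition on the curve over F_p."""
    if P is O: return Q
    if Q is O: return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O            # P + (-P) = O; also covers doubling a 2-torsion point
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(n, P):
    """n*P by repeated addition (fine for tiny n)."""
    R = O
    for _ in range(n):
        R = add(R, P)
    return R

points = [O] + [(x, y) for x in range(p) for y in range(p)
                if (y * y - x ** 3 - a * x - b) % p == 0]
Q = points[1]                                   # some point on the curve
solutions = [P for P in points if mul(2, P) == Q]
print(len(points), "points on the curve;", len(solutions), "solutions of 2P = Q")
```

(The three-argument `pow` with exponent $-1$ computes a modular inverse and needs Python 3.8+.)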
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1308897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|