Q | A | meta |
|---|---|---|
Projection Matrix and Orthogonal Complements We have a projection matrix, $P = A(A^T A)^{-1} A^T$. The columns of $A$, we're given, form a basis for some subspace $W$. I need to prove that for any vector in the orthogonal complement of $W$, if we act on it with this projection matrix, we get that vector.
I'm unsure even where to start. I know the definition of the orthogonal complement (though I can't quite see how to use it, as the proof does not seem to involve inner products), and that we can write any vector in the subspace as a linear combination of the columns of $A$, since those columns form a basis for the subspace. We can also take the original space, say $V$ (with $W \subset V$), and write any vector as a sum of a vector in $W$ and a vector in its orthogonal complement. This last fact is the only possible starting point I can think of.
I'd greatly appreciate any insights on this, particularly on how to start the proof, as I'd very much like to work through it and figure it out. Thanks in advance.
| Note that since $P$ is the projection matrix onto $W$, for any vector in the orthogonal complement of $W$, acting on it with this projection matrix gives the zero vector.
What you are claiming is true for the matrix $I-P$, which is the projection matrix onto the orthogonal complement of $W$.
Refer to How to prove the complement $P^\perp$ of a projection matrix $P$ have relation $I-P=P^\perp$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2735910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How solve the following PDE? I want to solve the following PDE:$$
\begin{cases}
u_{tt}-c^2u_{xx}=0, \quad x\in\mathbb{R},\ t\geq x\\
u(x,x)=φ(x), \quad x\in\mathbb{R}\\
u_t(x,x)=0, \quad x\in\mathbb{R}
\end{cases}
$$
where $φ:\mathbb{R} \to \mathbb{R}$, $φ\in C^1(\mathbb{R})$.
Thanks for your help.
| $\def\d{\mathrm{d}}$Case 1: $c^2 \neq 1$. Make the substitution $(y, s) = (x - c^2 t, t - x)$; then$$
u_t = u_s - c^2 u_y, \quad u_{tt} = u_{ss} - 2c^2 u_{ys} + c^4 u_{yy}, \quad u_{xx} = u_{yy} - 2u_{ys} + u_{ss},
$$
and the equations become$$
\begin{cases}
(1 - c^2) u_{ss} - (c^2 - c^4) u_{yy} = 0, \quad y \in \mathbb{R},\ s > 0\\
u\bigr|_{s = 0} = φ\left( \dfrac{y}{1 - c^2} \right), \quad y \in \mathbb{R}\\
(u_s - c^2 u_y)\bigr|_{s = 0} = 0. \quad y \in \mathbb{R}
\end{cases} \tag{1}
$$
Because $u\bigr|_{s = 0} = φ\left( \dfrac{y}{1 - c^2} \right)$, then$$
u_y\bigr|_{s = 0} = \frac{\d}{\d y} \left( φ\left( \frac{y}{1 - c^2} \right) \right) = \frac{1}{1 - c^2} φ'\left( \frac{y}{1 - c^2} \right),
$$
and (1) becomes$$
\begin{cases}
u_{ss} - c^2 u_{yy} = 0, \quad y \in \mathbb{R},\ s > 0\\
u\bigr|_{s = 0} = φ\left( \dfrac{y}{1 - c^2} \right), \quad y \in \mathbb{R}\\
u_s\bigr|_{s = 0} = \dfrac{c^2}{1 - c^2} φ'\left( \dfrac{y}{1 - c^2} \right). \quad y \in \mathbb{R}
\end{cases} \tag{2}
$$
Thus\begin{align*}
u(y, s) &= \frac{1}{2} \left( φ\left( \frac{y + cs}{1 - c^2} \right) + φ\left( \frac{y - cs}{1 - c^2} \right) \right) + \frac{1}{2c} \int_{y - cs}^{y + cs} \dfrac{c^2}{1 - c^2} φ'\left( \dfrac{ξ}{1 - c^2} \right) \,\d ξ\\
&= \frac{1}{2} \left( φ\left( \frac{y + cs}{1 - c^2} \right) + φ\left( \frac{y - cs}{1 - c^2} \right) \right) + \frac{c}{2} \left( φ\left( \frac{y + cs}{1 - c^2} \right) - φ\left( \frac{y - cs}{1 - c^2} \right) \right)\\
&= \frac{1 + c}{2} φ\left( \frac{y + cs}{1 - c^2} \right) + \frac{1 - c}{2} φ\left( \frac{y - cs}{1 - c^2} \right),
\end{align*}
and$$
u(x, t) = \frac{1 + c}{2} φ\left( \frac{x + ct}{1 + c} \right) + \frac{1 - c}{2} φ\left( \frac{x - ct}{1 - c} \right).
$$
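As a sanity check, this closed form can be verified numerically with finite differences (a sketch; $φ = \sin$ and $c = 2$ are arbitrary choices of mine, as are the step size and tolerances):

```python
import math

c = 2.0           # any c with c^2 != 1
phi = math.sin    # sample C^1 characteristic datum
h = 1e-4          # finite-difference step

def u(x, t):
    # the closed-form Case 1 solution derived above
    return ((1 + c) / 2 * phi((x + c * t) / (1 + c))
            + (1 - c) / 2 * phi((x - c * t) / (1 - c)))

def residuals(x):
    # data on the line t = x: u(x, x) = phi(x) and u_t(x, x) = 0
    data_err = abs(u(x, x) - phi(x))
    ut = (u(x, x + h) - u(x, x - h)) / (2 * h)
    # PDE residual u_tt - c^2 u_xx at an interior point t = x + 1
    t = x + 1.0
    utt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h ** 2
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return data_err, abs(ut), abs(utt - c * c * uxx)

checks = [residuals(x) for x in (-1.3, 0.7, 2.0)]
```

All three residuals vanish to discretization accuracy, as expected for a sum of functions of $x \pm ct$.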
Case 2: $c^2 = 1$. Make the substitution $(y, s) = (x, t - x)$; then$$
u_t = u_s, \quad u_{tt} = u_{ss}, \quad u_{xx} = u_{yy} - 2u_{ys} + u_{ss},
$$
and the equations become$$
\begin{cases}
2u_{ys} - u_{yy} = 0, \quad y \in \mathbb{R},\ s > 0\\
u\bigr|_{s = 0} = φ(y), \quad y \in \mathbb{R}\\
u_s\bigr|_{s = 0} = 0, \quad y \in \mathbb{R}
\end{cases} \tag{3}
$$
which implies$$
\begin{cases}
2u_s - u_y = η(s), \quad y \in \mathbb{R},\ s > 0\\
u\bigr|_{s = 0} = φ(y), \quad y \in \mathbb{R}
\end{cases} \tag{4}
$$
where $η(s)$ is a function to be determined. By the method of characteristics, the general solution to (4) is$$
u(y, s) = \frac{1}{2} \int_0^s η(ξ) \,\d ξ + φ\left( y + \frac{s}{2} \right),
$$
and plugging it back into (3) yields$$
0 = u_s\bigr|_{s = 0} = \frac{1}{2} η(0) + \frac{1}{2} φ'(y). \quad \forall y \in \mathbb{R}
$$
For the existence of solutions to the original equations, $φ'$ must therefore be the constant $-η(0)$, i.e. $φ(y) = φ(0) - η(0)\, y$, and then$$
u(x, t) = \frac{1}{2} \int_0^{t - x} η(ξ) \,\d ξ + φ\left( \frac{x + t}{2} \right).
$$
It can be verified directly that every $u(x, t)$ of the form above is indeed a solution.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2736043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Polar Coordinate function of a Straight Line I was having some trouble trying to come up with a polar coordinate function for a straight line equation.
I know it is not good to post images here, but please bear with me as the question requires us to solve the equation from the straight line in the image.
What I have done is I tried to come up with an implicit function for the straight line L.
2x + 3y - 6 = 0
But I am not sure how to continue from there to come up with a polar coordinate function. Any ideas?
Thanks!
| If $\alpha$ is the counterclockwise (CCW) angle that the normal to the line $L$ makes with the positive $x$-axis, and $p$ is the distance from the origin to $L$, then the Cartesian (normal) form of the line is
$$ x \cos\alpha + y \sin \alpha = p $$
and in polar coordinates, after substituting $x = r\cos\theta$, $y = r\sin\theta$,
$$ r = p \sec \left( \theta - \alpha \right ). $$
For the line $ax + by = c$ with $c > 0$, one has $p = c/\sqrt{a^2 + b^2}$ and $\alpha = \tan^{-1}(b/a)$; here $2x + 3y = 6$ gives $p = 6/\sqrt{13}$ and $\alpha = \tan^{-1}(3/2)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2736228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 3
} |
Find the locus of points $P$ such that the distances from $P$ to the sides of a given triangle can themselves form a triangle.
Find the locus of points $P$ within a given $\triangle ABC$ and such that the distances from $P$ to the sides of the given triangle can themselves be the sides of a certain triangle.
Join $PA,PB,PC$ and let the perpendiculars from $P$ meet $BC,CA,AB$ in $D,E,F$ respectively. Then, $$PA^2 = PE^2 + AE^2 = PF^2 + FA^2$$ $$PB^2 = PD^2 + BD^2 = PF^2 + BF^2$$ $$PC^2 = EC^2 + PE^2 = PD^2 + CD^2$$
$$\therefore PD = \sqrt{PC^2 - CD^2} = \sqrt{PB^2 - BD^2}$$ $$PE = \sqrt{PA^2 - AE^2} = \sqrt{PC^2 - CE^2}$$ $$PF = \sqrt{PB^2 - BF^2} = \sqrt{PA^2 - AF^2} $$
How do I proceed using this ?
| If the triangle is equilateral, then the sum of the distances from an arbitrary point inside the triangle to the three sides is equal to the height of the triangle, which is a constant. So the three perpendiculars are the side lengths of a triangle exactly when each of them is shorter than half the height of the given triangle. The point $P$ must therefore lie inside the midpoint (medial) triangle of the given triangle.
For a triangle in general, I used Excel to generate random points and plot all points satisfying the condition.
My conclusion is that $P$ must be a point inside the incentral triangle of the given triangle.
We first consider the limiting case when the sum of the lengths of two perpendiculars is equal to that of the third one.
Let $BE$ be the bisector of $\angle ABC$ and $CF$ the bisector of $\angle ACB$, where $E$ lies on $AC$ and $F$ lies on $AB$. Let $G$ be a point on $EF$.
Let $X_1$, $X_2$ and $X_3$ be the feet of perpendicular to $BC$ from $E$, $F$ and $G$ respectively. Let $Y_2$ and $Y_3$ be the feet of perpendicular to $AC$ from $F$ and $G$ respectively. Let $Z_1$ and $Z_3$ be the feet of perpendicular to $AB$ from $E$ and $G$ respectively.
Note that $EX_1=EZ_1$ and $FX_2=FY_2$.
We have $\displaystyle \frac{GX_3-EX_1}{FX_2-EX_1}=\frac{EG}{EF}$, $\displaystyle \frac{EZ_1-GZ_3}{EZ_1}=\frac{EG}{EF}$ and $\displaystyle \frac{GY_3}{FY_2}=\frac{EG}{EF}$.
Therefore,
$$\frac{GX_3-EX_1}{FX_2-EX_1}=\frac{EZ_1-GZ_3}{EZ_1}=\frac{GY_3}{FY_2}$$
$$\frac{GX_3-EX_1}{FX_2-EX_1}=\frac{EX_1-GZ_3}{EX_1}=\frac{GY_3}{FX_2}$$
So,
$$GX_3-EX_1=GY_3-(EX_1-GZ_3)$$
$$GX_3=GY_3+GZ_3$$
(Note: this equality holds also for external points of division of $EF$ if we consider negative lengths.)
If $P$ is a point inside $BCEF$, let $Q$ be a point on $EF$ such that $QP$ is perpendicular to $BC$. The distance from $P$ to $BC$ is less than the distance from $Q$ to $BC$. The distance from $P$ to $AC$ is greater than the distance from $Q$ to $AC$ and the distance from $P$ to $AB$ is greater than the distance from $Q$ to $AB$. We can now conclude that the distance from $P$ to $BC$ is less than the sum of the distance from $P$ to $AB$ and the distance from $P$ to $AC$.
Let $D$ be a point on $BC$ such that $AD$ bisects $\angle BAC$.
Repeating the above arguments, the perpendiculars from $P$ to the three sides of the triangle are the lengths of a triangle if and only if $P$ lies inside $\triangle DEF$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2736311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Double integral with strange Change of Variables I am trying to compute the following integral,
$$\iint_{\mathbb{R}^2} \left(\frac{1-e^{-xy}}{xy}\right)^2 e^{-x^2-y^2}dxdy$$
First I tried substituting $x=r\cos{\theta}, y=r\sin{\theta}$ but it didn't really give me anything.
For the second try, I tried $u=x^2+y^2, v=xy$, but after computing the Jacobian it got really messy and I couldn't continue further.
Would there be a way to change the variables so that I would be able to compute this integral?
| $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\bbox[10px,#ffd]{\ds{\iint_{\mathbb{R}^2}
\pars{1 - \expo{-xy} \over xy}^{2}\expo{-x^{2} - y^{2}}\dd x\,\dd y}}
\\[5mm] = &\
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\
\overbrace{\pars{\int_{0}^{1}\expo{-xya}\dd a}}
^{\ds{\expo{-xy} - 1 \over -xy}}\
\overbrace{\pars{\int_{0}^{1}\expo{-xyb}\dd b}}
^{\ds{\expo{-xy} - 1 \over -xy}}
\expo{-x^{2} - y^{2}}\dd x\,\dd y
\\[5mm] = &\
\int_{0}^{1}\int_{0}^{1}\
\overbrace{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\exp\pars{-x^{2} - \bracks{a + b}xy - y^{2}}\dd x\,\dd y}
^{\ds{2\pi \over \root{4 - \pars{a + b}^{2}}}}\
\,\dd a\,\dd b
\\[5mm] = &\
2\pi\int_{0}^{1}\int_{0}^{1}{\dd a\,\dd b \over \root{4 - \pars{a + b}^{2}}} =
\bbx{{4 \over 3}\,\pi\pars{3 - 3\root{3} + \pi}} \approx 3.9603
\end{align}
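A direct numerical check of the boxed value (a sketch; the box size, grid, and tolerance are my choices — note the integrand extends continuously to $1$ on the axes $xy = 0$, which the code handles with `expm1`):

```python
import math

def integrand(x, y):
    v = x * y
    if abs(v) < 1e-12:
        g = 1.0                      # limit of (1 - e^{-v}) / v as v -> 0
    else:
        g = -math.expm1(-v) / v      # (1 - e^{-v}) / v, stable for small v
    return g * g * math.exp(-x * x - y * y)

# midpoint rule on [-8, 8]^2; the truncated tail lies along x = -y and
# decays like 1/s^4, so the cutoff error is ~1e-3
L, n = 8.0, 400
hstep = 2 * L / n
total = 0.0
for i in range(n):
    x = -L + (i + 0.5) * hstep
    for j in range(n):
        y = -L + (j + 0.5) * hstep
        total += integrand(x, y)
approx = total * hstep * hstep

exact = 4 / 3 * math.pi * (3 - 3 * math.sqrt(3) + math.pi)  # ~3.9603
```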
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2736383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Number of ways to partition a set of $n$ indistinguishable items into $r$ groups I came across the problem where I have to count the number of ways a set of $n$ indistinguishable items can be partitioned into $r$ groups.
For example,
*if $n=2$ and $r=2$, I can do partitioning as follows:
$*|*$
*if $n=3$ and $r=2$, I can do partitioning as follows:
$*|**$
*if $n=4$ and $r=2$, I can do partitioning as follows:
$*|***$
$**|**$
*if $n=5$ and $r=2$, I can do partitioning as follows:
$*|****$
$**|***$
Is there any closed formula / equation which can come up with this count? If not closed, then some summation series which can give this count?
This resembles the stars and bars problem, with the difference that in stars and bars the groups are distinguishable. That is, $(**|***)$ and $(***|**)$ would be counted twice, separately. This is not the case here. My primary understanding is that here we cannot come up with a closed formula as in stars and bars problems. Am I correct?
| The number you are looking for is the number of partitions of $n$ into $k$ positive parts.
There is no closed formula for this, but one can use the recurrence relation:
$$
p_k(n) = p_k(n − k) + p_{k−1}(n − 1)
$$
with
$$
p_0(0) = 1 \text{ and } p_k(n) = 0 \text{ if } n ≤ 0 \text{ or } k ≤ 0.
$$
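The recurrence translates directly into a short memoized function (a sketch; the function name is mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(k, n):
    """Number of partitions of n into exactly k positive parts."""
    if k == 0 and n == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    # p_k(n - k): subtract 1 from every part; p_{k-1}(n - 1): at least one part is 1
    return p(k, n - k) + p(k - 1, n - 1)

# matches the examples in the question: n = 2, 3, 4, 5 with r = 2 groups
counts = [p(2, n) for n in (2, 3, 4, 5)]
```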
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2736507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof of convergence of $\int_{-∞}^0\frac{\sqrt{\left|x\right|}}{x^2+x+1}dx$ I want to show that the following integral converges. However, I am having trouble finding an estimate for the integrand that is sufficient for the proof. Could I use L'Hospital on this one?
$$\int_{-∞}^0\frac{\sqrt{\left|x\right|}}{x^2+x+1}dx$$
| $$\int_{-\infty}^{0}\frac{\sqrt{|x|}\,dx}{x^2+x+1} = \int_{0}^{+\infty}\frac{\sqrt{x}\,dx}{x^2-x+1} \stackrel{x\mapsto z^2}{=} \int_{0}^{+\infty}\frac{2z^2}{z^4-z^2+1}\,dz$$
equals (by parity)
$$\int_{-\infty}^{+\infty}\frac{dz}{z^2+\frac{1}{z^2}-1}=\int_{\mathbb{R}}\frac{dz}{\left(z-\frac{1}{z}\right)^2+1}\stackrel{\text{GMT}}{=}\int_{\mathbb{R}}\frac{dz}{z^2+1}=\color{red}{\pi}<+\infty. $$
$\text{GMT}$ stands for Glasser's Master Theorem.
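One can confirm the value numerically; here is a sketch that maps $[0,\infty)$ to $[0,1)$ via $z = t/(1-t)$ (so the transformed integrand is bounded, with limit $2$ at $t=1$) and applies Simpson's rule — the subinterval count and tolerance are my choices:

```python
import math

def f(z):
    # integrand 2 z^2 / (z^4 - z^2 + 1) from the substitution x = z^2
    return 2 * z * z / (z ** 4 - z * z + 1)

def g(t):
    # change of variables z = t / (1 - t), dz = dt / (1 - t)^2
    if t >= 1.0:
        return 2.0          # limit as z -> infinity, since f(z) ~ 2 / z^2
    z = t / (1 - t)
    return f(z) / (1 - t) ** 2

# composite Simpson's rule on [0, 1]
n = 2000                    # even number of subintervals
h = 1.0 / n
total = g(0.0) + g(1.0)
for i in range(1, n):
    total += (4 if i % 2 else 2) * g(i * h)
integral = total * h / 3
```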
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2736894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
number of solutions to the equation $x_1+x_2+x_3=2018$ with odd/even conditions on $x$. I have to find the number of solutions to $x_1+x_2+x_3=2018$,
with the following conditions :
*$x_1,x_2,x_3$ are even numbers.
*$x_1$ is even while $x_2$ and $x_3$ are odd.
I guess I have to use stars and bars in order to solve it, so
in case one I treated every two stars as one star and then solved the equation $x_1+x_2+x_3=1009$ (with stars and bars again). But in case two I'm stuck: I don't understand how to divide the stars correctly when odd and even conditions are involved.
appreciate your help very much!
| How many solutions are there for $x_2+x_3=2n$ with both $x_2$ and $x_3$ odd? Well, $x_2$ can range from $1$ to $2n-1$, so there are $n$ solutions. This means that, for a given $x_1=2k$, your equation $x_1+x_2+x_3=2018$ has $\frac{2018-x_1}{2}=1009-k$ solutions. Since $k$ may take values from $0$ to $1009$, the total number of solutions is $$\sum_{k=0}^{1009}{(1009-k)}=\sum_{k=0}^{1009}{k}=\frac{1009\times1010}{2}=509545.$$
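Both cases are small enough to confirm by brute force (a sketch; roughly $10^6$ iterations, with $x_1 \ge 0$ even and, in case two, $x_2, x_3 \ge 1$ odd — assumptions consistent with the count above):

```python
N = 2018

# case 1: x1, x2, x3 all even and nonnegative (x3 = N - x1 - x2 is then
# automatically even, but we keep the explicit check for clarity)
case1 = sum(1 for x1 in range(0, N + 1, 2)
              for x2 in range(0, N - x1 + 1, 2)
              if (N - x1 - x2) % 2 == 0)

# case 2: x1 even (>= 0), x2 and x3 odd (>= 1)
case2 = sum(1 for x1 in range(0, N + 1, 2)
              for x2 in range(1, N - x1 + 1, 2)
              if N - x1 - x2 >= 1 and (N - x1 - x2) % 2 == 1)
```

Case 1 agrees with stars and bars on $y_1+y_2+y_3 = 1009$, i.e. $\binom{1011}{2} = 510555$, and case 2 reproduces the $509545$ derived above.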
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2736963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Finding What Percentage a Plane is Covered By Pennies Touching Tangentially The question is the following:
Imagine covering an unlimited plane surface with a single layer of pennies, arranged so that each penny touches six others tangentially. What percentage of the plane is covered?
I can't seem to visually understand how a penny would touch six other pennies tangentially and I am, therefore, unable to understand how the plane is being covered in that pattern. Any help will be greatly appreciated.
| Consider this image:
The outer six pennies are touching the center penny tangentially. If you filled a plane with pennies like this, the shaded rectangular area would repeat.
If you calculate the area of the shaded rectangle, $A_{shaded}$, and the area covered by the pennies within the shaded rectangle, $A_{covered}$, the percentage of a plane covered by pennies touching tangentially is
$$100\% \cdot \frac{A_{covered}}{A_{shaded}}$$
Because the scale does not matter, you can assume the radius of the pennies to be $1$. (The scale does not matter, because you can use "the radius of a penny" as your measurement unit. Then, the area units are "the radius of a penny, squared". Because the area units cancel out, you are free to choose your length (and therefore area) units.)
To find out the height of the rectangle, consider the angle the centers of two outer pennies make with respect to the center of the center penny.
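If you carry the calculation through (with unit radius the repeating rectangle is $2 \times 2\sqrt{3}$ and contains the area of exactly two pennies), the coverage comes out to $\pi/(2\sqrt{3}) \approx 90.69\%$. A grid-sampling sketch over one repeating rectangle confirms this (the lattice layout and sample count are my choices):

```python
import math

S3 = math.sqrt(3)

# hexagonal lattice of unit pennies: centers at (2*i + j, sqrt(3)*j);
# each center is at distance 2 from its six neighbours
def covered(x, y):
    for j in range(0, 3):          # rows of centers within reach of the cell
        for i in range(-2, 3):
            cx, cy = 2 * i + j, S3 * j
            if (x - cx) ** 2 + (y - cy) ** 2 <= 1.0:
                return True
    return False

# sample the repeating 2 x 2*sqrt(3) rectangle on a uniform midpoint grid
n = 400
hits = sum(covered((ix + 0.5) * 2.0 / n, (iy + 0.5) * 2.0 * S3 / n)
           for ix in range(n) for iy in range(n))
ratio = hits / (n * n)
exact = math.pi / (2 * S3)         # ~0.9069, i.e. about 90.69%
```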
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2737059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove if $x\gt3$ then $1\ge\frac{3}{x(x-2)}$. I tried to prove it by contradiction.
Suppose it is not true that $1\ge\frac{3}{x(x-2)}$, so $1\lt\frac{3}{x(x-2)}$. Then $\frac{3}{x(x-2)}-1\gt0$. Since $x\gt3$, we have ${x(x-2)}\gt0$, so we may multiply both sides of $\frac{3}{x(x-2)}-1\gt0$ by ${x(x-2)}$:
${3-x(x-2)\gt0}$
${3-x^2+2x\gt0}$
${-x^2+2x+3\gt0}$
${-(x^2-2x-3)\gt0}$
${(x-3)(x+1)\lt0}$
At this point I really do not know what to do after this point or if I really even went about it the right way. Thank you for the help.
| $x > 3
\implies x-2 > 1
\implies x(x-2) > 3
\implies 1 > \dfrac{3}{x(x-2)}
$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2737144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Finding points on a line that are closest
Find the points that give the shortest distance between the lines$$\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}2-t\\-1+2t\\-1+t\end{pmatrix}\\\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}5+3s\\0\\2-s\end{pmatrix}$$
So I subtracted the second line from the first to get these two equations:
$\begin{pmatrix}-t-3s-3\\ -1+2t\\ t+s-3\end{pmatrix}\cdot \begin{pmatrix}-1\\ 2\\ 1\end{pmatrix}=0$
$\begin{pmatrix}-t-3s-3\\ -1+2t\\ t+s-3\end{pmatrix}\cdot \begin{pmatrix}3\\ 0\\ -1\end{pmatrix}=0$
I know I am supposed to rearrange them together to get a system of equations but I am not sure how.
Any help please?
| Another approach is to use calculus and minimize the distance between arbitrary points on the lines, which is a function of $s$ and $t$. Distance is non-negative, so it suffices to minimize the squared distance, which is easier to differentiate.
The squared distance $dsq(s,t)$ between two arbitrary points, one on each line, is $$dsq(s,t)=\big((2-t)-(5+3s)\big)^2+\big(-1+2t\big)^2+\big((-1+t)-(2-s)\big)^2.$$
Then compute the partial derivatives and solve $\dfrac{\partial\,dsq(s,t)}{\partial s}=\dfrac{\partial\,dsq(s,t)}{\partial t}=0$. Don’t bother to expand or simplify anything until after finding the derivatives with the chain rule.
You should get $12t+8s-4=8t+20s+12=0$, which is pretty easy to solve.
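Carrying out the solve (a sketch; the final checks confirm the connecting segment is perpendicular to both direction vectors, which is the geometric meaning of the vanishing partial derivatives):

```python
# solve 12 t + 8 s - 4 = 0 and 8 t + 20 s + 12 = 0, i.e. after dividing by 4:
# 3 t + 2 s = 1 and 2 t + 5 s = -3, by Cramer's rule
det = 3 * 5 - 2 * 2            # = 11
t = (1 * 5 - 2 * (-3)) / det   # = 1.0
s = (3 * (-3) - 2 * 1) / det   # = -1.0

p1 = (2 - t, -1 + 2 * t, -1 + t)   # closest point on the first line
p2 = (5 + 3 * s, 0, 2 - s)         # closest point on the second line
diff = tuple(a - b for a, b in zip(p1, p2))

d1 = (-1, 2, 1)   # direction of the first line
d2 = (3, 0, -1)   # direction of the second line
dot1 = sum(a * b for a, b in zip(diff, d1))
dot2 = sum(a * b for a, b in zip(diff, d2))
```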
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2737533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Show a normal family $\{f_n\} $ converges uniformly on compacts Let $\{f_n\}$ be a sequence of holomorphic functions on a domain $\Omega\subset \mathbb C$ which is bounded uniformly on compact subsets of $\Omega$. Let $\{z_k\}$ be a sequence of distinct points in $\Omega$
with $\lim_{k\to \infty} z_k = z_0 \in \Omega$. Assume that $\lim_{n\to\infty} f_n(z_k)$ exists for all $k$. Prove that
$\{f_n\}$ converges uniformly on compact subsets of $\Omega$.
So far I can show that for a disk $D$ centered at $z_k$, $\{f_n\}$ converges uniformly on $D$ if $\{f_n^{(i)}(z_k)\}$ converges for all $i$. But I don't know how this will help in proving uniform convergence on arbitrary compacts. Any help will be appreciated.
| It suffices to show that there exists a holomorphic $f$ with the property that every subsequence of $\{f_n\}$ has a further subsequence which converges to $f$: if this were to hold but $f_n\not\to f$, then by Montel’s theorem we can pass to a subsequence which converges to something that is not $f$, however this is impossible since passing to yet another subsequence shows that this “not $f$” is indeed $f$.
Now by Montel’s theorem yet again, every subsequence of $\{f_n\}$ has a further subsequence which converges to something; but by our condition on $\{z_k\}$, we know that all these convergent subsequences must agree: this is a consequence of the identity principle (the set of points where they agree has an accumulation point).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2737664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
$\mathbb{C}^2\setminus \{(z,z)\ :\ z\in \mathbb{C}\}$ is path connected. I am puzzled as to why $$ \mathbb{C}^2\setminus \{(z,z)\ :\ z\in \mathbb{C}\} $$ is path connected.
I was thinking that it is the same as $\mathbb{R}^4\setminus\mathbb{R}^2$, but I am still unable to prove that the above space is path connected.
| Just as $\Bbb R^3$ minus a line (an affine subspace of codimension two) is path-connected, one expects $\Bbb R^4$ minus a plane (also of codimension two) to be path-connected.
First let us see why $\Bbb R^3$ minus a line is path-connected. Consider $\Bbb R^3\setminus L$, where $L=\{(0,0,x_3)\}$, and the cylinder $C=\{(x_1,x_2,x_3):x_1^2+x_2^2=1\}$. The cylinder is visibly path-connected. Each point $(x_1,x_2,x_3)$ of $\Bbb R^3\setminus L$ can be connected to $C$ by a straight-line path in the direction $(x_1,x_2,0)$ or its reverse. So the whole of $\Bbb R^3\setminus L$ is path-connected.
For your case, the set would be $\Bbb R^4\setminus P$ with $P=\{(0,0,x_3,x_4)\}$. You again prove that the cylinder $C=\{(x_1,x_2,x_3,x_4):x_1^2+x_2^2=1\}$ is path-connected, and any point $(x_1,x_2,x_3,x_4)$ on $\Bbb R^4\setminus P$ can be connected to the cylinder by the straight line path that goes in the direction $(x_1,x_2,0,0)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2737780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Supremum of a sequence - same as supremum of set? This is more of a notational/terminology question than anything. I was refreshing my knowledge of the lemmas in the monotone convergence theorem, and saw the claim "If a sequence of real numbers is increasing and bounded above, then its supremum is the limit." I haven't really thought of sequences as having a supremum, as I thought this was reserved for sets. So if $(a_n)$ is a sequence indexed by $\mathbb{N}$, are we defining the supremum of $(a_n)$ as $\sup(\{a_n \mid n \in \mathbb{N}\})$? If not, how do we extend the notion of supremum to sequences?
| The supremum is the supremum of the range of the sequence $(a_{n})$, that is, $\sup\{a_{n}: n=1,2,...\}$.
So several sequences may have the same supremum, especially when they have the same range:
$(x_{n})=(1,0,1,0,1,0,...)$ and $(y_{n})=(0,1,0,1,0,1,...)$ are different sequences, but they have the same supremum $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2737883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to predict the maxima and minima of a tidal wave? Assume that a planet of mass $m$ and radius $r$ is orbiting a massive star of mass $M$ at an orbital distance $R$. Assume that the planet is covered in an ideal, non-viscous ocean on a frictionless surface. Hence the ocean will align itself along the equipotential surface generated by the two masses. Let the maximum height of the water bulge from the planet's surface be $H$ and the minimum height be $h$.
What are the mathematical relations between the above values?
My effort at a solution
I found a related question on Physics SE. It references Hale Bradt, Tidal distortion: Earth tides and Roche lobes, which is too complex for me to understand.
It would greatly help me if someone could simplify the article and present me with the formulas connecting the above quantities (without proof if needed).
I am a high school student and I need this for a Olympiad Problem which I am trying to solve.
I understand Olympiad level Physics and Astronomy.
| Equation 20 in that article seems to provide what you are looking for. It describes the displacement (from spherical) of the equipotential at the surface of the planet (where $\phi$ is the polar coordinate with 0 pointing towards the other mass) .
Using the terms you have defined the equation is
$$ dr=\frac{M} {m} \frac{r^4}{2 R^3} \left( 3 \cos^2 \phi - 1\right) $$
The maximum bulge will be towards the star
$$dr(\phi =0) = H = \frac{M} {m} \frac{r^4}{R^3} $$
While on the perpendicular axis the sphere will be flattened
$$dr(\phi =90^\circ) = h = - \frac{M} {m} \frac{r^4}{2R^3} $$
You can see in that article how they have used this to calculate the 54cm value for earth tides.
To get the actual radius of the planet at a particular position you would add $dr$ to $r$.
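Note the mass ratio must be (perturber mass)/(planet mass) — $M/m$ in the question's notation — for the numbers to come out right. A quick arithmetic check with rough Earth–Moon values (assumed here) recovers the ~54 cm figure the article mentions:

```python
# rough SI values (assumptions of mine): Moon mass, Earth mass,
# Earth radius, Earth-Moon distance
M_moon  = 7.342e22
m_earth = 5.972e24
r_earth = 6.371e6
R_orbit = 3.844e8

# equipotential displacement dr = (M/m) * r^4 / (2 R^3) * (3 cos^2(phi) - 1)
H = (M_moon / m_earth) * r_earth ** 4 / R_orbit ** 3            # bulge, phi = 0
h = -(M_moon / m_earth) * r_earth ** 4 / (2 * R_orbit ** 3)     # squash, phi = 90 deg
total_tide = H - h   # peak-to-trough range, about 0.54 m for Earth
```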
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2738004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
The "assumption" in proof by induction The second step in proof by induction is to:
Prove that if the statement is true for some integer $n=k$, where
$k\ge n_0$ then it is also true for the next larger integer, $n=k+1$
My question is about the "if"-statement. Can we just assume that indeed the statement is true? If we assume it, then the proof works... but isn't that similar to the following "proof":
Let $N$ be the largest positive integer.
Since $1$ is a positive integer, we must have $N\ge1$.
Since $N^2$ is a positive integer, it cannot exceed the largest positive integer.
Therefore, $N^2\le N$ and so $N^2-N\le0$.
Thus, $N(N-1)\le0$ and we must have $N-1\le0$.
Therefore, $N\le1$. Since also $N\ge1$, we have $N=1$.
Therefore, $1$ is the largest positive integer.
The only thing that is wrong with this "proof" is that we falsely assume there actually exists a largest positive integer.
So in both the above case and in proof by induction we make an assumption. In the former, the assumption leads to a false conclusion. What is the difference with proof by induction? Why is assuming that the hypothesis is true valid there, and why doesn't it lead to a similar contradiction?
EDIT: the "proof" above is not mine, it is taken from Calculus a Complete Course 8th edition as an example of why existence proofs are important.
| The proof by induction is based on the following statement
\begin{aligned}
& \left [\mathcal {P}(0) \land \left( \mathcal {P}(n) \implies \mathcal {P}(n+1)\; \forall n\geq 0\right)\right] \\
& \implies \mathcal {P}(n)\; \forall n\geq 0 \, ,
\end{aligned}
where $\mathcal {P}$ is a predicate over the natural integers $\Bbb N$.
As soon as one has shown the inheritance property $\mathcal {P}(n) \implies \mathcal {P}(n+1)$ (with the "if") and the initialization $\mathcal {P}(0)$, then $\mathcal {P}(n)$ holds $\forall n$. If only the inheritance property is proven, one can obtain absurd results such as "all natural numbers are larger than $\pi$" (corresponding predicate $\mathcal {P}(n) : n \geq \pi$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2738115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 6,
"answer_id": 3
} |
Prove $\operatorname{Hom}_{\mathbb{Z}}(\mathbb{Z}/n\mathbb{Z},A)\cong A[n]$ by using left-exactness of Hom? Let $A$ be an abelian group, then $\operatorname{Hom}_{\mathbb{Z}}(\mathbb{Z}/n\mathbb{Z},A)\cong A[n]$, where $A[n]=\{a\in A:na=0\}$. How do I prove this by using left-exactness of Hom? (This is Exercise 2.7 from Rotman's An Introduction to Homological Algebra.)
I tried applying the functor $\operatorname{Hom}_\mathbb{Z}(-,A)$ to the short exact sequence $0\to\mathbb{Z}\xrightarrow{\mu_n}\mathbb{Z}\xrightarrow{\pi}\mathbb{Z}/n\mathbb{Z}\to0$, where $\mu_n$ is multiplication by $n$ and $\pi$ is the canonical projection, but it didn't work.
| When I do that I get
$$0\to\text{Hom}(\Bbb Z/n\Bbb Z,A)\to A\to A$$
is exact where the $A\to A$ map is multiplication by $n$. That gives
me $\text{Hom}(\Bbb Z/n\Bbb Z,A)\cong A[n]$ straight away.
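For a finite example one can count both sides directly (a sketch with $A = \mathbb{Z}/12\mathbb{Z}$ and $n = 8$, choices of mine; a homomorphism out of $\mathbb{Z}/n\mathbb{Z}$ is determined by the image of $1$, which the brute-force check below rediscovers):

```python
n, m = 8, 12   # compare Hom(Z/8Z, Z/12Z) with (Z/12Z)[8]

# the n-torsion subgroup A[n] of A = Z/mZ
A_n = sorted(a for a in range(m) if (n * a) % m == 0)

# brute-force the homomorphisms f: Z/nZ -> Z/mZ: each candidate is the
# tuple (f(0), ..., f(n-1)) with f(k) = k*a mod m for some a = f(1);
# f is additive on Z/nZ exactly when n*a = 0 in Z/mZ
homs = []
for a in range(m):
    f = [(k * a) % m for k in range(n)]
    additive = all(f[(j + k) % n] == (f[j] + f[k]) % m
                   for j in range(n) for k in range(n))
    if additive:
        homs.append(a)
```

Both lists come out to the four multiples of $3$ in $\mathbb{Z}/12\mathbb{Z}$, matching $\mathbb{Z}/\gcd(8,12)\mathbb{Z}$.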
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2738245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to denote an arbitrary expression involving some number of dummy variables? Let me give you an example.
$(n)_{n=0}^{\infty}$ is a reference to a particular Sequence object.
$$ 0, 1, 2, \ldots $$
Is another way to reference the same sequence object.
$a_n = n$ for all $n\in \mathbb{N}$, $(a_n)_{n=0}^{\infty}$ is yet another way to reference that Sequence object. You get the point.
However, the notation $(\cdot )_{n=0}^{\infty}$, where $\cdot$ is some arbitrary expression involving the symbol $n$, is a sort of template for referencing Sequence objects, like $(\frac{1}{n+1} )_{n=0}^{\infty}$, or something. Thus, I'd like to be able to say:
One way to denote a sequence is $(arb\_expression(n))_{n=0}^{\infty}$. Where arb_expression means you can substitute any "well-defined" expression there involving one dummy symbol $n$. Is there a well-defined mathematical notation for doing so?
EDIT: to be clear, I'm asking about how to define, in a mathematically pleasing way, the way in which Sequence objects are denoted (and of course, extending beyond sequences). I know it can be expressed well enough in words (I just did so), but I'm asking for a method that is "mathematically pleasing," in a sense.
| Hint: Note that sequences $(a_n)_{n=0}^\infty$ are just functions with domain $\mathbb{N}$. So, whatever can be said about functions with domain $\mathbb{N}$ can also be said about sequences.
When considering for instance a real-valued sequence $(a_n)_{n=0}^\infty$ we can equivalently consider a function
\begin{align*}
&f:\mathbb{N}\rightarrow \mathbb{R}\\
&f(n)=a_n
\end{align*}
and each rule which is mathematically valid can be applied to specify $a_n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2738426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Intersection between two lines 3D I'm writing a program for a university project in C#. I have to find the intersection between two lines in 3D space. Lines are specified by a point lying on the line and by its direction vector. I'm bad at math, therefore I don't know in which direction I have to move to solve this task. I will be very grateful for the help.
| Since in the comments I mentioned something numerical, I would like to provide a method to compute the minimal distance between two lines in 3D. Then, if the distance is zero (or sufficiently small), you can deduce that the two lines intersect.
Let the points $a=(a_x, a_y, a_z)$, $b=(b_x, b_y, b_z)$ on the first line, and $c=(c_x, c_y, c_z)$, $d=(d_x,d_y,d_z)$ on the second line.
Let the directional vectors $v_1=b-a$, $v_2=d-c$ (component-wise operation). And another vector $w=d-a$.
The distance between two lines is the height of the following parallelepiped:
For parallelepipeds, (height) = (volume) / (base area). Compute the volume by determinant of $v_1,v_2,w$ and area by the norm of cross product $|v_1\times v_2|$:
$$\textrm{distance between lines} = \textrm{height} = \frac{|\left|\begin{matrix}
v_{1x} & v_{2x} & w_x\\
v_{1y} & v_{2y} & w_y\\
v_{1z} & v_{2z} & w_z\end{matrix}\right||}{\sqrt{\left|\begin{matrix}
v_{1y} & v_{1z}\\
v_{2y} & v_{2z}
\end{matrix}\right|^2+\left|\begin{matrix}
v_{1z} & v_{1x}\\
v_{2z} & v_{2x}
\end{matrix}\right|^2+\left|\begin{matrix}
v_{1x} & v_{1y}\\
v_{2x} & v_{2y}
\end{matrix}\right|^2}}$$
Exception:
There is an exception that the two lines are parallel or the same. Then the denominator becomes zero. Thus you have to check that whether the two vectors $v_1, v_2$ are proportional at first:
*
*$v_1\parallel v_2\nparallel w\implies$ parallel lines
*$v_1\parallel v_2\parallel w\implies$ same line
Edit: Find the intersection
To find the intersection, we have to find how many $v_1$ for $a$ to add to arrive the intersection (and same for $c, v_2$), ie, calculate $s,t\in\mathbb R$ for $$\textrm{intersection}=a+tv_1=c+sv_2$$ in which "+" is component-wise and $s,t$ are scalar multiplications. This is totally the same as gimusi's answer.
So we are solving this set of equations:
$$\left\{\begin{aligned}
a_x+tv_{1x}&=c_x+sv_{2x}\\
a_y+tv_{1y}&=c_y+sv_{2y}\\
a_z+tv_{1z}&=c_z+sv_{2z}
\end{aligned}\right.$$
It only requires two equations to solve for $t,s$. So we solve the first two equations by Cramer's rule:
$$t=\frac{\left|\begin{matrix}
c_x-a_x & -v_{2x}\\ c_y-a_y & -v_{2y}
\end{matrix}\right|}{\left|\begin{matrix}
v_{1x} & -v_{2x}\\ v_{1y} & -v_{2y}
\end{matrix}\right|}$$
So the intersection is $$\textrm{intersection}=a+tv_1=(a_x,a_y,a_z)+\frac{\left|\begin{matrix}
c_x-a_x & -v_{2x}\\ c_y-a_y & -v_{2y}
\end{matrix}\right|}{\left|\begin{matrix}
v_{1x} & -v_{2x}\\ v_{1y} & -v_{2y}
\end{matrix}\right|}(v_{1x}, v_{1y},v_{1z})$$
Exception:
If the denominator of $t$ is zero, then solve $t$ by another equations:
$$t=\frac{\left|\begin{matrix}
c_y-a_y & -v_{2y}\\
c_z-a_z & -v_{2z}
\end{matrix}\right|}{\left|\begin{matrix}
v_{1y} & -v_{2y}\\
v_{1z} & -v_{2z}
\end{matrix}\right|}=\frac{\left|\begin{matrix}
c_z-a_z & -v_{2z}\\
c_x-a_x & -v_{2x}
\end{matrix}\right|}{\left|\begin{matrix}
v_{1z} & -v_{2z}\\
v_{1x} & -v_{2x}
\end{matrix}\right|}$$
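To turn the recipe above into code for the original question, here is an illustrative Python sketch (a hypothetical translation to be ported to C#, not the asker's actual program). It first measures the distance between the lines, then solves for $t$ by Cramer's rule, trying coordinate pairs until the $2\times2$ determinant is nonzero:

```python
# Lines given by two points each: line 1 through a, b; line 2 through c, d.

def sub(p, q):  # component-wise p - q
    return tuple(pi - qi for pi, qi in zip(p, q))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui*vi for ui, vi in zip(u, v))

def line_distance(a, b, c, d):
    """Minimal distance between the lines through a,b and c,d (parallelepiped height)."""
    v1, v2, w = sub(b, a), sub(d, c), sub(d, a)
    n = cross(v1, v2)
    area = dot(n, n) ** 0.5            # |v1 x v2|, the base area
    if area == 0:
        raise ValueError("lines are parallel or identical")
    volume = abs(dot(n, w))            # |det(v1, v2, w)|
    return volume / area

def intersection(a, b, c, d):
    """Intersection point, solving a + t*v1 = c + s*v2 by Cramer's rule.
    Assumes the lines actually intersect (check line_distance first)."""
    v1, v2 = sub(b, a), sub(d, c)
    for i, j in ((0, 1), (1, 2), (2, 0)):      # try coordinate pairs
        den = v1[i]*(-v2[j]) - (-v2[i])*v1[j]
        if den != 0:
            num = (c[i]-a[i])*(-v2[j]) - (-v2[i])*(c[j]-a[j])
            t = num / den
            return tuple(a[k] + t*v1[k] for k in range(3))
    raise ValueError("lines are parallel")

# Two lines meeting at (1, 1, 1):
p = intersection((0, 0, 0), (2, 2, 2), (1, 1, 0), (1, 1, 2))
```

Porting to C# is direct, since only basic arithmetic on triples is used.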
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2738535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does this condition imply that $f$ is locally Lipschitz or Lipschitz? Suppose there are constants $\delta > 0$ and $M < \infty$ such that for all $x \in \mathbb{R}$, $|f(x+t)−f(x)| \leq M|t|$ for all $t \in (−\delta,\delta)$. Then is $f$ locally Lipschitz or Lipschitz on $\mathbb{R}$? And if so, why?
Thank you.
| $f$ is Lipschitz.
To prove this, consider arbitrary $x,y \in \Bbb R$ with $x<y$. Divide the interval $[x,y]$ into $N$ small subintervals $[x_i,x_{i+1}]$ (with $x_{i+1}-x_i < \delta$, $x_0=x$ and $x_N=y$).
Then
$$|f(y)-f(x)| = |f(x_N)-f(x_0)| \le \sum_{i=0}^{N-1} |f(x_{i+1})-f(x_i)| \le M \sum_{i=0}^{N-1} |x_{i+1}-x_i| = M|y-x|$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2738703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If M is an oriented topological n-Manifold, is M - {x} oriented? Does removing a point from an oriented topological manifold result in a non-oriented manifold?
I know that if M - {x} is oriented, then M is oriented because we can use the two fold cover given in 3.3 of Hatcher. However, I am not able to see converse is true.
| No: you have this the wrong way round: removing a point from an orientable manifold can't make it unorientable (essentially because $\Bbb{R}^n$ and $\Bbb{R}^n \setminus \{0\}$ are both orientable allowing you to convert an oriented atlas for $M$ into an oriented atlas for $M\setminus \{x\}$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2738830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Cancelling out of $\frac{n!}{(2n)!}$? Perhaps a rather strange question. I am doing an exercise, and the last part ends with:
$I = \frac{n!}{(2n)!}$
Now I know that:
$$I = \frac{n(n-1)(n-2)(n-3)\cdots}{2n(2n-1)(2n-2)(2n-3)\cdots }$$
Is it possible to cancel out some terms? Or to make it a little bit more 'friendly'?
| Note that $$I = \frac{n(n-1)(n-2)(n-3) \cdots 2 \cdot 1}{2n(2n-1)(2n-2)(2n-3) \cdots 2 \cdot 1} $$$$= \frac{n(n-1)(n-2)(n-3) \cdots 2 \cdot 1}{2^n \times n(2n-1)(n-1)(2n-3)(n-2) \cdots 1 \cdot 1}$$ and after cancelling out we get that$$ I = \frac{1}{2^n (2n-1) (2n-3) \cdots 3 \cdot 1} = \frac{1}{2^n} \prod_{i=1}^n \frac{1}{2i -1} $$This is one possible expression, but I think yours is more concise.
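As a quick sanity check of this closed form (an illustrative Python snippet using exact rational arithmetic, not part of the original answer):

```python
# Verify n!/(2n)! = 1 / (2^n * 1*3*5*...*(2n-1)) exactly for small n.
from fractions import Fraction
from math import factorial

def lhs(n):
    return Fraction(factorial(n), factorial(2 * n))

def rhs(n):                                # 1 / (2^n * (2n-1)!!)
    odd = 1
    for i in range(1, n + 1):
        odd *= 2 * i - 1
    return Fraction(1, 2**n * odd)

checks = all(lhs(n) == rhs(n) for n in range(1, 25))
```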
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2738966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 1
} |
Linear Algebra- Subspace proof involving operator Let $\mathbb{R}^\mathbb{R}$ be the real vector space of all functions $f:\mathbb{R}→\mathbb{R}$ and let $∆:\mathbb{R}^\mathbb{R} →\mathbb{R}^\mathbb{R}$ be the linear operator defined by $$∆[f](x) := f(x + 1) − f(x).$$
(a) As usual, $∆^2 := ∆ ◦ ∆$ denotes the composition of $∆$ with itself. Prove that the subset $W$, consisting of all functions $f ∈ \mathbb{R}^\mathbb{R}$ such that $(∆^2)[f] + (7∆)[f] + 3f = 0$, is a linear subspace.
(b) Is the endomorphism $∆ ∈ \mathrm{End}(\mathbb{R}^\mathbb{R})$ injective? Explain.
I am confused by the use of $∆$ and I don't understand how squaring $∆$ actually affects the equation the linear operator is defined by. While I understand how the subspace test works, I am wondering if someone can explain how they would go about proving it for this subspace.
| I'll leave (b) for now, and instead clarify the meaning of $\Delta^2$.
Specifically, let's consider a simple example, $f(x) = x^2$. Then say $g = \Delta f$, so $g(x) = f(x+1) - f(x) = 2x+1$. But what about $\Delta^2$? Well, $$\Delta^2 f= (\Delta \circ \Delta) f = \Delta (\Delta f) = \Delta g,$$
and $\Delta g = g(x+1) - g(x) = 2$, so $\Delta^2 f(x) = 2$.
Now, onto the subspaces. First observe that $\Delta$ is a linear operator, so $\Delta^2$ is also a linear operator. From here, you know $\Delta^2[f+g] = \Delta^2 f + \Delta^2 g$, so you should be able to continue the calculations to see $W$ is a linear subspace.
(Alternatively, you could observe that $\Delta^2 + 7 \Delta + 3$ is a linear operator also, and $W$ is its kernel, so $W$ is a linear subspace.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2739108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The number $555,555$ can decompose, as the product of two factors of three digits, in how many ways?
The number $555,555$ can decompose, as the product of two factors of three digits, in how many ways?
I've seen the answer to the question, and there is only one way:
Since $555, 555 = 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 \cdot 37$, the only way to combine the factors to achieve expressing it as a product of two three-digit numbers is $(3 \cdot 7 \cdot 37) (5 \cdot 11 \cdot 13)$. Regardless of this, I struggle to understand how the answer was formulated. Can someone show me the procedure?
Sorry if the question is poorly phrased, it is a rough translation of the original problem in Spanish.
| We can write $$555,555=5\times111,111$$ and notice that $111=37\times3$, so we have $$555,555=5\times37\times3003$$ and since $1001=7\times11\times13$, the prime factorization is $$555,555=3\times5\times7\times11\times13\times37.$$
Now, $37$ must appear in one of the two factors, and since $37\times28=1036>999$, its companion factors can multiply to at most $27$; the possibilities from $\{3,5,7,11,13\}$ are a single prime, $3\times5=15$, or $3\times7=21$.
If $37$ pairs with a single prime, the other factor contains four primes, and the smallest such product is $3\times5\times7\times11=1155$, which has four digits.
If $37$ pairs with $3\times5$, we get $555\times1001$, and $1001=7\times11\times13$ again has four digits.
Therefore, the only possible combination for $555,555=P_1\times P_2$ is $$P_1=5\times11\times13=715,\quad P_2=3\times7\times37=777.$$
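A brute-force search (illustrative Python, added for verification) confirms that $715\times777$ is the only such factorization:

```python
# Enumerate all three-digit divisor pairs (p, q) with p * q = 555555 and p <= q.
N = 555_555
pairs = [(p, N // p) for p in range(100, 1000)
         if N % p == 0 and 100 <= N // p <= 999 and p <= N // p]
```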
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2739192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 4
} |
Which of the following is/are closed sets Which of the following is/are closed sets
*
*$S=\{(x,x\sin\frac{1}{x}) \mid 0\le x \le1\}\cup\{(0,0)\}$
*$T=\{(x,x^2\sin\frac{1}{x}) \mid 0\le x \le1\}\cup\{(0,0)\}$
My idea
for (1).
$f(x)=x\sin \frac{1}{x}$
then $|f(x)|=|x\sin \frac{1}{x}|\le |x|\cdot|\sin \frac{1}{x}|\le |x|$
and $S'=\{(x,x\sin \frac{1}{x}) \mid 0\le x \le1\}\cup\{(0,0)\} $
And hence $S' \subseteq S$ $\implies$ $S$ is closed and same as $T$ also
Am I correct?
| MINOR ISSUE : In the definitions of $S$ and $T$ given in the question, one should ensure $x \neq 0$ in the first component of both set definitions.
I get what you are trying to do, but you have not written things clearly.
The idea in both, is that one should use the limit point definition. You should start with a convergent sequence in the set, and show that the limit is also in the set. This is not visible in your attempt.
Let us start with the first one.
Consider a convergent sequence $(x_n, x_{n} \sin \frac 1{x_n}) \to (x,y) \in \mathbb R^2$. We want to show that $(x,y) \in S$.
To do this, note that componentwise convergence occurs, so definitely $x_n \to x$. However, since $0 \leq x_n \leq 1$ is true, we know that $0 \leq x \leq 1$,by the fact that $[0,1]$ is closed in $\mathbb R$.
Next, if $x \neq 0$, then $x \sin \frac 1x$ is continuous at $x$, since it's a product/composition of continuous functions at $x$. So, in that case, $x_n \sin \frac 1{x_n} \to x \sin \frac 1x$ happens as well. By uniqueness of limits, we get $y = x \sin \frac 1{x}$. Therefore, $(x, y) = (x, x\sin \frac 1{x})$ for some $0 \leq x \leq 1$, which means $(x,y) \in S$.
If $x = 0$, then one must check that $\lim_{n \to \infty} x_n \sin \frac 1{x_n} = 0$. Here, we use the estimate that $|\sin \frac 1x| \leq 1$ for all $x$, therefore by squeeze theorem $-x_n \leq x_n \sin \frac 1{x_n} \leq x_n$ for all $n$, and both right and left hand side have limit zero. Hence, here too the result holds i.e. $y = 0$, so we get $(x,y) = (0,0) \in S$.
Hence, $S$ is closed.
For $T$, nothing changes (yes, that means it is closed). In fact, the proof is exactly the same, even there you must use the squeeze theorem similarly. You can almost copy paste what I wrote for $S$. Try it as an exercise.
Indeed, the nature of $S$ and $T$ is that they are "graphs" of functions i.e. when we plot a (single variable) function $f(x)$ on paper, the set that we get on the two dimensional plane is its graph. Properties of $f$ are linked to properties of its graph.
For example, you can show that $f$ is continuous if and only if its graph is closed and bounded. Thus, to study a function one can often study its graph and vice versa.
The best part is, in infinite dimensions, with results like the closed graph theorem, and the way we define closedness of unbounded operators etc., the above bond is actually a very subtle and deep one, but one which will require a lot more context to appreciate at your level.
REQUEST : There may be better answers, so you can wait for some time before deciding to accept somebody else's / my answer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2739296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If A is similar to B then $A^2$ is similar to $B^2$ I'm not sure how to begin to prove this; all I know is that for two matrices to be similar, the following equation must hold:
$A = PBP^{-1}$
Any help will be appreciated.
| The definition of $A \sim B$ is that there exists an invertible matrix $P$ of the same dimensions as $A$ and $B$ that satisfies $A = PBP^{-1}$.
You just need to show that:
$$A^{2} = AA = (PBP^{-1})(PBP^{-1}) = PB(P^{-1}P)BP^{-1} = PBIBP^{-1} = PBBP^{-1} = PB^{2}P^{-1}$$
And then conclude that, in fact, there exists an invertible matrix $P$ such that
$$A^{2} = PB^{2}P^{-1}\Rightarrow A^{2} \sim B^{2}$$
And you're done.
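The identity is easy to check numerically; here is a minimal Python sketch with a concrete choice of $B$ and $P$ (any invertible $P$ works, these particular matrices are just an illustration):

```python
# Check that A = P B P^{-1} implies A^2 = P B^2 P^{-1} on a 2x2 example.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[2, 1], [0, 3]]
P = [[1, 1], [1, 2]]                       # det P = 1, so P is invertible
Pinv = [[2, -1], [-1, 1]]                  # exact integer inverse of P

A = matmul(matmul(P, B), Pinv)             # A = P B P^{-1}, so A ~ B
A_sq = matmul(A, A)                        # A^2
similar_sq = matmul(matmul(P, matmul(B, B)), Pinv)   # P B^2 P^{-1}
```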
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2739418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Do there exist pairs of distinct real numbers whose arithmetic, geometric and harmonic means are all integers? I realized an interesting property today: all pairs $(a,b)$ belonging to the infinite set $$\{(a,b): a=(2l+1)^2, b=(2k+1)^2;\ l,k \in N;\ l,k\geq1\}$$ have both their AM and GM integers.
Now I wonder if there exist distinct real numbers $(a,b)$ such that their arithmetic mean, geometric mean and harmonic mean (AM, GM, HM) all three are integers. Also, I wonder if a stronger result for $(a,b)$ both being integers exists.
I tried proving it, but I did not find it easy. For the AM, it is easy to assume a real $a$ and an AM $m_1$ such that the second real $b$ equals $2m_1-a$. For the GM, we get a condition that $m_2=\sqrt{(2m_1-a)a}$. If $m_2$ is an integer, then… what? I am not sure exactly how we can restrict the possible values of $a$ and $m_1$ in this manner.
| Expanding on Christian Blatter's answer.
There are a few key points.
*
*The arithmetic mean of two rational numbers is always rational.
*The harmonic mean of two non-zero rational numbers is always rational.
*The geometric mean of two squared positive integers is always an integer.
*For all three types of mean if we multiply every input by a positive real value we also multiply the result by that same value.
These key points lead to a strategy for finding numbers whose am, gm and hm are all integers.
*
*pick a pair of integers whose GM is an integer.
*calculate the AM and HM
*multiply through by the denominators of the AM and HM.
Now to work this through, pick any two distinct positive integers $x$ and $y$.
$$\mathrm{GM}(x^2,y^2) = xy$$
$$\mathrm{AM}(x^2,y^2) = \frac{x^2+y^2}{2}$$
$$\mathrm{HM}(x^2,y^2) = \frac{2x^2y^2}{x^2+y^2}$$
Let $t = 2(x^2 + y^2)$ Let $a=tx^2$ Let $b=ty^2$. Since only addition, multiplication and squaring of positive integers is involved it is clear that $t$, $a$ and $b$ are all positive integers. It is also clear that a and b are distinct.
$$\mathrm{GM}(a,b) = txy$$
$$\mathrm{AM}(a,b) = t\frac{x^2+y^2}{2} = (x^2+y^2)^2$$
$$\mathrm{HM}(a,b) = t\frac{2x^2y^2}{x^2+y^2} = 4x^2y^2$$
Again since all these values can be calculated merely by adding, multiplying and squaring positive integers they are all positive integers.
Lets plug in some numbers, for example $x=1$ and $y=2$
$$t = 10$$
$$a = 10$$
$$b = 40$$
$$\mathrm{GM}(10,40) = 20$$
$$\mathrm{AM}(10,40) = 25$$
$$\mathrm{HM}(10,40) = 16$$
Indeed we can extend this technique to find an arbitrarily large list of integers such that the AM, GM, and HM of any subset are integers. Just start with integers of the form $x^{n!}$ so the GMs are all integers. Then work out the AMs and HMs and multiply through.
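The worked example can be verified mechanically; the following Python sketch (illustrative, using exact rationals) implements the construction:

```python
# Construction: a = t*x^2, b = t*y^2 with t = 2(x^2 + y^2); check all three means.
from fractions import Fraction
from math import isqrt

def means(a, b):
    am = Fraction(a + b, 2)
    hm = Fraction(2 * a * b, a + b)
    return am, hm

x, y = 1, 2
t = 2 * (x*x + y*y)            # t = 2(x^2 + y^2) = 10
a, b = t * x*x, t * y*y        # a = 10, b = 40
am, hm = means(a, b)
gm = isqrt(a * b)              # GM = sqrt(ab); an integer here since ab = 400
```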
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2739592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "70",
"answer_count": 7,
"answer_id": 0
} |
Let $f\colon X \rightarrow Y$ be a continuous and surjective function. Show that if $X$ is compact, then so is $Y$. Here is my attempt at answering the above question.
(I feel that there are gaps in my knowledge in this topic and don't have a sound understanding of what a covering actually is but here goes!)
As f is surjective, there are $x_1, x_2∈ X$ s.t $f(x_1)=y_1, f(x_2)=y_2$. And f is continuous if for every open $v⊆Y, f^{-1}(v)$ is open in X.
If X is compact, then every open covering $U=(u_i)_{i∈I}$ has a finite subcovering.
Therefore, $f^{-1}(y_1)=x_1$ which is also an open covering. Thus Y is compact.
| Correct me if wrong:
Hopefully adding a bit of detail to José Carlos' solution.
1) Let $O_i$, $i \in I$, be an open cover of $Y$,
i.e. $(\cup O_i)_{i \in I} =Y.$
$X \subset f^{-1}(Y)$, or $X \subset f^{-1}(\cup O_i)_{i \in I}$.
2)$X \subset f^{-1}(\cup O_i)_{ i \in I}=$
$(\cup f^{-1}(O_i))_{i \in I}$.
3) Since $f$ is continuous,
$f^{-1}(O_i)_{ i \in I}$ is open, and
$\cup f^{-1}(O_i)_{i \in I}$ is an open cover of $X$
4) Since $X$ compact there is a finite subset $I_f$ of $I$, such that
$X \subset \cup f^{-1}(O_i)_{i \in I_f}$.
5) $Y=f(X)=f(\cup f^{-1}(O_i)_{ i \in I_f})=$
$\cup f(f^{-1}(O_i)_{i \in I_f})= \cup (O_i)_{i \in I_f}$, i.e
a finite cover of $Y$.
We used $f(f^{-1}(O_i))=O_i$, $i \in I_f$, since $f$ is surjective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2739680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Limit of $\lim_{n\to\infty}(z+z^{-1})^n$ I'd like to compute the limit $\lim_{n\to\infty}(z+z^{-1})^n$ for $z\neq 0$.
My attempt was to use the polar representation of complex numbers. So
$$(z+z^{-1})^n=\left(r_1(\cos\theta+i\sin\theta)+\frac{1}{r_2(\cos\phi+i\sin\phi)}\right)^n$$
and using de Moivre's formula we get
$$r_1^n e^{in\theta}+\frac{1}{r_2^n e^{in\phi}}.$$ How do I continue from here?
| Let
$$z+\frac1z=w.$$
Obviously, $w^n$ converges to zero if $|w|<1$ or to one if $w=1$.
By solving the above identity for $z$,
$$z=\frac{w\pm\sqrt{w^2-4}}2=\frac{re^{i\theta}\pm\sqrt{r^2e^{i2\theta}-4}}2$$ with $r<1$, or
$$z=\frac{1\pm i\sqrt3}2.$$
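A numeric illustration of the two regimes (Python sketch, not part of the original answer): for $z=0.9i$ we get $w=z+z^{-1}=-0.2\overline{1}i$ with $|w|<1$, so $w^n\to0$, while $z=\frac{1+i\sqrt3}{2}$ gives $w=1$, so $w^n\to1$:

```python
z1 = 0.9j
w1 = z1 + 1 / z1                      # w1 = -0.2111...j, |w1| < 1
z2 = (1 + 1j * 3**0.5) / 2
w2 = z2 + 1 / z2                      # w2 = 1 (up to floating-point rounding)

small = abs(w1**200)                  # should be essentially 0
near_one = abs(w2**200 - 1)           # should be essentially 0
```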
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2739792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Help Looking for a Function with Particular Properties I'm sorry to bother everyone, but I've been searching for functions that satisfy several properties, and so far, I've yet to be able to think of any! Specifically, the properties needed are:
$f(0)=0$ and $f(1)=1$
And on the interval $[0,1]$:
*
*$f(x)$ is differentiable (though this could be relaxed a bit, the smoother, the better)
*$f(x) \geq x$
*$f(x)-x$ Is maximized as close to $x=1$ as possible. (Again, only between [0,1]. So, for example, for $f(x)=\sqrt{x}$, the argmax of $f(x)-x$ would be at $x=.25$ While I can find several with this value being $\leq .5$, I'm having a lot of trouble finding nice examples where this falls above $.25$!)
Where $f(x)$ really only needs to be defined over the unit interval.
I'm sorry if these properties aren't clear- just let me know and I'll try to explain better/be more specific if desired!
Thank you so much for your responses!!!
EDIT 1: Wow, you guys are astoundingly quick!! I'm so, so sorry, but I realized I (incredibly stupidly) left out two crucial properties:
*
*$f(x)$ is weakly increasing on $[0,1]$
*$0 \leq f(x) \leq 1$ (Though this can be done artificially by rescaling, so I don't think this should be an issue)
Again, I'm so sorry for forgetting to enter them into the original problem (I was trying to figure out the numbered list format, deleted the old copy, and forgot to add it back!!)
| Consider the polynomial (in particular, infinitely differentiable): $$f(x)=-2.05026 x^4 + 3.74339 x^3 - 2.11177 x^2 + 1.418651 x$$
This satisfies $1\ge f(x)\ge x$ (on $[0,1]$) and has $f(x)-x$ maximized at
$x\approx 0.81$.
$f(x)-x$ was found using Wolfram|Alpha by interpolating the five points $$(0,0),(1,0),(0.9,0.05),(0.3,0.02),(0.7,0.05)$$
The first and second are problem conditions. $(0.9,0.05)$ was chosen to have a peak near $0.9$, and to have a gentle slope (between $0$ and $-1$) at the right endpoint. The other two were tweaked to keep $1\ge f(x)-x\ge 0$ for the desired interval, and to keep the rightmost peak the maximal peak.
(revised to meet updated conditions)
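The claimed properties of this polynomial can be checked on a grid (illustrative Python, added for verification):

```python
# The interpolated polynomial from the answer, evaluated on a fine grid of [0, 1].
def f(x):
    return -2.05026*x**4 + 3.74339*x**3 - 2.11177*x**2 + 1.418651*x

xs = [i / 10000 for i in range(10001)]
gap = [f(x) - x for x in xs]
argmax_x = xs[max(range(len(xs)), key=gap.__getitem__)]   # where f(x) - x peaks
```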
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2740058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Homomorphism from $S_n$ to an abelian group Any homomorphism from $S_n$ to an abelian group $G$ is given by $\;f(\sigma) = e$, if $\sigma$ is an even permutation, and $f(\sigma)= a$, where order of $a =2$, if $\sigma$ is an odd permutation.
I can prove that this is a homomorphism, but what guarantees that there is no other homomorphism other than this, and why are all even permutations mapped onto the identity of $G$?
I only know that under a homomorphism, identity of $G_1$ goes to identity of $G_2$ and kernels are normal subgroups ($A_n$ are normal), but why there are no possibilities in which kernel is not $A_n$?? I am confused.
| Hint If $f : S_n \to G$ is a group homomorphism and $f((i,j))=a$, then $a^2=e$ in $G$.
Therefore, $f$ takes each transposition in some element of order 2.
Next, if $G$ is abelian, use the fact that
$$(i,j)(1,i)(i,j)=(1,j)$$
And
$$(1,j)(1,i)(1,j)=(i,j)$$
to deduce that
$$f((1,i))=f((1,j))=f((i,j))\quad\text{for all transpositions } (i,j)$$
Finally, write each permutation as a product of transpositions.
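For a concrete check, here is a small Python sketch (an illustration of the hint, using permutations of $\{0,1,2\}$ for $S_3$): the sign map is a homomorphism into the abelian group $\{1,-1\}$ and sends every transposition to the same element of order $2$.

```python
from itertools import permutations

def sign(p):                               # parity via inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def compose(p, q):                         # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

S3 = list(permutations(range(3)))
is_hom = all(sign(compose(p, q)) == sign(p) * sign(q) for p in S3 for q in S3)
transpositions = [p for p in S3 if sum(p[i] != i for i in range(3)) == 2]
```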
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2740191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Finding an invertible matrix P and some matrix C Find an invertible matrix $P$ and a matrix $C$ of the form
$C=\begin{pmatrix}a & -b\\b & a\end{pmatrix}$
such that the given matrix $A$ has the form $A = PCP^{-1}$
$A=\begin{pmatrix}5 & -2\\1 & 3\end{pmatrix}$
The first thing i tried to do was to find the eigenvectors of matrix $A$ and i got these vectors (which i glued together to get matrix $P$ and $P^{-1}$)
$P=\begin{pmatrix}1+ i& 1-i\\1 & 1\end{pmatrix}$
$P^{-1}=\begin{pmatrix}\frac{1}{2i} & \frac{-1+i}{2i}\\-\frac{1}{2i} & \frac{1+i}{2i}\end{pmatrix}$
Im not sure how to find the matrix $C$, i thought at first i could plug in the eigenvalues in the $C$ matrix, but i don't think that is what they problem i asking me to do.
Any help will be appreciated
| Per this question, $A$ has eigenvalues $4\pm i$, so it is similar to a matrix of the form $$C=\begin{bmatrix}4&-1\\1&4\end{bmatrix}.$$ This answer shows how to construct an appropriate basis without computing any eigenvalues explicitly. Note that the resulting matrix in the question has the opposite signs from what we want on the $\beta$’s in our matrix, but we can flip the signs by taking $\mathbf v_2=\frac1\beta B\mathbf v_1$.
Following this method, we have $$B = \begin{bmatrix}5&-2\\1&3\end{bmatrix} - \begin{bmatrix}4&0\\0&4\end{bmatrix} = \begin{bmatrix}1&-2\\1&-1\end{bmatrix}.$$ Taking $\mathbf v_1=(1,0)^T$, we have $\mathbf v_2=(1,1)^T$, therefore $$P = \begin{bmatrix}1&1\\0&1\end{bmatrix}, P^{-1}=\begin{bmatrix}1&-1\\0&1\end{bmatrix}.$$
Since you’ve gone to the trouble of finding eigenvectors of $A$, the second part of the linked question suggests another way to find $P$. Since $A$ is real, then for any complex vector $\mathbf v$, $$A(\Re\mathbf v) = \frac12A(\mathbf v+\bar{\mathbf v}) = \frac12(A\mathbf v+A\bar{\mathbf v}) = \frac12(A\mathbf v+\overline{A\mathbf v}) = \Re(A\mathbf v)$$ and similarly $A(\Im\mathbf v)=\Im(A\mathbf v)$. Let $\mathbf v_r$ and $\mathbf v_i$ be linearly independent real vectors such that $\mathbf v_r+i\mathbf v_i$ is an eigenvector of $\alpha-i\beta$. (Proving that this is always possible is a relatively simple, but useful exercise.) Then $$A\mathbf v_r = \Re[(\alpha-i\beta)(\mathbf v_r+i\mathbf v_i)] = \alpha\mathbf v_r+\beta\mathbf v_i$$ and $$A\mathbf v_i = \Im[(\alpha-i\beta)(\mathbf v_r+i\mathbf v_i)] = \alpha\mathbf v_i-\beta\mathbf v_r.$$ Setting $P=\begin{bmatrix}\mathbf v_r&\mathbf v_i\end{bmatrix}$ and $J=\small{\begin{bmatrix}0&-1\\1&0\end{bmatrix}}$, we can write these as $$\begin{align}A\mathbf v_r &= P(\alpha I+\beta J)P^{-1}\mathbf v_r \\ A\mathbf v_i &= P(\alpha I+\beta J)P^{-1}\mathbf v_i\end{align}$$ and since $\mathbf v_r$ and $\mathbf v_i$ are linearly independent, this holds for all $\mathbf v$, therefore $A=P(\alpha I+\beta J)P^{-1}$. Note that this is consistent with the first method since from the expression for $A\mathbf v_r$ we obtain $\mathbf v_i = \frac1\beta(A-\alpha I)\mathbf v_r$.
For your matrix, you’ve found that $(1-i,1)^T$ is an eigenvector of $4-i$. This splits into real and imaginary parts $(1,1)^T$ and $(-1,0)^T$, respectively, which gives $P=\small{\begin{bmatrix}1&-1\\1&0\end{bmatrix}}$.
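Both choices of $P$ in the answer can be verified directly (illustrative Python using plain nested lists):

```python
# Check A = P C P^{-1} for the two P's found above, with C = [[4,-1],[1,4]].

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[5, -2], [1, 3]]
C = [[4, -1], [1, 4]]

P1, P1inv = [[1, 1], [0, 1]], [[1, -1], [0, 1]]       # first method
P2, P2inv = [[1, -1], [1, 0]], [[0, 1], [-1, 1]]      # eigenvector method

A1 = matmul(matmul(P1, C), P1inv)
A2 = matmul(matmul(P2, C), P2inv)
```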
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2740287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Finding area of a triangle using equation of a circle
Without a calculator
Question reads:
The diagram shows a sketch of the circle with equation $x^2 + y^2 = 5$.
The $y$-coordinate of point $A$ is $-1$.
The tangent to the circle at $A$ crosses the axes at $B$ and $C$ as shown.
Find the area of triangle $OBC$
| Some hints:
*
*Using the $y$ coordinate of $A$, find the $x$ coordinate of $A$.
*Find the slope of line $OA$.
*From that, calculate the slope of the tangent line $BC$.
*Using point-slope form, calculate the intercepts.
*Profit!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2740373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Triangle inequality raised to fractional powers Does the inequality $|x + y|^\alpha \leq |x|^\alpha + |y|^\alpha$ where $\alpha \in [0, 1)$, hold for all $x,y$ ??
| For $x\ge 0$, $y\ge 0$: $f : t \mapsto (t + x)^{\alpha} - t^{\alpha}$ is a decreasing function on $[0,+\infty)$ (take the derivative $f'(t) = \alpha ((t+x)^{\alpha -1} - t^{\alpha - 1}) < 0$, since $\alpha < 1$), so $$|x|^{\alpha} = f(0) \ge f(y) = |x+y|^\alpha - y^\alpha$$ The general case follows, since $|x+y|\le |x|+|y|$ and $t\mapsto t^\alpha$ is increasing on $[0,\infty)$.
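A random spot check of the inequality (illustrative Python; the small tolerance absorbs floating-point rounding):

```python
# Sample |x+y|^a <= |x|^a + |y|^a for random real x, y and a in [0, 1).
import random

random.seed(0)
ok = True
for _ in range(10000):
    x, y = random.uniform(-100, 100), random.uniform(-100, 100)
    a = random.uniform(0.0, 1.0)
    if abs(x + y)**a > abs(x)**a + abs(y)**a + 1e-12:
        ok = False
```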
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2740533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
For any $n\times n$ matrix $A$, there corresponds a vector $x\neq 0$ such that $\|Ax\|=\|A\|\|x\|$ Prove that for any vector norm and its subordinate matrix norm, and for any $n\times n$ matrix $A$, there corresponds a vector $x\neq 0$ such that $\|Ax\|=\|A\|\|x\|$
I know that $\|Ax\|\leq \|A\|\|x\|$ for all $x\in \mathbb{R}^n$, but I do not know how to find $x\neq 0$ in such a way that the other equality is fulfilled, could someone help me please? Thank you very much.
| Hint: the map $f(x)=\|Ax\|$ defined on $S_n=\{x:\|x\|= 1\}$ is continuous; since $S_n$ is compact, there exists $x\in S_n$ such that $f(x)=\sup_{y\in S_n}f(y)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2740607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Integral test for series $\sum_{n = 18}^{\infty} \frac{n^2}{(n^3 + 3)^{7/2}}$ I am stuck on how to solve this problem algebraically. I understand that you would rewrite the series as a function of $x$ and then evaluate the integral from $18$ to infinity, but that's all I've got. Any pointers? Thank you in advance.
Use the integral test to determine whether the infinite series is convergent.
$$\sum_{n = 18}^{\infty} \frac{n^2}{(n^3 + 3)^{7/2}}.$$
| Substitute $$u=x^3+3,\qquad du=3x^2\,dx$$ to evaluate $$\int_{18}^{\infty} \frac{x^2}{(x^3 + 3)^{7/2}}\, dx$$
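After substituting $u=x^3+3$, an antiderivative is $F(x)=-\frac{2}{15}(x^3+3)^{-5/2}$, so the integral from $18$ to $\infty$ equals $-F(18)$, which is finite; hence the series converges. A numeric check (illustrative Python):

```python
def F(x):                                  # antiderivative after u = x^3 + 3
    return -2.0 / (15.0 * (x**3 + 3)**2.5)

def integrand(x):
    return x**2 / (x**3 + 3)**3.5

x0, h = 18.0, 1e-4
numeric_deriv = (F(x0 + h) - F(x0 - h)) / (2 * h)   # should match the integrand
value = -F(18.0)                           # integral from 18 to infinity (F -> 0)
```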
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2740707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Need hint to solve the following integral We are stuck and need some hint to solve the problem
$$\int\frac{(ax+b)^m}{(cx+d)^n}dx$$ where $m,n\in \mathbb{N}$.
| Let $u=cx+d$.
$$\int\frac{(ax+b)^m}{(cx+d)^n}dx=\frac{1}{c}\int\frac{\left(\frac{au}{c}+\frac{bc-ad}{c}\right)^m}{u^n}du=\frac{1}{c^{m+1}}\sum_{k=0}^m\binom{m}{k}a^{k}(bc-ad)^{m-k}\int u^{k-n}du$$
(Here $x=\frac{u-d}{c}$, so $ax+b=\frac{au+bc-ad}{c}$; the term with $k=n-1$, if present, integrates to a logarithm.)
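With $x=\frac{u-d}{c}$ we get $ax+b=\frac{au+bc-ad}{c}$, so the binomial expansion gives the antiderivative $\frac{1}{c^{m+1}}\sum_{k=0}^m\binom{m}{k}a^k(bc-ad)^{m-k}\frac{u^{k-n+1}}{k-n+1}$. A numeric verification (illustrative Python; the parameters are chosen so that $k-n\neq-1$ for every $k$, avoiding the logarithmic term):

```python
from math import comb

def antiderivative(x, a, b, c, d, m, n):
    u = c*x + d
    total = 0.0
    for k in range(m + 1):
        e = k - n + 1                      # exponent after integrating u^(k-n)
        total += comb(m, k) * a**k * (b*c - a*d)**(m - k) * u**e / e
    return total / c**(m + 1)

a, b, c, d, m, n = 2, 1, 3, 5, 2, 4        # here k - n + 1 is -3, -2, -1: no log term

def integrand(x):
    return (a*x + b)**m / (c*x + d)**n

x0, h = 1.0, 1e-5
numeric = (antiderivative(x0 + h, a, b, c, d, m, n)
           - antiderivative(x0 - h, a, b, c, d, m, n)) / (2 * h)
```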
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2740967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Inconsistency for solving $x' = x^{1/2}$ The proposed system $x' = x^{1/2}$ can be solved easily to obtain $x(t) = \frac{1}{4} (t^2 + t c + c^2)$, where $c$ is the integration constant.
However, differentiate the newly-found $x(t)$, one gets: $x(t)' = \frac{1}{2}t+\frac{1}{4}c$. This implies that $x^{1/2} = \frac{1}{2}t+\frac{1}{4}c$. However,
$$(x^{1/2})^2 = \left(\frac{1}{2}t+\frac{1}{4}c\right)^2 = \frac{1}{4}\left(t^2 + \frac{1}{2}tc + c^2\right) \neq \frac{1}{4} \left(t^2 + t c + c^2\right) = x(t)$$
Does anyone know why the inconsistency occurs? I understand there is another solution, but it is also inconsistent.
| By separation of variables:
$$
\frac{dx}{\sqrt{x}}=dt
$$
so
$$
\sqrt{x}=\frac{1}{2}(t+c)
$$
and finally
$$
x=\frac{1}{4}(t+c)^2=\frac{1}{4}(t^2+2ct+c^2)
$$
Can you spot your error?
$\displaystyle x=\frac{1}{4}(t^2+{\Large\color{red}{2}}ct+c^2)$
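A numeric check (illustrative Python) that $x(t)=\frac14(t+c)^2$ satisfies $x'=\sqrt{x}$ for $t+c\ge 0$, while the version with the missing factor of $2$ does not:

```python
def x_good(t, c):                  # x = (t + c)^2 / 4
    return (t + c)**2 / 4

def x_bad(t, c):                   # the mistaken x = (t^2 + c t + c^2) / 4
    return (t*t + c*t + c*c) / 4

c, h = 1.0, 1e-6

def residual(xfun, t):             # | d/dt x(t) - sqrt(x(t)) |, numerically
    deriv = (xfun(t + h, c) - xfun(t - h, c)) / (2 * h)
    return abs(deriv - xfun(t, c)**0.5)

good = max(residual(x_good, t) for t in (0.5, 1.0, 2.0, 3.0))   # ~ 0
bad = min(residual(x_bad, t) for t in (0.5, 1.0, 2.0, 3.0))     # clearly nonzero
```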
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2741103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Completing a specific matrix to a unitary one Let $n,m,p,k_1,k_2$ be natural numbers.
Given two unitary matrices $U\in\mathbb{U}(n),V\in\mathbb{U}(m)$ and a decomposition of these as follows $$U=\bigg(\begin{matrix}A & C\\B& D\end{matrix}\bigg)~ ~ ~ ~ ~ ~V=\bigg(\begin{matrix}A' & C'\\B'& D'\end{matrix}\bigg)$$ where $A\in\mathcal{M}_{k_1,p}(\mathbb{C}),A'\in\mathcal{M}_{p,k_2}(\mathbb{C})$.
My question: is there a natural number $R$ and a unitary matrix $W\in\mathbb{U}(R)$ such that $$W=\bigg(\begin{matrix}AA' & E\\F& G\end{matrix}\bigg)$$
for some $E,F,G$.
Thoughts so far:
I think it should be true, as the column vectors that make up $A$ and $A'$ are truncations of the column vectors of unitary matrices and so have norm at most $1$. So I guess the column vectors that make up $AA'$ should also have norm at most $1$, and it should be possible to add coordinates to these vectors to make them pairwise orthogonal and of norm $1$ (pairwise orthogonality is where I am stuck). Then one can conclude by completing the obtained orthonormal family into a basis, and those vectors would form the columns of $W$.
The result is obviously true when $U,V$ are diagonals and I tried without success to extend this using the fact that a unitary is in the unitary conjugaison class of a diagonal matrix.
Any help would be hugely appreciated :)
| $A$ and $A'$ each have spectral norm at most $1,$ so $AA'$ (of size $k_1\times k_2$) also has spectral norm at most $1.$ This means the matrix $I_{k_2}-(AA')^*(AA')$ is positive semidefinite, so it has a $k_2\times k_2$ self-adjoint square root which we can take to be $F.$ This choice ensures $\begin{pmatrix}AA'\\ F\end{pmatrix}^*\begin{pmatrix}AA'\\ F\end{pmatrix}=I_{k_2}.$ Then the $(k_1+k_2)\times k_2$ matrix $\begin{pmatrix}AA'\\ F\end{pmatrix}$ has orthonormal columns, so it can be completed to a $(k_1+k_2)\times(k_1+k_2)$ unitary matrix by taking an orthonormal basis of the orthogonal complement of the column space.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2741223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question on the proof of $e^x>1+x$ for $x>0$
Show that $e^x>1+x$ for $x>0$
Proof: Set $f(x)=e^x-(1+x)$. Show that $f(x)$ is always positive. We know that $f(0)=e^0-(1+0)=0$ and $f'(x)=e^x-1$. When $x$ is positive, $f'(x)$ is positive because $e^x>1$
We know that if $f'(x)>0$ on an interval, then $f(x)$ is increasing on that interval, so we can conclude that $f(x)>f(0)$ for $x>0$ and thus:
$$e^x-(1+x)>0 \iff e^x>1+x$$
Question: How do we know that $f(x)$ is positive just by $e^x>1$?
Additional question, how do we prove $f'(x)$ is positive/negative over an interval $[a,b]$?
| For $x>0$, we know $e^x > 1$, so $f'(x) > 0$. So, $f(x)$ is increasing, in particular $f(x) > f(0) = 0$, so $f(x)$ is positive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2741354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 3
} |
Existence of a continuous function $f: \mathbb R \to \mathbb R$ with $f(x_n) = y_n, \, \forall n$ and $f(x) = y$ Let $x_n \to x$ and $y_n \to y$ in $\mathbb R$ such that $x_n \neq x_m, \, \forall n \neq m$.
How can I show the existence of a continuous function $f: \mathbb R \to \mathbb R$ with $f(x_n) = y_n, \, \forall n$ and $f(x) = y$?
I've been trying to solve this problem for the last few days, but I haven't had any ideas. I'm looking for a hint to solve this question.
Help?
| Here's a generalization: Let $X$ be a metric space, $x\in X,$ and $x_n$ is a sequence of distinct points in $X\setminus \{x\}$ converging to $x.$ Let $y,y_n$ be as before. Then there exists a continuous $f:X\to \mathbb R$ such that $f(x_n)=y_n, n=1,2,\dots$ and $f(x)=y.$
Proof (sketch): Let $E=\{x_n\}\cup \{x\}.$ Then $E$ is closed in $X.$ Define $g:E \to \mathbb R$ by setting $g(x_n)=y_n, n=1,2,\dots$ and $g(x)=y.$ Then $g$ is continuous on $E.$ By Tietze's extension theorem, there exists a continuous $f:X\to \mathbb R $ such that $f=g$ on $E.$ That does it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2741466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Should I apply boundary conditions to the general solution before finding the particular solution? Given a function:
$$ y'' - y' = x$$ I want to find the solution where $x = 1$, $y = 1$, $dy/dx = 2$.
I have managed to find the full solution by first finding the complementary-function solution and then the particular solution.
The problem is if I apply the boundary conditions after I find the complementary function solution, I get a different answer than applying after I find the full solution $y = y_c + y_p$, where $y_c$ is the complementary one and $y_p$ is the particular solution.
When should I apply the boundary conditions?
| You should apply the boundary conditions to the general solution that you have already determined:
$$y(x)=\underbrace{C_1+C_2e^x}_{y_o}+\underbrace{-\frac{x^2}{2}-x}_{y_p}.$$
So it remains to find $C_1$ and $C_2$ such that $y(1)=1$ and $y'(1)=2$.
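Carrying this out numerically (a sketch; the values $C_2=4/e$ and $C_1=-3/2$ are my own computation, worked out in the comments, not part of the answer above):

```python
import math

# General solution of y'' - y' = x:  y = C1 + C2*e^x - x^2/2 - x.
# Imposing y(1) = 1 and y'(1) = 2 by hand:
#   y'(x) = C2*e^x - x - 1, so y'(1) = C2*e - 2 = 2  =>  C2 = 4/e
#   y(1)  = C1 + C2*e - 3/2 = C1 + 4 - 3/2 = 1       =>  C1 = -3/2
C1, C2 = -1.5, 4 / math.e

def y(x):
    return C1 + C2 * math.exp(x) - x * x / 2 - x

def y1(x):  # first derivative
    return C2 * math.exp(x) - x - 1

def y2(x):  # second derivative
    return C2 * math.exp(x) - 1
```

Checking `y2(x) - y1(x) == x` at a few points confirms the ODE is still satisfied after fixing the constants.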
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2741610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Conflicting answers to an indefinite integral In modelling a mixing problem of a single tank using a first-order ODE, I ended up with
$$\frac{dy}{dt}=3-0.03y$$
Case 1 : if I do $\frac{dy}{dt}=-0.03(y-100)$ which leads to $\ln|y-100|=-0.03t+c$ and ultimately
$$y=100+Ce^{(-0.03t)}$$
Case 2 : if I do $\frac{dy}{dt}=0.03(100-y)$ which leads to $\ln|100-y|=0.03t+c$ and ultimately
$$y=100-Ce^{(0.03t)}$$
$$y(0)=1$$ which makes case 1 :$$y=100-99e^{-0.03t}$$
and case 2 :$$y=100-99e^{0.03t}$$
Which one is correct? Am I making a mistake somewhere?
Edited after 11 minutes: I got the answer, but I'm unable to close the question. My apologies.
| this is not correct
Case 2 : if I do $\frac{dy}{dt}=0.03(100-y)$ which leads to $\ln|100-y|=0.03t+c$ and ultimately
Substitute $z=100-y$ before integrating.
Second case
$$\frac{dy}{dt}=0.03(100-y)$$
Substitute $z=100-y \implies dz=-dy$
$$-\int \frac{dz}{z}=0.03t+K$$
$$\ln|z|=-0.03t+K$$
$$100-y=e^{-0.03t}K$$
$$y=100-Ke^{-0.03t}$$
$$y=100+Ce^{-0.03t}$$
C is just a constant...
$$y(0)=1 \implies 1=100+C \implies C=-99 $$
$$\implies y=100-99e^{-0.03t}$$
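A quick numeric sketch confirming that $y=100-99e^{-0.03t}$ really satisfies $\frac{dy}{dt}=3-0.03y$ with $y(0)=1$:

```python
import math

def y(t):
    return 100 - 99 * math.exp(-0.03 * t)

def dydt(t):
    # derivative of y(t), computed by hand: 99 * 0.03 * e^(-0.03 t)
    return 99 * 0.03 * math.exp(-0.03 * t)
```

At every $t$, the right-hand side $3-0.03\,y(t)=2.97e^{-0.03t}$ agrees with `dydt(t)` exactly (up to rounding).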
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2741740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
A product of sums which equals a sum of products We can be sure that
$$\left(\sum\limits_{k=0}^{n}\frac{1}{k+1}\right)\left(\sum\limits_{k=0}^{n}\binom{n}{k}\frac{(-1)^k}{k+1}\right)= \sum\limits_{k=0}^{n}\binom{n}{k}\frac{(-1)^k}{(k+1)^2}$$
Are there any similar identities, or some type of generalization to find them?
| Let $e_k=1$. You want sequences $a_k,\,b_k$ with $\sum_k a_k \sum_k b_k =\sum_k a_k b_k$, or in terms of inner products $a\cdot e \, e\cdot b = a\cdot b$. This equation would be easy to satisfy for $3$-dimensional vectors: choose your favourite vector $c$ and take $a=c\times e,\, b=a\times c$ so $a\cdot e =a\cdot b =0$. And now you can stitch together infinitely many triples for the infinite-sequence problem, scaling the triples as you go so the resulting sums are convergent. So there are clearly a lot of solutions.
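The specific identity in the question is easy to confirm in exact rational arithmetic (a sketch; it holds because $\sum_k\binom{n}{k}\frac{(-1)^k}{k+1}=\frac{1}{n+1}$ and $\sum_k\binom{n}{k}\frac{(-1)^k}{(k+1)^2}=\frac{H_{n+1}}{n+1}$):

```python
from fractions import Fraction
from math import comb

def lhs(n):
    harmonic = sum(Fraction(1, k + 1) for k in range(n + 1))
    alternating = sum(Fraction((-1) ** k * comb(n, k), k + 1) for k in range(n + 1))
    return harmonic * alternating

def rhs(n):
    return sum(Fraction((-1) ** k * comb(n, k), (k + 1) ** 2) for k in range(n + 1))
```

Using `Fraction` avoids floating-point cancellation in the alternating sums, so equality can be tested exactly.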
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2741897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
If $x \in \operatorname{cl}(A)$, where $A$ is a connected subspace of a topological space $X$, then $A \cup \{x\}$ is connected.
Prove that if $x \in \operatorname{cl}_X(A)$, where $A$ is a connected
subspace of a topological space $X$, then $A \cup \{x\}$ is connected.
My attempt:
Suppose, in order to find a contradiction, that $B \cup C = A \cup \{x\}$ where $B,C$ are open in $A \cup \{x\}$ non empty and disjoint.
Then, we have either $A \subseteq B$ or $A \subseteq C$. Indeed, if this wouldn't be the case, then both $A \cap C$ and $A \cap B$ are non empty, and then
$A = A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$ gives a union of disjoint non empty open (in A) sets, contradicting the connectedness of $A$.
Without loss of generality, we may assume that $A \subseteq B$. Then $x \in C$, or else $A \cup \{x\} = B$, meaning that $C = \emptyset$, which isn't possible.
Write $C = G \cap (A\cup \{x\})$ with $G$ open in $X$. Because $x \in \operatorname{cl}(A)$, and $x \in G$, it follows that $A \cap G \neq \emptyset$. Pick $y \in A \cap G$. Then $y \in A \subseteq B$ and $y \in A \cap G \subseteq C$, so $y \in B \cap C$. This is the desired contradiction.
Is this correct?
|
The OP's proof is correct, but by changing the last 2 paragraphs it
can be nailed down in another way.
Suppose, in order to find a contradiction, that $B \cup C = A \cup \{x\}$ where $B,C$ are open in $A \cup \{x\}$ non empty and disjoint.
Then, we have either $A \subseteq B$ or $A \subseteq C$. Indeed, if this wouldn't be the case, then both $A \cap C$ and $A \cap B$ are non empty, and then
$A = A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$ gives a union of disjoint non empty open (in A) sets, contradicting the connectedness of $A$.
Without loss of generality, we may assume that $A \subseteq B$. But then it must be that $B = A$ and $C$ is the open singleton $\{x\}$.
Since $x \in \operatorname{cl}(A)$, any open set in $A \cup \{x\}$ that contains $x$ can't be a singleton. This is the desired contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2742007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
In classical geometry why is a line considered to be parallel to itself? A definition in classical geometry (for example, Birkhoff's formulation, but I suppose it could be all of them) is that a line is always considered to be parallel to itself. I understand this is probably for convenience, but it seems odd to me, since two distinct lines are parallel if they have no points in common, and a line has infinitely many points in common with itself. Perhaps the idea is to ease the definition that two (non-parallel) lines intersect at one and only one point?
Q: What's the purpose/what inconvenience would be caused if we didn't have that definition?
| The idea is that you want "parallel" to define equivalence classes (called "pencils", cf. Coxeter, Projective Geometry, and Artin, Geometric Algebra), which require the defining relationship to be an equivalence relationship: reflexive, symmetric, and transitive. Those classes then have some nifty uses, like defining projective space by adding a point at infinity for each pencil (which is the considered to be on each of those lines) and a line at infinity for each class of parallel planes (this line containing all the points at infinity corresponding to the pencils of lines in that class of planes).
Also, you were already going to have to rethink the definition of "parallel" as having no points in common, if you are going to do solid geometry. Parallel lines also need to be coplanar...i.e., there need to be two other lines that intersect each other and that each intersect the parallel lines (five distinct points of intersection). Lines that are not coplanar are called "skew" not "parallel".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2742126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 7,
"answer_id": 2
} |
The random variable U is uniformly distributed over the interval [3,6]. Find the following probabilities (a) $P[\frac4 5 ≤ U ≤ 4]$ = $$P[\frac4 5 ≤ U ≤ 3]+P[3≤ U ≤ 4]=0+P[U ≤ 4]=\frac{(4-3)}{(6-3)}=\frac1 3$$ Is this even right? I'm just not so confident.
(b) $P[U > 5]$ = $$1-P[U ≤5]=1-\frac{8-3}{6-3}=1-\frac5 3$$
The answer for b is a positive number according to my friends so I am not sure that I'm correct in this one either.
(c) $P[16 ≤ U^2 ≤ 36]$=
First I simplify them
$$P[4≤ U ≤ 6] or P[-6 ≤ U ≤ -4]$$
Then I form the equation
$$P[16 ≤ U^2 ≤ 36]= P[4≤ U ≤ 6]+P[-6 ≤ U ≤ -4]$$
But I am not sure what to do afterwards, do I have to integrate it? If so, how??
(d) $P[4−2|U|≥−8]$= $$ P[|U|≤ 5] = P[−5 ≤ U ≤ 5] = P[3 ≤ U ≤ 5] = 1/3$$ Is this even correct??
I'm trying to do the exercises based on my friend's class notes. So far I still feel lost on these kinds of exercises, and the answers just seem so off, so I would appreciate some advice.
| $(a)$ is correct.
$(b)$ $$1-P[U ≤5]=1-\frac{\textbf{5}-3}{6-3}=\frac1 3$$
$(c)$ $$P[16 ≤ U^2 ≤ 36]= P[4≤ U ≤ 6]+P[-6 ≤ U ≤ -4] = P[4≤ U ≤ 6]+0 = 1-P[U ≤4]=1 - \frac{4-3}{6-3} = \frac2 3$$
$(d)$ $$P[4−2|U|≥−8] = P[-2|U|≥-12]=P[-|U|≥-6] =P[|U|≤6]=P[U≤6]= 1 $$
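All four parts reduce to one fact: for $U$ uniform on $[3,6]$, $P[a\le U\le b]$ is the length of $[a,b]\cap[3,6]$ divided by $3$. A small helper (a sketch) reproduces each value exactly:

```python
from fractions import Fraction

LO, HI = 3, 6  # U is uniform on [3, 6]

def P(a, b):
    """P[a <= U <= b]: overlap of [a, b] with [LO, HI], divided by the length 3."""
    lo, hi = max(a, LO), min(b, HI)
    return Fraction(max(hi - lo, 0), HI - LO)

p_a = P(Fraction(4, 5), 4)   # (a)
p_b = 1 - P(LO, 5)           # (b)  P[U > 5]
p_c = P(4, 6) + P(-6, -4)    # (c)  16 <= U^2 <= 36
p_d = P(-6, 6)               # (d)  4 - 2|U| >= -8  <=>  |U| <= 6
```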
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2742266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Fermat generalization, not sure if it's true or how to prove it I have this rule in my notebook, but I don't remember when I took it:
$$a^{p^n} \equiv a \pmod{p},$$ where $p$ is a prime number, $a$ is an integer and $n$ is a natural number.
Or, in other words, that:
$$p | a^{p^n} - a$$
I can't find this rule in Internet. It's supposed to be a generalization of Fermat Little Theorem.
Is this true? If so, how can it be derived from Fermat's little theorem?
| Fermat's (little) theorem is: If $p$ doesn't divide $a$ then $a^{p-1} \equiv 1$ mod $p.$ When both sides multiplied by $a$ it becomes $a^p \equiv a$ mod $p$ and now one can allow $a$ divisible by $p.$ At this point apply an inductive argument, next case being $(a^p)^p \equiv a^p,$ then use inductive hypothesis to finish. Higher powers go similarly, each going down to the previous power.
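The statement is also cheap to check computationally (a sketch, using Python's three-argument `pow` for fast modular exponentiation):

```python
def fermat_general(a, p, n):
    """Check that a^(p^n) is congruent to a (mod p), i.e. p | a^(p^n) - a."""
    return pow(a, p ** n, p) == a % p
```

A sweep over several primes, integer values of $a$ (including negatives and multiples of $p$), and a few exponents $n$ finds no counterexample, consistent with the inductive proof above.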
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2742393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability in an urn model If we have an urn with 2 black balls, 3 red ones and 4 yellow ones.
I'd like to determine the probability of drawing a black ball before drawing a red ball.
So the outcomes to consider are the 3-tuples
(black, red, random), (random, black, red) and (black, yellow, red).
I tried to split the set of the desired event into 3 smaller events,
namely
${ A }_{ 1 }=\left\{ ({ w }_{ 1, }{ w }_{ 2 },{ w }_{ 3 }):{ w }_{ 1 }\in (1,2),{ w }_{ 2 }\in (3,4,5),{ w }_{ 3 }\in (1,..,9) \right\} \\ { A }_{ 2 }=\left\{ ({ w }_{ 1, }{ w }_{ 2 },{ w }_{ 3 }):{ w }_{ 1 }\in (1,...,9),{ w }_{ 2 }\in (1,2),{ w }_{ 3 }\in (3,4,5) \right\} \\ { A }_{ 3 }=\left\{ ({ w }_{ 1, }{ w }_{ 2 },{ w }_{ 3 }):{ w }_{ 1 }\in (1,2),{ w }_{ 2 }\in (6,7,8,9),{ w }_{ 3 }\in (3,4,5) \right\} $
but I'm struggling to compute the sizes of those sets, and I'm also not sure whether the total number of outcomes is $84$.
I'd appreciate any help.
| So in our draw of $3$ balls, there's definitely a black ball and a red ball. So the third ball can be a yellow, black or red ball. If it's a yellow ball, the favourable cases are just the number of permutations where black comes before red. This is $2$ (YBR, BRY), so the probability is just $1/12$. If it's a black ball, the number of favourable cases is again $2$ (BBR, BRB) and this time the probability is $\binom{2}{2}\cdot 3\cdot 2\big/\binom{9}{3}$. If the third ball is red, there's just one favourable case (BRR), and its probability is $2\cdot\binom{3}{2}\big/\binom{9}{3}$. Since these cases are disjoint, the total probability is just the sum of these probabilities.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2742708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Hints on how to solve $(x+y^2)dy = ydx$? I'm looking for hints on how to solve the differential equation: $(x+y^2)dy = ydx$ .
I tried finding an integrating factor and dividing both sides by $y$ but that didn't work.
| In $(x+y^2)dy=ydx$ you can indeed divide by $y^2$ to get
$$
d\left(\frac{x}{y}\right)=\frac{y\,dx-x\,dy}{y^2}=dy
$$
which is directly integrable.
For the first version of the question, $(x^2+y^2)dy=ydx$, observe that
$$
\frac{dx}{dy}=y+\frac{x^2}{y}
$$
looks like a Riccati equation. Set $x(y)=-\dfrac{yu'(y)}{u(y)}$ to get a second order linear ODE in $u$
$$
y+\frac{yu'^2}{u^2}=x'=-\frac{yu''+u'}{u}+\frac{yu'^2}{u^2}
\\\iff\\
yu''+u'+yu=0.
$$
Then apply power series expansion or identify the special function type that solves this equation.
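For the actual question $(x+y^2)\,dy=y\,dx$: integrating $d(x/y)=dy$ gives $x/y=y+C$, i.e. the solution curves $x=y^2+Cy$. This can be verified exactly (a sketch, parametrizing each curve by $y$):

```python
from fractions import Fraction

def residual(y, C):
    """(x + y^2) dy - y dx along the curve x = y^2 + C*y, per unit dy."""
    x = y * y + C * y
    dxdy = 2 * y + C
    return (x + y * y) - y * dxdy   # = (2y^2 + Cy) - y(2y + C) = 0 identically
```

With exact rationals the residual is identically zero, confirming the implicit solution.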
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2742859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How can I find all solutions using an iteration formula? Take the equation $x^2+7x+10=0$, which has roots $-2$ and $-5$.
When I use the iteration formula $x_{n+1}=\dfrac{-10}{x_n+7}$, I always converge on $-2$ but not on $-5$.
I have tried starting values of $-6, -4, -1, 1, 2$ and many others, but I only converge on $-2$.
Is there something wrong with my iteration formula or starting values?
| There is nothing wrong. When considering sequences defined by $x_{n+1}=f(x_n)$, to study whether a fixed point $l$ (i.e. $f(l)=l$) is attractive or repulsive you have to consider $|f'(l)|$:
*
*If $|f'(l)|<1$ this is an attractive point.
*If $|f'(l)|>1$ this is a repulsive point (and there is no hope to converge to $l$ if $x_0 \neq l$).
*If $|f'(l)| =1$ you have to study with more precision the function.
Here $f'(x)=\frac{10}{(x+7)^2}$ so:
*
*$f'(-2)=\frac{10}{25}\in (-1,1)$
*$f'(-5)=\frac{10}{4} >1$
so $-2$ is attractive but $-5$ is repulsive.
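A few iterations in code make this visible (a sketch; the starting values are the ones tried in the question). Starting exactly at $-5$ the iteration stays put, since it is a fixed point, but every nearby start is pushed away and eventually settles at $-2$:

```python
def iterate(x0, steps=200):
    """Run x_{n+1} = -10/(x_n + 7) starting from x0."""
    x = x0
    for _ in range(steps):
        x = -10 / (x + 7)
    return x
```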
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2742975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Why are these $\sum \cos$ and $\csc$ equivalent? Mathematica 'simplifies' this formula
$$\sum_{k=1}^R \cos \frac{2k \pi x}{R}$$
to this
$$\frac{1}{2} \biggl(\csc \frac{\pi x}{R} \sin \frac{(2R+1) \pi x}{R}-1\biggr)$$
A graphical plot of the two formulae generates two identical continuous curves - but why?
Surely $\csc \frac{\pi x}{R}$ is discontinuous, with poles at $x={0,R,2R,3R...}$? So, how can $\frac{1}{2} \bigl(\csc \frac{\pi x}{R} \sin \frac{(2R+1) \pi x}{R}-1\bigr)$ produce a continuous curve?
I'd be grateful for:
*
*A proof of this equivalence
*An explanation for why the resulting curve is continuous despite $\csc \frac{\pi x}{R}$ being discontinuous.
| Assuming you're a bit familiar with complex numbers:
$$e^{ix}=\cos(x)+i\sin(x)\tag{1}$$ Let $a=\frac{2\pi}{R}$, then you want to find the sum $$\sum_{k=1}^{R}\cos(akx)\tag{2}$$
Notice that it is way easier to first compute (which we later can relate to $\cos(akx)$) $$\sum_{k=1}^{R}e^{akix}=\sum_{k=1}^{R}\left (e^{aix}\right )^k\tag{3}$$ which is a geometric series. This evaluates to $$\sum_{k=0}^R \left (e^{aix}\right )^k=\frac{e^{aix(R+1)}-1}{e^{aix}-1}\tag{4}$$
But since your series starts at $k=1$ we have to subtract the first term, so that $$\sum_{k=1}^R\left (e^{aix}\right )^k=\frac{e^{aix(R+1)}-1}{e^{aix}-1}-1\tag{5}$$
Now substitute eq. $(1)$ back into $(5)$ to get $$\sum_{k=1}^R\cos(akx)+i\sum_{k=1}^R\sin(akx)=\dots$$ Can you take it from here?
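Away from the zeros of $\sin(\pi x/R)$ the two closed forms agree numerically (a quick sketch):

```python
import math

def cos_sum(R, x):
    return sum(math.cos(2 * k * math.pi * x / R) for k in range(1, R + 1))

def closed_form(R, x):
    return 0.5 * (math.sin((2 * R + 1) * math.pi * x / R)
                  / math.sin(math.pi * x / R) - 1)
```

The apparent poles of $\csc(\pi x/R)$ at integer multiples of $R$ are removable: the numerator $\sin\frac{(2R+1)\pi x}{R}$ vanishes there too, which is why the plotted curve looks continuous.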
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2743100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
A non-zero ring $R$ is a field if and only if for any non-zero ring $S$, any ring homomorphism from $R$ to $S$ is injective. Show that a non-zero ring $R$ is a field if and only if for any non-zero ring $S$, any unital ring homomorphism from $R$ to $S$ is injective.
I would like to verify my proof, especially the reverse implication.
$\Rightarrow$ Let $S$ be any ring, and $f:R\rightarrow S$ be a ring homomorphism. If $x\in \ker f$ where $x$ is non-zero, then $0= f(x)f(x^{-1}) = f(xx^{-1})=f(1) = 1$ contradiction. Thus $x=0$, so $f$ is injective.
$\Leftarrow$ Since any ring homomorphism is injective, the only ideals of $R$ are $\{0\}$ and $R$. Thus $R$ is a field.
| Yes, this proof looks good to me!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2743275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
$f(x+1/2)+f(x-1/2)= f(x)$ Then the period of $f(x)$ is?
$f(x+1/2)+f(x-1/2)= f(x)$. Then the period of $f(x)$ is:
a)$1$
b)$2$
c)$3$
d)$4$?
Attempt:
I substituted $x= x \pm1/2$ but the equations I got didn't help at all.
How do I go about solving such a question? I am just looking for a hint and not the entire solution.
| We may conclude that $f(x)=f(x+3)$ holds for all $x$.
Following @Mike Earnest's suggestion,
\begin{align}
f(x)&=f(x+1/2)+f(x-1/2),\\
f(x-1/2)&=f(x)+f(x-1).
\end{align}
Add up these two equations, and you will have
$$
f(x+1/2)+f(x-1)=0,
$$
or, due to the arbitrariness of $x$,
$$
f(x+3/2)+f(x)=0.
$$
Now let $x\to x+3/2$, and
$$
f(x+3)+f(x+3/2)=0.
$$
The difference of the last two equations gives
$$
f(x)=f(x+3).
$$
Of course, as @Przemysław Scherwentke has mentioned, this might not mean that $3$ is the period of $f$, as we do not know if this $3$ is the smallest non-negative value $T$ that ensures $f(x)=f(x+T)$. All in all, this is what we can get from the given relation.
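As a concrete check (the specific function is my choice, not from the thread): $f(x)=\cos(2\pi x/3)$ satisfies the functional equation, since $\cos\frac{2\pi(x+1/2)}{3}+\cos\frac{2\pi(x-1/2)}{3}=2\cos\frac{2\pi x}{3}\cos\frac{\pi}{3}=\cos\frac{2\pi x}{3}$, and it does have period $3$:

```python
import math

def f(x):
    # cos(2*pi*x/3): satisfies f(x + 1/2) + f(x - 1/2) = f(x)
    # because 2*cos(2*pi*x/3)*cos(pi/3) = cos(2*pi*x/3)
    return math.cos(2 * math.pi * x / 3)
```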
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2743366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 0
} |
If $f$ is differentiable and its derivative is uniformly continuous, then $n \cdot (f(x+\frac{1}{n})-f(x)) \to^{u} f'(x)$ Let $f$ be a differentiable function in $\mathbb{R}$ such that its derivative $f'$ is uniformly continuous in $\mathbb{R}$.
Prove: $$n \cdot (f(x+\frac{1}{n})-f(x)) \to^{u} f'(x)$$ (uniform convergence).
If $f'$ is only continuous- Does the claim still hold?
Please help me approach this one
| Instead of using integration, you can use the mean value theorem as suggested in the comments.
Let $\epsilon > 0$. By uniform continuity we have $\delta > 0$ such that $|x-y|< \delta$ implies $|f'(x) - f'(y)| < \epsilon$. Now let $n > \frac{1}{\delta}$ and $x \in \mathbb R$. By the mean value theorem, $\exists \xi \in (x, x+\frac{1}{n})$ such that
$$\frac{f\left(x+\frac{1}{n}\right) - f(x)}{\frac{1}{n}}=f'(\xi).$$
Since $n > \frac{1}{\delta}$, $|\xi - x| < \delta$, so $|f'(\xi) - f'(x)|<\epsilon$. Combining these two results,
$$\left|\frac{f\left(x+\frac{1}{n}\right) - f(x)}{\frac{1}{n}}-f'(x)\right|=\left|f'(\xi)-f'(x)\right|<\epsilon,$$
as required.
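A numeric illustration (a sketch; a finite grid stands in for $\mathbb R$, and $f=\sin$ with $f'=\cos$ uniformly continuous): the worst-case error of the difference quotient shrinks uniformly as $n$ grows, in line with the Taylor bound $\frac{1}{2n}\sup|\sin|$.

```python
import math

def sup_error(n, xs):
    """sup over the grid of |n*(sin(x + 1/n) - sin(x)) - cos(x)|."""
    return max(abs(n * (math.sin(x + 1 / n) - math.sin(x)) - math.cos(x))
               for x in xs)

grid = [i / 10 for i in range(-100, 101)]  # finite grid standing in for R
```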
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2743566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Substantial / Total / Material Derivative Contradiction We want to find the material derivative in terms of the spatial representation (classic case).
$\phi$ : Temperature, entropy etc,
$x_{0}$ : Reference coordinate,
$x$ : Spatial / eulerian coordinate
Let $\phi=xt$ and $x= x_{0}(1+t)$ where $x_{0} = c $
$\dfrac{d\phi}{dt} = \left(\dfrac{\partial \phi}{\partial x}\right)_{t} \dfrac{dx}{dt}+ \left(\dfrac{\partial \phi}{\partial t}\right)_{x}$
Therefore, the total derivative in spatial and material coordinates is the following
$\dfrac{d\phi}{dt} = tx_{0} + x$
To obtain the pure spatial representation, you find the material coordinate in terms of the spatial coordinate.
$x_{0} = \dfrac{x}{1+t}$
Lastly, you perform the substitution into the total derivative yielding the classic result.
$\dfrac{d\phi}{dt} = t\dfrac{x}{1+t} + x$
Since $x_{0} = c$, how can the above substitution be valid for any $x, t$ when it appears only valid for $x(t)= x_{0}(1+t)$?
| The material derivative is the temporal rate of change of the scalar $\phi$ following the path of a fluid particle. In a Lagrangian framework, the location of a fluid particle at time $t$ initially with coordinate $x_0$ is specified by a smooth function $\xi: \mathbb{R}^2 \to \mathbb{R}$ where
$$x = \xi(x_0,t) = x_0(1 + t).$$
Each fluid particle is associated with a different $x_0$ which you are mistakenly interpreting as a fixed constant.
The material derivative is
$$\frac{d}{dt} \phi(\xi(x_0,t),t) = D_1\phi(\xi(x_0,t),t) \frac{\partial \xi}{\partial t} + D_2\phi(\xi(x_0,t),t), $$
where $D_1$ and $D_2$ denote partial derivatives with respect to the first and second arguments. (Part of you confusion may be due to your notation).
This reduces to
$$\frac{d}{dt} \phi(\xi(x_0,t),t) =tx_0 + \xi(x_0,t), $$
and written in terms of Eulerian coordinates
$$\frac{d \phi}{dt} =t\frac{x}{1+t} + x. $$
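A finite-difference sketch of the final formula: differentiating $\phi$ along a particle path and comparing with the Eulerian expression evaluated at the particle's current position.

```python
def xi(x0, t):          # particle path x = x0*(1 + t)
    return x0 * (1 + t)

def phi(x, t):          # the field, phi = x*t
    return x * t

def eulerian_formula(x, t):
    return t * x / (1 + t) + x

def along_path(x0, t, h=1e-6):
    # central finite difference of phi following the particle
    return (phi(xi(x0, t + h), t + h) - phi(xi(x0, t - h), t - h)) / (2 * h)
```

Both evaluate to $x_0(1+2t)$, and note that different values of $x_0$ label different particles, which is the point of the answer.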
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2743837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability of a coin stack being greater than a value? What's wrong with my reasoning? Basic probability question.
Consider a pile of 9 coins where each could either be 1 cent or 10
cents and the distribution of the coin combinations is uniform.
Knowing that the upper 4 coins are all 10 cents, what is the
probability that the total value is greater than 50 cents?
My reasoning was simply that we have 5 coins left over and we need more than 10 additional cents to exceed 50 cents. We have a total of $2^5$ combinations for the remaining 5 coins. The number of favourable outcomes is $2^5-1$, because the only combination which wouldn't work out is all pennies. So the probability should be $\frac{2^5-1}{2^5}$
What's wrong here?
| Well the question does seem to be rather poorly worded. In probability it is important to be precise on what is being conditioned on and the wording of the question was not. I took it to mean that
*
*Each of the 9 coins was pulled out of a large vat with an equal number of pennies and dimes, so that with each pull, the probability of a coin being a dime is 50% and the values of the coins pulled out are mutually independent of one another. (That is quite different from getting a pile of $n$ coins where half are pennies and half are dimes.)
*As you pulled each coin out of the vat you looked at it, and after 4 pulls you had 4 dimes.
If that is the case then yes your reasoning is correct.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2743966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
How to find the equation of the tangent to the parabola $y = x^2$ at the point (-2, 4)? This question is from George Simmons' Calc with Analytic Geometry. This is how I solved it, but I can't find the two points that satisfy this equation:
$$
\begin{align}
\text{At Point P(-2,4):} \hspace{30pt} y &= x^2 \\
\frac{dy}{dx} &= 2x^{2-1} \\
&= 2x = \text{Slope at P.}
\end{align}
$$
Now, the equation for any straight line is also satisfied for the tangent:
$$
\begin{align}
y - y_0 &= m(x - x_0) \\
\implies y - y_0 &= 2x (x - x_0) \\
\text{For point P, } x_0 &= -2 \text{ and } y_0 = 4 \\
\implies y - 4 &= 2x(x+2)\\
\implies y - 4 &= 2x^2 + 4x\\
\implies y &= 2x^2 + 4x +4\\
\end{align}
$$
This is where the problem occurs. If I were to try to solve for the $x$-intercepts using:
$$
y = ax^2+bx+c,\quad y = 0 \implies x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
$$
I'd get:
$$
\begin{align}
y &= 2x^2+4x+4 \text{ and, at x-intercept: }\\
x &= \frac{-4 \pm \sqrt{4^2 - (4\times2\times4)}}{2\times2} \\
x &= \frac{-4 \pm \sqrt{16 - 32}}{4} \\
x &= \frac{-4 \pm 4i}{4} \\
x &= -1 \pm i
\end{align}
$$
Is this the correct direction, or did I do something wrong?
| The others did point out your error, so I will just add the way I'd do it:
The tangent line we are looking for is of the form $$g(x)=ax+b$$ for the function $$f(x)=x^2$$ at $x=-2$. We know that their derivatives and their values must be equal at the given point, so we have that $$a=2\cdot(-2)=-4$$ and $$(-2)^2=-4(-2)+b$$ $$4=8+b$$ $$b=-4$$ So the equation for the tangent line is $$y=-4x-4$$
I like this method because I do not need to remember the equation of the line through a given point.
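Tangency can also be checked directly (a sketch): $f(x)-g(x)=x^2+4x+4=(x+2)^2$ has a double root at $x=-2$, so the discriminant vanishes, which is exactly what distinguishes a tangent line from a secant.

```python
def f(x):
    return x * x

def tangent(x):
    return -4 * x - 4

# f - tangent = x^2 + 4x + 4 = (x + 2)^2: a double root at x = -2,
# so the discriminant of x^2 + 4x + 4 must be zero
discriminant = 4 ** 2 - 4 * 1 * 4
```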
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2744119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Help to solve a system of equations: $x-y+xy=-4$; $xy(x-y)=-21$. I need to solve a system of equations. Here's how it looks:
$x-y+xy=-4$
$xy(x-y)=-21$
I tried to substitute $x-y$ with $w$ and $xy$ with $t$ to simplify everything. After that I got this system:
$w+t=-4$
$tw=-21$
I solved this new system and got these results:
$t_1=-7, w_1=3$
$t_2=3, w_2=-7$
After all these steps I ended up with 2 new systems:
1.
$xy=-7 (\leftarrow t)$
$x-y=3 (\leftarrow w)$
2.
$xy=3 (\leftarrow t)$
$x-y=-7 (\leftarrow w)$
Looks like the first one doesn't have any real solutions, and I can't solve the second one. Am I doing the wrong steps? Please help me to solve this system of equations. Thanks.
| HINTS:
We have the equations $$x-y+xy=-4\implies x-y=-4-xy\tag{1}$$ and $$xy(x-y)=-21\tag{2}$$ so substituting $(1)$ into $(2)$, we get $$xy(-4-xy)=-21\implies(xy)^2+4xy-21=0$$ Let $u=xy$. Then $$u^2+4u-21=0$$ and so...
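Carrying the hint to the end (a sketch): $u=xy$ solves $u^2+4u-21=0$, so $u=3$ or $u=-7$; only $u=3$ (with $x-y=-4-u=-7$) yields real solutions, namely $y=\frac{7\pm\sqrt{61}}{2}$ and $x=y-7$.

```python
import math

# u = xy satisfies u^2 + 4u - 21 = 0, so u = 3 or u = -7.
# Only u = 3 gives real solutions: then x - y = -4 - u = -7,
# so x = y - 7 and y^2 - 7y - 3 = 0.
solutions = []
for sign in (1, -1):
    y = (7 + sign * math.sqrt(61)) / 2
    x = y - 7
    solutions.append((x, y))
```

Both pairs satisfy the original system $x-y+xy=-4$, $xy(x-y)=-21$ to machine precision.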
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2744257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Curious integral, $8\pi\int_{0}^{\pi/2}\cos^2x{\ln^2(\tan^2 x)\over [\pi^2+\ln^2(\tan^2 x)]^2}dx=\ln 2-{1\over 4}\zeta(2)$ How to show that,
$$8\pi\int_{0}^{\pi/2}\cos^2x\cdot{\ln^2(\tan^2 x)\over [\pi^2+\ln^2(\tan^2 x)]^2}\mathrm dx=\ln 2-{1\over 4}\zeta(2)$$
This integral is extracted from this paper by Olivier Oloa, line $2.14$.
I have no idea where to begin; any help is appreciated!
| By letting $x=\arctan u$ (so that $\ln^2(\tan^2 x)=4\log^2 u$) the original integral is converted into
$$ 32\pi \int_{0}^{+\infty}\frac{\log^2(u)}{(1+u^2)^2\left(\pi^2+4\log^2 u\right)^2}\,du\stackrel{u\mapsto e^{\pi x/2}}{=}4\int_{\mathbb{R}}\frac{e^{\pi x/2}x^2}{(1+e^{\pi x})^2(1+x^2)^2}\,dx $$
and by symmetry the RHS collapses into
$$ 2 \int_{0}^{+\infty}\frac{x^2}{\cosh(\pi x/2)(1+x^2)^2}\,dx = \int_{\mathbb{R}}\frac{x^2}{\cosh(\pi x/2)(1+x^2)^2}\,dx$$
which can be easily evaluated through the Fourier transform, since $\frac{1}{\cosh(\pi x/2)}$ is essentially a fixed point for $\mathscr{F}$ and $\mathscr{F}\left(\frac{x^2}{(1+x^2)^2}\right)(s)$ is related to the Laplace distribution.
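The claimed closed form can be sanity-checked numerically without any of this machinery. Substituting $\tan x=e^{t/2}$ in the original integral (so $\ln(\tan^2x)=t$, $\cos^2x=\frac{1}{1+e^{t}}$, $dx=\frac{e^{t/2}}{2(1+e^{t})}\,dt$) turns it into $4\pi\int_{\mathbb R}\frac{t^2e^{t/2}}{(1+e^{t})^2(\pi^2+t^2)^2}\,dt$, which composite Simpson handles comfortably (a sketch; the cutoff and step size are ad hoc):

```python
import math

def integrand(t):
    # the original integral after substituting tan(x) = e^(t/2)
    return (4 * math.pi * t * t * math.exp(t / 2)
            / ((1 + math.exp(t)) ** 2 * (math.pi ** 2 + t * t) ** 2))

def simpson(g, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

value = simpson(integrand, -60.0, 60.0, 120000)  # tails beyond |t|=60 are negligible
target = math.log(2) - math.pi ** 2 / 24         # ln 2 - zeta(2)/4
```

The numeric value agrees with $\ln 2-\frac14\zeta(2)\approx 0.2819137$ to many digits.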
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2744352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Matrix with nonnegative symmetric part and semisimplicty of the eigenvalue 0 Let B be a real square matrix with non-negative symmetric part, i.e. for all vectors $X$, $X^\top B X\geq 0$. We also assume that $B$ is singular. I am wondering if the eigenvalue $0$ of $B$ is necessary semi-simple, i.e. is the dimension of the kernel of $B$ equal to algebraic multiplicity of the eigenvalue $0$.
I am unable to prove it, nor am I able to find a counterexample. A counterexample would necessarily be non-symmetric and of dimension at least $3$, and at this point, the differences between diagonalising the matrix $B$ and the quadratic form $X^\top BX$ are wrecking my brain.
| I came back to this and found the answer: yes, if $B$ is singular and has nonnegative symmetric part, then the eigenvalue $0$ of $B$ is semisimple. This relies on the following observations:
Claim 1: If $0$ is not a semisimple eigenvalue of $B$, then the resolvent $R(z) = (B-z)^{-1}$ has a pole of order at least two at zero.
Claim 2: If $B$ has nonnegative symmetric part, then there exists $C>0$ such that for $z$ small enough, $\|R(z)\|\leq C|z|^{-1}$.
We use the following standard theorem:
Proposition: if there exists $c>0$ such that for all vectors $X$, $|AX|\geq c|X|$, then $A$ is nonsingular, and if $c$ is the greatest real that satisfies the previous inequality, then $\|A^{-1}\| = c^{-1}$.
Proof of claim 1: By assumption, $\ker(B)\neq \ker(B^n)$, so let us choose $X_0\neq 0$ such that $ BX_0\neq 0$ and $B^2 X_0 = 0$. Then, for $z\in\mathbb C^\ast$,
$$(B-z)(BX_0-zX_0) = -z^2X_0.$$
Since for $z$ small $|BX_0-zX_0|\geq c>0$, this implies that for $z$ small $\|R(z)\|\geq \frac{c}{|X_0|}|z|^{-2}$.
Proof of claim 2: We have for all $z\in(-\infty,0)$ and vector $X$:
\begin{align*}
|(B-z)X|^2 &= ((B-z)^\ast(B-z)X|X)\\
&=(B^\ast BX|X)-\bar z(BX|X) - z(B^\top X|X) + |z|^2(X|X)\\
&= |BX|^2 -z((B^\top+B)X|X) + |z|^2|X|^2\qquad\text{since }\bar z =z\\
&\geq |z|^2|X|^2\qquad\text{since }z<0
\end{align*}
So, for $z<0$, $\|R(z)\|\leq |z|^{-1}$. Now if we write the resolvent as a Laurent series $R(z) = \sum_{k=-n}^{+\infty}R_k z^k$, the previous inequality implies that $R_{-n} = R_{-n+1}=\dots=R_{-2} =0$, which concludes the proof. (Note: we can find the fact that for $k<-n$, $R_k = 0$ in Kato's "Perturbation theory for linear operators" p. 39 eqs. (5.18) and (5.20).)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2744473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to see the symmetry in this trigonometric equation Consider the equation $2\cos^2x-\cos x-1=0$. We can factor the LHS to obtain: $$(2\cos x + 1)(\cos x-1)=0,$$ leading to three solutions in the interval $[0,2\pi)$, namely $x=0, \frac{2\pi}{3}, \frac{4\pi}{3}$. If we want all solutions over $\Bbb R$, then we can add any multiple of $2\pi$ to each of these solutions.
However, all solutions over $\Bbb R$ are more efficiently expressed as simply the integer multiples of $\frac{2\pi}{3}$. That kind of solution is what one would expect from a problem that started out with something like $\cos (3x)=1$. However, if we expand $\cos(3x)$ using sum formulas, that equation leads us to a different polynomial in $\cos(x)$, namely: $$(2\cos x+1)^2(\cos x-1)=0.$$
So, it's clear that both polynomials have the same roots, and that "explains" why the solution sets are the same. Great.
My question: Is there a reasonable way to recognize the symmetry in the original equation, and transform it into an equation in $\cos(3x)$, other than just knowing ahead of time how it's going to work out?
| $2\cos^2x-\cos x-1=(\cos{2x}+1)-\cos x-1=\cos 2x-\cos x$
$\cos 2x -\cos x=-2\sin{\frac{3x}{2}}\sin{\frac{x}{2}}$
Is it OK at this point?
You can play more with the trigonometric identities to see sometimes if it does turn out well.
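For instance, a quick numerical check of the product-to-sum rewrite and of the claimed solution set (an illustrative Python sketch, not part of the derivation):

```python
import math

def lhs(x):
    return 2*math.cos(x)**2 - math.cos(x) - 1

# the rewrite above: 2cos^2(x) - cos(x) - 1 = -2 sin(3x/2) sin(x/2)
for x in (0.3, 1.7, -2.4, 5.1):
    assert abs(lhs(x) - (-2*math.sin(1.5*x)*math.sin(0.5*x))) < 1e-12

# zeros occur at the integer multiples of 2*pi/3
for k in range(-6, 7):
    assert abs(lhs(k*2*math.pi/3)) < 1e-12
```

The $\sin\frac{3x}{2}$ factor is what exposes the $\frac{2\pi}{3}$-spaced family of solutions directly.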
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2744587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Using a Taylor polynomial of degree 2, find an approximation for $\sqrt[3]{e}$ I do not understand how to find an approximation using a Taylor polynomial.
Also, I need to find an upper limit to the remainder $|R_2(x)|$.
Excuse me if I have any obvious mistakes, this is my first time solving something like this.
$$R_2(\frac{1}{3}) = \frac{f^{(3)}(c)}{3!}(\frac{1}{3})^3$$
$0 \le c \le x$ and $f^{(3)}(c) = e^c$, therefore:
$$= \frac{e^c}{6}\cdot\frac{1}{27} \le \frac{e^\frac{1}{3}}{162}$$
I have no idea. This is probably completely wrong, please help.
| Use the Taylor series for $e^x$ evaluated at $x=\frac{1}{3}$:
$$e^{1/3}=\sum_{n=0}^\infty\frac{1}{3^nn!}=1+\frac{1}{3}+\frac{1}{18}+\cdots$$
which is from the general form
$$e^x=\sum_{n=0}^\infty \frac{x^n}{n!}$$
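To connect this with the remainder bound derived in the question, one can check numerically that the degree-2 partial sum is indeed within $e^{1/3}/162$ of the true value (illustrative Python):

```python
import math

p2 = 1 + 1/3 + (1/3)**2/2       # degree-2 Taylor polynomial of e^x at x = 1/3
actual = math.exp(1/3)
bound = math.exp(1/3)/162       # Lagrange remainder bound from the question

assert p2 < actual              # all omitted terms are positive
assert abs(actual - p2) <= bound
```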
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2744707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
A claim from "Example of a linear functional, but not a distribution" I'm studying linear functionals that are not distributions, and I came across this post: link.
In one of the comments it is claimed that the functional $u:C_c^\infty\to\mathbb{C}$ given by
$$ u= \sum_{n\geq 0} \frac{R^n}{n!}\delta^{(n)} $$
is not even well-defined unless the test function is analytic with radius of convergence around 0 greater than $R$.
How does one prove this? I was able to prove that the sum does not converge absolutely if we take a non-analytic test function with the help of the Cauchy estimate, but this does not imply that the sum is not well-defined.
| Strictly speaking this statement is not true. Fix a smooth function $\phi$ with compact support equal to $1$ on a neighborhood of zero and define
$$f(x)=\begin{cases} \phi(x) e^{-1/x^2} & x \neq 0 \\
0 & x=0 \end{cases}.$$
Then $u(f;R)=0$ for all $R$, but $f$ is not analytic on any neighborhood of zero.
This statement is sort of "morally true" however, in the sense that you should only expect $u(f;R)$ to make sense if $f$ is analytic on a disk centered at the origin of radius greater than $R$, even though there can be exceptions. To see this, consider now a smooth function $\phi$ with support compactly contained in $[-1,1]$ and which is equal to $1$ on a neighborhood of zero. Then look at $f(x)=\frac{\phi(x)}{x-1}$. For this, $u(f;R)$ diverges if $R \geq 1$, even though $f$ is clearly a $C^\infty_c$ function. What is the problem? The problem is that $f$ is a non-analytic continuation of the meromorphic function $\frac{1}{z-1}$ which has a pole at $z=1$. But $u$ only sees the behavior of $f$ on an infinitesimal neighborhood of $0$, so $u$ "thinks" that $f$ is actually that meromorphic function. This prohibits convergence of the series for $R \geq 1$.
In any case, the main point is that the topology on $D$ forces all distributions to have some finite "order", they cannot take derivatives above this order. This allows us to handle non-analytic smooth functions within the framework of distribution theory.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2744814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If my random variables $X_1,...,X_n$ are i.i.d. $N(\mu,\sigma^2)$, why isn't $\bar{X}\sim N(\mu,0)$? If my random variables $X_1,...,X_n$ are i.i.d. $N(\mu,\sigma^2)$, why isn't $\bar{X}\sim N(\mu,0)$?
In other words, if, as I understand it, $X_1,...,X_n$ all have the same mean, $\mu$, how can there be any variance at all in $\bar{X}$?
| You draw a sample consisting of $n$ observations, you can compute a sample mean.
You draw another sample consisting of another $n$ observations, you can compute another sample mean.
We do not expect the first sample mean to be equal to the second sample mean, in fact, it is unlikely that either of them would be equal to $\mu$.
Each sample mean is random and not deterministic. It depends on the sample drawn.
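A small simulation makes this concrete (a Python sketch; the parameters $\mu=5$, $\sigma=2$, $n=25$ are arbitrary choices, not from the question): repeated samples give different sample means, and their spread matches $\sigma^2/n$ rather than $0$.

```python
import random, statistics

random.seed(0)
mu, sigma, n = 5.0, 2.0, 25

# draw 20000 independent samples of size n; record each sample mean
means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(20000)]

assert abs(statistics.fmean(means) - mu) < 0.05              # centered at mu
assert abs(statistics.variance(means) - sigma**2/n) < 0.02   # variance ~ sigma^2/n, not 0
```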
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2744905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Abstract index notation vs Ricci Calculus I have come accross some comparison between the abstract index notation and Ricci calculus as it pertains to contraction and what I find is:
The former (abstract notation) indicates that a basis-independent trace operation being applied, which reduces to the aforementioned summation whenever a specific basis is fixed; the latter (Ricci calculus) construes contraction as true summation with numerical indexes and, correspondingly, with a given coordinate system. In the Ricci calculus, a contraction indicates a literal summation. Since this requires numbers, it also requires a coordinate system to be chosen. Really, the abstract index notation is nothing more than the observation that almost all of the Ricci calculus remains intact if one does not choose a basis. There's a great deal of meaning in the structure of the index expressions which is not basis dependent.
My question is very simple: What kind of meaning in the structure is not basis dependent? Can you elaborate on that?
Thanks in advance
| A simple example to illustrate the point is an expression of the form $t^a_{\;a}$ where $t^a_{\;b}$ is a tensor (or tensor field) of type (1,1), vulgarly known as an endomorphism. The expression $t^a_{\;a}$ is interpreted in the usual index notation as the sum of the "diagonal" elements and taken literally is dependent on the basis. When interpreted as contraction in the abstract index notation, it results in a scalar (or scalar function) without ever having to choose a basis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2745057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Chebychev's Inequality Question Not sure if I'm understanding the question wrong, but the prof's notes gave a different answer.
The number of equipment breakdowns in a manufacturing plant averages 5 per week with a standard deviation of 0.8 per week.
Find an interval that includes at least 90% of the weekly figures for the number of breakdowns.
My Answer:
Prof's Answer:
I think he did it for at most 90%. Thoughts?
| The problem is when you write that $$P(|X-E(X)|\geq kSD(X))\leq \frac{1}{k^2}$$
implies $$P(|X-E(X)|< kSD(X))> \frac{1}{k^2}$$
It would rather be $$P(|X-E(X)|< kSD(X))\geq 1- \frac{1}{k^2}$$
Edit: how to get from $1$ to $3$. Call $A$ the event $\{|X-E(X)|\geq kSD(X)\}$.
Then $P(A)+P(A^c)=1$ implies that if $P(A)\leq \frac{1}{k^2}$ then $P(A^c)\geq 1-\frac{1}{k^2}$. It turns out that $A^c=\{|X-E(X)|< kSD(X)\}$.
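Applying this to the numbers in the question: $1-\frac{1}{k^2}=0.9$ gives $k=\sqrt{10}$, so an interval covering at least 90% of weekly figures is $\mu\pm\sqrt{10}\,\sigma$ (illustrative Python):

```python
import math

mu, sd = 5.0, 0.8
k = math.sqrt(10)            # solves 1 - 1/k^2 = 0.90
lo, hi = mu - k*sd, mu + k*sd

# roughly (2.47, 7.53) breakdowns per week
assert abs(lo - 2.47018) < 1e-4 and abs(hi - 7.52982) < 1e-4
```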
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2745222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Identify some Coxeter group As we all know, the weyl group of lie algebra of $B_{2}$ type is $\left\{s_{1},s_{2}|s_{1}^{2}=1, s_{2}^{2}=1, (s_{1}s_{2})^{4}=1\right\}$. How can we identify this with $Z^{2}_{2}\rtimes S_{2}$?
If I choose fundamental roots of $B_{2}$ as basis, then I can identify $s_{1}$ with matrix $\begin{bmatrix}
-1&1\\0&1 \end{bmatrix}$ and identify $s_{2}$ with matrix $\begin{bmatrix}
1&0\\2&-1 \end{bmatrix}$. Then our group can be thought as matrix group generated by these two matrices. But I don't know how to find semidirect product structure out there.
| I will describe this for type $B_n$ in general, since that actually makes it a bit more clear what is going on.
$$W(B_n) = \langle t_0, s_1,\dots, s_{n-1}\mid t_0^2, s_i^2, (t_0s_1)^4, (t_0s_i)^2\mbox{ for }i\geq 2, (s_is_{i+1})^3, (s_is_j)^2\mbox{ for }|i-j|>1\rangle$$
So it has generators $s_i$ for $i\in \{1,\dots, n-1\}$ which generate a copy of $S_n$ (just note that the $s_i$ precisely satisfy the relations of the type $A_{n-1}$ Coxeter group), together with an extra generator that I have here called $t_0$. The reason for this comes now:
For each $i\in \{1,\dots, n-1\}$ define recursively $t_i = s_it_{i-1}s_i$.
Now I claim that the subgroup $\langle t_0, t_1,\dots, t_{n-1}\rangle$ is isomorphic to $(\mathbb{Z}/2\mathbb{Z})^n$, that it is normalized by the subgroup $\langle s_1,\dots, s_{n-1}\rangle$, and that it intersects the latter trivially.
All of this leads to $W(B_n)$ being isomorphic to the semidirect product $(\mathbb{Z}/2\mathbb{Z})^n\rtimes S_n$ (in fact, a very nice such semidirect product: It is the wreath product of $\mathbb{Z}/2\mathbb{Z}$ with $S_n$).
I leave the proof of my claim about the subgroup as a good exercise.
Hint for showing that the copy of $S_n$ normalizes the subgroup generated by the $t$'s: Show explicitly that $$s_it_js_i = \begin{cases}t_{j-1} & \mbox{if }i=j \\ t_{j+1} & \mbox{if }i = j+1 \\ t_j & \mbox{else}\end{cases}$$
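One can also confirm the expected order $|W(B_2)| = |(\mathbb{Z}/2\mathbb{Z})^2|\cdot|S_2| = 8$ by enumerating the matrix group generated by the two matrices given in the question (an illustrative Python sketch):

```python
s1 = ((-1, 1), (0, 1))
s2 = ((1, 0), (2, -1))
e  = ((1, 0), (0, 1))

def mul(a, b):
    return tuple(tuple(sum(a[i][k]*b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# closure of {e} under right multiplication by the generators
group, frontier = {e}, [e]
while frontier:
    g = frontier.pop()
    for s in (s1, s2):
        h = mul(g, s)
        if h not in group:
            group.add(h)
            frontier.append(h)

assert len(group) == 8                                        # dihedral of order 8
assert mul(mul(s1, s2), mul(s1, s2)) == ((-1, 0), (0, -1))    # (s1 s2)^2 = -I
```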
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2745515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to prove $\lim_{x \to \infty} x^{(x+1)}-(x+1)^x = \infty$ I try to prove this by using L'Hospital Rule but it doesn't work.
I know it is infinity from wolframalpha but I don't know how to prove it.
| Use the estimate:
$$n^{n+1}-(n+1)^n>n^n, n>3 \iff n^n(n-1)>(n+1)^n,n>3 \iff \\
n-1>\left(1+\frac 1n\right)^n, n>3.$$
The last inequality holds because $\left(1+\frac 1n\right)^n<e<3\le n-1$ for $n\ge 4$; combined with $n^n\to\infty$, it gives $n^{n+1}-(n+1)^n\to\infty$.
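A quick computational check of the chain of inequalities (illustrative Python):

```python
for n in range(4, 60):
    assert n - 1 > (1 + 1/n)**n            # right-hand estimate
    assert n**(n+1) - (n+1)**n > n**n      # hence the claimed lower bound
```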
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2745644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Eigenvalues of a block circulant matrix How does one calculate the eigenvalues of a unitary 3 by 3 block circulant matrix where
$$U = -\frac{i}{3} \begin{pmatrix}
\Lambda_{1} & \Lambda_{2} & \Lambda_{3} \\
\Lambda_{3} & \Lambda_{1} & \Lambda_{2} \\
\Lambda_{2} & \Lambda_{3} & \Lambda_{1} \end{pmatrix}$$
where
$$ \Lambda_{1} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & -2\sqrt{2} \\ -2\sqrt{2} & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \end{pmatrix}, \quad \Lambda_{2} = \begin{pmatrix} 0 & 0 & -2\sqrt{2} & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} , \quad \Lambda_{3} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & -2\sqrt{2} & 0 & 0 \end{pmatrix} $$
| This won't be a full answer, but here's some thoughts that (I think) should move you in the right direction.
Let $P$ denote the matrix
$$
P = \pmatrix{0&1&0\\0&0&1\\1&0&0}
$$
Then, in terms of Kronecker products, we can write your matrix as
$$
3iU = I \otimes \Lambda_1 + P \otimes \Lambda_2 + P^2 \otimes \Lambda_3
$$
It is actually very easy to calculate the eigenvalues of $(P \otimes \Lambda_2 + P^2 \otimes \Lambda_3)$: notably, $\Lambda_2,\Lambda_3$ commute, and are both nilpotent. It follows that $P \otimes \Lambda_2$ and $P^2 \otimes \Lambda_3$ are also commuting nilpotent matrices. It follows that $(P \otimes \Lambda_2 + P^2 \otimes \Lambda_3)$ is a nilpotent matrix, which means that its only eigenvalue is $0$. Moreover, it satisfies $(P \otimes \Lambda_2 + P^2 \otimes \Lambda_3)^2 = 0$. Its rank is at most $3$.
The $\Lambda_1$ term, however, makes things a bit tricky, and I'm not sure how to continue this line of analysis.
Here's a "brute force" approach that might help.
*
*Your matrix $\Lambda_1$ is diagonalizable. To begin, compute an invertible $S$ (whose columns are eigenvectors) such that $D = S^{-1}\Lambda_1S$ is diagonal. The eigenvalues of $\Lambda_1$ are the $4$ distinct (but non-real) roots of its characteristic polynomial, $p(x) = x^4 - 2x^2 + 9$
*Let $V = 3iU$ (so that I can be lazy and leave off the $-\frac i3$). We note that
$$
(I \otimes S)^{-1} V (I \otimes S) =
\pmatrix{S\Lambda_{1}S^{-1} & S\Lambda_{2}S^{-1} & S\Lambda_{3}S^{-1} \\
S\Lambda_{3}S^{-1} & S\Lambda_{1}S^{-1} & S\Lambda_{2}S^{-1} \\
S\Lambda_{2}S^{-1} & S\Lambda_{3}S^{-1} & S\Lambda_{1}S^{-1}} = \\
\pmatrix{D & M_2 & M_3 \\
M_3 & D & M_2 \\
M_2& M_3 & D} = I \otimes D + P \otimes M_2 + P^2 \otimes M_3
$$
The matrix $I \otimes D$ is diagonal, and the matrix $(P \otimes M_2 + P^2 \otimes M_3)$ is similar to the matrix $(P \otimes \Lambda_2 + P^2 \otimes \Lambda_3)$ analyzed above. The matrices $M_2,M_3$ have rank $1$.
I hope you find these observations to be useful.
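The nilpotency claim is easy to verify numerically; the sketch below builds $N = P\otimes\Lambda_2 + P^2\otimes\Lambda_3$ with plain Python lists and checks $N^2=0$ (illustrative, not part of the argument):

```python
import math

r = -2*math.sqrt(2)
L2 = [[0.0]*4 for _ in range(4)]; L2[0][2] = r   # Lambda_2
L3 = [[0.0]*4 for _ in range(4)]; L3[3][1] = r   # Lambda_3
P  = [[0,1,0],[0,0,1],[1,0,0]]
P2 = [[0,0,1],[1,0,0],[0,1,0]]                   # P squared

def kron(A, B):
    m, n = len(B), len(B[0])
    return [[A[i//m][j//n]*B[i % m][j % n] for j in range(len(A[0])*n)]
            for i in range(len(A)*m)]

A, B = kron(P, L2), kron(P2, L3)
K = [[A[i][j] + B[i][j] for j in range(12)] for i in range(12)]
K2 = [[sum(K[i][t]*K[t][j] for t in range(12)) for j in range(12)]
      for i in range(12)]
assert all(abs(v) < 1e-12 for row in K2 for v in row)   # K^2 = 0
```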
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2745824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
what is "Minimal Uncountable well-ordered set"? Can anyone make me understand what is "Minimal Uncountable well-ordered set" (Munkres, Topology, Example 2 of the limit point compactness section)?
I know what uncountability and a well-ordered set are.
Thank You in Advance.
| @cmi ℝ in the usual ordering is not well ordered. And if equipped with proper ordering, ℝ can be isomorphic to the "Minimal Uncountable well-ordered set" (assuming the continuum hypothesis), because under the continuum hypothesis, ℝ has the same cardinality as the "Minimal Uncountable well-ordered set", and therefore a bijection between them exists. We can then just define the ordering on ℝ based on the bijection and the ordering of the "Minimal Uncountable well-ordered set".
So the reason ℝ (with the usual ordering) is not a minimal uncountable well-ordered set is not the reason you gave (that one can remove an element), but its ordering.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2745911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove that $6$ is a divisor of $n^3 - n$ for all natural numbers. How would you approach such a problem? Induction perhaps? I have been studying proof by induction, but so far I have only solved problems of this nature:
$$1 + 4 + 7 +\dots+ (3n-2) = \frac{n(3n-1)}{2}.$$
| To prove by induction take the base case $n=0$ or $n=1$.
Then for the inductive step $$(n+1)^3-(n+1)=n^3+3n^2+3n+1-n-1=(n^3-n)+3(n^2+n)$$and the statement will be true if you can show that $n^2+n=n(n+1)$ is even. You can do that various ways, including a similar induction
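As a sanity check, the claim — and the factorization $n^3-n=(n-1)n(n+1)$, a product of three consecutive integers, which is why both $2$ and $3$ divide it — can be verified directly (illustrative Python):

```python
for n in range(1000):
    assert n**3 - n == (n - 1) * n * (n + 1)   # three consecutive integers
    assert (n**3 - n) % 6 == 0
```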
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2745984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Closed form of $\sum_{k=0}^n\binom{2k}{k}(-1/4)^k$ Loosely related to my last question, I was trying to find a closed form of the finite sum
$$a_n:=\sum_{k=0}^n\binom{2k}{k}\left(-\frac{1}4\right)^k$$
This is not too different from the well-known expression
$$\sum_{k=0}^n\binom{2k}{k}\left(\frac{1}4\right)^k=\binom{n+\frac12}{n}=\frac{2n+1}{2^{2n}}\begin{pmatrix}2n\\n\end{pmatrix}$$
so
$$
\sum_{k=0}^n\binom{2k}{k}\Big(\frac{-1}4\Big)^k=\sum_{k=0}^n\binom{2k}{k}\Big(\frac{1}4\Big)^k-2\sum_{\substack{k=0\\k\text{ odd}}}^n\binom{2k}{k}\Big(\frac{1}4\Big)^k\\
=\frac{2n+1}{2^{2n}}\binom{2n}{n}-\frac12\sum_{l=0}^{\lfloor \frac{n-1}2 \rfloor}\begin{pmatrix}4l+2\\2l+1\end{pmatrix}\frac1{16^l}.
$$
However, the second sum does not seem to be easier to handle at first glance. Also trying a similar ansatz $a_n= \binom{2k}{k}p_n/2^{2n}$ led to seemingly unstructured $p_n$ given by (starting from $p=0$):
$$
1,1,\frac73,\frac95,\frac{72}{35},\frac{9}{7},\frac{185}{77},\frac{227}{143},\frac{5777}{2145},\ldots
$$
So I was wondering (a) if there already is a known closed form for the $a_n$ (I've searched quite a bit but sadly wasn't successful) and if not, then (b) what is a good way to tackle this problem - or does a "simple" (sum-free) form maybe not even exist?
Thanks in advance for any answer or comment!
Edit: I am aware of the solution
$$
a_n=(-1)^n 2^{-2n+2}\begin{pmatrix}2n+2\\n+1\end{pmatrix} {}_2F_1(1;n+\frac32;n+2;-1)+\frac1{\sqrt2}
$$
Mathematica presents where ${}_2F_1$ is the hypergeometric function. But as the latter is given by an infinite sum, I was hoping for something simpler by connecting it to the known, non-alternating problem as described above - although I'd also accept if this was not possible.
| I think that
\begin{align}
a_n&=\sum_{k=0}^n\binom{2k}{k}\biggl(-\frac14\biggr)^k\\
&=\frac2\pi\int_0^{\pi/2}\frac{(-1)^n \sin ^{2 n+2}x+1}{\sin ^2x+1}\textrm{d}x\\
&=\frac2\pi\int_0^{\infty}\frac{1}{2+x^2}\biggl[1+\frac{(-1)^n}{(1+x^2)^{n+1}}\biggr]\textrm{d}x.
\end{align}
These integral representations imply that
*
*the sequence $a_n>0$ for all $n\in\{0\}\cup\mathbb{N}$;
*the sequence $a_{2n}$ for all $n\in\{0\}\cup\mathbb{N}$ is decreasing and convex;
*the sequence $a_{2n+1}$ for all $n\in\{0\}\cup\mathbb{N}$ is increasing and concave;
*the limit $\lim_{n\to\infty}a_n=\frac{\sqrt{2}\,}{2}$ is valid.
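These representations are easy to test numerically; the sketch below compares the partial sums $a_n$ with the first integral (midpoint rule) and checks the limit $\frac{\sqrt2}{2}$ (illustrative Python; the alternating-series tail bound, with terms of size $\approx 1/\sqrt{\pi k}$, justifies the loose tolerance at the end):

```python
import math

def a(n):
    # partial sums via the term recursion t_{k+1} = -t_k (2k+1)/(2k+2)
    s, t = 0.0, 1.0
    for k in range(n + 1):
        s += t
        t *= -(2*k + 1) / (2*k + 2)
    return s

def integral(n, steps=20000):
    # (2/pi) * int_0^{pi/2} ((-1)^n sin^{2n+2}x + 1)/(sin^2 x + 1) dx
    h = (math.pi/2) / steps
    total = 0.0
    for i in range(steps):
        s2 = math.sin((i + 0.5)*h)**2
        total += ((-1)**n * s2**(n + 1) + 1) / (s2 + 1)
    return (2/math.pi) * total * h

for n in (0, 1, 5, 10):
    assert abs(a(n) - integral(n)) < 1e-6
assert abs(a(4000) - math.sqrt(2)/2) < 0.01    # slow alternating convergence
```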
If this formula is useful, please read Theorem 23 and its proof in the paper
Feng Qi and Bai-Ni Guo, Integral representations of the Catalan numbers and their applications, Mathematics 5 (2017), no. 3, Article 40, 31 pages; available online at https://doi.org/10.3390/math5030040.
I will write down the proof of my own idea above in a paper and then come back.
By the way, What is the general formula of the sum $\sum_{k=0}^{n}(-1)^{k} \binom{n}{k}\binom{k/2}{m}$ for $m,n\in\mathbb{N}$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2746097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Paradox vs Tautology. The expression (~p or p) is a tautology.
Consider this statement(p): This statement is false.
Now here, Statement p is paradoxical.
My question is :- Can we define paradoxes like this as statements which prove Tautologies wrong?
| A statement can be provable or not provable, and it can be sound (something we expect to be true, like 1+1=2) or unsound (we expect it to be false, like 1+1=3).
A paradox arises when a statement is unsound but provable. It indicates an error in the logic.
The claim of the speaker in the liar's paradox is clearly unsound: it contradicts itself. But we have no reason to believe it is provable. So it only has 1 of the 2 qualifications needed to be a paradox.
Can we define paradoxes like this as statements which prove Tautologies wrong?
No. A paradox isn't only defined by what it proves (unsound claims) , but by the fact that the statement itself is provable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2746221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If two random variables have CDFs that have the same value for all x, can we assume the random variables are equal? My text has the following theorem:
Let $X$ have a CDF $F$ and let $Y$ have CDF $G$. If $F(x) = G(x)$ for all $x$, then $\mathbb{P}(X \in A) = \mathbb{P}(Y \in A)$ for all $A$.
I don't see a way that X and Y could assign a different probability to the same event but still have their CDFs be equal at every point. If they disagreed at point j, then F(j) will not equal G(j). Therefore they must be the same?
| Elementary probability:
They don't teach this is in elementary probability, but random variables have an explicit representation known as the Skorokhod representation.
Basically, we never really know the formulas for a lot of the $X$'s. We know the $X$'s mainly from the $F_X(x)$'s. It's kinda like talking about the functions $f(x)=x^2+c$ through their common derivative $f'(x)=2x$: When is $f$ increasing? When $f' > 0$. We know that $f$ is not unique given $f'(x)$. We can see that through integration, or just by constructing explicit examples $f(x)=x^2+5$ and $f(x)=x^2+4$.
How we do similarly here in probability?
For example, consider $X \sim Be(p)$ where $P(X=0):=p$ and $P(X=1):=1-p$ (Usually, textbooks use $p$ for the $P(X=1)$).
If both of the following $X_i$'s satisfy $X \sim Be(p)$, then we've given explicit Bernoulli random variables that can never be the same, i.e. $X \sim Be(p)$ doesn't have a unique Skorokhod representation.
$$X_1(\omega) := 1_{(0,1-p)}(\omega) := 1_{A_1}(\omega)$$
$$X_2(\omega) := 1_{(p,1)}(\omega) := 1_{A_2}(\omega)$$
If $\omega=\frac{1-p}{2}$, then $X_1(\omega)=1$ while $X_2(\omega)=0$.
Let us try to compute the CDF of $X_i$:
$P(X_i(\omega) \le x)$ is 0 for $x<0$ and 1 for $x \ge 1$.
As for $0 \le x < 1$, define
$$P(X_i(\omega) \le x) = P(X_i(\omega) = 0) = P(1_{A_i}(\omega) = 0) = P(\omega \notin A_i) = 1 - P(\omega \in A_i)$$
We have our result if $P(\omega \in A_1) = P(\omega \in A_2) = 1-p$. Is it?
Okay, so here we need to make some kind of assumption to say that the interval $(p,1)$ is not only as probable as $(0,1-p)$ but also that the probability of each interval is $1-p$. Clearly the intervals have the same length, but does that mean they have the same probability? Furthermore, if they do, is that common probability equal to $1-p$? That depends on how we define probabilities here. One such assumption is:
A uniformly distributed random variable $U$ on $(0,1)$ has Skorokhod representation $U(\omega) = \omega \sim Unif(0,1)$.
Hopefully this isn't circular, otherwise this half of the answer is nonsense.
Then $P(\omega \in A_i) = \frac{(1-p)-(0)}{1-0}$ or $= \frac{(1)-(p)}{1-0}$
$$P(\omega \in A_i) = \frac{1-p}{1-0} = 1-p$$
Advanced probability:
It can be shown that $$Y(\omega) = \omega \sim Unif(0,1)$$ for $\omega$ in $((0,1),\mathscr B(0,1),\mu)$ where $\mu$ is Lebesgue measure.
Hence,
$$P(\omega \in A_i) = \mu(A_i) = l(A_i) = 1-p$$
where $l$ is length.
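A simulation illustrates the point of the two representations $X_1, X_2$ above (an illustrative Python sketch, with $p=0.3$): they have identical distributions yet disagree on a set of positive probability.

```python
import random

random.seed(1)
p = 0.3
X1 = lambda w: 1 if 0 < w < 1 - p else 0     # indicator of (0, 1-p)
X2 = lambda w: 1 if p < w < 1 else 0         # indicator of (p, 1)

ws = [random.random() for _ in range(100_000)]
m1 = sum(map(X1, ws)) / len(ws)
m2 = sum(map(X2, ws)) / len(ws)
differ = sum(X1(w) != X2(w) for w in ws) / len(ws)

assert abs(m1 - (1 - p)) < 0.01 and abs(m2 - (1 - p)) < 0.01  # same law
assert abs(differ - 0.6) < 0.01   # but they disagree on measure 2p = 0.6
```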
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2746337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why worry about commutativity but not associativity in The Fundamental Theorem of Arithmetic? A common statement of The Fundamental Theorem of Arithmetic goes:
Every integer greater than $1$ can be expressed as a product of powers of distinct prime numbers uniquely up to a reordering of the factors.
Now the statement makes a point of mentioning that factorization is unique up to reordering of the factors, saying basically that we don't have to worry about it because multiplication in the integers is commutative. But why not specify that it's also unique up to the choice in which order we multiply the factors? I.e, that we don't have to worry about it because multiplication in the integers is associative too? If we insist on multiplication being a binary operation, then we need to define some grouping when we have a product of more than two integers. Shouldn't there be a clause in the Fundamental Theorem that indicates, for example, that $30 = (2\times (3 \times 5))$ and $30 = ((2\times 3) \times 5)$ are not distinct factorizations?
It should be noted that some answers to this question were merged from another question, so they may not be completely consistent with this question exactly as it's stated.
| While being sloppy, mathematicians create an intuitive and simple language that allows deeper and deeper investigations in an extraordinary creative activity. Keeping all parenthesis and avoid all informal constructions would make mathematics more static.
Of course one can define a normal form which cooperates with the idea of the Fundamental Theorem of Arithmetic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2746430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 10,
"answer_id": 9
} |
In which order can you execute the given rotation and the projection successively?
Given is the rotation
$$d:\mathbb{R}^2 \rightarrow \mathbb{R}^2 \mbox{ with }
d:\begin{pmatrix}
x\\
y
\end{pmatrix} \mapsto
\begin{pmatrix}
x \cos \alpha - y \sin \alpha\\
x \sin \alpha + y \cos \alpha
\end{pmatrix}$$
and the projection
$$p: \mathbb{R}^3 \rightarrow \mathbb{R}^2 \mbox{ with } p: \begin{pmatrix}
x\\
y\\
z
\end{pmatrix} \mapsto
\begin{pmatrix}
x\\
y
\end{pmatrix}$$
In which order can you execute the rotation $d$ and projection $p$ successively?
I'm not quite sure why the order would matter. So if we form the linear mapping to a matrix, we have for rotation:
$$\begin{pmatrix}
\cos \alpha & -\sin \alpha\\
\sin \alpha & \cos \alpha
\end{pmatrix}$$
And for the projection we can take the matrix
$$\begin{pmatrix}
1\\
0
\end{pmatrix}$$
because projection is linear mapping from vector space to itself where its square is still the same result.
And now multiply both matrices?
I don't see why the order matters here and how to do it actually? Does the order matter because if you choose it badly you cannot do matrix multiplication because their sizes don't match?
| Since
*
*$d:\mathbb{R}^2 \rightarrow \mathbb{R}^2$
and
*
*$p: \mathbb{R}^3 \rightarrow \mathbb{R}^2$
the composition is possible only for projection first and then rotation, that is
*
*$d\circ p: \mathbb{R}^3 \rightarrow \mathbb{R}^2\quad \begin{pmatrix}
x\\
y\\
z
\end{pmatrix} \mapsto \begin{pmatrix}
x \cos \alpha - y \sin \alpha\\
x \sin \alpha + y \cos \alpha
\end{pmatrix}$
note that the transformation matrix for the composition is
$$T(\vec x)=\begin{pmatrix}
\cos \alpha &- \sin \alpha&0\\
\sin \alpha & \cos \alpha&0
\end{pmatrix}\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}=\begin{pmatrix}
x \cos \alpha - y \sin \alpha\\
x \sin \alpha + y \cos \alpha
\end{pmatrix}$$
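A quick numerical confirmation that $d\circ p$ agrees with the $2\times 3$ matrix above, while the other order is not even defined dimensionally (illustrative Python):

```python
import math

def p(v):                      # projection R^3 -> R^2
    return (v[0], v[1])

def d(alpha, v):               # rotation R^2 -> R^2
    x, y = v
    return (x*math.cos(alpha) - y*math.sin(alpha),
            x*math.sin(alpha) + y*math.cos(alpha))

alpha, v = 0.7, (2.0, -1.0, 5.0)
T = [[math.cos(alpha), -math.sin(alpha), 0],
     [math.sin(alpha),  math.cos(alpha), 0]]
Tv = tuple(sum(T[i][j]*v[j] for j in range(3)) for i in range(2))

assert all(abs(a - b) < 1e-12 for a, b in zip(d(alpha, p(v)), Tv))
# p(d(alpha, v)) would be undefined: d expects a 2-vector, but v is a 3-vector
```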
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2746542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $M=f(N)\oplus K$, then there is a left inverse to $f$ Suppose $f:N\to M$ is an injective $R$-module homomorphism and $f(N)$ is a direct summand of $M: M=f(N)\oplus K$ for a submodule $K\subset M$. I'm trying to show that there is a homomorphism $f':M\to N$ such that $f'(f(n))=n$ for $n\in N$.
It would be natural to define $f':M=f(N)\oplus K\to N$ by $(f(n),k)\mapsto n$. But since I don't know the explicit form of the map $f$, I cannot show that $f'(f(n))=n$. How do I prove that without knowing what $f$ is?
| We don't need to know what $f$ is, just that it's isomorphic onto its image. We could write the proof as follows:
Theorem: Suppose $f : N \rightarrow M$ is an injective $R$-module homomorphism and $f(N)$ is a direct summand of $M$. Then $f$ has a left inverse.
Proof: Take a submodule $K$ such that $M = f(N) \oplus K$ (internal direct sum). Then the map $g : N \rightarrow f(N)$ sending $x$ to $f(x)$ is surjective and injective, and therefore an isomorphism. Define a homomorphism $h : M \rightarrow N$ where, for $x \in f(N)$ and $y \in K$, we set $h(x + y) = g^{-1} (x)$. This is an $R$-linear homomorphism, since for $x, x' \in f(N)$ and $y, y' \in K$, we have
$h(x + x' + y + y') = g^{-1} (x + x') = g^{-1} (x) + g^{-1}(x') = h(x + y) + h(x' + y')$
For $x \in f(N)$, $y \in K$, and $r \in R$, we have
$h(r(x + y)) = h(rx + ry) = g^{-1} (rx) = r g^{-1} (x) = r h(x + y)$.
This shows that $h$ is an $R$-linear homomorphism. Then we have $h(f(x)) = g^{-1} (f(x)) = g^{-1} (g (x)) = x$. So $h \circ f = \text{Id}_N$.
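A concrete toy instance of the theorem, with $R=\mathbb{Z}$, $N=\mathbb{Z}$, $M=\mathbb{Z}^2$, $f(x)=(x,0)$ and $K=\{(0,y)\}$, so that $h(x,y)=x$ (an illustrative Python sketch):

```python
f = lambda x: (x, 0)       # injective homomorphism Z -> Z^2
h = lambda v: v[0]         # the left inverse built in the proof

for x in range(-5, 6):
    assert h(f(x)) == x                                  # h o f = id
assert h((2, 3)) + h((4, -1)) == h((2 + 4, 3 - 1))       # additivity of h
```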
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2746811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In how many ways can 20 identical chocolates be distributed among 8 students Provided each student gets at least 1 chocolate and exactly two students get at least two chocolates each.
We know that if k indistinguishable objects are to be placed in n bins such that each bin contains atleast 1 object- the # of ways we can do that is:
$\binom{k-1}{n-1}$
So here in this context, we have, $\binom{20-1}{8-1}=\binom{19}{7}$ as $k=20 \space identical \space chocolates$ and $n=8 \space distinct \space students$
Now after this step we have to find out which two students are given 2 chocolates each.
So choose two students: We have, $\binom{8}{2}$ # of ways to do that
Now after that we are left with $19-7=12$ chocolates which we have to divide among 2 students s.t each student gets atleast 2 chocolates.
Thus this boils down to $x+y=12,x\ge2,y\ge2$
So possible solutions:
$(2+10=12)....(occurs \space twice,i.e 10+2=12),(3+9=12)....(occurs \space twice),(4+8=12)....(occurs \space twice),(5+7=12)....(occurs \space twice),(6+6=12)....(occurs \space only \space once)$
Thus total # of cases= $9$
So, for each of $\binom{8}{2}$ students we have 9 cases.
So total # of instances= $\binom{8}{2}\times 9$
Hence total # of instances=$\binom{19}{7}\times\binom{8}{2}\times 9$
But sadly this does not match with any of the options given:
Options are
A.$308$
B.$364$
C.$616$
D.$\binom{8}{2}\binom{17}{7}$
Where am I wrong?
What is/are the correct step(s)?
Please give proper and detailed reasoning.
My assumptions are wrong. Please see my answer which was suggested by @antkam to solve this.
| All $8$ students shall obtain $\geq1$ pieces, and exactly $2$ of them shall obtain $\geq2$ pieces.
Give each student $1$ piece, select the $2$ special students in ${8\choose2}=28$ ways, give the senior of these $s\in[11]$ additional pieces, and the junior the remaining $12-s>0$ pieces. It follows that there are $28\cdot11=308$ admissible allocations in all.
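The count $28\cdot11=308$ can be confirmed by brute force over all compositions of $20$ into $8$ positive parts (illustrative Python):

```python
from itertools import combinations

count = 0
for cuts in combinations(range(1, 20), 7):    # stars-and-bars cut positions
    bounds = (0,) + cuts + (20,)
    parts = [bounds[i + 1] - bounds[i] for i in range(8)]
    if sum(1 for q in parts if q >= 2) == 2:  # exactly two students get >= 2
        count += 1
assert count == 308
```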
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2746957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
$\tilde{P}$ is a refinement of $P$. $m_j\le \tilde{m_p}$? $\tilde{P}$ is a refinement of $P$. $P=\{x_0,....,x_n\}$ and $\tilde{P}=\{x_{k_0},...,x_{k_n}\}$, and $x_{k_j}=x_j.$ $m_j = \inf\{f(x):x_{j-1}\le x \le x_j\}$, and $\tilde{m_p}=\inf\{f(x_p):x_{k_{j-1}}\le x_p \le x_{k_j}\}$. My textbook says $m_j\le \tilde{m_p}$, but shouldn't it be the opposite? because $P \subset \tilde {P}$.
Edit: I add the photo in case the above explanation is not enough.
| Let's start with the observation that if $I$ is an interval, and $I' \subset I$ is a subinterval, then $\inf \{f(x) : x \in I\} \leq \inf \{f(x) : x \in I'\}$, since the $\inf$ on the whole interval is clearly smaller than the $\inf$ on the contained interval.
Now let
$$ m_j = \inf\{f(x):x_{j-1}\le x \le x_j\}. $$
If we refine this interval, we get a bunch of points
$$\tilde x_{0} \ \tilde x_{1} \ ... \ \tilde x_{m}$$
with $\tilde x_0 = x_{j-1}$ and $\tilde x_{m} = x_j$
Now, for any $p < m$, we have
$$(\tilde x_p, \tilde x_{p+1}) \subset (x_{j-1}, x_j) $$
And so
$$\inf \{f(x) : x \in (x_{j-1}, x_j) \} \leq \inf \{f(x) : x \in (\tilde x_{p}, \tilde x_{p+1}) \} $$
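The opening observation — the infimum over a subinterval is at least the infimum over the containing interval — is easy to illustrate numerically (a Python sketch with an arbitrary sample function):

```python
import math

f = math.cos
big   = [f(0.01*i) for i in range(0, 301)]      # samples on [0, 3]
small = [f(0.01*i) for i in range(100, 201)]    # samples on [1, 2], a subinterval

assert min(small) >= min(big)   # inf can only grow (or stay) on a smaller interval
```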
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2747115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Inner product's property in a Hilbert space on $\mathbb{C}$ Recently, I have just learnt about the concept of a Hilbert space.
As far as I can understand, a Hilbert space is a generalized Euclidean space.
When talking about an Euclidean space $E$, indeed there must be a mapping from $E \times E$ to the scalar field, called the "inner product". Back then, when I started to study linear algebra, I studied about the Euclidean space on $\mathbb{R}$. The inner product there has this property
$$\langle x,y \rangle = \langle y,x \rangle $$
But, when expanded to a Hilbert space on $\mathbb{R}$ or $\mathbb{C}$, that property has been changed into
$$\langle x,y \rangle = \overline{\langle y,x \rangle} $$
Indeed, when our scalar field is $\mathbb{R}$, nothing have changed. But what I'm concerning is: Why do we need the inner product of $x$ and $y$ to be the conjugate of the inner product of $y$ and $x$ when the scalar field is $\mathbb{C}$? Why is it neccesary to define that property like that, while we can just simply keep the commutativity like when the scalar field is $\mathbb{R}$?
Sorry if I asked something stupid. Thank you.
| It's not a stupid question at all!
One reason is that using conjugate symmetry instead of symmetry allows for a norm to be defined on the space, as with real inner product spaces. We define
$$\|x\| = \sqrt{\langle x, x \rangle},$$
and expect such a thing to be well-defined, real, and non-negative so as to measure distance. Having conjugate symmetry means that,
$$\langle x, x \rangle = \overline{\langle x, x \rangle},$$
making the inner product of $x$ with itself real (of course, further axioms are required to make it non-negative). It better serves a geometric purpose than the slightly more obvious generalisation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2747274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find explicitly the positive solutions of $2^x=x^2$
Find explicitly the positive solutions of the equation $2^x=x^2$
I noticed that $x=2$ and $x=4$ are roots of the equation.
How can I prove that they are the only positive ones? Thanks in advance
| They're not the only solutions if we also allow negative numbers: since $2^0>0^2$ and $2^{-1}<(-1)^2$, there exists a solution $x$ with $-1<x<0$.
Those three are the only real solutions. Let $f(x)=2^x-x^2$. Calculate the third derivative and show that $f'''(x)>0$ for every $x$. If $f$ had four zeroes the Mean Value Theorem would show that $f'$ had three zeroes, hence $f''$ would have two zeroes, hence $f'''$ would have a zero.
So since there are only three real solutions and one is negative there are only two positive solutions.
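To make the negative solution concrete, here is a small bisection sketch (my addition, not part of the original answer; it locates the root whose existence the sign change above guarantees):

```python
# Bisection on (-1, 0): the answer notes f(-1) < 0 < f(0) for
# f(x) = 2**x - x**2, so a root lies in between.
def f(x):
    return 2.0 ** x - x * x

lo, hi = -1.0, 0.0
for _ in range(60):            # halve the bracket 60 times
    mid = (lo + hi) / 2.0
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
neg_root = (lo + hi) / 2.0     # approximately -0.7667
```

Together with $x=2$ and $x=4$, this accounts for all three real solutions.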
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2747413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
if $d\mid n$ then $x^d-1\mid x^n-1$ proof How would you show that if $d\mid n$ then $x^d-1\mid x^n-1$ ?
My attempt :
$dq=n$ for some $q$. $$ 1+x+\cdots+x^{d-1}\mid 1+x+\cdots+x^{n-1} \tag 1$$ in fact, $$(1+x^d+x^{2d}+\cdots+x^{(q-1)d=n-d})\cdot(1+x+\cdots+x^{d-1}) = 1+x+x^2 + \cdots + x^{n-1}$$
By multiplying both sides of $(1)$ by $(x-1)$ we get that $1-x^d\mid 1-x^n$ which is the final result
Is this an ok proof?
| You can always do:
$f(y)=1+y+ \dots +y^{r-1}$ so that $yf(y)=y+y^2+\dots +y^r$ and $yf(y)-f(y)=(y-1)f(y)=y^r-1$
Then put $y=x^d$ with $dr=n$ and obtain $(x^d-1)f(x^d)=x^n-1$ and by construction $f(x^d)$ is a polynomial.
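A quick numeric sanity check of the divisibility (my own sketch, not part of the answer): substituting any integer for $x$ turns the polynomial statement into an integer one, so $(x^d-1)\mid(x^n-1)$ can be spot-checked over many integers.

```python
# For each d | n, check that x**d - 1 divides x**n - 1 for several
# integer values of x (a consequence of the polynomial divisibility).
def divides_for_integers(d, n, xs=range(2, 12)):
    return all((x ** n - 1) % (x ** d - 1) == 0 for x in xs)

checks = [divides_for_integers(d, n)
          for d in range(1, 7)
          for n in range(d, 31, d)]     # n runs over multiples of d
```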
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2747509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 2
} |
Suppose $ x+y+z=0 $. Show that $ \frac{x^5+y^5+z^5}{5}=\frac{x^2+y^2+z^2}{2}\times\frac{x^3+y^3+z^3}{3} $. How to show that they are equal? All I can come up with is using symmetric polynomials to express them, or using some substitution to simplify this identity since it is symmetric and homogeneous but they are still too complicated for one to work out during the exam. So I think there should exist some better approaches to handle this identity without too much direct computation.
In addition, this identity is supposed to be true:
$$ \frac{x^7+y^7+z^7}{7}=\frac{x^2+y^2+z^2}{2}\times\frac{x^5+y^5+z^5}{5} .$$
| Write $S_n=x^n+y^n+z^n$. Then $S_0=3$, you are given $S_1=0$ and are charged to prove that $S_5=(5/6)S_2S_3$. Define
$$F(t)=\sum_{n=0}^\infty S_nt^n.$$
Then
$$ F(t)=\frac1{1-xt} + \frac1{1-yt} + \frac1{1-zt} = \frac{3-2e_1t+e_2t^2}
{1-e_1t+e_2t^2-e_3t^3} $$
where $e_1=x+y+z$, $e_2=xy+xz+yz$ and $e_3=xyz$. Then $e_1=0$ so
\begin{align}
F(t)&=\frac{3+e_2t^2}
{1+e_2t^2-e_3t^3}=(3+e_2t^2)\sum_{k=0}^\infty(-1)^k(e_2t^2-e_3t^3)^k\\
&=(3+e_2t^2)(1-e_2t^2+e_3t^3+e_2^2t^4-2e_2e_3t^5+\cdots)\\
&=3-2e_2t^2+3e_3t^3+2e_2^2t^4-5e_2e_3t^5+\cdots.
\end{align}
Therefore $S_2=-2e_2$, $S_3=3e_3$ and $S_5=-5e_2e_3$ etc.
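The two identities can be verified exactly over integer triples with $x+y+z=0$ (a sketch I added; clearing denominators keeps everything in integers):

```python
# Check 6*S5 == 5*S2*S3 and 10*S7 == 7*S2*S5 whenever x + y + z = 0.
def S(n, x, y, z):
    return x**n + y**n + z**n

ok = True
for x in range(-6, 7):
    for y in range(-6, 7):
        z = -x - y                             # enforce e1 = 0
        s2, s3, s5, s7 = (S(n, x, y, z) for n in (2, 3, 5, 7))
        ok = ok and (6 * s5 == 5 * s2 * s3)    # S5/5 = (S2/2)(S3/3)
        ok = ok and (10 * s7 == 7 * s2 * s5)   # S7/7 = (S2/2)(S5/5)
```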
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2747626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Finite abelian group generated by two elements Let $G$ be a finite abelian group generated by two elements $a$ and $b$. I am trying to prove that $G$ is isomorphic to the direct product of two cyclic group $C_r$ and $C_s$, where the value of $r$ and $s$ depend on $|a|$, $|b|, d = |\langle a \rangle \cap \langle b \rangle |.$ I have defined a map
$$ \psi : \frac{\langle a \rangle}{\langle a \rangle \cap \langle b \rangle } \times \langle b \rangle : \rightarrow \langle a, b \rangle $$
such that $\psi(\bar{a^i}, b^j) = (a^i b^j).$ But this map is not well defined. Can we defined another map or how to prove that $G$ is isomorphic to the direct product of two cyclic group $C_r$ and $C_s$, where the value of $r$ and $s$ depend on $|a|$, $|b|, d = |\langle a \rangle \cap \langle b \rangle |.$ Any help would be appreciated. Thank you.
| Here's a different approach:
Theorem: Let $A$ be an abelian group and suppose $A$ is generated by $a, b$, where $a$ and $b$ have finite order. Then $A$ is isomorphic to the product of two cyclic groups.
Proof: Take generators $a$ and $b$ of $A$, and write $C = \langle a \rangle \subset A$. $A /C$ is cyclic with generator $c = \overline{b}$, and there is a canonical quotient map $\pi : A \rightarrow A/C$ with $\pi (b) = c$. The order of $b$ is divisible by the order of $A/C$. To see this, note that $\pi$ induces a surjective map $\pi' : \langle b \rangle \rightarrow A/C$, so that $|A/C| \cdot |\ker(\pi')| = \text{ord}(b)$. Write $\text{ord}(b) = n$ and $|A/C| = m$, and take $k \in \mathbb{N}$ such that $mk = n$. $b^k$ then has order $m$ and $\pi(b^k)$ is a generator of $A/C$ (why?).
We show that $\langle b^k \rangle \cap \langle a \rangle = \{ e \}$ and $\langle b^k \rangle \langle a \rangle = A$. Suppose $b^{ki} = a^j$ for some $i, j \in \mathbb{N}$. Applying $\pi$, we see that $\pi(b^k)^i = e$, so that $i$ is divisible by $m$ (since $\pi(b^k)$ has order $m$). Then $ki$ is divisible by $n$, so that $b^{ki} = e$. Then $b^{ki} = a^j = e$. So $\langle b^k \rangle \cap \langle a \rangle = \{ e \}$. To see that $\langle b^k \rangle \langle a \rangle = A$, take $x \in A$, and take $i$ such that $\pi (x) = \pi(b^k)^i$, which is possible because $\pi(b^k)$ generates $A/C$. Then $\pi(xb^{-ki}) = e$, so $xb^{-ik} \in \ker(\pi)$ and we can write $xb^{-ik} = a^j$. Then $x = a^j b^{ik}$.
It follows from a well known characterization of direct products that $A \cong \langle b^k \rangle \times \langle a \rangle$. Therefore $A$ is the product of two cyclic groups.
Note: Here's some intuition concerning direct products that might help. Suppose we have a quotient map $\pi: A \rightarrow B$. When is $A$ isomorphic to the direct product of $B$ and $\ker(\pi)$? It turns out that this is true whenever there is another map $\iota : B \rightarrow A$ such that $\pi \circ \iota = \text{Id}_B$. It is a good exercise to verify this (and it's similar to the proof above). In our case, we created such a situation as this by taking $\langle a \rangle = \ker(\pi)$, $B = A / \langle a \rangle$, and $\pi : A \rightarrow B$ the canonical quotient map. The rest of what we showed was actually equivalent to showing that we have a map $\iota : B \rightarrow A$ ($\iota : B \rightarrow A$ would send $c$ to $b^k$).
Note: This is actually a specific case of a much more general theorem. All finitely generated abelian groups have a decomposition into a product of cyclic groups (note $\mathbb{Z}$ is cyclic). You can read about the most general formulation of this, in terms of finitely generated modules over a PID, here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2747715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Out of $8$ points, $4$ points on one branch of a hyperbola and $4$ on the other but no $5$ ever in convex position,is it true? Are any $8$ points, $4$ points on one branch of a hyperbola, $4$ points on the other branch of the same hyperbola always such that no $5$ points are in convex position (form a convex shape)
| More is true: If we have four points on a hyperbola, three of them on the same branch, and one on the other branch, then these four points do not form a convex quadrangle.
Proof. Consider the hyperbola $xy=1$, and assume $P_i=(x_i,y_i)$ $\>(1\leq i\leq3)$ in the first quadrant, with $x_1<x_2<x_3$, and $P_4$ on the second branch, in the third quadrant. The lines $g_{12}=P_1\vee P_2$ and $g_{23}=P_2\vee P_3$ do not intersect the third quadrant. They form a wedge with vertex $P_2$, containing the full third quadrant, hence the full second branch of the hyperbola, in its interior. It follows that $P_2$ is an interior point of the triangle $\triangle(P_4P_3P_1)$.
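The claim is easy to probe numerically (my sketch, using a standard same-side-of-each-edge point-in-triangle test): for points $P_i=(x_i,1/x_i)$ with $0<x_1<x_2<x_3$ and $x_4<0$, the middle point $P_2$ lands inside triangle $P_1P_3P_4$.

```python
# Point-in-triangle via cross-product signs: P is strictly inside
# triangle ABC iff it lies on the same side of all three edges.
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def inside(p, a, b, c):
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 > 0) == (s2 > 0) == (s3 > 0)

def H(x):                      # point on the hyperbola xy = 1
    return (x, 1.0 / x)

results = []
for x1, x2, x3, x4 in [(0.5, 1.0, 3.0, -1.0),
                       (0.1, 0.2, 9.0, -5.0),
                       (1.0, 2.0, 2.5, -0.3)]:
    P1, P2, P3, P4 = H(x1), H(x2), H(x3), H(x4)
    results.append(inside(P2, P1, P3, P4))
```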
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2747879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that if $A$ is diagonally dominant and if $Q$ is chosen as in the Jacobi method, then $\rho(I-Q^{-1}A)<1$ Prove that if $A$ is diagonally dominant and if $Q$ is chosen as in the Jacobi method, then $\rho(I-Q^{-1}A)<1$
I know that $\rho(A)=\inf_{\|.\|}\|A\|$ and \begin{equation}
||I-Q^{-1}A||_\infty = \max_{1\leq i\leq n}\sum\limits_{j=1,j\neq i}^n \left|\frac{a_{ij}}{a_{ii}}\right|
\end{equation}, but I do not know what else I can do to show what I want, could someone help me please?
| This property can be easily shown with the help of this property of the spectral radius: For the spectral radius the following holds for any matrix $A \in \mathbb R^{n \times n} $ and any (matrix) norm $\Vert \cdot \Vert$:
$$\rho(A) \leq \Vert A \Vert. $$
Thus, for your chosen norm and a strictly diagonally dominant matrix $A$ you get convergence since $$\sum\limits_{j=1,j\neq i}^n \left|\frac{a_{ij}}{a_{ii}}\right| < 1 \: \forall \: i$$ and thus $$\rho \big(I-Q^{-1}A\big) \leq \big\Vert I-Q^{-1}A \big\Vert_\infty < 1 \checkmark$$
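As an illustration (a sketch I added, in pure Python), the row-sum quantity above is exactly $\|I-Q^{-1}A\|_\infty$ for the Jacobi splitting $Q=D$, and for a strictly diagonally dominant $A$ it is below $1$, so the iteration converges:

```python
# Strictly diagonally dominant system; exact solution is (1, 1, 1).
A = [[10.0, 2.0, 1.0],
     [ 1.0, 8.0, 2.0],
     [ 2.0, 1.0, 9.0]]
b = [13.0, 11.0, 12.0]
n = len(A)

# ||I - Q^{-1}A||_inf for Jacobi (Q = diagonal of A)
bound = max(sum(abs(A[i][j] / A[i][i]) for j in range(n) if j != i)
            for i in range(n))

x = [0.0] * n
for _ in range(200):           # Jacobi sweep: x <- D^{-1}(b - (A-D)x)
    x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
         for i in range(n)]

residual = max(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i])
               for i in range(n))
```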
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2748028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are the rings $\mathbb{Q}[x]$ and $\mathbb{Q}[x,y]$ principal ideal domains? Are the rings $\mathbb{Q}[x]$ and $\mathbb{Q}[x,y]$ principal ideal domains?
I understand what an integral domain is. I know the definitions of ideal and principal but have not ever dealt with principal ideal domains. I know that from the discussion here, $\mathbb{Z}[x]$ is not a principal ideal domain, but I'm not sure if the example with the ideal $(2,x)$ should extend to the polynomials with rational coefficients and would like some help.
| If $k$ is a field, then $k[x]$ is a PID, for essentially the same reason that $\mathbb{Z}$ is -- in both, we have Euclidean algorithm for division.
On the other hand, $k[x, y]$ is not a PID: consider the ideal $I = (x, y)$. Suppose that $I$ is principal, $I = (f)$. Then $x = fg$, $y = fh$ for some $g, h \in k[x, y]$. Since $x = fg$, the degree of $f$ in the variable $y$ must be $0$. Considering $y = fh$, we conclude the same about the degree of $f$ in the variable $x$. Thus $f \in k$, which contradicts $I = (f) = (x, y)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2748211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Largest integer $n$ such that $3^n$ divides every $abc$ with $a$, $b$, $c$ positive integers, $a^2+b^2=c^2$, and $3|c$
Let $P$ denote the set { $abc$ : $a$, $b$, $c$ positive integers, $a^2+b^2=c^2$, and $3|c$}. What is the largest integer $n$ such that $3^n$ divides every element of $P$?
I first saw that as $3|c\implies c=3k$ where $k\in \mathbb{N}$
$\implies a^2+b^2=9k^2$
Now this means $a^2\equiv r_1 \pmod 9$ and $b^2\equiv r_2 \pmod 9$
where $r_1+r_2=9k_1$ for some $k_1\in \mathbb{N}$.
In order to get the largest integer $n$ such that $3^n\mid \text{every element of } P$,
we have to assume $a=3k$ and $b=3k$.
Thus $abc=3^3\times(\text{some number that is not a multiple of } 3)$,
so $n$ should be $3$.
But the answer here is 4
I can't figure out the solution.Please help.
P.S. Is there any geometrical angle in this problem meaning can this $a^2+b^2=c^2$ be considered a circle somehow and proceed? I can't think of that approach.
OK here we have a solution, lets look at it:
As @ChristianF suggested as an answer to another of my question,
Lemma: If $a^2+b^2=c^2$ then at least one of integers $a,b,c$ is divisible by $3$.
Proof: If $3\mid c$ we are done. Say $3$ doesn't divide $c$. Then $$a^2+b^2\equiv 1 \pmod 3$$
So if $3$ doesn't divide none of $a$ and $b$ we have $$2\equiv 1 \pmod 3$$ a contradiction.
Also suggested by @Christian Blatter
Modulo $3$ only $0$ and $1$ are squares, hence $x^2+y^2=0$ mod $3$ implies $x=y=0$ mod $3$. It follows that all three of $a$, $b$, $c$ are divisible by $3$. Canceling this common factor we obtain $a'^2+b'^2=c'^2$ which is only possible if at least one of $a'$, $b'$, $c'$ is $=0$ mod $3$.
We will follow along the lines:
If $3|c \implies c=3k$ for some $k \in \mathbb{N}$
Now this means, according to @Christian Blatter: Modulo $3$ only $0$ and $1$ are squares, hence $x^2+y^2=0$ mod $3$ implies $x=y=0$ mod $3$. It follows that all three of $a$, $b$, $c$ are divisible by $3$.
Hence $a=3k_1,b=3k_2$ for some $k_1,k_2 \in \mathbb{N}$
Now we get $9k_1^2+9k_2^2=9k^2$
Cancelling the 9 from the above from LHS and RHS we get
$k_1^2+k_2^2=k^2$
which is analogous to the fact that
we obtain $a'^2+b'^2=c'^2$ which is only possible if at least one of $a'$, $b'$, $c'$ is $=0$ mod $3$
Thus one of $k_1,k_2$ is still a multiple of 3
$\implies$ $a\times b \times c= 3^4 \times K$
Hence for $n=4$, which is the largest integer, $3^n|a\times b \times c$
Thanks to @ChristianF and @Christian Blatter for the insight
| First, convince yourself that every solution of $a^2+b^2=c^2$ has $abc$ a multiple of 3.
Then convince yourself that every solution with $c$ a multiple of 3 must be such that $(a/3)^2+(b/3)^2=(c/3)^2$ with $a/3$, $b/3$, $c/3$ all integers (I think you've already done this, so the first sentence above is all you really need).
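A brute-force check of the conclusion (my own sketch): enumerate Pythagorean triples with $3\mid c$ and compute the exponent of $3$ in $abc$; it is always at least $4$, and exactly $4$ for e.g. $(9,12,15)$, so $n=4$.

```python
from math import isqrt

def v3(m):                      # exponent of 3 in m
    v = 0
    while m % 3 == 0:
        m //= 3
        v += 1
    return v

vals = []
for a in range(1, 200):
    for b in range(a, 200):
        c = isqrt(a * a + b * b)
        if c * c == a * a + b * b and c % 3 == 0:
            vals.append(v3(a * b * c))
```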
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2748309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Minimum length of the hypotenuse
A point on the hypotenuse of a triangle is at distance a and b from the sides of the triangle. Prove that the minimum length of the hypotenuse is $(a^{2/3}+b^{2/3})^{3/2}$.
My Attempt
$\frac{x}{y}=\frac{a}{CM}=\frac{AN}{b}$
$$
\frac{x}{y}=\frac{AN}{b}\implies y=\frac{xb}{\sqrt{x^2-a^2}}
$$
$$
h(x)=x+y=x+\frac{xb}{\sqrt{x^2-a^2}}
$$
$$
h'(x)=1+\frac{\sqrt{x^2-a^2}.b-xb.\frac{x}{\sqrt{x^2-a^2}}}{x^2-a^2}=1+\frac{x^2b-a^2b-x^2b}{(x^2-a^2)^{3/2}}\\
=1+\frac{-a^2b}{(x^2-a^2)^{3/2}}=\frac{(x^2-a^2)^{3/2}-a^2b}{(x^2-a^2)^{3/2}}
$$
$$
h'(x)=0\implies (x^2-a^2)^{3/2}=a^2b\implies (x^2-a^2)^{3}=a^4b^2\\
\implies x^6-3x^4a^2+3x^2a^4-a^6=a^4b^2\implies x^6-3x^4a^2+3x^2a^4-a^6-a^4b^2=0\\
$$
How do I proceed further and find $h_{min}$ without using trigonometry ? Or is there anything wrong with my calculation ?
| we have $$\cos(\alpha)=\frac{a}{x}$$ and $$\sin(\alpha)=\frac{b}{y}$$ so we get
$$1=\frac{a^2}{x^2}+\frac{b^2}{y^2}$$ With this equation you can eliminate $x$ or $y$ in $$c=x+y$$
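Parametrising by the angle as in this answer, $c(t)=a/\cos t+b/\sin t$, a grid minimisation (my sketch) matches $(a^{2/3}+b^{2/3})^{3/2}$:

```python
from math import cos, sin, pi

def min_hypotenuse(a, b, steps=200000):
    best = float("inf")
    for k in range(1, steps):          # t in (0, pi/2), endpoints excluded
        t = (pi / 2) * k / steps
        best = min(best, a / cos(t) + b / sin(t))
    return best

errs = [abs(min_hypotenuse(a, b) - (a**(2/3) + b**(2/3))**1.5)
        for a, b in [(1.0, 1.0), (2.0, 3.0), (0.5, 4.0)]]
```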
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2748447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Basis of of subspaces of $R^3$ I'm having a little trouble with this question:
How do I find the basis of the set of vectors lying in the plane 2x − y − z = 0?
I'm stuck on how to start on this question
I tried to start by setting y= 2x-z
I'm not sure where to go from here
| Your start $y= 2x-z$ is good. It means that every vector
$$\begin{pmatrix}x\\2x-z\\z\end{pmatrix}, \quad\text{$x,z$ being real numbers,}$$
belongs to the plane. Hence every point in that plane may be expressed as
$$\begin{pmatrix}x\\2x-z\\z\end{pmatrix}=x\begin{pmatrix}1\\2\\0\end{pmatrix}+
z\begin{pmatrix}0\\-1\\1\end{pmatrix}.$$
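A quick sanity check (my addition): plugging $(x,z)=(1,0)$ and $(x,z)=(0,1)$ into $(x,\,2x-z,\,z)$ gives the two basis vectors, and every combination of them satisfies $2x-y-z=0$:

```python
def in_plane(v):
    x, y, z = v
    return 2 * x - y - z == 0

v1 = (1, 2, 0)    # (x, 2x - z, z) at x = 1, z = 0
v2 = (0, -1, 1)   # (x, 2x - z, z) at x = 0, z = 1

combos_ok = all(
    in_plane((x * v1[0] + z * v2[0],
              x * v1[1] + z * v2[1],
              x * v1[2] + z * v2[2]))
    for x in range(-3, 4) for z in range(-3, 4)
)
```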
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2748568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Probability in uniform I need help to solve the following problems. Thank you in advance.
Problem 1:
A random variable $X$ is uniform $[0, 1]$. Find the probability that X's 2nd digit is $3$.
As far as I understand it is continuous uniform distribution. Each digit has $1$ chance in $10$ of being a $3$. Does it mean that the probability of X’s second digit being 3 is $1/10$? Can you check and confirm it?
Problem 2:
A random variable $X$ is uniform $[0, 3]$. Find the probability that X's first and/or second digit is $2$.
As I understand, the first digit has $1$ chance in $3$ of being a $2$, 2nd digit $1$ chance in $10$ of being a $2$. Avoiding both has probability $(2/3) (9/10)$. Contrary event $1− (2/3) (9/10)$. Am I correct? If so, eventually what is the probability that X's first and/or second digit is $2$. Maybe I am dumb and don’t get it, can someone help me?
| For problem 2, find the intervals of success and divide by the total interval.
The intervals of success (i.e. the first or the second digit is $2$) are:
$$[0.2,0.3); [1.2,1.3); [2,3) \Rightarrow I_S=0.1+0.1+1=1.2$$
The total interval is:
$$[0,3] \Rightarrow I_T=3.$$
Hence:
$$P(D_1=2\cup D_2=2)=\frac{I_S}{I_T}=\frac{1.2}{3}=\frac25.$$
Note that it is consistent with your answer $1-\frac23\cdot \frac{9}{10}$.
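Both answers are easy to confirm by simulation (my sketch; the seed is an arbitrary choice for reproducibility):

```python
import random
random.seed(12345)
N = 200000

# Problem 1: X ~ U[0, 1]; second decimal digit equals 3.
p1 = sum(int(random.random() * 100) % 10 == 3 for _ in range(N)) / N

# Problem 2: X ~ U[0, 3]; integer part is 2, or first decimal is 2.
def success(x):
    return int(x) == 2 or int(x * 10) % 10 == 2

p2 = sum(success(3 * random.random()) for _ in range(N)) / N
```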
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2748700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Compute $\lim_{n\to\infty}\int_\limits{0}^{\infty}\frac{dx}{x^{n}+1}$
Compute the following limit: $$\lim_{n\to\infty}\int_\limits{0}^{\infty}\frac{dx}{x^{n}+1}$$.
So I thought to use the result:
a) If $\{F_n\}$ converges uniformly on $S=[a,b]$ to $F$ and $F_n$ is integrable $\forall n$. Then $\int_\limits{a}^{b}F(x)dx=\lim\limits_{n\to\infty}\int_\limits{a}^{b}F_n(x)dx$.
$$
\lim_{n\to\infty}\int_\limits{0}^{\infty}\frac{dx}{x^{n}+1}=\lim_{n\to\infty}\int_\limits{0}^{1-\delta}\frac{dx}{x^{n}+1}+\lim_{n\to\infty}\int_\limits{1-\delta}^{1+\delta}\frac{dx}{x^{n}+1}+\lim_{n\to\infty}\int_\limits{1+\delta}^{\infty}\frac{dx}{x^{n}+1}
$$
I can easily pass the limit under the integral in the first and third integral which gives me the results of $1-\delta$ and $0$ respectively.
However my problems lie in the second integral: $\lim\limits_{n\to\infty}\int_\limits{1-\delta}^{1+\delta}\frac{dx}{x^n+1}$
I changed the variable $y=1-x$ which led me to $\lim\limits_{n\to\infty}\int_\limits{-\delta}^{\delta}\frac{dy}{(1-y)^{n}+1}=?$
Question:
How do I compute $\lim\limits_{n\to\infty}\int_\limits{-\delta}^{\delta}\frac{dy}{(1-y)^{n}+1}=$? As the delta is arbitrary I do not know how $1-y$ is going to behave.
| Instead of working with the limit in the middle integral, treat it like $n$ is large, then approximate the integral (bound the integrand from above by $1$). You should end up with a direct $\delta$ dependence. If you let $\delta $ be arbitrary, what does that tell you the middle integral must be?
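Numerically the integrals do tend to $1$, consistent with the $1-\delta$, middle, and tail pieces above (my sketch; trapezoid rule on $[0,B]$, with the tail beyond $B$ bounded by $\int_B^\infty x^{-n}\,dx=B^{1-n}/(n-1)$, negligible here):

```python
def integral(n, B=10.0, steps=200000):
    h = B / steps
    s = 0.5 * (1.0 + 1.0 / (B ** n + 1.0))   # endpoint terms f(0), f(B)
    for k in range(1, steps):
        x = k * h
        s += 1.0 / (x ** n + 1.0)
    return s * h

vals = [integral(n) for n in (5, 20, 100)]   # decreasing toward 1
```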
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2748808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
How to show that $a_n=1+1/\sqrt{2}+\cdots+(1/\sqrt{n-1})-2\sqrt{n}$ has an upper bound. Let $a_n=1+1/\sqrt{2}+\cdots+(1/\sqrt{n-1})-2\sqrt{n}$ for $n\ge2$, with $a_1=-2$.
I need to prove that $a_n$ converges.
I proved that it is monotonically increasing and tried to prove that it is upper-bounded by induction but failed to.
Also, it was told that $a_n$ converges to $-2<L<-1$ so I tried to show by induction that $a_n$ is bounded by $-1$, but I'm always stuck with $a_{n+1} \le -1 + 1/\sqrt{n}$ or something like that.
How can I prove that $a_n$ is bounded from above?
| $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
With a Riemann Zeta Function Identity:
\begin{align}
&\bbox[#ffd,10px]{\ds{1 + {1 \over \root{2}} + \cdots + {1 \over \root{n - 1}} - 2\root{n}}} =
\pars{\sum_{k = 1}^{n}{1 \over \root{k}} - {1 \over \root{n}}} - 2\root{n}
\\[5mm] = &\
-\,{1 \over \root{n}} + \pars{\sum_{k = 1}^{n}{1 \over \root{k}} - 2\root{n}} =
-\,{1 \over \root{n}} + \bracks{\zeta\pars{1 \over 2} +
{1 \over 2}\int_{n}^{\infty}{\braces{x} \over x^{3/2}}\,\dd x}
\end{align}
Note that
$\ds{0 < {1 \over 2}\int_{n}^{\infty}{\braces{x} \over x^{3/2}}\,\dd x <
{1 \over 2}\int_{n}^{\infty}{\dd x \over x^{3/2}} = {1 \over \root{n}}
\,\,\,\stackrel{\mrm{as}\ n\ \to\ \infty}{\Large\to}\,\,\, {\large 0}}$.
such that
$$
\bbx{\lim_{n \to \infty}\pars{\bbox[#ffd,10px]{\ds{1 + {1 \over \root{2}} + \cdots + {1 \over \root{n - 1}} - 2\root{n}}}} = \zeta\pars{1 \over 2}}
$$
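Numerically (my sketch) the sequence is consistent with this limit; $\zeta(1/2)\approx-1.46035$ is an assumed reference value, and by the integral estimate above the remaining error is about $1/(2\sqrt n)$:

```python
from math import sqrt

def a(n):   # a_n = 1 + 1/sqrt(2) + ... + 1/sqrt(n-1) - 2*sqrt(n)
    return sum(1.0 / sqrt(k) for k in range(1, n)) - 2.0 * sqrt(n)

approx = a(1_000_000)    # close to zeta(1/2), about -1.46035
```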
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2748996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Find out how many people from glasses clinking There's a party and there are people clinking glasses. We hear 28 clinks (one person clinks with exactly one person). I have to find out how many people there are at the party.
So I put in this equation:
$\binom{x}{2} = 28$, but I don't know how to find $x$.
| You can solve for $x$ in the polynomial $$28=\binom{x}{2}=\frac{1}{2}x(x-1)$$ as others mention.
However, 28 is small, and $x$ is limited to a positive integer. I'd use trial and error.
When $x=4$ we have $\binom{x}{2}=6$, which is too small.
When $x=5$ we have $\binom{x}{2}=10$, which is too small.
When $x=6$ we have $\binom{x}{2}=15$, which is too small.
...and keep going until you find the $x$-value which satisfies $\binom{x}{2}=28$.
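The trial-and-error search is a few lines of code (my sketch):

```python
def pairs(x):               # C(x, 2) = number of clinks among x people
    return x * (x - 1) // 2

x = 2
while pairs(x) != 28:
    x += 1
# x ends at 8, since C(8, 2) = 28
```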
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2749185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Why is no covering space of $\mathbb{R}P^2 \vee S^1$ homeomorphic to an orientable surface? I know that $\pi_1(\Sigma_g) = \langle a_1,b_1,\cdots,a_g,b_g | [a_1,b_1]\cdots[a_g,b_g]\rangle$, where $\Sigma_g$ is the orientable surface of genus $g$, and $[a_i,b_i]$ is their commutator. The idea of my argument for why no surface with $g \geq1$ could work is as follows (let me know if this seems fishy):
We know that the induced homomorphism from the fundamental group of a covering space, $p_*: \pi_1(\tilde X) \to \pi_1(X)$ must be injective. $\pi_1(\Sigma_g)$ has $2g$ generators of infinite order. In an injective homomorphism, they must be sent to distinct generators of infinite order in $\pi_1(X)$. However, $\pi_1(X) = \langle a,b |a^2\rangle \cong \mathbb{Z} * \mathbb{Z}_2$ only has one generator of infinite order. Therefore $g =0$ is the only case which might work.
However, this is just $S^2$, which is the universal cover for $\mathbb{R}P^2$. It seems "obvious" that this (nor a space homeomorphic to it) couldn't also be the universal cover for $\mathbb{R}P^2 \vee S^1$, but I'm having a hard time formalizing that.
I know that a covering space of a wedge sum should restrict to a covering space of each of the summands, but I could imagine a CW-construction of $S^2$ that includes a copy of $S^1$ (with a 2-cell on each side, for example), so it isn't totally clear to me why this fact makes it clear $S^2$ doesn't work.
Thanks in advance!
| Hint: a covering projection is a local homeomorphism. Every point of a surface (oriented or not) has a neighbourhood homeomorphic to $\Bbb{R}^2$. No point of the copy of $S^1$ in $\Bbb{R}P^2 \vee S^1$ has a neighbourhood homeomorphic to $\Bbb{R}^2$. (Since the connected component of a neighbourhood of any such point $x$ that contains $x$ becomes disconnected if you remove $x$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2749282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Changing index sum of series Let $$f(x)=\sum_{n=1}^{\infty}\frac{\sin(nx)}{n^3}$$.
Now let $f(x)$ be uniform convergence on $\mathbb{R}$
Show that $$\int_0^\pi f(x) \,dx=\sum_{k=0}^{\infty}\frac{2}{(2k+1)^4}$$
I'm not sure how to get past from this point. I was trying to move the index from n=0 but I can't seem to figure it out. I tried to "pull" 1 out and start at n=0, but I ended up with an undefined series (n^4 in the denominator).
$$\int_0^\pi \sum_{n=1}^{\infty} \frac{\sin(nx)}{n^3}\,dx
=\sum_{n=1}^{\infty}\frac{1}{n^3}\int_0^\pi\sin(nx) dx
=\sum_{n=1}^{\infty}\frac{1-\cos(\pi n)}{n^4}$$
| \begin{align*}
\dfrac{1}{n^{3}}\int_{0}^{\pi}\sin(nx)dx=\dfrac{1}{n^{3}}\cdot\dfrac{-1}{n}\cos(nx)\bigg|_{x=0}^{x=\pi}=\dfrac{-1}{n^{4}}\left((-1)^{n}-1\right),
\end{align*}
now consider even and odd $n$ separately.
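Splitting into even and odd $n$ as suggested, the even terms vanish and the odd ones give $2/(2k+1)^4$; a numeric check (my sketch) that the two sums agree:

```python
# Partial sums: sum over n of (1 - (-1)^n)/n^4 versus sum of 2/(2k+1)^4.
lhs = sum((1 - (-1) ** n) / n ** 4 for n in range(1, 200001))
rhs = sum(2.0 / (2 * k + 1) ** 4 for k in range(100000))
```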
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2749396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $f:\mathbb{R}^2\to\mathbb{R}^1$ is of class $C^1$, show that $f$ is not one-to-one. If $f:\mathbb{R}^2\to\mathbb{R}^1$ is of class $C^1$, show that $f$ is not one-to-one. [Hint: If $Df(x) = 0$ for all $x$, then $f$ is constant. If $Df(x_0)\neq0$, apply the implicit function theorem.]
Clearly there are two cases: if $Df(x)=0$ for all $x\in \mathbb{R}^2$, then $f$ is constant and therefore cannot be one-to-one.
If there is a $x_0\in\mathbb{R}^2$ such that $Df(x_0)\neq 0$ then how can I use the implicit function theorem knowing that in order to apply it I have to ensure that $f(x_0)=0$ but this is not given in the problem? Here is the version of the theorem of the implicit function that I am using, thank you very much.
| Suppose that $Df(x_0,y_0)\neq 0$. You can suppose without restricting the generality that ${{\partial f}\over{\partial y}}(x_0,y_0)\neq 0$. Let $h(x,y)=f(x,y)-f(x_0,y_0)$, so ${{\partial h}\over{\partial y}}={{\partial f}\over{\partial y}}\neq 0$; the implicit function theorem implies that there exists a neighborhood $I$ of $x_0$ and a function $g:I\rightarrow \mathbb{R}$ such that $h(x,g(x))=f(x,g(x))-f(x_0,y_0)=0$. Hence $f$ takes the value $f(x_0,y_0)$ at every point $(x,g(x))$ with $x\in I$, so $f$ is not one-to-one.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2749521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Find the length and width of rectangle when you are given the area The area of a rectangle is $x^2 + 4x - 12$. What is the length and width of the rectangle?
The solution says the main idea is to factor $x^2 + 4x -12$.
So, since $-12 = -2 \times 6$ and $-2 + 6 = 4$, it can be written as $x^2 + 4x - 12 = (x - 2)(x + 6)$
since the length is usually the longer value, the length is $6$ and the width is $-2$.
I don't understand the logic to this solution at all. I understand $\text{length} \times \text{width} = \text{area}$, but outside of this information I don't understand how they got to this solution from the given information in the problem.
| That is incorrect. Many rectangles, with different lengths and widths, can have the same area. Example: $2\times3 = 1\times6 = \pi\times\frac6\pi$
Also, the area is a function of $x$; if $x$ is not given, then the area is not given!
And width can't be negative.
Where did you find this "solution"? It's very wrong.
EDIT1
Perhaps they just wanted you to factor the expression. Then the "answer" would have the length $(x+6)$ , and the width $(x-2)$ . But even this isn't unique; it could be $(2x+12)$ and $(\frac12 x-1)$ . Someone gave you a bad question.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2749623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Finding probability that a car experiences a failure
A car is new at the beginning of a calendar year. The time, in years,
before the car experiences its first failure is exponentially
distributed with mean 2. Calculate the probability that the car
experiences its first failure in the last quarter of some calendar
year.
Attempt
Let $T$ be the time, in years, before the car experiences its first failure. We know $T \sim \mathrm{Exp}(\lambda = 1/2)$. We want to find
$$ P( 1 > T > 3/4) = F(1) - F(3/4) = e^{-3/8} - e^{-1/2} \approx 0.0808$$
but the answer in the back of my book gives $\boxed{0.205}$. What is my mistake?
| You need to find this:$$ \sum_{k=1}^\infty e^{-{1\over2}(k-0.25)} -e^{-{1\over2}k} $$
because you're looking for the probability that the car experiences a failure in any year, not just the first year. The summation above will give you the answer in your textbook.
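Evaluating the sum (my sketch) reproduces the book's $0.205$, and its first term is exactly the asker's single-year probability $\approx 0.0808$:

```python
from math import exp

# Each term is P(failure in the last quarter of calendar year k),
# i.e. P(k - 1/4 < T < k) for T ~ Exp(rate 1/2).
terms = [exp(-0.5 * (k - 0.25)) - exp(-0.5 * k) for k in range(1, 200)]
p = sum(terms)            # about 0.2052
first_year = terms[0]     # about 0.0808, the asker's value
```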
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2749746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |