Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Prove: $3 \nmid n \Leftrightarrow 3 \mid (n^2 -1)$ Been learning how to work with divisibility proofs, and I really need some help understanding this proof. I want to show that $3 \nmid n$ $\Leftrightarrow $ $3 \mid (n^2 -1)$. So, I want to go from say $3 \mid (n^2 -1)$ to $3 \nmid n$ and $3 \nmid n$ to $3 \mid (n^2 -1)$ by proof.
I wanted to start off with $3 \mid (n^2 -1)$ $\implies$ $3 \nmid n$, so I did the following:
If we are given $3 \mid (n^2 -1)$, then $n^2-1=3\cdot a$ for some $a\in \mathbb{Z}$.
This is equivalent to saying $(n-1)(n+1)=3\cdot a$.
So either $(n-1)$ or $(n+1)$ is a multiple of $3$.
Here is where I got a bit stuck, I'm not sure exactly what to do, perhaps considering whether $n$ is odd or even and proceeding from there? I'm even more unsure about the other direction, namely $3 \nmid n$ to $3 \mid (n^2 -1)$. Please help a novice out.
| First implication: Write $n=3k+1$; then $n^2-1=9k^2+6k$, which is divisible by three.
Otherwise $n$ can be written as $3k+2$, and $n^2-1=9k^2+12k+3$, which is again divisible by three.
Reverse direction by contraposition: If $n$ can be written as $3k$, then $n^2$ is also divisible by $3$, so $n^2-1$ is not divisible by $3$. This shows the equivalence.
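Not part of the original answer, but the equivalence is easy to sanity-check by brute force (range chosen arbitrarily):

```python
# Check 3 ∤ n  <=>  3 | (n^2 - 1) over a finite range of integers:
# exactly one of the two divisibility statements holds for every n.
for n in range(-1000, 1001):
    three_divides_n = (n % 3 == 0)
    three_divides_n2_minus_1 = ((n * n - 1) % 3 == 0)
    assert three_divides_n != three_divides_n2_minus_1
print("equivalence holds for all n in [-1000, 1000]")
```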
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4246878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Q: Epsilon-delta proof for $\lim_{x\to1} \frac{x}{x^2+1}=\frac{1}{2}$ In proving $\lim_{x\to1} \frac{x}{x^2+1}=\frac{1}{2}$, I have done this working:
Let $ϵ>0$.
$0<|x-1|<\delta \Rightarrow \left|\frac{x}{x^2+1}-\frac{1}{2}\right|<\epsilon$
$\left|\frac{x}{x^2+1}-\frac{1}{2}\right| = \left|\frac{2x-x^2-1}{2(x^2+1)}\right| = \left|\frac{-(x-1)^2}{2(x^2+1)}\right| = \frac{(x-1)^2}{2\left|(x-1)^2+2(x-1)+2\right|}$ (using $x^2+1=(x-1)^2+2(x-1)+2$)
after which I get stuck trying to use the triangle inequality to bound the modulus in the denominator and obtain $|x-1|$ terms to substitute.
| Alternative approach:
Let
$$0 < |x - 1| < \delta \leq \frac{1}{10}.\tag1$$
Such a constraint on $\delta$ is viable by developing a relationship between $\epsilon$ and $\delta$ that looks like
$\displaystyle \delta = \min\left[f(\epsilon), \frac{1}{10}\right]~~$
rather than
$~~\displaystyle \delta = f(\epsilon).$
Using (1) above, you have that
$~~~\displaystyle 1 - \delta < x < 1 + \delta$.
Then, since $\displaystyle \delta \leq \frac{1}{10} \implies \delta^2 < \delta$
you have that $~~~\displaystyle 1 - 2\delta < x^2 < 1 + 2\delta + \delta^2 < 1 + 3\delta.$
This implies that $~~~\displaystyle 2 - 2\delta < x^2 + 1 < 2 + 3\delta.$
Then, using (1) above, and considering the minimum and maximum possible values of the numerator $(x)$, and denominator $(x^2 + 1)$, you have that
$$\frac{1 - \delta}{2 + 3\delta} < \frac{x}{x^2 + 1} < \frac{1 + \delta}{2 - 2\delta}. \tag2$$
$\displaystyle \frac{1 - \delta}{2 + 3\delta}
= \frac{1 + (3/2) \delta}{2 + 3\delta} - \frac{(5/2) \delta}{2 + 3\delta}
= \frac{1}{2} - \frac{5\delta}{4 + 6\delta} > \frac{1}{2} - 2\delta ~: ~\delta \leq \frac{1}{10}.
$
Similarly,
$~~\displaystyle \frac{1 + \delta}{2 - 2\delta}
= \frac{1 - \delta}{2 - 2\delta} + \frac{2 \delta}{2 - 2\delta}
= \frac{1}{2} + \frac{\delta}{1 - \delta} < \frac{1}{2} + 2\delta ~: ~\delta \leq \frac{1}{10}.
$
Therefore, using (2) above,
$$\frac{1}{2} - 2\delta < \frac{x}{x^2 + 1} < \frac{1}{2} + 2\delta \implies
- 2\delta < \frac{x}{x^2 + 1} - \frac{1}{2} < 2\delta.$$
This implies that
$$\left|\frac{x}{x^2 + 1} - \frac{1}{2}\right| < 2\delta.\tag3$$
Consequently, set $\displaystyle \delta = \min\left(\frac{\epsilon}{2}, \frac{1}{10}\right).$
Then, based on (1), (2), and (3) above,
$~~\displaystyle0 < |x-1| < \delta~~$ will imply that
$~~\displaystyle \left|\frac{x}{x^2 + 1} - \frac{1}{2}\right| < \epsilon.$
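Not part of the original answer, but the final choice $\delta = \min(\epsilon/2, 1/10)$ can be sanity-checked numerically (the sample points inside the window are chosen arbitrarily):

```python
def f(x):
    return x / (x * x + 1)

# For several epsilons, verify |f(x) - 1/2| < eps whenever 0 < |x - 1| < delta.
for eps in [0.5, 0.1, 0.01, 0.001]:
    delta = min(eps / 2, 1 / 10)
    for k in range(1, 1001):
        for s in (1, -1):            # sample on both sides of x = 1
            x = 1 + s * delta * k / 1001
            assert abs(f(x) - 0.5) < eps
print("delta = min(eps/2, 1/10) works on all sampled points")
```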
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4246987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Proving that a closed set in the product topology need not be the product of closed sets Theorem
Let $X_1 $and $X_2$ be topological spaces and let $X_1 \times X_2$ have the product
topology. Then a closed subset of
$X_1 \times X_2$ need not be the product of closed sets.
Do I give a counter example or a proof?
I am not too sure how to do the proof here.
If F $\subset X_1 \times X_2$ then we
could say F$\subset X_1$ and
maybe F $\subset X_2$ . But this is not
very accurate…
Any help would be appreciated.
I would like to know how to do it. Thanks
| Consider the circle $\mathbb{S}^1 = \{(x,y) \in \mathbb{R}^2 \ \vert \ x^2 + y^2 = 1 \} \subset \mathbb{R}^2$. Clearly $\mathbb{S}^1$ is closed in $\mathbb{R}^2 = \mathbb{R} \times \mathbb{R}$, but $\mathbb{S}^1$ is not the product of any two closed sets of $\mathbb{R}$. Indeed, suppose $\mathbb{S}^1 = A \times B$ where $A, B \subset \mathbb{R}$ are closed subsets. Since $(0, 1)$ and $(1, 0)$ are in $\mathbb{S}^1$, then $1 \in A$ and $1 \in B$, therefore $(1, 1) \in A \times B = \mathbb{S}^1$, which is a contradiction (what we have actually proven is stronger: $\mathbb{S}^1$ cannot be expressed as a Cartesian product of any subsets of $\mathbb{R}$). Therefore no such $A$ and $B$ can exist and you have a counterexample.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4247153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $f\left(x\right)=-\frac{x\left|x\right|}{1+x^{2}}$ then find $f^{-1}\left(x\right)$ Q:
If $f\left(x\right)=-\frac{x\left|x\right|}{1+x^{2}}$ then find $f^{-1}\left(x\right)$
My approach:
*
*Dividing the cases when $x\ge0$ and when $x\le0$ to break free of modulus.
*Re-arranging the terms to get the expression of x in terms of y.
*Here's what I got:
When $x\ge0$:
$$x=\sqrt{\frac{-y}{1+y}},\qquad y \in \left(-1,0\right].\ \text{Now, replacing } y\to x,$$
so, $f^{-1}\left(x\right)=\sqrt{-\frac{x}{1+x}}$ when $x\le0$
When $x\le0$:
$$x=-\sqrt{\frac{y}{1-y}}$$
when $y \in \left[0,1\right)$. Now replacing $y\to x$,
We get, $f^{-1}\left(x\right)=-\sqrt{\frac{x}{1-x}}\ ;\ x\ge0$
But I have to show that the inverse function $f^{-1}\left(x\right)$=$\operatorname{sgn}\left(-x\right)\sqrt{\frac{\left|x\right|}{1-\left|x\right|}}$
This is where I'm getting stuck. I am unable to convert my answer into this form, mainly because I'm not able to convert the cases into this expression. Is there any step-by-step systematic way in which I can do the same? Any help or guide will be greatly appreciated.
Edit:
Since we got $f^{-1}\left(x\right)$ and the cases,:
$f^{-1}\left(x\right)=-\sqrt{\frac{x}{1-x}}\ ;\ x\ge0$ and
$f^{-1}\left(x\right)=\sqrt{-\frac{x}{1+x}}$ when $x\le0$,
to write it in the given form we need something that will give a minus sign when $x>0$, so we use $\operatorname{sgn}(-x)$; the rest is just use of the modulus so that we can write the general answer.
| The sign function is given by
$$\operatorname{sgn}(x)=\begin{cases}-1, \space\text{if}\space x<0\\ 0, \space\space\text{if}\space x=0\\1, \space\text{if}\space x>0\end{cases}$$
and the modulus of $x$ is given by
$$|x|=\begin{cases}x, \space\text{if}\space x\geq0\\ -x, \space\text{if}\space x<0\\\end{cases}$$
Thus the inverse can be written as
$$f^{-1}\left(x\right)=\operatorname{sgn}\left(-x\right)\sqrt{\frac{\left|x\right|}{1-\left|x\right|}}=\begin{cases}-\sqrt{\frac{x}{1-x}}, \space\text{if} \space x\geq0 \\ \sqrt{\frac{-x}{1+x}}, \space\text{if} \space x<0 \end{cases}$$
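A quick numerical check (helper names ad hoc) that this formula really inverts $f$ on its range $(-1,1)$:

```python
import math

def f(x):
    return -x * abs(x) / (1 + x * x)

def sgn(x):
    return (x > 0) - (x < 0)

def f_inv(y):
    return sgn(-y) * math.sqrt(abs(y) / (1 - abs(y)))

# f maps R onto (-1, 1); check f(f_inv(y)) == y on a grid inside (-1, 1).
for k in range(-99, 100):
    y = k / 100
    assert abs(f(f_inv(y)) - y) < 1e-9
print("f_inv inverts f on (-1, 1)")
```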
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4247268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Any non-zero solution of some second-order differential equation is not $2\pi$-period
Suppose $$\frac {{d^2}y}{dx^2}+P(x)y = 0$$ where $P(x)$ is continuous and satisfying $n^2<P(x)<(n+1)^2$ where $n$ is a non-negative integer. Prove that any non-zero solution of the above second-order differential equation is not $2\pi$-periodic.
There are two facts I know that may be useful to the proof:
*
*$y''+Q(x)y=0$ where $Q(x)$ is continuous on $[a,+\infty)$ and satisfies $Q(x)\geq m>0$; then the distance between any two adjacent zeros of a non-zero solution of it is less than $\frac {\pi}{\sqrt{m}}$.
*$y''+Q(x)y=0$ where $Q(x)$ is continuous on $[a,+\infty)$ and satisfies $Q(x)\leq M \ (M>0)$; then the distance between any two adjacent zeros of a non-zero solution of it is larger than $\frac {\pi}{\sqrt{M}}$.
From this, I can't get a contradiction since if there is such a solution, with $x_0$ and $x_0+2\pi$ two zeros, then there can be $2n$ zeros between them with each of adjacent distance $d_j$ satisfying $\frac {\pi}{n+1}< d_j < \frac {\pi}{n}$. Does anyone know how to prove the result? Thank you
| You are nearly there. If a solution were periodic, it would have, in every period starting (and thus ending) at a root, an even number of segments between roots, an equal number of positive and negative-valued segments.
So if it were $2n$ segments or fewer, the upper bound gives a total length strictly smaller than $2\pi$. For $2n+2$ segments or more, the lower bound gives a total length strictly larger than $2\pi$. In no way is it possible to get a total length equal to $2\pi$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4247397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Calculate $\iint_D \frac{x^2+y^2}{\sqrt{4-(x^2+y^2)^2}}dxdy$, with D:$\frac{x^2}{2}+y^2\leq1$ I found some difficulty with this exercise:
Calculate
$$\iint_D \frac{x^2+y^2}{\sqrt{4-(x^2+y^2)^2}}dxdy$$
with $D := \left\{(x,y)\in\mathbb{R}^{2}\mid \dfrac{x^2}{2}+y^2\leq1\right\}$
I used the change of variables to polar coordinates, but the integral becomes very hard to calculate.
I think that maybe with the change of variable $u = x^2 + y^2$ the integral will be easier, but I can't find a $v(x,y)$ that makes the Jacobian easy to calculate.
| Here is a possible way to evaluate explicitly.
We use the change of variables $x=r\sqrt 2\; \cos t$, $y=r\sin t$. Then we have formally
$$
\begin{aligned}
dx &= \sqrt 2\; \cos t\; dr - r\sqrt 2\; \sin t\; dt\ ,\\
dy &= \sin t\; dr + r\; \cos t\; dt\ ,\\
dx\wedge dy &=
dr\wedge dt\cdot
\sqrt 2\; \cos t\cdot r\; \cos t
- dt\wedge dr\cdot r\sqrt 2\; \sin t
\cdot
\sin t
\\
&=
dr\wedge dt\cdot r\sqrt 2\cdot (
\cos^2 t +\sin^2 t)
\\
&=
dr\wedge dt\cdot r\sqrt 2\ .\\[2mm]
x^2 + y^2
&=2r^2\cos ^2t + r^2\sin^2 t\\
&=r^2(1+\cos^2 t)\ .
\end{aligned}
$$
So the given integral $I$ can be computed as follows:
$$
\begin{aligned}
I
&=\iint_D \frac{x^2+y^2}{\sqrt {4-(x^2+y^2)^2}}\; dx\; dy
\\
&=4
\iint_{\substack{(x,y)\in D\\x,y\ge 0}}
\frac{x^2+y^2}{\sqrt {4-(x^2+y^2)^2}}\; dx\; dy
\\
&=
4
\iint_{\substack{0\le r\le 1\\0\le t\le \pi/2}}
\frac{r^2(1+\cos^2 t)}{\sqrt{4-r^4(1+\cos^2 t)^2}}
\;r\sqrt 2\; dr\; dt
\\
&\qquad\text{ ... now use $s=r^4$, $ds=4r^3\; dr$}
\\
&=
\sqrt 2
\iint_{\substack{0\le s\le 1\\0\le t\le \pi/2}}
\frac
{\color{blue}{1+\cos^2 t}}
{\sqrt{4-s\color{red}{(1+\cos^2 t)^2}}}
\;ds\; dt
\\
&\qquad\text{ ... now use
$\int_0^1\frac{ds}{\sqrt{4-s\color{red}{a}}}
=\frac 1{\color{red}{a}}(4-2\sqrt {4-\color{red}{a}})$}
\\
&=
\sqrt 2
\int_0^{\pi/2}
\frac{\color{blue}{1+\cos^2 t}}{\color{red}{(1+\cos^2 t)^2}}
\left(4-2\sqrt{4-\color{red}{(1+\cos^2 t)^2}}\right)
\; dt
\\
&=
2\sqrt 2
\int_0^{\pi/2}
\frac1{1+\cos^2 t}
\left(2-\sqrt{4-(1+\cos^2 t)^2}\right)
\; dt
\\
&=
2\pi
-
2\sqrt 2
\int_0^{\pi/2}
\frac1{1+\cos^2 t}
\sqrt{2^2-(1+\cos^2 t)^2}
\; dt
\\
&=
2\pi
-
2\sqrt 2
\int_0^{\pi/2}
\frac1{1+\cos^2 t}
\sqrt{(3+\cos^2 t)(1-\cos^2 t)}
\; dt
\\
&=
2\pi
-
2\sqrt 2
\int_0^{\pi/2}
\frac1{1+\cos^2 t}\cdot
\sqrt{3+\cos^2 t}
\; \sin t\; dt
\\
&\qquad\text{ ... now use $u=\cos t$}
\\
&=
2\pi
-
2\sqrt 2
\int_0^1
\frac1{1+u^2}\cdot
\sqrt{3+u^2}
\; du
\\
&=
\color{forestgreen}{
2\pi
-
\sqrt 2\log 3
-
4\arctan\frac 1{\sqrt 2}}\ .
\end{aligned}
$$
$\square$
Numerical check:
sage: 4 * numerical_integral( lambda x:
....: numerical_integral( lambda y :
....: (x^2 + y^2) / sqrt(4 - (x^2 + y^2)^2),
....: (0, sqrt(1-x^2/2)) )[0],
....: (0, sqrt(2)) )[0]
2.2675940740738505
sage: ( 2*pi - sqrt(2)*log(3) - 4*atan(1/sqrt(2)) ).n()
2.26759407407385
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4247588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find all values of $x$ such that $\frac{4x-5}{3x+5}\geq3$ Find all values of $x$ such that $\frac{4x-5}{3x+5}\geq3$.
If we consider two cases: $3x+5 > 0$ for the first and $3x + 5 < 0$ for the second.
When $3x+5 > 0$:
$\frac{4x-5}{3x+5}\geq3$.
Solving for $x$ we find $x\leq -4$.
When $3x+5 < 0$:
$\frac{4x-5}{3x+5}\geq3$.
Solving for $x$ we find $x\geq -4$.
Additionally, through trial and error I discovered that $x=-2$, $x=-3$ and $x=-4$... satisfy the original inequality.
$-2$, $-3$, $-4$ satisfy the inequality chain $-4 \leq x < -5/3$. All other values that satisfy the inequality chain are also solutions.
In the case that $3x+5 > 0$, the conditions $x \leq -4$ and $x > -5/3$ are contradictory, so this case gives no solutions.
Is this solution correct? How can I improve it?
| Your way to proceed also works, indeed we have
*
*for $3x+5>0 \iff x>-\frac 5 3$
$$\frac{4x-5}{3x+5}\geq3 \iff 4x-5\ge 9x+15 \iff 5x \le -20 \iff x \le -4$$
that is, there is no solution in this first case; then
*
*for $3x+5<0 \iff x<-\frac 5 3$
$$\frac{4x-5}{3x+5}\geq3 \iff 4x-5\le 9x+15 \iff 5x \ge -20 \iff x \ge -4$$
that is $x \in \left[-4,-\frac 5 3\right)$.
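The resulting solution set can be double-checked by brute force (grid resolution chosen arbitrarily):

```python
# Brute-force check: (4x-5)/(3x+5) >= 3 holds exactly on [-4, -5/3).
def holds(x):
    return (4 * x - 5) / (3 * x + 5) >= 3

for k in range(-8000, 8000):
    x = k / 1000
    if abs(3 * x + 5) < 1e-9:
        continue  # skip the excluded point x = -5/3
    in_interval = (-4 <= x < -5 / 3)
    assert holds(x) == in_interval
print("solution set is [-4, -5/3)")
```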
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4247744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Regularity up to the boundary of conformal mappings Consider any Lipschitz simply connected domain $\Omega \subset \mathbb R^2$ and a conformal mapping $\phi : D \rightarrow \Omega$ which we know it extends continuously to the boundary by Caratheodory's theorem.
Can we say something more besides continuity, for example that $\bar \phi : \bar {D} \rightarrow \bar\Omega$ is globally Holder continuous?
I've seen Kellogg-Warschawski's theorem in another Stack Exchange question but it requires $C^{1,\alpha}$ boundary at least.
| I've had to use this result in the last year. The answer is that for chord-arc domains (of which Lipschitz domains are an example), $\bar \phi$ is continuous, but the derivative $d \bar \phi$ may only be in $L^p$ for some $p > 1$; you cannot ask for more than this. For any $\varepsilon > 0$ you can get $d \bar \phi \not \in L^{1 + \varepsilon}$ by taking a polygon with acute enough angles.
The proof that the derivative will always be at least $L^p$ can be found in the following paper of Baratchart, Bourgeois, Leblond https://hal.inria.fr/hal-01084428v1 on p. 19-20. They show along the proof of their Lemma 5.1 that $d \bar \phi$ must be in the Muckenhoupt class $A_2$ and give references from which it can be deduced that $d \bar \phi \in L^{1 + \varepsilon}$ for some $\varepsilon > 0$.
I hope that a year later this answer is still useful.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4247815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Approximating step functions with polynomials Let $t_1 < t_2 < \cdots <t_m$ be real, and $X = \cup_{i=1}^{m-1} (t_i, t_{i+1})$ be a union of real open intervals. Let $f:X \rightarrow \{-1, 1\}$ be any piecewise constant function of form
$$
f(x) =
\begin{cases}
a_1 & \text{ if } t_1 < x < t_2 \\
a_2 & \text{ if }t_2 < x < t_3 \\
\vdots \\
a_{m-2} & \text{ if } t_{m-2} < x < t_{m-1} \\
a_{m-1} & \text{ if } t_{m-1} < x < t_m
\end{cases}
$$
where $a_i \in \{-1, 1\}$ and $a_{i} = -a_{i+1}$ for $i = 1, ..., m-2$.
I have a number of questions regarding polynomial approximations of such a function $f$:
*
*Can we always find a sequence of polynomials $(p_n)$ so that $(p_n)$ converge pointwise to $f$, AND we have some fixed (not arbitrary) global error bound, say $1$, such that $|p_n(x) - f(x)| \leq 1$ for all $x \in X$ and $n \in \mathbb{N}$?
*If so, are such polynomials easy to find?
*How quickly do we get convergence?
I am aware that, upon picking a suitable inner product, we can use any collection of orthonormal polynomials to make approximations of functions. For example I know the Chebyshev, Bernstein, Jacobi etc. polynomials can be used to approximate continuous functions on bounded intervals, but I have found no theorem that says we can use these to construct approximations for arbitrary piecewise constant functions like the one given above.
Indeed, it is easy to find a polynomial approximation for the Heaviside step function, for example; however, it is unclear how, or if, this can be done for more complicated step functions.
| The given step function can be presented in the form of
$$f(x)=(-1)^m a_1\operatorname{sgn}y(x),\quad\text{where}\quad y(x)=\prod_{j=1}^{m-1}(x-t_j).\tag1$$
Function $\,\operatorname{sgn}z\,$ can be presented via Fourier series in the form of
$$\operatorname{sgn}z=\dfrac 4\pi\sum_{n=1}^\infty \dfrac{\sin((2n-1)\pi z)}{2n-1},\tag2$$
(plot of the partial sums for $n=1\ldots20$ omitted)
where the sine functions are the polynomials of $\,\sin \pi z:\,$
$$\sin((2n-1)\pi z)=P_{2n-1}(\sin \pi z),\tag3$$
$$P_{2n-1}(t) = t\sum_{j=0}^{n-1} (-1)^j\dbinom{2n-2-j}j(4-4t^2)^{n-1-j}.\tag4$$
Therefore,
$$\operatorname{sgn}z=\dfrac4\pi\,\lim\limits_{N\to\infty} \sum_{n=1}^N \dfrac1{2n-1}P_{2n-1}(\sin \pi z).\tag5$$
Since the function $\operatorname{sgn} z$ has a jump at the point $\,z=0,\,$ an attempt to transform formula $(5)$ into a polynomial series leads to divergent expressions for the coefficients.
However, it looks possible to build a polynomial approximation of the function $f(x)$ as a polynomial sequence $\,f_N(x),\,$ based on formulas $(1),(4),(5)$ and the known Maclaurin series for $\,\sin \pi z.\,$
The optimization task solution (via logarithmic derivative) gives
$$Y=\max\limits_{x\in(t_1,t_m)} |y(x)|=\max(|y(x_l)|,|y(x_h)|),\tag6$$
where
$$\dfrac1{x_l-t_1}=\prod_{j=2}^{m-1}\dfrac1{t_j-x_l},\quad x_l\in(t_1,t_2)\tag7$$
and
$$\dfrac1{x_h-t_{m-1}} = \prod_{j=1}^{m-2}\dfrac1{x_h-t_j},\quad x_h\in(t_{m-1}, t_m).\tag8$$
Then
$$\operatorname{sgn}(y(x)) = \operatorname{sgn} z(x),\quad\text{where}\quad z(x)=\dfrac {y(x)}Y,\quad |z(x)|\le1.\tag9$$
Applying $\,z(x)\,$ instead of $\,y(x)\,$ in $(1)$ allows one to control the calculation accuracy of the sine function and, finally, of the distances between the given function's jumps, which provide the required calculation accuracy of the approximations $f_N(x)$.
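As a quick numerical sanity check of the Fourier expansion $(2)$ (sample points and truncation order chosen arbitrarily, away from the jump at $z=0$):

```python
import math

# Partial sums of the Fourier series (2) for sgn(z), evaluated away from z = 0.
def sgn_series(z, N):
    return (4 / math.pi) * sum(math.sin((2 * n - 1) * math.pi * z) / (2 * n - 1)
                               for n in range(1, N + 1))

for z in [0.1, 0.25, 0.4, -0.1, -0.3]:
    approx = sgn_series(z, 500)
    assert abs(approx - math.copysign(1, z)) < 0.05
print("partial sums approximate sgn(z) away from the jump")
```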
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4247996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Convergence of improper integral of c.d.f. Let $F(x),G(x)$ be two cumulative distribution functions. And
$$
\int^{+\infty}_{-\infty}|x|dF(x)<\infty,\int^{+\infty}_{-\infty}|x|dG(x)<\infty
$$
show that:
$$
\int^{+\infty}_{-\infty}|F(x)-G(x)|dx<\infty
$$
I try to do this by
\begin{aligned}
\int^{+\infty}_{-\infty}xdF(x)-\int^{+\infty}_{-\infty}xdG(x) = \int^{+\infty}_{-\infty}G(x)-F(x)dx<\infty
\end{aligned}
Then I don't know how to continue, since convergence doesn't imply absolute convergence. Maybe this is the wrong direction.
| By the triangle inequality, treating the two half-lines separately: for $x<0$ we have $|F(x)-G(x)|\leq F(x)+G(x)$, while for $x\geq0$ we have $|F(x)-G(x)| = |(1-G(x))-(1-F(x))|\leq (1-F(x))+(1-G(x))$. Hence
$$
\int^{+\infty}_{-\infty}|F(x)-G(x)|dx \leq \int^{0}_{-\infty}\left(F(x)+G(x)\right)dx + \int^{+\infty}_{0}\left(2-F(x)-G(x)\right)dx = \mathbb{E}_F [|X|] + \mathbb{E}_G [|X|] <\infty,
$$
since
$$
\int^{+\infty}_{-\infty}|x|dF(x) = \mathbb{E}_F [|X|] = \int^{0}_{-\infty}F(x)\,dx + \int^{+\infty}_{0}\left(1-F(x)\right)dx <\infty,
$$
where $\mathbb{E}_F$ denotes expectation w.r.t. the CDF $F$.
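A numerical illustration of the boundedness — not part of the proof, distributions chosen ad hoc — with $F$ the CDF of Uniform$(0,1)$ and $G$ the CDF of Uniform$(0,2)$, for which $\mathbb{E}_F[|X|]=1/2$, $\mathbb{E}_G[|X|]=1$, and the integral of $|F-G|$ equals $1/2$ exactly:

```python
def F(x):
    return min(max(x, 0.0), 1.0)        # CDF of Uniform(0, 1)

def G(x):
    return min(max(x / 2, 0.0), 1.0)    # CDF of Uniform(0, 2)

# Midpoint Riemann sum of |F - G| over [-1, 3] (the CDFs agree outside [0, 2]).
n = 20000
a, b = -1.0, 3.0
h = (b - a) / n
integral = sum(abs(F(a + (i + 0.5) * h) - G(a + (i + 0.5) * h)) for i in range(n)) * h

EF, EG = 0.5, 1.0                        # E|X| for the two uniforms
assert abs(integral - 0.5) < 1e-3        # exact value is 1/2 for this pair
assert integral <= EF + EG               # the bound from the answer
print(integral)
```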
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4248305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Every ring $R$ has a minimal prime ideal. In my commutative algebra course I was asked to solve the following problem:
(i) Prove that in a commutative unital ring $R$ every prime ideal $P$ contains a minimal prime ideal.
(ii) Prove that every commutative unital ring $R$ has a minimal prime ideal.
Using Zorn's lemma I was able to prove (i). However I do not understand (ii), because if this is true, that would mean that there is some prime ideal $P$ such that for every prime ideal $Q$ we have $P\subset Q$. Of course if $R$ is an integral domain $(0)$ is a minimal prime ideal. But in $\mathbb{Z}_6$ we can prove that $(2)$ and $(3)$ are prime ideals and $2\notin(3)$, $3\notin (2)$, so that would be a counterexample.
It seems that I am missing something, I'll appreciate some help.
| From your comment in the comments:
Then it suffices to take some prime ideal , apply (i) and the minimal prime ideal obtained is as in (ii), right?
This will work, yes. If you know that in a ring with identity
*
*There exist maximal ideals; and
*maximal ideals are prime.
then you can extract a minimal prime ideal contained in that prime ideal, which would necessarily be a minimal prime in the entire ring.
However I do not understand (ii), because if this is true, that would mean that there is some prime ideal $P$ such that for every prime ideal $Q$ we have $P\subset Q$.
As also discussed in the comments, it seems you are interpreting minimal as minimum (meaning "a prime ideal contained in all other prime ideals").
A minimal prime ideal is simply one that does not properly contain any other prime ideal. In your example, $(2)$ and $(3)$ are both minimal prime ideals of $\mathbb Z_6$, and no minimum prime ideal exists in the ring.
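The $\mathbb Z_6$ example can be checked by brute force (a small ad-hoc script, not part of the argument):

```python
# Enumerate the ideals of Z_6, find the prime ones, and check which are minimal.
from itertools import combinations

n = 6
ring = range(n)

def is_ideal(S):
    return all((a + b) % n in S and (r * a) % n in S for a in S for b in S for r in ring)

ideals = [frozenset(S) for k in range(1, n + 1)
          for S in combinations(ring, k) if 0 in S and is_ideal(set(S))]

def is_prime(I):
    return I != frozenset(ring) and all(
        a in I or b in I for a in ring for b in ring if (a * b) % n in I)

primes = [I for I in ideals if is_prime(I)]
minimal = [P for P in primes if not any(Q < P for Q in primes)]

assert sorted(map(sorted, primes)) == [[0, 2, 4], [0, 3]]
assert sorted(map(sorted, minimal)) == [[0, 2, 4], [0, 3]]  # both primes are minimal
```

So $(2)=\{0,2,4\}$ and $(3)=\{0,3\}$ are both minimal primes, and neither is a minimum.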
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4248408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$\omega$-area of a unit circle - Symplectic geometry Consider a standard symplectic structure $\omega=\sum_k dp_k\wedge dq_k$ on $\mathbb{R}^{2n}$. Let $S$ be a unit circle that belongs to some plane in $\mathbb{R}^{2n}$. I want to show that
$$\iint_S \omega\leq \pi.$$
One of the hints was to use the Cauchy–Schwarz inequality for the corresponding Hermitian form
$$\langle z,w\rangle=\sum z_k\overline{w_k}.$$
But, unfortunately, I don't see how to proceed further. Any hint would be appreciated. Are there any other approach to this problem?
Also, as I understand it geometrically, we basically compute the area of projection of the circle onto $p_iq_i$-planes, correct?
| Using the complex coordinates $z_i = p_i + \sqrt{-1} q_i$ and its conjugate $\bar z_i = p_i - \sqrt{-1} q_i$, the symplectic form can be written as
$$ \omega = \frac{\sqrt{-1}}{2}\sum_i dz_i \wedge d\bar z_i .$$
Thus for any $u = (u_1, \cdots, u_n)$ and $v = (v_1, \cdots, v_n)$ in $\mathbb C^n$,
\begin{align}
\omega (u, v) &= \frac{\sqrt{-1}}{2} \sum_i (u_i \bar v_i - \bar u_i v_i)\\
&= - \operatorname{Im} \langle u, v\rangle
\end{align}
Now let $e_1, e_2$ be an orthonormal basis of the plane which contains the unit disc. Then
$$ \iint_D \omega = \iint_D \omega (e_1, e_2) dxdy \le \iint _D |\langle e_1 , e_2\rangle| dxdy\le \iint _D \|e_1\| \| e_2\| dxdy =\pi$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4248726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determine whether the series diverges or converges I would appreciate help in determining if the following series diverges or converges: $\sum_{n=1}^{\infty} \left(\ln (n+1)-\ln n\right)^{\ln n}$. I know of one approach, but that gets very complicated, and I thought that someone in here might know of a slightly smoother way to do this?
My approach: I have started by rewriting the expression as $e^{\ln(n)\ln(\ln(n+1)-\ln(n))}$. Then I applied the limit chain rule with $g_1(n)=\ln(n)\ln(\ln(n+1)-\ln(n))$ and $f_1(u)=e^{u}$, and started by solving $$(1)\quad\lim_{n\rightarrow\infty}\ln(n)\ln(\ln(n+1)-\ln(n))=\lim_{n\rightarrow\infty}\ln(n)\cdot\lim_{n\rightarrow\infty}\ln(\ln(n+1)-\ln(n))=\infty\cdot\lim_{n\rightarrow\infty}\ln(\ln(n+1)-\ln(n))$$
And to solve that I applied the limit chain rule again with $g_2(n)=\ln(n+1)-\ln(n)$ and $f_2(u)=\ln(u)$, and started solving $$(2)\quad\lim_{n\rightarrow\infty}\left(\ln(n+1)-\ln(n)\right)=\lim_{n\rightarrow\infty}\ln\left(1+\frac{1}{n}\right)$$
And here I need to apply the limit chain rule again with $g_3(n)=1+\frac{1}{n}$ and $f_3(u)=\ln(u)$, solving $$(3)\quad\lim_{n\rightarrow\infty}\left(1+\frac{1}{n}\right)=1.$$
And only from here can I start computing all the $f(u)$: for $f_3(u)$ we have $\lim_{u\rightarrow 1}\ln(u)=0$. For $f_2(u)$ we have $\lim_{u\rightarrow 0^+}\ln(u)=-\infty$, which we can now put into (1) to get $\infty\cdot(-\infty)=-\infty$. Now we can finally determine $f_1(u)=\lim_{u\rightarrow-\infty}e^u=0$ and say that the series is convergent.
| Here is a simple approach. Note that for $x>0$, $0<\log(1+x)\le x$. Therefore, we have
$$0< \log(n+1)-\log(n)\le \frac1n$$
So, for $n>3$, $\log(n)>1.09$ and hence
$$\left( \log(n+1)-\log(n) \right)^{\log(n)}<\frac1{n^{1.09}}$$
Using the p-test, we conclude that the series of interest converges.
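Both the comparison and the boundedness of the partial sums are easy to confirm numerically (truncation point chosen arbitrarily):

```python
import math

# Check the comparison (log(n+1) - log(n))^log(n) < n^(-1.09) for n > 3,
# and that the partial sums stay bounded.
for n in range(4, 5000):
    term = (math.log(n + 1) - math.log(n)) ** math.log(n)
    assert term < n ** (-1.09)

partial = sum((math.log(n + 1) - math.log(n)) ** math.log(n) for n in range(1, 5000))
assert partial < 10
print(partial)
```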
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4248857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Equivalence relation in singular homology I'm currently reading Rotman's Introduction to algebraic topology, and I'm struggling to understand what singular homology is.
The definition of a singular simplex isn't clear to me. If a singular simplex is any continuous map from the standard n simplex, then there are uncountably many of them (except for some special spaces). The point that I don't get is how all of those simplexes "cancel" each other in the homology group. I read about an equivalence relation that takes care of it but I didn't understand.
Thanks in advance!
| You are sort of correct. Singular homology is not really "computable" directly from the definition.
However, one can show that it is naturally isomorphic to (semi)simplicial homology, CW homology, etc., and those are somewhat computable, giving you the results.
Simplices do not trivially cancel in the way you expected.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4249073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Set question which I believe relates to Fibonacci numbers. For a subset $S$ of $\mathbb{N}$, let us define $S+1=\{x+1: x \in S\}$. How many subsets $S$ of the set $\{1,2,...,n\}$ satisfy the condition $S\cup \left(S+1\right)=\{1,2,...,n+1\}$?
I think that the number of subsets is the $n$th Fibonacci number. I looked at the cases for $n=1,2,3,4,5$ and I find $1,1,2,3,5$ subsets respectively. My professor suggested a proof by induction, but I don't see how to invoke the inductive hypothesis here.
Examining $n=3$. We want to show the subsets which give $S\cup \left( S+1\right)=\{1,2,3,4\}$.
Subsets $S$ include: $\{1\}, \{2\}, \{3\}, \{1,2\}, \{1,3\}, \{2,3\}, \{1, 2, 3\}$.
Subsets $\left( S+1\right)$ include: $\{2\}, \{3\}, \{4\}, \{2, 3\}, \{2, 4\}, \{3, 4\}, \{2, 3, 4\}$. So, the only cases in which $S\cup \left( S+1\right)=\{1,2,3,4\}$ is when $S=\{1, 3\}$ and $S=\{1, 2, 3\}$. In other words, we have two subsets which satisfy the relationship when $n=3$.
As mentioned, I did this for $n=1,2,3,4,5$ to find a pattern, and it appears to be the Fibonacci sequence, but again, I am completely unsure how to show this through induction. Any help would be appreciated.
| I have below what I believe to be a direct bijective proof by induction. Let us refer to the number of possible such subsets $S$ as $S_n$. You have already manually shown that $S_n = F_n$ for $n=1,2,3,4,5$ so we now show that $S_{n+1} = S_n+S_{n-1}$.
Consider any $S\subseteq \{1,...,n+1\}$ so that $S\cup (S+1) = \{1,..., n+2\}$. Clearly $n+2\in (S+1)$ but $n+2\notin S$ and so we know that $n+1\in S$. Now there are two possibilities:
*
*$n\in S$. If this is the case, then observe that $S' = S\setminus \{n+1\}$ has the property that $S'\subseteq \{1,...n\}$ and also that $S'\cup (S'+1) = \{1,..., n+1\}$. Similarly observe that for any set $U$ that satisfies these requirements (i.e. is one of those counted by $S_n$) we may take $U' = U\cup\{n+1\}$ and see that $U'\subseteq \{1,...n+1\}, U'\cup (U'+1) = \{1,..., n+2\}$. Thus there are precisely $S_n$ sets $S$ in this case where $n\in S$.
*$n\notin S$. If this is the case, we know that $S' = S\setminus \{n+1\}$ has the property that $S'\subseteq \{1,...n-1\}$ and also that $S'\cup (S'+1) = \{1,..., n\}$. Similarly observe that every set $U$ counted by $S_{n-1}$ can have $n+1$ appended to it to get a set $U'$ such that $U'\subseteq \{1,...n+1\},U'\cup (U'+1) = \{1,..., n+2\}$. Thus there are precisely $S_{n-1}$ sets $S$ in this case where $n\notin S$.
Given it is either true that $n\in S$ or that $n\notin S$, we see that $S_{n+1} = S_n+S_{n-1}$, showing the result by induction!
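A brute-force count confirms the Fibonacci pattern for small $n$ (exponential-time check, fine for $n\le10$):

```python
from itertools import combinations

def count_valid(n):
    """Count S ⊆ {1,...,n} with S ∪ (S+1) = {1,...,n+1}."""
    universe = range(1, n + 1)
    target = set(range(1, n + 2))
    return sum(1 for k in range(n + 1)
               for S in combinations(universe, k)
               if set(S) | {x + 1 for x in S} == target)

fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
assert [count_valid(n) for n in range(1, 11)] == fib
print("counts match the Fibonacci numbers for n = 1..10")
```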
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4249231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Complex Matrix and its Conjugate terms If $z = \left| {\begin{array}{*{20}{c}}
{3 + 2i}&1&i\\
2&{3 - 2i}&{1 + i}\\
{1 - i}&{ - i}&3
\end{array}} \right|\& \left| {z + \overline z } \right| = k\left| z \right|$, find the value of k
My approach is as follows:
$ \Rightarrow z = - \left| {\begin{array}{*{20}{c}}
1&{3 + 2i}&i\\
{3 - 2i}&2&{1 + i}\\
{ - i}&{1 - i}&3
\end{array}} \right|$ where $\left( {{C_1} \leftrightarrow {C_2}} \right)$
$ \Rightarrow \overline z = - \left| {\begin{array}{*{20}{c}}
1&{3 - 2i}&{ - i}\\
{3 + 2i}&2&{1 - i}\\
i&{1 + i}&3
\end{array}} \right|$
How do we proceed from here?
| Since $z=27$, $\left|z+\overline z\right|=2\times27$, and therefore $k=2$.
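The value $z=27$ (and hence $k=2$) can be verified with a direct cofactor expansion, e.g. in Python with complex arithmetic:

```python
# Verify z = 27 and k = 2 numerically (3x3 determinant by cofactor expansion).
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

M = [[3 + 2j, 1, 1j],
     [2, 3 - 2j, 1 + 1j],
     [1 - 1j, -1j, 3]]

z = det3(M)
assert abs(z - 27) < 1e-12
k = abs(z + z.conjugate()) / abs(z)
assert abs(k - 2) < 1e-12
print(z, k)
```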
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4249550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Can you help me find this unit vector?
The vectors $a = (1,1,1)$ and $b = (−1, −1, −1)$ are given. Determine the unit vector $c$ such that $\angle (a, c) = \frac \pi 6$ and that the area of the parallelogram constructed over the vectors $b$ and $c$ is equal to $\sqrt2.$
So I tried solving this like:
$$P=|b \times c|=\sqrt2,\quad\text{where, writing } c=(x,y,z),\quad b \times c=(y-z, z-x, x-y)$$
$$|c|=1 \Rightarrow x^2 +y^2+z^2=1$$
and $$∡ (a, c) = π / 6 \Rightarrow a\cdot c=|a||c|\cdot \cos ∡ (a, c)$$
And with this I got 3 equations, but I really don't know if this is correct or not. Can you help me solve this? I need it for my homework.
| As currently stated, the problem has no solution: indeed, the vectors $a$ and $b$ are anti-parallel with length $\sqrt 3$, therefore for any vector $c$ such that $\angle (a, c) = \frac \pi 6$ the area of the parallelogram is
$$A=\sqrt 3 \cdot 1 \cdot\sin\left(\frac \pi 6\right)= \frac {\sqrt 3} 2\neq \sqrt 2$$
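A quick numerical check (the choice of the orthogonal direction $e_2$ below is arbitrary — any unit vector $c$ at angle $\pi/6$ to $a$ gives the same area):

```python
import math

# a = (1,1,1), b = (-1,-1,-1); build a unit c at angle pi/6 to a and
# check the parallelogram area |b x c| equals sqrt(3)/2, never sqrt(2).
a = (1.0, 1.0, 1.0)
e1 = tuple(x / math.sqrt(3) for x in a)        # unit vector along a
e2 = (1 / math.sqrt(2), -1 / math.sqrt(2), 0)  # unit vector orthogonal to a

c = tuple(math.cos(math.pi / 6) * u + math.sin(math.pi / 6) * v
          for u, v in zip(e1, e2))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

b = (-1.0, -1.0, -1.0)
area = math.sqrt(sum(t * t for t in cross(b, c)))
assert abs(area - math.sqrt(3) / 2) < 1e-12
assert abs(area - math.sqrt(2)) > 0.5
print(area)
```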
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4249675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What does it mean when an object appears in a category's diagram multiple times? In my Introduction to Category Theory class, we were being introduced to diagrams and were told that "Each object and morphism may appear more than once in the diagram". At first, I thought that that would just be a visual aid, perhaps to simplify the picture, but it instead seems to have some actual meaning, and I'm confused as to what that meaning is. For example, take the diagram below, which we were shown in class:
We were told that this diagram commutes iff $g \circ f = \text{id}_X$. However, I thought that it would also be a necessary condition that $f \circ g = \text{id}_Y$, as those are two different paths between $Y$ and itself. I asked the professor this question, and he told me that I couldn't take $f \circ g$, as the input of $f$ was a different $X$ than the output of $g$ (this might not have been his exact response, I don't quite remember what he said verbatim). This really confused me, as I don't see how the same object in the same category can have two different properties simultaneously. Could anyone help resolve my confusion?
| The problem is that your professor did not give you a precise definition of a diagram.
One definition is that a commutative diagram in a category $\mathcal C$ is a functor $A \to \mathcal C$ where $A$ is a small category. The category $A$ is the shape of the diagram.
Your misunderstanding comes from the fact that your professor and yourself have different shapes in mind when looking at the diagram you drew.
For your professor, the shape of the diagram is the category with objects $0,1,2$ and morphisms $a: 0\to 1$, $b: 1\to 2$ and $c: 0\to 2$ (plus of course the identity morphisms). By force in that category, we have $b\circ a=c$. The diagram is then the functor mapping both $0$ and $2$ to $X$, $1$ to $Y$, $a$ to $f$, $b$ to $g$ and $c$ to $\mathrm{id}_X$.
For you, it seems that the shape is the category with the same objects and morphisms, except you add a morphism $d:2\to 0$. The diagram, as you see it, is defined as above except $d$ is mapped to $\mathrm{id}_X$ also. In that setting, the composite $a\circ d \circ b$ is sent to $f\circ g$, and $a\circ d \circ b$ is forcibly $\mathrm{id}_1$ in the shape category, so indeed the diagram yields $f\circ g = \mathrm{id}_Y$.
Basically, it all boils down to the fact that drawing a diagram without explicitly mentioning the shape is ambiguous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4249812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Can choosing a different value for a free variable yield a different solution to a Row Reduced Matrix? Let $ U = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 4 \end{bmatrix} $ and
$ c = \begin{bmatrix} 5 \\ 8 \end{bmatrix}$
I have been asked to reduce the matrix $ \begin{bmatrix} U & c \end{bmatrix} $ to $ \begin{bmatrix} R & d \end{bmatrix} $ and then solve $ Rx = d$
$ \begin{bmatrix} U & c \end{bmatrix} = \begin{bmatrix} 1 & 2 & 3 & 5 \\ 0 & 0 & 4 & 8 \end{bmatrix}$
Dividing row 2 by 4 and then subtracting it thrice from row 1 gives: $ \begin{bmatrix} 1 & 2 & 0 & -1 \\ 0 & 0 & 1 & 2 \end{bmatrix}$
This means that I can treat column 2 as a free column and thus $ x_2 $ is a free variable.
Writing out $ Rx = d $:
$$x_1 + 2x_2 = -1$$
$$x_3 = 2$$
If I take $x_2 = 0$ then the solution becomes $x = \begin{bmatrix} -1 \\ 0 \\ 2 \end{bmatrix}$
If I take $x_2 = 1$ then the solution becomes $x = \begin{bmatrix} -3 \\ 1 \\ 2 \end{bmatrix}$
Both solutions seem to work. However, the website on which I found this question only shows the second option. Did I do something wrong when I set the free variable equal to 0?
Edit: The original question is number 8.2 on this pdf: https://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/ax-b-and-the-four-subspaces/solving-ax-b-row-reduced-form-r/MIT18_06SCF11_Ses1.8sol.pdf
| Since $x_2$ is a free variable, every value you assign to $x_2$ will yield a solution to the linear system. They are all valid. There is nothing strange about only one of the solutions being in the alternatives... I imagine they asked which of the alternatives were solutions of the system.
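Both candidate solutions can be checked mechanically against the original system (a sketch assuming `numpy`; the matrices are taken from the question):

```python
import numpy as np

U = np.array([[1.0, 2.0, 3.0],
              [0.0, 0.0, 4.0]])
c = np.array([5.0, 8.0])

x_a = np.array([-1.0, 0.0, 2.0])   # free variable x2 = 0
x_b = np.array([-3.0, 1.0, 2.0])   # free variable x2 = 1

# Both satisfy Ux = c, so both are valid solutions.
assert np.allclose(U @ x_a, c)
assert np.allclose(U @ x_b, c)
```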
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4249991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding suitable variables in a real number inequality I have some trouble with a inequality problem.
I have to find two values that will work in the given inequality. The problem goes as follows:
Let $0 < x <2.$
Find real numbers $1 < a < b < 2,$ so that
$$a \leq \frac{(x+3)}{(x+2)} \leq b.$$
I tried to solve this problem by multiplying the whole function with $(x+2)$, but that didn't work since then $a = 1$, which is wrong.
Any and all help is very appreciated!
| Let $f(x) = \frac{x+3}{x+2}$, take the derivative we get $f'(x) = -\frac{1}{(x+2)^2}$. Notice $f'(x) < 0$ for all $x$ in it's domain, which tells us that $f(x)$ is a decreasing function.
We can now say that the maximum value of $f(x)$ on the domain $0 < x < 2$ is approached as $x\to 0$, where $f(0)=\frac{3}{2} = 1.5$; similarly, the minimum is approached as $x \to 2$, where $f(2) = \frac{5}{4} = 1.25$. (On the open interval neither endpoint value is attained, but the bounds still hold.)
We can therefore say, no matter the value of $x$, $1.25 \leq \frac{x+3}{x+2} \leq 1.5$.
Here is a Desmos graph to help you visualize.
Hope this helps!
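A quick numeric confirmation of these bounds on the open interval (a sketch assuming `numpy`):

```python
import numpy as np

x = np.linspace(1e-9, 2 - 1e-9, 10_001)   # sample the open interval (0, 2)
f = (x + 3) / (x + 2)

# f is decreasing, so its values lie strictly between f(2) = 1.25 and f(0) = 1.5.
assert np.all(f > 1.25) and np.all(f < 1.5)
assert np.all(np.diff(f) < 0)              # monotonically decreasing
```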
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4250116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is there a non empty set X such that $X \subseteq X\times X$? I came up with this idea but I can't seem to prove it. Maybe I have to prove that X is the set of every set (such that X doesn't exist), or that X is empty, but I am not getting anything done and I would like a bit of help. Just for reference, I am just starting first year of undergrad, so I would prefer an answer that doesn't require much knowledge beyond set theory. Thanks a lot.
| Assuming the axiom of regularity (a standard ZFC axiom) the existence of such a set is not possible.
Assume otherwise that $X$ is a non-empty set satisfying $X \subseteq X \times X$ and take $Y = X \cup \bigcup X$. By the mentioned axiom there is an $\in$-minimal element $y \in Y$.
*
*If $y \in X$, then $y = \left< a, b \right> = \{ \{ a \}, \{ a, b \} \}$ for some $a, b \in X$. But then $\{ a \} \in \bigcup X$ and $\{ a \} \in y$, so $y$ is not $\in$-minimal.
*Else $y \in \bigcup X$, so there is some $x \in X$ with $y \in x$. But then again we can write $x = \{ \{ a \}, \{ a, b \} \}$ where $a, b \in X$. Then $y = \{ a \}$ or $y = \{ a, b \}$. In either case $a \in y$, and $a \in X$, so again $y$ is not $\in$-minimal. $\blacksquare$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4250263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How many permutations of {1,2,...,n}. How many permutations of {1,2,...,n} where n >=5 are there such that none of {1,2,3} are adjacent to one another?
Example: the permutation (5,3,1,4,2) does not meet the condition described in the problem because 1 and 3 are in two adjacent places and the permutation (2,5,3,4,1) as well as (1,6,2,5,3,4) meets this condition.
| Moving comment to answer:
First arrange the numbers $4,5,6,\dots,n$.
Next, pick a "hole" between those or to either side to place the $1$. Then pick a different hole to place the $2$, etc...
$(n-3)!\cdot (n-2)(n-3)(n-4)$
In general, the number of permutations of $1,2,3,\dots,k,\dots,n$ where none of $1,2,\dots,k$ are adjacent, the same approach works.
$(n-k)!\cdot \frac{(n-k+1)!}{(n-2k+1)!}$
or better yet, using $a^{\underline{b}}$ to denote the falling factorial $a^{\underline{b}}=\underbrace{a(a-1)(a-2)\cdots(a-b+1)}_{b~\text{terms}} = \frac{a!}{(a-b)!}$ this would be
$(n-k)!\cdot (n-k+1)^{\underline{k}}$
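The formula can be cross-checked against brute force for small $n$ (a sketch with $k=3$, matching the question):

```python
from itertools import permutations
from math import factorial

def brute_force(n, k=3):
    # Count permutations of 1..n in which no two of 1..k are adjacent.
    good = 0
    for p in permutations(range(1, n + 1)):
        if all(not (p[i] <= k and p[i + 1] <= k) for i in range(n - 1)):
            good += 1
    return good

def formula(n, k=3):
    # (n-k)! ways to arrange k+1..n, then place 1..k into distinct "holes".
    return factorial(n - k) * factorial(n - k + 1) // factorial(n - 2 * k + 1)

for n in range(5, 8):
    assert brute_force(n) == formula(n)
```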
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4250399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove if in a group $a$ commutes with $b$, it also commutes with $b^k$ for any integer $k$ Prove if in a group $a$ commutes with $b$, it also commutes with $b^k$ for any integer $k$ (based on Gallian's Algebra text).
This site has similar questions for individual integers; I'd like to prove it for all integers. My proof is below.
*
*Is my proof correct?
*Can writing be improved?
*Is my use of induction appropriate?
*Is there a more direct proof than using induction?
Proof: Since $ab=ba$, $aba^{-1}=b$ and similarly $a^{-1}ba=b$.
We now show that for any integer $k$, $a^kba^{-k}=b$. If $k>0$, we have $a^kba^{-k} = a^{k-1}(aba^{-1})a^{-(k-1)} = a^{k-1}ba^{-(k-1)}$. By induction, we get $a^kba^{-k}=b$. A similar induction can be used for $k<0$. And for $k=0$, $a^kba^{-k}=b$ is trivial.
Consequently, $a^kb=ba^k$, QED.
| *
*Yes, it appears so.
*I think it's perfectly fine, given what you are doing.
*I think there's a much more general statement that's equally easy to prove: the collection of all elements that commute with $x$ forms a subgroup of a group.
*This is 3. Suppose that $x$ and $y$ commute with $a$. Then $xy$ and $x^{-1}$ commute with $a$. To see this, we simply calculate:
$$ axy=xay=xya,$$
and for inverses we just multiply $ax=xa$ on left and right by $x^{-1}$ to obtain $x^{-1}a=ax^{-1}$.
Now that the collection of all elements that commute with $a$ is a subgroup, the result is obvious.
If you want to just prove your result, you can use the inverse trick to show that $a$ commutes with $b^{-1}$, and then you have proved the negative powers by proving the positive powers.
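To illustrate with concrete group elements (a sketch assuming `numpy`; invertible $2\times2$ matrices under multiplication form a group, and planar rotations, scaled or not, commute with each other):

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

a = rot(0.3)
b = 2.0 * rot(1.1)          # a scaled rotation; still commutes with a

assert np.allclose(a @ b, b @ a)

# a then commutes with every integer power of b, negatives included.
for k in range(-4, 5):
    bk = np.linalg.matrix_power(b, k)
    assert np.allclose(a @ bk, bk @ a)
```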
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4250574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
$\operatorname{Mat}_2(\mathbb{R})$ as a field I'm trying to solve this question:
Prove that the set of $2 \times 2$-matrices of the form
$$
\left(\begin{array}{ll}
a & b \\
c & d
\end{array}\right), \quad a, b, c, d \in \mathbb{R}
$$
with the usual matrix addition and multiplication is a non-commutative ring (that is, the multiplication is not commutative). Can you find conditions on $a, b, c, d$ to make it into a field?
I already did the first part and showed that this is a non-commutative ring. Now I'm trying to find the conditions that make it a field. For this, we just have to find the multiplicative inverse; from linear algebra we know that a $2\times 2$ matrix is invertible iff $\operatorname{det} (A) \neq 0$, which means $ad-bc \neq 0$ or, equivalently, that one row isn't a multiple of the other. Now I just want to know whether this is enough.
| This is how I finally did:
if $a=d$ and $b=-c$, the set $SM_2(\mathbb{R})=\bigg\{\begin{pmatrix} a & -b \\ b & a\\ \end{pmatrix} \bigg| a, b \in \mathbb{R}\bigg\}$ turns into a field because:
$SM_2(\mathbb{R})$ is commutative, because:
\begin{equation*}
\begin{aligned}
\begin{pmatrix} a & -b \\ b & a\\ \end{pmatrix} \begin{pmatrix} c & -d \\ d & c\\ \end{pmatrix} &= \begin{pmatrix} ac-bd & -ad-bc \\ bc+ad & -bd+ac\\ \end{pmatrix}
\\ &= \begin{pmatrix} c & -d \\ d & c\\ \end{pmatrix} \begin{pmatrix} a & -b \\ b & a\\ \end{pmatrix}
\end{aligned}
\end{equation*}
The multiplicative inverse exists and is the inverse of matrix $A$: for $A \neq 0$ we have $\det(A)=a^2-(-b)b=a^2+b^2\neq 0$, so every nonzero $A$ is invertible and $A^{-1}=\frac{1}{a^2+b^2}\begin{pmatrix} a & b \\ -b & a\\ \end{pmatrix}$ and
\begin{equation*}
\begin{aligned}
\begin{pmatrix} a & -b \\ b & a\\ \end{pmatrix} \begin{pmatrix} \frac{a}{a^2+b^2} & \frac{b}{a^2+b^2} \\ \frac{-b}{a^2+b^2} & \frac{a}{a^2+b^2}\\ \end{pmatrix}
&= \begin{pmatrix} \frac{a^2+b^2}{a^2+b^2} & \frac{ab-ba}{a^2+b^2} \\ \frac{ba-ab}{a^2+b^2} & \frac{b^2+a^2}{a^2+b^2}\\ \end{pmatrix}
\\ &= \begin{pmatrix} 1 & 0 \\ 0 & 1\\ \end{pmatrix}
\end{aligned}
\end{equation*}
Thus, $SM_2(\mathbb{R})$ is a field. Plus, it cannot be an ordered field. For this, consider $\varphi: SM_2(\mathbb{R}) \longrightarrow \mathbb{C}$ such that $\varphi\left( \begin{pmatrix}
a & -b \\ b & a\\
\end{pmatrix}\right)=a+bi$. We have:
For any $A, B \in SM_2(\mathbb{R})$, we have:
\begin{equation*}
\begin{aligned}
\varphi(AB)&= \varphi\left( \begin{pmatrix}
a & -b \\ b & a\\
\end{pmatrix}\begin{pmatrix}
c & -d \\ d & c\\
\end{pmatrix}\right)= \varphi\left( \begin{pmatrix} ac-bd & -ad-bc \\ bc+ad & -bd+ac\\ \end{pmatrix} \right) \\
&= (ac-bd)+(ad+bc)i= ac+adi+bci+bdi^2\\
&= a(c+di)+bi(c+di)= (a+bi)(c+di)\\
&= \varphi\left( \begin{pmatrix}
a & -b \\ b & a\\
\end{pmatrix}\right) \varphi\left(\begin{pmatrix}
c & -d \\ d & c\\
\end{pmatrix}\right)=\varphi(A)\varphi(B)
\end{aligned}
\end{equation*}
So, $\varphi$ is a homomorphism.
For any $A, B \in SM_2(\mathbb{R})$, assume that $\varphi(A)=\varphi(B)$, then $a+bi=c+di$, so $(a-c)+(b-d)i=0$, and hence, $a=c$ and $b=d$ which means that $A=\begin{pmatrix}
a & -b \\ b & a\\
\end{pmatrix}= \begin{pmatrix}
c & -d \\ d & c\\
\end{pmatrix} =B$. So, $\varphi$ is injective.
Now, let $a+bi\in \mathbb{C}$. Then, since we defined matrices in $SM_2(\mathbb{R})$ to be all the $2\times 2$ matrices with $a=d$, and $b=-c$, there exists $A=\begin{pmatrix}
a & -b \\ b & a\\
\end{pmatrix} \in SM_2(\mathbb{R})$ such that $\varphi\left( \begin{pmatrix}
a & -b \\ b & a\\
\end{pmatrix}\right)=a+bi$.
So $\varphi$ is an isomorphism. Hence, $SM_2(\mathbb{R})$ is isomorphic to $\mathbb{C}$, and since $\mathbb{C}$ cannot be made into an ordered field, the aforementioned field cannot be ordered either.
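The correspondence with $\mathbb{C}$ is easy to check numerically (a sketch assuming `numpy`; the sample values $a,b,c,d$ are arbitrary):

```python
import numpy as np

def m(a, b):
    # The matrix playing the role of the complex number a + bi.
    return np.array([[a, -b], [b, a]], dtype=float)

a, b, c, d = 2.0, -1.5, 0.5, 3.0

# Products correspond: m(a,b) m(c,d) matches (a+bi)(c+di).
z = complex(a, b) * complex(c, d)
assert np.allclose(m(a, b) @ m(c, d), m(z.real, z.imag))

# Inverses correspond and stay inside the set.
w = 1 / complex(a, b)
assert np.allclose(np.linalg.inv(m(a, b)), m(w.real, w.imag))
```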
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4250661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Schur complement like operation on a singular matrix For the classical definition of matrix inversion by Schur complement, given by:
\begin{aligned}
M^{-1}=\left[\begin{array}{ll}
A & B \\
C & D
\end{array}\right]^{-1} &=\left(\left[\begin{array}{cc}
I_{p} & B D^{-1} \\
0 & I_{q}
\end{array}\right]\left[\begin{array}{cc}
A-B D^{-1} C & 0 \\
0 & D
\end{array}\right]\left[\begin{array}{cc}
I_{p} & 0 \\
D^{-1} C & I_{q}
\end{array}\right]\right)^{-1} \\
&=\left[\begin{array}{cc}
I_{p} & 0 \\
-D^{-1} C & I_{q}
\end{array}\right]\left[\begin{array}{cc}
\left(A-B D^{-1} C\right)^{-1} & 0 \\
0 & D^{-1}
\end{array}\right]\left[\begin{array}{cc}
I_{p} & -B D^{-1} \\
0 & I_{q}
\end{array}\right] \\
&=\left[\begin{array}{cc}
\left(A-B D^{-1} C\right)^{-1} & -\left(A-B D^{-1} C\right)^{-1} B D^{-1} \\
-D^{-1} C\left(A-B D^{-1} C\right)^{-1} & D^{-1}+D^{-1} C\left(A-B D^{-1} C\right)^{-1} B D^{-1}
\end{array}\right] \\
&=\left[\begin{array}{cc}
(M / D)^{-1} & -(M / D)^{-1} B D^{-1} \\
-D^{-1} C(M / D)^{-1} & D^{-1}+D^{-1} C(M / D)^{-1} B D^{-1}
\end{array}\right] \qquad \qquad \qquad \qquad \qquad \qquad \qquad (1)
\end{aligned}
the matrix $D$ is considered non-singular.
However, for generalised Schur complement, when $A$, $D$ and $M$ are singular, substituting the formulation for Schur complement using a Moore-Penrose inverse, i.e.,:
\begin{aligned}
M/A := A - (BD^{\dagger}C) \qquad \qquad(2)
\end{aligned}
and substituting the value in (1) results in a value of $M^{-1}$ (or should it be denoted by $M^{\dagger}$ ?) which gives $MM^{-1} \neq I$.
Granted M is ill-conditioned for this problem, but how do we go about inverting such matrices?
Any ideas/hints would be very helpful. Thanks in advance.
| While the Moore-Penrose inverse does indeed allow $MM^\dagger \neq I$, this is more of a "this is not always the case" statement. The actual defining statement is:
$$MM^\dagger M = M, ~~~~ M^\dagger MM^\dagger = M^\dagger$$
Which includes many use cases. For instance, this could mean that $MM^\dagger \neq I$ but $M^\dagger M = I$, and vice versa. However, it can also mean that $M = M^\dagger$ and $MM^\dagger = M$ (take for instance an identity matrix with some diagonal entries set to $0$).
With respect to the Schur complement, I think this boils down to 2 things: First, is $(A-BD^\dagger C)$ invertible? Second, is either $D^\dagger D = I$ or $DD^\dagger = I$, or the same but for $A$.
But this is when we only look for an $M^\dagger$ such that either $MM^\dagger = I$ or $M^\dagger M = I$. This does still mean that $M$ is ill-conditioned, but for singular matrices the question is never "find a unique matrix that yields the identity when multiplied with $M$", but rather "find any matrix that, when multiplied with $M$, minimizes the squared difference from the identity matrix".
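These identities are easy to see numerically (a sketch assuming `numpy`; `np.linalg.pinv` computes the Moore-Penrose inverse, and the random rank-2 matrix stands in for any singular $M$):

```python
import numpy as np

rng = np.random.default_rng(0)

# A generically rank-2 matrix inside a 3x3 space, hence singular.
M = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 3))
Mp = np.linalg.pinv(M)

# The defining Moore-Penrose identities hold...
assert np.allclose(M @ Mp @ M, M)
assert np.allclose(Mp @ M @ Mp, Mp)

# ...yet neither product is the identity, since M is singular.
assert not np.allclose(M @ Mp, np.eye(3))
assert not np.allclose(Mp @ M, np.eye(3))
```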
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4250871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Discrepancy of uniformly distributed random variates Edit: I'm not sure if this is better suited to StatsStackexchange.
I thought of this question after reading about quasi-random sequences (and quasi-random numbers) in the context of Monte Carlo integration.
Introduction and definitions
Quasi-random numbers are different from pseudo-random numbers, because they are deterministically constructed to have **low discrepancy**.
Discrepancy is defined as follows,
Let $x_1, x_2, \cdots, x_N$ be $N$ numbers in $I=[0,1]^s$, and let $E \subset I$. Define a function $A(E;N)$ that counts the number of indices $n$, $ 1\leq n\leq N$, with $x_n \in E.$ The discrepancy $D_n$ of the $N$ numbers in $I$ is then given by,
\begin{align*}
D_n=\sup_J \left| \frac{A(J;N)}{N}-\lambda_s(J) \right|
\end{align*}
Where, $J$ is any sub-interval of $I$, and $\lambda_s(J)$ denotes its $s$ dimensional Lebesgue measure.
The star discrepancy ($D_N^*$) is a more commonly used definition which is as follows.
\begin{align*}
D_N^*=\sup_{t \in [0,1]^s} \left | \frac{A([0,t); N)}{N} - \lambda_s([0,t)) \right|
\end{align*}
$D_n$ and $D_n^*$ are related by the following inequality
\begin{align*}
D_n^* \leq D_n \leq 2^s D_n^* \label{starrelation}
\end{align*}
Question
What is the discrepancy for a uniformly distributed random variate (or even pseudorandom numbers)?
Through the definition of star discrepancy it seems intuitively to me that a UD random variate probably has a high discrepancy. However I couldn't prove this, neither could I find any references online.
Update: There is a mistake in my bounty text (I misread the notation). The method described, to use the CDF of a Uniform Distribution times $N$ in place of $A([0,t),N)$ is correct and it gives the discrepancy of the uniformly distributed random variate as exactly $0$ for the single dimensional case. However the question to prove it for $s$-dimensional case is still open.
Thanks for any and all help!
| For $s=1$, you can use the Glivenko-Cantelli theorem to show that $D_N^*\to 0$ almost surely. If $x_1,x_2,\ldots,x_{N}$ are i.i.d. uniform on $[0,1]^s$, then each coordinate of the $x_i$'s is i.i.d. uniform on $[0,1]$. Therefore $D_N^* \to 0$ in this case as well (almost surely).
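For $s=1$ the star discrepancy is exactly the Kolmogorov-Smirnov statistic against the uniform CDF, and one can watch it shrink with $N$ (a sketch assuming `numpy`, using the standard closed form for sorted samples):

```python
import numpy as np

def star_discrepancy_1d(x):
    # D_N^* = max_i max(i/N - x_(i), x_(i) - (i-1)/N) for sorted samples.
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    return float(max(np.max(i / n - x), np.max(x - (i - 1) / n)))

rng = np.random.default_rng(42)
d_small = star_discrepancy_1d(rng.uniform(size=100))
d_large = star_discrepancy_1d(rng.uniform(size=100_000))

# Glivenko-Cantelli: the discrepancy of i.i.d. uniforms tends to 0.
assert d_large < d_small
```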
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4251056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Coin tosses, A wins if HTT, B wins if TTH, is this fair? I found this riddle on the web so I am not sure of what it is actually asking. I think it is saying:
A and B toss the coin and the game ends whenever the following two exact combinations appear: HTT (A wins) and TTH (B wins).
My approach:
I am assuming that it depends on the fact that the expected number of flips $E(n)$ of having a T followed by a H is not equal to the $E(n)$ of having a T followed by a T.
So no, the game is not fair.
$E(TT) = 1 + P(T)E(TT|T) + P(H)E(TT|H) $
$E(HT) = 1 + P(H)E(HT|H) + P(T)E(HT|T) $
Following this reasoning and solving the equations:
$E(TT) = 6$ and $E(HT) = 4$
From here I don't know how to calculate the expected value though. Suggestions?
| The key idea here is to "think backwards". Let's start by assuming $B$ wins the game, which means at some point he throws the pattern "$TTH$". Now think backwards: what could the throw just before that pattern have been? It can't be a head, since otherwise we would have "$HTT$", which gives the victory to $A$. So we have "$TTTH$"; again, what could the throw before that be? For the same reason, it can only be a tail. Repeating this induction, we get a sequence "$TTT\ldots TTH$", which means that in the first two throws $B$ must throw two consecutive tails, otherwise he has no chance of winning; this has probability $\frac14$. What if the first two throws are not $TT$? Then $A$ must win, since $B$ is definitely going to lose; $A$ can relax a bit, drinking a cup of tea while throwing, just waiting for his "$HTT$" to happen (it happens eventually, with probability $1$). So the chance of $A$ winning is $\frac34$ while $B$'s is $\frac14$, and the game is unfair.
(To make it clear why the probability that B wins is $\frac14$, you can sum up a geometric series)
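A Monte Carlo simulation of the game bears this out (a hypothetical sketch, not part of the original answer):

```python
import random

def play(rng):
    seq = ""
    while True:
        seq += "H" if rng.random() < 0.5 else "T"
        if seq.endswith("HTT"):
            return "A"
        if seq.endswith("TTH"):
            return "B"

rng = random.Random(0)
trials = 100_000
a_wins = sum(play(rng) == "A" for _ in range(trials))

# A wins with probability 3/4.
assert 0.74 < a_wins / trials < 0.76
```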
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4251197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Cauchy-Riemann equations and required continuity of derivatives So I just read that for any analytic function, the Cauchy-Riemann equations will hold. However, the reverse, i.e. "Cauchy-Riemann equations hold $\implies$ function is analytic", is supposedly only true if the partial derivatives are continuous.
What would be an example though? I cannot think of any function, where the Cauchy-Riemann equations would hold, while not being analytic.
Can someone please help me out?
| Take any analytic function defined on $\mathbb C$. Now remove all the parts except those which are on the coordinate axes, so only a kind of cross remains. Replace the removed parts by some horribly discontinuous mess, such that the new function is also discontinuous at $0$. The function obtained this way will still be partially differentiable at $0$, and the partial derivatives will still be the same as those of the original function, since the partial derivatives at $0$ only depend on how the function behaves on the coordinate axes, which we didn't change.
A concrete example:
$$f(z)=\begin{cases}0&z\textrm{ is on the coordinate axes}\\0&z\textrm{ has both rational imaginary and real part}\\1&\textrm{otherwise}\end{cases}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4251338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Basis vectors of an arbitrary plane I was trying to find the orthonormal basis vectors $\hat{x}$ and $\hat{y}$ in an arbitrary plane in $\mathbb{R}^3$.
The thing is, my plane is tangent to this surface:
$$f(x, y) = -\frac{k}{\sqrt{x^2 + y^2}}$$
Finding out the plane:
$$\boldsymbol \nabla f(x, y, z) \cdot (x-x_0, y-y_0, z - z_0) = \frac{\partial f}{\partial x} (x-x_0) + \frac{\partial f}{\partial y} (y-y_0) + \frac{\partial f}{\partial z} (z-z_0) = 0$$
Then I tried to do "reverse engineering" to the way of creating a plane given two vectors that are contained in this plane.
$$\hat{x} \times \hat{y} = \frac{\boldsymbol \nabla f}{||\boldsymbol \nabla f||}$$
I have the value of $\boldsymbol \nabla f$ but, I don't have the values of $\hat{x}$ and $\hat{y}$, but I know some conditions they must obey:
$$\hat{x} \cdot \hat{x} = 1$$
$$\hat{y} \cdot \hat{y} = 1$$
$$\hat{x} \cdot \hat{y} = 0$$
Also, these orthonormal basis vectors must depend on the position of a point:
$$\vec{r} = (x_0, y_0, z_0)$$
So I should rewrite $\hat{x}$ as $\hat{x}(\vec{r})$ and $\hat{y}$ as $\hat{y}(\vec{r})$.
But, bad luck for me, when I expand the expressions, I get a total of 6 equations to solve. My question is: is there any method to compute these vectors faster or in a more efficient way?
| I think the key issue in your calculation is that $\nabla f$ is not a vector in $\mathbb{R}^3$ orthogonal to the tangent plane (what does $\frac{\partial f}{\partial z}$ even mean, in your expression?). It is a 2D vector in the $xy$-plane that points in the direction where the tangent plane most steeply ascends.
There are two vectors that are obviously tangent to the plane:
$$\left(1,0,\frac{\partial f}{\partial x}\right)$$
$$\left(0,1,\frac{\partial f}{\partial y}\right)$$
and in general, for any vector $(v_x, v_y)$ in the $xy$-plane,
$$(v_x, v_y, \nabla f \cdot (v_x,v_y))$$
will lie in your tangent plane (the above two are special cases).
To find an orthonormal basis, compute two tangent vectors, and then orthogonalize them using Gram-Schmidt.
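Concretely, for the surface in the question (a sketch assuming `numpy`, with $k=1$ and an arbitrary base point, both hypothetical choices):

```python
import numpy as np

k = 1.0

def grad_f(x, y):
    # f(x, y) = -k / sqrt(x^2 + y^2)  =>  df/dx = k x / r^3, df/dy = k y / r^3
    r3 = (x**2 + y**2) ** 1.5
    return np.array([k * x / r3, k * y / r3])

x0, y0 = 1.0, 2.0
fx, fy = grad_f(x0, y0)

# Two tangent vectors, then Gram-Schmidt to orthonormalize them.
t1 = np.array([1.0, 0.0, fx])
t2 = np.array([0.0, 1.0, fy])

e1 = t1 / np.linalg.norm(t1)
u2 = t2 - (t2 @ e1) * e1
e2 = u2 / np.linalg.norm(u2)

assert np.isclose(e1 @ e1, 1.0) and np.isclose(e2 @ e2, 1.0)
assert np.isclose(e1 @ e2, 0.0)
```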
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4251464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How many possible combinations? 4 x 4 grid of tiles (with caveats) I'm designing a board game, the surface of which is made up of 16 tiles, in a grid of 4x4 with 10 unique designs. 6 pairs of these are twins. The user can rotate and also rearrange the tiles and make their own configurations. I got curious about the number of possible permutations. NOTE that 2 of the 16 tiles have a circle design and so have a single state (looks the same when rotated), and one pair has a design that has only 2 states (90 degrees and 180 degrees look exactly like 270 and 360 degrees respectively). All tiles can appear anywhere in the grid. Any chance an equation exists for this or any way to calculate the number of possible permutations? Thanks!
| How many ways can we place down $16$ unique square tiles onto this grid, ignoring rotations? There are $16! = 20,922,789,888,000$ ways to do this. If we account for repetition of the tiles, there are $6$ pairs of $2$, which can be freely swapped, so $16!$ over-counts by a factor of $2!^6 = 64$. So, still ignoring rotations, there are
$$\frac{16!}{2!^6} = 326,918,592,000$$
possible placements into this grid.
Now, taking into account rotations, $12$ of the tiles have no symmetry whatsoever and can be rotated independently, while two more tiles have $180^\circ$ rotational symmetry. This adds a factor of $4^{12}\cdot 2^2 = 67,108,864$. This gives us the final count of:
$$\frac{16!}{2!^6} \cdot 4^{12}\cdot 2^2 = 16! \cdot 4^{10} = 21,939,135,329,599,488,000,$$
or just shy of $22$ quintillion possibilities.
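The count is quick to reproduce with Python's arbitrary-precision integers (a sketch):

```python
from math import factorial

placements = factorial(16) // 2**6     # 16 tiles, 6 interchangeable pairs
orientations = 4**12 * 2**2            # 12 free tiles, 2 with half symmetry

total = placements * orientations
assert placements == 326_918_592_000
assert total == factorial(16) * 4**10  # the simplified form
assert total == 21_939_135_329_599_488_000
```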
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4251584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How do you approach when completing the square? If $M = 3x^2 - 8xy + 9y^2 - 4x + 6y + 13$, where $x,y\in\mathbb R$, then $M$ must be:
a) positive $\qquad$ b) negative $\qquad$ c) $0$ $\qquad$ d) an integer
I somehow managed to figure it out by completing the square but in order to do so, it took me a lot of time and I'm not sure if every time I could solve such problems.
This whole expression can be written as:
$$ 2(x - 2y)^2 + (x - 2)^2 + (y + 3)^2$$
which implies $M$ is positive.
My point is sometimes I'm lucky and I could group them in squares but other times not.
Is there any particular technique/method which always works?
Secondly, I also want to know: what do you observe when completing the square?
| Without completing the square, you can also apply the following technique:
$$\begin{align} &3x^2 - 4x(2y+1)+ (9y^2 + 6y + 13-M)=0\\
\implies &\Delta_x=4(2y+1)^2-3(9y^2+6y+13-M)≥0\\
\implies &3M≥11y^2+2y+35\\
\implies &3M≥11 \left(y + \frac{1}{11}\right)^2 + \frac{384}{11}\\
\implies &3M≥\frac{384}{11}\\
\implies &M≥\frac{128}{11}>0.\end{align}$$
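The bound $M \ge \frac{128}{11}$ can be checked numerically on a grid (a sketch assuming `numpy`; the minimizing point $(x,y)=(\frac6{11},-\frac1{11})$ follows from setting both partial derivatives to zero):

```python
import numpy as np

def M(x, y):
    return 3*x**2 - 8*x*y + 9*y**2 - 4*x + 6*y + 13

xs = np.linspace(-5.0, 5.0, 401)
X, Y = np.meshgrid(xs, xs)
vals = M(X, Y)

# Minimum is 128/11 ≈ 11.636, attained at (x, y) = (6/11, -1/11).
assert vals.min() >= 128 / 11 - 1e-9
assert vals.min() < 128 / 11 + 0.05
```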
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4251809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Permutations of elements where some elements are of the same kind I want to find the number of permutations for element sets like AABC, AABBC, AABBCCFQQQ, etc. Element sets where there are distinct elements, whose placement relative to each other matters, but also identical elements (sub-sets of the element set), within which order doesn't matter. Put shortly, order matters within the element set, but not within the individual sub-sets.
In the case of AABC, there's four element spaces. If one is only directly moving the A-elements (and thus indirectly displacing the other elements), one gets six permutations. That makes sense because you're choosing two places out of four, and order doesn't matter: $\frac{4!}{(4-2)!2!}$, which is six. With the other elements, you're choosing one place out of four: $\frac{4!}{(4-1)!1!}$, which is four. So I thought, maybe the total number of permutations is all of these combinations added together?
AABC ...... Copy 1A
ABAC
ABCA
BAAC
BACA
BCAA ...... Copy 1B
BCAA ...... Copy 2B
CBAA
CABA
CAAB ...... Copy 1C
CAAB ...... Copy 2C
ACAB
AACB
AABC ...... Copy 2A
There are six copies, which divided by two yields three superfluous. Adding everything together, and subtracting three, gives eleven as the total number of permutations. This has led me to believe that this equation may give the answer. $$\frac{\text{No. elements}!}{(\text{No. elements} - \text{No. e}_1)! \ \ \text{No. e}_1!} + ... + \frac{\text{No. elements}!}{(\text{No. elements} - \text{No. e}_n)! \ \ \text{No. e}_n!} - n$$
If n is greater than 2, n being the number of distinct elements. e$_n$ is the nth element, and e$_1$ is the first element.
I've seen that when n = 2, there is no addition, but simply one combination. I think this may be because no. e$_1 = x$ and e$_2 = y$, $$\frac{(x+y)!}{(x+y-x)!x!} = \frac{(x+y)!}{(x+y-y)!y!}$$
Basically, by directly moving the A-elements through all of their combinations, one is indirectly displacing the B-elements through all of their combinations.
That's all I have, and it should give the answerer a rough notion of my level of combinatorics and math understanding. To conclude, I'm looking for an equation that gives the number of permutations with the input variables of the number of distinct elements, and the individual numbers of the element sub-sets.
EDIT:
I forgot to mention that I was looking for the permutations using all of the letters. Also, my brute force method missed ACBA.
| What you are looking for is an exponential generating function. For example, let's think of the example $AABBCCFQQQ$ which you gave. Assume that we want to find the number of all permutations of length $5$ consisting of these letters. Then you should write the exponential generating function of each of these letters separately.
For $A$ : $$\bigg(1 +\frac{x}{1} + \frac{x^2}{2!}\bigg)$$
For $B$ : $$\bigg(1 +\frac{x}{1} + \frac{x^2}{2!}\bigg)$$
For $C$ : $$\bigg(1 +\frac{x}{1} + \frac{x^2}{2!}\bigg)$$
For $F$ : $$\bigg(1 +\frac{x}{1} \bigg)$$
For $Q$ : $$\bigg(1 +\frac{x}{1} + \frac{x^2}{2!} +\frac{x^3}{3!}\bigg)$$
Then, find the expansion of their product and take the coefficient of $x^5$, multiplying it by $5!$; or just read off the coefficient of $\frac{x^5}{5!}$ directly.
expansion in this link
Then, $$5! \times \frac{59}{4} =1770$$ is the number of all permutations of length $5$ using these letters.
For example, $$3! \times \frac{109}{6}=109$$ is the number of all permutations of length $3$ using these letters.
link for exponential generating functions
NOTE: If you want the case where all letters are used, find the coefficient of $x^{10}$ and multiply by $10!$ in the given link.
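The EGF recipe can be verified against direct enumeration (a sketch; the polynomial multiplication below is just the product of the truncated series above, and the counts are compared rather than hard-coded):

```python
from math import factorial
from itertools import product

limits = {"A": 2, "B": 2, "C": 2, "F": 1, "Q": 3}

def brute(length):
    # Enumerate all words and keep those respecting the multiplicities.
    return sum(
        all(w.count(ch) <= cap for ch, cap in limits.items())
        for w in product(limits, repeat=length)
    )

def egf(length):
    # Multiply the truncated series (1 + x + ... + x^cap/cap!) and read
    # off the coefficient of x^length / length!.
    poly = [1.0]
    for cap in limits.values():
        new = [0.0] * (len(poly) + cap)
        for i, a in enumerate(poly):
            for j in range(cap + 1):
                new[i + j] += a / factorial(j)
        poly = new
    return round(poly[length] * factorial(length))

for n in range(0, 6):
    assert egf(n) == brute(n)
```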
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4251947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Finding a closed formula for a simple integer sequence sum I'm trying to compute the average path lengths of a path graph.
I made the following observations:
There are $n$ nodes and $n-1$ edges and:
$n-1$ paths of length $1$
$n-2$ paths of length $2$
$n-3$ paths of length $3$
...
$1$ path of length $n-1$
I'd like to compute this sum, so that I could then find the average by dividing it by $n(n-1)$. It seems very easy yet I'm having trouble finding a closed formula for :
$$
S = \sum_{i = 1}^{n-1}{i (n-i)}
$$
I feel like it's related to the binomial formula.
| The brute force approach is to split $S$ as
$$
S = \sum_{i=1}^{n-1} i(n-i) = \sum_{i=1}^n i(n-i) = n \sum_{i=1}^n i - \sum_{i=1}^n i^2
$$
and then use two well-known formulas: $1 + \dots + n = \frac{n(n+1)}{2}$ and $1^2 + \dots + n^2 = \frac{n(n+1)(2n+1)}{6}$.
(I changed the sum to end at $i=n$ instead of $i=n-1$ to make applying these formulas easier; since the $i=n$ term is $n(n-n)=0$, this doesn't affect the result.)
Alternatively, write $i(n-i) = (n-1) \binom i1 - 2 \binom i2$, split up the sum using this, and then use the formula $$\sum_{i=1}^{n-1} \binom ik = \binom n{k+1}.$$
We can also prove these formulas, but that's harder than simply applying them.
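Carrying the brute-force computation through gives the closed form $S=\frac{n^3-n}{6}$, which a short check confirms:

```python
def S_brute(n):
    return sum(i * (n - i) for i in range(1, n))

def S_closed(n):
    # n * n(n+1)/2 - n(n+1)(2n+1)/6 simplifies to (n^3 - n)/6
    return (n**3 - n) // 6

for n in range(1, 60):
    assert S_brute(n) == S_closed(n)
print(S_closed(10))  # 165
```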
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4252131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Evaluate $\tan(\frac{1}{2}\sin^{-1}\frac{3}{4})$ Evaluate $\tan(\frac{1}{2}\sin^{-1}\frac{3}{4})$
My attempt:
Method:- 1
Let $\sin^{-1}\frac{3}{4} = \theta \implies \sin\theta = \frac{3}{4}$ and $\theta \in [0,\frac{π}{2}]$
Therefore $$\cos\theta = \sqrt{1-\sin^2\theta} = \frac{\sqrt 7}{4}$$
Now $\tan(\frac{1}{2}\sin^{-1}\frac{3}{4}) = \tan\frac{\theta}{2} = \frac{\sin\frac{\theta}{2}}{\cos\frac{\theta}{2}} = \sqrt{ \frac{2\sin^2\frac{\theta}{2}}{2\cos^2\frac{\theta}{2}}} = \sqrt{\frac{1-\cos\theta}{1+\cos\theta}} = \frac{4-\sqrt7}{3}$
Method:-2
Let $\sin^{-1}\frac{3}{4} = \theta \implies \sin\theta = \frac{3}{4} \implies 2\sin\frac{\theta}{2} \cos\frac{\theta}{2} = \frac{3}{4} \implies
\sin\frac{\theta}{2} \cos\frac{\theta}{2} = \frac{3}{8}$
Now $\tan\frac{\theta}{2} = \frac{\sin\frac{\theta}{2}}{\cos\frac{\theta}{2}}$
Please help me in 2nd method. Thanks in advance.
| Your Method 2:
Since $$\sin\theta=2\sin\frac\theta2\cos\frac\theta2\\=2\tan\frac\theta2\cos^2\frac\theta2\\=\frac{2\tan\frac\theta2}{\sec^2\frac\theta2}\\=\frac{2\tan\frac\theta2}{1+\tan^2\frac\theta2}$$ and $$\sin\theta=\frac34,$$ therefore $$\frac{2\tan\frac\theta2}{1+\tan^2\frac\theta2}=\frac34\\3\tan^2\frac\theta2-8\tan\frac\theta2+3=0\\\tan\frac\theta2=\frac{4-\sqrt7}3.$$
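Both methods give $\tan\frac\theta2=\frac{4-\sqrt7}{3}$; a one-line numerical check (a sanity check, not a proof) confirms this:

```python
import math

theta = math.asin(3 / 4)
lhs = math.tan(theta / 2)        # tan((1/2) arcsin(3/4))
rhs = (4 - math.sqrt(7)) / 3     # the value derived above
assert abs(lhs - rhs) < 1e-12
print(lhs)  # ≈ 0.4514
```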
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4252212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Number of vacuously transitive relations A relation $R$ on a set $A$ is vacuously transitive if it is transitive but contains no three ordered pairs of the form $(x,y)$, $(y,z)$, $(x,z)$.
If $A$ has $n$ elements, what is the number of such relations on $A$?
I tried for some small sets. For the null set, it is $1$. For the singleton, it is $2$.
OEIS lists a few more terms here. It shows the value $7$ for $n=2$. I am not getting $7$ though.
| The OEIS entry that you mention has the reference
H. Sharp, Jr., Enumeration of vacuously transitive relations, Discrete
Math. 4 (1973), 185-196.
If you follow that up, you find an explicit listing of the relations for $n=2$ and $3$ on page 194. For $n=2$ they are, in matrix notation:
$$
\begin{bmatrix}0&0 \\ 0&0\end{bmatrix},\begin{bmatrix}1&0 \\ 0&0\end{bmatrix},\begin{bmatrix}1&0 \\ 0&1\end{bmatrix},\\
\begin{bmatrix}0&0 \\ 1&0\end{bmatrix},\begin{bmatrix}1&0 \\ 1&0\end{bmatrix},\begin{bmatrix}1&0 \\ 1&1\end{bmatrix},\begin{bmatrix}0&0 \\ 1&1\end{bmatrix}.
$$
so check which one(s) you may have missed.
About the definition of vacuously transitive
EDIT (after comments). Sharp's definition is a bit vague. For completeness here is his definition:
Definition 2.1. A transitive triple in a relation is a set of three ordered pairs $\{(s_i,s_j),(s_j,s_k),(s_i,s_k)\}$. A relation is
called vacuously transitive if it is transitive but contains no
transitive triple.
At face value this seems to disallow self-loops in the graph altogether: for if we have $(a,a)$ in the relation, isn't $T=\{(a,a),(a,a),(a,a)\}$ a transitive triple? Yet Sharp clearly allows self-loops.
After reading the definition several times, I think the key is "a set of three ordered pairs". The $T$ above is a set of only one ordered pair, so it is not a transitive triple. So loops per se are allowed in a vacuously transitive relation.
However, in the 2-element relation $\{(1,1),(1,2),(2,1)\}$ we do have a transitive triple (the full relation is such a triple!) so this is not a vacuously transitive relation.
I have to say the definition could have been clearer.
About isomorphism
Another thing to note is that Sharp is explicitly counting isomorphism classes of the relations (i.e. up to permutation of elements). For example, the second of his relations above is $\{(1,1)\}$. He is not counting $\{(2,2)\}$ separately, because it is isomorphic to the previous (the isomorphism is, of course, $f(1)=2, f(2)=1$).
The OEIS entry A003041 does not explicitly say "isomorphism", but since the numbers are from Sharp's paper and that is the only source cited (besides the printed Encyclopedia), we can deduce that the OEIS entry is meant to be about isomorphism classes. I will try to refine the entry. (UPDATE: The OEIS entry is now better.)
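To see the labeled-versus-isomorphism distinction concretely, a short brute-force enumeration for $n=2$ (using the "set of three ordered pairs" reading of Sharp's definition discussed above) finds $12$ labeled vacuously transitive relations but only $7$ up to isomorphism, matching the OEIS value:

```python
from itertools import product, permutations

def transitive(R):
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

def has_triple(R):
    # a transitive triple is a SET of three distinct pairs (i,j),(j,k),(i,k)
    for i, j, k in product(range(2), repeat=3):
        t = {(i, j), (j, k), (i, k)}
        if len(t) == 3 and t <= R:
            return True
    return False

pairs = list(product(range(2), repeat=2))
rels = [frozenset(p for p, keep in zip(pairs, bits) if keep)
        for bits in product((0, 1), repeat=4)]
vac = [R for R in rels if transitive(R) and not has_triple(R)]

def canon(R):
    # minimum over relabelings -> isomorphism-class representative
    return min(tuple(sorted((pi[a], pi[b]) for (a, b) in R))
               for pi in permutations(range(2)))

classes = {canon(R) for R in vac}
print(len(vac), len(classes))  # 12 labeled relations, 7 up to isomorphism
```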
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4252280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Solving system of nonlinear equations by substitution I encountered this system of nonlinear equations:
$$\begin{cases}
x+xy^4=y+x^4y\\
x+xy^2=y+x^2y
\end{cases} $$
My ultimate goal is to show that this has solutions only when $x=y$. I didn't find any straightforward method for solving this. But then I came up with the following solution.
First, if $x=0$, then clearly $y=0$ and for the solution we need to
have $x=y$.
Then, assume that $x\ne 0$. Therefore there exists a real number $t$
s.t. $y=t x$. By substituting this to the equations, we find (by
comparing the coefficients) that $t=1$ and therefore $x=y$.
Therefore the system has only solutions of form $x=y$, and every pair
(x, y=x) is a solution.
So is this kind of method OK? If I checked the case $x=0$ separately?
| The approach is fine, but since you did not show us your computations, I cannot tell you whether or not the full solution is correct.
Here's how I would do it. Note that\begin{align}x+xy^2=y+yx^2&\iff x-y=yx^2-xy^2\\&\iff x-y=xy(x-y)\end{align}and so if $x\ne y$, $xy=1$. But (still assuming that $x\ne y$)\begin{align}x+xy^4=y+yx^4&\iff x-y=xy(x^3-y^3)=xy(x-y)(x^2+xy+y^2)\\&\iff1=x^2+1+y^2\text{ (since $xy=1$ and $x-y\ne0$)}\\&\iff x^2+y^2=0\\&\iff x=y=0.\end{align}But we were assuming that $x\ne y$. So, there is no solution with $x\ne y$.
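The two factorization steps used here can also be verified mechanically; the sketch below checks them as polynomial identities on an integer grid and confirms that, on that grid, every solution of the system has $x=y$:

```python
from itertools import product

# the two factorizations used above, checked pointwise on a grid
for x, y in product(range(-5, 6), repeat=2):
    assert x + x*y**2 - (y + y*x**2) == (x - y) - x*y*(x - y)
    assert x + x*y**4 - (y + y*x**4) == (x - y) - x*y*(x - y)*(x**2 + x*y + y**2)

# on that grid, the only solutions of the system are x = y
sols = [(x, y) for x, y in product(range(-5, 6), repeat=2)
        if x + x*y**4 == y + y*x**4 and x + x*y**2 == y + y*x**2]
assert sols and all(x == y for x, y in sols)
print(len(sols))  # 11
```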
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4252428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Compute the limit of $\displaystyle\lim_{n \to \infty} \int_0^1 \frac{e^{-nt}}{\sqrt{t}} \, dt$ I am attempting to compute the limit of
$$\displaystyle\lim_{n \to \infty} \int_0^1 \frac{e^{-nt}}{\sqrt{t}} \, dt$$
Proof Attempt
Recall that $\int_a^b f(x) \, dx = \displaystyle\lim_{n \to \infty} \int_a^b f_n(x) \, dx$ provided that each $f_n$ is continuous and $f_n \to f$ converges uniformly on $[a,b]$ (Thomson & Bruckner, p. 388).
Let $\langle f_n \rangle$ denote the sequence of functions defined by $f_n(t)= \frac{e^{-nt}}{\sqrt{t}}$ where $n \in \mathbb{N}$ and $t \in (0,1]$. Note that each $f_n$ is continuous on $(0,1]$.
Now, $f_n \to f$ pointwise, but the convergence is not uniform on $(0,1]$: each $f_n$ is undefined at $t=0$ and unbounded near it. But notice that $f_n \to f$ uniformly on $[a,1]$ for $a>0$. So Theorem 9.26 is applicable for $t \in [a,1]$. Thus, consider the following:
$$(A.) \hspace{0.5cm} \displaystyle\lim_{n \to \infty} \int_0^1 f_n(x) = \displaystyle\lim_{n \to \infty} \int_{\frac{1}{n}}^1 f_n(x)$$
since, for $n$ large enough, $\frac{1}{n} < \varepsilon$ for any $\varepsilon > 0$ (Archimedean Property). Since $\displaystyle\lim_{n \to \infty} \langle \frac{1}{n}: n \in \mathbb{N} \rangle = 0$, the above statement $(A.)$ holds.
NOTE: This proof is not complete.
The basic idea of my proof is to observe that $t=0$ breaks the functions $f_n$. So the intuition is to split up $\int f_n \, dt$ into some other integrals that satisfy uniform convergence so that we may use the result from Thomson & Bruckner.
Does this motivation for a proof seem in the right ballpark? Or is there a better means of computing this limit?
| You can find the limit without using uniform convergence and so many estimations.
Just make the change of variables $y=\sqrt{t}$ and another change of variables and using the fact that $e^{-y^2}$ is positive, the integral $I_n$ becomes: $$0 \leq I_n=\frac{2}{\sqrt{n}}\int_0^{\sqrt{n}}e^{-y^2}dy \leq \frac{2}{\sqrt{n}}\int_0^{\infty}e^{-y^2}dy \to 0$$
You just have to use that the integral $\int_0^{\infty}e^{-y^2}dy$ converges (and we also know its value)
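Numerically, the substitution $y=\sqrt{t}$ gives $I_n = 2\int_0^1 e^{-ny^2}\,dy = \sqrt{\pi/n}\,\operatorname{erf}(\sqrt{n})$, which indeed tends to $0$; a quick midpoint-rule check:

```python
import math

def I(n, steps=20000):
    # I_n = ∫_0^1 e^{-nt}/sqrt(t) dt = 2 ∫_0^1 e^{-n y^2} dy  (y = sqrt(t))
    h = 1.0 / steps
    return 2 * h * sum(math.exp(-n * ((k + 0.5) * h) ** 2) for k in range(steps))

for n in (1, 10, 100):
    exact = math.sqrt(math.pi / n) * math.erf(math.sqrt(n))
    assert abs(I(n) - exact) < 1e-6
assert I(100) < I(10) < I(1)   # decreases to 0 like sqrt(pi/n)
```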
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4252531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
How is the notation in this Hessian Matrix calculated? I'm sorry I'm not sure how to word this question. I've returned to school after a long break where I was working full time. In school I did calculus, but most of it seems to have left me.
I have started schooling again in data science and I am trying to compute a Hessian Matrix for a simple function. The function is:
$$f(x_1, x_2) = (x_1-1)^2 + 100(x_1^2-x_2)^2$$
I have calculated the gradient vector by taking the first order derivative
$$\nabla f(x_1, x_2) = \begin{bmatrix}
2(x_1-1) + 400x_1(x_1^2-x_2) \\
-200(x_1^2-x_2)
\end{bmatrix}$$
In attempting to calculate the Hessian Matrix I am confused by the notation of entry 1,2 and 2,1:
$$ \nabla^2f(x_1, x_2) = \begin{bmatrix}
\frac{\partial^2f}{\partial x_1^2} & \frac{\partial^2f}{\partial x_2 \partial x_1}\\
\frac{\partial^2f}{\partial x_1 \partial x_2} & \frac{\partial^2f}{\partial x_2^2} \\
\end{bmatrix}$$
For entry (1,1) and (2,2), I just retake the derivative of the above gradient vector
$$ \frac{\partial^2f}{\partial x_1^2} = \frac{\partial}{\partial x_1} [2(x_1-1) + 400x_1(x_1^2-x_2)] = 1200x_1^2-400x_2+2 $$
and
$$ \frac{\partial^2f}{\partial x_2^2} = \frac{\partial}{\partial x_2} [ -200x_1^2+200x_2 ] = 200 $$
Therefore the matrix as it stands is:
$$ \nabla^2f(x_1, x_2) = \begin{bmatrix}
1200x_1^2-400x_2+2 & \frac{\partial^2f}{\partial x_2 \partial x_1}\\
\frac{\partial^2f}{\partial x_1 \partial x_2} & 200 \\
\end{bmatrix}$$
How would I go about calculating $$ \frac{\partial^2f}{\partial x_2 \partial x_1} and \frac{\partial^2f}{\partial x_1 \partial x_2} $$?
Thank you for your time
Edit: Update \Delta to \nabla and \delta to \partial as suggested by top answer.
| The mixed derivatives can also be obtained by taking the derivative of entries of the gradient.
$$\frac{\partial^2 f}{\partial x_2 \ \partial x_1} = \frac{\partial}{\partial x_2} \frac{\partial f}{\partial x_1}
= \frac{\partial}{\partial x_2} [2(x_1-1) + 400 x_1(x_1^2-x_2)]
= -400x_1$$
$$\frac{\partial^2 f}{\partial x_1 \ \partial x_2} = \frac{\partial}{\partial x_1} \frac{\partial f}{\partial x_2}
= \frac{\partial}{\partial x_1} [-200(x_1^2-x_2)]
= -400x_1$$
Note that the order you take the derivatives does not matter if the function has continuous second partial derivatives.
Latex comments: generally the gradient and Hessian are denoted with $\nabla$ and $\nabla^2$ respectively (\nabla), not $\Delta$ (\Delta). Also, the symbol for partial derivatives is usually $\partial$ (\partial) rather than $\delta$ (\delta).
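As a sanity check, the three distinct Hessian entries can be compared against central finite differences; for this function at the sample point $(1, 2)$ the analytic values are $402$, $-400$, and $200$:

```python
def f(x1, x2):
    return (x1 - 1)**2 + 100 * (x1**2 - x2)**2

def hessian_fd(x1, x2, h=1e-4):
    # central finite differences for the 2x2 Hessian
    f0 = f(x1, x2)
    d11 = (f(x1 + h, x2) - 2 * f0 + f(x1 - h, x2)) / h**2
    d22 = (f(x1, x2 + h) - 2 * f0 + f(x1, x2 - h)) / h**2
    d12 = (f(x1 + h, x2 + h) - f(x1 + h, x2 - h)
           - f(x1 - h, x2 + h) + f(x1 - h, x2 - h)) / (4 * h**2)
    return d11, d12, d22

x1, x2 = 1.0, 2.0
d11, d12, d22 = hessian_fd(x1, x2)
assert abs(d11 - (1200 * x1**2 - 400 * x2 + 2)) < 1e-2   # 402
assert abs(d12 - (-400 * x1)) < 1e-2                     # -400
assert abs(d22 - 200) < 1e-2
```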
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4252645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
When is the intersection of a cone and a cylinder is planar? I'm trying to understand better the link between the algebraic and geometric interpretation of the ellipse, and I've wondered about the intersection of a right circular cylinder and a right circular cone.
On one hand, we know that the intersection of each with a plane (given that the plane is not too slanted) is an ellipse, and so we can tweak the planes and the radius of the cylinder to give the same ellipses, as can be seen here.
In this, they conclude that if the cone and cylinder's axes are parallel, and the cone axis fall within the cylinder, the intersection is a planar ellipse.
However, it does not work when dealing with the algebraic equations of the two:
$$\begin{cases}
(x-a)^2 + y^2 = b^2\\
x^2 + y^2 = cz^2
\end{cases}$$
from substituting the second into the first, we get $2ax=cz^2+a^2-b^2$, which is the equation of a parabolic cylinder. (The case $a=0$ gives a circle, and is sort of the trivial solution.)
Which approach is the correct one, and what's wrong with the other?
| As you correctly found, the intersection of a cone and a cylinder with parallel axes lies on a parabolic cylinder, whose axis is perpendicular to the axes of cone and cylinder. This rules out a planar intersection, save for the trivial case $a=0$.
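To make this tangible, one can sample the intersection curve (parametrized through the cylinder's circular cross-section) and check both that every point satisfies $2ax = cz^2+a^2-b^2$ and that four sample points are not coplanar. The values $a=\tfrac12$, $b=c=1$ below are an arbitrary choice with the cone's axis strictly inside the cylinder:

```python
import math

a, b, c = 0.5, 1.0, 1.0   # cone axis strictly inside the cylinder (|a| < b)

def point(theta):
    x = a + b * math.cos(theta)            # cylinder: (x-a)^2 + y^2 = b^2
    y = b * math.sin(theta)
    z = math.sqrt((x * x + y * y) / c)     # upper nappe of x^2 + y^2 = c z^2
    return (x, y, z)

pts = [point(t) for t in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]

# every intersection point lies on the parabolic cylinder 2ax = cz^2 + a^2 - b^2
for x, y, z in pts:
    assert abs(2 * a * x - (c * z * z + a * a - b * b)) < 1e-9

# but four of the points are already non-coplanar: nonzero triple product
p0, p1, p2, p3 = pts
u = [tuple(q[i] - p0[i] for i in range(3)) for q in (p1, p2, p3)]
det = (u[0][0] * (u[1][1] * u[2][2] - u[1][2] * u[2][1])
       - u[0][1] * (u[1][0] * u[2][2] - u[1][2] * u[2][0])
       + u[0][2] * (u[1][0] * u[2][1] - u[1][1] * u[2][0]))
assert abs(det) > 0.1
```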
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4252772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
} |
ABCD is a parallelogram. a straight line through A meets BD at X, BC at Y and DC at Z. Prove that AX:XZ = AY:AZ
ABCD is a parallelogram. a straight line through A meets BD at X, BC at Y and DC at Z. Prove that $$AX:XZ = AY:AZ$$
My Approach
I realised that since the question seems "data insufficient", it has got something to do with constructions. Seeing the "ratio", I thought that it must be related to Similar Triangles.
*
*Extend $AB$
*Drop perpendiculars from points $X$,$Y$,$Z$ on $AB$. Name the points of intersection as $P$,$Q$,$R$ respectively. Call $XP$ as $a$, $YQ$ as $b$ and $ZR$ as $c$.
*I simplified the L.H.S. and R.H.S. of the required proof and obtained this expression: $$\color{blue}{\frac{1}{a} = \frac{1}{b} + \frac{1}{c}}$$ which I was by no means able to prove.
*Then I assumed $\frac{AP}{AX} = \frac{AQ}{AY} = \frac{AR}{AZ} = k$ from the property of similar triangles. $$\frac{AX}{XZ} = \frac{a}{c-a} = \frac{(1-k^2){AX}^2}{(1-k^2)({AZ}^2 - {AX}^2)}$$ $$\frac{AX}{XZ} = \frac{AX^2}{(AZ+AX)XZ}$$ which is a contradiction as $AZ ≠ 0$.
Where is my fault and how can I solve this problem?
Addendum
When I saw that the antecedent and consequent were part of the same line segment, I did not realise that it can be solved without additional construction (because if $∆ABC \sim ∆A'B'C'$ we can write $\frac{AB}{A'B'} = \frac{BC}{B'C'}$ and since points $A$,$B$,$C$ cannot be collinear , so the terms of the ratio cannot be the part of the same straight line). Just for the sake of curiosity, I want to ask what algorithm is to be followed to find the required triangles that are to be proven similar?
| I am afraid you shook me on step 3. Here is what I did with it.
Construct the line through $B$ parallel to $XZ$. Let it meet $AD$ at $K$ and $CD$ at $L$. This makes $ABLZ$ and $KAYB$ parallelograms.
$KB=AY$ and $BL=AZ$ ... (opposite sides of a parallelogram)
$AX:XZ = KB:BL$ ... (concurrent transversals cutting parallel lines)
$AX:XZ = AY:AZ$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4252926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Probability to win the game with coin and dice I was reading a question and answer on this site here,
The problem:
This game is played with a fair coin and a die. First player flips a coin. If it turns out head ($H$), the player proceeds with tossing a die. If it turns out tail ($T$), the player proceeds with flipping a coin for the second time. The player wins if it gets head on the first tossing and 6 on the second or tails on both flips of coin. What is the probability of winning a game?
To solve this problem, I realized there are $8$ possible outcomes in this game: $$(H,1) , (H,2),(H,3),(H,4),(H,5),\color{\green}{(H,6)}, (T,H),\color{green}{(T,T)}$$
And as I demonstrated with the green colors there are only two cases that considered as a win. So I think the probability of winning is $\frac28=\frac14$.
But it has a contradiction
with the answer provided by the user @Ben:
First flip and second toss are independent events. So do first flip
and second flip in the case that first flip is tail.
So use multiplication:
P(head on the first flip and 6 on the second tossing)=P(head on the
first flip)*P(6 on the second
tossing)=$\frac{1}{2}*\frac{1}{6}=\frac{1}{12}$
P(tails on both flip)=$\frac{1}{2}*\frac{1}{2}=\frac{1}{4}$
Win the game if either one of the two events happens, so use addition:
P(winning the game)=P(head on the first flip and 6 on the second
tossing)+P(tails on both flip)=$\frac{1}{12}+\frac{1}{4}=\frac{1}{3}$
I'm confused, is my approach wrong? and why?
| The trick is that the various events are not all equally probable: $(H,1)$ happens $1/12$ of the time, while $(T,H)$ happens $1/4$ of the time.
If you do just want to count, you will have to come up with a list of cases that are all equally likely. In this case, this can be done by replacing the second coin flip with another die roll, and then accepting (say) any item from $\{4, 5, 6\}$ on the die roll in the tails case. This gives
$$(H,1), (H, 2), (H, 3), (H, 4), (H, 5), \color{red}{(H, 6)}, (T, 1), (T, 2), (T, 3), \color{red}{(T, 4)}, \color{red}{(T, 5)}, \color{red}{(T, 6)}$$
and it is then obvious that the result is $4/12 = 1/3$.
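The unequal weighting can also be made explicit by enumerating the outcomes of the two-stage experiment together with their exact probabilities:

```python
from fractions import Fraction

# enumerate the (outcome, probability) pairs of the two-stage experiment
outcomes = []
for first, p_first in (("H", Fraction(1, 2)), ("T", Fraction(1, 2))):
    if first == "H":
        for roll in range(1, 7):
            outcomes.append(((first, roll), p_first * Fraction(1, 6)))
    else:
        for second in ("H", "T"):
            outcomes.append(((first, second), p_first * Fraction(1, 2)))

assert sum(p for _, p in outcomes) == 1
win = sum(p for o, p in outcomes if o == ("H", 6) or o == ("T", "T"))
print(win)  # 1/3
```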
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4253049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is this function can be differentiable at (0, 0)? Let we have $$f(x, y) = \left\{\begin{matrix}
\log(1 + x\sin\sqrt[3]{\frac{y^4}{x}}) & x \neq 0\\
0 & x = 0
\end{matrix}\right.$$
I wanna know if this fuction can be differentiated in $(0, 0)$.
Here's what I have:
$f'_x = f'_y = 0$ because $f(x, 0) = f(0, y) = 0$
In this case we only need to prove that $f(x, y) = o(||h||)$ for $h \to 0$
$$\lim_{(x, y) \to (0, 0)} \frac{\log(1 + x\sin\sqrt[3]{\frac{y^4}{x}})}{\sqrt{x^2 + y^2}} = 0$$
But how can I find this limit?
| We have that by squeeze theorem
$$\left|x\sin\sqrt[3]{\frac{y^4}{x}}\right| \le |x| \to 0$$
therefore
$$ \frac{\log\left(1 + x\sin\sqrt[3]{\frac{y^4}{x}}\right)}{\sqrt{x^2 + y^2}}=\frac{\log\left(1 + x\sin\sqrt[3]{\frac{y^4}{x}}\right)}{ x\sin\sqrt[3]{\frac{y^4}{x}}}\frac{x\sin\sqrt[3]{\frac{y^4}{x}}}{\sqrt{x^2 + y^2}}$$
with $\frac{\log\left(1 + x\sin\sqrt[3]{\frac{y^4}{x}}\right)}{ x\sin\sqrt[3]{\frac{y^4}{x}}}\to 1$ therefore all boils down in the following
$$\lim_{(x, y) \to (0, 0)} \frac{x\sin\sqrt[3]{\frac{y^4}{x}}}{\sqrt{x^2 + y^2}}=0$$
indeed assuming wlog $x>0$ we have
$$\frac{x\sin\sqrt[3]{\frac{y^4}{x}}}{\sqrt{x^2 + y^2}} \le \frac{x\sqrt[3]{\frac{y^4}{x}}}{\sqrt{x^2 + y^2}}=\frac{\sqrt[3]{y^4x^2}}{\sqrt{x^2 + y^2}}\to 0$$
indeed by polar coordinates
$$\frac{\sqrt[3]{y^4x^2}}{\sqrt{x^2 + y^2}}=\rho \sqrt[3]{\sin^4 \theta \cos^2 \theta} \to 0$$
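A small numerical experiment (a sketch, using the real cube root for negative arguments) illustrates that $\max |f(x,y)|/\|(x,y)\|$ over a circle of radius $r$ shrinks roughly like $r$, consistent with the polar-coordinate bound:

```python
import math

def cbrt(v):
    # real cube root, valid for negative arguments too
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def f(x, y):
    return 0.0 if x == 0 else math.log(1 + x * math.sin(cbrt(y ** 4 / x)))

def max_ratio(r, samples=720):
    # max of |f| / ||(x, y)|| over the circle of radius r
    best = 0.0
    for k in range(samples):
        t = 2 * math.pi * k / samples
        x, y = r * math.cos(t), r * math.sin(t)
        if x != 0:
            best = max(best, abs(f(x, y)) / r)
    return best

assert max_ratio(1e-2) < max_ratio(1e-1)   # the ratio shrinks with r
assert max_ratio(1e-3) < 1e-2              # consistent with ratio = O(r)
```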
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4253247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove $\int_{0}^{\infty} \frac{(1-x^2) \, \text{sech}^2\left(\frac{\pi x}{2} \right)}{(1+x^2)^2}\, dx = \frac{\zeta(3)}{\pi}$? I was recently searching for interesting looking integrals. In my search, I came upon the following result:
$$ \int_{0}^{\infty} \frac{(1-x^2) \, \text{sech}^2\left(\frac{\pi x}{2} \right)}{(1+x^2)^2}\, dx = \frac{\zeta(3)}{\pi}$$
and I wanted to try and prove it.
Inspired by this answer by Jack D'Aurizio, I took the Weierstrass product for $\cosh(x)$ to obtain
$$
\cosh\left(\frac{\pi x}{2} \right) = \prod_{n \ge 1}\left(1 + \frac{x^2}{(2n-1)^2} \right)
$$
And by logarithmically differentiating twice we get
$$
\frac{\pi^2}{4}\text{sech}^2\left(\frac{\pi x}{2} \right) = \sum_{n \ge 1} \frac{4(2n-1)^2}{\left(x^2 + (2n-1)^2\right)^2} - \frac{2}{x^2 + (2n-1)^2}
$$
Which means we get
\begin{align*}
\int_{0}^{\infty} \frac{(1-x^2) \, \text{sech}^2\left(\frac{\pi x}{2} \right)}{(1+x^2)^2}\, dx & =\frac{4}{\pi^2}\sum_{n\ge 1} \int_{0}^{\infty} \frac{(1-x^2)}{(1+x^2)^2}\left( \frac{4(2n-1)^2}{\left(x^2 + (2n-1)^2\right)^2} - \frac{2}{x^2 + (2n-1)^2}\right)\, dx
\end{align*}
However, after this, I couldn't figure out how to evaluate the resulting integral.
Does anyone know how I could continue this method? Or alternatively, does anyone know another way in which the result can be proven? Thank you very much!!
Edit:
Per jimjim's request, I'll add that I found this integral on the Wikipedia article for $\zeta(3)$. I believe the reference is to this text where the following formula is given
$$
(s-1) \zeta(s) = 2\pi \int_{\mathbb{R}}\frac{\left(\frac{1}{2} + xi \right)^{1-s}}{\left(e^{\pi x} +e^{-\pi x} \right)^2}\, dx
$$
which for the case of $s=3$ reduces to the surprisingly concise
$$
\int_{\mathbb{R}}\frac{\text{sech}^2(\pi x)}{(1+2xi)^2} \, dx = \frac{\zeta(3)}{\pi}
$$
And I presume that one can modify the previous equation to get to the original integral from the question, but it is not apparent to me how this may be done.
Edit 2:
Random Variable has kindly posted in the comments how to go from $\int_{\mathbb{R}}\frac{\text{sech}^2(\pi x)}{(1+2xi)^2} \, dx$ to $ \int_{0}^{\infty} \frac{(1-x^2) \, \text{sech}^2\left(\frac{\pi x}{2} \right)}{(1+x^2)^2}\, dx$. Thank you very much!
| A solution by Cornel I. Valean
\begin{equation*}
\begin{aligned}
\int_{0}^{\infty} \frac{\displaystyle (1-x^2) \, \text{sech}^2\left(\frac{\pi x}{2} \right)}{(1+x^2)^2}\textrm{d}x&=-\frac{1}{2\pi}\lim_{n\to1/2}\frac{d^2}{dn^2}(2\psi(2n+1)-\psi(n+1)+\gamma)\\&=\frac{\zeta(3)}{\pi}
\end{aligned}
\end{equation*}
where the following fact is exploited, $\displaystyle \int_0^{\infty}\frac{\tanh(n\pi x)}{x(1+x^2)}\textrm{d}x=2\psi(2n+1)-\psi(n+1)+\gamma$. This last result is immediately obtained from the generalization found at the point $iii)$, Sect. $\textbf{1.40}$, page $26$, in (Almost) Impossible Integrals, Sums, and Series.
A note: Nice to see how the harmonic numbers (in a generalized form) play a great part here!
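For readers who just want to confirm the identity numerically, a Simpson's-rule evaluation (truncating at $x=40$, where the integrand has long since decayed like $e^{-\pi x}$) agrees with $\zeta(3)/\pi$ to high accuracy:

```python
import math

def zeta3(terms=200000):
    return sum(1.0 / k**3 for k in range(1, terms + 1))

def integrand(x):
    return (1 - x * x) / (math.cosh(math.pi * x / 2)**2 * (1 + x * x)**2)

# Simpson's rule on [0, 40]
N, L = 40000, 40.0
h = L / N
s = integrand(0) + integrand(L)
s += 4 * sum(integrand((2 * k - 1) * h) for k in range(1, N // 2 + 1))
s += 2 * sum(integrand(2 * k * h) for k in range(1, N // 2))
I = s * h / 3
print(I)  # ≈ 0.38262, i.e. zeta(3)/pi
```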
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4253378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 6,
"answer_id": 0
} |
Is rolling a 60 face die (D60) the same as rolling 10 6 faced ones (D6)? I understand that if you roll $10$ $6$-faced dice (D6) the number of different outcomes should be $6^{10}=60,466,176$, right? (meaning each particular sequence has probability $1/60,466,176$)
but if you are playing a board-game, you don't care about that right? what matters is the sum of the numbers in the end, so it should anyway be $1/60$ instead of whatever right? because its a sum not $10$ individual rolls.
so a $60$ face die (D60) should be the same as rolling all the dice right? if not can someone please explain? I've been looking all over the net for a compelling answer but there is none. I've seen a D60 can be used as $2$ D6 but also there is no explanation for this.
if not, what would be the number of faces needed for an equivalent single die that can replace $10$ D6. I know so far the D120 is the last possible die that can be made, but it doesn't matter, I just want to know too...
EDIT: Sorry I forgot to mention in some tabletop games Dice can have $0$, so you'd treat all $6$ sided dice as going from $0$ to $5$, making minimum value of $0$ possible in all $10$ rolls. But still that changes the maximum in the same way minimum used to be, now the max you get in 6 sided dice is $50$, while on the 60 sided die it would be $59$ if we apply a $-1$ to all results... it gets messy, and this detail doesn't really change much
| A simple argument is you can roll a $1$ using one $60$ sided die but you cannot using ten $6$ sided dice.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4253485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Why logarithm of a negative number to a negative base is not defined? I have studied that numbers of the form $N=a^x$ can be written as $\log_aN=x \;\forall N>0, a>0, a\not=1.$
Let's take $N=-8$ and $a=-2$ then $-8=(-2)^3$ and $\log_{(-2)}-8 = 3$.
Why there are restrictions on the number and the base even though $\log_{(-2)}-8 = 3$ is defined? Is there something I'm missing as I'm only learning the propaedeutics of mathematics?
| If $a>0$, then $a^x$ is defined for every real number $x$ and, if $y>0$, what we denote by $\log_ay$ is the only $x\in\Bbb R$ such that $a^x=y$.
But if $a<0$ then $a^x$ is defined only for some numbers $x$; for instance, $a^{1/2}$ is undefined. That's why we don't talk about $\log_ay$; it's undefined for most numbers $y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4253581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Solve the ODE $y'' = (y')^2$ I am asked in a past question paper to solve the following ODE: $$ y'' = (y')^{2}$$
To solve this, I began by equating $$y' = u$$
Differentiating both sides $w.r.t.x$ $$ \frac{d^{2}y}{dx^{2}} = \frac{du}{dx}$$
And therefore substituting back to out original equation I got:
$$ \frac{du}{dx} = u^{2}$$
Rearranging $$\frac{du}{u^{2}} = dx$$
Integrating $w.r.t.x$ I got
$$ \frac{-1}{u} = x+C$$ and substituting back $u$
$$ \frac{-dx}{dy} = x+C$$
$$ =x + \frac{dx}{dy} = C$$
$$=\frac{-dx}{x+c} = dy$$
$$ \log(\frac{1}{x+c}) = y$$
$$ -(x+c) = e^{y}$$
Is my answer and method correct?
EDIT As pointed out by @MtGlasser
We can re-write the D.E as $$ \frac{y''}{y'} = y'$$
Which is of the form :
$$(ln(y'))' = y'$$
Integrating on both sides we get
$$ ln (y') = y + c$$
Taking the exponential of both sides and rearranging, we get
$$ e^{-c}e^{-y}\, dy = dx$$
Integrating, we get the answer $$ x = C - e^{-(y+c)}$$
| You could even do it faster switching variables
$$y'' = (y')^{2} \implies -\frac {x''}{[x']^3}=\frac {1}{[x']^2}\implies x''=-x'$$
Reduction of order
$$p'=-p \implies p=x'=c_1\,e^{-y}\implies x=-c_1 \,e^{-y}+c_2\implies y=\cdots$$
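Writing the implicit solution explicitly as $y = D - \ln(C-x)$ (invert $x=-c_1e^{-y}+c_2$ with $D=\ln c_1$, $C=c_2$), one can check $y''=(y')^2$ by finite differences at a few sample points:

```python
import math

def residual(C, D, x, h=1e-5):
    # y(x) = D - log(C - x); returns |y'' - (y')^2| via central differences
    y = lambda t: D - math.log(C - t)
    y1 = (y(x + h) - y(x - h)) / (2 * h)
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return abs(y2 - y1 ** 2)

for C, D, x in ((3.0, 0.0, 1.0), (5.0, 2.0, -1.0), (1.0, -1.0, 0.5)):
    assert residual(C, D, x) < 1e-4
```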
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4253864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How do I solve for $x$ when the equation is as follows? I have tried to attack this problem in numerous ways, but I'm stumped. Initially I though I could extend the denominators to "fit them all" but it's a very tedious process. Surely there is a more effective approach! Could you point me in the right direction?
$$\begin{equation}\begin{aligned}
\frac{1}{x-1} - \frac{1}{x-2} &= \frac{1}{x-3} - \frac{1}{x-4}
\end{aligned}\end{equation}$$
| Hint
Simplify each side by combining the fractions
$$\frac{1}{x-1}-\frac{1}{x-2}=\frac{1}{x-3}-\frac{1}{x-4}$$
$$\frac{(x-2)-(x-1)}{(x-1)(x-2)}=\frac{(x-4)-(x-3)}{(x-3)(x-4)}$$
$$\frac{-1}{(x-1)(x-2)}=\frac{-1}{(x-3)(x-4)}$$
Can you take it from here?
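Continuing the hint: cross-multiplying the last equality gives $(x-1)(x-2)=(x-3)(x-4)$, which is linear in $x$. If you want to check the resulting value with exact arithmetic (warning: this does reveal the answer):

```python
from fractions import Fraction

x = Fraction(5, 2)   # from (x-1)(x-2) = (x-3)(x-4), i.e. 4x = 10
lhs = 1 / (x - 1) - 1 / (x - 2)
rhs = 1 / (x - 3) - 1 / (x - 4)
assert lhs == rhs == Fraction(-4, 3)
print(x)  # 5/2
```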
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4254003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How do we calculate the derivative of $f(x) = x^{3} - 3x$? Let the function :
$$
f(x) =x^{3}-3x
$$ with its domain in the real numbers.
Determine with the help of
$$
f'(x) \equiv \lim _{h\rightarrow 0}\dfrac{f\left( x+h\right) -f\left( x\right) }{h}
$$
the derivative $f'$ of function $f$.
I tried to plug the values of $f$ inside of $f'$:
$$
$$ f'(x) =\lim _{h \to 0}\frac{(x+h)^{3}-3(x+h) -(x^{3}-3x) }{h} $$
$$
I then tried to factorize but it didn't yield results.
I don't understand how I can expand $(x + h)^3$ properly.
| Look that if you expand your numerator you get:
$$\lim _{h\to 0} \frac{(x+h)^3-3(x+h)-(x^3-3x)}{h}=\lim _{h\to 0}(3x^2+3xh+h^2-3)=3x^2-3$$
Remember that: $(x+h)^3=x^3+3x^2h+3xh^2+h^3$ reading the comment of the user.
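The simplification can be double-checked numerically: the difference quotient equals $3x^2+3xh+h^2-3$, so it should approach $3x^2-3$ as $h$ shrinks:

```python
def f(x):
    return x**3 - 3 * x

def diff_quotient(x, h):
    return (f(x + h) - f(x)) / h

# (f(x+h) - f(x))/h = 3x^2 + 3xh + h^2 - 3  ->  3x^2 - 3 as h -> 0
for x in (-2.0, 0.0, 1.5):
    for h in (1e-3, 1e-5):
        assert abs(diff_quotient(x, h) - (3 * x * x - 3)) < 0.05
```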
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4254154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Taking the constant out in derivatives. How does it work and how do I notice that I am able to do it? I am currently trying to solve the derivative to this function:
$$
f(x)= \frac{x^3+9x+8}{2x^2}
$$
by using the quotient rule only and after simplifying to the best of my ability I get:
EDIT:
$$\frac{\frac{d}{dx}(x^3+9x+8)\cdot2x^2-(x^3+9x+8)\cdot\frac{d}{dx}(2x^2)}{(2x^2)^2} \Leftrightarrow $$
$$
\frac{(3x^2+9)\cdot2x^2-(x^3+9x+8)\cdot2(2x)}{2x^4}
$$
$$
\frac{6x^4+18x^2-4x^4+36x^2+32x}{2x^4}
$$
$$
\frac{2x^4+54x^2+32x}{2x^4}
$$
I just realized that there is a plus sign after $2x^4$ in the numerator, so that I can't just cancel them. Although, I have figured it out now. :)
$$
54x^2+32x
$$
However, when I input the function into Symbolabs calculator I get the following from finding the derivative:
$$
\frac{x^3-9x-16}{2x^3}
$$
And when taking a look at the steps (since I made a mistake), I could see that they took the constant out before even applying the quotient rule.
How do I acknowledge this? I don't understand how it's possible to "randomly" take out one of the constants and why it would be from the denominator as well. I understand every step but that.
Thanks in advance! I really appreciate it :)
| Any time an expression, abc, can be rewritten as (a)(bc), where a doesn't contain any variables, "a" is a constant. Here,
$$
\frac{x^3+9x+8}{2x^2}
$$
Can be rewritten and retains the same mathematical value as:
$$
\frac 1 2 \frac{x^3+9x+8}{x^2}
$$
Therefore, $\frac 1 2$ is a constant that can be pulled out.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4254318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is it safe to use $|f(x)|<g(x)\iff -g(x)<f(x)<g(x)$? Some people often use the following.
$$|f(x)|<g(x)\iff -g(x)<f(x)<g(x)$$
The weird part occurs when $g(x)<0$, for example:
$$10<f(x)<-10\to |f(x)|<-10$$
where the solution set is empty ($x\in\varnothing$) on both sides of the implication, so the equivalence still holds.
Is it safe to use $|f(x)|<g(x)\iff -g(x)<f(x)<g(x)$?
| It is correct. If you have $|f(x)|<g(x)$, then $g$ must be positive everywhere since $0 \le |f(x)|$. Conversely, if $-g(x)<f(x)<g(x)$, the again $g$ must be positive everywhere because we have $-g(x)<g(x)$ which is impossible if $g(x) \le 0$.
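This equivalence (including the vacuous case $g(x)\le 0$, where both sides are false) lends itself to a quick randomized property test:

```python
import random

def equivalent(fx, gx):
    # the two sides of the claimed equivalence, compared as booleans
    return (abs(fx) < gx) == (-gx < fx < gx)

random.seed(0)
assert all(equivalent(random.uniform(-5, 5), random.uniform(-5, 5))
           for _ in range(10000))
```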
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4254497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Integral representation for the heaviside step function I am studying many-particle quantum theory and I came across the following identity which is used to compute the Fourier transform of Green's functions, $$\theta(\pm \tau) = \mp \lim_{\eta \rightarrow 0^{+}} \frac{1}{2\pi i} \int_{-\infty}^{\infty} d\omega \frac{e^{- i \omega \tau}}{\omega \pm i \eta}.$$ I feel like I am missing something that should be totally obvious, but to me this expression should actually just have a minus sign in front, for both $\theta(\tau)$ and $\theta(-\tau)$. I think this is true because, $$\theta( \tau) = - \lim_{\eta \rightarrow 0^{+}} \frac{1}{2\pi i} \int_{-\infty}^{\infty}d\omega \frac{e^{- i \omega \tau}}{\omega + i \eta}$$ such that, $$\theta(-\tau) = - \lim_{\eta \rightarrow 0^{+}} \frac{1}{2\pi i} \int_{-\infty}^{\infty} d\omega \frac{e^{- i \omega (-\tau)}}{\omega + i \eta} = - \lim_{\eta \rightarrow 0^{+}} \frac{1}{2\pi i} \int_{-\infty}^{\infty} d(-\omega) \frac{e^{- i \omega \tau}}{-\omega + i \eta} $$
$$= - \lim_{\eta \rightarrow 0^{+}} \frac{1}{2\pi i} \int_{-\infty}^{\infty} d\omega \frac{e^{- i \omega \tau}}{\omega -i \eta}.$$ Can anyone point out to me what I am missing here? I keep messing up minus signs trying to follow derivations in my textbook and it keeps coming back to this exact problem.
| When substituting $w\rightarrow -w$ the limits of integration also change. We have
$$\theta(\tau) = - \lim_{\eta \rightarrow 0^{+}} \frac{1}{2\pi i} \int_{-\infty}^{\infty} d\omega \frac{e^{- i \omega \tau}}{\omega + i \eta}.$$
and setting $\tau\rightarrow -\tau$ gives
$$\theta(- \tau) = - \lim_{\eta \rightarrow 0^{+}} \frac{1}{2\pi i} \int_{-\infty}^{\infty} d\omega \frac{e^{ i \omega \tau}}{\omega + i \eta}$$
$$=-\lim_{\eta\rightarrow 0^+}\frac{1}{2\pi i}\int_{\color{red}{\infty}}^{\color{red}{-\infty}}d(-w)\frac{e^{-iw\tau}}{-w+i\eta}$$
$$=-\lim_{\eta\rightarrow 0^+}\frac{1}{2\pi i}\int_{-\infty}^{\infty}dw\frac{e^{-iw\tau}}{-(w-i\eta)}$$
$$=\lim_{\eta\rightarrow 0^+}\frac{1}{2\pi i}\int_{-\infty}^{\infty}dw\frac{e^{-iw\tau}}{w-i\eta}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4254620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $x + \arctan x =1$ has exactly one solution can some of you confirm (give feedback) to my solution to the equation in the question:
Firstly, let $f(x) = x + \arctan(x)$
In order to prove that we only have one unique solution, we first have to prove that $f(x)$ is injective.
We know that both $x$ and $\arctan(x)$ are injective functions, so does this imply that the sum of two injective functions is injective? Let's try to prove my statement.
Proof: Suppose we got a function $g(x)$ and $h(x)$, both of which are injective in their corresponding domains. Now let's define a new function $j(x) = g(x) + h(x)$. If a function is injective, it follows from the definition that if $x_1 = x_2 \Rightarrow j(x_1)=j(x_2)$. Thus:
$j(x_1) = g(x_1) + h(x_1)$ and
$j(x_2) = g(x_2) + h(x_2)$. But since $g(x)$ and $h(x)$ are injective, it follows from the definition that $j(x)$ is injective itself.
So, if the LHS is injective, it means that it's one to one. In other terms, for every $x$, it maps to one unique $y$, in this case 1.
However, we also have to study the range of $f(x)$. Since $f(x)$ is defined for all real numbers $x$, and the function is strictly increasing, we can conclude that the range contains all real numbers.
This, together with that the function is injective, concludes that the function must have exactly one solution.
Thanks!
| There are several flaws in your argument:
does this imply that the sum of two injective functions is injective?
No, it does not. $f(x) = x$ and $g(x) = -x$ is a simple counterexample.
If a function is injective, it follows from the definition that if $x_1 = x_2 \Rightarrow j(x_1)=j(x_2)$.
No: that implication holds for every function, so it says nothing about injectivity. “Injective” means the converse: $j(x_1)=j(x_2) \implies x_1 = x_2$.
Since $f(x)$ is defined for all real numbers x, and the function is strictly increasing, we can conclude that the range contains all real numbers.
No, you cannot. $f(x) = \arctan(x)$ is a counterexample.
But you can argue that the sum of two continuous and strictly increasing functions is again continuous and strictly increasing. If you find arguments $x$ and $y$ with $f(x) < 1$ and $f(y) > 1$, then the intermediate value theorem for continuous functions guarantees a solution of $f(x) = 1$. And a strictly increasing function is injective, so the solution is unique.
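As a quick numerical sanity check (a sketch, not part of the proof): since $f(x)=x+\arctan x$ is strictly increasing with $f(0)=0<1$ and $f(1)=1+\pi/4>1$, a plain bisection locates the unique root of $f(x)=1$ in $(0,1)$.

```python
import math

def f(x):
    return x + math.atan(x)

# f'(x) = 1 + 1/(1 + x^2) >= 1 > 0, so f is strictly increasing, and
# f(0) = 0 < 1 < 1 + pi/4 = f(1): the unique root lies in (0, 1).
lo, hi = 0.0, 1.0
for _ in range(80):          # plain bisection
    mid = (lo + hi) / 2
    if f(mid) < 1:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
```

The root comes out near $0.52$; by strict monotonicity it is the only one.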
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4254794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
First mean value theorem for integrals
Hello! The above picture is an excerpt from Zorich's book. I was able to solve part a) but do not know how to attack part b).
Parts a) and b) are very similar to the theorem from this book which I'll attach below. In this theorem in part a) the infimum and supremum are taken over the open interval $(a,b)$. In part b), if we take $g(x)\equiv 1$ then there is a point $\xi$ belonging to $[a,b]$, but in our problem this point should be in $(a,b)$.
I guess part b) can be deduced somehow from part a) but I do not know how to do it. I'd be very thankful for any help!
| By basic properties of integrals, we have
$$
m\leq \dfrac{1}{b-a}\int_a^b f(x)\, dx \leq M.
$$
On the other hand, by continuity of $f$ on $[a,b]$, there exist $x_m, x_M\in [a,b]$ such that $f(x_m)=m$ and $f(x_M)=M$. Therefore
\begin{equation}\tag{1}
f(x_m)\leq \dfrac{1}{b-a}\int_a^b f(x)\, dx \leq f(x_M).
\end{equation}
If $m=M$ then $f$ is constant and so any $\xi\in ]a,b[$ will do, so we assume $m<M$. In this case we can see that both inequalities in (1) are strict (for instance consider $f(x)-m\geq 0$, and if the integral of this is zero then $f\equiv m$ which is a contradiction). Assume that $x_m<x_M$ (if the opposite is true change the interval $]x_m, x_M[$ for $]x_M,x_m[$ below), then by the intermediate value theorem, and (1) with the strict inequalities, there exists $\xi\in ]x_m,x_M[$ such that
$$
f(\xi)= \frac{1}{b-a}\int_a^b f(x)\, dx.
$$
This is what you want.
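A concrete numerical illustration of the statement (with $f(x)=x^2$ on $[0,1]$ chosen as an example): the mean value is $\frac{1}{b-a}\int_a^b f = \frac13$, and the resulting $\xi = 1/\sqrt{3}$ indeed lies strictly inside $(a,b)$.

```python
import math

f = lambda x: x * x          # continuous on [a, b] = [0, 1]
a, b = 0.0, 1.0

# midpoint-rule approximation of the mean value (1/(b-a)) * integral of f
n = 200_000
mean = sum(f(a + (k + 0.5) * (b - a) / n) for k in range(n)) / n

# solve f(xi) = mean by bisection; here xi = 1/sqrt(3), strictly inside (a, b)
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < mean:
        lo = mid
    else:
        hi = mid
xi = (lo + hi) / 2
```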
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4254924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Partition of sets in topological spaces that preserves limit point for each member Recall that for a sequence $\{x_{n}\}$ in a metric space, if $\{x_{n}\} \rightarrow x$ while $x_{n} \neq x \ \ \forall n$, then $\{x_{2k}\} \rightarrow x$ and $\{x_{2k+1}\} \rightarrow x$. As a corollary, for an arbitrary limit point $a$ of a countable set $A$, there exists an n-partition $\{A_{1}, ... ,A_{n} \}$ of $A$ s.t. $a \in \overline{A_{i}}\ \forall i \in \{1,...,n\}$.
I wish to generalize this to arbitrary compact Hausdorff spaces and arbitrary sets (without assuming countability). More precisely, I wish to prove:
Given a compact Hausdorff space $X$, a subset $A$ of $X$, and a point $x \in \overline{A}-A$. There exists a partition $A = A_{1} \sqcup A_{2}$ such that $x \in \overline{A_{1}} \cap \overline{A_{2}}$.
Note that direct application of sequential methods fails since $X$ might not be first-countable.
| This is not true. For instance, let $X=\beta\mathbb{N}$, the Stone-Cech compactification of $\mathbb{N}$ with the discrete topology, let $A=\mathbb{N}$, and let $x$ be any point of $\beta\mathbb{N}\setminus\mathbb{N}$. Suppose $A=A_1\sqcup A_2$ is any partition. Then there is a continuous function $f:\mathbb{N}\to[0,1]$ which is $0$ on $A_1$ and $1$ on $A_2$, and this function extends continuously to $X$. If $x$ were in the closure of both $A_1$ and $A_2$, then the continuous extension would have to take both the value $0$ and the value $1$ at $x$.
In general, if $x\in\overline{A}$, the set of neighborhoods of $x$ intersected with $A$ forms a filter $F$ on $A$. If $B\subseteq A$, then $x\in\overline{B}$ iff the complement of $B$ is not in $F$. So, your question is whether there exists a set $A_1\subseteq A$ such that neither $A_1$ nor its complement is in $F$. Such an $A_1$ exists iff $F$ is not an ultrafilter.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4255068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to find the lower bound of $f(x, y) = -2xy+x^2y^2+x^2$? $f(x, y) = -2xy+x^2y^2+x^2, x\in \mathbb{R}, y\in \mathbb{R}$, how to find its lower bound?
Here are my thoughts, I don't know if it is rigorous.
$f(x, y) = -2xy+x^2y^2+x^2=(xy-1)^2+x^2-1$, as $(xy-1)^2 \geq 0$, $x^2\geq0$, and they can not get to $0$ at the same time, therefore $f(x, y) > -1$.
Therefore, the lower bound of $f(x, y)$ is $-1$.
| As you stated, the function is $f(x, y)=(xy-1)^2+x^2-1$, with $x^2\ge0$ and $(xy-1)^2\ge0$, so it is smallest when both squares are as small as possible.
Now $x^2$ is minimised when $x=0$, and at $x=0$ we have $(xy-1)^2=(-1)^2=1$, therefore $f(0,y)=0+1-1=0$.
But looking at $(x,y)=(\frac{1}{2},1)$, we get $f(\frac{1}{2},1)=(\frac{1}{2}-1)^2+(\frac{1}{2})^2-1=\frac{1}{4}+\frac{1}{4}-1=-\frac{1}{2}$, so minimising along $x=0$ won't give us the minimum of the function.
So maybe try along the curve where $(xy-1)^2$ is minimised (i.e. where $xy-1=0$) and see whether that also lets you make $x^2$ small.
Hopefully that helps.
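A short numerical sketch of why $-1$ is the infimum but is never attained: along the hyperbola $y=1/x$ the square term vanishes, so $f(x,1/x)=x^2-1\to-1$ as $x\to0$, while $f>-1$ everywhere.

```python
# f(x, y) = -2xy + x^2 y^2 + x^2 = (xy - 1)^2 + x^2 - 1.
# Along y = 1/x the square term vanishes and f(x, 1/x) = x^2 - 1,
# which approaches -1 as x -> 0, but the value -1 is never attained.
def f(x, y):
    return (x * y - 1) ** 2 + x ** 2 - 1

vals = [f(x, 1 / x) for x in (0.5, 0.1, 0.01, 0.001)]
```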
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4255187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Example of a continuous but not uniformly continuous function $f:Y\to Y$ on a complete convex bounded subset $Y$ of a metric space $X$. Does there exist a function on a complete convex bounded subset of a metric space (or normed linear space) which is continuous but not uniformly continuous?
I have tried to find such function but observed that first I need to consider non-compact closed bounded and convex subset. Since there is the convexity condition, I looked into Banach spaces. However, since the closed unit ball of a finite-dimensional Banach space is complete bounded convex but compact at the same time, there is no hope of getting continuous function on the closed unit ball which is not uniformly continuous.
On the other hand, in infinite-dimensional spaces the only functions I am familiar with, are bounded linear operators. However, every bounded linear operator is uniformly continuous.
Please help me with this problem. A detailed answer will be of very much help. Thanks in advance!
| Let $(c_0,\|\cdot\|)$ be the Banach space of all real sequences $x=(\xi_k)_{k=1}^\infty$ with limit $0$ endowed with the maximum norm, $e_n =(0,\dots ,0,1_n,0,0,\dots)$ for $n \in \mathbb{N}$, $B$ the closed unit ball and $f:B \to B$ defined as
$$
f(x)=(\xi_1,\xi_2^2,\xi_3^3,\xi_4^4\dots).
$$
Then $f$ is continuous (check) but not uniformly continuous: Let $\varepsilon=1/2$, $\delta \in (0,1)$ and $n \in\mathbb{N}$. Then
$\|e_n-(1-\delta)e_n\|=\delta$ and $\|f(e_n)-f((1-\delta)e_n)\|=1-(1-\delta)^n \ge 1/2$ for $n$ sufficiently big.
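A quick numerical check of the final estimate (the norms here are just the closed-form expressions from the argument above, so no infinite sequences are needed): the inputs $e_n$ and $(1-\delta)e_n$ stay $\delta$-close for every $n$, yet the output gap $1-(1-\delta)^n$ tends to $1$.

```python
# ||e_n - (1 - delta) e_n|| = delta in the sup norm, while
# ||f(e_n) - f((1 - delta) e_n)|| = 1 - (1 - delta)**n.
delta = 0.01

def output_gap(n):
    return 1 - (1 - delta) ** n

gaps = [output_gap(n) for n in (1, 10, 100, 500)]
```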
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4255334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Expected value of $\exp\big(-\frac{1}{X+1}\big)$ when X have binomial distribution I have a function $f(X)= e^{-\frac{1}{X+1}}$ where X is $\operatorname{Bin}(n,p)$.
Is there a way to simplify $E[f(X)]=\sum_{k=0}^{n}\binom{n}{k} p^{k}(1-p)^{n-k} e^{-\frac{1}{k+1}}$?
If not, is there a good approximation when $p \ll 1$, $n>100$?
| I highly doubt that that can be simplified. An approximation can be obtained by doing a Taylor expansion around the mean.
The second order approximation I get is
$$ E[f(X)] \approx f(\mu) + \frac12f''(\mu) \sigma^2
=\exp\left(- \frac{1}{1+\mu}\right)\left(1- \sigma^2 \frac{\mu + \frac12}{(1+\mu)^4}\right)$$
where $\mu = np$ , $\sigma^2=np(1-p)$
Some values for $n=100$
p exact approx
0.1 0.90623 0.90721
0.05 0.82528 0.82942
0.01 0.55403 0.55024
We could add more terms to the expansion. It's not clear, though, under which conditions this behaves as an asymptotic expansion (so that higher order terms can be neglected). For fixed (and small) $p$, the second term (and, one hopes, the next ones) becomes negligible for $n\gg 1/p$.
Furthermore, because $f(X)$ is concave, Jensen gives us the strict bound $E[f(X)] < \exp\left(- \frac{1}{1+\mu}\right)$
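The table above can be reproduced directly (a sketch: direct summation of the binomial pmf against the second-order expansion derived above).

```python
import math

def exact(n, p):
    # E[exp(-1/(X+1))] by direct summation over the Binomial(n, p) pmf
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) * math.exp(-1 / (k + 1))
               for k in range(n + 1))

def approx(n, p):
    # the second-order Taylor expansion around the mean derived above
    mu, var = n * p, n * p * (1 - p)
    return math.exp(-1 / (1 + mu)) * (1 - var * (mu + 0.5) / (1 + mu) ** 4)

e01, a01 = exact(100, 0.1), approx(100, 0.1)
```

For $n=100$, $p=0.1$ this recovers the first row of the table (exact $\approx 0.90623$, approximation $\approx 0.90721$).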
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4255516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In infinite dimensional vector spaces the complementary of a compact is connected Let $X$ be an infinite dimensional normed vector space and $K$ a compact subset of $X$. I want to show that $K^c$ is connected by path. I know that the unity sphere $\mathbb{S} := S(0,1)$ is connected by path and that up to an homeomorphism I can assume $K \subset \mathbb{B} := B(0,1)$. So I only need to find a path from $x$ to a certain $u \in \mathbb{S}$, for any $x \in K^c$. I was suggested to look at the easiest paths, $t \mapsto (1-t)x+tu$, but I fail to conclude. Any help would be appreciated.
| If none of the paths $t\mapsto (1-t)x+tu$ are contained in $K^c$, then for each $u\in\mathbb{S}$ there exists a point of the form $(1-t)x+tu$ in $K$. In other words, if you could define a continuous map $f:\mathbb{B}\setminus\{x\}\to\mathbb{S}$ that takes $(1-t)x+tu$ to $u$ (geometrically, "project away from $x$ until you hit $\mathbb{S}$") then $f$ would be surjective restricted to $K$ and so $\mathbb{S}$ would be compact, giving a contradiction.
Probably you can do a bit of work to prove this map $f$ is well-defined and continuous. You can avoid any of that work by ending the argument a different way, though. In the special case that $x=0$, then $f$ is just the map $v\mapsto v/\|v\|$ which is obviously well-defined and continuous. So, if you take an arbitrary point $x\in K^c$ and translate it to $0$, this shows that there exists a path in $K^c$ from $x$ to some point in any given sphere centered around $x$. In particular, taking that sphere to have sufficiently large radius, this gives a path from $x$ to a point that is outside of the ball $\mathbb{B}$. Since $X\setminus\mathbb{B}$ is path-connected, this completes the proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4255618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to find the value(s) of $a$ and $b$ such that the following system of equations... Find the value(s) of $a$ and $b$ such that the following system of equations,
$\begin{cases}x - 2y = a \\
3x - 6y = b\end{cases}$
(1) is inconsistent. (2) Which values of a and b make this system consistent and further, (3) what can you say about the number of solutions to the system.
I'm not sure if my process for part (1) of the question is correct.
I made an augmented matrix using the system of equations:
$$\begin{bmatrix}
1 & -2 &a \\ 3 & -6 &b
\end{bmatrix}$$
I manipulated the matrix two separate times to find when the system is inconsistent, for b --> R2 = R2 - 3R1,
$$\begin{bmatrix}
1 & -2 &a \\ 0 & 0 &b-3a
\end{bmatrix}$$
and a --> R1 = R1 - (1/3)R2
$$\begin{bmatrix}
0 & 0 &a - {b\over3} \\ 3 & -6 &b
\end{bmatrix}$$
So, the system is inconsistent when/if $b = 3a$ and $a=b/3$?
Some general advice on how to approach the problem is greatly appreciated, thank you!
|
Some general advice on how to approach the problem...
You can also write,
$$\begin{align}\begin{cases}x-2y=a\\3x-6y=b\end{cases}&\implies \begin{cases}3x-6y=3a\\3x-6y=b\end{cases}\\
&\implies 3a=b.\end{align}$$
We see that,
*
*If $b=3a$, then it is enough to solve $x-2y=a$ or $3x-6y=b$. Because, in this case we have
$$x-2y=a\iff 3x-6y=b$$
This implies, we have infinitely many solutions.
*
*If $b≠3a$, then we immediately get a contradiction. This means, in this case the solution doesn't exist.
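The dichotomy can be checked mechanically with a rank test (a sketch using NumPy; the system is consistent exactly when the coefficient matrix and the augmented matrix have the same rank):

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, -6.0]])

def consistent(a, b):
    # consistent iff rank(A) == rank([A | rhs])
    aug = np.column_stack([A, np.array([a, b])])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)
```

For example, $(a,b)=(2,6)$ (so $b=3a$) is consistent with infinitely many solutions, while $(a,b)=(2,5)$ is inconsistent.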
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4255861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Problem in the derivation of the relation between $\sin^{-1}(x)$ & $\cos^{-1}(x)$ My book's derivation:
Let, $\sin^{-1}x=\theta\implies \sin\theta=x$
Now, $\cos\theta=\sqrt{1-\sin^2\theta}=\sqrt{1-x^2}\implies\theta=\cos^{-1}\sqrt{1-x^2}$
So, $$\fbox{$\theta=\sin^{-1}x=\cos^{-1}\sqrt{1-x^2}$}$$
My problem:
My problem with this derivation is that $\sqrt{1-x^2}$ is always positive. So, the principal value of $\theta$ will always remain in the 1st quadrant. So, for negative values of $\cos\theta$, we will not be able to determine $\theta$ accurately. So, how will I find the correct value of $\theta$ when $\cos\theta$ is negative?
Example of my problem:
Let $\theta_1=\cos^{-1}(\frac{-4}{5})=\sin^{-1}(\frac{3}{5})$ is in the 2nd quadrant, so $\theta_1=143.1301^{\circ}$, but according to the derived formula $\theta_2=\cos^{-1}(\sqrt{1-(\frac{3}{5})^2})=36.869^{\circ}$, so we can see that $\theta_1\neq \theta_2$ when they should've been equal. This formula can't differentiate between when $\cos^{-1}(x)$ is in the 1st quadrant or when it is in the second quadrant. How do I find the value of $\theta$ correctly when $\cos\theta$ is negative or is the formula so that I just can't?
| In fact, to work with the inverse of the cosine function, we must devise a local concept that guarantees that such a function is bijective. Usually, for classic reasons, we define
$$\cos: [0,\pi] \longrightarrow [-1,1]$$
so that it makes sense to define $\cos^{-1}:[-1,1] \longrightarrow [0,\pi]$. That's why the principal value of the $\theta$ angle you mentioned is always in the first quadrant.
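The quadrant ambiguity in the question's example can be seen numerically (a sketch): the formula returns the supplement of the true angle, since $\cos^{-1}(-x)=\pi-\cos^{-1}(x)$.

```python
import math

theta1 = math.acos(-4 / 5)                        # true angle, 2nd quadrant
theta2 = math.acos(math.sqrt(1 - (3 / 5) ** 2))   # what the derived formula returns

deg1, deg2 = math.degrees(theta1), math.degrees(theta2)
# the formula collapses theta1 onto its supplement: theta1 + theta2 = pi
```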
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4256017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
nth time differentiability at a point if limit of nth derivative exists at that point Exercise:
Let the function $f$ be defined and continuous in an open interval $A$. Suppose that $c$ is a point in $A$ and that $f$ has derivatives up to order $m$ on the set $A \backslash\{c\}$. Suppose further that $\lim\limits _{x \rightarrow c} f^{(k)}(x)$ exists for $k=1, \ldots, m$ and the limits are finite numbers. Show that $f$ has derivatives up to order $m$ in all of $A$. Moreover $f^{(k)}(c)=\lim\limits _{x \rightarrow c} f^{(k)}(x)$, for $k=1, \ldots, m$
I know proof of the case when $k=1$. It has a lot of answers in this site, for example here
for complete induction step I was suggested the following expression:
$$ f^{(k)}(c)=\lim _{x \rightarrow c} \frac{f^{(k-1)}(x)-f^{(k-1)}(c)}{x-c}=\lim _{x \rightarrow c} \frac{\int_{c}^{x} f^{(k)}(t) d t}{x-c}=\lim _{x \rightarrow c} f^{(k)}(x)$$
but we haven't studied integrals yet. Is there any technique that avoids the integral in that argument?
Thank you in advance.
| We show first that $f'(c)$ exists and is equal to $\lim_{x\to c}f'(x)$. Let $g$ be the function on $A$ given by $g(x)=f'(x)$ for $x\neq c$ and $g(c)=\lim_{x\to c}f'(x)$, which makes $g$ continuous. If $h$ and $k$ are small but positive then by the fundamental theorem of calculus we have $$f(c+h)-f(c+k)=\int_{c+k}^{c+h}g(t)\,dt.$$ Letting $k\to 0$ gives us $$f(c+h)-f(c)=\int_{c}^{c+h}g(t)\,dt.$$ Choosing appropriate $k$ we can get the same for negative $h$. It now follows that $$\left|\frac{f(c+h)-f(c)}{h}-g(c)\right|=\left|\frac{1}{h}\int_{c}^{c+h}g(t)\,dt-g(c)\right|=$$$$\left|\frac{1}{h}\int_{c}^{c+h}\bigl(g(t)-g(c)\bigr)\,dt\right|\leq \frac{1}{|h|}\int_{c}^{c+h}|g(t)-g(c)|\,dt.$$ Since $g$ is continuous we can choose $h$ small enough so that $|g(t)-g(c)|<\varepsilon$ in this interval. We now get $$\frac{1}{|h|}\int_{c}^{c+h}|g(t)-g(c)|\,dt\leq\frac{1}{|h|}|h|\varepsilon=\varepsilon$$ and we are done. Now we can apply the same argument to $f'$ in place of $f$, and so on up to order $m$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4256139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Proving an identity relating to binomial coefficients This question is from the book Discrete mathematics and its applications, by K. Rosen, 6th chapter and 27th question.
Show that if n is a positive
integers $2C(2n, n+1) + 2C(2n, n) =
C(2n+2,n+1)$
I know how to prove this algebraically, applying the formula and this is how it is also given in hints. Since I want to understand counting arguments I am thinking to give a proof of this using combinatorial arguments (double counting).
The right side is the number of ways to select n+1 elements from 2n+2 elements. But I am not able to get an argument for the left side. I'm not getting how can I argue that I need to choose n or n+1 elements from 2n elements twice.
Any other (counting) approaches are also fine. Thank you.
| Here we show the validity of the binomial identity
\begin{align*}
\color{blue}{2\binom{2n}{n+1}+2\binom{2n}{n}=\binom{2n+2}{n+1}}\tag{1}
\end{align*}
by counting lattice paths. We consider lattice paths with horizontal steps $(1,0)$ and vertical steps $(0,1)$. The binomial coefficient
\begin{align*}
\binom{n}{k}\tag{2}
\end{align*}
gives the number of lattice paths of length $n$ from $(0,0)$ to $(k,n-k)$ consisting of $k$ horizontal $(1,0)$ steps and $n-k$ vertical $(0,1)$ steps.
Let's have a look at the graphic below. We see an $(n+1)\times(n+1)$ rectangle where we study lattice paths from the origin $(0,0)$ to $(n+1,n+1)$.
We observe
*
*The number of lattice paths from $(0,0)$ to $(n+1,n+1)$ is according to (2) equal to
\begin{align*}
\color{blue}{\binom{2n+2}{n+1}}\tag{3.1}
\end{align*}
*A lattice path from $(0,0)$ to $(n+1,n+1)$ has to cross either the blue point $(n,n)$ or one of the orange points $(n-1,n+1)$ or $(n+1,n-1)$ respectively. This way we can partition the lattice paths into three sets.
*
*Blue point $(n,n)$: The number of lattice path from $(0,0)$ to the blue point $(n,n)$ is $\binom{2n}{n}$. From the blue point $(n,n)$ there are two ways to go to $(n+1,n+1)$. Either by a horizontal step $(n,n)\to(n+1,n)\to (n+1,n+1)$ or by a vertical step $(n,n)\to(n,n+1)\to(n+1,n+1)$. So we have two possibilities giving a total of
\begin{align*}
\color{blue}{2\binom{2n}{n}}\tag{3.2}
\end{align*}
lattice paths.
*Orange point $(n+1,n-1)$: The number of lattice paths from $(0,0)$ to the orange point $(n+1,n-1)$ is
\begin{align*}
\binom{2n}{n-1}=\color{blue}{\binom{2n}{n+1}}\tag{3.3}
\end{align*}
From $(n+1,n-1)$ there is just one possibility to go to $(n+1,n+1)$: namely $(n+1,n-1)\to(n+1,n)\to(n+1,n+1)$. The total number of lattice paths from $(0,0)$ to $(n+1,n+1)$ crossing $(n+1,n-1)$ is therefore $\binom{2n}{n-1}=\binom{2n}{n+1}$.
*Orange point $(n-1,n+1)$: A symmetrical situation. The number of lattice paths from $(0,0)$ to the orange point $(n-1,n+1)$ is
\begin{align*}
\color{blue}{\binom{2n}{n+1}}\tag{3.4}
\end{align*}
Since there is just one possibility to go from $(n-1,n+1)$ to $(n+1,n+1)$: namely $(n-1,n+1)\to(n,n+1)\to(n+1,n+1)$, the total number of lattice paths from $(0,0)$ to $(n+1,n+1)$ crossing $(n-1,n+1)$ is $\binom{2n}{n+1}$.
Putting (3.1) to (3.4) together we see
\begin{align*}
\color{blue}{2\binom{2n}{n+1}+2\binom{2n}{n}=\binom{2n+2}{n+1}}
\end{align*}
and the claim (1) follows.
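The identity is also easy to confirm computationally for a range of $n$ (a sanity check, not a proof):

```python
from math import comb

# verify 2*C(2n, n+1) + 2*C(2n, n) == C(2n+2, n+1) for n = 1 .. 200
ok = all(2 * comb(2 * n, n + 1) + 2 * comb(2 * n, n) == comb(2 * n + 2, n + 1)
         for n in range(1, 201))
```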
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4256337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Versors (Vectors) and Trigonometry I recently remembered that I once asked my physics high school teacher whether unit vectors are somehow related to sines and cosines (or trigonometry in general). She replied that I was pretty lost and confused if I was asking that...
The thing is, even when she told me that, I didn't change my opinion, and continue thinking about it. Let me explain myself:
2D Vectors:
If you have a vector (in 2D, for the moment):
$$\vec{v} = x_v \hat{i} + y_v \hat{j}$$
You can represent it in the plane:
For example, this is the vector given above
$$\vec{v} = \hat{i} + 2 \hat{j}$$
And if you normalize the vector, you simply divide by the length of the vector itself.
This is when I had an idea (back then). I thought this is pretty similar to the $\sin$ and $\cos$ definition:
$$
\cos{x} = \frac{adjacent}{hypotenuse}
$$
$$
\sin{x} = \frac{opposite}{hypotenuse}
$$
But instead, I'm doing:
$$
\frac{v}{|v|} = \frac{\hat{i} + 2\hat{j}}{\sqrt{1^2 + 2^2}} = \frac{1}{\sqrt{5}}\hat{i} + \frac{2}{\sqrt{5}}\hat{j}
$$
The $x$ component is now similar to the cosine, because I have divided the adjacent side by the magnitude of the vector (which is like the hypotenuse). The same happens for the $y$ component.
3D Vectors:
So now, if you instead have a three-dimensional vector, you could also find the versor by doing the same thing:
$$
\hat{v} = \frac{x_v \hat{i} + y_v \hat{j} + z_v \hat{k}}{|v|}
$$
This was the concrete case I asked my teacher about, so after her answer I kept thinking about it; maybe it is not related to $\sin$ and $\cos$ but rather to spherical coordinates or something similar (because the final unit vector will always lie on the sphere of radius 1).
So, do these versors have a relation with sines and cosines, or not?
| Given any vector $$\underline{v}=a\underline{i}+b\underline{j}+c\underline{k},$$
The unit vector is $$\underline{\hat{v}}=\frac{a\underline{i}+b\underline{j}+c\underline{k}}{\sqrt{a^2+b^2+c^2}}$$
The coefficients $\frac{a}{\sqrt{a^2+b^2+c^2}}$ etc are sometimes referred to as direction cosines because these are the cosines of the angles that $\underline{v}$ makes with the coordinate axes.
I hope this helps.
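A small numerical illustration of the direction-cosine interpretation (an example vector chosen for convenience):

```python
import math

v = (1.0, 2.0, 2.0)                      # an example vector
norm = math.sqrt(sum(c * c for c in v))  # here norm == 3
unit = tuple(c / norm for c in v)        # the versor of v

# each component of the versor is the cosine of the angle between v
# and the corresponding coordinate axis (a "direction cosine")
angles = tuple(math.acos(c) for c in unit)
cos_check = tuple(math.cos(t) for t in angles)
```

The components of the versor square-sum to $1$, exactly as the direction cosines must.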
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4256489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Change the Cartesian integral into an equivalent polar integral. I started learning double integrals. I am trying to solve one problem but it's not going well :).
$$ \int_{-1}^{1} \int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} \ln\bigl(x^2+y^2+1\bigr)\, dx\,dy $$
then I wrote :
$$ -\sqrt{1-y^2} \leq x \leq \sqrt{1-y^2} \quad \text{and}\quad -1\leq y \leq1$$
now let $x = \sqrt{1-y^2} \Rightarrow x^2 + y^2 = 1$ so I thought the region was a circle with radius $1 \implies-1\leq r \leq 1$.
Also I change $ \ln(x^2+y^2+1) \rightarrow \ln(r^2+1) $
sum up:
$$\int_{?}^{?} \int_{-1}^{1} \ln\bigl(r^2+1\bigr) \,r\,dr\,d\theta$$
I don't know how to find values for $\theta$ if the region is circle then I think it should be
$- \frac{\pi}{2}$ and $\frac{\pi}{2}$ but in this case I am getting $0$
I know it's easy question but I can't understand how it works.
Thanks in advance
| Please note that in polar coordinates, $x = r \cos\theta, y = r \sin \theta$. As a standard practice in polar coordinates, you keep $r$ non-negative and you cover points in different quadrants by changing the value of $\theta \in (0, 2\pi)$
Here you have $ - \sqrt{1-y^2} \leq x \leq \sqrt{1-y^2}$ and $- 1 \leq y \leq 1$ which is the entire unit circle $x^2 + y^2 \leq 1$.
So in polar coordinates, $0 \leq r \leq 1$ and $0 \leq \theta \leq 2\pi$ and the integral becomes,
$ \displaystyle \int_0^{2\pi} \int_0^1 r \ln (1+r^2) \ dr \ d\theta$
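As a numerical check (a sketch): the substitution $u=r^2$ gives the closed form $\int_0^{2\pi}\int_0^1 r\ln(1+r^2)\,dr\,d\theta = \pi(2\ln 2 - 1)$, which a midpoint-rule approximation of the radial integral reproduces.

```python
import math

# closed form: int_0^{2pi} int_0^1 r ln(1 + r^2) dr dtheta = pi (2 ln 2 - 1)
closed_form = math.pi * (2 * math.log(2) - 1)

# midpoint rule for the radial integral; the theta integral contributes 2*pi
n = 200_000
h = 1.0 / n
radial = sum((k + 0.5) * h * math.log(1 + ((k + 0.5) * h) ** 2)
             for k in range(n)) * h
numeric = 2 * math.pi * radial
```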
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4256632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Question on Schur's first lemma From my written notes, (purposely not typeset via Mathjax this time), I have a proof for Schur's first lemma:
My only question is regarding the red circled part, the author claims that for any $\vec y$ such that $$\underline{\underline{M}}\vec y=\lambda \vec y$$ which immediately implies that the matrix, $\underline{\underline{M}}$ is proportional to the unit matrix $\underline{\underline{I}}$.
But from linear algebra, I was taught that in general: $$\underline{\underline{M}}\vec y= \lambda \vec y\implies \big(\underline{\underline{M}}-\lambda \underline{\underline{I}}\big)\vec y=\vec 0$$
The proof (in the image) above states that for this construction, $\underline{\underline{M}}\propto \underline{\underline{I}}$.
But I can find a matrix $\underline{\underline{M}}$ that is not proportional to the unit matrix for arbitrary $\vec y$. Here is one such example: consider $\underline{\underline{M}}=\begin{pmatrix}5&-1\\-1&5\\ \end{pmatrix}$. Solving $\mathrm{det}\big(\underline{\underline{M}}-\lambda \underline{\underline{I}}\big)=0$ shows that there is an eigenvalue $\lambda=4$ with eigenvector $\vec y_1=c_1\begin{pmatrix}1\\1\\ \end{pmatrix}$ where $c_1 \ne 0$, and an eigenvalue $\lambda=6$ with eigenvector $\vec y_2=c_2\begin{pmatrix}-1\\1\\ \end{pmatrix}$ where $c_2 \ne 0$.
I know that I am working in $\mathbb{R}^2$ and the author was working in $\mathbb{C}^n$, but the point is that I have found a matrix, $\underline{\underline{M}}$ that is not proportional to the unit matrix (for arbitrary $\vec y$). But according to the proof by the author, I shouldn't be able to find such a matrix. So what is the problem here?
| You are right that there are matrices $M$ which are not proportional to the unit matrix. The proof you included shows that if $D(g)M = MD(g)$ for every $g$, then $M$ cannot be a matrix like
$$\begin{pmatrix} 5 & -1 \\ -1 & 5 \end{pmatrix}.$$
Instead, $M$ MUST look like $\lambda I$ for some scalar $\lambda$. Why? Because in the proof, they show that if $Mx = \lambda x$ for SOME nonzero vector $x$, then in fact $Mx = \lambda x$ for EVERY vector $x$. This forces $M$ to take the form $\lambda I$.
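A small numerical illustration (a sketch, not part of the lemma's proof; the 2-dimensional irreducible representation of $S_3$ is used as an assumed example): computing all matrices $M$ with $D(g)M = MD(g)$ for the two generators shows the solution space is 1-dimensional and spanned by the identity.

```python
import numpy as np

# generators of the 2-dimensional (irreducible) real representation of S_3
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s], [s, c]])          # rotation by 120 degrees
S = np.array([[1.0, 0.0], [0.0, -1.0]])  # a reflection

def commute_map(D):
    # row-major vec: (D @ M - M @ D).flatten() == (kron(D, I) - kron(I, D.T)) @ M.flatten()
    I = np.eye(2)
    return np.kron(D, I) - np.kron(I, D.T)

A = np.vstack([commute_map(R), commute_map(S)])
_, sing, Vt = np.linalg.svd(A)
null_dim = int(np.sum(sing < 1e-10))     # dimension of the commutant
M = Vt[-1].reshape(2, 2)                 # spans the null space when null_dim == 1
```

The null space is 1-dimensional and its spanning matrix is a multiple of the identity, exactly as Schur's lemma predicts.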
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4256752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find value of $x^{5} + y^{5} + z^{5}$ given the values of $x + y + z$, $x^{2} + y^{2} + z^{2}$ and $x^{3} + y^{3} + z^{3}$ If$$x+y+z=1$$
$$x^2+y^2+z^2=2$$
$$x^3+y^3+z^3=3$$
Then find the value of $$x^5+y^5+z^5$$
Is there any simple way to solve this problem? I have tried all my tricks: multiplying two equations, substituting $z=1-x-y$, but things got messy and nothing seemed to work out.
| Here is yet another way, in case you are interested:
$$(x+y+z)^2=1\implies\sum x^2+2\sum xy=1\implies \sum xy=-\frac12$$
Therefore the polynomial equation whose roots are $x,y,z$ takes the form:
$$t^3-t^2-\frac12t+c=0$$
Summing this equation for each of the roots,
$$\sum x^3-\sum x^2-\frac12\sum x +3c=0$$
$$\implies3-2-\frac12(1)+3c=0\implies c =-\frac16$$
Multiplying the polynomial by $t$ and summing again,
$$\sum x^4-\sum x^3-\frac12\sum x^2-\frac16\sum x=0$$
$$\implies\sum x^4=3+\frac12(2)+\frac16(1)$$
$$\implies \sum x^4=\frac{25}{6}$$
Multiplying the polynomial by $t^2$ and summing again,
$$\sum x^5-\sum x^4-\frac12\sum x^3-\frac16\sum x^2=0$$
$$\implies\sum x^5=\frac{25}{6}+\frac12(3)+\frac16(2)=6$$
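The chain of power sums can be confirmed numerically from the cubic derived above (a sanity check: $x,y,z$ are its three roots):

```python
import numpy as np

# x, y, z are the roots of t^3 - t^2 - t/2 - 1/6 = 0 derived above
roots = np.roots([1.0, -1.0, -0.5, -1.0 / 6.0])

power_sums = {k: complex(np.sum(roots ** k)).real for k in (1, 2, 3, 4, 5)}
```

This recovers $1, 2, 3, \tfrac{25}{6}, 6$ in order.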
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4256912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Calculating a line integral along a curve I am struggling with the following question;
how can I calculate the line integral $\int_C f(r) \, dr$, where $C$ starts at the point $(0,0,0)^\top$, and ends at the point $(a,a,a)^\top$ along the curve with $x=y=z=t$. The vector field is
$$
f(x,y,z) = \frac{-2}{x^2+y^2+z^2+a^2} \, (x,y,z)^\top.
$$
How do you solve this? ($(x,y,z)^\top$ being a column vector).
I did the following, but am not sure if it is right:
Thanks for any help I can get
| Please note that
$ ~ \vec F(x, y, z) = \frac{-2}{x^2+y^2+z^2+a^2} \, (x,y,z) ~$ is a conservative vector field because we have a potential function $g(x, y, z)$ such that $ \vec F(x, y, z) = \nabla g(x, y, z)$.
The potential function $g(x, y, z)$ is given by $g(x, y, z) = - \ln (x^2+y^2+z^2+a^2) \tag1$
By Fundamental Theorem of Line Integral,
$ \displaystyle \int_C \vec F \cdot dr = \int_C (\nabla g) \cdot dr = g(B) - g(A)$
where $A$ is the starting point and $B$ is the end point on the curve $C$.
In this case, $A = (0, 0, 0), B = (a, a, a)$. Plugging into $(1)$, the line integral is
$ - \ln (4a^2) + \ln (a^2) = - \ln 4$
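The same value drops out of a direct parametrisation of the curve (a numerical sketch with $a=1$ chosen for the check): with $r(t)=(t,t,t)$, $0\le t\le a$, the integrand is $\vec F(r(t))\cdot r'(t) = -6t/(3t^2+a^2)$.

```python
import math

# midpoint-rule approximation of int_0^a -6t / (3 t^2 + a^2) dt with a = 1
a = 1.0
n = 200_000
h = a / n
integral = sum(-6 * ((k + 0.5) * h) / (3 * ((k + 0.5) * h) ** 2 + a * a)
               for k in range(n)) * h
expected = -math.log(4)
```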
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4257076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Presentation of Grothendieck-Witt group $GW(\mathbb{F})$ in terms of generators and relations. Let $\mathbb{F}$ be a field, which for the sake of this discussion, is such that char $\mathbb{F} \neq 2$.
By Corollary 9.4 in Scharlau's Quadratic and Hermitian Forms, the Grothendieck-Witt group $GW(\mathbb{F})$ is generated by elements $\langle \alpha \rangle, \alpha \in \mathbb{F}^{\times}$, subject to the relations
*
*$\langle \alpha \rangle = \langle \alpha \beta^{2} \rangle$ for all $\alpha , \beta \in \mathbb{F}^{\times}$.
*$\langle \alpha \rangle + \langle \beta \rangle = \langle \alpha + \beta \rangle + \langle (\alpha + \beta)\alpha \beta \rangle$.
I understand the proof of this result as presented in Scharlau.
However, in Morel's $\mathbb{A}^{1}$ Algebraic topology over a field, lemma 2.9, he says that the second relation may be obtained from the first relation and the relation
$\langle \alpha \rangle + \langle -\alpha \rangle = 1 + \langle -1\rangle$.
I understand the motivation behind this relation (matrices of the form on the LHS are congruent to the hyperbolic plane when char $\mathbb{F} \neq 2$), but cannot formally derive the second relation using this and relation 1).
I am probably just missing a trick. Any help would be much appreciated!
| I think Morel's claim is not correct.
Consider the case $F=\mathbb{Q}_3$.
Let $M$ denote the free abelian group generated by $\langle\alpha\rangle$, $\alpha\in \mathbb{Q}_3^\times$.
We have $\mathbb{Q}_3^\times/(\mathbb{Q}_3^\times)^2=\{[1],[-1],[3],[-3]\}$, so the quotient of $M$ by the first relation is a free abelian group with basis $\langle 1 \rangle,\langle -1\rangle, \langle 3 \rangle,\langle -3\rangle$.
If Morel's claim is correct, then $\operatorname{GW}(F)$ must be the quotient of this group by the relation
$$
\langle 3\rangle + \langle -3\rangle = \langle 1\rangle + \langle -1\rangle.
$$
However, we have
$$
\langle 3\rangle + \langle 3\rangle\neq \langle 6\rangle + \langle 54\rangle \quad( = \langle -3\rangle + \langle -3\rangle)
$$
in this group, so the second relation does not hold.
On the other hand, it is true that the first and the second relation imply $\langle \alpha\rangle + \langle -\alpha\rangle=\langle 1\rangle + \langle -1\rangle$.
Let $x=\dfrac{\alpha+1}{2}$ and $y=\dfrac{\alpha-1}{2}$, so that $x^2-y^2=\alpha$.
By the second relation, we have
$$
\langle x^2\alpha\rangle + \langle -y^2\alpha\rangle = \langle (x^2-y^2)\alpha\rangle + \langle -x^2y^2(x^2-y^2)\alpha^3\rangle.
$$
Using the first relation and $x^2-y^2=\alpha$, we can rewrite this as
$$
\langle \alpha\rangle + \langle -\alpha\rangle = \langle 1\rangle + \langle -1\rangle.
$$
Perhaps this is what Morel wanted to note.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4257264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Is curvature independent of "domain scaling"? $\def\vv#1{\mathbf{\vec{#1}}}
\def\unitvv#1{\mathbf{\hat{#1}}}
\def\derivative#1#2#3{\frac{d^{#3}#1}{{d#2^{#3}}}}$
I'm not sure if "domain scaling" is the correct word, but I had a problem:
Find the curvature of the curve $\vv r(t)= (9 + \cos 6t - \sin 6t)\unitvv i + (9 + \sin 6t + \cos 6t)\unitvv j+ 4 \unitvv k$
To do this, I used the formula that I had already derived for the curvature:
$\kappa := \frac{d\vv T}{ds} = \frac{\|{\vv{r'}\times\vv{r''}}\|}{{{\|\vv{r'}}\|}^3}$
It was an absolute mess with all sorts of factors of $6$ flying around the page, and I kept losing track of my work, so I decided to try and rescale by using $ u := \frac{t}{6}$ into:
$\vv r(u) = (9+\cos u - \sin u)\unitvv i + (9 + \sin u + \cos u) \unitvv j + 4 \unitvv k$.
I justified that the curvature would be the same by lemma:
Lemma 1: $ u(t) = \alpha t ; \alpha \in \mathbb{R \implies}\kappa (\vv r(t)) = \kappa(\vv r(u))$
Proof:
$\kappa (\vv r(u)) = \frac{\|{{\derivative{\vv r}{u}{}}\times{\derivative{\vv r}{u}{2}}}\|}{{{\|\derivative{\vv r}{u}{}}\|}^3} =
\frac{\|({\derivative{t}{u}{}\derivative{\vv r}{t}{} )\times((\derivative{t}{u}{})^2(\derivative{\vv r}{t}{2}) + (\derivative{t}{u}{2})(\derivative{\vv r}{t}{}))}\|}{{\|\derivative{t}{u}{}\derivative{\vv r}{t}{}\|}^3}
= \frac{\|({\alpha^{-1}\derivative{\vv r}{t}{} )\times(\alpha^{-2}(\derivative{\vv r}{t}{2}) + 0(\derivative{\vv r}{t}{}))}\|}{{\|\alpha^{-1}\derivative{\vv r}{t}{}\|}^3}
= \frac{\|\alpha^{-3}\|\|{{\derivative{\vv r}{t}{}}\times{\derivative{\vv r}{t}{2}}}\|}{{{\|\alpha^{-3}\|\|\derivative{\vv r}{t}{}}\|}^3}
= \frac{\|{{\derivative{\vv r}{t}{}}\times{\derivative{\vv r}{t}{2}}}\|}{{{\|\derivative{\vv r}{t}{}}\|}^3} = \kappa(\vv r(t))$
It got me the right answer of $\frac{\sqrt2}{2}$ in my example problem, but I can't seem to wrap my head around it conceptually from what my idea of $\kappa$ represents. Perhaps I made an error in my lemma that made this only work because the curvature was a constant? If not, maybe my idea of $\kappa$ is wrong and someone can give me a better intuition of it.
| "Scaling the domain"/reparametrizing the curve just changes the speed at which the curve is traced out. But the curvature at a point is a property of the shape that's traced out and doesn't care about the speed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4257619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Does $\sum_{n=1}^{\infty} (-1)^n \cdot \frac{n}{2^n} $ diverge or converge? I want to prove that $\sum_{n=1}^{\infty} (-1)^n \cdot \frac{n}{2^n}$ converges.
I was trying the ratio test:
$\sum_{n=1}^{\infty} a_n $ converges if $ q:=\lim_{n\to\infty} \left|\frac{a_{n+1}}{a_n}\right| < 1$ and diverges if $q >1$.
So we get: $\lim_{n\to\infty} \left|\frac{a_{n+1}}{a_n}\right| = \lim_{n\to\infty} \frac{\frac{n+1}{2^{n+1}}}{\frac{n}{2^n}} = \lim_{n\to\infty} \frac{n+1}{2^{n+1}} \cdot \frac{2^n}{n} = \lim_{n\to\infty} \frac{n2^n+2^n}{n2^{n+1}}$.
How do you calculate the limit of this function? I'm currently practicing for an exam and the solution just used this step:
$\lim_{n\to\infty} \frac{\frac{n+1}{2^{n+1}}}{\frac{n}{2^n}} = \frac{1}{2}(1+\frac{1}{n}) = \frac{1}{2} < 1$ therefore convergent. However I really don't see how you transform $\frac{\frac{n+1}{2^{n+1}}}{\frac{n}{2^n}}$ to $\frac{1}{2}(1+\frac{1}{n})$.
| When $|x|<1$ the geometric series $$(1+x)^{-1}=\sum_{k=0}^{\infty} (-x)^k$$ converges. Differentiating it w.r.t. $x$, we get
$$-(1+x)^{-2}=\frac{1}{x}\sum_{k=1}^{\infty} k(-x)^k \implies S=\sum_{k=1}^{\infty} (-1)^k k\,x^k=-x(1+x)^{-2}$$
Let $x=1/2$, then $$S=-2/9<\infty.$$
So the series converges.
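The value $-2/9$ (and hence convergence) is easy to sanity-check with partial sums; this small sketch is my own addition, not part of the original answer:

```python
# partial sum of sum_{n>=1} (-1)^n * n / 2^n; the tail beyond n = 60 is negligible
partial = sum((-1) ** n * n / 2 ** n for n in range(1, 60))
print(partial)  # ≈ -0.222222 = -2/9
```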
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4257785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
When is a real number algebraic? By definition - A real number is algebraic if it is a root of a non-zero polynomial equation with rational coefficients. What does non-zero polynomial equation mean?
Well, an equation f(x) = x -5, becomes zero when x = 5, so this is a zero polynomial equation. Is the definition saying that the equation should not equal zero in any case?
Can someone clarify this?
| A zero polynomial is a polynomial which gives $0$ for all values of $x$.
Basically $P(x)=0$ means that no matter what $x$ you put in there, the result will always be zero. For your example,
$P(x)=x-5$ is not a zero polynomial because there is only a finite number of $x$'s for which the polynomial will be $0$. In this case there is only one such $x$. And that is $5$.
And a real number is algebraic if there exists a non-$\color{green}{\textrm{constant}}$ polynomial with rational coefficients, such that the real number is one of the roots of the polynomial.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4257962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
$\lim_{x\to0} \frac{\ln(x+e)-1}{x}$ I want to calculate the limit $\lim_{x\to0} \frac{\ln(x+e)-1}{x}$.
Just putting in the $0$ we can see $\frac{\ln(e)-1}{0} = \frac{0}{0}$ so my first guess is to use "L'Hopital" rules.
$\lim_{x\to0} \frac{\ln(x+e)-1}{x} = \lim_{x\to0} \frac{\frac{1}{x+e}}{1} = \frac{1}{e}$.
edit: I think this is the correct solution, can anyone approve?
| This works. If you're interested, an alternative way to attack this problem is to recognize that the limit
$$\lim_{x\to 0}\frac{\ln(x+e)-1}{x}=\lim_{x\to 0}\frac{\ln(x+e)-\ln(e)}{x}$$
is the derivative of $\ln$ evaluated at $e$. You know that $\frac{d}{dx}\ln(x)=\frac{1}{x}$ for every positive $x$, so
$$\lim_{x\to 0}\frac{\ln(x+e)-1}{x}=\frac{1}{e}$$
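The answer $1/e$ can also be confirmed numerically via the difference quotient (a quick sketch; the step sizes below are arbitrary choices):

```python
import math

# the difference quotient (ln(h + e) - 1) / h should approach 1/e ≈ 0.3678794
for h in (1e-3, 1e-5, 1e-7):
    print((math.log(h + math.e) - 1) / h)
```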
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4258289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Maximum number of acute interior angles in a hexagon. Is it possible for a hexagon (in the plane, non-convex, non-overlapping) to have $5$ acute interior angles? It's possible for a pentagon or a hexagon to have $4$ acute interior angles. There are related questions on this site, but they are for general $n$-gons and no one seems to have a definitive answer. So what about just 6-gons?
Want to know what I tried? Fine, I wore out 3 Expo markers on my whiteboard trying stuff.
| Yes, it is possible to have $5$; see this figure for an example. You can easily formalize the argument once you see it; I have left that as an exercise.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4258477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is it possible to solve the D.E. $f''(x) + a(x) f'(x) + bf(x) = 0 $ for arbitrary $a(x)$? I have a second order differential equation with a coefficient which depends on the independent variable in a generally unspecified way. The equation is
$$f''(x) + a(x) f'(x) + b f(x) = 0 $$
I would like to obtain $f(x)$ for some boundary values.
For small $x$ the general solution of this equation should be close to
$$ f(x) = A \exp\Big[-\int_0^x \frac{b}{a(x')}dx'\Big],$$
whereas if $b=0$ then the solution is
$$ f(x) = A \int_0^x e^{-K(x)}dx + B,$$
where $K(x) = \int_0^x a(x')dx'.$
For arbitrary $x$ and $b$ I am not sure how to proceed. Can anyone suggest maybe a substitution for convert this equation into an easier problem? I was thinking $g(x) = f(x)/a(x)$ seemed rather natural but I cannot make progress with this one.
| $$f''(x) + a(x) f'(x) + b f(x) = 0 $$
For example :
If $a(x)=\frac{1}{x}$ the solution involves Bessel functions.
If $a(x)=e^x$ the solution involves confluent hypergeometric functions.
If $a(x)=\tan(x)$ the solution involves hypergeometric function and Meijer G-function.
Etc.
In many cases some special functions are involved in the solution, but in most cases no convenient standard special function is available.
Thus in the general case the solution cannot be expressed with a finite number of standard functions. One has to use infinite series (if possible) or, more commonly, solve the problem by numerical methods.
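To illustrate the last point, here is a minimal numerical sketch (classical RK4 on the equivalent first-order system; the test case $a(x)=2$, $b=1$, whose exact solution $f(x)=e^{-x}$ is known, is my own choice):

```python
import math

def solve_f(a, b, f0, fp0, x_end, n=1000):
    # RK4 for the system y' = v, v' = -a(x) v - b y, starting at x = 0
    h = x_end / n
    x, y, v = 0.0, f0, fp0
    deriv = lambda x, y, v: (v, -a(x) * v - b * y)
    for _ in range(n):
        k1 = deriv(x, y, v)
        k2 = deriv(x + h/2, y + h/2 * k1[0], v + h/2 * k1[1])
        k3 = deriv(x + h/2, y + h/2 * k2[0], v + h/2 * k2[1])
        k4 = deriv(x + h, y + h * k3[0], v + h * k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        x += h
    return y

# sanity check: f'' + 2 f' + f = 0 with f(0) = 1, f'(0) = -1 has f(x) = e^{-x}
print(solve_f(lambda x: 2.0, 1.0, 1.0, -1.0, 1.0), math.exp(-1))
```

The same routine works unchanged for any callable $a(x)$, which is exactly why the numerical route is the general fallback.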
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4258618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Relationship between Correlation Matrix and Covariance Matrix Given a matrix $A = (X_1,X_2,...,X_n)$
How is it that $A^TA$ is the correlation matrix where $\frac{1}{n}(A^TA)_{ij} = Corr(X_i,X_j)$?
I am confused because $\frac{1}{n}(A^TA)_{ij} = \frac{1}{n}\sum_{k=1}^nA_{ki}A_{kj}$
and $\frac{1}{n}\sum_{k=1}^nA_{ki}A_{kj} = E[X_iX_j]$
but how does $Corr(X_i,X_j) = E[X_iX_j]$ ?
I know that $Cov(X_i,X_j) = E[X_iX_j] - \mu_i\mu_j$
but $Corr(X_i,X_j) = \frac{Cov(X_i,X_j)}{(Var(X_i)Var(X_j))^{\frac{1}{2}}}$
Wouldn't this imply the following?
$$Cov(X_i,X_j) + \mu_i\mu_j = \frac{Cov(X_i,X_j)}{(Var(X_i)Var(X_j))^{\frac{1}{2}}}$$
Unless I am just attempting to skip some serious algebra here, I am not sure what I'm missing here...
| This is because
\begin{equation}
C_{X_i X_j} = \frac{Cov(X_i,X_j)}{(Var(X_i)Var(X_j))^{\frac{1}{2}}}
\end{equation}
is the correlation coefficient determined by dividing the covariance by the product of the variables' standard deviations, while the correlation is
\begin{equation}
Corr(X_i,X_j)= E[X_i X_j]
\end{equation}
If $X_i$ and $X_j$ have zero mean, this is the same as the covariance which is defined as
\begin{equation}
Cov(X_i,X_j)= E[(X_i-\mu_{X_i}) (X_j-\mu_{X_j})]
\end{equation}
with $\mu_{X_i}$ and $\mu_{X_j}$ the means of ${X_i}$ and ${X_j}$, respectively.
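A tiny numerical illustration of the distinction (the data values are made up for the example): for zero-mean columns, $\frac1n(A^TA)_{ij}$ equals the covariance, but the correlation coefficient still requires dividing by the standard deviations.

```python
# two zero-mean columns of n = 4 samples (hypothetical data)
X1 = [1.0, -1.0, 2.0, -2.0]
X2 = [2.0, -1.0, 1.0, -2.0]
n = len(X1)

exy = sum(a * b for a, b in zip(X1, X2)) / n       # (1/n)(A^T A)_{12} = E[X1 X2]
cov = exy - (sum(X1) / n) * (sum(X2) / n)          # equals exy since the means are 0
var1 = sum(a * a for a in X1) / n
var2 = sum(b * b for b in X2) / n
corr_coef = cov / (var1 * var2) ** 0.5

print(exy, cov, corr_coef)   # 2.25 2.25 0.9
```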
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4258783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $uu_x+u_y=-\frac{1}{2}u$ with characteristics I'm stuck trying to solve $uu_x+u_y=-\frac{1}{2}u$ satisfying $u(x,2x)=x^2$, this solution also needs to be in some parametrised form with $x(r,s), y(r,s),u(r,s)$.
So far I have this:
$\dfrac{dx}{ds}=u \longrightarrow x(r,s)=us+x_0$
$\dfrac{dy}{ds}=1 \longrightarrow y(r,s)=s+y_0$
$\dfrac{du}{ds}=-\dfrac{1}{2}u \longrightarrow u(r,s)=u_0e^{-s/2}$
If we have that $y(0)=0$ we can see that $y=s$, so we can replace each instance of $s$ with $y$. I know I will end up with an implicit function, but I don't know how to apply the $u(x,2x)=x^2$ condition from here. I was thinking about directly plugging in $u(x,2x)=u_0e^{-x}=x^2\Rightarrow u_0=x^2e^x$ and then after letting $x(0)=r$ (the initial curve) and some substitutions end up with $u=(2uy-x)^2e^{-x}e^{-y/2}$ but I don't think that's correct.
I feel like there's a step missing before I start doing that. I have only just been introduced to PDEs, so all of this is new, any help is welcome and I appreciate your patience with me.
As an aside, I know there is another question that is asking the exact same thing that I am, but they introduce some $\Phi$ and $F$ notations which I have never seen before and leaves me more confused than I initially was.
| Let us apply the method of characteristics, with initial condition $u(x_0, 2x_0) = x_0^2$:
*
*$\frac{\text d}{\text d s} y = 1$, letting $y(0) = 2x_0$ we know $y = 2x_0 + s$.
*$\frac{\text d}{\text d s} u = -\frac12 u$, letting $u(0) = x_0^2$ we know $u = x_0^2 e^{-s/2}$
*$\frac{\text d}{\text d s} x = u$, letting $x(0) = x_0$ we know $x = x_0 - 2 x_0^2 (e^{-s/2} - 1)$
The latter is rewritten
$$
x = x_0 - 2 u\, (1 - e^{y/2-x_0}) .
$$
Resolution w.r.t. $x_0$ yields the expression
$$
x_0 = W_n\left(-2u e^{y/2-2 u - x} \right) + 2 u + x
$$
where $W_n$ is the analytic continuation of the product log function, to be substituted in $ue^{y/2} = x_0^2 e^{x_0}$ to get the final implicit expression.
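One can sanity-check the characteristic solution by verifying that the invariant $u\,e^{y/2} = x_0^2 e^{x_0}$ (the relation the implicit solution is built from) holds along every characteristic. A small sketch (the sample values of $x_0$ and $s$ are mine):

```python
import math

def characteristic(x0, s):
    # the characteristic curves found above
    y = 2 * x0 + s
    u = x0**2 * math.exp(-s / 2)
    x = x0 - 2 * x0**2 * (math.exp(-s / 2) - 1)
    return x, y, u

for x0 in (0.3, 0.8):
    for s in (0.0, 0.5, 2.0):
        x, y, u = characteristic(x0, s)
        print(u * math.exp(y / 2) - x0**2 * math.exp(x0))   # ≈ 0 every time
```

At $s=0$ this reduces to the initial data $x=x_0$, $y=2x_0$, $u=x_0^2$.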
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4258943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is initial curve not assumed to be coincident with characteristics I was reading the method of characteristic for first order p.d.e's.
$a\frac{\partial u}{\partial x}+b\frac{\partial u}{\partial y}=c$.
Now it says that to find the solution we try to find a specific curve along which the PDE becomes an ODE. This part I get, since the curve is where the first-order derivatives can be discontinuous. But why does it say that the initial data curve should not coincide with this characteristic? What will happen if the initial conditions are specified on the boundary? Can somebody give an example?
Say I have the equation as $$yp+q=2$$ with $u$ known on the initial curve $y=0, 0\leq x\leq1$. So I get the characteristics and solution as
$y^2=2(x-x_R)$ and $u= 2y +u_R$
| If you give data along a characteristic curve, the solution $u$ has two possibly conflicting jobs to do: (1) satisfy the ODE along the characteristic, and (2) satisfy the given condition along the characteristic.
As a simple example, try solving the PDE $u_x=0$ subject to the condition $u(x,0)=x$. Good luck! :-)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4259127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Regarding the proof of $J(\chi,\chi^{-1}) = -\chi(-1)$
I am having trouble understanding the proof of the following identity for Jacobi Sums:
$$J(\chi,\chi^{-1}) = -\chi(-1)$$
, where $\chi$ is a non-trivial character over $\mathbb{F}_p$ (where $p$ is a prime).
The proof in Ireland-Rosen's book goes as follows:
$$J(\chi,\chi^{-1}) = \sum_{a+b =1 } \chi(a)\chi^{-1}(b) = \sum_{a+b =1 } \chi \biggl(\frac{a}{b}\biggr) = \sum_{a \ne 1} \chi \biggl(\frac{a}{1-a}\biggr) = \sum_{c \ne -1} \chi(c) = -\chi(-1)$$
I do not understand the penultimate equation at all. The book says that it is just a substitution of $c = \frac{a}{1-a}$ and as the $a$ run over all values in $\mathbb{F}_p \setminus \{1\}$ so must the $c$ run over $\mathbb{F}_p \setminus \{-1\}$. I do not understand this. Could you please explain this to me?
Remark: This question is related to my former question. There I learned that
$$\Bigl\{ \frac{a}{a-1} \mid a \ne 1, a \in \mathbb{F}_p \Bigr\} \ne \Bigl\{ b \mid b \ne -1, b \in \mathbb{F}_p \Bigr\}$$
, which adds to my confusion.
| Note that $\frac a{1-a} = -1 + \frac1{1-a}$. As $a$ runs over $\Bbb F_p\setminus\{1\}$, the quantity $1-a$ runs over $\Bbb F_p\setminus\{0\}$, and so $\frac1{1-a}$ also runs over $\Bbb F_p\setminus\{0\}$, and therefore $\frac a{1-a} = -1 + \frac1{1-a}$ runs over $\Bbb F_p\setminus\{-1\}$. That is the reason for the penultimate equality.
(Your former question dealt with the expression $\frac a{a-1}$ rather than $\frac a{1-a}$, and thus is not directly relevant.)
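The identity is also easy to check numerically for a small prime. The sketch below builds a nontrivial character of $\mathbb{F}_7^\times$ via a discrete logarithm (the choices $p=7$, primitive root $3$, and this particular $\chi$ are mine):

```python
import cmath

p, g = 7, 3                                  # 3 is a primitive root mod 7
dlog = {pow(g, k, p): k for k in range(p - 1)}

def chi(a, m=1):                             # chi^m, where chi(g^k) = e^{2 pi i k/(p-1)}
    return cmath.exp(2j * cmath.pi * m * dlog[a % p] / (p - 1))

# J(chi, chi^{-1}) = sum over a + b = 1 with a, b nonzero, i.e. a not in {0, 1}
J = sum(chi(a) * chi(1 - a, -1) for a in range(2, p))
print(J, -chi(-1))                           # both ≈ 1, since chi(-1) = -1 here
```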
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4259514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $\lim_{x\to 1}\frac{\sin(x^2-1)}{x-1}$ without L’Hopital. $$\lim_{x\to 1}\frac{\sin(x^2-1)}{x-1}$$
I have tried using squeeze theorem but that doesn’t work. I know you can use derivatives to find that it equals two but is there another to do it?
| The above solutions are very simple to understand, but you can also do the following.
Use the expansion
$$\sin t = t- \frac{t^3}{3!} +\frac{t^5}{5!}-\cdots$$
With $t = x^2-1$,
$$\frac{\sin(x^2-1)}{x-1} = \frac{x^2-1}{x-1}-\frac{(x^2-1)^{3}}{3!\,(x-1)} +\cdots$$
As ${x\to1}$, all the higher-order terms tend to $0$, leaving
$$\frac{(x+1)(x-1)}{x-1}.$$
As ${x\to1}$ we have $x-1\to0$ but $x-1\neq0$, so we may cancel, and
$$\lim_{x\to1}(x+1) = 2.$$
That is, the limit is $2$.
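A quick numerical check of the limit (the sample points are arbitrary choices of mine):

```python
import math

# sin(x^2 - 1) / (x - 1) should approach 2 as x -> 1
for x in (1.1, 1.01, 1.001):
    print(math.sin(x * x - 1) / (x - 1))
```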
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4259633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Find $\lim_{(x,y)\rightarrow (0,0)} \frac{e^{1-xy}}{\sqrt{2x^2+3y^2}}$. I am trying to calculate $\lim_{(x,y)\rightarrow(0,0)}\frac{e^{1-xy}}{\sqrt{2x^2+3y^2}}$, or show that the limit does not exist.
Thoughts: In the limit $(x,y) \to (0,0)$, it seems that the numerator is tending to a constant value $e$, while the denominator tends to zero, so the required limit might be infinity. However, I am not sure how to obtain a suitable lower bound to show this.
I also tried to show that the limit does not exist, without much luck. Approaching the origin along the $x$-axis would give: $\lim_{x\to 0}\frac{e}{x\sqrt{2}}$, and along the $y$-axis would give $\lim_{y\to 0}\frac{e}{y\sqrt{3}}$. Both these limits tend to infinity. Approaching $(0,0)$ along $y=x$ also suggests the same.
| Yes you are right: since the numerator tends to $e$ and the denominator to $0^+$, we can easily conclude that the limit is $\infty$. Indeed, near the origin (where $xy\le 1$, so $e^{1-xy}\ge 1$), by comparison
$$\frac{e^{1-xy}}{\sqrt{2x^2+3y^2}}\ge \frac 1{\sqrt{2x^2+3y^2}} \ge \frac 1{\sqrt 3\sqrt{x^2+y^2}}\to \infty$$
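Numerically, the blow-up is visible along any approach path (the sample paths and radii below are my own choices):

```python
import math

f = lambda x, y: math.exp(1 - x * y) / math.sqrt(2 * x * x + 3 * y * y)

# along the x-axis, the y-axis, and the diagonal y = x, the values grow without bound
for r in (1e-1, 1e-3, 1e-5):
    print(f(r, 0), f(0, r), f(r, r))
```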
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4259865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\int_0^\infty x^5e^{-x^3}\ln(1+x)dx$ Can anyone help me to cope with this integral? I have tried solving it but I had no breakthrough whatsoever ...
$$\int_0^\infty x^5e^{-x^3}\ln(1+x)dx$$
| \begin{align}
\int_0^\infty x^5 e^{-x^3} \ln(1+x)\,dx
&= -\frac{1}{3} e^{-x^3}\left(x^3+1\right)\ln(1+x)\Big|_0^\infty + \frac{1}{3} \int_0^\infty \frac{e^{-x^3}\left(x^3+1\right)}{1+x}\,dx \\
&= \frac{1}{3} \int_0^\infty \frac{e^{-x^3}\left(x^3+1\right)}{1+x}\,dx \\
&= \frac{1}{3} \int_0^\infty e^{-x^3}(x^2 - x + 1)\,dx \\
&= \frac{1}{3} \left[\int_0^\infty x^2e^{-x^3}\,dx - \int_0^\infty xe^{-x^3}\,dx + \int_0^\infty e^{-x^3}\,dx\right] \\
&= \frac{1}{3} \left[\frac{1}{3}\int_0^\infty e^{-u}\,du - \frac{1}{3}\int_0^\infty u^{-\frac{1}{3}}e^{-u}\,du + \frac{1}{3}\int_0^\infty u^{-\frac{2}{3}}e^{-u}\,du\right] \\
&= \frac{1}{9} \left[1-\Gamma\left(\frac{2}{3}\right) + \Gamma\left(\frac{1}{3}\right)\right]
\end{align}
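The closed form can be verified numerically. A sketch using composite Simpson's rule (the truncation point $x=6$ and the step count are my own choices; the integrand is negligible beyond that, of order $e^{-216}$):

```python
import math

def integrand(x):
    return x**5 * math.exp(-(x**3)) * math.log(1 + x)

a, b, n = 0.0, 6.0, 20000            # n must be even for Simpson's rule
h = (b - a) / n
simpson = integrand(a) + integrand(b)
simpson += 4 * sum(integrand(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
simpson += 2 * sum(integrand(a + 2 * k * h) for k in range(1, n // 2))
numeric = simpson * h / 3

closed = (1 - math.gamma(2 / 3) + math.gamma(1 / 3)) / 9
print(numeric, closed)               # both ≈ 0.2583134
```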
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4260086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can every bilinear map be represented as $B(x, y) = A(x)y$? Let $B(x, y) : X \times Y \to Z$ be a bilinear map on finite-dimensional vector spaces. Is it true that every such map admits a unique representation $A(x)y$ where $A : X \to (Y \to Z)$ is a linear map and $A(x)$ is a linear operator?
| As Ian has explained you, it is rather easy.
Here, I would like to give you the same "feeling of easiness" using the classical form of a bilinear map once a basis has been fixed, i.e.,
$$B(X,Y)=X^TAY \tag{1}$$
(see example below) where the vectors $x,y$ are identified with the column vectors $X,Y$ of their coordinates with respect to a fixed basis. Using (1), one can write:
$$B(X,Y)=X^TAY=X^TA^TY=(AX)^TY=(AX)\cdot Y \tag{2}$$
(assuming matrix $A$ is symmetrical) where symbol $\cdot$ is for dot product.
(2) is nothing else that the coordinate form of your expression $A(X)(Y)$.
Example of a bilinear map under the coordinate form (1):
$$2x^2+10xy+3y^2=\underbrace{\begin{pmatrix}x&y\end{pmatrix}}_{X^T}\underbrace{\begin{pmatrix}2&5\\5&3\end{pmatrix}}_A\underbrace{\begin{pmatrix}x\\y\end{pmatrix}}_X.$$
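In coordinates, $A(x)$ is simply the row vector $X^TA$, and then $B(x,y)=(X^TA)Y$. A tiny sketch with the matrix from the example above (the test vectors are arbitrary):

```python
M = [[2.0, 5.0],
     [5.0, 3.0]]

def A(x):
    # the linear map y -> B(x, y), represented by the row vector x^T M
    return [sum(x[i] * M[i][j] for i in range(2)) for j in range(2)]

def B(x, y):
    return sum(a * b for a, b in zip(A(x), y))

x, y = [1.0, 2.0], [3.0, -1.0]
direct = sum(x[i] * M[i][j] * y[j] for i in range(2) for j in range(2))
print(B(x, y), direct)   # 25.0 25.0
```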
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4260232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Where does this inequality come from? In Bollobás's Modern Graph Theory, section IV.4, Page 120 line -10, he uses the inequality
$$\frac{t {n \choose t}}{n {\epsilon n \choose t}} \leq \frac{t}{n} \epsilon^{-t} (1-\frac{t}{\epsilon n})^{-t}.$$
The general situation here is $t \approx \epsilon \ln n,$ if that's helpful. How do you derive this inequality?
To try to figure out this inequality, I applied Stirling's formula to the factorials in the lefthand side above. After some simplification (which I hope was correct!) I got
$$\frac{t {n \choose t}}{n {\epsilon n \choose t}} \sim \frac{t}{n} \epsilon^{-t} (1-\frac{t}{\epsilon n})^{-t} \left((1+\frac{t}{n-t})^{n-t+\frac{1}{2}} (1-\frac{t}{\epsilon n})^{\epsilon n + \frac{1}{2}} \right).$$
The rightmost parenthetical expression above seems to be larger than 1 (it takes the form big^big * small^small...). I'm not sure how to derive the inequality (either finishing this approach or otherwise).
| Using the inequality $$ \frac{n-i}{\epsilon n - i} < \frac{n}{\epsilon n - t}$$ for $ 0 \leq i \leq t-1$, we have that $$\frac{t {n \choose t}}{n {\epsilon n \choose t}} = \frac{t}{n} \prod_{i=0}^{t-1} \frac{n-i}{\epsilon n - i} \leq \frac{t}{n} (\frac{n}{\epsilon n - t})^t = \frac{t}{n} \epsilon^{-t} (1-\frac{t}{\epsilon n})^{-t} $$.
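The inequality can be spot-checked numerically (the sample values $n=100$, $\epsilon=1/2$, $t=3\approx\epsilon\ln n$ are my choices, with $\epsilon n$ an integer so the binomial coefficient is defined):

```python
import math

n, eps, t = 100, 0.5, 3
m = int(eps * n)                    # eps * n = 50
lhs = t * math.comb(n, t) / (n * math.comb(m, t))
rhs = (t / n) * eps**(-t) * (1 - t / (eps * n))**(-t)
print(lhs, rhs, lhs <= rhs)         # 0.2475 0.28895… True
```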
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4260371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How would I convert a recursive geometric sequence summation to a closed formula? How would I convert the below geometric (I assume based on the terms) recursive sequence summation to a closed formula?
$$a_1 = 1,\ \quad a_k = \sum_{i=1}^{k-1} a_i \ \quad for\ k \geqq 2$$
I've tried:
$$a_k = 2\frac{k-1}{k}$$
$$a_k = 2\frac{1-k^2}{1-k}$$
$$a_k = k\frac{1-k^2}{1-k}$$
But nothing seems to work correctly with the terms (with $a_1$ to $a_7$ being 1, 1, 2, 4, 8, 16, and 32 respectively). I'm pretty stuck and really not sure how to proceed so would appreciate any help.
UPDATE: Thanks for the help everyone, in truth it was a combination of the answers that helped me better understand how to proceed with this so for future users looking to understand this question better, I would advise going through all the answers and not just the chosen one.
| We have $a_k = S_{k-1}$, where $S_m = \sum_{i=1}^m a_i$, so $a_k - a_{k-1} = S_{k-1}-S_{k-2} = a_{k-1}$. Thus for $k \ge 3$ we have the equivalent recurrence
$$
a_k = 2 a_{k-1}\Rightarrow a_k = c_0 2^k
$$
and fixing the constant with $a_2 = 1$ gives $a_k = 2^{k-2}$ for $k \ge 2$ (together with $a_1 = 1$).
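The recursion and the resulting closed form $a_1=1$, $a_k=2^{k-2}$ for $k\ge2$ can be compared directly (a small sketch):

```python
def a_recursive(n):
    a = [1]
    for _ in range(n - 1):
        a.append(sum(a))        # a_k = sum of all previous terms
    return a

def a_closed(k):
    return 1 if k == 1 else 2 ** (k - 2)

print(a_recursive(7))                       # [1, 1, 2, 4, 8, 16, 32]
print([a_closed(k) for k in range(1, 8)])   # same list
```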
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4260515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
periodic function of $x$ with period 2 and $f(x)=|x|-x$ for $-1<x\le1$ Let $f(x)$ be a periodic function in $x$ with period 2, and $f(x) =|x|−x$ for $−1< x≤1$. Sketch the graph of the curve $y=f(x)$ in the interval $[−3,3]$.
$f(x) =|x|−x$ seems not to be a periodic function, though, so how can I solve the question? Thank you!
| Expanding David's comment, taking it step by step, first we have the function $f$ that's stated to be periodic with period 2, but we don't know what it looks like. Then, it's defined to be $f(x) =|x|−x$ for just the interval $(-1, 1]$.
Note that it's only for that specific interval that the function equals $|x|−x$. That can be evaluated: $f(x) = -2x$ for $-1 < x < 0$ and $f(x) = 0$ for $0 \le x \le 1$.
We've got what the function is for an interval of 2 units now*. We can use that as a repeating unit and make the rest of the graph.
Also note that the endpoints aren't both included in the graph, so you'll have to pay attention while completing it.
*Well, sort of. The leftmost point isn't in that '2 units'. But it'll work out when you think it through for the rest of the graph.
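For checking values while sketching, here is a small sketch of the periodic extension (the reduction trick mapping $x$ into the fundamental interval $(-1,1]$ is my own):

```python
def f(x):
    r = ((x + 1) % 2) - 1      # reduce x into [-1, 1) using the period 2
    if r == -1:                 # fold the excluded endpoint -1 back onto 1
        r = 1.0
    return abs(r) - r

# the two pieces (f = -2x on (-1, 0), f = 0 on [0, 1]) and periodicity:
print(f(-0.5), f(0.5), f(1.0), f(1.5), f(-2.5))   # 1.0 0.0 0.0 1.0 1.0
```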
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4260687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to solve $\sqrt{\tan x}=1$ without squaring? Problem:
$$\sqrt{\tan x}=1$$
Solution (with squaring):
$$\sqrt{\tan x}=1$$
$$\tan x=1$$
$$x=n\pi+\frac{\pi}{4}\tag{1}$$
1. $(1)$ contains extraneous roots. How do I filter them out?
2. How do I solve for $x$ without squaring?
|
$$\sqrt{\tan x}=1\\\tan x=1\\x=n\pi+\frac{\pi}{4}$$
You have shown that $$\sqrt{\tan x}=1\implies\tan x=1\implies x=n\pi+\frac{\pi}{4}$$ (the given equation's implicit condition that $\tan x\ge0$ justifies $\left(\sqrt{\tan x}\right)^2=\tan x$ in the first step).
But notice also that $$x=n\pi+\frac{\pi}{4}\implies\tan x=1\implies\sqrt{\tan x}=1$$ (taking principal square root in the last step).
Therefore, in this example, all three lines are equivalent to one another; so, no extraneous root has been created. Hence, squaring does not necessarily create extraneous roots.
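Regarding question 1, one can also confirm numerically that none of the roots $x=n\pi+\pi/4$ are extraneous, since $\tan x = 1 \ge 0$ at every one of them (a quick check over a few values of $n$; the range is arbitrary):

```python
import math

for n in range(-3, 4):
    x = n * math.pi + math.pi / 4
    assert math.tan(x) > 0                            # the square root is defined
    print(abs(math.sqrt(math.tan(x)) - 1.0) < 1e-9)   # True for every n
```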
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4260843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Find the right derivative of $\sin^2\left(x \sin \frac1x\right)$ at $x = 0$. $$f(x)=\sin^2\left(x \sin \frac1x\right)$$
I was trying to find the right derivative of $f$ at $x = 0$.
(In this question, it says to take $f(0)$ such that $f$ is right continuous at $x = 0$. Therefore, considering right continuity at $x = 0$, I obtained $f(0) = 0$.)
before that, I know,
I should find whether $f$ is the right differentiable at $x = 0$.
Is $f$ right differentiable at $x = 0?$ If so, what is the right derivative of $f$ at $0?$
I think $f$ is not right differentiable at $x = 0$. It would be great if someone could clarify this.
Any help would be appreciated.
| You want to find out the limit $\lim_{x\to 0+} \frac{f(x)-f(0)}{x-0}$.
$f(x)=\sin^2(x\sin \frac 1x), x\ne 0$
Since $f(0)$ is not defined (in OP), the aforementioned limit does not make sense.
If however, in addition, you had $f(0)=0$, then it can be shown (using the fact that $|\sin y|\le |y|$ for all $y\in \mathbb R$) that: $$|\frac{f(x)-f(0)}{x-0}|=|\frac {f(x)}{x}|\leq |x|$$
So by Squeeze principle, it follows that the required limit is $0$.
Note that if $f(0)=k\ne 0$, then the limit can't exist. For, existence of the above limit would then imply:
$$\lim_{x\to 0+}(f(x)-f(0))=\lim_{x\to 0+}\left(\frac{f(x)-f(0)}{x-0}\right)\lim_{x\to 0+} x=0$$
Note that $\lim_{x\to 0+} f(x)=0$ and hence by limit rules $\lim_{x\to 0+} (f(x)-f(0))=0=0-k$, which is a contradiction as $k\ne 0$.
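With $f(0)=0$, the bound $|f(x)/x|\le x$ from the squeeze argument is easy to observe numerically (the sample points are my own):

```python
import math

f = lambda x: math.sin(x * math.sin(1 / x)) ** 2

# |sin y| <= |y| gives f(x) <= x^2, so f(x)/x is squeezed to 0 as x -> 0+
for x in (1e-2, 1e-4, 1e-6):
    print(f(x) / x, "<=", x)
```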
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4260944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Does $\lim_{n\rightarrow\infty} \sum_{k = 0}^{n}{f(\frac{k}{n}) }$ exist? Why? $ f: [0;1] \rightarrow (0; +\infty) $,
$\lim_{n\rightarrow\infty} \sum_{k = 0}^{n}{f(\frac{k}{n}) }$ appears to be non-existent (as the gut feeling tells me), since it's basically the sum of images of all possible rationals between 0 and 1. Even if a sequence such as $\lim_{n\rightarrow\infty} \sum_{k = 0}^{n}{f(\frac{1}{2^k}) }$ converges, for example, we'd still have infinitely many series such as this one added up. So, what do you think? If it does exist, can you provide the example of such a function, if not, why?
| You can certainly find examples where $S(n)=\sum\limits_{k = 0}^{n}{f(\frac{k}{n}) }$ is bounded above for all $n$:
for example if $f\left(\dfrac ab\right)=\dfrac1{b^3}$ where $\frac kn$ is written as $\frac{a}{b}$ with $a,b$ coprime integers and $b\ge 1$,
then the sum is bounded above by its supremum of $1+ \dfrac{\zeta(2)}{\zeta(3)}\approx 2.368433$ approached when $n$ is a large factorial, and bounded below by its infimum of $2$ approached when $n$ is a large prime.
So this points to a general counterexample using Daniel Schepler's comments: consider each odd prime $p$; then $$S(p!) -S(p) \ge f(\tfrac12)$$ since every term of $S(p)$ also appears in $S(p!)$, while the term $f(\tfrac12)>0$ appears in $S(p!)$ but not in $S(p)$.
So $S(n)$ does not have Cauchy convergence and so does not have a finite limit.
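The mechanism can be illustrated concretely with the $f(a/b)=1/b^3$ example above (the choice $p=7$, and taking $f(0)=f(1)=1$, are my own; any positive values there would do):

```python
from math import gcd

def f_val(k, n):
    # f(a/b) = 1/b^3 with k/n = a/b in lowest terms (taking f(0) = f(0/1) = 1)
    b = n // gcd(k, n) if k else 1
    return 1 / b**3

def S(n):
    return sum(f_val(k, n) for k in range(n + 1))

print(S(7), S(5040), S(5040) - S(7) >= 1 / 8)   # the gap exceeds f(1/2) = 1/8
```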
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4261197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Isn't a homeomorphism between an open interval and $\mathbb{R}$ a contradiction? Let $X = (-\frac{\pi}{2},\frac{\pi}{2})$. Define $f: X \rightarrow \mathbb{R}$ given by $f(x) = \tan(x), \, \forall x \in X$. It can be shown that $f$ is a homeomorphism.
We have that $X$ is an open set. $\mathbb{R}$ is open and closed. By the homeomorphism, $X$ should be open, once $\mathbb{R}$ is open; but also closed since $\mathbb{R}$ is closed.
However, $X$ is not closed.
What am I missing here?
Thanks!
| The thing you are missing is that a homeomorphism is a morphism of spaces and not sets of a space. When you are considering the interval as the domain of the homeomorphism, you are really comparing a topology on $\mathbb{R}$ and a topology on the interval (the relative topology on the interval). As the interval is the whole set of that topological space, it is, indeed, closed, even though it may not be closed as a subset of the reals.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4261410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
The angle $ \angle APB$ between the tangents to a curve.
I have been asked to find $\angle{APB}$ in the form: $\tan^{-1}{\alpha}$ + $\tan^{-1}{\beta}$
I got the equations of the lines through differentiating the function $h(x) = {(\ln(x) - 1.5)}^{2} - 0.25$ at $e$ and ${e}^{2}$ respectively
The equation of the first line is: $$y = -xe^{-1} + 1$$
The equation of the second line is: $$y = xe^{-2} -1$$
Then after doing some math, I got:
$\angle{APB}$ = 180 - (180 - $\tan^{-1}{(\frac{-1}{e})}$ + $\tan^{-1}{(\frac{1}{e^2})}$)
$\angle{APB}$ = $\tan^{-1}{(\frac{-1}{e})}$ - $\tan^{-1}{(\frac{1}{e^2})}$
However, this is not the correct answer (due to the minus sign). I do not know if I am doing the correct work so far or if I have gone far off.
[Figure: the angle $\angle APB$ between the two tangent lines]
| In any triangle, the exterior angle formed by producing one side equals the sum of the two opposite interior angles.
$$ \gamma_2- \gamma_1 $$
made at x-coordinate locations $(e^2,e) $ respectively
As you marked, adopting a consistent anticlockwise rotation convention, reckoned positive:
$$ \pi+\tan^{-1} \alpha - (\pi-\tan^{-1} \beta )$$
$$ \tan^{-1} \alpha + \tan^{-1} \beta $$
Now $\beta < 0$; only then can the sum of arctangents be obtuse, lying in $(\pi/2,\pi)$ in the second quadrant.
Numerical calculation results in a value $\approx 152^{\circ}.$
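As a numerical sanity check (a sketch, assuming the tangent slopes $-1/e$ and $1/e^{2}$ from the question), the obtuse angle between the two tangent lines can be computed directly:

```python
import math

# Slopes of the two tangent lines from the question.
m1 = -1 / math.e         # tangent at x = e
m2 = 1 / math.e ** 2     # tangent at x = e^2

# Direction angle of each line, measured anticlockwise from the
# positive x-axis; m1 < 0 puts the first line in the second quadrant.
theta1 = math.atan(m1) + math.pi
theta2 = math.atan(m2)

# The angle at P is the difference of the two direction angles.
angle = theta1 - theta2
print(math.degrees(angle))  # about 152 degrees, matching the answer
```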
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4261501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What proportion of the volume of Gabriel's horn could be filled by its surface area? Space filling curves imply that lower dimensional objects can fill a proportion of higher dimensional objects. Eg. in the case of the Hilbert curve and the 2D space it fills, this proportion is 1.
Gabriel's horn has an infinite surface area, but a finite volume. What proportion of the volume of Gabriel's horn could be filled by its surface area?
Is there a general way of thinking about the size of an object in one dimension compared to its size in the next higher dimension, which would answer this question as a special case?
Apologies for the problems with the question. All input appreciated. Cheers!
| tl; dr: The "plane-filling" property of the Hilbert curve (e.g.) is not restricted to raising the dimension by one. For example, there exists a surjection from the number line to Gabriel's horn.
It's a red herring that the surface area of Gabriel's horn is infinite.
$\newcommand{\Reals}{\mathbf{R}}$Let $h:\Reals \to \Reals^{2}$ be a continuous, surjective mapping, and write $h = (h_{1}, h_{2})$. The mapping $h \times h:\Reals^{2} \to \Reals^{4}$ defined by
$$
(h \times h)(s, t) = (h(s), h(t)) = (h_{1}(s), h_{2}(s), h_{1}(t), h_{2}(t))
$$
is also surjective, so the composition $H = (h \times h) \circ h$, defined by
$$
H(t) = (h(h_{1}(t)), h(h_{2}(t))),
$$
maps $\Reals \to \Reals^{4}$ continuously and surjectively. Iterating this construction gives us a continuous surjection from $\Reals$ to a Cartesian space of arbitrarily high dimension.
To map $\Reals$ continuously onto Gabriel's horn, we can:
*
*Start with $H:\Reals \to \Reals^{4}$; then
*Map $\Reals^{4} \to \Reals^{3}$ continuously and surjectively by a coordinate projection; then
*Map $\Reals^{3}$ continuously onto a closed cylinder by sending $(x, y, z)$ to $(x, y, z)$ if $x^{2} + y^{2} \leq 1$ and to $(x/\sqrt{x^{2} + y^{2}}, y/\sqrt{x^{2} + y^{2}}, z)$ otherwise; then
*Map the closed cylinder to a semi-infinite cylinder by sending $(x, y, z)$ to $(x, y, z)$ if $1 < z$ and to $(x, y, 1)$ otherwise; then
*Map the closed semi-infinite cylinder $\{(x, y, z) : x^{2} + y^{2} \leq 1, 1 \leq z\}$ continuously onto Gabriel's horn by axial scaling, sending $(x, y, z)$ to $(x/\sqrt{z}, y/\sqrt{z}, z)$.
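Steps 3–5 can be sketched numerically (a sketch, taking the horn, as step 5 does, to be the solid of radius $1/\sqrt{z}$ at height $z \ge 1$):

```python
import math, random

def step3(x, y, z):
    """Retract R^3 onto the closed infinite cylinder of radius 1."""
    r = math.hypot(x, y)
    if r <= 1:
        return x, y, z
    return x / r, y / r, z

def step4(x, y, z):
    """Retract the cylinder onto its semi-infinite part z >= 1."""
    return (x, y, z) if z > 1 else (x, y, 1.0)

def step5(x, y, z):
    """Axial scaling onto the horn: radius 1/sqrt(z) at height z."""
    return x / math.sqrt(z), y / math.sqrt(z), z

random.seed(0)
for _ in range(1000):
    p = tuple(random.uniform(-9, 9) for _ in range(3))
    x, y, z = step5(*step4(*step3(*p)))
    # Every image point lies in the horn: z >= 1 and x^2 + y^2 <= 1/z.
    assert z >= 1 and x * x + y * y <= 1 / z + 1e-12
print("all image points lie in the horn")
```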
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4261885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Range of $\frac{x-y}{1+x^2+y^2}=f(x,y)$ I have a function $\frac{x-y}{1+x^2+y^2}=f(x,y)$. And, I want to find the range of it. I analyzed this function by plotting it on a graph and found interesting things. Like if the level curve is $0=f(x,y)$, then I get $y=x$ which is a linear function. But if the level curve is something not 0, then the level curve becomes a circle. And for big values of level curves, the circle disappears. Is there something I can use to find the range of this function?
| For an alternative approach...
Notice $f(1,1)=0$ so $0$ is in the range of $f$. Now if $C\neq 0$ then $$f(x,y)=C \iff\Big(x-\frac{1}{2C}\Big)^2+\Big(y+\frac{1}{2C}\Big)^2=\frac{1}{2C^2}-1$$ The above relation contains at least one point in the $(x,y)-$plane iff $C\in \big[-\frac{1}{\sqrt{2}},0)\cup (0,\frac{1}{\sqrt{2}}\big]$. Putting everything together we see the range is $\Big[-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\Big]$.
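A quick numerical check of this range (a sketch; the maximiser $(\tfrac{1}{\sqrt2}, -\tfrac{1}{\sqrt2})$ is the centre of the degenerate circle at $C = \tfrac{1}{\sqrt2}$):

```python
import math, random

def f(x, y):
    return (x - y) / (1 + x * x + y * y)

# The degenerate (radius-zero) circle at C = 1/sqrt(2) has centre
# (1/(2C), -1/(2C)) = (1/sqrt(2), -1/sqrt(2)), where f attains its max.
c = 1 / math.sqrt(2)
assert abs(f(c, -c) - 1 / math.sqrt(2)) < 1e-12

# Random sampling never exceeds the claimed bound 1/sqrt(2).
random.seed(1)
worst = max(abs(f(random.uniform(-50, 50), random.uniform(-50, 50)))
            for _ in range(100000))
print(worst <= 1 / math.sqrt(2))  # True
```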
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4262138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 2
} |
Calculate the standard deviation from confidence interval, sample size, and mean, when confidence interval is not symmetric around mean I have the following data:
*
*300 customers were asked to participate in a survey, and each customer was able to submit his/her response within the first two weeks.
*The mean time to respond to the survey was 107 hours.
Using this data, a statistician was able to conclude that the mean time to respond to the survey for future customers would be between 96.12 and 120.65 hours.
Assuming that this is 95% confidence interval, how can we figure out the type of distribution and the standard deviation from this data?
Thanks in advance for your help!
PS: I tried to use t-statistic, but realized that the 95% confidence interval (96.12,120.65) is not symmetric by the mean (107). This means that the data is not normally distributed.
| If the distribution of response times is
$\mathsf{Exp}(\mathrm{rate}=1/107)$ then
the probability a customer responds within
two weeks (336 hrs) is $P(X_i < 336) = 0.9567,$
So it seems possible that most or all customers responded
within two weeks. (Three weeks might be better.)
pexp(336, 1/107)
0.9567253
Then $\frac{\bar X_{300}}{\mu} \sim \mathsf{Gamma}(300,300).$ So a 95% CI for $\mu$ is about $(96, 120).$
107/qgamma(c(.975,.025), 300,300)
[1] 95.85392 120.22054
In summary, it seems that the distribution
of response times is $\mathsf{Exp}(1/107),$
with mean $\mu = \sigma = 107.$
I am aware that not everything matches
perfectly, but I think I'm on the right track. [One source of slight discrepancies
would come from using printed chi-squared tables (instead of gamma distributions in R)
for probabilities involving the sample mean.]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4262247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
is the limit point of $(-1)^n$ in contradiction to the definition of a limit point? $(-1)^n=\{-1,1,-1,1,...\}$
Now according to the definition of limit point
A point $x$ in $X$ is a limit point or cluster point or accumulation point of a set of $S$ if every neighborhood of $x$ contains at least one point of $S$ different from $x$ itself
Now if I take a deleted neighbourhood of $1$, it contains no element of the set. So how can $1$ or $-1$ be a limit point?
| Just as jjagmath says in his comment, the concept of a limit point is different for sequences and sets.
If you were to consider $\{-1,1,-1,...\}$ as a set, then it would actually be $\{-1,1\}$ since sets do not contain the same element more than once (unless it is a multiset). In this case you are correct, this set does not have limit points.
However, if you consider the sequence $-1,1,-1,...$, then a limit point is one such that every neighbourhood of the point contains infinitely many terms of the sequence. So $-1$ and $1$ are limit points of the sequence because they occur infinitely often.
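A concrete (if trivial) numeric illustration: among the first $N$ terms of $(-1)^n$, every $\varepsilon$-neighbourhood of $1$ contains about half of them, no matter how small $\varepsilon$ is.

```python
N = 1000
eps = 1e-9
terms = [(-1) ** n for n in range(1, N + 1)]

# Terms within eps of 1 are exactly the terms equal to 1 (n even).
near_one = sum(1 for t in terms if abs(t - 1) < eps)
print(near_one)  # 500 of the first 1000 terms; infinitely many overall
```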
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4262426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does $\int_{0}^{\infty} \sin^x(x) dx$ converge? From what I have found, the indefinite integral does not have a closed-form solution. Also, the function takes complex values except in the intervals $[0,\pi],[2\pi,3\pi],\dots$ (and for whole-number $x$ between those intervals). But if we only considered those intervals where the function takes on real values, i.e.
$$\int_{0}^{\pi}\sin^x(x)dx+\int_{2\pi}^{3\pi}\sin^x(x)dx+\int_{4\pi}^{5\pi}\sin^x(x)dx+{...}$$
Does this infinite sum converge? For the record I have no idea how to go about proving this, but I am curious if anyone does.
| You can rewrite your sum as $$ \sum_{k=0}^\infty \int_{2k \pi}^{(2k+1)\pi} \sin^x(x)dx $$
Make a substitution $x = y+2k\pi$ in integral $\int_{2k\pi}^{(2k+1)\pi}\sin^x(x)dx$ getting $$ \sum_{k=0}^\infty \int_0^\pi \sin^{2k\pi + y}(y)dy $$
Due to non-negativity and the Monotone Convergence Theorem (or Fubini) you can interchange the sum with the integral, getting $$ \int_0^\pi \sin^y(y)\cdot \sum_{k=0}^\infty \sin^{2k\pi}(y) dy = \int_0^\frac{\pi}{2} \sin^y(y) \cdot \frac{1}{1-\sin^{2\pi}(y)}dy + \int_{\frac{\pi}{2}}^\pi \sin^y(y)\frac{1}{1-\sin^{2\pi}(y)}dy $$
Note that I split the integral into two parts because at the point $y=\frac{\pi}{2}$ our series diverges. However, one point doesn't change the integral at all, so if you're okay with $\int_0^\pi \sin^y(y) \frac{1}{1-\sin^{2\pi}(y)}dy$, then let it be so. All we have to do now is to check whether the above converges or diverges. It is easily seen that the only point where we have a $"$problem$"$ is the point $y=\frac{\pi}{2}$, because the denominator is $0$ there. However, since $\sin^y(y) \to 1$ as $y \to \frac{\pi}{2}$ and since our integrand is non-negative, equivalently (via the limit comparison criterion) we must check the convergence of $$ \int_0^\pi \frac{1}{1-\sin^{2\pi}(y)}dy = \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{1}{1-\cos^{2\pi}(x)}dx$$ where we used the substitution $y = \frac{\pi}{2} - x$ and the equality $\sin(\frac{\pi}{2}-x) = \cos(x)$.
What's left is to use asymptotic of $\cos(x)$ near zero, that is $\cos(x) \sim 1 - \frac{x^2}{2} + o(x^2)$ as $x \to 0$ and $(1+y)^a \sim 1 + ay + o(y)$ as $y \to 0$ (for $a>0$) getting $1-\cos^{2\pi}(x) \sim \pi x^2 + o(x^2)$ as $x \to 0$. Hence, again limit comparison criterion (and non-negativeness of integrand) gives us that convergence of $$ \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{1}{1-\cos^{2\pi}(x)}dx $$ is equivalent with convergence of $$ \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{1}{\pi x^2}dx $$ which is known to diverge
EDIT: The part with the $"$asymptotic$"$ behaviour of $\cos(y)$ and $(1+y)^a$ was only used to ensure $\lim_{x \to 0} \frac{\pi x^2}{1-\cos^{2\pi}(x)} = 1$ (to use the limit-comparison criterion of convergence). So if you're not so familiar with Taylor expansions, then you can try to compute this limit (via, say, L'Hôpital's rule $1$ or $2$ times, depending on your knowledge of $\lim_{x \to 0} \frac{\sin(x)}{x}$) and there will be no need to use asymptotics (however, the asymptotics were really helpful to come up with a function which is $"$comparable$"$ with $\frac{1}{1-\cos^{2\pi}(x)}$ at $x \sim 0$).
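Numerically one can also watch the divergence (a sketch using a midpoint rule, not part of the proof above; by Laplace's method $\int_0^\pi \sin^{n} y\, dy \sim \sqrt{2\pi/n}$, so the $k$-th term decays like $1/\sqrt{k}$ and the partial sums grow like $\sqrt{k}$):

```python
import math

def term(k, m=4000):
    """Midpoint-rule estimate of the k-th piece, written as the
    integral over (0, pi) of sin(y)**(2*k*pi + y)."""
    h = math.pi / m
    return h * sum(math.sin((j + 0.5) * h) ** (2 * k * math.pi + (j + 0.5) * h)
                   for j in range(m))

# term(k) * sqrt(k) settles near a constant, so sum over k diverges
# by comparison with the harmonic-type series sum 1/sqrt(k).
for k in (10, 40, 160):
    print(k, round(term(k) * math.sqrt(k), 3))
```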
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4262816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |
Issues understanding the multinomial theorem and its multiindex notation $$(x_1+x_2+...+x_m)^n=\sum_{(k_1 + k_2 +... +k_m) \ = \ n} {n \choose k_1,k_2...k_m} \prod^m_{t=1}x_t^{k_t}$$
Let's do $(a+b+c)^3$. That means $a =x_1, b =x_2, c=x_3=x_m$. The multiindex below the sigma function is the different $P_m$ partitions of $n$, consisting of summands $\ge0$. If $n =3$, then those partitions will be $3+0+0$, $2+1+0$ and $1+1+1$. That means that the following operations after the sigma needs to use all of those k-values. So, the summation will look like this:
$${3\choose 3,0,0} \times a^3 \times b^0 \times c^0$$
$$+$$
$${3 \choose 2,1,0} \times a^2 \times b^1 \times c^0$$
$$+$$
$${3 \choose 1,1,1} \times a^1 \times b^1 \times c^1$$
$$=a^3 + 3a^2b + 6abc$$
But that's obviously wrong, as it's missing $b^3 + c^3 + 3a^2c + 3b^2a + 3b^2c + 3c^2a + 3c^2b$. Thing is, I can't see how the formula above is to be used any other way that the erroneous way I used it. This is how I've interpreted it:
Since the x has the same index as it's exponent, the order is stuck, which means instead of going through every combination of the $x_i^{k_i}$, it will only get the first $x$ raised to the first $k$th power, the second $x$ raised to the second $k$th power, and so on. This misses a lot of the summands. How have I misinterpreted the notation to produce this wrong algorithm?
| Order is important for these partitions. So you would need to have a term for every partition below:
*
*$3+0+0$
*$0+3+0$
*$0+0+3$
*$1+2+0$
*$2+1+0$
*$0+1+2$
*$0+2+1$
*$1+0+2$
*$2+0+1$
*$1+1+1$
Think of it as saying the more cumbersome
$$\sum_{\substack{(k_1,\cdots,k_m) \in \Bbb Z_{\ge 0}^m \\ k_1 + \cdots + k_m = n}}$$
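The sum over ordered tuples can be checked mechanically (a sketch; enumerating the weak compositions of $n$ into $m$ parts with `itertools`):

```python
from itertools import product
from math import factorial

def multinomial_terms(n, m):
    """Yield (coefficient, exponents) over all ordered (k_1,...,k_m)
    with k_1 + ... + k_m = n and each k_i >= 0."""
    for ks in product(range(n + 1), repeat=m):
        if sum(ks) == n:
            coef = factorial(n)
            for k in ks:
                coef //= factorial(k)
            yield coef, ks

terms = list(multinomial_terms(3, 3))
print(len(terms))                           # 10 ordered tuples, as listed above
print(sum(coef for coef, _ in terms))       # 27 = (1+1+1)^3

# Spot-check the full expansion at a = 2, b = 3, c = 5.
a, b, c = 2, 3, 5
lhs = (a + b + c) ** 3
rhs = sum(coef * a**k1 * b**k2 * c**k3 for coef, (k1, k2, k3) in terms)
print(lhs == rhs)                           # True
```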
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4262943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does it make sense to talk about sub-topologies? Upon examining the real line with the finite complement topology $\tau_{c}$, an open set $O$ is a union of certain open intervals in $\mathbb{R}$. Since $\mathbb{R}$ itself is the single interval $(-\infty,\infty)$, the empty set is the empty union of intervals, and any other set is of the form $O=(-\infty,p_1)\cup (p_1,p_2)\cup\dots\cup (p_{n-1},p_n)\cup (p_n,\infty)$ where $p_1<p_2<\dots <p_n$. Well, the usual topology on the real line, $\tau$, is made up of all unions of open intervals. So it seems like (and forgive me if this isn't the proper notation) we could say $\tau_c\subset\tau$. I can't seem to find anything about "sub-topologies" online though. Does this idea make sense / is it useful? Is there a name for this situation? This is my first semester in topology, so I'm not sure what to look up to find anything about it (but I've tried!).
| The 'sub-topologies' that you mention are known as coarser topologies.
As an example you may also take the Sorgenfrey line $\mathbb{R}_l$ and $\mathbb{R}$ with the usual topology. Every open set in $\mathbb{R}$ is open in $\mathbb{R}_l$ too, however there are open sets in $\mathbb{R}_l$ which are not open in $\mathbb{R}$. So $\mathbb{R}$ is coarser than $\mathbb{R}_l$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4263094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Integrability of Symplectic structures A symplectic structure on an even-dimensional manifold is a non-degenerate closed two-form, and I understood the integrability of a symplectic structure to be its closedness as a differential 2-form, which comes from the involutivity of symplectic vector fields via the Frobenius theorem. However, in my calculation, the Lie derivative of a merely non-degenerate ("semi-symplectic") 2-form $\omega$ along the Lie bracket $[X, Y]$ of two symplectic vector fields $X$ and $Y$ is zero without the $d$-closed condition.
My question is: did I misunderstand the notion of integrability of symplectic structures in the sense of Frobenius, or is there a mistake in the following calculation?
Assume that $0=\mathcal{L}_X\omega, \ 0= \mathcal{L}_Y\omega$, since $X$ and $Y$ are symplectic.
We now compute $\mathcal{L}_{[X, Y]}\omega$ using a formula $\mathcal{L}_{[X, Y]}=\mathcal{L}_X \mathcal{L}_Y -\mathcal{L}_Y\mathcal{L}_X$.
\begin{align}
\mathcal{L}_{[X, Y]}\omega & = (\mathcal{L}_X \mathcal{L}_Y -\mathcal{L}_Y\mathcal{L}_X)\omega \\
& = 0. \\
\end{align}
Now $\mathcal{L}_{[X, Y]}\omega$ vanished, it implies that $[X, Y]$ is also symplectic without using d-closed condition.
How should I use d-closed condition to confirm integrability of symplectic structures?
| So the notion of Integrability on a Symplectic manifold is by definition the following:
A symplectic structure on a manifold M is a differential $2$-form $\omega$ satisfying two conditions:
*
*$\omega$ is non-degenerate, i.e. for each $p \in M$ and tangent vector $\tilde{u}$ based at $p$, if $\omega_p(\tilde{u},\tilde{v}) = 0$ for all tangent vectors $\tilde{v}$ based at $p$, then $\tilde{u}$ is the zero vector;
*$\omega$ is closed, i.e. the exterior derivative of $\omega$ is zero, i.e. $\mathrm{d}\omega = 0$.
Condition (2) is often noted as the integrability condition.
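To see where the $d$-closed condition actually enters the vector-field picture (a sketch of the standard argument, not part of the answer above): by Cartan's magic formula,

```latex
\mathcal{L}_X \omega \;=\; d(\iota_X \omega) \;+\; \iota_X (d\omega),
```

so only when $d\omega = 0$ does "$X$ is symplectic" reduce to closedness of the 1-form $\iota_X\omega$. For two symplectic fields one then also gets $\iota_{[X,Y]}\omega = d\big(\omega(Y,X)\big)$, so $[X,Y]$ is even Hamiltonian. The analogue of the Frobenius theorem here is Darboux's theorem: $d\omega = 0$ is precisely what is needed for $\omega$ to be locally the standard form $\sum_i dx_i \wedge dy_i$, which is why it is called the integrability condition.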
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4263247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Can an irreducible polynomial of $\mathbb{Q}[x]$ have more than two real roots? The title is pretty much self-explanatory; can an irreducible polynomial of $\mathbb{Q}[x]$ have more than two real roots? And if so, what is an example of a polynomial with more than two real roots? All the polynomials I've seen had at most two real roots, but from what I've learned of Galois theory it seems that there could be more.
| Have you heard about totally real number fields? If $K$ is a totally real number field of degree $n$, then there is an irreducible polynomial $f$ over $\mathbb Q$ of degree $n$ such that all roots are real and generate $K$. This post https://math.stackexchange.com/a/2645829 implies that for every $n \in \mathbb N$ you find such a totally real number field of degree $n$. So for each $n$ you find an irreducible polynomial $f$ of $\mathbb Q$ with $n$ real roots.
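A small degree-$3$ illustration (a sketch, not from the answer above): $f(x) = x^3 - 3x + 1$ has no rational roots — by the rational root theorem the only candidates are $\pm 1$ — so, being a cubic, it is irreducible over $\mathbb{Q}$; yet a sign-change count shows it has three real roots.

```python
def f(x):
    return x**3 - 3*x + 1

# Rational root theorem: candidates are +/-1; neither is a root,
# so this cubic has no linear factor and is irreducible over Q.
assert f(1) != 0 and f(-1) != 0

# f(-2) < 0 < f(0), f(1) < 0 < f(2): three sign changes, hence
# three real roots (a cubic has at most three). Refine by bisection.
def bisect(lo, hi, n=80):
    for _ in range(n):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

roots = [bisect(*iv) for iv in [(-2, 0), (0, 1), (1, 2)]]
print([round(r, 6) for r in roots])
```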
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4263538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
finding roots of $a \sin \theta + b \cos \theta +c \sin \theta \cos \theta + d \sin^2 \theta + e \cos^2 \theta$ I've encountered the following function while working on a project:
$$
f(\theta) = a \sin \theta + b \cos \theta +c \sin \theta \cos \theta + d \sin^2 \theta + e \cos^2 \theta
$$
where a through e are all real non-zero numbers and $0 \leq \theta < 2 \pi$.
I know this function has at least two roots. For reasons I won't bore you with, finding the roots numerically isn't ideal for the project I'm working on.
Applying the tangent half-angle substitution described in this answer results in a fourth order polynomial, which is troublesome to find the roots of without numerical methods.
$$
0 = (e-b)t^4 + (2a-2c) t^3 + (4d-2e)t^2 + (2a+2c)t + (b+e)
$$
Is there a better non-numerical option for finding the roots?
| I don’t know whether this helps you at all, but you might consider making the substitution $y=\sin\theta$, $x=\cos\theta$, to get the equation $ay+bx+cxy+dy^2+ex^2=0$, since you’re looking for roots of $f$. The resulting curve in the plane is a conic of some shape or other, depending on the coefficients, and you want to intersect it with the circle $x^2+y^2=1$.
You thus have two conics, which intersect in four points in the complex projective plane, counting multiplicity, by Bézout’s Theorem. So the problem is, far as I can see, unavoidably quartic. Seems to me that with numerical inputs $\{a,b,c,d,e\}$ the only plausible method of finding the roots is something numerical, like Newton-Raphson. (There is a formula for the general quartic, but you Do Not want to try to use it.)
(And: I wouldn’t have gone via the tangent half-angle formulas, I would have just written $\cos\theta=\sqrt{1-\sin^2\theta\,}$ and manipulated the radicals away. You still get a quartic, likely not the same one.)
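A minimal numerical fallback along those lines (a sketch with made-up coefficients, not the asker's data: scan $[0, 2\pi)$ for sign changes of $f$ and refine each bracket by bisection):

```python
import math

# Hypothetical sample coefficients; real inputs come from the project.
a, b, c, d, e = 3.0, 1.0, 1.0, 1.0, 1.0

def f(t):
    s, co = math.sin(t), math.cos(t)
    return a*s + b*co + c*s*co + d*s*s + e*co*co

def bisect(lo, hi, n=60):
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Scan a fine grid for sign changes, then refine each bracket.
N = 2000
grid = [2 * math.pi * i / N for i in range(N + 1)]
roots = [bisect(grid[i], grid[i + 1])
         for i in range(N) if f(grid[i]) * f(grid[i + 1]) < 0]
print(len(roots), [round(r, 4) for r in roots])
```

This can miss tangential (double) roots that touch zero without crossing; for those, Newton–Raphson seeded near minima of $|f|$ is the usual remedy.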
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4263723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |