Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Well-definedness of $p:\mathbb{Z}/\gcd(m,n)\mathbb Z\rightarrow P$, universal property of the tensor product. I am trying to get used to the universal property of the tensor product. I have tried to prove this common fact using the universal property:
$\mathbb{Z}/n\mathbb Z\otimes \mathbb{Z}/m\mathbb Z \cong \mathbb{Z}/\gcd(m,n)\mathbb Z$
After checking that there is a well-defined bilinear map (which I will call $b$) from $\mathbb{Z}/n\mathbb Z\times \mathbb{Z}/m\mathbb Z$ to $\mathbb{Z}/\gcd(m,n)\mathbb Z$, we need to show that any other bilinear map of $\mathbb Z$-modules $p:\mathbb{Z}/n\mathbb Z\times \mathbb{Z}/m\mathbb Z\rightarrow P$ factors through $b$.
The obvious thing to do is to define $i:\mathbb{Z}/\gcd(m,n)\mathbb Z\rightarrow P$ by $x \bmod \gcd(m,n)\mapsto p(x,1)$. I am having difficulty checking that this is well defined. We need to show that $p(x,1)=p(x',1)$ where $x'=x+k\cdot \gcd(m,n)$. But the only things we know are that
* $p(x,1)=p(1,x)$
* $p(x+n,1)=p(x,1)$
* $p(1,y)=p(1,y+m)$
I have not been able to use these to prove well definedness.
This exercise has been asked many times on this website but I am asking specifically about why in a certain method a particular map is well defined. I don't want an answer using a different method.
| Write $\gcd(m,n) = am + bn$ for some $a, b \in \mathbb{Z}$ (Bézout). Then
$$
p(x + k \cdot \gcd(m,n),1) = p(x + k(am + bn), 1)
$$
$$
= p(x + kam, 1) =p(1, x + kam) = p(1,x) = p(x,1)
$$
because $\gcd(m,n)$ generates the ideal $(m,n)$ in the PID $\mathbb{Z}$, which is exactly what guarantees that such integers $a$ and $b$ exist.
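As a sanity check, the Bézout identity and the two reduction steps in the chain above can be verified mechanically. A minimal Python sketch (the helper names are my own, not from the thread): the first reduction shifts the argument by a multiple of $n$, the second by a multiple of $m$.

```python
from math import gcd

def ext_gcd(m, n):
    """Extended Euclid: returns (g, a, b) with a*m + b*n == g == gcd(m, n)."""
    if n == 0:
        return (m, 1, 0)
    g, a1, b1 = ext_gcd(n, m % n)
    return (g, b1, a1 - (m // n) * b1)

def reduction_steps_ok(m, n, x, k):
    g, a, b = ext_gcd(m, n)
    assert a * m + b * n == g == gcd(m, n)            # Bezout identity
    # p(x + k*g, 1) = p(x + k*a*m, 1): the arguments differ by k*b*n
    assert ((x + k * g) - (x + k * a * m)) % n == 0
    # p(1, x + k*a*m) = p(1, x): the arguments differ by k*a*m, a multiple of m
    assert (k * a * m) % m == 0
    return True

checks = all(reduction_steps_ok(m, n, x, k)
             for m in range(1, 20) for n in range(1, 20)
             for x in range(5) for k in range(-3, 4))
```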
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4010338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Why is the distribution of the x coordinate of a point randomly selected from the circumference of the unit circle not uniform? I am currently attempting to solve a question which deals with the distribution of the $x$ and $y$ coordinates of a randomly chosen point on the circumference of the unit circle. When I first attempted the question, I thought that since the point is chosen at random, it would follow that it is uniformly distributed on the circumference of the circle, and moreover, that its $x$ and $y$ coordinates would be uniformly distributed on $[-1,1]$. However, this is wrong.
I have found a couple of threads regarding this issue, where the answers suggest one use the fact that $x=\cos \theta$ and start from there. My issue is that this was not my original thought, and that without going through the algebra with $x=\cos \theta$ I wouldn't be able to say why the distributions of $x$ (and $y$ for that matter) are not uniform.
Could someone explain to me how I can intuitively rebut the idea that $x$ (and $y$) are uniformly distributed, without involving calculation?
| It is because the circle is steeper near its left and right edges, so a small $x$-interval there contains a lot of arc length. But as you move toward the centre of the circle, the same interval contains less. Consider the interval $[-1,-0.99]$ vs the interval $[0,0.01]$. The slope is much higher over the first interval than over the second, which is mostly flat, so the first interval contains more “points,” and hence probability.
Also, compare the interval $[-1,-1/2]$ vs $[-1/2, 0]$, dividing the left half into two equal interval widths. The first interval contains more arc length than the second. To convince yourself, draw it out. It looks like the first contains about twice as much as the second.
The more arc length, the more “points” there are to choose from.
Two intervals of the same width won’t contain the same arc length unless they are mirror images across the $y$-axis.
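A quick simulation (my own addition, not from the answer) makes the arc-length picture concrete: sampling the angle uniformly and comparing two $x$-intervals of equal width shows that the edge interval collects far more probability, matching the exact values from $P(X\in[a,b]) = (\arccos a - \arccos b)/\pi$.

```python
import math, random

random.seed(0)
N = 200_000
# a uniformly random point on the unit circle: x = cos(theta), theta uniform
xs = [math.cos(random.uniform(0, 2 * math.pi)) for _ in range(N)]

# empirical probability of two x-intervals of equal width 0.01
steep = sum(1 for x in xs if -1.0 <= x <= -0.99) / N   # near the edge
flat  = sum(1 for x in xs if  0.0 <= x <=  0.01) / N   # near the centre

# exact probabilities: P(X in [a,b]) = (arccos(a) - arccos(b)) / pi
steep_exact = (math.acos(-1.0) - math.acos(-0.99)) / math.pi  # ~0.045
flat_exact  = (math.acos(0.0) - math.acos(0.01)) / math.pi    # ~0.0032
```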
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4010448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question about the gradient of a function Consider a function $f$ of class $C^1(\mathbb{R}^2)$. Searching for a max or min of $f$ among the points of $S=\{(x,y) \in \mathbb{R}^2 \;\; | \;\; x^2+y^2=1\}$, if $(x_0,y_0)$ is one of them, is it true that there exists $\lambda \in \mathbb{R}$ such that $\nabla f(x_0,y_0)=2\lambda (x_0,y_0)?$ Why?
| Let $g(x,y)=x^2+y^2$. Then $S=\{(x,y)\in\Bbb R^2\mid g(x,y)=1\}$ and the method of Lagrange multipliers tells us that if $f$ has a local extremum at some point $(x_0,y_0)\in S$, then $\nabla f(x_0,y_0)=\lambda\nabla g(x_0,y_0)$ for some $\lambda\in\Bbb R$. And $\nabla g(x_0,y_0)=(2x_0,2y_0)$.
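A numeric illustration with a concrete choice of $f$ (my own example, $f(x,y)=x+2y$, not from the thread): the maximiser on $S$ is $(1,2)/\sqrt 5$, and both components of $\nabla f = 2\lambda(x_0,y_0)$ yield the same $\lambda = \sqrt 5/2$, as the Lagrange condition requires.

```python
import math

# sample function f(x, y) = x + 2y; its gradient is (1, 2) everywhere
x0, y0 = 1 / math.sqrt(5), 2 / math.sqrt(5)   # candidate maximiser on S

# brute-force check that (x0, y0) maximises f on the unit circle
vals = [math.cos(t) + 2 * math.sin(t)
        for t in [k * 2 * math.pi / 100000 for k in range(100000)]]

# Lagrange condition: (1, 2) = 2*lambda*(x0, y0) must hold for a single lambda
lam_from_x = 1 / (2 * x0)
lam_from_y = 2 / (2 * y0)
```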
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4010644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find the x-coordinate of the stationary points of the curve and determine the nature of these stationary points. The equation of a curve is $y=x^2e^{-x}$.
* Find the x-coordinate of the stationary points of the curve and determine the nature of these stationary points.
* Show that the equation of the normal to the curve at the point where $x=1$ is $e^2x+ey = 1+e^2$.
This is the full question I am having difficulty solving; I simply don't know where to begin. I moved back to my home country because of covid and am now self-studying. I don't know how to solve this; any help would be great and much appreciated.
| $y=x^2e^{-x} \implies y'(x)=2xe^{-x}-x^2e^{-x} \implies y'(1)=e^{-1}.$
So the slope of normal at $x=1$ is $m=-1/y'(1)=-e$. So the equation of line having slope $-e$ and passing through the point $(1,e^{-1})$ is
$$y-e^{-1}=-e(x-1) \implies ey+e^2 x=1+e^2$$
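Both parts can be checked numerically. The sketch below (my own, not from the answer) confirms the stationary points $x=0$ (minimum) and $x=2$ (maximum) for part 1 via the sign of $y'$, and the normal line for part 2.

```python
import math

def y(x):  return x * x * math.exp(-x)
def yp(x): return (2 * x - x * x) * math.exp(-x)   # y' = e^{-x}(2x - x^2)

# part 1: stationary points where y' = 0, i.e. x = 0 and x = 2
assert abs(yp(0.0)) < 1e-12 and abs(yp(2.0)) < 1e-12
# nature: y' changes - to + at x = 0 (minimum), + to - at x = 2 (maximum)
sign_change_0 = yp(-0.1) < 0 < yp(0.1)
sign_change_2 = yp(1.9) > 0 > yp(2.1)

# part 2: slope of normal at x = 1 is -1/y'(1) = -e; the point (1, 1/e)
# should satisfy e^2 x + e y = 1 + e^2
e = math.e
lhs = e**2 * 1 + e * y(1.0)
normal_slope = -1 / yp(1.0)
```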
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4010756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Does $\sum^\infty_{n=3}\frac {\ln^2(n)}{n^{3/2}}$ converge or diverge? I am trying to find a way to decide if this sum converges or diverges, I'm trying to use the integral test and bound the integrand or use the limit comparison test, but I am not finding a good function to compare my integrand with at infinity. I would appreciate any help or tips in how to find a function that behaves like $\frac {\ln^2(x)}{x^{3/2}}$ at $x\to \infty$ that I can use it for limit comparison test.
Thanks in advance.
| Use the integral test for convergence. The integral is pretty doable:
$$\int_3^{\infty } \frac{\log ^2(x)}{x^{3/2}} \, dx=\lim_{M\to\infty}\left[-\frac{2 \left(\log ^2(x)+4 \log (x)+8\right)}{\sqrt{x}}\right]_3^M=\frac{2 \left(8+\log ^2(3)+\log (81)\right)}{\sqrt{3}}$$
As the integral converges, the series converges as well.
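The antiderivative and the value above are easy to sanity-check numerically (a sketch of my own, not part of the answer): a central finite difference of the claimed antiderivative should reproduce the integrand, and $-F(3)$ should equal the stated closed form.

```python
import math

def integrand(x):
    return math.log(x)**2 / x**1.5

def F(x):
    # claimed antiderivative: -2 (ln^2 x + 4 ln x + 8) / sqrt(x)
    L = math.log(x)
    return -2 * (L * L + 4 * L + 8) / math.sqrt(x)

# F'(x) should equal the integrand: central finite difference at several points
h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
              for x in [3.0, 5.0, 10.0, 100.0])

# value of the improper integral (F -> 0 as x -> infinity)
integral_value = -F(3.0)
closed_form = 2 * (8 + math.log(3)**2 + math.log(81)) / math.sqrt(3)
```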
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4011049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
An Euler-Lagrange Equation I have an action with a Lagrangian which I would like to apply the Euler-Lagrange equations to https://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation but have spent hours really struggling with it. That is I define
$$
L\Big(\begin{pmatrix}x \\ y\end{pmatrix},\begin{pmatrix} \dot{x} \\ \dot{y}\end{pmatrix} \Big):= \Big\| A \Big( \begin{pmatrix}x \\ y\end{pmatrix} + \begin{pmatrix} -\dot{y} \\ \beta\dot{y}\end{pmatrix} \Big) \Big\|^2
$$
where
$$
A=\begin{pmatrix} c_1 & c_2 \\ c_2 & c_3 \end{pmatrix}, \text{for some }~c_i~\text{for which $A$ is invertible}.
$$
Which $x,y:[0,T]\to \mathbb{R}$ solve the Euler-Lagrange equation
$$
\nabla_{\begin{pmatrix}x \\ y\end{pmatrix}} L-\partial_t\nabla_{\begin{pmatrix} \dot{x} \\ \dot{y}\end{pmatrix} }L=0
$$
with initial and terminal conditions $x(0)=q,x(T)=q',y(0)=p,y(T)=p'$? Please help!
$\Big($Would more information on how the constants $c_1,c_2,c_3$ relate to each other be useful? Because in fact $c_1=\sigma^2+\gamma^2, c_2=-\gamma(\sigma+\alpha),c_3=\alpha^2+\gamma^2$, where these constants $\alpha,\sigma,\gamma$ are positive and $\alpha\sigma>\gamma^2$.$\Big)$
EDIT: With the above choices for $c_1,c_2,c_3$ I got the E-L equation as
\begin{equation}
\nabla_{(x,y)} L = (\frac{1}{\alpha\sigma-\gamma^2})^2\begin{pmatrix}
0 \\
\nabla_{y} B_1+\nabla_{y} B_2
\end{pmatrix}
\end{equation}
with
\begin{equation}
\nabla_{y} B_1=2(\sigma+\gamma \beta) y + \frac{1}{\sigma+\gamma\beta}(\gamma \dot{y}-\sigma \dot{x})
\end{equation}
and
\begin{equation}
\nabla_{y} B_2=2(\gamma + \alpha \beta) y + \frac{1}{\gamma+\alpha \beta}(\alpha \dot{x}-\gamma \dot{y}).
\end{equation}
Also
\begin{equation}
\partial_t \nabla_{(\dot{x},\dot{y})} L=2 (A^{-1})^2(\begin{pmatrix}\ddot{x}\\ \ddot{y} \end{pmatrix}-\begin{pmatrix}
\dot{y}
\\
-\beta \dot{y}
\end{pmatrix})
\end{equation}
Which is a coupled ODE
| The Lagrangian doesn't depend on $t$, so it obeys Beltrami's identity. With $p=(x,y)$,
$$
L - \dot p\nabla_{\dot p}L = c_0
$$
or
$$
c_2^2 \left(x^2+y^2-\left(\beta ^2+1\right) \dot y^2\right)+c_3^2 \left(y^2-\beta ^2 \dot y^2\right)+2 c_2 c_1 \left(\beta \dot y^2+x
y\right)+2 c_2 c_3 \left(\beta \dot y^2+x y\right)+c_1^2 \left(x^2-\dot y^2\right)=c_0
$$
From Euler-Lagrange's equations we obtain
$$
x = \frac{\left(c_2 \left(c_2-\beta c_3\right)+c_1^2-\beta c_2 c_1\right) \dot y-c_2 \left(c_1+c_3\right) y}{c_1^2+c_2^2}
$$
now substituting into the former ODE we obtain a new ODE depending only on $y,\dot y$. Concluding, we have two independent constants to fix — $c_0$ and one additional constant from the last obtained ODE — plus four independent boundary conditions. This is not feasible.
NOTE
The Lagrangian is somewhat degenerate concerning the kinetic energy, because
$$
\frac 12\nabla_{\dot p}\left(\nabla_{\dot p}L\right) = \left(
\begin{array}{cc}
0 & 0 \\
0 & \left(c_1-\beta c_2\right){}^2+\left(c_2-\beta c_3\right){}^2 \\
\end{array}
\right)
$$
which is not positive definite.
EDIT
Corrected some equations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4011260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find the sum of all possible values of $\cos 2x + \cos 2y + \cos 2z.$ Let $x$, $y$, and $z$ be real numbers such that $ \cos x + \cos y + \cos z = \sin x + \sin y + \sin z = 0 $. Find the sum of all possible values of $ \cos 2x + \cos 2y + \cos 2z $.
Here is what I have done so far
$$ \cos x + \cos y = -\cos z $$
$$ (\cos x + \cos y)^{2} = (-\cos z)^{2} $$
$$ \cos^{2} x + \cos^{2} y + 2\cos x \cos y = \cos^{2}z $$
likewise,
$$ \sin x + \sin y = -\sin z $$
$$ (\sin x + \sin y)^{2} = (-\sin z)^{2} $$
$$ \sin^{2} x + \sin^{2} y + 2 \sin x \sin y = \sin^{2}z $$
from this, you get
$$ \cos^{2} x + \cos^{2} y + 2\cos x \cos y + \sin^{2} x + \sin^{2} y + 2\sin x \sin y = \sin^{2}z + \cos^{2}z $$
$$ 1 + 1 + 2(\cos x \cos y + \sin x \sin y) = 1 $$
$$ \cos x \cos y + \sin x \sin y = -\frac{1}{2} $$
$$ \cos(x-y) = -\frac{1}{2} $$
$$ x-y = \frac{2 \pi}{3}, \frac{4 \pi}{3} $$
likewise,
$$ \cos (x-z) = -\frac{1}{2} $$
$$ x-z = \frac{2 \pi}{3}, \frac{4 \pi}{3} $$
where do I go from here?
| Notice that $\begin{cases}\cos(x)+\cos(y)+\cos(z)=0\\\sin(x)+\sin(y)+\sin(z)=0\end{cases}\iff \underbrace{e^{ix}}_a+\underbrace{e^{iy}}_b+\underbrace{e^{iz}}_c=0$
We have $a+b+c=0\implies \bar a+\bar b+\bar c=0$
But since $|a|=|b|=|c|=1$ they satisfy $\bar a=\frac 1a$ and so on.
Therefore we get $$\frac 1a+\frac 1b+\frac 1c=0\implies \dfrac{ab+bc+ca}{abc}=0$$
Note that $|abc|=1$ so the numerator should be zero.
But then $$\overbrace{(a+b+c)^2}^{=0}=\overbrace{(a^2+b^2+c^2)}^{e^{i2x}+e^{i2y}+e^{i2z}}+2\overbrace{(ab+bc+ca)}^{=0}$$
And your sum is simply $0$ too.
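A numeric spot-check (my own addition, not part of the answer): solutions of the constraint include all rotations of the three cube roots of unity, $x=t$, $y=t+2\pi/3$, $z=t+4\pi/3$, and for all of them $\cos 2x+\cos 2y+\cos 2z$ vanishes.

```python
import cmath, math, random

random.seed(1)
max_dev = 0.0
for _ in range(100):
    # a family of solutions: three unit vectors 120 degrees apart, rotated by t
    t = random.uniform(0, 2 * math.pi)
    x, y, z = t, t + 2 * math.pi / 3, t + 4 * math.pi / 3
    # check the constraint e^{ix} + e^{iy} + e^{iz} = 0 ...
    constraint = abs(cmath.exp(1j * x) + cmath.exp(1j * y) + cmath.exp(1j * z))
    # ... and the claimed conclusion cos 2x + cos 2y + cos 2z = 0
    s = math.cos(2 * x) + math.cos(2 * y) + math.cos(2 * z)
    max_dev = max(max_dev, constraint, abs(s))
```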
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4011407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Tight upper bound on the sum of $\left( \frac{1}{k} \right)^k \exp \left(-\frac{\log(N)(\sqrt{N-k} - \sqrt{k})^2}{N}\right)$ Consider the following sum:
\begin{align}
S = \sum_{k=\lceil\log(N)\rceil}^N\left( \frac{1}{k} \right)^k \exp \left(-\frac{\log(N)(\sqrt{N-k} - \sqrt{k})^2}{N}\right).
\end{align}
Is it possible to show that this sum is of order $1/N$ for large $N$?
My attempt:
When $k$ is the same order as $N$, then $(1/k)^k$ is very small. It seems sufficient to consider $k \ll N$ and write
\begin{align*}
S \leq N \frac{1}{(\log(N))^{\log(N)}} \exp( - \log(N)) \leq \frac{1}{N},
\end{align*}
where I used the fact that $N \frac{1}{(\log(N))^{\log(N)}} \leq 1$ for $N$ large enough.
| You're absolutely right. Slightly more formally, we might do something like the following:
Expand out the exp term to get
$$
\sum k^{-k} \exp \left ( - \frac{\log N}{N} (N - 2 \sqrt{k(N-k)}) \right )
$$
Distributing and simplifying gives
$$
\sum k^{-k} \frac{1}{N} \exp \left ( \frac{2 \sqrt{k(N-k)} \log N}{N} \right )
$$
Then the stuff inside the $\exp$ is maximized when $k = \frac{N}{2}$. This is actually a terrible upper bound, but it's good enough for what we want. So our sum is at most
$$
\sum k^{-k} \frac{1}{N} e^{\log N}
$$
That is,
$$S_N \leq \sum_{k = \log N}^N k^{-k}$$
Now, as you've noticed, the first term will dominate this series. In fact, since we're already grossly overestimating from the last step, we might as well keep making our lives easy:
$$
\sum_{k = \log N}^N \left ( \frac{1}{k} \right )^k
\leq
\sum_{k = \log N}^\infty \left ( \frac{1}{\log N} \right )^k
$$
But now we're in the clear! Applying the formula for a geometric series gives
$$
S_N \leq
\left ( \frac{1}{\log N} \right )^{\log N} \frac{1}{1 - \frac{1}{\log N}}
$$
Of course, $\frac{1}{1 - (\log N)^{-1}} \leq 2$ for $N > 8$ or so, which means we can safely ignore it. So at long last
$$
S_N \leq O \left ( \left ( \frac{1}{\log N} \right )^{\log N} \right ).
$$
This is actually better than $O \left ( \frac{1}{N} \right )$, even if it's not obvious at first. You can compute
$\lim_{N \to \infty} \frac{(\log N)^{-\log N}}{N^{-1}} = 0$ (or look at a graph) to convince yourself of this.
I hope this helps ^_^
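A numeric sanity check of the final bound (my own sketch, not part of the answer): computing $S_N$ directly and comparing it with both $1/N$ and $(1/\log N)^{\log N}$.

```python
import math

def S(N):
    # direct evaluation of the sum from k = ceil(log N) to N
    k0 = math.ceil(math.log(N))
    total = 0.0
    for k in range(k0, N + 1):
        weight = math.exp(-math.log(N) * (math.sqrt(N - k) - math.sqrt(k)) ** 2 / N)
        total += (1.0 / k) ** k * weight   # (1/k)^k underflows to 0 for large k, which is fine
    return total

def bound(N):
    return (1.0 / math.log(N)) ** math.log(N)
```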
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4011543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
maximum and minimum value of $(a+b)(b+c)(c+d)(d+e)(e+a).$
Let $a,b,c,d,e\geq -1$ and $a+b+c+d+e=5.$ Find the maximum and minimum value of $S=(a+b)(b+c)(c+d)(d+e)(e+a).$
I couldn't proceed much, however I think I got the minimum and maximum case.
For minimum, we get $-512$ with equality on $(-1,-1,-1,-1,9).$
For maximum, we get $288$ with equality on $(-1,-1,-1,4,4).$
| This is a problem from the China national math olympiad; here is my proof:
WLOG let $|e|=\max(|a|,|b|,|c|,|d|,|e|)$, so we have $(e+d)(e+a)>0$ (note that we find the minimum first). For the product to be negative we have $2$ cases. One is that all $3$ of $(b+c),(c+d),(a+b)$ are negative, in which case we have from AM-GM that
$$(-a-b)(-b-c)(-c-d)(d+e)(e+a)\le (\frac{-a-2b-2c-d}{3})^3(\frac{a+d+2e}{2})^2=(\frac{a+d+2e-10}{3})^3(\frac{a+d+2e}{2})^2 \le 512$$
so this case is done. The second case is that only one of them is negative; let $a+b$ be the negative one (the way we use AM-GM gives us the freedom to assume this). Now we have
$$(-a-b)(b+c)(c+d)(d+e)(e+a)\le (\frac{2(c+d+e)}{5})^5=(\frac{2(5-a-b)}{5})^5\le(\frac{14}{5})^5< 512$$
so we are done here. Now we want to find the max, which we do in the following way.
If all of $a+b,b+c,c+d$ are positive, we have from AM-GM that
$$S\le (\frac{2(a+b+c+d+e)}{5})^5=32<288.$$
The next case is the following: we have $a+b\le0$, $b+c\le0$, $c+d\ge0$; then by AM-GM
$$(-a-b)(-b-c)(c+d)(d+e)(e+a)\le (\frac{-a-2b-c}{2})^2(\frac{c+d+e+a}{2})^2(d+e)=(\frac{d+e-b-5}{2})^2(\frac{5-b}{2})^2(d+e)\le 288$$
The last case is $a+b\le0$, $b+c\ge0$, $c+d\le0$; then by AM-GM we have
$$S=(-a-b)(b+c)(-c-d)(d+e)(e+a)\le (\frac{-a-b-c-d}{2})^2(\frac{b+c+d+e+e+a}{3})^3=(\frac{e-5}{2})^2(\frac{5+e}{3})^3< 288$$
and we are done.
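A brute-force check (my own addition, not part of the proof): the two claimed extremal points evaluate to exactly $-512$ and $288$, and a seeded random search over the feasible region never beats them.

```python
import random

def S(a, b, c, d, e):
    return (a + b) * (b + c) * (c + d) * (d + e) * (e + a)

# the claimed extremal points
min_val = S(-1, -1, -1, -1, 9)   # expected -512
max_val = S(-1, -1, -1, 4, 4)    # expected 288

# seeded random search over the feasible region: a,...,e >= -1, sum = 5
random.seed(2)
best_min = best_max = 0.0
for _ in range(200_000):
    v = [random.uniform(-1, 3) for _ in range(4)]
    last = 5 - sum(v)            # forces the sum constraint
    if last < -1:
        continue                 # infeasible sample, reject
    s = S(*v, last)
    best_min = min(best_min, s)
    best_max = max(best_max, s)
```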
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4011875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Expected number of tosses to get a head from a coin using integration formulae? I recently started learning about expectation in probability. First of all, any good resources to study it would be appreciated if anyone can share them.
What I have learnt so far the expected value of some Unknown Variable say $x$ be $ E(x)$
which boils down to the equation $ E(x) = \int_{-\infty}^\infty x\,P(x)\, dx $, where $P(x)$ is the probability density at a specific $x$.
So I wanted to find the expected number of tosses to get a head from an unbiased coin, so I used this equation to solve it:
$E(\text{getting heads}) = \int_{0}^\infty\dfrac{x}{2^x}\, dx$. It is a pretty standard result that $E(\text{getting heads}) = 2$, but this integral gives another answer, namely $\dfrac{1}{\ln^2\left(2\right)}$. Can anyone tell me where I am going wrong in my understanding?
|
How will I solve this summation to get an answer 2 ?
From your phrasing of the question, I'm assuming that you threw the coin $n$ times and got tails each time, and then you got heads on the $(n+1)$-th time. The probability of this occurring is $\frac{1}{2^{n+1}}$. So you want to find the value of
$$ \frac{1}{2} \sum_{n=0}^\infty \frac{n+1}{2^n} $$
There is a trick for solving this kind of sum, assuming that you already know the sum of a geometric series and some basic calculus. (If not, there are other ways of proving it, but the only ways I know are conceptually more difficult.)
$$ \begin{align}
\sum_{n=0}^\infty (n + 1) x^n &= \sum_{n=0}^\infty \frac{d}{dx} x^{n+1} \\
&= \frac{d}{dx} \sum_{n=0}^\infty x^{n+1} \\
&= \frac{d}{dx} \frac{x}{1-x} \\
&= \frac{1}{(1-x)^2} \\
\end{align} $$
The second line (moving the derivative outside the sum) is actually nontrivial, but it is okay in this instance because the sum converges uniformly. (This is a technical detail that may or may not be interesting to you.) The third line is just summing a geometric series, and the fourth line is basic calculus and some algebra.
Plugging in $x=\frac{1}{2}$ recovers your original problem. The answer is
$$ \frac{1}{2} \sum_{n=0}^\infty \frac{n + 1}{2^n} = \frac{1}{2} \frac{1}{\left( 1 - \frac{1}{2} \right)^2} = 2 $$
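A quick numeric check (my own addition): the partial sums of the series converge to $2$, while the integral from the question really does give $1/\ln^2 2 \approx 2.081$ — the discrepancy comes from treating a discrete variable as if it were continuous.

```python
import math

# partial sums of (1/2) * sum_{n>=0} (n+1)/2^n  (tail beyond n=200 is negligible)
partial = sum(0.5 * (n + 1) / 2 ** n for n in range(200))

# the continuous integral actually computed in the question
integral = 1 / math.log(2) ** 2   # ~2.0814, not 2
```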
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4012016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Convexity of open balls in a metrizable subset of a locally convex t.v.s. The following result is well known (Rudin, Functional Analysis, Theorem 1.24): If $X$ is a locally convex topological vector space with a countable local base then there is a compatible metric $d$ such that all open balls are convex.
My question is: Assume that $X$ is a locally convex t.v.s. which is not metrizable and $C$ is a compact, convex and metrizable subset of $X$. Is there a metric compatible with the topology of $C$ such that all open balls are convex?
| $Y=\operatorname{span}(C)$ is a locally convex tvs which is still metrisable I think (using that $C$ has a countable base, being compact metrisable). So the first result would apply there.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4012285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
A prime $p = 2a^2 - 2ab + 3b^2$ for integers $a$ and $b$ cannot be congruent to $11 \pmod{20}$.
If a prime $p = 2a^2 - 2ab + 3b^2$ for integers $a$ and $b$, then $p$ cannot be congruent to $11 \pmod{20}$.
I need to prove the conclusion and I tried by constructing a contradiction:
First, I suppose $$p \equiv 11 \pmod{20}$$ and factorized the prime as $$(a+3b)(a-b)$$
So I get two possible cases:
* $(a+3b)\equiv 1 \pmod{20}$ and $(a-b) \equiv 11 \pmod{20}$
* $(a+3b)\equiv 11\pmod{20}$ and $(a-b) \equiv 1 \pmod{20}$
However I do not know how to deal with it.
Is this a right direction or there are other good methods to prove the statement? Looking forward to any suggestions!
| Well, initially I didn't see that your factorization was wrong, so I wrote the answer according to the cases you got in the end. This is how you would do it if the factorization were correct.
Just subtract the two congruences you got to get $$4b \equiv \pm 10 \pmod{20},$$ which means $4b \mp 10$ is divisible by $20$. But this is not possible, since $4b \mp 10 \equiv 2 \pmod{4}$ while every multiple of $20$ is divisible by $4$.
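Independently of the (flawed) factorization route, the statement itself can be verified by brute force, since $2a^2-2ab+3b^2 \bmod 20$ depends only on $a, b \bmod 20$ (my own check, not part of the answer):

```python
# every residue mod 20 attained by the form 2a^2 - 2ab + 3b^2
values = {(2 * a * a - 2 * a * b + 3 * b * b) % 20
          for a in range(20) for b in range(20)}
```

The residue $11$ never appears (in fact the form only takes the values $0, 2, 3$ mod $5$, and $11 \equiv 1 \pmod 5$), so no value of the form — prime or not — is $\equiv 11 \pmod{20}$.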
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4012484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Get a recurrence formula for the integral $I_n=\int\frac{dx}{\sin^n x}$ The first thing I noticed is that we can write
$$I_n - I_{n-2} = \int \frac{1 - \sin^2 x}{\sin^n x} dx = \int \frac{\cos^2 x}{\sin^n x} dx$$
(Sometimes I will skip writing $dx$ for brevity.)
However, when I tried to continue integrating in this form, it seemed an even harder problem, so I decided to change approach and try integration by parts.
$$I_n = \int \frac{1}{\sin^n x} = \int \frac{1}{\sin x \cdot \sin^{n - 1}x}$$
This one was a bit desperate, I know, so I decided to put this idea aside as well.
So, do you have any ideas on how to approach this problem? Thank you in advance.
| $$
I_n = \int \dfrac{\sin x}{\sin^{n+1} x}dx = - \dfrac{\cos x}{\sin^{n+1} x} - (n+1)\int \dfrac{\cos^2 x}{\sin^{n+2} x}dx = - \dfrac{\cos x}{\sin^{n+1} x} - (n+1)(I_{n+2}-I_n)
$$
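Rearranged, the identity gives $I_{n+2} = \frac{n}{n+1}I_n - \frac{\cos x}{(n+1)\sin^{n+1}x}$, which can be checked by differentiation. A small numeric sketch (my own, not from the answer):

```python
import math

def check(n, x, h=1e-6):
    # G(x) = -cos x / ((n+1) sin^{n+1} x); the recurrence rearranges to
    # I_{n+2} = G + n/(n+1) * I_n, so we must have
    # G'(x) + n/(n+1) * csc^n x == csc^{n+2} x
    G = lambda t: -math.cos(t) / ((n + 1) * math.sin(t) ** (n + 1))
    Gp = (G(x + h) - G(x - h)) / (2 * h)        # central finite difference
    lhs = Gp + n / (n + 1) / math.sin(x) ** n
    rhs = 1 / math.sin(x) ** (n + 2)
    return abs(lhs - rhs) / abs(rhs)            # relative error
```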
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4012624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that for every $a \in ℝ$ and for every $x>0$ the inequality $x^{-a}+a · e · \ln(x) \geq 0$ holds I have tried some techniques to solve the following problem but I was unable to solve it:
"Prove that for every $a \in ℝ$ and for every $x>0$ the inequality $x^{-a}+a · e · \ln(x) \geq 0$ holds".
I tried to take the derivative and see if it is always positive, I tried a direct approach manipulating the expression, and I also tried to divide it into cases, but I always ran into trouble.
I would appreciate any help or hint you could give me. Thanks.
| $$
x^{-a}+ae \ln(x) = e^{-a \ln(x)} + ae \ln(x)
$$
so that, with the substitution $u = -a \ln (x)$, this is equivalent to showing that
$$
e^u \ge eu
$$
for all $u \in \Bbb R$, and that is true because $f(u) = e^u$ is convex:
$$
e^u = f(u) \ge f(1) + (u-1)f'(1) = e + (u-1)e = eu \, .
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4012753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
$\frac 21 \times \frac 43 \times \frac 65 \times \frac 87 \times \cdots \times \frac{9998}{9997} \times \frac {10000}{9999} > 115$ Prove that
$$x = \frac 21 \times \frac 43 \times \frac 65 \times \frac 87 \times \cdots \times \frac{9998}{9997} \times \frac {10000}{9999} > 115$$
I saw some similar problems, like
show $\frac{1}{15}< \frac{1}{2}\times\frac{3}{4}\times\cdots\times\frac{99}{100}<\frac{1}{10}$ is true
but didn't manage to get $115$. I could get the weaker conclusion $x>100$, though:
\begin{align}
x^2 &= \left(\frac 21 \times \frac 21\right) \times \left(\frac 43 \times \frac 43\right) \times \cdots \times \left(\frac{10000}{9999} \times \frac {10000}{9999}\right) \\
&\ge \left(\frac 21 \times \frac 32\right) \times \left(\frac 43 \times \frac 54\right) \times \cdots \times \left(\frac{10000}{9999} \times \frac {10001}{10000}\right) \\
&= 10001
\end{align}
so $x > 100$
| Show this product is equal to:
$$\frac{4^{5000}}{\binom{10000}{5000}}$$
Then use the inequality about the central binomial coefficient:
$$\frac{4^n}{\sqrt{4n}}\leq\binom{2n}{n}\leq\frac{4^n}{\sqrt{3n+1}}$$
Setting $n=5000$ this gives:
$$\frac{4^{5000}}{\sqrt{20000}}<\binom{10000}{5000}<\frac{4^{5000}}{\sqrt{15001}}$$
Or
$$122<\sqrt{15001}<\frac{4^{5000}}{\binom{10000}{5000}}<\sqrt{20000}<142$$
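All three claims — the product equals $4^{5000}/\binom{10000}{5000}$, exceeds $115$, and sits between $\sqrt{15001}$ and $\sqrt{20000}$ — can be verified with exact integer arithmetic (my own check, not part of the answer):

```python
import math

# the product 2/1 * 4/3 * ... * 10000/9999 as an exact fraction num/den
num = den = 1
for k in range(1, 5001):
    num *= 2 * k       # 2 * 4 * ... * 10000
    den *= 2 * k - 1   # 1 * 3 * ... * 9999

c = math.comb(10000, 5000)
```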
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4012920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
Calculating the Rolling Variance of a set of numbers I would like to subscribe to a WebSocket stream which will supply me with many numbers per second. From this data, I would like to calculate the variance of say the last 1000 numbers.
How can I do this in a rolling fashion? That is, I would like some computation comparable to this one for the mean of the last 1000 numbers:
$$\rm{mean}_{i+1} = \rm{mean}_{i} + \frac{1}{1000}\left(x_{i+1}-x_{i-999}\right)$$
Thanks in advance for any help.
Ben
| $$\rm{variance}_{i+1} = \frac{1}{1000}\sum_{j=i-998}^{i+1}\left(x_j-\rm{mean}_{i+1}\right)^2\\
=\frac{1}{1000}(x_{i+1}^2-x_{i-999}^2)+\rm{variance}_{i}-\rm{mean}_{i+1}^2+\rm{mean}_{i}^2,$$
where you already computed $\rm{mean}_{i+1}$ according to your equation.
If you wonder why the equality holds, see here.
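A runnable version of this update (my own sketch — the class and method names are mine): keep a rolling $E[x^2]$ alongside the rolling mean and use $\mathrm{var} = E[x^2]-\mathrm{mean}^2$, each updated in $O(1)$ per new value.

```python
import random
from collections import deque

class RollingVariance:
    """Rolling (population) variance over the last `size` numbers."""
    def __init__(self, size):
        self.size = size
        self.window = deque()
        self.mean = 0.0
        self.mean_sq = 0.0   # running E[x^2] over the window

    def push(self, x):
        self.window.append(x)
        if len(self.window) > self.size:
            old = self.window.popleft()
            # the O(1) updates from the question and answer
            self.mean += (x - old) / self.size
            self.mean_sq += (x * x - old * old) / self.size
        else:
            # warm-up while the window is still filling
            n = len(self.window)
            self.mean += (x - self.mean) / n
            self.mean_sq += (x * x - self.mean_sq) / n

    @property
    def variance(self):
        return self.mean_sq - self.mean ** 2

# compare against a direct computation over the last 1000 values
random.seed(3)
rv = RollingVariance(1000)
data = [random.gauss(0, 1) for _ in range(5000)]
for v in data:
    rv.push(v)
tail = data[-1000:]
m = sum(tail) / 1000
direct = sum((v - m) ** 2 for v in tail) / 1000
```

Note that $E[x^2]-\mathrm{mean}^2$ can suffer catastrophic cancellation when the mean is large relative to the spread; for production use, a Welford-style update of the centered sum of squares is numerically safer.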
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4013062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
For which $a$ is the matrix negative definite? Find the number of integer numbers of $a$ such that the matrix $$\begin{pmatrix}-1& 0 & 5 \\ 0 & a-50 & 0 \\ 5 & 0 & 6-a\end{pmatrix}$$ is negative definite.
For that, the eigenvalues must be negative, right?
I calculated the eigenvalues using Wolfram, and they are the expressions used below.
So all these have to be negative and we get a system of inequalities :
$$a-50<0 \\ \frac{1}{2}\left (-\sqrt{a^2-14a+149}-a+5\right ) <0 \\ \frac{1}{2}\left (\sqrt{a^2-14a+149}-a+5 \right )<0$$ So we get from the first $a<50$, from the second $a\in \mathbb{R}$ and from the last one $a>31$.
So the integer values for $a$ are from $32$ to $49$, so there are $49-32+1=18$ values, right?
| Your approach is correct, but it could be slightly simpler: with the help of the rule of Sarrus or by other means, it's easy to find the characteristic polynomial is:
$$(\lambda-a+50)(\lambda^2+(a-5)\lambda+a-31)$$
The eigenvalues must all be negative, so, for the first factor, that means $a<50$. For the second factor, instead of computing the roots, note that the sum of the roots must be negative, while the product of the roots must be positive. With Vieta's formulas, that means $a>31$ and $a>5$. So the condition is indeed $31<a<50$.
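The count can be confirmed by brute force with Sylvester's criterion (my own check, not part of the answer): for negative definiteness the leading principal minors must alternate in sign, starting negative.

```python
def negative_definite(a):
    # leading principal minors of the matrix; D2 and D3 simplify because
    # the off-diagonal entries in row/column 2 are zero
    d1 = -1
    d2 = -1 * (a - 50)              # det of the top-left 2x2 block
    d3 = (a - 50) * (a - 31)        # det of the full matrix, expanded by hand
    # Sylvester: negative definite iff D1 < 0, D2 > 0, D3 < 0
    return d1 < 0 and d2 > 0 and d3 < 0

count = sum(1 for a in range(-1000, 1001) if negative_definite(a))
```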
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4013515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Angle between tangents to circle given by $x^2 + y^2 -12x -16y+75=0$?
Given the circle: $C(x,y)=x^2 + y^2 -12x -16y+75=0$, find the two tangents from origin
First, I get the line which passes through point of contact of tangents from origin using result here which is :
$$ -12x-16y+2 \cdot 75 = 0$$
Or,
$$ 6x + 8y -75 =0 $$
Now, I use the result discussed in this answer, which says that the pair of straight lines from a point $P$ to a conic is given as:
$$ C(0,0) C(x,y) = (6x+8y-75)^2$$
This leads to:
$$ 75 (x^2 + y^2 -12x-16y+75) = (6x+8y-75)^2$$
$$ 0 = 75(x^2 +y^2 - 12 x - 16y +75) - (6x+8y-75)^2$$
To apply the result in this answer, the coefficients are
$$ a= 75 - 36, \quad b=75-64, \quad h= - \frac{6 \cdot 8 \cdot 2 }{2}$$
Or,
$$ a = 39, b= 11 , h=-48$$
$$ \tan \theta = \frac{2 \sqrt{(-48)^2-39 \cdot 11)}}{39+11} = \frac{2 \sqrt{5^4 \cdot 3}}{25} = 2 \sqrt{3}$$
However, the intended answer was:
$$ \tan \theta = \frac{1}{\sqrt{3} } $$
Where have I gone wrong?
| A much saner and simpler way:
Observe the right triangle formed by the segment from the origin to the centre and the segment from the origin to the point of tangency. Since tangent and radius are perpendicular, we can apply good ol' trigonometry easily:
$$ \sin \theta = \frac{r}{OC}= \frac{5}{10}= \frac{1}{2}$$
Hence $\theta = \frac{\pi}{6}$, which makes the total angle between the tangents $\frac{\pi}{3}$.
However, all the other answers are beautiful in demonstrating different ways of approaching the same problem.
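A numeric cross-check of both routes (my own addition): the geometric one gives the full angle $2\arcsin(r/d)=\pi/3$, and the pair-of-lines formula, evaluated carefully, gives $2\sqrt{1875}/50=\sqrt 3$ — consistent with $\tan(\pi/3)$.

```python
import math

cx, cy = 6.0, 8.0                    # centre, from x^2 + y^2 - 12x - 16y + 75 = 0
r = math.sqrt(cx**2 + cy**2 - 75)    # radius = 5
d = math.hypot(cx, cy)               # distance from origin to centre = 10

angle = 2 * math.asin(r / d)         # full angle between the two tangents

# the pair-of-lines route: 39x^2 - 96xy + 11y^2 = 0 gives
# tan(theta) = 2*sqrt(h^2 - ab)/(a + b) with a = 39, b = 11, h = -48
a, b, h = 39.0, 11.0, -48.0
tan_theta = 2 * math.sqrt(h * h - a * b) / (a + b)
```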
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4013658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Closedness of a set I am trying to understand closed sets.
Is the set
$$
A =\{x \in \mathbb{R}^2: \lVert x \rVert_2 \leq 1 \} -\{0\}
$$
closed?
I think it is not, because if we take the sequence $(\frac{1}{n})_{n=1}^{\infty}$, we can see it lies in $A$, but its limit is zero, which is not in the set $A$. Because of that, $A$ is not a closed set.
Is this right? Thank you.
| It is almost correct, but you missed the fact that $(\forall n\in\Bbb N):\frac1n\notin\Bbb R^2$. Take the sequence $\left(\left(\frac1n,0\right)\right)_{n\in\Bbb N}$ instead.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4013909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Similarity by SSA (side-side-angle) in obtuse triangles Source: Challenge and Thrills of Pre-College Mathematics, Page 74, Problem 54:
"In two obtuse triangles, an acute angle of the one is equal to an angle of the other, and sides about the other acute angles are proportional. Prove that the triangles are similar."
I tried this first by cosine rule, and the question reduced to:
Assume $\triangle ABC$ and $\triangle DEF$ with $\angle A$ and $\angle D$ being obtuse, $\angle B=\angle E$ and $AC/BC=DF/EF$. We may take $DF=k\cdot AC$, $EF=k\cdot BC$ and apply $\cos B= \cos E$, obtaining $DE=k\cdot AB$ and the problem is solved. EDIT: Even this however is difficult to prove, as pointed out by cosmo5 in his answer.
What is interesting and challenging is to prove the same without the cosine law (probably the expected solution, as cosine law doesn't seem to help), as the law hasn't been introduced in the book until later chapters, while this problem is taken from chapter 3.
My attempt:
Fix $\triangle ABC$ as well as points $E$ and $F$ of $\triangle DEF$. Assuming $DF=k\cdot AC$, $EF=k\cdot BC$, the locus of $D$ will be a circle with centre $F$ and radius $k\cdot AC$.
After this however, I couldn't go further with the same concept. Either a hint or a solution related to my thought process, or any other thought process for that matter, would be greatly appreciated.
| They are similar if the known equal angles are obtuse. If the known equal angles are acute, then the triangles are similar if the triangles are both acute or both obtuse. Why this is not more widely known is above my pay grade!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4014042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
If the Lebesgue outer measure of a set $A \subset \mathbb{R}$ is positive, does $A$ necessarily contain an interior point? I was looking at this exercise from measure theory.
If the set $A \subset \mathbb{R}$ contains at least one interior point show that $m^*(A)>0$.
I have done the exercise, but I am wondering whether the converse is true: if
$m^*(A)>0$, does that imply that $A$ contains at least one interior point? I haven't figured anything out. Could someone help me? I feel that the statement is true because for $A$ to have positive outer measure it must contain an interval, and therefore an interior point, but I am not certain.
| No, it is not true. Take a fat Cantor set, which is measurable with Lebesgue measure greater than $0$ (and therefore its outer measure is greater than $0$). But its interior is empty.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4014178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Bertrand's paradox: "... centre point lies in one of the two centre quarters of the diameter perpendicular to this direction ..."? I am currently studying Photogrammetric Computer Vision – Statistics, Geometry, Orientation and Reconstruction by Förstner and Wrobel. Chapter 2 Probability Theory and Random Variables gives the following example:
In the case of alternatives which are not countable, e.g., when the event is to be represented by a real number, we have difficulties in defining equally probable events. This is impressively demonstrated by Bertrand’s paradox (Fig. 2.1), which answers the question: What is the probability of an arbitrarily chosen secant in a circle longer than the side of an inscribing equilateral triangle? We have three alternatives for specifying the experiment:
*
*Choose an arbitrary point in the circle. If it lies within the concentric circle with half the radius, then the secant having this point as centre point is longer than the sides of the inscribing triangle. The probability is then $1/4$.
*Choose an arbitrary point on the circle. The second point of the secant lies on one of the three segments inducing sectors of $60^\circ$. If the second point lies in the middle sector the secant through these points is longer than the side of the inscribing triangle. The probability is then $1/3$.
*Choose an arbitrary direction for the secant. If its centre point lies in one of the two centre quarters of the diameter perpendicular to this direction the secant is longer than the side of the inscribing triangle. The probability is then $1/2$.
I don't quite understand the description given by 3.:
If its centre point lies in one of the two centre quarters of the diameter perpendicular to this direction the secant is longer than the side of the inscribing triangle. The probability is then $1/2$.
What is meant by "two centre quarters of the diameter perpendicular to this direction", and how is the probability $1/2$? A clearer explanation would be greatly appreciated.
| *
*In the hexagram picture on the right, the chosen direction of the secant is horizontal
*so the centre of the secant is on the vertical line
*and the secant will be too short if the centre is in the top quarter of the vertical line or the bottom quarter (the thin parts)
*but the secant will be long enough if the centre is in the middle two quarters (the thick parts of the vertical line)
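Method 3 is easy to confirm with a short Monte Carlo simulation, a sketch assuming a unit circle: fixing the direction, the chord's centre is uniform on the perpendicular diameter, and the chord exceeds the triangle side $\sqrt{3}$ exactly when the centre lies in the middle half:

```python
import math, random

random.seed(0)
trials = 100_000
side = math.sqrt(3)                   # side of the inscribed equilateral triangle
hits = 0
for _ in range(trials):
    d = random.uniform(-1, 1)         # centre's position along the diameter
    chord = 2 * math.sqrt(1 - d * d)  # length of the chord with that centre
    if chord > side:                  # equivalent to |d| < 1/2
        hits += 1
print(hits / trials)                  # close to 1/2
```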
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4014324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is the set of matrices $X \in M_{n\times n}$ satisfying $AX + XA^T = -L$ for a given but arbitrary $A,L \in M_{n\times n}$ a subspace? Is the set of matrices $X \in M_{n\times n}$ satisfying $AX + XA^T = -L$ for a given but arbitrary $A,L \in M_{n\times n}$ a subspace under usual definitions of matrix addition and scalar multiplication?
Apparently it is not ... For $X_1, X_2\in M_{n\times n}$ and $c \in R$ it is clearly closed by addition and multiplication due to matrix operations. The zero vector is not obvious to me since if $X=[0]$, this would have to make $L=0$ which may not be the case. Is that enough of a justification?
| You are right. It is a subspace only if $L=0$. Suppose $X_1$ and $X_2$ belong to the set. Then $AX_1+X_1A^T=-L$ and $AX_2+X_2A^T=-L$ and it follows that $A(X_1+X_2)+(X_1+X_2)A^T=-2L$, but $-2L=-L$ only if $L=0$. In general, non-homogeneous linear relations do not lead to (vector) subspaces.
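A concrete numerical illustration of this point (the matrices below are chosen for this sketch, they are not from the question): two solutions of $AX+XA^T=-L$ add up to a solution for $-2L$, so the set fails closure under addition whenever $L\ne 0$.

```python
import numpy as np

def lyap_lhs(A, X):
    """Left-hand side A X + X A^T of the equation."""
    return A @ X + X @ A.T

# Matrices chosen for this sketch (not from the question); A is nilpotent so
# the equation A X + X A^T = -L has many solutions for a suitable L.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
L = np.array([[-2.0, 0.0], [0.0, 0.0]])

X1 = np.array([[0.0, 1.0], [1.0, 0.0]])   # one solution
X2 = np.array([[1.0, 1.0], [1.0, 0.0]])   # a second, different solution

print(np.allclose(lyap_lhs(A, X1), -L))            # True
print(np.allclose(lyap_lhs(A, X2), -L))            # True
print(np.allclose(lyap_lhs(A, X1 + X2), -2 * L))   # True: the sum solves -2L ...
print(np.allclose(lyap_lhs(A, X1 + X2), -L))       # False: ... so not -L
```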
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4014453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Let $U \subset \mathbb{R}$ ,open and $h:U \to \mathbb{R}$ , where h is a uniformly continuous homeomorphism.Then $U=\mathbb{R}$
Let $U \subset \mathbb{R}$ ,open and $h:U \to \mathbb{R}$ , where h is a uniformly continuous homeomorphism.Then $U=\mathbb{R}$
My attempt :
Let $p \in \mathbb{R}$ and $p$ not in $U$ be a limit point of $U$.I will try to show that $p \in U$,so that I can conclude that $U$ is both open and closed.
Now, we create a sequence $\{u_n\} \to p$.Since it is a convergent sequence so it is a cauchy sequence. Since $h$ is uniformly continuous and $\mathbb{R}$ is complete so $\{h(u_n)\} \to h(p)$.Now,as it is a homeomorphism so it is an onto mapping hence $p \in U$.So $U$ is both open and closed.
Now, we will try to show that the only open and closed subset of $\mathbb{R}$ are $\phi$ and $\mathbb{R}$.Let $A$ be an open and closed subset of $\mathbb{R}$ then, as $A$ is open it can be written as the union of countable disjoint segments in $\mathbb{R}$.
So, $A=(a_1,a_2) \cup (a_2,a_3) ..\cup (a_{n-1},a_{n})$.So $a_2 \in A^c$ which is also a closed and open set of $\mathbb{R}$.Hence, $a_2$ is an interior point of the set $A^c$, which is not possible.
I don't think it is correct and I am missing something. Can someone just go through my proof.
| First part requires a small correction. You can only say that the Cauchy sequence $h(u_n)$ converges to some real number $x$. Since $h$ is onto you can write $x$ as $h(q)$ for some $q\in U$. But continuity of the inverse map shows $u_n \to q$, which forces $p=q \in U$.
The fact that there are no open and closed subsets of $\mathbb R$ other than $\mathbb R$ itself and the empty set is just connectedness of the real line.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4014638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A matrix equation $Ax=0$ has infinite solutions, does $A^Tx = 0$ have infinite solutions? I'm wondering whether a system with a transpose of a matrix has the same type of solution that the original matrix system has. If an equation $Ax=0$ equation has a unique solution, would a system with $A$ transpose instead of $A$ also have a unique solution? And what about with no solution, and infinite solutions?
| If you are dealing with square matrices, then the answer is yes. You are essentially asking if the matrix is invertible or not, and $A$ is invertible iff $\det(A)\ne0$ and $\det(A)=\det\!\left(A^T\right)$.
However,
$$
\begin{bmatrix}
3&1&2\\
2&0&1
\end{bmatrix}
\begin{bmatrix}1\\1\\-2
\end{bmatrix}
=\begin{bmatrix}0\\0
\end{bmatrix}
$$
and any real multiple of this solution is a solution. Yet
$$
\begin{bmatrix}
3&2\\
1&0\\
2&1
\end{bmatrix}
x
=\begin{bmatrix}0\\0\\0
\end{bmatrix}
$$
requires $x=\begin{bmatrix}0\\0\end{bmatrix}$.
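The answer's example can be double-checked numerically (a quick sketch with NumPy):

```python
import numpy as np

A = np.array([[3, 1, 2],
              [2, 0, 1]])
x = np.array([1, 1, -2])
print(A @ x)                        # [0 0]: a nonzero null vector of A

# A^T has rank 2, equal to its number of columns, so A^T x = 0 forces x = 0:
print(np.linalg.matrix_rank(A.T))   # 2
```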
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4014818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Global existence of ODE $f'(x)=-f(x)^3$, $f(0)=a$ I am studying a PDE and as simplification I begin with studying the ODE
\begin{equation*}
f(t)'=-f(t)^3, f(0)=a.
\end{equation*} I want to develop global existence for this ODE, i.e. for any $T>0$ $$\sup_{t\in[0,T]}|f(t)|<\infty.$$
Of course we can just write down the solution which is given by $f(t)=\pm \frac{1}{\sqrt{1/a^2+2t}}$ with + for $a>0$ and - for $a<0$. However such an explicit solution can't be hoped for in the PDE case and thus I want to know why global solutions exist. This clearly depends on the minus sign in the equation as $g(t)'=g(t)^3$ blows-up in finite time. However any Lipschitz argument or norm bound will treat both equations in the same manner and not account for the negativity.
Of course the idea would be to use some bounds to repetitively apply the local solution theory and glue solutions together, however one needs to show that these solution intervals don't decrease in size.
I can see how the equations behave differently from a plot but I am lacking a convincing argument. If we view the equation as $f(t)'=F(f(t))$ with $F(x)=-x^3$ and assume that $a>0$, then we have $F(a)<0$ and thus $f(0)'<0$, so for some (small) $t_1>0$ we get $f(t_1)<f(0)$. If we assume we get a sequence $(t_n)_{n\in\mathbb{N}}$ for which we always have $0<f(t_n)$ we get a decreasing sequence and our solution won't blow up in finite time. While this argument is intuitive it isn't rigorous at all, but it shows that the local Lipschitz constant in a uniqueness proof wouldn't increase but even decrease.
Could someone develop a rigorous argument, show how the solutions can be glued together and why the solution intervals don't decrease, or point at a different approach to the global existence.
| Consider $V(y)=y^2$, then
$$
\frac{d}{dt}V(f(t))=2f(t)f'(t)=-2f(t)^4<0.
$$
Thus this simple Lyapunov function is decreasing, which implies that it, and with it $f$, is bounded.
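The decreasing Lyapunov function can also be observed numerically. A crude explicit-Euler sketch (step size, horizon and initial value are arbitrary choices for illustration):

```python
a, dt, steps = 5.0, 1e-3, 20_000       # integrate f' = -f**3 up to T = 20
f = a
V_prev = f * f
monotone = bounded = True
for _ in range(steps):
    f += dt * (-f ** 3)                # explicit Euler step
    V = f * f                          # Lyapunov function V(f) = f**2
    monotone &= V <= V_prev + 1e-12    # V never increases ...
    bounded &= abs(f) <= abs(a)        # ... hence |f| never exceeds |a|
    V_prev = V
print(monotone, bounded, f)            # f ends near the exact 1/sqrt(1/a**2 + 2*20)
```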
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4014956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that $f(x,y) = \frac{x^2y^2}{x^2+y^2}$ is (totally) differentiable I want to prove that $f(x,y) = \frac{x^2y^2}{x^2+y^2} , (f(0,0) = 0)$ is (totally) differentiable at $ \begin{pmatrix} 0 \\ 0 \end{pmatrix}$. I want to use the criterion that a function which is continuously partially differentiable is differentiable. So I compute $\frac{\partial f}{\partial x}=\frac{2xy^4}{(x^2+y^2)^2}$ (The partial derivative $\frac{\partial f}{\partial y}$ works analogously). Going back to the definition of partial deriviative with respect to the $x$ direction, we see that $\frac{\partial f}{\partial x}(0,0)=0$.
So it remains to show that $\lim_{(x,y) \to (0,0)} \frac{\partial f}{\partial x}(x,y) = 0.$ I use the $\epsilon-\delta$ criterion: Let $\epsilon >0 $. Let $\delta := \frac{1}{2} \epsilon$. So let $d((x,y),(0,0)) = \sqrt{x^2+y^2}< \delta$, say $\sqrt{x^2+y^2} = \delta'<\delta.$ This implies $|x|\leq\delta'<\delta, |y|\leq\delta'<\delta$. Therefore: $|\frac{\partial f}{\partial x}(x,y)| \leq \frac{2(\delta')^5}{(\delta')^4} = 2\delta'<2\delta=\epsilon$, as desired.
Is this analysis correct?
| Your idea is fine but $0<\sqrt{x^2+y^2}<\delta$ doesn't imply
$$\frac{1}{(x^2+y^2)^2}<\frac{1}{\delta^4} $$
The correct inequality is actually in the opposite direction. Furthermore, you have to show
$$\left|\frac{2xy^4}{(x^2+y^2)^2}\right|<\varepsilon $$
(The absolute value is important here.) I would instead use $|xy|\leq x^2+y^2$ and $y^2\leq x^2+y^2$ so
$$\left|\frac{2xy^4}{(x^2+y^2)^2}\right|\leq \frac{2|y|^3}{x^2+y^2}\leq 2|y| $$
Then, $|y|\leq \sqrt{x^2+y^2}<\varepsilon/2$, and you can finish from here. So, your choice of $\delta$ works here.
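Not a proof, but the final chain of bounds can be spot-checked on random points:

```python
import random

random.seed(1)
ok = True
for _ in range(10_000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x == 0 and y == 0:
        continue
    lhs = abs(2 * x * y ** 4) / (x ** 2 + y ** 2) ** 2
    ok &= lhs <= 2 * abs(y) + 1e-12
print(ok)   # True: |df/dx| stays below 2|y|, which -> 0 at the origin
```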
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4015084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Got $e^{-x}=\cos(ix)$ from Euler's formula. Where is my mistake? When I was messing around with Euler's formula, I came across this: $$e^{ix}=\cos(x)+\sin(x)i$$ Then, let $x$ be an imaginary value, $ix$, so then: $$e^{i(ix)}=\cos(ix)+\sin(ix)i$$ which we can simplify to $$e^{-x}=\cos(ix)+\sin(ix)i,$$ but since $e^{-x}$ is a real value for all real inputs, then $\sin(ix)i$ must be equal to $0$. so that means that $$e^{-x}=\cos(ix)$$ This doesn't seem right, so could someone please point out where I made a mistake?
| If $a+bi=\lambda\in\Bbb R$, with $a,b\in\Bbb R$ then, indeed, you must have $a=\lambda$ and $b=0$. But, in the equality$$e^{-x}=\cos(ix)+\sin(ix)i,$$you have no reason to assume that $\cos(ix),\sin(ix)\in\Bbb R$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4015266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 3
} |
Need help on understanding a proof of the formula for calculating the $\zeta(2n)$ In this thread, Jack D'Aurizio provided a succinct proof for the formula of calculating the values of the Zeta function $\zeta(2n)$
$$ \coth z-\frac{1}{z} = \sum_{n\geq 1}\frac{4^n\,B_{2n}}{(2n)!}z^{2n-1}.\tag{2}$$$$\begin{eqnarray*} \coth z -\frac{1}{z} &=& \color{red}{\sum_{n\geq 1}}\frac{d}{dz}\log\left(1+\frac{z^2}{n^2 \pi^2}\right)\\&=&\sum_{n\geq 1}\frac{2z}{\pi^2 n^2+z^2}\\&=&\color{red}{\sum_{n\geq 1}\sum_{m\geq 1}}\frac{2\color{red}{(-1)^{m-1}z^{2m-1}}}{\pi^{2m}\color{red}{n^{2m}}} \\&=&\sum_{m\geq 1}\frac{2\,\zeta(2m)}{\pi^{2m}}(-1)^{m-1}z^{2m-1}\tag{3}\end{eqnarray*}$$
and we have the claim by comparing the coefficients in the RHSs of (2) and (3).
I just have a couple of questions regarding this derivation.
*
*First, the sigma notation. I thought that the infinite product for $\sinh(x)$ is $\displaystyle\prod_{n=1}^{+\infty}\color{red}{z}\left(1+\dfrac{z^2}{n^2\pi^2}\right)$. So why there is a sum here, shouldn't it be a product? Also where does the $\color{red}z$ go?
*How to prove, or how do we know that $\coth(z)-\dfrac{1}{z}=\displaystyle\sum_{n=1}^{+\infty}\dfrac{d}{dz}\log\left(1+\dfrac{z^2}{n^2\pi^2}\right)$. This is my guess. The derivative of $\log(\sinh(z))$ is $\coth(z)$. So that is why it is, am I correct?
*I don't understand anything at all about the double sigma notation. I am always in fear of double sigma notation because I haven't learnt how to manipulate them well. So what was Jack doing here? Suddenly there are $(-1)^{2m-1}$, $z^{2m-1}$, $\pi^{2m}$
*I don't understand anything when he says "comparing the coefficients in the RHSs of (2) and (3)." What should I suppose to do to understand this?
| $$\sinh z=z\prod_{n=1}^\infty\left(1+\frac{z^2}{n^2\pi^2}\right) \implies \log(\sinh{z}) = \log(z)+ \log\prod_{n=1}^\infty\left(1+\frac{z^2}{n^2\pi^2}\right) \\ \implies
\log(\sinh{z})-\log(z) = \log\prod_{n=1}^\infty\left(1+\frac{z^2}{n^2\pi^2}\right) = \sum_{n=1}^\infty\log\left(1+\frac{z^2}{n^2\pi^2}\right)$$
Differentiating both sides we get $$\coth z -\frac{1}{z} = {\sum_{n\geq 1}}\frac{d}{dz}\log\left(1+\frac{z^2}{n^2 \pi^2}\right) = \sum_{n\geq 1}\frac{2z}{\pi^2 n^2+z^2}$$
Next, we write the summand as an infinite series. Since $\displaystyle \frac{1}{1+t} = \sum_{m=1}^{\infty}(-1)^{m-1}t^{m-1}$ we have $$\frac{2z}{\pi^2 n^2+z^2} = \frac{2z}{\pi^2 n^2} \cdot \frac{1}{1+\left(\frac{z}{\pi n}\right)^2} = \frac{2z}{\pi^2 n^2} \cdot\sum_{m=1}^{\infty}(-1)^{m-1} \left(\frac{z}{\pi n}\right)^{2m-2}$$
Which is equal to $\displaystyle \frac{2z}{\pi^2 n^2} \cdot\sum_{m=1}^{\infty}(-1)^{m-1} \frac{z^{2m-2}}{\pi^{2m-2}n^{2m-2}} = \sum_{m=1}^{\infty}(-1)^{m-1} \frac{z^{2m-1}}{\pi^{2m}n^{2m}}.$
So we have $$\coth z -\frac{1}{z} ={\sum_{n\geq 1}\sum_{m\geq 1}}\frac{2{(-1)^{m-1}z^{2m-1}}}{\pi^{2m}{n^{2m}}} $$
And summing over $n$ first we have
$${\sum_{n\geq 1}\sum_{m\geq 1}}\frac{2{(-1)^{m-1}z^{2m-1}}}{\pi^{2m}{n^{2m}}} = {\sum_{m\geq 1}\sum_{n\geq 1}}\frac{2{(-1)^{m-1}z^{2m-1}}}{\pi^{2m}{n^{2m}}} ={\sum_{m\geq 1}}\frac{2{(-1)^{m-1}z^{2m-1}}}{\pi^{2m}} \bigg(\sum_{n \ge 1} \frac{1}{n^{2m}}\bigg)$$
Since $$\displaystyle \sum_{n \ge 1} \frac{1}{n^{2m}}= \zeta(2m)$$
We have
$$\displaystyle\coth z -\frac{1}{z} = \sum_{m\geq 1}\frac{2\,\zeta(2m)}{\pi^{2m}}(-1)^{m-1}z^{2m-1}.$$
So, on the one hand
$$\displaystyle \coth z-\frac{1}{z} = \sum_{n\geq 1}\frac{4^n\,B_{2n}}{(2n)!}z^{2n-1}$$
but on the other $$\displaystyle \coth z-\frac{1}{z} = \displaystyle \sum_{m\geq 1}\frac{2\,\zeta(2m)}{\pi^{2m}}(-1)^{m-1}z^{2m-1}$$
So $$\sum_{n\geq 1}\frac{4^n\,B_{2n}}{(2n)!}z^{2n-1} = \displaystyle \sum_{m\geq 1}\frac{2\,\zeta(2m)}{\pi^{2m}}(-1)^{m-1}z^{2m-1}$$
Reindexing the RHS with $n$
$$\sum_{n\geq 1}\frac{4^n\,B_{2n}}{(2n)!}z^{2n-1} = \displaystyle \sum_{n\geq 1}\frac{2\,\zeta(2n)}{\pi^{2n}}(-1)^{n-1}z^{2n-1}$$
Then comparing the coefficient of $z^{2n-1}$ on both sides
$$\frac{4^n\,B_{2n}}{(2n)!} =\frac{2\,\zeta(2n)}{\pi^{2n}}(-1)^{n-1} $$
And finally solving for $\zeta(2n)$ gives:
$$\zeta(2n) = \frac{4^n\,B_{2n} (-1)^{n-1}\pi^{2n}}{2(2n)! } =\frac{\,B_{2n} (-1)^{n-1}(2\pi)^{2n}}{2(2n)! }$$
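The closed form can be sanity-checked numerically against partial sums of $\sum_k 1/k^{2n}$, using the standard values $B_2=\tfrac16$, $B_4=-\tfrac1{30}$, $B_6=\tfrac1{42}$:

```python
import math

def zeta_even(n, B2n):
    """zeta(2n) = (-1)**(n-1) * B_{2n} * (2*pi)**(2*n) / (2 * (2n)!)."""
    return (-1) ** (n - 1) * B2n * (2 * math.pi) ** (2 * n) / (2 * math.factorial(2 * n))

for n, B in [(1, 1 / 6), (2, -1 / 30), (3, 1 / 42)]:
    partial = sum(1 / k ** (2 * n) for k in range(1, 100_000))
    print(n, zeta_even(n, B), partial)   # pi**2/6, pi**4/90, pi**6/945
```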
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4015408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Given vectors v = [5, −1], b1 = [1, 1], and b2 = [1, −1] all written in the standard basis, what is v in the basis defined by b1 and b2? Given vectors v= [5, −1], b1 = [1, 1], and b2 = [1, −1] all written in the standard basis, what is v in the basis defined by b1 and b2?
The solution is [2, 3], but I need to know how.
v in the same basis as b1 and b2 is [5,-1]
but if v has to be defined in terms of b1 and b2, then
vb = 2b1, 3b2 = [2, 3] because this vector v is now being shown with b1
and b2 as new basis vectors.
I understand b1, but get confused how projection b2 gets calculated the way it does.
The solution provided completed projection b2 as follows:
Proj. b2 = $\frac{v.b2}{|b2|^2}$ = $\frac{(5x1)+(-1x-1)}{-1^2 + (-1^2)}$ = $\frac{6}{2}$ = 3
How does b2 [1, -1] become [-1, -1] in the calculation?
And even still, shouldn't that equal -2, not 2? Giving answer -3, not 3?
I seem to calculate 1^2, + (-1^2) = 0, and arrive at 6/0, which obviously isn't right. Where am I going wrong?
| That's much too "sophisticated" for me!
To write v = [5, −1] in terms of b1 = [1, 1] and b2 = [1, −1] means to write [5, -1]= a[1, 1]+ b[1, -1]. We can write this as
[5, -1]= [a, a]+ [b, -b]= [a+ b, a- b].
So we want a+ b= 5 and a- b= -1.
Adding the two equation eliminates b: 2a= 4 so a= 2.
Then 2+ b= 5 so b= 3.
[5, -1]= 2[1, 1]+ 3[1, -1].
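The same coefficients drop out of a linear solve; note the projection shortcut in the question only yields the coordinates because $b_1$ and $b_2$ happen to be orthogonal ($b_1\cdot b_2=1-1=0$). A quick NumPy check:

```python
import numpy as np

B = np.column_stack(([1, 1], [1, -1]))   # columns are b1 and b2
v = np.array([5, -1])
coords = np.linalg.solve(B, v)           # solve a*b1 + b*b2 = v
print(coords)                            # [2. 3.]
print(np.dot([1, 1], [1, -1]))           # 0: b1 and b2 are orthogonal
```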
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4015575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
A simple method to calculate minimal area enclosed between a tangent to $f(x)$ and coordinate axes Given a function $y=f(x)$, take a tangent to the function at some point. We are to find the minimum area between this line and the coordinate axes, i.e. the point of tangency for which the triangle enclosed with the coordinate axes has minimum area.
I faced two different algorithms to find the solution. The first one is straight-forward:
*
*Pick a point on the function: $(x_1, y_1) = (x_1, f(x_1))$.
*Find the derivative of the function at $x_1$ to calculate the slope of the line. ($m=f'(x_1)$)
*Derive the tangent-line formula. $y-y_1=m(x-x_1)$
*Find the formulations of intersections with coordinate axes $(0, y_0)$, $(x_0, 0)$.
*Calculate the formulation of the area of triangle as $A = x_0 y_0/2$.
*$A$ shall be a function of $x_1$, minimize that to calculate minimum area.
Second algorithm is very short compared to that:
*
*Take the function $g(x) = 2 x f(x)$.
*Minimize that function to calculate the result.
I cannot figure out how the second algorithm works, or when it works. I checked both against the following family of functions, and both algorithms give the same result:
*
*$f(x) = ax+b$
*$f(x) = k/x$
*$f(x) = 3b^2 - a^2 x^2$
*$f(x) = \frac{b}{a} \sqrt{a^2-x^2}$
Question is: Can we prove/disprove the second algorithm? If disproved, under what conditions does the second algorithm work?
| Alternative proof with Lagrange Multipliers.
If $y=a(x)$, then, equivalently, we have a constraint function in x and y, $f(x,y)=y-a(x)=0$. So if $y=4-x^2$, $f(x,y)=y-4+x^2=0$.
The slope of the tangent line to an arbitrary $f(x,y)$ is $m=\frac{-\delta f / \delta x}{\delta f / \delta y}$.
From above $\delta f/ \delta y =1$, and $\delta f / \delta x = -da/dx$.
Also: $\frac{\delta^2 f}{\delta x \delta y}=\frac{\delta ^2 f}{\delta y^2}=0. $
Area is:
$$A = \frac{(y+x\frac{\delta f / \delta x}{\delta f / \delta y})^2}{2}\frac{\delta f / \delta y}{\delta f / \delta x}$$
Cleaning up:
$$A = \frac{ (y\frac{\delta f}{\delta y}+x\frac{\delta f }{\delta x})^2 }{2\frac{\delta f}{\delta x}\frac{\delta f }{\delta y}}=\frac{(y+x \frac{\delta f}{\delta x})^2}{2 \frac{\delta f}{\delta x}}$$
$$A_x = \frac{2 \frac{\delta f}{\delta x}\cdot 2\cdot(y+x \frac{\delta f}{\delta x})(\frac{\delta f}{\delta x}+x\frac{\delta ^2f}{\delta x^2})-2\cdot \frac{\delta ^2 f}{\delta x^2}(y+x\frac{\delta f}{\delta x})^2 }{4(\frac{\delta f}{\delta x})^2}=y+x\frac{\delta f}{\delta x}-\frac{y^2 \frac{\delta ^2 f}{\delta x^2}}{2 (\frac{\delta f}{\delta x})^2}+\frac{x^2\frac{\delta ^2 f}{\delta x^2}}{2}$$
$$A_y =\frac{4\frac{\delta f}{\delta x}(y+x\frac{\delta f}{\delta x})(1+x\cdot \frac{\delta ^2 f}{\delta x \delta y})-2\frac{\delta ^2f}{\delta x \delta y}(y\frac{\delta f}{\delta y}+x\frac{\delta f}{\delta x})}{4(\frac{\delta f}{\delta x})^2} =\frac{y+x\frac{\delta f}{\delta x}}{ \frac{\delta f}{\delta x}}$$
By Lagrange Multipliers, $\lambda f_x=A_x$ and $\lambda f_y = A_y$
$f_y=1$, so $\lambda=\frac{y+x \frac{\delta f}{\delta x}}{\delta f/ \delta x} $, and $\lambda f_x=y+x\frac{\delta f}{\delta x}=y+x\frac{\delta f}{\delta x}-\frac{y^2 \frac{\delta ^2 f}{\delta x^2}}{2 (\frac{\delta f}{\delta x})^2}+\frac{x^2\frac{\delta ^2 f}{\delta x^2}}{2}$
So:
$$0=-\frac{y^2 \frac{\delta ^2 f}{\delta x^2}}{2 (\frac{\delta f}{\delta x})^2}+\frac{x^2\frac{\delta ^2 f}{\delta x^2}}{2}$$
And:
$$(x\frac{\delta f}{\delta x}-y)(x\frac{\delta f}{\delta x}+y)=0$$
If the second term is zero, we have zero area, so set the first term to zero.
But, $\frac{\delta f}{\delta x}=-\frac{dy}{dx}$, so $(x\frac{\delta f}{\delta x}-y)=0 \implies \frac{d(xy)}{dx}=0$
Since $y=x\frac{\delta f}{\delta x}$, $A=2x^2\frac{\delta f}{\delta x}=2xy$
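The agreement between the two algorithms can be checked numerically on the example $f(x)=4-x^2$ (a sketch; note that for this $f$ the stationary point of $g(x)=2xf(x)$ is a maximum, so "find the stationary point" is the safer reading of the second algorithm):

```python
def f(x):  return 4 - x * x
def fp(x): return -2 * x                   # f'(x)

def area(x):
    """Area of the triangle the tangent at (x, f(x)) cuts off the axes."""
    y0 = f(x) - fp(x) * x                  # y-intercept of the tangent line
    x0 = x - f(x) / fp(x)                  # x-intercept of the tangent line
    return x0 * y0 / 2

def g(x):  return 2 * x * f(x)

xs = [0.01 + 1.98 * k / 200_000 for k in range(200_001)]
x_area = min(xs, key=area)                 # algorithm 1: minimise the area
x_g = max(xs, key=g)                       # algorithm 2: stationary point of g
print(x_area, x_g)                         # both near 2/sqrt(3) ~ 1.1547
print(area(x_area), g(x_g))                # both near 32*sqrt(3)/9 ~ 6.1584
```

Both extremisers coincide, and the extremal value of $g$ equals the minimal area, matching the answer's conclusion $A=2xy$ at the optimum.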
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4015695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
A graph theory problem from mobile games Example
game interface
This is the question that comes to my mind when I play a game called QuickyRoute, which is essentially a Hamiltonian path problem: the game will randomly generate a number of points, and you need to draw the route through all of them with the smallest sum of distances.
As the difficulty of the game increases, there are more points and it becomes harder to find the correct answer. I want to solve these problems through programming. Obviously there are at most $n!/2$ possibilities.
But enumeration is not very convenient. Because this is a famous graph theory problem, I want to ask if there is a better algorithmic idea, or some excellent papers I can refer to for this game.
| The problem is actually equivalent to the traveling salesman problem, except we don't have to return to where we started. The general traveling salesman problem is an NP-hard problem in theoretical computer science. Furthermore, it is well-known that the traveling salesman problem cannot even be approximated within a constant factor.
However, it seems like the problem you have listed is actually a special case of the traveling salesman problem, known as the metric traveling salesman problem (i.e., an instance of the traveling salesman problem in which the triangle inequality holds). Unfortunately, the metric traveling salesman problem is still NP-hard, which means that you will probably not be able to find any efficient algorithms that compute an optimal solution when the number of vertices gets even moderately large. However, there are efficient polynomial-time approximation algorithms that can approximate solutions to the metric traveling salesman problem within a constant factor.
The best approximation heuristic currently known is Christofides algorithm, which was discovered in 1976. Christofides's algorithm is a $3/2$-approximation, which means that it guarantees that its solutions will be within a factor of $3/2$ of the optimal solution length.
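For the small instances a mobile game generates, exact brute force over all orderings may already be enough (a sketch with random points; `math.dist` requires Python 3.8+):

```python
import itertools, math, random

random.seed(42)
pts = [(random.random(), random.random()) for _ in range(7)]

def length(order):
    """Total length of the open path visiting pts in the given order."""
    return sum(math.dist(pts[a], pts[b]) for a, b in zip(order, order[1:]))

# n!/2 distinct open paths, so this is only viable for small n:
best = min(itertools.permutations(range(len(pts))), key=length)
print(best, length(best))
```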
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4015806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Using the ratio test on $\sum_{n=1}^{\infty} \left(1+\frac{1}{n}\right)^{n^2}$ I already know that I should do the root test on this series due to the $n$th power, but I want to see if I can establish the result using the ratio test first. (Or would I always be stuck with only one kind of test)
I get something like this: $$\lim_{n \to \infty} \left|\frac{\left(1+\frac{1}{n+1}\right)^{\left(n+1\right)^2}}{\left(1+\frac{1}{n}\right)^{n^2}}\right|$$
But then I'm not sure how (or even if I could) simplify this. I've tried on Wolfram Alpha to make it simplify but the best I can get is numerical approximations that seem to converge to $e$
| First, note that
$$
\frac{1+\frac{1}{n+1}}{1+\frac{1}{n}}
= 1-\frac{1}{(n+1)^2} \tag{1}
$$
We will use that to make the "right" exponent appear, since we want to use the known limit $\lim_{m\to\infty} (1+\frac{u}{m})^m = e^u$.
Now, since $n^2=(n+1)^2-(2n+1)$, we can rewrite
$$\begin{align*}
\frac{\left(1+\frac{1}{n+1}\right)^{\left(n+1\right)^2}}{\left(1+\frac{1}{n}\right)^{n^2}}
&= \left(1+\frac{1}{n}\right)^{2n+1}\cdot \frac{\left(1+\frac{1}{n+1}\right)^{(n+1)^2}}{\left(1+\frac{1}{n}\right)^{(n+1)^2}}
= \left(1+\frac{1}{n}\right)^{2n+1} \left(\frac{1+\frac{1}{n+1}}{1+\frac{1}{n}}\right)^{n^2} \\
&= \color{red}{\left(1+\frac{1}{n}\right)}\cdot \color{blue}{\left(1+\frac{2}{2n}\right)^{2n}}\cdot \color{green}{\left(1-\frac{1}{(n+1)^2}\right)^{(n+1)^2}} \tag{2}
\end{align*}$$
This is great! We know that
$$
\lim_{n\to\infty} \left(1+\frac{1}{n}\right) = \color{red}{1} \tag{3}
$$
and
$$
\lim_{n\to\infty} \left(1+\frac{2}{2n}\right)^{2n} = \color{blue}{e^{2}} \tag{4}
$$
and
$$
\lim_{n\to\infty} \left(1-\frac{1}{(n+1)^2}\right)^{(n+1)^2} = \color{green}{e^{-1}} \tag{5}
$$
so, combining (3), (4), and (5) into (2), we get
$$
\lim_{n\to\infty} \frac{\left(1+\frac{1}{n+1}\right)^{\left(n+1\right)^2}}{\left(1+\frac{1}{n}\right)^{n^2}}
= \color{red}{1} \cdot \color{blue}{e^{2}} \cdot \color{green}{e^{-1}} = \boxed{e}
$$
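A quick numerical check of this limit; the computation works in logarithms because $(1+1/n)^{n^2}$ itself overflows double precision long before $n=1000$:

```python
import math

def log_a(n):
    """log of a(n) = (1 + 1/n)**(n*n)."""
    return n * n * math.log1p(1 / n)

for n in [10, 100, 1000, 10_000]:
    ratio = math.exp(log_a(n + 1) - log_a(n))
    print(n, ratio)    # the ratios approach e = 2.71828...
```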
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4015915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
What's $x^n = n!$ approximately What is the approximate solution to:
$x ^{1000} = 1000 !$
How can you solve for x?
More generally, for some constant $k$, how can you solve:
$x^k = k!$
| Notice that by the Hierarchy of limits, $\frac{x^n}{n!}\rightarrow 0$ as $n\rightarrow\infty$, so $n$ would have to be 'quite small'. In response to your first problem, $1000\log{x}=\log{1000!}$ so $\log{x}=\frac{\log{1000!}}{1000}$. Can you see a way to complete the problem? If not, here is a hint:
What can you deduce about the function $f(x)=\frac{\log{x!}}{x}$? Can you sketch a graph of it? How does its derivative compare with $\log{x}$? You should be able to use these to find an approximate intersection point.
[To differentiate ${\log{x!}}$, consider the Gamma function definition of factorials, then apply the Fundamental Theorem of Calculus to differentiate.]
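For the concrete case $x^{1000}=1000!$, the hint's $\log x = \frac{\log 1000!}{1000}$ can be evaluated directly with `math.lgamma` (since $\operatorname{lgamma}(1001)=\log 1000!$); Stirling's $\log n! \approx n\log n - n$ explains why the answer lands near $1000/e$:

```python
import math

log_factorial = math.lgamma(1001)      # lgamma(n + 1) = log(n!)
x = math.exp(log_factorial / 1000)     # the solution of x**1000 = 1000!
print(x)                               # roughly 369.5
print(1000 / math.e)                   # Stirling's leading-order estimate ~ 367.9
```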
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4016186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Cardinality of set of all straight lines in $\mathbb R^2$, that pass through at least two different rational coordinates. Definition: A coordinate $(x,y)\in \mathbb R^2$ is rational if $x,y$ are both rational nos.
Let $S$ be the set of all straight lines in $\mathbb R^2$ that pass through at least two distinct rational points.
Let $C$ be the set of all rational points in $\mathbb R^2$. Since $\mathbb Q \times \mathbb Q$ is countable so are its subsets. Therefore, it is possible to write $C=\{c_1,c_2,...,c_k,c_{k+1},...\}$. Of course $C=\mathbb Q \times \mathbb Q$.
Let $S_i$ be the set of all straight lines that pass through $c_i\in C$. For every $s\in S_i$, $\exists$ a rational point $c_{k(s)}\ne c_i$ thus we can have a bijection. Therefore, $S_i$ is countable.
Hence, $S=\cup_{i=1}^{\infty} S_i$ is a countable union of countable sets and therefore countable.
Is my proof correct? Thanks.
| Yes, but not completely:
*
*The set $S_i$, as defined, is not countable. In order to be countable, the set $S_i$ should be defined as the set of lines in $S$ that pass through $c_i$.
*Also, with this definition of $S_i$, the described map is not a bijection. It is true that there is a map from $C$ to $S_i$ that assigns to $c_j$ the line passing through $c_i$ and $c_j$. But this map is not a bijection (it is not injective). Still, it is surjective, which implies that $S_i$ is countable.
Last step of the proof is now correct.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4016316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Another interesting integral $\int{\frac{x^m}{x^{2m}-1}dx}$ After An interesting integral $\int{\dfrac{x^m}{x^{2m}+1}dx}$, I want to generalize a similar integral below
$$\int{\frac{x^m}{x^{2m}-1}dx}$$
for all values of $m\in\mathbb{N}$. Below are my steps:
$$\int{\frac{x^m}{x^{2m}-1}dx}=\int{\frac{x^m}{(x^m-1)(x^m+1)}dx}=\frac{1}{2}\left(\int{\frac{1}{x^m-1}dx}+\int{\frac{1}{x^m+1}dx}\right)$$
I don't know what my next step should be.
How should I solve this integral?
| Decompose the integrand into partial fractions
$$\frac{x^n}{x^{2n}-1}=\sum_{k=1}^{2n}\frac{c_k}{x-x_k}$$
with $x_k= e^{i a_k},\>a_k=\frac{k\pi}{n}$ and apply L'Hôpital's rule to obtain the coefficients
$$c_k = \lim_{x\to x_k}\frac{x^n(x-x_k)}{x^{2n}-1}=\frac{(-1)^kx_k}{2n}$$
Then
\begin{align}
&\int \frac{x^n}{x^{2n}-1}dx =\frac1{2n}\sum_{k=1}^{2n}\int \frac{(-1)^kx_k}{x-x_k} dx
= \frac1{2n}\sum_{k=1}^{2n} (-1)^kx_k\ln(x-x_k)\\
= &\frac1{2n} \sum_{k=1}^{2n}(-1)^k (\cos a_k +i \sin a_k)
\ln(x-\cos a_k -i\sin a_k)\\
= &\frac1{2n} \sum_{k=1}^{2n} (-1)^k\left(\cos a_k \ln\sqrt{x^2-2x\cos a_k+1}+\sin a_k \tan^{-1}\frac{\sin a_k}{x-\cos a_k} \right)
\end{align}
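The residues $c_k=\frac{(-1)^k x_k}{2n}$ can be verified numerically at an arbitrary test point, here with $n=3$:

```python
import cmath

n = 3
xk = [cmath.exp(1j * k * cmath.pi / n) for k in range(1, 2 * n + 1)]  # poles
ck = [(-1) ** k * p / (2 * n) for k, p in enumerate(xk, start=1)]     # residues

z = 0.7 + 0.3j                   # arbitrary test point away from the poles
lhs = z ** n / (z ** (2 * n) - 1)
rhs = sum(c / (z - p) for c, p in zip(ck, xk))
print(abs(lhs - rhs))            # ~ 0: the decomposition matches the integrand
```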
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4016439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Drake, Seven Axioms of the Algebra of Events On page 3 of Drake Fundamentals of Applied Probability he lists The Seven Axioms of the Algebra of Events.
$1. A \cup B = B \cup A \\
2. A \cup (B \cup C) = (A \cup B) \cup C \\
3. A\cap( B \cup C) = A \cap B \cup A \cap C \\
4. (A')' = A \\
5. (A\cap B)' = A' \cup B'\\
6. A\cap A' = \phi \\
7.A \cap U = A $
I have two questions.
First, while Drake states that his choice is not unique, is there a reason that he selected these seven, and is seven the minimum possible number of axioms with which the Algebra of Events can be expressed?
Second he further states that any other relation in the algebra of events can be proved directly from these seven axioms with no additional information and in fact in the chapter problem section asks the student to do so. Does the author mean that all the other set relations in the Algebra of Events can be proved by direct manipulation of the seven axioms without resorting to the method of $x \in A$?
For example, proofs of $A \cup A' = U$ are given in this posting
Proof of union of a set and its complement is equivalent to a universe
but these proofs use the method of $x \in A$ and not direct manipulation of Axiomatic statements as required by Drake. How would I prove $A \cup A' = U$ or $A \cup A = A$ using only direct manipulation of Drake's Seven Axioms of the Algebra of Events?
| remark
From 1-7 we deduce that $U$ is unique. Suppose
$$A \cap U = A \text{ for all }A\tag7$$
$$A \cap V = A \text{ for all }A\tag{7'}$$
We claim $U = V$.
Proof. From $(7)$ we get $V \cap U = V$.
From $(7')$ we get $U \cap V = U$.
Then commutativity of intersection gives $V \cap U = U \cap V$; this follows from axioms 1, 4 and 5, since $V \cap U = ((V \cap U)')' = (V' \cup U')' = (U' \cup V')' = ((U \cap V)')' = U \cap V$. We conclude $U = V$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4016579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Divisibility by 7 Proof by Induction Prove by Induction that
$$7|4^{2^{n}}+2^{2^{n}}+1,\; \forall n\in \mathbb{N}$$
Base case:
$$
\begin{aligned}
7&|4^{2^{1}}+2^{2^{1}}+1,\\
7&|7\cdot 3
\end{aligned}$$ Which is true.
Now, having $n=k$, we assume that:
$$7|4^{2^{k}}+2^{2^{k}}+1,\;\; \forall k\in \mathbb{N}$$
We have to prove that for $n=k+1$ that,
$$7|4^{2^{k+1}}+2^{2^{k+1}}+1,\;\; \forall k\in \mathbb{N}$$
We know that if $a$ is divisible by 7 then $b$ is divisible by 7 iff $b-a$ is divisible by 7.
Then,
$$
\begin{aligned}
b-a &= 4^{2^{k+1}}+2^{2^{k+1}}+1 - (4^{2^{k}}+2^{2^{k}}+1)\\
&= 4^{2^{k+1}}+2^{2^{k+1}} - 4^{2^{k}}-2^{2^{k}}\\
&= 4^{2\cdot 2^{k}}+2^{2\cdot 2^{k}} - 4^{2^{k}}-2^{2^{k}}
\end{aligned}
$$
I get stuck here, please help me.
| Prove by induction that
$$7\mid 4^{2^n}+2^{2^n}+1,\quad \forall n\in\mathbb N.$$
Base case:
$$7\mid 4^{2^1}+2^{2^1}+1,\quad\text{i.e.}\quad 7\mid 7\cdot 3,$$
which is true.
Now, for $n=k$, we assume that
$$7\mid 4^{2^k}+2^{2^k}+1.$$
We have to prove for $n=k+1$ that
$$7\mid 4^{2^{k+1}}+2^{2^{k+1}}+1.$$
We have
$$4^{2^{k+1}}+2^{2^{k+1}}+1 = 4^{2\cdot 2^k}+2^{2\cdot 2^k}+1=\left(4^{2^k}\right)^2+\left(2^{2^k}\right)^2+1.$$
Put $a=4^{2^k}$ and $b=2^{2^k}$, so that
$$\left(4^{2^k}\right)^2+\left(2^{2^k}\right)^2+1=a^2+b^2+1.$$
Also,
$$(a+b+1)(a+b-1)=a^2+b^2+2ab-1.$$
Since $7\mid a+b+1$ by the induction hypothesis,
$$7\mid a^2+b^2+2ab-1.\tag{1}$$
Next, $ab-1=8^{2^k}-1$. For $n\in\mathbb N$,
$$8^n=(7+1)^n=(\text{multiple of }7)+1$$
by the binomial theorem, so $8^n-1=7p$ for some $p\in\mathbb N$. Therefore
$$7\mid ab-1.\tag{2}$$
Subtracting twice equation $(2)$ from equation $(1)$,
$$7\mid a^2+b^2+2ab-1-2(ab-1),\quad\text{i.e.}\quad 7\mid a^2+b^2+1,$$
that is,
$$7\mid 4^{2^{k+1}}+2^{2^{k+1}}+1$$
whenever
$$7\mid 4^{2^{k}}+2^{2^{k}}+1.$$
Hence, by the principle of mathematical induction, the statement is true for all $n\in\mathbb N$.
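As a quick numerical sanity check (not part of the proof), the divisibility can be verified for the first several $n$ in a few lines of Python, using modular exponentiation to keep the numbers small:

```python
# Check 7 | 4^(2^n) + 2^(2^n) + 1 for n = 1..9 without forming the huge powers:
# pow(b, e, 7) computes b^e mod 7 efficiently.
for n in range(1, 10):
    e = 2 ** n
    assert (pow(4, e, 7) + pow(2, e, 7) + 1) % 7 == 0
print("7 divides 4^(2^n) + 2^(2^n) + 1 for n = 1..9")
```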
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4016714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 7,
"answer_id": 5
} |
On the generalized Leibniz rule problem definition
I have to evaluate in $z=0$ the $n$-th derivative with respect $z$ of the product $f(z)\cdot z^k$, where $f(\cdot)$ is a generic smooth function and $k$ is a given integer. I will use the short notation $F^{(n)}(z)$ to denote the $n$-th derivative of the generic function $F(z)$, so what I want is to compute for any $n\in\mathbb{N}$
\begin{equation}
[(f(z)\cdot z^k)^{(n)}]_{z=0}\triangleq \frac{\text{d}^n\left[f(z)\cdot z^k\right]}{\text{d}z^n}\Bigg|_{z=0}
\tag{1}
\end{equation}
my attempt
For the generalized Leibniz rule, holds for any $n$
\begin{equation}(f(z)\cdot z^k)^{(n)}=\sum_{i=0}^n \binom{n}{i}\cdot f^{(n-i)}(z) \cdot \left(z^k\right)^{(i)}\end{equation}
so the problem consists in computing the $i$-th derivative of the power $z^k$. If I'm not wrong,
\begin{equation}
\left(z^k\right)^{(i)}=\begin{cases}
\frac{k!}{(k-i)!}z^{k-i} & \text{if } i\leq k\\
0 & \text{otherwise}
\end{cases}
\end{equation}
This formula says that the index $i$ of the previous summation cannot exceed the value $k$. Anyway, $k$ is an external parameter and can be greater than $n$: in this case the summation stops at the value $n$,
otherwise the summation stops at value $k$.
So I would write
\begin{equation}\begin{aligned}(f(z)\cdot z^k)^{(n)}&=\sum_{i=0}^{\text{min}(n,k)} \binom{n}{i}\cdot f^{(n-i)}(z) \cdot \frac{k!}{(k-i)!} z^{k-i}\\
&=k!\cdot\sum_{i=0}^{\text{min}(n,k)} \binom{n}{i}\cdot \frac{1}{(k-i)!}\cdot f^{(n-i)}(z) \cdot z^{k-i}\\
\end{aligned}\end{equation}
Now comes the problem. Setting $z=0$, it turns out that
\begin{equation}\begin{aligned}
%
[(f(z)\cdot z^k)^{(n)}]_{z=0}
&=k!\cdot\sum_{i=0}^{\text{min}(n,k)} \binom{n}{i}\cdot \frac{1}{(k-i)!}\cdot f^{(n-i)}(0) \cdot 0^{k-i}\\
\end{aligned}\end{equation}
from my perspective this expression is quite tricky because of the term $0^{k-i}$. I'm tempted to write
\begin{equation}
0^{k-i}=\begin{cases}1 & \text{if } i=k\\
0 & \text{otherwise}
\end{cases}
\end{equation}
and consequently simplify the last summation as
\begin{equation}\begin{aligned}
%
[(f(z)\cdot z^k)^{(n)}]_{z=0}
&=k!\cdot\sum_{i=0}^{\text{min}(n,k)} \binom{n}{i}\cdot \frac{1}{(k-i)!}\cdot f^{(n-i)}(0) \cdot 0^{k-i}\\
&=\begin{cases}
k!\cdot\binom{n}{k}\cdot \frac{1}{(k-k)!}\cdot f^{(n-k)}(0) \cdot 0^{k-k} & \text{if } n\geq k \\
0 & \text{otherwise}
\end{cases}\\
&=\begin{cases}
\frac{n!}{(n-k)!}\cdot f^{(n-k)}(0) & \text{if } n\geq k \\
0 & \text{otherwise}
\end{cases}\\
\end{aligned}\end{equation}
question
I don't have a precise question about my problem. Essentially I'm doubtful about my derivation because of the undefined power $0^0$.
| We consider $n,k$ non-negative integers. It might be more convenient to consider a slightly simpler representation, namely
\begin{align*}
\color{blue}{\left.\left(f(z)\cdot z^k\right)^{(n)}\right|_{z=0}}
&=\left.\sum_{j=0}^n\binom{n}{j}f^{(n-j)}(z)(z^k)^{(j)}\right|_{z=0}\\
&=\left.\sum_{j=0}^n\binom{n}{j}f^{(n-j)}(z)k(k-1)\cdots(k-j+1)z^{k-j}\right|_{z=0}\tag{1}\\
&=
\begin{cases}
\binom{n}{k}f^{(n-k)}(0)k!&\ \qquad 0\leq k\leq n\\
0&\ \qquad k>n
\end{cases}\\
&\,\,\color{blue}{=
\begin{cases}
\frac{n!}{(n-k)!}f^{(n-k)}(0)&\qquad 0\leq k\leq n\\
0&\qquad k>n
\end{cases}
}
\end{align*}
In the representation (1) we see at a glance that evaluation at $z=0$ is equal to zero if $k\ne j$. This way there is no need to additionally define $0^0:=1$.
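A small spot check of the boxed result (my own addition, not a proof): take $f$ to be a polynomial with arbitrary coefficients, so that all derivatives at $0$ are known exactly, differentiate $f(z)z^k$ repeatedly via its coefficient list, and compare with the closed form for small $n,k$:

```python
from math import factorial

def differentiate(c):
    """c holds the coefficients of sum_i c[i] z^i; return its derivative's coefficients."""
    return [i * c[i] for i in range(1, len(c))]

a = [3, -1, 4, 1, -5, 9, 2, -6]   # arbitrary coefficients of a test polynomial f

for n in range(10):
    for k in range(6):
        poly = [0] * k + a              # coefficients of f(z) * z^k
        for _ in range(n):
            poly = differentiate(poly)
        lhs = poly[0] if poly else 0    # the n-th derivative evaluated at z = 0
        m = n - k
        # for a polynomial, f^{(m)}(0) = m! * a[m]; the closed form is n!/m! * f^{(m)}(0)
        fm0 = factorial(m) * a[m] if 0 <= m < len(a) else 0
        rhs = factorial(n) // factorial(m) * fm0 if m >= 0 else 0
        assert lhs == rhs
print("closed form matches repeated differentiation for n in 0..9, k in 0..5")
```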
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4016870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find the least square solution x
Find the least square solution $x$ for $Ax = b$ if
$$A =\left(\begin{array}{rrr} 2 & 0\\
1 & 3 \\
0 & 2\end{array}\right) \quad b =\left(\begin{array}{rrr} 7\\
0\\-1\end{array}\right) $$
My Solution:
I found $A^T A =\left(\begin{array}{rrr} 7 & -1\\
-1 & 8 \\
\end{array}\right) $
$$A^T b =\left(\begin{array}{rrr} 2\\
2\\
\end{array}\right) $$
$$\left(\begin{array}{rrr} 5 & -1\\
-1 & 5 \\
\end{array}\right) x = \left(\begin{array}{rrr} 2\\
2\\
\end{array}\right) $$
But I can't solve this equation to find $x$ as there is no solution.
My solution after finding $x$ was to then find this:
$$Ax = \left(\begin{array}{rrr} 2 & 0\\
1 & 1 \\
0 & 2\end{array}\right) \left(\begin{array}{rrr} x\\
x\\ \end{array}\right) = \left(\begin{array}{rrr} a\\
b\\c\end{array}\right) $$
and then use $b-A\hat{x} = \sqrt{1+0+1} - \sqrt{a^2 + b^2 +c^2} = \textrm{final answer}$
Where am I going wrong with my approach? I am stuck as I can't solve for $x$?
| Your first error is in computing $A^\top b$. We have
$$A^\top b = \begin{pmatrix} 2 & -1 & 0 \\ 0 & 1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ -1\end{pmatrix} = \begin{pmatrix} 2 \\ -2 \end{pmatrix}$$
So, we need to solve
$$\begin{pmatrix} 5 & -1\\
-1 & 5
\end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 2\\
-2
\end{pmatrix}.$$
I don't know where you went wrong specifically here, but there is a unique solution, as $A^\top A$ is invertible (compute its determinant to see this). We can use the old shortcut to compute the inverse of this matrix:
$$\begin{pmatrix} a & b \\ c & d\end{pmatrix}^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a\end{pmatrix}.$$
In particular,
$$\begin{pmatrix} 5 & -1 \\ -1 & 5\end{pmatrix}^{-1} = \frac{1}{24}\begin{pmatrix} 5 & 1 \\ 1 & 5\end{pmatrix},$$
so
$$\begin{pmatrix} x \\ y \end{pmatrix} = \frac{1}{24}\begin{pmatrix} 5 & 1\\1 & 5\end{pmatrix}\begin{pmatrix} 2\\-2\end{pmatrix} = \begin{pmatrix}\frac{1}{3} \\ -\frac{1}{3}\end{pmatrix}.$$
Therefore, the closest point to $(1, 0, -1)^\top$ in the columnspace of $A$ is
$$A\begin{pmatrix}\frac{1}{3} \\ -\frac{1}{3}\end{pmatrix} =\begin{pmatrix} 2 & 0\\-1 & 1 \\0 & 2\end{pmatrix}\begin{pmatrix}\frac{1}{3} \\ -\frac{1}{3}\end{pmatrix} = \begin{pmatrix} \frac{2}{3} \\ -\frac{2}{3} \\ -\frac{2}{3} \end{pmatrix}.$$
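The arithmetic above can be double-checked with exact rational arithmetic (this sketch uses the $A$ and $b$ as in this answer, solving the normal equations $A^\top A x = A^\top b$ with the same $2\times 2$ inverse shortcut):

```python
from fractions import Fraction as F

A = [[F(2), F(0)], [F(-1), F(1)], [F(0), F(2)]]
b = [F(1), F(0), F(-1)]

# Normal equations: (A^T A) x = A^T b
AtA = [[sum(A[i][r] * A[i][c] for i in range(3)) for c in range(2)] for r in range(2)]
Atb = [sum(A[i][r] * b[i] for i in range(3)) for r in range(2)]

# 2x2 inverse shortcut: [[a,b],[c,d]]^{-1} = (1/(ad-bc)) [[d,-b],[-c,a]]
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = [(AtA[1][1] * Atb[0] - AtA[0][1] * Atb[1]) / det,
     (AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0]) / det]

assert AtA == [[F(5), F(-1)], [F(-1), F(5)]] and Atb == [F(2), F(-2)]
proj = [sum(A[i][c] * x[c] for c in range(2)) for i in range(3)]
assert proj == [F(2, 3), F(-2, 3), F(-2, 3)]
print("x =", x, " Ax =", proj)
```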
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4017025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Expanding exponents as opposed to solving logarithmically provides different answers. $10^{2t-3} = 7$ The solution should be simplified from the above
$$10^{2t-3} = 7$$
by recognizing the equivalent logarithmic format
$$\log(7) = 2t-3$$
which solves for $t$ as
$$\frac{\log(7)+3}{2} = t$$
Where I'm having problems is I attempted to work the problem from the other direction (just what stood out to me at first) by expanding
$$10^{2t-3} = 10^2 \cdot 10^t \cdot 10^{-3}$$
I think that might be where my problem is, because my next steps solve as
$$100 \cdot 10^t \cdot \frac{1}{1000} = 7$$
$$\frac{100 \cdot 10^t}{1000} = 7$$
$$\frac{1 \cdot 10^t}{10} = 7$$
$$10^t=70$$
Then I apply the equivalent logarithmic format as
$$\log(70) = t$$
and after calculating, it is very clear to me that
$$\log(70) \not = t = \frac{\log(7)+3}{2}$$
Can anyone help me understand what I'm doing wrong here?
| $10^{2t}$ does not equal $10^{2} \cdot 10^{t}$; that would be $10^{t+2}$ instead, since exponents add when powers are multiplied. Rather, $10^{2t}=\left(10^t\right)^2$.
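A one-line numerical check makes the difference visible: the $t$ from the logarithmic form satisfies the original equation, while $t=\log(70)$ does not:

```python
from math import log10

t = (log10(7) + 3) / 2          # solution from the logarithmic form
assert abs(10 ** (2 * t - 3) - 7) < 1e-9

t_wrong = log10(70)             # result of the faulty expansion
assert abs(10 ** (2 * t_wrong - 3) - 7) > 1   # clearly does not satisfy the equation
print("t =", round(t, 5), "  10^(2t-3) =", round(10 ** (2 * t - 3), 5))
```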
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4017107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How is the inclusion map both an Immersion and Submersion between Manifolds Hi, I am trying to figure out why an inclusion map is both an immersion and a submersion. This is what I have tried so far. Let $S$ be an open subset of a manifold $M$. Now the inclusion mapping is $\iota:S\to M$. To prove immersion/submersion we have to show that each of its differentials $\iota_{*,p}$, for $p \in S$, is, respectively, injective/surjective. By definition of the differential of a smooth map $F$, we have $(F_{*,p}(X_p))f=X_p(f\circ F)$, where $F_{*,p}:T_pM\to T_{F(p)}N$ and $M$, $N$ are manifolds. Now, taking $F$ to be the inclusion map $\iota$, we get $(\iota_{*,p}(X_p))f=X_p(f\circ \iota)=X_p(f)$, where $\iota_{*,p}:T_pS\to T_{\iota(p)=p}M$.
My questions are as follows:
How do we know from this that each $\iota_{*,p}$ is injective and surjective so that $\iota$ is both an immersion and a submersion?
| Say that $M$ is an $n$-dimensional smooth manifold, that $S \subset M$ is an open subset, that $i: S \to M$ is the inclusion and choose $p \in S$. Since $S$ is an open subset of $M$, $S$ is also an $n$-dimensional smooth manifold (this is true because the topology on $M$ is induced from the coordinate charts on $M$, which have target $\mathbb{R}^n$). Choose a coordinate system $\phi: U \to \mathbb R^n$ for $M$, centered at $p$. By replacing $U$ with $U\cap S$ if necessary (and using the fact that $S$ is open) we may assume that $U \subset S$. We see that $\phi$ is therefore also a coordinate system for $S$ centered at $p$. Representing the inclusion map in these coordinates gives
$$\phi \circ i\circ\phi^{-1} = Id,$$
where $Id:\mathbb R^n \to \mathbb R^n$ is the identity map. The differential of $Id$ at $0$ ($0$ since $\phi$ is centred at $p$) is
$$Id_{*,0} :T_0\mathbb R^n \to T_0\mathbb R^n,$$
which is also the identity map.
In particular, this shows that each differential $i_{*,p}$ is both injective and surjective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4017227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Find a geometric solution for a equilateral tringle within 3 circles For the task of constructing an equilateral triangle with a vertex on each of three concentric circles (the subject of a previous question), I found a solution on: http://mathafou.free.fr/pbg_en/sol113.html, which is hard to understand.
Red triangle is the solution. Quoting from the reference:
With point P as center, draw circles (A), (B) and (C) with radii a,b,c. We can fix A anywhere on circle (A).
So in the graphic, $A$ is fixed at the bottom of the outer circle.
A circle is drawn with the radius AP and cuts at M.
The locus of C is then (C). B is deduced from C by rotating with center A and angle π/3.
There is no B in the graph. Just confusing.
The locus of B is then circle (C') image of (C) using this rotation. But it is also (B) from definition. Point B is then the intersection point of (B) and (C').
Why is the circle (C') with its center drawn here?
Point C is then deduced by inverse rotation.
Ok. We have B1 and now we rotate back 60 degrees to have C.
There are two intersection points B1 and B2 giving the two triangles obtained from analytic solution.
Only one of them contains point P.
Yes, B1, but why? "We must have angle ABP < π/3"
Please, I need some help.
| Comment: Experimental approach
$r_1=40, r_2=55, r_3=70$, in this case the difference between radii is equal.
The locus of center of triangle ABC , Q is a circle center on O and radius OQ and we have:
$OQ\approx \frac{\sqrt{40+55+70}+\sqrt{40}+\sqrt{55}+\sqrt{70}}2\approx 17.48$
This value is close to $17.88$ shown in figure.
In this figure the difference of radii is not equal. $r_1=47, r_2=60, r_3=80$ and we have:
$OQ\approx\frac{\sqrt{47+60+80}+\sqrt{47}+\sqrt{60}+\sqrt{80}}2\approx 18.6$
This value is close to $20.8$ shown in figure. So for a rough drawing:
1- calculate OQ and draw a circle $r=OQ$ center on O.
2- Take an arbitrary point $Q$ on this circle; this is the center of the triangle. In this particular case one of the altitudes of the triangle passes through the center $O$, so draw the line connecting $O$ and $Q$; this line is the perpendicular bisector of $BC$.
3- The angle that each of $QB$ and $QC$ makes with $OQ$ is $60^\circ$. By drawing the rays of these angles you get vertices $B$ and $C$ on the circles with radii $r_2$ and $r_1$ respectively. Vertex $A$ is the intersection of $OQ$ with the third circle, of radius $r_3$.
Maybe by analytic geometry we can find a more accurate formula for $OQ$ in terms of $r_1, r_2$ and $r_3$.
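The construction from the linked reference (rotate circle $(C)$ about $A$ by $\pi/3$ to obtain $(C')$, intersect with $(B)$ to find $B$, rotate back for $C$) is easy to test numerically. The sketch below (my own, using the radii $40, 55, 70$ from above) checks the construction itself, not the heuristic $OQ$ formula:

```python
import math

def rot(p, c, ang):
    """Rotate point p about center c by ang radians."""
    s, co = math.sin(ang), math.cos(ang)
    dx, dy = p[0] - c[0], p[1] - c[1]
    return (c[0] + co * dx - s * dy, c[1] + s * dx + co * dy)

def circle_intersections(c0, rad0, c1, rad1):
    """Intersection points of circle (c0, rad0) with circle (c1, rad1)."""
    d = math.dist(c0, c1)
    a = (rad0 ** 2 - rad1 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(rad0 ** 2 - a ** 2)
    mx = (c0[0] + a * (c1[0] - c0[0]) / d, c0[1] + a * (c1[1] - c0[1]) / d)
    ox, oy = -(c1[1] - c0[1]) * h / d, (c1[0] - c0[0]) * h / d
    return [(mx[0] + ox, mx[1] + oy), (mx[0] - ox, mx[1] - oy)]

r1, r2, r3 = 40.0, 55.0, 70.0
O = (0.0, 0.0)
A = (0.0, -r3)                        # fix A at the bottom of the outer circle
Cp = rot(O, A, math.pi / 3)           # center of (C'), the rotated copy of circle (C)

triangles = []
for B in circle_intersections(O, r2, Cp, r1):   # B lies on both (B) and (C')
    C = rot(B, A, -math.pi / 3)                 # inverse rotation puts C back on (C)
    Q = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
    triangles.append((B, C, Q))
    print("sides:", round(math.dist(A, B), 3), round(math.dist(B, C), 3),
          round(math.dist(C, A), 3), " OQ =", round(math.dist(O, Q), 3))
```

The two intersection points give the two triangles mentioned in the question's reference.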
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4017367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Maximum value of $(x−1)^2+ (y−1)^2+ (z−1)^2$ with constraint $x^2+y^2+z^2 ≤2 , z≤1$ So the problem is that I have $D(f)=\{(x,y,z), x^2+y^2+z^2 ≤2 , z≤1\}$ and I have to determine the maximum value for the function $(x−1)^2+ (y−1)^2+ (z−1)^2$ in $D$.
I'm just confused as I don't actually know if $z\le1$ counts as a constraint as well, or is it just for me to sketch the area, which is actually a part of the question.
Furthermore, I know that I have to use Lagrange multiplier method, but I honestly don't know how because $\le$ is making the question hard for me. Do I just calculate as usual and count $\le$ the same as $=$?
appreciate all the feedback
Edit: I have calculated $\nabla f = (2(x-1), 2(y-1), 2(z-1)) = 0$, which gives $x=y=z=1$ and $f(1,1,1)=0$ (I don't know what to do with this, though, since $(1,1,1)$ is not in $D$). Then I calculated $L(x, y, z, \lambda) = (x-1)^2+(y-1)^2+(z-1)^2 +\lambda(x^2+y^2+z^2-2)$, then the four cases where I got the same value, which is $-2\lambda= 2(x-1)/x = 2(y-1)/y = 2(z-1)/z$. This means that $x=y=z$; putting it in the constraint $x^2+ x^2+ x^2=2$ I ended up with $x=y=z= \pm\sqrt{2}/\sqrt{3}$. I took the minus sign for the maximum distance from $(1,1,1)$, which means that the answer is $x=y=z= -\sqrt{2}/\sqrt{3}$. Is it correct?
| Using Lagrange multipliers method.
Calling $p = (x,y,z),\ p_0=(1,1,1),\ p_1 = (0,0,1), f = \|p-p_0\|^2$ and using $e_1, e_2$ as slack variables to avoid the inequalities (note the signs: $\|p\|^2-2+e_1^2=0$ encodes $\|p\|^2\le 2$, and $p\cdot p_1-1+e_2^2=0$ encodes $z\le 1$) we have
$$
L = f + \lambda(\|p\|^2-2+e_1^2)+\mu(p\cdot p_1-1+e_2^2)
$$
The stationary points are the solutions for
$$
\nabla L = \cases{2(1+\lambda)p-2p_0-\mu p_1=0\\
\|p\|^2-2+e_1^2=0\\
p\cdot p_1-1+e_2^2=0\\
\lambda e_1=0\\
\mu e_2 = 0}
$$
(The unconstrained critical point $p=p_0=(1,1,1)$ is infeasible, since $\|p_0\|^2=3>2$.) Solving the remaining branches gives
$$
\left[
\begin{array}{cccccc}
f & x & y & z & e_1^2 & e_2^2\\
3-2 \sqrt{2} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 1 & 0 & 0 \\
3+2 \sqrt{2} & -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 1 & 0 & 0 \\
5-2 \sqrt{6} & \sqrt{\frac{2}{3}} & \sqrt{\frac{2}{3}} & \sqrt{\frac{2}{3}} & 0 & 1-\sqrt{\frac{2}{3}} \\
5+2 \sqrt{6} & -\sqrt{\frac{2}{3}} & -\sqrt{\frac{2}{3}} & -\sqrt{\frac{2}{3}} & 0 & 1+\sqrt{\frac{2}{3}} \\
\end{array}
\right]
$$
so the maximum value of $f$ on $D$ is $5+2\sqrt{6}$, attained at $x=y=z=-\sqrt{2/3}$, where the restriction $z\le 1$ is inactive; this agrees with the point found in the question's edit.
NOTE
Null $e_k$'s represent active restrictions.
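As an independent numerical check (my own addition): since $f$ is convex, its maximum over $D$ is attained at an extreme point of $D$, all of which lie on the sphere $\|p\|^2=2$ with $z\le 1$; scanning a fine angular grid of that feasible part of the sphere locates the maximum:

```python
import math

r = math.sqrt(2)
best, argbest = -1.0, None
N = 400
for i in range(N + 1):
    th = math.pi * i / N                 # polar angle
    for j in range(2 * N):
        ph = math.pi * j / N             # azimuthal angle over [0, 2*pi)
        x = r * math.sin(th) * math.cos(ph)
        y = r * math.sin(th) * math.sin(ph)
        z = r * math.cos(th)
        if z <= 1:                       # feasible part of the sphere
            f = (x - 1) ** 2 + (y - 1) ** 2 + (z - 1) ** 2
            if f > best:
                best, argbest = f, (x, y, z)

print(round(best, 4), "vs 5 + 2*sqrt(6) =", round(5 + 2 * math.sqrt(6), 4))
print("attained near", tuple(round(c, 3) for c in argbest),
      " vs -sqrt(2/3) =", round(-math.sqrt(2 / 3), 3))
```

The grid maximum agrees with the point $x=y=z=-\sqrt{2/3}$ proposed in the question's edit.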
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4017526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
A detail on the proof of Theorem 6.9 of Steel's note on iterated ultrapowers I am reading Steel's note on iterated ultrapowers, and I am filling the gaps in the proof of Theorem 6.9. However, I am stuck on how to proceed with it.
I am checking the last part of the proof (page 31 of the linked note), which claims $P_{n+1}\in V^Q_{\operatorname{lh}E}=V^{P_n}_{\operatorname{lh}E}$. However, I do not see why the last equality holds. I tried to prove it from the equality $V^{N_n}_{\operatorname{lh}F_n}=V^{N_{n+1}}_{\operatorname{lh}F_n}=V^{\operatorname{Ult}(N_k,F_n)}_{\operatorname{lh}F_n}$. While $\pi_n(V^{N_n}_{\operatorname{lh}F_n})=V_{\operatorname{lh}E}^{P_n}$ seems obviously true, I have no idea why $\pi_n(V^{\operatorname{Ult}(N_k,F_n)}_{\operatorname{lh}F_n})=V^Q_{\operatorname{lh}E}$ holds.
I do not see how iterations over a tree work, so I must be missing something on very simple facts. I would appreciate any help.
| I see now how to proceed. Note that $k$ is the least number such that $\operatorname{crit} F_n<\operatorname{lh} F_k$. Moreover, if we take $\tau_k=\sup \sigma''\operatorname{lh} F_k$, then $P_k\underset{\tau_k}{\backsim} P_n$ by the inductive assumption.
Fix $\xi$ such that $\operatorname{crit} F_n<\xi<\operatorname{lh} F_k$. Then $$\operatorname{crit}E=\pi_n(\operatorname{crit}F_n)<\pi_n(\xi)=\sigma(\xi)<\tau_k,$$ where the equality follows from Exercise 34. By Lemma 6.2, we have $\operatorname{Ult}(P_k,E)\backsim_{i_E(\operatorname{crit}E)+1} \operatorname{Ult}(P_n,E)$.
Since $\operatorname{lh}E\le i_E(\operatorname{crit}(E))$ and $P_n\models \text{$E$ is nice}$, we have $V^Q_{\operatorname{lh}E}=V^{\operatorname{Ult}(P_k,E)}_{\operatorname{lh}E}=V^{\operatorname{Ult}(P_n,E)}_{\operatorname{lh}E}=V^{P_n}_{\operatorname{lh}E}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4017724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
To prove or disprove that for each $y\in N_\epsilon(A)\setminus A$, $B(y,\epsilon)\cap [X\setminus N_\epsilon(A)]\neq \emptyset$. Let $A$ be a subset in a normed space $(X,\|\cdot\|)$, and $N_\epsilon(A)=\bigcup\limits_{a\in A}B(a,\epsilon)$, where $B(a,\epsilon)$ is open ball centered at $a$ with radius $\epsilon>0$. To prove or disprove that for each $y\in N_\epsilon(A)\setminus A$, $B(y,\epsilon)\cap [X\setminus N_\epsilon(A)]\neq \emptyset$, if $X\setminus N_\epsilon(A)\neq\emptyset$.
What I tried is:
I wanted to prove that if $B(y,\epsilon)\cap [X\setminus N_\epsilon(A)]= \emptyset$, then $y$ has to be in $A$, but could not succeed. We have, for two distinct points $u,v\in X$, there is an isometry $$I:[0,\|u-v\|]\to \{\alpha u+(1-\alpha)v:0\leq \alpha \leq 1\}$$ such that $I(0)=u, I(\|u-v\|)=v, I([0,\|u-v\|])=\{\alpha u+(1-\alpha)v:0\leq \alpha \leq 1\}$. I took a $z\in X\setminus N_\epsilon(A)$, so $[0,\|y-z\|]$ is isometric to $\{\alpha y+(1-\alpha)z:0\leq \alpha \leq 1\} $. Then, if $\epsilon\geq\|y-z\|$, we are done; but if $0<\epsilon<\|y-z\|$, there is $w\in \{\alpha y+(1-\alpha)z:0\leq \alpha \leq 1\}$ such that $I(\epsilon)=w$, and then how does one prove $w\in X\setminus N_\epsilon(A)$?
| This is not true: let $\varepsilon=1$ and in the complex plane take $A$ to be an annulus (a disk minus a smaller closed disk), $A=\{z\in\mathbb{C}: 0.9<|z|<3\}$. Then $0\not\in A$, but $0\in N_\varepsilon(A)$. On the other hand $N_\varepsilon(A)=\{z\in\mathbb{C}:|z|<4\} \neq\mathbb{C}$, so obviously $\mathbb{C}\setminus N_\varepsilon(A)\neq\emptyset$, and $D(0,\varepsilon)\cap (\mathbb{C}\setminus N_\varepsilon(A))=\emptyset$.
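A small numerical illustration of this counterexample (my own sketch): for the annulus $A=\{0.9<|z|<3\}$ with $\varepsilon=1$, the distance from any point of the unit disk to $A$ is less than $1$, so $B(0,1)\subset N_1(A)$ and the intersection with the complement is empty, even though $0\notin A$:

```python
import random

def dist_to_A(z):
    """Distance from complex (or real) z to the annulus A = {0.9 < |w| < 3}."""
    r = abs(z)
    if 0.9 < r < 3:
        return 0.0
    # outside the annulus: distance to the nearer bounding circle
    return 0.9 - r if r <= 0.9 else r - 3

assert dist_to_A(0) == 0.9          # so 0 is in N_1(A) \ A
random.seed(0)
for _ in range(10000):              # sample points of the ball B(0, 1)
    z = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    if abs(z) < 1:
        assert dist_to_A(z) < 1     # every such point lies inside N_1(A)
print("B(0,1) is contained in N_1(A), as claimed")
```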
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4017970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Any good upper bounds for $\sum_{i = 0}^{m} 2^i {k+i \choose i}$? I am trying to solve a recursion (I faced it when I wanted to analyze the running time of an algorithm precisely). I reduced it to compute a "good" upper bound for $\sum_{i = 0}^{m}2^i {k + i \choose i}$. We have these assumptions that $1 < m \leq Ck$, for some $C\in \mathbb{N}$.
I have tried many approximations but the bounds are not what I want. They are very loose. On the other hand, it doesn't look like a very sophisticated summation but I have so much difficulty solving it!
Does anyone have any ideas?
| I'm pretty sure there is no closed form for this sum and it is far from simple. This sum looks like one involved in finding the expected value of the length of the longest run of successes of a Bernoulli random variable. See for instance this question; unfortunately it's pretty opaque, and it's been a while since I was trying to do that sum.
Here's a way of converting the problem to a derivative problem, although I don't know how useful it will be. Recall $\binom{k+i}{i}=\binom{k+i}{k}=[x^k y^i](x+y)^{k+i}$, where $[m](p)$ denotes the coefficient on the term $m$ in the polynomial/series $p$. So now we want to find
$$\sum_{i=0}^m 2^i[x^k y^i](x+y)^{k+i}.$$
Notice if we take the $k$th derivative of $(x+y)^{k+i}$ with respect to $x$ at $x=0$ and $y=1$, we get the coefficient we want times $k!$. Thus we can write the sum as
$$\sum_{i=0}^m\frac{2^i}{k!}\frac{d^k}{dx^k}(x+1)^{k+i}|_{x=0}$$
Now a lot of this stuff can be pulled out of the sum, giving us
$$\frac{1}{k!}\frac{d^k}{dx^k}\left((x+1)^k\sum_{i=0}^m 2^i(x+1)^i\right)|_{x=0}$$
The inside is now a geometric series so we can find its sum
$$\frac{1}{k!}\frac{d^k}{dx^k}\left((x+1)^k\frac{(2(x+1))^{m+1}-1}{2(x+1)-1}\right)|_{x=0}$$
$$\frac{1}{k!}\frac{d^k}{dx^k}\left(\frac{2^{m+1}(x+1)^{m+k+1}-(x+1)^k}{2x+1}\right)|_{x=0}$$
It may be possible to find an explicit form for this derivative or at least a recurrence relation, although I couldn't.
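The derivative representation can be confirmed with exact integer arithmetic (my own check): $\frac1{k!}\frac{d^k}{dx^k}g(x)\big|_{x=0}$ is the $x^k$ Taylor coefficient of $g$, and since the geometric-sum denominator is $2(x+1)-1=2x+1$, with $\frac1{2x+1}=\sum_{n\ge0}(-2x)^n$, that coefficient can be read off from the numerator polynomial:

```python
from math import comb

def direct_sum(m, k):
    return sum(2 ** i * comb(k + i, i) for i in range(m + 1))

def taylor_coefficient(m, k):
    # numerator N(x) = 2^(m+1) (x+1)^(m+k+1) - (x+1)^k as a coefficient list
    d = m + k + 1
    num = [2 ** (m + 1) * comb(d, j) - comb(k, j) for j in range(d + 1)]
    # [x^k] of N(x)/(2x+1), using 1/(2x+1) = sum_n (-2x)^n
    return sum(num[j] * (-2) ** (k - j) for j in range(k + 1))

assert all(direct_sum(m, k) == taylor_coefficient(m, k)
           for m in range(1, 9) for k in range(1, 9))
print("derivative representation agrees with the direct sum for m, k in 1..8")
```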
However, if $k$ is fixed you can find a closed form: the binomial coefficient $\binom{k+i}{k}$ is a degree $k$ polynomial, and $\sum_n b^n P(n)$ can be computed for any base $b$ and polynomial $P$ using a trick. Take $\frac{d^k}{db^k}\sum_n b^n$ for $k$ ranging from 1 to the degree of $P$. The derivatives of the summands form a basis for polynomials of degree $k$, and the derivatives of the corresponding sums let us calculate the sum for $b^n$ times any polynomial.
As far as an upper bound, a VERY rough one can be gotten from the approximation $\binom{mn}{n}\approx\frac{m^{m(n-1)+1}}{(m-1)^{(m-1)(n-1)}\sqrt{n}}$. Because $i\le m\le Ck$, we can roughly bound the sum by $$\sum_{i=0}^m\frac{2^i(C+1)^{Ck-C+k}}{C^{Ck-C}\sqrt{k}}=\frac{(2^{m+1}-1)(C+1)^{Ck-C+k}}{C^{Ck-C}\sqrt{k}}.$$
Notice this is equivalent to using the upper bound from the other answer and then applying the approximation I mentioned, so this will also be over by a factor of about 2. Depending on what you need (ie if you just want big O), a factor of 2 may not matter to you.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4018040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 3
} |
Prove $\{z^n|n \in \mathbb{Z}\}$ is an orthonormal basis of $L^2(S^1)$ with Haar measure Prove $\{z^n|n \in \mathbb{Z}\}$ is an orthonormal basis.
*
*We show that this is an orthonormal set.
All we need to show is that $\int_{S^1} z^n \,d\mu=0$ for $n\neq 0$. Since $\mu$ is a Haar measure, $\int_{S^1} z^n \,d\mu=\int_{S^1} (az)^n \,d\mu=a^n\int_{S^1} z^n \,d\mu$ for every $a\in S^1$; choosing $a$ with $a^n\neq 1$ (for instance $a$ not a root of unity) forces the integral to be $0$. Thus the set is orthonormal.
*We show that the span is dense in $C(S^1)$, which in turn is dense in $L^2(S^1)$; this proves it is an orthonormal basis.
The span is clearly an algebra, so by Stone–Weierstrass all we have to show is that it separates points. But that is fairly obvious: if $x\neq y$, the identity map $z\mapsto z$ separates them.
Is this correct?
| You missed another condition. You need the fact that the span contains the complex conjugate of each of its elements. For this note that the conjugate of $z^{n}$ is $\frac 1 {z^{n}}=z^{-n}$ when $|z|=1$.
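As a quick numerical illustration of the orthonormality (my own sketch): writing $z=e^{i\theta}$ with the normalized measure $d\theta/2\pi$, a uniform Riemann sum is essentially exact here, since $\sum_{k<N} e^{2\pi i q k/N}=0$ whenever $q\not\equiv 0 \pmod N$:

```python
import cmath
import math

def inner(n, m, N=1000):
    """Riemann sum for <z^n, z^m> = (1/2pi) * integral of e^{i n t} conj(e^{i m t}) dt."""
    return sum(cmath.exp(2j * math.pi * (n - m) * k / N) for k in range(N)) / N

assert abs(inner(3, 3) - 1) < 1e-9      # unit norm
assert abs(inner(2, 5)) < 1e-9          # orthogonality for n != m
assert abs(inner(0, 4)) < 1e-9
print("{z^n} behaves as an orthonormal family in L^2(S^1) with normalized Haar measure")
```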
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4018229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to evaluate $\int _0^{\frac{1}{2}}\frac{\ln \left(x\right)\ln \left(1-x\right)\ln \left(1+x\right)}{1+x}\:dx$ I want to evaluate: $$\int _0^{\frac{1}{2}}\frac{\ln \left(x\right)\ln \left(1-x\right)\ln \left(1+x\right)}{1+x}\:dx,$$ but I don't see how can I achieve so.
My attempts so far have been rewriting the integral using algebraic identities such as: $$ab=\frac{1}{2}a^2+\frac{1}{2}b^2-\frac{1}{2}\left(a-b\right)^2=\frac{1}{4}\left(a+b\right)^2-\frac{1}{4}\left(a-b\right)^2,$$ that yield other integrals like:
$$\int _0^{\frac{1}{2}}\frac{\ln \left(x\right)\ln ^2\left(1-x\right)}{1+x}\:dx,\:\int _0^{\frac{1}{2}}\frac{\ln \left(x\right)\ln ^2\left(\frac{1-x}{1+x}\right)}{1+x}\:dx.$$ Normally the beta function and series expansion are used to deal with these kinds of integrals, yet neither can be used here because of the upper bound.
What else can be done in order to compute the main integral? Thanks.
| An idea by Cornel Ioan Valean
Exploit the result
$$f(a)=\int_0^{1/2} \frac{\log (x) \log (1-x)}{1-a x} \textrm{d}x$$
$$\small =\frac{1}{2 a}\log ^3(2)-\frac{3 }{4 a}\zeta (3)-\frac{\log ^2(2) }{2 a}\log \left(1-\frac{a}{2}\right)-\frac{\log ^2(2) }{2 a}\log \left(\frac{a-2}{a-1}\right)+\frac{\log(2)}{a}\text{Li}_2\left(\frac{a}{2}\right)$$
$$\small+\frac{\log (2) }{a}\text{Li}_2\left(\frac{a}{2 (a-1)}\right)+\frac{1}{a}\text{Li}_3\left(\frac{a}{2}\right)+\frac{1}{a}\text{Li}_3\left(\frac{a}{2 (a-1)}\right)-\frac{1}{a}\text{Li}_3\left(\frac{a}{a-1}\right)-\frac{\text{Li}_3(a-1)}{a}.$$
This is a modified form of Lemma 2 in the paper A special way of extracting the real part of the Trilogarithm, $\displaystyle \operatorname{Li}_3\left(\frac{1\pm i}{2}\right)$ easily obtained by the means presented in the paper https://www.researchgate.net/publication/337868999_A_special_way_of_extracting_the_real_part_of_the_Trilogarithm_Li_31i2
A nice fact: maybe good to mention also this integral variant
$$\int_0^{1/2}\frac{\log(1-x)\log(x)\log(1+x)}{x}\textrm{d}x=\int_{-1}^0 f(a) da.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4018407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Conditional Probability vs Dependent Events I'm conceptually/intuitively confused about when we would use the joint probability (multiplication rule) versus a conditional probability, specifically when the dependent event has no chance of occurring without the other event.
Here's an example that illustrates my confusion:
Suppose we have a weird city, where there's a 20 percent chance of rain (so a 80% chance of any other weather). Now suppose that it can only rain frogs when it rains, and there's a 10% chance of frogs.
Why can we not use Bayes' rule to model:
$$Pr(frogs | rain) = \frac{Pr(rain | frogs)\cdot Pr(frogs)}{Pr(rain)} = \frac{1 \cdot 0.1}{0.2}$$
I know the correct way of doing it would be the multiplication rule, but why can we not use a conditional probability? We know that the probability of raining frogs is 0.1 because it cannot occur in any other scenario other than when it rains, and if there are raining frogs, we must have rain so $Pr(rain | frogs)=1$. And we also have $Pr(rain)=0.2$, so it seems like we have everything we need for conditional probability.
The correct answer would of course be $0.2\cdot0.1=0.02$.
What am I missing aside from the nonsensical results?
Edited for sensible numbers, but assume that the probability of raining frogs is just the proportion of days on which it rains.
| You have your definitions all mixed up. You have already stated that $P(\text{frogs}|\text{rain})=0.1$, so you don't need Bayes Theorem to prove it.
What you need is the law of total probability, which states that:
$$P(\text{frogs})=P(\text{frogs}|\text{rain})P(\text{rain}) + P(\text{frogs}|\text{no rain})P(\text{no rain})$$
which gives
$$P(\text{frogs})=(0.1\times0.2) + (0.0\times0.8)=0.02$$
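A short simulation (using the hypothetical numbers from the question) recovers both quantities: the unconditional $P(\text{frogs})=0.02$ from the law of total probability, and the given conditional $P(\text{frogs}\mid\text{rain})=0.1$:

```python
import random

random.seed(1)
N = 200_000
rain_days = frog_days = 0
for _ in range(N):
    rain = random.random() < 0.2                  # 20% chance of rain
    frogs = rain and random.random() < 0.1        # frogs are only possible when it rains
    rain_days += rain
    frog_days += frogs

print("P(frogs)      ~", round(frog_days / N, 3))          # close to 0.02
print("P(frogs|rain) ~", round(frog_days / rain_days, 3))  # close to 0.1
```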
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4018537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Pushouts in category of topological spaces modulo homotopy Let $HTOP$ denote the category whose objects are topological spaces and whose morphisms are equivalence classes of continuous maps modulo homotopy. I'm wondering what the properties of this category are. Does it have products, coproducts, pushouts? I tried searching Google but it seems I don't know the right keywords.
| This category is often called simply “the homotopy category.” It has effectively no limits and colimits, other than products and coproducts which are constructed as in TOP. It was the first natural example of a non-concrete category, as proven by Freyd in “homotopy is not concrete.”
More commonly studied is its full subcategory spanned by the CW complexes, which is equivalent to the category of all spaces localized at the weak homotopy equivalences. This continues to lack any good categorical properties. It doesn’t even have a generating set, though the category of pointed connected CW complexes modulo homotopy famously does—namely, the spheres, by Whitehead’s theorem.
These various homotopy categories are the ur-examples of homotopy categories of model categories, which were invented in large part to get around the non-existence of categorical constructions in homotopy categories by allowing the construction of homotopically meaningful limits and colimits. For instance, a homotopy pushout of $A\leftarrow C\to B$ represents maps out of $A$ and $B$: not maps that agree on $C$ (as for a pushout in TOP), nor maps that agree on $C$ up to homotopy (as a non-existent pushout in the homotopy category would), but maps from $A$ and $B$ together with a chosen homotopy between their restrictions to $C$. This is the double mapping cylinder. The more recent theory of $\infty$-categories is also designed in large part to make homotopy limits and colimits behave more like ordinary limits and colimits.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4018667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
For different integers there exists an integer so that exactly one of the sums is prime. Let $i,j$ be distinct natural numbers. In a computer science class the lecturer said
Since the prime numbers are not regularly distributed, there is some natural number $k$ such that $i+k$ is prime and $j+k$ is not.
I found this reasoning a bit unsatisfying, so I tried to show the claim, however it turned out to be not that trivial. If lets say $i$ has a prime factor $p$ that doesn't divide $j$ then by Dirichlet there is a prime $q\equiv j\pmod p$, $q>j$. Then choose $k=q-j$, so $j+k$ is prime but $i+k$ has the prime factor $p$, hence is not prime. But this reasoning doesn't work if $i,j$ have the same prime factors. I also feel like this can be proven more elementary but I don't see how.
Edit: Thanks everyone for the help, I learned something from every answer. Unfortunately I can only accept one of them, so I hope I didn't offend anyone for not accepting their answer.
| An elementary proof
Suppose $i<j$ and let $p$ be any prime greater than $i$. Let $a=j-i$.
Consider the sequence $p,p+a,p+2a,\dots,p+pa$. The initial number $p$ is prime and the final number $p+pa=p(1+a)$ is composite, so somewhere in the sequence a prime term is immediately followed by a composite term: taking $u$ to be the term just before the first composite term, we get a successive pair $u<v=u+a$ with $u$ prime and $v$ composite.
Then $k=u-i$ solves the problem: $i+k=u$ is prime and $j+k=u+a=v$ is not.
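Empirically the claim is easy to check for small $i,j$ (in both orders, not only $i<j$) with a small brute-force search for the least such $k$:

```python
def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def find_k(i, j):
    """Least k >= 1 with i + k prime and j + k composite."""
    k = 1
    while not (is_prime(i + k) and not is_prime(j + k)):
        k += 1
    return k

for i in range(1, 15):
    for j in range(1, 15):
        if i != j:
            k = find_k(i, j)
            assert is_prime(i + k) and not is_prime(j + k)
print("a suitable k exists for every pair of distinct i, j below 15")
```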
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4018731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
How to take the directional derivative of an integral over $u$ from $(x,y)$ to $(1,1)$ Please consider the following function
$$
v(x,y) = \int_{(x,y)}^{(1, 1)}u\,dS
$$
$v(x,y)$ is the integral along the straight line from $(x,y)$ to $(1, 1)$. I'm wondering how I can take the directional derivative in the direction of the vector from $(x,y)$ to $(1, 1)$. Let's say that the vector going from the origin through $(x,y)$ and through $(1, 1)$ is called $\beta$. How do I take the directional derivative in the direction of $\beta$ at the point $(x,y)$? Intuitively I would say that
$$
\beta\cdot\nabla v(x,y) = - u(x,y)
$$
but I'm not sure how I can show this rigorously.
| Write $X$, $X_0$, and $\underline 1$ for $(x,y)$, $(x_0,y_0)$ and $(1,1)$. Unpacking the definition of the line integral $\int_{[X,\underline 1]}u ds$ of a scalar function (I assume $u$ is scalar but a similar calculation works for other cases) you want to differentiate $v(X)=|X-\underline 1|\int_0^{1} u(t\underline 1 + (1-t)X) dt $ in the direction $X_0=\underline1-X$. Note that the factor $|X-\underline 1|$ comes from the speed of the parameterisation I chose. With no extra effort we can compute the derivative in all directions $X_0$. Setting $g(s)=|X +sX_0-\underline 1|$ and $f(s)=u(t\underline 1 + (1-t)(X + sX_0))$, you want the derivative of $g(s)\int^{1}_0 f(s,t)dt$ at $s=0$ which is $g'(0)\int f(0,t)dt +g(0)\int_0^1 \partial_s f(0,t)dt$.
$$g’(s)=\frac{X_0\cdot(X+sX_0-\underline1)}{|X+sX_0-\underline1|}$$ $$ f’(s)=X_0\cdot \nabla u(t\underline 1 + (1-t)(X + sX_0))$$
Put them together and you have the directional derivative at $X$ in direction $X_0$:
\begin{align}
D_{X_0} v(X) &= \frac{X_0\cdot(X-\underline1)}{|X-\underline1|}\int_0^1 u(t\underline 1 +(1-t)X)dt \\&\quad + {|X-\underline1|}\int_0^{1}X_0\cdot\nabla u(t\underline 1 + (1-t)X ) dt
\\&= \frac{X_0\cdot(X-\underline1)}{|X-\underline1|^2}\int_{[X,\underline 1]} u ds + \int_{[X,\underline 1]} X_0\cdot \nabla u ds
\end{align}
Specialising now to $X_0=\underline 1-X$ one gets
$$ D_{\underline 1-X}v(X) = - \int_{[X,\underline1]} u \,\text ds + \int_{[X,\underline1]} D_{\underline 1-X}u \,\text ds. $$
Since the answer is different from your guess, let me verify the answer for a special case. If $u\equiv 1$ then $v(X)$ is just the unsigned distance of $X$ to $\underline 1$, $v(X) = |X-\underline 1|$. Its gradient at $X$ is $\frac{X-\underline 1}{|X-\underline 1|}$. Thus the directional derivative in the direction $\underline 1-X$ is $(\underline 1-X)\cdot \nabla v(X) = -|X-\underline 1|$ which is not $-u(X)=-1$.
However if you define the directional derivative in direction $X_0$ as $\frac{X_0}{|X_0|}\cdot\nabla $ then in this case one gets $-u(X)=-1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4018936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Population Growth Question I am not sure how I am supposed to do this, but here is the problem and my attempt.
A farmer wants to produce chickens for the community. Let $P(t)$ represent the population of chickens at time $t$ in months. Assume the population of chickens reproduce at a rate proportional to its size. The farmer cultivates a small number of chickens for a trial run. In three months, the population of chickens increased from 75 to 300. The farmer wants to begin with an initial population of $P(0)$ chickens, and after a year, have $8000 + P(0)$ chickens, so that 8000 can be harvested, and $P(0)$ saved to start breeding again for the following year. What initial population $P(0)$ of chickens does the farmer need to start with?
Attempt:
First, since $P(t)$ represents the population at time $t$ in months, then the population growth equation can be written as: $$P(t) = P_0e^{kt}$$
Then I tried to find the relative growth rate. Since in three months, the population went from 75 to 300, then that means $$300 = 75e^{3k}$$
Or in other words, $k = \frac{\ln{4}}{3}$.
I am not exactly sure how to proceed from here, or if I am on the right track. I would like some assistance, or advice.
| Excellent so far!
You have $e^{3k}=4$ and you want $$Pe^{12k}=P+8000.$$
Note that $e^{12k}=4^4$.
Therefore $256P=P+8000$, i.e. $255P=8000$, so $P\approx 31.4$ and an initial population of $P(0)=32$ chickens is required.
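A quick numeric check of this conclusion (a sketch; the variable names are mine):

```python
import math

k = math.log(4) / 3              # monthly growth rate, from 300 = 75 * e^(3k)
growth_year = math.exp(12 * k)   # yearly growth factor = 4^4 = 256

assert abs(growth_year - 256) < 1e-9

# 255 * P = 8000 gives P ≈ 31.37, so 32 chickens suffice while 31 do not.
assert 32 * growth_year >= 32 + 8000
assert 31 * growth_year < 31 + 8000
```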
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4019130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Euclidean GCD and why does it work? By the dupe, the table implies $\,(14441,3563) = (3563,189) = \ldots = (28,21) = (21,7) = (7,0) = 7\ \ $
I understand that Euclid's algorithm on GCD is based on doing division via subtraction $x = qy + r$. I also understand that the process is keep expressing the quotient in terms of the remainder.
Example to find GCD of $14441$, $3563$:
Each row records $x = q\cdot y + r$:

| x | q | y | r |
|---|---|---|---|
| 14441 | 4 | 3563 | 189 |
| 3563 | 18 | 189 | 161 |
| 189 | 1 | 161 | 28 |
| 161 | 5 | 28 | 21 |
| 28 | 1 | 21 | 7 |
| 21 | 3 | 7 | 0 |
So the GCD is $7$.
So basically we try to divide the $2$ original numbers and then try to see if the remainder can express evenly $y$ and keep doing that recursively i.e. try to find the smallest number that divides the remainder.
But I am not sure I understand the intuition behind the idea. Why would that process lead to the smallest number that divides the original $x$?
I also read that a core idea is that $gcd(x,y) = gcd(y,r)$ but I didn't really get that part too.
Could someone please help?
| You might find it easier to consider just a single step:
For any two numbers $x$ and $y$ with $y<x$, consider $x$ and $x-y$.
Any divisor of $x$ and $y$ is also a divisor of $x-y$ and $y$. Similarly, any divisor of $x-y$ and $y$ is also a divisor of $x$ and $y$. Thus gcd$(x,y)=$ gcd$(x-y,y)$.
We have thus reduced the pair of numbers and can repeat this over and over again.
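The single step above translates directly into code. A minimal sketch of the subtraction form (taking the modulus instead would batch many subtractions into one step):

```python
import math

def gcd_sub(x, y):
    """gcd via repeated subtraction: gcd(x, y) = gcd(x - y, y)."""
    while y:
        if x < y:
            x, y = y, x   # keep x >= y
        x = x - y         # one subtraction step; x % y would batch these
    return x

# The example from the question: gcd(14441, 3563) = 7.
assert gcd_sub(14441, 3563) == 7
assert gcd_sub(14441, 3563) == math.gcd(14441, 3563)
```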
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4019255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is it true that the edge weight of a bridge equals to its least-squares potential difference within a graph? Consider a (not necessarily connected) directed multigraph $G=(V,E,t,h,w,u)$, where V is the vertex set, E is the edge set,
$t: E \rightarrow V$ is the function of edge tails, $h: E \rightarrow V$ is the function of edge heads, while $w: E \rightarrow \mathbb{R}^+$ and $u: E \rightarrow \mathbb{R}$ are two (fixed) weight functions.
Suppose that $\pi: V \rightarrow \mathbb{R}$ is a potential function minimizing the following (least-squares) objective function:
$\mathit{\Omega}( \mathit{\Pi})=\sum\limits_{e \in E}{w_e{[u_e-\mathit{\Pi}(h_e)+\mathit{\Pi}(t_e)]}^2}$.
Is it true that if $e^* \in E$ is a bridge (i.e., an edge whose deletion increases the number of components within G - multiple edges are not bridges here), then $\pi(h_{e^*})-\pi(t_{e^*})=u_{e^*}$? If so, how to prove this conjecture?
| Suppose that $C$ is the connected component containing $h_{e^*}$ in $G - e^*$. Then adding the same constant to $\pi(v)$ for all vertices $v$ in $C$ changes $w_{e^*}[u_{e^*} - \pi(h_{e^*}) + \pi(t_{e^*})]^2$ and no other term of $\Omega(\pi)$.
In particular, by adding the right constant, we can make $u_{e^*} - \pi(h_{e^*}) + \pi(t_{e^*}) = 0$ and set this term to $0$, and since $\pi$ is supposed to be a minimizer, it must be the case that $u_{e^*} - \pi(h_{e^*}) + \pi(t_{e^*})$ is already $0$.
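This can be checked numerically on a small example of my own: two 2-cycles joined by a bridge edge with $u_{e^*}=5$ and unit weights. The sketch below minimizes $\Omega$ by plain gradient descent, which suffices for this convex quadratic, and confirms that the potential difference across the bridge equals $u_{e^*}$ exactly, while the conflicting $u$'s inside each cycle are only split in the least-squares sense.

```python
# Edges as (tail, head, u); all weights w_e = 1. Edge (1, 2, 5.0) is a bridge.
edges = [(0, 1, 1.0), (1, 0, 0.0), (1, 2, 5.0), (2, 3, 1.0), (3, 2, 0.0)]
pi = [0.0] * 4

for _ in range(20000):          # gradient descent on the quadratic objective
    grad = [0.0] * 4
    for t, h, u in edges:
        r = u - pi[h] + pi[t]   # signed residual on this edge
        grad[h] += -2 * r
        grad[t] += 2 * r
    pi = [p - 0.05 * g for p, g in zip(pi, grad)]

# Residual on the bridge vanishes: pi(h) - pi(t) = u exactly.
assert abs((pi[2] - pi[1]) - 5.0) < 1e-8
# Inside a 2-cycle the conflicting u's (1 and 0) are split least-squares style.
assert abs((pi[1] - pi[0]) - 0.5) < 1e-8
```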
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4019548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Probability that a character kills the boss in a 3 versus 1 fight In a RPG game, your three characters A, B and C fight a boss.
The boss has $1000$ hp.
The attacks are sequential: A attacks then B attacks then C attacks then A attacks again, etc.
Character A can do any integral amount of damage between $25$ and $50$ with equal probability.
Character B can do any integral amount of damage between $30$ and $70$ with equal probability.
Character C can do any integral amount of damage between $10$ and $80$ with equal probability.
Assuming that the boss is not strong enough to kill any of the characters before it dies, what is the probability that player A will be the one to deliver the final blow and kill the boss?
Same question for players B and C.
Unfortunately I don't even know how to get started on this problem, any hint would be helpful.
I think this problem can only be solved either approximately (asymptotically exact) or numerically. Karl's approximation is the most natural answer, and I'd go for it. If you instead want an exact result, here's an efficient numerical procedure (computer assisted, though).
Letting $g_A(n)$ be the probability that player $A$ wins given that the boss has power $n$, we have the recursion
$$ g_A(n) = \sum_{j=n}^\infty p_A(j) + \sum_{j=1}^{n-1} p_{ABC}(j)g_A(n-j)$$
where $ p_A(j)$ is the probability mass function of the damage inflicted by player $A$ and $p_{ABC}(j)$ is the same for the three players summed.
Solved numerically in Octave (or Matlab) this gives
$$g_A(1000) = 0.28347$$
pretty near Karl's approximation ( $0.28302$)
The graph shows that that approximation is quite good after $N\approx 900$
Code:
pa = [1:100];
pa = pa >= 25 & pa <= 50;
pa = pa/sum(pa);
pb = [1:100];
pb = pb >= 30 & pb <= 70;
pb = pb/sum(pb);
pc = [1:100];
pc = pc >= 10 & pc <= 80;
pc = pc/sum(pc);
pab = [0, conv(pa,pb)];
pac = [0, conv(pa,pc)];
pbc = [0, conv(pb,pc)];
pabc = [0, conv(pab,pc)];
N=1000;
pabc(N)=0;
ga = zeros(1,N);
ga(1)=1;
for n=2:N
ga(n) = sum(pa(n:100)) + flip(ga(1:n-1)) * pabc(1:n-1)';
endfor
ga(N)
For completeness, here are the other probabilities
gb = zeros(1,N); % prob of B winning, if B has next turn
gb(1)=1;
for n=2:N
gb(n) = sum(pb(n:100)) + flip(gb(1:n-1)) * pabc(1:n-1)';
endfor
gbb=zeros(1,N); % prob of B winning, if A has next turn
pa(N)=0; % extend pa size
for n=2:N
gbb(n)= flip(gb(1:n-1)) * pa(1:n-1)';
endfor
gcc = 1 - (ga+gbb);
Notice that the period of the quasi-oscillations is around the mean value of the whole turn: $(25+50+30+70+10+80)/2 = 132.50$
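A quick Monte-Carlo cross-check of these numbers in Python (a sketch; the trial count and seed are arbitrary):

```python
import random

random.seed(0)

def winner(hp=1000):
    """Simulate one fight; returns 0, 1 or 2 for A, B or C landing the kill."""
    ranges = [(25, 50), (30, 70), (10, 80)]
    while True:
        for who, (lo, hi) in enumerate(ranges):
            hp -= random.randint(lo, hi)
            if hp <= 0:
                return who

trials = 100_000
wins = [0, 0, 0]
for _ in range(trials):
    wins[winner()] += 1

p_a = wins[0] / trials
assert 0.27 < p_a < 0.30   # consistent with the numerical value 0.28347
```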
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4019759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 3,
"answer_id": 0
} |
Question about definition of inverse function So, I've been reading Precalculus books and I found a question in the definition of inverse function,
some of my books say:
Let $f\colon X\to Y$ be a real function, where $X,Y\subseteq\mathbb{R}$. A function $g\colon Y\to X$ is the inverse function of $f$ if $$g(f(x))=x,\ \forall x\in X\quad\text{and}\quad f(g(y))=y,\ \forall y\in Y.$$
And others of my books say:
Let $f\colon X\to Y$ be a real function. A function $g\colon Y\to X$ is the inverse function of $f$ if it always holds that
$$y=f(x)\quad\text{if and only if}\quad x=g(y),\qquad \forall x\in X\text{ and }\forall y\in Y.$$
So, my question is:
Which of these is the true definition of inverse function? And
is it possible to prove the other definition from the real definition of inverse function?
Further, what is the proof of the link between a bijective function and an inverse function?
| Claim. Let $X,Y$ be sets and $f\colon X\to Y$, $g\colon Y\to X$ be functions. Then the following are equivalent
$$\tag1\forall x\in X\colon g(f(x))=x\;\land\;\forall y\in Y\colon f(g(y))=y$$
$$\tag2 \forall x\in X,\forall y\in Y\colon y=f(x)\leftrightarrow x=g(y)$$
Proof.
Assume $(1)$. To show $(2)$, let $x\in X, y\in Y$ be given. We want to show $y=f(x)\iff x=g(y)$. So first assume $y=f(x)$. Then by the first part of $(1)$, $g(y)=g(f(x))=x$. In summary, $y=f(x)\to x=g(y)$. Conversely assume $x=g(y)$. Then by the second part of $(1)$, $f(x)=f(g(y))=y$. In summary, $x=g(y)\to y=f(x)$. In overall summary, $y=f(x)\leftrightarrow x=g(y)$. As $x, y$ were arbitrary elements of $X$ and $Y$ respectively, we have shown $(2)$ from $(1)$.
Next, Assume $(2)$. To show the first part of $(1)$, let $x\in X$ be given. As $f$ is a function $X\to Y$, we can let $y=f(x)\in Y$. Specialize $(2)$ to $x$ and $y$ to find $y=f(x)\leftrightarrow x=g(y)$. By choice of $y$, the left side holds, hence $x=g(y)=g(f(x))$, or: $g(f(x))=x$. As $x$ was an arbitrary element of $X$, this shows $\forall x\in X\colon g(f(x))=x$.
To show the second part of $(1)$, let $y\in Y$ be given. As $g$ is a function $Y\to X$, we can let $x=g(y)\in X$. Specialize $(2)$ to $x$ and $y$ to find $y=f(x)\leftrightarrow x=g(y)$. By choice of $x$, the right side holds, hence $y=f(x)=f(g(y))$, or: $f(g(y))=y$. As $y$ was an arbitrary element of $Y$, this shows $\forall y\in Y\colon f(g(y))=y$.
Now that we have shown both $\forall x\in X\colon g(f(x))=x$ and $\forall y\in Y\colon f(g(y))=y$, we have in fact shown $\forall x\in X\colon g(f(x))=x\;\land\;\forall y\in Y\colon f(g(y))=y$, i.e., we have shown $(1)$ from $(2)$.
In summary, $(1)$ and $(2)$ are equivalent. $\square$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4019936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Compact subset of a space Consider the topological space $X=\{0,1\}^{\mathbb{Z}}\times \mathbb{Z}$ and let $Y$ be a subset of $X$ given by $Y=\{0,1\}^{\mathbb{Z}}\times\{0,1,2,\cdots\}$. Is $Y$ compact subset of $X$?
Set $U_k=\{0,1\}^{\mathbb{Z}}\times\{k\}$. Then $\{U_k : k=0,1,2,\cdots\}$ is an open cover of $Y$ having no finite subcover. So $Y$ cannot be compact. Am I correct?
| Yes, you are correct. More generally, if $X$ is any nonempty topological space and $Y$ is infinite discrete, then $X\times Y$ is never compact. That's because $\big\{X\times\{y\}\big\}_{y\in Y}$ is an open cover without finite subcover.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4020076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What are the chances of rain on particular days I study maths as a hobby and am on to elementary theory of probability.
I have come across this problem:
Find the probability of events A, B and C given it rained on exactly 2 days last week. | *
*The key observation here, as you have mentioned, is the number of total outcomes, which is $\binom{7}{2}=21$. Equipped with this fact, you can proceed by finding out what is the number of outcomes that fulfills each part.
*Yes, the first part is just $\frac{1}{21}$, taking one of the 21 possible combinations.
*On the second part, notice that there are just 6 out of 21 possible outcomes that makes the raining days consecutive (day 1 and 2, 2 and 3, ..., 6 and 7). Therefore, the possibility is just $\frac{6}{21}=\frac{2}{7}$.
*As for the third part, you would want to see how many combinations in them are not on two specific days. That would mean those combinations are only chosen from the other five days. So that would be $\binom{5}{2}=10$. Therefore there are 10 out of 21 possibilities that fulfill the requirement. The probability is $\frac{10}{21}$.
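All three counts are easy to verify by enumeration (a sketch; I take the "two specific days" of the third part to be days 1 and 2):

```python
from itertools import combinations

pairs = list(combinations(range(1, 8), 2))   # all ways to pick 2 rainy days
assert len(pairs) == 21

consecutive = [(a, b) for a, b in pairs if b - a == 1]
assert len(consecutive) == 6                 # P = 6/21 = 2/7

avoiding = [p for p in pairs if 1 not in p and 2 not in p]
assert len(avoiding) == 10                   # P = 10/21
```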
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4020247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Help me get my head around this simple linear regression problem Given a very simple linear projection model $Y=X\beta+e$ with $E(e|X)=0$ and $X$ a scalar. Notice that this is a simple linear model with no intercept.
Then $\beta = E(XY)/E(X^2)$ from the least squares formula.
On the other hand, if I take expectation on both sides of the original model, I get $E(Y)=\beta E(X)+E(e)=\beta E(X)$. Therefore, $\beta=E(Y)/E(X)$. (Of course, suppose $E(X)\neq 0$.)
I just don't see how the two quantities could be identical. Did I do something wrong?
| $E[f(X)Y]=E[f(X)X]\beta+E[f(X)e]=E[f(X)X]\beta$, so $\beta=\frac{E[f(X)Y]}{E[f(X)X]}=\frac{E[Y]}{E[X]}=E[\frac{Y}{X}]=E[\frac{f(X)}{E[f(X)]}\frac{Y}{X}]$ ...
Strange ...
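Both expressions do recover the same $\beta$ whenever the model with $E(e\mid X)=0$ holds: they are two moment conditions (taking $f(X)=X$ and $f(X)=1$ in the identity above) identifying the same coefficient. A small simulation sketch (the distributions are my own arbitrary choice):

```python
import random

random.seed(1)
beta = 2.0
n = 200_000

xs, ys = [], []
for _ in range(n):
    x = random.uniform(1.0, 3.0)
    e = random.choice([-1.0, 1.0]) * 0.5 * x   # symmetric, so E(e | X) = 0
    xs.append(x)
    ys.append(beta * x + e)

b_ls = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)  # E(XY)/E(X^2)
b_mean = sum(ys) / sum(xs)                                          # E(Y)/E(X)

assert abs(b_ls - beta) < 0.05
assert abs(b_mean - beta) < 0.05
```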
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4020477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find the coefficients of this polynomial. Given $f(x)=x^4 +ax^3 +bx^2 -3x +5$ , $ a,b\in \mathbb{R}$ and knowing that $i$ is a root of this polynomial. what are $a,b$?
My work until now: since all the coefficients are real, I know that $i$ and $-i$ are roots of this polynomial, so there must be two more roots. I tried to use Vieta's formulas, but it didn't really help: I get that $x_3+x_4=-a$ and $i(-i)x_3x_4=5$. What am I missing?
Appreciate any help.
| By long division,
$$
\require{enclose}
\begin{array}{r}
x^2+ax+(b-1)\\[-4pt]
x^2+1\enclose{longdiv}{x^4+ax^3+bx^2-3x+5}\\[-4pt]
\underline{x^4\phantom{\ +ax^3}+x^2\ }\phantom{-3x+5\ \ }\\[-2pt]
ax^3+(b-1)x^2-3x\phantom{\,+5\ \ }\\[-4pt]
\underline{ax^3\phantom{\,\,+(b-1)x^2}+ax}\phantom{\ +5}\\[-2pt]
(b-1)x^2-(a+3)x\ +5\\[-4pt]
\underline{(b-1)x^2\phantom{\ \qquad}+(b-1)}\\[-2pt]
-(a+3)x+(6-b)
\end{array}
$$
We must have that $-(a+3)x+(6-b)=0$. Therefore, $b=6\text{ and }a=-3$. Thus,
$$
\left(x^2+1\right)\left(x^2-3x+5\right)=x^4-3x^3+6x^2-3x+5
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4020695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Limit involving a Differential Equation
Problem:
Consider the differential equation $$\dfrac{dy}{dx} + 10y = f(x), \quad x>0,$$ where $f(x)$ is a continuous function such that $\displaystyle \lim_{x \to \infty} f(x) = 1$.
Find the value of $\displaystyle \lim_{x \to \infty} y(x)$.
My attempt:
Since this is a Linear Differential Equation, we can solve for $y(x)$ to get,
$$ y(x) = \dfrac{C + \int e^{10x} f(x)\,dx}{e^{10x}} $$
which gives,
$$ \require{cancel} \lim_{x \to \infty} y(x) = \cancelto{0}{\lim_{x \to \infty} \dfrac{C}{e^{10x}}} + \lim_{x \to \infty}\dfrac{{\int e^{10x} f(x)\,dx}}{e^{10x}} = \lim_{x \to \infty}\dfrac{{\int e^{10x} f(x)\,dx}}{e^{10x}} $$
But I don't know how to solve for the last limit involving the integral.
| $$y(x)=\frac{ \int _1^xe^{10 t} f(t)\,dt}{e^{10x}}+C e^{-10 x}$$
$$\lim_{x\to\infty}\frac{ \int _1^xe^{10 t} f(t)\,dt}{e^{10x}}\to\frac{\infty}{\infty}$$
use L'Hôpital's rule.
$$\frac{d}{dx}\int _1^xe^{10 t} f(t)\,dt=e^{10x}f(x)$$
so we have
$$\lim_{x\to\infty}\frac{e^{10 x} f(x)}{10e^{10x}}=\frac{1}{10}$$
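A numeric sanity check of the limit $1/10$, integrating the ODE with forward Euler for a sample $f$ with $f(x)\to 1$ (a sketch; the choice $f(x)=1+e^{-x}$ and the step size are mine):

```python
import math

def f(x):
    return 1.0 + math.exp(-x)   # continuous, with f(x) -> 1

y, x, dt = 0.0, 0.0, 1e-3
while x < 8.0:                  # y' = f(x) - 10*y, forward Euler steps
    y += dt * (f(x) - 10.0 * y)
    x += dt

assert abs(y - 0.1) < 0.01      # y(x) has settled near the limit 1/10
```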
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4020860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How to calculate the absolute extrema for this equation Basically, I am trying to find the absolute extrema for this problem, but I am stuck after a few steps.
$$y=\frac{4}{x}+\tan\left(\frac{\pi x}{8}\right),\quad x\in[1,2].$$
I have figured out the derivative, which is
$$y'=-\frac{4}{x^2}+\sec^2\left(\frac{\pi x}{8}\right)\cdot\frac{\pi}{8}.$$
However, after reaching this step, I am stuck. I have tried setting the derivative equal to zero, but I do not get anywhere as I don't know what to do next. How can I solve this problem? Any help is appreciated. Thanks.
To continue from your last step: setting the derivative to zero and simplifying gives
$$ \frac{x}{2}\sqrt{\frac{\pi}{8}}=\cos \left(\frac{\pi x}{8}\right).$$
This can be solved numerically by iterative methods, but it has no closed form.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4020935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Quotient of two monotone function Let $(f_n)$ and $(g_n)$ be two positive and monotone increasing sequences of real functions.
Is $\frac{f_n}{f_n+g_n}$ monotone? Does it converge?
My thoughts
$\frac{f_n}{f_n+g_n}=1-\frac{g_n}{f_n+g_n}=1+\frac{(-1)g_n}{f_n+g_n}$. As the numerator $(-1)g_n$ is monotone decreasing and the denominator monotone increasing, the quotient should be monotone decreasing.
If there is $N$ such that $\forall n>N$, the quotient is in some bounded subset (it is eventually bounded), then it converges.
Did I go in the right direction?
| Hint: $\frac {f_n} {f_n+g_n}=\frac1 {1+\frac {g_n} {f_n}}$. So this is montone iff $\frac {g_n} {f_n}$ is monotone. It is easy to construct examples where the ratio of two positive increasing functions is not monotone. (Look at constant functions). Also, the limit may not exist. I hope you can work out some exmaples.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4021056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Standardizing a binomial distribution Let $Y$ be a binomial random variable with parameters $n = 72$ and $p = 1/3$.
a) Find the exact probability $P(22 \le Y \le 28)$.
b) Is the condition $n \ge 9 \left( \frac{\max(p, 1-p)}{\min(p, 1-p)} \right)$, required to approximate the distribution of $Y$ by a normal distribution (textbook p.380), satisfied? If yes, approximate
$P(22 \le Y \le 28)$ with a normal probability.
I found a. I got $0.6$ but when I apply the central limit theorem and standardize it, I get $8.485$ and $-4.2$.
This already looks really wrong but I put it in R anyways by doing pnorm(8.485) - pnorm(-4.2) and I get $.999$.
The weird thing is if i code
pnorm(28,24,4)-pnorm(22,24,4) I get $.53$.
Then I remembered the .5 rule so I did pnorm(28.5,24,4)-pnorm(21.5,24,4) which is $.65$ and doing pnorm(28,24,4)-pnorm(21,24,4) gave me $.61$ which is REALLY close now to the answer. So I'm pretty sure the normal distribution DOES work in this case but I do not know why every time I standardize it I get a big mess.
| $$np=24$$
$$npq=16$$
CLT in the version of De Moivre- Laplace is
$$\frac{\Sigma_i X_i-np}{\sqrt{npq}}\sim \Phi$$
Thus
$$\mathbb{P}[22\leq Y\leq 28]=\mathbb{P}[21.5<Z<28.5]= $$
$$=\Phi\left[\frac{28.5-24}{4}\right]- \Phi\left[\frac{21.5-24}{4}\right] = 0.87-0.27=0.6 $$
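Both the exact value and the continuity-corrected approximation can be checked directly (a sketch using only the standard library; the normal CDF is built from `math.erf`):

```python
import math

n, p = 72, 1 / 3

def binom_pmf(y):
    return math.comb(n, y) * p**y * (1 - p) ** (n - y)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

exact = sum(binom_pmf(y) for y in range(22, 29))       # P(22 <= Y <= 28)

mu, sigma = n * p, math.sqrt(n * p * (1 - p))          # 24 and 4
approx = phi((28.5 - mu) / sigma) - phi((21.5 - mu) / sigma)

assert 0.55 < exact < 0.65            # the exact answer is about 0.6
assert abs(exact - approx) < 0.02     # continuity correction is accurate here
```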
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4021263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I examine the behavior of the Reciprocal Multifactorial Series $\Sigma \frac{1}{n!^{(k)}}$ as k approaches infinity. This question is a direct continuation of my previous question.
How to prove the formula for the Reciprocal Multifactorial constant?
User metamorphy provided a lucid proof for the closed form equation of the Reciprocal Multifactorial Constant. And also a statement on its asymptote as k approaches infinity. Which is as follows:
$$m(k)=1+e^{1/k}(H_k-\Delta_k)\quad\text{and}\quad \Delta_k=\frac1k\underbrace{\int_0^1\frac{1-t}{1-t^{1/k}}t^{1/k}\frac{1-e^{-t/k}}{t}\,dt}_{\text{has a finite $k\to\infty$ limit}}\underset{k\to\infty}{\longrightarrow}0.$$
I was aware that the Reciprocal Multifactorial would eventually approach a harmonic series due to the definition of a multifactorial, which is as follows:
$$n!^{(k)} = \begin{cases}
1 & \text{if $n=0$} \\
n & \text{if $0<n\leq k$} \\
n\left((n-k)!^{(k)}\right) & \text{if $n>k$}
\end{cases}$$
As k approaches infinity the $k^{th}$ multifactorial will tend towards the natural numbers. And the reciprocal series would approach $1+H_k$
However I am asking this question to know how to prove this using the closed form formula as metamorphy did in the previous question. I wasn't able to solve it myself.
Unrelated: Do you think there are any other interesting properties about this series worth examining, since I've found it quite interesting?
| I think I have to answer it myself. Let the point of departure be $$m(k)=1+\frac{e^{1/k}}{k}\sum_{r=1}^k\int_0^1 t^{r/k-1}e^{-t/k}\,dt,$$ appearing in my post immediately before the substitution $t=kx$.
The statement being discussed just comes from $e^{-t/k}=1-(1-e^{-t/k})$: $$\int_0^1 t^{r/k-1}e^{-t/k}\,dt=\frac{k}{r}-\int_0^1 t^{r/k-1}(1-e^{-t/k})\,dt,$$ giving $m(k)=1+e^{1/k}(H_k-\Delta_k)$ as written: $$\Delta_k=\frac1k\int_0^1\left(\sum_{r=1}^k t^{r/k}\right)\frac{1-e^{-t/k}}{t}\,dt=\frac1k\int_0^1\frac{(1-t)(1-e^{-t/k})}{t(t^{-1/k}-1)}\,dt.$$
For curiosity, here's the asymptotics of $\Delta_k$ in more detail:
\begin{align*}
\Delta_k&=\frac1k f\Big(\frac1k\Big),\\
f(\lambda)&=\int_0^1\frac{(1-t)(1-e^{-\lambda t})}{t(e^{-\lambda\log t}-1)}\,dt\asymp\sum_{n=0}^{(\infty)}a_n\lambda^n,\\
a_0&=\int_0^1\frac{1-t}{-\log t}\,dt=\color{blue}{\log 2},\\
a_1&=-\frac12\int_0^1\frac{1-t}{-\log t}(t-\log t)\,dt=\color{blue}{-\frac14-\frac12\log\frac32},\\
a_2&=\frac{1}{12}\int_0^1\frac{1-t}{-\log t}(2t^2-3t\log t+\log^2 t)\,dt=\color{blue}{\frac{5}{48}+\frac16\log\frac43},\\
a_3&=-\frac{1}{24}\int_0^1\frac{1-t}{-\log t}(t^3-2t^2\log t+t\log^2 t)\,dt=\color{blue}{-\frac{11}{864}-\frac{1}{24}\log\frac54},
\end{align*}
and so on. For $m(k)$, use the power series of $e^{1/k}$, and known asymptotics of $H_k$...
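The convergence $m(k)-(1+H_k)\to 0$ can also be checked numerically by summing the reciprocal-multifactorial series directly (a sketch; the truncation threshold is mine, and is safe because $n!^{(k)}$ is nondecreasing in $n$):

```python
def multifact(n, k):
    """n!^(k): the product n*(n-k)*(n-2k)*... while the factor stays positive."""
    r = 1
    while n > 0:
        r *= n
        n -= k
    return r

def m(k):
    """Reciprocal multifactorial sum, truncated once terms drop below 1e-18."""
    total, n = 0.0, 0
    while True:
        mf = multifact(n, k)
        if n > k and mf > 10**18:
            break
        total += 1.0 / mf
        n += 1
    return total

def harmonic(k):
    return sum(1.0 / i for i in range(1, k + 1))

assert abs(m(1) - 2.718281828459045) < 1e-9   # k = 1 recovers sum 1/n! = e

d10 = m(10) - (1.0 + harmonic(10))
d100 = m(100) - (1.0 + harmonic(100))
assert 0 < d100 < d10 and d100 < 0.06         # the gap shrinks toward 0
```

The gap behaves roughly like $(H_k-\log 2)/k$, consistent with the asymptotics above.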
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4021528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are $\exists ! (x,y)$ and $\exists ! x \exists ! y$ equivalent? Here I have the following definition of $\exists ! (x,y) P(x,y)$:
$$\exists x \exists y[P(x,y) \land \lnot \exists u \exists v (P (u,v) \land (u \neq x \lor v \neq y))]$$
Is that formula equivalent to $\exists ! x \exists ! y P(x,y)$?
I'm using that $\exists ! xP(x)$ is equivalent to:
$$\exists x (P(x) \land \lnot \exists u (P(u) \land u \neq x))$$
I could not prove them equivalent, but don't know how to prove that it's not equivalent.
| Let $P(x,y)$ be defined by $y^2 = x$.
Then $\exists!(x,y)P(x,y)$ is false, since there are infinitely many pairs $(x,y)$ that solve the equation.
However, $\exists!x(\exists!yP(x,y))$ is true, since $x=0$ is the only value of $x$ for which there is exactly one $y$ solving the equation (namely $y=0$).
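The two readings can be checked mechanically over a finite fragment of this example (a sketch; I restrict $x,y$ to a small integer domain of my choosing):

```python
dom = range(-3, 4)
P = lambda x, y: y * y == x

pairs = [(x, y) for x in dom for y in dom if P(x, y)]
unique_pair = len(pairs) == 1                  # reading: ∃!(x,y) P(x,y)

def unique_y(x):
    return sum(1 for y in dom if P(x, y)) == 1

unique_x_unique_y = sum(1 for x in dom if unique_y(x)) == 1  # ∃!x ∃!y P(x,y)

assert pairs == [(0, 0), (1, -1), (1, 1)]
assert not unique_pair          # three solutions, so ∃!(x,y) fails
assert unique_x_unique_y        # only x = 0 has exactly one y (namely 0)
```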
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4021834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Simplify this statistical average I have a quantity $\tau$ given by:
$$
\frac{1}{\tau} = \frac{1}{\tau_1}+\frac{1}{\tau_2}+\frac{1}{\tau_3}
$$
where $\tau_1$, $\tau_2$ and $\tau_3$ are some constituent quantities. Now these $\tau$'s are function of a variable $x$, but the ensemble averages of $\tau_i$ are known, given by:
$$
\frac{\langle \tau_i^2\rangle}{\langle \tau_i\rangle^2} = \alpha_i
$$
Is there a way to express or simplify the ensemble average of total $\tau$ in terms of $\alpha_i$'s:
$$
\frac{\langle \tau^2\rangle}{\langle \tau\rangle^2}
$$
If not, what would be the necessary information needed w.r.t. the actual distribution of $\tau_i$ with $x$.
| First things first, as we know no relations between the $\tau_i$'s a priori, it's best to split this up via linearity of expectation.
Second, there is not a way in terms of your $\alpha_i$'s. This can be shown by considering the exponential distribution, where the $\alpha_i$'s will always be $2$, but the expectation of the inverse varies.
As for what would be proper information, I believe the only real way is to find $\mathbb{E}(1/X_i)$. One trick I've seen people use to compute this is to write it as
$$\mathbb{E}(1/X_i) = \mathbb{E}\left(\int_0^\infty e^{-tX_i}dt \right) = \int_0^\infty \mathbb{E}(e^{-tX_i})dt$$
So maybe that will help. Best of luck.
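The identity can be sanity-checked in a case where everything is known in closed form: if $X$ is Erlang$(2,1)$ (a sum of two unit exponentials), then $\mathbb{E}(e^{-tX})=(1+t)^{-2}$ and $\mathbb{E}(1/X)=1$. A rough numerical sketch (the cutoff $T$ and step count are mine):

```python
def laplace(t):
    """E[e^{-tX}] for X ~ Erlang(2, 1), a sum of two unit exponentials."""
    return (1.0 + t) ** -2

# Trapezoid rule on [0, T]; the exact tail beyond T is 1/(1+T).
T, steps = 1000.0, 200_000
h = T / steps
integral = h * (0.5 * laplace(0.0) + 0.5 * laplace(T)
                + sum(laplace(i * h) for i in range(1, steps)))

# E[1/X] = integral of the Laplace transform over [0, inf) = 1 here.
assert abs(integral + 1.0 / (1.0 + T) - 1.0) < 1e-3
```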
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4021970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What do we call the domain and codomain of a functor? Do they have special names? Do the domain and codomain categories of a functor have special names, or are they simply called the domain and codomain? It seems to me that it might not be appropriate to call them as such, since we usually do this when we're talking about functions, whose domains have to be sets. But the "domain" of a functor need not be a set.
| The usual terminology for morphisms (so in particular functors) in category theory is that of source and target. So if $C,D$ are categories and $F\colon C\rightarrow D$ is a functor, you can refer to $C$ as the source (or source category, if there is possibility of ambiguity) of $F$ and $D$ as the target (or target category) of $F$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4022078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove the Handshake Theorem by induction
Let $n \in \mathbb{N}$, and assume $n \geq 1$. Suppose you are at a party of $n$ people (including yourself). At the end of the party, define a person's parity as odd if they have shaken hands with an odd number of people, and even if they have shaken hands with an even number of people. Prove that the number of people of odd parity must be even.
My approach is first we find the base case, which is when $n = 1$, then there are $0$ people have shaken hands with an odd number of people, and $0$ is even, we are done.
For the induction step, we assume that for a party of $k \in \mathbb{N}$ people the number of people of odd parity must be even, and we want to show that for $k+1$ people the number of people of odd parity is even as well. I tried to split it into two cases:
*
*Case 1: The new person's parity is even
*Case 2: The new person's parity is odd
But then I'm kind of stuck and don't know what to do since there are so many cases that can be and I wonder if there's a smarter approach.
When a new person is added, that person shakes hands with $p$ people, and that alters the previous parities.
Assuming the statement true for some k:
*
*There are $k_1$ even people
*There are $k_2$ odd people and $k_2$ mod $2$ = $0$
Adding a new person who shake hands with $p$ people (case $k+1$)
*
*If he shakes hands with $p_1$ even people, those people become odd
*If he shakes hands with $p_2$ odd people, those people become even
*$p_1$ + $p_2$ = $p$
The new number of odd people are:
*
*Case1: $k_2$ - $p_2$ + $p_1$ + 1 if $p$ is odd
*Case2: $k_2$ - $p_2$ + $p_1$ if $p$ is even
By induction hypothesis, $k_2$ is even.
And because $p_1 - p_2 = p - 2p_2$, we can conclude that $p_1 - p_2 \equiv p \pmod 2$.
In both cases, the number of odd people is even.
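The invariant is easy to test empirically (a sketch; the parameters are arbitrary, and I count handshakes, so a pair may shake more than once — the parity argument is unaffected): after every handshake, the number of odd-parity people stays even.

```python
import random

random.seed(0)
n = 30
shakes = [0] * n   # handshake count per person

for step in range(500):
    a, b = random.sample(range(n), 2)   # one handshake between two people
    shakes[a] += 1
    shakes[b] += 1
    odd = sum(1 for s in shakes if s % 2 == 1)
    assert odd % 2 == 0                 # holds after every single handshake
```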
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4022215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
Quantifier Elimination Pure Identity Language in Chang In Chang & Keisler's Model Theory, quantifier elimination on the theory of the pure identity language is shown. However, I'm confused about the notion of "basic formula" for which any quantified sentence needs to equivalent to a boolean combination of. Atomic formulas are used as these basic formulas for showing QE for the theory of dense linear orders, however the authors note that doing so in the theory of the pure identity language does not permit $\forall xy(x \equiv y)$ to be expressed as an open formula so the authors add the sentences saying there are at least $n$ distinct elements in the model to their basic formulas. (I believe $n$ is the number of variables being quantified over in the original sentence). The question is, how can we use these sentences for QE when they themselves use quantifiers?
I.e., if $\sigma_3$ is the sentence saying there are $3$ distinct elements in our model, how can we perform QE elimination on this sentence?
| This is a fairly common phenomenon. Many interesting theories don't admit quantifier elimination, but do if you extend the language. In your example, if you add predicates $\phi_n$ to the pure theory of equality that assert that the universe has at least $n$ elements, you get quantifier elimination. To define the $\phi_n$ in the original language involves quantifiers, but for many purposes the quantifier elimination process produces a formula in the extended language that is just what you need.
E.g., the quantifier elimination procedure for Presburger arithmetic has to add predicates $\delta_n(x)$ asserting that $n \mid x$ to the language for $n = 2, 3, \ldots$. Presburger's quantifier elimination procedure still gives us a decision procedure, because given a term $t$ with no free variables, we can calculate the truth value of $\delta_n(t)$ for any $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4022421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Cyclic $d$-polytope -- does every facet border every other facet? A cyclic $d$-polytope is "neighborly," where every set of vertices sized $d/2$ together make up a face. What I'm interested in: in a cyclic $d$-polytope with $N$ facet ($d-1$ dimensional faces), does every facet "border" every other facet (i.e., share a $d-2$ dimensional face)?
Why I think it might: we can use the upper bound theorem in dual form (https://www.cs.mcgill.ca/~fukuda/soft/polyfaq/node12.html) to find the total number of $d-2$ dimensional faces for a $d$-polytope with $N$ facets. If you use the formula in the provided link, we find that there are $N(N-1)/2$ such $d-2$ dimensional faces. Since every such face borders $2$ facets, this total number of $d-2$ dimensional faces suggests that the property holds.
| The cyclic polytope C(4,6) has nine tetrahedron facets. Each of these facets can only border four other facets, so not every facet borders all other facets.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4022569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
If $T$ is invertible and $K$ is compact, is $0\ne\lambda\mapsto(\lambda T+K)^{-1}$ holomorphic? Let $X$ be a Banach space, $T\in\mathfrak L(X)$ be bijective, $K\in\mathfrak L(X)$ be compact and $$B:\mathbb C\to\mathfrak L(X)\;,\;\;\;\lambda\mapsto\lambda T+K.$$
Can we show that $B(\lambda)$ is bijective for all $\lambda\in\mathbb C\setminus\{0\}$ and $$\mathbb C\setminus\{0\}\to\mathfrak L(X)\;,\;\;\;\lambda\mapsto B(\lambda)^{-1}$$ is holomorphic?
This is clearly, if correct, an application of the analytic Fredholm theorem.
The trick is clearly to write $$B(\lambda)=\lambda T\left(1+\frac1\lambda T^{-1}K\right)\tag1\;\;\;\text{for all }\lambda\in\mathbb C\setminus\{0\}.$$ $\mathbb C\ni\lambda\mapsto\lambda T$ and $\mathbb C\setminus\{0\}\ni\lambda\mapsto\frac1\lambda T^{-1}K$ are trivially holomorphic. Moreover, $T^{-1}K$ is compact.
So, the only missing piece is that we need to show that $B(\lambda)$ is bijective for at least one $\lambda\in\mathbb C$. But why is that the case?
| You seem to assume invertibility of $T$. In that case $T+\frac 1 n K \to T$ in operator norm. Since the set of all invertible operators is open, it follows that $T+\frac 1 n K$ is invertible for large enough $n$, and this implies that $nT+K$ is invertible for such $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4022700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How fast does the sequence converge? We have the sequence $$a_1=\frac{9}{4} \quad \qquad a_{n+1}=2\frac{a_n^3-a_n^2+1}{3a_n^2-4a_n+1}$$ that converges to $2$.
I want to check how fast the convergence is.
Do we have to calculate $\lim_{n\rightarrow \infty}\frac{|a_{n+1}-9/4|}{|a_n-9/4|}$ ?
| Hint: Define
$$f(x)=2\frac{x^3 - x^2 + 1}{3 x^2 - 4 x + 1}$$
Prove that the sequence $a_{n+1}=f(a_n)$ converges to $2$ with $a_1 =\frac{9}{4}$ as you said.
\begin{align}
a_{n+1} &= f(a_n) \\
&= 2+\frac{2(a_n-2)^2\,a_n}{((a_n-2)+1)(3(a_n-2)+5)} \tag{1}\\
\end{align}
Denote $b_n = a_n-2$, we have $b_n \rightarrow 0$ when $n\rightarrow +\infty$ or $b_n=\mathcal{o}(1)$. And from (1) we have
$$b_{n+1}+2 = f(b_n+2)$$
or
\begin{align}
b_{n+1} &=f(b_n+2)-2 \\
&= \frac{2b_n^2(b_n+2)}{(b_n+1)(3b_n+5)} \\
&= \frac{4}{5}b_n^2\left(1+\frac{b_n}{2}\right)\left(1-b_n+\mathcal{O}(b_n^2)\right)\left(1-\frac{3}{5}b_n+\mathcal{O}(b_n^2)\right)\\
&= \frac{4}{5}b_n^2-\frac{22}{25}b_n^3+\mathcal{O}(b_n^4)\\
&= \frac{4}{5}b_n^2(1+\mathcal{O}(b_n)) \tag{2}\\
\end{align}
Then,
$$\ln(b_{n+1})= \ln(\frac{4}{5})+ 2\ln(b_{n}) +\ln(1+\mathcal{O}(b_n)) $$
$$\iff \left(\ln(b_{n+1})+\ln\left(\frac{4}{5}\right)\right)= 2\left(\ln(b_{n})+\ln\left(\frac{4}{5}\right)\right) +\mathcal{O}(b_n) \tag{3}$$
We notice that from (2), we can deduce also that $b_{n+1}=\mathcal{O}(b_n^2)=\mathcal{o}(b_n)$ or $\mathcal{O}(b_{n+1})=\mathcal{o}(b_n)$. So,
$$(3)\iff \left(\ln(b_{n+1})+\ln\left(\frac{4}{5}\right)+\mathcal{O}(b_{n+1})\right)= 2\left(\ln(b_{n})+\ln\left(\frac{4}{5}\right)+\mathcal{O}(b_n)\right) $$
$$\iff \ln(b_{n})+\ln\left(\frac{4}{5}\right)+\mathcal{O}(b_{n})= 2^{n-1}\left(\ln(b_{1})+\ln\left(\frac{4}{5}\right)\right) =2^{n-1}\ln\left(\frac{1}{5}\right) $$
or
$$b_n\approx\frac{5}{4} \left(\frac{1}{5} \right)^{2^{n-1}}$$
(using $b_1=a_1-2=\frac14$, so $\ln(b_1)+\ln\left(\frac45\right)=\ln\left(\frac15\right)$; and because $b_n \rightarrow 0$ when $n \rightarrow +\infty$, $\exp(\mathcal{O}(b_{n}))\rightarrow 1$)
Conclusion:
$$a_n \approx 2+\frac{5}{4} \left(\frac{1}{5} \right)^{2^{n-1}}$$
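The key estimate $b_{n+1}=\frac{4}{5}b_n^2(1+\mathcal{O}(b_n))$ can be corroborated with exact rational arithmetic; the following quick check is mine, not part of the original answer:

```python
from fractions import Fraction

# b_n = a_n - 2; the recurrence predicts b_{n+1} / b_n^2 -> 4/5.
a = Fraction(9, 4)
ratios = []
for _ in range(5):
    b = a - 2
    a = 2 * (a**3 - a**2 + 1) / (3 * a**2 - 4 * a + 1)
    ratios.append(float((a - 2) / b**2))
```

The ratios increase towards $0.8$, confirming the quadratic (hence doubly exponential) rate of convergence.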
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4022904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Integral of a function in the 1st/3rd quadrant If $f(x)\cdot x>0$ for $x\ne 0$ and $f(0)=0$, is it right that $\int_0^x f(t)\,dt>0$?
| No. Consider $x = 0$.
Assuming $x \neq 0$, consider the two cases of $x > 0$ and $x < 0$. Use the fact that the integral of a Riemann integrable function which is strictly positive (apart from a point) has a strictly positive integral. (See this for example.)
(The answer is "Yes" then.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4023237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculate $\int_\gamma\frac{|z|e^z}{z^2}dz$, where $\gamma$ is the circle of radius 2 centered at the origin. Calculate $$\int_\gamma\frac{|z|e^z}{z^2}dz,$$ where $\gamma$ is the circle of radius 2 centered at $0\in\mathbb{C}$.
I wanted to apply the Residue Theorem, but I think it is not possible because $|z|$ is not analytic at the origin. Also, I tried to calculate the integral using the parametrization $z=2e^{i\theta}$, $0\leq\theta\leq 2\pi$. Then $$\int_\gamma\frac{|z|e^z}{z^2}dz=\int_0^{2\pi}\frac{|2e^{i\theta}|e^{2e^{i\theta}}}{4e^{2i\theta}}2ie^{i\theta}d\theta= i\int_0^{2\pi}\frac{e^{2e^{i\theta}}}{e^{i\theta}}d\theta.$$ But I have not been able to solve that integral.
I thank any suggestion.
| As @leoli1 and @Gary hint,$$\int_{|z|=2}\frac{|z|e^zdz}{z^2}=\int_{|z|=2}\frac{2e^zdz}{z^2}=2\pi i\operatorname{Res}_{z=0}\frac{2e^z}{z^2}=4\pi i.$$Alternatively, since $z^2=4e^{2i\theta}$ on the circle, the parametrization gives$$i\int_0^{2\pi}\frac{e^{2e^{i\theta}}}{e^{i\theta}}d\theta=i\int_0^{2\pi}2\,d\theta=4\pi i,$$because expanding $e^{2e^{i\theta}}=\sum_{k\ge0}\frac{2^ke^{ik\theta}}{k!}$ and using $\int_0^{2\pi}e^{ik\theta}d\theta=2\pi\delta_{k0}$ for $k\in\Bbb Z$ leaves only the $k=1$ term.
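As an extra sanity check (mine, not from the thread), the trapezoidal rule on the parametrized circle reproduces $4\pi i$ essentially to machine precision:

```python
import cmath

# Numerically evaluate the contour integral of |z| e^z / z^2 over |z| = 2.
N = 4096
total = 0j
for k in range(N):
    t = 2 * cmath.pi * k / N
    z = 2 * cmath.exp(1j * t)
    dz = 1j * z * (2 * cmath.pi / N)      # dz = 2i e^{i t} dt
    total += abs(z) * cmath.exp(z) / z**2 * dz
```

For a smooth periodic integrand, the equally spaced sum converges spectrally, so a few thousand points suffice.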
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4023384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that $(z^a)^b = z^{ab}e^{2kb\pi i}$ I'm trying to prove that
$$
(z^a)^b = z^{ab}e^{2kb\pi i},
$$
where $a,b,z\in\mathbb{C}$ and $k\in\mathbb{Z}$.
So far, I have that
\begin{align*}
(z^a)^b & = (\exp(a\log z))^b \\
& = \exp(b\log(\exp(a\log z))),
\end{align*}
and I know that the complex logarithm can be defined such that $\log z=\log|z|+i\theta+2k\pi i$, where $\theta\in(-\pi,\pi]$ is the principal argument of $z$.
I feel like I've got all the components I need, but I'm not quite sure how to piece it together to get the desired result.
| Instead use $(z^a)^b=(e^{a\ln z+2k\pi i})^b=e^{ab\ln z+2kb\pi i}=e^{ab\ln z}e^{2kb\pi i}=z^{ab}e^{2kb\pi i}$.
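With principal branches this identity can be illustrated numerically (my own example; the particular $z,a,b$ are arbitrary choices, picked so that $a\operatorname{Log} z$ leaves the principal strip and $k\ne0$):

```python
import cmath

z, a, b = -2 + 0.1j, 2.0, 0.5 + 0.3j
# Log(z^a) = a Log z + 2*pi*i*k for some integer k; here Arg z is close to pi,
# so doubling it wraps around and k = -1.
k = round((cmath.log(z**a) - a * cmath.log(z)).imag / (2 * cmath.pi))
lhs = (z**a) ** b
rhs = z ** (a * b) * cmath.exp(2j * cmath.pi * k * b)
```

Note that `cmath.log` is the principal logarithm, matching the convention in the question.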
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4023507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Nested sums - reciprocal of a sum : Find exact value $$\text{Find:} ~~~~~~ \sum_{k=1}^{\infty} \frac{1}{ \left (
\sum_{j=k}^{k^2} \frac{1}{\sqrt{~j~}} \right )^2}$$
(Beware of the bounds of $j$, it does not always start from $1$, but starts at $k$ and goes to $k^2$)
At first I thought it has something to do with Riemann's zeta function $\zeta$ and thus the solution is:
$$ \zeta \left( \sum_{j=k}^{k^2} \frac{1}{\sqrt{~j~}} \right ) $$
However this totally seems incorrect because we cannot find this sum as we have an unknown, $k$.
Which also is being summed to $\infty$.
I thought (however was unsure) of the fact that:
$$ \left ( \sum f \right )^2 \ge \sum f^2 $$
Thus (maybe) it is trivial that the sum converges to some value, which according to a little program I wrote is about: $$ \sum_{k=1}^{\infty} \frac{1}{ \left (
\sum_{j=k}^{k^2} \frac{1}{\sqrt{~j~}} \right )^2} \approx 1.596$$
But this is not a rigorous proof, is there a way to solve this so we get a solution by hand - calculating the exact value of this sum if it is finite (if not, why? )
Any mathematical tools (not programs) are acceptable, because this is not a question from a specific course ( I don't have context for this).
| Let
$$a_k=\sum_{j=k}^{k^2} \frac{1}{\sqrt{j}}=H_{k^2}^{\left(\frac{1}{2}\right)}-H_{k-1}^{\left(\frac{1}{2}\right)} $$ and
$$S_p=\sum_{k=1}^{p} \frac{1}{ a_k^2}$$
Using asymptotics of the harmonic numbers and continuing with Taylor series
$$a_k=2 k-2 \sqrt{k}+\frac 1{2\sqrt k}+\frac 1{2k}+\frac 1{24 k\sqrt k}-\frac 1{24k^3}+O\left(\frac{1}{k^{7/2}}\right)$$
$$\frac{1}{ a_k^2}=\frac 1{4k^2}+\frac 1{2k^{5/2}}+\frac 3{4k^{3}}+O\left(\frac{1}{k^{7/2}}\right)$$
$$\sum_{k=1}^{\infty} \frac{1}{ a_k^2}=\sum_{k=1}^{p-1} \frac{1}{ a_k^2}+\sum_{k=p}^{\infty} \frac{1}{ a_k^2}$$
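Independently of the asymptotics, the numeric value quoted in the question can be reproduced by updating the inner sum incrementally instead of recomputing it from scratch (my own sketch; the cutoff $K$ and the crude $\frac{1}{4K}$ tail correction from $\frac1{a_k^2}\sim\frac1{4k^2}$ are ad-hoc choices):

```python
import math

K = 1500
a = 1.0                  # a_1: the inner sum for k = 1 is just the j = 1 term
S = 1.0                  # 1 / a_1^2
for k in range(2, K + 1):
    a -= 1.0 / math.sqrt(k - 1)                      # drop j = k - 1
    for j in range((k - 1) ** 2 + 1, k**2 + 1):      # add j = (k-1)^2+1 .. k^2
        a += 1.0 / math.sqrt(j)
    S += 1.0 / a**2
approx = S + 1.0 / (4 * K)   # crude tail estimate
```

This agrees with the $\approx 1.596$ reported in the question.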
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4023702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Find a matrix $A$ with no zero entries such that $A^3=A$ I took a standard $2 × 2$ matrix with entries $a, b, c, d$ and multiplied it out three times and tried to algebraically make it work, but that quickly turned into a algebraic mess. Is there an easier method to solve this?
| One way to simplify the problem is to impose some further assumptions on $A$. For example, suppose we further assume that $A$ is singular. Then $A=uv^T$ for some vectors $u$ and $v$. The requirements that $A^3=A$ and $A\ne0$ entrywise are now equivalent to $(v^Tu)^2=1$ and $u,v\ne0$ entrywise, which are utterly easy to fulfil (when the characteristic of the underlying field is not $2$).
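A concrete instance of this construction over $\mathbb{R}$ (my own example): take $u=(1,1)^T$ and $v=(\frac12,\frac12)^T$, so $v^Tu=1$ and $A=uv^T$ is idempotent; in particular $A^3=A$ and no entry is zero.

```python
u = [1.0, 1.0]
v = [0.5, 0.5]                  # v^T u = 1, so A = u v^T is idempotent
A = [[ui * vj for vj in v] for ui in u]

def matmul(X, Y):
    # plain 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A3 = matmul(matmul(A, A), A)
```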
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4023816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 9,
"answer_id": 4
} |
Showing that the set $S$ is countable Here's the question:
Let $S$ be a set of positive real numbers. There is a constant $M>0$ such that for any $\{ a_1,a_2,\ldots, a_k \}\subset S$, we have that
$$\sum_{n=1}^{k}a_n\le M \tag{1}$$
Prove that $S$ is at most countable.
Here's my attempt:
If $S$ is finite then there is nothing to prove. So, we prove the following:
Claim. If $S$ is any infinite set of positive real numbers satisfying condition $(1)$ then $\max S$ exists.
Proof. Let $S$ be a set satisfying $(1)$. Then $M$ is an upper bound for the set $S$. By the least upper bound axiom, $\alpha=\sup S$ exists. Since $S$ is a set of positive reals, $\alpha >0$. If the maximum existed then it must be equal to $\alpha$. So, suppose that it does not exist. Choose an $\varepsilon >0$ such that $\alpha-\varepsilon >0$. Then by the Archimedean property, $M\le k(\alpha - \varepsilon)$ for some $k\in \mathbb{N}$.
Now, since $\alpha$ is the supremum and not the maximum, we can select distinct elements $a_1,a_2, \ldots , a_k$ from $S$ such that $\alpha - \varepsilon < a_i < \alpha$ for each $i$. Then $\sum_{n=1}^{k}a_n> k(\alpha - \varepsilon ) \ge M$ but this would contradict our assumption that $(1)$ holds. So, $\alpha$ must be the maximum. $\blacksquare$
Now, let $S$ be any set of positive real numbers satisyfing $(1)$. Then let $a_1= \max S$. Also $S \setminus \{ a_1 \}$ satisfies $(1)$ so, let $a_2=\max (S\setminus\{ a_1 \})$. We repeat this argument to obtain a sequence $\{ a_k \}$ such that $a_k=\max (S\setminus \{ a_1, a_2 , \ldots, a_{k-1} \})$.
Clearly this $\{ a_k \}$ is decreasing and there is no $a\in S$ such that $a_{k+1}<a<a_k$ for any $k$. Also, $\sum_{k=1}^{\infty}a_k$ converges, so, $a_n \to 0$.
If $a \in S\setminus \{a_k\}_{k\in \mathbb{N}}$ then $a<a_k$ for each $k$. So $a\le0$ (by taking limits) which is not possible. So $S=\{ a_k\}_{k\in \mathbb{N}}$.
Is this proof correct? Alternative proofs are also welcome!
| It wouldn’t hurt to say something to justify the assertion that $\sum_{n\ge 1}a_n$ converges, but the proof is correct. (There is a typo in that sum though: you have $\sum_{\color{red}k=1}^\infty a_{\color{red}n}$.) Here’s a slightly different approach.
Let $\alpha=\sup S>0$. For $n\in\Bbb N$ let $I_n=\left(2^{-(n+1)}\alpha,2^{-n}\alpha\right]$; then
$$S\subseteq(0,\alpha]=\bigcup_{n\ge 0}I_n\,,$$
and the intervals $I_n$ are pairwise disjoint. There are only countably many of these intervals, so if $S$ is uncountable, there must be an $n\in\Bbb N$ such that $S\cap I_n$ is uncountable. But then if $K$ is a $k$-element subset of $S\cap I_n$, we have
$$\sum K>2^{-(n+1)}\alpha k\,,$$
and this can be made arbitrarily large by choosing $k$ large enough, contradicting $(1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4023932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Collatz Conjecture, can the following skip a prime number? Following my previous questions at: Collatz Conjecture, why an increment of $+6$ in the following? and Collatz Conjecture, why a rate of change of $*4$ in the following?
Following the rules of the Collatz Conjecture, in this experiment I have created a list of all odd numbers until $33333$. The list includes 3 columns, such as in the following sample:
| A) Starting odd $(X)$ | B) $(X \cdot 3)+1$ | C) $X/2$, repeat until odd |
|---|---|---|
| 1 | 4 | 2, (1) |
| 3 | 10 | (5) |
| 5 | 16 | 8, 4, 2, (1) |
| 7 | 22 | (11) |
| 9 | 28 | 14, (7) |
| 11 | 34 | (17) |
| 13 | 40 | 20, 10, (5) |
| 15 | 46 | (23) |
| 17 | 52 | 26, (13) |
| 19 | 58 | (29) |
| 21 | 64 | 32, 16, 8, 4, 2, (1) |
| 23 | 70 | (35) |
| 25 | 76 | 38, (19) |
...
You will notice that all the final odd results in column C) represent a list of all the prime numbers, as denoted in the () in column C): 1, 5, 7, 11, 13, 17, 19, 23, 29.
Is it possible to skip a prime number in that list (with the exception of $3$)?
| No, you should actually get every odd number which isn't a multiple of $3$ (and, in particular, every odd prime save $3$).
Note that any prime $p\ne2,3$ is either $1$ or $5$ mod $6$. Suppose $p=6k+5$. Then set $X=4k+3$, so that $3X+1=12k+10$ and dividing by $2$ gives $6k+5=p$. Now suppose $p=6k+1$. Then let $X=8k+1$, so that $3X+1=24k+4$. Dividing by $2$ twice gives $6k+1=p$, as desired.
In fact, since nothing here depends on $p$ being prime, this shows that any odd number which isn't a multiple of $3$ is a final number in your column (c).
Conversely, note that no multiple of $3$ can ever be a final number in (c). (In fact, no multiple of $3$ can ever be an element in that column!) After all, if $X=2k+1$, then the final number is of the form $2^{-n}(6k+4)$. Note that $6k+4$ isn't divisible by $3$, and so no number in the last column can be of the form $3k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4024058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A few questions about cyclic groups I cannot understand how $\langle 1 \rangle$ generates $(\mathbb{Z},+)$. I know that this element can create all natural numbers, simply by adding $1$ a certain number of times, but the negative numbers cannot be created. So what I thought is that each element has a negative, but still, I can't see how $\langle 1 \rangle$ can create by itself all elements of $\mathbb{Z}$; only $-1$ and $1$ together can create all elements.
In addition, I have seen a couple of times that, for example, the group $Z^{*}_8$ is defined as $Z^{*}_8=\{[1],[3],[5],[7]\}$. In order to find the order of an element, we take powers of the element until we start returning to elements already created; say $\langle 5 \rangle=\{[5]^{1},[5]^{2},\dots\}=\{[5],[1]\}$, so the order is 2. Now, I know that the power represents the number of times we apply the operation $*$ of the group, but if so, why is it that with $Z^{*}_8$ we read $*$ as multiplication ($\cdot$), while in the group $(\mathbb{Z},+)$ we compute $\langle 2 \rangle=\{2,2+2,2+2+2,...\}$ and so on?
I hope you can help me understand this. Thanks!
| A group $G$, written multiplicatively, is called cyclic if there is an element $a \in G$ with $G = \langle a \rangle =\{a^n \mid n \in \mathbb{Z}\}$. Note that $n$ spans all the integers. If the group is written additively, this becomes
$$
G = \langle a \rangle =\{na \mid n \in \mathbb{Z}\}.
$$
So in exponential notation we get terms like
$$
\ldots, a^{-2}, a^{-1}, a^0=e, a, a^2, \ldots
$$
In additive notation this would read
$$
\ldots, -2a, -a, 0, a, 2a, \ldots
$$
With $a=1$ this explains why $1$ generates all of $\mathbb{Z}$ (as does $-1$).
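The order computation from the question can be checked directly; in $Z^{*}_8$ the operation is multiplication mod $8$, so "powers" mean repeated multiplication (the helper `order` below is my own):

```python
def order(a, n):
    # smallest k >= 1 with a^k = 1 in (Z/nZ)^*, via repeated multiplication mod n
    x, k = a % n, 1
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

orders = {a: order(a, 8) for a in (1, 3, 5, 7)}
```

This reproduces the question's result that $[5]$ has order $2$ (and shows every non-identity element of $Z^{*}_8$ does).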
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4024387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
measure defined in terms of an integral Here's a problem from a midterm I took 2 decades ago! Let $A$ be a Borel measurable set on the real line and $\lambda$ be the Lebesgue measure. Define a measure $\mu(A)=\int_A f d\lambda$ where $f(x)=1/x^2$ for $x\ne 0$ and $f(0)=\infty$.
*
*Is $\mu$ finite? $\sigma$-finite?
Now I know it's not finite, since if $A$ is any interval containing $0$ then the integral is infinite. I'm not sure about $\sigma$-finite though. Any interval that does not contain $0$ will have finite measure, and there's a countable number of these to cover everything but $\{0\}$, so it seems to me that the answer depends on the following....
*What is $\mu(\{0\})$?
| OK - I was thrown off by the facts that
*
*$f$ takes values on the extended reals and
*$f\chi_A$ is not necessarily integrable
but I should have gone back to basics.
Here's my attempt at filling in the details:
Despite 1. and 2. above, $f\chi_A$ is still measurable for all Borel measurable sets $A$. In particular, $f\chi_{\{0\}}$ is Borel measurable, and can be approximated by simple functions $s_n$ such that $0 \le s_n \le f\chi_{\{0\}}$ (take $s_n = n\chi_{\{0\}}$ for example). Since the support of any such simple function is a Borel set of measure $0$, $\int_{\mathbb{R}} s_n=0$ for all $n$, and $\int_{\mathbb{R}} f\chi_{\{0\}} = \sup \int_{\mathbb{R}} s_n=0$. In particular $\mu(\{0\})=0$, so $\mu$ is $\sigma$-finite, and this also shows that $\mu$ is absolutely continuous with respect to $\lambda$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4024500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does $\sum^{\infty}_{n=1}\left(\frac{p_{n+1}}{p_n}-1\right)^2 $ converge? There appears to be a "prime constant" $\kappa$, generated from the sequence of primes:
$$\kappa = \sum^{\infty}_{n=1}\left(\frac{p_{n+1}}{p_n}-1\right)^2 \approx 1.653$$
Where $p_n$ is the nth prime.
However, how does one prove that such constant, does in fact, exist, that is, how does one prove that the above series converges?
| See my answer to the similar problem here: Does this sum of prime numbers converge?
Note that $(\frac{p_{n+1}}{p_n}-1)^2 = \frac{(p_{n+1}-p_n)^2}{p_n^2}$.
Let $g_n=p_{n+1}-p_n$ be the sequence of prime gaps. Since $p_n\geq n$, the convergence would be proved if we have the convergence of
$$
\sum_{n=1}^{\infty} \frac{g_n^2}{n^2}.
$$
By this result of R. Heath-Brown: Here,
we have
$$
\sum_{n\leq x} g_n^2 \ll x^{\frac{23}{18}+\epsilon}.
$$
Applying the partial summation with $A(x)=\sum_{n\leq x}g_n^2$ and $f(x)=x^{-2}$, we obtain
$$
\sum_{n\leq x}\frac{g_n^2}{n^2}=\int_{1-}^x f(x)dA(x)
$$
$$
=A(x)f(x)-\int_{1-}^x A(t)f'(t)dt. \ \ (1)
$$
Since $3-(23/18) >1$, we obtain the convergence of (1). Hence, the desired sum converges by the comparison test.
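The partial sums can be computed directly (my own sketch; the series converges slowly, so this only corroborates the quoted $\kappa\approx1.653$):

```python
def primes_up_to(N):
    # simple sieve of Eratosthenes
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(N**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, N + 1, i):
                sieve[j] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

ps = primes_up_to(10**6)
kappa = sum((q / p - 1) ** 2 for p, q in zip(ps, ps[1:]))
```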
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4024664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
Domain of a function quick question $$f(x)=\arcsin\left(\frac{x-1}{2x}\right)$$
We need to find the domain of this function.
My try:
$$-1\leq \frac{x-1}{2x} \leq 1 $$
We can split this into
$$-1\leq \frac{x-1}{2x} \quad\text{and} \quad\frac{x-1}{2x} \leq 1$$
My idea is to solve for this two inequalities and then take the intersection of them both.
$$-2x\leq x-1\\
-3x\leq -1\\
x\geq\frac{1}{3}$$
$$x-1 \leq 2x\\
-1 \leq x$$
By doing the intersection I get the wrong answer which is: $x\geq\frac{1}{3}$
What have I done wrong?
| The two inequalities can be handled at once as follows, without inviting mistakes:
For $x \neq 0$
$$\frac{x-1}{2x} \in [-1,1]$$
$$\frac{1}{2} - \frac{1}{2x} \in [-1,1]$$
$$-\frac{1}{2x} \in [\frac{-3}{2},\frac{1}{2}]$$
$$\frac{1}{x} \in [-1,3]$$
Taking reciprocal (and changing the direction),
$$x \notin \; \left]-1,\frac{1}{3}\right[$$
That is $$x \le -1 \vee x \ge 1/3$$
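An exact spot-check of this domain (my own, using rationals to avoid rounding exactly at the endpoints $x=-1$ and $x=\frac13$):

```python
from fractions import Fraction as F

def in_domain(x):
    # x is in the domain iff x != 0 and (x-1)/(2x) lies in [-1, 1]
    x = F(x)
    return x != 0 and -1 <= (x - 1) / (2 * x) <= 1
```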
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4024757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Quotienting $\mathbb Z[x]$ by a maximal ideal gives a finite field. Let $R = \mathbb Z[x]$, a polynomial ring over $\mathbb Z$, and $\mathfrak m$ be any maximal ideal of $R$. How do we show that $R / \mathfrak m$ is a finite field? I know that the fact that it is a field follows directly, but I am not sure about the finite part.
| Let $A=R/\mathfrak{m}$ and $F$ be its prime subfield. Assume $F$ is finite, then $A$ is generated over $\mathbb{Z}$ by a single element, so the same holds for $A$ over $F$. In other words, $A$ is a quotient of $F[x]$ by a maximal ideal (as $A$ is a field) so is finite.
So we only need to show that $F$ cannot be $\mathbb{Q}$. Indeed, assume for the sake of contradiction that $F=\mathbb{Q}$. Then $\mathfrak{m}\mathbb{Q}[x]$ is a maximal ideal of $\mathbb{Q}[x]$ (because it’s a localization of $\mathbb{Z}[x]$ at a multiplicative subset not meeting $\mathfrak{m}$), and the quotients are equal.
There is an irreducible polynomial $f(x) \in \mathfrak{m}$, and $(f) \subset \mathbb{Q}[x]$ is a maximal ideal, so $\mathfrak{m}\mathbb{Q}[x]=(f)$, so $\mathfrak{m}=f(x)\mathbb{Q}[x] \cap \mathbb{Z}[x]$.
Now, let $g(x)=h(x)/D$ be a rational polynomial with $D > 1$ integer, $h(x) \in \mathbb{Z}[x]$ primitive (i.e. its coefficients do not have a nontrivial common divisor – note that $f$ is also primitive, since it's irreducible) and assume $f(x)g(x) \in \mathbb{Z}[x]$. Then, if $p$ prime divides $D$, $f(x)h(x)=0$ mod $p$, so $f(x)=0$ mod $p$ or $h(x)=0$ mod $p$, both impossible. So $\mathfrak{m}=f\mathbb{Q}[x] \cap \mathbb{Z}[x]=f\mathbb{Z}[x]$.
But then, let $n \geq 1$ be an integer, then $n$ is invertible mod $\mathfrak{m}$, so there is a polynomial $p$ such that $f|np-1$ in $\mathbb{Z}[x]$. Take $n=|f(N)|$ where $N$ is a large integer: $f(x)|np(x)-1$ so in particular $f(N)|np(N)-1$, thus $f(N)|1$, hence a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4024913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Why doesn't $f(x) = x^2$ generate a counter-example to $f(-x) = -f'(x)$? According to the chain rule, the derivative of $f(-x) = -f'(x)$, since
$$
\frac{d}{dx}(f(-x)) = \frac{df(u)}{du} \frac{du}{dx} = f'(-x)\cdot(-1) = -f'(-x)
$$
where $u = -x$. But doesn't the case of $f(x) = (x)^2$ generate a counter-example to this?
$$
f'(-x) = \frac{d}{dx}(-x)^2 = \frac{d}{dx}x^2 = 2x \ne -(2x) = -f'(-x)
$$
| Chain rule is
$$ \left(\frac d{dx} (f\circ g)\right) (x) = \left(\frac{df}{dx}\circ g\right)(x)\frac{dg}{dx}(x),$$
or using the $f'$ notation:
$$ (f\circ g)'(x) = (f'\circ g)(x) g'(x).$$
When we apply this with $g(x) = -x$, we see that the correct rule is
$$\frac{d}{dx} (f(-x)) = -f'(-x).$$
(This is what your derivation says, even if you stated it wrong in the title and the line before your derivation.)
We can verify this for $f(x)=x^2$ by calculating each side. On the one hand, since $f$ is even, $f\circ g= f$ so $\text{LHS}=(f\circ g)'(x)=f'(x)=2x$. On the other hand, $\text{RHS}=2(-x)\times -1=2x$, which is what I got for $\text{LHS}$.
It seems that the important error$^1$ is in saying that $-f'(-x)$ is $-2x$. In fact,
$$ f'(x) = 2x \implies f'(-x) =2(-x) = -2x \implies -f'(-x) = 2x.$$
Here are the two paths you could take
$$ f(x) \begin{cases} \xrightarrow{\text{evaluate at $-x$}} f(-x) \xrightarrow{d/dx} \frac{d}{dx} [f(-x)]\overset{\substack{Chain\\ Rule}}= -\frac{df}{dx}(-x) \\ \xrightarrow{d/dx} \frac{df}{dx}(x) \xrightarrow{\text{evaluate at $-x$}} \frac{df}{dx}(-x) \end{cases} $$
Some of the confusion could be stemming from confusing notation. I'll list here the notations that I know: for the first I can only think of $ \frac{d}{dx}(f(-x)) = (f(-x))' =2x$
which would be the derivative of $g(y)=f(-y)=y^2$ at the point $y=x$. This is the situation in which Chain rule applies.
The other case can be written in a number of ways, $f'(-x)=\frac{df}{dx}(-x) = \left(\frac{d}{dx} f\right)(-x) = \left.\frac{d}{dy} (y^2)\right|_{y=-x} = -2x$, these mean the function $f'(y)=2y$ evaluated at $y=-x$. The difference here is that you differentiate $f$ first before you evaluate.
In the comments to this answer the notation $\frac{d}{d(-x)}f(-x)$ is used, but I don't recognize it. If I had to guess, I would guess that this also means $\left.\frac{d}{dy} f(y)\right|_{y=-x}=-2x$.
$^1$ as mentioned by David K and zkutch.
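A quick numeric check of the rule $\frac{d}{dx}\,f(-x)=-f'(-x)$ for $f(x)=x^2$ (my own; a central difference approximates the left-hand side):

```python
def f(x):
    return x**2

def fprime(x):
    return 2 * x

h, x0 = 1e-6, 1.7
lhs = (f(-(x0 + h)) - f(-(x0 - h))) / (2 * h)   # derivative of x -> f(-x) at x0
rhs = -fprime(-x0)                              # -f'(-x0) = -2(-x0) = 2 x0
```

Both sides give $2x_0=3.4$, as the answer's two "paths" predict.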
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4025015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find $∠ADC$ given $A,D$ on circle with diameter $BC$ and $∠BAO = 50°$ If $BC$ is a diameter of the circle and $∠BAO = 50°$. Then find the value of $∠ADC$.
This stumped me a little. I think there is a rule about the centre of the circle and its relation to angles on the circumference, but I'm not sure. Could someone help me point out the laws that would help to solve this problem? (Do not give me the solution, since I want to approach this myself.)
Thanks in advance! :)
| See $ \angle OAB = \angle OBA = 50^\circ$ (why?)
$\angle AOC = 100^\circ $ (why?)
Now see that the angle subtended at the centre is double the angle subtended at any other point on the circle. Join $AC$. Now $\angle AOC = 2\angle ADC$
$\angle ADC= 50^\circ $
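A coordinate check of the inscribed-angle argument (my own sketch; I place the circle as the unit circle with $B=(-1,0)$, $C=(1,0)$, and, per the $80^\circ$ central angle $\angle AOB$, put $A$ at polar angle $100^\circ$; $D$ ranges over the arc on the other side of $AC$):

```python
import math

def angle(P, Q, R):
    # angle P-Q-R at vertex Q, in degrees
    v1 = (P[0] - Q[0], P[1] - Q[1])
    v2 = (R[0] - Q[0], R[1] - Q[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

O, B, C = (0.0, 0.0), (-1.0, 0.0), (1.0, 0.0)
A = (math.cos(math.radians(100)), math.sin(math.radians(100)))
angles_ADC = [angle(A, (math.cos(math.radians(t)), math.sin(math.radians(t))), C)
              for t in (-160, -90, -30)]
```

The setup reproduces $\angle OAB=50^\circ$, and every tested $D$ gives $\angle ADC=50^\circ$.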
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4025133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Question about non-separable ODE I have a problem with my assignment. The problem is as follows:
$(x^2+y^2e^\frac{x}{y})\frac{dy}{dx}=xy$
I don't think this is an exact ODE, and I have tried to solve it using integrating-factor method but I cannot find the result. I also tried to multiply each side by $y^{-2}$ and set u = $\frac{x}{y}$ but unable to solve it properly.
I would really appreciate if anybody could help me with this problem. It would be a very big help for me. Thank you very much.
| We have $$(x^2+y^2e^\frac{x}{y})\frac{dy}{dx}=xy$$
Dividing through by $y^2$ as you suggested yields
$$\left(\frac{x^2}{y^2}+e^\frac{x}{y}\right)\frac{dy}{dx}=\frac{x}{y}$$
Again, we will use your suggestion: let $$u=\frac{x}{y}\iff x=uy$$
We can differentiate both sides implicitly with respect to $y$:
$$\frac{dx}{dy}=u+y\frac{du}{dy}$$
But we know that $$\frac{dy}{dx}=1\Big/\frac{dx}{dy}$$
Hence, substituting back into our original equation, we have
$$\begin{align}
\frac{u^2+e^u}{u+y\frac{du}{dy}}=u&\iff u+y\frac{du}{dy}=\frac{u^2+e^u}{u}=u+u^{-1}e^u\\
&\iff y\frac{du}{dy}=u^{-1}e^u=\frac{e^u}{u}\\
&\iff \frac{du}{dy}=\frac{e^u}{uy}\\
\end{align}$$
We are now in a position to separate the variables! We have:
$$\int ue^{-u}~du=\int \frac{1}{y}~dy$$
Integrating by parts on the left we eventually come to
$$-ue^{-u}-e^{-u}=\ln\lvert y\rvert +C=\ln\lvert y\rvert +\ln k$$
So finally
$$-\frac{x}{y}e^{-\frac{x}{y}}-e^{-\frac{x}{y}}=\ln \lvert ky\rvert$$
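As a sanity check of this implicit solution (my own RK4 sketch, integrating $y$ as a function of $x$ with the slope solved from the separated equation, starting from $(1,1)$), the quantity $F(x,y)=-\left(\frac xy+1\right)e^{-x/y}-\ln\lvert y\rvert$ should stay constant along solutions:

```python
import math

def slope(x, y):
    # dy/dx = x y / (x^2 + y^2 e^{x/y})
    return x * y / (x**2 + y**2 * math.exp(x / y))

def F(x, y):
    u = x / y
    return -(u + 1) * math.exp(-u) - math.log(abs(y))

x, y, h = 1.0, 1.0, 1e-3
F0 = F(x, y)
for _ in range(1000):            # classical RK4 from x = 1 to x = 2
    k1 = slope(x, y)
    k2 = slope(x + h / 2, y + h / 2 * k1)
    k3 = slope(x + h / 2, y + h / 2 * k2)
    k4 = slope(x + h, y + h * k3)
    y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    x += h
```

The conserved quantity drifts only at the level of the integrator's error, confirming the solution.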
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4025288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why is a function maximal at the intersection of the level curve with its constraint curve? What I'm asking is the core question of the Lagrange multiplier. Say we have a function $f:\mathbb{R^2}\rightarrow \mathbb{R}$. Now we intend to calculate at what $(x,y)$ the function achieves its maximum given a constraint $g:\mathbb{R^2}\rightarrow\mathbb{R}$. The Lagrange multiplier method says that we will find the maximum of the function where the level curve intersects the constraint curve at exactly one point. Why is that? I'm not able to understand why the function will achieve its maximum at that particular point.
| The following is a standard explanation.
Consider this specific example:
$$\max f(x, y) = xy$$
$$\text{s.t. } x^2 + y^2 = 1$$
Let $L_c = \{(x, y)|f(x, y) = c\}$ be the level curve of $f$ at $c$. This is how the picture looks for $c = 4.3$:
As we decrease the value of $c$, the hyperbola comes closer and closer to the origin (you can interact with the graph here.) If we shrink the hyperbola until it is tangent to the circle, we will have a feasible point whose $f$ value is pretty high. But if we decrease $c$ even more, there will be $4$ points of intersection, but the objective value will be unnecessarily low.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4025433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\lim_{n\to\infty} \frac{10^n}{\sqrt{(n+1)!} + \sqrt{n!}}\,.$
I used the ratio criterion for the calculation and I got to this, can I say now that it is zero or is it still an undefined expression?
$\frac{10+(10/\sqrt{(n+1)})}{\sqrt{(n+2)}+1}$
Since the ratio tends to $0 < 1$, $\lim a_n=0$.
| As you have pointed out,
$$\lim_{n\to\infty} \frac{a_{n+1}}{a_n} = \lim_{n\to\infty} \frac{10+\frac{10}{\sqrt{n+1}}}{\sqrt{n+2}+1}$$ So,
$$\lim_{n\to\infty} \frac{a_{n+1}}{a_n} = \lim_{n\to\infty} (\frac{10}{\sqrt{n+2}+1} + \frac{\frac{10}{\sqrt{n+1}}}{\sqrt{n+2}+1}) = \lim_{n\to\infty} \frac{10}{\sqrt{n+2}+1} + \lim_{n\to\infty} \frac{10}{(\sqrt{n+1})(\sqrt{n+2}+1)} = 0$$ since both limits exist. So, since $$\lim_{n\to\infty} \frac{a_{n+1}}{a_n} = 0 < 1$$ you can say that $\lim_{n\to\infty} a_n = 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4025586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
} |
How can an average decrease in numbers show up as an average percentage increase? In the list below:
*
*the first column shows the population
*the second column shows the number by which the population has increased (so the total new population in the first row would be 27312).
*the third column shows the percentage increase i.e. column 2 / column 1
I then took the average of the second column and got an average decrease in numbers.
But then I took the average of the third column and got an average percentage increase.
That doesn't make sense to me. My brain is a little fried. I'm trying to get an intuition for why this is.
*
*Surely an average fall in numbers should mean an average fall in
percentage change?
When you average the percentages, you’re giving each of them the same weight in the average. But a $10\%$ change starting at $100,000$ is a change of $10,000$, while a $10\%$ change starting at $10,000$ is a change of only $1000$. That first change has $10$ times the effect on the overall population, even though it’s an identical percentage of its base population. If the first is an increase and the second a decrease, the average of the percentages is $0\%$, but the total population has gone from $110,000$ to $119,000$, an increase of $9000$, or $8.\overline{18}\%$.
For an average of the percentages to be meaningful, it would have to be a weighted average, with each percentage weighted by the fraction of the total population that it affects. In this case the first change affects $\frac{10}{11}$ of the total population of $110,000$, while the second affects only $\frac1{11}$ of it, so the weighted average of the percentages is
$$\frac{10}{11}(10)+\frac1{11}(-10)=\frac{90}{11}=8\frac2{11}=8.\overline{18}\,,$$
which is indeed the percentage by which the total population has changed.
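The arithmetic above can be checked directly — a minimal Python sketch (the population figures are the ones from the answer):

```python
pops = [100_000, 10_000]        # base subpopulations
pcts = [10.0, -10.0]            # percentage change of each one

naive = sum(pcts) / len(pcts)   # unweighted average of the percentages: 0%
weighted = sum(p * c for p, c in zip(pops, pcts)) / sum(pops)

new_total = sum(p * (1 + c / 100) for p, c in zip(pops, pcts))
actual = 100 * (new_total - sum(pops)) / sum(pops)
# weighted == actual == 90/11, about 8.1818%, while the naive average is 0%
```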
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4025698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Confused about partial derivatives I am having some issues understanding what should I keep constant and what not in certain cases when I take partial derivatives. Specifically in this kind of situation: say we have a function
$$f(x,y) = x^3+7y^2$$
and we also know that $y=2x+1$ and we need to find the partial derivative with respect to $x$ at a given point, say $\frac{\partial f(2,3)}{\partial x}$. From what I understood, when taking partial derivatives with respect to a variable, you need to keep the other constant, in which case, if I do that above (keeping $y$ constant) I would get $\frac{\partial f(x,y)}{\partial x}=3x^2$, so I get $12$. However, if I substitute $y=2x+1$ into $f$ explicitly I would get $$f(x,y) = x^3+7(2x+1)^2$$ so,
$$\frac{\partial f(x,y)}{\partial x}=3x^2+28(2x+1)$$
So, I get $152$. What should I do? Thank you!
| The answer is that it depends (slightly) on the intent of the problem. When I read the problem as you've written it, I see this:
Given that $y = 2x+1$ and $f(x,y) = x^{3} + 7y^{2},$ calculate $\frac{\partial f(2,3)}{\partial x}$.
In that statement, I interpret this as $f$ is a function of two variables, $x$ and $y$, and we are asked to compute $\frac{\partial f}{\partial x}(x,y)$ and evaluate the result at $(x,y) = (2,3)$. In this case, the fact that $y$ depends on $x$ isn't relevant, because since $f$ depends explicitly on $x$ and $y$, $\frac{\partial f}{\partial x}(x,y)$ means hold $y$ constant and differentiate with respect to $x$. That gives us $$\frac{\partial f}{\partial x}(x,y) = 3x^{2}\implies \frac{\partial f(2,3)}{\partial x} = 12.$$
Now, if instead the problem was phrased as
Given that $y = 2x+1$ and $f(x,y) = x^{3} + 7y^{2},$ calculate the total rate of change of $f$ with respect to $x$ at $(2,3)$.
In this case, I would interpret the problem as asking for the total derivative, which accounts for dependencies among the variables. In this case, your answer would be $$\frac{df}{dx} = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\frac{dy}{dx} = 3x^2+28(2x+1) \implies \frac{df}{dx}(2,3) = 152.$$
If the original problem does indeed ask for $\frac{\partial f(2,3)}{\partial x}$ then I would say the first interpretation is correct. If you are adding that notation yourself, it may be the case that the second interpretation is correct.
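Both readings can be verified with finite differences (a Python sketch; the step size `h` and the centered-difference scheme are just one convenient choice):

```python
def f(x, y):
    return x**3 + 7 * y**2

h = 1e-6

# First reading: partial derivative, hold y fixed at 3 and vary only x.
partial = (f(2 + h, 3) - f(2 - h, 3)) / (2 * h)   # -> 3 * 2**2 = 12

# Second reading: total derivative, substitute y = 2x + 1 before differentiating.
def g(x):
    return f(x, 2 * x + 1)

total = (g(2 + h) - g(2 - h)) / (2 * h)           # -> 12 + 28 * (2*2 + 1) = 152
```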
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4025883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
How to find $B=\csc^2 \phi-\frac{\cos^2 (45^\circ+\omega)}{\sin^2\phi}$ from $\sin\omega+\cos\omega=2\sin\phi$ when $\phi$ is not quadrantal? The problem is as follows:
First let:
$\sin\omega+\cos\omega=2\sin\phi$ and $\phi$ is not a quadrantal
angle.
Then using this relationship find:
$B=\csc^2 \phi-\frac{\cos^2 (45^\circ+\omega)}{\sin^2\phi}$
The alternatives on my book are as follows:
$\begin{array}{ll}
1.&\textrm{0.25}\\
2.&\textrm{0.5}\\
3.&\textrm{1}\\
4.&\textrm{2}\\
\end{array}$
Well what I did to attempt to solve this problem from my precalculus workbook was the following:
I noticed that on the first expression I could do the following,
$\sin\omega+\cos\omega=2\sin\phi$
$\frac{1}{\sqrt{2}}\sin\omega+\frac{1}{\sqrt{2}}\cos\omega=\frac{2}{\sqrt{2}}\sin\phi$
Then using the sum of angles formula:
$\sin(45^\circ+\omega)=\frac{2}{\sqrt{2}}\sin\phi$
But that's where I got stuck. Now what?
I can't spot a way to equate the argument of the function with both sides of the equation. In other words, the earlier expression's argument cannot be equated as it is because both sides of the equation are different.
Then there is the other part of the problem which I don't understand. What does it mean that $\phi$ is not a quadrantal angle? I do know that a quadrantal angle is one that is in the standard position and has a measure that is a multiple of $90^\circ$. So in short it cannot be $90^\circ$, $180^\circ$, $270^\circ$, $360^\circ$ ... and so on.
But I don't know how to include this information in the solution.
The expression from below doesn't seem to be reduced:
$B=\csc^2 \phi-\frac{\cos^2 (45^\circ+\omega)}{\sin^2\phi}$
I could only found that I could:
$B=\frac{1}{\sin^2\phi}-\frac{\cos^2 (45^\circ+\omega)}{\sin^2\phi}$
$B=\frac{1-\cos^2 (45^\circ+\omega)}{\sin^2\phi}$
Can someone help me with the right approach? Was my initial intuition right? Please try to be as step-by-step as possible so I can properly catch the idea of the manipulation needed to solve this problem without much fuss.
I'd really appreciate it if someone could guide me along the right path.
| First, we have:
$$\sqrt{2}\sin(\phi) = \frac{\sqrt{2}}{2}\sin(\omega) + \frac{\sqrt{2}}{2}\cos(\omega) = \cos\bigg(\frac{\pi}{4}\bigg)\sin(\omega) + \sin\bigg(\frac{\pi}{4}\bigg)\cos(\omega) = \sin\bigg(\frac{\pi}{4}+\omega\bigg)$$
Then:
$$B = \csc^{2}(\phi) -\frac{\cos^{2}\big(\frac{\pi}{4} + \omega\big)}{\sin^{2}(\phi)} =\frac{1}{\sin^{2}(\phi)}-\frac{1 - \sin^{2}\big(\frac{\pi}{4} + \omega\big)}{\sin^{2}(\phi)}=\frac{(\sqrt{2}\sin(\phi))^{2}}{\sin^{2}(\phi)}=\boxed{2}$$
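A quick numerical spot-check of the boxed value (a Python sketch; the sample values of $\omega$ are arbitrary):

```python
import math

values = []
for w in [0.1, 0.7, 1.3, 2.9, -0.4]:              # arbitrary sample angles omega
    sin_phi = (math.sin(w) + math.cos(w)) / 2      # from sin(w) + cos(w) = 2 sin(phi)
    if abs(sin_phi) < 1e-9:                        # phi would be quadrantal; skip
        continue
    B = (1 - math.cos(math.pi / 4 + w) ** 2) / sin_phi ** 2
    values.append(B)                               # B == 2 every time
```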
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4026022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
To find an integral that satisfies a substitution property. Consider the following integral. $$\int \cfrac{1}{9-x^2}dx$$
Substituting $x=3\sin u$ makes the integral $\displaystyle\int\cfrac{1}{3\cos u}du$.
I would like to modify the original integral such that after the substitution $x=3\sin u$, it looks like $\displaystyle\int\cfrac{1}{9\cos^2u}du$.
My only thought so far was that $dx=3\cos u du$ which tells me that I need to have $\cos^3u$ in the denominator of the integrand to get $\cos^2u$ in the denominator after simplification. How do I proceed from here? Thanks for your time.
| Let $u\in[-\pi/2,\pi/2]$. After substitution in the original integral you get$$\int\frac{3\cos u~du}{9\cos^2u}$$so we want to get an additional $3\cos u$ term in the denominator to cancel the $3\cos u$ in the numerator. Note that $x=3\sin u\implies\sqrt{9-x^2}=3|\cos u|=3\cos u$, so modify the original integral to$$\int\frac{dx}{(9-x^2)\sqrt{9-x^2}}$$
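One can confirm numerically that the modified integral transforms as claimed (a Python sketch using a crude midpoint rule; the choice $u\in[0,\pi/6]$, i.e. $x\in[0,1.5]$, is arbitrary):

```python
import math

def midpoint(f, a, b, n=20_000):
    # crude midpoint-rule quadrature; fine for a sanity check
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

u_max = math.pi / 6
lhs = midpoint(lambda x: 1 / ((9 - x**2) * math.sqrt(9 - x**2)),
               0, 3 * math.sin(u_max))
rhs = midpoint(lambda u: 1 / (9 * math.cos(u) ** 2), 0, u_max)
# both equal tan(u_max)/9 = 1/(9*sqrt(3)), about 0.06415
```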
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4026158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Notation for a function definition I came across the following notation in a research paper
Suppose we have a function $f(x) \in [1,l] \rightarrow \mathbb{R}$
This is the first time I am seeing such a notation, and the paper uses the same notation for all functions in it.
Is it same as
$f(x) : [1,l] \rightarrow \mathbb{R}$
If yes, is it an abuse of notation?
If no, what does the notation mean?
In the paper you're using, I understand it to mean that $f$ belongs to the set of all functions defined from the set $[1,l]$ to the set $\mathbb{R}$, that is,
$$[1,l]\to\mathbb{R} := \{f \text{ function }: f \text{ is defined from $[1,l]$ to $\mathbb{R}$}\}$$
so it works the same way as defining $f$ the usual way, to be said:
$$f\in[1,l]\to\mathbb{R} \equiv f:[1,l]\to\mathbb{R}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4026269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Elements and Conjugacy Classes of a group Let
$G=(C_{p_1} : C_{3}) \times(C_{p_2} : C_{3})$
where $p_1,p_2\equiv{1}\pmod{3}$.
How many elements does the group $G$ have of each order? Furthermore, what is the total number of conjugacy classes?
I assumed that $G$ contains exactly $p-1$ elements of order $p$, $2(p-1)$ elements of order $3p$ and $2(3p+1)$ elements of order $3$, for each $p_i$. But I could be wrong.
Similarly, can I ask the same question for $G= A_{4} \times(C_{p} : C_{3})$ where $p\equiv{1}\pmod{3}$.
I am trying to adapt a current proof where $G= C_{3} \times(C_{p} : C_{3})$ and $p\equiv{1}\pmod{3}$. The proof is shown above and the authors claim that similar arguments can be used to prove the two cases I have presented above.
| For a direct product $G \times H$, the conjugacy classes are of the form $C \times D$, where $C$ and $D$ are conjugacy classes of $G$ and $H$, and the order of the elements in $C \times D$ is the least common multiple of the orders of elements in $C$ and in $D$. This makes it straightforward to answer your questions in direct products provided that we can answer them in the factors.
So let's apply that to the case when both $G$ and $H$ are nonabelian groups with structure $C_p:C_3$.
The factors have one element of order $1$, $p-1$ of order $p$ in $(p-1)/3$ classes, and two classes of elements of order $3$, both of size $p$.
So in $(C_{p_1}:C_3) \times (C_{p_2}:C_3)$ with $p_1 \ne p_2$, we have (if I have counted correctly):
one element of order $1$,
$p_1-1$ of order $p_1$ in $(p_1-1)/3$ classes,
$p_2-1$ of order $p_2$ in $(p_2-1)/3$ classes,
$2p_1 + 2p_2 + 4p_1p_2$ of order $3$ in $2 + 2 + 4 = 8$ classes,
$2p_2(p_1-1)$ of order $3p_1$ in $2(p_1-1)/3$ classes,
$2p_1(p_2-1)$ of order $3p_2$ in $2(p_2-1)/3$ classes, and
$(p_1-1)(p_2-1)$ of order $p_1p_2$ in $(p_1-1)(p_2-1)/9$ classes.
The only significant difference when $p_1=p_2$ is that the elements in the final class have order $p_1$.
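A quick consistency check on the counts above: the element counts of the various orders must partition the group, i.e. sum to $|G| = 9p_1p_2$. A Python sketch with the sample primes $p_1 = 7$, $p_2 = 13$ (both $\equiv 1 \pmod 3$):

```python
p1, p2 = 7, 13                       # sample primes, both congruent to 1 mod 3
order = (3 * p1) * (3 * p2)          # |G| = 9 * p1 * p2 = 819 here

element_counts = [
    1,                               # order 1
    p1 - 1,                          # order p1
    p2 - 1,                          # order p2
    2*p1 + 2*p2 + 4*p1*p2,           # order 3
    2*p2*(p1 - 1),                   # order 3*p1
    2*p1*(p2 - 1),                   # order 3*p2
    (p1 - 1)*(p2 - 1),               # order p1*p2
]
# the seven families partition G: sum(element_counts) == order
```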
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4026651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can I prove that the limit doesn't exist? I have to study the following limit
$$\lim_{(x,y)\to(0,0)}\frac{1-\cos(\sqrt{|xy|})}{x}.$$
I think that this limit does not exist, so I'm trying to prove it. First, I discovered that if $x=y$, then the limit is equal to zero. Is there any other change of variables that I can use?
| hint
The limit exists and it is zero.
$$(\forall x,y\in \Bbb R)\;\;$$
$$ 1-\cos(\sqrt{|xy|})=2\sin^2(\frac{\sqrt{|xy|}}{2})$$
$$\le \frac 12|xy|$$
because
$$(\forall X\in \Bbb R)\;\;|\sin(X)|\le |X|$$
thus, if $ x\ne 0$,
$$|f(x,y)|\le \frac 12|y|$$
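The inequality $|f(x,y)|\le \frac12|y|$ can be probed numerically near the origin (a Python sketch; the sampling box and seed are arbitrary):

```python
import math, random

random.seed(0)                                   # arbitrary fixed seed
for _ in range(10_000):
    x = random.uniform(-1e-3, 1e-3)
    y = random.uniform(-1e-3, 1e-3)
    if abs(x) < 1e-6:                            # avoid float noise when dividing
        continue
    f = (1 - math.cos(math.sqrt(abs(x * y)))) / x
    # the squeeze bound: |f(x, y)| <= |y| / 2, so f -> 0 as (x, y) -> (0, 0)
    assert abs(f) <= abs(y) / 2 + 1e-9
```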
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4026871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How do I compute this integral with a Dirac's delta? While studying probability I encountered this integral
$$I=\int_{\mathbb{R}^2}\exp\left({-\frac{x_1^2+x_2^2}{2}}\right)\delta\left(r-\sqrt{x_1^2+x_2^2}\right)\,dx_1\,dx_2$$
If I compute this in polar coordinates i get
$$I=\int_0^{2\pi}\,d\theta \int_0^{+\infty}\exp\left(-\dfrac{\rho^2}{2} \right)\rho\delta(r-\rho)\,d\rho=2\pi r\exp\left(-\dfrac{r^2}{2}\right)$$
but in cartesian coordinates I only get
$$I=\exp\left(-\frac{r^2}{2}\right)$$
I don't understand why. I just thougth that I was using the Dirac's delta's properties in both cases. I think the first result is the correct one and there is something I don't know about Dirac's delta with more than one variable.
Which result is correct and why?
| You may also make direct calculations in Cartesian coordinates - integrating, for instance, over $x_1$ first and then over $x_2$.
We can multiply and divide the argument of the $\delta$-function ($r-\sqrt{x_1^2+x_2^2}$) by ($r+\sqrt{x_1^2+x_2^2}$) (because ($r+\sqrt{x_1^2+x_2^2}$) is always positive). We can also replace the power of the exponent by $-\frac{r^2}{2}$ - due to the condition imposed by the $\delta$-function
$I(r)=\int_{\mathbb{R}^2}\exp\left({-\frac{x_1^2+x_2^2}{2}}\right)\delta\left(r-\sqrt{x_1^2+x_2^2}\right)\,dx_1\,dx_2=\int_{\mathbb{R}^2}\exp\left({-\frac{r^2}{2}}\right)\delta\left(\frac{r^2-(x_1^2+x_2^2)}{r+\sqrt{x_1^2+x_2^2}}\right)\,dx_1\,dx_2$
We see that $x_2$ contributes only if $x_2\in[-r,r]$, otherwise $\delta()=0$
$I(r)=\int_{-r}^rdx_2\int_{-\infty}^{\infty}dx_1\exp\left({-\frac{r^2}{2}}\right)\delta\left(\frac{(\sqrt{r^2-x_2^2}-x_1)(\sqrt{r^2-x_2^2}+x_1)}{r+\sqrt{x_1^2+x_2^2}}\right)$
But $\delta(\frac{(a-x_1)(x_1+b)}{A})$$=|\frac{A}{a-x_1}|\delta(x_1+b)+|\frac{A}{x_1+b}|\delta(a-x_1)=|\frac{A}{a-x_1}|\delta(x_1+b)+|\frac{A}{x_1+b}|\delta(x_1-a)$
We get
$I(r)=\int_{-r}^rdx_2\int_{-\infty}^{\infty}dx_1\exp\left({-\frac{r^2}{2}}\right)\left(r+\sqrt{x_1^2+x_2^2}\right)$$\left(\frac{1}{\sqrt{r^2-x_2^2}-x_1}\delta(\sqrt{r^2-x_2^2}+x_1)+\frac{1}{\sqrt{r^2-x_2^2}+x_1}\delta(\sqrt{r^2-x_2^2}-x_1)\right)=$
$\int_{-r}^rdx_2\int_{-\infty}^{\infty}dx_1\exp\left({-\frac{r^2}{2}}\right)2r$$\left(\frac{1}{2\sqrt{r^2-x_2^2}}\delta(\sqrt{r^2-x_2^2}+x_1)+\frac{1}{2\sqrt{r^2-x_2^2}}\delta(x_1-\sqrt{r^2-x_2^2})\right)=\int_{-r}^r\exp\left({-\frac{r^2}{2}}\right)\frac{2r}{\sqrt{r^2-x_2^2}}dx_2$
$$I(r)=\int_{-1}^1\exp\left({-\frac{r^2}{2}}\right)\frac{2r}{\sqrt{1-t^2}}dt=2\pi{r}e^{-\frac{r^2}{2}}$$
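The answer $2\pi r e^{-r^2/2}$ can also be confirmed by brute force in Cartesian coordinates, smearing $\delta(r-\rho)$ into a thin annulus of width $2\varepsilon$ and doing a Riemann sum (a Python sketch; the grid size and $\varepsilon$ are arbitrary accuracy knobs):

```python
import math

r, eps = 1.5, 0.01     # delta(r - rho) ~ indicator(|rho - r| < eps) / (2 * eps)
n = 1200               # grid points per axis over [-3, 3]^2
h = 6.0 / n

total = 0.0
for i in range(n):
    x = -3 + (i + 0.5) * h
    for j in range(n):
        y = -3 + (j + 0.5) * h
        if abs(math.hypot(x, y) - r) < eps:      # cell center lies in the annulus
            total += math.exp(-(x * x + y * y) / 2) / (2 * eps) * h * h

expected = 2 * math.pi * r * math.exp(-r * r / 2)   # about 3.06
```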
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4027037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |