Minimum impossible score in darts. I was recently playing darts when I began pondering the question of the lowest whole number that cannot be scored in a game of darts.
For clarification, these are the possible scoring options:
*
*Each turn consists of throwing three darts, each with its own score independent of the other two.
*A standard dartboard has the numbers 1-20, along with double and triple scoring places for each number.
*In the center, there is a small ring that gives a score of 25, as well as the bullseye with a score of 50.
*For this problem, scoring 0 by missing the board entirely is completely acceptable.
| It is simple to write a Python program to compute the answer to this question. The result is $163$. I cannot think of any mathematical explanation for this.
dart_scores = set([0, 25, 50])
for sector in range(1, 21):
    dart_scores.add(sector)
    dart_scores.add(2 * sector)
    dart_scores.add(3 * sector)
three_dart_scores = {a + b + c for a in dart_scores for b in dart_scores for c in dart_scores}
smallest_unseen = 0
while smallest_unseen in three_dart_scores:
    smallest_unseen += 1
print(smallest_unseen)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4283645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Solving the inequality $x^2 + 4x + 3 > 0$ So here is the question:
For what values of $x$ is $x^2 + 4x + 3 > 0$?
So I decided to factor the left-hand side into:
$$(x+1)(x+3) > 0$$
getting:
$$x+1>0 \text{ or } x+3 > 0 \implies x > -1 \text{ or } x > -3$$
But this is incorrect: I graphed the function and the graph dips below the $x$-axis for some values of $x>-3$. Where am I going wrong? Could someone help?
| Option:
$(x+2)^2>1;$
$\Rightarrow$ $|x+2|>1$, i.e.
$x+2>1$, or $x+2<-1.$
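For completeness, the resulting solution set $x>-1$ or $x<-3$ can be spot-checked numerically; the helper name `positive` below is just an illustrative choice:

```python
# Spot-check: x^2 + 4x + 3 > 0 exactly when x < -3 or x > -1.
def positive(x):
    return x**2 + 4 * x + 3 > 0

for i in range(-100, 101):
    x = i / 10  # sample points from -10.0 to 10.0
    claimed = x < -3 or x > -1
    assert positive(x) == claimed, x
```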
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4283809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Brezis' Functional Analysis 3.27 item 2. I'm working on the following exercise from Brezis, but I can't understand some things.
Let $E$ be a Banach space with norm $\|\cdot\|$. The norm on $E^{\star}$ is also denoted by $\|\cdot\|$. The purpose of this exercise is to construct an equivalent norm on $E$ that is strictly convex and whose dual norm on $E^{\star}$ is also strictly convex.
Let $(a_{n})\subset B_{E}$ be a subset of $B_{E}$ that is dense with respect to the strong topology. Let $(b_{n})\subset B_{E^{\star}}$ be a countable subset of $B_{E^{\star}}$ that is dense in $B_{E^{\star}}$ for the weak$^\star$ topology $\sigma(E^\star, E)$. Why does such a set exist? (1)
Given $f \in E^{\star}$ set:
\begin{equation}\label{nor1}
\|f\|_{1} = \left\lbrace \|f\|^{2} + \sum_{n=1}^{\infty}\frac{1}{2^{n}}|\left< f, a_{n}\right>|^{2} \right\rbrace^{\frac{1}{2}}
\end{equation}
Given $x\in E$, set:
\begin{equation}\label{nor2}
\|x\|_{2} = \left\lbrace \|x\|_{1}^{2} + \sum_{n=1}^{\infty}\frac{1}{2^{n}}|\left< b_{n}, x \right>|^{2} \right\rbrace^{\frac{1}{2}}
\end{equation}
where $\|x\|_{1} = \sup_{\|f\|_{1}\leq 1}\left< f, x \right>$.
Prove that $\|\cdot\|_{1}$ is strictly convex.
My attempt:
Set $|f|^{2} = \sum_{n=1}^{\infty}\frac{1}{2^{n}}|\left <f, a_n\right >|^{2}$, which satisfies the parallelogram law, so $|\cdot|$ arises from an inner product; hence the function $f \mapsto |f|^{2}$ is strictly convex. More precisely, we have for any $t \in [0,1]$ and $f, g \in E^{\star}$:
\begin{equation}\label{equs1}
|tf + (1-t)g|^{2} + t(1-t)|f-g|^{2} = t|f|^{2} + (1-t)|g|^{2}. \hspace{3mm} (2)
\end{equation}
My doubts are:
(1) Why does such a set exist? I think it follows from the existence of $(a_n)$ together with the Hahn-Banach theorem, but I'm not sure; I can't justify it.
(2) Why is this equality true?
Thanks.
$E$ is assumed to be a separable Banach space ($(a_n)$ would not exist otherwise). The closed unit ball of the dual space is weak* compact (by the Banach-Alaoglu theorem) and it is metrizable by separability of $E$: $d(f,g)=\sum_n \frac {|f(a_n)-g(a_n)|} {2^{n}\left(1+|f(a_n)-g(a_n)|\right)}$ metrizes the weak* topology on the ball. Hence, this ball is separable.
The identity you have stated holds in any inner product space. Just expand the norm squares and simplify.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4284105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Uniqueness of solution to $x' = t \sqrt{1-x}$ (Proof checking) I am trying to prove that the solution to the problem:
$$ x' = t \sqrt{1-x} $$
with initial condition $x(0) = \tfrac{1}{2}$ is unique.
I have found that the solution to the problem is the function:
$$ x(t) = 1 - \left(\frac{1}{\sqrt{2}} -\frac{t^2}{4} \right)^2$$
It is clear that $f(t,x) = t \sqrt{1-x}$ is not locally Lipschitz at $0$, so we can't invoke Picard-Lindelöf. I am wondering if the next solution is ok:
By doing the change $z = \sqrt{1-x}$ we get the equivalent system:
$$ z' = -\frac{t}{2}$$
Since the new system is linear and $g(t,z) = -\frac{t}{2}$ is continuous, it has a unique solution $z^*$ in a neighborhood of $0$. Therefore, the solution to the original system, given by $x^* = 1 - {z^*}^2$, must also be locally unique.
Is this approach correct? In case it isn't, could you suggest another one? Thank you in advance.
| $x(t) = 1 - \left(\frac{1}{\sqrt{2}} -\frac{t^2}{4} \right)^2$ is a solution of the initial value problem, but the solution is not unique.
With $t^* = 2^{3/4}$ we have $x(\pm t^*) = 1$ and $x'(\pm t^*) = 0$, so that, for example,
$$
x_1(t) = \begin{cases}
x(t) & \text{ if } t \le t^* \\
1 & \text{ if } t \ge t^*
\end{cases}
$$
is another solution.
As you noticed, $f(t,x) = t \sqrt{1-x}$ is not locally Lipschitz at points $(t, 1)$. Picard-Lindelöf guarantees a unique solution in a neighborhood of $x(0) = 1/2$, but can not be applied anymore when the solution reaches $x(t^*) = 1$.
The transformed function $z_1 = \sqrt{1-x_1}$ is not differentiable at $t=t^*$, that's why the transformed initial value problem does not show this solution. This means that $z'=-t/2$ is not equivalent to the original problem.
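Both facts can be checked numerically; the sketch below (the names and step sizes are my own choices) compares a finite-difference derivative of $x(t)$ against the right-hand side $t\sqrt{1-x}$ on $[0, t^*)$:

```python
import math

def x(t):
    # The claimed solution x(t) = 1 - (1/sqrt(2) - t^2/4)^2.
    return 1 - (1 / math.sqrt(2) - t**2 / 4) ** 2

t_star = 2 ** 0.75  # the time where x reaches 1

# Central finite difference vs. the right-hand side t*sqrt(1-x).
h = 1e-6
for i in range(1, 160):  # t from 0.01 to 1.59 < t_star
    t = i / 100
    deriv = (x(t + h) - x(t - h)) / (2 * h)
    rhs = t * math.sqrt(1 - x(t))
    assert abs(deriv - rhs) < 1e-4, t

# The continuation x1(t) = 1 for t >= t_star also satisfies the ODE,
# since then x1' = 0 and t*sqrt(1 - 1) = 0.
```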
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4284238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $h:\mathbb{S^2} \to \mathbb{R}$ has a zero.
Let's do the usual: define $h:S^2 \to \mathbb{R}$ by $h(x)=f(x)-g(x)$. Then $h$ is continuous and clearly $h(-x)=-h(x)$, so we have to find a zero of $h$.
As $S^2$ is connected and $h$ is continuous, for $x \in S^2$, if $h(x)>0$ then $h(-x)=-h(x)<0$.
We conclude that $h(S^2)$ is connected in $\mathbb{R}$, so by the intermediate value theorem there exists some $x_0 \in S^2$ such that $h(x_0)=0$.
Is this okay?
I thought of using the Brouwer Fixed Point Theorem, but the domain is not $D^2$ as in the theorem. Is there a way to use it, or some other tool from Algebraic Topology?
| As noted in comments, you haven’t shown $f(x_0)=g(x_0)=0,$ only that $f(x_0)=g(x_0).$
Assume there is no such $x.$ Then define $k: S^2\to S^1:$ $$k(x)=\frac{(f(x),g(x))}{\|(f(x),g(x))\|}$$
Then $$k(-x)=-k(x).\tag1$$
Show there can be no such $k:S^2\to S^1$ which satisfies (1).
Hint: Show there is a function $\alpha:S^1\to S^2$ such that $k\circ \alpha$ is not null-homotopic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4284573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How can we know if a vector is in the range of a matrix? When I was reading the A.5.5 on Page 651 of B&V's Convex Optimization book. It presents a method to distinguish if a vector is in the range of a matrix $A$.
The range condition $Bv \in \mathcal{R}(A)$ can also be expressed as $(I − AA^{\dagger})Bv = 0$.
where $A$ is singular and $A^{\dagger}$ the Moore Penrose inverse of $A$. We can denote $Bv$ as $b$ to simplify the notation, i.e.,
$$\tag{1}
b\in\mathcal{R}(A)\Leftrightarrow (I − AA^{\dagger})b = 0
$$
I can prove the above from left to right as follows.
From the definition of Moore Penrose inverse, we have $A A^{\dagger} A=A$. Since $b\in\mathcal{R}(A)$, we can find a $x$
such that $Ax=b$. Then we have the following derivation
$$
(I − AA^{\dagger})b =b − AA^{\dagger}b = Ax− AA^{\dagger}Ax=Ax−Ax=0
$$
But I failed to show the reverse. Any instruction will be appreciated.
| For the Moore-Penrose inverse you have (see below)
$$
AA^\dagger = P_{\operatorname{ran}A}\qquad\text{and}\qquad A^\dagger A = P_{\operatorname{ran}A^*},
$$
where $P_M$ denotes the orthogonal projection onto the subspace $M$. Therefore,
$$
(I-AA^\dagger)b=0\;\Longleftrightarrow\; b = P_{\operatorname{ran}A}b.
$$
Hence, if this holds, then $b\in\operatorname{ran}A$. Conversely, if $b\in\operatorname{ran}A$, then $P_{\operatorname{ran}A}b = b$.
The Moore-Penrose inverse can be defined as follows. Let $A : \mathbb R^n\to\mathbb R^m$ be linear. Then
$$
\mathbb R^n = \ker A\oplus\operatorname{im}(A^\top)
$$
and
$$
\mathbb R^m = \ker(A^\top)\oplus\operatorname{im}A.
$$
Consider the restriction $R := A|_{\operatorname{im}(A^\top)}$. Since $\operatorname{im}(A^\top)$ is complementary to $\ker A$, we have that $R$ is injective. It maps $\operatorname{im}(A^\top)$ bijectively onto $\operatorname{im}A$, that is, $R : \operatorname{im}(A^\top)\to\operatorname{im}A$. The Moore-Penrose inverse can be defined as
$$
A^\dagger := R^{-1}P_{\operatorname{im}A}.
$$
As $R^{-1}$ inverts the action of $A$ on $\operatorname{im}(A^\top)$, we have $AA^\dagger = P_{\operatorname{im}A}$.
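This characterization is easy to test numerically, e.g. with NumPy's `pinv`; the matrices below are an arbitrary small example, not from the question:

```python
import numpy as np

# A singular 3x3 matrix of rank 2; its range is span{e1, e2}.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
P = A @ np.linalg.pinv(A)  # should be the orthogonal projection onto ran(A)

b_in = A @ np.array([2.0, -1.0, 3.0])  # lies in the range of A
b_out = np.array([0.0, 0.0, 1.0])      # orthogonal to ran(A) here

assert np.linalg.norm((np.eye(3) - P) @ b_in) < 1e-10   # (I - AA^+)b = 0
assert np.linalg.norm((np.eye(3) - P) @ b_out) > 0.5    # nonzero off the range
```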
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4284760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof Explanation: Cauchy's Theorem (Homotopy Version) - Why take discs of radii $3\epsilon$? I have two questions about the proof of Cauchy's Theorem (Homotopy Version) in Stein & Shakarchi's Complex Analysis. This is Theorem $5.1$, in Chapter $3$. If you have a copy, see Pg. $93-95$.
*
*By uniform continuity of $F$, how do we get $\delta$ such that $$|s_1 - s_2| < \delta \implies \sup_{t\in [a,b]} |\gamma_{s_1}(t) - \gamma_{s_2}(t)| < \epsilon$$
My thoughts:
By uniform continuity of $F$, we have for every $\epsilon > 0$, some $\delta > 0$ such that $\sqrt{(s_1-s_2)^2 + (t_1-t_2)^2} < \delta$ implies $|F(s_1,t_1) - F(s_2,t_2)| = |\gamma_{s_1}(t_1) - \gamma_{s_2}(t_2)| < \epsilon$. To prove the required implication above, we only need to show the continuity of the map $\varphi: [0,1] \to \mathcal C([a,b], \Bbb C)$ given by $\varphi(s) = F_s$. $\mathcal C([a,b], \Bbb C)$ is endowed with the $\sup$ metric.
*What is the significance behind taking $3\epsilon$ and $2\epsilon$ in the proof? Why can't we take discs $\{D_0, \ldots, D_n\}$ of radii $\epsilon$? Pretty sure this has something to do with the claim from uniform continuity above, but I'm not able to figure it out. For $|s_1-s_2| < \delta$, we have $|\gamma_{s_1}(t) - \gamma_{s_2}(t)| < \epsilon$ for all $t\in [a,b]$, the two curves $\gamma_{s_1}$ and $\gamma_{s_2}$ could easily be contained in a union of discs of radii $\epsilon$ as well.
Reference:
| $|s_1-s_2| <\delta$ implies $\sqrt {(s_1-s_2)^{2}+(t-t)^{2} }<\delta$ for all $t$. Hence, $|F(s_1,t)-F(s_2,t)| <\epsilon$. [Just put $t_1=t_2=t$ in the inequality you got for uniform continuity of $F$].
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4284970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Does $\sum_{n=1}^N \int f_n \mathrm d\mu = \int \sum_{n=1}^N f_n \mathrm d\mu$ hold if $\int f_1 \mathrm d\mu = \infty$? Assume $f_1,\ldots,f_N:\mathbb R \to \mathbb R$. If $\int |f_n| \mathrm d\mu < \infty$ for all $n=1, \ldots, N$, then we always have $$\sum_{n=1}^N \int f_n \mathrm d\mu = \int \sum_{n=1}^N f_n \mathrm d\mu.$$
Now we assume $f_1,\ldots,f_N:\mathbb R \to [0, \infty]$ with $\int f_1 \mathrm d\mu = \infty$. I would like to ask if $$\sum_{n=1}^N \int f_n \mathrm d\mu = \int \sum_{n=1}^N f_n \mathrm d\mu$$ still holds. Thank you so much for your explanation!
| Yes, and for trivial reasons. We have
$$\sum_{n=1}^N \int_X f_n d\mu \ge \int_X f_1 d \mu = \infty$$
so
$$\sum_{n=1}^N \int_X f_n d \mu = \infty.$$
Next, by monotonicity of the integral, $$\int_X \sum_{n=1}^N f_n d \mu \ge \int_X f_1 d \mu = \infty$$
so we have
$$\int_X \sum_{n=1}^N f_n d \mu = \infty$$
and both sides are $\infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4285088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Dividing numerator and denominator of integrand in definite integral I'm trying to solve the following definite integral $$\int_0^{\frac{\pi}{4}} \sqrt{\tan x}\,dx$$
First, I made the substitution $u = \sqrt{\tan x}$
And arrived with: $$\int_0^1 \frac{2u^2}{u^4+1}\,du$$
When I divided both the numerator and the denominator of the integrand by $u^2$ and did some manipulation, I got
$$\int_0^1 \frac{1+\frac{1}{u^2}}{(u-\frac{1}{u})^2+2}+\frac{1-\frac{1}{u^2}}{(u+\frac{1}{u})^2-2}\, du$$
This integral can be solved easily with a $u$-substitution, giving the result:
$$\left.\frac{1}{2\sqrt{2}} \ln\left|\frac{u^2-\sqrt{2}u+1}{u^2+\sqrt{2}u+1}\right|+\frac{1}{\sqrt{2}}\arctan\left(\frac{1}{\sqrt{2}}\left(u-\frac{1}{u}\right)\right)\ \right|_0^1$$
The integral however is not defined at $u=0$, my question is: Are we allowed to do this kind of manipulation to definite integrals with rational function, and will proceeding with improper integral yield the correct result?
| I don't think the best thing to do is divide by $u^2$. I think this can solve your problem:
\begin{align*}
\int_0^1 \frac{2u^2}{u^4+1}\ du&=\frac{1}{\sqrt2}\int_0^1\frac{u}{u^2-\sqrt2u+1}\ du-\frac{1}{\sqrt2}\int_0^1\frac{u}{u^2+\sqrt2u+1}\ du\\[2mm]
&=\frac{1}{\sqrt2}\int_0^1\frac{u}{\left(u-\frac{\sqrt2}{2}\right)^2+\frac{1}{2}}\ du-\frac{1}{\sqrt2}\int_0^1\frac{u}{\left(u+\frac{\sqrt2}{2}\right)^2+\frac{1}{2}}\ du\\[2mm]
&=\frac{1}{2\sqrt 2}\ln\left|\frac{u^2-\sqrt2 u+1}{u^2+\sqrt2u+1}\right|+\frac{1}{\sqrt2}\arctan(\sqrt2u-1)+\frac{1}{\sqrt2}\arctan(\sqrt2u+1)\Big\vert_0^1\\[2mm]
&=\frac{\pi +\ln(3-2\sqrt 2)}{2\sqrt2}\\[2mm]
&\approx 0.487495
\end{align*}
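A numerical quadrature agrees with this closed form; the sketch below uses a composite Simpson's rule (the function name `f` is just an illustrative choice):

```python
import math

def f(u):
    return 2 * u**2 / (u**4 + 1)

# Composite Simpson's rule on [0, 1] with n even subintervals.
n = 10000
h = 1 / n
total = f(0) + f(1)
for i in range(1, n):
    total += (4 if i % 2 else 2) * f(i * h)
numeric = total * h / 3

closed_form = (math.pi + math.log(3 - 2 * math.sqrt(2))) / (2 * math.sqrt(2))
assert abs(numeric - closed_form) < 1e-10
print(round(numeric, 6))  # 0.487495
```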
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4285216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Is there a natural number $n$ for which $\sqrt[n]{22-10\sqrt7}=1-\sqrt7$ Is there a natural number $n$ for which $$\sqrt[n]{22-10\sqrt7}=1-\sqrt7$$
My idea was to try to express $22-10\sqrt7$ as something to the power of $2$, but it didn't work $$22-10\sqrt7=22-2\times5\times\sqrt7$$ Since $5^2=25, \sqrt7^2=7$ and $25+7\ne22$. What else can we try?
Note that $2.5\lt\sqrt7\lt2.7$, so $-1.7\lt1-\sqrt7\lt-1.5$, while $-5\lt22-10\sqrt7\lt-3$. The only possible integer exponent $n$ for which $(1-\sqrt7)^n=22-10\sqrt7$ is $n=3$. (The exponent cannot be even, since even powers are non-negative, and if $n\ge5$ is odd then $(1-\sqrt7)^n\le(-1.5)^5\lt-5$.)
This doesn't prove that $n=3$ satisfies the equation, of course. It merely tells us what needs to be checked, namely
$$(1-\sqrt7)^3=1-3\sqrt7+3\cdot7-7\sqrt7=22-10\sqrt7$$
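The cube can also be checked by exact arithmetic in $\mathbb{Z}[\sqrt7]$, representing $p+q\sqrt7$ as an integer pair (a small sketch, with names of my own choosing):

```python
# Exact arithmetic in Z[sqrt(7)]: the pair (p, q) represents p + q*sqrt(7).
def mul(a, b):
    p1, q1 = a
    p2, q2 = b
    # (p1 + q1*r)(p2 + q2*r) = (p1*p2 + 7*q1*q2) + (p1*q2 + p2*q1)*r, r = sqrt(7)
    return (p1 * p2 + 7 * q1 * q2, p1 * q2 + p2 * q1)

x = (1, -1)  # 1 - sqrt(7)
cube = mul(mul(x, x), x)
assert cube == (22, -10)  # i.e. (1 - sqrt(7))^3 = 22 - 10*sqrt(7)
```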
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4285350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Limit of $\lim_{x \to \infty}\sqrt[3]{(x+1)^2}-\sqrt[3]{(x-1)^2}$ using nothing but L'Hopitals rule I'm trying to find a general procedure for solving any limit. I went about it by transforming any indeterminate form into $\infty / \infty$ or $0/0$ and applying L'Hopitals rule and repeating this process until a determinate form is reached.
This particular problem seems to not give me a solution using this procedure. Since the indeterminate form is $\infty - \infty$, it is transformed into $$\ln \lim_{x \to \infty} \frac{e^{\sqrt[3]{(x+1)^2}}}{e^{\sqrt[3]{(x-1)^2}}}$$ which is of the form $\infty / \infty$. But applying L'Hopitals rule doesn't seem to lead anywhere after taking the derivatives. Is this method not sufficient for these kinds of limits, or am I missing something?
This is a standard $\infty-\infty$ situation. To find $\lim (f-g)$, transform it to
$\lim \frac{f}{1/(1-g/f)}$, which is an $\frac{\infty}{\infty}$ form, and then apply L'Hopital's rule.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4285536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the topology of $X^{\mathbb{Z}}$, where $X$ is a topological space? Let $(X, \tau)$ be a topological space and $I$ be an infinite set. I want to define a topology on $X^I$ by open sets in $X$. Is it true that a non-empty set $\mathcal{U}\subseteq X^I$ is open in $X^I$ if there is a finite set $A\subseteq I$ such that $\mathcal{U}=\prod_{\alpha\in I} U_\alpha$, where $U_\alpha=X$ if $\alpha\notin A$ and $U_\alpha\in \tau$ for $\alpha\in A$?
Please help me determine whether this is true.
There are all kinds of topologies on $X^{I}$. The collection of sets you have defined is not a topology. You have to take all possible unions of those sets to get a topology. The resulting topology is called the product topology on $X^{I}$. [For another topology, look up the 'box topology' on Wikipedia.]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4285687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Divisibility in totally ordered, dense, groups without commutativity If $(G,+,<)$ is a commutative, totally ordered, dense group one can prove that for every $x>0$ there exists $y>0$ such that $y+y\le x$. In fact by density there exists $z$ such that $0<z<x$ then either $z+z \le x$ (and we are done) or $z+z>x$. In the latter case
we take $(x-z)$ and with commutativity we can prove that $(x-z)+(x-z)< x$.
What if $G$ is not commutative? Is it possible to prove the same thing?
With this property and adding completeness one can prove that the group is divisible (for each $x\in G$ and $n\in \mathbb N$ there exists $y$ such that $ny=x$). So if the group is also not trivial it has a subgroup isomorphic to $\mathbb Q$ and, by completeness it is actually isomorphic to $\mathbb R$.
So a second question is whether there exists a non-commutative, totally ordered, dense, complete group.
| Since we are not assuming the group is commutative, I will write it multiplicatively rather than additively. So, suppose $x>1$ and take $z$ such that $1<z<x$. Let $y=xz^{-1}$, so $1<y<x$ as well. If $y\leq z$, then we have $y^2\leq yz=x$ and we're done. If $y>z$, then $z^2<yz=x$ and we're again done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4285903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solution to a SDE inclusive of mean-reversion in GBM I know how to solve these SDEs:
\begin{align}
\frac{dX_t}{X_t } &= \mu dt + \sigma dW_t\\
\\
dX_t &= \lambda (\mu - X_t) dt + \sigma dW_t\\
\\
\frac{dX_t}{X_t } &= \lambda(\mu- \ln(X_t)) dt + \sigma dW_t\\
\end{align}
where $W_t$ is the standard Brownian motion, and the rest of parameters are constants.
But then, I cannot get my head around for the following one.
\begin{equation}
{dX_t} = \lambda(\mu- X_t) dt + \sigma X_t dW_t\\
\end{equation}
Question: How can I solve the last SDE?
| Set
$$F_t=\exp\bigg\{-\sigma W_t+\frac{1}{2}\sigma^2t\bigg\}$$
Then define $Y_t=F_tX_t$. You get, after simplifying,
$$dY_t=F_t(\lambda(\mu-X_t))dt=F_t(\lambda(\mu-F_t^{-1}Y_t))dt$$
We solve the pointwise (fixed $\omega$) ODE
$$\begin{aligned}\frac{dY_t}{dt}&=\lambda \mu F_t-\lambda Y_t\\
dY_te^{\lambda t}+\lambda Y_te^{\lambda t}dt&=\lambda \mu e^{\lambda t}F_tdt\\
d(Y_te^{\lambda t})&=\lambda \mu e^{\lambda t}F_tdt\\
Y_te^{\lambda t}-Y_0&=\lambda \mu \int_{[0,t]}e^{\lambda s}F_sds\\
Y_t&=Y_0e^{-\lambda t}+\lambda \mu\int_{[0,t]}e^{-\lambda(t-s)}F_sds\end{aligned}$$
and finally
$$X_t=F_t^{-1}X_0e^{-\lambda t}+\lambda \mu\int_{[0,t]}e^{-\lambda(t-s)}F_t^{-1}F_sds$$
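The integrating-factor step (the pointwise ODE $Y' = \lambda\mu F_t - \lambda Y$) can be sanity-checked numerically for a fixed sample function; here $F(t)=e^{0.3t}$ and all the constants are arbitrary stand-ins for a realized path, not part of the derivation:

```python
import math

lam, mu = 1.5, 2.0
F = lambda t: math.exp(0.3 * t)  # arbitrary stand-in for a realized path of F_t
Y0, T, n = 0.7, 1.0, 200000
dt = T / n

# Forward Euler for Y' = lam*mu*F(t) - lam*Y.
y = Y0
for i in range(n):
    t = i * dt
    y += (lam * mu * F(t) - lam * y) * dt

# Closed form: Y(T) = Y0*exp(-lam*T) + lam*mu * int_0^T exp(-lam*(T-s)) F(s) ds.
# The integral is exact here since F is an exponential.
integral = (math.exp(0.3 * T) - math.exp(-lam * T)) / (lam + 0.3)
closed = Y0 * math.exp(-lam * T) + lam * mu * integral
assert abs(y - closed) < 1e-3
```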
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4286068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Tensor product of antilinear maps. Let $X_1, X_2, Y_1, Y_2$ be $\mathbb{C}$-vector spaces and $f_1: X_1 \to Y_1$ and $f_2: X_2 \to Y_2$ conjugate-linear maps, i.e. $f_1(\alpha x_1) = \overline{\alpha} f_1(x_1)$ and $f_1(x_1 + x_1') = f_1(x_1) + f_1(x_1')$ and similarly for $f_2$.
Question: Does there exist a unique conjugate linear map
$$f_1 \otimes f_2: X_1 \otimes X_2 \to Y_1 \otimes Y_2$$
such that $(f_1 \otimes f_2)(x_1 \otimes x_2) = f_1(x_1) \otimes f_2(x_2)?$
Basically, we would want to apply the universal property of the tensor product to construct this map, but the maps are not linear. But I believe there should be a trick to reduce the problem to linear maps.
An alternative way of approaching this: let $\{e_i\}_{i \in I}$ be a basis for $X_1$ and define
$$(f_1\otimes f_2) (\sum_{i \in I} e_i \otimes z_i) := \sum_{i \in I} f_1(e_i) \otimes f_2(z_i)$$
and this gives existence of the map, but it still looks like an unnatural way to proceed.
| If $V$ is a $\Bbb C$-vector space, then we can define a conjugate vector space $c(V)$ as follows.
The underlying abelian group of $c(V)$ is the same as the abelian group $V$. The scalar product $\bullet$ in $c(V)$ is defined as $\alpha \bullet v = \overline\alpha \cdot v$, where $\cdot$ denotes the scalar product on $V$.
Thus by definition, the map $f_1$ is nothing but a $\Bbb C$-linear map from $X_1$ to $c(Y_1)$. Same for $f_2$.
Now it only remains to prove a
Lemma: $c(Y_1) \otimes c(Y_2)$ is canonically isomorphic to $c(Y_1 \otimes Y_2)$ as $\Bbb C$-vector spaces.
The proof of the lemma is a simple exercise, and the original question immediately follows from the lemma.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4286265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $\{T \in \mathcal{L}(\mathbb{R}^5, \mathbb{R}^4) : \text{dim}(\text{null}(T)) > 2\}$ is not a subspace I am working my way through Axler's Linear Algebra Done Right. I attempted the problem and found a solution here (problem is $3$.B $4$): https://linearalgebras.com/3b.html.
The solution is as follows:
Let $U = \{T \in \mathcal{L}(\mathbb{R}^5, \mathbb{R}^4) : \text{dim}(\text{null}(T)) > 2\}$.
Let $e_1, ..., e_5$ be a basis of $R^5$ and $f_1, ..., f_4$ be a basis of $R^4$. Define $S_1$ by $S_1e_i = 0$ for $i = 1, 2, 3$, $S_1e_4 = f_1$, and $S_1e_5 = f_2$. Define $S_2$ by $S_2e_i = 0$ for $i = 1, 2, 4$, $S_2e_3 = f_3$, and $S_2e_5 = f_4$.
Then, $S_1, S_2 \in U$. However,
$$(S_1 + S_2)(e_1) = 0, (S_1 + S_2)(e_2) = 0$$
and
$$(S_1 + S_2)(e_3) = f_3, (S_1 + S_2)(e_4) = f_1, (S_1 + S_2)(e_5) = f_2 + f_4$$
Then, $\text{dim}(\text{null}(S_1 + S_2)) = 2$ and $S_1 + S_2 \notin U$. Thus, $U$ is not closed under addition, which implies $U$ is not a subspace of $\mathcal{L}(\mathbb{R}^5, \mathbb{R}^4)$, as desired.
My solution is exactly the same except that I say let $f_1, f_2, f_3, f_4$ be arbitrary vectors in $R^4$ instead of let $f_1, f_2, f_3, f_4$ be a basis of $R^4$. Does this actually affect the validity of my solution? I do not believe so, but other solutions I found all specify the $f$'s being a basis.
| Let $$T=\begin{pmatrix}1&0&0&0&0\\
0&1&0&0&0\\
0&0&0&0&0\\
0&0&0&0&0\\
\end{pmatrix}$$
$$S=\begin{pmatrix}0&0&0&0&0\\
0&0&0&0&0\\
0&0&1&0&0\\
0&0&0&1&0\\
\end{pmatrix}$$
Both have a nullity of $3$ but
$$T+S=\begin{pmatrix}1&0&0&0&0\\
0&1&0&0&0\\
0&0&1&0&0\\
0&0&0&1&0\\
\end{pmatrix}$$ has a nullity of only one.
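The nullities in this example can be verified with NumPy's `matrix_rank` via rank-nullity (a small check, not part of the original answer):

```python
import numpy as np

T = np.zeros((4, 5)); T[0, 0] = T[1, 1] = 1
S = np.zeros((4, 5)); S[2, 2] = S[3, 3] = 1

def nullity(M):
    # Rank-nullity: dim null(M) = (number of columns) - rank(M).
    return M.shape[1] - np.linalg.matrix_rank(M)

assert nullity(T) == 3 and nullity(S) == 3
assert nullity(T + S) == 1
```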
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4286388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Prove that $P(T>n)=P\left(X_{n}>Y_{n}\right)-P\left(X_{n}<Y_{n}\right)$ I am having a very hard time proving the below statement. I keep getting the wrong result so I feel like I am using the wrong probability identities. I would appreciate any help!
Let $\left(X_{n}\right)_{n \geq 0}$ and $\left(Y_{n}\right)_{n \geq 0}$ be two independent simple symmetric random walks with $X_{0}=x$ and $Y_{0}=y$, where $x, y$ have the same parity and $x>y$.
Define $T=\inf \left\{n \geq 1: X_{n}=Y_{n}\right\}$. Prove that
$$
P(T>n)=P\left(X_{n}>Y_{n}\right)-P\left(X_{n}<Y_{n}\right)
$$
This is what I did:
$$
P\left(T_{a}>n\right)=1-\mathbb{P}\left(T_{a} \leq n\right)
$$
By theorem 29 , since $X_{n}$ and $Y_{n}$ are simple symmetric random walk then:
$$
P_{0}\left(T_{a} \leq n\right)=2 P_{0}\left(X_{n} \geq a\right)-P_{0}\left(X_{n}=a\right)
$$
Hence:
$$
\begin{aligned}
&\mathbb{P}\left(T_{a}>n\right)=1-2 P\left(X_{n} \geq Y_{n}\right)+P\left(X_{n}=Y_{n}\right)\\
&=1-2 P\left(X_{n}>Y_{n}\right)-2 P\left(X_{n}=Y_{n}\right)+P\left(X_{n}=Y_{n}\right)\\
&=1-2 P\left(X_{n}>Y_{n}\right)-P\left(X_{n}=Y_{n}\right)
\end{aligned}
$$
$$
1-P\left(X_{n}=Y_{n}\right)=P\left(X_{n}>Y_{n}\right)+P\left(X_{n}<Y_{n}\right)
$$
$$
\begin{aligned}
P\left(T_{a}>n\right) &=P\left(X_{n}>Y_{n}\right)+P\left(X_{n}<Y_{n}\right)-2 P\left(X_{n}>Y_{n}\right) \\
&=\mathbb{P}\left(X_{n}<Y_{n}\right)-\mathbb{P}\left(X_{n}>Y_{n}\right)
\end{aligned}
$$
$Z_n=X_n-Y_n$ is a walk with $Z_0=x-y>0$, but with steps $-2, 0, 2$. By the reflection principle (reflecting the path after its first visit to $0$), for $k>0$:
$$P(Z_n=k,\ T\le n)=P(Z_n=-k,\ T\le n)$$
$$\forall k>0: P(Z_n=k) = P(Z_n=k,\ T>n)+P(Z_n=k,\ T\le n)= \\ =P(Z_n=k,\ T>n)+P(Z_n=-k,\ T\le n) \implies \\
P(Z_n=k,\ T>n)=P(Z_n=k)-P(Z_n=-k,\ T\le n)$$
now (using that $T>n$ forces $Z_n>0$, since $Z$ moves in steps of $2$ and cannot change sign without visiting $0$)
$$P(T>n) = \sum_{k=1}^{2n+Z_0}P(Z_n=k,\ T>n) = \sum_{k=1}^{2n+Z_0}\left(P(Z_n=k)-P(Z_n=-k,\ T\le n)\right)= \\ =\sum_{k=1}^{2n+Z_0}P(Z_n=k)-\sum_{k=1}^{2n+Z_0}P(Z_n=-k,\ T\le n)=P(Z_n>0) - P(Z_n < 0) =\\ =P(X_n>Y_n)-P(X_n<Y_n)$$
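The identity can be double-checked by exact dynamic programming on the difference walk $Z$ (steps $-2, 0, 2$ with probabilities $\tfrac14, \tfrac12, \tfrac14$); the function name and test values below are my own choices:

```python
from fractions import Fraction

def check(z0, n):
    # Z_t = X_t - Y_t steps by -2, 0, +2 with probabilities 1/4, 1/2, 1/4.
    steps = [(-2, Fraction(1, 4)), (0, Fraction(1, 2)), (2, Fraction(1, 4))]

    free = {z0: Fraction(1)}   # unrestricted distribution of Z_n
    alive = {z0: Fraction(1)}  # mass that has never hit 0 (T = first hit of 0)
    for _ in range(n):
        new_free, new_alive = {}, {}
        for dist, new in ((free, new_free), (alive, new_alive)):
            for z, p in dist.items():
                for s, q in steps:
                    new[z + s] = new.get(z + s, Fraction(0)) + p * q
        new_alive.pop(0, None)  # absorb at 0
        free, alive = new_free, new_alive

    p_T_gt_n = sum(alive.values())
    diff = sum(p for z, p in free.items() if z > 0) - \
           sum(p for z, p in free.items() if z < 0)
    return p_T_gt_n, diff

for z0 in (2, 4, 6):
    for n in (1, 2, 5, 8):
        a, b = check(z0, n)
        assert a == b, (z0, n)  # P(T > n) = P(Z_n > 0) - P(Z_n < 0), exactly
```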
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4286549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
An equilateral triangle related to the Euler line Let $\triangle ABC$ be a non-equilateral triangle such that $∠BAC = 60^{\circ}$, and let $D , E$ be the intersection points of the Euler line of $\triangle ABC$ and the sides of $∠BAC.$
Prove that the $\triangle ADE$ is equilateral.
The only solution I can think of is coordinate geometry: let $AC$ lie along the $x$-axis and let $B$ be some point on the line $y = x \sqrt3$. Then I would find the gradient of the Euler line $OGH$ and use that to find the angles of $\triangle ADE$.
I haven't been able to find a solution using euclidean geometry, so that's what I'm looking for.
| Let $O$ and $H$ be the circumcenter and orthocenter of $\triangle ABC$, respectively.
Let $X$ be a point on $DE$ such that $\angle DAX=\angle EAX=30^{\circ}$ ($AX$ is the angle bisector).
Observe,
$$\angle DAH=\angle EAO=90^{\circ}-\angle B\implies \angle HAX=\angle OAX. $$
$$AH=2R\cos\angle A=2R\cos60^{\circ}=R=AO.$$
Hence, $\triangle AOH$ is isosceles and the angle bisector, $AX$ is perpendicular to $DE$.
Therefore, $AD=AE$ and $\angle A=60^{\circ}$ imply that $\triangle ADE$ is equilateral.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4286680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$\sum\limits_{cyc} \sqrt{\frac{a^3}{1+bc}} \geq 2$ for $a, b, c > 0$ which satisfies $abc=1$. $\displaystyle \sum_{cyc} \sqrt{\dfrac{a^3}{1+bc}} \geq 2$ for $a, b, c > 0$ which satisfies $abc=1$.
My attempt:
\begin{align}
&\text{let } a=\frac{y}{x}, b=\frac{x}{z}, c=\frac{z}{y}. \\
&\text{Substituting into the original inequality: }\displaystyle \sum_{cyc}\sqrt{\frac{(\frac{y}{x})^3}{1+\frac{x}{y}}} \geq 2 \text{ for }x, y, z\in \mathbb{R}^+. \\
&\therefore \text{ETS) }\displaystyle \sum_{cyc} \sqrt{\frac{y^4}{x^3y+x^4}} \geq 2. \\
\ \\
& \text{Two ways to think: }\\
\ \\
& (1) \\
&\therefore \text{Using Cauchy-Schwarz inequality, } \displaystyle \Bigg(\sum_{cyc} \sqrt{\frac{y^4}{x^3y+x^4}}\Bigg) \Bigg(\sum_{cyc} \frac {x}{y}\Bigg) \geq \sum_{cyc} \frac{y^2}{x^2+xy} \\
&\text{ETS) }\displaystyle \sum_{cyc} \frac{y^2}{x^2+xy} \geq 2\Bigg( \sum_{cyc} \frac {x}{y} \Bigg).
\ \\
&(2)\\
&\therefore \text{Using Cauchy-Schwarz inequality, } \displaystyle \Bigg(\sum_{cyc} \sqrt{\frac{y^4}{x^3y+x^4}}\Bigg) \Bigg(\sum_{cyc} \sqrt{\frac {x+y}{y}}\Bigg) \geq \sum_{cyc} \frac{y^3}{x^3} \\
&\text{ETS) }\displaystyle \sum_{cyc} \frac{y^3}{x^3} \geq 2\Bigg( \sum_{cyc} \sqrt{\frac{x+y}{y}} \Bigg).
\end{align}
p.s. I think we can't use the AM-GM one, but I'll try.
| Using Titu's Lemma,
$$\sum\sqrt{\frac{a^3}{1+bc}}=\sum \frac{a^2}{\sqrt{a+1}}\ge\frac{(a+b+c)^2}{\sum \sqrt{a+1}} \tag{1}$$
It remains to show,
$$(a+b+c)^2\ge 2(\sqrt{a+1}+\sqrt{b+1}+\sqrt{c+1})$$
Using A.M-G.M inequality,
$$\sum \frac{(a+1)+1}{2}\ge \sum\sqrt{(a+1)\cdot 1}\implies a+b+c+6\ge2(\sqrt{a+1}+\sqrt{b+1}+\sqrt{c+1}) $$
It remains to show,
$$(a+b+c)^2 \ge (a+b+c)+6\tag{2}$$
which is true, since $a+b+c\ge 3\sqrt[3]{abc}=3.$
Proof of $(2)$:$$(a+b+c)^2\ge 3(a+b+c)\ge (a+b+c)+6$$
Given below is a proof of a stronger result.
Claim:
$$\sum \sqrt{\frac{a^3}{1+bc}}\ge \frac{3}{\sqrt{2}}$$
Proceeding from $(1)$, it remains to show,
$$\sqrt{2}(a+b+c)^2\ge3\sum{\sqrt{a+1}}\implies 4(a+b+c)^2\ge 3\sum2\sqrt{2(a+1)}$$
Using A.M-G.M inequality,
$$\sum \frac{(a+1)+2}{2}\ge \sum \sqrt{2(a+1)}\implies a+b+c+9\ge \sum2\sqrt{2(a+1)} $$
It remains to show,
$$4(a+b+c)^2\ge 3(a+b+c)+27$$
which is true, since $a+b+c\ge 3.$
Also, equality can be achieved, unlike the original problem, at $a=b=c=1$.
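A numerical spot-check of the stronger claim $\sum\sqrt{a^3/(1+bc)}\ge 3/\sqrt2$ over a small grid with $abc=1$, using that $1+bc=1+1/a$ cyclically (the grid values are arbitrary choices):

```python
import math

def lhs(a, b, c):
    # With abc = 1, the cyclic denominators are 1 + 1/a, 1 + 1/b, 1 + 1/c.
    return sum(math.sqrt(t**3 / (1 + 1 / t)) for t in (a, b, c))

best = float("inf")
vals = [0.2, 0.5, 1.0, 2.0, 5.0]
for a in vals:
    for b in vals:
        c = 1 / (a * b)
        best = min(best, lhs(a, b, c))

assert best >= 3 / math.sqrt(2) - 1e-9       # claimed lower bound holds
assert abs(lhs(1, 1, 1) - 3 / math.sqrt(2)) < 1e-9  # equality at a = b = c = 1
```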
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4286891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Multivariable Chain Rule for Implicit Multivariable Functions? I'd like to compute $\frac{\partial x}{\partial z}$ along $S$ at $(x,y,z)$ for $S: \frac{1}{x}+\arctan(y+2z)=1$.
My Approach: I can define $w(x,y,z)=\frac{1}{x}+\arctan(y+2z)$ and find the total differential and so on, i.e., $dw=w_x dx+w_y dy+w_z dz$ (we'd also need to use the fact that $y$ is held constant and $dw=0$).
How can I use the multivariable chain rule here? I'd like to find $\frac{\partial x}{\partial z}$ using the chain rule, but I'm a little bummed out here because I am only used to using the chain rule for solving equations where, say, $y$ depends on $a,b$ and $a, b$ depend on $t$ (e.g., $\frac{dy}{dt}=\frac{\partial y}{\partial a}\frac{da}{dt}+\frac{\partial y}{\partial b}\frac{db}{dt}$).
| We can write
\begin{align*}
S:\frac{1}{x}+\arctan(y+2z)=1\tag{1}
\end{align*}
as function in $x=x(y,z)$.
We obtain from (1)
\begin{align*}
x(y,z)&=\frac{1}{1-\arctan(y+2z)}\\
\\
\color{blue}{\frac{\partial x}{\partial z}(y,z)}
&=\frac{\partial}{\partial z}\left(\frac{1}{1-\arctan(y+2z)}\right)\\
&=\frac{\frac{\partial}{\partial z}\left(\arctan(y+2z)\right)}{\left(1-\arctan(y+2z)\right)^2}\tag{2}\\
&=\frac{2}{\left(1+(y+2z)^2\right)\left(1-\arctan(y+2z)\right)^2}\\
&\,\,\color{blue}{=\frac{2x^2}{1+(y+2z)^2}}\tag{3}
\end{align*}
Comment:
*
*In (2) we use $\left(\frac{1}{g(z)}\right)^{\prime}=-\frac{\left(g(z)\right)^{\prime}}{(g(z))^2}$.
*In (3) we use $x=\frac{1}{1-\arctan(y+2z)}$.
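The result in (3) can be spot-checked with central finite differences; the sample points and step size below are arbitrary choices:

```python
import math

def x(y, z):
    return 1 / (1 - math.atan(y + 2 * z))

def dx_dz(y, z):
    # The claimed partial derivative: 2*x^2 / (1 + (y + 2z)^2).
    return 2 * x(y, z) ** 2 / (1 + (y + 2 * z) ** 2)

h = 1e-6
for (y, z) in [(0.0, 0.1), (0.5, -0.2), (-0.3, 0.3)]:
    numeric = (x(y, z + h) - x(y, z - h)) / (2 * h)
    assert abs(numeric - dx_dz(y, z)) < 1e-6, (y, z)
```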
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4287080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Suppose X has the uniform distribution on [0,1]. Suppose that given X, Y has the Gamma distribution with mean X and variance 3X. Suppose $X$ has the uniform distribution on $[0,1]$. Suppose that given $X$, $\,Y$ has the Gamma distribution with mean $X$ and variance $3X$.
a) Find $E(Y)$
b) Find $Var(Y)$
c) Give an integral expression for $P(XY>1)$ without solving it.
This is what I think I should do.
for part (a), $E(Y) = E(E(Y\mid X))$. The expectation of $Y$ given $X$ is $X$. So,
$E(Y) = E(E(Y\mid X)) = E(X) = \int_0^1 x \, dx = \frac{1}{2}$
Also, for part(b),
$Var(Y) = E(Var(Y\mid X)) + Var(E(Y\mid X)) = E(3X) + Var(X) = 3E(X) + \frac{1}{12} = 3 \times 0.5 + \frac{1}{12} = \frac{19}{12}$
if part (a) and (b) are correct, how should I continue for part (c)?
Thanks.
| Your answers for $(a)$ and $(b)$ are correct. For $(c)$, we want $\Pr[XY > 1]$, and the most natural way to do this is to condition on $X$: $$\Pr[XY > 1] = \int_{x=0}^1 \Pr[Y > 1/X \mid X = x]f_X(x) \, dx = \int_{x=0}^1 \int_{y=1/x}^\infty \frac{y^{x/3-1} e^{-y/3}}{3^{x/3} \Gamma(x/3)} \, dy \, dx.$$
Here I have used the fact that if the conditional mean and variance are $X$ and $3X$, then this implies the gamma distribution must have shape $X/3$ and rate $1/3$.
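A quick Monte Carlo sanity check of parts (a) and (b) (a sketch, not part of the original solution; note that Python's `random.gammavariate(alpha, beta)` takes shape `alpha` and *scale* `beta`, so shape $X/3$ and scale $3$ give conditional mean $X$ and variance $3X$):

```python
import random

random.seed(1)
n = 200_000
ys = []
for _ in range(n):
    x = random.random()                           # X ~ Uniform[0, 1]
    ys.append(random.gammavariate(x / 3.0, 3.0))  # Y | X ~ Gamma(shape X/3, scale 3)

mean_y = sum(ys) / n                              # should be near E(Y) = 1/2
var_y = sum((y - mean_y) ** 2 for y in ys) / n    # should be near Var(Y) = 19/12
```

With 200,000 samples the estimates land within a few hundredths of $\frac12$ and $\frac{19}{12}\approx 1.583$.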
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4287280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Permutations with restrictions on letters not allowed to the left of another I'm working through Per Alexandersson's combinatorics handout (found here). My answer for #19 wrong and I'm trying to figure out why. The question is:
How many words can be made by rearranging aabbccdd, such that
no ’a’ appears somewhere to the right of some ’c’?
My approach is the following:
Lock the leftmost 'c' in every possible position it can take, count the permutations when c is in that position, and add up the different cases.
Case 1
leftmost 'c' is in position 3. This leaves 5 free letters, two of which are copies of another, to rearrange. This gives us $\frac{5!}{2!2!}$ permutations.
Case 2
leftmost 'c' is in position 4. Left of the leftmost 'c', we must have two 'a' letters, which leaves one space to put either a 'b' or a 'd'. Let's say we first put 'b' in that one spot, then to the right of the leftmost 'c', we'll have two 'd' letters, 'b' and 'c'. We can order the right side $\frac{4!}{2!}$. We can order the left side $\frac{3!}{2!}$. Then we can do the same thing with 'd' to the left of the leftmost 'c'. So, $2\times\frac{4!}{2!}\times\frac{3!}{2!}$
Case 3
leftmost 'c' is in position 5. Left of the leftmost 'c', there are two spaces available. I split this case into two subcases:
Subcase 1:
On the left side of the leftmost 'c', we have either 'bb' or 'dd'. In either case, the number of permutations will be $\frac{4!}{2!2!}\times\frac{3!}{2!}$, so the total cases for this subcase will just be $2\times\frac{4!}{2!2!}\times\frac{3!}{2!}$
Subcase 2:
On either side of the leftmost 'c', we have 'bd' in the free available spaces. This means we have $\frac{4!}{2!}\times3!$ permutations since the left will have 1 letter that is copied once and the right will have all distinct letters.
Case 4
leftmost 'c' is in position 6. Either 'd' or 'b' is on the right of leftmost 'c'. So we can order the left hand side in $\frac{5!}{2!2!}$ ways and we can order the right in 2 ways and there are 2 different letters that could go on the right. So our total permutations are: $2\times2\times\frac{5!}{2!2!}$
Case 5
Last one: the leftmost 'c' is in position 7. To the left of the leftmost 'c', there are 6 letters (two 'a's, two 'b's, two 'd's) that can be arranged. So we have $\frac{6!}{2!2!2!}$ permutations.
Sum all of these and it comes to 384, but the answer is 420. What am I missing?
| Your method is correct. However, you made an arithmetic error: if you add up the values of your expressions, you should obtain $420$. In particular, it looks like you omitted Subcase 1 of Case 3 from your total.
A simpler approach is to choose two of the eight positions for the bs and two of the remaining six positions for the ds. The remaining four positions must be filled from left to right with the two as followed by the two cs. Hence, the number of admissible arrangements is
$$\binom{8}{2}\binom{6}{2} = 420$$
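Both counts can be confirmed by brute force over all distinct rearrangements of `aabbccdd` (a small sketch; the condition "no 'a' to the right of some 'c'" is equivalent to "the last 'a' precedes the first 'c'"):

```python
from itertools import permutations
from math import comb

def valid(word):
    # no 'a' to the right of some 'c'  <=>  last 'a' precedes first 'c'
    return word.rfind('a') < word.find('c')

# deduplicate the 8! orderings into distinct words, then count the valid ones
words = {''.join(p) for p in permutations('aabbccdd')}
brute = sum(valid(w) for w in words)
formula = comb(8, 2) * comb(6, 2)
```

Both `brute` and `formula` come out to 420.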
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4287493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Given triangle ABC with $\alpha=2\beta$ and $b, c, a$ is forming an arithmetic sequence. Find $\alpha, \beta, \gamma$. Given $\triangle ABC$ with $\alpha=2\beta$ and $b, c, a$ is forming an arithmetic sequence ($b$ is the $1$'st term, and $a$ is the last term), find $\alpha, \beta, \gamma$.
My attempts so far:
Let $c=b+k$ and $a=b+2k$. Then, from $\alpha=2\beta$, I got $\gamma = 180^\circ - 3\beta$.
With the Law of Sines , I can write
$$\dfrac{b}{\sin \beta}=\dfrac{b+k}{\sin 3\beta}=\frac{b+2k}{\text{sin } 2\beta}$$
From $\dfrac{b}{\sin\beta}=\dfrac{b+k}{\sin 3\beta}$, it gives $\cos 2\beta = \dfrac{k}{2b}$.
From $\dfrac{b}{\sin \beta}=\dfrac{b+2k}{\sin 2\beta}$, it gives $\cos \beta = \dfrac{b+2k}{2b}$.
After this, I don't know what should I do. Is there any other theorem that can be used to solve this problem?
| You can continue from where you left off. Using the fact that $\cos 2 \beta = 2 \cos^2 \beta - 1$:
$$2 \left( \frac{b+2k}{2b} \right)^2 - 1 = \frac{k}{2b}$$
$$2 (b + 2k)^2 - 4b^2 = 2bk$$
$$2b^2 + 8bk + 8k^2 - 4b^2 = 2bk$$
$$-2b^2 + 6bk + 8k^2 = 0$$
$$b^2 - 3bk - 4k^2 = 0$$
$$(b+k)(b-4k) = 0$$
but since $c = b + k \ne 0$, $b = 4k$ which gives side lengths $(4k, 5k, 6k)$ and has the same angles as $(4,5,6)$. Now the cosine rule $\cos C = \frac{a^2+b^2-c^2}{2ab}$ and so on can be used to find angles $\alpha,\beta,\gamma$.
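A quick check with the law of cosines (a sketch with $k=1$, so sides $a=6$, $b=4$, $c=5$ opposite $\alpha,\beta,\gamma$) confirms that $\alpha = 2\beta$:

```python
import math

a, b, c = 6.0, 4.0, 5.0  # sides opposite alpha, beta, gamma
alpha = math.acos((b * b + c * c - a * a) / (2 * b * c))  # cos(alpha) = 1/8
beta = math.acos((a * a + c * c - b * b) / (2 * a * c))   # cos(beta) = 3/4
gamma = math.pi - alpha - beta
```

Indeed $\cos\alpha = \frac18 = 2\left(\frac34\right)^2 - 1 = \cos 2\beta$, so $\alpha = 2\beta$ exactly.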
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4287641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Inverting Gaussian convolution in weak convergence I have the following question regarding weak convergence of probability measures on $\mathbb{R}$. Suppose that $(\mu_n)_n$ is a sequence of such probability measures and that, for each $k\in\mathbb{N}$, a subsequence of the sequence $(\nu^{(k)}_n)_n$ converges weakly to some probability measure $\nu^{(k)}$. Here,
$$ \nu^{(k)}_n:=\mu_n\ast\mathcal{N}(0,k^{-1})$$
is the convolution of $\mu_n$ with a centered normal distribution with variance $k^{-1}$.
My question: Does this imply that a subsequence of the original sequence $(\mu_n)_n$ also converges weakly to some probability measure $\mu$?
| It is enough if we have convergence for one fixed value of $k$. By Prohorov's Theorem it is enough to show tightness of $(\mu_n)$. Let $X_n \sim \mu_n$ and $Y \sim N(0,\frac 1 k)$ be independent. Then, for any given $\epsilon >0$, there exists $M$ such that $P(|X_n+Y|>M) <\epsilon /2$ for all $n$. We can also choose $M$ such that $P(|Y|>M) <\epsilon/2$. Now $P(|X_n|>2M)\leq P(|X_n+Y|>M) +P(|Y|>M) <\epsilon$ for all $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4287842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Which topological properties are retained by Product Space $X\times X$ Given a space $X$ with some properties--say, for example, Hausdorff, compact, metrizable, etc.--does $X\times X$ retain all topological properties (forgive me for imprecise language, as I'm unsure of proper terminology)?
If not, what are some examples of properties not necessarily retained?
If so, how do we know this? Do we have to verify one-by-one, per property, or is there some generalized proof? For things such as Hausdorff and compact, I can think of proofs, but two properties is a very small subset of all possible properties.
| *
*If $X$ is normal (or $T_4$) $X \times X$ need not be normal.
*If $X$ is Lindelöf, $X \times X$ need not be.
*If $X$ is countably compact, $X \times X$ need not be (contrasting with the preservation of compactness, even for arbitrary products).
*If $X$ is ccc, $X \times X$ need not be.
What does get preserved: connectedness, local compactness, local connectedness, (local) path-connectedness, separation axioms $T_0$ to $T_{3\frac12}$, separability, second and first countability, to name some elementary ones.
The first need some counterexamples, and for the positive ones: we have to check them separately and many are straightforward to check.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4288021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $S_{N} = \sum_{i=0}^\infty \frac{i^N}{4^i}$ using recursion I am trying to obtain a formula for a summation problem under section (d) given in a solutions manual for "Data Structures and Algorithm Analysis in C - Mark Allen Weiss"; here's the screenshot.
As the PDF is protected, I could not download it.
Here's my attempt at it.
Let $S_{N} = \sum_{i=0}^\infty \frac{i^N}{4^i}$, then starting from $N = 4$ we have
$S_{0} = \frac{4}{3}, S_{1} = \frac{4}{9} , S_{2} = \sum_{i=0}^\infty \frac{2i + 1}{3*4^i}, S_{3} = \sum_{i=0}^\infty \frac{3i^2 + 3i + 1}{3*4^i},S_{4} = \sum_{i=0}^\infty \frac{4i^3 + 6i^2 + 4i + 1}{3*4^i}$.
Using recursion, we have
$S_{0} = \frac{4}{3}, S_{1} = \frac{1}{3}S_{0} , S_{2} = \frac{2S_{1} + S_{0}}{3} = \frac{5}{9}S_{0} ,
S_{3} = \frac{3S_{2} + 3S_{1} + S_{0}}{3} = \frac{11}{9}S_{0}, S_{4} = \frac{4S_{3} + 6S_{2} + 4S_{1} + S_{0}}{3} = \frac{95}{27}S_{0}$.
After cleaning up further we have
$$S_{0} = \frac{4}{3}, S_{1} = \frac{1}{3}S_{0} , S_{2} = \frac{5}{9}S_{0} ,
S_{3} = \frac{11}{9}S_{0}, S_{4} = \frac{95}{27}S_{0}$$
I don't see a pattern emerging to make a formula. I may be doing something wrong; any help is greatly appreciated.
| I agree with the answer by @ThomasAndrews in terms of the fact that this recursion is difficult to be solved directly, although I have a feeling it can be solved using finite difference calculus, since there is at least one solution to it as I demonstrate below.
Define the function
$$F_N(e^x)=\sum_{n=0}^{\infty}n^N e^{nx}=\frac{d^N}{d x^N }\left(\frac{1}{1-e^x}\right)$$
We will attempt to explicitly evaluate the derivatives. We get extremely lucky because of the occurrence of $e^x$ which massively reduces Faa di Bruno's formula for $f(x)=1/(1-x), g(x)=e^x$ in terms of the Bell polynomial to:
$$F_N(e^x)=\sum_{k=1}^N f^{(k)}(e^x)B_{N,k}(e^x,\ldots, e^x)$$
Note that since $B_{N,k}(x,...,x)=S(N,k)x^k$ - where $S(N,k)$ are the Stirling numbers of the 2nd kind - we can express the formula for $S_N(x)$ concisely as follows:
$$F_N(x)=\frac{1}{1-x}\sum_{k=1}^N k!S(N,k)\left(\frac{x}{1-x}\right)^k$$
Which shows that $\frac{3}{4}F_N(1/4)=\sum_{k=1}^N k!S(N,k)\left(\frac{1}{3}\right)^k$.
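This closed form can be checked in exact rational arithmetic against the question's values (a sketch; here `F_closed(N, x)` is $\frac{1}{1-x}\sum_{k=1}^N k!\,S(N,k)\left(\frac{x}{1-x}\right)^k$, valid for $N\ge 1$, and the question's $S_N$ equals $F_N(1/4)$):

```python
from fractions import Fraction
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling numbers of the second kind: S(n,k) = k S(n-1,k) + S(n-1,k-1)
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def F_closed(N, x):
    # valid for N >= 1 (for N = 0 the sum is empty but F_0(x) = 1/(1-x))
    r = x / (1 - x)
    return sum(factorial(k) * stirling2(N, k) * r ** k
               for k in range(1, N + 1)) / (1 - x)

S = [F_closed(N, Fraction(1, 4)) for N in range(1, 5)]
```

This reproduces exactly the values $S_1=\frac49$, $S_2=\frac{20}{27}$, $S_3=\frac{44}{27}$, $S_4=\frac{380}{81}$ implied by the question's recursion.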
EDIT: It turns out that there is a simple way to solve the recursion formula directly, by using generating functions. The recursion relation for arbitrary $x$ reads
$$\frac{1-x}{x}F_N(x)=\sum_{k=0}^{N-1}{N\choose k} F_k(x)$$
Now multiply by $y^N/N!$ and sum for $N\geq 1$. Define $S(y;x)=\sum_{N=1}^{\infty}F_N(x) y^N/N!$. Now it is easy to show that the recursion relation as posed leads to the following generating function
$$S(y;x)=F_0(x)\frac{\frac{x}{1-x}(e^y-1)}{1-\frac{x}{1-x}(e^y-1)}~~,~~ F_0(x)=(1-x)^{-1}$$
Now expand in powers of $e^y-1$, use the fact that $$(e^{y}-1)^k=k!\sum_{n=k}^{\infty}\frac{y^n}{n!}S(n,k)$$
and exchange the order of summation to obtain the desired result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4288188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Compute $\lim_{n\to\infty}\int_0^n\frac{f(\frac{x}{n})}{1+x^2}dx$ In the middle of an exercise I have to compute:
$$\lim_{n\to\infty}\int_0^n\frac{f(\frac{x}{n})}{1+x^2}dx$$
($f\in C[0,1]$) It is obvious that this has to be equal to:
$$\int_0^{\infty}\frac{f(0)}{1+x^2}dx=\frac{\pi}{2}f(0)$$
But I am not sure how to prove it in a rigurous way, since you can have divergence problems or some weird situations.
| This looks like the usual "split the integral and see what happens" argument applied to $\lim_{n\to\infty}\int_0^n\frac{f(\frac{x}{n})}{1+x^2}dx$, so let's do that.
For $0 < c < n$, let
$$I_n=\int_0^n\frac{f(\frac{x}{n})}{1+x^2}dx,\qquad J_n(c)=\int_0^{c}\frac{f(\frac{x}{n})}{1+x^2}dx,\qquad K_n(c)=\int_c^n\frac{f(\frac{x}{n})}{1+x^2}dx,$$
so $I_n=J_n(c)+K_n(c)$.
I want $K_n(c)$ to be small and $J_n(c)$ to be close to $\int_0^{c}\frac{f(0)}{1+x^2}dx$, so that letting $c\to\infty$ gives your result. Looking at $J_n(c)$, we also want $c/n \to 0$.
Let $M=\max_{0 \le x \le 1} |f(x)|$. Then
$\begin{array}\\
|K_n(c)|
&=\left|\int_c^n\frac{f(\frac{x}{n})}{1+x^2}dx\right|\\
&\le\int_c^n\frac{M}{1+x^2}dx\\
&=M\arctan(x)\Big|_{x=c}^n\\
&=M(\arctan(n)-\arctan(c))\\
&=M\arctan\left(\frac{n-c}{1+nc}\right)\\
&\le M\cdot\frac{n-c}{1+nc}\\
&\le M\cdot\frac{n-n^a}{1+n^{1+a}}
\qquad\text{if } c = n^a,\ 0 < a < 1\\
&\le M\cdot\frac{n}{n^{1+a}}
= \frac{M}{n^a},
\end{array}
$
so we can make $K_n(c)$ small by taking $c$ of order $n^a$.
For $J_n(c)$: since $f$ is continuous, $|f(z)-f(0)| \to 0$ as $z \to 0^+$. With $c = n^a$, the argument $x/n$ ranges over $[0, n^{a-1}]$, and $n^{a-1}\to 0$ because $0<a<1$. So if
$$g(n, a)=\max_{0\le z\le n^{a-1}}|f(z)-f(0)|,$$
then, for any $\epsilon > 0$, $g(n, a) < \epsilon$ for large enough $n$.
Therefore, for any $\epsilon > 0$ and large enough $n$,
$\begin{array}\\
\left|J_n(c)-\int_0^{c}\frac{f(0)}{1+x^2}dx\right|
&\le\int_0^{c}\frac{|f(\frac{x}{n})-f(0)|}{1+x^2}dx\\
&\le\int_0^{c}\frac{\epsilon}{1+x^2}dx\\
&\le \frac{\epsilon\pi}{2}.\\
\end{array}
$
Since $\epsilon$ is arbitrary, $J_n(n^a)-\int_0^{n^a}\frac{f(0)}{1+x^2}dx \to 0$. Combining the two estimates,
$$I_n \to \int_0^{\infty}\frac{f(0)}{1+x^2}dx = \frac{\pi}{2}f(0).$$
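The convergence can also be seen numerically (a sketch with the arbitrary choice $f(t)=\cos t$, so the limit is $\frac{\pi}{2}f(0)=\frac{\pi}{2}$; the error should shrink roughly like $1/n$):

```python
import math

def f(t):
    return math.cos(t)  # any continuous f on [0, 1]; here f(0) = 1

def I(n, steps=200_000):
    # composite Simpson's rule for the integral of f(x/n) / (1 + x^2) over [0, n]
    h = n / steps
    total = f(0.0) + f(1.0) / (1.0 + n * n)
    for j in range(1, steps):
        x = j * h
        total += (4 if j % 2 else 2) * f(x / n) / (1.0 + x * x)
    return total * h / 3.0

limit = math.pi / 2 * f(0.0)
```

For $n=500$ the quadrature value is within about $0.003$ of $\pi/2$, and it tightens further at $n=2000$.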
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4288395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Question of connection between $0 < | x - c | < δ$ and $c - δ < x < c + δ$, and $x ≠ c$ For a homework question of mine, it says:
$x$ is a solution of $0 < | x - c | < δ$ if and only if $c - δ < x < c + δ$ and $x ≠ c$.
I'm wondering if $x ≠ c$ creates the $ 0 < $ part, while $c - δ < x < c + δ$ creates the $| x - c | < δ$.
So, if $x = c$, does that mean $x$ cannot be a solution? Please help me with this dissection.
| The things you need to know are the following general facts:
*
*$0\leq\lvert x\rvert$
*$\lvert x\rvert < b \iff -b<x<b$
Now, to prove the forward direction, suppose $x$ is a solution of $0<\lvert x-c\rvert<\delta$. We have that
\begin{align*}
\lvert x-c\rvert<\delta&\implies-\delta<x-c<\delta\\
&\implies c-\delta<x<c+\delta,
\end{align*}
which proves the first part. Now, assume, to the contrary, that $x=c$. Then $\lvert x-c\rvert=\lvert 0\rvert=0$. Since $0\not<0$, we have reached a contradiction, and it must be that $x\neq c$.
To prove the backwards direction, assume $c-\delta<x<c+\delta$ and $x\neq c$. We have
\begin{align*}
c-\delta<x<c+\delta&\implies-\delta<x-c<\delta\\
&\implies \lvert x-c\rvert<\delta.
\end{align*}
Clearly, $0\leq \lvert x-c\rvert$. Since $x\neq c$, we have $\lvert x-c\rvert\neq 0$. Hence, $0< \lvert x-c\rvert$. Putting this together gives us $0< \lvert x-c\rvert<\delta$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4288548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In $l^p$ the map $x\longrightarrow \sum_{n=1}^\infty x_ny_n$ is well-defined. For the following I have proof ideas but they are uncertain:
Prove that for every $y\in l^q$ the map $x\longrightarrow \sum_{n=1}^\infty x_ny_n$ is well-defined, linear and continuous on $l^p$.
Proof ideas:
Linearity follows from definition: $f(\alpha x+\beta y)=\alpha f(x)+\beta f(y)$.
WELL-defined:
Given $l_p=\{(x_1,x_2,...,x_n,...): \sum_{k}|x_k|^p<\infty\}$ we know that $‖x‖_{l^p}=(\sum_{n=1}^\infty|x_n|^p)^{1/p}$.
To prove our map is well defined means that there cannot be $f(x^a)\neq f(x^b)$ where $x^a=x^b\in l^p$.
So $|f(x_a)-f(x_b)|=|\sum_{n=1}^\infty x^a_ny_n-\sum_{n=1}^\infty x^b_ny_n|$ for some $y\in l^q$.
Then $|f(x_a)-f(x_b)|=|\sum_{n=1}^\infty (x^a_n- x^b_n)y_n|=\sum_{n=1}^\infty |(x^a_n- x^b_n)||y_n|>0$ WLOG assuming $f(x_a)>f(x_b)$.
This means that there is at least one $n$ such that $x_n^a\neq x_n^b$ and so $x^a\neq x^b$. This means it is well defined.
CONTINUITY:
There is an $\epsilon$ $\delta$ criterion somewhere.... since $\sum_{n=1}^\infty x^a_ny_n\leq|||x^a_n| ||y_n||$
$|f(x_a)-f(x_b)|\leq|||x^a_n| ||y_n||-||x^b_n| ||y_n|||=...$ for some $y\in l^q$.
Thanks and regards,
| To prove that the map is well defined you have to prove convergence of the series. For this, recall Hölder's inequality: $\sum |x_ny_n|\leq \left(\sum |x_n|^{p}\right)^{1/p}\left(\sum |y_n|^{q}\right)^{1/q}$. The same inequality shows that your map is a bounded operator whose norm is at most $\left(\sum |y_n|^{q}\right)^{1/q}$. This implies continuity of the map.
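A small numerical illustration of the Hölder bound (a sketch with $p=3$, $q=3/2$, so $1/p+1/q=1$; the sequences are arbitrary choices truncated at 2000 terms):

```python
p, q = 3.0, 1.5                                  # conjugate exponents: 1/p + 1/q = 1
x = [1.0 / (n * n) for n in range(1, 2001)]      # a sequence in l^p
y = [(-1.0) ** n / n for n in range(1, 2001)]    # a sequence in l^q

pairing = abs(sum(a * b for a, b in zip(x, y)))  # |sum x_n y_n|
norm_x = sum(abs(a) ** p for a in x) ** (1 / p)
norm_y = sum(abs(b) ** q for b in y) ** (1 / q)
```

As expected, `pairing` (and even the sum of $|x_n y_n|$) stays below `norm_x * norm_y`.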
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4288683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
asymptotic approximation of Fresnel integrals with complex argument It turns out that SciPy's Fresnel values are wrong for complex arguments and large enough absolute value. I'm trying to fix that.
The implementation is based on Zhang/Jin, Computation of special functions, which in turn is based on Abramowitz/Stegun, Handbook of Mathematical Functions. There we find for the Fresnel S integral (7.3.10)
$$
S(z) = \frac{1}{2} - f(z) \cos\left(\frac{\pi}{2} z^2\right) - g(z) \sin\left(\frac{\pi}{2} z^2\right)
$$
for all $z$ with the auxiliary functions (7.3.5), (7.3.6)
$$
\begin{split}
f(z) &= \left[\frac{1}{2} - S(z) \right] \cos\left(\frac{\pi}{2} z^2\right) - \left[\frac{1}{2} - C(z) \right] \sin\left(\frac{\pi}{2} z^2\right),\\
g(z) &= \left[\frac{1}{2} - C(z) \right] \cos\left(\frac{\pi}{2} z^2\right) + \left[\frac{1}{2} - S(z) \right] \sin\left(\frac{\pi}{2} z^2\right).
\end{split}
$$
Computation of $S$ for large values is done via the asymptotic expansion of $f$ (7.3.27)
$$
\DeclareMathOperator\arg{arg}
\pi z f(z)\sim 1 + \sum_{m=1}^\infty (-1)^m \frac{1\cdot 3\cdot\cdots \cdot (4m-1)}{(\pi z^2)^{2m}}, \quad z\to\infty, |\arg(z)|<\frac{\pi}{2}.
$$
This is where I have problems understanding the approximation.
Consider $f(iz)$; for the asymptotic $f_a$, we have $f_a(iz) = -if_a(z)$. However no such thing is true for $f$ itself. Numerical computation via the representations
$$
\DeclareMathOperator\erf{erf}
\begin{split}
S(z) &= \frac{1 + i}{4} \left[\erf\left(\frac{1 + i}{2} \sqrt{\pi} z\right) - i \erf\left(\frac{1 - i}{2} \sqrt{\pi} z\right)\right]\\
C(z) &= \frac{1 - i}{4} \left[\erf\left(\frac{1 + i}{2} \sqrt{\pi} z\right) + i \erf\left(\frac{1 - i}{2} \sqrt{\pi} z\right)\right]
\end{split}
$$
shows that the approximation is incorrect for everything off of the real axis, but the issue here could be numerical instability in the $\erf$ representation too.
My current guess is that the above infinite sum is valid only in the area $|\arg{z}|<\pi/4$, which would already change how we compute the Fresnel integral values significantly.
To finish things off, here's a cplot of $f$ (for smaller $|z|$):
| I will consider the function $\operatorname{f}(z)$; the treatment of $\operatorname{g}(z)$ is analogous. By http://dlmf.nist.gov/7.12.ii, we have
$$
\operatorname{f}(z) = \frac{1}{{\pi z}}\sum\limits_{m = 0}^{N - 1} {( - 1)^m \left( {\frac{1}{2}} \right)_{2m} \frac{1}{{(\pi z^2 /2)^{2m} }}} + R_N^{(\operatorname{f})} (z)
$$
where
\begin{align*}
R_N^{(\operatorname{f})} (z) & = \frac{{( - 1)^N }}{{\pi \sqrt 2 }}\int_0^{ + \infty } {\frac{{e^{ - \pi z^2 t/2} t^{2N - 1/2} }}{{1 + t^2 }}dt} \\ & = \frac{1}{{\pi z}}( - 1)^N \left( {\frac{1}{2}} \right)_{2N} \frac{1}{{(\pi z^2 /2)^{2N} }}\frac{1}{{\Gamma \left( {2N + \frac{1}{2}} \right)}}\int_0^{ + \infty } {\frac{{e^{ - s} s^{2N - 1/2} }}{{1 + s^2 /(\pi z^2 /2)^2 }}ds}
\\ &
= \frac{1}{{\pi z}}( - 1)^N \left( {\frac{1}{2}} \right)_{2N} \frac{1}{{(\pi z^2 /2)^{2N} }}\Pi _{2N + 1/2} (\pi z^2 /2),
\end{align*}
provided that $|\arg z|<\frac{\pi}{4}$ and $N\geq 0$. Here $\Pi_p(w)$ denotes one of Dingle's basic terminants:
$$
\Pi _p (w) = \frac{1}{{\Gamma (p)}}\int_0^{ + \infty } {\frac{{e^{ - s} s^{p - 1} }}{{1 + (s/w)^2 }}ds}
$$
for $|\arg w|<\frac{\pi}{2}$ and by analytic continuation elswhere. Using the expression for $R_N^{(\operatorname{f})} (z)$ in terms of this terminant, we can extend $R_N^{(\operatorname{f})} (z)$ to the universal covering of $\mathbb C \setminus \left\{ 0\right\}$. Now employing the estimates for the basic terminant established in https://doi.org/10.1007/s10440-017-0099-0, we obtain the bound
\begin{align*}
\left| {R_N^{(\operatorname{f})} (z)} \right| \le &\; \left| {\frac{1}{{\pi z}}( - 1)^N \left( {\frac{1}{2}} \right)_{2N} \frac{1}{{(\pi z^2 /2)^{2N} }}} \right| \\ & \times \begin{cases} 1 & \text{ if } \; \left|\arg z\right| \leq \frac{\pi}{8}, \\ \min\!\Big(\left|\csc ( 4\arg z)\right|,1 + \cfrac{1}{2}\chi(2N+1/2)\Big) & \text{ if } \; \frac{\pi}{8} < \left|\arg z\right| \leq \frac{\pi}{4}, \\ \cfrac{\sqrt {2\pi (2N + 1/2)} }{2\left| {\sin (2\arg z)} \right|^{2N+1/2} } + 1 + \cfrac{1}{2}\chi (2N +1/2) & \text{ if } \; \frac{\pi}{4} < \left|\arg z\right| < \frac{\pi}{2}. \end{cases}
\end{align*}
Here $\chi(p) =\sqrt{\pi}\Gamma(p/2+1)/\Gamma(p/2+1/2)$ for $p>0$. It is seen that the asymptotic expansion of $\operatorname{f}(z)$ is valid in every closed sub-sector of $|\arg z|<\frac{\pi}{2}$ in the sense of Poincaré. The Stokes lines are $\arg z =\pm \frac{\pi}{4}$, where terms are switched on that remain exponentially small compared to any negative power of $z$ as long as we stay away from the rays $\arg z =\pm \frac{\pi}{2}$ (the anti-Stokes lines).
To obtain a better result and reveal the exponentially small terms, we can use the functional equation
$$
\Pi _p (w) = \pm \pi i\frac{{e^{ \mp \frac{\pi }{2}ip} }}{{\Gamma (p)}}w^p e^{ \pm iw} + \Pi _p (we^{ \mp \pi i} )
$$
where $p>0$ and $w$ is any element of the universal covering of $\mathbb C \setminus \left\{ 0\right\}$ (the Riemann surface of the logarithm). With this functional equation, we find, after some algebra,
$$
R_N^{(\operatorname{f})} (ze^{ \mp \frac{\pi }{2}i} ) = \pm iR_N^{(\operatorname{f})} (z) + \frac{{1 \mp i}}{2}e^{ \pm \frac{\pi }{2}iz^2 } .
$$
This result is valid for all $N\geq 0$ and $z$ on the universal covering of $\mathbb C \setminus \left\{ 0\right\}$. You can see that if we omit the second term (which is exponentially small when $
0 < \pm \arg z < \frac{\pi}{2}$) then, with $N=0$, we get the false result
$$
\operatorname{f}(ze^{ \mp \frac{\pi }{2}i} ) = \pm i\operatorname{f}(z).
$$
See also http://dlmf.nist.gov/7.4.
In summary, use
$$
\operatorname{f}(z) \sim \frac{1}{{\pi z}}\sum\limits_{m = 0}^\infty {( - 1)^m \left( {\frac{1}{2}} \right)_{2m} \frac{1}{{(\pi z^2 /2)^{2m} }}}
$$
when $\left| {\arg z} \right| \le \frac{\pi }{4}$, and use
$$
\operatorname{f}(z) \sim \frac{{1 \pm i}}{2}e^{ \pm \frac{\pi }{2}iz^2 } + \frac{1}{{\pi z}}\sum\limits_{m = 0}^\infty {( - 1)^m \left( {\frac{1}{2}} \right)_{2m} \frac{1}{{(\pi z^2 /2)^{2m} }}}
$$
when $\frac{\pi }{4} < \pm \arg z \le \frac{{3\pi }}{4}$. Of course, by relying on the symmetry relation http://dlmf.nist.gov/7.4.E8, we can assume $|\arg z|\leq \frac{\pi}{2}$.
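For real $z$, the expansion can be cross-checked numerically against $\operatorname{f}$ computed directly from its definition via the Fresnel integrals (a sketch; $S$ and $C$ are evaluated by Simpson's rule and the series is truncated at $M=5$ terms):

```python
import math

def fresnel_SC(z, steps=100_000):
    # composite Simpson for S(z) = int_0^z sin(pi t^2/2) dt and C(z) likewise
    h = z / steps
    end = math.pi * z * z / 2
    s = math.sin(0.0) + math.sin(end)
    c = math.cos(0.0) + math.cos(end)
    for j in range(1, steps):
        a = math.pi * (j * h) ** 2 / 2
        w = 4 if j % 2 else 2
        s += w * math.sin(a)
        c += w * math.cos(a)
    return s * h / 3, c * h / 3

def f_from_definition(z):
    # f(z) = [1/2 - S(z)] cos(pi z^2/2) - [1/2 - C(z)] sin(pi z^2/2)
    S, C = fresnel_SC(z)
    a = math.pi * z * z / 2
    return (0.5 - S) * math.cos(a) - (0.5 - C) * math.sin(a)

def f_asymptotic(z, M=5):
    # (1/(pi z)) * sum_{m=0}^{M-1} (-1)^m (1/2)_{2m} / (pi z^2/2)^{2m}
    w = math.pi * z * z / 2
    total, poch = 0.0, 1.0  # poch tracks the Pochhammer symbol (1/2)_{2m}
    for m in range(M):
        total += (-1) ** m * poch / w ** (2 * m)
        poch *= (0.5 + 2 * m) * (1.5 + 2 * m)
    return total / (math.pi * z)
```

Already at $z=3$ the truncated series matches the quadrature value $\operatorname{f}(3)\approx 0.1057$ to several digits.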
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4288912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Paths on a grid: number of routes problem The problem is, how many routes can one take to get to point A to B without backtracking? I know that I can solve this by assigning numbers to the intersections and that the intersections add up. However, I got stumped at certain parts because of the two rectangles. I might have also gotten some parts wrong. Any help is appreciated. Thank you!
| You have done everything correctly so far. You just missed a couple of intersections, circled below. Keep filling them in as you have done so far (I did one more for you). The bottom right intersection will contain your answer when you are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4289080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why ${(0^+)}^{+ \infty}$ is well defined and not ${(0^+)}^{- \infty}$? I want to compute the limit of $|x|^{\frac{1}{x}}$ when $x$ tends to $0$.
When $x$ tends to $0$ from above, I get ${(0^+)}^{\frac{1}{0^+}}= {(0^+)}^{+\infty}=0$. However, when I approach $0$ from below I get $ {(0^+)}^{\frac{1}{0^-}} = {(0^+)}^{-\infty}$ and I can't give a value for this. (Why?)
However, I was able to find a value for this limit by going through the exponential definition of $x$ as follow :
$$\lim_{x \to 0^-} |x|^{\frac{1}{x}} =\lim_{x \to 0^-} e^{\frac{1}{x} \ln(|x|)}=\lim_{x \to 0^-} e^{\frac{\ln(|x|)}{x} } = e^{\lim_{x \to 0^-}\frac{\ln(|x|)}{x} }= e^{\frac{-\infty}{0^-}}=e^{+\infty}=+\infty$$
So, my question is, why am I forced to go through this definition? I might have been able to give a value to ${(0^+)}^{- \infty}$
| "show me how to formalize my expressions please". Maybe you mean like this?
Assume $a_n\to 0^+$ and $b_n\to -\infty$ for $n\to \infty$. Note that $a_n>0$ for large enough $n$, so $a_n^{b_n}$ is well-defined for large enough $n$. Then $a_n^{b_n}\to \infty$. And you basically did a proof yourself, starting with
$$
a_n^{b_n}
= e^{b_n\log a_n}.
$$
Now, $\log a_n \to -\infty$ by continuity, so $b_n\log a_n\to+\infty$, which gives
$$
a_n^{b_n}
= e^{b_n\log a_n}
\to +\infty.
$$
In short, $(0^+)^{-\infty} = +\infty$.
Prerequisites for this proof are
$$
(-\infty)\cdot(-\infty) = +\infty \quad\text{and}\quad c^{+\infty} = +\infty,
$$
where $c>1$ is a constant.
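A numerical illustration (a sketch; this is not a proof, it just shows the blow-up): with $a_n = 1/n \to 0^+$ and $b_n = -n \to -\infty$ we get $a_n^{b_n} = n^n$, and likewise $|x|^{1/x}$ explodes as $x \to 0^-$:

```python
# a_n^{b_n} = (1/n)^(-n) = n^n for a few n
seq = [(1.0 / n) ** (-n) for n in (2, 5, 10, 20)]

# |x|^(1/x) for x approaching 0 from below
vals = [abs(x) ** (1.0 / x) for x in (-0.5, -0.1, -0.05, -0.02)]
```

Both lists grow without bound, consistent with $(0^+)^{-\infty}=+\infty$.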
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4289260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to calculate the line integral of a vector field over a parabola I am trying to answer a question about line integrals, I have had a go at it but I am not sure where I am supposed to incorporate the line integral into my solution.
$$ \mathbf{V} = xy\,\hat{\mathbf{x}} - xy^2\,\hat{\mathbf{y}}$$
$$ \mathrm{d}\mathbf{l} = \hat{\mathbf{x}}\mathrm{d}x + \hat{\mathbf{y}}\mathrm{d}y $$
$$ \int_C\!\mathbf{V}\cdot\mathrm{d}\mathbf{l} = \int\!xy\,\mathrm{d}x - \int\!xy^2\,\mathrm{d}y = \left[ \frac{x^2y}{2}\right ]_?^? - \left[ \frac{xy^3}{3} \right]_?^? $$
I have a feeling that the parabola in question must come into play in the limits of the integrals, although I dont know how they are supposed to. The parabola in question is $y = \frac{x^2}{3}$ and the coordinates at which the line integral is supposed to go over are $a=(0,0)$ and $b=(3,3)$.
| *
*One could also use a parametrization.
Set $x(t)=t$ and $y(t)= t^2/3$.
*
*The parabola corresponds to the endpoint of the position vector :
$ \vec r (t)= t\vec i + \frac {t^2}{3} \vec j$.
*
*This allows to calculate the differential $d\vec r$:
$d\vec{\mathbf{r}}(t)=\vec{\mathbf{r\space '}}(t) dt = (1\vec i + (2t/3) \vec j) dt$
*
*With the parametrization chosen, we get $\vec{\mathbf{V}}(t)= \left(t\cdot \frac{t^2}{3}\right)\vec i - \left(t\cdot \left(\frac{t^2}{3}\right)^2\right)\vec j$
*One can then apply the definition : vector line integral $= \int_C \vec V\cdot d\vec r= \int_{t_1}^{t_2} \vec V(t)\cdot\vec r\,^{\prime}(t)\, dt$ ,
with, here, $t_1 = 0$ and $t_2 =3$.
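As a numerical cross-check (a sketch; Simpson's rule applied to the parametrized integrand $\vec V(t)\cdot\vec r\,^{\prime}(t) = \frac{t^3}{3}-\frac{2t^6}{27}$, whose exact integral over $[0,3]$ works out to $\frac{27}{4}-\frac{162}{7}=-\frac{459}{28}$):

```python
def integrand(t):
    x, y = t, t * t / 3.0        # r(t) = (t, t^2/3), 0 <= t <= 3
    vx, vy = x * y, -x * y * y   # V = (xy, -x y^2)
    dx, dy = 1.0, 2.0 * t / 3.0  # r'(t)
    return vx * dx + vy * dy     # = t^3/3 - 2 t^6/27

def simpson(g, a, b, steps=10_000):
    # composite Simpson's rule
    h = (b - a) / steps
    total = g(a) + g(b)
    for j in range(1, steps):
        total += (4 if j % 2 else 2) * g(a + j * h)
    return total * h / 3.0

result = simpson(integrand, 0.0, 3.0)
exact = 27.0 / 4.0 - 162.0 / 7.0   # = -459/28 ≈ -16.3929
```

The quadrature reproduces the closed-form value to high precision.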
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4289381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that $\sqrt{\frac{1+\sqrt5}2}$ is not in $\mathbb Q(\sqrt5)$ Prove that
$$\sqrt{\frac{1+\sqrt5}2} \not \in \mathbb Q(\sqrt5)$$
I think I should start by writing $\sqrt{\frac{1+\sqrt5}2}=p+q\sqrt5$, but I don't see how to continue from here.
| Let $K = \mathbb Q(\frac{1+\sqrt{5}}{2})$. Note that $N_{K/\mathbb Q}(\frac{1+\sqrt{5}}{2}) = -1$. If $\alpha = \sqrt{\frac{1+\sqrt{5}}{2}} \in K$, then
$$-1 = N_{K/\mathbb Q}(\alpha^2) = N_{K/\mathbb Q}(\alpha)^2.$$
Since $N_{K/\mathbb{Q}}(\alpha) \in \mathbb Q$, this is a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4289655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How do I create a bezier spline section out of many points? I am building a program where I need to simplify N number of points into a single section of a bezier spline, ie describe them using just 2 end points and 2 control points. Naturally this will lead to some loss of information, but I would like the spline section to be as accurate to the original points as possible.
The input will be points that are already on a reasonably smooth curve.
Any answer is appreciated! (That being said, I'm not very good with equations, so an explanation in words would be even more so).
| For one thing: that is not a spline interpolation; it is just a Bézier curve. Those are simply polynomials in specific bases.
A spline interpolation would consist of piecewise smooth functions so that the whole function has smooth differentiability to some degree.
So what you actually want: Find a cubic Bézier curve that describes your points optimally. But then, it does not matter that much if you have Bézier curves or just a regular monomial base.
What you can then do is simply regression:
Take some $t_1,\ldots,t_N$ and the model
$$ x(t) = a+bt+ct^2+dt^3 $$
$$ y(t) = e+ft+gt^2+ht^3 $$
or for Bézier representation
$$ x(t) = aB_{0,3}(t)+bB_{1,3}(t)+cB_{2,3}(t)+dB_{3,3}(t) $$
$$ y(t) = eB_{0,3}(t)+fB_{1,3}(t)+gB_{2,3}(t)+hB_{3,3}(t) $$
Then consider the quadratic error
$$ E(a,b,c,d,e,f,g,h,t_1,\ldots,t_N) = \sum_{j=1}^N (x(t_j)-P_x^j)^2 + (y(t_j)-P_y^j)^2 $$
Then differentiate and equate to $0$ (easy if $t_1,\ldots,t_N$ are fixed, harder if they are variable, as that will then involve finding multivariate roots, which might be solvable using Groebner bases) to find the parameters that minimize this error.
You can then take the minimum and the maximum of the $t_i$ and transform $x(t),y(t)$ to $[0,1]$.
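A minimal sketch of the fixed-$t_j$ regression in the monomial basis (converting the fitted cubic to Bézier control points is then just a change of basis; the sample data, generated from a known cubic, and the plain Gauss–Jordan solver are illustrative choices, not part of the answer above). The normal equations $A^TA\,x = A^Tp$ are assembled and solved directly:

```python
def fit_cubic(ts, ps):
    # least squares for p(t) = a + b t + c t^2 + d t^3 via normal equations
    A = [[1.0, t, t * t, t ** 3] for t in ts]
    ATA = [[sum(row[r] * row[c] for row in A) for c in range(4)] for r in range(4)]
    ATp = [sum(A[i][r] * ps[i] for i in range(len(ts))) for r in range(4)]
    M = [ATA[r][:] + [ATp[r]] for r in range(4)]
    for col in range(4):  # Gauss-Jordan elimination with partial pivoting
        piv = max(range(col, 4), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(4):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [u - f * v for u, v in zip(M[r], M[col])]
    return [M[r][4] / M[r][r] for r in range(4)]

ts = [i / 10 for i in range(11)]
xs = [2.0 - t + 0.5 * t * t + 3.0 * t ** 3 for t in ts]  # samples of a known cubic
coeffs = fit_cubic(ts, xs)  # should recover [2, -1, 0.5, 3]
```

With noiseless samples the fit recovers the generating coefficients essentially exactly; with noisy points it returns the least-squares cubic.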
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4289772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
how can I solve this integral $ \int_{0}^{\infty}\frac{(v - \lambda)^2}\lambda\exp\left(\frac{-v}{\lambda}\right) \,\mathrm dv$ $$\int_{0}^{\infty}\frac{(v - \lambda)^2}\lambda\exp\left(\frac{-v}{\lambda}\right)
\,\mathrm dv$$
$$t=\frac{-v}{\lambda}$$
$$dt=\frac{-dv}{\lambda}$$
$$=-\lambda\int_{-\infty}^{0}\frac{(-\lambda t - \lambda)^2}\lambda\exp(t)
\,\mathrm dt$$
How should I do to continue?
| First, it seems like, if you are trying to evaluate $\int\limits_{-\infty}^{0}\frac{(v-\lambda)^2}{\lambda}\cdot e^{\frac{-v}{\lambda}}\partial v$, then the answer is that it simply diverges (just as it would if the bounds of integration were $(-\infty, +\infty)$), since $e^{\frac{-v}{\lambda}} \to +\infty$ as $v \to -\infty$. Moreover, in your $t$-substitution, you have forgotten to change the integration bounds.
Now, let's assume that you are looking for $\int\limits_{0}^{+\infty}\frac{(v-\lambda)^2}{\lambda}\cdot e^{\frac{-v}{\lambda}}\partial v$. Then, if (as correctly mentioned by @RAHUL) $\lambda \in \mathbb{R}^+$:
$$
\int\limits_{0}^{+\infty}\frac{(v-\lambda)^2}{\lambda}\cdot e^{\frac{-v}{\lambda}}\partial v~=~\left[t = \frac{-v}{\lambda},~\partial t = -\frac{\partial v}{\lambda} \biggm\vert~v = -\lambda t,~\partial v = -\lambda\partial t\right] = \\=(\text{Since we swap zero to be the upper bound, we have to change the sign}) = -1\cdot(-\lambda)\cdot\\\cdot\int\limits_{-\infty}^{0}\frac{(-\lambda t-\lambda)^2}{\lambda}\cdot e^{t}\partial t = \lambda^2 \int\limits_{-\infty}^{0}(t+1)^2 e^{t}\partial t = \lambda^2 \cdot\\\cdot\left(\int\limits_{-\infty}^{0}e^t \cdot t^2~\partial t + 2\int\limits_{-\infty}^{0}e^t \cdot t~\partial t + \int\limits_{-\infty}^{0}e^t\partial t\right) = \left[\text{Integrating by parts}\right] = \\=\lambda^2\left(e^tt^2\biggm\vert^{0}_{\lim\limits_{n\to-\infty}(n)} - \int\limits_{-\infty}^{0}e^t \cdot 2t~\partial t + 2\int\limits_{-\infty}^{0}e^t \cdot t~\partial t + \int\limits_{-\infty}^{0}e^t\partial t \right) =\\= [\text{As }e^t \text{ has a higher convergence rate to }0\text{ than the divergence rate of } t^2 \text{ at }-\infty]=\\=\lambda^2\left([0 - 0] + \int\limits_{-\infty}^0 e^t \partial t \right) = \lambda^2 \cdot \lim\limits_{n\to-\infty}\left(e^t\biggm\vert^{0}_{n}\right) = \lambda^2 \cdot (1 - 0) = \lambda^2.
$$
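The value $\lambda^2$ for the $[0,+\infty)$ version can be confirmed numerically; this is a hypothetical sketch (a plain midpoint rule, with the truncation point chosen so the exponential tail is negligible), not part of the derivation:

```python
import math

lam = 2.5  # arbitrary positive test value for lambda

def integrand(v):
    return (v - lam) ** 2 / lam * math.exp(-v / lam)

# midpoint rule on [0, L]; the integrand decays like e^{-v/lam},
# so the tail beyond L = 60 is on the order of 1e-8 and can be ignored
L, N = 60.0, 300_000
h = L / N
approx = h * sum(integrand((k + 0.5) * h) for k in range(N))

print(abs(approx - lam ** 2))  # close to zero: the integral equals lambda^2
```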
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4289978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
New trigamma identity for $\Psi_1(\frac3{20})+6\,\Psi_1(\frac15)+10\,\Psi_1(\frac25)-\Psi_1(\frac1{20})$ I play with Maple, and I find this relation for the trigamma function:
$$\begin{align}
\Psi_1\left({\frac{3}{20}}\right)+6\,\Psi_1\left(\frac15\right)+
10\,\Psi_1\left(\frac25 \right)-\Psi_1\left(\frac1{20}\right)
&=-96\,{G}-{\frac{24\,
\pi^2\sqrt{5}}{5}}+16\,\pi^2\\[0.5em]
&\quad-2\pi^2{\frac{15+\sqrt{5}}{\sqrt{10+2\sqrt{5}}}}
\end{align}$$
where $G$ is the Catalan's constant.
But I don't know if this relation is well-known or not.
Please suggest how to prove it.
Thanks.
| Identities involving linear combinations of trigamma functions at rational arguments can be proved semi-automatically. The arguments here are of the form $\frac k{20}$ for $1\le k\le 19$, so denote $a_k=\psi_1(k/20)$ and write down some identities:
$$a_k+a_{20-k}=\frac{\pi^2}{\sin^2k\pi/20},1\le k\le 10\tag{reflection}$$
$$a_5=\pi^2+8G\tag{special value}$$
$$a_k+a_{k+10}=4a_{2k},1\le k\le9\tag{duplication}$$
$$a_k+a_{k+5}+a_{k+10}+a_{k+15}=16a_{4k},1\le k\le4\tag{quadruplication}$$
$$a_k+a_{k+4}+a_{k+8}+a_{k+12}+a_{k+16}=25a_{5k},1\le k\le3\tag{quintuplication}$$
Treat the $a_k$ as variables and convert the identities into rows of a matrix equation $(a_1,a_2,\dots,a_{19},b)$, so e.g. reflection at $k=1$ becomes
$$\left[\begin{array}{ccccccccccccccccccc|c}1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&\frac{\pi^2}{\sin^2\pi/20}\end{array}\right]$$
Now drop the last ($\mathbf b$) column and see if the desired linear combination $\mathbf c$ – in this case $(-1,0,1,6,0,0,0,10,0,0,0,0,0,0,0,0,0,0,0)$ – is in the row space of the remaining matrix $\mathbf A$, which can be done by trying to solve $\mathbf A^T\mathbf x=\mathbf c^T$. For the question's $\mathbf c$ there is a solution:
$$\mathbf x=\left(\color{blue}{0, 0,\frac12, 0, 0, 2, 0, 2,\frac12, 0}, -12, \color{blue}{-\frac12, -2,\frac12, -2, 0, 0, 0, 0, 0}, 0, 0, 0, 0,\color{blue}{-\frac12, 0, 0}\right)^T$$
Then $\mathbf x\cdot\mathbf b$ gives an explicit expression for the trigamma linear combination:
$$\frac12\frac{\pi^2}{\sin^23\pi/20}+2\frac{\pi^2}{\sin^26\pi/20}+2\frac{\pi^2}{\sin^28\pi/20}+\frac12\frac{\pi^2}{\sin^29\pi/20}-12(\pi^2+8G)$$
Simplifying shows this is equal to $-96G-\frac{24\pi^2\sqrt5}5+16\pi^2-2\pi^2\frac{15+\sqrt5}{\sqrt{10+2\sqrt5}}$ as suspected. (The simplification I get from Mathematica is $-96G+\pi^2(16-24/\sqrt5-2\sqrt{25-2\sqrt5})$).
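The identity can also be checked numerically. The sketch below (hypothetical helper names) evaluates $\psi_1$ from its series $\sum_{n\ge0}(x+n)^{-2}$ with an Euler-Maclaurin tail correction, and uses the standard decimal value of Catalan's constant:

```python
import math

def trigamma(x, N=100_000):
    # psi_1(x) = sum_{n>=0} 1/(x+n)^2; approximate the tail by Euler-Maclaurin:
    # sum_{n>=N} (x+n)^{-2} ~ 1/(x+N) + 1/(2(x+N)^2) + 1/(6(x+N)^3)
    s = sum(1.0 / (x + n) ** 2 for n in range(N))
    y = x + N
    return s + 1.0 / y + 0.5 / y ** 2 + 1.0 / (6.0 * y ** 3)

G = 0.9159655941772190  # Catalan's constant
pi, r5 = math.pi, math.sqrt(5.0)

lhs = (trigamma(3 / 20) + 6 * trigamma(1 / 5)
       + 10 * trigamma(2 / 5) - trigamma(1 / 20))
rhs = (-96 * G - 24 * pi ** 2 * r5 / 5 + 16 * pi ** 2
       - 2 * pi ** 2 * (15 + r5) / math.sqrt(10 + 2 * r5))

print(abs(lhs - rhs))  # agreement to roughly double precision
```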
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4290297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
An intuitive approach to solving a question of geometry
In the square $ABCD$ of side $6$, points $P$ and $Q$ move along sides $AB$ and $CB$ so that $\overline{PB}=\overline{BQ}=x$. Consider the segment $OQ$ parallel to $AB$. Which of the following expressions describes the measure of the length of $\overline{OQ}$ as a function of $x$?
My approach: I named, for simplicity, $\overline{OQ}=y$. I have considered that $\text{area}(\triangle PBC)=3x$; $\text{area}(PBOQ)=\frac{(x+y)\cdot x}{2}$, $\text{area}(\triangle OQC)=\frac{y\cdot (6-x)}{2}$. Hence
$$\text{area}(\triangle OQC)=\frac{y\cdot (6-x)}{2}=3x-\frac{(x+y)\cdot x}{2}$$
Solving for $y$ as a function of $x$, we have:
$$y=f(x)=x-\frac 16 x^2.$$
Now the question is the following: given that my 18-year-old students will have to deal with simulations from the National Institute for the Evaluation of the Education and Training System, comprising 41 questions in an hour, how will they be able to solve the problem quickly, as required? I would not have succeeded without using pen and paper.
Is there an alternative proof instead of my approach?
| Use similar triangles $\triangle COQ\sim \triangle CPB$, therefore $$\frac{\overline{OQ}}{\overline{PB}} = \frac{\overline{CQ}}{\overline{CB}} = \frac{6-x}{6}$$
So $$\overline{OQ} = \frac{x(6-x)}{6}$$
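Both derivations can be cross-checked with coordinates. This is a hypothetical sketch: place $A=(0,0)$, $B=(s,0)$, $C=(s,s)$, and take $O$ as the point of $PC$ at the height of $Q$:

```python
def oq_length(x, s=6.0):
    # P on AB with PB = x, Q on BC with BQ = x, O on segment PC at height x
    P = (s - x, 0.0)
    C = (s, s)
    t = x / s                        # fraction of PC where the height equals x
    Ox = P[0] + t * (C[0] - P[0])    # x-coordinate of O
    return s - Ox                    # Q = (s, x), and OQ is horizontal

for x in (1.0, 2.5, 4.0):
    assert abs(oq_length(x) - x * (6 - x) / 6) < 1e-12
```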
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4290446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How do we know that this function is multivalued? So I have an integral $$ \int_{0}^{2\pi} \frac{1}{2}\left(e^{e^{ix}} + e^{e^{-ix}}\right) \text{ d}x$$
I am told that I am able to substitute $z=e^{ix}$ into this and convert it into a contour integral. This integral would have a pole at $0$, and a branch cut about the negative real number line, and we would make a counterclockwise keyhole contour about the unit circle to solve this.
When I directly look at what Mathematica says as the integral's indefinite antiderivative $$\frac{-i\operatorname{Ei}(e^{ix})+i\operatorname{Ei}(e^{-ix})}{2}+C$$ I can see where some of these come from. $\operatorname{Ei}(z)$ is undefined at $z=0$, which gives us a pole at $0$. $\operatorname{Ei}(z)$ also approaches $0$ as we approach complex infinity, giving us a pole at complex infinity.
Furthermore, despite both $\operatorname{Ei}(x)$ and $e^{ix}$ not being multivalued, for complex $z$, $\operatorname{Ei}(z)$ is indeed multivalued. Since $z=e^{ix}$ will never be $0$, our contour will not hit the pole as well.
However, this raises a few questions. First off, how do we know all of this by just looking at the original integrand $\frac{1}{2}\left(e^{e^{ix}} + e^{e^{-ix}}\right)$? When we are presented with the integral, we don't know its indefinite antiderivative, so how would we know what its poles are and if it is multivalued or not? For all I know, $e^z$ is always single valued.
Moreover, why is the branch cut along the negative real axis? The poles are 0 and complex infinity. Why is the contour from 0 to negative real infinity instead?
| First of all, there is no multivalued function here, so there is no branch point or branch cut to worry about; those matter only for multivalued functions. This is a simple integral
$\displaystyle \int_{0}^{2\pi}\frac{1}{2}\left(e^{e^{\iota\theta}}+e^{e^{-\iota\theta}}\right)d\theta$
To solve this integral, put $e^{\iota\theta}=z$, so that $d\theta=\dfrac{dz}{\iota z}$ and the path becomes the unit circle. Now you have a simple closed contour and can integrate easily
using Cauchy's residue theorem:
$\displaystyle \int_{0}^{2\pi}\frac{1}{2}\left(e^{e^{\iota\theta}}+e^{e^{-\iota\theta}}\right)d\theta=\oint_{|z|=1}\frac{1}{2}\left(e^z +e^{\frac{1}{z}}\right)\frac{dz}{\iota z}$
$\displaystyle=\frac{1}{2}\cdot 2\pi\iota\left(\textbf{Res}_{z=0}\frac{e^{z}}{\iota z}+\textbf{Res}_{z=0}\frac{e^{\frac1z}}{\iota z}\right)=\frac{1}{2}\cdot 2\pi\iota\cdot\left(\frac{1}{\iota}+\frac{1}{\iota}\right)=2\pi$
Here $\textbf{Res}_{z=0}\frac{e^{z}}{\iota z}=\frac1\iota$ since $e^z\to1$ as $z\to0$, and $\textbf{Res}_{z=0}\frac{e^{1/z}}{\iota z}=\frac1\iota$ from the coefficient of $z^{-1}$ in $\frac{1}{\iota z}\sum_{n\ge0}\frac{z^{-n}}{n!}$.
You can do this using the antiderivative too, but then you have to integrate along the line.
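Independently of the contour method, the integral can be checked numerically (a hypothetical sketch): for real $\theta$ the integrand equals $\operatorname{Re} e^{e^{\iota\theta}} = e^{\cos\theta}\cos(\sin\theta)$, and the trapezoidal rule converges extremely fast on periodic integrands:

```python
import math

N = 2000
h = 2 * math.pi / N
# on a periodic function the trapezoidal rule reduces to a plain Riemann sum
total = h * sum(math.exp(math.cos(k * h)) * math.cos(math.sin(k * h))
                for k in range(N))
print(total)  # essentially 2*pi
```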
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4290599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Equality on union of an increasing sequence of subspaces of vector space and second dual Let $V$ be a normed vector space over $\mathbb{C}$ and $V_n$ be an increasing sequence $(V_n \subset V_{n+1})$ of subspaces of $V$. Does the following equality hold
in $V^{**}$
$\left (\overline {\bigcup_n V_n}\right)^{**}= \overline {\bigcup_n V_n^{**}}$
Basically we need to show that $\bigcup_n V_n^{**}$ is dense in $\left (\overline {\bigcup_n V_n}\right)^{**}$. Any ideas?
| This is not true. To see this take $V=c_0$ and let $e_1, e_2, \ldots$, where $e_i= (0,0,...,0,1,0,...)$. Denote $V_n =\mbox{span}\{e_1, ..., e_n \}.$ It is easy to see that $$\overline{\bigcup V_n } =c_0 $$ and hence $$\left(\overline{\bigcup V_n } \right)^{**} =\ell^{\infty} .$$
But since the $V_n $ are finite dimensional, we have $$(V_n , ||\cdot ||_{\infty})^{**} =(V_n , ||\cdot ||_{\infty} ),$$ in short $$V_n^{**} = V_n, $$ since on finite dimensional spaces all norms are equivalent. But then $$\overline{\bigcup V_n^{**} } =\overline{\bigcup V_n } \neq \ell^{\infty},$$ since the sequence $$(1, 1, \ldots,1,\ldots)$$
cannot be approximated (in the supremum norm) by any sequence with only a finite number of coordinates different from zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4290747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Evaluate $\int_{0}^{\infty}2^{-ax^2}dx$ by using Gamma function Evaluate $\int_{0}^{\infty}2^{-ax^2}dx$ by using Gamma function
$$I=\int_{0}^{\infty}2^{-ax^2}dx$$
Solution:$$ \text{Let} \\x^2=t\implies 2xdx=dt\implies dx=\frac{dt}{2x}\implies dx=\frac{dt}{2\sqrt t}$$
$$I=\int_{0}^{\infty}2^{-at}\frac{1}{2\sqrt t}dt$$
$$\implies I=\int_{0}^{\infty}2^{-at}t^{1/2-1}dt$$
what should be the next step?
| Your integral is just
$$\int_0^\infty e^{-a\log(2)x^2}\,dx=\sqrt{\frac{\pi}{a\log 2}}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4291036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Given a basis for an inner product space $V$, is there a "dual" basis for $V$? Let $ V $ be a $ \mathbb {R} $ vector space, ''$ \cdot $'' an inner product on $ V $, and $ \{e_1, \cdots, e_n \} $ a basis for $ V $.
Is there always another basis $ \{e ^ 1, \cdots, e ^ n \} $ such that
$$ e_i \cdot e ^ j = \delta_i ^ j $$
holds for each $ i, j $ ?
The right side is Kronecker's $ \delta $.
I think $ \{e ^ 1, \cdots, e ^ n \} $ corresponds to the basis of the dual space $ V ^ * $ of $ V $. $ V \cong V ^ * $ holds, but I was wondering if the dual basis could be taken directly as a subset of $ V $.
| Consider the space $E_1=e_2^\perp\cap e_3^\perp\cap\ldots\cap e_n^\perp$. This is the intersection of $n-1$ spaces and the dimension of each of them is $n-1$. Therefore, $\dim E_1\geqslant1$ (actually, it is equal to $1$). Take $v\in E_1\setminus\{0\}$. You cannot have $e_1.v=0$, because otherwise $v$ would be orthogonal to every $e_k$ and therefore $v$ would be $0$. Now, define $e^1$ as $\frac1{e_1.v}v$. You can define $e^2,\ldots,e^n$ in the same way.
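Concretely, the dual vectors can be read off from the inverse of the Gram matrix $G_{ij}=e_i\cdot e_j$: the coefficients of $e^j$ in the basis $\{e_i\}$ are the $j$-th row of $G^{-1}$. A small 2-D sketch (hypothetical helper, standard dot product):

```python
def dual_basis_2d(e1, e2):
    dot = lambda u, v: u[0] * v[0] + u[1] * v[1]
    a, b, d = dot(e1, e1), dot(e1, e2), dot(e2, e2)   # Gram matrix [[a,b],[b,d]]
    det = a * d - b * b                                # nonzero for a basis
    # rows of the inverse Gram matrix give the dual coefficients
    c11, c12 = d / det, -b / det
    c21, c22 = -b / det, a / det
    f1 = (c11 * e1[0] + c12 * e2[0], c11 * e1[1] + c12 * e2[1])
    f2 = (c21 * e1[0] + c22 * e2[0], c21 * e1[1] + c22 * e2[1])
    return f1, f2

e1, e2 = (2.0, 1.0), (1.0, 1.0)
f1, f2 = dual_basis_2d(e1, e2)
# e_i . f^j = delta_ij
```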
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4291250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Find $f: \mathbb{R} \rightarrow \mathbb{R}$ satisfying $f(x+y)^2=f(x)^2+yf(x)+xf(y)+y^2$ - Solved.
Find $f: \mathbb{R} \rightarrow \mathbb{R}$ satisfying $f(x+y)^2=f(x)^2+yf(x)+xf(y)+y^2$
It's solved, but I'm posting it to share it.
\begin{align} &P(x, y): f(x+y)^2=f(x)^2+yf(x)+xf(y)+y^2 \\ &P(y, x):f(x+y)^2=x^2+yf(x)+xf(y)+f(y)^2. \\ &\therefore f(x)^2-x^2=f(y)^2-y^2=c. \\ &\therefore f(x)= \pm \sqrt{x^2+c}. \\ &\text{Substituting into the original F.E.: } x^2+2xy+y^2+c = x^2+y^2+c \pm \Big(y\sqrt{x^2+c}+x\sqrt{y^2+c}\Big). \\ &\therefore 2xy= \pm y\sqrt{x^2+c} \pm x\sqrt{y^2+c}. \\ &\Rightarrow 4x^2y^2=x^2y^2+y^2c + x^2y^2+x^2c + 2xy\sqrt{(x^2+c)(y^2+c)}. \\ &\Rightarrow 2x^2y^2-(x^2+y^2)c=2xy\sqrt{(x^2+c)(y^2+c)}. \\ & \therefore 4x^4y^4 + (x^2+y^2)^2c^2 - 4x^2y^2(x^2+y^2)c = 4x^2y^2(x^2+c)(y^2+c). \\ &\Rightarrow 4x^4y^4+(x^2+y^2)^2c^2-4x^2y^2(x^2+y^2)c=4x^4y^4+4x^4y^2c + 4x^2y^4c + 4x^2y^2c^2 \\ &\Rightarrow (x^2+y^2)^2c^2-4x^2y^2(x^2+y^2)c=4x^2y^2(x^2+y^2+c)c \\ & \text{if } c=0: \text{Solution.} \\ &\text{if } c \neq 0: (x^2+y^2)^2c-4x^2y^2(x^2+y^2)=4x^2y^2(x^2+y^2+c). \\ &\Rightarrow \big((x^2+y^2)^2-4x^2y^2\big)c=8x^2y^2(x^2+y^2). \\ & \therefore c = \frac {8x^2y^2(x^2+y^2)}{(x^2+y^2)^2-4x^2y^2}, \text{ which isn't constant, a contradiction.} \\ & \therefore c = 0. \\ & \text{Substituting this into the original F.E.: } x^2+2xy+y^2=x^2+y^2 \pm 2xy \\ & \Rightarrow f(x)=x.\end{align}
If you have another solution, please post it to the answer with a spoiler code. How to put math equations in a "spoiler" block?
|
Substitute $x = 1$ and $y = 0$ to get $f(0) = 0$.
Substitute $x = 0$ to get $f(y)^2 = y^2$.
The equation then becomes $(x + y)^2 = x^2 + x f(y) + y f(x) + y^2$ which simplifies to $2xy = x f(y) + y f(x)$. Plug in $y = 1$ to get $2x = x f(1) + f(x)$; then $f(x) = (2 - f(1)) x$.
Then in particular $f(1) = 2 - f(1)$. So $f(1) = 1$. Then $f(x) = x$.
We easily verify that $f(x) = x$ is a solution. So the only solution here is $f(x) = x$.
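The final verification can also be spot-checked numerically (a hypothetical sketch, plugging $f(x)=x$ into the functional equation at random points):

```python
import random

def f(x):
    return x  # the claimed unique solution

random.seed(1)
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    y = random.uniform(-10.0, 10.0)
    lhs = f(x + y) ** 2
    rhs = f(x) ** 2 + y * f(x) + x * f(y) + y ** 2
    assert abs(lhs - rhs) < 1e-8
```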
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4291597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the set of complex numbers such that $Arg(\frac{z}{z-2}) = \frac{\pi}{4}$ I have been stumped by the following question:
Find the set of complex numbers $z\in\Bbb C$ such that $$\operatorname{arg}\left(\frac{z}{z-2}\right) = \frac{\pi}{4}.$$
I think that complex numbers, $z = x + iy$, with principal argument $\frac{\pi}{4}$, all have the property $y = x$ (with $x > 0$). I then become a bit lost.
| In general,
for $~w_1, w_2 \in \Bbb{C} ~: ~w_2 \neq 0,$
with $~\overline{w_2} = ~$ the complex conjugate of $w_2$,
you have that
$\displaystyle \frac{w_1}{w_2} = \frac{w_1 \times \overline{w_2}}{|w_2|^2}.$
This implies that
$\displaystyle ~\text{Arg}\left[\frac{w_1}{w_2}\right]
~= ~~\text{Arg}\left[w_1 \times \overline{w_2}\right].$
Set $z = x+ iy$.
Then, you must have that
$\displaystyle ~\text{Arg}\left[(x + iy) \times (x - 2 - iy)\right] = \pi/4.$
As something of a shortcut, if you examine
Re$\left[(x + iy) \times (x - 2 - iy)\right]$ and
Im$\left[(x + iy) \times (x - 2 - iy)\right]$
you must have that :
*
*The real component equals the imaginary component and
*Both components are positive.
The real component is $(x^2 - 2x + y^2),$
while the imaginary component is $-2y$.
So, you can guarantee the 2nd constraint above (i.e. both components positive), based on the 1st constraint, merely by requiring that $y < 0$.
So, the problem reduces to identifying all $(x,y) \in \Bbb{R^2}$ such that
*
*$y < 0$.
*$x^2 - 2x + y^2 = -2y.$
Edit
Originally, my work had one arithmetic mistake, which I corrected, and one (can't see the forest for the trees) simplification that I totally overlooked.
Once Charlotte left me a comment (following my answer), I proofread my answer and found both flaws.
The second constraint above may be re-expressed as $(x - 1)^2 + (y + 1)^2 = 2.$
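The resulting locus, the lower arc ($y<0$) of $(x-1)^2+(y+1)^2=2$, can be verified directly; a hypothetical sketch using `cmath`:

```python
import cmath
import math

def on_circle(t):
    # parametrize (x-1)^2 + (y+1)^2 = 2
    r = math.sqrt(2.0)
    return complex(1.0 + r * math.cos(t), -1.0 + r * math.sin(t))

# parameter values whose points have y < 0 (so z avoids the endpoints 0 and 2)
for t in (-0.5, 0.2, 3.5):
    z = on_circle(t)
    assert z.imag < 0
    assert abs(cmath.phase(z / (z - 2)) - math.pi / 4) < 1e-9
```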
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4291767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the integral of the secant between $0$ and $\pi$? The secant function has a discontinuity at $\pi/2$, so I separated the integral as:
$\int_0^{\pi}{\sec(\theta) d\theta} = \int_0^{\pi/2}{\sec(\theta) d\theta} + \int_{\pi/2}^{\pi}{\sec(\theta) d\theta}.$
Computing these improper integrals as a limit, we obtain:
$\int_0^{\pi/2}{\sec(\theta) d\theta} = +\infty,$
$\int_{\pi/2}^{\pi}{\sec(\theta) d\theta} = -\infty$.
Is the original integral convergent or divergent?
| Note that $\sec(\theta) = 1/\cos(\theta)$, and that the cosine is an odd function around $\theta = \pi/2$. That is, for $\theta\in [\pi/2,\pi]$ we have $\cos(\theta) = -\cos(\pi - \theta)$. We can then rewrite:
$\int_{\pi/2}^{\pi}{\sec(\theta) d\theta} = \int_{\pi/2}^{\pi}{\frac{1}{\cos(\theta)} d\theta} = \int_{\pi/2}^{\pi}{\frac{1}{-\cos(\pi - \theta)} d\theta} = \int_{\pi/2}^{\pi}{\frac{-1}{\cos(\pi - \theta)} d\theta} = \int_{\pi/2}^{0}{\frac{1}{\cos(\varphi)} d\varphi} = - \int_0^{\pi/2}{\sec(\varphi) d\varphi}$
where we have used the change of variable $\varphi = \pi - \theta$. Now:
$\int_0^{\pi}{\sec(\theta) d\theta} = \int_0^{\pi/2}{\sec(\theta) d\theta} + \int_{\pi/2}^{\pi}{\sec(\theta) d\theta} = \int_0^{\pi/2}{\sec(\theta) d\theta} -\int_{0}^{\pi/2}{\sec(\varphi) d\varphi} = 0.$
Edit: I guess this only computes the principal value of the integral, it does not address convergence.
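The symmetric cancellation behind the principal value $0$ can be illustrated numerically; this is a hypothetical sketch that cuts out a shrinking window around $\pi/2$:

```python
import math

def midpoint(f, a, b, n=20_000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

sec = lambda t: 1.0 / math.cos(t)

for eps in (0.3, 0.1, 0.03):
    pv = (midpoint(sec, 0.0, math.pi / 2 - eps)
          + midpoint(sec, math.pi / 2 + eps, math.pi))
    # each half is large, but they cancel by the symmetry sec(pi - t) = -sec(t)
    assert abs(pv) < 1e-7
```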
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4291950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is inf and sup of null sets defined as infinities I'm reading the definition of $\inf\emptyset$ and $\sup\emptyset$.
a) I'm wondering why $\inf\emptyset = \infty$ and $\sup\emptyset = -\infty$. I would have expected both to be undefined.
b) In general, can something equal infinity if it's not in the extended real number system? Should I assume they are talking about extended real numbers in these definitions?
| Having$$\inf\emptyset=\infty\text{ and }\sup\emptyset=-\infty\tag1$$is the only way of defining $\inf\emptyset$ and $\sup\emptyset$ so that you always have$$A\subset B\implies \inf A\geqslant\inf B\quad\text{and}\quad\sup A\leqslant\sup B.$$And, yes, you can only have $(1)$ if we are working with the extended real numbers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4292091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Increasing annuity problem FM Consider a situation where you took a loan for $\$15,000$ and you are paying back the amount at annual effective interest rate of $3\%$, $\textrm{# of periods = 25}$.
Now, you make the first payment of $100$ and then the next payment is $200$. Notice that the payments are increasing by $100$, making it very obvious that we need to use the increasing annuity formula.
Adding to that, once $10$ payments are made (the last one being $\$1000$), there is an adjustment and the remaining $15$ payments are $\$X$ per year at the same interest rate.
Find $\$X$.
The formula I used in the case is:
$15{,}000 = 100\,(Ia)_{\overline{10}|} + X \cdot v^{10} \cdot a_{\overline{15}|}$, i.e., an increasing annuity for the first $10$ periods plus a level annuity for the remaining $15$ periods, discounted back $10$ years.
$15,000=4,483.8992+X\cdot (1.03)^{-10}\cdot \frac{1-(1.03)^{-15}}{0.03}$
What I am confused about is that why do we need use $v^{10}$ for the remaining $15$ periods ?
The formula is correct and gets to the correct answer of $1183.85$
Note: The payments are being made at the end of each year.
| The $15000$ dollars are in present value. When you sum the payments, you want all of them to be in present value, so you want to take the present value of $X$.
$$PV=FVv^{t}$$
And you know $t=10$.
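The whole calculation is easy to reproduce; a hypothetical sketch using the standard increasing-annuity formula $(Ia)_{\overline{n}|} = (\ddot a_{\overline{n}|} - n v^n)/i$:

```python
i = 0.03
v = 1.0 / (1.0 + i)

a10 = (1 - v ** 10) / i                       # 10-year annuity-immediate
Ia10 = (a10 * (1 + i) - 10 * v ** 10) / i     # increasing annuity-immediate
a15 = (1 - v ** 15) / i                       # 15-year annuity-immediate

# 15000 = 100*(Ia)_10 + X * v^10 * a_15  ->  solve for X
X = (15000 - 100 * Ia10) / (v ** 10 * a15)
print(X)  # approximately 1183.85, matching the stated answer
```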
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4292236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$\frac{3}{4} \lim_{n \to \infty}\left(\frac{\sum_{r=1}^{n}\frac{1}{\sqrt{r}} \sum_{r=1}^{n}\sqrt{r} }{\sum_{r=1}^{n}r }\right)$ $$\frac{3}{4} \lim_{n \to \infty}\left(\frac{\sum_{r=1}^{n}\frac{1}{\sqrt{r}} \sum_{r=1}^{n}\sqrt{r} }{\sum_{r=1}^{n}r }\right)$$
Apparently, the answer is 2.
My try:
$$\frac{3}{4} \lim_{n \to \infty}\left(\frac{n^2}{\frac{n(n+1)}{2} }\right)$$
$$\frac{3}{2} \lim_{n\to \infty}\left(1-\frac{1}{n+1}\right)$$
Which is $\frac{3}{2}$. Surely I did something stupid or illegal here. What was it?
Also on the forum where I found this, It was said it can be done by converting into integrals, A push in the right direction on that too would be pretty rad.
Hope I am not asking too much.
| Here is a solution using the suggested integral approach. Notice that
\begin{align*}
\frac{{\sum\nolimits_{r = 1}^n {\frac{1}{{\sqrt r }}} \sum\nolimits_{r = 1}^n {\sqrt r } }}{{\sum\nolimits_{r = 1}^n r }} = \frac{{\sum\nolimits_{r = 1}^n {\sqrt {\frac{n}{r}} } \sum\nolimits_{r = 1}^n {\sqrt {\frac{r}{n}} } }}{{\sum\nolimits_{r = 1}^n r }} & = \frac{{\frac{1}{{n^2 }}\sum\nolimits_{r = 1}^n {\sqrt {\frac{n}{r}} } \sum\nolimits_{r = 1}^n {\sqrt {\frac{r}{n}} } }}{{\frac{1}{{n^2 }}\sum\nolimits_{r = 1}^n r }} \\ &= \frac{{\left( {\frac{1}{n}\sum\nolimits_{r = 1}^n {\sqrt {\frac{n}{r}} } } \right)\left( {\frac{1}{n}\sum\nolimits_{r = 1}^n {\sqrt {\frac{r}{n}} } } \right)}}{{\frac{1}{n}\sum\nolimits_{r = 1}^n {\frac{r}{n}} }}.
\end{align*}
Thus, noticing the Riemann sums, we find
\begin{align*}
\mathop {\lim }\limits_{n \to + \infty } \frac{3}{4}\frac{{\sum\nolimits_{r = 1}^n {\frac{1}{{\sqrt r }}} \sum\nolimits_{r = 1}^n {\sqrt r } }}{{\sum\nolimits_{r = 1}^n r }} & = \frac{3}{4}\frac{{\mathop {\lim }\limits_{n \to + \infty } \left( {\frac{1}{n}\sum\nolimits_{r = 1}^n {\sqrt {\frac{n}{r}} } } \right)\mathop {\lim }\limits_{n \to + \infty } \left( {\frac{1}{n}\sum\nolimits_{r = 1}^n {\sqrt {\frac{r}{n}} } } \right)}}{{\mathop {\lim }\limits_{n \to + \infty } \frac{1}{n}\sum\nolimits_{r = 1}^n {\frac{r}{n}} }} \\ & = \frac{3}{4}\frac{{\int_0^1 {\frac{{dx}}{{\sqrt x }}} \int_0^1 {\sqrt x dx} }}{{\int_0^1 {xdx} }} = 2.
\end{align*}
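Numerically the expression approaches $2$ quite slowly, at rate $O(1/\sqrt n)$; a quick hypothetical check also shows where the original attempt went wrong: the product of sums grows like $\frac43 n^2$, not $n^2$:

```python
import math

def expr(n):
    s1 = sum(1.0 / math.sqrt(r) for r in range(1, n + 1))   # ~ 2*sqrt(n)
    s2 = sum(math.sqrt(r) for r in range(1, n + 1))         # ~ (2/3)*n**1.5
    s3 = n * (n + 1) / 2.0
    return 0.75 * s1 * s2 / s3

val = expr(100_000)
print(val)  # close to 2
```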
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4292393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
linear independence of $\{x_j-x_i\}_{j=1 ,\\j \ne i}^{k+1}$? Show that if the set $\{x_2-x_1,...,x_{k+1}-x_1\} \subset \mathbb{R}^n$ is linearly independent then the following set is also linearly independent:
$$\{x_1-x_i,...,x_{i-1}-x_i,x_{i+1}-x_i,...,x_{k+1}-x_i\}$$
| I think it's simpler than that. First, you should know that if you have a linearly independent set, then replacing one element by itself plus a scalar multiple of another gives a same-size set that is also l.i., so you just have to apply this to the linear combinations above and that's it. The proof of this statement is essentially by contradiction.
Let $\{x_1,\dots,x_n\}$ be an l.i. set; we want to prove that this happens if and only if $\{x_1,\dots,x_i+kx_j,\dots,x_n\}$ is an l.i. set for each $i,j$. For the left-to-right direction: if $\{x_1,\dots,x_i+kx_j,\dots,x_n\}$ is not l.i., then some element (not necessarily the $i$-th or $j$-th) is a linear combination of the rest, so:
*
*If $x_l$, for $l\in\{1,\dots,n\}-\{i,j\}$, is a linear combination of the other elements of the set, then the contradiction is obvious, because $x_l$ is then a linear combination of elements of $\{x_1,\dots,x_n\}$.
*Otherwise the argument is pretty much the same: if $x_i+kx_j$ is a linear combination of the other elements in $\{x_1,\dots,x_i+kx_j,\dots,x_n\}$, then $x_i\in \{x_1,\dots,x_n\}$ is a linear combination of elements of that set.
So it is clear that you can perform finitely many such operations on an l.i. set and you will get another l.i. set.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4292750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Angle between tangent on circle and a line to a point on a larger concentric circle I have two concentric circles, the smaller one has radius $r$ and the larger one radius $r+a$. I am trying to calculate the angle between the tangent line at a point $A$ on the smaller circle and the line from point $A$ to a point $B$ on the larger circle. I made this diagram to illustrate my problem.
The angle I want to find is angle $\beta$.
Distances $a$ (between $B$ and $D$), $d$ (along the circumference of the inner circle), and $r$ are known. With this information, I can calculate angles $\alpha$, $\delta$, and distance $b$, angles $\epsilon$ and $\theta$, as well as the supplementary angles $\delta'$ and $\theta'$ (not drawn to avoid clutter).
Intuitively I see that triangles $ABD$ and $ABE$ are now fully defined but I am not able to work out angles $\beta$ and $\eta$.
How do I solve this problem? I want to code this problem with single-precision floating-point numbers in C++ so a computationally efficient solution is preferred.
EDIT: In this example point $B$ lies "above the horizon" as seen from point $A$. Is it also possible to calculate angle $\beta$ when $B$ is below the horizon?
| Assuming that point B always lies on the opposite side of the tangent line from the center C, this is a triangle problem of type SSS where all three sides are known.
To solve for an angle $\alpha$ of the triangle, one uses the appropriate version of the Law of Cosines
$$\cos\alpha=\frac{b^2+c^2-a^2}{2bc}$$
In this case, writing $f = \overline{AB}$ for the known third side, we have
$$ \cos(\angle CAB)=\frac{r^2+f^2-(r+a)^2}{2rf}$$
and to find angle $\beta$ we subtract away a right angle.
$$ \beta=\angle CAB-90^\circ$$
The angle $\eta$ can be found in a similar fashion, using the Law of Cosines.
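A direct implementation of this step (hypothetical helper names; $f=\overline{AB}$ assumed known; the intended C++ port is a one-line translation):

```python
import math

def beta_angle(r, a, f):
    # SSS triangle CAB with CA = r, CB = r + a, AB = f (B above the horizon)
    cos_cab = (r * r + f * f - (r + a) ** 2) / (2.0 * r * f)
    return math.acos(cos_cab) - math.pi / 2   # subtract the right angle

# sanity checks: B radially outward (f = a) gives beta = pi/2;
# B on the tangent line (f^2 + r^2 = (r+a)^2) gives beta = 0
r, a = 3.0, 1.0
assert abs(beta_angle(r, a, a) - math.pi / 2) < 1e-9
assert abs(beta_angle(r, a, math.sqrt((r + a) ** 2 - r * r))) < 1e-9
```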
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4292904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Show if $A^2 = A$ then $\textrm{rank}(A) + \textrm{rank}(A - I) = n$ suppose $n \in \mathbb{N}$ and $A$ is a $n \times n$ idempotent matrix, meaning $A^2 = A$.
show that:
$$
\textrm{rank}(A) + \textrm{rank}(A - I) = n
$$
I think we should first use rank-nullity theorem:
$$
\textrm{rank}(A) + \textrm{dim}(\textrm{null}(A)) = n
$$
then show that:
$$
\textrm{dim}(\textrm{null}(A)) = \textrm{rank}(A - I)
$$
but I don't know how to show the second phrase.
| Since $A(A-I) = 0$, the eigenvalues of $A$ lie in $\{0,1\}$, and the matrix is diagonalizable (its minimal polynomial divides $x(x-1)$, which has no repeated roots). The rank of $A$ is the number of non-zero eigenvalues counted with multiplicity, which is the multiplicity of the eigenvalue $1$. Similarly, the rank of $A-I$ is the number of eigenvalues different from $1$, which is the multiplicity of the eigenvalue $0$. The sum of these multiplicities must be $n$.
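This can be sanity-checked on a concrete projection; a hypothetical sketch with a small Gaussian-elimination rank routine:

```python
def rank(M, tol=1e-9):
    # row-echelon rank via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n):
        if r == m:
            break
        piv = max(range(r, m), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            fac = M[i][c] / M[r][c]
            for j in range(c, n):
                M[i][j] -= fac * M[r][j]
        r += 1
    return r

# A = projection onto the line spanned by v = (1, 2, 2): A = v v^T / (v.v)
A = [[1/9, 2/9, 2/9], [2/9, 4/9, 4/9], [2/9, 4/9, 4/9]]
AA = [[sum(A[i][k] * A[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
assert all(abs(AA[i][j] - A[i][j]) < 1e-12 for i in range(3) for j in range(3))  # A^2 = A

AmI = [[A[i][j] - (1.0 if i == j else 0.0) for j in range(3)] for i in range(3)]
assert rank(A) + rank(AmI) == 3
```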
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4293039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Transforming an improper integral The function $f(x)$ is defined in the interval $(-\infty, a]$, where $a$ is some real number.
If I have an improper integral of the form
$$\int_{-\infty}^{a} f(x) dx \tag{1}$$
and I need to decide if it converges or diverges, can I always transform it to this one
$$\int_{a}^{\infty} f(2a-x) dx \tag{2}$$
and study the integral (2) instead of (1)?
I am saying that (1) is convergent if and only if (2) is convergent.
First of all, is this statement true?
Also, in fact... I am also claiming that:
$$\int_{-\infty}^{a} f(x) dx = \int_{a}^{\infty} f(2a-x) dx$$
I think it is true (just by geometric considerations) because the graphs of the two functions $f(x)$ and $g(x) = f(2a-x)$ are symmetric with respect to the line $x=a$.
How can we justify this statement more formally?
Why I am asking this? Because in my book all criteria for convergence/divergence (of improper integrals) are given only for integrals of the kind (2). So that made me thinking that... OK, I need to have some way to deal with integrals of the kind (1). And as a result I came up with this transformation.
| $$\int_{-\infty}^{a}f(x)dx$$
Put $x=2a-u \implies dx=-du$.
$$-\int_{\infty}^{a}f(2a-u)du$$
According to properties of definite integrals,
$$\int_{a}^{\infty}f(2a-u)du$$
Which is second integral you are talking about.
Secondly, you shouldn't use the same variable for different bounds across two or more integrals.
In the description you claimed that,
$$\int_{-\infty}^{a} f(x) dx = \int_{a}^{\infty} f(2a-x)dx$$
So it's not correct to use the same variable for both integrals with different bounds. Instead, do this.
$$\int_{-\infty}^{a} f(x) dx = \int_{a}^{\infty} f(2a-u)du$$
Doing one $u$-substitution (as I did at the start) shows that it's true.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4293388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Showing that a stopping time is finite for a biased random walk. If we consider $X_i$ iid with $\mathbb{P}(X_i=1) = p$ and $\mathbb{P}(X_i=-1)=1-p$. Where $p \in (1/2,1)$. The random walk is then given by,
$$S_n=\sum_{i=1}^n X_i. $$
We also define the stopping time $\tau = \inf\{k:S_k \in \{-\alpha,\beta\} \}$ with $\alpha,\beta>0$ in the natural numbers.
How to prove that $P(\tau<\infty)=1$? Intuitively it's clear that for sure at some point $S_n$ will hit $\beta$ (because the random walk has a tendency to move upwards). I was thinking about showing that $P(S_n=\infty \text{ i.o.})=1$. But I'm not really sure how to make a rigorous argument.
Could someone help me with this?
| Consider $\tau_\beta:=\inf\{n>0:S_n = \beta\}$. We construct the martingale $S_n-n(2p-1)$ which is s.t.
$$E[S_n-n(2p-1)|\mathcal{F}_k]=S_k-k(2p-1)$$
Now by optional stopping
$$E[S_{\tau_\beta \wedge n}-(\tau_\beta \wedge n)(2p-1)]=0$$
and by monotone convergence
$$E[\tau_\beta]=\frac{1}{2p-1}\lim_{n \to \infty}E[S_{\tau_\beta \wedge n}]\leq \frac{\beta}{2p-1}$$
So
$$P(\tau_\beta \geq k)\leq \frac{1}{k}E[\tau_\beta]\leq \frac{\beta}{k(2p-1)}\to 0\implies P(\tau_\beta = \infty )=0$$
and so $P(\tau=\infty)=0$.
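A small Monte Carlo sketch (hypothetical parameters; the step cap is only a safety net and, consistent with the result, is never reached) illustrates both the almost-sure finiteness and the bound $E[\tau_\beta]\le\beta/(2p-1)$:

```python
import random

random.seed(0)
p, alpha, beta = 0.7, 5, 5
cap = 100_000   # generous cap; with drift 2p - 1 = 0.4 it never binds
times = []
for _ in range(2000):
    s, steps = 0, 0
    while -alpha < s < beta and steps < cap:
        s += 1 if random.random() < p else -1
        steps += 1
    assert s in (-alpha, beta)   # every walk stopped before the cap
    times.append(steps)

print(sum(times) / len(times))   # sample mean of tau; the bound gives 5/0.4 = 12.5
```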
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4293522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
The Devil's Chessboard trick and coloring of the Hamming cube For $d \geq 2$, consider the $d$-dimensional Hamming cube $H_d$, i.e. the set of binary sequences $(b_0, b_1, \ldots, b_{d-1})$ with $b_k \in \{0,1\}$ of length $d$. Two such sequences are neighbors if they differ at exactly one position. The Hamming cube can be thought of as a $d$-regular graph with $2^d$ vertices. For what values of $d$ is it possible to color $H_d$ with $d$ colors such that, for any vertex $v \in H_d$, its $d$ neighbors have $d$ different colors? Note that, indeed, such a coloring does not prevent neighbors from sharing the same color.
It seems possible for $d=2^n$. For example, for $d=2$, one can color:
*
*$(0,0)$ and $(1,0)$ in red.
*$(0,1)$ and $(1,1)$ in blue.
Remark: such a coloring seems to give a solution to the Devil's Chessboard problem.
| Although your precise question has already been answered, the solution motivates a natural minor variant of the question:
If $H_d$ can be coloured with $r$ colours, so that every vertex is adjacent to a vertex of each colour, must there exist a natural number $k$ with $r\leq2^k\leq d$?
The solution to the Devil's Chessboard problem implies that if such a $k$ exists for a given $d,r$, then so does a valid $r$-colouring of $H_d$ - just ignore the extra squares and extra possible values that can be communicated.
But are these the only values of $d,r$ for which a valid colouring exists? The accepted answer shows there are no other solutions where $d=r$ and it is not easy to find other solutions at all (even when $d$ is almost double $r$), which causes people to speculate that there are no other solutions. However researchers in the field have long known that there are other solutions. The example below was extracted from $[1]$.
Starting with a chessboard with a coin on each square, by flipping precisely one coin, we will show that it is possible to communicate any desired element of $\mathbb{F}_7^2$. This just shows that $d=64, r=49$ is possible, which is unremarkable. However in our example, turning any coin on the top row will have the same effect, allowing us to ditch $7$ squares, showing that $d=57, r=49$ is possible.
Consider each column of the chessboard separately. The solution to the Devil's Chessboard problem allows us to read off the position of a pawn somewhere on the column from the coins on that column, in such a way that by turning precisely one coin on the column, we can move the pawn to any desired square on the column. Further we can arrange the encoding so that flipping the coin on the top row does not move the pawn.
Now looking at the whole chessboard, we have encoded the position of a pawn on each column. By flipping one coin, we can move one pawn to any square on its column. There are also $8$ squares which do not move any of the pawns.
Label the columns of the chessboard with pairwise non-colinear elements of $\mathbb{F}_7^2$. For example: $$
\left(\begin{array}{c}0\\1\end{array}\right),
\left(\begin{array}{c}1\\0\end{array}\right),
\left(\begin{array}{c}1\\1\end{array}\right),
\left(\begin{array}{c}1\\2\end{array}\right),
\left(\begin{array}{c}1\\3\end{array}\right),
\left(\begin{array}{c}1\\4\end{array}\right),
\left(\begin{array}{c}1\\5\end{array}\right),
\left(\begin{array}{c}1\\6\end{array}\right).
$$
Label the rows of the chessboard with elements of $\mathbb{F}_7$, using all of them. For example: $0,1,2,3,4,5,6,6$.
The arrangement of pawns on the board gives us a linear combination of vectors: sum over pawns the coefficient of the row, multiplied by the vector of the column.
Now moving one of the $8$ pawns to one of the $6$ other coefficients results in changing the linear combination to one of $6\times 8=48$ vectors, different to the initial one. These are all distinct as the column vectors are pairwise non-colinear. Thus by flipping precisely one coin, we can communicate any element of $\mathbb{F}_7^2$. Further all the squares on the top row have the same effect, so we only need one of them.
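This construction can be machine-checked. The sketch below (Python; the column labels and row coefficients are the specific ones chosen above) encodes each column's pawn as the XOR of the indices of its heads — the standard Devil's Chessboard trick — and verifies that from a sample board every element of $\mathbb{F}_7^2$ is reachable by exactly one flip, with all eight top-row flips acting identically.

```python
import random

# Column labels: pairwise non-collinear vectors of F_7^2 (as chosen above).
cols = [(0, 1), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6)]
# Row coefficients in F_7; all seven values occur, with 6 repeated.
row_coeff = [0, 1, 2, 3, 4, 5, 6, 6]

def pawn(column):
    """Standard Devil's Chessboard encoding: XOR of the indices of heads."""
    p = 0
    for i, bit in enumerate(column):
        if bit:
            p ^= i
    return p

def value(board):
    """Element of F_7^2 encoded by the whole board."""
    v0 = v1 = 0
    for c in range(8):
        k = row_coeff[pawn(board[c])]
        v0 = (v0 + k * cols[c][0]) % 7
        v1 = (v1 + k * cols[c][1]) % 7
    return (v0, v1)

random.seed(0)
board = [[random.randint(0, 1) for _ in range(8)] for _ in range(8)]
base = value(board)

reachable = set()
for c in range(8):
    for r in range(8):
        board[c][r] ^= 1          # flip exactly one coin
        reachable.add(value(board))
        board[c][r] ^= 1          # flip it back

assert len(reachable) == 49       # every element of F_7^2 is reachable
for c in range(8):                # flipping row 0 XORs the pawn with 0,
    board[c][0] ^= 1              # so all 8 top-row squares act identically
    assert value(board) == base
    board[c][0] ^= 1
```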
$[1]$ Östergård, Patric R. J., A coloring problem in Hamming spaces, Eur. J. Comb. 18, No. 3, 303-309 (1997). ZBL0880.05033.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4293622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Problem understanding the Stationary-Phase-Approximation example of Wikipedia? I am trying to understand the Stationary-Phase Approximation from the Wikipedia example shown here, but there is something I don't understand about how it is made to work. Following the article's example:
$$f(x,t) = \frac{1}{2\pi} \int\limits_{\mathbb{R}} F(w)\,e^{i[k(w)x-wt]}\,dw \,\,\,\,\texttt{(Eq. 1)}$$
Then, the phase term $\phi = k(w)x-wt$ is stationary when:
$$\frac{d}{dw}\left( k(w)x-wt \right)=0 \,\,\Rightarrow w_0\,\,\,\,\texttt{(Eq. 2)}$$
Or equivalently:
$$ \frac{dk(w)}{dw} = \frac{t}{x}\,\,\,\,\texttt{(Eq. 3)}$$
Then, through Taylor series approximations and other manipulations the Stationary-Phase Approximation is given by:
$$ f(x,t) \approx \frac{|F(w_0)|}{2\pi}\sqrt{\frac{2\pi}{x|k''(w_0)|}} \cos\left( k(w_0)x-w_0t \pm\frac{\pi}{4} \right) \,\,\,\,\texttt{(Eq. 4)}$$
I believe that the following term of Eq. 4 is interpreted as:
$$k''(w_0) \cong \frac{d^2}{dw^2}\left(k(w) \right)\Big|_{w=w_0}$$
But since the right-side of Eq. 3 is independent of $w$, it means that:
$$ \frac{d^2}{dw^2}\left(k(w) \right) = \frac{d}{dw}\left( \frac{dk(w)}{dw} \right) = \frac{d}{dw}\left( \frac{t}{x} \right) = 0 \,\,\,\forall \,w \Rightarrow k''(w_0) = 0$$
Which will means that I have a division by zero happening in Eq. 4.
So, How is possible that Eq. 4 is not undetermined?? What I am understanding wrongly??
| We are considering a point $w_0$ where we have the equality $\frac{dk}{dw}\Big|_{w=w_0} = \frac{t}{x}$. There is no claim that the identity holds for all $w$. For example, let $k=w^2$. Then $\frac{dk}{dw} = 2w$ and we are considering a point where $w = \frac{t}{2x}$. But clearly $\frac{d^2 k}{dw^2} =2 \neq 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4293781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does compound interest usually have an $\frac{r}{n}$ term? I assume discrete time. Suppose I start with some amount of money $P_{0}$. Then, using a simple rate of interest $r$ for a given period of time, I would have
$$P_{t}= P_{0} (1+r)^{t}$$
Suppose $t$ represents years. Now, suppose I want the payments to happen more frequently, say $n$ times a year, but I want $t$ to still represent years. Then I can model the amount I get paid as
$$P_{t}= P_{0} (1+r)^{nt}$$
This model can't be right however because, as one user pointed out, as $n\rightarrow \infty$, then $P_{t} \rightarrow \infty$.
We want
\begin{align*}
t\rightarrow \infty &\Rightarrow P_{t} \rightarrow \infty\\
n\rightarrow \infty &\Rightarrow P_{t} \text{ converges to some limit}
\end{align*}
Ok, so we need a way to make it such that the limit exists as $n\rightarrow \infty$ and that necessitates adding a term $f(n)$ such that
$$P_{t}= P_{0} \left(1+\frac{r}{f(n)}\right)^{nt}$$
Why does compound interest typically involve a $f(n)=n$? I know the reason why we need $f(n)$ to have certain properties like $f(n)=n$ is because the purpose of this is to reduce the interest rate to prevent the infinity stated earlier. But I don't get why that needs to take the form of $f(n)=n$ specifically as opposed to some general class of functions $f(n)$, where they could have certain restrictions to give the derivatives properties similar to the usual formula.
I also recognize that by choosing to use $f(n)=n$, then in the limit, we have the continuous case of $P_t = P_0 \cdot e^{rt}$. But that doesn't change the fact there could be many compound interest formulas for the discrete case, even if most can't be made into a continuous version.
Is the choice of $f(n)=n$ just by convention or is there a reason for this choice?
|
Now, suppose I want the payments to happen more frequently, say $n$
times a year, but I want $t$ to still represent years. Then I can model the amount I get paid as $P_{t}= P_{0} (1+r)^{nt}$
No you cannot. As $n \rightarrow \infty$, the amount $P_t$ also tends to infinity. It means that it is not a right model. If you are paid $n$ times a year, $r$ must also depend on $n$.
If you want to accumulate the yearly interest $r$ in $n$ installments, each installment shall yield $\frac{r}{n}$ interest.
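A quick numerical illustration of why $f(n)=n$ is the right normalization: with each installment yielding $r/n$, the year-end amount converges as $n$ grows (to $P_0e^{rt}$), whereas compounding the full $r$ at every installment blows up. (The numbers $P_0=100$, $r=0.05$ are arbitrary choices.)

```python
import math

P0, r, t = 100.0, 0.05, 1.0

for n in (1, 4, 12, 365, 10_000):
    capped = P0 * (1 + r / n) ** (n * t)   # r/n per installment: converges
    naive = P0 * (1 + r) ** (n * t)        # full r per installment: diverges
    print(n, round(capped, 6), round(naive, 2))

limit = P0 * math.exp(r * t)               # continuous-compounding limit
assert abs(P0 * (1 + r / 10_000) ** 10_000 - limit) < 0.01
```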
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4293960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find $\mathbf{Q}$ by minimizing $\operatorname{trace}(\mathbf{Q}^\top \mathbf{AQ}$) such that $\mathbf{Q}$ is orthonormal Suppose $\mathbf{Q} \in \mathbb{R}^{m \times k}$ where $k < m$ and $\mathbf{Q}^\top\mathbf{Q} = \mathbf{I}$.
How can I find $\mathbf{Q}$ such that
$$
\operatorname{trace}(\mathbf{Q}^\top \mathbf{AQ})
$$
is minimized where $\mathbf{A}$ is positive semi-definite?
I tried the following simplification:
$$
\mathbf{Q}^\top \mathbf{AQ} = \mathbf{Q}^\top \mathbf{UD}\mathbf{U}^\top \mathbf{Q} = \mathbf{Q}^{*\top} \mathbf{DQ}^*
$$
where $\mathbf{Q}^* = \mathbf{U}^\top\mathbf{Q}$, $\mathbf{A} = \mathbf{UDU}^\top$ and $\mathbf{Q}^{*\top}\mathbf{Q}^* = \mathbf{I}$.
If I minimize this quantity, does that mean the columns of $\mathbf{Q}^*$ is given by the eigenvectors corresponding to the smallest eigenvalues of $\mathbf{A}$?
| Using your notation but replacing $Q^*$ by $V$, the problem is $$\min_{V\in \mathbb{R}^{m \times k}\\ V^{\mathrm{T}}V = I}\ trace(V^{\mathrm{T}}DV).$$
Suppose $V = (v_{ij})_{m \times k}$ and $D = diag\{\lambda_1,\cdots,\lambda_m\}$ where $0 \leq \lambda_1 \leq \cdots \leq \lambda_m$; then it is easy to compute that $$(V^{\mathrm{T}}DV)_{jj} = \sum_{i=1}^m \lambda_i v_{ij}^2.$$
So $$trace(V^{\mathrm{T}}DV) = \sum_{j=1}^k \sum_{i=1}^m \lambda_i v_{ij}^2 = \sum_{i=1}^m \lambda_i s_i, \qquad s_i := \sum_{j=1}^k v_{ij}^2,$$ where $0 \le s_i \le 1$ (the rows of $V$ have norm at most $1$, since $V$ extends to an orthogonal matrix) and $\sum_{i=1}^m s_i = k$. Placing as much weight as possible on the smallest eigenvalues gives $$trace(V^{\mathrm{T}}DV) \geq \sum_{j=1}^k\lambda_j$$
And when taking $v_{jj} = 1,\ 1 \leq j \leq k$ and $v_{ij} = 0,\ i \neq j$ we get the equality.
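In the original variables this says $Q = U_{[:,\,1..k]}$: the eigenvectors of the $k$ smallest eigenvalues. A quick NumPy sanity check (the random matrix and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 6, 2
B = rng.standard_normal((m, m))
A = B @ B.T                          # a positive semi-definite matrix

w, U = np.linalg.eigh(A)             # eigenvalues returned in ascending order
Q = U[:, :k]                         # eigenvectors of the k smallest eigenvalues
min_trace = np.trace(Q.T @ A @ Q)
assert np.isclose(min_trace, w[:k].sum())

# no other orthonormal Q does better
for _ in range(200):
    Qr, _ = np.linalg.qr(rng.standard_normal((m, k)))
    assert np.trace(Qr.T @ A @ Qr) >= min_trace - 1e-9
```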
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4294111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Simplifying $\frac{p^2q^2(1-\epsilon^2\cos^2t)}{p^2\cos^2t+q^2\sin^2t}$, where $\epsilon=\sqrt{1-(q/p)^2}$ I am currently trying to show that the product of the distances from the focis of an ellipse to the tangent line at any point of the ellipse is a constant. While I thought that the computation is a straightforward plug-in and be done with it, this turned out to be harder than expected.
I know that the end result should be $q^2$, but I have no idea on how to manipulate the expression
$$\frac{p^2q^2(1 - \epsilon^2\cos^2(t))}{p^2\sin^2(t) + q^2\cos^2(t)}$$ where $p > q > 0, \epsilon = \sqrt{1 - (q/p)^2}$.
Edit: I had mixed up $\sin$ and $\cos$ in the denominator, so instead of $p^2\cos^2(t) + q^2\sin^2(t)$ the expression should be $p^2\sin^2(t) + q^2\cos^2(t)$.
| Marcos already calculated the value of your (original) expression at $t=0$: it is $q^4/p^2$. At $t=\pi/2$ the expression equals $p^2$. If $p \ne q$, these values are different, so in that case your expression is not a constant. If $p=q$ the expression is equal to $p^2=q^2$ for every $t$.
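For completeness, the corrected expression (with denominator $p^2\sin^2 t + q^2\cos^2 t$, as in the edit) does reduce to the constant $q^2$; a symbolic check with SymPy:

```python
import sympy as sp

p, q, t = sp.symbols('p q t', positive=True)
eps2 = 1 - (q / p) ** 2                              # eccentricity squared
num = p**2 * q**2 * (1 - eps2 * sp.cos(t)**2)
den = p**2 * sp.sin(t)**2 + q**2 * sp.cos(t)**2      # corrected denominator
assert sp.simplify(num / den - q**2) == 0            # identically q^2
```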
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4294293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
What's the measure of the segment $PD$ in the rhombus below? For reference : Let $M$ and $N$ be the midpoints of the sides $AB$ and $AD$ of a rhombus $ABCD$.
Calculate $PD$, if $MP= 6, (P = MD \cap CN$)
My progress:
$G$ is the centroid of $\triangle ABD$ (the intersection of the medians $DM$ and $BN$), so
$DG = \frac{2MD}{3} = 2GM\\
DG = GP+PD\\
GM = MP - GP = 6 -GP\\
\therefore GP + PD = 2(6 - GP) \implies \\
PD = 12 - 3GP$
There is one more relationship to finish...
| As $\triangle NGM \sim \triangle BGD$ (where $G = MD \cap NB$ is the centroid of $\triangle ABD$, and $NM \parallel BD$ with $NM = \tfrac12 BD$), $DG = 2 GM$ as you mentioned. Also, writing $H = AC \cap BD$ for the centre of the rhombus, $G$ lies on $AH$ with $GH = \dfrac{AH}{3}$.
In $\triangle AGD$, the transversal $NC$ intersects $AD$ and $GD$ internally and $AG$ externally. Applying Menelaus's theorem,
$ \displaystyle \frac{AC}{CG} \cdot \frac{GP}{PD} \cdot \frac{DN}{NA} = 1$
Since $DN = NA$ and $CG = CH + HG = \dfrac{4AH}{3}$, this reads $ \displaystyle \frac{2 AH}{4AH/3} \cdot \frac{GP}{PD} = 1 \implies PD = \frac{3}{2} GP$
and since $GD = GP + PD = \dfrac{5}{2}GP$ while $GD = 2GM$, we obtain $ ~ 5 GP = 4GM$
As $GM = MP - GP = 6 - GP$, this gives $GP = \dfrac{8}{3} \implies PD = \dfrac{3}{2}GP = 4$
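A coordinate sanity check (Python/NumPy, using an arbitrary side-5 rhombus — the ratio $PD/MP$ is affine-invariant, so the particular rhombus does not matter; $MP=6$ then forces $PD=4$):

```python
import numpy as np

# An arbitrary rhombus ABCD with side length 5.
A = np.array([0.0, 0.0]); B = np.array([5.0, 0.0])
D = np.array([3.0, 4.0]); C = B + D - A
M, N = (A + B) / 2, (A + D) / 2          # midpoints of AB and AD

def intersect(p1, p2, p3, p4):
    """Intersection of lines p1p2 and p3p4."""
    d1, d2 = p2 - p1, p4 - p3
    s = np.linalg.solve(np.column_stack([d1, -d2]), p3 - p1)[0]
    return p1 + s * d1

P = intersect(M, D, C, N)                # P = MD ∩ CN
ratio = np.linalg.norm(D - P) / np.linalg.norm(P - M)
assert abs(ratio - 2 / 3) < 1e-12        # PD/MP = 4/6
```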
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4294513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Convergence of series using subsequence. If $\sum^\infty_\mathrm{n=1}a_n$ is a convergent series of positive numbers, and $\{a_{n_i}\}^\infty_\mathrm{i=1}$ is a subsequence of $\{a_{n}\}^\infty_\mathrm{n=1}$,
prove that $\sum^\infty_\mathrm{i=1}a_{n_i}$ converges.
I see several places on this site that talk about the convergence of sequences in this context. Specifically, I'm confused why $\sum^\infty_\mathrm{i=1}a_{n_i}$ , a series , must converge given the above information. If this is a simple proof, then that would be great. If the proof relies on some comparison test, what is the logic being used?
| By Cauchy criterion, there exists $N \in \mathbf{N}$ such that for all $n > m \geq N$, we have
$$|a_m + a_{m + 1} + \ldots + a_{n}| < \epsilon \tag{1}$$
Let $(a_{n_i})$ be an arbitrary subsequence of $(a_n)$.
$$(a_{n_i}) := (a_{n_1},a_{n_2},\ldots)$$
Let $a_{n_j}$ and $a_{n_l}$ be any two terms of the subsequence satisfying $n_l > n_j \geq N$. Then, from (1), it follows that, for all $n_l > n_j \geq N$, we have
$$|a_{n_j} + a_{{n_j}+1} + \ldots + a_{n_l}| < \epsilon$$
Since all the numbers involved are positive, we may keep only the terms of the subsequence and drop the remaining terms, and write for all $n_l > n_j \geq N$:
$$a_{n_{j}} + a_{n_{j+1}} + \ldots + a_{n_l} \le a_{n_j} + a_{n_{j}+1} + \ldots + a_{n_l} < \epsilon$$
Consequently, by the Cauchy Criterion, $\sum_{i=1}^{\infty}a_{n_i}$ is convergent.
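A numerical illustration (not part of the proof): for the convergent positive series $\sum 1/n^2$ and the subsequence $n_i = i^2$, the partial sums of $\sum a_{n_i} = \sum 1/i^4$ are increasing and bounded above by the full sum, hence convergent.

```python
import math

full = sum(1 / n**2 for n in range(1, 100_001))   # partial sum of sum 1/n^2
sub = sum(1 / i**4 for i in range(1, 100_001))    # subsequence n_i = i^2

assert sub <= full                  # dropping positive terms only decreases
assert abs(full - math.pi**2 / 6) < 1e-4          # full sum -> pi^2/6
assert abs(sub - math.pi**4 / 90) < 1e-9          # subsequence sum -> pi^4/90
```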
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4294713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Proving convergence in recurrence relation with recurrence $x_{n+1} = \gamma(x_n - x_{n-1})$ with $\gamma <1$ (momentum gradient descent) Suppose we have a recurrence relation that $x_{n+1} = \alpha(x_n - x_{n-1})$ with $0 < \alpha< 1$.
Then I want to show that $f(x_n) \rightarrow 0$ as $n \rightarrow \infty$ where $f(x) = x^2$.
I think this should hold for any starting points, but if not, let's say $x_0$ is arbitrary and $x_1 = 0$.
This seems intuitive enough, and by plotting it in python it does indeed reliably converge to $0$.
This was obtained by simplifying gradient descent on $x^2$ with added momentum:
i.e, we use momentum gradient descent $x_{n+1} = x_n - \eta g_n $ for some learning rate $\eta$, and with momentum updated gradient $g_n = (1-\gamma)g_{n-1} + \gamma \nabla f(x_n)$. ($g_{-1} = 0$)
I wanted to show that if $\eta > 1$, then this momentum updated gradient descent converges as a solution (in contrast to vanilla GD which wouldn't update) IF we pick an appropriate value of $\gamma$:
So what I did was find that $x_{t+1} = x_t - \eta\big[(1-\gamma)g_{t-1} + \gamma\nabla f(x_t)\big] = x_t(2-\gamma - 2\eta\gamma) - x_{t-1}(1-\gamma)$ using the facts that $\nabla f(x) = 2x$ and $x_t - x_{t-1} = -2\eta g_{t-1}$.
Then I thought to pick $\gamma = 1/2\eta$ which leads me to $x_{t+1} = (1-\frac{1}{2\eta})(x_t - x_{t-1})$
Now I'm a bit stuck showing convergence of $f(x)$ using these updates.
(Note that with this scheme $x_1$ will always be $0$).
Should I be picking a more convenient value of $\gamma$? Or can I show convergence using this recurrence relation?
| If you have the recurrence
$x_n = \gamma(x_{n-1}-x_{n-2})$ with $0 < \gamma < 1$ you can make the ansatz
$x_n = \lambda^n$ and see that $\lambda$ must satisfy the equation
$\lambda^2 -\gamma\lambda+\gamma=0$. This has roots
$\lambda_{\pm} =\frac{\gamma\pm \sqrt{\gamma^2-4\gamma}}{2} $
Since $\gamma < 1$, $\gamma^2 < \gamma< 4\gamma$ and hence we have complex roots with positive real part less than 1. They are complex conjugate with modulus $\vert \lambda_{\pm}\vert = \sqrt{\gamma}$. So we have that $\vert\lambda_{\pm}\vert^n = \gamma^{n/2}\to 0$ as $n\to\infty$.
Then our solution is $x_n = A_+\lambda_+^n+A_-\lambda_-^n$. And for any initial condition
\begin{align}\lim_{n\to\infty}\vert x_n\vert \leq \lim_{n\to\infty}\left(\vert A_+\vert\,\vert\lambda_+\vert^n+\vert A_-\vert\,\vert\lambda_-\vert^n\right) = 0\ .
\end{align}
By the continuity of $f(x)=x^2$ we have $f(x_n)\to 0$ as $n\to\infty$.
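The predicted $\gamma^{n/2}$ decay is easy to confirm numerically (Python; $\gamma$ and $x_0$ are arbitrary choices, with $x_1 = 0$ as in the question):

```python
gamma = 0.7
x = [1.3, 0.0]                          # x_0 arbitrary, x_1 = 0
for n in range(2, 401):
    x.append(gamma * (x[-1] - x[-2]))   # x_n = gamma (x_{n-1} - x_{n-2})

# |x_n| is bounded by C * gamma^(n/2), so f(x_n) = x_n^2 -> 0
assert abs(x[-1]) < 100 * gamma ** (400 / 2)
assert x[-1] ** 2 < 1e-20
```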
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4294876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The continuity of $f_n(x)=\int_0^{1/n} n f(x-u) du.$ Let $f:\mathbb R\to \mathbb R$ be continuous. Then, prove that $f_n(x)=\displaystyle\int_0^{1/n} n f(x-u) du$ is continuous on $\mathbb R$ for each $n\in \mathbb N$.
I tried but I couldn't do well.
Fix $n \in \mathbb N$ and fix $a\in \mathbb R$. I'll prove $f_n$ is continuous at $a$.
Let $\epsilon >0$.
(I have to find some $\delta >0.$)
When $|x-a|<\delta,$
\begin{align}
|f_n (x)-f_n(a)|
&=\left| \int_0^{1/n} n f(x-u)-nf(a-u) du\right| \\
&\leqq n \int_0^{1/n} |f(x-u)-f(a-u)| du.
\end{align}
If I could find $\delta>0 $ s.t. $|x-a|<\delta \Rightarrow |f(x-u)-f(a-u)|< \epsilon,$ I would reach the conclusion.
I think I have to use the continuity of $f$, but both $f(x-u)$ and $f(a-u)$ involve the variable $u$, so I don't know how I should use the continuity of $f$.
Thanks for your help.
| In this single-variable case, one can make a simple change of variables $t=x-u$ to get
\begin{align}
f_n(x):=\int_0^{1/n}nf(x-u)\,du=-\int_{x}^{x-\frac{1}{n}} nf(t)\,dt.
\end{align}
So, if one defines $F:\Bbb{R}\to\Bbb{R}$ as $F(x)=\int_0^xf(t)\,dt$, then $f_n(x)=-n[F(x-\frac{1}{n})-F(x)]$. Note that from the fundamental theorem of calculus (which we can apply since $f$ is continuous), $F$ is actually $C^1$ (because $F'=f$ is continuous). Therefore, each $f_n$ is also $C^1$.
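A concrete symbolic check (SymPy) with $f=\cos$ — any continuous $f$ would do — confirming that $f_n(x) = n[F(x) - F(x-\tfrac1n)]$:

```python
import sympy as sp

x, u = sp.symbols('x u', real=True)
n = 5
f = sp.cos(x)                                  # a concrete continuous f
fn = sp.integrate(n * f.subs(x, x - u), (u, 0, sp.Rational(1, n)))
F = sp.integrate(f, x)                         # an antiderivative, F' = f
assert sp.simplify(fn - n * (F - F.subs(x, x - sp.Rational(1, n)))) == 0
```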
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4295288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How to prove that $v_3t_1=t_2v_1$ for projectile motion I have the following equation before me:
Let $\alpha$, $\alpha-\beta$ and $\alpha-2\beta$ be the angles made with the horizontal by a projectile at three points $A$, $B$ and $C$ respectively where velocities are $v_1$,$v_2$ &$v_3$. Let $t_1$ and $t_2$ be the times required to describe arcs $AB$ and $BC$ respectively.
I have to prove that $v_3t_1=t_2v_1$.
To begin with, I have the following set of equations which is obtained by considering velocity in $x$ direction.
$v_1\cos(\alpha)=v_2\cos(\alpha-\beta)=v_3\cos(\alpha-2\beta)$
By considering velocity in $y$ direction, I get the following equation:
$v_2\sin(\alpha-\beta)=v_1\sin(\alpha)-gt_1$
$v_3\sin(\alpha-2\beta)=v_2\sin(\alpha-\beta)-gt_2$
These two equations can be combined into one given below:
$\dfrac{t_1}{t_2}=\dfrac{v_1\sin(\alpha)-v_2\sin(\alpha-\beta)}{v_2\sin(\alpha-\beta)-v_3\sin(\alpha-2\beta)}$
How can I get the required result by application of these equations? That's where I am stuck. Please suggest how to proceed further.
| Take your last equation. In the numerator write $v_2$ in terms of $v_1$, while in the denominator you write $v_2$ in terms of $v_3$. You get these relationships from the velocities you wrote in the horizontal direction.
$$\frac{t_1}{t_2}=\frac{v_1\sin\alpha-v_1\frac{\cos\alpha}{\cos(\alpha-\beta)}\sin(\alpha-\beta)}{v_3\frac{\cos(\alpha-2\beta)}{\cos(\alpha-\beta)}\sin(\alpha-\beta)-v_3\sin(\alpha-2\beta)}$$
Now take $v_1/v_3$ in front, and use common denominator for both numerator and denominator (notice that it's the same)
$$\frac{t_1}{t_2}=\frac{v_1}{v_3}\frac{\sin\alpha\cos(\alpha-\beta)-\cos\alpha\sin(\alpha-\beta)}{\sin(\alpha-\beta)\cos(\alpha-2\beta)-\cos(\alpha-\beta)\sin(\alpha-2\beta)}$$
Now use $\sin(x-y)=\sin x\cos y-\cos x\sin y$ and you see
$$\frac{t_1}{t_2}=\frac{v_1}{v_3}\frac{\sin(\alpha-(\alpha-\beta))}{\sin(\alpha-\beta-(\alpha-2\beta))}=\frac{v_1}{v_3}\frac{\sin\beta}{\sin\beta}=\frac{v_1}{v_3}$$
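The whole chain can also be verified symbolically (SymPy), starting only from conservation of the horizontal component and the two vertical-velocity equations:

```python
import sympy as sp

a, b, v1, g = sp.symbols('alpha beta v_1 g', positive=True)

# Horizontal component is conserved: v_i cos(angle_i) = v_1 cos(alpha)
v2 = v1 * sp.cos(a) / sp.cos(a - b)
v3 = v1 * sp.cos(a) / sp.cos(a - 2 * b)

# Vertical components give the elapsed times
t1 = (v1 * sp.sin(a) - v2 * sp.sin(a - b)) / g
t2 = (v2 * sp.sin(a - b) - v3 * sp.sin(a - 2 * b)) / g

assert sp.simplify(sp.expand_trig(v3 * t1 - v1 * t2)) == 0   # v_3 t_1 = v_1 t_2
```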
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4295441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Factorize $x^8+2$ over $\mathbb{F}_3$ using cyclotomic cosets
Determine the splitting field of $x^8 + 2$ over $\mathbb{F}_3$ and factor it into irreducible polynomials over $\mathbb{F}_3$ using cyclotomic cosets.
I'm having issues in factorizing this polynomial. can anyone see if I am worng with the computations or something else please?
My attempt: By a Lemma we had in the lecture:
Lemma: The splitting field $\mathbb{F}_{q^s}$ $\text{split}(f,\mathbb{F}_q)$, of $f=x^n-1$, is characterized through the smallest integer $s$ such that $n|q^s-1$.
Notice that over $\mathbb{F}_3$ we have $x^8+2\equiv x^8-1$.
For $n=8$ and $q=3$, $n|3^2-1=8\Rightarrow \text{split}(x^8+2,\mathbb{F}_3)=\mathbb{F}_{3^2}=\mathbb{F}_9$.
Consider that 2 is a root of $x^8-1$, i.e. $2^8-1=255\equiv 0\mod3$
Then I find the cyclotomic cosets $C_0=\{0\}$, $C_1=\{1,3\}$, $C_2=\{2,6\}$, $C_3=\{1,3\}=C_1$, $C_4=\{4\}$, $C_5=\{5,7\}$
So the factorization gives: $x^8-1=(x-2^0)\cdot(x-2^1)(x-2^3)\cdot(x-2^2)(x-2^6)\cdot(x-2^4)\cdot(x-2^5)(x-2^7)\equiv(x+2)(x^2+2x+1)(x^2+x+2)(x+2)(x^2+2x+1)$
But checking the L.H.S., this doesn't give $x^8-1$...
| Here $x^8+2 = x^8-1 = (x^4-1)(x^4+1)$ with
$x^4-1 = (x^2-1)(x^2+1)$ and $x^2-1 = (x-1)(x+1)$.
Now $x^2+1 = (x+i)(x-i)$ where $i$ is a primitive 4th root of unity in an extension field of $F_3$.
Here $2^2 = 4 = 1$ in $F_3$, so $F_3^\times$ has order $2$ and contains no element of order $4$. Thus $x^2+1$ does not factor over $F_3$; its roots lie in $F_9$, matching the coset $C_2=\{2,6\}$.
The roots of $x^4+1$ are the primitive 8th roots of unity. $F_3$ contains no primitive 8th root of unity, but $F_9$ does, so $x^4+1$ splits into two irreducible quadratics, corresponding to the cosets $\{1,3\}$ and $\{5,7\}$: indeed $x^4+1 = (x^2+x+2)(x^2+2x+2)$ over $F_3$.
This gives the overall decomposition $x^8+2 = (x+1)(x+2)(x^2+1)(x^2+x+2)(x^2+2x+2)$ over $F_3$.
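One can double-check the irreducible factorization over $\mathbb{F}_3$ with SymPy (note that $x^4+1$ does split into two quadratics over $F_3$, matching the two size-2 cosets $\{1,3\}$ and $\{5,7\}$):

```python
from sympy import symbols, Poly

x = symbols('x')
factors = [x + 1, x + 2, x**2 + 1, x**2 + x + 2, x**2 + 2*x + 2]

prod = Poly(factors[0], x, modulus=3)
assert prod.is_irreducible
for f in factors[1:]:
    fp = Poly(f, x, modulus=3)
    assert fp.is_irreducible                    # irreducible over F_3
    prod = prod * fp
assert prod == Poly(x**8 + 2, x, modulus=3)     # product is x^8 + 2 over F_3
```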
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4295757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Prove this function is not differentiable For a function $f$ of $(x, y)$, $$f(x, y) = \begin{cases}
\frac{2x^2y}{x^4+y^2} & (x, y) \not= (0, 0) \\
0 & (x, y) = (0, 0) \\
\end{cases}
$$
If the directional derivative of this function at $(0, 0)$ is $D_{\bar u}$, and given $D_\bar uf(0, 0)\not=\nabla f(0, 0) \bullet \bar u$, where $\bar u$ is a random unit vector $(u_1, u_2)$, use this to explain why $f$ is not differentiable at $(0, 0)$.
The solution I have is, first find $D_\bar u$. This value is $0$. Also $\nabla f$ is $(0, 0)$. So how can I use these values and $D_\bar uf(0, 0)\not=\nabla f(0, 0) \bullet \bar u$ to prove that $f$ is not differentiable? I know I can also show this is not a continuous function, but I want to prove it this way.
| If $u = (u_1, u_2) \neq (0, 0)$, then
\begin{align*}
D_u f(0, 0) &= \lim_{t \to 0} \frac{f(tu) - f(0, 0)}{t} \\
&= \lim_{t \to 0} \frac{2(tu_1)^2(tu_2)}{t((tu_1)^4 + (tu_2)^2)} \\
&= \lim_{t \to 0} \frac{2t^3 u_1^2u_2}{t^5u_1^4 + t^3u_2^2} \\
&= \lim_{t \to 0} \frac{2u_1^2u_2}{t^2u_1^4 + u_2^2} \\
&= \frac{2u_1^2u_2}{u_2^2} = \frac{2u_1^2}{u_2},
\end{align*}
provided that $u_2 \neq 0$. This should linearly depend on $u$ if $f$ were differentiable at $0$, but it doesn't. There is no vector $v$ such that $D_uf(0, 0) = v \cdot u$. This shows that $f$ is not differentiable at $0$.
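A numerical illustration: the finite difference approximates $D_uf(0,0)$ for the particular unit vector $u=(\tfrac1{\sqrt2},\tfrac1{\sqrt2})$, while both partial derivatives at the origin vanish, so no gradient vector can reproduce $2u_1^2/u_2$.

```python
import math

def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else 2 * x * x * y / (x**4 + y * y)

u1 = u2 = 1 / math.sqrt(2)
t = 1e-6
dd = (f(t * u1, t * u2) - f(0.0, 0.0)) / t      # finite-difference D_u f(0,0)

assert abs(dd - 2 * u1**2 / u2) < 1e-6          # = sqrt(2), not 0
assert f(t, 0.0) == 0.0 and f(0.0, t) == 0.0    # both partials at 0 vanish
```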
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4296112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How many 20-digit numbers can be formed from {1,2,3,4,5,6,7,8,9} such that no 2 consecutive digit is both odd How many 20-digit numbers can be formed from {${1,2,3,4,5,6,7,8,9}$} such that no 2 consecutive digit is both odd. (Repetition allowed)
I've noticed that the number of odd digit in the number must be less than 11. But i can't progress more than that.
Any help is appreciated. Thanks in advance.
| Hint. Let $x_n$ be the number of $n$-digit numbers which can be formed from $\{1,2,3,4,5,6,7,8,9\}$ such that no two consecutive digits are both odd.
We split $x_n$ into two parts: $a_n$ counts the numbers where the last digit is odd and $b_n$ counts the ones where the last digit is even. Then $x_n=a_n+b_n$.
Moreover we can easily check that $a_1=5$, $b_1=4$, $a_2=4\cdot 5=20$, $b_2=9\cdot4=36$.
Is there any recurrences which involve $a_n$, $b_n$, $a_{n+1}$ and $b_{n+1}$?
Can you take it from here?
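A sketch of how such recurrences can be sanity-checked (Python; here $a$ counts valid strings ending in an odd digit and $b$ those ending in an even digit, using the rule that an odd digit may only follow an even one), validated against brute force for small $n$:

```python
from itertools import product

def count(n):
    a, b = 5, 4                      # n = 1: five odd digits, four even
    for _ in range(n - 1):
        a, b = 5 * b, 4 * (a + b)    # a_{n+1} = 5 b_n,  b_{n+1} = 4 (a_n + b_n)
    return a + b

def brute(n):
    return sum(
        1 for s in product(range(1, 10), repeat=n)
        if not any(s[i] % 2 and s[i + 1] % 2 for i in range(n - 1))
    )

assert all(count(n) == brute(n) for n in range(1, 6))
```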
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4296305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
I am having trouble coming up with series solutions to differential equations I have a problem that needs to be solved by Power Series Method. The equation is $$y'+y=2$$
I know this is trivial to solve as a separable equation, so I know what the answer is: $$y(x)=c_1e^{-x}+2$$ but I can't derive it using the Taylor Series expansion. Using power series the DE becomes $$\sum_{n=0}^{\infty}\left[c_{n+1}\cdot(n+1)+c_n\right]x^n=2$$ My recurrence relationship is $$c_{n+1}=\frac{2-c_n}{n+1}$$ I worked it out to $n=5$; I have the general solution as $$c_{m+1}=\frac{(-1)^m(c_0+2[what\ I\ can't\ find])}{(m+1)!}$$
at $n=5$ $$[what\ I\ can't\ find]=2(4!-3!+2!)$$ If you increase n then the number of factorial terms increases. Any help is appreciated.
| Write your $2$ as $$2=2\cdot x^0+0\cdot x^1+0\cdot x^2+...$$
Since your equation is valid for any $x$, all the coefficients on the left hand side must equal to the coefficients on the right hand side. So for $n=0$ you have $$c_1(0+1)+c_0=2$$
For every other $n$ you have $$c_{n+1}(n+1)+c_n=0$$
The recurrence relationship is then $$c_n=\frac{(-1)c_{n-1}}{n}=\frac{(-1)^2c_{n-2}}{n(n-1)}=\frac{(-1)^3c_{n-3}}{n(n-1)(n-2)}=...$$
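With these corrected recurrences ($c_1 = 2 - c_0$ from the $n=0$ equation, then $c_{n+1} = -c_n/(n+1)$ for $n\ge 1$), the series rebuilds $y = 2 + (c_0-2)e^{-x}$; a SymPy check of the first several coefficients:

```python
import sympy as sp

x, c0 = sp.symbols('x c0')
N = 8
c = [c0, 2 - c0]                     # c_0 free, c_1 = 2 - c_0
for n in range(1, N):
    c.append(-c[n] / (n + 1))        # c_{n+1} = -c_n/(n+1) for n >= 1

series = sum(c[n] * x**n for n in range(N + 1))
closed = 2 + (c0 - 2) * sp.exp(-x)   # the separable-equation solution
assert sp.simplify(sp.series(closed, x, 0, N + 1).removeO() - series) == 0
```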
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4296675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
An isomorphism $ \mathbb{Z}_{(p)} / p \mathbb{Z}_{(p)} \rightarrow \mathbb{F}_p$ Since localization commutes with taking quotients, we know $ \mathbb{Z}_{(p)} / (p) \mathbb{Z}_{(p)}$ and $\mathbb{F}_p$, where $p$ is prime, are isomorphic, but I am struggling to prove that the natural map $\frac{r}{s} \mapsto [r]_p$ is a homomorphism, or even that it is well-defined (which would allow me to use the $1$st isomorphism theorem to prove the initial result, since the kernel is clearly $(p) \mathbb{Z}_{(p)}$). For example: Suppose $f(\frac{r}{s} + \frac{u}{t})= f(\frac{r}{s})+f(\frac{u}{t}) \Leftrightarrow f(\frac{rt+us}{st})=[r]_p+[u]_p \Leftrightarrow [rt]_p+[us]_p=[r]_p+[u]_p$, and this does not lead me anywhere. Am I thinking of the wrong map? It seems very intuitive.
| You are using the wrong map. Think of it this way: an element of $\mathbb{Z}_{(p)}$ is a fraction $\frac{r}{s}$ where $s$ is an integer not divisible by $p$. To get a homomorphism, you don't want to just map this to $r$, since that would be ignoring the value of $s$. In particular, $\frac{1}{s}$ needs to map to an element which when multiplied by $s$ gives $1$. That is, it should map to the multiplicative inverse of $s$ mod $p$. So, $\frac{r}{s}$ should map to $rs^{-1}$ mod $p$ where $s^{-1}$ is the multiplicative inverse of $s$ mod $p$.
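So the map is $\frac{r}{s}\mapsto rs^{-1} \bmod p$. A quick check (Python, taking $p=7$ and representing elements of $\mathbb{Z}_{(7)}$ as `Fraction`s with denominator prime to $7$ — `Fraction` reduces automatically, so well-definedness is built in) that it respects addition and multiplication:

```python
from fractions import Fraction
import random

p = 7

def phi(fr):
    """r/s -> r * s^{-1} mod p; defined because p does not divide s."""
    r, s = fr.numerator, fr.denominator
    assert s % p != 0
    return r * pow(s, -1, p) % p

def rand_elt(rng):
    while True:
        s = rng.randint(1, 50)
        if s % p:                                  # denominator prime to p
            return Fraction(rng.randint(-50, 50), s)

rng = random.Random(1)
for _ in range(200):
    a, b = rand_elt(rng), rand_elt(rng)
    assert phi(a + b) == (phi(a) + phi(b)) % p     # additive
    assert phi(a * b) == (phi(a) * phi(b)) % p     # multiplicative
```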
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4296803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find all positive integers $n$, such that $(\left\lfloor \sqrt{n} \right\rfloor^{2} +2) | (n^2 + 1) $ I tried to look at the cases when $n$ is a perfect square. Then $\left\lfloor \sqrt{n} \right\rfloor^{2} +2= n+2$, $ n^2 + 1 =(n-2)(n+2) + 5$. Then we must have $(n+2)|5$.
But only $1$ and $5$ divide $5$. Thus, $n=3$, but that is not a solution since we assumed $n$ to be a perfect square. The problem therefore has no perfect-square solutions.
I'm not sure how relevant this is to the general case, but I did not manage to get any further.
Thank you for your help in advance.
| Let $m^2=n-k$ be the largest square less than or equal to $n$.
$\implies0\le k\le 2m$
We have $n-k+2|n^2+1$
$$\implies n-k+2|1+n(k-2)$$
$$\implies n-k+2|k^2-4k+5$$
$$\implies m^2+2|k^2-4k+5$$
$\dfrac{k^2-4k+5}{m^2+2}=1,2\text{ or }3$ otherwise the inequality $0\le k\le 2m$ is violated.
If $\dfrac{k^2-4k+5}{m^2+2}=1$ we then have $(k-m-2)(k+m-2)=1$.
The only solutions of the above equality are $(m,k)=(0,1),(0,3)$, but both have $k>2m$, violating $0\le k\le 2m$, so no solution exists.
If $\dfrac{k^2-4k+5}{m^2+2}=2$ we then have $(k-2)^2+1=2m^2+4$, which forces $k$ to be odd. But then the LHS is $\equiv 2 \pmod 8$, while the RHS is $\equiv 4$ or $6 \pmod 8$. So no solution exists.
If $\dfrac{k^2-4k+5}{m^2+2}=3$ we then have $(k-2)^2+1=3m^2+6$. $3$ never divides the LHS so there are no solutions.
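So there are no solutions at all. A brute-force confirmation over a modest range (Python):

```python
import math

hits = [n for n in range(1, 200_000)
        if (n * n + 1) % (math.isqrt(n) ** 2 + 2) == 0]
assert hits == []      # no positive integer n works in this range
```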
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4296977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Let $a(n)=\sum_{r=1}^{n}(-1)^{r-1}\frac{1}{r}$. Prove that $a(2n) \neq 1$ for any value $n$. Let $$a(n)=\sum_{r=1}^{n}(-1)^{r-1}\frac{1}{r}.$$
I experienced this function while doing a problem, I could do the problem, but I got stuck at a point where I had to prove that $a(2n)<1$ for all $n$. I proved that $a(2n) \le 1$ for all $n$ so in order to prove what the question demands, I need to prove that $a(2n) \neq 1$ for any value $n$. Nothing striked my mind as of now how to prove it.
So can someone help me proving that $a(2n) \neq 1$ for any value $n$?
| $\frac{1}{n}$ is strictly decreasing, so $a(2n+2)-a(2n)=\frac{1}{2n+1}-\frac{1}{2n+2}>0$, i.e. $a(2n)$ is strictly increasing. $$\lim_{n\to\infty}a(2n)=\ln 2,$$ so
$$0\leq a(2n)<\ln 2< 1$$
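Numerically (Python), the even partial sums increase monotonically toward $\ln 2 \approx 0.693$, never reaching $1$:

```python
import math

def a(n):
    return sum((-1) ** (r - 1) / r for r in range(1, n + 1))

vals = [a(2 * n) for n in range(1, 200)]
assert all(x < y for x, y in zip(vals, vals[1:]))   # strictly increasing
assert all(v < math.log(2) for v in vals)           # bounded above by ln 2 < 1
```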
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4297176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Power rule for fundamental theorem of calculus Find $\frac{dy}{dx}$, where $y$ is given by
$$y=\left( \int_{0}^{x} (t^3+1)^{10}dt\right)^3$$
The solution says
$$\frac{dy}{dx}=3\left( \int_0^x (t^3+1)^{10}dt\right) \frac{d}{dx}\left( \int_0^x (t^3+1)^{10}dt\right) = 3(x^3+1)^{10}\left( \int_0^x (t^3+1)^{10}dt\right)$$
I'm confused because I thought that the power rule needs to be applied. So that the correct answer should be:
$$\frac{dy}{dx}=3\left( \int_0^x (t^3+1)^{10}dt\right)^2 \frac{d}{dx}\left( \int_0^x (t^3+1)^{10}dt\right) = 3((x^3+1)^{10})^{20}\left( \int_0^x (t^3+1)^{10}dt\right)$$
| It is very likely that the "solution" just has a typo: missing power of $2$ in the factor $(\int_0^x(t^3+1)^{10}\;dt)$.
Your first step is correct. But your final answer takes the power of $2$ in the wrong place:
$$\frac{dy}{dx}=3\left( \int_0^x (t^3+1)^{10}dt\right)^2 \frac{d}{dx}\left( \int_0^x (t^3+1)^{10}dt\right)
= 3\left( \int_0^x (t^3+1)^{10}dt\right)^2(x^3+1)^{10}$$
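A SymPy confirmation of the corrected derivative (chain rule plus the fundamental theorem of calculus):

```python
import sympy as sp

x, t = sp.symbols('x t')
inner = sp.integrate((t**3 + 1) ** 10, (t, 0, x))   # the inner integral
y = inner ** 3
lhs = sp.diff(y, x)
rhs = 3 * inner**2 * (x**3 + 1) ** 10               # chain rule + FTC
assert sp.expand(lhs - rhs) == 0
```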
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4297615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Permutations with a fixed element How would you understand the following set of permutations
$$S' :=\{ \pi\in S_n\mid \pi(n)=n\}$$
is the subset of all the permutations that fix the $n$-element?
Is it simply $S_{n-1}$? For example, consider $S_3$, then the subset contains all the permutation that fix the $3$. So the $S'=\{ (213) \}$?
| The fixed points of a function $f$ are precisely those elements $x$ in the domain of $f$ such that $f(x)=x$. Since each permutation is a function and your set $S'$ is, by definition, the set of all permutations $\pi$ such that $\pi(n)=n$ (and $S'$ contains no other elements), it follows that $S'$ is precisely the set of all permutations that fix that $n$ element.
Note that $S'\cong S_{n-1}$.
By the way, your example is mistaken. Note that $(213)\notin S'$ when $n=3$ because it does not fix $3$; it sends $3$ to $2$.
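The identification $S'\cong S_{n-1}$ can be made concrete (Python, using one-line notation and $n=4$): dropping the fixed last entry is a bijection onto the permutations of $\{1,\dots,n-1\}$.

```python
from itertools import permutations

n = 4
S_prime = [p for p in permutations(range(1, n + 1)) if p[n - 1] == n]

# dropping the fixed final entry gives exactly the permutations of {1,...,n-1}
assert sorted(p[:-1] for p in S_prime) == sorted(permutations(range(1, n)))
assert len(S_prime) == 6    # (n-1)! = 3! = 6
```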
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4297898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is $k[x,y]_{(x,y)}$? After intensive internet research, I couldn't find any source that gave me a solution, probably the question is too easy. It would be great, if someone can tell me if I'm correct or not.
Let $k$ be any field (I'm usually working with $\mathbb{C}$, but every other field should be okay as well), is the polynomial ring localized at $(x,y)$ isomorphic to the formal power series ring, i.e.
$$
k[x,y]_{(x,y)} \simeq k[[x,y]]?
$$
Here $(x,y)$ is seen as an ideal, so the localization at $(x,y)$ could be interpreted as dividing by $k[x,y]\setminus (x,y)$.
Why should this be true? As a heuristic: On the LHS, only $x$ and $y$ (and multiples of these) have no inverse, the same is true for the RHS.
Does this hold for an arbitrary finite amount of variables? I would say yes.
Am I correct and it really is this easy or am I just standing on the hose?
| No, they are not necessarily isomorphic. For one thing, if you choose $k$ to be countable, then $k[x,y]_{(x,y)}$ is countable, but $k[[x,y]]$ is uncountable.
For another thing, if I recall correctly, the former one is not complete with respect to its maximal ideal, whereas the second one is.
But they do superficially look similar: two local algebras whose maximal ideal is generated by $(x,y)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4298017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Minimizing the variational problem $ I(y(x)) = \int_{0}^1 e^{-y'(x)^2}dx $ This is a question from a mathematical contest.
Let $X= \{ y \in C^1[0,1]| y(0)=y(1)=0 \}$. Define $I: X \to \mathbb{R}$ by:
$$ I(y(x)) = \int_{0}^1 e^{-y'(x)^2}dx $$.
Then which of the following are true:
*
*$I$ doesn't attain its infimum.
*$I$ attains its infimum at a unique $y \in X$.
*$I$ attains its infimum at exactly two elements $y(x)\in X$.
*$I$ attains its infimum at infinitely many $y \in X$.
What I tried:
Using the Euler's condition the extremal $y(x)$ satisfies:
$$ \frac{\partial F}{\partial y} - \frac{\partial^2 F}{ \partial x \partial y'}- y' \frac{\partial^2 F}{\partial y \partial y'}- y'' \frac{\partial^2 F}{\partial y'^2} =0$$
Now since our $F$ is independent of $x,y$, Euler's equation will give us that $y(x)= Ax+B$.
Now using $y(0)=0=y(1)$, we get the extremal as $y \equiv 0$.
This gives the curve which maximizes the variational problem. I want to know how to proceed when we want to minimize the value? One guess I can make is by taking some function for which $y'$ is large enough. But is there some more formal method?
Thanks!
| If you know that the Euler-Lagrange equation can only have the solution $y = 0$, then you know the infimum is not attained. But as a hint, $\int_0^1 y^\prime(x) \, dx = y(1)-y(0) = 0$, which means you can take essentially any $y^\prime := g \in C^0([0,1])$, as long as its integral over $[0,1]$ is zero. :)
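As a concrete sanity check on this answer (my own illustration, not part of the original post): take the admissible functions $y_A(x) = A\sin(\pi x) \in X$. The values $I(y_A)$ can be pushed arbitrarily close to $0$ by increasing $A$, while $I(y) > 0$ for every $y \in X$, so the infimum $0$ is never attained.

```python
import math

def I(A, n=20000):
    # midpoint-rule approximation of I(y_A) for y_A(x) = A*sin(pi*x),
    # whose derivative is y_A'(x) = A*pi*cos(pi*x)
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        yp = A * math.pi * math.cos(math.pi * x)
        total += math.exp(-yp * yp) * h
    return total

values = [I(A) for A in (0, 1, 5, 50)]
print(values)  # decreasing toward 0, but every value is still positive
```

Note $I(y_0) = 1$ for $y \equiv 0$ (the maximizer), while the values shrink toward the unattained infimum $0$.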
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4298172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is this sufficient to prove that a multiple of a Lipschitz function is also Lipschitz? If we have a Lipschitz function, $f: S \rightarrow \mathbb{R}$ and $h(x) = 3f(x)$, can I show that $h: S \rightarrow \mathbb{R}$ is also Lipschitz by the following?
Assume $f: S \rightarrow \mathbb{R}$ is Lipschitz. Then there is $K > 0$ such that for all $x,y \in S$, $|f(x) - f(y)| < \frac{K}{3}|x - y|$. Then $|3f(x) - 3f(y)| = 3|f(x) - f(y)| < \frac{3K}{3}|x - y| = K|x - y|$. So $3f(x) = h(x)$ is also Lipschitz.
I feel there is a hole in my logic somewhere but cannot grasp quite where.
| It is perfectly fine. Indeed, note that the space of Lipschitz functions is a vector space, so this makes perfect sense.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4298595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is this logical expression using $P$, $Q$, $R$ logically equivalent to one using only $P$ and $Q$? Formula 1:
$$(¬P ∧ ¬Q ∧ ¬R) ∨ (¬P ∧ ¬Q ∧ R) ∨ (P ∧ Q ∧ ¬R) ∨ (P ∧ Q ∧ R)$$
P Q R F
0 0 0 1
0 0 1 1
0 1 0 0
0 1 1 0
1 0 0 0
1 0 1 0
1 1 0 1
1 1 1 1
Formula 2:
$$(¬P ∧ ¬Q) ∨ (P ∧ Q)$$
P Q ((¬P ∧ ¬Q) ∨ (P ∧ Q))
0 0 1
0 1 0
1 0 0
1 1 1
How are these two formulas logically equivalent? To my understanding, when two formulas are logically equivalent they have identical truth values under all interpretations, yet these two formulas produce completely different truth tables: formula 1 has 3 variables and formula 2 has only 2 to start with. I don't understand how they can be logically equivalent.
| Let the $3$-input function $F_1(P,Q,R)$ denote Formula 1's truth value, and likewise for Formula 2.
*
*The key insight is that in Formula 1, $R$ is a ‘bogus’
(propositional) variable: whatever $P$ and $Q$'s values are,
$F_1(P,Q,0)$ and $F_1(P,Q,1)$ are either both true or both false. So,
the input combination $(P,Q)$ fully determines Formula 1's truth
value. To determine whether Formula 1 is true, there is never a need
to consider input $R$'s truth value. $$F_1(P,Q,R)=F_1(P,Q).$$
*And, since each choice of $(P,Q)$ returns the same truth value for
Formulae 1 and 2, these two formulae must therefore be logically
equivalent. $$F_2(P,Q)=F_1(P,Q)=F_1(P,Q,R).$$ In fact, Formula 2 is a
simplified version of Formula 1.
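The collapse of $R$ can be checked mechanically. Here is a small brute-force verification (my own addition, not part of the original answer) over all eight input combinations:

```python
from itertools import product

def f1(P, Q, R):
    # Formula 1, written directly from its disjunctive normal form
    return ((not P and not Q and not R) or (not P and not Q and R)
            or (P and Q and not R) or (P and Q and R))

def f2(P, Q):
    # Formula 2
    return (not P and not Q) or (P and Q)

# R never changes the value of f1, so f1 collapses to a function of (P, Q)
same = all(f1(P, Q, R) == f2(P, Q)
           for P, Q, R in product([False, True], repeat=3))
print(same)  # True
```

Both formulas are true exactly when $P$ and $Q$ have the same truth value, regardless of $R$.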
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4298943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Mean value of $\Omega(n)$ is $\log\log n$ This question is from my number theory assignment and I was unable to solve it. I have been following Dekonick and Luca.
Let $\Omega(n)$ be the number of prime divisors of n, counted with multiplicity ie the number of prime powers dividing n, for instance $\Omega(12) = 3$. Show that mean value of $\Omega(n)$ is $\log\log n$: $\frac{1}{N}\sum_{n\leq N} \Omega(n) = \log\log N+O(1)$.
Attempt: I thought of using Abel's Identity : $\sum_{n\leq N} f(n) = f(N)[N]- \int_{1}^x [t] f'(t)\,dt$.
Here $f(n) = f(p_1^{x_1} \cdots p_r^{x_r}) = x_1 + \dots + x_r$
So, I got $\sum_{n\leq N} f(n) = (x_1 + \dots + x_r)[N]- \int_{1}^{x} [t] (x_1+\dots+x_r) \log t\,dt$. Dividing by $N$, $((x_1 + \dots + x_r)[N])/N =O(1)$, and the other term after simplifying I got as $$(x_1+\dots+x_r)(\log N -1-(N\log N)/2+N/4+1/N(3/4))$$ which is not equal to what has to be proved.
So, please help.
| By definition, we know
\begin{aligned}
0\le\sum_{n\le N}[\Omega(n)-\omega(n)]
&=\sum_{n\le N}\sum_{p^a\|n,a>1}a\le\sum_{a\ge2}a\sum_{p\le N}\sum_{\substack{n\le N\\p^a|n}}1 \\
&\le\sum_{a\ge2}a\sum_{p\le N}{N\over p^a}\le N\sum_{p\le N}\sum_{a\ge2}{a\over p^a}
\end{aligned}
To evaluate the rightmost sum, we recall the properties of geometric series. That is, for $|z|<1$ there is
$$
{1\over1-z}=1+z+\sum_{r\ge2}z^r
$$
Differentiating on both side, we have
$$
{1\over(1-z)^2}=1+\sum_{r\ge2}rz^{r-1}
$$
Multiplying $z$ on both side gives
$$
\sum_{r\ge2}rz^r={z-z(z^2-2z+1)\over(1-z)^2}={z^2(2-z)\over(1-z)^2}
$$
This indicates that
$$
\sum_{p\le N}\sum_{a\ge2}{a\over p^a}=\sum_{p\le N}{p^{-2}(2-p^{-1})\over(1-p^{-1})^2}\le\sum_{p\le N}{2\over(p-1)^2}
$$
Since the right most sum is convergent, we conclude
$$
\sum_{n\le N}[\Omega(n)-\omega(n)]=\mathcal O(N)
$$
which indicates that
$$
\sum_{n\le N}\Omega(n)=\sum_{n\le N}\omega(n)+\mathcal O(N)\tag1
$$
which converts our problem into a much more convenient one:
\begin{aligned}
\sum_{n\le N}\omega(n)
&=\sum_{n\le N}\sum_{p|n}1=\sum_{p\le N}\sum_{\substack{n\le N\\p|n}}1 \\
&=\sum_{p\le N}\left\lfloor\frac Np\right\rfloor=N\sum_{p\le N}\frac1p+\mathcal O(N)
\end{aligned}
To continue, we summon Mertens' formula (a superior version is presented as Theorem 427 in Hardy & Wright):
$$
\sum_{p\le N}\frac1p=\log\log N+\mathcal O(1)
$$
so that
$$
\sum_{n\le N}\omega(n)=N\log\log N+\mathcal O(N)\tag2
$$
Combining (1) and (2) gives the desired result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4299135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Probability that we get X white marbles. I've been trying to find a way to solve this question for some time now, but nothing seems to work.
Suppose we have a box. Inside it are 1 white and 1 black marble. We play the game for n rounds, where for each round we choose randomly a marble and then put it back along with an extra of the same colour. So, after round 2, we'd have 1 white and 2 black or 2 white and 1 black, etc.
I am looking for the probability that the number of white marbles, X, is x by the end of the game (after n rounds). Therefore:
$$P[X=x]$$
where $x \in \{1,\dots,1+n\}$.
I know that by the end of the game I'll have 2+n marbles, black and white. I also have made a diagram that shows me that I get 1 more state the more I play.
I originally thought I could solve this by assuming the variable follows a binomial distribution, but... doesn't the probability that we pick a white or black marble change the more we play?
My second attempt was to see if I could solve this by using first step analysis, but I'm not sure if that's correct. I also tried to see if any other distribution methods worked, but none that I find take the extra added ball into consideration. Any help?
| After $n$ rounds, the box contains $m = n + 2$ marbles. Let $(B, W)$ be the state of the box containing $B$ black marbles and $W$ white marbles. So $B + W = m$. The probability of drawing a black marble is $B/m$, and the probability of drawing a white marble is $W/m$. So $(B, W)$ transitions to $(B+1, W)$ with probability $B/(B+W)$, and to $(B, W+1)$ with probability $W/(B+W)$.
Conversely, for $B, W > 1$, we can get to state $(B, W)$ from $(B-1, W)$ with probability $(B-1)/(B+W-1)$, or from $(B, W-1)$ with probability $(W-1)/(B+W-1)$.
This leads to the interesting situation that each state in a given round has the same probability!
Here's a diagram showing the states and the transition probabilities for 4 rounds of the game.
(1, 1)
1/2 1/2
(2, 1) (1, 2)
2/3 1/3 1/3 2/3
(3, 1) (2, 2) (1, 3)
3/4 1/4 2/4 2/4 1/4 3/4
(4, 1) (3, 2) (2, 3) (1, 4)
4/5 1/5 3/5 2/5 2/5 3/5 1/5 4/5
(5, 1) (4, 2) (3, 3) (2, 4) (1, 5)
So when there are $m$ marbles in the box, the probability that the box contains $x$ white marbles is simply $\frac1{m-1} = \frac1{n+1}$.
This is essentially a proof by mathematical induction. Clearly, in the initial state $(1, 1)$, when $n=0$ and $m=2$, the probability of 1 white ball is 1.
After one round, $m=3$, and the two possible states $(2, 1), (1, 2)$ have equal probability of $\frac12$.
In the next round, when $m=4$, we have three possible states, $(3, 1), (2, 2), (1, 3)$. We can only get to $(3, 1)$ from $(2, 1)$, with probability $\frac23$, but the probability of $(2, 1)$ is $\frac12$, so the total probability of getting to $(3, 1)$ from the initial $(1, 1)$ state is $\frac12×\frac23=\frac13$.
The case for $(1, 3)$ is the same by symmetry, so it has the same probability, $\frac13$.
The case for $(2, 2)$ is more interesting. We can get to that state either from $(2, 1)$ or $(1, 2)$, in both cases with probability $\frac13$, so we need to add those probabilities, which gives us $(\frac12×\frac13)+(\frac12×\frac13)=\frac13$.
Thus each of the 3 states in the $m=4$ row have the same probability, $\frac13$.
Let's assume that our hypothesis of equal probabilities is true for all rows up to some $m$. The same reasoning we used on the first few rows applies to subsequent rows, so if the hypothesis is true for row $m$ it should also be true for row $m+1$.
The probability of $(m-1, 1)$ going to $(m, 1)$ is $\frac{m-1}m$, so the total probability of $(m, 1)$ is $\frac1{m-1}×\frac{m-1}m=\frac1m$.
As mentioned earlier, a general state $(B, W)$ inside the triangle diagram (with both $B, W > 1$) with $B+W=m+1$ has two parent states, $(B-1, W)$ and $(B, W-1)$, with associated transition probabilities $(B-1)/m$ and $(W-1)/m$. We add those probabilities, and multiply by the probability of the parent row:
$$\left(\frac{B-1}m + \frac{W-1}m\right) × \frac1{m-1}$$
$$=\frac{m-1}m\frac1{m-1}= \frac1m$$
Thus each state in the $m+1$ row has probability $\frac1m$, and so by induction the hypothesis is true for all $m\ge2$.
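The induction above can also be confirmed with an exact dynamic-programming computation over the states (my own addition, not part of the original answer), using Python's `fractions` for exact arithmetic:

```python
from fractions import Fraction

def state_probs(rounds):
    # exact distribution over urn states (B, W) after the given number of rounds
    probs = {(1, 1): Fraction(1)}
    for _ in range(rounds):
        nxt = {}
        for (B, W), p in probs.items():
            m = B + W
            # draw black: (B, W) -> (B+1, W) with probability B/m
            nxt[(B + 1, W)] = nxt.get((B + 1, W), 0) + p * Fraction(B, m)
            # draw white: (B, W) -> (B, W+1) with probability W/m
            nxt[(B, W + 1)] = nxt.get((B, W + 1), 0) + p * Fraction(W, m)
        probs = nxt
    return probs

n = 6
probs = state_probs(n)
print(set(probs.values()))  # a single value: Fraction(1, n + 1)
```

Every reachable state, and hence every white-count $x \in \{1,\dots,n+1\}$, indeed carries probability exactly $\frac{1}{n+1}$.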
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4299305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is this recurrence relation well defined? I encountered the following recurrence relation in a problem from the Kangaroo Competition:
$$ a_1=1~;~ a_{n+m}=a_n+a_m+n \cdot m ~ (n+m>1) $$
The following values are directly found:
$$a_2=a_1+a_1+1 \cdot 1 = 3\\ a_3=a_1+a_2+1 \cdot 2=6\\ a_4=a_1+a_3 +1\cdot 3=a_2+a_2+2 \cdot 2=10 \\ a_5=a_1+a_4+1 \cdot 4=a_2+a_3+2 \cdot 3= 15$$
Is there an elementary proof that this is a well-defined recurrence relation, that is, the right hand side never yields two different values for $a_{n+m}$?
| Following the suggestions made in the comments: if a sequence $(a_n)_n$ satisfies the recurrence relation of the question, it must also satisfy $$ a_{n+1}=a_n+a_1+n \cdot 1.$$
This gives $a_n=\frac{n(n+1)}{2}$ (summing an arithmetic progression), and it is an easy computation to show that
$$\frac{(n+m)(n+m+1)}{2}= \frac{n(n+1)}{2}+\frac{m(m+1)}{2}+nm$$ so indeed the recurrence relation is well-defined.
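A quick brute-force confirmation of the identity (my own addition, not part of the original answer):

```python
def a(n):
    return n * (n + 1) // 2  # the closed form forced by the m = 1 case

# check a(n+m) == a(n) + a(m) + n*m on a grid of values
ok = all(a(n + m) == a(n) + a(m) + n * m
         for n in range(1, 60) for m in range(1, 60))
print(a(1), ok)  # 1 True
```

So every decomposition $n+m$ of an index yields the same value, i.e. the recurrence is well-defined.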
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4299466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $\lim_{n\to\infty}\int_0^1f(x^n)dx = f(0)$ Given a continuous function $f:[0,1] \to R$, prove that $\lim_{n\to\infty}\int_0^1f(x^n)dx = f(0)$.
Attempt:
Let $u = x^n$, so $du = nx^{n-1}dx$. Then substituting $u$ in, I got:
$\lim_{n\to\infty}\dfrac{\int_0^1f(u)du}{nx^{n-1}}$
Doesn't this limit go to $0$? I'm not sure which part I'm messing up on, any hints are appreciated.
| It's equivalent to show $\lim_{n\rightarrow\infty}\int_0^1 (f(x^n)-f(0))dx=0$.
Given any $\epsilon>0$, we have $$\left|\int_0^1 (f(x^n)-f(0))dx\right|\le\int_0^1 |f(x^n)-f(0)|dx$$ $$=\int_0^{1-\epsilon}|f(x^n)-f(0)|dx + \int_{1-\epsilon}^1 |f(x^n)-f(0)|dx$$ $$\le\int_0^{1-\epsilon}|f(x^n)-f(0)|dx + 2B\epsilon$$
where $B=\max_{x\in[0,1]}|f(x)|$ which exists due to continuity.
By $f(x)$ is continuous at $x=0$, there is a $\delta>0$, such that $|x|\le\delta\Rightarrow |f(x)-f(0)|\le \epsilon$. Thus when $n\ge \log_{1-\epsilon}\delta$, $|x^n|\le (1-\epsilon)^n\le\delta$ for $x\in [0, 1-\epsilon]$, and for sufficiently large $n$, $$\int_0^{1-\epsilon} |f(x^n)-f(0)|dx\le \int_0^1 \epsilon dx \le \epsilon$$
Finally,
$$\left|\int_0^1 (f(x^n)-f(0))dx\right|\le (1+2B)\epsilon$$
In fact, we only need $f(x)$ is bounded and right continuous at $0$ (and integrable).
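A numerical illustration of the result (my own addition, not part of the original answer): with $f = \cos$, so that $f(0)=1$, the integrals indeed approach $1$ as $n$ grows.

```python
import math

def I(f, n, steps=20000):
    # midpoint rule for the integral of f(x**n) over [0, 1]
    h = 1.0 / steps
    return sum(f(((k + 0.5) * h) ** n) for k in range(steps)) * h

f = math.cos  # continuous on [0, 1] with f(0) = 1
vals = [I(f, n) for n in (1, 5, 50, 500)]
print(vals)  # increasing toward f(0) = 1
```

Intuitively, $x^n \to 0$ on $[0, 1)$, so the integrand is close to $f(0)$ except near $x = 1$, exactly as the $\epsilon$-splitting in the proof captures.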
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4299665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Solving an absolute value inequality with fractions I'm having a hard time figuring out this inequality: $\bigg|\dfrac{x-4}{x+5}\bigg| \le 4$
I use one of the absolute value properties, which results in: $-4 \leq \dfrac{x-4}{x+5} \leq 4$.
From there, I get $x \geq -\frac{16}{5}$ and $x \geq -8$, yet when I look at Wolfram alpha, the answer is $[-\frac{16}{5}, \infty)$ and $(-\infty, -8]$, which makes sense. What am I missing?
These are my steps to get $x \ge -8$:
\begin{align*}
&\frac{x - 4}{x + 5} \le 4 \\
&x - 4 \le 4(x + 5) \\
&x - 4 \le 4x + 20 \\
&x \le 4x + 24 \\
&-3x \le 24 \\
&x \ge -8.
\end{align*}
| This is
$$y = \frac{x-4}{x+5} $$
The graph is a hyperbola, one horizontal asymptote and one vertical. I encourage you to get some graph paper and draw the same thing, maybe by plotting points when $x$ is an integer. You can improve the picture by plotting points when $y$ is an integer, using
$$ x = \frac{5y+4}{-y+1} $$
The educational bit: inequalities can be treacherous. A graph displays a good deal of firm information, while the process of plotting one and drawing in the curve by hand cements some concepts that are otherwise a bit uncertain.
Your mapping is called a Möbius transformation. There is a simple rule for finding the inverse function: it is another Möbius transformation. Let me type in the rule, with constants $a,b,c,d$:
$$ y = \frac{ax+b}{cx+d} \Longrightarrow x=\frac{dy-b}{-cy+a} $$
I figured out that Desmos will let me plot points. Here are enough points on the upper left arc of the hyperbola, then just a few on the lower right. Next I'll put more points on the lower right...
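As a complement to the graphical approach, here is a brute-force check (my own addition, not part of the original answer) that the solution set really is $(-\infty,-8] \cup [-\frac{16}{5},\infty)$, using exact rational arithmetic so the boundary points are handled without rounding trouble:

```python
from fractions import Fraction

def holds(x):
    # the original inequality |(x-4)/(x+5)| <= 4, evaluated exactly
    return abs((x - 4) / (x + 5)) <= 4

def in_answer(x):
    # membership in (-inf, -8] union [-16/5, inf)
    return x <= -8 or x >= Fraction(-16, 5)

# sample a fine grid of rationals, skipping the pole at x = -5
xs = [Fraction(k, 100) for k in range(-2000, 2001) if Fraction(k, 100) != -5]
ok = all(holds(x) == in_answer(x) for x in xs)
print(ok)  # True
```

In particular the failure of the naive manipulation shows up on $(-8, -\frac{16}{5})$, where multiplying by the negative quantity $x+5$ silently flipped the inequality.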
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4299840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Inverse image of a set under quotient map Let $G$ be an abelian group written additively and let $N\leq G$ and consider $G/N$. Let $\pi: G \to G/N$ be the quotient map written $g \mapsto \overline{g}$. Let $Z\subseteq G/N$. Then is it true that $\pi^{-1}(Z+\overline{g})=\pi^{-1}(Z)+g$?
I tried to proceed like this. Let $y \in \pi^{-1}(Z+\overline{g})$. Then $\pi(y)\in Z+\overline{g}\iff \pi(y) \in Z+\pi(g)\iff \pi(y-g)\in Z \iff y \in \pi^{-1}(Z)+g$.
Am I making any mistake?
| Close, but not quite correct. If you want to prove that $A=B$ for two given subsets of $X$, then you don't want to prove that $[\forall a\in A, (a\in A\Leftrightarrow a\in B)]$, because that just proves $A\subseteq B$. If anything, you want to prove $[\forall a\in X,(a\in A\Leftrightarrow a\in B)]$.
Other than that, the underlying idea is fine.
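The set identity can also be sanity-checked on a small concrete example (my own addition; the specific group, subgroup, and helper names here are mine): take $G=\mathbb Z_{12}$ and $N=\{0,4,8\}$, representing each coset of $G/N$ as a frozenset.

```python
G = list(range(12))
N = {0, 4, 8}

def pi(g):
    # the quotient map g -> g + N
    return frozenset((g + n) % 12 for n in N)

def preimage(Z):
    return {g for g in G if pi(g) in Z}

def shift_cosets(Z, g):
    # Z + pi(g) inside G/N, using (a + N) + (b + N) = (a + b) + N
    return {frozenset((a + g) % 12 for a in c) for c in Z}

ok = True
for g in G:
    for Z in [{pi(0)}, {pi(1), pi(2)}, {pi(0), pi(1), pi(2), pi(3)}]:
        lhs = preimage(shift_cosets(Z, g))          # pi^{-1}(Z + g-bar)
        rhs = {(y + g) % 12 for y in preimage(Z)}   # pi^{-1}(Z) + g
        ok = ok and lhs == rhs
print(ok)  # True
```

Of course this checks only one finite example; the general proof is the coset computation in the question.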
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4299987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Definition of weak homotopy equivalence Recall a weak homotopy equivalence is a continuous map $f:X_1 \rightarrow X_2$ that induces a bijection $f_*:[Y,X_1] \rightarrow [Y,X_2]$ for any CW-complex $Y$, where $[Y,X]$ denotes the homotopy class of continuous maps $Y \rightarrow X$. When $Y$ is restricted to be $\Bbb{S}^n$, this results in a series of isomorphisms between $\pi_n(X_1)$ and $\pi_n(X_2)$, and by Whitehead theorem it automatically becomes a homotopy equivalence when $X_i$'s are CW-complexes.
Why don't we define it in a similar way as homotopy equivalence, namely $(f_*)^{-1}:[Y,X_2] \rightarrow [Y,X_1]$ is induced by another continuous map $g:X_2 \rightarrow X_1$? Does the latter one necessarily implies homotopy equivalence? What will the difference of the two definitions change if we restrict $Y$ to be $\Bbb{S}^n$?
| The reason why the definition of weak homotopy equivalence is good is because of Whitehead's theorem. It is the weakest definition (meaning the easiest to check) that is equivalent to homotopy equivalence for CW complexes, so you would not want to change it.
As to your new question, the Hawaiian earring is a counter example. There is a map $X \rightarrow H$ that is a weak homotopy equivalence from a cw complex to $H$, but there can be no inverse that you ask for because the fundamental group of $H$ has nontrivial topology while the fundamental group of $X$ is discrete (as it is for any CW complex).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4300230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Second Sylow theorem's proof In the proof of the 2nd Sylow theorem, I don't understand why the fact that $|O(S)|$ divides $m$ implies that there exists a suborbit with one element. This is part of the proof
Second Sylow Theorem. Let $G$ be a finite group with order
$p^a \cdot m$ and $p$ a prime such that $m$ is not divisible by $p$.
Then any $p$-subgroup $H$ of $G$ is contained in a $p$-Sylow subgroup.
Proof. Let $P$ be the set of $p$-Sylow subgroups of $G$, and
let $H$ be a $p$-subgroup of $G$, and let $|H| = p^r$, $r \le a$. Let
$S$ a fixed $p$-Sylow subgroup and $O(S)$ its orbit under the
conjugation action. Let $H$ act on $O(S)$ by $h \cdot T = hTh^{-1} \ \forall T \in O(S)$: it is an action on $O(S)$ because the conjugate of an
element is still in $O(S)$. $O(S)$ is thus partitioned into suborbits,
each one having a cardinality which divides $p^r$. The cardinality
$O(S)$ is the sum of the cardinalities of all these suborbits.
Since the cardinality of $O(S)$ divides $m$ and is relatively
prime to $p$, there exists at least one suborbit having just one
element; that is, $\exists T \in O(S)$ such that $hTh^{-1} = T \ \forall h \in H$.
| Remember that the number of elements in each (sub)orbit must be a factor of $|H|$, i.e., a power of $p$. Can they all be of the form $p^\alpha$ with $\alpha\ge 1$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4300416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How exactly is the value of $\frac{1}{y\sqrt{y^2+\frac{l^2}{4}}}$ as $y \gg l$ calculated? I am having trouble understanding if a particular calculation is or is not a limit calculation. I suspect it is not, but a particular set of notes from an MIT OCW physics course (problem starts on page 18, the limit is on page 20) seem to imply the calculation in question is a limit. The calculation is relatively simple, but I would really like to understand exactly what is being done to reach the result.
Note that I've slightly adapted the expressions to remove the parts that are specific to the physics domain (electromagnetism).
Consider the following expression for $E_p$:
$$E_p=\frac{1}{y\sqrt{y^2+\frac{l^2}{4}}}\tag{1}$$
I'd like to know what happens to this expression when $y$ is large relative to $l$.
The notes I am following state that "in the limit where $y \gg l$, the above expression reduces to the (point-charge) limit:"
$$E_p=\frac{1}{y^2}$$
As far as I can tell, if we take the limit of $(1)$ when $y \to \infty$ we get $0$.
I believe what is happening is that a linear approximation is being used in the denominator.
Here's what I came up with:
Rewrite $E_p$
$$E_p = \frac{1}{y\sqrt{y^2+\frac{l^2}{4}}}$$
$$=\frac{1}{y\sqrt{y^2(1+\frac{l^2}{4y^2})}}$$
$$=\frac{1}{y^2\sqrt{1+\frac{l^2}{4y^2}}}$$
Consider the term in the denominator $1+\frac{l^2}{4y^2}$
$$s=\frac{l}{2y}$$
$$1+(\frac{l}{2y})^2=1+s^2 = f(s)$$
$$y \gg l \implies s \approx 0$$
I believe I can use a first order Taylor's expansion
$$\implies f(s) \approx f(0) +f'(0)s,\text{ near s=0}$$
$$\implies f(s) \approx 1$$
$$E_p \approx \frac{1}{y^2}$$
On the other hand I could have simply started by considering the limit of $E_p$ when $\frac{l}{y} \to 0$. In this case my question is: how does one calculate this limit? It's clear that the term in the square root goes to zero, but what should we say happens to the $y^2$ term?
Is this latter limit somehow connected or equivalent to the linear approximation I used, and was my calculation correct?
| (You asked how to take the limit as $l/y \rightarrow 0$.)
The "problem" with taking the limit as $l/y \rightarrow 0$ is that either $l$ is getting smaller, $y$ is getting larger, or both. Since we want to have an expression with $y$s and not $l$s, we need to arrange for all appearances of "$l$" to be in the combination "$l/y$". This is not initially the case.
Also, recall that $\sqrt{x^2} = |x|$ for any real $x$ (since the square root can only give you nonnegative values).
\begin{align*}
E_p &= \frac{1}{y\sqrt{y^2 + \frac{l^2}{4}}} \\
&= \frac{1}{y\sqrt{y^2 \left( 1 + \frac{l^2}{4y^2} \right)}} \\
&= \frac{1}{y|y|\sqrt{1 + \frac{l^2}{4y^2}}}
\end{align*}
Now, we have all instances of $l$ in $l/y$ combinations, so we can take the limit, leaving an expression in $y$s only.
$$ \lim_{l/y \rightarrow 0} \frac{1}{y|y|\sqrt{1 + \frac{l^2}{4y^2}}} = \frac{1}{y|y|\sqrt{1 + \frac{0^2}{4}}} = \frac{1}{y|y|} \text{.} $$
And, actually, we want $y|y|$ here -- the $E$-field should be pointing away from the origin above and below the $x$-axis, not always pointing upwards. I don't see that we are working only on $y \geq 0$ and getting the lower half by reflection, so the usual explanation for why we get to confine our attention to $y \geq 0$ is missing.
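A quick numerical check of the approximation (my own addition, not part of the original answer): for $y \gg l$ the exact expression and $1/y^2$ agree to relative error of roughly $l^2/(8y^2)$, the next term in the Taylor expansion.

```python
import math

def E_exact(y, l):
    # the full expression 1 / (y * sqrt(y^2 + l^2/4)), for y > 0
    return 1.0 / (y * math.sqrt(y * y + l * l / 4.0))

def E_point(y):
    # the point-charge limit 1 / y^2
    return 1.0 / (y * y)

l = 1.0
for y in (10.0, 100.0, 1000.0):
    rel = abs(E_exact(y, l) - E_point(y)) / E_point(y)
    print(y, rel)  # shrinks roughly like l**2 / (8 * y**2)
```

So the limit statement is really an asymptotic approximation: the error vanishes as $l/y \to 0$, which is what "in the limit $y \gg l$" is shorthand for.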
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4300611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Strategies to solve a limit $\lim_{x\to \pi/4} \frac{\cos(2x)}{2\cos(x)-\sqrt 2}$ This morning a colleague asked me how I would solve quickly this limit,
$$\lim_{x\to \frac{\pi}{4}} \frac{\cos(2x)}{2\cos(x)-\sqrt 2}$$
Probably because I am a member of this community, I suggested doing:
$$\frac{\cos(2x)}{2\cos(x)-\sqrt 2}=\frac{\cos(2x)}{(2\cos(x)-\sqrt 2)}\frac{(2\cos(x)+\sqrt 2)}{(2\cos(x)+\sqrt 2)}$$
Being $\cos(2x)=2\cos^2 x-1$,
$$\frac{\cos(2x)(2\cos(x)+\sqrt 2)}{2(2\cos^2 x-1)}=\frac{2\cos(x)+\sqrt 2}{2}$$
and we can find the limit for $x\to \frac{\pi}4$.
But now let me look at the denominator $2\cos(x)-\sqrt 2=2\left(\cos x-\cos \frac{\pi}{4}\right)$. Using the prosthaphaeresis formulas I will have:
$$2\left(\cos x-\cos \frac{\pi}{4}\right)=-2\left(\sin\frac{(x+\pi/4)}{2}\cdot\sin\frac{(x-\pi/4)}{2}\right)$$
If $x\to \frac{\pi}4$ then $$-2\left(\sin\frac{(x+\pi/4)}{2}\cdot\sin\frac{(x-\pi/4)}{2}\right)\to 0$$
Hence, is the strategy with the prosthaphaeresis formulas not good, or is there a way to use the notable limits?
| Consider rewriting the numerator as $\cos(2x) - \cos(\frac{\pi}2).$ Then we can use the same manipulation to get
$$\cos(2x) - \cos(\frac{\pi}2) = -2\sin\left(\frac{2x + \frac{\pi}{2}}{2}\right)\sin\left(\frac{2x - \frac{\pi}{2}}{2}\right) = -2\sin\left(x + \frac{\pi}{4}\right)\sin\left(x - \frac{\pi}{4}\right)$$
Substituting this in gives us
$$\lim_{x \to \frac{\pi}4} \frac{-2\sin\left(x + \frac{\pi}{4}\right)\sin\left(x - \frac{\pi}{4}\right)}{2\left(-2\sin\left(\frac{x + \frac{\pi}{4}}2\right)\sin\left(\frac{x - \frac{\pi}{4}}2\right)\right)} = \frac12\lim_{x \to \frac{\pi}4}\frac{\sin\left(x + \frac{\pi}{4}\right)}{\sin\left(\frac{x + \frac{\pi}{4}}2\right)} \cdot \lim_{x \to \frac{\pi}4}\frac{\sin\left(x - \frac{\pi}{4}\right)}{\sin\left(\frac{x - \frac{\pi}{4}}2\right)}$$
supposing for the moment that those two limits exist. (also note the additional factor of $2$ in the denominator which seems to have been dropped in your manipulation)
For the first limit we can simply evaluate at $x = \frac{\pi}4$ to get $\frac{1}{\frac{1}{\sqrt{2}}} = \sqrt{2}.$
For the second limit, consider rewriting $\sin(x - \frac{\pi}4)$ in the numerator as $\sin\left(2\frac{x - \frac{\pi}{4}}{2}\right) = 2\sin\left(\frac{x - \frac{\pi}{4}}{2}\right)\cos\left(\frac{x - \frac{\pi}{4}}{2}\right).$ This gives us that
$$\lim_{x \to \frac{\pi}4}\frac{\sin\left(x - \frac{\pi}{4}\right)}{\sin\left(\frac{x - \frac{\pi}{4}}2\right)} = \lim_{x \to \frac{\pi}4} 2\cos\left(\frac{x - \frac{\pi}{4}}{2}\right) = 2$$
So our total limit is $\frac12 \cdot \sqrt{2} \cdot 2 = \sqrt{2},$ which agrees with your initial solution.
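A numerical sanity check (my own addition, not part of the original answer) that the limit is indeed $\sqrt 2$:

```python
import math

def g(x):
    # the original expression cos(2x) / (2 cos(x) - sqrt(2))
    return math.cos(2 * x) / (2 * math.cos(x) - math.sqrt(2))

for h in (1e-2, 1e-4, 1e-6):
    # approach pi/4 from both sides
    print(g(math.pi / 4 + h), g(math.pi / 4 - h))  # both tend to sqrt(2)
```

Both one-sided values converge to $\sqrt 2 \approx 1.41421$, matching the algebraic computation.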
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4300756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the pdf for this order statistic? Consider the independent random variables $X_1, X_2, X_3, X_4$, each $X_i \sim U([0,1])$ (i.e. the uniform distribution on $[0,1]$).
$V = \min$ $\{ X_1, X_2, X_3, X_4 \}$ and $W = \max\{ X_1, X_2, X_3, X_4 \}$.
Find the joint probability density function $f(v,w)$ for two variables $V$ and $W$
$(sol)$ I'm focusing the case $x_1 \leq x_2 \leq x_3 \leq x_4 $
$$F(v,w) =P( W\leq w ,V \leq v )= 24 \int_0^{w} \int_0^{v} \int_0^{x_4} \int_{x_1}^{x_3} d{x_2}\,d{x_3}\,d{x_1}\,d{x_4} = 4vw^3 - 6v^2w^2$$
The reason for multiplying by 24 is that there are 24 arrangements by order of $x_1$ to $x_4$, including my case $x_1 \leq x_2 \leq x_3 \leq x_4 $.
Hence $f(v,w)=\frac{\partial ^2 F}{\partial v\partial w }=12w^2-24vw$ for $0<v<w<1$.
But the answer was $12(w-v)^2$
What are the mistakes in my solution? My guess is that the integration domain is wrong, but I can't find where I went wrong.
re-editing) From the integration domain, $x_1 \leq x_2 \leq x_3 \leq x_4 $
I'm focusing on the $x_1 \leq x_2 \leq x_3$ $\Rightarrow$ $\color{red}\int_\color{red}{x_1}^\color{red}{x_3} \color{red}d\color{red}{x_2}$
Next since the order of the $x_1, x_2$ and $x_3$ are determined, Just consider the $x_3 \leq x_4$ $\Rightarrow$ $\color{blue}\int_\color{blue}0^\color{blue}{x_4} \color{red}\int_\color{red}{x_1}^\color{red}{x_3} \color{red}d\color{red}{x_2}\color{blue}d\color{blue}{x_3}$
Next consider the domain of the $x_1$ and $x_4$
Hence, $\int_0^{w} \int_0^{v} \int_0^{x_4} \int_{x_1}^{x_3} d{x_2}d{x_3}d{x_1}d{x_4}$
| I don't understand your solution. Let me propose the following one:
If $v\geq w$, then $\mathbb P\{W\leq w, V>v\}=0$. Suppose $0<v<w\leq 1$. Then, $$\mathbb P\{V>v, W\leq w\}=\mathbb P\{v\leq X_1\leq w\}^4=\left(\int_v^w\,\mathrm d x\right)^4=\left(w-v\right)^4.$$
Therefore,
$$\mathbb P\{V\leq v,W\leq w\}=\mathbb P\{W\leq w\}-\mathbb P\{ V>v,W\leq w\},$$ and thus,
$$f(v,w)=-\partial _{vw}\mathbb P\{ V>v,W\leq w\}=12(w-v)^2\boldsymbol 1_{\{0<v<w< 1\}}.$$
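The closed form $\mathbb P\{V>v, W\le w\}=(w-v)^4$ that the answer relies on is easy to confirm by simulation (my own addition, not part of the original answer):

```python
import random

random.seed(0)  # fixed seed for reproducibility
v, w = 0.25, 0.75
trials = 200_000
hits = 0
for _ in range(trials):
    xs = [random.random() for _ in range(4)]
    # event {V > v, W <= w}: all four draws land in (v, w]
    if min(xs) > v and max(xs) <= w:
        hits += 1
freq = hits / trials
print(freq, (w - v) ** 4)  # empirical frequency vs the closed form 0.0625
```

The empirical frequency matches $(w-v)^4 = 0.5^4 = 0.0625$ to within Monte Carlo noise.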
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4300992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Coefficients of power series with binomial theorem I am trying to get the coefficients of the power series of $\sqrt{\frac{1+z}{1-z}}$.
Rewriting gets us to $(1 + \frac{2z}{1-z})^{0.5}$. Now using the binomial theorem gets us $\sum_{k=0}^\infty {\frac{1}{2}\choose k} \left(\frac{2z}{1-z}\right)^k$, but I didn't get far with that.
| You could write $$\sqrt{\frac{1+x}{1-x}}=\frac{1+x}{\sqrt{1-x^2}}$$ and then expand $$(1+x)(1-x^2)^{-\frac12}$$ and get $$1+x+\frac12x^2+\frac12x^3+\frac38x^4+\frac38x^5+\frac{5}{16}x^6+\frac{5}{16}x^7+...$$
This is convenient since all the coefficients occur in identical pairs...
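The coefficient pattern can be generated exactly (my own addition, not part of the original answer): the coefficient of $x^{2k}$ in $(1-x^2)^{-1/2}$ is $\binom{2k}{k}/4^k$, and multiplying by $(1+x)$ copies each coefficient onto the next power.

```python
from fractions import Fraction
from math import comb

def coeffs(n_terms):
    # coefficients of (1 + x) * (1 - x^2)^(-1/2) up to x^(n_terms - 1)
    c = [Fraction(0)] * n_terms
    for k in range(0, n_terms, 2):
        # coefficient of x^k in (1 - x^2)^(-1/2) is C(k, k/2) / 4^(k/2)
        c[k] = Fraction(comb(k, k // 2), 4 ** (k // 2))
    # multiply by (1 + x): shift-and-add, descending so old values survive
    for i in range(n_terms - 1, 0, -1):
        c[i] += c[i - 1]
    return c

print(coeffs(8))
# [1, 1, 1/2, 1/2, 3/8, 3/8, 5/16, 5/16]
```

This reproduces the identical pairs noted in the answer.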
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4301148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Combinatorics question with inequality and different subscript
a-) $x_1+x_2+...+x_7 \leq 30$ where $x_i's$ are even non-negative integers.
b-) $x_1+x_2+...+x_7 \leq 30$ where $x_i's$ are odd non-negative integers.
c-) $x_1+x_2+...+x_6 \leq 30$ where $x_i's$ are odd non-negative integers.
These questions are from my textbook. I know how to solve similar questions of the form $x_1+x_2+...+x_k \leq n$ where the $x_i$'s are non-negative integers: we add an extra term on the left-hand side, and the rest is found by combination with repetition, giving $ \binom{n+(k+1)-1}{n}$. However, what happens when the $x_i$'s are even or odd; is there any special technique? Moreover, as you see, parts b and c differ only in the number of variables, so I think there must be a reason behind the different subscript in the same question. Is there any reason? Thanks in advance.
| For $(a)$, you should solve for $y_1+y_2+...+y_7 \leq 15$, where $x_i = 2y_i, ~ y_i \geq 0$
For $(b)$, you should solve for $y_1+y_2+...+y_7 \leq 11$ where $x_i = 2y_i + 1, y_i \geq 0$
For $(c)$, you should solve for $y_1+y_2+...+y_6 \leq 12$ where $x_i = 2y_i + 1, y_i \geq 0$
The difference between $(b)$ and $(c)$ is that in $(b)$, there are $7$ odd numbers and $7$ odd numbers cannot sum to an even number so equality $( = 30)$ is not possible and you would have sum $\leq 29$. But in $(c)$, you have $6$ odd numbers so equality can be reached.
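The substituted problems can be counted by a short dynamic program and checked against the stars-and-bars formula $\binom{n+k}{k}$ for $y_1+\dots+y_k \le n$ (my own addition, not part of the original answer):

```python
from math import comb

def count_le(k, bound):
    # number of k-tuples of non-negative integers with sum <= bound
    ways = [1] + [0] * bound  # ways[s]: tuples built so far with sum exactly s
    for _ in range(k):
        # adding one unbounded non-negative variable = prefix sums
        run = 0
        new = []
        for s in range(bound + 1):
            run += ways[s]
            new.append(run)
        ways = new
    return sum(ways)

a = count_le(7, 15)  # part (a), after x_i = 2 y_i
b = count_le(7, 11)  # part (b), after x_i = 2 y_i + 1
c = count_le(6, 12)  # part (c), after x_i = 2 y_i + 1
print(a == comb(22, 7), b == comb(18, 7), c == comb(18, 6))  # all True
```

The brute-force counts match $\binom{15+7}{7}$, $\binom{11+7}{7}$, and $\binom{12+6}{6}$ respectively.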
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4301329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How do you read the summa symbol with no superscript and the Real numbers subscript? How do you read $\int_\mathbb R f(x) dx$? I'm doing continuous random variables in probability, for context, so the whole thing is:
$\int_\mathbb R f(x) dx = \int^{\infty}_{-\infty} f(x) dx = 1 $
And I get the idea that it's just saying that the total area under the curve is 1, but how do I say the first part in English? "The integral over all the real values in the domain of f?"
| I may read $\int_\mathbb R f(x)\;dx$ as
"The integral over R, f of x d x"
I may say "reals" instead of R. I may omit "of x d x".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4301558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Example of $\omega$-limit set with two singular points (at least). According to the Poincaré-Bendixson theorem, if the $\omega$-limit set of a bounded orbit in the plane contains only finitely many fixed points, then it is either
*
*a fixed point,
*a periodic orbit, or
*a connected set composed of a finite number of fixed points together with orbits connecting them.
We can see examples of the first and the second cases with the vector field $F(x,y)=(-\sin(x),y)$.
What are some examples of the third case?
| The following example is given on p. 224 in Gerald Teschl's book Ordinary Differential Equations and Dynamical Systems:
$$
\dot x = y+x^2 - \tfrac14 x (y-1+2x^2)
,\qquad
\dot y = -2(1+y)x
.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4301944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is it possible to evaluate $232^2-62^2\times14$ by factoring or using identities or...? The expression $232^2-62^2\times14$ can be calculated directly ($53824-3844\times14=8$). But is it possible to evaluate it, for example, by factoring or using identities?
Here is what I have tried,
$$(58\times4)^2-62^2\times14=58^2\times16-62^2\times14=(60-2)^2(15+1)-(60+2)^2(15-1)$$
Or
$$58^2\times16-62^2\times14=29^2\times64-31^2\times56=(30-1)^2\times8^2-(30+1)^2(8\times7)$$
But I can't see an elegant way to get $8$ from either of the calculations.
| Using a prime decomposition
$$
\begin{align}
232^2-62^2\times14 &= (2^3 \times 29)^2-(2\times31)^2\times2\times7 \\
&= 2^3\left(8\times29^2-(29+2)^2\times7\right) \\
&= 2^3(29^2-28\times29-28) \\
&= 2^3\left(29(29-28)-28\right) \\
&= 2^3(29-28) = 8
\end{align}
$$
Line 3 above used $31=29+2$ to facilitate further simplification.
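As a quick sanity check, a throwaway Python snippet (my own addition, not part of the derivation) confirms that the direct evaluation and the fully factored form agree:

```python
# Direct evaluation of the original expression.
direct = 232**2 - 62**2 * 14

# The factored form reached in the derivation: 2^3 * (29^2 - 28*29 - 28).
factored = 2**3 * (29**2 - 28*29 - 28)

print(direct, factored)  # 8 8
```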
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4302120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
} |
Can two circles intersect each other at right angles, such that one circle passes through the center of the other circle? More precisely, do there exist two intersecting circles such that the tangent lines at the intersection points make an angle of 90 degrees, and one of the circles passes through the center of the other? I think the answer is no, and that this can only be the case if the circle passing through the center of the other circle is actually a line. I'm not sure how to show this though. Can someone give a proof or a counterexample?
| No, this is not possible.
Here's a hint. Consider the following points:
*
*$A$: center of first circle (say radius $r_1$),
*$B$: center of second circle (say radius $r_2$),
*$C$: one point of intersection of the two circles.
Then, by the right-angle condition, you see that $ABC$ is a right-angled triangle. However, note that the sides are $r_1$, $r_2$, and $r_2$ with the right-angle being in between the sides $r_1$ and $r_2$. Check that this is not possible.
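One way to complete the final check, using the same labels $A$, $B$, $C$, $r_1$, $r_2$ as above (my own spelling-out of the hint): orthogonal intersection means the tangent lines at $C$ are perpendicular, and since each tangent is perpendicular to the corresponding radius at $C$, the radii $AC$ and $BC$ are themselves perpendicular. If the second circle passes through $A$, then $AB=r_2$, so Pythagoras at $C$ gives
$$AB^2 = AC^2 + BC^2 \implies r_2^2 = r_1^2 + r_2^2 \implies r_1 = 0,$$
which is impossible for a genuine circle.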
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4302235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
} |
Why isn't $\sqrt[12]{\left(-1\right)^6}$ equal to $\sqrt{-1}$? Why isn't $\sqrt[12]{\left(-1\right)^6}$ equal to $\sqrt{-1}$? Clearly, the square root isn't defined. We have divided the index and the exponent by $6$. The theorem says that $$\sqrt[nk]{a^{mk}}=\sqrt[n]{a^m}$$ where $a\ge0$. How does it work when $a<0$, or it doesn't?
PP. I see that we can write $\sqrt[12]{(-1)^6}=\sqrt[12]{1^6}=1,$ but my question still holds.
| If $\ x\in\mathbb{R}\ $ and $\ x>0,\ $ then $\ x^y\ $ is well-defined and unique for any $\ y\in\mathbb{R}.\ $ This is, for example, an exercise left to the reader in Rudin's PMA chapter $1.$
If $\ x\in\mathbb{R}\ $ and $\ x<0,\ $ then $\ x^y\ $ is well-defined if and only if $\ y\ $ is an integer. Basically, attempts to define $\ x^y\ $ for $\ x<0\ $ always run into problems, one of which you encountered in your question.
However, in $\ \mathbb{C}\ $ we do not get analogous issues: if $\ x,y\in\mathbb{C}\ $ then $\ x^y\in\mathbb{C}.$
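As a small illustration (a Python sketch of my own, not part of the answer): evaluating the expression in the written order gives $1$, while "cancelling" the exponents first would require $(-1)^{1/2}$, which Python only makes sense of as a complex number:

```python
# Evaluating ((-1)^6)^(1/12) in the written order: inner power first.
as_written = ((-1)**6) ** (1/12)
print(as_written)  # 1.0

# "Cancelling" 6/12 first asks for (-1)^(1/2); for a negative base and a
# fractional exponent, Python 3 returns a complex value.
cancelled = (-1) ** (6/12)
print(cancelled)  # approximately 1j, up to floating-point error
```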
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4302384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
$\lim_{x \to a} x^2 = a^2$. As per the definition of limits, if $\lim_{x \to a} f(x)= L$, then $$\forall \varepsilon \gt 0 \ \exists \delta \gt 0 \ \text{ s.t. } 0\lt\lvert x-a \rvert \lt \delta \ \implies \ \lvert f(x)- L\rvert \lt \varepsilon $$
I want to prove that $\lim_{x \to a} x^2 = a^2$.
As per the definition $$\lvert f(x)- L\rvert = \lvert x^2- a^2\rvert = \lvert (x-a)(x+a)\rvert =\lvert x-a\rvert \lvert x+a \rvert$$
As per definition $$\lvert x-a \rvert \lt \delta \implies -\delta \lt x-a \lt \delta \implies a-\delta \lt x <a+\delta \implies 2a-\delta \lt x+a <2a+\delta $$
I'm stuck beyond this. I cannot find a suitable $\delta$ to satisfy my condition here.
| A different approach for the sake of curiosity.
Let $0 < |x - a| < \delta_{\varepsilon}$. Then we have that:
\begin{align*}
|f(x) - L| & = |x^{2} - a^{2}|\\\\
& = |(x-a)(x+a)|\\\\
& = |x - a||(x - a) + 2a|\\\\
& \leq |x - a|(|x - a| + 2|a|)\\\\
& < \delta_{\varepsilon}(\delta_{\varepsilon} + 2|a|) := \varepsilon
\end{align*}
where you can choose the positive root of the corresponding equation on $\delta_{\varepsilon}$.
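Concretely, the positive root mentioned above is $\delta_\varepsilon = \sqrt{a^2+\varepsilon}-|a|$. A quick numeric spot-check in Python (my own sketch; the choices of $a$ and $\varepsilon$ are arbitrary) that this choice keeps $|x^2-a^2|$ below $\varepsilon$:

```python
import math

def delta(a, eps):
    # Positive root of d**2 + 2*abs(a)*d - eps = 0.
    return math.sqrt(a * a + eps) - abs(a)

a, eps = 3.0, 0.01   # arbitrary test values
d = delta(a, eps)

# By construction d*(d + 2|a|) == eps, so any x strictly inside
# (a - d, a + d) satisfies |x**2 - a**2| < eps.
worst = max(abs((a + s * 0.999 * d)**2 - a * a) for s in (-1.0, 1.0))
print(worst < eps)  # True
```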
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4302557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Big O notation property Is the following correct, and why?
$O(1)+O(\sqrt{n})=O(\sqrt{n})$,
where $O(.)$ means big O notation.
My thinking is that it is correct because
$\lim_{n \rightarrow\infty} \frac{1}{\sqrt{n}}=0 < \infty$. Thus $1\leq \sqrt{n}$.
| One should follow the formal definition of the big O notation.
Let $f,g:\mathbb{N}\to\mathbb{R}$ be two sequences such that for some constants $C,M>0$ and for every $n>M$,
$$
|f(n)|\le C,\quad |g(n)|\le C\sqrt{n}
$$
By the triangle inequality,
$$
|f(n)+g(n)|\leq C(1+\sqrt{n})\leq 2C\sqrt{n}
$$
So $f+g\in O(\sqrt{n})$.
On the other hand, if $f\in O(\sqrt{n})$, it is trivial to see that $f\in O(1)+O(\sqrt{n})$.
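The only inequality used above beyond the definitions is $1+\sqrt{n}\le 2\sqrt{n}$ for $n\ge 1$; a trivial Python check (my own, purely illustrative):

```python
import math

# 1 + sqrt(n) <= 2*sqrt(n) is equivalent to 1 <= sqrt(n), true for n >= 1.
ok = all(1 + math.sqrt(n) <= 2 * math.sqrt(n) for n in range(1, 10_000))
print(ok)  # True
```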
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4302710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove that $A$ is connected Q. Prove that: If we remove 12 points from the Euclidean plane $\mathbb{R}^2$ to get set $A$, then $A$ is connected.
I don't really understand how to proceed with the question. I thought of going via contradiction but I am stuck. Should I approach it via path connectedness?
| We know that if $X$ is path-connected then $X$ is connected. One could finish at once with a polygonal path that dodges the $k$ points, but I think this hides what is happening behind the scenes, so I will give a fairly visual proof. Suppose that $A=\mathbb R^2\setminus\{x_1,x_2,\dots,x_k\}$ where $x_i\neq x_j$ if $i\neq j$. Let $y,z\in A$. If the segment joining $y$ and $z$ contains none of the points $\{x_1,x_2,\dots,x_k\}$, the proof is finished. Otherwise, let $p$ be the midpoint of $y$ and $z$, let $v$ be a unit vector perpendicular to $z-y$, and let $\mathscr L=\{p+tv:t\in\mathbb R\}$ be the perpendicular bisector of the segment from $y$ to $z$ (the line of points equidistant from $y$ and $z$).
For each $n\in\mathbb N$ there are two segments $\alpha_n$ and $\beta_n$ joining $y$ with $p+nv$ and $p+nv$ with $z$, respectively. Let $\gamma_n=\alpha_n*\beta_n$ be the concatenated path from $y$ to $z$, and notice that $\gamma_n\cap\gamma_m=\{y,z\}$ for $n\neq m$. Suppose, towards a contradiction, that for every $n\in\mathbb N$ there exists $r_n\in\{1,2,\dots,k\}$ such that $x_{r_n}\in\gamma_n$. Listing the first $k+1$ paths, we have:
$$
\color{red}{k+1\text{ elements}}\left\{
\begin{array}{l}
\color{red}{x_{r_1}}\in\gamma_1\\
\color{red}{x_{r_2}}\in\gamma_2\\
\vdots\\
\color{red}{x_{r_k}}\in\gamma_k\\
\color{red}{x_{r_{k+1}}}\in\gamma_{k+1}\\
\end{array}
\right.
$$
Since only $k$ distinct points are available, by the pigeonhole principle some point $x_r$ belongs to two of the paths $\gamma_n$; as $x_r\neq y,z$ (because $y,z\in A$), this contradicts $\gamma_n\cap\gamma_m=\{y,z\}$. Therefore there exists $n_0\in\mathbb N$ such that $\gamma_{n_0}\cap\{x_1,x_2,\dots,x_k\}=\emptyset$. Thus $\gamma_{n_0}$ is a path in $\mathbb R^2\setminus\{x_1,x_2,\dots,x_k\}$ joining $y$ with $z$, so $\mathbb R^2\setminus\{x_1,x_2,\dots,x_k\}$ is path-connected, and hence connected.
On the other hand, if you prefer, you can work with arcs of circles instead of line segments; the same counting argument applies.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4302902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Convergence in Distribution for uniformly distributed RVs So I was wondering if someone could help me understand the following exercise a little bit more, because I'm currently very confused.
Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of random variables on the $\mathbb{P}$-Space $([-1,1], \mathcal{B}[-1,1], \mathcal{U}[-1,1])$, where $\mathcal{U}[-1,1]$ is the uniform distribution. Calculate the PDF and CDF of $X_n$ when:
$X_n(t) = \begin{cases} t^{\frac{1}{n}}, & t>0 \\
-\vert t \vert^{\frac{1}{n}}, & t \leq 0, \end{cases}$
and show that $X_n \xrightarrow{\mathcal{D}}X$ with $X \sim \frac{1}{2} \delta_{-1} + \frac{1}{2} \delta_{1}$.
So my first question is whether I'm understanding the first part correctly about calculating the PDF, because I thought that for all $n$ the PDF is given by $\frac{1}{2}$ for $-1 \leq t \leq 1$ and $0$ otherwise, but with this approach there is no $n$ dependency anywhere. Thank you for your help!
| I will write $\mathbf{P} = \mathcal{U}[-1,1]$ for the probability measure on $[-1, 1]$ given by the uniform distribution. Then the CDF of $X_n$ is the function $F_{X_n}$ defined by
$$ F_{X_n}(x) = \mathbf{P}(X_n \leq x) = \mathbf{P}(\{t \in [-1, 1] : X_n(t) \leq x \}) $$
Now noting that $t \mapsto X_n(t) = \operatorname{sgn}(t) |t|^{1/n}$ is strictly increasing, it follows that
$$ X_n(t) \leq x
\quad \Leftrightarrow \quad
t \leq X_n^{-1}(x) = \operatorname{sgn}(x)|x|^n. $$
So
$$ F_{X_n}(x)
= \mathbf{P}(t \leq \operatorname{sgn}(x)|x|^n)
= \begin{cases}
1, & \text{if $x \geq 1$} \\
\frac{1+\operatorname{sgn}(x)|x|^n}{2}, & \text{if $-1 \leq x < 1$} \\
0, & \text{if $x < -1$}
\end{cases} $$
Differentiating the CDF $F_{X_n}$ then gives the PDF
$$ f_{X_n}(x) = \frac{\mathrm{d}}{\mathrm{d}x} F_{X_n}(x) = \begin{cases}
\frac{n}{2}|x|^{n-1}, & \text{if $|x| < 1$} \\
0, & \text{if $|x| \geq 1$}
\end{cases} $$
Finally, letting $n \to \infty$,
$$ \lim_{n\to\infty} F_{X_n}(x) = \begin{cases}
1, & \text{if $x \geq 1$} \\
\frac{1}{2}, & \text{if $-1 < x < 1$} \\
0, & \text{if $x \leq -1$}
\end{cases} $$
This coincides with the CDF of $\frac{1}{2}(\delta_{-1} + \delta_{1})$ on a dense subset of $\mathbb{R}$. (In fact, they coincide on all of $\mathbb{R}\setminus\{-1\}$.) Therefore the desired claim follows.
Alternatively, note that
$$ \lim_{n\to\infty} X_n(t) = \operatorname{sgn}(t) =: X(t) $$
for any $t \in [-1, 1]$. Since $X_n$ converges to $X$ everywhere, $X_n$ converges in distribution to $X$. Now it is not hard to check that $X \sim \frac{1}{2}(\delta_{-1} + \delta_{1})$.
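A Monte Carlo sanity check of the derived CDF (a sketch of my own; the sample size and the choices $n=3$, $x=0.5$ are arbitrary):

```python
import random

random.seed(0)

def X(t, n):
    # X_n(t) = sgn(t) * |t|**(1/n), with X_n(0) = 0.
    s = 1 if t > 0 else -1 if t < 0 else 0
    return s * abs(t) ** (1 / n)

def F(x, n):
    # Analytic CDF derived above, valid for -1 <= x < 1.
    s = 1 if x > 0 else -1 if x < 0 else 0
    return (1 + s * abs(x) ** n) / 2

n, x, N = 3, 0.5, 200_000
samples = [random.uniform(-1, 1) for _ in range(N)]
empirical = sum(X(t, n) <= x for t in samples) / N
print(empirical, F(x, n))  # both close to 0.5625
```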
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4303081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Limit (trig) without L'Hospital's Hello, while I was studying I ran into some difficulties solving this limit, specifically how to get rid of the $\sin$:
$$\lim_{x\to3}\biggl(\biggl(\frac{1}{\sqrt{x+3}}-\frac{1}{\sqrt{6}}\biggr)\biggl(\frac{x+3}{\sin({2x-6})}\biggr)\biggr)$$
I am supposed to solve this without the use of L'Hospital's rule. So far I tried to bring the $\sin$ into the form $\frac{\sin(u)}{u}$ with $u=2x-6$, but I still get the indeterminate form $0\times\infty$.
I also tried turning it into $1-\cos^2(2x-6)$, yet I realized it didn't change anything.
And lastly I wanted to make the silly attempt to set $\sin(x)=x$, but I am almost certain I am not allowed to do that.
Any help or advice on how to deal with this limit, or any similar one, is much appreciated; I'm not interested in the result itself but rather in how to proceed should I encounter similar ones.
I hope I made myself clear, and thanks in advance!
| It might be useful to set $t=x-3$ and consider $t\to 0$.
\begin{eqnarray*} \left(\frac{1}{\sqrt{x+3}}-\frac{1}{\sqrt{6}}\right)\frac{x+3}{\sin({2x-6})}
& \stackrel{t=x-3}{=} & \left(\frac{1}{\sqrt{t+6}}- \frac 1{\sqrt 6}\right)\frac{t+6}{\sin(2t)} \\
& = & \left(\frac{\sqrt 6 - \sqrt{t+6}}{\sqrt 6}\right)\frac{\sqrt{t+6}}{\sin(2t)} \\
& = & -\frac{1}{2\sqrt 6}\frac{\sqrt{t+6}}{\sqrt 6 + \sqrt{t+6}}\frac{2t}{\sin (2t)} \\
&\stackrel{t\to 0}{\longrightarrow} & -\frac 1{4\sqrt 6}
\end{eqnarray*}
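A quick numerical check of the value $-\frac{1}{4\sqrt 6}\approx -0.10206$ (my own throwaway snippet, not part of the argument):

```python
import math

def f(x):
    # The original expression, before the substitution t = x - 3.
    return (1 / math.sqrt(x + 3) - 1 / math.sqrt(6)) * (x + 3) / math.sin(2 * x - 6)

target = -1 / (4 * math.sqrt(6))
for h in (1e-2, 1e-4, 1e-6):
    print(f(3 + h), f(3 - h))  # both tend to target ~ -0.10206
```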
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4303297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Every infinite-dimensional Banach space not containing $l^1$ has a normalized, weakly null sequence. I'm trying to deduce the following:
Theorem: Let $X$ be an infinite dimensional Banach space with no subspace $Y\le X$ with $Y\cong l^1$. There exists $(x_n)\in X$ such that $\Vert x_n\Vert =1$ and $x_n\rightharpoonup 0$.
from Rosenthal's $l^1$ theorem:
Theorem: Let $X$ be a Banach space with no subspace $Y\le X$ with $Y\cong l^1$. For all $(x_n)\in X$ bounded, there exists a subsequence $(x_{n_k})$ weakly Cauchy i.e. for all $\xi\in X^*$, $(\xi(x_{n_k}))$ is convergent in $\mathbb C$.
My attempt: First, choose $(x_n)$ linearly independent, normalized with dual elements $(\xi_n)$ i.e. $\xi_i(x_j)=\delta_{i,j}$. We can do this inductively: given such $x_1,\cdots,x_n$ and $\xi_1,\cdots, \xi_n$, we can choose $x_{n+1}\in\cap_{i=1}^n\text{ker}(\xi_i)$ normalized since $\text{dim}(X)=\infty$. Let $\xi_{n+1}\in X^*$ be normalized, extending the map given by $x_i\mapsto \delta_{i,n+1}$ on $\text{span}(x_1,\cdots,x_{n+1})$ using Hahn-Banach. Extract a weakly Cauchy subsequence so without loss of generality, assume $(x_n)$ is weakly Cauchy. Let $Y:=\overline{\text{span}(x_n|n\in\mathbb N)}$ and $Y^*=\overline{\text{span}(\xi_n|_Y|n\in\mathbb N)}$.
Suppose for a contradiction that there exists $\xi\in X^*$ such that $\xi(x_n)\rightarrow\alpha\neq 0$. I claim that $\Phi:l^1\rightarrow Y$ defined by extending $e_n\mapsto x_n$ (which is clearly bounded and injective) is an into isomorphism, contradicting hypothesis that $X$ does not contain $l^1$. I've thought of three ways to prove this:
*
*Show that $\Phi$ is surjective and use open mapping theorem.
*Show that $\Phi$ is bounded below.
*Show that $\Phi^*$ is surjective.
and I wasn't able to prove any of these. Does anyone have an idea how to prove this?
Thanks in advance!
| Since $X$ is infinite dimensional, by the Riesz lemma there is a sequence $(x_n)\subset X$ with $\Vert x_n\Vert \le 1$ and $\Vert x_n-x_m\Vert >\frac{1}{2}$ for $n\neq m$. By Rosenthal's theorem, it has a weakly Cauchy subsequence $(x_{n_k})$. Set $y_k=\frac{x_{n_{k+1}}-x_{n_k}}{\Vert x_{n_{k+1}}-x_{n_k}\Vert }$, which is well defined since $\Vert x_{n_{k+1}}-x_{n_k}\Vert>\frac{1}{2}$. Then $\Vert y_k\Vert =1$ and for all $f\in X^*$, $|f(y_k)|\le 2|f(x_{n_{k+1}}-x_{n_k})|\rightarrow 0$ since $(f(x_{n_k}))$ is Cauchy; hence $y_k\rightharpoonup 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4303496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |