absolute value random walk also a Markov chain Consider the random walk $S_{n}$ for $n \geq 1$. Specifically, let $X_{1},X_{2},\ldots$ be independent with
$$
\mathbb{P}(X_{n}=1) = p,~~\mathbb{P}(X_{n}=-1) = 1- p =:q
$$
and $S_{n} = \sum_{k=1}^{n}X_{k}$. I have read in Ross's book Stochastic Processes, on page 166, that $|S_{n}|$ is also a Markov chain. He uses Proposition 4.1.1 to prove this. Here is the Proposition.
My question is why
$$
\mathbb{P}(S_{n} = i,...,|S_{j+1}| = i_{j+1},|S_{j}| = 0) = p^{\frac{n-j}{2}+\frac{i}{2}}q^{\frac{n-j}{2}-\frac{i}{2}}
$$
and
$$
\mathbb{P}(S_{n} = -i,...,|S_{j+1}| = i_{j+1},|S_{j}| = 0) = p^{\frac{n-j}{2}-\frac{i}{2}}q^{\frac{n-j}{2}+\frac{i}{2}}
$$
I know that if $S_{n} = i$, then by the definition of $j$ we have $S_{j} = 0$ and $S_{j+1},\ldots,S_{n-1} > 0$; similarly, if $S_{n} = -i$, then $S_{j+1},\ldots,S_{n-1} < 0$.
I hope someone can help me.
| The probabilities in your last two displayed equations aren't quite right. It's not
$$
\mathbb P(S_n=\pm i,\ldots,|S_{j+1}|=i_{j+1},|S_j|=0)=p^{\frac{n-j}2\pm\frac i2}q^{\frac{n-j}2\mp\frac i2}
$$
but
$$
\mathbb P(S_n=\pm i,\ldots,|S_{j+1}|=i_{j+1}\;\big|\;|S_j|=0)=p^{\frac{n-j}2\pm\frac i2}q^{\frac{n-j}2\mp\frac i2}\;,
$$
that is, given that we know that $S_j=0$, the probability for the remaining sequence to occur is given by the right-hand side. This is the case because to reach $\pm i$ in $n-j$ steps, we need to take $\frac{n-j}2\pm\frac i2$ steps in the positive direction with probability $p$ and $\frac{n-j}2\mp\frac i2$ steps in the negative direction with probability $q$: The total number of steps is $\frac{n-j}2\pm\frac i2+\frac{n-j}2\mp\frac i2=n-j$, and the distance traveled is $\frac{n-j}2\pm\frac i2-\left(\frac{n-j}2\mp\frac i2\right)=\pm i$.
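As a sanity check on this step-counting, one can enumerate every path of a short walk in Python and confirm that each individual path from $0$ to $+i$ in $m=n-j$ steps has probability $p^{m/2+i/2}q^{m/2-i/2}$ (the values of $p$, $m$ and $i$ below are arbitrary illustrative choices):

```python
from itertools import product
from fractions import Fraction

p = Fraction(3, 5)          # arbitrary step-up probability
q = 1 - p
m, i = 4, 2                 # m = n - j steps, target level +i (m + i even)

# A path from 0 to +i in m steps has (m + i)/2 up-steps and (m - i)/2 down-steps.
up, down = (m + i) // 2, (m - i) // 2
formula = p**up * q**down

# Every individual path ending at +i therefore has exactly this probability.
paths_checked = 0
for steps in product([1, -1], repeat=m):
    if sum(steps) == i:
        prob = Fraction(1)
        for s in steps:
            prob *= p if s == 1 else q
        assert prob == formula
        paths_checked += 1
```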
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2813303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
kernel of quotient is quotient of kernel? Let $A,B,C$ be groups and $f:A\to B$ a homomorphism with $C\subset \ker f \subset A$.
Then $f$ induces a map on the quotient, $A/C\to B$.
Is it then true that $\ker (A/C\to B)= \ker(A\to B)/C~$?
I tried to prove it, but I am not sure if I did it right. The proof seems too tautological to me. Maybe I missed something; if so, what would be a sufficient condition to make the above hold?
Also, does anything change if $A,B$ are topological groups with $C\subset A$?
So far I came up with this:
If $aC\in \ker (A/C\to B)$, then $a\in \ker(A\to B)$ and thus $aC \in \ker(A\to B)/C$.
Conversely: if $aC\in \ker(A\to B)/C$, then $a\in \ker(A\to B)$. But then the image of $aC$ under $A/C\to B$ is $f(a)\in \ker(A\to B)$.
| Apart from $f(a)=1_B$ rather than $f(a)\in{\rm Ker}(A\to B)$, your proof looks correct.
When proving an equality like this, you do sometimes find that the proof looks tautological. This is because everything is just true by definition.
That is, ${\rm Ker}(A/C\to B)$ and ${\rm Ker}(A\to B)/C$ both consist precisely of those $aC$ for which $a\in {\rm Ker}(A\to B)$, so they are equal. Note that your if-and-only-if proof is clearer; I have just put it in one sentence to make the point.
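The identity can also be checked in a small concrete case; the choices $A=\mathbb{Z}/12$, $B=\mathbb{Z}/3$, $f(a)=a \bmod 3$ and $C=\{0,6\}$ in this Python sketch are illustrative assumptions:

```python
# A = Z/12 (additive), f: A -> Z/3 reduction mod 3, C = {0, 6} <= ker f
A = range(12)
f = lambda a: a % 3
ker_f = [a for a in A if f(a) == 0]                   # {0, 3, 6, 9}
C = (0, 6)

coset = lambda a: frozenset((a + c) % 12 for c in C)  # the coset a + C in A/C
cosets = {coset(a) for a in A}

# the induced map A/C -> B is well defined because C <= ker f:
for s in cosets:
    assert len({f(a) for a in s}) == 1

ker_induced = {s for s in cosets if f(min(s)) == 0}   # ker(A/C -> B)
ker_mod_C = {coset(a) for a in ker_f}                 # ker(A -> B)/C
assert ker_induced == ker_mod_C
```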
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2813427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
equation of a plane in space through 3 points We have the points $M(1,2,3)$, $N(2,1,5)$, $P(4,3,2)$. Is it correct to find the equation of the plane using the determinant $\Delta$
$$\Delta=\begin{array}{|cccc|}
x & y & z & 1\\
1& 2 & 3 & 1\\
2 & 1 & 5 & 1\\
4 & 3 & 2 & 1
\end{array}=0$$
If so, what is the intuition behind this method of finding equation of a plane?
Thanks in advance!
| $$\Delta=\begin{array}{|cccc|}
x & y & z & 1\\
1& 2 & 3 & 1\\
2 & 1 & 5 & 1\\
4 & 3 & 2 & 1
\end{array}=x\begin{array}{|ccc|}2&3&1\\1&5&1\\3&2&1\end{array}-y\begin{array}{|ccc|}1&3&1\\2&5&1\\4&2&1\end{array}+z\begin{array}{|ccc|}1&2&1\\2&1&1\\4&3&1\end{array}-\begin{array}{|ccc|}1&2&3\\2&1&5\\4&3&2\end{array}=0$$
Compare this with $P:Ax+By+Cz=D$. Each of the above $3\times 3$ determinants is then a coefficient of the equation of the plane.
Since you're given $M(1,2,3)$, $N(2,1,5)$ and $P(4,3,2)$, you can easily find out that $\vec{MN}=(1,-1,2)$ and $\vec{MP}=(3,1,-1)$. The normal vector to the plane is then $\vec{n}=\vec{MN}\times \vec{MP}=(-1,7,4)$. Define an arbitrary point $K(x,y,z)$ that is on the plane.
Expand $\Delta$: $$\Delta=x[2(5-2)-3(1-3)+1(2-3\cdot 5)]-y[(5-2)-3(2-4)+(4-20)]+z[(1-3)-2(2-4)+(6-4)]-[(2-15)-2(4-20)+3(6-4)]=0\\\implies-x+7y+4z-25=0\\\iff -1(x-1)+7(y-2)+4(z-3)=0\\\iff \vec{n}\,\bullet\vec{MK}=0.$$
This means the normal vector $\vec{n}$, defined by the cross product of two vectors obtained from the 3 points, is orthogonal to every vector lying in the plane. The calculation of $\Delta$ encodes exactly this reasoning.
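For a quick numerical cross-check of this reasoning, one can expand the $4\times4$ determinant along its first row in plain Python and compare it with the expanded plane equation:

```python
def det3(rows):
    # determinant of a 3x3 matrix given as three rows
    (a, b, c), (d, e, f), (g, h, i) = rows
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

M, N, P = (1, 2, 3), (2, 1, 5), (4, 3, 2)
rows = [M + (1,), N + (1,), P + (1,)]

def delta(x, y, z):
    # cofactor expansion of the 4x4 determinant along the row (x, y, z, 1)
    m = [det3([[r[j] for j in range(4) if j != col] for r in rows])
         for col in range(4)]
    return x*m[0] - y*m[1] + z*m[2] - m[3]

# Delta coincides with -x + 7y + 4z - 25 and vanishes at M, N, P
for pt in (M, N, P, (0, 0, 0), (2, -1, 3)):
    x, y, z = pt
    assert delta(x, y, z) == -x + 7*y + 4*z - 25
assert delta(*M) == delta(*N) == delta(*P) == 0
```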
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2813557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Problems with recursive definitions I'm having a bit of trouble understanding how to go about formulating recursive definitions. This has caused trouble on the following:
$(1)$ Give a recursive definition of the set of subsentences of a sentence $\phi$ of $L_1$ — i.e. give a recursive specification of the function $\phi \mapsto$ {$ \psi \ | \ \psi$ is a subsentence of $\phi$}.
| In order to define the function $\text{Sub}(ϕ) = \text { the set of subformulas of } ϕ$, we have to follow the recursive (or inductive) definition of formula :
(i) $\phi \text { is an atomic formula, i.e. }$ :
$\top, \bot, t_1=t_2 \text { and } P_n(t_1,\ldots, t_n), \text { for terms } t_i \text { and predicate symbol } P_n.$
In this case $\text {Sub}(\phi) = \{ \phi \}$.
(ii) $\phi \text { is a formula } \psi_1 \circ \psi_2, \text { where } \circ \in \{ \land, \lor, \to, \leftrightarrow \}.$
In this case $\text {Sub}(\phi) = \text {Sub}( \psi_1) \cup \text {Sub}( \psi_2) \cup \{ \psi_1 \circ \psi_2 \}.$
(iii) $\phi \text { is a formula } \lnot \psi$.
In this case $\text {Sub}(\phi) = \text {Sub}( \psi) \cup \{ \lnot \psi \}.$
And similar for quantifiers.
We have to use them in the general schema of Definition by Recursion :
Let mappings $H_{\circ} : A^2 → A$ and $H_¬ : A → A$ be given and let $H_{at}$ be a mapping from the set of atoms into $A$. Then there is exactly one mapping $F : \text {FORM} → A$ such that :
$F(ϕ) = H_{at}(ϕ)$ for $ϕ$ atomic,
$F((ϕ\circ ψ)) = H_{\circ}(F(ϕ),F(ψ))$,
$F((¬ϕ)) = H_¬(F (ϕ))$.
Above we have defined the specific functions to be used with $A=\text{FORM}$ to get $F=\text{Sub}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2813679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $\dot X = AX$ and find the time at which the area doubles. We have a linear operator $X(t): \mathbb{E}^2 \to \mathbb{E}^2$ such that
$$\dot X = AX\quad X(0)=\mathbf{1}$$
To be clear $X$ is a $2\times 2$-matrix. An ink spot is contained in $\mathbb{E}^2$ at $t=0$. At which time $t$, will the area of the image of the ink spot under $X(t)$, double for $A=\left(\begin{matrix}-1 & 0\\ 0 & -1\end{matrix}\right)$?
I know the solution is $X(t) = \exp(tA)$. Now the area scaling factor of a $2\times 2$ matrix is the absolute value of its determinant. So we have
$$|\det(X(t))|= |\det(\exp(tA))|= |\exp(\operatorname{tr}(tA))|= \exp(-2t)$$
For $t=0$ we have $1$. So we need to solve
$\exp(-2t) = 2$ that is $t= \frac{\ln 2}{-2}$.
Is this solution correct?
I know the solution is $X(t) = \exp(tA)$. Now the area scaling factor of a $2\times 2$ matrix is the absolute value of its determinant. So we have
$$|\det(X(t))|= |\det(\exp(tA))|= |\exp(\operatorname{tr}(tA))|= \exp(-2t)$$
For $t=0$ we have $1$. So we need to solve
$\exp(-2t) = 2$ that is $t= \frac{\ln 2}{-2}$.
Posting my working as an answer to close this question.
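One caveat worth noting: since $A=-I$ contracts areas ($\det X(t)=e^{-2t}<1$ for $t>0$), the doubling time $t=\frac{\ln 2}{-2}$ is negative. A quick Python check of the arithmetic:

```python
import math

# For A = -I we have exp(tA) = e^{-t} I, so the area factor is det X(t) = e^{-2t}.
area = lambda t: math.exp(-2 * t)

t_double = math.log(2) / -2
assert abs(area(t_double) - 2) < 1e-12   # the area is doubled ...
assert t_double < 0                      # ... but only at a negative time,
                                         # since A = -I shrinks areas forward in time
assert area(0) == 1
```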
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2813807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Substitution in a double integral I have trouble doing a substitution. It is one step in a longer demonstration. I need to compute this integral, for $\epsilon \ll 1$:
$$ \\ \int_{ [0,1-\epsilon] } \int_{ [0,1-\epsilon] } \frac 1 {1-xy} \, dx dy$$
And for that I need to do the substitution :
$$ x = u-v \, ; \, y = u + v $$
Can you help me do it? I'm not asking for the whole calculation; I have the answer in front of me. My problem is that this is one of the first multiple integrals I'm computing, and I struggle to understand what depends on what, what I need to write as a function of the rest, etc.
For example,
*
*I know that I need to compute the Jacobian determinant; this I have done: it gives 2.
*I know that I need to find the lines that bound my region, and so I have written:
$$
\left\{
\begin{array}{c}
y = 0 \\
x = 0 \\
y = 1 - \epsilon \\
x = 1 - \epsilon
\end{array}
\right.
\implies
\left\{
\begin{array}{c}
u= -v \\
u = v \\
u+v = 1 - \epsilon \\
u-v = 1 - \epsilon
\end{array}
\right.
$$
*
*I know that I need to find the boundaries for $u$ and $v$:
$$
\left\{
\begin{array}{c}
0 < x < 1 - \epsilon \\
0 < y < 1 - \epsilon
\end{array}
\right.
\implies
\left\{
\begin{array}{c}
0 < u < 1 - \epsilon \\
-1 + \epsilon < v < 1 - \epsilon
\end{array}
\right.
$$
I would be so grateful if you could help me figure out what I need to do in order to have a general method for dealing with problems like this.
Thank you!
P.S.: Please, so many times people have told me to just draw a graph... I have done it!!!! I just need some help at first, and I'm sorry to ask you this, but it's really necessary for me... I have been trying for 2 days now :(
| On the $uv$-plane, we can graph the intersection of the following inequalities
$$
\begin{align}
u-v > 0 \\
u-v < 1 - \epsilon \\
u+v > 0 \\
u+v < 1 - \epsilon
\end{align}
$$
Here is what the intersection looks like.
When $u \in \left[0, \dfrac{1-\epsilon}{2}\right]$, it follows then that $v \in [-u,u]$. Otherwise, when $u \in \left[\dfrac{1-\epsilon}{2}, 1-\epsilon\right]$, it follows that $v \in [(u+\epsilon) - 1, 1 - (u+\epsilon)]$. Setting up the integrals:
$$I = \int_{0}^{(1-\epsilon)/2} \int_{-u}^{u} \frac{2}{1-(u^2-v^2)} \, dvdu + \int_{(1-\epsilon)/2}^{1-\epsilon} \int_{(u+\epsilon)-1}^{1-(u+\epsilon)} \frac{2}{1-(u^2-v^2)} \, dvdu$$
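A numeric sanity check of the change of variables (plain-Python midpoint sums; the value $\epsilon=1/2$ and the grid size are arbitrary illustrative choices):

```python
eps = 0.5
a = 1 - eps          # upper limit 1 - eps
n = 400              # grid resolution

def midpoints(lo, hi, n):
    h = (hi - lo) / n
    return [lo + (k + 0.5) * h for k in range(n)], h

# original integral of 1/(1 - xy) over the square [0, a] x [0, a]
xs, hx = midpoints(0, a, n)
I_orig = sum(1 / (1 - x * y) for x in xs for y in xs) * hx * hx

# transformed integrand 2/(1 - u^2 + v^2) over the rotated square
def inner(u, lo, hi):
    vs, hv = midpoints(lo, hi, n)
    return sum(2 / (1 - u*u + v*v) for v in vs) * hv

us1, hu1 = midpoints(0, a / 2, n)
us2, hu2 = midpoints(a / 2, a, n)
I_new = sum(inner(u, -u, u) for u in us1) * hu1 \
      + sum(inner(u, u - a, a - u) for u in us2) * hu2

assert abs(I_orig - I_new) < 1e-3
```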
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2813890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Groups, G finite group of order 2p, p>2 prime, and N a normal subgroup of index p in G; prove G cyclic I think I proved this, but my proof isn't really elegant.
I assumed by contradiction that there does not exist an element of order 2p. Then, by Lagrange, all non-identity elements have order 2 or p.
Let N be $\{e,x\}$.
First I proved that there cannot be another element of order 2, which implies that all the remaining elements have order p.
Let $g_1$ be an element of order p.
Then I looked at $G/N=\{N,g_1N,g_1^2N,\ldots,g_1^{p-1}N\}$ and proved that all these cosets are distinct.
Now, because $g_1$ has order p, there is another element $g$ which belongs neither to $\langle g_1\rangle$ nor to $N$.
Then I proved that $gN$ is not equal to any of the cosets mentioned above, contradicting the fact that there are only p cosets.
I don't like this proof; it does not seem a very elegant way to prove the claim. Do you have another proof?
| Since $N$ has index $p$, $G/N$ has order $p$ and is therefore cyclic. Let $gN$ generate $G/N$.
In particular $g^pN=(gN)^p=N$. Suppose $G$ is not cyclic, then the order of $g$ must be $p$ (why?).
Since $N$ is normal in $G$, $g^{-1}xg=x$ (you might be able to say this without justification, but it could do with a justification).
Therefore the order of $gx$ is $2p$ (why?), so $G$ is cyclic.
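The key step, that $gx$ has order $2p$ when $g$ has order $p$ and the commuting element $x$ has order $2$, can be illustrated in the cyclic case $G=\mathbb{Z}/2p$ written additively ($p=5$ is an arbitrary choice):

```python
p = 5
n = 2 * p                      # |G| = 2p

def order(a):
    # order of a in the additive group Z/n
    k, s = 1, a % n
    while s != 0:
        s = (s + a) % n
        k += 1
    return k

x = p                          # the unique element of order 2 in Z/2p
g = 2                          # an element of order p, since 2p / gcd(2, 2p) = p
assert order(x) == 2
assert order(g) == p
assert order(g + x) == 2 * p   # g + x generates all of Z/2p
```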
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2813934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the meaning of "has partial derivatives everywhere"? I am reading Courant's "Differential and Integral Calculus Vol.2".
I am confused about something: Courant says that the function defined by $u(x,y)=\frac{2xy}{x^2 +y^2}$ for $(x,y)\neq(0,0)$ and $u(0,0)=0$ is not continuous but has partial derivatives everywhere.
I am assuming that "has partial derivatives everywhere" means "is differentiable". I could be wrong.
I've seen somewhere else that a function is differentiable at $(x_0, y_0)$ if there is $(\alpha_1, \alpha_2)\in \mathbb{R}^2$ such that:
$$\lim_{(h,k)\to (0,0)} \frac{f(x_0+ h, y_0 + k )-f(x_0,y_0) - \alpha_1 h - \alpha_2 k }{\sqrt{h^2 + k^2}}=0$$
Applying to our function, we have:
$$\lim_{(h,k)\to (0,0)} \frac{2hk}{(h^2 + k^2)\sqrt{h^2 + k^2}}-\frac{\alpha_1 h +\alpha_2 k}{\sqrt{h^2 + k^2}}$$
With polar coordinates $h = r\cos\theta$, $k = r \sin\theta$:
$$\lim_{r\to 0} \frac{2\cos \theta \sin \theta}{r}- \alpha_1 \cos \theta - \alpha_2 \sin \theta = \lim_{r\to 0} \frac{\sin{2\theta}}{r} - \alpha_1 \cos \theta - \alpha_2 \sin \theta $$
I guess this shows that the limit can only exist where $\sin 2\theta$ equals $0$, and hence does not exist. So what did Courant mean by "partial derivatives existing everywhere"? It could be $\mathbb{R}^2\setminus\{(0,0)\}$, but I wouldn't call this "everywhere".
Also, when defining $u(0,0)=0$, what does this mean for the partial derivatives? What does it mean for $u_x(0,0)$ and $u_y(0,0)$?
| "Has partial derivatives everywhere" means quite literally that it has partial derivatives everywhere. In other words, at every point, the partial derivatives of $u$ with respect to each variable exist. In other words, for all $(x,y)\in\mathbb{R}^2$ the limits $$u_x(x,y)=\lim_{h\to 0}\frac{u(x+h,y)-u(x,y)}{h}$$ and $$u_y(x,y)=\lim_{h\to 0}\frac{u(x,y+h)-u(x,y)}{h}$$ exist.
So, this doesn't have anything at all to do with $u$ being differentiable in your sense, at least not directly. You are correct that $u$ is not differentiable at $(0,0)$ (it is not even continuous there!), but the partial derivatives $u_x(0,0)$ and $u_y(0,0)$ still exist. Try computing them directly from the limit definitions above!
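A quick numeric illustration of the two limits above: both difference quotients at the origin are identically zero, even though $u$ is not continuous there:

```python
def u(x, y):
    return 0.0 if x == y == 0 else 2*x*y / (x*x + y*y)

for h in (1e-1, 1e-4, 1e-8):
    assert (u(h, 0) - u(0, 0)) / h == 0.0    # u_x(0, 0) = 0
    assert (u(0, h) - u(0, 0)) / h == 0.0    # u_y(0, 0) = 0

# no continuity at the origin: u is identically 1 on the line y = x
assert u(0.5, 0.5) == 1.0 and u(1e-8, 1e-8) == 1.0
```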
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2814074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Determine the convergence Let
$$f(x)=\lim_{n\rightarrow+\infty}\frac{x}{1+x^{n}},\hspace{5mm}\forall x\geq0$$
What I want is to graph this function, but how do I do so analytically? For $n = 0$ and $n = 1$ the expressions are a line and a hyperbola, but as $n \rightarrow \infty$, 'apparently' $f(x)\rightarrow 0$.
| Recall that for $x \ge 0$ we have
$$\lim_{n\to\infty} x^n = \begin{cases}
0, & \text{if $x < 1$} \\
1, & \text{if $x = 1$} \\
+\infty, & \text{if $x > 1$} \\
\end{cases}$$
Now clearly $$\lim_{n\to\infty} \frac{x}{1+x^n} = \begin{cases}
x, & \text{if $x < 1$} \\
\frac12, & \text{if $x = 1$} \\
0, & \text{if $x > 1$} \\
\end{cases}$$
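A quick numeric check of the three regimes with a large $n$:

```python
f_n = lambda x, n: x / (1 + x**n)

n = 1000
assert f_n(0.5, n) == 0.5            # x < 1: x^n vanishes, limit is x
assert f_n(1.0, n) == 0.5            # x = 1: value is exactly 1/2 for every n
assert f_n(2.0, n) < 1e-300          # x > 1: limit is 0
```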
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2814203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Suppose $f(x)$ monotonically decreases on $[0,+\infty)$ Suppose $f(x)$ monotonically decreases on $[0,+\infty)$ and $$\lim_{x\to+\infty}f(x)=0 $$
Then prove that $\sum_{n=1}^{\infty} f(n) $ converges if and only if $\int_{0}^{\infty}f(x)\text dx $ converges.
Actually I listed an inequality and almost thought the problem could be solved without the condition $$\lim_{x\to+\infty}f(x)=0$$
that is
$$f(k)=\int_{k-1}^{k}f(k)\text dx\le\int_{k-1}^{k}f(x)\text dx\le\int_{k-1}^{k}f(k-1)\text dx=f(k-1) $$
then we have
$$\sum_{k=2}^{\infty}f(k) \le \sum_{k=2}^{\infty} \int_{k-1}^{k}f(x)\text dx \le \sum_{k=2}^{\infty} f(k-1)=\sum_{k=1}^{\infty} f(k) $$
So if $\sum_{n=1}^{\infty} f(n) $ converges, then $\sum_{k=2}^{\infty} \int_{k-1}^{k}f(x)\text dx=\int_{1}^{\infty }f(x)\text dx$ converges, and so does $\int_{0}^{\infty }f(x)\text dx$.
Also, if $\int_{0}^{\infty }f(x)\text dx$ converges, then $\int_{1}^{\infty }f(x)\text dx$ converges, and thus $\sum_{k=2}^{\infty}f(k)$ converges.
With this process, I think I have solved the problem without the condition $\lim_{x\to+\infty}f(x)=0 $. But since this is a question from a serious exam, I am not so sure about my proof. Could anyone help confirm it or show some other ways to prove this?
What you did is right. The fact that $\lim_{x\to\infty}f(x)=0$ is irrelevant for the proof that $\displaystyle\sum_{n=1}^\infty f(n)$ converges if and only if $\displaystyle\int_1^\infty f(x)\,\mathrm dx$ converges. Of course, when they converge, then $\lim_{x\to\infty}f(x)=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2814352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculate the second derivative of this rational function I'm studying for my math exam and I came across a problem in one exercise.
Calculate the second derivative of $$f'(x)=\frac{2x^3-3x^2}{(x^2-1)^2}$$
I just can't seem to calculate the second derivative of this rational function. Could someone help me?
I just can't seem to get the right answer.
See my calculations and what the right answer should be according to my teacher:
Thanks!
| I split the fraction into two parts, then used the product rule on both parts.
$\dfrac{2x^3-3x^2}{(x^2-1)^2} = 2x^3\cdot(x^2-1)^{-2} - 3x^2\cdot(x^2-1)^{-2}$
After product rule on both terms we have
$\dfrac{12x^3-8x^4}{(x^2-1)^3} + \dfrac{6x^2-6x}{(x^2-1)^2}$
$\dfrac{12x^3-8x^4}{(x^2-1)^3} + \dfrac{(6x^2-6x)\cdot(x^2-1)}{(x^2-1)^{2}\cdot(x^2-1)}$
After simplifying the numerators and factoring out a $-2x$, we are left with
$f''(x)=\dfrac{-2x(x^3-3x^2+3x-3)}{(x^2-1)^{3}}$
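As a cross-check, one can compare central differences of the given $f'$ against the claimed $f''$ away from the poles $x=\pm1$:

```python
def f1(x):   # the given first derivative
    return (2*x**3 - 3*x**2) / (x**2 - 1)**2

def f2(x):   # the claimed second derivative
    return -2*x*(x**3 - 3*x**2 + 3*x - 3) / (x**2 - 1)**3

# central difference of f1 should match f2 away from the poles x = +/- 1
h = 1e-6
for x in (0.5, 2.0, -3.0, 0.1):
    numeric = (f1(x + h) - f1(x - h)) / (2 * h)
    assert abs(numeric - f2(x)) < 1e-4 * (1 + abs(f2(x)))
```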
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2814449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Satisfying explanation of Aristotle's Wheel Paradox. The paradox:
We have a circle, and inside it another circle with a smaller radius. They are concentric.
If the circle makes a full turn without sliding, both the smaller and the bigger circle make a full turn too. If we assume that the distance traveled is equal to the circumference of each circle, we get that the smaller circle's radius is equal to the bigger one's.
Unsatisfying Solutions I found:
*
*"Do not assume that the smaller circle's circumference is equal to the distance traveled, since the surface that contacts the ground is the bigger one." // Okay, but it does not explain the paradox; it only identifies the wrong assumption (and does not even explain why it is wrong).
*It is undeniable that every point on both the smaller and the bigger circle will contact exactly one point on its path. Therefore we can think of this as a bijective map, and the smaller circle is in bijection with the bigger one. (Okay, but ....)
Question: What is the true answer? What is wrong with the relationship between the circumference of a circle and the path it traces?
It's really quite simple and doesn't even need math to show.
Take any given point on the outside larger circle as it moves along, and do the same for the smaller one. Neither one travels in a straight line from point A to point B, and they don't travel the same distance. It seems like they do, but they don't. That's the key.
For proof: stick a pencil in the outermost part of the larger circle and one in the smaller one, rotate the circle, and notice the paths traced out.
They will not be straight lines but curved lines (cycloids), and the curved lines are not the same length for both. The bigger the circle, the longer the path traced out from A to B.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2814598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
Is every open interval a union of half open intervals? I am reading lower limit topology on Wikipedia, which states that the lower limit topology
[...] is the topology generated by the basis of all half-open intervals $[a,b)$, where a and b are real numbers. [...] The lower limit topology is finer (has more open sets) than the standard topology on the real numbers (which is generated by the open intervals). The reason is that every open interval can be written as a (countably infinite) union of half-open intervals.
I cannot see how to write $(a,b)$ as a countably infinite union of half-open intervals.
| $$(a,b) = \bigcup_{n=1}^\infty \, \left[\, \left(1-\frac{1}{n}\right)\, a + \frac{1}{n} b ,\, b\right)
$$
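Concretely, the $n$-th interval above is $\left[a+\frac{b-a}{n}, b\right)$, and a point $x\in(a,b)$ is covered as soon as $n\ge\frac{b-a}{x-a}$; a small Python check of this covering (the endpoints and sample points are arbitrary):

```python
import math

a, b = 2.0, 5.0    # any a < b; the n-th interval is [a + (b - a)/n, b)

def covering_index(x):
    # smallest n with a + (b - a)/n <= x, i.e. n >= (b - a)/(x - a)
    return max(1, math.ceil((b - a) / (x - a)))

for x in (2.003, 2.75, 3.5, 4.999):
    n = covering_index(x)
    left = (1 - 1/n) * a + b / n           # = a + (b - a)/n
    assert left <= x < b                   # x lies in the n-th interval
    if n > 1:                              # and no earlier interval contains x
        assert a + (b - a) / (n - 1) > x
```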
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2814703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 4
} |
Expressive adequacy After reading a few other posts on this site, I am still struggling to show that certain sets of connectives are not functionally complete.
(a) Show that {$↔,\bot, ∧$} is functionally complete, but that no proper subset is.
(a) is solved! :)
(b) Assume that $c$ is a 2-place connective. Show that if either $f_c(\top, \top) = \top$ or $f_c(\bot, \bot) = \bot$, then {$c$} is not complete.
For (b), I understand that {$\uparrow$} and {$\downarrow$} are the only 2-place connectives that are functionally complete by themselves, and that $f_{\uparrow}(\top, \top) = \bot$ and $f_{\uparrow}(\bot, \bot) = \top$; the same holds for $f_{\downarrow}$. I just can't see how to apply this to prove that $f_c(\top, \top) = \top$ means {$c$} is incomplete.
| As Noah points out in the comments, you know that $c$ cannot be either the $\uparrow$ and $\downarrow$, and since those are the only two connectives that are by themselves complete, you know $c$ is not complete.
However, I doubt that this proof is 'acceptable', or at least it is probably not what was expected of you. Probably you had to give a more direct proof of why such a $c$ is not complete. Well, think about it. Suppose you have any number of connectives $c$, applied to any number of instances of $\top$, e.g. $(\top c \top) c (\top c (\top c \top))$ ... if you evaluate this, you'll of course get $\top$ for any such expression; you'll never get $\bot$.
Now, to make that idea into a rigorous proof, use induction: prove by (strong) induction on the number of operators $c$ in any statement $\phi$ that is composed of $c$'s and $\top$, that $\phi$ will always evaluate to $\top$. I'll leave the details to you.
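The induction can also be checked exhaustively for this small case: for every binary truth table with $f_c(\top,\top)=\top$, the set of truth values reachable from $\top$ alone never contains $\bot$:

```python
from itertools import product

# all 2-place connectives c with f_c(T, T) = T
for table in product([True, False], repeat=3):
    f_c = {(True, True): True,
           (True, False): table[0],
           (False, True): table[1],
           (False, False): table[2]}

    # values of formulas built from c and Top only: closure of {T} under f_c
    vals = {True}
    while True:
        new = {f_c[a, b] for a in vals for b in vals} | vals
        if new == vals:
            break
        vals = new
    assert vals == {True}    # falsum is never reachable, so {c} is incomplete
```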
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2814838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proving $y = x + 1$ doesn't have a quantifier-free formula in $(\mathbb{Z}, =, <)$ Assume the signature is $(\mathbb{Z}, =, <)$ with the natural interpretation, and consider whether the predicate $y = x + 1$ is representable as a quantifier-free formula in this interpretation.
First, clearly, it's representable with quantifiers: $x < y \land \forall z (x < z \rightarrow (y = z \lor y < z))$.
On the other hand, intuitively, any quantifier-free formula is just a boolean combination of atomic formulas of the form $x_i = x_j$ or $x_i < x_j$ (where $x_i, x_j$ are all either $x$ or $y$), and since any formula is by definition finite, there is just no sufficient expressive power to enumerate all the $(x, x+1)$ pairs. But how to prove this rigorously?
| Quantifier-free formulas are preserved (and reflected) by embeddings. That is, if $M$ and $N$ are $L$-structures, $f\colon M\to N$ is an embedding, $\varphi(x_1,\dots,x_n)$ is a quantifier-free formula, and $a_1,\dots,a_n$ is a tuple from $M$, $$M\models \varphi(a_1,\dots,a_n) \iff N\models \varphi(f(a_1),\dots,f(a_n)).$$
This is easy to prove: it follows directly from the definition of embedding when $\varphi$ is an atomic formula, and then proceed by induction on the complexity of $\varphi$.
Now the map $x\mapsto 2x$ is an embedding $(\mathbb{Z},=,<)\to (\mathbb{Z},=,<)$, but it does not preserve the relation $y = x+1$, so this relation is not quantifier-free definable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2814942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Understanding partial derivatives related to the Cauchy-Riemann equations So I am reading a soft introductory book on complex variables (I have complex analysis next year). The book I am reading is called "Complex Variables Demystified". I am currently reading a chapter whose main goal is to arrive at the Cauchy-Riemann equations. I'm having problems following the deduction; specifically, given a function $f$ of a complex variable $z=x+iy$, the author starts out by expressing the partial derivatives of $f$ with respect to $z$ and $\bar z$. To do this, he first arrives at:
$\frac{\partial}{\partial z}=\frac{\partial x}{\partial z}\frac{\partial}{\partial x}+\frac{\partial y}{\partial z}\frac{\partial}{\partial y}$, and similarly $\frac{\partial}{\partial \bar z}=\frac{\partial x}{\partial \bar z}\frac{\partial}{\partial x}+\frac{\partial y}{\partial \bar z}\frac{\partial}{\partial y}$.
At this point in the text I am a little bit lost. First of all, how did he deduce these equations? (Where do they come from?) And what do they mean? Is the author expressing a general differential operator $\frac{\partial}{\partial z}$ of a complex variable $z=x+iy$ in which $x$ and $y$ are again functions $x=x(z,\bar z)$, $y=y(z,\bar z)$ of $z$ and $\bar z$? And what does it mean in the two equations above when $\frac{\partial}{\partial x}$ is written without anything in the numerator? Is this again a general differential operator?
I'm essentially just trying to understand how the author gets to the final expressions here:
I would be happy if anyone could explain what's going on as simply as possible :)
-Thanx
| is this correct? :)
and this? Thanks for your response by the way!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2815083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Constrained system of linear equations I have a question that might or might not be trivial for experts from the linear programming world.
We have a system of linear equations that we want to solve:
$A\cdot x=0$, with the constraint that all variables are non-negative: $x_i \geq 0 ~\forall i$.
The system is underdetermined, i.e. $A$ has more columns than rows, more variables than equations.
Question:
How do we determine which $x_i$ can never be positive (i.e. are always zero) in the solution set? That means: determine those $x_i$ for which
$x_i = 0$ for every solution.
Thanks!
If the problem is an actual numerical problem for which you have a matrix $A$, then you can set it up as a convex optimization problem (see Convex Optimization by Boyd and Vandenberghe).
Such a system can be solved by good convex solvers like cvx, or sedumi and yalmip in Matlab, or Pyopt in Python. In this case, since $-Ix \le 0$ is the only inequality and both the objective and the constraints are linear, this is a simple linear program.
In your case,
$$
\begin{align}
& \underset{x}{\text{minimize}}
& & 1^T x \\
& \text{subject to}
& & -I_{n,m}x \le 0 \\
& & & Ax = 0
\end{align}
$$
This finds a nonnegative solution with minimal coordinate sum subject to the constraints; note, though, that $x = 0$ is always feasible for this system, so to decide whether a particular $x_i$ can be positive one should instead maximize $x_i$ subject to the same constraints.
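Since $x=0$ is always feasible for $Ax=0$, $x\ge0$, one variant for actually detecting the variables that are forced to zero is to maximize each $x_i$ in turn over the normalized solution set and check whether the optimum is $0$. A sketch of that variant, assuming `scipy.optimize.linprog` is available; the matrix $A$ is a small made-up example:

```python
from scipy.optimize import linprog

A = [[1, -1, 0, 0],
     [0,  0, 1, 1]]
n = len(A[0])

forced_zero = []
for i in range(n):
    c = [0.0] * n
    c[i] = -1.0                      # maximize x_i  <=>  minimize -x_i
    res = linprog(c, A_eq=A, b_eq=[0, 0], bounds=[(0, 1)] * n)
    if res.fun > -1e-9:              # sup of x_i over the solution set is 0
        forced_zero.append(i)

# here x_1 = x_2 is free, while x_3 + x_4 = 0 with x >= 0 forces x_3 = x_4 = 0
assert forced_zero == [2, 3]
```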
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2815185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Proving an analytic function $f$ is bounded on $|z|\le1/2$ independent of $f$ subject to certain conditions
Let $f:D(0,1) \to \mathbb C$ be analytic. Show that there is a constant $C$ independent of $f$ such that if $f(0)=1$ and $f(z) \notin (-\infty,0]$ for all $z \in D(0,1)$, then $|f(z)| \le C$ whenever $|z| \le 1/2$.
I have (finally) figured out how to prove this, and I ended up with $C=9$. I am curious what the "best" bound is, though, and what the best approach would be for proving it. In other words, what is the supremum of $|f(z)|$ over all such analytic functions $f$ and all $|z|\le 1/2$, subject to the two conditions above?
| Take $f(z) = \left( \dfrac{1 + z}{1 - z} \right)^2\ (|z| < 1)$, then $f(0) = 1$ and$$
\left| f\left( \frac{1}{2} \right) \right| = \left| \frac{1 + \dfrac{1}{2}}{1 - \dfrac{1}{2}} \right|^2 = 9.
$$
Thus $9$ is indeed the tightest bound.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2815268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Expected value of Brownian motion when it is less that a given number:$E[W_t\mathbb{1}_{(W_t \leq a)}] $ I want to find $E[W_t\mathbb{1}_{(W_t \leq a)}] $, where $W_t$ is Brownian motion and $a \in \mathbb{R}$. I thought that since $W_t \sim N(0,t)$, that its pdf would be $\frac{1}{\sqrt{2\pi t}}e^{-x^2/2t}$, and tried to use this to obtain:
$$\begin{aligned}
E[W_t\mathbb{1}_{(W_t \leq a)}] & = \int_{-\infty}^a\frac{1}{\sqrt{2\pi t}}e^{-x^2/2t}\,dx\\
& = \int_{-\infty}^{a/\sqrt{t}}n(y) dy\\
& = N(a/\sqrt{t})
\end{aligned}$$
Where I've used the change of variable $y = x/\sqrt{t}$,
and $n(x)$ is the standard normal density and $N(x)$ is its CDF. I have been told that this is wrong, but I'm not sure why, or how to do it correctly. Thanks!
What you have calculated is $E[I_{\{W_t \leq a\}}]$ and not $E[W_tI_{\{W_t \leq a\}}]$. Multiply the integrand by $x$ and then integrate.
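Carrying this out gives $E[W_t\mathbb{1}_{(W_t \leq a)}]=-\sqrt{t/(2\pi)}\,e^{-a^2/(2t)}$, using $\int x e^{-x^2/2t}\,dx=-t\,e^{-x^2/2t}$. A numeric check with illustrative values of $t$ and $a$:

```python
import math

t, a = 2.0, 0.7

# closed form: E[W_t 1_{W_t <= a}] = -sqrt(t / (2 pi)) * exp(-a^2 / (2 t))
closed = -math.sqrt(t / (2 * math.pi)) * math.exp(-a * a / (2 * t))

# midpoint quadrature of int_{-inf}^{a} x * pdf(x) dx (tail truncated at -40)
lo, n = -40.0, 200_000
h = (a - lo) / n
total = 0.0
for k in range(n):
    x = lo + (k + 0.5) * h
    total += x * math.exp(-x * x / (2 * t))
quad = total * h / math.sqrt(2 * math.pi * t)

assert abs(quad - closed) < 1e-6
assert closed < 0     # the truncated mean is always negative here
```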
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2815405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Shortest distance between parabola and point
Find the shortest distance between the parabola defined by $y^2 = 2x$ and a point $ E:= (1.5, 0)$.
I can't use the distance formula because I'm missing a point $(x, y)$ to plug into it. So, instead, I use the normal to the parabola that passes through the point $E$, which is how the shortest distance to a point is attained.
$$y - y_1 = m(x - x_1)$$
The slope of the normal is $\frac{1}{y_1}$ by using implicit differentiation and that's where I'm stuck, because I plug the point E into it and I get
$$y_1^2=x_1-1.5$$
How do I prove the shortest distance is $\sqrt{2}$?
| Given a point $P=\left(\frac{y^2}2,y\right)$ of your parabola, consider the line segment joining $P$ to $C=\left(\frac32,0\right)$. The slope of this line segment is $\frac{2y}{y^2-3}$. And the slope of the tangent to the parabola at $P$ is $\frac1y$. Since two lines are orthogonal if and only if one of them is horizontal and the other one is vertical or when the product of their slopes is $-1$, these lines are orthogonal if and only if $y=0$ or $\frac2{y^2-3}=-1$, which means that $y=0$ or that $y=\pm1$. Forget $0$: that's a local maximum. So, the distance from the parabola to $C$ is$$\left\|\left(\frac12,1\right)-\left(\frac32,0\right)\right\|=\sqrt2.$$
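A brute-force numeric confirmation of the minimum, scanning the parametrization $\left(\frac{y^2}{2}, y\right)$:

```python
import math

def dist(y):
    # distance from the parabola point (y^2 / 2, y) to C = (3/2, 0)
    return math.hypot(y * y / 2 - 1.5, y)

best = min(dist(k / 10_000) for k in range(-30_000, 30_001))
assert abs(best - math.sqrt(2)) < 1e-6      # minimum distance is sqrt(2)
assert abs(dist(1) - math.sqrt(2)) < 1e-12  # attained at y = +/- 1
assert dist(0) == 1.5                       # y = 0 is only a local extremum
```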
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2815493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 7,
"answer_id": 5
} |
Simplifying $e^{2x+1/2}$ So I was taking some derivatives and the question was
$e^{2x+\frac{1}{2}}$
convert this to $a\cdot b^x$ where $a$ and $b$ are constants.
This is apparently needed to take the derivative of it without using the chain rule.
Any idea how to tackle this? Tried to manipulate it but I always end up with both of the constants having powers.
| $$e^{2x+\frac12}=e^{2x}\cdot e^{\frac12}=\sqrt e\cdot(e^2)^x$$So $a=\sqrt e, b=e^2$
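A one-line numeric check of the rewriting (added here for illustration only):

```python
import math

x = 0.37  # arbitrary test point
lhs = math.exp(2 * x + 0.5)       # e^(2x + 1/2)
a = math.sqrt(math.e)             # a = sqrt(e)
b = math.e ** 2                   # b = e^2
rhs = a * b ** x                  # a * b^x
print(lhs, rhs)
```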
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2815671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Automorphism of power series Let us consider an endomorphism $\mathbb{C}[[x_1, ..., x_n]] \to \mathbb{C}[[x_1, ..., x_n]]$, $g(x_1, ..., x_n) \mapsto g(f_1, ..., f_n)$, where $f_1, ..., f_n \in \mathbb{C}[[x_1, ..., x_n]]$. My homework is to show that it is an automorphism iff $J = \det\left(\frac{\partial f_i}{\partial x_j}\right)$ has a nonzero free term (which is to say, iff it is a unit). I do not know where to begin, so any hint would be appreciated.
| I’m real handy with one-variable series, and any suggestion I give here may be off the mark for many-variable series.
But I would prepare the situation by composing with the linear inverse of the Jacobian matrix. That is, form $J(\mathbf 0)$, take the inverse of this, and then take the linear substitution whose matrix is this $J^{-1}$. By composing with the original, you get $\{f_i=x_i+\text{higher}\}$. Finding the inverse of this should be much easier, conceptually at least, than the task for the original.
Alternatively, are you comfortable with Newton-Raphson for maps $\Bbb C^n\to\Bbb C^n$? Your given $n$-tuple of series $f$ represents a self-mapping of $\Bbb C[[z_1,\cdots,z_n]]$, by $g\mapsto f\circ g$. You want a $g$ that solves the equation $f\circ g=\mathbf{id}$, where $\mathbf{id}$ is the identity $n$-tuple mentioned by @ancientmathematician, $\mathbf{id}_i=x_i$. For single-variable series, Newton-Raphson is an extremely quick method of getting $\,f^{-1}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2815776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does $\sum\limits_{n=1}^\infty \ln\left(\frac{p_n}{p_n - 1}\right)$ converge? Suppose $p_n$ is the $n$-th prime number. Does $\sum\limits_{n=1}^\infty \ln\left(\frac{p_n}{p_n - 1}\right)$ converge?
Where did this question arise from:
I was trying to find $\inf_{n \in \mathbb{N}} \frac{\phi (n)}{n}$, where $\phi$ is Euler totient function. If $p$ is prime, then $\frac{\phi (p^n)}{p^n} = \frac{p - 1}{p}$. As $\phi$ is multiplicative, $\frac{\phi(n)}{n} = \prod\limits_{p\mid n;\ p \text{ prime}} \frac{p - 1}{p}$. That means that $ \inf_{n \in \mathbb{N}} \frac{\phi (n)}{n} = \prod\limits_{n = 1}^\infty \frac{p_n - 1}{p_n}$. And that results in $\inf_{n \in \mathbb{N}} \frac{\phi (n)}{n} = 0$ iff $\sum\limits_{n=1}^\infty \ln\left(\frac{p_n}{p_n - 1}\right)$ diverges.
| Note $\ln(p/(p-1)) = \ln(1/(1 - 1/p))$, so $\sum_{p\leq x} \ln(p/(p-1)) = \ln(\prod_{p\leq x} 1/(1-1/p))$. Intuitively, $\prod_p 1/(1-1/p) = \zeta(1) = \infty$, and this calculation can be justified. Thus, letting $x\rightarrow \infty$, we get $\sum_p \ln(p/(p-1)) = \ln(\prod_p 1/(1-1/p)) = \ln(\infty) = \infty$.
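The divergence is very slow: by Mertens' theorem the partial sums grow like $\log\log x + \gamma$. A small illustration (added here, not part of the original answer), sieving primes with the standard library:

```python
import math

def prime_log_sum(N):
    """Sum of log(p/(p-1)) over primes p <= N, via a simple sieve."""
    is_prime = bytearray([1]) * (N + 1)
    is_prime[:2] = b"\x00\x00"
    for p in range(2, int(N ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return sum(math.log(p / (p - 1)) for p in range(2, N + 1) if is_prime[p])

s1 = prime_log_sum(10 ** 3)
s2 = prime_log_sum(10 ** 5)
print(s1, s2)  # keeps growing, roughly like log(log(N)) + Euler's gamma
```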
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2815851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Prove that if ${x_1, x_2, x_3}$ are roots of ${x^3 + px + q = 0}$ then ${x_1^3+x_2^3 + x_3^3 = 3x_1x_2x_3}$ How to prove that ${x_1^3+x_2^3 + x_3^3 = 3x_1x_2x_3}$ holds in case ${x_1, x_2, x_3}$ are roots of the polynomial?
I've tried the following approach:
If $x_1$, $x_2$ and $x_3$ are roots then
$$(x-x_1)(x-x_2)(x-x_3) = x^3+px+q = 0$$
Now find the coefficient near the powers of $x$:
$$
x^3 - (x_1 + x_2 + x_3)x^2 + (x_1x_2 + x_1x_3 + x_2x_3)x - x_1x_2x_3 = x^3+px+q
$$
That means that I can write a system of equations:
$$
\begin{cases}
-(x_1 + x_2 + x_3) = 0 \\
x_1x_2 + x_1x_3 + x_2x_3 = p \\
- x_1x_2x_3 = q
\end{cases}
$$
At this point I got stuck. I've tried to raise $x_1 + x_2 + x_3$ to 3 power and expand the terms, but that didn't give me any insights. It feels like I have to play with the system of equations in some way but not sure what exact.
| If $ x_1,x_2,x_3 $ are roots of $ x^3+p x+q=0 $ then $ x_1+x_2+x_3 = 0 $
If $ x_1+x_2+x_3 = 0 $ then $ x_3 = -(x_1+x_2) $ and
$ x_1^3+x_2^3+x_3^3 = x_1^3+x_2^3+(-1)^3(x_1+x_2)^3 = -3(x_1^2x_2+x_1x_2^2) = -3x_1x_2(x_1+x_2) = -3x_1x_2(-x_3) = 3x_1x_2x_3 $
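A numeric check of this argument (an added illustration): pick any triple with $x_1+x_2+x_3=0$, recover $p,q$ from Vieta's formulas, and verify both that the triple solves $t^3+pt+q=0$ and that the identity holds.

```python
# Any triple with x1 + x2 + x3 = 0 comes from some x^3 + p x + q via Vieta.
x1, x2 = 2.0, -5.0
x3 = -(x1 + x2)                      # forced by the missing x^2 term
p = x1 * x2 + x1 * x3 + x2 * x3      # Vieta: sum of pairwise products
q = -x1 * x2 * x3                    # Vieta: minus the product of the roots
residuals = [t ** 3 + p * t + q for t in (x1, x2, x3)]
lhs = x1 ** 3 + x2 ** 3 + x3 ** 3
rhs = 3 * x1 * x2 * x3
print(residuals, lhs, rhs)
```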
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2815985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
} |
Do there exist 2 matrices such that they can be used to transpose any n by n matrix? Ideally $\exists A, B$ to be able to transpose matrix $X \; \forall X \in M_{n\times n} $ by matrix multiplication. (Even more ideal is if there is only one matrix $A$ that transposes $X$ as follows: $X^T = AX$, but I'm ignoring this possibility for now)
So far, I'm guessing that such a matrix doesn't exist and have been trying to prove that. I've tried to prove $\not \exists A,B : \forall X\in M_{n\times n}, \: X^T = AXB $
I've examined a sub-case where $X$ is an orthogonal matrix, but that hasn't gotten me anywhere except $XAXB = I_n$
| If we stack row by row the $n\times n$ matrices into $n^2\times 1$ vectors, then the OP's question is:
Is the function $f:X\in M_n\rightarrow X^T\in M_n$ (we can present $f$ in the form of a $n^2\times n^2$ matrix) decomposable into a tensor product $A\otimes B^T$ ?
The answer is no, and that is no surprise! Yet, $f$ can be written as a sum of $n^2$ such applications (perhaps we can do better).
Proof. Let $E_{i,j}$ be the $n\times n$ matrix s.t. all the entries are $0$ except the $(i,j)$ one which is $1$. The matrix $E_{i,j}\otimes E_{k,l}$ is a $n^2\times n^2$ block-matrix s.t. all blocks are $0$ except the $i,j$ one which is $E_{k,l}$, that is, this matrix has only one non-zero entry (and it's $1$). Conversely, each matrix that has only one non-zero entry (and it's $1$) is decomposable as above.
Note that $f$ is a permutation with square $I_{n^2}$, that is a product of disjoint transpositions. Thus $f$ has exactly $n^2$ non-zero entries, and, consequently, is the sum of $n^2$ such decomposable tensor products.
EDIT. Although my post does not interest anyone, I give below a result concerning decomposable permutations (the proof is not difficult).
$\textbf{Proposition}$. Among the $(n^2)!$ permutations of $ n ^ 2 $ elements, only those of the form $U\otimes V$, where $U,V$ are permutations of $n$ elements are decomposable (thus, only $(n!)^2$ permutations are decomposable).
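The decomposition into $n^2$ tensor products can be verified concretely (an added illustration, not part of the original answer): with row-stacking, the transpose map is the commutation matrix $K$ with $K[in+j][jn+i]=1$, and it equals $\sum_{i,j} E_{i,j}\otimes E_{j,i}$.

```python
n = 3

def kron(A, B):
    """Kronecker product of square matrices given as nested lists."""
    q = len(B)
    m = len(A) * q
    return [[A[i // q][j // q] * B[i % q][j % q] for j in range(m)] for i in range(m)]

def E(i, j):
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

# K vec(X) = vec(X^T) for row-stacked vec: K[i*n+j][j*n+i] = 1
K = [[0] * (n * n) for _ in range(n * n)]
for i in range(n):
    for j in range(n):
        K[i * n + j][j * n + i] = 1

# K equals the sum of the n^2 decomposable pieces E_{ij} (x) E_{ji}
S = [[0] * (n * n) for _ in range(n * n)]
for i in range(n):
    for j in range(n):
        P = kron(E(i, j), E(j, i))
        for r in range(n * n):
            for c in range(n * n):
                S[r][c] += P[r][c]

# sanity: applying K to vec(X) produces vec(X^T)
X = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
vec = [X[i][j] for i in range(n) for j in range(n)]
KvX = [sum(K[r][c] * vec[c] for c in range(n * n)) for r in range(n * n)]
vecT = [X[j][i] for i in range(n) for j in range(n)]
print(K == S, KvX == vecT)
```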
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2816073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
What is $\lim_{n\to\infty} \left(\sum_{k=1}^n \frac1k\right) / \left(\sum_{k=0}^n \frac1{2k+1}\right)$? I have the following problem:
Evaluate
$$ \lim_{n\to\infty}{{1+\frac12+\frac13 +\frac14+\ldots+\frac1n}\over{1+\frac13 +\frac15+\frac17+\ldots+\frac1{2n+1}}} $$
I tried making it into two sums, and tried to make it somehow into an integral, but couldn't find an integral.
The sums I came up with,
$$ \lim_{n\to\infty} { \sum_{k=1}^n {\frac1k} \over {\sum_{k=0}^n {\frac{1}{2k+1}}}} $$
| Hint Denote the $n$th harmonic number by $$H_n := 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}.$$
Then, the numerator of the given ratio is $H_n$, and the denominator can be written as
\begin{align*}
1 + \tfrac{1}{3} + \tfrac{1}{5} + \cdots + \tfrac{1}{2 n + 1}
&= \left(1 + \tfrac{1}{2} + \tfrac{1}{3} + \cdots + \tfrac{1}{2 n}\right) - \left(\tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{6} + \cdots + \tfrac{1}{2 n}\right) + \tfrac{1}{2 n + 1} \\
&= \left(1 + \tfrac{1}{2} + \tfrac{1}{3} + \cdots + \tfrac{1}{2 n}\right) - \tfrac{1}{2}\left(1 + \tfrac{1}{2} + \tfrac{1}{3} + \cdots + \tfrac{1}{n}\right) + \tfrac{1}{2 n + 1} \\
&= H_{2 n} - \tfrac{1}{2} H_{n} + \frac{1}{2 n + 1} .
\end{align*}
Now, using appropriate Riemann sum estimates gives that $$H_n = \log n + O(1).$$
Additional hint So, the denominator is $$\log (2 n) - \tfrac{1}{2} \log n + O(1) = \tfrac{1}{2} \log n + O(1),$$ and so the ratio is $$\frac{\log n}{\tfrac{1}{2} \log n} + O((\log n)^{-1}) = 2 + O((\log n)^{-1}) .$$
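A numeric look at the ratio (added for illustration) is consistent with the limit $2$; note the $O((\log n)^{-1})$ error term means convergence is quite slow.

```python
def ratio(n):
    num = sum(1.0 / k for k in range(1, n + 1))                 # H_n
    den = sum(1.0 / (2 * k + 1) for k in range(0, n + 1))       # 1 + 1/3 + ... + 1/(2n+1)
    return num / den

r_small, r_big = ratio(10), ratio(100_000)
print(r_small, r_big)  # slowly climbing toward 2, always below it
```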
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2816227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 1
} |
Meaning behind Filter in Set Theory In a course in logic and set theory, we studied the concept of a Filter. We defined a filter $F \subseteq P(S)$ on $S$ by an equivalent of the following definition from Jech's Introduction to Set Theory:
(a) $S \in F$ and $\emptyset \notin F.$
(b) If $X\in F$ and $Y \in F$ then $X \cap Y \in F$.
(c) If $X \in F$ and $X \subseteq Y \subseteq S$, then $Y \in F$.
I am having trouble grasping this concept and its meaning.
My question is, what is the intuition behind this definition, and why are these kinds of sets called filters?
Thanks
| It's similar to the concept of "almost everywhere". Suppose to every subset $T\subseteq S,$ you write
$$
\mu(T) \begin{cases} =1 & \text{if } T\in F, \\ = 0 & \text{if } S\smallsetminus T\in F, \\ \text{is undefined} & \text{otherwise.} \end{cases}
$$
Then, according to the definition of "filter", you have
\begin{align}
& \mu(\varnothing) = 0 \\[6pt]
& \mu(S) = 1 \\[6pt]
& \text{If } \mu(T_1), \mu(T_2) \text{ both exist, and } T_1\cap T_2 = \varnothing, \\
& \text{then } \mu(T_1\cup T_2) = \mu(T_1) + \mu(T_2).
\end{align}
Saying $\{x\in S: P(x)\} \in F$ is the same as saying $P(x)$ for almost all $x\in S.$
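A tiny finite example of this analogy (an added illustration): take the principal filter of supersets of $\{0,1\}$ on $S=\{0,1,2,3\}$, check the three filter axioms, and observe the induced 0/1 "measure".

```python
from itertools import combinations

S = frozenset(range(4))
core = frozenset({0, 1})
subsets = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]
F = {T for T in subsets if core <= T}  # principal filter generated by {0, 1}

# the three filter axioms
axiom_a = S in F and frozenset() not in F
axiom_b = all(A & B in F for A in F for B in F)
axiom_c = all(B in F for A in F for B in subsets if A <= B)

def mu(T):
    """The 0/1 'measure' from the answer; None where it is undefined."""
    if T in F:
        return 1
    if S - T in F:
        return 0
    return None

print(axiom_a, axiom_b, axiom_c, mu(S), mu(frozenset()), mu(frozenset({0, 2})))
```

Here $\mu(\{0,2\})$ is undefined: neither $\{0,2\}$ nor its complement $\{1,3\}$ contains the core $\{0,1\}$.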
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2816362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Proving $\sum\limits_{k=0}^{\infty}\binom {m-r+s}k\binom {n+r-s}{n-k}\binom {r+k}{m+n}=\binom rm\binom sn$
Question: How do you show the following equality holds using binomials$$\sum\limits_{k=0}^{\infty}\binom {m-r+s}k\binom {r+k}{m+n}\binom {n+r-s}{n-k}=\binom rm\binom sn$$
I would like to prove the identity using some sort of binomial identity. The right-hand side is the coefficient of $x^m$ and $y^n$ in$$\begin{align*}a_{m,n} & =\left[x^m\right]\left[y^n\right](1+x)^r(1+y)^s\\ & =\binom rm\binom sn\end{align*}$$
However, I don’t see how the left-hand side can be proven using the binomials. Using the generalized binomial theorem, we get the right-hand side as
$$\begin{align*}(1+x)^r(1+y)^s & =\sum\limits_{k\geq0}\sum\limits_{l\geq0}\binom rk\binom slx^ky^l\end{align*}$$However, what do I do from here?
| With OP asking for formal power series in the evaluation of
$$\sum_{k\ge 0} {m-r+s\choose k} {r+k\choose m+n}
{n+r-s\choose n-k}$$
we write
$$[z^n] (1+z)^{n+r-s} [w^{m+n}] (1+w)^r
\sum_{k\ge 0} {m-r+s\choose k}
z^k (1+w)^k
\\ = [z^n] (1+z)^{n+r-s} [w^{m+n}] (1+w)^r
(1+z+zw)^{m-r+s}
\\ = [z^n] (1+z)^{n+r-s} [w^{m+n}] (1+w)^r
\sum_{q=0}^{m-r+s} {m-r+s\choose q}
(1+z)^{m-r+s-q} z^q w^q
\\ = \sum_{q=0}^{m-r+s} {m-r+s\choose q}
{m+n-q\choose n-q} {r\choose m+n-q}.$$
Note that
$${m+n-q\choose n-q} {r\choose m+n-q}
= \frac{r!}{(n-q)! \times m! \times (r+q-m-n)!}
\\ = {r\choose m} {r-m\choose n-q}.$$
We thus have
$${r\choose m}
\sum_{q=0}^{m-r+s} {m-r+s\choose q} {r-m\choose n-q}
\\ = {r\choose m} [z^n] (1+z)^{r-m}
\sum_{q=0}^{m-r+s} {m-r+s\choose q} z^q
\\ = {r\choose m} [z^n] (1+z)^{r-m} (1+z)^{m-r+s}
= {r\choose m} {s\choose n}.$$
This is the claim.
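A brute-force numeric confirmation of the identity (an added check, for parameter choices where every binomial has nonnegative integer arguments):

```python
from math import comb

def lhs(m, n, r, s):
    # math.comb(a, b) is 0 for b > a >= 0, so the sum truncates itself
    return sum(
        comb(m - r + s, k) * comb(r + k, m + n) * comb(n + r - s, n - k)
        for k in range(n + 1)
    )

# parameters with m - r + s >= 0 and n + r - s >= 0
checks = [(3, 2, 5, 4), (2, 2, 6, 5), (1, 3, 4, 6)]
results = [(lhs(m, n, r, s), comb(r, m) * comb(s, n)) for (m, n, r, s) in checks]
print(results)
```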
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2816488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
} |
Lipschitz constant of limit of functions part 2 This question follows from my other question Lipschitz constant of limit of functions.
Consider two metric spaces $(X,d_X)$ and $(Y,d_Y)$ and define the lipschitz constant of every continuous function $f:X\rightarrow Y$ as
$$Lip(f):=\sup\limits_{x\neq y}\frac{d_Y(f(x),f(y))}{d_X(x,y)}$$
Consider a sequence of continuous functions $f_n:X\rightarrow Y$ such that
*
*there is a $k>1$ such that for every $n\in \mathbb{N}$ it is $Lip(f_n)\le k$
*$\{f_n\}$ has limit $f:X\rightarrow Y$ for the uniform convergence on
compact sets (this means that for every $K\subset X$ compact it results $\lim\limits_{n\rightarrow \infty}\sup\limits_{x\in K}d_Y(f(x),f_n(x))=0$)
*$Lip(f_n)\rightarrow 1$
User pcp showed that it is not true $Lip(f)=1$, but showed an example when it happens $Lip(f)=0$. It seems to me that his counter-example only works for proving $Lip(f)<1$, so my other question is the following.
Question: Can it happen $Lip(f)>1$? Can you motivate your answer?
| Let $\epsilon >0$. Then $Lip(f_n) <1+\epsilon$ for $n$ sufficiently large. Hence $d_Y(f_n(x),f_n(y)) \leq (1+\epsilon ) d_X(x,y)$ for all $x,y$, for $n$ sufficiently large. Letting $n \to \infty$ we get $d_Y(f(x),f(y)) \leq (1+\epsilon ) d_X(x,y)$ for all $x,y$. Letting $\epsilon \to 0$ we conclude that $Lip(f) \leq 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2816629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Explanation for Concrete Mathematics 3.38's solution I'm working on the exercises in Concrete Mathematics recently. In Exercise 3.38, one of the key points is to prove that:
For any real numbers $x,\ y \in (0,\ 1)$,$\exists n \in \mathbf{N}^+$ such that $\{nx\} + \{ny\} \geqslant 1$, where $\{x\}$ represents the fractional part of $x$ i.e. $\{x\} = x - \lfloor x \rfloor$.
Actually, I have known the method to prove it, but I just can't understand what the answer said:
I wonder why Dirichlet's box principle works and how $\vert P_k - P_j\vert < \epsilon$ is related to $P_{k - j - 1} \in B$.
I would appreciate it if someone could offer a clearer explanation.
| Hint.
Consider the numbers represented in base $2$
$$
n = \sum_{k=0}^m a_k 2^k \\
x = \sum_{k=1}^p b_k 2^{-k}\\
y = \sum_{k=1}^q c_k 2^{-k}\\
$$
with $a_k,b_k,c_k \in\{0,1\}$ and then compare
$$
\{nx\} + \{ny\} =\frac{a_0 b_1}{2}+\frac{a_0c_1}{2}+\frac{a_1b_2}{2}+\frac{a_1 c_2}{2}+\frac{a_2 b_3}{2}+\frac{a_2c_3}{2}+\frac{a_3 b_4}{2}+\frac{a_3 c_4}{2}+\frac{a_0 b_2}{4}+\frac{a_0 c_2}{4}+\frac{a_1 b_3}{4}+\frac{a_1c_3}{4}+\frac{a_2 b_4}{4}+\frac{a_2 c_4}{4}+\frac{a_0 b_3}{8}+\frac{a_0 c_3}{8}+\frac{a_1 b_4}{8}+\frac{a_1 c_4}{8}+\frac{a_0b_4}{16}+\frac{a_0 c_4}{16}+\cdots +
$$
with $ 1$
Here $a_0,a_1,a_2,\cdots, a_k ,\cdots, $ are for our choice (decision variables)
NOTE
It suffices that three of the products with denominator $2$ are nonzero, which is always possible by choosing the $a_k$'s appropriately.
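Independently of the hint, the underlying claim is easy to probe numerically (an added illustration): for rational $x,y\in(0,1)$, exact arithmetic with `Fraction` finds an $n$ with $\{nx\}+\{ny\}\ge 1$ by brute force.

```python
from fractions import Fraction

def first_n(x, y, limit=1000):
    """Smallest n with {nx} + {ny} >= 1, or None if none up to `limit`."""
    for n in range(1, limit + 1):
        if (n * x) % 1 + (n * y) % 1 >= 1:
            return n
    return None

print(first_n(Fraction(1, 3), Fraction(2, 5)))   # n = 2: {2/3} + {4/5} = 22/15 >= 1
print(first_n(Fraction(1, 2), Fraction(1, 2)))   # n = 1: 1/2 + 1/2 = 1
```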
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2816879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
The number of ordered pairs $(a,b)$ that are solutions for the equation $\log_{2^a}\left(\log_{2^b}\left(2^{1000}\right)\right)=1$ How many $(a,b)$ for $a,b \in \Bbb{N}$ pairs can satisfy the following equation:
$$\log_{2^a}\left(\log_{2^b}\left(2^{1000}\right)\right)=1$$
The answer is $3$, but I can't figure out how to get that answer.
This is my attempt.
$$\log_{2^a}\left(\log_{2^b}\left(2^{1000}\right)\right)=1$$
$$\frac{1}{a}\log_2\left(\log_{2^b}\left(2^{1000}\right)\right)=1$$
$$\log_2\left(\log_{2^b}\left(2^{1000}\right)\right)=a$$
$$\log_{2^b}\left(2^{1000}\right)=2^a$$
$$\frac{1}{b}\log_{2}\left(2^{1000}\right)=2^a$$
$$\log_{2}\left(2^{1000}\right)=2^ab$$
$$2^{1000}=2^{2^ab}$$
$$1000=2^ab$$
That's it! This is a dead end.
Honestly, this is the best I could do although I very much doubt that I can get two variables by solving one equation (for that we need a system of equations!). So, I think that I need another approach that will either give me what $a$ and $b$ can be or the direct answer (i.e. the number of possible values for $a$ and $b$), but I don't know which one.
| I agree with your derivation
$$\log_{2^a}\left(\log_{2^b}\left(2^{1000}\right)\right)=1\iff \log_{2^b}\left(2^{1000}\right)=2^a\iff (2^b)^{2^a}=2^{1000}\iff b\cdot 2^a=1000$$
now we can have
*
*$a=1, 2^a=2, b=500$
*$a=2, 2^a=4, b=250$
*$a=3, 2^a=8, b=125$
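A brute-force enumeration (added as a sanity check) confirms these are the only pairs: $1000=2^3\cdot125$ has no larger power of $2$ to spare.

```python
# all natural pairs (a, b) with b * 2^a = 1000; a cannot exceed 3 since 2^4 does not divide 1000
solutions = [(a, b) for a in range(1, 11) for b in range(1, 1001) if (2 ** a) * b == 1000]
print(solutions)  # [(1, 500), (2, 250), (3, 125)]
```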
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2817135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Simplifying nested radicals with higher-order radicals I've seen that $$\sin1^{\circ}=\frac{1}{2i}\sqrt[3]{\frac{1}{4}\sqrt{8+\sqrt{3}+\sqrt{15}+\sqrt{10-2\sqrt{5}}}+\frac{i}{4}\sqrt{8-\sqrt{3}-\sqrt{15}-\sqrt{10-2\sqrt{5}}}}-\frac{1}{2i}\sqrt[3]{\frac{1}{4}\sqrt{8+\sqrt{3}+\sqrt{15}+\sqrt{10-2\sqrt{5}}}-\frac{i}{4}\sqrt{8-\sqrt{3}-\sqrt{15}-\sqrt{10-2\sqrt{5}}}}.$$
But then someone was able to simplify this neat, but long, expression with higher-order radicals, and they said they used De Moivre's theorem: $$\sin1^{\circ}=\frac{1}{2i}\sqrt[30]{\frac{\sqrt{3}}{2}+\frac{i}{2}}-\frac{1}{2i}\sqrt[30]{\frac{\sqrt{3}}{2}-\frac{i}{2}}.$$
I have been looking at this for a while now, and I cannot see how they were able to successfully do this. I am very impressed by the result and would like to use a similar technique to simplify nested radicals in the future.
Edit: It seems like the person who originally used De Moivre's theorem did not use it to directly simplify the longer radical expression, but rather found $\sin1^{\circ}$ by the method I figured out in my answer to this question. I do think there is limited value to writing the exact value of, say, $\sin1^{\circ}$ out, but which way do you think is better, the longer combination of square and cube roots, or the compact thirtieth-root?
| I have figured out how to find my answer using De Moivre's formula, not that the method in particular is of great importance but it is slightly alternative to InterstellarProbe's use of the definition of sine (which helped me figure this out).
De Moivre's formula is $$(\cos\theta+i\sin\theta)^n=\cos n\theta+i\sin n\theta, \qquad n\in\mathbb{Z}.$$
In my case, we let $\theta=1^{\circ}$. Then we can choose $n$ such that $\cos n\theta$ and $\sin n\theta$ are values that we know exactly. Let's choose $n=30$ because $\cos30^{\circ}=\frac{\sqrt{3}}{2}$ and $\sin30^{\circ}=\frac{1}{2}$.
Then \begin{align}(\cos1^{\circ}+i\sin1^{\circ})^{30}&=\frac{\sqrt{3}}{2}+\frac{i}{2}\\\Longrightarrow\qquad\cos1^{\circ}+i\sin1^{\circ}&=\sqrt[30]{\frac{\sqrt{3}}{2}+\frac{i}{2}}.\tag{1}\label{1}\end{align}
The trick is now to choose $\theta=-1^{\circ}$:\begin{align}(\cos1^{\circ}-i\sin1^{\circ})^{30}&=\frac{\sqrt{3}}{2}-\frac{i}{2}\\\Longrightarrow\qquad\cos1^{\circ}-i\sin1^{\circ}&=\sqrt[30]{\frac{\sqrt{3}}{2}-\frac{i}{2}}.\tag{2}\label{2}\end{align}
Then we subtract \eqref{2} from \eqref{1}:\begin{align}2i\sin1^{\circ}&=\sqrt[30]{\frac{\sqrt{3}}{2}+\frac{i}{2}}-\sqrt[30]{\frac{\sqrt{3}}{2}-\frac{i}{2}}\\\Longleftrightarrow\qquad\sin1^{\circ}&=\frac{1}{2i}\sqrt[30]{\frac{\sqrt{3}}{2}+\frac{i}{2}}-\frac{1}{2i}\sqrt[30]{\frac{\sqrt{3}}{2}-\frac{i}{2}}.\end{align}
We could have also chosen $n=45$ or $n=60$, say, and gotten a different but equivalent result. In fact, we may use this method to find the exact values of $\sin\theta$, $\cos\theta$, $\tan\theta$, $\sec\theta$, $\csc\theta$, and $\cot\theta$ for all $\theta\in\mathbb{Q}$ in terms of radicals, without having to solve quintic or higher-order equations.
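The compact thirtieth-root form can be checked numerically (an added illustration), relying on the fact that Python's complex power takes the principal branch, so the principal 30th root of $e^{i\pi/6}$ is exactly $e^{i\pi/180}$:

```python
import math

z30 = complex(math.sqrt(3) / 2, 0.5)      # cos 30° + i sin 30° = e^{i pi/6}
w = z30 ** (1 / 30)                       # principal 30th root -> cos 1° + i sin 1°
sin1 = (w - z30.conjugate() ** (1 / 30)) / 2j
print(sin1.real, math.sin(math.radians(1)))
```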
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2817225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
If $x$ is the only element that $x^2=e$ then $x\in Z(G)$ $G$ be a group.
$$Z(G)=\{u\in G\mid ua=au \quad \forall a\in G \}$$
If $x$ is the only element in $G$ that satisfies $x^2=e$ then $x\in Z(G)$
Attempt:
*
*$x^2=e$ then $(\forall g\in G),\; gx^2=g=x^2g$ then $gx=x^2gx^{-1}=\ldots$
it is not good.
*I considered cayley table to examine elements but it did not go well.
*I considered conjugate things but couldnot create a reasonable way to solve this.
| Hint: $o(gxg^{-1})=o(x)$.
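The hint completes the proof as follows: $(gxg^{-1})^2 = gx^2g^{-1} = e$, so by uniqueness $gxg^{-1}=x$ for every $g$, i.e. $x\in Z(G)$. As a concrete (added) illustration, the quaternion group $Q_8$, realized here by $2\times2$ complex matrices, has a unique element of order $2$ (namely $-1$), and it is indeed central:

```python
def mul(A, B):
    """2x2 complex matrix product; tuples keep elements hashable."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

I2 = ((1, 0), (0, 1))
qi = ((1j, 0), (0, -1j))   # quaternion i
qj = ((0, 1), (-1, 0))     # quaternion j

# close {1, i, j} under multiplication -> the 8 quaternion units
G = {I2, qi, qj}
while True:
    bigger = G | {mul(a, b) for a in G for b in G}
    if bigger == G:
        break
    G = bigger

involutions = [x for x in G if x != I2 and mul(x, x) == I2]
central = all(mul(g, involutions[0]) == mul(involutions[0], g) for g in G)
print(len(G), len(involutions), central)
```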
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2817331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Congruence system with same modulus and same variable? I have this particular problem:
$$\begin{cases}
3k \equiv 2 \pmod 8 \dots(*) \\
7k \equiv 2 \pmod 8 \dots(**)
\end{cases}
$$
I know that the solution for this is $k = 8q + 6$. I can find this easily if I solve one of the equations alone.
Now, let's assume I subtract $(*)$ from $(**)$. I get $4k\equiv0\pmod 8$ and $k = 2q$, which isn't coherent.
For example, if I take $q = 2$ then $3k=12$, which does not satisfy $(*)$ (nor $(**)$).
I can't figure out where I messed up. Please help me to understand this.
| You didn't mess up anywhere, except for your interpretation of what you did. The result that $k=2q'$ (I'm using a different letter to avoid confusion with the $q$ from $k=8q+6$) is correct — the solutions $k=8q+6$ indeed satisfy this property that you found:
$$k=8q+6=2q', \quad \text{where} \quad q'=4q+3.$$
When you have two equations to begin with, and you combine them e.g. by subtracting, what you get is an implication but NOT an equivalent equation. In other words:
*
*each $k$ that satisfies the original system of equations also satisfies the new equation;
*but values of $k$ that satisfy the new equation do not have to satisfy the original system.
As an example, think of the usual system of equations that I'm sure you've seen before; say, something like:
$$\begin{cases} 2x+3y=11, \\ 3x+4y=12. \end{cases}$$
When you subtract the first equation from the second, you'll get
$$x+y=1.$$
Does it follow from the original system? Of course, it does. Is it equivalent to the original system? Definitely, NOT: the original system has a unique solution, while the new equation alone has infinitely many solutions (pairs $(x,y)$ that satisfy it). You would need to put it together with one of the original equations (for example, as in the substitution method) to solve the original system completely.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2817438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Laplace equation in a rectangle, Dirichlet to Neumann map Consider the problem
$$\Delta u = 0,(x,y) \in \Omega=(0,1)^2 \\
u(0,y)=u(1,y)=1 \\
u(x,0)=u(x,1)=0.$$
This problem can be solved exactly. The solution is
$$u(x,y)=4\sum_{\text{odd } n} \frac{1}{n\pi} \sin(n\pi x) \left ( 1 - \frac{\sinh(n\pi y)+\sinh(n\pi(1-y))}{\sinh(n\pi)} \right ).$$
Note that the boundary conditions on the left and right edges are not strictly speaking satisfied; these are only satisfied in a weak sense. Essentially the right way to think about this is to round the corners out into quarter-circles of a small radius $r$, make the boundary value go smoothly (but rapidly) from $0$ to $1$ around each corner, and then take the limit $r \to 0$. In the limit $r \to 0$ there is a Gibbs phenomenon appearing which spoils the boundary conditions.
I am trying to analytically compute or approximate $\frac{\partial u}{\partial y}(x,1)$. Unfortunately, it appears that the convergence properties of this sum are too bad to allow term-by-term differentiation with respect to $y$ (which is not a huge surprise, because the boundary conditions are singular at the corners). How can this be circumvented to extract the boundary derivative data?
| The derivative series is just fine for $0 < y < 1$:
$$
u_{y}(x,y) = 4\sum_{\mbox{odd $n$}}\sin(n\pi x)\frac{\cosh(n\pi(1-y))-\cosh(n\pi y)}{\sinh(n\pi)}
$$
For $\delta < y < 1-\delta$ the fraction in the sum is bounded by $C e^{-n\pi\delta}$.
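A numeric illustration of this convergence (added, not part of the original answer): for $0<y<1$ the partial sums of the differentiated series stabilize quickly, and at $y=\tfrac12$ the series vanishes term by term by symmetry.

```python
import math

def uy(x, y, terms):
    """Partial sum of u_y over the first `terms` odd n (keep n*pi < 709 to avoid overflow)."""
    s = 0.0
    for n in range(1, 2 * terms, 2):  # odd n only
        s += math.sin(n * math.pi * x) * (
            math.cosh(n * math.pi * (1 - y)) - math.cosh(n * math.pi * y)
        ) / math.sinh(n * math.pi)
    return 4 * s

a = uy(0.5, 0.25, 20)
b = uy(0.5, 0.25, 40)
print(a, b)  # partial sums agree: the terms decay like e^{-n*pi*delta}
```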
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2817530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Given matrices $A$ and $B$, solve $XA = B$
Let $$A = \begin{bmatrix} 3&-7\\ 1&-2\end{bmatrix} \qquad \qquad B = \begin{bmatrix} 0&3\\ 1&-5\end{bmatrix}$$ and $X$ be an unknown $2\times 2$ matrix.
a. Find $A^{-1}$
b. If $XA = B$, use (a) to find $X$.
I found
$$A^{-1} = \begin{bmatrix} -2&7\\ -1&3\end{bmatrix}$$
I am stuck on the part b. I thought that if $XA=B$, then
$$X=A^{-1}B$$
so I did:
$$ X=
\left[
\begin{array}{cc}
-2&7\\
-1&3
\end{array}
\right]
\left[
\begin{array}{cc}
0&3\\
1&-5
\end{array}
\right] $$
and got:
$$X =
\left[
\begin{array}{cc}
7&-41\\
3&-18
\end{array}
\right] $$
I have been told that this is not correct and I missed a technical detail of matrix multiplication. Please help.
| The problem with your answer lies in the fact that what we actually have is$$XA=B\iff X=BA^{-1}.$$
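A quick computational check of the order issue (added for illustration): $X=BA^{-1}$ satisfies $XA=B$, while the OP's $A^{-1}B$ does not.

```python
A = [[3, -7], [1, -2]]
B = [[0, 3], [1, -5]]
Ainv = [[-2, 7], [-1, 3]]  # det(A) = 1, so this is exact

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

X_right = matmul(B, Ainv)   # X = B A^{-1}
X_wrong = matmul(Ainv, B)   # A^{-1} B  -- order matters!
print(X_right, matmul(X_right, A))  # X_right @ A recovers B
```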
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2817912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Calculate the maximum of $f(x,y)=\left|\frac{\sin(xy)}{x\sqrt{y}}\right|$ Calculate the maximum of
$$f(x,y)=\left|\frac{\sin(xy)}{x\sqrt{y}}\right|\, , \quad \text{for} \ x\in\mathbb R\, , \, y\in\mathbb R^{+}\, .$$
I suspect that this function is unbounded. In fact:
$$f(x,y)=\left|\frac{\sin(xy)}{x\sqrt{y}}\right|= \left|\frac{\sin(xy)}{xy}\right| \sqrt{y}$$
but $\left|\frac{\sin(xy)}{xy}\right|\leq 1$ and $\sqrt y\to \infty$. In particular, for fixed $x\in\mathbb R$, if we consider (for example) a sequence $y_k(x)= \frac{\frac{\pi}{2}+k\pi}{x}$ we have that $\lim_{k\to \infty} y_k(x)=\infty$ and
$$\lim_{y_k\to \infty} f(x,y_k)=\infty\, .$$
| It is not true that the limi is $\infty$ for every $x$. But if you take, say, $x=1$, then what you did is correct.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2818027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that there exists only one analytic function satisfying the conditions
Let $f$ be a differentiable function in all $\Bbb{C}$ and numbers
$$\frac{1+i}{1},\frac{2+i}{2},\frac{3+i}{3},\frac{4+i}{4},\frac{5+i}{5},
\cdots$$ map respectively to numbers
$$\frac{1-i}{1},\frac{1-2i}{2},\frac{1-3i}{3},\frac{1-4i}{4},\frac{1-5i}{5},
\cdots$$ Prove that such a function is unique
and find $f(1)+f(2i)$.
Lucky me, I found one function by myself that satisfies all the given conditions. It is $f(z)=-iz$ which is obviously analytic. Now I surely can find $f(1)+f(2i)=-i+2$. But it was just luck. How can I prove that such a function is the only one that satisfies the given conditions?
| Let $g$ be a function that satisfies the same conditions. Then$$(\forall n\in\mathbb{N}):f\left(1+\frac in\right)=g\left(1+\frac in\right).$$Therefore, since the points $1+\frac in$ accumulate at $1\in\mathbb{C}$, the identity theorem gives $g=f$.
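A numeric confirmation (added) that $f(z)=-iz$ does interpolate all the prescribed values and that $f(1)+f(2i)=2-i$:

```python
f = lambda z: -1j * z  # the candidate entire function

# f((n+i)/n) should equal (1 - n i)/n for every n
conds_ok = all(
    abs(f((n + 1j) / n) - (1 - n * 1j) / n) < 1e-12 for n in range(1, 100)
)
val = f(1) + f(2j)
print(conds_ok, val)  # True (2-1j)
```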
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2818161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Probability that a sample is generated from a distribution
Let $f_X(x)$ and $g_{Y}(x)$ be probability mass functions of discrete random variables X and Y. Mike selects a random variable (he chooses $X$ with probability $1/2$ or $Y$ with probability $1/2$), then he generates a sample of it and gives it to us. Let $a$ be the number that we get. We don't know which random variable was selected. Based on the observation $a$, find the probability that he has selected $X$.
Let ${A}$ be the event that $a$ is observed. To answer this question, we have to calculate:
\begin{align}
{P}(a \text{ is a sample of } X|A)=\frac{P(A \cap \{X\text{ selected}\})}{P(A)}&=\frac{P(A | X\text{ selected})P(X\text{ selected})}{P(A|X \text{ selected })0.5+P(A|Y \text{ selected })0.5}\\&=\frac{f_X(a) 0.5}{P(A|X \text{ selected })0.5+P(A|Y \text{ selected })0.5}\\
&=\frac{f_X( a)}{f_X( a)+g_Y( a)}\\
\end{align}
How can we extend this to the continuous random variables?
For each probability density function (pdf) the probability of observing $a$ is zero. So, we cannot use the above math to calculate the probability we need. But, intuitively, we can have examples of $X$s and $Y$s such that their support includes $a$, but one of them is more centered at $a$, so, it is more probable that it is generated from the one centered at $a$. How can we measure how much it is probable that $X$ generated $a$?
| Consider a small interval around $a$, i.e., the observed value to be in the interval $[a -\varepsilon, a+\varepsilon]$ and then take the limit $\varepsilon \to 0$ when evaluating the ratio.
Then the ratio becomes
\begin{equation}
\mathrm{lim}_{\varepsilon \to 0}
\frac{\int^{a+\varepsilon}_{a-\varepsilon}f_X (x) dx}{\int^{a+\varepsilon}_{a-\varepsilon}f_X (x) dx \, + \int^{a+\varepsilon}_{a-\varepsilon}g_Y (y) dy }
= \frac{f_X(a)}{f_X(a)+g_Y(a)}
\end{equation}
For an illustration, let us consider $X \sim \mathcal{N}$(0,1) and $Y \sim \mathcal{N}(1,1)$ and take the observed value to be $a$.
\begin{align}
\mathrm{lim}_{\varepsilon \to 0}
\frac{\int^{a+\varepsilon}_{a-\varepsilon}f_X (x) dx}{\int^{a+\varepsilon}_{a-\varepsilon}f_X (x) dx \, + \int^{a+\varepsilon}_{a-\varepsilon}g_Y (y) dy } &= \mathrm{lim}_{\varepsilon \to 0}\frac{ \int^{a+\varepsilon}_{a-\varepsilon} \mathrm{e}^{-x^2/2} dx}{\int^{a+\varepsilon}_{a-\varepsilon} \mathrm{e}^{-x^2/2} dx \, + \, \int^{a+\varepsilon}_{a-\varepsilon} \mathrm{e}^{-(y-1)^2/2} dy}
\end{align}
Using the Liebniz integral rule of differentiating under the integral sign to calculate the derivative for applying L'Hopital's rule,
\begin{equation}
\mathrm{lim}_{\varepsilon \to 0} \frac{e^{-(a+\varepsilon)^2/2}+e^{-(a-\varepsilon)^2/2} }{e^{-(a+\varepsilon)^2/2}+e^{-(a-\varepsilon)^2/2} + e^{-(a+\varepsilon-1)^2/2}+e^{-(a-\varepsilon-1)^2/2} } = \frac{e^{-(a)^2/2} }{e^{-(a)^2/2} + e^{-(a-1)^2/2} }
\end{equation}
So, if the observed value is $0$, the probability that it came from the distribution $X$ $\approx .6225$, which is higher, like we would expect.
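That final number is easy to reproduce (an added check): the $1/\sqrt{2\pi}$ normalization cancels in the ratio, so only the Gaussian exponentials matter.

```python
import math

def phi(t):
    """Standard normal density up to the common 1/sqrt(2*pi) factor, which cancels."""
    return math.exp(-t * t / 2)

a = 0.0  # observed value
posterior_X = phi(a) / (phi(a) + phi(a - 1))  # X ~ N(0,1), Y ~ N(1,1)
print(posterior_X)  # ≈ 0.6225
```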
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2818318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$f \in \mathrm{End} (\mathbb{C^2})$ $f(e_1)=e_1+e_2$ $f(e_2)=e_2-e_1$. Eigenvalues of f and the bases of the associated eigenspaces Let $f \in \mathrm{End} (\mathbb{C^2})$ be defined by its image on the standard basis $(e_1,e_2)$:
$f(e_1)=e_1+e_2$
$f(e_2)=e_2-e_1$
I want to determine all eigenvalues of f and the bases of the associated eigenspaces.
First of all, what does the transformation matrix of $f$ look like?
Is it
$\begin{pmatrix}1 &-1 \\1 &1 \end{pmatrix}$?
| If one represents the standard basis $e_1$, $e_2$ in the usual form
$e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \tag 1$
$e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \tag 2$
and writes the matrix of $f$ as
$[f] = \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix}, \tag 3$
then we have, since
$f(e_1) = e_1 + e_2, \; f(e_2) = e_2 - e_1, \tag 4$
$\begin{pmatrix} 1 \\ 1 \end{pmatrix} = e_1 + e_2 = [f]e_1 = \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \alpha \\ \gamma \end{pmatrix}, \tag 5$
and
$\begin{pmatrix} -1 \\ 1 \end{pmatrix} = e_2 - e_1 = [f]e_2 = \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} \beta \\ \delta \end{pmatrix}, \tag 6$
from which it immediately follows that
$\alpha = \gamma = \delta = 1, \tag 7$
$\beta = -1; \tag 8$
thus
$[f] = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}, \tag 9$
as anticipated by our OP user567319. Once we have (9), it is an easy matter to find the eigenvalues of $f$, since they must satisfy
$0 = \det([f] - \lambda I) = \det \left ( \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} - \lambda I \right ) = \det \left (\begin{bmatrix} 1 - \lambda & -1 \\ 1 & 1 - \lambda \end{bmatrix} \right )$
$= (1 - \lambda)^2 + 1 = \lambda^2 - 2\lambda + 2; \tag{10}$
it follows from (10), using the quadratic formula, that
$\lambda = \dfrac{1}{2}(2 \pm \sqrt{-4}) = \dfrac{1}{2}(2 \pm 2 i) = 1 \pm i; \tag{11}$
it is now an easy matter to find the eigenvectors, satisfying as they do
$\begin{pmatrix} \lambda \mu \\ \lambda \nu \end{pmatrix} = \lambda \begin{pmatrix} \mu \\ \nu \end{pmatrix} = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \begin{pmatrix} \mu \\ \nu \end{pmatrix} = \begin{pmatrix} \mu - \nu \\ \mu + \nu \end{pmatrix}, \tag{12}$
whence
$\lambda \mu = \mu - \nu, \tag{13}$
$\lambda \nu = \mu + \nu; \tag{14}$
from (13),
$(1 - \lambda) \mu = \nu; \tag{15}$
it follows that, taking $\mu = 1$, the eigenvectors are
$\begin{pmatrix} 1 \\ \mp i \end{pmatrix} = \begin{pmatrix} 1 \\ 1 - \lambda \end{pmatrix}, \; \lambda = 1 \pm i; \tag{16}$
we check the case $\lambda = 1 + i$:
$\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \begin{pmatrix} 1 \\ -i \end{pmatrix} = \begin{pmatrix} 1 + i \\ 1 - i \end{pmatrix} = (1 + i)\begin{pmatrix} 1 \\ -i \end{pmatrix}; \tag{17}$
a check of the case $\lambda = 1 - i$ follows from this by complex conjugation, since $[f]$ is a real matrix.
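As a quick numerical sanity check of (9), (11) and (16), here is a plain-Python sketch verifying both eigenpairs directly (nothing here is new mathematics, just arithmetic on the matrix above):

```python
# Check A v = lambda v for A = [[1, -1], [1, 1]] and the eigenpairs (11), (16).
A = [[1, -1], [1, 1]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

pairs = [(1 + 1j, [1, -1j]),   # lambda = 1 + i, eigenvector (1, -i)
         (1 - 1j, [1, 1j])]    # lambda = 1 - i, eigenvector (1,  i)
for lam, v in pairs:
    Av = matvec(A, v)
    assert all(abs(a - lam * b) < 1e-12 for a, b in zip(Av, v))
```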
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2818427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is the series $\sum_{n=1}^{\infty} \Bigl(1-\Bigl(1-\frac{1}{n^{1+\epsilon}}\Bigr)^n\Bigr)$ convergent? While I was solving a problem in probability theory I came across the following series
$$\sum_{n=1}^{\infty} \biggl(1-\biggl(1-\frac{1}{n^{1+\epsilon}}\biggr)^n\biggr)$$
and in order to complete my solution I want to show the above series converges, but I couldn't prove it (I still don't know whether it converges).
I tried to go with Taylor expansion of $x \mapsto \ln(1-x^{1+\epsilon})$ but I couldn't get anything interesting.
Then I showed that $1-\bigl(1-\frac{1}{n^{1+\epsilon}}\bigr)^n \leq \frac{1}{n^\epsilon}$ but this doesn't help either.
Let me know if you have any idea!
| You can get
explicit constants
rather than big or little oh
like this:
If $0 < x < 1$,
$-\ln(1-x)
=\sum_{k=1}^{\infty} \dfrac{x^k}{k}
\gt x$
and,
$\begin{array}\\
-\ln(1-x)
&=\sum_{k=1}^{\infty} \dfrac{x^k}{k}\\
&=x+\sum_{k=2}^{\infty} \dfrac{x^k}{k}\\
&\lt x+\sum_{k=2}^{\infty} \dfrac{x^k}{2}\\
&= x+\dfrac{x^2}{2(1-x)}\\
\end{array}
$
Therefore,
if $0 < x < \frac12$,
$-\ln(1-x)
\lt x+x^2
$
so
$-x-x^2
\lt\ln(1-x)
\lt -x
$.
Similarly,
if $0 < x < 1$,
$\exp(x)
=\sum_{k=0}^{\infty} \dfrac{x^k}{k!}
\gt 1+x$
and
$\exp(x)
\lt\sum_{k=0}^{\infty} x^k
=\dfrac1{1-x}
$.
Therefore,
for $0 < x < 1$,
$1-x \lt \exp(-x) \lt \dfrac1{1+x}$.
Therefore,
if
$n^{-c} < 1$,
then
$\begin{array}\\
(1-\frac1{n^{1+c}})^n
&=\exp(n\ln(1-\frac1{n^{1+c}}))\\
&\lt\exp(n(-\frac1{n^{1+c}}))\\
&=\exp(-n^{-c})\\
&\lt \dfrac1{1+n^{-c}}\\
\end{array}
$
so
$\begin{array}\\
1-(1-\frac1{n^{1+c}})^n
&\gt 1-\dfrac1{1+n^{-c}}\\
&=\dfrac{n^{-c}}{1+n^{-c}}\\
&\gt\frac12 n^{-c}\\
\end{array}
$
so the sum diverges
if $c \le 1$.
If $c > 1$ then
$\begin{array}\\
(1-\frac1{n^{1+c}})^n
&=\exp(n\ln(1-\frac1{n^{1+c}}))\\
&\gt\exp(n(-\frac1{n^{1+c}}-\frac1{n^{2+2c}}))\\
&=\exp(-n^{-c}-n^{-1-2c})\\
&\gt 1-(n^{-c}+n^{-1-2c})\\
\end{array}
$
so
$\begin{array}\\
1-(1-\frac1{n^{1+c}})^n
&\lt 1-(1-(n^{-c}+n^{-1-2c}))\\
&= n^{-c}+n^{-1-2c}\\
&\lt n^{-c}+n^{-3}\\
\end{array}
$
and the sum converges.
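The two explicit bounds above can be sanity-checked numerically; a plain-Python sketch (the sample range and the particular values $c=\tfrac12$, $c=\tfrac32$ are my own choices):

```python
def term(n, c):
    # the summand 1 - (1 - 1/n^(1+c))^n
    return 1.0 - (1.0 - n ** (-(1.0 + c))) ** n

for n in range(2, 500):
    # divergent regime c <= 1: summand exceeds n^(-c)/2 once n^(-c) < 1
    assert term(n, 0.5) > 0.5 * n ** -0.5
    # convergent regime c > 1: summand is below n^(-c) + n^(-3)
    assert term(n, 1.5) < n ** -1.5 + n ** -3.0
```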
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2818623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Auto-correlation function, an inverse problem $x[n]$ is a complex function $n=0,1,2,\cdots,L-1 $
we assume $x[n]$ is periodic in its index: $x[n+L]=x[n]$
Its auto-correlation function $C[n]$ is uniquely defined as:
$$
C[n]=\sum_{i=0}^{L-1} x[i+n]x^*[i]
$$
$C[n]$ also has the periodic property: $$C[n+L]=C[n]\tag{1}$$
And ''conjugate-symmetry'' property: $C[-n]=C^*[n] \tag{2}$
Now my question is:
For given $C[n]$, which satisfies property (1) and (2):
Can we find the corresponding $x[n]$ ?
If yes, is it unique?
$\qquad $ if unique, what is the method to find $x[n]$?
$\qquad $ if not unique, what is the class of those $C[n] \rightarrow \{x[n]\}$
If no, what other constraint properties should we add to $C[n]$, in order to make it yes?
| Even up to shifts it is not unique at all. For example there is a whole collection of sequences called $m$-sequences (maximal-length sequences) generated by binary linear shift registers corresponding to primitive polynomials. See the discussion on wikipedia.
There are $\phi(2^n-1)/n$ different primitive polynomials over the binary field of degree $n$, and each of these gives rise to an $m$-sequence (and all its shifts). All these sequences have period $2^n-1$ and ideal autocorrelation
$$C_t=-1+\delta(t)2^n$$
where $\delta(\cdot)$ is the Kronecker delta.
Corresponding complex valued sequences exist for non-binary fields, over $GF(p)$ they are sequences over complex roots of unity of order $p.$
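As a concrete check, here is the $n=3$, period-$7$ binary case in plain Python. The particular register output below is one valid $m$-sequence (an assumption of a specific shift and primitive polynomial; other choices give different sequences with the same autocorrelation):

```python
# One m-sequence of period 2^3 - 1 = 7, mapped to +-1 via b -> (-1)^b.
bits = [1, 1, 1, 0, 1, 0, 0]
s = [(-1) ** b for b in bits]
L = len(s)

# periodic autocorrelation C[t] = sum_i s[(i+t) mod L] * s[i]
C = [sum(s[(i + t) % L] * s[i] for i in range(L)) for t in range(L)]

assert C[0] == -1 + 2 ** 3          # peak: -1 + 2^n = 7
assert all(c == -1 for c in C[1:])  # ideal off-peak value -1
```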
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2818733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Is it OK to be picky about math you find interesting? I am a layman interested in mathematics, and I would like to hear mathematicians' views on the following: Is it normal to be picky about mathematical stuff you find interesting?
I ask because 80% of math I encounter does not seem interesting to me. I want to know if this is true for most people or if there is something wrong with me.
| I would say it would be unwise and rather odd for a mathematician to dismiss any part of the subject matter of mathematics as uninteresting. The subject is very large and there are deep connections between apparently very different aspects: as a simple example, consider the many different proofs of the fundamental theorem of algebra. If 80% of the maths that you encounter seems uninteresting to you and you can't see why that 80% is of interest to anyone else, then I don't think maths is for you. (I'm personally not very excited by the theory of differential equations, but I know it's an important subject and I appreciate all the work that's been done on it over the centuries.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2818839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 5
} |
What is the minimum time from point A to point B? I am working a bit on the theory of optimal control, and I have had a couple of doubts about how I should choose the control variable to minimize travel time.
Consider the control problem to reduce the travel time of a trolleybus, initially park at A, to a fixed pre-assigned destination B in a straight line.
*
*A first approach to the optimal control model is
$J=\int_{t_0}^{t_f} 1dt=\int_A^B \frac{1}{v(s)}ds$,
subject to
$\dot{\mathbf{x}}(t)=
\begin{bmatrix}
0 & 1 \\
0 & 0
\end{bmatrix} \mathbf{x}(t)+\begin{bmatrix}
0 & 0 \\
1 & 1
\end{bmatrix}\mathbf{u}(t)$, where $\mathbf{u}(t)=\begin{bmatrix}
u_1(t) \\
u_2 (t)
\end{bmatrix}$, $u_1$ is the throttle acceleration and $u_2$ is braking decelararion.
Let us define the state constraints. If $t_0$ is the time of leaving $A$, and $t_f$ is the time of arrival at $B$, then, clearly,
$x_1(t_0)=A, x_1(t_f)=B$.
In addition, since the trolleybus starts from rest and stops at $B$,
$x_2(t_0)=0, x_2(t_f)=0$.
These boundary conditions are
$\mathbf{x}(t_0)=\begin{bmatrix}
A \\
0
\end{bmatrix}$ and $\mathbf{x}(t_f)=\begin{bmatrix}
B \\
0
\end{bmatrix}$
We assume that the trolleybus does not back up, then the additional constraints
$0\leq A\le x_1(t)\le B,$
$0\le x_2(t)\le 40$
are also imposed.
We know that acceleration is bounded by some upper limit which depends on the capability of the engine, and that the maximum deceleration is limited by the braking system parameters. If the maximum acceleration is $\beta>0$, and the maximum deceleration is $\alpha>0$, then the controls must satisfy
$0\le u_1(t)\le \beta,$
$-\alpha\le u_2(t)\le 0.$
Now, I have the next hamiltonian
$H(\mathbf{x},\mathbf{u},\mathbf{\lambda})=1+\lambda_1(t)x_2(t)+\lambda_2(t)(u_1(t)+u_2(t))$.
Where I find the next optimal control
$u_1^*(t)= \left\{ \begin{array}{lcc}
\beta & for & t\in [t_0,t^*] \\
\\ 0 & for & t\in (t^*,t_f]
\end{array}
\right.,$
$u_2^*(t)= \left\{ \begin{array}{lcc}
0 & for & t\in [t_0,t^*] \\
\\ -\alpha & for & t\in (t^*,t_f]
\end{array}
\right.,$
My questions are:
*
*How should I get the value of $t^*$?
*How is the dynamic equation solved? Here I am failing, the calculations that I have think are wrong.
*What would be the optimal time values and the optimal speed at which the trolley should travel to go from point A to point B?
| Instead of the proposed dynamical system
$$
\begin{array}{rcl}
\dot{x}_{1} & = & x_{2}\\
\dot{x_{2}} & = & u_{1}+u_{2}
\end{array}
$$
with $0\le u_{1}\le\beta$ and $-\alpha\le u_{2}\le0$ we will consider
a simpler system with the same functionalities
$$
\begin{array}{rcl}
\dot{x}_{1} & = & x_{2}\\
\dot{x_{2}} & = & u
\end{array}
$$
with $-\alpha\le u\le\beta$ with the velocity restriction $|x_{2}|\le v_{max}$
The hamiltonian gives
$$
H(x,u,\lambda)=\lambda_{1}x_{2}+\lambda_{2}u
$$
then we have
$$
\begin{array}{rcl}
\dot{x} & = & \frac{\partial H}{\partial\lambda}\\
\dot{\lambda} & = & -\frac{\partial H}{\partial x}
\end{array}\Rightarrow\left\{\begin{array}{rcl}
\dot{\lambda}_{1} & = & 0\\
\dot{\lambda}_{2} & = & -\lambda_{1}
\end{array}\right.\Rightarrow\left\{\begin{array}{rcl}
\lambda_{1} & = & c_{1}\\
\lambda_{2} & = & c_{2}-c_{1}t
\end{array}\right.\Rightarrow u=\sigma(c_{2}-c_{1}t)
$$
with $\sigma(x)=$ the sign of $x$. These conditions impose two
kinds of orbits. So for $u=\beta$
$$
\begin{array}{rcl}
x_{1} & = & \frac{1}{2}\beta t^{2}+s_{2}t+s_{1}\\
x_{2} & = & \beta t+s_{2}
\end{array}\Rightarrow x_{1}=\frac{1}{\beta}\left(\frac{1}{2}\left(x_{2}-s_{2}\right)^{2}+s_{2}\left(x_{2}-s_{2}\right)\right)+s_{1}
$$
Analogously for $u=-\alpha$
$$
x_{1}=-\frac{1}{\alpha}\left(\frac{1}{2}\left(x_{2}-s_{2}^{'}\right)^{2}+s_{2}^{'}\left(x_{2}-s_{2}^{'}\right)\right)+s_{1}^{'}
$$
Now with the initial conditions $x_{1}(0)=x_{A},\;x_{2}(0)=0$ we
obtain $s_{1}=x_{A},\;s_{2}=0$
$$
x_{1}=\frac{1}{2\beta}x_{2}^{2}+x_{A}
$$
Keeping in mind the velocity restriction we have a state path as shown
in the attached figure
In blue we have the acceleration ($u=\beta$) orbits and in red the
braking orbits ($u=-\alpha$). The velocity restriction is attained
at B, after accelerating from A.
From B to C we have the dynamic system
$$
\begin{array}{rcl}
\dot{x}_{1} & = & x_{2}\\
\dot{x}_{2} & = & 0
\end{array}
$$
which gives
$$
\begin{array}{rcl}
x_{1} & = & v_{max}t+s_{1}\\
x_{2} & = & v_{max}
\end{array}
$$
so the minimum time orbit from A to D is A$\to$B$\to$C$\to$D. From
those data we can easily calculate $t_{f}$
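Once the switching structure (accelerate at $\beta$, possibly cruise at $v_{max}$, brake at $\alpha$) is known, $t_f$ follows from elementary kinematics. A small sketch with a helper of my own and hypothetical numbers (the distances, rates and caps below are placeholders, not values from the problem):

```python
def min_time(D, beta, alpha, v_max):
    """Time-optimal A->D travel time over distance D with accel cap beta,
    decel cap alpha, speed cap v_max (bang-bang with optional cruise)."""
    # peak speed if the speed cap were never active (pure accelerate/brake):
    v_peak = (2.0 * D * beta * alpha / (beta + alpha)) ** 0.5
    if v_peak <= v_max:                       # triangular profile, no cruise
        return v_peak / beta + v_peak / alpha
    d_acc = v_max ** 2 / (2.0 * beta)         # distance covered A -> B
    d_dec = v_max ** 2 / (2.0 * alpha)        # distance covered C -> D
    return v_max / beta + v_max / alpha + (D - d_acc - d_dec) / v_max

# symmetric check: beta = alpha = a and no cruise gives t_f = 2*sqrt(D/a)
assert abs(min_time(100.0, 2.0, 2.0, 1e9) - 2.0 * (100.0 / 2.0) ** 0.5) < 1e-9
```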
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2818940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the Pre-Image of $\ q\ $, where $\ q^{-1}(S)\ $, where $\ S=f^{-1}([0,+\infty))$
Let $\ f:\mathbb{R^2}\rightarrow\mathbb{R}\ $ be given by $\ f(x,y)=y(x-y)\ $ and let $\ q:\mathbb{R^2}\rightarrow\mathbb{R^2}\ $ be given by $\ q(r,\theta)=(r \ \text{cos}(\theta),r \ \text{sin}(\theta))$. Find the pre-image $\ q^{-1}(S)\ $, where $\ S=f^{-1}([0,+\infty))$.
I have found that $\ S=f^{-1}([0,+\infty))=\{(x,y)\in\mathbb{R^2}: y(x-y)\geq 0\}$. Is it also true that $S$ is closed?
To find $\ q^{-1}(S)\ $, I thought this would correspond to where $\ (r,\theta)\geq S$. But I am very confused about how to proceed; I don't have the intuition for what to do next.
| By definition:
$$q^{-1}(S)=\{(r,\theta)\,\colon q(r,\theta)\in S\}$$
Then,
$$q(r,\theta)\in S\iff q(r,\theta)\in f^{-1}([0,+\infty))\iff f(q(r,\theta))\geq 0.$$
This yields
$$ r\sin(\theta)(r\cos(\theta)-r\sin(\theta))\geq 0$$
Calculating, I obtained that this happens if and only if
$$r^2(\sin(\theta)\cos(\theta)-\sin^2(\theta))\geq 0$$
Maybe you should apply a trigonometric identity to simplify this result.
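For instance, with the double-angle formulas the condition becomes (one possible simplification, not the only one):

```latex
r^{2}\left(\sin\theta\cos\theta-\sin^{2}\theta\right)
= \frac{r^{2}}{2}\bigl(\sin 2\theta-(1-\cos 2\theta)\bigr)
= \frac{r^{2}}{2}\bigl(\sin 2\theta+\cos 2\theta-1\bigr)\geq 0,
```

and since $r^{2}\geq 0$, for $r\neq 0$ this reduces to $\sin 2\theta+\cos 2\theta\geq 1$, i.e. $\sqrt2\,\sin\!\bigl(2\theta+\tfrac{\pi}{4}\bigr)\geq 1$.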
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2819085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Basic Probability with random number The question I have on hand is as follows :
We draw at random a number in interval [0,1] such that each number is "equally likely". Suppose we do the experiment two times (independently), giving us two numbers in [0,1]. What is the probability that the sum of these numbers is greater than 1/2?
My understanding is that since the experiments are independent, I am able to multiply the probability of each experiment with each other directly.
My attempt at this was to calculate, for each experiment, the probability that $x$ (the random number) is less than 1/4, thus giving me the probability 1/16 when I multiply them. This would imply that there is a 15/16 chance my sum is greater than 1/2. However, the answer is wrong, since it is supposed to be 7/8.
Any help please? Thank you
| Choosing two numbers randomly (uniformly, independently) in the unit interval $[0,1]$ is the same as choosing a single point uniformly in the unit square $[0, 1]\times [0,1]$, and looking at its first and second coordinate.
Now, take a look at that square (you can even draw it, if you want). See if you can tell which points are such that their two coordinates add up to more than $\frac12$. (The line $x+y = \frac12$ is very relevant here, because it consists of the points where the two coordinates add up to exactly $\frac12$. That is what $x + y = \frac12$ really means, after all. You can draw that too to help you.) What's the area of that region?
As an extra exercise, you tried looking at the region where both the variables were smaller than $\frac14$. Can you draw that into the square? Can you see why your answer was wrong?
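If you want to confirm the picture numerically, a quick Monte Carlo sketch (the sample size and seed are arbitrary choices of mine; the triangle below the line $x+y=\frac12$ has area $\frac18$, leaving $\frac78$ for the shaded region):

```python
import random

random.seed(0)
N = 200_000
hits = sum(1 for _ in range(N)
           if random.random() + random.random() > 0.5)
p_hat = hits / N

# everything in the unit square above the line x + y = 1/2 has area 7/8
assert abs(p_hat - 7 / 8) < 0.01
```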
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2819211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why is the notion of analytic function so important? I think I have some understanding of what an analytic function is — it is a function that can be approximated by a Taylor power series. But why is the notion of "analytic function" so important?
I guess being analytic entails some more interesting knowledge rather than just that it can be approximated by Taylor power series, right?
Or, maybe I don't understand (underestimate) how a Taylor power series is important? Is it more than just a means of approximation?
| Being analytic, and especially being complex-analytic, is a really useful property to have, because
*
*It's very restrictive. Complex-analytic functions integrate to zero around closed contours, are constant if bounded and analytic throughout $\mathbb{C}$ (or if their absolute value has a local maximum inside a domain), preserve angles locally (they are conformal), and have isolated zeros. Analyticity is also preserved in uniform limits.
*Most of the functions we obtain from basic algebraic operations, as well as the elementary transcendental functions (and, indeed, solutions to linear differential equations), are analytic at almost every point of their domain, so the surprising restrictiveness of being an analytic function does not stop the class of functions that are analytic from containing many interesting and useful examples. Proving something about analytic functions tells you something about all of these functions.
Being real-analytic is rather less exciting (in particular, there is no notion of conformality and its related phenomena). Most properties of real-analytic functions can be deduced from restricting local properties of complex-analytic ones anyway, due to this characterisation. So we still have isolation of zeros, and various other properties, but nowhere near as much (and uniform limits are no longer analytic).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2819345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51",
"answer_count": 11,
"answer_id": 7
} |
Ring Around the Robot - Chance of ending on specific node $N$ nodes $(Node_1 .. Node_N)$ are arranged in a circle, and a robot is placed at $Node_1$. The robot moves clockwise with probability $p$ and counter-clockwise with prob. $(1-p)$. Given integers $S, B \in \mathbb{N}$, where $1<=B<=N$, what's the probability of it landing on $Node_B$ after taking $S$ steps?
I know there's a mathematical formulation of this problem, but I haven't been able to shift from an algorithmic frame of mind. What I've come up with is:
E = minabs(S - B) # How many steps is the minimum allowed to get to Node_B?
if(S < E) { Prob = 0 } # There aren't enough steps to get to Node_B
if (S == E) && (S-B < 0) { Prob = (1 - p)^S } # Only S counter-clockwise steps will get to Node_B.
if (S == E) && (S-B > 0) { Prob = p^S } # Only S clockwise steps will get to Node_B.
However, I'm stuck trying to think through the possible scenarios when $S > E$. Extracting the cases where the robot goes completely around the ring in either direction, or where a move in one direction is countered by the opposite move have me overwhelmed, and thinking there must be a better way.
| Let $X$ be the number of clockwise moves. The net clockwise displacement after $S$ steps is $X-(S-X)=2X-S$. Let $i$ be the position (node index), indexed $0,1,\cdots, N-1$. Assume you start at $i=0$; you end at node $j$ if $2X-S = j \pmod N$
or
$$ 2X = j +S \pmod N $$
with $0\le X\le S$
Let $X_i$ be the solutions of this modular equation. Then the probability is simply
$$ P=\sum_i \binom{S}{X_i} p^{X_i} (1-p)^{S-X_i}$$
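The formula can be cross-checked against brute-force enumeration of all $2^S$ paths; a sketch (the parameter values are arbitrary):

```python
from itertools import product
from math import comb

def prob_formula(N, S, j, p):
    # sum over clockwise-step counts X with 2X - S = j (mod N)
    return sum(comb(S, X) * p ** X * (1 - p) ** (S - X)
               for X in range(S + 1) if (2 * X - S - j) % N == 0)

def prob_brute(N, S, j, p):
    total = 0.0
    for steps in product((1, -1), repeat=S):   # +1 clockwise, -1 counter
        if sum(steps) % N == j % N:
            k = steps.count(1)
            total += p ** k * (1 - p) ** (S - k)
    return total

for N, S, j, p in [(5, 6, 2, 0.3), (4, 7, 1, 0.6)]:
    assert abs(prob_formula(N, S, j, p) - prob_brute(N, S, j, p)) < 1e-12
```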
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2819659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is there a necessary condition for the projection of two matrices to be the same? Take $\textbf{A},\textbf{B} \in \mathbb{R}^{d \times d}$ with $\textbf{A} \neq \textbf{B}$ and $d > 1$. Let $\textbf{P}_M$ be some $d\times d$ projection matrix. Is there a necessary condition for $\textbf{P}_M \boldsymbol{A} = \textbf{P}_M \textbf{B}$?
| Let $M_i$ denote the $i$-th column of a matrix $M$.
A necessary (and sufficient) condition is that for all $i$, $$A_i-B_i\in\operatorname{Ker}P_M$$
A projection is entirely determined by its image and kernel. If you know them, then this is a handy criterion.
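A toy illustration in $\mathbb R^2$ (my own example, with $\mathbf P_M$ projecting onto the $x$-axis, so its kernel is the $y$-axis):

```python
# P projects onto the x-axis, so Ker P is the y-axis.
P = [[1, 0], [0, 0]]
A = [[1, 2], [3, 4]]
B = [[1, 2], [7, -5]]   # columns of A - B lie in Ker P (only y-parts differ)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert A != B
assert matmul(P, A) == matmul(P, B)   # PA = PB although A != B
```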
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2819747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Orthogonally Diagonalize a matrix with nonreal eigenvectors I am given the matrix A and asked to orthogonally diagonalize it.
$$A=\begin{pmatrix} 1 & -i \\ i & 1 \\ \end{pmatrix} $$
While doing this I got $\lambda = 0,2.$
Then I found the eigenvectors corresponding to the eigenvalues to be $$\begin{pmatrix} i \\ 1 \\ \end{pmatrix} $$ and $$\begin{pmatrix} -i \\ 1 \\ \end{pmatrix}, $$ respectively. While trying to divide each eigenvector by its norm I run into a problem: take $V_1$, for example: $\|V_1\| = 0$, so I obviously can't divide $V_1$ by its norm to make it a unit vector. My question is, is $A$ even orthogonally diagonalizable at all? Or, if $\|V_1\| = 0$, does this mean that it is already normalized and I can just use my eigenvectors as is to generate the matrix $U$ such that $A=UDU^*?$
| It happens that $\|(a,b)\|=\sqrt{|a|^2+|b|^2}$. Therefore, the norm of both vectors that you mentioned is $\sqrt2$.
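To make this concrete, a plain-Python check of the norms under the Hermitian inner product, plus the orthogonality of the two eigenvectors (the helper names are my own):

```python
v1 = [1j, 1]
v2 = [-1j, 1]

def hnorm(v):
    # ||(a, b)|| = sqrt(|a|^2 + |b|^2), with |.| the complex modulus
    return sum(abs(c) ** 2 for c in v) ** 0.5

def hdot(u, v):
    # Hermitian inner product <u, v> = sum u_k * conj(v_k)
    return sum(a * b.conjugate() for a, b in zip(u, v))

assert abs(hnorm(v1) - 2 ** 0.5) < 1e-12
assert abs(hnorm(v2) - 2 ** 0.5) < 1e-12
assert abs(hdot(v1, v2)) < 1e-12   # the eigenvectors are orthogonal
```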
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2819873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Lagrange multiplier when decisions variables are not in the same set
Find the maximum of $2x+y$ over the constraint set $$S = \left\{ (x,y) \in \mathbb R^2 : 2x^2 + y^2 \leq 1, \; x \leq 0 \right\}$$
I want to use Lagrange multipliers to find the optimal solution. However, the Lagrange method requires $\vec x \in A$; in our case $A = \mathbb R^2$, yet $x$ can only be negative or zero. How can I get rid of this constraint? My idea is to write $x=w-z$ with $w-z \le 0$ and $w,z \in \mathbb R$, but I am not sure if this is the right way to do it.
| Formulating the problem a bit better, you have the following :
Find the maximum of $f(x,y) = 2x + y$ over $S = \{(x,y) \in \mathbb R^2 : 2x^2 + y^2 \leq 1, \; x \leq 0\}$.
Recall that one of the most important Lagrange Multiplier methods is the Kuhn-Tucker Lagrange method. The KTL method calculates the total minimum of a function. To yield the minimum from the function $f(x,y)$, simply take the function $-f(x,y)$. The point will then be the maximum for $f(x,y)$.
Thus, naming our role functions :
$$f_0(x,y) = -f(x,y) = -2x-y, \; \; f_1(x,y) = 2x^2+y^2-1, \; \; f_2(x,y) = x$$
You are now called to solve the system :
$$\begin{cases} \nabla f_0 + \lambda_1 \nabla f_1 + \lambda_2 \nabla f_2 = 0 \\ \lambda_1 f_1 = 0\\ \lambda_2 f_2 = 0\end{cases} \Rightarrow\begin{cases} \begin{bmatrix} -2\\-1\end{bmatrix} + \lambda_1\begin{bmatrix} 4x \\2y\end{bmatrix} +\lambda_2 \begin{bmatrix} 1 \\ 0 \end{bmatrix}= \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ \lambda_1(2x^2+y^2-1) = 0 \\ \lambda_2 x =0\end{cases}$$
$$\implies$$
$$\begin{cases} -2 + 4\lambda_1x +\lambda_2 & = 0 \\ -1 + 2\lambda_1y &=0 \\\lambda_1(2x^2+y^2-1) & = 0 \\ \lambda_2x & = 0\end{cases}$$
Now, work through the cases $\lambda_1, \lambda_2 = 0 \; \text{or} \; \neq 0$, in any combination, taking into account the possibilities $x,y = 0 \; \text{or} \; \neq 0$ too. You will then find some candidate minimum points. The point at which the value of $f_0$ is smallest is your minimizer, and thus the maximizer for $f$.
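Whatever candidates the case analysis produces can be sanity-checked by a brute-force grid search over $S$ (the grid resolution is an arbitrary choice of mine); here the maximum of $f$ turns out to be $1$, attained at $(0,1)$:

```python
best, arg = float("-inf"), None
n = 400
for i in range(n + 1):
    for j in range(2 * n + 1):
        x = -1.0 + i / n          # x in [-1, 0]
        y = -1.0 + j / n          # y in [-1, 1]
        if x <= 0 and 2 * x * x + y * y <= 1:   # membership in S
            if 2 * x + y > best:
                best, arg = 2 * x + y, (x, y)

assert abs(best - 1.0) < 1e-9     # maximum of 2x + y on S is 1
assert arg == (0.0, 1.0)
```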
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2819961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Solve $x^2+(8y)^2=p^2(4p^2y^2+1)$ I am trying to find solutions for $x^2+(8y)^2=p^2(4p^2y^2+1)$ for integer $x,y$ where $p$ is a prime $\equiv 1 \mod 4$ that does not divide $x,y$.
I think there are no solutions but I could not prove this. Obviously $x$ is odd, and $4p^2y^2+1$ is a product of primes $\equiv 1 \mod 4$, but I am unable to progress.
Note that the equation can be rearranged to the Pell-like equation
$$x^2-(p^4-16)(2y)^2=p^2$$
| Try $p=5$, $x=691$, $y=14$. https://www.alpertron.com.ar/QUAD.HTM is a good resource.
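That solution is easy to verify directly, both in the original form and in the rearranged Pell-like form:

```python
p, x, y = 5, 691, 14

# original equation: x^2 + (8y)^2 = p^2 (4 p^2 y^2 + 1)
assert x * x + (8 * y) ** 2 == p * p * (4 * p * p * y * y + 1)

# rearranged Pell-like form: x^2 - (p^4 - 16)(2y)^2 = p^2
assert x * x - (p ** 4 - 16) * (2 * y) ** 2 == p * p

# p = 5 is congruent to 1 mod 4 and divides neither x nor y
assert p % 4 == 1 and x % p != 0 and y % p != 0
```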
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2820047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
PhD admission product $\lim_{n\to 0}\left(\frac21\left(\frac32\right)^2\left(\frac43\right)^3\cdots\left(\frac{n+1}{n}\right)^n\right)^{1/n}$ Hello there I saw this problem (#3) here:
http://www.sau.int/admission/2018/samplepapers/PAM.pdf
$$L=\lim_{n\to 0} \left( \frac{2}{1}\left(\frac{3}{2}\right)^2\left(\frac{4}{3}\right)^3...\left(\frac{n+1}{n}\right)^n\right)^\frac{1}{n}$$
The choices for the answer are $e$, $\pi$, $\frac{1}{e}$, $\frac{1}{\pi}$.
If we take the logartihm on both sides we get: $$\ln L=\frac{1}{n}\sum_{k=1}^{n} k\ln\frac{k+1}{k}$$ thus by telescoping $$\ln L= \ln\left(\frac{n+1}{(n!)^{\frac{1}{n}}}\right)$$ and now using wolfram I get the answer to be $L=e^\gamma$, which is not one of the choices.
Where did I go wrong? And could you share the correct solution?
| $$\dfrac{n+1}{(n!)^{\frac{1}{n}}} \approx \dfrac{n+1}{\sqrt{2 \pi n}^{1/n}\dfrac n e} \to e$$
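A numerical check of the $n\to\infty$ behaviour, using `math.lgamma` to avoid computing $n!$ directly (the sample points and tolerances are my own choices; convergence is slow, as the Stirling estimate suggests):

```python
import math

def a(n):
    # (n + 1) / (n!)^(1/n), computed via log-gamma for large n
    return (n + 1) * math.exp(-math.lgamma(n + 1) / n)

# the error shrinks as n grows, and is tiny by n = 10^6
assert abs(a(10 ** 3) - math.e) > abs(a(10 ** 6) - math.e)
assert abs(a(10 ** 6) - math.e) < 1e-3
```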
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2820147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Do gradients of level curves at tangent point point at same direction? I watched the Lagrange multipliers video here and it was mentioned in minute 2:50 that the gradients of both level curves at tangent point point at the same direction
Is this guaranted that they will point always at the same direction? If it is, can you please explain and/or provide a link to an article with an easy explanation?
| Good catch! The video narrator is certainly wrong when he said that "they're pointing in the same direction." Note that up to that point he was only saying that the two gradients would be proportional, which is absolutely correct. Equivalently, being proportional means that they are parallel. But they do NOT have to point in the same direction. In fact, it doesn't matter at all whether they point in the same or in opposite directions — it only matters that they are parallel (proportional).
To give them the benefit of the doubt, we can presume that he knows the stuff, and just misspoke at that point. Still, it is very unfortunate that an educational resource would make such a blatant mistake, confusing students who are trying to learn from them…
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2820266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the limit using delta epsilon definition. Evaluate $\lim_\limits{x \to 0}\ \dfrac{e^x-1}{e^{2x}-1}$ using $\delta - \varepsilon$ definition.
Attempt: I claim that $\lim_\limits{x \to 0}\ \dfrac{e^x-1}{e^{2x}-1} = \dfrac 12$. $\forall \varepsilon >0, \exists\delta>0$ such that
$$|\frac{e^x-1}{e^{2x}-1}-\frac12| = |\frac {2e^x-2-e^{2x}+1}{2e^{2x}-2}|=|\frac {2e^x-e^{2x}-1}{2e^{2x}-2}|\le |\cdot|<\varepsilon$$
I don't know how to proceed from here. I appreciate any hint.
| It will be easier to evaluate if you first factorise the denominator and then carry out the delta-epsilon estimate.
Hopefully you can proceed from there.
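One way the hint plays out (my own completion, not the answerer's): since $e^{2x}-1=(e^x-1)(e^x+1)$, for $x\neq 0$ the quotient collapses to $\frac{1}{e^x+1}$, and because $e^x+1>1$,

```latex
\left|\frac{e^{x}-1}{e^{2x}-1}-\frac12\right|
=\left|\frac{1}{e^{x}+1}-\frac12\right|
=\frac{\left|e^{x}-1\right|}{2\left(e^{x}+1\right)}
<\frac{\left|e^{x}-1\right|}{2}\qquad(x\neq0).
```

Since $|e^x-1|\le e\,|x|$ for $|x|\le 1$ (a standard estimate), the choice $\delta=\min\bigl(1,\tfrac{2\varepsilon}{e}\bigr)$ then works.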
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2820415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What can I say about $P\bigl[Y-X \le\frac{1}{2}\bigr]$ if $X$ and $Y$ are independent $U[0,1] $ variables? Let $X$ and $Y$ be two random independent variables with uniform distribution on $[0,1]$.
What can I say about $P\bigl[Y-X \le\frac{1}{2}\bigr]$?
I tried doing the following:
$$P\Bigl[ Y \le X + \frac{1}{2}\Bigr]$$
Let $X + \frac{1}{2} = Z$. $Z$ is uniform on $\bigl[\frac{1}{2},\frac{3}{2}\bigr]$
Then I evaluated $\int_{\!\frac{1}{2}}^1t\, dt$ but the result is wrong.
Can you give me some suggestions?
| Hint. Draw a picture of the unit square and shade the area that matters. Then you can find the answer without integrals (even without pencil and paper after you see the picture).
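Following the hint: the bad region $\{y - x > \tfrac12\}$ is the right triangle with vertices $(0,\tfrac12)$, $(0,1)$, $(\tfrac12,1)$, so the answer drops out of one exact area computation (using `fractions` to keep the arithmetic exact):

```python
from fractions import Fraction

half = Fraction(1, 2)
bad_area = half * half * half     # (1/2) * leg * leg, both legs of length 1/2
answer = 1 - bad_area

assert bad_area == Fraction(1, 8)
assert answer == Fraction(7, 8)
```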
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2820529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
What's the recursive definition of the heavy binary strings? Given $Σ = \{0, 1\}$ and $w ∈ Σ^*$. The binary string $w$ is called heavy if
(the number of $1$s in $w$) $-$ (the number of $0$s in $w$) $= 1$. For example, the strings $011$, $100011110$ are heavy, while the strings $0101$, $1100$, $1100100$,
$1111111$ are not.
What's the recursive definition of the heavy binary strings?
| If $H$ is the set of these heavy strings, then define the basis step as $1\in H$.
Now if $x, y$ are two strings in $H$, then the concatenations $xy$ or $yx$ will have two more $1$'s than the $0$'s. So we need to add one more zero in the string $xy$ for it to be a member of $H$. So the recursive step can be given as: if $x, y\in H$ then $0xy, x0y, xy0\in H.$
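One can check mechanically that every string produced by this recursion really is heavy (a sketch; `rounds` just bounds how far the closure is expanded, and nothing is claimed here about generating *all* heavy strings):

```python
def is_heavy(s):
    return s.count("1") - s.count("0") == 1

def closure(rounds):
    # basis: "1"; recursive step: x, y in H  =>  0xy, x0y, xy0 in H
    H = {"1"}
    for _ in range(rounds):
        new = set(H)
        for x in H:
            for y in H:
                new.update({"0" + x + y, x + "0" + y, x + y + "0"})
        H = new
    return H

H = closure(2)
assert all(is_heavy(s) for s in H)
assert {"011", "101", "110"} <= H   # every heavy string of length 3 appears
```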
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2820626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Struggling to find implicitly-defined function and its second derivative The question I am working on is as follows:
Let $y$ be implicitly defined by $$\sin(x-y) - e^{xy} + 1=0$$ and $y(0) = 0$. Find $y''(0)$.
Any help with finding the implicit function and possibly its second derivative is greatly appreciated because I cannot seem to work it out myself.
| If you differentiate it once you get:
$$\cos(x-y)(1-y') - e^{xy}(xy'+y) =0$$
and at $x=0$ we get $1-y'(0) = y(0)=0$, so $y'(0)=1$; differentiating a second time gives:
$$-\sin(x-y)(1-y')^2 -\cos(x-y)y''-e^{xy}(xy'+y)^2 -e^{xy}(xy''+2y')=0 $$
so for $x=0$ we get:
$$ -y''(0)-y(0)^2-2y'(0)=0\implies y''(0)=-2$$
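The value $y''(0)=-2$ can be confirmed numerically by solving the implicit equation with Newton's method near $x=0$ and taking a central difference (the seed $y\approx x-x^2$ and the step size are my own choices):

```python
import math

def F(x, y):
    return math.sin(x - y) - math.exp(x * y) + 1

def Fy(x, y):
    # partial derivative of F with respect to y
    return -math.cos(x - y) - x * math.exp(x * y)

def y_of(x):
    y = x - x * x                  # seed from the expected expansion
    for _ in range(50):            # Newton iteration on y for fixed x
        y -= F(x, y) / Fy(x, y)
    return y

h = 1e-4
ypp0 = (y_of(h) - 2 * y_of(0.0) + y_of(-h)) / (h * h)
assert abs(ypp0 + 2.0) < 1e-3      # y''(0) = -2
```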
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2820749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determine the coefficient of $wx^3y^2z^2$ in $(2w -x + y -2z)^8$ They provide a similar example:
Similarly provided example
I tried to set mine up the same way, so I had
My Answer so far:
Can someone let me know if I'm even close? In the example, I have no idea how they would have determined that the "6" should be squared, and I'm not sure where the "1!" term came from in the denominator of their first calculation. Any tips or pointers are appreciated (please forgive my stupidity, it's been about 10 years since I took a high-school math course!).
| Hint:
We can re-write as $$(2w-x+y-2z)^8=\sum_{r=0}^8\binom8r(2w-x)^{8-r}(y-2z)^r$$
As the sum of the exponents of $w,x$ is $1+3=4,$ we need $8-r=4\iff r=?$
Now the general term of $(2w-x)^4$ is $$\binom4k(2w)^{4-k}(-x)^k$$
We need $k=3$
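Carrying the hint through and cross-checking by brute-force expansion in plain Python (the multinomial bookkeeping gives the same number as expanding the eighth power term by term):

```python
from math import factorial

# monomials stored as exponent tuples (w, x, y, z) -> coefficient
base = {(1, 0, 0, 0): 2, (0, 1, 0, 0): -1, (0, 0, 1, 0): 1, (0, 0, 0, 1): -2}

poly = {(0, 0, 0, 0): 1}
for _ in range(8):                      # multiply out (2w - x + y - 2z)^8
    nxt = {}
    for m1, c1 in poly.items():
        for m2, c2 in base.items():
            m = tuple(a + b for a, b in zip(m1, m2))
            nxt[m] = nxt.get(m, 0) + c1 * c2
    poly = nxt

# multinomial formula: 8!/(1! 3! 2! 2!) * 2^1 * (-1)^3 * 1^2 * (-2)^2
direct = factorial(8) // (factorial(1) * factorial(3) * factorial(2) * factorial(2))
direct *= 2 * (-1) ** 3 * 1 ** 2 * (-2) ** 2
assert poly[(1, 3, 2, 2)] == direct == -13440
```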
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2820868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Clarifying the definition of continuity at a point in Johnsonbaugh/Pfaffenberger Foundations of Mathematical Analysis In my copy of Foundations of Mathematical Analysis, in the section on continuity, I'm not understanding definition 33.1. The definition is as follows:
Let $f$ be a function from a subset X of R into R. We say that $f$ is continuous at a if either:
*
*a is an accumulation point of X and $ \lim_{x \to a} f(x) = f(a)$
*a is not an accumulation point of X.
I don't understand the second bullet point. If a is not an accumulation point of X, then can't it be that a is not in the domain of f ? If that is the case, why would we say that f is continuous at a ? I can't see how it could vacuously be the case that f is continuous there. Perhaps the definition is simply missing a stipulation that $ a \in X$ ? Thanks!
| In fact, you need $a\in X$. Otherwise, there is no meaning in the continuity of $f$ outside of $X$. Consider $X=\{0\}\cup [1,2]$. Then $0\in X$ is not an accumulation point of $X$. Hence, by definition, $f$ is continuous at $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2820987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the relationship between singular values and eigenvalues of a matrix? Suppose I have a general $n\times n$ real matrix $A$. And suppose that $A$ has an SVD of the form $A=U^T S V$ with S of the form $I_m \oplus D$ where $I_m$ is the identity $m\times m$ matrix and $D$ is a matrix of size $n-m \times n-m$.
This means that $A$ has $m$ singular values equal to 1. Would this suffice to conclude that $A$ has $m$ eigenvalues of modulus 1? Why? Why not?
| In general the eigenvalues have no direct relation to the singular values. The only thing you can really be sure of is that the eigenvalues, in magnitude, lie in the interval $[\sigma_n,\sigma_1]$. Also each singular value of zero is in fact an eigenvalue (with the corresponding right singular vector as an eigenvector).
The exception is when $A$ is unitarily diagonalizable, which is equivalent to being normal. Then the left singular vectors and the right singular vectors coincide, each being equal to the eigenvectors. In this case the singular values are just the moduli of the eigenvalues.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2821073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to prove a cone is convex and closed?
Let $A$ be a $m\times n$ matrix and consider the cones $G_0=\{d\in\mathbb R^n:Ad<0\}$ and $G'=\{d\in\mathbb R^n:Ad\le0\}$
Prove that $G'$ is a convex closed cone.
Let's see that $G'=\overline{ G'}.$ Note that the inclusion $G'\subset\overline{ G'}$ is always true. Let's see the other inclusion.
Suppose $d\in\overline{G'}$ and $d\not\in G'.$ Thus for every open ball $B$ with $d\in B$ we have that $B\cap G'\neq\emptyset$, and $(Ad)_i>0$ for some index $i$.
How can I reach the contradiction? Is this a good way to prove it, or is there a better way? I don't know.
And how to prove convexity?
Please help me, thanks.
| And to show that $G'$ is convex just take two points $x,y \in G'$, the segment between these points must lie in $G'$. Let $0\leq \alpha\leq 1$, study the point $p = \alpha x + (1-\alpha)y$:
$$ Ap = A(\alpha x + (1-\alpha)y) = \alpha Ax + (1-\alpha)Ay \leq 0 $$
the last inequality is true because $\alpha, 1-\alpha \geq 0$ and, since $x,y \in G'$, we have $Ax \leq 0$ and $Ay \leq 0$.
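As a quick numerical sanity check of this convexity argument, one can pick a sample matrix $A$ and two points of $G'$ and verify $Ap\le0$ along the whole segment (the matrix and points below are an arbitrary illustration, not from the question):

```python
# Sanity check: for points x, y with Ax <= 0 and Ay <= 0,
# every convex combination p = a*x + (1-a)*y also satisfies Ap <= 0.
A = [[1.0, 2.0], [3.0, -1.0]]          # arbitrary example matrix
x, y = [-1.0, 0.0], [-2.0, 1.0]        # both satisfy A v <= 0 componentwise

def matvec(M, v):
    return [sum(row[i] * v[i] for i in range(len(v))) for row in M]

assert all(c <= 0 for c in matvec(A, x))
assert all(c <= 0 for c in matvec(A, y))

for k in range(101):                    # alpha = 0, 0.01, ..., 1
    a = k / 100
    p = [a * x[i] + (1 - a) * y[i] for i in range(2)]
    assert all(c <= 1e-12 for c in matvec(A, p))
```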
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2821152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
How can $\int_0^x\lfloor t \rfloor^2dt$ be written as $\sum_{j=1}^{\lfloor x - 1 \rfloor} j^2 + q^2r$ Question 6(c) from Section 1.15 Exercises of Apostol's Calculus is the following:
Find all $x > 0$ for which $\int_0^x\lfloor t \rfloor^2dt = 2(x-1).$
A particular piece of reference material solves the problem in this manner:
$\int_0^x\lfloor t \rfloor^2dt = \sum_{j=1}^{\lfloor x - 1 \rfloor} j^2 + q^2r$ where $x = q + r, q \in \mathbb Z^+, 0 \le r < 1$.
$$\int_0^x \lfloor t \rfloor^2dt = \frac{q(q-1)(2q-1)}{6} + q^2r = 2(x-1) = 2(q+r-1) \\ \implies q(q-1)(2q-1) + 6q^2r = 12q +12r -12 \\ \implies x = 1, x = \frac52$$
Now, I understand how $\int_0^x\lfloor t \rfloor^2dt$ can be rewritten with $j=1$ and $\lfloor x - 1 \rfloor$ being the lower and upper limits of summation respectively, for you may remove $0$ from the partition of $[0, x]$ without having any effect on the final integral.
The part I don't understand is where the $x = q + r$ expression, and the term $q^2 r$ come into it. I don't understand the effect they have or why they are used. I also feel like adding the $q^2 r$ term to the formula for the square series would not be allowed, for you aren't adding to both sides.
Note that this is not a homework question, I am rather attempting to self-study Calculus over the Summer.
I would appreciate any help you could provide. Thank you.
| Note that $$\int_{0}^{x}\lfloor t \rfloor^2 dt = \int_{0}^{\lfloor x \rfloor}\lfloor t \rfloor^2 dt + \int_{\lfloor x \rfloor}^{x}\lfloor t \rfloor^2 dt.$$
The first sum is given by the summation, and the second term is
$$\int_{\lfloor x \rfloor}^{x}\lfloor t \rfloor^2 dt = \int_{\lfloor x \rfloor}^{x}\lfloor x \rfloor^2 dt = \lfloor x \rfloor^2(x-\lfloor x \rfloor) = q^2r.$$
Here, I have used the fact that $\lfloor t \rfloor = \lfloor x \rfloor$ for $\lfloor x \rfloor \le t \le x$, $q = \lfloor x \rfloor$, and $x = q+r$.
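The decomposition is easy to check numerically. The sketch below (names are my own) evaluates the integral by a midpoint Riemann sum and compares it with $\sum_{j=1}^{\lfloor x-1\rfloor}j^2+q^2r$, noting $\lfloor x-1\rfloor=q-1$:

```python
import math

def floor_sq_integral_formula(x):
    """sum_{j=1}^{floor(x-1)} j^2 + q^2 * r, with x = q + r, q = floor(x)."""
    q = math.floor(x)
    r = x - q
    return sum(j * j for j in range(1, q)) + q * q * r

def floor_sq_integral_riemann(x, n=200_000):
    """Midpoint Riemann sum of floor(t)^2 over [0, x]."""
    h = x / n
    return h * sum(math.floor((k + 0.5) * h) ** 2 for k in range(n))

for x in [1.0, 2.5, 3.7, 6.25]:
    assert abs(floor_sq_integral_formula(x) - floor_sq_integral_riemann(x)) < 0.05

# x = 5/2 does solve the exercise's equation: both sides equal 3
assert math.isclose(floor_sq_integral_formula(2.5), 2 * (2.5 - 1))
```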
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2821306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
BlackJack Card Probability when Counting Cards In a single deck blackjack game - if you're not counting cards - the probability that the next card will be a 10/J/Q/K is 16/52.
I'm trying to figure out how to adjust the probabilities when you are counting cards. For those that might not be familiar, a common card counting system (HiLo) works by keeping a running "count", when you see a 2-6 you add 1 to the count, when you see a T/J/Q/K/A you subtract 1 from the count. When the count is positive it means the odds are better for the player (the remaining deck(s) is richer in high cards vs low cards).
Let say that it's a 2 deck game, and half the cards have already been dealt - 52 cards remain. The count is +5, that means there have been 5 more low cards seen so far than high cards. What is the probability that the next card dealt will be a 10/J/Q/K? It's gotta be more than 16/52, because we know the deck is richer in high cards based on the count. I just don't know how to model/calculate it.
| As in my answer to your later question
$\qquad$BlackJack Card Counting Probabilities
defining $f(a,b,c)$ as the number of $52$-card subsets of the $104$-card deck consisting of
*
*$a$ low cards$\;(2,3,4,5,6)$.$\\[4pt]$
*$b$ neutral cards$\;(7,8,9)$.$\\[4pt]$
*$c$ high cards$\;(10,\text{J},\text{Q},\text{K},\text{A})$.
with cards of the same type (low, neutral, high) regarded as indistinguishable, we get
$$f(a,b,c)={\small{\binom{40}{a}\binom{24}{b}\binom{40}{c}}}$$
Given that there were $5$ more low cards than high cards among the first $52$ dealt cards, the probability that the next card to be dealt is a high card $(10,\text{J},\text{Q},\text{K},\text{A})$ is
$$
\frac
{{\displaystyle{\sum_{c=12}^{23}f(c+5,47-2c,c)(40-c)}}}
{52{\displaystyle{\sum_{c=12}^{23}f(c+5,47-2c,c)}}}
=\frac{45}{104}\approx\, 0.4326923077
$$
and the probability that it's a $10$-type card $(10,\text{J},\text{Q},\text{K})$ is approximately
$$
\left(\frac{4}{5}\right)\left(\frac{45}{104}\right)=\frac{9}{26}\approx\,0.3461538462
$$
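Both values can be reproduced with exact rational arithmetic; the script below follows the displayed sums verbatim:

```python
from fractions import Fraction
from math import comb

def f(a, b, c):
    # number of 52-card subsets with a low, b neutral, c high cards
    return comb(40, a) * comb(24, b) * comb(40, c)

num = sum(f(c + 5, 47 - 2 * c, c) * (40 - c) for c in range(12, 24))
den = 52 * sum(f(c + 5, 47 - 2 * c, c) for c in range(12, 24))
p_high = Fraction(num, den)
print(p_high)                    # 45/104
print(Fraction(4, 5) * p_high)   # 9/26, the 10/J/Q/K-only probability
```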
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2821389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Is $\int_{-2}^3\frac{1}{x^3}dx=\frac5{72}$ or not defined? If we divide it into two parts such that $$I=\int_{-2}^0\frac{1}{x^3}dx+\int_0^3\frac{1}{x^3}dx$$
And then use substitution $x=-t$ we get $$I=\int_2^3\frac1{x^3}dx=\frac5{72}$$
However, If we use limits on both part separately, they both diverge, so the integral diverges too.
Which explanation is correct?
| You are right that the integral is not defined (because of the singularity), so you can't trust what the anti-derivative tells you. It's even worse with $$\int_{-1}^{1}\frac{1}{x^2}dx=-2\tag{1}$$
The area is clearly positive ($+\infty$). If you were to break up $(1)$ you would get $$\int_{-1}^{1}\frac{1}{x^2}dx=2\int_{0}^{1}\frac{1}{x^2}dx\rightarrow\infty$$ So it's not always true that you can break up integrals.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2821540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Let $M$ be a orientable 2-closed surface,prove $H^1(M)$ is direct sum of an even number of $\Bbb Z$ Let $M$ be a orientable 2-closed surface,prove $H^1(M)$ is direct sum of an even number of $\Bbb Z$
Could anyone give some hints?
| Hint: Think about how you would triangulate such a surface. Think about how you can triangulate a $1$-holed torus, $2$-holed torus, $\dots$ , $n$-holed torus. This of course relies on what Georges mentioned in the comments; the fact that $M$ is just a sphere with $n$ handles.
Then think about what this triangulation is homotopy equivalent to, and what the first cohomology of this homotopy equivalent object is. Here you'll need that $H^1(S^1) \cong \Bbb{Z}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2821665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does squaring both sides of an equation lead to extraneous solutions? Let's say I have $x = x + 1$, which is a false statement for real $x$; why can I solve for real $x$ when I square both sides of the equation, giving $x^2=(x+1)^2$?
| An equality $e_1=e_2$, where $e_1$ and $e_2$ are expressions whose value is a number, means that the two expressions denote the same number.
You would of course expect that you if you apply the same function to two different representations of the same underlying number that you would therefore get the same result in both cases. This is a consequence of the usual notion of extensionality in mathematics. See e.g. https://en.m.wikipedia.org/wiki/Extensionality for more.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2821764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 11,
"answer_id": 9
} |
Find logarithm fit to two points Say I have an equation $f(x) = \log_b(ax+1)$, where $a$ and $b$ are constants. If I have two distinct points $(x_1,y_1)$ and $(x_2, y_2)$, where $x_2 > x_1$ and $y_2 > y_1$, how can I find values for $a$ and $b$ such that $f(x_1) = y_1$, and $f(x_2) = y_2$?
| Let $y_1=\log_{b}(ax_1+1)$ and $y_2=\log_{b}(ax_2+1)$; therefore$$b^{y_1}=ax_1+1\\b^{y_2}=ax_2+1$$Eliminating $a$, we have $$x_2b^{y_1}-x_1b^{y_2}=x_2-x_1,$$ which has no closed-form solution for $b$ in general, and hence none for $a$ either.
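There is no closed form, but a numerical root-finder recovers $a$ and $b$ easily. The sketch below (names and the bracketing interval are my own choices) bisects $g(b)=x_2b^{y_1}-x_1b^{y_2}-(x_2-x_1)$ and then back-substitutes $a=(b^{y_1}-1)/x_1$, using two points generated from $a=2$, $b=3$:

```python
# Two sample points generated from f(x) = log_3(2x + 1):
x1, y1 = 1.0, 1.0    # log_3(3) = 1
x2, y2 = 4.0, 2.0    # log_3(9) = 2

def g(b):
    # obtained by eliminating a from b**y1 = a*x1 + 1 and b**y2 = a*x2 + 1
    return x2 * b ** y1 - x1 * b ** y2 - (x2 - x1)

# bisection on a bracket where g changes sign (b = 1 is a spurious root)
lo, hi = 1.5, 10.0
assert g(lo) > 0 > g(hi)
for _ in range(100):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
b = (lo + hi) / 2
a = (b ** y1 - 1) / x1
print(round(b, 6), round(a, 6))  # recovers b = 3, a = 2
```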
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2821907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why does the identity $\mathbb{E}(X) = \mathbb{E}\left(\int \mathbb{1}_{u \leq X}du\right)$ hold? I'm reading on Hoeffding's covariance identity, the proof of which is neatly covered here, or, in a similar manner, in this MSE post, but I can't seem to fully understand the trick/property used there.
I.e., assume $(X_1, Y_1)$ and $(X_2, Y_2)$ are two independent vectors with identical distribution. The key point in the proof is to note that we can write
$$ \mathbb{E}[(X_1 - X_2) (Y_1 - Y_2)]$$ as
$$ \mathbb{E}\left(\iint_{\mathbb{R}\times\mathbb{R}} [\mathbb{1}_{u\leq X_1} - \mathbb{1}_{u \leq X_2}] \cdot [\mathbb{1}_{v\leq Y_1} - \mathbb{1}_{v \leq Y_2}]\,du\,dv \right)$$
Why does this hold?
| What underlies the equality $\mathbb E(X) = \mathbb E(\int \mathbb 1_{u\le X}\,du)$ is, intuitively, the way one thinks of the Lebesgue integral as coming from partitioning the $y$-axis, whereas the Riemann integral comes from partitioning the $x$-axis.
Think of a reasonable function $f(x)$ (say continuous, but that's not necessary, and nonnegative to be concrete). We think of $\int_{-\infty}^\infty f(x)\,dx$ as the area under the curve $y=f(x)$.
Now write this as an iterated integral and then change the order of integration:
$$\int_{-\infty}^\infty f(x)\,dx = \int_{-\infty}^\infty\int_0^{f(x)} 1\,dy\,dx =
\int_0^\infty \mu(\{x: f(x)\ge y\})\,dy.$$
The $x$ cross-section at height $y$ is precisely the set of points $x$ where $f(x)\ge y$. Here $\mu(E)$ is the (Lebesgue) measure of $E\subset\Bbb R$.
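The identity is easy to see numerically; for instance with $f(x)=x^2$ on $[0,1]$, both orders of integration give $1/3$ (a rough Riemann-sum sketch, my own example):

```python
import math

# Check: the integral of f over [0,1] equals the integral over y of
# mu{x : f(x) >= y}, for f(x) = x^2 (both sides equal 1/3).
n = 100_000
f = lambda x: x * x

lhs = sum(f((k + 0.5) / n) for k in range(n)) / n             # integral of f(x) dx
# mu{x in [0,1] : x^2 >= y} = 1 - sqrt(y)
rhs = sum(1 - math.sqrt((k + 0.5) / n) for k in range(n)) / n  # integral of mu(...) dy

assert abs(lhs - 1 / 3) < 1e-6
assert abs(rhs - 1 / 3) < 1e-4
```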
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2822019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Convergence of $a_n=(1-\frac12)^{(\frac12-\frac13)^{...^{(\frac{1}{n}-\frac{1}{n+1})}}}$ Let us move to a telescoping sum used as exponents. Assume we have this sequence: $a_n=(1-\frac12)^{(\frac12-\frac13)^{...^{(\frac{1}{n}-\frac{1}{n+1})}}}$ with $n\geq1$. This sequence can be written as a power tower of sequences ${x_n} ^ {{{y_n}^{c_n}}^\cdots}$ whose values all lie in $(0,1)$. I want to know whether the titled sequence converges to $1$, and how we can evaluate it as $n$ goes to $\infty$.
$\forall n\in \mathbb{N}^{*}:u_{n}=(1-\frac{1}{2})^{(\frac{1}{2}-\frac{1}{3})^{(\frac{1}{3}-\frac{1}{4})^{...(\frac{1}{n}-\frac{1}{n+1})}}}\gt 0$
$u_{1}=v_{1}=1-\frac{1}{2}=\frac{1}{2}$
$u_{2}=v_{1}^{v_{2}},u_{3}=v_{1}^{v_{2}^{v_{3}}},...,u_{n}=v_{1}^{v_{2}^{v_{3}^{....v_{n}}}},v_{n}=\frac{1}{n}-\frac{1}{n+1}=\frac{1}{n(n+1)}$
$\ln u_{n}=v_{2}^{v_{3}^{^{....v_{n}}}}\ln v_{1}=-v_{2}^{v_{3}^{^{....v_{n}}}}\ln 2\lt 0$
$0\lt u_{n}\lt 1$
$v_{99}^{v_{100}}=\left(\frac{1}{9900}\right)^{\frac{1}{10100}}\approx 0.9991$
$\lim_{n \to \infty }(\frac{1}{(n-1)n})^{\frac{1}{n(n+1)}}=1 $
$n\ge 99\Rightarrow v_{n}^{v_{n+1}}\gt 0.999$
$ A_{n}=v_{99}^{v_{100}^{v_{101}^{...v_{n}}}}\longrightarrow \lim_{n \to \infty }A_{n}\approx 1$
$\lim_{n \to \infty }u_{n}\approx v_{1}^{v_{2}^{^{...v_{99}}}} $
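The truncations $u_n$ can be evaluated numerically from the top down (the tower is right-associated, as in the question); this makes it easy to experiment with the behaviour for moderate $n$. The helper below is just an exploratory sketch, not part of the argument above:

```python
def v(k):
    return 1 / (k * (k + 1))

def u(n):
    """Evaluate u_n = v(1) ** (v(2) ** (... ** v(n))) from the top down."""
    t = v(n)
    for k in range(n - 1, 0, -1):
        t = v(k) ** t
    return t

# every truncation lies in (0, 1): a positive base below 1, positive exponent
for n in range(2, 30):
    assert 0 < u(n) < 1
```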
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2822112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 6,
"answer_id": 5
} |
An urn contains blue balls and red balls. I need to find the probability of drawing more blue balls than red balls
An urn contains 5 identical blue balls and 4 identical red balls. Taking 5 balls at random from the urn, what is the probability that the number of blue balls is greater than the number of red balls?
My first guess was setting the ways that I can draw the balls.
It was:
$\color{blue} {BBBBB}$ ; $\color{blue} {BBBB}\color{red}{R}$; $\color{blue} {BBB}\color{red}{RR}$; $\color{blue} {BB}\color{red}{RRR}$; $\color{blue} {B}\color{red}{RRRR}$
Only $3$ cases have the number of blue balls greater than the number of red balls. Then the odds must be $\displaystyle{\frac{3}{5}}$.
But this answer sounds strange for me. I think that it is wrong?
Could anyone help me how to figure out this question?
| Three approaches:
(1) This can be viewed as a hypergeometric distribution. The urn contains
four red balls and five blue balls. Let $X$ be the number of red balls
among five balls drawn at random without replacement. To draw more blue balls
than red you need to evaluate $P(X \le 2).$ In R statistical software
this can be evaluated as follows:
phyper(2, 4, 5, 5)
## 0.6428571
(2) The equivalent answer can be obtained using a combinatorial argument:
$$\frac{{4 \choose 0}{5 \choose 5}+{4\choose 1}{5 \choose 4}+{4 \choose 2}{5 \choose 3}}{{9 \choose 5}} = \frac{1 + 20 + 60}{126} = 81/126 = 9/14 = 0.6428571$$
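The count in (2) can also be reproduced with exact rational arithmetic (a direct Python transcription of the formula above):

```python
from fractions import Fraction
from math import comb

# k red balls among the 5 drawn, for k = 0, 1, 2 (i.e. more blue than red)
p = Fraction(sum(comb(4, k) * comb(5, 5 - k) for k in range(3)), comb(9, 5))
print(p)          # 9/14
print(float(p))   # 0.6428571...
```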
(3) An approximate value (to about 3 places) from simulating a million draws of five balls from
such an urn and counting the red balls can be obtained as follows:
set.seed(616)
m = 10^6; urn = c(1,1,1,1,1,2,2,2,2) # 1 = blue, 2 = red
r = replicate(m, sum(sample(urn, 5)==2)) # sample 5 balls without replacement
mean(r <= 2) # mean of logical vector is nr of TRUEs
## 0.642822
The histogram below shows the simulated hypergeometric distribution of the number of red balls drawn. The open red dots show exact hypergeometric probabilities.
At the scale of the graph, it is not easy to see any difference between the
simulated and exact values.
Note: You should be sure you understand and can explain the details of either method (1) or method (2) for your class. The simulation is probably not something you are expected to know.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2822206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $F_0\cap G_0=\emptyset$ then $x$ is a local minimum of function Consider the theorem:
Consider the following linear optimization problem $$\max 2x_1+3x_2$$ $$\text{s.t. } x_1+x_2\le8\\ -x_1+2x_2\le4\\ x_1,x_2\ge0$$
a) For each extreme point verify if necessary condition of theorem is satisfied.
b)Find the optimal solution and justify the optimality of the solution.
First we switch the problem to $\min$. Thus we have $$\min -2x_1-3x_2$$ $$\text{s.t. } x_1+x_2\le8\\ -x_1+2x_2\le4\\ x_1,x_2\ge0$$
Drawing the feasible region we found that there are only 3 extreme points: $A=(0,2),B=(4,4),C=(0,8)$.
Notice that $A$ is the only point that satisfies the constraint conditions.
Now we try to see that $F_0\cap G_0=\emptyset$. We first calculate the gradients
$\nabla f(A)=(-2,-3)^t,\nabla g_1(A)=(1,1)^t,\nabla g_2(A)=(-1,2)^t$.
And $\nabla f(A)^td=-2d_1-3d_2$
$\nabla g_1(A)^td=d_1+d_2$
$\nabla g_2(A)^td=-d_1+d_2$
We ask the 3 of them to be less than zero.
My question is how can I check that $F_0\cap G_0=\emptyset$ ?
| Notice that $d_1+d_2<0$ and $-d_1+d_2<0$ implies that $d_2<-|d_1|$, or $-d_2>|d_1|$. Hence $-2d_1-3d_2>-2d_1+3|d_1|\geqslant0$ for all $d_1\in\mathbb{R}$.
You can also draw a picture, which is helpful for these 2-d geometric problems (the figure is omitted here).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2822351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to find the minimum of $f(x)=\frac{4x^2}{\sqrt{x^2-16}}$ without using the derivative?
Find the minimum of function $$f(x)=\frac{4x^2}{\sqrt{x^2-16}}$$ without using the derivative.
In math class we haven't learnt how to solve this kind of problem (optimization) yet. I already know that it is solvable using derivatives, but there should be another way. Thanks in advance!
| It is the same as finding the minimum of $\frac{4z}{\sqrt{z-16}}$ for $z>16$, or the minimum of $\frac{16 t}{\sqrt{t-1}}$ for $t>1$, or the minimum of $\frac{16(u+1)}{\sqrt{u}}$ for $u>0$, or the minimum of $16\left(v+\frac{1}{v}\right)$ for $v>0$. It is clearly $\color{red}{32}$ by the AM-GM inequality.
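A quick grid search confirms the value; the minimum $32$ is attained at $v=1$, i.e. $x=\sqrt{32}$ (sketch, my own check):

```python
import math

f = lambda x: 4 * x * x / math.sqrt(x * x - 16)

# scan x > 4 on a fine grid, x in [4.1, 12.1)
best = min(f(4.1 + k / 1000) for k in range(8000))
print(best)   # approximately 32, attained near x = sqrt(32)

assert abs(best - 32) < 1e-3
assert abs(f(math.sqrt(32)) - 32) < 1e-9
```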
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2822440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 8,
"answer_id": 3
} |
Finding a negative power of $i$ How to find the value of a power of $i$ when the exponent is negative? When solving for a positive power of $i$, I use something like
$$i^{101} = (i^{2})^{50}\times i = (-1)^{50} \times i = 1 \times i = i.$$
But how to solve for negative power of $i$ such as $i^{-10}$?
Can anyone explain what to do in this case?
Solution attempt:
I will solve for $i^{-3}$ as
$$i^{-3} = \frac{1}{i^3} = \frac{1}{i^2 \times i} = \frac{1}{-1 \times i} = \frac {1}{-i},$$
so the answer we get is $\dfrac{1}{-i}$. But my book is saying that I should get $i$.
| By your analogy, $$i^{-10}=\frac{1}{i^{10}}=\frac{1}{i^8\cdot i^2}=\frac{1}{(i^{2})^4\cdot i^2}.$$ Since $i^2=-1$, it follows that $$i^{-10}=\frac{1}{(-1)^4\cdot(-1)}=\frac{1}{-1}=-1.$$ (As for your $i^{-3}$: the answer $\frac{1}{-i}$ is in fact equal to $i$, since $\frac{1}{-i}=\frac{1}{-i}\cdot\frac{i}{i}=\frac{i}{-i^2}=i$, which agrees with your book.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2822529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Is a Contraction also a Contraction under equivalent metrics?
Definition of a Contraction. Let $(X, d)$ be a metric space. Then a map $T : X → X$ is called a contraction on $X$ if there exists $q ∈ [0, 1)$ such that $d(T(x),T(y)) \le q d(x,y)$ for all $x, y$ in $X$.
My question: Does a contraction remain a contraction under an equivalent metric $d'$?
I know that Lipschitz continuity is not preserved in general under equivalent metrics, and since the two definitions are quite similar we may believe contractions are not preserved.
However, contractions have the additional requirement that they map a metric space to itself, so changing the scale of the metric will not create issues.
| user357151 showed this isn't true for equivalent metrics in general.
However, if we restrict ourselves to metrics induced by equivalent norms, we get an interesting relation.
Consider $T:V \to V$, where $V$ is a normed vector space with two equivalent norms, say $||\cdot||_1$ and $||\cdot||_2$. Then there exist positive constants $a,b$ such that:
$$ a ||x||_1 \le ||x||_2 \le b ||x||_1 \text{ for all }x\in V $$
Additionally, assume that $d_1$ and $d_2$ are induced by the first and second norm respectively, and that we have
$$ d_1( T(x), T(y)) \le c d_1(x, y) $$
We have the relation
$$ \begin{align*}
d_2(T(x),T(y) ) &\le b\times c \times d_1(x,y) \\
&\le \frac{b}{a} \times c \times d_2(x,y)\\
&= c_* d_2(x,y) \end{align*} $$
with $c_* = \frac{b}{a} \times c$.
My result was obtained with the help of this proof.
Hence we may suspect we can lose the contraction properties if the ratio $b/a$ is large enough. This is indeed the case.
Consider the following function $f: \mathbb{R}^2 \to \mathbb{R}^2$,
$$f(x) = (.9\max(|x_1|,|x_2|), .9 \max(|x_1|,|x_2|))$$
Then $f$ is a contraction under the metric induced by the maximum norm $d_\infty$, but not under the metric induced by the Manhattan norm $d_1$.
Indeed,
$$ \begin{align*} d_\infty( f(x), f(y) ) &= \max( |f_1(x) - f_1(y)|, |f_2(x) - f_2(y)| ) \\
&= .9|\max(|x_1|,|x_2|) - \max(|y_1|,|y_2|)|
\\
&\le .9\max(|x_1 - y_1|,|x_2 - y_2|) \\
&= .9 d_\infty(x, y)\end{align*}
$$
where the inequality is obtained from this relation. Note that $c = .9$.
However, $f$ is not a contraction for a metric induced by the Manhattan norm $d_1$. For example, taking $x = (1,0)$, $y = (0,0)$, we have
$$\begin{align*} d_1(x,y) &= 1 + 0 = 1 \\
d_1(f(x),f(y)) &= .9 + .9 = 1.8 \end{align*}$$
which proves that $f$ is not a contraction.
Note that for vectors of length 2, the maximum and Manhattan norms follow the following relation,
$$ ||x||_\infty \le ||x||_1 \le 2 ||x||_\infty $$
Here $b = 2$ and $a = 1$, and so $c_* = 1.8$.
Similarly,
Consider the following function $g: \mathbb{R}^2 \to \mathbb{R}^2$,
$$g(x) = (.7(|x_1|+|x_2|), 0)$$
Then $g$ is a contraction under the metric induced by the Manhattan norm $d_1$, but not under the metric induced by the maximum norm $d_\infty$
We have
$$ \begin{align*} d_1(g(x),g(y)) &= .7(||x_1| - |y_1|| + ||x_2| - |y_2||) \\
&\le .7(|x_1 - y_1| + |x_2 - y_2|) \\
&= .7 d_1(x,y)
\end{align*}
$$
but we get, with $x = (1,1)$ and $y=(0,0)$,
$$d_\infty(g(x),g(y)) = 2 \times .7 = 1.4 d_\infty(x,y)$$
Again, that's because
$$ .5||x||_1 \le ||x||_\infty \le ||x||_1 $$
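These claims are easy to verify mechanically; here is a small self-check script (a direct transcription of $f$, $g$, and the two metrics, evaluated at the witness points used above):

```python
def d1(p, q):        # metric induced by the Manhattan norm
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def dinf(p, q):      # metric induced by the maximum norm
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def f(p):
    m = 0.9 * max(abs(p[0]), abs(p[1]))
    return (m, m)

def g(p):
    return (0.7 * (abs(p[0]) + abs(p[1])), 0.0)

x, y = (1.0, 0.0), (0.0, 0.0)
assert dinf(f(x), f(y)) <= 0.9 * dinf(x, y)   # f contracts in d_inf
assert d1(f(x), f(y)) > d1(x, y)              # ...but expands in d_1 (1.8 vs 1)

x2, y2 = (1.0, 1.0), (0.0, 0.0)
assert d1(g(x2), g(y2)) <= 0.7 * d1(x2, y2)   # g contracts in d_1
assert dinf(g(x2), g(y2)) > dinf(x2, y2)      # ...but expands in d_inf (1.4 vs 1)
```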
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2822610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Induced action is proper discontinuous Let $f:X\to Y$ be a surjective map, and let $G$ act on $X$ such that for each $g\in G$ and $x,x'\in X$, $f(x')=f(x)$ implies $f(g\cdot x)=f(g\cdot x')$. Further assume that the group action on $X$ is properly discontinuous.
In the above situation, we get an induced action on $Y$. But is this action again properly discontinuous?
My attempt so far: Let $U\subset X$ with $(g\cdot U \cap U \neq \emptyset \Rightarrow g=e)$.
Set $V=f(U)$ and let $y\in g\cdot V \cap V$, i.e. $y=g\cdot y'$ for some $y'\in V$. As $V=f(U)$, we can find $x,x' \in U$ such that $y=f(x)$ and $y' =f(x')$. Then we have $f(g\cdot x)=f(x')$ by the definition of the action on $Y$.
However, I need $g\cdot x=x'$ in order to get a contradiction.
Did I miss something? Do I need more assumptions?
| Consider $\mathbb{R}\times S^1$, where $S^1$ is the quotient of $\mathbb{R}$ by the action of $\mathbb{Z}$ defined by $n.x=x+n$. We denote by $p:\mathbb{R}\rightarrow S^1$ the quotient map. Consider $\mathbb{R}\times S^1$ endowed with the action of $\mathbb{Z}$ defined by $n.(x,y)=(x+n,p(y+nc))$, where $c$ is an irrational number.
Let $f:\mathbb{R}\times S^1\rightarrow S^1$ be defined by $f(x,y)=y$; the action of $\mathbb{Z}$ is proper on $\mathbb{R}\times S^1$ but not on $S^1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2822705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Loewner order in terms of eigenvalues Suppose that $A \succeq B$, where $A$ and $B$ are real symmetric matrices, so that $A - B$ is positive semidefinite, equivalently, $A - B$ has nonnegative eigenvalues.
Is it always true that $\lambda_i(A) \geq \lambda_i(B)$ (assuming that eigenvalues are ordered)?
| Just so that this question not remains formally unanswered: as @julian pointed out in the comments by the min-max-theorem we have that for all $k \in \{ 1, \ldots, d \}$
$$
\lambda_k(A)
= \min_{\substack{U \subset \mathbb R^d, \\ \dim(U) = k}} \max_{x \in U \setminus \{ 0 \}} \frac{x^{\mathsf{T}} A x}{x^{\mathsf{T}} x}
\ge \min_{\substack{U \subset \mathbb R^d, \\ \dim(U) = k}} \max_{x \in U \setminus \{ 0 \}} \frac{x^{\mathsf{T}} B x}{x^{\mathsf{T}} x}
= \lambda_k(B),
$$
where the inequality is due to $B \preceq A$, which is equivalent to $0 \preceq A - B$, i.e. $x^{\mathsf{T}} (A - B) x \ge 0$ for all $x \in \mathbb{R}^d$.
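A tiny concrete check of the conclusion (a $2\times2$ example of my own, using the closed-form eigenvalues of a symmetric $2\times2$ matrix): with $B=\begin{pmatrix}1&2\\2&1\end{pmatrix}$ and $A=B+2I\succeq B$, the ordered eigenvalues satisfy the inequality.

```python
import math

def eig2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]], ascending."""
    m = (a + c) / 2
    d = math.sqrt(((a - c) / 2) ** 2 + b * b)
    return (m - d, m + d)

lB = eig2(1, 2, 1)   # eigenvalues of B: (-1, 3)
lA = eig2(3, 2, 3)   # A = B + 2I, eigenvalues: (1, 5)
assert all(la >= lb for la, lb in zip(lA, lB))
print(lA, lB)
```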
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2822830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
$\frac{a^2} {1+a^2} + \frac{b^2} {1+b^2} + \frac{c^2} {1+c^2} = 2.$ Prove $\frac{a} {1+a^2} + \frac{b} {1+b^2} + \frac{c} {1+c^2} \leq \sqrt{2}.$ $a, b, c \in \mathbb{R}^{+}.$
WLOG assume $a \leq b \leq c.$ I tried substitution: $x=\frac{1} {1+a^2}, y=\frac{1} {1+b^2}, z=\frac{1} {1+c^2},$ so $x \geq y \geq z$ and $(1-x)+(1-y)+(1-z)=2 \to x+y+z=1.$
We want to prove $ax+by+cz \leq \sqrt{2}.$ This somewhat looks like Cauchy-Schwarz so I tried that: $(a^2+b^2+c^2)(x^2+y^2+z^2) \geq (ax+by+cz)^2.$ The problem becomes $(a^2+b^2+c^2)(x^2+y^2+z^2) \geq 2,$ since $a,b,c,x,y,z>0.$
Expressing $a,b,c$ in terms of $x,y,z$: $(\frac {1}{x} + \frac {1}{y} + \frac {1}{z} - 3)(x^2+y^2+z^2)$
$= x+y+z+\frac{y^2}{x}+\frac{z^2}{x}+\frac{x^2}{y}+\frac{z^2}{y}+\frac{x^2}{z}+\frac{y^2}{z}-3(x^2+y^2+z^2) \geq 2.$
$\to \frac{y^2}{x}+\frac{z^2}{x}+\frac{x^2}{y}+\frac{z^2}{y}+\frac{x^2}{z}+\frac{y^2}{z}-3(x^2+y^2+z^2) \geq 1.$ Stuck here. Thinking about using AM-GM but not sure how. Help would be greatly appreciated.
| Let $$B:=\frac{1} {1+a^2} + \frac{1} {1+b^2} + \frac{1} {1+c^2}$$
From: $$A:=\frac{a^2} {1+a^2} + \frac{b^2} {1+b^2} + \frac{c^2} {1+c^2} = 2$$
we get $A+B =3$ so $B =1$.
Now by Cauchy inequality we have $$A\cdot B \geq \big(\underbrace{\frac{a} {1+a^2} + \frac{b} {1+b^2} + \frac{c} {1+c^2}}_{C}\big)^2$$
So we have $C^2\leq 2$ and we are done.
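As a numerical sanity check (my own addition, not part of the proof): the symmetric point $a=b=c=\sqrt2$ satisfies the constraint and attains equality $C=\sqrt2$.

```python
import math

a = b = c = math.sqrt(2)
A = sum(t * t / (1 + t * t) for t in (a, b, c))   # constraint: should equal 2
C = sum(t / (1 + t * t) for t in (a, b, c))

assert abs(A - 2) < 1e-12
assert abs(C - math.sqrt(2)) < 1e-12              # the bound is attained here
```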
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2822937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
} |
Directional derivative, gradient and metric Considering the general expression of gradient with the directional derivative operator on $f$ function along $\vec{v}$ vector :
$$df(v)=\langle\text{grad}(f),v\rangle = g^{ij}\partial_{i} f v_{j}=\partial_{i} f v^{i}$$
taking $\partial_{i} = \dfrac{\partial}{\partial x^{i}}$ (with $x^{i}$ contravariant coordinates).
Can I write also :
$$df(v)=\langle\text{grad}(f),v\rangle = \partial^{i} f v_{i}$$
with $\partial^{i} = g^{ij} \partial_{j} =\dfrac{\partial}{\partial x_{i}}$ where $x_{i}$ are covariant coordinates ??
i.e, I don't know if I can raise up the index of $\partial_{j}$ multiplying it by $g^{ij}$ while defining $\partial^{i} = \dfrac{\partial}{\partial x_{i}}$ ?
Regards
EDIT 1: it may be that I am confusing covariant/contravariant coordinates of a vector with curvilinear coordinates (curvilinear coordinates are always contravariant, aren't they?)
EDIT 2 : question transferred on https://math.stackexchange.com/questions/2823029/directional-derivative-gradient-and-metric
| If you are in a flat Euclidean space $\Bbb R^n$ and you are using the Euclidean metric, then the metric tensor is $g_{ij}=\delta_{ij}$, as is
$g^{ij}=\delta^{ij}$. So the rule for raising indices gives $\partial^i=\partial_i$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2823032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Probability game: what is the probability of winning? In a certain game, you perform three tasks sequentially. First, you flip a quarter, and if you get heads you win the game. If you get tails, then you move to the second task. The second task is rolling a single die. If you roll a six, you win the game. If you roll anything other than a six on the second task, you move to the third task: drawing a card from a full playing-card deck. If you pick a spades card you win the game, and otherwise you lose the game. What is the probability of winning?
| You win if and only if you satisfy one of the three mutually exclusive situations:
*
*(1) You win on the first task by flipping a head
*(2) You lose on the first task by flipping a tail followed by winning on the second task by rolling a six
*(3) You lose on the first task by flipping a tail followed by losing on the second task by rolling a number other than six followed by winning on the third task by picking a spade
Case (1) occurs with probability $\frac{1}{2}$
Case (2) occurs with probability $\frac{1}{2}\times\frac{1}{6}$ (the 1/2 here referring to having lost the coinflip in the first task)
Case (3) occurs with probability $\frac{1}{2}\times\frac{5}{6}\times\frac{13}{52}$ (the 1/2 referring to having lost a coinflip and the 5/6 referring to having lost the dice roll)
Since these events are mutually exclusive and are the only ways in which you can win, adding these probabilities together gives the total probability of having won.
$$\frac{1}{2}+\frac{1}{2}\times\frac{1}{6}+\frac{1}{2}\times\frac{5}{6}\times\frac{13}{52}$$
An easier way to approach this calculation is to instead look at the probability that you lose instead and subtract it away from $1$. You lose if you failed the coinflip, failed to roll a six, and failed to draw a spade. That occurs with probability $\frac{1}{2}\times\frac{5}{6}\times\frac{39}{52}$. Subtracting away from $1$ gives the final total probability of winning as:
$$1-\frac{1}{2}\times\frac{5}{6}\times\frac{39}{52}$$
which equals the same as the above and simplifies to:
$$\frac{11}{16}$$
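Exact rational arithmetic confirms that both expressions equal $\frac{11}{16}$:

```python
from fractions import Fraction as F

# direct sum over the three mutually exclusive winning cases
direct = F(1, 2) + F(1, 2) * F(1, 6) + F(1, 2) * F(5, 6) * F(13, 52)
# complement of the single losing path
complement = 1 - F(1, 2) * F(5, 6) * F(39, 52)

assert direct == complement == F(11, 16)
print(direct)   # 11/16
```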
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2823200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Probability path - Exercise 6.14 : on almost sure divergence
Let $\{X_n\}$ be independent with $P(X_n = n^2) = \frac{1}{n}$ and $P(X_n = -1) = 1 - \frac{1}{n}$.
Show that $\sum_{n=1}^{\infty}X_n = -\infty$ almost surely.
I found that $E[X_n] = n + \frac{1}{n} - 1 \to \infty$. So intuitively it seems the sum should tend to $\infty$. But why $-\infty$?
| Fix $n$. Let $S_n=X_1+\ldots+X_n$ and let $E_i$ be the event that $X_{i+1},\ldots,X_n$ are all equal to $-1$. Then $Pr[E_i]=\prod_{j=i+1}^{n}\left(1-\dfrac{1}{j} \right)=\dfrac{i}{n}$. Now let $i =\lfloor\sqrt{2n} \rfloor$, and note that if $E_i$ is false, then $S_n > 2n-n=n$. The probability that $S_n>n$ is thus at least the probability that $E_i$ is false, which is $1-\dfrac{\lfloor \sqrt{2n} \rfloor}{n} \to 1$ as $n \to \infty$. Thus, the series diverges to $\infty$ almost surely.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2823311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Invalid syllogism passes Gensler's star test. Why? According to Gensler (2017):
An instance of a letter is distributed in a wff if it occurs just after “all” or anywhere after “no” or “not.” (p. 0008)
He then defines the star test as follows:
Star premise letters that are distributed and conclusion letters that aren’t distributed. Then the syllogism is valid if and only if every capital letter is starred exactly once and there is exactly one star on the right-hand side. (p. 0009)
Now, in the 2.2a Exercise, the third problem is as follows:
no Y* is E*
all G* is Y
∴ no Y is E
(p. 0011)
I have made distributed letters bold and starred where appropriate (or so I think; it is late). According to the "Answers to Selected Problems" (Gensler, 2017):
This isn’t a syllogism, because “Y” occurs three times and “G” occurs only once. (p. 0378)
This seems obvious. However, every capital letter is starred exactly once and there is exactly one star on the right-hand side. What am I missing?
Reference:
Gensler, H. J. (2017). Introduction to Logic (3rd ed.) [ProQuest Ebook Central version]. Retrieved from Ebookcentral.proquest.com
| See H. Gensler, Introduction to Logic, 2nd ed., 2017, page 9:
More precisely, a syllogism is a vertical sequence of one or more wffs in which
each letter occurs twice and the letters “form a chain” (each wff has at least one letter in common with the wff just below it, if there is one, and the first wff has at least one letter in common with the last wff).
Thus, the issue is that the argument is not a syllogism. But it is a valid argument : there are lots of valid arguments that are not in "syllogistic form"; consider e.g. :
"if $P$, then $Q$; therefore if not $Q$, then not $P$."
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2823424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solving a 2nd order ODE: $\frac{d^2}{dx^2}y(x)=\left(C+(1+x^2)^{-1}\right)y(x)$. I would like to solve the following ode:
$$\frac{d^2}{dx^2}y(x)=\left(C+(1+x^2)^{-1}\right)y(x),\quad x\in\mathbb{R};$$
with boundary condition $y(0)=1$. $C$ is just some constant.
I am very stuck with this. Does anyone have any suggestions of how to proceed?
| I am afraid that a closed-form solution may not exist and that, once a second boundary condition is provided, a numerical method would be required.
Even for $C=0$ the solution is far from simple, since it is given by
$$y=\, _2F_1\left(-\frac{\sqrt{5}+1}{4} ,\frac{\sqrt{5}-1}{4}
;\frac{1}{2};-x^2\right)+c_1\, x \,\, _2F_1\left(-\frac{\sqrt{5}-1}{4},\frac{\sqrt{5}+1}{4};\frac{3}{2};-x^2\right)$$ in which hypergeometric functions appear.
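To illustrate the numerical route: a minimal fourth-order Runge–Kutta sketch in plain Python, with $C=1$ and an assumed second condition $y'(0)=0$ (both choices are mine, purely for illustration; any ODE library would do the same job):

```python
C = 1.0  # assumed value of the constant, for illustration only

def rhs(x, y, yp):
    return yp, (C + 1.0 / (1.0 + x * x)) * y   # returns (y', y'')

def rk4(x0, y0, yp0, x1, n=10_000):
    # classical 4th-order Runge-Kutta on the first-order system (y, y')
    h = (x1 - x0) / n
    x, y, yp = x0, y0, yp0
    for _ in range(n):
        k1 = rhs(x, y, yp)
        k2 = rhs(x + h / 2, y + h / 2 * k1[0], yp + h / 2 * k1[1])
        k3 = rhs(x + h / 2, y + h / 2 * k2[0], yp + h / 2 * k2[1])
        k4 = rhs(x + h, y + h * k3[0], yp + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        yp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return y

# y(0) = 1 together with the assumed condition y'(0) = 0
print(rk4(0.0, 1.0, 0.0, 2.0) > 1.0)   # True: y'' > 0 here, so y grows from 1
```

Since $C+(1+x^2)^{-1}>0$, any solution with $y(0)=1$, $y'(0)=0$ is convex and increasing for $x>0$, which the numerical run confirms.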
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2823522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A representation of $S^3$ I am intending to learn low dimensional topology from Saveliev's Book "Lectures on the Topology of 3-manifolds" by myself.
At the very beginning, he gives a Heegaard splitting of $S^3$ stating that
"the sphere $S^3$ is represented as the result of revolving the 2-sphere $S^2=R^2\cup\{\infty\}$ about the circle $l\cup\{\infty\}$ where $l$ is a straight line in $R^2$."
I do not understand why the revolution of $S^2$ about $S^1$ results in $S^3$. Is it a fact that can be generalized to higher dimensions, i.e. does the revolution of $S^{n+1}$ about $S^{n}$ result in $S^{n+2}$?. Thanks for any help and suggestions.
| Here the technical mechanism of ‘’revolving’’ means that for each element of the 2-sphere $S^2$ (or any other surface $F$), there is a circle $S^1$ attached. This kind of space is called ‘’circle bundle’’ over the 2-sphere (or over $F$, respectively).
For the hypersphere $S^3$ it happens that can be fibered as
$S^1\hookrightarrow S^3\stackrel{h}\to S^2$, and explicitly the map $h$ is constructed employing complex coordinates.
Any tetrad $(x,y,s,t)$ of real numbers which satisfy $x^2+y^2+s^2+t^2=1$ is an element in $S^3$, but with the complex numbers $z=x+iy$ and $w=s+it$ the pair $(z,w)$ parametrize points in the hypersphere $S^3$ if $z\overline{z}+w\overline{w}=1$. Here $\overline{z}=x-iy$ and $z\overline{z}=|z|^2$ is the squared norm of the complex number $z$.
Now $h(z,w)=(2z\overline{w},z\overline{z}-w\overline{w})\in\Bbb R^2\times\Bbb R$ is really a map onto the 2-sphere because
$$\|h(z,w)\|^2=4z\overline{z}w\overline{w}+(z\overline{z}-w\overline{w})^2
=(z\overline{z}+w\overline{w})^2=1.$$
In the hypersphere, any other point of the form $(\lambda z,\lambda w)$, with $\lambda$ a complex number satisfying $|\lambda|=1$, is mapped onto the same point, because $h(\lambda z,\lambda w)=(2\lambda z\overline{\lambda w},|\lambda z|^2-|\lambda w|^2)=(2|\lambda|^2z\overline{w},z\overline{z}-w\overline{w})=h(z,w)$.
So the set $S=\{(\lambda z,\lambda w):|\lambda|=1\}$ is the fiber of the map $h$ through the point $(z,w)$ of $S^3$; it is a circle $S^1$, parametrized by $\lambda$.
We emphasize that the circle bundle $S^3$ is different from the trivial $S^1\hookrightarrow S^1\times S^2\stackrel{\pi}\to S^2$, where $\pi$ is the projection $\pi(\theta,\xi)=\xi$.
Objects (or spaces) $E$ fibered as $S^1\hookrightarrow E\to F$ are called circle bundles over $F$.
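These two identities, that $h$ lands on $S^2$ and that $h$ is constant on the circles $\{(\lambda z,\lambda w):|\lambda|=1\}$, are easy to spot-check numerically. A small Python sketch (the sampled point and the value of $\lambda$ are arbitrary):

```python
import cmath
import random

def h(z, w):
    # h(z, w) = (2 z conj(w), |z|^2 - |w|^2), taking values in C x R = R^2 x R
    return 2 * z * w.conjugate(), abs(z) ** 2 - abs(w) ** 2

random.seed(1)
t = random.random()
# a point (z, w) on S^3: |z|^2 + |w|^2 = t + (1 - t) = 1
z = cmath.rect(t ** 0.5, 6.28 * random.random())
w = cmath.rect((1 - t) ** 0.5, 6.28 * random.random())

u, r = h(z, w)
print(abs(abs(u) ** 2 + r ** 2 - 1) < 1e-12)       # True: h lands on S^2

lam = cmath.rect(1.0, 2.0)                          # any lambda with |lambda| = 1
u2, r2 = h(lam * z, lam * w)
print(abs(u2 - u) < 1e-12, abs(r2 - r) < 1e-12)     # True True: same image point
```
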
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2823637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
How do I apply integrating factor to solve this differential equation? $$x \frac{dy}{dx} + y = -2x^6y^4$$
I tried to find the general solution by dividing both sides by $x$ or $x^6$ but no solution I could get.. Do I even solve it with integrating factor?
| Contrary to other answers, you CAN find an integrating factor and manipulate your ODE via substitutions.
We have the ODE :
$$xy' + y = -2x^6y^4$$
Dividing both sides by $-\frac{1}{3}xy^4$, we obtain:
$$-\frac{3y'}{y^4} - \frac{3}{xy^3} = 6x^5$$
Let $v(x) = \frac{1}{y^3(x)}$ and then this gives $v'(x) = -\frac{3y'(x)}{y^4(x)} $ and then the differential equation becomes :
$$v'(x) - \frac{3v(x)}{x} = 6x^5$$
Let $μ(x) = e^{\int -\frac{3}{x}\rm d x} = \frac{1}{x^3}$ and then multiply both sides by $μ(x)$ :
$$\frac{v'(x)}{x^3}-\frac{3v(x)}{x^4}=6x^2$$
By substituting $-\frac{3}{x^4}=\big(\frac{1}{x^3}\big)'$ we have :
$$\frac{v'(x)}{x^3}-\bigg(\frac{1}{x^3}\bigg)'v(x)=6x^2$$
Now, we shall apply the reverse product rule : $f\frac{\rm d g}{\rm d x}+g\frac{\rm d f}{\rm dx} = \frac{\rm{d}}{\rm d x}(f\;g)$ :
$$\int \frac{\mathrm{d}}{\mathrm{d}x}\bigg(\frac{v(x)}{x^3}\bigg)\mathrm{d}x=\int6x^2\mathrm{d}x \implies v(x) = x^3(2x^3+c_1)$$
Now, you can substitute $v(x) = \frac{1}{y^3(x)}$ and solve for $y(x)$ to yield the final result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2823820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 2
} |
$\mathbb{Z} \times \mathbb{Z}$ and application of isomorphism theorems Let $G = \mathbb{Z} \times \mathbb{Z}$ with group law given by addition. Let $H$ be the subgroup generated by the element $(2,3)$. Then $G/H$ is isomorphic to $\mathbb{Z}$.
Is $G/H$ also isomorphic to $\mathbb{Z}_2\times \mathbb{Z}_3$
using the homomorphism $h((x,y)) = (x\mod 2,\ y\mod 3)$? (here, $h((0,0)) = (0,0)$ and $h((x,y) + (m,n)) = h((x,y)) + h((m,n))$.
| You have that $(2,3)+H$ and $(4,3)+H$ are both sent to the same element by the homomorphism. However, it's not true that $(2,3)+H = (4,3)+H$, as this would mean that $(2,0) \in H$, which isn't the case.
Hence the induced map $G/H \to \mathbb{Z}_2 \times \mathbb{Z}_3$ is not injective, and indeed $G/H$ isn't isomorphic to $\mathbb{Z}_2 \times \mathbb{Z}_3$: $G/H \cong \mathbb{Z}$ is infinite, while $\mathbb{Z}_2 \times \mathbb{Z}_3$ has six elements.
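A few lines of Python make the argument concrete. The finite search window for multiples of $(2,3)$ suffices, since $3k=0$ already forces $k=0$:

```python
def h(v):
    x, y = v
    return (x % 2, y % 3)

print(h((2, 3)), h((4, 3)))    # (0, 0) (0, 0): same image under h

# but (4,3) - (2,3) = (2,0) is not an integer multiple of (2,3):
# 3k = 0 forces k = 0, and then 2k = 0 != 2, so (2,0) is not in H
in_H = any((2, 0) == (2 * k, 3 * k) for k in range(-50, 51))
print(in_H)                     # False: the two cosets really are distinct
```
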
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2823928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Distance from a convex set to a point Let $Y \subseteq \mathbb{R}^n$ be a nonempty convex set such that $0 \notin Y$ and fix $y_1,\dots,y_n$ in $Y$, where $n \ge 2$. I know that there exist $i,j$ such that $\Vert y_i \Vert > \Vert y_j\Vert$. Define $ C = C(y_1,\dots,y_n)$, i.e. the set of convex combinations. Moreover, I know that there exists a unique $x\in C$ such that \begin{align*} \Vert x \Vert = d(0,C),\end{align*} where the latter is the distance from $0$ to $C$.
Now I want to show that \begin{align*} \Vert x \Vert < \Vert (1-\lambda) x + \lambda y_l\Vert\end{align*} for all $\lambda \in (0,1)$, where $y_l$ such that $\Vert y_l \Vert = \max\{\Vert y_1 \Vert, \dots, \Vert y_n \Vert \}$.
Notice that the proof should be constructive, i.e. no law of excluded middle should be used.
I did already show that $\langle x , c \rangle \ge 0$ for all $ c \in C$.
So far I was only able to show
\begin{align*}
(1-\lambda)\lambda\Vert x \Vert < \Vert (1-\lambda) x + \lambda y_l\Vert.
\end{align*}
| If I am understanding correctly, this seems quite trivial if the aforementioned distance function, $d$, computes the distance between the origin and the set $C$ by finding a point $x$ in $C$ with minimal $L_2$-norm, i.e., $$ d(\overset{\to}{0}, C) = \min_{x \in C} \left| \left| x\right| \right|_2.$$
If that's the case, then it's easy to see that for any $i \in [n]$ and $\lambda \in [0,1)$, the following holds:
$$ \left| \left| x\right| \right|_2 < \left| \left| \lambda x + (1-\lambda) y_i\right| \right|_2.$$
The reason such an inequality holds is that $x$ minimizes the distance to the origin (it has the least norm, and that minimizer is unique, as assumed). Note also that each point $w$ of the form $\lambda x + (1-\lambda) y_i$ lies in $C$ by convexity of $C$, and for $\lambda \in [0,1)$ we have $w \neq x$ whenever $y_i \neq x$; uniqueness of the minimizer then gives $\left| \left| w\right| \right|_2 > \left| \left| x\right| \right|_2$.
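For intuition, here is a small numerical sketch in Python with two generators in $\mathbb R^2$ (the points are my arbitrary choice; ternary search applies because $t\mapsto\|(1-t)y_1+ty_2\|$ is convex):

```python
import math

y1, y2 = (1.0, 2.0), (3.0, -1.0)   # arbitrary generators; 0 lies off the segment

def point(t):
    return ((1 - t) * y1[0] + t * y2[0], (1 - t) * y1[1] + t * y2[1])

def norm(p):
    return math.hypot(*p)

# ternary search for the norm-minimizer x on the segment C
lo, hi = 0.0, 1.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if norm(point(m1)) < norm(point(m2)):
        hi = m2
    else:
        lo = m1
x = point((lo + hi) / 2)

yl = max((y1, y2), key=norm)       # the generator of largest norm
ok = all(norm(x) < norm(tuple((1 - l) * x[i] + l * yl[i] for i in range(2)))
         for l in (0.1, 0.5, 0.9))
print(ok)  # True: moving from x toward y_l strictly increases the norm
```
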
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2824104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Let $X_1$ and $X_2$ be two random independent variables with Poisson distribution $\lambda = 1$ Let $X_1$ and $X_2$ be two random independent variables with Poisson distribution $\lambda = 1$. Denoting by $Y = min\{X_1,X_2\}$, I want to calculate $P[Y \geq 1]$.
This is what I did:
$$ P[Y \geq 1] = 1 - P[Y \leq 1] $$
I now calculate $P[Y \leq 1]$. The minimum of $X_1$ and $X_2$ is less than or equal to one in the following cases:
1) $X_1$ zero or one, and $X_2$ whatever.
2) $X_2$ zero or one, and $X_1$ whatever.
I calculate 1) and then I multiply by two because they are symmetric.
P[$X_1$ = 0 or $X_2$ = 0] = $e^{-1}*\frac{1^0}{0!}*e^{-1}\frac{1^1}{1!}$
My final answer is then $1 - 2e^{-2}$, however, it is wrong. Where did I make a mistake?
| The quickest solution is $P(Y\ge 1)=P(X_1,\,X_2\ge 1)=(1-e^{-1})^2$. Note that $P(Y\ge 1)=1-P(Y=0)$, because the distribution of $Y$ is discrete.
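The closed form $(1-e^{-1})^2 \approx 0.3996$ can be confirmed by brute force over a truncated joint support (the cutoff at 40 is arbitrary but far more than enough for $\lambda=1$):

```python
from math import exp
from itertools import product

# closed form: P(Y >= 1) = P(X1 >= 1) P(X2 >= 1) = (1 - e^{-1})^2
closed_form = (1 - exp(-1)) ** 2

def pois_pmf(k, lam=1.0):
    p = exp(-lam)
    for i in range(1, k + 1):
        p *= lam / i
    return p

# brute force over a truncated joint support
brute = sum(pois_pmf(a) * pois_pmf(b)
            for a, b in product(range(1, 40), repeat=2))
print(round(closed_form, 6), round(brute, 6))   # both 0.399576
```
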
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2824222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Symmetric Bilinear Form: index of bilinear = the number of positive eigenvalues Problem
Let $b:\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}$ be a symmetric bilinear form and $A$ the (transformation) matrix of $b$. Further, let $\mu$ be the number of positive eigenvalues of $A$ (counted with multiplicity). Prove that $\textrm{index}(b)=\mu$.
So this intuitively makes sense after I calculated the diagonal form of a bilinear map with congruence, but I'm struggling to write the proof rigorously. What I know is:
*
*$A$ is symmetric, therefore diagonalizable
*therefore, $\mu$ is the number of positive entries of the diagonalized matrix $\Lambda'$
*furthermore, since we are in $\mathbb{R}$ and $A$ is symmetric, there exists a basis of $V$ such that we can diagonalize $A$ in such a way that the entries of the diagonalized matrix $\Lambda$ are either $-1$, $0$ or $1$
*according to Sylvester's theorem the number of $1$ as entries of $\Lambda$ is the same as the number of positive entries of the (normally) diagonalized matrix $\Lambda'$
*which is the definition of $\textrm{index}(b)$, so we have the desired result $\mu = \textrm{index}(b)$
I have a feeling that I can argue the third point better. Or maybe I'm totally missing the point of this exercise. Thank you for your help.
| You should stress (and justify) that the eigenvalues of a symmetric matrix are always real.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2824308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are Kan extensions extensions in the traditional sense? Suppose $A$ is a full subcategory of $B$. Given $F:A\to Z$, is it true that the left Kan extension agrees with $F$ on $A$, ie. for all $a\in A$, $\mathrm{Lan}_i F(a)\simeq F(a)$, where $i:A\to B$ is the inclusion?
| You might be interested in these notes I wrote a few months ago. When I say "It turns out that this definition, albeit correct, is too general" I mean precisely what is contained in Kevin's answer!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2824432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove $\operatorname{Cov}(\overline{X_n}, X_j - \overline{X_n}) = 0$ for independent normally distributed random variables My homework states the following problem:
Let $X_1, \dots, X_n$ be independent $N(\mu, \sigma^2)$ distributed random variables, $\overline{X_n}$ be the sample mean and $S_n^2$ the empirical variance. Show that $\operatorname{Cov}(\overline{X_n}, X_j - \overline{X_n}) = 0$ and conclude that $\overline{X_n}$ and $S_n^2$ are independent.
My first approach:
$$ \begin{align*}
\operatorname{Cov}(\overline{X_n}, X_j - \overline{X_n})
&= \mathbb E(\overline{X_n} X_j - \overline{X_n}^2) - \mathbb E(\overline{X_n}) \mathbb E(X_j - \overline{X_n}) \\
&= \mathbb E(\overline{X_n} X_j) - \mathbb E(\overline{X_n}^2) - \mathbb E(\overline{X_n}) \mathbb E(X_j - \overline{X_n}) \\
\end{align*} $$
My second approach is
$$ \begin{align*}
\operatorname{Cov}(\overline{X_n}, X_j - \overline{X_n})
&= \mathbb E\left[(\overline{X_n} - \mathbb E(\overline{X_n})) (X_j - \overline{X_n} - \mathbb E(X_j - \overline{X_n}))\right]
\end{align*} $$
I always end up with expressions involving the sample mean and the population mean. My problem is: I know about the Law of Large Numbers bridging the gap between samples and populations, but here no series is given, but finite $n$. Which theorem can help me to solve this problem?
Update: A solution to the second question is given at jekyll.math.byuh.edu
| You have \begin{align}\text{Cov}(\bar X_n,X_j-\bar X_n)&=\text{Cov}(X_j,\bar X_n)-\text{Var}(\bar X_n)\\&=\text{Cov}\left(X_j,\frac{1}{n}\sum_{i=1}^nX_i\right)-\text{Var}(\bar X_n)\\&=\frac{1}{n}\sum_{i=1}^n\text{Cov}(X_i,X_j)-\text{Var}(\bar X_n)\\&=\frac{1}{n}\left(\text{Var}(X_j)+\sum_{i\ne j}\text{Cov}(X_i,X_j)\right)-\text{Var}(\bar X_n)\\&=\frac{1}{n}\text{Var}(X_j)-\text{Var}(\bar X_n)\\&=\frac{\sigma^2}{n}-\frac{\sigma^2}{n}=0\end{align}
Now you have to prove that $(\bar X_n,X_j-\bar X_n)$ is jointly normal for all $j=1,2,\cdots,n$ using MGF or otherwise. Once you have proved the joint normality, then the fact that $\bar X_n$ and $X_j-\bar X_n$ are uncorrelated would imply their independence. That is, $\bar X_n$ is independent of $X_1-\bar X_n,X_2-\bar X_n,\cdots,X_n-\bar X_n$, and hence also independent of $S_n^2=\frac{1}{n-1}\sum_{i=1}^n(X_i-\bar X_n)^2$.
To show the joint normality of $\bar X_n$ and $X_j-\bar X_n$, note that both $\bar X_n$ and $X_j-\bar X_n$ are linear combinations of independent normal variables $X_1,X_2,\cdots,X_n$ for all $j=1,2,\cdots,n$. As such, their joint distribution $(\bar X_n,X_j-\bar X_n)$ has to be bivariate normal.
For a formal proof, you may find the joint moment generating function (MGF) of, say, $(\bar X_n,X_1-\bar X_n)$ and show that the MGF is the MGF of a bivariate normal distribution.The details using MGF might get complicated, but to use the zero covariance proved in the first part to finally prove the independence of $(\bar X_n,S_n^2)$ you would have to show the joint normality somehow.
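A Monte Carlo sanity check in Python (the parameters $\mu=2$, $\sigma=3$, $n=5$, the seed, and the tolerance are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 1_000_000
X = rng.normal(2.0, 3.0, size=(reps, n))    # mu = 2, sigma = 3, arbitrary
xbar = X.mean(axis=1)
resid = X[:, 0] - xbar                       # X_1 - Xbar_n

print(bool(abs(np.cov(xbar, resid)[0, 1]) < 0.05))    # True: covariance near 0

# independence of Xbar_n and S_n^2 shows up as near-zero sample correlation too
S2 = X.var(axis=1, ddof=1)
print(bool(abs(np.corrcoef(xbar, S2)[0, 1]) < 0.05))  # True
```
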
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2824547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\sqrt n$ is irrational unless $n = m^2$ for some natural number $m$ (from Spivak Calculus, 3rd ed., §2, Ex 17b). I've looked up the solution to this problem in the Spivak Calculus Answers Book and found the following proof:
If $\sqrt n = a/b$, then $nb^2 = a^2$, so the factorization into primes of $nb^2$ and of
$a^2$ must be the same. Now every prime appears an even number of times in the
factorization of $a^2$, and of $b^2$, so the same must be true of the factorization of $n$. This implies that $n$ is a square.
I agree with all the steps up to the last one. Why is it that if some number $n$ can be factorized in a such a way that the composition would include even number of the same prime, then $n$ must be a square of some other number?
What if I take, say, $18$? It can be represented as $3\cdot 3\cdot 2$: it contains an even number of the prime $3$, and at the same time it is not the square of any natural number $m$.
| When it is said that every prime appears an even number of times in the factorization of $n$, this is counting each prime with multiplicity.
In your example $18 = 3 \cdot 3 \cdot 2$, $3$ is listed twice. So at the end, the number of primes counted with multiplicity in the factorization of $18$ is odd.
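The criterion can be verified computationally: a number is a perfect square exactly when every prime exponent in its factorization is even. A Python sketch (helper name mine; the range checked is arbitrary):

```python
from math import isqrt

def prime_multiplicities(n):
    # list of prime exponents in the factorization of n (trial division)
    exps = []
    d = 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            exps.append(e)
        d += 1
    if n > 1:
        exps.append(1)
    return exps

print(prime_multiplicities(18))   # [1, 2]: 2 appears once (odd), 3 appears twice

for n in range(2, 500):
    all_even = all(e % 2 == 0 for e in prime_multiplicities(n))
    assert all_even == (isqrt(n) ** 2 == n)   # even exponents <=> perfect square
```
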
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2824640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Spectrum of $C(K) \oplus C(K')$ Let $K$, $K'$ be compact Hausdorff spaces. How does the spectrum of the $C^*$-algebra given by the direct sum $C(K) \oplus C(K')$ look like?
| Note that the spectrum of a commutative C$^*$-algebra $A$ is the topological space of all nonzero $*$-homomorphisms $A\to\mathbb C$.
For the direct sum, the map $\alpha:C(K)\oplus C(K')\to C(K\sqcup K')$ given by $$\alpha(f,g)(k)=\begin{cases} f(k),&\ k\in K\\ \ \\ g(k),&\ k\in K'\end{cases}$$ is a $*$-isomorphism.
For the tensor product $C(K)\otimes C(K')$, consider the map $\beta:C(K)\otimes C(K')\to C(K\times K')$ induced by $$\beta(f\otimes g)(k,k')=f(k)g(k')$$ and extended by linearity. It is straightforward that $\beta$ is a $*$-homomorphism. That $\beta$ is onto follows from the Stone-Weierstrass Theorem: the $*$-algebra
$$
\text{span}\,\{(x,y)\longmapsto f(x)g(y):\ f\in C(K),\ g\in C(K')\}
$$
separates points, so it is dense; as the image of a $*$-homomorphism is closed, $\beta$ is onto. For injectivity, if $\beta(\sum_jf_j\otimes g_j)=0$, we have $\sum_jf_jg_j=0$. Let $\{h_1,\ldots,h_m\}\subset\{f_1,\ldots,f_n\}$ be a basis for the span of $f_1,\ldots,f_n$. Then there exist coefficients $c_{jr}$ with $f_j=\sum_rc_{jr}h_r$. We obtain
$$
0=\sum_jf_jg_j=\sum_j\sum_rc_{jr}h_rg_j=\sum_r\left(\sum_jc_{jr}g_j\right)\,h_r.
$$
So for any $y\in K'$ we obtain
$$
0=\sum_jf_jg_j(y)=\sum_j\sum_rc_{jr}h_rg_j(y)=\sum_r\left(\sum_jc_{jr}g_j(y)\right)\,h_r.
$$
The linear independence then gives $\sum_jc_{jr}g_j(y)=0$ for all $y$, so
$$
\sum_jc_{jr}g_j=0.
$$
Now
$$
\sum_jf_j\otimes g_j=\sum_j\left(\sum_rc_{jr}h_r\right)\otimes g_j
=\sum_r h_r\otimes \left(\sum_jc_{jr}g_j\right)=0.
$$
Thus $\beta$ is injective. So $\beta $ is a $*$-isomorphism that gives us $C(K)\otimes C(K')\simeq C(K\times K')$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2824801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Roots of a complex equation are outside the unit disc using the triangle inequality Problem: Use the triangle inequality to show that the roots of the complex equation $$z^4+z+4=0$$ has roots all outside the unit disc $|z|\le1$
My Thought Process: Clearly I need to use the triangle inequality and this would be a proof by contradiction. Assuming that $|z|\le1$, then the triangle inequality gives $|z^4+z|\le|z^4|+|z|\le2$ but I'm not sure where my contradiction would be.
| If $z^4+z+4=0$, then $z^4+z=-4$, so you'd need $\lvert z^4+z \rvert = 4 $. But you've shown that $\lvert z^4+z \rvert \leq 2$ for $\lvert z \rvert \leq 1$.
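A numerical confirmation with NumPy that all four roots lie outside the closed unit disc:

```python
import numpy as np

roots = np.roots([1, 0, 0, 1, 4])           # coefficients of z^4 + z + 4
print(np.abs(roots))                         # all four moduli exceed 1
print(bool(np.abs(roots).min() > 1))         # True
```
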
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2824963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
3rd Factorial Moment of X ~ geo(p) I'm working through Pitman's Probability (1993) problem 23b, page 221 (for knawledge not school).
Specifically, I am having trouble calculating
$E[(X)_3] = G^{(3)}(1)$, with $G^{(k)}(z) = \sum_{i=k}^{\infty}P(X=i)\,(i)_k\, z^{i-k}$, where $G(z)$ is the probability generating function of $X$ distributed geometric(p); i.e. number of trials for first success. Pitman claims $E[(X)_k] = G^{(k)}(1)$
Pitman and WolframAlpha give $E[(X)_3] = 6(q/p)^3$. My solution gives $6q^2/p^3$.
Here's my process:
$$ \begin{align} G^{(3)}(1) &= q^2p(3)_3 + q^3p(4)_3 + q^4p (5)_3 + \cdots \\
&= q^2p \cdot \Sigma_1,\quad\Sigma_1 = 3\cdot2\cdot 1 \ + 4\cdot 3\cdot 2q\ +5\cdot 4\cdot 3q^2 +\cdots\\
\end{align} $$
Solving for $\Sigma_1$, we do the subtraction trick:
$$\begin{align}\Sigma_1 - q\Sigma_1 &= 3\cdot2\cdot (1 + 3q + 6q^2 + 10q^3 + 15q^4+\cdots)\\
&=6\cdot \Sigma_2, \quad \Sigma_2 = 1 + 3q + 6q^2 + 10q^3 + 15q^4+\cdots
\end{align}
$$
Solving for $\Sigma_2$, we do the old subtraction trick again:
$$\begin{align}\Sigma_2 - q\Sigma_2 &= 1 + 2q+ 3q^2 +4q^3+\cdots = \Sigma_3\\
\end{align}
$$
Solving for $\Sigma_3$, we do grandpa's subtraction trick again to get: $$\Sigma_3 = 1/(1-q)^2 \Rightarrow \Sigma_2 = 1/(1-q)^3 \Rightarrow \Sigma_1 = 6/(1-q)^4
$$
Hence $E[(X)_3] = q^2 p \cdot \frac{6}{(1-q)^4} = 6q^2/p^3$.
Did my subtractions fail? Are my assumptions wrong? What happened?
Edit: Pitman mentions in problem 23a) that $X$ is distributed $geometric(p)$ on ${0,1,2,...}$, which he explains in problem 6 is equivalent to the standard geometric distribution. I did not do problem 6 and used the wrong geometric distribution. Sad. The clever posters point out the difference more explicitly.
| In general, if $P(z)$ is the generating function of a random variable $X$ with probability mass function $\{p_k\}$, then since $0\leqslant p_k\leqslant 1$ and $\sum_{k=1}^\infty p_k=1$, it follows from dominated convergence that
$$
\frac{\mathsf d}{\mathsf dz} P(z) = \frac{\mathsf d}{\mathsf dz} \sum_{k=1}^\infty p_kz^k = \sum_{k=1}^\infty kp_kz^{k-1},\quad 0<z<1.
$$
By induction we see that
$$
\frac{\mathsf d^n}{\mathsf dz^n} P(z) = \sum_{k=n}^\infty (k)_n p_k z^{k-n},\quad n\geqslant1
$$
where $(k)_n = \frac{k!}{(k-n)!}$ denotes the falling factorial. Monotone convergence then yields
$$
\lim_{z\uparrow 1}\frac{\mathsf d^n}{\mathsf dz^n} P(z) = \sum_{k=0}^\infty (k)_np_k = \mathbb E[(X)_n].
$$
Here $$\mathbb E[z^X] := G(z) = \sum_{k=1}^\infty p(1-p)^{k-1}z^k $$
so
\begin{align}
\mathbb E[(X)_3] &= \lim_{z\uparrow1} G'''(z)\\
&= \lim_{z\uparrow1} p(1-p)^2\sum_{k=3}^\infty k(k-1)(k-2)((1-p)z)^{k-3}\\
&= p(1-p)^2\sum_{k=0}^\infty (k+1)(k+2)(k+3)(1-p)^k\\
&= p(1-p)^2\left[\frac{\mathsf d^3}{\mathsf dq^3}\left(\frac 1{1-q} \right)\right]_{q=1-p}\\
&= p(1-p)^2 \cdot \frac 6{p^4}\\
&= \frac{6(1-p)^2}{p^3}.
\end{align}
Your solution is indeed correct.
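Both conventions can be checked numerically by truncating the sums (the choice $p=0.3$ and the cutoff at 2000 terms are arbitrary):

```python
p = 0.3            # arbitrary success probability
q = 1 - p

def falling3(k):
    return k * (k - 1) * (k - 2)

# "number of trials" convention on {1, 2, ...}: P(X = k) = p q^{k-1}
m_trials = sum(falling3(k) * p * q ** (k - 1) for k in range(1, 2000))
# Pitman's convention on {0, 1, 2, ...}: P(X = k) = p q^k
m_failures = sum(falling3(k) * p * q ** k for k in range(0, 2000))

print(abs(m_trials - 6 * q ** 2 / p ** 3) < 1e-9)    # True: 6q^2/p^3
print(abs(m_failures - 6 * (q / p) ** 3) < 1e-9)     # True: 6(q/p)^3
```

Both answers are right; they belong to the two different supports.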
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2825061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Help with finding Limit What is the limit of $$\lim_{n \to \infty} \left(\frac{n!}{n^n}\right)^{\frac{3n^3+4}{4n^4-1}}$$
Can anyone help? I am not sure how to solve this.
| Hint: use the famous Stirling inequality:
$$ \sqrt{2\pi n}\left(\dfrac{n}{e}\right)^n \le n! \le \sqrt{2\pi n}\left(\dfrac{n}{e}\right)^n\cdot e^{\frac{1}{12n}}$$
and use the Squeeze lemma to find the limit. Can you manage to take it from here?
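Working through the hint, $\log(n!/n^n)\sim -n$ while the exponent is $\sim 3/(4n)$, so the limit is $e^{-3/4}$; here is a numerical check using $\log\Gamma$ to evaluate $\log n!$ (the evaluation point $n=10^6$ is arbitrary):

```python
from math import exp, lgamma, log

def log_term(n):
    # log of (n!/n^n)^{(3n^3+4)/(4n^4-1)}, with log n! = lgamma(n+1)
    return (3 * n ** 3 + 4) / (4 * n ** 4 - 1) * (lgamma(n + 1) - n * log(n))

print(exp(log_term(10 ** 6)))   # ~0.4724, close to e^{-3/4}
```
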
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2825213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Fundamental Integral theorem for functionals Given a functional $L: X\to \mathbb{R}$, it is possible, for two $x,x_h \in X$, to write
$$L(x)-L(x_h)=\int_0^1 L'(x_h+s(x-x_h))(x-x_h) ds$$
where $L'(\cdot)(v)$ is the directional derivative in the direction $v$. Why does this have to be tested with $x-x_h$?
Greetings.
| Let
$\gamma(s) = x_h + (x - x_h)s, \; s \in [0, 1]; \tag 1$
we note that
$\gamma(0) = x_h, \; \gamma(1) = x_h + (x - x_h) = x; \tag 2$
then
$L(x) - L(x_h) = L(\gamma(1)) - L(\gamma(0)) = \displaystyle \int_0^1 \dfrac{dL(\gamma(s))}{ds} \; ds; \tag 3$
now by the chain rule,
$\dfrac{dL(\gamma(s))}{ds} = L'(\gamma(s)) \dfrac{d\gamma(s)}{ds}, \tag 4$
and
$\dfrac{d\gamma(s)}{ds} = \dfrac{d(x_h + s(x - x_h))}{ds} = x - x_h; \tag 5$
therefore
$\displaystyle \int_0^1 \dfrac{dL(\gamma(s))}{ds} \; ds = \int_0^1 L'(\gamma(s)) \dfrac{d\gamma(s)}{ds} \; ds = \int_0^1 L'(\gamma(s)) (x - x_h) \; ds; \tag 6$
thus (3) becomes
$L(x) - L(x_h) = \displaystyle \int_0^1 L'(\gamma(s)) (x - x_h) \; ds = \int_0^1 L'(x_h + s(x - x_h)) (x - x_h) \; ds. \tag 7$
This derivation of the formula given in the question shows that the directional derivative of $L(\cdot)$ in the direction $\gamma'(s) = d\gamma(s)/ds = x - x_h$, namely $L'(\gamma(s))(x - x_h)$, naturally introduces the "test" against $x - x_h$: the factor $(x - x_h)$ in the integrand arises from $\gamma'(s)$ via the chain rule, since $\gamma'(s) = x - x_h$ for all $s$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2825329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Chain rule for Hessian. How to compute $D^2 f^\alpha$ How does the chain rule generalize to the Hessian matrix? In particular, how can we compute $$D^2 f^\alpha,$$
where $f:\mathbb{R}^N \to \mathbb{R}$, $N>1$, and $\alpha >0$?
| There is nothing tricky about this--you can just use the ordinary single-variable chain rule to compute each partial derivative. If $\partial_i$ denotes the derivative with respect to the $i$th variable, then $$\partial_i(f^\alpha)=\alpha f^{\alpha-1}\partial_i(f)$$ (this is literally nothing but the fact that for a function $f$ of one variable, the derivative of $f^\alpha$ is $\alpha f^{\alpha-1}f'$). Then to get a second partial derivative, you just differentiate again the same way (using the product rule and chain rule):
$$\partial_j(\partial_i(f^\alpha))=\partial_j(\alpha f^{\alpha-1}\partial_i(f))=\alpha(\alpha-1)f^{\alpha-2}\partial_j(f)\partial_i(f)+\alpha f^{\alpha-1}\partial_j(\partial_i(f)).$$
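The displayed formula is easy to verify symbolically; a SymPy sketch with a sample $f$ (the particular polynomial is my arbitrary choice):

```python
import sympy as sp

x1, x2, a = sp.symbols('x1 x2 alpha')
f = x1 ** 2 + x1 * x2 + 1         # an arbitrary sample f
g = f ** a

lhs = sp.diff(g, x2, x1)           # the mixed second partial of f^alpha
rhs = (a * (a - 1) * f ** (a - 2) * sp.diff(f, x2) * sp.diff(f, x1)
       + a * f ** (a - 1) * sp.diff(f, x2, x1))
print(sp.simplify(lhs - rhs))      # 0
```
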
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2825422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do you change the order of integration without sketching? Specifically, for a double integral $$\int_a^b \int_{g_1(x)}^{g_2(x)} f(x,y) \, dy \, dx$$ how would you change the order of integration without having to sketch it out? I came across this while researching which talks about the use of the Heaviside function, however I am unsure how to apply this process to all double integrals.
Thanks!
| I consider it similar to reversing the order of summation in a double sum. I'm going to try to think this through logically.
In this case, $\int_a^b \int_{g_1(x)}^{g_2(x)} f(x,y) \, dy \, dx$, we have $g_1(x) \le y \le g_2(x)$. Therefore, assuming that $g_1$ and $g_2$ are strictly monotonically increasing (and therefore have inverses) and also satisfy $g_1(x) \le g_2(x)$, we get $x \le g_1^{(-1)}(y)$ and $x \ge g_2^{(-1)}(y)$, so the new inner integral will go from $g_2^{(-1)}(y)$ to $g_1^{(-1)}(y)$.
Since $a \le x \le b$, we have $y \le g_2(b)$ and $y \ge g_1(a)$, so the outer integral would go from $g_1(a)$ to $g_2(b)$.
So the integral would be $\int_{g_1(a)}^{g_2(b)} \int_{g_2^{(-1)}(y)}^{g_1^{(-1)}(y)} f(x, y) \,dx\,dy$.
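As a toy check of this recipe, take $g_1(x)=x^2$, $g_2(x)=x$ on $[0,1]$ with $f\equiv1$, so both orders should give the area $1/6$ of the region between the curves (midpoint-rule sums in plain Python; the grid size is arbitrary):

```python
# region: 0 <= x <= 1, x^2 <= y <= x, i.e. g1(x) = x^2, g2(x) = x
N = 2000
h = 1.0 / N

# dy dx order: the inner integral of 1 dy equals g2(x) - g1(x)
area_dydx = sum((x - x * x) * h for x in ((i + 0.5) * h for i in range(N)))
# dx dy order via the recipe: inner from g2^{-1}(y) = y to g1^{-1}(y) = sqrt(y)
area_dxdy = sum((y ** 0.5 - y) * h for y in ((i + 0.5) * h for i in range(N)))

print(round(area_dydx, 4), round(area_dxdy, 4))   # both 0.1667, i.e. 1/6
```

For this region the recipe applies cleanly because both bounding curves are increasing and meet at the endpoints; in general the new limits may also need clamping by $a$ and $b$.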
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2825515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to explain irrational numbers to laymen? I am trying to describe how irrational numbers, which are all modeled as a series of fractions, can themselves not be fractions, and are instead part of a unique group of "decimal numbers" outside of fractions, called the irrational numbers. I am confused atm.
From Wikipedia, some example irrational numbers include:
*
*$\sqrt 2$
*the golden ratio
*The sqrt of all natural numbers which are not perfect squares
*Logarithms
Then they say:
Almost all irrational numbers are transcendental and all real transcendental numbers are irrational. Examples include $e^\pi$.
Rational numbers are fractions, which are included in the set of real numbers. Irrational numbers, however, are decimals and include things that "can't be represented as fractions" it seems.
But where I'm confused is, sqrt 2 can be represented by a series of fractions:
$${\displaystyle {\sqrt {2}}=\prod _{k=0}^{\infty }{\frac {(4k+2)^{2}}{(4k+1)(4k+3)}}=\left({\frac {2\cdot 2}{1\cdot 3}}\right)\left({\frac {6\cdot 6}{5\cdot 7}}\right)\left({\frac {10\cdot 10}{9\cdot 11}}\right)\left({\frac {14\cdot 14}{13\cdot 15}}\right)\cdots }$$
Similarly, $\pi$ can be represented by a series of fractions:
$${\displaystyle 1\,-\,{\frac {1}{3}}\,+\,{\frac {1}{5}}\,-\,{\frac {1}{7}}\,+\,{\frac {1}{9}}\,-\,\cdots \,=\,{\frac {\pi }{4}}.}$$
Finally, the natural logarithm can be written as a series of fractions:
$${\displaystyle \ln(1+x)=\sum _{k=1}^{\infty }{\frac {(-1)^{k-1}}{k}}x^{k}=x-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}-\cdots }$$
It has been a while since I have added/divided/subtracted/multiplied fractions, but from what I remember doing any of those operations results in a new fraction. So I'm wondering what I'm missing when it comes to understanding irrational numbers. If irrational numbers can represent non-fraction numbers, yet they are themselves represented by a series of fractions, it seems the result of the series would itself be a fraction, and so the irrational numbers are all rational numbers. Looking for an understanding of how to explain the difference between rational and irrational numbers. I tried saying "irrational are decimal numbers you can't represent with a fraction", but then when getting into the definition of a rational numbers (fraction numbers), I was unable to explain how if all irrational numbers are themselves definable as a series of fractions, how they themselves aren't representable as fractions. Thank you for your help.
| Representations that use ... to denote an infinite sequence often trick the mind into thinking the infinite sequence will behave like a finite one. They don't.
The fraction series are a neat way to give you an idea of the value of an irrational. If you cut the sequence somewhere and compute the result of that finite sum, you will get an approximation of the irrational number you are looking for. If you add the next term of the sequence, you will obtain a (usually) better approximation. This process can be repeated as many times as you like and you will get arbitrarily close to the value of the irrational. The limit of the sum to infinity is equal in the strictest sense to the irrational number. However, whether an infinite sum is equal to its limit is more a philosophical question than a mathematical one. This is often defined to be true for convenience.
In general the limit of a sequence isn't always defined because the sequence might not converge. Maybe this makes it easier to accept that sequences might converge towards things that have properties absent of the elements of the sequence.
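The $\sqrt2$ product from the question illustrates this concretely: every partial product is an exact fraction, yet the partial products only approach $\sqrt2$ from below without ever reaching it (Python sketch; the cutoff of 500 factors is arbitrary):

```python
from fractions import Fraction

p = Fraction(1)
for k in range(500):    # 500 factors of the product formula for sqrt(2)
    p *= Fraction((4 * k + 2) ** 2, (4 * k + 1) * (4 * k + 3))

# every partial product is an exact fraction p/q ...
print(p.denominator > 1)        # True
# ... whose square creeps up toward 2 without ever reaching it
print(float(p) ** 2)            # ~1.9995, and always strictly below 2
```

No finite truncation is ever irrational; irrationality is a property of the limit alone.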
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2825625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 8,
"answer_id": 5
} |