Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Solve differential equation: $y' = \frac{y^2}{x^3} + 2\frac{y}{x} - x$ I need to solve this differential equation: $y' = \frac{y^2}{x^3} + 2\frac{y}{x} - x$.
My attempt
I found that $y = x^2$ is a solution. Then I tried to put $y = x^2f(x)$, and solved this way:
$$2xf(x) + x^2f'(x) = xf^2(x) + 2xf(x) - x \implies x^2f'(x) = xf^2(x) - x \implies$$
$$xf'(x) = f^2(x) - 1 \implies \frac{df(x)}{f^2(x) - 1} = \frac{dx}{x} \implies$$
$$\frac{1}{2}\ln\left|\frac{f(x) - 1}{f(x) + 1}\right| = \ln|x| + C_* \implies \frac{f(x) - 1}{f(x) + 1} = Cx^2 \implies$$
$$f(x) = \frac{1 + Cx^2}{1 - Cx^2}$$
And we lost a solution $f(x) = -1$. So finally we have
$$y = x^2\frac{1 + Cx^2}{1 - Cx^2}, y = -x^2$$
Now I have 3 questions:
$\quad 1)$ Is my solution correct? I'm not sure that all solutions were found.
$\quad 2)$ When can we use particular solution to find all other solutions? I mean doing something like $y = g(x)h(x)$, where $g(x)$ is a particular solution.
$\quad 3)$ Is there an easier method to solve this equation?
| For question 3
$$y' = \frac{y^2}{x^3} + 2\frac{y}{x} - x$$
$$y'- 2\frac{y}{x} = \frac{y^2}{x^3} - x$$
$$\left ( \dfrac y {x^2} \right )'= \frac{y^2}{x^5}-\frac{1}{x} $$
It's separable.
$$\left ( \dfrac y {x^2} \right )'=\dfrac 1 x \left ( \frac{y^2}{x^4}-1 \right) $$
$$\dfrac {du}{u^2-1}=\dfrac {dx}{x}$$
Where $ u=\dfrac y {x^2}$
Integrate.
I got this
$$y(x)=x^2\dfrac {1+Kx^2}{1-Kx^2}$$
For $K=0$ you recover the particular solution found by inspection: $y=x^2$.
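As a quick independent check (my own addition, not part of the original answer), here is a short SymPy sketch confirming that the one-parameter family above, and the lost solution $y=-x^2$, both satisfy the original equation:

```python
# Verify y = x^2 (1 + K x^2)/(1 - K x^2) and y = -x^2 solve y' = y^2/x^3 + 2y/x - x.
import sympy as sp

x, K = sp.symbols('x K')
y = x**2 * (1 + K*x**2) / (1 - K*x**2)
residual = sp.diff(y, x) - (y**2 / x**3 + 2*y/x - x)
print(sp.simplify(residual))  # expected: 0

y2 = -x**2
print(sp.simplify(sp.diff(y2, x) - (y2**2 / x**3 + 2*y2/x - x)))  # expected: 0
```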
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3821061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Prove that $\inf\limits_{z \in S^{\perp}} \| x - z \| = \sup \left \{ \lvert \langle x , y \rangle \rvert\ \big |\ y \in S, \|y \| \leq 1 \right \}.$
Let $H$ be a Hilbert space and $S$ be a subspace of $H.$ Let $x \in H$ and $\left \|x \right \| = 1.$ Prove that $$\inf\limits_{z \in S^{\perp}} \left \|x - z \right \| = \sup \left \{\left \lvert \left \langle x , y \right \rangle \right \rvert\ \big |\ y \in S, \left \|y \right \| \leq 1 \right \}.$$
My attempt $:$ Let $L = \inf\limits_{z \in S^{\perp}} \left \|x - z \right \|$ and $M = \sup \left \{\left \lvert \left \langle x , y \right \rangle \right \rvert\ \big |\ y \in S, \left \|y \right \| \leq 1 \right \}.$ If $x \in S^{\perp}$ then clearly $L = 0$ and $M = 0$ (because if $x \in S^{\perp}$ then for any $y \in S$ we have $\left \langle x,y \right \rangle = 0$). Also if $x \in S$ then we have \begin{align*} L & = \inf\limits_{z \in S^{\perp}} \sqrt {\|x\|^2 + \|z\|^2} \\ & = \inf\limits_{z \in S^{\perp}} \sqrt {1 + \|z\|^2} \\ & = \sqrt {1 + \inf\limits_{z \in S^{\perp}} \|z\|^2} \\ & = 1 \end{align*} and for all $y \in S$ with $\|y\| \leq 1$ we have by Cauchy Schwarz's inequality $$\left \lvert \langle x,y \rangle \right \rvert \leq \|x\| \|y\| \leq 1.$$ This shows that $M \leq 1.$ Also since $x \in S$ with $\|x\| = 1$ we have by taking $y = x$ $$\langle x,x \rangle = \|x\|^2 = 1.$$ So $M = 1.$ Therefore $L = M$ holds if $x \in S \cup S^{\perp}.$ Now $H = S \oplus S^{\perp}.$ So every element of $H$ can be written as $x = u + v,$ where $u \in S$ and $v \in S^{\perp}.$ For this case \begin{align*} \|(u+v) - z \|^2 & = \|u+v\|^2 + \|z\|^2 - \langle v , z \rangle - \langle z , v \rangle \\ & = \|u+v\|^2 + \|z\|^2 - 2 \mathfrak {R} \left ( \langle v,z \rangle \right ) \\ & \geq \|u+v\|^2 + \|z\|^2 - 2 \left \lvert \langle v , z \rangle \right \rvert \\ & \geq \|u+v\|^2 + \|z\|^2 - 2\|v\| \|z\| \\ & = \left (\|u+v\|^2 - \|v\|^2 \right ) + \left (\|z\| - \|v\| \right )^2 \\ & \geq \|u+v\|^2 - \|v\|^2 \end{align*} So by taking $z = v$ we have $$L = \sqrt {\|u+v\|^2 - \|v\|^2} = \sqrt {\|u\|^2 + 2 \mathfrak {R} \langle u,v \rangle} = \|u\|\ \ (\text {since}\ u \perp v).$$ Now for any $y \in S$ with $\|y\| \leq 1$ we have \begin{align*} \left \lvert \langle u + v , y \rangle \right \rvert & = \left \lvert \langle u , y \rangle + \langle v , y \rangle \right \rvert \\ & = \left \lvert \langle u,y \rangle \right \rvert\ \ \ \ \ \ \ \ (\text {Since}\ v \perp y ) \\ & \leq \|u\| \|y\| \\ & \leq \|u\| \end{align*} Now if $u = 0$ then $x = v \in S^{\perp}$ in which case we have already proved that $L = M.$ So WLOG we may assume that $u \neq 0.$ Then by taking $y = \dfrac {u} {\|u\|}$ we have $M = \|u\|.$ So in this case also we have $L = M,$ as required.
QED
Does my proof hold good? Please check it.
Thanks in advance.
EDIT $:$ I don't think that what I did is correct. Because Hilbert space can't have such decomposition unless $S$ was given to be closed.
| It's not clear why you first take the case $x\in S$, as it is fairly particular.
When $x\in S^\perp$, one gets directly that $L=M=0$. So we may assume $x\not\in S^\perp$. Also, neither $L$ nor $M$ change if we replace $S$ with its closure, so we may assume that $S$ is closed.
What you have is, since $H=S\oplus S^\perp$, that $x=x_S+x_{S^\perp}$. As $S^\perp$ is a subspace, for $z\in S^\perp$ we have $x-z=x_S-(z-x_{S^\perp})$. Then
$$
L=\inf\{\|x_S-z\|:\ z\in S^\perp\}=\|x_S\|,
$$
since $\|x_S-z\|^2=\|x_S\|^2+\|z\|^2$ for any $z\in S^\perp$. Now, for any $y\in S$ with $\|y\|=1$, we have
$$
|\langle x,y\rangle|=|\langle x_S,y\rangle|\leq\|x_S\|\,\|y\|=L,
$$
so $M\leq L$. And
$$
M\geq\Bigg|\bigg\langle x,\frac{x_S}{\|x_S\|}\bigg\rangle\Bigg|=\|x_S\|=L.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3821268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Show that the open unit disk in $\mathbb{R}^2$ is homeomorphic to $\mathbb{R}^2$ I have already perused this answer to a similar question but I still am unsure how exactly they went about proving it is a bijection. Would someone be able to elaborate? Also, how does one come up with the function $\frac{x}{1+||x||}$ to begin with? Thanks.
| It’s actually easier to see how to come up with the inverse of that bijection. The obvious approach to finding a bijection from the open disk to $\Bbb R^2$ is simply to expand the disk so that each radial line expands to an infinite ray in the same direction. This clearly requires that the closer we get to the unit circle, the more we have to expand. Since the distance from $x$ to the unit circle is $1-\|x\|$, the most straightforward way to do this is to multiply each vector $x$ by $\frac1{1-\|x\|}$: as $\|x\|$ increases from $0$ towards $1$, $\frac1{1-\|x\|}$ starts at $1$ and increases without bound. This gives us the map
$$h(x)=\frac{x}{1-\|x\|}\,.$$
Verifying that it’s a bijection is basically just a matter of showing that it has an inverse defined on $\Bbb R^2$. Suppose that $y\in\Bbb R^2$, and we want to find an $x$ such that $y=h(x)=\frac{x}{1-\|x\|}$. Clearly $x$ and $y$ are scalar multiples of each other, so there is an $\alpha\in\Bbb R$ such that $x=\alpha y$. Then $y=\frac{\alpha y}{1-\|\alpha y\|}$, so $\alpha y=\big(1-\|\alpha y\|\big)y$, and $\alpha=1-\|\alpha y\|$. And $\alpha$ is clearly positive, so $\alpha=1-\alpha\|y\|$, $\alpha\big(1+\|y\|\big)=1$, and $\alpha=\frac1{1+\|y\|}$. This is defined for every $y\in\Bbb R^2$, so $h$ is a bijection whose inverse is the map
$$f(x)=\frac{x}{1+\|x\|}\,.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3821367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Two diagonals of a regular heptagon are chosen. What is the probability that they intersect inside the heptagon?
Two diagonals of a regular heptagon are chosen. What is the probability that they intersect inside the heptagon?
I know there are 14 diagonals and the longer diagonals have 6 total intersections each and the shorter diagonals have 4 intersections each but now I'm stuck.
| Consider the first of the two diagonals chosen.
If it is a “short” diagonal, a quick sketch shows that $4$ of the other $13$ diagonals intersect it within the heptagon, and $9$ of the other diagonals do not. So for a short diagonal, there is a $4\over13$ chance that the second diagonal intersects it within the heptagon.
If it is a “long” diagonal, $6$ of the other $13$ diagonals intersect it within the heptagon, for a $6\over13$ chance of inside intersection.
There are the same number of short and long diagonals, so the probability that the second diagonal intersects the first one within the heptagon is the average of the probabilities for the short and long diagonals, which is $5\over13$.
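For anyone who wants to double-check the $5\over13$ numerically, here is a small brute-force sketch (my own addition): it labels the heptagon's vertices $0,\dots,6$ in cyclic order, lists the $14$ diagonals, and counts how many of the $91$ unordered pairs cross inside.

```python
from itertools import combinations

n = 7
vertices = range(n)
# diagonals = vertex pairs that are not edges of the heptagon
diagonals = [frozenset(p) for p in combinations(vertices, 2)
             if (p[1] - p[0]) % n not in (1, n - 1)]

def cross(d1, d2):
    # two chords of a convex polygon intersect strictly inside
    # iff their endpoints are distinct and alternate around the circle
    if d1 & d2:
        return False
    a, b = sorted(d1)
    c, d = sorted(d2)
    return (a < c < b < d) or (c < a < d < b)

pairs = list(combinations(diagonals, 2))
hits = sum(cross(d1, d2) for d1, d2 in pairs)
print(len(diagonals), hits, len(pairs))   # 14 diagonals, 35 crossing pairs, 91 pairs
print(hits / len(pairs))                  # 0.3846..., i.e. 5/13
```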
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3821521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Simple Calculus AB MCQ Is my answer correct?
Let h be a differentiable function with a tangent line at x = π. The equation of the tangent line is y = −0.1x + 0.6. What must be true about h and h′ at x = π?
I. h(0) = 0.6
II. h(π) = 0.2858
III. h′(π) = 0.5
I and II
II and III
II only(My answer)
III only
| If $f \colon \mathbb{R} \to \mathbb{R}$ is a differentiable function, we say that $\hat f_x(y) = f(x) + f'(x)(y-x)$ is the tangent line to $f$ at $x$, evaluated at $y$.
In your question, you have $\hat h_\pi(x) = -0.1 x + 0.6 = (0.6 - 0.1\pi) - 0.1(x - \pi)$, and from this we see immediately that $h(\pi) = 0.6 - 0.1 \pi$ and that $h'(\pi) = -0.1$.
From this we see:
*
*II is correct: indeed, $h(\pi) = 0.6 - 0.1\pi$.
*III is false: instead, $h'(\pi) = -0.1$.
It requires a little bit more work to show that I is false: consider
$$
h(x) = (x - \pi)^2 - 0.1x + 0.6$$
This function has $h(\pi) = 0.6-0.1\pi$ and $h'(\pi) = -0.1$, so it has the tangent line as given. However, by direct computation one checks that $h(0) = \pi^2 + 0.6 \neq 0.6$.
The intuition for I is that the tangent line only captures first-order information about $h$. So it only determines the slope and value at the point where the approximation is made (in this case $\pi$), and one cannot conclude anything more in general.
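A quick machine check of the counterexample above (my own addition; the function is re-defined inside the snippet, purely for verification):

```python
# Confirm the counterexample has the prescribed tangent at x = pi but h(0) != 0.6.
import sympy as sp

x = sp.symbols('x')
h = (x - sp.pi)**2 - sp.Rational(1, 10)*x + sp.Rational(6, 10)

print(sp.simplify(h.subs(x, sp.pi) - (sp.Rational(6, 10) - sp.pi/10)))  # 0, so II holds
print(sp.diff(h, x).subs(x, sp.pi))   # -1/10, the slope of the given tangent line
print(h.subs(x, 0))                   # pi**2 + 3/5, so statement I fails
```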
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3821615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Central Limit Theorem and Strong Law of Large Numbers. Proof that converges in distribution to $N(0, e^2)$ I have this exercise of my homework.
Let $\{ X_n: n \geq 1\}$ be independent and identically distributed random variables with uniform distribution $U(0,1)$ and $Y_n=(\prod_{i=1}^n X_i)^{-1/n} $.
Show that $\sqrt{n} (Y_n-e)$ converges in distribution to $N(0, e^2)$
I only know that $Y_n$ converges almost surely to $e$.
I do not know how to do the proof using the central limit theorem.
I know the law of large numbers.
I think that you can use delta method.
| One direction that sounds promising is to note that
$$
\ln Y_n
= \ln \left( \left(\prod_{k=1}^n X_k\right)^{-1/n}\right)
= \frac{-1}{n} \sum_{k=1}^n \ln X_k
= \frac{1}{n} \sum_{k=1}^n L_k,
$$
where $L_k = -\ln X_k$ and now you can apply the CLT directly.
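A small Monte Carlo sketch (my own addition, not part of the answer) illustrating the claimed limit: the sample standard deviation of $\sqrt{n}\,(Y_n-e)$ should be close to $e\approx 2.72$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 5000
X = rng.uniform(size=(reps, n))
L = -np.log(X)                     # L_k = -ln(X_k) are i.i.d. Exp(1)
Y = np.exp(L.mean(axis=1))         # Y_n = (prod X_i)^(-1/n)
Z = np.sqrt(n) * (Y - np.e)
print(Z.mean(), Z.std())           # roughly 0 and e ~ 2.72, as N(0, e^2) predicts
```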
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3821771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find the roots of $x^3 - 6x = 4$ This exercise is from the book Complex Analysis by Joseph Bak, and it says:
"Find the three roots of $x^{3}-6x=4$ by finding the three real-valued possibilities for $\sqrt[3]{2+2i}+\sqrt[3]{2-2i}$".
I know that these numbers were found by Cardan's method, but I don't understand why they give these numbers, because I found three real roots by common methods.
P.S.: the three real roots are $-2$, $1-\sqrt{3}$ and $1+\sqrt{3}$.
| Use $2+2i=(-1+i)^3$ and $2-2i=(-1-i)^3.$
Now, let $\omega=-\frac{1}{2}+\frac{\sqrt3}{2}i.$
Thus, $\omega^2+\omega+1=0$ and $$(-1+i)+(-1-i),$$ $$(-1+i)\omega+(-1-i)\omega^2$$ and
$$(-1+i)\omega^2+(-1-i)\omega$$ give our real roots.
The first line I got by the following way:
$$2+2i=-2i(-1+i)=(-1+i)^2(-1+i)=(-1+i)^3$$ and
$$2-2i=2i(-1-i)=(-1-i)^2(-1-i)=(-1-i)^3.$$
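A short numerical check (my own addition) that the three combinations above really are the three real roots listed in the question:

```python
# w is a primitive cube root of unity; u, v are cube roots of 2+2i and 2-2i.
w = complex(-0.5, 3**0.5 / 2)
u, v = complex(-1, 1), complex(-1, -1)
roots = [u + v, u*w + v*w**2, u*w**2 + v*w]
print([round(r.real, 6) for r in roots])            # [-2.0, -0.732051, 2.732051]
print([abs(r**3 - 6*r - 4) < 1e-9 for r in roots])  # [True, True, True]
```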
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3821909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Two properties on $f^{-1}(A)$ For any function $f : X \rightarrow Y$ and any subset A of Y, define
$$f^{-1}(A) = \{x \in X: f(x) \in A\}$$ Let $A^c$ denote the complement of A in Y. For subsets $A_1,A_2$ of Y, consider the following statements:
(i) $ f^{-1} (A^c_1 \bigcap A^c_2) = (f^{-1}(A_1))^c \bigcup (f^{-1}(A_2))^c $
(ii) If $ f^{-1} (A_1) = f^{-1} (A_2)$ then $A_1 = A_2 $
Then which of the above statements are always true?
My effort: The first statement can not be true unless $A_1 = A_2$. So that's not always true. For the 2nd statement, let x = $ f^{-1}(A_1) = f^{-1}(A_2)$, then $f(x) = A_1 = A_2$. Since f is a function, f can not have duplicate values of f(x) for the same value of x. That's what I read in books. E.g. the function $y^2 = x$, is actually two functions, $y=+\sqrt x$ and $y=-\sqrt x$, since, for same x, there are two values of y. So, my answer is, (ii) is always true, (i) is not always true.
But the answer given is, neither (i) nor (ii) is always true. Any pointers on where my understanding is incorrect, is highly appreciated.
| You're getting confused with the notation $f^{-1}$... which is confusing!
Unfortunately, $f^{-1}$ is used to denote two different things:
*
*If $f$ is a bijection between $X$ and $Y$, then for $y \in Y$, $f^{-1}(y)$ is the image of $y$ under the inverse map $f^{-1}$.
*$f^{-1}$ is also used to denote the inverse image of a subset $B \subseteq Y$. $f^{-1}(B) = \{x \in X: f(x) \in B\}$. To avoid this confusion, it may be interesting to denote $\{x \in X: f(x) \in B\}$ by $f^{-1}[B]$ instead of $f^{-1}(B)$.
Bottom line: your main error is when you wrote $x=f^{-1}(A_1) = f^{-1}(A_2)$. $f^{-1}(A_1),f^{-1}(A_2)$ are subsets of $X$, not elements of $X$. And as pointed out in other answers, $f(f^{-1}(A))$ may be different from $A$, though you always have $f(f^{-1}(A)) \subseteq A$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3822042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Swap two numbers among the first n positive integers such that the first m numbers have the same sum as the last n-m numbers The problem provides a natural number $n$. We have to consider the tuple $(1,2,\dots,n)$. Now, any $2$ numbers in that tuple can be swapped if and only if $\exists m, 1\leq m<n$, such that the prefix sum (sum of all elements $a_i,i\le m$) would be equal to the suffix sum (sum of all elements $a_i,i\gt m$) after the swap.
How many possible swaps are there, for a given $n$?
Let me elaborate this with an example. Let $n = 7$, which gives the tuple $(1, 2, 3, 4, 5, 6, 7)$.
In this tuple, $1$ and $5$ can be swapped because for $m = 4$, prefix sum $= 5+2+3+4 = 14$ and suffix sum $= 1+6+7 = 14$ are the same.
Similarly $2$ and $6$ can be swapped, because for $m = 4$, prefix sum $= 1+6+3+4 = 14$ and suffix sum $= 5+2+7 = 14$ are the same.
Same logic applies for the swapping of $3$ and $7$. So far, we found at least three swaps for $n=7$. How many are there in total?
I need to calculate the number of such possible swaps for large $n$ values. There must be some sort of formula. I have tried a number of ways to approach this but failed.
| The prefix sum of $m$ is $\frac{m(m-1)}{2}$ and the suffix sum is $\frac{n(n+1)}{2}-\frac{m(m+1)}{2}$. A successful swap of $a<m$ with $b=a+\delta>m$ would turn these into
$\frac{m(m-1)}{2}+\delta$ and $\frac{n(n+1)}{2}-\frac{m(m+1)}{2}-\delta$, respectively.
We want to achieve that these are equal, i.e.,
$$m^2+2\delta = \frac{n(n+1)}{2}. $$
Clearly, we can make $\delta$ any value in the range $2,\ldots, n-1$ (as long as $1<m<n$).
So, given $n$, compute $$m_0=\left\lfloor\sqrt{\frac{n(n+1)}2-2}\right\rfloor$$
and let $m_1=m_0$ or $m_1=m_0-1$, whichever has the same parity as $\frac{n(n+1)}{2}$.
Finally, let $$\delta = \frac{\frac{n(n+1)}{2}-m_1^2}2.$$
If $\delta<n$ (and $1<m_1<n$, which can fail only for tiny values of $n$), we succeed by picking $m=m_1$ and $1\le a<m<b\le n$ with $b-a=\delta$; if on the other hand $\delta\ge n$, then no choice of $m,a,b$ is possible.
For $m=m_1$ and $\delta$ as just found, it is also not hard to compute the number of suitable pairs $(a,b)$ as we must have $m+1-\delta\le a\le\min\{m-1,n-\delta\}$; so the count is
$$ \min\{m-1,n-\delta\}-(m+1-\delta)+1 = \min\{\delta-1,n-m\}.$$
A priori, we may need to try again with $m=m_1-2$ etc., but as $m_1\approx\frac n{\sqrt 2}$, this increases $\delta$ by $\approx 2m\approx n\sqrt 2\gg n$, so (except possibly for tiny $n$) no additional values of $m$ need to be considered.
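Independently of the closed form above, here is a brute-force counter (my own sketch; the name `count_swaps` is mine) that follows the definition in the question directly and can be used to test any proposed formula against small $n$. For $n=7$ it returns $3$, matching the three swaps listed in the question.

```python
from itertools import combinations

def count_swaps(n):
    total = n * (n + 1) // 2
    count = 0
    for a, b in combinations(range(1, n + 1), 2):
        t = list(range(1, n + 1))
        t[a - 1], t[b - 1] = t[b - 1], t[a - 1]   # swap the values a and b
        prefix = 0
        for m in range(1, n):                     # 1 <= m < n
            prefix += t[m - 1]
            if 2 * prefix == total:               # prefix sum == suffix sum
                count += 1
                break
    return count

print([(n, count_swaps(n)) for n in range(2, 12)])   # count_swaps(7) == 3
```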
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3822200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Combining two homotopies in linear time Suppose $f,g,h:M\to N$ are smooth maps such that $f\sim g$ and $g\sim h$. I.e. there are smooth homotopies
$$F:M\times I\to N,\qquad G:M\times I\to N,$$
such that
$$F(x,0)=f, F(x,1)=g,\qquad G(x,0)=g, G(x,1)=h$$
I want to know what is the problem with the following homotopy between $f$ and $h$? Isn't smooth?
$$H(x,t)=\begin{cases}F(x,2t), & t\in[0,\frac{1}{2}]\\ G(x,2t-1), & t\in[\frac{1}{2},1]\end{cases}$$
I am asking this in regard to Milnor construction of $f\sim h$ using bump function in Topology from differentiable viewpoint.
| This is not smooth at $1/2$. Take for instance $M = \{0\}$ and $N = \Bbb R$, with $F(0,t) = t$ and $G(0, t) = 1-t$. The graph of your $H$ is then the tent map $t \mapsto 1 - |2t-1|$, which has a corner at $t = 1/2$ and so is not differentiable there.
One can overcome "cusps" like this as Milnor does by slowing down to a stop at the cusp-point before moving on to the next part of the curve.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3822334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can we find a stochastic matrix $Q$ such that $M = \frac{1}{C} \sum_{i=1}^C Q^i$ where $M$ is a stochastic matrix? Given a stochastic matrix $M$, can we always find another stochastic matrix $Q$ such that $M$ can be written as the mean of a geometric sum $M = \frac{1}{C} \sum_{i=1}^C Q^i$ for any $C > 1$?
My definition of stochastic matrix is a real square matrix whose components are non negative, with each row summing to 1.
The intuition is from the image of $f(x)=\frac{1}{C} (\frac{1-x^{C+1}}{1-x}-1)= \frac{1}{C} \sum_{i=1}^C x^i$. $f$ is strictly monotone in $[0,1)$ and $f\left([0,1)\right)= [0,1)$. We can extend $f$ to have a bijection from $[0,1]$ to $[0,1]$. As any eigenvalue of a stochastic matrix is $\le 1$, can we quantify the image of $g(Q) = \frac{1}{C} \sum_{i=1}^C Q^i$ on the space of stochastic matrix? Can that image cover the whole space of stochastic matrix (which could be seen as a product of simplices)?
Extra conditions: what if the Markov chain associated to $M$ is irreducible (and positively recurrent because the state space is finite)? even ergodic?
If we put stronger condition to make $M=P^{-1} \Delta P$ diagonalizable (for example, if reversible), we can have solution $D$ for eigenvalues: $g(D) = \Delta$. However is $Q=P^{-1} D P$ still stochastic?
| The question is stated in full generality for a stochastic matrix $M$, and a fixed $C$.
For a negative answer it is thus enough to consider a simple, very specific $M$. (But $C$ is still "general enough" and fixed for the issue.)
I decided to work with the example of a double stochastic matrix
$$
M = \begin{bmatrix}p & q\\ q & p\end{bmatrix}\ ,\ 0<p,q<1\ ,\ p+q=1\ .
$$
This is the case of a symmetric, diagonalizable $M$, its eigenvalues are $1=p+q$ and $p-q$.
Assume there is a matrix $Q$ with the property
$$
M = \frac 1C(Q+Q^2+\dots+Q^C)=g_C(Q)=g(Q)\ .
$$
(The notation $g$ is used for short for $g_C$, if $C$ is clear in the context.)
Then $M$ and $Q$ are commuting, simultaneously diagonalizable, let $P$ be the base change matrix as in the OP, and if $s,t$ are the eigenvalues of $Q$ (in the one good order matching via $g$ the order of $1$, $p-q$)...
$$
\begin{aligned}
M &=
P^{-1}
\begin{bmatrix}1 &\\ &p-q\end{bmatrix}
P\ ,\\
Q &=
P^{-1}
\begin{bmatrix}s & \\ & t\end{bmatrix}
P\ ,\\
P &=
\begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix}
\ ,\\[3mm]
1 &= \frac 1C(s+s^2+\dots+s^C)=g(s)\ ,\text{ so $s=1$,}\\
p-q
&= \frac 1C(t+t^2+\dots+t^C)=g(t)\ .
\end{aligned}
$$
Now consider the case $C=2$, $g(t)=g_2(t)=\frac 12(t+t^2)$, $g:(-1,1)\to[-1/8,1)\subset(-1,1)$. If $p-q$ is in the image of $g$, then any $t$ in the preimage (one or two choices) works and we have a stochastic solution $Q$. (Associate $p',q'>0$ with sum $1$ and difference $t$.) If not, there is no such $Q$. In the given generality we have a negative answer.
This inexistence was a consequence of the fact that $g_2:(-1,1)\to(-1,1)$ is not surjective. For a general $C>1$, the function $g_C:(-1,1)\to(-1,1)$ is also non-surjective. (Its restriction $[0,1)\to[0,1)$ is increasing and bijective, but on $(-1,0)$ the image keeps a positive distance from $-1$, since there is at least one positive term in the defining sum.) The same argument applies.
Note: Similar "symmetric" examples can be written considering a transition matrix on states building a group $(A,+)$, with fixed constants $p_a$ corresponding to the probable transition from a state $x\in A$ to the state $x+a$. The above example is obtained for $A=\Bbb Z/2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3822460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Clarification on the meaning of "number of decimal places" I've learned in school that the "number of decimal places" in a number refers to how many digits are after the decimal point.
For example, 2.5 and 100.2 have 1 decimal place, and 0.234 has 3.
But what about numbers like 56. and 45.0? Do numbers like that have 0 or 1 decimal places?
(Also, please correct me if I'm misunderstanding decimal places in my question)
| The "number of decimal places" is not a property of the number itself, but about how it is written. So $1.$ has zero d.p. and $1.0$ has one, though they are numerically equivalent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3822604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Understanding why the family of sets is not an algebra Let $\mathcal{F} = \{A \subseteq \Omega: |A| \text{ is even} \}$ be a family of sets of $\Omega$ = $\{1, 2, 3, ..., 2n \}$ for some $n \in \mathbb{N}$ and let $\Omega \in \mathcal{F}$. I am able to show that this family of sets satisfies the following two properties:
*
*Closed under complements
*Closed under finite disjoint unions
However, I want to show that it is not an algebra. Specifically I want to show that it is $\textbf{not}$ closed under finite unions.
I get lost when trying to show that it is not closed under finite unions because I can't think of a case where closed under disjoint unions $\nRightarrow$ closed under unions.
edit:
I should specify that an algebra satisfies the following properties:
*
*$\Omega$ $\in$ $\mathcal{F}$
*Closed under complements
*Closed under finite unions
Thanks for the help.
| I assume $n \geq 2$ (if $n=1$, then $\mathcal{F}= \{\emptyset, \{1,2\}\}$ is closed under finite unions).
The union $\{1,2,3\}=\{1,2\}\cup \{2,3\}$ is not in $\mathcal{F}$ so $\mathcal{F}$ is not closed under finite unions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3822718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Count of multiples Is is possible to estimate the total number of distinct multiples $N_{\text{mult}}(p, x)$ of primes $p_k \le p$ in the general case of arbitrary $p$ and range [1, x]? Or, alternatively, the number of integers in [1, x] that aren't divisible by any $p_k \le p$?
An estimate is straightforward for the first several primes via PIE but appears to be more challenging in the general case as the number of factors increases.
| Let $\Phi(x,y)$ denote the number of integers up to $x$ which are not divisible by any prime up to $y$. Let $u:=\log x/\log y$.
Then Theorem 3 in Section III.6.2 of Tenenbaum’s Introduction to analytic and probabilistic number theory (Cambridge University Press, 1995) gives the asymptotic formula
$$ \Phi(x,y)=\frac{x\omega(u)-y}{\log y}+O\left(\frac{x}{(\log y)^2}\right),\qquad x\geq y\geq 2,$$
where $\omega:[1,\infty)\to[1/2,1]$ is the Buchstab function satisfying (cf. Corollary 3.1 after the theorem)
$$ \omega(u)=e^{-\gamma}+O(u^{-u/2}),\qquad u\geq 1.$$
As you can see this is a delicate question: $y$ is your $p$, and the relationship between $x$ and $y$ is crucial. The Buchstab function $\omega(u)$ controls this; it decreases from $\omega(1)=1$ to $\omega(2)=1/2,$ then increases again, eventually being asymptotic to
$e^{-\gamma},$ as $u$ gets large, which is related to Mertens' third theorem
$$\prod_{p \leq n} \left(1-\frac{1}{p}\right) =\frac{ e^{-\gamma}+o(1)}{ \log n},$$
yielding that roughly
$$\frac{ e^{-\gamma}X}{ \log n},$$
integers $\leq X$ are not divisible by any prime $\leq n,$ if $X$ is much larger than $n.$
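A small numerical illustration (my own sketch; the helper names `primes_upto` and `phi_exact` are mine): it compares the exact count of integers $\le X$ with no prime factor $\le y$ against the Mertens-product estimate $X\prod_{p\le y}(1-1/p)$ and against $e^{-\gamma}X/\log y$.

```python
import math

def primes_upto(y):
    sieve = [True] * (y + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(y**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p in range(2, y + 1) if sieve[p]]

def phi_exact(X, y):
    # exact count of 1 <= n <= X with no prime factor <= y
    alive = [True] * (X + 1)
    for p in primes_upto(y):
        alive[p::p] = [False] * len(alive[p::p])
    return sum(alive[1:])

X, y = 10**6, 100
mertens = X * math.prod(1 - 1/p for p in primes_upto(y))
gamma = 0.5772156649
print(phi_exact(X, y), round(mertens), round(math.exp(-gamma) * X / math.log(y)))
# the three numbers should be reasonably close for X much larger than y
```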
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3822838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Determine all zeros of the polynomial $X^4 - 2X^3 - X^2 + 2X + 1 \in \mathbb C[X]$. This is Exercise 14 on page 110 of Analysis I by Amann and Escher.
The hint given is as follows: multiply the polynomial by $1/X^2$ and substitute $Y = X - 1/X$.
If I attempt this, I get the following:
\begin{align*}
X^4 - 2X^3 - X^2 + 2X + 1 &= 0\\
\Rightarrow X^2 - 2X - 1 + \frac{2}{X} + \frac{1}{X^2} &= 0.
\end{align*}
My problem is that I don't understand how to make the suggested substitution. I'm wondering if there is something obvious I'm missing.
I appreciate any help.
| $Y^2= X^2-2+\dfrac{1}{X^2}$. So $X^2-1+\dfrac{1}{X^2} = Y^2+1$. Then
$$X^2-2X-1+\dfrac{2}{X}+\dfrac{1}{X^2} = Y^2+1-\left(2X-\dfrac{2}{X}\right)=Y^2-2Y+1=(Y-1)^2$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3822979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Hyperplane is a set of points satisfying a linear equation: How does a 3-dimension vector fill out a plane? A hyperplane H in $ R^n$ is the set of points $ (x_1, x_2, ..., x_n)$ that satisfy a linear equation:
$$ a_1 x_1 + a_2 x_2 + ... + a_n x_n = b$$
where the vector $ u = [a_1, a_2, ..., a_n]$ of coefficients is not zero. Thus a hyperplane in H in $R^2$ is a line, and a hyperplane H in $R^3$ is a plane.
I'm trying to conceptually understand the definition, but I'm having an issue with the picture in 3 dimensions. The set of $x$'s that satisfy the equation in $R^3$ will be vectors with 3 corresponding elements. Unless there's more than one solution, how would a 3-element vector (made from the $x$'s solving the system) fill out a plane in 3 dimensions?
| Maybe it's easier to conceptualize if we drop the $b$ for a moment and assume it is zero. Then the equation reads
$$a_1x_1+a_2x_2+a_3x_3=0.$$
The left side is essentially just the scalar product of the vectors $\vec a=(a_1,a_2,a_3)$ and $\vec x=(x_1,x_2,x_3)$. But the scalar product of two vectors is zero if and only if they are perpendicular. So the solution set consists of all the vectors $\vec x$ which are perpendicular to the vector $\vec a$. And that's a plane containing the origin.
Now if we take $b\neq0$ and let's say $\vec y$ is a vector satisfying $\vec a\cdot\vec y=b$, while $\vec x_0$ is a vector from the plane above, so $\vec a\cdot\vec x_0=0$. Then the vector $\vec x_0+\vec y$ is a solution of the original equation, because
$$\vec a\cdot(\vec x_0+\vec y)=\vec a\cdot\vec x_0+\vec a\cdot \vec y=0+b=b.$$
So if we take all the vectors in the plane from the start and then shift each of them by $\vec y$, we get the solutions of the original equation. And that's still a plane, but shifted away from the origin.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3823107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Applying Leibniz to an integral equation The Integral equation
$$\frac{1}{2y} \int_{x-y}^{x+y} f(t) dt=f(x), \forall x,y \in R ~~~~(1)$$
relates to a question at NSE:
If $\frac{1}{2y} \int_{x-y}^{x+y} f(t) \space \mathrm{d}t = f(x)$, then $f$ is linear
By Leibniz's rule, differentiating with respect to $x$ on both sides leads to
$$\frac{1}{2y}[ f(x+y)- f(x-y)]= f'(x)~~~~(2).$$
It may be checked that $f(x)=x^2$ satisfies (2) but it does not satisfy (1).
The question is to pin point why (1) and (2) are not equivalent.
| $f(x)=g(x)$ implies $f'(x)=g'(x)$ but the converse is not true. ($g(x)$ could be $c+f(x)$ where $c$ is a constant).
In this case since you are differentiating w.r.t. $x$ treating $y$ as a constant you can only conclude that when the second one is assumed the first equation holds when you add a function of $y$ ( i.e. something not depending on $x$) to one side.
There is indeed a function of the form $x^{2}+\phi (y)$ which satisfies the first equation. Can you find it?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3823371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Existence of smooth vector field on Riemannian Manifold property How do I prove the following:
Let $M$ be a Riemannian manifold. Let $\omega: M\rightarrow T^{*}M$ be a smooth 1 form on $M$. Then, there exists a unique smooth vector field $Y$ on $M$, such that
$\omega(X)=\langle X,Y\rangle$ for every smooth vector field $X$ on $M$.
I'm not sure how to construct such a smooth vector field $Y$. I initially thought of using Riesz representation theorem, but I wasn't sure how to proceed.
May I have hints? (without using Christoffel symbols)
| On every tangent space you can indeed use the Riesz representation theorem to construct a tangent vector fulfilling the condition at that particular point. Using this, you can define a section of the tangent bundle in a pointwise manner. What remains to be shown is that this section is actually smooth. For this you only have to understand the linear algebraic situation: the explicit formula for the value at a given point $p \in M$ only contains the values of the Riemannian metric and the given one-form $\omega$ at $p$. Convince yourself of this!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3823592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why aren't I converting 1,564 to binary correctly? I am trying to teach myself how to convert to binary using the subtraction method.
$$
1564-1024=540
\\
540-512=28
\\
28-16=12
\\
12-8=4
\\
4-4=0
\\
\;
\\
\to 11000001100_2
$$
But $1,564 =11000011100_2\neq11000001100_2 = 1,556$
What did I do wrong to get the incorrect result here?
| $1564 = 1024+512+16+8+4=11000011100_2$
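A one-line check (my own addition) of the correct expansion, which also shows what the bit string written in the question actually encodes:

```python
n = 1564
print(bin(n))                          # 0b11000011100
print(1024 + 512 + 16 + 8 + 4 == n)    # True
print(int('11000001100', 2))           # 1548: the question's bit pattern misses the 16 bit
```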
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3823699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving a new defined logic operator, using NOT, OR, AND, IMPLIES gates Question:
Define a new logical operator $A*B$. We will call this new binary operator $*$ a ”NAND” operation. The truth table for NAND is as follows: $$\begin{array}{|cc|c|}A&B&A*B\\\hline F&F&T\\F&T&T\\T&F&T\\T&T&F\end{array}$$ Prove that you can create all other logical operators we have discussed (NOT, OR, AND, IMPLIES) using only NAND operations. Hint: Start with creating a NOT operation using NAND and work from there.
I know how to get all the logic operators using the first part of the truth table ($A$, $B$), but how are you supposed to get them from the new NAND operator, because you're not proving its equivalence or proving a statement? Maybe I'm reading the question wrong. Can someone please explain this to me? Thanks.
| You are discovering and then proving four equivalences. I’ll get you started: prove that $\neg A$ is logically equivalent to $A*A$. Once you have that, it’s pretty easy to get $A\land B$ solely in terms of the $*$ operator. Then you can go for $\lor$, and finally for $\to$.
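If it helps, here is a tiny truth-table checker (my own sketch, not part of the hint): it verifies the first equivalence, $\neg A \equiv A*A$, and can be reused on your own candidate formulas for AND, OR and IMPLIES once you find them.

```python
from itertools import product

def nand(a, b):
    return not (a and b)

def equivalent(f, g, arity):
    """Do two Boolean functions of the given arity agree on every input row?"""
    return all(f(*row) == g(*row) for row in product((False, True), repeat=arity))

print(equivalent(lambda a: nand(a, a), lambda a: not a, arity=1))   # True: NOT A == A * A
# test your own candidate for AND, for example:
# print(equivalent(lambda a, b: <your formula using nand>, lambda a, b: a and b, arity=2))
```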
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3824005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Using Quantifiers to Describe Multiples I was recently solving some questions about quantifiers when I came across an example I didn't quite understand. The universe of discourse is the set of natural numbers $\{1,2,3,\dots\}$ (i.e. all variables represent natural numbers). The statement being converted to logic format was "Every multiple of $4$ is a multiple of $2$", which was then represented with the following symbolization: $$∀n((∀m\space n≠4m) ∨ (∃r\space n = 2r)).$$ When I read the symbolization, I translate it as follows: "For all numbers $n$, $n$ is either not a multiple of $4$ for all values of $m$, or there exists an $r$ such that $n$ is a multiple of $2r$". Is this an accurate translation and if not, what's a better way to word it? Also, is the symbolization truly the best way to represent the concept of "Every multiple of 4 is a multiple of 2"?
| The most natural translation of the original statement into logical symbols is
$$\forall n\big(\exists m\,(n=4m)\to\exists r\,(n=2r)\big)\,.\tag{1}$$
What’s inside the scope of $\forall n$ has the form $p\to q$, which is equivalent to $\neg p\lor q$, so $(1)$ is equivalent to
$$\forall n\big(\forall m\,(n\ne 4m)\lor\exists r\,(n=2r)\big)\,.$$
I would translate this back into English as ‘for all $n$, $n$ is not a multiple of $4$, or $n$ is a multiple of $2$’ — not a multiple of $2r$. One could also stay reasonably close to the form of the symbolic expression with ‘for all $n$, $n$ is not a multiple of $4$, or $n$ is even’.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3824124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Two equivalent series converge on different limits Consider the series:
$$
\sum_{n=0}^{\infty} \left({\dfrac{(-1)^n}{2n+1}}\right)^3 = \dfrac{1}{1^3}-\dfrac{1}{3^3}+\dfrac{1}{5^3}-\dfrac{1}{7^3} \dots = \dfrac{\pi^3}{32}
$$
Now suppose I wish to represent this series with the following equivalent summation:
$$
\sum_{n=1}^{\infty} \left[\left({\dfrac{1}{4n-3}}\right)^3-\left({\dfrac{1}{4n-1}}\right)^3 \right]= \left[\dfrac{1}{1^3}-\dfrac{1}{3^3}\right]+\left[\dfrac{1}{5^3}-\dfrac{1}{7^3}\right] \dots
$$
I know that changing order of summation in infinite series might affect the limit, but in this example the order of summation is not really changed, and all of the terms are exactly the same.One can even prove by induction that both series term by term are equivalent - i.e that the second is a compressed version of the first.
So can I state that the limit of the second series is equal to $\dfrac{\pi^3}{32}$ ? If not, I would be happy to find some explanation as to why we even pretend to assign values to infinite series if we can see that certain examples of infinite series break basic axioms of arithmetic.
| What seems to be in question here (as commented by Mourad) is the Riemann Rearrangement Theorem, which says that if a series converges, but does not converge absolutely, then the terms of that series can be rearranged to converge to any real number.
This series converges absolutely; that is, the series of its absolute values converges:
$$
\begin{align}
\sum_{n=0}^\infty\left|\left(\frac{-1}{2n+1}\right)^3\right|
&=\sum_{n=0}^\infty\frac1{(2n+1)^3}\\[3pt]
&=\frac78\zeta(3)
\end{align}
$$
If a series converges absolutely, its terms can be reordered (and regrouped) in any way desired and the series will converge to the same limit.
As mentioned by José Carlos Santos, if a series converges, even conditionally, the partial sums of any grouping of the terms will simply give a subsequence of the partial sums of the original series. Thus, any grouping of the terms will give the same limit of partial sums.
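A quick numerical illustration of this point (my own addition): summing the first $10^5$ terms one at a time, or the corresponding $5\times10^4$ bracketed pairs, gives the same value $\pi^3/32$ to machine precision.

```python
import math

target = math.pi**3 / 32
s_terms = sum((-1)**n / (2*n + 1)**3 for n in range(10**5))               # term by term
s_pairs = sum(1/(4*n - 3)**3 - 1/(4*n - 1)**3 for n in range(1, 5*10**4 + 1))  # grouped
print(target, s_terms, s_pairs)   # all three agree to many decimal places
```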
However, if the original series does not converge, then regrouping terms might produce different results. For example,
$$
\sum_{n=0}^\infty(-1)^n
$$
does not converge, and its terms can be grouped as
$$
(1-1)+(1-1)+(1-1)+\dots=0
$$
or as
$$
1-(1-1)-(1-1)-(1-1)-\dots=1
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3824361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Examples of T-conductor? I am struggling with understanding the concept of a T-conductor in linear algebra. I do know the definition, but some examples would be helpful.
Definition. Let $W$ be an invariant subspace for $T$ and let $\alpha$ be a vector in $V$. The $T$-conductor of $\alpha$ into $W$ is the set $S_T(\alpha;W)$, which consists of all polynomials $g$ (over the scalar field) such that $g(T)\alpha$ is in $W$. (p.201, Sec. 6.4)
I need some examples of T-conductor
Thanks in Advance
| Here's an example. Let $T:\Bbb C^4 \to \Bbb C^4$ be the transformation associated with the Jordan-form matrix
$$
\pmatrix{\mu&1&0&0\\0&\mu&0&0\\ 0&0&\lambda&1\\0&0&0&\lambda}.
$$
Let $e_1,\dots,e_4$ denote the standard basis of $\Bbb C^4$, and let $W$ denote the subspace spanned by the vector $e_1 = (1,0,0,0)$. We now describe $S_T(\alpha;W)$ for several different vectors $\alpha$:
*
*For any $\alpha \in W$, $S_T(\alpha;W)$ is the set of all polynomials
*For $\alpha = e_2$, $S_T(\alpha;W)$ is the set of polynomials divisible by $p(t) = (t-\mu)$
*For $\alpha = e_3$, $S_T(\alpha;W)$ is the set of polynomials divisible by $p(t) = (t-\lambda)$
*For any $\alpha = (\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ with $\alpha_2 \neq 0$ and $\alpha_4 \neq 0$, $S_T(\alpha;W)$ is the set of polynomials divisible by $(t-\mu)(t-\lambda)^2$.
*For any $v \in \Bbb C^4$ and $\alpha = e_1 + v$, $S_T(\alpha;W) = S_T(v;W)$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3824522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If S is a nonempty set, then $F^S$ is a vector space over F I was given a task to prove the following statement
If $S$ is a nonempty set, then $F^S$ is a vector space over $F$ .
Definitions:
If $S$ is a set, then $F^S$ denotes the set of functions from $S$ to $F$.
I was trying to construct a proof, but do not even know where to start. Could you please suggest me any idea how can I prove this statement ?
I will try to construct proof based on the advices in the comments:
There are 2 operations on $F^S$:
addition on $F^S$: $(f + g)(x) = f(x) + g(x)$
scalar multiplication on $F^S$: $(zf)(x) = zf(x)$
In order for a set $V$ to be a vector space, operations of addition and scalar multiplication should be defined (we already have them) and the following properties should hold:
associativity
commutativity
distributivity
additive inverse
additive identity
| Hint: You only need to verify the vector space axioms. For example, you need to show that $F^S$ is an abelian group under the sum of vectors. This sum is defined as follows: given two vectors $f,g\in F^S$, i.e. $f,g:S\rightarrow F$, define $(f+g)(s)=f(s)+g(s)$ for every $s\in S$ (remember that $F$ must be a field). Note that this sum satisfies $f+g=g+f$, because for every $s\in S$, $(f+g)(s)=f(s)+g(s)=g(s)+f(s)=(g+f)(s)$, since in $F$ the sum is commutative. In the same way you can prove the other axioms of an abelian group. The axioms related to scalars are also easy to check; you only need to use the right definition of the product between scalars and vectors.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3824672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $ (A_1 \cap \dots \cap A_n) \triangle (B_1 \cap \dots \cap B_n) \subset (A_1 \triangle B_1) \cup \dots \cup (A_n \triangle B_n) $ Prove that $ (A_1 \cap \dots \cap A_n) \triangle (B_1 \cap \dots \cap B_n) \subset (A_1 \triangle B_1) \cup \dots \cup (A_n \triangle B_n) $ is true for any sets $A_1, \dots , A_n$ and $B_1, \dots , B_n $
I tried to solve it using math induction.
n = 1: $A_1 \triangle B_1 \subset A_1 \triangle B_1$ is true
n = m: $ (A_1 \cap \dots \cap A_m) \triangle (B_1 \cap \dots \cap B_m) \subset (A_1 \triangle B_1) \cup \dots \cup (A_m \triangle B_m) $
n = m + 1: $ (A_1 \cap \dots \cap A_m) \triangle (B_1 \cap \dots \cap B_m) \cup A_{m+1} \triangle B_{m+1} \subset (A_1 \triangle B_1) \cup \dots \cup (A_{m+1} \triangle B_{m+1})$
But I have no idea what to do next
| You can prove it directly by element-chasing; using induction just overcomplicates matters. Suppose that $x\in\left(\bigcap_{k=1}^nA_k\right)\triangle\left(\bigcap_{k=1}^nB_k\right)$; then either $x\in\left(\bigcap_{k=1}^nA_k\right)\setminus\left(\bigcap_{k=1}^nB_k\right)$, or $x\in\left(\bigcap_{k=1}^nB_k\right)\setminus\left(\bigcap_{k=1}^nA_k\right)$. Without loss of generality we may assume that $x\in\left(\bigcap_{k=1}^nA_k\right)\setminus\left(\bigcap_{k=1}^nB_k\right)$. Then $x\in\bigcap_{k=1}^nA_k$, so $x\in A_k$ for $k=1,\ldots,n$, and $x\notin\bigcap_{k=1}^nB_k$, so there is an $\ell\in\{1,\ldots,n\}$ such that $x\notin B_\ell$. But then $x\in A_\ell\setminus B_\ell\subseteq A_\ell\triangle B_\ell\subseteq\bigcup_{k=1}^n(A_k\triangle B_k)$, and since $x$ was an arbitrary element of $\left(\bigcap_{k=1}^nA_k\right)\triangle\left(\bigcap_{k=1}^nB_k\right)$, we conclude that $\left(\bigcap_{k=1}^nA_k\right)\triangle\left(\bigcap_{k=1}^nB_k\right)\subseteq\bigcup_{k=1}^n(A_k\triangle B_k)$, as desired.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3824800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Computing $\lim_{x\to-5}\frac{x^2+2x-15}{|x+5|}$ The problem is: $$\lim_{x\to-5}\frac{x^2+2x-15}{|x+5|}$$
I factored the numerator to get: $\frac{(x-3)(x+5)}{|x+5|}$
How do i solve the rest?
| Another place where "factor out the big part" helps. Here, $|x|$ is growing without bound so $x^2$ is the big part in the numerator and $|x|$ is the big part in the denominator.
\begin{align*}
L &= \lim_{x \rightarrow -\infty} \frac{x^2 + 2x - 15}{|x+5|} \\
&= \lim_{x \rightarrow -\infty} \frac{x^2(1 + 2/x - 15/x^2)}{|x(1+5/x)|} \\
&= \lim_{x \rightarrow -\infty} \frac{x^2}{|x|} \frac{1 + 2/x - 15/x^2}{|1+5/x|}
\end{align*}
We should be able to see that the second fraction is going to $\frac{1+0-0}{|1+0|}$. (Making this easy to see is why you factor out the "big part".) The first fraction is always positive (positive over positive) once $x < 0$, so
$$ L = \lim_{x \rightarrow -\infty} (|x| \cdot 1) = \infty \text{.} $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3824902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Prove that for every $n \in \mathbb{N}$ and for all real numbers $x_1,x_2....x_n \in \mathbb{R}$ Prove that for every $n \in \mathbb{N}$ and for all real numbers $x_1,x_2....x_n \in \mathbb{R}$,
$$|x_1 + x_2 +....+x_n| \leq |x_1| + |x_2| +...+|x_n|$$
My attempt:
Base case: $n = 1$
$$|x_1| \leq |x_1|$$
Induction step: $n + 1$
$$|x_1 + x_{n+1}| \leq |x_1| + |x_{n+1}|$$
By the triangle inequality we have shown the inequality to be true for $n + 1$
Is this a valid and good proof? Is there something I am missing or should add?
| hint
For $ n=1$, it is obvious.
For $ n=2 $, we prove by disjunction of cases that
$$|x_1+x_2|\le |x_1|+|x_2|$$
for the $(n+1)^{\text{th}}$ step, you have to check that
$$|x_1+x_2+...+x_n+x_{n+1}|\le$$
$$|x_1|+|x_2|+...|x_n|+|x_{n+1}|$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3825060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
If $X_n=O_P(1)$ and $Y_n=o_P(1)$, prove $X_nY_n=o_P(1)$ We say $X_n=O_P(1)$ if $X_n$ is bounded in probability. We say $Y_n=o_P(1)$ if $Y_n$ converges in probability to $0$.
My attempt:
Since $X_n=O_P(1)$ and $Y_n=o_P(1)$, we have
$$
\forall\ \epsilon>0,\ \exists\ M \text{ and }\ n_0 \text{ such that } n\geq n_0\ \text{implies}\
P(|X_n|\leq M)\geq 1-\epsilon \text{ and} \lim_{n\to \infty}P\left(|Y_n|>\frac{\epsilon}{M}\right)=0.
$$
By the definition of limits,
$$
\forall\ \delta>0,\ \exists\ N_0\text{ such that } n\geq N_0\text{ implies } P\left(|Y_n|\leq \frac{\epsilon}{M} \right)\geq 1-\delta.
$$
If we take $N=\max{(n_0,N_0)}$, then $n\geq N$ implies
$$
P(|X_n|\leq M)\geq 1-\epsilon \text{ and } P\left(|Y_n|\leq \frac{\epsilon}{M } \right)\geq 1-\delta
$$
Note that
$$
\begin{aligned}
P(|X_nY_n|\leq \epsilon)
&=P(|X_n|\leq M,\ |X_nY_n|\leq \epsilon)
+P(|X_n|>M,\ |X_nY_n|\leq \epsilon)\\
&\geq P(|X_n|\leq M,\ |X_nY_n|\leq \epsilon)\\
&\geq P\left(|X_n|\leq M,\ |Y_n|\leq \frac{\epsilon}{M} \right)\\
&\geq P(|X_n|\leq M)+P\left(|Y_n|\leq \frac{\epsilon}{M} \right)-1\\
&\geq 1-\delta-\epsilon,\quad \text{for all }n\geq N \text{ and any } \epsilon,\ \delta>0.
\end{aligned}
$$
I'm confused how we can conclude that $\lim_{n\to\infty}P(|X_nY_n|\leq \epsilon)=1$ and then get $X_nY_n \stackrel{P}{\rightarrow}0$.
| Perhaps change the step
"
$\forall\ \epsilon>0,\ \exists\ M \text{ and }\ n_0 \text{ such that } n\geq n_0\ \text{implies}\
P(|X_n|\leq M)\geq 1-\epsilon \text{ and} \lim_{n\to \infty}P\left(|Y_n|>\frac{\epsilon}{M}\right)=0.$
"
to
$\forall\delta >0, \exists\ M \text{ and }\ n_0 \text{ such that } n\geq n_0\ \text{implies}\
P(|X_n|\leq M)\geq 1-\delta$
and $\forall \epsilon>0$, we have
$\text{ and} \lim_{n\to \infty}P\left(|Y_n|>\frac{\epsilon}{M}\right)=0.$
Following your steps, you will get that for all $\epsilon>0$, and $\delta>0$, there holds that
$P(|X_nY_n|\leq \epsilon)
\geq 1-2\delta,\quad \text{for all }n\geq N$.
This is exactly the definition of $X_nY_n = o_p(1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3825212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that there doesn't exist any normal subgroup $H$ such that $S_5/H $ is isomorphic to $S_4$ My attempt :
Order of the group $S_5$ is $5!$ so by Lagrange's theorem the order of the group $H$ should be $5$.So it must be a cyclic group generated by a $5$ cycle then let $(a_1 a_2 a_3 a_4 a_5)$ be a generator of the subgroup. Then for any element $g \in S_5$ ,$g(a_1..a_5)g^{-1} \in H$ , then $g(a_1)..g(a_5) \in H$.Now, all the elements of $H$ are $5$ cycles. Now if we choose $g$ in such a manner that $g(a_1)..g(a_5)$ becomes a two cycle then I am done. So I choose $g(a_1)=a_2$,$g(a_2)=(a_1)$..Is this OK? I don't think it is right where am I going wrong?
| You are on the right track, you've shown that such a normal subgroup would need to be generated by a $5$ cycle in $S_5$, so now we need to show that the subgroup generated by a $5$ cycle in $S_5$ can't be normal. Conjugating a $5$ cycle will always yield a $5$ cycle (since conjugation preserves the order of elements of a group), so we'll need to work a little harder. There are a few ways to see this, but I'll give a hands on proof. Its enough to show that from any $5$ cycle, we can find something in the subgroup generated by its conjugates that isn't a $5$ cycle, since then we can't have any group generated by a $5$ cycle being normal. So if we have $(abcde)$ is our $5$ cycle, conjugating by the transposition $(ab)$ yields the $5$ cycle $(bacde)$, and $(abcde)(bacde)^{-1}=(cba)$, which has order $3$, so gives something bigger than our hypothetical normal subgroup of order $5$. Thus, no normal subgroup of order $5$ can exist.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3825344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 0
} |
complex numbers help
Given the following complex numbers:
$$
z=1+i\sqrt{3}
\qquad
w = 0.707 - 0.707i
$$
find the cartesian forms of the following expressions:
$$
z^2 \bar{w}\qquad\text{and}\qquad \frac{z^3}{w^9}
$$
The first one i found the answer to be 1.414 - 1.414i, is this correct?
| If you know the polar forms of $z$ and $w$, then the calculation becomes much easier as mentioned in the comments.
Suppose that $z=r_{1}e^{i\theta_{1}}$ and $w=r_{2}e^{i\theta_{2}}$, then
$z^{2}\bar{w}=(r_{1}e^{i\theta_{1}})^{2}(r_{2}e^{-i\theta_{2}})=r_{1}^{2}r_{2}e^{i(2\theta_{1}-\theta_{2})}$ and $\frac{z^{3}}{w^9}=\frac{r_{1}^{3}}{r_{2}^{9}}e^{3i(\theta_{1}-3\theta_{2})}.$
Then you can use Euler's formula: $re^{i\theta}=r(\cos(\theta)+i\sin(\theta))$ to express your result in the form $x+iy.$
Now since $r_{1}=|z|=2$ and $\theta_{1}=\arctan(\frac{\sqrt{3}}{1})=\frac{\pi}{3}$
you have that $z=2e^{i\frac{\pi}{3}}.$ Similarly $w=r_{2}e^{-i\frac{\pi}{4}}$ where $r_{2}=|w|=\sqrt{(0.707)^2+(0.707)^2}\space(\approx 0.9998489885978).$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3825674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proving $\sum\limits_{k=1}^n \binom{n}{k}\binom{n}{k-1} = \binom{2n}{n+1}$ I'm thinking about using a group of $2n$ balls and pick $n+1$ of them for the right hand side.
However, I have no idea how to do the left hand side. Or do I need to use induction?
| In how many ways can we choose $n+1$ people from $n$ men and $n$ women? The RHS is obviously right, or we could pick which $k$ men qualify and which $k-1$ women don't, giving the desired LHS.
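For anyone who wants to sanity-check the identity numerically, here is a short brute-force sketch (my own addition):

```python
from math import comb

# verify sum_{k=1}^{n} C(n,k) C(n,k-1) == C(2n, n+1) for small n
for n in range(1, 15):
    assert sum(comb(n, k) * comb(n, k - 1) for k in range(1, n + 1)) == comb(2*n, n + 1)
print("identity verified for n = 1..14")
```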
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3825790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Solving linear tensor equation I am sorry if this question has been asked already but I was probably missing the correct keywords.
I am trying to solve a linear equation involving tensors. In Einstein notation it looks like :
$$
F_{di} = K_{dlij} C_{lj}
$$
where $F$ and $K$ are known and we want to know $C$. Also $K$ has the symmetry $K_{dlij} = K_{dlji}$.
For the context, I want to compute the weights of vector-valued finite RKHS.
Any help would be extremely welcome!
| I suppose the dimension is $3$, but you can modify correspondingly.
Linearizing indexes in some way, for example
$$
11\to1\\
12\to2\\
13\to3\\
21\to4\\
\ldots
$$
then the equation can be written as
$$
\mathcal{F}_a=\mathcal{K}_{ab}\mathcal{C}_b
$$
where $\mathcal{F},\mathcal{C}$ are vectors in $\mathbb{R}^9$
and if $\mathcal{K}\in\mathbb{R}^{9\times 9}$ is non-singular you can obtain
$$
\mathcal{C}_b=\mathcal{K}^{-1}_{ba}\mathcal{F}_a
$$
then go back to $C$ by
$$
C_{11}=\mathcal{C}_1\\
C_{12}=\mathcal{C}_2\\
C_{13}=\mathcal{C}_3\\
C_{21}=\mathcal{C}_4\\
\ldots
$$
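Here is a NumPy sketch of this index linearisation (my own addition; it assumes dimension $3$ as above, and the randomly generated $K$ with the stated symmetry is purely illustrative):

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
K = rng.standard_normal((d, d, d, d))
K = 0.5 * (K + K.transpose(0, 1, 3, 2))        # impose K_{dlij} = K_{dlji}
C_true = rng.standard_normal((d, d))
F = np.einsum('dlij,lj->di', K, C_true)        # F_{di} = K_{dlij} C_{lj}

# K_{dlij} -> mathcal{K}_{(di),(lj)}: group (d,i) as rows and (l,j) as columns
K_mat = K.transpose(0, 2, 1, 3).reshape(d*d, d*d)
C = np.linalg.solve(K_mat, F.reshape(d*d)).reshape(d, d)
print(np.allclose(C, C_true))                  # True, provided K_mat is non-singular
```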
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3825987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does the sphere $x^2 + (y-2)^2 + z^2 = 1$ intersect the plane $z-x = 3$? If so, find a point in their intersection. If not explain why.
Does the sphere $x^2 + (y-2)^2 + z^2 = 1$ intersect the plane $z-x = 3$? If so, find a point in their intersection. If not explain why.
I feel like I can give a convincing answer with a rough sketch, and words, but I can not figure out how to solve the problem algebraically. If anyone could help I would be greatly appreciative!
| If so, $$x^2+(y-2)^2+(x+3)^2=1$$ or
$$2(x^2+3x+4)+(y-2)^2=0,$$ which is impossible.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3826127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Intuitive explanation for inverse of a permutation matrix Today in lecture we learned that the transpose of a permutation matrix is the inverse of the permutation matrix. Meaning,
$$P^{T}P = I$$
I can work out the math by matrix multiplication but I'd prefer a deeper, more intuitive understanding.
What I have so far in my head is:
We know that the matrix $P$ will swap rows when we apply it to a matrix, let's say $A$. Then $PA$ will swap the $i^{th}$ row of A with the $j^{th}$ of $A$.
This then means that $P^{T}(PA)$ must swap our new $i^{th}$ row with the new $j^{th}$ row so we can have our original $A$ matrix back. Why is this always true? More specifically why does $P^{T}$ swap back out rows...?
| The key point is that any permutation matrix can be obtained as a product of elementary permutation matrices, each of which, acting by left multiplication, exchanges only $2$ rows; therefore given
$$P=P_{i,j}\cdots P_{h,k}$$
the reverse operation is given by
$$P^{-1}=P_{h,k}\cdots P_{i,j}$$
and since elementary permutation matrices are symmetric
$$P^{-1}=P_{h,k}\cdots P_{i,j}=(P_{i,j}\cdots P_{h,k})^T=P^T$$
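A quick numerical illustration of both facts (my own addition), using a random $5\times5$ permutation matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
perm = rng.permutation(5)
P = np.eye(5)[perm]                       # row i of P is the standard basis vector e_{perm[i]}
A = rng.standard_normal((5, 3))

print(np.allclose(P.T @ P, np.eye(5)))    # True: P^T P = I
print(np.allclose(P.T @ (P @ A), A))      # True: P^T puts the permuted rows back
```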
Refer to the related
*
*Symmetric permutation matrix
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3826248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
sum of trace for representations of finite groups Recently I am reading something about trace formula, and I want to prove Frobenius reciprocity from trace formula for a finite group. And I come across the following two formulas which I can't prove.
Assume $G$ is a finite group, and $(\pi,V)$ is a representation of $G$. Let $V^G$ be the subspace of $V$ consisting of elements which are fixed by $\pi(g)$ for all $g\in G$. Then I want to know if we can prove
$$\sum_{g\in G}\operatorname{tr}(\pi(g))=|G|\dim V^G.$$
Moreover, for any $\gamma\in G$, let
$$G_\gamma=\{g\in G \mid g\gamma=\gamma g\},$$
and $S$ be a set consisting of representatives of the conjugacy classes of $G$.
Can we prove
$$\sum_{g\in G}\operatorname{tr}(\pi(g))=\sum_{\gamma \in S} \frac{|G|}{|G_\gamma|}\operatorname{tr}(\pi(\gamma))?$$
Thanks very much!
| Just to add to Adam Higgins's point, a matrix $A$ with $A^2=A$ is called idempotent (involutive if $A^2=I$). Of course, as Pedro Tamaroff says, it is diagonalizable with eigenvalues 1 or 0. This follows from the Jordan canonical form, although it is an easy case.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3826366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
What this “resp.” means? I’m not a native English speaker and not a good math reader too.
My question came while I was reading this Debreu article about the existence of a real function to represent preferences, and I'm stuck on this passage:
If $A$, $B$ are two $f$-sets (resp. $i$-sets) and $A \cap B$ is not empty, then $A \cup B$ is an $f$-set (resp. $i$-set).
What’s this “resp.” means?
| Resp. is an abbreviation for respectively.
What was written there is a shorter way of saying the following.
If $A,B$ are two f-sets and $A\cap B$ is not empty, then $A\cup B$ is an f-set.
If $A,B$ are two i-sets and $A\cap B$ is not empty, then $A\cup B$ is an i-set.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3826526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Derivative of L2 norm and double summation I have to derive a constant vector μ for which the following equation is minimized:
$$ \sum_{i=1}^{n}\|x_i - \mu\|_{2}^{2} $$
I haven't done any of this in a long time and I want to know if I'm in the right direction or if I'm messing up. What I have so far is:
$$f(x) = \sum_{i=1}^{n}\|x_i - \mu\|_{2}^{2} $$
$$f(x) =\sum_{i=1}^n \left(\sqrt{\sum_{j=1}^n(x_{ij}-\mu)^2}\right)^2$$
$$f(x) =\sum_{i=1}^n \sum_{j=1}^n(x_{ij}-\mu)^2$$
$$\frac{\partial f(x)}{\partial \mu} = -2 \sum_{i=1}^n \sum_{j=1}^n (x_{ij} -\mu) = 0$$
$$\sum_{i=1}^n \sum_{j=1}^n x_{ij} - \sum_{i=1}^n \sum_{j=1}^n \mu = 0 $$
$$\mu \cdot n^2 = \sum_{i=1}^n \sum_{j=1}^n x_{ij} $$
$$\mu = \frac{\sum_{i=1}^n \sum_{j=1}^n x_{ij}} {n^2}$$
Did I totally mess up? Can I reduce the double summation? Thanks for any leads
| Here's a different derivation that does not use differentiation.
Compare with minimising a scalar quadratic:
\begin{align*}(x-\mu)^2+(y-\mu)^2&=x^2+y^2-2\mu(x+y)+2\mu^2\\
&=2\left(\mu-\frac{x+y}{2}\right)^2+x^2+y^2-(x+y)^2/2
\end{align*} Clearly, the minimum value occurs when $\mu=(x+y)/2$. Now generalize to vectors:
\begin{align*}\sum_i\|x_i-\mu\|^2&=\sum_i\left(\|x_i\|^2-2\langle x_i,\mu\rangle+\|\mu\|^2\right)\\
&=\sum_i\|x_i\|^2-\langle2\sum_ix_i,\mu\rangle+n\|\mu\|^2\\
&=n\left\|\mu-\frac{1}{n}\sum_ix_i\right\|^2+\sum_i\|x_i\|^2-\|\sum_ix_i\|^2/n
\end{align*} Again, the minimum occurs when $$\mu=\frac{1}{n}\sum_ix_i$$
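A quick numerical sanity check of this conclusion (a Python/NumPy sketch with randomly generated points; the data are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 3))            # n = 10 points in R^3

def objective(mu):
    return np.sum(np.linalg.norm(x - mu, axis=1) ** 2)

mu_mean = x.mean(axis=0)                # the claimed minimiser: the sample mean
base = objective(mu_mean)

# perturbing the mean in random directions never decreases the objective
print(all(objective(mu_mean + 0.1 * rng.normal(size=3)) >= base for _ in range(1000)))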
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3826719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Binomial coefficients-sums I need help solving this task so if anyone had a similar problem it would help me.
Calculate:
$\sum\limits_{i=0}^{n}(-1)^i i {n \choose i}$
I tried this:
$\sum\limits_{i=0}^{n}(-1)^i i \frac{n!}{i!(n-i)!}\\\sum\limits_{i=0}^{n}(-1)^i \frac{n!}{(i-1)!(n-i)!}\\\sum\limits_{i=0}^{n}\frac{(-1)^i n!}{(i-1)!(n-i)!}$
And now with this part I don’t know what to do next.
Thanks in advance !
| I did it like this, so I'm looking for your thoughts, is it right?
$\sum\limits_{i=0}^{n} (-1)^{i} i\binom {n} {i}\\=\sum\limits_{i=0}^{n} (-1)^{i} n\binom {n-1} {i-1}\\=\sum\limits_{i=0}^{n} (-1)^{i} n\frac{(n-1)!}{(i-1)!(n-i)!} \\=n\sum\limits_{i=0}^{n} (-1)^{i}\frac{(n-1)!}{(i-1)!(n-i)!}\\
=n\sum\limits_{i=0}^{n}\binom{n-1}{i-1}(-1)^{i}=-n\sum\limits_{j=0}^{n-1}\binom{n-1}{j}(-1)^{j}=-n(1-1)^{n-1}$, which equals $0$ for $n>1$ and $-1$ for $n=1$.
Thanks for help !
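A quick brute-force check of the value of the sum (a small Python sketch, independent of the derivation above):

from math import comb

def s(n):
    return sum((-1)**i * i * comb(n, i) for i in range(n + 1))

print([s(n) for n in range(1, 8)])   # [-1, 0, 0, 0, 0, 0, 0]: -1 for n = 1, 0 for n > 1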
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3826842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Are the numbers $\sqrt{n^2 + q^2}$, $n=0,1,\dots$, linearly dependent over $\mathbb{Q}$? Let $q$ be a non-zero rational number, and consider the set of numbers $\sqrt{n^2 + q^2}$, with $n=0,1,\dots$. Are they linearly dependent over $\mathbb{Q}$? In other terms, can we find some positive integer $N$ and some rational numbers $a_0,\dots,a_N$ not all equal to zero, such that
\begin{equation}
\sum_{n=0}^{N} a_N \sqrt{n^2 + q^2} = 0?
\end{equation}
I found this statement in the post Linear Independence of Square Roots over Q, where the author of the post considers it "evident". For me, not only it is not evident at all, but I have some serious doubt that it is generally true.
What do you think about it?
Thank you very much for your attention in advance.
NOTE. Let us recall, in connection with this problem, that we have the following remarkable result.
Theorem Let $n_1,\dots,n_k$ be square-free integers. Then the numbers $\sqrt{n_1},\dots,\sqrt{n_k}$ are linearly independent over $\mathbb{Q}$ if and only if $n_1,\dots,n_k$ are pairwise distinct.
Elementary proofs of this result are given in Linear Independence of Radicals by Iurie Borieco, then a young pluri-medallist at the International Mathematical Olympiads.
| Guided by the pythagorean triples $(9,12,15)$ and $(5,12,13)$, we can take $q=12$ and have
$$\sqrt{5^2+12^2}-\frac{13}{15}\sqrt{9^2+12^2}=0.$$
Many pythagorean triples lend themselves to this.
I wonder if any counter-examples are not derived from a pythagorean triple...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3826953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
How to find an optimum distance between 2 lines? In the graph below there are 4 series of points. These points are symmetric with respect to the $OX$ axis and also with respect to the $OY$ axis.
I have to create/to draw two parallel lines in order to include all these points in between. Then, the distance between these two lines will be the error which I need to compute.
My idea:
*
*Find out the highest point for each position on $OX$ axis.
*Find out the highest point from step 1.
*Compute the slope from the point found at step 2 to the points from step 1.
*Find out the minimum slope
*We have 2 points: $A1(x_{1}, y_{1})$ and $B1(x_{2}, y_{2})$ marked with blue circle on my picture. Having these 2 points and knowing that the points are symmetric we can conclude also that the second line, parallel with the first one will pass through $A2(-x_{1}, -y_{1})$ and $B2(-x_{2},-y_{2})$ marked with red.
*Now, it can be computed the distance between these 2 lines
BUT there is also another idea better than mine, I suppose.
I compute this error using only 4 points, but every point on the graph has its own weight and importance.
So, I am thinking somehow to take into consideration all these points. Maybe it is an optimization/minimization problem.
| The two lines can be parameterized as $y=ax+b$ and $y=ax-b$. The distance between the lines is given by $2|b| / \sqrt{a^2+1}$. You are therefore interested in solving
\begin{align}
\min_{a,b} \quad & \frac{2b}{\sqrt{a^2+1}} \\
\text{s.t.} \quad & ax_i+b \geq y_i \quad i=1,\ldots,n \\
& ax_i-b \leq y_i \quad i=1,\ldots,n
\end{align}
The constraints ensure that the lines $y=ax+b$ and $y=ax-b$ are above and below the datapoints $(y_i,x_i)_{i=1}^n$, respectively (so you know $|b|=b$). The objective function is not convex in $a$ (and the constraints make it difficult to do a nonlinear reparameterization to make it convex). The only thing working in your favor is that the problem only has three variables. BARON will have no problem solving this to optimality. You could do some preprocessing and for each constraint only include the extreme datapoints (for each $x$ only include the highest point for the first constraint, and the lowest point for the second constraint).
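As a rough illustration of this formulation, here is a Python/SciPy sketch (the data are made-up placeholder points and SLSQP is a local solver, not a global one like BARON, so treat the result as indicative only):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.uniform(-5, 5, 50)
y = 0.7 * x + rng.uniform(-1, 1, 50)       # synthetic points roughly along a band

def width(p):                              # distance between y = ax+b and y = ax-b
    a, b = p
    return 2 * b / np.sqrt(a**2 + 1)

cons = [{'type': 'ineq', 'fun': lambda p: p[0] * x + p[1] - y},    # ax+b >= y_i
        {'type': 'ineq', 'fun': lambda p: y - (p[0] * x - p[1])}]  # ax-b <= y_i

res = minimize(width, x0=[0.0, np.abs(y).max() + 1], constraints=cons)
print(res.x, width(res.x))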
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3827085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How to give the sketch of a set I'm asked to give a sketch of this set: $K = \{(x,y)\in\mathbb R^2: 13x^2-10xy+13y^2=72\}$ and then give the points for which the distance from the origin is maximal/ will be maximal. Please help me. I have no idea how to solve it. Thanks in advance
| Let $x=r\cos t, y=r \sin t$, so $$r^2(t)=\frac{72}{13-10 \sin t \cos t}=\frac{144}{26-10 \sin 2t}.$$
So $$r^2(t)_{min}=r^2(t=3\pi/4)=\frac{144}{36},~r^2(t)_{max}=r^2(t=\pi/4)=\frac{144}{16}$$
$r_{min}=2, r_{max}=3$; these are the semi-minor and semi-major axes of an ellipse, inclined at angles $3\pi/4$ and $\pi/4$ to the $x$-axis respectively.
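A small numerical check of these extremes (a Python/NumPy sketch):

import numpy as np

t = np.linspace(0, 2 * np.pi, 100000)
r2 = 144 / (26 - 10 * np.sin(2 * t))          # squared distance from the origin in direction t
print(np.sqrt(r2.min()), np.sqrt(r2.max()))   # approximately 2.0 and 3.0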
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3827188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Find the maximum value of $|\arg (\frac{1}{1-z})|$ for $|z|=1$ $$\arg \left(\frac{1}{1-z}\right)$$
$$=\arg (1) - \arg (1-z)$$
$$=-\arg (1-z)$$
Placing the modulus gives
$$|\arg (1-z)|$$
Since it’s a circle, one point is $(1,0)$, then the point which is farthest away is $(-1,0)$, so the arg should be $\pi$. The correct answer is $\frac{\pi}{2}$. How is that true?
I think I went wrong in using $\arg (1-z)$ when should be $\arg (z-1)$. I am not sure if that changes things, but that’s a possible flaw I noticed.
| Let $z=e^{ia}$ with $0<a<2\pi$ (so that $z\ne1$). Then
$1-z=e^{ia/2}\left(e^{-ia/2}-e^{ia/2}\right)=-2i\sin(a/2)\,e^{ia/2}=2\sin(a/2)\,e^{i(a/2-\pi/2)},$
thus, since $\sin(a/2)>0$ on this range,
$\arg(1-z)=\frac a2-\frac\pi2\in\left(-\frac\pi2,\frac\pi2\right),$
so $\left|\arg\frac1{1-z}\right|=\left|\arg(1-z)\right|<\frac\pi2$, with supremum $\frac\pi2$ approached as $a\to0$ or $a\to2\pi$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3827319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How do I solve this $\lim\limits_{s\to \infty}\frac{2s^2}{s^2-4}$ How do I solve this $$\lim_{s\to \infty}\frac{2s^2}{s^2-4}$$ .
I have tried partial fraction, but it does not give me a constant. And I know the answer is 2.
| The answer is $2$. Note that $\lim\limits_{s \to \infty} \frac{M}{s^p} = 0$ for any finite $M$ and $p\geq 1$.
$$ \lim_{s \to \infty} \frac{2s^2}{s^2 - 4} = \lim_{s \to \infty} \frac{2}{1 - 4/s^2} = \frac{2}{\lim\limits_{s \to \infty } 1 - \frac{4}{s^2} } = \frac{2}{1} = 2 $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3827436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
Insight on difference between Euler characteristics of 2 manifolds: $\chi(U)-\chi(V)$? For the Euler characteristic,
we have the inclusion-exclusion principle:
$$\chi(U\cup V) = \chi(U)+\chi(V)-\chi(U \cap V),$$
and also the connected sum property:
$$
\chi(U\#V) = \chi(U)+\chi(V)-\chi(S^n).
$$
However, is there any relation or topological/differential-geometric interpretation for the difference between two Euler characteristics
$$
\chi(U)-\chi(V)?
$$
This is studied in usual set theory, but I lack an intuitive understanding of what the above could mean. For instance, such an understanding could be gotten for the $+$ case by using the first equation above: $\chi(U)+\chi(V)=\chi(U\cup V) + \chi(U \cap V)$. I also haven't found too many resources on the matter. Any advice?
| I don't know exactly what niceness hypotheses are required for this, but if $Y$ is, say, a finite CW complex and $X$ is a CW subcomplex of $Y$ then we should have
$$\chi_c(Y) - \chi_c(X) = \chi_c(Y \setminus X)$$
where $\chi_c$ is the compactly supported Euler characteristic, defined using cohomology with compact support $H_c^{\bullet}$. $\chi_c$ is not a homotopy invariant but besides that it behaves more nicely in some ways, such as this one. We can equivalently write the above relation as
$$\chi_c(Y) = \chi_c(X) + \chi_c(Y \setminus X).$$
Note that this is manifestly not true for the ordinary Euler characteristic!
Some $\chi_c$ examples:
*
*$\chi_c(X) = \chi(X)$ if $X$ is compact, since then compactly supported and ordinary cohomology coincide. Hence if $X$ and $Y$ are both compact above then we have $\chi(Y) - \chi(X) = \chi_c(Y \setminus X)$.
*$\chi_c(\mathbb{R}^n) = (-1)^n$ (this is a counterexample to homotopy invariance). This is because $\mathbb{R}$ is $[0, 1]$ minus two points, so $\chi_c(\mathbb{R}) = \chi_c([0, 1]) - 2 = -1$. It's still true that $\chi_c(X \times Y) = \chi_c(X) \times \chi_c(Y)$ so this determines the answer for $\mathbb{R}^n$. This can be used to explain the Euler characteristic of the sphere: we have $\chi(S^n) = \chi(\text{pt}) + \chi_c(\mathbb{R}^n) = 1 + (-1)^n$.
*The wedge of $2$ circles satisfies $\chi(S^1 \vee S^1) = \chi(\text{pt}) + \chi_c \left( (0, 1) \sqcup (0, 1) \right)$ (the second term coming from deleting the wedge point), which gives $\chi(S^1 \vee S^1) = 1 - 2 = -1$ as expected.

*The torus $T^2$ satisfies $\chi(T^2) = \chi(S^1 \vee S^1) + \chi_c\left( (0, 1)^2 \right) = -1 + 1 = 0$ as expected.
*The above argument generalizes to the following: if $X$ is a finite CW complex with $c_i$ different $i$-cells, then $\chi_c(X) = \sum (-1)^i c_i$ (with no compactness hypotheses).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3827724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Reduced row echelon form of an augmented matrix is not unique I am given a sysetem of linear equations which after graphing, have no solution (the three lines intersect at different points). Now I am trying to prove this algebraically.
As an augmented matrix,
$$
\begin{bmatrix}
1 & -1 & 3 \\
1 & 1 & 1\\
2 & 3 & 6\\
\end{bmatrix}
$$
*
*$R_{1}-R_{2} \Rightarrow R_{2}$
$$
\begin{bmatrix}
1 & -1 & 3 \\
0 & -2 & 2\\
2 & 3 & 6\\
\end{bmatrix}
$$
continue from here in $(1)$ or $(2)$
$(1)$
*
*$R_{1}-\frac{1}{2}R_{3} \Rightarrow R_{3}$
$$
\begin{bmatrix}
1 & -1 & 3 \\
0 & -2 & 2\\
0 & -\frac{5}{2} & 0\\
\end{bmatrix}
$$
*
*$-\frac{1}{2}R_{2} \Rightarrow R_{2}$
$$
\begin{bmatrix}
1 & -1 & 3 \\
0 & 1 & -1\\
0 & -\frac{5}{2} & 0\\
\end{bmatrix}
$$
*
*$\frac{5}{2}R_{2} + R_{3} \Rightarrow R_{3}$
$$
\begin{bmatrix}
1 & -1 & 3 \\
0 & 1 & -1\\
0 & 0 & -\frac{5}{2}\\
\end{bmatrix}
$$
Then in RREF
$$
\begin{bmatrix}
1 & 0 & 2 \\
0 & 1 & -1\\
0 & 0 & -\frac{5}{2}\\
\end{bmatrix}
$$
$(2)$
*
*$-2R_{1}+R_{3} \Rightarrow R_{3}$
$$
\begin{bmatrix}
1 & -1 & 3 \\
0 & -2 & 2\\
0 & 5 & 0\\
\end{bmatrix}
$$
*
*$-\frac{1}{2}R_{2} \Rightarrow R_{2}$
$$
\begin{bmatrix}
1 & -1 & 3 \\
0 & 1 & -1\\
0 & 5 & 0\\
\end{bmatrix}
$$
*
*$-5R_{2} + R_{3} \Rightarrow R_{3}$
$$
\begin{bmatrix}
1 & -1 & 3 \\
0 & 1 & -1\\
0 & 0 & 5\\
\end{bmatrix}
$$
Then in RREF
$$
\begin{bmatrix}
1 & 0 & 2 \\
0 & 1 & -1\\
0 & 0 & 5\\
\end{bmatrix}
$$
which is different from the RREF in $(1)$
Can someone explain why I end up with a different RREF? I thought all RREF are unique, but clearly not in this case. Of course as mentioned earlier, the system has no solutions and both augmented matrices show this but their RREF's are not unique still.
| Any RREF must have a leading (pivot) $1$ in each nonzero row. Your matrices do not satisfy this condition (look at row 3), so they are not RREFs; you still need to scale the last row (and clear the entries above its pivot). The RREF of any matrix is unique; this is not a completely trivial statement, but it is a fact.
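For comparison, a computer algebra system returns this unique RREF directly; a short SymPy sketch:

from sympy import Matrix

A = Matrix([[1, -1, 3],
            [1,  1, 1],
            [2,  3, 6]])
print(A.rref())   # (Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]), (0, 1, 2))

Both of your computations reduce to this same matrix once the last row is scaled to a leading $1$ and the entries above it are cleared; the row $[0\;\;0\;\;1]$ says $0x+0y=1$, confirming that the system has no solution.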
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3827883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculating the matrix derivative $\frac{\partial}{\partial X}||CX^{-1}B||_F^2$ For matrices $C, X$, and $B$, I know that $\frac{\partial}{\partial X}||CXB||_F^2 = 2C^TCXBB^T$, and that $\partial X^{-1} = -X^{-1}(\partial X) X^{-1}$. However, I am unable to combine these results to calculate $\frac{\partial}{\partial X}||CX^{-1}B||_F^2$.
| Define a new matrix variable
$$\eqalign{
Y &= X^{-1} \qquad\implies\qquad
dY &= -Y\,dX\,Y \\
}$$
Use the gradient, which you already know, to write the differential of the function in terms of $Y$
$$\eqalign{
\phi &= \|CYB\|^2 \\
d\phi &= 2\Big(C^TCYBB^T\Big):dY \\
}$$
Then perform a change of variables from $Y\to X$
$$\eqalign{
d\phi
&= 2\Big(C^TCYBB^T\Big):\Big(-Y\,dX\,Y\Big) \\
&= -2\Big(Y^TC^TCYBB^TY^T\Big):dX \\
&= -2\Big((X^{-1})^TC^TCX^{-1}BB^T(X^{-1})^T\Big):dX \\
\frac{\partial \phi}{\partial X}
&= -2(X^{-1})^TC^TCX^{-1}BB^T(X^{-1})^T \\
}$$
In the above, a colon is used as a product notation for the trace, i.e.
$$\eqalign{A:B = {\rm Tr}(A^TB) = {\rm Tr}(B^TA) = B:A}$$
The terms in such a product can be rearranged in a number of equivalent ways, e.g.
$$\eqalign{
A:B &= A^T:B^T \\
A:BC &= B^TA:C = AC^T:B \\
}$$
due to the properties of the trace function.
As you have discovered, the chain rule is difficult to apply in Matrix Calculus. It often requires the calculation of intermediate quantities which are third and fourth order tensors.
The beauty of the differential approach is that the differential of a matrix acts like a matrix. In particular, it obeys all of the rules of matrix algebra.
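A finite-difference check of this gradient formula (a Python/NumPy sketch; the matrix sizes, random data and step size are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
C = rng.normal(size=(3, 4))
X = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 2))

phi = lambda M: np.linalg.norm(C @ np.linalg.inv(M) @ B, 'fro') ** 2

Xi = np.linalg.inv(X)
G = -2 * Xi.T @ C.T @ C @ Xi @ B @ B.T @ Xi.T    # the gradient derived above

# numerical gradient by central differences
h = 1e-6
G_num = np.zeros_like(X)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(X)
        E[i, j] = h
        G_num[i, j] = (phi(X + E) - phi(X - E)) / (2 * h)

print(np.max(np.abs(G - G_num)))   # small compared to the entries of G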
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3828055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Kinetic Fokker-Planck Equation vs Kramers Equation Is there any difference between the Kinetic Fokker-Planck Equation and Kramers Equation ? I have seen them both used as a name for the Kolmogorov forward equation describing the time evolution for the distribution of the velocity and position of a particle (e.g living in some solvent).
Does the different choice in name have any meaning?
| Well, there are several stochastic processes with several different names. Maybe, as you said, the kinetic Fokker-Planck equation is just the Kramers equation, which basically describes a Brownian particle in phase space, i.e., in the space of position $X$ and momentum $P$. In this equation, the kinetic term is taken into account, which is the first term in the following PDE:
\begin{align}
\frac{\partial \rho}{\partial t} = -\frac{P}{M}\left[\frac{\partial \rho}{\partial X}\right]+ \gamma\left[\frac{\partial \left(P\rho\right)}{\partial P}\right] +D\left[\frac{\partial^2 \rho}{\partial P^2}\right].
\end{align}
For example, take a look in the following reference eq. 1.1) https://arxiv.org/pdf/1905.05994.pdf
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3828194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $\alpha_{ij} A^i B^j = 0$ and $A^i$ and $B^j$ are arbitrary vectors, then prove that $\alpha_{ij}= 0$. This problem appeared in my Differential Geometry class; the professor explained the problem by first taking an arbitrary vector and demonstrating that $\alpha_{ii} = 0$, and then proceeding to take $A^l = B^m = 1$, $(1\leq l \leq n, 1\leq m \leq n, l\neq m)$. I get the proof somewhat. Can any of you elucidate it or give an alternative proof?
| Fix, $h,k$, then if you take
$$
\begin{aligned}
A^i=\begin{cases}
1, && i=h,\\
0, && i\neq h,
\end{cases}
\end{aligned}
\qquad
\begin{aligned}
B^j=\begin{cases}
1, && j=k,\\
0, && j\neq k
\end{cases}
\end{aligned}
$$
then
$$
\alpha_{ij}A^iB^j=0\implies\alpha_{hk}=0,
$$
for the arbitrariness of $h,k,$ this is true for all $h,k.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3828333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
$ \lim_{x \to 0}x \tan (xa+ \arctan \frac{b}{x})$ I have to evaluate the following limit
$$ \lim_{x \to 0}x \tan (xa+ \arctan \frac{b}{x})$$
I tried to write $\tan$ as $\frac{\sin}{\cos}$ or to use L'Hôpital, but I can't understand where I'm making mistakes.
The final result is:
$\frac{b}{1-ab}$ if $ab \ne 1$
$- \infty$ if $ab=1$ and $a>0$
$+ \infty$ if $ab=1$ and $a<0$
| $$\tan\left(ax+\arctan\frac bx\right)=\tan\left(ax+\frac\pi2-\arctan\frac xb\right)=\cot\left(\arctan \frac xb-ax\right)
\\\sim\dfrac1{\left(\dfrac1b-a\right)x}$$
and the limit for $ab\ne1$ is $\dfrac b{1-ab}$.
When $ab=1$, by Taylor the argument of the cotangent becomes asymptotic to $-\dfrac{a^3x^3}3$, hence the limit is $\pm\infty$, with sign opposite to that of $a$: $-\infty$ for $a>0$ and $+\infty$ for $a<0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3828453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Difference equation: verify answer? I would like to solve the difference equation
$$ k_{n+1} - \frac{\omega_2+1}{\omega_1 +1 } k_n = \frac{\omega_1 -\omega_2}{2( \omega_1 +1)}$$
Where $\omega_1 , \omega_2 $ are fixed positive real numbers.
I obtained the solution
$$k_n = (k_0 - \tfrac12) \left( \frac{\omega_2 +1}{\omega_1+1} \right)^n + \frac12$$
In the following way:
If
\begin{equation}
k_{s+1}-ak_s = b
\end{equation}
Let
\begin{equation}
k_s = A a^s +B
\end{equation}
be an ansatz (sorry for changing from $n$ to $s$, they are the same)
Then,
\begin{align*}
k_{s+1}-ak_s &= b \\
A a^{s+1} +B - A a^{s+1} +aB &= b \\
B(1-a) &= b \\
B &= \frac{b}{1-a}
\end{align*}
So,
\begin{equation}
k_s = A a^s + \frac{b}{1-a}
\end{equation}
So, in this case,
\begin{align*}
a &=\frac{\omega_2+1}{\omega_1 +1 } \\
b &= \frac{\omega_1 -\omega_2}{2( \omega_1 +1)}
\end{align*}
Figuring out $B$,
\begin{align*}
1- a &=\frac{\omega_1 - \omega_2}{\omega_1+1} \\
b &= \frac{\omega_1 -\omega_2}{2( \omega_1 +1)}\\
\frac{1}{1- a }&=\frac{\omega_1+1} {\omega_1 - \omega_2}\\
\frac{b}{1- a }&=\frac{\omega_1+1} {\omega_1 - \omega_2} \cdot \frac{\omega_1 -\omega_2}{2( \omega_1 +1) }\\
&= \frac12
\end{align*}
I think that this is incorrect, but I can't see any mistake I made.
Thank you!
| Hoping that you do not mind, let me give a small trick for this kind of problems
You have
$$k_{s+1}-ak_s = b$$ and the problem would be easy if $b=0$.
So, let $k_s=x_s+C$, replace and expand
$$x_{s+1}+C -ax_s-a C = b$$ Select $C$ such that
$$C-aC=b \implies C=\frac b {1-a}\implies x_{s+1}-ax_s = 0\implies x_s=A a^{s-1}$$ Back to $k_s$
$$k_s=x_s+\frac b {1-a}=A a^{s-1}+\frac b {1-a}$$ and then your good result.
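A quick numerical check of the closed form against direct iteration (a Python sketch; the values of $\omega_1,\omega_2,k_0$ are arbitrary):

w1, w2, k0 = 2.0, 0.5, 3.0
a = (w2 + 1) / (w1 + 1)
b = (w1 - w2) / (2 * (w1 + 1))

k, n = k0, 10
for _ in range(n):
    k = a * k + b                      # iterate k_{s+1} = a k_s + b

closed = (k0 - 0.5) * a**n + 0.5       # the closed form obtained in the question
print(k, closed)                       # the two values agree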
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3828614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $\liminf z_n = 0$, then there is a null sequence $(y_n)$ such that $\sum y_n = \infty$ and $\liminf y_n/z_n = 0$
Let $(z_n)$ be a sequence in $(0, \infty)$ with $\liminf z_n = 0$. Show that there are null sequence $(y_n)$ in $(0,\infty)$ such that $\sum y_n = \infty$ and $\liminf y_n/z_n = 0$.
There is a hint told me to construct $(y_n)$ as follows: choose a convergent subsequence $(z_{n_k}) \to 0$. Set
$$
y_{n_k} = z_{n_k}^2 \text{ for all } k \in \mathbb{N}, \quad y_n = 1/n \text{ otherwise}
$$
$(y_k)$ clearly is null and safisfies $\liminf y_n/z_n = 0$. My question is, is $(z_{n_k}) \to 0$ enough to conclude $\sum y_n = \infty$? Since there are instances that can make the sequence converges under this construction (e.g. $z_k = k^{-2}$ and pick subsequence $(z_{k})$ where $k \neq 2^n$ for any $n$), how do we ensure the existence?
| You're right, the hint seems incomplete. You might try this: Start with the subsequence $z_{n_k}\to 0.$ We then divide $n_k$ into the two subsequences $n_{2k}$ and $n_{2k+1}.$ For $n\in \{n_{2k}\},$ do as you did and set $y_{n_{2k}}= z_{n_{2k}}^2.$ For $n\in \{n_{2k+1}\},$ set $y_{n_{2k+1}}= 1/k.$ For all other $n,$ if any, set $y_n=1/n.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3828757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Number of lists of n sorted elements of m values I am trying to count the number of sorted lists of $n$ elements where each element is in the set $\\{1, ..., m\\}$. I have made some progress by observing the following things:
*
*There can be from $1$ to $min(m, n)$ different values in any list
*If $k$ denotes the number of different values in the list, there are $\binom{m}{k}$ ways of chosing the $k$ different values among the $m$ available ones
*For each of those ways, there are $\binom{n-1}{k-1}$ ways of building a sorted list (think of it as placing $k-1$ bars between the $n$ numbers of the sorted list, to chose how to distribute the k different values to the n numbers)
Putting all of that together, the total number of sorted lists is :
$$\sum_{k=1}^{min(m,n)}{\binom{m}{k}\binom{n-1}{k-1}}$$
That's all good, but I would like to simplify that expression. I tinkered with that a lot without success (trying to somehow apply Vandermonde's identity, telescoping sums, induction, ...). Then, I typed it in Wolfram Alpha, and it told me that this whole sum simplifies down to $\frac{m(m+n-1)!}{m!n!}$, so I suppose that this expression is actually simplifiable.
My question is hence how to simplify that expression (which identity should I use in particular, since binomial coefficients have dozens of identities).
If anyone can help me, I would be very glad ! Thank you anyway !
| Say $A = \{a_1 = 1, a_2 = 2, ..., a_m = m\}$, $m$ distinct elements in sorted order.
You are making a sorted list of $n$ elements with values from $A$.
This is equivalent to making an arrangement of $(m+n)$ elements: first place $a_1$ to $a_m$ in sorted order in $m$ of the places, and then there is only one way to place our sorted list in the remaining $n$ places, namely by setting the value of each element of the list equal to the nearest preceding element of $A$. So, for example, if $k$ positions are free after $a_i$, all of them will have value $a_i$. Since our list follows elements of $A$, we fix the first position for the first element of $A$ (that is, $a_1$) and choose the remaining $(m-1)$ places for $A$ from the other $(m+n-1)$ places.
So number of sorted list with $n$ elements and values between $a_1$ and $a_m$ = ${m+n-1} \choose {m-1}$.
Also, you can apply Vandermonde's identity to your result.
$\sum_{k=1}^m{\binom{m}{k}\binom{n-1}{k-1}} = \sum_{i=0}^{m-1}{\binom{m}{i+1}\binom{n-1}{i}} = \sum_{i=0}^{m-1}{\binom{n-1}{i} \binom{m}{(m-1)-i}} = {{m+n-1} \choose {m-1}}$
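A brute-force check of the count for small parameters (a Python sketch):

from itertools import combinations_with_replacement
from math import comb

for m, n in [(3, 4), (4, 2), (5, 5)]:
    brute = sum(1 for _ in combinations_with_replacement(range(1, m + 1), n))
    print(brute, comb(m + n - 1, m - 1))   # the two numbers agree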
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3828909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A condition for greatest lower bound - Intuition In page 2 of this lecture note, it states that one of the conditions for $w = \inf A$ is that $\forall r \in \mathbb{R}, r>w \rightarrow \exists a \in A, a<r$.
I totally understand its contrapositive: if $s$ is a lower bound for A, then $w\geq s$.
However, I don't get the intuition behind the condition I stated first. What does having an element in set $A$ that is smaller than a number bigger than $\inf A$ have to do with the "greatest" lower bound?
| Recall that if $A$ is a nonempty subset of $\mathbb{R}$ and it is bounded below, then its infimum, say, $w \equiv \inf A$ exists in $\mathbb{R}$. The infimum of $A$ must satisfy two conditions:
*
*It is no larger than any given element of $A$, i.e., it is a lower bound of $A$:
$$\forall x(x \in A \implies w \leq x)$$
*It is the greatest lower bound of $A$, meaning no other real number $r$ exceeding $w$ can be a lower bound, so if $r$ is a lower bound of $A$, then $r \leq w$.
So given $r > w$, if we couldn't find some $x \in A$ with the property that $w \leq x < r$, then either $x < w$ for all $x \in A$, contrary to our choice of $w$ (indeed, contrary to assumption 1); or $w < r \leq x$, for all $x \in A$, meaning $r$ is a lower bound of $A$ which is larger than $w$, contrary to our second assumption. We are thus forced to conclude that if $r > w$, we can always find some $x \in A$ such that $w \leq x < r$, as desired.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3829060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find the General solution $u(t,x)$ to the partial differential equation $u_x+tu=0$. Find the General solution $u(t,x)$ to the partial differential equation $u_x+tu=0$.
Here is what I've tried. I feel like this should be easy, but I'm not catching on to something here.
$$u_x+tu=0\implies -u_x=tu$$
Ansatz...$u=\frac{1}{t}e^{-xt}$ and $u_x=-e^{-xt}$
Thus: $\int_{0}^x -e^{-st}ds=\int_{0}^x\frac{1}{t}e^{-st}ds$
$\implies te^{-xt}-(-1)=\frac{1}{t^2}e^{-xt}-\frac{1}{t^2}$
Solving for $\frac{1}{t}e^{-xt}$
$\implies u(t,x)=\frac{1}{t}e^{-xt}=t^2e^{-xt}+t+\frac{1}{t}$
As I type all of this it occurs to me that my ansatz worked without all the other work. (that appears to not work.)
Is it ok to just stick with the ansatz and say $u(t,x)=\frac{1}{t}e^{-xt}=f(t)e^{-xt}$?
Please help me understand where my thinking goes astray.
Any Guidance would be greatly apppreciated.
| $$\frac{du}{dx}+tu=0$$
$$\frac{du}{u}=-t\:dx$$
Since $t$ is not a function of $x$, one can integrate with respect to $x$.
$$\ln|u|=-t\:x+f(t)\quad \text{because}\quad \frac{df(t)}{dx}=0$$
$f$ is any function.
$$u=e^{-t\:x+f(t)}$$
Let $\quad e^{f(t)}=F(t)\quad ;\quad F$ is any function.
$$u(x,t)=F(t)e^{-t\:x}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3829191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
$\lim_{(x,y,z) \to (0,0,0)} \frac{xyz}{x^2+y^2+z^2}=0$ How to show that
$$\lim_{(x,y,z) \to (0,0,0)} \frac{xyz}{x^2+y^2+z^2}=0,$$ where $x,y,z>0$.
My attempt:
$$||(x,y,z)|| < \delta \implies |x|, |y|, |z| < \delta$$
$$\left | \frac{xyz}{x^2+y^2+z^2} \right | < \left | \frac{xyz}{x^2}\right | < \frac{\delta^3}{x^2}.$$
Now, I do not know how to proceed, and I think my attempt might be wrong.
| Using the AM-GM inequality, $x^2+y^2+z^2\geq3(xyz)^{\frac{2}{3}}$, so $0<\dfrac{xyz}{x^2+y^2+z^2}\leq\dfrac{(xyz)^{\frac{1}{3}}}{3}\to0$ as $(x,y,z)\to(0,0,0)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3829296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Can I solve $\lim_{(x,y)\to\ (0,0)} \frac{x^2y^2}{x^2+x^2y^2+y^2}$ by converting to polar coordinates? Is it correct to solve this problem like this?
$$\lim_{(x,y)\to\ (0,0)} \frac{x^2y^2}{x^2+x^2y^2+y^2} $$
$$\lim_{(x,y)\to\ (0,0)} \frac{1}{1+\frac{x^2+y^2}{x^2y^2}}$$
$$\frac{x^2+y^2}{x^2y^2}=\lim_{r\to\ 0} \frac{1}{r^2\cos^2\theta\sin^2\theta}=\infty\implies \lim_{(x,y)\to\ (0,0)} \frac{1}{1+\frac{x^2+y^2}{x^2y^2}}=0$$
| Directly in polar coordinates we obtain
$$\frac{x^2y^2}{x^2+x^2y^2+y^2}=\frac{r^2\cos^2\theta \sin^2\theta}{1+ r^2\cos^2\theta \sin^2\theta}\to \frac{0}{1+0}=0$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3829409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Minimize $(x+y)(y+z)(z+x)$ given $xyz(x+y+z) = 1$ $x,y,z$ are positive reals and I am given $xyz(x+y+z) = 1$. Need to minimize $(x+y)(y+z)(z+x)$. Here is my approach.
Using AM-GM inequality
$$ (x+y) \geqslant 2 \sqrt{xy} $$
$$ (y+z) \geqslant 2 \sqrt{yz} $$
$$ (z+x) \geqslant 2 \sqrt{zx} $$
So, we have
$$ (x+y)(y+z)(z+x) \geqslant 8xyz $$
Also, I got
$$ \frac{x+y+z+(x+y+z)}{4} \geqslant \bigg[ xyz(x+y+z) \bigg] ^{1/4} $$
$$ \therefore x+y+z \geqslant 2 $$
But, I am stuck here. Any hints ?
| For $x=y=z=\frac{1}{\sqrt[4]3}$ we get a value $\frac{8}{\sqrt[4]{27}}.$
We'll prove that it's a minimal value.
Indeed, we need to prove that $$\prod_{cyc}(x+y)\geq\frac{8}{\sqrt[4]{27}}$$ or
$$27\prod_{cyc}(x+y)^4\geq4096x^3y^3z^3(x+y+z)^3.$$
Now, let $x+y+z=3u$, $xy+xz+yz=3v^2$ and $xyz=w^3$.
Thus, we need to prove that $$(9uv^2-w^3)^4\geq4096u^3w^9$$ or $f(w^3)\geq0,$ where
$$f(w^3)=(9uv^2-w^3)^4-4096u^3w^9.$$
But it's obvious that $f$ decreases, which says that it's enough to prove our inequality for a maximal value of $w^3$, which by $uvw$ happens for equality case of two variables.
Since the last inequality is symmetric and homogeneous, it's enough to assume $y=z=1$ and we need to prove that:
$$27(x+1)^8\geq256x^3(x+2)^3,$$ which is true by AM-GM:
$$27(x+1)^8=27(x^2+2x+1)^4=27\left(3\cdot\frac{x^2+2x}{3}+1\right)^4\geq$$
$$\geq27\left(4\sqrt[4]{\left(\frac{x^2+2x}{3}\right)^3\cdot1}\right)^4=256x^3(x+2)^3$$ and we are done!
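A numerical cross-check of the claimed minimum (a Python/SciPy sketch using a local constrained solver from an arbitrary starting point, so it only corroborates, not proves, the value $\frac{8}{\sqrt[4]{27}}\approx 3.5095$):

import numpy as np
from scipy.optimize import minimize

obj = lambda v: (v[0] + v[1]) * (v[1] + v[2]) * (v[2] + v[0])
con = {'type': 'eq', 'fun': lambda v: v[0] * v[1] * v[2] * (v[0] + v[1] + v[2]) - 1}

res = minimize(obj, x0=[1.0, 0.5, 2.0], constraints=[con],
               bounds=[(1e-6, None)] * 3)
print(res.fun, 8 / 27 ** 0.25)   # both approximately 3.5095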
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3829530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Is $G = (\{x \in \mathbb{R} | x > 0\}, +)$ with $a + b \to a^b$ a group? Hi MathOverflow community,
I just stumbled upon the question in the title while learning for my Algebra exam. Intuitively, I would say no since:
$$
e+a = e^a = a \\
e = \sqrt[a]{a}
$$
which depends on a and is therefore not a single neutral element for all elements of the set. Is that the right argument or is there a way to argue that more elegantly?
| Another argument is that the operation is not associative, because in general $(a^b)^c \neq a^{(b^c)}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3829651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Need help in evaluating a Definite Integral I am stuck at this definite integral
$$
\int_0^{\pi/2} \frac{1}{1-(\sin x)(\cos y)}\, dx .
$$
I tried multiplying and dividing by $\cos x$ and then using integration by parts, but I am not able to proceed further. Someone please help me out.
| Hint:
Try using the Weierstrass substitution ( $\tan{\left(\frac{x}{2}\right)}=t$).
You will be left with the following integral $$\int_0^1 \frac{2}{t^2-2t\cos{y}+1} \; \mathrm{d}t=\int_0^1 \frac{2}{\left(t-\cos{y}\right)^2+\sin^2y} \; \mathrm{d}t$$
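A quick numerical check that the substitution preserves the value of the integral (a Python/SciPy sketch; the value of $y$ is an arbitrary choice):

import numpy as np
from scipy.integrate import quad

y = 0.7
lhs, _ = quad(lambda x: 1 / (1 - np.sin(x) * np.cos(y)), 0, np.pi / 2)
rhs, _ = quad(lambda t: 2 / (t**2 - 2 * t * np.cos(y) + 1), 0, 1)
print(lhs, rhs)   # the two values agree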
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3829793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Combinatoric meaning to $1+2+\dots+n=\frac{n(n+1)}{2}= {n+1 \choose 2}$ It is well known that
$$ \sum_{k=1}^{n} k = \frac{n(n+1)}{2} .$$
As the story goes, Gauss notices that there are $n/2$ pairs of numbers that add up to $n+1$, hence the formula above.
But obviously the right hand side is ${n+1 \choose 2}$, namely
$$ \sum_{k=1}^{n} k = {n+1 \choose 2} .$$
Is there a "combinatorial proof" of this second equation? I am trying to see the connection between the sum (and Gauss' method) to the problem of choosing $2$ objects from $n+1$ objects.
| This can be viewed as an instance of Vandermonde's Identity
$$
\begin{align}
\sum_{k=1}^nk
&=\sum_{k\in\mathbb{Z}}\binom{k}{k-1}\binom{n-k}{n-k}\tag1\\
&=(-1)^{n-1}\sum_{k\in\mathbb{Z}}\binom{-2}{k-1}\binom{-1}{n-k}\tag2\\
&=(-1)^{n-1}\binom{-3}{n-1}\tag3\\[3pt]
&=\binom{n+1}{n-1}\tag4\\[3pt]
&=\binom{n+1}{2}\tag5
\end{align}
$$
Explanation:
$(1)$: $\binom{k}{k-1}=k\,[k\ge1]$ and $\binom{n-k}{n-k}=[k\le n]$
$(2)$: negative binomial coefficients
$(3)$: Vandermonde's Identity
$(4)$: negative binomial coefficients
$(5)$: symmetry of Pascal's Triangle
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3829923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Make a singular matrix invertible Assume we have a positive semi-definite matrix $A\in\mathbb{R}^{d\times d}$, defined as
$$A = \sum_{i=1}^n \alpha_i x_i x_i^\top$$
such that $\alpha_i \in [0,1]: \sum_{i = 1}^n \alpha_i = 1$, $x_i \in \mathbb{R}^d$, $n>d$.
Further, assume that $A$ is a singular matrix, $\operatorname{rank}(A)<d$, and that we have control over the terms $\alpha_i$, for $i = 1,...,n$.
Can we make $A$ invertible by removing one or more term ($\alpha_i = 0$) in the sum, or for some other choice of $\alpha_i$?
| Since $\textrm{rank}(A) < d$, there is a non-trivial kernel or null space of $A$. Let $\mathbf{v}$ be in that null space: $A\mathbf{v} = \mathbf{0}$, the zero vector. Now consider $\mathbf{v}^{\mathsf{T}}A\mathbf{v}$.
\begin{equation}
\mathbf{v}^{\mathsf{T}}A\mathbf{v} ~=~ \mathbf{v}^{\mathsf{T}}\mathbf{0} ~=~ 0,
\end{equation}
but also
\begin{eqnarray}
\mathbf{v}^{\mathsf{T}}A\mathbf{v} &=& \mathbf{v}^{\mathsf{T}}\left(\sum_{i=1}^{n}\alpha_i\mathbf{x}_i\mathbf{x}_i^{\textsf{T}}\right)\mathbf{v}\\
&=& \sum_{i=1}^{n}\alpha_i(\mathbf{v}^{\mathsf{T}}\mathbf{x}_i)(\mathbf{x}_i^{\textsf{T}}\mathbf{v})\\
&=& \sum_{i=1}^{n}\alpha_i(\mathbf{v}^{\mathsf{T}}\mathbf{x}_i)^2,
\end{eqnarray}
because $\mathbf{v}^{\mathsf{T}}\mathbf{x}_i = \mathbf{x}_i^{\mathsf{T}}\mathbf{v}$. Since all the $\alpha_i$ are positive, every term in the sum in the final line is non-negative. In particular, there are no negative terms canceling out positive terms. That means that every single term in the sum is zero, so each dot-product (inner product) is zero. Removing an $\mathbf{x}_i$ will not fix that.
This shows that removing an $\mathbf{x}_i$ does not yield a new matrix with a trivial null space.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3830062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the trick needed to compute the integral $\int \frac{1}{x^{2}+x+1} dx$? Working through Spivak's Calculus and using old assignments from the course offered at my school I'm working on the following problem, asking me to find the integral $$\int \frac{1}{x^{2}+x+1} dx$$
Looking through Spivak and previous exercises I worked on, I thought using a partial fraction decomposition would be the technique, but even in Spivak the only exercises I've seen which are similar involve:
$$\int \frac{1}{(x^{2}+x+1)^{n}} dx\ ,\text{where}\ n> 1$$
In which case it is pretty straightforward to solve. So there must be a reason why the exercise isn't presented unless it is so straightforward.
Integration by parts and substitution (at least for now) have proven fruitless as well. So I come here to ask if I'm missing any special trick to compute this integral ?
| Complete the square: $x^{2}+x+1=\left(x+\tfrac12\right)^{2}+\tfrac34$, so $\int \frac{1}{(x+1/2)^2 + 3/4}\, dx= \frac{2}{\sqrt3}\tan^{-1}\frac{2x+1}{\sqrt3} +c$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3830204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
} |
Boundedness of $\log(\|A^{-1}\|)/\log(\|A\|)$ as $A$ ranges over $SL(n,\mathbb R)$ I wonder if the quantity $\frac{\log(\|A^{-1}\|)}{\log(\|A\|)}$ is bounded by a constant number that only depends on the size $n$ of the matrix as $A$ ranges over $SL(n,\mathbb R)$ (the determinant one matrices). I couldn't find a counterexample, nor can I prove it.
Here I require $\|\cdot \|$ as the operator norm of the matrix w.r.t the 2-norm of the Euclidean spaces. I am not sure if the answer is the same regardless of the norm of the matrix. Please let me know.
| Not really an answer I want, but this should be true for the spectral norm. If $A\in SL(n,\mathbb R)$, then so is $A^{T} A$, and we sort its spectrum as follows:
$$0< |a_1| \le \cdots \le |a_n|.$$
Since the product $|a_1 \cdots a_n|=1$, we have $|a_1 a_n^{n-1}|\ge 1$. Therefore
$$\frac{\log(\|A^{-1}\|)}{\log(\|A\|)}= \frac{\log(|a_1|^{-1/2})} {\log(|a_n|^{1/2})}=
\frac{\log(|a_1^{-1}|)} {\log(|a_n|)} \le n-1.$$
Let me know of any mistake.
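A quick numerical experiment supporting the bound (a Python/NumPy sketch with random matrices rescaled to $|\det A|=1$; this only illustrates, it does not replace the argument above):

import numpy as np

rng = np.random.default_rng(0)
n, worst = 4, 0.0
for _ in range(10000):
    A = rng.normal(size=(n, n))
    A /= np.abs(np.linalg.det(A)) ** (1 / n)           # rescale so |det A| = 1
    num = np.log(np.linalg.norm(np.linalg.inv(A), 2))  # log of spectral norm of A^{-1}
    den = np.log(np.linalg.norm(A, 2))                 # log of spectral norm of A
    worst = max(worst, num / den)
print(worst, n - 1)   # the worst observed ratio stays below n - 1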
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3830357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How to show that $T_{(1,0)}\mathbb S^1 \cong \operatorname{span}(\{e_2\})$? I want to show that $T_{(1,0)}\mathbb S^1 \cong \operatorname{span}(\{e_2\})$ using the stereographic chart and using the definition that $T_xM$ is the set of velocity vectors $v$ where each vector $v$ is the equivalence class of curves that goes through point $x$ and tangent to each other.
I got so far the following:
*
*Since $\varphi:U\to\mathbb{R}$ is given by $\varphi(x,y)=\frac{x}{1-y}$ and $v=\frac{d}{dt}(\varphi\circ \gamma)(t)\Big|_{t=0}$ for some $\gamma:I\to \mathbb S^1$ with $\gamma(0)=x=(1,0)$, we can compute that
\begin{align}
v& =\frac{d}{dt}(\varphi\circ \gamma)(t)\Big|_{t=0}\\
&=\frac{d}{dt}\Big(\frac{x(t)}{1-y(t)}\Big)\Big|_{t=0}\\
&=\frac{x^{\prime}(t)(1-y(t))-x(t)(-y^{\prime}(t))}{(1-y(t))^2}\Big|_{t=0}\\
&=\frac{x^{\prime}(0)(1-y(0))+x(0)y^{\prime}(0))}{(1-y(0))^2}\\
&=x^{\prime}(0)+y^{\prime}(0).
\end{align}
I don't know how to interpret that and how to actually show that $T_{(1,0)}\mathbb S^1$ should be the span of $e_2$.
*I know that if $i:\mathbb S^1\to\mathbb{R}^2$ is an inclusion, then $$di_x:T_x \mathbb S^1\to T_{i(x)}\mathbb{R}^2\text{ is
injective}.$$ So, we need somehow show $di_x(v)=\operatorname{span}(\{e_2\})$.
What should I do?
| Let $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ given by $f((x,y))=x^2+y^2$ then $1$ is a regular value of $f$ and hence $f^{-1}(1)$ is a submanifold of $\mathbb{R}^2$ but $S^1=f^{-1}(1)$ and furthermore given $p \in S^1$, $T_p (S^1)=Ker(df_p)$
and calculating we obtain that $df_p=2(p_1,p_2)$, so it is easy to see that $$T_p(S^1)=\{x \in \mathbb{R}^2 ; \langle p,x \rangle =0\}=span(p)^{\perp}$$
But $\mathbb{R}^{2}=span(p) \oplus span(p)^{\perp}$ and $dim(span(p))=1$, so $dim(span(p)^{\perp})=1$; hence it is only necessary to find one element of $span(p)^{\perp}$ to characterize it. It is easy to see that if $p=(p_1,p_2)$ then $(-p_2,p_1) \in span(p)^{\perp}$, and it follows that
$$T_p(S^1)= span((-p_2,p_1))$$
And in particular in your problem taking $p=(1,0)$ then $T_{(1,0)}(S^1)=span((0,1))$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3830512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Are endpoints critical points? In the function $f(x)=\max\{\sin (x),\cos (x)\}$ for all $x$ belonging to $(0,2π)$ , can we count end points of the domain as critical points, since a function is not differentiable at endpoints?
| $f(x)=\max(\sin x, \cos x) \implies f(0)=1$ and $f'(0)=-\sin 0=0$, which is finite.
Next, $f(2\pi)=1$ and $f'(2\pi)=-\sin(2 \pi)=0$, which again is finite. So, $f'(0)$ and $f'(2\pi)$ being finite, $f(x)$ is differentiable at these end points.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3830707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Real roots of $x^7+5x^5+x^3−3x^2+3x−7=0$? The number of real solutions of the equation,
$$x^7+5x^5+x^3−3x^2+3x−7=0$$
is
$$(A) 5 \quad (B) 7 \quad (C) 3 \quad (D) 1.$$
Using Descartes rule we may have maximum no. of positive real roots is $3$ and negative real root is $0.$ So there can be either $3$ real roots or $1$ real root but how to conclude what will be the no. of real roots exactly. Can you please help me?
| It is easy, by the rational root theorem, to find that $1$ is the only rational root. Then, dividing the polynomial by $x-1$ you get
$$
x^6+x^5+6 x^4+6 x^3+7 x^2+4 x+7
$$
that, by Descartes' Rule, has zero positive roots.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3830829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
why use $A = \cap_{i=1}^{\infty} E_i,\ E_{i+1} \subset E_{i}$, instead of $A=\lim_{i\to\infty}E_i$? In real analysis, why use $A = \cap_{i=1}^{\infty} E_i$, $E_{i+1} \subset E_{i}$, instead of $A=\lim_{i\to\infty}E_i$? Are they the same thing?
Is it because we prefer to use limits on real numbers rather than on sets?
| They are not the same. The condition you write implies the existence of the limit, but the converse does not hold: the limit can also exist, e.g., if you have an increasing sequence of sets.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3830947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
If $\alpha<\beta$, there exists a ordinal $\gamma$ such that $\alpha + \gamma = \beta$. Why? I want to prove the following proposition.
proposition: Let $\alpha,\beta$ be ordinals. If $\alpha<\beta$, there exists a ordinal $\gamma$ such that $\alpha + \gamma = \beta$.
I am trying to prove this by transfinite induction. However, I have some trouble with the case where $\beta$ is a limit ordinal. (I know the definition of the sum of an ordinal and a limit ordinal.)
please give me some ideas.
| As sets, $\alpha$ is a subset of $\beta$. Consider $C=\beta\setminus\alpha$, the set-theoretic difference. Then $C$ is a well-ordered set, having the same order type as some ordinal $\gamma$. Then $\beta=\alpha+\gamma$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3831046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Determining the likelihood of all `f` "bad" samples being within `d` distance from each other in an ordered set of `n` samples If I have n ordered samples with f that are "bad", what's the likelihood that all of the f samples are within d distance of both its previous/next neighboring "bad" sample?
In case it helps, in my problem, n=21, f=3, d=5.
Here's an example in which all f bad samples are within d distance of each other (where 1 = "good" and 0 = "bad"). (In fact, the max distance between bad samples is between the 1st sample and the 4th sample, which is a distance of 4-1=3, which is less than my threshold of d=5.)
01101011111111111111
and here's an example in which not all f bad samples are within d distance of each other (since the last bad sample is clearly more than 5 samples away from its left "bad samples" neighbor. (It doesn't have a right "bad sample" neighbor.)
011101111111110111111
(Just to reiterate: a "bad sample" must be within d distance of both its left/right "bad sample" neighbor. But of course, if it's the first or last bad sample, it only has one immediate neighbor. And, the f bad samples are assumed to be randomly distributed among the n total samples)
(Note: I'm not sure which branch of mathematics this would necessarily fall under, so if someone thinks I mis-guessed the tag, please suggest different tags that I ought to use.)
| There are $\binom{n}f$ possible orderings. For each of the first $f-1$ bad experiments, we subtract the orderings where that bad experiment is followed by $d$ good experiments (because this would imply the distance to the next is more than $d$). Such a sequence can be chosen by choosing a sequence of $f$ zeroes and $n-d$ ones, and then inserting the remaining $d$ ones right after that particular bad experiment. This can be done in $\binom{n-d}f$ ways. Doing this for each of the $f-1$ bad experiments, you get
$$
\binom{n}f-(f-1)\binom{n-d}f
$$
However, this is still not correct. The problem is that sequences with two bad experiments whose distance to the next is more than $d$ have been doubly subtracted. These must be added back in. For each pair of bad experiments among the first $f-1$, we can use a similar method to count sequences where these are both followed by $d$ ones; set aside $2d$ ones, arbitrarily arrange the remaining $f$ zeroes and $n-2d$ ones, then insert $d$ ones after each bad experiment in the pair. We are now at
$$
\binom{n}f-(f-1)\binom{n-d}f+\binom{f-1}d\binom{n-2d}f
$$
However, we are still not done, since it turns out that the experiments with three distance which are more than $d$ have not been correctly counted. To fix this, and all further overlaps, you need to use the principle of inclusion exclusion. The final result is
$$
\sum_{k= 0}^{\lfloor(n-f)/d\rfloor}(-1)^k\binom{f-1}k\binom{n-kd}{f}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3831350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Bounds on polynomial roots I am looking to find the minimum absolute value of the roots of the following polynomial:
$$ux^M - x + 1$$
where $u$ and $M$ are constants. Does a closed form upper or lower bound expression exist?
M is a positive integer.
| Consider the substitution $x\mapsto1/z$ to get
$$z^M-z^{M-1}+u$$
and hence the roots $z$ are bounded above by
$$1+\max(1,|u|)$$
and your roots are bounded below by
$$\frac1{1+\max(1,|u|)}$$
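A quick numerical illustration of the bound (a Python/NumPy sketch; the values of $u$ and $M$ are arbitrary):

import numpy as np

for u, M in [(0.3, 5), (2.0, 7), (-1.5, 4)]:
    coeffs = [u] + [0] * (M - 2) + [-1, 1]   # coefficients of u*x^M - x + 1, highest power first
    roots = np.roots(coeffs)
    bound = 1 / (1 + max(1, abs(u)))
    print(min(abs(roots)) >= bound)          # True in each case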
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3831479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to prove the limit of this integral? How to prove, by the Lebesgue Dominated Convergence Theorem, that the following limit is $f = 0$:
$$f = \lim_{n\to \infty}\int_0^1 \frac{e^{-nt}-(1-t)^n}{t}\, dt=0$$
I believe I can use the equality: $1-e^{-nt}(1-t)^n=\int_0^t {ne^{-n\tau}\tau(1-\tau)^{n-1}}\, dt$; but after I do this equality n times, I find I get
$$\lim_{n\to \infty}\int_0^1 \frac{e^{-nt}}{t} dt$$ However, this will explode when t = 0 but I do not know how to find the function g $\geq \lvert f\rvert$ that to apply the Lebesgue Dominated Convergence Theorem to switch the $\lim$ and $\int$ position.
| $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\bbox[5px,#ffd]{\lim_{n \to \infty}\int_{0}^{1}
{\expo{-nt} - \pars{1 - t}^{n} \over t}\,\dd t}
\\[5mm] = &\
\lim_{n \to \infty}\bracks{%
\int_{0}^{1}{\expo{-nt} - 1 \over t}\,\dd t +
\int_{0}^{1}{1 - \pars{1 - t}^{n} \over t}\,\dd t}
\\[5mm] = &\
\lim_{n \to \infty}\bracks{%
-\int_{0}^{n}{1 - \expo{-t} \over t}\,\dd t +
\int_{0}^{1}{1 - t^{n} \over 1 - t}\,\dd t}
\\[5mm] = &\
\lim_{n \to \infty}\bracks{-\operatorname{Ein}\pars{n} +
H_{n}}
\end{align}
$\ds{\operatorname{Ein}}$ is the
Complementary Exponential Integral And $\ds{H_{z}}$ is a
Harmonic Number.
$$
\mbox{As}\ n \to \infty,\quad
\left\{\begin{array}{lcll}
\ds{\operatorname{Ein}\pars{n}} & \ds{\sim} &
\ds{\ln\pars{n} + \gamma + {\expo{-n} \over n}} &
\ds{\color{red}{\large\S}}
\\
\ds{H_{n}} & \ds{\sim} &
\ds{\ln\pars{n} + \gamma + {1 \over 2n}} &
\ds{\color{blue}{\large\#}}
\end{array}\right.
$$
$$
\mbox{such that}\quad\bbox[5px,#ffd]{\int_{0}^{1}
{\expo{-nt} - \pars{1 - t}^{n} \over t}\,\dd t} \sim
{1 \over 2n}\quad\mbox{as}\quad n \to \infty
$$
\begin{align}
& \mbox{}
\\ &\ \implies
\bbx{\bbox[5px,#ffd]{\lim_{n \to \infty}\int_{0}^{1}
{\expo{-nt} - \pars{1 - t}^{n} \over t}\,\dd t} = 0} \\ &
\end{align}
$\ds{\color{red}{\large\S}}$:
See this link and this one.
$\ds{\color{blue}{\large\#}}$: The asymptotic $\ds{H_{z}}$ behavior is given in the
above cited link.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3831589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Sum of binomial terms, but with fixed exponents and a variable proportion I am working on a problem where I have to calculate the summation of "binomial terms":
$\sum_{t=0}^{N} (t/N)^k (1 - t/N)^{N-k}$
where k and N are fixed (we are indexing over t). Equivalently, we can think of this as
$\sum_{t=0}^{1} (t)^k (1 - t)^{N-k}$ where $t=0, \, \frac{1}{N}, \, \frac{2}{N}, ..., 1$
In plain English, we are summing binomial terms over equally-spaced proportions, e.g. for N=10 we would be summing over t = 0/10, 1/10, 2/10, ..., 9/10, 10/10 (for some known k, N).
Note that the usual binomial expansion handles the summation for indexing by variable k, but in my case k will be fixed.
For context, I am trying to determine the average of a discretely sampled Bezier curve (sampled across consistent intervals). This requires calculating this sum over all k<N, for which there may even be a shortcut that I'm unaware of. The continuous case is actually easy (just average the control points) but I'm struggling with the discrete case.
Thanks in advance.
| I'm not sure there is an exact solution, but I think I found an approximation: If you rewrite the summand using $x=e^{\log x}$ you get
\begin{align}
S_n &= \sum_{t=1}^{n}e^{k \log \frac{t}{n}}e^{(n-k)\log (1-\frac{t}{n})} \approx \sum_{t=1}^{n}e^{k \log \frac{t}{n}}e^{-(n-k)\frac{t}{n}} \\
&= \sum_{t=1}^{n}e^{k \log t-t+\frac{kt}{n} - k \log n} = e^{-k \log n}\sum_{t=1}^{n}t^k e^{-c_1 t}
\end{align}
For some constant $c_1$. I don't know if the last expression exists in the closed form, but it is comparable to the lower incomplete Gamma function:
$$
S_n \approx e^{-k \log n}\int_{0}^{n}x^ke^{-c_1 x}dx = \frac{e^{-k \log n}}{c_1^{k+1}}\int_{0}^{c_1n}v^ke^{-v}dv = \frac{e^{-k \log n}}{c_1^{k+1}}\gamma(k+1, c_1n)
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3831741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find the Eigenvectors $T\left(\left[\begin{array}{ll}a & b \\ c & d\end{array}\right]\right)=\left[\begin{array}{ll}d & b \\ c & a\end{array}\right]$ $V=$ $M_{2 \times 2}(\mathbb{R})$ and $T\left(\left[\begin{array}{ll}a & b \\ c & d\end{array}\right]\right)=\left[\begin{array}{ll}d & b \\ c & a\end{array}\right]$
I think the matrix associated to T is
$A= $$\left[\begin{array}{ll}0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0\\0 & 0 & 1 & 0 \\1 & 0 & 0 & 0 \end{array}\right]$
Then we can get the eigenvalues if we swap $R_4$ to $R_1$
$A'= $$\left[\begin{array}{ll}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\0 & 0 & 1 & 0 \\0 & 0 & 0 & 1 \end{array}\right]$
Then the eigenvalue is 1, but from here I'm stuck on finding the eigenvectors. Or is something wrong with my process? Thank you!
| The eigenvalues and eigenvectors of $T$ may be found directly from the given formula
$T \left ( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right ) = \begin{bmatrix} d & b \\ c & a \end{bmatrix}, \tag 1$
for we have
$T^2 \left ( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right ) = T \left ( \begin{bmatrix} d & b \\ c & a \end{bmatrix} \right ) = \begin{bmatrix} a & b \\ c & d\end{bmatrix}; \tag 2$
thus,
$T^2 = I, \tag 3$
or
$T^2 - I = 0; \tag 4$
if $\mu$ is an eigenvalue of $T$, that is, if
$TZ = \mu Z\tag 5$
for some
$0 \ne Z \in M_{2 \times 2}(\Bbb R), \tag 6$
then
$T^2Z = T(TZ) = T(\mu Z) = \mu TZ = \mu (\mu Z) = \mu^2Z, \tag 7$
whence
$(\mu^2 - 1)Z = \mu^2 Z - Z = T^2 Z - Z = (T^2 - I)Z = 0, \tag 8$
so in light of (6) we have
$\mu^2 - 1 = 0, \tag 9$
which implies
$\mu = \pm 1; \tag{10}$
now if
$\mu = 1, \tag{11}$
$\begin{bmatrix} d & b \\ c & a \end{bmatrix} = T \left (\begin{bmatrix} a & b \\ c & d \end{bmatrix} \right ) =
\mu \begin{bmatrix} a & b \\ c & d \end{bmatrix} =
\begin{bmatrix} a & b \\ c & d \end{bmatrix}, \tag{12}$
which forces
$a = d; \tag{13}$
an eigenmatrix for eigenvalue $1$ thus takes the form
$\begin{bmatrix} a & b \\ c & a \end{bmatrix}, \tag{14}$
where $a, b, c \in \Bbb R$ are arbitrary. It is now easy to see that the $1$-eigenspace of $T$ is of dimension $3$. On the other hand, when
$\mu = -1, \tag{15}$
$\begin{bmatrix} d & b \\ c & a \end{bmatrix} = T \left (\begin{bmatrix} a & b \\ c & d \end{bmatrix} \right ) =
\begin{bmatrix} -a & -b \\ -c & -d \end{bmatrix}, \tag{16}$
whence,
$a = - d \tag{16}$
$b = -b, \; c = -c ,\Longrightarrow b = c = 0; \tag{17}$
the eigenmatrix thus becomes
$\begin{bmatrix} a & 0 \\ 0 & -a \end{bmatrix}, \; a \in \Bbb R; \tag{18}$
it is clear that the $-1$ eigenspace of $T$ is of dimension $1$.
Since the sum of the dimensions of the $1$ and $-1$ eigenspaces is
$4 = \dim M_{2 \times 2}(\Bbb R), \tag{19}$
we conclude there are no more eigenvectors/eigenvalues to be had.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3831887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How many positive integer solutions exist for $[\frac{x}{19}]=[\frac{x}{20}]$, where $[x]$ denotes the Greatest integer function Question
How many positive integer solutions exist for $[\frac{x}{19}]=[\frac{x}{20}]$, where $[x]$ denotes the Greatest integer function
What I tried
I took the following cases one by one,
CASE $1$
$$[\frac{x}{19}]=[\frac{x}{20}]=1$$
All numbers from $20$ to $37$ should work for this, thus a total of $18$ solutions in this case.
CASE $2$
$$[\frac{x}{19}]=[\frac{x}{20}]=2$$
All numbers from $40$ to $56$ should work for this, thus a total of $17$ solutions in this case.
Upon continuing this process, we reach the case where there is only one possible solution.
Thus the number of cases is $18+17+16+...+2+1$ which is equal to $171$
There is also the case of $$[\frac{x}{19}]=[\frac{x}{20}]=0$$
This case will have $18$ solutions, from $1$ till $18$.
Thus the total number of solutions is $171+18$ which is $189$
I am not sure if my answer is correct (maybe I am missing a few cases).
What I am looking for is a verfication of my method and answer, and maybe a more concrete solution which will work in cases where $[\frac{x}{m}]=[\frac{x}{n}]$ where m and n are not consecutive natural numbers.
Thank you so much in advance!
Regards
| $$\frac{x}{19}-1<\bigg\lfloor\frac{x}{19}\bigg\rfloor \le \frac{x}{19}$$
$$\frac{x}{20}-1<\bigg\lfloor\frac{x}{20}\bigg\rfloor \le \frac{x}{20}$$
$$\frac{x}{19}-1<\frac{x}{20}\implies x < 380$$
$$\frac{x}{20}-1<\frac{x}{19}\implies x > -380$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3832017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Exercise 25, Chapter 24 of Spivak's Calculus 3rd Edition Theorem: Let $\{f_n\}$ be sequence of integrable functions on interval $I=[a,b]$ and $f$ be the uniform limit of $\{f_n\}$ on the interval, then prove that $f$ is integrable and $\int_a^b f=\lim_{n\to \infty} \int_a^bf_n$.
Proof:
In this case, it is not known before hand that $f$ is integrable (#). However, it can be proven that $f$ is actually integrable.
It will suffice to show that for every $\epsilon \gt 0$ there exists a partition $P$ of $I$ such that $U(f,P)-L(f,P)\lt \epsilon $, where $U(f,P), L(f,P)$ are upper sum and lower sum respectively as used in Darboux's integrals.
Since, $f_n$ is (are) integrable, for $\epsilon/3\gt 0$ there exists a partition $P=\{a=y_0,y_1,\cdots, y_n=b\}$ of $I$ such that $U(f_n,P)-L(f_n,P)\lt \epsilon \tag{2}$ and by uniform convergence of $f_n$, we also have that $\exists N $ such that for all $x\in I$ and for all $n\ge N$, we have $|f_n(x)-f(x)|\lt \frac{\epsilon}{3} \tag{3}$
$U(f,P)-U(f_n,P)=\sum_{i=1}^{n}(M_i-M_i')\Delta_i=\sum_{i=1}^{n}(M_i-M_i')(y_i-y_{i-1})$, where $M_i=\sup f(x)$ on $[y_{i-1}, y_i]$ and $M_i'=\sup f_n(x)$ on $[y_{i-1}, y_i]$.
Question: How can it be shown that $U(f,P)-U(f_n,P)\lt \epsilon/3$? If it could be shown then similar arguments for lower sum and subsequent use of triangular inequality will prove that $f$ is integrable on $I$.
(#): If it were known in advance that $f$ is integrable on $I$, then clearly for $\frac{\epsilon}{b-a} \gt 0 \;\;\exists N_\epsilon$ such that for all $x\in I$ and for all $n\ge N_\epsilon$, we have $|f_n(x)-f(x)|\lt \frac{\epsilon}{b-a} \tag {1}$
Therefore, $|\int_a^bf_n(x)-\int_a^b f(x)|=|\int_a^b(f_n(x)-f(x))|\le \int_a^b|(f_n(x)-f(x))|\le \int_a^b \frac{\epsilon}{b-a} =\epsilon \implies \lim_{n\to \infty}\int_a^bf_n(x)=\int_a^b f(x)$. Proved.
| How can it be shown that $U(f,P) - U(f_n,P) < \epsilon/3$?
We have $f_n \to f$ uniformly on $[a,b]$. For any $\epsilon > 0$ there exists $N$ such that $f(x) - f_n(x) < \frac{\epsilon}{4(b-a)}$ for all $n \geqslant N$ and for all $x \in I_j = [y_{j-1},y_j]$ where $I_j$ is any partition subinterval.
For any $n \geqslant N$ and all $x \in I_j$, we have
$$f(x) < \frac{\epsilon}{4(b-a)} + f_n(x) \leqslant \frac{\epsilon}{4(b-a)} + \sup_{x \in I_j}f_n(x) = \frac{\epsilon}{4(b-a)} +M_n(I_j),$$
and, consequently,
$$M(I_j) = \sup_{x \in I_j}f(x) \leqslant \frac{\epsilon}{4(b-a)} +M_n(I_j)$$
Thus,
$$U(f,P) - U(f_n,P) = \sum_{j=1}^n \left(\, M(I_j)- M_n(I_j)\,\right)\, |I_j|\leqslant \frac{\epsilon}{4(b-a)}\sum_{j=1}^n |I_j| = \frac{\epsilon}{4} < \frac{\epsilon}{3}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3832191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Prove $a\leq b$ or $b \leq a$ in Coq I am reading the software foundations for Coq, and one property that I would like to prove is the excluded middle for $\leq$. The definitions are the following
Inductive nat : Type :=
| O
| S (n : nat).
Inductive bool : Type :=
| true
| false.
Fixpoint leb (n m : nat) : bool :=
match n with
| O => true
| S n' =>
match m with
| O => false
| S m' => leb n' m'
end
end.
Notation "x <=? y" := (leb x y) (at level 70).
Theorem EQEM: forall n m, n <=? m = true \/ n <=? m = false.
The proof is easy but how to implement it in Coq? Since the function is to type bool and terminates, the output is always a bool, and that should complete the proof. Induction seems to lead nowhere, although an infinite descent argument also concludes the matter trivially, perhaps there is something I missed...
| And here are the answers, thank you for the very helpful comments. First, as was pointed out, a regular induction can solve the problem. Some attention must be paid to the order of operations, it is possible to get stuck.
Theorem EQEM: forall n m, n <=? m = true \/ n <=? m = false.
induction n. intros m. destruct m. left. reflexivity.
left. reflexivity. intros m. induction m. right.
reflexivity. specialize (IHn m). simpl. exact IHn.
Qed.
However, the substitution proof works as well.
Theorem LEMB: forall b: bool, b = true \/ b = false.
Proof. destruct b. left. reflexivity. right. reflexivity. Qed.
Theorem EQEM: forall n m, n <=? m = true \/ n <=? m = false.
Proof. pose LEMB as H. intros n m. specialize (H (n <=? m)).
exact H. Qed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3832321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Restrictions on laws I'm wondering about the restrictions. My doubt is, for example, with $\log_a(b)=c\implies a^c=b$: how would anyone add the restrictions for this? I know the argument and the base of a log have to be $>0$ excluding $1$, hence $a,b\in (0,+\infty)\setminus \{1\}$, but wouldn't that mean that $1^1=b$ is not valid, while it is obviously valid for $b=1$? I don't know, I'm just confused; hopefully someone shines a light on everything, and if it's not too much to ask, please tell me how to express restrictions in such complex situations. Another example would be $x^y=0$: the restrictions in my head would be $x,y>0$, because we want to avoid cases like $0^0$ or $0^{-2}$. BUT, according to what I wrote, $x$ can't be $0$, yet for example when $x=0$, $0^y=0$ is true for $y>0$, but WE SAID that $x$ couldn't be $0$! So I'm just generally confused by all this; hopefully the two examples I gave help to visualise the confusion I'm having.
| The logarithmic function and the exponential function are inverses of each other.
If a function is bijective it has an inverse.
So the definition of the exponential function must include some restrictions to make the function bijective.
$y=a^x$, with $x$ any real number (the domain), $a>0$ and $a\not=1$, and $y=a^x>0$ (the range), is a bijection.
Its inverse is $x=\log_a y$, so $y>0$ (domain), $x$ a real number (range), and $a>0$ with $a\not=1$, the same as for the exponential function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3832479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Algorithm for taking square root of a matrix Suppose $A = A^T$ and suppose the entries of $A$ are in $\mathbb{Z}^+$. I want to find all matrices
$M$ with natural entries so that:
$$M^2 = A$$
How can one do this? I know techniques that will get a square-root of an arbitrary matrix, but I want the full set. I want to be able to do this efficiently for large matrices ~$100 \times 100$.
Of course, the set must be finite because we are working over positive integers and the matrix $A$ gives upper bounds.
| Find an invertible matrix $B$ and diagonal matrix $D$ such that $D=B^{-1}AB$. Then take all square roots $D_1,...,D_m$ (all of them diagonal, there are $m\le 2^n$ of these where $n$ is the size of $A$ because every non-negative number has at most 2 square roots) of $D$ and all matrices $BD_iB^{-1}$. Look which ones of these matrices are over natural numbers.
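A minimal numerical sketch of this recipe (assuming numpy is available, that $A$ is symmetric positive semidefinite, and that its eigenvalues are distinct; the matrix A below and the rounding tolerance are made-up choices for illustration only):
import itertools
import numpy as np

def natural_square_roots(A, tol=1e-8):
    # Diagonalize A = B diag(w) B^T, form every candidate B D_i B^T with
    # D_i a diagonal square root of diag(w), and keep the candidates whose
    # entries are (numerically) natural numbers.
    w, B = np.linalg.eigh(A)
    roots = []
    for signs in itertools.product([1, -1], repeat=len(w)):
        D_i = np.diag(np.array(signs) * np.sqrt(np.maximum(w, 0)))
        M = B @ D_i @ B.T
        if np.all(M > -tol) and np.allclose(M, np.round(M), atol=tol):
            roots.append(np.round(M).astype(int))
    return roots

A = np.array([[5, 4], [4, 5]])   # hypothetical example: both [[2,1],[1,2]] and [[1,2],[2,1]] square to A
for M in natural_square_roots(A):
    print(M)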
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3832749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Let $R$ be a region bounded by $y=x^2$, $y=2x$, and $y=3$. Solid $S$ is obtained by rotating region $R$ about $x=-1$.
Let $R$ be a region bounded by $y=x^2$, $y=2x$, and $y=3$. Solid $S$ is obtained by rotating region $R$ about $x=-1$. UPDATE: the region R is the region bounded below the line y=3.
*
*Write the integral for the volume of $S$ using the washer method.(no need to compute it)
*Find the volume of $S$ using the shell method.
Can someone help me with this problem please?
So for the first part of the problem, I am getting $$V = \pi \int_0^3(\sqrt y - (-1)) dy$$. I'm not sure if I did it right.
For the second part, I got the radius is $x+1$ and the height is $3$.
Then $$V = 2\pi\int3(x+1)dx$$ (I'm not sure what the bounds are).
I am having trouble with this problem. Please help me out. Thank you. Also, I am not familiar with the proper tools on this site for math symbols and such. Sorry.
| There is some ambiguity here because the region $R$ is not clearly defined. There are two possible enclosed regions, shown in the figure. I have elected to use $R = B$ in the figure shown. However, the solution for $R = A$ is substantially similar, with only minor modifications.
To see how we would set it up, we have to consider a representative washer for a corresponding $y$-coordinate. We first observe that the region $R$ is given by $y \ge 3$, $y \le 2x$, and $y \ge x^2$. Hence the outer radius of the washer is given by $r_o(y) = \sqrt{y} + 1$: note that because $R$ is rotated about the axis $x = -1$ and not the $x$-axis, we must add $1$ the radii.
Similarly, the inner radius of the washer is $r_i(y) = y/2 + 1$. Thus the differential volume of a representative washer is $$dV = \pi(r_o^2(y) - r_i^2(y)) \, dy = \pi\left(\left(\sqrt{y} + 1\right)^2 - (y/2 + 1)^2\right) \, dy.$$ The total volume is found by integrating over $y \in [3, 4]$, since $(2,4)$ is the common intersection point of $y = x^2$ and $y = 2x$ and is the upper bound for the $y$-interval that contains $R$. Hence $$V = \int_{y=3}^4 \pi\left(\left(\sqrt{y} + 1\right)^2 - (y/2 + 1)^2\right) \, dy.$$
For the shell method, we integrate with respect to $x$. This is a bit more complicated because while the interval of integration is $x \in [1.5, 2]$, the lower boundary of the shell's height is not continuous. The upper boundary is of course $y = 2x$, but when $x \in [1.5, \sqrt{3}]$, the lower boundary is $y = 3$, and when $x \in (\sqrt{3}, 2]$, the lower boundary is $y = x^2$. So for the first part, a representative shell has radius $r = x + 1$ (again, remembering that the axis of revolution is $x = -1$) and height $h_1 = 2x - 3$; thus the differential volume is $$dV_1 = 2\pi r h_1 \, dx = 2\pi(x+1)(2x-3) \, dx.$$ For the second part, the representative shell has again radius $r = x + 1$, but now the height function is $h_2 = 2x - x^2$, so $$dV_2 = 2\pi r h_2 \, dx = 2\pi(x+1)(2x-x^2) \, dx.$$ Thus our total volume is $$V = \int_{x=3/2}^{\sqrt{3}} 2\pi(x+1)(2x-3) \, dx + \int_{x=\sqrt{3}}^2 2\pi(x+1)(2x-x^2) \, dx.$$ You may perform the computations to verify both methods yield the same volume of $$\left(\frac{91}{12} - 4 \sqrt{3}\right)\pi.$$
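If one wants a numerical cross-check of the two set-ups, here is a short sketch using scipy (the exact value $(91/12-4\sqrt3)\pi \approx 2.059$ is the one quoted above):
import numpy as np
from scipy.integrate import quad

# Washer method: integrate in y over [3, 4].
washer, _ = quad(lambda y: np.pi*((np.sqrt(y) + 1)**2 - (y/2 + 1)**2), 3, 4)

# Shell method: integrate in x, splitting the height function at x = sqrt(3).
shell1, _ = quad(lambda x: 2*np.pi*(x + 1)*(2*x - 3), 1.5, np.sqrt(3))
shell2, _ = quad(lambda x: 2*np.pi*(x + 1)*(2*x - x**2), np.sqrt(3), 2)

exact = (91/12 - 4*np.sqrt(3))*np.pi
print(washer, shell1 + shell2, exact)   # all three agree to quadrature accuracy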
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3832878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $\int \frac{1}{x-z}\,\mathrm{d}\mu(x)$ exists on the upper half-plane if $\int (1+|x|)^{-1}\,\mathrm{d}\mu(x)<\infty$ In Harmonic Analysis: A Comprehensive Course in Analysis, Part 3 page 62, he states a definition of Stieltjes transform of a finite (Borel) measure on $\mathbb{R}$:
If $\mu$ is a finite measure on $\mathbb{R}$, its Stieltjes transform, $F_\mu(z)$, is the function on (the upper half-plane) $\mathbb{C}_+$ given by
$$
F_\mu(z)=\int \frac{1}{x-z}\,\mathrm{d}\mu(x).\qquad (*)
$$
It is not difficult to see that this integral is well-defined. Later on page 64, he removed the assumption that $\mu$ is finite;
For $F_\mu$ to exist, one only needs $\int (1+|x|)^{-1}\,\mathrm{d}\mu(x)<\infty$ ...
I want to know the reason behind it. I tried to find the upper bound of the integrand in (*) in different ways to see if somewhere can be at most $(1+|x|)^{-1}$, but no luck! Here is one of my tries: I write $z=u+iv$ with $v>0$, then
$$
\frac{1}{x-z}=\frac{x-u}{(x-u)^2+v^2}+i\frac{v}{(x-u)^2+v^2},
$$
Multiplying both sides by $z=u+iv$, we get
$$
\frac{z}{x-z}=\frac{ux-u^2-v^2}{(x-u)^2+v^2}+i\frac{vx}{(x-u)^2+v^2}\\
=\frac{u(x-u)-v^2}{(x-u)^2+v^2}+i\frac{vx}{(x-u)^2+v^2}\\
=\frac{u(x-u)/v^2-1}{\left ( \frac{x-u}{v} \right )^2+1}+i\frac{x/v}{\left ( \frac{x-u}{v} \right )^2+1}
$$
Then I get stuck ...
| For fixed $z=u+iv$ with $ v > 0$ is
$$
x \mapsto \frac{1+|x|}{|x-z|} = \frac{1+|x|}{\sqrt{(x-u)^2+v^2}}
$$
a continuous function on $\Bbb R$ with limit $1$ for $x \to \pm \infty$. It is therefore bounded, so that
$$
\frac{1}{|x-z|} \le C \cdot \frac{1}{1+|x|}
$$
for some constant $C > 0$ and all $x \in \Bbb R$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3833010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Inequality $\frac{x_1}{x_k+x_2}+\frac{x_2}{x_1+x_3}+\dots+\frac{x_k}{x_{k-1}+x_1}\geq 2.$ Prove that for any positive numbers $x_1, x_2,\dots, x_k$ $(k>3)$ the following inequality is true: $$\frac{x_1}{x_k+x_2}+\frac{x_2}{x_1+x_3}+\dots+\frac{x_k}{x_{k-1}+x_1}\geq 2.$$
The case when $k=4$ can be reduced to the inequality $a+\frac{1}{a}\geq 2$ so in this case everything is fine.
But for $k>4$ I do not see the way how to prove that inequality.
Can you show the solution please?
P.S. It seems like a Nesbitt inequality which states that if $a,b,c>0$ then $\frac{a}{b+c}+\frac{b}{a+c}+\frac{c}{a+b}\geq \frac{3}{2}$. I do know how to prove that one. We can make substitution $b+c=x, a+c=y, a+b=z$ and the rest is quite obvious.
| Incrementing $k$ from $j$ to $j+1$ increases the left-hand side by$$\frac{x_{j+1}}{x_j+x_1}+\color{red}{\frac{x_1}{x_{j+1}+x_2}-\frac{x_1}{x_j+x_2}}+\color{blue}{\frac{x_j}{x_{j-1}+x_{j+1}}-\frac{x_j}{x_{j-1}+x_1}}.$$With the above coloration, each monochromatic part is non-negative if we label the $j+1$ variables so $x_{j+1}$ is minimal. While we can't arbitrarily permute the $x_i$, we can, through rotation of the list of fractions, place the one with least numerator last, so this $x_{j+1}$-is-least convention handles the general case of the inductive step.
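The claim is also easy to test numerically before trusting the induction; a quick random-sampling sketch in Python (illustrative only):
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.uniform(0.01, 10, size=int(rng.integers(4, 10)))   # k > 3 variables
    s = sum(x[i] / (x[i-1] + x[(i+1) % len(x)]) for i in range(len(x)))
    assert s >= 2 - 1e-12
print("cyclic sum was at least 2 in every random trial")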
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3833190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Show that for $0<x<\pi$, $x(\pi-x)=\frac{\pi^2}{6}-\big(\frac{\cos2x}{1^2}+\frac{\cos4x}{2^2}+\frac{\cos6x}{3^2}+\dots\big)$ This is a question from Fourier Series:
Show that for $0<x<\pi$
$x(\pi-x)=\frac{\pi^2}{6}-\big(\frac{\cos2x}{1^2}+\frac{\cos4x}{2^2}+\frac{\cos6x}{3^2}+.....\big)$
First of all, the interval given is an open interval i.e. $(0,\pi)$, but I have read that Fourier Series are only applicable for closed intervals. Then how can I solve this question using Fourier Series ?
Finding out the Fourier coefficients considering the interval $[-\pi,\pi]$, I have calculated the following:
$a_{0}=-\frac{\pi^2}{3}$
$a_{k}=\frac{4(-1)^{k+1}}{k^2},\forall k=1,2,3....$
$b_k=\frac{2(-1)^{k+1}}{k},\forall k=1,2,3,....$
But I think all this is not useful in this question, as the interval given is different. Also, even if I consider the Fourier series on the interval $[0,\pi]$, I will still not be able to write the equality ($=$) sign in the identity to be shown in the question, because equality means the series converges to the function $x(\pi-x)$, and for convergence of the Fourier series the initial assumption is that the given function is a periodic function of period $2\pi$. But here the function $x(\pi-x)$ is defined over a period of $\pi$.
Can anyone help me out here? I will be highly grateful.
| This is a standard application of Fourier series. See https://en.wikipedia.org/wiki/Fourier_series for the basics.
To compute the Fourier series of a given function, you first need a periodic function. For the Fourier series to involve only cosine terms, you need the function to also be even. Note that the period need not be $2\pi$.
Let $f$ be a $\pi$-periodic function, defined on $[0,\pi]$ by $f(x)=x(\pi-x)$. $f$ is even, because the function $x\to x(\pi-x)$ is symmetric with respect to $x=\pi/2$.
Here is a plot of $f$, showing $6$ periods:
Then the cosine Fourier coefficients are, for $n>0$:
$$a_n=\frac{2}{\pi}\int_0^{\pi} f(x)\cos(2nx)\,\mathrm dx=\frac{2}{\pi}\int_0^{\pi}x(\pi-x)\cos(2nx)\,\mathrm dx$$
Now, two integrations by parts:
$$a_n=\frac{2}{\pi}\left[x(\pi-x)\frac{\sin (2nx)}{2n}\right]_0^\pi-\frac{2}{\pi}\int_0^{\pi}(\pi-2x)\frac{\sin(2nx)}{2n}\,\mathrm dx\\=-\frac{2}{\pi}\int_0^{\pi}(\pi-2x)\frac{\sin(2nx)}{2n}\,\mathrm dx\\=\frac{2}{\pi}\left[(\pi-2x)\frac{\cos (2nx)}{4n^2}\right]_0^\pi+\frac{2}{\pi}\int_0^{\pi}2\frac{\cos(2nx)}{4n^2}\,\mathrm dx\\=\frac{2}{\pi}\left[(\pi-2x)\frac{\cos (2nx)}{4n^2}\right]_0^\pi\\=-\frac{1}{n^2}$$
The sine coefficients are $b_n=0$ since the function $f$ is even.
Last, the constant coefficient:
$$a_0=\frac{2}{\pi}\int_0^\pi f(x)\,\mathrm dx=\frac{2}{\pi}\int_0^\pi x(\pi-x)\,\mathrm dx\\=\frac{2}{\pi}\left[\pi\frac{x^2}2-\frac{x^3}{3}\right]_0^\pi=\frac{2}{\pi}\cdot\frac{\pi^3}{6}=\frac{\pi^2}{3}$$
Now, the function $f$ is continuous and piecewise $C^1$, hence the series converges everywhere to the function, hence, for all $x\in[0,\pi]$,
$$x(\pi-x)=\frac{a_0}{2}+\sum_{n=1}^\infty a_n\cos(2n x)=\frac{\pi^2}{6}-\sum_{n=1}^\infty \frac{\cos(2n x)}{n^2}$$
Note that for $x=0$, you get the classic series:
$$\sum_{n=1}^\infty\frac1{n^2}=\frac{\pi^2}6$$
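A quick numerical confirmation of the expansion itself (a sketch; the truncation level N is an arbitrary choice):
import numpy as np

N = 20000                                   # truncation level (arbitrary)
x = np.linspace(0.1, np.pi - 0.1, 5)        # a few sample points in (0, pi)
n = np.arange(1, N + 1)

partial = np.pi**2/6 - np.cos(2*np.outer(x, n)) @ (1/n**2)
print(np.max(np.abs(partial - x*(np.pi - x))))   # small truncation error, on the order of 1/N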
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3833501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Compact convex sets as intersection of balls
Prove that any compact convex set $K\subset \mathbb{R}^d$ is the intersection
of some balls.
Hello, I saw this problem during my Geometry class, but I don't know how to prove it. Can someone help me?
| Let $K$ be a compact convex set in $\mathbb{R}^d$. For each $x \notin K$, let $R_x = \inf\{r: K \subset \overline{ B(x, r) }\}$. This is well-defined because $K$ is compact and hence bounded. Clearly $$K \subset \bigcap_{x \notin K} \bigcap_{r > R_x} \overline{B(x, r)} = S.$$ For $x \notin K$ let $w_x$ be the unique point in $K$ such that $|x-w_x|$ is minimum; the existence and uniqueness of $w_x$ follows from the fact that $K$ is closed and convex and $\mathbb{R}^d$ is a Hilbert space (see for example this handout).
For the reverse inclusion: let $x \notin K$ and consider the points $x_t = x + t(w_x - x)$, i.e., the points on the ray in the direction of $x$ to $w_x$. The hyperplane perpendicular to this ray and passing through $x$ divides $\mathbb{R}^d$ into $2$ halves, and $K$ is strictly contained in the interior of the half-space on the same side as the ray. For $t$ large enough, the closed ball $\overline{B(x_t, |x_t - x|)}$ must contain $K$ in its interior (since in the limit $t \to \infty$ this ball will contain the entire half-space). Moving this ball by a small amount along the ray will give a ball that does not contain $x$ and still contains $K$. This ball is in the intersection above, so $x \notin S$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3833628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Cardinality of Two Sets with Empty Sets I know cardinality is counting the number of elements in a set.
$\{ \emptyset, \{ \emptyset\}\}$ - I said that the cardinality of the set above was $2$ because $\emptyset$ is one element, and $\{\emptyset\}$ is another.
$\{ \emptyset, \{\emptyset\}, \{\emptyset, \{\emptyset\}\}\}$ - With this set I said the cardinality was $4$ because there are four elements: $2$ from what I previously stated and the other $2$ from $\{\emptyset, \{\emptyset\}\}$, it being two elements.
I don't understand cardinality with $\emptyset$ well. I know an empty set can be an element of a set and that the first one is the power set of the power set of the empty set, $P(P(\emptyset))$. Is my understanding of cardinality flawed? Is my logic for coming to my conclusions valid?
| Your understanding is flawed, I’m afraid. When you count the elements of a set, you count just the elements of that set: you don’t count the elements of those elements separately, and there is nothing special about $\varnothing$ in this context.
Let’s consider the set $x=\Big\{a,\{b\},\big\{c,\{d\}\big\}\Big\}$. It has $3$ elements: $a$, $\{b\}$, and $\big\{c,\{d\}\big\}$, so its cardinality is $3$. That last element happens to be a set with $2$ elements of its own, but it’s still just one member of $x$. We could replace it with the infinite set $\Bbb Z$ of all integers, getting the set $\big\{a,\{b\},\Bbb Z\big\}$, and we’d still have a set of cardinality $3$.
Now the cardinality of $x$ is $3$ no matter what $a,b,c$, and $d$ are.1 In particular, it’s $3$ even if $a=b=c=d=\varnothing$, so that $x=\Big\{\varnothing,\{\varnothing\},\big\{\varnothing,\{\varnothing\}\big\}\Big\}$. It’s also $3$ if $a=b=c=d=\Bbb Z$, and $x=\Big\{\Bbb Z,\{\Bbb Z\},\big\{\Bbb Z,\{\Bbb Z\}\big\}\Big\}$. In the first case the $3$ elements of $x$ are $\varnothing$, $\{\varnothing\}$, and $\big\{\varnothing,\{\varnothing\}\big\}$; in the second they are $\Bbb Z$, $\{\Bbb Z\}$, and $\big\{\Bbb Z,\{\Bbb Z\}\big\}$.
1 That’s not quite true, but the two exceptions involve a technicality that beginners sometimes find confusing. Specifically, if $a=\{b\}$, then $$x=\Big\{a,a,\big\{c,\{d\}\big\}\Big\}=\Big\{a,\big\{c,\{d\}\big\}\Big\}$$ and has only $2$ elements. Similarly, if $a=\big\{c,\{d\}\big\}$, then $$x=\big\{a,\{b\},a\big\}=\big\{a,\{b\}\big\}$$ and again has only $2$ elements. In no case, however, does $x$ have $4$ elements.
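If it helps to see the counting mechanically, the same sets can be modelled with Python frozensets (a toy illustration, nothing more):
empty = frozenset()                 # the empty set
s1 = frozenset({empty})             # the set whose only element is the empty set
s2 = frozenset({empty, s1})         # the first set from the question
x  = frozenset({empty, s1, s2})     # the second set from the question

print(len(s2))   # 2
print(len(x))    # 3, not 4: s2 counts as a single element of x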
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3833734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Change of basis and dimension of subspaces I'm trying to wrap my head around some basic concepts here. I have some vector subspace $\mathcal{V}$ and $\mathcal{W} = \{Ax \mid x \in \mathcal{V}\}$, where $A$ is an orthogonal matrix. Is it always true that $\dim(\mathcal{W}) = \dim(\mathcal{V})$? If $\{v_1, v_2, \dots, v_k\}$ is a basis of $\mathcal{V}$, how can we find a basis of $\mathcal{W}$?
| It's enough to assume that $A$ is invertible, which an orthogonal matrix is, to conclude $W=A(V)$ has the same dimension as $V$.
It's because both $A$ and $A^{-1}$ take any linearly independent set of vectors to a linearly independent set.
Suppose $v_1,\dots,v_k$ are independent and $\sum_i\lambda_i\,Av_i=0$, then apply $A^{-1}$ to obtain $\sum_i\lambda_iv_i=0$ so each $\lambda_i=0$.
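In particular $\{Av_1,\dots,Av_k\}$ is linearly independent and spans $\mathcal{W}$, so it is a basis of $\mathcal{W}$. For intuition, the dimension count can also be checked numerically; a tiny numpy sketch (the sizes $5$ and $3$ are arbitrary):
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # a random orthogonal matrix
V = rng.standard_normal((5, 3))                    # columns: a basis of a 3-dimensional subspace
print(np.linalg.matrix_rank(V), np.linalg.matrix_rank(Q @ V))   # both 3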
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3833864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove or disprove $\sum_{k=1}^{n} \frac{\cos(\frac{2\pi k x}{n})}{n}=1$ iff $n|x$, and equals $0$ otherwise. I know for sure it must be impossible for the function I put forth to have zeroes at all primes and only at primes, but I have to be able to prove it to be untrue.
Below is the summation I have found:
$$f(x) = 2 - \sum_{n=1}^{x} \sum_{k=1}^{n} \frac{\cos(\frac{2\pi k x}{n})}{n}$$
The exact thing I must disprove is that: $f(x) = 0$ iff x is prime, for integer x. If anyone can help me prove this to be false, I would really appreciate it. Thank you.
EDIT:
I suppose at this time what I have need to prove or disprove is that:
$$f(n, x) = \sum_{k=1}^{n} \frac{\cos(\frac{2\pi k x}{n})}{n}$$
returns $1$ iff $n | x$, and return $0$ otherwise.
| Your formula has a closed form.
Now first notice that if $n$ divides $x=mn$ we have
$$f(n, mn) = \frac{1}{n}\sum_{k=1}^{n} \cos(\frac{2\pi k mn}{n})$$
but this is simply
$$\frac{1}{n}\sum_{k=1}^{n} \cos(2\pi k m)=1$$
On the other hand the closed form when $n$ does not divide $x$ is
$$f(n, x) = \sum_{k=1}^{n} \frac{\cos(\frac{2\pi k x}{n})}{n}=\frac1{2n} \left ( \frac{\sin(\pi(\frac1{n}+2)x)}{\sin(\frac{\pi x}{n})} -1 \right )$$
Since $x$ is an integer, we can remove $2\pi x$ getting
$$\frac1{2n} \left ( \frac{\sin(\pi \frac1{n}x) }{\sin(\frac{\pi x}{n})} -1 \right ) = 0$$
So your formula is a form of sieving, linear. Riemann zeta function is based on sieving as well, but it condenses it and technically surpasses its linear nature.
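A quick numerical check of this divisor-detecting behaviour of $f(n,x)$ (a sketch; the values are only approximately $0$ or $1$ because of floating point):
import numpy as np

def f(n, x):
    k = np.arange(1, n + 1)
    return np.cos(2*np.pi*k*x/n).sum() / n

for n in range(1, 8):
    for x in range(1, 15):
        expected = 1.0 if x % n == 0 else 0.0
        assert abs(f(n, x) - expected) < 1e-9
print("f(n, x) is 1 when n divides x and 0 otherwise (up to round-off)")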
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3834027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
If complex number $a, b, c, d,$ and $|a|=|b|=|c|=|d|=1$, why $|a(c+d)|+|b(c-d)|\leq 2\sqrt{2}$? If we have 4 complex number $a, b, c, d,$ and $|a|=|b|=|c|=|d|=1$, So, how to prove that $|a(c+d)|+|b(c-d)|\leq 2\sqrt{2}$?
I try to separate $|a(c+d)|+|b(c-d)|$ to $|a||(c+d)|+|b||(c-d)|$ than I get $|(c+d)|+|(c-d)|$. SO, if $c=d, c+d=2c=2b, c-d=0$
Is my idea good?
| $$|a(c+d)|+|b(c-d)|=\left|1+\dfrac dc\right|+\left|1-\dfrac dc\right|=|1+e^{i\theta}|+|1-e^{i\theta}|$$ where $\theta$ is the angle between $c$ and $d$.
Now
$$1+e^{i\theta}=2\cos^2\frac\theta2+2i\sin\frac\theta2\cos\frac\theta2=2\cos\frac\theta2 e^{i\theta/2}$$
and
$$1-e^{i\theta}=2\sin^2\frac\theta2-2i\sin\frac\theta2\cos\frac\theta2=-2i\sin\frac\theta2\,e^{i\theta/2}.$$
Finally, the maximum of $\left|2\cos\dfrac\theta2\right|+\left|2\sin\dfrac\theta2\right|$ is $2\sqrt2$.
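A one-line numerical sanity check of this final maximization (a sketch):
import numpy as np

theta = np.linspace(0, 2*np.pi, 100001)
vals = np.abs(1 + np.exp(1j*theta)) + np.abs(1 - np.exp(1j*theta))
print(vals.max(), 2*np.sqrt(2))   # the maximum is attained at theta = pi/2 and 3*pi/2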
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3834332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Is every integer $z$ representable in Pell form as $x^2 \pm dy^2 =z$? We know that there are integers that cannot be represented as the sum of two squares (Fermat's theorem on sums of two squares).
We also know that every natural number can be represented as the sum of four squares (Lagrange's Four Square Theorem).
Is every integer $z$ representable in Pell form as $x^2 \pm dy^2 = z$, with $d$ being a square-free integer with $|d| > 1$? $d$ is not fixed and cannot be equal to $z$ (since $x=0, y=1, d=z$ would be a trivial solution). Similarly, $x^2$ can be taken to be any square and $y = 1$ and $d = \pm(z - x^2)$ would be a trivial solution.
So, the question is are there any non-trivial solutions $(x, y, d)$ for the equation $x^2 \pm dy^2 = z$?
In other words, I am looking for representing $z$ as the sum (or difference) of a square and $d$ repetitions of a square.
Notes:
*
*$d = 1$ is the Two Square Theorem and $d = -1$ is the factorization of $z$
*$d$ is required to be square-free as the equation would reduce to the Two Squares form otherwise
| Let me paraphrase your question as follows:
Determine all $z\in\Bbb{Z}$ for which there exist $d,x,y\in\Bbb{Z}$ with $|d|,|y|>1$ and $d$ squarefree such that
$$x^2+dy^2=z.\tag{1}$$
First note that for $z=0$ there are no integral solutions with $d$ squarefree.
If $z\neq0$ then for every integer $x$ we have the trivial solution
$$(d,x,y)=(z-x^2,x,1),$$
which of course fails to meet the condition that $|y|>1$. But for sufficiently large values of $x$ we get
$$d=z-x^2<-1,$$
and so $\Bbb{Z}[\sqrt{-d}]$ is a real quadratic ring. By Dirichlet's unit theorem its unit group has rank $1$, so if $u+v\sqrt{-d}\in\Bbb{Z}[\sqrt{-d}]$ is a unit of norm $1$ (the fundamental unit, or its square if the fundamental unit has norm $-1$) and $n\in\Bbb{Z}$ is any integer we have
$$N\left((x+y\sqrt{-d})(u+v\sqrt{-d})^n\right)=N(x+y\sqrt{-d})N(u+v\sqrt{-d})^n=z,$$
yielding infinitely many integral solutions to $(1)$: If $a_n,b_n\in\Bbb{Z}$ are such that
$$a_n+b_n\sqrt{-d}=(x+y\sqrt{-d})(u+v\sqrt{-d})^n,$$
then the above shows that indeed
$$a_n^2+db_n^2=z.$$
Moreover, this yields infinitely many integral solutions $(d,x,y)=(d,a_n,b_n)$ with $|y|>1$, because if $b_m=b_n$ then it quickly follows that $m=n$.
All that remains to be shown is that we can choose $x$ sufficiently large such that $d=z-x^2$ is squarefree.
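For a concrete illustration of the construction (my own example, not part of the original answer): take $z=3$ and $x=3$, so that $d=z-x^2=-6$ is squarefree and $3=3^2-6\cdot 1^2$ is the trivial solution. The fundamental unit $5+2\sqrt6$ of $\Bbb Z[\sqrt6]$ has norm $1$, and
$$(3+\sqrt6)(5+2\sqrt6)=27+11\sqrt6,$$
so $27^2-6\cdot 11^2=729-726=3$, giving the non-trivial solution $(d,x,y)=(-6,27,11)$ with $|y|>1$.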
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3834635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
summing this binomial series I found a really interesting question which is as follows:
Prove that the value of
$$\sum^{7}_{k=0}\left[\frac{\binom{7}{k}}{\binom{14}{k}}\sum^{14}_{r=k}\binom{r}{k}\binom{14}{r}\right] = 6^7$$
my approach:
I tried to simplify the innermost sigma as well as trying to simplify by using
${n\choose k}=\frac{n!}{k!(n-k)!},$ however I can't get a hold of this one.
My guess is that the summation simplifies into a standard series but I can't say for sure.
Kindly help me out.
| Using $${14 \choose r}{r \choose k} = {14 \choose k}{14-k \choose r-k}$$ the given sum reduces to
$$
\begin{align*}
& \sum_{k=0}^7 {7 \choose k} \bigg\{\sum^{14}_{r=k} {14-k \choose r-k} \bigg\} \\
& = \sum_{k=0}^7 {7 \choose k} \{2^{14-k}\} \\
& = 2^{7} \times \sum_{k=0}^7 {7 \choose k} 2^{7-k} \\
& = 2^{7}\times(2+1)^{7} \\
& = 6^7
\end{align*}
$$
Edit: As pointed out by @ElliotYu, the outer bound should be from $0$ to $7$.
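The identity can also be confirmed directly with exact arithmetic; a brute-force Python sketch:
from fractions import Fraction
from math import comb

total = sum(
    Fraction(comb(7, k), comb(14, k))
    * sum(comb(r, k) * comb(14, r) for r in range(k, 15))
    for k in range(8)
)
print(total == 6**7)   # True (both equal 279936)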
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3834796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Showing the inequality $f(x)=x^{2(1-x)}+(1-x)^{2x}\leq 1$ for $0<x<1$ My proof is not really natural but I think it works.
We want to show 1:
Let $0<x<1$ such that then we have :
$$f(x)=x^{2(1-x)}+(1-x)^{2x}\leq 1$$
Case $0<x\leq 0.25$
The proof of this case is due to user Batominovski:
By Bernoulli's inequality we have:
$$(1-x)^{2x}\leq 1-2x^2\quad (1)$$
We have also:
$$x^{2-2x}\leq 2x^2\quad (2)$$
Summing $(1)$ and $(2)$ we get the desired inequality.
Case $0.25\leq x\leq 0.49$:
I shall prove it later but we have :
Let $0.25\leq x\leq 0.49$ then we have :
$$p(x)=2^{2x}(1-x)x^{2}2\geq x^{2(1-x)}$$
And
$$h(x)=\cos^2\Big(x\frac{\pi}{2}\Big)(1+\frac{195}{100}(1-x)(0.5-x)x^2)\geq (1-x)^{2x}\quad (3)$$
With my work we have :
$$f^2(x)\leq \Big(\cos^2\Big(x\frac{\pi}{2}\Big)(1+\frac{195}{100}(1-x)(0.5-x)x^2)\Big)^2+\Big(2^{2x}(1-x)x^{2}2\Big)^2+4^{1.95}(x(1-x))^{2.95}2\leq 1$$
To show it we can use power series see here
Case $0.49\leq x \leq 0.5$
On the domain $[0.49,0.51]$ the function $g(x)=x^{2(1-x)}$ is concave or:
$$g''(x)\leq 0\quad (4)$$
So we have by Jensen's inequality:
$$g(x)+g(1-x)\leq 2g(0.5)=1$$
As $f(x)=f(1-x)$ it's proved for $0.5\leq x<1$
Question:
How to show $(3)$?
Thanks in advance!
1 Vasile Cirtoaje, "Proofs of three open inequalities with power-exponential functions",
The Journal of Nonlinear Sciences and its Applications (2011), Volume: 4, Issue: 2, page 130-137.
https://eudml.org/doc/223938
| Actually there is an easier way. Let's search for the maximum of that function. If it's outside of $0$ and $1$, then it should be at the boundaries. (But this is not the case.)
*
*Differentiate the function.
*Set the derivative equal to zero.
*Solve for $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3834919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
The ring $(k[x,y]/(y^2,xy))_x$ has no nonzero nilpotents I'd like to understand why the ring $(k[x,y]/(y^2,xy))_x$ has no nonzero nilpotents. I know that since localization is exact we have $(k[x,y]/(y^2,xy))_x\cong k[x,y]_x/(y^2,xy)_x$, but I'm not sure what to do from here. What can I try?
| From the right-hand side of the isomorphism you wrote, $(y^2,xy)_x=(y)_x$, so it looks like $k[x,y]_x/(y)_x\cong k[x]_x$, the ring of Laurent polynomials, which is a domain.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3835038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proving square roots of convergent positive operators are convergent In my functional analysis class, we are studying positive self-adjoint operators and their square roots. I have the following exercise
*
*Let $A_n \to A$ be a sequence of positive self-adjoint operators converging in norm to $A$. We are asked to show $\sqrt{A_n} \to \sqrt{A}$ in norm.
*Let $A_n \to A$ be a sequence of positive self-adjoint operators converging strongly to $A$, meaning that for all vectors $x$ we have $\lVert (A_n - A)x \rVert \to 0$. We are asked to show that $\sqrt{A_n} \to \sqrt{A}$ strongly, meaning that $\lVert (\sqrt{A_n} - \sqrt{A})x \rVert \to 0$.
Here is what I have from class. The power series for $f(z)=\sqrt{1-z} = \sum_{i=0}^{\infty} c_i z^i$ converges absolutely for $|z| \leq 1$ so if we WLOG scale our operators such that $ \lVert A_n \rVert \leq 1$ and thus $\lVert A \rVert \leq 1$ one has $\lVert I-A_n \rVert \leq 1$ and $\lVert I-A \rVert \leq 1$ (not difficult to show) and we may construct the square root of the operators as
$$ \sqrt{A_n} = \sqrt{I-(I-A_n)} = I - \sum_{i=1}^{\infty} c_i (I-A_n)^i$$ and $$ \sqrt{A} = \sqrt{I-(I-A)} = I - \sum_{i=1}^{\infty} c_i (I-A)^i$$
That is how we constructed the square root of a positive operator in the lecture, but I do not know how to use this or any other method to show $\lVert \sqrt{A_n} - \sqrt{A} \rVert \to 0$ and $\lVert (\sqrt{A_n} - \sqrt{A})x \rVert \to 0$. I am stuck and would appreciate a solution for this. I thank all helpers.
******* Progress: for 1 I can look at the following difference in norm
$$
\lVert \sqrt{A_n}-\sqrt{A} \rVert = \left\lVert \sum_{i=1}^{\infty} c_i \left[ (I-A_n)^i - (I-A)^i \right] \right\rVert
$$
and using the triangle inequality
$$
\lVert \sqrt{A_n}-\sqrt{A} \rVert \leq \sum_{i=1}^{\infty} |c_i| \left\lVert (I-A_n)^i - (I-A)^i\right\rVert
$$
and I can show that $\left\lVert (I-A_n)^i - (I-A)^i\right\rVert \leq 1$ and thus I can use Tannery's theorem to take the limit inside the sum so all I would need is that for all $i$ one has $\left\lVert (I-A_n)^i - (I-A)^i\right\rVert \to 0$ as $n \to \infty$. Is it possible to show this?
| For $s > 0$ and $0 < \alpha < 1$,
$$
\Gamma(\alpha)=\int_{0}^{\infty}e^{-u}u^{-1+\alpha}du \\
= \int_0^{\infty}e^{-su}(su)^{-1+\alpha}sdu \\
= s^{\alpha}\int_0^{\infty}e^{-su}u^{-1+\alpha}du \\
= s^{\alpha}\int_0^{\infty}\frac{e^{-su}-1}{s}u^{-2+\alpha}(-1+\alpha)du \\
=(1-\alpha) s^{\alpha}\int_0^{\infty}\frac{1-e^{-su}}{s}u^{-2+\alpha}du
$$
Therefore,
$$
\frac{1-\alpha}{\Gamma(\alpha)}\int_0^{\infty}(1-e^{-us})u^{-2+\alpha}du=s^{1-\alpha}.
$$
From this, the fractional powers $0 < \alpha < 1$ of a positive operator $A$ are obtained from the positive semigroup $S_A(u)=e^{-uA}$:
$$
A^{1-\alpha}=\frac{1-\alpha}{\Gamma(\alpha)}\int_0^{\infty}(1-e^{-uA})u^{-2+\alpha}du
$$
$0 \le (I-e^{-uA}) \le I$ because $0 \le e^{-uA} \le I$, which gives $0 \le A^{1-\alpha} \le I$. The question of norm convergence of $A_n^{r}$ to $A^{r}$ is reduced to looking at the operator norm convergence of $e^{-uA_n}$ to $e^{-uA}$.
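The scalar identity underlying this representation is easy to check numerically; a short sketch (alpha and s are sample values, and the integration range is split at $1$ only for numerical convenience):
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, s = 0.5, 2.0                      # sample values: 0 < alpha < 1, s > 0
integrand = lambda u: (1 - np.exp(-s*u)) * u**(alpha - 2)
head, _ = quad(integrand, 0, 1)
tail, _ = quad(integrand, 1, np.inf)
print((1 - alpha)/gamma(alpha) * (head + tail), s**(1 - alpha))   # both approximately 1.4142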
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3835169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof of this combinatorial identity Show that for all $a\geq b$
$$\left(\sum_{i=0}^ax^{a-2i}\right)\left(\sum_{i=0}^b x^{b-2i}\right)=\sum_{k=0}^a\sum_{l=0}^{a+b-2k}x^{a+b-2k-2l}$$
where $a,b$ are positive integers.
I tried to prove this using double induction $T(a,b)$ where I show that
*
*$T(1,1)$ is true.
*If $T(a,1)$ is true, then $T(a+1,1)$ is true.
*If $T(a,b)$ is true, then $T(a,b+1)$ is true.
I face pretty heavy computation starting step 2. Is there a nice faster way proving this identity?
| Use the formula for the sum of a finite geometric series
$$
\sum_{i=0}^n r^i = \frac{r^{n+1}-1}{r-1}
$$
to simplify both sides to
$$
\frac{x^{a+b+4} - x^{a-b+2} - x^{-a+b+2} + x^{-a-b}}{(x^2 - 1)^2}.
$$
Note that to put the sums into the right form, you'll have to pull out some common factors first; for example in the sum as $i$ goes from $0$ to $a$ on the left, you want to factor out $x^a$, then take $r=x^{-2}$.
As Rob Pratt points out in the comments, the formula doesn't work when $r=1$ (in other words, when $x = \pm 1$). In that case, every term of every sum is $\pm 1$ (depending on the parity of $a$ and $b$) and we should get $(-1)^{a+b}(a+1)(b+1)$ on both sides. We could probably avoid considering this case by making some argument about polynomials being defined by finitely many values...
We should also assume that $x\ne 0$ or else we are dividing by $0$ on both sides.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3835295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof of the existence of a well-defined function $\bar{f}$(2) Here is the question I am trying to solve (it is also mentioned in the answer of this link):
Proof of the existence of a well-defined function $\bar{f}$.
Let $X$ and $Y$ two sets and let $f:X\to Y$.
Define for $x_1,x_2\in X$ a relation in $X$ as $x_1\sim x_2$ if $f(x_1)=f(x_2)$.
*
*Prove that this defines an equivalence relation on $X$.
Now you can talk about the quotient $X/\sim \quad = \{[x]:x \in X\}$, the set of the classes of equivalence.
Define $\bar{f}:X/{\sim}\ \to Y$ as $\bar{f}([x]) = f(x)$.
Since the definition uses an element of the class this could be ill-defined.
*Prove that this is well defined.
Now define $\pi:X \to X/\sim$ as $\pi(x) = [x]$.
Then
*$f = \bar{f}\circ\pi$
*$\bar{f}$ is injective
*$\pi$ is surjective
For the proof of 1) and 5) I have no problem in them.
For the proof of 2)
Assume that $[x_{1}] = [x_{2}]$; then $[f(x_{1})] = [f(x_{2})]$, but from there I do not know how to complete the argument. Could anyone help me with that, please?
For the proof of 3)
I do not know how to do it. Could anyone help me in that, please?
For the proof of 4)
I know that it should be the reverse of 2)
| When you try to prove 2), you put square brackets around $f(x_1), f(x_2)$. These are incorrect (in fact do not mean anything as there is no equivalence relation defined on $Y$ at this point). It should be:
For the proof of 2)
Assume that $[x_{1}] = [x_{2}]$ then $f(x_{1}) = f(x_{2})$
From this it is clear that $\bar{f}([x])= f(x)$ is well defined e.g. it does not matter which representative of the class $[x]$ you select.
For part 3):
$$\bar{f}\pi(x)=\bar{f}([x])=f(x),$$
by definition of the functions $\bar{f}$ and $\pi$.
For part 4): You are right it is the reverse of 2):
If $\bar{f}([x_1])=\bar{f}([x_2])$ then $f(x_1)=f(x_2)$ so $[x_1]=[x_2]$, by the definition of the equivalence relation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3835500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integrating a function f(x) I am a bit confused about an integration that I came across. $f(x)$ is a function and we are integrating:
$$\int_0^t {df(x)\over f(x)} = \ln \left({f(t)\over f(0)}\right)$$
I am a confused because I was expecting an answer like $\ln(f(x)) - \ln(f(0))$
| Technically,
$$\log(f(x))-\log(f(0))$$ is slightly different from $$\log\left(\frac{f(x)}{f(0)}\right)$$ because the latter also works when the two function values are negative.
You might prefer the expressions
$$\log(|f(x)|)-\log(|f(0)|)$$
or
$$\log\left(\left|\frac{f(x)}{f(0)}\right|\right)$$
but IMO they are not the best choice because they also work when the two values have different signs, though they should not (zero crossings are discouraged, as they make the integral improper).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3835728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
$f(x+1)=f(x)+1 \Rightarrow \displaystyle \lim_{x\to \infty}\frac{f(x)}x=1 ?$ Let $f:\mathbb{R} \to \mathbb{R}$ be a continuous function such that $f(x+1)=f(x)+1$ for all $x\in \mathbb{R}$. Then which of the following statements is necessarily false?
$(1)\displaystyle\lim_{x\to \infty} \frac{f(x)} {x^{1+\epsilon}}=0$ for all $\epsilon \gt 0$
$(2)\displaystyle\lim_{x\to \infty} \frac{f(x)} {x}$ does not exist .
$(3)\displaystyle\lim_{x\to \infty} \frac{f(x)} {x}=1$
$(4)\displaystyle\lim_{x\to \infty} \frac{f(x)} {x^{1-\epsilon}}=+\infty$ for all $\epsilon \gt 0$
My attempt:
$f(x)=x$ satisfies the hypothesis and makes the result in $(1), (3) $ and $(4)$ true and so they are not necessarily false .
I need to prove $(2)$ is necessarily false i.e the limit does exist.
It can be easily proved by induction
$f(x+n)=f(x)+n$ for all $ n\in \mathbb{N}$ and for all $x\in \mathbb{R}$
Let $x\gt 1$ be any real number.Then
$x=\lfloor x\rfloor +\overline{ x } \quad(*)$
where the first part is greatest integer and the second is fractional part and is less than $1$ .
Then $f(x)=f(\overline {x})+ \lfloor x \rfloor $
Now $f$, being continuous on $[0,1]$, is bounded (in absolute value) by some $M$ on $[0,1]$.
Hence , we have $\displaystyle \lim_{x\to \infty} \frac {f(\overline{ x })}x=0$
Thus $ \displaystyle \lim_{x\to \infty} \frac {f(x)}x= \displaystyle \lim_{x\to \infty} \frac {\lfloor x \rfloor }x$
Now again using $(*)$ , we have
$1=\displaystyle\lim_{x\to \infty} \frac {\overline{x}}x+\displaystyle\lim_{x\to \infty} \frac {\lfloor x \rfloor }x $
$\Rightarrow
\displaystyle\lim_{x\to \infty} \frac {\lfloor x \rfloor }x=1$ since $0\le \overline {x}\lt 1$
Thus the limit exists and is equal to $1$.
Is everything okay ? Any alternative ideas/solution will be appreciated.
Thanks for your time.
| The proof is fine. Using the perhaps more common notation $\{ x \}$ for the fractional part of $x$, you showed that
$$
\frac{f(x)}{x} = \frac{f(\{ x \})}{x} + \frac{\lfloor x \rfloor}{x}
$$
where the first fraction converges to zero (because the numerator is bounded) and the second fraction converges to one (because $x-1 < \lfloor x \rfloor \le x$).
Instead of fiddling with the integer and fractional parts of $x$ one can also argue as follows: $g(x) = f(x) - x$ is continuous and $1$-periodic, and therefore bounded. It follows that
$$
\lim_{x\to \infty}\frac{g(x)}x=0 \implies
\lim_{x\to \infty}\frac{f(x)}x=\lim_{x\to \infty}\frac{g(x)+x}x=1 \, .
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3835839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Give an example in which for a set $A \subseteq X$, the two sets $f(X \setminus A)$ and $Y \setminus f(A)$ are incomparable.
Give an example in which for a set $A ⊆ X$, the two sets $f(X \setminus A)$ and $Y \setminus f(A)$ are incomparable (i.e., neither is a subset of the other).
My example:
Take $X =$ {$1,2$}, $Y =$ {$3$}, and $A =$ {$1$}.
So $f(X \setminus A) = \{3\}$, and $Y \setminus f(A) = \phi$.
Hence neither is a subset of the other. Is my example correct? Any other examples?
| Your example does not work, since the empty set is a subset of $\{ 3 \}$, and they are comparable. Moreover you didn't define the function $f$.
Here is an example that works.
$$X=Y = \{ 1,2,3,4\} \qquad \qquad \ \ \ A= \{ 2,3\}$$ and $f: X \to Y$ is defined by
$$f(1)=f(2)=1 \qquad \qquad \ f(3)=3 \ \qquad \qquad f(4)=4$$
Then
$Y \setminus f(A)= \{ 2,4 \}$ while $f(X \setminus A) = \{ 1,4\}$. As you can see, these two sets are incomparable under inclusion.
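The counterexample is small enough to check by machine as well; a quick Python sketch:
X = Y = {1, 2, 3, 4}
A = {2, 3}
f = {1: 1, 2: 1, 3: 3, 4: 4}
image = lambda S: {f[s] for s in S}

lhs = image(X - A)        # f(X \ A) = {1, 4}
rhs = Y - image(A)        # Y \ f(A) = {2, 4}
print(lhs <= rhs, rhs <= lhs)   # False False: neither is a subset of the other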
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3835942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find value of $\sin x-\frac{1}{\cot x}$ If $\sin x+\frac{1}{\cot x}=3$, calculate the value of $\sin x-\frac{1}{\cot x}$
Please kindly help me
Let $\sin x -\frac{1}{\cot x}=t$
Then, $$\sin x= \frac{3+t}{2}, \cot x= \frac{2}{3-t}$$
By using $$1+\cot ^2x= \frac{1}{\sin^2 x}$$
Then, the equation $$t^4-18t^2+48t+81=0$$
| Hint:
$\sin x+\dfrac1{\cot x}=3$
Let $\sin x-\dfrac1{\cot x}=y$
Solve for $\sin x, \dfrac1{\cot x}$
Now use $$\dfrac1{\sin^2x}-\cot^2x=1$$
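If you just want to see what value comes out, a numerical sketch (using scipy; it solves the given equation on $(0,\pi/2)$ and checks the quartic derived in the question):
import numpy as np
from scipy.optimize import brentq

x0 = brentq(lambda x: np.sin(x) + np.tan(x) - 3, 0.1, 1.5)   # sin x + 1/cot x = 3
t = np.sin(x0) - np.tan(x0)                                   # sin x - 1/cot x
print(t, t**4 - 18*t**2 + 48*t + 81)   # t is (numerically) a root of the quartic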
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3836076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Is implication of conditions on sequences strict? Let $s_n$ be a sequence of positive real numbers. Clearly if $\sum\frac{1}{s_n\log s_n}$ diverges then $\sum\frac{1}{s_n}$ diverges but is this implication strict?
I.e. is there a sequence such that the first sum converges but the second sum diverges?
| My roommate just answered this! Let $s_n=n(\log n)^p$ with $0<p\leq 1$. Then the second sum diverges (it is a standard case), and for the first sum we have:
$$s_n \log s_n = n (\log n)^p \log\bigl(n(\log n)^p\bigr)=n(\log n)^{p+1} + p\,n(\log n)^p \log(\log n)$$
So the first sum converges because $\frac{1}{s_n\log s_n}\leq \frac{1}{n(\log n)^{p+1}}$ and $\sum\frac{1}{n(\log n)^{p+1}}$ converges.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3836235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Lattices over modules The standard construction of lattices in $\mathbb{R}^n$ can be generalised by taking any finite-dimensional vector space $V$ over a field $F$ and a subring $R$ of $F$, as per Wikipedia (https://en.m.wikipedia.org/wiki/Lattice_(group)).
Is there any reason why only finite-dimensional vector spaces are considered here, and not free modules of finite rank? The lattice should still form an additive subgroup, and I think the property for bases to give isomorphic lattices should also be the same still. Is there any important property that is lost with this further generalisation that I'm missing?
| An important property of the theory of lattices is the inner product that you get from $\mathbb{R}^n$, as well as related objects like the volume.
Many of the interesting and useful theorems come from the interaction between the algebraic and geometric structures, e.g. Minkowski's theorem and its application to the geometry of numbers. The geometry is also important for its applications in cryptography, since one needs a norm to talk about things like the shortest vector problem.
An equivalent definition for lattices is to define them as a free $\mathbb{Z}$ module equipped with a quadratic form. The following generalization appears on page 3 of Ebeling's "Lattices and Codes."
Definition: Let $R$ be a commutative ring with unity. A symmetric bilinear form module $(S,b)$ over $R$ is a pair consisting of a free $R$-module $S$ of rank $n$, and a symmetric bilinear form $b : S \times S \to R$.
Proposition 1.1 (Ebeling) The integral lattices in $\mathbb{R}^n$ (lattices where every dot product is an integer) are the symmetric bilinear form modules over the integers where $b$ is a positive definite symmetric bilinear form.
The text mentions that symmetric bilinear form modules over $\mathbb{Z}$ without the positive-definite property are studied in chapter 4, and examples over rings of integers are studied in chapter 5. You may find them interesting to peruse.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3836374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |