Solve the non-homogeneous recurrence $h_n = 3h_{n-1} + 3(2)^n, h_0 = 1$? I am trying to solve the non-homogeneous recurrence $h_n = 3h_{n-1} + 3(2)^n, h_0 = 1$.
I can see that the solution to the homogeneous $h_n = 3h_{n-1}$ is clearly $h_n = 3^n$. That being said, I have no idea how to solve $h_n - 3h_{n-1} = 3(2)^n$: I assume the solution is of the form $(c_1n + c_2)(2)^n$, but I don't know where to go from here! Can anyone give advice?
| Welcome to MSE!
You could always use generating functions, but that machinery isn't necessary here. Instead, there's a standard trick. Write
$$t_n = \frac{h_n}{2^n}$$
Then after dividing both sides by $2^n$ your recurrence becomes
$$t_n = \frac{3}{2}t_{n-1} + 3$$
Do you see how to solve this recurrence for $t$? Do you see how that will solve your original problem too?
I hope this helps ^_^
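If it helps, here is a quick numerical check of the trick (my own addition, not part of the original answer): solving $t_n = \frac32 t_{n-1} + 3$ with $t_0 = h_0 = 1$ gives $t_n = 7\left(\frac32\right)^n - 6$, i.e. $h_n = 7\cdot 3^n - 6\cdot 2^n$, and this closed form agrees with the recurrence:

```python
# Check the closed form h_n = 7*3^n - 6*2^n obtained via t_n = h_n / 2^n
# against direct iteration of h_n = 3 h_{n-1} + 3*2^n with h_0 = 1.

def h_recursive(n):
    h = 1                        # h_0 = 1
    for k in range(1, n + 1):
        h = 3 * h + 3 * 2**k     # h_k = 3 h_{k-1} + 3 * 2^k
    return h

def h_closed(n):
    return 7 * 3**n - 6 * 2**n

assert all(h_recursive(n) == h_closed(n) for n in range(20))
```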
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4027154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Local minimum in undefined points in Hessian and first derivative of a function It might seem naive, but consider a convex function like $|x|$, for which $f'(x)$ is not defined at $0$ (is it? because $f'(x)= \frac{x}{|x|}$ for $x \neq 0$). Now what do we say about its local minimum?
Because for a local minimum we need $f'(x) =0$ and $f''(x) \geq 0$, which don't hold here.
Thanks in advance.
| I think you're mixing up two somewhat-related but distinct theorems:
*
*If $f$ is a convex function, every local minimum of $f$ is also a global minimum.
*If $f$ is $C^2$ and $f'(x)=0; f''(x)>0$ then $f$ has a local minimum at $x$.
This second fact is a sufficient condition on the existence of a local minimum, but not a necessary condition. (This sufficient condition generalizes to higher dimension, where the condition instead is that $\nabla f(x) =0 $ and $Hf(x)$ is positive-definite. It is an immediate consequence of Taylor's theorem.)
Even for smooth functions, sometimes a local minimum does not have positive second derivative. Consider for instance $f(x) = x^4$, which clearly has a minimum at $x=0$ but $f''(0)=0$.
The function $f(x) = |x|$ is not a contradiction to the above: it is convex so fact (1) applies. It is not in $C^2$, but even if it were, it would not be a contradiction to fact (2) since the latter is a sufficient condition, not an "if-and-only-if" condition.
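A small numerical illustration of the $f(x) = x^4$ example (my own sketch, using a central difference for the second derivative): the second derivative vanishes at the minimizer, so the sufficient condition simply does not apply, even though the minimum is there.

```python
# f(x) = x^4 has a strict minimum at x = 0, yet f''(0) = 0, so the
# sufficient condition f'(x) = 0, f''(x) > 0 cannot be used here.

def f(x):
    return x**4

h = 1e-4
second_deriv_at_0 = (f(h) - 2 * f(0) + f(-h)) / h**2   # central difference

assert abs(second_deriv_at_0) < 1e-6                    # f''(0) = 0 numerically
assert all(f(x) > f(0) for x in [-0.5, -0.1, 0.1, 0.5]) # still a minimum
```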
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4027270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding $\lim_{x\rightarrow \infty}x\ln\left(\frac{x+1}{x-1}\right)$ I'm trying to find out :
$$\lim_{x\rightarrow \infty}x\ln\left(\frac{x+1}{x-1}\right)$$
First I tried by inspection. I suspect that
$$\lim_{x\rightarrow \infty} \left( \frac{x+1}{x-1}\right)=1$$
So I expect $\ln (\cdots)\rightarrow 0$ as $x\rightarrow \infty$, while if I multiply this by $x$, one might expect the product to go to infinity.
Next what I have done is to use a substitution, I don't know if it's right to do or not.
$$x'=\frac{x+1}{x-1}\Rightarrow x=\frac{x'+1}{x'-1}$$
So that
$$\lim_{x\rightarrow \infty}x\ln\left(\frac{x+1}{x-1}\right)\Rightarrow \lim_{x'\rightarrow 1}\left(\frac{x'+1}{x'-1}\right)\ln x'$$
Now I can use L'Hospital's rule:
$$\lim_{x'\rightarrow 1}\frac{\ln x'+\frac{x'+1}{x'}}{1}=2$$
Can I use such a substitution always to solve problems? What's wrong with the reasoning I did earlier?
| Hint:
Set $\dfrac1x=h$ to find
$$\lim_{h\to0^+}\dfrac{\ln\dfrac{1+h}{1-h}}h =\lim_{h\to0^+}\dfrac{\ln(1+h)}h+\lim_{h\to0^+}\dfrac{\ln(1-h)}{-h}=?$$
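For what it's worth, a quick numerical sanity check (my own addition, not part of the hint) confirms the value $2$; the error behaves like $\frac{2}{3x^2}$:

```python
import math

def g(x):
    # the original expression x * ln((x+1)/(x-1))
    return x * math.log((x + 1) / (x - 1))

for x in [10.0, 100.0, 1000.0]:
    assert abs(g(x) - 2) < 1 / x**2   # error is roughly 2/(3 x^2)
```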
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4027391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 0
} |
The FONC and the optimal step size $\alpha_k$. Help me with the intuition please I was hoping someone could clarify something for me. For a one-dimensional line search problem
$x^{k+1} = x^{k} + \alpha d_{k}$ where $d_k$ is the descent direction and $\alpha$ the step size, how do we find the optimal step size, $\alpha_k$ at the point $x_k$?
I am confused about a question from my tutorial set, and the solution provided by the model answers does not convince me. I argue that just because we find the optimal $\alpha_k$, that does not mean that $x^k+ \alpha_k d_k$ is the point at which we have the global minimum, so why does the First Order Necessary Condition (FONC) hold? Each step $k$ in the algorithm will have its own optimal $\alpha_k$, and only at the actual optimum $x^*$ will the FONC hold.
The later steps about orthogonality I follow, but I struggle with the intermediate steps.
EDIT: Clarified the meaning of FONC as the First Order Necessary Condition, which says that the directional derivative is zero at a local minimum.
| If I'm not mistaken:
*
*consider the functions $\phi_k: \mathbb{R} \rightarrow \mathbb{R}$ given by $\phi_k(\alpha)=f(x^k+\alpha d^k)$. Note that we have a different function for every index $k$
*since $\alpha_k$ is a global minimum of the differentiable function $\phi_k$, then $\frac{d}{d\alpha}\phi_k(\alpha_k)=0$. So, we are applying the "FONC" at each step to the composed function $\phi_k$, and that doesn't mean that $x^k+\alpha_k d^k$ is a critical point of $f$ (by coincidence, it would be if your method converged at the current iterate to a local solution of $f$, for instance, but this is generally not the case)
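To make this concrete, here is a small sketch (my own example with a specific quadratic $f$, not from the question). Exact line search makes $\frac{d}{d\alpha}\phi_k(\alpha_k)=0$, equivalently $\nabla f(x^k + \alpha_k d^k) \perp d^k$, yet the new point is not a critical point of $f$:

```python
# Exact line search for f(x) = 1/2 x^T A x - b^T x along the
# steepest-descent direction d = -grad f(x).  The minimizer alpha of
# phi(alpha) = f(x + alpha d) makes the NEW gradient orthogonal to d,
# but the new gradient itself is not zero.

A = [[3.0, 0.0], [0.0, 1.0]]
b = [1.0, 1.0]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def grad(x):                                  # grad f(x) = A x - b
    Ax = matvec(A, x)
    return [Ax[0] - b[0], Ax[1] - b[1]]

x = [0.0, 0.0]
g = grad(x)
d = [-gi for gi in g]                         # descent direction
alpha = dot(g, g) / dot(d, matvec(A, d))      # exact minimizer of phi
x_new = [x[0] + alpha * d[0], x[1] + alpha * d[1]]

assert abs(dot(grad(x_new), d)) < 1e-12       # phi'(alpha_k) = 0
assert dot(grad(x_new), grad(x_new)) > 0.1    # but grad f(x_new) != 0
```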
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4027529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$y'=\frac{x-y+2}{x+y-1}$ First, I substitute $x=\xi + h$, $y=\eta+k$. The equation becomes $$y'=\frac{\xi+h-(\eta+k)+2}{\xi+h+\eta+k-1}$$
and this becomes homogenous if we let $h=-\frac{1}{2}$ and $k=\frac{3}{2}$ ( We solve the constant terms such that they become zero ). I think it's correct this far, but what next? At this point we'd have $$\eta'=\frac{\xi-\eta}{\xi+\eta}$$
which is homogenous, and I tried substituting $\eta = u\xi$. This would lead to the equation $$u'\xi+u=\frac{1-u}{1+u}$$
I reduce $u$ from both sides. This will give $$u'\xi=\frac{1-u}{1+u}-u=\frac{1-2u-u^2}{1+u}$$
or $$\frac{du}{d\xi}\xi=\frac{1-2u-u^2}{1+u}$$
inverting this gives the relation $$\frac{1}{\xi}d\xi= \frac{1+u}{1-2u-u^2}du$$
So there has been some mistake made. The correct answer does not involve logarithms, yet $\ln(\xi)$ will inevitably be a part of the solution obtained this way. Can somebody spot my mistake?
| The integral on the RHS also involves a logarithm. When you take antilogs on both sides after integration, you will be free of any logarithms.
Another method:
$$y'=\frac{x-y+2}{x+y-1}$$
Cross multiply:
$$xdy+ydy-dy=xdx-ydx+2dx$$
$$d(xy)+ydy-dy=xdx+2dx$$
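Completing this second method (the integration step is my own addition): integrating both sides gives the implicit solution $xy + \frac{y^2}{2} - y = \frac{x^2}{2} + 2x + C$, with no logarithms in sight. A quick numerical check that this quantity is conserved along solutions of the ODE:

```python
# F(x, y) = xy + y^2/2 - y - x^2/2 - 2x should be constant along any
# solution of y' = (x - y + 2)/(x + y - 1).  Integrate with RK4 and check.

def f(x, y):
    return (x - y + 2) / (x + y - 1)

def F(x, y):
    return x*y + y*y/2 - y - x*x/2 - 2*x

x, y, h = 1.0, 1.0, 1e-3
c0 = F(x, y)
for _ in range(1000):            # integrate from x = 1 to x = 2
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    x += h

assert abs(F(x, y) - c0) < 1e-8
```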
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4027668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Characterization of complex exponential in terms of real identities It is well known (or at least easy to see) that the complex exponential function $\exp:\mathbb{C}\to \mathbb{C}$ can be characterized as the unique holomorphic function $f:\mathbb{C}\to \mathbb{C}$ such that
$$
\lim_{h\to 0} \frac{f(h)-1}{h}=1 \quad \text{and} \quad f(z+w)=f(z)f(w) \quad \text{for all } z,w\in\mathbb{C}.
$$
In fact we can make a stronger characterization:
$$
\lim_{h\to 0,\; h\in\mathbb{R}} \frac{f(h)-1}{h}=1 \quad \text{and} \quad f(z+w)=f(z)f(w) \quad \text{for all } z,w\in\mathbb{C}.
$$
My question is: What are all the holomorphic functions $f:\mathbb{C}\to \mathbb{C}$ such that
$$
\lim_{h\to 0,\; h\in\mathbb{R}} \frac{f(h)-1}{h}=1 \quad \text{and} \quad f(x+y)=f(x)f(y) \quad \text{for all } x,y\in\mathbb{R}?
$$
Note the (perhaps big) difference: now the condition $f(x+y)=f(x)f(y)$ is given only for all real $x$ and $y$. I know that the complex exponential function is an example of such an $f$, but are there any others? If not, how can one prove that the exponential is the only function satisfying these conditions?
Thank you in advance!
| The answer is the same: If $f$ is holomorphic in $\Bbb C$ with $f(x+y)=f(x)f(y)$ for all $x, y\in \Bbb R$, then $f(z+w)=f(z)f(w)$ holds for all $z, w \in \Bbb C$, due to the identity theorem for holomorphic functions.
*
*First fix $y\in \Bbb R$: Then $f(x+y)=f(x)f(y)$ for all $x \in \Bbb R$ implies that $f(z+y)=f(z)f(y)$ for all $z \in \Bbb C$.
*Now fix $z \in \Bbb C$: Then $f(z+y)=f(z)f(y)$ for all $y \in \Bbb R$ implies that $f(z+w)=f(z)f(w)$ for all $w \in \Bbb C$; since $z \in \Bbb C$ was arbitrary, the identity holds for all $z, w \in \Bbb C$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4027841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Upper bound on trace formula I am trying to prove that this quantity $f(W) = \operatorname{trace}((C^T\circ(W^T - W'^T))(W'-W)(XX^T))$ is smooth, where $\circ$ is the Hadamard product. Smoothness is satisfied if $f(W)\le H\|W - W' \|^2$, where $H$ is a constant and $\| \cdot \|$ is any norm.
I am very close to doing it; however, I am struggling with the Hadamard product inside the trace.
| One approach is to use vectorization. If $\|\cdot\|$ denotes the Frobenius norm, then we have $\|W\| = \|\operatorname{vec}(W)\|$, and we can apply the following facts:
*
*$\operatorname{vec}(A \circ B) = \operatorname{diag}(\operatorname{vec}(A)) \operatorname{vec}(B)$,
*$\operatorname{tr}(A^T B) = \operatorname{vec}(A)^T\operatorname{vec}(B)$,
*$\operatorname{vec}(AB) = (B^T \otimes I)\operatorname{vec}(A)$.
With that, we can rewrite
$$
\operatorname{tr}((C^T\circ(W^T - W'^T))(W'-W)(XX^T)) =\\
\operatorname{vec}(W - W')^T
\operatorname{diag}(\operatorname{vec}(C))((XX^T) \otimes I)
\operatorname{vec}(W' - W),
$$
where fact 2 is applied with $A = C\circ(W-W')$ and $B = (W'-W)(XX^T)$. (Note that the left factor comes out as $\operatorname{vec}(W-W')$, not $\operatorname{vec}(W'-W)$; this overall sign does not affect the norm bound below.)
With the inequality $x^TAy \leq \sigma_{\max}(A) \|x\|\|y\|$, we can therefore conclude that
$$
\operatorname{tr}((C^T\circ(W^T - W'^T))(W'-W)(XX^T)) \leq
\\
\sigma_{\max}(\operatorname{diag}(\operatorname{vec}(C))((XX^T) \otimes I)) \cdot \|W - W'\|^2.
$$
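Here is a numerical sanity check of the vectorization step (my own, using the column-stacking convention for $\operatorname{vec}$ and small random integer matrices so exact equality can be asserted). Note that applying fact 2 with $A = C\circ(W-W')$ makes the left factor $\operatorname{vec}(W-W')^T$, an overall sign relative to $\operatorname{vec}(W'-W)^T$ that is irrelevant to the $\sigma_{\max}$ bound; the check below uses that form:

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def hadamard(A, B):
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def vec(A):   # column-stacking convention: entry (i, j) goes to slot j*rows + i
    return [A[i][j] for j in range(len(A[0])) for i in range(len(A))]

def kron(A, B):
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def eye(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def diag(v):
    return [[v[i] if i == j else 0 for j in range(len(v))] for i in range(len(v))]

random.seed(0)
p, q, r = 2, 3, 2
rnd = lambda m, n: [[random.randint(-3, 3) for _ in range(n)] for _ in range(m)]
W, Wp, C, X = rnd(p, q), rnd(p, q), rnd(p, q), rnd(q, r)

G = [[W[i][j] - Wp[i][j] for j in range(q)] for i in range(p)]   # W - W'
D = [[-G[i][j] for j in range(q)] for i in range(p)]             # W' - W
M = matmul(X, transpose(X))                                      # X X^T

# LHS: tr((C^T o (W^T - W'^T)) (W' - W) (X X^T)), using W^T - W'^T = G^T
lhs = trace(matmul(matmul(hadamard(transpose(C), transpose(G)), D), M))

# RHS: vec(W - W')^T diag(vec(C)) ((X X^T) kron I) vec(W' - W)
Mat = matmul(diag(vec(C)), kron(M, eye(p)))
vG, vD = vec(G), vec(D)
MvD = [sum(Mat[i][j] * vD[j] for j in range(p * q)) for i in range(p * q)]
rhs = sum(vG[i] * MvD[i] for i in range(p * q))

assert lhs == rhs
```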
Another, less computation-intensive approach: for $W \neq W'$, note that $\frac{f(W)}{\|W - W'\|^2} = f(Y)$, where $Y = W' + \frac{W - W'}{\|W - W'\|}$ (this uses the fact that $f$ is a homogeneous quadratic in $W - W'$). Note that every matrix of the form $A = \frac{W - W'}{\|W - W'\|}$ satisfies $\|A\| = 1$.
With that in mind, let $S$ denote the set
$$
S = \{Y: Y = W' + A \text{ and } \|A\| = 1\}.
$$
Note that $S$ is compact (closed and bounded). Thus, the continuous function $f$ must attain a maximum over $S$. In other words, there exists a $k > 0$ such that for all $W \neq W'$, we have
$$
\frac{f(W)}{\|W - W'\|^2} \leq k \implies f(W) \leq k \cdot \|W - W'\|^2.
$$
Trivially, this second inequality holds for $W = W'$ as well.
Regarding the explicit computation of an upper bound: note that
$$
\sigma_{\max}(\operatorname{diag}(\operatorname{vec}(C))((XX^T) \otimes I)) \leq \\
\sigma_{\max}(\operatorname{diag}(\operatorname{vec}(C)))
\cdot \sigma_{\max}(((XX^T) \otimes I)) = \\
\left[\max_{i,j} |C_{ij}| \right]\cdot \sigma_{\max}(X)^2.
$$
Most computational software will have a reasonably efficient method for computing $\sigma_{\max}(X)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4027995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Assume a real-valued net has a cluster point $a\in \mathbb R$. Is there a cofinal subnet converging to $a$? Let $(D, \geq)$ be a directed set and let $(n_d)_{d\in D}$ be a real-valued net. Assume $a\in \mathbb R$ is a cluster point of $(n_d)$, i.e., for every neighborhood $U$ of $a$ and every $d\in D$ there exists $d'\geq d$ such that $n_{d'}\in U$.
It is a standard fact that there exists a subnet of $(n_d)$ that converges to $a$.
But is it possible to find a cofinal subnet (see for example wikipedia) with the same property? To rephrase: is there a cofinal subset $D'\subset D$ such that $\lim_{d\in D'}n_d = a$?
If $(n_d)$ takes value in an arbitrary topological space $X$, then this is not true. See for example this answer and this one. My question is specific to the case $X=\mathbb R$, and for this space I could not find any counter-example nor proof.
| Not necessarily.
Let $D=\omega_1\times\omega$ ordered lexicographically: $\langle\alpha,m\rangle\preceq\langle\beta,n\rangle$ iff $\alpha<\beta$, or $\alpha=\beta$ and $m\le n$; this is clearly a directed set. Define a net
$$\nu:D\to\Bbb R:\langle\alpha,n\rangle\mapsto 2^{-n}\,;$$
clearly $0$ is a cluster point of $\nu$.
Let $C$ be a cofinal subset of $D$. For each $n\in\omega$ let $C_n=C\cap(\omega_1\times\{n\})$; $|C|=\omega_1$, so there is some $m\in\omega$ such that $|C_m|=\omega_1$. Then $C_m$ is cofinal in $C$, but $\nu[C_m]=\{2^{-m}\}$ is disjoint from the nbhd $(-2^{-m},2^{-m})$ of $0$. Thus, no cofinal subnet of $\nu$ converges to $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4028197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Finding the limit by using Maclaurin series $$ \lim_{x \to 0}\frac {x\cdot \sin x}{1-\cos(2x)} $$
I'm supposed to find the limit by using the terms, and figuring out $ x\times \sin x$ is easy enough, just multiply $\sin x$ with $x$, but when it's divided by $1-\cos(2x)$, I'm really lost. I can and have calculated what the first 5 terms of $1-\cos(2x)$ is, but I have no idea what I do with those two to get one series, so I can get the limit. If WolframAlpha isn't lying to me, the limit should be $\frac{1}{2}$.
From all my lecturer has written, I can't see anywhere in his notes that it is mentioned how to divide power series of any kind. I'm studying over the internet, just following the lectures online, and don't have any proper help around me; hence, I'm asking here, for the second time in 24 hours.
So where do I go from having the first 5 terms of the Maclaurin series of $ x\times \sin x$ and $1-\cos(2x)$?
| We have that $\cos 2x = 1 - \frac12 (2x)^2 + \frac1{24} (2x)^4 + \dots,$ so $1 - \cos 2x = 2 x^2 - \frac{2}{3} x^4 + \dots$ and $x \sin x = x^2 - \frac16 x^4 + \dots.$ Thus $\lim_{x \to 0} \frac{x \sin x}{1 - \cos 2x} = \lim_{x \to 0} \frac{x^2 - \frac16 x^4 + \dots}{2 x^2 - \frac23 x^4 + \dots} = \lim_{x \to 0} \frac{1 - \frac16 x^2 + \dots}{2 - \frac23 x^2 + \dots} = \frac12,$ simply by dividing numerator and denominator by $x^2$ and then evaluating their limits. Hope this helps. :)
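A quick numerical check (my addition, not part of the answer): the ratio approaches $1/2$ with error of order $x^2$, just as the truncated series predict:

```python
import math

def ratio(x):
    return x * math.sin(x) / (1 - math.cos(2 * x))

for x in [0.1, 0.01, 0.001]:
    assert abs(ratio(x) - 0.5) < x**2   # error is roughly x^2 / 12
```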
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4028348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Is there a formula for $n^{th}$ power of $\cos(ax+b)$? Is there a closed form formula or trigonometric relation for $\cos^{n}(ax+b)$?
| As mentioned in your comment, if you are only looking to find the period of $\cos^n(ax+b)$, it wouldn't be necessary to write a closed-form formula or expression for it.
When $n$ is even, we see that $(\cos(\pi - (ax+b)))^n = (\cos(\pi + (ax+b)))^n$, and this turns the period of the function into $\pi/a$, whereas when $n$ is odd, we don't have this luxury anymore. Instead we have $\cos^n(ax+b)$ going through its full $2\pi/a$ cycle.
Edit: As @Rob Arthan pointed out in his comment, given $f(x)$ with period $T$ and some function $g(x)$, the composition $h(x) = g(f(x))$ has period $T/k$ for some integer $k$. It is also possible to observe that $T$ is a period of $h$, but often, depending upon the function $g$, we can do better.
In our case $f = \cos(ax+b)$ and $g(x) = x^n$. When $n$ is odd, $g(x)$ is injective, and so every input to $g$ has a unique associated output, which means there is no room in $[0,2\pi/a]$ (the original period of $f$) for repetition of the same output values. This is why, when $n$ is odd, the period of $h$ is the same as the period of $f$. However, when $g$ is not injective, we have multiple inputs being mapped to the same output. In the case of an even $n$, we have exactly two inputs in a given period being mapped to the same output, which essentially means we are repeating the graph of $f$ twice in $[0,2\pi/a]$ upon composing, which means the period is halved simply giving us a period of $\pi/a$.
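A quick numerical illustration of the two cases (my own, taking $a=1$, $b=0$): $\cos^2 x$ has period $\pi$, while $\cos^3 x$ genuinely needs the full $2\pi$:

```python
import math

def h(x, n):
    return math.cos(x) ** n

xs = [k * 0.1 for k in range(70)]
# n = 2: shifting by pi changes nothing, so the period is halved
assert all(abs(h(x + math.pi, 2) - h(x, 2)) < 1e-12 for x in xs)
# n = 3: pi is NOT a period (the sign flips), but 2*pi is
assert any(abs(h(x + math.pi, 3) - h(x, 3)) > 0.5 for x in xs)
assert all(abs(h(x + 2 * math.pi, 3) - h(x, 3)) < 1e-12 for x in xs)
```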
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4028524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
sum and product of random variables
Let $X$ and $Y$ be two (general) random variables with finite means and let $Z=X+Y$ and $Z' = XY.$ Define for a random variable $X,X^+$ to be the random variable equal to $\max\{X(\omega), 0\}$ and $X^-$ so that $X^-(\omega) = \max\{-X(\omega), 0\},$ where $\omega \in \Omega,$ the sample space.
Express $Z^+$ and $Z^-$ in terms of $X^+, X^-, Y^+, Y^-,$ with justification.
If $X$ and $Y$ are independent, express $Z'^+$ and $Z'^-$ in terms of $X^+, X^-, Y^+, Y^-,$ with justification.
I think $E(Z^+) - E(Z^-) = E(X^+) - E(X^-) + E(Y^+) - E(Y^-).$ But I'm not sure how to express $Z^+$ and $Z^-$ from this. I tried considering all the possible linear combinations of $X^+, X^-, Y^+, Y^-,$ with coefficients of absolute value less than or equal to $1$, but that seems very tedious. For instance, clearly $Z^+\neq X^+ + Y^+ - X^- - Y^-$ because they're unequal when $X(\omega), Y(\omega) < 0.$ Is there some sort of way for me to deterministically find the right coefficients?
Also, if $X$ and $Y$ are independent, then $E(Z'^+) - E(Z'^-) = E(Z')= E(XY) = E(X)E(Y) = (E(X^+) - E(X^-))(E(Y^+)- E(Y^-))$, and so the values of $Z'^+$ and $Z'^-$ should reflect this.
Could someone give some hints as to how to find the required relationships?
| Using all the cases that I believe are possible (e.g. when $X$ and $Y$ are both positive, when both are negative, when $X$ is positive and $Y$ is negative but greater in absolute value or negative but smaller in absolute value, and with the roles of $X$ and $Y$ switched in the last case):
X      Y      Z     X+    Y+    X-    Y-     Z+    Z-
1      2      3     1     2     0     0      3     0
1     -1/2    1/2   1     0     0     1/2    1/2   0
-1/2   1      1/2   0     1     1/2   0      1/2   0
1     -2     -1     1     0     0     2      0     1
-2     1     -1     0     1     2     0      0     1
-1    -3     -4     0     0     1     3      0     4
I found that
$$Z^+=\max\{X^+-Y^-, 0\}+\max\{Y^+-X^-, 0\}\\
Z^-=\max\{X^--Y^+, 0\}+\max\{Y^--X^+, 0\}$$
and, considering e.g. both positive, exactly one of them negative, and both negative,
X      Y      Z'    X+    Y+    X-    Y-     Z'+   Z'-
1      2      2     1     2     0     0      2     0
1     -1/2   -1/2   1     0     0     1/2    0     1/2
-1     2     -2     0     2     1     0      0     2
-1    -3      3     0     0     1     3      3     0
gives
$$Z'^+=X^+Y^++X^-Y^-\\
Z'^-=X^+Y^-+X^-Y^+$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4028606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Whether Hausdorff Property is Required
Let $H$ denote a group that is also a Hausdorff topological space. Show that $H$ is a topological group if and only if the map $h: H \times H \rightarrow H, h:=(x,y)\rightarrow xy^{−1}$ is continuous.
The following proof does not use the Hausdorff property.
Suppose $h$ is continuous.
$f:H \rightarrow H \times H, f(x):=(e, x)$ is continuous (Munkres theorem 19.6). $(h\circ f)(x)=x^{−1}$ is continuous.
$j:H\times H \rightarrow H \times H, j(x, y):=(x, y^{−1})$. $\pi _1 (x, y)=x$ is continuous. $(h\circ f \circ \pi _2)(x, y)=y^{−1}$ is continuous. By Munkres theorem 19.6, $j$ is continuous. $(h\circ j)(x, y)=xy$ is continuous.
So $H$ is a topological group.
Suppose $H$ is a topological group. $k(x, y):=xy$ and $l(x):=x^{−1}$.
$\pi _1 (x, y)=x$ is continuous. $(l \circ \pi _2) (x, y)=y^{−1}$ is continuous. By Munkres theorem 19.6, $j$ is continuous. $k \circ j=h$ is continuous.
Please let me know whether the Hausdorff property is required. (I'm aware that a topological group is Hausdorff.)
Thanks.
Edit: Thanks to Aryaman Maithani's comment: some textbooks (e.g. Munkres) require Hausdorffness in the definition of a topological group. Hausdorffness is probably required only in that case.
| Indeed the proof does not require any Hausdorffness, we only need that certain other natural product maps are continuous, and this holds regardless of separation axioms.
I would do it this way: suppose $h$ is continuous; then indeed $f:x \to (e,x)$ from $H$ into $H \times H$ is continuous (its compositions with the two projections are a constant map, which is always continuous, and the identity on $H$, ditto). The inversion map $I:x \to x^{-1}: H \to H$ is then continuous as $h \circ f$, a composition of continuous maps. The product map $P: (x,y) \to xy$ is then also continuous as $h \circ (1_H \times I)$, etc.
And if $P$ and $I$ are continuous so is $h$ as the composition $P \circ (1_H \times I)$, so both implications hold.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4028877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Epsilon delta proof sketch of $\lim_{x\to1}\frac{1-\sqrt{x}}{1-x}=\frac{1}{2}$ I need to prove that $\lim_{x\to1}\frac{1-\sqrt{x}}{1-x}=\frac{1}{2}$.
My first step was re-writing this as $\frac{1}{1+\sqrt{x}}$. So now for the sketch of the proof I got:
Let $\epsilon>0$. Note that for all $x\in\mathbb{R}^+$ with $|x-1|<\delta$:
$\left|\frac{1}{1+\sqrt{x}}-\frac{1}{2}\right|=\frac{|1-\sqrt{x}|}{2|1+\sqrt{x}|}=\frac{|1-\sqrt{x}|}{2(1+\sqrt{x})}$ (since $1+\sqrt{x}> 0$ we can omit the absolute value symbols).
Now, observe that: $|x-1|<\delta \Leftrightarrow -\delta<x-1<\delta\Leftrightarrow -\delta+1<x<\delta+1\Leftrightarrow\sqrt{-\delta+1}-1<\sqrt{x}-1<\sqrt{\delta+1}-1$. And thus $|\sqrt{x}-1|<\sqrt{\delta+1}-1$. (Note that we need $\delta<1$, or the expression under the root would be negative.)
Now observe that:
$\frac{|1-\sqrt{x}|}{2(1+\sqrt{x})}\leq \frac{1}{2}|1-\sqrt{x}|<\frac{1}{2}(\sqrt{\delta+1}-1)<\sqrt{\delta+1}-1<\epsilon.$
So, we get that $\delta+1<(\epsilon+1)^2\Leftrightarrow\delta<\epsilon^2+2\epsilon$.
For me, this is a bit confusing. Normally I would get a more elegant delta definition, so I don't know if what I did was true. Another problem for me was that normally I could use $|x-1|<\delta$ directly, and maybe get another restriction, but this time I only used that $|\sqrt{x}-1|<\sqrt{\delta+1}-1$. Can somebody clarify whether what I did was right, or show a way to do this more elegantly?
| Your method is fine; the bounds you put on $\delta$ are "loose" in the sense that they do not correspond to the largest symmetric interval about $x = 1$ that guarantees $|1/(1+\sqrt{x}) - 1/2| < \epsilon$, but they are still valid for showing that the limit exists and equals $1/2$. In other words, your bound $\delta < \epsilon^2 + 2\epsilon$ approaches $0$ faster than the more complicated bound $$\delta < \begin{cases} \frac{8\epsilon}{(1+2\epsilon)^2}, & 0 < \epsilon < 1/2, \\ 1, & \epsilon \ge 1/2, \end{cases} $$ (note that the quantity $8\epsilon/(1-2\epsilon)^2$ is always strictly greater than $\epsilon^2+2\epsilon$ for $0 < \epsilon < 1/2$).
That said, some of the reasoning in your proof can be simplified as suggested in the comments. But there's nothing mathematically wrong with your approach.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4029059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Can I rewrite this solution to a log growth question with less steps and text? I am working on a textbook exercise:
Given the log growth formula of $\frac{1000}{1+9e^{-0.6t}}$ find how long it takes for the population to reach 900.
The solution provided is 7.3, and I was able to arrive at this. But, reading over my solution, I 'feel' like it has too many steps and too much text, and I just wondered if there was a simpler route to my answer.
Here's my working:
$$900=\frac{1000}{1+9e^{-0.6t}}$$
$$1000=(900)(1+9e^{-0.6t})$$
$$\frac{1000}{900}-1=9e^{-0.6t}$$
(Here is where I think I'm doing things 'wrong'. I have a fraction of a fraction on the left side; is there a better way to denote this? Can I somehow use a negative exponent instead of dividing by 9, for example? Or anything else? Or is what I have sound?)
$$\frac{\frac{1000}{900}-1}{9}=e^{-0.6t}$$
$$\ln\left(\frac{1000}{900}-1\right)-\ln 9=-0.6t$$
(Again, here I feel like this expression is too cluttered; is there a simpler, less wordy way to express it?)
$$\frac{\ln\left(\frac{1000}{900}-1\right)-\ln 9}{-0.6}=t$$
$$t=7.3$$
More generally, I'm wondering if I'm missing any opportunities to simplify my fraction expressions, either by using rules of logs that I've missed or by any other means?
| Your solution looks OK,
and there are almost always several other ways, for example,
\begin{align}
\frac{1000}{1+9e^{-0.6t}}&=900
\tag{1}\label{1}
,\\
\frac{10}{1+9e^{-0.6t}}&=\frac91
\tag{2}\label{2}
,\\
\frac{10-9}{1+9e^{-0.6t}-1}
&=\frac91
\tag{3}\label{3}
,\\
\frac{1}{9e^{-0.6t}}
&=\frac91
\tag{4}\label{4}
,\\
e^{0.6t}&=81
\tag{5}\label{5}
,\\
0.6t&=4\ln 3
\tag{6}\label{6}
,\\
t&=\tfrac{20}3\,\ln3
\approx 7.324
\tag{7}\label{7}
.
\end{align}
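A quick check (my own addition) that the exact value $t=\frac{20}{3}\ln 3$ from $(7)$ really gives a population of $900$ and matches the textbook's rounded $7.3$:

```python
import math

t = 20 / 3 * math.log(3)                       # exact solution from step (7)
population = 1000 / (1 + 9 * math.exp(-0.6 * t))

assert abs(population - 900) < 1e-9            # e^{-0.6 t} = 3^{-4} = 1/81
assert round(t, 1) == 7.3
```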
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4029254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to solve $\frac{\partial V}{\partial t} + x + \frac{\partial V}{\partial x}- \frac{1}{4} \frac{1}{\left(\frac{\partial V}{\partial x}\right)} = 0$ I'm solving this problem of optimal control using dynamic programming:
$$
\begin{cases}
\min \displaystyle \int_0^2(x-u)dt + x(2) \\
\dot x = 1+u^2 \\
x(0) = 1
\end{cases}
$$
Then solving the Bellman-Hamilton-Jacobi equation I found the following PDE:
$$\frac{\partial V}{\partial t} + x + \frac{\partial V}{\partial x}- \frac{1}{4} \frac{1}{\left(\frac{\partial V}{\partial x}\right)} = 0$$
The problem gives a hint: in order to solve the BHJ equation, we suggest looking for the solution in the family of functions $\mathcal{F} = \{V(t,x) = A +Bt + Ct^2 + D\log(3-t) + E(3-t)x\}$ where $A,B,C,D,E$ are all real constants.
My question is:
How can someone derive that all the solutions of the BHJ equation are in that family of functions? In other words, how could I manage to solve the BHJ equation without any hint?
| I find that this problem can be more easily solved using Pontryagin's maximum principle, which gives the following Hamiltonian
$$
H(t,x,\lambda) = x - u + \lambda\,(1 + u^2),
$$
such that
$$
\dot{\lambda} = -1, \quad u = \frac{1}{2\,\lambda}
$$
and from the terminal cost it follows that $\lambda(2) = 1$. In this case solving for the co-state as a function of time is easy, namely $\lambda(t) = 3 - t$, and thus
$$
u(t) = \frac{1}{2\,(3 - t)}. \tag{1}
$$
Note that this solution is still independent of $x(0)$.
When formulating the PDE one also obtains that
$$
u = \frac{1}{2\,V_x}, \tag{2}
$$
with $V_x$ shorthand notation for the partial derivative of $V(t,x)$ with respect to $x$. Equating $(2)$ to $(1)$ yields
$$
V_x = 3 - t. \tag{3}
$$
Therefore, the final expression for $V(t,x)$ should be of the form
$$
V(t,x) = x\,(3 - t) + U(t), \tag{4}
$$
with $U(t)$ a yet unknown function of only $t$ and no $x$. Substituting $(4)$ together with $(3)$ in the PDE yields
\begin{align}
0 &= -x + \dot{U} + x + 3 - t - \frac{1}{4} \frac{1}{3 - t}, \\
&= \dot{U} + 3 - t - \frac{1}{4} \frac{1}{3 - t}.
\end{align}
From this it hopefully becomes clear where the function family comes from. Namely, the $E(3-t)x$ term is derived from $(3)$, the second-order polynomial comes from integrating $t - 3$, and the logarithm comes from integrating $(3 - t)^{-1}$.
I am not sure how one could spot this family of functions without Pontryagin's maximum principle. I suspect this is also why the exercise gave the function family, because solving nonlinear PDEs is hard.
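As a sanity check (my own, with the additive constant of $U$ dropped since only derivatives of $V$ enter the PDE), one can verify numerically that $V(t,x) = x(3-t) + \frac{t^2}{2} - 3t - \frac14\ln(3-t)$ satisfies the BHJ equation:

```python
import math

# V(t, x) = x(3 - t) + U(t) with U(t) = t^2/2 - 3t - (1/4) ln(3 - t);
# check V_t + x + V_x - (1/4)/V_x = 0 via central differences.

def V(t, x):
    return x * (3 - t) + t**2 / 2 - 3*t - 0.25 * math.log(3 - t)

h = 1e-6
for t, x in [(0.0, 1.0), (1.0, 2.5), (1.9, -0.7)]:
    Vt = (V(t + h, x) - V(t - h, x)) / (2 * h)
    Vx = (V(t, x + h) - V(t, x - h)) / (2 * h)
    residual = Vt + x + Vx - 0.25 / Vx
    assert abs(residual) < 1e-6
```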
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4029414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Weak Stationary Time Series If $\{X_t : t \in Z\}$ is a weakly stationary time series, does that necessarily mean that $\{X^2_t : t \in Z\}$ is a weakly stationary time series as well? I can't really think of an example to prove or disprove this statement.
| No, it does not. Weak stationarity requires finite variance. So let $X_t$ be weakly stationary with finite variance but infinite fourth moment (an infinite third absolute moment suffices, since a finite fourth moment would imply a finite third). Then $X_t^2$ does not have a finite second moment, i.e. its variance does not exist, and therefore it cannot be weakly stationary.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4029560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Absolute convergence of $\sum_{n=0}^{\infty}a_nb_n$ if $\sum_{n=0}^{\infty}b_n^2$ and $\sum_{n=0}^{\infty}a_n^2$ converge Let $(b_n)$ and $(a_n)$ sequences of real numbers. If $\sum_{n=0}^{\infty}b_n^2$ and $\sum_{n=0}^{\infty}a_n^2$ converge, prove that $\sum_{n=0}^{\infty}a_nb_n$ converges absolutely.
I would like to have feedback on my proof and to know if everything is correct, please.
Proof.
As $\sum_{n=0}^{\infty}b_n^2$ converges, $\lim_{n\to\infty}b_n^2=0$. Therefore, $b_n^2$ is bounded and we have that: $\exists M>0$: $|b_n|<\sqrt{M}\ \forall n\in\mathbf{N}$ .
By analogous reasoning we obtain that $\forall \epsilon>0\ \exists N \ \forall n\ge N$: $|a_n|<\frac{\epsilon}{\sqrt{M}(n-N)}$.
Let's show that $\sum_{n=0}^{\infty}|a_nb_n|$ is Cauchy. By definition we have to show the following: $\forall \epsilon>0 \ \exists N' \ \forall n\ge N'$:
$|\sum_{k=N'}^{n}|a_kb_k||<\epsilon$
But,
$|\sum_{k=N'}^{n}|a_kb_k||=\sum_{k=N'}^{n}|a_kb_k|=\sum_{k=N'}^{n}|a_k||b_k|\le \sqrt{M}\sum_{k=N'}^{n}|a_k|\le \sqrt{M}\cdot \frac{\epsilon}{\sqrt{M}(n-N)}\cdot (n-N')$
If we set $N'=N$, we obtain that
$|\sum_{k=N'}^{n}|a_kb_k||\le \epsilon$. We conclude then that $\sum_{n=0}^{\infty}a_nb_n$ converges absolutely.
| Your proof cannot be correct, as in the end you only used that (i) $(b_n)_n$ was bounded, and (ii) $\lim_{n\to \infty} a_n =0$. But those assumptions alone cannot imply convergence (let alone absolute convergence) of $\sum_n a_n b_n$, as otherwise we would get convergence of $\sum_n \frac{1}{n}\cdot 1$.
Now, why is it false? Well, you write
By analogous reasoning we obtain that $\forall \epsilon>0\ \exists N \ \forall n\ge N$: $|a_n|<\frac{\epsilon}{\sqrt{M}(n-N)}$.
This step, for instance, is not true. It may hold for fixed $N,M$ (with a fixed $n$ on the RHS), but how do you prove it holds for all $n\geq N$? Here is a counterexample:
$$
a_n = \frac{1}{n^{2/3}}
$$
Note that $\sum_n a_n^2 < \infty$, but the statement you wrote does not hold.
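To see the failure concretely (my own illustration): take $\epsilon = 1$ and $M = 1$. Whatever $N$ you pick, $|a_n| < \frac{\epsilon}{\sqrt{M}(n-N)}$ is eventually violated, because $n^{-2/3}(n-N)$ grows like $n^{1/3}$:

```python
# For a_n = n^(-2/3), the inequality a_n < 1/(n - N) must fail for large n,
# no matter which N is chosen.

def a(n):
    return n ** (-2.0 / 3.0)

for N in [10, 100, 1000]:
    violated = any(a(n) >= 1.0 / (n - N)
                   for n in range(N + 1, N + 10**6, 997))
    assert violated
```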
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4029723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Showing $a^2=b^2$ in a group with 8 elements isomorphic to Quaternion group I am working in a group $G$ with 8 elements such that $a, b\in G$ and $ba=a^3b$. Moreover, I am given that both $a$ and $b$ have order 4 and $\langle a \rangle \neq \langle b \rangle$. I understand this group is isomorphic to the quaternion group, but this is not given and I am trying to prove it. I want to show $a^2=b^2$, but I am stuck.
Here is what I have done so far:
$$(ab)^2=abab=a(ba)b=aa^3bb=a^4b^2=b^2$$
Also from the equality $ba=a^3b$ I can derive many other equalities, but they do not seem to give me anything useful. For instance
$$aba=b \text{ or } a^3ba^3=b$$
Another observation I have made is that both $b^2$ and $a^2$ have order 2. But I am not sure if that is enough to conclude $a^2=b^2$. Please guide me in the right direction.
| Similar to @GerryMyerson's comment, use the fact that there are only two nonabelian groups of order $8$.
For instance, the quaternions only have one element of order $2$. The dihedral group has $5$.
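To make the counting concrete, here is a small computational check (my own): realize $Q_8$ and $D_4$ as groups of $2\times 2$ matrices, generate them by closure, and count elements of order $2$:

```python
# Q8 and D4 as 2x2 matrix groups; count elements A != I with A^2 = I.

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

I2 = ((1, 0), (0, 1))

def generate(gens):
    elems, frontier = {I2}, [I2]
    while frontier:
        nxt = []
        for A in frontier:
            for G in gens:
                P = mul(A, G)
                if P not in elems:
                    elems.add(P)
                    nxt.append(P)
        frontier = nxt
    return elems

# Q8 = <i, j> with i -> diag(i, -i), j -> [[0, 1], [-1, 0]]
Q8 = generate([((1j, 0), (0, -1j)), ((0, 1), (-1, 0))])
# D4 = <r, s> with r a rotation by 90 degrees, s a reflection
D4 = generate([((0, -1), (1, 0)), ((1, 0), (0, -1))])

def count_order_two(G):
    return sum(1 for A in G if A != I2 and mul(A, A) == I2)

assert len(Q8) == 8 and count_order_two(Q8) == 1   # only -I
assert len(D4) == 8 and count_order_two(D4) == 5   # r^2 and 4 reflections
```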
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4029888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Can there exist a unique solution to an initial value problem if the hypotheses of the existence and uniqueness theorem are not satisfied? I have been thinking about this question for a while. I haven't found a definite answer, but I am led to believe that there can be a unique solution to an IVP outside of the interval of validity. I just fail to prove it.
| For example, let
$$ f(x,y) = \cases{1 & if $xy < 0$\cr -1 & if $xy > 0$\cr 0 & if $xy = 0$\cr}$$
and consider the initial value problem
$$ \eqalign{ \dfrac{dy}{dx} &= f(x,y) \cr
y(0) &= 0\cr} $$
The hypotheses of the Existence and Uniqueness Theorem are not satisfied because $f(x,y)$ is not continuous in a neighbourhood of $(0,0)$, but there is a unique
solution, namely $y=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4030035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Does a second-countable, locally compact, Hausdorff space admit a countable basis of pre-compact open sets? Let $X$ be a such space. I know that for a Hausdorff space, being locally compact is equivalent to having a basis of pre-compact open sets. But how do I prove that $X$ can be covered by countably many such sets, by using second-countability of X? Thank you.
| If we have a countable base $\mathcal{B}$ for $X$ then any other base $\mathcal{C}$ of $X$ has a countable subfamily that is also a base. See here e.g.
We can apply this to the base of pre-compact sets that exists in a locally compact Hausdorff space.
Finally note that a base is in particular a cover of $X$ (for every $x$ there is at least one base element containing it, as $X$ itself is open and so a union of base sets).
Final remark: the closures of the cover elements of course still cover $X$ and show that $X$ is then $\sigma$-compact.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4030200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
de Rham and Hodge cohomologies become locally free after localization? This is a follow-up question to these two: 1 2.
I am studying a paper about the degeneration of the de Rham-Hodge spectral sequence. The following data are given: a field $k$ and a smooth proper $k$-scheme $X\longrightarrow\operatorname{Spec}(k)$, a finitely generated $\mathbb{Z}$-algebra $A$, and a smooth proper scheme $\mathfrak{X}\longrightarrow\operatorname{Spec}(A)$ such that $\mathfrak{X}_{k}\cong X$. The author claims:
After possibly localizing $A$, we can assume that the coherent $A$-modules $H^{n}_{\operatorname{dR}}(\mathfrak{X}/A)$ and $H^{n}_{\operatorname{Hodge}}(\mathfrak{X}/A)$ are locally free.
Why?
I am by no means an expert on the topic, so please be as detailed as possible in your answers; a reference might also help.
| This is a general property of coherent modules. First, it is a very good idea to look at the paper of Illusie, "Frobenius and Hodge degeneration", which gives much more detail than the paper you are reading now (and is written by the person who proved the theorem!).
About your question: look at the generic point $\eta$ of $\operatorname{Spec}(A)$ (the point associated to the prime ideal $0$); the local ring of $A$ at this point is just your fraction field. Now assume that your coherent sheaf is given by the module $M$; then $M_\eta$ is finite free over $A_\eta=\operatorname{Frac}(A)$. Let $(m_1/a_1,m_2/a_2,\ldots,m_n/a_n)$ be a basis for the module $M_\eta$. Consider generators $(k_1,\ldots,k_t)$ for $M$ and write $$k_i=\sum_j \frac{b_{ij}}{c_{ij}}\frac{m_j}{a_j}.$$ Now invert all the $a_j,c_{ij}$ in $A$ and call the resulting ring $A_S$. You then have a surjective map $A_S^n\to M_S$ sending $e_i\mapsto \frac{m_i}{a_i}$. Call the kernel of this map $N$, so you have the exact sequence $$0\to N\to A_S^n\to M_S\to 0.$$
But you know that $A_\eta^n\to M_\eta$ is an isomorphism and $A_S\to A_\eta$ is flat, so from the above exact sequence we get $N_\eta=0$. Because $N$ is finitely generated (by coherence), there is an element $a\in A_S$ such that $aN=0$. So, in summary, after inverting $a$ the map $A^n\to M$ becomes an isomorphism and hence $M$ is free.
Now back to your problem: you know that $R^if_*\Omega_{\mathfrak{X}/A}$ is coherent, so after passing to a localisation $A_S$, the module $(R^if_*\Omega_{\mathfrak{X}/A})_S$ is free. But this module is $R^if_*\Omega_{\mathfrak{X}_S/A_S}$ (the formation of $R^if_*$ is compatible with the flat base change $A\to A_S$), so we are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4030381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Help proof by induction I just got my first induction assignment in a new course. They want me to prove by induction that:
$$\sum_{s=1}^K s\cdot s! = (K+1)!-1.$$
The way I understand induction is that I test the first value, then assume the statement for $n$ and show it for $n+1$.
I've done it for $1$ and for $n$; however, I'm stuck at $n+1$:
$$\sum_{s=1}^{n+1} s\cdot s!=(n+1)!-1+(n+1)\cdot(n+1)!$$
Using Maple I can see the expression equals $(n+2)!-1$ (which is true), but I don't know how to reduce/rewrite $(n+1)!-1+(n+1)(n+1)!$ to $(n+2)!-1$.
I've asked my friends and an older student, but to no avail. I'm hoping you guys can help.
| First prove the statement for some base value, which you did (the induction basis).
Then the induction step: assume that the statement you want to prove holds for $K = n$. Then you have to establish that it also holds for $n+1$.
So we know that $\sum_{s=1}^{n} s \cdot s! = (n+1)! - 1$ (the induction hypothesis).
$\sum_{s=1}^{n+1} s \cdot s! = \sum_{s=1}^{n} s \cdot s! + (n+1) \cdot (n+1)! = (n+1)! - 1 + (n+1) \cdot (n+1)! = (n+1)! + (n+1) \cdot (n+1)! - 1 = (1 + (n+1)) \cdot (n+1)! - 1 = (n+2) \cdot (n+1)! - 1 = (n+2)! - 1 = ((n+1)+1)! - 1$
So then we know that the statement also holds for n+1. This concludes the proof.
So in proofs by induction you explicitly use the induction hypothesis for the induction step.
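If you want to convince yourself of the identity before proving it, a quick check for small $K$ (my own addition):

```python
from math import factorial

# Check sum_{s=1}^{K} s*s! == (K+1)! - 1 for the first few K.
for K in range(1, 10):
    lhs = sum(s * factorial(s) for s in range(1, K + 1))
    assert lhs == factorial(K + 1) - 1
print("identity holds for K = 1..9")
```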
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4030528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Rewriting a nonlinear transformation in multiple dimensions Let $x,y\in\mathbb{R}^n$, $n>0$. Suppose we have an arbitrary nonlinear mapping $T(x):\mathbb{R}^n\to\mathbb{R}^n$, that basically performs a nonlinear coordinate transformation for which we assume that $\frac{\partial T}{\partial x}$ has full rank for all $x$ (so it is a 'valid' transformation, so to speak). My question is; Does there always exists a $Q:\mathbb{R}^n\times \mathbb{R}^n\to\mathbb{R}^{n\times n}$, such that for all $x,y$
$$T(x)-T(y)=Q(x,y)(x-y).$$
And if not, what are the conditions on $T$, such that the above holds?
For $n=1$, it is quite easy, i.e. $Q(x,y)=\frac{T(x)-T(y)}{x-y}$, but for higher dimensions, this is a bit more tricky...
| Yes, if $T:\mathbb{U}\rightarrow R^n$ is continuously differentiable on $\mathbb{U}$ and $\mathbb{U}$ is open, it holds that
$$ T(x)-T(y)= \underbrace{\left(\int_0^1 \frac{\partial T}{\partial x}(y+t(x-y))\,d t\right)}_{Q(x,y)}(x-y)$$
for $x,y\in\mathbb{U}$, see Lemma 1 here.
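A quick numerical illustration of this formula (my own sketch; the smooth map $T$ below is an arbitrary example of mine, and the integral defining $Q$ is approximated by a simple midpoint rule):

```python
import numpy as np

# Hypothetical test map T(x) = (x0^2 + x1, sin(x1)); its Jacobian is
# [[2*x0, 1], [0, cos(x1)]].  Q(x, y) = integral over t in [0,1] of
# J(y + t*(x - y)) should satisfy T(x) - T(y) = Q(x, y) @ (x - y).

def T(p):
    return np.array([p[0] ** 2 + p[1], np.sin(p[1])])

def J(p):
    return np.array([[2 * p[0], 1.0], [0.0, np.cos(p[1])]])

def Q(x, y, steps=2000):
    ts = (np.arange(steps) + 0.5) / steps           # midpoints of [0, 1]
    return sum(J(y + t * (x - y)) for t in ts) / steps

x, y = np.array([1.3, -0.7]), np.array([-0.4, 2.1])
lhs = T(x) - T(y)
rhs = Q(x, y) @ (x - y)
print(np.max(np.abs(lhs - rhs)))   # close to zero (quadrature error only)
```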
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4030680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the dimension of the given vector space?
Let $V$ be the real vector space of all continuous functions $f:[0, 2] \to \mathbb{R}$ such that the restriction of $f$ to the interval $[0, 1]$ is a polynomial of degree less than or equal to $2$, the restriction of $f$ to the interval $[1, 2]$ is a polynomial of degree less than or equal to $3$ and $f(0) = 0$. Then the dimension of $V$ is equal to $\underline{\qquad}$.
(Original at https://i.stack.imgur.com/ep0s1.jpg)
So I took an entrance exam and this was one of the questions. I answered that the dimension is $2$: the constant term of each function is $0$, and there can't be an $x^3$ term in $V$, since the restriction to $[0,1]$ would then not be a polynomial of degree $\leq 2$. So I am just left with $x$ and $x^2$; hence the dimension of $V$ is $2$.
Is my reasoning and answer correct?
| Your answer is not correct because the polynomials on $[0,1]$ and $[1,2]$ are not supposed to be the same.
The dimension of the space of such functions, without the conditions of continuity and $f(0)=0$, is equal to $3+4=7$. The condition $f(0)=0$ and continuity at $x=1$ (the only point where the two polynomial pieces must agree) impose two linearly independent linear conditions. Finally, the dimension is equal to $5$.
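One way to double-check this count (my own sketch) is to set up the two polynomial pieces with unknown coefficients and compute the rank of the constraint system with SymPy:

```python
import sympy as sp

x = sp.symbols('x')
a0, a1, a2, b0, b1, b2, b3 = sp.symbols('a0 a1 a2 b0 b1 b2 b3')
params = [a0, a1, a2, b0, b1, b2, b3]            # 3 + 4 = 7 free coefficients
p = a0 + a1 * x + a2 * x ** 2                    # restriction to [0, 1]
q = b0 + b1 * x + b2 * x ** 2 + b3 * x ** 3      # restriction to [1, 2]

# f(0) = 0 and continuity at x = 1 are linear conditions on the coefficients:
constraints = [p.subs(x, 0), (p - q).subs(x, 1)]
M = sp.Matrix([[sp.diff(c, v) for v in params] for c in constraints])
print(len(params) - M.rank())    # dimension of V
```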
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4030865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Induction problem I need to prove using induction that
$$
\frac{1}{1\cdot 2}+\frac{1}{2\cdot 3} + \cdots +\frac{1}{n(n+1)} >\frac{9n-1}{10(n+1)} \ ,\ for \ \ n \in \mathbb{N}
$$
I know how to prove it without induction, but when I try using induction I get 0>0 which is false.
| I think your problem is that the RHS of your induction step should say only $$\frac{9(k+1) - 1}{10(k+2)},$$ because that is the substitution you get from the RHS of the original statement with $n=k+1$. What you have instead is the RHS with $n=k$ plus the next term of the series; but that term belongs to the LHS of the original statement.
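Incidentally, the inequality itself is easy to check exactly, since the LHS telescopes to $n/(n+1)$; here is a quick check with rational arithmetic (my own addition):

```python
from fractions import Fraction

# LHS telescopes: 1/(s(s+1)) = 1/s - 1/(s+1), so the LHS equals n/(n+1).
for n in range(1, 100):
    lhs = sum(Fraction(1, s * (s + 1)) for s in range(1, n + 1))
    assert lhs == Fraction(n, n + 1)
    assert lhs > Fraction(9 * n - 1, 10 * (n + 1))
print("inequality verified for n = 1..99")
```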
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4031009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Equilateral triangle $ABC$ with $P$ inside, $PA= x$, $PB=y$, $PC=z$ and $z^2 =x^2+y^2$. Find side length of $ABC$
$ABC$ is an equilateral triangle $ABC$ with $P$ inside it such that $PA= x$, $PB=y$, $PC=z$. If $z^2 =x^2+y^2$ , find the length of the sides of $ABC$ in terms of $x$ and $y$?
If $z^2=x^2+y^2$ then how can I find measures of angles around $P$ so that the sides can be expressed in terms of $x$ and $y$. I've tried everything I can think of.
|
Rotate $\triangle BCP$ counter-clockwise 60$^\circ$ around the point $B$ to $\triangle BAQ$ and connect $PQ$. Then, $BPQ$ is an equilateral triangle and $APQ$ is a right triangle due to $x^2+y^2=z^2$. Apply the cosine rule to $\triangle BPA$ to obtain the side $s$
\begin{align}
s^2 & = AP^2+BP^2 - 2AP\cdot BP\cos\angle APB \\
&= x^2 + y^2 - 2x y \cos 150^\circ\\
&= x^2 + y^2 +\sqrt3x y
\end{align}
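A numerical sanity check of the result (my own sketch, not part of the answer): pick $x$ and $y$, set $s^2 = x^2+y^2+\sqrt3\,xy$, recover $P$ from its distances to $A$ and $B$, and confirm that $PC^2 = x^2+y^2$:

```python
import math

def check(x, y):
    # s given by the derived formula
    s = math.sqrt(x * x + y * y + math.sqrt(3) * x * y)
    C = (s / 2, s * math.sqrt(3) / 2)        # A = (0,0), B = (s,0)
    # P lies on circles of radius x about A and radius y about B:
    px = (x * x - y * y + s * s) / (2 * s)
    py = math.sqrt(x * x - px * px)          # take P on the same side as C
    z2 = (px - C[0]) ** 2 + (py - C[1]) ** 2
    return abs(z2 - (x * x + y * y)) < 1e-9

print(check(1.0, 1.0), check(0.7, 2.3), check(3.0, 0.5))
```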
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4031192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Does the following necessarily converge to a normal random variable in distribution? Suppose $X_1, \dots, X_n$ are i.i.d. random variables with $E(X_1) = 0$ and $\operatorname{Var}(X_1) = 1$. Let
$$
S_n = \frac{1}{n}\sum_{i=1}^n\sqrt{i}X_i.
$$
Does $S_n$ converge to a normal random variable?
Originally, I attempted to use the Lindeberg CLT to prove this. However, I ran into a wall because I can't figure out a way to check the Lindeberg condition: for all $\varepsilon > 0$,
$$
\sum_{i=1}^nE(|Y_i|^2\mathbf{1}(|Y_i| > \varepsilon)) \to 0
$$
where $Y_i = \sqrt{i}X_i/s_n$ and $s_n^2 = \sum_{i=1}^ni\operatorname{Var}(X_i) = \frac{n(n+1)}{2}$. If I can prove this, then I can use Slutsky, then we are done. But I have no idea what $X_i$ actually is so I don't know how to verify the condition.
Then I tried using characteristic functions and try to do expansion and approximation. However, I also hit a wall due to the changing index of $i$.
I also tried finding counterexample, but nothing came up.
Can anyone provide some hint? Thank you!
| For $c_n=\sqrt n$, observe that $\frac{\max_{1\le k\le n}c_k^2}{\sum_{k=1}^n c_k^2}=\frac{n}{n(n+1)/2}=\frac{2}{n+1}\to 0$ as $n\to \infty$.
As $X_i$'s are i.i.d, by Hajek-Sidak's CLT,
$$\frac{\sum_{k=1}^n c_k X_k}{\sqrt{\sum_{k=1}^n c_k^2}}=\sqrt{\frac{2}{n(n+1)}}\sum_{k=1}^n \sqrt k X_k \stackrel{d}\longrightarrow N(0,1)$$
That is, $$\sqrt{\frac{2n}{(n+1)}}S_n\stackrel{d}\longrightarrow N(0,1)$$
Hajek-Sidak's CLT can be shown using Lyapounov's condition (which implies Lindeberg's condition) under the additional assumption $E|X_1|^3<\infty$. But I am not aware of the general proof.
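A quick Monte Carlo sanity check of the scaled statistic (my own sketch, using uniform $X_i$ with mean $0$ and variance $1$; the sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 4000
# uniform on [-sqrt(3), sqrt(3)] has mean 0 and variance 1
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(reps, n))
S = (X * np.sqrt(np.arange(1, n + 1))).sum(axis=1) / n
T = np.sqrt(2 * n / (n + 1)) * S        # scaling from Hajek-Sidak's CLT
print(T.mean(), T.var())                # should be close to 0 and 1
```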
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4031330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Modular Arithmetic Inverse Proof Let $m, x$ be positive integers such that $\gcd(m, x) = 1$. Then $x$ has a multiplicative inverse modulo $m$, and it is unique (modulo $m$).
Proof: Consider the sequence of $m$ numbers $0, x, 2x, \dots, (m−1)x$. We claim that these are all distinct modulo $m$. Since there are only $m$ distinct values modulo $m$, it must then be the case that $ax \equiv 1 \pmod m$ for exactly one $a$ (modulo $m$). This $a$ is the unique multiplicative inverse of $x$.
To verify the above claim, suppose for contradiction that $ax \equiv bx \pmod m$ for two distinct values $a,b$ in the range $0 \leq b \leq a \leq m−1$. Then we would have $(a−b)x \equiv 0 \pmod m$, or equivalently, $(a−b)x = km$ for some integer $k$ (possibly zero or negative).
However, $x$ and $m$ are relatively prime, so $x$ cannot share any factors with $m$. This implies that $a−b$ must be an integer multiple of $m$. This is not possible, since $a−b$ ranges between $1$ and $m−1$.
I understand the contradiction and how this proves that the $m$ numbers in the sequence are all distinct mod $m$; however, I am unsure how, if this is the case, it implies that $ax \equiv 1 \pmod m$ for exactly one $a \pmod m$.
| "However, I am unsure how if this is the case, then it implies that $ax\equiv 1 \pmod m$ for exactly one $a \pmod m$."
It's in the proof: the $m$ values $0,x,2x,\ldots,(m-1)x$ are all distinct modulo $m$, so they hit each of the $m$ residues exactly once. In particular, the residue $1$ is hit exactly once; that is, $ax\equiv 1\pmod m$ for exactly one $a$.
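A brute-force check of both existence and uniqueness (my own addition), scanning all residues for a small modulus:

```python
from math import gcd

# For each unit x mod m, the multiples 0, x, 2x, ..., (m-1)x hit every
# residue exactly once, so exactly one a satisfies a*x == 1 (mod m).
def inverses(x, m):
    return [a for a in range(m) if (a * x) % m == 1]

m = 35
for x in range(1, m):
    if gcd(x, m) == 1:
        assert len(inverses(x, m)) == 1
        assert len({(a * x) % m for a in range(m)}) == m   # all distinct
    else:
        assert inverses(x, m) == []
print("checked all x mod", m)
```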
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4031490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Analytic formula for integral $I_p(\theta) := \int_0^{2\pi}\cos(t)^p\cos(t-\theta)^pdt$ Let $p$ be a nonnegative integer and $\theta \in [0, \pi]$.
Question. What is an analytic formula for the integral $I_p(\theta) := \int_0^{2\pi}\cos(t)^p\cos(t-\theta)^pdt$ ?
Note. My ultimate goal is to compute $I_p(1)$, $I_p(0)$, $I_p'(0)$, and $I_p''(0)$.
My guess is everything can be succinctly expressed in terms of special functions (gamma, beta, etc.).
| Amazingly, I found this kind of integral in my old (65-year-old!) cookbook.
The idea is to write
$$\cos (t) \cos (t-\theta )=\frac{1}{2} (\cos (\theta )+\cos (2 t-\theta ))$$ and to use the binomial expansion when you raise it to power $p$.
$$I_p(\theta)= \int_0^{2\pi}\big[\cos(t)\,\cos(t-\theta)\big]^p\,dt= 4^{1-p}\,\pi \,J_p(\theta)$$
Now, the first results
$$\left(
\begin{array}{cc}
p & J_p(\theta) \\
1 & \cos (\theta ) \\
2 & 2+\cos (2 \theta ) \\
3 & 9 \cos (\theta )+\cos (3 \theta ) \\
4 & 18+16 \cos (2 \theta )+\cos (4 \theta ) \\
5 & 100 \cos (\theta )+25 \cos (3 \theta )+\cos (5 \theta ) \\
6 & 200+225 \cos (2 \theta )+36 \cos (4 \theta )+\cos (6 \theta )\\
7 & 1225 \cos (\theta )+441 \cos (3 \theta )+49 \cos (5 \theta )+\cos (7 \theta )
\\
8 &2450+ 3136 \cos (2 \theta )+784 \cos (4 \theta )+64 \cos (6 \theta )+\cos (8 \theta
) \\
9 & 15876 \cos (\theta )+7056 \cos (3 \theta )+1296 \cos (5 \theta )+81 \cos (7
\theta )+\cos (9 \theta )
\end{array}
\right)$$ The patterns (for odd and even values of $p$) seem to be quite clear.
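One can verify a row of this table numerically (my own sketch); here $p=3$, comparing a quadrature value of $I_3(\theta)$ with $4^{-2}\pi\,(9\cos\theta+\cos 3\theta)$:

```python
import math

def I(p, theta, steps=20000):
    # midpoint rule on [0, 2*pi]; very accurate for smooth periodic integrands
    h = 2 * math.pi / steps
    return h * sum((math.cos((k + 0.5) * h) * math.cos((k + 0.5) * h - theta)) ** p
                   for k in range(steps))

for theta in (0.0, 0.7, 2.0):
    lhs = I(3, theta)
    rhs = 4 ** (1 - 3) * math.pi * (9 * math.cos(theta) + math.cos(3 * theta))
    assert abs(lhs - rhs) < 1e-9
print("row p = 3 of the table verified")
```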
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4031684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Meaning of the derivative of a function with respect to another function In the chain rule we use $\frac{dy}{dx}=\frac{dy}{dz} \times \frac{dz}{dx}$. My question is what is meant by $\frac{dy}{dz}$. When we take a derivative, we take it with respect to an independent variable, meaning: if $x$ changes by some amount, how much does $f(x)$ change? But "how much $g(f(x))$ changes for a change in $f(x)$" doesn't seem to make sense to me. Also, derivatives are all about slopes of a graph. Now how would we draw the graph of $g(f(x))$ vs $f(x)$, since we can't place $f(x)$ on the real axis, as it need not achieve all real values, being a dependent variable? So could someone please shed some light on what we are actually doing?
| Consider this example: let's say we have a variable $V$ representing volume, and another variable $T$ representing temperature. Now the rate of change of volume with respect to temperature is given by $\frac{dV}{dT}$. If we further suppose that $T$ depends on another variable $t$ representing time, then the rate of change of volume with respect to time can be found by $$\frac{dV}{dt}=\frac{dV}{dT}\cdot \frac{dT}{dt}$$
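To see symbolically that the two routes agree, here is a small SymPy check (my own sketch; the choices $V(T)=T^3$ and $T(t)=e^t+2$ are arbitrary examples, not from the answer):

```python
import sympy as sp

t, TT = sp.symbols('t T')
T_of_t = sp.exp(t) + 2                    # hypothetical temperature vs. time

# direct differentiation of the composite V(T(t)) with V(T) = T^3
direct = sp.diff(T_of_t ** 3, t)
# chain rule: (dV/dT) evaluated at T(t), times dT/dt
chain = sp.diff(TT ** 3, TT).subs(TT, T_of_t) * sp.diff(T_of_t, t)

print(sp.simplify(direct - chain))        # 0: the two computations agree
```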
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4031816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
$\lim_{n\to\infty} \frac{2^n(n^4-1)}{4\cdot 3^n + n^7}$ How do I solve this example? I tried to single out the fastest-growing terms $2^n$ and $3^n$, but that doesn't seem to lead to the result. I know the limit is $0$, that's obvious, but I don't know how to work it out.
$$\lim_{n\to\infty} \frac{2^n(n^4-1)}{4\cdot 3^n + n^7}$$
Or can I make an estimate and use the squeeze theorem, for example noting that $2^n/13^n$ (smaller) goes to zero and, on the other side, $2^n/3^n$ (bigger) also goes to zero?
$$\lim_{n\to\infty} \frac{2^n}{13^n} < \lim_{n\to\infty} \frac{2^n(n^4-1)}{4\cdot 3^n + n^7} < \lim_{n\to\infty} \frac{2^n}{3^n}$$
| It follows from the ratio test.
If $\displaystyle\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = a < 1 \ $, then $\lim a_n = 0$.
$$ \dfrac{\frac{2^{n+1} \left((n+1)^4-1\right)}{4\cdot3^{n+1} + (n+1)^7}}{\frac{2^n(n^4-1)}{4\cdot3^n + n^7}} = \dfrac{2^{n+1} \left((n+1)^4-1\right)(4\cdot3^n + n^7)}{2^n(n^4-1)\left(4\cdot3^{n+1} + (n+1)^7\right)} \\ =\frac{2}{3} \left(\frac{n+1}{n}\right)^4 \left(\dfrac{1-\frac{1}{(n+1)^4}}{1-\frac{1}{n^4}}\right) \left(\dfrac{1+\frac{n^7}{3^n}}{1+\frac{(n+1)^7}{3^{n+1}}} \right) \longrightarrow \frac{2}{3} < 1 $$
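A quick numerical check (my own addition) that the ratio indeed settles near $2/3$ and the terms themselves go to $0$:

```python
def a(n):
    return 2 ** n * (n ** 4 - 1) / (4 * 3 ** n + n ** 7)

# The ratio a(n+1)/a(n) approaches 2/3, and a(n) itself approaches 0.
for n in (10, 100, 500):
    print(n, a(n + 1) / a(n), a(n))
```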
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4031932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Prove that $ \sum \frac{2^k}{k}$ is divisible by $2^M$ For each integer $M > 0$, there ${\bf exists}$ an $n$ such that
$$ \sum_{k=1}^n \dfrac{ 2^k}{k} $$
is divisible by $2^M$
${\bf Try:}$ I'm struggling a bit to visualize this exercise, so I tried concrete numbers. For instance, take $M=1$; then $n=2$ works, as
$$ 2 + \dfrac{2^2}{2} = 2 + 2 = 2^2 .$$
Now, take $M=2$ and factor
$$ 2^{2} \underbrace{ \left( \dfrac{1}{2} + \dfrac{1}{2} + \dfrac{2}{3} + \dfrac{2^2}{4} + \cdots + \dfrac{2^{n-2} }{n} \right) }_{(*)}$$
Now we need to choose $n$ so that $(*)$ is an integer. I'm unable to do so. Any help?
| Right so the "elementary answer." There are actually a couple that I've seen. One is cited by Gouvêa in his p-adic numbers book (Gouvêa, like me, also said the problem was too difficult to work out the answer from scratch).
The solution Gouvêa cites is Exercise 10.10 of D. P. Parent Exercises in Number Theory (1984).
The rough idea there (assuming I'm summarizing correctly) is
$$ 0 = \frac{1 - (1 - 2)^{2^n}}{2^n} = \sum_{k = 1}^{2^n} (-1)^{k+1}2^k \frac{1}{2^n}\binom{2^n}{k} \equiv \sum_{k=1}^{2^n} \frac{2^k}{k} \pmod{2^h} $$
for $n \ge h$. Plus there's some other work to compare this with the partial sums that are not powers of $2$.
The next two solutions I found cited by the OEIS (https://oeis.org/A087910). Namely the two solutions given for the 2002 Sydney University Mathematical Society Problems Competitions Problem 9.
Solution 1 Summary
If $n$ is even then
$$ 1 = (-1)^n = (1 - 2)^{n} = \sum_{k = 0}^n \binom{n}{k}(-2)^k. \tag{1} $$
Subtract 1 and divide by $n$ to get
$$ 0 = \sum_{k = 1}^n \frac1n \binom{n}{k}(-2)^k. $$
Then
$$ \frac1n \binom{n}{k} = \frac{(n-1)(n-2)\cdots(n-k+1)}{k!} = \frac{(-1)^{k-1}}{k} + n\frac{m_{n,k}}{k!} \tag{2}$$
for some integer $m_{n,k}$ (separate the term $(-1)(-2)\cdots(-k+1)$ from all the terms divisible by $n$).
By $(1)$ and $(2)$,
$$ \sum_{k = 1}^n \frac{2^k}{k} = n \sum_{k = 1}^n \frac{(-2)^km_{n,k}}{k!}$$
Then you use the well-known fact that $v_2(k!) < k$. Thus
$$ v_2\left( \sum_{k = 1}^n \frac{2^k}{k} \right) = v_2(n) + v_2\left( \sum_{k = 1}^n \frac{(-2)^km_{n,k}}{k!} \right) \ge v_2(n). $$
Now we can see that when $n = 2^k$ this tends to infinity.
Solution 2 Summary
First show that
$$ \sum_{k = 1}^n \frac{2^k}{k} = \frac{2^n}{n} \sum_{k = 0}^{n-1} \frac{1}{\binom{n-1}{k}}. $$
Next, use the well-known formula $$v_2(n!) = \sum_{i = 0}^\infty \left\lfloor \frac n{2^i} \right\rfloor$$ to get
$$ v_2\left( \binom{n}{k} \right) = v_2(n!) - v_2(k!) - v_2((n - k)!) = \sum_{i = 0}^\infty \left\lfloor \frac n{2^i} \right\rfloor - \left\lfloor \frac k{2^i} \right\rfloor - \left\lfloor \frac {n-k}{2^i} \right\rfloor. $$
Then by some analysis,
$$v_2\left( \binom{n-1}{k} \right) \le r \text{ if } 2^r + 1 \le n \le 2^{r + 1}. $$
So if $n$ is even and $2^r + 1 \le n \le 2^{r + 1}$ then
$$ v_2\left( \sum_{k = 1}^n \frac{2^k}{k} \right) \ge n - r. $$
And the result follows.
You'll have to see the solutions I linked to if you want all the details, they wouldn't fit in one answer. I hope this gives you some appreciation for the $2$-adic logarithm approach.
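Both cited solutions predict that the $2$-adic valuation of the partial sums blows up along $n = 2^r$; a quick empirical check of this (my own addition), using exact rational arithmetic:

```python
from fractions import Fraction

def v2(q):
    # 2-adic valuation of a nonzero rational number
    num, den, v = q.numerator, q.denominator, 0
    while num % 2 == 0:
        num //= 2; v += 1
    while den % 2 == 0:
        den //= 2; v -= 1
    return v

S, vals = Fraction(0), {}
for n in range(1, 65):
    S += Fraction(2 ** n, n)
    if n & (n - 1) == 0:            # record the valuation at n = 1, 2, 4, ..., 64
        vals[n] = v2(S)
print(vals)                          # valuations grow along powers of two
```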
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4032071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
} |
$(a_n)_{n=1}^\infty$ & $(b_n)_{n=1}^\infty$ are seq st $(a_n)_{n=1}^\infty$ & $[{(a_n)_{n=1}^\infty + (b_n)_{n=1}^\infty}]$ con. Prove $(b_n)$ con Suppose $(a_n)_{n=1}^\infty$ and $(b_n)_{n=1}^\infty$ are sequences such that $(a_n)_{n=1}^\infty$ and $(a_n + b_n)_{n=1}^\infty$ converge. Prove that $(b_n)_{n=1}^\infty$ converges.
I can say $b_n=(a_n + b_n)-a_n$. Since both $(a_n)_{n=1}^\infty$ and $(a_n + b_n)_{n=1}^\infty$ converge, isn't there a subtraction rule that says that, because both converge, $(b_n)$ also converges?
| Hint:
Given $\epsilon>0$, assume
$$\lim_{n\to+\infty}a_n=A$$
and
$$\lim_{n\to+\infty}(a_n+b_n)=C$$
then
$$|b_n-(C-A)|=$$
$$|(a_n+b_n)-C-(a_n-A)|$$
$$\le |(a_n+b_n)-C|+|a_n-A|$$
$$<\frac{\epsilon}{2}+\frac{\epsilon}{2}$$
for $ n $ large enough.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4032222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Understanding basic concept of prime numbers My textbook provides a theorem but I cannot understand the structure of the sentence being used. Could someone please help me understand the meaning of this theorem?
A natural number $n>1$ is prime if and only if for all primes $p\leq \sqrt{n}$, $p$ does not divide $n$.
"for all primes $p\leq \sqrt{n}$, $p$ does not divide $n$". This is the part I don't understand. The premise is that $p$ is less than the square root of $n$ then the conclusion is that $p$ does not divide $n$. So for $p$ to divide $n$ it must be greater than the square root of $n$? I'm really confused as to what information and meaning I'm supposed to see in this theorem.
| It says that, in order to see if $n$ is prime, we don't have to test divide $n$ by every prime less than $n$. It's enough to check all primes $\leq\sqrt n$. For example, if we want to know if $101$ is prime, it's enough to check that it's not divisible by $2,3,5,\text{ or }7$, because the next prime, $11$ is $>\sqrt{101}$.
This is because if $n=ab$, it's impossible that both $a>\sqrt n$ and $b>\sqrt n$.
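The theorem translates directly into the usual trial-division primality test; a minimal sketch (my own addition, testing every integer up to $\sqrt n$, although by the theorem restricting to primes would suffice):

```python
from math import isqrt

def is_prime(n):
    if n < 2:
        return False
    r = isqrt(n)                 # only need to test divisors up to sqrt(n)
    for p in range(2, r + 1):
        if n % p == 0:
            return False
    return True

# 101 is prime: it is not divisible by 2, 3, 5 or 7, and the next prime 11
# already exceeds sqrt(101).
print(is_prime(101), is_prime(91))    # True False  (91 = 7 * 13)
```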
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4032353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Finding Modulus and Argument for Complex Number $z$ Assume the complex number $z$ has argument $\pi$ and modulus $\sqrt{3}$. How would I find the argument and modulus of $-2iz$? I already know the correct answer (it's from an old test), but I don't know how it's calculated. I know how to find a modulus and argument in general, but I'm not sure about this exact problem. Any help is greatly appreciated; please tell me if I should provide more info.
| Since we have the modulus and argument, we know that $z=-\sqrt3$, hence we get $-2iz=2\sqrt3i$. The argument and modulus for that complex number are $\pi/2$ and $2\sqrt3$.
For explanation, the key is Euler's formula: $e^{i\theta}=\cos\theta+i\sin\theta$. This allows us to identify every point $z=x+iy$ in the complex plane with a pair $(r,\theta)$, via $z=re^{i\theta}$.
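You can confirm this with a couple of lines of Python (my own addition), building $z$ from its modulus and argument:

```python
import cmath, math

z = cmath.rect(math.sqrt(3), math.pi)   # modulus sqrt(3), argument pi, so z = -sqrt(3)
w = -2j * z
print(abs(w), cmath.phase(w))           # approximately 2*sqrt(3) and pi/2
```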
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4032485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is the formula for the nth number in this series. $1,3,7,9,11,13,17,19,21\dots$
Basically all the numbers that end in the digits $1,3,7,9$
I am working on a formula for approximating how many factors I have to test to find if a large number is prime. So for example to test the number $229,597$. How many possible factors will I have to check?
So the convention is to take the square root of $229,597$ which is approx $479$. Then I take $(\frac{479}{10}) \times (4) - 1$. I do this because out of every ten numbers there are 4 numbers that end in $1,3,7,9$. I subtract the 1 because prime numbers also have 1 as a factor.
So $(\frac{479}{10}) \times (4) - 1 \approx 190$ factors to check to see if $229{,}597$ is prime.
Then I look at the series $1,3,7,9,11,13,17\dots$ to start checking for possible factors. But how do I find the nth number in this series?
| One possible formula is $$f(n) = \frac{1}{4}\left(5(2n-1) + (-1)^{n+1} + 2 \cos \frac{n \pi}{2} - 2 \sin \frac{n \pi}{2}\right), \quad n = 1, 2, \ldots.$$ This corresponds to the recurrence relation
$$\begin{align}
f(n) &= f(n-1) + f(n-4) - f(n-5), \\
f(1) &= 1, \\
f(2) &= 3, \\
f(3) &= 7, \\
f(4) &= 9, \\
f(5) &= 11.
\end{align}$$
I don't know why you would need it, though.
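For what it's worth, the closed form agrees with the direct construction "$10\lfloor (n-1)/4\rfloor$ plus the appropriate last digit"; a quick check (my own addition):

```python
import math

def f(n):
    # the answer's trigonometric closed form, rounded to kill float fuzz
    return round((5 * (2 * n - 1) + (-1) ** (n + 1)
                  + 2 * math.cos(n * math.pi / 2)
                  - 2 * math.sin(n * math.pi / 2)) / 4)

def g(n):
    # direct construction: n-th positive integer ending in 1, 3, 7 or 9
    return 10 * ((n - 1) // 4) + (1, 3, 7, 9)[(n - 1) % 4]

print([f(n) for n in range(1, 10)])     # [1, 3, 7, 9, 11, 13, 17, 19, 21]
assert all(f(n) == g(n) for n in range(1, 1000))
```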
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4032629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Show that if either $X$ or $Y$ is disconnected then $X \times Y$ is disconnected as well Here is the statement: Let $X$ and $Y$ be two non-empty metric spaces. Show that if either $X$ or $Y$ is disconnected, then the Cartesian product $X\times Y$ is also disconnected
My idea(s): The placement of this exercise in my book is around a section where the connection between connectedness and continuous functions is established, leading me to believe that one could prove the claim by constructing some function. However, I think that it is also possible to prove it by using the definition of disconnectedness and choosing an appropriate metric for the product space.
Namely, let $X$ be disconnected and $Z = X \times Y$. Then there exist two non-empty open subsets $S_1, S_2 \subset X$ such that $S_1 \cup S_2 = X$ and $S_1 \cap S_2 = \varnothing$. Since every $z \in Z$ has the form $z = (x, y)$ with $x \in X = S_1 \cup S_2$ and $y \in Y$, it follows that $\left(S_1 \times Y\right) \cup \left(S_2 \times Y\right) = Z$. Also $\left(S_1 \times Y\right) \cap \left(S_2 \times Y\right) = \varnothing$, as $S_1$ and $S_2$ are disjoint; moreover, $\left(S_1 \times Y\right)$ and $\left(S_2 \times Y\right)$ are non-empty, since $S_1$, $S_2$ and $Y$ are.
Now, if we can show that $\left(S_1 \times Y\right)$ and $\left(S_2 \times Y\right)$ are both open w.r.t. the metric in $Z$, we are done. Thus for $(a, b), (c, d) \in Z$, let $d_{Z}((a, b), (c, d)) = d_{X}(a, c)$. Since $S_1$ and $S_2$ are open w.r.t. $d_{X}$, and the metric does not depend on $Y$, it follows that $\left(S_1 \times Y\right)$ and $\left(S_2 \times Y\right)$ are also open w.r.t. $d_{Z}$. Hence there exist two non-empty, disjoint, open subsets of $Z$ whose union is $Z$. Thus $Z$ is disconnected.
Edit: It is a shame that I cannot accept multiple answers, since so far all answers given here have been brilliant!
| Your proof is almost correct, except for the justification of openness of $(S_1\times Y)$-- the space $X\times Y$ comes equipped with a very specific topology (the product topology) and your task is to show $(S_1\times Y)$ is open in that particular topology. (Otherwise, you could just say "let $d_Z$ be the discrete metric, then every set is open hence we are done." There is an obvious problem with the argument.)
The topology on $X\times Y$ can be defined in two ways:
*
*Either by the definition of product topology, where the open sets in $X\times Y$ are generated by (i.e. unions of) the sets in $\{U\times V\;|\; U\overset{\text{open}}\subset X, V\overset{\text{open}}\subset Y\}$. Using this definition, it is immediate that $S_1\times Y$ is open.
*Or by using one of the product metrics:
*
*$d_{1}((a,b),(c,d))=d_X(a,c) + d_Y(b,d)$
*$d_{2}((a,b),(c,d))=\sqrt{(d_X(a,c))^2 + (d_Y(b,d))^2}$
*$d_{\infty}((a,b),(c,d))=\max\{d_X(a,c),d_Y(b,d)\}$
It is a non-trivial fact that all the metrics above induce exactly the product topology on $X\times Y$ so it does not matter which one you choose to prove the openness of $S_1\times Y$-- using $d_\infty$ should be the easiest.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4032737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Pathological vector fields Is it true that any smooth curve is an integral curve for some smooth vector field?
More concretely, consider the curve $\gamma:\mathbb{R}\rightarrow \mathbb{R}^2$ defined by $\gamma(t)=(t,t^3\sin(1/t))$ for $t\neq 0$ and by $\gamma(0)=(0,0)$. It is a $C^1$ curve, so I was wondering if there could exist a $C^1$ vector field $v=v(x,y,t)$ such that $\gamma$ is an integral curve.
Of course I'm interested in the somewhat pathological behaviour near $(0,0)$; more generally, I was wondering whether this kind of oscillation phenomenon can happen with integral curves of vector fields (also in the $C^k$ case, $k\in \mathbb{N}\cup\{\infty\}$).
Thank you for reading.
| I was thinking about this, and actually it seems that it is true, at least locally.
It is actually the same construction as for the canonical form of flows near non-singular points (of an autonomous vector field).
Say that you have a smooth curve $t\rightarrow \gamma(t)\in \mathbb{R}^2$ for $t\in I=(-\epsilon, \epsilon)$ with $\gamma(0)=(0,0)$ and $\gamma'(0)=(1,0)$. Define the function $F:I\times\mathbb{R}\rightarrow \mathbb{R}^2$ by $F(x,y)=\gamma(x)+(0,y)$. We have $F(x,0)=\gamma(x)$ and $DF(0,0)=I_2$. So $F$ is a diffeomorphism between two open sets $U\subseteq I\times\mathbb{R}$ and $V\subseteq\mathbb{R}^2$, both containing $(0,0)$. Now if we take the constant vector field $v(x,y)=(1,0)$ on $U$, we note that the curve $\alpha(t)=(t,0)$ is an integral curve for $v$, so $F\circ\alpha$ is an integral curve for the push-forward vector field $F_*v$ on $V$; but $F\circ\alpha=\gamma$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4032920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Confirming solution to the series $\sum\frac{n(n+1)}{(n+3)^3}$ with the ratio test I want to confirm that my solution to this series using the ratio test is correct; I used the test to try to show the series is divergent.
$$\frac{n(n+1)}{(n+3)^3}$$
Using the ratio test, then simplifying in stages:
$$\frac{(n+1)(n+1+1)}{(n+1+3)^3}\frac{(n+3)^3}{n(n+1)}$$
$$\frac{(n+1)(n+2)(n+3)^3}{(n+4)^3n(n+1)}$$
$$\frac{(n+2)(n+3)^3}{(n+4)^2(n+4)n}$$
$$\frac{(n+3)^3}{(n+4)^22n}$$
The concluding remark:
$$\frac{1}{2}\lim_{n \to \infty}\frac{(n+3)^3}{(n+4)^2n}$$
Hence the series is divergent, unless I went wrong somewhere?
| Without using asymptotic equivalence, an easy way is
\begin{equation}
\sum_{n=1}^\infty\left(\dfrac{n}{n+3}\times\dfrac{n+1}{n+3}\times\dfrac{1}{n+3}\right)\ge\sum_{n=1}^\infty\left(\dfrac14\times\dfrac24\times\dfrac{1}{n+3}\right)=\infty
\end{equation}
Another alternative solution is
\begin{equation}
\sum_{n=1}^\infty\dfrac{n(n+1)}{(n+3)^3}\ge\sum_{n=1}^\infty\dfrac{n^2}{(n+3n)^3}=\infty
\end{equation}
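For what it's worth, the termwise bound behind the second comparison is easy to check by machine; this is just a Python sketch (the helper name `term` is mine):

```python
# The second comparison works termwise: n(n+1)/(n+3)^3 >= n^2/(4n)^3
# = 1/(64n) for n >= 1, because n+1 >= n and n+3 <= 4n.
def term(n):
    return n * (n + 1) / (n + 3) ** 3

assert all(term(n) >= 1 / (64 * n) for n in range(1, 10_001))

# The partial sums therefore dominate a multiple of a harmonic tail and
# keep growing like log n.
checkpoints, partial = {}, 0.0
for n in range(1, 100_001):
    partial += term(n)
    if n in (10, 100, 1_000, 10_000, 100_000):
        checkpoints[n] = partial
print(checkpoints)
```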
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4033074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
MSC of Point Set Topology (Mathematical Subject Classification) To which lower-level class does Point Set Topology belong?
[1]: https://zbmath.org/static/msc2020.pdf
Is it part of MSC03E-Set-theory, or part of MSC-54-General-topology?
Do you have a collection relating keywords to MSC classes?
It helps to see the connections.
Thanks in advance.
| It depends on what you are doing. I assume by lower level you really mean high level, or general, or 2-digit class. In that case, 54 is general topology, 26 is real functions, 03 is mathematical logic and foundations. "Point-set topology" most likely refers to the stuff in 54, or to the theory of Baire functions, as in 26A21, or to descriptive set theory, as in 03E15. That said, the latter two are also considered in 54 (perhaps with different emphasis).
MathSciNet allows you to search for reviews containing specific words or sentences. If you have access to the database, this may help you identify the appropriate classes you are interested in.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4033248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving equations involving two sets Out of curiosity, how would one go about solving an equation involving two sets? For example,
$$ \{1, 2, 3\} = \{a + b + c, a + b - c, a - b + c, a - b - c\} $$
An intuitive solution to this is $ \{a = 2, b = 0.5, c = 0.5\} $, but is there a specific process?
| Notice the set on the left has three distinct elements, but the set on the right has four representations, so two of those representations are of the same number.
So we have six options.
$a+b+c = a+b-c$ and $c = 0$ and we have the values $a+b$ and $a-b$. But that's only two different values (or fewer) and we have exactly $3$ so that's impossible.
$a+b + c = a-b+c$ and $b =0$ and have the values $a+c$ and $a-c$ and that's the same problem.
$a+b+c = a-b-c$ and $b+c= 0$ and $b=-c$ and we have the values $a, a+2b, a-2b$. We'll get back to this.
$a+b-c = a-b + c$ and $b-c=0$ and $b=c$ and we have the values $a+2b; a; a-2b$. That will have the same solutions as above.
$a+b-c=a-b-c$ and $b=0$ and we have the same problem we had earlier.
Or $a-b+c = a-b-c$ and $c=0$ and we have the same problem from the very beginning.
So either way we have $a,a+2b, a-2b = 1,2,3$.
$a\pm 2b, a , a\pm 2b$ are in arithmetic progression so we must have either $a-2b < a < a+2b$ and $a-2b = 1, a=2, a+2b=3$ or $a+2b < a < a-2b$ and $a+2b = 1, a=2, a-2b =3$
So we must have $a=2$ and $b =\pm 0.5$ and $c = \pm 0.5$ (so there are four sets of solutions.)
There isn't really any way to do this in general.
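That said, for a concrete instance like this one, a brute-force search over a small grid of candidates is easy; here is a Python sketch (restricting to half-integer candidates is my assumption, justified by the case analysis above):

```python
from fractions import Fraction
from itertools import product

target = {Fraction(1), Fraction(2), Fraction(3)}

# Search a small grid of half-integer candidates for (a, b, c).
halves = [Fraction(k, 2) for k in range(-8, 9)]
solutions = []
for a, b, c in product(halves, repeat=3):
    if {a + b + c, a + b - c, a - b + c, a - b - c} == target:
        solutions.append((a, b, c))

# The case analysis above predicts a = 2 and b, c = ±1/2.
assert len(solutions) == 4
assert all(a == 2 and abs(b) == abs(c) == Fraction(1, 2)
           for a, b, c in solutions)
```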
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4033396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Intersection of two balls in n-dimension I've been thinking about how to prove this statement (mathematically) but can't seem to figure it out.
the definition of a ball is $B(x,r):=\{y\in\mathbb{R}^n : \|x-y\|\leq r\}$, where $x$ is the center of the ball and $r$ is the radius.
Now define
$B_\mathbb{R^{m_1+m_2}}((x_1x_2),r)$
a ball in $\mathbb{R^{m_1+m_2}}~~~$ where $x_1 =(x^{(1)},x^{(2)},\dots,x^{(m_1)})$ and $x_2 =(x^{(m_1+1)},x^{(m_1+2)},\dots,x^{(m_1+m_2)})$
and
$B_\mathbb{R^{m_1}}(x_1,r)\times B_\mathbb{R^{m_2}}(x_2,r)$
the cartesian product of two balls in $\mathbb{R^{m_1}}$ and $\mathbb{R^{m_2}}$
prove that $B_\mathbb{R^{m_1+m_2}}((x_1,x_2),r)\subset B_\mathbb{R^{m_1}}(x_1,r) \times B_\mathbb{R^{m_2}}(x_2,r)$
in the one dimensional case a ball is just an interval, so the cartesian product of two intervals is a square, and it's easy to see geometrically that a disk centered at $(x_1,x_2)$ with radius $r$ is contained in a square with sides of length $2r$. One idea I have is to choose an arbitrary point $z \in B_\mathbb{R^{m_1+m_2}}((x_1,x_2),r)$ and show that it is contained in the cartesian product of the two balls.
the first step I think would be to center the balls at the origin to make calculation simpler.
defining $B_\mathbb{R^{m_1}}(0,r)\times B_\mathbb{R^{m_2}}(0,r)$ in n dimensions.
would it be $B_\mathbb{R^{m_1}}(0,r)\times B_\mathbb{R^{m_2}}(0,r)=\{(x_1,x_2):\max\{\|x_1\|,\|x_2\|\}\leq r\}$ (I think)? Where do I go next?
| Consider $\|\cdot\|$ the maximum norm.
If $x \in B_{\Bbb{R}^{m_1 + m_2}}((x_1,x_2);r)$, then
$$\|(x_1,x_2) - x\| \leq r.$$
Making the identification $(v_1,...,v_{m_1},\underbrace{0,...,0}_{m_2\text{ times}})$ with $(v_1,...,v_{m_1}) \in \Bbb{R}^{m_1}$, you can write $x = (y_1,y_2)$ where $y_1 \in \Bbb{R}^{m_1}$ and $y_2 \in \Bbb{R}^{m_2}$. Thus,
$$\|(x_1-y_1,x_2-y_2)\| \leq r.$$
Note that
$$\|x_1 - y_1\| \leq \max\{\|x_1 - y_1\|,\|x_2 - y_2\|\} = \|(x_1-y_1,x_2-y_2)\| \leq r$$
in the same way, $\|x_2 - y_2\| \leq r$. Therefore, $y_1 \in B_{\Bbb{R}^{m_1}}(x_1,r)$ and $y_2 \in B_{\Bbb{R}^{m_2}}(x_2,r)$.
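The same inclusion also holds for the Euclidean norm, since dropping coordinates can only decrease a sum of squares; a quick randomized sanity check in Python (the dimensions, radius, and helper names are arbitrary choices of mine):

```python
import math
import random

# If ||(y1, y2) - (x1, x2)|| <= r (Euclidean), then ||y1 - x1|| <= r and
# ||y2 - x2|| <= r, because each partial sum of squares is bounded by
# the full sum.  Sample points of the big ball and check both projections.
random.seed(0)
m1, m2, r = 3, 2, 1.5
center = [random.uniform(-1, 1) for _ in range(m1 + m2)]

def norm(v):
    return math.sqrt(sum(t * t for t in v))

for _ in range(2000):
    while True:                          # rejection-sample the big ball
        y = [c + random.uniform(-r, r) for c in center]
        if norm([yi - ci for yi, ci in zip(y, center)]) <= r:
            break
    d1 = norm([y[i] - center[i] for i in range(m1)])
    d2 = norm([y[i] - center[i] for i in range(m1, m1 + m2)])
    assert d1 <= r and d2 <= r
```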
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4033512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How many four digit numbers in which each digit is at least as large as the previous digit? Given that the first digit has to be between 1 and 9, each subsequent digit depends on the previous. If they were strictly increasing, then it would be $\binom{9}{4}$ numbers. My idea is that for each digit after the first, there is an additional choice compared to the strictly increasing numbers, but I'm not sure how to translate that into an expression. Any help is appreciated!
| Hint:
If $$1\leq a_1<a_2<a_3<a_4\leq 12$$ then $$1\leq a_1\leq a_2-1\leq a_3-2\leq a_4-3\leq 9,$$
and vice versa.
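A brute-force count in Python confirms the bijection in the hint (this sketch just reads digits off the decimal representation):

```python
from math import comb

# Count four-digit numbers with nondecreasing digits by brute force and
# compare with the bijection in the hint: C(12, 4) = 495.
count = 0
for n in range(1000, 10000):
    d = [int(ch) for ch in str(n)]
    if all(d[i] <= d[i + 1] for i in range(3)):
        count += 1

assert count == comb(12, 4) == 495
```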
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4033749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Convergence of integral and summation for Time taken for complete revolution around vertical circle I want to find the time taken by a particle of mass $m$ to complete vertical circular motion.
So I proceed as follows
Applying work energy theorem
$$-mgR(1-\cos\theta)=\frac12 mv^2-\frac12 mu^2$$
$$\implies v^2=u^2-2gR(1-\cos\theta)$$
$$\implies \omega=\frac{\sqrt{u^2-2gR(1-\cos\theta)}}{R}$$
$$\implies \frac {d\theta}{dt}=\frac{\sqrt{u^2-2gR(1-\cos\theta)}}{R}$$
$$\implies \frac {d\theta}{\sqrt{u^2-2gR(1-\cos\theta)}}=\frac{dt}{R}$$
Let T be the time taken for complete revolution
Now, Integrating both sides
$$\int_{0}^{2\pi}\frac {d\theta}{\sqrt{u^2-2gR(1-\cos\theta)}}=\int_{0}^{T}\frac{dt}{R}$$
$$\implies T=2R\int_{0}^{\pi}\frac{d\theta}{\sqrt{u^2-4gR\sin^2\theta}}$$
$$\implies T=\frac{2R}{u}\int_{0}^{\pi}\frac{d\theta}{\sqrt{1-\frac{4gR}{u^2}\sin^2\theta}}$$
On evaluating this elliptic integral we get,
$$T=\pi R\sum_{n=0}^{\infty} \left(\frac{2}{u}\right)^{2n+1}\left(\frac{(2n-1)!!}{2^n n!}\right)^2 (gR)^n$$
Here $T$ denotes the time taken for a complete revolution. The particle will complete the full circle only if $u\geq \sqrt{5gR}$. So my question is: $T$ will be real only when $u\geq \sqrt{5gR}$, so this condition should appear in the integral as well as in the summation for $T$, but I'm not able to see it. Also, Wolfram Alpha evaluates the integral only if $u\geq \sqrt{5gR}$.
| Both the integral and the series converge if and only if $u > 2 \sqrt{g R}$, but your formula for $T$ is correct only if $u \geq \sqrt{5 g R}$. This discrepancy can be understood as follows:
The first condition only ensures that the particle has enough energy to reach the height $2R$ (it can be found by equating $2 R m g$ and $\frac{1}{2} m u^2$).
But your formula for $T$ can only be correct if the particle slides along the loop at all times. This only happens if the sum of the centrifugal force and the radial component of the gravitational force is always non-negative, i.e. $m R \omega^2 + m g \cos (\theta) \geq 0$. Using the conservation of energy to replace $\omega$ we can rewrite this inequality as $\frac{u^2}{g R} \geq 2 - 3 \cos(\theta)$. For $\theta = \pi$ the right-hand side is maximal, so we obtain the second (stronger) condition $\frac{u^2}{g R} \geq 5 ~ \Leftrightarrow~ u \geq \sqrt{5gR}$.
For $2 \sqrt{gR} < u < \sqrt{5gR}$ the particle has enough energy to reach the top of the loop, but it will detach itself from the track before that happens. Since your entire calculation is based on the energy conservation law (which contains less information than the original equations of motion), the stronger condition for the applicability of your result never appears and must be found separately.
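To see these thresholds numerically, one can evaluate the integral for $T$ by simple quadrature; a Python sketch with illustrative values $g=9.8$, $R=1$ (the function name `period` and the sample speeds are my choices):

```python
import math

# Midpoint-rule evaluation of T = (2R/u) * ∫_0^π dθ / sqrt(1 - (4gR/u²) sin²θ).
g, R = 9.8, 1.0

def period(u, steps=100_000):
    k2 = 4 * g * R / u**2
    if k2 >= 1:                     # integrand leaves the reals: u <= 2√(gR)
        return math.inf
    h = math.pi / steps
    return (2 * R / u) * h * sum(
        1 / math.sqrt(1 - k2 * math.sin((i + 0.5) * h) ** 2)
        for i in range(steps))

u_min = math.sqrt(5 * g * R)        # = 7.0 here: minimum speed for a full circle
T = period(u_min)
assert 1.28 < T < 1.30              # ≈ 1.2898 s
assert period(2 * u_min) < T        # faster launch, shorter period
assert period(2 * math.sqrt(g * R)) == math.inf   # at/below the convergence threshold
```

Note that the quadrature only "sees" the convergence condition $u > 2\sqrt{gR}$, exactly as the answer explains; the stronger condition $u \geq \sqrt{5gR}$ comes from the detachment argument, not from the integral.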
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4033921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Expectation of the product of two matrices I recently viewed this question and I decided it to give it a try. I got stuck in the computation of a certain expectation and variance. Summarizing the problem, this is the issue:
Let $X\in R^{n\times d}$ be a data matrix, $R\in R^{n\times d}$ a
matrix such that $ R_{ij}\sim\text{Bern}(p)$ is sampled i.i.d. from
the Bernoulli distribution, $M=X\odot R$ the element-wise product
of the previous two matrices, and $\Gamma$ be a diagonal matrix with
$\Gamma_{ii}=(X^\top X)_{ii}^{1/2}$.
Show that $E[M]=pX$, and $var(M)=p(1-p)\Gamma^2$
I started by considering the columns of $X$ and $R$ as independent vectors. Using a specific example where both $X$ and $R$ have three columns, i.e. $X=[A\ B\ C]^T$, and $R=[\alpha\ \beta\ \delta]^T$, we have that $M=[\alpha A\ \beta B\ \delta C]^T$. Taking the expectation of this:
$$E[M]=
\begin{bmatrix}
E[\alpha A] \\
E[\beta B]\\
E[\delta C] \\
\end{bmatrix} \stackrel{\star}{=}
\begin{bmatrix}
E[\alpha]E[A] \\
E[\beta]E[B]\\
E[\delta]E[C] \\
\end{bmatrix} =
\begin{bmatrix}
pE[A] \\
pE[B]\\
pE[C] \\
\end{bmatrix}=pE[X]
$$
where I used that $\star$ the variables are independent.
It doesn't look like the answer is correct and I am not sure what I am misunderstanding. For the variance, I don't even know where to start.
| I just realised what the answer is (thanks also to the reflection questions of @Gateau-Gallois).
*
*Mean: $X$ is taken as a matrix of constants whereas $R\sim Bern(p)$. That's why $E[M]=pX$.
*Variance: Following a similar argument, one can see then that the covariance matrix is:
$$
\begin{align}
var(M_{ij})&=var(X_{ij}R_{ij})\stackrel{\star}{=}X_{ij}^2var(R_{ij})
\end{align}$$
where $\star$ is the same assumption as in the question and the fact that $var(aX)=a^2var(X)$. The expression results in 0 if $i\neq j$ since $R_{ij}$ are i.i.d. and otherwise is $X^2_{ii}var(R_{ii})=X^2_{ii}p(1-p)$. Taking, then, the matrix form:
$$
\begin{align}
var(M)=p(1-p)(X^TX)_{ii}\stackrel{\star^2}{=}p(1-p)\Gamma^2
\end{align}
$$
where $\star^2$ is the definition $\Gamma_{ii}=(X^\top X)_{ii}^{1/2}$ stated in the question.
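Both identities can be verified exactly on a toy example by enumerating every Bernoulli mask with its probability (a Python sketch; the specific $X$ and $p$ are arbitrary choices of mine):

```python
from fractions import Fraction
from itertools import product

# Enumerate every Bernoulli mask R with its exact probability; X is a
# fixed constant matrix.
X = [[1, -2], [3, 5]]
p = Fraction(1, 3)
n = 2

EM = [[Fraction(0)] * n for _ in range(n)]    # E[M]
EM2 = [[Fraction(0)] * n for _ in range(n)]   # E[M_ij^2], entrywise
for mask in product([0, 1], repeat=n * n):
    prob = Fraction(1)
    for bit in mask:
        prob *= p if bit else 1 - p
    for i in range(n):
        for j in range(n):
            m_ij = X[i][j] * mask[i * n + j]
            EM[i][j] += prob * m_ij
            EM2[i][j] += prob * m_ij * m_ij

for i in range(n):
    for j in range(n):
        assert EM[i][j] == p * X[i][j]                    # E[M] = pX
        var = EM2[i][j] - EM[i][j] ** 2
        assert var == p * (1 - p) * X[i][j] ** 2          # var(M_ij)
```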
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4034083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show $ \begin{bmatrix} A & BC(A+BC) \\ I_n & 0 \end{bmatrix} $ and $\begin{bmatrix} A+BC & 0 \\ 0 & -CB \end{bmatrix}$ have same nonzero eigenvalues Let $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, and $C\in\mathbb{R}^{m\times n}$. Define
\begin{equation}
D_1=\begin{bmatrix}
A & BC(A+BC) \\
I_n & 0
\end{bmatrix},\qquad D_2=\begin{bmatrix} A+BC & 0 \\ 0 & -CB \end{bmatrix}
\end{equation}
Show that $D_1$ and $D_2$ have the same nonzero eigenvalues.
My attempt: If we check the claim numerically, we see that the nonzero eigenvalues of $D_1$ and $D_2$ are the same. Probably, there is a factorization of $D_1$, for example $D_1=EF$ such that $D_2=FE$.
| In general, $BC$ and $CB$ share the same (multi)set of nonzero eigenvalues. The conclusion now follows from the observation that your $D_1$ is similar to
\begin{aligned}
&\pmatrix{I_n&-(A+BC)\\ 0&I_n}\pmatrix{A&BC(A+BC)\\ I_n&0}\pmatrix{I_n&A+BC\\ 0&I_n}\\
&=\pmatrix{-BC&BC(A+BC)\\ I_n&0}\pmatrix{I_n&A+BC\\ 0&I_n}\\
&=\pmatrix{-BC&0\\ I_n&A+BC}.
\end{aligned}
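The similarity can be spot-checked on a small integer example; a pure-Python sketch (matrix sizes, entries, and helper names are arbitrary choices of mine):

```python
# Verify P^{-1} D1 P = [[-BC, 0], [I, A+BC]] on a concrete example,
# using exact integer arithmetic.

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def block(tl, tr, bl, br):          # assemble a 2x2 block matrix
    return ([rl + rr for rl, rr in zip(tl, tr)]
            + [rl + rr for rl, rr in zip(bl, br)])

n, m = 2, 1
A = [[1, 2], [3, 4]]
B = [[1], [2]]
C = [[5, 6]]
I = [[1, 0], [0, 1]]
Z = [[0, 0], [0, 0]]

BC = matmul(B, C)
ApBC = [[A[i][j] + BC[i][j] for j in range(n)] for i in range(n)]
negApBC = [[-x for x in row] for row in ApBC]
negBC = [[-x for x in row] for row in BC]

D1 = block(A, matmul(BC, ApBC), I, Z)
P = block(I, ApBC, Z, I)
P_inv = block(I, negApBC, Z, I)

lhs = matmul(matmul(P_inv, D1), P)
rhs = block(negBC, Z, I, ApBC)
assert lhs == rhs
```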
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4034286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Prove that if $S$ is a subspace then there must exist $A$ with $Ax=0$ for $x \in S$ I would like to show the first direction of the following statement.
$S$ is a subspace if and only if there exists a matrix $A$ s.t. $S = \{x \in \mathbb{R}^n: Ax =0 \}$.
Definition: $S \subset \mathbb{R}^n$ where $S\neq \emptyset$ is called a subspace if $x \in S$ implies $\lambda x \in S, \forall \lambda \in \mathbb{R}$ and $x,y \in S$ implies $x+y \in S$. In addition,
$``\Leftarrow"$If there exists a matrix $A$ s.t. $S = \{x \in \mathbb{R}^n: Ax =0 \}$, then we can select $x, y \in \mathbb{R}^n$ as the solutions for $Ax =0$. It is clear to show that:
$A(x+y) = Ax + Ay = 0$ and $A(\lambda x) = \lambda (Ax) = 0$. Hence, $S$ is a subspace.
How can I show the other direction? If I assume that $S$ is a subspace, how can I show that there must exists a matrix $A$?
| Hint.
Suppose $S$ is a subspace of $\mathbb{R}^n$. Then the exists a basis $\beta$ of $S$. Extend $\beta$ to be a basis $\alpha$ of $\mathbb{R}^n$.
Define $A$ to be a linear transformation on $\mathbb{R}^n$ such that $Av=0$ for every $v\in\beta$ and $Av=v$ for every $v\in\alpha\setminus\beta$. Extend $A$ linearly.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4034378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A rational number is given in the form: A rational number is given in the form: $\frac{p}{q}$; $\frac{p}{q} \in (0,1)$ and $p$ and $q$ are coprime to each other. If $pq= 10!$ , what would be the number of ordered pairs $(p,q)$?
A beginning step would be noticing that $10!$ has $4$ prime factors $(2, 3, 5, 7)$. But how do I proceed? Taking combinations of two numbers from these $4$ primes doesn't seem right, and making cases of this also doesn't yield the right result.
| Figure out not just the prime factors of $10!$ but the powers as well...i.e. the complete prime factorization of $10!$.
$10!$ is a big (relatively speaking) number, but it's not a monster. We can express it as $1*2*3*4*5*6*7*8*9*10 = 1*2*3*2^2*5*(2*3)*7*2^3*3^2*(2*5) = 2^8\cdot 3^4\cdot 5^2\cdot 7$
So if $\gcd(p,q) = 1$ and $pq = 2^8\cdot 3^4\cdot 5^2\cdot 7$ then
.... can you take it from here?...
Warning: The following is free thinking, so it will include indirect, inefficient reasoning. The point is to replicate how one would think about solving it, not how to solve it efficiently in hindsight.
$p = 2^a3^b5^c7^d$ and $q= 2^{\alpha}3^{\beta}5^\gamma 7^{\delta}$
Where $a + \alpha = 8; b+\beta = 4; ...etc.....$
But as $\gcd(p,q) = 1$ we either have $a=0$ or $\alpha = 0$, and $b =0$ or $\beta = 0$, etc.
In other words we have $p = 2^{0,8}3^{0,4}5^{0,2}7^{0,1}$ while $q=2^{8,0}3^{4,0}5^{2,0}7^{1,0}$.
So there are two choices for $a$: $0$ or $8$; all or nothing. Same for the other variables. And $\alpha, \beta,$ etc. are completely determined by $a,b,c,d$.
So there are $2^4 = 16$ options.
In hindsight, it be better to answer
If $pq = 10!$ then $q = \frac {10!}p$, so we only have to calculate how many choices of $p$ there are.
$p = 2^a3^b5^c7^d$ and $q = 2^{8-a}3^{4-b}5^{2-c}7^{1-d}$. But as $\gcd(p,q)=1$ for each variable either $a =0$ or $8-a=0$ etc so there are $2$ options only.
So $p = 2^{0|8}3^{0|4}5^{0|2}7^{0|1}$ so there are four choices each with $2$ options.
So there $2^4=16$ choices.
.....
Oh.... I overlooked $\frac pq < 1$.
Oh well. There are $16$ ordered pairs total. As $\gcd(p,q) = 1$ (or because $10!$ is not a perfect square) we can't have $p = q$, and so there are $\frac {16}2=8$ unordered pairs, or in other words pairs where $p < q$.
====
Actually, in hind-hindsight it's better to answer as Eyob Tsegaye did and realize it's enough to know $10! =2^m3^n5^s7^t$, with $p$ and $q$ taking those powers "all or nothing", without having to figure out the actual prime factorization.
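A short brute force over the divisors of $10!$ confirms the count (Python sketch):

```python
from math import factorial, gcd, isqrt

# Count fractions p/q in (0, 1) with pq = 10! and gcd(p, q) = 1.
N = factorial(10)
pairs = [(p, N // p) for p in range(1, isqrt(N) + 1)
         if N % p == 0 and gcd(p, N // p) == 1]

# p <= sqrt(N) < q forces p/q in (0, 1); N is not a perfect square, so
# p = q never occurs.
assert isqrt(N) ** 2 != N
assert len(pairs) == 8
```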
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4034646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Interchanging expectations of log likelihood I see in papers (here in eq. 3 or here on page 4, for example) that it can be done like this using Fubini
$$\mathbb{E}_x\mathbb{E}_\theta\log f(x|\theta)=\mathbb{E}_\theta\mathbb{E}_x\log f(x|\theta)$$
Where $f(x|\theta)$ is a conditional probability density.
I don't immediately see why $\mathbb{E}_\theta\mathbb{E}_x|\log f(x|\theta)|<\infty$. Is it actually the case? Or are some additional conditions on $f$ needed?
| The identities
$$\mathbb{E}_x\mathbb{E}_\theta \bigl[{\bf 1}_{\{f(x|\theta)>1\}}\log f(x|\theta) \bigr]=\mathbb{E}_\theta\mathbb{E}_x\bigl[{\bf 1}_{\{f(x|\theta)>1\}}\log f(x|\theta) \bigr]\quad \; \; (*)$$
and
$$\mathbb{E}_x\mathbb{E}_\theta \bigl[{\bf 1}_{\{f(x|\theta)\le 1\}}\log f(x|\theta) \bigr]=\mathbb{E}_\theta\mathbb{E}_x\bigl[{\bf 1}_{\{f(x|\theta)\le 1\}}\log f(x|\theta) \bigr]\quad \; \; (**)$$
always hold by Tonelli's theorem [1]. Adding them will give the desired formula, provided at least one of them is finite.
On the other hand, if $(*)$ yields $+\infty$ and $(**)$ yields $-\infty$, then the original formula might be undefined. For example,
consider the densities
$$f(x|\theta)=\frac{C}{|x-\theta |\, [1+(\log |x-\theta|)^2]} \,,$$
where $C$ is chosen to ensure $\int f(x|\theta) \, dx=1$.
(One can give an even simpler degenerate example, replacing $\theta$ on the RHS of this formula by $0$.)
To verify the example, the integration fact needed is that $$\int_1^\infty \frac{dx}{x (1 +(\log x)^\beta)} $$
is finite for $\beta>1$ but infinite for $\beta=1$. The change of variable $u =\log x$ transforms this to the standard integrals
$$\int_0^\infty \frac{du}{ (1 +u^\beta )} \,. $$
Similarly,
$$\int_0^1 \frac{dx}{x (1 +|\log x|^\beta )} $$
is finite for $\beta>1$ but infinite for $\beta=1$.
[1]
https://en.wikipedia.org/wiki/Fubini%27s_theorem#Tonelli's_theorem_for_non-negative_measurable_functions
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4034767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to use telescoping series to find: $\sum_{r=1}^{n}\frac{1}{r+2}$ I am a bit confused by this one; how do I do the required modification in this case?
| As many people have mentioned, this sum does not telescope. A summation series telescopes only if there is addition and subtraction involved; the addition and subtraction cause terms to cancel out. For example, the summation series $\sum_{n=1}^\infty \left(\frac1n - \frac1{n+1}\right)$ telescopes because the subtraction allows parts of the series to cancel with parts of the series that come after. For a product series, it should have multiplication and division so that the operations cancel each other out. For example, the product $\prod_{n=1}^\infty \frac1n\cdot(n+1)$ uses multiplication in the previous term to cancel out with division in the current term.
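To make the contrast concrete, here is a small exact-arithmetic check in Python (the shifted-harmonic closed form $H_{n+2}-H_2$ for the original sum is a standard identity, not a telescoping one):

```python
from fractions import Fraction

# A genuine telescoping sum, in exact arithmetic: partial sums of
# sum_{n>=1} (1/n - 1/(n+1)) collapse to 1 - 1/(N+1).
N = 50
partial = sum(Fraction(1, n) - Fraction(1, n + 1) for n in range(1, N + 1))
assert partial == 1 - Fraction(1, N + 1)

# By contrast, sum_{r=1}^{n} 1/(r+2) shows no such cancellation: it is
# just a shifted harmonic sum, H_{n+2} - H_2.
def H(m):
    return sum(Fraction(1, k) for k in range(1, m + 1))

n = 40
s = sum(Fraction(1, r + 2) for r in range(1, n + 1))
assert s == H(n + 2) - H(2)
```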
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4034928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Let $p_{n}$ be the probability that $C+D$ is a perfect square. Compute $\lim\limits_{n \to \infty}\left(\sqrt{n} p_{n}\right)$
Assume $C$ and $D$ are randomly chosen from $\{1,2, \cdots, n\}$. Let $p_{n}$ be the probability that $C+D$ is perfect square. Compute $$\lim\limits _{n \to \infty}\left(\sqrt{n} p_{n}\right)$$
My Approach: My assumptions were that $C,D$ can be equal and that the order of $C,D$ is not considered. [I don't know whether these assumptions are correct; the question was not clear on this.]
The number of ways a number $k$ can be partitioned into $2$ parts is equal to $\Big\lfloor\frac{k}{2}\Big\rfloor$
Now with the numbers $\{1,2, \cdots, n\}$, all the numbers from $2$ to $2 n$ can be made by choosing $2$ numbers and adding them up. So the number of perfect squares in this range is $\big\lfloor\sqrt{2 n}\big\rfloor-1$ ($-1$ because we cannot make $1$). So the probability of choosing $C, D$ such that $C+D$ is a perfect square is
$$p_n=
\frac{\displaystyle{\sum_{k=2}^{\big\lfloor\sqrt{2 n}\big\rfloor} \Bigg\lfloor\frac{k^{2}}{2}\Bigg\rfloor}}{n^{2}}
$$
Now I cant progress further.
Tell me if I am correct. Also are my assumptions correct ?
Edit: The answer for the limit is $\frac{4(\sqrt{2}-1)}{3}$
| $C$ and $D$ are chosen independently, so the number of pairs $(C,D)\in[n]^2$ with $C+D=k^2$ is
$$
\begin{cases}
k^2-1 &\text{if }2\leq k\leq \sqrt{n+1}\\
2n+1-k^2& \text{if }\sqrt{n+1}<k\leq\sqrt{2n}\\
0 & \text{otherwise}
\end{cases}
$$
So
\begin{align*}
\sqrt{n}p_n&=\frac{\displaystyle
\sum_{k=2}^{\lfloor\sqrt{n+1}\rfloor} (k^2-1)+\sum_{k=\lfloor\sqrt{n+1}+1\rfloor}^{\lfloor\sqrt{2n}\rfloor}(2n+1-k^2)}{n^{3/2}}\\
&\sim n^{-3/2}
\sum_{k=2}^{\lfloor\sqrt{n}\rfloor} k^2+n^{-3/2}\sum_{k=\lfloor\sqrt{n}\rfloor}^{\lfloor\sqrt{2n}\rfloor}(2n-k^2)\\
&\sim\int_0^1 x^2\,\mathrm{d}x+\int_1^{\sqrt{2}}(2-x^2)\,\mathrm{d}x=\frac43(\sqrt{2}-1)
\end{align*}
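The exact pair count above also makes a direct numerical check of the limit easy; a Python sketch (the helper name is mine):

```python
import math

# Using the exact pair count from the piecewise formula, evaluate
# sqrt(n) * p_n for a large n and compare with 4(sqrt(2)-1)/3.
def sqrt_n_pn(n):
    count, k = 0, 2
    while k * k <= 2 * n:
        s = k * k
        count += (s - 1) if s <= n + 1 else (2 * n + 1 - s)
        k += 1
    return math.sqrt(n) * count / n**2

limit = 4 * (math.sqrt(2) - 1) / 3      # ≈ 0.552285
value = sqrt_n_pn(10**6)
assert abs(value - limit) < 0.005
print(value, limit)
```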
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4035074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Finding the SDE Let $X_t = e^{(\mu-\frac{\sigma^2}{2})t + \sigma B_t}$, where $B_t$ is a standard Brownian motion. How do I find the SDE satisfied by $X_t^{-1}?$ I know I must use Ito's formula but not sure how to apply that. Any help would be appreciated.
| First, let $Y_t := \ln(X_t^{-1}) = -(\mu-\frac{\sigma^2}{2})t - \sigma B_t$ (which is well defined). We see that $Y_t$ is a solution to the SDE
$$ dY_t = -(\mu-\frac{\sigma^2}{2})dt - \sigma dB_t \tag1 $$
Which means that $Y_t$ is a regular drift-diffusion process, so applying Itô's lemma to $g(t,Y_t) = e^{Y_t} = f(Y_t)$ gives
$$df(Y_t) = f'(Y_t)dY_t + \frac{1}2f''(Y_t)d\langle Y\rangle_t \tag2$$
And the quadratic variation of $Y$ is given by $d\langle Y\rangle_t =(dY_t)^2 = \sigma^2dt$, so replacing in (2), we get
$$\begin{align} d(X^{-1}_t) = df(Y_t) &= f'(Y_t)dY_t + \frac{1}2f''(Y_t)\sigma^2dt \\
&= f(Y_t)\cdot(dY_t + \frac{\sigma^2}2dt) \\
&= f(Y_t)\cdot\left(-(\mu-\frac{\sigma^2}{2})dt - \sigma dB_t + \frac{\sigma^2}2dt\right) \\
&= f(Y_t)\cdot\left(-(\mu-\sigma^2)dt - \sigma dB_t\right)\\
&= -X_t^{-1}\cdot \left(\sigma dB_t + (\mu-\sigma^2)dt\right)\end{align} $$
Finally, $X^{-1}_t$ is solution to the SDE
$$d(X^{-1}_t) = X_t^{-1}\cdot \left(-\sigma dB_t + (\sigma^2 - \mu)dt\right)\qquad \square$$
Another way to prove it could have been to show that $X_t$ is an Itô diffusion process and applying Itô's lemma to $X_t^{-1} := {1 \over X_t} = f(X_t)$ directly (I haven't tried).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4035274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $F$ be a closed subset of a metric space $X$. Does there exist a continuous function $g : X \to R$ such that $F = g^{-1}({0})$? Let $F$ be a closed subset of a metric space $X$. Does there exist a continuous function
$g : X \to \mathbb{R}$ such that $F = g^{-1}({0})$? I am trying to produce a counterexample to this. Any ideas would be really helpful
| For $F=\emptyset$ we can take $g: X \to \Bbb R$ to be a constant function with value $1$, so that $g^{-1}[\{0\}] = \emptyset = F$. So we can assume that $F \neq \emptyset$.
Let $f(x) = \inf \{d(x,y): y \in F\}$, which is well-defined as the infimum of a non-empty set that is bounded below (by $0$) in $\Bbb R$.
Then it's not too hard to see that $$\forall x,x' \in X: |f(x)-f(x')| \le d(x,x')$$ which makes $f$ uniformly continuous.
And it's also clear that any $x$ which is a limit point of $F$ obeys $d(x,F)=0$ and the same holds for all $x \in F$, so that $F= f^{-1}[\{0\}]$. So use $g=f$, the distance to the set function.
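For a concrete feel, here is this construction for the closed set $F=[1,2]\subset\mathbb{R}$, checked on a grid (a Python sketch; the closed-form distance formula is specific to this $F$):

```python
# For F = [1, 2] in R, the distance function g(x) = d(x, F) has the
# closed form max(0, 1 - x, x - 2).
def g(x):
    return max(0.0, 1 - x, x - 2)

xs = [i / 100 for i in range(-300, 501)]
for x in xs:
    assert (g(x) == 0) == (1 <= x <= 2)      # the zero set is exactly F
for x in xs:
    for y in (x + 0.37, x - 1.2):
        # the 1-Lipschitz estimate |g(x) - g(y)| <= |x - y|
        assert abs(g(x) - g(y)) <= abs(x - y) + 1e-12
```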
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4035428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove that $x$ is not rational, where $x = z - \pi \times y$, $y$ is rational, $z$ is an integer, and $\times$ means multiplication? Excuse me for a silly question like this.
I am a 60-year-old retired engineer and
want to learn some basic math I did not learn earlier.
I know an example where $x + y\pi$ can be an integer,
where $y$ is rational.
But how does one prove the general case:
is $x$ always rational,
always irrational,
or either, depending upon the case?
Thanks
| Suppose for a contradiction $x$ is rational. Then $x=a/b$ where $a,b$ are coprime integers and $b\neq0$. Also assume $y=c/d$ for coprime integers $c$ and $d$, and furthermore assume $c$ is not zero (so $y$ is not zero). Then,
$$ x = z - \pi y $$
if and only if
$$ \frac{a}{b} = z - \pi \frac{c}{d} $$
Multiply by $bd$:
$$ ad = zbd - \pi cb $$
Then,
$$ \pi cb = zbd - ad $$
Divide by $cb$ (this is possible since both are nonzero):
$$ \pi = \frac{zbd-ad}{cb} $$
This implies that $\pi$ is rational which is a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4035602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Map $z^2=\frac{\frac{1}{2}+it}{ \frac{1}{2}-it },t\in\mathbb{R}$ maps the real axis $(-\infty,\infty)$ onto the unit circle $|z|=1$.
Show that the mapping
$$
z^2 = \frac{\frac{1}{2}+it}{\frac{1}{2}-it}, \quad t\in\mathbb{R}
$$
maps the real axis $(-\infty,\infty)$ to the unit circle $|z|=1$.
My try-
$$z^2=\frac{\frac{1}{2}+it}{ \frac{1}{2}-it }$$
$$|z^2|=\frac{|\frac{1}{2}+it|}{ |\frac{1}{2}-it| }$$
$$|z^2|=\frac{\sqrt{1/4+t^2}}{ \sqrt{1/4+t^2} }$$
$$|z^2|=1$$
$$|z|^2=1$$
$$|z|=1$$...
| Let the unit circle be $z=e^{i\theta}$, $\theta\in[0,2\pi]$. Then,
$$t= \frac1{2i} \frac{z^2-1}{z^2+1} = \frac1{2i} \frac{e^{i 2\theta}-1}{e^{i 2\theta} +1} =\frac12\tan\theta
$$
which reveals that $\theta\in[0,\pi)$ and $\theta\in[\pi,2\pi)$ separately map to $t\in(-\infty,\infty)$, thus, not bijective.
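Both directions are easy to sanity-check numerically (a Python sketch; the sample values of $t$ and $\theta$ are arbitrary):

```python
import cmath
import math

# Forward direction: |z| = 1 whenever z^2 = (1/2 + it)/(1/2 - it).
for t in (-10.0, -1.0, -0.3, 0.0, 0.5, 2.0, 100.0):
    w = (0.5 + 1j * t) / (0.5 - 1j * t)          # w = z^2
    assert abs(abs(w) - 1) < 1e-12               # so |z| = sqrt(|w|) = 1

# Reverse direction from the answer: on z = e^{i theta},
# t = (1/(2i)) (z^2 - 1)/(z^2 + 1) = (1/2) tan(theta).
for theta in (0.1, 0.7, 1.2, 2.0, 2.8):          # avoid theta = pi/2
    z = cmath.exp(1j * theta)
    t = (z**2 - 1) / (2j * (z**2 + 1))
    assert abs(t.imag) < 1e-9
    assert math.isclose(t.real, 0.5 * math.tan(theta), abs_tol=1e-9)
```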
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4035742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Smooth hypersurfaces of the blow-up. Let's assume $X$ is the blow up of $\mathbb{P}^n$ along a smooth subvariety $Z$. Especially $X$ is smooth. I was wondering what the hypersurfaces of $X$ look like? The hypersurfaces should give an ample divisor, so it should have positive intersection number with any curve. Hence I think this implies that it should contain the exceptional divisor which is a projective bundle over $Z$. So it seems like every hypersurface section has two components one should be the exceptional divisor and possibly another divisor (which is blow of some divisor in the projective space). But this seems to imply that hypersurfaces are not smooth (they have at least two components) and contradicts Bertini's theorem. (So a better question is what smooth hypersurfaces look like?)
| I think you are asking a very difficult question. If you want to know the very ample divisors on $X$ then you are asking, at minimum, "what is the structure of the cone of ample/nef divisors on $X$?", and that is ignoring the ample/very ample distinction. If you Google the words "ample" and "blowup" together I think you will find some papers that will quickly demonstrate that it can take quite a bit of effort to say relatively little. Fixing $d > 1$, if you are blowing up a point $p$, then you are asking for something like a bound on the multiplicity of the divisor $D \subset \mathbb P^n$ at $p$ such that it can still be moved in a linear system $L \subset |D|$ which is large enough that strict transforms of members of $L$ cover $X$. And while this is straightforward enough for a single reduced point, blowing up at multiple and/or fat points will quickly get confusing, and blowing up a positive dimensional subvariety is even worse (in particular, the geometric characterization of a given linear system downstairs will become trickier and tricker). If you haven't already, I would advise that you start reading Lazarsfeld's Positivity in Algebraic Geometry to learn more about this kind of thing.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4035885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to take indices and name of a variable in Sage? I define the following in Sage.
rank=8
R=PolynomialRing(QQ, 'a' ,rank+1)
a=R.gens()
I would like to define a function f which returns the index of a[3]: f(a[3])=3, and define another function g such that g(3)=a[3]. How to do this in Sage? Thank you very much!
| This is generic behavior for lists in Python, so
a.index(a[3])
for the first (you can write f = a.index if you like), and for the second if you really want a function rather than using [3] you can do
def g(n):
return a[n]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4036016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that $\left(\prod_{2<p}^{p_i} \frac{p-1}{p}\right) \cdot \left(p_{i + 1}^2 - p_i^2 \right) > \pi(p_{i + 1}^2) - \pi(p_i ^2)$ Conjecture:
$$\left(\prod_{2<p}^{p_i} \frac{p-1}{p}\right) \cdot \left(p_{i + 1}^2 - p_i^2 \right) > \pi(p_{i + 1}^2) - \pi(p_i ^2) \tag{1}$$
The LHS expression is the product of Mertens's third theorem truncated at $p_i$ (subsequently just $M_x$ for truncation at $x$) times the length of the interval between the squares of $p_i$ and $p_i+1$, and the RHS is the actual number of primes between those squares. For instance, for $p_i = 7$, the expression is:
$$\frac{8}{35} \cdot \left(11^2 - 7^2\right) > \pi(121) - \pi(49)$$
And in fact $16.46 > 15$. Inequality $(1)$ is (perhaps surprisingly) empirically true for all $p_i$. The inequality seems intuitively likely to be true, as we would expect the result of $M_xx^2$ to be larger than $\pi(x^2)$ simply because $e^{-\gamma} > \frac12$ and therefore
$$\pi(x^2) \sim \frac{x^2}{\log x^2} = \frac12\frac{x^2}{\log x} < e^{-\gamma}\frac{x^2}{\log x} \tag{2}$$
(Inequality $(2)$ is empirically true only above $x \approx 100$, because the difference between $M_x \log x$ and $e^{-\gamma}$ is large for small $x$)
We can show (much more easily than I expected, in this question where I briefly failed basic algebra) that the density of primes between $p_{i-1}^2$ and $p_i^2$ is lower than the density of primes between $0$ and $p_i^2$, that is:
$$\frac{\pi\left(p_{i+1}^2) - \pi(p_i^2\right)}{p_{i+1}^2 - p_i^2} < \frac{\pi(p_i^2)}{p_i^2} \tag{3}$$
With this, we expect the difference between the LHS and RHS of $(1)$ to be greater than $(2)$ alone would imply.
But... I'm roughly 110% certain that $(2)$ and $(3)$ together don't qualify as proof. They might be proof of asymptotic behavior as $p_i \to \infty$, but I'm wondering if this can be (or already has been!) proven for smaller values. Any links or thoughts from the community?
| $\prod \limits_{2\leq p \leq p_i } (1-\frac{1}{p}) (p_{i+1}^2-p_{i}^2) \geq li(p_{i+1}^2)-li(p_{i}^2)$ for all $i \geq 4$
By noticing that $ \prod \limits_{2\leq p \leq p_i } (1-\frac{1}{p}) (p_{i+1}^2-p_{i}^2) \approx \frac{e^{-\gamma}}{\ln p_i} (p_{i+1}^2-p_{i}^2)\geq \frac{p_{i+1}^2-p_{i}^2}{2\ln p_i}\geq \int \limits_{p_{i}^2}^{p_{i+1}^2} \frac{dt}{\ln t}=li(p_{i+1}^2)-li(p_{i}^2)$
If you assume R.H. and let $ p_{i+i^{\epsilon}}^2-p_{i}^2$ with $i^{\epsilon} = O(\ln^k i)$ for $ k \geq 2$ then since $ |\pi(x)- li(x)| \leq \sqrt{x} \ln x$ this would imply the correctness of your conjecture for all $ i\geq 3000$ give or take according to the choice of $k$.
But without R.H. I don't see a way.
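A direct numeric check of inequality $(1)$ for small primes is easy with a sieve; the sketch below reproduces the $p_i = 7$ example above:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p in range(n + 1) if sieve[p]]

def sides(p, q):
    """LHS and RHS of inequality (1) for consecutive primes p = p_i, q = p_{i+1}."""
    mertens = 1.0
    for r in primes_up_to(p):            # truncated Mertens product over primes <= p_i
        mertens *= (r - 1) / r
    lhs = mertens * (q * q - p * p)
    rhs = sum(1 for r in primes_up_to(q * q) if r > p * p)   # pi(q^2) - pi(p^2)
    return lhs, rhs

print(sides(7, 11))   # about (16.457, 15), matching the p_i = 7 example
```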
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4036211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove that $a ≤ x ≤ b ⇒ |x| ≤ |a|+|b|$ Can you please help me with this proof $$a ≤ x ≤ b ⇒ |x| ≤ |a|+|b|$$ ? I am literally stuck for hours.
This is what i thought, but i don't know if it counts as a proof.
First of all, if a ≤ x then $x ≤ -a$ so $a ≤ x ≤ -a$ if $x ≤ -a$ then $|x| ≤ |-a| = |a|$
Now we look at b:
if x ≤ b then $-b ≤ x ≤ b$ so $$|-b| = |b| ≤ |x|$$ OR $$|x| ≤ |-b| = |b|$$
let's choose the "worst" option which is |b| ≤ |x|
so $$|b| ≤ |x| ≤ |a| ⇒ |x| ≤ |a| - |b| ≤ |a| + |b| ⇒ |x| ≤ |a| +|b|$$
Is it correct? Please show me other ways to prove it.
| Hint
$x$ can not be further away from $0$ than both $a$ and $b$.
Consider that $x$ must have the same sign as at least one of $a,b$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4036309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Words of length $10$ in alphabet $\{a,b,c\}$ such that the letter $a$ is always doubled
Compute the number of words of length $10$ in alphabet $\{a,b,c\}$ such that letter $a$ is always doubled (for example "$aabcbcbcaa$" is allowed but "$abcbcaabcc$" is forbidden).
I am looking for a quick/efficient way to resolve this problem. I thought of fixing "$aa$" in the beginning then draw a tree of the next possibilities but this tree will end up to be a whole forest. Can you help me ?
| Here is a second approach: Let $x_n$ $(n\geq0)$ denote the number of admissible words having $n$ letters. Then
$$x_0=1,\quad x_1=2,\qquad x_n=2x_{n-1}+x_{n-2}\quad (n\geq2)\ .$$
The characteristic equation of the recursion is $\lambda^2-2\lambda-1=0$ with roots $\lambda=1\pm\sqrt{2}$. It follows that
$$x_n=c(1+\sqrt{2})^n+c'(1-\sqrt{2})^n\qquad(n\geq0)\ ,$$
where the constants $c$, $c'$ have to be determined from the initial conditions. The computation gives
$$c={2+\sqrt{2}\over4},\qquad c'={2-\sqrt{2}\over4}\ .$$
Now $\bigl|c'(1-\sqrt{2})^n\bigr|<{1\over2}$ for all $n\geq0$. We therefore can write
$$x_n={\tt round}\left({2+\sqrt{2}\over4}(1+\sqrt{2})^n\right)\qquad(n\geq0)\ .$$
This gives
$$x_{10}={\tt round}\bigl(5740.9999782267896753\bigr)=5741\ .$$
This coincides with the value obtained by ${\tt user10354138}$.
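Both the recurrence and the count itself can be checked by brute force, interpreting "$a$ is always doubled" (as the two examples in the question suggest) as every maximal run of $a$'s having even length:

```python
from itertools import product, groupby

def valid(word):
    """Every maximal run of 'a' has even length, i.e. a's occur in pairs."""
    return all(sum(1 for _ in run) % 2 == 0
               for ch, run in groupby(word) if ch == 'a')

def brute(n):
    return sum(valid(w) for w in product('abc', repeat=n))

def recur(n):
    x_prev, x = 1, 2                     # x_0 = 1, x_1 = 2
    for _ in range(n - 1):
        x_prev, x = x, 2 * x + x_prev    # x_n = 2 x_{n-1} + x_{n-2}
    return x if n >= 1 else 1

print(recur(10), brute(10))              # 5741 5741
```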
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4036510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Showing that the given sequence is bounded as follows Let $A = (x_n)$ be a sequence that defined as $x_n = \frac{1}{3^{n+5}}$.
Show that $A$ is bounded and find its supremum and infimum.
Attempt:
First, I claim that $A$ is a decreasing sequence. I show this by induction as follows:
To show $x_{n+1}<x_n$ for all $n \in \Bbb Z^+$. Indeed, this is true for $n=1$. Now, assume that for $n=k$, it's also true; that is, $x_{k+1} < x_k$ for some $k \in \Bbb Z^+$. Then,
$x_{k+2} = \frac{1}{3^{(k+2)+5}} = \frac{1}{3^{k+7}} < \frac{1}{3^{k+6}} = \frac{1}{3^{(k+1)+5}} = x_{k+1}$. Hence, $x_{k+2} < x_{k+1}$. Therefore, $x_{n+1} < x_n$
for all $n \in \Bbb Z^+$. Thus, $A$ is a decreasing sequence. $\Box$
Back to the problem. It's clear that $A$ is bounded above by $\frac{1}{3^6}$ (Should I show this first?).
Then, to show that $A$ is bounded, it suffices to show that $A$ is bounded below by $0$.
I show this one again by induction. Indeed, it's true for $n=1$.
Assume that it's true for $n=k$; that is $0 < x_k$. Then, $x_{k+1} = \frac{1}{3^{(k+1)+5}}
= \frac{1}{3^{k+5}} \cdot \frac{1}{3} > 0$. Hence, $0< x_n$ for all $n \in \Bbb Z^+$. Therefore, $A$ is bounded below by $0$. Thus, $A$ is bounded, as desired.
Now, I claim that $\sup A = \frac{1}{3^6}$ and $\inf(A) = 0$.
For the proof of infimum, let $m$ be another lower bound of $A$. To show: $m \le 0$. Suppose
$m > 0$. Then, by the Density Theorem, there exists $r \in \Bbb Q$ such that
$0 < r < m$. Hence, $r \in A$. A contradiction, since $m$ is a lower bound of $A$.
Thus, $m \le 0$ and therefore, $\inf A = 0$.
For the supremum, let $M$ be another upper bound of $A$. To show: $\frac{1}{3^6} \le M$. Suppose $M < \frac{1}{3^6}$. Then, by the Density Theorem, there exists $s \in \Bbb Q$ such that
$M < s < \frac{1}{3^6}$. Hence, $s \in A$, contradiction with the fact that $M$ is an upper bound of $A$.
Therefore, $\frac{1}{3^6} \le M$ and thus, $\sup A = \frac{1}{3^6}$.
Is this approach correct?
| Using the "Archimedean property" , we get for every $\epsilon \gt 0 $ , there exists $m \in \mathbb{N}$ such that $\frac{1}{n} \lt \epsilon$ for every $n\ge m $ .
As, $3^{n+5}\gt n \implies \frac{1}{3^{n+5}} \lt \frac{1}{n} \lt \epsilon $ for all $n\ge m $
By the definition of convergence of a sequence, the sequence $\{\frac{1}{3^{n+5}}\}$ converges to $0$.
So the given sequence is bounded and $\limsup = \liminf = 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4036693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What's wrong in this inductive proof? Let $a$ be any positive integer. If $x$ and $y$ are positive integers such that $max(x, y) = a$, then we know that $x = y$.
I'm pretty sure this is a false claim, and I need to find what is wrong with the induction proof.
For the base case, we let $a = 1$. Here, if $x$ and $y$ are positive integers such that $max(x, y) = 1$, then $x$ and $y$ must both be $1$.
In the inductive case, we let $k$ be some arbitrary positive integer. The induction hypothesis assumes that if $x$ and $y$ are positive integers such that $max(x, y) = k$, then $x = y$ for some positive integer $k$.
Consider the case where $a = k + 1$ and let $x'$ and $y'$ be two positive integers such that $max(x', y') = k + 1$. Now we have $max(x' - 1, y' - 1) = k$, which, by the induction hypothesis, implies that $x' - 1 = y' - 1$, and therefore that $x' = y'$. Thus, we have proven the claim.
I can identify just one counterexample: if $a = 20, x = 10, y = 20$. Then, $max(10, 20) = 20$, but $20 \neq 10$.
| The problem is that $x'-1$ and $y'-1$ might not both be positive integers (since one of $x',y'$ might be 1), so the induction hypothesis doesn't apply.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4037010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to prove that $S^1 \times S^3$ identifying antipodal points is again $S^1 \times S^3$. I am studying the compactification of Minkowski spacetime as the space of generators $\mathcal{PN}$ of the null cone $\mathcal{N}$ of a six dimensional real manifold of signature 2. Concretely I am following the book "An introduction to twistor theory" by Huggett and Tod
(see the image). Topologically they say that $\mathcal{PN}$ is $S^1\times S^3$ with antipodal points identified (because each generator cuts the 5-sphere twice), which is again $S^1\times S^3$. Does anybody know why $S^1\times S^3/\sim$ is again $S^1\times S^3$?
And can this be extended to $S^1\times S^n$ in general?
Thank you.
| $S^1$ and $S^3$ can be identified with the unit spheres in $\mathbb C$ and $\mathbb C^2$ respectively. Then the map $$(z, w) \mapsto (z^2, z w)$$ where $z \in \mathbb C$, $w \in \mathbb C^2$ and $\lvert z \rvert = \lVert w \rVert = 1$ identifies exactly the antipodal points.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4037191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Genus of a curve My question is:
How to show that for all $g>0$ there exists an algebraic curve of genus $g$?
The intuition of this is simple, but I don't know how to show it.
| Here is a perspective from working over the field of complex numbers. I'll assume you want to talk about nonsingular curves, in which case the genus unambiguously means the geometric genus $h^0(X,\Omega^1)$, i.e. dimension of the global sections of the holomorphic cotangent bundle. Let $X$ denote a Riemann surface. It is well known that the geometric genus of $X$ defined as above equals the topological genus of the underlying real surface.
Next, using Kodaira's Embedding Theorem (or simply any other way to prove that a Riemann surface holomorphically embeds in $\Bbb{P}^n$), you can holomorphically embed $X$ as an analytic subvariety of some $\Bbb{P}^n$ and by Chow's Theorem this realizes the image of $X$ as an algebraic variety in $\Bbb{P}^n$. The genus of this variety is then $g$.
Of course, a better construction in some sense is given by using hyperelliptic curves as mentioned in Somos' answer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4037356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why is the sum of two functions expressed as simple functions is the sum of the weighted indicator variable of the intersection? If
$$g=\sum_{i=1}^n a_i \mathbb I_{A_i}$$
and
$$h=\sum_{j=1}^m b_j \mathbb I_{B_j}$$
why is
$$g + h = \sum_{i=1}^n \sum_{j=1}^m (a_i + b_j) \mathbb I_{A_i\cap B_j}$$
I would have guessed that the sum would use the indicator variable $\mathbb I_{A_i \cup B_j},$ which is to say the union of both measurable sets.
| Suppose $g = 3 \mathbb I_{A_1}$ and $h = 4 \mathbb I_{B_1}$.
On the intersection $A_1 \cap B_1$ the sum $g+h$ takes the value $3+4=7$, but on $A_1 \setminus B_1$ it is $3$ and on $B_1 \setminus A_1$ it is $4$. So $g+h$ is not constant on the union $A_1 \cup B_1$; the only sets on which a single coefficient $a_i + b_j$ is correct are the intersections.
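The displayed identity for $g+h$ holds when $\{A_i\}$ and $\{B_j\}$ each partition the underlying space, so that every point lies in exactly one $A_i$ and exactly one $B_j$. A quick check with made-up sets:

```python
from itertools import product

A = [{0, 1}, {2, 3}, {4, 5}]; a = [1, 2, 3]    # {A_i} partitions {0,...,5}
B = [{0, 2, 4}, {1, 3, 5}];   b = [10, 20]     # {B_j} partitions it too

def g(x): return sum(ai for ai, Ai in zip(a, A) if x in Ai)
def h(x): return sum(bj for bj, Bj in zip(b, B) if x in Bj)

def rhs(x):   # coefficients attach to the intersections A_i & B_j
    return sum(ai + bj
               for (ai, Ai), (bj, Bj) in product(zip(a, A), zip(b, B))
               if x in Ai & Bj)

assert all(g(x) + h(x) == rhs(x) for x in range(6))
print([rhs(x) for x in range(6)])   # [11, 21, 12, 22, 13, 23]
```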
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4037522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What's wrong in this $\epsilon-\delta$ argument? In Spivak's Calculus, one exercise asks whether the following is true:
Let $f$ and $g$ be functions such that $f(x) < g(x)$, for all $x$.
Does it follow that $\lim\limits_{x \to a} f(x) < \lim\limits_{x \to a} g(x)$?
The previous result holds if the signs $<$ get replaced by $\leq$, but it turns out this is not true in general for strict inequality. However I "proved" it was true. Obviously my argument is wrong, but it is not clear to me where lies the mistake, so I am requesting your help to figure it out.
First I envisioned using one neat trick I found in Terence Tao's blog (second paragraph in item 2), namely that to prove a quantity $x$ vanishes one can prove $\lvert x \rvert \leq \epsilon$, for every $\epsilon > 0$.
So my argument goes as follows: let $f$ and $g$ be functions as in the statement above. Then $\lim\limits_{x \to a} f(x) \leq \lim\limits_{x \to a} g(x)$. We show that equality leads to a contradiction.
If $\lim\limits_{x \to a} f(x) = \lim\limits_{x \to a} g(x) = m$ and $\epsilon > 0$, then there are $\delta_1, \delta_2 > 0$ such that
*
*if $0 < \lvert x-a \rvert < \delta_1$, we have $\lvert f(x) - m \rvert < \cfrac{\epsilon}{2}$;
*if $0 < \lvert x-a \rvert < \delta_2$, we have $\lvert g(x) - m \rvert < \cfrac{\epsilon}{2}$.
Now, if $0 < \lvert x-a \rvert < \delta$, where $\delta$ equals the smallest number between $\delta_1$ and $\delta_2$, then
$$\lvert g(x) - f(x) \rvert = \lvert g(x) -m + m - f(x) \rvert \leq \lvert g(x) -m \rvert + \lvert m - f(x) \rvert < \cfrac{\epsilon}{2} + \cfrac{\epsilon}{2} = \epsilon$$.
This implies, by Prof. Tao's trick, that $f(x) = g(x)$; this is impossible since $f(x) < g(x)$ for all $x$, so we conclude $\lim\limits_{x \to a} f(x) < \lim\limits_{x \to a} g(x)$.
Where's the error? Thanks in advance.
| Your $\epsilon-\delta$ argument is correct, but for the trick to work, your $x$ must be fixed. If a fixed non-negative quantity is $\leq \epsilon$ for any arbitrary $\epsilon>0$, then the quantity must be $0$. However, in this case $|g(x)-f(x)|$ is changing as $x$ takes any value in $(0,\delta)$.
You showed $|g(x)-f(x)|<\epsilon$ when $x$ is in the neighborhood $0<|x-a|<\delta$. This just means you can make $g(x)$ and $f(x)$ arbitrarily close when $x$ is not too far away from $a$, which is consistent with $\displaystyle\lim_{x\to a} f(x)=\lim_{x\to a} g(x)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4037717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Proof by induction: Inductive step struggles Using induction to prove that:
$$
1 - \frac{1}{2} + \frac{1}{4} - \frac{1}{8} +...+\left(- \frac{1}{2}\right)^{n} = \frac{2^{n+1}+(-1)^{n}}{3\times2^{n}}
$$
where $ n $ is a nonnegative integer.
Preforming the basis step where $ n $ is equal to 0
$$
1 = \frac{2^{1}+(-1)^{0}}{3\times2^{0}} = \frac{3}{3} = 1
$$
Now the basis step is confirmed.
Then I started the inductive step where $ n = k $ is assumed true and I needed to prove $ n = k+1 $
$$
1 - \frac{1}{2} + \frac{1}{4} - \frac{1}{8} +...+\left(- \frac{1}{2}\right)^{k} + \left(- \frac{1}{2}\right)^{k+1} = \frac{2^{k+1+1}+(-1)^{k+1}}{3\times2^{k+1}}
$$
Using the inductive hypothesis
$$
\frac{2^{k+1}+(-1)^{k}}{3\times2^{k}} + \left(- \frac{1}{2}\right)^{k+1} = \frac{2^{k+1+1}+(-1)^{k+1}}{3\times 2^{k+1}}
$$
After this I am struggling here trying to get around to the end. I would appreciate any guidance.
| Use instead the perturbation method from Concrete Mathematics:
\begin{align}
S_{n} + (-1)^{n+1}\frac{1}{2^{n+1}}&= \sum_{k=0}^{n}(-1)^{k}\frac{1}{2^{k}} +(-1)^{n+1}\frac{1}{2^{n+1}}=1 -\frac{1}{2}\sum_{k=0}^{n}(-1)^{k}\frac{1}{2^{k}} \\
&= 1-\frac{1}{2}S_n
\end{align}
\begin{align}
\frac{3}{2}S_n&=1 - (-1)^{n+1}\frac{1}{2^{n+1}} \\
S_n &=\frac{2\left(1 - (-1)^{n+1}\frac{1}{2^{n+1}}\right)}{3} = \frac{2^{n+2}+2(-1)^{n}}{3\cdot2^{n+1}} = \frac{2^{n+1}+(-1)^{n}}{3\cdot2^{n}}
\end{align}
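The closed form is easy to sanity-check with exact rational arithmetic:

```python
from fractions import Fraction

def closed_form(n):
    return Fraction(2 ** (n + 1) + (-1) ** n, 3 * 2 ** n)

# Compare against the partial sums 1 - 1/2 + 1/4 - ... + (-1/2)^n:
for n in range(12):
    partial_sum = sum(Fraction(-1, 2) ** k for k in range(n + 1))
    assert partial_sum == closed_form(n)

print(closed_form(3))   # 5/8 = 1 - 1/2 + 1/4 - 1/8
```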
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4037846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Counting permutation of duplicate items I am not really sure I fully understand the formula for finding the number of permutation of duplicate items.
The formula is:
$$\frac{n!}{p!q!r!\cdots}$$
Where do the factorials in the denominator exactly come from?
If we asssume the string "ANNA" and we want the count of the permutation of duplicate items.
We have $4$ characters so since we have $4$ options for the first character, $3$ for the second, $2$ for the third and $1$ for the last we have $4!$ different permutations.
But some of the characters are duplicates.
We have "A" and "N" repeated, meaning it adds more permutations to the outcome.
If $p=2$ is the number of occurrences of "A" and if $q = 2$ is the number of occurrences of "N", then would the $p!$ and $q!$ essentially be selecting $1$ item at a time from $2$ (or $p$) duplicates? So essentially are the denominators the binomial $\binom{2}{1}$ (or generally $\binom{p}{1}$ where $p$ is the number of repeated characters) where order does not matter?
Is my understanding correct?
| A different way of viewing this formula which avoids "division by symmetry" arguments:
Given $n$ items, $p$ of which are identical of one type, $q$ of which are identical of another type, $r$ of which are identical of a third type, etc... with $p+q+r+\dots = n$, approach the count by following the steps:
Pick which of the spaces are used by the $p$ objects of the first type. There are $\binom{n}{p}$ ways to make this choice
Pick which of the remaining spaces are used by the $q$ objects of the second type. There are $\binom{n-p}{q}$ ways to make this choice given the earlier choice
Pick which of the remaining spaces are used by the $r$ objects of the third type. There are $\binom{n-p-q}{r}$ ways to make this choice given the earlier choices
$\vdots$
This gives a final total of:
$$\binom{n}{p}\binom{n-p}{q}\binom{n-p-q}{r}\cdots$$
Note, this explanation was able to completely avoid division as an operation as binomial coefficients could have been defined recursively purely using addition and multiplication.
Now... if you insist on rewriting this using fractions, you should know that $\binom{a}{b}=\dfrac{a!}{b!(a-b)!}$ in which case you have
$$\binom{n}{p}\binom{n-p}{q}\binom{n-p-q}{r}\cdots = \dfrac{n!}{p!\color{red}{(n-p)!}}\cdot \dfrac{\color{red}{(n-p)!}}{q!\color{blue}{(n-p-q)!}}\cdot\dfrac{\color{blue}{(n-p-q)!}}{r!(n-p-q-r)!}\cdots$$
Cancelling the numerators of each term after the first in the product with a portion of the denominator from the previous term gives the well known $$\dfrac{n!}{p!q!r!\cdots}$$
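The formula can be confirmed by brute force, counting distinct rearrangements directly:

```python
from itertools import permutations
from math import factorial
from collections import Counter

def multiset_perms(word):
    """n! / (p! q! r! ...) for the letter multiplicities of word."""
    n = factorial(len(word))
    for count in Counter(word).values():
        n //= factorial(count)
    return n

# "ANNA": 4! / (2! 2!) = 6, and direct enumeration agrees:
assert multiset_perms("ANNA") == len(set(permutations("ANNA"))) == 6

print(multiset_perms("MISSISSIPPI"))   # 34650 = 11! / (4! 4! 2! 1!)
```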
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4038010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Square Brackets With Superscript & Subscript on Vector Space Does anybody know what the following notation means?
(I'm referring to the subscript & superscript to the right of the square brackets)
$\psi(T)=[T]_β^γ$
In this case, $T: V \rightarrow W$ is a linear map on finite-dimensional vector spaces $V$ & $W$ (over the same field $F$) with dimensions $n$ and $m$ resp. And, $\beta$ & $\gamma$ are ordered bases for resp. $V$ & $W$. Lastly, $\psi$ is the isomorphism $\psi:L(V,W)→M_{m\times n}(F)$.
(Full context here Proving isomorphism between linear maps and matrices)
| The expression on the right represents the matrix of the operator $T:V\to W$ with respect to the bases $\beta$ of $V$ and $\gamma$ of $W$.
More specifically, one evaluates $T$ at the $j$-th basis vector in $\beta$ and expresses the output as a linear combination of basis vectors from $\gamma$. The respective coefficients in the linear combination form the $j$-th column of the matrix $[T]_{\beta}^{\gamma}$.
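A small computational sketch of this recipe for $V = W = \mathbb{R}^2$; the matrix $A$ and the bases below are made up for illustration:

```python
from fractions import Fraction

def solve2(C, v):
    """Coordinates of v in the basis formed by the columns of the 2x2 matrix C."""
    (p, q), (r, s) = C
    det = p * s - q * r
    return [Fraction(s * v[0] - q * v[1], det),
            Fraction(p * v[1] - r * v[0], det)]

def apply(A, v):
    """Multiply a 2x2 matrix by a vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A     = [[1, 2], [3, 4]]        # T in the standard basis
beta  = [[1, 0], [1, 1]]        # ordered basis of V
gamma = [[2, 0], [0, 1]]        # ordered basis of W
Cg    = [[2, 0], [0, 1]]        # gamma vectors as the columns of a matrix

# j-th column of [T]_beta^gamma = gamma-coordinates of T(beta_j):
cols = [solve2(Cg, apply(A, bj)) for bj in beta]
M = [[cols[0][0], cols[1][0]],
     [cols[0][1], cols[1][1]]]   # here: [[1/2, 3/2], [3, 7]]

# Defining property: M maps beta-coordinates of v to gamma-coordinates of T(v).
v = [3, 2]                       # v = 1*beta_1 + 2*beta_2
assert apply(M, [1, 2]) == solve2(Cg, apply(A, v))
```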
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4038300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $a^x \equiv a^y \pmod{m}$, does this imply $x \equiv y \pmod{\varphi(m)}$? I know that $x \equiv y \pmod{\varphi(m)} \implies a^x \equiv a^y \pmod{m}$, so I was wondering if the converse also holds. If it does not hold in general, then are there any conditions where it does (such as assuming $(a, m)=1$)?
*Here $\varphi$ denotes the Euler totient function.
Edit: I see that the answer is no. However, the reason I was wondering about this is that I was watching this (https://youtu.be/f1oO9dEkqso?t=1386) lecture and don't know why else would Prof. Borcherds claim that $a_i \equiv 1 \pmod{p_{i}^{k_i}}$ is solvable if $\varphi(p_{i}^{k_i}) \mid n$ if the property in question is not true. In other words, how else is what's written justified?
take $m=8$ and $a=2$, so $\phi(m)=4$ and $2^{3}\equiv2^{4}\equiv0\pmod{8}$, yet $3\not\equiv4\pmod{4}$
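The counterexample takes one line to confirm:

```python
a, m = 2, 8
phi_m = 4                            # Euler phi(8) = 4

print(pow(a, 3, m), pow(a, 4, m))    # 0 0  -> 2^3 and 2^4 agree mod 8
print(3 % phi_m, 4 % phi_m)          # 3 0  -> but 3 and 4 differ mod 4
```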
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4038494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Using definition of limits
Let $c∈\mathbb{R}$ and let $f:\mathbb{R}\setminus\{c\}\rightarrow\mathbb{R}$ be a function such that $f(x)>0$ for all $x∈\mathbb{R}\setminus\{c\}$. Use the definition of limits to prove that
$$
\lim_{x\to c}f(x)=\infty \space\space\space\space\space\space\space\space\space\space\space \text{iff} \space\space\space\space\space\space\space\space\space\space\space \lim_{x\to c}\frac{1}{f(x)}=0.
$$
Proving the "$\Rightarrow$": Here is the definition: $\lim_{x\to c}f(x)=\infty$ if $\forall M∈\mathbb{R},\exists\delta>0$ such that $\forall x∈\mathbb{R}, 0<|x-c|<\delta\Rightarrow f(x)>M$. Here is my proof:
Let $\epsilon >0$ and set $M=\frac{1}{\epsilon}$. Since $\lim_{x\to c}f(x)=\infty$, we can find a $\delta >0$ such that $f(x)>M$ whenever $0<|x-c|<\delta$.
Thus $0<\frac{1}{f(x)}<\epsilon$ whenever $0<|x-c|<\delta$. This shows that for every $\epsilon>0$ it is possible to find a $\delta>0$ such that $|\frac{1}{f(x)}|<\epsilon$ whenever $0<|x-c|<\delta$. Since $\epsilon$ is arbitrary, we have proved that $\lim_{x\to c}\frac{1}{f(x)}=0$.
Proving the "$\Leftarrow$": This proof I am unsure of. I know that by the definition of a limit, $\lim_{x\to c} f(x) = 0$ if $\forall\epsilon>0, \exists\delta>0$ such that $\forall x∈\mathbb{R}\setminus\{c\}, 0<|x-c|<\delta \Rightarrow |f(x)-0|<\epsilon$. I am unsure of how to define $\lim_{x\to c}\frac{1}{f(x)}=0$ in a similar way. Any advice would be greatly appreciated.
| For the $\leftarrow$ direction. Let $M > 0$ be given, $\exists \delta > 0$ such that: $0 < |x-c| < \delta\implies \dfrac{1}{f(x)} < \dfrac{1}{M}\implies f(x) > M$. this shows that the limit is $\infty$ as claimed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4038716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Asymptotic of incomplete Gamma function, $\Gamma(n,n/a)$ Let $a>1$. I need to approximate (for n large)
$$
\Gamma(n,n/a) \approx f(n)
$$
Approximations I found are for the case which the second argument is fixed. Is there any simple formula for this asymptotic?
| $\Gamma(n, n/a)$ and $\Gamma(n)$ will become very close (in the sense that their ratio converges to $1$) as $n$ grow. The following computation provides a more precise error bound for their ratio:
I will write $\alpha = 1/a$ so that $0 < \alpha < 1$. Then
$$ \Gamma(n, \alpha n) = \Gamma(n) \biggl[ 1 - \frac{1}{\Gamma(n)} \int_{0}^{\alpha n} t^{n-1}e^{-t} \, \mathrm{d}t \biggr]. $$
Since $t \mapsto t^{n-1}e^{-t}$ is increasing on $0 \leq t \leq n-1$, for large $n$ we have
$$ \int_{0}^{\alpha n} t^{n-1}e^{-t} \, \mathrm{d}t \leq (\alpha n)^n e^{-\alpha n}. $$
So by the Stirling's approximation,
$$ \frac{1}{\Gamma(n)} \int_{0}^{\alpha n} t^{n-1}e^{-t} \, \mathrm{d}t
\leq \frac{(\alpha n)^n e^{-\alpha n}}{\sqrt{2\pi} \, n^{n-\frac{1}{2}} e^{-n}}
= \frac{1}{\sqrt{2\pi}} n^{1/2} (\alpha e^{1-\alpha})^n .
$$
Similarly, by noting that
$$ \int_{0}^{\alpha n} t^{n-1}e^{-t} \, \mathrm{d}t
\geq \int_{\alpha n-1}^{\alpha n} t^{n-1}e^{-t} \, \mathrm{d}t
\geq (\alpha n - 1)^{n-1} e^{-\alpha n+1}, $$
we have
$$ \frac{1}{\Gamma(n)} \int_{0}^{\alpha n} t^{n-1}e^{-t} \, \mathrm{d}t \geq c_{\alpha} n^{-1/2} (\alpha e^{1-\alpha})^{n} $$
for some constant $c_{\alpha} \in (0, 1)$ depending only on $\alpha$. This tells that
$$ \Gamma(n, \alpha n) = \Gamma(n) \left(1 - n^{\Theta(1)}(\alpha e^{1-\alpha})^n \right), $$
where $\Theta(1)$ represents a bounded sequence in $n$.
Remarks. A less precise asymptotic formula $\Gamma(n, \alpha n) = \Gamma(n)(1 - o(1))$ can be obtained by applying the central limit theorem to
$$ \frac{\Gamma(n, \alpha n)}{\Gamma(n)} = \mathbf{P}(\tau_1 + \cdots + \tau_n \geq \alpha n), $$
where $\tau_k$'s are independent $\operatorname{Exp}(1)$ variables.
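For integer $n$ the ratio $\Gamma(n,x)/\Gamma(n)$ also has the classical closed form $e^{-x}\sum_{k=0}^{n-1}x^k/k!$ (by repeated integration by parts), which makes the convergence to $1$ easy to see numerically. A small sketch with $a = 2$:

```python
from math import exp

def Q(n, x):
    """Regularized upper incomplete gamma Gamma(n, x)/Gamma(n) for integer n,
    via Q(n, x) = exp(-x) * sum_{k=0}^{n-1} x**k / k!."""
    term, total = 1.0, 0.0
    for k in range(n):
        total += term
        term *= x / (k + 1)
    return exp(-x) * total

# With a = 2 (second argument n/2), the ratio climbs toward 1 as n grows:
for n in (10, 20, 50):
    print(n, Q(n, n / 2))
```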
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4038859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Delta function and $\sum_{t}\exp\{ i k t\} $ I am interested in evaluating the following, which appears in an integrand,
$$
\sum_{t=0}^{\infty}e^{ikt}
$$
where $k$ is real. Using the following relation
$$
\sum_{t=-\infty}^{\infty}e^{ikt} = 2\pi\delta(k),
$$
the real part of it is
\begin{eqnarray}
\Re\left[\sum_{t=0}^{\infty}e^{ikt}\right]&=&\frac{1}{2}+\frac{1}{2}\Re\left[\sum_{t=-\infty}^{\infty}e^{ikt}\right]\\&=&\frac{1}{2}+\pi\delta(k).
\end{eqnarray}
Note the addition of $\frac{1}{2}$ term! This $\frac{1}{2}$ term does not agree with the following derivation using $\epsilon$ trick!
\begin{eqnarray}
\lim_{\epsilon\rightarrow0_{+}}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}e^{ikt}e^{-\epsilon t}&=&\lim_{\epsilon\rightarrow0_{+}}\lim_{T\rightarrow\infty}\frac{e^{\epsilon}-e^{ik\left(T+1\right)}e^{-\epsilon T}}{e^{\epsilon}-e^{ik}}\\
&=&\lim_{\epsilon\rightarrow0_{+}}\frac{1}{e^{\epsilon}-e^{ik}}
\end{eqnarray}
which becomes, when $k \rightarrow 0$,
\begin{eqnarray}
\lim_{k\rightarrow0}\lim_{\epsilon\rightarrow0_{+}}\frac{1}{e^{\epsilon}-e^{ik}}&=&\lim_{k\rightarrow0}\lim_{\epsilon\rightarrow0_{+}}\frac{1}{\epsilon-ik}\\
&=&\lim_{k\rightarrow0}\lim_{\epsilon\rightarrow0_{+}}\frac{i}{k+i\epsilon}\\&=&\lim_{k\rightarrow0}\left[\pi\delta\left(k\right)+i\mathcal{P}\frac{1}{k}\right]
\end{eqnarray}
where Sokhotsky's formula is used.
Now when $k\nrightarrow0$,
$$
\lim_{\epsilon\rightarrow0_{+}}\frac{1}{e^{\epsilon}-e^{ik}}=\frac{1}{1-e^{ik}}
$$
Therefore, for all $k$,
$$
\sum_{t=0}^{\infty}e^{ikt}=\pi\delta\left(k\right)+\mathcal{P}\frac{1}{1-e^{ik}}
$$
Note that the real part of it does not have $\frac{1}{2}$ addition term, instead has $\Re\mathcal{P}\frac{1}{1-e^{ik}}$.
What am I doing wrong?
Building on Svyatoslav's answer, which is in turn supported by vitamin d, the conclusion is the following:
$$
\sum_{t=0}^{\infty}e^{ikt} = \frac{1}{2} + \pi \delta(k) + \frac{i}{2} \mathcal{P} \cot (k/2)
$$
| The Dirichlet Kernel is a Fourier Series approximations to the Dirac delta. It relies on cancellation rather than decay away from $0$. The Fejér Kernel makes a better approximation to the Dirac delta since it is positive and decays to $0$ away from $0$.
$$
\sum_{k=-n}^ne^{ikx}=\underbrace{\frac{\sin\left(\frac{2n+1}2x\right)}{\sin\left(\frac12x\right)}}_{2\pi\times\text{Dirichlet Kernel}}
$$
If we multiply this by $e^{inx}$ we get
$$
\begin{align}
\sum_{k=0}^{2n}e^{ikx}
&=\frac{\sin\left(\frac{2n+1}2x\right)}{\sin\left(\frac12x\right)}\,e^{inx}\\[3pt]
&=\frac12\left(\vphantom{\frac{\frac12}{\frac12}}\!\right.\underbrace{\frac{\sin\left(\frac{4n+1}2x\right)}{\sin\left(\frac12x\right)}}_{2\pi\times\text{Dirichlet Kernel}}+1\left.\vphantom{\frac{\frac12}{\frac12}}\!\right)+\frac i2\cot\left(\frac12x\right)\left(\vphantom{\frac{\frac12}{\frac12}}\!\right.1-\underbrace{\frac{\cos\left(\frac{4n+1}2x\right)}{\cos\left(\frac12x\right)}}_{\substack{2\pi\times\text{Dirichlet Kernel}\\\text{rotated by $\pi$,}\\\text{which is killed}\\\text{by $\cot\left(\frac12x\right)$}}}\left.\vphantom{\frac{\frac12}{\frac12}}\!\right)
\end{align}
$$
So the real part tends to $\frac12(2\pi\delta+1)$ and the imaginary part tends to $\frac12\cot\left(\frac12x\right)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4039017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
How to prove that the line outside a convex polygon, having the minimum sum of distances is one of the edges? As part of a computational geometry question, I need a result for an intermediate step.
Suppose there is a convex polygon S, with finitely many points inside the polygon. Now, we want to find the line outside the polygon (i.e., the line can pass through a vertex or edge but cannot cross the interior) which minimizes the sum of perpendicular distances between the line and the points (vertices + interior points).
I have a hunch that we can prove that this line must be one of the edges, but I am unable to find a proof.
The simplest approach I tried was checking if rotating a line passing through vertex (but outside polygon) always decreases the sum till it reaches the edge. But this is clearly not true.
How to prove this?
Note 1: I think that this theorem must be true, but I have not seen it anywhere. It might be possible that it is actually false and the original question can be solved by a different method. If so, I would like to see a counterexample.
Note 2: In the original question, the only answer proposes the theorem but does not provide a proof.
| A distance from a point $(x_i,y_i) $ to a line given by the equation $ax+by+c=0 $ is
$$
\frac{|ax_i+by_i+c|}{\sqrt {a^2+b^2}}.\tag1
$$
Now because the line is outside of the hull polygon the expression $ax_i+by_i+c$ has the same sign for all points, which we without loss of generality will assume to be positive. Hence we may drop taking the absolute value and write for the sum of distances:
$$
D (a,b,c)=\sum_{i=1}^N\frac{ax_i+by_i+c} {\sqrt {a^2+b^2}}=N\frac{aX+bY+c} {\sqrt {a^2+b^2}},\tag2
$$
where $(X,Y) $ are coordinates of the centroid of the points.
Now observe that the right hand side in (2) is nothing else but the multiplied by $N$ distance from the centroid to the line which is indeed minimized by one of the polygon edges, as claimed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4039181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Understanding steps of a proof of a positive matrix being symmetric In the solution it's previously proven through weak formulation theorem that if we can show
$<Px,x>=<P^*x,x>$, then a positive matrix is symmetric.
Here is the written proof. We assume that $A$ is a positive matrix.
$<Ax,x>=(Ax)^*x$
$=((Ax)^*x)^*$
$=x^*(Ax)$
$=<x,Ax>$
$=<A^*x,x>$
I have two basic questions about the steps:
I don't understand the first step $<Ax,x>=(Ax)^*x$. How do we end up there?
My second question is about the second line, where it's stated that $(Ax)^*x=((Ax)^*x)^*$. Is this a property of symmetric matrices, and why aren't the $x$'s affected by this?
| *
*$<Ax,x>=(Ax)^*x$ is the definition of the inner product for two vectors $<x,y> = x^* y$
*$(Ax)^* x$ is a number, which is why it equals its transpose: $(Ax)^*x=((Ax)^*x)^*$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4039576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do you generate the numbers from an empty set? In a short 2007 Scientific American article, the former Harvard mathematician and author Dr Robert M Kaplan stated that:
in mathematics we can generate all numbers from the empty set
I've not been able to find the proof of that statement. I assume he means the natural numbers. I can see how you can get to the empty set by removing each number from a set of numbers, therefore, reverse that process. (Though I wonder if that applies to an infinite set?) But what is the start point? Is the start related to counting the span of the empty set? I.e. by counting the span of the empty set to get the first number then a proof can be shown that involves cardinality.
What is the straightforward proof that demonstrates that all numbers can be generated from the empty set?
| You can do better than just all natural numbers: you can construct every ordinal!
Nickname $\varnothing$ as $0$. For any $a$, let $a+1=a\cup\{a\}$. For any limit ordinal $\delta$, let $\delta=\bigcup\limits_{\alpha<\delta}\alpha$.
Hence, we've defined the ordinals as sets, just by starting with the empty set and building upwards.
Now how do we know that this is comprehensive? All we want to show is that every well-ordered set is in bijection with one of these ordinals. But that's a classic theorem you can find here.
(I'd be remiss to not mention that this construction is a bit circular, since we are assuming the existence of ordinals while constructing them, but if you want a proper treatment of this subject, there are plenty of texts for you.)
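The finite stage of this construction is concrete enough to run. A sketch in Python, with each natural number literally a (frozen) set built from the empty set:

```python
def succ(a):
    """a + 1 = a | {a}, the von Neumann successor."""
    return a | frozenset([a])

zero = frozenset()                   # 0 is the empty set
naturals = [zero]
for _ in range(5):
    naturals.append(succ(naturals[-1]))

# Each number n, built purely from the empty set, has exactly n elements
# (namely all of its predecessors):
print([len(n) for n in naturals])    # [0, 1, 2, 3, 4, 5]
print(naturals[2] == frozenset([naturals[0], naturals[1]]))   # True: 2 = {0, 1}
```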
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4039712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Given a point on a circle, find it on a given square Math is not my forte so I apologize. I have a seemingly simple problem and I'm unable to figure out the formula for this.
I'd describe the problem as given the center point of a circle and a point on the edge of the circle, find the same point on a square with the same length as the diameter of the circle.
The filled in points in this image are the ones I'm trying to find.
As an addition, I'd love some online resources on where I can learn how to do this kind of math and intuitively find the answer for myself in the future. Cheers.
|
You are looking for trigonometric functions. Good news is you need very simple ones. Here is a recap:
In the figure above, suppose the radius of the circle is 1. The horizontal diameter (in cyan) is called the cosine axis, and the vertical diameter (in magenta) the sine axis. Corresponding to each point $P$ on the circle are two points $X$ and $Y$ on the cosine and sine axes, respectively. For angle $\widehat a$ between $OA$ and $OP$ we have
$$\cos(a) = OX \quad , \quad \sin(a) = OY$$
The vertical line (in red) that is tangent to the circle at point $A$ is called the tangent axis, and the horizontal line (in blue) that is tangent to the circle at $B$ is called the cotangent axis. Now you can see that your task has to do with these two axes. In the example shown in the figure above, you are looking for point $Q$, which is on the tangent axis. The length $AQ$ is $\tan(a)$. Point $Q$ is on the tangent axis if angle $\widehat a$ is between $-45^\circ$ and $45^\circ$. If angle $\widehat a$ is between $45^\circ$ and $135^\circ$ then the point $Q$ falls on the cotangent axis and you will want to have $\cot(a)$. For other angles you can easily find the coordinates of point $Q$ by symmetry.
So, if angle $\widehat a$ is given, then depending on its value you are looking for $\pm\tan(a)$ or $\pm\cot(a)$. If instead of $\widehat a$, the coordinates $x$ and $y$ of $P$ are given, then you should only note that
$$x = \cos(a) \quad , \quad y = \sin(a)$$
$$\frac yx = \tan(a) \quad , \quad \frac xy = \cot(a)$$
and when $\widehat a = 45^\circ$, $\; x=y \;$ and $\; \tan(a) = 1 \;$.
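To make the mapping concrete, here is a small sketch (my addition, not part of the original answer). It assumes the circle is centered at the origin with radius $r$ and the square is axis-aligned with side $2r$, and it projects the point at angle `a` onto the square along the ray from the center:

```python
import math

def project_to_square(a, r=1.0):
    """Project the point at angle a on the circle of radius r (centered at
    the origin) onto the boundary of the concentric axis-aligned square of
    side 2r, along the ray from the center."""
    c, s = math.cos(a), math.sin(a)
    if abs(c) >= abs(s):
        # ray hits a vertical side (within 45 degrees of the horizontal axis)
        return r * math.copysign(1.0, c), r * s / abs(c)  # y-coordinate is +-r*tan(a)
    # otherwise it hits a horizontal side; x-coordinate is +-r*cot(a)
    return r * c / abs(s), r * math.copysign(1.0, s)

print(project_to_square(math.radians(30)))  # right side of the square
```

The first branch corresponds to the tangent axis in the answer (the value $\tan a$), the second to the cotangent axis.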
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4039873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Norm and metrics Let $E=C^1[a,b]$ and $\|\cdot\|_0$, $\|\cdot\|_1$ norms defined as $$\|f\|_0=\max_{x\in [a,b]} |f(x)|$$ and $$\|f\|_1=\max_{x\in [a,b]} |f(x)|+\max_{x\in [a,b]} |f'(x)|.$$
For $r>0$, consider the open ball with center in the origin defined by: $$B_r^0(0)=\{f \in E;\|f\|_0<r\}$$ and $$B_r^1(0)=\{f \in E;\|f\|_1<r\}.$$
*
*Prove that $B_r^1(0) \subset B_r^0(0)$.
*Prove that there is no $\epsilon>0$ such that $B_\epsilon^0 (0) \subset B_r^1(0)$.
Can I have some hints? This does not look very intuitive and I'm having some trouble to prove it.
| To get 1. let $f \in B_r^1(0)$, i.e. $\lVert f \rVert_0 + \lVert f' \rVert_0 < r$. Now we conclude as norms are non-negative:
$$
\lVert f \rVert_0 \leq \lVert f \rVert_0 + \lVert f' \rVert_0 < r
$$
So $f \in B_r^0(0)$.
To get 2. let us assume that such $\varepsilon$ exists. Then set $f_\delta: [a, b] \rightarrow \mathbb{R}$,
$$
f_\delta(x) := \frac{\varepsilon}{\sqrt{b-a +\varepsilon}} \cdot \sqrt{x-a+\delta}
$$
where $\delta \in (0, \varepsilon)$ is arbitrary. Clearly, $f_\delta \in C^1[a, b]$. Hence, for all $x \in [a, b]$:
$$
\lvert f_\delta(x) \rvert = \frac{\varepsilon}{\sqrt{b-a+\varepsilon}} \cdot \sqrt{x-a+\delta} \leq \frac{\varepsilon}{\sqrt{b-a+\varepsilon}} \cdot \sqrt{b-a+\varepsilon} = \varepsilon
$$
So $f_\delta \in B_\varepsilon^0(0)$. Then $f_\delta \in B_r^1(0)$ by assumption. So:
$$
r > \lVert f_\delta \rVert_0 + \lVert f_\delta' \rVert_0 \geq \lVert f_\delta' \rVert_0 = \max_{x \in [a, b]}\frac{\varepsilon}{2\sqrt{b-a +\varepsilon}} \cdot \frac{1}{\sqrt{x-a+\delta}} = \frac{\varepsilon}{2\sqrt{b-a +\varepsilon}} \cdot \frac{1}{\sqrt{\delta}} \overset{\delta \downarrow 0}{\longrightarrow} \infty
$$
This is a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4040022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find $\lim_{n \to \infty}\frac{1}{2}+\frac{3}{2^2}+\frac{5}{2^3}+ \dots +\frac{2n-1}{2^n}$ Find the following limit $$\lim_{n \to \infty}\left(\frac{1}{2}+\frac{3}{2^2}+\frac{5}{2^3}+ \dots +\frac{2n-1}{2^n}\right)$$
I can't come up with an idea. I notice that
$$\left(\frac{3}{2}+\frac{5}{2^2}+\frac{7}{2^3}+\cdots + \frac{2n+1}{2^n} \right)$$ is such that
$$ (\frac{3}{2}-\frac{1}{2})=1, \, (\frac{5}{2^2}-\frac{3}{2^2})=\frac{1}{2},\, \, (\frac{7}{2^3}-\frac{5}{2^3})=\frac{1}{4}\cdots (\frac{2n+1}{2^n}-\frac{2n-1}{2^n})=\frac{1}{2^{n-1}}\text{Which converges to 0 }$$
I also tried using terms of the form $\sum_{n=1}^{\infty}\frac{2n}{2^n}$, relating them to the original sum; and I considered the factorization and tried to sum terms of this kind: $$\frac{1}{2}\lim_{n \to \infty }\left(1+\frac{3}{2}+\frac{5}{2^2}+ \dots +\frac{2n-1}{2^{n-1}}\right).$$
Update:
I tried using partial sums of the form
$$S_1=\frac{1}{2},\ S_{2}=\frac{5}{2^2},\ S_{3}=\frac{15}{2^3},\ S_{4}=\frac{37}{2^4} $$
and tried to find $\lim_{n \to \infty }S_{n}$, but I can't find a pattern for the numerator.
Unfortunately I don't get nice results; I hope someone can give me an idea of how to start.
|
I hope the following answer helps. The given series is an infinite arithmetic-geometric series.
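As a quick numerical sanity check (my addition): the standard identities $\sum_{n\ge1} n/2^n = 2$ and $\sum_{n\ge1} 1/2^n = 1$ give the limit $2\cdot 2 - 1 = 3$, and the partial sums confirm this directly:

```python
# Partial sums of sum_{n>=1} (2n-1)/2^n, which equals 2*sum(n/2^n) - sum(1/2^n).
partial = 0.0
for n in range(1, 60):
    partial += (2 * n - 1) / 2 ** n
print(partial)  # approaches 3 (up to floating point)
```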
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4040187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
$\int_\gamma\textbf{u}\ d\textbf{r}$, wtih $\gamma$ a curve along an ellipse. Let $$\textbf{u}(x,y)=(y(8x+1),2y^2)\\ \gamma: 4x^2+y^2=1, \ \text{from}\ (-\frac{1}{4},\frac{\sqrt{3}}{2})\ \text{to} \ (\frac{\sqrt2}{4},\frac{\sqrt2}{2})$$
and calculate the integral $\int_\gamma\textbf{u}\ d\textbf{r}$.
So my take is to use Green's formula to get: $$\int_\gamma y(8x+1) dx+2y^2dy=\iint_D \frac{\partial}{\partial x}(2y^2)-\frac{\partial}{\partial y}(y(8x+1))\ dxdy=\iint_D -8x-1\ dxdy $$
I can substitute to polar coordinates; $0\leq r\leq 1, t_1=(-\frac{\pi}{2}-\arctan{(\frac{1}{2\sqrt3})})\leq \theta \leq \arctan{(2)}=t_2$) and get $$x=\frac{1}{2}r\cos(\theta)\\y=r\sin(\theta)\\J=\frac{1}{2}r$$
When I integrate $$\int_{t_1}^{t_2}\int_1^0 2r^2cos(\theta)+\frac{r}{2} drd\theta$$
This gives me the wrong evaluation, but I am not sure why. Is it because I have not defined the path of the curve back to its starting point, and thus am not getting a well defined shape, or is it something else?
| You cannot use Green's Theorem as the path is not a closed curve. So you will have to do line integral directly.
$\vec F = (8xy+y,2y^2)$
$\gamma: 4x^2+y^2=1, \ \text{from}\ \text{point A} (-\frac{1}{4},\frac{\sqrt{3}}{2})\ \text{to} \ \text{B} (\frac{\sqrt2}{4},\frac{\sqrt2}{2})$.
Parametrize the ellipse as $ \ \gamma(t) = (\frac{1}{2} \cos t, \sin t)$. Based on values of $x, y$ coordinates, point A is represented by $t = \frac{2\pi}{3}$ and point B by $t = \frac{\pi}{4}$.
$\gamma'(t) = (-\frac{1}{2} \sin t, \cos t)$
$\vec F(\gamma(t)) = (4 \sin t \cos t + \sin t, 2 \sin^2t)$
$\vec F(\gamma(t)) \cdot \gamma'(t) = - \frac{1}{2} \sin^2t$.
Now integrate going from point A to point B.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4040333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Field extension of $\Bbb Q$ of degree $4$ has reals Suppose $L / \mathbb{Q}$ is a field extension with $[L : \mathbb{Q}]$ = 4, with $L \not\subset \mathbb{R}$. Is it true that $L \cap \mathbb{R} \neq \mathbb{Q}$? If not, is it true if $L / \mathbb{Q}$ is normal?
I have tried to suppose $a \in L \cap \mathbb{R} \implies a \in \mathbb{Q}$, then from $z = a + b i \in L$ we get, $z + \bar{z} = 2a$ ($\bar{z} \in L$ if $L$ is normal at least). So $a \in \mathbb{Q}$. But I don't know how to proceed from there. I guess one should be able to show that $L = \mathbb{Q}(i)$ which contradicts degree $4$.
| Hint: As an example, consider a primitive 8th root of unity $\xi$. It obeys the equation $\xi^4=-1$.
Then $L = \{a+b\xi+c\xi^2+d\xi^3\mid a,b,c,d\in\Bbb Q\}$.
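To see the hint in action, a quick numerical check (my addition): the element $\xi - \xi^3$ of $L$ is real and equals $\sqrt2$, so $L\cap\mathbb{R}$ already contains $\mathbb{Q}(\sqrt2)\neq\mathbb{Q}$:

```python
import cmath, math

xi = cmath.exp(1j * math.pi / 4)  # a primitive 8th root of unity
print(xi ** 4)                    # ~ -1, so xi satisfies x^4 = -1
val = xi - xi ** 3                # the element 0 + 1*xi + 0*xi^2 + (-1)*xi^3 of L
print(val)                        # ~ sqrt(2) + 0j: real but irrational
```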
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4040648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
How do I compare two expressions in Maple to see if they’re equivalent? I have to compare two expressions as an assignment but the professor didn’t explain how to do it (I doubt he actually knows how to do the stuff he asks us to do). The two expression pairs in question:
tg(x) + tg(y) = (sin(x+y))/cos(x)*cos(y)
arcsin(x) + arcsin(y) = pi/2
Now I know these are true, but whenever I use evalb or is I invariably get false. I tried many different things but none of them work and I’ve searched the internet for help but I didn’t find anything useful. The most similar thing to mine was this but I tried the solution and it didn’t work. Please help, I’m losing my sanity.
| You may have mistyped tg(...) when you actually mean tan(...), where tan is the name of a trigonometric function in Maple.
In that case you seem to be missing brackets in the right hand side of the first equation. Note the difference in the output that the extra pair of brackets makes:
tan(x) + tan(y) = (sin(x+y))/cos(x)*cos(y);
sin(x + y) cos(y)
tan(x) + tan(y) = -----------------
cos(x)
tan(x) + tan(y) = (sin(x+y))/(cos(x)*cos(y));
sin(x + y)
tan(x) + tan(y) = -------------
cos(x) cos(y)
And now,
eq1 := tan(x) + tan(y) = (sin(x+y))/(cos(x)*cos(y)):
is(eq1);
true
For your second equation, note that the well-known constant is spelled Pi in Maple. The lowercase name pi has no special meaning to Maple.
You might try the solve command, as one way to reformulate the implied relationship between x and y. Eg,
eq2 := arcsin(x) + arcsin(y) = Pi/2:
solve(eq2);
2 1/2
{x = x, y = (-x + 1) }
This forum is for mathematics, and questions about Maple syntax and programming are better suited to stackoverflow.com or www.mapleprimes.com (the Maplesoft user community).
[edit] You provided a revision to your second equation in a comment to another answer. You gave it as,
eq2 := arcsin(x) + arccos(x) = Pi/2:
Note the conversion,
convert(arcsin(x), arccos);
1/2*Pi - arccos(x)
And so these all work here.
convert((lhs-rhs)(eq2),ln);
0
convert((lhs-rhs)(eq2),arcsin);
0
convert((lhs-rhs)(eq2),arccos);
0
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4040919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How do I simplify $34\csc{\frac{2\pi}{17}}$? I have $$ 34\csc\left(\dfrac{2\pi}{17}\right)$$ is equal to $$\dfrac{136}{\sqrt{8-\sqrt{15+\sqrt{17} + \sqrt{34 + 6\sqrt{17} - \sqrt{34-2\sqrt{17} } + 2\sqrt{ 578-34\sqrt{17}} - 16\sqrt{34-2\sqrt{17}} } }}}.$$
I want to rationalize it, but I am not sure where to start. Could anyone provide an explanation on how to rationalize this, and what should the answer be?
| $$\csc^2\left(\frac{2\pi}{17}\right)=\frac{1}{17}\left[102+17\sqrt{17}-17\frac{\sqrt{34-2\sqrt{17}}}{2}-17\frac{\sqrt{34+2\sqrt{17}}}{2}+\sqrt{17}\cdot \sqrt{A}\right]$$
$$A=850+204\sqrt{17}-143{\sqrt{34+2\sqrt{17}}}-113{\sqrt{34-2\sqrt{17}}}$$
$$\tan^2\left(\dfrac{2\pi}{17}\right)=5+3\sqrt{17}+5\frac{\sqrt{34-2\sqrt{17}}}{2}+5\frac{\sqrt{34+2\sqrt{17}}}{2}-\sqrt{A}$$
$$A=850+204\sqrt{17}+161{\sqrt{34+2\sqrt{17}}}+127{\sqrt{34-2\sqrt{17}}}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4041091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Solving a differential equation by subtracting $y$. I have the equation $$ \frac{dy}{dx} = y + 2$$ Normally you would isolate the $y$'s and the $x$'s, but what if I subtract $y$ and then solve the equation? That would look like the following. $$ \frac{dy}{dx} - y =2 $$ $$
\int(\frac{dy}{dx} - y)dx = \int2dx$$ $$ \int\frac{dy}{dx}dx - \int ydx = \int 2dx $$ $$ \int1dy - \int ydx = \int2dx $$ $$ y - yx = 2x + C$$ $$ y = \frac{2x+C}{1-x}$$ But when you check this you get an equation where the only solution that works is $x=0$. What is wrong with the math?
| $$ \frac{dy}{dx} = y + 2$$
you should do
$$\frac{dy}{y+2}=dx$$
and then integrate
$$\log(y+2)=x+C$$
$$y=e^{x+C}-2$$
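A quick numerical check of this solution (my addition; the constant and the sample points are arbitrary, and $y+2>0$ is assumed so the logarithm needs no absolute value): the derivative of $y=e^{x+C}-2$ should equal $y+2$ everywhere.

```python
import math

C = 0.7  # an arbitrary constant of integration
def y(x):
    return math.exp(x + C) - 2

h = 1e-6
for x in [-1.0, 0.0, 2.0]:
    dydx = (y(x + h) - y(x - h)) / (2 * h)  # central difference
    print(x, dydx, y(x) + 2)                # last two columns agree
```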
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4041260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Laurent polynomials $ \mathbb C[t,t^{-1}]$ is the localization of $\mathbb {C}[t].$ I want to prove this question:
Show that the ring of Laurent polynomials $ \mathbb C[t,t^{-1}]$ is the localization of the polynomial ring $\mathbb {C}[t].$
Localization is defined as follows: let $R$ be a commutative ring and $S\subset R$ multiplicatively closed. Define $\sim $ on $ S \times R $ by $\frac{r}{s} \sim \frac{r'}{s'}$ if and only if $t(rs' - r's) = 0$ for some $t \in S$; then $S^{-1}R$ is a commutative ring.
Still, I do not know how to prove this localization, do I have to find an isomorphism? or what?
| "Show that the ring of Laurent polynomials $\mathbb C[t,t^{-1}]$ is the localization of the polynomial ring $\mathbb C[t]$."
The problem does not make sense as you state it because, as others have pointed out in the comments, there are many localizations of $R =\mathbb C[t]$. Each localization is done with respect to a multiplicatively closed subset $S$ of $R$. You must choose a suitable multiplicatively closed subset $S$ so that $S^{-1}R = \mathbb C[t,t^{-1}]$.
By the way, the abstract definition you have provided of localization is not at all necessary here. The following observation will make the problem much more tractable:
What does a localization of an integral domain look like? If $A$ is an integral domain, and $K$ is its field of fractions, any localization $S^{-1}A$ of $A$ is a ring which contains $A$ and which is contained in $K$. Indeed, $$A \subseteq S^{-1}A = \{ \frac{a}{s} : a \in A, s \in S\} \subseteq K.$$
The abstract definition of $S^{-1}A$ as the set of equivalence classes of pairs in $S \times A$ is not necessary here and will only confuse you. Sending the class of $(s,a)$ to $as^{-1} \in K$ gives an isomorphism between the formal definition of $S^{-1}A$ and the definition I just provided, as a subring of the field $K$.
What multiplicatively closed set $S$ should I pick to obtain $\mathbb C[t,t^{-1}]$?
The set should obviously contain $1$ and $t$, as you will want $t$ to be invertible in the localization. What other elements does $S$ need to contain to be closed under multiplication?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4041385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probabilistic recursion I have the following recursion:
$$p_{t+1} = \begin{cases} 1 \text{ with probability } 1-p_t \\ \alpha p_t \text{ with probability } p_t \end{cases}$$
for $0<\alpha<1$. Numerical simulations show that $\mathbb{E}[p_t]$ converges to a steady-state value as $t\rightarrow \infty$, irrespective of the initial condition, only dependent on $\alpha$. I'm not sure how to approach solving it in closed-form. It seems like it should be possible.
I have tried solving it by imposing a self-consistent condition, i.e.
$$\mathbb{E}[p_{\infty}] = \begin{cases} 1 \text{ with probability } 1-\mathbb{E}[p_{\infty}] \\ \alpha \mathbb{E}[p_{\infty}] \text{ with probability } \mathbb{E}[p_{\infty}] \end{cases}$$
I'm not sure if this or the next step is valid; I can neither justify it nor state exactly what's going wrong (but something is wrong, as the answer I get does not match my simulations). Setting $\mathbb{E}[p_{\infty}] = 1\cdot(1-\mathbb{E}[p_{\infty}]) + \alpha \mathbb{E}[p_{\infty}] \cdot \mathbb{E}[p_{\infty}]$ and solving for $\mathbb{E}[p_{\infty}]$ gives a value which is different from simulations.
Can someone provide a solution or a resource for solving such probabilistic recurrences, or let me know where I'm going wrong and how to proceed?
| An observation that might prove fruitful for you, too much for a comment, but likely not the answer you're looking for. Anyway ...
It seems your interest is in the long-time behavior of $p_t$. When your process starts, there is considerable flexibility in how $p_t$ evolves. However, that changes the first time you hit the "set $p_t = 1$" situation. The next step after will necessarily yield $p_{t+1} = \alpha$, and the one following $p_{t+2}$ being set either to $\alpha^2$ or back to 1, etc. So after hitting that $p_t = 1$ reset button, all the values of $p_{t'}$ you generate are discrete and of the form
$1, \alpha, \alpha^2, \alpha^3, \dots$.
This helps you understand the later process, which you might imagine as having states
$S_0, S_1, S_2, S_3, \dots$. State $S_k$ has value $\alpha^k$, and you can write your process in terms of the transition matrix
\begin{equation}
T =
\begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & \cdots \\
1 - \alpha & 0 & \alpha & 0 & 0 & 0 & \cdots \\
1 - \alpha^2 & 0 & 0 & \alpha^2 & 0 & 0 & \cdots \\
1 - \alpha^3 & 0 & 0 & 0 & \alpha^3 & 0 & \cdots \\
1 - \alpha^4 & 0 & 0 & 0 & 0 & \alpha^4 & \cdots \\
1 - \alpha^5 & 0 & 0 & 0 & 0 & 0 & \cdots \\
\vdots & \vdots& \vdots & \vdots & \vdots & \vdots &
\end{bmatrix}
\end{equation}
The Perron eigenvector $[\pi_0, \pi_1, \pi_2, \dots]$ of $T$ gives you the distribution of the steady state, and the expected value of $p_t$ tending to
$p_\infty = \pi_0 + \pi_1\alpha + \pi_2\alpha^2 + \cdots$.
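A sketch of that computation (my addition; the truncation size and iteration count are arbitrary choices): approximate the stationary distribution of the truncated chain by power iteration, then read off $p_\infty = \sum_k \pi_k \alpha^k$.

```python
def steady_state_mean(alpha, n_states=60, iters=4000):
    """Approximate E[p_inf]: state S_k carries value alpha**k; from S_k the
    chain moves to S_{k+1} with probability alpha**k and back to S_0 with
    probability 1 - alpha**k (mass leaving the truncation is folded into
    S_0, which is harmless since alpha**n_states is negligible)."""
    dist = [1.0] + [0.0] * (n_states - 1)
    for _ in range(iters):
        new = [0.0] * n_states
        for k, mass in enumerate(dist):
            if mass == 0.0:
                continue
            right = alpha ** k
            if k + 1 < n_states:
                new[k + 1] += mass * right
                new[0] += mass * (1 - right)
            else:
                new[0] += mass  # truncation boundary: fold back to S_0
        dist = new
    return sum(mass * alpha ** k for k, mass in enumerate(dist))

print(steady_state_mean(0.5))  # differs from the naive fixed point 2 - sqrt(2)
```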
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4041508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Let $a$ be a non zero real number. Evaluate the integral $\int \frac{-7x}{x^{4}-a^{4}}dx$ I hit a wall on this question. Below are my steps
$$\int \frac{-7x}{x^{4}-a^{4}}dx=-7\int \frac{x}{x^{4}-a^{4}}dx$$
Let $u=\frac{x^2}{2}, dx = \frac{du}{x}, x^{4}=4u^{2}.$
$$-7\int \frac{1}{4u^{2}-a^{4}}du=-7\int \frac{1}{(2u+a^2)(2u-a^2)}du$$
Use partial fraction decomposition,
$$\frac{1}{(2u+a^2)(2u-a^2)}=\frac{A}{2u+a^{2}}+\frac{B}{2u-a^{2}}.$$
Solve for $A$ and $B$:
$$\begin{cases}
A=\frac{1}{-2a^{2}}
\\
B=\frac{1}{2a^{2}}
\end{cases}$$
Now $$\int \frac{1}{(2u+a^2)(2u-a^2)}du=\int \frac{1}{-2a^2(2u+a^{2})}+\int \frac{1}{2a^{2}(2u-a^{2})}$$
Factoring out $a$ yields
$$\frac{7}{2a^{2}}(\int \frac{1}{2u+a^{2}}-\int \frac{1}{2u-a^{2}})$$
Evaluate the integral and substitute $u=\frac{x^{2}}{2}$ back.
My final answer is $$\frac{7}{2a^{2}}(\log(x^2+a^2)-\log(x^2-a^2)).$$
Feedback says my answer is wrong. Where did I mess up?
| $$I=\int\frac{-7x}{x^4-a^4}dx=-7\int\frac{x}{x^4-a^4}dx$$
now for ease lets let $u=x^2\Rightarrow dx=\frac{du}{2x}$ and $b=a^2$ so:
$$I=-\frac72\int\frac{du}{u^2-b^2}$$
now let $u=bv\Rightarrow du=b\,dv$ so:
$$I=-\frac{7b}{2}\int\frac{dv}{b^2(v^2-1)}=-\frac{7}{2b}\int\frac{dv}{v^2-1}$$
now this is a standard integral that you can solve with PFD or using a substitution
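Carrying the computation through (my addition; note the resulting constant is $\frac{7}{4a^2}$, not $\frac{7}{2a^2}$ — the question's attempt dropped the $\tfrac12$ that comes from integrating $\frac{1}{2u\pm a^2}$, along with the absolute values), the antiderivative can be checked numerically:

```python
import math

a = 1.5  # an arbitrary nonzero value

def F(x):
    # candidate antiderivative of -7x/(x^4 - a^4)
    return 7 / (4 * a ** 2) * math.log(abs((x ** 2 + a ** 2) / (x ** 2 - a ** 2)))

def integrand(x):
    return -7 * x / (x ** 4 - a ** 4)

h = 1e-6
for x in [0.3, 2.0, 5.0]:          # sample points away from x = +-a
    dFdx = (F(x + h) - F(x - h)) / (2 * h)
    print(x, dFdx, integrand(x))   # last two columns agree
```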
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4041598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How to Taylor expand $1/(1- \frac{k}{r})$ where $k$ is some constant. Hi, I am doing some integrations and came across one step that I couldn't understand. The step is as follows:
$$\frac{dr}{1 - \displaystyle\frac{k}{r}}=dr\left(1 + \frac{k}{r-k}\right)$$ I don't know how we can come up with the right hand side from the left hand side. I know that using the Taylor series expansion we have $\frac{1}{1-x}=1 +x+x^2 + x^3+\dots$, but I am not able to apply that here. How did we get the right hand side $dr(1 + \frac{k}{r-k})$ from the left hand side of the same equation?
PS: k is a constant here and r is the variable term.
$$\frac{dr}{1 - \dfrac{k}{r}}=\frac{dr}{\dfrac{r-k}{r}}=dr\left(\frac{r}{r-k}\right)=dr\left(\frac{r-k+k}{r-k}\right)=dr\left(\frac{r-k}{r-k}+\frac{k}{r-k}\right)=dr\left(1+\frac{k}{r-k}\right)$$
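A quick numeric spot-check of the identity (my addition; the sample values are arbitrary, avoiding $r = 0$ and $r = k$):

```python
# Check 1/(1 - k/r) == 1 + k/(r - k) at a few sample points.
for r, k in [(3.0, 1.0), (10.0, 2.5), (-4.0, 1.5)]:
    lhs = 1 / (1 - k / r)
    rhs = 1 + k / (r - k)
    print(r, k, lhs, rhs)  # the two columns agree
```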
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4041725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Simplifying $f(\sqrt{7})$, where $f(x) = \sqrt{x-4\sqrt{x-4}}+\sqrt{x+4\sqrt{x-4}}$
If $f(x) = \sqrt{x-4\sqrt{x-4}}+\sqrt{x+4\sqrt{x-4}}$ ; then $f(\sqrt {7})=\; ?$
I tried solving this equation through many methods, I tried rationalizing, squaring, etc. But after each of them, the method became really lengthy and ugly.
I also noted that once we substitute $\sqrt {7}$ the inner part of the radical becomes imaginary. How to proceed with this piece of information?
Please help me with this problem. Any more innovative methods would be appreciated.
Answer: $4$.
Edit $[7^{th}$ March, $2021]$:
I was told yesterday that this question was wrong and that we were supposed to find $f(2\sqrt5)$. Although I can solve it with $2\sqrt5$ the same way as in the answers below, what I don't understand is why $\sqrt7$ doesn't work. Can someone please help?
| Set $\sqrt{x-4}=u \to x=u^2+4$, so
$$\sqrt{x-4\sqrt{x-4}}+\sqrt{x+4\sqrt{x-4}}=\\ \sqrt{u^2+4-4u}+\sqrt{u^2+4+4u}=\\
|u-2|+|u+2|=\\
|\sqrt{x-4}-2|+|\sqrt{x-4}+2|$$ but when dealing with imaginary numbers the absolute value signs are not needed.
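A numerical check (my addition): evaluating $f$ with principal complex square roots shows that both $f(2\sqrt5)$ (where everything is real) and $f(\sqrt7)$ (where $\sqrt{x-4}$ is imaginary but the two terms are complex conjugates) come out to $4$:

```python
import cmath, math

def f(x):
    s = cmath.sqrt(x - 4)  # imaginary when x < 4
    return cmath.sqrt(x - 4 * s) + cmath.sqrt(x + 4 * s)

print(f(2 * math.sqrt(5)))  # ~ (4+0j), the all-real case
print(f(math.sqrt(7)))      # ~ (4+0j), imaginary parts cancel
```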
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4041871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
} |
Well-Ordering Principle "proof" Theorem. Well-Ordering Principle.
Every non-empty subset of natural numbers has a least element.
I have seen some proofs of the theorem, but is a very "complex" proof really needed here?
My attempt of proof:
Let $$ D \subset \mathbb{N}= \left\{ 1, \ 2, \ 3, \ \dots \right\} $$
be an arbitrary non-empty subset of natural numbers. Therefore it has at least one element $$ n \in D.$$
Consider the finite set $$ \left\{1, \ \dots, \ n \right\}.$$
We check which of those natural numbers are elements of D. Then we choose the smallest one of those. There we have the least element.
| This is a proof by induction. The theorem states that every non-empty subset of $\mathbb{N}$ has a least element.
Let $A\subseteq \mathbb{N}$ be a set with no least element. We want to prove that $A=\varnothing $, that is $\forall n$ $\in \mathbb{N}$, $n\notin A$.
For induction on $n$ we have that:
$0\notin A$: in fact, if $0 \in A$ we would have $0=\min\mathbb{N}=\min A$, but $A$ has no least element.
Suppose now that $\{0,1,\dots,n\}\cap A=\varnothing $. What we want to prove is that $n+1\notin A$.
If $n+1\in A$, then $n+1\neq \min A$, since $A$ has no minimum by hypothesis. So there exists $m\in A$ with $m<n+1$, which contradicts the inductive hypothesis. Hence $n+1 \notin A$.
Hence $A=\varnothing$, as we wanted.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4042009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Real Mathematical Analysis Prelim problem 4.60 I am stuck on the following problem from Pugh's book:
Does there exist a continuous function $$f: [0,1] \rightarrow \mathbb{R}$$ such that $$\int_0^1 xf(x)\,dx = 1$$ and $$\int_0^1 x^n f(x)\,dx = 0$$ for $$n = 0,2,3,\ldots$$
The progress I made so far is that, for any polynomial $p$, we must have $p'(0) = \int_0^1 p(x)f(x)\,dx$. Also from Cauchy-Schwarz we can deduce that $\int_0^1 f^2(x) \, dx \geq 3$. We also know that the polynomials are dense in the space of continuous functions, but I don't know how to use this fact to solve the problem.
Thank you
| Let $q_n\in\mathbb{R}[X]$ such that $\|f-q_n\|_{\infty}\rightarrow 0$ and let $p_n=q_n+\frac{(1-x)^{m_n}}{m_n}q_n'(0)$ where $m_n=\lfloor|q_n'(0)|+1\rfloor^2+n+1$, then $p_n'(0)=0$ for all $n$ so that $\int_0^1 p_n(x)f(x)dx=0$ for all $n$. Moreover, $\|f-p_n\|_{\infty}\leqslant\|f-q_n\|_{\infty}+\frac{|q_n'(0)|}{m_n}\rightarrow 0$ thus, $$ \int_0^1 f(x)p_n(x)dx\rightarrow\int_0^1 f(x)^2dx $$
and therefore $\int_0^1 f(x)^2dx=0$ which means that $f=0$, $f$ being continuous. This contradicts the fact that $\int_0^1 xf(x)dx=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4042244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Why does an exponential function eventually get bigger than a quadratic I have seen the answer to this question and this one.
My $7$th grade son has this question on his homework:
How do you know an exponential expression will eventually be larger than any quadratic expression?
I can explain to him for any particular example such as $3^x$ vs. $10 x^2$ that he can just try different integer values of $x$ until he finds one, e.g. $x=6$. But how can a $7$th grader understand that it will always be true, that even $1.0001^x$ will eventually be greater than $1000 x^2$? They obviously do not know the Binomial Theorem, derivatives, Taylor series, L'Hopital's rule, limits, etc.
Note: that is the way the problem is stated, it does not say that the base of the exponential expression has to be greater than $1$. Although for base between $0$ and $1$, it is still true that there exists some $x$ where the exponential is larger than the quadratic, the phrase "eventually" makes it sound like there is some $M$ where it is larger for all $x>M$. So, I don't like the way the question is written.
| By shifting and rescaling, you can turn any comparison of a quadratic vs. exponential into a standard one. Say you are trying to determine which of $f(x) = \alpha(x-\beta)^2$ and $g(x) = ab^x$ grows faster. By shifting $x\rightarrow x+\beta$ you can turn this into a comparison of $f_{1}(x) = \alpha x^2$ and $g_{1}(x) = a' b^x$. Then by rescaling $x\rightarrow c x$ where $b^c = 2$, you turn the comparison into $f_2(x) = \alpha' x^2$ and $g_2(x) = a'' 2^x$.
So as long as you can convince your son that $2^x$ grows faster than $a x^2$, all other exponentials must also grow faster than all other quadratics.
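The claim is easy to check by brute force even for the extreme example from the question (my addition): keep doubling $x$ until $1.0001^x$ overtakes $1000x^2$.

```python
# Find a point where the "slow" exponential passes the "big" quadratic.
x = 1
while 1.0001 ** x <= 1000 * x * x:
    x *= 2
print(x, 1.0001 ** x, 1000 * x * x)  # the exponential is now far larger
```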
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4042364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62",
"answer_count": 14,
"answer_id": 10
} |
Classifying subgroups of $\mathbb{Z}^n$ up to isomorphism I believe that the only subgroups of $\mathbb{Z}^n$ up to isomorphism are $\{0\}$ and $\mathbb{Z}^m$, with $m\leq n$.
This is because if $0 \neq z \in H < \mathbb{Z}^n$, then $\langle z\rangle\cong \mathbb{Z}\hookrightarrow H$.
However, I'd like to get this formally (for instance, by a result on a lemma or a more formal proof). Could you help me?
Thank you so much
| Since $\Bbb Z^n$ is a finitely-generated abelian group, the result follows from the Fundamental Theorem of Finitely-generated Abelian Groups, which is a classification result described in this Wikipedia article.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4042529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Powers of transcendental numbers that lead to integers For a given real number $x>0$ define $a_k(x)$ as follows for $k \geq 1$:
$a_1(x)=x$, $a_{k+1}(x)=x^{a_k(x)}$, so that $a_2(x)=x^x$, $a_3(x)=x^{x^x}$....
Question 1: Is there a explicit transcendental number $x$ such that $a_k(x)$ is an integer for a $k>0$?
Question 2: Let $Y:= \{x \in \mathbb{R} \mid x>0$, $x$ is transcendental and there exists $k >0$ such that $a_k(x)$ is an integer $\}$. Is $Y$ measurable and if yes, what is its measure?
The question is motivated by https://www.youtube.com/watch?v=BdHFLfv-ThQ where it is discussed whether $a_4(\pi)$ is an integer.
| For question 2, the answer is that the set is countable, so it is measurable and has measure $0$. Given any $n\gt 1$ and $k\gt 1$ we can see $a_k(x)$ is monotonically increasing with $x$, so we can implicitly invert it. We can find $x$ numerically to whatever precision we want, but generally you won't find a formula for it.
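A sketch of that numerical inversion (my addition; the function names and the bracketing interval are mine): for $x>1$ the tower $a_k(x)$ is increasing in $x$, so bisection recovers an $x$ with $a_k(x)$ equal to a chosen integer.

```python
def a(k, x):
    """The tower a_k(x): a_1(x) = x, a_{k+1}(x) = x ** a_k(x)."""
    v = x
    for _ in range(k - 1):
        v = x ** v
    return v

def invert(k, target, lo=1.0, hi=2.0):
    """Bisect for x in [lo, hi] with a_k(x) = target (a_k increasing here)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if a(k, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = invert(2, 3)   # solve x**x = 3
print(x, a(2, x))  # x ~ 1.825, and a_2(x) ~ 3
```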
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4042667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Show that $A$ is dense in $X$ if and only if every nonempty open set of $X$ contains a point of $A$. Show that $A$ is dense in $X$ if and only if every nonempty open set of $X$ contains a point of $A$.
I did not think there was much to prove for this, if using the theorem: $x \in \bar A$ if, for each open set $U$ containing $x$, $U \cap A \neq \varnothing$.
Attempt:
$(\Rightarrow)$
If A is dense, then $\bar A=X$. So for each $x \in X$, there is an open set $U$ containing $x$ with $U \cap A \neq \varnothing$. Since $x$ was arbitrary this holds for each $x \in X$, so each open set in $X$ contains points in $A$.
$(\Leftarrow)$If each open set contains points in $A$, for any $x \in X$ and open set $U$ containing $x$, $U \cap A \neq \varnothing$. So $\bar A=X$.
I do not know if this proof is actually okay. I have seen a different proof in "topology without tears", that is much more involved, using a proof by contradiction in one direction. Am I over simplifying this, and is my proof unsuccessful? Any help will or criticism be appreciated.
| It's indeed that simple: if $\overline{A} = X$ and $U$ is non-empty open, then we can argue as you did: let $x \in U$ then $x \in \overline{A}$ , so any open neighbourhood of $x$ intersects $A$ (in particular $U$ too) so $U \cap A \neq \emptyset$.
Conversely, if $A$ intersects every non-empty open set, then $\overline{A}=X$ or else $U = X\setminus \overline{A}$ would be a non-empty open set disjoint from $A$, or: let $x \in X$, $U$ any open neighbourhood of $x$, by assumption $A$ intersects $U$ so by the criterion you mentioned $x \in \overline{A}$, so $X \subseteq \overline{A}$ and so $X=\overline{A}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4042778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculation in Routh's theorem The proof of Routh's theorem concludes with showing$$1-\frac{x}{zx+x+1}-\frac{y}{xy+y+1}-\frac{z}{yz+z+1}=\frac{(xyz-1)^2}{(xz+x+1)(xy+y+1)(yz+z+1)}.$$I seek an elegant "proof from the book" of this, rather than one that involves tedious, potentially error-prone algebra. In particular, it feels like symmetries, degree-counting etc. should make this obvious, rather than an accident where the numerator happens to have a nice factorization. My best approaches are these two:
Option 1
The LHS's first two, three and four terms have respective sums $\frac{zx+1}{zx+x+1}$,$$\frac{(zx+1)(xy+y+1)-y(zx+x+1)}{(zx+x+1)(xy+y+1)}=\frac{x^2yz+zx+1}{(zx+x+1)(xy+y+1)}$$and$$\frac{x^2yz+xz+1}{(zx+x+1)(xy+y+1)}-\frac{z}{yz+z+1}=\frac{(x^2yz+xz+1)(yz+z+1)-z(zx+x+1)(xy+y+1)}{(zx+x+1)(xy+y+1)(yz+z+1)}.$$The numerator is nine monic terms of degree $0$ to $6$ minus nine monic terms of degree $1$ to $5$, so the terms of degree $0$ and $6$ will survive as $(xyz)^2+1$, and any other surviving term(s) will have coefficients summing to $-2$. The problem's symmetries mandate $-2xyz$ to finish the job.
That's quite nice, but the third partial sum probably can't be done in one's head. What one can say, however, is the third partial sum's numerator will be six terms of degrees $0$ to $4$ minus three of $1$ to $3$, so a $0$ and a $4$ survives, but it's harder to deduce without calculation that the third uncancelled term will be of degree $2$.
Option 2
This one looks like it might end up more elegant at first, but it looks like it ultimately requires some of Option 1's techniques to finish.
The case $x=\tfrac{1}{yz}$ has left-hand side$$1-\frac{1}{yz+z+1}-\frac{yz}{yz+z+1}-\frac{z}{yz+z+1}=0,$$so the general case's numerator must be divisible by $xyz-1$. In the special case $x=y=z$, the left-hand side is$$1-\frac{3x}{x^2+x+1}=\frac{(x-1)^2}{x^2+x+1}=\frac{(x^3-1)^2}{(x^2+x+1)^3}.$$The most obvious generalization with appropriate symmetries and denominator is $\frac{(xyz-1)^2}{(xz+x+1)(xy+y+1)(yz+z+1)}$, as desired. The most general numerator is of the form$$(xyz-1)(xyz+1+p(x,\,y,\,z)),$$where $p$ is invariant under a cyclic permutation of $x,\,y,\,z$, with $p(x,\,x,\,x)=0$ and $p\left(\tfrac{1}{yz},\,y,\,z\right)=0$.
The first constraint makes $p$ a polynomial in $a:=x+y+z,\,b:=xy+yz+zx,\,c:=xyz$; the second ensures that polynomial vanishes when $a=3x,\,b=3x^2,\,c=x^3$. These are achievable with a factor such as $a^2-3b$, $a^3-27c$, $ab-9c$ or $b^3-27c^2$. The third constraint only adds one requirement, divisibility by $c-1$. Ultimately, some careful degree-counting is needed to prove $p=0$.
| The question asks to find an elegant proof of
$$ 1-\frac{x}{zx\!+\!x\!+\!1}-\frac{y}{xy\!+\!y\!+\!1}
-\frac{z}{yz\!+\!z\!+\!1}= \\
\frac{(xyz-1)^2}
{(zx\!+\!x\!+\!1)(xy\!+\!y\!+\!1)(yz\!+\!z\!+\!1)}. \tag{1} $$
In order to simplify algebraic manipulation define
$$ X:= zx+x+1,\quad Y:= xy+y+1,\quad Z:= yz+z+1. \tag{2}$$
Move all terms to the same side of the equation and
eliminate all denominators to get
$$ 0 = -X\,Y Z+x\, Y Z+X\,y\, Z+X\,Y z+(1-xyz)^2. \tag{3} $$
Define the polynomial expression
$$ A := X\,Y Z-x\, Y Z-X\,y\, Z-X\,Y z. \tag{4} $$
Proving equation $(3)$ and equation $(1)$ is equivalent to
proving $$ A = (1-xyz)^2.\tag{5} $$
One possible proof is to identify the homogeneous parts of
the degree six polynomial $\,A.\,$
For degrees $0$ and $6$ the only contribution is from the
first term of $\,A\,$ and thus
$$ A_0 = 1, \qquad A_6 = (xyz)^2. $$
For similar reasons,
$$ A_1 = (x+y+z)-x-y-z=0. $$
$$ A_5 = (xyz)^2(1/x+1/y+1/z) - (xyz)^2(1/z+1/x+1/y) = 0.$$
$$ A_2 = 2(zx+xy+yz)-x(y+z)-y(x+z)-z(x+y)=0. $$
$$A_4 = 2xyz (x+y+z)-xyz ((x+y)+(y+z)+(z+x))=0. $$
$$ A_3 \!=\! (4xyz\!+\!z^2x\!+\!x^2y\!+\!y^2z) \\
\!-\!xy(2z\!+\!x)\!-\!yz(2x\!+\!y)\!-\!zx(2y\!+\!z)
\!=\! -2xyz. $$
Putting all the parts together proves equation $(5)$.
This is essentially expanding the polynomial expression
$\,A\,$ and doing the same with $\,(1-xyz)^2.$
Another possible proof is to note that $\,A\,$ is a
polynomial in $\,x,y,z\,$ with maximum degree of each
variable being $2$. There are $3^3=27$ possible
monomials in $\,A.\,$ If equation $(5)$ can be proved
for at least $27$ generic values of $\,x,y,z\,$ then
the equation holds in general. Consider
$$ x=a/b,\;\; y=b/c,\;\; z=c/a,\;\;
x y z = 1, \\ d:=a+b+c,\quad (X,Y,Z) =
d\Big(\frac1b,\;\frac1c,\;\frac1a\Big), \\
A = \frac{d^2}{abc}(d-a-b-c) = 0 = (1-1)^2. $$
We can choose any nonzero values for $\,a,b,c\,$
which gives more than enough values of $\,x,y,z\,$
which proves equation $(5)$.
A simpler variation of this proof uses
$$ x = y = z =: w,\quad x y z = w^3,\quad X=Y=Z=1+w+w^2 =: W,
\\ A = W^3-3wW^2 = W^2(W-3w) = (W(1-w))^2 = (1-w^3)^2. $$
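
As an independent sanity check (not part of the proof), the identity $A = (1-xyz)^2$ can be verified in exact rational arithmetic. The sketch below is my own addition (the function name `routh_identity_holds` is made up for this check); it uses only the standard-library `fractions` module:

```python
from fractions import Fraction
import random

def routh_identity_holds(x, y, z):
    # X, Y, Z as in equation (2), A as in equation (4)
    X = z*x + x + 1
    Y = x*y + y + 1
    Z = y*z + z + 1
    A = X*Y*Z - x*Y*Z - X*y*Z - X*Y*z
    return A == (1 - x*y*z)**2

random.seed(0)
for _ in range(100):
    x, y, z = (Fraction(random.randint(-9, 9), random.randint(1, 9))
               for _ in range(3))
    assert routh_identity_holds(x, y, z)
print("A == (1 - xyz)^2 at 100 random rational points")
```

Since both sides are polynomials of degree at most $2$ in each variable, agreement at sufficiently many generic points forces equality — exactly the counting argument used in the second proof above.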
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4043063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Finding the area of shaded region Consider a circle with the equation $x^2 + y^2 = 1$. Now find the area of the shaded region which we denote by $u$.
image of circle:
I'm trying to approach this question and would like support on my integral calculation. The difficulty I'm having is with substitution.
For example, given that:
$$u = (OM)(MP)+2\int^{1}_{x} \sqrt{1-x^2} \space dx$$
I should be getting:
$u = xy + [x\sqrt{1-x^2}-\cos^{-1}x]^{1}_{x}$
Although, I currently lack the technical skills to get this answer.
My approach:
I thought that I could substitute back in $x$ for $\sin(x)$ to get $\sqrt{1-\sin^2(x)}$, then when $u = \sin(x) \implies du = \cos(x)$, so that I get:
$$2\int^{1}_{x} \sqrt{1-\sin^2(x)}\cdot \cos(x) \space dx$$
However, this does not lead me close to the answer.
I have also tried inputting this equation into symbolab, and get:
$\arcsin \left(x\right)+\frac{1}{2}\sin \left(2\arcsin \left(x\right)\right)+2C$
Which is also wrong. What might be the approach towards this?
In Progress:
I've currently rearranged my substitution and it looks promising:
$u = 1-\sin^2 (x) \implies du = -\cos(x)\,dx \implies -\arccos(x)\,du = dx$
I cannot manage with this approach either, as I'll have to integrate the square root.
Looking at the equation, I'll need something like:
$\sec(x) - \arccos(x)$ to convert the secant into $x\sqrt{x^2-1}$
| Alternative method:
The area of the shaded region is $\frac{1}{2}r^2\theta,\ $ where $r$ is the radius of the circle and $\theta\ $ is the angle in radians that the region makes at the centre of the circle $O$.
$r = 1,\ $ therefore the area is $\frac{1}{2}\times1^2\times \theta = \frac{1}{2}\theta.$
Now, $\angle POM = \frac{\theta}{2},\ $ so we can express $\frac{\theta}{2}\ $ in terms of the $x-$coordinate of $M = (M_x,M_y)$, $M_x$ say. Looking at the triangle $OPM,\ $ we know that $OP=1,\ $ therefore $\frac{\theta}{2} = \cos^{-1}\left(\frac{M_x}{1}\right) = \cos^{-1}(M_x) \implies \theta = 2\cos^{-1}(M_x).$
So area of shaded region $= \frac{1}{2} \times 2\cos^{-1}(M_x) = \cos^{-1}(M_x).$
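
If you still want to see that this agrees with the integral expression in the question, here is a hedged numerical sketch (the name `shaded_area` is my own, and the integral is approximated by a simple midpoint rule) comparing $xy + 2\int_x^1 \sqrt{1-t^2}\,dt$ with $\cos^{-1}(M_x)$:

```python
import math

def shaded_area(x, n=100_000):
    """u = (OM)(MP) + 2 * integral_x^1 sqrt(1 - t^2) dt, with M = (x, 0)."""
    y = math.sqrt(1 - x*x)          # MP
    h = (1 - x) / n
    # midpoint rule for the integral from x to 1
    integral = h * sum(math.sqrt(max(0.0, 1 - (x + (k + 0.5)*h)**2))
                       for k in range(n))
    return x*y + 2*integral

for x in (0.0, 0.3, 0.6, 0.9):
    assert abs(shaded_area(x) - math.acos(x)) < 1e-4
print("integral formula agrees with arccos(M_x)")
```

The two expressions match to well within the quadrature error, consistent with the analytic identity $x\sqrt{1-x^2} + 2\int_x^1\sqrt{1-t^2}\,dt = \cos^{-1}(x)$.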
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4043185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
What is $(7^{2005}-1)/6 \pmod {1000}$? What is $$\frac{7^{2005}-1}{6} \quad(\operatorname{mod} 1000)\:?$$
My approach:
Since $7^{\phi(1000)}=7^{400}=1 \bmod 1000, 7^{2000}$ also is $1 \bmod 1000$.
So, if you write $7^{2000}$ as $1000x+1$ for some integer $x$, then we are trying to find $((1000x+1)\cdot(7^5)-1)/6 = (16807000x + 16806)/6 \pmod {1000}$.
Obviously, this must be an integer, so $x=3y$ for some $y$. Then, we are trying to find $16807000\cdot 3y/6+2801 \pmod {1000} = 500y+801 \pmod {1000}$. However, this value can be $301$ or $801$, and I am not sure how to find which one is correct.
Any help is appreciated!
| We must multiply the modulus by $\,6\,$ to balance the division by $6$, i.e.
$\qquad 6\mid a \,\Rightarrow\, a/6 \bmod 1000 = (a \bmod 6000)/6\ $ by the mod Distributive Law
$6000\!=\! 2^4\cdot 3\cdot 5^3$ whose totients $2^3,2,100\mid 2000\,$ so $\,\color{#c00}{7^{2000}\!\equiv 1}\bmod 2^4,3,5^3$ so also mod $6000,\,$ so $\bmod 6000\!:\ a\equiv 7^5 \color{#c00}{7^{2000}}-1 \equiv 7^5-1\equiv4806,\,$ so $\bmod 1000\!:\ a/6\equiv 4806/6 \equiv 801$.
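
A quick Python check of this computation, following the answer's use of the mod distributive law (work modulo $6\cdot 1000 = 6000$, then divide by $6$); the built-in three-argument `pow` performs the modular exponentiation:

```python
# Work modulo 6 * 1000 = 6000 to balance the later division by 6.
a = (pow(7, 2005, 6000) - 1) % 6000
assert a % 6 == 0            # the quotient is an integer, as it must be
print(a // 6)                # residue of (7**2005 - 1)/6 mod 1000

# Direct confirmation with exact big-integer arithmetic:
assert ((7**2005 - 1) // 6) % 1000 == a // 6
```

Both routes give the residue $801$, matching the answer.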
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4043349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Wedge sum of contractible spaces Let $\vee_i U_i$ be the wedge sum of a set of based spaces $(U_i,x_i)$. Suppose each $U_i$ is a contractible neighborhood of $x_i$, then I want to prove that $\vee_i U_i$ is also contractible. (J. P. May mentions this statement in Section 2.8 of his book A Concise Course in Algebraic Topology)
My attempt: Suppose $F_i: c_{x_i} \simeq id_{U_i}$ are homotopies, where $c_{x_i}: U_i \to U_i$ is the constant map with value $x_i$. I tried to construct a homotopy $F$ between $c_x:\vee_i U_i \to \vee_i U_i$ and $id_{\vee_i U_i}$ via $F|_{U_i}=F_i$, where $x$ is the common image of the $x_i$ in the wedge. But the problem is that for fixed $t \in [0,1]$, the $F_i(x_i, t)$ are different in general, so $F$ is not well-defined at $x$.
Q: How to explicitly contruct a homotopy between $c_x$ and $id_{\vee_i U_i}$?
The statement is wrong; the Griffiths Twin Cone is a counterexample, as suggested by JHK in the comment above. So when May says $U_i$ is a contractible neighborhood of $x_i$, I guess he means $x_i$ is a strong deformation retract of its open neighborhood $U_i$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4043490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is such function continuous? Can anyone help me see why the function $f: \bar{\mathbb{Q}} \to \mathbb{Q} $ where $\bar{\mathbb{Q}}$ denotes the set of algebraic numbers,
$$f(x)=\begin{cases}1,&\ \text{if the real part of $x$ is greater than $\pi$, and}\\
0,&\ \text{otherwise.}\end{cases}$$
is continuous?
| Note that $z \in \mathbb C$ is algebraic over $\mathbb Q$ if and only if $Re(z), Im(z)$ are algebraic over $\mathbb Q$. Now, to show that $f$ is continuous we need only show that $f^{-1}[0]$ and $f^{-1}[1]$ are closed. Indeed, $f^{-1}[1]$ is precisely those $z \in \overline{\mathbb Q}$ such that $Re(z) > \pi$. Since $Re(z)$ is algebraic and $\pi$ is transcendental, $Re(z) > \pi$ iff $Re(z) \geq \pi$. Now, let $S \subseteq \mathbb C$ be defined as $\{z \in \mathbb C : Re(z) \geq \pi\}$. The map $Re: \mathbb C \longrightarrow \mathbb R$ is just projection, so it is continuous. Thus, $S = Re^{-1}[[\pi, \infty)]$ is closed. Furthermore, as discussed, $S \cap \overline{\mathbb Q} = f^{-1}[1]$. Then as $S$ is closed in $\mathbb C$, $S \cap \overline{\mathbb Q} = f^{-1}[1]$ is closed in the subspace topology.
$f^{-1}[0]$ consists of $Re(z) \leq \pi$ so essentially the same argument shows $f^{-1}[0]$ is closed. Hence, $f$ is continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4043598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |