The uniqueness of homogeneous matrices of some geometric transformations Given the description of a specific geometric transformation, its homogeneous square matrix can be obtained by the methods described in many textbooks.
For example, many textbooks suggest constructing the homogeneous matrix of a reflection by multiplying a series of translations, rotations, and so on with the reflection about some coordinate plane, which can be obtained immediately.
Specifically, the homogeneous matrix of the reflection about the plane $x+2y+z+5=0$ can be obtained as:
$$-\dfrac{1}{3}\left[
\begin{array}{cccc}
-2 & 2 & 1 & 5 \\
2 & 1 & 2 & 10 \\
1 & 2 & -2 & 5 \\
0 & 0 & 0 & -3 \\
\end{array}
\right]$$
and more generally, a reflection about the plane $a X+b Y+c Z+d=0$ can be written as:
$$\left[
\begin{array}{cccc}
-a^2+b^2+c^2 & -2 a b & -2 a c & -2 a d \\
-2 a b & a^2-b^2+c^2 & -2 b c & -2 b d \\
-2 a c & -2 b c & a^2+b^2-c^2 & -2 c d \\
0 & 0 & 0 & a^2+b^2+c^2 \\
\end{array}
\right]$$
Though the construction process for a specific transformation is generally not unique (not only in the choice of the intermediate coordinate-plane reflection, but also in the selection and order of the matrix factors), and of course it is not the definition of the transformation, it does prove the existence of the specific reflection's homogeneous matrix. But how can we prove the uniqueness?
I have similar puzzles, i.e., the proof of the uniqueness of their homogeneous matrices, on such geometric transformations as translation, central projection, rotation, shear, reflection, and so on.
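The matrices above can be checked numerically. Here is a minimal sketch (NumPy is used for convenience; the helper name `reflection_matrix` is mine, not from any textbook): the general matrix squares to a multiple of the identity, fixes points of the plane, and reproduces the specific matrix quoted above.

```python
import numpy as np

def reflection_matrix(a, b, c, d):
    """Homogeneous matrix of the reflection about a*X + b*Y + c*Z + d = 0,
    with entries scaled by a^2 + b^2 + c^2 as in the general formula above."""
    n2 = a*a + b*b + c*c
    return np.array([
        [-a*a + b*b + c*c, -2*a*b,           -2*a*c,           -2*a*d],
        [-2*a*b,            a*a - b*b + c*c, -2*b*c,           -2*b*d],
        [-2*a*c,           -2*b*c,            a*a + b*b - c*c, -2*c*d],
        [0,                 0,                0,                n2],
    ], dtype=float)

M = reflection_matrix(1, 2, 1, 5)      # the plane x + 2y + z + 5 = 0
n2 = 1**2 + 2**2 + 1**2                # = 6

# Reflecting twice is the identity, up to the overall scale n2^2.
assert np.allclose(M @ M, n2**2 * np.eye(4))

# A point on the plane, e.g. (0, 0, -5), is fixed (in homogeneous coordinates).
p = np.array([0.0, 0.0, -5.0, 1.0])
q = M @ p
assert np.allclose(q / q[3], p)

# The general formula reproduces the specific matrix quoted above (scale 1/n2).
S = -(1.0/3.0) * np.array([[-2, 2, 1, 5], [2, 1, 2, 10],
                           [1, 2, -2, 5], [0, 0, 0, -3]], dtype=float)
assert np.allclose(M / n2, S)
```

Note that the two matrices differ by an overall scalar, which already hints at the kind of non-uniqueness homogeneous coordinates allow.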
| As long as the origin is fixed, these are all ordinary linear transformations, so they are uniquely determined by their matrix, as any linear map $\varphi:\Bbb R^n\to\Bbb R^m$ is just a (left) multiplication by the matrix $M=[\varphi(e_1)\,|\dots|\,\varphi(e_n)]$ where $e_1,\dots,e_n$ is the standard basis of $\Bbb R^n$.
Thus $M\cdot v=\varphi(v)$ for every $v\in\Bbb R^n$.
[Observe that, for any matrix, $M\cdot e_i$ gives the $i$th column of $M$.]
Conversely, if $M\cdot v=\varphi(v)$ for all $v$, then in particular, it applies to the standard basis vectors $e_i$, which shows that the $i$th column of $M$ must be $\varphi(e_i)$, thus proving uniqueness.
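A one-line numerical illustration of this column characterization (a sketch; any matrix works):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(-5, 6, size=(3, 4)).astype(float)  # some linear map R^4 -> R^3

for i in range(4):
    e_i = np.zeros(4)
    e_i[i] = 1.0
    # M @ e_i is exactly the i-th column of M, so a matrix representing a
    # linear map is forced column-by-column by the images of the basis vectors.
    assert np.allclose(M @ e_i, M[:, i])
```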
If the origin is not fixed, we talk about affine transformations and bring in one more coordinate and the projective plane/space.
Specifically, we embed $\Bbb R^n$ into $\Bbb R^{n+1}$ at the affine hyperplane $H:=\{(x_1,\dots,x_n,1):x_i\in\Bbb R\}$, and apply the transformation $H\to H$ by extending it to a linear transformation $\Bbb R^{n+1}\to\Bbb R^{n+1}$.
This extension is only unique up to a scalar multiple.
See also https://en.wikipedia.org/wiki/Transformation_matrix#Affine_transformations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3402003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to show every proper subgroup is cyclic? Let $|G| = p^2$ where $p$ is prime. Show that every proper subgroup of $G$ is cyclic.
I don't know how to approach this problem.
Here is what I have:
From Lagrange's theorem, for any subgroup $H\subset G$, $|H|$ divides $|G|$. Now, the order of $G$ is $p^2$. We need to show that the order of $H$ is equal to $|\langle a \rangle |$ for some $a \in H$. (Not sure!)
| If $H$ is a subgroup, by Lagrange's theorem $|H|$ is either $1$ or $p$ ($p^2$ is ruled out because $H$ is proper). Now, every group of prime order is cyclic (why?).
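This can be checked exhaustively for a small prime, say $p=5$ (a brute-force sketch; it uses the fact that every subgroup of a group of order $p^2$, which is abelian, needs at most two generators, so enumerating $\langle g,h\rangle$ over all pairs finds every subgroup):

```python
from itertools import product

p = 5  # any small prime works here

def generated(gens, add, zero):
    """Subgroup generated by gens, written additively, via closure."""
    H, frontier = {zero}, [zero]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = add(x, g)
            if y not in H:
                H.add(y)
                frontier.append(y)
    return frozenset(H)

def all_proper_subgroups_cyclic(elements, add, zero):
    # every subgroup of a group of order p^2 has at most 2 generators
    subs = {generated([g, h], add, zero) for g in elements for h in elements}
    cyclic = {generated([g], add, zero) for g in elements}
    return all(len(H) in (1, p) and H in cyclic      # Lagrange, and cyclic
               for H in subs if len(H) < len(elements))

# Z_{p^2}
ok1 = all_proper_subgroups_cyclic(list(range(p * p)),
                                  lambda u, v: (u + v) % (p * p), 0)
# Z_p x Z_p
ok2 = all_proper_subgroups_cyclic(list(product(range(p), repeat=2)),
                                  lambda u, v: ((u[0] + v[0]) % p,
                                                (u[1] + v[1]) % p), (0, 0))
assert ok1 and ok2
```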
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3402181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Books about synthetic projective geometry Are there books in English about synthetic projective geometry? More specifically, results of Karl von Staudt (imaginary elements theory through elliptical involutions, imaginary circle, infinity's imaginary circle)
| There's lots to learn from "modern" texts such as Richter-Gebert, Coxeter, et al. But the heyday of synthetic projective geometry appears to have been in the 19th and early 20th centuries. After that, both research and pedagogy moved to other topics in math.
For the older texts archive.org is your friend, going back to Poncelet's groundbreaking Traité des propriétés projectives des figures which introduces the principle of continuity, an early view of imaginary elements.
Some examples: Milne's Cross-Ratio Geometry (which makes reference to imaginary elements), and Pickford's Elementary Projective Geometry.
Dialing in on your interest in imaginary elements, Chapter XXVII of Russel's Pure Geometry discusses imaginary points and lines and gives some practical constructions.
Also check out Coolidge's Geometry of the Complex Domain (the final chapter is on Von Staudt Theory) and Hatton's Theory of the Imaginary in Geometry.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3402323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Does there exist a field $K$ and $x \in K$ with $K(x^{1/4})=K(x^{1/2}) \neq K$ I want to know if there exists a field $K$ and $x \in K$ with $K(x^{1/4})=K(x^{1/2}) \neq K$.
My attempt thus far: if $K(x^{1/2}) \neq K$ then $x^{1/2} \notin K$ so the minimal polynomial of $x^{1/2}$ over $K$ has degree $>1$ and is therefore $X^2-x$. Then $[K(x^{1/2}):K] = 2$ and $\{1, x^{1/2}\}$ is a $K$-basis for $K(x^{1/2})$. Since $K(x^{1/4})=K(x^{1/2})$ we have $x^{1/4} \in K(x^{1/2})$ so there exist $a, b \in K$ such that $x^{1/4} = a + bx^{1/2}$.
Then I tried rearranging and squaring both sides etc to see if I could find something useful but I didn't get anywhere... Any tips? I don't even know whether such a $K$ exists at all!
| Let $s=x^{1/4}$ and $d= x^{1/2}$.
$s\in K(d) \implies s=a+bd \implies s^2 =a^2 +b^2d^2 +2abd \implies d=a^2+b^2x+2abd$
$\implies d(1-2ab) = a^2 +b^2x$
$\implies d= (1-2ab)^{-1}(a^2+b^2x)$ if $1-2ab\neq 0$, so $d\in K$; but this is absurd since $K(d)\neq K$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3402460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Prove that if $x$ is the greatest lower bound of $U$, then $x$ is the least upper bound of $B$
Suppose $R$ is a partial order on $A$ and $B \subseteq A$. Let $U$ be the set of all upper bounds for $B$.
Prove that if $x$ is the greatest lower bound (or g.l.b) of $U$, then $x$ is the least upper bound (or l.u.b) of $B$.
My attempt:
Suppose $L_u$ is the set containing all lower bounds of $U$.
Suppose $x$ is g.l.b. of $U$.
Take an arbitrary $b \in B$.
Take $u \in U$. We know that $bRu$. Since $u$ was arbitrary, we conclude that $b$ is a lower bound for $U$, which means $b \in L_u$.
Since $x$ is the g.l.b. of $U$, we have $x \in L_u$, and $bRx$ because $b$ is a lower bound and $x$ is the greatest one.
Since $b$ was an arbitrary element of $B$, it follows that $x$ is an upper bound of $B$, thus $x \in U$.
Since $x \in L_u$, it also follows that for all $u \in U$ we have $xRu$, hence $x$ is the least element of $U$, which implies that $x$ is the l.u.b. of $B$. $\Box$
Is it correct?
| My attempt:
Consider $L$ to be the set defined as $L = \{a \in A \mid \forall u \in U\ (aRu)\}$. That is, $L$ is the set of all lower bounds on $U$. If $x$ is the greatest lower bound of $U$, then $x \in L$ and thus $\forall u \in U\ (xRu)$. Suppose $u \in U$. Then, in particular, $xRu$. Since $u$ was arbitrary, $\forall u \in U\ (xRu)$. That is, $x$ is the least upper bound of $B$.
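A brute-force check of the statement on a small poset, the subsets of $\{0,1,2\}$ ordered by inclusion (a sketch; all helper names are mine):

```python
from itertools import combinations

# Poset: all subsets of {0,1,2} ordered by inclusion.
base = [frozenset(s) for r in range(4) for s in combinations(range(3), r)]
leq = lambda a, b: a <= b   # the partial order R (subset relation)

def upper_bounds(S):
    return [u for u in base if all(leq(b, u) for b in S)]

def lower_bounds(S):
    return [l for l in base if all(leq(l, u) for u in S)]

def greatest(S):
    for m in S:
        if all(leq(x, m) for x in S):
            return m
    return None

def least(S):
    for m in S:
        if all(leq(m, x) for x in S):
            return m
    return None

# For every B (here of size <= 3): the greatest lower bound of U, the set of
# upper bounds of B, coincides with the least upper bound of B.
for r in range(4):
    for B in combinations(base, r):
        U = upper_bounds(B)
        x = greatest(lower_bounds(U))
        assert x is not None and x == least(U)
```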
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3402643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Heat Equation with Neumann condition Suppose $u$ solves $u_t-u_{xx} = 16u$ on the interval $(0, \pi),$ with the homogeneous Neumann condition $u_x=0$ at $x=0,\pi$. Characterize the initial data $u_0=u(x,0)$ for which $u(x,t)$ stays bounded as $t \rightarrow\infty$.
I am not sure how to approach this PDE, thus any help/guidance is highly appreciated. Thanks!
| $$\begin{cases}
u_t -u_{xx} = 16u\\
u_x(0,t) = u_x(\pi,t)=0\\
u(x,0) = u_0
\end{cases}$$
Let $u(x,t) = X(x)T(t)$ and $\frac{dT(t)}{dt} = \mathring{T}(t)$
Therefore,
$u_t -u_{xx} = 16u$ becomes $\frac{\mathring{T}(t)}{T(t)}-16 = \frac{X''(x)}{X(x)} = \lambda$.
Here, $\lambda$ is a constant because "[t]he left hand side is independent of $t$. The right hand side is independent of $x$. The two sides are equal. So both sides must be independent of both $x$ and $t$ and hence equal to some constant, say [$\lambda$]". (I actually still don't understand this sentence... If anyone can help me, please!)
$T(t):$ $T(t) = ce^{(16+\lambda )t}$ for some constant $c$.
$X(x): $
There are three cases - $\lambda = 0 , \lambda > 0, \lambda < 0$.
i) In my experience, $\frac{X''(x)}{X(x)} = \lambda > 0 $ usually yields only the trivial solution, i.e. $X(x) = 0$, so I am going to skip the calculation here. I have done the calculation to verify, and it does give only the trivial solution.
ii) $\frac{X''(x)}{X(x)} = \lambda = 0 $ gives $X(x) = c_1 x + c_2$; the Neumann conditions force $c_1 = 0$, so a constant solution $X(x)=c_2$ survives. (This case is not trivial under Neumann conditions.)
iii) $\frac{X''(x)}{X(x)} = \lambda = -\mu^2 < 0 $
This is the fun part.
Solving the second order DE, $X(x) = A\cos (\mu x) + B\sin(\mu x)$.
Plugging in the boundary conditions: $X'(0)=0$ gives $B=0$, and $X'(\pi)=0$ gives $\mu=n$ for $n=1,2,3,\dots$; write $A = a_n$.
Therefore, $$u(x,t) = \frac{a_0}{2}e^{16t}+\sum^\infty_{n=1}a_n \cos (nx)\, e^{(16-n^2)t}$$ (the constant $n=0$ mode comes from the $\lambda=0$ case).
Now, we can use the Fourier cosine series of the initial condition to find the coefficients:
$$a_n = \frac{2}{\pi}\int^{\pi}_0 u_0(x) \cos(nx)\,dx, \qquad n=0,1,2,\dots$$
Since the mode $\cos(nx)$ evolves like $e^{(16-n^2)t}$, which grows for $n=0,1,2,3$, stays bounded for $n=4$, and decays for $n\ge 5$, the solution $u(x,t)$ stays bounded as $t \rightarrow\infty$ exactly when $a_0=a_1=a_2=a_3=0$, i.e. when the initial data $u_0$ is orthogonal in $L^2(0,\pi)$ to $1$, $\cos x$, $\cos 2x$, and $\cos 3x$.
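Not part of the derivation, but a quick finite-difference sanity check: a separated mode $\cos(nx)$ with $16-n^2<0$ should decay, while one with $16-n^2>0$ should blow up (a sketch with NumPy; grid sizes and tolerances are my choices):

```python
import numpy as np

def solve_heat(u0, T=1.0, nx=64, dt=2e-4):
    """Explicit finite differences for u_t = u_xx + 16 u on (0, pi)
    with homogeneous Neumann conditions (one-sided ghost-point reflection).
    Returns max |u| at time T."""
    x = np.linspace(0.0, np.pi, nx)
    dx = x[1] - x[0]          # dt/dx^2 ~ 0.08 < 0.5, so the scheme is stable
    u = u0(x).astype(float)
    for _ in range(int(T / dt)):
        uxx = np.empty_like(u)
        uxx[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
        uxx[0] = 2*(u[1] - u[0]) / dx**2      # enforces u_x(0) = 0
        uxx[-1] = 2*(u[-2] - u[-1]) / dx**2   # enforces u_x(pi) = 0
        u = u + dt * (uxx + 16*u)
    return np.max(np.abs(u))

decayed = solve_heat(lambda x: np.cos(5*x))   # 16 - 25 < 0: decays
grew    = solve_heat(lambda x: np.cos(x))     # 16 - 1  > 0: grows
assert decayed < 1e-2 and grew > 1e3
```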
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3402798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Polynomial solution $xy''+(1-x)y'+ \lambda y=0$
For which values of the constant $\lambda$ does the differential
equation $$xy''+(1-x)y'+ \lambda y=0$$ have a polynomial solution?
I was thinking about solving this problem with the Theorem of Frobenius (even though I am not sure if I should use it in this case, since I don't know what regular singular point I should take).
Then we have $$y''+ \frac{1-x}{x}y'+ \frac{\lambda}{x} y=0$$
Then we have that $x=0$ is a regular singular point, that means that there exists at least one solution of the form $y= \sum_{n=0}^{\infty} a_n (x-x_0)^{n+r}$, where $r$ and $a_n$ are constants.
So what? I know that the solution exists but I need to determine $\lambda$ and this theorem doesn't really help.
Any other solutions would be highly appreciated. Thank you!
| Suppose that $\lambda$ gives a polynomial solution $y(x)=\sum_{k=0}^na_kx^k$ with $a_n\ne 0$. Then
$$y'(x)=\sum_{k=0}^nka_kx^{k-1}=\sum_{k=0}^n(k+1)a_{k+1}x^k$$
and
$$y''(x)=\sum_{k=0}^nk(k-1)a_kx^{k-2}=\sum_{k=0}^n(k+1)ka_{k+1}x^{k-1},$$
where we set $a_{n+1}=0$. Hence
$$xy''(x)=\sum_{k=0}^n(k+1)ka_{k+1}x^k$$
and
$$(1-x)y'(x)=y'(x)-xy'(x)=\sum_{k=0}^n(k+1)a_{k+1}x^k-\sum_{k=0}^nka_kx^k.$$
Therefore
$$xy''(x)+(1-x)y'(x)+\lambda y(x)=\sum_{k=0}^n\Big((k+1)ka_{k+1}+(k+1)a_{k+1}-ka_k+\lambda a_k\Big)x^k.$$
This shows that
$$(k+1)^2a_{k+1}=(k-\lambda)a_k$$
for every $k=0,1,2,\ldots,n$. In particular, when $k=n$, we have
$$0=(n+1)^2a_{n+1}=(n-\lambda)a_n.$$
Since $a_n\neq 0$, we get $\lambda=n$. Thus $\lambda$ must be a non-negative integer.
Now if $\lambda=n$, then we can then show by induction that
$$a_k=\frac{(-1)^k}{k!}\binom{n}{k}a_0$$
for $k=0,1,2,\ldots,n$. This yields a polynomial solution
$$y(x)=a_0\sum_{k=0}^n\frac{(-1)^k}{k!}\binom{n}{k}x^k=a_0L_n(x),$$
where $L_n$ is the $n$th Laguerre polynomial.
For a general $\lambda$, the solution $y(x)$ is a linear combination of $L_\lambda(x)$ and $U_\lambda(x)$, where
$$L_\lambda(x)=\sum_{k=0}^\infty\frac{(-1)^k}{k!}\binom{\lambda}kx^k={_1F_1}(-\lambda;1;x)$$
is the Laguerre function with parameter $\lambda$, and $$U_\lambda(x)=x^\lambda\ {_2F_0}\left(-\lambda,1-\lambda;;-\frac1x\right).$$
Here ${_p}F_q$ is the generalized hypergeometric function.
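A quick symbolic check with SymPy (its built-in `laguerre` gives $L_n$): each $L_n$ solves the ODE with $\lambda=n$, and its coefficients match the closed form derived from the recurrence with $a_0=1$.

```python
import sympy as sp

x = sp.symbols('x')
for n in range(6):
    y = sp.expand(sp.laguerre(n, x))    # L_n(x), a degree-n polynomial
    # L_n solves x y'' + (1 - x) y' + n y = 0
    residual = x*sp.diff(y, x, 2) + (1 - x)*sp.diff(y, x) + n*y
    assert sp.simplify(residual) == 0
    # coefficients match a_k = (-1)^k / k! * C(n, k), normalized so a_0 = 1
    for k in range(n + 1):
        assert y.coeff(x, k) == (-1)**k * sp.binomial(n, k) / sp.factorial(k)
```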
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3402921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $z(t)=\frac{1+it}{1-it}$ describes a circumference on the complex plane. I am asked to prove that $z(t)=\frac{1+it}{1-it}$ describes a circumference on the complex plane when $t$ takes every value in the extended real number line. That is, $\mathbb{R} \cup\{\pm\infty\}$. I don't have any idea how to proceed with this. Furthermore, Wolfram Alpha does not plot anything resembling a circumference when the function is plugged in. Am I not understanding the problem well enough?
| Method 1
Observe that $$|z|=\frac{|1+it|}{|1-it|}$$
$$=1$$
because the denominator and numerator are conjugates, hence of equal modulus. This is the equation of a circle centred at the origin with radius $1$.
Method 2
Put $$t=\tan \frac{\theta}{2}, \quad 0\leq \theta<2\pi,\ \theta\neq \pi$$
Now,
$$z=\frac{1-t^2}{1+t^2}+i\frac{2t}{1+t^2}$$
$$=\cos\theta+i\sin\theta$$
which is the equation of the circle with radius $1$. This method has the nice property that it gives you the polar coordinates of the point directly. It also reveals an important piece of information: if $t$ varies over the real numbers only, the circle is not complete, since the point $(-1,0)$ is not in the locus. So you need the extended real number system to complete the circle.
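Both methods are easy to spot-check numerically (a small sketch using the standard library; tolerances are my choices):

```python
import math

for t in [-1e6, -3.0, -1.0, 0.0, 0.5, 2.0, 1e6]:
    z = (1 + 1j*t) / (1 - 1j*t)
    # Method 1: numerator and denominator are conjugates, so |z| = 1
    assert abs(abs(z) - 1.0) < 1e-12
    # Method 2: z = e^{i*theta} with t = tan(theta/2), theta in (-pi, pi)
    theta = 2 * math.atan(t)
    assert abs(z - complex(math.cos(theta), math.sin(theta))) < 1e-9
    # the point -1 is never reached for finite t
    assert z != -1
```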
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3403016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Existence of limit of a function from existence of another limit of a function
Let $a \in \mathbb{R}$ and suppose that $f:\mathbb{R} \to \mathbb{R}$
is a function such that $f(x) \ge 1,\ \forall x \in \mathbb{R}$. If
$\lim_{x \to a}\bigg[\frac{1}{\sqrt{f(x)}} + \sqrt{f(x)}\bigg]$
exists, then prove that $\lim_{x \to a} f(x)$ exists.
HINT: If $f(x) \ge B > 0$ and $(\lim_{x \to a} f(x))^2$ exists then $\lim_{x \to a}(f(x))^2$ exists.
Since we are given that $\lim_{x \to a}\bigg[\frac{1}{\sqrt{f(x)}} + \sqrt{f(x)}\bigg]$ exists, we can multiply this limit by itself to have
$$\lim_{x \to a}\bigg[\frac{1}{\sqrt{f(x)}} + \sqrt{f(x)}\bigg]\cdot\lim_{x \to a}\bigg[\frac{1}{\sqrt{f(x)}} + \sqrt{f(x)}\bigg] = \bigg(\lim_{x \to a}\bigg[\frac{1}{\sqrt{f(x)}} + \sqrt{f(x)}\bigg]\bigg)^2$$
exists. Therefore, by hint
$$\lim_{x \to a}\bigg[\bigg(\frac{1}{\sqrt{f(x)}} + \sqrt{f(x)}\bigg)^2\bigg] = \lim_{x \to a}\bigg[\frac{1}{f(x)} + f(x)+2\bigg] = 2+\lim_{x \to a}\bigg(\frac{1}{f(x)} + f(x)\bigg)$$
exists. We also know that $0<\frac{1}{f(x)} \le 1$ is bounded and $f(x) \ge 1$. But I am stuck here, since I can't separate $\lim_{x \to a}\big(\frac{1}{f(x)} + f(x)\big)$ into $\lim_{x \to a}\frac{1}{f(x)} + \lim_{x \to a} f(x)$, as I don't know whether those limits separately exist. I thought about a proof by contradiction, assuming that $\lim_{x \to a}f(x)$ does not exist, but could not go any further. Any hint or advice is appreciated. Thank you in advance.
| Let's write $u=f(x), v=\sqrt{f(x)} $. Then it is known that $u\geq 1,v\geq 1$ and $v+(1/v)=w$ tends to a limit $l$. Next note that $w=v+(1/v)\geq 2$ via AM-GM inequality and hence $l\geq 2$.
Solving for $v$ we get $$v=\frac{w\pm\sqrt{w^2-4}}{2}$$ Since $v\geq 1$, the $+$ sign must be chosen above. Hence we have $$v=\frac{w+\sqrt{w^2-4}}{2}$$ and thus, by continuity of the square root function, $v$ tends to $$\frac{l+\sqrt{l^2-4}}{2}$$ Then, by continuity of the square function, $u=f(x) =v^2$ tends to the square of this number.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3403140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove $xP(|X_1|>x) \leq E(|X_1|1(|X_1|>x))$? It is from the proof of a theorem in Probability: Theory and Examples by Durrett.
The theorem is stated as below.
Let $X_1, X_2,\dots$ be i.i.d with $E|X_i|<\infty$. Let $S_n=X_1+\dots+X_n$ and let $u=E(X_1)$. Then $S_n/n \rightarrow u$ in probability.
The proof in the book shown as below:
Proof: Two applications of the dominated convergence theorem imply
$xP(|X_1|>x)\leq E(|X_1|1(|X_1|>x))\rightarrow 0$ as $x\rightarrow \infty$
$u_n= E(|X_1|1(|X_1|\leq n))\rightarrow E(X_1)=u$ as $n\rightarrow \infty$
My question is: how to prove $xP(|X_1|>x) \leq E(|X_1|1(|X_1|>x))$ and $E(|X_1|1(|X_1|>x))\rightarrow 0$?
Can anyone explain it in details? Many thanks!
| The first inequality $x\mathbb P(|X_1|>x) \le \mathbb E\left[|X_1|1_{(|X_1|>x)}\right]$ is just a use of Markov inequality:
$$\mathbb E\left[|X_1|1_{(|X_1|>x)}\right] \ge \mathbb E\left[x\cdot 1_{(|X_1|>x)}\right] \ge x\mathbb P(|X_1|>x)\qquad (\because |X_1| >x\text{ on the support}).$$
The dominated convergence theorem implies:
Since $|X_1| 1_{(|X_1|>x)}$ is bounded by the integrable function $|X_1|$,
$$\lim_{x\to\infty}\mathbf E\left[|X_1| 1_{(|X_1|>x)}\right] = \mathbf E\left[ \lim_{x\to\infty} |X_1| 1_{(|X_1|>x)}\right] = \mathbf E\left[ 0\right]=0.$$
The proof of $u_n\to u$ is similar to the above.
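A numerical illustration of both facts, simulating $|X_1|$ as a standard exponential (an illustration only, not a proof; sample size and thresholds are my choices):

```python
import random

random.seed(1)
N = 200_000
samples = [random.expovariate(1.0) for _ in range(N)]  # |X_1| ~ Exp(1)

def lhs_rhs(x):
    tail = [s for s in samples if s > x]
    return x * len(tail) / N, sum(tail) / N   # x P(|X|>x),  E[|X| 1_{|X|>x}]

for x in [0.5, 1.0, 2.0, 5.0]:
    xp, e = lhs_rhs(x)
    # Markov: every tail sample s satisfies s > x, so the inequality even
    # holds sample-by-sample, not just in expectation
    assert xp <= e

# dominated convergence: the tail expectation vanishes as x grows
# (for Exp(1) the exact value is (x + 1) e^{-x})
assert lhs_rhs(8.0)[1] < 0.01
```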
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3403254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find real parameter $a$, such that the solution of the linear system lies in the second quadrant
For which values of the real parameter $a$ does the solution of the system of equations
$$\begin{aligned} \frac{x}{a+1} + \frac{y}{a-1} &= \frac{1}{a-1}\\ \frac{x}{a+1} - \frac{y}{a-1} &= \frac{1}{a+1} \end{aligned}$$
lie in the second quadrant?
I do not know how to start solving this system of equations. Any help?
| Solve your system : you should find
$$x=\frac{a}{a-1},\quad y=\frac{1}{a+1}$$
You want $x<0$, so $a$ and $a-1$ must be of opposite signs. So you should have $a>0>a-1$, hence $0<a<1$, and this leads to $y>0$.
The solution is : $(x,y)$ is in the second quadrant if and only if $0<a<1$, or $0\le a<1$ if you look for the closed second quadrant.
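The elimination can be checked symbolically (a sketch using SymPy):

```python
import sympy as sp

x, y, a = sp.symbols('x y a')
sol = sp.solve([x/(a + 1) + y/(a - 1) - 1/(a - 1),
                x/(a + 1) - y/(a - 1) - 1/(a + 1)], [x, y])
assert sp.simplify(sol[x] - a/(a - 1)) == 0
assert sp.simplify(sol[y] - 1/(a + 1)) == 0

# second quadrant (x < 0, y > 0), e.g. at a = 1/2 inside 0 < a < 1:
val = {a: sp.Rational(1, 2)}
assert sol[x].subs(val) < 0 and sol[y].subs(val) > 0
```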
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3403343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to compute the normalizer $N(H)$?
Let $H$ be a subgroup generated by $(12)$ in $S_3$. Compute the normalizer $N(H)$ of $H$.
My attempt:
If $\sigma \in N(H)$, then $\sigma H \sigma^{-1}=H$, so $\sigma (12) \sigma^{-1}=(12)$,
which implies $\sigma (12)= (12)\sigma$.
Now applying both sides to $3$
$\sigma (3) (12) =(12) \sigma(3)$
$(12)$ cancel both each other
$\sigma (3) = \sigma(3)$
I think I'm using the wrong logic.
Any hints/suggestion on how to compute the normalizer $N(H)$?
| Hint: For any $\sigma\in S_n$, we have $$\sigma(12)\sigma^{-1}=(\sigma(1)\,\sigma(2)),$$ where $\sigma(i)$ denotes $\sigma$ applied to $i$, for $i$ in the set that $S_n$ acts on.
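Using the hint, one can confirm by brute force that $N(H)=H$ in $S_3$ (a sketch; permutations are written $0$-indexed as tuples of images, so $(12)$ becomes the tuple `(1, 0, 2)`):

```python
from itertools import permutations

def compose(s, t):
    """(s o t)(i) = s(t(i))."""
    return tuple(s[t[i]] for i in range(3))

def inverse(s):
    inv = [0] * 3
    for i, si in enumerate(s):
        inv[si] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
e, t12 = (0, 1, 2), (1, 0, 2)          # identity and the transposition (12)
H = {e, t12}

# N(H) = all sigma with sigma H sigma^{-1} = H
N = {s for s in S3
     if {compose(compose(s, h), inverse(s)) for h in H} == H}

assert N == H    # <(12)> is self-normalizing in S_3
```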
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3403491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Proving that a function is injective and strictly increasing I have recently started doing proofs, and I find it quite hard to construct one. I am not really clear on what proofs should generally look like. What information should they include? Do we always need to suppose something is true and then conclude that our assumption, along with the reasoning we provided, implies something else (namely, what we are asked to prove)? More specifically, questions such as the two below are perfect examples of questions where I am not sure what I am supposed to show in the format of a proof.
*
*Let I ⊆ ℝ be an interval. We say that a function f: I → ℝ is strictly increasing if whenever $a,b ∈ I$ satisfy $a < b$, then $f(a) < f(b)$. Show that a strictly increasing function is injective.
That's what I have done:
Given that a function is increasing for some a,b ∈ R, assume a≠b. Since the function is strictly increasing, then $a < b$, which would also imply that $f(a)<f(b)$. Therefore, this shows that $f(a)≠f(b)$ and that the function is injective as needed to be shown.
*Suppose f: I → ℝ is strictly increasing (i.e. whenever $a,b ∈ I$ satisfy $a < b$, then $f(a) < f(b)$). Show that $f^{-1}$ is also strictly increasing.
For this question, should I apply the same reasoning as that above, and add that taking the inverse of that function wouldn't change the fact that $a < b$? Would that be sufficient?
| You have the right ideas, but you need to be more rigorous in how you make the arguments.
Let's start with the definition of being injective. A function is injective when for all $a,b$ in the domain, $a\neq b\to f(a)\neq f(b)$. So, let $a\neq b$. By the law of trichotomy, $a<b$ or $a>b$. Without loss of generality let $a<b$. Then, by assumption, we have that $f(a)<f(b)\to f(a)\neq f(b)$, which is what we desired.
For the second proof, we write down the definition of a strictly increasing function: for any $a,b$ in the domain where $a<b$, $f(a)<f(b)$. So, consider $c,d$ in the domain of $f^{-1}$ such that $c<d$. By definition of the inverse, there exist $a,b$ in the domain of $f$ such that $f(a)=c,\;f(b)=d$. We seek to show that $a < b$. We know that $f$ is injective and $c\neq d$, so $a\neq b$. By the law of trichotomy, $a<b$ or $a>b$. If $a>b$, then by assumption $f(a)>f(b)$, i.e. $c>d$, a contradiction. Hence, $a<b$ as desired.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3403611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Without calculus, show that the sign of $(1+a)^{1+x}-a^{1+x}-1$ matches the sign of $x$, for any positive $a$ Let $a$ be a positive real constant. Consider the function
$$
f(x) = (1+a)^{1+x}-a^{1+x}-1
$$
We have for any choice of $a$, that $f(x) >0$ for $x>0$ and $f(x) <0$ for $x<0$, which can be proved using calculus.
My question:
How would a proof go without using calculus?
| For positive $x$ we have
$$
\begin{align}
\frac{1+a^{1+x}}{(1+a)^{1+x}} &= \left( \frac{1}{1+a}\right)^{1+x} + \left( \frac{a}{1+a}\right)^{1+x}\\
&=
\frac{1}{1+a} \left( \frac{1}{1+a}\right)^{x} + \frac{a}{1+a}\left( \frac{a}{1+a}\right)^{x} \\
&< \frac{1}{1+a} + \frac{a}{1+a} = 1 \, .
\end{align}
$$
For negative $x$ the same holds with $<$ replaced by $>$.
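A quick numerical spot-check of the claimed sign pattern (not a proof; the sample points are arbitrary):

```python
def f(a, x):
    return (1 + a)**(1 + x) - a**(1 + x) - 1

for a in [0.1, 0.5, 1.0, 3.0, 10.0]:
    for x in [-0.9, -0.5, -0.1, 0.1, 0.5, 2.0]:
        # the sign of f(x) matches the sign of x, for every positive a
        assert (f(a, x) > 0) == (x > 0)
    assert abs(f(a, 0.0)) < 1e-12      # and f(0) = 0 exactly
```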
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3403865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does every vector space with a weak topology contain a dense subspace which is a direct sum of real lines? Is it true that given an index set $A$ and a topological vector subspace $E$ of $\mathbb{R}^A$ (that is, $E$ carries the weak topology), there is a dense topological vector subspace of $E$ which is isomorphic as a topological vector space to a direct sum $$\bigoplus_{a\in B}\mathbb{R}_a$$ of $B$-many copies of $\mathbb{R}$ (endowed with the topology inherited from the Tychonoff product $\mathbb{R}^B$) for some index set $B$?
| If $E$ is a normed space, then in its weak topology there is no subspace isomorphic as a topological vector space to $\mathbb R^{\mathbb N}$.
We can prove this using: in a normed space, if a sequence converges weakly then it is bounded (in norm).
So let $E$ be a normed space. Let $e_1,e_2,\cdots$ be any sequence of nonzero vectors in $E$. Then there exists a choice $(t_k)_{k \in \mathbb N}$ of scalars such that the sequence $(\sum_{k=1}^n t_k e_k)_{n \in \mathbb N}$ is unbounded, and therefore the series $\sum_{k=1}^\infty t_k e_k$ does not converge weakly.
Finally note that if $\phi : \mathbb R^{\mathbb N} \to E$ is a linear injective map that is continuous (from the product topology of $\mathbb R^{\mathbb N}$ to the weak topology of $E$), then it gives us a sequence $(e_k)$ of nonzero vectors such that
$$
\sum_{k=1}^\infty t_k e_k = \phi\Big((t_1,t_2,t_3,\cdots)\Big)
$$
converges weakly for all sequences $(t_k)$ of scalars.
added (no need for weak sequential completeness)
In a normed space, for any sequence $(e_k)$ there are positive scalars so that $t_k e_k$ is unbounded, and therefore $t_k e_k$ does not converge weakly. But any linear injection
$$
\phi : \bigoplus_{k \in \mathbb N} \mathbb R \to E
$$
that is continuous (from the topology inherited from the Tychonoff product $\mathbb{R}^{\mathbb N}$ to the weak topology) would give us a counterexample to that.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3404007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Which mappings preserve convex bodies? Let $$f:\mathbb{R}^n\to\mathbb{R}^n,$$ $n\geq 2$, be a mapping which maps every convex body (compact convex set with nonempty interior) to a convex body.
If we assume $f$ to be a homeomorphism, it needs to be affine. Is there something we can say without this assumption?
| Let $h: {\mathbb R}^2\to [0,1]\subset {\mathbb R}$ be a continuous surjective map which is nonconstant on all nondegenerate intervals, for instance, we can take $h(x,y)=|\sin(x^2+y^2)|$. Let $g: [0,1]\to Q=[0,1]\times [0,1]\subset {\mathbb R}^2$ be a discontinuous function which maps each nondegenerate interval onto the square $Q$, see the answer here. The composition $f=g\circ h$ then sends every nondegenerate interval onto the square $Q$. In particular, the image of each convex body under $f$ is again a convex body. The map $f$ will be discontinuous. It is unclear to me if there are continuous non-affine maps ${\mathbb R}^n\to {\mathbb R}^n$ which send convex bodies to convex bodies.
P.S. The accepted answer does not produce valid examples since non-affine linear-fractional (aka projective) transformations are not everywhere defined on the affine space. (They are well-defined on the projective space, of course.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3404113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Prove $\forall a,b$ where $a < b$, $\exists x\in \mathbb{Q}$ such that $a<x<b$
$\forall a,b$ with $a < b$, $\exists x\in \mathbb{Q}$ such that $a<x<b$ and $x$ has a finite decimal expansion with some number of $7$'s in it.
I have shown that there are infinite number of rational numbers in between an interval of two arbitrary real numbers but I don't understand how to proceed afterwords.
| In order to find such rational number $x$, we modify suitably the classical proof of the density of $\mathbb{Q}$ in $\mathbb{R}$.
Since $b>a$, there is a positive integer $n$ such that $10^n(b-a)>10$. It follows that the interval $(10^na,10^nb)$, whose length is greater than $10$, contains an integer $m$ with $7$ as its unit digit. Hence $x:=\frac{m}{10^n}\in (a,b)$, and $x$ is a rational number which contains at least one digit $7$ in its decimal expansion.
P.S. By taking a larger $n$ such that $10^n(b-a)>10^d$, we can find a rational number $x$ with at least $d$ digits $7$.
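The construction is effective. A direct implementation for intervals of positive reals (a sketch using exact `Fraction` arithmetic; the function name is mine):

```python
from fractions import Fraction

def rational_with_seven(a, b):
    """A rational in (a, b), for 0 < a < b, whose finite decimal
    expansion contains the digit 7, following the construction above."""
    n = 0
    while 10**n * (b - a) <= 10:      # stretch the interval past length 10
        n += 1
    lo = 10**n * a
    m = int(lo) + 1                   # first integer above lo
    while m % 10 != 7:                # walk to the next unit digit 7
        m += 1
    assert lo < m < 10**n * b         # guaranteed: the window is longer than 10
    return Fraction(m, 10**n)

for (a, b) in [(0.3, 0.31), (Fraction(1, 3), Fraction(2, 3)), (100, 100.001)]:
    x = rational_with_seven(a, b)
    assert a < x < b and '7' in str(x.numerator)
```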
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3404266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
I have to find maximum and minimum of a certain sequence. I was given a sequence and have to identify the maximum and minimum of this sequence:
And I think that the maximum is $1$ and the minimum $-1/2$.
*Speaking about the maximum -> every other term of the sequence is equal or smaller, and $1$ is a term of the sequence = OK
*Speaking about the minimum -> every other term of the sequence is equal or larger, and $-1/2$ is a term of the sequence = OK
Am I right?
| There is a flaw in your reasoning, because you handle the two subsequences completely independently. Indeed $1$ is the largest of the decreasing sequence of terms of odd rank.
But it could be that some term of the subsequence of terms of even ranks exceeds that. This is actually not the case as this sequence is always negative, but you should not leave this implicit.
Of course, similar flaw as regards the minimum.
Also note that you use twice the expression "every other number" without specifying which.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3404430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Growth rate of primes of the form $a n+1$, with $n$ fixed If we fix a natural $n$, and let $a$ vary, how big can $a n + 1$ be before the first prime is encountered, in terms of $n$? In other words, let $n = 32$. Then, we have:
$$1(32) + 1 = 33 = 3 * 11 \text{ is composite}$$
$$2(32) + 1 = 65 = 5 * 13 \text{ is composite}$$
$$3(32) + 1 = 97 \text{ is prime}$$
The gist of the question is to try to find asymptotic bounds on $a$. I believe that $a \in O(\log{(n)})$, and therefore $an+1 < c \cdot (n \log{(n)})$ for some $c$, but I'm not sure of this. If this isn't the case, could someone please provide the correct bounds?
| Let $ak+b$ be an arithmetic progression, $k = 1,2,3,\ldots$. Dirichlet's theorem on primes in arithmetic progressions says that the number of primes $\le x$ of the form $ak+b$ is
$$
\pi_a(x) \approx \frac{\pi(x)}{\varphi(a)} \approx \frac{x}{\varphi(a)\log x}
$$
where $\varphi(n)$ is the Euler totient function. Hence heuristically, to get the first prime, we must have
$$
\frac{x}{\varphi(a)\log x} \ge 1
$$
or
$$
\frac{x}{\log x} \ge \varphi(a)
$$
which can be solved for $x$ for any given $a$.
Here is an approximate solution. Let $p_{\varphi(a)}$ be the $\varphi(a)$-th prime. Then, the smallest $x$ satisfying the above inequality is about $p_{\varphi(a)}$ which is about
$$
p_{\varphi(a)} \approx \varphi(a)\log \varphi(a) + \varphi(a)\log \log \varphi(a)
- \varphi(a)
$$
Then, on average, the minimum value of $k$ is about
$$
\frac{\varphi(a)\log \varphi(a) + \varphi(a)\log \log \varphi(a)
- \varphi(a)}{a}
$$
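A small experiment comparing the actual least $k$ with this heuristic scale (a sketch; SymPy supplies `isprime` and `totient`):

```python
import math
from sympy import isprime, totient

def least_k(a):
    """Smallest k >= 1 with k*a + 1 prime."""
    k = 1
    while not isprime(k * a + 1):
        k += 1
    return k

assert least_k(32) == 3     # 3*32 + 1 = 97, as in the question

# observed least k versus the heuristic (phi(a) log phi(a) + ... - phi(a)) / a
for a in [10, 32, 90, 210]:
    phi = int(totient(a))
    estimate = (phi * math.log(phi)
                + phi * math.log(math.log(phi)) - phi) / a
    print(a, least_k(a), round(estimate, 2))
```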
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3404566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing $5a + 2b \equiv 0 \pmod 7$ is symmetric, pictorially I have a problem regarding the understanding of modular arithmetic.
Someone proved that $5a + 2b \equiv 0\pmod 7$ is symmetric using the modulo circle,
and I can't comprehend what he meant. I'm talking about circles like this:
I know there is another way to prove it, using $7 \mid 5a + 2b$.
| The relation is defined on $\Bbb Z$ like this: $a \sim b$ if and only if $5a + 2b \equiv 0 \pmod 7$. On the assumption this holds, we wish to show $b \sim a$, i.e. $5b+2a \equiv 0 \pmod 7$.
Using the definition of modular congruence, then, $\exists k \in \Bbb Z$ such that
$$\frac{5a+2b-0}{7} = k \iff 5a + 2b = 7k$$
Subtract $7a$ and add $7b$ to both sides. Then you have
$$-2a - 5b = 7(k - a + b)$$
Now multiply both sides by $-1$:
$$2a + 5b = 7(-k+a-b)$$
Thus, there is an $\ell \in \Bbb Z$ (namely $\ell = -k+a-b$) such that
$$\frac{5b+2a}{7} = \ell$$
ensuring that $5b + 2a \equiv 0 \pmod 7$, i.e. $b \sim a$. Thus, symmetry.
Now, with this in mind, consider the "circle" for equivalence modulo $7$:
If $5a + 2b \equiv 0 \pmod 7$, then the sum $5a+2b$ is in that topmost notch. Moreover, adding multiples of seven to it will make $5a+2b$ cycle back to that same place. So what we do, then, is add $-7a-7b= 7(-a-b) \in 7 \Bbb Z$ to it. This gives us $-2a-5b \equiv 0 \pmod 7$. However, in this notch of the diagram, if $x$ is in it, so is $-x$. (That is, $x \equiv -x \pmod 7$. I leave it to you to justify why.) Thus, $-(-2a-5b)= 2a +5b \equiv 0 \pmod 7$ as desired.
...granted, this question is quite old, so I imagine you don't need help now. But hopefully this helps someone in the future, and, if nothing else, gets this question out of the unanswered queue.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3404734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A is an open set and B is a closed set. Prove that A\B is open and B\A is closed. The problem consists of using the epsilon neighborhood definition to prove that A\B is open and B\A is closed.
I'm thinking of using a proof by contradiction and assuming that A\B is not open and vice versa for B\A. Any help would be greatly appreciated.
Sorry, forgot to mention that we can't use the fact that a set is open iff its complement is closed and vice versa.
| $$A\setminus B=A\cap B^c$$
Note that $B^c$ is open and $A^c$ is closed. The result naturally follows.
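The set identity behind this answer can be illustrated with concrete finite sets; here $U$ plays the role of the ambient space, and all the values are hypothetical examples:

```python
# A \ B = A ∩ B^c, illustrated on small finite sets
U = set(range(10))      # ambient "space" used for complements
A = {1, 2, 3, 4}
B = {3, 4, 5}

assert A - B == A & (U - B)   # A \ B = A ∩ B^c
assert B - A == B & (U - A)   # B \ A = B ∩ A^c
print(A - B, B - A)
```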
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3404853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 2
} |
Show linearized Riemann tensor is gauge invariant I'm working in general relativity and figured this problem would be more suited to mathematics than physics.
The linearized Riemann tensor is given by:
$$R_{\alpha \beta \mu \nu}=-\frac{1}{2}\left[h_{\alpha \mu, \beta \nu}+h_{\beta \nu, \alpha \mu}-h_{\alpha \nu, \beta \mu}-h_{\beta \mu, \alpha \nu}\right]$$
I want to show this is invariant under the gauge transformation
$$h_{\alpha \beta} \rightarrow h_{\alpha^{\prime} \beta^{\prime}}=h_{\alpha \beta}-\xi_{\alpha, \beta}-\xi_{\beta, \alpha}$$
This seems relatively straightforward, but I am unsure of how to plug this gauge transformation into the Riemann tensor, since the indices end up being swapped around.
| The first term contributes a difference of $$ -(\xi_{\alpha,\mu\beta\nu}+\xi_{\mu,\alpha\beta\nu})$$ Similarly, the second term $$ -(\xi_{\beta,\nu\alpha\mu}+\xi_{\nu,\beta\alpha\mu})$$ and the third $$+(\xi_{\alpha,\nu\beta\mu}+\xi_{\nu,\alpha\beta\mu})$$ Hey, stop right there. The second term of the second one $-\xi_{\nu,\beta\alpha\mu}$ and the second term of the third one $\xi_{\nu,\alpha\beta\mu}$ cancel out since partial derivatives commute. Same with the first term of the first one and the first term of the third one. If you write down the fourth one surely it will cancel the remaining two.
(To get the differences I just used the fact that partial derivatives are linear, so just distribute over the terms of the $h_{\alpha'\beta'}$... you can just tack the indices from the partial derivative onto the end.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3404980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $E_1 \cup E_2$ is measurable with $λ^\ast(E_1 \cup E_2) = λ^\ast(E_1)+ λ^\ast(E_2)$, then $E_1$ and $E_2$ are measurable. Denote by $\lambda^\ast$the Lebesgue outer measure on $\mathbb{R}^m$. Assume
that $E_1, E_2 \subset \mathbb{R}^m$ satisfy $E_1 \cap E_2 = \phi$, $E_1 \cup E_2$ is Lebesgue measurable with $λ^\ast(E_1 \cup E_2) <\infty$ , and $λ^\ast(E_1 \cup E_2) = λ^\ast(E_1)+ λ^\ast(E_2)$. Prove that $E_1, E_2$ are Lebesgue measurable.
$\textbf{Question}$:
Does $λ^\ast(E_1 \cup E_2) = λ^\ast(E_1)+ λ^\ast(E_2)$ imply that for any set $A$, we can write $λ^\ast\big((A\cap E_1) \cup (A\cap E_2)\big) = λ^\ast(A\cap E_1)+ λ^\ast(A\cap E_2)$? (note that I don't have measurability of $E_1$ and $E_2$).
$\textbf{My attempt, which relies on above doubt}$: by def we have that for any set $A$ we have
$$λ^\ast(A) = λ^\ast\big((A\cap (E_1 \cup E_2)\big) +λ^\ast\big((A\cap (E_1 \cup E_2)^c\big)$$
using the following identities;
$ A\cap (E_1 \cup E_2) = (A\cap E_1) \cup (A\cap E_1^c \cap E_2)$, where
$(A\cap E_1) \cap (A\cap E_1^c \cap E_2)= \phi$
if the above question is correct, I can write using, $(A\cap E_1^c \cap E_2) \cap (A\cap E_1^c \cap E_2^c)= \phi$
\begin{align}
λ^\ast(A)
& = λ^\ast((A\cap E_1))+ λ^\ast(A\cap E_1^c \cap E_2) +λ^\ast\big((A\cap (E_1 \cup E_2)^c\big)\\
& = λ^\ast((A\cap E_1))+ λ^\ast\big[(A\cap E_1^c \cap E_2) \cup (A\cap E_1^c \cap E_2^c)\big]\\
& = λ^\ast((A\cap E_1))+ λ^\ast\big[A\cap E_1^c \cap (E_2 \cup E_2^c)\big]\\
& \geq λ^\ast((A\cap E_1))+ λ^\ast (A\cap E_1^c) \\
\end{align}
Then $E_1$ is measurable. similarly for $E_2$.
| Choose a $G_\delta$ set $G$ such that $E_{1}\subseteq G$ and $\lambda^{\ast}(E_{1})=\lambda(G)$. In particular, $G$ is measurable.
Now let $H=(E_{1}\cup E_{2})\cap G$ so $H$ is measurable. We have $E_{1}\subseteq H\subseteq G$ and hence $\lambda^{\ast}(E_{1})\leq\lambda(H)\leq\lambda(G)=\lambda^{\ast}(E_{1})$, so $\lambda^{\ast}(E_{1})=\lambda(H)$.
Now
\begin{align*}
\lambda^{\ast}(E_{1})+\lambda^{\ast}(E_{2})&=\lambda^{\ast}(E_{1}\cup E_{2})\\
&=\lambda^{\ast}((E_{1}\cup E_{2})\cap H)+\lambda^{\ast}((E_{1}\cup E_{2})-H)\\
&=\lambda^{\ast}(H)+\lambda^{\ast}((E_{1}\cup E_{2})-H)\\
&=\lambda^{\ast}(E_{1})+\lambda^{\ast}((E_{1}\cup E_{2})-H).
\end{align*}
So
\begin{align*}
\lambda^{\ast}(E_{2})=\lambda((E_{1}\cup E_{2})-H).
\end{align*}
On the other hand, we have $(E_{1}\cup E_{2})-H\subseteq E_{2}$.
Now
\begin{align*}
\lambda^{\ast}(E_{2})&=\lambda^{\ast}(E_{2}\cap((E_{1}\cup E_{2})-H))+\lambda^{\ast}(E_{2}-((E_{1}\cup E_{2})-H))\\
&=\lambda^{\ast}((E_{1}\cup E_{2})-H)+\lambda^{\ast}(E_{2}-((E_{1}\cup E_{2})-H))\\
&=\lambda^{\ast}(E_{2})+\lambda^{\ast}(E_{2}-((E_{1}\cup E_{2})-H)),
\end{align*}
so $\lambda^{\ast}(E_{2}-((E_{1}\cup E_{2})-H))=0$ and hence $E_{2}-((E_{1}\cup E_{2})-H)$ is measurable and hence $E_{2}=((E_{1}\cup E_{2})-H)\cup(E_{2}-((E_{1}\cup E_{2})-H))$ is also measurable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3405152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$10000=abc$ with $a,b,c$ natural numbers that do not contain the digit $0$. Smallest possible value of $a+b+c$ is? Let $10000=abc$ with $a,b,c$ natural numbers that do not contain the digit $0$. If $a,b,c$ do not have to be all distinct, then the smallest possible value of $a+b+c$ is?
Attempt:
First write as prime factors: $10000 = 2^{4} 5^{4}$. The possible triples are:
$$ 2, 2^{3}, 5^{4} $$
$$ 2^{2}, 2^{2}, 5^{4} $$
$$ 2^{3}, 2, 5^{4} $$
$$ 1, 2^{4}, 5^{4} $$
$$ 5, 5^{3}, 2^{4}$$
$$ 5^{2}, 5^{2}, 2^{4}$$
$$ 5^{3}, 5, 2^{4}$$
The smallest sum is $5^{2} + 5^{2} + 2^{4}$. Are there better approaches?
| Well, "no zero" means none of $a,b,c$ can have both $2$ and $5$ as factors (such a number would be a multiple of $10$ and end in $0$), so you either have two numbers that are powers of $2$ and one that is a power of $5$, or two that are powers of $5$ and one that is a power of $2$.
CASE 1: $\{a,b,c\} = \{2^k, 2^{4-k}, 5^4\}$.
So $a + b + c = 2^k + 2^{4-k} + 625$.
We have to minimize $2^k + 2^{4-k}$.
By AM-GM $2^k + 2^{4-k} \ge 2\sqrt{2^k2^{4-k}} = 2\sqrt{2^4} = 2*4=8$ with equality holding if and only if $2^k = 2^{4-k}$.
So $\{a,b,c\} = \{4,4,625\}$ and $a + b +c =633$ is the minimum sum for this case.
CASE 2: $\{a,b,c\} = \{2^4, 5^{4-k},5^{k}\}$
The same AM-GM minimization of $a+b+c$ gives $a=5^2, b=5^2, c=16$ and $a + b + c = 66$.
So the min is $\{a,b,c\} = \{25,25, 16\}$.
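As a sanity check, the minimum can also be found by brute force over divisors of $10000$ whose decimal digits avoid $0$ (this assumes the natural numbers start at $1$; the variable names are ours):

```python
n = 10000
# divisors of 10000 with no digit 0: only pure powers of 2 and of 5
divs = [d for d in range(1, n + 1) if n % d == 0 and "0" not in str(d)]

best = None
for a in divs:
    for b in divs:
        if n % (a * b) == 0:
            c = n // (a * b)
            if "0" not in str(c):
                cand = (a + b + c, tuple(sorted((a, b, c))))
                if best is None or cand < best:
                    best = cand
print(best)  # (66, (16, 25, 25))
```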
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3405223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Can we find an example where $U$ is not connected and the partition of $p^{-1}(U)$ into slices is not unique? This is an extension of the following question from Munkres. Section 53-problem 2.
Let $p: E \to B$ be continuous and surjective. Suppose that $U$ is an open set of $B$
that is evenly covered by $p$. Show that if $U$ is connected, then the partition of $p^{-1}(U)$ into slices is unique.
I could solve this problem but one question that I could not answer:
Can we find an example where $U$ is not connected and the partition of $p^{-1}(U)$ into slices is not unique?
I tried simple examples of the covering map $f : \mathbb{R} \to S^1$ and the non-covering maps $g : \mathbb{R}^{+} \to S^1$ without any success.
| Let $p : E \to B$ be a continuous surjection and let $U \subset B$ be a nonempty open set which is evenly covered, with slices $V_\alpha, \alpha \in A$. Let $U_1, U_2$ be two disjoint nonempty open subsets of $U$. As an example take $p : \mathbb R \to S^1$ , $U = S^1 \setminus \{1\}$ and $U_i$ any two disjoint nonempty open subsets of $U$.
Obviously the $U_i$ are evenly covered with slices $V^i_\alpha = V_\alpha \cap p^{-1}(U_i)$. The set $W = U_1 \cup U_2$ is not connected but of course also evenly covered. However, the partition of $p^{-1}(W)$ into slices is not unique. In fact, for any bijection $f : A \to A$ we get the partition
$$W^f_\alpha = V^1_\alpha \cup V^2_{f(\alpha)}$$
of $p^{-1}(W)$ into slices over $W$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3405393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to find x from the equation $\sqrt{x + \sqrt{x}} - \sqrt{x - \sqrt{x}} = m \sqrt{\frac{x}{x+\sqrt{x}}}$ For $m$ a real number,
Find x from the equation
$$\sqrt{x + \sqrt{x}} - \sqrt{x - \sqrt{x}} = m \sqrt{\frac{x}{x+\sqrt{x}}}$$
I tried to multiply $\sqrt{x + \sqrt{x}}$ to the both sides and I get
$$x + \sqrt{x} - \sqrt{x^{2} - x} = m \sqrt{x}$$
What should I do now to get x? Can anyone show me a hint please?
| As you got (for $x\neq0$):
$$ x+\sqrt{x} - \sqrt{x^2-x} = m\sqrt{x} $$
$$ x+\sqrt{x} - \sqrt{x}\sqrt{x-1} = m\sqrt{x} $$
Setting $ u = \sqrt{x} $ ($ u\neq0 $) we get:
$$ u^2+u-u\sqrt{u^2-1} = mu $$
$$ u+1-\sqrt{u^2-1} = m $$
$$ \sqrt{u^2-1} = u-(m-1)$$
Squaring both sides:
$$ u^2-1 = (m-1)^2-2u(m-1)+u^2 $$
We get:
$$ u = \frac{(m-1)^2+1}{2(m-1)} $$
where $m\neq1$ and $x$ becomes:
$$ x = \Bigg(\frac{(m-1)^2+1}{2(m-1)}\Bigg)^2 $$
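Because both sides were squared, the candidate should be checked back in the original equation; it is only a genuine solution when $u=\sqrt x\ge 1$ (so that $\sqrt{x-\sqrt x}$ is real). A quick numeric check in Python, using the hypothetical sample value $m=1.5$:

```python
import math

def x_from_m(m):
    # candidate from the derivation above; requires m != 1, and it is
    # only a genuine solution when u = sqrt(x) >= 1
    u = ((m - 1) ** 2 + 1) / (2 * (m - 1))
    return u * u

def residual(x, m):
    s = math.sqrt(x)
    return math.sqrt(x + s) - math.sqrt(x - s) - m * math.sqrt(x / (x + s))

m = 1.5
x = x_from_m(m)
print(x, residual(x, m))  # 1.5625, residual ≈ 0
```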
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3405511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
How do I solve this integral via integration by parts when it shows a recursion-like behavior? I've come across an integral while solving a question for ODE's. The integral is as follows:
$$\int e^{-2x}x\,dx$$
From here I set:
$$
\begin{align}
f(x) = e^{-2x}\quad & f'(x) = -2e^{-2x} \\
g(x) = \frac{1}{2}x^2\quad & g'(x) = x
\end{align}
$$
and continued to solve the integral as follows:
\begin{align}
\int e^{-2x}x\,dx & = e^{-2x}\frac{x^2}{2} - \int \left(-2e^{-2x} \times \frac{1}{2}x^2\right)\,dx \\
& = e^{-2x}\frac{x^2}{2} + \int \left( e^{-2x}x^2 \right)\,dx
\end{align}
Solving this integral yields:
$$
\int e^{-2x}x\,dx = e^{-2x}\frac{x^2}{2} + e^{-2x}\frac{x^3}{3} + \frac{2}{3}\int e^{-2x}x^3\,dx
$$
and it continues like this.
How should I proceed to solve this integral? Any tips or advice is greatly appreciated.
| Let $u=x$ and $dv=e^{-2x}dx$. Then, $du=dx$ and $v=\dfrac{e^{-2x}}{-2}$
$$\int e^{-2x}xdx=-\frac{xe^{-2x}}{2}+\frac{1}{2}\int e^{-2x}dx=-\frac{xe^{-2x}}{2}-\frac{e^{-2x}}{4}+C=-\frac{e^{-2x}}{4}\left(2x+1 \right)+C$$
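The antiderivative above is easy to confirm numerically: its derivative, estimated by a central difference, should reproduce the integrand. A small Python check (helper names ours):

```python
import math

def F(x):
    # antiderivative from the answer, constant C omitted
    return -math.exp(-2 * x) / 4 * (2 * x + 1)

def integrand(x):
    return x * math.exp(-2 * x)

# central difference F'(x) should match x e^{-2x}
h = 1e-6
for x in (0.3, 1.0, 2.5):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-8
print("F'(x) matches x e^{-2x} at the sample points")
```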
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3405668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
A problem about connected components' image under an onto and continuous map Here is a problem on Topology written by Marco Manetti, page 89, question 5.4.
Let $f:X \rightarrow Y$ be an identification. Show that if the connected components of $X$ are open, so are the connected components of $Y$.
I think that if we can prove that $A$ being a connected component of $Y$ implies $f^{-1}(A)$ can be written as a union of connected components of $X$, then this problem can be solved. When I tried to prove it, I found that even if $f$ is just onto and continuous, this conclusion still holds. Here is my proof.
If $B\subset X$ is a connected component, then since $f$ is continuous, $f(B)$ is connected, hence contained in some connected component of $Y$. Because $f$ is onto, the pre-image of each connected component of $Y$ can be written as a union of connected components of $X$.
My question is, can we replace the identification by an onto and continuous map in this problem?
| The answer is no, the claim is not true if you replace $f$ by continuous and onto.
The problem is that you still need openness of $f^{-1}(D)$ to imply openness of the component $D$, which is exactly what an identification provides; continuity alone does not guarantee this.
Consider the following example: pick $Y$ to be any topological space whose components are NOT open. Pick $X$ to be the same set, equipped with the discrete topology.
Define $f :X \to Y$ to be $f(x)=x$. Then $f$ is continuous, onto, the components of $X$ are open, but the components of $Y$ are not.
P.S. An alternate approach for the problem is the following: Let $\{ C_i \}_{i}$ be the components of $X$ and $\{ D_j \}_j$ the components of $Y$.
You know that for each $i$ there exists some $j$ such that $f(C_i) \subset D_j$. Use this together with ontoness to show that for each $j$ you have
$$f^{-1}(D_j) = \bigcup_{f(C_i) \subseteq D_j} C_i $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3405774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Two statements are equivalent if... Two statements "If A, then B." and "If P, then R." are equivalent if the validity of one statement implies the validity of the other?
That is, I assume that "If A, then B." is valid, then proceed to show the second statement is valid, and vice versa?
Or this done differently?
To show that a statement is valid, one need only show that if the premise is true, the conclusion necessarily follows, right?
|
Two statements "If A, then B." and "If P, then R." are equivalent if the validity of one statement implies the validity of the other?
Not quite right. It's not that the validity of the one statement implies the other (and vice versa), but rather that the assumed truth of the one implies the other (and vice versa)
That is, I assume that "If A, then B." is valid, then proceed to show the second statement is valid, and vice versa?
OK, so here's an example of what I mean. By itself, $A \to B$ is not valid. So, anything would follow once you assume it is valid, which includes something like the validity of $C$. That is, by your proposed definition, the validity of $C$ follows from the validity of $A \to B$. But, clearly $C$ is not implied by $A \to B$, let alone that it would be equivalent to it.
So, you have to think about this in terms of the truth of the statements, rather than their validity: if you assume the $A \to B$ is true, does it mean that $C$ has to be true as well? No. So, $A \to B$ does not imply $C$ ... which is of course exactly what we want.
If you do want to talk about validity, you can do the following:
Two statements $\phi$ and $\psi$ are equivalent if and only if the inference from $\phi$ to $\psi$ is logically valid, and vice versa. Or: if the statements $\phi \to \psi$ and $\psi \to \phi$ are both valid.
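The final definition can be checked row by row on a truth table: $\phi$ and $\psi$ agree in truth value under every assignment exactly when both conditionals hold under every assignment. A small Python sketch (helper name ours):

```python
from itertools import product

def implies(p, q):
    # material conditional: p -> q
    return (not p) or q

# "A and B are equivalent" iff both A -> B and B -> A are true
# under every assignment of truth values
for A, B in product((False, True), repeat=2):
    assert (A == B) == (implies(A, B) and implies(B, A))
print("A <-> B matches (A -> B) and (B -> A) on all rows")
```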
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3405916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How to find the difference between the son's and mother's age if it (the ages of son and mother) is reversible for a total of 8 times? There is a quiz question for which I need to write Python code. I don't need help with the code, but this is what I need help with.
A son had recently visited his mom and found out that the two digits that form his age (eg :24) when reversed form his mother's age (eg: 42). Later he goes back to his place and finds out that this whole 'age' reversed process occurs 6 times. And if they (mom + son) were lucky it would happen again in future for two more times.
So we have a total of 8 times when this would occur. The actual question is how old is the son at present? I don't need help with that; I am trying to figure out the age difference, which in turn will lead me to the present age, and based on that I could write my code.
Observation: If you take any two-digit number and switch the digits, the
difference between the two numbers is a multiple of 9. In particular, the difference is 9 times the difference between the two digits.
But how do I figure out this age difference ?
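Since the gap must be a multiple of 9, one way forward is simply to enumerate. The sketch below restricts both ages to genuine two-digit numbers (no leading zeros), which is one reading of the puzzle; the helper name is ours:

```python
def reversible_ages(diff):
    # son's two-digit ages at which reversing his digits gives the
    # mother's age, for a fixed mother-minus-son age difference
    return [(s, s + diff) for s in range(10, 100)
            if 10 <= s + diff <= 99 and int(str(s)[::-1]) == s + diff]

# the gap is a multiple of 9; look for one producing exactly 8 events
for d in range(9, 90, 9):
    hits = reversible_ages(d)
    if len(hits) == 8:
        print(d, hits)
```

Under this reading only the gap $d=9$ yields exactly 8 reversible moments; allowing an age like "02" changes the count for other gaps, which is where the ambiguity in the puzzle lives.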
| He is his mother's age and presently he is $66$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3406019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Is $\{n!\alpha\},n \in\mathbb{N}$ dense in $R$ ? where $\alpha$ is irrational Is $\{n!\alpha\},n \in\mathbb{N}$ dense in $[0,1]$ ? where $\alpha$ is irrational.
I know that $\{n\alpha\}$ is dense in $[0,1]$. I wanted to generalise it. So, first I thought about $\{n!\alpha\},n \in\mathbb{N}$, but couldn't make any reasonable progress, although my intuition says the answer will be no.
| You can work out, through a somewhat careful argument, that if a sequence $s_n$ increases without bound, then $\{s_n\alpha\}$ is dense in $[0,1]$ for almost every real $\alpha$. However, in general, this sequence is not dense for every irrational $\alpha$.
We can actually find a particular notable counterexample for $n!$. Consider
$$e=\sum_{k=0}^{\infty}\frac{1}{k!}.$$
Note that $$\{n!e\} = \sum_{k={n+1}}^{\infty}\frac{n!}{k!}$$
since the terms up to the $n$-th become integers after multiplication by $n!$, and the remaining terms (assuming $n > 1$) sum to less than $1$. In fact, note that $\frac{n!}{k!} \leq \left(\frac{1}{n}\right)^{k-n}$ which implies, using a geometric sum, that
$$\{n!e\} = \sum_{k={n+1}}^{\infty}\frac{n!}{k!} \leq \sum_{i=1}^{\infty}\left(\frac{1}n\right)^i = \frac{1}{n-1}.$$
Of course, a sequence satisfying this cannot possibly be dense.
You might notice that the fact that we have $n!$ as opposed to some other similar sequence is not that important; it's possible to extend this to give counterexamples for any chosen sequence $s_n$ of integers where each term divides the next - but it's a bit nice that the counterexample here is a well-known number.
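The bound $\{n!e\}\le\frac1{n-1}$ is easy to confirm with exact rational arithmetic, since $\{n!e\}=\sum_{k>n}n!/k!$ and the omitted tail of the truncated sum below is astronomically small. A Python sketch (helper name ours):

```python
from fractions import Fraction
from math import factorial

def frac_part_n_factorial_e(n, terms=60):
    # {n! e} = sum over k > n of n!/k!, computed exactly as a rational
    # partial sum; truncation only makes the value slightly smaller
    return sum(Fraction(factorial(n), factorial(k))
               for k in range(n + 1, n + 1 + terms))

for n in range(2, 15):
    t = frac_part_n_factorial_e(n)
    assert 0 < t < Fraction(1, n - 1)   # the 1/(n-1) bound from above
    print(n, float(t))
```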
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3406189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Quadratic Modular Arithmetic I want to prove that
$$ w^2 \equiv 2 \quad (\bmod{5})$$
has no solutions in integers.
What I tried:
$$ w^2 \equiv 2 \quad (\bmod{5})$$
$$ \Rightarrow w^2 = 2 + 5k, \quad k \in \mathbb{Z} $$
Now, I don't know what to do, so I considered $x^2 = 2+ 5y$ over $\mathbb{R}$.
We can solve this to see if the solutions are integers.
If it has no solutions in integers, can I then say that this implies that $ w^2 \equiv 2 \; (\bmod{5})$ has no solutions?
Is there a different approach to this problem?
| Without knowing how you finished the proof it can't be verified.
This should prove it though:
All multiples of $5$ end with the digit $5$ or $0$ in decimal base.
$$5+2=7$$
$$w^2\ne7+10n$$
$$w^2\ne2+10n$$
because no perfect squares end in $7$ or $2$ in decimal base.
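A more direct route, hinted at in the other answers, is to list the quadratic residues modulo $5$: since every integer is congruent to one of $0,\dots,4$, checking those five cases covers everything. In Python:

```python
# squares take only the residues 0, 1, 4 modulo 5, so w^2 ≡ 2 (mod 5)
# has no integer solution; w in 0..4 covers every congruence class
residues = sorted({(w * w) % 5 for w in range(5)})
print(residues)  # [0, 1, 4]
assert 2 not in residues and 3 not in residues
```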
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3406311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Solving indefinite integral $\int \frac{1+\sqrt{1+x^2}}{x^2+\sqrt{1+x^2}}dx$ How can we solve this integral?
$$\int \frac{1+\sqrt{1+x^2}}{x^2+\sqrt{1+x^2}}dx$$
I tried to make the following substitution
$$1+x^2=w^2$$
but this substitution complicated the integral.
| Like the answer of @AmerYR, substituting $x=\tan u$ (so $\sqrt{1+x^2}=\sec u$ and $dx=\sec^2 u\, du$),
$$I=\int \frac{1+\sqrt{1+x^2}}{x^2+\sqrt{1+x^2}}\, dx= \int \sec u ~du + \int \frac{\sec u}{\tan^2 u+ \sec u}\, du$$ $$ \implies I =\log (\tan u+ \sec u)+J(u)$$
Next, using $\sin u=\frac{2 \tan(u/2)}{1+\tan^2(u/2)},~~ \cos u= \frac{1-\tan^2(u/2)}{1+\tan^2(u/2)}$ and $t=\tan(u/2),\ dt= \frac{1}{2}\sec^2 (u/2)\, du$, we get
$$J=\int \frac{\cos u ~du} {\sin^2 u + \cos u}= \int\frac{(1-t^2)/(1+t^2)}{4t^2/(1+t^2)^2+(1-t^2)/(1+t^2)}\, \frac{2\,dt}{1+t^2}$$
$$\implies J= 2 \int \frac{t^2-1}{t^4-4t^2-1}\, dt= 2 \int \frac{t^2-1}{(t^2-a^2)(t^2+b^2)}\, dt = 2 \int \left(\frac{A}{t^2-a^2} + \frac{B}{t^2+b^2} \right) dt$$
$$\implies J = 2 \left(\frac{A}{2a} \log \left|\frac{t-a}{t+a}\right| + \frac{B}{b} \tan^{-1} \frac{t}{b} \right) +C$$
Here $a=\sqrt{2+\sqrt{5}}, b=\sqrt{\sqrt{5}-2}$, $A=\frac{\sqrt{5}+1}{2\sqrt{5}}$, $B=\frac{\sqrt{5}-1}{2\sqrt{5}}.$
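The partial-fraction decomposition with these constants can be verified numerically at a few sample points away from the real poles $t^2=a^2$:

```python
import math

# constants from the answer above
s5 = math.sqrt(5)
a2, b2 = 2 + s5, s5 - 2                  # a^2 and b^2
A = (s5 + 1) / (2 * s5)
B = (s5 - 1) / (2 * s5)

for t in (0.0, 0.3, 1.7, -2.5, 4.0):     # sample points, poles avoided
    lhs = (t * t - 1) / (t ** 4 - 4 * t * t - 1)
    rhs = A / (t * t - a2) + B / (t * t + b2)
    assert math.isclose(lhs, rhs, rel_tol=1e-10, abs_tol=1e-12)
print("decomposition verified at sample points")
```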
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3406460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Probability of more than k balls in any m buckets given n total balls Say there are 100 balls that are randomly distributed across 64 buckets, with the same probability of ending up in any bucket and trials are independent. What is the probability that at least one of the 64 buckets will have more than 20 balls?
A similar problem to this is calculating the probability of a specific bucket having more than 20 balls, which can be solved using the survival function or CDF function of the binomial distribution:
$1-\sum_{i=0}^{20} \binom{100}{i}(1/64)^{i}(1-1/64)^{100-i}=7.33\times10^{-18}$
However, I'm not sure how to bridge the gap to determine the probability of any bucket in our population having more than 20 balls.
Thank you!
| For $i ∈ \{1,…,64\}$ let $E_i$ be the event that the $i$-th bucket contains more than $20$ balls. You want to compute: $P[\bigcup_{i = 1}^{64} E_i]$. By the union bound we have:
$P[\bigcup_{i = 1}^{64} E_i] ≤ \sum_{i = 1}^{64} P[E_i] = 64 · 7.33 · 10^{-18}.$
However, any outcome where several buckets overflow at the same time is counted several times. Consider the event $E_i ∩ E_j$ that buckets $i$ and $j$ (for $i ≠ j$) both contain more than $20$ balls. It is possible to get an exact formula, but for now let us just use that the events are negatively correlated and write
$P[E_i ∩ E_j] ≤ P[E_{i}] · P[E_{j}] ≈ (7.33 · 10^{-18})^2$.
Now we can estimate
\begin{align}
P[\bigcup_{i = 1}^{64} E_i] &≥ \sum_{i = 1}^{64} P[E_i] - \sum_{i ≠ j} P[E_i ∩ E_j]\\
&≥ 64· 7.33 · 10^{-18} - \binom{64}{2} · (7.33 · 10^{-18})^2 ≈ (7.33 · 10^{-18})·(64 - 1.47 · 10^{-14})
\end{align}
To see that the first inequality holds, consider an outcome where exactly $k ≥ 1$ buckets overflow. The outcome is counted positively $k$ times in the first sum and negatively $\binom{k}{2}$ times in the second sum. Since $k - \binom{k}{2} ≤ 1$ for $k ≥ 1$ the outcome is counted at most once.
We now have upper and lower bounds that are very close together. Of course, this doesn't quite answer the question. However, it points in the right direction, namely, you can use the Inclusion-Exclusion principle to get more precise results. The next step would be to write down precise formulas for the probability that a set of $2,3$ or $4$ buckets overflow at the same time. Since there are not enough balls to make 5 or more buckets overflow, this would yield a (somewhat messy) but exact formula.
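The single-bucket probability quoted in the question, and hence the union bound above, can be reproduced exactly from the binomial survival function. A Python sketch (helper name ours; the default parameters match the question):

```python
from math import comb

def p_bucket_over(n=100, k=20, p=1 / 64):
    # exact P[Binomial(n, p) > k] for one fixed bucket
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k + 1, n + 1))

single = p_bucket_over()
print(single)            # ≈ 7.33e-18
print(64 * single)       # union bound on "some bucket overflows"
```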
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3406603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Advanced : Compute $\sum_{n=1}^\infty\frac{H_n^4-6H_n^2H_n^{(2)}+8H_nH_n^{(3)}+3\left(H_n^{(2)}\right)^2-6H_n^{(4)}}{n^5}$ How to prove the following equality
$$\mathcal S=\sum_{n=1}^\infty\frac{H_n^4-6H_n^2H_n^{(2)}+8H_nH_n^{(3)}+3\left(H_n^{(2)}\right)^2-6H_n^{(4)}}{n^5}\\=672\zeta(9)-240\zeta(2)\zeta(7)-105\zeta(3)\zeta(6)-168\zeta(4)\zeta(5)+24\zeta^3(3)$$
Where $H_n^{(r)}=\sum_{k=1}^n\frac1{k^r}$ is the harmonic number and $\zeta$ is The Riemann zeta function.
Here is my approach and would like to see different ways.
From here we have
$$\frac{\ln^4(1-x)}{1-x}=\sum_{n=1}^\infty\left(H_n^4-6H_n^2H_n^{(2)}+8H_nH_n^{(3)}+3\left(H_n^{(2)}\right)^2-6H_n^{(4)}\right)x^n$$
Multiply both sides by $\frac{\ln^4x}{4!x}$ then integrate from $x=0$ to $1$
and use the fact that $\frac1{4!}\int_0^1 x^{n-1}\ln^4x\ dx=\frac1{n^5}$ to have
\begin{align}
\mathcal S&=\frac1{4!}\int_0^1\frac{\ln^4(1-x)\ln^4x}{x(1-x)}\ dx\\
&=\frac1{4!}\int_0^1\frac{\ln^4(1-x)\ln^4x}{x} dx+\frac1{4!}\underbrace{\int_0^1\frac{\ln^4(1-x)\ln^4x}{1-x}dx}_{1-x\mapsto x}\\
&=\frac2{4!}\int_0^1\frac{\ln^4(1-x)\ln^4x}{x}dx\overset{IBP}{=}\frac1{15}\int_0^1\frac{\ln^3(1-x)\ln^5x}{1-x}dx\tag1
\end{align}
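As a sanity check (not part of the proof), the generating function used above can be verified coefficient by coefficient with exact rational arithmetic: expand $\ln^4(1-x)/(1-x)$ by polynomial multiplication and compare against the harmonic-number combination. A Python sketch with our own helper names:

```python
from fractions import Fraction

N = 12   # compare coefficients up to x^N

def mul(p, q):
    # truncated product of two power series given as coefficient lists
    r = [Fraction(0)] * (N + 1)
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if b and i + j <= N:
                    r[i + j] += a * b
    return r

# ln(1-x) = -(x + x^2/2 + ...), truncated at degree N
log1mx = [Fraction(0)] + [Fraction(-1, k) for k in range(1, N + 1)]
p = [Fraction(1)] + [Fraction(0)] * N
for _ in range(4):
    p = mul(p, log1mx)                               # ln^4(1-x)
coeffs = [sum(p[: n + 1]) for n in range(N + 1)]     # multiply by 1/(1-x)

def combo(n):
    H = [sum(Fraction(1, j ** r) for j in range(1, n + 1)) for r in (1, 2, 3, 4)]
    return (H[0] ** 4 - 6 * H[0] ** 2 * H[1] + 8 * H[0] * H[2]
            + 3 * H[1] ** 2 - 6 * H[3])

assert all(coeffs[n] == combo(n) for n in range(1, N + 1))
print([coeffs[n] for n in range(1, 8)])   # 0, 0, 0, 1, 3, 35/6, 28/3
```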
The interesting part in this solution is that we can calculate the last integral without using the derivative of beta function:
We proved here
$$\int_0^1\frac{x^n\ln^m(x)\ln^3(1-x)}{1-x}dx=\frac1{4}\frac{\partial^m}{\partial n^m}\left(H_n^4+6H_n^2H_n^{(2)}+8H_nH_n^{(3)}+3\left(H_n^{(2)}\right)^2+6H_n^{(4)}\right)$$
Set $m=5$ then let $n$ approach $0$ we get
$$\int_0^1\frac{\ln^3(1-x)\ln^5x}{1-x}\ dx\\=10080\zeta(9)-3600\zeta(2)\zeta(7)-1575\zeta(3)\zeta(6)-2520\zeta(4)\zeta(5)+360\zeta^3(3)$$
Substitute this result in $(1)$ we get the closed form of $\mathcal S.$
| Different approach by Cornel:
By master theorem we have
$$\sum_{k=1}^\infty\frac{H_k^4-6H_k^2H_k^{(2)}+8H_kH_k^{(3)}+3\left(H_k^{(2)}\right)^2-6H_k^{(4)}}{(k+1)(k+n+1)}\\=\frac{H_n^5+10H_n^3H_n^{(2)}+15H_n\left(H_n^{(2)}\right)^2+20H_n^2H_n^{(3)}+20H_n^{(2)}H_n^{(3)}+30H_nH_n^{(4)}+24H_n^{(5)}}{5n}$$
Multiply both sides by $n$ then differentiate with respect to $n$ we have
$$\sum_{k=1}^\infty\frac{H_k^4-6H_k^2H_k^{(2)}+8H_kH_k^{(3)}+3\left(H_k^{(2)}\right)^2-6H_k^{(4)}}{(k+n+1)^2}\\=\frac{\partial}{\partial n}\frac{H_n^5+10H_n^3H_n^{(2)}+15H_n\left(H_n^{(2)}\right)^2+20H_n^2H_n^{(3)}+20H_n^{(2)}H_n^{(3)}+30H_nH_n^{(4)}+24H_n^{(5)}}{5}$$
By differentiating both sides with respect to $n$ three times then let $n\mapsto -1$, the result of $\mathcal S$ follows.
The identity from the master theorem used above can be found in the book (Almost) Impossible Integrals, Sums, and Series, page 291.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3406737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Minimal no of people to guarantee 4 consecutive chairs occupied at round table There are 2019 chairs at a round table. What is the minimum number of chairs that must be occupied such that there are some consecutive set of 4 chairs (or more) occupied.
The chairs can be divided into the sets $S_{k} =\{4k+1,4k+2,4k+3,4k+4\}$ for $k=0,\dots,503$ and $S_{504} = \{2017,2018,2019\}$.
If one seat is kept open in each set (at least 505 open seats), a sequence of 4 occupied seats may not appear. So, in the worst-case scenario, we could still find an arrangement in which 1514 people are seated and there is not even one quadruple of consecutive occupied seats, e.g.
$$
\begin{array}{c|c|c|c|c|c}
⚪ ⚫ ⚫ ⚫&⚪ ⚫ ⚫ ⚫& ...&⚪ ⚫ ⚫ ⚫&⚪ ⚫ ⚫&⚪ ⚫ ⚫ ⚫\\
S_0 & S_1 & ... & S_{503} & S_{504}&S_0
\end{array}
$$
Applying the pigeonhole principle:
filling more than $2019−505=1514$ seats, with
*at least 1513 seats filled for $S_0,…,S_{503}$, or
*1512 seats filled for $S_0,…,S_{503}$ and 3 for $S_{504}$,
guarantees us to find such 4 seats, since $1513>3\cdot 504=1512$.
In the second case, if we have 3 seats occupied in the last set, they will always be adjacent to an occupied seat in either $S_0$ or $S_{503}$.
This solution was insufficient, what did I miss?
| I think the only gap in your solution is the claim "If we have 3 seats occupied in the last set they will always be adjacent to an occupied seat in either $S_0$ or $S_{503}$." That need not be true; the first seat of $S_0$ and the last seat of $S_{503}$ could both be empty.
You then need to prove that in this case there are four adjacent occupied seats somewhere around the table. Since 1515 seats are occupied, and three of them are in $S_{504}$, exactly three seats must be occupied in each of $S_0$ through $S_{503}$ (or else one of them has all four occupied and we are done).
If the unoccupied seat of $S_1$ is not the first one, then we have four occupied seats in a row across $S_0$ and $S_1$. Proceeding around the table, we see that if the first seat of $S_i$ is unoccupied, then the first seat of $S_{i+1}$ must be unoccupied also; but this can't continue indefinitely since we know the unoccupied seat of $S_{503}$ is the last one.
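The threshold itself can be cross-checked by brute force on small circles: the maximum number of occupied chairs with no 4 in a row around a circle of $n$ chairs is $n-\lceil n/4\rceil$, which for $n=2019$ gives $1514$, so $1515$ people force 4 consecutive occupied chairs. A Python sketch (feasible only for small $n$; helper name ours):

```python
from itertools import product

def max_occupied(n, run=4):
    # largest number of occupied chairs in a circle of n chairs with
    # no `run` consecutive occupied chairs; brute force over 2^n states
    best = 0
    for seats in product((0, 1), repeat=n):
        if all(any(seats[(i + j) % n] == 0 for j in range(run))
               for i in range(n)):
            best = max(best, sum(seats))
    return best

for n in range(4, 13):
    assert max_occupied(n) == n - (-(-n // 4))   # n - ceil(n/4)
print(2019 - (-(-2019 // 4)))  # 1514, so 1515 people force 4 in a row
```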
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3406872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Motivation of the definition of Euclidean Domain We know the definition of Euclidean Domain is
A Euclidean domain is an integral domain $(E,+,\ast )$ together with a function $v: E\setminus \{0\} \to \mathbb{N} \cup \{0\} $ such that
(i) for all $a,b \in E$ with $b \neq 0$, there exist $q,r \in E$ such that $a = qb + r$ ,where $r=0$ or $v(r) \lt v(b)$
(ii) for all $a,b \in E \setminus \{0\}$, $v(a) \leq v(ab)$.
What is the motivation behind the definition of Euclidean Domain. Is it a generalisation of something?
| The motivation comes from a strong property of the natural numbers (and integers): the Euclidean algorithm, which allows one to find a gcd of two numbers.
The function $v$ is then just a degree function that gives a weight comparison between the "abstract" elements of the ring. For the integers, $v=|\cdot|$; for a ring of polynomials, it is the degree function.
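Condition (i) is exactly what makes the classical algorithm terminate: each division step produces a remainder of strictly smaller $v$-value. For the integers with $v=|\cdot|$ this is the familiar gcd loop:

```python
def euclidean_gcd(a, b):
    # repeated division with remainder; v(r) = |r| strictly decreases,
    # which is what condition (i) guarantees in any Euclidean domain
    while b != 0:
        a, b = b, a % b
    return abs(a)

print(euclidean_gcd(252, 198))  # 18
```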
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3406973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Proving a much stronger version of AM-GM for three variables Here, @MichaelRozenberg stated the following inequality without proof:
Theorem. For all non-negative $a,b,c\in\mathbb R$,
$$a^6+b^6+c^6-3a^2b^2c^2\geq16(a-b)^2(a-c)^2(b-c)^2.$$
In my answer below I give a complete brute-force proof. However, more elegant proofs are welcome.
| I will use the Buffalo method: By symmetry of the inequality, we can assume without loss of generality that $a\le b\le c$. So we can substitute $a=x,b=x+y,c=x+y+z$ for some non-negative reals $x,y,z$.
Putting this in our inequality and expanding results in
$$\color{green}{12 x^4 y^2 + 12 x^4 y z + 12 x^4 z^2 + 28 x^3 y^3 + 42 x^3 y^2 z + 54 x^3 y z^2 + 20 x^3 z^3 + 27 x^2 y^4 + 54 x^2 y^3 z + 87 x^2 y^2 z^2 + 60 x^2 y z^3 + 15 x^2 z^4 + 12 x y^5 + 30 x y^4 z + 60 x y^3 z^2 + 60 x y^2 z^3 + 30 x y z^4 + 6 x z^5} + \color{blue}{2 y^6 + 6 y^5 z - y^4 z^2 - 12 y^3 z^3 - y^2 z^4 + 6 y z^5 + z^6}\geq 0.$$
The green part is automatically greater or equal than $0$ because the $x,y,z$ are non-negative. Also,
$$\color{blue}{2 y^6 + 6 y^5 z - y^4 z^2 - 12 y^3 z^3 - y^2 z^4 + 6 y z^5 + z^6}=y^6+(y-z)^2(y+z)^2(y^2+6yz+z^2)\geq0.$$
This completes the proof.
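Although random sampling proves nothing, it is a cheap way to gain confidence in an inequality like this before (or after) proving it; the sketch below checks the theorem at many random non-negative points, with a small tolerance for floating-point error:

```python
import random

def lhs(a, b, c):
    return a**6 + b**6 + c**6 - 3 * a**2 * b**2 * c**2

def rhs(a, b, c):
    return 16 * (a - b)**2 * (a - c)**2 * (b - c)**2

random.seed(0)
for _ in range(20000):
    a, b, c = (random.uniform(0, 10) for _ in range(3))
    # tolerance absorbs floating-point rounding on near-tight cases
    assert lhs(a, b, c) >= rhs(a, b, c) - 1e-6 * max(1.0, rhs(a, b, c))
print("no counterexample found")
```

A near-tight case worth noting is $(a,b,c)=(0,1,2)$, where the two sides are $65$ and $64$.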
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3407096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
If two sides of a triangle are given, how many triangles are possible? I thought to myself that you could rotate one arbitrary side, which is connected to the other side, over 90 degrees to the left and 90 degrees to the right, so basically there are infinitely many triangles possible.
Am I right?
| I agree with the other answers but would like to present a geometric argument which is very similar to the analysis already indicated by the OP.
Assume that two of the sides have a length of $x$ and $r$, where $x$ and $r$ are both positive. Assume that one of the vertices of the triangle will have Cartesian coordinates $(x,0).$ Further assume that the origin [i.e. Cartesian coordinates $(0,0)$] will be a 2nd vertex of the triangle.
Now, consider a circle centered at the origin, of radius $r$. You can select any point on this circle as the third vertex of the triangle [except for the points $(r,0)$ and $(-r,0)$].
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3407177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Demonstrate that there are no perfect squares ending with $8$ A number $n$ will always end in some digit of the set $\{0,1,2,3,4,5,6,7,8,9\}$. The last digit of $n^2$ is the last digit of the square of its last digit. Like this:
$$\ldots 0^2 = \ldots 0$$
$$\ldots 1^2 = \ldots 1$$
$$\ldots 2^2 = \ldots 4$$
$$\ldots 3^2 = \ldots 9$$
$$\ldots 4^2 = \ldots 6$$
$$\ldots 5^2 = \ldots 5$$
$$\ldots 6^2 = \ldots 6$$
$$\ldots 7^2 = \ldots 9$$
$$\ldots 8^2 = \ldots 4$$
$$\ldots 9^2 = \ldots 1$$
Therefore, no perfect square ends in $8.$
I think my proof is pretty bad, is there anything more formal than that?
| Your proof is not bad. You could make a more formal argument with modular arithmetic.
For example, $ n\equiv 0, \pm1, $ or $\pm2 \pmod 5$,
so $n^2\equiv 0, 1, $ or $-1\pmod 5$,
so $5\nmid n^2-3,$ so $10\nmid n^2-8$.
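Both the digit table and the mod-$5$ argument are finite case checks, so they can be verified mechanically; a tiny sketch:

```python
# The last digit of n^2 depends only on the last digit of n,
# so checking the ten digits 0..9 covers all integers.
last_digits_of_squares = {n * n % 10 for n in range(10)}
print(sorted(last_digits_of_squares))  # [0, 1, 4, 5, 6, 9]
assert 8 not in last_digits_of_squares

# Equivalently, mod 5: n^2 is never congruent to 3 (and 8 is 3 mod 5).
assert all(n * n % 5 != 3 for n in range(5))
```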
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3407278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Formulate two equations as a system of first-order ODEs I have two equations that I want to formulate into a system of first-order ODEs.
\begin{cases}
\displaystyle
x''(t) = \frac{x(t)}{\Big( \sqrt{x(t)^2 + y(t)^2} \Big)^{3}} \\
\displaystyle
y''(t) = \frac{y(t)}{\Big( \sqrt{x(t)^2 + y(t)^2} \Big)^{3}}
\end{cases}
and I also know the following information:
$x(0)=-1$
$x'(0)=0$
$y(0) = 0$
$y'(0) = -1$
I have tried to solve this math problem by introducing $z$ and $w$ so that:
\begin{cases}
z(t) = x'(t)\\
w(t) = y'(t)\\
\displaystyle
z'(t) = x''(t) = \frac{x(t)}{(\sqrt{x(t)^{2} + y(t)^{2}})^{3}} \\
\displaystyle
w'(t) = y''(t) = \frac{y(t)}{(\sqrt{x(t)^{2} + y(t)^{2}})^{3}} \\
\end{cases}
and then updating the initial conditions accordingly:
$z(0) = x'(0) = 0$
$z'(0)= x(0)/(\sqrt{x(0)^{2} + y(0)^{2} })^{3} = -1$
$w(0) = y'(0) = -1$
$w'(0) = y(0)/(\sqrt{x(0)^{2} + y(0)^{2} })^{3} = 0$
Is this a valid solution to the problem?
The math book I use does not show what the correct answer is supposed to be.
| Yes, your work is correct.
Usually the system is written as
$$\begin{cases}
x'(t)=z(t) \\
y'(t)=w(t) \\
\displaystyle
z'(t) = \frac{x(t)}{(\sqrt{x(t)^{2} + y(t)^{2}})^{3}} \\
\displaystyle
w'(t) = \frac{y(t)}{(\sqrt{x(t)^{2} + y(t)^{2}})^{3}} \\
\end{cases}$$
Or they use $u_1, u_2, u_3, u_4$ for $x,y,z,w$
$$\begin{cases}
u_1'=u_3 \\
u_2'=u_4 \\
\displaystyle
u_3' = \frac{u_1}{(\sqrt{u_1^{2} + u_2^{2}})^{3}} \\
\displaystyle
u_4' = \frac{u_2}{(\sqrt{u_1^{2} + u_2^{2}})^{3}} \\
\end{cases}$$
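As a sanity check that the first-order system is ready for a numerical solver, here is a minimal sketch integrating it with a classical fourth-order Runge-Kutta step in plain Python (in practice one would hand the same right-hand side to a library routine such as MATLAB's ode45 or SciPy's solve_ivp; the step size and final time are illustrative). For this right-hand side the quantity $E=\tfrac12(z^2+w^2)+1/\sqrt{x^2+y^2}$ is conserved, which gives a convenient correctness check:

```python
import math

def rhs(s):
    # s = (x, y, z, w) with z = x', w = y', as in the system above
    x, y, z, w = s
    r3 = math.hypot(x, y) ** 3
    return (z, w, x / r3, y / r3)

def rk4_step(s, h):
    add = lambda s, k, c: tuple(si + c * ki for si, ki in zip(s, k))
    k1 = rhs(s)
    k2 = rhs(add(s, k1, h / 2))
    k3 = rhs(add(s, k2, h / 2))
    k4 = rhs(add(s, k3, h))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def energy(s):
    x, y, z, w = s
    return 0.5 * (z * z + w * w) + 1.0 / math.hypot(x, y)

s = (-1.0, 0.0, 0.0, -1.0)          # x(0), y(0), x'(0), y'(0)
e0 = energy(s)                       # equals 1.5 for these initial data
for _ in range(1000):
    s = rk4_step(s, 0.001)           # integrate up to t = 1
assert abs(energy(s) - e0) < 1e-9    # energy conserved to high accuracy
```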
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3407429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Relating orthogonal column distances to basis distances I have two $d\times r$ matrices $X$ and $Y$, with $X^TX=Y^TY=I_r.$ It is also known that the columns $X_i$ and $Y_i$ for $i=1,\dots,r$ satisfy $$\underset{i}{\max}\Vert X_i-Y_i\Vert_2\le \varepsilon.$$ Can I then say that $\Vert XX^T-YY^T\Vert_2\le c\varepsilon,$ for some constant $c$ (free of $d$ and $r$)?
The last norm refers to the spectral norm, i.e., $\Vert A\Vert_2=\underset{v^Tv=1}{\sup}\Vert Av\Vert_2.$
| No, there is no such $c$ in general. Consider the case $d = r+1$ (though the following construction works in any $d > r$). Let $e_1, \dots, e_{r+1}$ be the the standard basis vectors of $\mathbb{R}^{r+1}$ (considered as column vectors), so they are orthonormal. Also define $u = \frac{1}{\sqrt{r}}e_{r+1} - \frac{1}{r}(e_1 + \cdots + e_r)$. Now consider the $(r+1) \times r$ matrices $X$ and $Y$ given by the column vectors $X_i = e_i$ and $Y_i = e_i + u$ for $i = 1, \dots, r$. The $X_i$'s are clearly orthonormal, and with a bit of work you can check that the $Y_i$'s are orthonormal as well, giving $X^TX = Y^TY = I_r$. Note also that $\|X_i - Y_i\|_2 = \|u\|_2 = \sqrt{\frac{2}{r}}$ for each $i$, so we can set $\varepsilon = \sqrt{\frac{2}{r}}$. However, for each $i$ we have $X_i^T e_{r+1} = 0$ and $Y_i^T e_{r+1} = \frac{1}{\sqrt{r}}$, hence $XX^T e_{r+1} = \sum_{i=1}^r X_i X_i^T e_{r+1} = 0$, while $YY^T e_{r+1} = \sum_{i=1}^r Y_i Y_i^T e_{r+1} = \frac{1}{\sqrt{r}} \sum_{i=1}^r Y_i = e_{r+1}$, so $(XX^T - YY^T)e_{r+1} = -e_{r+1}$, which means $\|XX^T - YY^T\|_2 \geq 1 = \varepsilon \sqrt{\frac{r}{2}}$. Since we can take $r \to \infty$, there is no $c$ such that $\|XX^T - YY^T\|_2 \leq c\varepsilon$ always holds.
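The construction is concrete enough to verify numerically; a small plain-Python sketch (the choice $r=4$, $d=5$ is illustrative) checking orthonormality, the value of $\varepsilon$, and the action on $e_{r+1}$:

```python
import math

r = 4
d = r + 1
u = [-1.0 / r] * r + [1.0 / math.sqrt(r)]                           # the vector u above
X = [[1.0 if i == j else 0.0 for i in range(d)] for j in range(r)]  # columns X_j = e_j
Y = [[X[j][i] + u[i] for i in range(d)] for j in range(r)]          # columns Y_j = e_j + u

dot = lambda a, b: sum(p * q for p, q in zip(a, b))

# X^T X = Y^T Y = I_r
for i in range(r):
    for j in range(r):
        assert abs(dot(X[i], X[j]) - (1.0 if i == j else 0.0)) < 1e-12
        assert abs(dot(Y[i], Y[j]) - (1.0 if i == j else 0.0)) < 1e-12

# ||X_i - Y_i|| = ||u|| = sqrt(2/r) for every i, so eps = sqrt(2/r)
eps = math.sqrt(2.0 / r)
assert abs(math.sqrt(dot(u, u)) - eps) < 1e-12

# XX^T e_{r+1} = 0 while YY^T e_{r+1} = e_{r+1}
e_last = [0.0] * r + [1.0]
XXt_e = [sum(c[k] * dot(c, e_last) for c in X) for k in range(d)]
YYt_e = [sum(c[k] * dot(c, e_last) for c in Y) for k in range(d)]
assert all(abs(v) < 1e-12 for v in XXt_e)
assert all(abs(v - w) < 1e-12 for v, w in zip(YYt_e, e_last))
```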
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3407580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $f(f(x))$ is linear, is $f(x)$ linear? (If $f$ maps the set of natural numbers to itself?) If I have a function defined on the natural numbers $\mathbb Z_{>0}$, does $f(f(x))=ax+b$ imply that $f(x)=rx+q$, where $a,b,r,q$ are naturals? This came to mind in the context of a different problem, but I don't know where to start.
| No: consider the function $f$ that swaps every odd natural number with its successor. Then $f\circ f=\mathrm{id}$, which is linear, but $f$ is not of the form $rx+q$ (indeed, it is not even monotone).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3407729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are Cosets Isomorphic to One Another A basic question, I seem to be struggling with the concept of quotient groups and cosets.
Suppose $G$ is a group, and $N$ is a normal subgroup.
I know that is $x$ and $y$ are in the same coset of $G/N$, then $xN = yN$.
I also know that cosets are either disjoint or equivalent, $G/N$ is the set of cosets. Also, $G$ can be written as a disjoint union of the cosets.
Is it correct that all cosets are isomorphic to $N$, since they are of the form $xN$ for some $x \in G$?
Or is it possible for more elements of $G$ to be in one coset than in another?
| A coset $xN$ of $N\unlhd G$ with $xN\neq N$ cannot be isomorphic to $N$, since $xN$ has no identity element (why?) and is thus not even a group.
There is, however, a bijection $\varphi:N\to xN$ given by $n\mapsto xn$. So the sets are of the same cardinality.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3407904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to write $x^m+x^{-m}$ as a polynomial in $x+x^{-1}$. I did find this question Proving that $x^m+x^{-m}$ is a polynomial in $x+x^{-1}$ of degree $m$. but it only shows by induction that this is possible, not the actual form of the solution. How, for example, would you write $x^2+x^{-2}$ in a polynomial of $x+x^{-1}$?
| The inductive argument does tell you how to do this - and, it's worthwhile to see that induction very often gives you the answers you are after, especially since the fact that inductive proofs can be unrolled is often overlooked.
In the linked answer, it is noted that
$$x^{k+1}+x^{-k-1} = (x^k +x^{-k})(x+x^{-1}) - (x^{k-1} + x^{-k+1}).$$
and an inductive argument is built from this, but remember that the inductive hypothesis is just that, for each $k$, there exists some polynomial $P_k$ such that $P_k(x+x^{-1})= x^k+x^{-k}$. If we fill out the proof more completely, the implication here is that
$$x^{k+1}+x^{-k-1} = P_k(x+x^{-1})\cdot (x+x^{-1}) - P_{k-1}(x+x^{-1}).$$
And then we see that the left hand side is a polynomial in $x+x^{-1}$ as well - but we can be more explicit: Let
$$P_{k+1}(z)=zP_k(z)-P_{k-1}(z)$$
where we start the sequence as $P_0(z)=2$ and $P_1(z)=z$. Then, we have
$$P_k(x+x^{-1})=x^k+x^{-k}$$
due to the inductive argument. Note that this sequence is very easy to compute incrementally:
$$P_2(z)=z\cdot P_1(z) - P_0(z) = z^2 - 2$$
$$P_3(z)=z\cdot P_2(z) - P_1(z) = z^3 - 3z$$
$$P_4(z)=z\cdot P_3(z) - P_2(z) = z^4 - 4z^2 + 2$$
and so on.
As pointed out in the comments, it is possible to write these terms in a general form, although it's a bit surprising that the form you get is actually a polynomial. I won't go into details, since I'm using the usual tools for solving linear homogenous recurrences:
$$P_k(z) = \frac{\left(z - \sqrt{z^2-4}\right)^k + \left(z + \sqrt{z^2-4}\right)^k}{2^k}$$
Note that the $\sqrt{z^2-4}$ terms cancel out due to the symmetry of the sum and this does always leave a polynomial.
(This sequence does have a name: the $P_k$ are the Dickson polynomials $D_k(z,1)$, closely related to the Chebyshev polynomials via $P_k(2\cos\theta)=2\cos(k\theta)$.)
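The recurrence is easy to run mechanically; a short sketch that builds the coefficient lists of the $P_k$ and checks $P_k(x+x^{-1})=x^k+x^{-k}$ at a sample point (the value $x=1.7$ is arbitrary):

```python
# Coefficient lists, index = power of z: P0 = 2, P1 = z.
P = [[2], [0, 1]]
for k in range(2, 7):
    prev, prev2 = P[k - 1], P[k - 2]
    nxt = [0] + prev[:]                     # z * P_{k-1}
    for i, c in enumerate(prev2):           # minus P_{k-2}
        nxt[i] -= c
    P.append(nxt)

assert P[2] == [-2, 0, 1]                   # z^2 - 2
assert P[3] == [0, -3, 0, 1]                # z^3 - 3z
assert P[4] == [2, 0, -4, 0, 1]             # z^4 - 4z^2 + 2

evaluate = lambda coeffs, z: sum(c * z**i for i, c in enumerate(coeffs))
x = 1.7
z = x + 1 / x
for k in range(7):
    assert abs(evaluate(P[k], z) - (x**k + x**-k)) < 1e-9
```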
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3407996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Prove that in the ring $Z[\sqrt2]$: $\langle3+8\sqrt2, 7\rangle$ = $\langle3+\sqrt2\rangle$ So,
I know that $\langle a\rangle$ = $aR$ and $\langle a,b\rangle$ = $aR + bR$ for any ring R.
I then multiplied each side by $\sqrt2$ and computed them.
These are the steps that I took:
$\langle3+8\sqrt2, 7\rangle$ = $\langle3+\sqrt2\rangle$
$(3+8\sqrt2)\sqrt2 + 7\sqrt2 = (3+\sqrt2)\sqrt2$
$3\sqrt2 + 16 + 7\sqrt2 = 3\sqrt2 + 2$
$16 + 10\sqrt2 = 2+3\sqrt2$
I'm not really sure where to go from here.
Any help would be much appreciated.
| Hint: To show $\ \langle 3+8\sqrt{2}, 7\rangle=\langle 3+\sqrt{2}\rangle\ $, you need to show:
*
*$\ 3+8\sqrt{2} = r_1\left(3+\sqrt{2}\right)\ $ and $\ 7 = r_2\left(3+\sqrt{2}\right)\ $ for some $\ r_1\ $ and $\ r_2\ $ in $\ \mathbb{Z}\left[\sqrt{2}\right]\ $; and
*$\ 3+\sqrt{2}=s_1\left(3+8\sqrt{2}\right) + s_2 7\ $ for some $\ s_1\ $ and $\ s_2\ $ in $\ \mathbb{Z}\left[\sqrt{2}\right]\ $.
The first part is very straightforward: simply calculate $\ r_1=\frac{3+8\sqrt{2}}{3+\sqrt{2}}\ $ and $\ r_2=\frac{7}{3+\sqrt{2}}\ $ in $\ \mathbb{Q}\left[\sqrt{2}\right]\ $, and check to see whether the results both lie in $\ \mathbb{Z}\left[\sqrt{2}\right]\ $. The second is not much more difficult. A solution with $\ s_1\ $ a small integer, and $\ s_2\ $ a small multiple of $\ \sqrt{2}\ $, is fairly easy to find by trial and error.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3408167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Calculate $[K(\alpha^2):K]$ Let $K\subseteq F$ be a field extension and $K(\alpha)/K$ an extension of degree $4$, where $\alpha^2$ is not a root of $m_{(\alpha,K)}(x)$. I am asked to calculate $[K(\alpha^2):K]$
Using the towers formula $$[K(\alpha^2):K]=[K(\alpha^2):K(\alpha)][K(\alpha):K]$$
And we already know that $[K(\alpha):K]=4$
How do I find $[K(\alpha^2):K(\alpha)]$?
| The correct tower formula is
$$4=[K(\alpha):K]=[K(\alpha):K(\alpha^2)][K(\alpha^2):K]$$
Since $[K(\alpha):K(\alpha^2)]\le2$, we have $[K(\alpha^2):K]\ge 2$ and so there are two possibilities:
*
*$[K(\alpha^2):K]=2$: this happens iff $K(\alpha^2)\ne K(\alpha)$. For instance, for $K=\mathbb Q$ and $\alpha$ a root of $x^4-2$. Then $\alpha^2$ is a root of $x^2-2$ and is not a root of the minimal polynomial of $\alpha$.
*$[K(\alpha^2):K]=4$: this happens iff $K(\alpha^2)=K(\alpha)$. For instance, for $K=\mathbb Q$ and $\alpha$ a root of $x^4-2x-2$. Then $\alpha^2$ is a root of $x^4 - 4 x^2 - 4 x + 4$, which is irreducible, and so $\alpha^2$ is not a root of the minimal polynomial of $\alpha$.
$x^4-2x-2$ was chosen because then clearly $\alpha$ is a polynomial in $\alpha^2$ and so $K(\alpha^2)=K(\alpha)$. It is irreducible by Eisenstein's criterion with $p=2$.
$x^4 - 4 x^2 - 4 x + 4$ is the characteristic polynomial of the map $x \mapsto \alpha^2 x$ and so must be irreducible since $\alpha^2$ has degree $4$.
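The second example can be sanity-checked numerically: find the real root of $x^4-2x-2$ in $(1,2)$ by bisection and confirm that its square satisfies $x^4-4x^2-4x+4$. A quick sketch:

```python
f = lambda x: x**4 - 2*x - 2
g = lambda t: t**4 - 4*t**2 - 4*t + 4

# Bisection: f(1) = -3 < 0 and f(2) = 10 > 0, so a root lies in (1, 2).
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) <= 0:
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2

assert abs(f(alpha)) < 1e-9          # alpha is (numerically) a root of x^4 - 2x - 2
assert abs(g(alpha**2)) < 1e-7       # and alpha^2 is a root of x^4 - 4x^2 - 4x + 4
```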
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3408253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Understanding how $f(x) = C^{\log(x)}$ is a polynomial function Is $f(x) = C^{\log(x)}$ a polynomial function?
I assume it must be because I learnt that you can rewrite this as $f(x) = x^{\log(C)}$ and given that $\log(C)$ is a constant, this looks like a polynomial function. However, I'm struggling to understand intuitively why $C^{\log(x)}$ is polynomial, because it has a variable exponent in it, and I thought that all polynomial functions have a constant exponent in them.
| That function is not a polynomial, but its asymptotic growth is polynomial, since (as you show) it can be written as a constant power of $x$.
The fact that it can also be written some other way that does not look like polynomial growth is what makes the question interesting.
The function
$$
f(x) = x^{2+ \sin(x)}
$$
is not a polynomial but is $O(x^3)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3408359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How can the integral $\int_0^\frac{\pi}{2} x \cos ^n{x}\, dx$ be reduced? I tried to write $\cos^n x$ as $\cos x \cos^{n-1} x$ and then integrate by parts with the first function being $\cos^{n-1} x$ and the second function being $x \cos x$. I was able to solve up to a point but then I got stuck, since I was not able to find a term which I could write as $I_{n-k}$, assuming that the integral in the title is $I_n$ and $k$ is any integer less than $n$. Am I taking incorrect functions to integrate by parts, or do I need to do any trigonometric manipulation using the limits of the integral?
| To get a recurrence for the given integral, call it $I_n$, observe that $$I_{n-2}-I_n=\int_0^{\pi/2}x\cos^{n-2}x\sin^2 x\,dx=\int_0^{\pi/2}(x\sin x)(\cos^{n-2}x\sin x)\,dx,$$ which is ready to be integrated by parts: $$I_{n-2}-I_n=\frac{1}{n-1}\int_0^{\pi/2}\cos^{n-1}x(\sin x+x\cos x)\,dx=\frac{1}{n-1}\left(\frac{1}{n}+I_{n}\right),$$ yielding $I_n=\left(1-\frac{1}{n}\right)I_{n-2}-\frac{1}{n^2}$.
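The resulting recurrence can be checked numerically; a small sketch using composite Simpson's rule in plain Python (the closed forms $I_0=\pi^2/8$ and $I_1=\pi/2-1$ serve as extra checks):

```python
import math

def simpson(f, a, b, m=2000):
    # Composite Simpson's rule with m (even) subintervals.
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, m // 2))
    return s * h / 3

def I(n):
    return simpson(lambda x: x * math.cos(x) ** n, 0.0, math.pi / 2)

# Closed-form checks: I_0 = pi^2/8 and I_1 = pi/2 - 1.
assert abs(I(0) - math.pi**2 / 8) < 1e-10
assert abs(I(1) - (math.pi / 2 - 1)) < 1e-10

# The recurrence I_n = (1 - 1/n) I_{n-2} - 1/n^2 holds for n = 2..6:
for n in range(2, 7):
    assert abs(I(n) - ((1 - 1/n) * I(n - 2) - 1/n**2)) < 1e-10
```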
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3408514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is the basic lemma on composition of probability generating functions? What is the basic lemma on composition of probability generating functions and how is it most clearly proved?
(I'm posting this mainly to see if I can write an answer as clearly and simply as possible, but maybe other people know things about this that I don't and can post answers from their points of view.)
| Do you mean this? Let $S = \sum_{i\le N} Y_i$ where $N, Y_1, Y_2, \ldots$ are independent integer-valued random variables where $N\ge 0$ a.s., and $\mathbb E[z^N] = f(z)$ while $\mathbb E[z^{Y_i}] = g(z)$, then
$$ \mathbb E[z^S] = f(g(z))$$
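For finitely supported $N$ the lemma can be checked by direct enumeration; a small sketch with $N$ supported on $\{0,1,2\}$ and $Y_i\sim\mathrm{Bernoulli}(0.4)$ (both distributions are illustrative choices):

```python
from math import comb, isclose

pN = {0: 0.2, 1: 0.5, 2: 0.3}        # pmf of N
p = 0.4                               # Y_i ~ Bernoulli(p), i.i.d.

f = lambda z: sum(pr * z**n for n, pr in pN.items())   # E[z^N]
g = lambda z: (1 - p) + p * z                          # E[z^Y]

# pmf of S = Y_1 + ... + Y_N by conditioning on N (binomial sums):
pS = {}
for n, pr in pN.items():
    for s in range(n + 1):
        pS[s] = pS.get(s, 0.0) + pr * comb(n, s) * p**s * (1 - p)**(n - s)

pgf_S = lambda z: sum(pr * z**s for s, pr in pS.items())

# E[z^S] agrees with f(g(z)) at several points:
for z in (0.0, 0.3, 1.0, 2.0):
    assert isclose(pgf_S(z), f(g(z)), abs_tol=1e-12)
```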
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3408684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Burgers' equation / Finite difference method Let's consider the Burgers' equation:
$$u_t(x,t)+u(x,t).u_x(x,t)=\epsilon.u_{xx}(x,t) \; , \; x\in \mathbb{R} \; ,\;t>0$$
And with the following initial condition:
$$u(x,0)=1_{\{x<0\}}(x)$$
I want to solve it numerically (using Matlab, C, ... or any software) and obtain a graph of the solution.
The problem is that the equation is not linear, so can the finite difference method work in this case?
| Why would a finite difference method not work? It can even be applied to the non-linear Navier-Stokes equations in multiple dimensions. Non-linearity is not an impediment even with time-implicit schemes as a system of non-linear difference equations can be solved iteratively.
As a starting point, a simple explicit discretization is
$$\frac{u_j^{n+1} - u_j^n}{\Delta t} + u_j^n \frac{u_{j}^n- u_{j-1}^n}{\Delta x} =\epsilon \frac{u_{j+1}^n- 2u_j^n + u_{j-1}^n}{(\Delta x)^2} $$
There are many variations, both explicit and implicit, including leap frog, Lax-Wendroff, etc. They have different properties in terms of accuracy, stability and efficiency.
An internet search with keywords "Burgers' equation" and "finite difference method" will provide many references.
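The explicit discretization above takes only a few lines in any language; a minimal plain-Python sketch for the step initial condition of the question (the grid, time step and $\epsilon$ are illustrative choices, with the time step kept within both the convective and diffusive stability limits):

```python
eps = 0.1
J, L = 81, 2.0                       # grid points on [-L, L]
dx = 2 * L / (J - 1)
dt = 0.005                           # dt <= dx/max|u| = 0.05 and dt <= dx^2/(2*eps) = 0.0125

x = [-L + j * dx for j in range(J)]
u = [1.0 if xj < 0 else 0.0 for xj in x]   # u(x,0) = 1_{x<0}

for _ in range(100):                 # advance to t = 0.5
    un = u[:]
    for j in range(1, J - 1):
        conv = un[j] * (un[j] - un[j - 1]) / dx             # upwind convection
        diff = (un[j + 1] - 2 * un[j] + un[j - 1]) / dx**2  # central diffusion
        u[j] = un[j] - dt * conv + eps * dt * diff
    # Dirichlet boundaries far from the front stay fixed: u(-L)=1, u(L)=0

assert all(-1e-9 <= v <= 1 + 1e-9 for v in u)               # stays in [0, 1]
assert all(u[j] >= u[j + 1] - 1e-12 for j in range(J - 1))  # monotone front
```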
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3408816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A proof that $f(K)\le H$ if $K\le G$ for an isomorphism $f: G\to H$.
Let $f:G \rightarrow H$ be a group isomorphism, and $K \subset G$ be a subgroup. Show that $f(K) \subset H$ is a subgroup.
My attempt:
Let $a \in K$. Then since $K$ is a subgroup, $a^{-1} \in K \rightarrow f(a^{-1}) \in f(K)$. Therefore $f(K)$ is closed under inverse. Similarly, $e \in K$ by definition, so $f(e) \in f(K)$, thus $f(K)$ is closed under identity. Now consider $a,b \in K$. Then $ab \in K$ by definition, and so $f(ab) = f(a)f(b) \in f(K)$ since $f$ is a homomorphism. Therefore $f(K)$ is closed under multiplication and thus a subgroup $\blacksquare$
I do not know if I am playing too fast and loose with applying $f$ in the proof.
| A critique of your proof is already given in the comments.
Here's another way to prove the theorem. It uses the one-step subgroup lemma.
We have $e\in K$ since $K\le G$. Thus $f(e)\in f(K)$, so $f(K)$ is nonempty. As the image of an isomorphism, $f(K)$ is a subset of $H$.
Now let $a, b\in f(K)$. Then $a=f(h), b=f(k)$ for some $h, k\in K$. Therefore,
$$\begin{align}
ab^{-1}&=f(h)f(k)^{-1} \\
&=f(h)f(k^{-1}) \\
&=f(hk^{-1}),
\end{align}$$
but $hk^{-1}\in K$ as $K\le G$. Hence $ab^{-1}\in f(K)$.
Hence $f(K)\le H$.
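As a concrete sanity check of the statement, here is a tiny sketch with $G=H=\mathbb Z_6$ under addition mod $6$, the automorphism $f(x)=5x \bmod 6$, and $K=\{0,2,4\}$ (all choices are illustrative):

```python
G = set(range(6))
add = lambda a, b: (a + b) % 6
inv = lambda a: (-a) % 6
f = lambda a: (5 * a) % 6            # an automorphism of Z_6 since gcd(5, 6) = 1

def is_subgroup(S):
    # One-step subgroup test: nonempty, and closed under a * b^{-1}.
    return bool(S) and S <= G and all(add(a, inv(b)) in S for a in S for b in S)

K = {0, 2, 4}
fK = {f(a) for a in K}
assert is_subgroup(K)
assert is_subgroup(fK)               # the image is again a subgroup

# f is an isomorphism: bijective and f(a+b) = f(a) + f(b)
assert {f(a) for a in G} == G
assert all(f(add(a, b)) == add(f(a), f(b)) for a in G for b in G)
```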
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3408935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the infimum and supremum of $\{x\in Q \mid x² < 9\}$ I have to find (and prove) the infimum and supremum of the following set:
$M_1:=\{x\in\mathbb{Q} \mid x^2 < 9\}$
On first glance, I would say:
$\inf M_1=-3 $
$\sup M_1=3$
Now I have to prove that these really are the infimum and supremum of the set, and that's the point where I'm having problems. According to the definition of $\inf$ and $\sup$, this means, that $-3$ is the biggest lower bound and 3 is the lowest upper bound:
$\forall x\in M_1: -3 \leq x \leq 3$
We can see, that -3 and 3 are not elements of M1, which means:
$\forall x\in M_1:-3<x<3$
But how can I show that -3 and 3 are the $\textbf{biggest / smallest}$ bound? I mean, for example, what if there is a number bigger than -3 that acts like a lower bound to the set? Obviously there isn't a bigger lower bound, but how can I mathematically show it? Do you guys have any advice? Thanks in advance, and sorry for my English :D
| Let $I= \{ x\in\mathbb{Q} : x^2 <9\}$. Since $x^2<9\Rightarrow -3 < x < 3$, the numbers $-3$ and $3$ are a lower and an upper bound for $I$. Suppose, for contradiction, that $\sup I = 3-p$ for some $p>0$. By the density of the rationals, we may choose a rational $q$ with $3-p<q<3$; then $q\in I$ but $q>\sup I$, a contradiction. Hence $p=0$, i.e. $\sup I = 3$. Symmetrically, if $\inf I = -3+p$ for some $p>0$, density gives a rational $q\in I$ with $-3<q<-3+p$, so $q<\inf I$, again a contradiction; hence $\inf I = -3$.
Notice that this argument shows directly that no upper bound smaller than $3$ and no lower bound larger than $-3$ exists, which is exactly the definition of supremum and infimum.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3409068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Definite integration evaluation of $\int_0^{\pi/2} \frac{\sin^2(x)}{(b^2\cos^2(x)+a^2 \sin^2(x))^2}~dx$. $$\int_0^{\pi/2} \frac{\sin^2(x)}{(b^2\cos^2(x)+a^2 \sin^2(x))^2}~dx$$
how to proceed please help
The answer given is $\dfrac{\pi}{4a^3b}$.
| Here is my solution:
$$\int_0^\frac{\pi}{2}\frac{\sin^2 x}{(b^2\cos^2 x+a^2\sin^2 x)^2}dx=\int_0^\frac{\pi}{2}\frac{\tan^2 x\sec^2 x}{(b^2 +a^2\tan^2 x)^2}dx\overset{\tan x=t}=\int_0^\infty \frac{t^2}{(b^2+a^2 t^2)^2}dt$$
$$\overset{at=bp}=\frac{1}{a^3b}\int_0^\infty\frac{p^2}{(p^2+1)^2} dp=\frac{1}{a^3b} \left(\int_0^\infty \frac{1}{p^2+1}dp- \underbrace{\int_0^\infty \frac{1}{(p^2+1)^2}dp}_{p=\tan \theta }\right)$$
$$=\frac{1}{a^3b}\left(\arctan p\bigg|_0^\infty-\int_0^\frac{\pi}{2}\frac{1+\cos(2\theta)}{2}d\theta\right)=\frac{1}{a^3b}\left(\frac{\pi}{2}-\frac{\pi}{4}\right)=\frac{\pi}{4a^3b}$$
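The closed form is easy to validate numerically; a quick sketch comparing composite Simpson's rule against $\frac{\pi}{4a^3b}$ for the illustrative values $a=2$, $b=3$:

```python
import math

def simpson(f, lo, hi, m=2000):
    # Composite Simpson's rule with m (even) subintervals.
    h = (hi - lo) / m
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2*i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(lo + 2*i * h) for i in range(1, m // 2))
    return s * h / 3

a, b = 2.0, 3.0
integrand = lambda x: math.sin(x)**2 / (b**2 * math.cos(x)**2 + a**2 * math.sin(x)**2)**2
numeric = simpson(integrand, 0.0, math.pi / 2)
closed = math.pi / (4 * a**3 * b)
assert abs(numeric - closed) < 1e-10
```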
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3409447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Curious about definition of well-ordered set We know that the definition of a well-ordered set guarantees the existence of least element in every non-empty subset of a set.
This isn't a creative or good question, but I am just wondering why the definition uses the existence of the least element rather than the greatest element, or even both. Anyone shedding a bit of light would be really appreciated.
| About the greatest element: well-ordering generalizes, in a sense, the natural numbers, which have no maximum, so it makes sense to use the least element.
In addition, it makes sense when looking at well-founded sets: a well-founded set, ordered by $\in$, has a minimal element (not a minimum; it need not be unique). That is, if $A$ is a non-empty well-founded set, then there exists an element $a\in A$ such that $a\cap A=\emptyset$. Well-founded sets (under $\in$) have the property that $A\notin A$.
Well-ordered sets under $\in$ are just like well-founded ones, but with the property that the minimal element is unique.
If you change it to a maximal element, then well-founded (by $\in$) sets would have the property that there exists $a\in A$ such that $a$ is not an element of any other $b\in A$.
A set $A$ being well-founded (by $\in$) implies there is no sequence $x_i$ such that $x_0\ni x_1\ni\cdots\ni x_n\ni\cdots$ where all $x_i$ are in $A$; if you replace it by a maximal element, it would say that there is no sequence $x_i$ such that $x_0\in x_1\in\cdots\in x_n\in\cdots$ with $x_i\in A$ for all $i$. But by the axiom of infinity, $\omega$ (the set of natural numbers) exists, and it is such that $0\in 1\in\cdots$, and all of those are in $\omega$.
So it doesn't make sense to change it to the greatest element.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3409535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Does $R\subseteq A\times A$ being antisymmetric imply the same for $S$? Given two sets $A, B$ let $f:A\to B$ be surjective and suppose $R\subseteq A\times A$ is an antisymmetric relation. Does it follow that $S=\{(b,b')\in B\times B\ \vert \exists a,a'\in A: a R a', f(a) = b, f(a') = b' \}$ is also antisymmetric?
Here's my work. If $R=\emptyset$ then $S=\emptyset$ and they're both antisymmetric.
So let $R\neq\emptyset$ be antisymmetric and suppose $b,b'\in B$ satisfy $b S b'$ and $b' S b$. Then there exist $a_1,a_2,a_3,a_4\in A$ such that $a_1 R a_2, a_3R a_4$ and $f(a_1) = b = f(a_4), f(a_2)= b'=f(a_3).$ Since in general we may assume that the $a_i$ are distinct and $f$ is not injective, $b= b'$ does not follow.
Is this correct, and enough? Or should I construct an explicit counterexample?
| You've done an appropriate amount of work to be done, but you should express your work as a counterexample rather than as a condition that any counterexample would have. For instance, your $A=\{a_1, a_2,a_3,a_4\}$, $B=\{b,b'\}$, and so on. Then if you explicitly show that your order on $A$ is antisymmetric but your order on $B$ is not, then you're done.
If you would rather have a counterexample with a little less of an artificially constructed feel to it...
Think about $f=(x\mapsto|x|:\mathbb Z\to\mathbb N)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3409880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Definition of Tychonoff space or $T_{3\frac{1}{2}}$ space Definition: A topological space $X$ which is $T_1$ space is called Tychonoff space or completely regular space ($T_{3\frac{1}{2}}$ space) if for any point $x\in X$ and $A$ closed in $X$ with $x\notin A$ exists continuous function $f:X\to [0,1]$ such that $f(x)=1$ and $f(A)=\{0\}$.
Let me ask you a stupid question please: If we want to show that some topological space $X$ is Tychonoff space we always have to consider closed set $A$ in $X$ which is not empty right?
Because if $A=\varnothing$ then $f(\varnothing)=\{0\}$ does not make sense, right?
| Right.
Though you could circumvent this problem if you require $f(A)\subseteq\{0\}$, instead of $f(A)=\{0\}$. If $A$ is empty then $f(\varnothing)=\varnothing\subseteq\{0\}$.
The definition requires that you can find a suitable $f$ (in general depending on $A$ and $x$) for every $x$ and every non-empty closed $A\subseteq X\setminus\{x\}$ (or every closed $A\subseteq X\setminus\{x\}$, if you adopt the version with $f(A)\subseteq\{0\}$), not just for one closed set $A$. So you cannot say: I found an $f$ that works for $A=\varnothing$, so I don't have to worry about other closed sets $A$. (That is, we should consider all $x\in X$ and all closed sets $A\subseteq X\setminus\{x\}$, empty or non-empty depending on which version of the definition you adopt, and find a suitable $f$ for each such $x$ and $A$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3409993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Extinction probability in a population with competition Suppose we have a colony of bacteria. At the end of each day, each bacterium produces an exact copy of itself with probability $p$ and then dies with probability $q$. However, $q$ is not constant, but a function of $N$, the total number of bacteria:
$$q=p\bigg(1-\frac{1}{N}\bigg)$$
So in larger populations of bacteria, each bacterium is more likely to die (because of competition, say).
To clarify, $N$ counts the number of bacteria before new ones were born. For instance, if there are $2$ bacteria on one day and they both reproduce to form $4$ bacteria, both of them still have exactly $p/2$ chance of dying (not $3p/4$). And the babies that have just been born cannot die immediately.
Let $P_N$ be the probability that a bacteria colony consisting of $N$ bacteria initially eventually goes extinct. Can we find an asymptotic formula for $P_N$? I suspect that we will have
$$P_N\sim \alpha^N$$
for some $\alpha$, but I don’t know how to calculate this constant.
I did manage to figure out that if we keep $q$ constant, then the probability of eventual extinction starting with $N$ bacteria is exactly equal to
$$\bigg(1-\frac{p-q}{p(1-q)}\bigg)^N$$
for $p>q$, and equal to $1$ for $p\le q$. But that problem was much easier because “newborn” bacteria were independent from their parents, whereas in this problem the chance of each bacterium’s survival is dependent on the overall population size.
So, really my question is: what is the value of
$$\lim_{N\to\infty}P_N^{1/N}=\space ?$$
| (query on the exact terms of the problem, too long for a comment)
One bacterium, at time $t$, with an extant population $n(t)$, has
- $p$, constant, the probability of generating an additional bacterium, contributing $+1$ to $n(t+1)$;
- $q=p(1-1/n(t))$, depending on $n(t)$, the probability of dying, contributing $-1$ to $n(t+1)$;
- and consequently $r=1-p-q$, the probability of simply surviving, contributing $0$ to $n(t+1)$.
This is a classical birth-death process, which fundamentally is continuous in time when referring to living organisms such as bacteria.
The standard approach is to assume that in a small interval $\Delta t$ the probability of having more than one birth/death is negligible (a higher-order infinitesimal w.r.t. $\Delta t$).
In the post, the example of two bacteria that replicate in one day hints that the adopted discretization in time is not obeying to the above hypothesis.
Although it would be possible to reduce the time unit from a day to an hour, or even less, so that the hypothesis becomes realistic, it seems that the OP is considering the one-day lapse as a sort of "juvenile quiescence" of no fertility and no mortality.
Is this the correct interpretation of the scheme being adopted ?
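Whichever interpretation is adopted, the discrete model exactly as stated in the question is straightforward to simulate; a minimal Monte-Carlo sketch (run count and horizon are illustrative, and runs that survive the horizon are counted as non-extinct, so the estimates slightly undercount $P_N$):

```python
import random

def extinction_prob(N0, p, runs=2000, horizon=300, rng=random.Random(0)):
    # Deaths use q computed from the count *before* births, and only the
    # original n bacteria can die, matching the model in the question.
    extinct = 0
    for _ in range(runs):
        n = N0
        for _ in range(horizon):
            if n == 0:
                extinct += 1
                break
            q = p * (1 - 1 / n)
            births = sum(rng.random() < p for _ in range(n))
            deaths = sum(rng.random() < q for _ in range(n))
            n += births - deaths
    return extinct / runs

p = 0.5
estimates = [extinction_prob(N0, p) for N0 in (1, 2, 5)]
assert all(0.0 <= e <= 1.0 for e in estimates)
print(estimates)   # per the conjecture P_N ~ alpha^N, these should decay in N
```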
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3410112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 2
} |
How to check if the number after decimal point of the result of a division goes to infinity? Particularly, is this number any special? I divide a number by a prime number and I observed the division in the below picture. I suspect that the result, quotient, goes to infinity. But how can I check if so? The big calculator showed the same repetitive pattern goes until 90000 digit only as it was apparently limited by the program.
How to check if the number after decimal point of the result of a division goes to infinity?
Also, as you see there is a repetitive pattern in the quotient, which is 473684210526315789. Is there anything mathematically special about this repetition?
Is there any other similar example division that gives a result going to infinity with a pattern like this?
Thank you.
| It's not "infinity": the quotient is just a rational number, and every rational number has a decimal expansion that either terminates or is eventually repeating.
Like $\frac{1}{3}=0.33333333333\cdots$, do you think that's infinity?
But if you mean the number of decimal digits, that is indeed infinite.
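The repeating pattern observed in the question is consistent with dividing a number congruent to $9$ modulo the prime $19$ (the repetend of $9/19$ is exactly $473684210526315789$); it can be computed by long division, tracking remainders until one repeats. A short sketch:

```python
def repetend(numerator, denominator):
    """Digits of the repeating part of numerator/denominator ("" if it terminates)."""
    seen = {}                        # remainder -> position in digit list
    digits = []
    r = numerator % denominator
    while r and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // denominator))
        r %= denominator
    return "" if r == 0 else "".join(digits[seen[r]:])

# The pattern from the question is the repetend of 9/19:
assert repetend(9, 19) == "473684210526315789"
# Its length 18 divides 19 - 1, as Fermat's little theorem predicts:
assert len(repetend(9, 19)) == 18
```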
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3410237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
The kernel of this evaluation map. Consider $$f: \mathbb{Z}[x] \longrightarrow \mathbb{Z}[2^{1/3}]$$ which is a evaluation map for $x = 2^{1/3}$.
Question: What is the kernel of $f$?
My work so far: I take the ideal $(x^3-2)$ as my answer, and I want to show that $\ker(f) = (x^3-2)$. One side is trivial, i.e., $(x^3-2) \subset \ker(f)$.
Any idea for the another inclusion?
Note that: $a_0 + 2a_3 + 4a_6 + 8a_9 + \dots = 0$, $a_1 + 2a_4 + 4a_7 + 8a_{10} + \dots = 0$ and $a_2 + 2a_5 + 4a_8 + 8a_{11} + \dots = 0$ for any polynomial $p(x) = a_0 +a_1x + a_2x^2 + \dots \in \ker(f)$. This can help but the proof is not very clean (or not a valid proof at all), i.e., I need to deal with different cases for $n=3k$, $n=3k-1$ or $n=3k-2$.
Well, thanks for the help!
| Hint:
The kernel has to be a non-maximal prime ideal in $\mathbf Z[x]$, and $x^3-2$ is irreducible in $\mathbf Z[x]$; hence, as $\mathbf Z[x]$ is a U.F.D., the ideal $(x^3-2)$ is prime, and not maximal. Furthermore, $\mathbf Z[x]$ has Krull dimension $2$, so non-zero, non-maximal prime ideals all have height $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3410355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
uncountable ring with countable quotient ring? Insipired by this post: Countable number of subgroups $\implies $ countable group
I'm wondering if it is possible to have an uncountable commutative ring with identity $R$, such that $R/I$ is countable for some two-sided proper ideal $I$?
It seems to be a really basic problem, but I couldn't give a proof or a counterexample. Any help is really appreciated, thanks!
| Yes. For instance, for any rings $A$ and $B$, the projection $A\times B\to B$ gives an isomorphism between $B$ and a quotient ring of $A\times B$. In particular, if $A$ is uncountable then $A\times B$ will be uncountable, but $B$ could be any ring at all. Or, for any ring $B$, you can form a polynomial ring $R$ over $B$ in uncountably many variables, which will be uncountable (assuming $B$ is nonzero) but has a quotient isomorphic to $B$ (just mod out the ideal generated by all the variables).
A more naturally occurring example is the ring $\mathbb{Z}_p$ of $p$-adic integers, which is uncountable, but the quotient $\mathbb{Z}_p/(p)\cong \mathbb{F}_p$ is finite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3410525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Proving $A \subset B \implies A \cup B = B$ I have to prove this:
$A \subset B \implies A \cup B = B$
One direction:
$x\in A \subset B$
$\implies x \in A \land x \in B$
$\implies x \in A \cup B$
$\implies x \in (A \cup B = B)$
Other direction:
$x \in (A \cup B = B)$
$\implies x \in A \land x \in B$ (I used $\land$ here instead of $\lor$ because it equals B, so it has to be in both sets.)
$\implies x \in A \subset B$
Is my proof correct, or are there some steps missing?
If not I hope you could show me the right direction.
| Below is a proof using the natural deduction method.
The goal is read as a simple implication: if $A$ is included in $B$, then ... (not as a double implication).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3410644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
} |
If $k$ is the smallest integer such that $[a^k]>[a]^k$, which of the following is true? This is a question from the KVPY(SX)-2014 (an examination to get into various research institutes in India) paper.
Q. For a real number $r$ let $[r]$ denote the largest integer less than or equal to $r$. Let $a > 1$ be a real number which is not an integer and let $k$ be the smallest positive integer such that $[a^k] > [a]^k$. Then which of the following statements is always true?
$(A)\,\, k\le 2([a]+1)^2\quad (B) \,\, k\le ([a]+1)^4\quad (C)\,\, k\le 2^{[a]+1}\quad (D)\,\, k\le\frac{1}{a-[a]}+1$
The solutions which I've looked through on the Internet and in the booklet I've been given mostly state that by taking different values of $a$ and $k$, option B is possible (e.g. here), but they have not provided a proof of why, or have put forward a proof that doesn't click in my head (e.g. this).
I'm looking for a proof that suits the needs of a 12th grader in India.
| You can argue for (D) by ruling out (A), (B), and (C), on the basis that they depend only on $[a]$, and not on $a-[a]$. Any bound on $k$ depending only on $[a]$ can be defeated by choosing $a-[a]$ small enough. This kind of reasoning is quick and may be valuable on a timed test.
To see why (D) works (this part is unnecessary to answer the multiple choice question, but is a good sanity check if there is time), write $a = n+r$, where $n\in \mathbb{Z}$ and $0 < r < 1$. Put $k = \lceil 1/r \rceil$, so that $kr \ge 1$. We must show $(n+r)^k - n^k \ge 1$. By the binomial theorem,
$$
(n+r)^k - n^k = \sum_{i=1}^k \binom{k}{i} r^i n^{k-i} \ge k r n^{k-1} \ge n^{k-1} \ge 1
$$
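Option (D) can also be checked with exact rational arithmetic; the sample values of $a$ below are my own illustrations, not from the problem:

```python
from fractions import Fraction
from math import floor

def smallest_k(a, kmax=10_000):
    # smallest positive integer k with floor(a**k) > floor(a)**k,
    # computed exactly since a is a Fraction
    n = floor(a)
    for k in range(1, kmax):
        if floor(a**k) > n**k:
            return k
    raise ValueError("no such k found below kmax")

for a in (Fraction(3, 2), Fraction(7, 3), Fraction(101, 100)):
    k = smallest_k(a)
    assert k <= 1 / (a - floor(a)) + 1   # the bound claimed in option (D)
```

For $a=101/100$ the bound $1/(a-[a])+1=101$ is comfortably above the actual $k$, while any bound depending only on $[a]=1$ would eventually fail as $a\to1^+$.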
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3410890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Calculate $\lim_{x\to\infty}\frac{x^2}{x+1}-\sqrt{x^2+1}$ I am stuck on a limit of the indeterminate form $\infty-\infty$. I have tried many approaches, such as multiplying with conjugates etc. and I am unable to find a solution. I suspect that there is an elementary trick that I am plainly missing right here. Can anybody give me a hint or solution as to solve
$$\lim_{x\to\infty}\frac{x^2}{x+1}-\sqrt{x^2+1}$$
| 1) $x^2=((x+1)-1)^2=$
$(x+1)^2-2(x+1)+1$;
$\dfrac{x^2}{x+1}=(x+1)-2 +\dfrac{1}{x+1};$
2) $(x^2+1)^{1/2}=x(1+1/(x^2))^{1/2}=$
$ x +O(1/x)$;
3) $\dfrac{x^2}{x+1} -(x^2+1)^{1/2}= $
$ -1+O(1/x)$;
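A quick numerical check (my own addition, not part of the answer) agrees with the value $-1$:

```python
import math

def f(x):
    # the expression whose limit as x -> infinity we want
    return x**2 / (x + 1) - math.sqrt(x**2 + 1)

# evaluate along x = 10, 100, ..., 10^6; values approach -1 from above
samples = [f(10.0**k) for k in range(1, 7)]
```

The expansion above predicts $f(x) = -1 + \frac{x-1}{2x(x+1)} + O(1/x^3)$, so the samples stay slightly above $-1$ and close in on it.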
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3411076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 4
} |
The principle of inclusion and exclusion: How many permutations of the set $\{1, 2,. . . , 8\}$ do not leave any even number in its place? I am supposed to solve the following problem:
How many permutations of the set $\{1, 2,. . . , 8\}$ do not leave any even number in its place?
What I tried:
$$8!-\left ( \binom{8}{1}7!-\binom{8}{ 2}6!+\binom{8}{3}5!-\binom{8}{4}4! \right )$$
But I know that this is incorrect.
Can anyone tell me why?
| Let $A_k$ be the number of permutations that fix the $2k$-th element. By Inclusion-Exclusion, $$|A_1\cup A_2\cup A_3\cup A_4|=\sum_{1\leq w\leq4}|A_w|- \sum_{1\leq w<x\leq4}|A_w\cap A_x| +\sum_{1\leq w<x<y\leq4}|A_w\cap A_x\cap A_y|- \sum_{1\leq w<x<y<z\leq4}|A_w\cap A_x\cap A_y\cap A_z|$$ $$=\binom{4}{1}\cdot 7!-\binom{4}{2}\cdot 6!+\binom{4}{3}\cdot 5!-\binom{4}{4}\cdot4!=16296.$$This counts the number of permutations that fix at least one of the even elements, so our final answer is $8!-16296=24024$.
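The key point is that the binomials run over the $4$ even elements, $\binom{4}{j}$, not $\binom{8}{j}$ as in your attempt. A brute-force check (my addition) confirms both the count and the inclusion-exclusion formula:

```python
from itertools import permutations
from math import comb, factorial

# brute force: permutations p of {1,...,8} with p(i) != i for every even i
count = sum(
    1 for p in permutations(range(1, 9))
    if all(p[i - 1] != i for i in (2, 4, 6, 8))
)

# inclusion-exclusion over the 4 even positions only
ie = sum((-1) ** j * comb(4, j) * factorial(8 - j) for j in range(5))
```

Both computations give the same number of valid permutations.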
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3411188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Why is $x^2$ used here? Why not $x$?
Does there exist a function $f : \mathbb{R}\rightarrow \mathbb{R}$ which is differentiable only at the point $0$?
My attempt : I found the answer here Is there a function $f: \mathbb R \to \mathbb R$ that has only one point differentiable?
But I didn't understand the answer; my doubts are given below.
| Because while $x p(x)$ is continuous at $0$, it is not differentiable there (here $p$ denotes the indicator function of the rationals, as in the linked answer).
In particular, the fraction
$$
\frac{(0+h)p(0+h)-0p(0)}{h}
$$
has value $0$ or $1$ depending on whether $h$ is rational or not. So it has no limit as $h\to 0$, which by definition of derivative means that $xp(x)$ has no derivative at $0$.
On the other hand, the fraction
$$
\frac{(0+h)^2p(0+h)-0^2p(0)}{h}
$$
has value $h$ or $0$ depending on whether $h$ is rational or irrational. Thus it does have a limit as $h\to0$, which is to say that $x^2p(x)$ has derivative $0$ at $x=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3411296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Reduction of a Power Series Solution Separated by an Integer? I am given the following equation: $$x(x-1)\ddot y + 6x^2\dot y + 3y = 0 $$
and have solved/found the regular singular points, the exponents at singularity, the first series solution and the form of the second series solution, which are all as follows:
$x = 0, \space 1$ are regular singular points
For $x = 0$, $r_1 = 1, \space r_2 = 0$
Recurrence relationship: $$a_n = \frac {n^2 - n + 3}{n(n+1)}a_{n-1} + \frac {6(n-1)}{n(n+1)}a_{n-2}$$
$a_1 = \frac{3}{2}a_0$
The first solution is:
$$y_1(x) = x + \frac{3}{2}x^2 + \frac{9}{4}x^3 + \frac{51}{16}x^4 + ...$$
Since the exponents differ by an integer, the second solution is of the form:
$$y_2(x) = ay_1(x)ln(x) + 1 + \sum_{n=1}^{\infty} c_n(r_2)x^n$$
Substituting this equation into the original governing equation:
$$x(x-1)\bigg\{ay_1(x)ln(x) + 1 + \sum_{n=1}^{\infty} c_n(r_2)x^n\bigg\}'' + 6x^2 \bigg\{ ay_1(x)ln(x) + 1 + \sum_{n=1}^{\infty} c_n(r_2)x^n \bigg\}' + 3\bigg\{ ay_1(x)ln(x) + 1 + \sum_{n=1}^{\infty} c_n(r_2)x^n\bigg\} = 0 $$
Now my question is how does the above equation reduce to the following?:
$$a\,ln(x)L[y_1] + 2a(x-1)y_1' - \frac{(x-1)}{x}ay_1 + 6axy_1 + L\bigg[ 1 + \sum_{n=1}^{\infty} c_n(r_2)x^n \bigg] = 0$$
The above form is advantageous since $L[y_1] = 0$, but I do not fully understand this reduction step.
Thanks!
| Just differentiate and group the terms:
The terms in which the $ln$ factor is left undifferentiated reproduce $a\,ln(x)L[y_1]$, the first term (which vanishes, since $L[y_1]=0$).
The terms in which the $ln$ gets differentiated produce the middle terms.
The terms coming from $1 + \sum_{n=1}^{\infty} c_n(r_2)x^n$ make up the last term.
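The recurrence and the displayed terms of $y_1$ can be double-checked symbolically; this sketch (my addition) uses the normalisation $a_0=1$, which matches the leading term $x$ of $y_1$:

```python
import sympy as sp

x = sp.symbols('x')
# build a_0..a_5 from the recurrence, with a_0 = 1, a_1 = 3/2
a = [sp.Integer(1), sp.Rational(3, 2)]
for n in range(2, 6):
    a.append((n**2 - n + 3) * a[n - 1] / (n * (n + 1))
             + 6 * (n - 1) * a[n - 2] / (n * (n + 1)))

y1 = sum(c * x**(k + 1) for k, c in enumerate(a))   # y1 = x * sum a_n x^n
residual = sp.expand(
    x * (x - 1) * sp.diff(y1, x, 2) + 6 * x**2 * sp.diff(y1, x) + 3 * y1
)
# with a_0..a_5 satisfying the recurrence, the coefficients of
# x^1 .. x^5 in the residual all vanish
```

This reproduces $a_2=\frac94$ and $a_3=\frac{51}{16}$ and confirms the truncated series solves the equation to the expected order.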
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3411435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
prove this series is convergent prove that the series $(a_1-a_2)+(a_2-a_3)+(a_3-a_4)+...$ converges if and only if the sequence $\{a_n\}$ converges.
I know a series is convergent if the sequence of its partial sums is bounded. I do not know how to apply that to this question.
| Hints:
Note that the series you have is
$$\sum_{k=1}^\infty (a_k - a_{k+1})$$
Its partial sum, in turn, is
$$\sum_{k=1}^n (a_k - a_{k+1})$$
Consider trying a few small values for $n$, and consider the pattern that emerges. Use this to make a formula that is equivalent to the partial sum. In the context of that formula, what can you say happens to it as $n \to \infty$, both when $(a_n)$ is a divergent sequence, and when it is a convergent sequence?
(We approach this issue this way because we say $\sum_{k=1}^\infty b_k$ converges only if the sequence of partial sums - $\sum_{k=1}^n b_k$, for $n=1,2,3,\cdots$ - converges. Your definition of convergence for infinite sums isn't quite correct.)
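To see the telescoping pattern concretely, here is a small exact-arithmetic check with $a_n = 1/n$ (my own example, not part of the hint):

```python
from fractions import Fraction

# a convergent sequence a_1, ..., a_101 with a_n = 1/n
a = [Fraction(1, n) for n in range(1, 102)]

# n-th partial sum of (a_1 - a_2) + (a_2 - a_3) + ... for n = 1..100
partials = [sum(a[k] - a[k + 1] for k in range(n)) for n in range(1, 101)]
# every partial sum collapses to a_1 - a_{n+1}
```

Since the partial sums equal $a_1 - a_{n+1}$, they converge exactly when $(a_n)$ does, which is the statement to prove.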
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3411713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Has this basic number theory conjecture been discovered or proved before? So I was sitting in a car bored for two hours playing around with the calculator on my phone, and I discovered that by picking two numbers (at least one being odd and both being not divisible by each other) I could always add them together to make many different numbers. Furthermore, I could always add and subtract them together make 1; this is significant because I could then negate and indefinitely duplicate the series to create any positive and negative integers. Here are some examples:
8-7=1
16+16+16+16-9-9-9-9-9-9-9=1
17+17+17-5-5-5-5-5-5-5-5-5-5=1
and so on.
For convenience, I preferred to write these equations in the form a*x-b*z=1
8*1-7*1 = 1
16*4-9*7 = 1
17*3-5*10 = 1
Here is my Conjecture:
Any two non-zero integers indivisible of each other, if at least one
of them is odd, can be added or subtracted together indefinitely to
make any integer.
Another way of writing it (though it is more limiting) is this:
Given that a and b are two non-zero integers indivisible of each other, if at least one of them is odd, there always exists some value of x and z for any given integer n that makes the equation ax-bz=n true.
My question is, has anyone ever discovered or proved this conjecture before, and how can it be proven? It almost feels like the solution should be as obvious as the equation a*x-b*z=0 (which just requires finding the lcm of the two numbers) and that it's just a simple quirk in numbers based upon the already laid down basic laws of mathematics.
| This is false: Try $15$ and $21$. Note that $3\mid15x-21y$ for all $x,y$.
However, the rule you are thinking of is basically Bezout's Identity: integers $x, z$ with $ax - bz = 1$ exist if and only if $\gcd(a,b)=1$.
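Bezout's identity is effective: the extended Euclidean algorithm produces the coefficients whenever the gcd is $1$, and computing the gcd shows why $15$ and $21$ fail. A sketch (my addition):

```python
def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = ext_gcd(16, 9)      # gcd 1: Bezout coefficients exist
g2, _, _ = ext_gcd(15, 21)    # gcd 3: every 15x + 21y is a multiple of 3
```

For $16$ and $9$ this recovers a relation like $16\cdot4 - 9\cdot7 = 1$, matching the examples in the question.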
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3411896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Proving $\sum_{k = 1}^{\infty} k^{-2} = \pi^2 / 6$ using Fourier analysis I'm doing a homework for my analysis class, and a problem says the following:
Let $f : \mathbb{T}^1 \to \mathbb{C}$ be the function
$$f(\theta) = \begin{cases} 1 & 0 < \theta < \pi \\ 0 & - \pi < \theta < 0 \end{cases}$$
Compute the Fourier transform of $f$. Finally, it says to use the fact that the Fourier transform is an isometry from $L^2 \left( \mathbb{T}^1 \right) \to \ell^2$ to show that $\sum_{k = 1}^{\infty} k^{-2} = \pi^2 / 6$.
I've computed the Fourier transforms of $f$ as
$$\hat{f}(k) = \begin{cases}
\frac{1}{2} & k = 0 \\
0 & \textrm{$k \neq 0$ even} \\
- \frac{i}{\pi k} & \textrm{$k$ odd}
\end{cases}$$
If I understand correctly, what it wants me to do next is observe that since $f \mapsto \hat{f}$ is a unitary operator from $L^2 \left( \mathbb{T}^1 \right)$ to $\ell^2$, we have
$$\pi = \| f \|_{L^2}^2 = \sum_{k \in \mathbb{Z}} \left| \hat{f}(k) \right|^2 = \frac{1}{4} + \frac{1}{\pi^2} \sum_{\textrm{$k$ odd}} \frac{1}{k^2}$$
Now there's an example in the text that does something with a different function to get that $\sum_{\textrm{$k \geq 1$ odd}} k^{-2} = \frac{\pi^2}{8}$, then does some arithmetic trickery to get that $\sum_{k = 1}^{\infty} k^{-2} = \frac{\pi^2}{6}$. But whenever I do this calculation on the $f$ shown above, I get different numbers. I've already sunk a lot of time into this. Do I have an arithmetic error somewhere that I need to iron out, or am I on the wrong track?
Thanks.
EDIT: In this text, the Fourier transform is defined as $$\hat{f}(k) = (2 \pi)^{-1} \int_{- \pi}^{\pi} f(\theta) e^{-k i \theta} \mathrm{d} \theta$$
| Thank you to @GReyes for his help. He pointed out that the correct formula for $\| \cdot \|_{L^2}$ should've been
$$\| f \|_{L^2}^2 = (2 \pi)^{-1} \int_{- \pi}^{\pi} |f(\theta)|^2 \mathrm{d} \theta .$$
With this fix, the solution to the problem is as follows:
\begin{align*}
\hat{f}(k) & = (2 \pi)^{-1} \int_{- \pi}^{ \pi} f(\theta) e^{- i k \theta} \mathrm{d} \theta \\
& = (2 \pi)^{-1} \int_{0}^{\pi} e^{- i k \theta} \mathrm{d} \theta .
\end{align*}
If $k = 0$, then $e^{- i k \theta} = 1$ for all $\theta$, so we know that $\hat{f}(0) = \frac{1}{2}$. But if $k \neq 0$, then
\begin{align*}
\hat{f}(k) & = (2 \pi)^{-1} \int_{0}^{\pi} e^{- i k \theta} \mathrm{d} \theta \\
& = (2 \pi)^{-1} \left[ \frac{e^{- i k \theta}}{- i k} \right]_{\theta = 0}^{\pi} \\
& = (2 \pi)^{-1} \frac{1}{- i k} \left( e^{- i k \pi} - 1 \right) \\
& = \frac{i}{2 \pi k} \left( (-1)^{k} - 1 \right) \\
& = \begin{cases}
0 & \textrm{$k \neq 0$ even} \\
- \frac{i}{\pi k} & \textrm{$k$ odd}
\end{cases}
\end{align*}
In summary,
$$\hat{f}(k) = \begin{cases}
\frac{1}{2} & k = 0 \\
0 & \textrm{$k \neq 0$ even} \\
- \frac{i}{\pi k} & \textrm{$k$ odd}
\end{cases}$$
Now we use Parseval's theorem. We can see that $\| f \|_{L^2}^2 = (2 \pi)^{-1} \int_{- \pi}^{\pi} |f(\theta)|^2 \mathrm{d} \theta = \frac{1}{2}$. Now we're going to use the fact that $\| f \|_{L^2}^2 = \sum_{k \in \mathbb{Z}} \left| \hat{f}(k) \right|^2$ to find $\sum_{k = 1}^{\infty} k^{-2}$. Note that if we can show that $\sum_{\textrm{$k \geq 1$ odd}} k^{-2} = \pi^2 / 8$, the remainder of the proof will follow from the text's argument (see (5.4.21), page 194).
By Parseval's theorem, we have that
\begin{align*}
\| f \|_{L^2}^2 & = \sum_{k \in \mathbb{Z}} \left| \hat{f}(k) \right|^2 \\
= \frac{1}{2} & = \frac{1}{4} + \sum_{\textrm{$k \in \mathbb{Z}$ odd}} \frac{1}{\pi^2 k^2} \\
& = \frac{1}{4} + \frac{2}{\pi^2} \sum_{\textrm{$k \geq 1$ odd}} k^{-2} \\
\Rightarrow \frac{1}{4} & = \frac{2}{\pi^2} \sum_{\textrm{$k \geq 1$ odd}} k^{-2} \\
\Rightarrow \frac{\pi^2}{8} & = \sum_{\textrm{$k \geq 1$ odd}} k^{-2} .
\end{align*}
Finally, following the text's lead, we observe that
\begin{align*}
\sum_{k = 1}^{\infty} k^{-2} & = \sum_{\textrm{$k \geq 1$ odd}} k^{-2} + \sum_{\textrm{$k \geq 1$ even}} k^{-2} \\
& = \sum_{\textrm{$k \geq 1$ odd}} k^{-2} + \sum_{\ell = 1}^{\infty} (2 \ell)^{-2} \\
& = \sum_{\textrm{$k \geq 1$ odd}} k^{-2} + \frac{1}{4} \sum_{\ell = 1}^{\infty} \ell^{-2} \\
\Rightarrow \frac{3}{4} \sum_{k = 1}^{\infty} k^{-2} & = \sum_{\textrm{$k \geq 1$ odd}} k^{-2} \\
& = \frac{\pi^2}{8} \\
\Rightarrow \sum_{k = 1}^{\infty} k^{-2} & = \frac{4}{3} \frac{\pi^2}{8} \\
& = \frac{\pi^2}{6} .
\end{align*}
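A quick numerical sanity check of the two sums (my own addition):

```python
import math

N = 100_000
# partial sums over odd k and over all k up to N
odd_sum = sum(1 / k**2 for k in range(1, N + 1, 2))
full_sum = sum(1 / k**2 for k in range(1, N + 1))
# odd_sum approaches pi^2 / 8 and full_sum approaches pi^2 / 6
```

The tails beyond $N$ are bounded by $1/N = 10^{-5}$, so both partial sums already match the closed forms to four decimal places.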
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3412036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving implicit function theorem with the fixed point theorem I am trying to solve an exercise in Kolmogorov's analysis text asking for a proof of the implicit theorem using the fixed point theorem. I am struggling with how to get started on this, though. It's clear that I need to define a mapping, $T$, demonstrate that $\rho(Tx, Ty) \le \alpha \rho(x,y)$ for some $\alpha < 1$, and then conclude that because $T$ is a contraction mapping, it has a fixed point, $x$, such that $Tx = x$. I know of only one proof of the implicit function theorem, though, and it in no way uses any of these facts. (I proved it at a time when I didn't know what a contraction mapping was.)
Any help on this would be greatly appreciated.
| Consider a continuously differentiable map
$$\begin{array}{l|rcl}
f : & \mathbb R^n \times \mathbb R^m & \longrightarrow & \mathbb R^m \\
& (x,y) & \longmapsto & f(x,y) \end{array}$$
a point $(a,b) \in \mathbb R^n \times \mathbb R^m$ such that $f(a,b) = 0$ and suppose that the partial Jacobian $J_{f,y}(a,b)$ is invertible. Those are the hypotheses of the implicit function theorem. With the given hypotheses, one can find a neighborhood $U_a$ of $a$ and $U_b$ of $b$ such that $J_{f,y}(x,y)$ is invertible for $(x,y) \in U_a \times U_b$.
For $x \in U_a$, the map $\varphi_x : y \mapsto y-J_{f,y}^{-1}(a,b) \circ f(x,y)$ is continuously differentiable in $U_b$, with Jacobian $I - J_{f,y}^{-1}(a,b)\,J_{f,y}(x,y)$, which vanishes at $(a,b)$. Hence $\Vert J_{\varphi_x}(y) \Vert <1$ for $(x,y)$ close enough to $(a,b)$. With that, you can apply the fixed point theorem to find $g(x)$ with $\varphi_x(g(x))= g(x)$, which means $f(x,g(x)) = 0$ as desired.
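Here is the iteration in action on a toy example, $f(x,y)=x^2+y^2-1$ near $(a,b)=(0,1)$ (my choice of $f$, not from the answer):

```python
def f(x, y):
    # toy example: solve f(x, y) = 0 for y near (a, b) = (0, 1)
    return x**2 + y**2 - 1

inv_jac = 0.5                    # inverse of df/dy = 2y evaluated at (a, b)
x0, y = 0.3, 1.0                 # fix x near a, start the iteration at y = b
for _ in range(50):
    y = y - inv_jac * f(x0, y)   # y -> phi_x(y), a contraction near b
# y converges to g(x0) = sqrt(1 - x0**2), the implicit function
```

The contraction factor here is $|1-y|\approx0.05$ near the root, so the iterates converge to $g(0.3)=\sqrt{0.91}$ within a handful of steps.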
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3412183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Betting game: $3$ $20$-sided dice versus $2$ $30$-sided dice
Alice and Bob play a dice game. Alice rolls three $20$-sided dice and
Bob rolls two $30$-sided dice. Whoever has the highest sum wins.
Who has a better chance of winning?
This was a question asked in an interview. So I don't have much time to answer and I want to find the answer quickly. I did it the long way (enumerate all possibilities), and I figured out that Alice has a better chance of winning. But is there any way I can solve this problem very quickly?
I tried to make a "flipping over the dice" argument but got nowhere.
Thanks
| The flipping argument can be made to work. Flipping a die sends a face $n$ to $21-n$ (for a $20$-sided die) or to $31-n$ (for a $30$-sided die), so if Alice's total is $a$ and Bob's is $b$, the flipped outcome has totals $63-a$ for Alice and $62-b$ for Bob, and flipping every die is a bijection between equally probable(*) outcomes.
Now, Alice wins exactly when $a \ge b+1$, which is equivalent to $63-a \le 62-b$, i.e. to Bob winning or tying in the flipped outcome. Since flipping is a probability-preserving bijection, $P(\text{Alice wins}) = P(\text{Bob wins}) + P(\text{tie})$.
Combined with $P(\text{Alice wins}) + P(\text{Bob wins}) + P(\text{tie}) = 1$, this gives $P(\text{Alice wins}) = \tfrac12$ exactly, while $P(\text{Bob wins}) = \tfrac12 - P(\text{tie})$. Ties are possible (e.g. both totals equal to $30$), so Alice has the better chance of winning, confirming your enumeration.
(*) All outcomes will be equally probable if we consider each die uniquely identified (i.e. distinguishing Alice rolling $10+11+12$ from Alice rolling $11+10+12$).
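An exact count by convolution settles the question quickly (a small Python sketch, my addition):

```python
from collections import Counter

def sum_dist(sides, n):
    # number of ways to reach each total with n fair dice of `sides` faces
    dist = Counter({0: 1})
    for _ in range(n):
        new = Counter()
        for total, ways in dist.items():
            for face in range(1, sides + 1):
                new[total + face] += ways
        dist = new
    return dist

alice = sum_dist(20, 3)   # 3d20
bob = sum_dist(30, 2)     # 2d30

alice_wins = sum(ca * cb for a, ca in alice.items() for b, cb in bob.items() if a > b)
bob_wins = sum(ca * cb for a, ca in alice.items() for b, cb in bob.items() if b > a)
ties = sum(ca * cb for a, ca in alice.items() for b, cb in bob.items() if a == b)
```

Out of the $20^3\cdot30^2 = 7{,}200{,}000$ equally likely outcomes, Alice's wins account for exactly half, and Bob's wins fall short of half by the number of ties.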
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3412288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Dot product cancellation Given vectors $a,b,c$, I know that
$$
c\cdot a=c\cdot b
$$
does not imply $a=b$ (take three orthogonal vectors, for example).
However, if I say that $c\cdot a=c\cdot b$ holds for any vector $c$, is it then true that $a=b$? How should I argue?
| We have that
$$c\cdot a=c\cdot b \iff c\cdot (a-b)=0,$$
and since this must hold for every $c$, we may in particular take $c=a-b$, giving $(a-b)\cdot(a-b)=\|a-b\|^2=0$ and hence $a-b=0$. Therefore
$$\big(\forall c,\ \ c\cdot a=c\cdot b\big) \iff a=b. $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3412489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Can the sum of two irrational roots of two coprime natural numbers greater than 1 be an irrational root of a rational number? After thinking more about my last question and reading the answers I reformulated it. I'm positive I wanted to ask the following:
Let $x,y\in \mathbb{N};\: x,y>1;\: gcd(x,y)=1; \: \sqrt{x},\sqrt{y}\notin\mathbb{N}$ and let $z\in\mathbb{Q}; \: \sqrt{z}\notin\mathbb{Q}$.
Is it possible to find $x, y, z$ obeying the rules above such that the following equality is true?
$\sqrt{x}+\sqrt{y}=\sqrt{z}$
If so, find a sufficient and necessary condition in the choosing of $x$ and $y$ for $z$ to exist.
Basically I want to know if we can find $r$ rational which makes the equality true in expressions like $\sqrt{2}+\sqrt{3}=\sqrt{r}$ and $\sqrt{5}+\sqrt{11}=\sqrt{r}$.
| In addition to the elementary solution mentioned in the comments (square both sides and find an irrational number on one side and a rational on the other), you might be interested in more of the theory behind this sort of question. The short version is that any polynomial equation $P(x)=0$ with integer coefficients satisfied by a term like $\sqrt{2}+\sqrt{3}$ must have a polynomial of degree at least four, whereas any number of the form $\sqrt{r}$ with $r$ rational satisfies an equation $P(x)=0$ with $P()$ of degree two. The degree of the smallest polynomial that an algebraic number is a root of is also known as the degree of the algebraic number, and searching on 'algebraic number degree' should find you more good information on this sort of thing.
Incidentally, on a somewhat-interesting related note, while it's possible to determine which side of $\sqrt{r}$ such an expression lies on — or more generally, whether $a\sqrt{x}+b\sqrt{y}+\ldots$ is greater than or less than zero for arbitrary rationals $a, b, \ldots$ and $x, y, \ldots$ — it's also a 'hard' problem in the sense that nobody knows an efficient algorithm to do it, or even how efficient such an algorithm can be! The general phrase for this question is the Sum of Square Roots problem; see e.g. http://cs.smith.edu/~jorourke/TOPP/P33.html or https://cstheory.stackexchange.com/questions/4053/sum-of-square-roots-hard-problems .
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3412622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Let $S=\{\frac{\sqrt{x}}{x+1}:x>0\}$. Show that $\inf{S}=0$. Let $S=\{\frac{\sqrt{x}}{x+1}:x>0\}$. Show that $\inf{S}=0$.
My solution so far:
Since $\frac{\sqrt{x}}{x+1}>0$ for all $x>0$, $0$ is a lower bound of $S$. Next I know I need to show that $0$ is the largest lower bound of the set but I don't know how.
| We have that $\forall x>0$
$$\frac{\sqrt{x}}{x+1}>0$$
and
$$\lim_{x\to \infty}\frac{\sqrt{x}}{x+1}=0$$
then $\inf{S}=0$: indeed, for any $\varepsilon>0$ the limit yields an $x>0$ with $0<\frac{\sqrt{x}}{x+1}<\varepsilon$, so no $\varepsilon>0$ is a lower bound of $S$, and $0$ is the greatest lower bound.
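Numerically (my addition), the values stay positive while dropping below any fixed $\varepsilon$:

```python
def f(x):
    # f(x) = sqrt(x) / (x + 1): positive for x > 0 but arbitrarily close to 0
    return x**0.5 / (x + 1)

# sample along x = 1, 10, 100, ..., 10^12
vals = [f(10.0**k) for k in range(13)]
```

For large $x$, $f(x)\approx x^{-1/2}$, which makes both claims of the proof visible.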
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3412761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Show that $N$ is independent of $\{N_1 < N_2\}$ A random experiment has exactly three possible
outcomes, referred to as outcomes $1, 2,$ and $3,$ with probabilities $p_1 > 0, p_2 > 0,$ and $p_3 > 0,$ where $p_1 +p_2 +p_3 = 1.$ We consider a sequence of independent
trials, at each of which the specified random experiment is performed. For
$i = 1, 2,$ let $N_i$ be the number of trials needed for outcome $i$ to occur, and
put $N := N_1 \wedge N_2.$
(a) Show that $N$ is independent of $\{N_1 < N_2\}.$
(b) Evaluate $E[N_1 \mid N_1 < N_2].$
(c) Roll a pair of dice until a total of $6$ or $7$ appears. Given that $6$ appears before $7,$ what is the (conditional) expected number of rolls?
The answer to $b)$ is $1/(p_1+p_2)$ and $c)$ is $3.272727$ but I'm unsure of even where to start for $a)$ or the steps involved in arriving at the answers for $b)$ and $c)$
| HINT
$N_i$ is the trial at which $i$ first appears. So $N = \min(N_1, N_2)$ is the trial at which $1$ or $2$ first appears. In other words, before the $N$th trial, only $3$ has been appearing. In other words, $N>n$ is the event that the first $n$ trials are all $3$s.
(a) I would directly calculate $P(N>n \mid N_1 < N_2)$ and $P(N > n \mid N_2 < N_1)$ and show that they are equal. Do you see why this implies the required independence?
(b) Consider $P(N_1 > n \mid N_1 < N_2)$. Can you see why this is geometric?
(c) is just an application of (b)
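A quick simulation of (b), with hypothetical probabilities $p_1=0.4$, $p_2=0.35$, $p_3=0.25$ (my own values, chosen only for illustration):

```python
import random

random.seed(1)
p1, p2 = 0.4, 0.35                 # hypothetical p1, p2 (so p3 = 0.25)
samples = []
for _ in range(200_000):
    n = 0
    while True:
        n += 1
        u = random.random()
        if u < p1:                 # outcome 1 occurs first: N1 = n and N1 < N2
            samples.append(n)
            break
        if u < p1 + p2:            # outcome 2 occurs first: condition fails
            break

est = sum(samples) / len(samples)  # estimate of E[N1 | N1 < N2]
```

The estimate lands near $1/(p_1+p_2)=4/3$; the same formula with $p_1=5/36$, $p_2=6/36$ gives $36/11\approx3.27$ for part (c), matching the stated answer.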
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3412922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
If A commutes with both of these matrices, then A must be a scalar multiple of the identity matrix I am working on the following problem:
Let $A$ be a $4 \times 4$ matrix with entries in a field of characteristic zero. Suppose that $A$ commutes with both $\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 3 & 0\\ 0 & 0 & 0 & 4 \end{pmatrix}$ and $\begin{pmatrix} 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end{pmatrix}$. Prove that $A$ is a scalar multiple of the identity matrix.
I know that $A$ is a scalar multiple of the identity matrix if and only if $AB = BA$ for all other possible $4 \times 4$ matrices $B$ with entries in a field of characteristic $0$. However, I'm struggling with deducing here that $A$ commuting with these specific matrices forces $A$ to be a scalar multiple of the identity matrix. Does commuting with these specific matrices force $A$ to commute with all $4 \times 4$ matrices with entries in a field of characteristic $0$? If so, how can I deduce this?
Thanks!
| Hint:
Left multiplication of a square matrix by $D=$ first (diagonal) matrix amounts to multiply its rows by the diagonal elements (for this matrix, the first row is multiplied by $1$, the second row by $2$, &c.). Right multiplication amounts to multiplying its columns by the diagonal elements. If both results are equal, by identification, you can deduce that $A$ is a diagonal matrix.
Commutativity of multiplication by the second matrix will then let you show, by identification, that all elements on the diagonal are equal.
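If you want to check the conclusion by machine, the two commutation conditions give $32$ linear equations in the $16$ unknown entries of $A$, and the solution space can be computed directly (a sympy sketch I added):

```python
import sympy as sp

syms = sp.symbols('a0:16')
A = sp.Matrix(4, 4, list(syms))
D = sp.diag(1, 2, 3, 4)
P = sp.Matrix([[0, 0, 0, 1],
               [1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 1, 0]])

# entries of AD - DA and AP - PA: 32 linear equations in the 16 unknowns
eqs = list(A * D - D * A) + list(A * P - P * A)
M = sp.Matrix([[sp.diff(e, s) for s in syms] for e in eqs])

basis = M.nullspace()                     # solution space of the linear system
B = sp.Matrix(4, 4, list(basis[0]))       # a basis solution, reshaped 4x4
# the solution space is 1-dimensional: exactly the scalar matrices
```

The nullspace has dimension $1$ and its basis matrix is a scalar multiple of the identity, confirming the claim.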
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3413082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
What are some interesting topological properties about the set of all convergent real sequences? Let C be the set of all convergent sequences in $\mathbb{R}$ under the product topology or uniform topology. So $C \subset \prod_{n=1}^\infty \mathbb{R}$. For one, Hilbert's Cube $\prod_{n=1}^\infty \left[0,\frac{1}{n}\right]$ is contained in $C$.
| Let $\ \lim:C\to\mathbb R\ $ be the limit function (functional). Then
$$ \lim(G)\ =\ \mathbb R $$
for every non-empty open subset of the product space. It follows that the limit function is not continuous under the product topology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3413175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Express the polynomial in the form p(x) = (x+1) Q(x) +R where (x+1) is the divisor, Q(x) is the quotient and R is the remainder Express the polynomial in the form p(x) = (x+1) Q(x) +R where (x+1) is the divisor, Q(x) is the quotient and R is the remainder,
Hey I would just like to know how to solve this as the question had me confused I don’t know if I am suppose to do long division on write it out like in the question, thanks
| Put $\,a = -1\,$ in $\ p(x)\, =\, (x-a)\,\dfrac{p(x)-p(a)}{\ x\,-\,a} + p(a),\,$ the Polynomial Remainder Theorem.
The quotient is exact by the Polynomial Factor Theorem, and is computable by the Polynomial Division Algorithm (which also computes the remainder).
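Synthetic division carries this out concretely for a linear divisor; the example polynomial $x^3-2x+5$ below is my own, since the question does not specify one:

```python
def divmod_by_linear(coeffs, a):
    # divide p(x) (coefficients from highest to lowest degree) by (x - a)
    # via synthetic division; returns (quotient coefficients, remainder),
    # and the remainder equals p(a) by the Remainder Theorem
    q = []
    r = 0
    for c in coeffs:
        r = r * a + c
        q.append(r)
    return q[:-1], q[-1]

# p(x) = x^3 - 2x + 5; the divisor (x + 1) means a = -1
quot, rem = divmod_by_linear([1, 0, -2, 5], -1)
```

Here the quotient is $x^2-x-1$ and the remainder is $p(-1)=6$, so $p(x)=(x+1)(x^2-x-1)+6$.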
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3413435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find a vector NOT perpendicular to a given set of vectors
Let us suppose to have a finite set of vectors $S=\{v_1,\ldots,v_m\}$ in $\mathbb{R}^n$ (with $m \gg n$ in general). I need to find a vector $x \in \mathbb{R}^n$ that is NOT perpendicular to any vector in $S$. The existence of such a vector $x$ is guaranteed, moreover almost every vector in $\mathbb{R}^n$ satisfies this property. But I need to find an algorithm to determine a vector with these properties, I cannot close my eyes and choose.
Any ideas?
EDIT1: in my case, the vectors in $S$ have some "symmetries" in the sense that they are generated by permutation and change of signs of a few vectors in $S$
EDIT2: $v_1+\ldots+v_m=0 \in \mathbb{R}^n$
EDIT3: I simplified the solution proposed by Tom Collinge. We define a sequence $\{x_i\}_{i=1}^m$ such that $x_m=x$ is what we are looking for. First define $x_1=v_1$, so $x_1 \cdot v_1 \ne 0$ and they are not perpendicular (obviously). Then, recursively, define for $k \in \{2,\ldots,m\}$
$$
x_k=\begin{cases}
x_{k-1}+2v_k & \text{if }x_{k-1}=-v_k\\
x_{k-1}+v_k & \text{otherwise}
\end{cases}.
$$
By construction we have that $x_k \cdot v_i \ne 0$ for $i \le k$, then $x_m \cdot v_i \ne 0$ for all $v_i \in S$, so $x=x_m$ is not perpendicular to every element in $S$. Can it work?
EDIT4: As pointed out, the algorithm proposed in EDIT3 does not work
| If you can implement on a computer, perhaps the following would work.
Create an $m \times n$ matrix $A$, the rows of which contain your vectors.
Now pick a random uniform vector $b > \vec{0} \in \mathbb{R}^m$, for example by generating the entries of $\vec{b}$ uniformly, and solve $A \vec{x} = \vec{b}$.
If any solution is found, you are done. If not, re-generate $\vec{b}$ and do it again.
If your assertion that practically every vector in $\mathbb{R}^n$ will do is correct, you will get a viable answer in a very small number of tries.
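A direct randomized variant is even simpler: sample $x$ and keep it as soon as no dot product vanishes. Since almost every $x$ works, it essentially always succeeds on the first try (the sample set $S$ below is my own illustration):

```python
import random

def non_perpendicular(S, seed=0, tol=1e-12):
    # draw random candidates until one has a non-zero dot product
    # with every vector v in S
    rng = random.Random(seed)
    n = len(S[0])
    while True:
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        if all(abs(sum(xi * vi for xi, vi in zip(x, v))) > tol for v in S):
            return x

# a sample set of non-zero vectors in R^3
S = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, -1, 0], [0, 1, -1], [-2, 0, 1]]
x = non_perpendicular(S)
```

Each bad event $x \perp v_i$ confines $x$ to a hyperplane of measure zero, which is why the rejection loop terminates almost immediately.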
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3413569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 2
} |
Proof clarification: For any r.v.s $X$ and $Y$, $-1 \le \text{Corr}(X, Y) \le 1$. My textbook, Introduction to Probability by Blitzstein and Hwang, gives the following theorem in a section on covariance and correlation:
Theorem 7.3.5 (Correlation bounds). For any r.v.s $X$ and $Y$,
$$
-1 \le\operatorname{Corr}(X, Y) \le 1.
$$
Proof. Without loss of generality we can assume $X$ and $Y$ have variance $1$, since scaling does not change the correlation. Let $\rho = \operatorname{Corr}(X, Y) = \operatorname{Cov}(X, Y)$. Using the fact that variance is nonnegative, along with property $7$ of covariance, we have
$$
\operatorname{Var}(X + Y)
= \operatorname{Var}(X) +\operatorname{Var}(Y) + 2\operatorname{Cov}(X, Y)
= 2 + 2\rho\ge 0,
$$
$$
\operatorname{Var}(X - Y)
= \operatorname{Var}(X) + \operatorname{Var}(Y) - 2 \operatorname{Cov}(X, Y)
= 2 - 2\rho\ge 0.
$$
Thus, $-1 \le \rho \le 1$.
Property 7 of covariance is given on a previous page as follows:
*$\operatorname{Var}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X, Y)$. For $n$ r.v.s $X_1,\dots, X_n$,
$$
\operatorname{Var}(X_1 +\dots + X_n)
= \operatorname{Var}(X_1) +\dots +\operatorname{Var}(X_n) + 2\sum_{i < j} \operatorname{Cov}(X_i, X_j).
$$
It's not clear to me why we conclude that $2 + 2\rho\ge 0$ and $2 - 2\rho\ge 0$. I'm thinking that my understanding is missing some knowledge about the connection between variance and the covariance $\rho$, because I can tell that $2 - 2\rho = 2(1 -\rho)\ge 0$ for $-1\le\rho \le 1$ (and analogously for $2 + 2\rho\ge 0$), but, assuming this correctly identifies where my misunderstanding is, it isn't clear to me why $-1\le\rho \le 1$.
I would greatly appreciate it if people could please take the time to clarify this.
| If you write
$$0 \leq \text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y) + 2 \text{Cov}(X, Y) = 2 + 2 \rho,$$
and
$$0 \leq \text{Var}(X - Y) = \text{Var}(X) + \text{Var}(Y) - 2 \text{Cov}(X, Y) = 2 - 2 \rho,$$
then it might be clearer.
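As a quick numerical sanity check (not part of the textbook proof), one can verify the identities $\operatorname{Var}(X\pm Y)=2\pm2\rho$ on simulated, standardized data; the sample construction and sizes below are arbitrary choices:

```python
import random
import statistics

random.seed(0)
# Simulate two correlated samples and standardize each to mean 0, variance 1.
x = [random.gauss(0, 1) for _ in range(10_000)]
y = [xi + random.gauss(0, 1) for xi in x]

def standardize(v):
    m, s = statistics.fmean(v), statistics.pstdev(v)
    return [(vi - m) / s for vi in v]

x, y = standardize(x), standardize(y)
rho = statistics.fmean([xi * yi for xi, yi in zip(x, y)])  # sample Corr(X, Y)

var_sum = statistics.pvariance([xi + yi for xi, yi in zip(x, y)])
var_diff = statistics.pvariance([xi - yi for xi, yi in zip(x, y)])

# Var(X+Y) = 2 + 2*rho >= 0 and Var(X-Y) = 2 - 2*rho >= 0 force -1 <= rho <= 1.
print(abs(var_sum - (2 + 2 * rho)) < 1e-9, abs(var_diff - (2 - 2 * rho)) < 1e-9)
```

Since both sample variances are nonnegative by construction, the two displayed identities pin $\rho$ between $-1$ and $1$, exactly as in the proof.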
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3413712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Explanation for why $\mathcal{L} (\theta(e^t))$ is holomorphic on $Re(s) > 1$ Let $\theta(x)=\sum_{p\leq x} \log p$; it is known that $\theta(x) \sim x$, and therefore $\theta(e^t) = O(e^t)$. Recall the definition of the Laplace transform. I found in a lecture note (click and go to Page 6) that the Laplace transform $\mathcal{L} (\theta(e^t))$ is holomorphic on $Re(s) > 1$. See the picture below.
Can someone please explain, in detail, how $\mathcal{L} (\theta(e^t))$ is automatically inferred to be holomorphic on $Re(s) > 1$?
Thanks!
| The Laplace transform is $$\mathcal L(\vartheta(e^t))=\int_0^\infty e^{-st}\mathcal O(e^t)\,dt=\int_0^\infty\mathcal O(e^{-t(s-1)})\,dt$$ which converges whenever $\Re(s)>1$.
The last equality is justified by defining $$g(t)=\mathcal O(e^t)\iff|g(t)|\le Me^t\quad\forall t\ge t_0$$ and $M,t_0\in\Bbb R$, so $$e^{-st}|g(t)|\le Me^te^{-st}\implies |e^{-st}g(t)|\le Me^{-t(s-1)}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3413847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Probability generating functions for a linear change of variable? I am struggling to understand why, in a textbook exercise on probability generating functions, it says that if $X$ is a random variable (rv) with probability generating function (pgf) $G_{X}(s)$, then for the rv $Z = X + k$, with $k$ a positive integer, the pgf is given by $$G_{Z}(s) = s^{k}G_{X}(s).$$
Surely if we assume that the rv $X$ takes integer values $0,1,2,\dots$ with probabilities $p_{0}, p_{1}, p_{2},\dots$, then the rv $Z$ takes integer values $k, k+1, k+2,\dots$ with the corresponding probabilities $p_{0}, p_{1}, p_{2},\dots$, and thus the corresponding generating functions should be identical? The only thing that changes is the integer starting position (it is shifted by $k$) for the distribution of the rv?
| Since $\mathbb P(X\geqslant0)=1$ we have
\begin{align}
G_Z(s) &= \sum_{n=0}^\infty \mathbb P(Z=n)s^n\\
&=\sum_{n=0}^\infty \mathbb P(X+k=n)s^n\\
&=\sum_{n=0}^\infty \mathbb P(X=n-k)s^n\\
\end{align}
and with the change of variables $m=n-k$, this is equal to
$$
\sum_{m=0}^\infty \mathbb P(X=m)s^{m+k} = s^k\sum_{m=0}^\infty \mathbb P(X=m)s^m = s^kG_X(s).
$$
Your intuition was correct. The factor of $s^k$ is precisely what accounts for the shift of support from $\{0,1,\ldots\}$ to $\{k,k+1,\ldots\}$. Note that for example, the coefficient of $s^k$ is $\mathbb P(X=0)$, when in the probability generating function for $X$ the coefficient of $s^k$ was $\mathbb P(X=k)$.
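A small numerical check of the shift identity (the toy distribution and the values of $k$ and $s$ below are my own choices, not from the question):

```python
# Verify G_Z(s) = s^k * G_X(s) for a small distribution on {0, 1, 2} and shift k = 3.
p = {0: 0.2, 1: 0.5, 2: 0.3}                # P(X = n)
k = 3
q = {n + k: prob for n, prob in p.items()}  # P(Z = n) for Z = X + k

def pgf(dist, s):
    return sum(prob * s ** n for n, prob in dist.items())

s = 0.7
lhs, rhs = pgf(q, s), s ** k * pgf(p, s)
print(abs(lhs - rhs) < 1e-12)
```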
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3413942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Which is bigger: a googolplex or $10^{100!}$ A googol is defined as $ 10^{100}$
Let $x = 10^{100}$
A googolplex is defined as $10^{x}$
Which is bigger: a googolplex or $10^{100!}$
I only know that:
$100! = 1×2×3×...×98×99×100$
$10^{100} = 10×10×10×...×10×10×10$
I think it's easier to approach if I only compare the exponents, because they both have the same base $10$ anyway, but I don't know how to show which of $100!$ and $10^{100}$ is bigger.
| The question boils down to whether $10^{100}$ is greater than $100!$ (same bases). But by Stirling's approximation,
$$100!\approx\sqrt{200\pi}(100/e)^{100}$$
and $100/e>33>10$ while $\sqrt{200\pi}>1$, so $100!>33^{100}>10^{100}$. Hence $10^{100!}$ is the bigger number.
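Since Python integers have arbitrary precision, the comparison of the two exponents can also be checked exactly (a sanity check, not a replacement for the Stirling argument):

```python
import math

# Compare the two exponents exactly: 100! versus a googol, 10^100.
f = math.factorial(100)
g = 10 ** 100

# 100! has 158 decimal digits while a googol has 101, so 10^(100!) >> googolplex.
print(f > g, len(str(f)), len(str(g)))
```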
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3414013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 1
} |
If $G$ is an infinite simple group then any proper subgroup of $G$ has infinite index.
If $G$ is an infinite simple group then any proper subgroup of $G$ has infinite index.
This question's hint is to use the $n!$-theorem, but I don't understand how to use it to get the answer.
$n!$-theorem: Let $G$ be a group and $H$ be a subgroup of $G$ of finite index, say $|G:H|=n$. Then there is a normal subgroup N of $G$ such that $N\subseteq H$ and $G/N$ is isomorphic to a subgroup of $S_n$ and so $|G/N|$ divides $n!$. Indeed, ${\rm core}_G(H)$ is such a normal subgroup of $G$.
| From a stylistic perspective, quoting a "theorem" here is unnecessarily complicated and makes things harder to understand. The proof of this statement relies on two basic facts, and these are facts that everyone who studies groups should know anyway: 1. There are only finitely many subgroups of a given finite index. 2. The intersection of two finite index subgroups has finite index. I will take these for granted; each is an easy exercise.
Now given a finite index subgroup $H$, it has only finitely many conjugates by 1. So intersect all of them. This subgroup has finite index by 2. It is clearly normal because conjugating the intersection amounts to intersecting the conjugates, but that just permutes the things we're intersecting and we end up intersecting the same finite set of subgroups.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3414099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Confusion about Phantom Maps A phantom cohomology operation (originally read phantom map) $f: X \rightarrow Y$ is a non-nullhomotopic map such that the induced cohomology operation on the cohomology theories for spaces is trivial. It is known that phantom maps exist.
Could someone tell me where my argument for the opposite fails?
Suppose a map $f:X \rightarrow Y$ is a phantom map and that $X,Y$ are represented by sequential CW $\Omega$-spectra and $f$ by a map of sequential spectra. Since $f$ is nontrivial, there must be some $f_n :X_n \rightarrow Y_n$ which is not nullhomotopic (if I had to guess, I'd presume this is where the error lies). Then this implies that $Y^n (Y_n)$ detects the non-triviality of $f$ because $[Y_n,Y_n] \ni 1 \rightarrow f_n$ under the cohomology operation associated to $f$.
If I am right about the error, could anyone provide intuition about why a map of spectra can have all its components nullhomotopic without being trivial?
| I'll translate my argument to the language of suspension spectra which makes it more clear where the error lies:
Let's suppose I decompose $X$ as a colimit of suspension spectra. Then this phantom operation $f$ has the property that the restriction of $f$ to any of the objects in the colimit is trivial. This is precisely the statement that $ f \in \operatorname{Ker}\bigl([\operatorname{colim}X_i,Y] \rightarrow \operatorname{lim} [X_i ,Y]\bigr)$. This is precisely what the $\operatorname{lim}^1$ term describes. In general, cohomology of a colimit is not the direct limit of the cohomologies.
I believe this is the same phenomenon behind the fact that a map of spectra (as I alluded to in my question) can have all its components nullhomotopic without being nullhomotopic itself. This is similar to the fact that maps between CW complexes exist which have their restriction to all finite CW complexes nullhomotopic without the entire map being so.
It is perhaps worth mentioning that none of these issues arise with homology in terms of bringing homotopy limits outside of homology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3414520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Compute the derivative from the definition of $f(x)=e^{-1/x^2}$ for $x\neq 0.$ Is $f'(x)$ continuous?
Compute the derivative from the definition of $f(x)=e^{-1/x^2}$ for $x\neq 0.$ Is $f'(x)$ continuous?
I tried using the definition of continuity and I wasn't sure how to show formally that $\lim\limits_{h\to 0}\dfrac{f(x+h)-f(x)}{h}=f'(x).$ Obviously, the derivative is $\dfrac{2}{x^3}e^{-1/x^2}.$
Here's my attempt.
So by the definition of the derivative, the derivative of $f(x)$ at a point $x$ is
$$\lim\limits_{h\to 0}\dfrac{e^{-1/(x+h)^2}-e^{-1/x^2}}{h}=e^{-1/x^2}\lim\limits_{h\to 0}\dfrac{e^{-1/(x+h)^2+1/x^2}-1}{h}\\
=e^{-1/x^2}\lim\limits_{h\to 0}\dfrac{e^{(2hx+h^2)/(x^2(x+h)^2)}-1}{h},$$ but here I'm stuck. How can I get to the desired limit? Any help would be appreciated.
I know that $\lim\limits_{h\to 0} \dfrac{e^h-1}{h}=1.$ Also, we want to show that $$\lim\limits_{h\to 0}\dfrac{e^{(2hx+h^2)/(x^2(x+h)^2)}-1}{h}=\dfrac{2}{x^3}$$
I was thinking of using the Taylor series expansion for $e^x,$ but isn't there some other way? Thanks in advance!
| Note that $$ \frac 1 {x^2} - \frac 1 {(x+h)^2} = \frac {h (2x + h)}{x^2 (x+h)^2} \xrightarrow {h \to 0} 0 \cdot \frac {2x}{x^4} =0,$$ so the difference of the two fractions is an infinitesimal. Now apply the limit you have learned, $$\frac {\mathrm e^h - 1}h \to 1 \quad (h \to 0),$$
by substituting this infinitesimal in place of $h$ in the limit expression.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3414630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Suppose $f(x) > 0, f'(x) > 0,$ and $f''(x) >0.$ Then $\lim_{x \to \infty} f(x) = \infty.$ Intuitively I see why this is true, in terms of a graph, but I can't seem to get a handle on a proof. I know that I need to show for every $M > 0$ there's a $d > 0$ such that whenever $x > d$, $f(x) > M.$ I attempted proceeding by contradiction, but wasn't able to make much progress. I think I'm missing something.
| Hint:
Prove that $f(n+1)-f(n)\ge f'(0)$.
Remark: The condition that $f(x)>0$ is not needed.
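To see the hint in action, here is a quick numeric check with one concrete convex, increasing, positive function, $f(x)=e^x$ (my own choice of example, not from the answer):

```python
import math

# For f(x) = e^x we have f'(0) = 1, and the mean value theorem plus f'' > 0
# give f(n+1) - f(n) = f'(c) >= f'(0) for some c in (n, n+1).
f = math.exp
ok = all(f(n + 1) - f(n) >= 1.0 for n in range(50))
print(ok)
```

Summing these gaps gives $f(n)\ge f(0)+n f'(0)\to\infty$, which is the point of the hint.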
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3414789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Any hint for this product of complex numbers I have to calculate the modulus of $z$. I've already tried to find a general formula for $\left(\frac{1+i}{2}\right)^{2^n}$, which seems to be $\frac{1}{2^{2^{n-1}}}, \forall n\geq3$, using the trigonometric form and de Moivre's formula.
$z=\left[1+\frac{1+i}{2}\right] \left[1+\left(\frac{1+i}{2}\right)^2\right] \left[1+\left(\frac{1+i}{2}\right)^4\right] \cdots \left[1+\left(\frac{1+i}{2}\right)^{2^n}\right]$
How should I keep solving this?
Thanks!
| Note that
$$z=
\frac {[1-(\frac {1+i}{2})]\left[1+\frac{1+i}{2}\right] \left[1+\left(\frac{1+i}{2}\right)^2\right] \left[1+\left(\frac{1+i}{2}\right)^4\right] \cdots \left[1+\left(\frac{1+i}{2}\right)^{2^n}\right] }{[1-(\frac {1+i}{2})]}=$$
$$\frac { \left[1-\left(\frac{1+i}{2}\right)^{2^{n+1}}\right] }{[1-(\frac {1+i}{2})]}$$
We may simplify it with $$(\frac{1+i}{2})^2 = \frac {i}{2}$$
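One can confirm the telescoping step numerically before simplifying (the choice $n=5$ below is arbitrary, and this check is not part of the original answer):

```python
# Check (1 - w) * prod_{k=0}^{n} (1 + w^(2^k)) = 1 - w^(2^(n+1)) for w = (1+i)/2.
w = (1 + 1j) / 2
n = 5

prod = 1
for k in range(n + 1):
    prod *= 1 + w ** (2 ** k)

closed = (1 - w ** (2 ** (n + 1))) / (1 - w)
print(abs(prod - closed) < 1e-12)
```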
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3414924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Infinite sum with binomial coeffcient - Hypergeometric function ${}_2F_1$ and gauss theorem From an urn and balls problem, I end up with the need to compute the following sum
$$S = \sum_{n\geq 1} \frac{1}{n}\binom{2n}{n+1}2^{-2n}$$
Using Maple I discovered $S=1$. Starting with some basic transformations, I get
$$S= \sum_{n=1}^{\infty} \frac{2^{-n}(2n-1)(2n-3)\ldots 3}{(n+1)!}$$
Therefore I can write $S+1$ using the hypergeometric function taken at the point $z=1$,
$$S+1 = {}_2F_1(\frac{1}{2},1,2,1)$$
Then using Gauss Hypergeometric theorem
$$S+1 = {}_2F_1(\frac{1}{2},1,2,1) = \frac{\Gamma(2)\Gamma(1/2)}{\Gamma(1)\Gamma(3/2)}=\frac{\Gamma(1/2)}{\Gamma(3/2)}$$
And using $\Gamma(z+1)=z\Gamma(z)$ I get $S=1$.
My question relates to Gauss' Hypergeometric theorem. I couldn't find an online proof or an explanation. And I was wondering if I could find a more direct approach for ${}_2F_1(\frac{1}{2},1,2,1) = 2$, and directly for my sum $S$.
Gauss' theorem is quite generic, and it feels kind of like using a bazooka to kill a bird. With the specific value I have, there might be an easier approach.
| The given series is telescopic. If we set $a_n=\frac{1}{4^n}\binom{2n}{n}$ we have
$$ \frac{a_{n+1}}{a_n} = \frac{(2n+2)(2n+1)}{4(n+1)^2} = \frac{2n+1}{2n+2}=1-\frac{1}{2n+2}$$
hence
$$ a_n-a_{n+1} = \frac{1}{2n+2}\binom{2n}{n}2^{-2n}= \frac{1}{2}\cdot\frac{1}{n}\binom{2n}{n+1}2^{-2n}$$
and since $a_n\to 0$ we have
$$ S=\sum_{n\geq 1}\frac{1}{n}\binom{2n}{n+1}2^{-2n}=2a_1 = \color{red}{1}.$$
As an alternative approach, by exploiting $a_n=\frac{2}{\pi}\int_{0}^{\pi/2}(\cos\theta)^{2n}\,d\theta $ we have
$$S=\frac{2}{\pi}\sum_{n\geq 1}\frac{1}{n+1}\int_{0}^{\pi/2}(\cos\theta)^{2n}\,d\theta=-\frac{2}{\pi}\int_{0}^{\pi/2}\frac{2\log\sin\theta+\cos^2\theta}{\cos^2\theta}\,d\theta$$
and it is not difficult to compute the last integral.
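A term-by-term numerical check of the telescoping identity (a sanity check, not part of the proof):

```python
from math import comb

# With a_n = C(2n, n) / 4^n, verify a_n - a_{n+1} = (1/2) * (1/n) * C(2n, n+1) * 4^(-n).
def a(n):
    return comb(2 * n, n) / 4 ** n

ok = all(
    abs((a(n) - a(n + 1)) - 0.5 * comb(2 * n, n + 1) / (n * 4 ** n)) < 1e-12
    for n in range(1, 50)
)
print(ok, 2 * a(1))  # telescoping then gives S = 2 * a_1 = 1
```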
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3415081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Polynomial bijections between $\mathbb{N}^{m}$ and $\mathbb{N}$
*
*Is it known if a polynomial bijection from $\mathbb{N}^{m}$ to $\mathbb{N}$ must necessarily be a polynomial of degree $m$?
*Are there two polynomial bijections from $\mathbb{N}^{m}$ to $\mathbb{N}$ that are not obtainable from each other by a polynomial change of variables?
*I seem to remember polynomial bijections between $\mathbb{Z}^{m}$ and $\mathbb{Z}$ are not known to exist: what would be an up-to-date reference on this subject? Or am I remembering wrong?
| Though I don't know of any particular reference, your questions can be answered for $m\geq 3$ by using the existence of a such a polynomial on $m=2$.
First, let $f(x,y)={x+y+1 \choose 2}+x$ be the usual polynomial bijection $\mathbb N^2\rightarrow \mathbb N$. We can make two observations: first, this generalizes to all $m$ in a fairly direct way - the reason this works is that we first divide $\mathbb N^2$ into a series of "shells" based on the value of $x+y$ and then we can count how many points are within the shell via the quadratic term and then how many points on the shell precede the given term via the linear term $x$.
If we divide $\mathbb N^3$ by looking first at the function $x+y+z$, then each shell is preceded by ${x+y+z+2 \choose 3}$ integer points and, when we project a set of the form $x+y+z=k$ down by removing the third coordinate, we are left with the set $x+y \leq k$ in $\mathbb N^2$ and can use the previous polynomial bijection to assign each element of that set an index within the shell. Combining these gives
$$f(x,y,z)={{x+y+z+2}\choose 3} + {{x+y+1}\choose 2} + x$$
as a bijection. One may continue an argument like this inductively to show that, in $\mathbb N^{m}$ the following is a polynomial bijection:
$$f(x_1,\ldots,x_m)=\sum_{i=1}^{m}{{(i-1)+\sum_{k=1}^ix_k}\choose i}$$
for each $m$.
However, we could also, using the first $f$, simply define
$$f(x,y,z)=f(f(x,y),z)$$
to get another polynomial bijection $\mathbb N^3\rightarrow\mathbb N$, this one with total degree $4$. For $m\geq 3$, this shows that there are polynomial bijections of multiple degrees for every $m$.
It seems like these questions are open for $m=2$, however, but the Fueter-Polya theorem mentioned in the comments implies that the questions are equivalent for $m=2$.
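The pairing polynomials above are easy to test by machine. Here is a small check (the range sizes are arbitrary) that $f(x,y)={x+y+1\choose 2}+x$ enumerates an initial segment of $\mathbb N$ bijectively and that the composed map stays injective:

```python
from math import comb

def f2(x, y):
    return comb(x + y + 1, 2) + x  # the usual shell-by-shell pairing N^2 -> N

def f3(x, y, z):
    return f2(f2(x, y), z)         # composed degree-4 bijection N^3 -> N

# The pairs with x + y <= 20 should map onto exactly {0, 1, ..., C(22,2) - 1}.
vals = sorted(f2(x, y) for x in range(21) for y in range(21) if x + y <= 20)
bijective_segment = vals == list(range(comb(22, 2)))

# f3 is injective on a small cube.
vals3 = {f3(x, y, z) for x in range(6) for y in range(6) for z in range(6)}
print(bijective_segment, len(vals3) == 6 ** 3)
```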
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3415207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Recursive compactification: Is the direct limit still compact? Thinking about the question whether there is something like a “maximal compactification” (the one-point compactification could be seen as “minimal” compactification), I've come across the possibility to recursively compactify a set (so that you get multiple copies of the original space, but it is still a compactification of one of them).
The basic construction goes like this: Be $X$ a non-compact topological space (this is the space that shall be compactified). Be $C$ an arbitrary compact topological space. Then I define a topology on the disjoint union of $X$ and $C$ as follows:
A set on the disjoint union is open if it is either an open subset of $X$ or the union of the X-complement of a closed compact set of $X$ and a non-empty open subset of $C$.
It is not hard to show that this gives indeed a compactification of $X$, and in addition the subspace topology on $C$ is the original topology on $C$.
This of course answered my original question, as $C$ could be made arbitrary large, making the compactifiation of $X$ arbitrary large, too.
But then, I noticed that since the only condition on $C$ was that it is compact, I could choose as $C$ a compactification of $X$. That is, I then get a compactification of $X$ that contains a second copy of $X$.
Indeed, I could in turn use that compactification of $X$ as $C$, so I now get a compactification of $X$ that in total contains three copies of $X$.
Now I can continue this recursively, and since each of the compactifications is (homeomorphic to) a proper subspace of the next one, I can take the direct limit to get a space with (countably) infinitely many copies of $X$.
Now that direct limit is clearly not a compactification of $X$ (as the closure of each copy of $X$ only covers the earlier copies of $X$), however is it at least still a compact set (and thus a proper “seed” for another round of recursive compactifications)?
| Sequential direct limits are almost never compact. In particular, a direct limit of a sequence of inclusions of $T_1$ spaces is never compact (unless the sequence eventually stabilizes); see Compact subset in colimit of spaces. So, if the initial compactification you use is $T_1$, then your direct limit will not be compact.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3415520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Matroids-Discrete Optimization I happen to stumble across this question where they ask to give an example of an independent system that has two maximal independent sets $X$ and $Y$ such that $|X|\neq |Y|$. How exactly would you formulate such an example?
| An independence system is a collection $C$ of subsets of a finite set $E$ such that if $c \in C$ and $b \subset c$ then $b \in C$. Independence systems are also known as finite (abstract) simplicial complexes. A simplicial complex whose inclusion-maximal subsets all have the same cardinality is called pure. The question asks for a simplicial complex that is not pure. Here is a minimal example.
Let $E = \{1,2,3\}$ and let $C$ be the set of all subsets of $A = \{1\}$ together with the set of all subsets of $B = \{2,3\}$. So $C = \{\emptyset, \{1\},\{2\},\{3\},\{2,3\}\}$. Then $C$ is an independence system with (inclusion-)maximal subsets $A$ and $B$.
For contrast, a matroid is a simplicial complex whose maximal subsets satisfy the matroid basis exchange property. Here's a pure simplicial complex on $E = \{1,2,3,4\}$ that is not a matroid: $C = \{\emptyset, \{1\},\{2\},\{3\},\{4\},\{1,2\},\{3,4\}\}$.
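Both examples are small enough to verify mechanically. The sketch below (my own check, not part of the answer) confirms the downward-closure property and the two different maximal sizes of the first example:

```python
from itertools import combinations

# The non-pure independence system: all subsets of {1} plus all subsets of {2, 3}.
C = [frozenset(s) for s in [(), (1,), (2,), (3,), (2, 3)]]

# Downward closed: every subset of a member is again a member.
downward_closed = all(
    frozenset(b) in C
    for c in C
    for r in range(len(c) + 1)
    for b in combinations(c, r)
)

# Inclusion-maximal members have different cardinalities 1 and 2, so C is not pure.
maximal = [c for c in C if not any(c < d for d in C)]
print(downward_closed, sorted(len(m) for m in maximal))
```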
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3415784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find $n$ points on a circle with integer distances.
Let $n$ be a positive integer, prove that it is possible to put $n$ points on a circle so that the distances among them are all integers.
For $n \leq 3$ this is trivial. I have shown it for $n=4$ by considering a rectangle.
I don't know how to do it for larger numbers though.
| This is with radius 5 and edge length 6. It appears that the final edge (not drawn in) also has rational length since it is vertical and both endpoints have rational coordinates
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3415894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Saturation of the reals as a dense linear order I've been given the following problem (note $T_{ord}$ is the theory of dense linear orders without endpoints):
Show that $\langle \mathbb{R}, < \rangle$ (the reals with their usual ordering) is not a saturated model of $T_{ord}$ of cardinality $\aleph_1$.
I don't think this is too terribly hard, but I'm thrown by the phrasing in light of a previous problem:
Let $M$ be a saturated model of $T_{ord}$. Show that for every $A \subseteq |M|$ such that $|A| < ||M||$ there exists $b \in |M|$ such that $M \models a < b$ for every $a \in A$.
So it seems to me that we don't need the "of cardinality $\aleph_1$" clause in the first question at all; if the reals were a saturated model of $T_{ord}$, then, eg, setting $A = \mathbb{N}$ would give us a real number $b$ larger than every natural number. So I am wondering if there's a flaw in this argument (maybe a snafu with the subtleties of $\models$ that I'm missing?) or if I'm correct that "of cardinality $\aleph_1$" is redundant here.
Thanks!
| Yes, the "of cardinality $\aleph_1$" here is not really relevant (it simply gives a much shorter proof in the event that $\mathbb{R}$ does not have cardinality $\aleph_1$ at all, i.e. if the continuum hypothesis is false). It is possible that the intended statement was instead that $\langle \mathbb{R}, < \rangle$ is not an $\aleph_1$-saturated model (which is equivalent to saturated if $\mathbb{R}$ has cardinality $\aleph_1$, but is stronger otherwise; in any case your proof still works though, since $A=\mathbb{N}$ is countable).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3416006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Coordinatable continuous functions in terms of orthogonal systems $\{e^{in\theta}\}$ Question. Let $f$ be an arbitrary $2\pi$-periodic integrable function on the reals. Does there exist any sequence $\{t_n\}$ with $f(\theta)=\sum t_ne^{in\theta}$?
If not, what about if we replace the integrable function with a continuous one?
| There is the following theorem of de la Vallée-Poussin, which I quote from page 880 of J. Marshall Ash's Uniqueness of Representation by Trigonometric Series:
If $S=\sum_n t_n e^{inx}$ converges to $f$ at each $x$, and if $f$ is finite at each $x$ and if $\int_{\mathbb T} |f(x)| dx< \infty$, then $S$ is the Fourier series of $f$.
This theorem is to be found in Zygmund's Trigonometric Series, Vol I, page 326. The beginning of Ash's paper proves the special case for $f=0$, but it seems that the proof of the general case spans several pages, so I will leave it at that.
For your question: consider a continuous function $f$ constructed via the Uniform Boundedness Principle (see for example, the note of Paul Garrett linked in this question). Suppose that $S$ converged pointwise to $f$. By de la Vallée-Poussin's theorem, $S$ must be the Fourier series of $f$. But the Fourier series of $f$ does not converge at every point, which is a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3416153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
When two curves touch each other at a single point, are they called intersecting? When any two curves touch each other at a single point, are they called intersecting, or are they just said to be touching each other? Are the terms "intersecting curves" and "touching curves" used interchangeably? Is there any difference between these terms?
| Let me clarify my above comment.
The discussion might be endless because there is no consensus on the definitions of "intersecting" and "touching".
One cannot give a definitive answer to the question raised until the definitions of these two words are clear and accepted by everybody.
Presently I doubt that the respective definitions are standardized, except in the case of "intersection" in set theory.
In set theory the "intersection" is well defined: it is the subset common to the two sets considered. https://en.wikipedia.org/wiki/Intersection_(set_theory) .
If this definition from set theory were extended to usual geometry, the intersection of two curves would be the common part of the curves, whatever the overall configuration and whether the common part is one point only or many. But everybody may or may not support this view.
I am not qualified for the standardization of mathematical vocabulary. Nevertheless I am allowed to give my own opinion, which is:
*
*For geometry, generalize the definition of "intersection" well established in the set theory.
*And for geometry, standardize the definitions of some sub-cases such as "Crossing intersection", "Touching intersection", etc.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3416237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
What condition for $n$ such that $G_n$ is graph planar? Let $G_n$ be the simple graph with the set of vertices $V_n=\{1, 2, . . . , n\}$. Two different vertices $x$ and $y$ are adjacent if $x$ is a multiple of $y$ or $y$ is a multiple of $x$. For what $n$ is $G_n$ planar?
I tried to draw the graph starting from small $n$, until I could not draw $G_n$ without crossings; when that happened, I moved some of the vertices to avoid the crossings. But I cannot find for which $n$ the graph $G_n$ is planar.
Any hint is highly appreciated.
| If $n\geq18$ the graph $G_n$ is not planar since the vertices $\{1,2,3,6,12,18\}$ are the vertices of a $K_{3,3}$ inside $G_n$.
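The divisibility pattern behind this answer is immediate to check by machine (a sketch with the two vertex classes hard-coded, not part of the original answer):

```python
# Every vertex of {1, 2, 3} is adjacent to every vertex of {6, 12, 18} in G_n,
# giving a K_{3,3} subgraph, so G_n is non-planar for n >= 18 by Kuratowski's theorem.
A, B = [1, 2, 3], [6, 12, 18]

def adjacent(x, y):
    return x != y and (x % y == 0 or y % x == 0)

complete_bipartite = all(adjacent(a, b) for a in A for b in B)
print(complete_bipartite)
```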
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3416408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A conceptual reason for why the Jacobian of a rotation by a changing angle is $1$? Consider
$$f(x,y)=(x\cos r^2+y\sin r^2, y\cos r^2-x\sin r^2)\qquad\text{with }r=\sqrt{x^2+y^2},$$
as a map $\mathbb{R}^2 \to \mathbb{R}^2$. Geometrically, $f(x,y)$ is obtained from the vector $(x,y)$ by rotating it by angle $r^2$. (we rotate the more we move away from the origin).
Wolfram Alpha claims that $\det (df)=1$ holds identically. Is there an elegant rigorous way of seeing this, without too much computation?
Should this be "obvious" in retrospect? I was a bit surprised by this result... and direct computation is not so nice to do by hand (although tractable).
I thought using the chain rule (treating the angle as a function of $x,y$) but got nowhere.
Edit:
I agree that roughly speaking, near a point $p$, this function is like a standard rotation by a fixed angle $|p|^2$. However, I do not consider this a rigorous explanation. Indeed, this vague intuition is still with us when we replace $r^2=x^2+y^2$ by $x^4+y^4$, but then the Jacobian is non-constant.
So, this property doesn't even hold for smooth radially symmetric angle function $\theta(x,y)$.
Is this just a coincidence then? Can we characterize the angle functions (or at least radially symmetric ones) which satisfy this?
(Wolfram says the Jacobian remains $1$ when we replace $r^2$ by $r$ or $r^4$. Perhaps this remains true for any power of $r$?)
| Partition the plane into very thin annular sets $A_k: \>r_{k-1}\leq\sqrt{x^2+y^2}< r_k$ $(k\in {\mathbb N}_{\geq1})$. The map $f$ (approximately) rotates each $A_k$ by an angle $\phi_k$ around the origin, hence is (approximately) an isometry on $A_k$. This implies $\mu\bigl(f(X)\bigr)=\mu(X)$ for any $X\subset A_k$. For an arbitrary set $B\subset{\mathbb R}^2$ consider the annular subsets $B_k:=B\cap A_k\subset A_k$ and obtain
$$\mu\bigl(f(B)\bigr)=\sum_{k=1}^\infty \mu\bigl(f(B_k)\bigr)=\sum_{k=1}^\infty \mu(B_k)=\mu(B)\ .$$
This shows that $f$ is globally area preserving. (But I'd say that using the chain rule on $df$ is simpler.)
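The claim $\det(df)=1$ can also be checked numerically with finite differences (the step size and sample points below are arbitrary; this is a sanity check, not part of the answer):

```python
import math

# f rotates (x, y) by the angle r^2 = x^2 + y^2.
def f(x, y):
    t = x * x + y * y
    c, s = math.cos(t), math.sin(t)
    return (x * c + y * s, y * c - x * s)

# Central-difference approximation of the Jacobian determinant at (x, y).
def jac_det(x, y, h=1e-6):
    fxp, fxm = f(x + h, y), f(x - h, y)
    fyp, fym = f(x, y + h), f(x, y - h)
    a = (fxp[0] - fxm[0]) / (2 * h); c = (fxp[1] - fxm[1]) / (2 * h)
    b = (fyp[0] - fym[0]) / (2 * h); d = (fyp[1] - fym[1]) / (2 * h)
    return a * d - b * c

pts = [(0.3, 0.7), (1.2, -0.4), (-2.0, 1.5)]
ok = all(abs(jac_det(x, y) - 1.0) < 1e-6 for x, y in pts)
print(ok)
```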
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3416520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
} |
Evaluation of Integral sums Hey I am supposed to evaluate: $$\lim_{n \to \infty }\sum_{i=1}^{n}\frac{i}{n^{2}+i^{2}}$$
What I did:
$$\lim_{n \to \infty }\sum_{i=1}^{n}\frac{i}{n^{2}+i^{2}}=\lim_{n \to \infty }\frac{1}{n^{2}}\sum_{i=1}^{n}\frac{i}{1+\left ( \frac{i}{n} \right )^{2}}$$
But I do not know what to do next, or how to eliminate $i$ so I can transform it into an integral.
Can anyone help me?
| $$\dfrac r{n^2+r^2}=\dfrac1n\cdot\dfrac{r/n}{1+(r/n)^2}$$
Now use The limit of a sum $\sum_{k=1}^n \frac{n}{n^2+k^2}$
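Numerically, the rewritten sum is a Riemann sum approaching $\int_0^1 \frac{x}{1+x^2}\,dx = \tfrac12\ln 2$ (a quick check with an arbitrary cutoff, not part of the original answer):

```python
import math

# Right-endpoint Riemann sum of x / (1 + x^2) on [0, 1].
def partial(n):
    return sum(i / (n ** 2 + i ** 2) for i in range(1, n + 1))

target = 0.5 * math.log(2)           # the value of the limiting integral
err = abs(partial(10 ** 5) - target)
print(err < 1e-4)
```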
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3416612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |