If $f$ is a non-constant analytic function on a compact domain $D$, then $Re(f)$ and $Im(f)$ assume their max and min on the boundary of $D$. This is a homework problem I got, my attempted proof is:
Since $f$ is non-constant and analytic, $f=u(x,y)+iv(x,y)$, where neither $u$ nor $v$ is constant (by the Cauchy–Riemann equations) and $u$, $v$ are both harmonic in $D$.
Therefore $u$ and $v$ both assume their max on the boundary of $D$ (by maximum modulus theorem).
Also, $u$ and $v$ have no minima in the interior of $D$ unless they are $0$. I'm stuck here and don't know how to show that $u$ and $v$ are nonzero.
I looked at the answer at the back of the book. They used the Open Mapping Theorem(the image of an open set under a nonconstant analytic mapping is an open set):
According to the Open Mapping Theorem, the image under $f$ of any open set $D$ containing $z_0$ in its interior is an open set containing $f(z_0)$ in its interior. Hence, both $\operatorname{Re} f$ and $\operatorname{Im} f$ assume larger and smaller values in $D$ than the values $\operatorname{Re} f(z_0)$ and $\operatorname{Im} f(z_0)$.
I don't understand the proof given by the book; can someone explain a bit? Also, what do you think about my proof?
| Here's the book's proof with a tad more detail: Suppose $z_0$ is not on the boundary. We will show neither the real nor the imaginary parts of $f$ are maximized at $z_0$. Since the real and complex parts are continuous, they obtain their maxima and minima somewhere on $D$ because $D$ is compact. Hence they must obtain their minima and maxima on the boundary.
Take a small neighborhood of $z_0$ inside your domain. The image of this open neighborhood is an open neighborhood of $f(z_0)$ by the open mapping theorem. Let $U$ be the neighborhood of $f(z_0)$ just described. For small values of $\varepsilon$, $U$ contains the points $f(z_0)+\varepsilon$ , $f(z_0)-\varepsilon$, $f(z_0)+i\varepsilon$, and $f(z_0)-i\varepsilon$, whose real and imaginary parts are more/less than those of $f(z_0)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Show that $f$ maps the entire unit disc onto itself. Suppose $f$ is analytic in the unit disc $D(0,1)$ and maps the unit circle into itself. Show then that $f$ maps the entire disc onto itself.
So the outline wants us to use the Max Modulus Theorem to show that $f$ maps $D(0,1)$ into itself. Then, use the fact that we proved that if $f:S \to T$, $f$ non-constant and analytic on $S$, and if $f(z)$ is a boundary point, $z$ is a boundary of $S$ to show that the mapping is onto.
I'm not sure if mapping the unit circle into itself means that $|f|=1$ on the unit circle. Also is the unit disc compact? Thanks!
| If $w \in D$ and $w \not \in f(D)$ then
$$
z \mapsto \frac{1}{f(z)-w}
$$
is holomorphic on $\overline{D}$. By the maximum modulus principle, for any $z \in D$
$$
\left|\frac{1}{f(z)-w}\right| \leq \max_{\omega \in \partial D} \left| \frac{1}{f(\omega)-w} \right| \leq \max_{\omega \in \partial D}\frac{1}{\left||f(\omega)|-|w|\right|} = \frac{1}{1-|w|}
$$
so $|f(z) - w| \geq 1-|w|$. This means that $D \setminus f(D)$ is open. By the open mapping theorem either $f$ is constant or $f(D)$ is open. The latter implies that $f(D)=D$ since $D$ is connected.
Moreover, the image of a compact set under a continuous function is compact. Therefore if $f$ is not constant then $f(\overline{D}) = \overline{D}$.
Or, alternatively, this bound shows that $w$ can be moved in a straight line towards $0$ while the radius of the "image free" disc around it increases. For $w=0$ the bound becomes $|f(z)| \geq 1$ so that $f(D) \cap D = \emptyset$. So either $f(D)$ contains all of $D$ or avoids it entirely. In the latter case $f$ maps into the unit circle. This would mean that $\overline{f} = 1/f$, but $\overline{f}$ can be holomorphic (complex differentiable) only if $f'$ vanishes identically. The conclusion is that either $f(D)=D$ or $f$ is constant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Complexity of counting the number of triangles of a graph The trivial approach of counting the number of triangles in a simple graph $G$ of order $n$ is to check for every triple $(x,y,z) \in {V(G)\choose 3}$ if $x,y,z$ forms a triangle.
This procedure has an algorithmic complexity of $O(n^3)$.
It is well known that if $A$ is the adjacency matrix of $G$ then the number of triangles in $G$ is $tr(A^3)/6.$
Since matrix multiplication can be computed in time $O(n^{2.37})$ it is natural to ask:
Is there any (known) faster method for computing the number of triangles of a graph?
| Let me cite this paper from 2007 (Practical algorithms for triangle computations in very large (sparse (power-law)) graphs by Matthieu Latapy):
The fastest algorithm known for finding and counting triangles relies
on fast matrix product and has an $\mathcal{O}(n^\omega)$ time
complexity, where $\omega < 2.376$ is the fast matrix product
exponent. This approach however leads to a $\Theta(n^2)$ space
complexity.
There are some improvements for sparse graphs or if you want to list the triangles shown in the document.
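Not part of the thread, but the $tr(A^3)/6$ formula from the question is easy to sanity-check against the brute-force triple test on a small graph. The sketch below is my addition; the test graph (a $K_4$ plus a pendant vertex) is an arbitrary choice:

```python
from itertools import combinations

def triangle_count_trace(adj):
    """Count triangles as tr(A^3)/6, with two explicit matrix products."""
    n = len(adj)
    a2 = [[sum(adj[i][k] * adj[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    trace_a3 = sum(a2[i][k] * adj[k][i] for i in range(n) for k in range(n))
    return trace_a3 // 6

def triangle_count_naive(adj):
    """The O(n^3) check of every triple, as in the question."""
    n = len(adj)
    return sum(1 for x, y, z in combinations(range(n), 3)
               if adj[x][y] and adj[y][z] and adj[x][z])

# K4 plus a pendant vertex: K4 alone contains C(4,3) = 4 triangles.
n = 5
adj = [[0] * n for _ in range(n)]
for u, v in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]:
    adj[u][v] = adj[v][u] = 1

print(triangle_count_trace(adj), triangle_count_naive(adj))  # both print 4
```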
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 0
} |
p-adic ultrametric balloons I want to show that, with the p-adic ultrametric $|\cdot|_{p}=p^{-\operatorname{ord}_p(\cdot)}$ (where $\cdot$ is an integer and $p$ is a prime, with $|0|_{p}=0$), if we have balloons $$B(a,R)=\{z\in \mathbb{Z}:|z-a|_{p}\le R\}$$ $$B(b,R)=\{z\in \mathbb{Z}:|z-b|_{p}\le R\}$$
then they can never overlap.
Proposition: They can never overlap without being the same.
Assume we have two different points $y$ and $y'$ in one ball. If $y\in B(a,R)$, then by the strong triangle inequality $|a+b|_{p}\le \sup\{|a|_{p}, |b|_{p}\}$,
now set $a:=z-y'$ and $b:= y'-y$, so we get: $|z-y|_{p}\le \sup\{|z-y'|_{p}, |y'-y|_{p}\}$.
So that means $|z-y|_{p} \le R \Leftrightarrow |z-y'|_{p}\le R$, i.e. every point of the ball is a center point, so the balloons can never overlap without being the same.
Is this a proof for the proposition? (I never made use of the second balloon...)
| Your proof is correct and usually the point where most people would stop, but you can complete the proof explicitly to make it clearer how the second balloon is related:
You have shown that for all $y, y' \in B(a,R)$, you have $ B(a,R) = B(y,R)=B(y',R).$ Now suppose there is some intersection between the two balloons, say at $z$. Then since $z$ is in the first ball, by the previous result $ B(a,R) = B(z,R).$ Similarly, since $z$ is in the second balloon, $B(b,R) = B(z,R)$, and thus $B(a,R) = B(b,R)$ so the balloons are the same.
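As an illustration (my addition, not part of the proof), the "every point is a center" phenomenon and the overlap dichotomy can be checked by brute force over a finite window of integers. The choices $p=5$, $R=1/5$ and the window are arbitrary:

```python
from fractions import Fraction

def ord_p(n, p):
    """p-adic valuation of a nonzero integer."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def abs_p(n, p):
    return Fraction(0) if n == 0 else Fraction(1, p ** ord_p(n, p))

def ball(center, R, p, window):
    return {z for z in window if abs_p(z - center, p) <= R}

p, R = 5, Fraction(1, 5)
window = range(-200, 201)
B = ball(0, R, p, window)          # here: the multiples of 5 in the window
# every point of a ball is a center of that same ball:
assert all(ball(y, R, p, window) == B for y in B)
# two balls of equal radius either coincide or are disjoint:
for b in range(-10, 11):
    Bb = ball(b, R, p, window)
    assert Bb == B or Bb.isdisjoint(B)
```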
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are there names for the indices of the spherical harmonics? I know that physicists call $\ell$ and $m$ the "azimuthal" and "magnetic" quantum numbers, respectively. But those sound very physics-y. (I am actually a physicist, but still.) Are there names for these considering the spherical harmonics simply from a mathematical perspective? Maybe "zonal" and "modal" numbers? Any precedent for those???
| So far as I'm aware, the names for those indices are inherited from the names for the corresponding arguments of the associated Legendre polynomials that show up in the definition of the spherical harmonics; thus, in $Y_\ell^m(\theta,\varphi)$, $\ell$ is the degree, and $m$ is the order.
But if you're communicating with physicists or chemists, I would recommend just terming them the quantum numbers, since those names are more intuitive anyway...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Dimension of $GL(n, \mathbb{R})$ Why is the group $GL(n, \mathbb{R})$ of dimension $n^{2}$?
| I'm sorry people misunderstood your question. You are clearly asking about dimension in terms of linear algebra and not in terms of manifolds.
The set $GL(n)$ is not a subspace. To see this, take any invertible matrix $A$; then $-A$ is also invertible. But $A-A=0$ is obviously not invertible, so $GL(n)$ is not a vector subspace for any $n$, and it makes no sense to talk about its dimension in terms of linearly independent vectors.
However, $GL(n)$ is what is known as a submanifold, which means that although $GL(n)$ is not a vector space, you can "locally" (i.e. in some neighborhood of each point) view it as having a vector space structure. But this is something you shouldn't worry about in linear algebra.
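A two-line numerical illustration of the closure failure described above (my addition; any invertible $A$ works, the identity is just the simplest choice):

```python
def det2(M):
    """Determinant of a 2x2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 0], [0, 1]]
minus_A = [[-A[i][j] for j in range(2)] for i in range(2)]
S = [[A[i][j] + minus_A[i][j] for j in range(2)] for i in range(2)]

assert det2(A) != 0 and det2(minus_A) != 0   # both lie in GL(2)
assert det2(S) == 0                          # but their sum does not
```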
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Given 5 children and 8 adults, how many ways can they be seated so that there are no two children sitting next to each other.
Possible Duplicate:
How many ways are there for 8 men and 5 women to stand in a line so that no two women stand next to each other?
Given 5 children and 8 adults, how many different ways can they be seated so that no two children are sitting next to each other.
My solution:
Writing out all possible seating arrangements:
I tried using $\displaystyle \frac{34\cdot 5!\cdot 8!}{13!}$ to get the solution, because $13!$ is the size of the sample space, $5!$ is the number of arrangements of the children, $34$ counts placements with no two children next to each other, and $8!$ is the number of arrangements of the adults.
| We have $8$ nice comfortable chairs for the adults, separated by some space. This determines $9$ "gaps" where a kid can drag a stool. (It is $9$ because a kid can drag a stool between two adult chairs, or to the left end, or to the right end.)
The seating arranger chooses $5$ of these places to put a stool into. This can be done in $\binom{9}{5}$ ways. For each of these ways, the adults can be seated in $8!$ orders, and for every way to do this, the children can occupy the stools in $5!$ orders. The number of ways is therefore
$$5!8! \binom{9}{5}.$$
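The gap argument can be checked mechanically: the formula evaluates to $609{,}638{,}400$, and on a smaller instance (3 adults and 2 children, chosen small enough to enumerate) it agrees with brute force. This check is my addition, not part of the original answer:

```python
from itertools import permutations
from math import comb, factorial

# Formula from the answer: choose 5 of the 9 gaps, then order everyone.
answer = factorial(5) * factorial(8) * comb(9, 5)
print(answer)  # 609638400

# Brute-force check of the same gap argument on a smaller case:
# adults 0,1,2 and children 3,4, with no two children adjacent.
children = {3, 4}
brute = sum(1 for perm in permutations(range(5))
            if all(not (perm[i] in children and perm[i + 1] in children)
                   for i in range(4)))
assert brute == factorial(2) * factorial(3) * comb(4, 2)  # 72
```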
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
How to construct this metric space Please give an example of a metric space in which there are two open balls $B(x,\rho_1) \subset B(y,\rho_2)$ with $\rho_1>\rho_2$.
| If your $\subset$ allows the possibility of equality (i.e. $\subset$ means $\subseteq$), then letting $d$ be the discrete metric on any non-empty set $X$, we have that $B(x,3)=B(y,2)=X$ for any $x,y\in X$.
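A quick check of this example in code (my addition; the three-point set is arbitrary):

```python
def d(x, y):
    return 0 if x == y else 1  # the discrete metric

def ball(center, radius, space):
    return {z for z in space if d(center, z) <= radius}

X = {"a", "b", "c"}
B1 = ball("a", 3, X)   # rho_1 = 3
B2 = ball("b", 2, X)   # rho_2 = 2
assert B1 == B2 == X   # B(x,3) is contained in B(y,2) even though 3 > 2
```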
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Coin sequence paradox from Martin Gardner's book "An event less frequent in the long run is likely to happen before a more frequent event!"
How can I show that THTH is more likely to turn up before HTHH, with probability
$9/14$, even though the expected waiting time of THTH is $20$ and that of HTHH is only $18$?
I would be very thankful if you could show me the way of
calculating the probability of turning up earlier,
and the waiting time. Thank you!
| It is enough to devise a set of linear equations ($p=q=1/2$):
\begin{align*}
T &= pT+qTH \\
TH &= pTHT + qH \\
THT &= pT+q1 \\
H &= p(pT+q(pTHT+q0))+qH \\
X &= pT + qH
\end{align*}
and after solving it
\begin{align*}
T &= 5/7 \\
TH &= 5/7 \\
THT &= 6/7 \\
H &= 4/7 \\
X &= 9/14
\end{align*}
We get $X = 9/14$, which is what you were looking for. Let (1) mean getting THTH before HTHH. What these equations say is that the probability of (1) starting from T equals $1/2$ times the probability of (1) starting from TT (which is equivalent to starting from T) plus $1/2$ times the probability of (1) starting from TH (which is not equivalent). The rest follows similarly.
Edit: To be more intuitive (but less strict) let us observe that if HTHH happens at position other than 0 (and that happens with $1-1/16 = 15/16$ probability), then with probability at least $1/2$ THTH happens before -- due to probability of T before HTHH. Just by that you know that probability of (1) is greater than $15/16 \cdot 1/2 = 15/32$.
Adding probability of THTHT (the last T is required so the events won't overlap) at position 0 (1/32) you got 1/2 total and we haven't counted all
the instances yet, so surely probability of (1) is strictly greater than 1/2.
Hope that helps ;-)
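An alternative to solving the linear system, added here as a cross-check, is Conway's leading-numbers algorithm; it reproduces both the $9/14$ probability and the waiting times $20$ and $18$ quoted in the question:

```python
from fractions import Fraction

def corr(x, y):
    """Conway correlation: sum of 2^k over k where the last k letters of x
    equal the first k letters of y.  corr(x, x) is the expected waiting time."""
    return sum(2 ** k for k in range(1, len(x) + 1) if x[-k:] == y[:k])

A, B = "THTH", "HTHH"
assert corr(A, A) == 20 and corr(B, B) == 18   # the waiting times from the question

# Conway: P(A appears before B)
#   = (corr(B,B) - corr(B,A)) / ((corr(A,A) - corr(A,B)) + (corr(B,B) - corr(B,A)))
p_A_first = Fraction(corr(B, B) - corr(B, A),
                     (corr(A, A) - corr(A, B)) + (corr(B, B) - corr(B, A)))
assert p_A_first == Fraction(9, 14)
```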
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 2
} |
Dyadic Expansion-Proof? Working through a measure theory textbook, and would like to understand dyadic expansions before I can understand its connections with the law of large numbers. I want to see this proved in detail,
For a mapping $F$ from $\Omega=(0,1]$ into itself given by $$F\omega=\begin{cases}
2\omega & \mbox{if } 0< \omega\leqslant\frac{1}{2} \\
2\omega-1 & \mbox{if } \frac{1}{2}< \omega\leqslant1
\end{cases} $$ and $$d_1(\omega)=\begin{cases}
0 & \mbox{if } 0< \omega\leqslant\frac{1}{2} \\
1 & \mbox{if } \frac{1}{2}< \omega\leqslant1
\end{cases} $$ and $d_i(\omega)=d_1(F^{i-1}\omega)$, how do we prove that $\omega=\sum_{i=1}^{\infty }d_i(\omega)/{2^i}$?
| The Idea: Suppose that $0.b_1b_2b_3\dots$ is the binary expansion of $\omega$, non-terminating if there is a choice. The map $F$ is essentially just a shift map: it takes $0.b_1b_2b_3\dots$ to $0.b_2b_3b_4\dots$. The map $d_1$ then picks off the first bit of the expansion, and in general $d_n$ picks off the $n$-th bit. Here are the details.
It’s not hard to check that
$$F^2(\omega)=\begin{cases}
4\omega,&\text{if }0<\omega\le\frac14\\\\
4\omega-1,&\text{if }\frac14<\omega\le\frac12\\\\
4\omega-2,&\text{if }\frac12<\omega\le\frac34\\\\
4\omega-3,&\text{if }\frac34<\omega\le1\;.
\end{cases}$$
In fact, it’s not hard to prove by induction on $n$ that for $k=0,\dots,2^n-1$, $F^n(\omega)=2^n\omega-k$ iff $\frac{k}{2^n}<\omega\le\frac{k+1}{2^n}$. And $\frac{k}{2^n}<\omega\le\frac{k+1}{2^n}$ iff $k<2^n\omega\le k+1$ iff $k=\lceil 2^n\omega\rceil-1$, so if we wish, we can simply write $F^n(\omega)=2^n\omega-\lceil 2^n\omega\rceil+1$.
Next, observe that $$0<\omega-\frac{d_1(\omega)}2\le\frac12$$ for all $\omega\in(0,1]$. Suppose that $$0<\omega-\sum_{k=1}^n\frac{d_k(\omega)}{2^k}\le\frac1{2^n}\;.\tag{1}$$
Then $$0<2^n\omega-\sum_{k=1}^nd_k(\omega)2^{n-k}\le 1\;.$$ Let $m=\sum_{k=1}^nd_k(\omega)2^{n-k}$; $m$ is an integer, and $m<2^n\omega\le m+1$, so $m+1=\lceil 2^n\omega\rceil$, and $2^n\omega-\lceil 2^n\omega\rceil+1=2^n\omega-m$.
Now $d_{n+1}(\omega)=d_1(F^n(\omega))=d_1(2^n\omega-\lceil 2^n\omega\rceil+1)=d_1(2^n\omega-m)$; this is $1$ if $2^n\omega-m>\frac12$ and $0$ otherwise. But
$$\begin{align*}
2^n\omega-m>\frac12&\text{ iff }\omega-\frac{m}{2^n}>\frac1{2^{n+1}}\\
&\text{ iff }\omega-\sum_{k=1}^n\frac{d_k(\omega)}{2^k}>\frac1{2^{n+1}}\\
&\text{ iff }\omega-\sum_{k=1}^{n+1}\frac{d_k(\omega)}{2^k}>0\;,
\end{align*}$$
so in all cases $$0<\omega-\sum_{k=1}^{n+1}\frac{d_k(\omega)}{2^k}\le\frac1{2^{n+1}}\;.\tag{2}$$
(The second inequality in $(2)$ follows from the fact that $2^n\omega-m\le 1$.) Thus, $(1)$ implies $(2)$, and by induction $(1)$ holds for all $n\ge 1$. Since $\frac1{2^n}\to 0$ as $n\to\infty$, it follows that $$\omega=\sum_{k=1}^\infty\frac{d_k(\omega)}{2^k}\;.$$
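The inequality $(1)$ can also be watched numerically. Below is a small sketch (my addition) using exact rational arithmetic; the starting point $\omega=3/8$ is arbitrary, and note that the algorithm produces its non-terminating expansion $0.0101111\dots$:

```python
from fractions import Fraction

def F(w):
    return 2 * w if w <= Fraction(1, 2) else 2 * w - 1

def d1(w):
    return 0 if w <= Fraction(1, 2) else 1

def bits(w, n):
    """First n dyadic digits d_1(w), ..., d_n(w)."""
    out = []
    for _ in range(n):
        out.append(d1(w))
        w = F(w)
    return out

w = Fraction(3, 8)
partial = Fraction(0)
for i, d in enumerate(bits(w, 20), start=1):
    partial += Fraction(d, 2 ** i)
    assert 0 < w - partial <= Fraction(1, 2 ** i)   # inequality (1) at every n
```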
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Quadratic Diophantine equation in three variables How would one determine solutions to the following quadratic Diophantine equation in three variables:
$$x^2 + n^2y^2 \pm n^2y = z^2$$
where n is a known integer and $x$, $y$, and $z$ are unknown positive integers to be solved.
Ideally there would be a parametric solution for $x$, $y$, and $z$.
[Note that the expression $y^2 + y$ must be an integer from the series {2, 6, 12, 20, 30, 42 ...} and so can be written as either $y^2 + y$ or $y^2 - y$ (e.g., 12 = $3^2 + 3$ and 12 = $4^2 - 4$). So I have written this as +/- in the equation above.]
Thanks,
| For the case of the equation $X^2+qY^2+qY=Z^2$,
when the coefficient $q$ is not a square, solutions can be expressed using Pell's equation: $p^2-qs^2=1$.
Then one family of solutions is:
$X=X$
$Y=2psX-p^2$
$Z=Xp^2-qps+qXs^2$
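This family is easy to verify mechanically. The check below (my addition) confirms that the identity $X^2+qY^2+qY=Z^2$ holds for every integer $X$ once $p^2-qs^2=1$; whether $Y$ and $Z$ come out positive still has to be arranged separately:

```python
# Check the claimed parametric family on concrete Pell solutions p^2 - q*s^2 = 1.
for q, p, s in [(2, 3, 2), (3, 2, 1), (5, 9, 4), (6, 5, 2)]:
    assert p * p - q * s * s == 1
    for X in range(-10, 11):
        Y = 2 * p * s * X - p * p
        Z = X * p * p - q * p * s + q * X * s * s
        assert X * X + q * Y * Y + q * Y == Z * Z
```

For instance $q=2$, $(p,s)=(3,2)$, $X=1$ gives $(X,Y,Z)=(1,3,5)$: indeed $1+2\cdot 9+2\cdot 3=25=5^2$.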
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 2
} |
Another residue theory integral I need to evaluate the following real convergent improper integral using residue theory (it is vital that I use residue theory, so other methods are not needed here).
I also need to use the following contour (specifically, a keyhole contour to exclude the branch cut):
$$\int_0^\infty \frac{\sqrt{x}}{x^3+1}\ \mathrm dx$$
| Since a solution involving contour integration has been given, I am providing an alternative method without contour integration. Let $u:=\sqrt{x}$. Then, the integral $I:=\displaystyle\int_0^\infty\,\frac{\sqrt{x}}{x^3+1}\,\text{d}x$ equals
$$I=2\,\int_0^\infty\,\frac{u^2}{u^6+1}\,\text{d}u=\int_{-\infty}^{+\infty}\,\frac{u^2}{u^6+1}\,\text{d}u\,.$$
Note that
$$\frac{u^2}{u^6+1}=\frac{1}{3}\,\left(\frac{u^2+1}{u^4-u^2+1}\right)-\frac13\,\left(\frac{1}{u^2+1}\right)\,.$$
Now, let $v:=u-\frac{1}{u}$. Then,
$$\frac{u^2+1}{u^4-u^2+1}=\frac{1+\frac{1}{u^2}}{\left(u-\frac{1}{u}\right)^2+1}=\left(\frac{1}{v^2+1}\right)\,\frac{\text{d}v}{\text{d}u}\,.$$
Thus,
$$\begin{align}\int\,\frac{u^2+1}{u^4-u^2+1}\,\text{d}u&=\int\,\frac{1}{v^2+1}\,\text{d}v\\&=\text{arctan}(v)+C\\&=\text{arctan}\left(u-\frac{1}{u}\right)+C\,,\end{align}$$
where $C$ is a constant of integration.
Thus,
$$\int_0^{+\infty}\,\frac{u^2+1}{u^4-u^2+1}\,\text{d}u=\pi=\int_{-\infty}^0\,\frac{u^2+1}{u^4-u^2+1}\,\text{d}u\,.$$
On the other hand,
$$\int_{-\infty}^{+\infty}\,\frac{1}{u^2+1}\,\text{d}u=\Big.\big(\text{arctan}(u)\big)\Big|_{u=-\infty}^{u=+\infty}=\pi\,,$$
making
$$I=\frac{1}{3}\,(2\pi)-\frac{1}{3}\,\pi=\frac{\pi}{3}\,.$$
This result agrees with the computation made by Amir Alizadeh approximately six years ago.
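As a numerical cross-check (my addition), the even integrand $u^2/(u^6+1)$ can be integrated with a plain Simpson rule; truncating $2\int_0^\infty$ at $U=50$ costs less than $2/(3\cdot 50^3)\approx 5\times 10^{-6}$, since the integrand is below $u^{-4}$ there:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

f = lambda u: u * u / (u ** 6 + 1)
val = 2 * simpson(f, 0.0, 50.0, 20000)   # approximates the integral I
assert abs(val - math.pi / 3) < 1e-4     # agrees with pi/3
```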
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Tangent line with a slope of 1 to $x^2+2y^2=1$ I had a problem on the test I just took that I have never seen before.
$x^2 + 2y^2 = 1$, and I was supposed to find the tangent lines to that curve that have a slope of one.
I just couldn't figure out how to do it. I am not even sure if I did anything right, but I got the derivative as
$2x + 4y y' = 0$, and from there I did some algebra, but I don't think any of it was correct.
| Basically, you want to find $(x,y)$ such that
$$\frac{dy}{dx}=1$$
and
$$x^2+2y^2=1$$
You have correctly put
$$2x +4 y \frac{dy}{dx} =0$$
Now you solve for $y'$, and get
$$ - \frac{x}{{2y}} = \frac{{dy}}{{dx}}$$
So, since you're looking for $1 = \dfrac{{dy}}{{dx}}$, you need:
$$ - \frac{x}{{2y}} = 1$$
or
$$-x = 2y$$
Squaring the equation gives:
$$x^2 = 4y^2$$
$$\frac{x^2}{2} = 2y^2$$
Substituting in our original equation you have:
$$2y^2 + {x^2} = 1$$
$$\frac{{{x^2}}}{2} + {x^2} = 1$$
this yields $x=\pm \sqrt{\dfrac{2}{3}}$
These values actually produce $|y'|=1$, so you need to choose the $y$ coordinate appropriately (take $y=-x/2$, so that the slope is $+1$ rather than $-1$).
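A short numerical check of the two tangent points (my addition): pairing $x=\pm\sqrt{2/3}$ with $y=-x/2$ (from $-x/(2y)=1$) puts the point on the ellipse with slope exactly $1$, while the opposite pairing gives slope $-1$:

```python
import math

x = math.sqrt(2 / 3)
for xs in (x, -x):
    y = -xs / 2                                     # from -x/(2y) = 1
    assert abs(xs ** 2 + 2 * y ** 2 - 1) < 1e-12    # point lies on the curve
    slope = -xs / (2 * y)                           # implicit differentiation
    assert abs(slope - 1) < 1e-12
# the rejected pairing y = +x/2 gives slope -1 instead:
assert abs(-x / (2 * (x / 2)) - (-1)) < 1e-12
```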
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $ \sqrt{2-2\cos x}+\sqrt{10-6\cos x}=\sqrt{10-6 \cos 2x} $ $$ \sqrt{2-2\cos x}+\sqrt{10-6\cos x}=\sqrt{10-6 \cos 2x} $$
I tried squaring and/or using $1-\cos x=2\sin^2{\frac{x}2}$, but no luck.
| If $t = \cos(x)$, we have $\sqrt{2-2t} + \sqrt{10-6t} = \sqrt{16-12 t^2}$. Square both sides, isolate the term with square roots, square again, and factor. The result should be equivalent to $(t+1)(t-1)(3t-2)^2=0$. $t=-1$ does not work, but the other factors do.
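A quick numerical check of the three candidate roots (my addition), using $\cos 2x = 2\cos^2 x - 1$ so that the right-hand side becomes $\sqrt{16-12t^2}$ as in the answer:

```python
import math

def lhs(t):
    return math.sqrt(2 - 2 * t) + math.sqrt(10 - 6 * t)

def rhs(t):
    return math.sqrt(16 - 12 * t * t)   # 10 - 6cos(2x) with cos(2x) = 2t^2 - 1

assert abs(lhs(1) - rhs(1)) < 1e-12          # t = cos x = 1 works
assert abs(lhs(2 / 3) - rhs(2 / 3)) < 1e-12  # t = 2/3 works
assert abs(lhs(-1) - rhs(-1)) > 1            # t = -1 is extraneous (6 vs 2)
```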
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
How to define sparseness of a vector? I would like to construct a measure to calculate the sparseness of a vector of length $k$.
Let $X = [x_i]$ be a vector of length $k$ such that there exists an $x_i \neq 0$. Assume $x_i \geq 0$ for all $i$.
One such measure I came across is defined as $$\frac{\sqrt{k} - \frac{\|X\|_1}{{\|X\|_2}}} {\sqrt{k} -1}\;,$$ where $\|X\|_1$ is $L_1$ norm and $\|X\|_2$ is $L_2$ norm.
Here, $\operatorname{Sparseness}(X) = 0$ whenever the vector is dense (all components are equal and non-zero) and $\operatorname{Sparseness}(X) = 1$ whenever the vector is sparse (only one component is non zero).
That post only explains when the values $0$ and $1$ are achieved by the above-mentioned measure.
Is there any other function defining the sparseness of a vector?
| You could of course generalize your current measure
\begin{align}
S(X) = \frac{\frac{k^{(1/m)}}{k^{(1/n)}} -\frac{\|X\|_m}{\|X\|_n} } {\frac{k^{(1/m)}}{k^{(1/n)}}-1}
\end{align}
while preserving your properties you specified.
An interesting special case could be $m = 1, n \to \infty$, in which case the expression simplifies to
\begin{equation}
S(X) = \frac{k-\frac{\|X\|_1}{\|X\|_\infty}}{k-1}
\end{equation}
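The defining property of the measure from the question is easy to check in code (my addition; the sample vectors are arbitrary):

```python
import math

def sparseness(x):
    """The (sqrt(k) - L1/L2) / (sqrt(k) - 1) measure from the question, k >= 2."""
    k = len(x)
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return (math.sqrt(k) - l1 / l2) / (math.sqrt(k) - 1)

assert abs(sparseness([3, 3, 3, 3])) < 1e-12        # dense vector -> 0
assert abs(sparseness([0, 0, 7, 0]) - 1) < 1e-12    # one nonzero entry -> 1
s = sparseness([5, 1, 0.2, 0])
assert 0 < s < 1                                    # intermediate vectors fall between
```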
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 0
} |
Analytic in $\mathbb{C}$ implies $\left|\frac{f'(x)}{f(x)}\right|$ is bounded in $\mathbb{R}$? If $f(z)$ is an analytic function in the complex plane, $z=x+iy$, and $f(x)\neq 0$ for all $x\in \mathbb R$, does this imply that $\frac{f'(x)}{f(x)}$ is bounded on $\mathbb R$, i.e., that $\big|\frac{f'(x)}{f(x)}\big|\leq C$ for some $C>0$?
| The answer is negative.
For example $f(z)=\exp (z^2)$ is analytic and different from zero in the whole complex plane, and it has $f^\prime (z)=2z\ f(z)$. Hence for $x\in \mathbb{R}$ you get:
$$\left| \frac{f^\prime (x)}{f(x)}\right| =2|x|$$
which is not bounded from above on the real line.
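The counterexample can be confirmed numerically (my addition): a central difference of $\log|f|$ approximates the logarithmic derivative, and for $f(z)=\exp(z^2)$ it tracks $2x$ at every sampled point:

```python
import math

def log_deriv(f, x, h=1e-6):
    """Central difference of log|f| approximates f'(x)/f(x)."""
    return (math.log(abs(f(x + h))) - math.log(abs(f(x - h)))) / (2 * h)

f = lambda x: math.exp(x * x)
for x in [1.0, 5.0, 10.0]:
    assert abs(log_deriv(f, x) - 2 * x) < 1e-6   # grows like 2|x|, unbounded
```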
I'd like to remark that "having a bounded logarithmic derivative" implies an exponential growth/decay estimate for $f(x)$.
In fact, assume you can find a function $f(z)$ which satisfies your requirements, i.e. it is analytic in the whole plane, its restriction to the real line differs from zero everywhere and has bounded logarithmic derivative, i.e.:
$$\tag{BLD} \left| \frac{f^\prime (x)}{f(x)}\right| \leq C \qquad \text{, for }x\in \mathbb{R}$$
Assume for the time being also $f(x)>0$ for $x\in \mathbb{R}$ and $C>0$ (for, if $C=0$ then $f(x)$ is a constant); thus $f(x)$ satisfies the differential inequalities:
$$-C\ f(x)\leq f^\prime (x)\leq C\ f(x)$$
which imply the growth/decay estimates:
$$f(0)\ e^{-C|x|}\leq f(x)\leq f(0)\ e^{C|x|}\; .$$
If $f(x)<0$ then the previous estimates become:
$$f(0)\ e^{C|x|} \leq f(x)\leq f(0)\ e^{-C|x|}\; .$$
Therefore in any case your function $f(x)$ satisfies:
$$\tag{GDE} |f(0)|\ e^{-C|x|}\leq |f(x)|\leq |f(0)|\ e^{C|x|}\; .$$
Neverthless, I don't know if estimates (GDE) are equivalent to (BLD) in the case $f(x)$ is the restriction of an analytic function to the real line.
Certainly (GDE) is not equivalent to (BLD) for arbitrary real functions: for example, the function $f(x) = \exp (|x|\ \sin x^4)$ is of class $C^1(\mathbb{R})$ (at least) and satisfies (GDE) with $C=1$, but it does not satisfy (BLD), for:
$$f^\prime (x) = \operatorname{sign}(x)\ f(x)\ (4\ x^4\ \cos x^4 + \sin x^4)$$
hence:
$$\left| \frac{f^\prime (x)}{f(x)}\right| = \left|4\ x^4\ \cos x^4 + \sin x^4\right| $$
which is not bounded.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Unitary orbit of the Jordan matrices Let
$$
\mathcal{J}=\{A\in M_n(\mathbb{C}):\ A \text{ is a Jordan matrix}\}
$$
Then it is well known that the similarity orbit of $\mathcal{J}$ is all of $M_n(\mathbb{C})$.
What is the unitary orbit of $\mathcal{J}$? Is it dense?
It cannot be all of $M_n(\mathbb{C})$, because every matrix in $\mathcal{J}$ and its unitary conjugates have the property that eigenvectors corresponding to different eigenvalues are orthogonal to each other.
| It cannot be dense except in the trivial case $n=1$. The (real) dimension of $U(n)$ is $n^2$, while the dimension of $GL_n(\mathbb{C})$ is $2n^2$. Since $\mathcal{J}$ has dimension $n$ (it is a union of the diagonal matrices together with sets built from various Jordan blocks, which have smaller dimension since fewer eigenvalues are chosen), the $U(n)$ orbit through $\mathcal{J}$ has dimension at most $n^2+n$.
But $n^2 + n < 2n^2$ unless $n=n^2$, i.e., $n=0$ or $n=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Why does the result hold for PIDs but not for UFDs? Let $R$ be a subring of an integral domain $S$, and suppose $R$ is a PID. Then it follows that if $r\in R$ is a gcd of $r_1$ and $r_2$ in $R$, where $r_1$ and $r_2$ are not both zero, then $r$ is a gcd of $r_1$ and $r_2$ in $S$.
What I would like to know is why the same conclusion can fail if $R$ is only a UFD instead of a PID. I have been trying to think of counterexamples for $R$, for instance $\mathbb{Z}[X]$, etc., but to no avail. Are there any counterexamples to disprove the statement when $R$ is a UFD but not a PID?
| Hint $\rm\ gcd(2,x) = 1\:$ in $\rm\mathbb Z[x]\:$ vs. $2$ in $\rm\:\mathbb Z[x/2]$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Negative fractions - what's the difference? What's the difference between the following fractions:
$ \frac{-4}{-5}$
$ \frac{4}{-5}$
$ \frac{-4}{5}$
$ - \frac{4}{5}$
| Another way to think about this would be in terms of equivalence classes. If you are not familiar with this, it is pretty much how mathematicians say in the rationals that
$$\frac{1}{2} = \frac{2}{4} = \frac{4}{8}.$$
In fact one says that given two fractions
$$x = \frac{m}{n} \hspace{2mm} \text{and} \hspace{2mm} y= \frac{a}{b},$$
they are equal iff $mb - na = 0$. So in your case for example $\frac{-4}{5}$ and $-\frac{4}{5}$ are equal because
$$\frac{-4}{5} - \bigg((-1)\frac{4}{5}\bigg) = \frac{-4}{5} + \frac{4}{5} = \frac{-4 + 4}{5} = 0$$
recalling that $-\frac{4}{5} = (-1)\frac{4}{5}$. You can go on like this for all of them; for example, the first and second are not equal because
$$\frac{-4}{-5} - \frac{4}{-5} = \frac{4}{5} + \frac{4}{5} = \frac{8}{5} \neq 0.$$
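The cross-multiplication test, and Python's exact `Fraction` type, confirm these calculations (my addition):

```python
from fractions import Fraction

def eq(m, n, a, b):
    """m/n == a/b  iff  m*b - n*a == 0 (the cross-multiplication test)."""
    return m * b - n * a == 0

assert eq(4, -5, -4, 5) and eq(-4, 5, -4, 5)   # 4/-5 = -4/5 = -(4/5)
assert not eq(-4, -5, 4, -5)                   # -4/-5 = 4/5 is different
# Fraction normalises all of them:
assert Fraction(4, -5) == Fraction(-4, 5) == -Fraction(4, 5)
assert Fraction(-4, -5) == Fraction(4, 5)
```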
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
a question about sets Let $M,C,P,F,$ be nonempty sets satisfying the following conditions:
1. $M\subset C$;
2. $M\cap P\neq \emptyset$;
3. $C\cap F\neq \emptyset$;
4. $F\subset C\cup P$;
5. $P\cap C^{c}\neq \emptyset$.
Is it true that $F\subset M\cup P?$ I was told by a friend of mine that it is true.
I wasn't able to solve it. If I start by saying that if $x\in F$, then by (4) I get $x\in C$ or $x\in P$, and I got stuck. Then I tried another way: if $x\notin M\cup P$ then $x\notin M$ and $x\notin P$, but again, I don't know how to use all the hypotheses.
I would appreciate your help.
| It’s not necessarily true. Let $M=\{1\}$, $C=\{1,2\}$, $P=\{1,3\}$, $F=\{2\}$; all five conditions are satisfied, but $F\cap(M\cup P)=\varnothing$.
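The counterexample is small enough to verify line by line with Python sets (my addition):

```python
M, C, P, F = {1}, {1, 2}, {1, 3}, {2}

assert M < C                  # (1) M is a subset of C
assert M & P                  # (2) M and P intersect
assert C & F                  # (3) C and F intersect
assert F <= C | P             # (4) F is contained in C union P
assert P - C                  # (5) P meets the complement of C
assert not (F <= M | P)       # yet F is NOT contained in M union P
```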
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to solve $\lim_{t \to \infty} \int_1^2 \frac{\sin (tx)}{x^{2}(x-1)^{1/2}}\,dx$? I want to compute
$$\lim_{t \to \infty} \int_1^2 \frac{\sin (tx)}{x^{2}(x-1)^{1/2}}\,dx.
$$
The integrand has a singularity at $x=1$, so the integral is equal to the following limit:
$$\lim_{t \to \infty}\lim_{s \to 1^+} \int_s^2 \frac{\sin (tx)}{x^{2}(x-1)^{1/2}}dx, $$
and I use the substitution $tx= a$; then $t\,dx=da$:
$$\lim_{t \to \infty}t^{3/2}\lim_{s \to 1^+}\int_{st}^{2t} \frac{\sin (a)}{a^{2}(a-t)^{1/2}}da $$
How should I proceed with this integral?
| Do you know the Riemann-Lebesgue lemma?
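For intuition (my addition, independent of the hint): after substituting $x=1+u^2$, which absorbs the $(x-1)^{-1/2}$ singularity, the integral can be evaluated numerically with Simpson's rule, and its magnitude decays as $t$ grows, as the Riemann–Lebesgue lemma predicts:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def I(t, n=200000):
    # x = 1 + u^2, dx = 2u du turns the integrand into
    # 2 sin(t(1+u^2)) / (1+u^2)^2 on u in (0, 1), which is smooth.
    g = lambda u: 2 * math.sin(t * (1 + u * u)) / (1 + u * u) ** 2
    return simpson(g, 0.0, 1.0, n)

a1, a1000 = I(1), I(1000)
assert abs(a1) > 0.3      # the integral is of order 1 for small t ...
assert abs(a1000) < 0.15  # ... and tends to 0 as t grows
```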
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
De Rham cohomology of $S^n$ Can you find the mistake in my computation of $H^{k}(S^{n})$?
The sphere is a disjoint union of two spaces:
$$S^{n} = \mathbb{R}^{n}\sqcup\mathbb{R^{0}},$$
so
$$H^{k}(S^n) = H^{k}(\mathbb{R}^{n})\oplus H^{k}(\mathbb{R^{0}}).$$
In particular
$$H^{0}(S^{n}) = \mathbb{R}\oplus\mathbb{R}=\mathbb{R}^{2}$$
and
$$H^{k}(S^{n}) = 0,~~~k>0.$$
Where is the mistake? Thanks a lot!
| This should be a comment but is too long.
If your reasoning were correct, we could also do the following: write $S^1$ as the "disjoint union" of two open intervals and two points (by cutting out the north and south poles, for example). Then your idea would show that $H^0(S^1)=\mathbb R^4$. And you can cut it into more pieces...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Conjugacy classes -- how to generate them for a list to be sorted? In another thread I had brought up the notion of sorting a list of four randomly scrambled items.
It was mentioned that they can be broken down into 5 conjugacy classes:
(), (12), (123), (12)(34) and (1234)
Can anyone explain how these work, or whether there is a general way to list all conjugacy classes? For instance, what about a list of size 6?
| I write this answer only to make sure that OP realises the connection between the problem stated and the formulation. I think this however could be closed as exact duplicate.
The notion of sorting $n$ items you're talking about is formally captured by the permutations of $n$ symbols. The notion of conjugacy discussed in the answer corresponds to the action of a group on itself by conjugation.
Well, you are asking for the number of conjugacy classes in the symmetric group $S_n$ of degree $n$. Yes, there is a nice description.
I'll recall the main result, while letting you go through the details in a nearly identical answer $^\dagger$ I had written over here.
Main result:
The number of conjugacy classes in $S_n$ equals the number of partitions of $n$.
We'll give a way to list an exhaustive set or representatives for the conjugacy classes.
Write down all the additive partitions of $n$. To each partition, associate a representative as follows.
For each number appearing in the partition, attach with it a disjoint cycle of that length. The product of all such cycles represents a unique conjugacy class. It is best illustrated by an example for $4$:
$$\begin{align*}Id &\cong 1+1+1+1\\(1234)&\cong 4\\(12)(34) &\cong 2+2\\ (34) &\cong 1+1+2(\text{since (1) and (2) are omitted in this notation})\\(123)&\cong 3+1\end{align*}$$
$\dagger$ This answer of mine deals with exactly this question.
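The recipe above (one disjoint cycle per part of the partition) is easy to mechanise. Below is a small Python sketch — the helper names `partitions` and `representative` are my own, not from any library — that lists one representative per conjugacy class of $S_n$; for $n=4$ it produces the 5 classes mentioned in the question.

```python
def partitions(n, max_part=None):
    """Yield the additive partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def representative(partition):
    """Attach a disjoint cycle to each part, using the symbols 1..n."""
    rep, next_symbol = [], 1
    for length in partition:
        rep.append(tuple(range(next_symbol, next_symbol + length)))
        next_symbol += length
    return rep

for p in partitions(4):
    print(p, "->", representative(p))  # 5 partitions of 4, hence 5 classes
```

For a list of size 6, the same loop prints the 11 partitions of 6, i.e. 11 conjugacy classes.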
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Prove the map has a fixed point Assume $K$ is a compact metric space with metric $\rho$ and $A$ is a map from $K$ to $K$ such that $\rho (Ax,Ay) < \rho(x,y)$ for $x\neq y$. Prove that $A$ has a unique fixed point in $K$.
The uniqueness is easy. My problem is to show that a fixed point exists. $K$ is compact, so every sequence has a convergent subsequence. Construct a sequence $\{x_n\}$ by $x_{n+1}=Ax_{n}$; then $\{x_n\}$ has a convergent subsequence $\{ x_{n_k}\}$, but how to show there is a fixed point using $\rho (Ax,Ay) < \rho(x,y)$?
| You don't need to prove completeness or define any sequence. Define a nonnegative real function
$$ h(x) = \rho(x,f(x) ) $$
This is continuous, so its minimum is achieved at some point $x_0.$ If $h(x_0) >0,$ we see that
$$ h(f(x_0) ) = \rho\big( f(x_0), f(f(x_0 ))\big) < \rho( x_0, f(x_0)) = h(x_0) $$
Put together,
$$ h(f(x_0) ) < h(x_0) $$
Thus the assumption of a nonzero minimum of $h$ leads to a contradiction. Therefore the minimum is actually $0,$ so $h(x_0) = 0,$ so $f(x_0) = x_0 $
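As a sanity check (not part of the proof), one can watch this argument happen numerically. Below, $K=[0,1]$ with the usual metric and $f=\cos$ is an illustrative choice: it is a strictly distance-decreasing self-map of $[0,1]$, since $|\cos x-\cos y|\le\sin(1)\,|x-y|<|x-y|$ for $x\ne y$, and the grid minimizer of $h$ sits at the fixed point with minimum value (numerically) $0$.

```python
import math

# A strictly distance-decreasing self-map of the compact set K = [0, 1]:
# cos maps [0, 1] into [cos 1, 1], a subset of [0, 1].
f = math.cos

# h(x) = rho(x, f(x)); minimize it over a fine grid of K.
grid = [k / 100000 for k in range(100001)]
x0 = min(grid, key=lambda x: abs(x - f(x)))

print(x0, abs(x0 - f(x0)))  # the minimum of h is (numerically) 0
```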
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 3,
"answer_id": 2
} |
Differentiability in multivariable calculus Define function $f: \mathbb{R} ^3 \to \mathbb{R}$ as
\begin{equation}
f(x,y,z) = x^4 + y^4 + z^4 - 4xyz
\end{equation}
Show that $f$ is differentiable at the point $(1,1,1)$.
Solution:
I thought about using the good old
\begin{equation}
\lim _{\bf{h} \to \bf{0}} \frac{|f(\bf{a}+\bf{h}) - f(\bf{a}) - \nabla f(\bf{a}) \cdot \bf{h}|}{||\bf{h}||} = 0
\end{equation}
But that proved to be difficult so now I'm back to square one. Are there any alternative ways to evaluate differentiability at a point?
Thanks.
| You should check that all of the partial derivatives exist and are continuous. This will give you that the function itself is differentiable.
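For this particular polynomial both routes are easy to carry out by hand, and a quick numerical check (an illustration, not a proof) confirms them: the hand-computed partials are polynomials, hence continuous, $\nabla f(1,1,1)=(0,0,0)$, and the quotient in the limit definition shrinks with $\|h\|$.

```python
import math, random

def f(x, y, z):
    return x**4 + y**4 + z**4 - 4*x*y*z

# The partial derivatives, computed by hand (polynomials, hence continuous):
def grad(x, y, z):
    return (4*x**3 - 4*y*z, 4*y**3 - 4*x*z, 4*z**3 - 4*x*y)

a = (1.0, 1.0, 1.0)
g = grad(*a)            # equals (0, 0, 0) at (1, 1, 1)

# Check the defining limit: |f(a+h) - f(a) - grad f(a).h| / ||h|| -> 0.
random.seed(0)
for scale in (1e-2, 1e-3, 1e-4):
    h = tuple(random.uniform(-1, 1) * scale for _ in range(3))
    num = abs(f(*(ai + hi for ai, hi in zip(a, h))) - f(*a)
              - sum(gi * hi for gi, hi in zip(g, h)))
    norm = math.sqrt(sum(hi * hi for hi in h))
    print(scale, num / norm)   # shrinks proportionally to ||h||
```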
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
The simplest way of proving that $|\mathcal{P}(\mathbb{N})| = |\mathbb{R}| = c$ What is the simplest way of proving (to a non-mathematician) that the power set of the set of natural numbers has the same cardinality as the set of the real numbers, i.e. how to construct a bijection from $\mathcal{P}(\mathbb{N})$ to $\mathbb{R}$?
| I will prove every reasonable "equinumerosity" involved. There is a key lemma (Schröder–Bernstein theorem) that states that $|A|=|B|$ iff there is an injective map from $A$ to $B$ and an injective map from $B$ to $A$.
*$|\mathbb{R}|=|(0,1)|$ is trivial: $f(x)=\frac{1}{\pi}\left(\frac{\pi}{2}+\arctan(x)\right)$ is an increasing function that maps $\mathbb{R}$ to $(0,1)$;
*$|(0,1)|=|[0,1]|$ is less trivial: take an enumeration of the rational numbers in $[0,1]$ such that $q_0=0,q_1=1$, then map $q_n$ to $q_{n+2}$ and fix every irrational point (Hilbert's hotel);
*$|(0,1)|=|2^{\mathbb{N}}|$ is tricky: every number in $(0,1)$ has a unique canonical binary representation, where canonical means that no tails of $11111\ldots$ are allowed: that gives an injective map from $(0,1)$ to $2^\mathbb{N}$. On the other hand, if $\{a_n\}_{n\geq 0}$ is a sequence such that $a_i\in\{0,1\}$, the map
$$ \{a_n\}_{n\geq 0} \mapsto \sum_{n\geq 0}\frac{2a_n+1}{5^{n+1}} $$
sends $2^{\mathbb{N}}$ into a Cantor subset of $(0,1)$ in an injective way.
*$|\mathcal{P}(\mathbb{N})|=|2^{\mathbb{N}}|\,$ is trivial again: every subset $A\subseteq \mathbb{N}$ can be associated with the sequence $\{a_n\}_{n\geq 0}$ in which $a_n=1$ if $n\in A$ and $a_n=0$ otherwise.
By $(4)+(3)+(2)+(1)$, $$\left|\mathcal{P}(\mathbb{N})\right|=\left|\mathbb{R}\right|$$
as wanted. The trickiest part comes from noticing that this approach shows that a bijection exists without an explicit construction: the "i.e." in OP's question is not really an "i.e.", unless we are constructivists.
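The injection in step 3 can be probed computationally. The sketch below (exact rational arithmetic, finite prefixes only) evaluates the map $\{a_n\}\mapsto\sum(2a_n+1)/5^{n+1}$ on all 256 length-8 prefixes and checks that they land on 256 distinct points of $(0,1)$.

```python
from fractions import Fraction
from itertools import product

def embed(bits):
    """Finite-prefix version of the map {a_n} -> sum (2 a_n + 1) / 5^(n+1)."""
    return sum(Fraction(2 * b + 1, 5**(n + 1)) for n, b in enumerate(bits))

values = {embed(bits) for bits in product((0, 1), repeat=8)}
print(len(values))  # 256: distinct prefixes land on distinct points of (0,1)
```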
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 6,
"answer_id": 5
} |
How big is the integral $\int_0^\infty \frac{x\exp(-x^2/4)\cosh(x)}{\sqrt{\cosh(x)-1}} dx$ I can't seem to get Maple to approximate the integral
$$\int_0^\infty \frac{x\exp(-x^2/4)\cosh(x)}{\sqrt{\cosh(x)-1}} dx.$$
Could somebody tell me why?
This integral "should be" well-defined. (My reasons are not mathematical. The book I'm reading suggests that this integral makes sense.) Do note that the denominator of the integrand vanishes at $x=0$, so the integrand could misbehave there, but this should not be a problem...
Can we give an upper bound for this integral?
| Using the following inequality:
$$
\frac{x}{\sqrt{\cosh{x}-1}} = \sqrt{2} \frac{x/2}{\sinh(x/2)} \leqslant \sqrt{2}
$$
It is easy to work out the upper bound:
$$
\int_0^\infty \frac{x\exp(-x^2/4)\cosh(x)}{\sqrt{\cosh(x)-1}} \mathrm{d}x <
\sqrt{2} \int_0^\infty \exp(-x^2/4)\cosh(x) \mathrm{d}x = \sqrt{2 \pi} \mathrm{e} \approx 6.8
$$
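A rough numerical check of this bound (an illustration; the grid and the cutoff at $x=30$ are ad hoc choices): rewriting the integrand via $\sqrt{\cosh x-1}=\sqrt2\sinh(x/2)$ removes the apparent singularity at $0$, and a midpoint rule gives a value around $5.5$, comfortably below $\sqrt{2\pi}\,e\approx6.8$.

```python
import math

def integrand(x):
    # x / sqrt(cosh x - 1) = sqrt(2) * (x/2) / sinh(x/2) -> sqrt(2) as x -> 0,
    # so the integrand stays bounded near the origin.
    return (math.sqrt(2) * (x / 2) / math.sinh(x / 2)
            * math.exp(-x**2 / 4) * math.cosh(x))

n, upper = 60000, 30.0          # the Gaussian factor kills the tail beyond 30
h = upper / n
approx = h * sum(integrand((k + 0.5) * h) for k in range(n))

bound = math.sqrt(2 * math.pi) * math.e
print(approx, "<", bound)
```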
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Implicit use of the Implicit Function Theorem when finding tangent lines to polar curves. Recently I found myself having to teach students how to find the slope of a tangent line to a curve in $\mathbb R^2$ given in polar coordinates by the equation $r = f(\theta)$. The students' calculus book instructs them that surely the slope of the tangent line must be given by $\frac{dy}{dx}$ and uses the chain rule to calculate $$\frac{dy}{dx} = \frac{dy/d\theta}{dx/d\theta}$$ Then, using the fact that $x = r\cos\theta$ and $y = r\sin\theta$ they are therefore able to find a formula for $\frac{dy}{dx}$ in terms of $r$ and $\theta$. There is however a problem with this line of reasoning, namely that there is no compelling reason for $y$ to be defined even locally as a function of $x$. Knowledge of the implicit function theorem allows one to formulate the hypothesis necessary to make the above use of the chain rule correct. However it would be nice to justify the use of the chain rule here using only methods available to a first year calculus student. My question is therefore
Is there a nice way (intuitive or rigorous) of explaining when and why $y$ is a differentiable function of $x$ in this case, using only methods available to a first year calculus student?
Yes, it is possible to look through the proof of the implicit function theorem and simplify it in this case, but I am especially interested in hearing the thoughts of experienced teachers, so I hope I am justified in asking the question.
| I can think of two ways to do it. Whether they are adequate for first year calculus students depends on the syllabus.
The first is to solve for $\theta$ as a function of $x$. If you write $x-f(\theta)\cos\theta=0$, you use the implicit function theorem to do it. But if you write it as $x=f(\theta)\cos\theta:=g(\theta)$, it is just the inverse function theorem for real functions of one real variable, which is included in the syllabus of many first year calculus courses. If it is not included, you can justify it as follows: if $g'(\theta_0)\ne0$, then $g$ is strictly monotone on a neighborhood of $\theta_0$, so there is an inverse; the inverse is also strictly monotone and is differentiable.
The second is writing the curve in parametric form
$$\begin{align*}
x&=f(\theta)\cos\theta,\\
y&=f(\theta)\sin\theta.
\end{align*}$$
Then $(x',y')$ is a vector in the direction of the tangent (if it is not $(0,0)$). From it you can compute the slope of the tangent, and find when that tangent is vertical.
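Here is a small numerical sketch of the second (parametric) approach, using the cardioid $r=1+\cos\theta$ as an illustrative curve: the slope $y'/x'$ from the parametric form agrees with a finite difference taken along the curve itself, with no appeal to $y$ being a function of $x$.

```python
import math

# Parametric slope for r = f(theta): with x = f cos(t), y = f sin(t),
# dy/dx = (f' sin t + f cos t) / (f' cos t - f sin t) wherever (x', y') != (0, 0).
def slope(f, fp, t):
    xp = fp(t) * math.cos(t) - f(t) * math.sin(t)
    yp = fp(t) * math.sin(t) + f(t) * math.cos(t)
    return yp / xp

# Illustrative curve: the cardioid r = 1 + cos(theta).
f = lambda t: 1 + math.cos(t)
fp = lambda t: -math.sin(t)

t0 = math.pi / 2
m = slope(f, fp, t0)

# Cross-check with a finite difference along the curve itself.
h = 1e-6
dx = f(t0 + h) * math.cos(t0 + h) - f(t0 - h) * math.cos(t0 - h)
dy = f(t0 + h) * math.sin(t0 + h) - f(t0 - h) * math.sin(t0 - h)
print(m, dy / dx)  # both close to 1 at theta = pi/2
```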
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Number of integer solutions to $3i^2 + 2j^2 = 77 \cdot 6^{2012}$ here is another problem I did not manage to solve in the contest I mentioned in my previous question.
Determine the number of integer solutions $(i, j)$ of the equation:
$3i^2 + 2j^2 = 77 \cdot 6^{2012}$.
Applying logarithms is not useful, since on the left hand side we have a sum; I also tried some algebraic manipulations that led me to nothing useful. Is there a simple solution to the problem?
Thank you,
rubik
| Observe that $i$ has to be even, and then so does $j$, because $3i^2$ and the right-hand side are both divisible by $4$.
Write $i=2i'$, $j=2j'$; the equation becomes $3i'^2+2j'^2=77 \cdot 3^{2012} \cdot 2^{2010}$. The same argument applies $1006$ times in total, and we are left with $3i^2+2j^2=77 \cdot 3^{2012}$.
Now do the same using divisibility by $3$; it works the same way. Finally, count the solutions of $3i^2+2j^2=77$.
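Each descent step $(i,j)\leftrightarrow(2i',2j')$ (and likewise for the factor $3$) is a bijection between solution sets, so — assuming the descent is carried out as sketched — the original equation has exactly as many integer solutions as the base case $3i^2+2j^2=77$, which a brute-force search settles:

```python
# Brute-force the base case reached after the descent: 3 i^2 + 2 j^2 = 77.
solutions = [(i, j) for i in range(-6, 7) for j in range(-7, 8)
             if 3 * i * i + 2 * j * j == 77]
print(sorted(solutions))  # (±3, ±5) and (±5, ±1): eight solutions
```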
Edit: It's my first posting here, so hello to all from France !
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
} |
How to show that functions of this type are strictly decreasing Let $f:[0,\infty)\to \mathbf{R}$ be defined by $$ f(x) = \frac{1}{x+1} \int_x^\infty g(r,x) dr,$$ where $g(r,x)$ is a "nice" function and all of this makes sense.
Suppose that I want to show that $f(x)$ is strictly decreasing. In my particular problem, this should be the case and I'm trying to prove it.
One way would be to show that the derivative is negative.
How do I differentiate $f$?
Are there other ways to prove that $f$ is strictly decreasing (assuming it is)?
| There are three ways in which the function $f$ depends on $x$, and the derivative contains one term for each of them:
$$
\begin{eqnarray}
f'(x)
&=&
-\frac1{(x+1)^2}\int_x^\infty g(r,x)\mathrm dr
\\
&&-\frac1{x+1}g(x,x)
\\
&&
+\frac1{x+1}\int_x^\infty\frac{\partial}{\partial x}g(r,x)\mathrm dr\;.
\end{eqnarray}
$$
[Edit in response to the comment:]
I'll assume that you meant $g(r,x) = \exp(-r^2) (r-x)^{-1/2}$, since the version with a $t$ in the denominator wouldn't cause problems at $r=x$.
In such a case, you could obtain a result by replacing the lower bound of the integral by $x+\epsilon$; then two of the terms would go to infinity as $\epsilon\to0$, and you could cancel them before taking that limit. However, a simpler approach would be to substitute:
$$\frac1{x+1}\int_x^\infty \frac{\mathrm e^{-r^2}}{\sqrt{r-x}}\mathrm dr=\frac1{x+1}\int_0^\infty\frac{\mathrm e^{-(u+x)^2}}{\sqrt u}\mathrm du\;.$$
Now the bound doesn't depend on $x$, and the integral of the derivative of the integrand with respect to $x$ is well-defined. Of course you can always make this substitution, but unless $g(r,x)$ contains $r-x$, it just rearranges the terms without reducing the work.
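To see the three-term formula in action, here is a hypothetical concrete case with $g(r,x)=e^{-rx}$, chosen so that every integral is elementary (note the minus sign carried by the moving lower limit, $-g(x,x)/(x+1)$); the formula matches a central finite difference of the function:

```python
import math

# With g(r, x) = exp(-r x):
# F(x) = (1/(x+1)) * Integral_x^inf exp(-r x) dr = exp(-x^2) / (x (x+1)).
def F(x):
    return math.exp(-x**2) / (x * (x + 1))

def F_prime_by_formula(x):
    I = math.exp(-x**2) / x                        # Integral_x^inf g dr
    I_dx = -math.exp(-x**2) * (1 + 1 / x**2)       # Integral_x^inf dg/dx dr
    return (-I / (x + 1)**2                        # differentiated prefactor
            - math.exp(-x**2) / (x + 1)            # moving lower limit: -g(x,x)/(x+1)
            + I_dx / (x + 1))                      # differentiation under the integral

h, x = 1e-6, 1.0
fd = (F(x + h) - F(x - h)) / (2 * h)
print(F_prime_by_formula(x), fd)  # both equal -7/(4e) at x = 1
```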
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
why the two ways of adding elements of array yield different results? Given the array, $$\begin{matrix}
-1 & 0 & 0 & 0 & \ldots \\
1\over2 & -1 & 0 & 0 & \ldots \\
1\over4 & 1\over2 & -1 & 0 & \ldots \\
1\over8 & 1\over4 & 1\over2 & -1 & \ldots \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
\end{matrix}$$ Here $\sum_i\sum_ja_{ij} = -2$, while $\sum_j\sum_ia_{ij} = 0$. Why are the two sums different?
| I assume you think that the two should be the same because you are adding all the same elements, simply ordering them differently. While it is true that rearranging an absolutely convergent series will give you the same result, no matter how you order the elements $a_{ij}$ in your double series (in order to make it a single series) you will have a conditionally convergent series, which can be rearranged to converge to any real number and even diverge. This fact is known as the Riemann rearrangement theorem, and is very cool.
Edit: As Robert Israel pointed out, you can't actually make this into a conditionally convergent single series, since the terms do not tend to $0$ (there are infinitely many $-1$'s). So in this case the failure of rearrangement to preserve the value of the series is even stronger.
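One can make the discrepancy concrete. Any finite truncation of the array has equal row-first and column-first sums, so the effect only appears when the inner sums are the full infinite series; below they are evaluated in closed form (each infinite row sums to $-1/2^{i-1}$, each infinite column to $0$):

```python
from fractions import Fraction

N = 60  # truncation depth for the *outer* sum; the inner sums are exact

# Row i (1-indexed) is 1/2^(i-1), ..., 1/4, 1/2, -1, 0, 0, ...;
# its full infinite sum is the finite quantity below, equal to -1/2^(i-1).
def row_sum(i):
    return sum(Fraction(1, 2**k) for k in range(1, i)) - 1

# Column j is -1, 1/2, 1/4, ... reading downward; as an infinite series it
# sums to -1 + 1 = 0 exactly, independently of j.
col_sum = 0

rows_then_cols = sum(row_sum(i) for i in range(1, N + 1))
print(float(rows_then_cols), col_sum)  # approaches -2, versus 0
```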
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to come up with the gamma function? It always puzzles me, how the Gamma function's inventor came up with its definition
$$\Gamma(x+1)=\int_0^1(-\ln t)^x\;\mathrm dt=\int_0^\infty t^xe^{-t}\;\mathrm dt$$
Is there a nice derivation of this generalization of the factorial?
| In Leonhard Euler's Integral: A Historical Profile of the Gamma Function: In Memoriam: Milton Abramowitz by Philip J. Davis in The American Mathematical Monthly, Dec. 1959: apparently Euler, experimenting
with infinite products of numbers, chanced to notice that if n is a positive integer,
$$\small n!=\Bigg [ \left(\frac 2 1 \right)^n \frac1{n+1}\Bigg ]\,\Bigg [ \left(\frac 3 2 \right)^n \frac2{n+2}\Bigg ]\,\Bigg [ \left(\frac 4 3 \right)^n \frac3{n+3}\Bigg ]\cdots$$
and succeeded in transforming this infinite product into an integral, extending the factorial beyond the integers, upon noticing that for certain values the infinite product yielded $\pi,$ suggesting the area of a circle.
But there is a really neat intuition already expressed in one of the answers, and beautifully presented by Robert Andrew Martin here, simply expanding the integral part in the integration by parts of a polynomial modulated by an exponential. In essence, the counterpart of the factorials in Taylor series.
For instance for $x^4$ (leaving constant of integration out):
$$\begin{align}\small
\int x^4\; e^{-x} \; dx &\small= -x^4\;e^{-x} +\int 4\,x^3\;e^{-x}\;dx\\
&\small = -x^4 \; e^{-x}-4\,x^3\;e^{-x} +\int 4\cdot 3\,x^2\;e^{-x}\;dx\\
&\small = -x^4 \; e^{-x} -4\,x^3\;e^{-x} -4\cdot 3\,x^2\;e^{-x} +\int 4\cdot 3\cdot 2\,x\;e^{-x}\;dx\\
&\small =-x^4 \; e^{-x} -4\,x^3\;e^{-x} -4\cdot 3\,x^2\;e^{-x} -4\cdot 3\cdot2\,x\;e^{-x} - \underbrace{4\cdot 3\cdot 2\cdot 1}_{4!}\;e^{-x}
\end{align}$$
Generalizing and integrating from $0$ to $\infty:$
$$\small\int_0^\infty x^n\; e^{-x} \; dx=-x^n \; e^{-x} -n\,x^{n-1}\;e^{-x} \cdots - \underbrace{n\cdot (n-1)\cdots 3\cdot 2\cdot 1}_{n!}\;e^{-x}\;\;\Bigr|_{x=0}^\infty=n!$$
which can immediately be extended beyond integers as $\displaystyle \small x! = \int_0^\infty t^x\; e^{-t} \; dt$
essentially the gamma function, except for the accepted slightly different definition: $$\Gamma(x)=\int_0^\infty t^{x-1}\; e^{-t} \; dt$$
that makes $\small (x-1)!=\Gamma(x).$
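The integration-by-parts identity is easy to confirm numerically (an illustration with an ad hoc midpoint rule; the cutoff at $x=50$ is harmless because of the $e^{-x}$ factor):

```python
import math

# Integral_0^inf x^n e^(-x) dx should equal n!; midpoint rule, here n = 4.
def factorial_integral(n, upper=50.0, steps=200000):
    h = upper / steps
    return h * sum(((k + 0.5) * h)**n * math.exp(-(k + 0.5) * h)
                   for k in range(steps))

print(factorial_integral(4))   # ~ 24 = 4!
print(math.gamma(5))           # Gamma(5) = 4! under the shifted convention
```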
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 4,
"answer_id": 3
} |
Is my proof correct: if $n$ is odd then $n^2$ is odd?
Prove that for every integer $n,$ if $n$ is odd then $n^2$ is odd.
I wonder whether my answer to the question above is correct. Hope that someone can help me with this.
Using contrapositive, suppose $n^2$ is not odd, hence even. Then $n^2 = 2a$ for some integer $a$, and
$$n = 2(\frac{a}{n})$$ where $\frac{a}{n}$ is an integer. Hence $n$ is even.
| Two bits:
Firstly, to clean your proof up, you might instead go like this. If $n^2 = 2a$, then in particular $2 \mid n^2 = n \cdot n$. $2$ is a prime, and thus it divides at least one of the two (equal) factors; hence $2 \mid n$. So $n$ is even.
Secondly, what if we didn't use contrapositive?
$m$ is odd means $m = 2k + 1$ for some $k$. Then $m^2 = 4k^2 + 4k + 1 = 2(2k^2+2k)+1$, which is odd.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 2
} |
Is $BC([0,1))$ (the space of bounded real-valued continuous functions) separable? Is $BC([0,1))$ a subset of $BC([0,\infty))$? It is easy to prove the non-separability of $BC([0,\infty))$ and the separability of $C([0,1])$. It seems to me we can argue from the fact that any bounded continuous function on $[0,\infty)$ restricts to an element of $BC([0,1))$ to somehow show $BC([0,1))$ is not separable.
| Consider the following, much simpler, construction:
For each binary sequence $a \in \{0,1\}^{\mathbb{N}}$ define a bounded continuous function $f_{a}(x)$ such that $$f_{a}\left(\frac{1}{k}\right) = \begin{cases}1, \hspace{0.2cm} \text{if} \hspace{0.2cm} a_k = 1, \\
0, \hspace{0.2cm} \text{if}\hspace{0.2cm} a_k=0 \end{cases}.$$
At the other points of $(0,1)$ define $f_a$ to be piecewise linear. More precisely, on the intervals $\left(\frac{1}{k+1}, \frac{1}{k}\right)$ define $f_a$ to be the linear function through the points $$\left(\frac{1}{k},f_a \left(\frac{1}{k}\right)\right),\left(\frac{1}{k+1},f_a \left(\frac{1}{k+1}\right)\right).$$
These saw-tooth functions are obviously continuous and bounded on the open interval $(0,1)$.
Thus we have constructed an uncountable family $\{f_{a}(x)\}_{a \in \{0,1\}^{\mathbb N}} \subset \operatorname{BC}(0,1).$
For $a \ne b \in \{0,1\}^\mathbb N$ we get $f_a,f_b \in \operatorname{BC}(0,1)$ for which it is trivial that
$$\|f_a - f_b \|_\infty =\sup\{|f_a(x)-f_b(x)| : x \in (0,1)\}=1.$$
Now take the balls in $\operatorname{BC}(0,1)$ centered at these functions with radius $1/2$: $\left\lbrace B_{\frac{1}{2}}(f_{a})\right\rbrace_{a \in \{0,1\}^{\mathbb N}}$. This is an uncountable family of pairwise disjoint balls, so the whole space $\operatorname{BC}(0,1)$ cannot be separable: a countable dense set would have to contain a distinct point in each ball, giving an injective map from an uncountable set into a countable one, which is absurd!
I'd like to point out that this is an isometric embedding of, so to speak, the minimal non-separable part of $\ell_\infty$, i.e. $\mathcal{F} : \{0,1\}^{\mathbb{N}} \hookrightarrow \operatorname{BC}(0,1), \hspace{0.2cm} \mathcal{F}(a):=f_a.$
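A tiny sketch of the separation property, checking only the node values $f_a(1/k)$ (which is enough: both functions take values in $[0,1]$, so the sup over the whole interval is attained at a node where the sequences differ):

```python
# Node values of the piecewise-linear f_a at x = 1/k (finite 0/1 prefixes).
def f_at_nodes(a):
    return {1.0 / k: float(a[k - 2]) for k in range(2, len(a) + 2)}

a = (1, 0) * 5   # the sequence 1,0,1,0,...
b = (0, 1) * 5   # differs from a at every index

fa, fb = f_at_nodes(a), f_at_nodes(b)
dist = max(abs(fa[x] - fb[x]) for x in fa)
print(dist)  # 1.0: uniform distance 1 between any two distinct f_a, f_b
```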
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Cauchy Sequence that Does Not Converge What are some good examples of sequences which are Cauchy, but do not converge?
I want an example of such a sequence in the metric space $X = \mathbb{Q}$, with $d(x, y) = |x - y|$. And preferably, no use of series.
| You take any irrational number, say $\sqrt2$, and you consider its decimal expansion,
$$
\sqrt2=1.4142\ldots
$$
Then you define $x_1=1$, $x_2=1.4$, $x_3=1.41$, $x_4=1.414$, etc.
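This sequence is easy to realise exactly in $\mathbb Q$ (a sketch; `x(n)` below is the $n$-digit decimal truncation, computed with integer square roots): the differences are Cauchy-small, yet every term's square stays strictly below $2$, so no rational limit exists.

```python
from fractions import Fraction
from math import isqrt

def x(n):
    """The n-digit decimal truncation of sqrt(2), as an exact rational."""
    return Fraction(isqrt(2 * 10**(2 * n)), 10**n)

# Cauchy: |x_m - x_n| <= 10^(-min(m, n)), since both truncations sit
# within 10^(-min(m, n)) below sqrt(2).
for m, n in [(5, 9), (10, 20), (15, 40)]:
    assert abs(x(m) - x(n)) <= Fraction(1, 10**min(m, n))

print([float(x(n)) for n in (1, 3, 6)])  # 1.4, 1.414, 1.414213
```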
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 11,
"answer_id": 9
} |
Modular exponentiation by repeated squaring (and peasant multiplication) I came upon an interesting way to relatively quickly compute modular exponentiation with large numbers. However, I do not fully understand it and was hoping for a better explanation.
The method basically states that if you have $x^y \bmod N$, it can be computed by repeatedly squaring modulo $N$:
$$x \bmod N \rightarrow x^2 \bmod N \rightarrow x^4 \bmod N \rightarrow x^8 \bmod N \rightarrow \ldots \rightarrow x^{2^{\lfloor \log_2 y \rfloor}} \bmod N$$
This is supposed to calculate in $\mathcal{O}(\log_2 N)$ time.
An example is given:
$$x^{25} = x^{11001_2} = x^{10000_2} * x^{1000_2} * x^{1_2} = x^{16} * x^8 * x^1$$
I don't really understand how this is faster. Aren't you just doing the same thing? You're just breaking down the exponents into representations of base $2$. Also where does $x^{2^{\lfloor \log_2 y \rfloor}}$ come from? If you're breaking it down into exponents of two where does a logarithm come from? (Aren't logs representations of numbers in exponents of $10$?)
I'm sorry if this appears to be a really stupid question - I have minimal knowledge in math. Also, I wasn't sure if this belongs here or on stack exchange, so give me the word and I'll move it.
| Explaining where the log comes from and how this is faster is a good question and it's the same whether you multiply the usual way or mod(N). But faster than what? The issue is how many multiplications and how many additions are required, with the assumption that multiplications are inherently slower than addition, a lot slower.
The usual way to compute $x^y$ for integer y is simply to multiply x by x, then x by the result, and so on, y times. Obviously this requires $y-1$ multiplications and no additions, 24 in your example.
The method of repeated squaring, on the other hand, uses the fact that any exponent $y$ can be written as the sum of powers of two: 1, 2, 4, 8, 16 and so on, which is the basis of the binary number system. Your breakdown of 25 into 16 + 8 + 1 is a good example, and the way it comes from the binary representation of 25 should be sufficient to convey the process to anyone who understands binary representation.
Then, ask yourself what is the most such powers of two that could be required. Clearly, it's the number of binary places in the representation of y, which is $\log_2(y)$, or more precisely $\lceil \log_2(y) \rceil$, ($\lceil x \rceil$ is the ceil or next higher or equal integer function). So the $\log$ here is base 2, not base 10 (but those only differ by a constant factor, namely $log_{10}(2) \approx .301$), which really tells you how many doublings are needed to go from 1 to $y$. In your example, $log_2(25)$ is about 4.64 ($\log_{10}(25)/.301$), so the most powers of two needed is 5.
Last question is, how many arithmetic operations are needed to carry out this method? To get the 5 powers $x^1, x^2, x^4, x^8, x^{16}$ in your example by repeated squaring you need only 4 multiplications ($x^2 = x \times x, x^4 = x^2 \times x^2, x^8 = x^4 \times x^4, x^{16}=x^8 \times x^8$). Of course, you also need some additions, but they are computationally "cheap", and I'll leave it to you to figure how many of them might be required.
Finally you need to multiply together the powers -- $x^{16} \times x^8 \times x^1$ in this case, which might involve yet one more set of up to 4 or $\lceil \log_2(y)\rceil-1$ multiplications.
So, the general answer is you will need $2 \times\lfloor \log_2(y)\rfloor$ multiplications rather than $y-1$ the "slow" way, or 8 versus 24 for your example -- a big time savings, even bigger for longer numbers, especially if it in the innermost loop of a program and gets done zillions of times.
Bill Dubuque's answer is really this method written out in a special polynomial form which accomplishes the required multiplications and additions efficiently and elegantly.
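For completeness, here is the square-and-multiply loop in code (a generic sketch, equivalent to Python's built-in three-argument `pow`): it performs roughly $\log_2 y$ squarings plus at most as many extra multiplications, matching the $2\times\lfloor\log_2(y)\rfloor$ count above.

```python
def power_mod(x, y, N):
    """Right-to-left binary (square-and-multiply) exponentiation mod N."""
    result = 1
    base = x % N
    while y > 0:
        if y & 1:                  # current binary digit of y is 1
            result = (result * base) % N
        base = (base * base) % N   # x -> x^2 -> x^4 -> ... (mod N)
        y >>= 1                    # about log2(y) iterations in total
    return result

print(power_mod(3, 25, 1000), pow(3, 25, 1000))  # both give 443
```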
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
How to check convexity? How can I know the function $$f(x,y)=\frac{y^2}{xy+1}$$ with $x>0$,$y>0$ is convex or not?
| Consider $y=x$; then we have $\displaystyle g(x)=\frac{x^2}{x^2+1}=1-\frac 1{x^2+1}$.
The second derivative of this is $g''(x)=\frac{2-6x^2}{(1+x^2)^3}$,
which changes sign at $x=\frac 1{\sqrt{3}}$, so that $g$ is convex on $(0,\frac 1{\sqrt{3}})$ and concave on $(\frac 1{\sqrt{3}},\infty)$.
Your function is clearly neither convex nor concave on $(\mathbb{R^{+*}})^2$, but you could search more restricted sets if needed...
Here is a picture (viewed from below) of your function: it is convex near $y=0$ and concave when $y$ becomes larger, at least in the $x=y$ direction; in the $x=-y$ direction it looks convex.
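Since convexity of $f$ would force convexity of its restriction to every line, the sign change of $g''$ along the diagonal already certifies that $f$ is not convex; a one-line numeric check:

```python
# g(x) = x^2/(x^2+1) is f restricted to the line y = x; its second derivative
# changes sign at x = 1/sqrt(3), so f cannot be convex on the whole quadrant.
def g2(x):
    return (2 - 6 * x**2) / (1 + x**2)**3

a = 1 / 3**0.5
print(g2(0.9 * a), g2(1.1 * a))  # positive, then negative
```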
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 3,
"answer_id": 1
} |
How to prove $ \phi(n) = n/2$ iff $n = 2^k$? How can I prove this statement ? $ \phi(n) = n/2$ iff $n = 2^k $
I'm thinking n can be decomposed into its prime factors, then I can use multiplicative property of the euler phi function to get the $\phi(n) = \phi(p_1)\cdots\phi(p_n) $. Then use the property $ \phi(p) = p - 1$. But I'm not sure if that's the proper approach for this question.
| Suppose $n = 2^{k}$ for some $k \geq 0$. Given that $\varphi(n) = n \prod_{p \mid n}(1 - \frac{1}{p})$, it follows that $\varphi(2^{k}) = 2^{k} (1 - \frac{1}{2}) = \frac{n}{2}$. Now assume the converse, that $n = 2\varphi(n)$ for some positive integer $n$. Then $n = 2 n \prod_{p \mid n}(1 - \frac{1}{p})$ implies $2 \prod_{p \mid n}(p - 1) = \prod_{p \mid n} p$. Since the left side is even, so must be the right side. This implies that $2$ divides $n$. Moreover, $\prod_{2 < p \mid n}(p - 1) = \prod_{2 < p \mid n} p$ is impossible to satisfy, so $n = 2^{k}$ for some $k$.
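The equivalence is cheap to verify on a range of integers (a sketch with a hand-rolled totient via trial division):

```python
def phi(n):
    """Euler's totient via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        result -= result // m
    return result

powers_of_two = {2**k for k in range(1, 11)}
satisfying = {n for n in range(2, 1025) if 2 * phi(n) == n}
print(satisfying == powers_of_two)  # True
```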
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
A smooth function f satisfies $\left|\operatorname{ grad}\ f \right|=1$, then the integral curves of $\operatorname{grad}\ f$ are geodesics $M$ is a Riemannian manifold; if a smooth function $f$ satisfies $\left| \operatorname{grad}\ f \right|=1,$ then prove the integral curves of $\operatorname{grad}\ f$ are geodesics.
| I'll use $\nabla$ for the gradient.
If $|\nabla f| = 1$, we have that $g(\nabla f,\nabla f) = 1$ where $g$ is the metric. Differentiating this identity along an arbitrary vector field $X$ you have
$$ 0 = X(1) = X\left( g(\nabla f,\nabla f)\right) = 2 g(\nabla_X \nabla f, \nabla f) = 2 g(\nabla_{\nabla f} \nabla f, X) $$
The third equality used that $\nabla g = 0$ for the Levi-Civita connection of a Riemannian metric, and the fourth equality uses that the Hessian $(X,Y)\mapsto g(\nabla_X \nabla f, Y)$ of a scalar function is symmetric. Since $X$ is arbitrary, it follows that $\nabla_{\nabla f}\nabla f = 0$.
Since $\nabla_{\nabla f} \nabla f = 0$, we have that the vector field $\nabla f$ is geodesic, and hence the integral curves are geodesic curves.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
} |
Martingales, finite expectation I have some uncertainties about one of the requirements for a martingale, i.e. showing that $\mathbb{E}|X_n|<\infty,n=0,1,\dots$ when $(X_n,n\geq 0)$ is some stochastic process. In particular, in some solutions I find that, let's say, $\mathbb{E}|X_n|<n$ is accepted, for example here (2nd slide, example 1.2). So my question is: what is the way of thinking if $n$ goes to infinity? Why are we accepting $n$ as a boundary, or maybe I misunderstood something?
| For the definition of a martingale you only want each member $X_n$ of the sequence $\{X_n\}_{n\geq 0}$ to be integrable, and you don't care about the growth of $\mathsf E|X_n|$; i.e. you only need to show that $\mathsf E|X_n|$ is finite - no matter how big it is or how fast it grows with $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Numbers are too large to show $65^{64}+64^{65}$ is not a prime I tried to find cycles of powers, but they are too big. Also $65^{n} \equiv 1 \pmod{64}$, so I don't know how to use that.
| $$64^{65}+65^{64} \equiv 6^{65}+7^{64} \pmod{29}$$
$65=2 \times 28+9$, $64 = 2 \times 28 +8$, and also $\gcd(29,36) = \gcd(29,49) = 1$.
Therefore by Fermat's Little Theorem
If gcd$(a,p)= 1$, and $p$ is a prime then $a^{(p-1)} \hspace{3pt}\equiv \hspace{3pt}1 \pmod{p}$
$36^{29-1} \equiv 1 \pmod{29}, \hspace{5pt}49^{29-1} \equiv 1 \pmod{29} \hspace{3pt} \implies \hspace{3pt}
(6^{2})^{28} \equiv 1 \pmod{29}, \hspace{5pt} (7^{2})^{28} \equiv 1 \pmod{29}$
Therefore $6^{65} = 6^{(56+9)} \equiv 6^9 \pmod{29}, \hspace{5pt} 7^{64} = 7^{(56+8)} \equiv 7^8 \pmod{29}$
$$64^{65}+65^{64} \equiv 6^9+7^8 \pmod{29} \hspace{5pt} \equiv 22+7 \pmod{29} \equiv 0 \pmod{29}$$
Which shows that $$29 | (65^{64}+64^{65})$$
Therefore $65^{64}+64^{65}$ is not a prime.
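Every congruence in the argument can be replayed directly with Python's modular `pow`:

```python
M = 29
assert 64 % M == 6 and 65 % M == 7
assert pow(36, 28, M) == 1 and pow(49, 28, M) == 1   # Fermat's little theorem
assert pow(6, 9, M) == 22 and pow(7, 8, M) == 7
assert (pow(65, 64, M) + pow(64, 65, M)) % M == 0
print((65**64 + 64**65) % 29)  # 0, so 29 divides 65^64 + 64^65
```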
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 1
} |
What operations are preserved by the canonical map to quotient rings Let $R$ be a commutative ring and $I,J,K$ be ideals such that $I\subseteq J$ and $I\subseteq K$.
Let $\pi: R \to R/I$ be the canonical map. I am able to prove that $\pi$ preserves sums and products, but I am unsure about intersections.
$J/I + K/I = (J+K)/I$
$J/I \cdot K/I = (J\cdot K)/I$
$J/I \cap K/I = (J\cap K)/I$?
| Remember that the quotient map induces a lattice isomorphism between the ideals of $R/I$ and the ideals of $R$ that contain $I$; in particular, since $J\cap K$ is the largest ideal of $R$ that contains $I$ and is contained in both $J$ and $K$, then it follows that $(J\cap K)/I$ is the largest ideal of $R/I$ that is contained in both $J/I$ and $K/I$, hence must equal $(J/I) \cap (K/I)$.
Though I think that is by far the best way to go, to prove it explicitly, note that if $x\in J\cap K$, then $x+I\in J/I$ and $x+I\in K/I$, so you get $(J\cap K)/I\subseteq (J/I)\cap(K/I)$. For the converse inclusion, let $a+I\in (J/I)\cap(K/I)$. Then there exists $j\in J$ and $k\in K$ with $j+I = k+I = a+I$. In particular, $j-k\in I$. Therefore, $k=j-(j-k)\in J$, because $j\in J$ and $j-k\in I\subseteq J$, hence $k\in J\cap K$, thus, $a+I = k+I\in (J\cap K)/I$.
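A concrete instance, chosen for illustration: $R=\mathbb Z$, $I=(12)$, $J=(2)$, $K=(3)$, so $J\cap K=(6)$; inside $\mathbb Z/12$ the two sides of the third identity agree.

```python
# R = Z, I = (12), J = (2), K = (3); then J ∩ K = (6), and inside Z/12:
n = 12
JI = {x % n for x in range(0, n, 2)}     # J/I = {0, 2, 4, 6, 8, 10}
KI = {x % n for x in range(0, n, 3)}     # K/I = {0, 3, 6, 9}
JK_I = {x % n for x in range(0, n, 6)}   # (J ∩ K)/I = {0, 6}
print((JI & KI) == JK_I)  # True
```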
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove $F(x)=\frac{1}{x}\int_0^xf(t)\,dt$ is convex Assume $f(x)$ is convex on $[0,\infty)$. Prove that
$F(x)=\frac{1}{x}\int_0^xf(t)\,dt$ is convex.
| 0. As alluded to in the comments, there exist several characterizations of convex functions and each of them can prove useful in a given context: one can mention usual convexity inequalities, nondecreasing difference ratios, nondecreasing left- and right-derivatives, nonnegative second derivatives (when these exist), the graph being above its tangents, some duality à la Fenchel-Legendre, etc.
It happens that a simple change of variables allows one to answer the specific question asked here rather shortly. Nevertheless we now explain another method, whose main advantage is that it allows us to present some useful tools in this context.
To wit, a characterization of convexity is as follows: the function $f:[0,+\infty)\to\mathbb R$ is convex on $[0,+\infty)$ if and only if there exists a nondecreasing locally integrable function $g:(0,+\infty)\to\mathbb R$ and some finite $c$ such that, for every $x\geqslant0$,
$$
f(x)=c+\int_0^xg(y)\mathrm dy.
$$
(To be rigorous here, one can only be sure that $f(0)\geqslant c$, but to ask that $f$ is continuous at $0$ does not change its convexity.)
1. In our context, assuming that $f$ is convex and written as above, one gets
$$
F(x)=\frac1x\int_0^xf(y)\mathrm dy=c+\frac1x\int_0^x\int_0^yg(z)\mathrm dz\mathrm dy=c+\frac1x\int_0^xg(z)(x-z)\mathrm dz,
$$
and one seeks a nondecreasing locally integrable function $h$ such that $F=c+\int\limits_0^{\cdot} h$.
2. To find $h$, introduce the function
$$
k=\int\limits_0^{\cdot}yg(y)\mathrm dy.
$$
An integration by parts yields
$$
F(x)=c+\int_0^xk'(z)\frac{x-z}{xz}\mathrm dz=c+\left[\frac{x-z}{xz}k(z)\right]_ 0^x+\int_0^xk(z)\frac{\mathrm dz}{z^2}.
$$
Since $k(z)=o(z)$ when $z\to0$, the second term on the RHS is zero and $F$ has the desired form, with
$$
h(x)=\frac1{x^2}k(x)=\frac1{x^2}\int_0^xyg(y)\mathrm dy.
$$
3. To check that $h$ is nondecreasing, several methods are available; let us explain one which is completely elementary. For every $0\lt x\leqslant y$,
$$
y^2h(y)=\int_0^xzg(z)\mathrm dz+\int_x^yzg(z)\mathrm dz=x^2h(x)+\int_x^yzg(z)\mathrm dz,
$$
and $g(z)\geqslant g(x)$ for every $z\geqslant x$, hence
$$
2y^2h(y)\geqslant 2x^2h(x)+g(x)\int_x^y2z\mathrm dz=2x^2h(x)+g(x)(y^2-x^2).
$$
Now, $g(z)\leqslant g(x)$ for every $z\leqslant x$ hence
$$
2x^2h(x)\leqslant g(x)\int_0^x2z\mathrm dz=g(x)x^2,\quad\text{that is,}\quad g(x)\geqslant2h(x).
$$
Using this in the previous inequality, one gets
$$
2y^2h(y)\geqslant 2x^2h(x)+2h(x)(y^2-x^2)=2y^2h(x),
$$
hence $h(y)\geqslant h(x)$ and the proof is complete.
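As a quick numerical sanity check of the statement (not part of the proof; the function name and the choice $f(t)=e^t$ are mine, for illustration): for this convex $f$, the average $F(x)=\frac1x\int_0^x e^t\,\mathrm dt=(e^x-1)/x$ has a closed form, and midpoint convexity can be tested on a grid.

```python
import math

def F(x):
    # for f(t) = e^t, F(x) = (1/x) * integral_0^x e^t dt = (e^x - 1) / x
    return (math.exp(x) - 1.0) / x

# test midpoint convexity F((x+y)/2) <= (F(x)+F(y))/2 on a grid of points
pts = [0.1 * k for k in range(1, 60)]
midpoint_convex = all(
    F((x + y) / 2) <= (F(x) + F(y)) / 2 + 1e-12
    for x in pts for y in pts
)
print(midpoint_convex)  # True
```

For a continuous function, midpoint convexity on a dense set is equivalent to convexity, which is why this kind of grid check is a reasonable smoke test.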
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Compute $\lim \limits_{x\to\infty} (\frac{x-2}{x+2})^x$ Compute
$$\lim \limits_{x\to\infty} (\frac{x-2}{x+2})^x$$
I did
$$\lim_{x\to\infty} (\frac{x-2}{x+2})^x = \lim_{x\to\infty} \exp(x\cdot \ln(\frac{x-2}{x+2})) = \exp( \lim_{x\to\infty} x\cdot \ln(\frac{x-2}{x+2}))$$
But how do I continue? The hint is to use L Hopital's Rule. I tried changing to
$$\exp(\lim_{x\to\infty} \frac{\ln(x-2)-\ln(x+2)}{1/x})$$
This is
$$(\infty - \infty )/0 = 0/0$$
But I find that I can keep differentiating?
| If you want to use L'Hôpital, then note that $ \lim_{u\to 0} \frac{\ln(1+u)}{u}=1 $ by L'Hôpital's rule.
$ l= \lim_{x\to \infty} (\frac{x-2}{x+2})^x=\lim_{x\to \infty} \exp((x+2)\ln(1-\frac{4}{x+2})-2\ln(1-\frac{4}{x+2}))$
For $ u = -\frac{4}{x+2}: $ $l= \lim_{u\to 0}\exp(-4\times\frac{\ln(1+u)}{u}-2\ln(1+u))=\exp(-4)$
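A quick numerical check of this limit (illustrative only): evaluating the expression for growing $x$ shows the values settling on $e^{-4}\approx 0.0183$.

```python
import math

for x in (1e2, 1e4, 1e6):
    val = ((x - 2) / (x + 2)) ** x
    print(x, val, abs(val - math.exp(-4)))
# the gap to exp(-4) shrinks roughly like 1/x^2
```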
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 5
} |
Prove that the following integral is divergent $$\int_0^\infty \frac{7x^7}{1+x^7}\,dx$$
I'm really not sure how to even start this. Does anyone care to explain how this can be done?
| An improper integral will diverge if the integrand has a nonzero limit at infinity (as Chris pointed out, it's a different business if the limit doesn't exist). Here,
$$
\lim_{x\to\infty}\frac{7x^7}{1+x^7}=7,
$$
so the integral diverges.
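To see the divergence numerically (a throwaway sketch using a hand-rolled composite Simpson rule, not any particular library): since the integrand tends to $7$, the partial integrals $\int_0^T$ grow roughly like $7T$ and never level off.

```python
def integrand(x):
    return 7 * x**7 / (1 + x**7)

def simpson(f, a, b, n=20000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

I100 = simpson(integrand, 0, 100)
I200 = simpson(integrand, 0, 200)
print(I200 - I100)  # close to 7 * 100 = 700: no finite limit in sight
```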
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
What exactly do we mean when say "linear" combination? I've noticed that the term gets abused alot. For instance, suppose I have
$c_1 x_1 + c_2 x_2 = f(x)$...(1)
Eqtn (1) is such what we say "a linear combination of $x_1$ and $x_2$"
In ODE, sometimes when we want to solve a homogeneous 2nd order ODE like $y'' + y' + y = 0$, we find the characteristic eqtn and solve for the roots and put it into whatever form necessary. But in all casses, the solution takes form of $c_1y_1 + c_2y_2 = y(t)$.
The thing is that $y_1$ and $y_2$ itself doesn't even have linear terms, so does it make sense to say $c_1y_1^2 +c_2y_2^2 = f(t)$ is a "quadratic" combination of y_1 and y_2?
| I am sure about the linear combination like ODE
In which equation on power of variable involve is and multiplied by any scalar quantity. Such combination is known as linear combination of algebra
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
How to verify new digits of $\pi$? Bob makes a claim that he made a new record and computed $\pi$ to 10 trillion digits (or your favourite number here). How would Alice verify that the newly computed constant is actually a correct approximation of $\pi$?
Given a finite string $x,$ of $n+1$ decimal digits: $(3\ 1\ 4\ 1\ 5\ 9\ 2\ldots),$ is there an efficient algorithm to decide whether $x$ is an approximation of $\pi$ up to the $0.\underbrace{00 \ldots 01}_{n-1}$ decimal places?
Edit: Clarification.
*
*Alice does not have access to Bob's method (so she can't prove that his method is correct).
*Alice only receives $x$ from Bob (in any number bases), and wants to verify that $x$ is indeed a correct approximation. No further communications between them.
*Alice could look at all digits of $x$ but should be able to verify in time $\ll$ than what it takes to compute $x$.
*Motivation: Assume Alexander J. Yee did not publish his code nor his method. He only published $x$ in many number bases. He said it took him 3 months to compute $x.$ How could we verify his claim that $x$ is correct in a day or a week or two, without access to his code and formulas? Is there such a verification formula or algorithm?
| Since you allow any base (16 in particular) and a randomized algorithm, you can use the Bailey-Borwein-Plouffe formula, which allows you to compute the $n^{th}$ digit of $\pi$ without having to compute the earlier $n-1$ digits! (Alas, such an algorithm seems to have been discovered only for base-16.)
All Alice needs to do is pick "some" random digits and compare.
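For concreteness, here is a minimal Python sketch of BBP digit extraction (a standard textbook-style implementation, not anyone's production code; plain floats limit it to modest positions, but it illustrates Alice's spot-check). In base 16, $\pi = 3.243\mathrm{F}6\mathrm{A}88\ldots$

```python
def bbp_series(j, n):
    # fractional part of sum_{k>=0} 16^(n-k) / (8k + j)
    s = 0.0
    for k in range(n + 1):
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = n + 1
    while True:  # tail terms with k > n are already < 1
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s += term
        k += 1
    return s % 1.0

def pi_hex_digit(n):
    # n-th hexadecimal digit of pi after the point (n = 0 gives '2')
    x = (4 * bbp_series(1, n) - 2 * bbp_series(4, n)
         - bbp_series(5, n) - bbp_series(6, n)) % 1.0
    return int(16 * x)

print([pi_hex_digit(n) for n in range(8)])  # [2, 4, 3, 15, 6, 10, 8, 8]
```

The three-argument `pow` keeps the head of the series reduced mod $8k+j$, which is what makes jumping straight to position $n$ cheap.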
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
} |
Proof that $\pi$ is rational I stumbled upon this proof of $\pi$ being rational (coincidentally, it's Pi Day). Of course I know that $\pi$ is irrational and there have been multiple proofs of this, but I can't seem to see a flaw in the following proof that I found here. I'm assuming it will be blatantly obvious to people here, so I was hoping someone could point it out. Thanks.
Proof:
We will prove that pi is, in fact, a rational number, by induction on
the number of decimal places, N, to which it is approximated. For
small values of N, say 0, 1, 2, 3, and 4, this is the case as 3, 3.1,
3.14, 3.142, and 3.1416 are, in fact, rational numbers. To prove the rationality of pi by induction, assume that an N-digit approximation
of pi is rational. This number can be expressed as the fraction
M/(10^N). Multiplying our approximation to pi, with N digits to the
right of the decimal place, by (10^N) yields the integer M. Adding the
next significant digit to pi can be said to involve multiplying both
numerator and denominator by 10 and adding a number between -5
and +5 (approximation) to the numerator. Since both (10^(N+1)) and
(M*10+A) for A between -5 and 5 are integers, the (N+1)-digit
approximation of pi is also rational. One can also see that adding one
digit to the decimal representation of a rational number, without loss
of generality, does not make an irrational number. Therefore, by
induction on the number of decimal places, pi is rational. Q.E.D.
| This proof also shows that every countably infinite set is finite, including the set of positive integers $\{1, 2, 3, 4, \ldots\}$. After all $\{1,2,3,\ldots,n\}$ is finite, and so if we add the next number $n+1$, the set we get, $\{1,2,3,\ldots,n,n+1\}$ is finite. Adding one more member does not make the set infinite, so by induction, we see that the set of all positive integers is finite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 2
} |
what are the applications of the isomorphic graphs? While studying data structures I was told by my instructor that even if I am given 3 hours/30 days/3 years to find out whether two graphs are isomorphic or not, it is very, very complex, and even after spending that much time I will not be able to figure out clearly whether the given two graphs are isomorphic or not. It is an NP-complete problem.
So I became curious about why I am studying it then. What could be the possible applications of such isomorphic graphs which can be solved?
| If your question is actually: "what are possible applications of the isomorphism problem", then check this: http://en.wikipedia.org/wiki/Graph_isomorphism_problem#Applications
Another very interesting application of the isomorphism problem is the development of algorithms for fingerprint matching http://euler.mat.ufrgs.br/~trevisan/workgraph/regina.pdf
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 3
} |
Why does $\oint_C d\log|f(z)|=0$? In the article on the argument principle on PlanetMath, it says that
$$\oint_C d\log|f(z)|=0$$ since $|f(z)|$ is single-valued. Why does that follow, or can someone point me to a fuller explanation? I'm studying complex-analysis right now, but this result is not obvious to me. Thanks.
| Since $\log|f(z)|$ is single-valued, we have
$$
\oint_Cd\log|f(z)|=\log|f(z)|\bigg|_A^A=0
$$
where $A$ is any point on $C$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Number of distinct limits of subsequences of a sequence is finite? "The number of distinct limits of subsequences of a sequence is finite?"
I've been mulling over this question for a while, and I think it is true, but I can't see how I might prove this formally. Any ideas?
Thanks
| The answer is no.
Think for instance about subsequences of $(\cos(n))$: after showing that the sequence $(n \bmod 2\pi)$ is dense in $[0,2\pi]$, it follows that for every real number $y$ in $[-1,1]$ there exists a subsequence $(\cos(n_k))_{k \in \mathbb{N}}$ which converges to $y$. In particular, not only is the number of distinct limits of subsequences infinite, it is uncountably infinite.
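A small numerical illustration of this density phenomenon (a demonstration, not a proof; the helper name and the tolerance are mine): for any target value in $(-1,1)$ one quickly finds integers $n$ with $\cos(n)$ as close as you like.

```python
import math

def first_close_index(target, tol, limit=100000):
    # smallest n < limit with |cos(n) - target| < tol, or None
    for n in range(1, limit):
        if abs(math.cos(n) - target) < tol:
            return n
    return None

for y in (-0.9, 0.0, 0.5, 0.99):
    n = first_close_index(y, 1e-3)
    print(y, n, math.cos(n))
```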
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Show that $\displaystyle{\frac{1}{9}(10^n+3 \cdot 4^n + 5)}$ is an integer for all $n \geq 1$ Show that $\displaystyle{\frac{1}{9}(10^n+3 \cdot 4^n + 5)}$ is an integer for all $n \geq 1$
Use proof by induction. I tried for $n=1$ and got $\frac{27}{9}=3$, but if I assume for $n$ and show it for $n+1$, I don't know what method to use.
| Here is a simple "direct proof":
\begin{align*}
10^n+3 \times 4^n + 5&=10^n-1 +3 \times 2^{2n}+6 =9999..9+6 \times [2^{2n-1}+1] \\
&=9999..9+6 \times (2+1)(2^{2n-2}-2^{2n-3}+\cdots-2+1) \\
&= 9 \times [1111...1+2 \times (2^{2n-2}-2^{2n-3}+\cdots-2+1)]
\end{align*}
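The claim is also easy to spot-check by machine before proving it (a throwaway check, not part of the proof):

```python
# divisibility by 9 for a large range of exponents
ok = all((10**n + 3 * 4**n + 5) % 9 == 0 for n in range(1, 200))
print(ok)  # True

# one concrete quotient: 10^3 + 3*4^3 + 5 = 1197 = 9 * 133
quotient = (10**3 + 3 * 4**3 + 5) // 9
print(quotient)  # 133
```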
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 9,
"answer_id": 2
} |
How to prove a trigonometric identity $\tan(A)=\frac{\sin2A}{1+\cos 2A}$ Show that
$$
\tan(A)=\frac{\sin2A}{1+\cos 2A}
$$
I've tried a few methods, and it stumped my teacher.
| The given equality is false. Set $A = \pi/2$. (Note: this applied to an earlier version of the problem).
Perhaps what you meant was
$$ \tan \frac{A}{2} = \frac{\sin A}{1 + \cos A}$$
or
$$ \tan A = \frac{\sin 2A}{1 + \cos 2A}$$
which is true, by using the half/double angle formulas.
$$\frac{\sin A}{1 + \cos A} = \frac{ 2 \sin A/2 \cos A/2}{2 \cos^2 A/2} = \tan A/2$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 4
} |
Sufficient conditions to conclude that $\lim_{a \to 0^{+}} \int_{0}^{\infty} f(x) e^{-ax} \, dx = \int_{0}^{\infty} f(x) \, dx$ What are sufficient conditions to conclude that $$ \lim_{a \to 0^{+}} \int_{0}^{\infty} f(x) e^{-ax} \, dx = \int_{0}^{\infty} f(x) \, dx \ ?$$
For example, for $a>0$, $$ \int_{0}^{\infty} J_{0}(x) e^{-ax} \, dx = \frac{1}{\sqrt{1+a^{2}}} \, ,$$
where $J_{0}(x)$ is the Bessel function of the first kind of order zero.
But I've seen it stated in a couple places without any justification that $$ \int_{0}^{\infty} J_{0}(x) \, dx = \lim_{a \to 0^{+}} \int_{0}^{\infty} J_{0}(x) e^{-ax} \, dx = \lim_{a \to 0^{+}} \frac{1}{\sqrt{1+a^{2}}} = 1 .$$
EDIT:
In user12014's answer, it is assumed that $ \int_{0}^{\infty} f(x) \, dx$ converges absolutely.
But in the example above, $ \int_{0}^{\infty} J_{0}(x) \, dx$ does not converge absolutely.
And there are other examples like
$$ \int_{0}^{\infty} \frac{\sin x}{x} \, dx = \lim_{a \to 0^{+}} \int_{0}^{\infty} \frac{\sin x}{x}e^{-ax} \, dx = \lim_{a \to 0^{+}} \arctan \left(\frac{1}{a} \right) = \frac{\pi}{2} $$
and
$$ \int_{0}^{\infty} \text{Ci}(x) \, dx = \lim_{a \to 0^{+}} \int_{0}^{\infty} \text{Ci}(x) e^{-ax} \, dx = - \lim_{a \to 0^{+}} \frac{\log(1+a^{2})}{2a} =0 \, ,$$ where $\text{Ci}(x)$ is the cosine integral.
SECOND EDIT:
Combining Daniel Fischer's answer below with his answer to my follow-up question shows that if $\int_{0}^{\infty} f(x) \, dx$ exists as an improper Riemann integral, then $$\lim_{a \to 0^{+}} \int_{0}^{\infty} f(x) e^{-ax} \, dx = \int_{0}^{\infty} f(x) \, dx.$$
| As suggested in the comments, the easiest way to see this is with the dominated convergence theorem. Suppose $f \in L^1(0,\infty)$, i.e.
$$\int_0^\infty \! |f| \, dx < \infty$$
Let $a_n \in \mathbb{R}$ be some sequence such that $a_n \geq 0$ and $a_n \to 0$. Define $f_n(x) = f(x)e^{-a_nx}$. Then we have that
$$|f_n(x)| \le |f(x)|$$ for all $x \in [0,\infty)$ and it is clearly true that
$$\lim_{n \to \infty} f_n(x) = f(x)$$ for all $x \in [0,\infty)$.
Thus by the dominated convergence theorem we have
$$\lim_{n \to \infty} \int_0^\infty \! f_n \, dx = \int_0^\infty \! f \, dx$$
But this says that for every non-negative sequence $a_n$ with $a_n \to 0$ we have
$$\lim_{n \to \infty} \int_0^\infty \! fe^{-a_nx} \, dx = \int_0^\infty \! f \, dx$$
which, by the sequential characterization of limits in metric spaces, implies that
$$\lim_{a \to 0^+} \int_0^\infty \! fe^{-ax} \, dx = \int_0^\infty \! f \, dx$$
is also true.
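As a concrete, purely numerical illustration of the Abelian limit in the non-absolutely-convergent example $\int_0^\infty \frac{\sin x}{x}e^{-ax}\,dx = \arctan(1/a)$ from the question (a stdlib-only sketch; the truncation point and step count are arbitrary choices): a crude Simpson rule already tracks the closed form as $a$ shrinks.

```python
import math

def damped_sinc_integral(a, T=80.0, n=200000):
    # Simpson approximation of integral_0^T (sin x / x) e^{-a x} dx;
    # the tail beyond T is negligible for a >= 0.1
    h = T / n
    def f(x):
        return (math.sin(x) / x if x else 1.0) * math.exp(-a * x)
    s = f(0.0) + f(T)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

for a in (1.0, 0.5, 0.1):
    print(a, damped_sinc_integral(a), math.atan(1 / a))
```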
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Proof by contrapositive Prove that if the product $ab$ is irrational, then either $a$ or $b$ (or both) must be irrational.
How do I prove this by contrapositive? What is contrapositive?
| If you have to prove an implication $A\Rightarrow B$, contrapositive means you want to prove the equivalent statement $\neg B\Rightarrow\neg A$. The fact that they are equivalent guarantees you that also $A\Rightarrow B$ holds.
In your case, $A=$ 'the product $ab$ is irrational', while $B=$ '$a$ or $b$ must be irrational'. So you just have to negate both $A$ and $B$ and prove the contraposition $\neg B\Rightarrow\neg A$, which is not hard in your case.
By the way, there even is a Wikipedia page with exactly the title of your post here ;)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
An example of an endomorphism Could someone suggest a simple $\phi\in $End$_R(A)$ where $A$ is a finitely generated module over ring $R$ where $\phi$ is injective but not surjective? I have a hunch that it exists but I can't construct an explicit example. Thanks.
| Consider the morphism of $\mathbb{R}$-modules:
$$
\varphi : \mathbb{R}^\infty \longrightarrow \mathbb{R}^\infty
$$
defined by
$$
\varphi (x_1, x_2, \dots , x_n, \dots ) = (0, x_1, x_2 , \dots , x_n , \dots ) \ .
$$
This example is not possible with finite-dimensional vector spaces, because then, for endomorphisms, you have
$$
\text{isomorphism} \quad \Longleftrightarrow \quad \text{monomorphism} \quad \Longleftrightarrow \quad \text{epimorphism} \ .
$$
EDIT. Now I see you've added the finitely generated condition, so this example obviously no longer applies.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
The set of linear fractional transformations that preserves the open unit disk is equicontinuous on every compact subset in it Let $\Delta(a, r)$ be the open disk of radius $r$ centered at the point $a$ in the complex plane, and $\operatorname{Aut}(\Delta(0, 1))$ be the set of linear fractional transformations that preserves the open unit disk, i.e. transformations of the form $z\mapsto e^{i\theta}(z-a)/(1-\bar az)$, where $a\in\Delta(0, 1)$ and $\theta\in\mathbb R$.
*
*I want to show $\operatorname{Aut}(\Delta(0, 1))$ is equicontinuous on every compact subset of $\Delta(0, 1)$. Does it suffice to show it is equicontinuous on $\overline{\Delta(0,r)}$ for any $r<1$? I think it surely suffices to show that it is continuous on every closed disk contained in the unit disk, but I'm unsure about the former.
*I want to show $\operatorname{Aut}(\Delta(0, 1))$ is equicontinuous on $\overline{\Delta(b,r)}$ which is contained in the unit disk. To do this, I evaluated $|f(z)-f(w)|$ for an arbitrary $f$ in $\operatorname{Aut}(\Delta(0, 1))$ and got $|f(z)-f(w)| \le |z-w|/(|1-\bar az||1-\bar aw|)$. But I can't go further. How do you get an upper bound for this?
| *
*A compact subset $K$ of $\Delta(0,1)$ is contained in a set of the form $\overline{\Delta(0,r)}$ for some $r\in (0,1)$.
Indeed, for each $x\in K$ we can find $r_x$ such that $\Delta(x,2r_x)\subset \Delta(0,1)$, so $K\subset \bigcup_{j=1}^N\Delta(x_j,r_{x_j})$ for some $N$ and $x_1,\ldots,x_N\in K$. Then put $r:=\max_{1\leq j\leq N}(|x_j|+r_{x_j})$; note that $|x_j|+2r_{x_j}\leq 1$ for each $j$, so $r\leq\max_{1\leq j\leq N}(1-r_{x_j})<1$.
So it's enough to show equi-continuity on the sets of the form $\overline{\Delta(0,r)}$.
2. We have for $z_1,z_2\in\overline{\Delta(0,r)}$ that
$$|f(z_1)-f(z_2)|\leq\frac{|z_1-z_2|}{|1-\bar az_1|\cdot |1-\bar az_2|}$$
and using triangular inequality and the fact that $|a|<1$
$$|1-\bar az_1|\cdot |1-\bar az_2|\geq (1-|z_1|)(1-|z_2|)\geq (1-r)^2$$
so $$|f(z_1)-f(z_2)|\leq \frac{|z_1-z_2|}{(1-r)^2},$$
which proves equi-continuity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Graph Theory - How can I calculate the number of vertices and edges, if given this example An algorithm book Algorithm Design Manual has given an description:
Consider a graph that represents the street map of Manhattan in New York City. Every junction of two streets will be a vertex of the graph. Neighboring junctions are connected by edges. How big is this graph? Manhattan is basically a grid of 15 avenues each crossing roughly 200 streets. This gives us about 3,000 vertices and 6,000 edges, since each vertex neighbors four other vertices and each edge is shared between two vertices.
If it says "The graph is a grid of 15 avenues each crossing roughly 200 streets", how can I calculate the number of vertices and edges? Although the description above has given the answers, but I just can't understand.
Can anyone explain the calculation more easily?
Thanks
| This is quite simple: 15 avenues each crossing 200 streets means there are 15 * 200 = 3000 crossings, i.e. 3000 nodes.
Each node has upper, lower, left and right neighbors, so for each node there are 4 edges connecting it to its neighbors. However, each edge has then been counted twice, since the edge between node 1 and node 2 is counted once from node 1 and once from node 2.
So there are 3000*4/2 = 6000 edges in total.
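The back-of-the-envelope count can be written out directly. Note the exact edge count for a perfect grid is a bit lower than 6,000, because boundary junctions have fewer than four neighbors (the estimate in the book is deliberately rough):

```python
avenues, streets = 15, 200
vertices = avenues * streets                 # one vertex per junction

# handshake estimate: ~4 incident edges per vertex, each edge shared by 2
edges_estimate = vertices * 4 // 2

# exact count for a full grid: edges along avenues + edges along streets
edges_exact = avenues * (streets - 1) + streets * (avenues - 1)

print(vertices, edges_estimate, edges_exact)  # 3000 6000 5785
```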
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 3
} |
Convergence/Divergence of series with terms $2^{n}\left ( \frac{n}{n+1} \right )^{n^{2}} $ and $\sin (n)\sin \frac{x}{n}$ Help me please with these 2 questions:
1.Does it converge or diverge? :
$$ \sum_{n=2}^{\infty }2^{n}\left ( \frac{n}{n+1} \right )^{n^{2}} $$
2.Check out absolute and conditional convergence of: $x>0 $
$$ \sum_{n=1}^{\infty }\sin (n)\sin \frac{x}{n} $$
Thanks a lot!
| Hint for 1:
For sufficiently large $n$, $(\frac{n}{n+1})^n = (1 - \frac{1}{n+1})^n \le c$ for some $ 0 \lt c \lt \frac{1}{2}$. Why?
Now try using the above to prove that your series converges.
For part 2, I believe you can use the Dirichlet Test to prove convergence.
To show that the series does not converge absolutely, use $\sin (x/n) \ge x/2n$ for sufficiently large $n$ and use the fact that at least one of $n$, $n+1$ is more than $\frac{1}{2}$ away from the multiple of $\pi$ which is closest to them.
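For part 1 one can also see the convergence numerically via the root test (a quick check, not a proof): $\sqrt[n]{a_n} = 2\,\bigl(\frac{n}{n+1}\bigr)^{n} \to 2/e < 1$.

```python
import math

def term(n):
    return 2.0**n * (n / (n + 1.0)) ** (n * n)

# n-th root of the term: term(n)**(1/n) = 2*(n/(n+1))**n tends to 2/e < 1
for n in (10, 50, 200):
    print(n, term(n) ** (1.0 / n), 2 / math.e)
```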
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Traveling between integers- powers of 2
Moderator Note: At the time that this question was posted, it was from an ongoing contest. The relevant deadline has now passed.
Consider the integers. We can only travel directly between two integers with a difference whose absolute value is a power of 2 and every time we do this it is called a step. The distance $d$ between two integers is the minimum number of steps required to get from one to the other. Note however that we can travel backwards. For instance $d(2,17)$ is 2: $2+16=18 \rightarrow 18-1=17$.
How can we prove that for any integer $n$, we will always have some $d(a,b)=n$ where $b>a$?
If we are only able to take forward steps I know that the number of 1s in the binary representation of $b-a$ would be $d(a,b)$. However, we are able to take steps leftward on the number line...
| Presumably you mean every natural number $n$, not every integer $n$. We can simplify things by taking $a=0$ without loss of generality and then writing $d(b):=d(0,b)$. Also, it's enough to show that for every $n$ there is $b$ with $d(b)\ge n$, since that means that for all $n$ there are numbers not reachable in $n$ steps, and it then follows by induction that for all $n$ there are numbers reachable in exactly $n$ steps, since at each stage of the induction there are adjacent numbers of which one is reachable in $n$ steps and the other isn't, and the latter can be reached in an $(n+1)$-th step of $1$.
Now represent the integers in "infinite two's complement", that is, a non-negative integer is represented by its binary representation and a negative integer $k$ is represented by inverting the binary representation of $-(k+1)$, considered as a leftward infinite string with leading zeros. This puts the integers into bijection with the set of all leftward infinite binary strings with finitely many transitions between $0$ and $1$.
In this representation, adding a positive integer to a number works as expected, with carrying carried out until the infinite stretch of $0$s or $1$s is reached, and the $1$s flipped to $0$s by a carry.
Now in each step, we can add a positive power of two to the number we've reached, and then we can optionally flip its sign. (This is equivalent to adding or subtracting powers of two at each step.) Since flipping the sign means inverting the string and then adding $1$, in each step we can invert the string and add a power of two up to twice.
Inverting the string doesn't change the number of transitions between $0$ and $1$. Adding a power of two increases the number of transitions between $0$ and $1$ by at most $2$. Thus in each step we can increase the number of transitions by at most $4$. Since there are integers whose representations have arbitrary numbers of transitions, for every $n$ there are integers we can't reach in $n$ steps.
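The distances $d(0,b)$ from the question can be explored by brute force (a bounded breadth-first search, so strictly a sketch: it assumes an optimal walk never needs to leave the window $[-4096, 4096]$ or use powers above $2^{11}$, which holds for the small targets below):

```python
from collections import deque

def steps(b, bound=4096, max_pow=12):
    # fewest +/- 2^k moves from 0 to b, searching inside [-bound, bound]
    pos = [1 << k for k in range(max_pow)]
    moves = pos + [-p for p in pos]
    dist = {0: 0}
    queue = deque([0])
    while queue:
        x = queue.popleft()
        if x == b:
            return dist[x]
        for m in moves:
            y = x + m
            if -bound <= y <= bound and y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return None

print([steps(b) for b in (1, 5, 21, 85)])  # [1, 2, 3, 4]; e.g. 85 = 64+16+4+1
```

Targets with binary expansion $1010\ldots01$ need one step per $1$, matching the intuition that more and more alternations force longer walks.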
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 1
} |
If $A$ is a subset of $B$, then the closure of $A$ is contained in the closure of $B$. I'm trying to prove something here which isn't necessarily hard, but I believe it to be somewhat tricky. I've looked online for the proofs, but some of them don't seem 'strong' enough for me or that convincing. For example, they use the argument that since $A\subset \overline{B} $, then $ \overline{A} \subset \overline{B} $. That, or they use slightly altered definitions. These are the definitions that I'm using:
Definition #1: The closure of $A$ is defined as the intersection of all closed sets containing A.
Definition #2: We say that a point x is a limit point of $A$ if every neighborhood of $x$ intersects $A$ in some point other than $x$ itself.
Theorem 1: $ \overline{A} = A \cup A' $, where $A'$ = the set of all limit points of $A$.
Theorem 2: A point $x \in \overline{A} $ iff every neighborhood of $x$ intersects $A$.
Prove: If $ A \subset B,$ then $ \overline{A} \subset \overline{B} $
Proof: Let $ \overline{B} = \bigcap F $ where each $F$ is a closed set containing $B$. By hypothesis, $ A \subset B $; hence, for each such $F$ we have $ A \subset B \subset F $, and therefore $ A \subset \bigcap F = \overline{B} $. Now that we have proven that $ A \subset \overline{B} $, we show $A'$ is also contained in $\overline{B} $.
Let $ x \in A' $. By definition, every neighborhood of $x$ intersects $A$ at some point other than $x$ itself. Since $ A \subset B $, every neighborhood of $x$ also intersects $B$ at some point other than $x$ itself. Hence $x$ is a limit point of $B$, so $ x \in B' \subset \overline{B} $.
Hence, $ A \cup A' \subset \overline{B}$. But, $ A \cup A' = \overline{A}$. Hence, $ \overline{A} \subset \overline{B}.$
Is this proof correct?
Be brutally honest, please. Critique as much as possible.
| You say that some of the proofs you have looked use the argument "that since $A$ is contained in $\overline{B}$, then $\overline{A}\subseteq\overline{B}$" and that they don't seem strong enough for you but this follows directly from definition #1. Any closed subset containing $B$ contains $A$ and consequently $A\subseteq \overline{B}$. Since $\overline{B}$ is closed, $\overline{A}\subseteq\overline{B}.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 7,
"answer_id": 2
} |
'Squashing' a graph of data? I have a vector of sample data that describes a divergent oscillation like in this image.
I want to transform this data (just the data, not the system or anything), so that the data does converge to zero. I also want to keep the peaks of data at the same sample point, and the graph smooth. Simply multiplying the data by decreasing values moves the peaks of the data which is not desirable.
What sort of method should I use?
| I did not quite get it. Do you have a function that generates this graphic? If so, you can multiply by a positive decaying function (say, $e^{-\alpha x}$, where you can adjust the value of $\alpha >0$ for faster convergence to zero). That should do it, if I got your question right.
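A small pure-Python sketch of this suggestion (the sample data and constants are made up for illustration): multiply the samples pointwise by an envelope that decays faster than the oscillation grows. The zero crossings are preserved exactly; peak positions can drift slightly within each lobe, since the envelope tilts it.

```python
import math

alpha = 0.3                                   # envelope rate (tune as needed)
ts = [0.05 * k for k in range(1200)]          # sample times
data = [math.exp(0.1 * t) * math.sin(t) for t in ts]          # divergent oscillation
damped = [x * math.exp(-alpha * t) for t, x in zip(ts, data)]  # net decay e^{-0.2 t}

early = max(abs(x) for x in damped[:200])     # amplitude near t = 0
late = max(abs(x) for x in damped[-200:])     # amplitude near the end
print(early > late, late)  # True, and `late` is tiny: the data now decays to 0
```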
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Suppose two $n\times n$ matrices, $A$ and $B$; how many possible solutions are there? Suppose I construct an $n\times n$ matrix $C$ by multiplying two $n\times n$ matrices $A$ and $B$, i.e. $AB = C$. Given $B$ and $C$, how many other $A$'s can also yield $C$? Is it just the exact $A$, infinitely many other $A$'s, or no other $A$'s? There are no assumptions made about the invertibility of $A$ and $B$. In the case that $A$ and $B$ are invertible there exists only one such $A$.
| If $B$ is not invertible its range is a proper subspace; if $a$ is any matrix which maps the range of $B$ to $\{0\}$ then $(A+a) \cdot B = A \cdot B$. There are infinitely many such $a$. On the other hand if $D \cdot B = A \cdot B$ then $(D-A) \cdot B = 0$, i.e. $D-A$ maps the range of $B$ to $\{0\}$.
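A concrete $2\times 2$ instance of this argument (the particular numbers are chosen arbitrarily): with $B$ singular, any $a$ that annihilates the range of $B$ gives $(A+a)\cdot B = A\cdot B$.

```python
def matmul(X, Y):
    # product of two 2x2 matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[1, 0], [0, 0]]      # singular; range(B) = span{(1, 0)}
a = [[0, 5], [0, -7]]     # sends (1, 0) to 0, so a @ B = 0

A2 = [[A[i][j] + a[i][j] for j in range(2)] for i in range(2)]
print(matmul(A, B) == matmul(A2, B))  # True, although A2 != A
```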
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Set of cluster points of a bounded sequence Suppose that $\{\alpha_k\}$ is a bounded sequence of real numbers satisfying the
condition $\displaystyle\lim_{k\rightarrow\infty}|\alpha_k-\alpha_{k+1}|=0$. Let $\displaystyle m = \varliminf_{k\rightarrow\infty}\alpha_k$ and
$\displaystyle M = \varlimsup_{k\rightarrow\infty}\alpha_k$. Prove that the
cluster point set of the sequence $\{\alpha_k\}$ is the whole segment $[m;M]$.
|
Here is an example showing that in higher dimension the cluster point set of an unbounded sequence $(x_n)$ such that $\|x_{n+1}-x_n\|\to0$ may be disconnected, for the reason explained by @joriki (naturally, this cluster point set is still closed).
In words, consider the curves $\gamma_n$, each making a vee joining the points $a=(0,0)$ and $b=(2,0)$ through the bottom point $(1,-2^n)$, and the curve $\gamma$ which concatenates them, joining $a$ to $b$ through $\gamma_{2n}$, then $b$ to $a$ through $\gamma_{2n+1}$, and so on, at speed roughly $1$.
In maths, for each $n\geqslant0$, $\gamma_n:[0,2^n]\to\mathbb R^2$ is defined by
$$
\gamma_n(t)=(2^{1-n}t,|2^n-2t|-2^n),
$$
and the curve $\gamma:[0,+\infty)\to\mathbb R$ is defined by $\gamma(2^{2n}+t)=\gamma_{2n}(t)$ for every $t$ in $[0,2^{2n}]$ and by $\gamma(2^{2n+1}+t)=\gamma_{2n+1}(2^{2n+1}-t)$ for every $t$ in $[0,2^{2n+1}]$.
Now, consider $x_n=\gamma(\sqrt{n})$ for every $n\geqslant0$. Since $\sqrt{n+1}-\sqrt{n}\to0$ and $\|\gamma'\|$ is bounded, $\|x_{n+1}-x_n\|\to0$. Since $\sqrt{n}\to\infty$, the sequence $(x_n)$ passes near $a$ and near $b$ infinitely often (in fact $x_{2^{4n}}$ is exactly $a$ and $x_{2^{4n+2}}$ is exactly $b$, for every $n\geqslant0$).
The set of limit points of $(x_n)$ is $\{a,b\}$, which is not connected. Since the union of the paths $\gamma_n$ is unbounded, $(x_n)$ is unbounded.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Multiplayer zero sum games In a course I'm taking, the professor mentioned that a zero sum games are only interesting for 2 players.
Can someone explain me that?
| Take any $n$-player game. Add an $(n+1)$-th dummy player who has a single action available and a payoff function $u_{n+1}$ given by $u_{n+1}(a_1,\ldots,a_n,a_{n+1})=-\sum_{i=1}^n u_i(a_1,\ldots,a_n)$. This $(n+1)$-player game is a zero-sum game that is strategically equivalent to the original $n$-player game.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Stability of trivial solution for DE system with non-constant coefficient matrix Given the arbitrary linear system of DE's
$$x'=A(t)x,$$
with the condition that the spectral bound of $A(t) $ is uniformly bounded by a negative constant, is the trivial solution always stable? All the $(2\times 2)$ matrices I've tried which satisfy the above property yield stable trivial solutions, which seems to suggest this might be the case in general. I can't think of a simple counterexample, so I'm asking if one exists. If there isn't what would be some steps toward proving the statement?
This is indeed homework.
| Here is an example.
$$
A(t)=\left(
\begin{matrix}
-1+\frac32\cos^2 t& 1-\frac32\cos t\sin t\\
-1-\frac32\sin t\cos t &-1+\frac32\sin^2 t
\end{matrix}
\right)
$$
One can check that the eigenvalues of $A(t)$ are
$$
\lambda_1=\frac14[-1+\sqrt7 i],\quad \lambda_2=\bar\lambda_1,
$$
both of which have negative real parts. But the origin is unstable.
This is an example from the section about Floquet theory in Ordinary Differential Equations and Dynamical Systems by Sideris.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why does $PSL(2,\mathbb C)\cong PGL(2,\mathbb C)$ but $PSL(2,\mathbb R) \not\cong PGL(2,\mathbb R)$? Why does $PSL(2,\mathbb C)\cong PGL(2,\mathbb C)$ but $PSL(2,\mathbb R) \not\cong PGL(2,\mathbb R)$?
| One way of proving the non-isomorphism part would be to show that ${\rm PGL}(2,{\mathbb R})$ has the Klein 4-group $C_2 \times C_2$ as subgroup, but ${\rm PSL}(2,{\mathbb R})$ does not.
The first claim is easy. $C_2 \times C_2$ is the image in ${\rm PGL}(2,{\mathbb R})$ of the dihedral group of order 8 generated by $\left(\begin{array}{rr}1&0\\0&-1\end{array}\right)$ and $\left(\begin{array}{rr}0&1\\1&0\end{array}\right)$.
It is also straightforward to show that the only element of order 2 in ${\rm SL}(2,{\mathbb R})$ is $-I_2$. So the only possibility for the inverse image in ${\rm SL}(2,{\mathbb R})$ of $C_2 \times C_2$ is the quaternion group $Q_8$. But $Q_8$ does not have a 2-dimensional real representation. That can be shown using the Frobenius-Schur indicator of the 2-dimensional complex representation, but I expect you would prefer a more elementary proof.
What is the source of this problem? Is it an exercise, and if so at what level?
Edit: In fact it is not hard to show that $Q_8$ is not a subgroup of ${\rm GL}(2,{\mathbb R})$ without using representation theory. An element of order 4 in ${\rm GL}(2,{\mathbb R})$ has minimal polynomial $x^2+1$ and is therefore conjugate to $A := \left(\begin{array}{rr}0&1\\-1&0\end{array}\right)$. By simple linear algebra we find that the matrices conjugating $A$ to $A^{-1}$ have the form $B:=\left(\begin{array}{rr}a&b\\b&-a\end{array}\right)$, and $B^2 = -I$ gives $a^2+b^2 = -1$, which has no solution in ${\mathbb R}$.
So you now have three proofs of the non-isomorphism!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 5,
"answer_id": 2
} |
first order differential equation I needed help with this Differential Equation, below:
$$dy/dt = t + y, \text{ with } y(0) = -1$$
I tried $dy/(t+y) = dt$ and integrated both sides, but it looks like the $u$-substitution does not work out.
| This is a first-order linear differential equation, so the general solution is given by:
$$y=\frac{\int u(t)\cdot t \,dt +C}{u(t)} ~\text{where}~ u(t)=e^{-\int dt}$$
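If you want to sanity-check the formula numerically, here is a small Python sketch (my own, using a midpoint-rule integral; the helper name and step count are arbitrary). It applies the integrating-factor formula to this particular equation and compares against the closed form $y(t)=-t-1$, which you get by solving $y'=t+y$, $y(0)=-1$ by hand.

```python
import math

# Integrating factor for y' - y = t is u(t) = exp(-t); then (u*y)' = u*t,
# so y(t) = (integral_0^t u(s)*s ds + C) / u(t), with C = u(0)*y(0) = -1.
def y(t, steps=100_000):
    u = lambda s: math.exp(-s)
    h = t / steps
    # midpoint rule for the integral of u(s)*s over [0, t]
    integral = sum(u((k + 0.5) * h) * (k + 0.5) * h for k in range(steps)) * h
    return (integral - 1) / u(t)

for t in (0.5, 1.0, 2.0):
    assert abs(y(t) - (-t - 1)) < 1e-6  # exact solution is y = -t - 1
```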
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Limit of $\arctan(x)/x$ as $x$ approaches $0$? Quick question:
I came across the following limit: $$\lim_{x\rightarrow 0^{+}}\frac{\arctan(x)}{x}=1.$$
It seems like the well-known limit:
$$\lim_{x\rightarrow 0}\frac{\sin x}{x}=1.$$
Can anyone show me how to prove it?
| Recall (see the diagram below) that for $0\le t<{\pi\over2}$:
$$\tag{1}
\sin t \le t \le \tan t.
$$
Taking $t =\arctan x$ in $(1)$, we have, for $x>0$:
$$
\sin\bigl(\arctan(x)\bigr)\le \arctan(x)\le x.
$$
But
$$
\sin\bigl(\arctan (x)\bigr) ={x\over \sqrt{1+x^2}};
$$
whence, for $x>0$:
$$
{x\over \sqrt{1+x^2}}\le \arctan(x)\le x.
$$
So, for $x>0$, we have
$$
{1\over \sqrt{1+x^2}}\le {\arctan(x)\over x}\le 1;
$$
and it follows from the Squeeze Theorem that
$$
\lim_{x\rightarrow0^+} {\arctan(x)\over x}=1.
$$
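A quick numerical sanity check of the squeeze (a sketch of mine, not part of the proof): for small positive $x$ the ratio should sit between $1/\sqrt{1+x^2}$ and $1$, and tend to $1$.

```python
import math

# As x -> 0+, arctan(x)/x is squeezed between 1/sqrt(1+x^2) and 1.
for x in [1e-1, 1e-3, 1e-6]:
    lower = 1 / math.sqrt(1 + x**2)
    ratio = math.atan(x) / x
    assert lower <= ratio <= 1

# and the ratio really does approach 1
assert abs(math.atan(1e-8) / 1e-8 - 1) < 1e-12
```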
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 2
} |
Deriving even odd function expressions What is the logic/thinking process behind deriving an expression for even and odd functions in terms of $f(x)$ and $f(-x)$?
I've been pondering about it for a few hours now, and I'm still not sure how one proceeds from the properties of even and odd functions to derive:
$$\begin{align*}
E(x) &= \frac{f(x) + f(-x)}{2}\\
O(x) &= \frac{f(x) - f(-x)}{2}
\end{align*}$$
What is the logic and thought process from using the respective even and odd properties,
$$\begin{align*}
f(-x) &= f(x)\\
f(-x) &= -f(x)
\end{align*}$$
to derive $E(x)$ and $O(x)$?
The best I get to is:
For even: $f(x)-f(-x)=0$ and for odd: $f(x)+f(-x)=0$
Given the definition of $E(x)$ and $O(x)$, it makes a lot of sense (hindsight usually is) but starting from just the properties. Wow, I feel I'm missing something crucial.
| This might be repeating parts of Sivaram's answer, but I think a reorganization might be enlightening.
Suppose we want to break $f$ into even and odd functions: $f(x)=E(x)+O(x)$ where $E(x)$ is even, that is $E(-x)=E(x)$, and $O(x)$ is odd, that is $O(-x)=-O(x)$. Simply from these considerations, we get
$$
\begin{align}
f(x)+f(-x)
&=(E(x)+O(x))+(E(-x)+O(-x))\\
&=(E(x)+O(x))+(E(x)-O(x))\\
&=2E(x)
\end{align}
$$
and
$$
\begin{align}
f(x)-f(-x)
&=(E(x)+O(x))-(E(-x)+O(-x))\\
&=(E(x)+O(x))-(E(x)-O(x))\\
&=2O(x)
\end{align}
$$
Therefore,
$$
\begin{array}{}
E(x)=\frac{f(x)+f(-x)}{2}&\text{and}&O(x)=\frac{f(x)-f(-x)}{2}
\end{array}
$$
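Here is a small Python illustration of the decomposition (my own sketch; the names `even_part`/`odd_part` are made up). Taking $f=\exp$, the even and odd parts should come out as $\cosh$ and $\sinh$.

```python
import math

def even_part(f):
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    return lambda x: (f(x) - f(-x)) / 2

f = math.exp  # E should recover cosh, O should recover sinh
E, O = even_part(f), odd_part(f)
for x in [-2.0, -0.5, 0.0, 1.3]:
    assert math.isclose(E(x) + O(x), f(x))           # f = E + O
    assert math.isclose(E(-x), E(x))                  # E is even
    assert math.isclose(O(-x), -O(x), abs_tol=1e-12)  # O is odd
    assert math.isclose(E(x), math.cosh(x))
    assert math.isclose(O(x), math.sinh(x), abs_tol=1e-12)
```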
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Find the angle in a triangle if the distance between one vertex and orthocenter equals the length of the opposite side Let $O$ be the orthocenter (intersection of heights) of the triangle $ABC$. If $\overline{OC}$ equals $\overline{AB}$, find the angle $\angle$ACB.
| Let point $P$ on $AC$ be the foot of the perpendicular $BO$, and note that $\angle OCA$ is the complement of $A$. Then,
$$\begin{eqnarray}
|AC|&=&|AP|+|PC|\\
&=&|AB|\cos A+|OC|\cos\angle OCA \\
&=&|AB| \cos A+|OC| \sin A \\
&=&|AB|(\cos A+\sin A)
\end{eqnarray}$$
Conveniently scaling to unit circumdiameter ---so that $|AC| = \sin B$, $|AB| = \sin C$, and $|BC| = \sin A$ (which we may assume is non-zero)--- we have
$$\begin{eqnarray}
\sin B &=& \sin C \; (\cos A+\sin A) \\
\implies\sin(A+C) &=& \cos A \sin C + \sin A \sin C \\
\implies\sin A \cos C + \cos A \sin C &=& \cos A \sin C + \sin A \sin C \\
\implies\sin A \cos C &=& \sin A \sin C \\
\implies\cos C &=& \sin C \\
\implies C &=& \pi/4
\end{eqnarray}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Hom of the direct product of $\mathbb{Z}_{n}$ to the rationals is nonzero. Why is $\mathrm{Hom}_{\mathbb{Z}}\left(\prod_{n \geq 2}\mathbb{Z}_{n},\mathbb{Q}\right)$ nonzero?
Context: This is problem $2.25 (iii)$ of page $69$ Rotman's Introduction to Homological Algebra:
Prove that
$$\mathrm{Hom}_{\mathbb{Z}}\left(\prod_{n \geq 2}\mathbb{Z}_{n},\mathbb{Q}\right) \ncong \prod_{n \geq 2}\mathrm{Hom}_{\mathbb{Z}}(\mathbb{Z}_{n},\mathbb{Q}).$$
The right hand side is $0$ because $\mathbb{Z}_{n}$ is torsion and $\mathbb{Q}$ is torsion-free.
| $\mathbb{Q}$ is an injective $\mathbb{Z}$-module.
The exact sequence
$$0\rightarrow \mathbb{Z} \rightarrow \prod \mathbb{Z}_n\rightarrow C\rightarrow 0$$ yields the exact sequence
$$0\rightarrow \mathrm{Hom}_{\mathbb{Z}}(C,\mathbb{Q}) \rightarrow \mathrm{Hom}_{\mathbb{Z}}(\prod \mathbb{Z}_n, \mathbb{Q})\rightarrow \mathrm{Hom}_{\mathbb{Z}}(\mathbb{Z},\mathbb{Q})\rightarrow 0.$$
The last term in the latter exact sequence is just $\mathbb{Q}$, hence $\mathrm{Hom}_{\mathbb{Z}}(\prod \mathbb{Z}_n,\mathbb{Q})\neq 0.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Field of fractions of $R[X]$ Let $R$ be a domain and let $Q$ be its field of fractions. Show that the field of fractions of $R[X]$ is isomorphic to $Q(X)$.
By the way, I don't know exactly what $Q(X)$ is. It means $Q[X]$? Or $Q$ times the ideal generated by $X$ in $R[X]$?
| Generally, $A[\alpha]$ means "the ring generated by A and $\alpha$", and $A(\alpha)$ means "the field generated by A and $\alpha$". I have only ever seen the latter when $A$ itself is a field.
In the case of an indeterminate variable $X$, $A[X]$ would be the ring of polynomials (with coefficients in $A$), and $A(X)$ would be the fraction field of $A[X]$: the field of rational functions (with coefficients in $A$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Dimension of subspace of all upper triangular matrices If $S$ is the subspace of $M_7(R)$ consisting of all upper triangular matrices, then $dim(S)$ = ?
So if I have an upper triangular matrix
$$
\begin{bmatrix}
a_{11} & a_{12} & . & . & a_{17}\\
. & a_{22} & . & . & a_{27}\\
. & . & . & . & .\\
0 & . & . & . & a_{77}\\
\end{bmatrix}
$$
It looks to me that this matrix can potentially have 7 pivots, therefore it is linearly independent and so it will take all 7 column vectors to span it. But that answer is marked as incorrect when I enter it so what am I missing here?
| In general, an $n\times n$ matrix has $n(n-1)/2$ off-diagonal coefficients and $n$ diagonal coefficients. Thus the dimension of the subalgebra of upper triangular matrices is equal to $n(n-1)/2+n=n(n+1)/2$.
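A quick way to convince yourself of the count (a throwaway Python check of mine, not part of the answer): enumerate the free entries on or above the diagonal.

```python
def upper_triangular_dim(n):
    # count the free entries (i, j) with i <= j in an n x n matrix
    return sum(1 for i in range(n) for j in range(i, n))

assert upper_triangular_dim(7) == 28  # the answer to the question asked
assert all(upper_triangular_dim(n) == n * (n + 1) // 2 for n in range(1, 20))
```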
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Is it better to play $\$1$ on $10$ lottery draws or $\$10$ on one lottery draw? If I had 10 dollars to spend on a 1 dollar lottery draw, would I have more chance of winning if I spent all 10 dollars in one draw or bought 1 dollar tickets for 10 separate draws?
Edit:
in terms of lottery definition, you pick 6 numbers from a pool of 49 numbers (1-49), that is classed as one lottery ticket. So each 1 dollar represents a selection of 6 numbers. Across multiple tickets you can pick the same numbers as appear on your previous tickets. If you are familiar with EuroMillions or UK Lotto, it's that kind of lottery.
http://www.national-lottery.co.uk/player/p/lotterydrawgames/lotto.ftl
Edit 2:
Let me re-phrase the question. The probability of winning the jackpot in the lottery is 1 in 13,983,816.
Would buying 10 tickets for one draw change those odds to 10 in 13,983,816 ? and if so is that better than playing in 10 different draws at 1 in 13,983,816 odds each?
| This is a simple binomial problem.
Use the formula:
$$
\Bbb P(X=k) ={n \choose {k}}\cdot p^k\cdot (1-p)^{n-k},
$$
where $p$ is the probability of success.
For all ten dollars in $1$ draw, $n=1, k=1$.
For ten different draws and exactly $1$ success, $n=10, k=1$.
For ten different draws, and at least $1$ success, use $1-\Bbb P(X=0)$ where $n=10$.
You will find it is better to put all $10$ dollars in $1$ lottery draw.
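To see the comparison concretely, here is a short Python sketch (my own; it assumes the 10 tickets bought for the single draw are all distinct, as the question's "10 in 13,983,816" phrasing suggests):

```python
from math import comb

# p: probability a single ticket wins the 6-of-49 jackpot
p = 1 / comb(49, 6)                   # 1 / 13,983,816

ten_in_one_draw = 10 * p              # 10 distinct tickets in a single draw
ten_separate_draws = 1 - (1 - p)**10  # at least one win across 10 draws

assert comb(49, 6) == 13_983_816
assert ten_in_one_draw > ten_separate_draws  # one draw is (very slightly) better
```

The margin is tiny, of order $\binom{10}{2}p^2$, but it is always in favour of the single draw.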
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 7,
"answer_id": 6
} |
Can One use mathematica or maple online? Is it possible to use some of these algebra packages online ?
I have some matrices that I would like to know the characteristic polynomial of.
Where could I send them to get a nicely factorised answer ?
My PC is very slow & it would be nice to use someone elses super powerful computer !
Any suggestions
| I'll add this as an answer since I can't see how to make it render well in a comment.
You can do this in Sage as Graphth points out, but you do not need to import numpy. You can instead write
F.<x> = PolynomialRing(CC)   # univariate polynomials over the complex field CC
M = Matrix([[0,-1],[1,1]])
F(M.charpoly()).factor()     # coerce the characteristic polynomial into F, then factor
If you want your characteristic polynomial to factor over another field, you can simply replace CC with that field. The real numbers are RR, and the finite field Z/pZ is Integers(p). If you want to factor over the rationals, it's even easier,
M = Matrix([[0,-1],[1,1]])
M.charpoly().factor()
should give you what you want.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 7,
"answer_id": 1
} |
Given a simple graph and its complement, prove that either of them is always connected. I was tasked to prove that when given 2 graphs $G$ and $\bar{G}$ (complement), at least one of them is a always a connected graph.
Well, I always post my attempt at solution, but here I'm totally stuck. I tried to do raw algebraic manipulations with # of components, circuit ranks, etc, but to no avail.
So I really hope someone could give me a hint on how to approach this problem.
| Additionally, this is easy to see if you simply look at the adjacency matrix of the graph.
More explanation: after reordering the vertices, the adjacency matrix of a disconnected graph is block diagonal. Now think about its complement: if two vertices were in different connected components of the original graph, then they are adjacent in the complement; if two vertices were in the same connected component of the original graph, then a $2$-path through any vertex of another component connects them in the complement.
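This claim is easy to test by brute force; the following Python sketch (mine, using a plain iterative graph search) checks it on random 8-vertex graphs:

```python
from itertools import combinations
import random

def is_connected(vertices, edges):
    # iterative depth-first search over an edge set of (u, v) tuples
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return seen == set(vertices)

random.seed(0)
V = set(range(8))
all_pairs = set(combinations(sorted(V), 2))
for _ in range(200):
    E = {e for e in all_pairs if random.random() < 0.3}
    comp = all_pairs - E
    # at least one of G, complement(G) must be connected
    assert is_connected(V, E) or is_connected(V, comp)
```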
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 9,
"answer_id": 4
} |
Binary random variables event-level independence implies random variable independence Can you help me in proving the following?
For two binary random variables X and Y, the event-level independence ($x^0 \perp y^0$) implies random variable independence $ X \perp Y$.
| It is a matter of a system of equations type argument. Using the law of total probability, and the fact that the event $(X=0)$ is the complement of the event $(X=1)$ (same for $Y$), then we know:
$$ P(X=0) = P(X=0|Y=0)\cdot{}P(Y=0) + P(X=0|Y=1)\cdot{}P(Y=1)$$
$$ P(X=0) = P(X=0)\cdot{}P(Y=0) + P(X=0|Y=1)\cdot{}(1-P(Y=0))$$
Note that I used the known independence assumption to simplify the first term on the RHS going from line one to line two.
Now solve this expression for $P(X=0|Y=1)$
$$ P(X=0 | Y=1) = \frac{P(X=0) - P(X=0)P(Y=0)}{1-P(Y=0)} = P(X=0)$$
Thus, we now also know that $x^{0}$ is independent of $y^{1}$ too. Now, just repeat the same thing with $P(Y=1)$ with the new knowledge about $y^{1}$ and $x^{0}$ to get the result for $y^{1}$ and $x^{1}$.
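The algebra above can be spot-checked numerically (a sketch of mine): impose only $P(X=0,Y=0)=P(X=0)P(Y=0)$ on a joint distribution of two binary variables and verify that the other three cells factorize as well.

```python
import random

random.seed(1)
for _ in range(100):
    px0 = random.uniform(0.05, 0.95)  # P(X=0)
    py0 = random.uniform(0.05, 0.95)  # P(Y=0)
    # impose only event-level independence of (X=0) and (Y=0)
    p00 = px0 * py0
    p01 = px0 - p00          # P(X=0, Y=1)
    p10 = py0 - p00          # P(X=1, Y=0)
    p11 = 1 - p00 - p01 - p10
    # full independence of X and Y then follows for the other cells
    assert abs(p01 - px0 * (1 - py0)) < 1e-12
    assert abs(p10 - (1 - px0) * py0) < 1e-12
    assert abs(p11 - (1 - px0) * (1 - py0)) < 1e-12
```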
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
On the construction of the polynomial for the step of contradiction in Hilbert's basis Theorem I don't understand a step in this proof of the Hilbert Basis Theorem. Here is the proof Planeth Math.
I don't understand why $ \mathrm{deg} (f_{N+1}-g)< \mathrm{deg}(f_{N+1}) $. This can only happen if $ g $ has the same degree as $ f_{N+1} $ and also the same leading term, but I don't know why.
| I'll walk you through the proof (and will answer your question in a comment to this answer):
We claim that if $R$ is a Noetherian ring then so is the ring of polynomials $R[x]$.
Proof:
Our idea to prove this is to show that an arbitrary ideal of $R[x]$ is finitely generated.
To this end, let $I$ be an ideal of $R[x]$. Now we construct a sequence of polynomials $f_k$ as follows: Pick $f_1$ to be of minimal degree in $I$. Pick $f_{k+1}$ to be of minimal degree in $I \setminus \langle f_1 , \dots , f_k \rangle$. Note that by construction, $\mathrm{deg}(f_{k+1}) \geq \mathrm{deg}(f_i)$ for all $i \in \{1, \dots, k\}$.
Now if $f_k (x) = r_n x^n + \dots + r_1x + r_0$ then we denote by $a_k$ its leading coefficient $r_n$. Consider the set $J = \{a_1, a_2, \dots \}$. It's easy to see that this an ideal in $R$: If you add two of its elements $a_i + a_j$ then $a_i + a_j$ is the leading coefficient of $f_i + f_j$ and hence $J$ is closed with respect to addition. Also, if you multiply $a_i$ with an element in $R$, $ra_i$ is the leading coefficient of $r f_i$, so $J$ is also closed with respect to multiplication by elements of $R$. By assumption, $R$ is Noetherian, hence $J$ is finitely generated: $J = \langle a_1, \dots , a_N \rangle$.
We claim that $I = \langle f_1, \dots, f_N \rangle$.
By construction, $I \supset \langle f_1, \dots, f_N \rangle$.
For the other direction of inclusion assume that $I \supsetneq \langle f_1, \dots, f_N \rangle$. Then by the way we constructed $f_k$ we have $f_{N+1} \in I \setminus \langle f_1, \dots, f_N \rangle$. Now we construct a polynomial $g(x)$ with the same leading coefficient as $f_{N+1}$ as follows:
Note that $a_{N+1} = \sum_{i=1}^N s_i a_i$ for some $s_i \in R$, since $J$ is generated by $a_1, \dots, a_N$. Define $g(x) := \sum_{i=1}^N s_i f_i (x) x^{\mathrm{deg}(f_{N+1}) - \mathrm{deg} (f_i)} $. Then $g(x)$ has leading coefficient $a_{N+1}$ and the same degree as $f_{N+1}$. Hence $f_{N+1} - g(x)$ has degree strictly less than $f_{N+1}$ and so must be in $\langle f_1, \dots, f_N \rangle$, for otherwise it would be an element of $I \setminus \langle f_1, \dots, f_N \rangle$ of smaller degree, contradicting the minimality of the degree of $f_{N+1}$. So we have $ f_{N+1}(x) - g(x) \in \langle f_1, \dots, f_N \rangle$.
By construction, we have $g(x) \in \langle f_1, \dots, f_N \rangle$. Therefore we also have $f_{N+1} \in \langle f_1, \dots, f_N \rangle$. But this is a contradiction since $f_{N+1} \in I \setminus \langle f_1, \dots, f_N \rangle$.
So we can conclude that we must have $I \subset \langle f_1, \dots, f_N \rangle$ and hence equality.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Standard way to divide numbers of base other than 10. I have some homework where I am supposed to divide two numbers that are base 5 or 3.
And I did it. I basically converted the numbers to decimal, divided, and then convert the result to the original base.
That seems to work. But also seems a bit silly when I look at it. The reason I did such conversions is because I am not very sure of the fastest, simplest way to actually do such divisions. So, what do you do?
| You can also use the ordinary long division algorithm, provided that you know (or can quickly work out) the single-digit multiplication and subtraction tables for the base in which you’re working. To divide $12343_{\text{five}}$ by $24_{\text{five}}$, for instance:
     234
   -----
24)12343
   103
   ---
    204
    132
    ---
     223
     211
     ---
      12
That is, the quotient is $234_{\text{five}}$, and the remainder is $12_{\text{five}}$. In base ten I’ve divided $973$ by $14$ to get a quotient of $69$ and a remainder of $7$.
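You can confirm the worked example with a few lines of Python (my own sketch; the helper `to_base` is hand-rolled, not standard library):

```python
def to_base(n, b):
    # render a non-negative integer n as a digit string in base b
    digits = []
    while n:
        n, r = divmod(n, b)
        digits.append(str(r))
    return ''.join(reversed(digits)) or '0'

dividend = int('12343', 5)   # 973 in base ten
divisor = int('24', 5)       # 14 in base ten
q, r = divmod(dividend, divisor)
assert (dividend, divisor, q, r) == (973, 14, 69, 7)
assert (to_base(q, 5), to_base(r, 5)) == ('234', '12')
```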
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
orthonormal basis in $l^{2}$ I need an orthonormal basis in $l^{2}$. One possible choice would be to take as such the sequences $\{1,0,0,0,...\}, \{0,1,0,0,...\}, \{0,0,1,0,...\}$, but I need a basis where only finitely many components of the basis vectors are zero. Does anyone know a way to construct such a basis? One possible vector for such a basis would be $\{1,1/2,1/3,1/4,...\}$ divided by its norm. However, I don't know how to find similar vectors that are orthogonal to this one and to each other.
| You might try this: take
$$
\eqalign
{
x_1&\textstyle=( \rlap{1}\quad , \rlap{1\over2}\quad,\rlap{1\over3}\quad , \rlap{1\over4}\quad , \rlap{1\over5}\quad ,\ldots)\cr
x_2&\textstyle=( \rlap{\alpha_1}\quad , \rlap{1\over2}\quad,\rlap{1\over3}\quad , \rlap{1\over4}\quad , \rlap{1\over5}\quad ,\ldots)\cr
x_3&\textstyle=( \rlap{0}\quad , \rlap{\alpha_2}\quad,\rlap{1\over3}\quad , \rlap{1\over4}\quad , \rlap{1\over5}\quad ,\ldots)\cr
x_4&\textstyle=( \rlap{0}\quad , \rlap{0}\quad,\rlap{\alpha_3}\quad , \rlap{1\over4}\quad , \rlap{1\over5}\quad ,\ldots)\cr
}
$$
$$
\vdots
$$
for appropriately chosen scalars $\alpha_i$. Then normalize.
Note $e_1\in\text{span}\{x_1, x_2\}$, $e_2\in\text{span}\{x_1, x_2, x_3\}$,
$e_3\in\text{span}\{x_1, x_2, x_3,x_4\}$, $\ldots\,$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
How to find root of Polynomials? I'm studing polynomials; I have this exercise:
Find the irreducible factors of the polynomial $x^4-2x^2-3 \in \mathbb{Z}_5[x]$
I think in this way: I need to find the roots of the polynomial. A root of the polynomial is a number such that $f(x) = x^4-2x^2-3=0$ in $\mathbb{Z}_5$, which means $x^4-2x^2-3 \equiv 0 \pmod 5$, i.e. $5\mid x^4-2x^2-3$. The only values of $x$ that make this possible are $x=2$ (in fact $f(2)=5$ and $5\mid 5$) and $x=3$ (in fact $f(3)=60$ and $5\mid 60$). I know that this is a correct way, but how can I find roots if I'm in $\mathbb{Z}_{430}$? Obviously I can't try this 430 times. Again: what if I'm in $\mathbb{R}[x]$?
Anyway, the next step is to divide the polynomial by $x-i$ where $i$ are my roots. So
$$\frac{x^4-2x^2-3}{x-2}= x^3+2x^2+2x+4$$
$$\frac{x^3+2x^2+2x+4}{x-3} = x^2+2$$
Since $x^2+2$ is irreducible, the factorization is $(x-2)(x-3)(x^2+2)$. Is this right?
| Hint $\ $ Over a domain, $\rm\: x^2 - (n-1)\:x - n\:$ has roots $\rm\:a,b\iff ab\: =\: -n,\:\ a+b\: =\: n-1.\:$ Alternatively grouping $\rm\ x\:(x-n)\: +\: x-n\:$ makes the factorization clear.
To find the solutions over $\rm\:\mathbb Z/m\:$ use the Chinese Remainder Theorem (CRT) to combine the solutions from $\rm\:\mathbb Z/p^k\:$ for all primes $\rm\:p\ |\ m,\:$ noting that for all odd primes $\rm\:p\:$ we have $\rm\: p^k\ |\ (x^2-n)(x^2+1)\iff p^k\ |\ x^2-n\:$ or $\rm\: p^k\ |\ x^2\!+1,\:$ since if $\rm\:p\:$ divides both factors then it also divides their difference $\rm\:x^2+1-(x^2-n) = n+1 = 2^J,\:$ contra $\rm\:p\:$ odd.
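For a field as small as $\mathbb{Z}_5$, a brute-force check of the factorization in the question is a one-liner (my own sketch; note the pointwise comparison is a sanity check over the 5 field elements, not a proof of polynomial identity):

```python
p = 5
f = lambda x: (x**4 - 2 * x**2 - 3) % p

# roots in Z_5 by direct search -- feasible because the field is tiny
roots = [x for x in range(p) if f(x) == 0]
assert roots == [2, 3]

# check (x-2)(x-3)(x^2+2) agrees with f at every point of Z_5
g = lambda x: ((x - 2) * (x - 3) * (x**2 + 2)) % p
assert all(f(x) == g(x) for x in range(p))

# x^2 + 2 has no root mod 5, so being quadratic it is irreducible
assert all((x**2 + 2) % p != 0 for x in range(p))
```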
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Complex number: equation I would like an hint to solve this equation: $\forall n\geq 1$
$$\sum_{k=0}^{2^n-1}e^{itk}=\prod_{k=1}^{n}\{1+e^{it2^{k-1}}\} \qquad \forall t \in \mathbb{R}.$$
I went for induction but without to much success; I will keep trying, but if you have an hint...
Many thanks.
| Let us consider the polynomials $P = \sum_{k=0}^{2^n -1 } X^k $ and $P_2 = \prod_{k=1}^{n} (1+X^{2^{k-1}})$. There is a fully combinatorial proof of this result. Hint: if you expand the product and examine the powers of $X$ that appear, you will find exactly the binary expansions of the integers between $0$ and $2^{n}-1$. If you would like me to complete the proof, just ask.
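For what it's worth, the identity is easy to test numerically with $z=e^{it}$ (a quick sketch of mine using cmath):

```python
import cmath

def lhs(n, t):
    # sum_{k=0}^{2^n - 1} e^{itk}
    return sum(cmath.exp(1j * t * k) for k in range(2**n))

def rhs(n, t):
    # prod_{k=1}^{n} (1 + e^{it 2^{k-1}})
    prod = 1
    for k in range(1, n + 1):
        prod *= 1 + cmath.exp(1j * t * 2**(k - 1))
    return prod

for n in (1, 2, 3, 5):
    for t in (0.0, 0.7, 2.3, -1.1):
        assert abs(lhs(n, t) - rhs(n, t)) < 1e-9
```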
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What's the difference between tuples and sequences? Both are ordered collections that can have repeated elements. Is there a difference? Are there other terms that are used for similar concepts, and how are these terms different?
| The difference seems to be:
*
*A psychological difference: people often think about the concepts differently.
*A difference in the way people encode these when reducing everything to set theory. This is probably never a useful thing to do except when what you're doing is set theory.
Revised version six years later: In a sequence, the linear order in which things appear is essential information. In a tuple, the roles played by the different components are what is essential.
Thus a tuple may specify: longitude, latitude, point in time, temperature, humidity, barometric pressure. You could list the numbers in a different order and correspondingly list those labels in a different order, and you'd still have the same tuple, but not the same sequence.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 8,
"answer_id": 1
} |
Double Subsequences Suppose that $\{a_{n}\}$ and $\{b_{n}\}$ are bounded. Prove that $\{a_{n}b_{n}\}$ has a convergent subsequence.
In class this is how my professor argued:
By the Bolzano-Weierstrass Theorem, there exists a subsequence $\{a_{n_k}\}$ that converges to $a$. Since $\{b_n\}$ is bounded, $\{b_{n_k}\}$ is also bounded. So by the Bolzano-Weierstrass Theorem, there exists a subsequence of $\{b_{n_k}\}$ namely $\{b_{n_{{k_j}}}\}$ such that $\{b_{n_{{k_j}}}\}$ converges to $b$.
In particular $\{a_{n_{{k_j}}}\}$ will converge to $a$. And note that $\{a_{n_{{k_j}}}b_{n_{{k_j}}}\}$ is a subsequence of $\{a_{n}b_{n}\}$. So $a_{n_{{k_j}}}b_{n_{{k_j}}} \to ab$.
My question is why do we have to use so many subsequences. Is it wrong to argue as follows?
$\{a_{n}\},\{ b_{n} \}$ are both bounded, so by the Bolzano-Weierstrass Theorem, both sequences have a convergent subsequence. Namely $a_{n_k} \to a$ and $b_{n_k} \to b$. Then note that $\{a_{n_k}b_{n_k}\}$ is a subsequence of $\{a_{n}b_{n}\}$ which converges to $ab$. And we are done.
| You can actually use fewer subsequences by means of a different proof, which Aryabhata is hinting at in his comment:
Proof. Sequences $(a_n)_{n=1}^\infty$ and $(b_n)_{n=1}^\infty$ are bounded, so there exist $A\geq0$ and $B\geq0$ such that $|a_n|\leq A$ and $|b_n|\leq B$ for all $n\in\mathbb{N}$. But from this we immediately see that $|a_nb_n|\leq AB$ for all $n\in\mathbb{N}$. So $(a_nb_n)_{n=1}^\infty$ is a bounded sequence as well. Therefore by the Bolzano-Weierstrass theorem it has a convergent subsequence. $\square$
If you prove it the way your professor did, however, you have to use many sequences. To see why, I suggest you try the following. First prove:
Lemma. Suppose $(a_n)_{n=1}^\infty$ and $(b_n)_{n=1}^\infty$ are bounded sequences and $(a_n)_{n=1}^\infty$ is convergent. Then $(a_nb_n)_{n=1}^\infty$ has a convergent subsequence.
(Hint: one of the sequences is convergent, so you don't have to worry about indices properly aligning. Can you see why?)
After you have proved this, the professor's proof will look like this:
Proof. Sequences $(a_n)_{n=1}^\infty$ and $(b_n)_{n=1}^\infty$ are bounded, so there exists a convergent subsequence $(a_{n_k})_{k=1}^\infty$ of the sequence $(a_n)_{n=1}^\infty$. This means that $(a_{n_k})_{k=1}^\infty$ and $(b_{n_k})_{k=1}^\infty$ are bounded sequences and $(a_{n_k})_{k=1}^\infty$ is convergent. So we can use the lemma on them, telling us that some subsequence of $(a_{n_k}b_{n_k})_{k=1}^\infty$ is convergent. Since a subsequence of a subsequence is again a subsequence of the original sequence, this completes the proof. $\square$
I hope this clears things up a bit.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Partial derivative involving trace of a matrix Suppose that I have a symmetric Toeplitz $n\times n$ matrix
$$\mathbf{A}=\left[\begin{array}{cccc}a_1&a_2&\cdots& a_n\\a_2&a_1&\cdots&a_{n-1}\\\vdots&\vdots&\ddots&\vdots\\a_n&a_{n-1}&\cdots&a_1\end{array}\right]$$
where $a_i \geq 0$, and a diagonal matrix
$$\mathbf{B}=\left[\begin{array}{cccc}b_1&0&\cdots& 0\\0&b_2&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&b_n\end{array}\right]$$
where $b_i = \frac{c}{\beta_i}$ for some constant $c>0$ such that $\beta_i>0$. Let
$$\mathbf{M}=\mathbf{A}(\mathbf{A}+\mathbf{B})^{-1}\mathbf{A}$$
Can one express a partial derivative $\partial_{\beta_i} \operatorname{Tr}[\mathbf{M}]$ in closed form, where $\operatorname{Tr}[\mathbf{M}]$ is the trace operator?
| Expanding $\mathbf A(\mathbf A + \mathbf B + \mathbf E)^{-1}\mathbf A$ in $\mathbf E$ yields $\mathbf A(\mathbf A + \mathbf B)^{-1}\mathbf A-\mathbf A(\mathbf A + \mathbf B)^{-1}\mathbf E(\mathbf A + \mathbf B)^{-1}\mathbf A$ up to first order. Thus
$$
\begin{eqnarray}
\frac{\partial\operatorname{Tr}[M]}{\partial\beta_i}
&=&
-\operatorname{Tr}\left[\mathbf A(\mathbf A + \mathbf B)^{-1}\frac{\partial\mathbf B}{\partial\beta_i}(\mathbf A + \mathbf B)^{-1}\mathbf A\right]
\\
&=&
-\operatorname{Tr}\left[\frac{\partial\mathbf B}{\partial\beta_i}(\mathbf A + \mathbf B)^{-1}\mathbf A\mathbf A(\mathbf A + \mathbf B)^{-1}\right]
\\
&=&
\frac c{\beta_i^2}\left((\mathbf A + \mathbf B)^{-1}\mathbf A\mathbf A(\mathbf A + \mathbf B)^{-1}\right)_{ii}\;.
\end{eqnarray}
$$
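A finite-difference spot check of the final formula (my own sketch; the test matrix is an arbitrary symmetric Toeplitz matrix, shifted by a multiple of the identity to keep $\mathbf A+\mathbf B$ comfortably invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 5, 2.0
a = rng.uniform(0.1, 1.0, n)
# symmetric Toeplitz: A[i, j] = a[|i - j|], entries nonnegative
A = np.array([[a[abs(i - j)] for j in range(n)] for i in range(n)])
A += n * np.eye(n)  # diagonal shift so A + B is strictly diagonally dominant

def trace_M(beta):
    B = np.diag(c / beta)
    S = np.linalg.inv(A + B)
    return np.trace(A @ S @ A)

beta = rng.uniform(0.5, 2.0, n)
S = np.linalg.inv(A + np.diag(c / beta))
K = S @ A @ A @ S  # (A + B)^{-1} A A (A + B)^{-1}
h = 1e-6
for i in range(n):
    analytic = (c / beta[i]**2) * K[i, i]
    bp = beta.copy(); bp[i] += h
    bm = beta.copy(); bm[i] -= h
    numeric = (trace_M(bp) - trace_M(bm)) / (2 * h)  # central difference
    assert abs(analytic - numeric) < 1e-5 * max(1.0, abs(numeric))
```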
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Formula for calculating residue at a simple pole. Suppose $f=P/Q$ is a rational function and suppose $f$ has a simple pole at $a$. Then a formula for calculating the residue of $f$ at $a$ is
$$
\text{Res}(f(z),a)=\lim_{z\to a}(z-a)f(z)=\lim_{z\to a}\frac{P(z)}{\frac{Q(z)-Q(a)}{z-a}}=\frac{P(a)}{Q'(a)}.
$$
In the second equality, how does the $Q(z)-Q(a)$ appear? I only see that it would equal $\lim_{z\to a}\frac{P(z)}{\frac{Q(z)}{z-a}}$.
| Because $a$ is a zero of $Q(z)$; i.e., $$Q(a) = 0.$$
So, $$\operatorname{Res}_{a}f(z)=\lim_{z\to a}(z-a)f(z) =\lim_{z\to a}\frac{P(z)}{\frac{Q(z)}{z-a}} =\lim_{z\to a}\frac{P(z)}{\frac{Q(z)-0}{z-a}} = \lim_{z\to a}\frac{P(z)}{\frac{Q(z)-Q(a)}{z-a}}= \frac{P(a)}{Q'(a)}.$$
Then this is a "trick" to compute residue for a simple pole.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Russell Paradox and set theories The Russell paradox arise in the Cantor set theory, but it can be avoided in the $ZF$ and in $NGB$ axiomatic set theory. Are there other axiomatic set theories in which this paradox can be avoided? Thanks.
| As mentioned in the comments above, the so-called Russell paradox is a consequence of the unrestricted Comprehension Schema, according to which for any formula $\varphi(x)$ with $x$ free, $\{x\mid\varphi(x)\}$ is a set. This paradox is actually an instance of a logical truth ($R$ is a binary predicate):
$$
\neg\exists x\forall y(yRx\iff \neg yRy)\,.
$$
In light of this, assuming unrestricted comprehension in a language with at least one binary predicate always yields an inconsistent theory. Thus if you want to build a set theory based on classical logic you must restrict the schema one way or another.
EDIT: This addresses Asaf's question below (I should have written it before it was asked). As I wrote above, if you build a system of set theory you must restrict the Comprehension Schema. All such restrictions I am aware of allow you to avoid falling into inconsistency via the Russell paradox.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Equivalence of Logarithm Definitions As discussed in this question, there are many different approaches to defining the natural logarithm function. In particular, since the exponential function
$$
\exp(x) := \sum_{k=0}^{\infty}\frac{x^k}{k!}
$$
is strictly increasing, its inverse exists and by definition
$$
\ln(x) := \exp^{-1}(x).
$$
On the other hand, then natural logarithm can also be defined through
$$
\ln(x) := \int_1^{x}\frac{1}{t}dt.
$$
What is not at all obvious to me is how these two definitions are equivalent. So, my first question is, how is it that these two definitions are equivalent? A related question is, if one wanted to modify either of these definitions to account for a base other than $e$, how would one proceed? Note that a reference that discusses these topics is perfectly acceptable answer.
| I guess this is worth writing down. By your definition we have $\text{exp}(\log(x)) = x$. Differentiating gives
$$\text{exp}(\log(x)) \log'(x) = x \log'(x) = 1$$
hence $\log'(x) = \frac{1}{x}$. The desired result then follows by the fundamental theorem of calculus and the observation that $\log(1) = 0$.
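Not part of the original answer, but the equivalence is easy to check numerically. A minimal sketch in Python (the helper name `ln_via_integral` and the step count are my choices): approximate $\int_1^x \frac{dt}{t}$ with the trapezoidal rule and compare against `math.log`, the inverse of `math.exp`.

```python
import math

def ln_via_integral(x, steps=100_000):
    """Approximate ln(x) as the integral of 1/t from 1 to x (trapezoidal rule)."""
    h = (x - 1) / steps
    total = 0.5 * (1.0 + 1.0 / x)       # endpoint terms 1/1 and 1/x
    for k in range(1, steps):
        total += 1.0 / (1 + k * h)
    return total * h

# agrees with math.log, the inverse of math.exp (the first definition)
for x in (0.5, 2.0, 10.0):
    assert abs(ln_via_integral(x) - math.log(x)) < 1e-6
```

As for other bases: with either definition one can set $\log_b(x) = \ln(x)/\ln(b)$, and the two definitions remain equivalent base by base.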
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How do you calculate probability of rolling all faces of a die after n number of rolls?
Possible Duplicate:
Expected time to roll all 1 through 6 on a die
Probability of picking all elements in a set
I'm pretty new to Stack Exchange; I posted this in statistics, then discovered this site and thought it was much more appropriate, so here I go again:
It is fairly easy to figure out what is the average rolls it would take to roll all faces of a die [1 + 6/5 + 6/4 + 6/3 + 6/2 + 6/1 = 14.7], but that got me thinking of a seemingly more complicated problem.
Say you roll a die 1-5 times: the odds of ALL faces showing are obviously 0. If you roll a die 6 times, the odds of all faces showing can easily be calculated like so:
1 * (5/6) * (4/6) * (3/6) * (2/6) * (1/6) = .0154 or 1.54%
Now is where I get stuck. How do I handle 7 or more rolls, and calculate it for general $n$?
Any tips are helpful!
| The probability of not rolling a 1 in $n$ rolls is $(5/6)^n$, and similarly for not rolling a $2,\ldots,6$. Now, $6(5/6)^n$ would be the sum of the probabilities of missing each of $1,\ldots,6$, but we would be double counting the rolls that miss both of two faces, say both 1 and 2. The probability of not rolling either of two specified numbers in $n$ rolls is $(4/6)^n$, and there are $\binom{6}{2}$ pairs of numbers. But if we subtract these out we undercount the rolls that avoid three numbers. This generalizes to the inclusion-exclusion principle, giving us
$$\binom{6}{1}(5/6)^n-\binom{6}{2}(4/6)^n+\binom{6}{3}(3/6)^n-\binom{6}{4}(2/6)^n+\binom{6}{5}(1/6)^n$$
as the probability of missing at least one number in $n$ rolls. The probability of rolling all of them is just 1 minus this probability.
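As a quick sanity check (my addition, not part of the original answer), the inclusion-exclusion formula is a one-liner to evaluate; for $n = 6$ it reproduces the $720/6^6 \approx 1.54\%$ computed in the question, and for $n = 5$ it gives 0.

```python
from math import comb

def p_all_faces(n):
    """Probability that n rolls of a fair die show every face at least once."""
    # inclusion-exclusion over which faces are missed entirely
    miss = sum((-1) ** (k + 1) * comb(6, k) * ((6 - k) / 6) ** n
               for k in range(1, 6))
    return 1 - miss

assert abs(p_all_faces(6) - 720 / 6**6) < 1e-12   # the 1.54% from the question
assert abs(p_all_faces(5)) < 1e-9                  # impossible with only 5 rolls
```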
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Homomorphisms from $S_{5}$ to $\mathbb{Z}_{12}$: what does the ID subgroup mean? I'm trying to find the number of homomorphisms from $S_{5}$ to $\mathbb{Z}_{12}$, meaning:
$S_{5}$ $\longrightarrow$ $\mathbb{Z}_{12}$
I'm using the 1st Isomorphism theorem .
$G/Ker(f)≅Im(f)$
So , we can have the following subgroups for Ker :
*
*Id
*$A_{5}$
*$S_{5}$
For the first: $|Im(f)| = 120/1 = 120$; but by Lagrange's theorem $|Im(f)|$ divides $|\mathbb{Z}_{12}| = 12$, and $120$ does not divide $12$, so this can't happen.
Question: what does the ID subgroup mean? Why is its size 1?
Regards
| Firstly, I am not sure what your question that comes under the title "Question" is. Let me try my best at it:
*
*The "ID" subgroup you seem to be interested in is called the identity subgroup, which for an abstract group $G$ is the singleton set $I=\{e_G\}$ where $e_G$ is the identity element of $G$. It is routine to check that $I$ is actually a subgroup of $G$.
For example, in $S_5$, $Id_{S_5}=(1)(2)(3)(4)(5)$ is the identity permutation, which fixes all five of the symbols that $S_5$ permutes.
Your approach is right, but you fail to observe that, if $a \mid b$, then $a \le b$. This will get you out of those unnecessary contradictions.
Let $f$ be a homomorphism from $S_5$ to $\Bbb Z_{12}$. Then, you have following restrictions on $f$:
*
*The first isomorphism theorem, together with Lagrange's theorem tells you that, $$|S_5/\operatorname {Ker} f|~~ \mbox{$=$} ~~|\operatorname{Im} f| ~~~\mbox{divides}~~ |\Bbb Z_{12}|$$
So, for knowing the cardinality of $\operatorname{Ker} f$, notice that you need to find all those $x$ such that $$\dfrac{120}{x} \mid 12$$ Firstly, this in particular means that, $\dfrac {120} x \le 12$. So, you have that, $x \ge 10$.
As you know, the only normal subgroups of $S_5$ are $1=\{\operatorname{Id}_{S_5}\}, A_5$ and $S_5$, so you are forced to conclude that $\operatorname{Ker} f \in \{A_5, S_5\}$.
So,...
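To finish the count the answer stops short of (this completion is mine, not the original author's): each allowed kernel yields exactly one homomorphism, because $\Bbb Z_{12}$ is cyclic and so has exactly one subgroup of each order dividing 12. A sketch of the arithmetic:

```python
# A sketch of the counting argument only (not a brute-force search over maps).
normal_subgroup_orders = [1, 60, 120]        # |{Id}|, |A5|, |S5|

# Lagrange: |S5| / |Ker f| = |Im f| must divide |Z12| = 12
allowed = [x for x in normal_subgroup_orders if 12 % (120 // x) == 0]
assert allowed == [60, 120]                  # Ker f is A5 or S5

# Z12 is cyclic, so it has exactly one subgroup of each order dividing 12;
# hence one homomorphism per allowed kernel: the "sign" map onto {0, 6}
# (kernel A5) and the trivial map (kernel S5).
num_homs = len(allowed)
assert num_homs == 2
```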
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Proving that a natural number is divisible by $3$ I am trying to show that $n^2 \bmod 3 = 0$ implies $n \bmod 3 = 0$.
This is part of a calculus course and I don't know anything about number theory. Any ideas how it can be done? Thanks!
| The natural way to think about the problem: since $n^2$ is divisible by 3, the prime factorization of $n^2$ contains at least one factor of 3 (because 3 is prime). But the primes appearing in $n^2$ are exactly the primes appearing in $n$ (each with doubled multiplicity), so the prime factorization of $n$ must contain a 3 as well.
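A brute-force illustration (my addition, not needed for the proof): checking the implication, and its contrapositive $3 \nmid n \Rightarrow n^2 \equiv 1 \pmod 3$, for small $n$.

```python
# check the statement and its contrapositive for small n
for n in range(1, 1000):
    if (n * n) % 3 == 0:
        assert n % 3 == 0            # the claim: 3 | n^2  =>  3 | n
    else:
        assert (n * n) % 3 == 1      # 3 does not divide n  =>  n^2 = 1 (mod 3)
```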
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Analysis Problem: Prove $f$ is bounded on $I$ Let $I=[a,b]$ and let $f:I\to {\mathbb R}$ be a (not necessarily continuous) function with the property that for every $x∈I$, the function $f$ is bounded on a neighborhood $V_{d_x}(x)$ of $x$. Prove that $f$ is bounded on $I$.
Thus far I have that,
Suppose, for contradiction, that $f$ is unbounded on $I$. Then for all $n∈\mathbb{N}$ there exists $x_n∈[a,b]$ such that $|f(x_n)|>n$. By the Bolzano-Weierstrass theorem, since $I$ is bounded, the sequence $X=(x_n)$ has a convergent subsequence $X'=(x_{n_r})$ converging to some $c$. Since $I$ is closed and the elements of $X'$ belong to $I$, it follows from a previous theorem that I proved that $c∈I$. Here is where I get stuck: I want to use the hypothesis that $f$ is bounded on a neighborhood $V_{d_x}(x)$ of each $x$ to show that $f$ is bounded on $I$, but I'm not sure how to proceed.
Here "$f$ is bounded on a neighborhood of $c$" means there exist a $d$-neighborhood $V_d(c)$ of $c$ and a constant $M>0$ such that $|f(x)|\leq M$ for all $x$ in $I ∩ V_d(c)$.
I would like to try a proof by contradiction somehow.
| Proof by contradiction:
Suppose $f$ is unbounded on $I$. Then there exists a sequence $(Z_n)$ in $I$ with $|f(Z_n)|\to +\infty$; by Bolzano-Weierstrass (and since $I$ is closed) some subsequence converges to a point $x_0\in I$. Every neighbourhood $V_{x_0}$ of $x_0$ contains infinitely many of the $Z_n$, so $f$ is unbounded on $V_{x_0}$, and that contradicts the fact that $f$ is locally bounded over $I$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 3
} |
If the preimage of $(-\infty,a]$ under $f$ is closed for every $a \in \mathbb{R}$, then $f$ is semi-continuous. So I've been thinking about this for the last two hours, but I am stuck.
Suppose $f:X \to \mathbb{R}$ where $X$ is a topological space.
$f$ is said to be semicontinuous if for any $x \in X$ and $\epsilon > 0$, there is a neighborhood of $x$ such that $f(x) - f(x') < \epsilon$ for all $x'$ in the neighborhood of $x$.
The question is: if $f^{-1}((-\infty,a])$ is closed for every $a \in \mathbb{R}$, then $f$ is lower semi-continuous.
I started with choosing an $x \in f^{-1}((-\infty,a])$ and letting $\epsilon > 0$. So far I don't know much characterization of closed sets in a topological space except it is the complements of open sets.
Not sure if this is correct, but I approached this problem with the idea of nets. Since $f^{-1}((-\infty,a])$ is closed, for each $x \in f^{-1}((-\infty,a])$ there's a net $\{x_i\}_{i \in I}$ in $f^{-1}((-\infty,a])$ that converges to $x$ (not sure if I'm allowed to do that). Pick any neighborhood of $x$, denoted by $N_x$ (which will contain terms from $f^{-1}((-\infty,a])$), which contains an open set containing $x$. Let $f(x) = b$. Then $N_{x'}:=[N_x \backslash f^{-1}(-\infty, b)] \backslash [\mathrm{boundary \ of \ this \ set \ to \ the \ left}]$. So this gives me an open set that contains $x$ and such that $f(x) - f(x') < \epsilon$ for all $x' \in N_{x'}$. So $f$ must be semi-continuous.
Not sure if there is more to know about closed sets in a topological space, except its complement is open.
Any hint on how to think about this is appreciated.
| Here I showed that $f$ is lower semi-continuous whenever $f^{-1}((\alpha,\infty))$ is open for all real $\alpha$. Here we have that $f^{-1}((-\infty,\alpha])$ is closed, so $f^{-1}((\alpha,\infty))=f^{-1}((-\infty,\alpha]^c)=(f^{-1}((-\infty,\alpha]))^c$ is the complement of a closed set, and thus is open.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What does proving the Collatz Conjecture entail? From the get go: i'm not trying to prove the Collatz Conjecture where hundreds of smarter people have failed. I'm just curious.
I'm wondering where one would have to start in proving the Collatz Conjecture. That is, based on the nature of the problem, what's the starting point for attempting to prove it? I know that it can be represented in many forms as an equation (that you'd have to recurse over):
$$\begin{align*}
f(n) &=
\left\{
\begin{array}{ll}
n/2 &\text{if }n \bmod2=0 \\
3n+1 &\text{if }n \bmod2=1
\end{array}
\right.\\
\strut\\
a_i&=
\left\{
\begin{array}{ll}
n &\text{if }i=0\\
f(a_{i-1})&\text{if }i>0
\end{array}
\right.\\
\strut\\
a_i&=\frac{1}{2}a_{i-1} - \frac{1}{4}(5a_{i-1} + 2)\left((-1)^{a_{i-1}} - 1\right)
\end{align*}$$
Can you just take the equation and go from there?
Other ways I thought of would be attempting to prove for only odd or even numbers, or trying to find an equation that matches the graph of a number vs. its "Collatz length"
I'm sure there's other ways; but I'm just trying to understand what, essentially, proving this conjecture would entail and where it would begin.
| The problem comes down to just these two points:
1) is there a loop?
2) is there a sequence that increases without bound?
However, another way to solve it would be to show there cannot be two distinct < families >. Up to quite high values of n, we know empirically that starting with any given n less than that value, repeatedly choosing n/2 for even n and 3n+1 for odd n gives a sequence ending in 1.
Call this set of sequences ending in 1 the < terminate-in-1 family >.
The task then amounts to testing if there can be a < deviant family > where either a loop or a sequence increasing without bound would amount to a deviant family. Then the challenge (in order to prove Collatz/Ulam/Thwaites correct) is to show that any other family of sequences must somewhere produce a number that is within the terminate-in-1 family.
The terminate-in-1 family contains, and if it existed any deviant family would contain, infinitely many natural numbers each.
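The empirical half of this argument, that every starting value up to some bound lands in the terminate-in-1 family, is a few lines to check. A sketch (mine; the bound $10^4$ is arbitrary, and of course this proves nothing for larger $n$):

```python
def collatz_length(n):
    """Number of steps for n to reach 1 under n/2 (n even) / 3n+1 (n odd)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# every n checked here belongs to the terminate-in-1 family
assert all(collatz_length(n) >= 0 for n in range(1, 10_000))
assert collatz_length(27) == 111   # a famously long trajectory among small n
```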
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 6,
"answer_id": 4
} |
Convergence of $\sum_1^\infty \ln (\frac{3+n^p}{2+n^p})$ I am able to prove divergence for $p<0$ or $p=0$.
How can I prove convergence/divergence for $p>0$.
| Let $x_n=\log((3+n^p)/(2+n^p))$. This is $x_n=\log(1+y_n)$ with $y_n=1/(2+n^p)$.
If $p\lt0$, $y_n\to1/2$ and if $p=0$, $y_n\to1/3$. In both cases, $y_n\geqslant y^*$ for every $n$ with $y^*\gt0$ hence $x_n\geqslant\log(1+y^*)\gt0$ for every $n$ large enough and $\sum\limits_nx_n$ diverges.
If $p\gt0$, $y_n\to0$ hence $y_n/2\leqslant\log(1+y_n)\leqslant y_n$ for every $n$ large enough. Since $y_n\leqslant1/n^p$ for every $n$ and $y_n\geqslant1/(2n^p)$ for every $n$ large enough, this shows that the series $\sum\limits_nx_n$ behaves like $\sum\limits_n1/n^p$ hence it diverges for every $0\lt p\leqslant1$ and it converges for every $p\gt1$.
Finally, the series $\sum\limits_nx_n=\sum\limits_n\log((3+n^p)/(2+n^p))$ converges if and only if $p\gt1$.
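A numerical illustration (mine, and of course not a proof): partial sums of $x_n$ settle down for $p=2$ but keep growing for $p=1$, mirroring the comparison with $\sum 1/n^p$.

```python
from math import log

def partial_sum(p, N):
    """Partial sum of log((3 + n^p) / (2 + n^p)) for n = 1..N."""
    return sum(log((3 + n**p) / (2 + n**p)) for n in range(1, N + 1))

# p = 2: the tail beyond N = 1000 is already tiny (the series converges)
assert partial_sum(2, 2000) - partial_sum(2, 1000) < 1e-2
# p = 1: the same tail still contributes about log 2 (the series diverges)
assert partial_sum(1, 2000) - partial_sum(1, 1000) > 0.5
```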
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
idempotent matrix and Jordan form A little time ago I proved this exercise:
If $B$ is a matrix $n\times n$ and $B^2=B$ ($B$ is idempotent) then the matrix $(I_n-B)$ is idempotent. Also show that the eigenvalues of $B$ are $\{0,1\}$ if $B\neq 0$ and $B\neq I_n$.
Now, what can I say about the Jordan form of $B$?
Thanks for your help.
| In linear algebra, a linear transformation (matrix) $B$ such that $B^2=B$ is always a projection. The only eigenvalues are $0$ and $1$, and the space decomposes as $\mathrm{null}(B)\oplus\mathrm{Im}(B)$. In particular, $B$ is always diagonalizable, so the Jordan canonical form of $B$ is diagonal, with $\mathrm{nullity}(B)$ zeros and $\mathrm{rank}(B)$ ones.
To verify this: note that $\mathrm{null}(B)$ is the eigenspace of $0$, and $\mathrm{Im}(B)$ is the eigenspace of $1$ (since $\mathbf{z}\in\mathrm{Im}(B)$ implies $B(\mathbf{z})=\mathbf{z}$). Since the dimension of $\mathbf{F}^n$ equals the sum of the dimensions of the eigenspaces, $B$ is diagonalizable.
(Note: The fact that if $B$ is an idempotent then so is $1-B$ holds in far more generality: it's true in any ring, since $(1-B)(1-B) = 1-B-B+B^2 = 1-2B+B = 1-B$. In fact, this is one of the key ingredients in showing that a direct decomposition of a ring corresponds to the existence of a nontrivial central idempotent.)
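To make the diagonal Jordan form concrete, here is a hand-rolled $2\times2$ example (my sketch, pure Python): the projection $P = vv^{T}/\lVert v\rVert^2$ onto the span of $v=(3,4)$ is idempotent, and since $\operatorname{tr}P = 1$ and $\det P = 0$, its characteristic polynomial is $\lambda^2-\lambda$, with roots $0$ and $1$ as claimed.

```python
v = (3.0, 4.0)
s = v[0]**2 + v[1]**2                        # |v|^2 = 25
P = [[v[i] * v[j] / s for j in range(2)] for i in range(2)]   # P = v v^T / |v|^2

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P2 = matmul(P, P)
assert all(abs(P2[i][j] - P[i][j]) < 1e-12
           for i in range(2) for j in range(2))   # P^2 = P (idempotent)

tr = P[0][0] + P[1][1]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
# char. poly is x^2 - tr*x + det = x^2 - x, so the eigenvalues are 0 and 1
assert abs(tr - 1) < 1e-12 and abs(det) < 1e-12
```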
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Multiplying exponents with variables inside Why is
$$(-1)^n(2^{n+2}) = (-2)^{n+2} ?$$
My thinking is that $(-1)^n \times 2^{n+2}$ should be $-2^{2n+2}$, but clearly this is not the case. Why is the variable essentially ignored? Is there a special rule of multiplication I'm unaware of?
| This is due to the fact that $(-1)^{n+2}$ is the same thing as $(-1)^n$. Hence, $(-1)^n2^{n+2}=(-1)^{n+2}2^{n+2}$, and by the properties of exponents, this is equal to $(-2)^{n+2}$.
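A quick numerical check (my addition) of both the identity and why the guessed exponent $2n+2$ can't be right:

```python
for n in range(10):
    # the identity from the question: (-1)^n * 2^(n+2) = (-2)^(n+2)
    assert (-1)**n * 2**(n + 2) == (-2)**(n + 2)
    # the guessed exponent 2n+2 has the wrong magnitude for every n > 0
    if n > 0:
        assert abs((-1)**n * 2**(n + 2)) != 2**(2 * n + 2)
```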
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Coercivity vs boundedness of operator The definitions of coercivity and boundedness of a linear operator $L$ between two $B$ spaces look similar: $\lVert Lx\rVert\geq M_1\lVert x\rVert$ and $\lVert Lx\rVert\leq M_2\lVert x\rVert$ for some constants $M_1$ and $M_2$. Thus in order to show the existence of a solution to a PDE $Lu=f$, one needs to show that $L$ is coercive. But what if my operator $L$ happens to be bounded with $M_2 \leq M_1$?
What is the intuition behind those two concepts because they are based on computation of the same quantities and comparing the two?
| The number $\inf_{x\neq 0}\frac{\lVert Lx\rVert}{\lVert L\rVert\cdot\lVert x\rVert}$ measures how injective $L$ is (when $L=0$ it's not well defined).
Consider $R(x):=\lVert Lx\rVert$ for $x$ on the unit sphere. Coercivity means that $R(x)\geq M_1>0$ for some constant $M_1$, and in particular $L$ is injective (but $L$ can be unbounded, for example if $B=\ell^2(\Bbb R)$ and $Le_n=ne_n$, $n\geq 1$).
A bounded operator doesn't need to be coercive, for example $L\equiv 0$.
To get an intuition: coercivity means that the vectors of the unit sphere are mapped a positive distance away from $0$, and this distance is independent of the point. Boundedness measures how far from $0$ the vectors of the unit ball can be mapped.
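A finite-dimensional toy (my sketch) makes both constants visible: for a diagonal operator on $\Bbb R^2$, the best $M_1$ and $M_2$ are just the smallest and largest diagonal entries (so $M_1 \le M_2$ always), and coercivity fails exactly when a diagonal entry is $0$, i.e. when the operator is not injective.

```python
import math

# A diagonal operator on R^2 standing in for L: Lx = (a*x1, b*x2).
a, b = 0.5, 2.0
M1, M2 = min(a, b), max(a, b)   # coercivity and boundedness constants

# sample the unit sphere and check M1*|x| <= |Lx| <= M2*|x|
for k in range(360):
    t = 2 * math.pi * k / 360
    x = (math.cos(t), math.sin(t))          # unit vector
    Lx = (a * x[0], b * x[1])
    norm = math.hypot(*Lx)
    assert M1 - 1e-9 <= norm <= M2 + 1e-9
```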
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Recurrence $T(n)=2T([n/2]+17)+n$ and induction. Show that the solution to
$$T(n) = 2T\left(\biggl\lfloor \frac n 2 \biggr\rfloor+17\right)+n$$
is $\Theta(n \log n)$?
So the induction hypothesis (with $\le$ rather than $=$, since we are proving an upper bound) is
$$ T \left( \frac n 2 \right) \le c\cdot \frac n2 \cdot \log \frac n2.$$
Hence,
$$ T(n) \le 2c \cdot \frac n2 \cdot \log \frac n2 + 17 + n $$
but how do I continue from here?
| Hint: Now you want to prove that the right side is less than $cn \log n$. The $2$'s cancel nicely. Now write $\log \frac n2 = \log n - \log 2$. If $c$ is large enough you can take care of the $n$ term, and if $n$ is large enough the $17$ won't matter.
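A numerical sanity check of the $\Theta(n\log n)$ claim (my sketch; the base case $T(n)=n$ for $n\le 40$ is a hypothetical choice, since the recurrence doesn't specify one, and any constant-bounded base works because $\lfloor n/2\rfloor + 17 < n$ once $n \ge 36$):

```python
from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n):
    if n <= 40:                      # hypothetical base case
        return n
    return 2 * T(n // 2 + 17) + n    # the recurrence from the question

# the ratio T(n) / (n log2 n) stays pinched between fixed constants
for k in (12, 16, 20):
    n = 2 ** k
    assert 0.5 < T(n) / (n * log2(n)) < 4
```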
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |