How to prove that $\sum\limits_{n=1}^\infty\frac{(n-1)!}{n\prod\limits_{i=1}^n(a+i)}=\sum\limits_{k=1}^\infty \frac{1}{(a+k)^2}$ for $a>-1$? A problem on my (last week's) real analysis homework boiled down to proving that, for $a>-1$,
$$\sum_{n=1}^\infty\frac{(n-1)!}{n\prod\limits_{i=1}^n(a+i)}=\sum_{k=1}^\infty \frac{1}{(a+k)^2}.$$ Mathematica confirms this is true, but I couldn't even prove the convergence of the original series (the one on the left), much less demonstrate that it equaled this other sum; the ratio test is inconclusive, and the root test and others seem hopeless. It was (and is) quite a frustrating problem. Can someone explain how to go about tackling this?
| Since at least J. M. asked for it, here's another solution for the case
when $a$ is a natural number.
I'll use the forward difference operator $\Delta$, defined by
$\Delta f(n) = f(n+1) - f(n)$, and the falling factorial defined
by
$$
n^{\underline{a}} =
\begin{cases}
n(n-1)(n-2) \dots (n-a+1), & a > 0, \\
1, & a=0 \\
\frac{1}{(n+1)(n+2) \dots (n+|a|)}, & a < 0,
\end{cases}
$$
and satisfying $\Delta n^{\underline{a}} = a n^{\underline{a-1}}$.
The summand, which I'll denote by $F_a(n)$, can be rewritten as
$$
F_a(n)
= \frac{(n-1)!}{n\prod_{i=1}^n(a+i)}
= \frac{(n-1)! a!}{n (a+n)!}
= \frac{a!}{n \cdot n(n+1)(n+2) \dots (n+a)}
$$
$$= \frac{(a-1)!}{n} \left( -(-a) (n-1)^{\underline{-(a+1)}}\right)
= -\frac{(a-1)!}{n} \Delta\left( (n-1)^{\underline{-a}}\right).
$$
Using the rule $\Delta(f(n)g(n)) = \Delta f(n) \, g(n+1) + f(n) \Delta g(n)$,
we get
$$
F_a(n)
= - \Delta\left( \frac{(a-1)!}{n} (n-1)^{\underline{-a}}\right)
+ \Delta\left( \frac{(a-1)!}{n} \right) \, n^{\underline{-a}}
$$
$$
= - \Delta\left( \frac{(a-1)!}{n \cdot n (n+1) \dots (n+a-1)} \right)
+ (a-1)! \left( \frac{1}{n+1} - \frac{1}{n} \right) \frac{1}{(n+1)\dots (n+a)}
$$
$$= - \Delta\left( \frac{(a-1)!}{n \cdot n(n+1) \dots (n+a-1)} \right)
+ F_{a-1}(n+1) - (a-1)! \Delta\left( \frac{(n-1)^{\underline{-a}}}{-a} \right).
$$
Summing over $n \ge 1$ gives (because of telescoping in the sums-of-deltas)
$$
\sum_{n=1}^{\infty} F_a(n)
= \frac{(a-1)!}{1 \cdot a!} + \sum_{n=1}^{\infty} F_{a-1}(n+1)
- \frac{(a-1)!}{a}\, 0^{\underline{-a}}
$$
$$
= \frac{1}{a} + \sum_{m=2}^{\infty} F_{a-1}(m)
- \frac{1}{a^2}
$$
$$
= \sum_{m=1}^{\infty} F_{a-1}(m) - \frac{1}{a^2}
$$
(since $F_{a-1}(1) = 1/a$).
Finally, since $F_0(n) = 1/n^2$, we obtain after using this result to work our way down $a$ steps that
$$
\sum_{n=1}^{\infty} F_a(n)
= \sum_{n=1}^{\infty} F_{a-1}(n) - \frac{1}{a^2}
= \dots = \sum_{n=1}^{\infty} F_0(n) - \left( \frac{1}{a^2} + \dots + \frac{1}{1^2} \right)
= \sum_{n=a+1}^{\infty} \frac{1}{n^2}
= \sum_{k=1}^{\infty} \frac{1}{(k+a)^2}.
$$
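As a numerical sanity check on the identity (not a proof), one can truncate both series; a plain-Python sketch, using the term ratio $t_{n+1}/t_n = n^2/((n+1)(a+n+1))$ to avoid overflowing factorials:

```python
# Compare partial sums of the two series for a > -1.
# The left-hand terms satisfy t_1 = 1/(a+1) and
#   t_{n+1}/t_n = n^2 / ((n+1)(a+n+1)),
# which follows from t_n = (n-1)!/(n * prod_{i=1}^n (a+i)).
def lhs(a, N=100_000):
    total, t = 0.0, 1.0 / (a + 1.0)   # t_1
    for n in range(1, N):
        total += t
        t *= n * n / ((n + 1) * (a + n + 1))
    return total + t                  # N terms in all

def rhs(a, N=100_000):
    return sum(1.0 / (a + k) ** 2 for k in range(1, N + 1))
```

For $a=0$ both sides reduce termwise to $\sum 1/n^2$, which makes a convenient consistency check; for non-integer $a$ the truncated sums agree to roughly the $1/N$ tail of the right-hand series.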
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/75681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 2,
"answer_id": 1
} |
Conditional expectation of $\max(X,Y)$ and $\min(X,Y)$ when $X,Y$ are iid and exponentially distributed I am trying to compute the conditional expectation $$E[\max(X,Y) | \min(X,Y)]$$ where $X$ and $Y$ are two iid random variables with $X,Y \sim \exp(1)$.
I already calculated the densities of $\min(X,Y)$ and $\max(X,Y)$, but I failed in calculating the joint density. Is this the right way? How can I compute the joint density then? Or do I have to use another ansatz?
| If $Z = \min(X,Y)$ and $W = \max(X,Y)$, then for $w > z$,
$$\begin{align*}
F_{Z,W}(z,w) &= P\{Z \leq z, W \leq w\}\\
&= P\left[\{X \leq z, Y \leq w\} \cup \{X \leq w, Y \leq z\}\right]\\
&= P\{X \leq z, Y \leq w\} + P\{X \leq w, Y \leq z\} - P\{X \leq z, Y \leq z\}\\
&= F_{X,Y}(z, w) + F_{X, Y}(w,z) - F_{X,Y}(z,z)
\end{align*}
$$
while for $w < z$,
$$\begin{align*}
F_{Z,W}(z,w) &= P\{Z \leq z, W \leq w\} = P\{Z \leq w, W \leq w\}\\
&= P\{X \leq w, Y \leq w\}\\
&= F_{X,Y}(w,w).
\end{align*}
$$
Consequently, if $X$ and $Y$ are jointly continuous
random variables, then
$$f_{Z,W}(z,w) = \frac{\partial^2}{\partial z \partial w}F_{Z,W}(z,w) =
\begin{cases}
f_{X,Y}(z,w) + f_{X,Y}(w,z), & \text{if}~w > z,\\
\\
0, & \text{if}~w < z.
\end{cases}
$$
The conditional density of $W$ given $Z = z$ is
$$
f_{W \mid Z}(w \mid z) = \frac{f_{Z,W}(z,w)}{f_Z(z)}
= \begin{cases}
\frac{f_{X,Y}(z,w) + f_{X,Y}(w,z)}{\int_z^{\infty} f_{X,Y}(z,w) + f_{X,Y}(w,z)\
\mathrm dw}, & w > z,\\
0, & w < z,
\end{cases}
$$
and so with $f_{X,Y}(x,y) = e^{-x-y}$ for $x, y \geq 0$
$$
\begin{align*}E[W \mid Z = z]
&= \frac{\int_z^\infty w\cdot f_{X,Y}(z,w) + w\cdot f_{X,Y}(w,z)\ \mathrm dw}{
\int_z^\infty f_{X,Y}(z,w) + f_{X,Y}(w,z)\ \mathrm dw}\\
&= \frac{\int_z^\infty w\cdot e^{-w-z} + w\cdot e^{-w-z}\ \mathrm dw}{
\int_z^\infty e^{-w-z} + e^{-w-z}\ \mathrm dw}\\
&= \frac{2e^{-z}\int_z^\infty w\cdot e^{-w}\ \mathrm dw}{
2e^{-2z}} = \frac{2e^{-z}[\left . (-we^{-w})\right\vert_z^{\infty}
+ \int_z^{\infty}e^{-w}\ \mathrm dw]}{2e^{-2z}}\\
&= 1 + z.
\end{align*}
$$
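The last computation can also be checked symbolically; a small sketch with sympy, reusing the derived conditional density $2e^{-w-z}$ on $w>z$:

```python
import sympy as sp

w, z = sp.symbols('w z', positive=True)

# Unnormalized conditional density of W given Z = z: 2*exp(-w-z) on w > z.
num = sp.integrate(w * 2 * sp.exp(-w - z), (w, z, sp.oo))
den = sp.integrate(2 * sp.exp(-w - z), (w, z, sp.oo))
cond_mean = sp.simplify(num / den)   # simplifies to z + 1
```

The answer $1+z$ also matches the memorylessness intuition: given $\min(X,Y)=z$, the excess $\max(X,Y)-z$ is again $\exp(1)$ distributed, with mean $1$.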
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/75732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 6,
"answer_id": 2
} |
What is step by step logic of pinv (pseudoinverse)? So we have a matrix $A$ of size $M \times N$ with elements $a_{i,j}$. What is a step-by-step algorithm that returns the Moore-Penrose inverse $A^+$ for a given $A$ (at the level of manipulations/operations with the elements $a_{i,j}$, not vectors)?
| As far as I know and from what I have read, there is no direct formula (or algorithm) for the (left) pseudo-inverse of $A$, other than the "famous formula"
$$(A^\top A)^{-1}A^\top$$
which is computationally expensive. I will describe here an algorithm that does indeed calculate it efficiently. As I said, I believe this may be previously unpublished, so cite me appropriately, or if you know of a reference please state it. Here is my algorithm.
Set $B_0 = A$ and for each iteration step, take a column of $B_i$ and orthogonalize against the columns of $A$. Here is the algorithm ($A$ has $n$ independent columns):
1. Initialize $B_0=A$.
2. For $j \in 1\ldots n$ do
\begin{align}
\mathbf{t} &= B_i^\top A_{\star j} \\
R\mathbf{t} &= e_j & \text{Find such $R$, an elementary row operation matrix}\\
B_{i+1}^\top &= R B_i^\top \\
\end{align}
Notice that each step does one vector/matrix multiplication and one elementary matrix row operation. This will require
much less computation than the direct evaluation of $(A^\top A)^{-1}A^\top$. Notice also that there is a nice opportunity here for parallel computation. (One more thing to notice--this may calculate a regular inverse, starting with $B_0=A$, or with $B_0=I$.)
The step that calculates $R$ may be partially orthogonal rather than elementary (orthonormal/unitary, and thus better in the sense of error propagation) if desired, but it should not make use of the previously orthonormalized rows of $B$, since they must remain orthogonalized in the process. If this is done, the process becomes a "cousin" of the QR algorithm, whereas before it would be considered a "cousin" of the LU factorization. "Cousin" because the matrix operations of those are self-referential, while this algorithm references the matrix $A$.
The following is a hopefully enlightening description. (See the first edits for a wordier version; I think this one is more concise and to the point.)
Consider the conformable matrix $B^\top$ such that $B$ has the same dimension as $A$ and
\begin{align}
B^\top A &= I \tag{1}\\
B &= A B_A \tag{2}\\
\end{align}
Here $B_A$ is arbitrary and exists only to ensure that the matrix $B$ shares the same column space as $A$. (1) states that $B^\top$ is a (not necessarily unique) left inverse for $A$. (2) states that $B$ shares the same column space as $A$.
Claim: $B^\top$ is the Moore-Penrose pseudoinverse for $A$.
Proof:
The Moore-Penrose pseudoinverse for $A$ is $$A^\dagger = \left(A^\top A\right)^{-1}A^\top$$
Given (1) and substituting (2) we have
\begin{align}
\left(AB_A\right)^\top A &= I \\
B_A^\top A^\top A &= I \\
B_A^\top &= \left( A^\top A\right)^{-1} \\
B_A &= \left( A^\top A\right)^{-1} \\
\end{align}
Now solving for $B$ from (2):
\begin{align}
B &= A\left(B_A\right) \\
B &= A\left( A^\top A\right)^{-1} \\
\Rightarrow B^\top &=\left( A^\top A\right)^{-1}A^\top \\
\end{align}
Q.E.D.
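For what it's worth, the equality of the "famous formula" with the Moore-Penrose pseudoinverse (for full column rank $A$) is easy to confirm numerically; a NumPy sketch (this checks the claim, not the proposed algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # full column rank with probability 1

left_inverse = np.linalg.solve(A.T @ A, A.T)   # (A^T A)^{-1} A^T
pinv = np.linalg.pinv(A)                       # SVD-based Moore-Penrose inverse

# The two agree for full-column-rank A, and B^T A = I as in (1).
```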
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/75789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
Showing $f^{-1}$ exists where $f(x) = \frac{x+2}{x-3}$ Let $f(x) = \dfrac{x + 2 }{x - 3}$.
There's three parts to this question:
*
*Find the domain and range of the function $f$.
*Show $f^{-1}$ exists and find its domain and range.
*Find $f^{-1}(x)$.
I'm at a loss for #2, showing that the inverse function exists. I can find the inverse by solving the equation for $x$, showing that it exists without just solving for the inverse. Can someone point me in the right direction?
| A function will be invertible (on a proper domain) if and only if it is injective, that is, if it never takes the same value twice. If your function is continuous and is defined on an interval, this forces it to be increasing or decreasing. Even without being defined on an interval we can use calculus to determine if the function is increasing/decreasing, which lets us solve the problem fairly simply.
We have that $f'(x)=\frac{(x-3)-(x+2)}{(x-3)^2}=\frac{-5}{(x-3)^2}$. This is negative except where it is undefined, and so the function is decreasing, at least locally. Unfortunately, because we have the discontinuity at $x=3$, we can't say immediately that the function is injective. However, we do know that it never takes the same value twice on either side of the discontinuity. This, combined with limits, is all we need:
Since $\displaystyle \lim_{x\to -\infty}f(x)=\lim_{x\to \infty}f(x)=1$, we must have that $f(x)<1$ if $x<3$ and $f(x)>1$ if $x>3$. Since no number is both less than 1 and greater than 1, we can't have $f(x_1)=f(x_2)$ for $x_1<3$ and $x_2>3$. Therefore $f(x)$ is injective, and hence invertible on a suitable domain (namely on $(-\infty,3)\cup (3,\infty)$.)
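For part 3 (finding $f^{-1}$ explicitly, separate from the existence argument above), solving $y=(x+2)/(x-3)$ for $x$ gives $f^{-1}(x)=(3x+2)/(x-1)$; a sympy sketch confirming it is a two-sided inverse away from the excluded points:

```python
import sympy as sp

x = sp.symbols('x')
f = (x + 2) / (x - 3)
f_inv = (3 * x + 2) / (x - 1)   # obtained by solving y = (x+2)/(x-3) for x

# Both compositions simplify back to x (away from x = 1 and x = 3).
comp1 = sp.simplify(f.subs(x, f_inv))
comp2 = sp.simplify(f_inv.subs(x, f))
```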
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/75839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Evaluate the triple integral $\iiint\limits_D z \ dV$ over this domain $D$ Evaluate the integral $$ \iiint \limits_D z \ dV ,$$ where $D$ is the region bounded by the planes $y = 0$, $x = 0$, $z = 0$, $z = 1$, and the cylinder $x^2+y^2=1$ with $x,y \ge 0$.
| Usually the toughest part of these problems is finding the limits of integration. If we concentrate on just the $xy$-plane for a moment we can find the limits of integration for $x$ and $y$. The region in the $xy$-plane over which you are integrating is the region bounded by the circle $x^2+y^2=1$ in the first quadrant. Along the $x$-axis this region runs from $x=0$ to $x=1$. If we pick a particular $x$, then $y$ will run from $y=0$ to $y=\sqrt{1-x^2}$. So, now we've got bounds on $x$ and $y$. As for $z$, that certainly runs from $z=0$ to $z=1$. This gives all the bounds and the integral is
$$
\int_0^1\int_0^1\int_0^{\sqrt{1-x^2}}z\,dy\,dx\,dz.
$$
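For reference, the integral evaluates to $\pi/8$: the inner $y$ integration gives $z\sqrt{1-x^2}$, the quarter-disc contributes area $\pi/4$, and $\int_0^1 z\,dz = 1/2$. A symbolic check of the bounds above:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', nonnegative=True)

# Integrate z over the quarter-cylinder: y from 0 to sqrt(1-x^2),
# x from 0 to 1, z from 0 to 1.
val = sp.integrate(z, (y, 0, sp.sqrt(1 - x**2)), (x, 0, 1), (z, 0, 1))
# val equals pi/8
```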
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/75894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why is polynomial regression considered a kind of linear regression? Why is polynomial regression considered a kind of linear regression?
This is what I mean by polynomial regression. For example, the hypothesis function is
$$h(x; t_0, t_1, t_2) = t_0 + t_1 x + t_2 x^2 ,$$
and the sample points are
$$ (x_1, y_1), (x_2, y_2), \ldots$$
| This is a form of linear regression because it takes the form
$$h(x)=\sum_i t_if_i(x)\;,$$
which is a linear combination of functions $f_i(x)$ and is amenable to a solution using only linear algebra. The non-linearity of the functions $f_i(x)$ doesn't complicate the solution; it enters only in calculating the values $f_i(x_j)$, and everything is then linear in these values. What's important is that the function is linear in the parameters $t_i$; otherwise these need to be determined by non-linear optimization.
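Concretely, the fit is ordinary least squares against the design matrix whose columns are the values $f_i(x_j)$; a NumPy sketch (the data here are invented for illustration):

```python
import numpy as np

# Sample points from the quadratic h(x) = 1 + 2x + 3x^2 (noise-free).
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 1 + 2 * xs + 3 * xs**2

# Design matrix with columns f_0(x)=1, f_1(x)=x, f_2(x)=x^2:
# from here on everything is linear in the parameters t_i.
F = np.column_stack([np.ones_like(xs), xs, xs**2])
t, *_ = np.linalg.lstsq(F, ys, rcond=None)
# t recovers the coefficients (1, 2, 3)
```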
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/75959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
} |
The tricky time complexity of the permutation generator I ran into tricky issues in computing time complexity of the permutation generator algorithm, and had great difficulty convincing a friend (experienced in Theoretical CS) of the validity of my reasoning. I'd like to clarify this here.
Tricky complexity question Given a positive integer $n$, what is the time complexity of generating all permutations on the set $[n]=\{1,2,..,n\}$?
Friend's reasoning Any algorithm to generate all permutations of $[n]$ takes $\Omega(n!)$ time. This is a provable, super-exponential lower bound, [edited] hence the problem is in EXPTIME.
My reasoning The above reasoning is correct, except that one should compute the complexity with respect to the number of expected output bits. Here, we expect $n!$ numbers in the output, and each can be encoded in $\log n$ bits; hence we expect $b=O(n!\log n)$ output bits. A standard algorithm to traverse all $n!$ permutations will take a polynomial time overhead i.e. it will execute in $s(n)=O(n!n^k)$ time, hence we will need $t(n)=b(n)+s(n) = O(n!(\log n + n^k)) $ time in all.
Since $b(n)$ is the number of output bits, we will express $t(n)$ as a function of $b(n)$. To do so, note that $n^k \approx (n!)^{k/n}$ using $n! \approx n^n$; so $s(n)=O( b(n) (b(n))^{k/n}) = O(b^2(n) )$ . Hence we have a polynomial time algorithm in the number of output bits, and the problem should be in $P$, not in say EXPTIME.
Main Question : Whose reasoning is correct, if at all?
Note I raised this problem here because I had a bad experience at StackOverflow with a different tricky time complexity problem; and this is certainly not suited for Cstheory.SE as it isn't research level.
| In the analysis of algorithms there are different cost models.
The most common ones are (quote from Wikipedia):
*
*the uniform cost model, also called uniform-cost measurement (and similar variations), assigns a constant cost to every machine
operation, regardless of the size of the numbers involved
*the logarithmic cost model, also called logarithmic-cost measurement (and variations thereof), assigns a cost to every
machine operation proportional to the number of bits involved
I guess the complexities you gave use a different model and therefore can not be compared, they can both be correct.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Pappus's theorem Let $S$ be a surface of revolution in $\mathbb{R}^3$ (2-dimensional) and let $C$ be its generating curve. Let $s$ be its arc length. Let $x = x(s)$ be the distance from a point on the curve to the $Oz$ axis ($C$ lies in the $xz$ plane and we rotate it around $Oz$).
In my DG book, it says that
$Area(S) = 2x \int_0^l \pi(s)\, ds$, where $l$ is the length of the curve, although it says nowhere what this $\pi$ function is.
I've been able to prove, using the theory formulated in the book that
$Area(S) = 2\pi \int_0^l x(s) ds $ where this $\pi$ is the regular 3.14... $\pi$
The proof looks pretty correct to me. Is it safe to assume that the book was printed wrong?
thanks in advance.
| Yes, it was a typo. For the correct statement, see p. 101, Exercise 11 in Section 2-5 of the book "Differential Geometry of Curves and Surfaces" by do Carmo.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
If G is a group of order n=35, then it is cyclic I've been asked to prove this.
In class we proved this when $n=15$, but our approached seemed unnecessarily complicated to me. We invoked Sylow's theorems, normalizers, etc. I've looked online and found other examples of this approach.
I wonder if it is actually unnecessary, or if there is something wrong with the following proof:
If $|G|=35=5\cdot7$ , then by Cauchy's theorem, there exist $x,y \in G$ such that $o(x)=5$, $o(y)=7$. The order of the product $xy$ is then $\text{lcm}(5,7)=35$. Since we've found an element of $G$ of order 35, we conclude that $G$ is cyclic.
Thanks.
| As a concrete example, consider the single-cycle permutations $(1,2,3,4,5)$ and $(1,2,3,4,5,6,7)$, with orders $5$ and $7$, respectively. Their product is the cycle $(1,3,5,2,4,6,7)$ of order $7$.
On the other hand, pick two arbitrary axes in $\mathbb R^3$ and consider the groups of five-fold and seven-fold rotation symmetry about these axes. These are cyclic groups of orders $5$ and $7$, respectively, but the product of two elements, one from each group, is generally not a rotation through a rational multiple of $\pi$, and is thus generally of infinite order. You can see this by varying one of the axes; then the rotation angle of the product varies continuously with the orientation of the axis, and thus by the intermediate value theorem takes on irrational multiples of $\pi$.
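The first paragraph's claim is easy to verify mechanically; a sketch (my own helper functions, composing left-to-right, i.e. applying the $5$-cycle first):

```python
def compose(p, q):
    # Apply p first, then q (left-to-right composition); permutations as dicts.
    keys = set(p) | set(q)
    return {k: q.get(p.get(k, k), p.get(k, k)) for k in keys}

def order(p):
    # Smallest positive power of p that equals the identity.
    current, n = dict(p), 1
    while any(current[k] != k for k in current):
        current, n = compose(current, p), n + 1
    return n

a = {1: 2, 2: 3, 3: 4, 4: 5, 5: 1}                 # (1,2,3,4,5), order 5
b = {1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 1}     # (1,2,3,4,5,6,7), order 7
prod = compose(a, b)
# order(prod) is 7, not lcm(5,7) = 35
```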
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 7,
"answer_id": 5
} |
What would be the radius of convergence of $\sum\limits_{n=0}^{\infty} z^{3^n}$? I know how to find the radius of convergence of a power series $\sum\limits_{n=0}^{\infty} a_nz^n$, but how does this apply to the power series $\sum\limits_{n=0}^{\infty} z^{3^n}$? Would the coefficients $a_n=1$, so that one may apply D'Alembert's ratio test to determine the radius of convergence? I would appreciate any input that would be helpful.
| In a nutshell, my advice would be to forget that this is a power series and that you learned something called the ratio test, and to remember that a series $\sum\limits_nx_n$ cannot converge unless $x_n\to0$. In your case, $x_n$ is a power of $z$ hence $(x_n)$ does not converge to zero if $|z|\geqslant1$. In the other direction, a simple comparison such as $|x_n|\leqslant|z|^n$ for every $|z|\leqslant1$ yields the result.
And you might wish to read this.
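A quick numerical illustration of both halves of the argument (purely a sanity check, not part of the proof):

```python
def partial_sum(z, N):
    # Partial sums of sum_n z^(3^n).
    return sum(z ** (3 ** n) for n in range(N))

# Inside |z| < 1 the terms are dominated by |z|^n, so partial sums stabilize:
inside = [partial_sum(0.5, N) for N in (5, 10, 15)]

# At z = 1 every term equals 1, so the terms do not tend to 0 and the
# partial sums grow without bound:
at_one = [partial_sum(1.0, N) for N in (5, 10, 15)]
```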
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Infinitely many $n$ such that $p(n)$ is odd/even?
We denote by $p(n)$ the number of partitions of $n$. There are infinitely many integers $m$ such that $p(m)$ is even, and infinitely many integers $n$ such that $p(n)$ is odd.
It might be proved by Euler's Pentagonal Number Theorem. Could you give me some hints?
| The theorem is due to O. Kolberg, Note on the parity of the partition function, Math. Scand. 7 1959 377–378, MR0117213 (22 #7995). The review by Mirsky says that the proof uses the Pentagonal Number Theorem, and is extremely short and simple.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
a circle graph is not a function? I'm a little confused by the rule: If you draw a vertical line that intersects the graph at more than 1 point then it is not a function.
Because then a circle like $y^2 + x^2 = 1$ is not a function?
And indeed if I rewrite it as $f(x) = \sqrt{1 - x^2}$ then Wolfram Alpha doesn't draw a circle. I guess I'm missing the intuition as to why this is though?
| Functions need to be well-defined as part of their definition, so for a given input there can only be one output.
$f(x,y)=x^2+y^2-1$ is a function of two variables, and the set of points at which this function equals $0$ is the unit circle.
However, writing $y^2+x^2=1$ as a function of $x$ alone cannot be done, as $x=\dfrac12$ admits two solutions ($y=\pm\sqrt{\dfrac34}$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 7,
"answer_id": 0
} |
Prove sequence $a_n=n^{1/n}$ is convergent How to prove that the sequence $a_n=n^{1/n}$ is convergent using definition of convergence?
| Noticing that $n^\frac{1}{n} > 1$ for all $n$, it all comes down to showing that for any $\epsilon > 0$, there is an $n$ such that $(1+\epsilon) \geq n^\frac{1}{n}$, or by rearranging, that
$$
(1+\epsilon)^n \geq n
$$
Now, let's first choose an $m$ such that $(1+\epsilon)^{m}$ is some number bigger than $2$; say, the smallest value greater than $3$ that you can get. From here, swap $m$ for $2m$. This will make the left side a little over $3$ times larger, and the right side $2$ times larger. The next doubling will still double the right side, but the left side will increase roughly $9$-fold. Repeating, we can easily see that the left side will at some point overtake the right side, and we have our $n$.
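The threshold idea is easy to probe numerically; a small sketch (illustration only, the doubling argument above is the actual proof):

```python
# n^(1/n) decreases toward 1 for large n; for a fixed eps > 0 the
# inequality (1 + eps)^n >= n, i.e. n^(1/n) <= 1 + eps, eventually holds.
values = [n ** (1 / n) for n in (10, 100, 10_000, 1_000_000)]

eps = 0.01
n = 10_000
holds = (1 + eps) ** n >= n   # True for this n
```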
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 4,
"answer_id": 2
} |
Simplify an expression to show equivalence I am trying to simplify the following expression I have encountered in a book
$\sum_{k=0}^{K-1}\left(\begin{array}{c}
K\\
k+1
\end{array}\right)x^{k+1}(1-x)^{K-1-k}$
and according to the book, it can be simplified to this:
$1-(1-x)^{K}$
I wonder how is it done? I've tried to use Mathematica (to which I am new) to verify, by using
Simplify[Sum[Binomial[K, k + 1]*x^(k + 1)*(1 - x)^(K - 1 - k), {k, 0, K - 1}]]
and Mathematica returns
$\left\{\left\{-\frac{K q \left((1-q)^K-q^K\right)}{-1+2 q}\right\},\left\{-\frac{q \left(-(1-q)^K+(1-q)^K q+(1+K) q^K-(1+2 K) q^{1+K}\right)}{(1-2 q)^2}\right\}\right\}$
which I cannot quite make sense of.
To sum up, my question is two-part:
*
*how is the first expression equivalent to the second?
*how should I interpret the result returned by Mathematica, presuming I'm doing the right thing to simplify the original formula?
Thanks a lot!
| It follows from the Binomial Formula, http://en.wikipedia.org/wiki/Binomial_theorem.
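Since the Mathematica output quoted in the question is hard to read, a direct numerical check may help; after reindexing $j=k+1$ the sum is the binomial expansion of $(x+(1-x))^K = 1$ minus its $j=0$ term $(1-x)^K$. A short Python sketch:

```python
from math import comb

def lhs(x, K):
    # Sum over k = 0, ..., K-1 of C(K, k+1) x^(k+1) (1-x)^(K-1-k).
    return sum(comb(K, k + 1) * x ** (k + 1) * (1 - x) ** (K - 1 - k)
               for k in range(K))

def rhs(x, K):
    return 1 - (1 - x) ** K
```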
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Check if a point is within an ellipse I have an ellipse centered at $(h,k)$, with semi-major axis $r_x$, semi-minor axis $r_y$, both aligned with the Cartesian plane.
How do I determine if a point $(x,y)$ is within the area bounded by the ellipse?
| Another way uses the definition of the ellipse as the set of points whose sum of distances to the foci is constant. The foci are at $(h+f, k)$ and $(h-f, k)$, where $f = \sqrt{r_x^2 - r_y^2}$. The sum of the distances (found by looking at the lines from $(h, k+r_y)$ to the foci) is $2\sqrt{f^2 + r_y^2} = 2 r_x$. So, for any point $(x, y)$, compute
$$\sqrt{(x-(h+f))^2 + (y-k)^2} + \sqrt{(x-(h-f))^2 + (y-k)^2}$$
and compare this with $2 r_x$. This takes more work, but I like using the geometric definition.
Also, for both methods, if speed is important (i.e., you are doing this for many points), you can immediately reject any point $(x, y)$ for which $|x-h| > r_x$ or $|y-k| > r_y$.
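A sketch of both membership tests (function names are mine; the focal version assumes $r_x \ge r_y$, so the foci lie on the horizontal axis):

```python
from math import sqrt, hypot

def inside_canonical(x, y, h, k, rx, ry):
    # Standard form: ((x-h)/rx)^2 + ((y-k)/ry)^2 <= 1.
    return ((x - h) / rx) ** 2 + ((y - k) / ry) ** 2 <= 1

def inside_foci(x, y, h, k, rx, ry):
    # Geometric definition, assuming rx >= ry (foci on the horizontal axis).
    f = sqrt(rx ** 2 - ry ** 2)
    d = hypot(x - (h + f), y - k) + hypot(x - (h - f), y - k)
    return d <= 2 * rx
```

The cheap bounding-box rejection mentioned above can be placed in front of either function.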
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "70",
"answer_count": 4,
"answer_id": 0
} |
Why does $a^b \bmod c=(a \bmod c)^b \bmod c$ This may be a very basic number theory question, but I don't understand:
Why does $$a^b \bmod c = (a \bmod c)^b \bmod c$$
| If you are comfortable with $x\equiv y\bmod n\Rightarrow xz\equiv yz \;(\bmod n)$, then
$a\equiv (a \bmod c)\; (\bmod c)$
$a^b\equiv (a\bmod c)^b\; (\bmod c)$
$a^b \,\bmod c=(a\bmod c)^b\bmod c$.
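This congruence is exactly why modular exponentiation may reduce the base first (and reduce after every multiplication); Python's built-in three-argument `pow` does this internally:

```python
a, b, c = 123456, 789, 1000

direct = (a ** b) % c        # reduce only at the end (huge intermediate value)
reduced = pow(a % c, b, c)   # reduce the base first, then square-and-multiply

# Both give the same residue, but the second never builds a 4000-digit number.
```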
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Experiences with Kumon We have enrolled our 5 year old son in Kumon which is an after school math and reading enrichment program of Japanese origin.
While he is learning lots of things (currently learning how to add i.e., 1+4=?, 2+4=? etc) I am concerned about the utility of this approach to teach math. For example, he already knows that 4+2=6 but when he sees 2+4=? he needs to again start counting to figure out the answer. I know that he is still too young to perhaps appreciate that 2+4 and 4+2 give the same answer.
But, the above example suggests that Kumon's method of teaching math may not necessarily enhance his understanding of math. He may come to view math as a bunch of rules to follow- an outcome which would be a complete disaster. So, given the above background my questions are:
*
*Does anyone know of any research/studies that have examined the effectiveness of Kumon? (A cursory google search did not turn up anything)
*Have any of you been part of the Kumon program? If so, how was your experience?
| A few years ago when I was in high school I worked part-time for the Kumon Learning Center as a teaching assistant. Generally, my experience with Kumon is that the learning is rote and done through the weekly worksheets assigned. Admittedly, some of the worksheets are quite cleverly designed, but the material is not of a theoretical nature. Although personally I feel that this would not affect children too much in the younger grades, where I don't really see a better alternative to learning simple arithmetic other than simple repetition, the theoretical understanding when it comes to upper mathematics like algebra or calculus can be a little lacking (although in Kumon's defense, some of the kids get really good, simply from the sheer amount of practice they've had).
I feel that if you have the time and energy to simply sit down with your son and teach him yourself, then Kumon is not needed. In reality, most of the money you pay is for the worksheets you get. If you do not have the time and energy and you want your son to have a solid grounding in arithmetic, then Kumon wouldn't hurt, although I would recommend an alternative when he gets into algebra-based maths.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
} |
Definition: transient random walk What exactly does a "transient random walk on a graph/binary tree" mean? Does it mean that we never return to the origin (assuming there is one as for the tree) or just any vertex of the graph or tree? Thanks.
| I am not so familiar with the terminology specifically for random walks on graphs, but since any random walk on a graph is a Markov chain, we can just refer to the Wikipedia article about Markov chains: a state is called transient if there is a non-zero probability of never returning to that state. The Markov chain is called transient if every state of it is transient.
Following your logic for the binary tree, it would be impossible to define transience of a random walk on an arbitrary digraph in the natural way, because there we may not have an origin.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Prove that: set $\{1, 2, 3, ..., n - 1\}$ is group under multiplication modulo $n$? Prove that:
The set $\{1, 2, 3, ..., n - 1\}$ is a group under multiplication modulo $n$ if and only if $n$ is a prime number without using Euler's phi function.
| $\Rightarrow$ is simple and proven multiple times above.
$\Leftarrow$: There is no need to rely on the Euclidian algorithm:
Let $1 \leq j \leq p-1$. Then by the pigeonhole principle, among the numbers $j, j^2, j^3,\dots, j^{p+1}$ there are two congruent modulo $p$.
Thus
$$j^k \equiv j^l \pmod p \, ;\, k< l \,.$$
This means $p \mid j^k(j^{l-k}-1)$. Since $p$ and $j$ are relatively prime it follows that
$$j^{l-k} \equiv 1 \pmod p \, ;\, l-k \geq 1 \,.$$
From here proving that this is a group is simple...
P.S.
Alternately you can also use the following argument:
For all $1 \leq j \leq p-1$, the function $f: \{ 1,2,3,\dots, p-1 \} \rightarrow \{ 1,2,3,\dots, p-1 \}$ given by $f(x) = jx \bmod p$ is well defined (prove it) and injective (prove it). Thus it has to be surjective.
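The surjectivity argument is easy to see computationally; a sketch for one small prime (illustration, not a proof):

```python
p = 11  # a small prime for illustration

for j in range(1, p):
    # f(x) = j*x mod p hits every element of {1, ..., p-1}: a bijection.
    image = {(j * x) % p for x in range(1, p)}
    assert image == set(range(1, p))

# In particular 1 is in every image, so each j has a multiplicative inverse.
inverses = {j: next(x for x in range(1, p) if (j * x) % p == 1)
            for j in range(1, p)}
```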
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 1
} |
What is the degree of a homeomorphism from $S^1$ to $S^1$? If $f : S^1 \to S^1$ is a homeomorphism, what is the degree of it?
I was told the answer is 1 or -1, but I can't prove it. Can anyone help me?
| One definition of degree of a map from $S^1$ to $S^1$ is to look at the image of $1\in \mathbb Z\cong \pi_1(S^1)$. Hence a map of degree $n$ induces multiplication by $n$ on $\pi_1(S^1)\cong \mathbb Z$. Now a homeomorphism $h$ has an inverse $g$ so that $gh=hg=id_{S^1}$. Applying $\pi_1$, $g_*h_*=h_*g_*=id_{\mathbb Z}$. So if $h_*(1)=\deg(h)$, this implies that $\deg(h)$ evenly divides $1$. So it is $\pm 1$.
The other notion of degree I know is to perturb $f$ to a smooth map, take a small neighborhood of a generic point $p$ and count the preimages of $p$ with sign determined by whether $f$ locally flips or preserves orientation. In this case, since a homeomorphism is injective, every point will have at most one preimage. I guess we need to check that a homeomorphism can be perturbed to a diffeomorphism for this argument to work.
Edit: I think I now know what the definition of degree is that the OP is considering. Namely, consider $f\colon I\to S^1$ with $f(0)=f(1)$ as representing a map from $S^1$ to $S^1$. Now lift $f$ to the universal cover to get $\tilde{f}\colon I\to \mathbb R$, where the covering projection $p\colon\mathbb R\to S^1$ is given by $p(t)=e^{2\pi i t}$. Then define $\deg(f)=\tilde{f}(1)-\tilde{f}(0)$. Now suppose $|\deg(f)|>1$. Then the image of $\tilde{f}$ contains one other lift, say $\tilde{f}(a)$, of $f(0)$ between $\tilde{f}(0)$ and $\tilde{f}(1)$. So $f=p\circ \tilde{f}$ is not injective since $f(a)=f(0)$. On the other hand, if $\deg(f)=0$, then either $\tilde{f}$, and hence $f$, is constant, or $\tilde{f}$ is not injective. But then $f=p\circ\tilde{f}$ is also not injective. So the only possibility for a homeomorphism is $\deg=\pm 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Factor the binomial $36 m^2 - \frac{25}{4} $ OK, I got a new one... it's $36 m^2 - \frac{25}4 $
and I got:
$(18m-\frac{5}2 )(18m+\frac{5}2 )$ although that is incorrect... where did I go wrong?
| I don't think you are right, since:
$$ 36 m^2 - \frac{25}4 = (6m)^2 - \left(\frac{5}2 \right)^2 = \left(6m + \frac{5}2 \right)\left(6m - \frac{5}2 \right) $$
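Expanding both candidate factorizations shows where the attempt went wrong: with $18m$ the leading term becomes $324m^2$, not $36m^2$. A short sympy check:

```python
import sympy as sp

m = sp.symbols('m')
target = 36 * m**2 - sp.Rational(25, 4)

correct = (6 * m + sp.Rational(5, 2)) * (6 * m - sp.Rational(5, 2))
attempt = (18 * m + sp.Rational(5, 2)) * (18 * m - sp.Rational(5, 2))

# expand(correct) matches the target; expand(attempt) gives 324*m**2 - 25/4.
```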
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
difference between implicit and explicit solutions? What is difference between implicit and explicit solution of an initial value problem?
Please explain with example both solutions(implicit and explicit)of same initial value problem? Or without example but in some way that is understandable.
thanks
| As requested:
Let's use the example initial-value problem
$$y^\prime y=-x,\qquad y(0)=r, \qquad r\text{ constant}$$
One can derive both an implicit and explicit solution for this DE. The implicit solution to this DE is
$$x^2+y(x)^2=r^2$$
This solution implicitly defines $y(x)$; all we have here is an equation involving $y(x)$. On the other hand, the explicit solution looks like
$$y(x)=\pm\sqrt{r^2-x^2}$$
and in this case, $y(x)$ is explicitly defined: $y(x)$ is expressed here as an explicit function with $x$ as the only independent variable.
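As a quick numerical sanity check (not part of the original answer), one can confirm in Python that the explicit branch $y(x)=\sqrt{r^2-x^2}$ satisfies $y'y=-x$, here with $r=2$ as an arbitrary example value:

```python
import math

r = 2.0
y = lambda x: math.sqrt(r**2 - x**2)   # explicit solution, + branch

# Check y'(x) * y(x) = -x at a few sample points via a central difference.
h = 1e-6
max_err = 0.0
for x in [0.5, 1.0, 1.5]:
    dy = (y(x + h) - y(x - h)) / (2 * h)   # numerical derivative
    max_err = max(max_err, abs(dy * y(x) + x))

# The implicit relation x^2 + y(x)^2 = r^2 holds by construction here.
print(max_err < 1e-6)   # True
```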
We aren't always this lucky when we solve differential equations that show up in practice. It often happens that we can only be content with an implicit solution (or a parametric solution, which is a somewhat better state of affairs than having just an implicit solution). One famous example is the differential equation that pops up in the brachistochrone problem:
$$(1+(y^\prime)^2)y=r^2$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 3,
"answer_id": 0
} |
Let $f:\mathbb{C} \rightarrow \mathbb{C}$ be entire and $\exists M \in\mathbb{R}: $Re$(f(z))\geq M$ $\forall z\in\mathbb{C}$. Prove $f(z)=$constant
Possible Duplicate:
Liouville's theorem problem
Let $f:\mathbb{C} \rightarrow \mathbb{C}$ be entire and suppose $\exists M \in\mathbb{R}: $Re$(f(z))\geq M$ $\forall z\in\mathbb{C}$. How would you prove the function is constant?
I am approaching it by attempting to show it is bounded then by applying Liouville's Theorem. But have not made any notable results yet, any help would be greatly appreciated!
| Since this seems like homework (if it is, you should use the homework tag), I will only give a hint.
Think about what the image domain of the function will look like. Can you postcompose $f$ with a simple holomorphic function on this domain (e.g. a Möbius transformation) such that the new function $g$ you obtain is bounded?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/76958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Calculate the expansion of $(x+y+z)^n$ The question that I have to solve asks: "How many terms are in the expansion?".
Depending on how you define "term" you can obtain two different formulas for the number of terms in the expansion of $(x+y+z)^n$.
Working with binomial coefficients I found that the general relation is $\binom{n+2}{n}$. However I'm having some difficulty providing proof for my statement.
The other way of seeing "term" is simply as the number of combinations you can take out of $(x+y+z)^n$, which would result in $3^n$.
Depending on what is the right interpretation, how can I provide proof for it?
| If you expand, every term will be of the type $x^iy^jz^{n-i-j}$, and obviously $0 \leq i, j \leq n$ and $i+j \leq n$.
You can get $i$ $x$'s from $n$ brackets in $\binom{n}{i}$ ways, and you can get $j$ $y$'s from the remaining $n-i$ brackets in $\binom{n-i}{j}$ ways. This leads to
$$(x+y+z)^n = \sum_{0 \leq i+j \leq n} \frac{n!}{i! j! (n-i-j)!} x^iy^jz^{n-i-j} \,.$$
The question you ask is actually much simpler:
How many terms of the type $x^iy^jz^{n-i-j}$ are there?
i.e. How many $i,j$ verify the relation $0 \leq i \leq n, 0 \leq j \leq n$ and $i+j \leq n$.
For each $i$ you have $n-i+1$ choices for $j$, so your answer is $\sum_{i=0}^n (n-i+1) = \frac{(n+1)(n+2)}{2} = \binom{n+2}{2} = \binom{n+2}{n}$, which confirms your formula. You don't need the more general binomial formula for this.
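A brute-force count in Python (an illustration added here, not part of the original answer) confirms that the number of distinct monomials equals $\binom{n+2}{2}$ for small $n$:

```python
from itertools import product

# Brute-force the expansion of (x+y+z)^n: each of the n factors contributes
# one of the three variables, so a term corresponds to a choice in {x,y,z}^n;
# collect the distinct exponent triples (i, j, n-i-j).
for n in range(1, 8):
    triples = set()
    for choice in product(range(3), repeat=n):
        triples.add((choice.count(0), choice.count(1), choice.count(2)))
    closed_form = (n + 1) * (n + 2) // 2          # = C(n+2, 2) = C(n+2, n)
    assert len(triples) == closed_form
print("distinct terms match C(n+2,2) for n = 1..7")
```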
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Reversibility of a Markov Chain Rephrased question: Is it ever possible for a reducible Markov chain to be reversible?
| Reversibility implies that if the Markov chain can go from $x$ to $y$ in finite time with positive probability, the same holds for $y$ and $x$. Hence the only way to be reducible is that there exist some states $x$ and $y$ such that the chain can go neither from $x$ to $y$ nor from $y$ to $x$. In other words there exists a partition of the state space such that the Markov chain starting from a state in a given class stays in this class forever with full probability and such that the Markov chain restricted to this class is irreducible.
In other words, one considers a collection $(Q_i)_i$ of reversible irreducible transition kernels on disjoint non empty state spaces $(S_i)_i$ with stationary measures $\mu_i$. The state space of the Markov chain is the union of the spaces $S_i$ and while in $S_i$, the chain uses the kernel $Q_i$. Every barycenter of the stationary measures $(\mu_i)_i$ is stationary and the chain is reversible but reducible as soon as there is more than one state space $S_i$.
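To make the construction concrete, here is a small Python check (an added illustration; the specific kernels are my own choice) of a chain built from two reversible irreducible blocks:

```python
# A reducible but reversible chain: two disjoint 2-state blocks, each a
# symmetric (hence reversible, with uniform stationary law) kernel, glued
# into one 4-state chain as in the answer's construction.
P = [[0.5, 0.5, 0.0, 0.0],
     [0.5, 0.5, 0.0, 0.0],
     [0.0, 0.0, 0.3, 0.7],
     [0.0, 0.0, 0.7, 0.3]]
pi = [0.25, 0.25, 0.25, 0.25]   # a barycenter of the two uniform laws

# Detailed balance pi_i P_ij = pi_j P_ji holds, yet states 0 and 2 do not
# communicate, so the chain is reducible.
ok = all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
         for i in range(4) for j in range(4))
print(ok)   # True
```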
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
C*-algebras as Banach lattices? It seems to be trivial but I am not sure about monotonicity of the norm in the non-commutative case:
Is every C*-algebra a Banach lattice with respect to its natural positive cone?
| In order for an ordered vector space to be a Banach lattice, among other things, it is necessary that any two elements of the space have a greatest lower bound and least upper bound in the space. This property usually fails in $C^*$ algebras with their usual $C^*$-algebraic ordering.
Consider for example the $C^*$ algebra $A$ of $2 \times 2$ matrices over the complex numbers (with the operations: matrix addition, matrix multiplication, and the matrix conjugate transpose as the involution). Let $a = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ and $b = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$. It is easy to see that $a \geq 0$ and $b \geq 0$ in $A$, and hence that any element $c$ in $A$ satisfying $a \leq c$ or $b \leq c$ must be self-adjoint. Short calculations then show that if $c = \begin{pmatrix} x & y \\ y^* & z \end{pmatrix}$, then $a \leq c$ holds if and only if $x \geq 1$, $z \geq 0$, and $(x - 1) z \geq |y|^2$, and $b \leq c$ holds if and only if $x \geq 0$, $z \geq 1$, and $x(z - 1) \geq |y|^2$.
It follows that the set of upper bounds for $\{a,b\}$ in $A$ is the set
$$
U = \left\{\begin{pmatrix} x & y \\ y^* & z \end{pmatrix}: x \geq 1, z \geq 1, xz - \max(x,z) \geq |y|^2\right\}.
$$
I claim that $U$ has no least element. Suppose to the contrary that it does; denote it by $\lambda$. There are real numbers $s$ and $t$ and a complex number $u$ satisfying $s \geq 1$, $t \geq 1$, and $st - \max(s,t) \geq |u|^2$ with $\lambda = \begin{pmatrix} s & u \\ u^* & t \end{pmatrix}$. It is clear that the $2 \times 2$ identity matrix $I$ is in $U$, and from $\lambda \leq I$ one easily deduces that $1 - s \geq 0$ and $1 - t \geq 0$. Combining these with the previous constraints on $s$ and $t$ we deduce that $s = t = 1$ and hence $|u|^2 \leq 1 \cdot 1 - \max(1,1) = 0$, so that $u = 0$ and hence $\lambda = I$. But there are elements $d$ of $U$ for which $I \leq d$ does not hold. The matrix $\frac{1}{4} \begin{pmatrix} 5 & 2i \\ -2i & 6 \end{pmatrix}$ is a concrete example but there are many others.
This also shows that $\{-a,-b\}$ has no greatest lower bound in $A$, of course. So there is really no hope of $A$ being a lattice.
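For readers who want to double-check the matrix computations above, here is a short Python verification (an added illustration) using the fact that a $2\times 2$ Hermitian matrix is positive semidefinite iff its trace and determinant are nonnegative:

```python
# Check the concrete matrices from the answer: a, b, the candidate least
# upper bound I, and d = (1/4)[[5, 2i], [-2i, 6]].

def psd2(m):
    # 2x2 Hermitian matrix is PSD iff trace >= 0 and det >= 0
    (p, q), (r, s) = m
    tr = (p + s).real
    det = (p * s - q * r).real
    return tr >= -1e-12 and det >= -1e-12

def sub(m, n):  # matrix difference
    return [[m[i][j] - n[i][j] for j in range(2)] for i in range(2)]

a = [[1, 0], [0, 0]]
b = [[0, 0], [0, 1]]
I = [[1, 0], [0, 1]]
d = [[5/4, 1j/2], [-1j/2, 3/2]]

print(psd2(sub(d, a)), psd2(sub(d, b)))  # True True: d is an upper bound of {a, b}
print(psd2(sub(d, I)))                   # False: I <= d fails, so I is not least
```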
(Note that $a$ and $b$ are self-adjoint projections, and if you restrict $\leq$ to the set of self-adjoint projections in $A$, you do get a lattice, which is isomorphic to the lattice of closed subspaces of $\mathbb{C}^2$. The supremum of $a$ and $b$ in this lattice is $I$ and the infimum of $a$ and $b$ is $0$. But the set of projections in $A$ is not a real vector space, let alone a Banach lattice.)
Some general theory that may be of interest:
*
*S. Sherman proved (in Order in operator algebras, 1951) that a $C^*$-algebra $A$ is a lattice with respect to its usual ordering if and only if $A$ is commutative.
*R. V. Kadison proved (in Order properties of bounded self adjoint operators, 1951) that when $H$ is a Hilbert space of dimension $\geq 2$, the $C^*$ algebra $A$ of all bounded operators on $H$ is in some sense "as far from a lattice as you can get" in that if $a$ and $b$ are self-adjoint elements of $A$, then $\{a, b\}$ has a greatest lower bound only if either $a \leq b$ or $b \leq a$. So any pair of non-comparable self-adjoint operators will generate a counterexample like the one I gave above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Right angles in the clock during a day Can someone provide a solution for this question ...
Given the hour, minute and second hands, calculate the number of right angles the three hands make pairwise with respect to each other during a day. So it asks for the second and hour angle, minute and hour, and second and minute.
Thanks a lot..
| UPDATE:
Okay, I'll revise my count and way of thinking:
Rather than considering things per hour, I'll consider the number of times a hand "laps" another hand, that is to say, the number of times it completes 360° more than the other hand. During each such lap, the faster hand is at a right angle to the slower hand exactly twice. We know which hand is faster, so recompute:
Hour-Minute: in 24 hours the hour hand makes 2 loops and the minute hand 24, thus the minute hand passes the hour hand $24 - 2 = 22$ times => 44 right angles.
Hour-Second: the second hand makes 1440 loops in 24 hours, thus it passes the hour hand $1440 - 2 = 1438$ times => 2876 right angles.
Minute-Second: 24 loops (minute hand) vs. 1440 loops (second hand), thus we have $1440 - 24 = 1416$ passes => 2832 right angles.
The total is 5752 right angles.
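The revised count can be reproduced in a few lines of Python (an added illustration; the only inputs are the hand speeds in revolutions per day):

```python
# Each pair of hands is at a right angle exactly twice per relative lap,
# so the count is 2 * |laps_fast - laps_slow| over 24 hours.
laps = {"hour": 2, "minute": 24, "second": 1440}  # revolutions per day

pairs = [("hour", "minute"), ("hour", "second"), ("minute", "second")]
counts = {p: 2 * abs(laps[p[1]] - laps[p[0]]) for p in pairs}
print(counts)
# {('hour', 'minute'): 44, ('hour', 'second'): 2876, ('minute', 'second'): 2832}
print(sum(counts.values()))   # 5752
```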
Old answer:
We have 3 hands and therefore 3 "pairs" to consider. They are the hour-minute, hour-second, and minute-second.
Each hour, the minute hand will be at a right angle twice, so that's 48 right angles -- 2 for each hour of the day.
Each hour, the second hand will go around 60 times and have 2 right angles for each, that's 120 right angles per hour or 120 * 24 = 2880.
Each minute, the second hand will make 2 right angles with the hour hand, that's another 120 * 24 = 2880.
The total is 5808.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Alternative definition of rank involving tensor products I have come across a definition of rank in some lecture notes I am using to prepare for a linear algebra qualifying exam and the notes define the rank of a linear map in a somewhat different manner than I am used to.
let $V$ be a finite dimensional vector space and $f \in End(V)$. Since $End(V) \cong V^* \otimes V$ we can provide a new definition of the rank of $f$.
How do we show $\dim(\mathrm{Im}(f)) = \min \{ t : f = \sum_{i=1}^{t} v_{i}^{*} \otimes v_i \}$ where $v_{i}^{*} \in V^*$ and $v_i \in V$?
| It should be immediate that $\dim(\mathrm{Im}\,f)$ (let's call that the "geometrical rank") cannot be more than the tensor rank, because $v_i^*\otimes v_i$ represents an endomorphism that always produces a multiple of $v_i$.
To show that the tensor rank cannot be more than the geometrical rank, choose a basis $(v_1,\ldots,v_t)$ for the image of $f$. Then define $v_i^*$ as the linear map that sends $w\in V$ to the coefficient of $v_i$ of $f(w)$. Then $f$ is represented by $\sum_{i=1}^t v_i^* \otimes v_i$, so the tensor rank is at most $t$.
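Here is a small Python illustration of this construction (added for concreteness; the matrix $M$ and the basis choice are my own example): for a rank-2 map on $\mathbb{Q}^3$, two rank-one tensors built from a basis of the image reproduce $f$ exactly.

```python
from fractions import Fraction as F

# Take f on Q^3 with matrix M of rank 2 (columns are f(e_j)). Pick
# v_1, v_2 = a basis of Im(f) and let v_i^* send w to the v_i-coefficient
# of f(w); then sum_i v_i^* (x) v_i equals f, as in the answer's argument.

M = [[F(1), F(0), F(1)],
     [F(0), F(1), F(1)],
     [F(1), F(1), F(2)]]          # rank 2: third column = col1 + col2

v1 = [F(1), F(0), F(1)]           # basis of the column space
v2 = [F(0), F(1), F(1)]

# Coefficients of f(e_j) in that basis: f(e1)=v1, f(e2)=v2, f(e3)=v1+v2,
# so on the standard basis v1^* = (1,0,1) and v2^* = (0,1,1).
v1s = [F(1), F(0), F(1)]
v2s = [F(0), F(1), F(1)]

rebuilt = [[v1s[j]*v1[i] + v2s[j]*v2[i] for j in range(3)] for i in range(3)]
print(rebuilt == M)   # True: two rank-one tensors suffice, matching dim Im(f)
```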
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove using the definition of convergence in probability that $W_n$ converges to $0$ in probability Suppose $W_1, W_2, \ldots$ is a sequence of random variables, where $W_n$ is defined as follows: $W_n = 1/n$ with probability $1/2$ and $W_n = 0$ with probability $1/2$. Using the definition of convergence in probability, show that $W_n$ converges to $0$ in probability. So far, I have tried using Chebyshev's and Markov's inequalities, but have gotten nowhere. Any help is appreciated and thanks in advance.
| Wikipedia has the definition
A sequence ${X_n}$ of random variables converges in probability towards $X$ if for all $\varepsilon \gt 0$, $\lim_{n\to\infty}\Pr\big(|X_n-X| \geq \varepsilon\big) = 0.$
so you might look at what happens when $1/n \lt \varepsilon$, i.e. when $n \gt 1/\varepsilon$.
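To see the hint in action, a tiny Python computation (an added illustration) tabulates $\Pr(|W_n|\geq\varepsilon)$ and shows it drops to $0$ once $n > 1/\varepsilon$:

```python
# For this W_n, P(|W_n| >= eps) is 1/2 while 1/n >= eps and 0 once
# n > 1/eps, so the limit in the definition is 0 for every eps > 0.
def tail_prob(n, eps):
    # W_n equals 1/n or 0, each with probability 1/2
    return 0.5 if 1.0 / n >= eps else 0.0

eps = 0.01
probs = [tail_prob(n, eps) for n in range(1, 201)]
print(probs[:3], probs[99], probs[100])   # [0.5, 0.5, 0.5] 0.5 0.0
```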
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Bidual of a WSC space Let $E$ be a Banach space which is weakly sequentially complete (i.e. each weak Cauchy sequence converges weakly). Must $E^{**}$ be weakly sequentially complete as well? Of course, this question is interesting only for non-reflexive spaces.
| This is not true.
I reproduce Remark 3. on page 101 (in the section on Banach lattices) of Lindenstrauss-Tzafriri (in the old Springer Lecture Notes 338 edition):
Reference [80] is:
William B. Johnson, A complementary universal conjugate Banach space and its relation to the approximation problem, Israel Journal of Mathematics 13(3-4) (1972), 301–310.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\lim \limits_{n \to \infty} \frac{x^n}{n!} = 0$, $x \in \Bbb R$. Why is
$$\lim_{n \to \infty} \frac{2^n}{n!}=0\text{ ?}$$
Can we generalize it to any exponent $x \in \Bbb R$? This is to say, is
$$\lim_{n \to \infty} \frac{x^n}{n!}=0\text{ ?}$$
This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.
and here: List of abstract duplicates.
| The simplest way would be the following (assume for simplicity that $x$ is a positive integer; otherwise replace $x$ by $\lceil |x|\rceil$ in the bound): let
$$
\color{fuchsia}{P_n=\frac{x^n}{n!}=}
\color{maroon}{\frac x1.\frac x2.\frac x3\cdots\frac x{x-1}.\frac xx.\frac x{x+1}\cdots\frac x{n-1}.\frac xn}$$
Then
$$\color{maroon}{0}\color{red}{<}\color{fuchsia}{P_n}\color{red}{<}\color{maroon}{\frac x1.\frac x2\cdots\frac{x}{x-1}.\frac xx.}\color{green}{\frac x{x+1}.\frac x{x+1}\cdots\frac{x}{x+1}.\frac x{x+1}}$$
Or
$$\color{maroon}{0}\color{red}{<}\color{fuchsia}{P_n}\color{red}{<}\color{maroon}{\frac{x^x}{x!}.}\color{green}{\left(\frac x{x+1}\right)^{n-x}}$$
And as
$$\color{fuchsia}{\lim_{n\to\infty}\color{maroon}{0}=0}\\
\color{fuchsia}{\lim_{n\to\infty}\color{maroon}{\frac{x^x}{x!}.}\color{green}{\left(\frac x{x+1}\right)^{n-x}}=0}$$
By using the $\color{red}{\text{Sandwich theorem}}$ the result can be obtained; I leave you to read between the lines.
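A quick numerical check in Python (added; not part of the original argument) shows exactly the behavior the sandwich bound predicts, with $x=10$ as an example:

```python
import math

# Sanity check of x^n/n! -> 0: once n exceeds x, each term is the previous
# one times x/(n+1) < 1, so the sequence decays geometrically.
x = 10.0
terms = [x**n / math.factorial(n) for n in range(1, 61)]

decaying = all(terms[i + 1] < terms[i] for i in range(10, 59))
print(decaying)            # True: strictly decreasing once n > x
print(terms[59] < 1e-20)   # True: the n = 60 term is already negligible
```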
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "86",
"answer_count": 15,
"answer_id": 13
} |
Cyclic subgroups, flaw in proof Let $G$ be a group, denote for $g \in G:$ $\langle g \rangle=\{g^0,g^1,\ldots \}$
I should show that for $x,y \in G$ with $xy=yx$ if $ord(\langle x \rangle)=l<\infty$ and $ord(\langle y \rangle)=m<\infty$ then $ord(\langle xy \rangle)<\infty$
To show it I assumed $ord(\langle xy \rangle)=\infty$, then especially
$$((xy)^l)^m=(x^l)^m\cdot (y^m)^l=e^me^l=e\neq e$$
But that is a contradiction. However I did not use that $xy=yx$ so the proof should be incorrect, but where is my mistake and how to show it correctly?
| You used the fact that $xy=yx$ when you split up exponents over $xy$ - in general, for any $d>1$,
$$(xy)^d=\underbrace{(xy)(xy)\cdots(xy)}_{d\text{ times}}\neq \left(\underbrace{ x\cdot x\cdots x}_{d\text{ times}}\right)\left(\underbrace{ y\cdot y\cdots y}_{d\text{ times}}\right)=x^dy^d$$
unless you can turn the successive $yx$'s on the left into $xy$'s:
$$\underbrace{(xy)(xy)\cdots (xy)}_{d\text{ times}}=x\underbrace{(yx)(yx)\cdots(yx)}_{(d-1)\text{ times}}y=x\underbrace{(xy)(xy)\cdots(xy)}_{(d-1)\text{ times}}y=x^2\underbrace{(yx)(yx)\cdots(yx)}_{(d-2)\text{ times}}y^2$$
$$(\text{ repeat })\cdots = x^dy^d$$
Also, note that you did not need to use a proof by contradiction - you have directly proven that
$$(xy)^{lm}=x^{lm}y^{lm}=(x^l)^m(y^m)^l=e^me^l=ee=e$$
so there was no need to assume $\text{ord}(xy)=\infty$ and then show it was false. It is not wrong, but it is much cleaner to say
Here is a proof of claim $P$.
than to say
Suppose claim $P$ were false. But here is a proof that it is true; contradiction. Thus, our assumption that claim $P$ is false must have been false, i.e. the claim $P$ is true.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Proof for convergence of a given progression $a_n := n^n / n!$ "Examine whether the given sequences converge (possibly improperly) and determine their limits where applicable.
(a) $$(a_n)_{n\in\mathbb{N}}:=\frac{n^n}{n!}$$
[...]"
I am having problems getting this homework done as I have no clue about convergence at all. Through Mathematica I know that $\lim\limits_{n\to\infty}a_n=+\infty$, however I don't know how to prove it! During my research I found out that for sufficiently big $n$ the inequality $x^n\leq n!\leq n^n$ for $x\in\mathbb{R}$ and $n\in\mathbb{N}$ is true. However this doesn't help me at all. Having a look at our lecture notes I developed the following statement: every $x^n$ with $|x| \geq 1$ diverges, as $x^n$ is unbounded, and therefore has the improper limit $\lim\limits_{n\to\infty}x^n=+\infty$. Furthermore it seems obvious to me that I need to express the terms in a simpler way, like fractions converging to $0$ etc., but I don't know how. Neither did some work with the $\varepsilon$-definition of limits in combination with the triangle inequality help me out. Any suggestions or hints?
Information: This is "Analysis for computer scientists" and therefore we haven't "officially" learned any fancy tricks like L'Hôpital's rule etc., just basic properties of $\mathbb{R}$ and some inequalities, as this is an introductory lecture.
| HINT: $$\begin{align*}
\frac{n^n}{n!}&=\frac{n}{n}\cdot\frac{n}{n-1}\cdot\frac{n}{n-2}\cdot\dots\cdot\frac{n}2\cdot\frac{n}1\\
&=\left(\frac{n}{n-1}\cdot\frac{n}{n-2}\cdot\dots\cdot\frac{n}2\right)n
\end{align*}$$
If you know that $$\lim_{n\to\infty}\left(1+\frac1n\right)^n = e\;,$$ you can also look at the ratio of consecutive terms to see that the sequence grows almost exponentially:
$$\begin{align*}
\frac{a_{n+1}}{a_n} &=\frac{\frac{(n+1)^{n+1}}{(n+1)!}}{\frac{n^n}{n!}}\\
&=\frac{(n+1)^{n+1}}{n^n}\cdot\frac{n!}{(n+1)!}\\
&=\frac{(n+1)^n(n+1)}{n^n}\cdot\frac1{n+1}\\
&=\frac{(n+1)^n}{n^n}\\
&=\left(\frac{n+1}n\right)^n\\
&=\left(1+\frac1n\right)^n
\end{align*}$$
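The ratio computation can be confirmed numerically in Python (an added sanity check):

```python
import math

# The ratio a_{n+1}/a_n computed directly matches (1 + 1/n)^n, which
# approaches e ~ 2.718, so a_n eventually grows almost exponentially.
def a(n):
    return n**n / math.factorial(n)

for n in [5, 10, 20]:
    ratio = a(n + 1) / a(n)
    assert abs(ratio - (1 + 1 / n)**n) < 1e-9
print(a(21) / a(20))   # about 2.653, already close to e
```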
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Counting the solutions of $x^2 \equiv y^2-d \pmod p$ So I am reading "A Classical Introduction to Modern Number Theory", and I need help for one question:
Show that $$x^2 \equiv y^2-d \pmod p$$ has $p-1$ solutions for $\gcd(p,d)=1$ and $2p-1$ for $\gcd(p,d)>1$, where $p$ is a prime number greater than 3.
I am a little confused: shouldn't both answers be $2p-1$?
| We are looking at the congruence $(x-y)(x+y)\equiv d\pmod{p}$. If $d$ is divisible by $p$, the solutions are $(a,a)$ and $(a,-a)$, where $a$ travels from $0$ to $p-1$. Since $p$ is odd, these are all distinct modulo $p$, except when $a=0$. So there are $2(p-1)+1=2p-1$ solutions.
Suppose now that $d$ is not divisible by $p$. Let $x-y = a$, where $a$ travels from $1$ to $p-1$ (clearly $y-x$ cannot be congruent to $0$). For any such $a$, there is a unique $b$ such that $ab\equiv -d\pmod{p}$. Then $x^2-y^2\equiv d\pmod{p}$ if and only if $x+y\equiv b\pmod{p}$.
Since $p$ is odd, $2$ is invertible modulo $p$, and therefore the system $x-y\equiv a\pmod p$, $x+y \equiv b\pmod{p}$ has a unique solution $(x,y)$ modulo $p$. It follows that there are as many solutions of the original congruence as there are choices for $a$, namely $p-1$. The case $p=3$ is not special; we can take $p \ge 3$.
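A brute-force Python check (added for illustration) confirms both counts for a few small primes:

```python
# Count solutions of x^2 = y^2 - d (mod p) and compare with p-1 / 2p-1.
for p in [5, 7, 11, 13]:
    for d in range(p + 1):                 # d = 0 and d = p give gcd(p, d) > 1
        count = sum(1 for x in range(p) for y in range(p)
                    if (x * x - y * y + d) % p == 0)
        expected = 2 * p - 1 if d % p == 0 else p - 1
        assert count == expected
print("counts agree: p-1 when p does not divide d, 2p-1 when it does")
```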
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Equal simple field extensions? I have a question about simple field extensions.
For a field $F$, if $[F(a):F]$ is odd, then why is $F(a)=F(a^2)$?
| Firstly, $F(a^2)\subseteq F(a)$. If the inclusion is strict, then $[F(a):F(a^2)]\neq 1$. Now $a$ satisfies the polynomial $X^2-a^2\in F(a^2)[X]$, so the minimal polynomial $m_{a,F(a^2)}(x)$ has degree at most $2$, and this is necessarily $2$, so $[F(a):F(a^2)]=2$.
Then since the degree of field extensions is multiplicative, you get $2\mid [F(a):F]$, a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Forced wave equation If a system satisfies the equation $$\nu^2 {\partial^2 \psi\over \partial x^2}={\partial^2 \psi\over \partial t^2}+a{\partial \psi\over \partial t}-b\sin\left({\pi x \over L}\right)\cos\left({\pi \nu t\over L}\right)$$
subjected to conditions: $\psi(0,t)=\psi(L,t)={\partial \psi(x,0)\over \partial t}=0$ and $\psi(x,0)=c\sin\left({\pi x\over L}\right)$,
how might I solve this? I can solve the equation $$\nu^2 {\partial^2 \psi\over \partial x^2}={\partial^2 \psi\over \partial t^2}+a{\partial \psi\over \partial t}$$ by separation of variables. But I don't know how to deal with the $$b\sin\left({\pi x \over L}\right)\cos\left({\pi \nu t\over L}\right)$$ term. Also, what is the "forced component" of $\psi(x,t)$?
Thanks.
| $$\nu^2\dfrac{\partial^2\psi(x,t)}{\partial x^2}=\dfrac{\partial^2\psi(x,t)}{\partial t^2}+a\dfrac{\partial\psi(x,t)}{\partial t}-b\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}$$
$$\nu^2\dfrac{\partial^2\psi(x,t)}{\partial x^2}-\dfrac{\partial^2\psi(x,t)}{\partial t^2}-a\dfrac{\partial\psi(x,t)}{\partial t}=-b\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}$$
It is possible that the subsititution $\psi(x,t)=\psi_c(x,t)+\psi_p(x,t)$ can make the inhomogeneous linear PDE becomes homogeneous linear PDE if $\psi_p(x,t)$ can be found.
For this question, the form of $\psi_p(x,t)$ is not difficult to guess, just $$A_1\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}+A_2\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}+A_3\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}+A_4\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}$$
Let $$\psi(x,t)=\psi_c(x,t)+A_1\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}+A_2\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}+A_3\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}+A_4\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}$$
Then $$\dfrac{\partial\psi(x,t)}{\partial x}=\dfrac{\partial\psi_c(x,t)}{\partial x}+\dfrac{A_1\pi}{L}\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}+\dfrac{A_2\pi}{L}\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_3\pi}{L}\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_4\pi}{L}\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}$$
$$\dfrac{\partial^2\psi(x,t)}{\partial x^2}=\dfrac{\partial^2\psi_c(x,t)}{\partial x^2}-\dfrac{A_1\pi^2}{L^2}\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_2\pi^2}{L^2}\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_3\pi^2}{L^2}\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_4\pi^2}{L^2}\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}$$
$$\dfrac{\partial\psi(x,t)}{\partial t}=\dfrac{\partial\psi_c(x,t)}{\partial t}+\dfrac{A_1\pi\nu}{L}\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_2\pi\nu}{L}\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}+\dfrac{A_3\pi\nu}{L}\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_4\pi\nu}{L}\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}$$
$$\dfrac{\partial^2\psi(x,t)}{\partial t^2}=\dfrac{\partial^2\psi_c(x,t)}{\partial t^2}-\dfrac{A_1\pi^2\nu^2}{L^2}\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_2\pi^2\nu^2}{L^2}\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_3\pi^2\nu^2}{L^2}\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_4\pi^2\nu^2}{L^2}\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}$$
$$\therefore \nu^2\Biggl(\dfrac{\partial^2\psi_c(x,t)}{\partial x^2}-\dfrac{A_1\pi^2}{L^2}\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_2\pi^2}{L^2}\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_3\pi^2}{L^2}\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_4\pi^2}{L^2}\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}\Biggr)-\Biggl(\dfrac{\partial^2\psi_c(x,t)}{\partial t^2}-\dfrac{A_1\pi^2\nu^2}{L^2}\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_2\pi^2\nu^2}{L^2}\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_3\pi^2\nu^2}{L^2}\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_4\pi^2\nu^2}{L^2}\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}\Biggr)-a\Biggl(\dfrac{\partial\psi_c(x,t)}{\partial t}+\dfrac{A_1\pi\nu}{L}\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_2\pi\nu}{L}\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}+\dfrac{A_3\pi\nu}{L}\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_4\pi\nu}{L}\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}\Biggr)=-b\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}$$
$\nu^2\dfrac{\partial^2\psi_c(x,t)}{\partial x^2}-\dfrac{A_1\pi^2\nu^2}{L^2}\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_2\pi^2\nu^2}{L^2}\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_3\pi^2\nu^2}{L^2}\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_4\pi^2\nu^2}{L^2}\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{\partial^2\psi_c(x,t)}{\partial t^2}+\dfrac{A_1\pi^2\nu^2}{L^2}\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}+\dfrac{A_2\pi^2\nu^2}{L^2}\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}+\dfrac{A_3\pi^2\nu^2}{L^2}\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}+\dfrac{A_4\pi^2\nu^2}{L^2}\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-a\dfrac{\partial\psi_c(x,t)}{\partial t}-\dfrac{A_1\pi a\nu}{L}\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}+\dfrac{A_2\pi a\nu}{L}\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}-\dfrac{A_3\pi a\nu}{L}\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}+\dfrac{A_4\pi a\nu}{L}\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}=-b\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}$
$\nu^2\dfrac{\partial^2\psi_c(x,t)}{\partial x^2}-\dfrac{\partial^2\psi_c(x,t)}{\partial t^2}-a\dfrac{\partial\psi_c(x,t)}{\partial t}=\left(\dfrac{A_1\pi a\nu}{L}-b\right)\sin\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_2\pi a\nu}{L}\sin\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}+\dfrac{A_3\pi a\nu}{L}\cos\dfrac{\pi x}{L}\cos\dfrac{\pi\nu t}{L}-\dfrac{A_4\pi a\nu}{L}\cos\dfrac{\pi x}{L}\sin\dfrac{\pi\nu t}{L}$
$\therefore A_1=\dfrac{bL}{\pi a\nu}$ , $A_2=0$ , $A_3=0$ , $A_4=0$
The above calculation is only valid for $a\neq0$. For $a=0$, the situation is more complicated and I still have no idea at present.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
Is it possible that $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha^{n})$ for all $n>1$? Is it possible that $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha^{n})$ for all $n>1$ when $\mathbb{Q}(\alpha)$ is a $p$th degree Galois extension of $\mathbb{Q}$?
($p$ is prime)
I got stuck with this problem while trying to construct polynomials whose Galois group is cyclic group of order $p$.
Edit: Okay, I got two nice answers for this question but to fulfill my original purpose(constructing polynomials with cyclic Galois group) I realized that I should ask for all primes $p$ if there is any such $\alpha$(depending on $p$) such that the above condition is satisfied. If the answer is no (i.e. there are primes $p$ for which there is no $\alpha$ such that $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha^{n})$ for all $n>1$) then I succeed up to certain stage.
| $\alpha=1+\sqrt 2$ (for $p=2$) should do the trick. By the binomial theorem, $(1+\sqrt 2)^n$ is always $a_n+b_n\sqrt2$ for some positive integer $b_n$.
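A short Python check (an added illustration) confirms that the $\sqrt2$-coefficient $b_n$ stays positive, using the recurrence $(a+b\sqrt2)(1+\sqrt2)=(a+2b)+(a+b)\sqrt2$ with exact integers:

```python
# Track (1 + sqrt 2)^n = a_n + b_n * sqrt 2 exactly.
a, b = 1, 0            # (1 + sqrt 2)^0
for n in range(1, 21):
    a, b = a + 2 * b, a + b
    assert b > 0       # the sqrt-2 coefficient never vanishes,
                       # so Q(alpha^n) already contains sqrt 2
print(a, b)            # integer coefficients of (1 + sqrt 2)^20
```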
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
} |
Prove the identity $ \sum\limits_{s=0}^{\infty}{p+s \choose s}{2p+m \choose 2p+2s} = 2^{m-1} \frac{2p+m}{m}{m+p-1 \choose p}$ $$ \sum\limits_{s=0}^{\infty}{p+s \choose s}{2p+m \choose 2p+2s} = 2^{m-1} \frac{2p+m}{m}{m+p-1 \choose p}$$
Class themes are: Generating functions and formal power series.
| Here is a slightly different proof that is simpler than the other one
I posted earlier.
Suppose we seek to verify that
$$\sum_{q\ge 0} {p+q\choose q} {m+2p\choose m-2q}
= 2^{m-1} \frac{2p+m}{m} {m+p-1\choose p}.$$
This is
$$\sum_{q\ge 0} {p+q\choose q} {m+2p\choose 2p+2q}.$$
We introduce
$${m+2p\choose 2p+2q} =
\frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{m-2q+1}} \frac{1}{(1-z)^{2p+2q+1}} \; dz.$$
This integral controls the range, being zero when $2q\gt m$ and we
may extend the range of $q$ to infinity. We get for the sum
$$\frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{m+1}} \frac{1}{(1-z)^{2p+1}}
\sum_{q\ge 0} {p+q\choose q} \frac{z^{2q}}{(1-z)^{2q}}
\; dz
\\ = \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{m+1}} \frac{1}{(1-z)^{2p+1}}
\frac{1}{(1-z^2/(1-z)^2)^{p+1}}
\; dz
\\ = \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1-z}{z^{m+1}}
\frac{1}{((1-z)^2-z^2)^{p+1}}
\; dz
\\ = \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1-z}{z^{m+1}}
\frac{1}{(1-2z)^{p+1}}
\; dz.$$
Extracting coefficients we get
$$2^m {m+p\choose p} - 2^{m-1} {m-1+p\choose p}
\\ = 2^{m-1} {m-1+p\choose p}
\left(2\frac{m+p}{m} - 1 \right)
\\ = 2^{m-1} {m-1+p\choose p}
\frac{m+2p}{m}.$$
This is the claim.
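The identity can also be checked numerically in Python (an added verification, independent of the integral computation):

```python
from math import comb

# Check sum_q C(p+q, q) C(m+2p, m-2q) = 2^(m-1) (2p+m)/m C(m+p-1, p)
# for a range of small p and m; all arithmetic is exact.
for p in range(0, 6):
    for m in range(1, 12):
        lhs = sum(comb(p + q, q) * comb(m + 2 * p, m - 2 * q)
                  for q in range(m // 2 + 1))
        rhs = 2**(m - 1) * (2 * p + m) * comb(m + p - 1, p) // m
        assert lhs == rhs
print("identity verified for p <= 5, m <= 11")
```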
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/77949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 3
} |
Countable or uncountable set 8 signs Let S be a set of pairwise disjoint 8-like symbols on the plane. (The 8s may be inside each other as well) Prove that S is at most countable.
Now I know you can "map" a set of disjoint intervals in R to a countable set (e.g. Q :rational numbers) and solve similar problems like this, but the fact that the 8s can go inside each other is hindering my progress with my conventional approach...
| Let us associate each 8 with its center. If two 8's have exactly the same size and are too close together, then they will intersect. The same holds even if they only have almost the same size. That is, there are some $\epsilon,\delta$ such that two nonintersecting 8's of size between $r$ and $(1+\epsilon)r$ cannot be $\delta r$-close (that is, their centers must be at distance at least $\delta r$).
Let us denote by $S_r$ the set of 8's of size between $r$ and $(1+\epsilon)r$. The circles at radius $\delta r/3$ around the centers are disjoint, and so if we choose for each 8 in $S_r$ a rational point inside the circle at radius $\delta r/3$ around its center, then these rational points will be distinct. This shows that $S_r$ is countable.
The set of all 8's is the union of countably many sets of the form $S_r$ (for example, we can take $r = (1+\epsilon)^z$ for all integers $z$). Since the countable union of countable sets is countable, we conclude that the number of 8's is countable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Splitting of primes in the compositum of fields If $L_i/K$ are Galois extensions of number fields, $i=1,\ldots,n$, and $L=L_1\cdots L_n$ is the compositum. Then it's true that a prime $\mathfrak{p}$ of $K$ splits in $L$ if and only if it splits in all of $L_i$. Does this also hold if the $L_i$ are not Galois extensions of $K$?
The proof that I know regarding the compositum uses the fact that the $L_i/K$ are Galois, so is this true in the more general setting and how would one prove it?
| It is true in general. The trivial direction: if $\mathfrak p$ splits in $L/K$, then it splits in any sub-extension, because splitting completely means unramified with trivial residue extensions. In particular $\mathfrak p$ splits in all the $L_i$.
Suppose now that $\mathfrak p$ splits in all $L_i$. Without loss of generality, we can suppose $n=2$. As the problem is local at $\mathfrak p$, we can localize and suppose $K$ is a discrete valuation field (you can stick to number fields if you prefer). Denote by $O_K$ the valuation ring of $K$ and by $O_{L_i}$ the integral closure of $O_K$ in $L_i$. Let $\pi$ be a uniformizing element of $O_K$. By hypothesis $O_{L_i}/(\pi)$ is a direct sum of copies of $k$, the residue field of $K$.
Consider the tensor product $A=O_{L_1}\otimes O_{L_2}$ over $O_K$. Its generic fiber $A\otimes K$ is $L_1\otimes_K L_2$ and is reduced because $L_i/K$ is separable. And $A/\pi A=O_{L_1}/(\pi) \otimes_k O_{L_2}/(\pi)$ is a direct sum of copies of $k$.
I claim that $O_L$ is a quotient of $A$. This will imply that $O_L/(\pi)$ is a direct sum of copies of $k$ hence $\mathfrak p$ splits in $L$.
Proof of the claim. Consider the canonical map
$$f : A\otimes K=L_1\otimes L_2\to L, \quad x_1\otimes x_2\mapsto x_1x_2.$$
The image $f(A)$ is a subring of $L$, finite over $O_K$ because $A$ is finite over $O_K$. Hence $f(A)\subseteq O_L$. Moreover $f(A)/(\pi)$ is a quotient of $A/\pi A$, so it is a direct sum of copies of $k$. In particular $O_K$ is unramified in $f(A)$, so $f(A)$ is regular hence equal to $O_L$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 0
} |
Proof: If $n=ab$ then $2^a-1 \mid 2^n-1$ I don't know how to explain or how to prove the following statement
If $n=ab$ and $a,b \in \mathbb{N}$ then $2^a-1 \mid 2^n-1$.
Any ideas? Perhaps an induction?
Thanks in advance.
| Hint: Recall that $$x^u-1=(x-1)(x^{u-1}+x^{u-2}+\cdots+x+1).$$ Then $$2^{ab}-1 =(2^a)^b-1$$ and letting $x=2^a$, $u=b$ we see that
$$(2^a)^b-1=(2^a-1)\left((2^a)^{b-1}+(2^a)^{b-2}+\cdots+(2^a)+1\right).$$ This means that $2^a-1$ must divide it. We can use a similar argument for $2^b-1$.
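The hint translates directly into code; a small sketch (mine, not the answerer's) that multiplies the factorization back out:

```python
def cofactor(a: int, b: int) -> int:
    """The explicit quotient (2^a)^(b-1) + (2^a)^(b-2) + ... + 2^a + 1 from the hint."""
    x = 2**a
    return sum(x**i for i in range(b))

# (2^a - 1) * cofactor == 2^(ab) - 1, so 2^a - 1 divides 2^n - 1 for n = a*b.
for a in range(1, 10):
    for b in range(1, 10):
        assert (2**a - 1) * cofactor(a, b) == 2**(a * b) - 1
```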
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Optimal number of answers for a test with wrong-answer penalty Suppose you have to take a test with ten questions, each with four different options (no multiple answers), and a wrong-answer penalty of half a correct answer. Blank questions score neither positively nor negatively.
Supposing you have not studied especially hard this time, what is the optimal number of questions to answer so as to maximize the probability of passing the exam (scoring at least five points)?
| If you answer 5, you need them all right, so your chance (assuming a $\frac{1}{4}$ chance of a correct answer) is $(\frac{1}{4})^5=\frac{1}{1024}$. If you answer 7, you need at least 6 right, with chance $\binom {7}{6}(\frac{1}{4})^6\frac{3}{4}+(\frac{1}{4})^7$, which I make out to be about $\frac{1}{745}$. You can try answering 9, but I think you are in a world of hurt.
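The same case analysis can be run exhaustively with exact rational arithmetic. The sketch below is mine, using the same guess-with-probability-$\frac14$ model as the answer (right answer $+1$, wrong answer $-\frac12$, blank $0$):

```python
from fractions import Fraction
from math import comb

def pass_probability(k: int) -> Fraction:
    """Chance of scoring at least 5 when guessing k of the 10 questions
    uniformly at random: each guess is right with probability 1/4."""
    p = Fraction(1, 4)
    total = Fraction(0)
    for r in range(k + 1):                        # r right, k - r wrong
        if Fraction(r) - Fraction(k - r, 2) >= 5:
            total += comb(k, r) * p**r * (1 - p)**(k - r)
    return total

assert pass_probability(5) == Fraction(1, 1024)
assert pass_probability(7) == Fraction(11, 8192)   # about 1/745, as in the answer
best = max(range(11), key=pass_probability)        # exhaustive search over 0..10
```

Interestingly, under this model the exhaustive search does not stop at odd counts: answering 8 (needing at least 6 right) beats answering 7.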
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
What kinds of non-zero characteristic fields exist? There are the finite fields of characteristic $p$, namely $\mathbb{F}_{p^n}$ for any $n\ge 1$, and there is the algebraic closure $\bar{\mathbb{F}_p}$. The only other fields of non-zero characteristic I can think of are transcendental extensions, namely $\mathbb{F}_{q}(x_1,x_2,\ldots,x_k)$ where $q=p^{n}$.
That's all! I cannot think of any other fields of non-zero characteristic. I may be asking too much if I ask for a characterization of all non-zero characteristic fields, but I would like to know what other kinds of such fields are possible.
Thanks.
| No need to limit yourself to a finite number of transcendentals... So $\mathbb F_q(x_1,x_2,\dots,x_n,\dots)$ is another example. You can also use $\bar{\mathbb{F}_p}$ as the coefficient field. Many combinations are possible. What characterization are you after?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
} |
Confused about modular notations I am a little confused about the notation used in two articles on Wikipedia.
According to the page on Fermat Primality test
$
a^{p-1}\equiv 1 \pmod{m}$ means that when $a^{p-1}$ is divided by $m$, the remainder is 1.
And according to the page on Modular Exponentiation
$c \equiv b^e \pmod{m}$ means that when $b^e$ is divided by $m$, the remainder is $c$.
Am I interpreting them wrong? or both of them are right? Can someone please clarify them to me?
Update: So given an equation, say $x \equiv y \pmod{m}$, how do we interpret it? Does $y$ divided by $m$ give $x$ as remainder, or does $x$ divided by $m$ give $y$ as remainder?
| x ≡ y (mod m)
is equivalent to:
x divided by m gives the same remainder as y divided by m
So, in your first example:
a^(p-1) ≡ 1 (mod m)
is equivalent to:
a^(p-1) divided by m gives the same remainder as 1 divided by m
But 1 divided by m gives remainder 1, so the above becomes:
a^(p-1) divided by m gives remainder 1
In your second example:
c ≡ b^e (mod m)
is equivalent to:
c divided by m gives the same remainder as b^e divided by m
But (assuming that 0 <= c < m) we have that c divided by m gives remainder c, so the above becomes:
c is the remainder that b^e divided by m gives.
As it seems that the notation may be confusing, let me note that
x ≡ y (mod m)
should be read as:
[ x ≡ y ] (mod m) --- the (mod m) is applied to the whole congruence
and not as:
x ≡ [ y (mod m) ] --- and not just to the right part.
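The "same remainder" reading is easy to check mechanically; a small Python sketch (the concrete numbers below are illustrative choices of mine):

```python
def congruent(x: int, y: int, m: int) -> bool:
    """x ≡ y (mod m): x and y leave the same remainder when divided by m."""
    return x % m == y % m

# Fermat-test reading: a^(m-1) ≡ 1 (mod m) for prime m with gcd(a, m) = 1.
m, a = 7, 3
assert congruent(a**(m - 1), 1, m)

# Modular-exponentiation reading: c ≡ b^e (mod m) with 0 <= c < m.
b, e, m = 4, 13, 497
c = pow(b, e, m)          # c is literally the remainder of b^e divided by m
assert congruent(c, b**e, m) and 0 <= c < m
```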
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Is there any intuition behind why the derivative of $\cos(x)$ is equal to $-\sin(x)$, or is it just something to memorize? Why is $$\frac{d}{dx}\cos(x)=-\sin(x)$$ I am studying for a differential equations test and I seem to always forget this, and I am just wondering if there is some intuition I'm missing, or is it just one of those things to memorize? I know this is not very differential-equations related, just one of those things that I've never really understood.
And alternatively, why is $$ \int\sin(x)\,dx=-\cos(x)$$
any good explanations would be greatly appreciated.
| Well if you find trouble remembering them maybe you can use the formulas $$\cos{x} = \frac{e^{ix} + e^{-ix}}{2} \quad \sin{x} = \frac{e^{ix} - e^{-ix}}{2i}$$
which you can get from Euler's formula and then if you forget what the derivative of $\cos{x}$ or of $\sin{x}$ is then you can just differentiate those exponentials, which is rather easy.
And I guess that you will not have problems remembering what the derivative of the exponential is.
Also using the power series representations for the sine and the cosine you can differentiate them term by term and verify easily that $(\cos{x})' = -\sin{x}$ and $(\sin{x})' = \cos{x}$.
But in any case, depending on how you define the trigonometric functions, there may be different ways to prove that each derivative is what it is.
I hope this helps a little.
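For what it's worth, both claims in this answer are cheap to cross-check numerically; a sketch (mine) comparing the exponential representation and a finite-difference derivative:

```python
import cmath
import math

def cos_via_exp(x: float) -> complex:
    """The Euler's-formula representation of cosine from the answer."""
    return (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2

def dcos(x: float, h: float = 1e-6) -> float:
    """Central-difference approximation to the derivative of cos at x."""
    return (math.cos(x + h) - math.cos(x - h)) / (2 * h)

x = 0.8
assert abs(cos_via_exp(x) - math.cos(x)) < 1e-12   # the representations agree
assert abs(dcos(x) - (-math.sin(x))) < 1e-8        # and (cos)' is -sin
```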
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
Prove that $\frac{1}{n} \sum_{k=2}^n \frac{1}{\log k}$ converges to $0$
Prove that $\frac{1}{n} \sum_{k=2}^n \frac{1}{\log k}$ converges to $0.$
Okay, seriously, it's like this question is mocking me. I know it converges to $0$. I can feel it in my blood. I even proved it was Cauchy, but then realized that didn't tell me what the limit was. I've been working on this for an hour, so can one of you math geniuses help me?
Thanks!
| In general, if $a_n\to 0$, then $\frac1n \sum_{k=0}^n a_k \to 0$ too.
(For any $\varepsilon>0$, find an $N$ such that $|a_n|<\varepsilon/2$ for all $n>N$, and then take enough terms beyond $N$ that they dominate whatever the terms before $N$ contribute to the average).
Even more generally (and as an easy consequence), whenever $a_n$ converges, $\frac1n \sum_{k=0}^n a_k$ converges to the same limit.
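A quick numerical illustration of the Cesàro-average fact (not a proof, of course):

```python
import math

def cesaro(n: int) -> float:
    """(1/n) * sum_{k=2}^{n} 1/log k, the average in the problem."""
    return sum(1 / math.log(k) for k in range(2, n + 1)) / n

# The averages decay (roughly like 1/log n), consistent with the limit 0:
assert cesaro(10**2) > cesaro(10**3) > cesaro(10**4) > cesaro(10**5)
```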
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Conditional expectation for a sum of iid random variables: $E(\xi\mid\xi+\eta)=E(\eta\mid\xi+\eta)=\frac{\xi+\eta}{2}$ I don't really know how to start proving this question.
Let $\xi$ and $\eta$ be independent, identically distributed random variables with $E(|\xi|)$ finite.
Show that
$E(\xi\mid\xi+\eta)=E(\eta\mid\xi+\eta)=\frac{\xi+\eta}{2}$
Does anyone here have any idea for starting this question?
| I state the result in full and prove it in detail.
Proposition: Let $(\Omega,\mathcal{F},P$) be a probability space. Let $X,Y$ be
i.i.d. random variables with $E\left[|X|\right]<\infty$. Let $\mathcal{G}=\sigma(X+Y)$.
Then $\operatorname{E} \left[X \mid \mathcal{G}\right] = \operatorname{E} \left[Y \mid \mathcal{G}\right]=\frac{1}{2}(X+Y)$.
Proof: Let $\mu_{XY}$ be the joint distribution measure on $\mathbb{R}^{2}$
induced by $(X,Y)$. That is, $\mu_{XY}(B)=P\left(\left\{ \omega \mid (X(\omega), Y(\omega)) \in B\right\} \right)$.
Let $\mu_X$ and $\mu_Y$ be the distribution measures on $\mathbb{R}$
induced by $X$ and $Y$ respectively. Since $X$ and $Y$ are independent,
we have $\mu_{XY}=\mu_X\times\mu_Y$. Moreover, since $X$ and
$Y$ are identically distributed, $\mu_X=\mu_Y$. We denote $\mu=\mu_X=\mu_Y$.
Let $A\in\mathcal{G}$ be arbitrary. There exists a Borel set $B\subseteq\mathbb{R}$
such that $A=(X+Y)^{-1}(B)$. Hence $1_{A}(\omega)=1_{B}(X(\omega)+Y(\omega))$
for any $\omega\in\Omega$.
We have
\begin{align}
& \int_A \operatorname{E}\left[X\mid\mathcal{G}\right]\,dP = \int_A X\,dP=\int 1_B(X+Y)X \, dP = \int 1_B(x+y)x\,d\mu_{XY}(x,y) \\[10pt]
= {} & \iint1_{B}(x+y)x\,d\mu_{X}(x) \, d\mu_Y(y) = \iint 1_B(x+y)x \, d\mu(x) \, d\mu(y).
\end{align}
By the same argument,
$$
\int_A \operatorname{E}\left[Y\mid\mathcal{G}\right]\,dP=\iint1_{B}(x+y)y \, d\mu(x) \, d\mu(y).
$$
Now it is clear that
$$
\int_A \operatorname{E}\left[X\mid\mathcal{G}\right]\,dP=\int_A \operatorname{E} \left[Y\mid\mathcal{G}\right] \,dP
$$
and hence $\operatorname{E} \left[X \mid \mathcal{G}\right] = \operatorname{E}\left[Y \mid \mathcal{G}\right]$.
Lastly, $\operatorname{E}\left[X+Y\mid\mathcal{G}\right]=X+Y$. It follows that $\operatorname{E}\left[X\mid\mathcal{G}\right]=\operatorname{E} \left[Y \mid \mathcal{G} \right]=\frac 1 2 (X+Y)$.
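The conclusion can also be sanity-checked by brute-force enumeration on a discrete distribution. In the sketch below (mine; the particular support set is an arbitrary choice), $X$ and $Y$ are i.i.d. uniform on a four-point set and $E[X \mid X+Y=s]$ is computed directly:

```python
from fractions import Fraction
from itertools import product

values = [0, 1, 2, 5]                      # arbitrary demo support for X and Y
w = Fraction(1, len(values)) ** 2          # weight of each (x, y) pair

def cond_exp_X_given_sum(s: int) -> Fraction:
    """E[X | X + Y = s] by direct enumeration of the joint distribution."""
    pairs = [(x, y) for x, y in product(values, repeat=2) if x + y == s]
    return sum(Fraction(x) * w for x, y in pairs) / (len(pairs) * w)

for s in {x + y for x, y in product(values, repeat=2)}:
    assert cond_exp_X_given_sum(s) == Fraction(s, 2)
```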
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 1
} |
Count of a quantity coming negative in this Venn diagram Here is a question:
There are $140$ students in a batch. $10$ reserved stalls at the spring festival, $20$ sang on the stage and $45$ played games at various stalls. $8$ had reserved a stall and sang on the stage, $14$ sang and played games, $5$ who played games also had stalls reserved on their names. $2$ had stalls, sang and played games. How many did not go to the spring festival?
I tried to solve the problem with the Venn diagram but a set (people who just reserved stalls) is coming out to be $-1$. Can anyone help me out why?
Thanks.
| Your Venn diagram appears to agree with the statement. As you've noticed (and Srivatsan joked about), a negative number in this context doesn't have a physical interpretation.
Depending on where this question is from, it's possible that the statement is simply wrong. It's also possible that it is meant to say
There are 140 students in a batch. 10 reserved stalls at the spring event and did not play games or sing, 20 sang on the stage and did not reserve a stall or play games, etc.
in which case all of the Venn diagram counts will make sense; but this is not the usual interpretation.
I guess the answer is that this is just a bad question.
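The negative region the asker found, and the face-value inclusion-exclusion total, can both be reproduced in a few lines:

```python
stalls, sang, games = 10, 20, 45
stalls_and_sang, sang_and_games, games_and_stalls = 8, 14, 5
all_three = 2

# People who ONLY reserved a stall (innermost Venn region peeled off):
only_stalls = stalls - stalls_and_sang - games_and_stalls + all_three
print(only_stalls)                     # -1, impossible for a head count

# Taken at face value, inclusion-exclusion still gives a total attendance:
attended = (stalls + sang + games
            - stalls_and_sang - sang_and_games - games_and_stalls + all_three)
stayed_home = 140 - attended           # 90, but the data itself is inconsistent
```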
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Pronunciation of $M(x)$ and $m(x)$ Suppose I use two functions and I denoted them by lowercase and uppercase letters $m(x)$ and $M(x)$. Of course, I have to distinguish them somehow.
How do I read this? Is capital/uppercase $M$ of $x$ and lowercase $m$ of $x$ ok?
| Yeah, whatever makes you happy and is clear. I would personally, and I feel like this is the most common, read "big M of x" and "little m of x".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Estimate a upper bound of IQ scores Suppose the IQ scores of a million individuals have a mean of 100 and an SD of 10.
a) Without making any further assumptions about the distribution of the scores, find an upper bound on the number of scores exceeding 130.
b) Find a smaller upper bound on the number of scores exceeding 130, assuming the distribution of scores is symmetric about 100.
For part a, I used Chebyshev's inequality to calculate the upper bound: the probability is at most $1/3^2 = 1/9$.
And for part b, I understand that if the distribution is symmetric about 100 implies that $P(X\ge 130) = P(X\le 70)$, but I am not sure how to get a smaller upper bound.
Can someone help me here? And how many methods of finding an upper bound of a distribution are there in general?
| Chebyshev’s inequality bounds the probability of being at least $k$ standard deviations from the mean in either direction. If the distribution is symmetric, only half of the outcomes that are at least $k$ sigmas from the mean are on the high side.
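Translating both bounds into counts out of the million scores (a sketch; Chebyshev's inequality technically bounds $P(|X-\mu|\ge k\sigma)$, so "exceeding 130" is bounded by the $\ge$ version):

```python
n = 1_000_000
mean, sd, cutoff = 100, 10, 130
k = (cutoff - mean) / sd                  # 3 standard deviations

# (a) Chebyshev, two-sided: P(|X - mean| >= 3*sd) <= 1/9
bound_a = n / k**2                        # about 111,111 scores
# (b) with symmetry about the mean, only half the tail can be on the high side
bound_b = n / (2 * k**2)                  # about 55,555 scores

assert int(bound_a) == 111111 and int(bound_b) == 55555
```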
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Breaking a variable out of a trigonometry equation $A = 2 \pi r^2 - r^2 (2 \arccos(d/2r) - \sin(2 \arccos(d/2r)))$
Given $A$ and $r$ I would like to solve for $d$. However, I get stuck trying to break the $d/2r$ out of the trig functions.
For context this is the area of two overlapping circles minus the overlapping region. Given a radius and desired area I'd like to be able to determine how far apart they should be. I know $A$ should be bounded below by $0 (d = 0)$ and above by $2 \pi r^2 (d \le 2r)$.
| You can simplify your algebra a little bit by working in terms of angles. If $\theta$ is half of the central angle of the arc overlapping with the other circle, then $d=2r\cos\theta$ and your equation simplifies to
$$
A=2\pi r^2+r^2(\sin(2\theta)-2\theta).
$$
Of course, this is still implicit; one cannot really solve this equation for $\theta$ without a numerical method.
In addition, you may be interested to know that there are various "tricks": ways to write explicit solutions to transcendental equations like yours in terms of custom special functions, basically integrals that would still need to be computed numerically. An early paper in this area by E.E. Burniston & C.E. Siewert is called "Exact analytical solution of the transcendental equation $\alpha\sin\zeta=\zeta$" and was published in SIAM J. Appl. Math., Vol. 24, No. 4 (1973). It can be downloaded from C.E. Siewert's web page.
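Since the right-hand side $2\pi r^2 + r^2(\sin 2\theta - 2\theta)$ is monotone decreasing in $\theta$ on $[0, \pi/2]$ (its derivative is $r^2(2\cos 2\theta - 2) \le 0$), a plain bisection is enough. A sketch, with arbitrary demo values for $A$ and $r$:

```python
import math

def theta_for_area(A: float, r: float, tol: float = 1e-12) -> float:
    """Solve A = 2*pi*r^2 + r^2*(sin(2*theta) - 2*theta) for theta in [0, pi/2]
    by bisection; the right-hand side is monotone decreasing in theta."""
    f = lambda t: 2 * math.pi * r**2 + r**2 * (math.sin(2 * t) - 2 * t) - A
    lo, hi = 0.0, math.pi / 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid          # f decreasing: root lies to the right of mid
        else:
            hi = mid
    return (lo + hi) / 2

r = 1.0
theta = theta_for_area(1.8 * math.pi, r)   # any target between pi*r^2 and 2*pi*r^2
d = 2 * r * math.cos(theta)                # back out the distance between centers
assert abs(2 * math.pi * r**2 + r**2 * (math.sin(2 * theta) - 2 * theta)
           - 1.8 * math.pi) < 1e-9
assert 0 < d < 2 * r
```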
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove: if $f(x)\leq g(x)$ for all $x \in D$, then $\lim \limits_{x \to +x_0} f(x) \leq \lim \limits_{x \to +x_0} g(x)$ I've been self studying real analysis using Gaughan's Introduction to Analysis. In the chapter on the algebra of limits of functions he gives the following theorem, leaving the proof as an exercise:
Suppose $f:D \to\mathbb R$ and $g:D \to \mathbb R$, $x_0$ is an accumulation point of $D$ and $f$ and $g$ have limits at $x_0$. If $f(x) \leq g(x)$ for all $x \in D$, then $\lim \limits_{x\to+x_0}f(x) \leq \lim \limits_{x \to +x_0} g(x)$.
I've been thinking about this for a couple of days and can't seem to make any progress. I tried using the formal $\delta$-$\epsilon$ definition and thought about modeling the proof after the proof of the squeeze theorem since the two are kind of similar but nothing has come of it. Any hints as to how to approach this would be appreciated!
| Hint
If you look at $h=g-f$, then $h\ge0$ on $D$.
1. Prove that $$\lim_{x\to x_0}h(x) \qquad\text{ exists.}$$
2. Prove that $$\lim_{x\to x_0}h(x)\ge 0.$$
3. Use that $$\lim_{x\to x_0}h(x) = \lim_{x\to x_0}g(x) - \lim_{x\to x_0}f(x).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Counting words with parity restrictions on the letters Let $a_n$ be the number of words of length $n$ from the alphabet $\{A,B,C,D,E,F\}$ in which $A$ appears an even number of times and $B$ appears an odd number of times.
Using generating functions I was able to prove that $$a_n=\frac{6^n-2^n}{4}\;.$$
I was wondering if the above answer is correct and in that case what could be a combinatorial proof of that formula?
| There's a combinatorial method (by the way, I consider generating functions to be a combinatorial method, but never mind) using inclusion-exclusion. Count the total number of $n$-letter words, subtract off the ones with an odd number of $A$, subtract off the ones with an even number of $B$, then add back in the ones with both an odd number of $A$ and an even number of $B$. Of course, you have to work out the details....
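The closed form is easy to confirm by brute force for small $n$ (a quick check, not a proof):

```python
from itertools import product

def brute(n: int) -> int:
    """Count length-n words over {A,...,F} with an even number of A's
    and an odd number of B's, by direct enumeration."""
    return sum(1 for w in product("ABCDEF", repeat=n)
               if w.count("A") % 2 == 0 and w.count("B") % 2 == 1)

for n in range(1, 7):
    assert brute(n) == (6**n - 2**n) // 4
```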
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Limit-taking: Is this valid? Is the following limit-taking right? I am always confused as to when we are allowed to take the term-by-term limits and then combine them into the correct full limit; sometimes term-by-term limit-taking doesn't give the right "full" limit...
$$\lim\limits_{\epsilon\to0} {cf(x)f(x+\epsilon)\over c+\epsilon}= f^2(x)$$
perhaps I need to say that $f$ is continuous?
Thanks.
| Certainly you need that $f$ is continuous. If not, there could be values of $f(x+\epsilon)$ very different from $f(x)$ even for very small $\epsilon$. If $f$ is continuous this is correct. Do you have the theorem that the limit of a product is the product of the limits? You have $\lim\limits_{\epsilon\to0} {cf(x)f(x+\epsilon)\over c+\epsilon}=f(x)\lim\limits_{\epsilon\to0} (\frac{c}{c+\epsilon})f(x+\epsilon)$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/78942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$\cos x = kx$, finding $k$ that gives two solutions $\cos x = 0.3x$ has three solutions. $\cos x = 0.4x$ has one solution. How to find $k$ so that $\cos x = kx$ has two solutions?
| By symmetry we can assume $k>0$.
By the IVT there is a solution between $0$ and $\frac{\pi}{2}$ for every $k>0$, so for exactly two solutions you want the line to meet the curve exactly once more, at some $x_0$.
This forces $kx$ to be tangent to $\cos(x)$ at $x_0$, since a transversal crossing would come with a third solution.
Thus $\cos(x_0)=kx_0$ and $-\sin(x_0)=k$. Hence $k^2(x_0^2+1)=1$, which means that
$x_0$ must be the solution to
$$\cos(x_0)= \frac{x_0}{\sqrt{x_0^2+1}},$$
and the tangency that matters here lies in $(-\pi,-\frac{\pi}{2})$, numerically $x_0\approx -2.80$. (A tangency to the second positive "mountain" of $\cos(x)$, in $(\frac{3\pi}{2},2\pi)$, would give a much smaller slope, and for such a slope the line already crosses the cosine twice at negative $x$, producing four solutions rather than two.)
Then $k=\frac{1}{\sqrt{x_0^2+1}}\approx 0.336$, which sits between the given data points: $k=0.3$ gives three solutions and $k=0.4$ gives one.
The tangency equation unfortunately seems impossible to solve exactly, but you can approximate the solution numerically.
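Here is a bisection sketch (mine) for the tangency condition $\cos(x_0)=x_0/\sqrt{x_0^2+1}$. Note that the root which actually separates the three-solution regime from the one-solution regime sits at negative $x_0$, around $-2.8$; the resulting slope $k\approx 0.336$ lands between the given values $0.3$ and $0.4$, as it must:

```python
import math

def g(x: float) -> float:
    """Tangency condition cos(x0) = x0 / sqrt(x0^2 + 1), as a root problem."""
    return math.cos(x) - x / math.sqrt(x**2 + 1)

# Bracket the root: g(-3) < 0 < g(-2.5), and g is increasing on this bracket.
lo, hi = -3.0, -2.5
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
x0 = (lo + hi) / 2
k = 1 / math.sqrt(x0**2 + 1)

assert abs(math.cos(x0) - k * x0) < 1e-9   # the line passes through the curve
assert abs(-math.sin(x0) - k) < 1e-6       # ...with matching slope (tangency)
assert 0.3 < k < 0.4                       # consistent with the 3- and 1-solution data
```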
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
How to add cosines with different phases using phasors So, here's a question:
$ \cos( \omega t ) + 2 \cos( \omega t + \frac{\pi}{4} ) + 3 \cos( \omega t + \frac{\pi}{2} ) $
To add these together, I figure there should be at least 2 ways:
1) Cosine addition laws:
$$
\cos( \omega t ) +
2 \left(
\cos( \omega t ) \cos( \frac{\pi}{4} ) -
\sin( \omega t ) \sin( \frac{\pi}{4} )
\right)
+ 3 \left(
\cos( \omega t ) \cos( \frac{\pi}{2} ) -
\sin( \omega t ) \sin( \frac{\pi}{2} )
\right) \\
=\cos( \omega t )
\left(
1 + \sqrt{2}
\right)
-
\sin( \omega t )
\left(
3 + \sqrt{2}
\right)
$$
2) Phasors / complex addition
$$
1 \angle 0 + 2 \angle 45 ^\circ + 3 \angle 90^\circ
$$
$$
= 1 + \sqrt{2} + j \sqrt{2} + j 3
$$
$$
= 1 + \sqrt{2} + j ( 3 + \sqrt{2} )
$$
Which has
$ A = \sqrt{ 14 + 8 \sqrt{2} } \approx 5.03 $
$ \phi = \arctan{ \left( \frac{ 3 + \sqrt{2} }{ 1 + \sqrt{2} } \right) } \approx 1.07 rad \approx 61 ^\circ $
Thus the answer is $ 5.03 \angle 61^\circ $, or $5.03 \cos( \omega t + 1.07 )$
If you graph them, $5.03 \cos( \omega t + 1.07 )$ produces the same graph as $ \cos( \omega t )
\left( 1 + \sqrt{2} \right) - \sin( \omega t ) \left( 3 + \sqrt{2} \right) $
So how can you convert between them?
| You have
$$ \tan\phi = \frac{\sin\phi}{\cos\phi}= \frac{3+\sqrt{2}}{1+\sqrt{2}} $$
Since $0<\phi<\pi/2$ we know that $\sin\phi,\cos\phi>0$. Therefore, $\sin\phi$ and $\cos\phi$ are equal to
$$ \sin\phi = \frac{3+\sqrt{2}}{\sqrt{(3+\sqrt{2})^2+(1+\sqrt{2})^2}}
=\frac{3+\sqrt{2}}{\sqrt{14+8\sqrt{2}}}$$
and
$$ \cos\phi = \frac{1+\sqrt{2}}{\sqrt{(3+\sqrt{2})^2+(1+\sqrt{2})^2}}
=\frac{1+\sqrt{2}}{\sqrt{14+8\sqrt{2}}}$$
Therefore
$$
\begin{split}
\cos(\omega t)(1+\sqrt{2}) - \sin(\omega t)(3+\sqrt{2}) &=
\sqrt{14+8\sqrt{2}}\left( \cos(\omega t)\cos\phi - \sin(\omega t) \sin\phi \right)\\&
= \sqrt{14+8\sqrt{2}} \cos(\omega t+\phi)
\end{split}
$$
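Python's complex numbers do the phasor bookkeeping directly, which makes the conversion between the two forms easy to check (a sketch, not from the original answer):

```python
import cmath
import math

# 1∠0° + 2∠45° + 3∠90° as complex phasors:
total = cmath.rect(1, 0) + cmath.rect(2, math.pi / 4) + cmath.rect(3, math.pi / 2)

A = abs(total)                    # amplitude
phi = cmath.phase(total)          # phase in radians

assert abs(total.real - (1 + math.sqrt(2))) < 1e-12
assert abs(total.imag - (3 + math.sqrt(2))) < 1e-12
assert abs(A - math.sqrt(14 + 8 * math.sqrt(2))) < 1e-12
assert abs(phi - math.atan((3 + math.sqrt(2)) / (1 + math.sqrt(2)))) < 1e-12

# The two time-domain forms agree: A*cos(wt + phi) is the expanded sum.
for wt in [0.0, 0.5, 1.0, 2.0]:
    lhs = A * math.cos(wt + phi)
    rhs = math.cos(wt) * (1 + math.sqrt(2)) - math.sin(wt) * (3 + math.sqrt(2))
    assert abs(lhs - rhs) < 1e-9
```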
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Finding a polynomial with given roots, degree, and specific coefficient
Find a degree 4 polynomial having zeros -6, -3, 2 and 6 and the coefficient of $x^4$ equal 1
The first step is something like $p(x)=c(x-6)(x+3)(x-2)(x-6)$ as those are all the 4 zeros.
The coefficient of $x^4$ equal to 1 is throwing me off though.
| You are almost right and almost done. That first factor should be $x+6$, not $x-6$ (which you have twice). Any value of $c\neq 0$ will give you a polynomial with the roots in the right place.
And when you multiply out what you have,
$$c(x+6)(x+3)(x-2)(x-6) = cx^4 + (\text{lower terms}).$$
So if you want the coefficient of $x^4$ to be $1$, then $c$ should be...
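The expansion can be checked with a pure-Python coefficient convolution (my sketch, not part of the answer; coefficient lists are lowest degree first):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

p = [1]
for root in (-6, -3, 2, 6):
    p = poly_mul(p, [-root, 1])        # multiply in the factor (x - root)

# 216 - 36x - 42x^2 + x^3 + x^4: the x^4 coefficient is already 1 when c = 1.
assert p == [216, -36, -42, 1, 1]
```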
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A function which changes sign infinitely many times, but for which L'Hôpital's rule works I found a visual proof of L'Hôpital's rule by Giorgio Goldoni which uses an additional hypothesis: $g'(x)$ (i.e. the function in the denominator) cannot change its sign.
My question is this: is this additional hypothesis restrictive? Does there exist a function whose first derivative changes sign infinitely many times, but for which L'Hôpital's rule works?
| If the denominator changes sign infinitely many times as $x\to a$, then by Darboux's theorem it will be $0$ at points arbitrarily close to $a$. That means $\lim_{x\to a} \frac{f'}{g'}$ fails to exist at all, because it can only exist if there is a set $A\ni a$ such that $A$ is open in the domain of $\frac{f}{g}$, and $\frac{f'}{g'}$ is defined everywhere on $A\setminus\{a\}$.
Also, if $g'$ changes sign infinitely often, then so does $g''$ if it exists (and so forth by induction), so no matter what order of L'Hôpital we use, it will fail to work. Thus, higher-order L'Hôpital cannot work for such functions either.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Inner product space computation
If $x = (x_1,x_2)$ and $y = (y_1,y_2)$ show that $\langle x,y\rangle = \begin{bmatrix}x_1 & x_2\end{bmatrix}\begin{bmatrix}2 & -1 \\ 1 & 1\end{bmatrix}\begin{bmatrix}y_1 \\ y_2\end{bmatrix}$ defines an inner product on $\mathbb{R}^2$.
Is there any hints on this one? All I'm thinking is to compute a determinant, but what good is that?
| An inner product has to have all of the following properties:
Positivity, $\langle v, v \rangle \ge 0$ for all $v \in V$.
Definiteness, $\langle v, v \rangle = 0$ if and only if $v = 0$.
Linearity in the first slot, $\langle \alpha(u+v), w \rangle = \alpha \langle u, w \rangle + \alpha \langle v, w \rangle$.
Conjugate symmetry, $\langle u, v \rangle = \overline{\langle v, u \rangle}$. Since the vector space is real, any scalar is equal to its conjugate, so you have to show that $\langle u, v \rangle = \langle v, u \rangle$.
To prove that your inner product has all of these properties, you can just calculate the inner product symbolically for appropriate arbitrary vectors and then show that the result has the desired property. For example, to show positivity simply calculate $\langle (x_1, x_2),(x_1, x_2) \rangle $ and show that the resulting expression must always be positive. Similar arguments should work for the rest.
Concretely, here is the argument for the first two parts. Positivity and definiteness concern an inner product of a vector with itself; so we take an arbitrary vector $x = (x_1, x_2)$ and compute its inner product with itself.
$$
\langle (x_1, x_2), (x_1, x_2) \rangle =
\begin{bmatrix}x_1 & x_2\end{bmatrix}
\begin{bmatrix}2&-1\\1&1\end{bmatrix}
\begin{bmatrix}x_1 \\ x_2\end{bmatrix} =
\begin{bmatrix}2x_1 + x_2 & -x_1 + x_2\end{bmatrix}
\begin{bmatrix}x_1 \\ x_2\end{bmatrix}
$$
$$
= 2x_1^2 + x_1 x_2 - x_1 x_2 + x_2^2 = 2x_1^2 + x_2^2
$$
Since any real number squared is non-negative, we can see that the inner product of a vector with itself is never negative, proving positivity. Additionally, we know that if the sum of a sequence of non-negative terms is zero, then each term must be zero. From our expression for the inner product, this implies that if $\langle v, v \rangle = 0$ then $v = 0$. Plugging in $x_1 = 0, x_2 = 0$ shows that the converse is also true, so this proves definiteness.
For linearity, compute $\langle (\alpha x_1, \alpha x_2), (y_1, y_2) \rangle$ and $\alpha \langle (x_1, x_2), (y_1, y_2) \rangle$; you will find that the results are the same. Then compare $\langle (x_1 + x_3, x_2 + x_4), (y_1, y_2) \rangle$ and $\langle (x_1, x_2), (y_1, y_2) \rangle + \langle (x_3, x_4), (y_1, y_2) \rangle$; the results will be the same.
For conjugate symmetry (or in this case, symmetry) compute $\langle (x_1, x_2), (y_1, y_2) \rangle$ and $\langle (y_1, y_2), (x_1, x_2) \rangle$ and compare. (One caution: with the matrix exactly as printed, the first expands to $2x_1y_1 - x_1y_2 + x_2y_1 + x_2y_2$ and the second to $2x_1y_1 + x_1y_2 - x_2y_1 + x_2y_2$, which agree only when $x_1y_2 = x_2y_1$; symmetry holds as intended when both off-diagonal entries of the matrix are equal.)
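The positivity computation above is easy to machine-check; a small sketch (mine) that expands the matrix product by hand:

```python
def inner(x, y):
    """<x, y> = [x1 x2] [[2, -1], [1, 1]] [y1, y2]^T, expanded by hand."""
    x1, x2 = x
    y1, y2 = y
    return (2 * x1 + x2) * y1 + (-x1 + x2) * y2

# Positivity / definiteness: <x, x> collapses to 2*x1^2 + x2^2.
for x in [(0.0, 0.0), (1.0, -2.0), (-3.5, 0.5)]:
    assert abs(inner(x, x) - (2 * x[0]**2 + x[1]**2)) < 1e-12
    assert inner(x, x) >= 0
assert inner((0, 0), (0, 0)) == 0
```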
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
The Sum of the Odd Divisors of n
The sum of the odd divisors of n is $-\sum_{d|n}(-1)^{n/d}d$, and if n is even, then
$$\sum_{d|n}(-1)^{n/d}d=2\sigma(n/2)-\sigma(n)$$
Could you give me some hints on that?
| Hint: $d$ is an even divisor of $n$ if and only if $\frac{d}{2}$ is a divisor of $\frac{n}{2}$.
EDIT : Here's a complete proof. We shall prove two things : the sum of odd divisors of $n$ is given by the formula $-\sum_{d |n} (-1)^{n/d} d$, and if $n$ is even, then it's also equal to $\sigma(n)-2\sigma(n/2)$.
If $n$ is odd, this is obvious, so we're reduced to the case where $n$ is even. From the hint above, we get that the sum of even divisors of $n$ is $2\sigma(n/2)$. So the sum of odd divisors is $\sigma(n)-2\sigma(n/2)$. Finally, consider the sum
$$\sigma(n) + \sum_{d |n} (-1)^{d} \frac{n}{d} = \sum_{d |n} (1+(-1)^{d}) \frac{n}{d} = 2 \sum_{d |n, \ d \text{ even}} \frac{n}{d}$$
Changing the variable to $d' = d/2$, you get
$$\sigma(n) + \sum_{d |n} (-1)^{d} \frac{n}{d} = 2 \sum_{d' |n/2}\frac{n/2}{d'} = 2 \sigma(n/2)$$
Which concludes the proof. On a side note, you could perform the calculation using Dirichlet series, thus finding:
$$\eta(s) \zeta(s-1) = (1-2^{1-s}) \zeta(s) \zeta(s-1)$$
where $\eta(s) = \sum_{n \ge 1} \frac{(-1)^{n+1}}{n^s}$.
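Both identities are easy to spot-check by brute force over small $n$ (my sketch, with a deliberately naive $\sigma$):

```python
def sigma(n: int) -> int:
    """Sum of all divisors of n (naive)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def odd_divisor_sum(n: int) -> int:
    """Sum of the odd divisors of n."""
    return sum(d for d in range(1, n + 1, 2) if n % d == 0)

for n in range(1, 200):
    # First formula: -sum_{d|n} (-1)^(n/d) d equals the odd-divisor sum.
    assert odd_divisor_sum(n) == -sum((-1)**(n // d) * d
                                      for d in range(1, n + 1) if n % d == 0)
    # Second formula, for even n: sigma(n) - 2*sigma(n/2).
    if n % 2 == 0:
        assert odd_divisor_sum(n) == sigma(n) - 2 * sigma(n // 2)
```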
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Difference between maximum and minimum? If I have a problem such as this:
We need to enclose a field with a fence. We have 500m of fencing material and a building is on one side of the field and so won’t need any fencing. Determine the dimensions of the field that will enclose the largest area.
This is obviously a maximizing question, however what would I do differently if I needed to minimize the area? Whenever I do these problems I just take the derivative, set it equal to zero and solve for $x$, and the answer always seems to appear, but what does one do differently to find the minimum versus the maximum?
| The problem of minimizing and the problem of maximizing are essentially the same.
As you say, take the derivative of your function and find the critical points. Do not forget, however, that global extrema can also occur at endpoints of the reasonable domain (i.e. the values of $x$ allowed within the context).
For the fence problem, suppose we let $x$ represent the width of the fence. You would first write a function for area in terms of $x$. Notice that this function, although technically defined for all reals, really only makes sense on $0 \leq x \leq 500$ , as the width of the fence cannot be smaller than 0 nor larger than 500. Thus, you need to check not only whatever critical points you find, but also $x = 0$ and $x = 500$. To determine which is the maximum and which is the minimum, you simply compare the values of the function at all these interesting points.
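The recipe above, carried out for the fence problem in a short sketch. (Which dimension $x$ denotes is an assumption on my part: below, $x$ is the single side parallel to the building, so the domain $0 \le x \le 500$ from the answer applies and the two perpendicular sides are $(500-x)/2$ each.)

```python
def area(x: float) -> float:
    """Enclosed area when x metres face the building and the remaining
    500 - x metres are split between the two perpendicular sides."""
    return x * (500 - x) / 2

# A'(x) = (500 - 2x)/2 = 0  =>  x = 250; compare against the endpoints.
candidates = [0.0, 250.0, 500.0]
best = max(candidates, key=area)
assert best == 250.0 and area(best) == 31250.0     # the maximum
assert area(0.0) == area(500.0) == 0.0             # the endpoint minima
```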
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
Proof: If $f'=0$ then $f$ is constant I'm trying to prove that if $f'=0$ then $f$ is constant WITHOUT using the Mean Value Theorem.
My attempt [sketch of proof]: Assume that $f$ is not constant. Identify interval $I_1$ such that $f$ is not constant. Identify $I_2$ within $I_1$ such that $f$ is not constant. Repeat this and by the Nested Intervals Principle, there is a point $c$ within $I_n$ for any $n$ such that $f(c)$ is not constant... This is where I realized that my approach might be wrong. Even if it isn't I don't know how to proceed.
Thanks for reading and any help/suggestions/corrections would be appreciated.
| Does the real line have gaps? That's the issue. Suppose you can partition the line into two sets $A$ and $B$, so that
*
*Every real number belongs to either $A$ or $B$;
*No number belongs to both;
*Every member of $A$ is less than every member of $B$;
*For every member of $A$, there is a larger number that is still a member of $A$;
*For every member of $B$, there is a smaller number that is still a member of $B$.
In that case, there would be no boundary point, such that every number less than that point is in $A$ and every number greater than that is in $B$. That would be a gap.
Now suppose $f(x) = 0$ if $x\in A$ and $f(x)=1$ if $x\in B$. Then $f\;'(x)=0$ for every value of $x$, but $f$ is not constant.
You can't prove every function whose derivative is everywhere $0$ is constant unless you rule out gaps. The proof of the mean value theorem conventionally relies on Rolle's theorem, which in turn relies on the fact that a continuous function on a closed interval has a maximum and a minimum in that interval. That theorem is not true unless the real line is gapless. A continuous function could increase on the set $A$ described above and decrease on $B$, and it would have no maximum.
The mean value theorem is how the gaplessness of the line gets involved in the proof that if $f\;'=0$ everywhere then $f$ is constant.
Probably you could find other ways of proving that, but they'd have to invoke gaplessness somehow.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 2
} |
linear ODE with constant coefficients, proof Let $
\sum\limits_{k = 0}^n {y^{\left( k \right)} a_k } = 0
$ a homogeneous ODE, where the $a_k$ are constants. How can I solve the equation when the roots are repeated? One way, which I saw on Wikipedia, uses the fact that if $ e^{cx} $ is a solution, then $ x^r e^{cx} $ is one too. How can I prove this? It's difficult for me to evaluate the sum, because I want to show that $
\sum\limits_{k = 0}^n {\left( {x^r e^{cx} } \right)^{\left( k \right)} a_k } = 0
$ but I need to evaluate $
{\left( {x^r e^{cx} } \right)^{\left( k \right)} }
$ and I don't know how to do it.
| I don't know if you have the background for this, but I do not think there is much choice if you allow the degree $n \geq 3.$ It is not so bad for $n=2.$
Anyway, you introduce a bunch of variables $y_0 = y, y_1 = y', y_2 = y'',$ and so on. As your coefficients are constant, we may divide through by whatever $a_n$ might be to arrive at a revised constant coefficient equation with $a_n = 1.$
So the new system of equations is a linear system beginning with $$y_0' = y_1, \; y_1' = y_2, \ldots, \; y_{n-2}' = y_{n-1},$$ but finally
$$ y_{n-1}' = - b_0 y_0 - b_1 y_1 - \ldots - b_{n-1} y_{n-1},$$ where I have taken $b_j = a_j / a_n.$
We write a column vector $Y$ with entries $y_0, y_1, \ldots, y_{n-1}.$ That system now becomes
$$ Y' = B Y $$ with initial conditions written as
$$ Y(0) = Y_0$$
and the solution to the system is
$$ Y = e^{B x} Y_0$$
Note that $B$ has exactly the form of a companion matrix, see the square matrix at
http://en.wikipedia.org/wiki/Companion_matrix#Linear_recursive_sequences
The appearance of $x, x^2,\ldots$ comes from the Jordan normal form of $B,$ precisely when there are repeated eigenvalues (characteristic values) of $B,$ and when there are off-diagonal entries in the relevant Jordan block. If there are repeated roots but the Jordan normal form is diagonal anyway, then no polynomial terms appear. I do not know anything special about the Jordan normal form of a companion matrix, perhaps something precise can be said that need not hold for other types of coefficient matrix $B.$
This is half a semester of work if you have already had linear algebra. Anyway, see
http://en.wikipedia.org/wiki/Ordinary_differential_equation#Fundamental_systems_for_homogeneous_equations_with_constant_coefficients
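As a concrete sanity check of the companion-matrix picture (my own example, not from the links above): take $y'' - 2y' + y = 0$, whose characteristic root $c = 1$ is repeated, so $x e^{x}$ should be a solution. The matrix exponential of the companion matrix applied to the initial data $(y(0), y'(0)) = (0, 1)$ indeed reproduces $y(x) = x e^{x}$:

```python
import numpy as np
from scipy.linalg import expm

# y'' - 2y' + y = 0 rewritten as Y' = B Y with Y = (y, y'):
# here b0 = 1, b1 = -2, so the last row of B is (-b0, -b1) = (-1, 2).
B = np.array([[0.0, 1.0],
              [-1.0, 2.0]])   # companion matrix, double eigenvalue 1

# Initial conditions picking out the solution y(x) = x e^x:
Y0 = np.array([0.0, 1.0])     # y(0) = 0, y'(0) = 1

for x in [0.5, 1.0, 2.0]:
    y = (expm(B * x) @ Y0)[0]
    assert np.isclose(y, x * np.exp(x))   # matches x e^x
```

The polynomial factor $x$ appears precisely because the Jordan block of $B$ for the eigenvalue $1$ is non-diagonal, as described above.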
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Absolute continuity of a distribution function This appeared on an exam I took.
$Z \sim \text{Uniform}[0, 2\pi]$, and $X = \cos Z$ and $Y = \sin Z$. Let $F_{XY}$ denote the joint distribution function of $X$ and $Y$.
Calculate $\mathbb{P}\left[X+ Y \leq 1\right]$. So this was easy -
$$\begin{align}
\mathbb{P}\left[X+Y \leq 1\right] &= \mathbb{P}\left[\sin Z+ \cos Z \leq 1\right] \\
&=\mathbb{P}\left[\sqrt{2}\sin\left(Z+\frac{\pi}{4}\right)\leq 1\right] \\
&= \mathbb{P}\left[Z \leq \arcsin\frac{1}{\sqrt{2}} - \frac{\pi}{4} \right] \\
&= \dfrac{\arcsin\frac{1}{\sqrt{2}} - \frac{\pi}{4}}{2\pi}
\end{align}
$$
But then, the question asked if $F_{XY}$ was absolutely continuous. I don't think so, but how would I prove it?
I thought about proceeding like this
$$
\begin{align}
F_{XY}(x, y) &= \mathbb{P}\left[X \leq x, Y \leq y\right],\; x, y \in [0, 1] \\
&= \mathbb{P}\left[Z \leq \min(\arccos x, \arcsin y)\right]
\end{align}
$$
This is definitely continuous, but is it absolutely continuous?
Thanks!
| Absolutely continuous distributions are measures with a density with respect to the Lebesgue measure. The distribution of $(X,Y)$ is concentrated on the unit circle, which has Lebesgue measure zero, hence this distribution is not absolutely continuous.
Regarding the computation of $\mathrm P(X+Y\leqslant1)$, note that $\arcsin(1/\sqrt2)=\pi/4$ hence your formula would yield $\mathrm P(X+Y\leqslant1)=0$ (a quite unlikely result). Rather, drawing a picture of the unit circle and of the line of equation $x+y=1$ shows that $x+y\leqslant1$ corresponds to the three quarters of the circle from the point $(0,1)$ to the point $(1,0)$ through the points $(-1,0)$ and $(0,-1)$, hence $\mathrm P(X+Y\leqslant1)=3/4$.
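The corrected value $3/4$ is easy to sanity-check by simulation (a sketch with a fixed seed):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)

# Fraction of the uniform angle Z for which cos Z + sin Z <= 1.
p = np.mean(np.cos(z) + np.sin(z) <= 1.0)
print(round(p, 3))   # close to 0.75
```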
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
limit of $f$ and $f''$ exists implies limit of $f'$ is 0
Prove that if $\lim\limits_{x\to\infty}f(x)$ and $\lim\limits_{x\to\infty}f''(x)$ exist, then $\lim\limits_{x\to\infty}f'(x)=0$.
I can prove that $\lim\limits_{x\to\infty}f''(x)=0$. Otherwise $f'(x)$ goes to infinity and $f(x)$ goes to infinity, contradicting the fact that $\lim\limits_{x\to\infty}f(x)$ exists. I can also prove that if $\lim\limits_{x\to\infty}f'(x)$ exists, it must be 0. So it remains to prove that $\lim\limits_{x\to\infty}f'(x)$ exists. I'm stuck at this point.
| Hint $\ $ This follows easily from L'Hôpital's rule since
$$\lim_{x\to\infty}\ (f-f')\ =\ \lim_{x\to\infty}\frac{e^x\,(f-f')}{e^x}\ =\ \lim_{x\to\infty}\frac{e^x\,(f-f'+f'-f'')}{e^x}\ =\ \lim_{x\to\infty}\ (f-f'')\ \text{ exists}$$
See also the similar classic problem due to Hardy.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Locally Free Sheaves If $X$ is a locally noetherian scheme and $F$ is a coherent sheaf, I want to show the following equivalence:
$F$ is locally free iff its stalk is a free $O_{X,p}$-module for every $p$ in $X$.
=> follows from the definition of locally free.
<= is difficult for me: I don't see how to combine finite-type condition on $F$ with locally noetherian property.
| As the name indicates, "locally free" is a local concept.
So we can assume that $X=Spec(A)$, the affine scheme associated to the noetherian ring $A$, and $F=\tilde M$, the coherent sheaf associated to the finitely generated $A$-module $M$.
The sheaf $F=\tilde M$ is locally free if and only if the module $M$ is projective.
And a finitely generated module $M$ over a noetherian ring $A$ is projective if and only if all its localizations $M_{\mathfrak p} \; (\mathfrak p \in Spec(A))$ are free $A_{\mathfrak p}$-modules.
Since at a point $p\in X$ corresponding to the prime $\mathfrak p \in Spec(A)$ we have $\mathcal O_{X,p}=A_{\mathfrak p}$ and $ F_p=M_{\mathfrak p}$, we see that indeed freeness of all stalks of $F$ implies local freeness of the given coherent sheaf $F$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 1,
"answer_id": 0
} |
ArcTan(2) a rational multiple of $\pi$? Consider a $2 \times 1$ rectangle split by a diagonal. Then the two angles
at a corner are ArcTan(2) and ArcTan(1/2), which are about $63.4^\circ$ and $26.6^\circ$.
Of course the sum of these angles is $90^\circ = \pi/2$.
I would like to know if these angles are rational multiples of $\pi$.
It doesn't appear that they are, e.g., $(\tan^{-1} 2 )/\pi$ is computed as
0.35241638234956672582459892377525947404886547611308210540007768713728\
85232139736632682857010522101960
to 100 decimal places by Mathematica. But is there a theorem that could be applied here to
prove that these angles are irrational multiples of $\pi$? Thanks for ideas and/or pointers!
(This question arose thinking about Dehn invariants.)
| $\arctan(x)$ is a rational multiple of $\pi$ if and only if the complex number $1+xi$ has the property that $(1+xi)^n$ is a real number for some positive integer $n$. It is fairly easy to show this isn't possible if $x$ is an integer with $|x|>1$.
This result essentially falls out of the fact that $\mathbb Z[i]$ is a UFD, together with knowing which primes in $\mathbb Z[i]$ are divisors of their conjugates.
You can actually generalize this for all rationals, $|x|\neq 1$, by noting that $(q+pi)^n$ cannot be real for any $n$ if $(q,p)=1$ and $|qp|> 1$. So $\arctan(\frac{p}q)$ cannot be a rational multiple of $\pi$.
Fuller proof:
If $q+pi=z\in \mathbb Z[i]$, and $z^n$ is real, with $(p,q)=1$, then if $z=u\pi_1^{\alpha_1} ... \pi_n^{\alpha_n}$ is the Gaussian integer prime factorization of $z$ (with $u$ some unit,) $z^n = u^n \pi_1^{n\alpha_1}...\pi_n^{n\alpha_n}$. But if a Gaussian prime $\pi_i$ is a factor of a rational integer, $z^n$, then the complement, $\bar{\pi}_i$ must also be a factor of $z^n$, and hence must be a factor of $z$.
But if $\pi_i$ and $\bar{\pi}_i$ are relatively prime, that means $\pi_i\bar{\pi}_i=N(\pi_i)$ must divide $z$, which means that $N(\pi_i)$ must divide $p$ and $q$, so $p$ and $q$ would not be relatively prime.
So the only primes which can divide $q+pi$ can be the primes which are multiples of their complements. But the only such primes are the rational integers $\equiv 3\pmod 4$, and $\pm1\pm i$. The rational integers are not allowed, since, again, that would mean that $(p,q)\neq 1$, so the only prime factors of $z$ can be $1+i$ (or its unit multiples.) Since $(1+i)^2 = 2i$, $z$ can have at most one factor of $1+i$, so that means, finally, that $z\in\{\pm 1 \pm i, \pm 1, \pm i\}$.
But then $|pq|=0$ or $|pq|=1$.
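The criterion in the first paragraph can be probed computationally with exact integer arithmetic (my sketch; the bound 400 is arbitrary): if $\arctan 2 = \frac{p}{q}\pi$, then $(1+2i)^q$ would be real, i.e. its imaginary part would vanish.

```python
# Powers of 1 + 2i computed with exact integer arithmetic.
# If arctan(2) = (p/q)*pi, then (1 + 2i)^q would be real, i.e. its
# imaginary part b would vanish.  Check exponents up to 400.
a, b = 1, 2          # current power, starting at (1 + 2i)^1
for n in range(2, 401):
    a, b = a * 1 - b * 2, a * 2 + b * 1   # multiply by (1 + 2i)
    assert b != 0, f"(1+2i)^{n} is real"
print("no real power up to exponent 400")
```

This of course only rules out denominators $q \le 400$; the argument in the answer is what rules out all of them.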
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 3,
"answer_id": 2
} |
Ext groups of a point on a scheme Given a scheme $X$ over a field $k$ and a closed point $x$ with residue field $k(x)$ and inclusion $i:{x}\rightarrow X$ one can consider the following abelian groups
(1)$Ext^1_{\mathcal O_X}(i_*k(x),i_*k(x))$
and
(2)$Ext^1_{\mathcal O_{X,x}}(k(x),k(x))$.
The second one is just seen as Ext-group in the sense of modules over a ring.
Are they isomorphic?
I would define a map from (1) to (2) by just taking the stalk at $x$, but I don't really see how one would get from (2) to (1).
Addition:
And is there a structure of a $k-$ vector space on (2) such that the iso is also one of $k-$spaces? (1) surely has a $k-$vector space structure as the scheme is defined over $k$.
| The second Ext-group is an $\mathcal O_{X,x}$-module. Since $\mathcal O_{X,x}$ is a $k$-algebra, it is in particular a $k$-vector space (and this is compatible with the $k$-v.s. structure on the first Ext-group and the map from the first to the second).
Furthermore, the two Ext-groups are isomorphic (i.e. the map you defined is an isomorphism). To see this, choose an affine open, say $U = $ Spec $A$, around $x$, and let $\mathfrak p$ be the prime ideal of $A$ corresponding to $x$.
Firstly, restriction to $U$ induces an isomorphism
$Ext^1_{\mathcal O_X}(k(x),k(x)) \cong Ext^1_{\mathcal O_U}(k(x),k(x))$
(since any extension of the two skyscrapers at $x$ is again supported on $x$,
and hence on $U$). The second Ext group in this isomorphism can then be computed in terms of modules, and the isomorphism we want reduces to the isomorphism
$Ext^1_A(A/\mathfrak p, A/\mathfrak p) = Ext^1_{A_{\mathfrak p}}(A/\mathfrak p,A/\mathfrak p),$ which is straightforward. (Any $A$-module which is an extension of $A_{\mathfrak p}$-modules is again an $A_{\mathfrak p}$-module.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Continuous holomorphic square root proofs Let $G\subset \mathbb{C}$ be a domain and let $f$ be a logarithmic function on $G$. It will now be shown that:
i) $ \displaystyle{ w(z)= \exp(\frac{1}{2}f(z))}$ is a holomorphic square root on $G$, i.e. $(w(z))^{2} = z \ \forall z \in G.$
ii) Every continuous function $w:G\rightarrow \mathbb{C}$ with $(w(z))^{2}=z \ \forall z$ is of the form $w(z)=\pm \exp(\frac{1}{2}f(z))$.
iii) If $0\in G$ then there is no holomorphic square root on $G$.
VVV's work
i) The holomorphy of $w(z)$ follows directly from the theorem for composition of holomorphic functions. Every logarithmic function is of the form $f(z) = \log(|z|) + i\phi$, where $\phi$ is an argument of $z$ (determined up to $2\pi\mathbb{Z}$),
so: $w(z) = \exp(\frac{1}{2}(\log(|z|)+i\phi)) = |z|^{1/2}e^{i\frac{\phi}{2}} \Rightarrow (w(z))^{2} = |z|e^{i\phi} = z$
ii) in i) it is shown that $w(z)=\exp(\frac{1}{2}f(z))$ is a solution of $(w(z))^{2}=z \ \forall z\in G$. Since $(-w(z))^{2} = (-1)^{2}(w(z))^{2} = (w(z))^{2}$, the function $-w(z)$ also satisfies this criterion. How does one show that these are all solutions?
iii) Assume $0\in G$, then look at $w(0) = \exp(\frac{1}{2}f(0))$: since $\log(0)$ isn't defined, the holomorphic square root cannot exist either.
Are these proofs correct ? Does anybody see how to show that in $ii)$ $w(z)$ and $-w(z)$ are the only solutions? Please do tell me
| For (iii), use that if $w(z)^2 = z$, then $2w(z)w'(z)=1$. So $w'(0)$ cannot be defined.
For (ii): As mentioned in comments, in general, if $a,b$ are holomorphic on $G$, and $a(z)b(z)=0$ for all $z\in G$, then one of $a$ or $b$ is identically zero. So, if $w$ is a holomorphic square root function, and $w_0$ is another, then, since $0=w(z)^2-w_0(z)^2 = (w(z)-w_0(z))(w(z)+w_0(z))$, then one of $w(z)-w_0(z)$ or $w(z)+w_0(z)$ must be identically zero on $G$.
[The harder thing to prove, but not part of the problem, is that, if you can define a holomorphic square root on $G$, then you can define a logarithm on $G$.]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/79956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Taking the derivative of $\frac1{x} - \frac1{e^x-1}$ using the definition Given $f$:
$$
f(x) = \begin{cases}
\frac1{x} - \frac1{e^x-1} & \text{if } x \neq 0 \\
\frac1{2} & \text{if } x = 0
\end{cases}
$$
I have to find $f'(0)$ using the definition of derivative (i.e., limits). I already know how to differentiate and stuff, but I still can't figure out how to solve this. I know that I need to begin like this:
$$
f'(0) = \lim_{h \to 0} \frac{f(h)-f(0)}{h} = \lim_{h \to 0} \frac{\frac1{h} - \frac1{e^h-1}-\frac1{2}}{h}
$$
But I don't know how to do this. I feel like I should, but I can't figure it out. I tried distributing the denominator, I tried l'Hôpital's but I get $0$ as the answer, while according to what my prof gave me (this is homework) it should be $-\frac1{12}$. I really don't know how to deal with these limits; could someone give me a few tips?
| A good strategy in such problems is to massage the problem into recognizable limits. (EDIT: I like this approach mainly because it avoids Taylor expansion and l'Hôpital's rule. This is, however, not the simplest approach.)
We can "simplify" given function as follows:
$$
\begin{eqnarray*}
\frac{\frac{1}{x} - \frac{1}{e^x - 1} - \frac{1}{2}}{x}
&=&
\frac{(2-x)(e^x - 1) - 2x}{2x^2 (e^x - 1)}
\\ &=&
\frac{\color{Blue}{(2-x)}(e^x - 1)- \color{Blue}{(2-x)} \frac{2x}{2-x}}{\color{Red}{2} \ \color{Magenta}{x^2} \color{Green}{(e^x - 1)}}
\\ &=&
\frac{\color{Blue}{2-x}}{\color{Red}{2}} \cdot \frac{\color{Magenta}{x}}{\color{Green}{e^x - 1}} \cdot \frac{e^x - 1 - \frac{2x}{2-x}}{\color{Magenta}{x^3}}. \tag{1}
\end{eqnarray*}
$$
The first two factors both approach $1$ as $x \to 0$. Let us concentrate on the third factor. By Taylor expansion (or the formula for summing a geometric series), we have (for $|x| < 1$),
$$
\begin{eqnarray*}
\frac{2x}{2-x}
=
\frac{x}{1 - x/2}
&=&
x + x \left(\frac{x}{2} \right) +x \left(\frac{x}{2} \right)^2 + x \left( \frac x 2 \right)^3 + \cdots
\\ &=&
\color{Red}{x + \frac{x^2}{2}} + \color{Blue}{x \left(\frac{x}{2} \right)^2 + x \left( \frac x 2 \right)^3 + \cdots}
\\ &=&
\color{Red}{x + \frac{x^2}{2}} + \color{Blue}{x \left(\frac{x}{2} \right)^2 \frac{1}{1 - \frac x 2}} \quad\quad \text{(summing the GP)}
\\ &=&
\color{Red}{x + \frac{x^2}{2}} \color{Blue}{+\frac{x^3}{2(2-x)}}. \tag{2}
\end{eqnarray*}
$$
(Though I used infinite GPs to obtain the final expression, one could verify it directly as well. In particular, the two expressions are equal for all $x \neq 2$, not just $|x| < 1$.) Plugging $(2)$ in $(1)$, we have
$$
\begin{eqnarray*}
\frac{e^x - 1 - \frac{2x}{2-x}}{x^3}
&=&
\frac{e^x - 1 \color{Red}{- x - \frac{x^2}{2}} \color{Blue}{-\frac{x^3}{2(2-x)}}}{x^3}
\\ &=&
\frac{e^x - 1 \color{Red}{- x - \frac{x^2}{2}}}{x^3} \color{Blue}{-\frac{1}{2(2-x)}}
\end{eqnarray*}
$$
Once again, the second term has an easy limit of $\frac14$. The first term
$$
\frac{e^x - 1 - x - \frac{x^2}{2}}{x^3}
$$
is also a standard limit. This limit can be evaluated using, say, l'Hôpital's rule or the Taylor expansion of $e^x$; its value is $\frac{1}{3!}$. Plugging in both these limits, we can get the final answer to be
$$
\frac{1}{3!} - \frac{1}{4} = -\frac{1}{12}.
$$
Bonus! If you wish to avoid Taylor expansion and l'Hôpital's rule even further, I will mention an "elementary" ways to evaluate limits such as
$$
\lim_{x \to 0} \frac{e^x - 1 - x}{x^2} \quad\text{and}\quad \lim_{x \to 0} \frac{e^x - 1 - x - \frac{x^2}{2}}{x^3}
$$
assuming that these limits exists! (I stress that this is not a complete proof; yet I present it because I find the technique interesting.)
I will show the idea for the first limit, and leave the second one as an exercise. Suppose $A \stackrel{\text{(def)}}{=} \lim \limits_{x \to 0} \frac{e^x - 1 - x}{x^2}$ exists. Then $e^x = 1 + x + A x^2 + o(x^2)$. Therefore, by squaring: $$e^{2x} = (1 + x + A x^2 + o(x^2))^2 = \color{Red}{1} + \color{Blue}{2x} + \color{DarkGreen}{x^2 (1+2A)} + o(x^2) .$$ On the other hand, making the substitution $x \to 2x$ in the definition of the limit, we have $e^{2x} = \color{Red}{1} + \color{Blue}{2x} + \color{DarkGreen}{4A x^2} + o(x^2)$.
Equating the dominant terms in these two expressions, we must have $\color{DarkGreen}{4A} = \color{DarkGreen}{1 + 2A}$, which gives $A = \frac{1}{2}$.
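Returning to the original difference quotient, the value $-\frac{1}{12}$ can be confirmed symbolically (a sketch using SymPy):

```python
from sympy import symbols, exp, Rational, limit

h = symbols('h')
# Difference quotient (f(h) - f(0)) / h with f(0) = 1/2.
quotient = (1 / h - 1 / (exp(h) - 1) - Rational(1, 2)) / h
print(limit(quotient, h, 0))   # -1/12
```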
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
How could we find the largest number in the sequence $ \sqrt{50},2\sqrt{49},3\sqrt{48},\cdots 49\sqrt{2},50$? How to find the largest number in the sequence$$ \sqrt{50},2\sqrt{49},3\sqrt{48},\cdots 49\sqrt{2},50$$
I am interested in a "calculus-free" approach.
Thanks,
| I would like to apply calculus whenever it is possible. So, here is my attempt (IMO this is not a better solution, though):
Consider the function $f(x)=x^2(51-x)$ over $[1,50]$, the square of the general term $x\sqrt{51-x}$. Then as usual, $f'(x)=0\Rightarrow x=0$ or $x=34$, and $f''(0)>0,f''(34)<0$, so $f$ has a local minimum at $x=0$ and its unique global maximum over the interval at $x=34$. So,...
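Since only integer indices matter here, one can also simply compare $n^2(51-n)$ over $n = 1,\dots,50$ directly (a calculus-free check, though by brute force):

```python
# The n-th term is n * sqrt(51 - n); comparing squares, maximize n^2 * (51 - n).
best = max(range(1, 51), key=lambda n: n * n * (51 - n))
print(best)   # 34
```

So the largest term of the sequence is $34\sqrt{17}$.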
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Variance over two periods with known variances? If period 1 has variance v1 and period 2 has variance v2, what is the variance over period 1 and period 2? (period 1 and period 2 are the same length)
I've done some manual calculations with random numbers, and I can't seem to figure out how to calculate the variance over period 1 and period 2 from v1 and v2.
| There is no simple relationship between the variance of different periods and the variance for the entire time period.
As a simple example consider the following two data sets: {1, 2, 3} and {3, 4, 5}. The variance for each of these two sets of numbers is 1 whereas the overall variance is 2. Now if you were to add 1 to each number in the second set, which would result in the set {4, 5, 6}, its variance is still 1 whereas the overall variance has increased to 3.5. Thus, just knowing v1 and v2 will not be enough to compute the overall variance.
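The counterexample is easy to reproduce with the sample variance from Python's standard library (a sketch):

```python
from statistics import variance   # sample variance, divisor n - 1

a, b = [1, 2, 3], [3, 4, 5]
assert variance(a) == 1 and variance(b) == 1
assert variance(a + b) == 2          # combined variance is not 1

c = [x + 1 for x in b]               # shift the second set to [4, 5, 6]
assert variance(c) == 1              # its own variance is unchanged...
assert variance(a + c) == 3.5        # ...but the combined variance grew
```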
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Characterizing graph classes by the same forbidden set in two ways Kuratowski and Wagner theorems characterize planar graphs in terms of forbidden homeomorphic subgraphs and forbidden minors, respectively. It turns out that both forbidden sets are the same: $\{K_5,K_{3,3}\}$.
Are there other examples where a forbidden set defines the same class of graphs in both ways?
| One example that springs to mind: A (simple) graph is a forest iff it has no $K_3$ as a minor, and also iff it has no $K_3$ as a topological minor (homeomorphic subgraph).
More generally, if the forbidden graphs have maximum degree 3, there's no difference between minors and topological minors (see Proposition 1.7.4 in Diestel's textbook).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
A book asks me to prove a statement but I think it is false The problem below is from Cupillari's Nuts and Bolts of Proofs.
Prove the following statement:
Let $a$ and $b$ be two relatively prime numbers. If there exists an
$m$ such that $(a/b)^m$ is an integer, then $b=1$.
My question is: Is the statement true?
I believe the statement is false because there exists an $m$ such that $(a/b)^m$ is an integer, and yet $b$ does not have to be $1$. For example, let $m=0$. In this case, $(a/b)^0=1$ is an integer as long as $b \neq 0$.
So I think the statement is false, but I am confused because the solution at the back of the book provides a proof that the statement is true.
| This statement is true under certain conditions. You must assume $b \neq 0$ and $m \geq 1$ (which rules out your $m=0$ example). If $(a,b)=1$, you can easily show that $(a^m, b^m) = 1$. This means that $(a/b)^m = a^m / b^m$ is never an integer unless $b^m = \pm 1$, but that means $b = \pm 1$.
The case where $a = 0$ and $|b| > 1$ must be excluded because then $(a,b) > 1$.
Hope that helps,
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
How much is cohomotopy dual to homotopy? To what degree can we dualize theorems regarding homotopy into theorems about cohomotopy (or is there a good source that tries to do this)?
For instance, is there some kind of Hurewicz theorem relating cohomotopy and ordinary cohomology? Is there a "cohomotopy extension property" (something that applies when relative cohomotopy groups are trivial)? If two spaces are cohomologically equivalent and have some property in cohomotopy analogous to simply-connected, are they cohomotopy equivalent?
Thanks, this is primarily a reference request, however there is the possibility that all this is impossible so no such reference exists, which would also be an acceptable answer.
| The homotopy groups can be written as covariant homotopy invariant functors $\pi_n:\mathrm{Top}_\ast\to\mathrm{Set}$. If we were to consider contravariant homotopy invariant functors $\pi^n:\mathrm{Top}_\ast^{op}\to\mathrm{Set}$, we would obtain the cohomotopy sets. How dual is it? Well, $\pi^n(S^m)=\pi_m(S^n)$. If $X$ is a CW-complex of dimension (at most) $n$, then $\pi^p(X)\to H^p(X)$ is a bijection. See the nlab. As for whether or not a "cohomotopy extension property" exists, I don't know; it seems like an interesting thing!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 0
} |
How strongly does a matrix distort angles? How strongly does it distort lengths anisotropically? Let there be given a square matrix $M \in \mathbb R^{N\times N}$. I would like to have some kind of measure of how far it
*
*Distorts angles between vectors
*Stretches and squeezes, discriminating between directions.
While I am fine with $M = S \cdot Q$, with $S$ being a positive multiple of the identity and $Q$ being an orthogonal matrix, I would like to measure how far $M$ deviates from this form. In more visual terms, I would like a numerical expression for the extent to which a set is non-similar to its image under $M$.
What I have in mind is a numerical measure, just like, e.g., the determinant of $M$ measures the volume of the image of a cube under $M$. Can you help me?
| How much $M$ distorts angles between vectors depends not just on $M$, but on the vectors. The extent to which a set is "not similar" to its image depends not just on $M$, but on the set. So I'm not sure there's a coherent answer to your question.
E.g., if $$M=\pmatrix{1&0\cr0&0\cr}$$ then some big angles get squashed flat while vectors that are already parallel stay parallel; some sets get squashed flat while some sets that are already flat remain unchanged. What would you like the numerical measure of that matrix to be?
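That said, the answer's example can be examined through its singular values, a standard (if partial) numerical handle on anisotropic stretching, offered here as a suggestion rather than as part of the answer:

```python
import numpy as np

M = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# Singular values: the maximal and minimal stretch factors over unit vectors.
s = np.linalg.svd(M, compute_uv=False)
print(s)   # [1. 0.]
# The ratio sigma_max / sigma_min would be the condition number; here it
# diverges, reflecting that some directions are squashed flat.
```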
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Order statistics of i.i.d. exponentially distributed sample I have been trying to find the general formula for the $k$th order statistic of $n$ i.i.d. exponential random variables with mean $1$, and how to calculate the expectation and the variance of the $k$th order statistic. Can someone give me a general formula? It would be nice if there were a proof.
| The minimum $X_{(1)}$ of $n$ independent exponential random variables with parameter $1$ is exponential with parameter $n$. Conditionally on $X_{(1)}$, the second smallest value $X_{(2)}$ is distributed like the sum of $X_{(1)}$ and an independent exponential random variable with parameter $n-1$. And so on, until the $k$th smallest value $X_{(k)}$ which is distributed like the sum of $X_{(k-1)}$ and an independent exponential random variable with parameter $n-k+1$.
One sees that $X_{(k)}=Y_{n}+Y_{n-1}+\cdots+Y_{n-k+1}$ where the random variables $(Y_i)_i$ are independent and exponential with parameter $i$. Each $Y_i$ is distributed like $\frac1iY_1$, and $Y_1$ has expectation $1$ and variance $1$, hence
$$
\mathrm E(X_{(k)})=\sum\limits_{i=n-k+1}^n\frac1i,\qquad
\mbox{Var}(X_{(k)})=\sum\limits_{i=n-k+1}^n\frac1{i^2}.
$$
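These formulas are easy to corroborate by simulation (a sketch with a fixed seed; I take $n = 5$, $k = 3$):

```python
import numpy as np

n, k = 5, 3
rng = np.random.default_rng(1)
samples = np.sort(rng.exponential(1.0, size=(200_000, n)), axis=1)
x_k = samples[:, k - 1]                     # k-th smallest in each row

theory_mean = sum(1 / i for i in range(n - k + 1, n + 1))   # 1/3 + 1/4 + 1/5
theory_var = sum(1 / i**2 for i in range(n - k + 1, n + 1))

print(round(x_k.mean(), 3), round(theory_mean, 3))
print(round(x_k.var(), 3), round(theory_var, 3))
```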
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 2,
"answer_id": 0
} |
How can one prove that $\sqrt[3]{\left ( \frac{a^4+b^4}{a+b} \right )^{a+b}} \geq a^ab^b$, $a,b\in\mathbb{N^{*}}$? How can one prove that $\sqrt[3]{\left ( \frac{a^4+b^4}{a+b} \right )^{a+b}} \geq a^ab^b$, $a,b\in\mathbb{N^{*}}$?
| This is expanding on a comment by Bill; the following might work:
You need
$$ (a+b)\ln\sqrt[3]{(\frac{a^4+b^4}{a+b})} \geq a \ln(a) + b \ln(b) \,.$$
Or
$$ \ln\sqrt[3]{\frac{a^4+b^4}{a+b}} \geq \frac{a}{a+b} \ln(a) + \frac{b}{a+b} \ln(b) \,.$$
Now, if I remember right, the Jensen inequality for Log reads:
$$\frac{a}{a+b} \ln(a) + \frac{b}{a+b} \ln(b) \leq \ln (\frac{a^2+b^2}{a+b}) \,.$$
Thus, you only need to show
$$\left( \frac{a^2+b^2}{a+b} \right)^3 \leq \frac{a^4+b^4}{a+b} \,.$$
Or
$$(a^2+b^2)^3 \leq (a+b)^2(a^4+b^4) \,.$$
EDIT
After a long calculation, this reduces to
$$a^6+3a^4b^2+3a^2b^4+b^6 \leq a^6+a^4b^2+2a^5b+2ab^5+a^2b^4+b^6$$
or
$$a^4b^2+a^2b^4 \leq a^5b+ab^5$$
After canceling $ab$ this follows immediately from the AM-GM: $a^3b \leq \frac{a^4+a^4+a^4+b^4}{4}$ and $ab^3 \leq \frac{a^4+b^4+b^4+b^4}{4}$
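The key polynomial inequality also follows from an explicit factorization of the difference, which can be checked with SymPy (a sketch):

```python
from sympy import symbols, expand, factor

a, b = symbols('a b', positive=True)
diff = (a + b)**2 * (a**4 + b**4) - (a**2 + b**2)**3
factored = factor(diff)
print(factored)
# The factors are 2ab, (a - b)^2 and a^2 + ab + b^2, each nonnegative
# for a, b > 0, so diff >= 0, i.e. (a^2+b^2)^3 <= (a+b)^2 (a^4+b^4).
assert expand(factored - 2*a*b*(a - b)**2*(a**2 + a*b + b**2)) == 0
```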
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Proving a Diophantine equation has no solutions I'm trying to show that $7u^2=x^2+y^2+z^2$ has no solutions in $\mathbb{Z}$ when $u$ is odd. If $u$ is even, then it's simple to show that no solutions exists by looking modulo $4$. The odd case looks harder.
| For the odd case, let $u=1+2k$; now $u^2=1+4k+4k^2 = 1+4k(k+1)$. As $k$ or $k+1$ is even, we have $u^2\equiv1\mod8$ whenever $u$ is odd, hence $7u^2\equiv7\mod8$. For the righthand side, we have a sum of three squares mod 8, that is, three numbers taken from the set $\{0,1,4\}$ and summed $\mod8$, which cannot equal 7.
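The final step — that three squares can never sum to $7 \bmod 8$ — takes one line to verify exhaustively (a sketch):

```python
from itertools import product

squares_mod8 = {x * x % 8 for x in range(8)}        # {0, 1, 4}
sums = {(p + q + r) % 8 for p, q, r in product(squares_mod8, repeat=3)}
print(sorted(sums))   # [0, 1, 2, 3, 4, 5, 6] -- 7 never occurs
assert 7 not in sums
```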
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
"Physical" meaning of higher moments (their values and their existence) Suppose I have a probability distribution $A$ with continuous support over $\mathbb{R}$. Suppose $A$ has a sequence of finite (central) moments $\mu_1, \mu_2,\ldots,\mu_n$. I understand that $\mu_1$ is the mean, and $\mu_2$, $\mu_3$ and $\mu_4$ define variance, skewness, and kurtosis of the distribution, respectively.
I am wondering about the meaning of $\mu_5, \mu_6, \mu_7,\ldots$ When they are finite, what do they represent about the distribution $A$?
I understand that the odd central moments of the symmetric distribution are zero, so I am assuming that odd moments are related to the skew. What do even higher moments represent? I am particularly curious about $\mu_6$.
Also, suppose all moments of $A$ are finite. What does that say about $A$? Does it mean that $A$ has a specific representation? I've heard somewhere that all finite moments of $A$ with support $\mathbb{R}$ means that the tails of $A$ decay exponentially. Is that true? If so, can someone point me to a proof?
| There is actually a quite simple visual interpretation to all the higher central moments.
To ease the interpretation, and without loss of generality, assume that the moments refer to centered and standardized random variables.
Let $Z = (X-\mu)/\sigma$ and $V = Z^k$, and let $p_k(v)$ denote the probability density or mass function of $V$. For odd $k$, $p_k(v)$ has positive and negative support; only non-negative support for even $k$.
Here is the visual: The $k$th central (standardized) moment of $X$ is equal to the point of balance of $p_k(v)$.
Since the transformation greatly dilates values where $|Z| >1$ and contracts values where $|Z| < 1$, the point of balance is mostly determined by the extremes where $|Z| > 1$.
In the case of odd $k$, this point of balance is determined by the relative extremity of the left and right tails of the distribution of $X$ in the portions of the tails that are most amplified by the given power $k$. Higher $k$ amplifies more extreme portions of the tails.
In the case of even $k$, this point of balance is determined by the overall extremity of the tails of the distribution of $X$, without regard to whether left or right, again in the portion of the tail that is most amplified by the given power $k$. Again, higher $k$ amplifies more extreme portions of the tail.
It is worth noting that this visual representation shows that the appearance of the center of the distribution of $X$ is all but irrelevant. In particular, "peakedness" or "flatness" interpretations of even moments are erroneous.
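As a numeric illustration of the point-of-balance picture (my own sketch, not part of the answer; the exponential sample and the cutoff $|Z|>1$ are illustrative choices), the following computes standardized moments of a skewed sample and shows how much of each moment comes from the tails:

```python
import random

random.seed(0)

# Sample from a right-skewed distribution (exponential), then standardize.
xs = [random.expovariate(1.0) for _ in range(100_000)]
n = len(xs)
mu = sum(xs) / n
sigma = (sum((x - mu) ** 2 for x in xs) / n) ** 0.5
zs = [(x - mu) / sigma for x in xs]

def std_moment(k):
    """k-th standardized moment: the mean (point of balance) of V = Z^k."""
    return sum(z ** k for z in zs) / n

def tail_share(k):
    """Fraction of the k-th moment contributed by the tails |Z| > 1."""
    total = sum(z ** k for z in zs)
    tail = sum(z ** k for z in zs if abs(z) > 1)
    return tail / total

# Skewness (k=3) is positive for this right-skewed sample, and the tails
# supply essentially all of the higher even moments.
print(std_moment(3), tail_share(4), tail_share(6))
```

The growing tail share with $k$ is exactly the "center of the distribution is all but irrelevant" phenomenon described above.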
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
How to prove such a simple inequality? Let $f\in C[0,\infty)\cap C^1(0,\infty)$ be an increasing convex function with $f(0)=0$, $\lim_{t\to\infty}\frac{f(t)}{t}=+\infty$, and $\frac{df}{dt} \ge 1$.
Then there exist constants $C$ and $T$ such that for any $t\in [T,\infty)$, $\frac{df}{dt}\le Ce^{f(t)}.$
Is it correct? If the conditions are not enough, please add some condition and prove it. Thank you
| Here are some more details on my answer in the comment above. Let
$$ g_n(t) = 2^{-n}\left(\frac{1}{\pi}\arctan\left(2^ne^{4n}\pi(t-n)\right) + \frac{1}{2}\right).$$
Note that $|g_n(t)| \leq 2^{-n}$ and $g_n'(t) \geq 0$ for all $t$ and in particular $g_n'(n) = e^{4n}$.
Now define
$$f(t) = t + 2 + \sum_{n=1}^\infty g_n(t).$$
You should verify that this series actually converges and $f\in C^1$ (this is not hard). Then $|f(t)| \leq t + 3$ and $f'(n) = 1 + e^{4n}$. If the statement were true, then there would exist $C$ such that
$$e^{4n} \leq Ce^{n+3},$$
for all integers $n$ which is a contradiction.
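A quick numerical sanity check of this counterexample (my own sketch; the truncation of the series at $n=60$ and the sample points are my choices, not part of the answer):

```python
import math

def g_prime(n, t):
    # g_n(t) = 2^{-n}((1/pi) arctan(2^n e^{4n} pi (t - n)) + 1/2), hence
    # g_n'(t) = 2^{-n} (1/pi) a / (1 + (a (t-n))^2) with a = 2^n e^{4n} pi.
    a = (2 ** n) * math.exp(4 * n) * math.pi
    return (2 ** -n) * (1 / math.pi) * a / (1 + (a * (t - n)) ** 2)

def f_prime(t):
    # f(t) = t + 2 + sum_n g_n(t)  =>  f'(t) = 1 + sum_n g_n'(t);
    # truncating at n = 60 (the omitted tail is positive and tiny here).
    return 1.0 + sum(g_prime(n, t) for n in range(1, 60))

# The n-th bump alone contributes g_n'(n) = e^{4n}, so f'(n) >= 1 + e^{4n},
# which eventually beats C e^{f(n)} <= C e^{n+3} for every fixed C.
for n in (1, 2, 3):
    print(n, f_prime(n), math.exp(4 * n))
```

The printed ratios $f'(n)/e^{n+3}$ grow without bound, matching the contradiction in the answer.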
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
How do you convert $(12.0251)_6$ into fractions? How do you convert $(12.0251)_6$ (in base 6) into fractions?
I know how to convert a fraction into base $x$ by constantly multiplying the fraction by $x$ and simplifying, but I'm not sure how to go the other way?
| I assume you want decimal notation in your fraction. We have
$$(12.0251)_6=(1\times 6^1) + (2\times 6^0) +\frac{0}{6^1}+\frac{2}{6^2}+\frac{5}{6^3}+\frac{1}{6^4}.$$
Bring the right-hand side to the common denominator $6^4$, and calculate. Equivalently, multiply by $6$ often enough that you get an integer $N$ ($4$ times) and divide by that power of $6$. After a while, we get that $N=10471$. Since $6^4=1296$,
$$(12.0251)_6=\frac{10471}{1296}.$$
Another way: Note that $(12.0251)_6=\dfrac{(120251)_6}{(10000)_6}$. Convert numerator and
denominator to base $10$. This looks slicker, but the computational details are the same.
Comment: There is a nice trick for making the calculation of the numerator easier. It goes back almost two thousand years in China, and is sometimes called Horner's Method, after an early 19th century British schoolmaster. We work from the left. Calculate in turn
$(1\times 6)+2=a$
$(a\times 6)+0=b$
$(b\times 6)+2=c$
$(c\times 6)+5=d$
$(d\times 6)+1=e$
Our numerator is $e$. We find that $e=10471$.
Horner's Method does not really speed up things in this small calculation. But with longer calculations, there is substantial gain. Horner's Method is a useful tool when we want to evaluate a high degree polynomial $P(x)=a_0x^n+a_1x^{n-1}+ \cdots +a_n$ at a particular numerical value of $x$. With a bit of practice, it is even a handy tool for evaluation of polynomials with a calculator. There is no need to ever "store" intermediate results.
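The whole computation — Horner's method on the digit string, then division by $6^4$ — fits in a few lines. This is a sketch in Python; `base_to_fraction` is my own helper name:

```python
from fractions import Fraction

def base_to_fraction(s, base):
    """Convert a string like '12.0251' in the given base to an exact
    Fraction using Horner's method on the concatenated digits."""
    int_part, _, frac_part = s.partition('.')
    digits = int_part + frac_part
    n = 0
    for d in digits:
        # Horner step: multiply by the base, add the next digit.
        n = n * base + int(d, base)
    return Fraction(n, base ** len(frac_part))

print(base_to_fraction("12.0251", 6))   # 10471/1296
```

`Fraction` reduces automatically; here $10471/1296$ is already in lowest terms since $1296=2^4\cdot3^4$ and $10471$ is divisible by neither $2$ nor $3$.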
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Solutions to Linear Diophantine equation $15x+21y=261$ Question
How many positive solutions are there to $15x+21y=261$?
What I got so far
$\gcd(15,21) = 3$ and $3|261$
So we can divide through by the gcd and get:
$5x+7y=87$
And I'm not really sure where to go from this point. In particular, I need to know how to tell how many solutions there are.
| You find all solutions of $ax+by=c$ with
$$x=x_0-\frac{b}{(a,b)}\,t , \qquad y=y_0+\frac{a}{(a,b)}\,t$$
$x_0$ and $y_0$ you find with the Euclidean algorithm backwards:
$15x+21y=261$ with $261=3*87$
21=1*15+6
15=2*6+3
6=2*3+0
$3=15-2*6=15-2*(21-1*15)=15-2*21+2*15=3*15-2*21$
Multiplying by $87$ we get
$261=261\cdot 15-174\cdot 21$
So $x_0=261$ and $y_0=-174$ (note the minus sign: the coefficient of $21$ above is $-174$).
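To answer the "how many positive solutions" part, one can brute-force the equation directly; the check below (my own sketch) also confirms the parametrization $x=261-7t$, $y=-174+5t$ obtained from $(x_0,y_0)=(261,-174)$ with $\frac{b}{(a,b)}=7$, $\frac{a}{(a,b)}=5$:

```python
# All positive solutions of 15x + 21y = 261, found by brute force.
solutions = [(x, y)
             for x in range(1, 261 // 15 + 1)
             for y in range(1, 261 // 21 + 1)
             if 15 * x + 21 * y == 261]
print(solutions)   # [(2, 11), (9, 6), (16, 1)]

# Cross-check against the parametrization x = 261 - 7t, y = -174 + 5t;
# positivity forces 35 <= t <= 37.
param = [(261 - 7 * t, -174 + 5 * t) for t in range(35, 38)]
assert sorted(param) == sorted(solutions)
```

So there are exactly three positive solutions, one for each of $t=35,36,37$.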
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 2
} |
A problem about measure Let $f\in C[0,\infty)$. Define $A=[1,\infty)$ and $f^{-1}(A)=\{s\in [0,\infty): f(s)\geq 1\}$. If the measure $m(f^{-1}(A))<\infty$, then there exists a bounded interval $[a,b]\subset [0,\infty)$ and a measurable $B\subset[0,\infty)$ with $m(B)=0$ such that $f^{-1}(A)\setminus B\subset [a,b].$
| Updated answer to new problem: This is still false.
Let $F$ be the closed set $\cup_{n=1}^\infty [n,n+1/2^n]$ and set
$f(x)=1-d(x,F)$, where $d(x,F)$ is the distance from $x$ to $F$. Then $f(x)=1$
if and only if $x\in F$, so $f^{-1}(A)=F$.
Try $f(x)=\sin(x)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Substitution - integrals I am attempting to solve this integral using substitution
$\int (x^2 +1) (x^3 +3x)^4dx$ I make $u=x^3+3x$ and then made $dx=du/(3x^2 + 3)$
I then got $1/3 \int (x^3+3x)^4$ I have no idea what to do now.
| Your substitution $u=x^3+3x$ is a good idea as it yields $\frac{du}{dx}=3x^2 + 3$
Substituting in you get:
$\int (x^2 +1) (x^3 +3x)^4dx$
$= \frac{1}{3} \int (3x^2 + 3)(x^3 + 3x)^4 dx$
$= \frac{1}{3} \int u^4 du$ (this is where you seem to have gone wrong: you converted to $du$ yet left the integral in terms of $x$)
$= \frac{1}{15} u^5 + c$
$= \frac{1}{15} (x^3 + 3x)^5 + c$
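A quick numeric check of the result (my own sketch: differentiate the antiderivative $F(x)=\frac{1}{15}(x^3+3x)^5$ by a central finite difference and compare with the integrand):

```python
def integrand(x):
    return (x ** 2 + 1) * (x ** 3 + 3 * x) ** 4

def F(x):
    # Antiderivative found by the substitution u = x^3 + 3x.
    return (x ** 3 + 3 * x) ** 5 / 15

h = 1e-6
for x in (0.3, 0.7, 1.1):
    numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
    print(x, numeric_derivative, integrand(x))   # the two columns agree
```

The agreement at several points is strong evidence that $F'(x)$ equals the integrand, i.e. that the constant $\frac{1}{15}$ is right.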
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/80994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Combinatorics counting, inclusion exclusion problem Santa Claus has five toy airplanes of each of n plane models. How many ways are there to put one airplane in each of $r$ ($r \geq n$) identical stockings such that all models of planes are used?
| If the stockings are truly indistinguishable, we’re simply counting the integer solutions of the equation $x_1+x_2+\dots+x_n=r$ that satisfy the inequalities $1\le x_i\le 5$ for $i=1,\dots n$: $x_i$ is the number of planes of model $i$ that go into stockings.
Without the upper bounds on the $x_i$’s there are $\binom{r-1}{n-1}$ solutions. The number of these that exceed the limit on a fixed $x_i$ is $\binom{r-6}{n-1}$. In fact, for any $S\subseteq [n]$ the number of integer solutions that exceed the limit on every $x_i$ with $i\in S$ is $\binom{r-1-5|S|}{n-1}$. There are $\binom{n}k$ subsets of $[n]$ of size $k$, so by the inclusion-exclusion principle the number of acceptable solutions $-$ those exceeding none of the upper bounds $-$ is $$\sum_{k=0}^n(-1)^k\sum_{\substack{S\subseteq[n]\\|S|=k}}\binom{r-1-5k}{n-1}=\sum_{k=0}^n(-1)^k\binom{n}k\binom{r-1-5k}{n-1}\;.$$
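The closed form can be sanity-checked by brute force for small $n$ and $r$ (my own sketch; `by_formula` and `by_brute_force` are hypothetical helper names):

```python
from itertools import product
from math import comb

def by_formula(n, r):
    # sum_k (-1)^k C(n,k) C(r-1-5k, n-1), reading C(m, j) = 0 when m < 0
    return sum((-1) ** k * comb(n, k)
               * (comb(r - 1 - 5 * k, n - 1) if r - 1 - 5 * k >= 0 else 0)
               for k in range(n + 1))

def by_brute_force(n, r):
    # count integer solutions of x_1 + ... + x_n = r with 1 <= x_i <= 5
    return sum(1 for xs in product(range(1, 6), repeat=n) if sum(xs) == r)

for n, r in [(2, 7), (3, 8), (3, 12), (4, 10)]:
    assert by_formula(n, r) == by_brute_force(n, r)
print("formula matches brute force")
```

Note the guard for $r-1-5k<0$: Python's `math.comb` raises on negative arguments, whereas the inclusion-exclusion convention treats those binomials as $0$.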
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Some interesting results from PI decimals I was reading some notes about PI and as a typical IT guy I have decided to test some algorithm but before I wanted to check number quantity per decimal length (explanation in next paragraph). For resource I have used: http://www.piday.org/million.php there I took 1000000 decimal and wrote a small application to see how many times 1,2,3,4,5,6,7,8,9 and 0 numbers are used. And I get some interesting results and wanted to share with you.
i.e. in the first 100000 decimals there are in total 9999 0s (zeros). Generally the total count of each digit is very close to (number of decimals)/10;
or
total of 1000000 decimal: 4499934
total of 100000 decimal: 449333
total of 10000 decimal: 44894
total of 1000 decimal: 4476
total of 100 decimal: 477
total of 10 decimal: 41
Do you think it can be interesting? If someone wants I can share full results. (Can be useless information but I was doing just for fun, so dont judge me:))
| (I'd post this as a comment, but ran out of space.)
Your results are not surprising. Partly because there is no (known) reason why any one digit should occur more than other digits, many mathematicians believe that all digits $0$ to $9$ occur with about equal frequency. In technical terms, it is believed that $\pi$ is a normal number.
So among the first $N$ digits, you should expect to see the digit $0$ about $N/10$ times, the digit $1$ about $N/10$ times, and so on: each of the ten digits about $N/10$ times. This approximation gets (relatively) better as $N$ becomes large. So the sum of the first $N$ digits will roughly be
$$\begin{align}
&\frac{N}{10}(0) + \frac{N}{10}(1) + \frac{N}{10}(2) + \dots + \frac{N}{10}(9) \\
=& \frac{N}{10} \left( 0 + 1 + \dots + 9 \right) \\
=& \frac{N}{10} (45)
\end{align}$$
which is what you're seeing.
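Since the digits of $\pi$ aren't bundled with Python, here is a sketch of the same estimate using i.i.d. uniform random digits as a stand-in for a (conjecturally) normal number — an assumption of mine, chosen only to illustrate the $45N/10$ heuristic:

```python
import random

random.seed(42)

# Stand-in for the digits of a normal number: i.i.d. uniform digits 0-9.
digits = [random.randrange(10) for _ in range(1_000_000)]

for N in (10, 100, 1000, 10_000, 100_000, 1_000_000):
    # digit sum of the first N digits vs. the (45/10) * N estimate
    print(N, sum(digits[:N]), 4.5 * N)
```

The relative agreement improves as $N$ grows, just as in the questioner's table for $\pi$.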
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
How can I prove $[0,1]\cap\operatorname{int}{(A^{c})} = \emptyset$?
If $A \subset [0,1]$ is the union of open intervals $(a_{i}, b_{i})$ such that each
rational number in $(0, 1)$ is contained in some $(a_{i}, b_{i})$,
show that boundary $\partial A= [0,1] - A$. (Spivak- calculus on
manifolds)
If I prove that $[0,1]\cap\operatorname{int}{(A^{c})} = \emptyset$, the proof is complete.
I tried to find a contradiction, but I didn't find one.
| All you need is that $A$ contains all the rationals in $[0,1]$. Now suppose, for a contradiction, that the interior of $A^c$ was non-empty -- say that it contained $x$. Then by definition $A^c$ has to contain an open interval around $x$. But every open interval contains a rational number, and by assumption there are no rationals in $A^c$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
How to show that $x=\ln 3$ solves $x=\ln(10/3 - e^{-x})$ My homework has tasked me with finding $x$ when $\cosh x=5/3$. I know that the solution is $\ln (3)$, but I can't figure out how to solve it myself. The furthest I can simplify it is the following:
$$\frac{e^x+e^{-x}}{2} = 5/3$$
$$ e^x+e^{-x} = 10/3$$
$$e^x = \frac{10}{3}-e^{-x}$$
$$x = \ln \left(\frac{10}{3}-e^{-x} \right)$$
Now, if I put this into Wolfram Alpha it tells me that the answer is $\ln(3)$, but it doesn't tell me how it solved that. Also, I'm guessing there may be another way to solve this by taking a different route than the above.
| Start with $$e^x+e^{-x}={10\over3}$$
Multiply both sides by $e^x$:
$$
e^x\cdot e^x+e^x\cdot e^{-x}={10\over 3}e^x
$$
Simplify:
$$
(e^x)^2+1={10\over 3}e^x.
$$
Let $u=e^x$, then
$$
u^2+1={10\over3}u
$$
or
$$
3u^2-10u+3=0.
$$
This has solutions $u=3$ and $u=1/3$.
So $e^x=3$ or $e^x=1/3$.
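Both roots can be checked directly, since $\cosh$ is even and $\ln(1/3)=-\ln 3$ (a quick sketch):

```python
import math

# Roots of 3u^2 - 10u + 3 = 0 are u = 3 and u = 1/3, so x = ln(3) or -ln(3).
for u in (3, 1 / 3):
    x = math.log(u)
    print(x, math.cosh(x))   # cosh(x) = 5/3 in both cases
```

Indeed $\cosh(\ln 3)=\frac{3+1/3}{2}=\frac{10/3}{2}=\frac53$.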
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Arithmetic error in Feller's Introduction to Probability? In my copy of An Introduction to Probability by William Feller (3rd ed, v.1), section I.2(b) begins as follows:
(b) Random placement of r balls in n cells. The more general case of [counting the number of ways to put] $r$ balls in $n$ cells can be studied in the same manner, except that the number of possible arrangements increases rapidly with $r$ and $n$. For $r=4$ balls in $n=3$ cells, the sample space contains already 64 points ...
This statement seems incorrect to me. I think there are $3^4 = 81$ ways to put 4 balls in 3 cells; you have to choose one of the three cells for each of the four balls. Feller's answer of 64 seems to come from $4^3$. It's clear that one of us has made a very simple mistake.
Who's right, me or Feller? I find it hard to believe the third edition of a universally-respected textbook contains such a simple mistake, on page 10 no less. Other possible explanations include:
(1) My copy, a cheap-o international student edition, is prone to such errors and the domestic printings don't contain this mistake.
(2) I'm misunderstanding the problem Feller was examining.
| I have to go with Feller: each of the 3 balls has one of 4 cell numbers associated with it. That is $4^3$. Here are the 64 possibilities
111, 112, 113, 114, 121, 122, 123, 124,
131, 132, 133, 134, 141, 142, 143, 144,
211, 212, 213, 214, 221, 222, 223, 224,
231, 232, 233, 234, 241, 242, 243, 244,
311, 312, 313, 314, 321, 322, 323, 324,
331, 332, 333, 334, 341, 342, 343, 344,
411, 412, 413, 414, 421, 422, 423, 424,
431, 432, 433, 434, 441, 442, 443, 444
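Both counts are easy to reproduce by enumeration (my own sketch; note the listing above runs over 3 positions drawing from 4 symbols):

```python
from itertools import product

# The listing above: strings of length 3 over {1,2,3,4}, i.e. 3 balls
# each labelled with one of 4 cell numbers -> 4^3 = 64 arrangements.
three_balls_four_cells = list(product("1234", repeat=3))
print(len(three_balls_four_cells))   # 64

# The question's reading: 4 balls, each in one of 3 cells -> 3^4 = 81.
four_balls_three_cells = list(product("123", repeat=4))
print(len(four_balls_three_cells))   # 81
```

So the discrepancy is purely about which number plays the role of "balls" and which of "cells" in the sentence being parsed.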
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 3
} |
Counting inversions in lists algorithmically I want to extract useful info from some data and this makes me think how to do it efficiently.
I will try to explain the problem with math terms.
Suppose we have a sequence of numbers $A=(a_{1}\space a_{2}\dots a_{n})$ and we want to count the number of inversions, i.e. the number of pairs $(a_{i},a_{j})$ with $1\leq i < j\leq n$ and $a_{i}>a_{j}$.
I want to find a good algorithm that will solve this problem.
I am looking for an algorithm that is better than the trivial one, which solves the problem in quadratic time.
| As stated by @joriki, the trivial algorithm that compares all values pairwise has runtime in $\Theta(n^2)$.
A better runtime can be achieved by leveraging balanced binary search trees, e.g. AVL trees. Assume you have AVL trees that store in every node $v$ the number of nodes $t(v)$ in the subtree rooted in $v$; this does not change the runtime characteristics of AVL trees.
Now you take your list and an empty tree and insert the elements one by one. If there are no inversions, every element will end up as the right-most node after inserting. Conversely, the number of inversions an element causes can be read off while inserting: whenever you descend to a left child, add $t(v)$ of the right child you did not go to, and add $1$ if the parent is strictly larger than the element you insert.
The proposed algorithm has runtime in $\cal{O}(n\log(n))$ as insertion in AVL trees is in $\cal{O}(\log(n))$.
Edit: Apparently you can do better for permutations.
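The AVL-tree bookkeeping above achieves $O(n\log n)$; a common alternative with the same bound — not the answer's method, just a compact illustration of the idea — piggybacks the count on merge sort:

```python
def count_inversions(a):
    """Return (sorted copy of a, number of pairs i < j with a[i] > a[j]),
    counted during merge sort in O(n log n)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, inv_left = count_inversions(a[:mid])
    right, inv_right = count_inversions(a[mid:])
    merged, inv = [], inv_left + inv_right
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # left[i:] are all > right[j]: each forms one inversion with it
            inv += len(left) - i
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inv

_, inv = count_inversions([3, 1, 4, 1, 5, 9, 2, 6])
print(inv)   # 8
```

Ties are handled with `<=` so that equal elements do not count as inversions, matching the strict inequality $a_i > a_j$.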
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Series around $s=1$ for $F(s)=\int_{1}^{\infty}\text{Li}(x)\,x^{-s-1}\,dx$
Consider the function $$F(s)=\int_{1}^{\infty}\frac{\text{Li}(x)}{x^{s+1}}dx$$ where $\text{Li}(x)=\int_2^x \frac{1}{\log t}dt$ is the logarithmic integral. What is the series expansion around $s=1$?
It has a logarithmic singularity at $s=1$, and I am fairly certain (although I cannot prove it) that it should expand as something of the form $$\log (1-s)+\sum_{n=0}^\infty a_n (s-1)^n.$$ (An expansion of the above form is what I am looking for) I also have a guess that the constant term is $\pm \gamma$ where $\gamma$ is Euler's constant. Does anyone know a concrete way to work out such an expansion?
Thanks!
| By looking at the sum $$\sum_{k=2}^N\frac{1}{k\log k}=\log \log N+K+O\left(\frac{1}{N}\right)$$ in this question, I found another way to prove that as $s\rightarrow 0$ $$\int_2^\infty \frac{x^{-s-1}}{\log x}dx=-\log(s)-\gamma-\log \log 2+O(s\log(s)).$$
Let $\Lambda(n)$ be the von Mangoldt function, and $\gamma_0$ the Euler–Mascheroni constant. Then we have the expansion of the similar sum $$\sum_{n\leq x}\frac{\Lambda(n)}{n\log n}=\log\log x+\gamma_{0}+O\left(\frac{1}{\log x}\right),$$ which appears in the proof of Theorem 2.7 in Montgomery and Vaughan. Let $$S(x)=\sum_{2\leq k\leq x}\frac{1}{k\log k}- \sum_{n\leq x}\frac{\Lambda(n)}{n\log n},$$ and examine $I=\delta \int_1^\infty S(x)x^{-\delta -1}dx$ as $\delta\rightarrow 0$. As $S(x)=(K-\gamma_0)+O(1/\log x)$, it follows that $I=K-\gamma_0+O(\delta \log (1/\delta))$.
Then, since $$\sum_{n=1}^{\infty}a_{n}n^{-s}=s\int_{1}^{\infty}A(x)x^{-s-1}dx$$ (Theorem 1.3 of Montgomery and Vaughan), we see that $$\sum_{n=2}^{\infty}\frac{n^{-\delta}}{n\log n}-\log \zeta(\delta+1)=K-\gamma_0+O(\delta\log(1/\delta))$$ as $\delta\rightarrow 0$, and so $$\sum_{n=2}^\infty \frac{n^{-\delta-1}}{\log n}=-\log \delta+(K-\gamma_0)+O(\delta\log(1/\delta)).$$
Now, writing the left-hand side as a Riemann–Stieltjes integral and simplifying the expression for $K$ combined with the resulting terms allows us to conclude that $$\int_{2}^\infty \frac{x^{-\delta-1}}{\log x}dx=-\log \delta -\gamma-\log \log 2+O\left(\delta\log(1/\delta)\right).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Are sin and cos the only continuous and infinitely differentiable periodic functions we have? Sin and cos are everywhere continuous and infinitely differentiable. Those are nice properties to have. They come from the unit circle.
It seems there's no other periodic function that is also smooth and continuous. The only other even periodic functions (not smooth or continuous) I have seen are:
*
*Square wave
*Triangle wave
*Sawtooth wave
Are there any other well-known periodic functions?
| Of course not.
For example, $\sin_{[n]}(x)$ as shown in http://en.wikipedia.org/wiki/Functional_square_root is in fact a smooth periodic function of period $2\pi$ $\forall n\in\mathbb{R}^+$ .
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
$K_5$ minor implies $K_5$ or $K_{3,3}$ topological minor Problem. Let $G$ be a graph with a $K_5$ minor. Prove that $G$ contains either a $K_5$ or a $K_{3,3}$ topological minor.
I'm having a hard time believing this result. Consider the graph $G$ obtained from $K_5$ by replacing one of its vertices with a cycle of length 4:
Where is the $K_5$ or $K_{3,3}$ topological minor?
| Label your vertices as
A----X
/| |\
/ Y----B \
P__|____|__Q
|\_|_ _|_/|
\ |_><_| /
\_C____Z_/
Then $(\{A,B,C\},\{X,Y,Z\})$ is $K_{3,3}$ with the two indirect edges $XQC$ and $APZ$.
Later: But Patrick's suggestion (in comments) of $(\{X,P,C\},\{A,Q,Z\})$ is better because it doesn't use the $YB$ edge. Then all you have to do for the main problem is prove that each of the subgraphs that collapse to one of the vertices of $K_5$ (as a minor) must have one of the following as a topological minor (aka homeomorphic subgraph):
|
|
-----O----- or ---O---O---
| | |
| | |
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
I calculated the number of permutations with no 2-cycles in two ways but I got 2 different results I calculated the number of permutations in $S_n$ with no 2-cycles in two ways, but I got 2 different results. The first time I used the principle of inclusion-exclusion and I got $\sum_{k=0}^n \frac{n!}{k!}\frac{1}{2^k}(-1)^k$, and I'm pretty sure that it's right. The second way is using generating functions. Using the exponential formula I calculated that the generating function of this type of permutations is $\frac{e^{-x^2/2}}{1-x}$. So it is $(\sum x^n)(\sum \frac{1}{n!} (-1/2)^n x^{2n})$. If I take the product of these two series I get the series with coefficient $\sum_{k=0, k\;even}^n \frac{1}{(k/2)!}(-1/2)^{k/2}$. Could you tell me where the mistake is?
This is the computation of inclusion-exclusion:
we count the permutations with at least one 2-cycle. The number of permutations with at least k 2-cycles is $c_k=(n-2k)!\frac{\binom{n}{2}\binom{n-2}{2}\cdots\binom{n-2k+2}{2}}{k!}$. So the permutations with at least one 2-cycle are $\sum_{k=1}^n(-1)^kc_k$. So what we want is $n!-\sum_{k=1}^n\frac{n!}{k!}\frac{1}{2^k}(-1)^k=\sum_{k=0}^n\frac{n!}{k!}(-1/2)^k$. So it's probable that the mistake is in the computation of $(-1)^kc_k$. Does anyone see an expression for this?
| If $C$ is any set of positive integers, and $g_C(n)$ is the number of permutations of $[n]$ whose cycle lengths are all in $C$, then $$G_C(x)=\sum_{n\ge 0}g_C(n)\frac{x^n}{n!}=\exp\left(\sum_{n\in C}\frac{x^n}{n}\right)$$ is the exponential generating function for the $g_C(n)$. (Rather than derive it, I’ve simply quoted this from Theorem 4.34 in Miklós Bóna, Introduction to Enumerative Combinatorics.)
In your case $C=\mathbb{Z}^+\setminus\{2\}$, so it’s $$\begin{align*}
G_C(x)&=\exp\left(\sum_{n\ge 1}\frac{x^n}n-\frac{x^2}2\right)\\
&=\exp\left(-\ln(1-x)-\frac{x^2}2\right)\\
&=\frac{e^{-x^2/2}}{1-x},
\end{align*}$$ and your generating function is correct. Then
$$\frac{e^{-x^2/2}}{1-x}=\left(\sum_{n\ge 0}x^n\right)\left(\sum_{n\ge 0}\frac{(-1)^nx^{2n}}{n!2^n}\right),$$ and
$$[x^n]\left(\sum_{n\ge 0}x^n\right)\left(\sum_{n\ge 0}\frac{(-1)^nx^{2n}}{n!2^n}\right)=\sum_{k=0}^{\lfloor n/2\rfloor}\frac{(-1)^k}{k!2^k}.$$
Recall, though, that the coefficient of $x^n$ in $G_C(x)$ is not $g_C(n)$, but rather $\dfrac{g_C(n)}{n!}$, so $$g_C(n)=n!\sum_{k=0}^{\lfloor n/2\rfloor}\frac{(-1)^k}{k!2^k}.$$
As a quick check, this yields $g_C(1)=1$, $g_C(2)=2\left(1-\frac12\right)=1$, $g_C(3)=6\left(1-\frac12\right)=3$, $g_C(4)=24\left(1-\frac12+\frac18\right)=15$, and $g_C(5)=120\left(1-\frac12+\frac18\right)=75$, all of which are in agreement with the OEIS values.
For your inclusion-exclusion argument, $c_k=0$ for $k>\lfloor n/2\rfloor$, and $c_0=n!$, so what you actually want is $$\sum_{k=0}^{\lfloor n/2\rfloor}(-1)^kc_k=\sum_{k=0}^{\lfloor n/2\rfloor}(-1)^k\frac{n!}{k!2^k},$$ which is exactly what we just got with generating functions.
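Both the formula and the OEIS values are easy to confirm by exhaustive enumeration for small $n$ (my own sketch; `g_brute` and `g_formula` are hypothetical helper names):

```python
from itertools import permutations
from math import factorial

def g_brute(n):
    # permutations of {0,...,n-1} with no cycle of length exactly 2,
    # i.e. no i with p[p[i]] == i and p[i] != i
    return sum(1 for p in permutations(range(n))
               if all(p[p[i]] != i or p[i] == i for i in range(n)))

def g_formula(n):
    # n! * sum_{k=0}^{floor(n/2)} (-1)^k / (k! 2^k), kept in exact integers
    return sum((-1) ** k * factorial(n) // (factorial(k) * 2 ** k)
               for k in range(n // 2 + 1))

for n in range(1, 7):
    assert g_brute(n) == g_formula(n)
print([g_formula(n) for n in range(1, 7)])   # [1, 1, 3, 15, 75, 435]
```

The integer division is exact here, since $n!/(k!\,2^k)$ is an integer whenever $2k\le n$ (it counts arrangements involving $k$ disjoint pairs).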
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Order Statistic Expectation / Probability Let $Y_1<Y_2$ be order statistics from a random sample of size $2$ from a normal distribution, $\mathcal{N}(\mu,\sigma^2)$, where $\sigma^2$ is known. Show that $P(Y_1<\mu<Y_2)=\frac12$ and find $E(Y_1-Y_2)$.
I am not exactly sure how to solve the question above. Any help would be appreciated. Thanks.
| Let $X_1,X_2$ be the i.i.d. sample; then $Y_2 =\max\{X_1,X_2\}$ and $Y_1=\min\{X_1,X_2\}$ (I'm regarding "${<}$" as a typo in the question, in view of the result to be proved).
Then either $Y_1<Y_2<\mu$ or $Y_1<\mu<Y_2$ or $\mu<Y_1<Y_2$. (I'm discounting the event of probability $0$ that two or more of these are equal.)
The first happens if and only if both $X_1$ and $X_2$ are less than $\mu$; the second if and only if one (either one) is less than $\mu$ and the other greater; the third if and only if both are greater than $\mu$.
The probability that $X_1>\mu$ is $1/2$; similarly for $X_2$.
So the event $Y_1<\mu<Y_2$ is the event of exactly one success in two independent trials, with probability $1/2$ of success on each trial. Therefore its probability is $1/2$.
Now notice that $E(Y_2-Y_1) = E(|X_2-X_1|)$, and $X_1-X_2 \sim \mathcal{N}(\mu-\mu,\sigma^2+\sigma^2)=\mathcal{N}(0,2\sigma^2)$. So $E(|X_2-X_1|)= \sqrt{2}\sigma E\left(\dfrac{|X_2-X_1|}{\sqrt{2}\sigma}\right)$ and $Z=\dfrac{X_2-X_1}{\sqrt{2}\sigma}\sim\mathcal{N}(0,1)$. So we want $\sqrt{2}\sigma E(|Z|)$.
So
$$
\begin{align}
E(|Z|) & = \int_{-\infty}^\infty |z| \varphi(z)\;dz = 2\int_0^\infty z \varphi(z)\;dz = 2\int_0^\infty z \frac{1}{\sqrt{2\pi}} e^{-z^2/2} \; dz \\ \\
& = \sqrt{\frac{2}{\pi}} \int_0^\infty ze^{-z^2/2} \; dz = \sqrt{\frac{2}{\pi}} \int_0^\infty e^{-u} \; du = \sqrt{\frac{2}{\pi}}.
\end{align}
$$
Multiplying that by $\sqrt{2}\;\sigma$, we get $\dfrac{2\sigma}{\sqrt{\pi}}$.
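Both conclusions can be checked by Monte Carlo (my own sketch; the choices $\mu=10$, $\sigma=2$, and the sample size are arbitrary):

```python
import math
import random

random.seed(1)
mu, sigma, N = 10.0, 2.0, 200_000

between = 0
gap = 0.0
for _ in range(N):
    x1 = random.gauss(mu, sigma)
    x2 = random.gauss(mu, sigma)
    y1, y2 = min(x1, x2), max(x1, x2)
    between += (y1 < mu < y2)   # event Y1 < mu < Y2
    gap += y2 - y1              # Y2 - Y1 = |X2 - X1|

print(between / N)                                # ~ 0.5
print(gap / N, 2 * sigma / math.sqrt(math.pi))    # both ~ 2.257
```

The simulated mean gap matches $2\sigma/\sqrt{\pi}$, so $E(Y_1-Y_2)=-2\sigma/\sqrt{\pi}$ as asked.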
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Using recurrences to solve $3a^2=2b^2+1$ Is it possible to solve the equation $3a^2=2b^2+1$ for positive, integral $a$ and $b$ using recurrences? I am sure it is, as Arthur Engel in his Problem Solving Strategies has stated that as a method, but I don't think I understand what he means. Can anyone please tell me how I should go about it? Thanks.
Edit:Added the condition that $a$ and $b$ are positive integers.
| It is worth mentioning the more general equation: $$aX^2-qY^2=f$$
If $\sqrt{\frac{f}{a-q}}$ is a whole number,
then, using solutions of the Pell equation $$p^2-aqs^2=1,$$ solutions can be written:
$$Y=(2aps\pm(p^2+aqs^2))\sqrt{\frac{f}{a-q}}$$
$$X=(2qps\pm(p^2+aqs^2))\sqrt{\frac{f}{a-q}}$$
And from one such solution the next is obtained by the doubling formulas.
$$Y_2=Y+2as(qsY-pX)$$
$$X_2=X+2p(qsY-pX)$$
We will use these formulas to solve the equation: $$3X^2-2Y^2=1$$
Its solutions are determined by the Pell equation: $$p^2-6s^2=1$$
Starting from the first solution $(p_0,s_0)=(5,2)$,
you can find all the rest by the recurrences:
$$s_2=2p_1+5s_1$$
$$p_2=5p_1+12s_1$$
These numbers then need to be substituted into:
$$Y=p^2\pm6ps+6s^2$$
$$X=p^2\pm4ps+6s^2$$
Then you can also apply the doubling formulas. Keep in mind that all of the substituted numbers may be taken with either sign.
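A quick machine check of this recipe (my own sketch; I read the substitution formulas as $X=p^2\pm4ps+6s^2$, $Y=p^2\pm6ps+6s^2$ and take absolute values, which one can verify algebraically via $3X^2-2Y^2=(p^2-6s^2)^2$):

```python
# Generate solutions (p, s) of p^2 - 6 s^2 = 1 by the recurrence
# p' = 5p + 12s, s' = 2p + 5s, starting from (5, 2); from each derive
# (X, Y) with 3 X^2 - 2 Y^2 = 1.
p, s = 5, 2
solutions = []
for _ in range(5):
    assert p * p - 6 * s * s == 1
    for sign in (1, -1):
        X = abs(p * p + sign * 4 * p * s + 6 * s * s)
        Y = abs(p * p + sign * 6 * p * s + 6 * s * s)
        assert 3 * X * X - 2 * Y * Y == 1
        solutions.append((X, Y))
    p, s = 5 * p + 12 * s, 2 * p + 5 * s

print(sorted(set(solutions))[:4])
```

For example $(p,s)=(5,2)$ with the minus sign gives $(X,Y)=(9,11)$: indeed $3\cdot81-2\cdot121=1$, i.e. $(a,b)=(9,11)$ solves $3a^2=2b^2+1$.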
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How many $p$-adic numbers are there? Let $\mathbb Q_p$ be the field of $p$-adic numbers. I know that the cardinality of $\mathbb Z_p$ (the integer $p$-adic numbers) is the continuum, and every $p$-adic number $x$ can be written in the form $x=p^nx^\prime$, where $x^\prime\in\mathbb Z_p$, $n\in\mathbb Z$.
So is the cardinality of $\mathbb Q_p$ the continuum, or more than that?
| Well, $\mathbb{Q}_p\subset \mathbb{C}$ (as abstract fields), so $|\mathbb Q_p|\le 2^{\aleph_0}$; and since $\mathbb Z_p\subset\mathbb Q_p$ already has cardinality continuum, $|\mathbb Q_p|=2^{\aleph_0}$ exactly.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Group action on a subset having trouble with this problem. It's homework. We're given a finite set $S$ on which a finite group $G$ acts on transitively. If $U$ is a subset of $S$, I'm supposed to show that the subsets of $gU$ cover $S$ evenly. By evenly, I mean that each $s\in S$ is in the same number of sets $gU$.
Things I know are that for any $g,g'\in G$, $gU$ and $g'U$ both have order $|U|$. I also know that, since the operation is transitive, there is only one orbit (and I suspect this is important).
I also noticed that, when $|U| = 1$, this property is just the transitivity of the group action. I'm just having trouble generalizing this to where the sets overlap (ie, $s\in S$ is in more than one set $gU$.)
| Let $X_s = \{ gU \;|\; g \in G \;\mathrm{and}\; s \in gU \}$ (this is the set of all "$gU$" with $s \in gU$).
Since the action on $S$ is transitive, if you pick two elements, says $s,t \in S$, there exists some $x \in G$ such that $x \cdot s = t$.
Now you just need to show that $\varphi:X_s \rightarrow X_t$ defined by $gU \mapsto (xg)U$ is a bijection.
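A tiny concrete instance of the statement (my own sketch, using $G=S_3$ acting on $S=\{0,1,2\}$ and $U=\{0,1\}$ as illustrative choices):

```python
from itertools import permutations

# G = S_3 acting transitively on S = {0, 1, 2}; each g is a tuple with
# g[i] = g(i).  U is an arbitrary subset of S.
G = list(permutations(range(3)))
S = range(3)
U = {0, 1}

# For each s, count how many of the sets gU contain s.
counts = [sum(1 for g in G if s in {g[u] for u in U}) for s in S]
print(counts)   # every point is covered the same number of times
```

Here each point is covered $|G|\,|U|/|S| = 6\cdot2/3 = 4$ times — the "even cover" that the bijection $X_s\to X_t$ establishes in general.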
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Another quadratic Diophantine equation: How do I proceed? How would I find all the fundamental solutions of the Pell-like equation
$x^2-10y^2=9$
I've swapped out the original problem from this question for a couple reasons. I already know the solution to this problem, which comes from http://mathworld.wolfram.com/PellEquation.html. The site gives 3 fundamental solutions and how to obtain more, but does not explain how to find such fundamental solutions. Problems such as this have plagued me for a while now. I was hoping with a known solution, it would be possible for answers to go into more detail without spoiling anything.
In an attempt to be able to figure out such problems, I've tried websites, I've tried some of my and my brother's old textbooks as well as checking out 2 books from the library in an attempt to find an answer or to understand previous answers.
I've always considered myself to be good in math (until I found this site...). Still, judging from what I've seen, it might not be easy trying to explain it so I can understand it. I will be attaching a bounty to this question to at least encourage people to try. I do intend to use a computer to help solve this problem, and I have solved problems such as $x^2-61y^2=1$, which would take forever unless you know to look at the convergents of $\sqrt{61}$.
Preferably, I would like to understand what I'm doing and why, but failing that will settle for being able to duplicate the methodology.
| I'm going to give you the general method to obtain the fundamental solutions of the Diophantine equation $x^2-dy^2=f^2$.
First solution:
We set $y=f-1$, $d=f^2+1$, $x=f^2-f+1$
Second solution:
$y=f+1$, $d=f^2+1$, $x=f^2+f+1$
In your case $f^2=9$ and $d=f^2+1=10$.
So the first solution gives $7^2-10(2^2)=3^2$, and the second gives $13^2-10(4^2)=3^2$.
From the fundamental solutions we obtain infinitely many solutions of the equation $x^2-10y^2=3^2$ with the well-known methods.
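A brute-force search over small $y$ (my own sketch) recovers these solutions, along with the trivial $(3,0)$ and composite solutions such as $(57,18)$ obtained by multiplying by units of $x^2-10y^2=1$:

```python
# Search small y for solutions of x^2 - 10 y^2 = 9.
found = []
for y in range(200):
    x = round((10 * y * y + 9) ** 0.5)
    if x * x - 10 * y * y == 9:
        found.append((x, y))
print(found)   # starts (3, 0), (7, 2), (13, 4), (57, 18), ...
```

Rounding the square root and re-checking exactly avoids any floating-point false positives in this range.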
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 1
} |