| Q (string, lengths 18-13.7k) | A (string, lengths 1-16.1k) | meta (dict) |
|---|---|---|
Eigenvalues of a "Half-Kronecker" Product The Problem:
Given a 2 by 2 matrix $C$ (its entries are known) and two other
2 by 2 matrices $A$ and $B$ (their entries are also known),
we can construct a new matrix $D$ from the direct product
of (the first row of $C$) with $A$ and the direct product of (the second row of $C$) with $B$, just like this:
$$D =
\begin{pmatrix}
c_{11} A & c_{12} A \\
c_{21} B & c_{22} B
\end{pmatrix}$$
Four blocks.
(Is there a better way to write this kind of product?)
As we know, if $A=B$, then the eigenvalues of $D$ are the products of eigenvalues
of $C$ with eigenvalues of $A$, four eigenvalues in all.
Then, can we know the eigenvalues of $D$ in general? What can we say about the
eigenvalues of $D$? An implicit equation is OK.
Is this problem related to Khatri-Rao products? Has anyone
considered this problem and solved it?
|
It seems very doubtful that there could be a simple closed form for the
eigenvalues in general (i.e. simpler than explicitly taking the characteristic
polynomial and solving this quartic polynomial in radicals).
Case in point: take $$a_{{1,1}}=-3,a_{{1,2}}=3,a_{{2,1}}=0,a_{{2,2}}=1,b_{{1,1}}=-3,b_{{1,2
}}=-1,b_{{2,1}}=-2,b_{{2,2}}=-2,c_{{1,1}}=-2,c_{{1,2}}=3,c_{{2,1}}=-3,
c_{{2,2}}=2
$$
so that $$D = \pmatrix{
6&-6&-9&9\cr0&-2&0&3 \cr 9&3&-6&-2\cr 6&6&-4&-4
\cr}$$
Its characteristic polynomial is $t^4+6 t^3-27 t^2+230 t-300$ which is
irreducible over the rationals. On the other hand, $A$ and $B$ have
integer eigenvalues ($(1,-3)$ and $(-1,-4)$ respectively).
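For readers who want to reproduce this example, here is a minimal sketch in Python with SymPy (assuming SymPy is available; any CAS would do):

```python
# Sketch: rebuild D from A, B, C and confirm its characteristic polynomial.
from sympy import Matrix, symbols

t = symbols('t')
A = Matrix([[-3, 3], [0, 1]])
B = Matrix([[-3, -1], [-2, -2]])
C = Matrix([[-2, 3], [-3, 2]])

# First block row uses C's first row with A; second uses C's second row with B.
D = Matrix.vstack(
    Matrix.hstack(C[0, 0] * A, C[0, 1] * A),
    Matrix.hstack(C[1, 0] * B, C[1, 1] * B),
)
print(D.charpoly(t).as_expr())       # t**4 + 6*t**3 - 27*t**2 + 230*t - 300
print(A.eigenvals(), B.eigenvals())  # {1: 1, -3: 1} and {-1: 1, -4: 1}
```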
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/701377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Convergence in distribution of random variables question $X, X_1, X_2,\ldots $ are real random variables with $\mathbb{P}(X_n\leq x)\to \mathbb{P}(X\leq x)$ whenever $\mathbb{P}(X=x)=0$.
Why does $X_n\stackrel{L}{\to} X$? At the least, where would I begin?
|
A sequence $X_1,X_2,\ldots$ of random variables is said to converge in distribution, or converge weakly, or converge in law to a random variable $X$ if
$$
\lim_{n\to\infty}F_n(x)=F(x)
$$
for every number $x\in\mathbb R$ at which $F$ is continuous, where $F_n(x)=\mathbb P(X_n\le x)$ and $F(x)=\mathbb P(X\le x)$.
Thus, we need to show that $F(x)$ is continuous at $x$ if and only if $\mathbb P(X=x)=0$. $F$ is continuous from the right, so we need to investigate continuity from the left. Since
$$
\mathbb P(X=x)=F(x)-\lim_{y\uparrow x}F(y),
$$
we have that $\mathbb P(X=x)=0$ if and only if $F(x)=\lim_{y\uparrow x}F(y)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/701505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Prove the statement for definite integral We have a positive continuous function $f(x)$ defined on $\mathbb{R}$
such that $\int_{-\infty}^{+\infty} f(x)\, dx = 1$.
Let $\alpha \in (0,1)$, and let $[a,b]$ be an interval
of minimal length among the intervals for which $\int_{a}^{b} f(x)\, dx = \alpha$.
The task is to prove that $f(a) = f(b)$.
I managed to sum up the following statements:
$F(\infty) - F(-\infty) = 1$
$F(b) - F(a) = \alpha$
And one to prove is: $f(b) - f(a) = 0$
I am looking for hints to build a proof here.
|
Suppose $f(a)\neq f(b)$; say $f(a)$ is larger. Then we can always shift the interval towards $a$ to make it shorter (since $f$ is continuous): the shifted interval $[a-\varepsilon,b-\varepsilon]$ has integral roughly $\alpha+\varepsilon\,(f(a)-f(b))>\alpha$, so it can be shrunk while keeping its integral equal to $\alpha$, contradicting the minimality of $[a,b]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/701564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to determine the eigenvectors of this matrix? I have some problems determining the eigenvectors of a given matrix:
The matrix is:
$$
A = \left( \begin{array}{ccc}
1 & 0 &0 \\
0 & 1 & 1 \\
0 & 0 & 2
\end{array} \right)
$$
I calculated the eigenvalues first and got $$ \lambda_1 = 1, \lambda_2 = 2, \lambda_3 = 1$$
There was no problem for me so far. But I do not know how to determine the eigenvectors. The formula I have to use is
$$ (A-\lambda_i E)u=0, \quad i \in \{1,2,3\}, \quad u \text{ an eigenvector}$$
When I determined the eigenvector for $\lambda_2=2$ there was no problem. I got the result that $x_3$ is free and $x_2 = x_3$, so:
$$
EV_2= \left( \begin{array}{ccc}
0 \\
\beta \\
\beta
\end{array} \right), \ \beta \text{ free, so } EV = \operatorname{span}\{\left( \begin{array}{ccc}
0 \\
1 \\
1
\end{array} \right)\}
$$
But when I used $ \lambda_1 = \lambda_3 = 1 $, I had to calculate:
$$
\left( \begin{array}{ccc}
0 & 0 &0 \\
0 & 0 & 1 \\
0 & 0 & 1
\end{array} \right) *
\left( \begin{array}{ccc}
x_1 \\
x_2 \\
x_3
\end{array} \right)
=0
$$
which in my opinion means that $x_3 = 0$ while $x_1$ and $x_2$ are free, but not necessarily equal as in the case above, so $ EV_{1,3} = \left( \begin{array}{ccc}
\alpha \\
\beta \\
0
\end{array} \right) $
What does that mean for my solution? Is it
$$
EV_{1,3} = span\{\left( \begin{array}{ccc}
1 \\
0 \\
0
\end{array} \right),
\left( \begin{array}{ccc}
0 \\
1 \\
0
\end{array} \right),
\left( \begin{array}{ccc}
1 \\
1 \\
0
\end{array} \right)\}
$$
What exactly is now my solution in this case for the eigenvectors $ \lambda_1, \lambda_3 $? In university we just had one variable value in the matrix so I don't know how to handle two of them being different.
|
Update: I have undeleted my answer because I think it is fixed now.
You got $$ V_{\lambda_2} = \left(\begin{array}{ccc} 0 \\ 1 \\ 1 \end{array} \right) $$
correct, but then copied it down wrongly. (I think...)
Then you correctly wrote down the case $\lambda_1$. From
$$ \left(\begin{array}{ccc } 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array} \right) $$
you should easily conclude (I think you did) that $z = 0$ while $x$ and $y$ can be anything, leading to
$$ V_{\lambda_{1 \ or \ 3}} = \left(\begin{array}{ccc } 1 \\ 1 \\ 0 \end{array} \right) $$
Now, since the dimension of the nullspace is $2$, we can decompose this into $2$ separate eigenvectors corresponding to the repeated eigenvalue $1$:
$$ V_{ \lambda_1} = \left(\begin{array}{ccc } 1 \\ 0 \\ 0 \end{array} \right) \ \ , \ \ V_{ \lambda_2 } = \left(\begin{array}{ccc } 0 \\ 1 \\ 1 \end{array} \right) \ \ , \ \ V_{ \lambda_3} = \left(\begin{array}{ccc } 0 \\ 1 \\ 0 \end{array} \right) $$
You can check all three are independent and satisfy
$$AV_i = \lambda_iV_i$$
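A quick numerical confirmation of these eigenpairs, sketched in Python (assuming NumPy):

```python
# Sketch: check A v = lambda v for the eigenbasis found above.
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 2.0]])

pairs = [(1, np.array([1.0, 0.0, 0.0])),   # lambda = 1
         (1, np.array([0.0, 1.0, 0.0])),   # lambda = 1 (second independent vector)
         (2, np.array([0.0, 1.0, 1.0]))]   # lambda = 2

for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)
print("all three eigenpairs check out")
```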
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/701673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Is this complex function harmonic? Let us consider the following convergent series in the strip $0<x<1$, $y$ real:
$$h(x+iy)=\sum_{n=2}^{\infty}(-1)^{n-1}\,\frac{n^{2x-1}-1}{n^{x}}\,n^{iy}$$
My question is: Is this complex function harmonic?
|
Look at the terms of the series. Ignoring the $(-1)^n$ for the moment, we have
$$\frac{n^{2x-1}-1}{n^x}n^{iy} = n^{x-1}n^{iy} - n^{-x}n^{iy} = n^{z-1} - n^{-\overline{z}}.$$
The first term is holomorphic, and hence harmonic. The second term is antiholomorphic, and hence harmonic. Thus the difference of the two terms is harmonic.
So every term in the series
$$h(x+iy) = \sum_{n=2}^\infty (-1)^n \frac{n^{2x-1}-1}{n^x}n^{iy}$$
is harmonic, and since the series is locally uniformly convergent, it follows that $h$ is harmonic.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/701858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
dirichlet and gcds Let $(a,b)=1$ and $c>0$. Prove that there is an integer $x$ such that $(ax+b, c)=1$.
Right now, I have the following approach:
Let's assume that for every $x$, $(ax+b,c)\neq 1$. Then there exists $d>1$ with $d \mid c$ such that $d \mid ax+b$, i.e. $ax\equiv -b \pmod d$. I'm not sure how to continue from here on.
|
The critical thing is that $d \mid ax+b$ for all $x$. So take two consecutive values of $x$ and subtract to get $d \mid a$; then $d \mid b$, and you are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/701916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to calculate lim inf and lim sup for given sequence of sets Let the indicator function be defined as $$I(x) \triangleq \begin{cases} 1, & \quad x \geq 0 \\ 0, & \quad x < 0 \end{cases}$$ and $I_{\nu}(x) \in [0,1]$ be a continuous approximation of the $I(x)$ such that $$\lim_{\nu \rightarrow \infty} I_{\nu}(x) = I(x), \quad \forall x$$ $$\lim_{x \rightarrow - \infty} I_{\nu}(x) = 0, \quad \forall \nu$$ $$\lim_{x \rightarrow \infty} I_{\nu}(x) = 1, \quad \forall \nu.$$
Let us define the sequence of connected sets $\{ C_{\nu} \}$, where each $C_{\nu}$ is the set of values assumed by $I_{\nu}(x)$, $\forall x \in \mathbb{R}$ (hence, each element of $C_{\nu}$ belongs to $[0,1]$).
How do I calculate $\liminf_{\nu \rightarrow +\infty} C_{\nu}$ and $\limsup_{\nu \rightarrow +\infty} C_{\nu}$?
My intuition is that $\liminf_{\nu \rightarrow +\infty} C_{\nu} = \limsup_{\nu \rightarrow +\infty} C_{\nu}=\{0,1\}$, but I can't prove it.
|
Let $I_n(x)$ be $0$ for $x \le -1/n$ and $1$ for $x \ge 1/n$. On $[-1/n,0]$ define $I_n(x)$ as
$$f_n(x)=n^n(1-1/n)(x+1/n)^n,$$
and on $[0,1/n]$ define it as
$$g_n(x)=1-n^{n-1}(1/n-x)^n.$$
One can check that $f_n(0)=g_n(0)=1-1/n$ and that $f_n(-1/n)=0$ and $g_n(1/n)=1,$ so that the rules are not in conflict at the interval endpoints and this piecewise function $I_n(x)$ is continuous, so its range is all of $[0,1].$ Furthermore for fixed negative $x$ eventually $I_n(x)=0$ (as soon as $-1/n\ge x$) and similarly for fixed positive $x$ eventually $I_n(x)=1.$ Since at $0$ itself we have $I_n(0)=1-1/n$ we have $\lim_{n \to \infty}I_n(0)=1$ as required.
So this is an example wherein the sequence $C_n$ of sets is the constant sequence, each term being the interval $[0,1].$ I wasn't familiar with lim inf/sup of sets but looked it up on Wiki, and for a constant sequence $A,A,\cdots$ each of lim sup and lim inf is just $A$, so this example shows these limits may be all of $[0,1]$, rather than the two point set $\{0,1\}$ mentioned in the post.
Edit: A much simpler $I_n(x)$ sequence: $0$ for $x<-1/n,$ linear from $0$ to $1-1/n$ on $[-1/n,0],$ then linear from $1-1/n$ to $1$ on $[0,1/n],$ and $1$ for $x>1/n$. The above definition was made when I thought one would need the high powers in order to ensure the right properties, but these follow from the shrinking domain $[-1/n,1/n]$ on which $I_n$ is not $0$ or $1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/701978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove that the intersection of two subgroups is a subgroup. In more detail, if $G$ is a group and $H_1$, $H_2$ are subgroups of G then $H_1 \cap H_2$ is a subgroup of G.
Next, give an example of a particular group $G$ (any one you like), and two different subgroups $H_1$, $H_2$ of $G$ , compute the intersection $H_1 \cap H_2$ , and verify it is indeed a subgroup.
Finally, give three examples showing that $H_1 \cup H_2$ need not be a subgroup of $G$ .
|
Theorem Let $G$ be a non-trivial finite group. Then the following are equivalent.
(a) For each pair of subgroups $H_1$ and $H_2$ of $G$, $H_1 \cup H_2$ is a subgroup
(b) $G$ is cyclic of prime-power order.
Proof (b) $\Rightarrow$ (a) follows from the fact that in a cyclic group there is a unique subgroup of order $d$ for each divisor $d$ of $|G|$. (If $G$ has order $p^n$ and $G=\langle g \rangle$, then define $H_i=\langle g^{p^{n-i}} \rangle$. Then $|H_i|=p^i$ and the series $\{1\} =H_0 \subset H_1 \dots \subset H_{n-1} \subset G=H_n$ are all the subgroups of $G$.)
(a) $\Rightarrow$ (b). Assume that $G$ is not cyclic. Then we can find a non-identity $x \in G$, with $\langle x \rangle \subsetneq G$, and take $|\langle x \rangle|$ as large as possible. Also, we can find a $y \in G-\langle x \rangle$. The assumption implies $H=\langle x \rangle \cup \langle y \rangle$ is a subgroup. Clearly $x$, $y$ $\in H$, but $xy \notin H$, since $x$ is not a power of $y$ and vice versa: if $xy=x^i$ for some integer $i$, then $y=x^{i-1} \in \langle x \rangle$, a contradiction. If $xy=y^j$ for some integer $j$, then $x=y^{j-1} \in \langle y \rangle$, whence $\langle x \rangle \subseteq \langle y \rangle$, and hence $\langle x \rangle = \langle y \rangle$, by the maximality of $\langle x \rangle$. Again a contradiction. So $G$ must be cyclic. If $|G|$ has two different prime factors $p$ and $q$, then by Cauchy's Theorem we can find elements $a$ and $b$ of those orders respectively, and $a$ and $b$ are powers of a generator $g$ with $\langle g \rangle = G$. Again, $\langle a \rangle\cup \langle b \rangle$ is a subgroup of $G$, but it does not contain $ab$, which has order $pq$. A contradiction, so the order of $G$ must be divisible by a single prime, whence $G$ is of prime-power order.$\square$
One can also use the well-known fact that a group can never be the union of two proper subgroups. Anyway, the theorem provides you a slew of examples for the second part of your question.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/702186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
}
|
Proof involving matrix equation $A$ and $B$ are $(n\times n)$ matrices and $AB + B + A = 0$. Prove that then $AB=BA$.
How should I approach this problem?
|
Adding the identity matrix $I$ to both sides gives $AB + A + B + I = I$, i.e. $(A+I)(B+I) = I$. Hence $A+I$ and $B+I$ are inverses of each other. It follows that $(B+I)(A+I) = I$ as well. Expanding gives $BA + B + A = 0$, hence $AB = BA$.
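A numerical illustration of this argument, sketched in Python (assuming NumPy; the matrix size and seed are arbitrary):

```python
# Sketch: for a random A with A + I invertible, AB + A + B = 0 forces
# B = (A + I)^{-1} - I; check that such A and B commute.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
I = np.eye(4)

B = np.linalg.inv(A + I) - I                         # then (A + I)(B + I) = I
assert np.allclose(A @ B + A + B, np.zeros((4, 4)))  # the hypothesis holds
assert np.allclose(A @ B, B @ A)                     # conclusion: AB = BA
print("AB = BA confirmed numerically")
```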
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/702256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Question about proof of FTA I read a short proof of the Fundamental Theorem of Algebra, as follows:
Assume $p(z)$ is a nonconstant polynomial with no roots. Then $1/p$ is an analytic function on $\mathbb{C}$. Also, $1/p \to 0$ as $z \to \infty$, so $1/p$ is bounded. By Liouville's theorem, any bounded analytic function is constant, which is a contradiction.
My issue is I don't understand why $1/p$ is bounded. Could someone please explain this?
|
The absolute value of a polynomial tends to infinity as $\left|z\right|\to\infty$. That is, for each $M>0$, there exists $R>0$ such that for $\left|z\right|>R$ we have $\left|p(z)\right|>M$. Take a sufficiently large closed disk, so that $\left|p(z)\right|>1$ for $z$ outside the disk. The disk is compact, so its image under $\left|p(z)\right|$ is compact, hence closed. It does not contain $0$, so it is bounded away from $0$, say by $a>0$. Thus,
$$
\left|1/p(z)\right|<1/a
$$
inside the disk, and $\left|1/p(z)\right|<1$ outside.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/702344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What Is Exponentiation? Is there an intuitive definition of exponentiation?
In elementary school, we learned that
$$
a^b = a \cdot a \cdot a \cdot a \cdots (b\ \textrm{ times})
$$
where $b$ is an integer.
Then later on this was expanded to include rational exponents, so that
$$
a^{\frac{b}{c}} = \sqrt[c]{a^b}
$$
From there we could evaluate decimal exponents like $4^{3.24}$ by first converting to a fraction.
However, even after learning Euler's Identity, I feel as though there is no discussion on what exponentiation really means. The definitions I found are either overly simplistic or unhelpfully complex. Once we stray from the land of rational powers into real powers in general, is there an intuitive definition or explanation of exponentiation?
I am thinking along the lines of, for example, $2^\pi$ or $3^{\sqrt2}$ (or any other irrational power, really). What does this mean? Or, is there no real-world relationship?
To draw a parallel to multiplication:
If we consider the expression $e\cdot \sqrt5$, I could tell you that this represents the area of a rectangle with side lengths $e$ cm and $\sqrt5$ cm. Or maybe $e \cdot \pi$ is the cost of $\pi$ kg of material that costs $e$ dollars per kg.
Of course these quantities would not be exact, but the underlying intuition does not break down. The idea of repeated addition still holds, just that fractional parts of terms, rather than the entire number, are being added.
So does such an intuition for exponentiation exist? Or is this one of the many things we must accept with proof but not understanding?
This question stems from trying to understand complex exponents including Euler's identity and $2^i$, but I realized that we must first understand reals before moving on the complex numbers.
|
$a^b$ refers to the "multiplicative power" of performing b multiplications by a. This is intuitively obvious with positive integer 'b's, but still holds for fractional and negative values when you put a little brain grease into considering what it means to do 'half a multiplication' or a 'negative multiplication'.
$9^\frac{1}{2}$ is 3 since multiplying by 3 is half the multiplication that multiplying by 9 is (i.e., it's the multiplication which, if done twice, is equivalent to multiplying by 9). Similarly for $8^\frac{1}{3} = 2$, etc.
$4^{-1}$ is 1/4, since multiplying by 1/4 is 'unmultiplying' once by 4 (i.e., it's the operation that gets cancelled out by multiplying once by 4).
$a^0$ (including $0^0$) is 1, since doing no multiplication to something is the same as multiplying that something by 1.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/702414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "70",
"answer_count": 13,
"answer_id": 2
}
|
Solve $\sin x - \cos x = -1$ for the interval $(0, 2\pi)$ We have an exam in $3$ hours and I need help with solving such trigonometric equations on given intervals.
How to solve
$$\sin x - \cos x = -1$$
for the interval $(0, 2\pi)$.
|
Method $\#1$
Avoid squaring, which immediately introduces extraneous roots that demand exclusion.
We have $\displaystyle\sin x-\cos x=-1$
$$\iff\sin x=-(1-\cos x)\iff2\sin\frac x2\cos\frac x2=-2\sin^2\frac x2$$
$$\iff2\sin\frac x2\left(\cos\frac x2+\sin\frac x2\right)=0$$
If $\displaystyle \sin\frac x2=0,\frac x2=n\pi\iff x=2n\pi$ where $n$ is any integer
If $\displaystyle\cos\frac x2+\sin\frac x2=0\iff\sin\frac x2=-\cos\frac x2$
$\displaystyle\iff\tan\frac x2=-1=-\tan\frac\pi4=\tan\left(-\frac\pi4\right)$
$\displaystyle\iff\frac x2=m\pi-\frac\pi4\iff x=2m\pi-\frac\pi2$ where $m$ is any integer
Method $\#2$
Let $\displaystyle1=r\cos\phi,-1=r\sin\phi\ \ \ \ (1)$ where $r>0$
$\displaystyle\cos\phi=\frac1r>0$ and $\displaystyle\sin\phi=-\frac1r<0$
$\displaystyle\implies\phi$ lies in the fourth Quadrant
On division, $\displaystyle\frac{r\sin\phi}{r\cos\phi}=-1\iff\tan\phi=-1$
$\displaystyle\implies\phi=-\frac\pi4$
$\displaystyle\sin x-\cos x=-1\implies r\cos\phi\sin x+r\sin\phi\cos x=r\sin\phi$
$\displaystyle\implies\sin(x+\phi)=\sin\phi$
$\displaystyle\implies x+\phi=k\pi+(-1)^k\phi$ where $k$ is any integer
If $k$ is even $=2a$ (say), $\displaystyle\implies x=2a\pi$
If $k$ is odd $=2a+1$ (say), $\displaystyle\implies x+\phi=(2a+1)\pi-\phi\iff x=2a\pi+\pi-2\phi$
Method $\#3$
Using Weierstrass Substitution,
$\displaystyle\frac{2u}{1+u^2}-\frac{1-u^2}{1+u^2}=-1\ \ \ \ (2)$ where $u=\tan\frac x2$
$\displaystyle\iff2u^2+2u=0\iff u(u+1)=0$
If $\displaystyle u=0,\tan\frac x2=0\iff\frac x2=b\pi$ where $b$ is any integer
If $\displaystyle u=-1,\tan\frac x2=-1$, which has been addressed in Method $\#1$. (In the given interval $(0,2\pi)$, all three methods lead to the single solution $x=\frac{3\pi}{2}$.)
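A numerical cross-check of the three methods, sketched in Python (assuming NumPy):

```python
# Sketch: scan (0, 2*pi) for sign changes of sin x - cos x + 1.
import numpy as np

x = np.linspace(1e-6, 2 * np.pi - 1e-6, 2_000_001)
f = np.sin(x) - np.cos(x) + 1
crossings = x[np.where(np.sign(f[:-1]) * np.sign(f[1:]) < 0)[0]]
print(crossings, 3 * np.pi / 2)  # the only interior root is x = 3*pi/2 ~ 4.7124
```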
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/702506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 8,
"answer_id": 6
}
|
A question regarding a prefix code Let $C=\{ c_1, c_2, \dots, c_m \}$ be a set of sequences over an alphabet $\Sigma$ and $|\Sigma|=\sigma$. Assume that $C$ is a prefix-free code, in the sense that no codeword in $C$ is a prefix of another codeword in $C$, with $|c_i|= n_i\ \forall i$. Prove that $\sum_{h=1}^m \sigma^{-n_h} \leq 1$.
My attempt:
I want to argue that $C$ is a finite subset of all the codewords that can be built, and therefore we have the following:
$p(\text{creating } c_i)= \frac{1}{\sigma^{n_i}},$
$\sum_{h=1}^m \sigma^{-n_h}= \sum_{i=1}^m p(\text{creating } c_i)\leq \sum_{i=1}^\infty p(\text{creating } c_i) = 1$
Would you please help me figure out if I am doing it right?
Thanks
|
@mnz has given a perfectly rigorous answer. There is another proof that works for any uniquely decodable scheme, i.e. given a sequence of letters from the alphabet, there is at most one way to separate the letters such that each subsequence is in $C$. It's clear that prefix-free codes can be uniquely decoded.
We define the "weight" of a sequence of letters to be $\sigma^{-l}$ where $l$ is the length of the sequence. Then the total weight of all the sequences of $l$ messages (not letters!) is just (where the sum is taken over all sequences $(x_1, \cdots, x_l)$ of numbers from $1$ to $m$) $$\sum_{(x_1, \cdots, x_l)} \sigma^{-(n_{x_1}+n_{x_2}+\cdots+n_{x_l})} = \left(\sum_{h=1}^m \sigma^{-n_h}\right)^l$$
On the other hand, the total weight of all sequences of length at most $l\cdot(\max_h n_h)$ is simply $l\cdot(\max_h n_h)$: there are exactly $\sigma^{l'}$ sequences of length $l'$ and each of them has weight $\sigma^{-l'}$, hence the total weight of the sequences of a fixed length $l'$ is just $1$, and $l'$ can take at most $l\cdot(\max_h n_h)$ values.
Since each sequence can be decoded uniquely, there must be $$\left(\sum_{h=1}^m \sigma^{-n_h}\right)^l\le l\cdot(\max_h n_h)$$
The RHS grows only linearly in $l$, so the LHS cannot grow exponentially; therefore $\sum_{h=1}^m \sigma^{-n_h}\le 1$.
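For a concrete feel for the statement, a minimal sketch in Python (the example code below is an illustrative assumption, not from the question):

```python
# Sketch: check prefix-freeness and the Kraft sum for a small binary code.
sigma = 2
code = ["0", "10", "110", "111"]  # no codeword is a prefix of another

assert not any(c != d and d.startswith(c) for c in code for d in code)

kraft_sum = sum(sigma ** (-len(c)) for c in code)
print(kraft_sum)  # 1/2 + 1/4 + 1/8 + 1/8 = 1.0 <= 1
```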
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/702555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Assuming ${a_n}$ is a convergent sequence, prove that the lim inf of $a_{n+1}$ is equal to the lim inf of $a_n$ I'm aware that you have to use the definition of a limit of a sequence, which is:
$\lim\limits_{n \to \infty} a_n = L$ if for every $E > 0$, there is an $N$ such that if $n > N$ then $|a_n - L| < E$.
I just have no idea how to combine the two different limits.
|
First of all you should understand intuitively why this holds. The limit of a sequence $a_n$ tells us about the long-term behavior of the sequence. But $a_n$ and $a_{n+1}$ have the same long-term behavior, they are merely indexed differently; so they should have the same limit.
For the formal proof, suppose $a_n$ converges to $L$. Then like you say for any $\epsilon >0$, there is an $N$ such that if $n>N$, $|a_n-L|<\epsilon$. But in particular this tells us that if $n>N$, then $|a_{n+1}-L|<\epsilon$ as well (since $n+1>N$, so we may plug in $n+1$ into the previous inequality). This shows that $a_{n+1}$ converges to the same limit $L$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/702635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find basis so Transformation Matrix will be diagonal Let $e_1,e_2$ be a basis for $V$. $W$ has a basis $\{e_1+ ae_2,\,2e_1+be_2\}$. Choose $a,b$ such that the matrix of $T$ with respect to this basis of $W$ is in diagonal form.
$T(e_1) = 1e_1+5e_2$
$T(e_2) = 2e_1+4e_2$
$V$ and $W$ are linear spaces of dimension $2$.
|
In the basis for $V$,
$$T_{[V]} = \left(\begin{matrix}
1 & 2 \\
5 & 4 \end{matrix} \right)$$
If you want the transformation $T$ written in $W$'s basis to be diagonal, then you want each basis vector of $W$ to be mapped to some multiple of itself:
$$T_{[W]} = \left(\begin{matrix}
\lambda_1 & 0 \\
0 & \lambda_2 \end{matrix} \right)$$
You know that $T : e_1 \mapsto e_1 + 5 e_2$ and $T : e_2 \mapsto 2e_1 + 4e_2$. Using this, you can solve for $a$ and $b$ by stipulating that:
$$e_1 + a e_2 \mapsto \lambda_1 e_1 + \lambda_1 a e_2,$$
$$2 e_1 + b e_2 \mapsto \lambda_2 2 e_1 + \lambda_2 b e_2$$
Applying the map $T$:
$$T_{[V]} \left( \begin{matrix}
1 \\
a \end{matrix} \right) = \lambda_1 \left( \begin{matrix}
1 \\
a \end{matrix} \right) = \left( \begin{matrix}
1 \\
5 \end{matrix} \right) + a\left( \begin{matrix}
2 \\
4 \end{matrix} \right)$$
$$T_{[V]} \left( \begin{matrix}
2 \\
b \end{matrix} \right) = \lambda_2 \left( \begin{matrix}
2 \\
b \end{matrix} \right) = 2\left( \begin{matrix}
1 \\
5 \end{matrix} \right) + b\left( \begin{matrix}
2 \\
4 \end{matrix} \right)$$
We've ended up with simultaneous equations for $\lambda_1$ and $\lambda_2$:
$$\begin{cases}
\lambda_1 = 1 + 2a \\
a \lambda_1 = 5 + 4a
\end{cases},\quad \quad \begin{cases}
2\lambda_2 = 2 + 2b \\
b \lambda_2 = 10 + 4b
\end{cases}$$
Rearranging these equations gives two quadratics:
$$2a^2 - 3a - 5 = 0$$
$$b^2 - 3b - 10 = 0$$
Our solutions are $a = 5/2$ or $-1$ and $b = 5$ or $-2$. To keep the two basis vectors of $W$ linearly independent we must pair opposite eigenvalues (the choices $a=5/2,\ b=5$ and $a=-1,\ b=-2$ make the vectors parallel), so the configurations $a=-1,\ b=5$ and $a=5/2,\ b=-2$ make $T$ written in $W$'s basis a diagonal matrix.
As an example, set $a = -1, b = 5$. Then:
$$\underbrace{\left( \begin{matrix}
1 & 2 \\
5 & 4 \end{matrix} \right)}_{T_{[V]}} \left( \begin{matrix}
1 & 2 \\
-1 & 5 \end{matrix} \right) = \left( \begin{matrix}
-1 & 12 \\
1 & 30 \end{matrix} \right)$$
It can be seen that the first basis vector of $W$ is an eigenvector of $T$ with eigenvalue of $-1$, and the second is an eigenvector of $T$ with eigenvalue $6$.
Thus:
$$T_{[W]} = \left( \begin{matrix}
-1 & 0 \\
0 & 6 \end{matrix} \right)$$
With this choice for $a$ and $b$.
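A numerical check of this choice, sketched in Python (assuming NumPy):

```python
# Sketch: with a = -1, b = 5 the change-of-basis matrix P diagonalizes T.
import numpy as np

T = np.array([[1.0, 2.0],
              [5.0, 4.0]])
P = np.array([[1.0, 2.0],      # columns: e1 + a*e2 and 2*e1 + b*e2
              [-1.0, 5.0]])

T_W = np.linalg.inv(P) @ T @ P
print(np.round(T_W, 10))       # diag(-1, 6), as computed above
```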
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/702722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
}
|
Probability of two digit number sequence in series of numbers Given a random sequence of digits (say $15$ of them), I want to find the odds of finding '$90$' or '$09$' in the sequence. Looking at just two adjacent digits in the sequence, you have a $\dfrac{2}{10}$ chance of getting a '$9$' or '$0$' as the first digit, followed by a $\dfrac{1}{10}$ chance of getting specifically the opposite digit that you need to complete the pair: $\dfrac{2}{10}\cdot\dfrac{1}{10} = \dfrac{1}{50}$. Simple, right?
But then I think of the repercussions of a failure case on the next adjacent pair examined in the sequence. If analyzing the first two digits in the sequence has failed, it is very likely the pairs adjacent to it have failed too, because adjacent pairs 'share' digits.
If you have the sequence $4 \; 5 \; 9 \; 1 \; 0 \; 5$, my steps for solving the problem fall apart: checking '$4$' '$5$' leads into checking '$5$' '$9$', and the $5$ has already been accounted for as part of a failed pair, which influences whether this next pair fails as well.
I haven't taken a probability class so maybe I'm failing to express my logic and perhaps I have some crazy misunderstanding of probability. What is this dependency that I acknowledge and how do I account for it when solving these kind of problems?
|
Your question is not totally clear, but working from your calculation that the probability of success with a two-digit string is $\frac{1}{50}$, you seem to be looking for the probability of either $09$ or $90$ (or both) appearing in your string.
This will be easier to consider as the complement of neither $09$ nor $90$ appearing.
So let $q_n$ be the probability that neither $09$ nor $90$ has appeared in the first $n$ digits and the $n$th digit is not $0$ or $9$, and let $r_n$ be the probability that neither $09$ nor $90$ has appeared in the first $n$ digits and the $n$th digit is $0$ or $9$. You have the following:
$$q_1=0.8 \qquad r_1=0.2$$ $$q_{n+1}=0.8q_n + 0.8 r_n$$ $$r_{n+1}=0.2q_n + 0.1 r_n$$
You can solve this in closed form, but it is probably easier to find $q_{10}=0.6880531472$ and $r_{10}=0.1561083282$, making the probability you are seeking $1-q_{10}-r_{10}=0.1558385246$.
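The recursion is easy to run in code; a sketch in Python that reproduces the numbers above (change `n` to 15 for the string length mentioned in the question):

```python
# Sketch: iterate the (q, r) recursion above.
def prob_09_or_90(n: int) -> float:
    q, r = 0.8, 0.2  # after the first digit: not in {0,9} vs in {0,9}
    for _ in range(n - 1):
        q, r = 0.8 * (q + r), 0.2 * q + 0.1 * r
    return 1 - q - r

print(prob_09_or_90(10))  # 0.1558385246..., matching the value above
print(prob_09_or_90(15))  # the analogous value for a 15-digit string
```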
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/702836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Equation with an infinite number of solutions I have the following equation: $x^3+y^3=6xy$. I have two questions: 1. Does it have an infinite number of rational solutions?
2. What are the solutions over the integers? ($x=3$ and $y=3$ is one.)
Thank you!
|
Wolfram Alpha says that there are no rational solutions except the one you noted, $x=y=3$, though it seems to have skipped the trivial $x=y=0$. The link has some irrational solutions too, if you need them.
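For the integer part of the question, a brute-force sketch in Python (this only rules out solutions inside the searched window, nothing more):

```python
# Sketch: search a bounded window for integer solutions of x^3 + y^3 = 6xy.
solutions = [(x, y)
             for x in range(-500, 501)
             for y in range(-500, 501)
             if x**3 + y**3 == 6 * x * y]
print(solutions)  # within this window: [(0, 0), (3, 3)]
```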
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/702936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 2
}
|
prove a subset of $l^2$ is closed? Let $\{f_i\}$ be a sequence of (nice) functions in $L^p[0,1],$ and $p>1, \frac{1}{p}+\frac{1}{q}=1.$
Define a subset $A$ of the space $l^2$ as
$$A=\left\{(a_1, a_2, \ldots)\in l^2: \text{ such that } a_i=\int_0^2g(x)f_i(x)dx, \text{ for }g\in L^q[0,1] \text{ and } \|g\|_q\leq1\right\}.$$
Here the $\{f_i\}$ are good enough that for any $\|g\|_q\leq1$ the resulting $(a_1, a_2, \ldots)\in l^2.$
Can we prove this subset $A$ is closed in $l^2$?
I have been working on this for a while. Any suggestion is greatly appreciated. (More assumptions may be imposed on $\{f_i\}$, for instance smoothness.)
|
I will assume the limits of the integral should be $0$ and $1$, not $0$ and $2$. I also won't really comment on the requirements of the sequence $f_i$; the set you've provided is well-defined in any case. If we can establish that this set is always closed, regardless of the condition that $(a_1, a_2, \ldots) \in l^2$ for any $\lVert g \rVert \le 1$, then surely this will be of some use to you!
I also plan to generalise the result, for no other reason than to simplify the notation. We replace $L^p[0, 1]$ with a reflexive Banach space $X$, and $L^q[0, 1]$ with its dual $X^*$. The functions $f_i$ shall now be a sequence $x_i \in X$. We are therefore looking at the set
$$A = \left\lbrace (f(x_1), f(x_2), \ldots) \in l^2 : f \in B_{X^*} \right\rbrace.$$
Suppose we have a sequence $(f_i(x_1), f_i(x_2), \ldots)_{i=1}^\infty \in A$ that converges in $l^2$ to some sequence $(a_1, a_2, \ldots)$. That is,
$$\sum\limits_{j = 1}^\infty ~ \lvert f_i(x_j) - a_j \rvert^2 \rightarrow 0$$
as $i \rightarrow \infty$.
In order to prove $(a_1, a_2, \ldots) \in A$, we must construct its corresponding $f \in X^*$. Since $X$ is reflexive, so is $X^*$, hence $B_{X^*}$ is weakly compact. By the Eberlein-Smulian theorem, it's also sequentially weakly compact. Hence the sequence $f_i$ has a weakly convergent subsequence $f_{n_i}$, weakly converging to some $f \in B_{X^*}$. That is, for any $x \in X$,
$$f_{n_i}(x) \rightarrow f(x)$$
as $i \rightarrow \infty$. Now we must prove that $f(x_j) = a_j$ for all $j$.
But this is now straightforward. If $\sum\limits_{j = 1}^\infty ~ \lvert f_i(x_j) - a_j \rvert^2 \rightarrow 0$, then $\lvert f_i(x_j) - a_j \rvert \rightarrow 0$ for any fixed $j$. That is, $f_i(x_j) \rightarrow a_j$ as $i \rightarrow \infty$. Considering the subsequence $f_{n_i}(x_j)$, which converges to $f(x_j)$, we see by the uniqueness of limits that $a_j = f(x_j)$ as required. Hence, the set $A$ is closed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find the limit of $\lim_{x\to 0}\frac{\sqrt{x^2+a^2}-a}{\sqrt{x^2+b^2}-b}$ Can someone help me solve this limit?
$$\lim_{x\to0}\frac{\sqrt{x^2+a^2}-a}{\sqrt{x^2+b^2}-b}$$
with $a>0$ and $b>0$.
|
No need for L'Hopital - we simply multiply and divide by the conjugate radical expression:
\begin{align}
\frac{\sqrt{x^2+a^2}-a}{\sqrt{x^2+b^2}-b}&=\left(\frac{\sqrt{x^2+a^2}-a}{\sqrt{x^2+b^2}-b}\cdot\frac{\sqrt{x^2+a^2}+a}{\sqrt{x^2+b^2}+b}\right)\cdot\frac{\sqrt{x^2+b^2}+b}{\sqrt{x^2+a^2}+a}
\\ &=\frac{x^2+a^2-a^2}{x^2+b^2-b^2}\cdot\frac{\sqrt{x^2+b^2}+b}{\sqrt{x^2+a^2}+a}=
\frac{\sqrt{x^2+b^2}+b}{\sqrt{x^2+a^2}+a}\to\frac{2b}{2a}=\frac{b}{a}.
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Proof: $a^2 - b^2 = (a-b)(a+b)$ holds $\forall a,b \in R$ iff R is commutative We want to show that for some ring $R$, the equality $a^2 - b^2 = (a-b)(a+b)$ holds $\forall a,b \in R$ if and only if $R$ is commutative.
Here's my proof --- I'm not sure if the first part stands up to examination. I'd be grateful if someone could take a look.
Forward: $a^2 -b^2 = (a-b)(a+b) \forall a,b \in R$ implies $R$ is commutative
Let $x = (a-b)$. Then \begin{align}
x(a+b) &= xa+xb\\
&= (a-b)a + (a-b)b\\
&= a^2 -ba + ab - b^2\end{align}
Then we note that $a^2 - ba + ab - b^2 = a^2 - b^2$ iff $-ba + ab = 0$ iff $ab=ba$; since $a$ and $b$ are arbitrary, this holds for all of $R$ iff $R$ is commutative.
Backwards: $R$ is commutative implies $a^2 - b^2 = (a-b)(a+b) \forall a,b \in R$.
Let $x = (a+b)$. Then $(a-b)x = ax - bx = a(a+b) - b(a+b) = a^2 + ab - ba - b^2$. $R$ is commutative, so $ab-ba = 0$, so $a^2 + ab - ba - b^2 = a^2 - b^2$.
|
What you did is correct, as far as I can tell.
Perhaps a shorter solution is that $(a^2 - b^2) - (a-b)(a+b) = ba - ab$. This is $0$ iff the $a$ and $b$ commute. So, the expression $(a^2 - b^2) - (a-b)(a+b)$ is identically $0$ iff the ring is commutative.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Where is the mistake in this limit calculation? I got this limit:
$$\lim_{x\to1}\frac{\sqrt[3]{x}-1}{\sqrt[4]{x}-1} \implies \lim_{x\to1}\frac{\frac{x-1}{\sqrt[3]{x^2}+\sqrt[3]{x}+1}}{\sqrt[4]{x}-1} \implies \lim_{x\to1}\frac{x-1}{\sqrt[3]{x^2}+\sqrt[3]{x}+1}\cdot\frac{1}{\sqrt[4]{x}-1}\cdot\frac{\sqrt[4]{x}+1}{\sqrt[4]{x}+1} \implies \lim_{x\to1}\frac{\sqrt[4]{x}+1}{\sqrt[3]{x^2}+\sqrt[3]{x}+1} \xrightarrow[x\to 1]{}\frac{2}{3}.$$
But in the book the answer is $$ \frac{4}{3}.$$
I cannot find the mistake in my calculation.
|
That's probably because $\dfrac{x-1}{\sqrt[4]x-1}=\sqrt[4]{x^3}+\sqrt[4]{x^2}+\sqrt[4]{x}+1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Why square the result of $x_1 - \bar{x}$ in the standard deviation? I don't understand the necessity of squaring the result of $x_i - \bar{x}$ in $$\sqrt{\frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N-1}}.$$ In fact I don't even understand why $N - 1$ is in the denominator instead of just $N$. Could someone explain it or recommend a good text about it? All the books about the theory of errors or even statistics that I found are either too abstract or too simplistic.
Thanks in advance.
|
Squaring the Deviations
The variance of a sample measures the spread of the values in a sample or distribution. We could do this with any function of $|x_k-\bar{x}|$. The reason that we use $(x_k-\bar{x})^2$ is because the variance computed this way has very nice properties. Here are a couple:
$1$. The variance of the sum of independent variables is the sum of their variances.
Since $x_i$ and $y_j$ are independent, their probabilities multiply. Therefore,
$$
\begin{align}
\hspace{-1cm}\mathrm{Var}(X+Y)
&=\sum_{i=1}^n\sum_{j=1}^m\Big[(x_i+y_j)-(\bar{x}+\bar{y})\Big]^2p_iq_j\\
&=\sum_{k=1}^n(x_i-\bar{x})^2p_i+\sum_{j=1}^m(y_j-\bar{y})^2q_j+2\sum_{i=1}^n(x_i-\bar{x})p_i\sum_{j=1}^m(y_j-\bar{y})q_j\\
&=\sum_{k=1}^n(x_i-\bar{x})^2p_i+\sum_{j=1}^m(y_j-\bar{y})^2q_j\\
&=\mathrm{Var}(X)+\mathrm{Var}(Y)\tag{1}
\end{align}
$$
$2$. The mean is the point from which the mean square deviation is minimized:
$$
\begin{align}
\sum_{i=1}^n(x_i-a)^2p_i
&=\sum_{i=1}^n(x_i^2-2ax_i+a^2)p_i\\
&=\sum_{i=1}^n\left(x_i^2-2\bar{x}x_i+\bar{x}^2+(\bar{x}-a)(2x_i-\bar{x}-a)\right)p_i\\
&=\left(\sum_{i=1}^n(x_i-\bar{x})^2p_i\right)+(\bar{x}-a)^2\tag{2}
\end{align}
$$
Dividing by $\mathbf{n-1}$
Considering $(2)$, it can be seen that the mean square of a sample measured from the mean of the sample will be smaller than the mean square of the sample measured from the mean of the distribution. In this answer, this idea is quantified to show that
$$
\mathrm{E}[v_s]=\frac{n{-}1}{n}v_d\tag{3}
$$
where $\mathrm{E}[v_s]$ is the expected value of the sample variance and $v_d$ is the distribution variance. $(3)$ explains why we estimate the distribution variance as
$$
v_d=\frac1{n-1}\sum_{i=1}^n(x_i-\bar{x})^2\tag{4}
$$
where $\bar{x}$ is the sample mean.
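The effect of the $n-1$ denominator in $(4)$ is easy to see in simulation; a sketch in Python (assuming NumPy; the distribution and sample size are arbitrary choices):

```python
# Sketch: dividing the sum of squared deviations by n-1 (not n) gives an
# unbiased estimate of the distribution variance, as in (3) and (4).
import numpy as np

rng = np.random.default_rng(1)
n, trials = 5, 200_000
samples = rng.normal(loc=0.0, scale=2.0, size=(trials, n))  # true variance 4

xbar = samples.mean(axis=1, keepdims=True)
ss = ((samples - xbar) ** 2).sum(axis=1)

print((ss / n).mean())        # ~ 4 * (n-1)/n = 3.2, biased low
print((ss / (n - 1)).mean())  # ~ 4, unbiased
```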
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
}
|
Solving $x^2 - 16 > 0$ for $x$ This may be a very simple question, but I can't figure out the correct reason behind it.
If $x^2 - 16 >0$, which of the following must be true?
a. $4 < x$
b. $-4 > x > 4$
c. $-4 > x < 4$
d. $-4 < x < 4$
I know the answer but I didn't get how they figured out the direction.
|
Since $(-4)^2=4^2=16,$ for $x^2-16>0$ to be true $x$ has to be strictly greater than $4$ or strictly less than $-4$; from this I think you can tell which answer must be correct.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Why doesn't this calculation work? I want to find some closed form for $\gcd(x^3+1,3x^2 + 3x + 1)$ but get $7$, which is not always true.
|
With your procedure you found that the GCD of the two polynomials $x^3+1$ and $3x^2+3x+1$ in $\mathbb{Q}[x]$ is $7$, or equivalently $1$, because the GCD of polynomials is defined up to constants (every nonzero scalar $c$ divides any polynomial $p(x)\in\mathbb{Q}[x]$).
Thus there is no contradiction in your statement.
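A sketch with SymPy (an assumed tool, not part of the question) showing both the gcd and where the constant $7$ appears in the Euclidean algorithm:

```python
# Sketch: gcd over Q[x] is 1; the constant 7 is the last nonzero remainder.
from sympy import symbols, gcd, rem

x = symbols('x')
f, g = x**3 + 1, 3*x**2 + 3*x + 1
print(gcd(f, g))      # 1: the polynomials are coprime over Q
r1 = rem(f, g, x)     # 2*x/3 + 4/3, i.e. a multiple of x + 2
r2 = rem(g, r1, x)    # 7: the constant the question found
print(r1, r2)
```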
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
If $X_1,X_2\sim U(0,1)$ and $Y$ is one of them who is closest to an end point, find distribution of $Y$. Let $X_1$ and $X_2$ be independent, $U (0, 1)$-distributed random variables, and
let $Y$ denote the point that is closest to an endpoint. Determine the distribution of $Y$.
It's a question in a chapter on order statistics. I'm thinking about solving it without using order statistics, because I have no idea how to solve it the order-statistics way at all. The normal way: $$P(Y<y)=1-P(Y>y)=1-P(y<X_1<1-y,y<X_2<1-y)=1-P(y<X_1<1-y)P(y<X_2<1-y)=1-(1-2y)^2$$ so $f(y)=4-8y$ when $y<1/2$, and the case $y>1/2$ is similar.
But the correct answer is $f(y) = 2 - 4y$ for $0 < y < \frac12$, and $4y - 2$ for $\frac12 < y < 1$.
Can anybody tell me how to solve this question both the order-statistics way and the normal way? Thanks a lot.
|
We derive the density function directly, in a way analogous to the way one finds the distribution of order statistics. The only interesting values of $y$ are between $0$ and $1$. We find $f_Y(y)$ for $0\lt y\lt \frac{1}{2}$.
The probability that $Y$ is between $y$ and $y+dy$, for "small" $dy$, is approximately $f_Y(y)\,dy$. Or, if one feels like being more precise, the probability that $Y$ lies between $y$ and $y+h$ is $f_Y(y)h +o(h)$.
In order for $Y$ to lie in this interval, we want either (i) $X_1$ lies in this interval, and $X_2$ lies roughly in the interval $(y,1-y)$, or (ii) $X_2$ lies in this interval, and $X_1$ lies roughly in the interval $(y,1-y)$. The events (i) and (ii) are equiprobable, and disjoint. We find the probability of (i).
The probability that $X_1$ lies in the interval from $y$ to $y+dy$ is $dy$. Given this, the probability that $X_2$ lies in the interval $(y,1-y)$ is $1-2y$. Thus the probability of (i) is $\approx (1-2y)\,dy$. Taking (ii) into account, we find that the probability $Y$ lies in the interval is $\approx (2-4y)\,dy$. This is mildly inexact, there is a "second order infinitesimal" involved.
It follows that
$$f_Y(y)\,dy\approx (2-4y)\,dy,$$
and the result follows.
Remark: The assertion that (for $y\lt \frac{1}{2}$) we have $\Pr(Y\gt y)$ is $\Pr(y\lt X_1\lt 1-y)\Pr(y\lt X_2\lt 1-y)$ was not correct. Roughly speaking, this is because we can have $Y\le y$ in many ways. Let $t\le y$. If $t$ is equal to $y$, or close, and $X_1=t$, we do indeed want $y\lt X_2\lt 1-y$. But if $t$ is smaller, say $y/2$, and $X_1=y/2$, our condition on $X_2$ changes, it can be allowed to roam up to $1-y/2$.
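A Monte Carlo check of this density, sketched in Python (assuming NumPy):

```python
# Sketch: estimate the density of Y and compare with 2-4y / 4y-2.
import numpy as np

rng = np.random.default_rng(2)
x = rng.random((1_000_000, 2))
closest = np.minimum(x, 1 - x).argmin(axis=1)  # which X_i is nearer to {0,1}
y = x[np.arange(len(x)), closest]

hist, edges = np.histogram(y, bins=10, range=(0, 1), density=True)
centers = (edges[:-1] + edges[1:]) / 2
expected = np.where(centers < 0.5, 2 - 4 * centers, 4 * centers - 2)
print(np.c_[centers, hist, expected])  # empirical vs theoretical density
```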
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
variation of the binomial theorem Why does:
$$ \sum_{k=0}^{n} k \binom nk p^k (1-p)^{n-k} = np $$ ?
Taking the derivative of:
$$ \sum_{k=0}^{n} \binom nk p^k (1-p)^{n-k} = (p + [1-p])^n = 1 $$
does not seem useful, since you would get zero. And induction hasn't yet worked for me, since – during the inductive step – I am unable to prove that:
$$ \sum_{k=0}^{n+1} k \binom {n+1}{k} p^k (1-p)^{n+1-k} = (n+1)p $$
assuming that:
$$ \sum_{k=0}^{n} k \binom {n}{k} p^k (1-p)^{n-k} = np $$
|
This is saying that the average number of heads when you flip $n$ coins, each with heads probability $p$, is $np$ (you are calculating the mean of a Binomial$(n,p)$ distribution).
You can use linearity of expectation: if $Y = \sum_i X_i$ where each $X_i$ is Bernoulli$(p)$ and there are $n$ such $X_i$, then $Y$ is Binomial$(n,p)$. Then $E[Y] = \sum_i E[X_i]$, and you can easily show $E[X_i]=p$.
If you want to proceed in this manner, note $\binom{n}{k}=\frac{n}{k} \binom{n-1}{k-1}$. You can find the full details of the proof on Proof Wiki.
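A direct numerical check of the identity, sketched in Python (standard library only):

```python
# Sketch: evaluate the sum for a few (n, p) and compare with n*p.
from math import comb

def binom_mean(n: int, p: float) -> float:
    return sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

for n, p in [(5, 0.3), (10, 0.5), (17, 0.9)]:
    print(binom_mean(n, p), n * p)  # the two values agree
```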
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
}
|
Doing take aways I am reviewing take aways (subtraction) and I am having trouble.
How do I do $342 - 58$?
For the ones column I made the $2$ into a $12$ so I can do $12 - 8 = 4$, but I must take away one ten. So I get $3 - 5$ in the tens column, but I can't do $3 - 5$. What do I do now? Do I borrow something else?
|
You lower the hundreds digit by one, and add ten to the tens digit. Example:
300 + 40 + 2 = 200 + 140 + 2
Now that you do this, you get $13-5$, and your answer has an $8$ in the tens place and a $2$ in the hundreds place.
As a whole, it looks like this
342 - 58
300 + 40 + 2 - 50 - 8
200 + 130 + 12 -50 - 8 (1 hundred turns into 10 tens, and 1 ten turns into 10 ones)
200 + 130 -50 + 12 - 8 (just putting them in order)
200 + 80 + 4
284 is the answer
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 5,
"answer_id": 2
}
|
Big Oh and Big Theta relations confirmation I just want to confirm these statements.
I know that Big O and Big Theta are a partial order and an equivalence relation respectively, on functions of the positive integers, but I am not sure under these restrictions:
$f:N \rightarrow R^+$
where $f$ R $g$ if and only if $f(n) = O(g(n))$.
This is still a PO (Partial Order).
where $f$ R $g$ if and only if $f(n) = Θ(g(n))$.
This is still an ER (Equivalence Relation).
Thank you!
|
You are correct that Big-Oh is a partial order, and Big-Theta is an equivalence relation. One can say that $f < g$ if $f \in O(g)$ (or $f = O(g)$, alternate notation-wise).
Notice though that $\sin(n) \in O(n)$ and $\cos(n) \in O(n)$, so that with this order we have $\sin(n) < n$ and $\cos(n) < n$, but we have neither $\sin(n) < \cos n$ nor $\cos(n) < \sin(n)$. This is why it's a partial order (as opposed to a total order).
The equivalence relation $f \sim g$ iff $f \in \Theta(g)$ is, as you've called it, an equivalence relation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Solution of definite integrals involving incomplete Gamma function The solution of the integral $$\int_0^{\infty}e^{-\beta x}\gamma(\nu,\alpha \sqrt x)\,dx $$ is given as $$2^{-\frac{1}{2}\nu}\alpha^{\nu}\beta^{\frac{1}{2}\nu-1}\Gamma(\nu)\exp\left(\frac{\alpha^2}{8\beta}\right)D_{-\nu}\left(\frac{\alpha}{\sqrt{2\beta}}\right)$$
[Re $\beta>0$, Re $\nu>0$]. I need to calculate the same integral with a finite upper limit, such as $0$ to $a$. So how can I calculate $\int_0^{a}e^{-\beta x}\gamma(\nu,\alpha \sqrt x)\,dx$? Is there any known closed form for this definite integral?
|
$\int_0^ae^{-\beta x}\gamma(\nu,\alpha\sqrt x)~dx$
$=-\int_0^a\gamma(\nu,\alpha\sqrt x)~d\left(\dfrac{e^{-\beta x}}{\beta}\right)$
$=-\left[\dfrac{e^{-\beta x}\gamma(\nu,\alpha\sqrt x)}{\beta}\right]_0^a+\int_0^a\dfrac{e^{-\beta x}}{\beta}d\left(\gamma(\nu,\alpha\sqrt x)\right)$
$=-\dfrac{e^{-\beta a}\gamma(\nu,\alpha\sqrt a)}{\beta}+\int_0^a\dfrac{\alpha^\nu x^{\frac\nu2-1}e^{-\beta x-\alpha\sqrt x}}{2\beta}dx$
$=-\dfrac{e^{-\beta a}\gamma(\nu,\alpha\sqrt a)}{\beta}+\int_0^\sqrt a\dfrac{\alpha^\nu x^{\nu-2}e^{-\beta x^2-\alpha x}}{2\beta}d(x^2)$
$=-\dfrac{e^{-\beta a}\gamma(\nu,\alpha\sqrt a)}{\beta}+\int_0^\sqrt a\dfrac{\alpha^\nu x^{\nu-1}e^{-\beta x^2-\alpha x}}{\beta}dx$
$=-\dfrac{e^{-\beta a}\gamma(\nu,\alpha\sqrt a)}{\beta}+\int_0^\sqrt a\dfrac{\alpha^\nu x^{\nu-1}e^{-\beta\left(x^2+\frac{\alpha x}{\beta}\right)}}{\beta}dx$
$=-\dfrac{e^{-\beta a}\gamma(\nu,\alpha\sqrt a)}{\beta}+\int_0^\sqrt a\dfrac{\alpha^\nu x^{\nu-1}e^{-\beta\left(x^2+\frac{\alpha x}{\beta}+\frac{\alpha^2}{4\beta^2}-\frac{\alpha^2}{4\beta^2}\right)}}{\beta}dx$
$=-\dfrac{e^{-\beta a}\gamma(\nu,\alpha\sqrt a)}{\beta}+\int_0^\sqrt a\dfrac{\alpha^\nu e^\frac{\alpha^2}{4\beta}x^{\nu-1}e^{-\beta\left(x+\frac{\alpha}{2\beta}\right)^2}}{\beta}dx$
$=-\dfrac{e^{-\beta a}\gamma(\nu,\alpha\sqrt a)}{\beta}+\int_{\frac{\alpha}{2\beta}}^{\sqrt a+\frac{\alpha}{2\beta}}\dfrac{\alpha^\nu e^\frac{\alpha^2}{4\beta}\left(x-\dfrac{\alpha}{2\beta}\right)^{\nu-1}e^{-\beta x^2}}{\beta}dx$
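A numerical sanity check of this reduction, sketched in Python with SciPy (the parameter values are arbitrary assumptions):

```python
# Sketch: compare the original integral with the final expression above.
# SciPy's gammainc is the regularized lower incomplete gamma, so
# gamma(nu) * gammainc(nu, u) is the unregularized gamma(nu, u).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc

alpha, beta, nu, a = 1.3, 0.7, 2.5, 2.0
lower_gamma = lambda nu, u: gamma(nu) * gammainc(nu, u)

lhs, _ = quad(lambda x: np.exp(-beta * x) * lower_gamma(nu, alpha * np.sqrt(x)), 0, a)

shift = alpha / (2 * beta)
tail, _ = quad(
    lambda x: (x - shift) ** (nu - 1) * np.exp(-beta * x**2),
    shift, np.sqrt(a) + shift,
)
rhs = (-np.exp(-beta * a) * lower_gamma(nu, alpha * np.sqrt(a)) / beta
       + alpha**nu * np.exp(alpha**2 / (4 * beta)) * tail / beta)

print(lhs, rhs)  # the two values agree to quadrature accuracy
```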
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/703987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
justification of a limit I encountered something interesting when trying to differentiate $F(x) = c$.
Consider: $\lim_{x→0}\frac0x$.
I understand that for any $x$, no matter how incredibly small, we will have $0$ as the quotient. But don't things change when one takes matters to infinitesimals?
I.e., why is the function $f(x) = \frac0x$ not undefined at $x=0$?
I would appreciate a strong logical argument for why the limit stays at $0$.
|
A function isn't just an expression, but you can still ask whether a single expression can be applied to an argument. The expression $0^{-1}$ is rather meaningless, so you don't know how to get the behavior of the function $f(x)=0\cdot x^{-1}$ at $x=0$ from the expression.
Limits are just a way to describe the behavior of a function (when it looks consistent enough that the limit exists) around a point. A limit states nothing about the value of the function at the point. That is, if
$a=\lim_{x\to b}f(x)$
then the function $f_1(x)=\left\{\begin{matrix}f(x)&\text{ if }x\neq b\\a&\text{ if }x=b\end{matrix}\right.$ is continuous.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 5
}
|
Prove that $f_n$ converges uniformly on $[a,b]$ Let $f_n$ be a sequence of functions defined on $[a,b]$. Suppose that for every $c \in [a,b]$ there exists an interval around $c$ on which $f_n$ converges uniformly. Prove that $f_n$ converges uniformly on $[a,b]$.
I know that since for every $c \in [a,b]$ there exists an interval around $c$ on which $f_n$ converges uniformly, for all $\epsilon >0$ there exists an $N>0$ such that $n>N$ implies $|f_n(c)-f(c)|<\epsilon$.
I know that $c$ is arbitrary anywhere in $[a,b]$, but I don't know how to argue formally that $f_n$ converges uniformly on $[a,b]$.
OK, here is what I got:
Assume that $f_n$ is a sequence of functions defined on $[a,b]$, and suppose that for every $c \in [a,b]$ there exists an interval around $c$ on which $f_n$ converges uniformly. Since $[a,b]$ is compact, there exist finitely many points $c_1,c_2,c_3, \ldots \in [a,b]$ such that
$[a,b] \subset \cup _{k=1}^n (c_k - \frac {\delta(c_k)}{2},c_k + \frac {\delta(c_k)}{2})$
Since for every $c \in [a,b]$ there exists an interval around $c$ on which $f_n$ converges uniformly, for all $\epsilon >0$ there exist finitely many $N$'s $>0$ such that $n>N$ implies $|f_n(c)-f(c)|<\epsilon$. Hence, $f_n$ converges uniformly on $[a,b]$?
|
You can use a trick similar to mathematical induction, but on the real line:
Let $I = \{ x \in [a,b]: f_n \mbox{ converges uniformly on } [a,x] \}$
We know:
For every $c \in [a,b]$, there exist an interval around $c$ in which
$f_n$ converges uniformly
Take $c=a$ in the above statement; then there exists an interval near $a$ on which $f_n$ converges uniformly.
Therefore $I$ is nonempty, so $\sup(I)$ exists; let $S = \sup(I)$.
We want to prove $S = b$, then we are done.
Suppose $S < b$,
$f_n$ converges uniformly on $I$ by definition of $I$
Take $c=S$; then there exists an interval $J$ near $S$ on which $f_n$ converges uniformly.
Then, take a subset $[S-\varepsilon,S+\varepsilon]$ of $J$ for some $\varepsilon > 0$.
Then $S+\varepsilon \in I$, contradicting $\sup(I) = S$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
How does this equation create this chart?
I am trying to understand this formula from the chart above.
For example, from the middle graph, How does h(x) = 0.5x get the coordinates 2,1?
Any explanations on the other graphs would be helpful, too.
Edit: How would I find theta 0 and theta 1 given the following graph?
|
The horizontal axis is for values of $x$ and the vertical one is for the values of the function $h_{\theta}(x)$. So if we have $h_{\theta}(x)=\theta_0+\theta_1x$ where $\theta_0=0$ and $\theta_1=0.5$, then you have the function $h_{\theta}(x)=0+0.5x=0.5x$, and for $x=2$ you have $h_{\theta}(2)=0.5\cdot 2=1$. So if on the horizontal axis we take the value $2$, on the vertical one we get the value $1$. That's how we get the coordinate $(2,1)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Use L'Hopital's Rule to Prove Let $f: \mathbb R\rightarrow \mathbb R$ be differentiable and let $a \in \mathbb R$. Suppose that $f''(a)$ exists. Prove that $$\lim_{h\rightarrow0}\frac{f(a+h)-2f(a)+f(a-h)}{h^2}=f''(a) $$
Suppose further that $f''(x)$ exists for all $x$, and that $f'''(0)$ exists. Prove that $$\lim_{h\rightarrow0}\frac{4(f(h)-f(-h)-2(f(h/2)-f(-h/2)))}{h^3}=f'''(0)$$
|
Question 1
$$
\lim_{h\rightarrow0}\frac{f(a+h)-2f(a)+f(a-h)}{h^2} \\
$$
when $h = 0$ is substituted, numerator and denominator both reduce to $0$. So, applying L'Hopital's rule (differentiating with respect to $h$):
$$
\lim_{h\rightarrow0}\frac{f'(a+h)-f'(a-h)}{2h} \\
\lim_{h\rightarrow0}\frac{f'(a+h)-f'(a)-f'(a-h)+f'(a)}{2h} \\
\lim_{h\rightarrow0}\frac{f'(a+h)-f'(a)}{2h}+\lim_{h\rightarrow0}\frac{f'(a)-f'(a-h)}{2h} \\
= \frac{2f''(a)}{2} = f''(a)\\
$$
Question 2
$$
\lim_{h\rightarrow0}\frac{4(f(h)-f(-h)-2(f(h/2)-f(-h/2)))}{h^3}
$$
when $h=0$ is substituted, numerator and denominator both reduce to $0$. So, applying L'Hopital's rule (differentiating with respect to $h$):
$$
\lim_{h\rightarrow0}\frac{4(f'(h)+f'(-h)-f'(h/2)-f'(-h/2))}{3h^2} \\
= \lim_{h\rightarrow0}\frac{4(f''(h)-f''(0)+f''(0)-f''(-h)+\frac{1}{2}f''(0)-\frac{1}{2}f''(h/2)-\frac{1}{2}f''(0)+\frac{1}{2}f''(-h/2))}{6h} \\
= \frac{2}{3}\left(\lim_{h\rightarrow0}\frac{f''(h)-f''(0)}{h}+\lim_{h\rightarrow0}\frac{f''(0)-f''(-h)}{h}\right)+\frac{1}{3}\left(\lim_{h\rightarrow0}\frac{f''(0)-f''(h/2)}{h}+\lim_{h\rightarrow0}\frac{f''(-h/2)-f''(0)}{h}\right) \\
= \frac{4f'''(0)}{3}-\frac{f'''(0)}{3} = f'''(0)
$$
PS: Thanks to @Paramanand Singh for pointing out that $f$ was not assumed twice and thrice differentiable in the first and second questions respectively.
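Both limits are easy to sanity-check numerically; a sketch in Python (assuming NumPy) with $f=\sin$, where $f''(a)=-\sin a$ and $f'''(0)=-1$:

```python
# Sketch: evaluate both difference quotients for shrinking h.
import numpy as np

f, a = np.sin, 0.7
for h in [1e-2, 1e-3, 1e-4]:
    second = (f(a + h) - 2 * f(a) + f(a - h)) / h**2
    third = 4 * (f(h) - f(-h) - 2 * (f(h / 2) - f(-h / 2))) / h**3
    print(h, second, third)
# second -> -sin(0.7) ~ -0.6442, third -> -cos(0) = -1
```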
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Number of groups of a given order In general, for what $n$ do there exist two groups of order $n$? How about three groups of order $n$?
I know that if $n$ is prime, there only exists one group of order $n$, by Lagrange's Theorem, but how do you classify all other such $n$ that have $2, 3, 4, ...$ groups?
This question came to me during a group theory class, when we were making a table of groups of order $n$. For instance, all groups of order $4$ are isomorphic to either $C_4$ or $C_2\times C_2$.
|
Exactly 2 groups. There is a paper that claims to classify "Orders for which there exist exactly two groups".
This link contains the text of the paper in text (!) format. I didn't find a PDF.
Disclaimer: I didn't check whether the proofs in the paper are correct. I also don't know if the paper was published in any peer-reviewed journal (probably it wasn't).
Exactly 3 groups. Proposition 2 of the paper above says that if $n$ is not cube-free, then there are at least $5$ groups of order $n$. So you are only interested in cube-free numbers. Then you at least know all numbers $n$ with exactly $3$ groups for $n<50000$ from here. Based on this one will know the probable answer, and then one can try to prove it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 4
}
|
Poisson Distribution more than 2 raindrops will fall on the square inch Assume that raindrops fall on a particular square inch of a city block according to a Poisson process with an average of 2 raindrops per second. Find the probability that more than 2 raindrops will fall on the square inch during a 5-second time interval.
What I got for this was:
$$1 - [P(Y=0)+P(Y=1)+P(Y=2)]$$
basically:
$$1- \left[
\left(\frac{5(2)^2}{2!}\right)\left(e^{-2*5}\right)
+ \left(\frac{5(2)^1}{1!}\right)\left(e^{-2*5}\right)
+ \left(\frac{5(2)^0}{0!}\right)\left(e^{-2*5}\right)\right]$$
is this correct or am I think about this totally wrong?
|
Indeed $$P(Y>2)= 1 - [P(Y=0)+P(Y=1)+P(Y=2)]$$
Then if you mean $$1-\left[\frac{(5\cdot2)^2}{2!}e^{-2\cdot5} + \frac{(5\cdot2)^1}{1!}e^{-2\cdot5} + \frac{(5\cdot2)^0}{0!}e^{-2\cdot5}\right]$$ your answer is correct (be careful with the parentheses in your expression).
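For reference, the value is quick to compute; a sketch with SciPy (an assumed tool):

```python
# Sketch: the 5-second count is Poisson with mean lambda = 2 * 5 = 10.
from scipy.stats import poisson

lam = 2 * 5
print(1 - poisson.cdf(2, lam))                         # ~0.99723
print(1 - sum(poisson.pmf(k, lam) for k in range(3)))  # same, spelled out
```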
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving a differential equation $\displaystyle \frac{d \alpha}{dt}=w \times\alpha$ Let $\alpha$ be a regular curve in $\mathbb{R}^3$ such that $\displaystyle \frac{d \alpha}{dt}=w \times\alpha$ for $w$ a constant vector. How can we determine $\alpha$ ?
$\displaystyle w \times\alpha$ : cross product
Any hint would be appreciated.
|
You can write $w\times\alpha$ as $\Omega\alpha$, where $\Omega$ is an (antisymmetric) matrix. Then the problem reduces to a linear ODE.
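Concretely, the linear ODE is solved by the matrix exponential, $\alpha(t) = e^{t\Omega}\alpha(0)$, a rotation about $w$; a sketch in Python (assuming NumPy/SciPy, with an arbitrary $w$ and initial condition):

```python
# Sketch: build Omega from w (Omega @ v == np.cross(w, v)) and check the ODE.
import numpy as np
from scipy.linalg import expm

w = np.array([0.0, 0.0, 1.0])
Omega = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])

alpha0, t, h = np.array([1.0, 0.0, 0.5]), 0.3, 1e-6
alpha_t = expm(t * Omega) @ alpha0

deriv = (expm((t + h) * Omega) @ alpha0 - alpha_t) / h  # finite difference
print(deriv, np.cross(w, alpha_t))                      # the two agree
```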
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
ambidextrous mathematician. combinations problem Please help me solve this problem. At first it seemed to be easy, but I got stuck.
An ambidextrous mathematician with a very short attention span keeps two video game credit cards, one in each of her two front pockets. One game card has credit for 5 games. The other game card has credit for 4 games. The mathematician pays for a video game with a credit card selected from a random pocket and replaces the credit card once it is used to pay for the game.
a) What is the probability that when the mathematician uses the last credit from one of her two cards, then the other contains 4 credits?
b) 3 credits?
Thank you!
|
Hint: The problem is a natural for using a division into cases.
Without loss of generality we may assume that the $4$-game card is in the left pocket, and the $5$-game card in the right pocket.
Either (i) there are $4$ credits left on the $4$-card or (ii) $4$ left on the $5$-card.
Event (i) has probability $\frac{1}{2^5}$, since she must have gone to the right pocket $5$ times in a row.
Event (ii) can happen in $4$ ways, RLLLL, LRLLL, LLRLL, and LLLRL, so has probability $\frac{4}{2^5}$.
We leave the more difficult $3$ credits left problem to you. You can imitate the explicit listing of possibilities. But it may save some time to use machinery to count the relevant words in the alphabet consisting of the letters L and R. For larger numbers, we certainly would want to use machinery.
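If you want to check part (a) numerically before attempting (b), here is a minimal simulation sketch (the pocket labels and trial count are my own choices):

```python
import random

def trial():
    # 4-credit card in the left pocket, 5-credit card in the right
    credits = {'L': 4, 'R': 5}
    while True:
        p = random.choice('LR')          # pick a random pocket
        credits[p] -= 1                  # pay for one game
        if credits[p] == 0:              # last credit of this card used
            return credits['R' if p == 'L' else 'L']

n = 200_000
results = [trial() for _ in range(n)]
print(sum(r == 4 for r in results) / n)  # should approach 5/32 = 0.15625
```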
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Linear algebra questions that a high-schooler could explore Are there any deep/significant concepts in linear algebra that are not overly complicated that a high schooler could explore in depth?
|
A few ideas:
(1) Numerical Stuff:
Look at various methods of solving linear systems or inverting matrices. Study performance (the number of operations involved), and what sorts of things can go wrong numerically. Show that the naive textbook methods don't work very well in practice. See the linear system example in "Why A Math Book Is Not Enough" (Forsythe).
(2) Relations to 3D Geometry:
How different configurations of planes correspond with solutions of linear systems. Rank, determinants, etc. If you get through all of that, move on to eigenvectors and how they're related to "morphing" of shapes. Classification of conics (or even quadrics).
(3) Linear Programming:
How inequalities describe polyhedra. Optimal sets contain vertices. Graphical solutions in 2D and 3D. Convexity. The simplex method. I guess this isn't very "deep" but it is most certainly "significant" in the real world.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 2
}
|
Modular Arithmetic - Pirate Problem I was reading an example from my book, and I need further clarification because I don't understand some things. I'm just going to include the $f_1$ part in full detail because $f_2$ and $f_3$ are identical.
Consider the following problem.
Once upon a time, a band of seven pirates seized a treasure chest containing some gold coins (all of equal value). They agreed to divide the coins equally among the group, but there were two coins left over. One of the pirates took it upon himself to throw the extra coins overboard to solve the dilemma. Another pirate immediately dived overboard after the sinking coins and was never heard from again. After a few minutes, the six remaining pirates redivided the coins and found that there were three coins left. This time a fight ensued and one pirate was killed. Finally, the five surviving pirates were able to split the treasure evenly. What was the least possible number of coins in the treasure chest to begin with?
If x represents the original number of coins, then the first division can be represented by the congruence
$x \equiv 2 $ (mod $7)$ [I understand this part because originally we had 7 pirates and 2 left over coins]
The second and third division give the congruences
$x - 2 \equiv 3$ (mod $6$) and $x-2 \equiv 0 $ (mod $5$).
[The second division is the 2 coins that were thrown overboard, the pirate jumping into the ocean is the new mod, and there are 3 coins leftover. The third division is the original 2 coins thrown overboard, the 5 surviving pirates, and nothing is left over].
So the system of congruences is $x \equiv 2$ (mod 7), $x \equiv 5$ (mod 6), and $x \equiv 2$ (mod 5)
We solve the system by letting $x= 2f_1+5f_2+2f_3$
where $f_1,f_2$, and $f_3$ (to be determined soon ) satisfy
$f_1 \equiv 1$ (mod 7)
$f_2 \equiv 0$ (mod 7)
$f_3 \equiv 0$ (mod 7)
$f_1 \equiv 0$ (mod 6)
$f_2 \equiv 1$ (mod 6)
$f_3 \equiv 0$ (mod 6)
$f_1 \equiv 0$ (mod 5)
$f_2 \equiv 0$ (mod 5)
$f_3 \equiv 1$ (mod 5)
[Where are they getting this from?]
To compute $f_1$ we set $f_1 = 6 \times 5 \times b_1$ where $b_1$ satisfies the single congruence $ 6 \times 5 \times b_1 \equiv 1$ (mod 7)
[Ok. That 6 and 5 comes from mod 6 and mod 5, but why is there a 1 at the end of the equation?]
Note that $f_1$ is necessarily divisible by 6 and by 5 and is congruent to 1 modulo 7. Thus $f_1 \equiv 1$ (mod 7), $f_1 \equiv 0$ (mod 6), and $f_1 \equiv 0$ (mod 5)
To solve the congruence, reduce $ 6 \times 5$ modulo 7 to get $2b_1 \equiv 1$ (mod 7) [ $ 6 \times 5 = 30 $, so if I divide 30 by 7, I would get a remainder of 2 because $ 7 \times 4 = 28$ and $ 30-28=2$.]
Note that $b_1 = 4$ is a solution. [WAIT! Where did that 4 come from? Now I'm really confused. What are the steps?]
|
The innocent way. Reduce your work with the $\mathrm{\color{Red}{red}}$ equivalence between congruence systems.
$$\begin{cases}
x\equiv 2\pmod 7 \\
x\equiv 5 \pmod 6 \\
x \equiv 2 \pmod 5
\end{cases}\color{Red}{\iff}\begin{cases}
x\equiv 2\pmod {35} \\
x\equiv 5 \pmod 6
\end{cases}$$
From the first congruence, $x=35t+2$ for some $t$. This implies $$35t+2\equiv 2-t\equiv5\pmod 6\iff t\equiv 3\pmod 6.$$ Therefore, $t=6u+3$ for some $u$. All the solutions are then of the form $$x=35t+2=35(6u+3)+2=210u+107.$$ The least positive solution is $x_0=107.$
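(As for the step the book leaves unexplained: $b_1=4$ works simply because $2\cdot4=8\equiv1\pmod 7$; one finds it by testing $b_1=1,\dots,6$ or with the extended Euclidean algorithm.) A brute-force check of the answer above is also easy — a sketch:

```python
# every x in [1, 500) satisfying all three congruences
sols = [x for x in range(1, 500) if x % 7 == 2 and x % 6 == 5 and x % 5 == 2]
print(sols)  # [107, 317] -- consistent with x = 210u + 107
```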
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding value of x in logarithms?
Q) Find the value of $x$ in $2 \log x + \log 5 = 2.69897$
So far I got:
$$2 \log x + \log 5 = 2.69897$$
$$\Rightarrow \log x^2 + \log 5 = 2.69897 $$
$$\Rightarrow \log 5x^2 = 2.69897 $$
What should I do next?
Note: In this question $\log(x)$ denotes $\log_{10}(x)$; $\ln(x)$ is therefore used to denote the natural logarithm.
|
Raise the base of the logarithm to both sides. Then, you get $5 x^2 = b^{2.69897}$ where $b$ is the base of the logarithm (probably $b=10$). Then, solve for $x$ by dividing by $5$ and taking square roots.
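In this particular problem the numbers work out nicely: since $\log 5 \approx 0.69897$, we have $2.69897 \approx \log(100\cdot 5)=\log 500$, so $5x^2=500$, $x^2=100$, and $x=10$ (taking the positive root, since $\log x$ requires $x>0$).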
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Why does $\sum_{k=0}^\infty\frac{r^k(k+n)!}{k!}=\frac{n!}{(1-r)^{n+1}}$? When I put the following series in Mathematica, I get an answer:
$$\sum_{k=0}^\infty\frac{r^k(k+n)!}{k!}=\frac{n!}{(1-r)^{n+1}}$$
Here $0<r<1$ and $n$ is a non-negative integer.
My question is: how does one arrive at this solution (without the use of Mathematica)? I'd like to learn how to solve this by hand... The $\frac{(k+n)!}{k!}$ looks like a Pochhammer symbol $(k+1)^{(n)}$...
|
Hint: $\dfrac{(k+n)!}{k!}=n!\cdot\dfrac{(k+n)!}{n!\cdot k!}=n!\cdot\Large{n+k\choose k}$. Now see binomial series.
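Spelling the hint out with the generalized binomial theorem: since $\binom{-(n+1)}{k}(-1)^k=\binom{n+k}{k}$,
$$\sum_{k=0}^\infty\binom{n+k}{k}r^k=\sum_{k=0}^\infty\binom{-(n+1)}{k}(-r)^k=(1-r)^{-(n+1)},$$
and multiplying through by $n!$ gives the stated identity.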
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/704997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to prove this inequality $ x + \frac{1}{x} \geq 2 $ I was asked to prove that:
$$x + \frac{1}{x}\geqslant 2$$
for all values of $ x > 0 $
I tried substituting random numbers into $x$ and I did get the answer greater than $2$. But I have a feeling that this is an unprofessional way of proving this. So how do I prove this inequality?
|
Hint: $x^2 -2x + 1 = (x-1)^2 \ge 0$. If you do not see it, divide the inequality through by $x$.
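Written out, for $x>0$:
$$x+\frac1x-2=\frac{x^2-2x+1}{x}=\frac{(x-1)^2}{x}\ge0,$$
with equality exactly when $x=1$.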
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 20,
"answer_id": 7
}
|
Cauchy's Theorem and maximum modulus principle Suppose $f\in H(\Omega)$, $\Gamma$ is a cycle in $\Omega$ such that $Ind_{\Gamma}(\alpha)=0$ for all $\alpha \notin \Omega$, $|f(\zeta)|\leq 1$ for every $\zeta \in \Gamma$,
and $Ind_{\Gamma}(z) \neq 0$. Prove that $|f(z)|\leq 1$.
Cauchy's Theorem implies:
$$ |f(z)\cdot Ind_{\Gamma}(z)|=\frac{1}{2\pi }|\int_{\Gamma}\frac{f(\omega)}{\omega-z}dz|$$
But I can't estimate the integral.
I will appreciate your help.
|
Oh, I found the answer.
$Ind_{\Gamma}(\alpha)=0$ for all $\alpha \notin \Omega$ implies:
the union of $\Gamma$ and the components of the complement of $\Gamma$ on which $Ind_{\Gamma}\neq 0$ is contained in $\Omega$. This is the point of the question. Then the boundary of the component containing $z$ is $\subset \Gamma$, and the maximum modulus principle implies $|f(z)|\leq 1$.
There is an example showing the importance of $Ind_{\Gamma}(\alpha)=0$ for all $\alpha \notin \Omega$:
Suppose $\Omega=\{1/2<|z|<2\}$, $\Gamma$ is the positively oriented circle with center at $0$ and radius $1$, and $f=1/z$.
If we follow the answer above, we find that the boundary of the union contains the circle $|z|=1/2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Decomposing real line as a union of a nullset and a set of first category $\Bbb R$ can be written in the form $A\cup B$ such that $A$ is of measure zero and $B$ is of the first category!
Can anybody prove this?
I guess $A$ must be a $G_{\delta}$ set which is dense in $\Bbb R$, and obviously $B=\Bbb R-A$.
|
Enumerate the rational numbers as a sequence $\{ r_n;\; n\in\mathbb N\}$. For each $n\in\mathbb N$ and all $j\in\mathbb N$, set $$I_{n,j}:=\left] r_n-\frac{1}{j}\, 2^{-n}, r_n+\frac{1}j\, 2^{-n}\right[\, .$$
Then define
$$O_j:=\bigcup_{n\in\mathbb N} I_{n,j}\, , $$
and
$$ A:=\bigcap_{j\in\mathbb N} O_j\, .$$
Each $O_j$ is an open set containing all the $r_n$, so $A$ is a $G_\delta$ set containing all the $r_n$ and hence a dense $G_\delta$ set. Moreover, denoting by $\mu$ the Lebesgue measure on $\mathbb R$, we have
$$\mu(A)\leq\mu(O_j)\leq\sum_{n=1}^\infty \mu(I_{n,j})=\sum_{n=1}^\infty \frac{2}j\, 2^{-n}=\frac{2}j $$
for all $j\in\mathbb N$, so that $\mu(A)=0$. Finally, $B:=\mathbb R\setminus A=\bigcup_{j\in\mathbb N}(\mathbb R\setminus O_j)$, and each $\mathbb R\setminus O_j$ is closed with empty interior (since $O_j$ is a dense open set), so $B$ is of the first category; this gives the desired decomposition $\mathbb R=A\cup B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Open and connected in $R^n$ revised I am trying to understand the following: If we have an open and connected set in $R^n$ then it can be connected with line segments parallel to the axes.
I managed to prove this:
If a set $U$ is open and connected in $\mathbb{R}^n$ then we can prove it is polygonally connected(there is a path formed from line segments completely contained in $U$).
My question now is how would I modify the path such that the line segments remain in $U$ and they are now parallel to the axes?
I would very much appreciate some help.
Thank you!
|
First note that for any cube $C=[-r,r]^n\subseteq\mathbb R^n$ any point $c=(c_0, c_1,\ldots c_{n-1})\in C$ is polygonally connected to the center of $C$ along the axes.
$$(0,0,0,\ldots,0)\to(c_0,0,0, \ldots, 0)\to(c_0,c_1,0,\ldots0)\to\ldots\to(c_0,c_1,\ldots c_{n-1})$$
Let $G$ be any nonempty open connected set in $\mathbb R^n$ and let $a\in G$.
Now set $A=\{g\in G\mid\text{$g$ is polygonally connected to $a$ along the axes}\}$.
Note that for any $b\in A$ there's a cube $C=b+[-r,r]^n\subseteq G$ because $G$ is open, so $C\subseteq A$. This means $A$ is open.
And for any $b\in\overline A\cap G$ there is a cube $C=b+[-r,r]^n\subseteq G$, so there's also a point $c\in A\cap C$, so $b\in A$. This means $A$ is closed in $G$.
Therefore $A=G$, because it's a nonempty clopen subset of a connected set $G$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Multiplication of infinite series Why multiplication of finite sums $(\sum_{i=0}^n a_i)(\sum_{i=0}^n b_i)=\sum_{i=0}^n (\sum_{j=0}^ia_jb_{i-j})$ (EDIT: This assumption was shown to be false) does not work in infinite case? I have constructed proof which shows it does but it must hase some flaw which I can not find. Here goes the proof:
By definition infinite series is just limit $\sum_{n=0}^\infty a_n = \lim_{n \to \infty } \sum_{i=0}^n a_i$. So using this definition, multiplication of finite sums and distributivity of $\lim$:
$$\left(\sum_{n=0}^\infty a_n\right)\left(\sum_{i=0}^\infty b_i\right) =
\lim_{n\to\infty}\sum_{i=0}^n a_i \cdot\lim_{n \to \infty } \sum_{i=0}^n b_i =
\lim_{n \to \infty } \sum_{i=0}^n a_i \sum_{i=0}^n b_i $$
$$= \lim_{n \to \infty } \sum_{i=0}^n \sum_{j=0}^ia_jb_{i-j} = \sum_{i=0}^\infty \sum_{j=0}^ia_jb_{i-j}$$
|
As @Claude has stated for the simpler cases, here it is for 4 elements in $a$ and in $b$. The correct sum is the sum over all elements of the ("outer"(?)) product $C$ of the two vectors
$$ A^T \cdot B=C= \small \begin{array} {r|rrrr}
& b_0 & b_1 & b_2 & b_3 \\
\hline
a_0 & a_0b_0 & a_0b_1 & a_0b_2 & a_0b_3 \\
a_1 & a_1b_0 & a_1b_1 & a_1b_2 & a_1b_3 \\
a_2 & a_2b_0 & a_2b_1 & a_2b_2 & a_2b_3 \\
a_3 & a_3b_0 & a_3b_1 & a_3b_2 & a_3b_3
\end{array}
$$
But your second expression which refers to the antidiagonals sums only the antidiagonals up to (and including) $a_3b_0 ... a_0 b_3$ but not the remaining antidiagonals.
For the infinite cases this can only be correct if the sum of the whole remaining lower triangle beyond
$$ C^*= \small \begin{array} {r|rrrr}
& b_0 & b_1 & b_2 & b_3 \\
\hline
a_0& & & & \\
a_1 & & & & a_1b_3 \\
a_2& & & a_2b_2 & a_2b_3 \\
a_3& & a_3b_1 & a_3b_2 & a_3b_3
\end{array}
$$
would be negligible (converges to zero). But notice that its size also grows without bound as $n$ tends to infinity...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
How to tell if a code is lossless Consider the following code mapping:
$$a \mapsto 010, \quad b\mapsto 001, \quad c\mapsto 01$$
It's easy to see that the code isn't lossless by observing the code $01001$, which can be translated to "ac" or "cb".
Given a general code, how can you tell if it's lossless or not? I don't think trial and error is the best approach (though for small examples like the one I presented it's enough).
|
That's kind of hard for a general code, but you can use the Sardinas-Patterson algorithm.
The algorithm generates all possible "dangling suffixes" and checks to see if any of them is a codeword. A dangling suffix is the bits that are "left over" when you compare two similar sequences of your code. If you want your code to be decodable, there can be no dangling suffixes that are codewords.
In your example, if we compare "ac" and "c" we get
ac = 01001
c = 01
As you can see, ac is three bits longer than c, those bits (001) are the dangling suffix. And since 001 is also a codeword, the code is not uniquely decodable.
In a real life scenario though, we would probably use prefix codes instead of general codes since they are just as effective, faster and guaranteed to be uniquely decodable.
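For completeness, here is a minimal sketch of the Sardinas-Patterson test for codes given as strings (the function and variable names are my own):

```python
def is_uniquely_decodable(codewords):
    """Sardinas-Patterson test: True iff no dangling suffix is a codeword."""
    C = set(codewords)

    def dangling(A, B):
        # suffixes s with u = v + s for u in A, v in B, u != v
        return {u[len(v):] for u in A for v in B if u != v and u.startswith(v)}

    S = dangling(C, C)   # initial dangling suffixes from pairs of codewords
    seen = set()
    while S:
        if S & C:        # a dangling suffix equals a codeword: not decodable
            return False
        seen |= S
        S = (dangling(C, S) | dangling(S, C)) - seen
    return True

print(is_uniquely_decodable(["010", "001", "01"]))  # False, as in the example
```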
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 0
}
|
Prove that {$a ∈ ℤ : a ≤ k$} has a greatest element How can I prove that the set {$a ∈ ℤ : a ≤ k$}, where $k∈ℝ$, has a greatest element?
I have tried using the Well-ordering theorem in order to get a contradiction but I'm having trouble with my approach.
Thanks.
|
Hint: The floor function: $\lfloor k \rfloor$. For example, if $k$ were $2.1$, then the greatest element of your set would be $2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Tangent spaces and $\mathbb{R}^n$ The tangent space of a circle is a line.
The tangent space of a sphere (in every point) can be thought of as a plane.
Is this a general thing? I mean, having an $n$ dimensional Riemannian manifold, can the tangent space in every point be thought as $\mathbb{R}^n$?
If the answer is yes, does this happen as well with the Lorentzian manifolds of GR? Can the tangent space of any space-time always be regarded as a Minkowski space?
|
If $M$ is a smooth $n$-dimensional manifold, then for each $p\in M$ the tangent space $T_p M$ is an $n$-dimensional real vector space. This tangent space is therefore isomorphic to $\mathbb R^n$ as a real vector space, though not in a "natural" way, in the sense that $T_pM$ does not have a distinguished basis corresponding to the usual basis of $\mathbb R^n$.
This is true for any smooth manifold, regardless of whether it is equipped with a Riemannian metric or a Lorentzian metric.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Find the center of a specific group The group $G$ is generated by the two elements $\sigma$ and $\tau$, of order $5$ and $4$ respectively. We assume that $\tau\sigma\tau^{-1}=\sigma^2$.
I have shown the following:
* $\tau\sigma^k\tau^{-1}=\sigma^{2k}$ and $\tau^k\sigma\tau^{-k}=\sigma^{2^k}$.
* $\langle\sigma\rangle$ is a normal subgroup of $G$, and $\langle\sigma\rangle\cap\langle\tau\rangle=\{e\}$.
* $G/\langle\sigma\rangle=\langle\tau\langle\sigma\rangle\rangle$.
* $G$ is of order $20$ and every element $g$ in $G$ may be written uniquely in the form $g=\sigma^k\tau^m$, where $0\le k<5$ and $0\le m<4$.
* The commutator subgroup $[G,G]=\langle\sigma\rangle$.
What remains is to find the center $Z(G)$ of $G$. Any suggestions on how to proceed? Thank you.
|
You have a normal subgroup of order $5$. Your calculations are already sufficient to show that the elements of this group other than the identity don't commute with $\tau$, or indeed any of its powers. So $\tau, \tau^2, \tau^3$ are not in the centre. $1=\tau^0=\tau^4$ is of course in the centre.
Suppose we have an element $\rho$ which is in the centre, and therefore does commute with $\tau$. You have shown that $\rho = \sigma^k\tau^m$ so that $$\rho \tau =\sigma^k\tau^{m+1}$$
$$\tau\rho=\tau\sigma^k\tau^m=\sigma^{2k}\tau^{m+1}$$
For these to be equal you need $k=0$ - so the only possible central elements other than $1$ are the powers of $\tau$ which have already been excluded.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
$\mathbb Z[\sqrt {-5}]$ is Noetherian I'm trying to prove that $\mathbb Z[\sqrt {-5}]$ is Noetherian. I already know that $\mathbb Z[X]$ is Noetherian and I'm trying to find a surjective map
$$\varphi: \mathbb Z[X]\to \mathbb Z[\sqrt{-5}]$$
with $\ker\varphi=(X^2+5)$.
If I could find this map I could prove that $\mathbb Z[\sqrt{-5}]\cong \mathbb Z[X]/(X^2+5)$ and then $\mathbb Z[\sqrt{-5}]$ is Noetherian.
Thanks
|
Define $\varphi:\mathbb Z[X]\to\mathbb Z[\sqrt{-5}]$ by $\varphi(f)=f(\sqrt{-5})$. It's that simple.
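For completeness, the two checks left implicit: $\varphi$ is surjective since $a+b\sqrt{-5}=\varphi(a+bX)$, and $\ker\varphi=(X^2+5)$. Indeed $X^2+5\in\ker\varphi$; conversely, if $f(\sqrt{-5})=0$, division with remainder by the monic polynomial $X^2+5$ (which works over $\mathbb Z$) gives $f=(X^2+5)q+aX+b$ with $a,b\in\mathbb Z$, whence $a\sqrt{-5}+b=0$ and therefore $a=b=0$, so $f\in(X^2+5)$.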
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
}
|
Find the greatest common divisor (gcd) of $n^2 - 3n - 1$ and $2$ Find the greatest common divisor (gcd) of $n^2 - 3n - 1$ and $2$ considering that $n$ is an integer. Thanks.
|
Hint $\ $ One of $\,\ n,\,\ n\!-\!3\,$ is even so $\ n(n\!-\!3)-1\,$ is odd, so coprime to $\,2.$
Alternatively $\,2\nmid f(n)=n^2-3n-1\,$ since $f$ has no roots mod $\,2\!:\ f(0)\equiv 1\equiv f(1),\,$ which is a special case of the Parity Root Test.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
A function vanishing at infinity is uniformly continuous If $f\in C_0(\mathbb{R})$ (i.e. $f$ continuous and for all $\varepsilon>0$ there is $R>0$ such that $|f(x)|<\varepsilon$ whenever $|x|>R$), then why is $f$ uniformly continuous? I know that we should somehow use that $f$ is "small" outside a compact interval (on which it is uniformly continuous), how can we nicely write down the $\delta$?
|
We shall use the fact that a continuous function in a closed interval is uniformly continuous.
Let $f\in C_0(\mathbb R)$ and $\varepsilon>0$. We shall find a $\delta>0$, such that $\lvert x-y\rvert<\delta$ implies that $\lvert f(x)-f(y)\rvert<\varepsilon$.
As $\lim_{\lvert x\rvert\to\infty}f(x)=0$, there exists an $M>0$, such that $\lvert x\rvert>M$, implies that $\lvert f(x)\rvert<\varepsilon/3$.
Since $f$ is uniformly continuous in $[-M,M]$, there exists a $\delta>0$, such that
$x,y\in [-M,M]$ and $\lvert x-y\rvert<\delta$ implies that $\lvert f(x)-f(y)\rvert<\varepsilon/3$.
Now let $x,y\in\mathbb R$, with $\lvert x-y\rvert<\delta$.
Case I. $x,y\in [-M,M]$. Then indeed
$$\lvert f(x)-f(y)\rvert<\varepsilon/3<\varepsilon.$$
Case II. $|x|,|y|>M$. Then $\lvert f(x)\rvert<\varepsilon/3$ and
$\lvert f(y)\rvert<\varepsilon/3$ and hence $\lvert f(x)-f(y)\rvert<2\varepsilon/3<\varepsilon.$
Case III. $x\in [-M,M]$ and $\lvert y\rvert>M$. Assume that $y>M$. (The case where $y<-M$ is treated similarly.)
In this case
$$x\le M<y,$$ and hence $$|x-M|<|x-y|<\delta.$$
Also
$\lvert f(y)\rvert<\varepsilon/3$ and thus
$$
\lvert\, f(x)-f(y)\rvert\le \lvert\, f(x)-f(M)\rvert +\lvert\, f(M)-f(y)\rvert \\ \le \lvert\, f(x)-f(M)\rvert +\lvert\, f(M)\rvert+\lvert\, f(y)\rvert <\frac\varepsilon 3+
\frac\varepsilon 3+\frac\varepsilon 3=\varepsilon.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/705961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
}
|
Proving if $\gcd(c,m)=1$ then $\{x\in \Bbb Z \mid ax\equiv b \pmod m\} =\{x\in \Bbb Z \mid cax\equiv cb \pmod m\}$ Okay so I'm confused on how to approach this question.
If $\gcd(c,m)=1$, then $S=T$ where $S=\{x\in \Bbb Z \mid ax\equiv b \pmod m\}$ and $T=\{x\in \Bbb Z \mid cax\equiv cb \pmod m\}$.
I know that since $c$ and $m$ are coprime, then there exists two integers $y$ and $z$ such that $cy+mz=1$. Also, I know that to prove $S = T$, I need to show that $S\subseteq T$ and $T\subseteq S$.
But I'm stuck here and don't know how to proceed. Any help would be really appreciated. Thanks!
|
It might be worth stating and proving Bézout here. Bézout's lemma says that if $(a,b)=1$, then there exist $x,y$ such that $ax+by=1$. To prove this, consider the set $S:=\{d>0\mid \exists x,y,\ ax+by=d\}$. Let $d_0$ be the minimal element of this set and use the division algorithm on $a$ and $d_0$ to find out that $d_0\mid a$ (specifically, the remainder is of the form of elements of $S$ but is smaller than the least element of $S$, so the remainder is zero). Then repeat with $b$ and conclude $d_0=1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/706029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Find to which $( \forall x)$ , each occurrence of x belongs to. (logic) Find to which $( \forall x) $, each occurrence of x belongs to.
$$ (\forall x)((\forall x)(\forall y)\ x < y \lor x > z ) \rightarrow (\forall y)\ y=x $$
Is it right that the third and fourth occurrence of x belongs to the second occurrence of $ \forall x $ and the last occurrence of x belongs to the first occurrence of $\forall x $ ? ( I count the occurrences from left to right)
Is the following formula calculation right ?
$$ y=x,(\forall y\ y=x),x<y,\ x>z,\ x<y\ \lor x>z,\ ((\forall y)\ x<y\ \lor\ x>z), ((\forall x)(\forall y)\ x<y\ \lor\ x>z), ((\forall x)((\forall x)(\forall y)\ x < y \lor x > z ) \rightarrow (\forall y)\ y=x )$$
|
With this particular notation, there are two conventions (let the example be $(\forall x)\phi \star \psi$)
*
*quantifier binds as far as it can (the example becomes $(\forall x)(\phi \star \psi)$),
*quantifier binds only the closest subexpression (while here it is $\big((\forall x)\phi\big) \star \psi$).
You are correct in the first, but wrong in the second. You should check the notes on which one you need to use. One nice way to match variables with quantifiers it to number them, e.g.
$$(\forall x_1)\big((\forall x_2)(\forall y_3)\ x_2 < y_3 \lor x_2 > z_4 \big) \rightarrow (\forall y_5)\ y_5=x_1, \tag{1}$$
$$(\forall x_1)\big((\forall x_2)(\forall y_3)\ x_2 < y_3 \lor x_{\color{red}{1}} > z_4 \big) \rightarrow (\forall y_5)\ y_5=x_{\color{red}{6}}. \tag{2}$$
As for the second part, I would recommend drawing a tree, perhaps like this:
*
*$(\forall x)\big((\forall x)(\forall y)\ x < y \lor x > z \big) \rightarrow (\forall y)\ y=x$
*
*$(\forall x)(\forall y)\ x < y \lor x > z$
*
*$(\forall y)\ x < y \lor x > z$
*
*$x < y \lor x > z$
*
*$x < y$
*$x > z$
*$(\forall y)\ y=x$
*
*$y = x$
Going back bottom-up you will get the sequence of terms from your post. Also, with this particular notation you need to be careful with parentheses, e.g. you have a term $(\forall y\ y = x)$, which is confusing because now the reader does not know which convention you try to follow.
I hope this helps $\ddot\smile$
Edit: Some clarification after the comments.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/706105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Given $p(x)$ is a polynomial with integer coefficients and that $p(a)=1$ for some integer $a$ prove that $p(x)$ has no more than two integral roots. Given $p(x)$ is a polynomial with integer coefficients and that $p(a)=1$ for some integer $a$ prove that $p(x)$ has no more than two integral roots.
I've attempted a proof by contradiction assuming $p(x)$ has three or more roots, but haven't gotten anywhere on this. Help would be appreciated!
|
For this to be true, we need to specify that $p$ has integer coefficients: without this assumption, $p(x) = \frac16x(x-1)(x+1)$ is a counterexample, with roots at $-1,0,$ and $1$, but $p(2)=1$.
Suppose a polynomial $p(x)$ with integer coefficients has three or more distinct integral roots. This means that $p(x) = (x-a_0)(x-a_1)(x-a_2)q(x)$, and $q(x)$ also has integer coefficients (and so takes on integer values). For $p(x)$ to equal $1$ for some $x$, we then have three cases:
*
*At least two of $x-a_0,x-a_1,$ and $x-a_2$ are equal to $-1$, and both the third and $q(x)$ are equal to either $1$ or $-1$
*$q(x)$ and one of $x-a_0,x-a_1,$ and $x-a_2$ are equal to $-1$, and the other two are equal to $1$.
*All four of the factors are equal to $1$.
This is because, up to reordering, the only ways to write $1$ as a product of four integers are $$1 = (-1)^4 = (-1)^2 \cdot (1)^2 = (1)^4.$$
In all three of these cases, we have $x-a_i = x-a_j$ for some $i\neq j$, so $a_i = a_j$ and the roots are not distinct.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/706164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
How is this trig identity equal? I do not understand how this is equal.
$$
{\cos\theta(\cos\theta-1)\over 1-\cos\theta} = -\cos\theta
$$
What simplification step am I missing? Thanks.
|
$${\cos\theta(\cos\theta-1)\over 1-\cos\theta} = -\cos\theta $$
$$\iff \cos\theta(\cos\theta-1)=-\cos\theta(1-\cos\theta)$$
$$\iff \cos^2\theta - \cos\theta = \cos^2\theta - \cos\theta $$
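Alternatively, one can simplify directly instead of verifying: since $\cos\theta-1=-(1-\cos\theta)$, for $\cos\theta\neq1$
$$\frac{\cos\theta(\cos\theta-1)}{1-\cos\theta}=\frac{-\cos\theta(1-\cos\theta)}{1-\cos\theta}=-\cos\theta.$$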
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/706231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What is the probability that $HH$ occurs before $TH$ in an infinte sequence of coin flips? This is one of the questions of a set of exam review questions that don't have solutions to them. I can't get my head around this but it seems so simple.
By flipping a fair coin repeatedly and independently, we obtain a sequence of
H's and T's. We stop flipping the coin as soon as the sequence contains either HH or TH.
Two players play a game, in which Player 1 wins if the last two symbols in the sequence
are HH. Otherwise, the last two symbols in the sequence are TH, in which case Player 2
wins.
A = "Player 1 wins"
and
B = "Player 2 wins."
Determine Pr(A) and Pr(B)
|
First flip is either heads or tails. If the second flip is heads we have a winner no matter what. If the second flip is tails we have no winner, but it follows that Player 2 must win. Why?
Flip three is either heads or tails. If it is heads, player 2 wins. Tails, no one wins. Flip four and each afterward either results in a heads and player 2 wins or a tails and no one wins. Given the last flip was tails, HH will never occur before TH.
Using this information, we have a $\frac{1}{2}$ chance of the game ending on the second flip. Assuming it ends on the second flip, each player wins $\frac{1}{2}$ of the time (HH or TH). If the game does not end on the second flip, Player 2 wins.
This must mean $\operatorname{Pr}(A) = \frac{1}{4}$ and $\operatorname{Pr}(B) = \frac{3}{4}$.
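A quick simulation sketch confirming this (the names and trial count are my own):

```python
import random

def play():
    # flip until the last two flips form HH or TH; return the winner
    prev = random.choice('HT')
    while True:
        cur = random.choice('HT')
        if cur == 'H':                      # any H after the first flip ends the game
            return 1 if prev == 'H' else 2  # HH -> player 1, TH -> player 2
        prev = cur

n = 100_000
print(sum(play() == 1 for _ in range(n)) / n)  # should approach 0.25
```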
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/706293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
Calculating the limit $\lim((n!)^{1/n})$ Find $\lim_{n\to\infty} ((n!)^{1/n})$. The question seemed rather simple at first, and then I realized I was not sure how to properly deal with this at all. My attempt: take the logarithm,
$$\lim_{n\to\infty} \ln((n!)^{1/n}) = \lim_{n\to\infty} (1/n)\ln(n!) = \lim_{n\to\infty} (\ln(n!)/n)$$
Applying L'hopital's rule:
$$\lim_{n\to\infty} \frac{n! \left(-\gamma + \sum 1/k\right)}{n!} = \lim_{n\to\infty} \left(-\gamma + \sum 1/k\right) = \lim_{n\to\infty} \left(-\left(\sum 1/k - \ln n\right) + \sum 1/k\right) = \lim_{n\to\infty} \left(\ln n + \sum 1/k - \sum 1/k\right) = \lim_{n\to\infty} \ln n$$
I proceeded to expand the $\ln(n)$ out into Maclaurin form
$$\lim_{n\to\infty} (n + (n^2/2)+...) = \infty$$
Since I $\ln$'ed in the beginning, I proceeded to e the infinity
$$= e^\infty
= \infty$$
So am I right in how I approached this, or am I just not on the right track? I know it diverges; I just wanted to try my best to explicitly show it.
|
Let $n\in \Bbb N$. By definition $$\left[\tfrac n2\right]\leq \tfrac n2<\left[\tfrac n2\right]+1.$$
Then $n!=1\cdot 2\cdots\left[\tfrac n2\right]\cdot \left(\left[\tfrac n2\right]+1\right)\cdots n>\left(\tfrac n2\right)^{n-[n/2]}\ge\left(\tfrac n2\right)^{n/2}$, since each of the $n-\left[\tfrac n2\right]\ge\tfrac n2$ factors beyond $\left[\tfrac n2\right]$ exceeds $\tfrac n2$. Hence $(n!)^{\frac 1n}>\left(\tfrac n2\right)^{\frac 12}\to \infty$, thus $(n!)^{\frac 1n}\to \infty.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/706461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 7,
"answer_id": 6
}
|
Proving that $\dim(\mathrm{span}({I_n,A,A^2,...})) \leq n$ Let $A$ be an $n\times n$ matrix. Prove that $\dim(\mathrm{span}({I_n,A,A^2,...})) ≤ n$
I'm at a total loss here...
Can someone help me get started?
|
The following observations suffice to prove the statement:
*
*A power $A^k$ is in the span of lower powers $A^0,\ldots,A^{k-1}$ if and only if there exists a (monic) polynomial$~P$ of degree$~k$ with $P[A]=0$.
*If this happens for some $k=m$, it also happens for all $k>m$, so that by an immediate induction argument, $\operatorname{span}(A^0,\ldots,A^{m-1})$ contains all powers of$~A$ (and of course it has dimension${}\leq m$).
*There exists a monic polynomial$~P$ of degree${}\leq n$ with $P[A]=0$.
Point 1. is fairly obvious; it suffices to consider an equation expressing $A^k$ as linear combination of $A^0,\ldots,A^{k-1}$, and to bring all its terms to the same side of the equation as$~A^k$. Point 2. is also easy, since it suffices to multiply the polynomial $P$ by a power$~X^d$ to raise its degree; the result still annihilates$~A$, since $(X^dP)[A]=A^d\circ (P[A])=0$ by linearity of$~A$. (Alternatively one could argue for the remainder $R$ of $X^k$ after division by$~P$ that $A^k=R[A]\in\operatorname{span}(A^0,\ldots,A^{m-1})$, since $\deg R<m$.) For point 3. the Cayley-Hamilton theorem says the characteristic polynomial $P=\chi_A$, which has degree $n$, can be chosen.
Added.
When using Cayley-Hamilton, I always wonder if one could also do it by more elementary means (because the proof of C-H is somewhat subtle). For point 3. this is in fact the case.
One can use strong induction on the dimension. The inductive hypothesis takes the following concrete form: for any $A$-stable subspace $V$, there exists a monic polynomial $P\in K[X]$ with $\deg P\leq\dim V$ such that $V\subseteq\ker(P[A])$. By expressing the restriction of $A$ to$~V$ on a basis, this is just point 3. for that restriction. So assume this result holds for any proper subspace. If $n=0$ one can take $P=1$, so also assume $n>0$ and choose a nonzero vector $v\in K^n$. The $n+1$ vectors $v,Av,A^2v,\ldots,A^nv$ are certainly linearly dependent, so one can take $d$ minimal such that $v,Av,A^2v,\ldots,A^dv$ are linearly dependent. Then $A^dv$ is a linear combination of the preceding vectors, and this gives a polynomial $R$ of degree$~d$ such that $R[A]v=0$. But $\ker(R[A])$ is $A$-stable so $v,Av,A^2v,\ldots,A^{d-1}v\in\ker(R[A])$, and since these vectors are linearly independent, one obtains $\dim\ker(R[A])\geq d$. Putting $V=\operatorname{Im}(R[A])$ the rank-nullity theorem gives $\dim V\leq n-d$, and since $V$ is $A$-stable, the induction hypothesis gives a polynomial $Q$ with $\deg Q\leq n-d$ such that $V\subseteq\ker(Q[A])$. The latter means that $0=Q[A]\circ R[A]=(QR)[A]$; then taking $P=QR$ works, since one gets $\deg P=\deg Q+\deg R\leq(n-d)+d=n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/706530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Evaluating $\int_{0}^{\infty}\frac{1-e^{-t}}{t}\sin{t}\operatorname d\!t$ Find this integral
$$I=\int_{0}^{\infty}\dfrac{1-e^{-t}}{t}\sin{t}\operatorname d\!t$$
I know this
$$\int_{0}^{\infty}\dfrac{\sin{t}}{t}\operatorname d\!t=\dfrac{\pi}{2}$$ But I can't find this value. Thank you.
|
Since you know that
$$\int_0^\infty \frac{\sin t}tdt=\frac\pi2$$
so it suffices to find
$$\int_0^\infty\frac{e^{-t}}t\sin tdt$$
so let
$$f(x)=\int_0^\infty\frac{e^{-t}}t\sin (xt)dt=\int_0^\infty h(x,t)dt$$
so using Leibniz theorem and since
$$\left|\frac{\partial h}{\partial x}(x,t)\right|\le e^{-t}\in L^1((0,\infty))
$$
so we have
$$f'(x)=\int_0^\infty\cos(xt)e^{-t}dt=\frac1{1+x^2}$$
Now since $f(0)=0$ then we find
$$f(x)=\arctan x$$
and we have the desired result by taking $x=1$: $$I=\frac\pi2-f(1)=\frac\pi2-\arctan 1=\frac\pi2-\frac\pi4=\frac\pi4.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/706642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Optimization for fat line intersecting most points Let's say I have a bunch of $(X,Y)$ points in 2D space. I want to find the line $y=mx+b$ which intersects the most points. We can add some kind of buffer (a delta) so if the line $y=mx+b$ is within delta of the point, then it also intersects the point.
I've never taken any optimization theory, but I'd assume this is a pretty simple optimization problem. I'm having some trouble formalizing the objective function to maximize/minimize, so any help with that would be awesome.
Thanks,
Michael
|
Your optimization problem can be formulated as
$$\min_{m,b \in \mathbb{R}} \|Y-mX-b\|_0,$$
where $\|\cdot\|_0$ is the $\ell^0$ semi-norm defined as $\|v\|_0 = \#\{v_i \neq 0\}$ (See here).
The only drawback is that $\|\cdot\|_0$ is not convex, thus you cannot employ the classical convex optimization tools.
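Since the $\ell^0$ objective is combinatorial anyway, a common practical workaround (one heuristic, not the unique answer to the question) is to try the line through every pair of points and keep the one with the most points within $\delta$. A minimal sketch:

```python
import itertools
import numpy as np

def best_line(points, delta):
    """points: (N, 2) array; returns (m, b, inlier_count).
    Vertical candidate lines are skipped for simplicity."""
    best = (0.0, 0.0, -1)
    for (x1, y1), (x2, y2) in itertools.combinations(points, 2):
        if x1 == x2:
            continue  # y = mx + b cannot represent a vertical line
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # perpendicular distance from each point to mx - y + b = 0
        d = np.abs(m * points[:, 0] - points[:, 1] + b) / np.hypot(m, 1.0)
        count = int((d <= delta).sum())
        if count > best[2]:
            best = (m, b, count)
    return best
```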
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/706734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Invertibility of a function in Z/m - Does what I have written work? Ok, so I know when $(a, m) = 1$, by Euler's Theorem, $a^{\phi(m)} \equiv 1 \mod m$. Since $\phi(323) = 288$, $a^{288} \equiv 1 \mod m$ when $(a, 323) = 1$. However, there are some elements $a$ such that $(a, 323) \not= 1$ and $a^{288} \not\equiv 1 \mod 323$. Since those elements do not have multiplicative inverses in $\mathbb{Z}/323$, how is it working that $x^n$ is invertible? Am I missing something?
Exercise I.8. Prove that $f : \mathbb{Z}/323 \to \mathbb{Z}/323$ given by $f(x) = x^n$ is a bijective map if $(n, 6) = 1$.
Proof. Assume that $(n, 6) = 1$. Then $2 \not\mid n$ and $3 \not\mid n$. Let $f(x) = x^n$. Now by Theorem 9.3, $\phi(323) = \phi(17 \cdot 19) = (17-1)(19-1) = 16 \cdot 18 = 288 = 2^5 \cdot 3^2$. We need $x$ such that $nx \equiv 1 \mod 288$. Since $2 \not\mid n$ and $3 \not\mid n$, $(n, 288) = (n, 2^5 \cdot 3^2) = 1$. Then by Lemma 5.2, $nx \equiv 1 \mod 288$ has exactly one solution. That is, $n^{-1}$ exists in $\mathbb{Z}/288$. Then $f^{-1}(x) = x^{n^{-1}}$ since $f^{-1}(f(x)) = f^{-1}(x^n) = (x^n)^{n^{-1}} = x^{n \cdot n^{-1}} \equiv x \mod 323$. Since $f$ is invertible, $f$ is bijective. $\blacksquare$
(Image version)
|
Note that $f$ being invertible doesn't mean that all elements of $\mathbb{Z} / 323\mathbb{Z}$ are invertible. That $f$ is an invertible map just means that there is an inverse map $g$ such that
$$\begin{align}
f \circ g &= 1 \\
g \circ f &= 1.
\end{align}
$$
That is: $f(g(x)) = x$ for all $x \in \mathbb{Z} / 323\mathbb{Z}$ and $g(f(x)) = x$ for all $x \in \mathbb{Z} / 323\mathbb{Z}$. When we have such a $g$, we usually denote it by $f^{-1}$. So in your case $f^{-1}(x) = x^{n^{-1}}$ where $n^{-1}$ is the inverse of $n$ in $\mathbb{Z} / 288\mathbb{Z}$
This relies in part on $n$ satisfying $(n, 6) = 1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/706831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Composition relation of P∘P Consider the following relation P on the set B = {a, b, {a, b}}:
P = {(a, a), (a, b), (b, {a, b}), ({a, b}, a)}.
Answer questions 6 to 8 by using the given relation P.
Question 6
Which one of the following alternatives represents the domain of P (dom(P))?
*
*{a, b}
*{{a, b}}
*a, b, {a, b}
*{a, b, {a, b}}
Question 7
Which one of the following relations represents the composition relation P ○ P?
*
*{(a, a),(a, b), (a, {a, b}), (b, a)}
*{(a, a),(a, b), (a, {a, b}), (b, a), ({a, b}, a), ({a, b}, b)}
*{(a, a),(a, b), (b, a)}
*{(a, a),(a, b), (b, a), ({a, b}, {a, b})}
Question 8
The relation P is not transitive. Which ordered pairs can be included in P so that P would satisfy
transitivity?
*
*(a, {a, b}) & (b, a)
*(b, b) & ({a, b}, {a, b})
*(b, a), (b, b), (a, {a, b}), ({a, b}, b) & ({a, b}, {a, b})
*(b, a), (a, {a, b}) & ({a, b}, b)
|
Question 6 :
4
domain of P is the entire set B
Question 7:
2
you just need to write it down and compose!
Question 8:
3
Transitivity requires $P \circ P \subseteq P$. Option 1 is not enough: once $(b, a)$ is added, composing $(b, a)$ with $(a, b) \in P$ forces $(b, b)$, which would still be missing. Adding the five pairs of option 3 turns $P$ into the full relation $B \times B$, which is transitive.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/706965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Real part of a quotient Is there some fast way to know the real part of a quotient?
$$\Re\left(\frac{z_1}{z_2}\right)$$
$z_i\in \mathbb{C}$
|
If you don't have $z_1$ and $z_2$ in a "nice" form, e.g. as multiples of $e^{i\theta}$ for simple values of $\theta$, you could use $\frac12(z+\bar{z})=\Re{(z)}$.
So you'd get:
$\Re(\frac{z_1}{z_2})=\frac{z_1\bar{z_2}+\bar{z_1}z_2}{2z_2\bar{z_2}}$
for
$z_1=r_1e^{i\theta_1}$
$z_2=r_2e^{i\theta_2}$
$\Re(\frac{z_1}{z_2})=\frac{r_1}{r_2}\cos(\theta_1-\theta_2)$
Just as an alternative, although I suspect you wanted something like user130512's post.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/707053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Use Fourier transform to find Fourier series coeficcients I understand that the Fourier Transform can be seen as a generalisation of the Fourier Series, where the period $T_0 \to \infty$ . Now I have encountered this strange question (in an engineering course on signal analysis):
Given a periodic function $x(t)$, find the Fourier Series coefficients $X_n$ by using the Fourier Transform.
What does this mean? How can it be done? As I see it, FS and FT are similar concepts, but they are not the same operation.
For reference, $x(t) = rect(\frac{t-0.25}{0.25}) * \Delta _1 (t)$ but I am seeking an answer in terms of any periodic function $x(t)$ .
|
Suppose we have a function $\tilde x(t)$ that is zero except on the interval $[-T_0/2,T_0/2]$ (on which $\tilde x(t) = x(t)$) and whose Fourier transform is given by
$$
\widehat x(\omega) = \int_{-\infty}^\infty \tilde x(t) e^{-i\omega t}dt
= \int_{-T_0/2}^{T_0/2} x(t) e^{-i\omega t}dt
$$
Using $\widehat x(\omega)$, we would like to find the Fourier series for the $T_0$-periodic function that agrees with $x(t)$ on this interval. We note that the coefficients of the Fourier series for $x$ are given by
$$
X_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x(t)e^{-i (2 \pi n/T_0) t}\,dt
$$
for any integer $n$. Notice the similarity! From here, you can derive
$$
X_n = \frac{1}{T_0}\widehat{x}(2 \pi n/T_0)
$$
Alternatively, let's say you wanted to look directly at $\mathcal F\{x(t)\}$. Note that $x(t) = \sum_{n = -\infty}^\infty X_n e^{i 2 \pi n t/T_0}$. It follows that
$$
\mathcal{F}\{x(t)\} =
\sum_{n=-\infty}^\infty X_n \mathcal{F}\{e^{i 2 \pi n t/T_0}\} =
2\pi\sum_{n=-\infty}^\infty X_n \delta(\omega - 2 \pi n/T_0),
$$
using $\mathcal{F}\{e^{i\omega_0 t}\} = 2\pi\,\delta(\omega-\omega_0)$ for the transform convention $\widehat x(\omega)=\int x(t)e^{-i\omega t}\,dt$ fixed above.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/707150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
Lowest product of pair multiplication This is kind of an algebra question, and I am interested in an algebric proof to it.
Suppose we have $k$ natural numbers that are all greater than $0$.
We would like to arrange them in multiplication-pairs of two, such that the sum of each pair's product is the lowest possible.
For example: Given $A = \{5,9,1,3,6,12\}$, a minimal product of pairs multiplication is taking the pairs $(1,12), (3,9), (5,6)$, such that $ 1 \cdot 12 + 3 \cdot 9 + 5 \cdot 6$ is the lowest possible.
Is it safe to say, that for each pair selection out of the set of the natural numbers, we pair the minimal with the maximal, then remove them from the set and go on?
|
Consider A={1,2,3}.
Suppose you have made one pair (1,2) and removed it from the set. How do you make another pair?
Your reasoning requires an additional condition:
it would be better to require the set to contain an even number of elements.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/707232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Showing that a series in $l_{\infty}$ converges weakly, given a boundedness condition. I'm trying to understand the following:
Let $\{x_k\}_{k=1}^{\infty}$ be a sequence of elements in $l_{\infty}$ such that for some constant $K$, $$\|\sum_{1}^n \lambda_k x_k \|\leq K\sup_k |\lambda_k| \quad \textrm{for all} \ \{\lambda_k\}_{k=1}^n\subset \mathbb{R}, \quad n=1,2,\dots$$ Then for every bounded sequence $\{\lambda\}_{k=1}^{\infty} \subset \mathbb{R}$ the series $\sum_1^{\infty} \lambda_k x_k$ converges weakly to an element of norm $\leq K \sup_k |\lambda_k |$ in $l_{\infty}$.
From the stated condition we can say that the $\sup$ of the inequality's LHS is finite. Does that imply the convergence of the series? (Which would mean strong convergence, implying weak convergence) If not, how can I convince myself of the weak convergence?
|
Let $M=\sup_{k\ge 1} |\lambda_k|$. By assumption, the partial sums $s_n=\sum_{k=1}^n \lambda_k x_k$ satisfy $\|s_n\|_\infty\le MK$ for all $n$. For bounded sequences, weak* convergence in $\ell_\infty$ is precisely the coordinate-wise convergence. Thus, we only need to check that for each fixed index $j$ the numeric series $\sum_{k=1}^\infty \lambda_k x_k(j)$ converges.
Fix $j$. For each $k$, choose $\mu_k \in\{1,-1\} $ so that $\mu_k x_k(j) = |x_k(j)|$.
By assumption, the partial sums of the series $\sum_{k=1}^\infty \mu_k x_k(j) $ are bounded by $K$, which implies
$\sum_{k=1}^\infty |x_k(j)|$ converges to a number at most $K$. Thus,
$\sum_{k=1}^\infty \lambda_k x_k(j)$ converges and
$$\left|\sum_{k=1}^\infty \lambda_k x_k(j )\right| \le M\sum_{k=1}^{\infty} |x_k(j)| \le MK$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/707331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is $d(x,y) = \sqrt{|x-y|}$ a metric on R? For $x,y \in \mathbb{R}$, define $d(x,y) = \sqrt{|x-y|}$.
Is this a metric on $\mathbb{R}$?
It's clear that $d(x,x) = 0$ and $d(x,y) = d(y,x)$ for all $x,y \in \mathbb{R}$. The triangle inequality seems to hold for all values I have tested, but I have not found this function anywhere online as an example of a metric on $\mathbb{R}$.
|
Sure looks like it. It's translation invariant, so to prove the triangle inequality for $x \le y \le z$, adjust everything so that the lowest, $x$, of the three values is at $0$ (i.e., add $-x$ to all three numbers). Then you need to show that
$$
\sqrt{y} + \sqrt{z} \ge \sqrt{y+z}
$$
for any nonnegative $y$ and $z$, which is true (by just squaring both sides).
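Explicitly: both sides are nonnegative, and squaring gives
$$\left(\sqrt{y}+\sqrt{z}\right)^2=y+z+2\sqrt{yz}\ \ge\ y+z=\left(\sqrt{y+z}\right)^2.$$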
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/707468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Separation of variables won't work
Find all solutions on $\mathbb{R}$ of the differential equation $y'=3|y|^{2/3}.$
Why wouldn't separation of variables method work for this differential equation? Why does the initial condition have to be nonzero?
|
I do not understand your statement "the separation of variables method will only work if the initial condition is nonzero".
I have a problem with the absolute value so I shall only help you solve $$y'=3 y^{2/3}$$ You can separate the variables and integrate both sides. This leads to $$y^{1/3}=x+c$$ and then $$y=(x+c)^3$$ I hope and wish this could help you continue with your problem.
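As for why the initial condition must be nonzero (addressing that part of the question directly): separating variables divides by $y^{2/3}$, which is only legitimate while $y\neq0$. But $y\equiv0$ is itself a solution, and uniqueness fails along it: both $y\equiv0$ and $y=x^3$ satisfy $y'=3|y|^{2/3}$ with $y(0)=0$, since $3|x^3|^{2/3}=3x^2$. So the separated family $y=(x+c)^3$ misses solutions through $y=0$; one can even glue the two families, e.g. $y=0$ for $x\le a$ and $y=(x-a)^3$ for $x>a$.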
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/707564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Showing independence of random variables When proving $\bar x$ and $S^2$ are independent, my notes say that "functions of independent quantities are independent". Can someone tell me why functions of independent quantities are independent?
Also let $X_1,X_2,X_3,\dots,X_n$ be a random sample.And suppose we want to estimate parameter $\theta$ using $T(x)$ as an estimator.
In this I have an expression $E\{ [ T(X)-E(T(X))][E(T(X)-\theta)]\}$.
I want to know if I can say that in $[T(X)-E(T(X))]$, $E(T(X))$ (the expected value, one particular fixed value) and $T(X)$ are independent of one another.
Also the expression $[E(T(x)-\theta)]$ is independent of $X$.
Hence the two expressions $[T(X)-E(T(X))]$ and $[E(T(x)-\theta)]$ are independent of one another. So that I can use if $a,b$ are independent $E(ab)=E(a)E(b)$ form
|
For your first question, suppose $X$ and $Y$ are independent random variables.
The statement is that for any Borel measurable functions $f$ and $g$, $f(X)$ and $g(Y)$ are independent. In fact, independence of $X$ and $Y$ is equivalent to
(and in some formulations is defined as) the events $X \in A$ and $Y \in B$ being independent for all Borel subsets $A$ and $B$ of $\mathbb R$. The result is then immediate, since $f(X) \in A$ iff $X$ is in the Borel set $f^{-1}(A)$, and similarly for $g(Y) \in B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/707655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Circle Area formula question Take a peek at the following proof
Everything makes sense but one thing: how did they determine that $\sqrt{\cos^2\theta}$ was positive and not negative? Thanks.
|
Remember that $\sqrt{\phantom{x}}$ denotes the positive square root, so
$$\sqrt{\cos^2\theta}=|\cos\theta|\ .$$
In the paper you linked, this occurs in an integral where $\theta$ goes from $0$ to $\pi/2$. For these $\theta$ values, $\cos\theta$ is positive, so $|\cos\theta|=\cos\theta$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/707724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Using Newton's Method with a Piecewise Defined Function Using a Natural Cubic Spline approximation, I've generated an approximation polynomial to six points of data. Using the Cubic Spline approximation polynomial, I now need to use Newton's method to find a root of it (the spline approximation). I'm unsure of how to tweak the Newtonian algorithm to handle a piecewise defined function. Any ideas on how to do this?
|
I assume you have (or can write) functions that return the value and first derivative of your spline at any given argument value. If so, you can just use Newton's method directly -- no changes are needed to handle piecewise-defined functions.
Saying this another way ...
The fact that the spline is defined piecewise is not visible to your Newton algorithm. It just calls your "black box" functions to get values and derivatives, and doesn't care how they are calculated. The "piecewise" property should be entirely hidden inside these black box functions.
If you don't have a function that calculates derivatives, a cheap way to write one is by using central differencing, as suggested in Claude's answer. For central differencing with double-precision arithmetic, I'd suggest a step size of around $10^{-7}$.
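A minimal sketch of such a black-box Newton iteration (the function and parameter names are my own; `f` stands for any spline evaluator):

```python
def newton(f, x0, tol=1e-12, max_iter=50, h=1e-7):
    """Newton's method with a central-difference derivative,
    treating f as a black box (works for piecewise-defined f)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        dfx = (f(x + h) - f(x - h)) / (2.0 * h)  # central difference
        if dfx == 0.0:
            raise ZeroDivisionError(f"zero derivative estimate at x = {x}")
        step = fx / dfx
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# usage sketch, with `spline` a hypothetical cubic-spline evaluator:
# root = newton(spline, x0=1.0)
```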
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/707830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Markov processes and semimartingales Semimartingales and Markov processes are two fundamental families in probability theory. There are many specific processes that belongs to the intersection of those two families, e.g. Levy processes. More generally semimartingales with independent increments are Markov. I'm interested in examples of popular classes of processes that are outside of this intersection. For instance, Hawkes processes are semimartingales, but are not Markov. Are there other such examples? Vice versa, what are interesting specific examples of Markov processes that are not semimartingales?
Update: googling lead me to the following paper: "Stochastic calculus for symmetric Markov processes" by Chen, Fitzsimmons, Kuwae and Zhang, http://projecteuclid.org/euclid.aop/1207749086
|
If $(B_t)_{t \geq 0}$ is a Brownian motion, then $(|B_t|^{\frac{1}{3}})_{t \geq 0}$ is not a semimartingale (see this question) but a Markov process.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/707973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Height unmixed ideal and a non-zero divisor
Let $R$ be a commutative Noetherian ring with unit and $I$ an unmixed ideal of $R$. Let $x\in R$ be an $R/I$-regular element. Can we conclude that $x+I$ is an unmixed ideal?
Background:
A proper ideal $I$ in a Noetherian ring $R$ is said to be unmixed
if the heights of its prime divisors are all equal, i.e., $\operatorname{height} I=\operatorname{height}\mathfrak{p}$ for all $\mathfrak{p}\in \operatorname{Ass} I$.
|
Let $R$ be a noetherian integral domain and $I=(0)$. If $\dim R=2$ and $R$ is not Cohen-Macaulay, then there is $x\in R$, $x\ne 0$, such that $xR$ is not unmixed. (For more details look here.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/708101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Solve inequality: $\frac{2x}{x^2-2x+5} + \frac{3x}{x^2+2x+5} \leq \frac{7}{8}$ Rational method to solve $\frac{2x}{x^2-2x+5} + \frac{3x}{x^2+2x+5} \leq \frac{7}{8}$ inequality?
I tried to put the fractions over a common denominator, but I think that this way is wrong, because I got a fourth-degree polynomial in the numerator.
|
HINT:
As $\displaystyle x^2\pm2x+5=(x\pm1)^2+4\ge4>0$ for real $x$
we can safely multiply both sides by $(x^2+2x+5)(x^2-2x+5)$
Then find the roots of the Quartic Equation
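A way to carry the computation through rationally (this continuation is my own, not part of the hint): for $x\neq0$ divide the numerator and denominator of each fraction by $x$ and substitute $t=x+\frac5x$, so the inequality becomes
$$\frac{2}{t-2}+\frac{3}{t+2}\le\frac78\iff\frac{5t-2}{t^2-4}\le\frac78.$$
Since $|t|\ge2\sqrt5>2$ for all real $x\neq0$, we have $t^2-4>0$, and clearing denominators gives $7t^2-40t-12\ge0$, i.e. $(t-6)(7t+2)\ge0$. For $x<0$ this always holds; for $x>0$ it amounts to $t\ge6$, i.e. $x+\frac5x\ge6\iff(x-1)(x-5)\ge0$. Together with the trivial case $x=0$, the solution set is $x\le1$ or $x\ge5$.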
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/708201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Let $G$ be a group and let $H,K$ be subgroups of $G$ where $|H|=12$ and $|K|=5$. Show that $H\cap K = \{e\}$.
Let $G$ be a group and let $H,K$ be subgroups of $G$ where $|H|=12$ and $|K|=5$. Show that $H\cap K = \{e\}$.
I used Lagrange's theorem to show that $|H| \mid |G|$ and $|K| \mid |G|$, so $12 \mid |G|$ and $5 \mid |G|$, and that $12$ and $5$ are relatively prime. I'm not sure if this has gotten me closer to solving the problem, as I am stuck. Any help is greatly appreciated.
NOTE: This problem does not specifically define $e$, so I'm attempting to prove this assuming $e$ is the identity of $G$.
|
Let $x\in H\cap K$; then by Lagrange's theorem the order of $x$ divides both $|H|=12$ and $|K|=5$, which are coprime, so $o(x)=1$ and $x=e$. Hence $H\cap K=\{e\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/708315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Show a group of order 351 is NOT simple. I have read a bunch of answers around the web, but they perform a jump which i can't follow.
I have determined that there must be $12*27=324$ elements of order 13 in G, but when i try to count the amount of elements in G of order 3 i run into some problems, i don't get the contradiction i was hoping for.
Some answers i found on the web, just says that since there is $3^3=27$ elements left in G, these must amount to a unique and therefore normal subgroup in G.
But why is it so?
My proof so far:
Since $|G|=351=3^3\cdot13$, the Sylow theorems force $n_3$ to be $1$ or $13$, and $n_{13}$ to be $1$ or $27$. Show that neither $n_3$ nor $n_{13}$ is $1$, for contradiction. By Lagrange's theorem the order of the elements in a Sylow 13-subgroup must be $1$ or $13$; the only element of order $1$ is $e$, which means there are $12$ elements of order 13 in each of the $27$ Sylow 13-subgroups. This means there are $12\cdot27=324$ elements of order 13 in $G$. There remain $351-324=27=3^3$ elements in $G$, which must amount to a Sylow 3-subgroup, because?
I think I really need some "explain it like I'm 5" type of stuff here.
|
If $n_3=1$, we are done. Suppose not, so there are 13 Sylow 3-subgroups. If there is one Sylow 13-subgroup there is nothing to prove, so suppose also there are 27 Sylow 13-subgroups. If $P$, $Q$ are two distinct 13-subgroups then they intersect trivially, for every nontrivial element in such a subgroup is a generator, since 13 is prime. This justifies the statement that there are $27\cdot 12$ elements of order 13. This leaves 27 elements in $G$ not of order 13. In particular there are at most 27 elements with order a power of 3, contradicting the fact that there are 13 distinct Sylow 3-subgroups of order 27 each.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/708393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Solving $\frac{x}{1-x}$ using definition of derivative I was trying to find the equation of the tangent line for this function. I solved this using the quotient rule and got $\frac{1}{(x-1)^2}$ but I can't produce the same result using definition of derivatives. Can someone show me how to do it? I tried looking it up on wolfram alpha but I can't get it to produce the result using definition of derivatives.
|
Using the definition of the derivative, we have
$f'(x)=\lim_{h\rightarrow0}\frac{f(x+h)-f(x)}{h}$
Thus, the derivative of $\frac{x}{1-x}$ is
$\large f'(x)=\lim_{h\rightarrow0}\frac{1}{h}\left(\frac{x+h}{1-x-h}-\frac{x}{1-x}\right)=\lim_{h\rightarrow 0}\frac{1}{h}\cdot\frac{(x+h)(1-x)-x(1-x-h)}{(1-x-h)(1-x)}=\lim_{h\rightarrow 0}\frac{1}{h}\cdot\frac{h}{(1-x-h)(1-x)}=\lim_{h\rightarrow0}\frac{1}{(1-x-h)(1-x)}=\frac{1}{(x-1)^2}$
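As a sanity check (my addition; this assumes Python with the sympy library is available), the same limit can be evaluated symbolically:
```python
import sympy as sp

x, h = sp.symbols('x h')
f = x / (1 - x)

# Derivative straight from the difference-quotient definition
deriv = sp.limit((f.subs(x, x + h) - f) / h, h, 0)

# Should print 0, confirming that deriv equals 1/(x - 1)**2
print(sp.simplify(deriv - 1 / (x - 1)**2))
```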
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/708462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
}
|
$\lim_{x\rightarrow 0^+} \frac {\ln(x)}{\ln( \sin x)}$ without l'Hôpital's rule How to calculate $\displaystyle
\lim_{x\rightarrow 0^{+}}\frac{\ln x}{\ln (\sin x)}$ without l'Hôpital's rule please?
If anybody knows, please help.
I don't have any idea :-(
I'm looking forward to your help.
|
$\frac{\ln x}{\ln \sin x }=\frac{\ln x-\ln \sin x}{\ln \sin x }+1=\frac{\ln \frac{x}{\sin x}}{\ln \sin x }+1\rightarrow \frac{\ln 1}{-\infty}+1=1$
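For a quick numerical confirmation (my addition, using only Python's standard library):
```python
import math

# The ratio ln(x) / ln(sin x) should approach 1 as x -> 0+
for x in [1e-1, 1e-3, 1e-6, 1e-9]:
    print(x, math.log(x) / math.log(math.sin(x)))
```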
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/708542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
}
|
Is there a (f.g., free) module isomorphic to a quotient of itself? My question is as in the title: is there an example of a (unital but not necessarily commutative) ring $R$ and a left $R$-module $M$ with nonzero submodule $N$, such that $M \simeq M/N$?
What if $M$ and $N$ are finitely-generated? What if $M$ is free? My intuition is that if $N$ is a submodule of $R^n$, then $R^n/N \simeq R^n$ implies $N=0$. It seems like $N\neq 0$ implies $R^n/N$ has nontrivial relations, so $R^n/N$ can't be free.
If $R^n/N \simeq R^n$, we'd have an exact sequence
$0 \rightarrow N \hookrightarrow R^n \twoheadrightarrow R^n/N \simeq R^n \rightarrow 0$
which splits since $R^n$ is free, so $R^n \simeq R^n \oplus N$. Does this imply $N=0$? What if we assume $R$ is commutative, or even local? Maybe Nakayama can come in handy.
I'm interested in noncommutative examples too. Thanks!
|
One keyword which should bring up many useful results: Leavitt algebras.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/708633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
}
|
Positive Definite and Hermitian Matrices If we know that $C$ is positive definite and Hermitian, how can we prove that there exists a matrix $Q$ such that $Q^*CQ=I$? Here $Q^*$ is the conjugate transpose of $Q$.
The definition of positive definiteness for a Hermitian Matrix I am using is if all principal minors are positive.
I am also looking for a link between this definition and another equivalent definition i.e. $x^*Ax>0$ for all $x$.
|
Sylvester's Law of Inertia tells you that two hermitian matrices are congruent if, and only if, they have the same inertia. (Wikipedia only deals with the reals, but everything works out the same over $\mathbb C$).
Since $C$ is positive definite, its eigenvalues are all positive, thus it has the same inertia as the identity matrix and what you want follows.
The link you're looking for between your definition of positive definite matrix $A$ and the property $\forall x\in \mathbb C^{n\times 1}\setminus \{0_{\mathbb C^{n\times 1}}\}\left(x^*Ax>0\right)$ is the following:
$$A \text{ is positive definite} \iff \forall x\in \mathbb C^{n\times 1}\setminus \{0_{\mathbb C^{n\times 1}}\}\left(x^*Ax>0\right).$$
The proof of this isn't particularly short so if you want it I suggest you open a book or ask a new question.
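For what it's worth, here is one standard explicit construction of $Q$ (my addition, assuming the spectral theorem for Hermitian matrices): write $C = U\Lambda U^*$ with $U$ unitary and $\Lambda = \operatorname{diag}(\lambda_1,\dots,\lambda_n)$, where each $\lambda_i > 0$ by positive definiteness. Setting $Q = U\Lambda^{-1/2}$ gives
$$Q^*CQ = \Lambda^{-1/2}U^*\,(U\Lambda U^*)\,U\Lambda^{-1/2} = \Lambda^{-1/2}\Lambda\Lambda^{-1/2} = I.$$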
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/708701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How can I explain to my professor that his argument invokes the AC? This is not the standard definition, but my topology professor restricted the context to metric spaces.
Definition
An open set $U$ in a metric space $X$ is a subset of $X$ such that the interior of $U$ and $U$ itself are identical. (An interior point $x$ of a subset $A$ is a point for which there exists a ball containing $x$ that is contained in $A$.)
With this definition, he showed that "Every open set in a metric space is a union of balls."
Below is his argument:
Assume that $U$ is open.
Let $x\in U$.
Then, by definition, there exists $r(x)$ such that $B(x,r(x))\subset U$, so that $U=\bigcup_{x\in U} B(x,r(x))$.
I'm not asking whether this can be resolved easily without the Axiom of Choice; I'm asking how I can explain to my professor that his argument invokes the Axiom of Choice. I told him that choosing $r(x)$ for each $x$ is an application of the axiom of choice, but he does not seem to understand me. Please help.
|
My suggestion is not to bother your professor with this very much.
There are theorems whose choiceless proofs are very different from their choice-using proofs. There are theorems whose usual proof can be easily modified by showing that the arbitrary choice has some easy canonical choice instead.
When it is the former case, I agree that it might be worth insisting that a particular proof is using, or not using, the axiom of choice. But in the latter case it usually amounts to unnecessary nitpicking (and that can always get people to dislike you).
In principle, you are right. When we say "some $r(x)$" rather than "the least $n$ such that ..." we secretly invoke the axiom of choice. However, since we can prove that the axiom of choice for this family of sets is true, there's little harm in this. (Not to mention that people often use the axiom of choice in so many other places without noticing, so this one place, where it's not a big deal, shouldn't be treated differently.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/708791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
What is the exact coefficient of $x^{12}$ in $(2+x^3+x^6)^{10}$? What is the coefficient of $x^{12}$ in $(2+x^3+x^6)^{10}$?
I figure you need to pick $x^3$ 4 times so $C(10,4)$...but what happens with the other numbers/variables???
Can someone explain to me how this is done properly?
Thanks.
EDIT:
The general term of $(x + y)^n$ is $C(n,k) \cdot x^{n-k} \cdot y^k$
EX: Find the term for $x^5$ in $(5-2x)^8$
Answer: $C(8,5) \cdot (-2)^5 \cdot 5^3$
How can I use this info to solve a polynomial based question such as the featured?
Answer:
To sum up all information provided by everyone (Thanks!!!):
$(C(10,4) \cdot 2^6) + (C(10,2) \cdot 2^8) + (C(10,1) \cdot C(9,2) \cdot 2^7) = 71040 $
|
Hint 1: $x^{12} = x^6 x^6 = x^3 x^3 x^3 x^3 = x^3 x^3 x^6$
How many ways to pick $x^6 x^6$? Every factor that contributes no $x$ term contributes a multiplier of $2$. This gives $2^8 {10 \choose 2}$: two of the $10$ factors supply the $x^6$'s, and each of the remaining $8$ supplies a $2$.
For four $x^3$'s there are ${10 \choose 4}$ ways to pick, with $2^6$ from the six remaining factors.
Hint 2: For $x^3 x^3 x^6$ there are $10$ ways to choose the $x^6$ and ${10-1 \choose 2}$ ways to pick the two $x^3$'s, with $2^7$ from the rest.
Hopefully you can figure out the final sum of the three cases from here.
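A quick machine check of the total (my addition; it assumes the sympy library):
```python
import sympy as sp

x = sp.symbols('x')
poly = sp.expand((2 + x**3 + x**6)**10)

# Coefficient of x^12; should print 71040, matching the hand count above
print(poly.coeff(x, 12))
```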
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/708897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Sum of Prime Factors - TopCoder Recently on TopCoder, I faced a problem which was stated as follows:
"You have a text document, with a single character already written in it. You are allowed to perform just two operations - copy the entire text (counted as 1 step), or paste whatever is in the clipboard (counted as 1 step). When you paste, the text on the clipboard is appended to the text already on the document. Copying overwrites whatever is on the clipboard. It is required to find the minimum number of steps needed to produce 'n' characters on the text document.
For Example, to generate 9 characters - Copy the single character already present, Paste (2 chars), paste again (3 chars), copy (3 chars copied), paste (6 chars now), paste again (9 chars now).
So, the total number of steps required is 6, which is $3+3$, the sum of the prime factors of 9.
Can someone tell, how is this problem related to sum of the prime factors of 'n'?
Thanks!
|
At every step, the number of characters copied (to be pasted then an arbitrary number of times) has to be a divisor of $n$, because no matter how many times we paste them, the resulting number of characters will be a multiple of the number of characters we copied.
So let $\xrightarrow{a}$ mean that I copy the characters in the buffer and then I paste them $a-1$ times, multiplying the actual number of characters in the buffer by $a$ in $a$ steps.
So let's say that I applied this algorithm:
$$\xrightarrow{a_1}\xrightarrow{a_2}\xrightarrow{a_3}\cdots\xrightarrow{a_n}$$
We may note that $\xrightarrow{\,a}\xrightarrow{\,b}$ and $\xrightarrow{ab}$ do the same thing.
It is clear that if $a,b\ge2$,the first algorithm is better or equivalent, being equivalent iff $a=b=2$. If there is in fact a $\xrightarrow{4\,}$, we can replace it by a pair of $\xrightarrow{2\,}$ without changing efficiency or result. Then, if any $a_i$ is factorizable, that algorithm is not optimal. So every optimal algorithm is equivalent to the one that has all $a_i$ prime. Finally:
$$a_1+a_2+a_3+\cdots+a_n=\text{Sum of prime factors}$$
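A small sketch of the resulting algorithm in Python (my addition; plain trial division, which is plenty for interview-sized $n$):
```python
def min_steps(n: int) -> int:
    """Minimum copy/paste steps to reach n characters:
    the sum of the prime factors of n, with multiplicity."""
    total, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            total += d
            n //= d
        d += 1
    if n > 1:           # leftover prime factor
        total += n
    return total

print(min_steps(9))     # 6  (= 3 + 3)
print(min_steps(12))    # 7  (= 2 + 2 + 3)
```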
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/708976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
how to compute the last 2 digits of the tower $3^{3^{3^{\cdots}}}$ of height $n$? Input $n$, output the last $2$ digits of the result.
n=1 03 3=3
n=2 27 3^3=27
n=3 87 3^27=7625597484987
n=4 ?? 3^7625597484987=??
Sorry guys, the formula given is $T_n=3^{T_{n-1}}$, I have updated the example.
I was asked this question during an interview, but I didn't have any clue. (The interviewer kind of gave me a hint that for $n>10$, the last $10$ digits of all results would be the same?)
|
Notice $$3^{100} = 515377520732011331036461129765621272702107522001 \equiv 1 \pmod{100}$$
If we define $p_n$ such that $p_1 = 3$ and $p_n = 3^{p_{n-1}}$ recursively and
split $p_n$ as $100 q_n + r_n$ where $q_n, r_n \in \mathbb{Z}$, $0 \le r_n < 100$, we have
$$p_n = 3^{p_{n-1}} = 3^{100 q_{n-1} + r_{n-1}} \equiv 1^{q_{n-1}}3^{r_{n-1}} \equiv 3^{r_{n-1}} \pmod{100} \\ \implies r_n \equiv 3^{r_{n-1}} \pmod{100}$$
This means that to obtain $r_n$, the last two digits of $p_n$, we just need to start with
$r_1 = 3$ and iterate repeatedly. We find
$$\begin{align}
r_1 &= 3\\
r_2 &\equiv 3^3 = 27 \pmod{100}\\
r_3 &\equiv 3^{3^3} = 7625597484987 \equiv 87 \pmod{100}
\end{align}$$
Notice
$$3^{87} = 323257909929174534292273980721360271853387 \equiv 87 \pmod{100}$$
So after the third (not fourth) iteration, we have $r_n = 87$ for all $n \ge 3$.
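The iteration is easy to carry out in code (my addition; Python's three-argument pow does the modular exponentiation, and using the residue $r$ as the exponent is exactly what the congruence above justifies):
```python
r = 3  # r_1
for n in range(2, 8):
    r = pow(3, r, 100)   # r_n = 3^(r_{n-1}) mod 100
    print(n, r)
# Output: 27, 87, 87, 87, ...; stable at 87 from n = 3 onwards
```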
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/709187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Calculate ratio $\frac{p^{3n}}{(\frac{p}{2})^{3n}}$ How do I calculate this ratio? I do not even know where to begin.
$$\frac{p^{3n}}{(\frac{p}{2})^{3n}}$$
Thanks
|
Regarding the original question:
$$
\frac{p^{3n}}{\frac{p^{3n}}{2}} =
\frac{p^{3n}}{\frac{p^{3n}}{2}} \cdot 1 =
\frac{p^{3n}}{\frac{p^{3n}}{2}} \cdot \frac{2}{2} =
\frac{p^{3n}\cdot 2}{\frac{p^{3n}}{2} \cdot 2} =
\frac{p^{3n}\cdot 2}{p^{3n}\cdot 1} =
\frac{p^{3n}}{p^{3n}} \cdot \frac 2 1 =
1\cdot \frac 2 1 = 2.
$$
In general we have
$$
\frac{A}{\frac p q} = \frac{A\cdot q}{\frac p q \cdot q} = A\cdot\frac{q}{p},
$$
so dividing by $\frac p q$ is the same as multiplying by $\frac q p$. Using this you also get
$$
\frac{p^{3n}}{\frac{p^{3n}}{2}} = p^{3n}\cdot\frac{2}{p^{3n}} = 2.
$$
For your updated question we use $\left(\frac{p}{q}\right)^k = \frac{p^k}{q^k}$ to obtain
$$
\frac{p^{3n}}{\left(\frac{p}{2}\right)^{3n}} =
\frac{p^{3n}}{\left(\frac{p^{3n}}{2^{3n}}\right)} =
p^{3n} \cdot \frac{2^{3n}}{p^{3n}} = 2^{3n}.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/709277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Limit as $x$ approaches 2 is undefined? Does the following function have a limit as $x$ approaches 2? Calculate the limit, or motivate why it does not exist.
$$
\frac{(x-2)^2}{(x-2)^3} =\frac{ 1 }{ x-2}.
$$
I answered $\frac{1}{0} = 0$, undefined. Is that correct?
|
It looks like you are considering the function
$$
f(x) = \frac{(x-2)^2}{(x-2)^3} = \frac{1}{x-2}.
$$
You want to consider what happens to this function when $x$ approaches $2$. Note that the numerator is just the constant $1$ and when $x$ approaches $2$, then $x - 2$ approaches $0$. So you have something that approaches $1$ divided by $0$. This limit does not exist (as you correctly state).
Note, however, that $1$ divided by $0$ is not equal to $0$. In fact $1$ divided by $0$ is undefined, which is the reason that the limit is undefined.
If you consider the limit as $x$ approaches $2$ from the right, then you are just considering what happens to $1/(x-2)$ for $x - 2$ small and positive. Since you are taking a (non-zero) constant and dividing it by something that becomes smaller and smaller (while staying positive), the one-sided limit is $\infty$:
$$
\lim_{x\to 2^+} f(x) = \infty.
$$
Likewise
$$
\lim_{x\to 2^-} f(x) = -\infty.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/709395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Reference request: Nonlinear dynamics graduate reference There are already a number of requests for textbooks detailing nonlinear stability theory, chaos theory etc. but many of them are more introductory (e.g. Strogatz - Nonlinear Dynamics and Chaos)
I've covered all this material before but I'm prone to forgetting the details. I hoped somebody might be able to point me in the direction of a more formal reference text on this subject. Perhaps a graduate level text, that covers major undergraduate material in a fairly mathematically rigorous way, as well as a little extra?
|
"Nonlinear Oscillations, Dynamical Systems ,and Bifurcation of Vector Fields" by John Guckenheimer and Philip Holmes comes to mind. I took a class on Dynamical Systems with the first author many years ago and this was the text. I see people using the book by Strogatz and always feel that it is just not at the same level.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/709463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Expansion of lower incomplete gamma function $\gamma(s,x)$ for $s < 0$. The lower incomplete gamma function for positive $s$ is defined by the integral
$$
\gamma(s,x)=\int_0^{x} t^{s-1} e^{-t} dt.
$$
Taylor expansion of the exponential function and term by term integration give the following expansion
$$
\gamma(s,x)=\sum_{n=0}^\infty \frac{(-1)^n x^{n+s}}{n! (n+s)}
$$
Here $\gamma(s,x)$ can be analytically continued for complex $s$ except some singular points. Does the above expansion still hold in this case? Especially I'd like to know if the relation holds for $s < 0$. Each term of the series is well defined for $s < 0$. Is this enough to ensure the validity of the relation or does it need more arguments?
|
From http://dlmf.nist.gov/8.7.E3 we have the series expansion
$$\Gamma(s,x) = \Gamma(s) - \sum_{n=0}^\infty \frac{(-1)^n x^{n+s}}{n! (n+s)}, \qquad s \ne 0, -1, -2, \ldots
$$
Combine this with the relation for the gamma functions (http://dlmf.nist.gov/8.2.E3)
$$\gamma(s,x) + \Gamma(s,x) = \Gamma(s).$$
Therefore the series expansion remains valid for all non-integer $s<0$
$$
\gamma(s,x) = \sum_{n=0}^\infty \frac{(-1)^n x^{n+s}}{n! (n+s)}, \qquad s \ne 0, -1, -2, \ldots
$$
Another route is via Tricomi's entire incomplete gamma function $\gamma^{*}$, see http://dlmf.nist.gov/8.7.E1.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/709556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Factorising $X^n+...+X+1$ in $\mathbb{R}$ How can I factorize this polynomial in $\mathbb{R}$:
$X^n+...+X+1$
I already tried to factorize it in $\mathbb{C}$ but I couldn't find a way to get back to $\mathbb{R}$.
|
We have
$$\sum_{k=0}^n x^k=\frac{x^{n+1}-1}{x-1}$$
hence
$$\sum_{k=0}^n x^k=\prod_{k=1}^{n}\left(x-e^{2ik\pi/(n+1)}\right)$$
so if $n$ is odd, say $n=2p+1$, then
$$\sum_{k=0}^{2p+1}x^k=\prod_{k=1}^{2p+1}\left(x-e^{2ik\pi/(2p+2)}\right)=(x+1)\prod_{k=1}^{p}\left(x-e^{2ik\pi/(2p+2)}\right)\prod_{k=p+2}^{2p+1}\left(x-e^{2ik\pi/(2p+2)}\right)\\=(x+1)\prod_{k=1}^{p}\left(x^2-2\cos\left(\frac{2k\pi}{2p+2}\right)x+1\right)$$
and the case $n$ is even is left for you.
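One can spot-check the odd case numerically (my addition; it uses numpy, and the values of $p$ and the test point $x$ are arbitrary choices):
```python
import numpy as np

p = 2                        # n = 2p + 1 = 5
n = 2 * p + 1
x = 1.7                      # arbitrary test point
lhs = sum(x**k for k in range(n + 1))
rhs = (x + 1) * np.prod([x**2 - 2 * np.cos(2 * k * np.pi / (2 * p + 2)) * x + 1
                         for k in range(1, p + 1)])
print(lhs, rhs)              # the two values should agree up to rounding
```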
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/709693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
The best symbol for non-negative integers? I would like to specify the set $\{0, 1, 2, \dots\}$, i.e., non-negative integers in an engineering conference paper. Which symbol is more preferable?
*
*$\mathbb{N}_0$
*$\mathbb{N}\cup\{0\}$
*$\mathbb{Z}_{\ge 0}$
*$\mathbb{Z}_{+}$
*$\mathbb{Z}_{0+}$
*$\mathbb{Z}_{*}$
*$\mathbb{Z}_{\geq}$
|
Wolfram MathWorld has $\mathbb{Z}^*$.
Nonnegative integer
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/709802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45",
"answer_count": 5,
"answer_id": 0
}
|
Is there a name for a topological space $X$ in which every closed set is contained in a countable union of compact sets? Is there a name for a topological space $X$ which satisfies the following condition:
Every closed set in $X$ is contained in a countable union of compact sets
Does Baire space satisfy this condition?
Thank you!
|
This property is equivalent to $\sigma$-compactness, which says that the space itself is a countable union of compact subsets. If your property holds for a space $X$, then since $X$ is a closed subspace of itself, it is contained in a countable union of compact subsets. Conversely, if $X$ is $\sigma$-compact, then your property holds because every subset is contained in a countable union of compact subsets.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/709880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Differentiation of improper integrals defined on the whole real line. I am considering improper Riemann integrals (not Lebesgue integrals, mind you) of the form $$\int_{-\infty}^\infty f(t,x)dt,$$
with $f:\mathbf{R}\times\Omega\rightarrow\mathbf{R}$ continuous ($\Omega$ an open set in $\mathbf{R}$). What are sufficient conditions on $f$ to justify
$$\frac{d}{dx}\int_{-\infty}^\infty f(t,x)dt=\int_{-\infty}^\infty \frac{\partial}{\partial x}f(t,x)dt?$$
References are welcome. It seems to me that no book includes this :(
|
A sufficient condition is that the integral $\int_{-\infty}^\infty \frac{\partial}{\partial x}f(t,x)\,dt$ is uniformly convergent with respect to parameter $x$ (in some neighborhood of the point $x$ that you are interested in). This means you can bound the tail of integral by $\epsilon$ using the same size of tail for all $x$.
Googling "uniformly convergent" and "improper integral" brings up proofs of the result, such as Theorem 5 here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/710000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Volume of solid region bounded by $z=4x$, $z=x^2$, $y=0$, and $y=3$ as an iterated integral
Suppose R is the solid region bounded by the plane $z = 4x$, the surface $z = x^2$, and the planes $y = 0$ and $y = 3$. Write an iterated integral in the form below to find the volume of the solid R.
$$\iiint\limits_Rf(x,y,z)\,\mathrm{d}V=\int_A^B {\int_C^D {\int_E^F\mathrm{d}z} \mathrm{d}y} \mathrm{d}x$$
I need to find the limits, I found A and C which are zeros and I could not find the rest of the limits
|
The volume comes out to be:
$$\int_{0}^4 \int_{0}^{3} \int_{x^2}^{4x} 1dzdydx$$
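For completeness (my addition), the $x$-limits come from intersecting the plane with the surface:
$$4x = x^2 \iff x(x-4) = 0 \iff x = 0 \text{ or } x = 4,$$
and on $0 \le x \le 4$ we have $x^2 \le 4x$, so $z$ runs from $x^2$ up to $4x$.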
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/710370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Probability of 5 cards drawn from shuffled deck
Five cards are drawn from a shuffled deck with $52$ cards. Find the probability that
a) four cards are aces
b) four cards are aces and the other is a king
c) three cards are tens and two are jacks
d) at least one card is an ace
My attempt:
a) $\left(13*12*\binom{4}{4}*\binom{4}{1}\right)/\binom{52}{5}$
b) same as (a)?
c) $\left(13 * 12 * \binom{4}{3} * \binom{4}{2}\right)/\binom{52}{5}$
d) $\left(13 * \binom{4}{1}\right)/\binom{52}{5}$
|
deepsea gave a complete and clear answer.
I'd just add that you could see straight away that the answers for (b) and (a) cannot be the same, because the requirement in (b) is so much more specific. Many hands that satisfy (a) do not satisfy the conditions for (b).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/710456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
}
|
If $G$ is a non-abelian group of order 6, it is isomorphic to $S_3$ Let $G$ be a non-abelian group of order $6$ with exactly three elements of order $2$. Show that the conjugation action on the set of elements of order $2$ induces an isomorphism.
I just need to show that the kernel of the action is trivial. Not sure how to go about doing that. I think maybe a proof by contradiction but I can't find a contradiction. I would think it would violate "non-abelian-ness" of the group. Thanks for any help!
|
Hint: Suppose $x\in G$ is an element of the kernel of the action, i.e. fixes the three involutions under conjugation. What do you know about the group generated by the three involutions, and what does that tell you about $x$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/710524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Two Dimensional Delta Epsilon Proof I was dawdling in some 2D delta epsilon examples, and I was wondering how to prove that $x^2+2xy+y^2$ has limit 4 as $(x,y)\rightarrow(1,1)$, using epsilon delta.
|
Let $\epsilon>0$ and we look for $\delta>0$ such that $|x^2+2xy+y^2-4|<\epsilon$ whenever $$||(x,y)-(1,1)||=\sqrt{(x-1)^2+(y-1)^2}<\delta\;(*)$$
We have
$$|x^2+2xy+y^2-4|=|(x+y)^2-4|=|(x+y-2)(x+y+2)|\le|(x+y-2)(|x|+|y|+2)|$$
Now let $\delta<1$ and with $(*)$ we have $|x|,|y|<\delta+1<2$ so
$$|(x+y-2)(|x|+|y|+2)|<6|(x+y-2)|\le6(|x-1|+|y-1|)\le12||(x,y)-(1,1)||\le12\delta$$
so it suffices to take $\delta=\min\left(\frac\epsilon{12},1\right)$ and the result follows.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/710576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Checking some Regular Expression problems I'm given the alphabet $$ \Sigma = \{1,2\} $$
I tried to write regular expressions representing the following sets:
All strings in $$\Sigma ^ *$$
with:
a-) number of 2s divisible by 4
b-) exactly one occurrence of 122
c-) exactly one or two 1s
Well I tried to find their solutions, but I am afraid they might be incomplete. So it goes like:
$$ (1^*(22)^*1^*)^* $$
$$ 2^*1222^* $$
$$ 2^*(1 | 11)2^* $$
respectively for a, b and c parts.
|
a). Words whose number of $2$s is divisible by 4 consist of an optional run of $1$s followed by blocks that each contain exactly four $2$s with arbitrary runs of $1$s between them:
$$1^*(21^*21^*21^*21^*)^*$$
b). Note that $(21^*+11(1+2)+121)^*$ describes the strings with no occurrence of $122$.
$$(21^*+11(1+2)+121)^*122(21^*+11(1+2)+121)^*$$
c). In the same spirit as case (a),
$$2^*12^*+2^*12^*12^*$$
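Part (c) is easy to spot-check with Python's re module (my addition; the test words are arbitrary):
```python
import re

# Exactly one or two 1s over the alphabet {1, 2}
pat = re.compile(r'2*12*|2*12*12*')

for w in ['1', '121', '2122', '222', '1112']:
    print(w, bool(pat.fullmatch(w)))
# '1', '121', '2122' -> True; '222', '1112' -> False
```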
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/710704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Generate random sample with three-state Markov chain I have a Markov chain with the transition matrix
$$\pmatrix{0 & 0.7 & 0.3 \\ 0.8 & 0 & 0.2 \\ 0.6 & 0.4 & 0}$$
and I would like to generate a random sequence over the three states (such as $1, 2, 1, 3, \dots$ of length $n$). How do I get there while making sure the transition probabilities roughly apply to my sample?
|
Well, suppose you have a sequence of iid r.v. $U_t$ uniform on $[0,1]$.
Then define $$n_1(u) = 2\times 1_{u<0.7} + 3\times 1_{u\ge 0.7}\\
n_2(u) = 1\times 1_{u<0.8} + 3\times 1_{u\ge 0.8}\\
n_3(u) = 1\times 1_{u<0.6} + 2\times 1_{u\ge 0.6}\\
X_{t+1} = n_{X_t}(U_t)
$$
Then $X$ is a realisation of your Markov chain, starting from whichever initial state you choose at $t=1$.
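A minimal Python sketch of this construction (my addition; the helper name and the chain length are mine):
```python
import random

# Next-state maps n_1, n_2, n_3 from above, driven by a uniform draw u
def step(state: int, u: float) -> int:
    if state == 1:
        return 2 if u < 0.7 else 3
    if state == 2:
        return 1 if u < 0.8 else 3
    return 1 if u < 0.6 else 2      # state == 3

x = 1                               # choose any starting state
chain = [x]
for _ in range(20):
    x = step(x, random.random())
    chain.append(x)
print(chain)                        # e.g. [1, 2, 1, 3, 1, 2, ...]
```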
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/710798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|