H: Find the sum $\sum_{n=1}^{\infty} \frac{3^n}{5^n-2^{2n}}$.
Can someone help me with this sum $\sum_{n=1}^{\infty} \frac{3^n}{5^n-2^{2n}}$? I can't find $S_n$.
AI: $$\begin{eqnarray*}S&=&\sum_{n\geq 1}\frac{3^n}{5^n-4^n}=\sum_{n\geq 1}\left(\frac{3}{5}\right)^n\frac{1}{1-\left(\frac{4}{5}\right)^n}=\sum_{n\geq 1}\sum_{k\geq 0}\left(\frac{3}{5}\right)^n\left(\frac{4}{5}\right)^{kn}=\sum_{k\geq 0}\frac{3\cdot 4^k}{5^{k+1}-3\cdot 4^{k}}\\&=&\sum_{m\geq 1}\frac{\frac{3}{4}4^m}{5^m-\frac{3}{4}4^m}\end{eqnarray*} $$
is actually a series deceleration, but there are ways to improve the convergence speed of the LHS.
For instance
$$ S = 3+\sum_{n\geq 2}\frac{3^{n}}{5^n-4^n}=3+\sum_{n\geq 1}\frac{3^{n+1}}{5^{n+1}-4^{n+1}} $$
where the last series is pretty close to $\frac{3}{5}S$, leading to
$$ \frac{2}{5}S = 3 + \sum_{n\geq 1}\left(\frac{3^{n+1}}{5^{n+1}-4^{n+1}}-\frac{3}{5}\cdot\frac{3^{n}}{5^{n}-4^{n}}\right) $$
and
$$ S = \frac{15}{2}-\frac{3}{2}\sum_{n\geq 1}\frac{3^{n}4^n}{(5^{n+1}-4^{n+1})(5^{n}-4^n)}.$$
The same trick can be applied to the last series, whose main term roughly behaves like $\left(\frac{12}{25}\right)^n$.
$$ S = \frac{95}{26}+\frac{162}{13}\sum_{n\geq 1}\frac{3^n 4^{2n}}{(5^{n+2}-4^{n+2})(5^{n+1}-4^{n+1})(5^n-4^n)} $$
After a few steps we get $S\approx 4.92476079$, but I highly doubt there is a nice closed form for $S$. After all this is pretty much the acceleration method employed by Shingo Takeuchi (and reminiscent of Gosper's works in the seventies) for proving the transcendence of some series like $\sum_{k\geq 1}\frac{1}{2^k-1}$.
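A quick numerical sanity check (my own sketch in Python, not part of the original argument) sums the series directly; the terms decay like $(3/5)^n$, so the partial sums settle quickly:

```python
# Partial sums of S = sum_{n>=1} 3^n / (5^n - 4^n)
S, n = 0.0, 1
while True:
    term = 3.0**n / (5.0**n - 4.0**n)
    if term < 1e-16:
        break
    S += term
    n += 1
print(S)  # ~ 4.92476079...
```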
|
H: Do topology and algebraic topology have any application in biology?
If so,
in what area? Evolution, genetics, ...?
And how good do my computer skills need to be to work in these areas?
AI: Topology and knot theory are used to study DNA supercoiling and topoisomerases in molecular biology. There is a little elaboration in The Knot Book by Colin Adams. See also, for example, http://www.omup.jp/modules/papers/knot/chap17.pdf.
Rational tangle calculus also has applications to the analysis of DNA recombination. See the last section of this: http://homepages.math.uic.edu/~kauffman/RTang.pdf.
|
H: Solution for $\beta$ in ridge regression
The RSS of the ridge regression in matrix form is:
$$RSS(\lambda) = (y−X\beta)^T(y−X\beta) +λ\beta^T\beta$$
the ridge regression solutions are easily seen to be
$$β_{ridge}= (X^TX+λI)^{−1}X^Ty$$
See page 64, https://web.stanford.edu/~hastie/Papers/ESLII.pdf
How is this derived? I don't think the solutions can be easily seen.
AI: You can see here how the derivative of $RSS=(y-X\beta)^T(y-X\beta)=(y^T-\beta^TX^T)(y-X\beta)$ has been obtained. It is
$$\frac{\partial RSS}{\partial \beta}=-2X^Ty+2X^TX\beta+2\lambda \beta$$
And the derivative of $\lambda \beta^T\beta$ w.r.t. $\beta$ is $2\lambda \beta$. Setting the derivative equal to $0$:
$$-2X^Ty+2X^TX\beta+2\lambda \beta=0$$
$$2X^TX\beta+2\lambda \beta=2X^Ty$$
$$(X^TX+\lambda I) \beta=X^Ty$$
$$ \beta=(X^TX+\lambda I)^{-1}X^Ty$$
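As a quick check of this closed form (a hedged numerical sketch; the data here are synthetic and the variable names are mine, not from the book), one can solve the normal equations and verify that the gradient from the derivation above vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))   # design matrix
y = rng.normal(size=50)        # response
lam = 0.7                      # ridge penalty lambda

# beta = (X^T X + lambda I)^{-1} X^T y, computed via a linear solve
beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# gradient of RSS(lambda): -2 X^T y + 2 X^T X beta + 2 lambda beta
grad = -2 * X.T @ y + 2 * X.T @ X @ beta + 2 * lam * beta
print(np.allclose(grad, 0))    # True
```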
|
H: Why the Spectral Theorem does not hold in a Euclidean space
When I read Algebra by Artin, I am confused about why the proof cannot be generalized to a Euclidean space (given the fact that some orthogonal operators cannot be diagonalized).
This is the proof (in the picture); I cannot find which step breaks in a Euclidean space.
AI: Consider the linear operator $T = \begin{pmatrix} 3 & -2 \\ 4 & -1\end{pmatrix}$ acting on $\Bbb{R}^2$. This has nonreal eigenvalues $1 \pm 2\mathrm{i}$ with eigenvectors $\begin{pmatrix}1+\mathrm{i} \\ 2 \end{pmatrix}$ and $\begin{pmatrix}1-\mathrm{i} \\ 2 \end{pmatrix}$, neither of which is a vector in $\Bbb{R}^2$. Consequently, no choice of $v_1$ in the proof yields a basis for $\Bbb{R}^2$.
(More generally, a real matrix need not have real eigenvalues, which is the window through which nonreal eigenvectors enter.)
The example above is dissected in some detail at http://www.math.utk.edu/~freire/complex-eig2005.pdf .
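A numerical confirmation of the nonreal spectrum (a small sketch, assuming numpy):

```python
import numpy as np

T = np.array([[3.0, -2.0], [4.0, -1.0]])
vals, vecs = np.linalg.eig(T)
print(vals)  # [1.+2.j 1.-2.j] -- no real eigenvalues, hence no real eigenvectors
```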
|
H: Prove that $\sin(nx) \cos((n+1)x)-\sin((n-1)x)\cos(nx) = \sin(x) \cos(2nx)$
Question:
Prove that $\sin(nx) \cos((n+1)x)-\sin((n-1)x)\cos(nx) = \sin(x) \cos(2nx)$ for $n \in \mathbb{R}$.
My attempts:
I initially began messing around with the product to sum identities, but I couldn't find any way to actually use them.
I also tried compound angles to expand the expression, but it became too difficult to work with.
Any help or guidance would be greatly appreciated
AI: The left-hand side is$$\begin{align}&\sin nx(\cos nx\cos x-\sin nx\sin x)-(\sin nx\cos x-\cos nx\sin x)\cos nx\\&=(\cos^2nx-\sin^2nx)\sin x\\&=\cos 2nx\sin x.\end{align}$$
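A floating-point spot check of the identity at random real arguments (my own sketch; note the identity holds for real $n$, not just integers):

```python
import random, math

for _ in range(5):
    n, x = random.uniform(-10, 10), random.uniform(-10, 10)
    lhs = math.sin(n*x)*math.cos((n+1)*x) - math.sin((n-1)*x)*math.cos(n*x)
    rhs = math.sin(x)*math.cos(2*n*x)
    assert abs(lhs - rhs) < 1e-9
print("identity holds at all sampled points")
```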
|
H: $ (\prod_{i\in I}\kappa_i)^\mu = \prod_{i\in I}\kappa_i^\mu $, where $ \kappa_i ,\mu $ are infinite cardinals, $I$ an infinite set.
If $|B|=\mu$ and $|A_i|=\kappa_i \;\forall i\! \in\! I$, then $(\prod_{i\in I}\kappa_i)^\mu =|\text{Fun}(B,\prod_{i \in I}A_i)|$. Also, $ \prod_{i\in I}\kappa_i^\mu = \left|\prod_{i \in I}\text{Fun}(B,A_i)\right| $. I'm not sure this last expression is right, and, above all, I don't know how to make a bijection between the two.
This is my first question, any suggestions are appreciated. :)
AI: The proof of this is essentially the same as the proof that there is a bijection
$$\mathrm{Fun}(A, \mathrm{Fun}(B,C)) \to \mathrm{Fun}(B, \mathrm{Fun}(A,C))$$
Each function $f : A \to \mathrm{Fun}(B,C)$ induces a function $\widetilde{f} : B \to \mathrm{Fun}(A,C)$ defined by $(\widetilde{f}(b))(a) = (f(a))(b)$ for all $a \in A$ and $b \in B$, and this establishes a bijection.
[See also: currying—this is the result of uncurrying, swapping the order of the arguments, and then currying.]
Now here it's a bit more complicated, but the idea is the same.
An element of $\prod_{i \in I} A_i$ is a function $g : I \to \bigcup_{i \in I} A_i$ with $g(i) \in A_i$ for each $i \in I$. Thus a function $f : B \to \prod_{i \in I} A_i$ assigns to each $b \in B$ a function $f(b) : I \to \bigcup_{i \in I} A_i$.
We can then obtain a function $\widetilde{f} : I \to \bigcup_{i \in I} \mathrm{Fun}(B,A_i)$ defined by $(\widetilde{f}(i))(b) = (f(b))(i)$ for all $i \in I$ and $b \in B$; note that we really do have $\widetilde{f} \in \prod_{i \in I} \mathrm{Fun}(B,A_i)$.
The assignment $f \mapsto \widetilde{f}$ induces a bijection
$$\mathrm{Fun}\left(B, \prod_{i \in I} A_i\right) \to \prod_{i \in I} \mathrm{Fun}(B,A_i)$$
You now need to check the details.
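The argument-swap bijection can be made concrete on small finite sets (an illustrative sketch, with dictionaries standing in for $\mathrm{Fun}$; the particular sets are arbitrary choices of mine):

```python
B = ["b1", "b2"]
I = [0, 1, 2]
A = {0: [0, 1], 1: ["x", "y"], 2: [True, False]}  # the families A_i

# some f : B -> prod_i A_i, modeled as f[b][i] in A_i
f = {b: {i: A[i][0] for i in I} for b in B}

# the swapped map: (ftilde(i))(b) = (f(b))(i)
ftilde = {i: {b: f[b][i] for b in B} for i in I}

# swapping back recovers f, so f -> ftilde is a bijection
f_back = {b: {i: ftilde[i][b] for i in I} for b in B}
assert f_back == f
```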
|
H: Counting Lattice Paths with Same Start/End Point
I want to find the number of paths of length $2n$ that start and end at $(0,0)$ in the diagram below (Just to be clear, each step is between connected nodes, so for example $(0,0)$ to $(0,2)$ is not allowed):
Clearly, any such path would have an even number of steps. Having counted paths of length $2$, $4$, and $6$, it appears that for length $2n$, there are $2^{2n-1}$ paths.
I thought of it this way: A step in which the $y$ coordinate increases, I call "North" or just "$N$". The opposite is "South" or "$S$".
Then we need to count the number of sequences of length $2n$ that satisfy the following:
An equal number of $N$s and $S$s ($n$ of each).
The sequence starts with $N$.
At no point in the sequence can the number of $N$s exceed the number of $S$s by $3$ or more.
At no point in the sequence can the number of $S$s exceed the number of $N$s.
Counting the sequences of length $2$ is easy: The sequence looks like $\{N,S\}$, and since there are $2$ choices for the first $N$, there are $2$ sequences.
Notice now that in general, every odd-numbered step in any sequence will have $2$ options.
The length $4$ sequences are $\{N,S,N,S\}$ which represents $4$ different paths, and $\{N,N,S,S\}$ which represents another $4$, for a total of $8$.
For length $6$, each of the below sequences represents $8$ different paths, for a total of $32$:
$\{N,S,N,S,N,S\}, \{N,S,N,N,S,S\}, \{N,N,S,S,N,S\}, \{N,N,S,N,S,S\}$
At this point, it becomes tedious to list all the possible sequences. For length $8$, I would predict $2^{8-1}=128$ different paths, and so on.
I'm sure I am overlooking some kind of insight that makes the counting easier and/or explains why I am correct. I appreciate any input!
AI: This is equivalent to counting the lattice paths from $\langle 0,0\rangle$ to $\langle 2n,0\rangle$ that consist of steps from $\langle x,y\rangle$ to $\langle x+1,y\pm 1\rangle$, never drop below the $x$-axis, and never rise above the line $y=2$. Let $a_n$ be the number of such paths.
Suppose that $p$ is such a path. There is a largest $k<n$ such that $\langle 2k,0\rangle$ is on $p$. There are $a_k$ ‘legal’ paths from $\langle 0,0\rangle$ to $\langle 2k,0\rangle$, and there is only one ‘legal’ path from $\langle 2k,0\rangle$ to $\langle 2n,0\rangle$ that hits the $x$-axis only at those two points. Thus,
$$a_n=\sum_{k=0}^{n-1}a_k\;.\tag{1}$$
Clearly $a_0=1$, and we immediately calculate that $a_1=1$, $a_2=2$, $a_3=4$, and $a_4=8$, leading to your conjecture that $a_n=2^{n-1}$ for $n\ge 1$. This is easily verified from $(1)$ by induction: if $a_0=1$ and $a_k=2^{k-1}$ for $1\le k<n$, then
$$a_n=\sum_{k=0}^{n-1}a_k=1+\sum_{k=1}^{n-1}2^{k-1}=1+\sum_{k=0}^{n-2}2^k=1+(2^{n-1}-1)=2^{n-1}\;.$$
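Both the recurrence and the closed form are easy to confirm with a direct dynamic program over the strip $0\le y\le 2$ (a verification sketch):

```python
def a(n):
    """Paths 0 -> 0 of length 2n with steps +-1, staying in [0, 2]."""
    counts = {0: 1}
    for _ in range(2 * n):
        nxt = {}
        for y, c in counts.items():
            for y2 in (y - 1, y + 1):
                if 0 <= y2 <= 2:
                    nxt[y2] = nxt.get(y2, 0) + c
        counts = nxt
    return counts.get(0, 0)

print([a(n) for n in range(1, 8)])  # [1, 2, 4, 8, 16, 32, 64] = 2^(n-1)
```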
|
H: An interesting limit
Let $x\in\mathbb{R}.$ For all $i,j\in\mathbb{N},$ define $a_{i0} = \frac{x}{2^i}, a_{ij} = a_{i,j-1}^2 + 2a_{i,j-1}.$ Find, with proof, $\lim\limits_{n\to\infty} a_{nn}.$
Below is my attempt.
For each $n$, let $p_n(x) = a_{nn}$. Then observe that $a_{n+1,n} = p_n(\frac{x}2).$ As well, $p_{n+1}(x)+1 = (p_n(\frac{x}2)+1)^2.$ Iterating gives $p_2(x) = ((p_0(\frac{x}4)+1)^2+1)^2, p_3(x) = (((p_0(\frac{x}8)+1)^2+1)^2+1)^2, p_n(x) = ((\dots (p_0(\frac{x}{2^n})+1)^2\dots)^2+1)^2,$ but I'm not sure how this can be converted to a more useful form such as $(1+\frac{x}{2^n})^{2^n}.$
AI: For $n\in \mathbb{N}$ ,
$a_{n,0}=\frac{x}{2^n}$
$a_{n,1}+1=(a_{n,0}+1)^2=(\frac{x}{2^n}+1)^2$
$a_{n,2}+1=(a_{n,1}+1)^2=(\frac{x}{2^n}+1)^4$
Continuing this way,
For $1\le j\le n$,
$a_{n,j}+1=(\frac x{2^n}+1)^{2^j}$
Thus for $n=j$,
$a_{n,n}+1=(\frac x{2^n}+1)^{2^n}$
Taking limit as $n\to \infty$, we get
$\lim_{n\to \infty} (a_{n,n}+1)=e^x$ (since the right-hand side is the limit along the subsequence $(1+\frac x{2^n})^{2^n}$ of the sequence $(1+\frac xn)^n$, which converges to $e^x$)
This gives required limit as $e^x-1$.
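Numerically, iterating the recursion reproduces $e^x-1$ (sketch):

```python
import math

def a_nn(x, n):
    a = x / 2**n            # a_{n,0}
    for _ in range(n):      # a_{n,j} = a_{n,j-1}^2 + 2*a_{n,j-1}
        a = a * a + 2 * a
    return a

x = 1.7
print(a_nn(x, 40), math.expm1(x))  # both ~ 4.47394...
```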
|
H: Why is $x^4+x^2+1$ over $\Bbb{F}_2$ a reducible polynomial? What do I misunderstand?
I don't quite understand when a polynomial is irreducible and when it's not.
Take $x^2 +1$ over $\Bbb{F}_3$.
As far as I know, I have to do the following:
$$\begin{array}{c|ccc} x \in \Bbb{F}_3 & 0 & 1 & 2 \\ \hline p(x) & 1 & 2 & 2 \end{array}$$
I calculated it like that:
$(0^2 + 1) \mod 3 = 1$
$(1^2 + 1) \mod 3 = 2$
$(2^2 + 1) \mod 3 = 2$
This is irreducible because in none of them the result is $0$.
Now take $x^2 + 1$ over $\Bbb{F}_2$.
The same approach:
$$\begin{array}{c|cc} x \in \Bbb{F}_2 & 0 & 1 \\ \hline p(x) & 1 & 0 \end{array}$$
$(0^2 + 1) \mod 2 = 1$
$(1^2 + 1) \mod 2 = 0$
This is reducible because the result is $0$ in the latter case.
Now take $x^4+x^2+1$ over $\Bbb{F}_2$.
$$\begin{array}{c|cc} x \in \Bbb{F}_2 & 0 & 1 \\ \hline p(x) & 1 & 1 \end{array}$$
$(0^4+0^2 + 1) \mod 2 = 1 $
$(1^4+1^2 + 1) \mod 2 = 1 $
Why is this polynomial still reducible even though we get both times $1$ as a result?
Can someone clarify?
AI: Because, just like in almost all fields, it is possible that a polynomial of degree $4$ is the product of two polynomials of degree $2$, neither of which happens to have a root.
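Concretely, over $\Bbb{F}_2$ one has $x^4+x^2+1=(x^2+x+1)^2$, where $x^2+x+1$ has no root in $\Bbb{F}_2$. This can be checked by machine (a sketch, assuming sympy is available):

```python
from sympy import symbols, factor

x = symbols('x')
print(factor(x**4 + x**2 + 1, modulus=2))  # (x**2 + x + 1)**2
```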
|
H: Find an open set $U$ for which a function $f$ is one to one.
I was given a function $f : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ which is defined as follows:
$f(x,y) = (x+iy)^3$
We look at $\mathbb{C}\;$ as $\mathbb{R}^2$ with $(x,y) = x + iy$
I need to find an open set $U \subseteq \mathbb{C}\;$ for which $f$ is invertible.
We can define $f(x,y)=\left(\begin{array}{c}
f_{1}(x,y)=x^{3}-3y^{2}x\\
f_{2}(x,y)=3x^{2}y-y^{3}
\end{array}\right)$ for convenience.
We also have that
$(Df)_{(x,y)}=\left[\begin{array}{cc}
3x^{2}-3y^{2} & -6xy\\
6xy & 3x^{2}-3y^{2}
\end{array}\right],\quad Jf(x,y)=\det(Df)_{(x,y)}=9x^{4}+9y^{4}+18x^{2}y^{2}$
How can I find such $U$? I don't really know where to begin. thanks!
AI: You can take, for instance, $U=\left\{re^{i\theta}\,\middle|\,r>0\text{ and }\theta\in\left(0,\frac{2\pi}3\right)\right\}$. Since your map is $z\mapsto z^3$, its restriction to $U$ is injective.
|
H: Point estimate for quadratic loss function
Suppose $X_1,\dots,X_n$ are IID from a distribution uniform on $\left(\theta-\frac{1}{2},\theta + \frac{1}{2}\right)$, and that the prior for $\theta$ is uniform on $(10,20)$. Calculate the posterior distribution for $\theta$ given $x = X_1,\dots,X_n$ and show that the point estimate for $\theta$ under both quadratic and absolute error loss functions is:
$\begin{equation}
\hat{\theta} = \frac{1}{2}\left[\max(\max_i(x_i-\frac{1}{2}),10) + \min(\min_i(x_i+\frac{1}{2}),20)\right]
\end{equation}$.
My solution so far has been the following, and I do not seem to be getting at the same result as the question:
We need to evaluate $f(\theta|x)$. We can do this using Bayes' formula:
$f(\theta|x)=\frac{f(x|\theta)f(\theta)}{\int_{10}^{20}f(x|\theta)f(\theta)d\theta}$.
$f(\theta) = \frac{1}{10}$, as the prior for $\theta$ is uniform on $(10,20)$.
Now, $f(x|\theta) = f(x_1,\dots,x_n|\theta) = \prod f(x_i|\theta) = \prod 1 = 1$, which means that $f(\theta|x) = \frac{1}{10}$. This would imply that the posterior mean (the point estimate under quadratic loss) would be $\int_{10}^{20}\theta\cdot \frac{1}{10}d\theta=15$, which is obviously different from the $\hat{\theta}$ in the question.
What am I doing wrong? Intuitively, I am doing something wrong when calculating $f(x|\theta)$, as I should move to the cdf and get a product of all $x_i$'s. However, differentiating that would still yield 1.
AI: You say $f(x\mid \theta) = f(x_1,\ldots,x_n\mid\theta) = \prod f(x_i\mid \theta) = \prod 1 = 1$
but this is only true when each of the $x_i \in \left(\theta-\frac{1}{2},\theta + \frac{1}{2}\right)$
You should use indicators for this. If you do, then you will be left with a posterior distribution for $\theta$ which is uniform on the interval $\Big(\max(\max_i(x_i-\frac{1}{2}),10) , \min(\min_i(x_i+\frac{1}{2}),20)\Big)$
Under the quadratic error loss function the best point estimate is the mean of the posterior distribution, while under the absolute error loss function the best point estimate is the median of the posterior distribution. For a uniform distribution on an interval, both of these are the midpoint of the interval.
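A small numerical sketch of this (the simulated data and names are my own): the posterior is flat exactly on the stated interval, so a grid-based posterior mean matches the midpoint formula.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(12.3 - 0.5, 12.3 + 0.5, size=8)   # data with theta = 12.3

lo = max(x.max() - 0.5, 10.0)     # left end of the posterior interval
hi = min(x.min() + 0.5, 20.0)     # right end
print((lo + hi) / 2)              # the theta-hat from the question

# grid check: likelihood * prior is constant exactly on (lo, hi)
grid = np.linspace(10, 20, 200001)
ok = np.all(np.abs(x[:, None] - grid[None, :]) < 0.5, axis=0)
print(grid[ok].mean())            # posterior mean ~ the same midpoint
```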
|
H: Prove that if $A \sim I_n$ and $A \sim I_m$ then $n=m$
Some definitions:
Definition of equinumerous sets
Two sets are equinumerous if there exists a bijection $f: A \rightarrow B$. We write $A \sim B $ if $A$ is equinumerous to $B$
Definition of finite set and cardinality
A set $A$ is said to be finite if $A \sim I_n$ where $I_n=\{k \in \mathbb{N} \mid k \leq n\}$ and $\mathbb{N}=\{1,2,...\}$
If $A$ is finite the unique number $n$ such that $A \sim I_n$ is called the cardinality of $A$
Now the question:
Prove the cardinality of A is well-defined (that is if $A \sim I_n$ and $A \sim I_m$ then $n=m$ )
Suggestion: prove first that if $n \neq m$ then there exist no bijection between $I_n$ and $I_m$.
So, assume for contradiction that $m \neq n$, let's say $m < n$.
There exists an injective but not surjective mapping $i:I_m \rightarrow I_n$.
Now since this is so obvious I am having trouble going forward. Actually, I don't know if the previous statement, provided it is needed, is too far a step into the proof, given that what I want to prove is as much as obvious.
Can someone enlighten me?
AI: Proving the statement in the suggestion is not trivial. What you’ve done isn’t nearly enough: the existence of an injection from one set to another that is not a surjection does not prove that there is no bijection between the two sets. For instance, the map $f:\Bbb N\to\Bbb N:n\mapsto n+1$ is a non-surjective injection from $\Bbb N$ to itself, but obviously there are bijections from $\Bbb N$ to $\Bbb N$! It’s only for finite sets that the existence of a non-surjective injection implies that there is no bijection, and that’s essentially what you’re supposed to be proving here.
Let $B$ be the set of $n\in\Bbb N$ such that there is a bijection from $I_n$ to some $I_m$ with $m<n$. Suppose that $B\ne\varnothing$; by the well-ordering principle we can let $n=\min B$. Let $f:I_n\to I_m$ be a bijection, where $m<n$, and let $k=f(n)$. Let $g$ be the restriction of $f$ to $I_{n-1}$; $g$ is a bijection from $I_{n-1}$ to $I_m\setminus\{k\}$.
Now define a function $h:I_m\setminus\{k\}\to I_{m-1}$ as follows:
$$h(i)=\begin{cases}
i,&\text{if }1\le i<k\\
i-1,&\text{if }k<i\le m\;.
\end{cases}$$
It’s easy to verify that $h$ is a bijection. But then $h\circ g$ is a bijection from $I_{n-1}$ to $I_{m-1}$, so $n-1\in B$, contradicting the choice of $n$ as the smallest member of $B$. This contradiction shows that $B$ must be empty and hence that no $I_n$ can be mapped bijectively to an $I_m$ with $m<n$.
|
H: Separability of group $C^*$-algebras
Let $A$ be the group $C^*$-algebra of the free group $F_n$ of rank $n$ ($n\geq 2$). Is $A$ separable?
AI: Yes. So long as the group $\Gamma$ is countable, both $C^*(\Gamma)$ and $C_r^*(\Gamma)$ will be separable, as in both cases, those elements of $\mathbb C\Gamma$ with coefficients in $\mathbb Q+i\mathbb Q$ will be a countable dense subset.
|
H: Proof to n-th order inhomogenous differential equation
Let $I \subset \mathbb{R}$ be an interval and $s \in C^{(\infty)}(I)$.
How can I show that every solution $ y \in C^{ (n)} (I) $ of
$$ y^{ (n)} + \sum_{j=0}^{n-1} a_jy^{(j)} = s(x) $$
( $ a_0,...,a_{n-1} \in \mathbb{R} $ constant ) is in $ C^{( \infty )} (I) $ ?
AI: The equation itself forces extra regularity. Rewrite it as $y^{(n)} = s(x) - \sum_{j=0}^{n-1} a_j y^{(j)}$. If $y \in C^{(n)}(I)$, then each $y^{(j)}$ with $j \leq n-1$ is continuously differentiable, and $s \in C^{(\infty)}(I)$, so the right-hand side, and hence $y^{(n)}$, is $C^{(1)}$; thus $y \in C^{(n+1)}(I)$. Iterating this bootstrap argument gives $y \in C^{(k)}(I)$ for every $k$, i.e. $y \in C^{(\infty)}(I)$. In particular, a function that is not $C^{(\infty)}$ cannot be a solution.
|
H: What is the derivative of the function $f(x)=ix?$ Is it $i$?
Why is this? How is $i$ the slope of the function? Where is it the slope?
I understand taking the derivative with the power rule in, for example, the parabola $x^2$ becoming $2x$ and seeing where that is the slope, but I don't understand how dividing two things a number was multiplied by gives you a derivative exactly.
AI: The values taken by the function $f : x \in \mathbb{R} \, \longmapsto \, ix$ are complex numbers. However that's not a big deal. You can still define the derivative of $f$ using a limit. Given $x \in \mathbb{R}$,
$$ f'(x) = \lim \limits_{h \to 0} \frac{f(x + h) - f(x)}{h} = \lim \limits_{h \to 0} \frac{i(x+h) - ix}{h} = i. $$
|
H: Testing convergence of a series using comparison test: $\sum_{k=0}^{\infty} \frac{\sqrt{k+1}}{2^k}$?
Can someone please explain to me why this series converges? In my textbook they compare it with a geometric series in a way I don't understand. How am I supposed to come up with this? The series is:
$$\sum_{k=0}^{\infty} \frac{\sqrt{k+1}}{2^k}\tag1$$
They compare it to:
$$\frac{\sqrt{k+1}}{2^k}\leq \left( \frac{2}{3}\right)^k. $$
I understand that this geometric series converges and because of that (1) converges, too.
I just wonder: how should I come up with $\left( \frac{2}{3}\right)^k$?
AI: Basically $k^\alpha\ll c^k$ for $\alpha\ge 0,\ c>1$ provided $k$ is large enough (we are not interested in the first terms anyway for the series convergence). Power functions are dominated by exponentials.
So we get $\dfrac{\sqrt{k+1}}{2^k}\le \dfrac{c^k}{2^k}=\left(\dfrac c2\right)^k$.
For the RHS to converge we need $\dfrac c2<1$, and still $c>1$ thus $1<c<2$.
A simple choice would be $c=\dfrac 32$ but in the present case your textbook did choose $c=\dfrac 43$ which works as well.
|
H: Theorem about SDR
Theorem. Let $A_1,\ldots,A_n$ be subsets of a set $X$. Suppose that, for some positive integer $m$, we have $|A(J)|\ge|J|-m\mbox{ for all }J\subseteq\{1,\ldots,n\}$, where $A(J)=\bigcup\limits_{j\in J} A_j$. Then it is possible to find $n-m$ of the sets $A_1,\ldots, A_n$ which have an SDR.
I know I should take $m$ elements and add them to all the sets $A_i$, but I don't know the rest.
AI: I’ll point you in the right direction. Let $M$ be a set of $m$ objects not contained in any of the sets $A_1,\ldots,A_n$. For $k=1,\ldots,n$ let $B_k=A_k\cup M$.
Show that $|B(J)|\ge|J|$ for each $J\subseteq\{1,\ldots,n\}$.
Apply Hall’s theorem to $\{B_1,\ldots,B_n\}$.
What is the largest number of sets $B_k$ that can have their representatives in $M$? That leaves at least how many that must have their representatives in the original $A_k$?
|
H: For any $n \in \Bbb N$, for any representation $\phi:SL_2(\Bbb R) \to U(n)$ we must have $\phi \begin{pmatrix} 0 &1 \\ -1 &0 \end{pmatrix}= I_{n}$
Actually this is a continuation of this question, but I am asking it separately as it deserves separate discussion as an independent problem. Thanks to comments by Exodd and an answer by Tsemo Aristide, we could solve up to step (2) and the only thing left to be proven was:
Let $A(t)=\begin{pmatrix}1 &t\\0 &1\end{pmatrix}, \forall t \in \Bbb R$ . Then show that the normal subgroup of $G=SL_2(\Bbb R)$, generated by $\{A(t):t \in \Bbb R\}$ is the whole group.
As I mentioned in the previous discussion, I found this question, which implies that all I need to show is that the normal subgroup of $G$ generated by $\{A(t):t \in \Bbb R\}$ contains $\begin{pmatrix} 0 &1 \\ -1 &0 \end{pmatrix}$, which I was unable to prove!
NOW I THINK THAT THIS CLAIM IS ACTUALLY FALSE!
Since, for any $\begin{pmatrix} a &b \\ c &d\end{pmatrix} \in G$ we have , $\begin{pmatrix} a &b \\ c &d\end{pmatrix} \begin{pmatrix} 0 &1 \\ -1 &0 \end{pmatrix} \begin{pmatrix} d &-b \\ -c &a\end{pmatrix}=\begin{pmatrix} -bd-ac &b^2+a^2 \\ -d^2-c^2 &bd+ac\end{pmatrix}$ .
Now if the claim in step (3) were true, then for certain $a,b,c,d \in \Bbb R$ with $ad-bc=1$ one must have $\begin{pmatrix} -bd-ac &b^2+a^2 \\ -d^2-c^2 &bd+ac\end{pmatrix}=\begin{pmatrix} 1 &t \\ 0 &1\end{pmatrix}$. But then $d^2+c^2=0 \implies d=c=0 \implies 1= -bd-ac=0$, a contradiction!
So rather it's enough to just show that
For any $n \in \Bbb N$, for any representation $\phi:SL_2(\Bbb R) \to U(n)$ we must have $\phi \begin{pmatrix} 0 &1 \\ -1 &0 \end{pmatrix}= I_{n}$
Note that $\begin{pmatrix} 0 &1 \\ -1 &0 \end{pmatrix}^4=I_2$; then since $\phi$ is a group homomorphism, $\phi \begin{pmatrix} 0 &1 \\ -1 &0 \end{pmatrix}$ is a unitary matrix of order $1$, $2$, or $4$.
Now that's all I could come up with, unable to see how to proceed from here. Thanks in advance for help!
Just a short comment: There are statements like "An irreducible finite-dimensional representation of a noncompact simple Lie group of dimension greater than 1 is never unitary" which would give the result immediately. Please don't use them, as I don't have them at my disposal!
AI: I don't know if it is the fastest way, but
$$
\begin{pmatrix}
0 & -1\\
1 & 0
\end{pmatrix}
\begin{pmatrix}
1& 1\\
0& 1
\end{pmatrix}
\begin{pmatrix}
0&1 \\
-1&0
\end{pmatrix}
=
\begin{pmatrix}
1& 0\\
-1&1
\end{pmatrix}
$$
$$
\begin{pmatrix}
1 & 0\\
-1 & 1
\end{pmatrix}
\begin{pmatrix}
1& 1\\
0& 1
\end{pmatrix}
=
\begin{pmatrix}
1& 1\\
-1&0
\end{pmatrix}
$$
$$
\begin{pmatrix}
1& 1\\
-1&0
\end{pmatrix}
\begin{pmatrix}
1& 0\\
-1& 1
\end{pmatrix}
=
\begin{pmatrix}
0& 1\\
-1&0
\end{pmatrix}
$$
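The three products are quick to verify numerically (sketch):

```python
import numpy as np

J  = np.array([[0, 1], [-1, 0]])
Jm = np.array([[0, -1], [1, 0]])   # inverse of J
U  = np.array([[1, 1], [0, 1]])    # A(1)
L  = np.array([[1, 0], [-1, 1]])   # lower unipotent

assert (Jm @ U @ J == L).all()     # first identity: a conjugate of A(1)
M = L @ U
assert (M == np.array([[1, 1], [-1, 0]])).all()   # second identity
assert (M @ L == J).all()          # third identity: we reach [[0,1],[-1,0]]
```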
|
H: Evaluate $ \lim _{x \rightarrow 0}\left(x^{2}\left(1+2+3+\dots+\left[\frac{1}{|x|}\right]\right)\right) $
Evaluate
$$
\lim _{x \rightarrow 0}\left(x^{2}\left(1+2+3+\dots+\left[\frac{1}{|x|}\right]\right)\right)
$$
For any real number $a$, $[a]$ is the largest integer not greater than $a$.
I am getting no clue where to start!
AI: We have
\begin{align*}
& \mathop {\lim }\limits_{x \to 0} \left( {x^2 \left( {1 + 2 + 3 + \cdots + \left[ {\frac{1}{{\left| x \right|}}} \right]} \right)} \right) = \mathop {\lim }\limits_{x \to 0} \left( {\left| x \right|^2 \left( {1 + 2 + 3 + \cdots + \left[ {\frac{1}{{\left| x \right|}}} \right]} \right)} \right)
\\ &
= \mathop {\lim }\limits_{n \to + \infty } \left( {\frac{1}{{n^2 }}\left( {1 + 2 + 3 + \cdots + n} \right)} \right) = \mathop {\lim }\limits_{n \to + \infty } \frac{{n(n + 1)}}{{2n^2 }} = \frac{1}{2}.
\end{align*}
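Numerically (sketch):

```python
import math

for x in [1e-2, 1e-4, 1e-6]:
    n = math.floor(1 / abs(x))
    print(x * x * n * (n + 1) / 2)  # -> 0.5
```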
|
H: Transformation of Uniform variable
I am looking for feedback on my solution to the following problem: A random variable $X$ has a uniform distribution on $(c, 4c)$ with $c > 0$. $Y$ is given by $$Y = \left\{
\begin{array}{ll}
X & X \in (2c,3c) \\
0 & \text{OW}
\end{array}
\right.$$
Find the distribution of $Y$.
$\textbf{My attempt:}$
We have that $S_X = \{ x: f_X(x)\geq 0 \} = (c, 4c) $ and $S_Y = \{y: y=g(x) \text{ for some x in } S_X\} = \{ y: y =g(x) \text{ for } x \in (2c,3c)\}$. Since $f$ is continuous on $S_X$ and $g^{-1}(y) = y$ has a continuous derivative on $S_Y$, we get: $$f_Y(y) = \left\{ \begin{array}{ll}
f_X(g^{-1}(y))\left|\frac{dg^{-1}(y)}{dy}\right| & y \in (2c,3c) \\
0 & \text{OW}
\end{array}
\right. = \left\{ \begin{array}{ll}
\frac{1}{3c} & y \in (2c,3c) \\
0 & \text{OW}
\end{array}
\right.$$
Thanks for any help/feedback you may have for me!
AI: The resulting distribution is not absolutely continuous: $Y$ has an atom at $0$ with $P(Y=0)=\frac{2}{3}$.
The rest is correct, but no calculation is needed, because on the interval you considered it is given that $Y=X$.
Now your density is consistent, because you have
$$\frac{2}{3}+\int_{2c}^{3c}\frac{1}{3c}dy=1$$
So, concluding, your density is the following
$$f_Y(y) =
\begin{cases}
\frac{2}{3}, & \text{if $y=0$} \\
\frac{1}{3c}, & \text{if $2c<y<3c$} \\
0, & \text{if OW}
\end{cases}$$
EDIT: [graph of the transformation function omitted]
As it shows, the range of $Y$ is $\{0\} \cup (2c,3c)$
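A simulation confirms the mixed distribution (my own sketch; the value of $c$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
c = 1.5
X = rng.uniform(c, 4 * c, size=1_000_000)
Y = np.where((X > 2 * c) & (X < 3 * c), X, 0.0)

print((Y == 0).mean())                 # ~ 2/3, the atom at 0
print(Y[Y > 0].min(), Y[Y > 0].max())  # continuous part fills (2c, 3c)
```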
|
H: Is the following problem lacking more data?
I encountered the question below on a national-level high school test that took place today.
"Two ships, A and B, depart from the port at the same time. A sails at 8 km/h on a 120 degree course. B sails on a 195 degree course. After 90 min, the course from A to B is 255 degrees. What's the speed of B?
From my point of view the information provided seems scant. We only know that ship A has sailed for 12 km on a course of 120 degrees starting from the first quadrant, and that after 90 min ship B sails on a course of 15 degrees (not sure on that, considering the confusing language of the problem). I was thinking of the cosine theorem, but we don't know the distance from A to B. Could you provide some insight on the hidden data and possibly a solution to this problem?
AI: Like I said in a comment, you're not seeing the third angle: the angle at the port between the two tracks is $195^\circ-120^\circ=75^\circ$, and at A the bearing back to the port is $120^\circ+180^\circ=300^\circ$ while the bearing to B is $255^\circ$, so the angle of the triangle at A is $45^\circ$ (leaving $60^\circ$ at B).
It is now a simple application of the sine rule.
Edit: [clarification figure omitted]
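For what it's worth, one consistent reading of the data can be checked with vectors (my own sketch; bearings are taken clockwise from north):

```python
import numpy as np

def u(bearing_deg):
    r = np.radians(bearing_deg)
    return np.array([np.sin(r), np.cos(r)])  # (east, north) unit vector

A = 8 * 1.5 * u(120)                  # ship A's position after 90 min
# solve  speed*1.5*u(195) - t*u(255) = A  for (speed, t)
M = np.column_stack([1.5 * u(195), -u(255)])
speed, t = np.linalg.solve(M, A)
print(speed, t > 0)                   # ~ 6.53 km/h (= 8*sqrt(6)/3), True
```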
|
H: Evaluation of $S_{k,j}=\sum_{n_1,\ldots,n_k=1}^\infty\frac{n_1\cdots n_j}{(n_1+\cdots+n_k)!}$ for $0\leqslant j\leqslant k>0$
For a positive integer $k$, and an integer $j$ with $0\leqslant j\leqslant k$, the problem of evaluating $$S_{k,j}=\sum_{n_1,\ldots,n_k=1}^\infty\frac{n_1\cdots n_j}{(n_1+\cdots+n_k)!}$$ appears as an extension of the problem 3.137 in the book "Limits, Series, and Fractional Part Integrals" by O. Furdui (which asks for evaluation of $S_{k,k}$). It's stated as an "open problem" there, and a quick search over the Internet reveals a few solutions, like this one.
In the answer below, I'm sharing a solution that looks straightforward to me. I wonder is there anything similar online. I didn't find it.
AI: I'm using the Cauchy integral formula ($n$ is a nonnegative integer):
$$\frac{1}{n!}=\frac{1}{2\pi\mathrm{i}}\oint_C\frac{e^z\,dz}{z^{n+1}},$$
where $C$ is, say, the circle $|z|=r$, with $r>1$ to ensure the convergence:
\begin{align*}
S_{k,j}&=\frac{1}{2\pi\mathrm{i}}\sum_{n_1,\ldots,n_k=1}^{\infty}n_1\cdots n_j\oint_C\frac{e^z\,dz}{z^{n_1+\cdots+n_k+1}}
\\&=\frac{1}{2\pi\mathrm{i}}\oint_C\frac{e^z}{z}\left(\sum_{n=1}^\infty\frac{n}{z^n}\right)^j\left(\sum_{n=1}^\infty\frac{1}{z^n}\right)^{k-j} dz
\\&=\frac{1}{2\pi\mathrm{i}}\oint_C\frac{e^z}{z}\left(\frac{z}{(z-1)^2}\right)^j\frac{dz}{(z-1)^{k-j}}
\\&=\frac{1}{2\pi\mathrm{i}}\oint_C\frac{z^{j-1}e^z}{(z-1)^{k+j}}\,dz.
\end{align*}
For $j>0$, using the "coefficient-of" notation, we get readily
$$S_{k,j}=e[z^{k+j-1}](1+z)^{j-1}e^z=e\sum_{r=0}^{j-1}\binom{j-1}{r}\frac{1}{(k+r)!}.$$
For $j=0$, the residue at $z=0$ enters the picture, resulting in
$$S_{k,0}=(-1)^k+e[z^{k-1}](1+z)^{-1}e^z=(-1)^k\left(1-e\sum_{r=0}^{k-1}\frac{(-1)^r}{r!}\right).$$
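Both closed forms agree with truncations of the original multiple sum (a numerical sketch; cutting the indices at $N=25$ is more than enough, since $(n_1+\cdots+n_k)!$ grows so fast):

```python
from itertools import product
from math import comb, e, factorial

def closed(k, j):
    if j == 0:
        return (-1)**k * (1 - e * sum((-1)**r / factorial(r) for r in range(k)))
    return e * sum(comb(j - 1, r) / factorial(k + r) for r in range(j))

def brute(k, j, N=25):
    total = 0.0
    for ns in product(range(1, N + 1), repeat=k):
        num = 1
        for v in ns[:j]:          # product n_1 * ... * n_j
            num *= v
        total += num / factorial(sum(ns))
    return total

for k, j in [(1, 0), (1, 1), (2, 1), (2, 2), (3, 2)]:
    assert abs(closed(k, j) - brute(k, j)) < 1e-9
```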
|
H: True or False questions regarding $\Bbb{F}_9$ with the irreducible polynomial $x^2 +2x+2$
Let $\Bbb{F}_9$ be constructed with the irreducible polynomial $x^2 +2x+2$.
For $a,b \in \Bbb{Z}_3$ we write $ax+b \in \Bbb{F}_9$ as $ab$.
In our exam we had to find out whether the following are true or false.
I know the answers, but I don't understand how one can find out.
$$22 \text{ is the inverse of } 11 \text{ -> True}$$
$$22 \text{ is the negative of } 11 \text{ -> True}$$
$$\text{For arbitrary } a \in \Bbb{F}_9 \setminus \{00\} \text{ it holds that } a^3 =01 \text{ -> False}$$
$$\text{For arbitrary } a \in \Bbb{F}_9 \setminus \{00\} \text{ it holds that } 3 \cdot a =01 \text{ -> False} $$
I just know that $\Bbb{Z}_3[x]/(x^2+1)$ is a finite field with $9$ elements and $\Bbb{F}_9$ is isomorphic to it, but what is the approach when trying to figure out which of the above are true/false?
If the polynomial was $x^2+x+2$, then the answers would be False, True, False, False.
AI: Note that $$\Bbb{F}_9=\Bbb{Z}_3[x]/\langle x^2+2x+2 \rangle=\{ax+b \, | \, a,b \in \Bbb{Z}_3 \text{ and } x^2+2x+2 \equiv 0\}.$$
The notation you are using is a tad confusing: when you write $22$, I am assuming it refers to $2x+2$.
To check if $22=2x+2$ and $11=x+1$ are inverses of each other, we do the following:
\begin{align*}
(2x+2)(x+1)&=2(x+1)^2\\
&=2(x^2+2x+1)\\
&=2(-2+1) && (\because x^2+2x+2 \equiv 0)\\
&=-2\\
&=1 && (\because -2 \equiv 1 \text { in } \Bbb{Z}_3)
\end{align*}
Thus $2x+2$ and $x+1$ are inverses.
Added information:
Let $\alpha=ax+b \in \Bbb{F}_9 \setminus \{0\}$. Suppose $\alpha^3 =1$ for all such $\alpha$'s, then that would mean the order of all non-zero and non-identity elements is $3$. But the order of the multiplicative group $\Bbb{F}_9^{\times}=\Bbb{F}_9\setminus \{0\}$ is $8$ and $3$ does not divide $8$, so this is not possible by Lagrange's Theorem.
But here is an alternate way (simple calculation) to check this without invoking Lagrange's theorem
Let's take $\alpha=x$ (in your notation it is $10$). Then
\begin{align*}
x^3 & = x(x^2)\\
&=x(-2x-2) && (\because x^2+2x+2 \equiv 0)\\
&=x^2+x\\
&=-2x-2+x\\
&=-x-2\\
& =2x+1.
\end{align*}
Thus $x^3 \not\equiv 1$.
Hopefully you can take the rest from here.
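These computations are mechanical enough to script (a sketch; elements $ax+b$ are encoded as pairs $(a,b)$ and reduced via $x^2 = -2x-2 = x+1$ mod $3$):

```python
def mul(p, q):
    a, b = p; c, d = q
    # (a x + b)(c x + d) = ac x^2 + (ad + bc) x + bd, then x^2 -> x + 1
    hi, mid, lo = a * c, a * d + b * c, b * d
    return ((mid + hi) % 3, (lo + hi) % 3)

print(mul((2, 2), (1, 1)))                # (0, 1): 22 * 11 = 01, inverses
print(mul((1, 0), mul((1, 0), (1, 0))))   # (2, 1): x^3 = 2x + 1 != 01
```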
|
H: Taking logarithm preserving asymptotic equivalence
Let for some $c \in (0,1)$, $d \in (0, \infty)$ and $j \in \mathbb{N}$: $$ f(n) \sim \frac{c^n \, d^{j-1}} { (c^n + d)^{j+1}} $$ as $n \to \infty$. "$\sim$" denotes asymptotic equivalence, i.e. the quotient of both sides converges to $1$ as $n \to \infty$. Now, I want to check whether $$ \log (f_n) \sim \log \left( \frac{c^n \, d^{j-1}} { (c^n + d)^{j+1}} \right) $$ holds. I am aware that in general taking the logarithm does not preserve asymptotic equivalence, but it nevertheless often does. I suspect that the above holds, but cannot prove it. Any ideas?
AI: Generally, for positive $a_n, b_n$ we have
$$a_n \sim b_n \iff \log a_n - \log b_n \to 0\,.$$
The latter implies $\log a_n \sim \log b_n$, except in the case that $(\log a_n)$ has $0$ as an accumulation point. For if $\lvert \log a_n\rvert \geqslant c > 0$ for all $n \geqslant n_0$, then
$$\bigl\lvert 1 - \frac{\log b_n}{\log a_n}\biggr\rvert = \frac{\lvert \log a_n - \log b_n\rvert}{\lvert \log a_n\rvert} \leqslant c^{-1} \lvert \log a_n - \log b_n\rvert \to 0\,,$$
i.e. $\frac{\log b_n}{\log a_n} \to 1$ or equivalently $\log a_n \sim \log b_n$. If there is a subsequence with $\log a_{n_k} \to 0$, then $\log a_n \sim \log b_n$ need not hold (but it may hold).
Here we have
$$0 < \frac{c^n d^{j-1}}{(c^n+d)^{j+1}} < \frac{d^{j-1}}{d^{j+1}}\cdot c^n = d^{-2}c^n \to 0\,,$$
hence
$$\log f_n \sim \log \frac{c^nd^{j-1}}{(c^n+d)^{j+1}}$$
follows.
|
H: How many perfect square factors does $20^{20} $ have?
How many perfect square factors does $20^{20} $ have?
I found that $20^{20} = 5^{20} \cdot 2^{40}$.
$5^{2}, 5^{4}, 5^{6}, ... , 5^{20}$ (10 perfect square factors)
$2^{2}, 2^{4}, 2^{6}, ... , 2^{40}$ (20 perfect square factors)
$5^{2}\cdot2^{2}, 5^{2}\cdot2^{4}, ..., 5^{2}\cdot2^{40}$ and so on: there are 20 more perfect square factors for each of the 10 powers of $5$. So, there are $20 \cdot 10 = 200$ such factors. The total number of perfect square factors is $200 + 20 + 10 = 230$.
My question is am I missing something and is there any easy or more generalized way to solve this math?
AI: If $d \mid 20^{10}$ then $d^2 \mid 20^{20}$ and vice versa. So all you need to do is count the number of divisors of $20^{10}.$ And we have formulas for that:
$$\tau(20^{10}) = \tau(2^{20}5^{10}) = \tau(2^{20})\tau(5^{10}) = (20+1)(10+1) = 231.$$
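As a sanity check (my own brute-force sketch): every divisor of $20^{20}=2^{40}5^{20}$ is of the form $2^a5^b$, and it is a perfect square exactly when $a$ and $b$ are both even, giving $21\cdot11$ choices.

```python
import math

count = 0
for a in range(41):
    for b in range(21):
        d = 2**a * 5**b          # a divisor of 20^20
        r = math.isqrt(d)
        if r * r == d:           # perfect square iff a, b both even
            count += 1
print(count)  # 231
```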
|
H: Will $ -a e^{2x} + b e^{x} - cx + d$ always have a root for positive $a,c$?
Consider the following exponential polynomial
$$p(x) = -a e^{2x} + b e^{x} - cx + d,$$
with $a>0,c>0$ and $b,d$ arbitrary. My question is, how could one check whether this always has a root regardless of the particular choice of $b,d$?
I did some plots for different settings and it always turned out to have a root, so I suspect it is true.
My ideas so far:
Use the intermediate value theorem. The problem here is that I am not able to manipulate $p(x)$ to get an idea of where positive and negative values should lie, depending on the parameters.
To gain more insight, one could do a transformation $z= e^x$, such that one is left with a "polynomial" $-az^2 + bz - c \ln z + d$. If one ignores the $\ln$ part, under some conditions on the parameters, the remaining polynomial will have one root (satisfying $z>0$). At this point at least, one can then check the sign of $p$, which will depend on the parameters again.
Check derivatives for Minima/Maxima: Here at least one gets rid of the linear term. Still, one will have to solve something like a weighted $\cosh$, i.e. $2a e^x + ce^{-x} = b$
Thanks for any suggestions!
AI: Hint:
Check the limits $$\lim_{x\to \pm \infty} p(x)$$
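Completing the hint numerically (a sketch; the parameter values are arbitrary choices of mine): $-cx$ forces $p(x)\to+\infty$ as $x\to-\infty$, and $-ae^{2x}$ forces $p(x)\to-\infty$ as $x\to+\infty$, so the intermediate value theorem yields a root, which bisection can locate.

```python
import math

def p(x, a, b, c, d):
    return -a * math.exp(2 * x) + b * math.exp(x) - c * x + d

a, b, c, d = 0.3, -4.0, 2.0, 7.5   # any a, c > 0; b, d arbitrary
lo, hi = -2000.0, 300.0            # p(lo) > 0 and p(hi) < 0 by the limits above
assert p(lo, a, b, c, d) > 0 > p(hi, a, b, c, d)
for _ in range(100):               # bisection, justified by the IVT
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if p(mid, a, b, c, d) > 0 else (lo, mid)
print((lo + hi) / 2)               # a real root of p
```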
|
H: Vector subspaces F and W such that F + W = F
If $F$ and $W$ are vector subspaces of the vector space $E$, can $W$ be any other vector subspace besides $\{ 0_{E} \}$ such that $F + W = F$?
At first I thought that if $W \subset F$, then $F + W = \{ u + v: u \in F \wedge v \in W \}$, and every such $u+v$ lies in $F$ since $u \in F$ and $v \in W \subset F$. But I'm not sure this is enough. Maybe I'm missing something?
AI: You have proven that if $W \subseteq F$ then $F+W=F$.
Conversely, suppose that $F+W=F$. Then for $w \in W$, we must have $w=0+w \in F$. Therefore, $w \in F$, proving that $W \subseteq F$.
Conclusion
$$F + W = F \iff W \subseteq F.$$
|
H: Is partial trace additive in sense of direct sum?
I have an intuition, but not sure exactly, whether the partial trace is additive in the sense of direct sum. My intuition is that partial trace is additive and direct sum acts as a sum but with orthogonality (I hope I understand that right).
I mean: $Tr_{E_1} V_1 \oplus Tr_{E_2}V_2 = Tr_{E_1 \oplus E_2} (V_1 \oplus V_2) $.
Actually this kind of means that $(E_1 \otimes B_1) \oplus (E_2 \otimes B_2) = (E_1 \oplus E_2)\otimes(B_1\oplus B_2)$, which I am not sure about, but it might be legit.
Is that true? Assuming $V_i: B_i \otimes E_i \longrightarrow B_i \otimes E_i$.
AI: If $B_1 = B_2 = B$, then it is true in a sense, but you have to be careful about what you mean by $\operatorname{Tr}_{E_1 \oplus E_2}$. In particular, we note that the spaces
$$
(E_1 \otimes B) \oplus (E_2 \otimes B), \quad (E_1 \oplus E_2) \otimes B
$$
are canonically isomorphic. Specifically, the linear map $\Phi: (E_1 \oplus E_2) \otimes B \to$ $(E_1 \otimes B) \oplus (E_2 \otimes B)$ defined such that
$
\Phi[(x_1 \oplus x_2) \otimes y] = (x_1 \otimes y) \oplus (x_2 \otimes y)
$
defines an isomorphism.
With that said, your statement can be rendered as
$$
\operatorname{tr}_{E_1 \oplus E_2}(\Phi^{-1} \circ (V_1 \oplus V_2) \circ \Phi)= \operatorname{tr}_{E_1}(V_1) \oplus \operatorname{tr}_{E_2}(V_2).
$$
|
H: For every natural number $n$, $f(n) =$ the smallest prime factor of $n.$ For example, $f(12) = 2, f(105) = 3$
QUESTION: Let $f$ be a continuous function from $\Bbb{R}$ to $\Bbb{R}$ (where $\Bbb{R}$ is the set of all real numbers) that satisfies the following property:
For every natural number $n$, $f(n) =$ the smallest prime factor of $n.$ For example, $f(12) = 2, f(105) = 3.$ Calculate the following-
$(a)\lim_{x→∞}f(x)$
$(b)$ The number of solutions to the equation $f(x) = 2016$.
MY SOLUTION: I do not have a problem in understanding part $(a)$. Clearly, $\infty$ is not a number and we cannot find any smallest prime factor for it. Or we can also argue that for any even number-
$f(even)=2$
And for any odd number it depends on the type of the odd number.. in case it's prime then $f(prime \space x)=x$ and in case it's not prime then the answer is something else..
Anyway, we find that there are numerous possibilities and since all of these possibilities directly depend on the number we have chosen so, we cannot account for what happens in the case of $\infty$.
Coming to the second part, the question itself bounced over my head. Let's see carefully what it says-
We know, $f(x)=$ the smallest prime factor of $x$. Therefore, $f(x)=2016$ must imply (by the same logic that)-
The smallest prime factor of $x$ is $2016$.
Wait. What?! Firstly, 2016 is not prime. So how can I account for $x's$ which have such an impossible prime factor.. secondly, even if we assume that $2016$ is the smallest factor of $x$ there are infinite $x's$ which satisfies such a property. Our answer in that case is not bounded (although it's nowhere mentioned it should be)..
So what does the second part even mean?
Thank you in advance for your kind help :).
AI: For a, you can conclude that the limit does not exist. As you say, $f(n)=2$ for even $n$ and $f(n) \ge 3$ for odd $n$. If you think of the $N-\epsilon$ definition of a limit at infinity, this function will fail to have a limit and you can choose any $\epsilon \lt \frac 12$ to demonstrate that.
For b, you are expected to use the fact that $f(x)$ is continuous and use the intermediate value property. We have $f(2016)=f(2018)=2$ and $f(2017)=2017$ because $2017$ is prime. There must be at least one number in the intervals $(2016,2017)$ and $(2017,2018)$ where the function is $2016$. As there are infinitely many primes greater than $2016$, there will be infinitely many points where $f(x)=2016$, at least one each side of each of these primes.
|
H: prove that for any prime $p≥3$ the following divisibility holds $p|11…122…233…3…99…9-123456789$
prove that for any prime $p\geq 3$ the following divisibility holds:
https://photos.app.goo.gl/1P1RSiUobgAJbbGr9
$p \mid \underbrace{1\ldots1}_{p}\underbrace{2\ldots2}_{p}\underbrace{3\ldots3}_{p}\cdots\underbrace{9\ldots9}_{p}-123456789$
for each different digit of the minuend being used p times.
I tried to identify a pattern in the difference, but for that I need to transform it into something that depends on $p$.
AI: Observe that $\underbrace{11\dots1}_p=\frac19\underbrace{99\dots9}_p=\frac{10^p-1}9$. Thus we find that \begin{align*}N_p&:=\underbrace{11\dots1}_p\underbrace{22\dots2}_p3\dots8\underbrace{99\dots9}_p\\&=9\cdot\frac{10^p-1}9+8\cdot\frac{10^p-1}9\cdot10^p+\dots+1\cdot\frac{10^p-1}9\cdot10^{8p}.\end{align*}
In particular, because $123456789=9+8\cdot10^1+\dots+1\cdot10^8$, we have that $$N_p-123456789=\sum_{k=1}^9\left(k\cdot\frac{10^p-1}9\cdot10^{(9-k)p}-k\cdot10^{9-k}\right).$$
Consider each term modulo $p$. We already know that $a^p\equiv a\pmod p$ for all $a$, and for $p\neq 3$ the factor $\frac19$ makes sense modulo $p$, so the $k$-th term in the sum above is equivalent modulo $p$ to $$k\cdot\frac{10-1}9\cdot10^{9-k}-k\cdot10^{9-k}\equiv0\pmod p.$$
Thus the desired difference is equal to the sum of nine different terms, each divisible by $p$, from which we conclude the result for $p\neq 3$. (For $p=3$, note that both $N_3$ and $123456789$ have digit sum divisible by $9$, so their difference is divisible by $9$ and in particular by $3$.)
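The statement is easy to test directly for small primes (a sketch):

```python
def N(p):
    # each digit 1..9 repeated p times: 11...1 22...2 ... 99...9
    return int("".join(ch * p for ch in "123456789"))

for p in [3, 5, 7, 11, 13, 17]:
    assert (N(p) - 123456789) % p == 0
print("holds for all sampled primes")
```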
|
H: Why doesn't adjoining $\sqrt{3}$ to $\mathbb{F}_{11}$ return $\mathbb{F}_{11}$?
I am confused about a particular instance where adjoining an element of a field to itself makes it not equal to itself and I am asking for clarification. I can see the result is true, but I can not see why. We are not introducing any new element and we are not setting any new elements equal to zero.
In the finite field $\mathbb{F}_{11}$ we adjoin $\alpha$ where $\alpha^2 - 3 =0$. Because $(\pm 5)^2 -3 =0$ the two square roots of $3$ are already in $\mathbb{F}_{11}$, so we are either adjoining $5$ or $-5$. We do not know which, although both elements are invertible. However we cannot invert $\alpha +5$, because we don't know whether $\alpha +5$ or $\alpha-5$ is $0$, so $\mathbb{F}_{11}[\alpha]$ is not a field.
By adjoining an ambiguous element of the field to itself I thought maybe we were setting elements equal to zero, but it's not $5 = -5$ because then $10 = 0$ which makes every element $0$.
Source: This was an example in Algebra by Artin where Artin says $\mathbb{F}_{11}[\alpha] \simeq \mathbb{F}_{11}[x]/(x^2-3)$ is not a field on page 366.
Sorry for edits
Edit 2: If I am understanding what is being said, $\alpha$ must assume a specific value in $\mathbb{F}_{11}[\alpha]$. So it is $not$ true $\mathbb{F}_{11}[\alpha] \simeq \mathbb{F}_{11}[x]/(x^2-3)$ because the kernel of the evaluation homomorphism $\phi: \mathbb{F}_{11}[x] \rightarrow \mathbb{F}_{11}[\alpha]$ is not the ideal $(x^2-3)$, but one of the ideals $(x-5)$ or $(x+5)$.
I will rewrite a comment here hopefully to clarify.
In this image Artin describes $R'$ as "obtained by adjoining an element $\alpha$ to $\mathbb{F}_{11}$". A page earlier Artin defined "$R[\alpha] = \text{ring obtained by adjoining} \ \alpha \ \text{to} \ R$". There are also examples of using the evaluation homomorphism to show results such as $\mathbb{R}[x]/(x^2+1)\simeq \mathbb{C}$ and $R[x,y] \simeq R[x][y]$.
Artin writes
\begin{equation}
R' = \mathbb{F}_{11}[x]/(x^2-3)
\end{equation}
In the same paragraph he says "...procedure applied to $\mathbb{F}_{11}$ does not yield a field", and "But we haven't told $\alpha$ whether to be equal to $5$ or $-5$. We've only told that its square is $3$."
With this wording it sounds like the kernel of $\phi$ is not $(x-5)$ or $(x+5)$, but only $(x^2-3)$. Which again confuses me because then $\mathbb{F}_{11}[\alpha]$ is not a field.
AI: There are two different rings being discussed here:
$\mathbb{F}_{11}[\alpha]$, the ring obtained by adjoining some specific $\alpha\in \overline{\mathbb{F}}_{11}$ with $\alpha^2 = 3$;
$\mathbb{F}_{11}[X] / (X^2 - 3)$.
In the first case, $\mathbb{F}_{11}$ already contains $\alpha$; as you point out, $\alpha = \pm 5\in \mathbb{F}_{11}$. In the latter case, we no longer have a field:
\begin{align*}
\mathbb{F}_{11}[X]/(X^2 - 3) = \mathbb{F}_{11}[X]/(X - 5)\oplus \mathbb{F}_{11}[X]/(X + 5).
\end{align*}
If $f\in \mathbb{F}_{11}[X]$ is an irreducible nonconstant polynomial, then the map $\mathbb{F}_{11}[X]/(f) \to \mathbb{F}_{11}[\alpha]$ given by $X \to \alpha$, where $\alpha$ is a zero of $f$ in $\overline{\mathbb{F}}_{11}$, is an isomorphism; for then $\alpha$ is not a zero of any polynomial of degree less than $\deg f$, and comparing dimensions gives the result. That result doesn't hold without the irreducibility assumption, though.
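One can exhibit zero divisors directly (a sketch; elements $aX+b$ are encoded as pairs $(a,b)$ and reduced via $X^2\to 3$ over $\mathbb{F}_{11}$):

```python
def mul(p, q, m=11):
    a, b = p; c, d = q
    # (aX + b)(cX + d) = ac X^2 + (ad + bc) X + bd, with X^2 -> 3
    return ((a * d + b * c) % m, (b * d + 3 * a * c) % m)

# (X - 5)(X + 5) = X^2 - 25 = 3 - 25 = -22 = 0 in F_11[X]/(X^2 - 3)
print(mul((1, -5), (1, 5)))  # (0, 0): zero divisors, so not a field
```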
|
H: Why is $a$ the derivative of $f(x)=ax$?
I thought there was some kind of process to calculate a derivative. Can this be graphed? I know about the power rule, the chain rule, etc. but I don't know what is happening here.
AI: You could look at this a few ways...
For instance, consider what the derivative "means." $f'(x)$, for a function $f(x)$, encodes the rate of change of the function at the point $x$. It is a generalization of the notion of slope from familiar algebra: it is now just the rate of change at a given point. Of course, this makes linear functions such as $f(x)=ax$ special. The slope of this line is $a$, so it makes sense intuitively that $f'(x)=a$ for all $x$.
Or we could calculate it rigorously. Recall:
$$f'(x) = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}$$
(provided the limit exists of course). Let $f(x)=ax$. Then the above becomes
$$f'(x) = \lim_{h \to 0} \frac{a(x+h)-ax}{h} = \lim_{h \to 0} \frac{ax+ah-ax}{h} = \lim_{h \to 0} \frac{ah}{h} = \lim_{h \to 0} a = a$$
I thought there was some kind of process to calculate a derivative.
I mean, there is, sort of. You can apply the definition as above, or you can use known formulas. For instance, you have, for constants $c$ and differentiable $f$,
$$\frac{d}{dx} (cf(x)) = c \cdot \frac{d}{dx} f(x)$$
or, in other notation, $(cf(x))' = cf'(x)$. Moreover, we know that
$$\frac{d}{dx} x^n = nx^{n-1}$$
from one of the familiar derivative laws, on the premise $n \ne 0$. You can apply both to your case, since $f(x) = ax = ax^1$:
$$f'(x) = (ax^1)' = a(x^1)' = a(1x^{1-1}) = a(x^0) = a(1) = a$$
|
H: Counting the number of solutions of $x^2\equiv 1 \text{ (mod n)}$ for even $n\geq 4$?
I am trying to solve the following problem:
Given the context in the book, I have noticed the following: Suppose $n=30$ then we write the following system of equations:
$$x^2\equiv 1 \text{ (mod 2)}\\x^2\equiv 1 \text{ (mod 3)}\\x^2\equiv 1 \text{ (mod 5)}$$
And use the chinese remainder theorem. The solutions for $x^2\equiv 1 \text{ (mod 30)}$ are then the solutions for the previous system. I made a table on Mathematica which seems to confirm my suspicion. Whenever the previous system of equations is satisfied, $x^2\equiv 1 \text{ (mod 30)}$.
I have two questions:
Why are the solutions of that equation found in that system of equations?
How can I count the solutions? I kinda guessed the previous one and it seems to work but I have no clue on how to count them. I've been able to see that there must be an even number of solutions because $a^2\equiv (n-a)^2 \text{ (mod n)}$ but aside from this, I didn't make much progress.
AI: The Chinese remainder theorem, in its abstract version, asserts that the map
\begin{align}
\mathbf Z/30\mathbf Z&\longrightarrow \mathbf Z/2\mathbf Z\times \mathbf Z/3\mathbf Z\times \mathbf Z/5\mathbf Z\\
n\bmod 30&\longmapsto(n\bmod 2,n\bmod3,n\bmod 5)
\end{align}
is a ring isomorphism.
Therefore, to count the solutions, you count them modulo $2$ (one solution), modulo $3$ and modulo $5$ (two solutions each, since the quotients are fields) and combine them in all possible ways, which makes $1\cdot 2\cdot 2=4$ solutions modulo $30$ in all.
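A brute-force check for $n=30$ (sketch):

```python
n = 30
sols = [x for x in range(n) if (x * x) % n == 1]
print(sols, len(sols))  # [1, 11, 19, 29] 4
```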
|
H: Is there an entire function with domains for which $f(A)=B$ and $f(B)=A$?
Let $f$ be an entire function. Suppose that there exist two disjoint, open, connected, nonempty sets $A,B$ in the plane such that $f(A)=B$ and $f(B)=A$.
Does it follow that $f$ is linear?
Equivalently, if a meromorphic function satisfies this condition is it necessarily an automorphism?
Neither of the conditions of disjointness and openness can be dropped, of course. I tried to see if results in dynamics about 2-periodic domains apply, but they usually only regard Fatou components or are otherwise not suitable. But it does seem like a question simple enough that it "ought to" be amenable to such machinery.
Any ideas?
AI: The conclusion does not hold, not even for polynomials. If $z_0$ is an attracting fixed point of $f \circ f$ (but not a fixed point of $f$) and $A$ the component of the Fatou set containing $z_0$, then $B = f(A)$ is disjoint from $A$ with $f(B) = A$.
A concrete example is $f(z) = z^2 - 1$ with $f(0) = -1$, $f(-1) = 0$, and $A, B$ the components of the Fatou set containing $0$ and $-1$, respectively.
Here is an image of the Julia set of $z^2-1$ (Attribution: Prokofiev / Public domain):
The Fatou component in the center contains $z=0$ and the next one on the left contains $z=-1$.
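Iterating from a nearby seed shows the attraction to the $2$-cycle $\{0,-1\}$ (a sketch; the particular seed is my choice and is assumed to lie in the Fatou component of $0$, which the run itself confirms):

```python
z = 0.3 + 0.1j
for _ in range(60):   # an even number of iterations of f(z) = z^2 - 1
    z = z * z - 1
print(z)              # very close to 0 (an odd number of steps lands near -1)
```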
|
H: How to prove a tighter bound $|\lambda_3-1| \leq \epsilon^2$ for an eigenvalue of $A$ with Gerschgorin's theorem and similar matrices?
Given the following matrix
$$A = \begin{bmatrix}
8 & 1 & 0\\
1 & 4 & \epsilon\\
0 & \epsilon & 1\\
\end{bmatrix}, |\epsilon| < 1.$$
Gerschgorin's theorem states that each of the $\lambda_i$ eigenvalues of A will be placed inside a circular disk with center $a_{i,i}$ and radius $\sum^n_{j = 1, j\neq i} |a_{i,j}|$. As a direct consequence, the lowest eigenvalue $\lambda_3$ of A is such that $|\lambda_3 -1| < \epsilon$.
However I'm now asked to prove the tighter bound $|\lambda_3-1| < \epsilon^2$ using diagonal similarity transformations. My initial idea was to use the first step of the QR algorithm for Hessenberg diagonalization as usual to obtain a similar matrix $A_1 = Q^*AQ$ (with $Q$ computed by Householder transformations) preserving the eigenvalues, and hopefully the tighter bound appears using Gerschgorin's theorem once again. I've made some numerical tests using Octave for some particular $\epsilon$ and it seems to be indeed the case that $A_1$ satisfies the bound. The problem is that the exact computation of $Q$ and $A_1$ has been absolutely painful, especially when dealing with the unknown symbol $\epsilon$. I also tried to use a shift of $\rho = 1$ with no better results. So now I'm suspicious there is a simpler similarity transformation that could lead to the desired result. Any suggestions would be highly appreciated.
AI: Subtract off the identity matrix so the upper left term is now $7.$ The characteristic polynomial is now
$$ x^3 - 10 x^2 + (20 -\epsilon^2)x + 7 \epsilon^2 $$
We wish to show that this has a root near zero, between $- \epsilon^2$ and $\epsilon^2.$ I guess we take $\epsilon \neq 0$ and treat that as a separate case.
The value of the characteristic polynomial when $x = \epsilon^2$ is
$$ \epsilon^2 \left( 27 - 11 \epsilon^2 + \epsilon^4 \right) $$
which is positive, as $\epsilon^2 < 1.$
The value of the characteristic polynomial when $x = - \epsilon^2$ is
$$ -\epsilon^2 \left( 13 +9 \epsilon^2 + \epsilon^4 \right) $$
which is negative, as $\epsilon^2 < 1.$
The characteristic polynomial is continuous in $x,$ thus it has a root with
$$ - \epsilon^2 < x < \epsilon^2 $$
The original matrix has an eigenvalue $x$ with
$$ 1- \epsilon^2 < x < 1 +\epsilon^2 $$
ADDED: As Robert points out, we can simply evaluate the shifted characteristic polynomial $ x^3 - 10 x^2 + (20 -\epsilon^2)x + 7 \epsilon^2 $ at $x = t \epsilon^2,$ with $t$ real. When $t=0$ we get (positive) $7 \epsilon^2.$ There is some cancellation available if we take $t = -\frac{7}{20},$ and the polynomial comes out as $$ \epsilon^2 \left( - \frac{350}{400} \epsilon^2 - \frac{343}{8000} \epsilon^4\right) $$
which is negative. Thus the shifted eigenvalue is between $-\frac{7}{20} \epsilon^2$ and $0,$ the original eigenvalue between $1-\frac{7}{20} \epsilon^2$ and $1.$
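A numerical check of the sharpened bound (sketch):

```python
import numpy as np

for eps in [0.9, 0.5, 0.1, 0.01]:
    A = np.array([[8, 1, 0], [1, 4, eps], [0, eps, 1.0]])
    lam3 = np.linalg.eigvalsh(A)[0]      # smallest eigenvalue; A is symmetric
    print(eps, abs(lam3 - 1) <= eps**2)  # True; in fact <= (7/20) * eps**2
```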
|
H: Game theory:- value of a game?
I haven't found any suitable explanation or even definition for this concept. What is the value of game in game theory? Can anybody explain it to me with an example.
AI: The value of a game is the expected value to a given player. For example, a game where you flip a coin and win $2$ for heads and lose $1$ for tails has a value to you of $\frac 12\cdot 2 + \frac 12 \cdot (-1)=\frac 12$. If you have to pay $\frac 12$ to play the game you will break even in the long run.
|
H: Closed graph theorem between Banach spaces: sufficiency of null sequence
For for a linear operator between Banach spaces $T:E \rightarrow F$, why does the seemingly weaker implication $$x_n \rightarrow 0,\quad Tx_n \rightarrow y \implies y = 0$$
yield
$$x_n \rightarrow x,\quad Tx_n \rightarrow y \implies y = Tx.$$
I'm new to operator theory and would like to understand the Closed graph theorem better.
AI: It's a consequence of linearity. Suppose you have the formally weaker condition, and you have $x_n \to x$, $Tx_n \to y$. Then put $\xi_n := x_n - x$. It follows that $\xi_n \to 0$, and $T\xi_n = T(x_n - x) = Tx_n - Tx \to \eta := y - Tx$.
By the assumed condition $\eta = 0$, i.e. $y = Tx$. Thus the more general condition follows.
|
H: If $S$ is a simple module over a ring $R$ which is noetherian and hereditary, and every simple $R$-module is injective, then $S$ is finitely presented.
Let $R$ be a left noetherian and left hereditary ring, and also suppose every simple left module $M$ over $R$ is injective. Prove that a simple left module $S$ over $R$ is finitely presented. So I'm supposed to find an exact sequence $$\bigoplus_{i=1}^{n} R \to \bigoplus_{i=1}^{m} R \to S \to 0$$ where $n,m \in \mathbb{N}$. Some of my ideas: I already know that since $R$ is left hereditary, $p.d(S) \leq 1$, which is somehow why this exact sequence makes sense. Also, as $S$ is simple over $R$, then $S$ is cyclic, which means it is finitely generated, which means $S$ can be covered by copies of $R$; also here I want to use that $R$ is Noetherian to get an injection into the direct sum of copies of $R$ which cover $S$. Thanks.
AI: Since $S$ is simple, it is cyclic, and you have a homomorphism $R\to S$.
Since $R$ is Noetherian, the kernel is finitely generated.
This, AFAICT proves $S$ is finitely presented, without any of the other assumptions. You can then get your sequence by mapping onto the kernel with a free module.
In fact, one should also note that every finitely generated left $R$ module over a left Noetherian ring $R$ is automatically finitely presented.
|
H: Compactness of $GL_n\left (\cal{K}\right)$ where $\cal{K}$ is the Cantor set
Consider the following functions
$f:GL_n(\mathbb{C})\to \mathbb{C}\backslash \{0\},f(A)=det(A),\forall A\in GL_n(\mathbb{C})$;
and $g:\mathbb{R}\to M_2(\mathbb{R}),
g(x)=\begin{pmatrix} \cos x & -\sin x\\ \sin x & \cos x \end{pmatrix}, \forall x\in \mathbb{R}$
Choose the correct one(s) from the statements given below:
a) Let $\cal{K}$ denote the Cantor set and $GL_n(\cal{K})$ denote the set of all $n\times n$ invertible matrices
having entries from $\cal{K}$. Then $f(GL_n(\cal{K}))$ is closed
b) Let $\cal{K}$ be as above. Then $g(\cal{K})$ is closed
c) $GL_n(\mathbb{C})$ has infinitely many closed subgroups containing $SL_n(\mathbb{C})$
d) All the above three statements are true.
but what about the other options??
Option (b) is true: $g$ is continuous since each component is continuous, and $\cal{K}$ is compact, so $g(\cal{K})$ is compact and hence closed.
AI: Hint for (a). $f$ extends to a continuous function from $\mathbb C^{n \times n}$ (the space of all $n \times n$ matrices over $\mathbb C$) to $\mathbb C$, and $\mathcal K^{n \times n}$ is compact.
$f(GL_n(\mathcal K)) = f(\mathcal K^{n \times n}) \backslash \{0\}$ would not be closed in $\mathbb C$, but it is closed if the codomain is taken to be $\mathbb C \backslash \{0\}$.
|
H: If $M$ is a compact Riemannian manifold and $g$ and $\tilde{g}$ are metrics on $M$, then $\frac{1}{C} g \leq \tilde{g} \leq C g$ for $C > 1$
I am reading Nonlinear Analysis on Manifolds: Sobolev Spaces and Inequalities by Emmanuel Hebey and he stated on page $22$:
Let $M$ be a compact manifold endowed with two Riemannian metrics $g$ and $\tilde{g}$. As one can easily check, there exists $C > 1$ such that
$$\frac{1}{C} g \leq \tilde{g} \leq C g$$
on $M$, where such inequalities have to be understood in the sense of the bilinear forms.
I would like help proving this, because I cannot give a satisfactory proof with my attempt, but I put it below to show my effort. I also would like to apologize if my proof is very detailed, but I would like to see whether I understood the argument well and which hypotheses are used and how they are used.
It is sufficient to prove that $\frac{1}{C} \delta_j^i \leq \tilde{g}_{ij} \leq C \delta_j^i$ on $M$ for some constant $C > 1$. Suppose without loss of generality that $\tilde{g}$ is expressed in geodesic normal coordinates at $p$: if the inequalities above are proved in these coordinates, then they hold for $\tilde{g}$ in any other coordinates after replacing $C$ by $\frac{C}{A}$, where $A$ denotes the Jacobian of the change of coordinates. Now, consider $M$ connected (the author assumes at the beginning of the book that manifolds are connected; I think this is used here to define the next metric on $M$) and endowed with the metric $d(p,q) := \inf \left\{ l(\alpha) \ ; \ \alpha \ \text{is a piecewise differentiable curve joining} \ p \ \text{to} \ q \right\}$. Recall that the Riemannian metric $\tilde{g}$ is smooth in the sense that the map
\begin{align*}
\tilde{g}: (M,d) &\longrightarrow (\mathscr{L}^2(T_pM \times T_pM, \mathbb{R}),||\cdot||_{op})\\
p &\longmapsto \tilde{g}(p)
\end{align*}
is smooth ($||\cdot||_{op}$ denotes the operator norm over $\mathscr{L}^2(T_pM \times T_pM, \mathbb{R})$), in particular, the map above is a continuous map defined over a compact metric space, then it is uniformly continuous. This part I am stuck, but I want to define a norm $||\cdot||$ over the image of the Riemannian metric $\tilde{g}$ in order to, for every $\varepsilon > 0$, there exists $\delta(\tilde{g}) > 0$ such that
$$q \in B_{\delta(\tilde{g})}(p) \Longrightarrow |\tilde{g}_{ij}(q) - \tilde{g}_{ij}(p)| \leq ||\tilde{g}(q) - \tilde{g}(p)|| < \varepsilon$$
Choosing $C > 1$ and $\varepsilon := \frac{1}{2} \left( C - \frac{1}{C} \right)$, we have
$$\frac{1}{C} \delta_j^i \leq \tilde{g}_{ij} \leq C \delta_j^i \ (1)$$
on $B_{\delta(\tilde{g})}(p)$ for each $p \in M$.
I am not sure how to do this, since $\mathscr{L}^2(T_pM \times T_pM, \mathbb{R})$ and the coordinate fields vary with $p$; therefore I think I cannot simply take the operator norm of this space to be $||\cdot||$. But if I can overcome this difficulty, then we can apply an analogous reasoning for $g$ to obtain
$$\frac{1}{C} \delta_j^i \leq g_{ij} \leq C \delta_j^i \ (2)$$
on $B_{\delta(g)}(p)$ for each $p \in M$.
Defining $\delta := \min \{ \delta(\tilde{g}), \delta(g) \}$, $(1)$ and $(2)$ hold on $B_{\delta}(p)$ for each $p \in M$. Combining $(1)$ and $(2)$ and observing that $\{ B_{\delta}(p) \ ; \ p \in M \}$ is a cover of $M$, we obtain the desired inequalities.
$\textbf{EDIT:}$
We know that
$$\frac{1}{A} g_p(v,v) \leq \tilde{g}_p(v,v) \leq A g_p(v,v) \ (\star)$$
for all $v \in T_pM$ based on what DIdier_ proved. Analogously,
$$\frac{1}{B} \tilde{g}_p(v,v) \leq g_p(v,v) \leq B \tilde{g}_p(v,v) \ (\star \star)$$
for all $v \in T_pM$.
I will try to prove that
$$\frac{1}{C} g_p(u,v) \leq \tilde{g}_p(u,v) \leq C g_p(u,v)$$
for all $u,v \in T_pM$.
Let $q_{g_p}(v) := g_p(v,v)$ and $q_{\tilde{g}_p}(v) := \tilde{g}_p(v,v)$ be the quadratic forms associated to the $g_p$ and $\tilde{g}_p$ respectively, then
$$g_p(u,v) = \frac{q_{g_p}(u+v) - q_{g_p}(u) - q_{g_p}(v)}{2} \ \text{and} \ \tilde{g}_p(u,v) = \frac{q_{\tilde{g}_p}(u+v) - q_{\tilde{g}_p}(u) - q_{\tilde{g}_p}(v)}{2}.$$
This, $(\star)$ and $(\star \star)$ imply that
$$\tilde{g}_p(u,v) \leq \left( A - \frac{1}{A} \right) g_p(u,v)$$
and
$$g_p(u,v) \leq \left( B - \frac{1}{B} \right) \tilde{g}_p(u,v)$$
for all $u,v \in T_pM$, therefore
$$\frac{1}{\left( B - \frac{1}{B} \right)} g_p(u,v) \leq \tilde{g}_p(u,v) \leq \left( A - \frac{1}{A} \right) g_p(u,v)$$
for all $u,v \in T_pM$.
Choosing $C > 1$ sufficiently large such that
$$\frac{1}{C} g_p(u,v) \leq \frac{1}{\left( B - \frac{1}{B} \right)} g_p(u,v) \leq \tilde{g}_p(u,v) \leq \left( A - \frac{1}{A} \right) g_p(u,v) \leq C g_p(u,v)$$
for all $u,v \in T_pM$ gives the result.
AI: You can prove this in a more direct way.
It looks like the proof that in a finite dimensional vector space, all norms are equivalent.
Let $S_gM$ be the unit sphere bundle of $(M,g)$, that is $S_gM = \{ (p,v)\in TM | g_p(v,v)=1 \}$.
If $M$ is compact, then $S_gM$ is compact too.
The smooth function $f$ on $TM$ defined by $f(p,v)= \tilde{g}_p(v,v)$ is then continuous restricted to $S_gM \subset TM$.
Notice $f$ is positive, as every $v\in S_gM$ is non-zero.
By compactness, there exist $c_1, c_2 > 0$ such that $c_1 \leqslant f(p,v) \leqslant c_2$ on $S_gM$ (I write $c_1, c_2$ rather than $m, M$ to avoid a clash with the manifold $M$).
You can choose some constant $C>1$ such that $\frac{1}{C} \leqslant c_1 \leqslant c_2 \leqslant C$, so that on $S_gM$, $\frac{1}{C} \leqslant \tilde{g}_p(v,v)\leqslant C$.
By the very definition of $S_gM$, we have that for every $(p,v)\in S_gM$,
$$\frac{1}{C}g_p(v,v)\leqslant \tilde{g}_p(v,v) \leqslant Cg_p(v,v).$$
Now, the homogeneity of quadratic forms shows that this inequality is true on all of $TM$.
|
H: When will an ellipse ‘fall’ into a parabola?
Consider the parabola $y=x^2$ and an ellipse which ‘rests’ on it, given by the equation $$\frac{x^2}{a^2} +\frac{(y-h)^2}{b^2}=1$$The goal is to find all ordered pairs $(a,b)$ for which the ellipse doesn’t fall to the origin, namely it touches the parabola at two distinct points.
Replacing $y$ by $x^2$ in the equation of the ellipse, we get $$\frac{x^2}{a^2} +\frac{(x^2-h)^2}{b^2}=1 $$
I could calculate the discriminant of this quadratic in $x^2$ and set it $\gt 0$. Is there a quick and neat way to express $h$ in terms of $a,b$? Or maybe another approach to solve this problem?
AI: I'd look at the problem in a different way. Suppose we are given $b = h > 0$ such that the ellipse touches the parabola's vertex. What is the largest $a$, say $a^*$, such that the ellipse has no other intersection points with the parabola? For $a > a^*$, the ellipse cannot have $h = b$, thus such an $(a,b)$ pair cannot "fall" all the way down to the vertex.
As such, we require $$\frac{x^2}{a^2} + \frac{(y-b)^2}{b^2} = 1, \\ y = x^2,$$ hence $$a^2 y^2 + (b^2 - 2a^2 b) y = 0,$$ for which the unique nonzero root in $y$ is $$y = \frac{(2a^2 - b)b}{a^2}.$$ Therefore, if this root exists and is positive, the ellipse has another intersection point with the parabola other than its vertex; i.e., if $2a^2 > b$ or $$a > a^* = \sqrt{b/2}.$$ It follows that the set of all ordered pairs for which the ellipse cannot touch the vertex are those pairs for which $a > \sqrt{b/2}$.
By request, there is the question of how to express $h$ as a function of $(a,b)$ for ellipses satisfying the condition $a > \sqrt{b/2}$. This involves only a slight modification of the above calculation, namely we solve the system $$\frac{x^2}{a^2} + \frac{(y-h)^2}{b^2} = 1, \\ y = x^2$$ for $y$, giving $$y = h - \frac{b^2}{2a^2} \pm \frac{b \sqrt{4a^4 + b^2 - 4a^2 h}}{2a^2}.$$ Then we note that under the assumptions $a > \sqrt{b/2}$ and $h \ge b$, we have $h - b^2/(2a^2) > h - b > 0$. When the ellipse is tangent to the parabola, the solution has a unique double root; i.e., the discriminant $4a^4 + b^2 - 4a^2 h$ must be zero, or $$h = a^2 + \frac{b^2}{4a^2}.$$ This characterizes the location of such an ellipse when $h \ge b$.
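For what it's worth, the tangency condition is easy to verify symbolically. Below is a small sketch using sympy; the sample values $a=b=1$ (giving $h=5/4$) are my own choice:

    import sympy as sp

    x = sp.symbols('x', real=True)
    a, b = 1, 1                                    # sample semi-axes (my choice)
    h = a**2 + sp.Rational(b**2, 4 * a**2)         # tangency height derived above
    expr = x**2 / a**2 + (x**2 - h)**2 / b**2 - 1  # ellipse with y = x^2 substituted
    print(sp.factor(expr))                         # (4*x**2 - 3)**2/16: a perfect square, i.e. tangency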
|
H: Derivation of pressure using partition function
expression for the pressure due to a molecule in state number $i$
$$P_{i}=-\frac{d \varepsilon_{i}}{d V}$$
To find pressure
$$\begin{aligned}
P=\frac{N}{z} \sum_{i} P_{i} e^{-\varepsilon_{i} / k_{\mathrm{B}} T} &=-\frac{N}{z} \sum_{i}\left(\frac{d \varepsilon_{i}}{d V}\right) e^{-\varepsilon_{i} / k_{\mathrm{B}} T} \\
&=\frac{N k_{\mathrm{B}} T}{z} \sum_{i}\left(\frac{\partial}{\partial V} e^{-\varepsilon_{i} / k_{\mathrm{B}} T}\right)_{T} \quad \text{Eq.1}
\end{aligned}$$
I don't understand how you come to the second line from the first line of Eq.1. I know there is somehow $\frac{\partial}{\partial \varepsilon_{i} } e^{-\varepsilon_{i} / k_{\mathrm{B}} T}$ involved. A step-by-step derivation from the first line to the second line would be really helpful.
AI: This is just chain rule, $\varepsilon_i$ is a function of $V$
$$\begin{aligned}
\frac{\partial}{\partial V} (e^{-\varepsilon_{i} / k_{\mathrm{B}} T}) = - \frac{1} {k_{\mathrm{B}} T}e^{-\varepsilon_{i} / k_{\mathrm{B}} T}\left(\frac{d \varepsilon_{i}}{d V}\right)
\end{aligned}$$
therefore
$$\begin{aligned}
- e^{-\varepsilon_{i} / k_{\mathrm{B}} T}\left(\frac{d \varepsilon_{i}}{d V}\right) = k_{\mathrm{B}} T \left( \frac{\partial}{\partial V} (e^{-\varepsilon_{i} / k_{\mathrm{B}} T}) \right)
\end{aligned}.$$
|
H: Two definitions of Strong Markov property for Brownian motion
Strong Markov property for Brownian motion:
(Def 1)For every almost surely finite stopping time $T$, the process
$$\{B(T+t)-B(T): t\geq 0\}$$
is a standard Brownian motion independent of $\mathcal{F}(T)$.
(Def 2) $$\mathbb{E}_x[f(B(t))|\mathcal{F}(T)]=\mathbb{E}_{B(T)}[f(B(t-T))]$$
on $\{T\leq t\}$.
Why are these two definitions of SMP equivalent?
AI: From $2$ to $1$:
Let $V=(V_t) = (B_{T+t} - B_T)$ be that process, let $A \in \mathcal B(\mathbb R^{[0,\infty)})$ (with the cylinder $\sigma$-field). Let $A_0 = \{ x \in \mathbb R^{[0,\infty)} : x-x(0) \in A \}$ (we translate every function in $A$ by minus its value at $0$). Note that:
$$ \mathbb P_x( V \in A | \mathcal F(T)) = \mathbb P_x( (B_{T+t} - B_T) \in A | \mathcal F(T)) = \mathbb P_x ( (B_{T+t}) \in A_0 | \mathcal F(T))$$
(since $B_T$ is the value at $0$ of the process $(B_{T+t})_{t \ge 0}$). Now apply $2$, getting:
$$ \mathbb P_x ( (B_{T+t}) \in A_0 | \mathcal F(T)) = \mathbb P_{B_T}( (B_t) \in A_0) $$
Note that for any $y \in \mathbb R$ we have:
$$ \mathbb P_y( (B_t) \in A_0 ) = \mathbb P_y( (B_t - y) \in A) = \mathbb P_0 ( (B_t) \in A) $$
Taking $y = B_T$ it finally gives us:
$$ \mathbb P_x( V \in A | \mathcal F(T)) = \mathbb P_0 ( (B_t) \in A) $$
in particular $$ \mathbb P_x (V \in A) = \mathbb E_x[\mathbb P_x(V \in A |\mathcal F(T))]= \mathbb P_0( ( B_t) \in A) $$
so we showed that $V$ has the same distribution under $\mathbb P_x$ as a standard Brownian motion $(B_t)$ (i.e. under the measure $\mathbb P_0$).
Now to show independence, take any $B \in \mathcal F(T)$; we get:
$$ \mathbb P_x(B \cap \{ V \in A\}) = \mathbb E_x [ 1_B \mathbb P_0 ( (B_t) \in A)] = \mathbb P_x(B)\mathbb P_0( (B_t) \in A) = \mathbb P_x(B)\mathbb P_x( V \in A) $$
From 1 to 2:
$$\mathbb E_x [ f(B_{T+t}) | \mathcal F(T)] = \mathbb E_x [ f(B_{T+t} - B_{T} + B_{T}) | \mathcal F(T)] $$
I don't know what information you possess, but it can be shown that, as an adapted, right-continuous process, Brownian motion is progressively measurable, hence $B_T$ is $\mathcal F(T)$-measurable. By 1. we have that $B_{T+t} - B_T$ is independent of $\mathcal F(T)$, so by the freezing property of conditional expectation, the last one is equal to:
$$ \mathbb E_x[ f(B_{T+t} - B_T + p)] |_{p = B_T} $$
Again using $1$, we know that $B_{T+t} - B_T$ under $\mathbb P_x$ is distributed as standard (so under $\mathbb P_0$) brownian motion, so:
$$ \mathbb E_x[ f(B_{T+t} - B_T + p)] |_{p = B_T} = \mathbb E_0 [ f(B_t + p)]|_{p = B_T} = \mathbb E_p[f(B_t)]|_{p =B_T} = \mathbb E_{B_T}[f(B_t)]$$
So we proved $\mathbb E_x [ f(B_{T+t}) | \mathcal F(T)] = \mathbb E_{B_T}[f(B_t)]$.
|
H: Relation for Bessel functions
I have function $$Q_{n}(z)=\frac{J_{n+1}(z)}{zJ_{n}(z)},$$where $J_{n+1}(z)$ and $J_{n}(z)$ are Bessel functions of the first kind. I need to prove
$$\frac{dQ_{n}}{dz}=\frac{1}{z}-\frac{2(n+1)}{z}Q_{n}(z)+zQ_{n}^{2}(z)$$
but I don't know where to start.
AI: The shown identity is more or less straightforward according to your definition of $J_n(z)$. Assuming the series definition
$$ J_n(x) = \sum_{m\geq 0}\frac{(-1)^m\, x^{2m+n}}{2^n\, 4^m\, m!\,(n+m)!} $$
we clearly have that $J_n$ is an entire function and $2\,J_n'(z)=J_{n-1}(z)-J_{n+1}(z)$ (which is even more evident from the integral definition $J_n(z)=\frac{1}{\pi}\int_{0}^{\pi}\cos(z\sin\theta - n\theta)\,d\theta$ and the cosine addition formulas), so
$$ Q_n'(z) = -\frac{J_{n+1}(z)}{z^2 J_n(z)}+\frac{J_{n+1}'(z)}{z J_n(z)}-\frac{J_{n+1}(z)J_n'(z)}{z J_n(z)^2} $$
equals
$$ -\frac{J_{n+1}(z)}{z^2 J_n(z)}+\frac{J_n(z)-J_{n+2}(z)}{2z J_n(z)}-\frac{J_{n+1}(z)(J_{n-1}(z)-J_{n+1}(z))}{2z J_n(z)^2} $$
while
$$ \frac{1}{z}-\frac{2(n+1)}{z}Q_n(z)+zQ_n^2(z)=\frac{1}{z}-\frac{2(n+1)J_{n+1}(z)}{z^2 J_n(z)}+\frac{J_{n+1}(z)^2}{z J_n(z)^2} $$
so the given identity is equivalent to
$$ -2J_{n+1}(z)J_n(z)+zJ_n(z)(J_n(z)-J_{n+2}(z))-z J_{n+1}(z)(J_{n-1}(z)-J_{n+1}(z))\\= 2z J_n(z)^2-4(n+1)J_n(z) J_{n+1}(z)+2z J_{n+1}(z)^2$$
which is a consequence of $\frac{2n}{x}J_n(x) = J_{n-1}(x)+J_{n+1}(x)$.
This is usually done "in the opposite way", i.e. by proving that the solutions of certain differential equations of the Riccati type are given by ratios of adjacent Bessel functions.
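For what it's worth, the identity is also easy to test numerically; here is a sketch using scipy.special.jv, where the order $n=2$ and the point $z=1.3$ are arbitrary sample choices:

    from scipy.special import jv

    def Q(n, z):
        return jv(n + 1, z) / (z * jv(n, z))

    n, z, h = 2, 1.3, 1e-6
    lhs = (Q(n, z + h) - Q(n, z - h)) / (2 * h)      # central difference for Q_n'(z)
    rhs = 1 / z - 2 * (n + 1) / z * Q(n, z) + z * Q(n, z) ** 2
    print(lhs, rhs)                                  # agree up to the finite-difference error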
|
H: Given $ax^2+bx+c=0$ with two real roots, $x_2>x_1$, find a quadratic equation whose roots are $x_1+1$ and $x_2-1$ without solving the first equation
Roots of the equation $ (1): ax^2+bx+c=0$ are $x_{1}$ and $x_{2}$. They are both real.
Without solving the first equation, make up a new quadratic equation such that one of its roots is $x_{1} + 1$ and the second one is $x_{2}-1$. Note that $x_{2}>x_{1}$.
I started to solve this with Vieta's theorem but I couldn't continue. A detailed explanation would be appreciated!
Also I don't know what does not solve mean, I can't use a quadratic formula or something more specific?
AI: $f(x)=ax^2+bx+c=a(x-x_1)(x-x_2)$ is given to you. Consider
\begin{align*}
g(x)&=a(x-(x_1+1))(x-(x_2-1))\\
&=a(x-x_1-1)(x-x_2+1)\\
&=a(\color{red}{(x-x_1)}-1)(\color{blue}{(x-x_2)}+1)\\
&=a(x-x_1)(x-x_2)+a(x-x_1)-a(x-x_2)-a\\
&=f(x)+a\underbrace{(x_2-x_1)}_{\text{given }>0}-a.
\end{align*}
Now $x_2-x_1=\sqrt{(x_1+x_2)^2-4x_1x_2}=\sqrt{\left(\frac{b^2}{a^2}\right)-\frac{4c}{a}}=\frac{\sqrt{b^2-4ac}}{a}$
Thus
$$g(x)=ax^2+bx+c+\sqrt{b^2-4ac}-a$$
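For instance (a quick check), taking $f(x)=x^2-6x+8$, so that $x_1=2$ and $x_2=4$, the formula gives
$$g(x)=x^2-6x+8+\sqrt{36-32}-1=x^2-6x+9=(x-3)^2,$$
whose roots are indeed $x_1+1=3$ and $x_2-1=3$.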
|
H: Residue of $\frac{1}{\cosh(z)}$.
When looking at $f(z)= \frac{1}{\cosh(z)}$, I found a singularity at $i \frac{\pi}{2}+i \pi k$ with $k \in \mathbb{Z}$ which has to be a pole of order 1. Now, when looking for the residue at that pole, is it enough to look at $\lim_{z \to i \frac{\pi}{2}} \frac{z-i \frac{\pi}{2}}{\cosh(z)}=\frac{1}{\sinh(i \frac{\pi}{2})}=-i$? Or am I ignoring certain values of k?
AI: The residue is equal to $$\lim_{z \to z_a} \frac{z-z_a}{\cosh{z}}= \lim_{z \to z_a} \frac{1}{\frac{d}{dz}\; \cosh{z}}= \lim_{z \to z_a} \frac{1}{\sinh{z}}$$
Where $z_a=i \pi \left(k+\frac{1}{2}\right)$ are the singularities. Using the hyperbolic trigonometric identity: $$\cosh^2{z}-\sinh^2{z}=1$$
Now, at the singularities we have $\cosh{z}=0$, and we are interested in $\sinh{z}$ there in order to evaluate the residues. Therefore, $$\sinh^2{z}=-1$$ $$\sinh{z}= \pm i$$
Now, plug this into the expression above to evaluate the residues and you get:
$$\operatorname{Res}\left(\operatorname{sech};z_k \right) = \frac{(-1)^k}{i} = (-1)^{k+1}\, i.$$
|
H: Solving the system $x\sqrt{y} + y\sqrt{x} = 30$, $x\sqrt{x} + y\sqrt{y} = 35$
I'm stuck on this problem:
$$ \begin{cases} x\sqrt{y} + y\sqrt{x} = 30 \\ x\sqrt{x} + y\sqrt{y} = 35\end{cases} $$
I've not solved this kind of problem before. I tried formula for square of sum and also sum of cubes to possibly isolate an expression. I've also tried substituting $ \sqrt{xy}$ or some other expression with a new variable, but couldn't really get anywhere.
P.S. I'm aware that you can easily get the answers by just trying to plug in the numbers, I'm looking for a more general, algebraic solution.
AI: From
$$\begin{cases} x\sqrt{y} + y\sqrt{x} = 30 & \ \ (a)\\ x\sqrt{x} + y\sqrt{y} = 35& \ \ (b)\end{cases}$$
$3 \times$(a) + (b) gives :
$$(\sqrt{x}+\sqrt{y})^3=5^3\ \ \iff \ \ \sqrt{x}+\sqrt{y}=5 \ \tag{1}$$
Besides, (b) - (a) gives :
$$x(\sqrt{x}-\sqrt{y})-y(\sqrt{x}-\sqrt{y})=5 \ \iff$$
$$(\sqrt{x}-\sqrt{y})^2(\sqrt{x}+\sqrt{y})=5 \tag{2}$$
Taking (1) into account in (2), one gets :
$$\sqrt{x}-\sqrt{y}=\pm1\tag{3}$$
Gathering (1) and (3), we obtain easily the two solutions :
$$(x,y)=(4,9) \ \text{and} \ (x,y)=(9,4) $$
as confirmed by the following graphical representation ((a) in blue and (b) in red) :
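One can also check the solutions directly: $4\sqrt{9}+9\sqrt{4}=12+18=30$ and $4\sqrt{4}+9\sqrt{9}=8+27=35$.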
|
H: How do I define inverse isomorphisms between Hom-sets?
Let $S_q(X;R)$ denote the free $R$-module with basis the singular $q$-simplices $\{\sigma:\Delta^q\to X\}$. I am trying to prove that $S^q(X;R)\cong Hom_\mathbb{Z}(S_q(X;\mathbb{Z}),R)$. We have that $S^q(X;R)=Hom_R(S_q(X; R),R)$. We claim that $Hom_\mathbb{Z}(S_q(X;\mathbb{Z}),R)\cong Hom_R(S_q(X; R),R)$. Let $\phi:\mathbb{Z}\to R$ be defined where $\phi(1)=1_R$ and $\phi(0)=0_R$. I claim we should define $j:Hom_\mathbb{Z}(S_q(X;\mathbb{Z}),R)\to Hom_R(S_q(X; R),R)$ by $j(f)(\Sigma_{i=1}^n r_i\sigma_i)=\Sigma_{i=1}^n\phi(r_i)f(\sigma_i)$ and $k:Hom_R(S_q(X; R),R)\to Hom_\mathbb{Z}(S_q(X;\mathbb{Z}),R)$ by $k(g)(\sum_{i=1}^nr_i\sigma_i)=\sum_{i=1}^nr_ig(\sigma_i)$. However, these don't seem to be inverse to each other. What should I do?
AI: Now that you've clarified the notation, I'll turn my comment into an answer.
If $S_q(X;R)$ is the free $R$-module on the $q$-simplices then $S_q(X;R)= S_q(X;\mathbb{Z} ) \otimes_{\mathbb{Z} } R$, i.e. the free $R$-module functor factors as $\mathrm{Set} \to \mathbb{Z}-\mathrm{mod} \to R-\mathrm{mod}$ where the first functor is the free abelian group functor and the second is extension of scalars.
So, the isomorphism $\mathrm{Hom}_{\mathbb{Z} }( S_q(X; \mathbb{Z}), R) \cong \mathrm{Hom}_{R }( S_q(X;R), R )$ follows from the fact that if $F$ denotes the extension of scalars functor and $U$ the restriction of scalars functor then $F \dashv U$, i.e. for an abelian group $A$ and an $R$-module $B$ we have that $\mathrm{Hom}_{\mathbb{Z} }( A, U(B)) \cong \mathrm{Hom}_{R }( F(A), B )$.
In your question you have some elements denoted as $r_i$ and it's unclear to me whether these are supposed to be elements of $\mathbb{Z}$ or elements of $R$, it looks like sometimes it is one and sometimes it is the other.
The isomorphism $\mathrm{Hom}_{\mathbb{Z} }( A, U(B))$ to $\mathrm{Hom}_{R }( F(A), B )$ sends $f$ to the map $\tilde{f}$ that is evaluated on simple tensors by $\tilde{f}(x \otimes r)=r f(x)$.
|
H: Calculate total no . of case per category, given case rate per $100,000$ and total no. of cases
I have the information on case rate per category
Eg - $$A \to 97 \text{ per } 100,000$$
$$B \to 169 \text{ per } 100,000$$
$$C \to 189 \text{ per } 100,000$$
$$D \to 234 \text{ per } 100,000$$
$$E \to 241 \text{ per } 100,000$$
$$F \to 420 \text{ per } 100,000$$
The total no. of tests is $148,126$ and the total no. of confirmed cases is $10,490$
How should I calculate the total no. of cases per category?
Thank you
AI: We have $$
97 + 169 + 189 + 234 + 241 + 420=1350$$
So out of $10490$ cases, we expect that $$\frac{97}{1350}\cdot10490 = 753.726$$ cases belong to category A, and so on for the other categories.
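In code, assuming (as above) that every category covers the same underlying population, the proportional split looks like this:

    rates = {"A": 97, "B": 169, "C": 189, "D": 234, "E": 241, "F": 420}
    total_cases = 10_490
    total_rate = sum(rates.values())   # 1350

    for cat, r in rates.items():
        print(cat, round(r / total_rate * total_cases, 1))
    # A 753.7, B 1313.2, C 1468.6, D 1818.3, E 1872.7, F 3263.6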
|
H: What is the limiting probability distribution of a prime random walk
This random walk has infinitely many possible moves. These are the moves, ranked from most common to least: $(\times 0+1,+1,\times2,\times3,\times5,\times7,\times11,\times13,\times17,\times19,\times23,\times29,...,\times P(n),...)$
The most common operation has a probability of $1/2$, the second $1/4$, and the $k$th most common has a $1/2^k$ chance to happen.
So a possible walk would be $((((((1)\times0+1)+1)\times7)+1)\times2)$ the place it would have landed on would be $30$.
So let's say $P_m(k)$ is the probability that $k$ is the endpoint of an $m$-step walk.
I would like to know the limit as $m$ goes to infinity, or a good estimate of $$f(k) = \lim_{m \to \infty} P_m(k).$$
AI: This is an interesting question, and I'm unsure if there is a nice answer. However, we can note that the function $f$ satisfies the recurrence
$$f(k) = \frac14 f(k - 1) + \sum_{p | k} \frac{f(k / p)}{2^{m(p) + 2}}$$
where the sum runs over the primes $p$ dividing $k$ (the recurrence is for $k \geq 2$), and $p$ is the $m(p)$th prime number; for example, $m(2) = 1$ and $m(7) = 4$. This is because $P_m(k)$ satisfies the recurrence
$$P_m(k) = \frac14 P_{m-1}(k - 1) + \sum_{p | k} \frac{P_{m-1}(k / p)}{2^{m(p) + 2}}$$
and $P_{m - 1} \sim P_m$ in the limit of $m \to \infty$ (supposing that the limit $f(k)$ exists, of course).
Some properties that we can observe:
For all $k$, $f(k) > 0$. This is due to the fact that no matter what state you are in, it's always possible to return to any other state (if you think of this as an infinite state space Markov chain, it is recurrent).
$f(k)$ is generally decreasing (but not monotonically so). Your state is overwhelmingly likely to be a low number in the limit $m \to \infty$, as calculations can show that
$$f(1) = \lim_{m \to \infty} P_m(1) = 1/2, \quad f(2) = 3/16, \quad f(3) = 5/64, \quad f(4) = 11/256$$
Allowing $+1$ to be a transition makes $f(k)$ computationally intensive (at least with what I know) to compute, since $f(k)$ depends on $f(k - 1)$. (If you are interested in complexity theory, it seems that $f(k)$ probably cannot be calculated in polynomial time in the number of bits of $k$. But don't quote me on it.) On the other hand, if $+1$ was not an allowed state transition, then $f(k)$ would be quite easier to compute.
This is all I have for now; I'll try to see if an asymptotic expression of $f(k)$ can be obtained when I have time. Hope this helps!
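For what it's worth, these values can be reproduced with a small Monte Carlo sketch (pure Python; I truncate the prime tail after the tenth prime, an assumption that only misassigns probability mass $2^{-12}$):

    import random

    PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29)

    def step(s):
        # Moves ranked by probability: x*0+1 (1/2), +1 (1/4), *P(k) (1/2^(k+2)).
        u = random.random()
        if u < 0.5:
            return 1
        if u < 0.75:
            return s + 1
        threshold = 0.75
        for k, p in enumerate(PRIMES):
            threshold += 2.0 ** -(k + 3)
            if u < threshold:
                return s * p
        return s * PRIMES[-1]  # truncated tail (assumption)

    def estimate(k, steps=50, trials=50_000):
        hits = 0
        for _ in range(trials):
            s = 1
            for _ in range(steps):
                s = step(s)
            hits += (s == k)
        return hits / trials

    print([estimate(k) for k in (1, 2, 3, 4)])
    # roughly [0.5, 0.1875, 0.078, 0.043] = [1/2, 3/16, 5/64, 11/256]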
|
H: Distribution of $X_{N(t)+1}$ in a Poisson process
Assume $\{N(t)\}_{t\geq 0}$ is a Poisson process with parameter $\lambda$, and $X_n$ is the $n^{th}$ interarrival time, $n \in \{1, 2, 3, ...\}$, which means $X_n$ has an exponential distribution with parameter $\lambda$.
Then how to compute the distribution of $X_{N(t)+1}$ ?
(This is a problem from Stochastic Process (Ross), chapter 3: renewal process)
AI: To compute the cdf, simply write
$$\mathbb{P}(X_{N(t)+1}\leq a) =\sum_{n=0}^\infty \mathbb{P}(X_{n+1} \leq a, \, N(t) = n).$$
Now observe that $\{N(t) =0\} = \{X_1 >t \}$ and
$$\left\{N(t) = n\right\} = \left\{X_1+ \cdots + X_n \leq t < X_1 + \cdots + X_{n+1}\right\}, \quad \forall n \geq 1.$$
Thus it remains to compute
$$\begin{align}\mathbb{P}(X_{n+1} \leq a, \, N(t) = n) &= \mathbb{P}(X_{n+1}\leq a, \,X_1+ \cdots + X_n \leq t < X_1 + \cdots + X_{n+1} )\\
&= \mathbb{P}(X_{n+1} \leq a, \, S_n \leq t < S_n + X_{n+1})\end{align}$$
where $S_n \sim \mathrm{Gamma}(n,\lambda)$ and $X_{n+1} \sim \mathrm{Exp}(\lambda)$ are independent. This is equal to
$$\int_{[0,\infty)^2} \mathbf{1}_{x \leq a, \, y\leq t < x+y}\,\lambda e^{-\lambda x} \frac{\lambda^n}{(n-1)!}y^{n-1}e^{-\lambda y}\, dxdy.$$
Using Fubini's theorem we get
$$\begin{align}\mathbb{P}(X_{N(t)+1}\leq a) &= \mathbb{P}(t < X_1 \leq a)+\sum_{n=1}^\infty \int_{[0,\infty)^2} \mathbf{1}_{x \leq a, \, y\leq t < x+y}\,\lambda e^{-\lambda x} \frac{\lambda^n}{(n-1)!}y^{n-1}e^{-\lambda y}\, dxdy\\
&=\mathbb{P}(t < X_1 \leq a)+\int_{[0,\infty)^2} \mathbf{1}_{x\leq a, \, y \leq t < x+y}\, \lambda^2 e^{-\lambda x}\, dx dy, \end{align}$$
which I now leave you to calculate.
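As a sanity check, here is a short simulation sketch comparing the empirical cdf with the formula above; the parameter values $\lambda = 1$, $t = 2$, $a = 2.5$ are my own sample choices, and the $y$-integration has been carried out (for fixed $x$, the indicator confines $y$ to an interval of length $\min(x,t)$):

    import numpy as np

    rng = np.random.default_rng(0)
    lam, t, a = 1.0, 2.0, 2.5   # sample parameters (my choice)

    def straddling_interarrival():
        # X_{N(t)+1} is the interarrival whose arrival time crosses t.
        s = 0.0
        while True:
            x = rng.exponential(1 / lam)
            if s + x > t:
                return x
            s += x

    samples = np.array([straddling_interarrival() for _ in range(100_000)])
    print((samples <= a).mean())            # empirical P(X_{N(t)+1} <= a)

    # P(t < X_1 <= a) + int_0^a lam^2 e^{-lam*x} min(x, t) dx:
    xs = np.linspace(0.0, a, 100_001)
    integral = np.trapz(lam**2 * np.exp(-lam * xs) * np.minimum(xs, t), xs)
    print(max(np.exp(-lam * t) - np.exp(-lam * a), 0.0) + integral)
    # both are approximately 0.754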
|
H: Using Fermat's Little Theorem for remainders
Using Fermat's little theorem, we know that
$$k^{p-2} \cdot k \equiv 1 \pmod p.$$
To find the multiplicative inverse of $6$ modulo $17$, we need to calculate $6^{15} \pmod {17}$. All congruences below hold modulo $17$.
$$6^{15} \equiv 6^8 \cdot 6^4 \cdot 6^2 \cdot 6 \equiv 16\cdot4\cdot2\cdot6 \equiv 3 \pmod {17}$$
I need help to understand the solution of $6^{15} \equiv 3 \pmod {17}$.
Thanks.
AI: $15 = 8+4+2+1$, hence $6^{15} = 6^8 \cdot 6^4\cdot 6^2\cdot 6^1$. Then:
$6^1 \equiv 6 \pmod{17}$
$6^2 = 36 \equiv 2 \pmod{17}$
$6^4 = (6^2)^2 \equiv 2^2 = 4 \pmod{17}$
$6^8 = (6^4)^2 \equiv 4^2 = 16 \pmod{17}$
Therefore $6^{15} \equiv 16 \cdot 4\cdot 2 \cdot 6 = 16\cdot 48 = (17-1)(51-3) \equiv 3 \pmod{17}$
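A one-line check in Python, whose built-in three-argument pow performs exactly this modular exponentiation:

    print(pow(6, 15, 17))   # 3
    print(6 * 3 % 17)       # 1, confirming 3 is the inverse of 6 modulo 17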
|
H: How to calculate $ \int_{0}^{2K(k)} dn^2(u,k)\;du$?
How to calculate $$ \int_{0}^{2K(k)} dn(u,k)^2\;du,$$ where $dn$ is the Jacobi elliptic function (dnoidal) and $k \in (0,1)$ is the modulus? I know from Formula $(110.07)$ of [1] (see page 10) that
$$ \int_{0}^{K(k)} dn(u,k)^2\;du=E(k),$$
where $E$ is the complete normal elliptic integral of the second kind. From this, can I conclude that
$$ \int_{0}^{2K(k)} dn(u,k)^2\;du=2E(k)?$$
[1] P. F. Byrd, M. D. Friedman. Handbook of Elliptic Integrals for Engineers and Scientists. Springer-Verlag New York Heidelberg Berlin, $1971$.
AI: I don't know much about these functions but I believe that it is not necessarily true since:
$$I=\int_0^{2K(k)}dn(u,k)^2du$$
$v=u/2$ then $du=2dv$ and our integral becomes:
$$I=2\int_0^{K(k)}dn(2v,k)^2dv$$
and this term of $2v$ rather than $v$ could change the value of the integral depending on what this $dn$ function is
|
H: Fermat's little theorem $a^y \pmod{p}$ when $y
I have a problem where I need to compute $425^{17} \pmod{541}$.
$p=541$ is prime, so applying Fermat's little theorem $a^{p-1} \equiv 1 \pmod{p}$ we get
$425^{540} \equiv 1 \pmod{541}$
But how should I continue?
I tried writing $17 = 0\cdot 540 + 17$, but that does not seem to lead anywhere...
AI: If you insist on doing it by hand, use repeated squaring (via a calculator for squaring and "remaindering" by $541$):
$425^2 $ has remainder $472$ on division by $541$.
So $425^4 = 472^2 \pmod{541}$ and that is again (by a standard calculator) equal to $433$.
Hence $425^8 = 433^2 \pmod{541}$ which equals $303$.
Hence $425^{16} = 303^2 \pmod{541}$, and this is $380$.
So finally $425^{17}= 425 \cdot 380 \pmod{541} = 282$.
Which Python will immediately compute (essentially via this same algorithm) by pow(425, 17, 541), which is a bit easier.
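The repeated squaring above is essentially what that built-in does under the hood; a minimal sketch:

    def power_mod(a, e, m):
        # Square-and-multiply, mirroring the hand computation above.
        result, base = 1, a % m
        while e:
            if e & 1:
                result = result * base % m
            base = base * base % m
            e >>= 1
        return result

    print(power_mod(425, 17, 541))   # 282, matching pow(425, 17, 541)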
|
H: Equilibrium Points Help
Hi I'm doing work over the summer for my Differential Equations Module. Finding the equilibrium points here is important for all follow on questions and wanted to check to see if I'm wayyy out? Please could someone let me know if this is okay or whether I need to go back to the drawing board? Thank you:)
The question:
Determine the number and location of the equilibrium points of the
system below
$\dot{x}=yx^2 -x$
$\dot{y}=-xy-x^2y+4y^2+4xy^2$
My answer:
I ended up with the equilibrium points $(x_e,y_e)=(0,0), (2+2\sqrt{2},\frac{-1+\sqrt{2}}{2}), (2-2\sqrt{2},\frac{-1-\sqrt{2}}{2})$
I did this by finding that when $\dot{x}=0$ then $x=0$ or $xy=1 \Rightarrow x=\frac{1}{y}$ and then subbing into $\dot{y}$
EDIT: Thank you for all the help I've gone over it and I'm not sure how I managed to mess up the $\dot{y}$ factorisation so badly!
I checked and changed my work and got the following (for future reference):
when $x=0, \dot{y}=0=4y^2 \Rightarrow y=0$
and when $x=\frac{1}{y}, \dot{y}=0=-1-\frac{1}{y}+4y^2+4y$ and multiplied this by $y$
To this I found the factor $(y+1)$ and used algebraic division to find other factors.
I got $\dot{y}=0=(y+1)(2y+1)(2y-1)$ and then found the corresponding $x$ values
My final equilibrium points are $(x_e,y_e)=(0,0), (-1,-1), (2,\frac{1}{2}), (-2,-\frac{1}{2})$
AI: You forgot some solution, for example:
$$(x,y)=-(1,1)$$
You can check the number of solution since you have a third degree equation in $y$. So it gives three solutions.
$$-xy-x^2y+4y^2+4xy^2=0$$
Since we have $xy=1$
$$-1-x+4y^2+4y=0$$
$$-1-\dfrac 1 y+4y^2+4y=0$$
$$-(y+ 1) +4y^2(y+1)=0$$
$$(y+ 1)(4y^2-1)=0$$
$$\implies y=-1 \implies x=-1$$
The other solutions are:
$$y=\pm \dfrac 12 \implies x=\pm 2$$
So the Equilibrium Points are:
$$S= \{(2,\dfrac 12),(-2,-\dfrac 12),(0,0),(-1,-1) \}$$
You applied the method correctly.
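If you want to double-check the four points symbolically, here is a sketch using sympy (the order of the solutions may vary):

    import sympy as sp

    x, y = sp.symbols("x y", real=True)
    eqs = [y * x**2 - x, -x*y - x**2*y + 4*y**2 + 4*x*y**2]
    print(sp.solve(eqs, [x, y]))
    # [(-2, -1/2), (-1, -1), (0, 0), (2, 1/2)]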
|
H: $\lim_{\lambda \to \infty}\dfrac{1}{\lambda}\int_0^{\lambda}yf(y)dy = 0$?
Assume that $f : [0,\infty) \to \mathbb{R}$ is a Borel-measurable function. Assume also that it is integrable with respect to the Borel measure on $[0,\infty)$.
Is it true that:
$$\lim_{\lambda\to\infty}\dfrac{1}{\lambda}\int_0^{\lambda}yf(y)dy = 0?$$
I think that it is true, but could not prove it. I appreciate any hint.
AI: Hint: Let $F(y)=\int\limits_0^{y} f(t)dt$. Then $\frac 1 {\lambda} \int\limits_0^{\lambda} yf(y)dy=\frac 1 {\lambda} [yF(y)|_0^{\lambda} - \int\limits_0^{\lambda} F(y)dy] \to \int_0^{\infty} f(t)dt -\int_0^{\infty} f(t)dt =0$.
[If $g(x) \to c$ as $x \to \infty$ then $\frac 1 {\lambda}\int\limits_0^{\lambda} g(t)dt \to c$].
|
H: Show that $\binom{n}{1}-3\binom{n}{3}+3^2\binom{n}{5}\cdots=0$
Show that if $n\equiv 0\pmod 6$ (although the statement holds true for $n\equiv 0\pmod 3$)
$\binom{n}{1}-3\binom{n}{3}+3^2\binom{n}{5}\cdots=0$
I am having trouble finding the appropriate polynomial to resolve this sum. Any hints? I prefer hints to complete solutions. I also have tried to come up with a probability story that gives me the relation above to no avail.
AI: Considering the odd-index terms in the binomial expansion of $(1+i\sqrt{3})^n$, and noting that $1+i\sqrt{3}=2e^{i\pi/3}$, we have that
$$\binom{n}{1} - 3 \binom{n}{3} + 3^2 \binom{n}{5} + \ldots = \sum_{k=0}^\infty \binom{n}{2k+1} (-3)^k = \frac{\operatorname{Im}\left((1+i\sqrt{3})^n\right)}{\sqrt{3}} = \frac{2^n}{\sqrt{3}} \sin\left(\frac{n\pi}{3}\right).$$
Since $n \equiv 0 \pmod{6}$, we have that $n=6k$ for some $k\in\mathbb{Z}$. Plug that in and you will have the result.
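A quick numerical confirmation for a few multiples of $6$ (a sketch; math.comb returns $0$ once the lower index exceeds $n$, so the sum may safely run to $\lfloor n/2\rfloor$):

    from math import comb

    def s(n):
        return sum(comb(n, 2*k + 1) * (-3)**k for k in range(n // 2 + 1))

    print([s(n) for n in (6, 12, 18, 24)])   # [0, 0, 0, 0]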
|
H: $T:X\to Z$, $S:Y\to Z$ be given linear maps and $X,Y,Z$ be given Banach spaces, if $\forall x\in X$, $Tx=Sy$ has unique solution y.
$T:X\to Z$, $S:Y\to Z$ be given linear maps and $X,Y,Z$ be given Banach spaces, if $\forall x\in X$, $Tx=Sy$ has unique solution y. Then $M:X\to Y$, $Mx=y$ is continuous.
The intuition says that $M=S^{-1}T$ because for every $x\in X$ I can find unique $y$
such that $Tx=Sy$.
I first try to show $S$ is invertible and then $S^{-1}T$ is bijective then since $S^{-1}T:X\to Y$ then it is isometry so it is continuous. However I'm lost at showing it.
Secondly I try to use Closed Graph Theorem but again without showing the invertibility of $S$ I cannot go further.
I am open for every suggestion and solutions, hints.
AI: Assuming continuity of $S$ and $T$ this follows easily by Closed Graph Theorem: Let $x_n \to x, y_n \to z$ with $y_n=Mx_n$. Then $Tx_n=Sy_n$. Hence $Sz=\lim Sy_n =\lim Tx_n=Tx$. By definition this gives $z=Mx$. Hence $M$ has closed graph.
|
H: $M_x$ is free $\Rightarrow \widetilde{M}$ is locally free at $x$
Let $X=\text{Spec}(A)$ where $A$ is noetherian. Suppose $M$ is a finitely generated $A$-module and that $M_x$ is a free $A_x$-module with finite rank for some $x\in X$. Show that there exists an open neighbourhood $U\subset X$ of $x$ such that $\widetilde{M}\big|_U$ is free.
[here $\widetilde{M}$ is the sheaf on $X$ defined by $\widetilde{M}(X_f)=M_f$ when $f\in A$]
Suppose $M_x$ is generated by $\frac{m_1}{f_1},...,\frac{m_r}{f_r}\in M_x$ with $m_i\in M$ and $f_i\notin x$. In that case $\frac{m_i}{f_i}\in M_{f_i}=\widetilde{M}(X_{f_i})$. Defining $f:=f_1\cdots f_r$, we have $X_f\subset X_{f_i}$ for all $i$, so we may assume the generators are of the form $\frac{m_i}{f^{k_i}}\in M_f$.
I'm trying to prove that $\widetilde{M}\big|_{X_f}$ is free. The fact that $M$ is finitely generated means that there is an exact sequence $A^n\stackrel{\phi}{\to} M\to 0$ where $\phi(a_1,...,a_n)=\sum_{i=1}^na_iu_i$ for some $u_i\in M$. Localizing at $f$, we have a new exact sequence $A_f^n\stackrel{\phi_f}{\to}M_f\to 0$ and an isomorphism $A^n_f/\ker(\phi_f)\simeq M_f$.
I really can't see how to prove that $A_f^n/\ker(\phi_f)$ is free, and also why is the noetherian condition necessary.
Is there maybe a better way to approach this?
AI: Since $M_x$ is generated by $\frac{m_1}{f_1}, \ldots, \frac{m_r}{f_r}$ it is also generated by $\frac{m_1}{1}, \ldots, \frac{m_r}{1}$. That means the isomorphism $A_x^{\oplus r} \cong M_x$ can be written as $\varphi_x$ for some homomorphism $\varphi: A^{\oplus r} \rightarrow M$. Since $M$ is finitely generated $\text{coker }\varphi$ is finitely generated. And since $A$ is noetherian $\text{ker }\varphi$ is finitely generated. As localization is an exact functor, we have $(\text{ker }\varphi)_x = \text{ker }(\varphi_x) = 0$ and $(\text{coker }\varphi)_x = \text{coker}(\varphi_x) = 0$, which means each of the generators of $\text{ker } \varphi$ and $\text{coker }\varphi$ restricts to $0$ in a neighborhood of $x$. Since there are only finitely many of these generators it is clear that we can choose a $f \in A$ (edit: with $f \not \in x$) such that $U=D(f)$ is contained in the intersection of these neighborhoods. That means that $\text{ker }\varphi_f = 0$ and $\text{coker }\varphi_f=0$. So $\varphi_f$ is an isomorphism showing that $\widetilde{M}_{|U}$ is free.
|
H: Prove $x^4 + x^2 +1$ is always greater than $x^3 + x$
Let's say $P$ is equal to $x^4 + x^2 +1$ and $Q$ is equal to $x^3 + x$.
For $x <0$, $P$ is positive and $Q$ is negative. Hence, in this region, $P>Q$.
For $x=0$, $P>Q$.
Also, for $x = 1$, $P>Q$.
For $x > 1$, I factored out $P$ as $x^2(x^2+1) + 1$ and $Q$ as $x(x^2+1)$. For $x > 1$, $x^2(x^2+1) > x(x^2+1)$, hence $P>Q$.
The part where I have the problem is I can't prove this for the range $0 < x < 1$ without the help of a graphing calculator. Can anyone help?
What I've done in this region so far is:
Prove that $P$ and $Q$ is always increasing in this region,
The range for $P$ starts from $1 < P < 3$, and
The range for $Q$ starts from $0 < Q < 2$.
The only thing I need to prove now is that $P$ and $Q$ will not intersect at $0 < x < 1$, but I can't prove this part.
AI: Let
$$H=P-Q=x^4-x^3+x^2-x+1$$
As you have already done the case $x<0$, we will prove the case $x\geq 0$. Clearly, $H(0)=1>0$. For $x\in (0,1]$, we know
$$1\geq x$$
$$x^2\geq x^3$$
This implies
$$H=x^4+(x^2-x^3)+(1-x)>0$$
For $x>1$, we know
$$x^4>x^3$$
$$x^2>x$$
This implies
$$H=(x^4-x^3)+(x^2-x)+1>0$$
and we are done, since $H>0$ for every real $x$; in particular $H=P-Q$ has no real roots.
|
H: Quotient of continuous local martingale with quadratic variation
Consider a local martingale $(M_t)_{t\ge 0}$ with continuous paths and $\lim_{t\rightarrow\infty}[M]_t=\infty$ a.s.
I want to show that
$$\lim_{t\rightarrow\infty}\frac{M_t}{[M]_t}=0\quad\text{a.s.}$$
I tried using fatou's lemma giving
\begin{align}
\liminf_{t\rightarrow\infty}E\bigg[\frac{M_t}{[M]_t}(1_{\{M_t<1\}}+1_{\{M_t\ge 1\}})\bigg]\le\liminf_{t\rightarrow\infty}E\bigg[\frac{M_t^2}{[M]_t}\bigg],
\end{align}
but I do not know how to go on further.
Another idea is to use Borel-Cantelli; since only countably many stopping times are involved, I might be able to use it to solve this.
I would be grateful for any hint or help.
AI: Note first that the condition $\langle M,M \rangle_\infty = \infty$ a.s. is essential; without it you could take $M$ to be something like Brownian motion stopped at $t=1$. Given it, you can use the method of time change: there exists a Brownian motion $B$ such that $M_t = B_{\langle M,M \rangle_t}$. Since you can show for Brownian motion that $\lim_{t \rightarrow \infty} \frac{B_t}{t} = 0$ a.s., the result still holds when $t$ is replaced by an increasing function that converges to $\infty$ a.s., so
$$\lim_{t \rightarrow \infty} \frac{M_t}{\langle M,M \rangle_t} = \lim_{t \rightarrow \infty} \frac{B_{\langle M,M \rangle_t}}{\langle M,M \rangle_t} = 0.$$
|
H: Giving an proof on a combinatorial statement
Prove with a combinatorial argument that $\displaystyle\binom{a+b}{2}-\binom{a}{2}-\binom{b}{2}=ab.$
I'm assuming we can give a committee forming argument, but I'm not sure how to start.
AI: Rewrite as $$\binom{a+b}{2}=\binom{a}{2}\binom{b}{0}+\binom{a}{1}\binom{b}{1}+\binom{a}{0}\binom{b}{2}$$ and note that both sides count the number of ways to choose a pair of people from $a$ men and $b$ women. The left hand side is clear. The right hand side performs the count according to three cases:
$2$ men and $0$ women
$1$ man and $1$ woman
$0$ men and $2$ women
|
H: Find the PDF, and the conditional PDFs of $Y$ when $Y = X + Z$, where $X$ and $Z$ are exponential functions.
$Y = X + Z$
$X$ and $Z$ are independent, and are exponentially distributed with parameters: $\lambda_X=5$ and $\lambda_Z=1$
a) Find the PDF of $Y$.
b) Find the conditional pdf of $Y$ when $X = 2$, and also the conditional PDF of $Y$ when X = x, where $x\in\mathbb{R}$.
c) Find the minimum mean square estimate of $X$ when $Y = y$, where $y > 0$
I have found that $$f_X(x)= \begin{cases} 5e^{-5x}&x \geq 0 \\ 0 & otherwise\end{cases}$$
and $$f_Z(z)= \begin{cases} e^{-z}&z \geq 0 \\ 0 & otherwise\end{cases}$$
but I do not know where to go from there for the first part.
AI: Use a double integral to generate the cdf of $Y=X+Z$ from the pdfs of $X$ and $Z$:
$$F_Y(y_0)=\int\int_{x+z<y_0} f_X(x)f_Z(z)\ dz\ dx$$
In writing down the integral explicitly it will help to sketch the region in the $xz$-plane where $Y\le y_0$ for a fixed but arbitrary $y_0$. Once you have the cdf, differentiate to obtain the pdf. That should get you started.
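For reference, carrying this out (or computing the convolution $f_Y(y)=\int_0^y f_X(x)f_Z(y-x)\,dx$ directly) gives, for $y>0$,
$$f_Y(y)=\int_0^y 5e^{-5x}e^{-(y-x)}\,dx=\frac{5}{4}\left(e^{-y}-e^{-5y}\right),$$
which you can use to check your answer to part a).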
|
H: Basis for polynomials of degree k or lower
Is there a simple way to show that $\{(x-i)^n(x+i)^{k-n}\}_{n=0,...,k}$ is a basis of $\mathbb{C}_k[x]$ (space of polynomials of degree $\le k$) for $k\ge 2$ even? And likewise that $\{(x+w)^n(x+w^2)^{k-n}\}_{n=0,...,k}$ is a basis, where $w$ is the third root of unity.
AI: Let's consider $(x-a)^n(x-b)^{k-n}$ for $0\le n\le k$ where $a\ne b$. All we need
is to show these are linearly independent. Suppose then
$$\sum_{n=0}^kc_n(x-a)^n(x-b)^{k-n}=0$$
identically. Then for all $x\ne b$,
$$\sum_{n=0}^kc_n\left(\frac{x-a}{x-b}\right)^n=0.$$
But unless all the $c_n$ are zero, this equation can only have finitely
many solutions for $(x-a)/(x-b)$, and so for $x$. But $\Bbb C$ is an infinite field.
|
H: What is the codomain of the function which inputs a set and outputs a vector whose entries are elements of that set?
Consider the mapping $m$ whose domain is a totally ordered set $S=\mathcal{P}(\{1,\pi,e\})\setminus \{\varnothing\}$ (where $\mathcal{P}$ represents the power set) and whose output is a vector where each component is an element of $S$.
$m(\{1,\pi,e \}) = [1,\pi,e]$
$m(\{1,e\}) =[1,e]$
Normally I would write $m:S\to ?$, but I don't know what the codomain is, since the output is in one of $\mathbb{R}^k$ for $k=1,2,3$.
What is the codomain of $m$?
Edit: Follow up: Really I want to extend this to a set where $S$ consists of a list of any $n$ real numbers. Then I guess the codomain is $\mathbb{R}\cup\mathbb{R}^2\cup\mathbb{R}^3\cup\ldots \cup \mathbb{R}^n$.
AI: Is the mapping even well-defined? Is
$$m(\{1,e\} )=[1,e]\ {\rm or}\ m(\{1,e\})=[e,1]$$
In the domain as you've described it,
$\{1,e\}=\{e,1\}$
but this subset will need to map to only one of $[1,e]$ and $[e,1]$. If you wish you could redefine the domain to be all ordered subsets (permutations) of
$$A=\{1, e, \pi\}$$
and then the mapping makes sense as a function. The codomain then is just $A\cup A^2\cup A^3$.
|
H: Why the variance of uniformly distribution is like that?
According to several references, the variance of the uniform distribution is given as:
$$\frac{1}{12} (b-a)^2$$
However, after calculating the variance from scratch:
$$\sum _{x=a}^b \frac{\left(x-\frac{a+b}{2}\right)^2}{b-a}$$
The result is this:
$$\frac{1}{12} (a-b-2) (a-b-1)$$
They are not the same.
Which is wrong?
AI: If $X \sim \mathcal{U}(a,b)$ (where $\mathcal{U}$ denotes the continuous uniform distribution), then $f(x) = \mathbb{I}_{(a,b)}/(b-a)$ and
$$
\mathbb{E}[X]
= \int_a^b \frac{x\ dx}{b-a}
= \frac{1}{b-a} \frac{b^2-a^2}{2}
= \frac{b+a}{2},
$$
where the last step uses $b^2-a^2 = (b-a)(b+a)$.
Further,
$$
\mathbb{E}[X^2]
= \int_a^b \frac{x^2\ dx}{b-a}
= \frac{1}{b-a} \frac{b^3-a^3}{3}
= \frac{b^2+ab+a^2}{3},
$$
hence
$$
\mathrm{Var}\ X = \mathbb{E}[X^2] - (\mathbb{E}[X])^2
$$
Can you plug in and finish the arithmetic?
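Plugging in,
$$\mathrm{Var}\ X = \frac{b^2+ab+a^2}{3}-\left(\frac{a+b}{2}\right)^2=\frac{4(a^2+ab+b^2)-3(a+b)^2}{12}=\frac{(b-a)^2}{12}.$$
As for your sum: it treats $X$ as a discrete uniform variable on $\{a, a+1, \dots, b\}$, and there the pmf should have denominator $b-a+1$, not $b-a$. With the correct normalization the discrete variance is $\frac{(b-a+1)^2-1}{12}$, which genuinely differs from the continuous $\frac{(b-a)^2}{12}$; the references and your computation are answering two different questions.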
|
H: Verifying the definition of convergence, or showing it does not converge.
Can someone please help me prove whether this sequence converges or not? I am having trouble figuring it out. Should I find some $\epsilon$ such that it's greater than our sequence? Thank you for your time and help!
For the following sequence how do I show it converges by guessing the limit? Also, how can I verify the definition of convergence, or show it does not converge.
(1) Sequence $\{x_k\}_{k=1}^\infty$ in $\mathbb{R}$ given by $$x_k = \left\{ \begin{array}{ll}
10^{100} & \mbox{if $k =10^{1000}$};\\
0 & \mbox{if $k\ne 10^{1000}$}.\end{array} \right.$$
$\text{Solution:}$ I believe this converges as $x_k$ goes to 0 but I am not entirely sure.
AI: First, yes, the sequence converges (specifically the limit is 0), and you can show this is true by the definition of convergence. If you understand the description of the sequence $(x_k)_{k\in\mathbb{N}}$ you have written, you can see that the sequence stays constant at 0 for an extremely long time, then all of a sudden on term $k = 10^{1000}$ it jumps erratically to $10^{100}$, and then the next term immediately goes back to 0, and the sequence stays constant at 0 forever onwards.
The definition of convergence to a limit $L$ says intuitively that no matter how small the tolerance $\varepsilon$, the difference between your sequence and the limit $L$ will be smaller than this tolerance $\varepsilon$ for all terms beyond some point in the sequence. So here, since the sequence stays constant at 0 for all terms beyond $k = 10^{1000}$, your gut instinct should be that the limit is 0. You then need to ask: is it true that, for all terms far enough out in the sequence, the difference between the sequence and 0 is smaller than any specified $\varepsilon > 0$? The answer here is yes, because if you go more than $10^{1000}$ terms into your sequence, the terms are all exactly 0, so the difference between these terms and the limit 0 is also 0, which is of course smaller than any positive tolerance $\varepsilon$.
It looks like another answer formalizes this notion of convergence of this sequence to the limit 0.
|
H: Prove that 9 divides $7\cdot5^{2n}+2^{4n+1}$
We have to prove that the following statement is true for all non zero natural numbers:
$$9|7\cdot5^{2n}+2^{4n+1}$$
AI: with congruence:
$7\cdot5^{2n}+2^{4n+1}\equiv$
$(-2)\cdot(-4)^{2n} + 2^{4n}\cdot 2\equiv $
$-2\cdot 16^n + 16^n \cdot 2 \equiv 0\pmod 9$.
Oh... I didn't actually expect it to end so soon.
....
By induction:
You can't go wrong with induction.
$7\cdot5^2 + 2^5 = 7\cdot25 + 32 = 207 = 9\cdot23$.
Okay... that was the base case.
If $7\cdot5^{2k} + 2^{4k+1}$ is divisible by $9$ then
$7\cdot5^{2(k+1)} + 2^{4(k+1)+1}=$
$7\cdot5^{2k}\cdot 25 + 2^{4k+1}\cdot 16=$
$7\cdot5^{2k}\cdot (16+9) + 2^{4k+1}\cdot 16=$
$16(7\cdot5^{2k} + 2^{4k+1}) + 9(7\cdot5^{2k})$.
And $9$ divides $7\cdot5^{2k} + 2^{4k+1}$ and $9$ divides $9(7\cdot5^{2k})$, so $9$ divides the sum, which is $7\cdot5^{2(k+1)} + 2^{4(k+1)+1}$.
So that's our induction step.
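A quick brute-force check in Python:

    print(all((7 * 5**(2*n) + 2**(4*n + 1)) % 9 == 0 for n in range(1, 200)))   # True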
|
H: Inclusion–exclusion principle for probability
Inclusion–exclusion principle for probability is as follows: https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle
(1)How to use this principle to show that:
$$\sum_{i=1}^nP(\{x_i\}\subset X)+\sum_{1\leq i_1<i_2\leq n}P(\{x_{i_1}, x_{i_2}\}\subset X)+\dots+(-1)^{n-1}P(\{x_1, x_2, \dots, x_n\}\subset X)=?$$
(2)Also, What is the $1-P(\{x_1,\dots, x_n\}\subset X^c)?$
EDIT: What I want is to see why $$1-\sum_{i=1}^nP(\{x_i\}\subset X)+\sum_{1\leq i_1<i_2\leq n}P(\{x_{i_1}, x_{i_2}\}\subset X)+\dots+(-1)^{n}P(\{x_1, x_2, \dots, x_n\}\subset X)=P(X\subset \Omega\setminus \{x_1,\dots, x_n\})$$
where $X=\{x_1, \dots, x_N\}$ of a finite set $\Omega$ with $|\Omega|\geq N$
AI: The statement doesn't look right..It should read:
$1-P(\{x_1,\dots,x_n\}\subset X^c)=\sum_k P(\{x_k\}\subset X)-\sum_{k\lt j}P(\{x_k,x_j\}\subset X)+\text{etc.}$
The basic idea is that the probability that all the subsets are in $X^c$ is $1-$ the probability that at least one subset is in $X$. To calculate the latter you add up the probabilities of one subset in $X$, but you need to subtract the probabilities that a pair of subsets is in $X$. This results from the basic relation $P(A\cup B)=P(A)+P(B)-P(A\cap B)$.
You then need to add back the probabilities for three subsets in $X$,etc.
|
H: What is this decomposition?
What is the name of the decomposition shown below?
AI: It is technically a QR factorisation (https://en.wikipedia.org/wiki/QR_decomposition#Rectangular_matrix).
It is quite trivial, but you can indeed observe that on the Right Hand Side, the matrix on the left is orthogonal, and the matrix on the right is upper-triangular.
Edit
As a commenter pointed out, the left matrix is not necessarily orthogonal. However, it at least has orthogonal columns (unless $f = 0$).
|
H: How is the rank of a matrix affected by centering the columns of a matrix?
For some $n$ by $p$ matrix $X$, I'm trying to figure out how the rank of $X$ is affected if each column in $X$ is centered by the mean of that column (call the centered design matrix $Z$).
a) If $p < n$ and $X$ is full column rank, $Z$ is full column rank if multicollinearity is not present.
b) If $p = n$ and $X$ is full rank, $Z$ has rank $n-1$ due to the constraint from centering the variables, regardless of whether multicollinearity is present or not.
c) If $p > n$ and $X$ is full row rank, $Z$ has rank $n-1$ due to the constraint imposed from centering the variables
Which means rank of $Z \leq$ rank of $X$. I'm wondering if these observations are correct, and if so, if there's a technical way to show them, especially a).
AI: Your centered matrix is given by $Z= PX$ where $P:=I-\frac{1}{n}\mathbf{11}^T$.
Your 1st statement holds iff the ones vector is not in the column space of $X$. I.e. if $X\mathbf y = \mathbf 1$ then $PX\mathbf y = \mathbf 0$ and the kernel has dimension (at least) 1. Otherwise for any $X\mathbf y\neq \mathbf 1$ you have $P(X\mathbf y) = \mathbf 0$ iff $(X\mathbf y) = \mathbf 0$ which occurs iff $\mathbf y = \mathbf 0$ since $X$ has full column rank, so the kernel dimension is at most 1 as well. Again the key issue is whether $\mathbf 1$ is in the column space of $X$.
As for your 2nd and 3rd points
having full row rank and at least as many rows as columns means the columns of $X$ span your space ($X$ is surjective) so the ones vector is in the column space of $X$ in both cases. By the above argument $P$ acting on $X$ increments the kernel by 1 when we select $\mathbf y$ such that $X\mathbf y =\mathbf 1$ so $PX\mathbf y = \mathbf 0$.
To tighten this up, consider that
since $X$ is surjective it has a right inverse $M$ such that $XM = I_n$, then
$\text{rank}\big(PX\big) = \text{rank}\big(P(XM)X\big)\leq \text{rank}\big(P(XM)\big) = \text{rank}\big(P\big) \leq \text{rank}\big(PX\big)$
so $\text{rank}\big(PX\big) =\text{rank}\big(P\big)=n-1$
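A quick numerical illustration of cases a) and b) (a sketch; Gaussian random matrices have full rank with probability $1$):

    import numpy as np

    rng = np.random.default_rng(1)

    X = rng.standard_normal((7, 3))   # p < n: ones vector almost surely not in col(X)
    Z = X - X.mean(axis=0)
    print(np.linalg.matrix_rank(X), np.linalg.matrix_rank(Z))   # 3 3

    X = rng.standard_normal((5, 5))   # p = n: X invertible, so ones vector is in col(X)
    Z = X - X.mean(axis=0)
    print(np.linalg.matrix_rank(X), np.linalg.matrix_rank(Z))   # 5 4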
|
H: True or false: continuous image of convex is convex
Let $f: \mathbb{R}^n \to \mathbb{R}^m$ be a continuous function.
Is it true that $f(A)$ is convex given that $A$ is convex?
This claim seems to be quite intuitively true, but maybe I am using my intuition too much for $\mathbb{R} \to \mathbb{R}$ type functions. I cannot find references on MSE, nor can I find a general result elsewhere.
What I was able to find is the following,
Given $f: \mathbb{R}^n \to \mathbb{R}^m$, if $m \leq n$, and $f$ is $C^{1,1}$, the map $f^\prime(a):
\mathbb{R}^n \to \mathbb{R}^m$ surjective at some point $a \in
\mathbb{R}^n$, then image $K = f(B(a,\epsilon))$ of ball
$B(a,\epsilon) = \{x \in \mathbb{R}^n: \|x-a\|\leq \epsilon\}$ of
sufficiently small radius $\epsilon$ is convex.
in B. T. Polyak, “Convexity of nonlinear image of a small ball with applications to optimization".
which is not a global result...does it mean that it is not true in general?
AI: The function $f:\Bbb R^2\to\Bbb R^2:\langle x,y\rangle\mapsto \langle x,y+x^2\rangle$ is continuous and sends the $x$-axis, which is convex, to the graph of $y=x^2$, which is not.
|
H: Open submanifolds; why does $\mathcal{A}_U$ cover $U$?
I am currently reading Lee's Introduction to Smooth Manifolds, and have come across open submanifolds. Suppose that $M$ is a smooth manifold, and that $U \subseteq M$ is an open set, and define $\mathcal{A}_U := \{\text{smooth charts } (V, \varphi) \text{ of } M \text{ such that } V \subseteq U\}$.
Lee says that it is easy to verify that $\mathcal{A}_U$ is a smooth atlas for $U$. I am stuck on showing that the charts in $\mathcal{A}_U$ cover $U$.
If I take an element $u \in U$, how do I assert the existence of a chart $(V, \varphi)$ in $\mathcal{A}_U$ such that $u \in V$? I am not even sure how to see that $\mathcal{A}_U$ is non-empty when $U$ is non-empty.
There are other questions on this website about why $\mathcal{A}_U$ is a smooth atlas for $U$, but none that I have found address why $\mathcal{A}_U$ covers $U$. I feel that I am missing something simple here.
AI: Your concern is valid since without additional assumptions, there is no reason for $\mathcal A_U$ to cover $U$. Consider the case of $U \subsetneq \mathbb R^n$ a proper open subset where the atlas on $\mathbb R^n$ is $\mathcal A_{\mathbb R^n} = \{(\mathbb R^n, id)\}$. This is a completely valid atlas but its restriction to $\mathcal A_U$ is empty. What you're missing here is that different atlases can induce the same smooth structure. Indeed, Lee discusses this on page 9 of his book where he gives two atlases on $\mathbb R^n$ - the one I listed above and $\{(B_1(x), id) : x \in \mathbb R^n\}$. These induce the same smooth structure. The definition of a smooth manifold (per Lee) is then a pair $(M, \mathcal A)$ where $M$ is a topological manifold and $\mathcal A$ is a maximal atlas, i.e. one not contained in any strictly larger smooth atlas. There is no loss of generality in working with maximal atlases, as Lee proves that every smooth atlas is contained in a unique maximal smooth atlas.
Now, given that $\mathcal A_M$ is maximal, we can prove that $\mathcal A_U$ covers $U$. Indeed, let $u \in U$. As $\mathcal A_M$ covers $M$ there is a $(W, \phi) \in \mathcal A_M$ such that $u \in W$. As $\mathcal A_M$ is maximal, the restriction $(W \cap U, \phi|_{W \cap U}) \in \mathcal A_M$ as well. Of course, this means that $(W \cap U, \phi|_{W \cap U}) \in \mathcal A_U$ so this atlas covers $U$.
|
H: Why are there $p+1$ solutions to a projective line over a finite field of order $p$
Let $\mathbb{F}_p$ be a finite field with $p$ elements, and let
$$x+y+z=0$$
be a projective line with $x,y,z \in \mathbb{F}_p$. In a book I am currently reading about elliptic curves, it uses the fact that this projective line obviously has $p+1$ solutions to prove a theorem of Gauss, but doesn't explain (probably because it assumes general background on projective geometry). I have barely touched on projective geometry, so I was hoping someone could explain why there are obviously $p+1$ solutions.
The only thing I can think of is $x+y = -z$ corresponds to the equation $x^{\prime} + y^{\prime} = 1$ in affine space by $\frac{-x}{z} + \frac{-y}{z} = 1$ when $z \neq 0$ with $x^{\prime}, y^{\prime} \in \mathbb{F}_p$. Then if $x^{\prime} = s$, we have $y^{\prime} = 1-s$ and there are $p$ choices for $s$. So we have $p+1$ solutions, the $p$ mentioned and $(0,0,0)$. The only problem is I don't know if this is right and I thought $(0,0,0)$ wasn't a point in projective space. If not, are we assuming the extra solution is $\mathcal{O}$ in the context of elliptic curves? Thank you
AI: Just think about it in cases. When $x=0$, you must have $y=-z$, so there is essentially only one solution, $(x,y,z)=(0,1,-1)$.
Otherwise, you may assume $x=1$. There are $p$ possibilities for $y$, and all of them give a distinct point in projective space. These give the solutions $(x,y,z)=(1,y,-1-y)$.
Thus, in total, there are $p+1$ solutions.
Here is another way to think about this: (this method generalizes easier)
First, we consider solutions in $\mathbb F_p^3$. Any choice of $x$ and $y$ work, so there are $p^2$ of these. Thus, there are $p^2-1$ solutions in $\mathbb F_p^3\setminus\{(0,0,0)\}$. Finally, since each equivalence class in projective space has $p-1$ points, this gives $(p^2-1)/(p-1)=p+1$ solutions.
|
H: Dimension of an open subset of a submanifold?
Suppose that $S$ is an embedded/regular submanifold of $M$ with $\mathrm{dim}\ S = s < \mathrm{dim}\ M$. If $U$ is an open subset of M, then $S' = U \cap S$ is an open subset of $S$ in the subspace topology.
Question: If $S' \neq \varnothing$, is the dimension of $S'$ known?
AI: If $S' \neq \emptyset$ then it is a nonempty open subset of $S$ and therefore has dimension $s$. This is completely general - if $M$ is a manifold and $\emptyset \neq U \subseteq M$ then $\dim U = \dim M$. For instance, an atlas on $M$ witnesses local diffeomorphisms of $M$ with open subsets of $\mathbb R^n$ where $n = \dim M$. Restricting this atlas to $U$ yields local diffeomorphisms of $U$ with open subsets of $\mathbb R^n$, so $\dim U = n = \dim M$. You could also look at tangent spaces to conclude the same thing.
|
H: In which direction is the directional derivative of $ f(x, y) = (x^2 − y^2 )/(x^2+ y^2 )$ at $(1, 1)$ equal to zero?
I tried to use the definition of the directional derivative; I think I need to solve for the vector $v$ that gives the direction, but I don't know if my approach is correct.
AI: The directional derivative in direction $v \in \mathbb{R}^2$ at $(1, 1)$ is given by $Df(1, 1) v$, where $Df(1, 1)$ is the Jacobian matrix of $f$ i.e. the matrix of partial derivatives: $Df(1, 1) = \begin{pmatrix} \frac{\partial f}{\partial x}(1, 1) & \frac{\partial f}{\partial y}(1, 1) \end{pmatrix}$. Thus, you need to find $v = (v_1, v_2)$ s.t. $\frac{\partial f}{\partial x}(1, 1)v_1 + \frac{\partial f}{\partial y}(1, 1)v_2 = 0$. This can be done, for example, by setting $v_1 = \frac{\partial f}{\partial y}(1, 1)$ and $v_2 = -\frac{\partial f}{\partial x}(1, 1)$.
Sometimes "direction" is defined to be a vector of norm 1, so you may need to normalize $v$ for your answer to be accepted.
|
H: Sufficient conditions for De Morgan's law in intuitionistic logic
What are the sufficient conditions for De Morgan's law $\lnot(P\wedge Q)\Rightarrow \lnot P \vee \lnot Q$ in intuitionistic logic?
If $P\vee \lnot P$ and $Q\vee \lnot Q$ are true, is it true?
AI: Yes. Under that assumption, you can examine the four cases. In three of the cases either $\lnot P$ or $\lnot Q$ holds; then $\lnot P\lor \lnot Q$ holds and you're done. In the remaining case, $P\land Q$ holds, and this contradicts the premise.
|
H: Showing $\frac{d\theta }{ d \tan \theta}=\frac{ 1}{ 1+ \tan^2 \theta}$
I suppose that
$$
\frac{d\theta }{ d \tan \theta}=\frac{d \arctan x }{ d x}= \frac{1}{1+x^2}=\frac{ 1}{ 1+ \tan^2 \theta}
$$
So is
$$
\frac{d\theta }{ d \tan \theta}=\frac{ 1}{ 1+ \tan^2 \theta}
$$
correct? And
$$
\frac{d (\theta) }{ d \tan \frac{\theta}{2}}=\frac{ 2}{ 1+ \tan^2 \frac{\theta}{2}} \; ?
$$
AI: Follows is the way I look at the derivation of
$\dfrac{d\theta}{d\tan \theta} = \dfrac{1}{1+ \tan^2 \theta} \tag 0$
and related identities; starting with
$\dfrac{d\theta}{d\tan \theta} = \dfrac{1}{\dfrac{d\tan \theta}{d\theta}} \tag 1$
we use the definition $\tan \theta = \sin \theta / \cos \theta$ and the quotient rule for derivatives to obtain:
$\dfrac{d\tan \theta}{d\theta} = \dfrac{d}{d\theta}\dfrac{\sin \theta}{\cos \theta} = \dfrac{(\cos \theta)(\cos \theta) - (-\sin \theta)(\sin \theta)}{\cos^2 \theta}$
$= \dfrac{\cos^2 \theta + \sin^2 \theta}{\cos^2 \theta} = \dfrac{\cos^2 \theta}{\cos^2 \theta} + \dfrac{\sin^2 \theta}{\cos^2 \theta} = 1+ \tan^2 \theta; \tag 2$
now by (1),
$\dfrac{d\theta}{d\tan \theta} = \dfrac{1}{1+ \tan^2 \theta}; \tag 3$
having obtained this result, we may calculate $\dfrac{d\theta}{d\tan (\theta/2)}$, additionally invoking the chain rule; we set
$u(\theta) = \dfrac{\theta}{2}, \tag4$
whence
$\dfrac{du(\theta)}{d\theta} = \dfrac{1}{2}, \tag 5$
and
$\dfrac{d\tan (\theta/2)}{d\theta} = \dfrac{d\tan u(\theta)}{d\theta} = \dfrac{d\tan u(\theta)}{du} \dfrac{du(\theta)}{d\theta}$
$=\dfrac{1}{2}(1 + \tan^2(u(\theta)) = \dfrac{1 + \tan^2 (\theta/2)}{2}, \tag 6$
and thus
$\dfrac{d\theta}{d\tan (\theta/2)} = \dfrac{2}{1 + \tan^2 (\theta/2)}. \tag 7$
$OE\Delta$.
|
H: Prove or disprove: if $rx \in I$ for $r, x \in R$, where $I$ is an ideal, does it follow that $x \in I$?
Recall the definition of an ideal $I$ in a ring $R$:
$I$ is a subgroup of $R$ under addition
For any $x \in I$ and any $r \in R$, $rx \in I$ and $xr \in I$
My question is: change the order.
If $rx \in I$ for some $r, x \in R$, where $I$ is an ideal,
can we say that $x \in I$? If yes, please give a short proof; if not, give me a counterexample. Thanks
AI: This is an excellent question because it leads to the extremely important notion of a prime ideal. A proper ideal $\mathfrak p$ of a commutative ring $R$ is said to be prime when $rx \in \mathfrak p$ implies $r \in \mathfrak p$ or $x \in \mathfrak p$, which is essentially the condition you stated. I can then answer your question by providing an example of a non-prime ideal. Indeed, the name "prime" is appropriate. Consider the ring $\mathbb Z$ and the ideal $4\mathbb Z \subseteq \mathbb Z$. $4 = 2 \cdot 2 \in 4 \mathbb Z$ but $2 \notin 4\mathbb Z$.
The fact that $4$ has a nontrivial prime factorization into $2^2$ is exactly why this works. Here's an exercise for you if you'd care to try it: prove that, for $n \geq 2$, $n \mathbb Z$ is a prime ideal if and only if $n$ is prime.
|
H: Find all the values of $y$ so that $\min\limits_{[1, 2]}\left | x^{3}- 3x+ y \right |= 6$ .
Find all the values of $y$ so that $\min\limits_{[1, 2]}\left | x^{3}- 3x+ y \right |= 6$ .
By Desmos https://www.desmos.com/calculator/i3cesnjguw , I see that the blue line $x= 2$ meets $\left | x^{3}- 3x+ y \right |\leq 6$ at $y_{0}= -8, 4$ , and the blue line $x= 1$ meets $\left | x^{3}- 3x+ y \right |\leq 6$ at $y_{0}= -4, 8$ . I guess the values of $y$ so that $\min\limits_{[1, 2]}\left | x^{3}- 3x+ y \right |= 6$ are $y= -8, 8$, but why? I tried to use $\min\limits_{[1, 2]}\left | x^{3}- 3x \right |+ \min\limits_{[1, 2]}\left | y \right |\geq\min\limits_{[1, 2]}\left | x^{3}- 3x+ y \right |$ (of course it was an awful idea). I need help.
AI: Let $f(x)=x^3-3x$. Note that $f'(x)=3x^2-3$ is nonnegative on the interval $x\in[1,2]$ (and positive for $x>1$), so $f(x)$ is monotonically increasing there. Thus, the minimum value of $|f(x)+y|$ occurs either at $x=1$ or at $x=2$ (because if it were attained somewhere in the middle, the minimum value would have to be zero, but the question needs it to be $6$). So we are seeking solutions to either
$$|f(1)+y|=|y-2|=6$$
or to
$$|f(2)+y|=|y+2|=6.$$
Clearly, the solutions are $y=\pm4,\pm8$.
Now we need to check that these values actually work, which I'll leave to you. You should come to the conclusion that, out of these four values, only $y=\pm8$ are solutions.
|
H: Evaluating $\int\frac{1}{x\sqrt{x^2+1}}dx$
I am very confused by this. I am integrating the function;
$$\int\frac{1}{x\sqrt{x^2+1}}dx$$
And Wolfram alpha is telling me, the result is;
$$\log{\left(\frac{x}{\sqrt{x^2+1}+1}
\right)}$$
However, Wolfram Mathematica is telling me that the answer is;
$$\int\frac{1}{x\sqrt{x^2+1}}dx=-\mathrm{artanh}(\sqrt{x^2+1})$$
Are these two representation equivalent?
AI: As I said in the comment, the correct representation is the first one.
If we consider $f\colon (0,+\infty) \to \mathbb R$ defined as
$$
f(x)=\log{\left(\frac{x}{\sqrt{x^2+1}+1}
\right)}
$$
Then $f'(x) = \frac{1}{x\sqrt{x^2+1}}$ for every $x > 0$.
The second function
$$
g(x) =- \operatorname{arctanh}(\sqrt{x^2+1})
$$
indeed has no real domain.
One explanation is that if we consider the function
$$
\tanh(x) = \frac{\mathrm{e}^x - \mathrm{e}^{-x}}{\mathrm{e}^x + \mathrm{e}^{-x}}
$$
the image of $\tanh$ is the interval $(-1,1)$; therefore its inverse function cannot be evaluated at $\sqrt{x^2+1}$, because $\sqrt{x^2+1} \ge 1$ for every $x \in \mathbb R$.
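A quick symbolic check of the first antiderivative, a sketch assuming sympy is available:

import sympy as sp

x = sp.symbols('x', positive=True)
F = sp.log(x / (sp.sqrt(x**2 + 1) + 1))

# should print 0, confirming F'(x) = 1/(x*sqrt(x^2+1)) for x > 0
print(sp.simplify(sp.diff(F, x) - 1 / (x * sp.sqrt(x**2 + 1))))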
|
H: definite integration of a function in terms of a composite function over a log-transformed domain
Let $f(x) = g(w)$, where $w=\log(x)$. Can the definite integral $F(b) - F(a) = \int_a^b f(x) \,dx$ be expressed as an integral involving $g(w)$ over the corresponding log-transformed interval (that is, from $\log(a)$ to $\log(b)$)?
AI: Yes. Since $x=e^{w}$, we have $\frac {dx} {dw }=e^{w}$, so the integral becomes $\int\limits_{\log a }^{\log b} g(w)e^{w}\,dw$.
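Here is a small numerical sanity check of the substitution, a sketch assuming scipy, with $f(x)=\sin(x)/x$ as a hypothetical example:

import numpy as np
from scipy.integrate import quad

a, b = 0.5, 3.0
f = lambda x: np.sin(x) / x
g = lambda w: f(np.exp(w))  # g(w) = f(e^w), i.e. w = log(x)

lhs, _ = quad(f, a, b)
rhs, _ = quad(lambda w: g(w) * np.exp(w), np.log(a), np.log(b))
print(lhs, rhs)  # the two values should agree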
|
H: Pair of linear equation in two variables
This is from a text book:-
"The general form of a linear equation in two variables is $ax + by + c = 0$ or, $ax + by = c$ where $a, b, c$ are real numbers such that $a ≠ 0$, $b≠0$ and $x, y$ are variables.
(we often denote the condition $a$ and $b$ are not both zero by $a^2+b^2≠0$.)"
I don’t understand this last condition.
How can we say that $a^2+b^2≠0$ represents the condition that $a$ and $b$ are not both zero.
Let $a = 0, b = 1$; then this condition is also fulfilled.
Any help?
AI: $a$ and $b$ are both zero $\iff$ $a=b=0$, so
$a$ and $b$ are NOT both zero $\iff$ at least one of $a,b$ is not $0$
which is equivalent to $a^2+b^2\neq0$ in the case that $a,b$ are both real numbers (a sum of squares of real numbers is zero iff every term is zero).
Answering your comment: yes, you are right. Maybe a better way to write it is $(a,b)\neq(0,0)$ instead of "$a\neq0,b\neq0$".
|
H: Largest number of different values in $f(0),f(1),..,f(999)$ given $f(x)=f(398-x)=f(2158-x)=f(3214-x)$
I am having trouble trying to understand the Solution (question is also linked here). The solution states that $GCD(1056, 1760) = 352$ implies that $f(x)=f(352+x)$. However we also know that $GCD(398, 2158)=2$. Wouldn't by the same logic this imply that $f(x)=f(2+x)$? It would be nice if someone could rewrite the solution or explain it thoroughly. The image is from https://artofproblemsolving.com/wiki/index.php/2000_AIME_I_Problems/Problem_12
AI: $f(x)=f(1056+x)$ means that $f$ is a periodic function with period dividing $1056$.
Similarly, $f(x)=f(1760+x)$ implies that the period divides $1760$.
On the other hand, $f(x)=f(398-x)$ means that $f$ is a function symmetric with respect to the vertical axis $x=199$.
The subtle difference between the plus/minus sign in the parentheses leads to the difference between a periodic function and a 'symmetric' function, which are different in nature. The $\gcd$ argument applies to two periods (by Bézout, the $\gcd$ of two integer periods is again a period), whereas $398$ and $2158$ are constants in symmetry relations, not periods; composing two such symmetries produces a translation by their difference, not by their $\gcd$. So the same argument does not hold for $\gcd(398, 2158)=2$.
|
H: Why does the DFT matrix in numpy differ from the math definition?
I want to understand the DFT matrix better (starting with the real part first).
I'm first computing a DFT matrix by calculating the FFT of an identity matrix ( I can do sp.linalg.dft and I get the same result anyway while the former is faster )
dft_matrix = np.fft.fft(np.eye(nelems))
Second, I compute, say, the 20th row of the DFT from first principles using the cosine function
fi = 20
dotprod_row_test = np.cos(2 * np.pi * fi * np.linspace(0, 1, nelems))
If I compare the resulting values in the 20th row of these two, they don't match exactly?
BACKGROUND:
I was trying to compute the FFT through different methods and I found that the higher frequencies didn't match exactly. Backtracking, I found that the DFT matrix I "compute from first principles" is not correct.
AI: The below is a slightly modified version of your code which may be useful. In particular, the main change is with the line:
np.cos(2*np.pi/nelems * fi * np.linspace(0, nelems-1, nelems))
The original version was not correctly stepping from 0 to nelems-1 in the argument of the cosine. In particular, the factor should be $\frac{2 \pi}{nelems}$ with the index running from 0 to nelems-1, not 0 to nelems. Please see the code below for a running implementation.
I hope this helps.
import numpy as np
import matplotlib.pyplot as plt

nelems = 128

# DFT matrix: the FFT of the identity yields the full DFT matrix
dft_matrix = np.fft.fft(np.eye(nelems))

# First-principles direct calculation of the real part of row fi.
# Note the 2*pi/nelems factor and the index running 0, 1, ..., nelems-1
# (np.linspace(0, nelems-1, nelems) is the same as np.arange(nelems)).
fi = 120
dotprod_row_test = np.cos(2 * np.pi / nelems * fi * np.linspace(0, nelems - 1, nelems))

# Plot one against the other; the markers should coincide
plt.plot(dotprod_row_test, '+')
plt.plot(np.real(dft_matrix[fi]), 'x')
plt.legend(['direct DFT', 'numpy DFT'])
plt.show()
|
H: Vector space and "linear structure"
This question really only concerns terminology. In the linear algebra lectures that I am watching, the professor refers to the "linear structure" of a vector space. I know the definition of linearity in the context of a linear transformation, but that's a map between vector spaces. The vector spaces themselves do not seem to have "linear structure." I cannot figure out what exactly this term means. I believe another name for vector space is a "linear space," and that this must be related, but I cannot figure out what this could be referring to unless the ability to take linear combinations is the goal.
AI: As you know, a vector space $V$ over a field $\mathbb{F}$ is endowed with two different "structures": one is given by addition, $+$, and gives the set $V$ the structure of an abelian group $(V, +)$; the other is given by multiplication of vectors in $V$ by elements of $\mathbb{F}$, called scalars, and satisfies some axioms. The result is that the set $V$ of vectors must be closed with respect to both addition and scalar multiplication. That is, if $v$ and $v'$ are vectors in $V$, then the sum $v+v'$ must also be a vector in $V$, and if $\lambda\in\mathbb{F}$ is a scalar, then $\lambda v$ must be a vector in $V$. You may summarize these closure properties by saying that $V$ is closed under linear combinations of vectors, that is, $\lambda v+\lambda'v'$ is in $V$ for all $\lambda,\lambda'\in\mathbb{F}$ and $v,v'\in V$. Vectors of the form $\lambda v+\lambda'v'$ are called linear combinations of $v$ and $v'$. This can be a reason for calling a vector space a linear space: it is closed under linear combinations of vectors. Linear combinations appear everywhere in vector spaces: if you fix a basis for the space, then every vector can be written (uniquely) as a linear combination of vectors in the basis.
|
H: Let $A$ & $B$ be sets. Prove that $\{A,B\}$ is a set.
Here are the axioms that I'm allowed to use.
Axiom of Existence:
There exists a set.
Axiom of Belonging:
If $x$ is an object and $A$ is a set, then $x \in A$ is a proposition.
Axiom of Extension:
Two sets are equal iff they have the same members.
Axiom Schema of Specification:
Let $S$ be a set and let $p(x)$ be an open sentence about the objects in $S$. Then, $\{x \in S: p(x)\}$ is a set.
Axiom of Unions:
Let $F$ be a family of sets. Then, $\cup F$ is a set and it contains all objects that belong to at least one set in the family $F$.
Axiom of Powers:
Let $S$ be a set. There exists a set $P(S)$ whose elements are all the subsets of $S$.
So, this is all that I'm allowed to use to prove this result, and nothing more. I think this is sufficient context based on the book that I'm using. Now, I will present my argument.
Proof Attempt:
Let $A$ and $B$ be sets. By the Axiom of Unions, $A \cup B$ is a set. By the Axiom of Powers, $P(A \cup B)$ is a set.
Since $A \subset A \cup B$ and $B \subset A \cup B$, it follows that $A \in P(A \cup B)$ and $B \in P(A \cup B)$. We define the following:
$$\phi = \{x \in P(A \cup B): (x = A) \lor (x = B) \}$$
By the Axiom Schema of Specification, $\phi$ is a set. Then, the Axiom of Extension implies that $\phi = \{A,B\}$ and it follows, then, that $\{A,B\}$ is a set. That proves the desired result.
I'm kind of not happy with that first line that uses the Axiom of Unions. It just feels wrong. But perhaps that's just me being stupid about this.
In any case, is the argument above correct? If not, what's wrong with it and how can I fix it?
AI: I suppose that the formulation of the Axiom of Union should be more specific, because otherwise the concept of family might introduce some circularity.
Axiom of Union variant. Let $f(x,y)$ be an open sentence about sets with the property $\forall x\,\exists! y\colon f(x,y)$. Let $I$ be an (index) set. Then there exists a set $\bigcup f(I)$ with
$$ x\in \bigcup f(I)\iff \exists i\in I\colon f(i,x).$$
Now to construct $A\cup B$, we need a suitable $f$ and $I$ to apply this. (Once we have $A\cup B$, we can proceed the way you did). If $I$ is any set with at least two elements and $i_0$ is one of them, we win by letting
$$f(x,y):= (x=i_0\land y=A)\lor (x\ne i_0\land y=B).$$
So now we are left with showing that there exists a set with at least two elements.
Well, by Existence, there exists some set $X_0$.
By Specification, we find $\emptyset:=\{\,x\in X_0\mid x\ne x\,\}$ which has the property $\forall x\colon x\notin \emptyset$.
Then $X_1:=P(X_0)$ is a set. Clearly (well, the definition of subset is lacking, but ...), $\emptyset\subseteq X_0$ and $X_0\subseteq X_0$, so $\emptyset,X_0\in P(X_0)$. This shows the existence of a non-empty set $X_1$, but since it may be that $\emptyset=X_0$, we do not have a two-element set yet. However, $X_1$ is non-empty and so $\emptyset$ and $X_1$ are two distinct elements of $X_2:=P(X_1)$. In other words, $X_2$ has at least two elements, as desired.
|
H: Find all nonconstant polynomials P such that P({X})={P(X)}
Find all nonconstant polynomials $P$ which satisfy $P(\{X\})=\{P(X)\}$, where $\{x\}$ is the fractional part of $x$.
I've tried to prove that the polynomial in question is linear, but I can't think of how to prove it, especially since we don't know anything about the constants
AI: Hint
$P(\{X\})$ is periodic with period $1$ (since $\{X+1\}=\{X\}$), so $\{P(X)\}$ must be periodic with period $1$ as well; this forces $P(X)$ to be linear. Indeed, for non-linear $P(X)$ (WLOG we assume $\lim_{x\to\infty} P(x)=\infty$), by defining
$$
I_k=\{x: k\le P(x)<k+1\}\quad,\quad k\in\Bbb Z
$$we have $$\lim_{k\to \infty}|I_k|=0$$ (where $|I_k|$ denotes the length of the interval $I_k$), which means that $\{P(X)\}$ cannot be periodic, and the statement is proved $\blacksquare$
|
H: Show that $S$ is a subset of $f^{-1}(f(S))$
Here is the full question:
Let $f : X → Y$ be a function from one set $X$ to another set $Y$ , let $S$ be a subset of $X$, and let $U$ be a subset of $Y$. Show that $S \subset f^{-1}(f(S))$
My main problem is that I am not able to translate my reasoning to a formal Mathematical proof, here is what I am thinking of:
Every element $x \in S$ has a forward image $f(x)$. Now $f^{-1}(f(S))$ will either be the same set $S$ (in case the function is one-to-one) or a set with more elements than $S$ (when the function isn't one-to-one, with multiple inputs mapping to one output), in which all elements of $S$ appear along with some new elements. Either way, all elements of $S$ will be in $f^{-1}(f(S))$, which is the definition of a subset.
What I tried:
I tried going back to the original definitions of a forward and inverse image in a way that I start with an element $x\in S$ and prove it's in $f^{-1}(f(S))$ but I can't see it using the formal definitions.
AI: By definition: $f^{-1}(B)=\{x \in X \, | \, f(x) \in B\}$. So
$$f^{-1}\left(f(S)\right)=\{x \in X \, | \, f(x) \in f(S)\}.$$
Let $a \in S$, then $f(a) \in f(S)$, this means $a \in f^{-1}\left(f(S)\right)$. Hence $S \subseteq f^{-1}\left(f(S)\right)$.
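A tiny computational illustration (a sketch in plain Python) showing that the containment can be strict when $f$ is not injective:

X = {0, 1, 2, 3}
S = {0}
f = lambda x: x % 2  # not injective: f(0) = f(2)

fS = {f(x) for x in S}              # forward image f(S) = {0}
pre = {x for x in X if f(x) in fS}  # f^{-1}(f(S)) = {0, 2}
print(S <= pre, S == pre)           # True False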
|
H: maximize $v_0 x+ v_1 y$ s.t. $ (x/a)^2+(y/b)^2 =1$
How to maximize the dot product of two vectors, one is fixed, the other is constrained on an ellipse?
i.e., how to maximize
$$
v_0 x+ v_1 y
$$
s.t.
$$
\left(\frac{x}{a} \right)^2 +\left(\frac{y}{b} \right)^2=1
$$
intuitively, let $x=a \sin t, y= b \cos t$; then the extremum occurs when the tangent
$$
\begin{bmatrix}
a \cos t \\
-b \sin t \\
\end{bmatrix}
$$
is orthogonal to
$$
\begin{bmatrix}
v_0 \\
v_1
\end{bmatrix}
$$
i.e.
$$
v_0 a \cos t= v_1 b \sin t
$$
Thus, the maximum point is
$$
\begin{cases}
x=\frac{a^2 v_0}{ \sqrt{ a^2 v_0^2 +b^2 v_1^2 }} \\
y=\frac{b^2 v_1}{ \sqrt{ a^2 v_0^2 +b^2 v_1^2 }}
\end{cases}
$$
AI: Without multipliers:
Let $x=a\cos t,y=b\sin t$ and maximize
$$v_0a\cos t+v_1b\sin t=\sqrt{(v_0a)^2+(v_1b)^2}\cos(t-\phi).$$
Obviously, the maximum is
$$\sqrt{(v_0a)^2+(v_1b)^2}.$$
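A numerical cross-check of both forms of the answer, a sketch assuming numpy, with arbitrary sample values for $a, b, v_0, v_1$:

import numpy as np

a, b, v0, v1 = 3.0, 2.0, 1.0, 1.5
d = np.sqrt((a * v0)**2 + (b * v1)**2)

# closed-form maximizer from the question
x_star, y_star = a**2 * v0 / d, b**2 * v1 / d
print(v0 * x_star + v1 * y_star)  # equals d

# brute force over the parametrization
t = np.linspace(0, 2 * np.pi, 1_000_001)
print((v0 * a * np.cos(t) + v1 * b * np.sin(t)).max())  # also ≈ d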
|
H: What are the differences between $\mathbb{R}^{k+m}$ and $\mathbb{R}^{k}×\mathbb{R}^{m}$
For $k,m \in \mathbb{N}$, are the two sets exactly the same? Or are they the same only for $k = m = 1$?
AI: Formally, the two sets are different: the left one consists of $(k+m)$-tuples of the form $\;(x_1,...,x_k,x_{k+1},...,x_{k+m})\;$, whereas the right one consists of ordered pairs of the form $\;\left((x_1,...,x_k),\,(y_1,...,y_m)\right)\;$. There is, however, an obvious bijection between the two, which is why they are routinely identified.
|
H: For a topological manifold $X$ is it true that $X$ is a covering of $X\lor X$?
Here is my question:
Let $X$ be a topological manifold. Is it true that $X$ is a covering
of $X\lor X$ and $X\lor X\lor X$, and so on.
I have an intuition: $\pi_1(X\lor X)=\pi_1(X)*\pi_1(X)$.
AI: It is not true. Consider $X=\mathbb R$; then $X\vee X$ looks like a cross ($+$), and there can't be a local homeomorphism between the two, as one sees by considering the wedge point (where the lines cross).
|
H: How to evaluate : $\lim_{n \to \infty}\sum_{k=0}^{n} \frac{{n\choose k}}{n^k(k+3)}$
Usually I would write the given sum in the form
$$\lim_{n \to \infty}\frac{1}{n}\sum_{r=0}^{n}{f\left(\frac{r}{n}\right)}$$
and then approximate it with the integral
$$\int_{0}^{1}f(x)dx$$
but it doesn't seem so easy to do with this question.
The solution says that this sum is equal to the integral:
$$\int_{0}^{1}x^2e^xdx$$ without any further explanation. I can't see how they are equal.
Any help is appreciated.
AI: $$\lim_{n\to \infty} \sum_{k=0}^n {n \choose k} \frac{1}{(k+3)n^k} = \lim_{n\to \infty} \sum_{k=0}^n {n \choose k} \frac{1}{n^k} \int_0^1 x^{k+2}\:dx = \lim_{n\to \infty} \int_0^1 x^2 \sum_{k=0}^n {n \choose k} \left(\frac{x}{n}\right)^k dx$$
$$= \lim_{n\to \infty} \int_0^1 x^2 \left(1+\frac{x}{n}\right)^n dx = \int_0^1 x^2 \lim_{n\to \infty}\left(1+\frac{x}{n}\right)^n\:dx = \int_0^1 x^2 e^x \:dx$$
Here the first equality uses $\frac{1}{k+3} = \int_0^1 x^{k+2}\,dx$, the (finite) sum is then moved inside the integral and collapsed by the binomial theorem, and the final exchange of limit and integral holds with appropriate assumptions on uniform convergence.
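Numerically, a sketch using only the standard library, the partial sums do approach $\int_0^1 x^2 e^x\,dx = e-2 \approx 0.71828$ (the value follows by integrating by parts twice):

from math import comb, e

def s(n):
    return sum(comb(n, k) / (n**k * (k + 3)) for k in range(n + 1))

for n in (10, 100, 1000):
    print(n, s(n))
print("e - 2 =", e - 2)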
|
H: Norm of functional on $L^4[0, 1]$
I am trying to calculate the norm of the operator
$$
\begin{align}
f: L^4[0, 1] &\rightarrow \mathbb{R} \\
x &\mapsto \int_0^1 t^3x(t) dt
\end{align}
$$
I started off by estimating
$$
||fx||
= \left| \int_0^1 t^3x(t) dt \right|
\le \int_0^1 |t^3x(t)| dt
\stackrel{Hölder}{\le} \left( \int_0^1 t^{12} dt \right)^{\frac{1}{4}} ||x|| \le \frac{1}{\sqrt[4]{13}} ||x||
$$
So therefore I know that $||f|| \le \frac{1}{\sqrt[4]{13}}$. Now I need to find some $x \in L^4[0, 1]$ such that
$$
||f|| \ge \frac{||fx||}{||x||} = \frac{1}{\sqrt[4]{13}}
$$
But I can't find any. Am I overseeing something simple?
AI: Applying Hölder's inequality with $p=4$ and $q=\frac 4 3$ we see that $|f(x)| \leq 5^{-3/4} \|x\|$. Hence the norm is at most equal to $5^{-3/4}$. (Note that your estimate pairs $\left(\int_0^1 t^{12}\,dt\right)^{1/4}$ with $\|x\|_4$, but the exponents $\frac14+\frac14$ do not sum to $1$; the conjugate exponent of $4$ is $\frac43$, which gives $\|t^3\|_{4/3} = \left(\int_0^1 t^{4}\,dt\right)^{3/4} = 5^{-3/4}$.) To see that equality holds just take $x(t)=t$. I will let you verify that $\frac {|f(x)|} {\|x\|} =5^{-3/4}$ in this case.
Note: The choice of $x(t)$ is dictated by the condition for equality in Holder's inequality.
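A quick numerical verification for $x(t)=t$, a sketch assuming scipy:

from scipy.integrate import quad

f_x = quad(lambda t: t**3 * t, 0, 1)[0]          # f(x) = ∫ t^3 · t dt = 1/5
norm_x = quad(lambda t: t**4, 0, 1)[0] ** 0.25   # ||x||_4 = (1/5)^(1/4)
print(f_x / norm_x, 5**-0.75)                    # both ≈ 0.29907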
|
H: Is the or statement always inclusive in Mathematics?
My question is about when the statement has a potential of inclusivity. For example, a statement like "It's either day time or night time" will obviously be exclusive, as it's a logical contradiction if we are in day time and night time simultaneously. However, I have realized that when there is potential for inclusivity, it's kind of an assumption that our statement is inclusive.
For example: The definition of the union of sets is considered inclusive, etc...
Is that always an assumption we can make (that when there is potential for inclusivity, the statement is inclusive), or will we have to see what the author states? (For example, in some books the definition of the union of sets is explicitly mentioned to be inclusive, in others not.)
AI: Short answer: Yes.
Longer answer: The mathematical logical operator $\lor$ is by definition inclusive. In spoken and written "natural" language, a mathematician will almost always mean $\lor$ when they say "or", to the point that when they mean exclusive or, they will almost always say so explicitly.
|
H: Statistics - Bootstrap Method
After scouring the internet and reference books for a couple of days I couldn't really find an answer to the current problem I am trying to solve. Let's say that I want to construct a confidence interval of a mean for a sample using the bootstrap method. The mean will represent the expected number of trials before the first success (Geometric Distribution). However, the data I have only consists of the total number of successes and total number of trials. I don't have access to the separate trials. My current approach to this problem is:
Generate a random binary set that consists of successes as ones and failures (number of trials - number of successes) as zeros.
For B times, sample from the generated binary set to create a bootstrap resample of the same size.
For each of these B resamples calculate the probability of success $p_{\text{mle}}$ using the Maximum Likelihood Estimate for the Geometric Distribution. Then find the mean using $\frac{1}{p_{\text{mle}}}$ to create a bootstrap distribution.
Then I construct the confidence interval by finding the respective percentiles of the bootstrap distribution of the means.
So the problem I have with this is that I am not sure if it's correct to be able to generate a random binary variable and assume that is a good representation of the original sample. Also, is it okay to transform the bootstrap sample?
Any advice would be appreciated! Thanks in advance.
AI: This seems like a good fit for parametric bootstrap. You can estimate $p$ in your sample by $\hat{p}$ (for example the MLE would be a good choice) and you can then sample from a geometric distribution with parameter $\hat{p}$ to generate your bootstrap samples. You know the size of your sample (because that is simply the number of successes you have) and this should also be the size of your bootstrap samples.
In point 1 of your procedure it is not so clear what you mean by "random binary set".
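A minimal sketch of the parametric bootstrap, assuming numpy; the observed totals below are placeholders, so substitute your own counts:

import numpy as np

rng = np.random.default_rng(0)
successes, trials = 40, 250        # placeholder data
p_hat = successes / trials         # MLE of p from the aggregate counts

B = 10_000
boot_means = np.empty(B)
for i in range(B):
    # resample `successes` geometric waiting times with parameter p_hat
    sample = rng.geometric(p_hat, size=successes)
    boot_means[i] = sample.mean()  # bootstrap estimate of the mean 1/p

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% percentile CI for the mean: ({lo:.2f}, {hi:.2f})")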
|
H: Every Cauchy sequence in $A$ converges in $X$, where $A$ is dense. Show that $X$ is complete.
My question is: Let $( X, d)$ be a metric space and $A$ a dense subset of $X$ such that every Cauchy sequence in $A$ converges in $X$. Prove that $( X, d)$ is complete.
Solution:
Case 1: If $X = A$ then it's trivial.
Case 2: Otherwise, for a Cauchy sequence $(x_n)$ in $X$ there are sequences $(a_{in})_i$ in $A$ converging to each $x_{n}$, and so how should I deal with a sequence of sequences?
AI: Let $\{x_n\}$ be any Cauchy sequence. Choose $y_n \in A$ such that $d(x_n,y_n) <\frac 1 n$ (possible since $A$ is dense). Then $d(y_n,y_m) \leq \frac 1 n +\frac 1 m +d(x_n,x_m)$, so $(y_n)$ is a Cauchy sequence in $A$. I will let you verify that if $y_n \to y$ then $x_n \to y$. Hence $X$ is complete.
|
H: Proving $r \binom{n}{r}=n\binom{n-1}{r-1}$ combinatorially. (Advice on combinatorial proofs in general?)
How do you combinatorially prove the following?
$$r \binom {n}{r} = n \binom {n-1}{r-1}$$
I find it easy to prove such equalities algebraically, but have a hard time finding the right combinatorial intuition.
Any advice for coming up with combinatorial proofs myself?
AI: The first step is to interpret the expressions - what are they counting? There are some tricks to this. For instance, addition corresponds to a single choice out of two sets of options whereas multiplication corresponds to two choices from two sets of options. Another trick is to find dependencies - for instance in the expression $r\binom{n}{r}$, we see the $r$ twice, so we ought to investigate what it would mean if one of the $r$s represented a choice that was dependent on the other $r$. In particular, if $\binom{n}{r}$ counts $r$-subsets of $\{1,\cdots,n\}$ then $r$ by itself can be interpreted as how many ways there are to choose a single element of that $r$-subset.
We always phrase this in more familiar terms. For instance, instead of an $r$-subset of $\{1,\cdots,n\}$, we can think of a committee of $r$ people out of $n$ candidates. Then the special one of the $r$ members chosen for the other $r$ in the expression $r\binom{n}{r}$ can be interpreted as choosing a president. So $r\binom{n}{r}$ counts committees of $r$ people drawn from $n$ candidates with a single president.
A next step is to think about how to count this, but in a different way. If you think about the thing you're constructing in terms of "choices" that can be made while constructing it, you can change the order in which you make these choices. For instance, instead of choosing $r$ out of $n$ people for a committee and then choosing a president out of those $r$, which gives $r\binom{n}{r}$, you can instead pick the president ($n$ options) and then pick the $r-1$ non-president members of the committee out of the remaining $n-1$ people, which gives the equivalent expression $n\binom{n-1}{r-1}$.
|
H: Why $f(x,y) = -g(x,y)$?
I have some confusion about Apostol's Calculus book, page 369.
Book's PDF link: https://www.academia.edu/4744309/Apostol_-_
My confusion is marked with a red circle, given below:
why $f(x,y) = -g(x,y)$?
Why does the negative sign appear?
AI: Let $(x,y) \in S$. Let $E$ be the solid in question. Then:
$$(x,y,z) \in E \iff \frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2} \le 1 \iff z^2 \le g(x,y)^2 \iff |z| \le g(x,y) \iff -g(x,y) \le z \le g(x,y),$$ where the third step uses $g(x,y) \ge 0$. The lower surface of the solid is therefore $f(x,y) = -g(x,y)$; that is where the negative sign comes from.
|
H: Determining moment generating function $\sum_{i=1}^n iX_i$
Let $X_i \sim Ber(0.5)$ and $X_i$'s independent. Let $Y$ be a random
variable with the same distribution as $\sum_{i=1}^n iX_i$. Determine
the moment generating function of $Y$.
I figured the moment generating function of $iX_i$ would be
$$M_{iX_i}(t) = 0.5e^{ti(0)}+0.5e^{ti(1)} = 0.5(1+e^{ti})$$
Now we could use that the $X_i$'s are independent and get
$$M_{Y}(t) = M_{\sum_{i=1}^n iX_i}(t) = \prod_{i=1}^n M_{iX_i}(t)$$
$$ =0.5\prod_{i=1}^n \left(1+e^{ti}\right)$$
I'm not sure if $M_{Y}(t) = M_{\sum_{i=1}^n iX_i}(t) = \prod_{i=1}^n M_{iX_i}(t)$ this step is correct.
Any hint in the right direction is appreciated :)
AI: Yes, that step is valid: since the $X_i$ are independent, so are the $iX_i$, and the MGF of a sum of independent random variables is the product of their MGFs. Just note that the factor $0.5$ gets multiplied $n$ times, so the answer is $(0.5)^{n} \prod\limits_{i=1}^{n} (1+e^{ti})$
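To double-check, a brute-force sketch (plain Python) that enumerates all $2^n$ outcomes for a small $n$ and a sample $t$:

from itertools import product
from math import exp

n, t = 4, 0.3

# direct expectation E[e^{tY}] over all 2^n equally likely outcomes
direct = sum(0.5**n * exp(t * sum(i * x for i, x in enumerate(bits, 1)))
             for bits in product((0, 1), repeat=n))

# closed form (0.5)^n * prod_i (1 + e^{t i})
formula = 0.5**n
for i in range(1, n + 1):
    formula *= 1 + exp(t * i)

print(direct, formula)  # should agree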
|