H: Find rank of matrix
If $X$ is an $N \times D$ matrix with $D\gg N$ and $\operatorname{rank}(X) = N$, what is $\operatorname{rank}(X^T \cdot X)$, where $X^T$ is the transpose matrix of $X$?
I am a little new to linear algebra and I don't have an approach; I ran into this problem in another context (linear regression).
AI: The rank of $X^TX$ is the same as the rank of $X$, assuming real matrices.
Why? Suppose $Xv=0$. Then also $X^TXv=0$. Conversely, if $X^TXv=0$, then also $v^TX^TXv=0$, that is, $(Xv)^T(Xv)=0$; therefore $Xv=0$.
We have proved that the two matrices have the same null space. But, by the rank-nullity theorem (applied to $X$ and to $X^TX$, both of which have $D$ columns),
\begin{align}
D&=\operatorname{rank}X+\operatorname{nullity}X \\
D&=\operatorname{rank}X^TX+\operatorname{nullity}X^TX
\end{align}
and equality of the ranks follows.
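A quick numerical sketch of this fact (assuming NumPy is available; the shapes below are illustrative):
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 50                            # D >> N
X = rng.standard_normal((N, D))         # a generic N x D matrix has rank N
print(np.linalg.matrix_rank(X))         # 5
print(np.linalg.matrix_rank(X.T @ X))   # 5 as well, even though X.T @ X is D x D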
|
H: Image of intersections is equal to intersections of images if and only if the inverse of the image is the set
I'm working through some exercises dealing with mappings and the properties of its image and pre-image.
Let $f: S \to T$ be a function. I want to prove that the statements
$f$ is injective
$f(A\cap B) = f(A) \cap f(B)$ for all $A,B \subseteq S$
$f^{-1}(f(A)) = A$ for all $A \subseteq S$
are equivalent. Consulting the excellent post Overview of basic results about images and preimages, one finds that it is quite easy to show that the first statement is equivalent to each of the other two, which establishes the equivalence of all three.
This seems to be the standard way to go about this. However, would it be possible to show that the second and third statements are equivalent?
Out of curiosity I played around with this for a bit and wasn't able to get anywhere. So this is either very easy or it takes some trick I've been unable to figure out.
If anyone could point me to a proof or give me a hint, it would be greatly appreciated!
AI: Suppose we have $f^{-1}(f(A)) = A$ and want to prove $f(A\cap B) = f(A) \cap f(B)$.
Using the fact that $f^{-1}$ commutes with intersections, we have
$$f^{-1}(f(A\cap B)) = A\cap B = f^{-1}(f(A)) \cap f^{-1}(f(B)) = f^{-1}(f(A) \cap f(B))$$
Now, taking the image of both sides, you obtain the desired identity.
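As a sanity check, here is a small sketch in plain Python (the dictionary and the sets are illustrative) verifying both identities for an injective map on finite sets:
f = {1: 'a', 2: 'b', 3: 'c'}          # an injective function given as a dict
image = lambda S: {f[s] for s in S}
preimage = lambda T: {s for s in f if f[s] in T}

A, B = {1, 2}, {2, 3}
print(image(A & B) == image(A) & image(B))   # True
print(preimage(image(A)) == A)               # True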
|
H: Composite function with integral
Suppose $f(z)=\int_{0}^{z} g(y)dy$ and $h(x)=x^2$; then the composite is $(f\circ h)(x)=\int_{0}^{x^2} g(y)dy$. Is this correct?
AI: Yes, that's right. $(f \circ h)(x) = f(h(x))$, so you just plug in $h(x)$ for $z$ to get $\int_{0}^{x^2} g(y)\, dy$.
|
H: How can a vector field act on a Lie Algebra element?
We have the definition of a vector field as a smooth section of the tangent bundle $$X:P\longrightarrow TP,$$ where $(TP,\pi',P)$ is the tangent bundle over the total space of the principal G-bundle $(P,\pi,M)$. I.e, a vector field is an assignment of a vector ($\in TP$) to every point of a smooth manifold $P$.
I have, however, come across expressions like "$X(A)$" where $X$ is (supposedly) a vector field and $A$ is a Lie algebra element (for example at 19:31 here https://www.youtube.com/watch?v=j36o4DLLK2k).
How is the vector field defined in this case? Do we simply take for granted that the Lie algebra is given a manifold structure, at which point the vector field is defined in the same way, something like $X:T_eG\longrightarrow T(T_eG)$ - assigning a vector in $T(T_eG) $ to another vector in $T_eG$?
AI: This is the fundamental vector field generated by $A\in T_eG$. For $u\in P$, this is defined by $X^A(u):=\tfrac{d}{dt}|_{t=0}u\cdot \exp(tA)$. Here the dot denotes the principal right action of $G$ on $P$, whose orbits are the fibers of $P$. For each $u\in P$, mapping $A$ to $X^A(u)$ induces a linear isomorphism from $T_eG$ to the vertical subspace $\ker(T_u\pi)\subset T_uP$. So these are the vectors tangent to the fibers of $P$.
|
H: Computing the number of $\sigma$-algebras.
Let $X$ be a set. How many $\sigma$-algebras of subsets of $X$ contain exactly $m$ elements?
Any hints for how to begin a solution to this problem are greatly appreciated.
My initial approach is as follows:
Let $|X| = n$, then $|P(X)| = 2^n$
Thus our count is given by
$\binom{2^n}{m}$
We have no information as to the cardinality of $X$.
AI: Algebras (and thus $\sigma$-algebras) of subsets of a finite set correspond to partitions of the set. If $P = \{P_1, \ldots, P_k\}$ is a partition of $X$, i.e. disjoint nonempty subsets whose union is $X$, then you get an algebra consisting of the $2^k$ unions of subsets of $P$. So if $m = 2^k$, the number of algebras of subsets of $X$ with exactly $m$ elements is the number of partitions of $X$ into $k$ parts. If $m$ is not a power of $2$, there are no algebras with cardinality $m$.
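Concretely, when $X$ is finite with $n$ elements and $m=2^k$, the count is the Stirling number of the second kind $S(n,k)$. A short sketch in plain Python (names illustrative):
from math import comb, factorial

def stirling2(n, k):
    # partitions of an n-set into k nonempty blocks, via inclusion-exclusion
    return sum((-1)**j * comb(k, j) * (k - j)**n for j in range(k + 1)) // factorial(k)

def count_algebras(n, m):
    # algebras of subsets with exactly m elements on an n-element set
    k = m.bit_length() - 1
    return stirling2(n, k) if m == 2**k else 0

print(count_algebras(4, 4))   # 7: the partitions of a 4-set into 2 blocks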
|
H: Simplifying trigonometry function by substitution
I want to simplify the following expression: $\frac{{\left( {1 + \sqrt 3 \tan {1^{\circ}}} \right)\left( {1 + \sqrt 3 \tan {2^{\circ}}} \right)\left( {\tan {1^{\circ}} + \tan {{59}^{\circ}}} \right)\left( {\tan {2^{\circ}} + \tan {{58}^{\circ}}} \right)}}{{\left( {1 + {{\tan }^2}{1^{\circ}}} \right)\left( {1 + {{\tan }^2}{2^{\circ}}} \right)}}$
My approach is as follows:
$T = \left( {1 + \sqrt 3 \tan {1^{\circ}}} \right)\left( {1 + \sqrt 3 \tan {2^{\circ}}} \right)\left( {\tan {1^{\circ}} + \tan {{59}^{\circ}}} \right)\left( {\tan {2^{\circ}} + \tan {{58}^{\circ}}} \right){\cos ^2}{1^{\circ}}{\cos ^2}{2^{\circ}}$
$\Rightarrow \tan {58^{\circ}}\left( {1 + \sqrt 3 \tan {2^{\circ}}} \right) = \sqrt 3 - \tan {2^{\circ}}$
$\Rightarrow \tan {59^{\circ}}\left( {1 + \sqrt 3 \tan {1^{\circ}}} \right) = \sqrt 3 - \tan {1^{\circ}}$
$\tan {60^{\circ}} = \frac{{\left( {\tan {1^{\circ}} + \tan {{59}^{\circ}}} \right)}}{{\left( {1 + \tan {{59}^{\circ}}\tan {1^{\circ}}} \right)}} \Rightarrow \left( {\tan {1^{\circ}} + \tan {{59}^{\circ}}} \right) = \sqrt 3 \left( {1 + \tan {{59}^{\circ}}\tan {1^{\circ}}} \right)$
$\tan {60^{\circ}} = \frac{{\left( {\tan {2^{\circ}} + \tan {{58}^{\circ}}} \right)}}{{\left( {1 + \tan {{58}^{\circ}}\tan {2^{\circ}}} \right)}} \Rightarrow \left( {\tan {2^{\circ}} + \tan {{58}^{\circ}}} \right) = \sqrt 3 \left( {1 + \tan {{58}^{\circ}}\tan {2^{\circ}}} \right)$
$T = \frac{{\sqrt 3 - \tan {1^{\circ}}}}{{\tan {{59}^{\circ}}}} \times \frac{{\sqrt 3 - \tan {2^{\circ}}}}{{\tan {{58}^{\circ}}}} \times \sqrt 3 \left( {1 + \tan {{59}^{\circ}}\tan {1^{\circ}}} \right) \times \sqrt 3 \left( {1 + \tan {{58}^{\circ}}\tan {2^{\circ}}} \right){\cos ^2}{1^{\circ}}{\cos ^2}{2^{\circ}}$
After this step I am confused
AI: $$
\tan 59^{\circ} = \frac{\sqrt{3}-\tan 1^{\circ}}{1+\sqrt{3} \tan 1^{\circ}}
$$
$$
\tan 59^{\circ}+\tan 1^{\circ} = \frac{\sqrt{3}(1+\tan^2 1^{\circ})}{1+\sqrt{3} \tan 1^{\circ}},
$$so we have
$$
(1+\sqrt{3} \tan 1^{\circ})(\tan 59^{\circ}+\tan 1^{\circ})=\sqrt{3}(1+\tan^2 1^{\circ})
$$A similar argument works for $\tan 2^{\circ}$ and $\tan 58^{\circ}$. So the answer is $3$.
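A quick numerical check of the final value (a sketch using Python's math module):
from math import tan, radians

t = lambda deg: tan(radians(deg))
T = ((1 + 3**0.5 * t(1)) * (1 + 3**0.5 * t(2)) * (t(1) + t(59)) * (t(2) + t(58))) \
    / ((1 + t(1)**2) * (1 + t(2)**2))
print(T)   # 3.0000..., i.e. 3 up to floating-point rounding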
|
H: A set S is a subspace of an inner product space $V$ iff $(S^{\perp})^{\perp}=S$?
$S$ is any subset of a finite-dimensional inner product space $V$. Prove or disprove that $S$ is a subspace of $V$ iff $(S^{\perp})^{\perp}=S$.
My approach was to consider a minimum set $B$ of linearly independent vectors which spans $S$. If such $B$ exists then there exists some $v \in B$ and $v \notin S$, but $v \in (S^{\perp})^{\perp}=(B^{\perp})^{\perp}=B$ so contradiction.
What I can't deal with is how I can consider such a $B$. Can I just say "Try every set of $V$ and find the minimum?"
Any help is appreciated.
AI: First of all, an orthogonal complement is always a subspace, so if $S=(S^{\perp})^{\perp},$ then $S$ must be a subspace.
Conversely, it is easy to see that $S \subseteq (S^{\perp})^{\perp}.$ For the other containment,
Hint: First show $$V= S \oplus S^{\perp},$$ so for $u \in (S^{\perp})^{\perp},$ there exists $u_1 \in S,u_2 \in S^{\perp}$ such that $u=u_1+u_2.$ It follows $u-u_1 \in S^{\perp}.$ You can also show $u-u_1 \in(S^{\perp})^{\perp}.$ Conclude $u =u_1 \in S.$
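For intuition, here is a numerical sketch (assuming NumPy and SciPy) checking $(S^{\perp})^{\perp}=S$ for a subspace $S$ spanned by the rows of a random matrix:
import numpy as np
from scipy.linalg import null_space, orth

rng = np.random.default_rng(1)
M = rng.standard_normal((2, 5))        # rows span a 2-dimensional subspace S of R^5
V = null_space(M)                      # orthonormal basis of S-perp
W = null_space(V.T)                    # orthonormal basis of (S-perp)-perp
U = orth(M.T)                          # orthonormal basis of S itself
print(np.allclose(W @ W.T, U @ U.T))   # True: same orthogonal projector, same subspace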
|
H: On the double series $\sum_{(m,n)\in\mathbb{Z}^2\setminus\{(0,0)\}}\frac{m^2+4mn+n^2}{(m^2+mn+n^2)^s}$
I'm having a difficult time calculating the series
$$\sum_{(m,n)\in\mathbb{Z}^2\setminus\{(0,0)\}}\frac{m^2+4mn+n^2}{(m^2+mn+n^2)^s} \quad , \quad s>2$$
I don't even know where to start. Truth be told, I don't have that much experience with double sums and they are not my strong suit.
AI: It seems to me that this is a trick question because the series vanishes for reasons of symmetry.
Let $\vec{u}$ and $\vec{v}$ be two unit vectors forming a 60 degree angle. It is easy to see (law of cosines) that
$$
||m\vec{u}+n\vec{v}||^2=m^2+mn+n^2.
$$
Let $R$ be the sixty degree rotation, i.e. the linear transformation defined by
$R(\vec{u})=\vec{v}$ and $R(\vec{v})=\vec{v}-\vec{u}$.
So $R$ is an order-six transformation, and for every vector $\vec{w}$ we have
$$||R(\vec{w})||^2=||\vec{w}||^2.$$
Consider an orbit of $\langle R\rangle$, that is, fix a pair $(m,n)$, and apply the transformations $R^i, i=0,1,\ldots,5$, to the vector $\vec{w}_0=m\vec{u}+n\vec{v}$. It follows that
the vectors $\vec{w}_i=R^i(\vec{w}_0)$, $i=0,1,\ldots,5$, all have the same length.
In other words, the pairs
$$(m,n), (m+n,-m), (n,-m-n), (-m,-n), (-m-n,m), (-n,m+n)$$
all contribute terms sharing the same denominator.
But the numerators of such a sextet of terms add up to zero.
Write
$Q(m,n)=m^2+4mn+n^2$ (using a new letter to avoid a clash with the rotation $R$). Then the first three vectors contribute
$$
\begin{aligned}
&Q(m,n)+Q(m+n,-m)+Q(n,-m-n)\\
=&m^2+4mn+n^2+(m+n)^2-4m(m+n)+m^2+n^2-4n(m+n)+(m+n)^2\\
=&(1+1-4+1+1)m^2+(4+2-4-4+2)mn+(1+1+1-4+1)n^2\\
=&0.
\end{aligned}
$$
The negatives of those three vectors give the same sum because the numerator is homogeneous of degree two, so $Q(-m,-n)=Q(m,n)$, and the claim follows.
Whenever the series converges absolutely, the rearrangement theorem allows us to sum it one sextet at a time, and the series vanishes.
The vectors $\vec{w}$ with norm roughly equal to $R$ fall on the circle of radius $R$ (give or take $1/2$, rounding to the nearest integer here). Given that the vectors form the hexagonal lattice, there are $\mathcal{O}(R)$ such vectors (the count scales with the circumference of that circle). The absolute values of those terms thus sum up to roughly $\mathcal{O}(R\cdot R^2/R^{2s})=\mathcal{O}(R^{3-2s})$. The sum over $R$ of $R^{3-2s}$ converges when $s>2$, so the series converges absolutely and we are done.
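The sextet cancellation can be checked symbolically (a sketch assuming SymPy):
import sympy as sp

m, n = sp.symbols('m n')
Q = lambda a, b: a**2 + 4*a*b + b**2        # the numerator
orbit = [(m, n), (m + n, -m), (n, -m - n),
         (-m, -n), (-m - n, m), (-n, m + n)]
print(sp.expand(sum(Q(a, b) for a, b in orbit)))   # 0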
|
H: Definition and intuition of a tubular neighborhood of a submanifold
Let $d\in\mathbb N$, $k\in\{1,\ldots,d\}$ and $M$ be a $k$-dimensional embedded $C^1$-submanifold of $\mathbb R^d$ with boundary$^1$.
Now let $$T_xM:=\left\{v\in\mathbb R^d\mid\exists\varepsilon>0,\gamma\in C^1((-\varepsilon,\varepsilon),M):\gamma(0)=x,\gamma'(0)=v\right\}$$ denote the tangent space of $M$ at $x$ and $N_xM:=(T_xM)^\perp$ for $x\in M$.
I'm trying to understand the definition of a tubular neighborhood of $M$. I was only able to find definitions of this notion in a way more general setting which built up on several concepts I'm not familiar with. So, I'd like to find a simplified, but equivalent, definition for my present setting.
What I've understood is that one considers the space $$N(M):=\{(x,v):x\in M\text{ and }v\in N_xM\}$$ and the map $$E(x,v):=x+v\;\;\;\text{for }(x,v)\in N(M).$$
Now the usual definition of a tubular neighborhood $U$ of $M$ is that it is a (open?) neighborhood of $M$ in $\mathbb R^d$ (really a neighborhood of all of $M$?) such that $U$ is the diffeomorphic image under $E$ of an open subset $V$ of $N(M)$ with $$V=\{(x,v)\in N(M):\left\|v\right\|<\delta(x)\}\tag1$$ for some continuous function $\delta:M\to(0,\infty)$.
I really have trouble understanding this. The vector $E(x,v)$ is simply a vector originating from $x$ and pointing in the direction $v$. If $M$ is a circle in $\mathbb R^3$, I guess the intuition is that for each point $x$ on the circle all the $v\in N_xM$ build a ring around $x$ by "rotating" the normals around $x$. Doing so for all $x$ on the circle, we obtain a torus. Is this correct so far? All the pictures I've found have confused me.
And how would a tubular neighborhood of a sphere in $\mathbb R^3$ look like? For a sphere, the normal spaces are 1-dimensional ...
$^1$ i.e. each point of $M$ is locally $C^1$-diffeomorphic to $\mathbb H^k:=\mathbb R^{k-1}\times[0,\infty)$.
If $E_i$ is a $\mathbb R$-Banach space and $B_i\subseteq E_i$, then $f:B_1\to E_2$ is called $C^1$-differentiable if $f=\left.\tilde f\right|_{B_1}$ for some $E_1$-open neighborhood $\Omega_1$ of $B_1$ and some $\tilde f\in C^1(\Omega_1,E_2)$ and $g:B_1\to B_2$ is called $C^1$-diffeomorphism if $g$ is a homeomorphism from $B_1$ onto $B_2$ and $g$ and $g^{-1}$ are $C^1$-differentiable.
AI: Your intuition seems to be right. I like to describe it as an open disk bundle i.e. the fibre over the point $p \in M$ looks like $\{p\} \times O^n$ where $O$ is an open disk of dimension $n$ (which is also equal to the dimension of the normal bundle, or even fancier, the codimension of $M$). So you have a tube that encapsulates the submanifold.
If $M$ is a circle in $\mathbb{R}^3$ then you are right, its tubular neighbourhood is exactly an (open solid) torus i.e. $S^1 \times O^2$ so it's actually a trivial bundle.
(really a neighborhood of all of $M$?)
Yes, all of $M$. Now let us consider the sphere $S^2 \subset \mathbb{R}^3$. You get essentially an (open) thickened sphere, you can think of it as $S^2 \times O^1 \cong S^2 \times (0,1)$. Note the dimension of the open disk you are working with, since the sphere is codimension $1$!
The general idea is that any submanifold can be identified with the zero section in its normal bundle, and in the normal bundle you have a little "wiggle room" in the normal direction around each point and gluing all these little wiggle rooms together give you an open neighbourhood of $M$ in the ambient manifold.
|
H: Is the product of two Cesaro convergent series Cesaro convergent?
Let $\{a_n \}_{n \geq 1}$ and $\{b_n \}_{n \geq 1}$ be two sequences of real numbers such that the infinite series $\sum\limits_{n=1}^{\infty} a_n$ and $\sum\limits_{n=1}^{\infty} b_n$ are both convergent in the Cesaro sense i.e. \begin{align*} \lim\limits_{n \to \infty} \frac 1 n \sum\limits_{k=1}^{n} s_k & < + \infty \\ \lim\limits_{n \to \infty} \frac 1 n \sum\limits_{k=1}^{n} t_k & < + \infty \end{align*}
where $\{s_k \}_{k \geq 1}$ and $\{t_k\}_{k \geq 1}$ are the sequences of partial sums of the series $\sum\limits_{n=1}^{\infty} a_n$ and $\sum\limits_{n=1}^{\infty} b_n$ respectively. Can I say that $\sum\limits_{n=1}^{\infty} a_n b_n$ is convergent in the Cesaro sense? If "yes", then what can I say about its limit in terms of the limits of the given two series?
AI: No. Consider $a_{n}=b_n=(-1)^n$. Then both of them are Cesaro summable but $c_n=a_n \cdot b_n= 1$ isn't, since $\lim\limits_{n \to \infty} \frac 1 n \sum\limits_{k=1}^{n} u_k= \lim\limits_{n\to \infty}\frac{1}{n}\frac{(n+1)n}{2}=\infty$
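A numerical illustration of this counterexample (plain Python):
def cesaro_means(terms):
    # running averages (1/n) * (s_1 + ... + s_n) of the partial sums
    s = csum = 0
    out = []
    for k, a in enumerate(terms, start=1):
        s += a
        csum += s
        out.append(csum / k)
    return out

N = 100000
print(cesaro_means([(-1)**n for n in range(1, N + 1)])[-1])  # about -0.5
print(cesaro_means([1] * N)[-1])                             # about N/2: diverges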
|
H: Set of possible reflections of a vector.
How many vectors can one construct by reflecting a vector $b\in\mathbb{R}^d$ for $b\neq 0$?
Reflections can be described by Householder matrices $H=I-2vv^T/||v||_2^2$.
In other words, I'm interested in the subset $S(b):=\{v\mid \exists \text{ Householder Matrix } H: v=Hb\}\subseteq \mathbb{R}^d$.
Is it true that $S(b)=\{ v \mid ||v||_2^2=||b||_2^2\}$?
AI: Yep! Every reflection of $b$ has the same norm, as $H$ is orthogonal. Specifically,
\begin{align*}
H^\top H &= \left(I - 2\frac{vv^\top}{\|v\|^2}\right)^\top \left(I - 2\frac{vv^\top}{\|v\|^2}\right) \\
&= \left(I - 2\frac{vv^\top}{\|v\|^2}\right) \left(I - 2\frac{vv^\top}{\|v\|^2}\right) \\
&= I - 4\frac{vv^\top}{\|v\|^2} + 4\frac{v(v^\top v) v^\top}{\|v\|^4} \\
&= I - 4\frac{vv^\top}{\|v\|^2} + 4\frac{v v^\top}{\|v\|^2} \\
&= I = HH^\top.
\end{align*}
Therefore $\|Hb\| = \|b\|$ for all such matrices $H$.
On the other hand, choose any $c$ with $\|c\| = \|b\|$ and $c \neq b$ (for $c = b$ itself, any $v$ orthogonal to $b$ works), and let $v = b - c$. I claim that the $H$ corresponding to $v$ maps $b$ to $c$. We have
\begin{align*}
Hb - c &= \left(I - 2\frac{vv^\top}{\|v\|^2}\right)b - c \\
&= b - 2\frac{vv^\top b}{\|v\|^2} - c \\
&= b - c - 2(v \cdot b) \frac{v}{\|v\|^2} \\
&= b - c - 2((b - c) \cdot b) \frac{(b - c)}{\|b - c\|^2} \\
&= ((b - c) \cdot (b - c) - 2(b - c) \cdot b)\frac{b - c}{\|b - c\|^2} \\
&= ((b - c) \cdot (-b - c))\frac{b - c}{\|b - c\|^2} \\
&= (c \cdot c - b \cdot b)\frac{b - c}{\|b - c\|^2} \\
&= (\|c\|^2 - \|b\|^2)\frac{b - c}{\|b - c\|^2} \\
&= 0.
\end{align*}
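A numerical sketch of the construction (assuming NumPy):
import numpy as np

rng = np.random.default_rng(2)
b = rng.standard_normal(4)
c = rng.standard_normal(4)
c *= np.linalg.norm(b) / np.linalg.norm(c)    # rescale so that ||c|| = ||b||

v = b - c
H = np.eye(4) - 2 * np.outer(v, v) / (v @ v)  # Householder matrix for v = b - c
print(np.allclose(H @ b, c))                  # True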
|
H: Common root of a cubic and a biquadratic equation
Find the value of $a$ given that the cubic equation
$$x^3+2ax+2=0$$
and the biquadratic equation
$$x^4+2ax^2+1=0$$
have a common root.
I know how to use the common root condition for two quadratic equations, but I don't know how to solve this...
AI: Assume $r$ is the common root. Then,
$$r^3+2ar+2=0\tag1$$
$$r^4+2ar^2+1=0\tag2$$
Take $(1)\cdot r-(2)$ to obtain $2r-1=0$, i.e. $r=\frac12$, and then $a=-\frac{17}8$ from $(1)$.
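A quick check of these values (plain Python):
a, r = -17/8, 1/2
print(r**3 + 2*a*r + 2)      # 0.0
print(r**4 + 2*a*r**2 + 1)   # 0.0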
|
H: Question about Chinese Remainder Theorem
This is a homework problem that I'm confused about. I understand the solution until it says "With the Chinese Remainder Theorem and some computation this shows that $n \equiv 301 \pmod{420}$." I'm not sure how to use the Chinese Remainder Theorem to get this; any explanation is appreciated.
A woman with a basket of eggs finds that if she removes either 2, 3, 4, 5, or 6 at a time from the basket, there is always one egg left over. If she removes 7 eggs at a time from the basket, there are no eggs left over. If the basket holds up to 500 eggs, how many eggs does she have?
Solution.
Let $n$ be the number of eggs in the basket. We know that $n$ is $1$ mod $2,3,4,5,6$. This means $n-1$ is $0$ mod $2,3,4,5,6$ and is divisible by all of those numbers. Since the LCM of $2,3,4,5,6$ is $60$, $n-1$ is a multiple of $60$, and $n \equiv 1 \pmod{60}$. Finally, we know that $n \equiv 0 \pmod{7}$. With the Chinese Remainder Theorem and some computation this shows that $n \equiv 301 \pmod{420}$. So the possible values of $n$ are $301, 721, 1141, \ldots$ and the only one less than $500$ is $\boxed{301}$
AI: You can just try the positive integers that are congruent to $1$ modulo $60$ until you find one that is also divisible by $7.$ The first few $61,121,181,241$ don't work, but $301=7\cdot 43$ does work. I believe the function of CRT here is to let you know that this is the unique solution that is a positive integer that is less than or equal to $60\cdot 7=420.$
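The brute-force search can be written in one line (plain Python):
print([n for n in range(1, 841) if n % 60 == 1 and n % 7 == 0])
# [301, 721]: exactly one solution in each block of 420 consecutive integers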
|
H: Surjectivity of stalks of holomorphic functions
Let $X=\mathbb{C}$ with the classical topology, let $\mathcal{O}_X$ be the sheaf of holomorphic functions, and let $\mathcal{O}^*_X$ be the sheaf of invertible (nowhere $0$) holomorphic functions. I'd like to understand why $\rm exp: \mathcal{O}_X\rightarrow \mathcal{O}^*_X$ is an epimorphism.
Since $\rm exp$ is an epimorphism if and only if it is surjective on the stalks, we can consider the induced map $\rm exp:\mathcal{O}_{X_p}\rightarrow \mathcal{O}^*_{X_p}$. So let $(f,U)\in\mathcal{O}^*_{X_p}$. We would like to find a $g(z)\in\mathcal{O}_{X_p}$ such that $\rm exp (g(z))=f(z)$ on some open neighborhood $V$ of $p$ contained in $U$.
Now, perhaps the answer to this is a result in complex analysis that I'm unaware of. What I'd like to know is: why is this true, and how can surjectivity fail if we look at the level of sections instead of stalks?
AI: Surjectivity on stalks is a consequence of the fact that you can define (a branch of) $\log (f(z))$ in a neighborhood. It fails to be surjective on global sections because $\log$ cannot be globally defined as a single-valued function (the analytic continuation around the origin gives rise to a discrepancy of $2\pi i$).
|
H: Find volume under given contraints on the Cartesian plane.
The constraints are given as
$$x^2+y^2+z^2 \leqslant 64,\,x^2+y^2\leqslant 16,\,x^2+y^2\leqslant z^2,\,z\geqslant 0\,.$$
With the goal of finding the Volume.
Personally, I have trouble translating the constraints into integrals to find the volume. Logically, though, $z$ can be at most $8$,
and $x$ or $y$ no larger than $4$.
AI: The problem is simpler if you convert to cylindrical coordinates. In this coordinate system, we have $\rho=\sqrt{x^2+y^2}$, $z=z$, and $\phi=\textrm{arctan}\left(\frac{y}{x}\right)$ (sort of - see the wikipedia article for a better explanation of how $\phi$ relates to $x$ and $y$). An important point is that $\rho$ is taken to be positive, while $\phi$ dictates direction completely.
Under this system, we have the bounds $\rho^2+z^2\leq 64, \rho^2\leq 16, \rho^2\leq z^2, z\geq 0$. From these inequalities, we have that $z$ may range anywhere from $0$ to $8$, and there is no restriction on $\phi$ (so it ranges from $0$ to $2\pi$). Combining the inequalities involving $\rho$, we get that as $z$ ranges from $0$ to $4$, $\rho$ ranges from $0$ to $z$. Then, as $z$ ranges from $4$ to $\sqrt{48}$, $\rho$ ranges from $0$ to $4$. Finally, as $z$ ranges from $\sqrt{48}$ to $8$, $\rho$ ranges from $0$ to $\sqrt{64-z^2}$.
We can write each of these three volumes as triple integrals. The volume element in cylindrical coordinates, $dV$, is $\rho\;d\rho\;dz\;d\phi$. So the first will be
$$\int_0^{2\pi}\int_0^4\int_0^z 1\times \; \rho\;d\rho\;dz\;d\phi$$
Similarly the second will be $$\int_0^{2\pi}\int_4^{\sqrt{48}}\int_0^4 1\times \; \rho\;d\rho\;dz\;d\phi$$
and the third will be $$\int_0^{2\pi}\int_{\sqrt{48}}^8\int_0^{\sqrt{64-z^2}} 1\times \; \rho\;d\rho\;dz\;d\phi$$
Solve each triple integral from the inside out, add the three together, and you'll have your answer.
(feel free to comment or edit for any correction or suggestion)
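If you want to check the arithmetic, here is a numerical sketch (assuming SciPy) with the innermost $\rho$-integral $\int_0^b \rho\,d\rho = b^2/2$ already done by hand:
import numpy as np
from scipy.integrate import quad

V1 = quad(lambda z: 2*np.pi * z**2 / 2, 0, 4)[0]
V2 = quad(lambda z: 2*np.pi * 4**2 / 2, 4, np.sqrt(48))[0]
V3 = quad(lambda z: 2*np.pi * (64 - z**2) / 2, np.sqrt(48), 8)[0]
print(V1 + V2 + V3)   # the total volume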
|
H: If the angle between the line $x=y=cz$ and the plane $z=0$ is $45^\circ$, find all values of $c$.
If the angle between the line $x=y=cz$ and the plane $z=0$ is $45^\circ$, find all values of $c$.
Before actually doing the calculations I thought I would get $c=1$ since the angle between $x=y=z$ and the $xy$-plane is $45^\circ$.
However, using the facts I know I got a different result:
Denote the asked angle by $\alpha$. Since the direction vector of the line is $(1,1,1/c)$ and the normal vector of the plane is $(0,0,1)$, we have
$$\sin\alpha= \frac{(1,1,\frac{1}{c})(0,0,1)}{\|(1,1,\frac{1}{c})\|\|(0,0,1)\|}=\frac{\frac{1}{c}}{\sqrt{2+\frac{1}{c^2}}}=\frac{1}{\sqrt 2}$$
Squaring both sides, we get
$$2c^2+1=2 \implies c=\pm \frac{1}{\sqrt 2}$$
Which way is correct?
AI: If $x = z$, and $y = 0$ the line makes a $45^\circ$ angle with the plane.
Same for $y = z$ and $x = 0$
This might explain your intuition.
In fact, the set of all lines through the origin that form a $45^\circ$ angle with the plane would be the cone $x^2 + y^2 = z^2$.
Restricting to $x = y$ gives $2x^2 = z^2$,
or $x = y =\pm \frac {\sqrt 2}{2} z$
Does this help?
|
H: What does this function do? (why do we xor?)
I stumbled across some source code on GitHub, but the author did not provide any documentation so I'm trying to figure it out by myself.
def s(k,x,mul=gf832_mul):
    if k==0:
        return x
    tmp = s(k-1,x,mul)
    result = mul(tmp,tmp)
    return result ^ tmp
It's my understanding that this code squares the second parameter (in a Galois field) and then XORs the result with the previous value. It does this recursively k times. What I don't understand is why.
For example, for k=2, and x=4 it would be the same as doing this:
((4*4)^4 * (4*4)^4) ^ (4*4)^4
In the first iteration, tmp will be 4, and in the second it will be (4*4)^4. If k were 3, the result would be the square of the aforementioned k=2 value XORed with that value. All done in GF(8).
I can understand wanting to find the product of two numbers in a GF, but why would you XOR the result?
AI: The elements of $GF(8)$ are presumably represented as $3$-bit numbers standing for vectors over $GF(2)$. Then XOR is the addition in the field.
When $k=0$ the function returns $x$;
When $k=1$ the function returns $x^2+x$;
When $k=2$ the function returns $(x^2+x)^2+x^2+x$;
and so on, where addition and multiplication are in $GF(8)$.
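To experiment with this, here is a self-contained sketch; note that gf832_mul is not shown in the question, so the multiplication below assumes the irreducible polynomial $x^3+x+1$, which may differ from the original's choice:
def gf8_mul(a, b, poly=0b1011):
    # carry-less multiplication in GF(2^3), reducing by x^3 + x + 1 (an assumption)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= poly
        b >>= 1
    return r

def s(k, x, mul=gf8_mul):
    if k == 0:
        return x
    tmp = s(k - 1, x, mul)
    return mul(tmp, tmp) ^ tmp       # tmp^2 + tmp, since XOR is addition in GF(8)

print([s(1, x) for x in range(8)])   # the values of x^2 + x over GF(8)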
|
H: Let $G$ be a finite non-solvable group, each of whose proper subgroups is solvable.
Show that $G/\Phi(G)$ is a non-abelian simple group, where $\Phi(G)$ denotes the Frattini subgroup of $G$
So $G/\Phi(G)$ can't be abelian, since if it were then it would be solvable, and since $\Phi(G)$ is a solvable normal subgroup of $G$, that would imply that $G$ is solvable.
For the next part, I think I was able to prove that $G$ is simple, which would mean $G/\Phi(G)$ is simple by the correspondence theorem, but my intuition is telling me that showing $G$ is simple is a little too overreaching, so I think I made a mistake.
For the sake of contradiction suppose $G$ has a proper nontrivial normal subgroup. Let $N$ be a minimal proper normal subgroup and let $P$ be a Sylow subgroup of $N$. So by the Frattini Argument $G = N_G(P)N$. Since $N$ is minimal normal, $N_G(P)$ must be a proper subgroup. But $N_G(P)N/N \cong N_G(P)/(N_G(P)\cap N)$, which is solvable since $N_G(P)$ is solvable and the quotient group of a solvable group is solvable.
AI: Let $N$ be any proper normal subgroup of $G$. If $N \not\le \Phi(G)$, then there is a maximal subgroup $M$ of $G$ with $N \not \le M$. So then $G = NM$ by maximality of $M$, and then $N$ and $M$ are both solvable and hence so is $G$.
So $N \le \Phi(G)$ and $G/\Phi(G)$ must be simple.
|
H: A compact normal operator is diagonalisable.
Consider the following proof in Murphy's book: "$C^*$-algebras and operator theory":
I am trying to understand why $u(K^\perp) \subseteq K^\perp$. Equivalently, one can prove $u^*(K) \subseteq K$.
I tried to calculate $\langle u(x), y \rangle$ with $x \in K^\perp, y \in K$ and I hoped to calculate that this is equal to $0$ but I must be missing something.
AI: At first observe that if $u$ is normal and $u(x) = \lambda x$, then $u^*(x) = \overline{\lambda} x$ (it is a simple exercise).
Now we can prove the following proposition: if $u$ is normal, $E$ is an arbitrary set of eigenvectors of $u$, and $K = \overline{\operatorname{span}(E)}$, then $K^\perp$ is $u$-invariant. Indeed, $x \in K^\perp$ iff $\langle x, a \rangle = 0$ for all $a \in E$. Thus, for all $a \in E$, $\langle u(x), a \rangle = \langle x, u^*(a) \rangle = 0$ (by the foregoing observation). Therefore, $u(x) \in K^\perp$.
$\mathbf{Edit}$: proof of the observation.
It is obvious that operator $v = u - \lambda I$ is normal, since $v^* = u^* - \overline{\lambda}I$. Let $v(x) = 0$. Then
$$ 0 = \langle v(x), v(x) \rangle = \langle v^*(v(x)), x \rangle = \langle v(v^*(x)), x \rangle = \langle v^*(x), v^*(x) \rangle = 0$$
Thus, $v^*(x) = 0$, i.e. $u^*(x) = \overline{\lambda}x$.
|
H: Calculate Hessian of a "weird" function
Let $L$ be an invertible matrix, $b \in \mathbb{R}^{n}$, and $\mu \in \mathbb{R}$,
and $J: \mathbb{R}^{n} \rightarrow \mathbb{R}$ defined as $J(x)=\|L x\|_{2}^{2}+\mu x^{t} x-b^{t} x$
How do I calculate its Hessian matrix?
I can't treat it like a usual function and compute its gradient, and I also don't know what to do with that norm.
Progress:
$\|L x\|_{2}^{2} =\sum_{i=1}^{n}\left(\sum_{j=1}^{n} l_{i,j}\cdot x_{j}\right)^2$
$\mu x^{t}\cdot x =\sum_{i=1}^{n} \mu \cdot x_{i}^2$
$-b^{t} x = - \sum_{i=1}^{n} b_{i} \cdot x_{i}$
so $J(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{n} l_{i,j}\cdot x_{j}\right)^2+\sum_{i=1}^{n} \mu \cdot x_{i}^2 - \sum_{i=1}^{n} b_{i} \cdot x_{i} $
so the gradient is
$\nabla J(x)=(\sum_{i=1}^{n} l_{i,1} + 2 \cdot \mu \cdot x_{1} - b_{1}, ..., \sum_{i=1}^{n} l_{i,n} + 2 \cdot \mu \cdot x_{n} - b_{n} ) $
and the Hessian is the diagonal matrix of $2 \cdot \mu$
Dunno if it is right, and I also think there has to be an easier way of doing it, like operating with the vectors and matrices instead of differentiating in each variable separately.
FINAL EDIT:
As some comments suggested (thank you all), I can write the function as:
$J(x)=x^tL^tLx + \mu x^tx -b^tx$
So:
$ \nabla J(x)= 2(L^tL+I\mu)x-b$
and
$HJ(x)=2L^tL+2 \mu I$
AI: Since the Hessian is the matrix of second partial derivatives $\frac{\partial^2}{\partial x_i \partial x_j}$, the linear term will vanish. You can rearrange the second order term to:
$$(Lx)^t(Lx) + \mu x^tx = x^t(L^tL + \mu I)x.$$
Each second derivative will then select one of the coefficients of this second-degree homogeneous polynomial. Can you see your way to the end of the computation from here?
As for thinking of this in terms of matrices, think first of the derivative. Every linear function can be written as $p_1(x) = v^Tx$ for some vector $v$. The gradient of a smooth function $f$ at a point is defined as this vector for the best linear approximation to $f$ at that point.
Every homogeneous second-degree polynomial $p$ can be written as
$$ p(x) = x^THx $$
for some symmetric matrix $H$. The Hessian of a smooth function $f$ at a point is defined as this matrix for the best second-order approximation to $f$ at that point.
So now you see the root of the strategy to isolate the second order term.
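A numerical sanity check of this Hessian (a sketch assuming NumPy), comparing $2(L^tL+\mu I)$ against finite differences of $J$:
import numpy as np

rng = np.random.default_rng(3)
n, mu = 4, 0.7
L = rng.standard_normal((n, n))
b = rng.standard_normal(n)
J = lambda x: np.linalg.norm(L @ x)**2 + mu * (x @ x) - b @ x

H = 2 * (L.T @ L + mu * np.eye(n))     # the claimed Hessian
x0, h = rng.standard_normal(n), 1e-4
num = np.empty((n, n))
for i in range(n):
    for j in range(n):
        ei, ej = h * np.eye(n)[i], h * np.eye(n)[j]
        # central second difference; exact here since J is quadratic
        num[i, j] = (J(x0+ei+ej) - J(x0+ei-ej) - J(x0-ei+ej) + J(x0-ei-ej)) / (4*h*h)
print(np.allclose(num, H, atol=1e-5))  # True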
|
H: Projective algebraic sets
I want to find a projective algebraic set which is connected but not irreducible. I have tried to find one, but I am stuck: at some point I convinced myself that if a set is not irreducible then it is also not connected.
AI: It is probably worth looking at the notions of connectedness and irreducibility more attentively. A space is not connected if it is a disjoint union of two proper closed subsets, while for reducibility it must be a union of two proper closed subsets, not necessarily disjoint. So, irreducibility implies connectedness, but there is no implication in the other direction.
As for an example, the one given in a comment to the question (the union of two lines on $\mathbb P^2$) works.
|
H: How to prove this matrix is positive semi-definite?
Define $K:\mathbb{R}^{2}\times\mathbb{R}^{2} \to \mathbb{R}$ by
$$K(a,b)=K\left((a_{1},a_{2}),(b_{1},b_{2})\right)=a^{T}b+\cos\left(\frac{(a_{2}-b_{2})\pi}{3}\right)=a_{1}b_{1}+a_{2}b_{2}+\cos\left(\frac{(a_{2}-b_{2})\pi}{3}\right).$$
Let $a_{1},a_{2},\cdots, a_{n}\in\mathbb{R}^{2}$ be arbitrary vectors, and let $A$ be the $n\times n$ matrix with $A_{ij}=K(a_{i},a_{j}).$
I want to prove that $A$ is positive semi-definite, but I don't know how. Any help is appreciated.
AI: An $n\times n$ matrix whose $r,s$-th entry is $a_{r,s}=\cos(t_r-t_s)$, for some constants $t_1,\ldots,t_n$ is positive semidefinite, in the sense that if $z\in\mathbb C^n$, then $\sum_{r,s}a_{r,s}z_r\bar z_s\ge 0$. This can be seen by recognizing $\cos t$ as the characteristic function of a random variable,
namely $\cos t=E[e^{itX}]$, where $X=\pm 1$ with probabilities $1/2$ and $1/2$. (Which is just a complicated way of saying $\cos t=(e^{it}+e^{-it})/2$.)
One can check this by appealing to Bochner's theorem, or directly:
$$\sum_{r,s}\cos(t_r-t_s)z_r\bar z_s=\sum_{r,s}E[e^{i(t_r-t_s)X}] z_r\bar z_s\\=\sum_{r,s}E[e^{it_rX}z_re^{-it_sX}\bar z_s]=E\left[ \left|\sum_r e^{it_rX}z_r\right|^2\right] \ge 0.$$
The OP's matrix is the sum of such a matrix and of a Gram matrix, and hence is psd as well.
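A quick numerical sketch of the key fact (assuming NumPy): matrices of the form $a_{r,s}=\cos(t_r-t_s)$ have no negative eigenvalues.
import numpy as np

rng = np.random.default_rng(4)
t = rng.uniform(0, 2*np.pi, size=6)
A = np.cos(t[:, None] - t[None, :])   # A[r, s] = cos(t_r - t_s)
print(np.linalg.eigvalsh(A).min())    # >= 0 up to rounding: the matrix is psd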
|
H: Rotating a quarter circle -- how long has a point traveled.
Question: see the quarter circle $AOB$ below. $P$ is the midpoint of $AO$. $OM$ is considered as the "ground surface". We keep rotating $AOB$ to the right, until $OB$ sits on the ground surface again. How far has $P$ traveled during this time?
This puzzle reminded me of this infamous SAT question: https://mindyourdecisions.com/blog/2015/07/05/everyone-got-this-sat-math-question-wrong-can-you-solve-it-sunday-puzzle/
But it looks even harder since it's not a full circle, rather an oddly shaped quarter circle $AOB$...
[EDIT] As some hints suggested, the most difficult part is when "the circular arc rolls on the ground". How exactly do I calculate that? It looks like it's part of the https://mathworld.wolfram.com/CurtateCycloid.html and it looks awfully complicated...
AI: The difficult segment is the second one, where the quarter circle rolls through a $\frac\pi2$ arc on the ground. Assuming unit radius, you may parametrize the path
of $P$ with
$$x=t+\frac12\cos t ,\>\>\>\>\>y=-\frac12\sin t$$
Then, the path length of the second segment is
$$ \int_0^{\pi/2}\sqrt{(x'(t))^2 + (y'(t))^2 }\,dt=\int_0^{\pi/2} \sqrt{\frac54-\sin t}\,dt\approx 1.19
$$
where the integral is elliptic, evaluated numerically.
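The numerical evaluation can be reproduced as follows (a sketch assuming SciPy):
import numpy as np
from scipy.integrate import quad

length, _ = quad(lambda t: np.sqrt(5/4 - np.sin(t)), 0, np.pi/2)
print(length)   # approximately 1.19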
|
H: A donut shop sells 12 types of donuts. A manager wants to buy six donuts, one for himself and 5 for his employees.
I'm reading Advanced Combinatorics by Mitchell T. Keller and William T. Trotter and it's missing an answer book (or, at least, I couldn't find one.) I've been doing the exercises but want to make sure I'm crystal on the content. Here are my questions:
Suppose they do this by selecting a specific type of donut for each employee. (He can select the same type of donut for each person. In how many ways could you do this?)
Since there are 5 employees and 12 donuts to choose from without any specific restrictions, the answer is $12^5$.
How many ways could he select the donuts if he wants to ensure that he chooses a different type of donut for each person?
If he is buying for the employees, there are 5 people and 12 donuts to choose from. Then, the answer is $P(12, 5) = \frac{12!}{7!} = 12 \cdot 11 \cdot 10 \cdot 9 \cdot 8 = 95,040$ ways.
Suppose instead that he wishes to select one donut of each of six different types and place them in the break room. In how many ways can he do this? (The order of the donuts in the box is irrelevant.)
For this last question I'm having difficulty understanding the wording. Is the manager splitting the 12 donuts into 6 categories, therefore there are 2 types of donuts in each category (or partitioning if you will)? And then he is choosing?
Then, if that's the case, $P(2,1)^6 = 2^6 = 64$?
AI: Forget my smart ass comments. The book is assuming the boss has a list of employees (himself included, so there are six people), one after another. He buys donuts one by one, each one specifically for a specific person.
There are $12$ types of donuts.
How many ways to select donuts for the employees?
Each person has a choice of $12$ types of donuts. There are $12^6$ ways to do this. (Don't forget he is buying a donut for himself as well.)
How many ways to select donuts for the employees if each person gets a different type?
$P(12, 6)$: there are $12$ choices for the first person, $11$ for the second, etc.
How many ways to get six different types but the boss doesn't care about the order or who gets what.
"For this last question I'm having difficulty understanding the wording."
I don't see why. This is simply a matter of choosing $6$ types of donuts from $12$ different types. That is ${12 \choose 6} =\frac {12!}{6!6!}$.
" Is the manager splitting the 12 donuts into 6 categories, therefore there are 2 types of donuts in each category"
Uh... no. He is doing nothing of the sort. He is buying $6$ (not $12$) donuts and putting them on a table and walking away.
==== post script =====
On rereading, question 3 is badly worded: "he wishes to select one donut of each of six different types" makes it sound as though the store only had $6$ types, not $12$. But still, I think it only makes sense as: he bought six different types of donuts (one of each).
And my smart ass comment is that in buying the donuts order doesn't matter. It's only in passing out the donuts that order matters. But that's me being a smart ass.
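For reference, the three counts can be computed directly (assuming Python 3.8+ for math.perm and math.comb):
from math import comb, perm

print(12**6)        # a type chosen independently for each of the 6 people
print(perm(12, 6))  # six distinct types, where it matters who gets which
print(comb(12, 6))  # six distinct types left in the break room, order irrelevant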
|
H: How to prove that $-|z| \le \Re (z) \le |z|$ and $-|z| \le \Im (z) \le |z|$?
I am reading Ahlfors' "Complex Analysis". Early in the book, he uses the fact that for $z \in \mathbb{C}$ we have
$$
-\lVert z\rVert \le \Re (z) \le \lVert z\rVert\qquad \text{and} \qquad -\lVert z\rVert \le \Im (z) \le \lVert z\rVert
$$
He says that these inequalities can be derived from the definitions of the real and imaginary parts, as well as the definition of the absolute value of a complex number. These definitions are as follows:
$$
\Re (z) = \frac{z + \overline{z}}{2} \qquad \Im (z) = \frac{z -\overline{z}}{2i} \qquad \lVert z \rVert^2 = z \overline{z}
$$
I managed to prove the statement using the following method. I write out $z$ explicitly as $z = x + iy$ for some $x, y \in \mathbb{R}$. Using this I can show that these definitions are equivalent to
$$
\Re (z) = x \qquad \Im (z) = y \qquad \lVert z\rVert^2 = x^2 + y^2
$$
Using these new definitions, the fact that $a^2 \ge 0$ for all $a \in \mathbb{R}$, and knowing that the real-valued function $f(x) = \sqrt{x}$ is monotonically increasing on $[0, \infty)$, I can show that
$$
\sqrt{x^2 + y^2} \ge \sqrt{x^2} = |x| \qquad \sqrt{x^2 + y^2} \ge \sqrt{y^2} = |y|
$$
which is equivalent to saying
$$
\lVert z\rVert \ge |\Re (z)| \qquad \lVert z\rVert \ge |\Im (z)|
$$
proving the statement.
I don't like the proof I got because I feel like it "backtracks" into doing grunt work. All the definitions given are written in such a way that you don't need to write out a complex number $z$ as $x + iy$, so I feel like going back to this is not a "clean" proof.
Up to this point, the book has proven that complex conjugation distributes over addition and multiplication of complex numbers, that $\overline{\overline{z}} = z$, and the following properties (for $a,b \in \mathbb{C}$):
$$
\lVert a + b \rVert ^2 = \lVert a \rVert ^2 + \lVert b \rVert ^2 + 2 \Re\left(a \overline{b}\right) \qquad \quad \lVert a - b \rVert ^2 = \lVert a \rVert ^2 + \lVert b \rVert ^2 - 2 \Re\left(a \overline{b}\right)
$$
I tried using these properties to give a proof of the statement where I didn't have to write out $z = x+iy$ explicitly, but I didn't seem to be able to get anywhere. Does anyone know a way to prove this statement without backtracking as I did? Thank you!
AI: Using the definitions
$$
\Re (z) = \frac{z + \overline{z}}{2} \, , \, \Im (z) = \frac{z -\overline{z}}{2i}
$$
you can compute
$$
\bigl(\Re (z)\bigr)^2 + \bigl(\Im (z)\bigr)^2 = \left(\frac{z + \overline{z}}{2}\right)^2
+ \left(\frac{z -\overline{z}}{2i} \right)^2 = z \overline{z} = \lVert z \rVert^2
$$
so that
$$
\bigl(\Re (z)\bigr)^2\le \lVert z \rVert^2 \implies |\Re (z)| \le \lVert z \rVert
$$
and similarly for the imaginary part.
|
H: Does $g(v_n) \longrightarrow g(0)$ for all $v_n$ such that $||v_{n+1}|| \leq ||v_n||$ imply $g$ is continuous at $0$?
The question is pretty much summed up in the title:
Let $V,W$ be normed vector spaces and $$ g: V \to W$$.
Suppose $g$ satisfies $$g(v_n) \longrightarrow g(0) $$ for all sequences $v_n$ such that $$v_n \longrightarrow 0$$ and $$||v_{n+1}|| \leq ||v_n||.$$
Does this imply continuity of $g$ at $0$?
Intuitively, this should hold true, but I am looking for a proof (or counterexample). Of course, for any sequence $v'_n \longrightarrow 0$ we can select a subsequence that fulfills the above requirement and therefore has the limit $g(0)$. But is this enough to ensure the convergence? Do we need additional restrictions on the spaces $V,W$ for this to hold?
AI: Assume that $v_n \rightarrow 0$ and $g(v_n) \nrightarrow g(0)$. Then there exists a subsequence $v_{k_n}$ s.t. $||g(v_{k_n}) - g(0)|| \ge \varepsilon$ for some $\varepsilon > 0$. Let $u_n$ denote the sequence $v_{k_n}$. There exists a subsequence $u_{m_n}$ s.t. $||u_{m_{n+1}}||\le ||u_{m_n}||$. Then $g(u_{m_n}) \rightarrow g(0)$. But it is already known that $||g(u_{m_n}) - g(0)|| \ge \varepsilon$. This contradiction shows that $g(v_n) \rightarrow g(0)$ for all $v_n \rightarrow 0$. Thus, $g$ is continuous at $0$.
|
H: Use linearisation of a certain function to approximate $\sqrt[3]{30}$
Background
Find the linearisation of the function
$$f(x)=\sqrt[3]{{{x^2}}}$$
at
$$a = 27.$$
Then, use the linearisation to find
$$\sqrt[3]{30}$$
My work so far
Applying the formula
$${f\left( x \right) \approx L\left( x \right) }={ f\left( a \right) + f^\prime\left( a \right)\left( {x - a} \right),}$$
where
$${f\left( a \right) = f\left( {27} \right) }={ \sqrt[3]{{{{27}^2}}} }={ 9.}$$
Then, the derivative using the power rule:
$${f^\prime\left( x \right) = \left( {\sqrt[3]{{{x^2}}}} \right)^\prime }={ \left( {{x^{\frac{2}{3}}}} \right)^\prime }={ \frac{2}{3}{x^{\frac{2}{3} - 1}} }={ \frac{2}{3}{x^{ - \frac{1}{3}}} }={ \frac{2}{{3\sqrt[3]{x}}}.}$$
then
$${f^\prime\left( a \right) = f^\prime\left( {27} \right) }={ \frac{2}{{3\sqrt[3]{{27}}}} }={ \frac{2}{9}.}$$
Substitute this in the equation for $L(x)$:
$${L\left( x \right) = 9 + \frac{2}{9}\left( {x - 27} \right) }={ 9 + \frac{2}{9}x - 6 }={ \frac{2}{9}x + 3.}$$
Then, to use this linearisation to find
$$\sqrt[3]{30}$$
I compute $\Delta x = x - a = 30 - 27 = 3$, as the point of interest is $x = 30$ and the starting point is $a=27$.
Now working with the function $f\left( x \right) = \sqrt[\large 3\normalsize]{x}$, its derivative is
$${f'\left( x \right) = {\left( {\sqrt[\large 3\normalsize]{x}} \right)^\prime } } = {{\left( {{x^{\large\frac{1}{3}\normalsize}}} \right)^\prime } } = {\frac{1}{3}{x^{ - \large\frac{2}{3}\normalsize}} } = {\frac{1}{{3\sqrt[\large 3\normalsize]{{{x^2}}}}},}$$
and its value at point $a$ is equal to
$${f’\left( {a} \right) = \frac{1}{{3\sqrt[\large 3\normalsize]{{{{27}^2}}}}} } = {\frac{1}{{3 \cdot {3^2}}} = \frac{1}{{27}}.}$$
Thus, getting the solution
$${f\left( x \right) \approx f\left( {a} \right) + f'\left( {a} \right)\Delta x\;\;}\Rightarrow {\sqrt[\large 3\normalsize]{{30}} \approx \sqrt[\large 3\normalsize]{{27}} + \frac{1}{{27}} \cdot 3 } = {3 + \frac{1}{9} } = {\frac{{28}}{9} \approx 3.111.}$$
Is my process correct so far? Or did I go wrong in the second part? Also, since $a=27$ comes from the original linearisation, is it carried over into the linear approximation of $\sqrt[3]{30}$?
AI: The second part of your analysis is correct. The function that you want to approximate is $f(x)=\sqrt[3]{{x}}$ at $a=27$, not $g(x)=\sqrt[3]{{x^2}}$. So
$${f'(x) = {\left( {\sqrt[\large 3\normalsize]{x}} \right)^\prime } } = {{\left( {{x^{\large\frac{1}{3}\normalsize}}} \right)^\prime } } = {\frac{1}{3}{x^{ - \large\frac{2}{3}\normalsize}} } = {\frac{1}{{3\sqrt[\large 3\normalsize]{{{x^2}}}}},}$$
and its value at point $a$ equals
$${f'(a) = \frac{1}{{3\sqrt[\large 3\normalsize]{{{{27}^2}}}}} } = {\frac{1}{{3 \cdot {3^2}}} = \frac{1}{{27}}.}$$
Therefore since $f(a)=\sqrt[3]{27}=3$,
$$f(x)\approx L\left( x \right) ={ f\left( a \right) + f^\prime\left( a \right)\left( {x - a} \right)}=3+\frac{1}{27}\left(x-27\right).$$
Hence, the estimate for $\sqrt[3]{30}$ is
$$f(30)\approx3+\frac{1}{27}\left(30-27\right)=3+\frac{1}{9}=3.\overline{1}.$$
The actual value of $\sqrt[3]{30}$ (up to the fifth decimal place) is $3.10723$, so the linear approximation at $a=27$ does quite well.
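A one-line comparison confirms this (plain Python):
print(3 + 1/9)      # linear estimate: 3.111...
print(30 ** (1/3))  # actual value:    3.1072...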
|
H: Are final functors stable under pullback?
Recall the notion of a final functor, which is a sort of colimit-preservation property.
Is this class of functors stable under pullbacks in $\mathbf{Cat}$? Namely, is the pullback of a final functor along any other functor still final? If not, what is a counterexample?
A reference would also be welcome.
AI: No, final functors are not stable under pullback in general.
Let $I := \{0\to1\}$ be the walking arrow category, then $1:*\to I$ picks out the terminal object and is thus final. The diagram
$\require{AMScd}$
\begin{CD}
\varnothing @>>> *\\
@V{F}VV @VV{1}V\\
* @>>{0}> I
\end{CD}
is a pullback square, but the functor $F:\varnothing\to*$ is not final. Indeed, let $G:*\to\mathbf{Set}$ pick any nonempty set $X$, then $\varinjlim G=X$, but $\varinjlim G\circ F=\varnothing$ since the colimit of the empty diagram is just the initial object in $\mathbf{Set}$. As the unique map $\varnothing\to X$ is not bijective, we can conclude that $F$ is not final despite the finality of $1:*\to I$.
However, final functors form an orthogonal factorisation system with discrete fibrations, and so in particular are closed under pushouts in $\mathbf{Cat}$ (see e.g., here)
|
H: Different ways for stating the recursion theorem
I've been trying to understand the recursion theorem for quite a while now and I still don't think I understand it 100%, I've checked multiple books and pdfs and I've noticed that this theorem is often stated in two different ways
Let $A$ be a set, $a ∈ A$, and $r : N × A → A$ any function. Then there exists a function
$f : N → A$ such that (i) $f(0) = a$, and (ii) $f(n + 1) = r(n, f(n))$ for all $n ∈ N$.
Let $X$ be a set, $a ∈ X$, and $f : X → X$. Then there exists a function $u : ω → X$ such that $u(0) = a$ and $u(n^+) = f(u(n))$ for all $n ∈ ω$.
Why is $N×A$ used as the domain of the function in the first case ($r$), while just $X$ is used in the second ($f$)? What is the difference?
AI: At first glance, it seems that the first variant is more general (to compute the next term, you can use the previous term - but also the current index). More formally, if the first version holds and you are given $X,a,f$ as in the second, then apply the first to $A:=X$, $r(n,a):=f(a)$. Then the $f$ (in the notation of 1) is a valid $u$ (in the notation of 2).
Interestingly, however, we can also infer the first variant from the second:
Assume we are given $A,a,r$ as in the first. Let $X=N\times A$ and define $f\colon X\to X$ as $f(\langle n,a\rangle) = \langle n^+,r(n,a)\rangle$. Then the existing $u$, composed with the projection $N\times A\to A$ is a valid solution for the first variant.
|
H: How to prove that there is closure in NP for the reverse operation on strings?
I have a language $A$ which is known to be in NP. If $A^R$ is the language of all words $w^R$, where $w^R$ is a word of $A$ read in reverse, will it still be in NP?
I tried to prove it as follows: because $A$ is in NP, it has a certificate $c$ and therefore a polynomial-time verifier. I conclude that we can build another polynomial-time verifier $V$ which takes in $w^R$ ($w$ read in reverse) and $c^R$ (the certificate for $A$, read in reverse). Because we can build $V$, we conclude that $A^R$ is in NP.
Does my proof make sense, or is there circular reasoning? What should I do to get a correct, or more formal, proof?
AI: Hint: work directly with the definition of NP noting that you can reverse the contents of the input tape of a Turing machine in polynomial time.
|
H: What should $n$ be equal to, so that $5^{2n+1}2^{n+2} + 3^{n+2}2^{2n+1}$ is completely divisible by $19$?
What should $n$ be equal to, so that the number:
$$5^{2n+1}2^{n+2} + 3^{n+2}2^{2n+1}$$
is completely divisible by 19? I broke it into this:
$$20\cdot 2^{n}\cdot 25^{n}+18\cdot 3^{n}\cdot 4^{n}$$ But what should I do next?
AI: Okay, so you have shown the expression to be equal to $20\cdot 2^n\cdot 25^n+18\cdot3^n\cdot4^n=20\cdot 50^n+18\cdot12^n$, and as @mwt has shown in the comments, you can write $20=19+1$ and $18=19-1$ to get the expression equal to $19(50^n+12^n)+50^n-12^n$. Now we know that $a^n-b^n$ is divisible by $a-b$ for any natural number $n$. If you don't know that, you can prove it by factorizing $a^n-b^n$.
So $50^n-12^n$ is divisible by $38$, and so $19$ divides the whole expression for any natural number $n$.
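A quick empirical check (plain Python):
print(all((5**(2*n+1) * 2**(n+2) + 3**(n+2) * 2**(2*n+1)) % 19 == 0
          for n in range(200)))   # True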
|
H: If $ \lim_{x \to +\infty}f(x) = A $ and $ \lim_{x \to +\infty}f'(x) = B $, prove that $B = 0$
Problem: Let $ f: \mathbb{R} \to \mathbb{R} $ be a function of class $ C^1 $ such that $ \lim_{x \to +\infty}f(x) = A $ and $ \lim_{x \to +\infty}f'(x) = B $ for $ A, B \in \mathbb{R} $. Prove that $B = 0$.
I need help in validating my proof. Here it goes:
Suppose that $ B \neq 0 $. Take $ \epsilon = B+1 $. From the definition of limit to infinity, the following holds: $ (\exists M > 0)(\forall x > M) | f'(x) - B | < B+1 $
From there, $ 1 < f'(x) < 2B + 1 $ (for all x greater than M).
Now, take the interval $[M+1, M+2]$. $f$ is continuous on that segment, so it's bounded and attains its maximum and minimum. The function $f$ is also differentiable on that segment, and if we apply Fermat's theorem (on local maxima/minima), we'll get a contradiction, since $f'(x) > 0$ for all $x \in [M+1, M+2]$. Therefore $B = 0$.
AI: A possible solution is :
By the definition of the limit, there exists $M>0$ such that $|f'(x)-B|<\frac{|B|}{2}$ for $x\geqslant M$. We suppose without loss of generality that $B>0$ (otherwise consider $-f$), then for $x\geqslant M$, we have $f'(x)>B-\frac{|B|}{2}=\frac{B}{2}$. Thus, for $x\geqslant M$,
$$ f(x)=f(M)+\int_M^xf'(t)dt\geqslant f(M)+(x-M)\frac{B}{2} $$
Taking the limit as $x\rightarrow +\infty$ gives that $\lim\limits_{x\rightarrow +\infty}f(x)=+\infty$.
|
H: Show that $\sum_n (1-e^{-1/n})$ diverges
I'm stuck on showing that $\sum_n (1-e^{-1/n})$ diverges. The ratio and the root tests are inconclusive. Possibly the comparison test is the way to go, but I can't find a proper bound. I got
$$
1-\frac{1}{e^{1/n}}>1-\frac{1}{2^{1/n}}
$$
but still I can't see here if the last implied series diverges.
AI: If you want an inequality about the exponential function,
$$ e^x\ge 1+x$$
is your best friend. So
$$ e^{\frac1n}\ge 1+\frac1n=\frac{n+1}{n}$$
and
$$e^{-\frac1n}=\frac1{e^{\frac1n}}\le \frac n{n+1}=1-\frac1{n+1},$$
making
$$ 1-e^{-\frac1n}\ge \frac1{n+1},$$ so the series diverges by comparison with the harmonic series.
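Numerically, the partial sums indeed grow like those of the harmonic series (a sketch in plain Python):
import math

N = 10**6
print(sum(1 - math.exp(-1/n) for n in range(1, N + 1)))  # grows like log N
print(math.log(N))                                       # for comparison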
|
H: How many "types" of hypothesis tests are there?
I don't know why but I find textbooks to be so inconsistent on describing the types of hypothesis tests there are (at an intro level); I want to organize my thoughts a little bit.
When you repeatedly take samples of size $n$ from a certain population distribution with parameters mean $\mu$ and variance $\sigma^2$, you get a sampling distribution of the sample statistic, in our case as with most intro books the sample statistic in question is the sample mean $\bar{x}$.
Such sampling distribution can be normal or non-normal. Even if it's non-normal, CLT says if $n$ is very large then the sampling distribution is approximately normal. So far so good?
Case 1: sample distribution is normal, regardless of $n$: we use the z-score and a normal table to do a hypothesis test:
$$
z=\frac{\bar{x}-\mu}{\sigma}
$$
Case 2: sample distribution is non-normal, but $n$ is large: we use the z-score and a normal table to do a hypothesis test, but the SE is slightly different:
$$
z=\frac{\bar{x}-\mu}{\sigma /\sqrt{n}}
$$
Case 3: sample distribution is non-normal, and $n$ is small: we use the t-stat and the t-table to do a hypothesis test, and the SE involves the sample deviation $\hat{s}$ instead of $\sigma$ the population deviation:
$$
t = \frac{\bar{x}-\mu}{\hat{s}/\sqrt{n}}
$$
Questions at this point is: do I have case 1 & 2 correct? My book is a little confusing and I'm not sure if they should be different, or if the both use $\sigma/\sqrt{n}$ as the SE. Also, are we indeed looking at whether the sampling distribution is normal, or was I supposed to look at whether the population distribution is normal or not?
I have further questions for when you start testing differences in means but I'll put that in a separate question here.
AI: To really understand the issue, we first revisit what happens when observations in a sample are drawn from a normal distribution. Let $$X_i \sim \operatorname{Normal}(\mu, \sigma^2), \quad i = 1, 2, \ldots$$ be a sequence of independent and identically distributed normal random variables with mean $\mu$ and variance $\sigma^2$. Then the sample mean $$\bar X = \frac{1}{n} \sum_{i=1}^n X_i$$ from a sample of size $n$ is also exactly normally distributed: $$\bar X \sim \operatorname{Normal}(\mu, \sigma^2/n).$$ Note that the mean is the same, but now the variance is $\sigma^2/n$, rather than $\sigma^2$. This makes intuitive sense: if you average many observations together, the variance of the average should decrease with more data. Formally, we observe that $$\operatorname{Var}[\bar X] \overset{\text{iid}}{=} \frac{1}{n^2} \sum_{i=1}^n \operatorname{Var}[X_i] = \frac{1}{n^2} n \sigma^2 = \frac{\sigma^2}{n}.$$ Therefore, the expression $$Z = \frac{\bar X - \mu}{\sigma/\sqrt{n}} \sim \operatorname{Normal}(0, 1) \tag{1}$$ is a pivotal quantity. There is no asymptotic approximation or use of the Central Limit Theorem here.
Now, if the $X_i$ are not normal, but $n$ is large, then the above quantity $(1)$ is approximately standard normal via the CLT (except in certain special cases for the distribution of the $X_i$, where the CLT does not hold). Note that $\mu$ and $\sigma^2$ are the mean and variance of the non-normal $X_i$.
Thus far, we have not spoken about known versus unknown parameters. We see that if $\mu$ and $\sigma$ are known, we can directly calculate a realization of $Z$ from the sample mean $\bar X$. For example, if I said that the $X_i$ follow a gamma distribution with shape $a = 5$ and rate $b = 4$, then the mean and variance are $$\mu = \frac{a}{b} = \frac{5}{4}, \\ \sigma^2 = \frac{a}{b^2} = \frac{5}{16},$$ and if we observe in a sample of size $n = 50$ the sample mean $\bar X = 1.32906$, all we need to do to calculate $Z$ is write $$Z = \frac{1.32906 - 5/4}{\sqrt{5/16}/\sqrt{50}} \approx 1.0000.$$ But what if we did not know $\sigma^2$? Then it must be estimated from the sample itself. Doing this incurs a penalty, because we are using the data to estimate both the true mean $\mu$ and the true standard deviation $\sigma$. If an inference is to be performed on the true mean, then for a hypothesis test that assumes under the null $$H_0 : \mu = \mu_0$$ we have the conditional test statistic $$Z \mid H_0 = \frac{\bar X - \mu_0}{s/\sqrt{n}} \sim t_{n-1},$$ where $s$ is the sample standard deviation and $t_{n-1}$ is the Student's $t$ distribution with $n-1$ degrees of freedom. Notice here that this is a true statistic: $Z \mid H_0$ is only a function of the random variables $X_i$ in the sample that allow us to compute $\bar X$ and $s$. $\mu_0$ is assumed as part of performing the hypothesis test, and $n$ is known.
In summary, there really are only two cases. When the distribution of the $X_i$ is normal, the statistic $\bar X$, which is the sampling distribution of the sample mean, is also normal, with no use of CLT. When the distribution of the $X_i$ is not normal, the statistic $\bar X$, which again is the sampling distribution of the sample mean, is not normal, but under certain conditions, is asymptotically normal via CLT. What you do with this when constructing a hypothesis test is a separate issue, which relates to whether the parameters of the population distribution are known or unknown.
When the population parameters are known, there is no inference needed. When the population variance is known, then a hypothesis test can be constructed to make a statistical inference about the unknown mean based on the observed sample mean, because the sampling distribution of the sample mean is either exactly normal, or approximately normal with sufficient sample size. When the population variance is unknown, then irrespective of whether the population is normal or not, the test statistic for the location parameter cannot be computed without estimating this variance through the sample. It is this estimation that changes the distribution of the test statistic from normal to Student's $t$.
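For concreteness, the gamma-distribution computation above can be reproduced as follows (a sketch in plain Python):
from math import sqrt

a, b, n = 5, 4, 50        # gamma shape and rate, sample size
mu = a / b                # 1.25
sigma2 = a / b**2         # 0.3125
xbar = 1.32906            # observed sample mean from the example
print((xbar - mu) / sqrt(sigma2 / n))   # approximately 1.0000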
|
H: Let $G$ be a finite group such that if $A, B\le G$ then $AB\le G$. Prove $G$ is a solvable.
Let $G$ be a finite group satisfying the following property: (*) If $A, B$ are subgroups of $G$ then $AB$ is a subgroup of $G$. Prove $G$ is solvable.
So I felt like a good place to start is to let $|G| = p_1^{a_1}p_2^{a_2}\cdots p_r^{a_r}$, where each $p_i$ is a distinct prime, and then, for each $i = 1, 2, \ldots, r$, let $H_i$ be a Sylow $p_i$-subgroup. So $G = H_1H_2\cdots H_r$ and each $H_i$ is solvable since all $p$-groups are solvable.
I was then thinking, since I know by Burnside's theorem that $H_iH_j$ is solvable, that a good idea would be to use induction and prove that if $H_1H_2\cdots H_m$ is solvable then $H_1H_2\cdots H_mH_{m+1}$ is solvable, but I can't figure it out.
Any help will be greatly appreciated.
AI: Here's another proof, now that you have the answer. Let $x$ and $y$ be elements of coprime order. Then $\langle x\rangle\langle y\rangle$ is a subgroup, and has order $o(x)\cdot o(y)$. Thus any two elements of coprime order commute.
It turns out this is enough to prove the result: a finite group $G$ is nilpotent if and only if every $2$-generated subgroup is nilpotent.
But let's not use that, and let $z$ be an element in the centre of some Sylow $p$-subgroup of $G$. Consider $C_G(z)$. It contains all elements of order prime to $p$, and also contains a Sylow $p$-subgroup of $G$, thus is all of $G$. Thus $z\in Z(G)\neq 1$.
Since your condition is inherited by quotients, we see that $G$ must possess a central series, hence is nilpotent.
Edit: some more on the structure of these groups. In general (i.e., infinite groups), they are locally nilpotent. The finite ones were mostly classified by Iwasawa in 1941, but there were some gaps that were filled by Napolitani and Janko.
A $p$-group is quasihamiltonian (i.e., $AB$ is a subgroup for all $A,B\leq G$) if and only if
It has an abelian normal subgroup $N$ such that $G/N$ is cyclic, and there exists a generator $x$ of $G/N$, and an integer $n$, such that $a^x=a^{1+p^n}$ for all $a\in N$, or
$p=2$, and $G$ is the direct product of $Q_8$ and an elementary abelian $2$-group.
A subgroup $H$ such that $HK=KH$ (i.e., $HK$ is a subgroup) for all $K\leq G$ is called a permutable subgroup. The above groups are those where all subgroups are permutable. There's been recent research on groups where lots of, but not all, subgroups are permutable.
Edit 2: here's another proof, which given you mention Burnside's $p^\alpha q^\beta$ theorem, might have been the intention of your tutor (assuming you are doing a course).
Let $x\in G$ have order $p$ for some $p\mid |G|$. By the above method, we see that $y\in C_G(x)$ for all elements $y\in G$ of order prime to $p$. But now this means that $|G:C_G(x)|$ is a power of $p$. By that useful theorem you did while proving Burnside's $p^\alpha q^\beta$ theorem, $G$ cannot be a non-abelian simple group. Thus $G$ possesses a normal subgroup $N$. The condition is inherited by both $N$ and $G/N$, so by induction both of these are soluble. Thus $G$ is soluble.
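As an aside, the hypothesis (*) really is restrictive: it already fails in $S_3$ (which, consistently with the result, is not nilpotent). A pure-Python check, with permutations as tuples and composition done by hand:

```python
from itertools import permutations

def compose(p, q):
    # compose: (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))
A = {(0, 1, 2), (1, 0, 2)}    # subgroup generated by the transposition (0 1)
B = {(0, 1, 2), (2, 1, 0)}    # subgroup generated by the transposition (0 2)

AB = {compose(a, b) for a in A for b in B}
print(len(AB))   # 4, which does not divide |S3| = 6,
                 # so by Lagrange AB is not a subgroup and S3 fails (*)
```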
|
H: Geometric proof for the half angle tangent
Using the fact that the angle bisector of the below triangle splits the opposite side in the same proportion as the adjacents sides, I need to give a geometric proof of the half-angle tangent $$ \tan \frac{\beta}{2} \ \ = \ \ \frac{\sin \beta}{1 \ + \ \cos \beta} \ \ . $$
This is what I've done so far:
I tried to write the sides $ \ a \ $ and $ \ c \ $ in terms of $ \ b_1 \ $ and $ \ b_2 \ $ using the Pythagoras Theorem and the relations of sine and cosine. After a lot of manipulation, I ended up with $$ \tan \frac{\beta}{2} \ \ = \ \ \frac{\sin \beta \ · \ b_2^2 · \ \cos \beta}{b_1 \ + \ b_2} \ \ . $$ But this is certainly wrong.
Any hints on how to proceed?
AI: If two fractions are equal, then taking the same linear combination of their numerators and of their denominators (with or without common multipliers) yields an equal fraction. Using this rule
$$ \frac{p}{q}=\frac{r}{s}=\frac{ap+br}{aq+bs}$$
$$
\tan \beta/2=\frac{b_1}{a}=\frac{b_2}{c}=\frac{b_1+b_2}{a+c}=\frac{b}{a+c}=\frac{b/c}{a/c+1}=\frac{\sin\beta}{\cos\beta+1}.
$$
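A quick numerical sanity check of the resulting identity (the test angles are arbitrary):

```python
import math

for beta in (0.3, 1.0, 2.5):   # arbitrary angles in (0, pi)
    lhs = math.tan(beta / 2)
    rhs = math.sin(beta) / (1 + math.cos(beta))
    assert abs(lhs - rhs) < 1e-12
```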
|
H: Constants in matrix integration
Suppose you have an integral of a matrix-valued function of the form:$$\int_a^b B A(t) C dt$$
In this case, the notation $A(t)$ is to denote a matrix that depends on the integrated variable $t$ (for example $A(t) = e^{tA}$), where the matrices $B$ and $C$ are independent of $t$. Is the following necessarily true as it would be for the analogous scalar problem?
$$\int_a^b BA(t)Cdt = B\int_a^bA(t)Cdt = \int_a^b BA(t)d t C = B\int_a^b A(t)dt C$$
AI: Yes it is. Let $A(t) = (a_{ij}(t))_{1 \leq i, j \leq n}$, $B = (b_{ij})_{1 \leq i, j \leq n}$ and $BA(t) = (\beta_{ij}(t))_{1 \leq i, j \leq n}$. We only prove it for $BA(t)$; the other case is analogous. Then, for each $i, j$ we have:
$$
\left(\int_a^b BA(t)~\mathrm{d}t\right)_{ij} =\int_a^b \beta_{ij}(t)~\mathrm{d}t = \int_a ^b \sum_{k = 1}^n b_{ik}a_{kj}(t) ~\mathrm{d}t = \sum_{k = 1}^n \int_a ^b b_{ik} a_{kj}(t)~\mathrm{d}t = \sum_{k = 1}^n b_{ik} \int_a^b a_{kj}(t)~\mathrm{d}t = \left( B\int^b_a A(t)~\mathrm{d}t \right)_{ij}
$$
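A numerical illustration of the same fact, as a sketch with random matrices, a crude Riemann sum, and an arbitrary choice $A(t)=\cos(t)A_0$:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(3, 3))
A0 = rng.normal(size=(3, 3))
C = rng.normal(size=(3, 3))

a, b, N = 0.0, 2.0, 2000
ts = np.linspace(a, b, N, endpoint=False)
dt = (b - a) / N

A = lambda t: np.cos(t) * A0   # an arbitrary integrable matrix-valued function

lhs = sum(B @ A(t) @ C * dt for t in ts)   # Riemann sum for the integral of B A(t) C
rhs = B @ sum(A(t) * dt for t in ts) @ C   # B (integral of A(t)) C
print(np.max(np.abs(lhs - rhs)))           # ~1e-14: equal up to rounding
```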
|
H: Showing the support of a sheaf may not be closed (Liu 2.5)
This is question 2.5 of Qing Liu.
I am new in algebraic geometry and really stuck on it and can't do anything to solve it.
The question:
Let $F$ be a sheaf on $X$. Let $\operatorname{Supp} F=\{x\in X:F_x\neq 0\}$. We want to show that in general, $\operatorname{Supp} F$ is not a closed subset of $X$. Let us fix a sheaf $G$ on $X$ and a closed point $x_0\in X$. Let us define a pre-sheaf $F$ by $F(U)=G(U)$ if $x_0\notin U$ and $F(U)= \{s\in G(U):s_{x_0}=0\}$ otherwise. Show that $F$ is a sheaf and that $\operatorname{Supp} F = \operatorname{Supp}G\setminus \{x_0\}$.
I don't know how to solve this question:
To show a pre-sheaf is a sheaf I need to check the "uniqueness" and "gluing local sections".
For the uniqueness: Let $U$ be an open subset of $X$ , $s\in F(U)$, if $x_0\notin U$ , then since $G$ is a sheaf, I don't see a problem for $F$ to be a sheaf.
If $s\in F(U)$ and $x_0 \in U$ and $\{U_i\}_i$ is an open covering of $U$, then there exists an $i_0$ such that $x_0\in U_{i_0}$. The image of $s$ in the stalk $F_{x_0}$ is $s_{x_0}$, and $F(U_{i_0})=\{s\in G(U_{i_0}):s_{x_0}=0\}$ by definition. I don't know what to do now? (so sorry and I know this is an easy question...)
AI: This is just definition-pushing, and you've already made a good start.
To see uniqueness, let $U\subset X$ be an open subset and $\{U_i\}$ an open cover of $U$. Let $s,t\in F(U)$ and let $s_i,t_i\in F(U_i)$ be their restrictions. Then the condition that $s_i=t_i$ in $F(U_i)$ means that $s_i=t_i\in G(U_i)$, which means that $s=t$ in $G(U)$ and as $F(U)\subset G(U)$, we have that $s=t$ in $F(U)$.
To check gluing, let $s_i$ be a collection of sections of $F(U_i)$ so that $s_i|_{U_i\cap U_j}=s_j|_{U_i\cap U_j}$ as elements of $F(U_i\cap U_j)$. Then this equality is also true in $G(U_i\cap U_j)$, and by the assumption that $G$ is a sheaf, there is a section $s\in G(U)$ so that $s|_{U_i}=s_i$. This implies that $s_{x_0}=0$ (if $x_0\in U$ - if not, we have nothing to worry about), as the maps $G(U)\to G(U_i)\to G_{x_0}$ commute, so $s\in F(U)$ as well and thus $F$ satisfies gluing.
|
H: Let $0\leq a \leq b \leq 1$. Then we have for all natural numbers $m\geq 2$ the inequality $b^{\frac m2}-a^{\frac m2} \leq\frac m2(b-a)$
Let $0\leq a \leq b \leq 1$. Then we have for all natural numbers $m\geq 2$ the inequality $b^{\frac{m}{2}}-a^{\frac{m}{2}} \leq\frac{m}{2}\left(b-a\right)$.
My first idea was to consider the function $f(x)=x^{\frac{m}{2}-1}$ on the interval $[0,1]$. Since $m\geq 2$ it follows that $\underset{x\in [0,1]}{\text{sup}}f(x)=1.$ Then, by the fundamental theorem of calculus we can conclude:
$\begin{align*} b^{\frac{m}{2}}-a^{\frac{m}{2}} & =\displaystyle\int_{a}^{b}\frac{m}{2}f(x)\,dx \\ & =\frac{m}{2} \displaystyle\int_{a}^{b} x^{\frac{m}{2}-1}\,dx \\
& \leq \frac{m}{2} \underset{x\in [0,1]}{\text{sup}}f(x)(b-a) \\
& = \frac{m}{2}(b-a) \end{align*}$
Is this proof correct?
AI: Your proof is correct, but you should justify the step
$$\sup_{[a,b]}f\le \sup_{[0,1]}f.$$
Here is another approach:
Let $$g(x)=x^{\frac m2}.$$
$g$ is continuous on $[a,b]$ and differentiable on $(a,b)$, so by the MVT,
$$\exists c\in (a,b)\;\; :\;\frac{g(b)-g(a)}{b-a}=g'(c)$$
but
$$g'(c)=\frac m2 c^{\frac m2-1}$$
and $0<c<1$, so $c^{\frac m2-1}\le 1$ (the exponent $\frac m2-1$ is nonnegative since $m\ge 2$), and hence $g(b)-g(a)\le\frac m2(b-a)$.
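A brute-force spot check of the inequality over random inputs (corroboration, not a proof):

```python
import random

random.seed(0)
for _ in range(10_000):
    a = random.random()
    b = random.uniform(a, 1.0)     # ensures 0 <= a <= b <= 1
    m = random.randint(2, 50)
    assert b ** (m / 2) - a ** (m / 2) <= (m / 2) * (b - a) + 1e-12
```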
|
H: Let $x=\begin{bmatrix}3\cr4\end{bmatrix}$ and $A=\begin{bmatrix}0&x^T\cr x&0\end{bmatrix}$; is $A$ diagonalizable?
I had a problem: let $x=\begin{bmatrix}3\cr4\end{bmatrix}$ and $A=\begin{bmatrix}0&x^T\cr x&0\end{bmatrix}$; is $A$ diagonalizable?
But when I plug the matrix $x$ and its transpose into $A$, the dimensions don't work out and we have empty slots where there should be elements, so I was wondering if I'm just tripping or if this problem is unsolvable?
AI: Presumably the bottom right $0$ is actually a $2 \times 2$ block of zeros, i.e.
$$A = \left[\matrix{0 & 3 & 4\cr 3 & 0 & 0\cr 4 & 0 & 0\cr} \right]$$
This matrix is real and symmetric, so it is diagonalizable (even orthogonally) by the spectral theorem.
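A quick numerical check with numpy (the eigenvalues come out as $0$ and $\pm 5$):

```python
import numpy as np

A = np.array([[0, 3, 4],
              [3, 0, 0],
              [4, 0, 0]], dtype=float)

vals, vecs = np.linalg.eigh(A)   # eigh: appropriate for symmetric matrices
print(vals)                      # [-5.  0.  5.]
print(np.allclose(vecs @ np.diag(vals) @ vecs.T, A))   # True: A = P D P^T
```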
|
H: In △ABC, ∠A= 60$^∘ $, BC=12, BD⊥AC, CE⊥AB and∠DBC = 3∠ECB. Find EC in the form $a(\sqrt{b}+\sqrt{c})$
Given that $m \angle A= 60^\circ$, $BC=12$ units, $\overline{BD}\perp\overline{AC}$, $\overline{CE} \perp \overline{AB}$, and $m \angle DBC = 3m \angle ECB$, the length of segment $EC$ can be expressed in the form $a(\sqrt{b}+\sqrt{c})$ units where $b$ and $c$ have no perfect-square factors. What is the value of $a+b+c$?
Let the intersection of $EC$ and $BD$ be $F$. I have figured out that $\triangle EFB$ and $\triangle DFC$ are both $30-60-90$ triangles, and that $\triangle BDC$ is a $45-45-90$ triangle. Along with this, I have figured out all other angles. However, I can't figure out the next step. Can someone help? Thanks!
AI: Let $\angle ECB =x $. Then, $\angle DBC =3x$. Given $\angle A=60$, we have $\angle ABD = \angle ACE =30$. For the triangle ABC,
$$60 + 30+30+3x+x=180$$
which yields $x=15$. Thus
$$EC = BC \cos x=12 \cos(45-30)=3(\sqrt6+\sqrt2)$$
and $a+b+c =11$.
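A quick numerical check of the final value:

```python
import math

ec = 12 * math.cos(math.radians(15))
print(ec, 3 * (math.sqrt(6) + math.sqrt(2)))   # both ~11.5911
```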
|
H: Is $(I \circ A - I \circ B)$ positive semi-definite if $A$, $B$ and $A - B$ are positive semi-definite?
Let $A$ and $B$ be positive definite and positive semi-definite matrices, respectively, and suppose $A - B$ is positive semi-definite.
Is it true that $(I \circ A - I \circ B)$ is positive-semidefinite?
I believe this statement is true. Because
$$
(I \circ A - I \circ B) = I \circ (A - B)
$$
and the Hadamard product of two positive (semi)-definite matrices is also positive (semi)-definite. Is it a valid argument?
I don't think $(C \circ A - C \circ B)$ is a positive semidefinite matrix for any arbitrary positive definite matrix $C$.
AI: The question is equivalent to whether or not $I\circ A$ is positive semi-definite if $A$ is. But $I\circ A$ is nothing but the diagonal part of $A$. If $A≥0$, denote by $A_{ij}$ the components of $A$ and by $e_i$ the standard basis of $\Bbb C^n$; then
$$A_{ii}= \langle e_i , A e_i\rangle ≥0$$
by positivity, so every diagonal element is $≥0$. Hence $I\circ A$ is diagonal with all entries $≥0$, and therefore positive semi-definite.
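A random numerical check, as a sketch; products of the form $MM^T$ are one convenient way to generate positive semi-definite matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
M1 = rng.normal(size=(n, n))
M2 = rng.normal(size=(n, n))
A = M1 @ M1.T + M2 @ M2.T   # positive definite (almost surely)
B = M2 @ M2.T               # positive semi-definite, and A - B = M1 M1^T >= 0

D = np.diag(np.diag(A - B))            # I o A - I o B = I o (A - B)
print(np.linalg.eigvalsh(D).min() >= 0)   # True
```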
|
H: $f^{*}$ is surjective if and only if $f$ is injective
I’m having a hard time understanding this proof and I hope someone could help me.
Theorem: Let $f: A \rightarrow B$ a map. Think of this map as inducing the map $f^{*}: \mathcal{P}(B) \rightarrow \mathcal{P}(A)$. Then, $f^{*}$ is surjective if and only if $f$ is injective.
The $\Longleftarrow$ part I already prove it:
Proof: $\Longleftarrow.$ Suppose $f$ is injective. Hence, we know that $E = f^{*}(f_{*}(E))$ for all subsets $E \subseteq A$. Let $S$ be a subset of $A$. Then $S \in \mathcal{P}(A)$. We define the set $X_0$ as $X_0 = f_{*}(S)$. Observe that $X_0 \in \mathcal{P}(B)$. Hence $f^{*}(X_0) = f^{*}(f_{*}(S)) = S$. Therefore $f^{*}$ is surjective. $\square$
For the $\implies$ part, I don’t know what to do.
My attempts:
I tried to prove the contrapositive, so if $f$ is not injective, then $f^{*}$ is not surjective. Suppose that $f$ is not injective. Then there exists some $a,b \in A$ such that $a \neq b$ and $f(a) = f(b)$. But I don’t know what to do next.
I tried a direct proof: Suppose that $f^{*}$ is surjective. Hence for all $X \in \mathcal{P}(A)$, there exists some $Y \in \mathcal{P}(B)$, such that $f^{*}(Y)=X$. Since $A \subseteq A$, we have that $A \in \mathcal{P}(A)$. So there exists some $Y_0 \in \mathcal{P}(B)$ such that $f^{*}(Y_0) = A$. But, again, I don’t know where to go next.
Can someone help me? Thank you in advance!
AI: Suppose $x,y$ are two distinct elements of $A$ such that $f(x)=f(y)$. Since $f^*$ is surjective there is a subset $E\subseteq B$ such that $\{x\}=f^*(E)$. Now, we have $x\in\{x\}=f^*(E)$ and hence $f(x)\in E$. Thus:
$f(y)=f(x)\in E$
But this means $y\in f^*(E)=\{x\}$, a contradiction.
|
H: Is there any visual representation on why (certain) trigonometric functions have infinite derivatives.
As far as I understand, the first derivative of a function gives you the slope at a particular point.
The second derivative would give the concavity.
The third derivative would give the rate of change of the concavity.
So in general, with the knowledge I have, I would say that the derivative measures rate of change.
And I am very curious as to why you are able to go down to the infinite-th derivative on trig functions.
Even more interesting for me is the fact that a particular n-th order derivative for sin(x) for example, might be sin(x) itself.
Are there any "cool" visual (or just an easy explanation) representation on what is going on when you are calculating the derivatives of these type of functions?
AI: Suppose you are traveling around on a circular track. To make things simpler, we will make the radius of the track 1, and your speed 1.
Your position at time $t$ is $(\cos t, \sin t)$
Or we could say $x(t) = \cos t, y(t) = \sin t.$
What is your direction of travel or velocity? Straight ahead or on a line tangent to the circle. The tangent of a circle is perpendicular to the radius at the point of tangency. We can add 90 degrees (or $\frac {\pi}{2}$ radians) to $t$ to get your velocity.
$V = (\cos (t+\frac \pi2),\sin (t+\frac \pi 2)) = (-\sin t, \cos t)$
$x'(t) = -\sin t, y'(t) = \cos t$
The derivative of these two functions, is a $\frac {\pi}2$ phase shift of the two functions.
The consequence is that the second derivative (acceleration) must be centripetal, and the 4th derivative returns the original function. This allows us to take derivatives indefinitely working around this cycle.
|
H: Find the area of a triangle $\triangle FGH$
In triangle $\triangle FGH$, $GM$ is a median that lies on $4x-y=27$; height $HA$ lies on $x-y+3=0$; $F$ is $(4,5)$. Find the area of the triangle $\triangle FGH$.
My attempt:
$F$ is not on either of the lines. The intersection of the lines is $(10,13)$. Then I struggle.
AI: With GF$\perp$HA, the slope of GF is $-\frac14$ and GF lies on $4y+x-24 =0$. G is the intersection of GF and GM, which is $G(\frac{12}5,\frac{27}5)$.
Let $H(a,4a-27)$. Then, the midpoint is $M(\frac {a+4}2, \frac{4a-22}2)$ and it satisfies $x-y+3=0$, or
$$\frac {a+4}2-\frac{4a-22}2+3=0
$$
which yields $a=\frac{32}3$ and $H(\frac{32}3, \frac{47}3)$. With the coordinates of $F,G,H$ known, the area is given by the formula below
$$Area = \frac12| x_F(y_G-y_H)+ x_G(y_H-y_F)+ x_H(y_F-y_G)|
$$
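Plugging the derived coordinates into the shoelace formula numerically (this checks only the final step):

```python
F = (4, 5)
G = (12 / 5, 27 / 5)
H = (32 / 3, 47 / 3)

area = abs(F[0] * (G[1] - H[1]) + G[0] * (H[1] - F[1]) + H[0] * (F[1] - G[1])) / 2
print(area)   # 9.8666... = 148/15
```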
|
H: How to use general recursion to generate a set of words?
I am just getting to recursion in one of my classes, and I'm a bit confused on how to go about generating a set of words and the notation.
Given the following question, how would I go about generating this set?
(assuming $\Sigma=\{a,b\}$), the set of strings with twice as many a's as b's.
If I start by saying that λ∈T as my base case, how would I then generate the recursive case?
AI: Edit: Please consider accepting @Brian M. Scott's answer as it is complete and more rigorous.
(Too large for a comment, but I hope it will give you some insight.)
Generate 2 $a$'s for each $b$. First, you have 3 possibilities ($1^{st}$ generation).
$$
\mathbf{w}_1^1=aab, \ \mathbf{w}_2^1 = aba, \ \mathbf{w}_3^1=baa
$$
Next, generate all the possible permutations of each previous $\mathbf{w}_i^1$, inserting each previous $\mathbf{w}_i^1$ at each possible place ($2^{nd}$ generation). For example:
$$
\mathbf{w}_1^2 = aab|\mathbf{w}_1^1, \hspace{10pt} \mathbf{w}_2^2 = aba|\mathbf{w}_1^1, \hspace{10pt} \mathbf{w}_3^2 = baa|\mathbf{w}_1^1 \\
\mathbf{w}_4^2 = aa|\mathbf{w}_1^1|b, \hspace{10pt} \mathbf{w}_5^2 = ab|\mathbf{w}_1^1|a, \hspace{10pt} \mathbf{w}_6^2 = ba|\mathbf{w}_1^1|a \\
\large
\dots
$$
Note that you can insert multiple (previous) strings at different positions, for the next generation. You can go to the next generation only after you have enumerated all possible strings for the current generation, and this includes inserting multiple previous strings.
I have used the notation $u \ | \ v$ to denote the concatenation of $u$ and $v$, and $\mathbf{w}_i^j$ denotes the $i$-th word in generation $j$.
You can then derive a recursive formula for $\mathbf{w}_i^j$ using this.
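As a brute-force cross-check of whatever recursive definition you settle on, here is a sketch that simply enumerates all strings of a given length with twice as many a's as b's:

```python
from itertools import product

def words(length):
    """All strings over {a, b} of the given length with #a = 2 * #b."""
    return [''.join(w) for w in product('ab', repeat=length)
            if w.count('a') == 2 * w.count('b')]

print(words(3))        # ['aab', 'aba', 'baa'] -- the first "generation" above
print(len(words(6)))   # 15 = C(6, 2): choose the positions of the two b's
```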
|
H: Prime ideals of $\Bbb C[x, y]$
In the exercise 3.2.E of Vakil's "Foundations of Algebraic Geometry", it is asked to prove that all the prime ideals of $\Bbb C[x, y]$ are of the form $(0)$, $(x-a, y-b)$ or $(f(x, y))$, where $f$ is an irreducible polynomial. In order to do so, it is suggested to consider a non-principal prime ideal $\mathfrak{p}$ along with $f, g \in \mathfrak{p}$ with no common factor. Then, dividing $g$ by $f$ in $\Bbb C(x)[y]$, one is supposed to find $h(x) \in (f, g)$ - so some factor of the form $x-a$ is in $\mathfrak{p}$. But I can't seem to figure out how to find such $h(x)$, because the expression of the division is something of the form:
$$g(x, y) = f(x,y) \left (\frac{p_0(x)}{q_0(x)} + \cdots + \frac{p_n(x)}{q_n(x)}y^n \right) + \left(\frac{r_0(x)}{a_0(x)} + \cdots + \frac{r_m(x)}{a_m(x)}y^m \right)$$
And I don't know how to relate the fact that $f, g$ have no common factor with this expression. My guess is that it has something to do with the term $\frac{r_0(x)}{a_0(x)}$, but I don't know what it is. Can anyone shed some light?
AI: You're not supposed to divide $g(x, y)$ by $f(x, y)$. You're suppose to use the Euclidean algorithm in $\mathbb C(x)[y]$ to find a greatest common divisor for $f(x, y)$ and $g(x, y)$. (Remember, $\mathbb C(x)[y]$ is a Euclidean domain, so it makes sense to use the Euclidean algorithm.)
The greatest common divisor for $f(x,y)$ and $g(x,y)$ in $\mathbb C(x)[y]$ can be written in the form $\frac{a(x)c(x,y)}{b(x)}$, where $c(x,y)$ has no non-trivial factors in $\mathbb C[x,y]$ that are purely polynomials in $x$. (Remember, $\mathbb C[x,y]$ is a UFD, so this statement makes sense.)
Then for some $p(x,y)$, $q(x)$, $r(x, y)$ and $s(x)$, we have
$$ \frac{a(x)c(x,y)}{b(x)} \frac{p(x, y)}{q(x)} = f(x, y), \ \ \frac{a(x)c(x,y)}{b(x)} \frac{r(x, y)}{s(x)} = g(x, y), $$
which is to say that
$$ a(x)c(x,y)p(x,y) = b(x)q(x)f(x, y), \ \ a(x)c(x,y)r(x,y) = b(x) s(x) g(x, y). $$
But $c(x,y)$ doesn't have any factors in common with $b(x)$, $q(x)$ or $s(x)$ in $\mathbb C[x, y]$, since $c(x, y)$ has no factors in $\mathbb C[x, y]$ that are purely polynomials in $x$. So $c(x, y)$ must divide both $f(x, y)$ and $g(x, y)$ in $\mathbb C[x, y]$.
And now, we use the fact that $f(x, y)$ and $g(x, y)$ have no common factor in $\mathbb C[x, y]$ to conclude that $c(x, y)$ is a constant.
Hence the greatest common divisor for $f(x,y)$ and $g(x,y)$ in $\mathbb C(x)[y]$ can be written in the form $a(x) / b(x)$. (We absorb the constant into $a(x)$.)
Using the Euclidean algorithm (i.e. the Bezout identity), it must be possible to write the greatest common divisor $a(x) / b(x)$ of $f(x, y)$ and $g(x, y)$ in $\mathbb C(x)[y]$ as a linear combination,
$$ \frac{a(x)}{b(x)} = \frac{u(x, y)}{t(x)}f(x, y) + \frac{v(x, y)}{w(x)}g(x, y),$$
or, clearing denominators,
$$ a(x) t(x) w(x) = u(x, y) b(x) w(x) f(x, y) + v(x, y) b(x) t(x) g(x, y).$$
Defining $$h(x) := a(x) t(x) w(x),$$ we have a non-zero polynomial in $x$ which is contained in $(f(x, y), g(x, y))$, and hence is contained in $\mathfrak p$ too.
Since $h(x)\in\mathfrak p$ and $\mathfrak p$ is prime (by assumption), we see that some linear factor $x - a$ of $h(x)$ must be in $\mathfrak p$.
(Note that $h(x)$ really does have linear factors. If $h(x)$ were a constant, then $\mathfrak p$ would be the whole of $\mathbb C[x, y]$, contradicting the fact that $\mathfrak p$ is prime.)
|
H: Determine the lie algebra of the subgroup of SO(4)
Let $G\subset SO(4)$ be the subgroup given below:
$$G=\left\{
\begin{pmatrix}
a & -b & -c &-d\\
b & a & -d & c\\
c & d & a & -b \\
d &-c & b &a
\end{pmatrix} : a,b,c,d\in \mathbb{R}, a^2+b^2+c^2+d^2=1\right\}$$
Find the lie algebra $\mathfrak{g}$.
I know that if $X\in \mathfrak{so}_4$, then $X\in \mathfrak{g} \iff e^{tX}\in G$ $\forall$ $t\in \mathbb{R}$.
However, I am not able to use it here, as the given group is a bit complicated. Any help is appreciated.
Thanks
AI: How does one find the Lie algebra of a Lie group? Differentiate the map $$(a,b,c,d) \mapsto \begin{pmatrix} a & -b & -c & -d \\ b & a & -d & c \\ c & d & a & -b \\ d & -c & b & a \end{pmatrix}\quad \mbox{and the equation}\quad a^2+b^2+c^2+d^2=1$$at $(1,0,0,0)$ and evaluate at $(x,y,z,w)$. So $$\mathfrak{g} = \left\{\begin{pmatrix} x & -y & -z & -w \\ y & x & -w & z \\ z & w & x & -y \\ w & -z & y & x \end{pmatrix} \mid x,y,z,w \in \Bbb R \mbox{ and }2x+0y+0z+0w = 0 \right\},$$which of course agrees with Stephen's answer $$\mathfrak{g} = \left\{\begin{pmatrix} 0 & -y & -z & -w \\ y & 0 & -w & z \\ z & w & 0 & -y \\ w & -z & y & 0 \end{pmatrix} \mid y,z,w \in \Bbb R \right\}.$$It is the general principle that to find the equation of a tangent space to a submanifold, you differentiate the equation defining it. Also, $G \cong \Bbb S^3$ is isomorphic to the group of unit quaternions, as the general expression for an element of $G$ is in the image of the composition of the maps $$\Bbb H \ni z+wj \mapsto \begin{pmatrix} z & -\overline{w} \\ w & \overline{z}\end{pmatrix} \in \mathfrak{gl}(2,\Bbb C)\quad\mbox{and}\quad \Bbb C \ni a+bi \mapsto \begin{pmatrix} a & -b \\ b & a\end{pmatrix} \in \mathfrak{gl}(2, \Bbb R).$$
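As a numerical sanity check, exponentiating an element of $\mathfrak g$ should land back in $G$. A sketch using `scipy.linalg.expm` (the chosen values of $y,z,w$ are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

def g_elem(y, z, w):
    return np.array([[0, -y, -z, -w],
                     [y,  0, -w,  z],
                     [z,  w,  0, -y],
                     [w, -z,  y,  0]], dtype=float)

X = g_elem(0.3, -0.7, 1.1)
M = expm(X)

a, b, c, d = M[:, 0]   # read off (a, b, c, d) from the first column
print(np.isclose(a**2 + b**2 + c**2 + d**2, 1.0))   # True
expected = np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])
print(np.allclose(M, expected))   # True: exp(X) has the shape of an element of G
```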
|
H: Show that a martingale $\{X_n\}$ is bounded in $L^2$ if and only if $EX_n^2<\infty$ for each $n$ and $\sum_{n\ge1}E(X_{n+1}-X_n)^2<\infty$
A martingale $\{X_n\}$ is bounded in $L^2$ by definition if $\sup\limits_nEX_n^2<\infty$. Show that a martingale $\{X_n\}$ is bounded in $L^2$ if and only if $EX_n^2<\infty$ for each $n$ and
\begin{align*}
\sum_{n\ge1}E(X_{n+1}-X_n)^2<\infty.
\end{align*}
This question was previously asked here but I have not been able to work out the details for how either answer leads to the solution. If anyone is willing to work out those details or provide a different solution, that would be much appreciated.
AI: Claim: Suppose that $(X_n)_{n\in\mathbb N}$ is a square-integrable martingale adapted to the $\sigma$-algebras $(\mathscr F_n)_{n\in\mathbb N}$. Then, for any $n\in\mathbb N$,
\begin{align*}
\mathbb E[(X_{n+1}-X_n)^2]=\mathbb E[X_{n+1}^2]-\mathbb E[X_n^2].
\end{align*}
Proof: It will be sufficient to show that $\mathbb E[X_nX_{n+1}]=\mathbb E[X_n^2]$; the rest is basic algebra. By the law of iterated expectations, the martingale property, and the fact that $X_n$ is $\mathscr F_n$-measurable,
\begin{align*}
\mathbb E[X_nX_{n+1}]&=\mathbb E\big[\mathbb E[X_nX_{n+1}|\mathscr F_n]\big]\\
&=\mathbb E\big[X_n\mathbb E[X_{n+1}|\mathscr F_n]\big]\\
&=\mathbb E[X_n\times X_n]\\
&=\mathbb E[X_n^2],
\end{align*}
as sought. $\enspace\blacksquare$
It follows that for any $N\in\mathbb N$,
\begin{align*}
\sum_{n=1}^N\mathbb E[(X_{n+1}-X_n)^2]=\mathbb E[X_{N+1}^2]-\mathbb E[X_1^2]\tag{$\star$}
\end{align*}
by telescoping.
Now suppose that $K\equiv\sup_{n\in\mathbb N}\mathbb E[X_n^2]<\infty$. Clearly, for each given $n\in\mathbb N$, we have $\mathbb E[X_n^2]\leq K<\infty$ and ($\star$) implies that
\begin{align*}
\sum_{n=1}^{\infty}\mathbb E[(X_{n+1}-X_n)^2]&=\limsup_{N\to\infty}\left\{\sum_{n=1}^N\mathbb E[(X_{n+1}-X_n)^2]\right\}\\
&=\limsup_{N\to\infty}\left\{\mathbb E[X_{N+1}^2]-\mathbb E[X_1^2]\right\}\\
&\leq K-\mathbb E[X_1^2]<\infty.
\end{align*}
Conversely, suppose that $\mathbb E[X_n^2]<\infty$ for each $n\in\mathbb N$ and that
\begin{align*}
L\equiv\sum_{n=1}^{\infty}\mathbb E[(X_{n+1}-X_n)^2]<\infty.
\end{align*}
Again, ($\star$) implies for every $N\in\mathbb N$ that
\begin{align*}
\mathbb E[X_{N+1}^2]=\sum_{n=1}^{N}\mathbb E[(X_{n+1}-X_n)^2]+\mathbb E[X_1^2]\leq L+\mathbb E[X_1^2],
\end{align*}
so
\begin{align*}
\sup_{n\in\mathbb N}\mathbb E[X_n^2]\leq L+\mathbb E[X_1^2]<\infty.
\end{align*}
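A Monte Carlo illustration of the identity ($\star$) for a simple random-walk martingale (a sketch; the walk and the horizon are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
reps, N = 200_000, 10
steps = rng.choice([-1.0, 1.0], size=(reps, N))
X = np.cumsum(steps, axis=1)   # X_n = sum of n fair +/-1 steps: a martingale

lhs = sum(np.mean((X[:, n + 1] - X[:, n]) ** 2) for n in range(N - 1))
rhs = np.mean(X[:, -1] ** 2) - np.mean(X[:, 0] ** 2)
print(lhs, rhs)   # both close to 9 = E[X_N^2] - E[X_1^2]
```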
|
H: arctan of ratio of two normal variables is uniform
Say $X, Y$ are independent standard normals, and $\theta = \arctan(Y/X)$. Prove that $\theta$ is uniformly distributed over its range.
It is pretty intuitive that the distribution of $\theta$ would be uniform given a scatter plot of $X,Y$, but how can I mathematically show it?
AI: Let $R=\sqrt{X^2+Y^2}$. We want to calculate the distribution of $(R,\theta)$, i.e. the polar cordinates of $(X,Y)$.
For $r\ge 0$ and $\alpha\in [-\pi,\pi]$, let $\Omega=\{(x,y)\in \Bbb R^2\mid (x,y)$ has polar coordinates $(\rho , \beta)$ with $\rho \le r$ and $\beta\le\alpha\}$; then $\Bbb P[R\le r,\, \theta\le\alpha]=\Bbb P[(X,Y)\in \Omega]$.
Since $X$ and $Y$ are independent, the density of $(X,Y)$ is the product of the densities of $X$ and $Y$. So, passing to polar coordinates:$$\Bbb P[R\le r,\, \theta\le\alpha]=\int\int_{\Omega} \frac 1{2\pi} e^{-\frac {x^2+y^2}2}dxdy=\int_{-\pi}^{\alpha}\int_0^r \frac 1{2\pi} \rho e^{-\frac{\rho^2}2}d\rho d\theta=\frac {\alpha+\pi}{2\pi}(1-e^{-\frac{r^2}2})$$
Now we have that $(R,\theta)$ has density $f(r,\alpha)=\frac r{2\pi} e^{-\frac{r^2}2}\chi _{[0,+\infty)\times [-\pi,\pi]} (r,\alpha)$, from this, finally we obtain the density of $\theta$:
$\int_0^{+\infty} \frac r{2\pi} e^{-\frac{r^2}2} \chi _{[0,+\infty)\times [-\pi,\pi]} (r,\alpha) dr=\frac 1{2\pi}\chi _{[-\pi,\pi]} (\alpha)$.
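A simulation corroborates this. Note that $\arctan(Y/X)$ itself takes values in $(-\pi/2,\pi/2)$ (it folds the polar angle of the proof into that interval), and it is uniform there:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
theta = np.arctan(y / x)   # values in (-pi/2, pi/2)

counts, _ = np.histogram(theta, bins=20, range=(-np.pi / 2, np.pi / 2))
print(counts.min(), counts.max())   # all bins near 50_000: uniform
```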
|
H: Translation of perfect set still perfect set
Let $P\subset\mathbb R$ be a perfect set. For each nonzero $r\in\mathbb R$, define $$ D_r=r\cdot P.$$ Without checking the details, I believe $D_r$ is still a perfect set. Is that correct?
AI: $x \to rx$ is a homeomorphism so it certainly maps perfect sets to perfect sets.
|
H: Simple Linear Regression Machine Learning Course
Your friend in the U.S. gives you a simple regression fit for predicting house prices from square feet. The estimated intercept is -44850 and the estimated slope is 280.76. You believe that your housing market behaves very similarly, but houses are measured in square meters. To make predictions for inputs in square meters, what intercept must you use? Hint: there are 0.092903 square meters in 1 square foot. You do not need to round your answer.
AI: Linear regressions do not depend on units. The intercept is $-\$44,850$, and the slope $\$280.76/\text{ft}^{2}$ converts to:
$$\$280.76/\text{ft}^{2} = \frac{\text{ft}^{2}}{.092903\text{m}^{2}} \cdot \$280.76/\text{ft}^{2} = \$3022.08/\text{m}^{2}$$
(You can similarly convert dollars into whatever currency you like.)
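The conversion as arithmetic, for a quick check:

```python
slope_per_m2 = 280.76 / 0.092903   # dollars per square meter
intercept = -44850                 # unchanged: it carries no area units
print(round(slope_per_m2, 2))      # 3022.08
```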
|
H: Can $y=10^{-x}$ be converted into an equivalent $y=\mathrm{e}^{-kx}$?
I was dealing with the values:
| Digits | Expression | Value |
|--------|------------|-----------------------|
| 1 | 10⁻¹ | 0.1 |
| 2 | 10⁻² | 0.01 |
| 3 | 10⁻³ | 0.001 |
| 4 | 10⁻⁴ | 0.0001 |
| 5 | 10⁻⁵ | 0.00001 |
| 6 | 10⁻⁶ | 0.000001 |
| 7 | 10⁻⁷ | 0.0000001 |
| 8 | 10⁻⁸ | 0.00000001 |
| 9 | 10⁻⁹ | 0.000000001 |
| 10 | 10⁻¹⁰ | 0.0000000001 |
| 11 | 10⁻¹¹ | 0.00000000001 |
| 12 | 10⁻¹² | 0.000000000001 |
| 13 | 10⁻¹³ | 0.0000000000001 |
| 14 | 10⁻¹⁴ | 0.00000000000001 |
| 15 | 10⁻¹⁵ | 0.000000000000001 |
And then I plotted the results in Excel on a log scale.
Now, I already know the formula for this graph, it's:
$$ y = 10^{-x} $$
But was curious to see how well an "exponential" trendline would fit, and it fits very well.
The $R^2$ is $1$, even for $15$ decimal places.
So it seems that:
$$y = 10^{-x} \iff y = e^{-2.30258509299405x} $$
The question
So I have to wonder:
is there an algebraic transformation of: $$y = 10^{-x} \to y = e^{-kx} $$
Where does the constant $k$ come from?
Does it have an expression?
Or is this all a very interesting coincidence?
AI: Yes, it can be. Notice that
$${y=10^{-x}> 0}$$
Hence ${\log(10^{-x})}$ is well defined, and so
$${10^{-x}=e^{\log(10^{-x})}=e^{-x\log(10)}=e^{-\log(10)x}}$$
And so
$${k=\log(10)\approx 2.30258509....}$$
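So the constant is just the natural logarithm of $10$:

```python
import math
print(math.log(10))   # 2.302585092994046 -- the fitted constant k
```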
|
H: In a quiz with 13 people, what are the probabilities that exactly 12, exactly 11 and exactly 10 people answer correctly?
A group of 13 people, p1, p2, p3, ..., p13, answers the same question. The probability that a person answers correctly is known but differs between the people. For example, the probability that p1 answers correctly is 0.68, for p2 it's 0.23, and so on. How can I calculate the probability that, for example, exactly 10 people get the question correct?
AI: If the answers are independent events, then for a subset $S\subseteq\{1,\dots,13\}$,
$$p_S=\prod_{i\in S}p_i\prod_{j\notin S}(1-p_j)$$
is the probability that exactly the persons indexed by $S$ answer correctly, where the $p_i$ are the probabilities of answering correctly.
Write the same for all ${13\choose 10}$ subsets of size $10$ and add all of them to get the final probability.
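This distribution of the number of correct answers is a Poisson binomial distribution, and in practice it is computed with a small dynamic program rather than by enumerating all $\binom{13}{10}$ subsets. A sketch (the probabilities beyond the two given in the question are made up):

```python
def poisson_binomial(ps):
    """dist[k] = P(exactly k of the independent events occur)."""
    dist = [1.0]
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1 - p)     # this person answers incorrectly
            new[k + 1] += q * p       # this person answers correctly
        dist = new
    return dist

ps = [0.68, 0.23] + [0.5] * 11   # hypothetical probabilities for 13 people
dist = poisson_binomial(ps)
print(dist[10])                   # P(exactly 10 correct)
print(sum(dist))                  # ~1.0 (sanity check)
```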
|
H: Proof of The Third Isomorphism Theorem
Here's what I'm trying to prove right now:
Let $V$ be a vector space over $\mathbb{F}$. Let $M$ be a linear subspace of $V$ and $N$ be a linear subspace of $M$. Prove that the mapping $x+N \mapsto x+M$ between the quotient spaces $V/N \to V/M$ is linear with kernel $M/N$. Then, deduce that:
$$\frac{V/N}{M/N} \cong V/M$$
Proof Attempt:
We have to show that the given relation $T: V/N \to V/M$ is a linear mapping. We define:
$$\forall x \in V: T(x+N) = x+M$$
This is totally-defined. To show well-definedness, let $x+N = y+N$ where $x,y \in V$. Then, $x-y \in N$. So, $x-y \in M$. Hence:
$$x + M = y+M$$
$$\iff T(x+N) = T(y+N)$$
To prove that it is linear, we need to show additivity and homogeneity.
Proof of Additivity
Let $u,v \in V/N$. Then, $u = x+N$ and $v = y+N$ for some $x,y \in V$. Then:
$$T(u+v) = T((x+N)+(y+N)) = T((x+y)+N) = (x+y)+M = (x+M)+(y+M) = T(u) + T(v)$$
This proves additivity.
Proof of Homogeneity
Let $\alpha \in \mathbb{F}$ and $u \in V/N$. Then, $u = x+N$ for some $x \in V$. So:
$$T(\alpha u) = T(\alpha(x+N)) = T(\alpha x +N) = \alpha x + M = \alpha (x+M) = \alpha T(u)$$
This proves homogeneity. Hence, $T$ is a linear map. To show that the kernel of $T$ is $M/N$, we have:
$$T(x+N) = x+M = \theta_V+M$$
$$\iff x \in M$$
$$\iff x+N \in M/N$$
$$\iff \ker(T) = M/N$$
Now, we notice that $T$ is surjective. By the first isomorphism theorem, it follows that:
$$\frac{V/N}{M/N} \cong V/M$$
That proves the desired result.
Does the proof above work? If it doesn't, why? How can I fix it?
AI: Your approach is absolutely right!
I would change the part concerning $\ker T$, writing:
\begin{align}
x + N \in \ker T & \iff T(x+N) = 0 \in V/M \\
& \iff x+M = 0 \in V/M \\
& \iff x \in M \\
& \iff x + N \in M/N
\end{align}
which means $\ker T = M/N$.
|
H: Is this set of linear maps valid?
Notation: The set of all linear maps from a vector space $V$ to a vector space $W$ (over a field $\mathbb{F}$) is denoted $\mathcal{L}(V, W)$.
The question states:
Show that $\{ T \in \mathcal{L}(\mathbb{R}^5, \mathbb{R}^4) : dim(null(T)) > 2\}$ is not a subspace of $\mathcal{L}(\mathbb{R}^5, \mathbb{R}^4)$.
If I understand correctly, $\mathcal{L}(\mathbb{R}^5, \mathbb{R}^4)$ is the set of all linear maps from $\mathbb{R}^5$ to $\mathbb{R}^4$. So $dim(\mathbb{R}^4) = 4$ and hence $range(T) = 4$ also.
But by the Fundamental Theorem of Linear Maps, could $T$ ever exist?
$$
dim(\mathbb{R}^5) = dim(null(T)) + dim(range(T)) \\
5 = dim(null(T)) + 4 \\
1 = dim(null(T))
$$
And hence $dim(null(T)) \not > 2$.
The answer to this question seems to assume the linear mapping is from $\mathbb{R}^5$ to a subspace of $\mathbb{R}^4$, as it provides the following counterexample:
Let $e_1, \ldots, e_5$ be a basis of $\mathbb{R}^5$ and $f_1, \ldots, f_4$ be a basis of $\mathbb{R}^4$. Define $S_1$ and $S_2$ by:
$$ S_1e_i = 0, S_1e_4 = f_1, S_1e_5 = f_2, i = 1, 2, 3 \\
S_2e_i = 0,S_2e_3 = f_3, S_2e_5 = f_4, i = 1, 2, 4 $$
(goes onto show not closed under addition)
Have I misunderstood?
AI: I think you have misunderstood. $\mathcal{L}(\Bbb{R}^5, \Bbb{R}^4)$ is the set of linear maps $T : \Bbb{R}^5 \to \Bbb{R}^4$, which means linear maps whose domain is $\Bbb{R}^5$ and whose codomain is $\Bbb{R}^4$. This means that $Tv \in \Bbb{R}^4$, for any $v \in \Bbb{R}^5$. It does not mean that the map $T$ is surjective, i.e. for any $w \in \Bbb{R}^4$, there exists some $v \in \Bbb{R}^5$ such that $Tv = w$.
The range of a linear map is automatically a subspace of its codomain, and need not be the full codomain. For example, the $0$ map in $\mathcal{L}(\Bbb{R}^5, \Bbb{R}^4)$ takes every vector in $\Bbb{R}^5$ and maps it to $(0, 0, 0, 0) \in \Bbb{R}^4$. That is, it maps onto a (trivial) subspace of $\Bbb{R}^4$.
The ranges of maps in this set, by the rank-nullity theorem, must have dimension strictly less than $3$. They cannot be surjective.
|
H: Calculate $\sum_{r=0}^n \cosh(\alpha+2r\beta)$
I try to calculate $\sum_{r=0}^n\cos(\alpha+2r\beta)$ first to get some insights.
First we prove that $\sum_{r=0}^\infty\cos(\alpha+2r\beta)=0$. (The statement and the proof is flawed, I will edit it later.)
$Proof$: For $\sum_{r=0}^\infty\cos(\alpha+2r\beta)$ is the real part of $\sum_{r=0}^\infty\exp i(\alpha+2r\beta)$, and the latter (denoted by $X$), as addition of complex numbers, can be viewed as addition of vectors (of module 1), we have:
when $2\beta=2\pi/n$, $X$ equals to sum of vectors that corresponds to edges of a n-polygon, and so $X=0$;
when $2\beta\neq2\pi/n$, $2\pi/n = k\cdot2\beta+\rho$ where $0<\rho<2\beta$, and so by summing $k$ vectors we get a vector $z_{1,j_1}$, i.e. $\sum_{r=j_1}^{j_1+k-1}\exp i(\alpha+2r\beta)$, that is of smaller module and smaller 'phase shift', say $\beta_1$.
Then we have $2\pi/n = k_1\cdot2\beta_1+\rho_1$, and so by summing $k_1$ vectors we get a new vector $z_{2, j_2} = \sum_{r=j_2}^{j_2+k-1}z_{1,j_1}$, that is of smaller module and smaller 'phase shift', say $\beta_2$.
Repeating the process we will finally get smaller and smaller module and 'phase shift', the sequence of modules will tend to $0$ (the proof is left out), which completes the proof. $\blacksquare$
But I have no idea yet how to do similar things to $\sum_{r=0}^n\cosh(\alpha+2r\beta)$. The latter can be viewed as a component of $\sum_{r=0}^n e^{\alpha+2r\beta}$. But one cannot easily separate the former from the latter as we do above (for here is no clear distinction between components like one between real and imaginary parts). And the sum itself can't be calculated conveniently as sum of vectors.
So is this method workable for $\sum_{r=0}^n \cosh(\alpha+2r\beta)$? If not, can anyone just give me a hint about how to calculate it?
(Thanks for pointing out that the sum doesn't converge as $n\rightarrow\infty$.)
A graph representation:
https://www.desmos.com/calculator/nih1sg4fwm
AI: $$S_1=\sum_{r=0}^n e^{a+2 b r}=\frac{e^a \left(e^{2 b (n+1)}-1\right)}{e^{2 b}-1}$$
$$S_2=\sum_{r=0}^n e^{-(a+2 b r)}=\frac{e^{-(a+2 b n)}(e^{2 b (n+1)}-1)}{e^{2 b}-1}$$
$$\frac{S_1+S_2}2=\sum_{r=0}^n \cosh(a+2br)=\text{csch}(b) \sinh (b (n+1)) \cosh (a+b n)$$
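A numerical check of the closed form (the values of $a$, $b$, $n$ are arbitrary):

```python
import math

a, b, n = 0.4, 0.25, 12
direct = sum(math.cosh(a + 2 * b * r) for r in range(n + 1))
closed = math.sinh(b * (n + 1)) * math.cosh(a + b * n) / math.sinh(b)
print(direct, closed)   # agree to floating-point precision
```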
|
H: prove that $f(x)=\sum_{|\alpha|\leq k}\frac{1}{\alpha !}D^{\alpha}f(0)x^{\alpha} + O(|x|^{k+1})$
If $\alpha$ is a multiindex and $f$ is smooth, prove that $f(x)=\sum_{|\alpha|\leq k}\frac{1}{\alpha !}D^{\alpha}f(0)x^{\alpha} + O(|x|^{k+1})$. The hint is to use Taylor's formula for $g(t)=f(tx)$. If I do this I find that
$g(t)=\sum \frac{g^{(n)}(0)}{n!}t^n = \sum \frac{ D^nf(0)x^n t^n}{n!}$. How do I continue? Do I need to do induction over the size of $\alpha$? Any hint please, thank you.
AI: The key is to show, using the chain rule and induction, that $$g^{(n)}(0) = \sum_{|\alpha| = n} \frac{n!}{\alpha!}\, D^\alpha f(0)\, x^\alpha.$$
Combining with your first step and plugging in $t=1$ yields $$f(x) = g(1) = \sum_{n \ge 0} \frac{g^{(n)}(0)}{n!} = \sum_{n \ge 0} \sum_{|\alpha| = n} \frac{1}{\alpha!}\, D^\alpha f(0)\, x^\alpha.$$
Can you take it from here?
|
H: Necessary condition for x>0 being an integer
I was trying to solve a number theory problem and then I realized that I was needing to verify (prove or disprove) the following ''fact'' about numbers. I would appreciate any help.
Q: Suppose $x >0$ is such that $x^n \in \mathbb{Z}$ for all $n \geq 2$; must $x$ be an integer? Furthermore, if this is only assumed for odd integers $n\geq 3$, does the conclusion still hold?
AI: Suppose that $x^2,x^3\in\Bbb Z$. Then $x=x^3/x^2$ is rational (note that $x>0$, so $x^2\neq0$). A rational number whose square is an integer must itself be an integer; since $x^2\in\Bbb Z$, it follows that $x$ is an integer.
|
H: Does the Null spaces of matrix $n\times n$ matrix $A$ and matrix $BA$ equal to each other if the matrix $B$ is invertible?
As the title says, both $A$ and $B$ are $n\times n$ matrices, I want to prove the Null spaces $Null (A)$ = $Null(BA)$.
I do not know if the statement is correct,
and I am not sure whether my derivation as follows is correct. Is there a new method to prove or any counterexamples?
Proof: $\forall u \in Null(A)$, then $Au = 0$. Multiply $B$ from left, we obtain $BAu=0$, which means any vector $u \in Null(A)$ is a vector in $Null (BA)$.
Similarly, $\forall v \in Null(BA)$, we have $BAv = 0$. Multiply $B^{-1}$ from left, we obtain $B^{-1}BAv=0$, ie. $Av=0$, which means any vector $v \in Null(BA)$ is a vector in $Null (A)$.
Therefore, we obtain $Null(A) = Null (BA)$, ie., $Ax=0$ and $BAx=0$ have the same solution.
AI: Given that $B$ is invertible, we have that $0 = BAv = B(Av)$ if and only if $Av = 0$, because $B$ is injective. But this says precisely that $\ker A = \ker BA.$
|
H: Hausdorff and non-discrete topology on $\mathbb{Z}$
Construct a topology $\mathfrak{T}$ on $\mathbb{Z}$ such that $\mathbb{Z}$ is Hausdorff and non-discrete with respect to $\mathfrak{T}$.
$\textbf{My idea}$ : We know that $\mathbb{Q}$ is Hausdorff and non-discrete with respect to the topology inherited from $\mathbb{R}$. So we should use this fact in a following way.
Let $\pi:\mathbb{Q}\to\mathbb{Z}$ be any $\textit{onto}$ function. We define a topology $\mathfrak{T}$ on $\mathbb{Z}$ such that $U\subset\mathbb{Z}$ is open in $\mathbb{Z}$ if and only if $\pi^{-1}(U)$ is open in $\mathbb{Q}$. In other words, we define the largest topology on $\mathbb{Z}$ such that $\pi$ becomes continuous. Now it is a matter of choosing an appropriate onto map $\pi$. I am having a difficulty in choosing such $\pi$.
I have tried the integer part function as $\pi$, and the resultant topology $\mathfrak{T}$ on $\mathbb{Z}$ is indeed $\textit{non-discrete}$ but the $\textit{Hausdorffness}$ is not clear. Can anyone please suggest me some other functions as $\pi$ ? Also I would be really grateful if someone suggests me an entire new way to define a topology $\mathfrak{T}$ on $\mathbb{Z}$.
AI: Perhaps the simplest solution is to let $\tau$ be the topology on $\Bbb Z$ generated by the following base:
$$\big\{\{n\}:n\in\Bbb Z\setminus\{0\}\big\}\cup\big\{(\leftarrow,-n]\cup\{0\}\cup[m,\to):n,m\in\Bbb Z^+\big\}\;.$$
That is, each point of $\Bbb Z\setminus\{0\}$ is isolated, and nbhds of $0$ contain a tail of the positive integers and a tail of the negative integers.
For your original idea I’d not even bother with an explicit $\pi$: $\Bbb Q$ is countably infinite, so there is a bijection between it and $\Bbb Z$, and you can use that to define a topology on $\Bbb Z$ making it homeomorphic to $\Bbb Q$. That’s sufficient unless you absolutely need an explicit description of the topology on $\Bbb Z$.
|
H: Rearrangement diverges then original series also diverges?
Question: if $\sum y_n$ is any rearrangement of the series $\sum x_n$, where $\sum x_n$ is a series of positive terms, and $\sum y_n$ diverges, does the original series $\sum x_n$ also diverge?
I think yes, because $\sum x_n$ is a series of positive terms, and rearranging the terms of such a series does not change its sum.
Am I correct? How do I prove it? Please help.
AI: A series of nonnegative numbers converges iff the series converges absolutely. A series converges absolutely iff every rearrangement of that series converges to the same limit.
Since there is a rearrangement that does not converge, the original series cannot converge absolutely. But the terms are nonnegative, so the original series diverges.
|
H: Given a basis $\mathcal{B}$, can I assume that $\mathcal{B}$ is orthonormal?
Let $E$ be a vector space over $\mathbb{C}$ such that $\text{dim}(E)=n \in \mathbb{N}$. Let $\mathcal{B}:=\{e_1,e_2,\cdots, e_n\}$ be a basis of $E$. I know that if $ E $ is vector space with inner product $\langle \cdot, \cdot \rangle : E \times E \longrightarrow \mathbb{C}$, then I can always assume, due to the Gram-Schmidt Process, that $ \mathcal{B}$ is an orthonormal basis, that is,
$$||e_i||=1,\; \forall \; i \in \{1,2,\cdots, n\},$$
where $||x||=\sqrt{\langle x, x \rangle }$, for all $x \in E$.
Question. If $E$ it is not a space with an inner product, only a normed space (with norm $||\cdot||$), so I can also assume that $||e_i||=1$ for all $i \in \{1,2, \cdots , n\}$?
AI: A basis $\{e_1,\dots,e_n\}$ of an inner product space $E$ is orthonormal iff $\langle e_i,e_j\rangle=\delta_{ij}$, not only $\|e_i\|=1$.
Next, as specifically your question says that "is it possible to assume $\|e_i\|=1$ for a basis $\{e_1,\dots,e_n\}$?". So it is true. You can assume. Because if not, then you can take elements of the basis as $e_i'=\dfrac{e_i}{\|e_i\|}$.
|
H: Convergence in L2 (up to a constant) implies convergence in probability?
Suppose we have a sequence of random variables $\{X_n\}$ such that $\mathbb{E} [|X_n|^2] = a_n + c$ where $a_n \to 0$ is a decreasing sequence of positive real numbers and $c > 0$ is a constant. Then can we say anything about the convergence in probability of $X_n$?
AI: No. Consider the sequence of random variables $X_n = (1 + \frac{1}{n})$ with the uniform measure on $[0,1]$. Well $\mathbb{E}(|X_n|^2) = \frac{1}{n^2} + \frac{2}{n} + 1$ where $c = 1$ and $a_n = \frac{1}{n^2} + \frac{2}{n}$ fulfill your properties. Now consider the sequence of random variables $Z_n = (-1)^n X_n$. You get that $Z_n$ have the same properties, but $Z_n$ does not converge in measure/probability to any random variable; the two candidates being $1$ and $-1$.
|
H: Absolutely continuous function with bounded derivative on an open interval is Lipschitz
I've come across a question which states that one can prove $f:\mathbb{R} \rightarrow \mathbb{R}$ is Lipschitz iff $f$ is absolutely continuous and there exists $M \in \mathbb{R}$ such that $|f'(x)|≤ M$ almost everywhere.
I've only ever seen this fact proven for functions on closed and bounded intervals. Is it possible to show this on all of $\mathbb{R}$? I'm skeptical as the main fact I'd like to use which represents $f$ as an indefinite integral ( since $f$ is absolutely continuous) requires the domain to be a closed and bounded interval. However I can't find a counterexample.
AI: $\Rightarrow:$ Suppose that $f$ is Lipschitz continuous, then there
exists $M>0$ such that $|f(x)-f(y)|\leq M|x-y|$ for all $x,y\in\mathbb{R}$.
Let $\varepsilon>0$ be given. Define $\delta=\frac{\varepsilon}{2M}>0$.
Let $\{I_{n}=(a_{n},b_{n})\mid n\in\mathbb{N}\}$ be a countable family
of pairwisely disjoint open intervals such that $\sum_{n=1}^{\infty}\mu(I_{n})<\delta$.
(Here, $\mu(A)$ denotes the Lebesgue measure of a set $A$.)
\begin{eqnarray*}
& & \sum_{n=1}^{\infty}|f(b_{n})-f(a_{n})|\\
& \leq & \sum_{n=1}^{\infty}M\mu(I_{n})\\
& < & \varepsilon.
\end{eqnarray*}
This shows that $f$ is absolutely continuous. From the standard theory
of any real analysis textbook, for each $N\in\mathbb{N}$, $f'(x)$
exists a.e. for $x\in[-N,N]$. Let $A_{N}=\{x\in[-N,N]\mid f'(x)\mbox{ does not exist}\}$,
then $\{x\in\mathbb{R}\mid f'(x)\mbox{ does not exist}\}=\cup_{N}A_{N}$
which has measure zero. That is, $f'(x)$ exists a.e. Let $x\in\mathbb{R}$
for which $f'(x)$ exists. For any $y>x$, we have $|\frac{f(y)-f(x)}{y-x}|\leq M$.
Letting $y\rightarrow x+$, then we have $|f'(x)|\leq M$.
$\Leftarrow:$ Suppose that $f$ is absolutely continuous and there
exists $M>0$ such that $|f'(x)|\leq M$ a.e.. Let $x_{1},x_{2}\in\mathbb{R}$
with $x_{1}<x_{2}$. From the standard theory with $f$ restricted
on $[x_{1},x_{2}]$, we have that $f(x_{2})-f(x_{1})=\int_{x_{1}}^{x_{2}}f'(x)dx$.
Hence
\begin{eqnarray*}
& & |f(x_{2})-f(x_{1})|\\
& \leq & \int_{x_{1}}^{x_{2}}|f'(x)|dx\\
& \leq & M(x_{2}-x_{1}).
\end{eqnarray*}
Therefore, $f$ is Lipschitz continuous.
|
H: Proving only certain methods of Integration work on certain problems
I have not studied Pure Maths (Stuff which includes rigorous proving) yet however I did take classes on Integration.
There were certain problems which we could do only in certain ways to get the results; for instance, the integral of $\ln(x)$ can easily be computed via integration by parts, but to my knowledge there is no meaningful substitution which can do the same.
I pondered a bit and then it seemed pretty obvious that not all integrals can be solved by all techniques however this is a very general statement to make and rigour is everything in mathematics so are there ways to prove such a statement?
I thought of using proof by contradiction: if the statement is true, then there should exist no counterexamples. I then took up the integral of some function for which, to my knowledge, there is no substitution technique that solves it (such as $\ln(x)$), so the proof became contingent on showing that no substitution can solve that integral, but I couldn't think of any way to do that.
AI: There certainly is a substitution that works: $u = x\ln x - x$. Then $du =\ln x\,dx$, so $\int \ln x\,dx = \int du = u = x\ln x - x$.
Of course I made that substitution because I knew the answer, but my point is that "not yet knowing the answer" isn't a mathematical concept. I gave you a substitution that worked, case closed.
Rather than try to show certain methods of integration don't work, a more robust concept is showing that the antiderivative of a (continuous) function does not lie among a certain class of functions, such as what are called the "elementary" functions. See https://en.wikipedia.org/wiki/Liouville%27s_theorem_(differential_algebra). This is not going to tell you anything about how you can or can't work out an antiderivative of $\ln x$ by techniques taught in a calculus course (in fact you know that it can be done), but it will tell you that functions such as $e^{-x^2}$ do not have an antiderivative among the kinds of functions students learn before taking calculus (the "elementary functions").
|
H: How to solve this ODE: $y'(x) e^x = y^2(x)$?
I am trying to solve the differential equation
$$y'(x) e^x = y^2(x) \quad (DE) $$
This is a Bernoulli form DE i.e $y'(x) + a(x)y(x) = b(x)y^r(x)$, where $r = 2, a(x) = 0, b(x) = \frac1e $
Let $u(x) = y^{1-r} = y^{-1} \iff u'(x) = -y^{-2}(x) y'(x)$
Then for $y \neq 0$: $(DE) = \frac{y(x)'}{y^r(x)} = e^{-x} \iff -u'(x) = -e^x (2)$
But $(2)$ is a seperate variable form ODE therefore:
$$ u(x) = e^{-x} + C \iff \frac1y = e^{-x} + C \iff $$
$$ \bbox[15px,#ffd,border:1px solid green]{y(x) = \frac{1}{e^x + C} }$$
with $y(x) =0$, not being a solution of the DE.
It all seems right to me, but wolfram has another opinion i.e
$$ \bbox[15px,#ffd,border:1px solid blue]{y\left(x\right)\:=\:\frac{-e^x}{\left(Ce^x\:-\:x\:-\:1\right)}} $$
I never won an argument against Wolfie, so I am wondering what I did wrong in my solution.
AI: As mentioned in the comments, Wolfram Alpha interprets $y^2(x)$ as $y^2\times x$. You can write $(y(x)^2)$ in Wolfram Alpha instead. The correct input in Wolfram Alpha produces:
$$y(x)=-\frac{e^x}{Ce^x-1}$$
In your solution, $b(x)=e^{-x}$. Setting $u(x) = y^{-1} \implies u'(x) = -y^{-2}(x) y'(x)$, so you should divide both sides of the original differential equation by $y^2$
$$y'(x) e^x = y^2(x)\implies \frac{y'(x)}{y^2(x)}=e^{-x}\implies -u'(x)=e^{-x}$$
Hence
$$-\frac{du(x)}{dx}=e^{-x}$$
$$-u(x)=-e^{-x}+C$$
$$-\frac{1}{y(x)}=-e^{-x}+C$$
$$y(x)=-\frac{1}{C-e^{-x}}$$
Therefore, multiplying the numerator and denominator by $e^x$ forms
$$y(x)=-\frac{e^{x}}{Ce^{x}-1}$$
Alternatively, we may write
$$ y(x) = \frac{1}{e^{-x}+C}$$
Note: $y(x)=0$ is not a solution.
|
H: Is "An undirected graph $G(V,E)$ has at least $|V|-|E|$ connected components" a true statement?
I'm taking this Coursera's course on Graph Theory, which is part of a specialization in discrete math for CS, offered by University of California, San Diego: https://www.coursera.org/specializations/discrete-mathematics
In this course they state this theorem:
An undirected graph $G(V,E)$ has at least $|V|-|E|$ connected components.
With the proviso that if $|E|>|V|$, then $|V|-|E|$ will be negative, so, despite still being true, it will be kind of useless.
Looking for further information on this theorem, I don't find it anywhere else on the web, so I want you guys to tell me if this is a correct theorem, because for some reason I find it "weird".
AI: Start with the empty graph (with $0$ connected components), then add your vertices one at a time. At each step the number of connected components goes up by $1$. Then add the edges one at a time. At each step the number of connected components goes down by 1 or stays the same. Therefore the lowest it can go is $|V|-|E|$.
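This proof translates directly into code: build the graph edge by edge and track the component count with a union-find. A minimal sketch:

```python
def count_components(num_vertices, edges):
    parent = list(range(num_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    components = num_vertices               # start: each vertex on its own
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                        # this edge merges two components
            parent[ru] = rv
            components -= 1                 # count drops by exactly 1
        # else: count stays the same
    return components

edges = [(0, 1), (1, 2), (0, 2), (3, 4)]    # |V| = 6, |E| = 4
print(count_components(6, edges))           # 3 >= |V| - |E| = 2
```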
|
H: Evaluating $\lim_{n\rightarrow\infty}\int_0^n\frac{(1-\frac{x}{n})^n}{ne^{-x}}dx$
Question: Find $\lim_{n\rightarrow\infty}\int_0^n\frac{(1-\frac{x}{n})^n}{ne^{-x}}dx$.
My thoughts: First, I'd like to bring the limit inside the integral, because $\lim_{n\rightarrow\infty}\frac{(1-\frac{x}{n})^n}{ne^{-x}}=\frac{e^{-x}}{ne^{-x}}\rightarrow0$ and $n\rightarrow\infty$, and so the value of the integral would be $0$. However, I am a bit stuck on justifying pulling the limit inside the integral. I was hoping to be able to use the Dominated Covergence Theorem, so I need to find an integral majorant. The way that I have always gone about doing that (when the answer isn't obvious to me) is to take the derivative of the denominator with respect to $n$ and set it equal to $0$ to minimize it, then get $n$ in terms of $x$. Next, find the minimum over $n$ of my denominator (now in terms of $x$), and then find the supremum of the fraction, and see when that integral converges. However, for this one, I am a bit stuck..... maybe DCT isn't best here?
AI: Easier: Move everything to $[0,1]$, so
$$
\int_0^n\frac{(1-(x/n))^n}{ne^{-x}}\,\mathrm{d}x=\int_0^1\frac{(1-t)^n}{e^{-nt}}\,\mathrm{d}t=\int_0^1[(1-t)e^t]^n\,\mathrm{d}t
$$
But $0\leq (1-t)e^t\leq 1$ for $t\in[0,1]$, so DCT gives the limit $0$.
|
H: Find pdf of transformation of two random variables using CDF
Let $X,Y \sim$ Uniform$(0,1)$ be independent. Find the PDF for $X/Y$.
Let $Z=X/Y$. We want to find $F_z(z)=P(Z \leq z)=P(X/Y \leq z)$.
We can make $Y$ super small with fixed $X$, and conversely we can make $X$ really small with fixed $Y$. Thus it appears to me that we have $0<z<\infty$. I am struggling to find the subgraph of the unit square. We know that $X \leq Yz$. If $z = 1$, then we have a simple diagonal through the unit square. Increasing the value of $z$ shrinks what $Y$ can be since $X$ must be between $0$ and $1$. So we should see these lines fan out below the diagonal of the square as $z$ increases. Conversely, if $z$ approaches $0$, then we we limit the range of $X$, and so these lines fan out above the diagonal.
What I am having trouble with is putting this all together and calculating the integral itself.
AI: $$F_Z(z)=P[\frac{X}{Y} \leq z]=P[Y \geq \frac{X}{z}]$$
Drawing the line $Y=\frac{X}{z}$ in the unit square you see that CDF(z) is the area above this line thus
$$F_Z(z) =
\begin{cases}
0, & \text{if $z<0$} \\
\frac{z}{2}, & \text{if $0\leq z<1$} \\
1-\frac{1}{2z}, & \text{if $z \geq 1$}
\end{cases}$$
To get the PDF, just differentiate $F$.
No integral is needed: all the calculations can be done by evaluating areas of triangles.
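A simulation against the derived CDF (a sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.random(1_000_000)
y = rng.random(1_000_000)
z = x / y

for t in (0.25, 0.5, 1.0, 2.0, 5.0):
    empirical = np.mean(z <= t)
    exact = t / 2 if t < 1 else 1 - 1 / (2 * t)
    print(t, empirical, exact)   # empirical matches the piecewise CDF
```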
|
H: Number theory question: Prove $27\mid a+b+c$ if $(a-b)(b-c)(c-a)=a+b+c$.
Integers $a,b$ and $c$ satisfy $(a-b)(b-c)(c-a)=a+b+c$. Prove $27\mid a+b+c$.
AI: First, assume that $3 \nmid (a+b+c)$. Then, $3 \nmid (a-b)(b-c)(c-a)$, which means that $a,b,c$ must be distinct modulo $3$. However, we still get $a+b+c \equiv 0+1+2 \equiv 0 \pmod{3}$ which is a contradiction. Thus, $3 \mid (a+b+c)$, and thus $3 \mid (a-b)(b-c)(c-a)$.
WLOG let $3 \mid (a-b)$. It then follows that:
$$3 \mid (a+b+c) \implies 3 \mid (a-b)+(2b+c) \implies 3 \mid (2b+c) \implies b \equiv c \pmod{3}$$
Thus, we get $a \equiv b \equiv c \pmod{3}$. This means that each of the factors $(a-b),(b-c),(c-a)$ is divisible by $3$. Hence:
$$27 \mid (a-b)(b-c)(c-a) \implies 27 \mid (a+b+c)$$
Hence, proved.
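A brute-force check over a small box of integers (corroboration, not a proof):

```python
for a in range(-20, 21):
    for b in range(-20, 21):
        for c in range(-20, 21):
            if (a - b) * (b - c) * (c - a) == a + b + c:
                assert (a + b + c) % 27 == 0
print("verified on [-20, 20]^3")
```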
|
H: Examples of closed manifolds?
In Spivak's Diff Geom (vol. 1), p. 19, he says a closed manifold is a compact manifold without boundary (a boundary point is one with a neighborhood homeomorphic to a half-space). I don't know a non-trivial example of that.
For example, a compact subset of $\mathbb{R}^2$ is a closed set and usually has boundary, so it's not a closed manifold according to Spivak's definition.
One example is a finite set of discrete points of $\mathbb{R}^2$: it's compact and, since no point in it has a neighborhood homeomorphic to a half-space, it has no boundary. But the example is trivial.
Does anyone know a non-trivial example of closed manifold (in Spivak's definition)?
[Well I check the definition of closed and compact manifolds here:
https://mathworld.wolfram.com/ClosedManifold.html
https://mathworld.wolfram.com/CompactManifold.html
It seems what confuses me is that 'compact' here means $\sigma$-compact (locally compact and connected, or in other words every open cover has a countable subcover), whereas I thought it meant that every open cover has a finite subcover. Right?
AI: The standard first examples of closed manifolds are the spheres. For instance, $S^2$, usually imagined as the unit sphere in $\Bbb R^3$. The torus is also a popular example, and you will eventually be very familiar also with the projective plane and the Klein bottle. If you want to make your own examples, the boundary of compact manifolds with boundary will always work.
"Compact" here does not mean $\sigma$-compact. Indeed, $\sigma$-compactness is one of Spivak's requirements for any manifold. If I recall correctly, there is an appendix exploring a few non-$\sigma$-compact examples, like the so-called long line.
|
H: Integration by parts on manifold without boundary
Suppose $M$ is a compact Riemannian manifold without boundary. Does the integration by part hold? For example, do we have
$$
\int_M\nabla_g u\nabla_g v dx=\int_M-\Delta_g uvdx?
$$
Here, $\nabla_g$ and $\Delta_g$ are defined w.r.t the Riemannian metric $g$ on the manifold.
AI: Yes. $${\rm div}(u\nabla v) = u\,\triangle v + \nabla u \cdot \nabla v.$$If $M$ is compact without boundary, integrate: $$\int_M {\rm div}(u\nabla v)\,{\rm d}M =\int_M u\,\triangle v\,{\rm d}M +\int_M \nabla u\cdot \nabla v\,{\rm d}M. $$Now Stokes' theorem kills the left side, so reorganizing we get $$\int_M \nabla u \cdot \nabla v\,{\rm d}M = -\int_Mu\,\triangle v\,{\rm d}M$$as wanted.
|
H: How to solve this ODE: $x^3dx+(y+2)^2dy=0$?
I am trying to solve $$ x^3dx+(y+2)^2dy=0 \quad( 1)$$
Dividing by $dx$, we can reduce the ODE to seperate variable form, i.e
$$ (1) \to (y+2)^2y'=-x^3 $$
Hence,
$$ \int (y(x)+2)^2y'(x) dy = \int -x^3dx = - \frac{x^4}{4} + c_1$$
This LHE seems to be easy to solve using integration by parts:
$$ \int (y(x)+2)^2y'(x) dy = y(x)(y(x)+2)^2 - \int y^3dy- \int 4y^2dy + \int4ydy \iff$$
$$ \iff - \frac{y^4}{4} + -\frac13 y^3 + 4y^2 + 8y + c_2$$
But then solving the ODE for $y(x)$ is a struggle:
$$ \iff - \frac{y^4}{4} + -\frac13 y^3 + 4y^2 + 8y = - \frac{3}{4}x + C$$
Any ideas on how I can solve this?
AI: $$x^3dx+(y+2)^2dy=0$$
$$x^3dx=-(y+2)^2dy$$
$$x^3=-(y+2)^2y'$$
Integration gives:
$$\int x^3dx=-\int (y+2)^2 y'dx$$
$$\int x^3dx=-\int (y+2)^2 dy$$
Substitute $u=y+2$
$$\int x^3dx=-\int u^2du$$
$$\dfrac {x^4}4+\dfrac {u^3}{3}=C$$
$$\dfrac {x^4}4+\dfrac {(y+2)^3}{3}=C$$
|
H: Non-negative convergent series $a_n$ where $\lim\sup na_n >0$
From Carothers, Chapter 1, Exercise 34:
Suppose that $a_n \geq 0$ and $\sum_{n=1}^\infty a_n< \infty$. Give an example showing that $\lim\sup_{n\to \infty} n a_n > 0$ is possible.
Looking at the sequence $n a_n$, there is a subsequence $n_k a_{n_k}$ that either diverges or converges to some positive number, where $a_{n_k} \to 0$ because the series itself converges. I tend to get stuck at this point; I know that $a_{n_k}$ must decrease slowly relative to $n_k$ but not so slow so that the series doesn’t converge. Does anyone have any tips?
AI: $a_n=\frac 1 n $ when $n =m^{2}$ for some $m$ and $0$ otherwise.
|
H: Is there a third "version" of difference-of-means hypothesis testing?
I'm following Schaum's Outlines for statistics as well as taking a course and I'm getting mixed up with the way hypothesis testing is done for differences of means.
First the class described a "two-sample unpaired t-test":
$$
t = \frac{(\bar{x}_1 - \bar{x}_2)-(\mu_1 - \mu_2)}{\sqrt{\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}}}
$$
Then a "two-sample paired t-test":
$$
t = \frac{\bar{x}_D -\mu_{test}}{\frac{s_D}{\sqrt{n}}}
$$
Where I assume $\bar{x}_D=\bar{x}_1 - \bar{x}_2$ as a way to shorten what they wrote previously in the unpaired test, but the document doesn't confirm this. I also assume $\mu_{test}$ is meant to be $\mu_1-\mu_2$. I also found it interesting they only talk about t-tests, ignoring anything with the z-score, whereas Schaum's starts the chapter off with this for "test for difference of means":
$$
z = \frac{(\bar{x}_1 - \bar{x}_2)-0}{\sqrt{\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}}}
$$
I think this is because the course is focused on Python statistics, so we go straight to the more practical case where we don't know population variance?
So I believe Schaum's test is what's meant to be the "two-sample unpaired t-test", but for the case where population variance is known. I'm not sure what the paired test is doing though.
Furthermore, in Schaum's under small sampling theory, they give this for the difference of means test:
$$
t = \frac{\bar{x}_1 - \bar{x}_2}{\sigma\sqrt{1/n_1 + 1/n_2}} \\
\sigma = \sqrt{\frac{n_1 s_1^2+n_2 s_2^2}{n_1+n_2-2}}
$$
I assume the zero was left out for brevity. I have no idea what this is, and if it has anything to do with the paired test. Am I looking at essentially three tests? 1) testing if the means of two distributions are different, using $t$ if pop variance unknown and $z$ if known, 2) some sort of "paired" comparison and 3) whatever that small sample test is supposed to be?
AI: Let's go through these one at a time. The key takeaway throughout this discussion is that the choice of test statistic is a tradeoff between the distributional assumptions that can be made about the groups being compared, versus the power of the test to reject the null hypothesis. For example, you could use a nonparametric test to compare the means of two normally distributed populations with known variances, but it won't be as powerful to detect a difference compared to a two-sample $z$-test.
The first test statistic,
$$T = \frac{(\bar x_1 - \bar x_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}$$ is the Welch $\boldsymbol t$-test which is an approximately $t$-distributed statistic under the assumption of the null hypothesis. The degrees of freedom for this test is calculated using the Welch-Satterthwaite approximation $$\nu = \frac{\left(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}\right)^2}{\frac{s_1^4}{n_1^2 (n_1-1)} + \frac{s_2^4}{n_2^2 (n_2-1)}}.$$ Note that this test is applied when there is no assumption about the equality of the within-group variances; i.e., we may use this test when $\sigma_1^2 \ne \sigma_2^2$. This is the most flexible test of location for two normally distributed groups. It does not assume their variances are known or equal, and does not require equal group sample sizes. Moreover, it is reasonably robust to deviations from normality of the groups; just like in the one-sample hypothesis test when the population is not normally distributed, the larger the sample size, the closer to asymptotic normality the sample mean becomes due to the CLT. But in the small-sample case, use of this test when normality cannot be assumed, should be reconsidered. A nonparametric test may be more appropriate.
The second test statistic,
$$T = \frac{\bar x_D - \mu_{\text{test}}}{s_D/\sqrt{n}},$$ is for a paired $\boldsymbol t$-test. Here, $\mu_{\text{test}}$ is the hypothesized difference in population means, and $s_D$ is the sample standard deviation on the paired differences; i.e., $$s_D^2 = \frac{1}{n-1} \sum_{i=1}^n \left((x_{i,1} - x_{i,2}) - (\bar x_1 - \bar x_2)\right)^2,$$ where $x_{i,j}$ is the $i^{\rm th}$ observation in group $j$ and $\bar x_1 - \bar x_2$ is the difference of sample means, which is equal to the sample mean of the paired differences.
This test is applicable when observations from each group can be naturally paired with each other, thus necessitating equal numbers of observations from each group. An example where such a test applies is if we are interested in whether the use of a particular gasoline additive improves gas mileage. Assuming mileage is normally distributed, we collect data on a fleet of cars, running them twice: once with and once without the additive, and calculate their mileages. By calculating the difference between mileages for each car, we are in effect controlling for the between-car fuel efficiency variation. The resulting test is more powerful than a two-sample independent $t$-test for this reason.
The third test statistic,
$$Z = \frac{(\bar x_1 - \bar x_2) - 0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}$$ is a two-sample independent $z$-test of the equality of means when the within-group variances $\sigma_1^2, \sigma_2^2$ are known and the groups are normally distributed. In such a case, as we have discussed in another question, the within-group sample means $\bar x_1, \bar x_2$ are exactly normally distributed with means $\mu_1$, $\mu_2$, variances $\sigma_1^2/n_1$, $\sigma_2^2/n_2$; therefore, their difference is also exactly normally distributed: $$\bar x_1 - \bar x_2 \sim \operatorname{Normal}\left(\mu_1 - \mu_2, \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}\right).$$ Therefore, $Z$ is standard normal under the null hypothesis $\mu_1 = \mu_2$.
The fourth test statistic,
$$T = \frac{\bar x_1 - \bar x_2}{\sigma \sqrt{1/n_1 + 1/n_2}}$$ with $\sigma$ specified as in your question, is a two-sample independent $t$-test using a pooled variance estimate. As noted in the above comment, the formula is incorrect if the sample standard deviations are calculated with Bessel's correction. This statistic has (very slightly) more power than the Welch $t$-test if the assumption that the group variances are (roughly) equal is valid.
Note: All four statistics assume normality or approximate normality of the groups. The Welch $t$-test does not assume anything else. The paired $t$-test assumes the observations are naturally paired. The $z$-test assumes the group variances are known. The pooled $t$-test assumes the group variances are roughly equal.
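Since the course is Python-based, it may help to note (this is an addition, not part of the comparison above) that the first, second, and fourth statistics correspond directly to functions in `scipy.stats`; the data below are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, 30)    # hypothetical group 1
x2 = rng.normal(0.5, 2.0, 30)    # hypothetical group 2

print(stats.ttest_ind(x1, x2, equal_var=False))   # Welch t-test (first statistic)
print(stats.ttest_rel(x1, x2))                    # paired t-test (second statistic)
print(stats.ttest_ind(x1, x2, equal_var=True))    # pooled-variance t-test (fourth statistic)
```

The $z$-test with known variances can be computed from its formula directly.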
|
H: A double check of my answer. Involves geometry and algebra
The $2$ slices cut into by Lee combine to make one big triangle. I used the Pythagorean theorem to find the length of the missing side (hypotenuse) to be $\sqrt{128}$, and I used the Pythagorean theorem again to find the length of the side John's slice borders with the other slice that Lee has cut into, the length is $\sqrt{32}$. Using this information, I just plug the correct variables into the correct formulas to answer the question.
Apparently the area of a right triangle is $\frac{ab}{2}$, so:
$$\frac{1}{2} \left(\frac{\sqrt{128}}{2} \cdot \sqrt{32} \right) = 16$$
So that's John's slice, now for Lee's:
Apparently, the area for an ellipse is: (half major axis * half minor axis * π) / 2 so...
$\frac{1}{2}\frac{\sqrt{128}}{2} (8 - \sqrt{32}) π = 20.82$
That means
John's slice = 16
Lee's slice = 20.82
And the difference is 4.82, according to my math. But when I look at the answer key it agrees that Lee's pie is bigger but differs by how much, stating Lee's pie is only bigger by 2.265
Now funny enough, I can achieve the same decimal value of the prescribed answer by using the formula to find the area of a circle instead of an ellipse. But clearly what I'm looking at is an ellipse, not a circle. So I'm thinking it's possible they used the wrong formula.
AI: It's easier to recognize John's slice is a right triangle with two $45^\circ$ angles. So its sides are $s,s$ and $\sqrt 2 s$. And as the pizza has diameter $16$, the hypotenuse is $8 = \sqrt 2 s$ so $s =\frac 8{\sqrt 2}=4\sqrt 2$. And the area is $\frac 12 s\cdot s = \frac 12 (4\sqrt 2)^2 = 16$ square inches.
Now the entire pie is $\pi r^2 = 64\pi$, so there should be $8$ normal slices, each $\frac {64\pi}8 = 8\pi$ square inches in area. So one of those two parts of Lee's piece is a whole slice minus John's slice. That's $8\pi - 16 = 8(\pi-2)$ square inches. Lee has $2$ of those pieces so his part is $16(\pi-2)$ square inches.
As $\pi-2 > 1$ Lee's pieces are $(\pi - 2)$ times bigger.
The difference is $16(\pi - 2) - 16 = 16(\pi - 3)$.
I see utterly no reason to try to convert or estimate that as a decimal number but it is apparently what the book wants. So $16(\pi-3) \approx 16(3.14-3)=16\cdot 0.14 = 1.6\cdot 1.4 = (1.5 + 0.1)(1.5-0.1)= 2.25 - 0.01= 2.24$ square inches roughly.
If we use a calculator I get Lee's slice is $16(\pi -2)\approx 18.265482457436691815402294132472...$ (If I calculate by hand with $\pi \approx 3.14$ I get $18.24$, which accounts for a difference of $0.025482457436691815402294132472...$, the approximate value of $16 \times 0.0015926535897932384626433832795....$) But such accuracy is ludicrously unimportant.
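A short numerical check of the exact answer (not part of the original answer):

```python
import math

john = 16.0                          # John's triangular slice, in square inches
part = math.pi * 8**2 / 8 - john     # each of Lee's two parts: one full slice minus John's
lee = 2 * part                       # = 16*(pi - 2)
print(lee, lee - john)               # ≈ 18.265 and ≈ 2.265
```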
|
H: Definition of vertex in graph theory
What is the definition of vertex in graph theory?
Is it just an endpoint of an edge? If we consider it like that, then won't there be uncountably many vertices in every graph, because every point can be considered a vertex? And won't there likewise be uncountably many edges?
AI: In all definitions of graph I know of (undirected graph, simple graph, directed graph, multigraph, hypergraph) the vertices are dedicated part of the data, ie. in all these cases you start with a set $V$ of vertices, which is then turned into a graph by attaching edges from a set $E$ to these vertices.
Sometimes you can recover the vertices from the set of edges, for example when you have functions $s,t:E \rightarrow V$ giving to each edge a source- and a target vertex.
General points of an edge are usually not considered to be a vertex, so if there are finitely many edges you will have finitely many vertices. Yet it is possible to have $V$ being an infinite set (e.g. $\Bbb Z=: V$ considered as a graph with an edge between each $n$ and $n+1$) or even $V$ finite and $E$ infinite (e.g. Hawaiian earrings). The standard assumption of combinatorics/graph theory is that $V$ and $E$ are assumed to be finite, though.
|
H: Is there a general formula for infinite series of a rational function
Is there some sort of formula to calculate $$\sum_x \frac{P_1(x)}{P_2(x)}$$
In particular, what is $$\sum_x \frac{1}{ax^2+bx+c}$$
And what is $$\sum_x \frac{1}{x^3+1}$$
AI: $$S_p=\sum_{x=0}^p \frac{1}{ax^2+bx+c}=\frac 1a\sum_{x=0}^p \frac{1}{(x-r)(x-s)}=\frac 1{a(r-s)}\sum_{x=0}^p \left(\frac 1{x-r}- \frac 1{x-s}\right)$$ Assuming that $r$ and $s$ are not integers, use
$$\sum_{x=0}^p\frac 1{x-t}=\psi (p-t+1)-\psi (-t)$$
$$S_p=\frac{\psi (p-r+1)-\psi(p-s+1)-\psi (-r)+\psi (-s)}{a(r-s)}$$ Now, for large values of $p$, using asymptotics
$$S_p=\frac{\psi (-s)-\psi (-r)}{a (r-s)}-\frac{1}{a p}+\frac{1-r-s}{2 a
p^2}+O\left(\frac{1}{p^3}\right)$$
For the other cases, it is the same story: use partial fraction decomposition.
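As an illustration (not from the original answer), here is a minimal mpmath sketch checking the closed form for $S_p$ on the hypothetical quadratic $x^2+3x+1$, whose roots are irrational, hence not integers:

```python
import mpmath as mp

a, b, c = 1, 3, 1                       # hypothetical quadratic a*x^2 + b*x + c
r, s = mp.polyroots([a, b, c])          # its (non-integer) roots
p = 50
direct = mp.fsum(1/(a*x**2 + b*x + c) for x in range(p + 1))
formula = (mp.digamma(p - r + 1) - mp.digamma(p - s + 1)
           - mp.digamma(-r) + mp.digamma(-s)) / (a*(r - s))
print(direct, formula)                  # the two values agree
```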
|
H: A prime ideal is either maximal right ideal or small right ideal.
Definition:- A right ideal $I$ of a ring $R$ is called a small right ideal if $I+J=R\implies J=R$ for any right ideal $J$ of $R$.
My Question:- A prime ideal is either a maximal right ideal or a small right ideal.
I have tried to find counterexamples but couldn't find any. I have also tried to prove it in many ways, without success. So I cannot conclude whether the above statement is true or false. I need your suggestions on this problem.
My attempt:- Suppose that $P$ is not a small right ideal then there is a proper right ideal $J$ of $R$ such that $P+J=R$. Now we need to show that $P$ is maximal. Let $Q$ be a right ideal of $R$ such that $P\subseteq Q\subseteq R$.Then we show that either $Q=P$ or $Q=R$. If possible, assume that $Q\neq P$ then there is an element $x\in Q\backslash P.$
AI: In $\mathbb{C}[x,y]$ the principal ideal $(x)$ is prime but not maximal or small, as $x+(1-x)=1$.
|
H: How to evaluate $\int_{0}^{\infty} x^{\nu} \frac{e^{-\sqrt{x^2+a^2}}}{\sqrt{x^2+a^2}} \, dx$?
$$\int_{0}^{\infty} x^{\nu} \frac{e^{-\sqrt{x^2+a^2}}}{\sqrt{x^2+a^2}} \, dx$$
Is it possible to calculate this for $a>0$ and $\nu=0, 2$ ?
I think the result seems to include exponential integral function, but I failed to find the answer from the integration table.
I would be very grateful if you could share some of the good integration skills, ideas, or any advice.
AI: Let $x=a \sinh t$, then
$$I_1=\int_{0}^{\infty} \frac{e^{-\sqrt{x^2+a^2}}}{\sqrt{x^2+a^2}} dx= \int_{0}^{\infty} e^{-a \cosh t} dt=K_0(a),$$
where $K_{\nu}(x)$ is cylindrical modified Bessel function of order $\nu$.
Next, $$I_2=\int_{0}^{\infty} x^2 \frac{e^{-\sqrt{x^2+a^2}}}{\sqrt{x^2+a^2}} dx=\frac{a^2}{2}\int_{0}^{\infty} (\cosh 2t-1)~ e^{-a\cosh t} dt=\frac{a^2}{2} [K_2(a)-K_0(a)]$$
See for $K_{\nu}(z):$
https://en.wikipedia.org/wiki/Bessel_function
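A numerical check of both formulas with SciPy (not part of the original answer; $a=1.3$ is a hypothetical test value):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv   # modified Bessel function K_nu

a = 1.3
f = lambda x, nu: x**nu * np.exp(-np.sqrt(x**2 + a**2)) / np.sqrt(x**2 + a**2)

I0, _ = quad(f, 0, np.inf, args=(0,))
I2, _ = quad(f, 0, np.inf, args=(2,))
print(I0, kv(0, a))                         # agree
print(I2, a**2/2 * (kv(2, a) - kv(0, a)))   # agree
```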
|
H: Calculation of $\left(\frac{1}{\cos^2x}\right)^{\frac{1}{2}}$
Shouldn't $\left(\frac{1}{\cos^2x}\right)^{\frac{1}{2}} = |\sec(x)|$?
Why does Symbolab as well as my professor (page one, also below) claim that $\left(\frac{1}{\cos^2x}\right)^{\frac{1}{2}} = \sec(x)$, which can be negative? Also, the length of a vector cannot be negative... can it?
AI: You are correct that $\sqrt{x^2} \neq x$ for every $x\in\Bbb R$. Otherwise, we would have absurdities like $1 = \sqrt{1}=\sqrt{(-1)^2} =-1$. For $x\in\Bbb R$ we indeed have $\sqrt{x^2}=|x|$. However, we have $\sqrt{x^2}=x$ for all $x\geq 0$.
Hence, as your professor seems to assume that $t\in [-\pi/2,\pi/2]$, his claim that $\sqrt{\frac{1}{\cos^2(t)}}=\sec(t)$ is correct. Indeed for $t\in [-\pi/2,\pi/2]$, we have $\cos(t)\geq 0$.
|
H: MIN-FORMULA $\in$ NP
I am thinking about the following problem: Show that MIN-FORMULA $\in$ NP.
MIN-FORMULA is the set of minimal boolean formulas, i.e. formulas such that there is no shorter formula that computes the same boolean function.
I would like advice on whether or not the following approach/idea is OK.
Show by describing a non-deterministic algorithm N that solves the question of elementhood in polytime:
On a given formula $\phi$:
Construct non-deterministically (i.e. simultaneously) all boolean formulas $\psi$ such that
$\psi$ is smaller than $\phi$ and
$\psi$ contains all variables of $\phi$.
Check all resulting formulas $\psi$ for equivalence with $\phi$.
If none is equivalent, ACCEPT, otherwise, REJECT.
Thank you.
AI: The solution proposed seems wrong. Recall that when building a Turing machine that uses non-determinism, if any path accepts, then the whole machine accepts.
So we cannot ACCEPT if none is equivalent: we can only take ACCEPT decisions from one of the branches of computation, not all of them!
A solution to the above problem uses the fact that this approach instead places the problem in $\overline{\texttt{NP}}$, that is, the complement of $\texttt{NP}$, where we may ACCEPT only when all branches accept, and REJECT as soon as one branch rejects.
|
H: Action of the $n$-th roots of unity on $\mathbb{A}^2$
The following appears as Example 1.9.5 (c) in Fulton's Intersection theory.
Let $X$ be the quotient of $\mathbb{A}^2$ obtained by identifying $(s,0)$ with $(\mu s,0)$ for all $n$-th roots of unity $\mu$; equivalently, $$X = \operatorname{Spec} K[s^n, st, s^2t, \dotsc, s^{n-1}t, t].$$
I don't understand how the $n$-th roots of unity can only act on the line $\{t = 0\}$, what happens to a general point $(s, t)$? But if $$\mu(s, t) = (\mu s, t),$$ then I don't see why $s\cdot t$ should be an invariant.
Or is Fulton not talking about group actions here?
AI: It's not a group action on the whole of $\Bbb A^2$, because $\mu$ doesn't act on points with nonzero $t$-coordinate. It might help you to think about an example: taking $n=2$, we're looking at $\operatorname{Spec} k[s^2,st,t]$. As $k[s^2,st,t]\cong k[x,y,z]/(y^2-xz^2)$, we're looking at the Whitney umbrella, which you can think about as grabbing the left half-plane, twisting it, and shoving it through the right-half plane so the positive and negative $x$-axes are overlapping. We can see that when we do this, we preserve all the points which aren't on the $x$-axis.
|
H: How can we prove mgf of sample proportion of binomial distribution converges to exp(pt)?
$S_{n}$ follows Binomial(n,p).
$X_{n}$ is the sample proportion which is $X_{n} = S_{n}/n$.
How can we prove $\lim_{n \to +\infty} M_n{(t)} = e^{pt}$ ?
What I found is
\begin{align}
\lim_{n \to +\infty} M_n{(t)} &=\lim_{n \to +\infty}(pe^{t/n} + q)^n \\
&=\lim_{n \to +\infty}(pe^{t/n} + 1-p)^n\\
&=\lim_{n \to +\infty}[1 + p(e^{t/n}-1)]^n\\
&=\lim_{n \to +\infty}[1+p(1+t/n + (t/n)^2/2! + \cdots -1)]^n\\
&=\lim_{n \to +\infty}(1+pt/n)^n\\
&=e^{pt}
\end{align}
But how can we know
$$\lim_{n \to +\infty}[1+p(1+t/n + (t/n)^2/2! + ... -1)]^n=
\lim_{n \to +\infty}(1+pt/n)^n?$$
AI: $\frac {e^{x}-1} x \to 1$ as $x \to 0$. Given $\epsilon >0$ there exists $\delta >0$ such that $x \leq (e^{x}-1) \leq (1+\epsilon) x$ if $0 \leq x <\delta$. This gives $(\frac t n) \leq (e^{t/n}-1)\leq (1+\epsilon) (\frac t n)$ for $n$ sufficiently large and $t \geq 0$. Can you finish the argument now?
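A quick numerical illustration of the limit (not from the original answer; $p$ and $t$ are hypothetical values):

```python
import numpy as np

p, t = 0.3, 1.7    # hypothetical: success probability p and argument t
for n in [10, 100, 10_000]:
    mgf = (p*np.exp(t/n) + 1 - p)**n
    print(n, mgf, np.exp(p*t))   # mgf approaches e^{pt} as n grows
```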
|
H: Interchange of diagonal elements with unitary transformation
I have a matrix:
$$\left(\begin{array}{lll}
a & 0 & 0 \\
0 & b & 0 \\
0 & 0 & c
\end{array}\right)$$
Which I want to change to:
$$\left(\begin{array}{lll}
a & 0 & 0 \\
0 & c & 0 \\
0 & 0 & b
\end{array}\right)$$
How can I do that with a unitary transformation?
AI: Just multiply with the permutation matrix $$P=\begin{bmatrix}1&0&0\\0&0&1\\0&1&0\end{bmatrix}$$
as follows: $A_2=P^\top A_1 P$. A permutation matrix is unitary and orthogonal:
$$\bar P^\top P=P^\top P=I.$$
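A quick NumPy check (not part of the original answer; the diagonal entries are hypothetical):

```python
import numpy as np

a, b, c = 1.0, 2.0, 3.0
A = np.diag([a, b, c])
P = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]], dtype=float)
print(P.T @ A @ P)   # diag(a, c, b)
print(P.T @ P)       # identity, confirming P is orthogonal (hence unitary)
```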
|
H: Is $\mathbb{Q}\;\cong\; (\prod_{n\in\omega}\mathbb{Z}/p_n\mathbb{Z})/\simeq_{\cal U}$?
Let ${\cal U}$ be a non-principal ultrafilter on $\omega$, and for each $n\in\omega$, let $p_n$ denote the $n$th prime, that is $p_0 = 2, p_1=3, \ldots$.
Next we introduce the following standard equivalence relation on $\big(\prod_{n\in\omega}\mathbb{Z}/p_n\mathbb{Z}\big)$: we say $a \simeq_{\cal U} b$ for $a,b \in \big(\prod_{n\in\omega}\mathbb{Z}/p_n\mathbb{Z}\big)$ if and only if $$\{n\in\omega:a(n) = b(n)\}\in {\cal U}.$$
I think I have proved that $\big(\prod_{n\in\omega}\mathbb{Z}/p_n\mathbb{Z}\big)/\simeq_{\cal U}$ is a field. Is that field isomorphic to $\mathbb{Q} $ ?
AI: This is a simple cardinality argument. Ultraproducts of finite sets are either finite or uncountable. Since the ultrafilter is free, and the sets are all increasing in size, it is not finite.
To see why the ultraproduct is indeed uncountable, take an almost disjoint family of infinite subsets of $\Bbb N$, and consider their characteristic functions. These functions agree with each other at only finitely many points, so their equivalence classes are different in the ultraproduct (if two functions have the same equivalence class in the ultraproduct, they must have agreed on infinitely many values). Now recall that there are almost disjoint families of size $2^{\aleph_0}$ and finish the proof.
|
H: What's the difference between a Proof System and a Theory?
I found this question Is the axiom of induction required for proving the first Gödel's incompleteness theorem?
I am apparently under the (wrong) impression that a theory and a proof system are synonyms. I'm pointing this out because from the accepted answer:
Note that the term "(in)complete" is annoyingly overloaded: (in)completeness of a theory is a very different thing from (in)completeness of a proof system.
I am also currently reading a book at leisure hoping to understand the basics well enough.
In the first few pages, the book defines:
A logical system consists of the following:
An alphabet
A grammar
Propositional forms that require no proof
Rules that determine truth
Rules that are used to write proofs.
I'm not quite sure whether this definition is akin to a theory or a proof system. From where I currently stand, where the lines are drawn is very unclear. Maybe this definition describes both a theory and a proof system? Any help clearing this up would be very appreciated.
This book was recommended by one of my professors as it does cover some model theory which is what I'm hoping to learn. Well, in honesty my goal is to have a very clear idea of what people are talking about when discussing the "good" stuff about $ZFC$, $Q$, $PA$, etc. I decided to look into how the book introduces the fundamentals, mostly because I've already covered first-order logic in a previous CS unit about formal languages.
AI: What's the difference between a Proof System and a Theory?
A proof system is the "logical machinery" made of logical axioms and rules of inference, like propositional calculus and predicate calculus.
A formal mathematical theory is based on axioms; see e.g. the first-order version of Peano arithmetic and Zermelo-Fraenkel set theory built using first-order logical language and using the "calculus" to prove theorems.
See Leary, page 68:
The term theory refers to a collection of propositions all surrounding a particular subject. Since different theories have different notation (think about how algebraic notation differs from geometric notation), alphabets change depending on the subject matter.
This means that we start with the basic alphabet of first-order logic [see Def.2.1.2] made of variables, logical connectives and quantifiers, and develop the theory of sets using (in addition to equality) a single "theory symbol": the binary relation symbol $\in$, where $(x \in y)$ reads:
"$x$ is an element of $y$".
Using examples from Leary's book, we have propositional axioms: $⊢ p→(q→p)$ [axiom FL1, page 24] and rules of inference: Modus Ponens [1.2.10], as well as rules regarding the quantifiers: $\forall x p(x) \to p(a)$ [Universal Instantiation, page 87].
In the development of set theory we will use them to prove mathematical theorems starting from mathematical axioms and using logical axioms and rules.
The definitions above are consistent with the terms used in the post you have linked about Gödel Incompleteness Theorem.
In that case, the post is about the formal mathematical theory $\mathsf Q$, the so-called Robinson arithmetic: a first-order mathematical theory that is a fragment (sub-system) of first-order Peano arithmetic ($\mathsf {PA}$).
|
H: Why do we consider the zeroes of the expression when solving rational inequalities?
When finding the solution set of rational inequalities (of the form $ {x-3\over x+2} \geq 0 $ etc.) , why do we consider the "zeroes" of the expression.
So I've been taught to solve this as follows:
First,
$x-3 = 0 \implies x = 3 \quad\text{and}\quad x+2\not= 0 \implies x\not=-2 \,\text{(b/c, undefined)} $ are "points of interest".
Sketch a number line from $-∞$ to $+∞$, with the points of interest ($-2$ & $3$) labeled.
Then we check each region bound by those numbers to see if they're positive/negative and label them as such.
The thing I'm confused by is why do we check those zeroes? What tells us that the signs don't suddenly flip at some other random value? Can someone explain in detail this working, suggest a better method or point me to other resources which "prove" the things I'm talking about.
AI: For instance, consider $$\frac{x-a}{x-b} \ge 0, \ a\gt b$$ For a fraction to be non-negative, we need the numerator and the denominator to have the same sign, or the numerator to be zero.
If both are positive, then $$x-a\ge 0 \implies x\ge a$$and $$x-b \gt 0 \implies x\gt b \hspace{1 cm} \because x-b\ne 0$$ Taking the intersection of the two requirements, we get $$x\ge a \hspace{2 cm} (\mathbf 1)$$
If both are negative, then $$x-a \lt 0 \implies x\lt a \\ x-b \lt 0 \implies x\lt b $$ This means $$x\lt b \hspace{2 cm} (\mathbf 2)$$ Now, either of the two cases can happen, so we take the union of $(\mathbf 1)$ and $(\mathbf 2)$: $$x\in(-\infty, b) \cup [a, \infty) $$ To answer your question more directly, we consider the zeroes of each factor, because that’s precisely where the factor changes its sign. Each factor is a continuous function, so by the intermediate value theorem it can only change sign by passing through a zero; between consecutive points of interest the sign is therefore constant, and a single test point per region suffices.
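To make the test-point procedure concrete (not part of the original answer), here is a minimal sketch for the asker's example $\frac{x-3}{x+2}\ge 0$, with one test point per region cut out by $-2$ and $3$:

```python
import numpy as np

# sign of (x - 3)/(x + 2) at one test point in each region
for x in [-3.0, 0.0, 4.0]:
    print(x, np.sign((x - 3)/(x + 2)))   # +1, -1, +1: solution is x < -2 or x >= 3
```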
|
H: Hankel function expansion for large argument
The leading order behaviour of the Hankel function for large arguments is known to be
$$
H_{n}^{(1)}(z)\sim\sqrt{\frac{2}{\pi z}}e^{i\left(z-\frac{n\pi}{2}-\frac{\pi}{4}\right)}
$$
as $z\to\infty$. I would like to know what the analytical form of the full expansion is i.e. if we write
$$
H_{n}^{(1)}(z)=\sqrt{\frac{2}{\pi z}}e^{i\left(z-\frac{n\pi}{2}-\frac{\pi}{4}\right)}f_{n}\left(\frac{1}{z}\right)
$$
then I would like to know the power series expansion for $f_{n}(z)$. I can get the first few terms in Mathematica but I don't see how to derive a general form for the coefficients. Thanks in advance for any help.
AI: The asymptotic expansion for the Hankel function can be found in DLMF:
\begin{align}
&{H^{(1)}_{\nu}}\left(z\right)\sim\left(\frac{2}{\pi z}\right)^{\frac{1}{2}}e^{i\omega}\sum_{k=0}^{\infty}i^{k}\frac{a_{k}(\nu)}{z^{k}}\\
&\omega=z-\tfrac{1}{2}\nu\pi-\tfrac{1}{4}\pi\\
&a_{k}(\nu)=\frac{(4\nu^{2}-1^{2})(4\nu^{2}-3^{2})\cdots(4\nu^{2}-(2k-1)^{2})}{
k!8^{k}}\text{ for }k\ge1\\
&a_0(\nu)=1
\end{align}
for $-\pi<\operatorname{arg}z<2\pi$. It is obtained using a contour integral representation for the function (see here section 7, for example).
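A quick SciPy check of the expansion truncated after the $k=1$ term (not from the original answer; $n$ and $z$ are hypothetical test values):

```python
import numpy as np
from scipy.special import hankel1

n, z = 2, 50.0                        # hypothetical order and (large) argument
omega = z - n*np.pi/2 - np.pi/4
a1 = (4*n**2 - 1) / 8                 # a_1(n) from the formula above
approx = np.sqrt(2/(np.pi*z)) * np.exp(1j*omega) * (1 + 1j*a1/z)
print(hankel1(n, z), approx)          # close; the error is O(1/z^2)
```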
|
H: Finding $P[X+Y > 1, X > 1]$
I'm trying to solve the following problem: I have two independent exponentially distributed r.v. $X$ and $Y$, both with $\lambda = 1$. I want to know the probability $P[X+Y > 1, X > 1]$. Since they are independent, I wrote the joint p.d.f. as $f_{X,Y}(x,y) = f_X(x) \cdot f_Y(y) = e^{-x}\cdot e^{-y}$. Then I've tried to solve
$$
\int_{1}^{\infty} \int_{1 - x}^{\infty} e^{-x}\cdot e^{-y}\, dy\, dx
$$
so,
$$
\int_{1}^{\infty} e^{-x}\cdot e^{x - 1} dx = e^{-1} \cdot \int_{1}^{\infty} dx = \infty
$$
but I get $\infty$ as the result. What am I doing wrong? I think the integral is correctly solved, so the initial formula of the joint may be wrong...
AI: Since $X$ and $Y$ are positive random variables the event $(X>1, X+Y>1)$ is same as $X>1$. So the value is $\int_1^{\infty} e^{-x} dx=\frac 1 e$.
The mistake in your calculation is that the integral w.r.t. $y$ should start from $0$ and not $1-x$, because $1-x <0$.
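A Monte Carlo sanity check (not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(1.0, 10**6)
y = rng.exponential(1.0, 10**6)
print(np.mean((x + y > 1) & (x > 1)), np.exp(-1))   # both ≈ 0.3679
```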
|
H: Bessel functions in terms of the confluent hypergeometric function
I am familiar with the representation of the Bessel functions as
$$J_m(x) = \left(\frac{x}{2}\right)^m \sum_{k = 0}^{\infty} \frac{(-1)^k}{k! (k + m)!} \left(\frac{x}{2}\right)^{2k},$$
for some integer $m$.
Now, I have come across the representation of the Bessel functions in terms of the confluent hypergeometric function as
$$J_m(x) = \frac{1}{\Gamma(m + 1)} \left(\frac{x}{2}\right)^m e^{-i x} \Phi\left(m + \frac{1}{2}, 2m + 1; 2i x\right).\tag{*}$$
I would like to show that these two are equivalent.
We know that
$$\Phi(\alpha, \gamma; x) = \frac{\Gamma(\gamma)}{\Gamma(\alpha)} \sum_{k = 0}^{\infty} \frac{\Gamma(k + \alpha)}{\Gamma(k + \gamma) \Gamma(k + 1)} x^k.$$
Thus, $J_m(x)$ in terms of the confluent hypergeometric function is:
$$J_m(x) = \frac{1}{m!} \left(\frac{x}{2}\right)^m e^{-i x} \frac{\Gamma(2m + 1)}{\Gamma(m + \frac{1}{2})} \sum_{k = 0}^{\infty} \frac{\Gamma(k + m + \frac{1}{2})}{\Gamma(k + 2m + 1) \Gamma(k + 1)} (2i x)^k.$$
By exploiting the following two relations
\begin{align*}
\Gamma(n + 1) &= n!,
\\
\Gamma\left(n + \frac{1}{2}\right) &= \frac{(2n)!}{2^{2n} n!} \sqrt{\pi},
\end{align*}
the above reduces to
$$J_m(x) = \left(\frac{x}{2}\right)^m e^{-i x} \sum_{k = 0}^{\infty} \frac{(2k + 2m)!}{2^{2k} (k + 2m)! (k + m)! k!} (2i x)^k.$$
From here, I don't know how one could go further (for example, how to get rid of the exponential term) to reproduce the representation of the Bessel functions given at the beginning of this post.
AI: One may deduce $(\text{*})$ from the Poisson's integral representation for $\Re m>-1/2$, $$J_m(x)=\frac{\left(\frac{x}{2}\right)^m}{\Gamma\left(m+\frac12\right)\sqrt\pi}\int_{-1}^{1}(1-t^2)^{m-1/2}e^{ixt}\,dt,\tag{1}\label{poisson}$$ for a proof, expand $e^{ixt}$ into a power series and integrate termwise, and the one for $$\Phi(\alpha;\gamma;x)=\frac{\Gamma(\gamma)}{\Gamma(\alpha)\Gamma(\gamma-\alpha)}\int_0^1 z^{\alpha-1}(1-z)^{\gamma-\alpha-1}e^{xz}\,dz \qquad(\Re\gamma>\Re\alpha>0),$$ obtained the same way. The RHS of $(\text{*})$ is then $$\frac{(x/2)^m e^{-ix}}{\Gamma(m+1)}\frac{\Gamma(2m+1)}{\Gamma^2(m+1/2)}\int_0^1\big(z(1-z)\big)^{m-1/2}e^{2ixz}\,dz.$$ Using the duplication formula for $\Gamma(2m+1)$ and substituting $z=(1+t)/2$, we get the RHS of \eqref{poisson}.
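A direct numerical check of $(\text{*})$ with mpmath (not from the original answer; $m$ and $x$ are hypothetical test values):

```python
import mpmath as mp

m, x = 2, 1.5
lhs = mp.besselj(m, x)
rhs = ((x/2)**m / mp.gamma(m + 1) * mp.exp(-1j*x)
       * mp.hyp1f1(m + mp.mpf(1)/2, 2*m + 1, 2j*x))
print(lhs, mp.chop(rhs))   # rhs is real up to roundoff and equals lhs
```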
|
H: Uncountability of $\mathbb{R}$
I was trying to follow this proof of the uncountability of $\mathbb{R}$ and at first it seemed clear, but when I tried to explain it to myself I realized I didn't really understand one of the steps.
The proof is by contradiction, using the nested intervals theorem:
Assume $\mathbb{R}$ is countable. Then we can define a bijection from $\mathbb{N}$ to $\mathbb{R}$, in other words we can assign each number in $\mathbb{R}$ a subscript in $\mathbb{N}$ and get the infinite sequence $R = \{x_1, x_2, x_3, ... \} $.
Let $I_1 \subset \mathbb{R}$ be a closed interval such that $x_1 \notin I_1$. $I_1$ can also be written as $[a_1, b_1]$, with $x_1 < a_1 < b_1$.
So far so good. Now comes the step where I ran into difficulties.
Let $I_2 \subset I_1$ be a closed interval such that $x_2 \in I_1$ and $x_2 \notin I_2$. $I_2$ has bounds $a_2, b_2$. This means we now have the following inequality $x_1 < a_1 \leq x_2 < a_2 < b_2 < b_1$.
But it seems to me that for this to make sense $a_1$ must be equal to $x_2$, since we have assumed that $\mathbb{R}$ is countable and we know that $\mathbb{R}$ is increasing. Otherwise we've just produced a new number in $\mathbb{R}$ between $x_1$ and $x_2$ that the count sort of skipped.
Now obviously the point of the proof is to show precisely that $\mathbb{R}$ is uncountable and that assigning natural subscripts to the real numbers won't get you anywhere, but it feels like at that moment in the proof we'd be asssuming the conclusion.
The proof then goes on to define $I_{n+1}$ as a closed interval $\subset I_n$ such that $x_{n+1} \notin I_{n+1}$.
Once we have these intervals constructed such that $I_1 \supset I_2 \supset I_3 \supset$ ... we can apply the nested intervals theorem, which tells us that the intersection of these nested sets is non-empty, and get:
$$\exists x \in \mathbb{R} \text{ such that } x \in \bigcap_{n \in \mathbb{N}} I_n $$
Since we have assumed $\mathbb{R}$ is countable, we know that $x = x_m, m \in \mathbb{N}$. So if the intersection of the nested sets is non-empty it must contain a number of the form $x_m$.
But we have constructed our sets such that for any number $x_m$ there is a nested set $I_m$ that doesn't contain it. So the intersection of the nested sets cannot contain any number of the form $x_m$. Which is the contradiction we were after.
It seems to me like the rest of the proof holds even if we require that $a_1 = x_2$ (and by extension $a_n = x_{n+1}$). What am I missing? Does the proof still hold even if $a_1$ is allowed to be $\leq x_2$? If so, why?
AI: I think the point you are missing is that the values $x_1,x_2,\dots$ are not necessarily ordered. Indeed, the rationals $\mathbb Q$ are countable, so there is an infinite sequence $q_1,q_2,\dots$ covering all rationals, but there must be infinitely many rationals between $q_1$ and $q_2$ (which appear later in the sequence).
The proof still works just using $a_1\leq x_2$; given a closed interval $[a,b]$ and a real number $x$ we can always find a smaller closed interval $[a',b']\subseteq [a,b]$ which doesn't contain $x$. Thus we can find a nested sequence of closed intervals, each of which misses the first $n$ numbers in the list, then take the intersection of all these intervals, which is a nonempty closed interval missing all of them, contradicting the assumption that our list covered all the reals.
So what goes wrong if we use rationals rather than reals? We can find $a_n,b_n\in\mathbb Q$ such that $[a_n,b_n]$ misses out $q_1,\dots,q_n$, and these intervals are nested as before. The problem comes when we take the intersection of all the intervals. Say the intersection is $[a,b]$ where $a=\lim a_n, b=\lim b_n$. Now since the limit of a sequence of rationals need not be rational, $a,b$ could be irrational. Also, they could be equal. So it could be that the final interval we get is, say, $[\sqrt 2,\sqrt 2]$. This interval is non-empty, but it doesn't contain any rationals, so doesn't give a contradiction.
|
H: A right triangle has a certain angle twice of another angle in the triangle. Find the maximum number of integer side lengths it has.
A right triangle has a certain angle twice of another angle in the triangle. Find the maximum number of integer side lengths it has.
How I tried working on the problem:
There are $2$ possible sets of triangle angles that satisfy this,
$30, 60, 90$ triangle
$45, 45, 90$ triangle
For $30, 60, 90$ triangle, the ratio is $1:\sqrt{3}:2$.
For $45, 45, 90$ triangle, the ratio is $1:1:\sqrt{2}$.
How should I continue working on this problem?
AI: For (1), the sides would be of the form $k,\sqrt 3k, 2k$ with $k\in \mathbb R$. To keep the first and third sides integers, $k$ must be rational, but then $\sqrt 3 k$ is irrational. On the other hand, if $k$ is irrational, then $\sqrt 3k$ might be an integer, but $k$ and $2k$ are certainly not. Hence, the answer is $2$.
A similar argument suffices for (2).
|
H: Is $(a/b)-1$ approximately equal to $\log_e (a/b)$
I was reading an article where in one of the steps we were trying to calculate the daily return. It said
Return = (a / b) – 1
It then said, this equation can be approximated to:
Return = Log e (a/b)
Could someone explain a proof around how these are equal? Why $\log_e$ (and not $\log$ base of another value)?
AI: This is explained by Taylor's theorem, expanding to first order:
$$\log(1+x)\approx \log(1+x)\Big|_{x=0}+\left(\log(1+x)\right)'\Big|_{x=0}\,x=0+\frac x{1+0}=x.$$
For logarithms in other bases, it suffices to apply the conversion factor. (The natural logarithm is used because no factor is required by the derivative.)
With $x:=\dfrac ab-1$,
$$\log\left(\frac ab\right)\approx \frac ab-1.$$
The closer to $1$ the ratio, the better the approximation.
In fact, you are replacing the curve by its tangent:
|
H: Prove that $\text{tr} (\phi \otimes \psi) = \text {tr} \phi \text {tr} \psi $.
Let $E,F$ be two vector spaces of dimension $n$ and $m$ respectively and let $\phi : E \rightarrow E$,
$\psi : F \rightarrow F $ be two linear transformations. Prove that $\text{tr} (\phi \otimes \psi) = \text {tr} \phi \text {tr} \psi $ and $\det(\phi \otimes \psi) = (\det \phi)^m (\det \psi)^n$.
Looking at one of my old books I found this exercise and I found it interesting because it involves one of the fundamental concepts of linear algebra: trace and determinant.
I have been trying to do the first equality by direct method, however I have not had a result yet. I am clear that $ \text{tr} (AB) = \text{tr} (BA) $, and $ \text{im} (\phi \otimes \psi) = \text{im} \phi \otimes \text{im} \psi $, but could this help me?
I am in need of help with this exercise please. I am not very familiar with the subject.
AI: One approach to compute the trace is as follows:
Suppose we have bases $\{e_1,\dots,e_n\},\{f_1,\dots,f_m\}$ and associated dual bases $\{\alpha_1,\dots,\alpha_n\},\{\beta_1,\dots,\beta_m\}$ respectively. It follows that the sets $\{e_i \otimes f_j\}$ and $\{\alpha_i \otimes \beta_j\}$ form a basis and associated dual basis for $E \otimes F$. It follows that
$$
\begin{align}
\operatorname{tr}(\phi \otimes \psi) &= \sum_{i=1}^n \sum_{j=1}^m (\alpha_i \otimes \beta_j)(\phi \otimes \psi)(e_i \otimes f_j)
\\&=
\sum_{i=1}^n \sum_{j=1}^m (\alpha_i \otimes \beta_j)(\phi(e_i) \otimes \psi(f_j))
\\&= \sum_{i=1}^n \sum_{j=1}^m \alpha_i(\phi(e_i)) \beta_j(\psi(f_j))
\\ & =
\left(\sum_{i=1}^n \alpha_i(\phi(e_i)) \right)
\left(\sum_{j=1}^m \beta_j(\psi(f_j)) \right)
= \operatorname{tr}(\phi)\operatorname{tr}(\psi).
\end{align}
$$
For the determinant, use the fact that for maps $\Phi_1,\Phi_2$ over $E \otimes F$, $\det(\Phi_1 \circ \Phi_2) = \det(\Phi_1) \det(\Phi_2)$. Now, define $\Phi = \phi \otimes \operatorname{id}_F$, so that
$$
\Phi(x \otimes y) = \phi(x) \otimes y.
$$
Similarly, define $\Psi = \operatorname{id}_E \otimes \psi$, so that $\phi \otimes \psi = \Phi \circ \Psi$. It suffices to show that $\det(\Phi) = \det(\phi)^m$ and $\det(\Psi) = \det(\psi)^n$.
To show that $\det(\Phi) = \det(\phi)^m$, note that the spaces $V_i = \{x \otimes f_i : x \in E\}$ are invariant subspaces of $\Phi$. So, we can write $\Phi$ as a direct sum of maps
$$
\Phi = \overbrace{\phi \oplus \cdots \oplus \phi}^m.
$$
It follows that $\det\Phi = \det(\phi)^m$, which was what we wanted. The proof for $\det \Psi$ is similar. The conclusion follows.
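In matrix form $\phi\otimes\psi$ is the Kronecker product, so both identities are easy to sanity-check numerically (not part of the original answer; random matrices with $n=3$, $m=2$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 3))   # matrix of phi, n = 3
B = rng.random((2, 2))   # matrix of psi, m = 2
K = np.kron(A, B)

print(np.trace(K), np.trace(A) * np.trace(B))                        # equal
print(np.linalg.det(K), np.linalg.det(A)**2 * np.linalg.det(B)**3)   # equal
```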
|
H: All norms on a finite dimensional vector space are equivalent
Definition
If $V$ is a finite dimensional vector space then we say that the norms $||\cdot||_1$ and $||\cdot||_2$ are equivalent if and only if there exist two positive constants $m$ and $M$ such that
$$
m||v||_1\le||v||_2\le M||v||_1
$$
for any $v\in V$.
Lemma
If $V$ is a finite dimensional vector space then the function $||\cdot||_\infty:V\rightarrow\Bbb R^+_0$ defined through the condition
$$
||v||_\infty:=\max_{i=1,...,n}|v_i|
$$
is a norm in $V$.
Theorem
If $V$ is a finite dimensional vector space then all norms in $V$ are equivalent.
Clearly if $||\cdot||$ is a norm in $V$ then by triangle inequality
$$
||v||\le M\cdot||v||_\infty
$$
where $M:=\sum_{i=1}^{n}||e_i||$ for any basis $\mathcal{B}:=\{e_1,...,e_n\}$, since $||v||\le\sum_i |v_i|\,||e_i||\le ||v||_\infty \sum_i ||e_i||$.
Unfortunately I can't prove the other inequality. So could someone help me, please?
AI: Method 1
Suppose there is no $D>0$ such that for all $x\in V$, $\|x\|\geq D\|x\|_\infty $. This means that
$$\forall n\in\mathbb N^*, \exists x_n\in V: \|x_n\|\leq \frac{\|x_n\|_\infty }{n}.$$
By considering $y_n=\frac{x_n}{\|x_n\|_\infty }$ in place of $x_n$, we can suppose WLOG that $\|x_n\|_\infty =1$ for all $n$. Therefore, by Bolzano-Weierstrass, there is a subsequence (still denoted $(x_n)$) that converges to some $x$. Since a norm is continuous, you have $$\|x\|=\lim_{n\to \infty }\|x_n\|=0,$$
and thus $x=0$, but on the other hand, $$\|x\|_\infty =\lim_{n\to \infty }\|x_n\|_\infty =1,$$
which is a contradiction.
Method 2
You have that $\|\cdot \|$ is continuous on $$S=\{x\in V\mid \|x\|_\infty =1\},$$ which is compact. Therefore, there are $x_0,x_1\in S$, s.t. for all $x\in S$, $$0<\|x_0\|\leq \|x\|\leq \|x_1\|.$$
Since $$S=\left\{\frac{y}{\|y\|_\infty }\mid y\in V-\{0\}\right\},$$
you get the wished result.
|
H: How to prove that these functions do not intersect?
I want to prove that these two functions $f(x)$ and $g(x)$ do not intersect for $x>1$:
$$f(x)=\cosh \left(\frac{2 \sqrt{2} \pi x \left(x^2-1\right) \cosh (\pi x)}{\sqrt{x^4+6 x^2+\left(x^2-1\right)^2 \cosh (2 \pi x)+1}}\right)$$
$$g(x)=\frac{4 x^2+\left(x^2-1\right)^2 \cosh (2 \pi x)}{\left(x^2+1\right)^2}$$
Both functions are greater than one and strictly increasing. Subtraction and taking derivative do not work, the problem becomes more complicated.
Does anyone have an idea to prove it by assuming false assumption? Or to prove that $f(x)-g(x)$ has no real root?
Any hints or suggestions are really appreciated.
AI: Since you said that both functions are greater than one and strictly increasing, they vary so fast that I should instead consider
$$F(x)=\log(f(x)) \qquad \text{and} \qquad G(x)=\log(g(x))$$ If you plot them, you should notice that, as soon as $x > 3$, $F(x)$ and $G(x)$ look very linear with $F(x) > G(x)$; I agree that this does not prove anything.
Now, consider a Taylor expansion of $F(x)-G(x)$ around $x=1$. This gives
$$H(x)=F(x)-G(x)=(x-1)^2 \left((\pi ^2+1)+\left(\pi ^2-1\right) \cosh (2 \pi )\right)+O\left((x-1)^3\right)$$ So, starting from $0$ at $x=1$, $H(x)$ is an increasing function, which, according to Murphy's principle, will go through a maximum value. This maximum is located very close to $\frac 98$ and its nature is confirmed by the second derivative test.
For sure, now the problem is : what happens for infinitely large values of $x$ ?
For the asymptotic analysis, I shall assume that, for large $t$, $\cosh(t) \sim \frac 12 e^t$. Factoring the exponentials and ignoring the terms which are divided by $e^t$, we end up with
$$f(x) \sim \frac 12 e^{2\pi x} \quad \text{and} \quad g(x) \sim \frac 12 e^{2\pi x} \frac{\left(x^2+1\right)^2}{\left(x^2-1\right)^2}\implies H(x) \sim \frac{4}{x^2}+O\left(\frac{1}{x^6}\right)$$ In other words, when $x$ is large
$$f(x) \sim g(x) \, \exp \left({\frac{4}{x^2}}\right)$$
Computed exactly for $x=10$, the result is $H(10)=0.0400013$ !!
Edit
Looking at the last result, I computed a few values
$$\left(
\begin{array}{cc}
x & x^2\, H(x) \\
2 & 4.0859321 \\
3 & 4.0165826 \\
4 & 4.0052206 \\
5 & 4.0021354 \\
6 & 4.0010293 \\
7 & 4.0005555 \\
8 & 4.0003256 \\
9 & 4.0002032 \\
10 & 4.0001333 \\
11 & 4.0000911 \\
12 & 4.0000643 \\
13 & 4.0000467 \\
14 & 4.0000347 \\
15 & 4.0000263 \\
16 & 4.0000203 \\
17 & 4.0000160 \\
18 & 4.0000127 \\
19 & 4.0000102 \\
20 & 4.0000083
\end{array}
\right)$$
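The table can be reproduced with mpmath (not part of the original answer; arbitrary precision avoids overflow in $\cosh$ for large $x$):

```python
import mpmath as mp
mp.mp.dps = 30   # extra precision, since F and G nearly cancel

def H(x):
    F = mp.log(mp.cosh(2*mp.sqrt(2)*mp.pi*x*(x**2 - 1)*mp.cosh(mp.pi*x)
          / mp.sqrt(x**4 + 6*x**2 + (x**2 - 1)**2*mp.cosh(2*mp.pi*x) + 1)))
    G = mp.log((4*x**2 + (x**2 - 1)**2*mp.cosh(2*mp.pi*x)) / (x**2 + 1)**2)
    return F - G

for x in [2, 5, 10, 20]:
    print(x, x**2 * H(x))   # tends to 4, matching the table
```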
|
H: Periodic functions for the definite integral
Consider this question, where there is this integral:
$$\int_{0}^{4\pi} \ln|13\sin x+3\sqrt3
\cos x|\mathrm dx \tag 1$$
Every periodic function of the form $$a'\sin(x)+b\cos(x)+c=0 \tag 2$$ can easily be written as:
$$A\sin(x+\phi)+c=0, \ A=\sqrt{a'^2+b^2}\quad \text{ or }\quad A\cos(x+\varphi)+c=0\tag 3$$
where $\phi, \varphi=\arctan \ldots$ are angles definited in radians, hence $\in\Bbb R$.
Reading the comments of the user @Sangchul Lee, I think that $|\sin(x)|$ is an even function and $\pi$-periodic,
$$\int_{0}^{4\pi} \ln|13\sin x+3\sqrt3
\cos x|\,\mathrm{d}x=4\int_{0}^{\pi}\ln| A\sin(x+\phi)|\,\mathrm{d}x=4\int_{0}^{\pi}\ln(A| \sin(x+\phi)|)\,\mathrm{d}x$$
Why did $\phi$ vanish? It would be true if $\phi=K\pi$, with $K\in\Bbb Z$, but I do not remember this now.
Considering the comment "Let $f:\mathbb R→\mathbb R$ be $T$-periodic and integrable on any finite interval then
$∫_0^Tf(x)dx=∫_0^Tf(x+a)dx$" when is it useful, for a periodic function,
$$\int_{0}^{T}f(x)\,\mathrm{d}x=\int_{0}^{T}f(x+a)\,\mathrm{d}x=\int_{\color{red}{-a}}^{\color{red}{T-a}}f(x+a)\,\mathrm{d}x$$
and if are there general rules (or what is it happen) for the limits of the integral of a generic periodic function?
$$\int_{\color{blue}{\lambda}}^{\color{blue}{\mu}}f(x+a)\,\mathrm{d}x=\int_{\color{blue}{\lambda}}^{\color{blue}{\mu}}f(x)\,\mathrm{d}x=C\int_{\color{magenta}{\cdots}}^{\color{magenta}{\cdots}}f(x)\,\mathrm{d}x$$
where $C=C(\lambda)$ (upper bound) or $C=C(\mu)$ (lower bound) is a real constant.
AI: We have $13^2+27=14^2$, so
$$ 13\sin(x)+3\sqrt{3}\cos(x) = 14\sin(x+\varphi),\qquad \varphi=\arctan\frac{3\sqrt{3}}{13}\not\in\pi\mathbb{Z} $$
and
$$\begin{eqnarray*}\int_{0}^{4\pi}\log\left|13\sin(x)+3\sqrt{3}\cos(x)\right|\,dx &=& 4\pi\log(14)+\int_{0}^{4\pi}\log\left|\sin x\right|\,dx\\&=&4\pi\log(14)+4\int_{0}^{\pi}\log\sin(x)\,dx\end{eqnarray*}$$
where
$$\begin{eqnarray*} I=\int_{0}^{\pi}\log\sin(x)\,dx &=& 2\int_{0}^{\pi/2}\log\sin(2z)\,dz\\&=&\pi\log(2)+2\int_{0}^{\pi/2}\log\sin(z)\,dz+2\int_{0}^{\pi/2}\log\cos(z)\,dz\\&=&\pi\log(2)+2I\end{eqnarray*}$$
leads to
$$\int_{0}^{4\pi}\log\left|13\sin(x)+3\sqrt{3}\cos(x)\right|\,dx =4\pi\log(14)-4\pi\log(2) = \color{red}{4\pi\log(7)}.$$
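A numerical confirmation (not part of the original answer), telling `quad` where the logarithmic singularities sit:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.log(np.abs(13*np.sin(x) + 3*np.sqrt(3)*np.cos(x)))
phi = np.arctan(3*np.sqrt(3)/13)
sing = [k*np.pi - phi for k in range(1, 5)]   # zeros of 14*sin(x + phi) in (0, 4*pi)
val, _ = quad(f, 0, 4*np.pi, points=sing, limit=200)
print(val, 4*np.pi*np.log(7))                 # both ≈ 24.455
```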
|
H: How to prove that there is an infinite number of primitive pythagorean triples such as $b=a+1$ and $2 | a$?
I need to prove that there are infinitely many primitive Pythagorean triples such that $b=a+1$ and $2 \mid a$, but I don't know how.
I know that $(2st, s^2-t^2, s^2+t^2)$ is a primitive Pythagorean triple, so I tried to say that $ \ 2st+1=s^2-t^2$, but it didn't work. I also tried $ \ s^2-t^2+1 = 2st$, but I don't know how to continue.
Thanks in advance
AI: Only the first version should be considered, because $2\mid a$. In that case $2t^2+1= (s-t)^2$. Replace $s-t$ by $x$ to get a Pell equation $x^2-2t^2=1$, which has infinitely many solutions. This gives you infinitely many pairs $(s, t)$ and infinitely many Pythagorean triples.
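To make this explicit (not part of the original answer), here is a short sketch that walks the Pell solutions of $x^2-2t^2=1$ starting from the fundamental solution $(3,2)$, recovers $s=x+t$, and prints the resulting triples:

```python
# each Pell solution (x, t) gives s = x + t and the triple (2st, s^2 - t^2, s^2 + t^2)
x, t = 3, 2                          # fundamental solution of x^2 - 2 t^2 = 1
for _ in range(4):
    s = x + t
    a, b, c = 2*s*t, s*s - t*t, s*s + t*t
    print(a, b, c, b - a == 1 and a % 2 == 0 and a*a + b*b == c*c)
    x, t = 3*x + 4*t, 2*x + 3*t      # step to the next Pell solution
```

The first triples produced are $(20,21,29)$ and $(696,697,985)$.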
|