H: Does string substitution have a definition, similar to the one for string homomorphism in terms of monoid morphism of the free monoid?
string homomorphism is defined in formal language theory as:
A string homomorphism (often referred to simply as a homomorphism in formal language theory) is a string substitution such that each character is replaced by a single string. That is, f(a)=s , where s is a string, for each character a.
and has a clearer and equivalent algebraic definition:
String homomorphisms are monoid morphisms on the free monoid, preserving the empty string and the binary operation of string concatenation.
The first definition shows that in formal language theory, string homomorphism is defined as a special case of substitution, which is defined as:
Let L be a language, and let Σ be its alphabet. A string substitution or simply a substitution is a mapping f that maps characters in Σ to languages (possibly in a different alphabet).
Does string substitution have a clearer and equivalent definition, similar to the one for string homomorphism in terms of monoid morphism of the free monoid?
If yes, does that equivalent definition of substitution still consider homomorphism as a special case?
Thanks.
AI: Yes. Let $A$ and $B$ be alphabets. Then $A^*$ and $B^*$ are (free) monoids, but ${\cal P}(B^*)$ is also a (non-free) monoid under the usual product of languages. Now, a substitution is a monoid morphism from $A^*$ to ${\cal P}(B^*)$.
Homomorphisms are a special case if you identify $B^*$ as the submonoid of ${\cal P}(B^*)$ consisting of the singletons.
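For a concrete illustration (my own example, not part of the answer): take $A=\{a\}$, $B=\{b\}$ and $f(a)=\{b,bb\}$. Extending $f$ multiplicatively gives a monoid morphism $A^*\to {\cal P}(B^*)$ with $f(\varepsilon)=\{\varepsilon\}$ (the unit of the language monoid) and $f(aa)=f(a)f(a)=\{bb,bbb,bbbb\}$, which is exactly the set of strings obtained by substituting each occurrence of $a$ independently.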
|
H: A comparison between two subadditivity inequalities
I have a simple question about the subadditivity of $\inf$.
It's easy to show that
$\inf_x (f+g)(x) \ge \inf_x f(x) + \inf_y g(y)$.
And for norms we have
$\inf_{x\in X,y\in Y}(\|x\|+\|y\|)\le \inf_{x\in X}\|x\| +\inf_{y\in Y}\|y\|$.
My question is why they look quite similar but the inequalities point in opposite directions.
AI: The second one is actually an equality. The first one would have been an equality if we had $\inf_{x,y} (f(x)+g(y))$ instead of $\inf_x (f+g)(x)$. The main difference is that you have two variables $x$ and $y$, independent of each other, in the second one, whereas there is only one variable in the LHS of the first inequality.
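For a concrete illustration (my own, not part of the original answer): take $f(x)=x$ and $g(x)=1-x$ on $[0,1]$. Then $\inf_x (f+g)(x)=1$, while $\inf_x f(x)+\inf_y g(y)=0+0=0$; decoupling the variables can only lower the infimum, which is why the first inequality can be strict.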
|
H: Plane wave solutions to the Dirac Lagrangian
I was studying Tong's lecture notes and there's a specific mathematical step I do not see how to derive (page 108); specifically, I do not see how to derive $(5.9)$ and $(5.10)$
Let's assume the following equation holds (which is a sum of plane wave solutions for $(-i\gamma^i \partial_i + m)\psi$)
$$(-i\gamma^i \partial_i + m)\psi=\sum_{s=1}^2\int \frac{d^3 k}{(2\pi)^3} \frac{1}{\sqrt{2 \omega_{\vec k}}}\Big[b_{\vec k}^s(-\gamma^i k_i +m) u^s (\vec k) e^{i \vec k \cdot \vec x}+c_{\vec k}^{s \dagger}(\gamma^i k_i +m)v^s (\vec k) e^{-i \vec k \cdot \vec x}\Big] \tag{*}$$
Given the defining equations for the spinors (which we also assume to hold)
$$(\gamma^{\mu} k_{\mu} - m)u^s(\vec k)=
\begin{pmatrix}
-m & k_{\mu}\sigma^{\mu} \\
k_{\mu} \bar \sigma^{\mu} & -m \\
\end{pmatrix} u^s(\vec k)=0 \tag{4.105}$$
$$(\gamma^{\mu} k_{\mu} + m)v^s(\vec k)=
\begin{pmatrix}
m & k_{\mu}\sigma^{\mu} \\
k_{\mu} \bar \sigma^{\mu} & m \\
\end{pmatrix} v^s(\vec k)=0 \tag{4.111}$$
Then Tong asserts that using $(4.105)$ and $(4.111)$ we get
$$(-\gamma^i k_i +m) u^s (\vec k)=\gamma^0 k_0 u^s (\vec k), \ \ \ \ (\gamma^i k_i +m) v^s (\vec k)=-\gamma^0 k_0 v^s (\vec k) \tag{5.9}$$
And then using $(5.9)$ we get
$$(-i\gamma^i \partial_i + m)\psi=\sum_{s=1}^2\int \frac{d^3 k}{(2\pi)^3} \sqrt{\frac{\omega_{\vec k}}{2}} \gamma^0 \Big[b_{\vec k}^s u^s (\vec k) e^{i \vec k \cdot \vec x} - c_{\vec k}^{s \dagger} v^s (\vec k) e^{-i \vec k \cdot \vec x}\Big] \tag{5.10}$$
But I am completely lost in how to get $(5.9)$ and $(5.10)$
Could you please explain how to derive them?
Maybe for $(5.10)$ we could start from the plane wave solutions $\psi(x)$ and $\psi^{\dagger}(x)$
$$\psi(\vec x) = \sum_{s=1}^2 \int \frac{d^3 k}{(2\pi)^3} \frac{1}{\sqrt{2 \omega_{\vec k}}}\Big[b_{\vec k}^s u^s (\vec k) e^{i \vec k \cdot \vec x}+c_{\vec k}^{s \dagger}v^s(\vec k) e^{-i\vec k \cdot \vec x}\Big]$$
$$\psi^{\dagger}(\vec x) = \sum_{s=1}^2 \int \frac{d^3 k}{(2\pi)^3} \frac{1}{\sqrt{2 \omega_{\vec k}}}\Big[b_{\vec k}^{s \dagger} u^{s \dagger} (\vec k) e^{-i \vec k \cdot \vec x}+c_{\vec k}^s v^{s \dagger}(\vec k) e^{i\vec k \cdot \vec x}\Big] \tag{5.4}$$
But no idea how to proceed (Contour integration, Cauchy residue theorem...?)
PS: please let me know if more details need to be included. Thanks.
EDIT
We get (5.9) from $\gamma^\mu k_\mu=\gamma^0k_0+\gamma^ik_i$.
Alright, let's expand out (4.105) and (4.111)
$$(\gamma^0 k_0+\gamma^i k_i - m)u^s(\vec k)=
\begin{pmatrix}
-m & k_0 + k_i\sigma^i \\
k_0 - k_i\sigma^i & -m \\
\end{pmatrix} u^s(\vec k)=0 \tag{4.105}$$
$$(\gamma^0 k_0+\gamma^i k_i + m)v^s(\vec k)=
\begin{pmatrix}
m & k_0 + k_i\sigma^i \\
k_0 - k_i\sigma^i & m \\
\end{pmatrix} v^s(\vec k)=0 \tag{4.111}$$
Where I have used:
$$\sigma^{\mu}=(1,\sigma^i), \ \ \ \ \bar\sigma^{\mu}=(1,-\sigma^i) \tag{4.63}$$
But, unfortunately, I still do not see why this leads to (5.9).
We get (5.10) by substituting the two halves of (5.9) in (*) to remove the $\pm\gamma^ik_i+m$ operators.
By simply plugging (5.9) into (5.10) I get
$$(-i\gamma^i \partial_i + m)\psi=\sum_{s=1}^2\int \frac{d^3 k}{(2\pi)^3} \sqrt{\frac{\omega_{\vec k}}{2}} \gamma^0 k_0\Big[b_{\vec k}^s u^s (\vec k) e^{i \vec k \cdot \vec x} - c_{\vec k}^{s \dagger} v^s (\vec k) e^{-i \vec k \cdot \vec x}\Big]$$
Note I get $k_0$, which shouldn't be there. What am I missing? Thanks.
EDIT 1
$$\gamma^0k_0v^s+\gamma^ik_iv^s=\gamma^\mu k_\mu v^s=-mv^s\implies(\gamma^ik_i+m)v^s=-\gamma^0k_0v^s.$$
AI: We get (5.9) from $\gamma^\mu k_\mu=\gamma^0k_0+\gamma^ik_i$. We get (5.10) by substituting the two halves of (5.9) in (*) to remove the $\pm\gamma^ik_i+m$ operators.
In particular, from (4.105)$$\gamma^0k_0u^s+\gamma^ik_iu^s=\gamma^\mu k_\mu u^s=mu^s\implies(-\gamma^ik_i+m)u^s=\gamma^0k_0u^s.$$The proof from (4.111) of the second half of (5.9) is similar. As for the extra $k_0$ in your edit: on shell $k_0=\omega_{\vec k}$, so the factor $\frac{k_0}{\sqrt{2\omega_{\vec k}}}$ coming from (*) equals $\sqrt{\frac{\omega_{\vec k}}{2}}$, which is exactly the prefactor appearing in (5.10).
|
H: Prove $\left[\ln(n + 1)\right]^p - \left[\ln(n)\right]^p \to 0$ as $n \to \infty$, $p \geq 1$.
Problem: I am trying to prove that for each real number $p \geq 1$, that the sequence $\left(\left[\ln(n + 1)\right]^p - \left[\ln(n)\right]^p \right)_{n \geq 1}$ converges to $0$.
I can do it for $p = 1$ using the continuity of the natural logarithm, but I cannot evaluate the limit for $p > 1$. By plotting some test cases on Desmos, I am pretty sure that the result holds for each $p > 1$ as well.
Is it possible to use the continuity of the map $f: \mathbb{R} \to \mathbb{R}; x \to x^p$ and the limit in the case $p = 1$ to prove the general result? I have tried this, but I have made practically no progress.
The start of a proof:
Suppose $\varepsilon > 0$. Then we need to show that there exists $k \in \mathbb{Z}^+$ such that for each integer $n \geq k$, $\left|\left[\ln(n+1)\right]^p - \left[\ln(n)\right]^p\right| < \varepsilon$. I can't go any further than this at the moment.
AI: Let $p >1$. By L'Hopital's Rule you can see that $\frac {\ln (n+1)} {n^{1/(p-1)}} \to 0$. Now by MVT $[\ln (n+1)]^{p}-[\ln n ]^{p}= px^{p-1} [\ln (n+1)-\ln n]$ for some $x$ between $\ln n$ and $\ln (n+1)$. This gives $[\ln (n+1)]^{p}-[\ln n ]^{p} \leq p(\ln (n+1))^{p-1} \ln (1+\frac 1 n)$. Now use the fact that $\ln (1+\frac 1 n) \leq \frac 1 n$ to finish the proof.
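Putting the pieces together (a finishing step that is not spelled out in the answer above): for $p>1$ and $n\geq 1$,
$$0\le [\ln (n+1)]^{p}-[\ln n ]^{p} \leq \frac{p\,(\ln (n+1))^{p-1}}{n}=p\left(\frac{\ln(n+1)}{n^{1/(p-1)}}\right)^{p-1}\longrightarrow 0.$$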
|
H: Open and connected in normed space implies path-connected
Suppose that $V$ is an open subset of a normed space $X$. Then $V$ is connected iff $V$ is path-connected.
My attempt:
The implication $\Leftarrow$ is a well-known result. My question is about the other implication:
It suffices to show that every $x\in V$ has a path-connected neighborhood. $V$ is open, so for all $x\in V$ there is a neighborhood $W_x$ of $0$ such that $x+W_x\subseteq V$. I believe that all $W_x$'s are path-connected, since we can take different elements $a,b\in W_x$, find a simple path between $a,0$ and between $b,0$ (of the form $t\mapsto ta$), and by linking these two lines together we can find a path between $a$ and $b$. A normed space is a topological vector space and therefore $x+W_x$ is path-connected for all $x\in V$ (addition is a homeomorphism). Can I conclude that $V$ is path-connected?
Is the above reasoning somewhat in the right direction? Can this be approached in a different way? Maybe by working via path-connected components?
Thanks.
AI: Your proof doesn't work if $W_x$ is just some open set containing $x$, because such a set need not even be connected.
Fix $x$ and consider the set of all points $y$ such that there is a path in $V$ from $x$ to $y$. Since $V$ is connected, it is enough to show that this set is both open and closed in $V$. [For then it is empty or equal to $V$; but the set contains $x$, so it cannot be empty.]
If we take a point $y$ in this set, then there is an open ball $B$ centered at $y$ contained in $V$. Let $z \in B$. There is a path from $x$ to $y$, and there is a path from $y$ to $z$ in $V$, namely $t \mapsto tz+(1-t)y$. Hence there is a path in $V$ from $x$ to $z$. We have proved that the ball $B$ is contained in our set, so the set is open.
The fact that this set is closed is proved in a similar way: if there is no path from $x$ to $y$, there cannot be a path from $x$ to any $z$ in a ball around $y$ contained in $V$ either, so the complement is open in $V$.
What is crucial to this proof is the convexity of open balls in a normed linear space.
|
H: Determine a basis in given subspace
Let $V$ be the subspace of $R^4$ consisting of the vectors $ x = (x_1, x_2, x_3, x_4)^T$
whose coordinates satisfy the equation $ x_4 = x_1 + 2x_2 − 3x_3$ . Determine a basis in $ V$
containing the vector $ v = (1, 1, 1, 0)^T$
I figured out by myself that we can find one other vector in this subspace (let's say $w=(0,1,0,2)$). So $w$ and $v$ can be a basis, of dimension $2$. However, the answer is $\dim=3$. What did I do wrong?
AI: Note that $V$ is the kernel of a linear map from $\Bbb R^4$ into $\Bbb R$. Since the range of that map is $\Bbb R$, it follows from the rank-nullity theorem that $\dim V=4-1=3$. So, no, it is not $2$. For instance a basis of $V$ is$$\left\{(1,1,1,0)^T,(0,1,0,2)^T,(0,0,1,-3)^T\right\}.$$
|
H: Closed operator bijective
Definition. Let $X$ and $Y$ be normed spaces. A linear operator $T\colon X\to Y$ is said to be closed if the graph of the operator $T$ is closed in the product topology. That is, $T$ is closed if and only if for each sequence $\{x_n\}\subseteq X$ such that $\{x_n\}$ converges in $X$ and $\{Tx_n\}$ converges in $Y$, it holds that $$\lim_{n\to \infty}T (x_n)=T(\lim_{n\to \infty}x_n).$$
Suppose that $T$ is a bijective closed operator. How can I prove that $T^{-1}$ has a closed graph and therefore is closed?
AI: Let $y_n \to y$ and $T^{-1}y_n \to x$. We can write $y_n$ as $Tx_n$ for some $x_n$. Hence $Tx_n \to y$ and $x_n =T^{-1}y_n \to x$. Since $T$ is closed this implies $y=Tx$ or $x =T^{-1} y$ as required.
|
H: Can we generate any random variable by transforming a uniform?
Let $X$ be a real-valued random variable with cdf $F$.
Question: Can we always find a transformation of a uniform random variable that has the same distribution as $X$? That is, given $U \sim \mathrm{Unif([0,1])}$ can we find a (measurable) function $\varphi: \mathbb R \to \mathbb R$ s.t. $\Pr(\varphi(U) \le x) = F(x)$?
If $F$ is continuous and strictly increasing then we can simply take $\varphi = F^{-1}$ and then
\begin{align*}
\Pr(\varphi(U) \le x) &= \Pr(F^{-1}(U) \le x) \\
&= \Pr(U \le F(x)) \\
&= F(x).
\end{align*}
(Continuity is needed so that $F^{-1}(u)$ is defined for all $u \in ]0,1[$, and $F$ strictly increasing is needed so that $F^{-1}(u) \le x \Leftrightarrow u \le F(x)$ for all $u \in ]0,1[$.)
But what if $F$ is not continuous or strictly increasing?
AI: Then we can define $\phi(u) = \inf\{t\in\mathbb{R}: F(t)\geq u\}.$ This is called the generalized inverse. You can now prove that indeed
$$Pr(\phi(U)\leq x) = F(x).$$
The key step is using that $F$ is right-continuous and (not necessarily strictly) increasing, which implies
$$Pr(\phi(U)\leq x) = Pr(U\leq F(x)).$$
You can try to fill in the details yourself.
See also https://en.wikipedia.org/wiki/Cumulative_distribution_function#Inverse_distribution_function_(quantile_function).
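To see the generalized inverse in action, here is a small numerical sketch (mine, not part of the answer; the example cdf, the bisection bracket and the tolerance are arbitrary choices):

```python
import numpy as np

def generalized_inverse(F, u, lo=-10.0, hi=10.0, tol=1e-9):
    """phi(u) = inf{t : F(t) >= u}, located by bisection on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) >= u:
            hi = mid
        else:
            lo = mid
    return hi

# A cdf that is neither continuous nor strictly increasing:
# an atom of mass 1/2 at 0, then uniform mass 1/2 on [1, 2].
def F(t):
    if t < 0:
        return 0.0
    if t < 1:
        return 0.5
    if t < 2:
        return 0.5 + 0.5 * (t - 1)
    return 1.0

rng = np.random.default_rng(0)
samples = np.array([generalized_inverse(F, u) for u in rng.uniform(size=100_000)])
print((samples <= 1e-6).mean())  # ~0.5, matching the atom F(0) = 0.5
print((samples <= 1.5).mean())   # ~0.75, matching F(1.5) = 0.75
```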
|
H: How do I find the ODE in Sturm-Liouville form when given two functions
I have not had any success in finding the Sturm-Liouville ODE corresponding to the ODE solutions
$\{x^m \cos(nx), x^m \sin(nx)\}$ where $n,m \in \mathbb {R}$.
Thanks in advance!
AI: Consider
$$\left(\frac{y}{x^m}\right)''+n^2 \frac{y}{x^m} = 0.$$
Now, can you put it in Sturm-Liouville form?
Also, I think it makes more sense for $n$ to be an integer.
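In case it helps to check your work (this expansion is mine, not part of the hint): writing $u=y/x^m$, the equation $u''+n^2u=0$ expands to $x^{-m}y''-2mx^{-m-1}y'+\left(m(m+1)x^{-m-2}+n^2x^{-m}\right)y=0$, and multiplying through by $x^{-m}$ puts it in Sturm-Liouville form
$$\left(x^{-2m}y'\right)'+\left(m(m+1)x^{-2m-2}+n^2x^{-2m}\right)y=0.$$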
|
H: Let $G\subset\Bbb R$ be a Borel set, show that the Borel sets of $G$ (as a subspace) are the same as the Borel subsets of $\Bbb R$ included in $G$
Let $B$ be the Borel sigma-algebra over $\Bbb R$ (real numbers).
Let $G\subset \Bbb R$ be a Borel set, and let $A_0$ be the family of all subsets of $G$ which have the form $G\cap O$ for $O$ an open subset of $\Bbb R$.
Let $A_1$ be the sigma algebra over $G$ generated by $A_0$
and $A_2 = \{X\in B\mid X \subset G\}$
How to show that $A_1 = A_2$?
I would be especially interested in the direction $A_2 \subset A_1$
AI: Consider $\{X \in B: X \cap G \in A_1\}$. Verify that this is a sigma algebra. It contains all open sets in $\mathbb R$. Hence it contains all Borel sets. In particular, if $X$ is Borel and contained in $G$ then $X \in A_1$.
|
H: Suppose that $x$ and $y$ are real numbers. Prove that if $x\neq0$, then if $y=\frac{3x^2+2y}{x^2+2}$ then $y=3$.
Not a duplicate of
Prove that if $x \neq 0$, then if $ y = \frac{3x^2+2y}{x^2+2}$ then $y=3$
Prove that for any real numbers $x$ and $y$ if $x \neq 0$, then if $y=\frac{3x^2+2y}{x^2+2}$ then $y=3$.
This is exercise $3.2.10$ from the book How to Prove It by Velleman ($2^{nd}$ edition):
Suppose that $x$ and $y$ are real numbers. Prove that if $x\neq0$, then if $y=\frac{3x^2+2y}{x^2+2}$ then $y=3$.
Here is my proof:
Proof. We will prove the contrapositive. Suppose $y=\frac{3x^2+2y}{x^2+2}$ and $y\neq3$. Suppose $x=0$. Then substituting $x=0$ into $y=\frac{3x^2+2y}{x^2+2}$ we obtain $y-y=0$ which means that $y$ can be any number and in particular $y=3$ which contradicts the assumption that $y\neq 3$. Thus $x\neq 0$. Therefore if $x\neq0$, then if $y=\frac{3x^2+2y}{x^2+2}$ then $y=3$. $Q.E.D.$
Is my proof valid$?$
Edit:
I was reviewing the material today and I noticed a fatal error in the above proof. I am not allowed to assume $y\neq3$ and conclude $y=3$. So the above proof is certainly not valid.
Proof. Suppose $x\neq0$. Suppose $y=\frac{3x^2+2y}{x^2+2}$. Simplifying $y=\frac{3x^2+2y}{x^2+2}$ we obtain $(y-3)x^2=0$. Since $x\neq 0$ and $(y-3)x^2=0$, then $y-3=0$ which is equivalent to $y=3$. Thus if $y=\frac{3x^2+2y}{x^2+2}$ then $y=3$. Therefore if $x\neq0$, then if $y=\frac{3x^2+2y}{x^2+2}$ then $y=3$. $Q.E.D.$
I think this one should be valid.
Thanks for your attention.
AI: You could have said this:
if $y=\dfrac{3x^2+2y}{x^2+2}$, then $y(x^2+2)=3x^2+2y$, so $(y-3)x^2=0$, so $x=0$ or $y=3$.
|
H: Marginal Distributions of given CDF
I have the CDF given by :
$$F(x_1, x_2) = e^{-(-x_1-x_2)^{1/\beta}}$$
with $x_1,x_2 \leq 0$ and $\beta \geq 1 .$
I need to find the marginal distribution functions. However when I try to apply the limit to infinity for any of these two random variables I get something not determined, i.e.
$$F_{X_1}(x_1) = \lim_{x_2 \to \infty} e^{-(-x_1-x_2)^{1/\beta}} "=" e^{-(-\infty)^{1/\beta}} "=" e^{-\infty(-1)^{1/\beta}}$$
which, dependent on $\beta$, is either a complex infinity or a real infinity.
Does somebody see what I am missing here?
Thanks!
AI: $F_{X_1}(x_1)=F_{X_1X_2}(x_1;0)=e^{-(-x_1)^{\frac{1}{\beta}}}\mathbb{1}_{(-\infty;0)}(x_1)+\mathbb{1}_{[0;\infty)}(x_1)$. The point is that $X_2\leq 0$ almost surely, so $F$ is already constant in $x_2$ on $[0;\infty)$ and the limit $x_2 \to \infty$ is attained at $x_2=0$.
and similarly for $X_2$
|
H: Precompact and compact: the difference
The definition of precompactness on a metric space $(X,d)$ is: $\forall r >0\ \exists F\subset X$ finite, s.t. $X= \cup_{x\in F} B_r(x)$.
As the open balls generate the topology of $(X,d)$, why is that condition not sufficient for compactness, and why do we also need completeness?
Further, can one define precompactness in the above sense also on topological spaces?
AI: To see an example why this is not sufficient, note that $(0,1)$ is precompact with this definition. This does not depend on the topology alone, but also on the metric; the homeomorphic space $\mathbb R$ is not precompact as a metric space. Sometimes a metric space with this property is called "totally bounded."
|
H: Derivative of $\frac{d}{d\theta }\left(\left(\theta \:-1\right)\sum\limits_{i=0}^n\:\ln\left(x_i\right)\right)$
I'm doing a maximum likelihood estimation exercise and I got stuck on the derivative of the following part: $\frac{d}{d\theta }\left(\left(\theta \:-1\right)\sum\limits_{i=0}^n\:\ln\left(x_i\right)\right)$
AI: The sum is a constant with respect to $\theta$. So the answer is $$\sum\limits_{i}\ln x_i,$$ since $\frac{d}{d\theta}((\theta-1)c)=c$ if $c$ is a constant.
|
H: Flux of $(x,y,z^2)$ through $\{(x,y,z): 2(x^2+y^2)<z<3\}$
Let $$D=\{(x,y,z): 2(x^2+y^2)<z<3\}$$
and the vector field $$F(x,y,z)=(x,y,z^2)$$
I want to compute the flux of $F$ through $\partial D$.
Applying the divergence theorem, I should compute
$$\int_D2+2z \ dxdydz$$
To evaluate the integral I pass to cylindrical coordinates but I have some problems at this point. We have $$2r^2<z<3,$$ $$0<r<\sqrt{3/2}$$ so then it becomes
$$\int_D2z \ dxdydz=4\pi\int_0^{\sqrt{3/2}}r \ dr\int_{2r^2}^3 \ z \ dz=4\pi\left(\frac{27}{8}-\frac{9}{8}\right)=9\pi$$$$+$$$$\int_D2\ dxdydz=2\pi\int_0^{\sqrt{3/2}}(6r-4r^3)\ dr=\frac{9\pi}{2}$$
But this does not look good at all. So maybe I'm doing something wrong?
If I try computing the flux directly I first have to compute the boundary, which I am not sure about
$$\partial D=\{(x,y,z):x^2+y^2=\frac{z}{2},\ z<3\} \cup \{(x,y,z):x^2+y^2\leq \frac{3}{2}, \ z=3\}=B_1 \cup B_2$$
I can parametrize $B_1$ as $$\phi:(u,v)\mapsto (u,v,2(u^2+v^2))$$ so that $$\phi_u \land \phi_v= (-4u,-4v,1)$$ and gives, for $B_1,$ the integral
$$\int_{2u^2+2v^2<3} \langle F(\phi(u,v)),\phi_u \land \phi_v \rangle\, dudv= \int_{2u^2+2v^2<3}-4(u^2+v^2)+4(u^2+v^2)^2 \ dudv$$ which is very ugly.
AI: By using the divergence theorem, we have to evaluate
$$\begin{align}2\pi\int_{r=0}^{\sqrt{3/2}}\int_{z=2r^2}^3(2+2z)r\,dz dr&=2\pi\int_{r=0}^{\sqrt{3/2}}(15r-4r^3-4r^5) dr\\&=
\left[\frac{15r^2}{2}-r^4-\frac{2r^6}{3}\right]_{r=0}^{\sqrt{3/2}}=\frac{27\pi}{2}.
\end{align}$$
As regards the direct computation, the outward flux through $B_1$ is
$$\begin{align}\int_{2u^2+2v^2<3}4(u^2+v^2)-4(u^2+v^2)^2 \ dudv&=2\pi\int_{0}^{\sqrt{3/2}}(4r^2-4r^4)r\,dr\\
&=2\pi\left[r^4-\frac{2r^6}{3}\right]_{r=0}^{\sqrt{3/2}}=0
\end{align}$$
whereas the outward flux through $B_2$ is
$$\int_{0}^{2\pi}\!\!\int_{0}^{\sqrt{3/2}}z^2\big|_{z=3}\; r\,dr\,d\theta=9\cdot \frac{3\pi}{2}=\frac{27\pi}{2}.$$
Their sum is again $\frac{27\pi}{2}$.
|
H: Proving the isomorphism $A \otimes B \cong B\otimes A$ of the tensor products of abelian groups $A,B$ given the definition by the quotient groups.
For two Abelian groups $A$ and $B$ we define their tensor product
$A\otimes B$ as the quotient of the free Abelian group on the set of
formal generators $\{a \otimes b \mid a \in A; b \in B\}$ by the
subgroup generated by elements of the form $$a_1 \otimes b + a_2
\otimes b - (a_1 + a_2) \otimes b$$ and $$a\otimes b_1 +a\otimes b_2
-a\otimes(b_1 +b_2).$$ By abuse of notation we write $a\otimes b$ for
the corresponding element in the quotient $A \otimes B.$
I'd like to prove that $A\otimes B \cong B\otimes A$. Now my first thought was using the map $$a\otimes b \mapsto b \otimes a$$
Now it's obvious this is compatible with the relations. But I don't know how the quotients $A\otimes B$ and $B\otimes A$ become isomorphic. According to the solution, "it descends to the quotients", but I don't know how. Could someone please elaborate on how precisely we get the isomorphism?
AI: Let $A\widetilde{\otimes} B$ denote the free abelian group on symbols $a\otimes b$. We have, by the universal property of free abelian groups, a group homomorphism $\tilde{f}:A\widetilde{\otimes} B \to B\widetilde{\otimes} A$, given on generators by $a\otimes b\mapsto b\otimes a$.
Let $I$ be the subgroup of $A\widetilde{\otimes} B$ generated by the expressions $a_1 \otimes b + a_2
\otimes b - (a_1 + a_2) \otimes b$ and $a\otimes b_1 +a\otimes b_2
-a\otimes(b_1 +b_2)$, with $a,a_i\in A$ and $b,b_i\in B$. Define the subgroup $J\subset B\widetilde{\otimes} A$ in a similar way.
As you noted, if $x\in I$ then $\tilde{f}(x)\in J$, so $\tilde{f}$ induces a homomorphism $$f:A\otimes B = A\widetilde{\otimes}B/I\to B\widetilde{\otimes}A/J= B\otimes A,$$
given by $f(x+I)=\tilde{f}(x)+J$.
In a similar way, you have a homomorphism $g$ in the other direction, given by $g(y+J)=\tilde{g}(y)+I$.
It remains to check that $f$ and $g$ are inverses of each other. That's clear, as on generators $$(g\circ f)(a\otimes b +I)=g(b\otimes a +J)=a\otimes b +I.$$
The same for $f\circ g$. This completes the proof.
|
H: Showing a prime is inert
This is a part of the book "Generic Polynomials:
Constructive Aspects of the Inverse Galois Problem" which I don't understand. This is also pretty much the counterexample given in: https://www.jstor.org/stable/1969410?seq=1.
Let $L_2$ be the unramified extension of $\mathbb{Q}_2$ of degree $8$ (that is obtained by adding a $2^8-1$ primitive root of unity). Let $L$ be an extension of $\mathbb{Q}$ also with Galois group $\mathbb{Z}/8\mathbb{Z}$. Let $\mathbb{Q}(\sqrt{D})$ be the quadratic subextension of $L$. Suppose $L_2$ is the compositum of $L$ and $\mathbb{Q}_2$.
I can't figure out why:
$2$ remains inert in $L/\mathbb{Q}$.
$2$ remains inert implies $D\equiv 5\mod 8$.
And finally,
$p|D$ implies the completion $L_p/\mathbb{Q}_p$ is again a $C_8$ extension.
Because of how it's worded, I think $L_2$ being the compositum of $L$ and $\mathbb{Q}_2$ must be why $2$ remains inert. Also maybe, use Kronecker-Weber and try to get some relation between the roots of unity? I would appreciate any hints or references.
AI: Here are some hints:
Let $\mathfrak p$ be any prime of $L$ above $2$. Then $L_2$ is the completion of $L$ at $\mathfrak p$: it is the smallest complete field that contains both $L$ and $\mathbb Q_2$. In particular, $\mathrm{Gal}(L_2/\mathbb Q_2)$ is isomorphic to the decomposition subgroup of $\mathfrak p$ in $\mathrm{Gal}(L/\mathbb Q)$.
Since $2$ is inert in $L$, it is inert in $\mathbb Q(\sqrt{D})$. Use the Kummer-Dedekind theorem to deduce that $D\equiv 5\pmod 8$.
If $p\mid D$, then $p$ is ramified in $\mathbb Q(\sqrt D)$. Let $\mathfrak p$ be a prime of $L$ above $p$. Then the inertia subgroup of $\mathfrak p$ is a subgroup $I$ of $\mathrm{Gal}(L/\mathbb Q)$ such that $L^I/\mathbb Q$ is unramified at $p$.
|
H: The "symmetric" property of Day convolution.
This question has to be divided into the following parts:
The definition of Day convolution in nlab
To define Day convolution, it assumes that $V$ be a closed symmetric monoidal category with all small limits and colimits, and $C$ be a monoidal category.
see https://ncatlab.org/nlab/show/Day+convolution#definition
Notice that nlab doesn't say that $C$ must be symmetric.
Day convolution form a monoidal category in nlab
see https://ncatlab.org/nlab/show/Day+convolution#DayConvolutionYieldsMonoidalCategoryStructure
That means, if there is a tensor unit $y(I)$, then the category $([C,V], \otimes_{Day}, y(I))$ forms a monoidal category automatically.
Notice that nlab doesn't say that $C$ must be symmetric.
The definition of Day convolution in wikipedia
To define Day convolution, it assumes that $C$ is a symmetric monoidal category. (Of course, $V$ must be a monoidal category, because of the enrichment.)
see https://en.wikipedia.org/wiki/Day_convolution
Notice that wikipedia doesn't say that $V$ must be symmetric.
Day convolution form a monoidal category in wikipedia
It says that
If the category $V$ is a symmetric monoidal closed category, we can show this defines an associative monoidal product.
see https://en.wikipedia.org/wiki/Day_convolution
Since a monoidal category must satisfy the associative law, that means that if we expect the category $([C,V], \otimes_{Day}, y(I))$ to form a monoidal category, then $V$ must be symmetric, i.e. $C$ and $V$ must both be symmetric monoidal categories.
It also provides a proof of this associative law, in which it seems that the symmetry/commutativity laws are used.
My questions are:
Why are the definitions of Day convolution in nlab and wikipedia different?
I mean: to define Day convolution, why does nlab require $V$ to be a symmetric monoidal category while wikipedia doesn't, and why does wikipedia require $C$ to be symmetric while nlab doesn't?
Why are the conditions under which "Day convolution forms a monoidal category" different in nlab and wikipedia?
I mean: to form a monoidal category under Day convolution, why does wikipedia require both $C$ and $V$ to be symmetric, but nlab doesn't require this condition?
Why does Day convolution need some sort of "symmetric" property?
I don't see any symmetry intuition in the Day convolution formula:
$F*G = \int^{x,y \in C} C(x \otimes y, -) \otimes Fx \otimes Gy$
PS: I apologize if the question is silly; I'm a category theory beginner, but this definition confuses me...
Many thanks.
AI: The description on the nLab is correct: $\mathscr C$ does not need to be symmetric, but $\mathscr V$ does. If $\mathscr C$ is symmetric, then the Day convolution tensor product on $[\mathscr C, \mathscr V]$ will also be symmetric. Wikipedia actually does require $\mathscr V$ to be symmetric, but delays stating this to establish why symmetry is important: it's necessary for the induced tensor product to be associative (and hence be monoidal). This matches Day's original setting.
As of the time of writing, Wikipedia does state that $\mathscr C$ should be symmetric, but this is unnecessary. Anyone can edit Wikipedia, so this could easily be addressed.
|
H: Folland Real Analysis Exercise 5.15 - A unique linear tranformation $S$ such that $T = S \circ \pi$ where $\pi$ is the projection onto $X/N(T)$
Consider the following problem, Exercise 15 of chapter 5 of Folland's Real Analysis, 2nd edition:
Suppose that $X$ and $Y$ are normed vector spaces and $T \in L(X, Y)$. Let $N(T) = \{x \in X \ : \ Tx = 0\}$.
a. Show that $N(T)$ is a closed subspace.
b. Show that there exists a unique $S \in L(X/N(T), Y)$ such that $T = S \circ \pi$ where $\pi: X \to X/N(T)$ is the canonical projection, and that $||S|| = ||T||$.
Part a. is clear. My question regards item b. I tried defining $S(x + N(T)) = T(x)$. Is it really that trivial? I think I am missing something. Regarding the equality of norms, I was able to show that $||S|| \geq ||T||$:
$$
\sup_{||x + N(T)|| \leq 1} ||S(x + N(T))|| = \sup_{||x + N(T)|| \leq 1} ||T(x)|| = \sup_{\inf_{y \in N(T)} ||x - y|| \leq 1} ||T(x)|| \geq||T||,
$$
but I am stuck with the other inequality.
Any help will be the most appreciated.
Thanks in advance and kind regards.
AI: I am stuck with the other inequality.
Given $t\in N(T)$ and $x\in X$, we have
$$\|S(\pi(x))\|=\|T(x)\|=\|T(x+t)\|\leq \|T\|\|x+t\|$$
and thus
$$\|S(\pi(x))\|\leq \inf_{t\in N(T)} \|T\|\|x+t\|=\|T\|\|\pi(x)\|,\quad\forall\ x\in X.$$
Therefore,
$$\|S\|=\sup_{\substack{x\in X\\ \pi(x)\neq 0}}\frac{\|S(\pi(x))\|}{\|\pi(x)\|}\leq \|T\|$$
Edit in response to the comment.
Existence. Define $S:X/N(T)\to Y$ by $S(\pi(x))=T(x)$. Show that $S$ is well-defined. It is clear that $T=S\circ \pi$.
Uniqueness. Suppose that $\tilde{S}:X/N(T)\to Y$ satisfies $T=\tilde{S}\circ \pi$. Then
$$\tilde{S}(z)=\tilde{S}(\pi(x))=T(x)=S(\pi(x))=S(z),\quad\forall \ z=\pi(x)\in X/N(T)$$
and thus $\tilde{S}=S$.
|
H: Prove that a function is distance preserving.
Let $V$ be a real inner product space and $\{x_{n}\mid n\in\mathbb{N}\}$ an orthonormal basis.
I want to show that $\phi : V \rightarrow l_2$, which takes each $v\in V$ to $\left(\left\langle v,x_{n}\right\rangle\right) _{n=1}^{\infty}\in l_{2}$, is distance preserving.
$l_{2}=\{(x_{n})_{n=1}^{\infty}\mid\sum_{n=1}^{\infty}\left|x_{n}\right|^{2}<\infty\}$
I tried using some identities and polarization but I'm stuck.
I know that:
$\left\langle u,v\right\rangle \underset{\text{polarization}}{=}\frac{1}{2}(\left\Vert u+v\right\Vert ^{2}-\left\Vert u\right\Vert ^{2}-\left\Vert v\right\Vert ^{2})\underset{\text{parallelogram law}}{=}\frac{1}{2}(2(\left\Vert u\right\Vert ^{2}+\left\Vert v\right\Vert ^{2})-\left\Vert u-v\right\Vert ^{2}-\left\Vert u\right\Vert ^{2}-\left\Vert v\right\Vert ^{2})$
I need to show that $d_{V}(u,v)=d_{l_{2}}(\phi(u),\phi(v))\;$ for all $u,v \in V$
AI: Let $v\in V$. Since $\{x_n\}$ is an orthonormal basis, write $v=\sum v_nx_n$; we have $\langle v,x_n\rangle=v_n$. Let $w=\sum w_nx_n$.
Then $d(v,w)^2=\|v-w\|^2=\sum |v_n-w_n|^2$.
We have
$\phi(v)=(v_1,\dots,v_n,\dots)$ and $\phi(w)=(w_1,\dots,w_n,\dots)$.
This implies that $\|\phi(v)-\phi(w)\|_{l_2}^2=\sum|v_n-w_n|^2$, so $d_{l_2}(\phi(v),\phi(w))=d_V(v,w)$.
|
H: Kernel of complex tori morphism, is this elementary assertion true?
Let $\Lambda_1, \Lambda_2$ be lattices of $\mathbb{C}$ and $T:\mathbb{C}\rightarrow\mathbb{C}$ be a $\mathbb{C}$-linear map such that $T(\Lambda_1) \subset \Lambda_2$. This induces complex tori morphism $\varphi:\mathbb{C}/\Lambda_1 \rightarrow \mathbb{C}/\Lambda_2$.
A book I'm reading right now asserts that $\operatorname{Ker}(\varphi) \cong \Lambda_2/T(\Lambda_1)$.
Is this true? This seemed elementary so I glossed over it initially but now I'm having trouble showing this.
To me, $\operatorname{Ker}(\varphi)$ are the elements of $\mathbb{C}$ sent to $\Lambda_2$ under $T$, elements which are then considered mod $\Lambda_1$. Therefore, we have $\operatorname{Ker}(\varphi) \cong T^{-1}(\Lambda_2)/\Lambda_1$, with a well-defined quotient since $T$ sends $\Lambda_1$ into $\Lambda_2$.
Then I would be tempted to apply the map induced by $T$ to get $T^{-1}(\Lambda_2)/\Lambda_1 \xrightarrow{T} \Lambda_2/T(\Lambda_1)$.
But is this an isomorphism? I have doubts about this; it doesn't seem to be injective or surjective at first sight. For instance, nothing tells me that if $x \in T^{-1}(\Lambda_2) \setminus\Lambda_1$, it won't land under $T$ in $T(\Lambda_1)$ (unless $T^{-1}(T(\Lambda_1))=\Lambda_1$, but it is not clear to me why this would be so), which would contradict injectivity... I only know it lands in $\Lambda_2$, but $\Lambda_2$ contains $T(\Lambda_1)$, so this is not obvious...
AI: Nicely framed question. Indeed
$$
T^{-1}(\Lambda_2)/\Lambda_1 \xrightarrow{T} \Lambda_2/T(\Lambda_1)
$$
is an isomorphism. It's surjective because the original map $T$ was surjective to $\mathbb{C}$ so everything in $\Lambda_2$ has a pre-image, which by definition lives in $T^{-1}(\Lambda_2)$. It's injective because the original map $T$ was injective, so if $T(x) \in T(\Lambda_1)$ then $x \in \Lambda_1$. (Having $T(x) \in T(\Lambda_1)$ means there exists $y$ such that $y \in \Lambda_1$ and $T(y) = T(x)$, but now we can conclude $x=y$ by injectivity of the original map $T$.)
|
H: How to Prove $\int_{0}^{\infty}\frac {1}{x^8+x^4+1}dx=\frac{π}{2\sqrt{3}}$
Question:- Prove that
$\int_{0}^{\infty}\frac {1}{x^8+x^4+1}dx=\frac{π}{2\sqrt{3}}$
On factoring the denominator we get,
$\int_{0}^{\infty}\frac {1}{(x^4+x^2+1)(x^4-x^2+1)}dx$
The partial fraction decomposition of the integrand contains big terms with long integrals, so I didn't proceed with partial fractions. I'm unable to figure out any other method. I think that there might be some other method for evaluating this definite integral since its value is
$\frac{π}{2\sqrt{3}}$.
Does anyone have a nice way to solve it?
AI: Since your function is even, your integral is$$\require{cancel}\frac12\int_{-\infty}^\infty\frac{\mathrm dx}{x^8+x^4+1}.$$On the other hand, $x^8+x^4+1=\dfrac{x^{12}-1}{x^4-1}$ and therefore the roots of $x^8+x^4+1$ are the roots of order $12$ of $1$ which are not fourth roots of $1$. Among these, those with imaginary part greater than $0$ are $e^{\pi i/6}$, $e^{\pi i/3}$, $e^{2\pi i/3}$ and $e^{5\pi i/6}$. The residue of $\dfrac1{z^8+z^4+1}$ at these points is, respectively, $-\dfrac1{4\sqrt3}$, $-\dfrac i{4\sqrt3}$, $-\dfrac i{4\sqrt3}$, and $\dfrac1{4\sqrt3}$. Therefore, if $R>1$, then, by the residue theorem\begin{multline}\int_{-R}^R\frac{\mathrm dx}{x^8+x^4+1}+\int_{|z|=R,\ \operatorname{Im}z\geqslant0}\frac{\mathrm dz}{z^8+z^4+1}=\\=2\pi i\left(\cancel{-\frac1{4\sqrt3}}-\frac i{4\sqrt3}-\frac i{4\sqrt3}+\cancel{\frac1{4\sqrt3}}\right)=\frac\pi{\sqrt3}\end{multline}and so, since$$\lim_{R\to\infty}\int_{|z|=R,\ \operatorname{Im}z\geqslant0}\frac{\mathrm dz}{z^8+z^4+1}=0,$$we have\begin{align}\int_{-\infty}^\infty\frac{\mathrm dx}{x^8+x^4+1}&=\lim_{R\to\infty}\int_{-R}^R\frac{\mathrm dx}{x^8+x^4+1}\\&=\frac\pi{\sqrt3}.\end{align}
|
H: Examples of Continuous Probability Distributions with Slowly Varying Upper Tail and Infinite Expectation
$L(x) = 1 - F(x)$ is the slowly varying upper tail, where $F(x)$ is a continuous probability distribution function with infinite expectation, that is, $\int_{0}^{\infty} u \,dF(u) = \infty$. I am unable to construct any such probability distribution and would like some examples with the specified properties.
Definition: A function $L(x)$ is said to be slowly varying at $\infty$ if $\lim_{x \to \infty} \frac{L(\alpha x)}{L(x)} = 1, \ \forall \alpha >0$, where $\alpha$ is a real number.
AI: Consider the cdf $F(x) = 1-\frac{1}{\log(x)}$ for $x\geq e$. Then $L(x) = \frac{1}{\log(x)}$ is slowly varying at infinity and if $X$ is a random variable with cdf $F$ then
$$\mathbb{E}[X] = \int_0^\infty \mathbb{P}(X>x) \, dx = \int_0^e 1 \, dx + \int_e^\infty \frac{dx}{\log(x)} = \infty.$$
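For completeness (a check not spelled out above): $L$ is indeed slowly varying at $\infty$, since for any $\alpha>0$,
$$\frac{L(\alpha x)}{L(x)}=\frac{\log(x)}{\log(\alpha x)}=\frac{\log(x)}{\log(\alpha)+\log(x)} \to 1 \quad (x\to\infty).$$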
|
H: Combinatorics problem about weight of balls. (POSN Camp $2$)
If Jack has $25$ white balls and $63$ black balls, where every black ball weighs less than $26$ grams and every white ball weighs less than $64$ grams (all weights are integers and some balls may have the same weight), prove that Jack can select some white balls and some black balls such that the total weight of the selected white balls equals the total weight of the selected black balls. (The numbers of white and black balls selected need not be equal.)
This is from Thailand POSN Camp $2$ , $19$ June $2020$.
AI: This is a common Olympiad problem setup, skinned in a myriad of ways.
I'm slightly surprised that "almost all participants can't prove it", since there's a decent chance that some of them have seen a version of it before (example below).
Hint: Pigeonhole principle.
Hint: Deal with the general case, then set $ n = 25, m = 63$.
We have positive integers $ 1 \leq w_i \leq n$ for $i = 1$ to $m$, and $1 \leq b_j \leq m$ for $j = 1$ to $n$.
WTS: $\sum_{i \in I} w_i = \sum_{j \in J} b_j$ for some indexing sets $I$ and $J$.
Hint: It is sufficient for the indexing set to be an interval (taking consecutive integers).
Let $W_i$, for $i=1$ to $m$, be the sum of the first $i$ of the $w$'s.
Let $B_j$, for $j = 0$ to $n$, be the sum of the first $j$ of the $b$'s (with $B_0=0$).
Hint: Show that for some suitably defined function $j(i)$, we have $ 0\leq W_i - B_{j(i)} \leq m-1$ (recall each $b_j \leq m$).
These differences are our pigeons and the value of the differences is our holes. Then the result follows by pigeonhole principle as, either
all of those differences are distinct and so one of them is equal to $0$, which gives subsets of the same sum, or
two of those differences are the same, so taking the difference of sets yields subsets with the same sum.
Essentially the solution: (The obvious choice of definition is) $j(i)$ is the largest index such that $B_j \leq W_i$, allowing for $j=0$ as needed. (If some $W_i$ exceeds $B_n$, first swap the roles of the two sequences, WLOG.) A sketch of the construction follows.
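To make the construction concrete, here is a small Python sketch of the argument (entirely my own; the swap at the top arranges WLOG that the $w$-side total does not exceed the $b$-side total, so $j(i)$ behaves at the top end):

```python
import random

def equal_sum_intervals(w, b):
    """Find nonempty index intervals with equal sums, following the
    prefix-sum + pigeonhole argument. Assumes 1 <= w_i <= len(b) and
    1 <= b_j <= len(w). Returns ((i0, i1), (j0, j1)) with
    sum(w[i0:i1]) == sum(b[j0:j1]) (Python half-open slices)."""
    if sum(w) > sum(b):                    # WLOG: smaller total on the w side
        b_iv, w_iv = equal_sum_intervals(b, w)
        return w_iv, b_iv
    W, B = [0], [0]                        # prefix sums W_0..W_m and B_0..B_n
    for x in w:
        W.append(W[-1] + x)
    for x in b:
        B.append(B[-1] + x)
    seen = {}                              # difference value -> (i, j)
    j = 0
    for i in range(1, len(W)):
        while j + 1 < len(B) and B[j + 1] <= W[i]:
            j += 1                         # j = largest index with B_j <= W_i
        d = W[i] - B[j]                    # the pigeon: 0 <= d <= len(w) - 1
        if d == 0:
            return (0, i), (0, j)
        if d in seen:
            i0, j0 = seen[d]
            return (i0, i), (j0, j)
        seen[d] = (i, j)
    raise AssertionError("unreachable by the pigeonhole principle")

# the problem's instance: 63 black balls (< 26 g), 25 white balls (< 64 g)
random.seed(1)
black = [random.randint(1, 25) for _ in range(63)]
white = [random.randint(1, 63) for _ in range(25)]
(i0, i1), (j0, j1) = equal_sum_intervals(black, white)
assert sum(black[i0:i1]) == sum(white[j0:j1]) and i1 > i0 and j1 > j0
```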
Notes:
The case of $n = m$ is also pretty common. EG I posted an answer here.
Another skin of this problem is Putnam 1993, which is where I first came across this setup:
Let $x_1, \ldots , x_{19}$ be positive integers less than or equal to 93. Let $y_1, \ldots , y_{93}$ be positive integers less than or equal to 19. Prove that there exists a (nonempty) sum of some $x_i$’s equal to a sum of some $y_i$’s.
We are applying the 4th form of Pigeonhole Principle, namely
If there are $ n > \sum_{i=1}^k a_i$ pigeons and $k$ holes, then there is some hole $i$ with at least $a_i + 1 $ pigeons.
In this case, we have holes of value $0, 1, 2, \ldots, m-1$, with corresponding sizes $a_1 = 0$, $a_2=a_3=\cdots=a_m = 1$, and $ \sum a_i = m-1 < m$ pigeons.
See this generalization from 1977 Soviet where we remove the restriction on individual values, and just bound the total by $< mn$.
|
H: Confused about a step in a proof; For a closed set, $F$, and a point $x\in F$, there's an open interval around $x$ and contained in $F$?
I've underlined the step in the proof that isn't clear to me.
I know what I said in the title is wrong; but I'm not really sure what motivates the step that I underlined.
This is taken from Royden, 4th edition, page 66.
Any thoughts would be greatly appreciated.
Thanks in advance.
Edit: To be more clear, I'm confused about how we get an open interval fully contained in $F_i$ which also contains $x$. Why must $F_i$ fully contain an open interval that also contains $x$?
AI: To say that $g$ is continuous on $F$ means that for any $x \in F$ and for any $\epsilon>0$ there is some $\delta >0$ such that if $|y-x|<\delta$ and $y \in F $ then $|g(y)-g(x)| < \epsilon$.
In the above case, we have $x \in F_i$, and since the $F_k$ are
disjoint and closed, the set $\cup_{k\ne i} F_k$ is closed and
disjoint from $F_i$. Hence $x \notin \cup_{k\ne i} F_k$ and so
there is some interval $(a,b)$ containing $x$ such that $(a,b) \cap (\cup_{k\ne i} F_k) = \emptyset$.
Hence if $y \in F \cap (a,b)$ we must have $y \in F_i$ and hence
$g(y) = g(x)$.
So, given $x \in F$ and $\epsilon>0$, we can find some $i$ and an interval $(a,b)$ as above such that $x \in (a,b)$ and $(a,b) \cap F \subset F_i$ (in particular we have $g(x) = a_i$).
Now choose $\delta>0$ such that $B(x,\delta) \subset (a,b)$, then if $|y-x| < \delta$ and $y \in F$ we have $y \in (a,b) \cap F_i$ and so $g(y) = a_i$ (and so $|g(y)-g(x)| < \epsilon$, of course).
|
H: Solving $Ax=0$ for non-negative $x$
This seems like a fairly elementary problem but I have not been able to find a suitable way to solve it. Any advice would be greatly appreciated.
The problem is simple. I have a large matrix and would like to find a vector in its nullspace such that the components are greater than or equal to zero, i.e., solve:
$$\begin{align} A\mathbf{x} &= 0 \\ \text{ subject to: } x_i &\geq 0 \end{align}$$
It is possible to find a basis for the nullspace of $A$ numerically, but this just leads to a system of linear inequalities which I am unsure how to solve:
$$ B\mathbf{c} \geq 0 $$
where the columns of $B$ are the nullspace basis vectors. If I can solve this for $\mathbf{c}$ then the solution to the previous problem is just $\mathbf{x} = B \mathbf{c}$
AI: You can solve this as a linear programming problem, e.g.:
maximize $\sum_i x_i$
subject to $Ax = 0$
all $0 \le x_i \le 1$
The upper bounds are to make the problem bounded.
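A minimal sketch of this LP using scipy (my own; the matrix here is a made-up example with a known nonnegative null vector $(1,1,2)$):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, -1.0],
              [2.0, -4.0, 1.0]])

n = A.shape[1]
# maximize sum(x)  <=>  minimize -sum(x), subject to A x = 0, 0 <= x_i <= 1
res = linprog(c=-np.ones(n),
              A_eq=A, b_eq=np.zeros(A.shape[0]),
              bounds=[(0, 1)] * n,
              method="highs")

x = res.x
print(x, A @ x)  # a nonnegative null vector; x = 0 means only the trivial one exists
```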
|
H: Proof that pullback is smooth by considering charts only
I'm reading up about pullbacks and pushforwards. If $\psi:M\to M'$ is smooth and $f:M'\to\mathbb{R}$ is a smooth function, then we define the pullback as $\psi^*f=f\circ\psi$. By definition of a smooth map between manifolds then, the pullback of $f$ is also a smooth function on $M$.
Even though this is clear, I'm still trying to understand this in terms of charts. Let $M,M'$ be $n,n'$ dimensional manifolds respectively. In terms of charts, the definition of a smooth map $\psi:M\to M'$ is one for which $\phi'\circ\psi\circ\phi^{-1}:\mathbb{R}^n\to\mathbb{R}^{n'}$ is smooth, for homeomorphisms $\phi,\phi'$ corresponding to arbitrary charts in $M,M'$ respectively.
Also, a smooth function $f:M'\to\mathbb{R}$ can be defined as one for which $f\circ\phi'^{-1}:\mathbb{R}^{n'}\to\mathbb{R}$ is smooth for all homeomorphisms corresponding to charts covering $M'$.
With this, I can rephrase the conditions in bold at the top. If I have a map $\psi:M\to M'$ such that $\phi'\circ\psi\circ\phi^{-1}$ is smooth for all $\phi,\phi'$ and if there exists some $f:M'\to\mathbb{R}$ such that $f\circ\phi'^{-1}$ is smooth for all $\phi'$, then to show that the pullback is also smooth (in terms of charts), I have to show that $(f\circ\psi)\circ\phi^{-1}$ is smooth for all chart homeomorphisms $\phi$ on $M$.
Is this alternative statement of the problem correct? And how do I prove this? i.e. how to show that smoothness of $\phi'\circ\psi\circ\phi^{-1}$ and smoothness of $f\circ\phi'^{-1}$ for all $\phi,\phi'$ implies smoothness of $f\circ\psi\circ\phi^{-1}$ for all $\phi$?
AI: Note that, restricted to the appropriate domain, we have $f\circ\psi\circ\phi^{-1} = (f\circ\phi'^{-1})\circ(\phi'\circ\psi\circ\phi^{-1})$ for any $\phi'$. As the two parenthetical terms are smooth, so is their composition. Smoothness is a local condition, and the above holds for every $\phi'$, so it follows that $f\circ\psi\circ\phi^{-1}$ is smooth, and hence $\psi^*f = f\circ\psi$ is a smooth function on $M$.
|
H: Determine the dimension of the subspace
The vectors $a, b, c, d $ form a basis of $ R^4$. Determine the dimension of the
subspace generated by the vectors $a + b, c + d, a + c, b + d$.
I think the combined vectors ($a+b$, $c+d$, $a+c$, $b+d$) must somehow be dependent; however, I could not prove it.
AI: Generally a good way to tackle these problems is to consider the relation:
$$r_1(a+b)+r_2(c+d)+r_3(a+c)+r_4(b+d)=0$$
and see if you can show all the $r_i$'s to be zero. If you can't, you'll be able to identify one or more of those vectors as linear combinations of the others. In this case, the above equation gives:
$$a(r_1+r_3)+b(r_1+r_4)+c(r_2+r_3)+d(r_2+r_4)=0$$
Since $a,b,c,d$ form a basis, they're independent, which means these coefficients $r_1+r_3$, etc. are all zero. That in turn gives you $r_1=-r_3=-r_4=r_2$. Note that you can have $r_1\neq 0$ and still satisfy this condition, i.e. you don't arrive at any contradiction. So assume that $r_1\neq 0$. Plug these back into the first equation:
$$r_1(a+b)+r_1(c+d)-r_1(a+c)=r_1(b+d)
\\\implies b+d=(a+b)+(c+d)-(a+c)$$
This seems really obvious, but I'm just trying to illustrate a general method of tackling such problems where it might not be that obvious. From the above equation you realize that $(b+d)$ is in the span of the other three vectors, so repeat the process with the remaining vectors to see if they're independent! Let
$$s_1(a+b)+s_2(c+d)+s_3(a+c)=0
\\\implies a(s_1+s_3)+bs_1+c(s_2+s_3)+ds_2=0$$
which implies that $s_1+s_3=s_1=s_2+s_3=s_2=0$ again due to independence of $a,b,c,d$. This gives $s_1=s_2=s_3=0$, which implies that $(a+b),(c+d),(a+c)$ are indeed linearly independent and the subspace dimension is $3$.
|
H: Computation of the commutator of permutations of $J_n$
I've been slowly going through some of the material in Serge Lang's Algebra, and I've just stumbled upon some computations that are puzzling me at the moment. It's a specific step in the proof of a theorem, namely:
Theorem: If $n\geq 5$ then $S_n$ is not solvable.
The step that's troubling me is the following one :
Let $i,j,k,r,s$ be five distincts integers in $J_n=\{1,2,\ldots,n\}$ and let $\sigma=[ijk]$ and $\tau=[krs]$. Then direct computation gives their commutator : $$\sigma\tau\sigma^{-1}\tau^{-1}=[rki]$$
I don't think I really understand how the computation works here; I'm definitely unable to get to the result. So if anyone could explain it, I'd appreciate it. Thanks.
AI: Recall that the product of two permutations written in cycle notation is defined as the composition, and therefore should be computed right-to-left.
As an example, to compute $(12)(23)$ you should write $(1$, then compute
$$([12][23]).1 = [12].([23].1) = [12].1 = 2$$
and hence write $(12$. Then
$$([12][23]).2 = [12].([23].2) = [12].3 = 3$$
and hence write $(123$. Then
$$([12][23]).3 = [12].([23].3) = [12].2 = 1$$
and so you close the bracket, and get $(123)$.
Now, just use the associative property to do one product at a time, as described above. You get:
$$[ijk][krs][ikj][ksr] = [ijk][krs][iksrj] = [ijk][irj] = [irk] = [rki]$$
The standard notation is to put the smallest element of a cycle leftmost, but when letters are involved you can freely rotate them cyclically. What I mean is that $[irk]=[rki]=[kir]$.
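If you want to double-check such computations mechanically, here is a tiny Python sketch (my own; it represents permutations as dicts and composes right-to-left, matching the convention above):

```python
def cycle(*elts):
    """Cycle as a mapping, e.g. cycle('i','j','k') sends i->j->k->i."""
    return {elts[t]: elts[(t + 1) % len(elts)] for t in range(len(elts))}

def compose(p, q):
    """Right-to-left composition: (p q)(x) = p(q(x))."""
    return {x: p.get(q.get(x, x), q.get(x, x)) for x in set(p) | set(q)}

def inverse(p):
    return {v: k for k, v in p.items()}

sigma, tau = cycle('i', 'j', 'k'), cycle('k', 'r', 's')
comm = compose(compose(sigma, tau), compose(inverse(sigma), inverse(tau)))
print({x: y for x, y in comm.items() if x != y})
# {'i': 'r', 'r': 'k', 'k': 'i'}, i.e. the cycle [irk] = [rki]
```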
|
H: A linear map $T:V\rightarrow V$ can be written as $T=T_2T_1$ for some linear maps $T_1$ and $T_2$.
Question: Let $V$ be a finite dimensional vector space over $\mathbb{R}$ and $T:V\rightarrow V$ be a linear map. Can you always write $T=T_2T_1$ for some linear maps $T_1:V\rightarrow W$, $T_2:W\rightarrow V$, where $W$ is some finite dimensional vector space and such that
$T_1$ is onto, $T_2$ is one to one?
$T_1$ is one to one, $T_2$ is onto?
The first one is true if we put $W=\operatorname{Im}(T)$, $T_1=T$ and $T_2=I$ (the inclusion).
The second one is also true, but here $W$ will have larger dimension than $V$. So I am stuck on the construction of $W$. Please help in constructing $W$ and, accordingly, $T_1$, $T_2$.
Thank you.
AI: Take $W=V\oplus V$, $T_1(v)=(v,0)$ is injective, $T_2(u,v)=T(u)+v$ is surjective and $T=T_2T_1$.
|
H: ${\frac{\mathrm{d}^2 y}{\mathrm{d} x^2}+\frac{\mathrm{d} y}{\mathrm{d} x}-2\cdot y}={4\cdot x\cdot e^ {- 3\cdot x }-17\cdot e^ {- 3\cdot x }}$
I have the equation $${\frac{\mathrm{d}^2 y}{\mathrm{d} x^2}+\frac{\mathrm{d} y}{\mathrm{d} x}-2\cdot y}={4\cdot x\cdot e^ {- 3\cdot x }-17\cdot e^ {- 3\cdot x }}$$ I think my substitution for $y_p$ is wrong and I do not know why.
I tried substituting a particular solution as follows:
$y_p(x)= (Ax+B)e^{-3x}+De^{-3x}$
but at the end of the process I have 3 unknowns with 2 equations. I think it is something with the polynomial coefficients I chose.
AI: Since the RHS is $(4x-17)e^ {- 3x }$, the term $De^{-3x}$ is useless and it suffices to consider
$$y_p(x)= (Ax+B)e^{-3x}.$$
After the substitution we find,
$${\frac{\mathrm{d}^2 y_p}{\mathrm{d} x^2}+\frac{\mathrm{d} y_p}{\mathrm{d} x}-2\cdot y_p}=(4Ax+4B-5A)e^{-3x}.$$
Can you take it from here?
P.S. Your substitution is not wrong. The coefficient of $e^{-3x}$ is $B+D$, and after the substitution you find a linear system which is indeterminate. Since you are looking for a PARTICULAR solution you can take $D=0$ (actually you may assign any value to $D$) and find $B$. The sum $B+D$ will always be the same!
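(A completion of the hint, in case you want to check your work: matching coefficients with $(4x-17)e^{-3x}$ gives $4A=4$ and $4B-5A=-17$, so $A=1$, $B=-3$ and $y_p(x)=(x-3)e^{-3x}$.)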
|
H: How to convert degrees of a circle into positive and negative X and Y values
I am hoping this is a very simple equation that can solve this (I am not a mathematician), but considering I have values in degrees (0-360), I am trying to convert those back into positive and negative X and Y values.
So between 0 and 180 degrees it is a positive X value (180-360 being negative X) and between 90 and 270 degrees it would be a positive Y (270-90 being negative Y).
For example if I am facing North West it is a value of 45 degrees and I want to convert that to 0.5+Y and 0.5-X. North is +1Y, 0X and East is +1X, 0Y etc.
Is this straightforward? I will add a diagram if required. Thanks!
AI: Use the fact that there is a correspondence between the points $(x, y)$ in the Cartesian plane and the points $(r, \theta)$ in polar coordinates given by $x = r \cos \theta$ and $y = r \sin \theta,$ where $r$ is the radius and $\theta$ is the angle. One can see this by observing that one can always draw a circle centered at the origin starting and ending at a point $(x, y)$ in the plane with radius $r = \sqrt{x^2 + y^2}.$ Our angle $\theta$ is the angle from the $x$-axis taken counterclockwise, e.g., $\theta = \pi = 180^\circ$ gives the point diametrically opposite to the point on the circle of radius $r = \sqrt{x^2 + y^2}$ that lies on the $x$-axis, i.e., $(-r, 0).$
Considering that you want due north to coincide with the point $(0, 1)$ in the plane, we may take $r = 1.$ Consequently, the formula for a pair $(x, y)$ in terms of the angle $\theta$ is simply $(\cos \theta, \sin \theta).$
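As a concrete sketch (mine, and using the common compass convention 0° = North = $(0,1)$ with angles increasing clockwise, which may differ from the conventions described in the question):

```python
import math

def heading_to_xy(degrees):
    """Compass heading (0 = North = +Y, 90 = East = +X, clockwise)
    to a unit vector (x, y)."""
    theta = math.radians(90.0 - degrees)  # compass angle -> math angle
    return math.cos(theta), math.sin(theta)

for d in (0, 45, 90, 180, 270):
    x, y = heading_to_xy(d)
    print(d, round(x, 3), round(y, 3))
# 0 -> (0, 1) North; 45 -> (0.707, 0.707) North-East; 90 -> (1, 0) East
```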
|
H: About real eigenvectors
Is there any complex Hermitian matrix which has a real eigenvector? If there is, please give such an example. Thanks.
AI: Well, yes. See for instance $\begin{pmatrix}1&0&0\\ 0&1&i\\ 0&-i&1\end{pmatrix}$ which has an eigenvector in $(1,0,0)^t$. It is true that if a Hermitian matrix has a basis $(v_1,\cdots, v_n)$ of real eigenvectors, then the matrix is real, because the eigenvalues of a Hermitian matrix are always real. This means that $A=PDP^{-1}$, where $P$ is the matrix having those eigenvectors as columns and $D$ is a real diagonal matrix. Therefore the entries of $A$ are the result of sums, multiplications and quotients of real numbers.
|
H: Integrate $\int_0^1 \ln{\left(\ln{\sqrt{1-x}}\right)} \mathop{dx}$
Problem says to integrate $$\int_0^1 \ln{\left(\ln{\sqrt{1-x}}\right)} \mathop{dx}$$
I tried $u=1-x$ and got
$$\int_0^1 -\ln{2}+\ln{(\ln{u})} \mathop{du}$$
Then $t=\ln{u}$
$$-\ln{2}+\int_{-\infty}^0 e^t \ln{t}\,dt$$
Now what?
AI: You are on the right track. To evaluate $\int_{-\infty}^0 e^t \ln{t} \; dt$, I would do another substitution with $w=-t$:
$$\int_0^{\infty} e^{-w} \ln{(-w)} \; dw$$
$$=\int_0^{\infty} e^{-w} \ln{(-1)} \; dw + \int_0^{\infty} e^{-w} \ln{(w)} \; dw$$
Now, the left integral is easy and we will use only the principal value of $\ln{(-1)}=\pi i$:
$$\pi i \int_0^{\infty} e^{-w} \; dw = \pi i$$
To evaluate the right integral, you want to express $e^{-w}$ as its limit definition to eventually manipulate it into the Euler-Mascheroni constant:
$$\lim_{n \to \infty} \int_0^n {\left(1-\frac{w}{n}\right)}^{n-1} \ln{w} \; dw$$
Let $u=1-\frac{w}{n}$:
$$\lim_{n \to \infty} n\int_0^1u^{n-1} \ln{\left(n(1-u)\right)} \; du$$
$$=\lim_{n \to \infty} n\ln{n}\int_0^1u^{n-1}\; du+n\int_0^1 u^{n-1} \ln{(1-u)} \; du$$
$$=\lim_{n \to \infty} \ln{n}-n\int_0^1 u^{n-1} \sum_{j=1}^{\infty} \frac{u^j}{j} \; du$$
And by the dominated convergence theorem we can interchange the summation and integral sign:
$$=\lim_{n \to \infty} \ln{n}-n\sum_{j=1}^{\infty}\int_0^1 \frac{u^{n+j-1}}{j} \; du$$
$$=\lim_{n \to \infty} \ln{n}-\sum_{j=1}^{\infty} \left(\frac{1}{j}-\frac{1}{j+n}\right)$$
and since the series telescopes to $\sum_{j=1}^{n} \frac{1}{j}$, this is
$$=\lim_{n \to \infty} \ln{n}-\sum_{j=1}^{n} \frac{1}{j}$$
$$= -\gamma$$
Therefore, the original integral evaluates to $$\boxed{-\ln{2}-\gamma+ \pi i}$$
|
H: Derivative of Input in nonlinear State Space representation
I am dealing with obtaining a state-space representation of a nonlinear differential equation that arises from an inverted pendulum. It includes some terms that reflect the fact that the pendulum is controlled in a PID-like fashion.
The original EOM would be like this:
$$ \ddot{\theta}(t)=a_1\sin{\theta(t)} -a_2\theta(t)-a_3\dot{\theta}(t)-a_4\int_0^t{\theta(\tau)\,d\tau}
+b_1u(t) $$
So if I differentiate it once to get rid of the integral term, the equation takes the form:
$$ \dddot{\theta}(t)=a_1\cos{\theta(t)}\dot{\theta}(t) -a_2\dot{\theta}(t)-a_3\ddot{\theta}(t)-a_4\theta(t) +b_1\dot{u}(t) $$
And from here I'm kinda stuck and second-guessing myself a little bit. I want to express the equation in state-space form, $\vec{x}(t)=\mathbf{f}\{\theta(t),\dot{\theta}(t),\ddot{\theta}(t),u(t)\}$, to then linearize it and obtain the classic form, $\dot{\vec{x}}(t) = A(\sigma)\vec{x}(t) + B(\sigma)\vec{u}(t)$...
My question would basically be how to deal with that input derivative. Could a change of variable be the right thing to do? Like set $v(t)=\dot{u}(t)$ and go from there, or consider a vector input $\vec{u}(t)=[u(t), \dot{u}(t)]$?
I mean, for instance, if I wanted to obtain the TF of that system at a particular operating point, I could undo the change of variable knowing that $V(s)=sU(s)$, but I'm not sure if I would get the same results with the extended input vector.
AI: Here is one way:
Let $x_1 = \dot{\theta}$, $x_2 = \theta$ and $x_3(t) = x_3(0) + \int_0^t \theta(\tau) d \tau$.
Then we obtain
$\dot{x} = \begin{bmatrix} a_1 \sin x_2 - a_2 x_2 - a_3 x_1 - a_4 x_3 +b_1 u \\ x_1 \\ x_2\end{bmatrix} $.
Note however, the behaviour of this system matches the original system iff $x_3(0) = 0$. From a transfer function perspective they are of course equivalent.
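If it helps, here is a quick numerical sanity check of this realization (my own sketch; all coefficient values and the input signal are made up):

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, a3, a4, b1 = 9.81, 2.0, 1.0, 0.5, 1.0   # made-up coefficients
u = lambda t: 0.1 * np.sin(t)                    # some input signal

def rhs(t, x):
    x1, x2, x3 = x   # x1 = theta_dot, x2 = theta, x3 = integral of theta
    return [a1*np.sin(x2) - a2*x2 - a3*x1 - a4*x3 + b1*u(t), x1, x2]

# x3(0) = 0 so that the realization matches the original integro-ODE
sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.1, 0.0])
print(sol.y[1, -1])   # theta at t = 10
```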
|
H: Prove the following equality with determinants
Show that
$$
\begin{vmatrix}
b_1+c_1 & c_1+a_1& a_1+b_1 \\
b_2+c_2 & c_2+a_2 & a_2+b_2 \\
b_3+c_3 & c_3+a_3 & a_3+b_3 \\
\end{vmatrix}
=
2\begin{vmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3
\end{vmatrix}
$$
I can only see the brute-force approach. However, there really seems to be a more attractive and smarter way, but I cannot find it. Any help is appreciated.
AI: Just to illustrate what @RobertIsrael pointed out: since$$\left(\begin{array}{ccc}
b_{1}+c_{1} & c_{1}+a_{1} & a_{1}+b_{1}\\
b_{2}+c_{2} & c_{2}+a_{2} & a_{2}+b_{2}\\
b_{3}+c_{3} & c_{3}+a_{3} & a_{3}+b_{3}
\end{array}\right)=\left(\begin{array}{ccc}
a_{1} & b_{1} & c_{1}\\
a_{2} & b_{2} & c_{2}\\
a_{3} & b_{3} & c_{3}
\end{array}\right)\left(\begin{array}{ccc}
0 & 1 & 1\\
1 & 0 & 1\\
1 & 1 & 0
\end{array}\right),$$we need only calculate$$\left|\begin{array}{ccc}
0 & 1 & 1\\
1 & 0 & 1\\
1 & 1 & 0
\end{array}\right|=-\left|\begin{array}{cc}
1 & 1\\
1 & 0
\end{array}\right|+\left|\begin{array}{cc}
1 & 0\\
1 & 1
\end{array}\right|=1+1=2.$$
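A quick numerical sanity check of the identity (my own; the random integer matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(-5, 6, size=(3, 3)).astype(float)   # columns a, b, c
S = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
# det(M S) = det(M) det(S) = 2 det(M)
print(np.isclose(np.linalg.det(M @ S), 2 * np.linalg.det(M)))  # True
```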
|
H: Finding an irreducible polynomial over the rationals
I'm very confused by a homework question:
"Find the irreducible polynomial for $\sin{(2\pi/5)}$ over $\mathbb{Q}$."
I found that $16t^{4}-20t^{2}+5=0$ but this is not monic? This is also irreducible by Eisenstein, but minimal polynomials are always monic?
AI: If you want a monic polynomial, you can divide by $16$. Of course it won't have integer coefficients any more. A number that is a root of a monic polynomial with integer coefficients is an algebraic integer. $\sin(2\pi/5)$ is not an algebraic integer.
|
H: bound for the ratio of Gamma functions
Let $x \in \mathbb{R}$, and let $N$ be a natural number.
How to bound from above
$$
\frac{\Gamma(1-1/x)}{\Gamma(N+1-1/x)}
$$
AI: Repeated application of the fact that $\Gamma(x+1)=x\cdot\Gamma(x) \text{ for } x\notin-\mathbb{N}\cup\{0\}$ yields
$$\frac{\Gamma(1-1/x)}{\Gamma(N+1-1/x)}=\prod_{i=1}^N \,\left(i-\frac{1}{x}\right)^{-1}$$
Does that answer your question?
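A quick numerical check of the product formula (my own sketch; the values of $x$ and $N$ are arbitrary):

```python
from math import prod
from scipy.special import gamma

x, N = 3.7, 6
lhs = gamma(1 - 1/x) / gamma(N + 1 - 1/x)
rhs = 1 / prod(i - 1/x for i in range(1, N + 1))
print(abs(lhs - rhs) < 1e-12)  # True
```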
|
H: Finite-Type Algebra Property Under Change of Base
I was wondering how to prove the following assertion. Any help would be appreciated!
Suppose we have two ring homomorphisms: $f: A \rightarrow B$ and $g: B \rightarrow C$.
Then $C$ being of finite type over $A$ implies that $C$ is of finite type over $B$.
AI: $C$ being of finite type over $A$ means that there is a fixed set of generators $x_1,\ldots, x_k$ in $C$ such that any element $c\in C$ is a polynomial in them with coefficients from $A$, i.e.
$$ c=\sum_{\alpha}a_\alpha\, x_1^{\alpha_1}\cdots x_k^{\alpha_k}$$
(a finite sum over multi-indices $\alpha$) with $a_\alpha\in A$. Note that this above expression really means
$$ c=\sum_{\alpha} (g\circ f)(a_\alpha)\, x_1^{\alpha_1}\cdots x_k^{\alpha_k}$$
by the definition of the algebra structure. In particular, if we set $f(a_\alpha)=b_\alpha$, we have
$$ c=\sum_{\alpha} g(b_\alpha)\, x_1^{\alpha_1}\cdots x_k^{\alpha_k}$$
for $b_\alpha\in B$. This is the same thing as $C$ being of finite type over $B$ with generators $x_1,\ldots,x_k$: e.g.
$$ c=\sum_{\alpha} b_\alpha\, x_1^{\alpha_1}\cdots x_k^{\alpha_k}.$$
|
H: Solve: $3x(x+y-2)=2y$ with $y(x+y-1)=9x$
This is a question from my Regional Mathematics Olympiad. It is from the topic of polynomials.
AI: Multiply both equations,
$$3x(x+y-2)y(x+y-1) = 18yx$$
Before cancelling $x$ and $y$, note that $x=0$ and $y=0$ give candidate solutions to be checked separately. After cancelling,
assume $x+y=t$ and solve the quadratic:
$$(t-2)(t-1)=6$$
$$(t-4)(t+1)=0$$
$$x+y = 4,x+y+1=0$$
Thus the candidate solutions are the points on the lines $x+y=4$, $x+y+1=0$, $x=0$ and $y=0$ that satisfy both equations.
Finally, look at the respective equations.
$$3x(x+y-2)=2y$$
Here if we check with the line $x=0$
$$y=0$$
So, one solution is $(0,0)$.
Note we don't need to check the line $y=0$, as it's already handled here. Also, this point satisfies the other given equation.
Next check the same with the other lines obtained (i.e. solve $x+y = 4$ and $x+y=-1$ together with the given equations) and take the common points as your answer.
I think you can take it from here.
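If you want to verify the end result numerically (note: this spoils the final answer), a quick check with sympy:

```python
from sympy import solve, symbols

x, y = symbols('x y')
print(solve([3*x*(x + y - 2) - 2*y, y*(x + y - 1) - 9*x], [x, y]))
# solution set: (0, 0) on x = y = 0, (1, 3) on x + y = 4, (2/7, -9/7) on x + y = -1
```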
|
H: Should I use the "variation of parameters" here?
I am trying to understand how I should approach this kind of equation: $${x^2\cdot \left(\frac{\mathrm{d}^2 y}{\mathrm{d} x^2}\right)+6\cdot x\cdot \left(\frac{\mathrm{d} y}{\mathrm{d} x}\right)+6\cdot y}={\frac{x+4}{x^3-4\cdot x^2}}, \qquad x>4.$$
I think I should use the "variation of parameters" technique, but for that I have to get at least one homogeneous solution, and I do not know how to handle higher-order differential equations with non-constant coefficients.
AI: Hint:
$${x^2\cdot \left(\frac{\mathrm{d}^2 y}{\mathrm{d} x^2}\right)+6\cdot x\cdot \left(\frac{\mathrm{d} y}{\mathrm{d} x}\right)+6\cdot y}={\frac{x+4}{x^3-4\cdot x^2}}$$
It's a Cauchy-Euler differential equation.
Try $y=x^m$ for the homogeneous DE.
Another way:
You can also try this
$${x^4\cdot \left(\frac{\mathrm{d}^2 y}{\mathrm{d} x^2}\right)+6\cdot x^3\cdot \left(\frac{\mathrm{d} y}{\mathrm{d} x}\right)+6x^2\cdot y}={\frac{x+4}{x-4}}$$
And integrate directly since you have that:
$$(x^4y')'+(2x^3y)'={\frac{x+4}{x-4}}$$
I think this method is easier than the variation of parameters. Integration gives us:
$$x^4y'+2x^3y=x+C_1 +8\ln |{x-4}|$$
It's a first order DE now.
$$(x^2y)'=\dfrac 1 x+ \dfrac {C_1}{x^2} +8\dfrac {\ln |{x-4}|}{x^2}$$
Integrate again if it's possible.
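For the first hint, substituting $y=x^m$ into the homogeneous equation gives the indicial equation $m(m-1)+6m+6=m^2+5m+6=0$, i.e. $m\in\{-2,-3\}$. A minimal sympy check of these homogeneous solutions (an illustration, not part of the hint):

```python
from sympy import symbols, diff, simplify

x = symbols('x', positive=True)
for m in (-2, -3):
    y = x**m
    residual = x**2*diff(y, x, 2) + 6*x*diff(y, x) + 6*y
    print(m, simplify(residual))   # both residuals simplify to 0
```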
|
H: if 3 random variables are independent, then any pair is independent given the remaining random variable
How do I prove the following:
$X,Y,Z$ are independent, then any pair is independent given the remaining
random variable.
I know that $E(X|Z)=E(X) $ and $E(Y|Z)=E(Y)$
so $E(XY)=E(X)E(Y)=E(X|Z)E(Y|Z)$
does it imply that $E(X|Z)E(Y|Z)=E(XY|Z)$
On the other hand, if $X,Y,Z$ are pairwise independent, how do I disprove that any pair is independent given the remaining random variable?
AI: $X,Y,Z$ are independent, hence, $P(X,Y,Z) = P(X) P(Y) P(Z)$. Then,
$$P(X,Y \mid Z) = \frac{P(X,Y,Z)}{P(Z)} = \frac{P(X) P(Y) P(Z)}{P(Z)}$$
This proves that $X$ and $Y$ are independent given $Z$, since $P(X,Y \mid Z) = P(X)P(Y)$. The same proof applies to the other pairs.
For the case you want to disprove, just use the standard counterexample (the one consistent with the computation below): let $X$ and $Y$ be independent fair Bernoulli$(1/2)$ variables and let $Z = X \oplus Y$, i.e. $Z=1$ exactly when $X \neq Y$.
You can easily check that they are pairwise independent. But their collection is not independent. Just doing some calculations:
$$
P(X,Y \mid Z) = \frac{P(Z \mid Y,X)P(X,Y)}{P(Z)}= \frac{P(Z \mid Y,X)P(X)P(Y)}{P(Z)}
$$
But $P(Z=1 \mid X=1, Y=1)=0 \neq P(Z=1)=0.5$. So we conclude that $P(X,Y \mid Z) \neq P(X)P(Y)$ which is what we wanted to disprove.
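One can confirm the counterexample by brute-force enumeration (a sketch in Python):

```python
from itertools import product
from fractions import Fraction

# X, Y independent fair bits, Z = X xor Y; the four (x, y) pairs are equally likely.
outcomes = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]
p = Fraction(1, 4)

def prob(event):
    return sum(p for o in outcomes if event(o))

# Pairwise independence, e.g. of X and Z:
for a, c in product((0, 1), repeat=2):
    assert prob(lambda o: o[0] == a and o[2] == c) == \
           prob(lambda o: o[0] == a) * prob(lambda o: o[2] == c)

print(prob(lambda o: o[2] == 1))   # P(Z=1) = 1/2
print(prob(lambda o: o == (1, 1, 1)) / prob(lambda o: o[0] == 1 and o[1] == 1))
# P(Z=1 | X=1, Y=1) = 0, which differs from P(Z=1) = 1/2, exactly as used above.
```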
|
H: There exists a nonnull subset $A\in \mathcal{A}$ such that : $ \{f_n\}, \{g_n\}\text{ and }\{h_n\} \text{ are uniformly integrable on }A $?
Let $(E,\mathcal{A},\mu)$ be probability space.
Lemma. Suppose $\{f_n\}$ is a sequence in $\mathcal{L}^1_\mathbb{R}$ such that
$$
\sup_n\int_{E}|f_n|d\mu<+\infty.
$$
Then there exists a nonincreasing sequence $\{B_p\}$ in $\mathcal{A}$ such that $\mu(\cap_p B_p) =0$ and for every $p$
$$
\{f_n\}\text{ is uniformly integrable over }E\setminus B_p
$$
Let $\{f_n\}$, $\{g_n\}$ and $\{h_n\}$ be bounded sequences in $\mathcal{L}^1_\mathbb{R}$. Can we say that there exists a nonnull subset $A\in \mathcal{A}$ such that:
$$
\{f_n\}, \{g_n\}\text{ and }\{h_n\} \text{ are uniformly integrable on }A
$$
AI: There are sequences of sets $B_p, B_p', B_p''$ satisfying the conclusion of the lemma for the sequences $\{f_n\}, \{g_n\}, \{h_n\}$ respectively. Since the intersection of each such sequence has zero measure, continuity from above implies $\mu(B_p) \to 0$ and similarly for $B_p', B_p''$, so given $\epsilon$ we can find $q$ sufficiently large that $\mu(B_q) < \epsilon$, $\mu(B_q') < \epsilon$, $\mu(B_q'') < \epsilon$. Set $A = (B_q \cup B_q' \cup B_q'')^c$. By union bound $\mu(A) > 1-3\epsilon$, which is positive as soon as $\epsilon < 1/3$, and by construction all three sequences are uniformly integrable on $A$.
|
H: Restriction of a quotient map to its inverse image is a quotient map
If $q:X\to Y$ is a quotient map. Is it true that $q|_{q^{-1}(B)}:q^{-1}(B)\to B$ is also a quotient map for any subset $B$ of $Y$.
It seems that it is not true, but I'm not finding any counterexample. Can anyone help? Thanks in advance!
AI: Counterexample:
Consider $X=[0,2]$.
Define the equivalence relation:
$x \sim y$ if and only if $x=y$, or $x \in A$ and $y \in A$, where $A=\{0\} \cup \,]1,2]$.
Let $Z=]0,1] \subset X$ and define $Y=X/\sim$.
If you consider $q : X \to Y$ the natural quotient map then taking $B=q(Z)$ you have your counterexample.
Proof
You should in fact verify that $q^{-1}(B)=Z$ and that $q_{|Z}$ is not a quotient map. I will prove only the second statement.
If $q_{|Z}$ were a quotient map, then the set $U=\,]\frac 1 2,1]$, which is open in $Z$ and of the form $q_{|Z}^{-1}(C)$ for a suitable $C$ (no two points of $Z$ are identified, so every subset of $Z$ is such a preimage), would have open image $C$ in $B$.
Thus there should exist an open set $V\subseteq X$, full (i.e. a union of equivalence classes), such that $U=Z \cap V$.
Then, because $1 \in V$ and $V$ is open, $V$ meets $]1,2]\subseteq A$; and because $V$ is full, $0 \in A \subseteq V$. Thus, $V$ being open around $0$, there would exist $0 < x < \frac 1 2$ such that $x \in V$, hence $x \in Z\cap V=U$, which is impossible.
|
H: Suppose that $f:[0,1]\to M$ is locally injective. Show that $f$ is piecewise injective.
Consider a metric space $M$ and $f: [0,1]\to M$ locally injective, i.e. for each $x\in [0,1]$ there is a neighborhood on which $f$ is injective. Show that $f$ is piecewise injective, i.e. there exist $0= t_0 < t_1 < \dots < t_N = 1$ such that $f$ is injective on $[t_{n-1},t_n]$ for each $n\in\{1,\dots, N\}$.
My attempt:
Given is that $\forall x\in M: \exists \delta_x>0: \forall y,z\in ]x-\delta_x,x+\delta_x[: f(y)=f(z)\Rightarrow y=z$. Then $$ [0,1]\subseteq\bigcup_{x\in [0,1]} ]x-\delta_x,x+\delta_x[ \quad \Rightarrow \quad [0,1]\subseteq\,\, ]x_1-\delta_{x_1},x_1+\delta_{x_1}[\,\, \cup\dots\cup \,\, ]x_N-\delta_{x_N},x_N+\delta_{x_N}[,$$ for some $x_1,\dots,x_N\in [0,1]$. Assuming w.l.o.g. that $x_1< \dots < x_N$, then $0\in \,\,]x_1-\delta_{x_1},x_1+\delta_{x_1}[\,\, =: B(x_1,\delta_{x_1})$ and $1\in B(x_N,\delta_{x_N})$ (I'm not too sure about this). I believe that all the open intervals that cover $[0,1]$ will have to overlap (otherwise, i.e. no overlapping intervals or at least two intervals that don't overlap, we won't cover whole $[0,1]$). By 'all' I mean that for each interval $B(x_i,\delta_{x_i})$ we can find $B(x_j,\delta_{x_j})$ such that $\exists t_{ij}\in B(x_i,\delta_{x_i})\cap B(x_j,\delta_{x_j})$. W.l.o.g. we can assume that $B(x_i, \delta_{x_i})$ intersects $B(x_{i+1}, \delta_{x_{i+1}})$ for $i=1,\dots, N-1$. Then we can select $t_{i-1}\in B(x_i,\delta_{x_i}), i=1,\dots, N$.
Is this somewhat correct?
AI: What you’ve done is basically correct; it just needs a bit more detail in the construction of the partition.
You have a finite set of intervals $(x_1,y_1),(x_2,y_2),\ldots,(x_n,y_n)$ that cover $[0,1]$ and are such that $0\in(x_1,y_1)$, and $1\in(x_n,y_n)$. You can further assume that none of these intervals is a subset of another: if one were, you could simply throw it away.
Let $(u_1,v_1)=(x_1,y_1)$. If $v_1>1$, then $[0,1]\subseteq(u_1,v_1)=(x_1,y_1)$, and $f$ is injective. Otherwise, there is an interval $(x_k,y_k)$ that contains $v_1$, and we let $(u_2,v_2)=(x_k,y_k)$. Proceeding in this fashion, in a finite number of steps we reduce the open cover to a family $\{(u_1,v_1),\ldots,(u_m,v_m)\}$ such that $0\in(u_1,v_1)$, $1\in (u_m,v_m)$, and $v_k\in(u_{k+1},v_{k+1})$ for $k=1,\ldots,m-1$.
Now you have a cover of $[0,1]$ by open intervals that overlap in a fairly predictable way, and you can define the desired partition: let $t_0=0$, $t_k\in(u_{k+1},v_k)$ for $k=1,\ldots,m-1$, and $t_m=1$. Then $[t_{k-1},t_k]\subseteq(u_k,v_k)$ for $k=1,\ldots,m$, so $f$ is injective on each segment of the partition.
|
H: Taylor series for $\ln(\frac{1-z^2}{1+z^3})$
Taylor series for $\ln(\frac{1-z^2}{1+z^3})$
I've tried to $$\ln(1-z^2)-\ln(1+z^3)=\sum (-1)^{3n-1}z^{2n}-\sum(-1)^{4n-1}z^{3n}$$
I didn't manage to make it into one series; any help is appreciated.
AI: I think you mean $-\sum_{n\ge1}\frac1nz^{2n}+\sum_{n\ge1}\frac{(-1)^n}{n}z^{3n}$. The $z^{6m+j}$ coefficient can be calculated for each $j\in\{0,\,\cdots,\,5\}$:
If $j=0$ both series contribute, so the coefficient is $-\frac{1}{3m}+\frac{1}{2m}=\frac{1}{6m}$
If $j\in\{1,\,5\}$ neither series contributes, so the coefficient is $0$
If $j\in\{2,\,4\}$ only the first series contributes, so the coefficient is $-\frac{1}{3m+j/2}$
If $j=3$ only the second series contributes, so the coefficient is $-\frac{1}{2m+1}$
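These coefficients can be cross-checked against a computer algebra system (a sketch assuming sympy is available):

```python
from sympy import symbols, ln, series

z = symbols('z')
print(series(ln((1 - z**2) / (1 + z**3)), z, 0, 13))
# e.g. the z^2 coefficient is -1, z^3 is -1, z^5 is 0,
# and z^6 is 1/6 (the m=1, j=0 case above)
```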
|
H: true or false- continuous functions
I'm having a hard time deciding whether these statements are true or false:
$1$. If $f$ is continuous on $\mathbb{R}$ and $\left|f(x)-x\right|<1$ for every $x$ in $\mathbb{R}$, then $f$ attains every real value on $\mathbb{R}$.
$2$. If $f$ is continuous on $\mathbb{R}$ and $\left|f(x)-x\right|>1$ for every $x$ in $\mathbb{R}$, then $f$ attains every real value on $\mathbb{R}$.
$3$. If $f$ is continuous on $(a,b)$ and $f$ attains a maximum on $(a,b)$, then it isn't injective on $(a,b)$.
I feel like the first and the third ones are true but I'm not sure so I would appreciate some guidance :)
AI: $1)$ True. Assume by way of contradiction that some value $a\in\mathbb{R}$ is never attained, i.e. $f(x)\neq a$ for all $x\in\mathbb{R}$. By our condition on $f(x)$, we know
$$-1<f(a-2)-(a-2)<1$$
$$a-3<f(a-2)<a-1$$
and
$$-1<f(a+2)-(a+2)<1$$
$$a+1<f(a+2)<a+3$$
This implies
$$f(a-2)<a<f(a+2)$$
Then by the intermediate value theorem, there exists $c\in (a-2,a+2)$ such that $f(c)=a$. Since this contradicts our assumption, we conclude the range of $f(x)$ is all the real numbers.
$2)$ False. $f(x)=x^2+x+2$ is a counterexample. We have
$$|f(x)-x|=|x^2+x+2-x|=|x^2+2|\geq 2>1$$
for all $x\in\mathbb{R}$. But $f(x)$ is minimized at $x=-\frac{1}{2}$, where $f\left(-\frac{1}{2}\right)=\frac{7}{4}$. Thus $f(x)\geq\frac{7}{4}$ for all $x$, so $f$ never attains, say, the value $0$, and the conjecture is false.
$3)$ True. Let $f(x)$ be maximized at $c\in(a,b)$. Now, consider the interval
$$I=\left[c-\frac{c-a}{2},c+\frac{b-c}{2}\right]$$
Note that
$$c-\frac{c-a}{2}=\frac{a+c}{2}>a$$
$$c+\frac{b-c}{2}=\frac{b+c}{2}<b$$
This implies $I\subset (a,b)$. Now, consider $f(x)$ at
$$s_1=f\left(c-\frac{c-a}{2}\right)$$
$$s_2=f\left(c+\frac{b-c}{2}\right)$$
First, if either $s_1=f(c)$ or $s_2=f(c)$, then $f(x)$ is not injective on $I$ (and therefore not injective on $(a,b)$) and we are done. Suppose $s_1<f(c)$ and $s_2<f(c)$. Now, if $s_1=s_2$, we are done. If not, consider the case where $s_1<s_2$. Then define $h(x)=f(x)-s_2$. Then
$$h(c)=f(c)-s_2>0$$
$$h\left(c-\frac{c-a}{2}\right)=f\left(c-\frac{c-a}{2}\right)-s_2=s_1-s_2<0$$
$$h\left(c+\frac{b-c}{2}\right)=f\left(c+\frac{b-c}{2}\right)-s_2=s_2-s_2=0$$
By the intermediate value theorem, there exists
$$d\in \left(c-\frac{c-a}{2},c\right)$$
such that $h(d)=0$. But then
$$0=h(d)=h\left(c+\frac{b-c}{2}\right)$$
$$s_2=f(d)=f\left(c+\frac{b-c}{2}\right)$$
with $d\neq c+\frac{b-c}{2}$, so $f$ is not injective and we are done. In a similar manner, we can modify this proof for $s_1>s_2$. Having exhausted all cases, we conclude $3)$ is true.
|
H: Find the PDF of $Y = X^2 + 3$ where $X \sim Poisson(\lambda)$.
I am following the CDF method to calculate the PDF of $Y$. Up to this point, I have done:
$$F_Y = P(Y \leq x)$$
$$F_Y = P(X^2+3 \leq x)$$
$$F_Y = P(X \leq \sqrt{x-3})$$
$$F_Y = e^{-\lambda}.\sum_{k=0}^{\sqrt{x-3}} \frac{\lambda^k}{k!}$$
Now I have to differentiate the CDF to get PDF.
$$f_Y = \frac{\partial F_Y}{\partial x}$$
Now how can I proceed from here? In the continuous case, I can use Leibniz rule but I'm not sure about the discrete case.
AI: The CDF method you showed is for continuous distributions. With discrete laws the pdf (more precisely the pmf, probability mass function) is not the derivative of the CDF.
With a discrete distribution, it is enough to directly calculate the pmf:
$$\mathbb{P}[Y=y]=\mathbb{P}[X^2+3=y]=\mathbb{P}[X=\sqrt{y-3}]=\frac{e^{-\lambda}\lambda^{ \sqrt{y-3}}}{ (\sqrt{y-3})!}\mathbb{1}_{\{3;4;7;12,...\}}(y)$$
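A small numerical sanity check that these masses sum to $1$ over the support $\{3,4,7,12,\ldots\}$ (a sketch; $\lambda=2$ is an arbitrary test value):

```python
import math

lam = 2.0

def pmf_Y(y):
    # P(Y = y) is nonzero only when y = k^2 + 3 for a nonnegative integer k
    if y < 3:
        return 0.0
    k = math.isqrt(y - 3)
    if k * k + 3 != y:
        return 0.0
    return math.exp(-lam) * lam**k / math.factorial(k)

support = [k * k + 3 for k in range(50)]
print(sum(pmf_Y(y) for y in support))   # very close to 1
```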
|
H: How do I integrate $f(z)=3\cdot\operatorname{Im}z$ on a curve?
Let $f(z)=3y$, where $z=x+iy\in\mathbb C$, and let $\gamma(t)=t+it^2$, for $t\in[0,1]$. Find $\int f(z)\ dz$.
If the problem read "Let $f(z) = 3z$" for example, I would first find $\gamma'(t)= 1+2it$.
Then, I would find $\int f(\gamma(t))(\gamma'(t))\ dt$, which would become $\int3(t+it^2)(1+2it)\ dt$.
And I would just integrate as normal.
But, given $f(z)=3y$... I'm missing what to do. I'm sure it's something simple, but I can't see it. Can anyone help?
AI: On the curve $\gamma$, $f(z)=3y=3t^2$ (which is real). Hence $\int_\gamma f(z)\,dz=\int_0^1 3t^2\,(1+2it)\,dt = 1+\frac{3}{2}i$.
|
H: Determine sequence with generating function $(2x-3)^3$
Determine the sequence associated to the following generating function: $(2x-3)^3$
I know that the sequence $\left(\begin{pmatrix} 3 \\ 0 \end{pmatrix}, \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \begin{pmatrix} 3 \\ 2 \end{pmatrix}, \ldots\right)$ has the generating function $(1+x)^3$ so I know that this sequence must be similar to the one I am looking for. But I am not sure how to manage the rest. Could someone help me?
AI: The sequence determined by a power series $\sum_{n\ge 0}a_nx^n$ is simply the sequence $\langle a_0,a_1,a_2,\ldots\rangle$. Expanding $(2x-3)^3$, we get
$$-27+54x-36x^2+8x^3\;,$$
which corresponds to the sequence $\langle -27,54,-36,8,0,0,0,\ldots\rangle$.
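A one-liner check of the expansion (a sketch assuming sympy is available):

```python
from sympy import symbols, expand, Poly

x = symbols('x')
print(Poly(expand((2*x - 3)**3), x).all_coeffs()[::-1])
# [-27, 54, -36, 8], matching the sequence <-27, 54, -36, 8, 0, 0, ...>
```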
|
H: Does the pullback of the metric tensor have the form $\phi^* g(X,Y)=g(\phi X,\phi Y)$?
I have seen in many textbooks that the pullback of an arbitrary tensor field of type (r,s) under the diffeomorphism $\phi:M \rightarrow N$ is defined as
$\phi^* T(\eta_1,\dots, \eta_r, X_1, \dots, X_s) = T( (\phi^{-1})^*(\eta_1), \dots, (\phi^{-1})^*(\eta_r), \phi_* X_1, \dots, \phi_* X_s)$
where $\eta_i \in T_p^*(M)$ is a covector and $X_j \in T_p(M)$ is a vector. So, in the case of the metric tensor this would reduce to the following:
$\phi^*g(X,Y) = g(\phi_*X, \phi_*Y)$
where $X,Y$ are vectors.
Now, at the same time, Wikipedia suggests that we could find the pullback of such a tensor as
$\phi^*g(X,Y) = g(\phi X, \phi Y)$
and my question is how do they get $\phi_*$ to become $\phi$?
AI: If $\phi:V \to W$ is a linear map, then for any $p\in V$, the tangent mapping/pushforward mapping $T\phi_p$ or $d\phi_p$ or $\phi_{*,p}$ (however you want to use the notation) is a linear map $T_pV \to T_{\phi(p)}W$. But for a vector space, the tangent space can be canonically identified with itself: $T_pV \cong V$ and $T_{\phi(p)}W\cong W$. Because of this, you can "think of" the tangent mapping as a map $V \to W$. This is simply the derivative of a linear transformation $\phi:V \to W$ at the point $p \in V$. But a linear transformation is its own derivative.
If you want a more precise formulation of what I said above, here it is: on any (say finite-dimensional) vector space $V$, and any $p \in V$, there is a canonical isomorphism $\xi_{V,p}:T_pV \to V$. Note that the exact construction of this isomorphism will depend on which definition of tangent space you're using, but in any case, it is a good idea to prove this yourself. Similarly we have an isomorphism $\xi_{W,\phi(p)}:T_{\phi(p)}W \to W$. If you unwind the definitions of everything, you'll see that the following diagram commutes:
$\require{AMScd}$
\begin{CD}
T_pV @>{\phi_{*p}}>> T_{\phi(p)}W \\
@V{\xi_{V,p}}VV @VV{\xi_{W,\phi(p)}}V \\
V @>>{D\phi_p = \phi}> W
\end{CD}
In other words, $\phi = \xi_{W,\phi(p)} \circ \phi_{*,p} \circ (\xi_{V,p})^{-1}$, or said differently once again, up to isomorphisms, for each $p \in V$, we have $\phi_{*,p} = \phi$. But all of this is only because $\phi$ is a linear transformation.
But in the general case, if you have smooth manifolds $M,N$, and you have a metric tensor $g$ on $N$ and a diffeomorphism $\phi:M \to N$, there is no reason to even expect that $M,N$ have vector space structures, so it doesn't even make sense to talk about $\phi$ being linear. This is why we have to use the push-forward map, and there is no sense in which we can "identify" the push-forward with the original map itself.
See this for a more general perspective of everything I mentioned here (with slightly different notation).
Edit: In response to comment.
The author DOES NOT say $(s^{-1})^*g(x,y) = g(s^{-1}(x), s^{-1}(y))$. He says
\begin{align}
d_{B^n}(x,y) &= [(s^{-1})^*d_{\mathcal{H}^n}](x,y) = d_{\mathcal{H}^n}(s^{-1}(x), s^{-1}(y))
\end{align}
These are completely different statements. Note that if you have two (let's for simplicity say simply connected) Riemannian manifolds $(M,g)$ and $(N,h)$. Then, the metric tensors $g$ and $h$ give rise to distance functions $d_g$ and $d_h$ respectively (in the article, the author refers to these as $d_{B^n}$ and $d_{\mathcal{H}^n}$). Now, suppose we have a diffeomorphism $\phi:M \to N$. Then, we can consider the following objects:
first is the pullback tensor field $(\phi^{-1})^*g$ on $N$ (as defined above).
second is the pull-back distance function $(\phi^{-1})^*d_g$ on $N$, which is DEFINED as
\begin{align}
[(\phi^{-1})^*d_g](x,y) &= d_g\left( \phi^{-1}(x), \phi^{-1}(y)\right) \qquad \text{for all $x,y \in N$}
\end{align}
Note that although we are using the same notation $(\phi^{-1})^*$, and calling both of them "pullbacks", these are completely different things. The first is a pullback of tensor field, while the second is a pull-back of a distance function. The word "pull-back" should be thought of literally as the name suggests: you have a certain object defined on one space (eg. a tensor field or distance function), and you have a invertible map between two spaces. Then, you can use this map to "transport" this object to the new space.
Now, here is a theorem which you should try to prove (it is really just an exercise in unwinding all the definitions).
Theorem.
Let $(M,g), (N,h)$, $\phi:M \to N$, and $d_g,d_h$ all have the same definitions as above. If $h = (\phi^{-1})^*g$ then $d_h = (\phi^{-1})^*d_g$.
What this says is that if your metric tensors are related to each other by a pullback, then so are the associated distance functions. Note that this is precisely what the author is saying in the first sentence of his proof:
"Since the stereographic projection $s: \mathcal{H}^+ \to B^n$ is a Riemannian isometry, it is also a metric isometry for the induced distances."
|
H: Differential equation with initial conditions problem: how do we solve $yy'' - 2(y')^2 - y^2 = 0$?
I have a problem solving the following equation with initial conditions:
$$yy''-2(y')^2-y^2=0$$
$$y(0)=1,\quad y'(0)=0$$
The problem is that I've tried the substitution $u(y)=y'$
and I end up with $$u'u-\frac{2u^2}{y} =y, $$ which is basically a Bernoulli equation.
I've done z sub so that: $z(y)=u^2$
and got equation $$z'-4z/y=2y $$
I solved that and got (with my initial conditions) $z=y^4-y^2$, which implies $u^2=y^4-y^2$.
I have no idea what the next step should be.
Any help appreciated !
AI: HINT
To begin with, notice that
\begin{align*}
yy'' - 2(y')^{2} - y^{2} = 0 \Longleftrightarrow \frac{y''}{y} - \frac{(y')^{2}}{y^{2}} - \left(\frac{y'}{y}\right)^{2} - 1 = 0
\end{align*}
Moreover, we do also have that
\begin{align*}
\frac{y''}{y} - \frac{(y')^{2}}{y^{2}} = \frac{y''y - (y')^{2}}{y^{2}} = \left(\frac{y'}{y}\right)'
\end{align*}
Hence, if we make the substitution $y' = uy$, we obtain the following ODE
\begin{align*}
u' - u^{2} - 1 = 0 \Longleftrightarrow u' = u^{2} + 1 \Longleftrightarrow \frac{u'}{u^{2}+1} = 1
\end{align*}
Can you take it from here?
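(Spoiler for a self-check: carrying the hint through with the given initial data leads to $u=\tan x$ and hence $y=\sec x$. A minimal sympy verification of that candidate, assuming sympy is available:)

```python
from sympy import symbols, sec, diff, simplify

x = symbols('x')
y = sec(x)   # candidate solution obtained by finishing the hint
print(simplify(y*diff(y, x, 2) - 2*diff(y, x)**2 - y**2))   # 0
print(y.subs(x, 0), diff(y, x).subs(x, 0))                  # 1, 0: the initial data
```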
|
H: natural logarithm of a complex number $\ln(a+bi)$
I am trying to find a formula for $\ln(a+bi)$, is my working correct?
$$a+bi=re^{i\theta},\,\,r=\sqrt{a^2+b^2},\,\,\theta=\frac{|b|}{b}\arctan\left(\frac ba\right)$$
and so:
$$\ln(a+bi)=\ln(re^{i\theta})=\frac{\ln(r^2)}{2}+i\theta$$
$$\therefore \ln(a+bi)=\frac{\ln(a^2+b^2)}{2}+i\frac{|b|}{b}\arctan\left(\frac ba\right)$$
I would also like to calculate: $$(a+bi)^{c+di}$$ and using a similar method I got:
$$(a+bi)^{c+di}=(a^2+b^2)^{c/2}\exp\left(-\frac{d|b|}{b}\arctan\left[\frac ba\right]\right)\left[\cos\left(\frac{c|b|}{b}\arctan\left[\frac ba\right]\right)+i\sin\left(\frac{c|b|}{b}\arctan\left[\frac ba\right]\right)\right]\left[\cos\left(\frac d2\ln(a^2+b^2)\right)+i\sin\left(\frac d2\ln(a^2+b^2)\right)\right]$$
could anyone confirm if this is correct? Thanks
AI: When you have $z^u$ it is convenient to have $z=a+ib=re^{i\theta}$ in polar form and keep $u=c+id$ in cartesian form.
The polar formula is $\begin{cases}r=|z|=\sqrt{a^2+b^2}\\\theta=\operatorname{atan2}(b,a)\end{cases}$
You got it ok, but for the angle it is more convenient to use atan2 which deals with the proper quadrants for the angle (this function is defined on most systems that have a math library):
https://fr.wikipedia.org/wiki/Atan2
We have $\ln(z)=\ln(r)+i\theta+2ik\pi\quad k\in\mathbb Z$, and it is important to keep this $k$ because the complex logarithm is multivalued (i.e. it has branches).
It ensures that formulas like $z^{u}\times z^v=z^{u+v}$ and $(z^u)^v=z^{uv}\ $ stay true at least for some value of $k$.
$\begin{align}z^u
&=\exp\bigg(u\ln(z)\bigg)\\
&=\exp\bigg((c+id)\big(\ln(r)+i\theta+2ik\pi\big)\bigg)\\
&=\exp\bigg(\big(c\ln(r)-d\theta-2kd\pi\big)+i\big(c\theta+d\ln(r)+2kc\pi\big)\bigg)
\end{align}$
So we have the formula $\boxed{z^u\in\bigg\{z_{[k]}=z_{[0]}\ w^k\mid k\in\mathbb Z\bigg\}}$ where:
$$\begin{cases}z_{[0]}&=r^c\ e^{-d\theta}\ \bigg(\cos\big(c\theta+d\ln(r)\big)+i\sin\big(c\theta+d\ln(r)\big)\bigg)\\\\w&=e^{-2d\pi}\ \bigg(\cos(2c\pi)+i\sin(2c\pi)\bigg)\end{cases}$$
Compared to your formula (i.e. $z_{[0]}$), you still have a complex multiplication remaining, while I separated the real and imaginary parts, but this is basically the same.
But I also have the multiplicative factor $w^k$, which corresponds to the branches of the complex log.
The value $z_{[0]}$ obtained for $k=0$ is called the principal value.
Note: the principal value is not necessarily the "simplest" one, it happens that some other $z_{[k]}$ is simpler (i.e. real for instance) for a value of $k\neq 0$.
|
H: What are the criteria for a family of languages to qualify as a type of language in the Chomsky hierarchy?
The Chomsky hierarchy consists of several types of languages: regular, CFL, CSL, and r.e.
What are the criteria for a family of languages to qualify as a type of language in the Chomsky hierarchy?
All four types of language families in the Chomsky hierarchy are (full) trios.
Is being a trio a necessary and/or sufficient condition for forming a type of languages in the Chomsky hierarchy?
All four types of language families in the Chomsky hierarchy are (full) AFLs.
Is being an AFL a necessary and/or sufficient condition for forming a type of languages in the Chomsky hierarchy?
Thanks.
AI: The Chomsky hierarchy is a particular division of context-free grammars into exactly four categories using three successively more restrictive conditions. By extension, these grammar categories define language categories (i.e. the set of languages for which there exists at least one grammar of Type I for values of I from 0 to 3). The properties of these four sets of languages, which are interesting, are a consequence of the three restrictions on the grammars which generate them, rather than being a motivation for the categorization.
It's not an extensible hierarchy, although Chomsky's 1958 paper On Certain Formal Properties of Grammars, which laid the foundation for the study of these grammatical types, ends with the suggestion that "it seems particularly important to try to arrive at some characterization of the languages of these various types and of the languages that belong to one type but not the next lower type in the classification," where a footnote attached to the word "types" begins "and several other types". (The paper is well worth reading if you are interested in the field.)
That is, Chomsky was not satisfied with this particular hierarchy of restrictions on grammars, because it failed to provide a useful category of grammars which could be used to parse human languages, but nonetheless the particular hierarchy was worth studying.
|
H: Lie algebra of Lorentz Group $O(1,3)$
Let
$$
O(1,3)=\{A\in GL_4(\mathbb R):A^TgA=g\}
$$
where $g$ is the diagonal matrix with $1$ on the first diagonal entry, and $-1$ on the other diagonal entries. I want to show that the Lie algebra consists of matrices $X$ such that $gXg=-X^T$. As I've already shown the spin homomorphisms $SL_2(\mathbb C)\to SO(1,3)_e$, I know that the Lie algebra of $O(1,3)$ is isomorphic to that of $SL_2(\mathbb C)$, which are traceless $2$-by-$2$ complex matrices. However, I don't think this is going to help particularly (except for a dimensional check). I was thinking of using properties of the exponential map, as we are looking for matrices $X$ such that
$$
(e^{tX})^T g e^{tX}=g.
$$
Now, I believe $(e^{tX})^T=e^{tX^T}$, which means that we have
$$
ge^{tX}g=e^{-tX^T},
$$
which almost seems to be $gXg=-X^T$. I'm not sure how to proceed, though.
AI: Let's calculate the tangent space at the identity matrix $I$. Matrix $V\in \mathrm{End}(\mathbb R^4)$ is in the tangent space if and only if
$$
\lim_{t\to 0} \frac{(I + tV)^T g (I + tV) - g}{t} = 0.
$$
This is exactly $V^Tg + gV = 0$. Multiplying by $g$ from the right and using $g^2=I$, you get $$V^T + gVg= 0.$$
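A quick numerical illustration of both directions (a sketch assuming numpy and scipy are available):

```python
import numpy as np
from scipy.linalg import expm

g = np.diag([1.0, -1.0, -1.0, -1.0])
A = np.random.default_rng(0).normal(size=(4, 4))

X = A - g @ A.T @ g                      # this X satisfies g X g = -X^T
print(np.allclose(g @ X @ g, -X.T))      # True: X is in the Lie algebra

M = expm(0.3 * X)                        # exponentiate an algebra element
print(np.allclose(M.T @ g @ M, g))       # True: exp(tX) satisfies A^T g A = g
```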
|
H: If the integral of a continuous function vanishes for every compact set, the function is identically zero.
Let's consider the continuous function
$$
f:\mathbb{R}^n\to \mathbb{R}
$$
Is it true that if $$\int_C fdx=0$$ for every compact set $C$ then the function is $f\equiv 0$?
I would say it is true, but I wouldn't know how to prove it. Any hint would be appreciated
AI: Yes, it is true. Suppose otherwise. More precisely, suppose that $f(y)>0$ for some $y\in\Bbb R^n$. So, since $f$ is continuous, for some $r>0$ we have $f(x)>\frac{f(y)}2$ for every $x\in\overline{D_r(y)}$. Therefore,$$\int_{\overline{D_r(y)}}f\geqslant\frac{f(y)}2\operatorname{vol}\left(\overline{D_r(y)}\right)>0,$$which is impossible, since $\overline{D_r(y)}$ is compact.
The case in which $f(y)<0$ is similar.
|
H: There exists no $M_n(\mathbb{C})$ module of dimension coprime to $n$
Let $m,n\in\mathbb{N}:\gcd(m,n)=1$ prove there is no $M_n(\mathbb{C})$-module $V$ s.t. $\dim_{\mathbb{C}}V=m$
I think that since $V$ is an $M_n(\mathbb{C})$-module we should have that $\dim_{\mathbb{C}}V\mid n$ and thus we are done but is that really correct? If so how can I prove it?
AI: The ring $A = M_n(\mathbb C)$ is simple, which implies that there is a unique simple left $A$-module $M$, and every left $A$-module is a direct sum of copies of $M$. In this case, one can take $M = \mathbb C^n$ with the natural action of $A$.
So yes, $\dim_{\mathbb C} V$ is divisible by $\dim_{\mathbb C} M = n$.
|
H: Let $f(u)=\tan(u)$ and $g(x)=x^8$
I have solved the following:
$$f'(u)=\sec^2(u)$$
$$g'(x)=8x^7$$
However, these two have been giving me issues. This is my progress so far:
$$f(g(x))=\tan(u)^8$$
$$f'(g(x))=\sec^{10}(u)$$
Where did I go wrong here? Also, how would I go about in solving the following (I would just want the first step, not the solution, so I can work it out myself):
$$(f \circ g)'(x)$$
UPDATE - Ty, $\sec^{10}(u)$ was just a guess. Could you let me know where I went wrong here?
AI: When you write $\tan(x)^8$ it's not clear whether you mean $\tan(x^8)$ or $(\tan x)^8.$
But if $f(x)= \tan x$ and $g(x) = x^8,$ then $f(g(x)) = \tan (x^8).$
The chain rule is differentiation by substitution.
\begin{align}
& y = \tan u, & & u = x^8 \\[8pt]
& \frac{dy}{du} = \sec^2 u, & & \frac{du}{dx} = 8x^7
\end{align}
And then:
\begin{align}
\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx} & = \big(\sec^2 u\big)\cdot 8x^7 \\[10pt]
& = \big( \sec^2 (x^8)\big)\cdot 8x^7.
\end{align}
|
H: If $f:\mathbb{R}^d\to\mathbb{R}^d$ is a homeomorphism, then $\lim_{\| x \| \to \infty} \| f(x) \| = +\infty$
If $f:\mathbb{R}^d\to\mathbb{R}^d$ is a homeomorphism, then $\lim_{\| x \| \to \infty} \| f(x) \| = +\infty$.
My attempt:
I need to show that $\forall M\in\mathbb{R}: \exists R>0: \forall x\in\mathbb{R}^d:\| x\| \ge R \Rightarrow \| f(x)\| \ge M$. Suppose that this is not the case, i.e. $\exists M: \forall R: \exists x\in\mathbb{R}^d: \| x\| \ge R \quad\& \quad \| f(x)\|< M.$ Pick such an $M\in\mathbb{R}$. Then $$\forall n\in\mathbb{N}: \exists x_n: \|x_n\|\ge n \quad \&\quad \| f(x_n)\|<M.$$ Then we have that $x_n\not\to 0$. I thought of using the bijectivity of $f$, i.e. $\exists y_n\in\mathbb{R}^d: f(x_n)=y_n \iff x_n = f^{-1}(y_n)$. Then we would have that $\| f^{-1}(y_n)\|\ge n \quad \& \quad \| y_n\| < M$.
I want to show that $f^{-1}(x_n)\not\to f^{-1}(0)$ as this would imply that $f^{-1}$ is not continuous in $0$, a contradiction. How can I do this?
Thanks.
AI: Starting from here:
$\forall n\in\mathbb{N}. \exists x_n: \|x_n\|\ge n \quad \&\quad \| f(x_n)\|<M$
I would use the fact that every sequence which lies in a compact set of $\mathbb{R}^d$ has a convergent subsequence. Notice this applies to the sequence $f(x_n)$, which lies in the compact set $\{x: ||x|| \leq M\}$. Let $(f(x_{n_m}))_{m=1}^\infty$ be such a subsequence, with $f(x_{n_m}) \to L$ as $m \to \infty$.
By continuity of $f^{-1}$, we must have $f^{-1}(f(x_{n_m})) = x_{n_m} \to f^{-1}(L)$ as $m \to \infty$. But this is a contradiction, since $||x_{n_m}|| \to \infty$.
|
H: Proving that $zJ''(z)+J'(z)+zJ(z)=0$
In my complex analysis book there is the following problem that I'm having some trouble solving:
Consider the function:
$$J(z)=\sum_{n\geq0} \frac{(-1)^n}{(n!)^2} \left(\frac{z}{2}\right)^{2n}$$
1 - Determine the radius of convergence of this power series.
2 - Show that:
$$zJ''(z)+J'(z)+zJ(z)=0$$
I already solved number one and got that the radius of convergence is $+\infty$, but I'm having some trouble with the second one
My approach:
Lets rewrite:
$$J(z)=\sum_{n\geq0} \frac{(-1)^n}{(n!)^2} \left(\frac{z}{2}\right)^{2n}=\sum_{n\geq0} \frac{(-1)^n}{2^{2n}(n!)^2} z^{2n}$$
We can define a sequence $\alpha_n$ to be:
$1$, if $n = 0$
$0$, if $n$ is odd
$\frac{(-1)^{\frac{n}{2}}}{\left(\left(\frac{n}{2}\right)!\right)^2 2^{n}}$, if $n$ is even
Then we end up with:
$$J(z)=\sum_{n \geq 0}\alpha_n z^n$$
So now we know that:
$$zJ(z)=\sum_{n \geq 0}\alpha_n z^{n + 1}$$
$$J'(z)=\sum_{n \geq 1}n\alpha_n z^{n - 1}=\sum_{n \geq 2}n\alpha_n z^{n - 1}$$
$$zJ''(z)=\sum_{n \geq 2}n(n - 1)\alpha_n z^{n - 1}$$
So if we sum all these 3 we end up with:
$$
zJ''(z)+J'(z)+zJ(z)= $$
$$\sum_{n \geq 2}n(n - 1)\alpha_n z^{n - 1} + \sum_{n \geq 2}n\alpha_n z^{n - 1} + \sum_{n \geq 0}\alpha_n z^{n + 1}=$$
$$=\sum_{n \geq 2} n^2\alpha_n z^{n - 1} + \sum_{n \geq 0}\alpha_n z^{n + 1}=$$
$$=\sum_{n \geq 1} (n + 1)^2\alpha_{n + 1} z^{n} + \sum_{n \geq 1}\alpha_{n - 1} z^{n}=$$
$$=\sum_{n \geq 1} \left[(n + 1)^2\alpha_{n + 1} + \alpha_{n - 1}\right]z^{n}$$
But I don't know how to proceed now. How can I prove that this is equal to $0$?
AI: Introducing the $\alpha_n$ makes for a lot of extra work. Just compute directly:
Note that $zJ(z) = \sum_{n \ge 0} 2 {(-1)^n \over (n!)^2} ({z \over 2})^{2n+1}$
$J'(z) = \sum_{n \ge 0} (n+1) {- 1 \over (n+1)^2} {(-1)^n \over (n!)^2} ({z \over 2})^{2n+1}$,
$zJ''(z) = \sum_{n \ge 0} 2(n+1)(n+{1 \over 2}) {- 1 \over (n+1)^2} {(-1)^n \over (n!)^2} ({z \over 2})^{2n+1}$.
Hence the coefficient of $({z \over 2})^{2n+1}$ in $zJ(z)+J'(z)+zJ''(z)$ is
${(-1)^n \over (n!)^2}(2 -{1 \over n+1} - {2 (n+{1 \over 2}) \over n+1} ) = 0$.
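(This $J$ is in fact the Bessel function $J_0$.) A direct numerical check of the identity from partial sums (a small Python sketch; the test points are arbitrary):

```python
import math

def J(z, N=40):
    return sum((-1)**n / math.factorial(n)**2 * (z/2)**(2*n) for n in range(N))

def Jp(z, N=40):    # term-by-term derivative of the series
    return sum((-1)**n / math.factorial(n)**2 * n * (z/2)**(2*n - 1)
               for n in range(1, N))

def Jpp(z, N=40):   # term-by-term second derivative
    return sum((-1)**n / math.factorial(n)**2 * n * (2*n - 1) / 2 * (z/2)**(2*n - 2)
               for n in range(1, N))

for z in (0.5, 1.7, 3.0):
    print(z, z*Jpp(z) + Jp(z) + z*J(z))   # ~ 0 up to rounding error
```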
|
H: Is $x*y=x$ associative and/or commutative on $\Bbb Z$?
Let $*:\mathbb{Z}\times\mathbb{Z}\rightarrow\mathbb{Z}$ on the integers by the formula $x*y=x$ for any $x,y\in\mathbb{Z}$. Decide whether $*$ is associative and/or commutative.
To the best of my understanding this means I'm taking a Cartesian product $(x,y)$ and sending it to $x$. I think this would be both associative and commutative since it appears this only happens when the pair is $(x,0)$ with $(0,0)$.
Associativity means $(x*y)*z=x*(y*z)$.
By this, if $x=(x,0), y=(0,0)$, then $z$ must also be $(0,0)$ by necessity which clearly is associative.
Commutativity means $x*y=y*x$.
If $x=(x,0)$, then $y$ must be $(0,0)$. This is clearly commutative.
I'm struggling to put this into formal terminology. Are my thoughts correct? If not, what adjustments should I consider?
AI: It is associative, because for all $x,y,z\in\mathbb{Z}$, you have that
$$x\star (y\star z)=x\star y=x=(x\star y)\star z$$
It is not commutative, for example because
$$0\star 1=0\neq 1=1\star 0$$
|
H: If a series converges and its sequence is monotonic, why it can not be a monotonic increasing function?
I have been wondering this for a while
If $\sum a_{n}$ converges and $\{a_{n}\}$ is a monotonic sequence,
why must $\{a_{n}\}$ be decreasing? Why can't it be increasing?
AI: Let me answer this question in a more intuitive, informal way. Imagine all $a_n>0$ and $(a_n)$ increasing. Then
$\sum a_n = a_1+ a_2+\dots \ge a_1+a_1+ a_1\cdots =\infty$.
The mathematically correct answer is that for every convergent series, it is necessary that the underlying sequence $(a_n)$ is a null sequence.
If it's increasing and strictly greater than zero, the sequence can't converge to $0$, i.e. the series must diverge.
|
H: Find the range of an irrational function
Find the range of the given function: $y=-\sqrt{3x^2+4x+3}$. I don't know how to find the range of an irrational function. Can someone explain it to me? Also it would be awesome if there are some exercises about this topic! (You can't use calculus here.)
AI: \begin{align}
y
&=-\sqrt{3\left(x+\frac23\right)^2+\frac53}\\
&\le-\sqrt{\frac53}\\
\end{align}
So the range is $y\in\left(-\infty,-\sqrt{\frac53}\right]$.
|
H: Prove that $ \kappa\times\lambda=\lambda $
let $ \kappa<\lambda $ and assume $ \aleph_{0}\leq\lambda $
prove that: $ \kappa\times\lambda=\lambda $
So, my attempt, based on the fact that i already proved for infinite cardinals $ \lambda $ that it follows : $ \lambda\times\lambda=\lambda $
if $ \kappa\neq0 $
choose $ \kappa,\lambda $ to be ordinals (we can choose because the definition cardinal arithmetic is well defined)
then $ \kappa\times\lambda\subseteq\lambda\times\lambda=\lambda $
and therefore $ \lambda\leq\kappa\times\lambda\leq\lambda $
and from Cantor-Bernstein theorem, it follows that $ \kappa\times\lambda=\lambda $
But I'm not sure about the case $\kappa = 0$: does the statement still hold? I mean, what happens when we multiply the cardinal zero by infinity?
Thanks in advance
AI: You are right, the case $\kappa = 0$ requires special attention. In that case we are looking at $0 \times \lambda$, which is just the cardinality of $\emptyset \times \lambda = \emptyset$. So $0 \times \lambda = 0$.
In particular, the inequality $\lambda \leq \kappa \times \lambda$ will not hold if $\kappa = 0$. But for any other $\kappa$ it will hold and your proof is correct.
|
H: Dimension of product of affine varieties is the sum of dimensions of each variety
How do I prove that the dimension of the product of affine varieties is the sum of dimensions of each affine variety?
I am aware that similar questions have been asked in Dimension of product of affine varieties and Dimension of a tensor product of affine rings. In those answers, they seemed to suggest that we consider, by normalization, transcendence bases $\{x_1,...,x_n\}$ and $\{y_1,...,y_m\}$ of $A(X)$ and $A(Y)$, then claim $\{x_1,...,x_n,y_1,...,y_m\}$ is a transcendence basis. I think it is clear that $\{x_1,...,x_n,y_1,...,y_m\}$ is algebraically independent; however, I am not sure why they form a basis.
A complete proof would be appreciated.
AI: Since $A(X)$ is finitely generated and integral over $k[x_1,\cdots,x_n]$, it is finite over this ring, and we may pick some finite collection of generators of this ring extension $a_1,\cdots,a_r$, each of which is algebraic over $k[x_1,\cdots,x_n]$. Then $K(X)=k(x_1,\cdots,x_n,a_1,\cdots,a_r)$. We make a similar construction for $Y$, where $b_1,\cdots,b_s$ are the generators for $A(Y)$ over $k[y_1,\cdots,y_m]$, and note that $K(Y)=k(y_1,\cdots,y_m,b_1,\cdots,b_s)$. Since $A(X\times Y)\cong A(X)\otimes A(Y)$, the elements $x_i,y_j,a_k,b_l$ generate $A(X\times Y)$ and we see that $$K(X\times Y)=k(x_1,\cdots,x_n,y_1,\cdots,y_m,a_1,\cdots,a_r,b_1,\cdots,b_s)$$ which is the same as saying that $$K(X\times Y)=k(x_1,\cdots,x_n,y_1,\cdots,y_m)(a_1,\cdots,a_r,b_1,\cdots,b_s).$$ This extension is algebraic since it is generated by algebraic elements, so the $x_i$ and $y_j$ considered together form a transcendence basis.
|
H: Proof that the union of connected sets is connected when the closure of one intersects the other
The problem says:
Prove that if $(X,d)$ is a metric space and $A, B$ are connected subsets of $ X$, then if $cl(A)\cap B\neq\emptyset$, $A\cup B$ is connected.
To show this, I supposed the contrary, that $A\cup B$ is disconnected and thus, $A\cup B=C\cup D$, with $cl(C)\cap D=\emptyset$ and $cl(D)\cap C=\emptyset$.
Then I can define the function:
$$f:C\cup D\to \{0,1\}$$
$$f(x)=\begin{cases}0 ,& x\in C, \\ 1,& x\in D.\end{cases}$$
Which is continuous and not constant.
If $x\in C$, then $x\in A$ or $x\in B$. WLOG, suppose it's in $A$, then because $f$ is continuous, the image of connected sets is connected and thus, $f(a)=f(x), \forall a\in A$.
Because,$cl(A)\cap B\neq\emptyset$, if $x\in cl(A)\cap B$ there is a sequence $\{x_n\}$ of points in $A$ such that:
$$lim_{n\to\infty}x_n=x$$
And, because $f$ is continuous,
$$lim_{n\to\infty}f(x_n)=f(x)$$
And because each $x_n\in A$, we can conclude that $f(x)=1$. But because $x\in B$, we can also conclude that $f(b)=1\ \forall b\in B$. But then, $f$ is constant, which is a contradiction.
Is this correct, or am I missing something? I feel that when I use the function, I made a leap of logic by regarding $C\cup D$ as a metric space.
AI: It’s correct. In particular, $C\cup D$ is a metric space with the metric that it inherits from $X$. However, the result is true for arbitrary topological spaces, though part of the proof has to be changed slightly. If $x\in B\cap\operatorname{cl}A$, there need not be a sequence in $A$ converging to $x$ if $X$ is not metric, but suppose that $f(x)=0$: then $f^{-1}\left[\left(-\frac12,\frac12\right)\right]$ is an open nbhd of $x$ disjoint from $A$, which is impossible. Thus, $f(x)=1$, and rest of your argument goes through unchanged.
|
H: Example of a group of order $15$ satisfying some conditions
Suppose that a group $G$ of order $15$ has only one subgroup of order $3$ and only one subgroup of order $5$, then I need to prove that $G$ is cyclic.
If I can show that $\exists a\in G$ such that $|a|=15$, then the result is proved.
By Lagrange's theorem, every subgroup of order $3$ and $5$ is cyclic. Hence, let $H=\{e,a,a^2\}$ and $K=\{e,b,b^2,b^3,b^4\}$ be the only subgroups of $G$ of orders $3$ and $5$ respectively, such that $|a|=3, |b|=5$ and that $b\notin \langle a \rangle$
Clearly, $H\cup K=\{e,a,a^2,b,b^2,b^3,b^4\}\subset G$. Suppose that $x\in G$ but $x\notin H\cup K$, then by Lagrange's theorem, $|x|=1,3,5,15$. Clearly, $|x|\ne 1$. If $|x|=3$, then $x\in H$ and similarly if $|x|=5$ then $x\in K$. So $|x|=15$. Hence proved.
I can't believe it, so I am looking for an example of such a group. What will this group look like? I just can't get past the following thought:
$G=\{e,a,a^2, b,b^2,b^3,b^4, ab, ab^2,ab^3,ab^4,a^2b,a^2b^2,a^2b^3,a^2b^4\}\tag{1}$
Will the group $G$ look like as in $(1)$? If yes, then
$(A)$ how can it be cyclic.
$(B)$ How to ensure closure i.e., whether for example $(ab)^2\in G$ or not?
Thanks for your time.
AI: Well, yes, you can certainly form the group $C_3 \times C_5$ which has the elements that you write above. The point is that... this group is cyclic! And it is generated by the product of the generators.
The reason is that $3$ and $5$ are coprime, and that is actually a necessary condition for this to happen (this = "the direct product of two cyclic groups being cyclic"): in fact, take $C_a = \langle x \rangle$ and $C_b = \langle y \rangle$. Since the generators commute in $G=C_a \times C_b$, the order of $xy$ divides the least common multiple of the orders of $x$ and $y$: if $q$ is a multiple of both $a$ and $b$, then $x^q=y^q=1$ and hence $(xy)^q = 1$ (because, again, they commute).
But if the orders are coprime... the least common multiple is $ab$, the product of the orders! Moreover, if $(xy)^q=1$ then $x^q=y^{-q}\in C_a\cap C_b=\{1\}$, so $a\mid q$ and $b\mid q$, hence $ab\mid q$: the order of $xy$ is exactly $ab$. Since $G$ has order $ab$, it follows that all elements of $G$ are powers of $xy$. In other words, $G$ is cyclic.
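A tiny computational illustration in the additive model $\mathbb{Z}_3\times\mathbb{Z}_5$ of $C_3\times C_5$ (a sketch, not part of the proof):

```python
def order(a, b, m=3, n=5):
    # order of the element (a, b) in Z_m x Z_n (written additively)
    k, x, y = 1, a % m, b % n
    while (x, y) != (0, 0):
        x, y = (x + a) % m, (y + b) % n
        k += 1
    return k

print(order(1, 1))   # 15: the "product" of the two generators already generates G
```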
|
H: Let $A\in M_{n\times n}(\textbf{F})$. Then a scalar $\lambda$ is an eigenvalue of $A$ if and only if $\det(A - \lambda I_{n}) = 0$.
Proposition
Let $A\in M_{n\times n}(\textbf{F})$. Then a scalar $\lambda$ is an eigenvalue of $A$ if and only if $\det(A - \lambda I_{n}) = 0$.
MY ATTEMPT
We say that $\lambda$ is an eigenvalue of $A$ iff $\lambda$ is an eigenvalue of $L_{A}:\textbf{F}^{n}\to\textbf{F}^{n}$ defined by $L_{A}v = Av$.
On the other hand, $\lambda$ is a eigenvalue of $L_{A}$ iff $(A - \lambda I_{n})v = 0$.
Since $v\neq 0$, it happens iff $\ker(L_{A - \lambda I_{n}})\neq\{0\}$.
This, by its turn, happens iff $L_{A-\lambda I_{n}}$ is not invertible, that is to say, $A - \lambda I_{n}$ is not invertible.
Finally, we arrive at the desired restriction: $\det(A - \lambda I_{n}) = 0$.
My concerns
I am mainly concerned with the wording of my proof. Could someone point out any theoretical flaw or missing step? Maybe I am overcomplicating things. Please let me know. Any contribution is appreciated.
AI: Given that $\lambda$ is an eigenvalue of $A,$ there exists a nonzero vector $v$ such that $Av = \lambda v,$ from which it follows that $(A - \lambda I_n) v = Av - \lambda v = 0.$ But this says exactly that $\ker(A - \lambda I_n)$ is nontrivial (since it contains the nonzero vector $v$), hence $A - \lambda I_n$ is not invertible.
Conversely, if $\det(A - \lambda I_n) = 0$ for some scalar $\lambda,$ then there exists a nonzero vector $v$ in $\ker(A - \lambda I_n).$ By definition, we have that $0 = (A - \lambda I_n) v = Av - \lambda v$ so that $Av = \lambda v.$ QED.
|
H: probability with two variables - Taking balls out of the basket
There are 3 balls in a basket: white, red and black. Three people choose one ball after the other, with replacement.
$X$ is the number of distinct colors that got chosen.
$Y$ is the number of people that chose white.
I need to calculate $P(X+Y\leq 3\mid X-Y\geq 1)$.
So I know that I need to start from the fact that each ball is drawn with probability $1/3$, since the draws are with replacement.
I'm getting confused about how to compute $Y$. Can I say it has a sort of uniform distribution and maybe try that? Meaning that the expected value is $(a-b)/2$?
AI: We can make a probability table of joint outcomes of $(X, Y)$. Note $X \in \{1, 2, 3\}$, and $Y \in \{0, 1, 2, 3\}$. For $X = 1$, there are only the three outcomes $$(w,w,w), (r,r,r), (b,b,b).$$ For $X = 2$, we have $\binom{3}{2}(2^3 - 2) = 18$ outcomes:
$$(r, r, w), (r, r, b), (r, w, r), (r, w, w), (r, b, r), (r, b, b), \\
(w, r, r), (w, r, w), (w, w, r), (w, w, b), (w, b, w), (w, b, b), \\
(b, r, r), (b, r, b), (b, w, w), (b, w, b), (b, b, r), (b, b, w).$$
For $X = 3$, we have $3! = 6$ outcomes, which are the permutations of $r, b, w$ in some order. The total is $3^3 = 27$.
When $X = 1$, we have either $Y = 3$ with probability $1/3$ or $Y = 0$ with probability $2/3$.
When $X = 2$, we have $Y = 0$, $Y = 1$, $Y = 2$ each with probability $1/3$.
When $X = 3$, we have $Y = 1$ with probability $1$.
So we have
$$\begin{array}{c|cccc}
\Pr[X = x, Y = y] & 0 & 1 & 2 & 3 \\
\hline
1 & \frac{2}{27} & 0 & 0 & \frac{1}{27} \\
2 & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & 0 \\
3 & 0 & \frac{2}{9} & 0 & 0
\end{array}$$
The rest is simply conditioning. Select those outcomes for which $X - Y \ge 1$, and among those, tabulate the probabilities for which $X + Y \le 3$. then divide by the sum of the probabilities that you considered.
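The conditioning can be finished (and checked) by brute force over all $27$ outcomes (a Python sketch):

```python
from itertools import product
from fractions import Fraction

num = den = Fraction(0)
for draw in product("rwb", repeat=3):   # 27 equally likely outcomes
    X = len(set(draw))                  # number of distinct colors
    Y = draw.count("w")                 # number of people who chose white
    if X - Y >= 1:
        den += Fraction(1, 27)
        if X + Y <= 3:
            num += Fraction(1, 27)
print(num, den, num / den)   # 14/27, 20/27, so the conditional probability is 7/10
```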
|
H: Why does the characterization of Gaussian primes really work?
Citing https://en.wikipedia.org/wiki/Gaussian_integer :
A Gaussian integer $a + bi$ is a Gaussian prime if and only if either:
one of $a, b$ is zero and absolute value of the other is a prime number of the form $4n + 3$ (with $n$ a nonnegative integer)
or both are nonzero and $a^2 + b^2$ is a prime number (which will not be of the form $4n + 3$).
This statement implies the following:
If $a^2 + b^2$ is not prime ($a,b \ne 0$), then there are 2 complex numbers $a_1 + b_1i$ and $a_2 + b_2i$, such that:
$$
a_1^2 + b_1^2 \ne 1
$$
$$
a_2^2 + b_2^2 \ne 1
$$
$$a + bi = (a_1 + b_1i)(a_2 + b_2i)$$
$$\Rightarrow |a + bi| = |a_1 + b_1i|\,|a_2 + b_2i|$$
$$\Rightarrow \sqrt{a^2 + b^2} = \sqrt{a_1^2 + b_1^2}\sqrt{a_2^2 + b_2^2}$$
$$\Rightarrow (a^2 + b^2) = (a_1^2 + b_1^2)(a_2^2 + b_2^2)$$
If $p$ is a prime of the form $4n + 1, n > 0$, then there are 4 integers $a_1, b_1, a_2, b_2$, such that $p^2 = (a_1^2 + b_1^2)(a_2^2 + b_2^2)$
How do we prove these two statements?
EDIT 1
I must misunderstand something. For example, let us take $45 = 3^2 + 6^2$, but I do not think $45$ can be broken as a product of sum of squares. What am I missing?
EDIT 2
As people commented:
$45 = (1^2 + 2^2)(0^2 + 3^2)$
If $p$ is a prime such that $p = 4n + 1$ for some $n > 0$, then there are integers $u,v$ such that $p = u^2 + v^2$, which trivially resolves the second item in my question. But then, how do we prove that any prime of this form is a sum of two squares?
AI: Both of these statements can be deduced from the unique factorization theorem for Gaussian integers, which can in turn be deduced from the fact that $\mathbb{Z}[i]$ is a Euclidean domain (see https://en.wikipedia.org/wiki/Gaussian_integer#Euclidean_division for instance).
To prove the first statement, suppose $a,b\in\mathbb{Z}\setminus\{0\}$ and $a^2+b^2$ is not prime in $\mathbb{Z}$. If $a+bi$ were a Gaussian prime, then $a-bi$ would also be a Gaussian prime (since complex conjugation is an automorphism of $\mathbb{Z}[i]$), and so $$a^2+b^2=(a+bi)(a-bi)$$ would be the unique (up to units and reordering) prime factorization of $a^2+b^2$ in $\mathbb{Z}[i]$. In particular, up to units, the only factors of $a^2+b^2$ in $\mathbb{Z}[i]$ are $1,a\pm bi,$ and $a^2+b^2$ But since $a^2+b^2$ is not prime in $\mathbb{Z}$, it has a nontrivial integer factor $c$. This is a contradiction, since $c$ cannot be associate to any of $1,a\pm bi,$ or $a^2+b^2$. (Note that we use the assumption that $a,b\neq 0$ to conclude that $c$ cannot be associate to $a\pm bi$.)
To prove the second statement, suppose $p$ is prime in $\mathbb{Z}$ and has the form $4n+1$. Then $-1$ is a square mod $p$ (since group of units mod $p$ is cyclic of order $4n$ and thus has an element of order $4$), so there is some $a\in\mathbb{Z}$ such that $a^2+1$ is divisible by $p$. Over $\mathbb{Z}[i]$, we can factor $a^2+1=(a+i)(a-i)$. If $p$ were prime in $\mathbb{Z}[i]$, then $p$ would have to divide either $a+i$ or $a-i$. But this is impossible, since then $p$ would divide the imaginary part $\pm 1$.
(Note that it actually follows that $p$ itself is a sum of squares. Indeed, if $p=(a_1+b_1i)(a_2+b_2i)$ is a nontrivial factorization, then $p^2=(a_1^2+b_1^2)(a_2^2+b_2^2)$ and the only possibility for these factors is $a_1^2+b_1^2=a_2^2+b_2^2=p$ since $p$ is prime.)
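The parenthetical fact is easy to check by brute force for small primes $p\equiv 1 \pmod 4$ (a Python sketch):

```python
from math import isqrt

def two_squares(p):
    # find u, v with p = u^2 + v^2 (exists for every prime p = 4n + 1)
    for u in range(isqrt(p) + 1):
        v = isqrt(p - u*u)
        if u*u + v*v == p:
            return u, v

for p in (5, 13, 17, 29, 41, 97, 101):
    print(p, two_squares(p))
```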
|
H: Is $f(x)=\sin x$ integrable?
I'd like to know if the following reasoning to prove that $f(x)=\sin x$ is not integrable is correct, or what's the mistake I'm making.
Consider the following definition and Corollary taken from Folland's Real Analaysis:
Definition: Consider a measure space $(X,M, \mu)$. If $f:X \to \Bbb R$, we say that $f$ is integrable if both $\int_X f^+$ and $\int_X f^-$ are finite. It is clear that $f$ is integrable iff $\int_X |f|<\infty$ since $|f|=f^+ + f^-$.
Corollary 2.2: If $X$ and $Y$ are topological spaces, every continuous $f:X\to Y$ is $(B_X, B_Y)$-measurable, where $B_X$ and $B_Y$ are the Borel $\sigma$-aglebras on $X$ and $Y$, respectively.
Now, the function $f:\Bbb R \to \Bbb R$ defined by $f(x)=\sin x$ is continuous, and by the corollary, it's Borel-measurable.
We have that $\int_{\Bbb R} f = \int_{-\infty}^\infty \sin (x) dx = 0$, but $\int_{\Bbb R} f^+= \int_{\Bbb R} f^-=\infty$, so, $f(x)=\sin x$ is not integrable.
Note: I'm doing this because I'm trying to understand why do we have $\int_X |f|<\infty$ and not $\int_X f<\infty$ in the definition of integrable function, and I'm trying to find a counterexample that $\int_X f<\infty$ doesn't imply $\int_X |f|<\infty$. If someone has a valid counterexample it would be nice to know it.
AI: First of all, $\int_{-\infty}^{\infty} \sin(x)dx=0$ is false. Even if we consider the improper Riemann integral, it is not defined as $\lim_{R\to\infty}\int_{-R}^R f(x)dx$, but as $\lim_{R,M\to\infty}\int_{-M}^R f(x)dx$. So even as an improper Riemann integral $\int_{-\infty}^{\infty} \sin(x)dx$ is divergent.
Anyway, we are discussing the Lebesgue integral here. As you noted correctly the integral is divergent, for example because $f^{+}$ is not integrable.
I'll write a few words about your note. You ask why $\int_X f<\infty$ is not the definition of an integrable function. Well, actually it is, but first of all we have to define what does $\int_X f$ even means for a general measurable function. Thing is, Lebesgue integration is defined in a few steps. Usually it is first defined for non negative measurable functions. After we did that we can define it for a general measurable function $f$: it is called integrable if both integrals $\int_X f^{+}$ and $\int_X f^{-}$ are finite, and then we define $\int_X f=\int_X f^{+}-\int_X f^{-}$. But it is also easy to see that the condition $\int_X f^{+},\int_X f^{-}<\infty$ is equivalent to the condition $\int_X |f|<\infty$ (remember, we already defined the Lebesgue integral for non-negative functions), so we get an equivalent definition of $f$ being integrable that way.
In other words, it is true that $\int_X f$ is a finite number if and only if $\int_X |f|$ is.
|
H: Vectors and Cross Product in 3D
I set the vector V as (a,b,c). I know that you have to multiply out the two vectors, so that is what I did. I then got multiple equations which I used to find the values of a, b, and c. The answer that I got was (2,2,-5), however, that solution only works for the equation on the bottom. I don't know how else to do this problem. Any ideas/answers?
AI: Since $(a,b,c)\times(1,0,-3)=(-3b,3a+c,-b)$, you should solve the system$$\left\{\begin{array}{l}-3b=-6\\3a+c=11\\-b=-2.\end{array}\right.$$You will get that $b=2$ and $c=11-3a$. And, in order to have $(a,2,11-3a).(1,-5,1)=-7$, $a$ shall have to be equal to $4$. So, the answer is $(4,2,-1)$.
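Reading the target values $(-6,11,-2)$ and $-7$ off the answer above (the original problem statement is in a figure that is not shown here), one can verify the result numerically (a numpy sketch):

```python
import numpy as np

v = np.array([4, 2, -1])
print(np.cross(v, [1, 0, -3]))   # [-6 11 -2]
print(np.dot(v, [1, -5, 1]))     # -7
```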
|
H: How to calculate $\int \frac{dx}{\cos(x) + \sin(x)} $?
I did it by the method of integration by parts, with
$$
u=\frac{dx}{\cos(x) + \sin(x)},\quad dv=dx$$
so
$$
\int \frac{dx}{\cos(x) + \sin(x)} =
\frac{x}{\cos(x)+\sin(x)}
- \int \frac{x(\sin(x)-\cos(x))}{(\cos(x)
+ \sin(x))^{2}}
$$
Where,
$$\int \frac{x(\sin(x)-\cos(x))}{(\cos(x) + \sin(x))^{2}}\;dx
= \int \frac{x \sin(x)}{1+2\sin(x)\cos(x)}dx - \int \frac{x \cos(x)}{1+2\sin(x)\cos(x)}dx ,
$$
I have not managed to solve those two integrals that were expressed, really appreciate if you can help me.
AI: $$\int \frac{dx}{\cos(x) + \sin(x)} $$
$$=\int \frac{dx}{\sqrt2\left(\sin (x)\frac{1}{\sqrt2}+\frac{1}{\sqrt2}\cos(x) \right)} $$
$$=\int \frac{dx}{\sqrt2\sin \left(x+\frac{\pi}{4} \right)} $$
$$=\frac1{\sqrt2}\int \csc \left(x+\frac{\pi}{4} \right)\ d\left(x+\frac{\pi}{4} \right) $$
Using the formula $\int\csc\theta\, d\theta=\ln\left|\tan\left(\frac{\theta}{2}\right)\right|+C$,
$$=\frac1{\sqrt2}\ln\left|\tan\left(\frac{x+\frac{\pi}{4}}{2}\right)\right|+C$$
$$=\frac1{\sqrt2}\ln\left|\tan\left(\frac x2+\frac{\pi}{8}\right)\right|+C$$
|
H: Equivalence relation on the complement of a subspace.
I'm trying to solve this problem: Let $V$ be a real vector space with dimension $n$ and $S$ a subspace of $V$ with dimension $n-1$. Define an equivalence relation $\equiv$ on the set $V\setminus S$ by $u\equiv v$ if the line segment
$$L(u,v)=\{(1-t)u+tv: t\in [0,1]\}$$
satisfies that $L(u,v)\cap S=\emptyset$. Prove that $\equiv$ is an equivalence relation and it has exactly two equivalence classes.
I proved that $\equiv$ is reflexive and symmetric since for $u,v\in V\setminus S$ we have that $L(u,u)=\{u\}$ and $L(u,v)=L(v,u)$. But I couldn't prove that $\equiv$ is transitive. I saw with some examples in $\mathbb{R}^3$ using lines that the hypothesis of $\dim S=n-1$ is necessary for this but in the general case I don't know how to use this. And for the last part I think that for every $v\in V\setminus S$ the unique two equivalence classes are $[v]$ and $[-v]$. Could you please give me some suggestions for this? Thanks.
AI: Let $\{e_1,\ldots,e_n\}$ be a basis of $V$ such that $\{e_1,\ldots,e_{n-1}\}\subset S$. For $x\in V$, let $x_k$ be the $k$-th component of $x$ in that basis. Then we have $V\setminus S = \{x\in V \mid x_n \neq 0\}$ and for $x,y\in V\setminus S$ we have
$$x\equiv y \iff tx_n+(1-t)y_n\neq0 \text{ for all } t\in [0,1]$$
$$\iff (x_n\gt0\land y_n\gt0) \lor (x_n\lt0\land y_n\lt0) $$
In other words, $x\equiv y \iff $ the $e_n$ components of $x$ and $y$ have the same sign. That is clearly a reflexive, symmetric and transitive relation, and the equivalence classes are just the two sets determined by the sign of that component.
|
H: Computing $\lim_{x\rightarrow 0}{\frac{xe^x- e^x + 1}{x(e^x-1)}}$ without L'Hôpital's rule or Taylor series
This limit really stumped me because I'm not allowed to use L'Hôpital's rule or Taylor series. Please help!
I think the limit is $\frac{1}{2}$, but I don't know how to prove it without L'Hôpital's rule or Taylor series:
$$\lim_{x\rightarrow 0}{\frac{xe^x- e^x + 1}{x(e^x-1)}}$$
AI: Replacing $ x $ by $\color{red}{ -x} $,
$$L=\lim_0\frac{xe^x-e^x+1}{x(e^x-1)}$$
$$=\lim_0\frac{-xe^{\color{red}{-x}}-e^{-x}+1}{-x(e^{-x}-1)}$$
$$=\lim_0\frac{-x-1+e^x}{x(e^x-1)}$$
the sum gives
$$2L=\lim_0\frac{x(e^x-1)}{x(e^x-1)}=1$$
thus
$$L=\frac 12$$
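A numerical sanity check of the value $\frac12$ (a small Python sketch):

```python
import math

f = lambda x: (x*math.exp(x) - math.exp(x) + 1) / (x*(math.exp(x) - 1))
for x in (0.1, 0.01, 0.001, -0.001, -0.01):
    print(x, f(x))   # values approach 0.5 from both sides
```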
|
H: $\lim_{n\rightarrow \infty} E[\min(|X_n-X|,1)]=0\quad\Rightarrow\quad\lim_{n\rightarrow \infty} E[|X_n-X|]=0$
Consider a sequence of measurable functions $X_n$ and $X$ measurable on some probability space $(\Omega,\mathcal{F},P)$.
I want to show, that following holds
$$\lim_{n\rightarrow \infty} E[\min(|X_n-X|,1)]=0\quad\Rightarrow\quad\lim_{n\rightarrow \infty} E[|X_n-X|]=0$$
This statement is intuitively clear, but I fail to find the actual argument. Thanks in advance!
AI: This is not true. Consider $(0,1)$ with Lebesgue measure and let $A_n=(0, 1 -\frac 1 n)$. Let $X_n=\frac 1 n$ on $A_n$ and $n$ on $A_n^{c}$. Then $E X_n \wedge 1 =\frac 1 n P(A_n)+P(A_n^{c}) \leq \frac 2 n \to 0$ but $EX_n \geq nP(A_n^{c}) =1 $ for all $n$.
|
H: proof that $\mathbb{R}=\mathbb{Q}\cup\mathbb{Q}^{'}$
I'm not sure if this is correct or not, I'm trying to find a proof or disproof that $\mathbb{R}=\mathbb{Q}\cup\mathbb{Q}^{'}$, that means, the set of the real numbers is the union of the rational and irrational numbers, if possible I'd like a more explicative answer since I'm not that good in math, thanks in advance!
AI: $\mathbb Q =\{x\in \mathbb R|$ there are integers $n,m$ so that $x = \frac mn\}$.
$\mathbb Q' = \{x\in \mathbb R|$ there are no integers $n,m$ so that $x = \frac mn\}$
So $\mathbb Q\cup \mathbb Q' = \{x\in \mathbb R|$ there are integers $n,m$ so that $x = \frac mn\}\cup \{x\in \mathbb R|$ there are no integers $n,m$ so that $x=\frac mn\}=$
$\{x\in \mathbb R|$ there are integers $n,m$ so that $x=\frac mn$, or there are no integers $n,m$ so that $x=\frac mn\}=$
$\{x\in \mathbb R|$ there are or are not integers $n,m$ where $x=\frac mn\}=$
$\{x\in \mathbb R|x$ can be any real number, whether it can be written as $x=\frac mn$ for some integers $m,n$ or not$\}=$
$\{x\in \mathbb R|x$ is a real number$\}=$
$\{x\in \mathbb R\}=$
$\mathbb R$.
|
H: Continuing previous discussion?
Regarding the following question's answer : Seeking a combinatorial proof of $2n^{n-3} = \sum_{m=1}^{n-1}\binom{n-2}{m-1}m^{m-2}(n-m)^{n-m-2}$
I have some additional questions:
1. Why can't $m$ be $0$ or $n$?
2. Also, what does $\binom{n-2}{m-1}$ refer to? Why are we choosing $m-1$ from $n-2$?
AI: Consider trees on the vertex set $[n]=\{1,2,\ldots,n\}$. Some of these trees contain the edge $\{1,2\}$. Let $T$ be such a tree. Let $V_1$ be the set of vertices that can be reached from vertex $1$ without going through $2$, and let $V_2$ be the set of vertices that can be reached from vertex $2$ without going through $1$; note that $1\in V_1$ and $2\in V_2$, that $|V_1|+|V_2|=n$, and that the subgraphs of $T$ induced by $V_1$ and $V_2$ are both trees.
Let $m=|V_1|$. Then $m\ge 1$, since $1\in V_1$, and $m\le n-1$, since $2\notin V_1$. For a specific value of $m\in\{1,2,\ldots,n-1\}$, how many of these trees are there with $|V_1|=m$? We know that $1\in V_1$, so there must be $m-1$ other members of $V_1$. Moreover, $2\notin V_1$, so these $m-1$ members of $V_1$ must be chosen from the $n-2$ vertices $3,4,\ldots,n$. Thus, there are $\binom{n-2}{m-1}$ ways to choose the remaining $m-1$ vertices in $V_1$. There are then $m^{m-2}$ different trees that can be made with those vertices. Finally, there are $n-m$ vertices in $V_2$, so there are $(n-m)^{n-m-2}$ different trees that can be made with those vertices. Putting the pieces together, we see that there are
$$\binom{n-2}{m-1}m^{m-2}(n-m)^{n-m-2}$$
trees on the vertex set $[n]$ that contain the edge $\{1,2\}$ and have $m$ vertices in $V_1$. Summing over the possible values of $m$, we find altogether
$$\sum_{m=1}^{n-1}\binom{n-2}{m-1}m^{m-2}(n-m)^{n-m-2}$$
trees on the vertex set $[n]$ that include the edge $\{1,2\}$.
Alternatively, we can start with the fact that there are $n^{n-2}$ trees on the vertex set $[n]$. Each of them has $n-1$ edges, so there are altogether $(n-1)n^{n-2}$ edges in these trees. We now count these edges in a different way.
Let $t$ be the number of trees containing the edge $\{1,2\}$; then there are $t$ trees containing any given edge, and there are altogether $\frac{n(n-1)}2$ different possible edges, so the total number of edges in all of these trees must be $\frac{n(n-1)}2\cdot t$. Thus,
$$\frac{n(n-1)}2\cdot t=(n-1)n^{n-2}\;,$$
and therefore $t=2n^{n-3}$. And $t$ is the quantity that we computed in the first part of the answer, so we conclude that
$$2n^{n-3}=\sum_{m=1}^{n-1}\binom{n-2}{m-1}m^{m-2}(n-m)^{n-m-2}\;,$$
as desired.
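As a quick sanity check of the identity (a small case worked by hand, not part of the original argument): for $n=4$ we get $2n^{n-3}=2\cdot 4=8$, while the sum gives
$$\sum_{m=1}^{3}\binom{2}{m-1}m^{m-2}(4-m)^{2-m}=1\cdot 1\cdot 3+2\cdot 1\cdot 1+1\cdot 3\cdot 1=8\;,$$
in agreement. (Here $1^{-1}=1$, consistent with the single tree on one vertex.)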
|
H: Find the recurrence relation for the probability that the number of successes is divisible by $3$
(Feller Vol.1, P.285, Q.21) Let $u_n$ be the probability that the number of successes in $n$ Bernoulli trials is divisible by $3$. Find a recursive relation for $u_n$.
Answer: $u_n = q^n + \sum_{k=3}^n {k-1 \choose 2} p^3 q^{k-3} u_{n-k}$ with $u_0 = 1, u_1=q, u_2 = q^2, u_3 = p^3 + q^3$.
I understand that $q^n$ represents ${n \choose 0}p^0q^n$, but I don't know how to interpret the series.
I tested the answer for the case of $u_6$. Directly, $u_6 = q^6 + {6 \choose 3} p^3q^3 + {6 \choose 6} p^6q^0$, and I checked that the answer works. However, I cannot do more than that. I would appreciate it if you explained how to derive the answer.
AI: Suppose the number of successes in $n$ trials is divisible by $3$. Then, either there are no successes at all (the first term) or there are $3m$ successes for some $m>0$.
Assume that we are in the second case. Then let $k$ be the location of the third success. There are, of course, $n-k$ trials left to go. The probability that the third success occurs at trial $k$ and that the number of successes in the remaining $n-k$ trials is a multiple of $3$ is given by $$\binom {k-1}2p^3q^{k-3}u_{n-k}$$
hence the summands.
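To see the recursion in action in a small case (a check by hand, not from Feller's text): for $n=4$ it gives
$$u_4=q^4+\binom{2}{2}p^3q^0u_1+\binom{3}{2}p^3q^1u_0=q^4+p^3q+3p^3q=q^4+4p^3q\;,$$
which matches the direct count $P(S_4\in\{0,3\})=q^4+\binom{4}{3}p^3q$.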
|
H: Some question about proving $\displaystyle\limsup_{n\to\infty}|\cos{n}|=1$ by using density of $\{a+b\pi|a,b\in\mathbb{Z}\}$
I have seen Proving $\displaystyle\limsup_{n\to\infty}\cos{n}=1$ using $\{a+b\pi|a,b\in\mathbb{Z}\}$ is dense and got this question.
Hagen von Eitzen gave the solution as following:
Pick an integer $n$. By density of $\Bbb Z+\pi\Bbb Z$, there exist $a_n,b_n\in\Bbb Z$ with $\frac 1{n+1}<a_n+b_n\pi<\frac1 n$. If $a_m=a_n$, then $|b_n\pi-b_m\pi|<1$, which implies $b_n=b_m$ and ultimately $n=m$. We conclude that $|a_n|\to \infty$.
As $$\cos|2a_n|=\cos 2a_n=\cos(2a_n+2\pi b_n)>\cos\frac 2n\to 1, $$
the desired result follows.
I was wondering why $|a_n|\to \infty$. Could someone give more details about it? Moreover, is $|a_n|$ increasing to $\infty$?
AI: Hagen von Eitzen shows that $a_m=a_n$ implies $m=n$, so the integers $a_1,a_2,a_3,...\;$are distinct.
Hence for any fixed positive integer $N$, we must have $|a_n| \ge N$ for all but at most $2N-1$ values of $n$: only the $2N-1$ integers $-(N-1),\dots,N-1$ have absolute value less than $N$, and the $a_n$ are distinct.
It follows that $|a_n|$ approaches infinity as $n$ approaches infinity.
However it's not automatic that the sequence $(|a_n|)$ is an increasing sequence.
|
H: Intersecting families
Suppose $A, B, C$ are $3$ subsets of the set $\{1,\ldots,n\}$ where each pair has nonempty intersection. Is there an intersecting family $F$ of subsets of $\{1,\ldots,n\}$ whose cardinality is $2^{n-1}$ and which contains $A$, $B$ and $C$?
AI: HINT: If $A\cap B\cap C\ne\varnothing$, it’s pretty easy to find such an $F$. If not, there is a $3$-element set $\{a,b,c\}\subseteq[n]$ such that $a\in B\cap C$, $b\in A\cap C$, and $c\in A\cap B$. Let $F$ be the family of all subsets of $[n]$ that contain at least two of the points $a,b$, and $c$.
Show that $F$ is an intersecting family.
Use an inclusion-exclusion argument to show that $|F|=2^{n-1}$.
If you get completely stuck, I’ve added a further hint in the spoiler-protected block below; mouse over to see it.
How many subsets of $[n]$ contain both $a$ and $b$? Both $a$ and $c$? Both $b$ and $c$? If you add these three numbers together, how often have you counted each of the subsets of $[n]$ that contains all three of $a,b$, and $c$?
|
H: Computing Binomial Distribution Expectation
How do I compute $E(2^{X}3^{(1-X)})$ given $X \sim \mathrm{Bin}(1,p)$? Note that $X = 1$ with probability $p$ and $0$ otherwise.
AI: You mean that $X$ is an indicator random variable with probability of success equal to $p$? In that case, if $X=1$ the quantity equals $2$, and if $X=0$ it equals $3$. As the first occurs with probability $p$, the final answer is $2p + 3(1-p) = 3-p$.
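For concreteness, here is the same computation written out from the definition of expectation (a routine check using the two-point distribution of $X$):
$$E\left[2^X3^{1-X}\right]=\sum_{x\in\{0,1\}}2^x3^{1-x}\,P(X=x)=2^03^1(1-p)+2^13^0p=3-p\;.$$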
|
H: Is the nullspace of a matrix's transpose equal to the nullspace of its RREF's transpose?
Consider a matrix $ A $ and its RREF $ B $. Are $ Null(A^\intercal) $ and $ Null(B^\intercal) $ equal?
How should I go about this problem?
AI: No. Conceptually, the row reduced echelon form of a matrix $A$ must be in the form of $PA$ for some nonsingular matrix $P$. However, $A^Tx=0$ is not equivalent to $A^TP^Tx=0$.
For a concrete counterexample, consider $A=\pmatrix{0&0\\ 1&0}$. Its RREF is $B=\pmatrix{1&0\\ 0&0}$. However,
$$
A^T\pmatrix{1\\ 0}=\pmatrix{0&1\\ 0&0}\pmatrix{1\\ 0}=0\ne\pmatrix{1&0\\ 0&0}\pmatrix{1\\ 0}=B^T\pmatrix{1\\ 0}.
$$
|
H: Probability of a Uniform Random Variable
Let $X_1, X_2, X_3$ be iid Uniform (0,1) random variables.
How do I find the probability that $X_{\min} = \min[X_1,X_2,X_3]$ is between $0$ and $1/2$?
AI: For the min not to be $\le 1/2$, all three variables have to be $\gt 1/2$. Hence
$P(X_{\min}\le 1/2)=1-P(X_{\min}\gt 1/2)=1-\prod_{k=1}^3 P(X_k\gt 1/2)=1-(1/2)^3=7/8$
|
H: Is it possible to make curvature of sine wave equal to that of a parabola?
Suppose there is a symmetric parabola pointing downwards, and consider only the part above the $x$-axis. Is it possible to make the curvature of a sine wave equal to the curvature of that part, so that the sine wave coincides with the part of the parabola above the $x$-axis?
Sorry if this question makes no sense; please don't close it. If I was not able to make it understandable, I will edit it.
AI: Locally you can make a sine wave and a parabola agree to second order. The Taylor series of the cosine (which is a shifted sine wave) is $1-\frac{x^2}{2!}+\frac {x^4}{4!}-\cdots$. The first two terms make a parabola centered at $0$ with a maximum of $1$. As long as you are close enough to $0$ that the $\frac {x^4}{4!}$ term is negligible, they will agree. The agreement is not exact: the Alpha plot below shows they match very closely out to $\pm \frac 12$, and rather well out to $\pm 1$.
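To address the curvature in the question directly, one can also compare the curvature $\kappa=\frac{|y''|}{(1+y'^2)^{3/2}}$ of the two curves at their peaks (a standard formula; the computation below is only a sanity check added here):
$$y=\cos x:\ \kappa(0)=\frac{|-\cos 0|}{(1+\sin^2 0)^{3/2}}=1\;;\qquad y=1-\frac{x^2}{2}:\ \kappa(0)=\frac{|-1|}{(1+0^2)^{3/2}}=1\;.$$
So the curvatures agree exactly at the vertex, which is another way of saying the curves agree to second order there; away from the vertex they drift apart.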
|
H: Another proof of $\displaystyle\limsup_{n\to\infty}|\cos{n}|=1$
I have seen a proof of $\displaystyle\limsup_{n\to\infty}|\cos{n}|=1$ by using density of $\{a+b\alpha: a,b\in \mathbb{Z}\}$ in $R$, where $\alpha$ is irrational.
Here I give another proof, as follows:
See this article; a special case is that there are two increasing sequences of odd positive integers $(p_n),(q_n)$ such that
$$ \left|\pi - \frac{p_n}{q_n} \right| \leq \frac{1}{q_n^2},\quad n>1.$$
Note that $|\cos (\pi-x) |= |\cos x |$ for $x\in [0,\pi]$, so
$$|\cos \left(q_n\pi - p_n\right)|= |\cos p_n| \geq \cos\frac{1}{q_n} \to 1,$$
and therefore $|\cos p_n|\to 1$.
Is this solution right? Thanks in advance.
AI: Your proposed alternate proof looks good, but for clarity, I would provide a little more detail:$\\[6pt]$
\begin{align*}
&
\left|\pi-\frac{p_n}{q_n} \right|\le \frac{1}{q_n^2}\\[4pt]
\implies\;&
\left|q_n\pi-p_n\right|\le\frac{1}{q_n}\\[4pt]
\implies\;&
0 \le \left|q_n\pi-p_n\right|\le\frac{1}{q_n} < \pi\\[4pt]
\implies\;&
\cos\left(\left|q_n\pi-p_n\right|\right)\ge\cos\Bigl(\frac{1}{q_n}\Bigr)\\[4pt]
\implies\;&
\cos\left(q_n\pi-p_n\right)\ge\cos\Bigl(\frac{1}{q_n}\Bigr)\\[4pt]
\implies\;&
\cos\left(\pi-p_n\right)\ge\cos\Bigl(\frac{1}{q_n}\Bigr)
&&\text{[since $q_n$ is odd]}
\\[4pt]
\implies\;&
-\cos(p_n)\ge\cos\Bigl(\frac{1}{q_n}\Bigr)\\[4pt]
\implies\;&
\cos(p_n)\le -\cos\Bigl(\frac{1}{q_n}\Bigr)\\[4pt]
\implies\;&
\lim_{n\to\infty} \cos(p_n)=-1
&&\text{[since the sequence $(q_n)$ is increasing]}
\\[4pt]
\implies\;&
\lim_{n\to\infty} |\cos(p_n)|=1
\\[4pt]
\end{align*}
|
H: How to prove statement regarding directional derivatives and gradients
The question asked me to prove that
$$\|\nabla f\|^2 = (D_{u}f)^2 + (D_{v}f)^2$$
whenever vectors $u$ and $v$ are perpendicular.
How can I prove this?
AI: Hint:
$$\Vert \nabla f \Vert^2=(\nabla f\boldsymbol{\cdot}\underline{u})^2+(\nabla f \boldsymbol{\cdot}\underline{v})^2$$
Let $\theta$ and $\phi$ be the angles between $\nabla f$ and $\underline{u},\underline{v}$ respectively. Then $$(\nabla f \boldsymbol{\cdot}\underline{u})^2=\Vert \nabla f \Vert^2\Vert\underline{u}\Vert^2 \cos^2(\theta)$$ and $$(\nabla f \boldsymbol{\cdot}\underline{v})^2=\Vert \nabla f \Vert^2\Vert\underline{v}\Vert^2 \cos^2(\phi)$$
Perhaps you can take it from here?
EDIT: It's only true if the two vectors are orthonormal, not merely orthogonal; i.e., $\Vert \underline{u} \Vert=\Vert \underline{v} \Vert =1$.
Continuing on,
$$1=\Vert \underline{u}\Vert^2 \cos^2(\theta) + \Vert \underline{v} \Vert^2 \cos^2(\phi)$$
Since $\underline{u}$ and $\underline{v}$ are orthogonal, $\phi = \theta \pm \pi/2$. WLOG, I'll assume $\phi = \theta + \pi/2$. But, as $\cos^2(x+\pi/2)=1-\cos^2(x)=\sin^2(x)$ (check this),
$$1=\Vert \underline{u}\Vert^2 \cos^2(\theta)+ \Vert \underline{v} \Vert^2 \sin^2(\theta)$$
Which is obviously true as long as $\Vert \underline{u} \Vert=
\Vert \underline{v} \Vert =1.$
|
H: Property of closure relation.
I am reading $\textit{On the Foundations of Combinatorial Theory II. Combinatorial Geometries}$. They give a definition.
A closure relation on a set $S$ is a function $A\mapsto \bar{A}$ defined for all subsets $A\subseteq S$, satisfying
$$
\overline{A}\subseteq S,\quad A\subseteq \overline{A},\quad
A\subseteq \overline{B}\text{ implies }\overline{A}\subseteq \overline{B}
$$
They then write that we can prove that $\overline{A\cap B}=\overline{A}\cap\overline{B}$. I can show that $\overline{A\cap B}\subseteq \overline{A}\cap\overline{B}$, but I cannot figure out the reverse containment. Is this even true in general?
AI: Unless there are some background assumptions that you’ve not stated, the assertion is false. Let $S=\Bbb R$, and for $A\subseteq\Bbb R$ let $\overline{A}$ be the usual topological closure of $A$ in $\Bbb R$. Then $A\subseteq\overline{A}\subseteq\Bbb R$ for all $A\subseteq\Bbb R$, and $A\subseteq\overline{B}$ implies that $\overline{A}\subseteq\overline{B}$, but if we set $A=(0,1)$ and $B=(1,2)$, then
$$\overline{A\cap B}=\overline{(0,1)\cap(1,2)}=\overline{\varnothing}=\varnothing\subsetneqq\{1\}=[0,1]\cap[1,2]=\overline{A}\cap\overline{B}\;.$$
|
H: Continuous Distribution and Probability density function Question
Suppose that $X_1, X_2, X_3$ denote a random sample of size three from a continuous distribution with probability density function (pdf)
$f_X(x) = \frac{1}{\theta}e^{-x/\theta}, \quad x>0.$
Consider the following three estimators for $\theta$: $\hat\theta_1 = X_1$; $\hat\theta_2=\frac{X_1+X_2+X_3}{3} = \bar X$; $\hat\theta_3 =\frac{X_1+2X_2}{3}$. Given that the moment generating function (mgf) of $X$ is
$M_X(t) = \frac{1}{1−\theta t}.$
How do I find $E(X)$ and $E(X^2)$, and determine which estimators for $\theta$ are unbiased?
AI: To derive $\mathsf E[X^n]$ from $M_X(t)$ (aka the $n^{th}$ moment of $X$), you take $\cfrac{\mathrm d^n}{\mathrm dt^n} M_X(t)$ and evaluate the result at $t=0.$
$$\cfrac{\mathrm d}{\mathrm dt} \cfrac{1}{1-\theta t} = \cfrac{\theta}{(1-\theta t)^2}$$
Evaluating at $t=0$ gives:
$$\mathsf E[X] = \theta$$
I'll let you find $\mathsf E[X^2].$ To determine which of the given estimators are unbiased, you should know that an estimator $\hat\theta$ for $\theta$ is unbiased when $\mathsf E[\hat \theta] = \theta$. For example,
$$\mathsf E[\hat\theta_1] = \mathsf E[X_1]= \mathsf E[X] = \theta$$
so $\hat\theta_1$ is an unbiased estimator for $\theta$. Hopefully this helps you with the other two.
|
H: Nested Interval Property and uniqueness
Is the common element contained within $\bigcap\limits_{j = 1}^{\infty} I_j$, where the $I_j$ are nested closed intervals, unique?
AI: Hint: Consider $I_j = [-1/j, \ 1\!+ \!1/j]$
With this in mind, what must be true of nested closed intervals $I_j$ if they are such that $\displaystyle \bigcap_{j=1}^\infty I_j = \{x\}$ for some $x \in \mathbb{R}$?
|
H: Showing $\sum_{k=0}^{n+1} \binom n k \frac{(-1)^k}{(n+k)(n+k+1)} = \sum_{k=0}^{n+1} \binom {n+1} k \frac{ (-1)^k}{n+k}$
The background context is this old MSE question here. Essentially I was trying to write up a proper response for the question, but I'm stuck on one part myself. One goal for the question is to prove the following equality:
$$\sum_{k=0}^{n+1} \binom n k \frac{(-1)^k}{(n+k)(n+k+1)} = \int_0^1x^{n-1}(1-x)^{n+1}dx$$
(At least, based on the assumption the OP's notation is that $C_r := \binom n r$ where $n$ is understood to be fixed. Seems to work right when calculating both via Wolfram.) Per a suggestion in the comments, it seems a good route to go would be expanding $(1-x)^{n+1}$ via the binomial theorem and integrating termwise, and hopefully the expression for the sum "pops out," as it were.
Doing so, we find that
$$\begin{align}
\int_0^1x^{n-1}(1-x)^{n+1}dx &= \int_0^1 x^{n-1} \sum_{k=0}^{n+1} \binom {n+1} k (-1)^k x^{k}dx \\
&= \int_0^1 \sum_{k=0}^{n+1} \binom {n+1} k (-1)^k x^{n+k-1}dx \\
&= \sum_{k=0}^{n+1} \binom {n+1} k (-1)^k \int_0^1 x^{n+k-1}dx \\
&= \sum_{k=0}^{n+1} \left. \binom {n+1} k \frac{ (-1)^k}{n+k} x^{n+k} \right|_{x=0}^1\\
&= \sum_{k=0}^{n+1} \binom {n+1} k \frac{ (-1)^k}{n+k}
\end{align}$$
So far so good, but with one issue. What remains, seemingly, is to show
$$\binom n k \frac{1}{n+k+1} = \binom{n+1}k \tag 1$$
However, these two quantities are unequal. For instance, $n=10,k=5$ give us $63/4 = 462$, pure nonsense. So this suggests something in my work along the way is wrong ... but what? I've checked and double-checked a lot, and I'm not sure where I went wrong in this derivation, or how to move forward. Perhaps I'm overlooking something obvious, but does anyone have any ideas?
One hypothesis was that the original inequality I seek to prove is false, but it seems not. For instance, take $n=15$. Then the sum and integral evaluate to about $2.2182 \times 10^{-10}$ per WolframAlpha, and I tried a few other $n$ besides that. So the issue almost certainly lies in my derivation. But my sum also evaluates to this for the prescribed value of $n=15$... which suggests another possible problem.
Another possibility is that I'm oversimplifying the matter by assuming
$$\sum_{k=0}^{n+1} \binom n k \frac{(-1)^k}{(n+k)(n+k+1)} = \sum_{k=0}^{n+1} \binom {n+1} k \frac{ (-1)^k}{n+k}$$
would imply corresponding terms are equal (which was my motivation for the earlier equality $(1)$ which is a clear dead-end). For clarity, that means I assumed $\sum a_n = \sum b_n \implies a_n = b_n$. While that'd obviously make everyone's life easier, it's definitely not necessarily true. Which, if it's not in this case, I'm not sure where to go, and would like any potential nudges forward on the matter.
AI: $$\begin{align*}
\sum_{k=0}^{n+1}\binom{n}k\frac{(-1)^k}{(n+k)(n+k+1)}&=\sum_{k=0}^{n+1}\binom{n}k\left(\frac{(-1)^k}{n+k}-\frac{(-1)^k}{n+k+1}\right)\\
&=\sum_{k=0}^{n+1}\binom{n}k\frac{(-1)^k}{n+k}-\sum_{k=0}^{n+1}\binom{n}k\frac{(-1)^k}{n+k+1}\\
&=\sum_{k=0}^{n+1}\binom{n}k\frac{(-1)^k}{n+k}+\sum_{k=0}^{n+1}\binom{n}k\frac{(-1)^{k+1}}{n+k+1}\\
&=\sum_{k=0}^{n+1}\binom{n}k\frac{(-1)^k}{n+k}+\sum_{k=1}^{n+1}\binom{n}{k-1}\frac{(-1)^k}{n+k}\\
&=\frac1n+\sum_{k=1}^{n+1}\left(\binom{n}k+\binom{n}{k-1}\right)\frac{(-1)^k}{n+k}\\
&=\frac1n+\sum_{k=1}^{n+1}\binom{n+1}k\frac{(-1)^k}{n+k}\\
&=\sum_{k=0}^{n+1}\binom{n+1}k\frac{(-1)^k}{n+k}
\end{align*}$$
|
H: Definition of a rational number.
In high school, in definition of rational number we used to say,
"A number, which can be written in the form $\frac{p}{q}$, where $p,q(\neq0)\in\Bbb{Z}$ is called rational number."
But now I am realising that this is not a definition but it is a characterisation of rational numbers. Because the above definition starts with "a number", which means we have already defined real numbers before giving this definition.
So how to really define rational numbers? One way is to construct $\Bbb{Q}$ as the quotient field of $\Bbb{Z}$. But this is a long construction. So please give a little clarification as to what the answer should be if one asks "define rational number."
AI: You have to decide where to start, in other words you have to decide what your starting axioms are.
Starting with the real numbers is perfectly acceptable, they have pretty straightforward axioms (in summary: the real numbers are a complete ordered field). And then you can really define rational numbers inside the real numbers: before that you have to first define the integers $\mathbb Z$ inside the real numbers, and once you've done that then the definition of $\mathbb Q$ that you wrote makes perfect sense.
Or you can start with Peano's axioms for the natural numbers, from there build up the integers, and from there build up the rational numbers.
If you really follow each of these through step by step then yes, they are long constructions. If you want a really long construction, start instead from axioms of set theory, next build up the natural numbers, and then continue as before.
|
H: Why is $f(x+h) = 3x+3h-1$ when $f(x)=3x-1$?
$f(x+h) = 3x+3h-1$ when $f(x)=3x-1$
Is there some kind of factoring that gets you $3x+3h-1$? It looks like the $3h$ comes out of nowhere.
AI: $$
f(x) = 3x - 1 \implies f(x+h) = 3(x+h) - 1 = 3x + 3h - 1
$$
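The rule of $f$ applies to its entire input, so wherever the formula has an $x$, the whole expression $x+h$ is substituted. As a quick numerical check (with the made-up values $x=2$, $h=1$):
$$f(2+1)=f(3)=3\cdot 3-1=8\;,\qquad 3x+3h-1=6+3-1=8\;.$$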
|
H: Find all $(x,y)$ pairs : $x,y$ $\in \mathbb{Z}$ such that :- $x^4 - 4x^3 - 19x^2 + 46x = y^2 - 120.$
So here is the Question :-
Find all $(x,y)$ pairs : $x,y$ $\in \mathbb{Z}$ such that :-
$$x^4 - 4x^3 - 19x^2 + 46x = y^2 - 120.$$
What I tried: I factored the LHS and got $$x(x - 2)(x^2 - 2x - 23) = y^2 - 120$$
From here I don't know how to proceed. I can see that $(y^2 - 120)$ has $3$ factors to be broken into, and each of $x, (x - 2), (x^2 - 2x - 23)$ divides $y^2 - 120$, but how will I proceed from here?
Any hints or answers to this problem will be greatly appreciated !!
AI: Here's a solution using what I think is an underappreciated problem solving technique. Note that completing the square (!) shows that
$$
x^4 - 4x^3 - 19x^2 + 46x + 120 \quad \text{is close to} \quad ( x^2 - 2x - 11.5 )^2.
$$
It's not hard then to show that
\begin{align*}
( x^2 - 2x - 11 )^2 - (x^4 - 4x^3 - 19x^2 + 46x + 120) &= x^2-2x+1 > 0 \text{ for } x\ne1, \\
( x^2 - 2x - 12 )^2 - (x^4 - 4x^3 - 19x^2 + 46x + 120) &= -x^2+2x+24 < 0 \text{ for } x\notin[-4,6].
\end{align*}
In particular, $x^4 - 4x^3 - 19x^2 + 46x + 120$ is between the squares of two consecutive integers (and therefore cannot itself be the square of an integer) when $x\notin[-4,6]$.
It is a simple matter to check all values $-4,-3,\dots,6$ to find that $x\in\{-4,-3,-2,1,4,5,6\}$ are the only values that make $x^4 - 4x^3 - 19x^2 + 46x + 120$ a perfect square. (Indeed, the symmetry around $x=1$ would reduce the amount of checking here, if we notice that $x^4 - 4x^3 - 19x^2 + 46x + 120$ is invariant under changing $x$ to $2-x$.)
|
H: Proof Hints: Teichmüller-Tukey Lemma
Synopsis
Why this is NOT a duplicate (as far as I'm aware).
After having read the other posts on StackExchange about this lemma, many of them rely on concepts like well-ordering and other things I haven't learned yet. All I have learned so far is the axiom of choice and basic cardinality with nothing related to partial ordering or finite characters. So I hope this is enough justification for asking another question about this lemma. Please provide me with hints that don't utilize the concepts mentioned above.
Exercise
(Teichmüller-Tukey lemma) Assume that $\mathscr{A}$ is a nonempty set such that for every set $B$, $$B \in \mathscr{A} \Leftrightarrow \text{every finite subset of $B$ is a member of $\mathscr{A}$.}$$ Show that $\mathscr{A}$ has a maximal element, i.e., an element that is not a subset of any other element of $\mathscr{A}$.
What I've Tried
I started with the following attempt to apply Zorn's lemma:
Consider a chain $\mathscr{B} \subseteq \mathscr{A}$. We wish to show
that $\bigcup \mathscr{B} \in \mathscr{A}$. Let $x$ be a subset of
$\bigcup \mathscr{B}$. Then for all $x' \in x$, there exists a set $B
\in \mathscr{A}$ such that $x' \in B$. So $\{x'\} \subseteq B$ and
$\{x'\} \in \mathscr{A}$. Since $\{x'\}$ is a subset of $x$ and $\{x'\} \in \mathscr{A}$, then $x \in \mathscr{A}$ since it satisfies the requirements for $\mathscr{A}$. So $\bigcup \mathscr{B} \in \mathscr{A}$ and Zorn's lemma guarantees a maximal element.
Upon further examination, though, I realized that I haven't necessarily proved that every subset of $x$ was in $\mathscr{A}$, but only all singleton subsets. So I thought I might have to do something with $\bigcup x$ or maybe invoke the form of the axiom of choice that guarantees a choice function. But I didn't know how to proceed and I confused myself quickly. As such, I would appreciate any hints in showing that $\bigcup \mathscr{B} \in \mathscr{A}$ (assuming that Zorn's lemma is the right path towards the solution). Thank you.
AI: You need to use the fact that $\mathscr{B}$ is a chain. You want to show that every finite subset of $\bigcup\mathscr{B}$ is a member of $\mathscr{A}$, so let $F$ be a finite subset of $\bigcup\mathscr{B}$. For each $x\in F$ there is a $B_x\in\mathscr{B}$ such that $x\in B_x$. $\mathscr{B}$ is a chain, and $F$ is finite, so $\{B_x:x\in F\}$ has a maximum element $B_{x_0}$. But then $x\in B_x\subseteq B_{x_0}$ for each $x\in F$, so $F\subseteq B_{x_0}$. Finally, $B_{x_0}\in\mathscr{A}$, so $F\in\mathscr{A}$, as desired.
|
H: Expected value for random variables based on Poisson distribution
I have to calculate the limit of the sequence $E((\frac{X_n-n}{\sqrt{n}})^+)$, where $X_n$ is $\mathrm{Poisson}(n)$ and $\xi^+$ means "maximum of $\xi$ and $0$". I get $e^2/\sqrt{2\pi}$ and I am not sure that this is correct. My calculation is straightforward, using Stirling's formula at the end. It is rather long, so I don't really want to place it here. Perhaps somebody has a better idea?
Thanks in advance!
AI: Hint: Let $(Y_n)$ be i.i.d. with the $\mathrm{Poisson}(1)$ distribution. Then $X_n$ has the same distribution as $Y_1+Y_2+\cdots+Y_n$. Now just apply the CLT to find the limit.
You will need the following: if $Z_n \to Z$ in distribution and $EZ_n^{2}$ is bounded, then $EZ_n \to EZ$.
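Following the hint through (a sketch, with $\varphi$ the standard normal density; this continuation is not part of the original answer): by the CLT, $(X_n-n)/\sqrt n\to Z\sim N(0,1)$ in distribution, and since $x\mapsto x^+$ is continuous, the limit is
$$E[Z^+]=\int_0^\infty z\,\varphi(z)\,dz=\int_0^\infty\frac{z}{\sqrt{2\pi}}e^{-z^2/2}\,dz=\frac{1}{\sqrt{2\pi}}\;,$$
so the limit should be $1/\sqrt{2\pi}$ rather than $e^2/\sqrt{2\pi}$.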
|
H: IMO 1992 Problem 6
For each positive integer $\,n,\;S(n)\,$ is defined to be the greatest integer such that, for every positive integer $\,k\leq S(n),\;n^{2}\,$ can be written as the sum of $\,k\,$ positive squares.
a.) Prove that $\,S(n)\leq n^{2}-14\,$ for each $\,n\geq 4$.
Now the solution says for part a):
Representing $n^2$ as a sum of $n^2-13$ squares is equivalent to representing $13$ as a sum of numbers of the form $x^2-1$, ...
I did not get why this is equivalent.
AI: $$\begin{align}
n^2&=\sum_{k=1}^{n^2-13}a_k^2\\
&=\sum_{k=1}^{n^2-13}(a_k^2-1)+(n^2-13)
\end{align}$$
Therefore,
$$13=\sum_{k=1}^{n^2-13}(a_k^2-1)$$
|
H: Name of Math Symbol $"\mapsto"$ in the expression $\mathbf{x} \mapsto A \mathbf{x}$
I do not know the name of the math symbol with the arrow that has a vertical bar at its tail, pointing to the right. Any help identifying it or a resource for finding it would be helpful. I already checked the linear algebra symbols on Wikipedia with no luck. Thanks!
AI: The symbol is essentially "maps to", as pointed out in the comments.
As its name suggests, the "maps to" symbol describes the rule of a function: an input maps to the indicated output. So $\mathbf{x} \mapsto A \mathbf{x}$ denotes the function sending each vector $\mathbf{x}$ to $A \mathbf{x}$.
See https://wumbo.net/symbol/maps-to/ for more details.
|
H: How to derive the cross entropy equation from information theory?
In information theory, suppose a code assigns each symbol $x_i$ the probability $q(x_i) = (1/2)^{l_i}$, where $1/2$ is one bit of information and $l_i$ is the code length in bits. The cross entropy is then derived as $-E_{P}[\ln(q(X))/\ln(2)]$.
I lack experience handling logs, so can you show me how $-E_{P}[\ln(q(X))/\ln(2)]$ is derived from $q(x_i) = (1/2)^{l_i}$?
AI: The change-of-base formula for logarithms states $$\log_ba \times \log_xb=\log_xa$$
Then $$-\mathbb{E}_P\frac{\ln q(X)}{\ln2}=-\mathbb{E}\log_2 q(X)$$
The rest is writing out the expectation explicitly.
Concretely, since $q(x_i)=\frac{1}{2^{l_i}}$, taking $\ln$ we obtain $$\ln q(x_i)=l_i \ln 2^{-1} \implies l_i=-\frac{\ln q(x_i)}{\ln2}.$$ Taking the expectation of both sides with respect to the distribution $P$, the result follows.
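Writing the expectation out explicitly (assuming a finite or countable alphabet $x_1,x_2,\dots$ with $p(x_i)=P(X=x_i)$), the expected code length is exactly the cross entropy:
$$\mathbb{E}_P[l(X)]=\sum_i p(x_i)\,l_i=-\sum_i p(x_i)\frac{\ln q(x_i)}{\ln 2}=-\sum_i p(x_i)\log_2 q(x_i)\;.$$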
|
H: $A$ could've done a job in 6 hours longer, $B$ in 15 hours longer and $C$ in twice the time needed if they work together. How long together?
I am trying to solve this question:
A, B and C do a piece of work together. A could've done it in 6 hours longer, B in 15 hours longer and C in twice the time. How long did it take all three to do the work together?
My attempt:
Let the time for A, B and C to do it be $x$. Then:
A takes $x+6$ hours.
B takes $x+15$ hours.
C takes $2x$ hours.
LCM of $x+6$ and $x+15$ and $2x$ is $2x(x+6)(x+15)$. In this time:
A can do $2x(x+15)$ of work.
B can do $2x(x+6)$ of work.
C can do $(x+6)(x+15)$ of work.
So in total they do $2x(x+15) + 2x(x+6) + (x+6)(x+15)$ work.
$= 5x^2 + 63x + 90$.
I'm not sure what to do next, or if I'm even on the right method. Am I on the right track? Is there a better method? What do I do next?
AI: Express their rates of work as the amount each does in 1 hour:
A does $\frac{1}{x+6}$
B does $\frac{1}{x+15}$
C does $\frac{1}{2x}$
So in 1 hour they do $\frac{1}{x+6} + \frac{1}{x+15} + \frac{1}{2x}$ of the work.
So in $x$ hours they will complete the job; in other words:
$$x\left(\frac{1}{x+6} + \frac{1}{x+15} + \frac{1}{2x}\right) = 1.$$
Solving this, we get $x^2 + 7x - 30 = 0$, so $(x+10)(x-3)=0$.
Since $x$ obviously cannot be negative, $x = 3$.
So 3 hours.
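As a quick check of this answer: with $x=3$, A alone takes $9$ hours, B takes $18$, and C takes $6$, so their combined hourly rate is
$$\frac{1}{9}+\frac{1}{18}+\frac{1}{6}=\frac{2+1+3}{18}=\frac{1}{3}\;,$$
which indeed finishes the job in $3$ hours.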
|