H: Question involving an invertible complex matrix.
Let $A$ be an $n \times n$ invertible matrix with complex entries and write $A = R + iJ$, where $R$ is the real part of $A$ and $J$ is the imaginary part of $A$. Show that there exists a $\lambda_0 \in \mathbb{R}$ such that $R+ \lambda_0J$ is invertible. Furthermore, conclude that if $A$ and $B$ are matrices with real entries and they are similar over $\mathbb{C}$, then they are similar over $\mathbb{R}$.
I've tried to prove the first statement by contradiction, but what can I do if $\det(R+xJ)= 0$ for every $x \in \mathbb{R}$?
Also, I cannot see the connection to the fact that if $A$ and $B$ are similar over $\mathbb{C}$, then they are similar over $\mathbb{R}$.
AI: Consider the complex polynomial $\lambda \mapsto \det(R + \lambda J)$. Note that it is non-zero when $\lambda = i$, so it is not the zero polynomial. Thus it has finitely many $0$s in the complex plane, which means we cannot have $\det(R + \lambda J) = 0$ for all $\lambda \in \Bbb{R}$.
Now, suppose that $A$ and $B$ are real matrices, and $P = R + iJ$ is a complex invertible matrix such that
$$B = P^{-1}AP \iff PB = AP \iff RB + iJB = AR + iAJ.$$
Equating real and imaginary parts,
\begin{align*}
RB &= AR \\
JB &= AJ.
\end{align*}
By the previous exercise, we can find some $\lambda_0 \in \Bbb{R}$ such that $Q = R + \lambda_0 J$ is invertible. We therefore have,
$$QB = RB + \lambda_0 JB = AR + \lambda_0 AJ = AQ \implies B = Q^{-1} AQ,$$
proving real-similarity.
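As a quick sanity check of the first claim, here is a minimal numerical sketch (my own, assuming numpy is available; the matrix below is an arbitrary random example):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random complex matrix is invertible with probability 1.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R, J = A.real, A.imag

# det(R + lambda*J) is a polynomial in lambda; it equals det(A) != 0 at
# lambda = i, so it has finitely many roots and almost every real lambda works.
for lam in np.linspace(-2.0, 2.0, 9):
    print(f"lambda = {lam:+.2f}, det = {np.linalg.det(R + lam * J):+.4f}")
```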
|
H: Show that if $|z| < 1$, then $\displaystyle\frac{z}{1+z} + \frac{2z^{2}}{1+z^{2}} + \frac{4z^{4}}{1+z^{4}} + \frac{8z^{8}}{1+z^{8}} + \ldots$
Show that if $|z| < 1$, then
\begin{align*}
\frac{z}{1+z} + \frac{2z^{2}}{1+z^{2}} + \frac{4z^{4}}{1+z^{4}} + \frac{8z^{8}}{1+z^{8}} + \ldots
\end{align*}
converges.
MY ATTEMPT
Let us start by noticing that
\begin{align*}
\frac{z}{1-z} - \frac{z}{1-z} + \frac{z}{1+z} = \frac{z}{1-z} + \frac{z - z^{2} -z - z^{2}}{1-z^{2}} = \frac{z}{1-z} - \frac{2z^{2}}{1-z^{2}}
\end{align*}
Similarly, we have that
\begin{align*}
\frac{z}{1-z} - \frac{2z^{2}}{1-z^{2}} + \frac{2z^{2}}{1+z^{2}} = \frac{z}{1-z} + \frac{2z^{2} - 2z^{4} - 2z^{2} - 2z^{4}}{1-z^{4}} = \frac{z}{1-z} - \frac{4z^{4}}{1-z^{4}}
\end{align*}
Based on this pattern, it can be proven by induction that the given series is the same as
\begin{align*}
\frac{z}{1-z} = z + z^{2} + z^{3} + \ldots = \sum_{n=1}^{\infty}z^{n}
\end{align*}
which converges, since it is the geometric series inside the circle $|z| < 1$
(we may apply the ratio test, for instance).
Could someone tell me if my approach is correct? Any contribution is appreciated.
AI: Your approach is correct! It can be verified in the following way too.
Consider the partial sums $$S_n=\sum_{j=0}^{n}\frac{2^jz^{2^j}}{1+z^{2^j}}$$
Then $$\sum_{m=0}^{\infty}\frac{2^mz^{2^m}}{1+z^{2^m}}=\lim_{n\rightarrow\infty}S_n$$
Observe (prove by induction on $n$) that $$S_n+\frac{z}{z-1}=\frac{2^{n+1}z^{2^{n+1}}}{z^{2^{n+1}}-1}$$
Now $$\frac{z}{z-1}+\sum_{m=0}^{\infty}\frac{2^mz^{2^m}}{1+z^{2^m}}=\lim_{n\rightarrow\infty}\frac{2^{n+1}z^{2^{n+1}}}{z^{2^{n+1}}-1}=0$$
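A minimal numerical check of this telescoping identity (a sketch in plain Python; the sample point is arbitrary with $|z|<1$):

```python
# Partial sums of sum_j 2^j z^(2^j) / (1 + z^(2^j)) for a sample |z| < 1.
z = 0.4 + 0.3j                 # |z| = 0.5
S = 0
for j in range(12):
    w = z ** (2 ** j)          # z^(2^j) shrinks doubly exponentially
    S += (2 ** j) * w / (1 + w)
print(S, z / (1 - z))          # the two values agree to machine precision
```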
|
H: Does the following series converge or diverge: $\sum_{n=1}^{\infty}\frac{n!}{n^{n}}$?
Which of the following series converge, and which diverge?
$\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^{2}+1}$
$\displaystyle\sum_{n=1}^{\infty}\frac{n!}{n^{n}}$
$\displaystyle\sum_{n=1}^{\infty}\frac{1}{\sqrt{n^{2}+n}}$
MY ATTEMPT
The first one converges due to the comparison test. Indeed, one has
\begin{align*}
\sum_{n=1}^{\infty}\frac{1}{n^{2}+1} \leq \sum_{n=1}^{\infty}\frac{1}{n^{2}}
\end{align*}
where the last series is the $p$-series with $p = 2 > 1$.
The second series does converge due to the ratio test. Indeed, one has
\begin{align*}
\lim_{n\to\infty}\frac{(n+1)!}{(n+1)^{n+1}}\times\frac{n^{n}}{n!} = \lim_{n\to\infty}\left(\frac{n}{n+1}\right)^{n} = \lim_{n\to\infty}\left(1 - \frac{1}{n+1}\right)^{n} = e^{-1} < 1
\end{align*}
Finally, for the third series it suffices to notice that $n^{2} + n \leq 2n^{2}$, from which we conclude that it diverges. Indeed, one has
\begin{align*}
\sum_{n=1}^{\infty}\frac{1}{\sqrt{n^{2}+n}} \geq \frac{1}{\sqrt{2}}\sum_{n=1}^{\infty}\frac{1}{n} \longrightarrow +\infty
\end{align*}
Is the wording of my solutions good? Any comments and contributions are appreciated.
AI: Most of this is correct and well-worded. Where you have used the phrase "harmonic series with $p = 2$", you really should say "$p$-series with $p = 2$". The harmonic series is $\sum_{n=1}^\infty \frac{1}{n}$, whereas $p$-series are series of the form $\sum_{n=1}^\infty \frac{1}{n^p}$.
|
H: Finding minimal sufficient statistic and maximum likelihood estimator
Question:
Let $Y_1, \dots, Y_n$ be a random sample from a distribution with density function
\begin{align}
f_Y(y ; \alpha, \theta) =
\begin{cases}
\alpha e^{- \alpha ( y - \theta)} & y > \theta,\\
0 & y \leq \theta,
\end{cases}
\end{align}
where $\alpha, \theta > 0$ are parameters. Find the minimal sufficient statistic and maximum likelihood estimator for $(\alpha, \theta)$ when neither is known.
Attempt:
1) Minimal sufficient statistic
$f_{\boldsymbol{Y}}(\boldsymbol{y}; \alpha, \theta) = \alpha^n e^{-\alpha(\sum y_i - n\theta)} I_{(\theta, \infty)}(y_{(1)})$ where $y_{(1)}$ is the first order statistic and $I$ is an indicator function. According to the factorization theorem, $(\sum y_i, y_{(1)})$ is sufficient for $(\alpha, \theta)$.
To prove minimal sufficiency I need to show that given samples $\boldsymbol{y}$ and $\boldsymbol{x}$, the ratio of the joint densities $\frac{f_{\boldsymbol{Y}}(\boldsymbol{y}; \alpha, \theta)}{f_{\boldsymbol{Y}}(\boldsymbol{x}; \alpha, \theta)}$ involves neither $\alpha$ nor $\theta$ $\Leftrightarrow (\sum y_i, y_{(1)}) = (\sum x_i, x_{(1)})$.
\begin{align}
\frac{f_{\boldsymbol{Y}}(\boldsymbol{y}; \alpha, \theta)}{f_{\boldsymbol{Y}}(\boldsymbol{x}; \alpha, \theta)} &= \frac{\alpha^n e^{-\alpha(\sum y_i - n\theta)} I_{(\theta, \infty)}(y_{(1)})}{\alpha^n e^{-\alpha(\sum x_i - n\theta)} I_{(\theta, \infty)}(x_{(1)})}\\
&= e^{-\alpha (\sum x_i - \sum y_i)} \frac{I_{(\theta, \infty)}(y_{(1)})}{I_{(\theta, \infty)}(x_{(1)})}.
\end{align}
For this expression to not involve either $\alpha$ or $\theta$, I need $\sum x_i = \sum y_i$. My problem is that I don't see why I need $y_{(1)} = x_{(1)}$. As far as I can tell, $y_{(1)}, x_{(1)} > \theta$ is good enough.
Where am I going wrong?
2) Maximum likelihood estimator
For $\alpha, \theta > 0$,
\begin{align}
L(\alpha, \theta ; \boldsymbol{y}) &= \alpha^n e^{-\alpha(\sum y_i - n\theta)}\\
\Rightarrow \ell (\alpha, \theta ; \boldsymbol{y}) &= n \log \alpha - \alpha \big( \sum y_i - n\theta \big)\\
\frac{\partial \ell}{\partial \alpha} &= \frac{n}{\alpha} - \big( \sum y_i - n\theta \big).
\end{align}
Setting $\frac{\partial \ell}{\partial \alpha} = 0$, I get $\hat{\alpha} = \frac{n}{\sum y_i - n \theta}$.
To find $\hat{\theta}$, I look at $L(\alpha, \theta ; \boldsymbol{y}) = \alpha^n e^{-\alpha \sum y_i} e^{\alpha n \theta} I_{(\theta, \infty)}(y_{(1)})$ and I see that $L$ gets larger as $\theta$ increases from $0$ right until $\theta = y_{(1)}$, at which point $L = 0$. Since the MLE is defined to be a supremum, I conclude that $\hat{\theta} = y_{(1)}$.
So the maximum likelihood estimator for $(\alpha, \theta)$ is $\big( \frac{n}{\sum y_i - n \theta}, y_{(1)}\big)$.
Is this the correct MLE?
Thank you.
AI: It's more or less correct, but some very minor issues...
to verify minimality, your ratio is
$$\frac{e^{-\alpha \sum_ix_i}\times\mathbb{1}_{(0;x_{(1)})}(\theta)}{e^{-\alpha \sum_iy_i}\times\mathbb{1}_{(0;y_{(1)})}(\theta)}$$
This ratio is independent of $(\alpha;\theta)$ iff
$$
\begin{cases}
\sum_ix_i=\sum_iy_i \\
x_{(1)}=y_{(1)}
\end{cases}$$
Considering the profile likelihood,
$$\hat{\theta}=x_{(1)}$$
Then, substituting $\hat{\theta}$ for $\theta$, you get
$$\hat{\alpha}=\frac{n}{\sum_i[x_i-x_{(1)}]}$$
Thus
$$(\hat{\alpha};\hat{\theta})=(\frac{n}{\sum_i[x_i-x_{(1)}]};x_{(1)})$$
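A simulation sketch of these estimators (my own check, assuming numpy; the parameter values are arbitrary): drawing a large sample from the shifted exponential should recover $(\alpha,\theta)$.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, theta, n = 2.0, 3.0, 100_000     # arbitrary true parameters

# Y = theta + Exp(rate alpha) has the density in the question.
y = theta + rng.exponential(scale=1 / alpha, size=n)

theta_hat = y.min()                     # first order statistic y_(1)
alpha_hat = n / (y - theta_hat).sum()   # n / sum_i (y_i - y_(1))
print(theta_hat, alpha_hat)             # close to (3.0, 2.0)
```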
|
H: Given cups that are $\frac12$, $\frac13$, $\frac14$, $\frac15$, $\frac18$, $\frac19$, $\frac1{10}$ full, can we pour to get a cup $\frac16$ full?
There are seven cups, $C_1$, $C_2$, $\ldots$, $C_7$ and they have the same capacity $V$.
Initial:
Water of $C_1$ occupies $\frac{1}{2}V$
Water of $C_2$ occupies $\frac{1}{3}V$
Water of $C_3$ occupies $\frac{1}{4}V$
Water of $C_4$ occupies $\frac{1}{5}V$
Water of $C_5$ occupies $\frac{1}{8}V$
Water of $C_6$ occupies $\frac{1}{9}V$
Water of $C_7$ occupies $\frac{1}{10}V$
We may pour all the water from one cup into another if it does not overflow, or pour from one cup into another until the latter is full. Can we, after some number of pourings, have a cup whose water occupies $\frac{1}{6}V$?
This is my attempt:
Suppose cups A and B contain amounts of water $a$ and $b$ respectively (as fractions of $V$), where $0\le a,b\le 1$. If we pour water from cup A into cup B, one of the following happens:
If $a+b\le 1$, then after pouring, cup A is empty and cup B contains $a+b$.
If $a+b> 1$, then after pouring, cup A contains $a+b-1$ and cup B is full.
This is as far as I've gotten, and I'm stuck. Can you help me past this point?
AI: Whenever you pour a cup containing $a$ into a cup containing $b$, you are either left with an empty cup and a cup containing $a+b$, or a full cup and a cup containing $a + b - 1$. Either way, the action leaves you with a cup holding $(a+b) \bmod 1$ water. That means if we want to have $1/6$ left, we need to find a combination of cups that adds up to $1/6 \pmod 1$.
A straightforward way is to put them all on a common denominator, namely $360$. Then we have $180$, $120$, $90$, $72$, $45$, $40$, and $36$, and we need these to add up to $60$ or $420$. Clearly this can't be done, so it's not possible.
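The "clearly" can be confirmed by brute force; here is a short exhaustive check (my own sketch in Python, not part of the original argument):

```python
from itertools import combinations

cups = [180, 120, 90, 72, 45, 40, 36]   # the seven amounts, in units of V/360

# A combination works iff its sum is 60 mod 360, i.e. 60 or 420, since the
# total is only 583. No subset qualifies, so 1/6 V is unreachable.
hits = [c for r in range(1, 8)
        for c in combinations(cups, r) if sum(c) % 360 == 60]
print(hits)   # []
```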
|
H: If $W=\{x \in R^4|x_3=x_1+x_2,x_4=x_1-x_2\}$ show that $W$ is or is not a subspace
If $W=\{x \in \mathbb R^4|x_3=x_1+x_2,x_4=x_1-x_2\}$ show that $W$ is or is not a subspace
I would imagine the vector $x = (a,b,c,d) \in W$, and to show that something is a subspace it has to be closed under addition and scalar multiplication.
Closed under multiplication
$(kx) \in W$, therefore, it is closed under scalar multiplication
Closed under addition
let $y \in W$ as well where $y=(a_1,b_1,c_1,d_1)$, then $x+y=(a+a_1,b+b_1,c+c_1,d+d_1)$
and $x-y=(a-a_1,b-b_1,c-c_1,d-d_1)$
if we let $\alpha=x+y$ and $\beta=x-y$, then $\alpha+\beta=2X$ which is true
Since it is closed both under addition and multiplication I conclude it is a subspace.
But I feel that I am making an error in this verification and would like some help
AI: I think for addition you have to do this: given $x=(x_1, x_2, x_1+ x_2, x_1-x_2)$ and $y=(y_1, y_2, y_1+ y_2, y_1-y_2)$. Then $$x+y=(x_1+y_1, x_2+y_2, x_1+x_2+y_1+y_2, x_1-x_2+y_1-y_2)\\=(x_1+y_1, x_2+y_2, (x_1+y_1)+(x_2+y_2), (x_1+y_1)-(x_2+y_2))\in W$$
|
H: What is the basis for subspace: $W=\{x \in R^4|x_3=x_1+x_2,x_4=x_1-x_2\}$
What is the basis for subspace: $W=\{x \in \mathbb{R}^4|x_3=x_1+x_2,x_4=x_1-x_2\}$?
I previously posted a similar question regarding showing whether this is a subspace but now I wish to find what the basis is.
I know if we have a linear combination of linearly independent vectors we can have the basis.
So since
$$X=\langle x_1,x_2,x_1+x_2,x_1-x_2\rangle=x_1\langle 1,0,1,1\rangle+\ x_2\langle 0,1,1,-1\rangle,$$
would the basis be $\{\langle 1,0,1,1\rangle,\langle 0,1,1,-1\rangle\}$ with dimension of subspace $W$ being $2$? Or would it be $4$?
AI: Your work is correct but incomplete. You should show that those two vectors, $\langle 1, 0, 1, 1 \rangle$ and $\langle 0, 1, 1, -1 \rangle$ are linearly independent, as you have only shown that they span $W$. This is not too hard, as they have zeroes in different positions.
Indeed, let $a\langle 1, 0, 1, 1 \rangle + b\langle 0, 1, 1, -1 \rangle = 0$. Then
$$0 = \langle a, 0, a, a \rangle + \langle 0, b, b, -b \rangle = \langle a, b, a + b, a - b \rangle,$$
so $a = b = 0$ and they are linearly independent, and so $\dim W = 2$. Also, I'd like to point out that saying "the basis" is bad usage. There will be many, many bases for $W$. For example, scale up both vectors by $2$. It would be more proper to say "a basis".
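A one-line check of the independence (a sketch assuming numpy): the matrix with these two vectors as rows has full rank.

```python
import numpy as np

B = np.array([[1, 0, 1, 1],
              [0, 1, 1, -1]])
print(np.linalg.matrix_rank(B))   # 2, so the vectors are linearly independent
```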
|
H: Abstract algebra olympiad problem for university student
I am looking for an abstract algebra problem book to prepare for olympiads as a university student; any recommendations?
AI: One such book is "Putnam and Beyond":
https://link.springer.com/book/10.1007/978-0-387-68445-1
|
H: Couldn't understand a step in solving Homogenous Linear Recurrence relations
I was reading a Wiki on Brilliant.org regarding linear recurrence relations. They have mentioned that, "note that if two geometric series satisfy a recurrence, the sum of them also satisfies the recurrence".
And I am stuck there! How can we prove the above statement formally? In addition to this, how do we prove that $u_{n} = c_{1}u_{n-1}+...+c_{k}u_{n-k} \implies u_{n} = \alpha_{1}r_{1}^{n}+\alpha_{2}r_{2}^{n}$?
AI: It’s just a matter of checking the algebra. Suppose that the recurrence is
$$x_n=c_1x_{n-1}+c_2x_{n-2}+\ldots+c_kx_{n-k}\;,\tag{1}$$
and that $x_n=ar^n$ and $x_n=bs^n$ both satisfy it, meaning that
$$ar^n=c_1ar^{n-1}+c_2ar^{n-2}+\ldots+c_kar^{n-k}$$
and
$$bs^n=c_1bs^{n-1}+c_2bs^{n-2}+\ldots+c_kbs^{n-k}\;.$$
Let $u_n=ar^n+bs^n$. Then
$$\begin{align*}
u_n&=ar^n+bs^n\\
&=(c_1ar^{n-1}+c_2ar^{n-2}+\ldots+c_kar^{n-k})+\\
&\quad+(c_1bs^{n-1}+c_2bs^{n-2}+\ldots+c_kbs^{n-k})\\
&=c_1(ar^{n-1}+bs^{n-1})+c_2(ar^{n-2}+bs^{n-2})+\ldots+c_k(ar^{n-k}+bs^{n-k})\\
&=c_1u_{n-1}+c_2u_{n-2}+\ldots+c_ku_{n-k}\;,
\end{align*}$$
so the sequence $\langle u_n:n\in\Bbb N\rangle=\langle ar^n+bs^n:n\in\Bbb N\rangle$, the sum of the two geometric sequences, also satisfies the recurrence $(1)$.
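A numerical illustration of this superposition principle (my own sketch; the recurrence and coefficients below are arbitrary examples):

```python
# x_n = x_{n-1} + x_{n-2} has geometric solutions r^n and s^n,
# where r and s are the roots of t^2 = t + 1.
r = (1 + 5 ** 0.5) / 2
s = (1 - 5 ** 0.5) / 2
a, b = 2.0, -3.0                            # arbitrary coefficients

u = [a * r ** n + b * s ** n for n in range(10)]
assert all(abs(u[n] - (u[n - 1] + u[n - 2])) < 1e-9 for n in range(2, 10))
print("a*r^n + b*s^n satisfies the recurrence")
```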
|
H: Smooth approximation identities
Let $G$ be a Lie group and let $\mathcal U$ be a neighbourhood base at $1$ of $G$. Does there exist $\{\psi_U:U\in\mathcal U\}$ with the following properties: the support of $\psi_U$ is compact and contained in $U$; $\psi_U\geq 0$; $\psi_U(x^{-1})=\psi_U(x)$; $\int_G\psi_U(x)dx=1$; and $\psi_U$ is smooth?
AI: It seems so. Start with any bump function $\phi_U \geq 0$ around $1$ whose compact support is contained in $U$. Then let $\varphi_U(x) \doteq \phi_U(x)\phi_U(x^{-1})$. This is smooth, also has compact support contained in $U$, and the property $\varphi_U(x^{-1}) = \varphi_U(x)$ holds by construction. Then set $$\psi_U(x) = \frac{1}{\int_G \varphi_U(x)\,{\rm d}x} \varphi_U(x).$$
|
H: Field extension: a conundrum
In the following, I will have a conclusion that is definitely wrong but I don't know why. Your answer and explanation will be greatly appreciated.
Let $Q$ be the field of rational numbers. We know the equation $x^3-2=0$ has three solutions $\sqrt[3]{2}$, $\sqrt[3]{2} \omega$, and $\sqrt[3]{2} \omega^2$, where $\omega=e^{\frac{2\pi i}{3}}$. As such, the equation $x^3-4=0$ also has three solutions $\sqrt[3]{4}$, $\sqrt[3]{4} \omega$, and $\sqrt[3]{4} \omega^2$. None of these solutions belong to $Q$.
Now we do field extensions $Q\left ( \sqrt[3]{4} \right )$ and $Q\left ( \sqrt[3]{4} \omega \right )$. Because $\sqrt[3]{4}$ and $\sqrt[3]{4} \omega$ are both the solutions to $x^3-4=0$, we know that
$$ Q\left ( \sqrt[3]{4} \right ) \simeq Q\left ( \sqrt[3]{4} \omega \right ) .$$
where $\simeq$ represents 1-isomorphism. We then continue to extend both fields by $\sqrt[3]{2}$, we have
$$ Q\left ( \sqrt[3]{4},\sqrt[3]{2} \right ) \simeq Q\left ( \sqrt[3]{4} \omega,\sqrt[3]{2} \right ). $$
Because $\sqrt[3]{4} = (\sqrt[3]{2})^2$, we also have $$\sqrt[3]{4} \omega= (\sqrt[3]{2})^2,$$
due to the isomorphism.
I know the conclusion is wrong, but I don't know what was wrong during the deduction.
AI: $\require{AMScd}$
Great question! Essentially, an abstract isomorphism of fields $K\simeq L$ need not be preserved when adjoining an element of a common extension field, because you need to make things compatible with the embeddings of $K$ and $L$ into the larger extension field. In your case, the element you want to adjoin already belongs to one of the fields, but not the other. So the first field will be unchanged, but the second will not.
What you're doing here is starting with an abstract isomorphism of fields $\sigma : \Bbb{Q}[\sqrt[3]{4}]\to\Bbb{Q}[\sqrt[3]{4}\omega]$, and then embedding both inside a larger extension field $\overline{\Bbb{Q}}$ which contains both. However, you cannot simply adjoin another element of $\overline{\Bbb{Q}}$ to both and expect them to remain isomorphic, because you've embedded these isomorphic fields inside $\overline{\Bbb{Q}}$ in different ways. Indeed, as you've observed, $\Bbb{Q}[\sqrt[3]{4},\sqrt[3]{2}] = \Bbb{Q}[\sqrt[3]{2}]$ (but $\Bbb{Q}[\sqrt[3]{4}\omega,\sqrt[3]{2}] = \Bbb{Q}[\omega,\sqrt[3]{2}]\not\simeq\Bbb{Q}[\sqrt[3]{2}]$).
The way to fix this is the following. The isomorphism $\sigma : \Bbb{Q}[\sqrt[3]{4}]\to\Bbb{Q}[\sqrt[3]{4}\omega]$ can be extended to an isomorphism $\tilde{\sigma} : \overline{\Bbb{Q}}\to \overline{\Bbb{Q}}.$ If you want to adjoin an element to both and preserve the isomorphism, you need to twist this element via the isomorphism $\tilde{\sigma}.$ That is, it will be true that
$$
\Bbb{Q}[\sqrt[3]{4},\sqrt[3]{2}]\simeq\Bbb{Q}[\tilde{\sigma}(\sqrt[3]{4}),\tilde{\sigma}(\sqrt[3]{2})] = \Bbb{Q}[\sqrt[3]{4}\omega,\sqrt[3]{2}\omega^2].
$$
But if you omit this twist, all bets are off! The fields you get may no longer be isomorphic, as in your example. They also could be isomorphic! For example, if $\tilde{\sigma}(\alpha) = \alpha,$ then you will have $$\Bbb{Q}[\sqrt[3]{4},\alpha] \simeq\Bbb{Q}[\sqrt[3]{4}\omega,\alpha].$$
A very concise way to put all of this is that the diagram
\begin{CD}
\overline{\Bbb{Q}} @>\tilde{\sigma}>> \overline{\Bbb{Q}} \\
@AAA @AAA\\
\Bbb{Q}[\sqrt[3]{4}] @>\sigma>> \Bbb{Q}[\sqrt[3]{4}\omega]\\
@AAA @AAA\\
\Bbb{Q}@>\operatorname{id}>>\Bbb{Q}
\end{CD}
commutes, but the diagram
\begin{CD}
\overline{\Bbb{Q}} @>\operatorname{id}>> \overline{\Bbb{Q}} \\
@AAA @AAA\\
\Bbb{Q}[\sqrt[3]{4}] @>\sigma>> \Bbb{Q}[\sqrt[3]{4}\omega]\\
@AAA @AAA\\
\Bbb{Q}@>\operatorname{id}>>\Bbb{Q}
\end{CD}
does not.
Another way to think about what's going on here is that the polynomial $x^3 - 4$ is no longer irreducible over the field $\Bbb{Q}[\sqrt[3]{2}],$ so when you want to add a root of this polynomial to $\Bbb{Q}[\sqrt[3]{2}],$ the field you get will change depending on which root you picked.
Notice that in $\Bbb{Q}[\sqrt[3]{2}][x],$ we have
$$
x^3 - 4 = (x - \sqrt[3]{2}^2)(x^2 + \sqrt[3]{2}^2x + \sqrt[3]{2}^4),
$$
because over an algebraic closure of $\Bbb{Q},$ we have
\begin{align*}
x^3 - 4 &= (x - \sqrt[3]{2}^2)(x - \sqrt[3]{2}^2\omega)(x - \sqrt[3]{2}^2\omega^2)\\
&= (x - \sqrt[3]{2}^2)(x^2 - \sqrt[3]{2}^2\omega x - \sqrt[3]{2}^2\omega^2 x + (\sqrt[3]{2}^2)^2\omega^3)\\
&= (x - \sqrt[3]{2}^2)(x^2 - \sqrt[3]{2}^2x(\omega + \omega^2) + \sqrt[3]{2}^4)\\
&= (x - \sqrt[3]{2}^2)(x^2 + \sqrt[3]{2}^2x + \sqrt[3]{2}^4),
\end{align*}
as $\omega^2 + \omega + 1 = 0.$
Thus, we cannot simply adjoin a root of $x^3 - 4$ to the field $\Bbb{Q}[\sqrt[3]{2}]$. We instead have the isomorphism
$$\Bbb{Q}[\sqrt[3]{2}][x]/(x^3 - 4)\cong\Bbb{Q}[\sqrt[3]{2}]\times\Bbb{Q}[\sqrt[3]{2}][x]/(x^2 + \sqrt[3]{2}^2x + \sqrt[3]{2}^4),$$
so the choice of root you adjoin to $\Bbb{Q}[\sqrt[3]{2}]$ changes which extension field you get.
Edit: To answer the question in your comment, that you want to think about adjoining a root of $x^3 - 2$ to $\Bbb{Q}[\sqrt[3]{4}]$ and not vice versa, I would argue that this is the wrong way to think about the situation. You're starting with two different isomorphic fields $\Bbb{Q}[\sqrt[3]{4}]$ and $\Bbb{Q}[\sqrt[3]{4}\omega]$ and then adjoining an element of a common algebraic closure, but there's no reason for these to remain isomorphic, because your abstract isomorphism between the two is not compatible with their embeddings into $\overline{\Bbb{Q}}$ (without twisting by the extension of your original isomorphism, anyway). This was what I was explaining in the first 3 paragraphs of my answer.
On the other hand, you can start with the same field ($\Bbb{Q}[\sqrt[3]{2}]$) and then try to adjoin different roots of the same polynomial, and this gives you different field extensions for the reason I've explained above. Since adjoining elements can be done in either order, I find this to be more of a philosophically satisfying way to think about the situation.
However, if you really do want to think about starting with $\Bbb{Q}[\sqrt[3]{4}]$ and then adjoining a root of $x^3 - 2,$ it turns out that again, $x^3 - 2$ is not irreducible in $\Bbb{Q}[\sqrt[3]{4}][x]$! In fact, we have
\begin{align*}
\frac{\sqrt[3]{4}^2}{2} &= \frac{\sqrt[3]{16}}{2}\\
&= \sqrt[3]{\frac{16}{8}}\\
&= \sqrt[3]{2},
\end{align*}
and of course $\sqrt[3]{2}^2 = \sqrt[3]{4}.$ Together, these imply that $\Bbb{Q}[\sqrt[3]{4}] = \Bbb{Q}[\sqrt[3]{2}].$ Similarly, $\Bbb{Q}[\sqrt[3]{4}\omega] = \Bbb{Q}[\sqrt[3]{2}\omega^2],$ so both fields already have a particular root of $x^3 - 2.$ But, each field contains a different root of $x^3 - 2,$ so while on the one hand we have
\begin{align*}
\Bbb{Q}[\sqrt[3]{4}][x]/(x^3 - 2)&\cong\Bbb{Q}[\sqrt[3]{4}][x]/(x - \sqrt[3]{2})\times\Bbb{Q}[\sqrt[3]{4}][x]/(x^2+\sqrt[3]{2}x + \sqrt[3]{4})\\
&\cong\Bbb{Q}[\sqrt[3]{4}]\times\Bbb{Q}[\sqrt[3]{4}][x]/(x^2+\sqrt[3]{2}x + \sqrt[3]{4}),
\end{align*}
on the other hand we have
\begin{align*}
\Bbb{Q}[\sqrt[3]{4}\omega][x]/(x^3 - 2)&\cong\Bbb{Q}[\sqrt[3]{4}\omega][x]/(x - \sqrt[3]{2}\omega^2)\times\Bbb{Q}[\sqrt[3]{4}\omega][x]/(x^2 + \sqrt[3]{2}\omega^2 x + \sqrt[3]{4}\omega)\\
&\cong \Bbb{Q}[\sqrt[3]{4}\omega]\times\Bbb{Q}[\sqrt[3]{4}\omega][x]/(x^2 + \sqrt[3]{2}\omega^2 x + \sqrt[3]{4}\omega).
\end{align*}
The rings $\Bbb{Q}[\sqrt[3]{4}][x]/(x^3 - 2)$ and $\Bbb{Q}[\sqrt[3]{4}\omega][x]/(x^3 - 2)$ are of course abstractly isomorphic, but the two factors of the direct product decomposition are not.
|
H: Prove that the set of all positive integers less than $n$ and relatively prime to n form a group under multiplication modulo n
I came across the problem
Prove that the set of all positive integers less than $n$ and relatively prime to n form a group under multiplication modulo n.
Proving the associativity of multiplication modulo n, closure and existence of identity are fairly easy.
But how would we prove that there is an inverse for every element, i.e. $\forall$ $a \in U(n),\ \exists b \in U(n)$ such that $ab \equiv 1 \pmod n$?
MY TRY:
I know that if $\gcd(a,n) = 1$, there exists $x$ such that $ax \equiv 1 \pmod n$. But $x$ should be in $U(n)$ in order to complete the proof.
Also, from the theory of diophantine equations such $x$ is not unique rather if $x_{0}$ is a particular solution then, $x_{0}+nt$ for $t\in \Bbb Z$ is also a solution.
So, we can find some $x$ such that $0\le x \le n-1$ and $ax \equiv 1 \pmod n$. But how would we prove that such $x$ is relatively prime to $n$, i.e. $\gcd(x,n) = 1$?
I got stuck here. Any hint in that direction would be a great help. Other ways of solving the problem are also welcome.
AI: Hint:
You have shown for each $a$ coprime to $n$ there is an $x$ so that $ax=1$ mod $n$.
Towards a contradiction, assume $\gcd(x,n) = d \neq 1$. Then any linear combination $rx+sn$ must be divisible by $d$... What happens if we take $r=a$? Do you see how to finish the proof from here?
I hope this helps ^_^
|
H: Proving that $f\left(\frac{x+y}{2}\right) \le \frac{f(x)+f(y)}{2}$ is convex
I'm trying to prove:
$f$ is a continuous real function defined in $(a, b)$ such that $$f\left(\frac{x+y}{2}\right) \le \frac{f(x)+f(y)}{2}$$ for all $x, y \in (a, b) \implies f$ is convex.
I was also given:
A real function $f$ defined in $(a, b)$ is said to be convex if $$f(\lambda x+(1-\lambda)y) \le \lambda f(x)+(1-\lambda)f(y)$$ whenever $a < x, y < b, 0<\lambda < 1$.
My attempt at a proof: Let $x, y \in (a, b)$ and take $f$ to be a continuous real function defined in $(a, b)$ such that $$f\left(\frac{x+y}{2}\right) \le \frac{f(x)+f(y)}{2}.$$ Now, suppose to the contrary that $\exists x, y \in (a, b), \lambda \in (0, 1)$ such that $$f(\lambda x+(1-\lambda)y) > \lambda f(x)+(1-\lambda)f(y).$$ We claim that the equation above is satisfied if we take $\lambda = 0.5$, that is, we obtain $$f\left(\frac{x+y}{2}\right) > \frac{f(x)+f(y)}{2}$$ which contradicts the definition of $f$.
Can someone please proofread this proof and let me know if it is accurate? If not, please just point out the error(s); please don't bother to suggest a new proof of this problem.
AI: There is a rather serious error: you are concluding from your assumption of non-convexity that there is some $\lambda \in [0, 1]$ and $x, y$ such that
$$f(\lambda x + (1 - \lambda)y) > \lambda f(x) + (1 - \lambda)f(y).$$
This is fine, but then next you assume that this $\lambda = 1/2$. This assumption is erroneous. You can only assume $\lambda \in [0, 1]$, not any specific value of $\lambda$ you wish.
You will absolutely need the continuity assumption too; there are discontinuous counterexamples!
|
H: Let $Y$ be a dense subspace of a metric space $(X, d)$. Prove that if every Cauchy sequence $(y_n)⊂Y$ converges in $X$, then $X$ is complete.
Let $Y$ be a dense subspace of a metric space $(X, d)$. Prove that if every Cauchy sequence $(y_n)⊂Y$ converges in $X$, then $X$ is complete.
Hello, it is difficult for me to use the properties of metric spaces. I would be very grateful if you could help me with this proof. Thanks.
AI: Let $(x_n)$ be a Cauchy sequence in $X$. To prove that $X$ is complete we need to show that $(x_n)$ converges. Since $Y$ is dense in $X$ each element in the sequence $(x_n)$ can be approximated by an element in $Y$. Then we can find a sequence $(y_n)$ in $Y$ such that $d(x_n,y_n)<\frac{1}{n}$ for each $n \in \mathbb{Z}_{>0}$. Since $(x_n)$ is Cauchy and
$$
d(y_m,y_n) \leq d(y_m,x_m)+d(x_m,x_n)+d(x_n,y_n) < \frac{1}{m}+d(x_m,x_n)+\frac{1}{n},
$$
it follows that $(y_n)$ is also Cauchy. Thus, by hypothesis there is an $x \in X$ such that $y_n \to x$ in $X$. Let $\varepsilon >0$ and choose $N$ such that $\frac{1}{n}< \frac{\varepsilon}{2}$ and $d(x, y_n)< \frac{\varepsilon}{2}$ for all $n \geq N$. Then,
$$
d(x,x_n) \leq d(x,y_n)+d(y_n,x_n) < \varepsilon
$$
for all $n \geq N$. This proves that $x_n \to x$ in $X$ and therefore $X$ is complete.
|
H: Does having non-trivial solutions means trivial solution is also included?
If a system of linear equations has nontrivial solutions according to Cramer's rule (i.e. infinitely many solutions), then it means that zero is also one of its solutions (since it has INFINITELY many solutions). Now zero is a trivial solution. So does having nontrivial solutions mean that it also has a trivial solution?
AI: The system $Ax=0$ always has the trivial solution, and $Ax=b$ when $b≠0$ does not.
Having an infinite number of solutions does not necessarily mean that $0$ is one of them; consider the system:
$A=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, $b=[1,0]$
Every $x=[y,1]$ (for every $y$) solves $Ax=b$, thus you have infinitely many solutions. However, $x=[0,0]$ is not a solution.
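The counterexample is easy to verify directly (a sketch assuming numpy):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
b = np.array([1.0, 0.0])

for y in (-1.0, 0.0, 2.5):
    print(A @ np.array([y, 1.0]))   # [1. 0.] each time: x = [y, 1] solves Ax = b
print(A @ np.zeros(2))              # [0. 0.] != b: the trivial x is not a solution
```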
|
H: Generalized Euclid Lemma
Could anyone check my proof? Thanks in advance.
Statement: If $p$ is a prime and $p$ divides $a_{1}a_{2}...a_{n}$ prove that p divides $a_{i}$ for some $i$.
Proof: Suppose $p$ is a prime that divides $a_{1}a_{2}...a_{n}$ but not $a_{j}$ for all $j\ne i$. Let the set of those $j$ be $J$. Then there exist integers $s$ and $t$ such that $1=a_{k1}a_{k2}...a_{kr}s+pt$ where $k1,k2,k3,...,kr$ corresponds distinctly to each element in $J$ and $r$ is the number of elements in $J$. Now if we multiply the equation above with $a_{i}$ to get $a_{i}=a_{1}a_{2}...a_{n}s+pa_{i}t$. As $p$ divides the right-hand side, $p$ also divides $a_{i}$. Here we assume that there is only one such number as if there are two or more, these can all be multiplied and reduced to just one number.
AI: In your proof, you mention $i$, but you don't explain where it comes from. It's not quite a proof by contradiction either, and is a bit confusing.
The generalized Euclid lemma follows from the original Euclid lemma by induction, where the base case is $n = 2$ (Euclid's lemma). Informally, the proof looks like this:
Since $p$ is prime and $p \mid (a_1)(a_2 \ldots a_n)$, it follows by Euclid's lemma that $p \mid a_1$ or $p \mid a_2 \ldots a_n$. Likewise, if $p \mid (a_2)(a_3 \ldots a_n)$, it follows by Euclid's lemma that $p \mid a_2$ or $p \mid a_3 \ldots a_n$. Continuing in this manner, we find that $p \mid a_1$ or $p \mid a_2$ or ... or $p \mid a_n$, as desired.
Can you see how to do this induction proof rigorously?
|
H: What base does the author take when taking the log of both sides?
I am learning the exponential distribution in ThinkStats2 by Allen Downey.
It says that "if you plot the complementary CDF of a dataset that you think is
exponential, you expect to see a function like:
$$
y\approx e^{-\lambda x}
$$
Then, taking the log of both sides yields:"
$$
\log y \approx -\lambda x
$$
My question is: what base does the author take when taking the log of both sides? I guessed that the author takes log base 10, but that does not explain why we get $-\lambda x$ on the right side of the equation.
Could someone explain this?
AI: The author takes the natural logarithm on both sides (base $e=2.7182818\cdots$), since then $\log y \approx \log\left(e^{-\lambda x}\right) = -\lambda x$.
|
H: Find when $\frac{x^5-1}{x-1}$ is a perfect square?
$\textbf{Question:}$ Find when $f(x)=\frac{x^5-1}{x-1}$ is a perfect square, where $x \in \mathbb N \setminus \{1\}$.
I tried up to a certain number and am somewhat convinced that $3$ is the only solution, but I failed to prove that. Here's what I got so far:
If some prime $p \mid f(x)$, then $p \equiv 1 \pmod{5}$ or $p=5$. But I could deduce nothing useful from here so far. Then I tried factoring
$f(x)=(x^2+x+1)^2-x(x+1)^2$ like this, but still failed to make any useful conclusion.
I would appreciate both hints or full solutions.
AI: Hint. We have that $f(2)=31$ and $f(3)=121=11^2$. For any integer $n>3$, show that
$$\left(n^2 + \left\lfloor\frac {n}{2}\right\rfloor\right)^2 < f(n)=\frac {n^5 - 1}{n - 1} < \left(n^2 + \left\lfloor\frac {n}{2}\right\rfloor+1\right)^2.$$
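A brute-force companion to this hint (my own sketch in Python): searching a large range turns up no perfect square other than $f(3)=121$.

```python
from math import isqrt

# (x^5 - 1) // (x - 1) is exact, since x - 1 divides x^5 - 1.
for x in range(2, 10 ** 5):
    f = (x ** 5 - 1) // (x - 1)
    r = isqrt(f)
    if r * r == f:
        print(x, f)   # prints only: 3 121
```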
|
H: Showing that a function $f$ is constant
I am trying to solve the following problem.
$f(z)=u(x,y)+iv(x,y)$ is an analytic function in $D$ ($D$ is connected and open).
If $u, v$ fulfill the relation $G(u(x,y), v(x,y))= 0 $ in $D$ for some function $G:\mathbb{R}^2\to\mathbb{R}$ with the property
$(\frac{\partial G}{\partial x})^2+(\frac{\partial G}{\partial y})^2 > 0$
Show that $f$ is constant.
My attempt:
We have to show that $f'(z)=0$
$(\frac{\partial G}{\partial x})^2+(\frac{\partial G}{\partial y})^2 = (\frac{\partial G}{\partial u})^2(\frac{\partial u}{\partial x})^2 +2\frac{\partial G}{\partial u}\frac{\partial u}{\partial x}\frac{\partial G}{\partial v}\frac{\partial v}{\partial x}+(\frac{\partial G}{\partial v})^2(\frac{\partial v}{\partial x})^2+(\frac{\partial G}{\partial u})^2(\frac{\partial u}{\partial y})^2 +2\frac{\partial G}{\partial u}\frac{\partial u}{\partial y}\frac{\partial G}{\partial v}\frac{\partial v}{\partial y}+(\frac{\partial G}{\partial v})^2(\frac{\partial v}{\partial y})^2$
Since $f$ is analytic, it satisfies the Cauchy Riemann equations, $ \frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}$.
Then,
$(\frac{\partial G}{\partial x})^2+(\frac{\partial G}{\partial y})^2 = (\frac{\partial G}{\partial u})^2(\frac{\partial u}{\partial x})^2 +(\frac{\partial G}{\partial v})^2(\frac{\partial v}{\partial x})^2+(\frac{\partial G}{\partial u})^2(\frac{\partial u}{\partial y})^2 +(\frac{\partial G}{\partial v})^2(\frac{\partial v}{\partial y})^2 > 0$
But because $G(u(x,y), v(x,y))= 0 $ , doesn't this mean that $\frac{\partial G}{\partial u}=0 $ and $\frac{\partial G}{\partial v} = 0$? Then the inequality doesn't seem to make sense. What am I getting wrong? Is this the right approach?
Thanks in advance.
AI: A better way is to try and visualize what's exactly going on behind the scenes. See, the condition in the problem implies that $G$ is differentiable with its total derivative $dG$ being a surjection from $\mathbb{C}$ to $\mathbb{R}$ at each point. Now since $G(f)=0$ we apply the chain rule to get
$dG_{f(z)} \circ \hspace{1mm} df_{z} = 0$ for all $z$ in $D$. If for some $z$, $f'(z)$ is non-zero, then $df_{z}$ is a nontrivial scaled rotation, which is invertible. Hence $dG_{f(z)}(df_z(\mathbb{C})) = dG_{f(z)}(\mathbb{C}) = \mathbb{R}$, a contradiction. Hence for all $z$ in $D$ we have $f'(z)=0$. Since $D$ is open and connected, $f$ is constant.
|
H: Does midpoint-convexity at a point imply midpoint-convexity at a larger point?
Let $f:(-\infty,0] \to [0,\infty)$ be a $C^1$ strictly decreasing function satisfying $f(0)=0$.
Given $c \in (-\infty,0]$, we say that $f$ is midpoint-convex at the point $c$ if
$$
f((x+y)/2) \le (f(x) + f(y))/2,
$$
whenever $(x+y)/2=c$, $x,y \in (-\infty,0]$.
Question: Let $r<s<0$, and suppose that $f$ is midpoint-convex at $r$. Is $f$ midpoint-convex at $s$?
AI: No, not necessarily. Take, for example:
$$f(x) = 1 - (x + 1)^3.$$
Then $f$ is $C^1$, strictly decreasing and midpoint convex at $x = -1$ (since $x^3$ is odd). But, it is not midpoint convex at any point in $(-1, 0)$, as the function is strictly concave on $(-1, 0)$.
|
H: What is the largest number of squares that can be cut by the sides of the triangle in this picture?
I came across a question in a Mathematics Olympiad exercise booklet for primary school students in Australia. The question is presented in the following picture. I can't quite understand what the question is about. Would anyone please help make the question clearer and suggest a solution? Thank you very much in advance.
AI: If you keep vertex $A$ at the lower left corner of the grid, and you allow one vertex of the triangle to be anywhere on the right side of the rectangle enclosing the grid, and the other vertex to be anywhere on the top of the rectangle enclosing the grid, what is the maximum possible number of squares that the edges of the triangle will pass through? You can see in the example figure that the triangle's bottom edge passes through $6$ squares; the top edge passes through another $6$ squares; and the left edge passes through $6$ squares. But for the bottom and top vertices, the squares are counted twice, so this triangle passes through $6 + 6 + 6 - 2 = 16$ different squares. By changing the position of the top and right vertices of the triangle, can you make it pass through more squares?
|
H: Sign of zeros of $\lambda^2+2\lambda+1-a+\frac{ar}{\delta}=0$ without explicit calculation
I am interested in the zeros of this polynomial in $\lambda$:
$$\lambda^2+2\lambda+1-a+\frac{ar}{\delta}=0$$
where $0<a\leq1$, $r<0$ and $\delta>0$.
How to determine the sign of their real parts without explicit calculation?
AI: For the sake of comfort, I will denote $\lambda$ by $z$. The equation is the same as
$$(z+1)^2=a-\frac{ar}{\delta}$$Obviously, $a-\frac{ar}{\delta}> 0$, and hence both roots are real. Now, let's write the roots precisely,$$z_1:=-1+\sqrt{a-\frac{ar}{\delta}}\quad z_2:=-1-\sqrt{a-\frac{ar}{\delta}}$$Clearly, $z_2$ is negative. However, we can't comment on the sign of $z_1$ with the provided info. For instance, when $r=-1,\delta = 3, a=0.5$ we have $z_1<0$ and when $r=-6,\delta=2,a=0.5$ we have $z_1>0$.
|
H: Prove that for every integer $n$, $n^3 \bmod 6 = n \bmod 6$
I will use induction to prove this.
Firstly for $n=1$,
$1^3 \bmod 6 = 1 \bmod 6$
Now we assume that this holds for some $n=k$ and prove that if it holds for $n=k$ it should also hold for $n=k+1$.
$(k+1)^3 \bmod 6 = (k^3+3k^2+3k+1) \bmod 6$
Since it holds for $n=k$,
$k^3 \bmod 6 = k \bmod 6$
$k^3-6p(k)=k-6q(k)$
$k^3-k=6p(k)-6q(k)$
Consider $(k^3+3k^2+3k+1)-6a(k)=6p(k)-6q(k)+k+1+3k^2+3k$
Now we just need to prove that $3k^2+3k$ is always a multiple of 6.
For $k=1$ it holds.
Assume $k=j$ holds. Then for $k=j+1$,
$3(j+1)^2+3(j+1)=3(j^2+2j+1)+3j+3=3j^2+3j+6j+6$
So for $k=j+1$ it is also a multiple. As it holds for $k=1$, then it holds for all positive integers $k$.
Then this means that
$6p(k)-6q(k)+k+1+3k^2+3k=k+1-6b(k)$
And if $(k^3+3k^2+3k+1)-6a(k)=k+1-6b(k)$ this means $(k+1)^3 \bmod 6 = (k+1) \bmod 6$
Is this correct?
AI: An easy approach:
$$n^3-n=(n-1)n(n+1)$$
Among any two consecutive integers, one is even, and among any three consecutive integers, one is a multiple of three. So the product of three consecutive integers is a multiple of $6$. $$\begin{split}n^3-n&\equiv0\pmod6\\n^3&\equiv n\pmod 6\end{split}$$ Done!
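Since $n^3 \bmod 6$ depends only on $n \bmod 6$, one full period suffices to check the claim (a quick sketch in Python):

```python
# n and n^3 leave the same remainder for every residue class mod 6.
print(all(pow(n, 3, 6) == n % 6 for n in range(6)))   # True
```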
|
H: Find the equation of the plane given a line and a point
I'm trying to solve a problem where I have to find the equation of the plane that contains a given straight line and a given point.
In this photo you can see the equation of the straight line and the given point $P = (1,-2,3)$.
The answer to the problem is the last line.
What I tried is this:
transform the straight line into parametric form
find the director parameters of the straight line $V = ( L,m,n)$
find a point of the straight line $Q = ( f, g, h)$
find the vector $PQ$.
do the vector product of the vectors $V$ and $PQ$ that gives me the director parameters of the plane that I'm searching.
The problem is that at this point what I find is that the director parameters of the plane are
$N = ( -\frac{1}{3}, \frac{10}{3}, -\frac{14}{9})$ that are different from the solution ones.
Can anyone show me how to solve this problem so I can check and understand where I'm doing it wrong?
AI: This is a real quick method:
Find the pencil of planes such that $H_{\alpha, \beta}=\alpha (2x-3y+z-3)+\beta (x+3y+2z+1)=0$
(all these are planes that have the line given by the exercise in common).
Then, you have to impose that $P=(1,-2,3) \in H_{\alpha, \beta}$.
This condition gives us $\alpha (2+6+3-3)+ \beta (1-6+6+1)=0$ $\implies$ $\beta= -4\alpha$.
If we now substitute $(\alpha , \beta)=(1,-4)$ in the equation of $H_{\alpha, \beta}$ we have $2x-3y+z-3-4x-12y-8z-4=0$; multiplying by $-1$,
$$2x+15y+7z+7=0$$
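A quick check of the result (my own sketch with numpy, storing each plane as its coefficient vector $(a,b,c,d)$ for $ax+by+cz+d=0$):

```python
import numpy as np

plane = np.array([2, 15, 7, 7])   # 2x + 15y + 7z + 7 = 0
P = np.array([1, -2, 3, 1])       # P = (1, -2, 3) in homogeneous coordinates
print(plane @ P)                  # 0, so P lies on the plane

# alpha = 1, beta = -4 in the pencil reproduces the plane (up to sign),
# so every point of the given line lies on it as well.
p1 = np.array([2, -3, 1, -3])     # 2x - 3y + z - 3 = 0
p2 = np.array([1, 3, 2, 1])       # x + 3y + 2z + 1 = 0
print(p1 - 4 * p2)                # [-2 -15 -7 -7]
```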
|
H: Rudin Theorem 1:11: understanding why $L \subset S$
Rudin's theorem 1.11 states:
Suppose $S$ is an ordered set with the least-upper-bound property, $B \subset S$, $B$ is not empty, and $B$ is bounded below. Let $L$ be the set of all lower bounds of $B$. Then $\alpha = \sup L$ exists in $S$, and $\alpha = \inf B$. In particular, $\inf B$ exists in $S$.
I am having trouble understanding why $L \subset S$. This has to be the case so that we can invoke the LUB property to conclude that $\sup L$ exists in $S$. After a lot of searching, the answer seems to be that "$S$ is the universe; nothing exists outside of $S$." I'm struggling to understand why that follows from the problem. What if, for example, $S$ is the set $\mathbb{Z}$, $B$ is the positive integers, and $L$ is the set of all lower bounds in $\mathbb{R}$? That is, $L = (-\infty, 1]$. Surely, $L \not\subset S$.
If we went a step further and defined $L$ as the set of all lower bounds of $B$ in $S$, the proof would make much more sense to me. Is that what Rudin, implicitly, means?
AI: Simply the sentence 'Let $L$ be the set of all lower bounds of $B$.' should be implicitly understood as
Let $L$ be the set of all lower bounds $s\in S$ of $B$.
|
H: $\phi_{i}(P)=P\left(x_{i}\right)$, $\psi_{i}(P)=P^{\prime}\left(x_{i}\right)$ for distinct $x_{1}, \ldots, x_{n}$ is basis of$({R}_{2 n-1}[x])^{*}$
I know that the $2n$ evaluation functionals $\varphi_{i}(P)=P\left(x_{i}\right)$ for $2n$ distinct $x_{i}$ form a basis for $(\mathbb{R}_{2 n-1}[x])^{*}$, but I have no idea how to prove it when there are $n$ evaluation functionals and $n$ derivative-at-a-point functionals.
AI: The following is a key observation.
Claim: If $\phi_i(P) = \psi_i(P) = 0$ for all $1 \leq i \leq n$, then $P = 0$.
Proof: Note that if $P$ is such that $\phi_i(P) = \psi_i(P) = 0$, then $(x-x_i)^2$ divides $P$. Thus, since $\phi_i(P) = \psi_i(P) = 0$ for all $i$, the degree $2n$ polynomial $(x-x_1)^2 \cdots (x- x_n)^2$ divides $P$. Because $P$ has degree at most $2n-1$, this can only happen if $P=0$.
With that, the remainder of the proof is entirely linear algebra. Multiple approaches exists; here is one:
Define $\Phi:\Bbb R_{2n-1}[x] \to \Bbb R^{2n}$ by
$$
\Phi(P) = (\phi_1(P),\dots,\phi_n(P),\psi_1(P),\dots,\psi_n(P)).
$$
By the claim above, we see that $\ker \Phi = 0$. Because $\dim(\Bbb R_{2n-1}[x]) = \dim(\Bbb R^{2n})$, we can conclude that $\Phi$ is an isomorphism of vector spaces. It follows that for any vector $c \in \Bbb R^{2n}$, the map $\Phi^*(c) = \Phi_c:\Bbb R_{2n-1}[x] \to \Bbb R$ defined by $\Phi_c(P) = c^T\Phi(P)$ will only be the zero map if $c = 0$. In other words, if $c_1,\dots,c_{2n}$ are such that
$$
c_1 \phi_1 + \cdots + c_n \phi_n + c_{n+1}\psi_{1} + \cdots + c_{2n}\psi_n = 0,
$$
then $c_1 = \cdots = c_{2n} = 0$. Since the functionals are linearly independent and there are $2n$ of them, we conclude that they indeed form a basis.
Alternatively, we could have shown that the $\phi_i,\psi_i$ span the dual space. Equivalently, we want to show that for an arbitrary $\alpha \in (\Bbb R_{2n-1}[x])^*$, there is a $c \in \Bbb R^{2n}$ for which $\Phi^*(c) = \alpha$. This holds because for maps over finite-dimensional spaces, $\Phi^*$ is surjective whenever $\Phi$ is injective, and we showed the injectivity of $\Phi$ above.
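A numerical version of this argument (a hypothetical sketch with numpy; the points are random, hence distinct with probability 1): build the $2n\times 2n$ matrix whose rows are the functionals $\phi_i,\psi_i$ applied to the monomial basis and check that it has full rank.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
x = rng.standard_normal(n)

# Row i: phi_i on the monomials 1, t, ..., t^(2n-1); row n+i: psi_i.
M = np.zeros((2 * n, 2 * n))
for i, xi in enumerate(x):
    for k in range(2 * n):
        M[i, k] = xi ** k                                # t^k evaluated at x_i
        M[n + i, k] = k * xi ** (k - 1) if k > 0 else 0  # (t^k)' = k t^(k-1) at x_i
print(np.linalg.matrix_rank(M))   # 2n = 8, so the functionals form a basis
```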
|
H: If the function $Q(x,y)=ax^2+2bxy+cy^2$, restricted to the unit circle, attains its max at $(1,0)$, then $b=0$.
I feel confused about the following proof. Why did the author introduce $\epsilon$ in parametrizing the unit circle? To trace out the circle, one can simply use the closed interval $[0,2\pi]$. Besides, why did the derivative of $Q$ w.r.t. $t$ have to vanish at $t=0$? After all, by the extreme value theorem, a continuous function defined on a closed interval can attain its extremes at endpoints, points where the derivative vanishes, or points where the derivative is not defined.
AI: Why did the author introduce $\epsilon$ in parametrizing the unit circle?
The purpose of considering the open interval was so that $t = 0$ becomes an interior point of the interval.
Besides, why did the derivative of $Q$ w.r.t. $t$ have to vanish at $t=0$?
Because $t = 0$ is an interior point of the domain and if a local maximum occurs at an interior point, then the derivative must be $0$. (This is not true for the endpoints. Which is why the author considered $\epsilon$.)
After all, by the extreme value theorem, a continuous function defined on a closed interval can attain its extremes at endpoints, points where the derivative vanishes, or points where the derivative is not defined.
The "extreme value theorem" isn't really applicable anymore since the domain isn't compact. (It isn't closed.)
Similarly, it also doesn't make sense to talk about endpoints since every point is an interior point. One can also see that the derivative is indeed defined everywhere here.
|
H: Find all polynomials satisfying $p(x)p(-x)=p(x^2)$
Find all polynomials $p(x)\in\mathbb{C}[x]$ satisfying $p(x)p(-x)=p(x^2)$.
We can see that if $x_0$ is a root of $p$, then so is ${x_0}^2$.
If $0<|x_0|<1$ (or $|x_0|>1$), then we have $|x_0|^2<|x_0|$ (or $|x_0|^2>|x_0|$).
So
repeating this process will give an infinite number of distinct roots, a contradiction.
Hence any root $x_0$ of $p$ must have $|x_0|=0$ or $|x_0|=1$.
Experimenting with lower degree polynomials, we find solutions:
$p(x)=1, 0$
$p(x)=-x, 1-x$
$p(x)=x^2, -x(1-x), (1-x)^2, x^2+x+1$.
Further, we can verify that the general form $f(x)=(-x)^p(1-x)^q(x^2+x+1)^r$ will work. I am unsure if these capture all possible solutions, and if so how do we show it?
I've been able to show that any root $x_0=e^{i\theta}\neq 1$ must satisfy $\theta=\frac{2^n p}{q}\pi$, where $n\geq 1$ and $p,q$ are coprime integers.
AI: Let $p(x)\in\mathbb{C}[x]$ be a polynomial that satisfies the functional equation
$$p(x)\,p(-x)=p(x^2)\,.\tag{*}$$
Clearly, $p\equiv 0$ and $p\equiv 1$ are the only constant solutions. Let us now assume that $p$ is nonconstant. Hence, the set $Z(p)$ of the roots of $p$ is nonempty.
Suppose that $z\in Z(p)$. Then, $z^2\in Z(p)$ by (*). Hence, we have an infinite sequence $z,z^2,z^{2^2},z^{2^3},\ldots$ of elements of $Z(p)$. However, $Z(p)$ must be a finite set. Therefore, $$z^{2^k}=z^{2^l}$$ for some integers $k$ and $l$ such that $k>l\geq 0$. This means either $z=0$, or $z$ is a primitive root of unity of an odd order.
It is easy to show that, if $m$ is a nonnegative integer such that $x^m$ divides $p(x)$ but $x^{m+1}$ does not, then
$$p(x)=(-x)^m\,q(x)\,,$$
where $q(x)\in\mathbb{C}[x]$ also satisfies (*). If $n$ is a nonnegative integer such that $(x-1)^n$ divides $q(x)$ but $(x-1)^{n+1}$ does not, then
$$q(x)=(1-x)^n\,r(x)$$
where $r(x)\in\mathbb{C}[x]$ also satisfies (*). We now have a polynomial $r$ satisfying (*) such that $\{0,1\}\cap Z(r)=\emptyset$. If $r$ is constant, then $r\equiv 1$, making
$$p(x)=(-x)^m\,(1-x)^n\,.$$
Suppose now that $r$ is nonconstant, so $Z(r)\neq\emptyset$. For each $z\in Z(r)$, let $\theta(z)\in\mathbb{R}/2\pi\mathbb{Z}$ be the angle (modulo $2\pi$) such that $z=\exp\big(\text{i}\,\theta(z)\big)$. Define $\Theta(r)$ to be the set of $\theta(z)$ with $z\in Z(r)$. Note that each element of $\Theta(r)$ is equal to $\dfrac{2p\pi}{q}$ (modulo $2\pi$), where $p$ and $q$ are coprime positive integers such that $p<q$ and $q$ is odd. Furthermore, $\Theta(r)$ is closed under multiplication by $2$. Therefore, the set $\Theta(r)$ can uniquely be partitioned into subsets of the form
$$C(\alpha):=\{\alpha,2\alpha,2^2\alpha,2^3\alpha,\ldots\}\,,$$
where $\alpha\in\mathbb{R}/2\pi\mathbb{Z}$. Such a subset of $\Theta(r)$ is called a component.
Here are some examples of components.
If $\alpha=\dfrac{2\pi}{3}$, then $C(\alpha)=\left\{\dfrac{2\pi}{3},\dfrac{4\pi}{3}\right\}$ modulo $2\pi$.
If $\alpha=\dfrac{2\pi}{5}$, then $C(\alpha)=\left\{\dfrac{2\pi}{5},\dfrac{4\pi}{5},\dfrac{6\pi}{5},\dfrac{8\pi}{5}\right\}$ modulo $2\pi$.
If $\alpha=\dfrac{2\pi}{7}$, then $C(\alpha)=\left\{\dfrac{2\pi}{7},\dfrac{4\pi}{7},\dfrac{8\pi}{7}\right\}$ modulo $2\pi$.
If $\alpha=\dfrac{6\pi}{7}$, then $C(\alpha)=\left\{\dfrac{6\pi}{7},\dfrac{10\pi}{7},\dfrac{12\pi}{7}\right\}$ modulo $2\pi$.
For each component $C(\alpha)\subseteq\Theta(r)$, let $$\mu_\alpha(x):=\prod_{\beta\in C(\alpha)}\,\Big(\exp\big(\text{i}\,\beta\big)-x\Big)\,.$$ Observe that $\mu_\alpha$ is a cyclotomic polynomial if and only if $2$ is a generator of the multiplicative group $(\mathbb{Z}/q\mathbb{Z})^\times$, where $\alpha=\dfrac{2p\pi}{q}$ (modulo $2\pi$) for some positive integers $p$ and $q$ with $\gcd(p,q)=1$.
Show that there exist positive integers $\nu_\alpha$ for each component $C(\alpha)$ of $r(x)$ such that
$$r(x)=\prod_{C(\alpha)\subseteq \Theta(r)}\,\big(\mu_\alpha(x)\big)^{\nu_\alpha}\,.$$
For convenience, we let $\Theta(p):=\Theta(r)$.
Therefore,
$$p(x)=(-x)^m\,(1-x)^n\,\prod_{C(\alpha)\subseteq\Theta(p)}\,\big(\mu_\alpha(x)\big)^{\nu_\alpha}\,.$$
Any polynomial $p(x)$ in the form above is always a solution to (*).
|
H: Is there a formula for $\sum^{n}_{k=1}\binom{ n-k }{h }k$?
I'm trying to show $$\sum ^{400}_{m=1}\binom{ 400-m }{ 3 }m \sim 8.4 \times 10^{10}\,,$$ but can't seem to find a binomial coefficient identity.
AI: Hint: Apply the Hockey Stick lemma twice.
$$\sum ^{400}_{m=1} \left[ { 400 -m \choose 3 } \sum_{i=1}^{m} 1 \right] = \sum_{i=1}^{400} \left[\sum_{m=i}^{400}{ 400-m \choose 3 } \right] = \sum_{i=1}^{400} { i \choose 4 } = { 401 \choose 5 }$$
Note: There might be indexing errors as it's late and I'm going to sleep, but you should be able to get the gist.
Yes, this generalizes and can be expressed as an identity.
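Both the identity and the size of the sum can be checked directly (a sketch using Python's math.comb, which returns $0$ when $k>n$):

```python
from math import comb

lhs = sum(comb(400 - m, 3) * m for m in range(1, 401))
print(lhs)                  # 84269339980, about 8.4e10
print(lhs == comb(401, 5))  # True
```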
|
H: Show that if 12 integers are chosen there are always two whose sum or difference is divisible by 20.
Also, prove that this is sharp, i.e., one can pick 11 integers so that the sum or the difference of any two of the chosen integers will never be divisible by 20.
I'm trying to solve this problem using the pigeonhole principle.
AI: Welcome to MSE! For the best response from the MSE community, could you please show your efforts and working next time?
For part 1:
We see that the residues modulo $20$ can be partitioned into the $11$ classes $\{0\},\{10\},\{1,19\},\{2,18\},\ldots,\{9,11\}$.
By the pigeonhole principle, among any $12$ integers two must fall in the same class. If they share a residue, their difference is divisible by $20$; if they carry the two complementary residues of a class, their sum is divisible by $20$. So amongst any $12$ numbers, some two have a sum or difference divisible by $20$.
For part 2
As pointed out by @AlexeyBurdin and @Peter, we can simply use the integers $0$ through $10$, none of which have a difference or sum divisible by $20$, as $10 + 9$ (the largest possible numbers in this set) do not even reach $20$ in their sum.
Edit:
In case you weren't convinced, see this excellent exhaustive code program written by @AlexeyBurdin which proves the 2nd part, and which I thought I'd acknowledge due to its clarity and usefulness.
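For completeness, here is a minimal exhaustive check of both parts (my own sketch; the externally linked program is not reproduced here). For part 1 it suffices to test sets of distinct residues modulo $20$, since two equal residues already give a difference divisible by $20$.

```python
from itertools import combinations

def ok(pair):
    a, b = pair
    return (a + b) % 20 != 0 and (a - b) % 20 != 0

# Part 2: the 11 integers 0..10 contain no bad pair.
print(all(ok(p) for p in combinations(range(11), 2)))   # True

# Part 1: no 12 distinct residues mod 20 avoid a bad pair.
print(any(all(ok(p) for p in combinations(c, 2))
          for c in combinations(range(20), 12)))        # False
```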
|
H: Verify and understand Proof of Path connected implies connected
I was “reading” Pugh's Mathematical Analysis, Chapter 2. There, he defines a metric space $M$ to be path connected if for all points $p, q \in M$ there exists a continuous function $f : [a,b] \to M$ such that $f(a) = p, f(b) = q$.
He then proceeds to prove that path connected implies connected.
Assume that $M$ is path-connected but not connected. Then $M = A ⊔ A^c$ for some proper clopen $A⊂M$. Choose $p∈A$ and $q∈A^c$. There is a path $f :[a,b]→M $ from $p$ to $q$. The separation $f^{pre}(A) ⊔ f^{pre}(A^c)$ contradicts connectedness of $[a, b]$. Therefore M is connected. $\blacksquare$
I don’t get what he means by the last line of his proof.
I also attempted to try it on my own and landed up with a “different proof”.
Assume that $M$ is path-connected but not connected. Then $M = A ⊔ A^c$ for some proper clopen $A⊂M$. Choose $p∈A$ and $q∈A^c$. There is a path $f :[a,b]→M $ from $p$ to $q$. Now let $S = \{ x : f(x) \in A \}$ and let $s = \sup S$.
Now see that there exist sequences $(a_n)$ and $(b_n)$ such that $f(a_n) \in A$ and $f(b_n) \in A^c$, and both sequences converge to $s$, “one from below” and “one from above”. By continuity of $f$ and the fact that $A$ is clopen we can see that $s$ lies in both $A$ and $A^c$, leading to a contradiction. $\blacksquare$
I also want to know if my “proof” is indeed correct, as it seems different from the one given by the author.
AI: Since $f$ is continuous and both $A$ and $A^\complement$ are clopen, $f^{-1}(A)$ and $f^{-1}(A^\complement)$ are clopen subsets of $[a,b]$. But they are proper subsets of $[a,b]$, their intersection is empty and their union is $[a,b]$. This is impossible, because it would mean that $[a,b]$ is disconnected.
Concerning your proof, what happens if $s=b$? How can you then approach $s$ from above? A similar remark applies if $s=a$.
|
H: A Smarter way to solve this system of linear equations?
I am a high school student and when practicing for the SAT I stumbled across this question:
$$
\begin{eqnarray}
−0.2x + by &=& 7.2\\
5.6x − 0.8y &=& 4
\end{eqnarray}
$$
Consider the system of equations above. For what value of $b$ will
the system have exactly one solution $(x,y)$ with $x=2$? Round the
answer to the nearest tenth.
My initial thought was that if the $x$ terms don't cancel each other by elimination then I must find a way to make $y$ cancel out. So I directly put $b = 0.8$ without really thinking about it.
But when I had a look at the answer sheet they solved it by finding the value of y from the second equation then replacing the value y they got in the first equation to get $b$. By doing so they were able to get $b = 0.8$ just like I did.
So my question is: do I really need to follow their time-consuming way, or will my way of solving work for all questions of this type?
AI: Since it says "exactly one solution $(x,y)$ with $x=2$", you can first insert $x=2$:
$$
\begin{eqnarray}
−0.2 \times 2 + by &=& 7.2\\
5.6 \times 2 − 0.8y &=& 4
\end{eqnarray}
$$
The second equation gives $y=(5.6 \times 2 - 4)/0.8 = 9.$ Then the first equation gives $b = (7.2 + 0.2 \times 2)/9 = 0.8444\ldots \approx 0.8$.
|
H: Classification of groups of order 66
We have that $|G|=66=2 \cdot 3 \cdot 11$, so we have 2,3,11-Sylow. The number of 11-Sylow $n_{11}$ is such that $n_{11} \equiv 1 \ \ (11)$ and $n_{11} \mid 6$, so we have that $n_{11}=1$, and this means that the only 11-Sylow is normal in $G$. So we can say that $\mathbb{Z}_{11}\mathbb{Z}_3<G$ (because the 11-Sylow, that is isomorphic to $\mathbb{Z}_{11}$, is normal). We have also that $\mathbb{Z}_{11} \cap \mathbb{Z}_3 =\{e\}$ for order reasons, and so we can say that $\mathbb{Z}_{11}\mathbb{Z}_3 \cong \mathbb{Z}_{11} \rtimes _{\varphi} \mathbb{Z}_3$ with $\varphi :\mathbb{Z}_3 \rightarrow \text{Aut} (\mathbb{Z}_{11}) \cong \mathbb{Z}_{10}$. There's only one possible homomorphism $\varphi$, that is the one such that $\varphi ([1]_3)=[0]_{10}$, and so we have that the only possible semidirect product is $\mathbb{Z}_{11} \times \mathbb{Z}_3 \cong \mathbb{Z}_{33}$. In other words, we have that $\mathbb{Z}_{33}<G$, and, because $[G:\mathbb{Z}_{33}]=2$, we can say that $\mathbb{Z}_{33}\triangleleft G$. Finally, we have that $G=\mathbb{Z}_{33}\mathbb{Z}_2$ because $\mathbb{Z}_{33} \cap \mathbb{Z}_2=\{e\}$, and so we have that $G \cong \mathbb{Z}_{33} \rtimes _\psi \mathbb{Z}_2$ with $\psi : \mathbb{Z}_2 \rightarrow \text{Aut} (\mathbb{Z}_{33}) \cong \mathbb{Z}_{20}$ homomorphism. So we end up with two homomorphisms that bring us to $\mathbb{Z}_{66}$ and $D_{33}$, but there are other two groups of order 66. What's wrong with my proof?
AI: As you say, it's a semidirect product of $Z_2$ and $Z_{33}$. But there are other
ways for $Z_2$ to act on $Z_{33}$. If $a$ and $b$ are generators of $Z_{33}$ and $Z_2$
then $bab^{-1}=a^r$ where $r^2\equiv1\pmod{33}$. But there are four possible $r$
modulo $33$ solving this, not just $\pm1$, there are $\pm10$ also. This gives two
other groups, which are $Z_3\times D_{11}$ and $Z_{11}\times D_3$. (The slip in your argument is the claim $\operatorname{Aut}(\mathbb{Z}_{33})\cong\mathbb{Z}_{20}$: in fact $\operatorname{Aut}(\mathbb{Z}_{33})\cong\mathbb{Z}_2\times\mathbb{Z}_{10}$, which is not cyclic, so it has four elements of order dividing $2$ and hence four homomorphisms $\psi$.)
|
H: Does there exist a continuous and differentiable EVEN function whose slope at zero isn't zero?
As I understand it, there should not be a case where the slope at $x=0$ is nonzero, if the function is even and continuously differentiable at all points including $x=0$.
AI: Well, just compute the derivative: If $f$ is even and differentiable at $0$, we have
$$
2f'(0)=\lim_{x\to 0} \left(\frac{f(x)-f(0)}{x}+\frac{f(-x)-f(0)}{-x}\right)=\lim_{x\to 0} \frac{f(x)-f(-x)}{x}=0,
$$
since $f$ is even.
|
H: Question about Projection operators
Let $P\in L(X)$ be a projection operator, where $X$ is a complex non-trivial Banach space, that is, $P^2=P$, and let $Q\in L(X)$ be such that $Q^2=0$. In my functional analysis exam it was asked to show that we will have $(P+Q)^2=P+Q$; however, I cannot seem to prove this. We will have that $(P+Q)(P+Q)=P^2+PQ+QP+Q^2=P+PQ+QP$, but how can I relate $PQ+QP$ with $Q$?
I also tried coming up with a counterexample if we consider $X=l^2(\mathbb{N})$ and the operator $P(x)=\langle x,e_1\rangle e_1$ and the operator $Q(x)=Q((x_1,x_2,x_3,...)=(0,x_1,0,x_3,0,x_5,...)$ isn't this going to be a counterexample? Because we will have that $(P+Q)(x)=(x_1,x_1,0,x_3,0,...)$ and that $(P+Q)^2(x)=(x_1,x_1,0,0,...)$.
Thanks in advance.
AI: Your counterexample works!
My idea:
For $P=I$ we do not have $(P+Q)^2=P+Q$ for each $Q$ with $Q^2=0:$
$(I+Q)^2=I+2Q$, thus
$$(I+Q)^2=I+Q \iff I+2Q=I+Q \iff Q=0.$$
Someone told you nonsense!
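The counterexample can also be checked on a finite truncation (a sketch with numpy; six coordinates are enough to see the failure):

```python
import numpy as np

n = 6
P = np.zeros((n, n)); P[0, 0] = 1   # P(x) = <x, e1> e1
Q = np.zeros((n, n))
for i in range(1, n, 2):
    Q[i, i - 1] = 1                 # Q(x) = (0, x1, 0, x3, 0, x5)

assert np.allclose(P @ P, P) and np.allclose(Q @ Q, 0)
T = P + Q
print(np.allclose(T @ T, T))        # False: (P+Q)^2 != P+Q
```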
|
H: Proving that the set of all finite subsets of a countable set is countable
I am trying to prove the following proposition:
Proposition: Let $S$ be a countable set. Then the set of all finite subsets of $S$ is also countable.
My approach:
If $S$ is countable that means that $S$ is finite or that $S \sim\mathbb{N}$.
If $S$ is finite then the number of finite subsets of $S$ is also finite, so in this case it's easy to show that this is countable as well.
In the case that $S$ is countably infinite:
Let $A_k=\{A \subset S:\text{card} (A)=k\}$. $A_k$ is countable because it is the union of various countable sets $A \subset S$
The set of all finite subsets of $S$ will be equal to $\bigcup \limits_{i=1}^\infty A_i$. Because every $A_i$ is countable, then this union is also countable.
My question is: can I take this union? Because $A_\infty$ would mean the set of all infinite subsets of $S$, and I'm taking a union from $i=1$ to $\infty$. So is this proof right?
AI: There is no problem with that union. However, it is wrong to assert that $A_k$ is countable “because it is the union of various countable sets”. An arbitrary union of countable sets doesn't have to be countable. You have to justify that you have a countable union of countable sets here.
|
H: $f_{i}(P)=P^{(i)}\left(x_{i}\right)$ for arbitrary scalars $x_{0}, \ldots, x_{n}$ is a basis for $\left(\mathbb{K}_{n}[X]\right)^{*}$
Fix a field $\mathbb{K}$ and a nonnegative integer $n$. Let $\mathbb{K}_{n}[X]$ be the $\mathbb{K}$-vector space of all polynomials in $X$ over $\mathbb{K}$ that have degree $\leq n$. Consider its dual space $\left(\mathbb{K}_{n}[X]\right)^{*}$.
I know that the linear maps $f_i : \mathbb{K}_{n}[X] \to \mathbb{K}$ defined by $f_{i}(P)=P\left(x_{i}\right)$ form a basis for $\left(\mathbb{K}_{n}[X]\right)^{*}$ whenever $x_{0}, \ldots, x_{n}$ are $n+1$ arbitrary distinct scalars.
I also know that the linear maps $f_i : \mathbb{K}_{n}[X] \to \mathbb{K}$ defined by $f_{i}(P)=P^{(i)}\left(0\right)$ form a basis for $\left(\mathbb{K}_{n}[X]\right)^{*}$.
I am trying to combine these two facts to prove that
the linear maps $f_i : \mathbb{K}_{n}[X] \to \mathbb{K}$ defined by $f_{i}(P)=P^{(i)}\left(x_{i}\right)$ form a basis for $\left(\mathbb{K}_{n}[X]\right)^{*}$ whenever $x_{0}, \ldots, x_{n}$ are $n+1$ arbitrary distinct scalars.
But I can't seem to find a way. Can anyone give me a hint?
AI: We could apply the same argument that I explain here if we show the following.
Claim: If $f_i(P) = 0$ for all $i$, then $P = 0$.
Proof: Proceed inductively. If $f_0(P) = 0$, then $P$ must have the form
$$
P(x) = (x - x_0)Q(x).
$$
Now, $P'(x)$ is a polynomial in $\Bbb K_{n-1}[X]$ for which the functionals $P \mapsto P^{(i-1)}(x_i)$ for $i = 1,\dots,n$ are zero, so by our inductive hypothesis it must be that $P'(x) = 0$. That is, $\frac {d}{dx}[(x - x_0)Q(x)] = 0$. So, $(x-x_0)Q(x)$ must be a constant polynomial. However, if $Q(x)$ is a polynomial, then $(x-x_0)Q(x)$ can only be a constant polynomial if $Q(x) = 0$. Thus, $Q(x) = 0$, so that we indeed have $P = 0$ as desired.
|
H: If $a,b,c$ are sides of a triangle, then find range of $\frac{ab+bc+ac}{a^2+b^2+c^2}$
$$\frac{ab+bc+ac}{a^2+b^2+c^2}$$
$$=\frac{\frac 12 ((a+b+c)^2-(a^2+b^2+c^2))}{a^2+b^2+c^2}$$
$$=\frac 12 \left(\frac{(a+b+c)^2}{a^2+b^2+c^2}-1\right)$$
The maximum value is $1$, attained when $a=b=c$.
How do I find the minimum value?
AI: In $\Delta ABC$, $a^2=b^2+c^2-2bc \cos A > b^2+c^2-2bc$.
Adding up the similar inequalities gives $ab+bc+ca > \frac{1}{2} (a^2+b^2+c^2)$, so the expression is always greater than $\frac12$. Together with the maximum above, the range is $\left(\frac12,\,1\right]$; the infimum $\frac12$ is approached by degenerate (needle-like) triangles but never attained.
|
H: Probability of Two Pairs ( Cards game )
Question: Calculate the probability of getting a two pair hand ( e.g., two 8’s, two Queens, and a Knight )
My answer: The probability of getting a two pair hand is :
$$ \frac{13\cdot\binom{4}{2}\cdot12\cdot\binom{4}{2}\cdot11\cdot\binom{4}{1}}{\binom{52}{5}} = \frac{396}{4165} $$
but the correct answer is :
$$ \frac{\binom{13}{2}\cdot\binom{4}{2}^2\cdot44}{\binom{52}{5}} = \frac{198}{4165} $$
which is my answer but divided by $2$. My question is why is my answer wrong, and in the correct answer where did $\binom{13}{2}$ come from?
AI: You are double-counting as you count any two pairs twice (as (8,8,Q,Q) and as (Q,Q,8,8)). To avoid this the correct solution suggests to use $\binom{13}2$ so that the whole pair (8,Q)$\equiv$(Q,8) is chosen.
|
H: Distributive law for ideals
Let $A,B,C\triangleleft R$ be ideals. Prove that:
$$B\cap(A+C)\subseteq C+(A\cap B)$$
$$\Updownarrow$$
$$B\cap(A+C)=(B\cap A)+(B\cap C)$$
I managed to prove the upper part from the lower, but I am struggling to prove the second direction.
AI: One direction is straightforward, as
$$(B\cap A)+(B\cap C)\subseteq (B\cap A)+C=C+(A\cap B)\,.$$
For the other direction, first of all, we always have $(B\cap A)+(B\cap C)\subseteq B\cap (A+C)$.
Now, assuming $B\cap(A+C)\subseteq C+(A\cap B)$, let $b\in B\cap(A+C)$, that means in particular $b\in B$, and by the assumption, also $b\in C+(A\cap B)$, so $b=c+a$ with some $c\in C,\, a\in A\cap B$.
Then, $c=b-a\in B$, too, so $c\in B\cap C$, thus $b=a+c\,\in(B\cap A)+(B\cap C)$.
|
H: $p(x)$ be a fifth degree polynomial with integer coeffients that has an integral root $\alpha$. If $p(2)=13$ and $p(10)=5$
$p(x)$ be a fifth degree polynomial with integer coeffients that has an integral root $\alpha$. If $p(2)=13$ and $p(10)=5$ then find the value of $\alpha$
I am looking for various other approaches to this problem
Thanks!
AI: We have that $a-b$ divides $p(a)-p(b)$ for all integers $a$ and $b$.
$\alpha -2$ divides $0-13$ and so $\alpha -2 \in \{-13,-1,1,13 \}$.
$\alpha -10$ divides $0-5$ and so $\alpha -10 \in \{-5,-1,1,5 \}$.
Therefore, $\alpha \in \{ -11,1,3,15 \} \cap \{ 5,9,11,15 \} = \{ 15 \}$.
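A tiny sketch verifying the two candidate sets and their intersection:

```python
candidates_from_2 = {2 + d for d in (-13, -1, 1, 13)}   # alpha - 2 divides -13
candidates_from_10 = {10 + d for d in (-5, -1, 1, 5)}   # alpha - 10 divides -5
print(candidates_from_2 & candidates_from_10)           # {15}
```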
|
H: Is it true that $H(Y|X_1,\dots,X_n)=H(X_1,\dots,X_n,Y)-H(X_1,\dots,X_n)$?
Suppose $X_1,\dots,X_n,Y$ are random variables. Is it true that the conditional entropy can be expressed as the difference between the joint entropy of all variables and the joint entropy of only $X$ variables? That is:
$$H(Y|X_1,\dots,X_n)=H(X_1,\dots,X_n,Y)-H(X_1,\dots,X_n)$$
AI: Yes it is. Consider the big tuple $(X_1,\cdots ,X_n)$ as one random variable $Z$. Hence $$H(Y|Z)=H(Z,Y)-H(Z)$$which is indeed true for any two random variables $Y,Z$.
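A numerical sanity check of the identity for $n=2$ (a sketch; the joint distribution below is an arbitrary choice of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))
p /= p.sum()                      # joint pmf of (X1, X2, Y)

def H(q):
    q = np.asarray(q).ravel()
    q = q[q > 0]
    return -(q * np.log2(q)).sum()

lhs = sum(                        # H(Y | X1, X2) computed from its definition
    p[i, j].sum() * H(p[i, j] / p[i, j].sum())
    for i in range(2) for j in range(2)
)
rhs = H(p) - H(p.sum(axis=2))     # H(X1, X2, Y) - H(X1, X2)
print(np.isclose(lhs, rhs))      # True
```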
|
H: Combinatorics: number of fruit baskets of size $n$
I improved a little, but the condition about an even number of strawberries prevented me from solving this.
How many fruit baskets are there which include $n$ fruits, with up to 3 bananas, an even number of strawberries, and any number of pineapples and grapes?
AI: Hint:
essentially you are looking for the number of non-negative integer solutions to the equation:
$$
x_1+2x_2+x_3+x_4=n,
$$
where $x_1\le3$.
The solution of the problem is:
$$
[x^n]\frac{1+x+x^2+x^3}{(1-x^2)(1-x)^2}=[x^n]\frac{1+x^2}{(1-x)^3},
$$
where $[x^n]$ is the coefficient extractor function which gives the coefficient at $x^n$ in the series expansion of the following expression. With help of generalized binomial theorem you can even obtain an explicit expression in terms of binomial coefficients.
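A brute-force sketch comparing the count of solutions with the coefficient read off from $\frac{1+x^2}{(1-x)^3}$, namely $\binom{n+2}{2}+\binom{n}{2}$ (this closed form is my own expansion, so treat it as the thing being verified):

```python
from math import comb

def count(n):
    # number of solutions of x1 + 2*x2 + x3 + x4 = n with 0 <= x1 <= 3
    return sum(
        1
        for x1 in range(4)
        for x2 in range(n // 2 + 1)
        for x3 in range(n + 1)
        if x1 + 2 * x2 + x3 <= n          # x4 = n - x1 - 2*x2 - x3 >= 0
    )

for n in range(10):
    assert count(n) == comb(n + 2, 2) + comb(n, 2)
print("coefficient formula agrees for n = 0..9")
```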
|
H: convergence or divergence of infinite rational series
Determine whether the series $$\sum^{\infty}_{k=0}\frac{5k^2+7}{8k^2+2}$$ converges or diverges.
What I tried:
I am trying to solve it using the ratio test.
Let $\displaystyle a_{k}=\frac{5k^2+7}{8k^2+2}$. Then $\displaystyle a_{k+1}=\frac{5(k+1)^2+7}{8(k+1)^2+2}$.
Then $$\lim_{k\rightarrow \infty}\bigg|\frac{a_{k+1}}{a_{k}}\bigg|=1$$
But this test does not give any conclusion.
Please help me: how do I solve it? Thanks
AI: $\frac{5k^2+7}{8k^2+2}\ge\frac{5k^2+5}{8k^2+8}=\frac{5}{8}$
By the comparison test, since $\sum_{k=0}^{\infty}5/8$ diverges,
$\sum^{\infty}_{k=0}\frac{5k^2+7}{8k^2+2}$ also diverges.
Alternatively, simply note that the $n$th term tends to $5/8\neq 0$ as $n\to \infty$, while tending to zero is a necessary (not sufficient!) condition for convergence of a series.
|
H: Comparability with zero of an ordered semigroup
Is it correct that any ordered semigroup $S$ can be embedded into an ordered semigroup with zero $S_0$ in which every element is comparable with $0$, in a way that the order of $S$ is a subset of the order of $S_0$?
If not, is it always possible to re-order an ordered semigroup in this way?
The question was moved from here: Ordered semigroup with an absorbing element
AI: The answer to the first question is yes.
Let $S$ be an ordered semigroup and let $S_0 = S \cup \{0\}$, where $0$ is a new zero element, that is, $0s = s0 = 0$ for all $s \in S_0$. Then the order on $S$ can be extended by adding the condition $0 < s$ for all $s \in S$.
|
H: For which $a$ is this integral bounded?
I am trying to prove that for $a > -1$ the following integral:
$$\int_{0}^{\infty} x^a\,\lambda\, e^{-\lambda x}\,dx < \infty$$
with $\lambda > 0$.
Is there a criterion that I can use to do so?
Thank you very much.
AI: You have to check boundedness at both ends of the interval.
Hint for boundedness around $0$:
What should be enough is to know that $x^ae^{-\lambda x}<x^a$ for $x>0$, and that $\int_0^1 x^a\,dx<\infty$ precisely when $a>-1$.
Hint for boundedness at $\infty$:
$$x^ae^{-\lambda x} = \left(x^a\cdot e^{-\frac\lambda2x}\right)\cdot e^{-\frac\lambda2x}$$
and the function in parentheses is bounded.
|
H: Remainder when divided by $7$
What would be the remainder when
$12^1 + 12^2 + 12^3 +\cdots + 12^{100}$ is divided by $7$ ?
I tried the cyclic (pattern) approach, but I couldn't solve this particular question.
AI: In the comments, you recognized that $12^1+12^2+12^3+\cdots+12^{100}$
$\equiv \underbrace{(5+4+6+2+3+1)+(5+4+6+2+3+1)+\cdots+(5+4+6+2+3+1)}_{16\text{ identical groups}}+\,5+4+6+2\pmod7.$
Note that the sum over each group, $5+4+6+2+3+1=21$, is a multiple of $7$, and you're almost done.
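A one-line sketch confirming the remainder:

```python
print(sum(12**k for k in range(1, 101)) % 7)   # 3, since 5+4+6+2 = 17 = 2*7 + 3
```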
|
H: What is the correct name for relation like things such as $\in$ and =
= and $\in$ look like relations and you can sometimes treat them like they are. However the domain and codomain of both is the class of all sets. Is there a term for this type of thing?
AI: Ultimately, use the shortest term you don't think will be misunderstood. I second @Arthur's suggestion: If you define relation too precisely (i.e. on sets) for that to be a legal description, go with class relation. (I prefer binary class relation over class binary relation, but neither is likely to be used often.) Having said that, @MauroALLEGRANZA noted such precision is skipped in Kunen 2009, and suggested calling them predicate symbols, which makes sense unless you want to emphasize that they're mathematical objects. For example in $\{\emptyset,\{\emptyset\}\}\in\mathord{\in}$ (I can't apologize enough for how horrible that looks!), the first $\in$ is a symbol for a binary predicate, but the second is a binary class relation.
|
H: Copula Theory : CDF from Marginals
I am given a two-dimensional random vector $(X,Y)$ with Exp(1) marginals and a copula
$C(u,v) = uv + (1-u)(1-v)uv$
I need to determine the density of $(X,Y)$, if it exists.
I would assume that it is the product of the density of the components. However, in the question it is not stated that the components are independent, so I am having my doubts. Could anyone clarify this please?
AI: Using the definition of a copula, the joint distribution function is given by $F(x,y)=C(F_1(x),F_2(y))$, where the Exp(1) marginals are $F_1(x)=1-e^{-x}$ and $F_2(y)=1-e^{-y}$. Hence
$$F(x,y)=(1-e^{-x})(1-e^{-y})(1+e^{-x-y}).$$
The joint density is obtained as $f(x,y)=\frac{\partial ^2 F(x,y)}{\partial x\partial y}$; since this mixed partial exists, the density exists. Note also that the components are not independent: independence would correspond to the copula $C(u,v)=uv$, which is not the case here.
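A sympy sketch carrying out the differentiation (the printed output is whatever form sympy chooses):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
F = (1 - sp.exp(-x)) * (1 - sp.exp(-y)) * (1 + sp.exp(-x - y))
f = sp.simplify(sp.diff(F, x, y))   # joint density of (X, Y)
print(f)
```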
|
H: Why are $\sin,\cos,\tan$ continuous
I'm done with two courses in Analysis, but just can't seem to work out how I'll show the base trigonometric functions to be continuous.
Any references or indications for a simple, preferably elementary proof ?
Is it possible to do it relying only on $\epsilon$-$\delta$ arguments?
AI: For example, using the identity $$\cos(p)-\cos(q)=-2\sin\left(\frac{p+q}{2}\right)\sin\left(\frac{p-q}{2}\right),$$
together with $|\sin t|\le|t|$ and $|\sin t|\le 1$, yields
$$|\cos(x+h)-\cos(x)|=2\left|\sin\left(\frac{2x+h}{2}\right)\sin\left(\frac{h}{2}\right)\right|\leq |h|.$$
If $\varepsilon >0$, take $\delta =\varepsilon$, and thus $$|h|\leq \delta \implies |\cos(x+h)-\cos(x)|\leq \varepsilon .$$
This proves the continuity of the cosine function. Doing the same with the sine function gives the desired result.
|
H: Interesting problem on determinant (and pinch of number theory)
The digits A, B and C are such that the three digit numbers A88 , 6B8
and 86C are divisible by 72, what is $$\begin{vmatrix}
A & 6 & 8\\
8 & B & 6 \\ 8 & 8 & C \end{vmatrix} \pmod {72}$$
I can (and did) find the individual digits $B$ and $C$; $A$ was a bit harder to find. I then expanded the determinant. This was quite time consuming.
Is it possible to solve this solely using properties of the determinant (or matrix)?
Edit: without actually finding the digits.
AI: Since replacing a row in a matrix by that same row plus a multiple of another row does not change the determinant, we can replace the third row by the third row plus ten times the second row plus a hundred times the first row, that is, perform the modification:
$$R_3 \leftarrow R_3 + 10R_2+100R_1$$
to arrive at a matrix which has the same determinant as before. The new third row, however, has every entry divisible by $72$ as per our hypothesis, since the entries of the new third row are precisely the three-digit numbers $A88$, $6B8$ and $86C$.
As such, it follows that the determinant itself is divisible by $72$ and is therefore equivalent to $0$ when considered modulo $72$.
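As a concrete cross-check (a sketch that does find the digits, which the row-operation argument cleverly avoids): $288=4\cdot72$, $648=9\cdot72$ and $864=12\cdot72$ force $A=2$, $B=4$, $C=4$, and the determinant is $288\equiv 0 \pmod{72}$:

```python
import numpy as np

A, B, C = 2, 4, 4                      # A88 = 288, 6B8 = 648, 86C = 864
M = np.array([[A, 6, 8],
              [8, B, 6],
              [8, 8, C]])
det = round(np.linalg.det(M))
print(det, det % 72)                   # 288 0
```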
|
H: Constructing the root diagram for $B_2$
I'm trying to self-teach some Lie theory, and in particular I'm trying to construct the root diagram for $B_2$. I've found 8 roots, labelled $\pi_1,\pi_2,-\pi_1,-\pi_2,\pi_1+\pi_2,-(\pi_1+\pi_2),\pi_1+2\pi_2$ and $-(\pi_1+2\pi_2)$.
To draw the root diagram in an $(x,y)$ plane, (and here's where I think I might be going wrong) I've given $\pi_1$ the coordinates $(0,1)$, and thus (via a 135 degree rotation) $\pi_2$ has coordinates $(-\frac{\sqrt2}{2},-\frac{\sqrt2}{2})$. I've then tried to find the coordinates of all the other roots from there. I have two (or maybe more) problems:
The coordinates for $\pi_2$ that result from the 135 degree rotation are inconsistent with the notion that one fundamental root has to be longer than the other (I've chosen $\pi_1>\pi_2$).
The root diagram that results from the process described above is clearly incorrect - the diagrams I've found online/in textbooks show pairs of roots that are 90 degrees from each other, which mine does not.
If anyone could shed any light on where I'm going wrong, it'd be much appreciated.
AI: Since you know that the two simple roots have different lengths, you cannot get from one to the other by a rotation. So $\pi_2$ should be a positive multiple of the vector you have. To choose the right factor you can use either what you know about the length ratio of the two roots or what you know about their inner product. Either way the result should be $(-\frac12,-\frac12)$. This leads to the familiar picture (possibly rotated, depending on the choice of where the long root points).
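A quick numerical sketch checking that $\pi_1=(0,1)$ and $\pi_2=(-\frac12,-\frac12)$ have the expected $B_2$ geometry (angle $135^\circ$, length ratio $\sqrt2$):

```python
import numpy as np

p1 = np.array([0.0, 1.0])
p2 = np.array([-0.5, -0.5])
cos_angle = p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2))
print(np.degrees(np.arccos(cos_angle)))        # 135.0
print(np.linalg.norm(p1) / np.linalg.norm(p2)) # 1.4142... = sqrt(2)
```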
|
H: Determine if $\int_1^{\infty}\frac{dx}{x^p+x^q}$ converges ...
Determine if
$\int_1^{\infty}\frac{dx}{x^p+x^q}$
converges if $\min(p, q) < 1$ and $\max(p, q) > 1$, where $\min (p, q)$ is the minor of the numbers $p$ and $q$, and $\max (p,q)$ is the major of the numbers $p$ and $q$.
I have doubts about how to handle the denominator.
AI: Suppose $p<q$. Then
$$\frac{1}{x^p+x^q}= \frac{1}{x^q(1+x^{p-q})} \sim \frac{1}{x^q}\quad (x\to\infty),$$
so the integral over $[1,\infty)$ converges exactly when $q>1$.
It is interesting to consider this integral on the whole of $(0, +\infty)$; for convergence one should then also require the existence of $$\int_{0}^{1}\frac{dx}{x^p+x^q}.$$
Obviously the integral over $(0, +\infty)$ diverges when $p=q$, so it makes sense to consider $p \ne q$. For the piece near $0$ we have $$\frac{1}{x^p+x^q}= \frac{1}{x^p(1+x^{q-p})}\sim\frac{1}{x^p}\quad (x\to 0^+),$$ and we obtain that this piece converges precisely when $p<1$.
So the conditions for the whole of $(0, +\infty)$, when $p<q$, are $p<1$ and $q>1$.
|
H: Prove that $\dim(U_1 \cap U_2 \cap ... \cap U_k) \geq n-k$ and find a case where the equality doesn't hold
Let $V$ be a finite dimensional vector space of dimension $n$. Let $1 \leq k \leq n$ and consider $U_1,...,U_k$ distinct subspaces of $V$, all of dimension $n-1$
a) Prove that $\dim(U_1 \cap U_2 \cap ... \cap U_k) \geq n-k$
b) Give (at least) one example where equality in $\dim(U_1 \cap U_2 \cap ... \cap U_k) \geq n-k$ does not hold.
My attempt
a) To do the proof, I am using the dimensional theorem
$$\dim(U \cap W) = \dim(U) + \dim(W) - \dim(U + W)$$
And induction.
Let me show you what I have so far.
$\underline{\text{The base case} \ k=1}$
$$\dim(U_1) \geq n-1$$
I think this is not necessary for the proof, but I also checked $k=2$. Using the dimensional theorem I get
$$\dim(U_1 \cap U_2) = \dim(U_1) + \dim(U_2) - \dim(U_1 + U_2)$$
Where
$$\dim(U_1) \geq n-1$$
$$\dim(U_2) \geq n-1$$
$$\dim(U_1 + U_2) \leq n$$
Thus
$$\dim(U_1 \cap U_2) \geq n-2$$
$\underline{\text{Induction step}}$
Here I assumed that $\dim(U_1 \cap U_2 \cap ... \cap U_k) \geq n-k$ holds and proved that it also holds for $k+1$ by means of the dimensional theorem.
$$\dim((U_1 \cap U_2 \cap ... \cap U_k) \cap U_{k+1}) = \dim(U_1 \cap U_2 \cap ... \cap U_k) + \dim(U_{k+1}) - \dim((U_1 \cap U_2 \cap ... \cap U_k) + U_{k+1})$$
Where
$$\dim(U_1 \cap U_2 \cap ... \cap U_k) \geq n-k$$
$$\dim(U_{k+1}) \geq n-1$$
$$\dim((U_1 \cap U_2 \cap ... \cap U_k) + U_{k+1}) \leq n \ \ \text{due to} \ (U_1 \cap U_2 \cap ... \cap U_k) + U_{k+1} \subseteq V$$
Thus
$$\dim((U_1 \cap U_2 \cap ... \cap U_k) \cap U_{k+1}) \geq n-k-1$$
Which is the statement for $k+1$.
$\underline{\text{Conclusion}}$
Since both the base case and the inductive step have been proved as true, by mathematical induction the statement $\dim(U_1 \cap U_2 \cap ... \cap U_k) \geq n-k$ holds for every natural number $k$.
Is my proof OK?
b) Here I did not find any case breaking the equality; I tried with $k=3$, $k=4$ and so on and I always get
$$\dim(U_1 \cap U_2 \cap U_3) \geq n-3$$
$$\dim(U_1 \cap U_2 \cap U_3 \cap U_4) \geq n-4$$
$$...$$
So what am I missing here?
Any help is appreciated.
AI: The proof looks fine.
For b), consider $\mathbb{R}^3$ and three distinct planes through the origin which intersect in a common line. Then $n=3$ and $k=3$, but the dimension of the intersection subspace is $1 > n-k = 0$.
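A concrete sketch of such an example: the planes $x=0$, $y=0$ and $x+y=0$ in $\mathbb{R}^3$ are distinct $2$-dimensional subspaces all containing the $z$-axis, and the dimension of their intersection is $3$ minus the rank of the stacked normal vectors:

```python
import numpy as np

# normal vectors of the planes x = 0, y = 0, x + y = 0
N = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0]])
print(3 - np.linalg.matrix_rank(N))   # 1, strictly bigger than n - k = 0
```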
|
H: Fourier Transform of char. function of $d$-dimensional unit cube
I want to find the Fourier transform of the unit cube.
So far, I have
$$f(\xi) = \frac{1}{(2\pi)^{\frac d 2}}\int_{\mathbb{R}^d}\chi_{[-1,1]^d}(x)\,e^{-i\langle x,\xi\rangle}\,dx = \frac{1}{(2\pi)^{\frac d 2}}\int_{[-1,1]^d}e^{-i\langle x,\xi\rangle}\,dx$$
Now I don't know how to continue with that dot product in the exponent, any help would be appreciated.
AI: Note that since $\langle x,\xi\rangle=\sum_{n=1}^d x_n\xi_n$ we have
$$\int_{[-1,1]^d}e^{-i\langle x,\xi\rangle}\,dx=\prod_{n=1}^d \int_{-1}^1 e^{-ix_n\xi_n}\,dx_n$$
And now you can finish.
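If you want to check the one-dimensional factor, a sympy sketch (it should return something equivalent to $2\sin(\xi_n)/\xi_n$):

```python
import sympy as sp

x, xi = sp.symbols('x xi', real=True, nonzero=True)
factor = sp.integrate(sp.exp(-sp.I * x * xi), (x, -1, 1))
print(sp.simplify(factor))   # expect 2*sin(xi)/xi
```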
|
H: Proof for the general formula for $a^n+b^n$.
Based on the following observations. That is
$$a+b = (a+b)^1 \\ a^2+b^2 = (a+b)^2-2ab \\ a^3+b^3 = (a+b)^3-3ab(a+b) \\ a^4+b^4= (a+b)^4-4ab(a+b)^2+2(ab)^2\\ a^5+b^5
= (a+b)^5 -5ab(a+b)^3+5(ab)^2(a+b)\\\vdots$$
I came to make the following conjecture as general formula.
$$ a^n +b^n =\sum_{k=0}^{n-1}(-1)^k \frac{n\Gamma(n-k)}{\Gamma(k+1)\Gamma(n-2k+1)}(a+b)^{n-2k}(ab)^k $$ where $\Gamma(.) $ is gamma function.
I tried proving the result using the binomial theorem $\displaystyle (a+b)^n=\sum_{r=0}^n \binom{n}{r} a^{n-r}b^r$ for positive integers $a,b$; however, I didn't find any elegance in the work. So, in the hope of some beautiful proofs, I wish to share the general formula here.
Thank you
AI: $a$ and $b$ are roots of $x^2=(a+b)x-ab$. Therefore, $a^{n+2}=(a+b)a^{n+1}-(ab)a^n$ and analogously for $b$.
Let $p_n=a^n+b^n$. Then $p_{n+2}=(a+b)p_{n+1}-(ab)p_n$ is a simple recurrence. The initial values are of course $p_0=2$ and $p_1=a+b$.
This recurrence is a special case of Newton's identities.
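The recurrence makes it easy to test the conjectured closed form numerically; a sketch, using my own rewriting $\frac{n\,\Gamma(n-k)}{\Gamma(k+1)\Gamma(n-2k+1)}=\frac{n}{n-k}\binom{n-k}{k}$ (terms with $k>n/2$ vanish because of the $\Gamma$ poles):

```python
from fractions import Fraction
from math import comb

def closed_form(n, a, b):
    # sum over k of (-1)^k * n/(n-k) * C(n-k, k) * (a+b)^(n-2k) * (ab)^k
    return sum(
        (-1) ** k * Fraction(n, n - k) * comb(n - k, k)
        * (a + b) ** (n - 2 * k) * (a * b) ** k
        for k in range(n // 2 + 1)
    )

a, b = 2, 5
for n in range(1, 12):
    assert closed_form(n, a, b) == a ** n + b ** n
print("closed form matches a^n + b^n for n = 1..11")
```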
|
H: When the classes of two finitely generated modules are equal in the Grothendieck group
Let $R$ be a Commutative Noetherian ring. Let $G_0(R)$ denote the Grothendieck group of the abelian category of finitely generated $R$-modules i.e. it is the abelian group generated by the isomorphism classes of finitely generated $R$-modules subject to the relation : $[M]=[M_1]+[M_2]$ if there is a short exact sequence of $R$-modules $0\to M_1\to M\to M_2\to 0$.
It can be shown that $G_0(R)$ is generated by the classes $[R/P]$ as $P$ runs over all prime ideals of $R$. Now my question is the following:
If $M,N$ are finitely generated $R$-modules such that $[M]=[N]$ in $G_0(R)$, then is it true that there exists short exact sequences of finitely generated $R$-modules $0\to A\to B\to C\to 0$ and $0\to A\to B'\to C\to 0$ such that $M\oplus B\cong N\oplus B'$ ?
AI: Something similar was once pointed out by Steven Landsburg, that works in any exact category: there are exact sequences $0 \to A \to B\to C \to 0$ and $0 \to A' \to B'\to C' \to 0$ such that $ M \oplus B \oplus C \oplus A' \cong N \oplus B' \oplus C' \oplus A$. It's an easy exercise.
|
H: A set problem (inspired by geometry)
Here's my problem:
say we have four sets of letters (abcdef) (abde) (abc) (ad). We can only add or subtract those sets in a way that (abc) + (ad) = (aabcd), (abcdef) - (abde) = (cf), but (abc) - (ad) is not allowed. Is it possible to get (b) only with these rules?
(inspired by a "find an area" geometry problem)
(is there a tag for these specific types of problems?)
AI: Expanding on my comment: since (abcdef) is the only set containing an 'f', it can't be used at all. Of the remaining sets, (abc) is the only one that has a 'c', and (abde) is the only one that has an 'e', rendering both of those useless.
The remaining set is (ad) and obviously we can't get (b) by itself.
|
H: Questions about the euclidean topology
My general topology textbook just gave the definition of euclidean topology on $\mathbb{R}$ but unfortunately did not provide any examples and I was hopping that someone here could help me with some questions I have. The definition they gave is the following:
A subset $S$ of $\mathbb{R}$ is said to be open in the euclidean topology on $\mathbb{R}$ if it has the following property:
(1)$\ \ \ \ $For each $x \in S$, there exists $a,b\in\mathbb{R}$, with $a<b$, such that $x \in ]a,b[\subseteq S$.
My questions are the following:
Let $A=[1,2]$; then we can define an interval $]1-\epsilon,2+\epsilon[$ and we would have that $[1,2]\subset ]1-\epsilon,2+\epsilon[$. Wouldn't that mean that $[1,2]$ is open?
Let $A=]1,2[\ \cup\ ]3,4[$,
we can define a new interval $]1,4[$ such that $\forall x\in A,x\in ]1,4[$. But it is not true that $]1,4[\subseteq A$. Rather, we have that $A \subseteq ]1,4[$. So the set $A$ does not have property (1), and is thus not open.
On the other hand, $]1,2[$ and $]3,4[$ are open, and the union of open sets is also open. Since $A =]1,2[\ \cup\ ]3,4[$, then $A$ must be open. So is $A$ open or not?
AI: The set $[1,2]$ is not open: the definition asks for an interval around each point of $S$ that is contained in $S$, not for $S$ to be contained in some interval. In particular, there are no numbers $a$ and $b$, with $a<b$, such that $1\in(a,b)$ and $(a,b)\subset[1,2]$.
And $(1,2)\cup(3,4)$ is open. For instance, if $x\in(1,2)$ and $r=\min\{|x-1|,|x-2|\}$, then $(x-r,x+r)\subset(1,2)\subset(1,2)\cup(3,4)$.
|
H: Is it possible to tell that one of the coin was biased if coins are changed mid experiment?
Suppose I have two coins, one biased, i.e. its probability of heads $\ne 0.5$, and the other a fair one with heads and tails as equally likely outcomes.
Now I begin tossing the biased coin first and note down the results for a very large (but finite) number of tosses; then I switch to the fair one and do the same, but this time for an even larger number.
Then I note the total number of heads and the total number of tails.
Now my question: if I perform the experiment with the fair coin a very large number of times, would that hide the fact that an unfair coin was involved?
AI: Absolutely, but the ratio of fair coin flips to unfair coin flips needed (in order to hide that an unfair coin was ever used) depends on how unfair the biased coin is.
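A small simulation sketch (bias $0.7$ and $1000$ biased flips are arbitrary choices of mine) showing the pooled head frequency drifting back to $0.5$ as the fair flips take over:

```python
import numpy as np

rng = np.random.default_rng(1)
biased_heads = (rng.random(1000) < 0.7).sum()       # biased coin, P(head) = 0.7

for n_fair in (1_000, 10_000, 1_000_000):
    fair_heads = (rng.random(n_fair) < 0.5).sum()   # fair coin
    print(n_fair, (biased_heads + fair_heads) / (1000 + n_fair))
```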
|
H: Can someone explain the limit $\lim _{n \rightarrow \infty} \left(\frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+\cdots+\frac{1}{\sqrt{n^2+n}}\right)$?
$$\begin{aligned}
&\text { Find the following limit: } \lim _{n \rightarrow \infty} \left(\frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+\cdots+\frac{1}{\sqrt{n^2+n}}\right)\\
&\text{In a solution I got the following. Let } u_n=\frac{n}{\sqrt{n^2+n}}\\
&\therefore \lim _{n \rightarrow \infty} u_n=\lim _{n \rightarrow \infty} \frac{n}{\sqrt{n^{2}+n}}=\lim _{n \rightarrow \infty} \frac{1}{\sqrt{1+\frac{1}{n}}}=1\\
&\text { By Cauchy's first theorem:- } \lim _{n \rightarrow \infty} \left(\frac{u_1+\cdots+u_n}{n}\right)=1\\
&\text{So, } \lim _{n \rightarrow \infty}\left(\frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+\cdots+\frac{1}{\sqrt{n^2+n}}\right)=1
\end{aligned}$$
I am unable to understand this. How the general term comes like this! Any such explanation to the whole problem would be greatly appreciated.
AI: $$\frac{1}{\sqrt{n^2+n}} \leq\frac{1}{\sqrt{n^2+1}}\leq \frac{1}{\sqrt{n^2}}\\
\frac{1}{\sqrt{n^2+n}} \leq\frac{1}{\sqrt{n^2+2}}\leq \frac{1}{\sqrt{n^2}}\\
\frac{1}{\sqrt{n^2+n}} \leq\frac{1}{\sqrt{n^2+3}}\leq \frac{1}{\sqrt{n^2}}\\
\vdots\\
\frac{1}{\sqrt{n^2+n}} \leq\frac{1}{\sqrt{n^2+n}}\leq \frac{1}{\sqrt{n^2}}.$$ Now sum these $n$ inequalities:
$$\frac{n}{\sqrt{n^2+n}} \leq\left(\frac{1}{\sqrt{n^2+1}}+...+\frac{1}{\sqrt{n^2+n}}\right)\leq \frac{n}{\sqrt{n^2}}.$$ Both bounds tend to $1$, so now take the limit and squeeze.
|
H: Why do polynomials for higher degrees have large oscillations near the edge of the interval?
In regards to Polynomial Interpolation, especially Lagrange Interpolation, I noticed that near the edges of the interval there are huge oscillations. My question is: Why do polynomials of higher degrees have big oscillations near the end points? And how do we avoid running into this scenario?
AI: This is known as the Runge phenomenon. Moreover, interpolation with many nodes is numerically unstable. Therefore, in practice, either splines are used (in particular cubic splines) or Chebyshev interpolation, which also avoids this annoying phenomenon.
The reason is that the interpolation polynomial has moderate values over a large range, hence the polynomial usually has many minima and maxima. If these have large absolute values, we get the oscillation.
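The classic demonstration interpolates $f(x)=1/(1+25x^2)$ at equispaced nodes on $[-1,1]$; a self-contained sketch (direct Lagrange evaluation, no plotting):

```python
import numpy as np

def lagrange_eval(nodes, values, t):
    """Evaluate the Lagrange interpolation polynomial at the points t."""
    result = np.zeros_like(t)
    for j, (xj, yj) in enumerate(zip(nodes, values)):
        basis = np.ones_like(t)
        for k, xk in enumerate(nodes):
            if k != j:
                basis *= (t - xk) / (xj - xk)
        result += yj * basis
    return result

f = lambda x: 1 / (1 + 25 * x**2)
t = np.linspace(-1, 1, 2001)
for n in (5, 10, 15, 20):
    nodes = np.linspace(-1, 1, n + 1)      # equispaced nodes
    err = np.max(np.abs(lagrange_eval(nodes, f(nodes), t) - f(t)))
    print(n, err)                          # the maximum error grows with n
```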
|
H: Showing that the Diophantine equation $m(m-1)(m-2)(m-3) = 24(n^2 + 9)$ has no solutions
Consider the Diophantine equation $$m(m-1)(m-2)(m-3) = 24(n^2 + 9)\,.$$ Prove that there are no integer solutions.
One way to show this has no integer solutions is by considering modulo $7$ (easy to verify with it).
I am curious whether there is a slightly less "random" way to solve this problem, such as using the fact that if $p\equiv 3 \pmod 4$ divides $x^2 + y^2$, then $p$ must divide both $x$ and $y$. This looks convenient since the left-hand side has a multiplier which is $\equiv 3 \pmod 4$ (and hence such a $p$ surely exists) and we will be done provided we can take $p\neq 3$ (since the only prime $p\equiv 3 \pmod 4$ which divides $y=3$ is $3$ itself). Any idea if this method could work?
I am of course also open to see other ideas. Any help appreciated!
AI: Here is one approach that is more motivated. Let $k$ be the number halfway between $m-1$ and $m-2.$ So $$k=\frac{(m-1)+(m-2)}{2}=m-\frac{3}{2}.$$ Then the left side of the equation is
\begin{align*}
m(m-1)(m-2)(m-3) &= \left(k+\frac{3}{2}\right)\left(k+\frac{1}{2}\right)\left(k-\frac{1}{2}\right)\left(k-\frac{3}{2}\right)\\
&=\left(k^2-\frac{9}{4}\right)\left(k^2-\frac{1}{4}\right)\\
&= k^4 -\frac{10}{4}k^2+\frac{9}{16}\\
&= k^4 -\frac{5}{2}k^2+\frac{25}{16}-\frac{25}{16}+\frac{9}{16}\\
&= \left(k^2 -\frac{5}{4}\right)^2 - 1\\
&= \left(\left(m-\frac{3}{2}\right)^2-\frac{5}{4}\right)^2-1\\
&= (m^2-3m+1)^2 -1.
\end{align*}
So the equation is $$(m^2-3m+1)^2 - 24n^2=24\cdot 9 +1 =7\cdot 31.$$ This is a good enough reason to try $\mod 7.$ Then $$(m^2-3m+1)^2\equiv 3n^2 \pmod{7}.$$ The quadratic residues modulo $7$ are $0,1,2,4.$ The only two residues where one is three times the other are $0$ and $0.$ So $m^2-3m+1$ and $n$ are both divisible by $7.$ In the first case $$m^2-3m+1\equiv 0\pmod 7.$$ Equivalently, $$(m+2)^2\equiv 3\pmod{7}.$$ But none of the quadratic residues are $3$ so that's a contradiction to the existence of a solution $(m,n).$
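A brute-force sketch of the last step: no pair of residues modulo $7$ satisfies $(m^2-3m+1)^2-24n^2\equiv 217$:

```python
solutions = [(m, n)
             for m in range(7) for n in range(7)
             if ((m * m - 3 * m + 1) ** 2 - 24 * n * n) % 7 == 217 % 7]
print(solutions)   # []: no solutions even modulo 7
```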
|
H: Bounding $\|Ax\|_2$ in terms of $\|A\|_1$ and $\|x\|_2$
Given a matrix $A$ of size $m\times n$ and a vector $x$ of size $n\times 1$, how can we bound the $l_2$ norm $\|Ax\|_2$ in terms of $\|A\|_1$ and $\|x\|_2$, where $\|A\|_1$ is the sum of the absolute values of all entries of the matrix $A$?
AI: One useful and simple bound is that $$||Ax||_2 \leq \sqrt{m}||A||_1 ||x||_2.$$ To see this, apply Cauchy–Schwarz to the $j$-th entry of $Ax$, together with the fact that the $\ell_2$ norm of a row is at most its $\ell_1$ norm, which in turn is at most $||A||_1$: $$|(Ax)_j| = \left| \sum_{k} A_{jk} x_k \right| \leq \Big(\sum_k A_{jk}^2\Big)^{1/2} ||x||_2 \leq ||A||_1 ||x||_2,$$ so that $$||Ax||_2 = \sqrt{\sum_{j=1}^m (Ax)_j^2 } \leq \sqrt{m\, ||A||_1^2 ||x||_2^2} = \sqrt{m}||A||_1 ||x||_2.$$
|
H: A doubt about Theorem 14 in textbook Algebra by Saunders MacLane and Garrett Birkhoff
I'm reading the proof of Theorem 14 in textbook Algebra by Saunders MacLane and Garrett Birkhoff.
Any permutation $\sigma \neq 1$ on a finite set $X$ is a composite $\gamma_{1} \cdots \gamma_{k}$ of disjoint cyclic permutations $\gamma_{i}$, each of length $2$ or more. Except for changes in the order of the cyclic factors, $\sigma$ has only one such decomposition.
They said that
Each of the points ${\sigma}^{i} {x}$ in this set $C$ has the same orbit [under $\sigma$].
IMHO, this is only the case when ${\sigma}^{i} {x}$ is a generator of $C$, i.e. $\gcd(i,m) = 1$. As such, I feel that the statement may not be correct.
Could you please verify my observation?
AI: If $y=\sigma^i(x)$ then $x=\sigma^{-i}(y)$; writing $p$ for the order of $\sigma$, the identity $\sigma^p=\mathrm{Id}$ implies that $\sigma^{-i}=\sigma^{p-i}$, and so $x=\sigma^{p-i}(y)$ lies in the orbit of $y$. Hence $x$ and $y$ have exactly the same orbit, whether or not $\gcd(i,m)=1$: the orbit of a point under $\sigma$ is its whole cycle, not the set of images under powers of $\sigma^i$.
|
H: How do I Find Preparatory Exposure?
I always prefer to be prepared ahead of time, but I am not sure how to prepare for a career in mathematics. How do I immerse myself in a formal, mathematical environment without necessarily enrolling in a university, and at the same time take a general survey of the profession?
AI: I would suggest taking one of the free pdf textbooks that are available online and doing the exercises/proofs in the first few pages.
A career in research mathematics is not exactly described by this, but it provides the same idea you would have gotten from university.
Just know that actually doing research mathematics might be a lot more undirected than following an exam book. To really know what that's like, you should pick some issue or other that you're interested in and try to prove things about it, without following some predefined structure.
H: Effect of squaring while finding roots of unity
Consider $$b=\frac{1}{b}\rightarrow b^2=1$$
Clearly $b=\pm1$ But if we square the above equation on both sides and then solve
$$(b=\frac{1}{b})^2\rightarrow b^2=\frac{1}{b^2}\rightarrow b^4=1
$$
And we know the fourth roots of unity are $1,i,-1,-i$. Why am I getting extraneous roots? Can someone please explain?
AI: Because $f(x)=g(x)$ implies $[f(x)]^2=[g(x)]^2$, but not vice versa. More generally, $y^2=z^2$ doesn't imply $y=z$. That's why you wrote $b=\color{blue}{\pm}1$ in the first place.
|
H: Suppose $R$ is $(3, 5)$ and $S$ is $(8, -3)$. Find each point on the line through $R$ and $S$ that is three times as far from $R$ as it is from $S$.
Suppose $R$ is $(3, 5)$ and $S$ is $(8, -3)$. Find each point on the line through $R$ and $S$ that is three times as far from $R$ as it is from $S$.
I'm confused regarding "three times as far from $R$ as it is from $S$": does it mean the point's distance to $R$ is $3$ times its distance to $S$? I'm not familiar with these questions, but I think I've found one point.
Let $P$ be the point on segment $\overline{RS}$ such that $RP:PS=3:1$
$\therefore \frac{RP}{RS}=\frac{3}{4}$
$ \frac{3}{4}RS=RP$
Since S is $5$ units to the right and $8$ units below $R$
$P$ is $5(\frac{3}{4})$ units to the right and $8(\frac{3}{4})$ units below $R$
$\therefore P:(3+\frac{15}{4},5-\frac{24}{4})=(6.75,-1)$
I need help with the other one, thanks in advance.
AI: If $T$ is such a point then you have $$RT = 3\,ST\implies ||\vec{RT}||= 3\,||\vec{ST}||. $$
So you have two cases:
$\vec{RT}= 3\vec{ST} \implies \vec{T} = {1\over 2}(3\vec{S}-\vec{R})$, which gives $T=(10.5,\,-7)$;
$\vec{RT}= -3\vec{ST}\implies \vec{T} = {1\over 4}(3\vec{S}+\vec{R})$, which gives $T=(6.75,\,-1)$, the point you already found.
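A quick numerical sketch checking that both points satisfy $RT = 3\,ST$:

```python
import numpy as np

R, S = np.array([3.0, 5.0]), np.array([8.0, -3.0])
for T in ((3 * S - R) / 2, (R + 3 * S) / 4):
    print(T, np.isclose(np.linalg.norm(T - R), 3 * np.linalg.norm(T - S)))
# [10.5 -7. ] True    [ 6.75 -1.  ] True
```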
|
H: Problem with Summation of series
Question: What is the value of $$\frac{1}{3^2+1}+\frac{1}{4^2+2}+\frac{1}{5^2+3} ...$$ up to infinite terms?
Answer: $\frac{13}{36}$
My Approach:
I first find out the general term ($T_n$)$${T_n}=\frac{1}{(n+2)^2+n}=\frac{1}{n^2+5n+4}=\frac{1}{(n+4)(n+1)}=\frac{1}{3}\left(\frac{1}{n+1}-\frac{1}{n+4}\right)$$
Using this, I get,$$T_1=\frac{1}{3}\left(\frac{1}{2}-\frac{1}{5}\right)$$ $$T_2=\frac{1}{3}\left(\frac{1}{3}-\frac{1}{6}\right)$$ $$T_3=\frac{1}{3}\left(\frac{1}{4}-\frac{1}{7}\right)$$
I notice right away that the series does not condense into a telescopic series. How do I proceed further?
AI: You have essentially solved it already. Recognise that you can factor out $\frac 13$ from every term. You then have an infinite string of $+/-$ fractions, of which only the first three positive ones ($\frac 12$, $\frac 13$, $\frac 14$) survive in the sum; every other positive fraction is cancelled by a negative counterpart three steps further along. Therefore the result is $\frac 13 \left(\frac 12 + \frac 13 + \frac 14\right)=\frac{13}{36}$.
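A partial-sum sketch confirming $\frac{13}{36}\approx0.3611$:

```python
s = sum(1 / ((k + 1) * (k + 4)) for k in range(1, 100_001))
print(s, 13 / 36)   # both approximately 0.361111
```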
|
H: Solve $\int_{|z|=5} \frac{z^2}{(z-3i)(z-3i)}$
Solve $$\int_{|z|=5} \frac{z^2}{(z-3i)(z-3i)}\,dz.$$
So I am currently working on the unit about the Cauchy Integral Formula, where $$\int\frac{f(z)}{z-z_0}\,dz=2\pi i f(z_0).$$
The singularity $z=3i$ of the integrand lies within the circle $|z|=5$, so $z_0=3i$ and we can rewrite the integral as $$\int_{|z|=5} \frac{\frac{z^2}{z-3i}}{z-3i}\,dz,$$ where $f(z)=\frac{z^2}{z-3i}$. However, when I attempt to plug $z_0$ into $f(z)$, the denominator is zero, which is definitely wrong.
Where am I going wrong?
AI: There is this generalization of Cauchy's integral formula that you can apply here: under the same conditions as Cauchy's integral formula, you have, for each $n\in\Bbb Z_+$,$$\frac1{2\pi i}\oint_{|z-a|=r}\frac{f(z)}{(z-a)^{n+1}}\,\mathrm dz=\frac{f^{(n)}(a)}{n!}.$$So, take $n=1$ here. Of course, with $n=0$ this is just the standard formula.
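For the integral at hand, taking $f(z)=z^2$, $a=3i$ and $n=1$ (a sketch of the finishing step, worth verifying yourself):
$$\oint_{|z|=5}\frac{z^2}{(z-3i)^2}\,\mathrm dz=\frac{2\pi i}{1!}\,f'(3i)=2\pi i\cdot 6i=-12\pi.$$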
|
H: Rudin 2.2 Solution Explanation
The question is to prove that the set of all algebraic numbers is countable. He gives the hint that for every positive integer N there are only finitely many equations with n + $\vert a_0 \vert$ + $\vert a_1 \vert$ + ... + $\vert a_n \vert$ = N. I looked at the solution given at https://minds.wisconsin.edu/bitstream/handle/1793/67009/rudin%20ch%202.pdf?sequence=10&isAllowed=y but I'm having trouble understanding it. They let $A_n$ be the set of numbers satisfying the equation above, and then say that the set of algebraic numbers is the union of $A_n$ from $2$ to $\infty$. But that's the part I'm having trouble with. How is the set of algebraic numbers equal to this union of the $A_n$? And why do they start at $2$ rather than $1$?
AI: $A_N$ (not $A_n$) is not the set of numbers satisfying the equation $n+\vert a_0\vert+\vert a_1\vert+\cdots+\vert a_n \vert=N$. It is the set of numbers satisfying a polynomial equation $\sum_{i=0}^n a_ix^i=0$ (with $a_n\neq 0$) for some coefficients $a_0,\ldots,a_n$ which satisfy the equation $n+\vert a_0\vert+\vert a_1\vert+\cdots+\vert a_n \vert=N$.
Such a nonzero polynomial equation has at most $n$ solutions, so each $A_N$ is finite. As for starting at $2$: the polynomial must be non-constant, so $n\geq 1$ and $\vert a_n\vert\geq 1$, which forces $N=n+\vert a_0\vert+\cdots+\vert a_n\vert\geq 2$; the sets $A_0$ and $A_1$ would be empty.
|
H: Question about parametric equations
This is a question from MIT 18.01 single variable calculus on parametric equations:
I have the answers, but I don't quite understand it, especially the equation circled in pink. What does it mean? Moreover, I don't get how it equates to $\theta = \frac{\pi}{2} - \frac{\pi}{6}t$. I believe it might be a typo.
AI: $\theta$ is a linear function of $t$. So to derive its equation, think of how to find the equation of a line knowing its value at two places. You know that $\theta=\frac\pi2$ when $t=0$ and that $\theta=\frac\pi3$ when $t=1$. So calculate rise over run:
$$
\frac{\theta-\theta_0}{t-t_0}=\frac{\theta_1-\theta_0}{t_1-t_0}.
$$
and then plug in:
$$\frac{\theta - \frac\pi2}{t-0}=\frac{\frac\pi3-\frac\pi2}{1-0},$$
which rearranges to exactly $\theta = \frac\pi2 - \frac\pi6 t$, so the circled equation is not a typo.
|
H: Vector sequence in $l^2$ is Cauchy
I am trying to prove that the sequence of vectors in $l^2$ $ \{ v^{(n)} \} _{n \in \mathbb{N} }$ which is defined as $ \underline v^{(n)}:= \sum_{k=1}^{n} e^{-k \alpha} \underline e^{(k)}$ , $\alpha > 0$
where $\{ \underline e^{(n)} \} _{n \in \mathbb{N} }$ denotes the canonical basis for $l^2$ is Cauchy
I started this way :
$|| \underline v ^{(m)} - \underline v ^{(n)}|| =
|| \sum_{k=1}^{m} e^{-k \alpha} \underline e^{(k)} - \sum_{k=1}^{n} e^{-k \alpha} \underline e^{(k)}||_2 = \\|| \sum_{k=n+1}^{m} e^{-k \alpha} \underline e^{(k)} ||_2$
now this must be bounded because it's in $l^2$, but I don't know how to proceed from here.
The second part is to evaluate the norm of the given sequence vector as n goes to infinity. This sequence must converge since the $l^2$ space is complete but I think I must prove that without using this fact.
I am sorry for the long question and I will be thankful for any tip.
AI: Continuing from what you did: We have due to triangle inequality:
$$\|v^{(m)}-v^{(n)}\|=\|\sum_{k=n+1}^me^{-k\alpha}e^{(k)}\|\leq\sum_{k=n+1}^me^{-k\alpha},\;\;\;(*) $$
Now the series $\sum_{k=1}^\infty\frac{1}{e^{k\alpha}}$ converges (it is geometric with ratio $e^{-\alpha}<1$; see also the next computation), and since it converges it is Cauchy. This shows that the estimate in $(*)$ becomes small: given $\varepsilon>0$, find $N$ such that for $n,m$ greater than $N$ the partial sums are within $\varepsilon$ of each other.
The norm of $v^{(n)}$ can by computed via Parseval's identity: we have
$$\lim_{n\to\infty}\|v^{(n)}\|^2=\lim_{n\to\infty}\sum_{k=1}^ne^{-2k\alpha}=\sum_{k=1}^\infty(e^{-2\alpha})^k=\frac{e^{-2\alpha}}{1-e^{-2\alpha}} $$
so taking the square root gives the result.
|
H: Spivak's Calculus on Manifolds theorem 2-9 why is continuous differentiable needed
In the third last line, it is said that because each $g_i$ is continuously differentiable at $a$ then the constructed function $g$ is also differentiable at $a$.
I do not see why "continuously" differentiable is needed, because I think a function is differentiable if and only if each of its components is differentiable (Theorem 2-3(3)).
It seems that I am missing something very obvious
Edit: [the statement of Theorem 2-8 was attached as an image].
AI: Yes, you're right, you can drop the "continuously differentiable" hypothesis. Theorem $2$-$9$ is an application of the chain rule (Theorem $2$-$2$) and Theorem $2$-$2(3)$ in a special case. So, you can write the theorem as:
Let $g^1, \dots g^m: \Bbb{R}^n \to \Bbb{R}$ be functions which are differentiable at $a$, and define $g : \Bbb{R}^n \to \Bbb{R}^m$ by $g= (g^1, \dots g^m)$. Suppose $f: \Bbb{R}^m \to \Bbb{R}$ is differentiable at $g(a)$. Then $f \circ g$ is differentiable at $a$ (which means all the partial derivatives exist by Theorem $2$-$7$), and
\begin{align}
D_i(f \circ g)(a) &= \sum_{j=1}^m (D_jf)(g(a)) \cdot (D_ig^j)(a)
\end{align}
However, if you read the previous paragraph, Spivak says
"With Theorem $2$-$8$ to provide differentiable functions, and Theorem $2$-$7$ to provide their derivatives, the chain rule may therefore seem almost superfluous. However, it has an extremely important corollary concerning partial derivatives."
So, yes, the hypotheses of Theorem $2$-$9$ are not the weakest ones you can impose, but Spivak explicitly mentions that it is "an extremely important corollary". He says this because many functions you may encounter typically in the beginning, for example the kind in Problem $2$-$28$ or Problems $2$-$17$ to $2$-$20$ mostly satisfy the special hypothesis of continuous differentiability, and some are even infinitely continuously differentiable (which in those problems, is particularly easy to check) which means calculating partial derivatives becomes reduced to a mechanical procedure (rather than having to directly use the limit definition).
|
H: Convergence in probability implies mean squared convergence
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of $\mathcal{F}$ measurable random variables. Let $X$ be another $\mathcal{F}$ measurable random variable. I have $X_n \rightarrow X $ in probability. Additionally, $\mathbb{P}(|X_n|<L) = 1 \hspace{3mm} \forall \hspace{2mm}n \in \mathbb{N}$, where $L$ is a constant independent of $n$. I have to show that $X_n \rightarrow X$ in mean squared sense, i.e. as $n \rightarrow \infty$, $\mathbb{E}(X_n - X)^2 \rightarrow 0$. How do I go about this? Thanks.
AI: Convergence in probability: For any $\delta>0$, $\lim_{n\to\infty}\mathbb{P}(|X_n-X|>\delta)=0$.
Also, since $\mathbb{P}(|X_n|<L)=1$, we have that $|X_n|<L$ almost surely for all $X_n$. Since convergence in probability implies almost-everywhere convergence of a subsequence, we also have that $\mathbb{P}(|X|<L)=1$, i.e. $|X|<L$ almost surely. Now let $\delta>0$.
We have
$$\mathbb{E}[X_n-X]^2=\int|X_n-X|^2=\int_{\{|X_n-X|>\delta\}}|X_n-X|^2+\int_{\{|X_n-X|<\delta\}}|X_n-X|^2\leq$$ $$\leq\int_{\{|X_n-X|>\delta\}}|X_n-X|^2+\delta^2\leq\mathbb{P}(|X_n-X|>\delta)\cdot (4L^2)+\delta^2\to\delta^2$$
Since $\delta>0$ was arbitrary and we have that $\limsup_{n\to\infty}\mathbb{E}|X_n-X|^2\leq\delta^2$, we conclude that $\mathbb{E}|X_n-X|^2\to0$.
|
H: Degree of minimal polynomial over $\mathbb{Q}$
Can someone please help with this question.
Prove that degree of minimal polynomial over $\mathbb{Q} $ of $\zeta_{7} $ , a primitive 7th root of unity is not a prime number.
I thought as $\zeta_{7} $ =$1^{1/7} $ so I can write $1^{1/7} $ =x which implies $x^{7} $ =1 . So, it's prime.
But answer is not prime. So, I am missing some concept.
Can anyone please tell what mistake I am doing.
AI: The mistake is, $x^7-1$ is not the minimal polynomial of $\zeta_7$, as it is not irreducible over $\Bbb{Q}$. The minimal polynomial of $\zeta_7$ is the irreducible factor $x^6+x^5+x^4+x^3+x^2+x+1$ of $x^7-1$ over $\Bbb{Q}$.
|
H: Show that convergence in probabiltiy plus domination implies $L_p$ convergence
I want to show that if random variable $X_n $ converges to $X$ in probability (Let $(\Omega, \mathcal{A},P)$ be the probability triple) and $|X_n| < Y \,\,\forall\, n$ then $X_n$ converges to $X$ in $L_p$.
Here's my attempt so far:
Since $|X_n-X| \leq |X_n|+|X|\leq Y + |X|$, I can use the dominated convergence theorem like so: $$\lim_{n \to \infty}\int_{\Omega} |X_n-X|^p dP = \int_{\Omega} \lim_{n \to \infty} |X_n-X|^p dP$$ This is the part where I want to use convergence in probability, but can't quite figure out how.
edit: Assumption: $Y \in L_p,$ as pointed out by the two answers below.
AI: First of all, since $|X_n|\leq Y$ for all $n$, take a subsequence $(X_{n_k})$ of $(X_n)$ that converges to $X$ almost everywhere (this is possible due to convergence in probability) and taking limits in $|X_{n_k}|\leq Y$ conclude that $|X|\leq Y$ almost surely.
We will show that $\int|X_n-X|^p\to0$. Suppose that this isn't true, then there exists $\delta>0$ and a subsequence $(X_{n_k})\subset (X_n)$ such that for all $k$ it is
$$\int|X_{n_k}-X|^p\geq\delta\;\;\;\;(*)$$
But $X_{n}\to X$ in probability, so $X_{n_k}\to X$ in probability too. Therefore we may find a subsequence $(X_{n_{k_j}})$ of $(X_{n_k})$ such that $X_{n_{k_j}}\to X$ almost everywhere. But it is also $|X_{n_{k_j}}-X|^p\leq(2Y)^p$, so by the dominated convergence theorem we must have that
$$\int|X_{n_{k_j}}-X|^p\to0 $$
which is impossible due to $(*)$.
Edit: of course this is true under the assumption $Y\in L^p$. I assume you forgot to mention that.
|
H: Why do partitions correspond to irreps in $S_n$?
As stated for example in these notes (Link to pdf), top of page 8, irreps of the symmetric group $S_n$ correspond to partitions of $n$. This is justified with the following statement:
Irreps of $S_n$ correspond to partitions of $n$. We've seen that conjugacy classes of $S_n$ are defined by cycle type, and cycle types correspond to partitions. Therefore partitions correspond to conjugacy classes, which correspond to irreps.
I understand the equivalence between partitions, cycle types, and conjugacy classes, but I do not fully get the connection with irreps:
I can associate to a partition $\lambda\vdash n$ the conjugacy class of permutations of the form
$$\pi=(a_1,...,a_{\lambda_1})(b_1,...,b_{\lambda_2})\cdots (c_1,...,c_{\lambda_k}).$$
The fact that conjugacy classes are defined by cycle types comes from the fact that $\sigma\pi\sigma^{-1}$ has the same cycle type structure as $\pi$.
However, in what sense do conjugacy classes correspond to irreps? I can understand this if we restrict to one-dimensional representations, as then $\rho(\pi)=\rho(\sigma\pi\sigma^{-1})$ for all $\sigma$, but this is not the case for higher dimensional representations I think, being $S_n$ non-abelian.
AI: This is a standard result in representation theory. The number of (isomorphism types of) irreducible complex representations of a finite group is the same as the number of conjugacy classes of that group. In short, this follows from the fact that the irreducible characters (and, in this setup, two representations are isomorphic iff they have the same character) form a basis of the space of class functions (complex-valued functions on the group that are constant on conjugacy classes). Clearly, the dimension of the space of class functions is the number of conjugacy classes. The space of class functions comes with a natural inner product and the linear independence of the irreducible characters follows from the Schur orthogonality relations. That they generate the space follows from examining the orthogonal complement of their span. Setting up the necessary preliminaries and giving these arguments in full amounts to the first couple chapters in a standard text on representation theory, so I shall not do so here.
In general, there is no natural bijection between conjugacy classes and irreducible representations. A nice, albeit a bit more advanced, discussion of this can be found here on MO. In the particular case of $S_n$, however, there is a natural choice of bijection between irreducible representations and conjugacy classes. Establishing this bijection will be part of the notes you are reading.
|
H: $\{x\in X: f(x)=g(x)\}$ is closed in $X$ if $f$ and $g$ are continuous on $X$.
Could you help me prove the following, please?
Let $f,g:(X,d_X)\rightarrow\mathbb{R}$ be continuous functions. Prove that $\{x\in X: f(x)=g(x)\}$ is closed in $X$.
This result is very interesting to me, but I really do not know how to start the proof. It seems related to relatively closed sets, but I do not know how to use the continuity hypotheses to arrive at the result. I have no attempts, because I have no idea how to attack the problem.
AI: The inverse image of a closed set under a continuous function is closed. Take the function $f-g$; it is continuous as a difference of continuous functions. Take the closed subset $\{0\}$ of $\mathbb{R}$. Do you see why your set is equal to $(f-g)^{-1}(\{0\})$?
|
H: Uniform limit of a sequence in $C_{c}^{0}(\mathbb{R}^n)$ is in $C_{c}^{0}(\mathbb{R}^n)$
I am trying to prove the next:
Let $(f_k)$ be a sequence in $C_{c}^{0}(\mathbb{R}^n),$ the space of continuous functions with compact support from $\mathbb{R}^n$ to $\mathbb{R}.$ Let $K$ be a compact set in $\mathbb{R}^n$ which contains $\mathrm{supp}(f_k)$ for each $k\in\mathbb{N}.$ If $f_k\rightarrow f$ uniformly then $f\in C_{c}^{0}(\mathbb{R}^n).$
Because the uniform limit of continuous functions is a continuous function it remains to show $f$ has compact support, so is enough to prove that $\mathrm{supp}(f)\subset K;$ here I am having problems.
If $x\in K^c$ then $f_k(x) = 0$ for each $k\in\mathbb{N},$ and $f_k(x)\rightarrow f(x)$ implies $f(x)=0,$ but I cannot see how to ensure $x\notin\mathrm{supp}(f).$
Any kind of help is thanked in advanced.
AI: I believe that what worries you is the possibility that $x$ might be in the closure of the set $\{y\in\mathbb{R^n}: f(y)\ne 0\}$? Well, it can't be the case. You proved that for each $x\notin K$ we have $f(x)=0$. Hence $\{y: f(y)\ne 0\}\subseteq K$. But remember that $K$ is compact, hence closed. So:
supp$(f)=\overline{\{y: f(y)\ne 0\}}\subseteq K$
|
H: Is $f(x)=\sum_{n=1}^{\infty}\frac{x}{1+n^2x^2}$, $x\in[0,1]$ continuous on $[0,1]$
Let $f(x)=\sum_{n=1}^{\infty}\frac{x}{1+n^2x^2}$, $x\in[0,1]$.
Question: Show that $f$ is Lebesgue integrable and determine whether $f$ is continuous on $[0,1]$.
For the first part, I have no problem, I showed that it is Lebesgue integrable. But for the second part, I could not reach any result by using the definition. Also, I'm not sure if I should use the first part. Can you give me a hint for this part?
AI: Let $x_n = 1/n$ and note that
$$f(x_n) \geqslant \sum_{k=1}^n \frac{x_n}{1+k^2 x_n^2} \geqslant \frac{nx_n}{1+n^2x_n^2} = \frac{1}{2}$$
Since $x_n \to 0$ and $f(x_n) \not\to 0$, we have a discontinuity at $x = 0$. Elsewhere the function is continuous since convergence of the series is uniform on $[a,1]$ for all $0<a < 1$.
A clue that there is a discontinuity at $x=0$ is that the series converges uniformly on $[a,1]$ for all $0 < a <1$ but not on $[0,1]$. Uniform convergence is, of course, only a sufficient condition for continuity.
|
H: The vector $\mid \phi \rangle \langle \phi \mid \psi \rangle$ is the projection of a vector $\mid \psi \rangle$ along the vector $\mid \phi \rangle$?
I am currently studying the textbook Mathematical methods of quantum optics by Ravinder R. Puri. When going over some basic facts associated with bra-ket notation, the author says the following:
The vector $\mid \phi \rangle \langle \phi \mid \psi \rangle$ is the projection of a vector $\mid \psi \rangle$ along the vector $\mid \phi \rangle$.
I'm unsure of this. The scalar product $\langle \phi \mid \psi \rangle$ is a measure of the overlap between the vectors $\mid \psi \rangle$ and $\mid \phi \rangle$. So how is it then the case that $\mid \phi \rangle \langle \phi \mid \psi \rangle$ is the projection of a vector $\mid \psi \rangle$ along the vector $\mid \phi \rangle$? I would greatly appreciate it if people would please take the time to explain this.
AI: You are already on the right track
The scalar product $\langle \phi | \psi \rangle$ is a measure of the overlap between the vectors $| \psi \rangle$ and $| \phi \rangle$
Indeed, and overlap can also be read as projection, this is just a simple inner product of two vectors. And you know that is the way of getting a projection: via the inner product.
Now if you multiply that scalar times the vector $|\phi \rangle$ you will get the projection along $|\phi \rangle$
|
H: Minimize the maximum inner product with vectors in a given set
Given a finite set $S$ of non-negative unit vectors in $\mathbb R_+^n$, find a non-negative unit vector $x$ such that the largest inner product of $x$ and a vector $v \in S$ is minimized. That is,
$$
\min_{x\in \mathbb R_+^n,\|x\|_2=1}\max_{v\in S} x^Tv.
$$
It seems like a quite fundamental problem in computational geometry. Has this problem been considered in the literature?
It can be formulated as an infinity norm minimization problem, which can in turn be expressed as a quadratically constrained LP. If the rows of matrix $A$ are the vectors in $S$, we seek
$$
\begin{align}
&&\min_x\|Ax\|_\infty
\\ \rm{s.t.} && x^Tx=1
\\ && x\geq 0.
\end{align}
$$
But the quadratic constraint is non-convex, so this is not very encouraging.
I am interested in understanding properties of the solution, rather than obtaining it numerically, although a reasonably efficient exact numerical method might be insightful, of course.
AI: My friend and I just published a paper on a very similar version of this problem: we consider it for arbitrary unit vectors in $S$, and $x$ is also an arbitrary unit vector. We were basically able to derive matching upper and lower bounds. We call the problem "SphericalDiscrepancy." The problem is APX-hard, but we mention some algorithms that work well in practice, including our own (our algorithm has an embarrassingly large running time, but it's polynomial in the input anyway). The paper can be found here. A survey on the problem can be found here (friend's thesis).
For your case, the non-negativity constraints don't matter in terms of lower bounds when $|S| = O(n)$: any positive orthonormal basis will have $x^Tv \geq 1/\sqrt{n}$ for some $v\in S$.
My heart tells me that the same is true for upper bounds, but I would have to think about it longer. It seems close to the boolean discrepancy problem, where $S \subseteq \{ 0, 1 \}^n$, but $x \in \{ \pm 1 \}^n$.
Note that there is no LP version of this problem, as the unit sphere is non-convex. Standard convex optimization techniques don't really apply here; at least not obviously.
EDIT: Some special cases (somewhat hand-wavy proofs).
Upper Bound
I'm using subscript $R$ to denote sampling uniformly at random below...
Let $x \sim_{R} \{0,1\}^n$ and suppose that $x$ has $\approx$ half of its entries set to $1$. Let $S \subseteq_{R} \{0,1\}^n$, with $S = \{v_1 , \dots, v_m \}$. Let $Y_i$ indicate the event that $|x^Tv_i| > n/4 + \sqrt{2m \log(2n)}$. Note that $\mathbb{E}[x^Tv_i] = n/4$.
By the Hoeffding bound ($p=\frac{1}{4}$, $\epsilon = \frac{\sqrt{2m \log (2n)}}{n}$), we have $\Pr[Y_i = 1] \leq 1/m$. This implies (via the probabilistic method and a union bound) that some $x$ has max inner product at most $n/4 + \sqrt{2m \log(2n)}$.
Lower Bound
The lower bound is similar; we just need to construct a set of vectors for which no good $x$ exists, and use the probabilistic method to show that there exist $S$ of size $m$ such that for any $x \in \{0,1\}^n$ there exists some $v_i$ with $|x^Tv_i|>\frac{n}{4}+O(\sqrt{n \log(m/n)})$. This is again done by choosing the entries of $S$ uniformly at random, and noting that there exists a constant $c>0$ s.t. $\Pr[|x^Tv_i|> \frac{n}{4}+c \sqrt{n \log(m/n)}] <(\frac{1}{2})^{n/m}$. Since the entries of $S$ were chosen uniformly at random, the probability that a random $x$ does not violate the inequality for some $i$ is at most $(1/2)^{n/m}$. By the probabilistic method, there exists some $S$ with $|S|=m$ such that no $x$ satisfies the inequality for all the $v_i$'s.
Generalizing to the sphere is pretty easy here: just divide by the norm of $x$. We did use the fact that approximately half of the entries of $x$ are $1$, but you can probably handle the alternative as a special case.
|
H: A quiz question based on finding sum of series
This question was asked in my analysis quiz and I had no clue in the exam how it could be solved. So, I am asking here.
Find the sum of the series $\sum_{n=0}^{\infty} \frac{n^2}{2^{n}}$.
Unfortunately, I have no idea on how to approach this problem.
AI: We start with $$\sum_{n=0}^\infty \frac{x^n}{2^n} = \frac{2}{2-x}.$$
Taking the derivative, we see that $$\sum_{n=1}^\infty \frac{nx^{n-1}}{2^n} = \frac{2}{(2-x)^2}.$$
Multiplying by $x$ and taking the derivative once more, we see that $$\sum_{n=1}^\infty \frac{n^2 x^{n-1}}{2^n} = \frac{4+2x}{(2-x)^3}.$$
Lastly, plugging in $x=1$ gives that $$\sum_{n=1}^\infty \frac{n^2}{2^n} = 6.$$
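A quick numerical sketch:

```python
print(sum(n * n / 2**n for n in range(200)))   # 6.0 up to floating-point error
```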
|
H: number of submodules of direct sum of simple modules
Let $M$ be a simple $R$-module. Show that the number of submodules of $M \oplus M$ can be infinite.
AI: Consider $M=R=\mathbb{R}$. This is a simple $\mathbb{R}$-module, obviously. Note that for each $\lambda\in\mathbb{R}$ the set
$$M_\lambda:=\{(x,\lambda x): x\in\mathbb{R}\}\subset\mathbb{R}\oplus\mathbb{R}$$
is a submodule of $\mathbb{R}\oplus\mathbb{R}$, and these submodules are pairwise distinct, so there are infinitely many of them.
|
H: implicit differentiation and taking limit on derivative
I have this equation $x^3-xy^2+y^3=0$ and I want to know the value of the derivative at $(0,0)$. Through implicit differentiation I find $y'=\frac{y^2-3x^2}{3y^2-2xy}$. Now for $x=0,y=0$ this fraction becomes an indeterminate form. Upon graphical inspection I think if I were to draw a tangent at the Origin, it will have a slope very close to $-0.75$, but what is an easy method to find out the actual value? Note, when you graph the original function it LOOKS like a line, but it ISN'T. Thank you for your input.
AI: Because the function is homogeneous (every term is has degree $3$), the graph is actually in fact a union of lines through the origin. Since the cubic polynomial $u^3-u^2+1=0$ has precisely one real root $u_0$, the curve is in fact just the line $y=u_0x$.
Here's a general approach to such problems, if you're interested. Substitute $y=ux$ and factor out the highest power possible of $x$. In this case, you're left with $x^3(u^3-u^2+1)=0$, so in the $ux$-plane we get $x=0$ and $u^3-u^2+1=0$. This tells us that our curve becomes (aside from $x=0$) the curve $u^3-u^2+1=0$, which we've already said is just the line $u=u_0$. This means that our equation reduces to $y=u_0x$, as we said.
Let me show you a different example. Suppose you had instead the curve $y^2-x^2-x^3=0$. If you try implicit differentiation at the origin, you have the same situation. Now if I substitute $y=ux$, I get
$x^2(u^2-1-x) = 0$, and so I'm looking at the parabola $x=u^2-1$ in the $xu$-plane. When $x=0$, we get $u=1$ and $u=-1$. This tells me that the lines $y=x$ and $y=-x$ are the two tangent lines to our original curve at the origin.
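A numerical sketch recovering the single real root $u_0$ of $u^3-u^2+1$, which matches the slope of roughly $-0.75$ observed graphically:

```python
import numpy as np

roots = np.roots([1, -1, 0, 1])                  # coefficients of u^3 - u^2 + 1
real = roots[np.abs(roots.imag) < 1e-9].real
print(real)                                      # [-0.75488...]
```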
|
H: linear combination of periodic sequence is also periodic?
Let $x_t$ and $y_t$ real periodic sequences such that the least common multiple of their periods exists. Then, given a constant $a$, $x_t+ay_t$ is also periodic with period, say, $p$.
Does $x_t-ay_t$ also need to have period $p$?
Thanks in advance.
Observations
Let $p_1$ be the fundamental period of $y_t$. I see that $ay_t$ has the same period as $y_t$. Also, $-ay_t=a(y_t-2y_t)$ and $y_t-2y_t$ is also $p_1$-periodic. Hence $x_t-ay_t$ is a sum of $x_t$ with another $p_1$-periodic sequence.
I know that it is $p*$-periodic, with $p*$ being the least common multiple of both periods. But I'm interested on the fundamental period, and not its multiples.
AI: Let $$x=\{(-1)^n\}=\{1,-1,1,-1,\cdots \}$$ and let $y=x$ and $a=1$.
Then $x+ay=x+y=\{2,-2,2,-2,\cdots\}$ has fundamental period $p=2$. But $x-ay=x-y=\{0,0,\cdots\}$ has fundamental period $1$.
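A tiny script makes the counterexample concrete; `fundamental_period` below is a hypothetical helper that checks periods over a finite sample, so treat it as an illustration rather than a proof:

```python
def fundamental_period(seq):
    # Smallest p such that this finite sample repeats with period p.
    n = len(seq)
    for p in range(1, n):
        if all(seq[i] == seq[i + p] for i in range(n - p)):
            return p
    return n

x = [(-1)**k for k in range(12)]        # 1, -1, 1, -1, ...
y = x                                    # take y = x and a = 1
plus = [u + v for u, v in zip(x, y)]     # 2, -2, 2, -2, ...
minus = [u - v for u, v in zip(x, y)]    # 0, 0, 0, ...
print(fundamental_period(plus), fundamental_period(minus))  # 2 1
```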
|
H: Any linear transformation in $\mathbb{C}$ (complex vector space) is a multiplication by $\alpha \in \mathbb{C}$
In the Linear Algebra Done Wrong book, one of the exercises was to show that any linear transformation in $\mathbb{C}$ is a multiplication by $\alpha \in \mathbb{C}$.
Here's the proof in the solutions part:
"Suppose $T:\mathbb{C}\rightarrow \mathbb{C}$ is a linear transformation. Let $T(1)=a+ib$. Then, $T(-1)=-T(1)=-a-ib$. Since $i^2=-1$, we have $T(-1)=T(i^2)=iT(i)$, which means $T(i)=\frac{-a-ib}{i}=i(a+ib)$. So, for any $w=x+iy \in\mathbb{C}$, we have
$\begin{align*}
T(w) &= T(x+iy) \\
&= xT(1)+yT(i) \\
&= x(a+ib)+yi(a+ib) \\
&= (x+iy)(a+ib) \\
&= wT(1)."
\end{align*}$
I understand this proof. However, I don't understand what is wrong with the following proof. (I assume there is an error in it because it is much shorter, and the proof in the book seems a little convoluted if my proof is correct.)
"Let $T:\mathbb{C}\rightarrow \mathbb{C}$ be a linear transformation. Let $T(1)=a+ib$ and $z=x+iy \in \mathbb{C}$. Then:
$\begin{align*}
T(z) &= T(x+iy) \\
&= T(x)+T(iy) \\
&= xT(1)+iyT(1) \\
&= (x+iy)T(1) \\
&= zT(1)."
\end{align*}$
What am I missing?
AI: A more general fact is that every linear map from a 1-dimensional vector space to itself is multiplication by some scalar. If $V$ is a vector space such that $\dim V = 1$ and $T \in \mathcal{L}(V,V)$, we know that some nonzero vector $v$ generates $V$, so we may ask what $T$ does to $v$.
Since $Tv \in V$, there must be some scalar $\lambda$ such that $Tv = \lambda v$.
Now let $w \in V$ be arbitrary. Then $w = \alpha v$ for some scalar $\alpha$, so
$$ Tw = T(\alpha v) = \alpha Tv = \alpha ( \lambda v) =\lambda (\alpha v) =\lambda w. $$
|
H: Determine extrema of a multivariate function defined on a set
Given the function $$f(x,y)=2+2x^2+y^2$$ and the set $$A:=\{(x,y)\in\mathbb R^2 \mid x^2+4y^2\leq 1\},$$ with $f:A\to\mathbb R$, how do I determine the global and local extrema on this set? Normally I would not find this hard, since it would just be a matter of solving $\nabla f=0$; however, I am not sure how this works when $A$ is not something simple like $\mathbb R^2$.
AI: The fact that we are restricted to the set $A$ doesn't change the principle at hand. The Extreme Value Theorem states that if the domain is closed and bounded then a maximum (minimum) exists. Since the constraint here is $x^2+4y^2\leq 1$, your domain $A$ is the closed ellipse together with its interior, so it is closed and bounded and both a max and a min are guaranteed.
We then use the technique we know: check where $\nabla f=0$, and all maxima (minima) occur at either these points or on the boundary. In this case we have
$$\nabla f=(4x,2y)=0\implies x=y=0$$
This gives one critical point. We could check analytically, but since the function increases in every direction away from the origin, this critical point must be a minimum. As such, $f$ has a minimum at $(0,0)$.
What about a maximum? The only interior critical point is the minimum at the origin, and the function keeps growing as we move toward the border of the ellipse, so the maximum must occur on the boundary, where $\nabla f=0$ need not hold. There are several ways of approaching this part, such as Lagrange multipliers or parameterization.
Since the boundary is part of the domain, the problem can be finished a few ways. I'll detail two of them.
The single variable calculus way of doing it is through parameterizations. We can actually parameterize the boundary of the domain with the following equations
$$x=\cos(t),\;\;y=\frac{1}{2}\sin(t),\;\;0\leq t<2\pi$$
As a single variable problem this then reads
$$g(t)=f(x,y)=2+2x^2+y^2=2+2\cos^2(t)+\frac{1}{4}\sin^2(t)$$
The derivative is
$$0=g'(t)=-4\cos(t)\sin(t)+\frac{1}{2}\sin(t)\cos(t)\implies\sin(t)=0\mbox{ or }\cos(t)=0$$
As such the boundary has critical points whenever $t=0,\pi/2,\pi,3\pi/2$. Plugging these values of $t$ into $g$ gives a maximum value of $4$, occurring both at $(1,0)$ and $(-1,0)$.
The multivariate approach is to use Lagrange multipliers. We seek the extrema of $f$ subject to the constraint $g(x,y)=x^2+4y^2=1$. This gives the auxiliary function
$$L(x,y,\lambda)=2+2x^2+y^2-\lambda(x^2+4y^2-1)$$
The derivatives are
$$L_x=4x-2\lambda x=0\implies x=0\mbox{ or }\lambda=2$$
$$L_y=2y-8\lambda y=0\implies y=0\mbox{ or }\lambda=\frac{1}{4}$$
$$L_\lambda=x^2+4y^2-1=0$$
Since $\lambda$ cannot be both $2$ and $1/4$, we have either $x=0$ or $y=0$, which gives points of $(\pm1,0)$ and $(0,\pm1/2)$. This gives us the same maximum.
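For readers who like to double-check, here is a short sketch that solves the same Lagrange system symbolically (assuming sympy is available; just a verification, not part of the method):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = 2 + 2*x**2 + y**2
g = x**2 + 4*y**2 - 1  # the boundary of the ellipse

L = f - lam*g
sols = sp.solve([sp.diff(L, x), sp.diff(L, y), g], [x, y, lam], dict=True)
for s in sols:
    print((s[x], s[y]), f.subs(s))  # (+-1, 0) -> 4 (the max), (0, +-1/2) -> 9/4
```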
|
H: How to prove that for every integer $k \geq 2$ we have $k^{1/k} \leq e^{1/e}$ without using the first derivative test?
I just stumbled across this cool property. By doing some calculus I could prove it: the function $f(x) = x^{1/x}$ has a local maximum at $x = e$, and the derivative changes sign at that point. But I was wondering whether there is a way to prove it without using the first derivative test?
AI: Here's a proof I saw a number of years ago that $e^{1/e} > x^{1/x}$ for $x > e$. All it uses is $e^z > 1+z$ for $z > 0$.
If $x > e$, apply this with $z = \frac{x-e}{e} > 0$:
$$e^{\frac{x-e}{e}} \gt 1+\frac{x-e}{e} = 1+\frac{x}{e}-1 = \frac{x}{e},$$
so $e^{\frac{x}{e}-1} \gt \frac{x}{e}$, i.e. $e^{\frac{x}{e}} \gt x$, and taking $x$-th roots (both sides exceed $1$) gives $e^{1/e} \gt x^{1/x}$.
The remaining case $k = 2$ amounts to $2^{1/2} < e^{1/e}$, i.e. $\ln(2) < 2/e$, which holds since $\ln(2) < .7$ and $2/e > .73$.
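A quick numerical sanity check in plain Python (reassurance, not a proof) confirms the strict inequality for many integers at once:

```python
import math

bound = math.e ** (1 / math.e)  # e^(1/e) is about 1.4447
print(all(k ** (1 / k) < bound for k in range(2, 1000)))  # True
```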
|
H: Prove that if $\alpha$ is any cycle of length $n$, and $\beta$ is any transposition, then $\{\alpha, \beta\}$ generates $S_n$
Question 6 from the above set of exercises has me a bit stumped. I see that you can rotate then relabel $\alpha = (a_1 a_2 \cdots a_n)$ such that $\alpha = (1 2 \cdots n)$ and $\beta = (1 m)$ — that almost makes the problem reducible to the result of question 5, but not quite, since we do not know that $m = 2$, and this fact is crucial in the proof for question 5. I would appreciate any tips.
AI: Statement 6 is not true as stated. For example, $(1\ \ 3)$ and $(1\ \ 2\ \ 3\ \ 4)$ fail to generate $S_4$. In particular, the subgroup generated by these two elements is isomorphic to the dihedral group $D_4$ of order $8$.
In fact, we can see that for $n>2$, $\alpha = (1\cdots n)$ and $\beta = (1\ \ m)$ will generate $S_n$ if and only if $n$ and $m-1$ are coprime.
For the $\Longleftarrow$ direction, it suffices to repeat the construction in the hint for part 5.
In particular, we have
$$
\alpha^{m-1}(1\ \ m)\alpha^{1 - m} = (m\quad 2m - 1), \quad (1\ \ m)(m\quad 2m - 1)(1 \ \ m) = (1\ \ 2m - 1).
$$
Here, $2m - 1$ is taken modulo $n$, as are any further operations here. With that, we have constructed every transposition of the form $(1\quad 1 + k(m-1))$. However, because $m-1$ and $n$ are relatively prime, we see that every element of $\{0,1,\dots,n-1\}$ can be written in the form $k(m-1)$ for some $k$. So, we have constructed every transposition of the form $(1\ \ k)$ (for $k \in \{1,\dots,n\}$), which was precisely the point of the hint.
For the $\implies$ direction: let $d = \gcd(m-1,n)$. Note that $\alpha, \beta$ have the property that whenever $i \equiv j \pmod d$, then it holds that $\alpha(i) \equiv \alpha(j)$ and $\beta(i) \equiv \beta(j)$, modulo $d$. It follows that every element of the subgroup generated by $\alpha$ and $\beta$ also has this property. We therefore see that the subgroup generated by $\alpha$ and $\beta$ does not include all of $S_n$ because, for instance, the element $(1\ \ 2)$ does not have this property.
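The counterexample at the top is easy to confirm with sympy's permutation machinery (assumed available; note that its permutations are $0$-based, so $(1\ 3)$ becomes $(0\ 2)$ and $(1\ 2\ 3\ 4)$ becomes $(0\ 1\ 2\ 3)$):

```python
from sympy.combinatorics import Permutation, PermutationGroup

alpha = Permutation([1, 2, 3, 0])  # the 4-cycle (1 2 3 4), written 0-based
beta = Permutation([2, 1, 0, 3])   # the transposition (1 3), written 0-based
G = PermutationGroup([alpha, beta])
print(G.order())  # 8, not 24 = |S_4|; here gcd(m-1, n) = gcd(2, 4) = 2
```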
|
H: How to prove that a sequence is Cauchy
How do I show if $x_n = \frac{n^2}{n^2 -1/2}$ is a Cauchy sequence? (using the definition of Cauchy sequence)
My attempt: A sequence is Cauchy if $\forall \epsilon>0$ $\exists N \in \mathbb N$ $\forall m,n \geq N$: $|x_n -x_m|\leq \epsilon$.
$|x_n -x_m|=\left|\frac{n^2}{n^2 -1/2} -\frac{m^2}{m^2 -1/2}\right| \leq\left|\frac{n^2}{n^2 -1/2}-1\right|+\left|1-\frac{m^2}{m^2 -1/2}\right|= \frac12\left|\frac{1}{n^2 -1/2}\right|+\frac12\left|\frac{1}{m^2 -1/2}\right|$
We want $\frac12\left|\frac{1}{n^2 -1/2}\right| \leq \epsilon$ and also $\frac12\left|\frac{1}{m^2 -1/2}\right| \leq \epsilon$,
so that $\frac12\left|\frac{1}{n^2 -1/2}\right|+\frac12\left|\frac{1}{m^2 -1/2}\right| \leq 2 \epsilon$;
this holds if we choose $N \in \mathbb N$ such that $N >\sqrt{\frac{1}{2\epsilon}+\frac{1}{2}}$.
AI: The definition you posted is correct. Note though that
$$
x_n
= \frac{n^2}{n^2-1/2}
= 1 + \frac{1/2}{n^2-1/2},
$$
and it's clear this is a sequence which decreases to $1$, so $x_{n+1} < x_n$ for all $n$. (You could at this point claim that since the sequence is convergent, it is also Cauchy, but if you need a proof from the fundamentals, read on).
Assuming $N<m<n$,
$$
\begin{split}
\left|x_m - x_n\right|
&= x_m - x_n \\
&= \left(1 + \frac{1/2}{m^2-1/2} - 1 - \frac{1/2}{n^2-1/2}\right) \\
&= \frac12 \left(\frac{1}{m^2-1/2} - \frac{1}{n^2-1/2}\right) \\
&= \frac12
\left(\frac{n^2 - m^2}
{\left(m^2-1/2\right)\left(n^2-1/2\right)}\right) \\
&\le \frac12
\left(\frac{n^2}
{\left(m^2-1/2\right)\left(n^2-1/2\right)}\right) \\
&\le \frac{1/2}{m^2-1/2} \left(\frac{n^2}{n^2-1/2}\right)\\
&\le \frac{1/2}{m^2-1/2} \left(1 + \frac{1/2}{n^2-1/2}\right)\\
&\le \frac{1}{m^2-1/2}\\
&\le \frac{1}{N^2}.
\end{split}
$$
Can you find what $N$ you need to pick in terms of $\epsilon$ to have that expression $< \epsilon$ in the end?
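As an aside, the bound in the last line is easy to probe numerically (a quick Python sketch, not part of the proof):

```python
def x(n):
    return n**2 / (n**2 - 0.5)

N = 10
gaps = [abs(x(m) - x(n)) for m in range(N + 1, 60) for n in range(m + 1, 60)]
print(max(gaps) <= 1 / N**2)  # True: every gap beyond N stays below 1/N^2
```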
|
H: Number of permutations for hiring mr.X
Let there be $9$ candidates for a job opening (mr. X included) and $3$ judges. Every judge produces a priority list (1st-9th) of the candidates (1st being the best and 9th the worst). A candidate is hired only if he appears in the first $3$ spots of each of the three lists. In how many cases is mr. X hired?
I am thinking that for each judge there are :
$ (9-1)! * 3 $
permutations (cases) for mr.X to be hired. Given the above, then for all three judges it is:
$(8!*3)^3$
cases.
But that seems like extremely many cases.
AI: Yes, correct. Per judge there are $9!$ possible lists, of which $3\cdot 8!$ place mr. X in the first three spots, so the probability of being in the first $3$ positions out of $9$ is $\frac{3\cdot 8!}{9!}=\frac{3}{9}=\frac{1}{3}$.
Then with $3$ independent judges the probability is $(\frac{1}{3})^3=\frac{1}{27}$, matching your count of $(8!\cdot 3)^3$ favorable triples of lists out of $(9!)^3$.
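A Monte Carlo sketch (plain Python; candidate $0$ plays the role of mr. X) lands near $1/27\approx 0.037$, as expected:

```python
import random

trials = 100_000
hits = sum(
    all(random.sample(range(9), 9).index(0) < 3 for _ in range(3))
    for _ in range(trials)
)
print(hits / trials, 1 / 27)  # both around 0.037
```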
|
H: Does $ \sum\left( (n^3+1)^{\frac{1}{3}} -n \right) $ converge or diverge?
The tests I know are Cauchy's root test, Cauchy's integral test, Raabe's test, the logarithmic test and d'Alembert's ratio test. I don't know which test I can use to prove that this series converges.
$$\sum \left( (n^3+1)^{\frac{1}{3}} -n \right) $$
AI: If we can prove that
$$
(n^3+1)^\frac13-n \leq \frac1{n^2},
$$
then we can use the comparison test to prove that the series converges.
Let's find $x$ such that
$$
(n^3+x)^\frac13-n \leq \frac1{n^2}.
$$
$$
\begin{align}
(n^3+x)^\frac13 &\leq \frac1{n^2}+n\\
n^3+x &\leq \left(\frac1{n^2}+n\right)^3\\
n^3+x &\leq n^3+3+\frac{3}{n^{3}}+\frac{1}{n^{6}}\\
x &\leq 3+\frac{3}{n^{3}}+\frac{1}{n^{6}},
\end{align}
$$
and this last condition certainly holds whenever $x \leq 3$.
Since in your particular case $x=1\leq3$, we get $0\leq(n^3+1)^{1/3}-n\leq\frac1{n^2}$ for every $n$, so the series converges by comparison with $\sum 1/n^2$.
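A quick numerical probe (plain Python; floating point only, so a sanity check rather than a proof) agrees with the comparison:

```python
ok = all((n**3 + 1) ** (1 / 3) - n <= 1 / n**2 for n in range(1, 10_000))
print(ok)  # True: each term sits below 1/n^2
print(sum((n**3 + 1) ** (1 / 3) - n for n in range(1, 10_000)))  # partial sums stabilize
```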
|
H: Group of order 28 with normal subgroup of order 4 is abelian
Herstein ch2.11 q19
Prove that if $G$ of order 28 has normal subgroup of order 4, then $G$ is abelian.
My attempt: The Sylow $7$-subgroup lies in the center. So $\circ(Z)=7, 14$ or $28$.
For $\circ(Z)=14$, $G/Z$ is cyclic. But this argument fails for $\circ(Z)=7$.
I have not utilized the fact that $G$ has a normal subgroup of order $4$.
Please give a hint. Please do not give solution. Thanks!
(This looks problematic. Also in one of the comments, meaning of $[\operatorname{Aut} H :1]$ is unclear)
Edit: Thanks to @DietrichBurde 's comment, this answers this question. So my post is a duplicate.
AI: Let $H$ be the subgroup of order 4. It is well known that $H$ must be abelian.
Now, the map that sends $g \in G$ to conjugation by $g$ defines a homomorphism from $G$ to $\operatorname{Aut} H$.
What can you say about the image and the kernel of this homomorphism?
Can you see how to conclude?
|
H: Why does it make sense to talk about the 'set of complex numbers'?
In my complex analysis course we've discussed quite a few times the idea that $\mathbb{C}$ is really 'the same thing' as $\mathbb{R}^2$ with the added complex multiplication operation. I've also read a number of the popular posts here including this one: What's the difference between $\mathbb{R}^2$ and the complex plane?.
This post: Is $\mathbb R^2$ a field? explains that the complex numbers can be defined to be the field of $(\mathbb{R}^2,+,*)$, where the operations are the familiar $\mathbb{R}^2$ addition, and complex multiplication.
In my (basic understanding) of algebra, there is a fundamental difference between the group $(G,*)$, and the set $G$. That is to say that we can meaningfully talk about elements of the set $G$, but not directly about 'elements of the group'. I.e. the group itself is a fundamentally different object to the set $G$, and it tells us about relationships between the elements of $G$.
In this sense, is it possible to talk about 'the set of complex numbers'? If we use the definition of the complex numbers as being the field $(\mathbb{R}^2,+,*)$, then doesn't this really mean that the 'complex numbers' IS a ring? In other words, there is no meaningful way to talk about 'elements' of this ring? If this is the case, then is the set of complex numbers literally $\{(\mathbb{R}^2,+,*)\}$?
The reason I ask is because I am having some conceptual difficulty when confronting the idea of dealing with 'elements' of the complex set. For example if we say $\mathbb{C} = \{x + iy: x,y \in \mathbb{R}\}$, then isn't this just a subset of $\mathbb{R}^2$, since $x + iy = (x,0) + (0,1)*(y,0)$? In this sense this set doesn't actually tell us about the structure imposed on the elements of $\mathbb{R}^2$?
EDIT: I realise my question may be slightly unclear, so I would like to try to express it in the context of the set of natural numbers.
When we talk about $\mathbb{N}$, we are talking about a collection of objects, in which these objects satisfy certain properties, either by definition or by theorem. In particular, when we construct $\mathbb{N}$, each element is precisely defined. So say the symbol $0$ represents the null set, and the symbol $1$ represents $\{0\}$ and so on so forth.
But coming to defining $\mathbb{C}$, I am not sure how to carry out the same construction as with $\mathbb{N}$. For example, we want each element of the set of complex numbers to abide by the property of complex multiplication, because this is what makes it fundamentally different from $\mathbb{R}^2$. But this is fundamentally a relationship between two different complex numbers: it requires the operation $*$ to even make sense. So if we construct a set without the structure, we literally just end up with the set $\mathbb{C} = \mathbb{R}^2$, because the structure cannot be 'codified' into our construction of the set, since that would require a definition of $*$.
AI: "Element of a group (ring, topological space, etc.)" is simply a common abreviation for "element of the underlying set of a group (ring, topological space, etc.)".
|
H: Measure theory: $\mu(\liminf E_n) \leq \liminf \mu(E_n)$
Let $(E_n)_{n \in \mathbb{N}}$ be some measurable sets. Define $\lim \inf E_n$ to be
$$\bigcup_{n \in \mathbb{N}} \bigcap_{m = n}^\infty E_m$$
I want to show that $\mu(\liminf E_n) \leq \liminf \mu(E_n)$. I thought we might take an increasing sequence $(F_n)$ where $F_n = E_1 \cup \dots \cup E_n$ and use that $\mu(\liminf F_n) = \lim_{n \to \infty} \mu(\bigcap_{m = n}^\infty F_m)$, but I am stuck at relating $\bigcap_{m = n}^\infty F_m$ to the $E_n$ sequence, and I don't see how the $\liminf$ of a sequence of real numbers factors in.
AI: $X_n = \bigcap_{m=n}^{\infty} E_m$ is an increasing sequence of sets, hence $\mu(\bigcup_{n=1}^\infty X_n) = \lim \mu(X_n)$ by continuity from below (a consequence of $\sigma$-additivity). But $X_n\subset E_n$, hence $\mu(X_n)\le \mu(E_n)$. Hence
\begin{equation}
\mu(\liminf E_n) = \lim \mu(X_n) = \liminf\mu(X_n)\le\liminf\mu(E_n)
\end{equation}
|
H: If $ax+by+cz = 0$, show that $span(x,y)=span(y,z)$
If we have $ax+by+cz = 0$ show that $span(x,y)=span(y,z)$
My steps:
$$x = -\frac{b}{a}y-\frac{c}{a}z\Rightarrow x\ \in\ span(y,z)$$
$$z=-\frac{a}{c}x-\frac{b}{c}y\Rightarrow z \in\ span(x,y)$$
But we still have not proved that $span(x,y)=span(y,z)$ and I'm not entirely sure how to proceed from here
AI: I'll assume $abc\neq 0$.
As you pointed out, $x\in \text{span}(y,z)$ and $z\in \text{span}(x,y)$ (and similarly $y\in \text{span}(x,z)$). This means
$$
\text{span}(y,z)\subseteq \text{span}(y,\text{span}(x,y))=\text{span}(x,y)
$$and likewise
$$
\text{span}(x,y)\subseteq \text{span}(y,\text{span}(y,z))=\text{span}(y,z)
$$Another way to see it: the relation forces $x$, $y$, $z$ into a common subspace of dimension at most $2$, and (since the coefficients are nonzero) any two of the three vectors span that same subspace.
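Here is a small numeric illustration (assuming numpy; one concrete choice of vectors, a sketch rather than a proof):

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0])
y = np.array([0.0, 1.0, 1.0])
a, b, c = 1.0, 2.0, -1.0
z = -(a * x + b * y) / c  # forces a*x + b*y + c*z = 0

print(np.linalg.matrix_rank(np.column_stack([x, y])))     # 2
print(np.linalg.matrix_rank(np.column_stack([x, y, z])))  # still 2: z lies in span(x, y)
```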
|
H: Index of cyclic subgroup $\langle h \rangle$ in a subgroup $H$ with finite index in a finitely generated group $G$
Let $\phi: H\to \mathbb{Z}$ be a homomorphism with finite kernel.
Let $h \in H$ such that $\phi(h) \not=0$.
Can someone help me understand why exactly the cyclic group $\langle h\rangle$ has finite index in $H$?
AI: The image $\phi(H)$ is a cyclic subgroup of $\mathbb{Z}$, generated by $\phi(t)$ for some $t\in H$. Let $K$ be the finite kernel; note $K$ is normal in $H$. Every element $z\in H$ satisfies $\phi(z)=k\,\phi(t)$ for some integer $k$, hence $z=t^k s$ with $s\in K$, and so $H=\langle t\rangle K$. Your element $h$ is then equal to $t^cs'$ for some integer $c$ and $s'\in K$, and $c\neq 0$ because $\phi(h)\neq 0$. Using normality of $K$, we get $\langle h\rangle K=\langle t^c\rangle K$, which has finite index in $H=\langle t\rangle K$ since $\langle t^c\rangle$ has index $|c|$ in $\langle t\rangle$. Finally, $\langle h\rangle$ has index at most $|K|$ in $\langle h\rangle K$, so $\langle h\rangle$ has finite index in $H$.
|
H: Why isn't conformal mapping more flexible?
I have been spending some time familiarizing myself with the basics of conformal mapping, and found myself somewhat stumped with the limitations of some of the methods I have encountered. Möbius transformations or Schwarz-Christoffel maps, for example, have very strict requirements on where and how they can map the unit disk.
My intuition, however, is that conformal maps should be a lot more general, and a lot more flexible, than that. Consider for example the shape on the right side in the figure below. Imagine that the three blue closed lines correspond to topographic contour lines of a monotone hill. Since the hill is monotone, you can walk to the lowest contour and/or the summit from any point within the hill by walking perpendicular to the contours (blue closed lines) along the summit paths (orange lines). Since the summit paths and topographic contours are always orthogonal to each other and exist at any point within the hill, the properties of a conformal map (as I understand them) are preserved. So shouldn't there be a conformal mapping from the unit disk (left) to this hill?
Is there something I have misunderstood?
AI: I know of three methods for making a conformal mapping that is not as simple as Moebius transforms and (simple) applications of Schwarz-Christoffel. One reason that the examples you are seeing are "simple" is that the space of shapes (your blue curves) is infinite dimensional and does not have a nice structure (for instance, is not a vector space). Being infinite dimensional, one is inclined to use an infinite amount of data to specify an arbitrary shape -- every shape gives an infinite set of coefficients. However, not every infinite set of coefficients describes a shape -- it is easy for such a representation scheme to yield many objects that fail to be connected, which is a problem for this application.
The "nice" thing about Moebius transforms and Schwarz-Christoffel is that using either only requires a small (finite) amount of information. But this means that these methods cannot give maps that are too complicated. Although, one can increase the complexity via Schwarz-Christoffel by subdividing one's piecewise linear approximations more and more finely.
Luteberget has an overview of the three methods I list below.
(1) Using complicated Schwarz-Christoffel. See the work of Driscoll and Trefethen, for example, https://pdfs.semanticscholar.org/ec28/b851707a35630faf58fdb5690f31cc814b15.pdf , references thereto, and their subsequent work, e.g., https://arxiv.org/abs/1911.03696 .
(2) Use Stephenson's circle packing method, for example, http://www.cs.jhu.edu/~misha/Fall09/Stephenson97.pdf , and references thereto.
(3) Use Marshall's "ZIPPER" algorithm. Examples are visible here: http://sites.math.washington.edu/~marshall/zipper.html . More recent work on ZIPPER: https://arxiv.org/abs/math/0605532 .
|