H: Summation of cardinals is smaller than product of cardinals
How do I prove that $\sum a_i$ is equipotent with a subset of $\prod a_i$?
It seems obviously true, but it's actually hard to prove.
$\{a_i\mid i\in I\}$ is a set of cardinals and $a_i$ is a cardinal for each $i\in I$.
AI: I assume that $a_i\ge 2$ for each $i$.
I would first try to show that
$$a+b\le a\cdot b \tag{1}$$
whenever $a,b\ge 2$.
Then I would try to continue by transfinite induction - i.e. I would assume that $I$ can be well ordered, which means I can work with cardinals
$a_\gamma$ for $\gamma<\alpha$.
Inductive step in the transfinite induction:
a) Non-limit ordinals: If we know that $\sum\limits_{\gamma<\alpha} a_\gamma\le\prod\limits_{\gamma<\alpha} a_\gamma$ then
$$\sum_{\gamma<\alpha+1} a_\gamma=\sum_{\gamma<\alpha} a_\gamma + a_\alpha \le \prod_{\gamma<\alpha} a_\gamma + a_\alpha \overset{(1)}\le \prod_{\gamma<\alpha+1} a_{\gamma}.$$
b) Limit ordinals: Suppose that $\alpha=\sup\{\beta; \beta<\alpha\}$. Then
$$\sum_{\gamma<\alpha} a_\gamma = \sup_{\beta<\alpha} \sum_{\gamma<\beta} a_\gamma \le \sup_{\beta<\alpha} \prod_{\gamma<\beta} a_\gamma \le \prod_{\gamma<\alpha} a_\gamma.$$
You can find a different proof (without using transfinite induction) of a slightly more general result as Theorem 1.6.7a) in the book
Michael Holz, Karsten Steffens, E. Weitz: Introduction to Cardinal Arithmetic, p.61. The second part of this theorem is König's theorem, which is a very useful result. |
H: Finding a grammar for a formal language
I am looking for a grammar that describes the formal language
$L = \{ xyx^R \;|\; x,y \in \{a,b\}^*\}$
where $\{a,b\}^*$ corresponds to the regular expression [ab]*.
If there were no $y$, the language would consist of all palindromes, and there wouldn't be any problem. I just don't see how to get the $y$ included.
Could you please help me to find a solution?
Thanks in advance
AI: S -> aSa | bSb | A
A -> aA | bA | $\varepsilon$
This way $x$ and $x^R$ are simultaneously generated, with the start symbol $S$ in the middle. After that, $S$ changes to $A$ in order to produce some word $y$ between $x$ and $x^R$.
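For example: $S \Rightarrow aSa \Rightarrow abSba \Rightarrow abAba \Rightarrow abbAba \Rightarrow abbba$, which is $xyx^R$ with $x=ab$ and $y=b$.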
Edit: As was pointed out, the language is just $\{a,b\}^*$, so there is a simpler grammar (see the other answer). |
H: Parabolic PDE existence/uniqueness
Consider the parabolic PDE:
$$\frac{\partial u}{\partial t} = u^2\frac{\partial^2u}{\partial x^2} + u^3$$
with some initial condition.
Apparently this is a straightforward parabolic PDE to which I can apply standard results to prove short-time existence and uniqueness. Can someone tell/refer me to these standard results please? The equation is non-linear and I haven't seen any theory for non-linear equations.
Thanks
AI: A standard theory can be found in, e.g., Ladyženskaja, O. A.; Solonnikov, V. A.; N. N., Ural'ceva (1968), Linear and quasi-linear equations of parabolic type. |
H: Finding all groups with given property
My problem is how to find all groups which have exactly one proper subgroup.
Thanks
AI: Groups with only one proper subgroup
A nontrivial group $G$ has no proper subgroups except the trivial group iff $G$ is finite and of prime order.
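A sketch of why (my elaboration): pick $g\neq e$; since $\langle g\rangle$ is a nontrivial subgroup, $\langle g\rangle=G$, so $G$ is cyclic. If $G$ were infinite, $\langle g^2\rangle$ would be a nontrivial proper subgroup; if $|G|=n$ were composite with a prime divisor $p<n$, then $\langle g^{n/p}\rangle$ would be a nontrivial proper subgroup of order $p$. Hence $|G|$ is prime; conversely, Lagrange's theorem shows a group of prime order has no nontrivial proper subgroups. |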
H: Fitting a Closed Curve in Polar Coordinates
I know how to fit a curve given some data points in Cartesian coordinates. Recently, I encountered a model that needs to fit a closed curve in polar coordinates. I'm thinking of deducing a similar formula using Maximum Likelihood, but the problem is I don't know what kind of hypothesis to choose.
In Cartesian coordinates, we can use the hypothesis of polynomials $y=a_nx^n+...+a_0$, but this cannot be extended to my model because the "polynomial" in polar coordinates $r=a_n\theta^n+...+a_0$ may not be closed.
What can I do?
AI: Is there a reason why you think Taylor series won't work? Look at this Wolfram Alpha plot. It will diverge eventually, but you can use the same Taylor series math to determine how many terms you need for it to converge within the range you want. With closed (i.e. periodic) functions I'd imagine it would be easier, since you're essentially done once it appears to go around the loop, as you can reuse previous values.
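To make the "closed means periodic" point concrete, here is a minimal least-squares sketch (my illustration, not part of the original answer; it uses a truncated trigonometric basis, which is automatically $2\pi$-periodic, so the fitted curve always closes up):

    import numpy as np

    def fit_polar(theta, r, degree=5):
        # Design matrix for r(t) ~ c0 + sum_n (a_n cos(n t) + b_n sin(n t)).
        cols = [np.ones_like(theta)]
        for n in range(1, degree + 1):
            cols.append(np.cos(n * theta))
            cols.append(np.sin(n * theta))
        A = np.column_stack(cols)
        coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)

        def r_hat(t):
            # Evaluate the fitted radius at angle(s) t.
            out = np.full_like(np.asarray(t, dtype=float), coeffs[0])
            for n in range(1, degree + 1):
                out += coeffs[2 * n - 1] * np.cos(n * t) + coeffs[2 * n] * np.sin(n * t)
            return out

        return r_hat

Under Gaussian noise, the maximum-likelihood fit reduces to exactly this least-squares problem. |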
H: Can % error in computed value be less than % error in one of its parameters
I am using an equation to compute energy as follows
$E= C_1\times t + C_2 \times a$
Here $C_1$ and $C_2$ are constants. $t$ shows the time-taken, $a$ shows the dynamic-activity.
Using a technique, I am estimating $t$ and $a$ and using them to estimate $E$. After estimating $t$ and $E$, I compare them with their actual values, obtained through actual measurement. I see a % error in $t$ of, say, 10%, and in $E$ of, say, 7%. Is that mathematically correct and acceptable? There is also some error in the estimation of $a$. Does the answer depend on $C_1$ and $C_2$ as well? Thanks in advance for your help.
AI: If the $C_1t$ term makes a relatively small contribution to $E$ compared to $C_2a$, then any given absolute error in $t$ is going to be a larger fraction of $t$ than it is of $E$. So it will look like a larger relative error in $t$ than the resulting relative error in $E$.
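A hypothetical numerical illustration (values chosen only for concreteness): take $C_1=C_2=1$ with true values $t=10$ and $a=90$, so $E=100$. If $t$ is estimated as $11$ (a $10\%$ error) while $a$ is estimated exactly, then the estimate of $E$ is $101$, only a $1\%$ error. So yes, this is possible, and the answer does depend on $C_1$ and $C_2$, since they control how much each term contributes to $E$. |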
H: Stone duality for ideals and filters (exercise)
In A Course in Universal Algebra (Burris, Sankapannavar), the exercise 4.4.7-8, p.158, says:
Let $A$ be a Boolean algebra. Denote $A^\ast:=\{\text{ultrafilters of }A\}$, and give $A^\ast$ the topology, defined by the basis of open sets $\{N_a; a\!\in\!A\}$, where $N_a\!:=\!\{U\!\in\!A^\ast; a\!\in\!U\}$.
(a) The map $(\{\text{ideals of }A\},\subseteq)\!\rightarrow\!(\{\text{open subsets of }A^\ast\},\subseteq),\, I\!\mapsto\!I^\ast\!:= \bigcup_{a\in I}\!N_a$ is a lattice isomorphism, with $a\!\in\!I \Leftrightarrow N_a\!\subseteq\!I^\ast$.
(b) The map $(\{\text{filters of }A\},\subseteq)\!\rightarrow\!(\{\text{closed subsets of }A^\ast\},\subseteq),\, F\!\mapsto\!F^\ast\!:= \bigcap_{a\in F}\!N_a$ is a lattice isomorphism, with $a\!\in\!F \Leftrightarrow N_a\!\supseteq\!F^\ast$.
For any $S\!\subseteq\!A$, let $\mathfrak{I}(S)$ denote the ideal generated by $S$, and $\mathfrak{F}(S)$ the filter generated by $S$. Then $$\bigcup_{a\in S}\!N_a=\!\bigcup_{a\in \mathfrak{I}(S)}\!N_a~~~\text{ and }~~~\bigcap_{a\in S}\!N_a=\!\bigcap_{a\in \mathfrak{F}(S)}\!N_a.$$
In $(\{\text{ideals of }A\},\subseteq)$, the supremum is described as $I\!\vee\!I'\!=\!\{x\!\in\!A; \exists a\!\in\!I\,\exists a'\!\in\!I'\!: x\!\leq\!a\!\vee\!a'\}$, and in $(\{\text{filters of }A\},\subseteq)$, the supremum is described as $F\!\vee\!F'\!=\!\{x\!\in\!A; \exists a\!\in\!F\,\exists a'\!\in\!F'\!: x\!\geq\!a\!\wedge\!a'\}$. Moreover, $N_a\!\cup\!N_b\!=\!N_{a\vee b}$; $N_{a}\!\cap\!N_{b}\!=\!N_{a\wedge b}$; $(N_a)^c=\!N_{a^c}$. In Boolean algebras, an ideal $I$ of $A$ is maximal (i.e. maximal w.r.t. $\subseteq$ among all ideals $I'$ with $1\!\notin\!I'$) iff it is prime (i.e. $1\!\notin\!I$ and $\forall x,y\!\in\!A\!: x\!\wedge\!y\!\in\!I \Leftrightarrow (x\text{ or }y\!\in\!I)$). In Boolean algebras, a filter $F$ of $A$ is maximal (or an ultrafilter, i.e. maximal w.r.t. $\subseteq$ among all filters $F'$ with $0\!\notin\!F'$) iff it is prime (i.e. $0\!\notin\!F$ and $\forall x,y\!\in\!A\!: x\!\vee\!y\!\in\!F \Leftrightarrow (x\text{ or }y\!\in\!F)$). (Stone) If $I$ is an ideal of $A$ and $a\!\in\!A\!\setminus\!I$, then there is a maximal ideal $M$ with $I\!\subseteq\!M\!\subseteq\!A\!\setminus\!\{a\}$. (Stone) If $F$ is a filter of $A$ and $a\!\in\!A\!\setminus\!F$, then there is an ultrafilter $U$ with $F\!\subseteq\!U \!\subseteq\! A\!\setminus\!\{a\}$.
Questions: Here are the things that I didn't yet manage to prove and am having problems with.
(1) We have $F^\ast\cap F'^\ast=(F\!\cap\!F')^\ast$ iff $(\bigcap_{a\in F}\!N_a)\cap(\bigcap_{a'\in F'}\!N_{a'}) = \bigcap_{x\in F\cup F'}\!N_x = \bigcap_{y\in F\cap F'}\!N_y$ iff for each ultrafilter $U$, we have $F\!\cap\!F'\!\subseteq U \Rightarrow F\!\cup\!F'\!\subseteq U$, but I don't see why this would be true.
(2) Proving $F^\ast\!\cup F'^\ast=(F \vee\!F')^\ast$ boils down to showing that the following inclusion holds: $\{U\!\in\!A^\ast; \forall a\!\in\!F\,\forall a'\!\in\!F'\!: a\!\vee\!a'\!\in\!U\}\subseteq\{U\!\in\!A^\ast; \forall b\!\in\!F\,\forall b'\!\in\!F'\, \forall x\!\geq\!b\!\wedge\!b'\!: x\!\in\!U\}$. Now for $b,b'$, we have $b\!\vee\!b'\!\in\!U$, and from primality of $U$, we have w.l.o.g. $b\!\in\!U$. But how do we show $b\!\wedge\!b'\!\in\!U$?
(3) Injectivity: We have $I^\ast\!=\!I'^\ast$ iff $\forall U\!\in\!A^\ast\!: (\exists a\!\in\!I\!: a\!\in\!U)\Leftrightarrow(\exists a'\!\in\!I'\!: a'\!\in\!U)$ iff $\forall U\!\in\!A^\ast\!: U\!\cap\!I\!=\!\emptyset \Leftrightarrow U\!\cap\!I'\!=\!\emptyset$. I've proved the injectivity of $F^\ast\!=\!F'^\ast$ by using Stone's theorem above, but for $I^\ast\!=\!I'^\ast$, I must produce an ultrafilter by using ideals, so I'm not sure what to do.
(4) We have $a\!\in\!I\Leftarrow N_a\!\subseteq\!I^\ast$ iff $\{U\!\in\!A^\ast;a\!\in\!U\}\!\subseteq\!\{U\!\in\!A^\ast\!; I\!\cap\!U\!\neq\!\emptyset\}\Rightarrow a\!\in\!I$. I don't know where to go from here. I've proved $a\!\in\!F\Leftarrow N_a\!\supseteq\!F^\ast$, by using Stone's theorem above, but here, we must find an ultrafilter by using ideals, so I'm out of good ideas.
AI: You have $F^*=\bigcap\{N_a:a\in F\}=\{U\in A^*:F\subseteq U\}$. This clearly means that if $F_0\subseteq F_1$, then $F_0^*\supseteq F_1^*$: the bigger the filter $F$, the more sets $N_a$ you’re intersecting to form $F^*$, so the smaller $F^*$ must be. In fact, if $F$ is an ultrafilter, then $F^*=\{U\in A^*:F\subseteq U\}=\{F\}$: the only ultrafilter that contains $F$ is $F$ itself. Thus, if $\mathscr{F}$ is the set of filters on $A$, and $\mathscr{C}$ is the family of closed subsets of $A^*$, the map $\mathscr{F}\to\mathscr{C}:F\mapsto F^*$ is order-reversing. In particular, you can’t hope to prove that $\langle\mathscr{F},\subseteq\rangle$ and $\langle\mathscr{C},\subseteq\rangle$ are isomorphic lattices: if the map is a lattice isomorphism, it must be an isomorphism between $\langle\mathscr{F},\subseteq\rangle$ and $\langle\mathscr{C},\supseteq\rangle$.
In particular, this means that you should be trying to prove that if $F_0,F_1\in\mathscr{F}$, then $$(F_0\cap F_1)^*={F_0}^*\cup{F_1}^*$$ and $$(F_0\lor F_1)^*={F_0}^*\cap{F_1}^*\;.$$ This should dispose of most of your difficulties with (1) and (2).
For (3) and (4), note that (maximal) ideals and (ultra)filters are complementary to each other: a set $S\subseteq A$ is a (maximal) ideal iff $\{\lnot a:a\in S\}$ is an (ultra)filter. Thus, by taking complements you can work with filters or with ideals, as you choose. |
H: $\lim\limits_{n \to\infty}x_n-x_{n+1}=0$, $x_{n_k}$ converges but $x_n$ does not converge
This is a counter example homework question that I can't seem to solve.
I need to find a sequence of real numbers $(x_n)_{n=1}^{\infty}$, and a monotonic increasing sequence of natural numbers $(n_k)_{k=1}^{\infty}$such that:
(i) $\lim\limits_{n\to\infty}(x_n-x_{n+1})=0$, (ii) $(x_{n_k})_{k=1}^{\infty}$ converges, but (iii) $(x_n)_{n=1}^{\infty}$ does not converge.
The only thing I know so far is that the differences $n_{k+1}-n_k$ cannot be bounded.
AI: It's sometimes easiest to go for the most blatant counterexample you can get away with. So decide that you will have $x_{n_k}=0$ for all $k$ and $x_n=1$ for some $n$ between each set of $x_{n_k}$ and $x_{n_{k+1}}$.
This calls for a sequence that climbs from 0 to 1 and back down to 0 and up again infinitely many times. It must eventually do this slower and slower due to the condition on the successive difference, but you can decide how slow or fast the differences go to 0.
For example, decide that $|x_n-x_{n+1}|$ must be $1/k$ whenever $n$ is between $n_k$ and $n_{k+1}$...
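Concretely, one way to realize this: let the sequence climb from $0$ to $1$ and back in steps of size $1/k$ during the $k$-th block, i.e. $0,\ 1,\ 0,\ \tfrac12,\ 1,\ \tfrac12,\ 0,\ \tfrac13,\ \tfrac23,\ 1,\ \tfrac23,\ \tfrac13,\ 0,\ \ldots$ Taking $(n_k)$ to be the positions of the zeros gives a convergent subsequence, the consecutive differences are $\pm\tfrac1k\to 0$, yet $x_n$ keeps returning to $1$, so it does not converge. |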
H: Riemann's theorem on removable singularities
Theorem
Let $\Omega\subseteq \mathbb{C}$ open , $ a\in\Omega,\ f\in H(\Omega\backslash \{a\})$ and there is $r>0$ with
$f$ is bounded on $C(a,r)\backslash \{a\}$ ($C(a,r)$ is the disk with center $a$ and radius $r$),
then $a$ is a removable singularity.
Proof
Let $h:\Omega\rightarrow \mathbb{C}$ be defined as:
$$
h(z)=\begin{cases}
0, & z=a \\
(z-a)^{2}f(z), & z\in\Omega\backslash \{a\}
\end{cases}
$$
Then we have:
$$
\lim_{z\rightarrow a}\frac{h(z)-h(a)}{z-a}=\lim_{z\rightarrow a}(z-a)f(z)=0\Rightarrow h'(a)=0.
$$
$\color{red}{\text{ Why is} \lim\limits_{z\rightarrow a}(z-a)f(z)=0?}$
So we have $h\in H(\Omega)$ with $h(a)=h'(a)=0$, and therefore
$$
h(z)=\sum_{n=2}^{\infty}c_{n}(z-a)^{n}\ (z\in K(a,\ r)).
$$
Letting $f(a):=c_{2}$, it follows:
$\color{red}{\text{ Why do we have} f(a):=c_{2}?}$
$$
f(z)=\sum_{n=0}^{\infty}c_{n+2}(z-a)^{n}\ (z\in K(a,\ r)),
$$
so $f\in H(\Omega).\ \square $
AI: The first red line follows because $|(z-a)f(z)|\le M|(z-a)|$ near $a$, where $M$ is the bound for $f(z)$. This clearly goes to zero as $z$ goes to $a$.
The coefficient $c_2$ is $h''(z)/2!$ evaluated at $a$. We see $h'(z)=(z-a)^2f'(z)+2(z-a)f(z)$ and $h''(z)=(z-a)^2f''(z)+2(z-a)f'(z)+2(z-a)f'(z)+2f(z)$. Plugging in $a$ and dividing by 2 gives the value for $c_2$ above. |
H: Casorati-Weierstrass theorem for essential singularities
Casorati-Weierstrass theorem for essential singularities
Let $\Omega\subseteq \mathbb{C}$ be open, $ a\in\Omega,\ f\in H(\Omega\backslash \{a\})$ ($f$ analytic on $\Omega\backslash \{a\})$. In the case of an essential singularity: If $ C(a,r)\subseteq\Omega$ ($C(a,r)$ is the disk with center $a$ and radius $r$), then $f(C(a,r)\backslash \{a\})$ is dense in $\mathbb{C}$.
$\color{green}{\text{(1) What does this theorem actually say? How would you put it in descriptive words?}}$
$\color{green}{\text{(2) What would an outline of the proof (in words) look like? What steps should be followed?}}$
AI: The theorem says that if $f$ has an essential singularity at $a$, then arbitrarily close to $a$, $f$ takes values arbitrarily close to whatever you like. So the behaviour of $f$ is pretty wild close to $a$ (by comparison to removable singularities, where $f$ is straightforward near $a$, or poles, where $f$ just goes to infinity near $a$).
Suppose $f$ on $D(a,r) \setminus \{a\}$ doesn't take any values near $b\in \mathbb C$. Then define $g(z) = \frac{1}{f(z)-b}$. Then $g$ is holomorphic and bounded, so it can be extended to $a$. But then $f(z) = \frac{1}{g(z)} + b$ has a pole or removable singularity at $a$. Hence if $f$ does not have a pole or removable singularity at $a$, the above process must fail, i.e. $f$ takes values very close to $b$.
The latter is not a proof as such, but it's a good outline, and the real proof is not much more complex: as a commenter said, see Wikipedia.
It's worth noting that in fact the Casorati-Weierstrass theorem is strengthened by the Big Picard theorem, which states that $f$ doesn't just come close to every value, it in fact takes every value (with at most one exception). The proof of the Picard theorems is unfortunately not so simple. |
H: Is $f$ injective in $W$?
If
$\|f(x)-f(y)\|\geqslant \frac 1{2} \|x-y\|$ for any $x, y \in W$
then
$f$ is injective in $W$
How can I prove this? If that inequality holds and the images of two points are equal, what can be said about the points?
AI: Assume $f(x) = f(y)$. What is $\|f(x)-f(y)\|$? What can you conclude about $\|x-y\|$? |
H: Showing that a group of order $21$ (with certain conditions) is cyclic
How can I show that if $o(G)=21$ and $G$ has only one subgroup of order $3$ and only one subgroup of order $7$, then $G$ is cyclic?
AI: Hint: Let $H$ and $K$ be the unique subgroups of order 3 and 7, respectively. Since they are the only subgroups of a given order they must be normal (why?). Conclude that $H, K$ are normal subgroups that intersect trivially, and so their internal direct product $HK \simeq H \times K$. |
H: An example for a homomorphism that is not an automorphism
Let $K/F$ be a field extension, I know that if $K/F$ is a finite extension then a simple argument from linear algebra shows that since every homomorphism of fields from $K$ to $K$ that fixes $F$ is 1-1 it is also onto, i.e. an automorphism of $K$.
Can someone please give an example that this is not the case if $K/F$ is not a finite extension ?
AI: Consider $\mathbb{Q}(\pi) / \mathbb{Q}$ and the field homomorphism induced from mapping $\pi$ to $\pi^2$.
The fact that this is not an automorphism follows from the transcendence of $\pi$.
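To spell out why it is not onto (my elaboration): the image is $\mathbb{Q}(\pi^2)$, and $\pi\notin\mathbb{Q}(\pi^2)$. Indeed, $\pi=P(\pi^2)/Q(\pi^2)$ for rational polynomials $P,Q$ would make $\pi$ a root of $xQ(x^2)-P(x^2)$, which is a nonzero polynomial (its odd-degree part comes from $Q$ and its even-degree part from $P$, so they cannot cancel), contradicting the transcendence of $\pi$. |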
H: sampling technique: lower error in estimation on using higher sampling rate
I am using sampling to estimate characteristics of a population. My sampling ratio is 32: I take $N/32$ samples, calculate their sum, and multiply by 32. Then I compare it to the actual value to get the error; it is 3.67%.
Then I take the sampling ratio as 64, do the same, and get an error of 3.62%. Generally one would expect a higher sampling error.
Is that possible? Thanks for your reply.
AI: It is certainly possible. You could be luckier in your sampling of 1 in 64 and have no error at all. Imagine a population of 640, with 600 of value 0, 20 of value -1 and 20 of value 1. The average is 0. If your 10 samples are all 0, the average will be exactly right. If your 20 samples (for the 1/32 case) include a couple of 1's and no -1's there will be an error.
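A quick simulation along these lines (my sketch with made-up numbers, assuming NumPy): on average the 1-in-64 estimator has the larger error, but any single run can come out smaller, as in your measurements.

    import numpy as np

    rng = np.random.default_rng(0)
    population = rng.normal(5.0, 2.0, size=64_000)  # hypothetical population
    true_sum = population.sum()

    def mean_relative_error(ratio, trials=2_000):
        # Estimate the sum by scaling up the sum of a 1-in-`ratio` random sample.
        n = population.size // ratio
        errors = [abs(rng.choice(population, n, replace=False).sum() * ratio - true_sum)
                  / abs(true_sum) for _ in range(trials)]
        return np.mean(errors)

    print(mean_relative_error(32), mean_relative_error(64))

For i.i.d. data the average relative error grows roughly like $\sqrt{\text{ratio}}$, but the comparison of two single runs can go either way. |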
H: Using the roots of a polynomial to find the value of a sum
If $a,b$ and $c$ are the roots of $x^{3}+px^{2}+qx+r$, then how can we find the value of $\displaystyle \sum \frac{b^{2}+c^{2}}{bc}$.
AI: I think you are asking for
$$\frac{a^2+b^2}{ab}+\frac{b^2+c^2}{bc}+\frac{c^2+a^2}{ca},\tag{$1$}$$
or perhaps twice this quantity.
If you bring Expression $(1)$ to a common denominator, you will get
$$\frac{a^2c+b^2c+b^2 a+c^2 a+c^2b+a^2b}{abc}.$$
Note that
$$(a+b+c)(ab+bc+ca)=a^2c+b^2c+b^2 a+c^2 a+c^2b+a^2b+3abc.$$
Thus
$$\frac{a^2+b^2}{ab}+\frac{b^2+c^2}{bc}+\frac{c^2+a^2}{ca}=\frac{(a+b+c)(ab+bc+ca)-3abc}{abc}.$$
Every term on the right-hand side is expressible simply in terms of the coefficients.
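Explicitly, by Vieta's formulas $a+b+c=-p$, $ab+bc+ca=q$ and $abc=-r$, so the sum $(1)$ equals $\frac{(-p)q-3(-r)}{-r}=\frac{pq-3r}{r}$. |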
H: Open mapping theorem
I have some understanding problems with the open mapping theorem.
Lemma
Let $f\in H(D(z_{0},\ R))$ ($f$ analytic on the disk with center $z_0$ and radius $R$) and
$$
|f(z_{0})|<\min_{|z-z_{0}|=r}|f(z)|
$$
for some $r\in(0,\ R)$. Then there is a $w\in D(z_{0},\ r)$ with $f(w)=0$.
Theorem
Let $\Omega\subseteq \mathbb{C}$ be a region and $f\in H(\Omega)$. Then we have either $f(\Omega)$ is also a region or $f$ is constant.
Proof
Suppose $f$ is not constant. $f(\Omega)$ is connected, being the continuous image of a connected set. We show that $f(\Omega)$ is open:
Let $w_{0}\in f(\Omega)$, so $w_{0}=f(z_{0})$ for some $z_{0}\in\Omega$. There is a $\delta>0$ with
$$
f(z)\neq w_{0}\ (z\in\partial D(z_{0},\delta)),\ (a)
$$
because otherwise $f(z)=w_{0}$ for all $ z\in\Omega$.
$\color{green}{\text{I don't understand why there is a } \delta \text{ so that (a) is fulfilled. I don't understand the followup.}}$
$\color{green}{\text{What is actually the outline of proving the openness in the next steps?}}$
$\color{green}{\text{(Read: I have no idea what happens in the steps below)?}}$
Because $f-w_0$ is continuous and nonvanishing on the compact circle $\partial D(z_{0},\delta)$, there is an $\varepsilon>0$ with
$$
|f(z)-w_{0}|\geq 3\varepsilon\ (z\in\partial D(z_{0},\delta)).
$$
We show: $D(w_{0},\varepsilon)\subseteq f(\Omega)$: For $|w-w_{0}|<\varepsilon$ and $|z-z_{0}|=\delta$ we have:
$$
|f(z)-w|=|f(z)-w_{0}+w_{0}-w|\geq|f(z)-w_{0}|-|w_{0}-w|\geq 3\varepsilon-\varepsilon=2\varepsilon
$$
and $|f(z_{0})-w|=|w_{0}-w|<\varepsilon$, so that
$$
|f(z_{0})-w|<\min_{|z-z_{0}|=\delta}|f(z)-w|.
$$
Using the Lemma presented before the theorem, we have that $f(z)-w$ has a zero $z_{w}$ with $|z_{w}-z_{0}|<\delta$, so that $f(z_{w})=w$ with $ z_{w}\in\Omega.\ \square $
AI: If $\forall \delta$ there exists $z\in \partial D(z_0,\delta)$ such that $f(z) = w_0$, we would have that $z_0$ is an accumulation point of $f^{-1}(w_0)$. But since $f-w_0$ is holomorphic its roots can only accumulate if $f - w_0 \equiv 0$. This would contradict the assumption that $f$ is non constant. (For a proof of the accumulation point fact, see e.g. Theorem 4.8 in Chapter 2 of Stein and Shakarchi's Complex Analysis.)
The remainder of the proof is setting up to apply the Lemma, which is a corollary of the maximum principle. Now, consider the function $g(z) = f(z) - w_0$. This function vanishes at $z_0$. By the previous step we see that along the boundary of some disk $D(z_0,\delta)$ we have $g \neq 0$, so $g$ is bounded away from zero there. So if we subtract from $g$ a sufficiently small number, $g(z_0) - \Delta w$ is still going to be much smaller than $g(z) - \Delta w$ along $\partial D(z_0,\delta)$, and we can apply the Lemma. |
H: Why can't a polynomial fit infinitely many points of an exponential function?
Possible Duplicate:
Polynomial satisfying $p(x)=3^{x}$ for $ x \in \mathbb{N}$
I'm looking for an elementary solution to this question:
There is no polynomial $P$ such that $P(0)=1, P(1)=3, P(2)=9, P(3)=27, \dots$.
AI: Assume $P(X) = a_n X^n + a_{n-1}X^{n-1} +\ldots + a_1 X + a_0$ is a polynomial that satisfies $P(k) = 3^k$ for all natural numbers $k$. Then $P(k+1) - 3P(k) = 0$ for all $k$, so the polynomials $P(X+1)$ and $3P(X)$ are equal. Now compare the highest coefficient.
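(To finish: $P(X+1)-3P(X)$ vanishes at every natural number, hence is the zero polynomial, so $P(X+1)=3P(X)$ as polynomials. Comparing leading coefficients gives $a_n=3a_n$, so $a_n=0$; since $a_n$ was the leading coefficient, $P$ would have to be the zero polynomial, contradicting $P(0)=1$.) |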
H: How to "rotate" a polar equation?
Take a simple polar equation like $r = \theta/2$, which graphs out to a spiral.
But how would I achieve a rotation of that plot by roughly 135 degrees (the light-grey curve in the original image)? Is there a way to easily shift the plot?
AI: Just put $\theta-135^\circ$ in place of $\theta$. Or if you're working in radians, then the equivalent in radians.
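For the example above, $135^\circ$ is $3\pi/4$ radians, so the rotated spiral would be $r=(\theta-3\pi/4)/2$. |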
H: Maximum principle
Theorem
Let $\Omega\subseteq \mathbb{C}$ be a region and $f\in H(\Omega)$ ($f$ is analytic on $\Omega$). If $|f|$ has a local maximum in $\Omega$, then $f$ is constant.
Proof. Let $D(a,\ r)\subseteq\Omega$ be a disk with $|f(z)|\leq|f(a)|$ for all $z\in D(a,\ r)$ (so $|f|$ has a local maximum at $a$). Then we have for every $\rho\in(0,\ r)$ with $\gamma(t)=a+\rho e^{it} \ (t\in[0,2\pi])$:
$$
|f(a)|=\frac{1}{2\pi}\left|\int_{\gamma}\frac{f(\xi)}{\xi-a}\mathrm{d}\xi\right|\ \leq\frac{1}{2\pi}\int_{0}^{2\pi}|f(a+\rho e^{it})|\mathrm{d}t\ \leq|f(a)| \ \ \ (1)
$$
$\color{green}{\text{I understand the inequality in (1), but what is the point of showing: } |f(a)|\leq|f(a)|?}$
$\color{green}{\text{Why is this mentioned?}}$
From $|f(a+\rho e^{it})|\leq|f(a)|$ for $t \in[0,2\pi]$ and $\displaystyle \frac{1}{2\pi}\int_{0}^{2\pi}|f(a+\rho e^{it})|\mathrm{d}t =|f(a)|$ (2) we have:
$$
|f(a+\rho e^{it})|=|f(a)|\ (t\ \in[0,2\pi]).\ \ \ (3)
$$
$\color{green}{\text{What is the role of the integral in (2), in showing the equality (3)?}}$
Therefore we have:
$$
|f(z)|=|f(a)|\ (z\in D(a,r))\ (*)
$$
If $f$ were not constant, then (because $\Omega$ is a region) $f:D(a, r)\rightarrow \mathbb{C}$ would not be constant either, and $f(D(a,\ r))$ would be open, which is a contradiction to $(*)$.
$
\square
$
===
I still don't get something:
So we know that:
$|f(a+\rho e^{it})|\leq|f(a)|$ for $t \in[0,2\pi]$ and
$\displaystyle \frac{1}{2\pi}\int_{0}^{2\pi}|f(a+\rho e^{it})|\mathrm{d}t =|f(a)|$
I still don't see why
$|f(a+\rho e^{it})|=|f(a)|$ for all $t\in[0,2\pi]$ follows.
AI: The point of inequality (1) is that equality must hold throughout it. Define $g\colon [0,2\pi]\to\Bbb R$ by $g(t)=|f(a)|-|f(a+\rho e^{it})|$. This function is non-negative and continuous, and its integral is $0$, hence $g$ is identically $0$ (that's why we used the integral).
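To spell out the last step (my elaboration): if $g(t_0)>0$ for some $t_0$, then by continuity $g>\frac{g(t_0)}{2}$ on an interval of positive length around $t_0$, forcing $\int_0^{2\pi}g(t)\,\mathrm{d}t>0$. So $g\equiv 0$, which is exactly $|f(a+\rho e^{it})|=|f(a)|$ for all $t\in[0,2\pi]$. |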
H: Products of Vector Bundles
Suppose that $E$ is a vector bundle over a compact, Hausdorff space $X$. Then $E^n$ is a vector bundle over $X^n$. If $D(E)$ is the disk bundle, there is a map on fibers $D(E^n)_x \rightarrow D(E)^n_x$. Does this induce a homotopy equivalence $D(E^n)\simeq D(E)^n$?
AI: Yes. For any vector space $V$, there is a canonical (basis-independent) deformation retraction of $D(V)^n$ onto $D(V^n)$, obtained by moving directly inwards along a line through the origin. Performing this on each fiber, we obtain a deformation retraction of $D(E)^n$ onto $D(E^n)$. |
H: How to prove that $\mathrm{Fibonacci}(n) \leq n!$, for $n\geq 0$
I am trying to prove it by induction, but I'm stuck
$$\mathrm{fib}(0) = 0 < 0! = 1;$$
$$\mathrm{fib}(1) = 1 = 1! = 1;$$
Base case n = 2,
$$\mathrm{fib}(2) = 1 < 2! = 2;$$
Inductive case: assume that the claim is true for $k-1$ and $k$.
Try to prove that $\mathrm{fib}(k+1) \leq(k+1)!$
$$\mathrm{fib}(k+1) = \mathrm{fib}(k) + \mathrm{fib}(k-1) \qquad(LHS)$$
$$(k+1)! = (k+1) \times k \times (k-1) \times \cdots \times 1 = (k+1) \times k! \qquad(RHS)$$
......
How to prove it?
AI: $$
F_{k+1} = F_k + F_{k-1} \le k! + (k - 1)! \le k! + k! = 2\,k! \le (k + 1)\, k! = (k+1)!
$$ |
H: Combinatorics: Selecting objects arranged in a circle
If $n$ distinct objects are arranged in a circle, I need to prove that the number of ways of selecting three of these $n$ things so that no two of them are next to each other is $\frac{1}{6}n(n-4)(n-5)$.
Initially I can select $1$ object in $n$ ways. Then its neighbours cannot be selected. So I will have $n-3$ objects to choose $2$ objects from. Again, I can select the second object in $n-3$ ways. The neighbours of this object cannot be selected. However from here onwards, I am unable to extend this argument as the selection of the third object is dependent on the position of the first and the second object. Is there any simpler method to prove the result?
AI: Choose one object first. Then think of the remaining $n-3$ objects in a line. There are then $n-4$ spaces in between the remaining $n-3$ objects in the line. Choose 2 of the $n-4$ spaces and choose the object to the left of the left space and right of the right space.
This last step is in 1-1 correspondence with the ways to have the final 2 objects; therefore we divide by 3 (corresponding to the initial first choice, which could have been any of the 3 objects to start) giving $1/3 \cdot n \cdot \binom{n-4}{2} = 1/6 \cdot n(n-4)(n-5)$.
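As a quick sanity check, for $n=6$ the formula gives $\frac16\cdot 6\cdot 2\cdot 1=2$, matching the two alternating triples of vertices of a hexagon. |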
H: Find the number of ways in which $6$ different toys may be distributed among $4$ children so that each child gets at least $1$ toy.
Find the number of ways in which $6$ different toys may be distributed among $4$ children so that each child gets at least $1$ toy.
The problem is simple enough, I only need to verify my solution.
Initially I select $4$ toys out of $6$. This can be done in $\binom{6}{4}$ ways. These can be distributed in $4!$ ways. Now I have $2$ toys left to be distributed to $4$ children. For every toy, I have $4$ options, hence this can be done in $4^2=16$ ways.
So in all, I have $\binom{6}{4} \times 4! \times 16 = 5760$.
AI: The solution is not correct. As an illustration, let us distribute $5$ toys instead. Let us solve this simpler problem using the argument of the post.
That procedure would suggest a count of $\binom{5}{4}(4!)(4)=480$.
Now let us count the actual number of ways to distribute the $5$ toys. Some kid will get $2$ toys, the rest will get $1$ each. The lucky kid can be chosen in $4$ ways. For each of these ways, the toys she gets can be chosen in $\binom{5}{2}$ ways. And for each of these ways, the rest of the distribution can be done in $3!$ ways, for a total of $240$.
Remark: For $6$ toys, the answer turns out to be $1560$. Note that there is no simple "overcount ratio."
In general, the number of ways to distribute $n$ toys among $k$ children so that each gets at least one toy is $k!S(n,k)$, where $S(n,k)$ is the Stirling number of the second kind.
There are nice recurrences that make computing the Stirling numbers of the second kind not too hard, at least for small inputs. Also, a number of programs will do the job. Wolfram Alpha does it for free. Just type in $S2(6,4)$.
Added: There is a simple way to see that $5760$ cannot be correct. Let us count the number of ways to split the $6$ toys between the $4$ kids, with no restrictions. So one or more kids may get nothing. Line up the toys in ascending order of desirability. The worst toy can be assigned in $4$ ways. For each of these, the second worst can be assigned in $4$ ways, and so on, for a total of $4^6=4096$. This is less than $5760$.
This beginning, together with the Principle of Inclusion/Exclusion, can be used to count the number of ways to distribute the toys so that each kid gets at least one toy.
For a quick and dirty count for $6$ toys and $4$ kids, we use the method that was illustrated earlier for $5$ toys and $4$ kids.
Maybe (i) a kid gets $3$ toys, and the rest get $1$ each or (ii) two kids get $2$ toys each, and the rest get $1$ each.
Counting the patterns in (i) is easy. The lucky kid can be chosen in $4$ ways. For each way, the toys she gets can be chosen in $\binom{6}{3}$ ways. And now the remaining toys can be distributed in $(3)(2)(1)$ ways, for a total of $480$ ways of type (i).
Counting the patterns in (ii) is tricky, just like counting the number of "two pair" poker hands is tricky. The two lucky kids can be chosen in $\binom{4}{2}$ ways. For each of these ways, the toys for the younger of the two lucky kids can be chosen in $\binom{6}{2}$ ways. And now the toys for the other lucky kid can be chosen in $\binom{4}{2}$ ways. Finally, the rest of the toys can be assigned in $(2)(1)$ ways. The total number of ways of type (ii) is $1080$.
This gives $1560$ as the number of ways to do the distribution.
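The counts can be double-checked by brute force (a quick sketch I added, assuming Python):

    from itertools import product

    def count_onto(toys, kids):
        # Count assignments of each toy to a kid where every kid gets at least one toy.
        return sum(len(set(assign)) == kids
                   for assign in product(range(kids), repeat=toys))

    print(count_onto(5, 4), count_onto(6, 4))  # prints 240 1560

in agreement with $4!\,S(5,4)=240$ and $4!\,S(6,4)=1560$. |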
H: Is there a general way to solve transcendental equations?
To make things definite, let's narrow the scope and consider transcendental equations of the form
$$f(x) = 0$$
where $f$ is a real elementary function in the usual sense. For example
$$\cos(\pi x) + x^2 = 0$$
or
$$a = x \tan x$$
Is there a general way to solve such equations other than numerically? That is, to produce an expression $$x_0 = \text{RHS}$$
where $\text{RHS}$ does not depend on $x_0$ and can actually be computed? Maybe in the form of a series or something similar, not in "finite terms".
There is of course the question of the existence of solutions; it would be nice if the form of $\text{RHS}$ incorporated the answer to it.
In the formulation above, the problem seems to be equivalent to finding a local inverse function.
AI: Under certain assumptions you can use inversion of power series to obtain a solution. If $f(z)$ is analytic at $z_0$ where $f'(z_0)\ne 0$, then $w=f(z)$ has an analytic inverse $z=g(w)$ in some neighbourhood of $w_0=f(z_0)$, hence
$$z-z_0=\sum_{k=1}^{\infty}a_k(w-w_0)^k$$
where
$$a_k=\frac{1}{k!}\lim_{z\to z_0}\frac{\mathrm{d}^{k-1}}{\mathrm{d}z^{k-1}}\left[\left(\frac{z-z_0}{f(z)-w_0}\right)^{k}\right]$$
If the equation is of the form
$$z=a+wf(z)$$
where $a$ is inside the domain of analyticity of $f(z)$ and $f(a)\ne 0$, then
$$z=a+\sum_{k=1}^{\infty}\frac{w^k}{k!}\,\frac{\mathrm{d}^{k-1}}{\mathrm{d}a^{k-1}}\left[f(a)^{k}\right]$$
For example, a solution to the Kepler's equation
$$z=m+E\sin z$$
can be expressed as follows
$$z=m+E\sin m+\frac{E^2}{2!}\left(\sin^2 m\right)' +...$$
Your second example can be written as
$$z=w\cot z$$
So if you work your way through the formulae, you should be able to obtain some form of a solution.
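As a quick sanity check on the Kepler series (my sketch, assuming SymPy; the variable names are mine):

    import sympy as sp

    m, E, z = sp.symbols('m E z')

    # Partial sum of the Lagrange series z = m + sum_k E^k/k! * d^{k-1}/dm^{k-1}[sin(m)^k]
    kepler_series = m + sum(E**k / sp.factorial(k) * sp.diff(sp.sin(m)**k, m, k - 1)
                            for k in range(1, 8))

    approx = kepler_series.subs({m: 1, E: sp.Rational(3, 10)}).evalf()
    exact = sp.nsolve(z - 1 - sp.Rational(3, 10) * sp.sin(z), z, 1)
    print(approx, exact)  # the two agree to several decimal places

For $E$ below the Laplace limit ($\approx 0.66$) the partial sums converge quickly to the numerical root. |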
H: Coloring points in a cycle
I have a question that relates to the Widom-Rowlinson model of statistical physics. Take a cycle on $n$ vertices. How many ways are there to color the $n$ vertices with the colors $\{\text{Red, Yellow, Blue}\}$ with the only restriction being that Red vertices cannot be next to Blue vertices? I'd like an explicit formula, if possible.
AI: Consider the matrix $A = \pmatrix{1 & 1 & 0\cr 1 & 1 & 1\cr 0 & 1 & 1\cr}$.
The number of ways to colour $\{0,1,\ldots, n\}$ subject to your restriction with
$0$ coloured $i$ and $n$ coloured $j$ is $(A^n)_{ij}$. So the number of ways to colour your cycle is $\text{Tr}(A^n)$. Now $A$ has eigenvalues $1$, $1+\sqrt{2}$ and $1-\sqrt{2}$, so the answer is $1 + (1+\sqrt{2})^n + (1-\sqrt{2})^n$.
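A brute-force check of this formula for small $n$ (my sketch, assuming Python; not part of the original answer):

    from itertools import product

    def brute(n):
        # Count colourings of an n-cycle with colours R, Y, B and no Red next to Blue.
        bad = {('R', 'B'), ('B', 'R')}
        return sum(all((c[i], c[(i + 1) % n]) not in bad for i in range(n))
                   for c in product('RYB', repeat=n))

    def formula(n):
        return round(1 + (1 + 2**0.5)**n + (1 - 2**0.5)**n)

    print([(n, brute(n), formula(n)) for n in range(3, 9)])  # counts agree

For example, $n=3$ gives $15$ and $n=4$ gives $1+(1+\sqrt2)^4+(1-\sqrt2)^4=35$. |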
H: Unit speed geodesics
This might be a very trivial question, so bear with me. If $(X,d)$ is a length space we define a unit speed geodesic to be a path $\gamma:[0,1]\to X$ for which
\begin{align*}
d(\gamma(s),\gamma(t))=|t-s|d(\gamma(0),\gamma(1))\,\,\mathrm{for}\,\,\mathrm{all}\,\,s,t\in[0,1].
\end{align*}
The author notes without further remarks that this is equivalent to requiring only that $\leq$ always holds. I tried a couple of tricks and I would like some fresh input if possible. I'm literally stuck, and I believe there's a very simple way to see why this holds.
I know that we can find a reparameterization $\widetilde{\gamma}$ of $\gamma$ (i.e. $\gamma=\widetilde{\gamma}\circ\alpha$ for some non-decreasing continuous surjection $\alpha:[0,1]\to[0,1]$) for which the equation holds. From here to me it seems we can't do better than $d(\gamma(s),\gamma(t))=|\alpha(t)-\alpha(s)|d(\gamma(0),\gamma(1))$ for all $s,t\in[0,1]$. Or can we conclude something from here?
If someone is interested on the source, it's "A user's guide to optimal transportation", written by Ambrosio and Gigli. It's probably free to view through google. Page 34, equation (2.5).
Thanks in advance.
AI: Clearly "$=$" $\Rightarrow$ "$\le$", so it suffices to consider "$\le$" $\Rightarrow$ "$=$".
This follows from the triangle inequality.
Note, first, that, by assumption ($\le$) for $ 0 < s < t < 1$:
$$d(\gamma(0), \gamma(s)) + d(\gamma(s), \gamma(t))+d(\gamma(t), \gamma(1))\le \left(s + (t-s) + (1-t)\right)\, d(\gamma(0), \gamma(1)) = d(\gamma(0), \gamma(1))$$
Now the rhs is $\le$ the lhs, by the triangle inequality. If there are $s, t$ such that strict inequality holds in the first inequality, you arrive at a contradiction. |
H: Solving problem: Area of Triangle
I have this data:
$a=6$
$b=3\sqrt2 -\sqrt6$
$\alpha = 120°$
How to calculate the area of this triangle?
AI: Because the angle at $A$ is obtuse, the given information uniquely determines a triangle. To find the area of a triangle, we might want:
the length of a side and the length of the altitude to that side (we don't have altitudes)
all three side lengths (we're short 1)
two side lengths and the measure of the angle between them (we don't have the other side that includes the known angle or the angle between the known sides)
(There are other ways to find the area of a triangle, but the three that use the above information are perhaps the most common.)
Let's find the angle between the known sides (since we'd end up finding that angle anyway if we were trying to find the unknown side). The Law of Sines tells us that $\frac{\sin A}{a}=\frac{\sin B}{b}$, so $$\frac{\sin120^\circ}{6}=\frac{\sin B}{3\sqrt{2}-\sqrt{6}},$$ which can be solved for $B$ (since $A$ is obtuse, $0^\circ<B<90^\circ$, so there is a unique solution). Once we have $B$, we can use $A+B+C=180^\circ$ to get $C$ and then the area of the triangle is $$\frac{1}{2}ab\sin C.$$
See also my answer here on general techniques for triangle-solving.
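Carrying out the computation: $\sin B=\frac{(3\sqrt2-\sqrt6)\sin 120^\circ}{6}=\frac{\sqrt6-\sqrt2}{4}=\sin 15^\circ$, so $B=15^\circ$ and $C=45^\circ$, giving the area $\frac12\cdot 6\cdot(3\sqrt2-\sqrt6)\sin 45^\circ=9-3\sqrt3\approx 3.80$. |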
H: $K/E$, $E/F$ are separable field extensions $\implies$ $K/F $ is separable
I wish to prove that if $K/E$ and $E/F$ are algebraic separable field extensions, then $K/F$ is separable. I tried taking $a\in K$: if $a\in E$ it is clear; otherwise I looked at the minimal polynomials of $a$ over $E$ and $F$, called them $f(x)$ and $g(x)$ respectively, and said that $f(x)$ is separable. Since there exists $h(x)$ such that $f(x)h(x)=g(x)$, it remains to see that $h(x)$ is separable; here I am stuck.
Can someone please help me prove this claim, or say how to continue?
AI: This is only problematic in the context of positive characteristic, so assume that all fields are of characteristic $p$.
Suppose that $K/E$ is separable, and $E/F$ is separable. Let $S$ be the set of all elements of $K$ that are separable over $F$. Then $E\subseteq S$, since $E/F$ is separable.
Note that $S$ is a subfield of $K$: indeed, if $u,v\in S$ and $v\neq 0$, then $F(u,v)$ is separable over $F$ because it is generated by separable elements, so $u+v$, $u-v$, $uv$, and $u/v$ are all separable over $F$. So $S$ is a field.
I claim that $K$ is purely inseparable over $S$. Indeed, if $u\in K$, then there exists $n\geq 0$ such that $u^{p^n}$ is separable over $F$, hence there exists $n\geq 0$ such that $u^{p^n}\in S$. Therefore, the minimal polynomial of $u$ over $S$ is a divisor of $x^{p^n} -u^{p^n} = (x-u)^{p^n}$, so $K$ is purely inseparable over $S$.
But since $E\subseteq S\subseteq K$, and $K$ is separable over $E$, then it is separable over $S$. So $K$ is both purely inseparable and separable over $S$. This can only occur if $S=K$, hence every element of $K$ is separable over $F$. This proves that $K/F$ is separable.
Added. Implicit above is that an extension is separable if and only if it is generated by separable elements. That is:
Lemma. Let $K$ be an extension of $F$, and $X$ a subset of $F$ such that $K=F(X)$. If every element of $X$ is separable over $F$, then $K$ is a separable extension of $F$.
Proof. Let $v\in K$. Then there exist $u_1,\ldots,u_n\in X$ such that $v\in F(u_1,\ldots,u_n)$. Let $f_i(x)\in F[x]$ be the irreducible polynomial of $u_i$ over $F$; by assumption, $f_i(x)$ is separable. Let $E$ be a splitting field over $F(u_1,\ldots,u_n)$ of $f_1(x),\ldots,f_n(x)$. Then $E$ is also a splitting field of $f_1,\ldots,f_n$ over $F$, and since the $f_i$ are separable, $E$ is separable over $F$. Therefore, since $v\in F(u_1,\ldots,u_n)\subseteq E$, it follows that $v$ is separable over $F$. $\Box$
Added. Despite the OP's acceptance of this answer, it is clear from the comments that he does not actually understand the answer, which is rather frustrating. Equally frustrating is to be told, in drips, what it is the OP does and does not know about separability, in the form of "Just explain why this is true", only to be find out the facts that underlie that assertion are also unknown.
The following is taken from Hungerford's treatment of separability.
Definition. Let $F$ be a field and $f(x)\in F[x]$ a polynomial. The polynomial is said to be separable if and only if for every irreducible factor $g(x)$ of $f(x)$, there is a splitting field $K$ of $g(x)$ over $F$ where every root of $g(x)$ is simple.
Definition. Let $K$ be an extension of $F$, and let $u\in K$ be algebraic over $F$. Then $u$ is said to be separable over $F$ if the minimal polynomial of $u$ over $F$ is separable. The extension is said to be separable if every element of $K$ is separable over $F$.
Theorem. Let $K$ be an extension of $F$. The following are equivalent:
$K$ is algebraic and Galois over $F$.
$K$ is separable over $F$ and $K$ is a splitting field over $F$ of a set $S$ of polynomials in $F[x]$.
$K$ is the splitting field over $F$ of a set $T$ of separable polynomials in $F[x]$.
Proof. (1)$\implies$(2),(3) Let $u\in K$ and let $f(x)\in F[x]$ be the monic irreducible polynomial of $u$. Let $u=u_1,\ldots,u_r$ be the distinct roots of $f$ in $K$; then $r\leq n=\deg(f)$. If $\tau\in\mathrm{Aut}_F(K)$, then $\tau$ permutes the $u_i$. So the coefficients of the polynomial $g(x) = (x-u_1)(x-u_2)\cdots(x-u_r)$ are fixed by all $\tau\in\mathrm{Aut}_F(K)$, and therefore $g(x)\in F[x]$ (since the extension is Galois, so the fixed field of $\mathrm{Aut}_F(K)$ is $F$). Since $u$ is a root of $g$, then $f(x)|g(x)$. Therefore, $n=\deg(f)\leq \deg(g) = r \leq n$, so $\deg(g)=n$. Thus, $f$ has $n$ distinct roots in $K$, so $u$ is separable over $F$. Now let $\{u_i\}_{i\in I}$ be a basis for $K$ over $F$; for each $i\in I$ let $f_i\in F[x]$ be the monic irreducible of $u_i$. Then $K$ is the splitting field over $F$ of $S=\{f_i\}_{i\in I}$, and each $f_i$ is separable. This establishes (2) and (3).
(2)$\implies$(3) Let $f\in S$, and let $g$ be an irreducible factor of $f$. Since $f$ splits in $K$, then $g$ is the irreducible polynomial of some $u\in K$, where it splits. Since $K$ is separable over $F$, then $u$ is separable, so $g$ is separable. Thus, the elements of $S$ are separable. So $K$ is the splitting field over $F$ of a set of separable polynomials.
(3)$\implies$(1)
Since $K$ is a splitting field over $F$, it is algebraic. If $u\in K-F$, then there exist $v_1,\ldots,v_m\in K$ such that $u\in F(v_1,\ldots,v_m)$, and each $v_i$ is a root of some $f_i\in S$, since $K$ is generated by the roots of elements of $S$. Adding all the other roots of the $f_i$, $u\in F(u_1,\ldots,u_n)$, where $u_1,\ldots,u_n$ are all the roots of $f_1,\ldots,f_m$; that is, $F(u_1,\ldots,u_n)$ is a splitting field over $F$ of the polynomial $f_1\cdots f_m$.
If the implication holds for all finite dimensional extensions, then we would have that $F(u_1,\ldots,u_n)$ is a Galois extension of $F$, and therefore there exist $\tau\in \mathrm{Aut}_F(F(u_1,\ldots,u_n))$ such that $\tau(u)\neq u$. Since $K$ is a splitting field over $F$, it is also a splitting field over $F(u_1,\ldots,u_n)$, and therefore $\tau$ extends to an automorphism of $K$. Thus, there exists $\tau\in\mathrm{Aut}_F(K)$ such that $\tau(u)\neq u$. This would prove that the fixed field of $\mathrm{Aut}_F(K)$ is $F$, so the extension is Galois. Thus, we are reduced to proving the implication when $[K:F]$ is finite. When $[K:F]$ is finite, there is a finite subset of $T$ that will suffice to generate $K$. Moreover, $\mathrm{Aut}_F(K)$ is finite. If $E$ is the fixed field of $\mathrm{Aut}_F(K)$, then by Artin's Theorem $K$ is Galois over $E$ and $\mathrm{Gal}(K/E) = \mathrm{Aut}_F(K)$. Hence, $[K:E]=|\mathrm{Aut}_F(K)|$.
Thus, it suffices to show that when $K$ is a finite extension of $F$ and is a splitting field of a finite set of separable polynomials $g_1,\ldots,g_m\in F[x]$, then $[K:F]=|\mathrm{Aut}_F(K)|$. Replacing the set with the set of all irreducible factors of the $g_i$, we may assume that all $g_i$ are irreducible.
We do induction on $[K:F]=n$. If $n=1$, then the equality is immediate. If $n\gt 1$, then some $g_i$, say $g_1$, has degree greater than $1$; let $u\in K$ be a root of $g_1$. Then $[F(u):F]=\deg(g_1)$, and the number of distinct roots of $g_1$ in $K$ is $\deg(g_1)$, since $g_1$ is separable. Let $H=\mathrm{Aut}_{F(u)}(K)$. Define a map from the set of left cosets of $H$ in $\mathrm{Aut}_F(K)$ to the set of distinct roots of $g_1$ in $K$ by mapping $\sigma H$ to $\sigma(u)$. This is one-to-one, since $\sigma(u)=\rho(u)\implies \sigma^{-1}\rho\in H\implies \sigma H=\rho H$. Therefore, $[\mathrm{Aut}_F(K):H]\leq \deg(g_1)$. If $v\in K$ is any other root of $g_1$, then there is an isomorphism $\tau\colon F(u)\to F(v)$ that fixes $F$ and maps $u$ to $v$, and since $K$ is a splitting field, $\tau$ extends to an automorphism of $K$ over $F$. Therefore, the map from cosets of $H$ to roots of $g_1$ is onto, so $[\mathrm{Aut}_F(K):H]=\deg(g_1)$.
We now apply induction: $K$ is the splitting field over $F(u)$ of a set of separable polynomials (same one as we started with), and $[K:F(u)] = [K:F]/\deg(g_1)\lt [K:F]$. Therefore, $[K:F(u)]=|\mathrm{Aut}_{F(u)}(K)|=|H|$.
Hence $|\mathrm{Aut}_F(K)| = [\mathrm{Aut}_{F}(K):H]|H| = \deg(g_1)[K:F(u)]=[F(u):F][K:F(u)] = [K:F]$, and we are done. $\Box$
Corollary. Let $F$ be a field, and let $f_1,\ldots,f_n\in F[x]$ be nonconstant separable polynomials. Then any splitting field of $f_1,\ldots,f_n$ over $F$ is separable over $F$. |
H: Proof of 2 Matrix identities (Traces, Logs, Determinants)
I am working through a derivation in someone's thesis at the moment to understand an important result, but I am more than a bit rusty on matrices. Could anyone give me some tips on these identities? They are stated without proof and I'm having a hard time finding a derivation online.
Below, X is a matrix and E is a scalar, and X is a function of E.
1) $Tr(X' X^{-1}) = \frac{d}{dE} Tr(ln(X))$
When I first saw this I thought it would be the same as treating X as a scalar, then by the definition of the ln function the above would be true. Is the fact that there is a trace and that X is a matrix important in the derivation?
He does mention that "$Tr(\log X)' = Tr[X' X] = Tr[X X']$", but I think this was probably a typo, since an expression of the form $Tr[X' X]$ does not appear in his calculation.
2) $Tr(ln(X)) = ln(det(X))$
This one I am a bit stuck on, I would guess that it has something to do with the definitions of the trace and determinant but not sure where to go from there. I haven't done anything with matrices in about 3 years, and I'm a physicist, so keep it basic :)
EDIT
OK here is my working for proof 1 using Robert's guide below:
$$ X' = \frac{d}{dE} \sum_{n=0}^{\infty} \frac{L^n}{n!} $$
Using the chain rule, $$ X' = \sum_{n=0}^{\infty} \frac{n L' L^{n-1}}{n!} = \sum_{n=0}^{\infty} \frac{L' L^{n-1}}{(n-1)!} = \sum_{n=1}^{\infty} \frac{L' L^{n-1}}{n!}$$
Here is the bit I don't quite follow, regarding the introduction of the dummy j which seems to cancel later on in the calculation without being used.
$$ \sum_{j=0}^{n-1} \sum_{n=1}^{\infty} \frac{L' L^j L^{n-1 - j}}{n!} $$
Now using this expression for X':
$$ X' X^{-1} = X' e^{-L} = \sum_{j=0}^{n-1} \sum_{n=1}^{\infty} \frac{L' L^j L^{n-1 - j}}{n!} e^{-L} $$
Here is where I find a problem now, since
$$ \sum_{j=0}^{n-1} (\sum_{n=1}^{\infty} \frac{L^{n-1}}{n!}) L' e^{-L} $$
Now my part in the brackets in that last expression isn't $e^L$ so doesn't cancel nicely. I am pretty sure I am missing something with commutativity and when you introduced the sum over j!? EDIT 2 Just realised my last step on the chain rule, changing the sum from n=0 to n=1 doesn't make much sense.
AI: 1) is a bit tricky if $X'$ and $X$ don't commute. $\log(X)$ is a matrix $L$ such that $\exp(L) = X$. Now
$$\dfrac{d}{dE} X = \dfrac{d}{dE} \sum_{n=0}^\infty \dfrac{L^n}{n!} =
\sum_{n=1}^\infty \sum_{j=0}^{n-1} \dfrac{L^j L' L^{n-1-j}}{n!} $$
and so
$$ X' X^{-1} = X' \exp(-L) = \sum_{n=1}^\infty \sum_{j=0}^{n-1}\frac{L^j L' L^{n-1-j}}{n!} \exp(-L)$$
but since $\text{Tr}(AB) = \text{Tr}(BA)$ and $L$ commutes with $\exp(-L)$,
$$ \text{Tr}(X' X^{-1}) = \sum_{n=1}^\infty \sum_{j=0}^{n-1} \dfrac{\text{Tr}\left(L^j L' L^{n-1-j} \exp(-L) \right)}{n!} = \sum_{n=1}^\infty \dfrac{\text{Tr}\left( L'L^{n-1} \exp(-L)\right)}{(n-1)!} = \text{Tr}(L' \exp(L) \exp(-L)) = \text{Tr}(L')$$
2) If $X$ has eigenvalues $\lambda_j$ (counted by algebraic multiplicity), $\log(X)$ has eigenvalues $\log(\lambda_j)$. Then $\text{Tr}(\log(X)) = \sum_j \log(\lambda_j) = \log(\prod_j \lambda_j) = \log(\det(X))$. Actually there is a question of which branches of the logarithm to use when there are non-positive eigenvalues, so it is more accurate to say that $\text{Tr}(\log(X))$ is one of the branches of $\log(\det(X))$. |
H: Nonexistence of a strongly multiplicative increasing function with $f(2)=3$
Show that there does not exist a strictly increasing function
$f:\mathbb{N}\rightarrow\mathbb{N}$ satisfying
$$f(2)=3$$ $$f(mn)=f(m)f(n)\forall m,n\in\mathbb{N}$$
Progress:
Assume the function exists. Let $f(3)=k$
Since $2^3 < 3^2$,
$$3^3=f(2)^3=f(2^3)<f(3^2)=f(3)^2=k^2$$
so $k>5$, and since $3^3 < 2^5$, then
$$k^3=f(3)^3=f(3^3)<f(2^5)=f(2)^5=3^5=243<343=7^3$$
so $k<7$ therefore $k=6$.
I've messed around with knowing $f(3)=6$ and $f(2)=3$ but I am stuck.
AI: Hint: suppose not. You know what $f(2^k)$ must be. You'll show that $f(3)$ can't be any natural number. You have bounds since $f(2) < f(3) < f(4)$. Start considering $f(3^j) = f(3)^j$ for some small values of $j$ and compare to $f(2^k)$ for some small values of $k$ to eliminate all possibilities for $f(3)$.
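For instance, once the bounds force $f(3)=6$: since $3^5=243<256=2^8$, a strictly increasing $f$ would need $f(3^5)<f(2^8)$, i.e. $6^5=7776<3^8=6561$, which is false. This is the desired contradiction. |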
H: Dirichlet Series and Average Values of Certain Arithmetic Functions
If an arithmetic function $f(n)$ has Dirichlet series $\zeta(s) \prod_{i,j = 1} \frac{\zeta(a_i s)}{\zeta(b_j s)}$, for which values of $a_{i}$ and $b_{j}$ is the following true? That
\begin{align}
\lim_{x \to \infty} \tfrac{1}{x} \sum_{n \leq x} f(n) = \prod_{i,j = 1} \frac{\zeta(a_i )}{\zeta(b_j )}
\end{align}
or, more generally, there is a $\kappa > 0$ such that
\begin{align}
\sum_{n \leq x} f(n) = x \prod_{i,j = 1} \frac{\zeta(a_i )}{\zeta(b_j )} + O(x^{1-\kappa}) \sim x \prod_{i,j = 1} \frac{\zeta(a_i )}{\zeta(b_j )}.
\end{align}
Does the Wiener-Ikehara Theorem apply here?
AI: The following theorem appears in an introductory chapter in a soon to be published book by Andrew Granville and Kannan Soundararajan. An early version of the book which contains this material can be found on Granville's website. (Note: The statement of the result may appear different, but the proof is in the book which is currently on his website.)
Theorem: Let $f(n)=1*g(n)$ be a multiplicative function, and suppose that for $0\leq \sigma \leq 1$ the sum $$\sum_{d=1}^\infty \frac{|g(d)|}{d^{\sigma}}=\tilde{G}(\sigma)$$ converges. Then, if we write $\mathcal{P}(f)= \sum_{n=1}^\infty \frac{g(n)}{n},$ we have that $$\left|\sum_{n\leq x} f(n)-x\mathcal{P}(f)\right|\leq x^{\sigma}\tilde{G}(\sigma).$$
With your definition of $f(n)$, we see that $\mathcal{P}(f)=\prod_{i,j} \frac{\zeta(a_i)}{\zeta(b_i)}$, and letting $$\delta=\min\{a_i-1,\ b_i-1,\ 1\},$$ we see that by setting for $\sigma=\delta+\frac{1}{\log x}$, we get $$\sum_{n\leq x}f(n)=x\prod_{i,j}\frac{\zeta(a_i)}{\zeta(b_i)}+O\left(x^{1-\delta}\log x\right).$$
Remark: As an almost immediate corollary, since $\frac{\phi(n)}{n}$ has Dirichlet series $\frac{\zeta(s)}{\zeta(s+1)}$, we get that $$\sum_{n\leq x}\frac{\phi(n)}{n}=\frac{x}{\zeta(2)}+O(\log x).$$
See also this answer on Math.Stackexchange. |
H: Questions about the center of a group
I need help with this advanced algebra problem.
Let $G$ be a group. We call the set $C(G)= \{a \in G : ab=ba, \forall b \in G\}$ the center of $G$.
Prove that:
(a) $C(G)$ is normal subgroup of $G$.
(b) $C(G)=G$ if and only if $G$ is abelian group.
(c) If $a$ is the only element in $G$, then $a\in C(G)$.
AI: (a) Let $g \in G$ and suppose $a \in C(G)$; then $gag^{-1} = agg^{-1} = a$.
(b) If $G = C(G)$, then $a \in G$ implies $a \in C(G)$ so for any $b \in G$, $ab = ba$. Suppose $G$ is abelian, then clearly $ab = ba$ for all $a$.
(c) If $a$ is the only element of $G$, then $a$ must be the identity. Clearly the identity element is in the center. |
H: Integral of $\int_{3}^{\infty} \frac{dx}{(x-2)^{3/2}}$
I am not sure how to evaluate this problem
$$\int_{3}^{\infty} \frac{dx}{(x-2)^\frac{3}{2}}$$
I do not know if it converges or diverges, but I do know that it is undefined at $3$.
(It is $3$, not $-3$.)
AI: The integrand is defined for any $x\ge3$. The integral is improper, however, since the upper limit of integration is infinite. You'll have to set it up as a limit:
$$
\int_{3}^\infty{dx\over (x-2)^{3/2}}
=\lim_{a\rightarrow\infty}\int_{3}^a{dx\over (x-2)^{3/2}}.
$$
Evaluate $\int_{3}^a{dx\over (x-2)^{3/2}}$ first. This will leave you with an expression in $a$. Then you can take the limit. If the limit exists, the integral converges to its value. If the limit does not exist (including in the infinite sense), then the integral diverges.
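Carrying this out: $\int_{3}^a\frac{dx}{(x-2)^{3/2}}=\Big[-\frac{2}{\sqrt{x-2}}\Big]_3^a=2-\frac{2}{\sqrt{a-2}}\to 2$ as $a\to\infty$, so the integral converges, with value $2$. |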
H: $\mid\theta-\frac{a}{b}\mid< \frac{1}{b^{1.0000001}}$, question related to the dirichlet theorem
The question is:
A certain real number $\theta$ has the following property: There exist infinitely many rational numbers $\frac{a}{b}$(in reduced form) such that:
$$\mid\theta-\frac{a}{b}\mid< \frac{1}{b^{1.0000001}}$$
Prove that $\theta$ is irrational.
I just don't know how I could somehow relate $b^{1.0000001}$ to $b^2$ or $2b^2$ so that Dirichlet's theorem can be applied. Or is there another way to approach the problem?
Thank you in advance for your help!
AI: Hint: Let $\theta=\frac{p}{q}$, where $p$ and $q$ are relatively prime. Look at
$$\left|\frac{p}{q}-\frac{a}{b}\right|.\tag{$1$}$$
Bring to the common denominator $bq$. Then if the top is non-zero, it is $\ge 1$, and therefore Expression $(1)$ is $\ge \frac{1}{bq}$.
But if $b$ is large enough, then $bq<b^{1.0000001}$.
Edit: The above shows that if $\theta$ is rational, there cannot be arbitrarily large $b$ such that
$$\left|\theta-\frac{a}{b}\right|<\frac{1}{b^{1.0000001}}.\tag{$2$}$$
Of course, if we replace the right-hand side by $b^{1.0000001}$, then there are arbitrarily large such $b$. Indeed if we replace it by any fixed $\epsilon\gt 0$, there are arbitrarily large such $b$, since any real number can be approximated arbitrarily closely by rationals. Thus if in the original problem one has $b^{1.0000001}$, and not its reciprocal, it must be a typo. |
H: Given an interval $I$, what does the notation $\overline{I}$ mean?
I have below a beginning of a theorem:
If a function $f:I \rightarrow \mathbb{C}$ defined on an interval $I$ of length $p$ can be expanded to a piecewise differentiable function on $\overline{I}$, then will...
What does $\overline{I}$ mean in this context?
AI: Here $\overline I$ means the closure of $I$ - in general this is the smallest closed set which contains $I$. If $I \subseteq \mathbb{R}$ is an interval, then it is just the interval with the endpoints included.
For a more general example, in $\mathbb{R}^2$ you have the set $B = \{ (x,y) \in \mathbb{R}^2 : | (x,y) | < 1 \}$, the open ball of radius $1$, centred at the origin. If we take its closure we get $\overline B =\{(x,y) \in \mathbb{R}^2 : |(x,y)| \le 1 \}$, the closed disc, which contains the boundary. This is the two-dimensional analogue of an interval. |
H: Proving that a language built from two regular languages is regular as well
Let $L, M \subseteq \Sigma^*$ be regular languages. I need to prove that
$$N = \{x \in \Sigma^* \; | \; \exists y \in L : xy \in M\}$$ is as well a regular language.
My favored approach is to find a finite state machine that recognizes this language. I thought about taking the FSM that recognizes $M$ and adding $\lambda$-transitions to the FSM of $L$, but this didn't help me: I don't know how to integrate the recognition of $L$ into each step of $M$, and am stuck.
Could you please help me with that?
Thanks in advance!
AI: Take a state machine for $M$. Given a state $\alpha$ in that machine, we can say that $\alpha$ is $L$-complete if, for some string $y\in L$, starting at $\alpha$, yields a successful match.
Now, create a new machine that uses the same state machine as $M$, but that register a "match" only at the $L$-complete nodes.
You don't really need to know how to figure out which nodes are $L$-complete; you just know that they are some subset of the state nodes.
This is a non-constructive solution, in that it does not show you how to determine the $L$-complete nodes, so you cannot use this argument to make the machine for $N$ out of known machines for $L$ and $M$.
Indeed, this proof doesn't appear to need $L$ to be regular. |
H: Smallest possible perimeter of a triangle if area is 135
What is the smallest possible perimeter of a (flat, regular) triangle if the area is 135?
I tried various equations but I always ended up with two unknown variables.
AI: Hint: You want an equilateral triangle. That should give you a single equation.
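Carrying that out: an equilateral triangle with side $s$ has area $\frac{\sqrt3}{4}s^2$, so $\frac{\sqrt3}{4}s^2=135$ gives $s^2=\frac{540}{\sqrt3}=180\sqrt3$, i.e. $s=\sqrt{180\sqrt3}\approx 17.66$, and the smallest possible perimeter is $3s\approx 52.97$. |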
H: Arc length of $y = \frac{x^3}{3} + \frac{1}{4x}$
Arc length of $y = \frac{x^3}{3} + \frac{1}{4x}$ over $1 \leq x \leq 2$
I know that the first thing I need to do is take the derivative.
$$y' = x^2 - 4x^{-2}$$
Then I take the integral on that range using the arc length formula.
$$\int_1^2 \sqrt{1 + (x^2-4x^{-2})^2}$$
$$(x^2-4x^{-2})^2 = -16x^{-4} - 8 + x^4$$
$$\int_1^2 \sqrt{-16x^{-4} - 7 + x^4 }$$
From here I have no idea how to factor this but I am pretty sure I must have messed up something before that.
AI: Your derivative is wrong, and your squaring is wrong.
Your derivative is wrong:
$$\left(\frac{x^3}{3} +\frac{1}{4x}\right)' = \left(\frac{1}{3}x^3 + \frac{1}{4}x^{-1}\right)' = x^2 - \frac{1}{4}x^{-2} = x^2 - \frac{x^{-2}}{4}.$$
You squared incorrectly: if you square $x^2-4x^{-2}$, you get:
$$(x^2-4x^{-2})^2 = x^4 - 8x^2x^{-2} + 16x^{-4} = x^4 - 8+16x^{-4}.$$
Note the plus sign on $16x^{-4}$; you have a minus sign.
If you square the correct function, you get
$$\left( x^2 - \frac{1}{4}x^{-2}\right)^2 = x^4 - \frac{1}{2} + \frac{1}{16}x^{-4}.$$
So the integral would be
$$\int_{1}^2 \sqrt{ x^4 + \frac{1}{2} + \frac{1}{16}x^{-4}}\,dx.$$
As for solving it, note that:
$$x^4 + \frac{1}{2} + \frac {1}{16}x^{-4} = (x^2)^2 + 2x^2\left(\frac{1}{4}x^{-2}\right) + \left(\frac{1}{4}x^{-2}\right)^2 = \left( x^2 + \frac{1}{4}x^{-2}\right)^2.$$
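Carrying out the last step: $\int_{1}^{2}\left(x^{2}+\frac{1}{4}x^{-2}\right)dx=\left[\frac{x^{3}}{3}-\frac{1}{4x}\right]_{1}^{2}=\left(\frac{8}{3}-\frac{1}{8}\right)-\left(\frac{1}{3}-\frac{1}{4}\right)=\frac{59}{24}$. |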
H: Arc length of $y = 1/3 \sqrt{x} (x-3)$
$y = \frac13 \sqrt{x}\,(x-3)$ for $1 \le x \le 9$.
In my book it is actually $x = \frac13 \sqrt{y}\,(y-3)$, but I prefer to work with $y$, so I just swap the two variables; everything should be the same.
The first thing I need to do is get an integral.
$y = 1/3 \sqrt{x} (x-3)$ = $\frac{x^\frac{3}{2} - 3x^\frac{1}{2}}{3}$
$$\sqrt{x} + \frac {\sqrt{x} - 3}{2\sqrt{x}}$$
Plug that into the arc length formula
$$\int_1^9 \sqrt{1 + (\sqrt{x} + \frac {\sqrt{x} - 3}{2\sqrt{x}})^2}$$
$$\int_1^9 \sqrt{1 + x - 2+ x^\frac{1}{4} + \frac{x}{4} -\frac{3}{2} - \frac{9}{4x}}$$
From here I am very lost.
AI: It is very similar to your previous problem. Magically, when we add $1$ to $(f'(x))^2$, we get something whose square root is pleasant. But we must differentiate and do the algebra flawlessly!
Let $f(x)$ be our function. Then $f(x)=\frac{1}{3}\left(x^{3/2}-3x^{1/2}\right)$, and therefore $f'(x)=\frac{x^{1/2}}{2}-\frac{x^{-1/2}}{2}$.
Square. We get $\frac{x}{4}-\frac{1}{2}+\frac{x^{-1}}{4}$.
Add $1$. We get $\frac{x}{4}+\frac{1}{2}+\frac{x^{-1}}{4}$. Take the square root. This turns out to be $\frac{x^{1/2}}{2}+\frac{x^{-1/2}}{2}$. Finally, integrate.
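For completeness, carrying out that last step: $\int_1^9 \frac12\left(x^{1/2}+x^{-1/2}\right)dx = \left[\frac{x^{3/2}}{3}+x^{1/2}\right]_1^9 = (9+3)-\left(\frac13+1\right) = \frac{32}{3}$. |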
H: Rotation group, altitude
Could someone give me a rigorous proof that the group of rotations each element of which is a composition of rotations around the altitudes of a tetrahedron that transform the tetrahedron into itself is isomorphic to the alternating group $A_4$.
AI: Jyrki's solution is excellent, but it may be useful to have another way of understanding why the symmetry group isn't all of $S_4$ (since it may not be immediately obvious that the group is generated by its three-cycles - that is, that every element of the group can be written as a product of three-cycles).

Choose one of the vertices, $v_0$ say, and consider the triple product of the vectors from $v_0$ to the three other vertices $v_1, v_2, v_3$: $((v_1-v_0)\times(v_2-v_0))\cdot(v_3-v_0)$. This is the determinant of the matrix whose rows are the three vectors $v_i-v_0$; it represents the signed volume of the tetrahedron. As such, its sign has to be preserved by any rotation of the tetrahedron (this just says that rotations preserve the orientation of space), but a little calculation will show that swapping two vertices changes the sign of the triple product. This means that the two-cycles aren't in the rotation group, and so the group can't be all of $S_4$ (since we know some elements of $S_4$ that aren't in it).

Jyrki's answer shows that the group is at least as large as $A_4$, and $A_4$ is a so-called maximal subgroup of $S_4$; there are no other groups 'in between' $A_4$ and $S_4$. Since the rotation group is at least $A_4$ and it can't be anything larger, it must be $A_4$ exactly. |
H: Example for changing the compactness of a manifold by considering another topology
If compactness depends on the topology, what are good examples of "alternative" topologies that steal away compactness? That is, I am asking for spaces that are usually considered together with a topology under which they are compact, along with other topologies on the same underlying set under which they are not.
AI: Any topology on a finite set is compact, so you won't find any examples there.
However, given any infinite topological space, replacing the topology with the discrete topology always makes it non-compact: the open cover by singletons then has no finite subcover. |
H: Winding number on a simply connected region
Let $\Omega$ be a simply connected region in $\mathbb{C}$ and $\gamma$ a closed, piecewise continuously differentiable path in $\Omega$.
Is there an intuitive explanation why the winding number $\mathrm{ind}_\gamma(\alpha)=0$, $\alpha \in \mathbb{C}\backslash \Omega$, on simply connected regions?
AI: I assume you mean simply connected rather than "simple", and also that you mean $\mathbb C\setminus \Omega$ rather than the other way around.
In that case it is because, by the definition of "simply connected", any closed path in a simply connected region is homotopic to a point. The winding number is a continuous function of the curve; since the total change of the argument along a closed curve is always an integer multiple of $2\pi$, the winding number is integer-valued, so it cannot change during a homotopy, provided that it is defined at every intermediate step -- as it is here, since the homotopy stays inside $\Omega$ while $\alpha\notin \Omega$.
Since a point (i.e., a constant curve) obviously has winding number 0, so has any closed curve that is homotopic to it. |
H: Is the integral $\int_1^\infty\frac{x^{-a} - x^{-b}}{\log(x)}\,dx$ convergent?
Is the integral $$\int_1^\infty\frac{x^{-a} - x^{-b}}{\log(x)}\,dx$$ convergent, where $b>a>1$?
I think the answer lies in defining a double integral with $yx^{-y-1}$ and applying Tonelli's Theorem, but the function $\frac{x^{-a}}{\log x}$ on its own is still not integrable. Any ideas?
AI: If we make the substitution $t=\log x$ (that is, $x=e^t$), then we get an integral of the form
$$
\int\limits_{0}^\infty\frac{e^{-(a-1)t}-e^{-(b-1)t}}{t}dt
$$
Now the result follows from Frullani's integral formula applied to the function $f(t) = e^{-t}$. Moreover, this formula gives us the exact value of the integral
$$
\int\limits_{0}^\infty\frac{e^{-(a-1)t}-e^{-(b-1)t}}{t}dt=(f(0)-f(\infty))\log\frac{b-1}{a-1}=\log\frac{b-1}{a-1}
$$
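As a quick numerical sanity check (my addition; it assumes Python with SciPy is available), one can compare the original integral against $\log\frac{b-1}{a-1}$:

    import numpy as np
    from scipy.integrate import quad

    a, b = 2.0, 5.0  # any b > a > 1 will do

    value, _ = quad(lambda x: (x**-a - x**-b) / np.log(x), 1, np.inf)
    print(value)                      # 1.3862943... (= log 4)
    print(np.log((b - 1) / (a - 1)))  # 1.3862943...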
As for the proof of this formula see this discussion. |
H: Application residue theorem for improper integrals
Let $R(z)=\displaystyle \frac{P(z)}{Q(z)}$ be a rational function with $\mathrm{order}(Q) \geq \mathrm{order}(P)+2$ and $Q(x)\neq 0$ for all $x\in \mathbb{R}$. Then we have:
$$
\int_{-\infty}^{\infty}R(x)\mathrm{d}x=2\pi i\sum_{z:\ \mathrm{Im} \ z>0}{\rm Res}(R,z)
$$
Why is
$\mathrm{order}(Q) \geq \mathrm{order}(P)+2$ and $Q(x)\neq 0$ for all $x\in \mathbb{R}$
important?
AI: The basic reason is that you need a growth condition on the function. To evaluate this integral you actually integrate over the boundary of a large half-disc, and you need the contribution of the semicircular arc to go to zero as the radius goes to infinity; the degree condition guarantees exactly that. If the function doesn't decay fast enough, the residue sum will retain part of the arc contribution, not just the integral along the real line. The condition $Q(x)\neq 0$ for all $x\in\mathbb{R}$ is needed so that $R$ has no poles on the contour itself and the integral along the real axis converges.
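To make this quantitative (my addition, spelling out the standard estimate): the degree condition gives $|R(z)|\le C/|z|^2$ for all sufficiently large $|z|$, so the contribution of the semicircular arc $C_\rho$ of radius $\rho$ satisfies
$$\left|\int_{C_\rho}R(z)\,dz\right|\le \pi\rho\cdot\frac{C}{\rho^{2}}=\frac{\pi C}{\rho}\longrightarrow 0 \quad (\rho\to\infty),$$
which is exactly what the residue computation needs. |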
H: Is $\mathbb{R}$ a vector space over $\mathbb{C}$?
Here is a problem so beautiful that I had to share it. I found it in Paul Halmos's autobiography. Everyone knows that $\mathbb{C}$ is a vector space over $\mathbb{R}$, but what about the other way around?
Problem: Prove or disprove: $\mathbb{R}$ can be made into a vector space over $\mathbb{C}$.
Of course, we would like for $\mathbb{R}$ to retain its structure as an additive group.
AI: If you want the additive vector space structure to be that of $\mathbb{R}$, and you want the scalar multiplication, when restricted to $\mathbb{R}$, to agree with multiplication of real numbers, then you cannot.
That is, suppose you take $\mathbb{R}$ as an abelian group, and you want to specify a "scalar multiplication" on $\mathbb{C}\times\mathbb{R}\to\mathbb{R}$ that makes it into a vector space, and in such a way that if $\alpha\in\mathbb{R}$ is viewed as an element of $\mathbb{C}$, then $\alpha\cdot v = \alpha v$, where the left hand side is the scalar product we are defining, and the right hand side is the usual multiplication of real numbers.
If such a thing existed, then the vector space structure would be completely determined by the value of $i\cdot 1$: because for every nonzero real number $\alpha$ and every complex number $a+bi$, we would have
$$(a+bi)\cdot\alpha = a\cdot \alpha +b\cdot(i\cdot \alpha) = a\alpha + b(i\cdot(\alpha\cdot 1)) = a\alpha + b\alpha(i\cdot 1).$$
But say $i\cdot 1 = r$. Then $(r-i)\cdot 1 = 0$, which contradicts the properties of a vector space, since $r-i\neq 0$ and $1\neq \mathbf{0}$. So there is no such vector space structure.
But if you are willing to make the scalar multiplication when restricted to $\mathbb{R}\times\mathbb{R}$ to have nothing to do with the usual multiplication of real numbers, then you can indeed do it by transport of structure, as indicated by Chris Eagle. |
H: Simplifying $\sqrt {1+(x/2 - 1/(2x))^2}$
I am having trouble figuring this out.
$$\sqrt {1+\left(\frac{x}{2}- \frac{1}{2x}\right)^2}$$
I know that $$\left(\frac{x}{2} - \frac{1}{2x}\right)^2=\frac{x^2}{4} - \frac{1}{2} + \frac{1}{4x^2}$$ but I have no idea how to factor this since I have two x terms with vastly different degrees, 2 and -2.
AI: Since
$$\begin{equation*}
\left( \frac{x}{2}-\frac{1}{2x}\right) ^{2}=\frac{x^{2}}{4}-\frac{1}{2}+
\frac{1}{4x^{2}},
\end{equation*}$$
we have
$$\begin{eqnarray*}
1+\left( \frac{x}{2}-\frac{1}{2x}\right) ^{2} &=&1+\left( \frac{x^{2}}{4}-
\frac{1}{2}+\frac{1}{4x^{2}}\right) \\
&=&1+\frac{x^{2}}{4}-\frac{1}{2}+\frac{1}{4x^{2}} \\
&=&\frac{x^{2}}{4}+\left( 1-\frac{1}{2}\right) +\frac{1}{4x^{2}} \\
&=&\frac{x^{2}}{4}+\frac{1}{2}+\frac{1}{4x^{2}} \\
&=&\left( \frac{x}{2}+\frac{1}{2x}\right) ^{2},
\end{eqnarray*}$$
because
$$\left( \frac{x}{2}+\frac{1}{2x}\right) ^{2}=\frac{x^{2}}{4}+\frac{1}{2}+
\frac{1}{4x^{2}}.$$
Therefore
$$\begin{equation*}
\sqrt{1+\left(\frac{x}{2}-\frac{1}{2x}\right)^2}=\sqrt{\left(\frac{x}{2}
+\frac{1}{2x}\right)^2}=\left\vert\frac{x}{2}+\frac{1}{2x}\right\vert .
\end{equation*}$$ |
H: is $\frac{\{1, \ldots ,n\}}{n+1}$ proper notation when $n\geq1$?
I am trying to explain R code:
(1:n)/(n+1)
such that:
> n <- 4
> (1:n)/(n+1)
[1] 0.2 0.4 0.6 0.8
I might use
$$\frac{\{1, \ldots ,n\}}{n+1}$$
Is that okay? It seems to imply $n\neq1$. Does it?
AI: A lot of people are going to be completely mystified if you write
$$\frac{\{1, ... ,n\}}{n+1}$$ I think you would do better to write this:
$$\frac1{n+1},\cdots,\frac n{n+1}$$
Note that there are no curly braces, which would imply that the result was a set, rather than a sequence.
Perhaps you could write the first one if you first explained that it means the second one.
My suggestion does not imply $n\ne 1$. $n=1$ is perfectly okay, and in that case the expression means a sequence with one element.
Note that R does not generate an empty sequence when $n=0$: the colon operator counts down, so 1:0 is the two-element sequence 1 0. If the case $n=0$ can occur, use seq_len(n) instead of 1:n, and mention this explicitly. Depending on your audience, you might want to mention it anyway. |
H: What does it mean to take the splitting field of $f(x)\in F[x]$ over $K$ where $K/F$ is a field extension
Let $K/F$ be a field extension and let $f(x)\in F[x]$. I know $f(x)$ has a splitting field, i.e. a field $E$ in which $f(x)$ splits ($E/F$ is an extension and $f(x)$ doesn't split in any proper subfield of $E$).
I heard the term "the splitting field of $f(x)$ over $K$" - but what does this mean ?
I realize that it is also true that $f(x)\in K[x]$, but I still don't understand the term... it looks like we want a field $L$ s.t. $L/K$ is the minimal field extension s.t. $f(x)$ splits in $L$, but this looks like a compositum of $E$ and $L$, and they aren't both subfields of one field I know...
I'm confused, can someone please explain the term ?
AI: Let $K$ be a field and let $f(x)$ be a nonconstant polynomial in $K[x]$. A splitting field of $f$ over $K$ is a field extension $L$ of $K$ such that:
$f(x)$ splits into linear factors in $L$; and
$L = K(\{u\in L\mid f(u) = 0\})$; that is, $L$ is generated over $K$ by the roots of $f(x)$.
One can prove by induction on $\deg(f)$ that splitting fields always exist, and moreover that two splitting fields for $f$ over $K$ are isomorphic over $K$.
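A concrete example (my addition) of how the base field matters: for $f(x)=x^2-2\in\mathbb{Q}[x]$, the splitting field of $f$ over $\mathbb{Q}$ is $\mathbb{Q}(\sqrt{2})$, while the splitting field of the very same polynomial over $K=\mathbb{R}$ is $\mathbb{R}$ itself, since the roots $\pm\sqrt{2}$ already lie in $\mathbb{R}$.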
One can extend this to arbitrary sets of polynomials: let $K$ be a field, and let $S\subseteq K[x]$ be a set of nonconstant polynomials. A splitting field of $S$ over $K$ is a field extension $L$ of $K$ such that:
For every $f(x)\in S$, $f(x)$ splits in $L$; and
If $U=\{ a\in L \mid \text{there exists }f\in S\text{ such that }f(a)=0\}$, then $L=K(U)$. That is, $L$ is generated over $K$ by the roots of the polynomials in $S$.
It is easy to see that if $S$ is finite, $S=\{f_1,\ldots,f_n\}$, then a splitting field for $S$ over $K$ is the same thing as a splitting field for $g(x)$ over $K$, where $g(x) = f_1(x)\cdots f_n(x)$. Thus, splitting fields are generally only interesting for either single polynomials, or infinite sets of polynomials. The fact that every set of nonconstant polynomials in $K[x]$ has a splitting field over $K$, and moreover that any two such splitting fields are isomorphic over $K$, can be established by applying Zorn's Lemma. |
H: Touch Typing Index - Speed and Accuracy
I am trying to determine the ability of my students to touch type.
I have data on their speed (in seconds) and their accuracy (number of errors). I also know the number of words in the test (50 words).
Eg. Name: Bob, Speed: 113s, Errors: 19
Eg. Name: Jane, Speed: 831s, Errors: 65
How do I create an index value that places the students with the higher speed and the higher accuracy with the highest index value and a sliding scale for students with lower speed and lower accuracy?
The index would also reflect situations where students are slow but accurate, or fast yet inaccurate.
AI: When I took typing, the formula was (words per minute) - 10(mistakes) for a 1-2 minute test. You can adjust the coefficient as you wish depending on whether you think speed or accuracy is more important. Presumably the errors scale with time, so the coefficient should go down as the test length increases.
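A minimal sketch of such an index (my addition; the function name and the default penalty are illustrative, not prescribed by the formula above):

    def typing_index(words, seconds, errors, penalty=10.0):
        """Words per minute minus a per-error penalty; tune penalty
        to weight accuracy against speed."""
        wpm = words / (seconds / 60.0)
        return wpm - penalty * errors

    print(typing_index(50, 113, 19))  # Bob:  about -163.5
    print(typing_index(50, 831, 65))  # Jane: about -646.4

Both scores come out negative here because the error counts are large relative to the 50-word test; lowering the penalty (as suggested above for longer tests) gives friendlier numbers, and the ordering, Bob above Jane, is preserved. |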
H: Indefinite integral of secant cubed $\int \sec^3 x\>dx$
I need to calculate the following indefinite integral:
$$I=\int \frac{1}{\cos^3(x)}dx$$
I know what the result is (from Mathematica):
$$I=\tanh^{-1}(\tan(x/2))+(1/2)\sec(x)\tan(x)$$
but I don't know how to integrate it myself. I have been trying some substitutions to no avail.
Equivalently, I need to know how to compute:
$$I=\int \sqrt{1+z^2}dz$$
which follows after making the change of variables $z=\tan x$.
AI: We have an odd power of cosine. So there is a mechanical procedure for doing the integration. Multiply top and bottom by $\cos x$. The bottom is now $\cos^4 x$, which is $(1-\sin^2 x)^2$. So we want to find
$$\int \frac{\cos x\,dx}{(1-\sin^2 x)^2}.$$
After the natural substitution $t=\sin x$, we arrive at
$$\int \frac{dt}{(1-t^2)^2}.$$
So we want the integral of a rational function. Use the partial fractions machinery to find numbers $A$, $B$, $C$, $D$ such that
$$\frac{1}{(1-t^2)^2}=\frac{A}{1-t}+\frac{B}{(1-t)^2}+ \frac{C}{1+t}+\frac{D}{(1+t)^2}$$
and integrate.
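For completeness (my addition, not part of the original answer): the partial fractions come out to $A=B=C=D=\frac14$, so
$$\int \frac{dt}{(1-t^2)^2}=\frac14\ln\left|\frac{1+t}{1-t}\right|+\frac{t}{2(1-t^2)}+C,$$
and substituting $t=\sin x$ back gives $\frac12\tanh^{-1}(\sin x)+\frac12\sec x\tan x+C$, which agrees with the Mathematica result above because $\tanh^{-1}(\tan(x/2))=\frac12\tanh^{-1}(\sin x)$. |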
H: Is this the free abelian group functor?
Let $\mathbb{Z}(.) : \mathbf{Set} \to \mathbf{Ab}$ be the functor that assigns to any set $S$ the set of maps
$\mathbb{Z}(S) := \{ z: S \to \mathbb{Z} \; | \; z(s)=0 \mbox{ for almost all } s \in S \}$
and to any set map $f: S \to T$ the morphism $\mathbb{Z} f :\mathbb{Z}(T) \to \mathbb{Z}(S)$ of
abelian groups, defined by $\mathbb{Z} f(z):= z \circ f$ for all $z \in \mathbb{Z}(T)$.
This defines a contravariant functor. Right?
Is this what is called the free abelian group functor? (I wonder because of its
contravariance)
AI: What you've defined is not a functor, covariant or contravariant. Let $S$ be an infinite set and $f : S \to 1$ the unique map. Then $\mathbb{Z} f$ does not exist. (You need to require that $f$ is proper, that is, that the preimage of a finite set is finite.)
The free abelian group functor is covariant. You've assigned the right things to objects (more or less) but not to morphisms. The free abelian group functor assigns to a set $S$ the abelian group $\mathbb{Z}[S]$ of formal linear combinations
$$\sum_{s \in S} c_s s, c_s \in \mathbb{Z}$$
and assigns to a function $f : S \to T$ the homomorphism $\mathbb{Z}[f]$ sending $\sum c_s s$ to $\sum c_s f(s)$. In this setup, the homomorphism you wanted to assign to a function $g : T \to S$ sends $\sum c_s s$ to
$$\sum c_s \sum_{g(t) = s} t$$
and of course this is not well-defined if $\{ t : g(t) = s \}$ is infinite. This desire to "integrate over" inverse images appears in other contexts (e.g. pullbacks in homology, of which this may be regarded as a toy example (the $H_0$ of discrete spaces)) but I am not well-qualified to discuss them.
This issue is precisely the reason why I get annoyed when people call $\mathbb{Z}^S$ the free abelian group on $S$ when $S$ is finite: the assignment $S \to \mathbb{Z}^S$ ought to be contravariant, not covariant.
Edit: The comparison to homology might be valuable as a way of contextualizing this discussion. If $S$ is a set regarded as a discrete space, $\mathbb{Z}[S]$ is the zeroth homology $H_0(S)$ while $\mathbb{Z}^S$ is the zeroth cohomology $H^0(S)$; in particular, the former is covariant while the latter is contravariant. The fact that for $S$ finite we can identify the two can then be thought of as a very special case of Poincaré duality, which hammers home the point that the finiteness of $S$ is essential here. |
H: Arc length of $y^3 = x^2$
I am trying to find the arc length of $y^3 = x^2$, and I am supposed to use two formulas: one in terms of $x$ and one in terms of $y$.
First I need the endpoints, which are $(0,0)$ and $(1,1)$, and I start with the formula in terms of $x$.
$$y\prime = \frac{2}{3}x^\frac{-1}{3}$$
$$\left(\frac{2}{3}x^\frac{-1}{3}\right)^2 = \frac{4}{9x^\frac{2}{3}}$$
$$\int_0^1 \sqrt{1 + \left( \frac{2}{3}x^\frac{-1}{3}\right)^2}\, dx$$
$$\int_0^1 \sqrt{1 + \frac{4}{9x^\frac{2}{3}}}\, dx$$
This is undefined at $0$, so it is an improper integral. I do not think I can continue.
Now in terms of y.
$$x = y^\frac{3}{2}$$
$$x \prime = \frac{3\sqrt{y}}{2}$$
$$ \left(\frac{3\sqrt{y}}{2}\right)^2 = \frac{9y}{4}$$
$$\int_0^1 \sqrt{1+ \frac{9y}{4}}\, dy$$
I have no idea how to simplify that; I have tried many ways but I cannot get it.
AI: Substitute $u=1+\frac{9y}{4}$, so that $dy=\frac{4}{9}\,du$:
$$\int_0^1\sqrt{1 + {9y\over 4}}\,dy
= {4\over 9}\int_1^{13/4} \sqrt{u}\,du
= {4\over 9}\cdot{2\over 3}\Big[u^{3/2}\Big]_1^{13/4}
= {8\over 27}\left({13\sqrt{13}\over 8} - 1\right)
= {13\sqrt{13} - 8\over 27}$$ |
H: Degrees of freedom vs. cardinality of tuples
Sometimes it is said that the number of DoF of a system means how many real numbers have to be used at least to describe the system.
But we know from set theory that the cardinality of any tuple of reals is the same as the reals, so all the information that is in a 3-tuple of reals, can be represented in just one real number.
Of course this representation may not be very useful as the properties of the system will be non-continuous functions of this variable, but still it is enough to describe the state of the system.
So how could this confusing thing be resolved?
AI: While $\mathbb{R}^n$ and $\mathbb{R}^m$ have the same cardinality for all positive integers $n,m$, they are homeomorphic as topological spaces if and only if $n=m$, a result due to Brouwer.
So one cannot say with assurance that a parameterization by $n$ real values is equivalent to one by $m$ real values, if the continuity of the parameterization plays a role (as it most often will). |
H: The Nearest Points
Given a set $R$ of $N$ points, $R=\{(x_1, y_1, z_1), (x_2, y_2, z_2),\ldots, (x_n, y_n, z_n)\}$, and a set $S$ of $M$ points, $S=\{(a_1, b_1, c_1), (a_2, b_2, c_2),\ldots,(a_m, b_m, c_m)\}$.
For each point $p_i$ $(i=1,\ldots,N)$ in set $R$, find the point $q_j$ $(j=1,\ldots,M)$ in set $S$ such that the euclidean distance between $p_i$ and $q_j$ is minimum.
Note: the euclidean distance between $(l_1, m_1, n_1)$ and $(l_2, m_2, n_2)$ is $\sqrt{(l_1-l_2)^2+(m_1-m_2)^2+(n_1-n_2)^2}$.
The constraints are large. The total number of points in each set $R$ and $S$ is very large $(1\leq N, M\leq 50,000)$.
How to solve this problem..
Edit
Ross Millikan pointed out this can be done in polynomial time (so I have made a slight change in the problem statement).
By brute force it can be done in $O(N\cdot M)$ time. But the problem is that $N\cdot M$ means $50000\cdot 50000=2.5\times 10^{9}$ steps, and I have to get output in less than $3$ seconds. So, is there a better solution than $O(N\cdot M)$?
Thanks!
For example:
The points of Set S: {(1,0,0), (0,0,1), (0,1,0)}
The points of Set R: {(0,-1,0), (0,0,-1)}
For the point p = (0,-1,0) in set R:
Euclidean distance between p and (1,0,0) is sqrt(2)
Euclidean distance between p and (0,0,1) is sqrt(2)
Euclidean distance between p and (0,1,0) is sqrt(4)
So the minimum distance is sqrt(2), and the answer is q = (0,0,1).
For the point p = (0,0,-1) in set R:
Euclidean distance between p and (1,0,0) is sqrt(2)
Euclidean distance between p and (0,0,1) is sqrt(4)
Euclidean distance between p and (0,1,0) is sqrt(2)
So the minimum distance is sqrt(2), and the answer is q = (0,1,0).
Note: if there is more than one point for which the distance is minimum, we just need any one of them.
Example 2:
The points of Set S: {(-91,10,5), (3,-6,7), (4,5,8), (-89,1,4)}
The points of Set R: {(4,-6,7), (10,-30,17), (8,9,10), (3,4,-9)}
For the point p = (4,-6,7) in set R:
Euclidean distance between p and (-91,10,5) is sqrt(9285)
Euclidean distance between p and (3,-6,7) is sqrt(1)
Euclidean distance between p and (4,5,8) is sqrt(122)
Euclidean distance between p and (-89,1,4) is sqrt(8707)
So the minimum distance is sqrt(1), and the answer is q = (3,-6,7).
For the point p = (10,-30,17) in set R:
Euclidean distance between p and (-91,10,5) is sqrt(11945)
Euclidean distance between p and (3,-6,7) is sqrt(725)
Euclidean distance between p and (4,5,8) is sqrt(1342)
Euclidean distance between p and (-89,1,4) is sqrt(10931)
So the minimum distance is sqrt(725), and the answer is q = (3,-6,7).
For the point p = (8,9,10) in set R:
Euclidean distance between p and (-91,10,5) is sqrt(9827)
Euclidean distance between p and (3,-6,7) is sqrt(259)
Euclidean distance between p and (4,5,8) is sqrt(36)
Euclidean distance between p and (-89,1,4) is sqrt(9509)
So the minimum distance is sqrt(36), and the answer is q = (4,5,8).
For the point p = (3,4,-9) in set R:
Euclidean distance between p and (-91,10,5) is sqrt(9068)
Euclidean distance between p and (3,-6,7) is sqrt(356)
Euclidean distance between p and (4,5,8) is sqrt(291)
Euclidean distance between p and (-89,1,4) is sqrt(8642)
So the minimum distance is sqrt(291), and the answer is q = (4,5,8).
AI: You can treat this as $n$ independent nearest neighbour search problems, namely, for each point in $R$, find its nearest neighbour among the $m$ points in $S$. The Wikipedia article lists lots of methods to do this efficiently. A popular approach is to use a $k$-D tree on the points in $S$. This takes $O(m\log m)$ time to build, and $O(\log m)$ time per nearest neighbour query if the points are randomly distributed. This gives a total time of $O((m+n)\log m)$.
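A sketch of the $k$-D tree approach (my addition; it assumes Python with NumPy and SciPy, with random data standing in for the real point sets):

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    S = rng.uniform(-100, 100, size=(50_000, 3))  # reference set
    R = rng.uniform(-100, 100, size=(50_000, 3))  # query set

    tree = cKDTree(S)          # build once: O(m log m)
    dist, idx = tree.query(R)  # one nearest-neighbour query per point of R
    # S[idx[i]] is the point of S closest to R[i], at distance dist[i]

On inputs of this size this should run in well under a few seconds on typical hardware. |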
H: Is there really no way to integrate $e^{-x^2}$?
Today in my calculus class, we encountered the function $e^{-x^2}$, and I was told that it was not integrable.
I was very surprised. Is there really no way to find the integral of $e^{-x^2}$? Graphing $e^{-x^2}$, it appears as though it should be.
A Wikipedia page on Gaussian Functions states that
$$\int_{-\infty}^{\infty} e^{-x^2} dx = \sqrt{\pi}$$
This is from -infinity to infinity. If the function can be integrated within these bounds, I'm unsure why it can't be integrated with respect to $(a, b)$.
Is there really no way to find the integral of $e^{-x^2}$, or are the methods to finding it found in branches higher than second semester calculus?
AI: That function is integrable. As a matter of fact, any continuous function (on a compact interval) is Riemann integrable (it doesn't even actually have to be continuous, but continuity is enough to guarantee integrability on a compact interval). The antiderivative of $e^{-x^2}$ (up to a constant factor) is called the error function, $$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt\,,$$ and it can't be written in terms of the simple functions you know from calculus, but that is all: $\int_a^b e^{-x^2}\,dx=\frac{\sqrt{\pi}}{2}\left(\operatorname{erf}(b)-\operatorname{erf}(a)\right)$. |
H: Elliptic Curves over Noncommutative rings
It is known that we can define elliptic curves over commutative rings. However can we define an elliptic curve over a noncommutative ring?
This question is considered to some extent in this thesis (Section 4.4) but reaches no conclusion on the matter.
NOTE: By an elliptic curve over a noncommutative ring I do not mean a noncommutative torus.
I asked a similar question on mathoverflow before but got no replies in the affirmative or the negative.
AI: There is no generally accepted or straightforward definition of a curve, or more generally a variety, over non-commutative rings. Noncommutative geometry is an ongoing subject of research, but the basic definitions and constructions are not settled, unlike in the situation of algebraic geometry (which takes place over commutative rings). If you want to know why, you could look at this answer, and related material on non-commutative geometry posted on MO. |
H: Identifying compactness and connectedness of subspace P = $\{(x, y, z)\in \mathbb{R}^3 : x^2+y^2+z^2 = 1 ,~ x^2+y^2\neq 0\}$
I have to check for compactness and connectedness of subspace P = $\{(x, y, z)\in \mathbb{R}^3 : x^2+y^2+z^2 = 1 ,~ x^2+y^2\neq 0\}$
Intuitively it is clear to me that subspace P is not compact as it is not closed.
But I am not sure about connectedness of P. Please help me with this.
Thank you very much.
AI: Let $S=\{\langle x,y,z\rangle:x^2+y^2+z^2=1\}$; this is the surface of the sphere of radius $1$ centred at the origin. What points must be removed from $S$ to get $P$? You have to remove the points $\langle x,y,z\rangle\in S$ such that $x^2+y^2=0$. Which points are these?
If $\langle x,y,z\rangle\in S$, then $x^2+y^2+z^2=1$, so if in addition $x^2+y^2=0$, it must be that $z^2=1$, and hence $z=\pm 1$. That is, the only points of $S$ that are removed to get $P$ are $\langle 0,0,1\rangle$ and $\langle 0,0,-1\rangle$. You could think of these two points as the north and south poles of $S$; $P$ is then everything except these two poles. You should have little trouble showing that $P$ is connected.
For example, notice that if $p$ and $q$ are distinct points of $P$, you can travel from $p$ to $q$ without leaving $P$: just follow a line of constant longitude from $p$ to the equator, then go round the equator until you reach the longitude of $q$, and finally follow a line of constant longitude from the equator to $q$. |
H: Laplace transform problem RL circuit
I'm having trouble solving an RL circuit using the Laplace Transform.
There's just a 2H inductor in series with a 5M ohm resistor. The inductor is initially charged to 1.25A.
So... Here's what I've done:
$$
v_L(t) = v_R(t)\\
2i_L'(t) = 5Mi_L(t)
$$
Now taking the Laplace Transform,
$$
2(I_L(s)s - 1.25) = 5M I_L(s)\\
2.5 = 2I_L(s)s - 5M I_L(s)\\
2.5 = I_L(s)(2s - 5M)\\
I_L(s) = \frac{2.5}{2s-5M}\\
I_L(s) = \frac{1.25}{s-2.5M}
$$
Now taking the inverse Laplace transform,
$$
i_L(t) = 1.25e^{2.5Mt}
$$
This answer is wrong though; there should be a negative exponent:
$$
i_L(t) = 1.25e^{-2.5Mt}
$$
For the life of me, I can't work out where that negative comes from in the maths.
I'd really appreciate some help.
Thanks!
Oh, I should add that the $M$ is Mega (so $10^6$).
AI: The problem is with your conventions.
Either take $v_R = v_L$, in which case $i_R = -i_L$. This would be the usual convention (ie, current 'enters' the device through the $+$ terminal).
Or you could take $i_R = i_L$, in which case you will need to take $v_R = -v_L$.
In your problem above, you have, in effect, taken a negative resistance (or a negative inductance, of course). It might help to draw the circuit graph when conventions are not obvious.
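Concretely (my addition): with a consistent passive sign convention, KVL around the loop reads $L\,i_L'(t) + R\,i_L(t) = 0$, i.e. $2i_L'(t) = -5M\, i_L(t)$. Transforming gives $2(I_L(s)s-1.25)=-5M\, I_L(s)$, hence $I_L(s)=\frac{1.25}{s+2.5M}$ and $i_L(t)=1.25e^{-2.5Mt}$, the expected decaying current. |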
H: How to prove such a function doesn't exist?
I was wondering (or mind-wandering) about a function described as:
For any given $x_1$, $x_2$, $x_m = \frac {x_1+x_2} {2}$,
$$f(x_m) = f(x_1) + a (f(x_2) - f(x_1)) , a \in ]0;1[$$
For example, $f(x_m)$ is $\frac 23$ of the distance between $f(x_1)$ and $f(x_2)$.
I can intuitively see that such a function isn't consistent if $a \neq \frac 1 2$, but how do I prove it mathematically?
AI: Interchange $x_1$ and $x_2$: $f(x_m) = (1-a) f(x_1) + a f(x_2) = (1-a) f(x_2) + a f(x_1)$, so
$(1-2a) (f(x_1) - f(x_2)) = 0$. Thus unless $a = 1/2$, $f(x_1) = f(x_2)$, and $f$ is constant. |
H: Find the remainder when $ 12!^{14!} +1 $ is divided by $13$
Find the remainder when $ 12!^{14!} +1 $ is divided by $13$
I faced this problem in one of my recent exams. It is reminiscent of Wilson's theorem, so I was convinced that $12! \equiv -1 \pmod {13}$; after this I did some tests on the exponent, and it seems like $12!^{n!} +1\equiv 2\pmod {13}\ \forall n \in \mathbb{N}$.
After I came back home I ran some more tests and noticed that if $p$ is prime then $(p-1)!^{n!} +1\equiv 2\pmod {p}\ \forall n \in \mathbb{N}$.
I was wondering if this result is true, if yes how to prove it? If not what is the formal way for solving the mother problem.
AI: By Wilson's Theorem, $12!\equiv -1\pmod{13}$. So for any non-negative even integer $m$, $(12!)^m+1\equiv (-1)^m+1\equiv 2\pmod{13}$. Since $0\le 2\lt 13$, it follows that $2$ is the remainder when $(12!)^m+1$ is divided by $13$.
If $m$ is odd, the same reasoning shows that $(12!)^m+1\equiv 0\pmod{13}$.
And $13$ is not particularly lucky or unlucky. The same result, with the same proof, holds if $13$ is replaced by any odd prime $p$, and $12$ is replaced by $p-1$.
The prime $2$ is slightly different. Whether $m$ is odd or even, $1^m +1=2\equiv 0\pmod{2}$, so the remainder is $0$, not $2$. |
H: Do $\omega^\omega=2^{\aleph_0}=\aleph_1$?
As we know, $2^{\aleph_0}$ is a cardinal number, so it is a limit ordinal number. However, it must not be $2^\omega$, since $2^\omega=\sup\{2^\alpha|\alpha<\omega\}=\omega=\aleph_0<2^{\aleph_0}$, and even not be $\sum_{i = n<\omega}^{0}\omega^i\cdot a_i$ where $\forall i \le n[a_i \in \omega]$. Since $\|\sum_{i = n<\omega}^{0}\omega^i\cdot a_i\| \le \aleph_0$ for all of them.
Besides, $\sup\{\sum_{i = n<\omega}^{0}\omega^i\cdot a_i|\forall i \le n(a_i \in \omega)\}=\omega^\omega$, and $\|\omega^\omega\|=2^{\aleph_0}$ since every element in there can be written as $\sum_{i = n<\omega}^{0}\omega^i\cdot a_i$ where $\forall i \le n[a_i \in \omega]$, and there are actually $\aleph_{0}^{\aleph_0}=2^{\aleph_0}$ many.
Therefore $\omega^\omega$ is the least ordinal number that has cardinality $2^{\aleph_0}$, and all ordinal numbers below it have at most cardinality $\aleph_0$. Hence $\omega^\omega=2^{\aleph_0}=\aleph_1$?
AI: Your notation confuses cardinal and ordinal exponentiation, which are two very different things. If you’re doing cardinal exponentiation, $2^\omega$ is exactly the same thing as $2^{\aleph_0}$, just expressed in a different notation, because $\omega=\aleph_0$. If you’re doing ordinal exponentiation, then as you say, $2^\omega=\omega$.
But if you’re doing ordinal exponentiation, then $$\omega^\omega=\sup_{n\in\omega}\omega^n=\bigcup_{n\in\omega}\omega^n\;,$$ which is a countable union of countable sets and is therefore still countable; it doesn’t begin to reach $\omega_1$. Similarly, still with ordinal exponentiation, $\omega^{\omega^\omega}$ is countable, $\omega^{\omega^{\omega^\omega}}$ is countable, and so on. The limit of these ordinals, known as $\epsilon_0$, is again countable, being the limit of a countable sequence of countable ordinals, and so is smaller than $\omega_1$. (It’s the smallest ordinal $\epsilon$ such that $\omega^\epsilon=\epsilon$.)
Now back to cardinal exponentiation: for that operation you have $2^\omega\le\omega^\omega\le(2^\omega)^\omega=2^{\omega\cdot\omega}=2^\omega$, where $\omega\cdot\omega$ in the exponent is cardinal multiplication, and therefore $2^\omega=\omega^\omega$ by the Cantor-Schröder-Bernstein theorem. The statement that this ordinal is equal to $\omega_1$ is known as the continuum hypothesis; it is both consistent with and independent of the other axioms of set theory. |
H: What is the difference between the terms "classical solutions" and "smooth solutions" in the PDE theory?
What is the difference between the terms "classical solutions" and "smooth solutions" in PDE theory? In particular, what is the difference for evolution equations? If a solution is in $C^k(0,T;H^m(\Omega))$, can I call it a smooth solution?
AI: A smooth solution is infinitely differentiable. A classical solution is a solution which is differentiable as many times as needed if you want to plug the function into the PDE (for example, if the PDE contains the term $u_{xxxx}$, then the fourth derivate $u_{xxxx}$ must exist in order for $u$ to be a classical solution).
In particular, every smooth solution is a solution in the classical sense. But for the unidirectional wave equation $u_x + u_t = 0$, any function of the form $u(x,t)=f(x-t)$ where $f$ is only (say) twice differentiable, is a classical solution which is not smooth. |
H: Evaluate $(\overline{z}_3)^4$ given that $z_3 = -\frac{1}{2}+j\frac{\sqrt{3}}{2}$
Given that:
$$z_3 = -\frac{1}{2}+j\frac{\sqrt{3}}{2}$$
evaluate the following:
$(\overline{z}_3)^4$
Solution:
$$(\overline{z}_3)^4 = [-\frac{1}{2}-j\frac{\sqrt{3}}{2}]^4$$
$$=[1\angle(-\frac{2\pi}{3})]^4$$
$$=1\angle(-\frac{8\pi}{3})$$
$$=1(\cos(-\frac{8\pi}{3})+j\sin(-\frac{8\pi}{3}))$$
$$=-\frac{1}{2}-j\frac{\sqrt{3}}{2}$$
To write $-\frac{1}{2}-j\frac{\sqrt{3}}{2}$ in polar form we first note that $|-\frac{1}{2}-j\frac{\sqrt{3}}{2}|=\sqrt{(\frac{1}{2})^2+(\frac{\sqrt{3}}{2})^2}=1$. With $a=-\frac{1}{2}$ and $b=-\frac{\sqrt{3}}{2}$, we obtain the reference angle $\tan^{-1}(\frac{b}{a})=\tan^{-1}(\sqrt{3})=\frac{\pi}{3}$ radians. However, $-\frac{1}{2}-j\frac{\sqrt{3}}{2}$ is in the third quadrant. Hence to get the argument of $-\frac{1}{2}-j\frac{\sqrt{3}}{2}$, we need to rotate $\frac{\pi}{3}$ (clockwise) by $\pi$ radians to get $\theta=\frac{\pi}{3}-\pi=-\frac{2\pi}{3}$.
My questions:
1. Why do we have to note that $|-\frac{1}{2}-j\frac{\sqrt{3}}{2}|=\sqrt{(\frac{1}{2})^2+(\frac{\sqrt{3}}{2})^2}=1$?
2. How do we know that $-\frac{1}{2}-j\frac{\sqrt{3}}{2}$ is in the third quadrant?
AI: Given: $z = -\frac{1}{2}-\frac{\sqrt{3}}{2}i$
Find: $z^4$
In order to avoid multiplying (FOIL) numbers of the form $a+bi$ three times, we make use of de Moivre's formula, which is: $\left(\cos\theta+i\sin\theta\right)^n = \cos n\theta + i\sin n\theta$.
This means we need to transform the complex number $z$ into trig (polar) form: $r\left(\cos\theta+i\sin\theta\right)$.
As you noted, $r = \left|z\right| = 1$.
In order to solve for $\theta$ we use the fact that $\tan\theta = \frac{\sqrt{3}/2}{1/2} = \sqrt{3}$. This means $\theta = \tan^{-1}\sqrt{3} = \frac{\pi}{3}$. However this is only the reference angle and not the full angle we need.
In order to find the quadrant where our complex number is located we can plot it using the x-axis for the real part and the y-axis for the complex part. So left $\frac{1}{2}$ and down $\frac{\sqrt{3}}{2}$. The point is in the third quadrant.
So the angle is $\theta = \pi+\frac{\pi}{3} = \frac{4\pi}{3}$.
Using de Moivre's formula:
$z = -\frac{1}{2}-\frac{\sqrt{3}}{2}i = 1\left(\cos\theta+i\sin\theta\right)\\
\Rightarrow z^4 = \left(1\left(\cos\theta+i\sin\theta\right)\right)^4\\
\Rightarrow z^4 = 1^4\left(\cos\theta+i\sin\theta\right)^4\\
\Rightarrow z^4 = 1^4\left(\cos 4\theta+i\sin 4\theta\right)\\
\Rightarrow z^4 = \cos 4\theta+i\sin 4\theta$
I included a couple more steps than needed in the last part to show how it all fits together.
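A quick numerical check (my addition, in plain Python, which also writes the imaginary unit as j):

    import math

    z = complex(-1/2, -math.sqrt(3)/2)
    print(z**4)  # (-0.5-0.866...j), i.e. -1/2 - (sqrt(3)/2)j

Up to rounding, $z^4$ equals $z$ itself here, because $z^3=1$. |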
H: Condition on function $f:\mathbb{R}\rightarrow \mathbb{R}$ so that $(a,b)\mapsto | f(a) - f(b)|$ generates a metric on $\mathbb{R}$
Can we impose a condition on a function $f:\mathbb{R}\rightarrow \mathbb{R}$ so that
$(a,b)\mapsto | f(a) - f(b)|$ generates a metric on $\mathbb{R}$?
This question came to my mind when I was working on the problem that $(a,b)\mapsto | e^{a} - e^{b}|$ is a metric on $\mathbb{R}$. I guess this can be done by taking an injective function $f$, but I am not sure whether this will work or not. Certainly, this will help everyone dealing with such problems. I need help with this.
Thank you very much.
AI: Let $f:\Bbb R\to\Bbb R$, and for $x,y\in\Bbb R$ define $d(x,y)=|f(x)-f(y)|$.
First note that for any function $f:\Bbb R\to\Bbb R$ and $x,y,z\in\Bbb R$ we have $$\begin{align*}
|f(x)-f(y)|&=\left|\big(f(x)-f(z)\big)+\big(f(z)-f(y)\big)\right|\\
&\le|f(x)-f(z)|+|f(z)-f(y)|\;,
\end{align*}$$
so $d$ always satisfies the triangle inequality. It’s also clear that $d(x,x)=0$ for all $x\in\Bbb R$ and that $d$ is symmetric no matter what $f$ we use. Thus, $d$ is always a pseudometric on $\Bbb R$. Finally, in order for $d$ to separate points, it is necessary and sufficient that $f$ be injective: that ensures that if $x\ne y$, then $f(x)\ne f(y)$ and hence $d(x,y)\ne 0$. The function $f$ need not be nice in any other way.
For example, you could use the following function:
$$f(x)=\begin{cases}
\tan^{-1}x,&\text{if }x\in\Bbb Q\\
\tan^{-1}(x+1),&\text{if }x\in\Bbb R\setminus\Bbb Q\;.
\end{cases}$$
It’s discontinuous at every point, and it’s not surjective, but it is injective, and that’s all that matters. |
H: Intensity of sound wave question
The question is:
The intensity of sound wave A is 100 times weaker than that of sound wave B. Relative to wave B the sound level of wave A is?
The answer is -2db
I tried doing (10dB)Log(1/100) but that equals -20dB
thanks
AI: Your answer of $-20$ dB is correct.
I suspect that you’re using a definition that gives the sound level in decibels as $10\log\left(\frac{I}{I_0}\right)$, where $I_0$ is the minimum perceptible intensity, or something very similar; if so, the following calculation shows exactly why $-20$ dB is correct.
Suppose that $I_A$ is the intensity of sound wave $A$, and $I_B$ is the intensity of sound wave $B$. Then the loudness of $A$ in decibels is $$L_A=10\log\left(\frac{I_A}{I_0}\right)\;,\tag{1}$$ and that of $B$ is $$L_B=10\log\left(\frac{I_B}{I_0}\right)\;.$$
You’re told that $I_A=\frac1{100}I_B$. Substituting that into $(1)$, we get
$$\begin{align*}
L_A&=10\log\left(\frac{I_A}{I_0}\right)\\
&=10\log\left(\frac{\frac1{100}I_B}{I_0}\right)\\
&=10\log\left(\frac{\frac{I_B}{I_0}}{100}\right)\\
&=10\left(\log\left(\frac{I_B}{I_0}\right)-\log 100\right)\\
&=10\log\left(\frac{I_B}{I_0}\right)-10\cdot2\\
&=L_B-20\;.
\end{align*}$$ |
H: Partial Derivation: $\lim_{(x,y)\to(0,0)}\frac{x^2+\sin^2y}{2x^2+y}$
$$\lim_{(x,y)\to(0,0)}\frac{x^2+\sin^2y}{2x^2+y}$$
The above problem is in my textbook. It's different from the others because I don't know how to deal with the trigonometric term, which in this example is $\sin^2y$.
Thanks :)
AI: HINT: Don’t let the trig function throw you. What happens if you approach the origin along the $x$-axis, with $y=0$? What happens if you approach the origin along the $y$-axis, with $x=0$? (You could also see what happens when you approach along the line $y=x$.)
By the way, a useful thing to remember is that when $x$ is very close to $0$, both $\sin x$ and $\tan x$ are very very close to $x$. Thus, to get a quick idea of what’s probably going on near the origin with this function, temporarily replace $\sin^2 y$ by $y^2$: it’s a good approximation when $y$ is close to $0$.
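Carrying the hint out (my addition): along the $x$-axis ($y=0$) the expression is $\frac{x^2}{2x^2}=\frac12$ for every $x\neq 0$, while along the $y$-axis ($x=0$) it is $\frac{\sin^2y}{y}\approx\frac{y^2}{y}=y\to 0$. Two paths give two different values, so the limit does not exist. |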
H: How to write a combinations formula for this?
I have 8 distinct elements. Each set has 4 pairs from the 8 elements above.
How many such distinct sets are possible?
e.g.
8 elements - 1,2,3,4,5,6,7,8
example set - 1,2; 3,4; 5,6; 7,8 (the ordering of elements within a pair is not important)
AI: I’m assuming that the order of pairs within a set of pairs is also irrelevant.
There are $7$ ways to choose the number to be paired with $1$. Once that pairing has been made, there are $6$ elements left, so there are $5$ ways to decide which gets paired with the smallest of the remaining numbers. After that $4$ elements remain, and there are $3$ ways to decide which gets paired with the smallest of them. The last two elements must form the fourth pair, so the total number of sets of four pairs is $7\cdot5\cdot3=105$.
Added: Alternatively, you can argue as follows. There are $\binom82$ ways to choose a pair, $\binom62$ ways to choose a pair from the remaining $6$ elements, and $\binom42$ ways to choose a pair from the $4$ elements that still remain; the last two elements will of course be the fourth pair. That’s $$\binom82\binom62\binom42=\frac{8!}{2!6!}\cdot\frac{6!}{2!4!}\cdot\frac{4!}{2!2!}=\frac{8!}{2!2!2!2!}=\frac{40320}{16}=2520\;,$$ but it takes into account the order in which the pairs are chosen. Since a given set of $4$ pairs can be chosen in $4!=24$ different orders, we have to divide by $24$ to get $\dfrac{2520}{24}=105$.
By the way, I could have saved myself some of the arithmetic in this approach:
$$\frac{8!}{2!2!2!2!4!}=\frac{8\cdot7\cdot6\cdot5\cdot4\cdot3\cdot2}{2^7\cdot3}=7\cdot5\cdot3\;.$$
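A brute-force check of the count (my addition, in plain Python; feasible because $8!=40320$ is small):

    from itertools import permutations

    def pairings(elems):
        """Count the distinct ways to split elems into unordered pairs."""
        seen = set()
        for p in permutations(elems):
            pairs = frozenset(frozenset(p[i:i + 2]) for i in range(0, len(p), 2))
            seen.add(pairs)
        return len(seen)

    print(pairings(range(1, 9)))  # 105

The brute force agrees with $7\cdot5\cdot3=105$. |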
H: Common eigenvectors of two special commuting matrices
Suppose you have a symmetric real 3x3 Matrix $S$ and an orthogonal matrix $O$ such that $O$ commutes with $S$, i.e. $OS = SO$. Suppose that $O$ is a nontrivial rotation about an axis in direction of $n \in \mathbb{R}^3$, i.e. $On = n$ and $O \neq \mathrm{id}$.
Is it true that $n$ must be an eigenvector of $S$? If so, how to prove it?
If it is not true, are there slight modifications of the suppositions which make it true?
If it is true, is there a simple way to generalize this statement?
AI: Remark that $O$ has just one real eigenvalue: $1;$ and its eigenspace is $\mathbb{R}n.$
From $On=n$ and $SO=OS$ we derive $OSn=SOn=Sn.$
Therefore $Sn$ is in the eigenspace $\mathbb{R}n$ of $O$, i.e., there exists a unique $\lambda\in\mathbb{R}$ s.t. $Sn=\lambda n.$ This means: $n$ is an eigenvector of $S.$ |
H: Eigenvalues of $A+B$
$A,B$ are symmetric matrices, $A$ has eigenvalues in $[a,b]$ and $B$ has eigenvalues in $[c,d]$; we need to show that the eigenvalues of $A+B$ lie in $[a+c,b+d]$. I am really not seeing where to start. What I know: $A,B$ have real eigenvalues, and they are also diagonalizable.
AI: We can use Rayleigh quotients: for a symmetric matrix $M$ and $x\neq 0$, it is defined as $R_M(x):=\frac{\langle Mx,x\rangle}{\lVert x\rVert^2}$. If $\lambda_1\leq\ldots\leq \lambda_n$ are the eigenvalues of $M$, then $$\lambda_1=\min_{x\neq 0}R_M(x)\quad\mbox{and}\quad\lambda_n=\max_{x\neq 0}R_M(x).$$
To see this, use the fact that $M$ is diagonalizable in an orthonormal basis of eigenvectors (which reduces everything to the case where $M$ is diagonal).
Once you have this result, the minimal eigenvalue of $A+B$ is $\geq\min_{x\neq 0}\left(\frac{\langle Ax,x\rangle}{\lVert x\rVert^2}+\frac{\langle Bx,x\rangle}{\lVert x\rVert^2}\right)\geq a+c$. Use a similar argument for the $\max$. |
H: Proving a function satisfies the binomial recurrence relation and that it equals $\binom{n-k+1}{k}$
I have the recurrence relation $g(n,k)=g(n-2,k-1) + g(n-1,k)$ for all $k\geq1$
with the boundary conditions $g(n,k)=0$ if $n<2k-1$ and $g(2k-1,k)=1$
What I'm trying to do is define a new function by the equation $ h(p,q) = g(n,k)$ where $n=p+q-1$ and $k=q$ and show that $ h(p,q)$ satisfies the binomial recurrence relation and its boundary conditions and hence deduce that $g(n,k)=\binom{n-k+1}{k}$
My workings are:
We have the binomial recurrence relation $$\binom{p}{q}=\binom{p-1}{q-1}+\binom{p-1}{q}$$
where$$\begin{align*} \\&\binom{p}{q}=0\text{ when }q>p \\&\binom{p}{q}=1\text{ when }p=q \end{align*}$$ Now if $$\begin{align*} \\&p=n-k+1 \\&q=k \end{align*}$$
Then $$\begin{align*} \\&\binom{n-k+1}{k}=0\mbox{ when }n<2k-1\text{ (the first boundary condition)}\end{align*}$$ and $$\begin{align*} \\&\binom{n-k+1}{k}=1\text{ when }n=2k-1 \text{ (the second boundary condition)}\end{align*}$$
Also $$\begin{align*}\\&\binom{n-k+1}{k}=\binom{n-k}{k-1}+\binom{n-k}{k}\end{align*}\tag{a}$$
Now from the recurrence relation $g(n,k)$ we have (Is this true???): $$\binom{n}{k}=\binom{n-2}{k-1}+\binom{n-1}{k}$$ Which therefore equals: $$ \binom{n-k+1}{k}=\binom{n-k-1}{k-1}+\binom{n-k}{k}\tag{b}
$$
Setting $(a)$ equal to $(b)$ we now get: $$\begin{align*}\binom{n-k}{k-1}+\binom{n-k}{k}&=\binom{n-k-1}{k-1}+\binom{n-k}{k}\\ \binom{n-k}{k-1}&=\binom{n-k-1}{k-1}\end{align*}$$
However, I am not sure where to go from here. I'm rather dubious as to the veracity of this identity as I have been unable to prove that it is true. Am I on the right track? If so how do I go about proving this identity? If I'm off beam any ideas where I went wrong?
Thanks in advance.
AI: The basic problem is that you’re starting at the wrong end: you should be working with $h$.
Let’s restate the problem a little more clearly. You’re to define $h(p,q)=g(p+q-1,q)$ and prove that the function $h$ satisfies the binomial recurrence, which, stated in terms of $h$, is $$h(p,q)=h(p-1,q-1)+h(p-1,q)\tag{1}\;.$$ Let’s see what happens when we expand the righthand side of $(1)$ in terms of $g$:
$$\begin{align*}
h(p-1,q-1)+h(p-1,q)&=g(p+q-3,q-1)+g(p+q-2,q)\\
&=g(p+q-1,q)\\
&=h(p,q)\;,
\end{align*}$$
which is exactly what we wanted. The only step that might be puzzling for a moment is the second, which uses the fact that $g(n,k)=g(n-2,k-1)+g(n-1,k)$: just let $n=p+q-1$ and $k=q$.
Next, you’re to check that $h$ satisfies the same boundary conditions as the binomial coefficients. In other words, you’re to show that $h(p,q)=0$ when $q>p$, and $h(p,q)=1$ when $p=q$. When $p=q$, for instance, $h(p,q)=h(p,p)=g(2p-1,p)$, which is what? The case $q>p$ is equally easy.
Once you finish that, you’ll have shown that $h(p,q)=\binom{p}q$: they satisfy the same boundary conditions and the same recurrence, so by an easy induction they’re always equal.
Your final task is to use this to show that $g(n,k)=\binom{n-k+1}k$; this will be easy once you express $g(n,k)$ in terms of $h$. That is, just as $h(p,q)=g(p+q-1,q)$, you can express $g(n,k)$ as $h(\text{thing}_1,\text{thing}_2)$ and therefore as $\binom{\text{thing}_1}{\text{thing}_2}$. |
H: NP-hardness reduction
Although I have known the notion of polynomial-time reduction for many years, I am currently confused about the following problem.
In a reduction from 3-SAT to 3-Coloring, one constructs (in polytime) a graph $G_F$ out of a given 3-CNF formula $F$, s.t. $F$ is satisfiable if and only if $G_F$ is 3 colorable.
In order to show the above "if-part" a typical proof goes like:
" If the graph is 3-colorable, then we can extract a satisfying
assignment from any 3-coloring..." (This sentence is copied form Jeff Ericksons notes, so it must be correct :-) )
Even if we know that a graph is 3-colorable, it is NP-hard to find a proper 3-coloring. So, given $P \neq NP$, this reduction cannot be done in polynomial time.
I know that I am making some mistake in the argument above, but I can't currently see where I am wrong.
AI: $P$ and $NP$ are classes of decision problems. You only need to be able to tell whether a given formula $F$ is satisfiable or a graph $G$ is 3-colourable, not to actually produce the satisfying assignment or the colouring (the "witness").
The reduction itself is not required to find a 3-colouring of the graph $G_F$. You are merely constructing $G_F$ from $F$ in polynomial time, and that's it. The point of the reduction is that if you then had a hypothetical algorithm that could check whether $G_F$ is 3-colourable in polynomial time, you would then immediately know whether $F$ is satisfiable. (Assuming $P \ne NP$, it is impossible to do the latter in polynomial time, so the former must be impossible too.)
That said, it is often easy to convert a witness of the constructed instance into a witness of the original problem, and that is the case here. If you can find a 3-colouring of $G_F$, the vertex corresponding to each variable must be coloured either $\mathrm{true}$ or $\mathrm{false}$. That gives you a satisfying assignment for $F$. |
H: Prove that the product of four consecutive positive integers plus one is a perfect square
I need to prove the following, but I am not able to do it. This is not homework, nor something related to research, but rather something that came up in preparation for an exam.
If $n = 1 + m$, where $m$ is the product of four consecutive positive
integers, prove that $n$ is a perfect square.
Now since $m = p(p+1)(p+2)(p+3)$;
$p = 0, n = 1$ - Perfect Square
$p = 1, n = 25$ - Perfect Square
$p = 2, n = 121$ - Perfect Square
Is there any way to prove the above without induction? My approach was to expand $m = p(p+1)(p+2)(p+3)$ into a 4th degree equation, and then try proving that $n = m + 1$ is a perfect square, but I wasn't able to do it. Any idea if it is possible?
AI: Your technique should have worked, but if you don't know which expansions to do first you can get yourself in a tangle of algebra and make silly mistakes that bring the whole thing crashing down.
The way I reasoned was, well, I have four numbers multiplied together, and I want it to be two numbers of the same size multiplied together. So I'll try multiplying the big one with the small one, and the two middle ones.
$$p(p+1)(p+2)(p+3) + 1 = (p^2 + 3p)(p^2 + 3p + 2) + 1$$
Now those terms are nearly the same. How can we force them together? I'm going to use the basic but sometimes-overlooked fact that $xy = (x+1)y - y$, and likewise $x(y + 1) = xy + x$.
$$\begin{align*}
(p^2 + 3p)(p^2 + 3p + 2) + 1 &= (p^2 + 3p + 1)(p^2 + 3p + 2) - (p^2 + 3p + 2) + 1 \\
&= (p^2 + 3p + 1)(p^2 + 3p + 1) + (p^2 + 3p + 1) - (p^2 + 3p + 2) + 1 \\
&= (p^2 + 3p + 1)^2
\end{align*}$$
Tada. |
H: The language that contains no proper prefixes of all words of a regular language is regular
Let $L$ be a regular language. I need to prove that the language
$$M_L = \{w \in L \; | \forall x \in L \; \forall y \in \Sigma^+ : w \neq xy \}$$
that contains all words of $L$ that do not have a proper prefix in $L$, is regular.
As an example I thought about the language $L_1 = \{12,34,56,3456\}$ where $M_{L_1} = \{12,34,56\}$. I played around with complement of the automaton of the language L and the intersection of this automaton with the one of the language L (no complement) and some modifications, but am stuck right now.
Could you please help me to go on?
Thanks in advance!
AI: The condition that no proper prefix is in $L$ means that the input should be rejected if you encounter an accepting state before the word is completely read. So you could use a FSM for $L$ with the modification that from an accepting state all transitions are redirected to a non-accepting error state.
Edit: Of course, one has to assume w.l.o.g. that the FSM that recognizes $L$ is deterministic and has no $\lambda$-transitions.
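A sketch of that construction (my addition; the DFA representation, a dict of transitions plus a start state and a set of accepting states, is hypothetical):

    def prefix_free(delta, start, accepting):
        """Rebuild a complete DFA so that it accepts exactly the words of L
        with no proper prefix in L: every transition leaving an accepting
        state is redirected to a non-accepting dead state."""
        dead = "DEAD"  # assumed not to clash with existing state names
        new_delta = {}
        for (state, symbol), target in delta.items():
            new_delta[(state, symbol)] = dead if state in accepting else target
        for symbol in {s for (_, s) in delta}:  # dead state loops to itself
            new_delta[(dead, symbol)] = dead
        return new_delta, start, accepting

The accepting states themselves are untouched; only computations that would continue past a match are killed. |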
H: Counting matrices over $\mathbb{Z}/2\mathbb{Z}$ with conditions on rows and columns
I want to solve the following seemingly combinatorial problem, but I don't know where to start.
How many matrices in $\mathrm{Mat}_{M,N}(\mathbb{Z}_2)$ are there such that the sum of entries in each row and the sum of entries in each column is zero? More precisely find cardinality of the set
$$
\left\{A\in\mathrm{Mat}_{M,N}(\mathbb{Z}/2\mathbb{Z}): \forall j\in\{1,\ldots,N\}\quad \sum\limits_{k=1}^M A_{kj}=0,\quad \forall i\in\{1,\ldots,M\}\quad \sum\limits_{l=1}^N A_{il}=0 \right\}
$$
Thanks for your help.
AI: Take any $(M-1)\times (N-1)$ matrix of $0$'s and/or $1$'s. We can in a unique way add a column of $0$'s and/or $1$'s at the right and bottom to make the sums of rows and columns congruent to $0$. In coding theory they would be called check bits. Note that the entry at bottom right is uniquely determined, for the sum of row sums must, by basic accounting principles, be equal to the sum of column sums.
So there are just as many restricted $M\times N$ matrices as there are unrestricted $(M-1)\times (N-1)$ matrices.
The number of restricted $M\times N$ matrices is therefore $2^{(M-1)(N-1)}$.
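A small brute-force confirmation (my addition, in plain Python; only for small $M,N$, since it enumerates all $2^{MN}$ matrices):

    from itertools import product

    def count_even(M, N):
        """Count 0/1 M-by-N matrices all of whose row and column sums are even."""
        total = 0
        for bits in product((0, 1), repeat=M * N):
            rows = [bits[i * N:(i + 1) * N] for i in range(M)]
            if all(sum(r) % 2 == 0 for r in rows) and \
               all(sum(r[j] for r in rows) % 2 == 0 for j in range(N)):
                total += 1
        return total

    print(count_even(3, 4), 2 ** ((3 - 1) * (4 - 1)))  # 64 64

This matches $2^{(M-1)(N-1)}$. |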
H: Computing probabilities involving committees
A committee consisting of 6 members is randomly selected from 25 students,
5 teachers, and 10 parents. I wish to find the following:
(i) the probability of having no teacher on the committee
(ii) the probability of having neither students nor parents on the committee.
(iii) the probability of having an all parents committee given that the first two selected members are parents.
(iv) the probability of having at least a teacher or at least a student on the
committee
For (i), if no teacher is to be on the committee, then there are 35 people to choose from. So the probability is $$\frac{\binom{35}{6}}{\binom{40}{6}}.$$
For (ii) If we are to have neither parents or students, then the committee cannot be formed, since we would only have 5 teachers to choose from. so the probability is $0$.
For (iii) I got the following: $$ \frac{\binom{10}{6}/\binom{40}{6}}{\binom{10}{2}\cdot \binom{30}{4}/\binom{40}{6}}=\frac{\binom{10}{6}}{\binom{10}{2}\cdot \binom{30}{4}}.$$
Is (iv) equivalent to finding 1 minus the probability of having neither teacher nor student on the committee? If so, then I would get $$1-\frac{\binom{10}{6}}{\binom{40}{6}}.$$
Is what I have done okay? Are there better ways of doing them?
AI: You’ve done fine with (i),(ii), and (iv), but (iii) isn’t right. Having picked two parents, you’re now in the position of choosing a $4$-person committee from a group consisting of $25$ students, $5$ teachers, and $8$ parents. The probability that you pick $4$ parents to fill out the $6$-person committee is $$\frac{\binom84}{\binom{38}4}\;.$$ |
H: Why is the metric defined on $\mathbb{R}^2\times \mathbb{R}^2$ by $(a,b)\mapsto | a_1 - b_1| +| a_2 - b_2|$ known as the taxicab metric?
$\mathbb{R}^2$ with the function defined on $\mathbb{R}^2\times \mathbb{R}^2$ by $(a,b)\mapsto | a_1 - b_1| +| a_2 - b_2|$, where $a = (a_1, a_2)$ and $b = (b_1, b_2)$, is a metric space. I wonder why this is known as the taxicab metric. Could anyone explain it to me?
Thank you very much
AI: Imagine a big city laid out so that the streets all run either north-south or east-west and are spaced at regular intervals. In order to go from one point to another by taxicab, you must travel along the streets, so you can only travel north-south or east-west. The distance travelled is therefore the sum of the north-south separation and the east-west separation between you and your destination. The taxicab metric is the distance as the cab would travel instead of the straight-line, or Euclidean distance distance. (The latter is sometimes described in English as as the crow flies, so we could call the Euclidean metric the crow metric, but no one does!) |
H: Where does complex exponential come from?
The complex exponential function is defined as : $$e^{ix} = \cos x + i\sin x$$ It shares most of its properties with real exponential and it allows a lot of trigonometric calculations such as de Moivre's formula : $$(\cos x+i\sin x)^n = \cos{nx}+i\sin{nx}$$
But where does this definition come from and why does it work ?
AI: Another way to look at it is to view the exponential and trigonometric functions as defined by a power series:
$$\exp(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$
$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$$
$$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$$
This has the advantage that the $x$ can be anything, as long as we know how to multiply two of them, add two of them together, and divide them by a real number. In particular, it makes sense for both real and complex numbers.
Now you can put $ix$ into the definitions in place of $x$, and compute:
$$\begin{align}
\exp(ix)
& = 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \cdots \\
& = \left( 1 - \frac{x^2}{2!} + \cdots\right) + i \left( x - \frac{x^3}{3!} + \cdots\right) \\
& = \cos x + i \sin x
\end{align}$$
so the formula you quoted is seen to be a theorem rather than a definition. If we now assume that the familiar law
$$\exp(a+b) = \exp(a) \exp(b)$$
holds for arbitrary $a$ and $b$ (it does, and you can prove it from the power series definition) then we now have a way to compute the exponential of any complex number:
$$\exp(x+iy) = \exp(x) (\cos y + i\sin y)$$
where $x$ and $y$ are real. |
H: Is there any sense in zero-padding a matrix to make it $n\times n$ and find its eigenvalues?
I am debugging my Kalman filter, and the Jacobian matrix of partial derivatives of h (the measurement function) with respect to x (the state) is not n×n; it is 13×16.
$\displaystyle \quad\ \bf H_{[i,j]}$ = $\bf \frac{\partial h_{[i]}}{\partial x_{[j]}}(\tilde x_k,0)$
I would like to extract as much information as possible, and one of the things I thought of was eigenvalues, but the matrix is not square. I wonder if I can zero-pad it to make it square.
Does that make sense?
AI: That depends on what you are doing (I have no idea about your application). But a priori filling with zeros does imo make sense. A $n\times m$ matrix can be understood as a linear map $\mathbb R^m\to \mathbb R^n$. Here you have $n=13<m=16$. You have two options now which eventually should give the same result.
First you realise that you can perform row and column operations to reduce your matrix to a non-trivial $13\times 13$ matrix and a $13\times 3$ block of zeros. This means that you split the bigger $\mathbb R^m$ into two subspaces, one of dimension $3$ which is mapped to $0$ and one of dimension $13$ which still maps surjectively on the image of the original linear map. All information is then concentrated in the $13\times 13$ block and this should give you some information.
Or you fill the matrix with zeros to get a $16\times 16$ matrix. This corresponds to changing your target space where the additional dimensions are not hit by your map. The image will then still live in a $13$-dimensional subspace of $\mathbb R^{16}$. But diagonalising will give you a nontrivial $13\times 13$ block in the $16\times 16$ matrix again. This will look like the $13\times 13$ block as above.
Edit: Just a comment on a possible application (I still don't get it, and won't without spending a lot more time on it, which I won't ;P). All row and column operations perform base changes on the source or the target. This somehow assumes that all dimensions have equal rights. So if all dimensions correspond to positions, for example, then changing the base will just correspond to a coordinate change. If some dimensions correspond to positions and some, say, to velocities, you will end up with mixed expressions after changing your base. This might or might not make sense. |
H: Can we get uncountable ordinal numbers through constructive method?
As we know, $2^{\aleph_0}$ is a limit ordinal number, however, it is greater than $\omega$, $\omega+\omega$, $\omega \cdot \omega$, $\omega^\omega$, $\omega\uparrow\uparrow\omega$, and even $\omega \uparrow^{\omega} \omega$.
My question is: can we get uncountable ordinal numbers using only natural numbers, $\omega$, and ordinal hyperoperations, through a constructive method?
AI: There is a trivial way, namely $\omega$ and $1$ and the set theoretical operation "the next initial ordinal" which gives us $\omega_1$.
For other operations, we can do the following claim:
Suppose $\ast$ is an ordinal operation such that $\alpha\ast\beta$ is countable whenever $\alpha,\beta$ are countable. Then for every countable ordinal $\gamma$, the $\gamma$-th iteration $\alpha\ast^\gamma\beta$ is countable.
We need to define exactly what iteration means:
$\alpha\ast^0\beta=\alpha$, and $\alpha\ast^{\gamma+1}\beta=(\alpha\ast^\gamma\beta)\ast\beta$;
If $\delta$ is a limit ordinal then $\alpha\ast^\delta\beta=\sup\{\alpha\ast^\gamma\beta\mid\gamma\lt\delta\}$.
The proof is by induction over the iterations:
We already assume that if $\alpha$ and $\beta$ are countable then $\alpha\ast\beta$ is countable.
Suppose that for iterations of length $\gamma$ we know that $\alpha\ast^\gamma\beta$ is countable, then $\alpha\ast^{\gamma+1}\beta$ is countable since we apply $\ast$ to two countable ordinals.
If $\delta$ is a limit ordinal, and $\alpha\ast^\gamma\beta$ is countable for all $\gamma<\delta$ then we have that $\alpha\ast^\delta\beta$ is a countable limit of countable ordinals and therefore countable. |
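For a concrete illustration (my addition): taking $\ast$ to be ordinal exponentiation, which does send pairs of countable ordinals to countable ordinals, the claim shows that $\omega\uparrow\uparrow\omega=\sup\{\omega,\ \omega^\omega,\ \omega^{\omega^\omega},\dots\}=\epsilon_0$ is countable; iterating the claim shows the same for every ordinal built from natural numbers and $\omega$ by hyperoperations. So no such construction can ever reach $\omega_1$.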
H: Closed-form for eigenvectors of rotation matrix
For matrices that are elements of $SO(3)$ is there a formula for the eigenvectors corresponding to the eigenvalue $1$ in terms of the entries of the matrix?
AI: Let $A \in SO(3)$. The matrix $A-A^T$ is skew-symmetric, hence of the form
$$
\begin{pmatrix}
0 & a & b \\
-a & 0 & c \\
-b & -c & 0
\end{pmatrix}
$$
for some $a$, $b$, $c$. Then the vector $(-c,b,-a)^T$ is an eigenvector of $A$ with eigenvalue $1$.
This works since if $F$ is a rotation by the angle $\phi$ around the unit vector $n$, then the transformation $F-F^{-1}$ geometrically equals $2\sin\phi$ times the operation of taking the cross product with $n$: by Rodrigues' formula $F = I + \sin\phi\, K + (1-\cos\phi)K^2$, where $Kv = n\times v$, and $F^{-1}=F^T$, so $F-F^{-1}=2\sin\phi\, K$.
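A numerical sanity check (my addition; Rodrigues' formula is used to build a test rotation):

import numpy as np

def rotation(axis, phi):
    # Rodrigues' formula for the rotation by phi about the unit vector n
    n = axis / np.linalg.norm(axis)
    K = np.array([[0, -n[2], n[1]],
                  [n[2], 0, -n[0]],
                  [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(phi) * K + (1 - np.cos(phi)) * (K @ K)

A = rotation(np.array([1.0, 2.0, 2.0]), 0.7)
S = A - A.T
a, b, c = S[0, 1], S[0, 2], S[1, 2]
v = np.array([-c, b, -a])
print(np.allclose(A @ v, v))        # True: v is an eigenvector for eigenvalue 1
print(v / np.linalg.norm(v))        # proportional to the rotation axis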
H: Limit on a topological vector space
in the Wikipedia article on the Gâteaux derivative, the limit of a function between two topological vector spaces is taken. How is the limit of a function defined on a topological space? I find articles on nets and filters for the corresponding notions on topological spaces, but those are limits for discrete sequences, right?
Thank you,
JD
AI: For the sake of having an answer I am posting my above comment as an answer - since the OP seems to be satisfied with this. (Judging by his comment.)
The limit of a function between two topological spaces $X$ and $Y$ at a
point $p\in X$ is defined quite similarly to the case of real
functions; see Wikipedia.
In the definition of Gateaux derivative, you use a function from
$\mathbb R$ to $Y$. |
H: Weak-* convergence on subinterval
If $U_{n} \to U$ weakly* in $BV[0,1]$ is it true that $U_{n}\mid_{[0,\frac{1}{2}]} \to U\mid_{[0,\frac{1}{2}]}$ weakly* in $BV[0,\frac{1}{2}]$?
I'll give some explanation. Let $U_{n} \in BV[0,1]$ and suppose $U_n \to U$ weakly*, i.e. for any $y \in C[0,1]$
$$
\int\limits_{0}^{1} y(t)dU_{n}(t) \to \int\limits_{0}^{1}y(t)dU(t)
$$
Restrictions $U_{n} \mid _{[0,\frac{1}{2}]}, U \mid_{[0,\frac{1}{2}]} \in BV[0,\frac{1}{2}]$. For any $y \in C[0,\frac{1}{2}]$ such that $y(\frac{1}{2}) = 0$ we have
$$
\int\limits_{0}^{\frac{1}{2}} y(t)dU_{n}(t) \to \int\limits_{0}^{\frac{1}{2}}y(t)dU(t)
$$
But is it true for any $y \in C[0,\frac{1}{2}]$? In other words, is it true that $U_{n}\mid_{[0,\frac{1}{2}]} \to U\mid_{[0,\frac{1}{2}]}$ weakly*?
AI: Not if I understand your conventions correctly. Consider $$U_n=\chi_{(1/2-1/n,1]},\quad U=\chi_{(1/2,1]}.$$ Then $U_n\to U$ weakly* on $[0,1]$, but $\int_0^{1/2}y\,dU_n=y(1/2-1/n)\to y(1/2)$, while $U|_{[0,1/2]}=0$. In general, you have to be very careful about jump discontinuities at interval endpoints.
Edit: More than that, you have to worry about concentrations of variation. Here is another example: Let $U_n(x)=0$ for $x\in\{0,\frac12-\frac1n,\frac12+\frac1n,1\}$, $U_n(\frac12)=1$, and interpolate linearly between the points named. Then $U_n\stackrel*\rightharpoonup0$, but $U_n|_{[0,1/2]}\stackrel*\rightharpoonup\chi_{\{1/2\}}$ (where $\stackrel*\rightharpoonup$ is weak* convergence).
If you wish for a positive result, you have to make more stringent assumptions, such as
$$
\lim_{\delta\downarrow0}\sup_n \operatorname{TV}(U_n|_{[1/2-\delta,1/2+\delta]})=0.
$$
(I think that should be sufficient.) |
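A quick numerical illustration of the tent-function example above (my addition; the measure $dU_n$ is approximated by the density $U_n'$ on a fine grid):

import numpy as np

def int_y_dUn(y, n, a, b, pts=400001):
    # approximates \int_a^b y(t) dU_n(t) via the density U_n'(t)
    t = np.linspace(a, b, pts)
    slope = np.where((t > 0.5 - 1.0/n) & (t < 0.5), float(n),
            np.where((t > 0.5) & (t < 0.5 + 1.0/n), -float(n), 0.0))
    return float(np.sum(y(t) * slope) * (t[1] - t[0]))

for n in (10, 100, 1000):
    print(n, int_y_dUn(np.cos, n, 0.0, 1.0), int_y_dUn(np.cos, n, 0.0, 0.5))
# first column -> 0, second column -> cos(1/2): the variation concentrates at 1/2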
H: Parabolic PDE local and global existence
If you have a local solution to a parabolic PDE (say we know it exists (weakly anyway) from time 0 to T), then if the solution is bounded in an appropriate way (in which norms?) then we can apparently extend the solution globally. Can someone refer me to these results or explain this, please?
I also heard that the solution $u(t)$ converges to a $C^\infty$ function as $t \to T$. Why is that? I thought this had something to do with Sobolev embeddings but I can't see in general how this latter statement can be true (maybe it's just for this case).
Thanks.
AI: Often you apply a Banach fixed point argument to prove local existence. Then, for global existence, a sufficient condition is to show that the norm of the space you used does not blow up in finite time, assuming a global smooth solution. This is in practically any book that discusses global existence.
As for your second question, perhaps you mean convergence as $t\to\infty$ of a global solution. In the parabolic case this has to do with dissipation, but for instance if you have an external forcing it is generally not true. |
H: Visualize a difference equation with Matlab
I have a difference equation for a Single Pole Infinite Impulse Response Filter, defined on a discrete time-series:
$$y[n]-(1-\alpha)\,y[n-1]=\alpha\,x[n]$$
The square brackets refer to a position $n$ within the series. I'm looking for a visual way to represent this in order to understand how this equation behaves. I have tried Wolfram Alpha, MATLAB... Can anyone give me a pointer on how I can make MATLAB (e.g.) show me the plot for this function? The use case is a DC offset filter that uses this SPIIR filter with $\alpha=0.0004$. So it's mostly DSP related.
Best,
Marius
AI: Let's do a quick rewrite into more 'mathematical' notation:
$$y_n = \alpha x_n + (1-\alpha) y_{n-1}$$
By repeated substitution you can see that this is equal to:
$$
\begin{align}
y_n & = \alpha \left( x_n + (1-\alpha) x_{n-1} + (1-\alpha)^2 x_{n-2} + \cdots \right) \\
& = \alpha \sum_{k=0}^\infty (1-\alpha)^k x_{n-k}
\end{align}$$
So $y$ is an infinite sum of past values of $x$ (which is why it's called an infinite impulse response filter). One way to visualize this is to look at the weights
$$w_k = \alpha(1-\alpha)^k$$
as a function of $k$, which you can achieve in Matlab by
alpha = 0.2;                  % smoothing parameter of the filter
k = 0:20;                     % lag index
w = alpha .* (1-alpha).^k;    % impulse response weights w_k
bar(k,w)                      % the weights decay geometrically in k
another way is to generate some data x and calculate y from it, and compare the two:
x = randn(30,1);              % some random input signal
y = zeros(30,1);
y(1) = x(1);                  % one possible choice of initial condition
for k = 2:30
    y(k) = alpha * x(k) + (1-alpha) * y(k-1);   % the recursion itself
end
plot(1:30, [x y])             % compare the input with the filtered output
legend({'x','y'})
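(A side note of mine, in case it helps: the loop above is the same recursion that MATLAB's built-in `y = filter(alpha, [1, -(1-alpha)], x)` implements, up to the initial condition — the loop starts from y(1) = x(1), while filter starts from a zero initial state.)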
Is this what you meant by 'visualize' the equation? |
H: How to prove that the sum and product of two algebraic numbers is algebraic?
Suppose $E/F$ is a field extension and $\alpha, \beta \in E$ are algebraic over $F$. Then it is not too hard to see that when $\alpha$ is nonzero, $1/\alpha$ is also algebraic. If $a_0 + a_1\alpha + \cdots + a_n \alpha^n = 0$, then dividing by $\alpha^{n}$ gives $$a_0\frac{1}{\alpha^n} + a_1\frac{1}{\alpha^{n-1}} + \cdots + a_n = 0.$$
Is there a similar elementary way to show that $\alpha + \beta$ and $\alpha \beta$ are also algebraic (i.e. finding an explicit formula for a polynomial that has $\alpha + \beta$ or $\alpha\beta$ as its root)?
The only proof I know for this fact is the one where you show that $F(\alpha, \beta) / F$ is a finite field extension and thus an algebraic extension.
AI: The relevant construction is the Resultant of two polynomials. If $x$ and $y$ are algebraic and $P(x) = Q(y) = 0$ and $\deg Q=n$ then $z=x+y$ is a root of the resultant of $P(x)$ and $Q(z-x)$ (where we take this resultant by regarding $Q$ as a polynomial in only $x$) and $t=xy$ is a root of the resultant of $P(x)$ and $x^n Q(t/x).$ |
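If you want to see this in action (my addition — a small Sympy computation, assuming its top-level resultant helper, with $P(x)=x^2-2$ and $Q(y)=y^2-3$, so $\deg Q = n = 2$):

from sympy import symbols, resultant, expand

x, z, t = symbols('x z t')
P = x**2 - 2                 # root sqrt(2)
Q = lambda w: w**2 - 3       # root sqrt(3)

print(expand(resultant(P, expand(Q(z - x)), x)))        # z**4 - 10*z**2 + 1, root sqrt(2)+sqrt(3)
print(expand(resultant(P, expand(x**2 * Q(t / x)), x))) # t**4 - 12*t**2 + 36, root sqrt(6)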
H: Principal ideal ring analytic functions
Could someone sketch a proof and explain in words why the set of analytic functions on $\mathbb{C}$ does not form a principal ideal ring?
Thank you!
AI: While I am not very familiar with analytic functions themselves, I would imagine that one could construct an infinite ascending chain of ideals, showing that the ring is not Noetherian (and hence, definitely not principal.)
So, for example, $I_j=\{f \mid f(n)=0 \text{ for all integers } n\geq j\}$ would probably constitute an infinite ascending chain of ideals (as it does in the ring of continuous functions).
Added: Experts have added useful examples in the comments to show that there is such an ascending sequence:
Ragib Zaman: $f_j(z)=\prod_{k=j+1}^{\infty} (1-z^2/k^2)$.
Georges Elencwajg: $f_j(z)=\dfrac{\sin(\pi z)}{z(z-1)\cdots(z-j)}$
Thank you for contributing these: I too have learned something, now :) |
H: Is this computation of the tensor product correct?
I'm reading the proof of the existence of the tensor product. If $M,N$ are two $R$-modules then we can construct the tensor product $T$ as the quotient $C/D$ where $C$ is the free module over $M \times N$ and $D$ is the submodule generated by the set of all elements in $C$ of the form
$$(m+m^\prime, n) - (m,n) - (m^\prime, n)$$
$$ (m, n+n^\prime) -(m,n) - (m,n^\prime) $$
$$ (am, n) - a(m,n)$$
$$ (m,an) - a(m,n)$$
I use $(m,n)$ to denote the element $e_{(m,n)} \in F(M\times N)$. Since $F(S) \cong \bigoplus_{s \in S} R$ I picture these elements as $e_{(m,0)} = (0, \dots, 0,1, 0, \dots)$ where the $1$ here is at position $m$ and $e_{(m,n)}$ the sequence with $1$ at position $m \cdot |M| + n$ and so forth.
Is this correct so far?
Now I wanted to see what this looks like. So I computed the tensor product of $M = N = \mathbb Z / 2 \mathbb Z$ over $R=\mathbb Z$. For $C$ I get that $C \cong \mathbb Z^4$. Then I computed all the elements above and noticed that $D \cong \langle \{(1,0,0,0), (0,1,0,0), (0,0,1,0)\} \rangle$. Hence
$$M \otimes N = \mathbb Z / 2 \mathbb Z \otimes_{\mathbb Z} \mathbb Z / 2 \mathbb Z = \langle (0,0,0,1) \rangle \cong \mathbb Z$$
Is this correct?
AI: I will add to the comment I gave beneath your question. I know you said that you were interested in the specific construction, but perhaps later you will find this answer useful too.
We know that for any ideal $I \lhd R$, and any $R$-module $M$, we have the isomorphism:
$$ R/I \otimes_R M \cong M/IM$$
To see this, take the exact sequence $$0 \to I \to R \to R/I \to 0$$
and tensor by $M$ (recalling that tensoring by $M$ preserves right-exact sequences), then use the standard isomorphism theorem for modules.
So in your case it follows that
$$ \mathbb{Z}/2 \mathbb Z \otimes_\mathbb{Z} \mathbb{Z}/2 \mathbb Z \cong \frac{\mathbb{Z}/2\mathbb Z }{ \langle 2 \rangle \mathbb{Z} / 2\mathbb{Z}} \cong \mathbb{Z} / 2 \mathbb{Z} $$
Since the bottom part of the quotient is just $0$. |
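As an aside (my addition, to reconcile this with your direct computation): $D$ also contains $e_{(0,1)} - 2e_{(1,1)}$, coming from $(1+1,1)-(1,1)-(1,1)$ with $1+1=0$ in $\mathbb Z/2\mathbb Z$, so $2e_{(1,1)}\in D$ — this is the relation missed above. One can check the quotient with a Smith normal form computation, say in Sympy (one can also check that these four elements generate $D$):

from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# rows: generators of D in the basis e_(0,0), e_(0,1), e_(1,0), e_(1,1)
D = Matrix([[1, 0, 0, 0],    # (0*1, 0) - 0*(1, 0) = e_(0,0)
            [0, 1, 0, 0],    # e_(0,1), similarly
            [0, 0, 1, 0],    # e_(1,0), similarly
            [0, 1, 0, -2]])  # (1+1, 1) - (1,1) - (1,1), using 1+1 = 0 in Z/2Z
print(smith_normal_form(D, domain=ZZ))   # diagonal 1, 1, 1, 2 -> quotient Z/2Z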
H: Why (finite) Blaschke products are actually rational fractions?
I have found in several books the following affirmation :
Let $f: \Delta \rightarrow \Delta$ be a non constant holomorphic function that extends continuously to $\overline{\Delta}$, $\Delta$ being the open unit disk. Then $f$ is a finite Blaschke product, i.e. of the form
$$B(z) = e^{i \theta} \prod_{k=1}^d \frac{z- a_k}{1-\overline{a_k} z}$$
Now it is not hard to check that those rational fractions do indeed send $\Delta$ onto $\Delta$, but I do not know where to find a proof that they are the only ones.
Where can I find that proof in the literature ? I am not interested in the infinite product case, only finite.
(Of course, if anyone is kind enough to post a proof, that would do fine as well =)
AI: I think you have to assume that $f$ is non-constant and that $f$ maps the unit circle into itself. In this case, the result follows by the maximum modulus principle.
Indeed, first note that $f$ has a finite number of zeros in $\mathbb{D}$, otherwise since $f$ is not identically zero the zeros would accumulate on the circle, which contradicts the fact that $|f|=1$ there.
Then, consider $B$, a finite Blaschke product that has exactly the same zeros as $f$. Then $B/f$ and $f/B$ are holomorphic in $\mathbb{D}$ and continuous in $\overline{\mathbb{D}}$. Furthermore, $|B/f|=1$ and $|f/B|=1$ on the unit circle. By the maximum modulus principle applied to both quotients, $|B/f|\le 1$ and $|f/B|\le 1$, hence $|B/f| \equiv 1$ in $\mathbb{D}$, and thus $B/f$ is a unimodular constant. This gives the result.
Note that infinite blaschke products do not extend continuously to the unit circle : they have (essential) singularities where the zeros accumulate. |
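For a quick numerical confirmation of the "not hard to check" remark in the question (my addition; the two zeros below are arbitrary choices inside the disk):

import numpy as np

a = np.array([0.3 + 0.4j, -0.5j])    # zeros chosen inside the unit disk
B = lambda z: np.prod([(z - ak) / (1 - np.conj(ak) * z) for ak in a], axis=0)

z = np.exp(1j * np.linspace(0, 2 * np.pi, 7))
print(np.abs(B(z)))            # all (numerically) 1: |B| = 1 on the unit circle
print(abs(B(0.2 + 0.1j)))      # < 1: the open disk is mapped into itself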
H: Is it generally accepted that if you throw a dart at a number line you will NEVER hit a rational number?
In the book "Zero: The Biography of a Dangerous Idea", author Charles Seife claims that a dart thrown at the real number line would never hit a rational number. He doesn't say that it's only "unlikely" or that the probability approaches zero or anything like that. He says that it will never happen because the irrationals take up all the space on the number line and the rationals take up no space. This idea almost makes sense to me, but I can't wrap my head around why it should be impossible to get really lucky and hit, say, 0, dead on. Presumably we're talking about a magic super sharp dart that makes contact with the number line in exactly one point. Why couldn't that point be a rational? A point takes up no space, but it almost sounds like he's saying the points don't even exist somehow. Does anybody else buy this? I found one academic paper online which ridiculed the comment, but offered no explanation. Here's the original quote:
"How big are the rational numbers? They take up no space at all. It's a tough concept to swallow, but it's true. Even though there are rational numbers everywhere on the number line, they take up no space at all. If we were to throw a dart at the number line, it would never hit a rational number. Never. And though the rationals are tiny, the irrationals aren't, since we can't make a seating chart and cover them one by one; there will always be uncovered irrationals left over. Kronecker hated the irrationals, but they take up all the space in the number line. The infinity of the rationals is nothing more than a zero."
AI: Mathematicians are strange in that we distinguish between "impossible" and "happens with probability zero." If you throw a magical super sharp dart at the number line, you'll hit a rational number with probability zero, but it isn't impossible in the sense that there do exist rational numbers. What is impossible is, for example, throwing a dart at the real number line and hitting $i$ (which isn't even on the line!).
This is formalized in measure theory. The standard measure on the real line is Lebesgue measure, and the formal statement Seife is trying to state informally is that the rationals have measure zero with respect to this measure. This may seem strange, but lots of things in mathematics seem strange at first glance.
A simpler version of this distinction might be more palatable: flip a coin infinitely many times. The probability that you flip heads every time is zero, but it isn't impossible (at least, it isn't more impossible than flipping a coin infinitely many times to begin with!). |
H: What is Gal($\mathbb{F}_{q^k}/\mathbb{F}_q)$?
I know that if $q=p$ (where $p$ is prime) then Gal($\mathbb{F}_{p^k}/\mathbb{F}_p)$ is cyclic of order $k$.
I heard that in general (for $q=p^m$) the Galois group is cyclic of order equal to the degree of the extension (i.e. that Gal($\mathbb{F}_{q^k}/\mathbb{F}_q)=C_{[\mathbb{F}_{q^k}:\mathbb{F}_q]}$).
How can I prove this claim?
AI: Note that the Galois group you are after is a subgroup of Gal$(\mathbb{F}_{q^k}/\mathbb{F}_{p})$ (if $q = p^m$). This is true for any tower $L/M/K$ where Galois groups are defined; Gal$(L/M)$ is always a subgroup of Gal$(L/K)$.
Now the Galois group Gal$(\mathbb{F}_{q^k}/\mathbb{F}_p)$ is generated by the Frobenius automorphism $Frob_p : x\mapsto x^p$. Which powers of this are in Gal$(\mathbb{F}_{q^k}/\mathbb{F}_q)$, i.e. which powers fix the field $\mathbb{F}_q$?
Well these are the powers of $Frob_p^m$. (check this)
So the Galois group you want is cyclic, generated by $Frob_p^m$. The claim about the degree follows easily.
EDIT: You do not need to worry about the extension being Galois (although if you know this, then the claim about the size of the Galois group is trivial, by definition of Galois; this is the approach Benjamin Lim is using, and it is equally valid, just slightly more theoretical).
Explicitly, we know that Gal$(\mathbb{F}_{q^k}/\mathbb{F}_p)$ is cyclic of order $mk$, generated by $Frob_p$. Thus the subgroup generated by $Frob_p^m$ has order $k$.
This agrees with:
$[\mathbb{F}_{q^k} : \mathbb{F}_q] = \frac{[\mathbb{F}_{q^k} : \mathbb{F}_p]}{[\mathbb{F}_{q} : \mathbb{F}_p]} = \frac{mk}{m} = k$. |
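A small concrete example (my addition): take $p=2$, $m=2$, $k=3$, i.e. the extension $\mathbb F_{64}/\mathbb F_4$. Then Gal$(\mathbb F_{64}/\mathbb F_2)=\langle Frob_2\rangle$ has order $6$, and the subgroup fixing $\mathbb F_4$ is generated by $Frob_2^2 : x \mapsto x^4$ (indeed $x^4 = x$ exactly for $x \in \mathbb F_4$), which is cyclic of order $3 = k$, as claimed.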
H: Rational embedded irreducible curves in a complex surface.
Given a complex surface $X$ and an embedded irreducible compact curve $C$ with arithmetic genus $g(C) = 0$, how can one show that $C$ is non-singular?
Thanks for your answers!
AI: Given an irreducible complete curve $C$ and its normalization $\tilde C$ you have the following relation between their arithmetic genera:$$ p_a(C)=p_a(\tilde C)+\sum_{P\in Sing(C)}dim_\mathbb C (\mathcal O_{C,\tilde P}/\mathcal O_{C, P})$$
where for a singular point $P\in Sing (C)$ the ring $\mathcal O_{C,\tilde P}$ is the integral closure of the ring $\mathcal O_{C, P}$.
It is then clear that $ p_a(C)=0$ forces $ p_a(\tilde C)=0$ and also forces all the points $P$ to not be singular at all, since then all $\mathcal O_{C,\tilde P}=\mathcal O_{C, P}$. The curve $C$ was smooth after all, and isomorphic to $\mathbb P^1$ (a smooth curve of genus $0$ is isomorphic to $\mathbb P^1$).
Note that the embedding of $C$ in some surface $X$ is irrelevant. |
H: How to induce a connection on a submanifold?
Suppose an affine connection is given on a smooth manifold $M$ and let $N\subset M$ be an embedded submanifold. Is there a canonical way of defining an induced connection on $N$?
In classical differential geometry of smooth surfaces in Euclidean 3-space, the corresponding construction is that of covariant derivative (cfr. Do Carmo, Differential geometry of curves and surfaces §4-4). The covariant derivative of a vector field along a curve on the surface is defined as the orthogonal projection of the ordinary Euclidean derivative onto the plane tangent to the surfaces.
I wonder how (and if) this can be ported to the language of connections. Wikipedia's entry does something like that by means of the Riemannian structure: I wonder if this extra structure is really necessary.
AI: With just an affine structure you will not be able to get an induced connection. (Part of the story is told in Fox's AMS Notices article from March 2012 titled "What is an affine sphere?".)
Instead, you can consider the following for codimension 1 submanifolds: given $\tau:N\to M$ an embedding and let $v$ be a vector field on $M$ along $N$ that is transverse to $N$, then $(\nabla,v)$ on $M$ together induces a connection on $N$. For $(X,Y)$ vector fields on $N$, we can define
$$ D^{(v)}_X Y = [\nabla_{\tau_*X}\tau_*Y] $$
where $[W]$ for $W\in T_pM$, $p\in \tau(N)$ is defined by $\tau_*[W] - W = \lambda v$ for some $\lambda\in\mathbb{R}$. For higher codimension case you need more (linearly independent) vector fields. In the Riemannian case, $v$ is canonically chosen to be the unit normal vector to $N$ (or in higher codimension, a family that spans the normal bundle). |
H: combinatorics: The pigeonhole principle
Assume that in every group of 9 people, there are 3 of the same height.
Prove that in a group of 25 people there are 7 of the same height.
I started by defining:
pigeonholes: heights.
pigeons: people.
I do not know how to use the assumption.
Thanks.
AI: Let $n_i$ be the number of people of height $x_i$. Without loss of generality
$$
n_1\ge n_2\ge \cdots \ge n_t>0,
$$
where $t$ is the number of distinct heights.
Assume contrariwise that $n_1<7$. If $n_4<2$, then $n_1+n_2+n_3\le 3n_1\le 18$, and consequently $t\ge10$, because $n_k=1$ for all $k$ such that $4\le k\le t$, and therefore
$$(t-3)=\sum_{k>3}n_k=25-(n_1+n_2+n_3)\ge7.$$
This is a contradiction, because we can select a group of nine people with distinct heights violating the assumption. Therefore $n_4\ge2$. If $t\ge5$, then we can again form a group of nine contradicting the assumption: two people of heights $x_1,x_2,x_3,x_4$ each and one of height $x_5$. So $t=4$. But $25=n_1+n_2+n_3+n_4\le 4\cdot6$, which is a contradiction. |
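If you want to double-check the statement by brute force (my addition): the hypothesis "every 9 people contain 3 of the same height" is a reformulation of "one cannot pick 9 people using each height at most twice", i.e. $\sum_i \min(n_i,2)\le 8$. A short Python script confirms that every such distribution of 25 people has $\max_i n_i\ge 7$:

def partitions(n, largest=None):
    # all ways to write n as a non-increasing sum of positive integers
    largest = n if largest is None else largest
    if n == 0:
        yield []
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

for p in partitions(25):
    if sum(min(n, 2) for n in p) <= 8:   # "no 9 people with every height used <= twice"
        assert max(p) >= 7, p
print("verified: such a group always has 7 people of equal height")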
H: can the statement "a simplicial set is the nerve of a category if and only if it satisfies a horn-filling condition" be tweaked for groupoids?
For some reason I convinced myself that a simplicial set (or maybe I mean directly Kan complex) is homotopy equivalent to the nerve of a groupoid if and only if it has no higher homotopy groups.
Is this true? (it seems like a proof could go through the fact mentioned in the title of my question)
And a related question: can a space with no higher homotopy groups be described (up to homotopy) as a CW complex with only 1-cells? (I expect not, is there a simple counterexample?)
The reason I ask is that it seems to me that CW complexes with only 1-cells seem special, in the sense that they do not have higher homotopy groups (right?), while something like $S^2$ has only 2-cells but has non-vanishing $\pi_3$.
AI: Your first statement is true: there is an equivalence between homotopy 1-types and groupoids, given by taking the fundamental groupoid. (In general, one expects $n$-groupoids to be modeled by homotopy $n$-types).
An easy way to see this is as follows. There is an adjunction between simplicial sets and groupoids, where the left adjoint sends a simplicial set $X_\bullet $ to $\pi_1 X_\bullet$ (the groupoid, not the group) and where the right adjoint sends a groupoid to its nerve. For any simplicial set $X_\bullet$, there is an adjunction map
$$ X_\bullet \to N(\pi_1 X_\bullet)$$
which, roughly speaking, sends each point (vertex) in $X_0$ to the corresponding object in the groupoid, and sends each edge to the morphism in the fundamental groupoid that it determines. This is always an equivalence on $\pi_0$ and $\pi_1$. If there are no higher homotopy groups, it is an equivalence by Whitehead's theorem.
(In response to the title of your question: (nerves of) groupoids can be characterized as those simplicial sets with the unique horn filling property for all horns, just as categories can be characterized as those simplicial sets with the unique horn filling property for inner horns. If you believe the latter statement, it's not too hard to verify the former.) |
H: Weak a.s. convergence VS a.s.weak convergence
Let's consider a sequence $(\mu_n)_n$ of random probability measures on $\mathbb R$, and let $C_b$ be the Banach space of bounded continuous functions on $\mathbb R$. I am considering the following types of convergence, where $\mu$ is some non-random probability measure :
(A)
$$ \forall f\in C_b, \quad \mathbb P \left(\int f(x)d\mu_n(x)\rightarrow_{n\rightarrow\infty}\int f(x)d\mu(x)\right)=1$$
and
(B)
$$ \mathbb P \left(\forall f\in C_b,\quad \int f(x)d\mu_n(x)\rightarrow_{n\rightarrow\infty}\int f(x)d\mu(x)\right)=1.$$
Whereas (B) clearly implies (A), what about the other direction?
1) Do you have a counter-example where (A) does not imply (B)?
2) Do we have a sufficient criterion so that (A) implies (B) ?
(I have in mind the use of the Borel-Cantelli lemma to improve the probability convergence to the almost sure one)
AI: I believe A and B are equivalent.
First, I believe that when the underlying state space (in this case $\mathbb{R}$) is locally compact, then probability measures $\nu_n$ converge weakly to $\nu$ iff $\int f\, d\nu_n \to \int f\,d\nu$ for all continuous functions $f$ which vanish at infinity, i.e. for all $f \in C_0(\mathbb{R})$. I don't remember the proof off the top of my head, but you can find it in Billingsley's Convergence of Probability Measures.
Now $C_0(\mathbb{R})$ is separable, so one can choose a countable dense subset $\{f_1, f_2,\dots\}$. Then using the triangle inequality and the fact that the measures $\nu_n$ are uniformly bounded in total variation (since they are all probability measures), you can show that $\nu_n \to \nu$ weakly iff $\int f_k \,d\nu_n \to \int f_k \,d\nu$ for all $k$.
Now if A holds, then by countable additivity
$$\mathbb{P}\left( \int f_k\,d\mu_n \to \int f_k\,d\mu \text{ for all } k\right) = 1.$$
But we have just argued that on this event, $\mu_n \to \mu$ weakly. |
H: Calculate $\ln(2)$ using Riemann sum.
Possible Duplicate:
Is $\lim\limits_{k\to\infty}\sum\limits_{n=k+1}^{2k}{\frac{1}{n}} = 0$?
Show that
$$\ln(2) = \lim_{n\rightarrow\infty}\left( \frac{1}{n + 1} + \frac{1}{n + 2} + ... +
\frac{1}{2n}\right)$$
by considering the lower Riemann sum of $f$ where $f(x) = \frac{1}{x}$ over $[1, 2]$
I was confused looking at the equality to begin with, since each of those terms goes to $0$ as $n \rightarrow \infty$, right?
Anyway, I attempted it regardless.
$$\sum_{k=1}^n \frac{1}{n}(f(1 + \frac{k}{n}))$$
$$= \sum_{k=1}^n \frac{1}{n}(\frac{1}{1+ \frac{k}{n}})$$
$$=\sum_{k=1}^n \frac{1}{n + k}$$ which is the sum from the question?
I wasn't sure what to do from here. I tried something else though:
$$=\frac{1}{n + \frac{n(n+1)}{2}}$$
$$=\frac{2}{n^2 + 3n}$$ which seemed equally useless if I'm taking $n \rightarrow \infty$ as it all becomes $0$.
AI: Although the terms of the sum go to $0$ as $n\to\infty$, there are $n$ terms and each is between $\dfrac{1}{2n}$ and $\dfrac1n$; therefore, the sum should be between $\dfrac12$ and $1$.
There at least a couple of ways to look at the sum $\displaystyle\sum_{k=1}^n\frac{1}{n+k}$:
Bound by Integrals:
Since $\frac1x$ is a decreasing function, we get that
$$
\log\left(\frac{2n+1}{n+1}\right)=\int_{1}^{n+1}\frac{\mathrm{d}x}{n+x}\le\sum_{k=1}^{n}\frac{1}{n+k}\le\int_{0}^{n}\frac{\mathrm{d}x}{n+x}=\log(2)\tag{1}
$$
Both sides of $(1)$ go to $\log(2)$ as $n\to\infty$, and the sum is sandwiched in between.
Riemann Sums:
The sum can be rewritten as
$$
\sum_{k=1}^n\frac{1}{1+\frac{k}{n}}\frac1n\tag{2}
$$
which is a Riemann sum for the integral
$$
\int_0^1\frac{1}{1+x}\mathrm{d}x\tag{3}
$$
where $\frac{k}{n}$ approximates $x$ and $\frac1n$ approximates $\mathrm{d}x$.
As $n\to\infty$, the mesh represented in $(2)$ gets finer and the sum approaches the integral, whose value is $\log(2)$.
Note that $\log(2)\,\dot{=}\,0.69314718$ is between $1$ and $\frac12$, as mentioned above. |
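(A quick numerical check of the limit, my addition:)

import math

for n in (10, 100, 1000, 10000):
    s = sum(1.0 / (n + k) for k in range(1, n + 1))
    print(n, s, s - math.log(2))   # the error decays roughly like -1/(4n)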
H: Primes of the form $p=a^2-2b^2$.
I've stumbled upon this and I was wondering if anyone here could come up with a simple proof:
Let $p$ be a prime such that $p\equiv 1 \bmod 8$, and let $a,b\geq 1$ such that
$$p=a^2-2b^2.$$
Question: Is $b$ necessarily a square modulo $p$?
I have plenty of numerical data to support an affirmative answer, but the proof eludes me so far. For instance:
\begin{align*}
17 & = 5^2 - 2\cdot 2^2\\
&= 7^2 - 2\cdot 4^2\\
& = 23^2 - 2\cdot 16^2\\
& = 37^2 - 2\cdot 26^2\\
& = 133^2 - 2\cdot 94^2\\
\end{align*}
and $2\equiv 36$, $4$, $16$, $26\equiv 9$, $94\equiv 9 \bmod 17$ are squares.
Thanks!
AI: Since $p\equiv 1\pmod 8$, $2$ is a square modulo $p$. It will therefore be enough to show that any odd prime divisor of $b$ is also a square modulo $p$. Then any prime divisor of $b$ will be a square modulo $p$, therefore $b$ itself will be.
Let $q$ be an odd prime divisor of $b$, and consider your equation modulo $q$. You find that $p\equiv a^2 \pmod{q}$, so that $p$ is a square modulo $q$. By quadratic reciprocity (using that $p\equiv 1\pmod 4$), $q$ is a square modulo $p$. |
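Here is a small numerical search confirming the pattern (my addition; it uses Sympy's primerange and is_quad_residue, and the search bounds are arbitrary):

import math
from sympy import primerange
from sympy.ntheory import is_quad_residue

for p in primerange(3, 300):
    if p % 8 != 1:
        continue
    for b in range(1, 100):
        a2 = p + 2 * b * b
        a = math.isqrt(a2)
        if a * a == a2:                       # found p = a^2 - 2*b^2
            assert is_quad_residue(b, p), (p, a, b)
print("every b found was a square mod p")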
H: Prove the derivative of $(a-x) / x$ by definition
Hello and thanks in advance for any help!!
I currently have to get to the derivative of $\frac{a-x}{x}$ by definition, that is
$$\lim_{h\to0} \frac{\frac{a - (x+h)}{x+h}- \frac{a-x}{x}}{h}$$
So it's kind of a little mess for a newbie in algebra like me. I've tried turning the $x$ into $x^{-1}$, with no results, like this:
$$\frac{a(x+h)^{-1} - 1 - ax^{-1} +1}{h}$$
Also, I've tried using a common denominator of $xh(x+h)$ or something, but it's too long to write in $\LaTeX$ (this is my first time using it and I don't feel that it's relevant to the question).
I just wanna know how to begin. That means that I don't want the full answer, just a piece of advice to get me in the right path and then work the exercise out myself.
Thanks a ton everyone!! =)
AI: Let $f(x) = \dfrac{a}{x} -1$. Then $$f'(x) = \lim_{h \rightarrow 0} \dfrac{f(x+h) - f(x)}{h} = \lim_{h \rightarrow 0} \dfrac{\left(\dfrac{a}{x+h}-1 \right)- \left( \dfrac{a}{x} - 1\right)}{h}$$
$$f'(x) = \lim_{h \rightarrow 0} \dfrac{ \left(\dfrac{a}{x+h}-1 - \dfrac{a}{x} + 1 \right)}{h} = \lim_{h \rightarrow 0} \dfrac{ \left(\dfrac{a}{x+h} - \dfrac{a}{x}\right)}{h} = \lim_{h \rightarrow 0} \dfrac{ \left(\dfrac{ax - a(x+h)}{x(x+h)}\right)}{h}$$
$$f'(x) = \lim_{h \rightarrow 0} \dfrac{ \left(\dfrac{ax - ax - ah}{x(x+h)}\right)}{h} = \lim_{h \rightarrow 0} \dfrac{ \left(\dfrac{- ah}{x(x+h)}\right)}{h} = \lim_{h \rightarrow 0} \left(\dfrac{- a}{x(x+h)}\right)$$
$$f'(x) = \left(\dfrac{-a}{\displaystyle \lim_{h \rightarrow 0} \left(x(x+h) \right)}\right) = - \dfrac{a}{x \times x} = - \dfrac{a}{x^2}$$ |
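(For what it's worth — my addition — one can confirm the result symbolically:)

from sympy import symbols, diff, simplify

a, x = symbols('a x')
print(simplify(diff((a - x) / x, x)))   # -a/x**2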
H: The Affine Property of Connections on Vector Bundles
Given any two connections $\nabla_1, \nabla_2: \Omega^0 (V) \to \Omega^1 (V)$ on a vector bundle $V \to M$, their difference $\nabla_1 - \nabla_2$ is a $C^\infty (M)$-linear map $\Omega^0 (V) \to \Omega^1 (V)$.
Question: I have difficulties swallowing the implication that $\nabla_1 - \nabla_2 \in \Omega^1 (\text{End } V)$.
Of course, $\Omega^1 (\text{End } V) = \Gamma (T^\ast M \otimes \text{End } V)$, so this is saying that $\nabla_1 - \nabla_2$ is an endomorphism-valued 1-form. Also, given any section $s \in \Omega^0 (V)$, the difference $(\nabla_1 - \nabla_2) s$ at any point $m \in M$ is completely determined by the value $s(m)$, i.e. the operator $(\nabla_1 - \nabla_2) |_m$ is an endomorphism of the fiber $V|_m$, but I don't see how this is relevant, yet...
AI: Let $E$ and $F$ be vector bundles over a manifold $M$, and suppose I have a $C^\infty (M)$-linear map of global sections $\alpha : \Gamma (M, E) \to \Gamma (M, F)$. I claim this $\alpha$ is induced by a unique homomorphism of vector bundles $A : E \to F$.
Indeed, let $\vec{v}$ be a vector in the fibre $E_p$. By taking a local trivialisation and then multiplying by a bump function, I can get a global section $X \in \Gamma (M, E)$ such that $X |_p = \vec{v}$. Define $A (\vec{v}) = \alpha(X) |_p$. This is independent of the choice of $X$: if $Y$ is any other global section of $E$ with $Y |_p = \vec{v}$, then $(X - Y) |_p = \vec{0}$, so (writing $X - Y$ in a local frame near $p$ and cutting off with bump functions) there are finitely many smooth functions $f_i : M \to \mathbb{R}$ and global sections $Z_i$ such that $f_i(p) = 0$ and $\sum_i f_i Z_i = X - Y$. But then $C^\infty (M)$-linearity implies
$$\alpha(X) = \alpha(X - Y) + \alpha(Y) = \sum_i \alpha(f_i Z_i) + \alpha(Y) = \sum_i f_i\, \alpha(Z_i) + \alpha(Y)$$
so by evaluating at $p$ we get $\alpha(X) |_p = \alpha(Y) |_p$, as claimed. Verifying that $A$ is indeed a vector bundle homomorphism is straightforward, and uniqueness is obvious. |
H: Variation Tolerance
I came across a statement in my course book that $3\sigma$ is considered as a means of tolerance. Can anyone explain it to me? I understand that the range from $-3\sigma$ to $+3\sigma$ covers about 99.7% of the samples. But how is tolerance related to this? Is it like a threshold we are putting on the samples' values?
AI: I assume that you are referring to statistical tolerance intervals. Tolerance intervals are like confidence intervals on distributions. A 100(1-α)% tolerance interval is an interval that, at the 100(1-α)% confidence level, includes a specified percentage of the probability distribution of a random variable. So, for example, a 95% coverage interval with 95% confidence is an interval that, in repeated sampling, would cover at least 95% of the distribution in 95% of the cases. The book Statistical Intervals by Hahn and Meeker provides excellent descriptions of the various types of statistical intervals, and from it you can learn the difference between confidence intervals, tolerance intervals and prediction intervals. |