H: How can I find the following probabilities?
Let $X$ be binomially distributed with $n = 60$ and $p = 0.4$. Now I have to compute
(a) $P(20\leq X \text{ or } X\geq40)$
(b) $P(20\leq X \text{ and } X\geq10)$
I know $P(x\leq X)=\sum_{k=x}^{60}\binom{60}{k}(0.4)^k(0.6)^{60-k}$,
but I don't know how to compute the probability when the event involves "or" or "and".
How can I compute (a) and (b)?
AI: Hints:
(a) Draw a number line of possible values for $x$. What do you see?
More hints:
Note that the stated condition describes the same event as $X \ge 20$, so we are finding $P(X \ge 20)$. Simply plug into the formula.
(b) Draw another number line. Note that this time it says "and", so you have to find where both (not one) of the conditions are satisfied.
More hints:
Drawing the number line, we see that the conditions are both true whenever $X \ge 20$, so we just find that probability.
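To make the hints concrete, here is a minimal numerical sketch (assuming SciPy is available; the variable names are my own) that evaluates $P(X \ge 20)$, to which both (a) and (b) reduce:
from scipy.stats import binom

n, p = 60, 0.4
# Both (a) and (b) reduce to P(X >= 20) = 1 - P(X <= 19).
prob = binom.sf(19, n, p)   # survival function: P(X > 19) = P(X >= 20)
print(prob)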
|
H: $(X,\mathscr T)$ is compact $\iff$ every infinite subset of $X$ has a complete limit point in $X$.
Let $(X,\mathscr T)$ be a topological space. Given $A\subseteq X$, we say that $x$ is a complete limit point of $A$ if for every nbhd $N$ of $x$, $|N\cap A|=|A|$. I want to prove
Suppose $(X,\mathscr T)$ is compact. Then every infinite subset $A$ of $X$ has a complete limit point in $X$.
PROOF Assume the contrary. Then there exists an infinite subset $A$ of $X$ such that for any $x\in X$, there is a nbhd $N_x$ such that $|N_x\cap A|<|A|$. The collection $\{N_x:x\in X\}$ covers $X$. By compactness, there exists a finite subcover $\{N_{x_1},\ldots, N_{x_n}\}$. Then $A=\bigcup_{i=1}^n A\cap N_{x_i}$. But this is impossible, since $A$ is infinite, and $\bigcup_{i=1}^n A\cap N_{x_i}$ is a finite union of sets strictly smaller than $A$.
Is this correct? The problem is in "Introductory Real Analysis" by Kolmogorov.
Also, is a proof of the converse too complicated? Kolmogorov gives a reference as "P.S. Alexandroff, Einführung in die Mengenlehre und die Theorie der Reellen Funktionen (1956), pp. 250-251; J.L. Kelley, General Topology (1955), pp. 163-164" but I have no idea what this means. I am honestly clueless on how to approach it. Could you provide hints? I wouldn't mind a complete answer, if you think it is requires something too advanced.
AI: Your proof is correct. For the converse, suppose $X$ is not compact. Let $\mathcal{O}$ be an open cover of $X$ of minimal cardinality such that no finite subcover exists, and let $\{O_\alpha\}_{\alpha < |\mathcal{O}|}$ be an enumeration of $\mathcal{O}$. Construct a set $A$ as follows:
let $x_\alpha \in X$ be any element not covered by any of the $O_\beta$ for $\beta < \alpha$. Such $x_\alpha$ exists or else $\{O_\beta\}_{\beta < \alpha}$ would be an open cover, contradicting the minimality of $|\mathcal{O}|$.
let $A = \{x_\alpha : \alpha < |\mathcal{O}|\}$
We'll show that $A$ has no complete limit point. Indeed, for any $x \in X$ there is some $\alpha$ such that $O_\alpha$ is a neighbourhood of $x$, but $O_\alpha$ avoids all the $x_\beta$ for $\beta > \alpha$, and so:
$$\left |A \cap O_\alpha\right | \leq \left|\{x_\gamma : \gamma \leq \alpha\}\right| = |\alpha| < |\mathcal{O}| = |A|$$
|
H: Prove that $\frac{1}{(1+a)^2}+\frac{1}{(1+b)^2}+\frac{1}{(1+c)^2}+\frac{1}{(1+d)^2}\geq 1$
Let $abcd=1$ and $a,b,c$ and $d$ are all positive.
Prove that
$\dfrac{1}{(1+a)^2}+\dfrac{1}{(1+b)^2}+\dfrac{1}{(1+c)^2}+\dfrac{1}{(1+d)^2}\geq 1$
I am probably able to do this by assuming $a\geq b\geq c\geq d$
and by using derivative show that
$\dfrac{1}{(1+a)^2}+\dfrac{1}{(1+b)^2}\geq \dfrac{2}{(1+\sqrt{ab})^2}$
In this way I reduce the number of variables.
However this approach seems to be tedious.
Is there any other methods of proving this inequality?
I hope to learn the proof of the inequality in the title.
AI: For this inequality I have a nice method: use
$$\dfrac{1}{(1+a)^2}+\dfrac{1}{(1+b)^2}\ge\dfrac{1}{1+ab}$$
Proof: using the Cauchy-Schwarz inequality
we have
$$(a+1)^2\le\left(\dfrac{a}{b}+1\right)(ab+1)$$
$$(b+1)^2\le\left(\dfrac{b}{a}+1\right)(ab+1)$$
so
$$\dfrac{1}{(1+a)^2}+\dfrac{1}{(1+b)^2}\ge\dfrac{1}{\left(\dfrac{a}{b}+1\right)(ab+1)}+\dfrac{1}{\left(\dfrac{b}{a}+1\right)(ab+1)}=\dfrac{1}{ab+1}$$
and similarly
$$\dfrac{1}{(1+c)^2}+\dfrac{1}{(1+d)^2}\ge\dfrac{1}{1+cd}$$
Since $abcd=1$, we have
$$\dfrac{1}{1+ab}+\dfrac{1}{1+cd}=\dfrac{1}{1+\dfrac{1}{cd}}+\dfrac{1}{1+cd}=\dfrac{cd}{1+cd}+\dfrac{1}{1+cd}=1$$
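A quick randomized sanity check of the final inequality (a sketch with arbitrary test values; only the constraint $abcd=1$ comes from the problem):
import random

random.seed(0)
for _ in range(5):
    a, b, c = (random.uniform(0.1, 10) for _ in range(3))
    d = 1 / (a * b * c)                          # enforce abcd = 1
    s = sum(1 / (1 + t)**2 for t in (a, b, c, d))
    print(s >= 1, s)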
|
H: $f(x)=x^{x}$ what happens when $x$ is a negative irrational number?
Just looking at negative numbers, $x^{x}$ is defined for all rational numbers (on the real plane) in all instances except whenever $x=\large \frac {2a+1}{2b}$ where $(a, b)$ are integers . However, what happens when $x$ takes a negative irrational value, such as $- \pi$? Is $f(x)$ defined there? I don't even think it can be defined as the limit as $x$ approaches those values since I think the function is discontinuous on the negative $x$ axis because of the $x=\large \frac {2a+1}{2b}$ exception.
AI: The correct answer is "it's complicated". Or rather, "there's more than one reasonable meaning for $x^x$ when $x <0$".
The function $x \mapsto x^x$ is analytic on the positive real axis $\mathbb R_{>0}$, and so extends uniquely to a function in an open neighborhood of $\mathbb R_{>0}$ in $\mathbb C$. An explicit description that makes it easier to see what's going on is that $x^x = \exp( x \log x)$, where $\log$ is the natural logarithm. The function $\exp$ is well-defined for all complex numbers by $\exp(x) = \sum_{n\geq 0} \frac{x^n}{n!}$. The function $\log x$ is a bit more complicated. When $|x-1|<1$, we can define $\log x = -\sum_{n\geq 1} \frac{(1-x)^n}n$. You can give $\log$ a larger domain by using its Taylor expansion around some larger positive real number.
The first problem is that $\log$ does not have a single-valued extension to all of $\mathbb C$. The problem is the following. Take the function $x \mapsto -\sum_{n\geq 1} \frac{(1-x)^n}{n}$, say, with domain $\{x : |x-1|<1\}$, and try to extend it to $\mathbb C \smallsetminus\{0\}$ by walking around the origin in the counterclockwise direction. As you walk, provided you avoid the origin, you can always extend the function: keep re-expressing your function in Taylor series, and then walk a little more, and then compute a new Taylor series. If you go in the counterclockwise direction, then when you get to $-1$, for example, you decide that $\log(-1) = i\pi$. Now keep walking. By the time you get back to $1$, you discover that your function no longer evaluates to $0$ at $1$, but rather to $2\pi i$. Oops! $2\pi i \neq 0$.
The usual fix for this is to choose a branch cut $B$, which is any continuous non-self-intersecting path that starts at $0$ and ends at $\infty$ (aside: you can map the plane $\mathbb C$ injectively into the sphere $S^2$ by stereographic projection, and then "$\infty$" means the north pole; so it's just one point, but it is in all directions). The simplest way to choose a branch cut is to choose an angle $\theta \in (0,2\pi)$, and take the cut to run along the ray $re^{i\theta}$ for $r \in \mathbb R_{>0}$. Provided $B$ does not intersect $\mathbb R_{>0}$, the function $\log$ extends uniquely from $\mathbb R_{>0}$ to $\mathbb C \smallsetminus B$. This extension will have a jump discontinuity along the branch cut $B$: it will jump by $2\pi i$ as you cross in the clockwise direction. Knowing how the extension jumps allows different people to compare their extensions if they choose different branch cuts.
You asked about $x^x$ when $x$ is negative. Well, choose a branch cut then that doesn't intersect either $\mathbb R_{>0}$ or $\mathbb R_{<0}$. There are essentially two ways to make this choice: either the branch cut is in the upper half plane (positive imaginary part), or it is in the lower half. Either case gives a well-defined notion of "$\log x$", and hence "$x \log x$" and "$\exp(x\log x)$".
How do the two values compare? Well, let $\log_+ x$ denote the value of $\log x$ when the branch cut is along $-i\mathbb R_{>0}$, and $\log_- x$ the value using the branch cut $i\mathbb R_{>0}$. Then for $x\in \mathbb R_{<0}$, $\log_\pm x = \log (-x) \pm i\pi$. So the two values of $x \log x$ differ by $2\pi i x$. Therefore the ratio of the two values of "$x^x$" is $\exp(2\pi i x)$.
Note that $\exp(2\pi i x) = 1$ iff $x$ is an integer. Therefore the symbol "$x^x$" is well-defined without extra choices (like a branch cut) only when $x$ is an integer. Otherwise, it has many reasonable meanings.
By the way, how are you defining $x^x$ when $x$ is a negative rational number?
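A small numerical illustration of the two branch choices (a sketch using Python's cmath; the names log_plus and log_minus are mine):
import cmath

x = -cmath.pi                                 # a negative irrational value
log_plus = cmath.log(-x) + 1j * cmath.pi      # branch cut below the real axis
log_minus = cmath.log(-x) - 1j * cmath.pi     # branch cut above the real axis
v_plus, v_minus = cmath.exp(x * log_plus), cmath.exp(x * log_minus)
print(v_plus, v_minus)
# The two candidate values of x^x differ by the factor exp(2*pi*i*x),
# which equals 1 only when x is an integer:
print(v_plus / v_minus, cmath.exp(2j * cmath.pi * x))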
|
H: Special operator on a normed space
Let $E$ be a normed space and $T \in L(E)$ with $\|Tx\|\lt\|x\|$ for all $x\ne0$ and $\|T\|=1$.
I want to prove the following:
$A=\{x\in E: \|Tx\|\ge1\}$ is closed.
There is no $x\in A$ with $$\inf_{y\in A} \|y\|=\|x\|.$$
Let $(y_n)_{n \in \mathbb{N}} \subset \mathbb{R}^+$ be convergent to $1$ from below. Then $$T: \ell_2 \rightarrow \ell_2, \\ (x_n)_{n\in \mathbb{N}} \mapsto (x_n \cdot y_n)_{n\in \mathbb{N}}$$ has the properties stated above.
I was able to show 1., but I'm stuck with 2. and 3.
We have $$1=\|T\|=\sup_{x\in E,\ x \ne 0} \frac{\|Tx\|}{\|x\|}$$ and $$1 \le \|Tx\| \lt \|x\|$$ for $x \in A$.
But I don't see how to derive 2. from that - towards a contradiction, I suppose.
For 3., I want to show that $$1=\|T\|=\sup_{\|(x_n)\|=1}\|T(x_n)\|=\sup_{\|(x_n)\|=1}\|(y_n\cdot x_n)\|=\sup_{\|(x_n)\|=1}\left(\sum |y_n\cdot x_n|^2\right)^{1/2}.$$ Why does this hold?
I also don't see why we have $\|(y_n\cdot x_n)\| \lt \|(x_n)\|$.
Can anyone help me to understand this?
[edit] I forgot to mention that in c), $(y_n)$ is a sequence of positive real numbers.
AI: For 2:
As $\|T\|=1$, there exists a sequence $\{z_n\}$ with $\|z_n\|=1$ and $\|Tz_n\|\to1$. Putting $z_n'=z_n/\|Tz_n\|$ we get a sequence in $A$ (because $\|Tz_n'\|=1$) with $\|z_n'\|=1/\|Tz_n\|\to1$. So
$$
\inf\{\|y\|:\ y\in A\}=1.
$$
But if $x\in A$, then $\|x\|>\|Tx\|\ge1$, so $\|x\|>1$ and the infimum cannot be attained.
For 3:
Take any $x\neq0$. Then there exists $n_0$ with $x_{n_0}\ne0$.
$$
\|Tx\|^2=\sum_n|x_n|^2\,|y_n|^2=|x_{n_0}|^2|y_{n_0}|^2+\sum_{n\ne n_0}|x_n|^2\,|y_n|^2
<\sum_n|x_n|^2=\|x\|^2.
$$
So $\|Tx\|<\|x\|$ for all nonzero $x$.
Finally, $\|T\|=1$. We can see this by considering the vector $x$ that has $1$ in the $n^{\rm th}$ coordinate and zero elsewhere. Such $x$ satisfies $\|x\|=1$ and
$$
\|Tx\|=|y_n|.
$$
As $y_n\to1$, this shows that $\|T\|\geq1$. And we already had $\|T\|\leq1$ in the previous paragraph, so $\|T\|=1$.
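For intuition, a finite-dimensional truncation of the operator in part 3 can be checked numerically (a sketch; the particular choice $y_n = 1-\frac{1}{n+1}$ is mine):
import numpy as np

rng = np.random.default_rng(0)
N = 10**4
y = 1 - 1 / (np.arange(1, N + 1) + 1.0)            # y_n < 1 and y_n -> 1
x = rng.standard_normal(N)
print(np.linalg.norm(y * x) < np.linalg.norm(x))   # ||Tx|| < ||x||
print(y[-1])   # ||T e_n|| = |y_n| -> 1, so ||T|| = 1 is approached, not attained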
|
H: On a proposition of Engelking's General Topology
Let $\mathcal{F}$ and $\mathcal{F'}$ be filters on a set.
We say that $\mathcal{F'}$ is finer than $\mathcal{F}$ if $\mathcal{F'} \supset \mathcal{F}$.
A point $x$ of a topological space $X$ is called a cluster point of a filter $\mathcal{F}$ if $x$ belongs to the closure of every member of $\mathcal{F}$.
Is the following assertion in Engelking's book General Topology (1989) p.52 true?
(Part of) Proposition 1.6.8
If $x$ is a cluster point of a filter $\mathcal{F}$, then $x$ is a limit of a filter $\mathcal{F'}$ that is finer than $\mathcal{F}$.
AI: Yes. Let $\mathcal F'$ be the filter generated by the union of $\mathcal F$ and the neighborhood filter of $x$. The assumption that $x$ is a cluster point of $\mathcal F$ ensures that this $\mathcal F'$ is a proper filter: since $x$ lies in the closure of every member of $\mathcal F$, every neighborhood of $x$ meets every member of $\mathcal F$, so no finite intersection of generators is empty.
|
H: Differentiability of projection
Where $\pi_i:\Bbb R^n\rightarrow\Bbb R$ is projection onto the $i$th coordinate, the differentiability of $\pi_i$ at $X$ is given by:
$$\pi_i(X+H)-\pi_i(X)=\textrm{grad}\ \pi_i(X)\cdot H+||H||g(H)$$
Where $g$ tends to $0$ as its argument does. We deduce:
$$g(H)=\frac{h_i}{||H||}(1-x_i)$$
Where $h_i$ and $x_i$ are the $i$th coordinates of $H$ and $X$ respectively.
But how can this tend to zero? What if $H$ is approaching along the $i$th axis? Then $||H||=|h_i|$ and the limit either doesn't exist or is equal to $1-x_i$.
AI: No, $\pi_i$ is linear and $g(H)=0$ for all $H$. Your mistake is in the gradient: $\text{grad} \,\pi_i(X) = e_i$, the $i$th standard basis vector, so $\text{grad}\,\pi_i(X)\cdot H = h_i = \pi_i(X+H)-\pi_i(X)$ exactly, leaving $g \equiv 0$.
|
H: Prove exponent law $a^b\cdot a^c=a^{b+c}$ for all $ a, b \text{ and } c \in \mathbb{R}$ and $a>1$
For all $ a, b \text{ and } c \in \mathbb{R}$ and $a>1$, Prove that $a^b\cdot a^c=a^{b+c}$
I have come across this question and it's bugging me. It's a basic property that we learn in HS, and I was hoping someone could enlighten me.
AI: One way of doing this is to define $a^x$ for real $x$ using the least upper bound property of the real numbers.
$$ a^x = \text{ l.u.b of } \lbrace a^t \mid t< x, t\in \mathbb{Q} \rbrace $$
This means we consider the values for all rational powers which are less than $x$ and define the result of raising to the $x$ power as the smallest real number that is greater than or equal to the elements of this set.
To prove the product rule then we can first look at the meaning of $a^xa^y$ when $x,y\in \mathbb{R}$. Consider the following set,
$$ B= \lbrace a^r a^t \mid r < x, t < y, r \in \mathbb{Q}, t \in \mathbb{Q} \rbrace $$
We can conclude the following,
$$ a^r < a^x \text{ by the definition of } a^x.$$
$$ a^t < a^y \text{ by the definition of } a^y.$$
$$ \text{ Therefore } a^r a^t < a^x a^y$$
I am going to leave the proof that $a^xa^y$ is the l.u.b of $B$ to the thoughtful reader.
We can now look at $a^{x+y}$, this is by definition
$$ a^{x+y} = \text{l.u.b of } \lbrace a^s \mid s < x+ y, s \in \mathbb{Q} \rbrace = \text{ l.u.b of } \lbrace a^{r+t} \mid r < x, t < y, r \in \mathbb{Q}, t \in \mathbb{Q} \rbrace. $$
Using our knowledge of the power rule for rational number (which is a different proof) we notice that this new set is really just $B$. Therefore $a^{x+y}$ is just the l.u.b of $B$. A set cannot have two distinct least upper bounds therefore $a^{x+y}=a^xa^y$.
Notice that there is no need to appeal to the exponential function (which is far easier) to establish the existence and properties of the real powers. All that is necessary is the defining property of the real numbers which distinguishes them from the rationals (the least upper bound property).
For rational powers we have to take a different approach. We first define rational powers using $n$th roots. If $m,n \in \mathbb{Z}$ then define,
$$ a^{m/n} \equiv \sqrt[n]{a^m} .$$
Now suppose we multiply two expressions with different rational powers,
$$ a^{m/n} a^{p/q} = \sqrt[n]{a^m} \sqrt[q]{a^p} .$$
We will label the left hand side with the variable $K$ giving us,
$$ K = \sqrt[n]{a^m} \sqrt[q]{a^p} .$$
Taking the $n\cdot q$ power of both sides (remember $n$ and $q$ are integers so this is well defined) we get,
$$ K^{nq} = \left(\sqrt[n]{a^m} \sqrt[q]{a^p}\right)^{nq}$$
$$ K^{nq} = \left(\sqrt[n]{a^m} \right)^{nq} \left( \sqrt[q]{a^p}\right)^{nq}$$
$$ K^{nq} = \left[ \left( \sqrt[n]{a^m} \right)^n\right]^{q} \left[ \left(\sqrt[q]{a^p} \right)^q \right]^{n}$$
$$ K^{nq} = \left(a^m \right)^{q} \left( a^p\right)^{n}$$
$$ K^{nq} = a^{mq} a^{pn} $$
$$ K^{nq} = a^{mq+pn} $$
Take note that in every step above I only either used the rules of exponents for integers or the definition of $n$'th roots.
Now we will remove the power from $K$ using radicals.
$$ K = \sqrt[nq]{a^{mq+pn}} $$
Recalling the definition of rational powers we rewrite the right hand side as,
$$ K = a^{\frac{mq+pn}{nq}} = a^{\frac{m}{n} + \frac{p}{q}}$$
Replacing $K$ with its value we have,
$$a^{m/n} a^{p/q} = a^{\frac{m}{n} + \frac{p}{q}}$$
and therefore we have established the product rule for exponents.
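The l.u.b. definition can also be probed numerically. Below is a minimal sketch (the helper lub_power is my own; it approximates the supremum using dyadic rationals strictly below $x$, and uses float powers only to evaluate the rational-exponent terms):
from fractions import Fraction

def lub_power(a, x, depth=30):
    # approximate sup { a^t : t < x, t rational } with t = (floor(x*2^k) - 1) / 2^k
    best = 0.0
    for k in range(1, depth + 1):
        t = Fraction(int(x * 2**k) - 1, 2**k)   # a rational strictly below x
        best = max(best, a ** float(t))
    return best

a, b, c = 2.0, 2**0.5, 3**0.5                   # irrational exponents
print(lub_power(a, b) * lub_power(a, c))        # ~ a^b * a^c
print(lub_power(a, b + c))                      # ~ a^(b+c); the two agree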
|
H: Showing $\max\limits_{|z|=r}|p(z)| \ge |a_n|r^n$, without Cauchy integral formula.
Let $p(z) = a_n z^n + a_{n-1}z^{n-1} + \cdots + a_0$.
My question is: Is there an elementary way to show that for all $r > 0$
$$ \max \limits _{|z| = r} |p(z)| \ge |a_n|r^n$$
without using complex analysis machinery that falls out of the Cauchy integral formula?
One way to do it with the Cauchy integral formula:
$$
|a_n|
\le \frac{1}{2 \pi} \Bigg|\int \limits_{|z| = r} \frac{p(\zeta)}{\zeta ^{n+1}}\,d\zeta \Bigg|
\le \frac{1}{2 \pi} 2 \pi r \max \limits _{|z| = r} \Bigg| \frac{p(z)}{z^{n+1}} \Bigg |
= \max \limits _{|z| = r} \Bigg| \frac{p(z)}{z^{n}} \Bigg |
$$
(I was looking at a previous post Domination of complex-value polynomial by highest power)
Thanks!
AI: You can actually almost avoid complex analysis (or rather the Cauchy integral formula in its full generality) by averaging over the circle. Namely, you can check that $$\frac{1}{2\pi}\int_{0}^{2\pi}|p(re^{i\theta})|^2d\theta=|a_n|^2r^{2n}+|a_{n-1}|^2r^{2n-2}+\cdots+|a_0|^2.$$
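A numerical check of this averaging identity for a sample polynomial (a sketch; the coefficients and radius are arbitrary choices of mine):
import numpy as np

coeffs = [2.0, -1.0, 0.5, 3.0]                   # a_0, a_1, a_2, a_3
r = 1.7
theta = np.linspace(0, 2 * np.pi, 10**5, endpoint=False)
z = r * np.exp(1j * theta)
p = sum(c * z**k for k, c in enumerate(coeffs))
lhs = np.mean(np.abs(p)**2)
rhs = sum(abs(c)**2 * r**(2 * k) for k, c in enumerate(coeffs))
print(lhs, rhs)   # agree up to discretization error
# Since max|p|^2 on the circle dominates the mean, |a_n| r^n <= max_{|z|=r} |p(z)|.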
|
H: Binomial-like Sum
We know that $\sum_{k=0}^n a^k \frac{n^{\underline k}}{k!} = (1+a)^n$. Is there a known (preferably closed) form for $\sum_{k=0}^n a^k/k!$ ?
This question was prompted by another recent question.
AI: It can be written in terms of the incomplete Gamma function: $\dfrac{e^a}{n!} \Gamma(n+1,a)$.
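In SciPy this can be verified directly, since the regularized function gammaincc equals $\Gamma(n+1,a)/n!$ for integer $n$; a sketch with arbitrary test values:
import math
from scipy.special import gammaincc   # regularized: Gamma(s, x) / Gamma(s)

n, a = 7, 2.5
direct = sum(a**k / math.factorial(k) for k in range(n + 1))
closed = math.exp(a) * gammaincc(n + 1, a)   # e^a * Gamma(n+1, a) / n!
print(direct, closed)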
|
H: Comparing Areas under Curves
I remember back in high school AP Calculus class being taught that the improper integral
$$\int^\infty_1\frac{1}{x^n}dx,\qquad n\in\mathbb{R}_{\geq2},\qquad\text{converges.}$$
Now, let's compare $$\int^\infty_1\frac{1}{x^2}dx\text{ and }\int^\infty_1\frac{1}{x^3}dx\text{.}$$
Of course, the second integral converges "faster" since it's cubed, and the area under the curve would be smaller in value than the first integral.
This is what's bothering me: I found out that
$$\int^\infty_{1/2}\frac{1}{x^2}dx=\int^\infty_{1/2}\frac{1}{x^3}dx<\int^\infty_{1/2}\frac{1}{x^4}dx$$
Can someone explain to me when this happens, and how one can prove that this is right?
Thanks!
AI: $$\text{The main reason that: }\int^\infty_{1/2}\frac{1}{x^2}dx=\int^\infty_{1/2}\frac{1}{x^3}dx$$
$$\text{is because although that }\int^\infty_1\frac{1}{x^2}dx>\int^\infty_1\frac{1}{x^3}dx$$
$$\text{remember that }\int^1_{1/2}\frac{1}{x^2}dx<\int^1_{1/2}\frac{1}{x^3}dx\text{.}$$
$$\text{So, in this case: }\int^\infty_1\frac{1}{x^2}dx+\int^1_{1/2}\frac{1}{x^2}dx=\int^\infty_1\frac{1}{x^3}dx+\int^1_{1/2}\frac{1}{x^3}dx,$$
$$\text{which means that: }\int^\infty_{1/2}\frac{1}{x^2}dx=\int^\infty_{1/2}\frac{1}{x^3}dx$$
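A symbolic check of the three integrals (a sketch assuming SymPy):
import sympy as sp

x = sp.symbols('x', positive=True)
for n in (2, 3, 4):
    print(n, sp.integrate(1 / x**n, (x, sp.Rational(1, 2), sp.oo)))
# prints 2, 2, 8/3: the n=2 and n=3 integrals from 1/2 coincide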
|
H: Area of a square with double the area of another square
There are 2 squares, one with an area of $a^2$ and another with an area of $b^2$, and when $(a+b)^2=2a^2$ and $2b^2=c^2$, then $b+c=a$.
My question is, is this called anything? I know this is pretty vague but I just want to know if this property has a name. Thanks.
AI: You have two equations in three unknowns, so should expect a one-dimensional space of solutions. In this case both your equations are homogeneous in scale, so you can scale everything by $a$. Set $a=1, b'=\frac ba, c'=\frac ca$. Then $(1+b')^2=2$ and $2b'^2=c'^2$, so $b'=-1\pm\sqrt 2$ and $c'=\pm\sqrt2\,b'$, where the signs do not have to correspond. Taking the positive choices, $b'=\sqrt2-1$ and $c'=\sqrt2(\sqrt2-1)=2-\sqrt2$, and indeed $b'+c'=(\sqrt2-1)+(2-\sqrt2)=1$, so the sign constraint $b+c=a$ is satisfied by the positive solution.
|
H: Finding the relation between function x,y,z - trigo problem
Problem :
For $\displaystyle 0 < \theta < \frac{\pi}{2}$ if
$$\begin{align}x &= \sum^{\infty}_{n =0} \cos^{2n}\theta \\ y &= \sum^{\infty}_{n =0} \sin^{2n}\theta\\ z &= \sum^{\infty}_{n =0} \cos^{2n}\theta \sin^{2n}\theta \end{align}$$
then
options are :
(a) $xyz = xz+y$
(b) $xyz = xy+z$
(c) $xy^2 =y^2+x$
I have solution of this however I have one doubt in that, it is mention that :
$\displaystyle x=\sum^{\infty}_{n =0} \cos^{2n}\theta = \frac{1}{1-\cos^2\theta}$ (how is this derived, and why does the result hold?). Please guide me on this... thanks.
AI: This is just the geometric series in disguise: for any number $s$ with the property that $|s|<1$,
$$\sum_{n=0}^\infty s^n=\frac{1}{1-s}.$$
By restricting $0<\theta<\frac{\pi}{2}$, we guarantee that $|\cos^2(\theta)|<1$, so that
$$\sum_{n=0}^\infty (\cos^2(\theta))^n=\sum_{n=0}^\infty\cos^{2n}(\theta)=\frac{1}{1-\cos^2(\theta)}.$$
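A quick numerical confirmation of the geometric-series identity (a sketch; the angle is an arbitrary choice in $(0,\pi/2)$):
import math

theta = 0.7
s = math.cos(theta)**2                  # the common ratio, 0 < s < 1
partial = sum(s**n for n in range(200))
print(partial, 1 / (1 - s))             # agree to machine precision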
|
H: Change of Variables via Line Element Differential vs. Jacobian
If I had a double integral
$\int \int_R f(x,y) dxdy $
I would change variables to polar coordinates by expressing the position vector
$\vec{r} \ = \ \vec{r}(x,y) \ = \ x \vec{e}_x \ + \ y\vec{e}_y$
in terms of polar coordinates
$\vec{r} \ = \ \vec{r}(r,\theta) \ = \ r\cos(\theta) \vec{e}_x \ + \ r\sin(\theta)\vec{e}_y$
& use the fact that the line element
$d\vec{r} \ = \ \frac{\partial \vec{r}}{\partial r}dr \ + \ \frac{\partial \vec{r}}{\partial\theta}{d\theta} \ = \ ||\frac{\partial \vec{r}}{\partial r}||dr \ \vec{e}_r \ + \ ||\frac{\partial \vec{r}}{\partial\theta}|| {d\theta} \ \vec{e}_\theta \ = \ dr \ \vec{e}_r \ + \ r {d\theta} \ \vec{e}_\theta$
shows that
$||\frac{\partial \vec{r}}{\partial r}||\,||\frac{\partial \vec{r}}{\partial\theta}|| \, dr d\theta \ = \ rdrd\theta$
is the area element & modify my integral accordingly.
Similarly for any other change of variables between orthogonal coordinate systems. What is the relationship between this way of thinking and the Jacobian way of thinking when you are changing variables? I keep forgetting things when thinking of Jacobians yet I can re-derive on the spot any area/volume element I want doing the above, can somebody give me intuition on how & why the Jacobian is superior to the above & how the above is a special case of the Jacobian? If you think you can make it more interesting I'd love to see how you'd relate this to wedge products, though that's not needed.
AI: First let's consider the change in area when we use a coordinate-independent transformation. We will consider a transformation from coordinates $(x,y)$ to coordinates $(u,v)$ via the following,
$$u = a x + b y$$
$$v = c x + d y$$
note that $a$,$b$,$c$, and $d$ are constants independent of $x$ or $y$.
Now consider a rectangle in the (x,y) system with edges defined by the vectors $\vec{l}$ and $\vec{w}$ with $ \vec{l} \cdot \vec{w} $=0 (orthogonal).
For concreteness let the vectors be, $$ \vec{l} = l \vec{x}$$ $$\vec{w} = w \vec{y}$$.
The area of the $xy$ rectangle is given by the magnitude of the cross product between $\vec{l}$ and $\vec{w}$. We will denote this area $A_{xy}$.
$$ A_{xy} = \vert \vec{l} \times\vec{w} \vert = \vert \ lw \vec{z} \ \vert = lw$$
Now since $\vec{l}$ and $\vec{w}$ are vectors we know their components will change under coordinate transformation by the same rule that $x$ and $y$ follow. Therefore we can conclude that the transformed vectors will be,
$$ l'_u = a (l) + b (0) = al$$
$$ l'_v = c (l) + d (0) = cl$$
$$ w'_u = a (0) + b (w) = bw$$
$$ w'_v = c (0) + d (w) = dw$$
$$ \Rightarrow \vec{l'} = al \vec{u} + cl \vec{v} \qquad \vec{w'} = bw \vec{u} + dw \vec{v}$$
Notice that these are no longer orthogonal and in fact now define the edges of a parallelogram. We will now compute the area of this parallelogram in the $uv$ system.
$$ A_{uv} = \vert \vec{l'} \times \vec{w'} \vert = \vert \left( (al)(dw)-(cl)(bw) \right) \vec{\zeta} \vert = \vert ad-bc\vert lw = \vert ad-bc\vert A_{xy}$$
$$ \Rightarrow \quad A_{uv} = J A_{xy} $$
Where $J= \vert ad-bc \vert$, the absolute value of the determinant of the transformation, is the conversion factor which converts $xy$ area into $uv$ area.
Now this isn't a jacobian yet because our transformation was independent of the coordinates. However if our transformation is differentiable we can apply similar reasoning locally.
Let $f(x,y)$ and $g(x,y)$ be continuously differentiable functions of $x$ and $y$. Furthermore require that the change of coordinates defined below be invertible,
$$ u = f(x,y)$$
$$ v = g(x,y)$$
Let $(u_0,v_0)$ be the image of $(x_0,y_0)$ under this transformation and consider a neighborhood of $(x_0,y_0)$ small enough for the tangent plane approximations of $f$ and $g$ to be good. Then consider the following,
$$ u-u_0 = f(x,y) - f(x_0,y_0) \approx \frac{\partial f}{\partial x} (x-x_0) + \frac{\partial f}{\partial y} (y-y_0)$$
$$ v-v_0 = g(x,y) - g(x_0,y_0) \approx \frac{\partial g}{\partial x} (x-x_0) + \frac{\partial g}{\partial y} (y-y_0)$$
Now we have the rule for the transformation of displacements near $(x_0,y_0)$ which is the same as rule for transformations of vectors at that point. From this point on we apply the same reasoning above to arrive at the conversion factor for area.
The distinction between this approach and yours is that your transformations are orthogonal. This means that they locally preserve the angles between displacement vectors, so a small rectangle will be sent to a small rectangle instead of a parallelogram.
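To connect the two viewpoints numerically, here is a small sketch for polar coordinates: the area of a tiny image cell matches the Jacobian prediction $dA = r\,dr\,d\theta$ (the base point and step sizes are arbitrary choices):
import numpy as np

def to_xy(r, t):
    return np.array([r * np.cos(t), r * np.sin(t)])

r0, t0, dr, dt = 2.0, 0.6, 1e-4, 1e-4
e1 = to_xy(r0 + dr, t0) - to_xy(r0, t0)     # ~ dr * e_r
e2 = to_xy(r0, t0 + dt) - to_xy(r0, t0)     # ~ (r dtheta) * e_theta
area = abs(e1[0] * e2[1] - e1[1] * e2[0])   # |cross product| of the edge vectors
print(area, r0 * dr * dt)                   # agree to leading order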
|
H: Calculate the Lie Derivative
In trying to get to grips with Lie derivatives I'm completely lost, just completely lost :(
Is there anyone who could provide an example of calculating the Lie derivative of the most basic function you can? I.e., just as in showing someone how to calculate the derivative you'd pick something like $f(x) \ = \ x^2$, calculate it by using all possible formulations of the Lie derivative, simply as a means to illustrate the idea. Thus we could calculate it on $f(x) \ = \ x^2$ & $\vec{v}(x,y) \ = \ x^2i \ + \ 2y^2j$. I cannot find one simple calculation of this thing anywhere :(
AI: The Lie derivative of a smooth function $f:M\to \mathbb{R}$ with respect to a tangent vector $X\in T_{p}(M)$ at a point $p$ is just the directional derivative of $f$ with respect to $X$ at $p$. So, in your example, $f:\mathbb{R}^2\to \mathbb{R}$ is a smooth function and $v$ defines a smooth vector field on $M$. If $(x,y)\in \mathbb{R}^2$, then what is the directional derivative of $f$ with respect to the vector $v_{(x,y)}=(x^2,2y^2)\in T_{(x,y)}(\mathbb{R}^2)$? You may recall from multivariable calculus that it is given by the dot product of this vector with the gradient of $f$ at this point; so,
$v_{(x,y)}(f)$
$=x^2\frac{\partial f}{\partial x}+2y^2\frac{\partial f}{\partial y}$
$=x^2(2x)+2y^2(0)$
$=2x^3$.
Of course, $v$ is a smooth vector field on $\mathbb{R}^2$ and $f:\mathbb{R}^2\to \mathbb{R}$ is a smooth function. We've seen that for each $(x,y)\in \mathbb{R}^2$, $v_{(x,y)}(f)$ is a well-defined real number. Thus, $v(f):\mathbb{R}^2\to \mathbb{R}$ is a function defined by the rule $(x,y)\to v_{(x,y)}(f)$ and in this case it's smooth because it's given by the rule $v(f)(x,y)=2x^3$. In fact, this is a general rule in differential geometry: a smooth vector field can be thought of as an operator which swallows a smooth function and spits out another smooth function! The general principle is below:
Let the set of smooth functions $f:M\to \mathbb{R}$ be denoted by $C^{\infty}(M)$. If $X\in T_{p}(M)$ is a vector tangent to $M$ at $p$, then $X$ is a derivation $X:C^{\infty}(p)\to \mathbb{R}$. Note that $C^{\infty}(p)$ denotes the real vector space of germs at $p$, i.e., pairs $(f,U)$ where $U$ is an open neighborhood of $p$ and $f:U\to \mathbb{R}$ is a smooth function. Also, by a derivation $X:C^{\infty}(p)\to \mathbb{R}$, I mean a linear map satisfying the Leibniz rule: if $(f,U),(g,V)\in C^{\infty}(p)$, then $X(fg)=X(f)g+X(g)f$ on $U\cap V$.
I guess your question is: how in practice do we compute $X$? I've already claimed that $X$ can be thought of as a directional derivative operator. If $X\in T_{p}(M)$, then all we need is that we have a real-valued function $f$, smoothly defined in a neighborhood of $p$, and then we can talk about the directional derivative of $f$ at $p$ in the direction of $X$. In general, if $(U,\phi)$ is a coordinate neighborhood of $p\in M$, then we can write $X$ in these local coordinates: $X=\sum_{i=1}^{n} a_i \frac{\partial}{\partial x_i}$; here, $(x_1,\dots,x_n)$ are the local coordinates for $(U,\phi)$ and $\frac{\partial}{\partial x_i}$ for $1\leq i\leq n$ are the coordinate frames on $(U,\phi)$. The coordinate frames are just the directions on $(U,\phi)$ analogous to the case of $\mathbb{R}^n$ where we have coordinate axes. Note, however, that these "directions" are vectors in the tangent space to $M$ at various points in $U$!
Now, if $f:U\to \mathbb{R}$ is a smooth function, then $X(f)=\sum_{i=1}^{n} a_i\frac{\partial f}{\partial x_i}$. Technically, you need to prove that this formula is valid. All we know is that $X$ is linear and satisfies the Leibniz rule but it turns out that this is sufficient to restrict $X$ enough that it is given by the formula just stated.
In general, if $X$ is a smooth vector field on $M$, then a smooth function $f:M\to \mathbb{R}$ results in a smooth function $X(f):M\to \mathbb{R}$. In other words, $X:C^{\infty}(M)\to C^{\infty}(M)$ defines an operator:
Exercise 1: Prove that $X:C^{\infty}(M)\to C^{\infty}(M)$ is a derivation, i.e., it is real linear and satisfies the Leibniz rule.
Exercise 2: Prove that every derivation $X:C^{\infty}(M)\to C^{\infty}(M)$ is given by a smooth vector field on $M$ according to the recipe described above.
Exercise 3: What is the Lie derivative of the following functions with respect to the given smooth vector fields on $\mathbb{R}^3$:
(a) $f(x,y,z)=1$ and $v(x,y,z)$ is any smooth vector field on $\mathbb{R}^3$. Interpret the result geometrically.
(b) $f(x,y,z)=x^2+y^2+z^2$ and $v(x,y,z)=(x,y,z)$. Interpret the result geometrically.
(c) $f(x,y,z)=x^2+y^2+z^2$ and $v(x,y,z)$ is a smooth vector field on $\mathbb{R}^3$ tangent to the sphere of radius $r$ at any point of distance $r$ from the origin. Interpret this result geometrically.
Exercise 4: Let $f:S^1\to \mathbb{R}$ be the smooth function defined by assigning a point on the circle to its angle (measured in radians) from the x-axis in the counterclockwise direction; this is a number in $[0,2\pi)$ for each $x\in S^1$. Define local coordinates on $S^1$ by the rule $\theta\to (\cos \theta, \sin \theta)$ and let $X$ be the smooth vector field on $S^1$ given by $\frac{\partial}{\partial \theta}$ in these coordinates. Determine $X(f)$.
Exercise 5: If $X$ and $Y$ are smooth vector fields on $M$, then the Lie derivative of $Y$ with respect to $X$ is given by the derivation $Z:C^{\infty}(M)\to C^{\infty}(M)$ defined by the rule $Z(f)=X(Y(f))-Y(X(f))$ for all $f\in C^{\infty}(M)$. Of course, every such derivation corresponds to a smooth vector field on $M$ and in this case, $Z$ is commonly denoted by $[X,Y]$, the Lie bracket of $X$ and $Y$. In your example of $v(x,y)=(x^2,2y^2)$ for all $(x,y)\in\mathbb{R}^2$, define also $w(x,y)=(2y^2,x^2)$ for all $(x,y)\in \mathbb{R}^2$. Determine the Lie bracket of $v$ and $w$, that is, explicitly write down this smooth vector field on $\mathbb{R}^2$ in the form $(g(x,y),h(x,y))$ for all $(x,y)\in \mathbb{R}^2$.
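In case a computational cross-check is useful for the worked example and for Exercise 5, here is a sketch using SymPy (the helper apply is my own):
import sympy as sp

x, y = sp.symbols('x y')
f = x**2
v = (x**2, 2 * y**2)                 # v = x^2 d/dx + 2y^2 d/dy

def apply(field, g):
    # directional derivative of g along the vector field
    return field[0] * sp.diff(g, x) + field[1] * sp.diff(g, y)

print(sp.simplify(apply(v, f)))      # 2*x**3, matching the hand computation

w = (2 * y**2, x**2)                 # the field from Exercise 5
bracket_f = apply(v, apply(w, f)) - apply(w, apply(v, f))
print(sp.expand(bracket_f))          # the Lie bracket [v, w] applied to f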
I hope this helps! Let me know if you have further questions.
|
H: A die is rolled until a 6 comes up. Should the sample space of this experiment contain the set of all infinite sequences which do not contain a 6?
Is there a standard way to view this? The problem is,
In an experiment, die is rolled continually until a 6 appears, at which point the experiment stops. What is the sample space of this experiment?
My first instinct was to say that it was the set of all finite sequences with exactly one 6, which is in the final position. But this neglects the (zero-probability) event that a 6 never comes up. Should this be in the sample space? Does it matter?
AI: Since you (might) need to account for the hypothetical event that a 6 is never rolled, I would go ahead and say that Brian M. Scott is more or less correct - in plain terms, the sample space is the set of all finite sequences terminating in a 6 and all infinite sequences containing no 6. If the zero-probability event that a 6 is never rolled is irrelevant, then it doesn't matter too much, as Ross Millikan points out in another answer.
|
H: Question about the terms and operations in basic division
Let's pretend that I am a child and you want to teach me division. You demonstrate, through an example, division as repeated subtraction.
This is the simple algorithm the child learns from your lesson:
n=0
remainder=dividend
if(dividend == divisor) return 1;
while(remainder > divisor) {
n = n + 1;
remainder=remainder - divisor;
}
println "Whole portion: ", n, " with remainder ", remainder, "/", divisor
There is a bug in that algorithm. See if you can spot it.
Here is the revised version:
n=0
remainder=dividend
assert(divisor != 0);
if(dividend == divisor) return 1;
while(remainder > divisor) {
n = n + 1;
remainder=remainder - divisor;
}
println "Whole portion: ", n, " with remainder ", remainder, "/", divisor
In this version if you try to divide by zero, the system fails.
There is a different way to fix the bug though. The bug was an infinite loop, btw, if you didn't notice, because remainder would never decrease when the divisor was zero.
n=0
remainder=dividend
if(dividend == divisor) return 1;
while(divisor != 0 && remainder > divisor) {
n = n + 1;
remainder=remainder - divisor;
}
println "Whole portion: ", n, " with remainder ", remainder, "/", divisor
So for example, if you divide 3/8 you get:
Whole portion: 0 with remainder 3/8
If you try to divide 3/0 you get:
Whole portion: 0 and 3/0 remaining.
What am I doing wrong in the third example? Isn't dividing by zero asking for a fractional number < 0?
Isn't that just a special kind of complex number?
AI: Division by zero is undefined and the result of the operation is not a number, neither real nor complex.
Division by zero can sometimes be represented by limits, e.g.
$$
\lim_{x \to 0} 1/x^2 = +\infty, \text{ but }
\lim_{x \to 0^+} 1/x = +\infty \text{ while }
\lim_{x \to 0^-} 1/x = -\infty,
$$
so the 2-sided limit would be undefined in the latter case.
|
H: If the order of $x\in G$ is $p$, the smallest prime dividing $|G|$, and $h^{-1}xh=x^{10}$ for some $h\in G$, then $p=3$
Let $G$ be a group and let $p$ be the smallest prime dividing $|G|$. Let $x\in G$ be such that $|x|=p$. If $\exists h\in G$ such that $h^{-1}xh=x^{10}$, then show that $p=3$.
AI: Take this as a partial answer until a better one appears; this is my attempt. Since $|G|<\infty$ we can speak about $|h|=m$. We know that: $$h^{-m}xh^m=x^{10^m}\longrightarrow x^{10^m-1}=e_G\longrightarrow p\mid 10^m-1$$ To finish: $p\mid 10^m-1$ means $10^m\equiv 1\pmod p$ (in particular $\gcd(10,p)=1$), so the order of $10$ modulo $p$ divides both $m$ and, by Fermat, $p-1$. Since $m$ divides $|G|$ and every prime factor of $p-1$ is smaller than $p$, the minimality of $p$ forces $\gcd(m,p-1)=1$. Hence $10\equiv 1\pmod p$, i.e. $p\mid 9$, and so $p=3$.
|
H: Find the maximum of $xy(72-3x-4y)$?
$x$ and $y$ are positive. I have been stuck on this problem for a while now, any hints please?
AI: If $72-3x-4y\leq 0$, then the product is $\leq 0$, which clearly can't be the maximum.
Hint: Multiply by 12. Apply AM-GM to $3x, 4y, (72-3x-4y)$ (which are all positive)
$$\sqrt[3]{12xy(72-3x-4y)} \leq \frac{ 3x + 4y + (72-3x-4y) } { 3} = 24. $$
Hint: Write it as a quadratic in one variable.
If $x$ is fixed, then let $72-3x = L$, and we want to maximize $y(L-4y) = -4y^2 + yL$. This achieves its maximum when $8y = L= 72-3x$.
If $y$ is fixed, then let $72-4y = K$, and we want to maximize $x(K-3x) = -3x^2 + xK$. This achieves its maximum when $6x = K = 72-4y$.
Thus, this gives us $8y+3x = 72 = 6x+4y$, or that $4y=3x = 24$, which agrees with the AM-GM equality case.
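A brute-force grid check of the equality case (a sketch; the grid bounds are my own choices). The maximum value is $8\cdot 6\cdot(72-24-24)=1152$ at $(x,y)=(8,6)$:
import numpy as np

xs = np.linspace(0.01, 24, 1200)
ys = np.linspace(0.01, 18, 1200)
X, Y = np.meshgrid(xs, ys)
F = X * Y * (72 - 3 * X - 4 * Y)
i = np.unravel_index(np.argmax(F), F.shape)
print(F[i], X[i], Y[i])   # ~1152 near (8, 6)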
|
H: Check Convergence of $\sum_{n=1}^{\infty}(-1)^n\cdot\sin(\frac{\pi}{n})$
I want to check the convergence of this series:
$$\sum_{n=1}^{\infty}(-1)^n\cdot\sin\left(\frac{\pi}{n}\right)$$
1) I can check that the limit of the terms of the positive series is $0$.
What else do I need to check after that? The sequence does not look monotonically decreasing to me.
AI: To apply the Alternating Series Test, it is sufficient if the absolute values are monotonically decreasing after a while. And they are, we don't even have to wait long: for $n\ge 2$ we have $\pi/n\in(0,\pi/2]$, where $\sin$ is increasing, so $\sin(\pi/n)$ decreases as $n$ grows.
|
H: If $A$ and $B$ are positive definite, then is $B^{-1} - A^{-1}$ positive semidefinite?
I've found this while googling some properties of positive semidefinite matrices. (Unfortunately, I cannot remember where I've discovered it.) If this is true, it'll greatly save my time in my work. Is it true? How can you prove it?
Let's say I have two real symmetric and positive definite matrices $\mathbf A$ and $\mathbf B$ of the same size. Also, assume that $\mathbf A - \mathbf B$ is positive semidefinite too. Then, $\mathbf B^{-1} - \mathbf A^{-1}$ is also positive semidefinite.
AI: Yes. Here is one way to prove it:
Since $A-B\geq0$, we have (conjugating with $B^{-1/2}$) that $B^{-1/2}AB^{-1/2}-I\geq0$. This tells you that all the eigenvalues of the positive definite matrix $B^{-1/2}AB^{-1/2}$ are $\geq1$.
Now note that $B^{-1/2}AB^{-1/2}=(B^{-1/2}A^{1/2})(A^{1/2}B^{-1/2})$. Since commuting the product of two matrices does not change the eigenvalues, we get that $A^{1/2}B^{-1/2}B^{-1/2}A^{1/2}=A^{1/2}B^{-1}A^{1/2}$ has also all eigenvalues $\geq1$. So
$$
A^{1/2}B^{-1}A^{1/2}-I\geq0.
$$
Now conjugating with $A^{-1/2}$ we get $B^{-1}-A^{-1}\geq0$.
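A randomized numerical check (a sketch; the construction of $A$ and $B$ below is just one convenient way to satisfy the hypotheses):
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)); B = M @ M.T + 4 * np.eye(4)   # B > 0
N = rng.standard_normal((4, 4)); A = B + N @ N.T               # A - B >= 0
D = np.linalg.inv(B) - np.linalg.inv(A)
print(np.linalg.eigvalsh(D))   # all eigenvalues >= 0 (up to roundoff)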
|
H: Notation of random variables
I am really confused about capitalization of variable names in statistics.
When should a random variable be presented by uppercase letter, and when lower case?
For a probability $P(X \leq x)$, what do $x$ and $X$ mean here?
AI: You need to dissociate $x$ from $X$ in your mind—sometimes it matters that they are "the same letter" but in general this is not the case. They are two different characters and they mean two different things and just because they have the same name when read out loud doesn't mean anything.
By convention, a lot of the time we give random variables names which are capital letters from around the end of the alphabet. That doesn't have to be the case—it's arbitrary—but it's a convention. So just as an example here, let's let $X$ be the random variable which represents the outcome of a single roll of a die, so that $X$ takes on values in $\{1,2,3,4,5,6\}$. Now I believe you understand what would be meant by something like $P(X\leq 2)$: it's the probability that the die comes up either a 1 or a 2. Similarly, we could evaluate numbers for $P(X\leq 4)$, $P(X\leq 6)$, $P(X\leq \pi)$, $P(X\leq 10000)$, $P(X\leq -230)$ or $P(X\leq \text{any real number that you can think of})$. Another way to say this is that $P(X\leq\text{[blank]})$ is a function of a real variable: we can put any number into [blank] that we want and we end up with a unique number. Now a very common symbol for denoting a real variable is $x$, so we can write this function as $P(X\leq x)$. In this expression, $X$ is fixed, and $x$ is allowed to vary over all real numbers.
It's not super significant that $x$ and $X$ are the same letter here. We can similarly write $P(y\leq X)$ and this would be the same function. Where it really starts to come in handy that $x$ and $X$ are the same letter is when you are dealing with things like joint distributions where you have more than one random variable, and you are interested in the probability of for instance $P(X\leq \text{[some number] and } Y\leq\text{[some other number]})$ which can be written more succinctly as $P(X\leq x,Y\leq y)$. Then, just to account for the fact that it's hard to keep track of a lot of symbols at the same time, it's convenient that $x$ corresponds to $X$ in an obvious way.
By the way, for a random variable $X$, the function $P(X\leq x)$ is a very important function. It is called the cumulative distribution function and is usually denoted by $F_X$, so that $$F_X(x)=P(X\leq x)$$
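To make the distinction concrete, here is a tiny sketch of the CDF of the die example, with $X$ fixed and $x$ ranging over all reals (the function name F_X is mine):
import math
from fractions import Fraction

def F_X(x):
    # P(X <= x) for a single roll of a fair die
    return Fraction(min(max(math.floor(x), 0), 6), 6)

print(F_X(2))       # 1/3
print(F_X(3.5))     # 1/2
print(F_X(-230))    # 0
print(F_X(10000))   # 1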
|
H: Radius, Center, and Plane
How do you determine the radius, center, and the plane containing the circle $r(t)=7i+(12\cos(t))j+(12\sin(t))k$? The way I tried it is using just the basic approach:
$$7^2+(12\cos(t))^2=(12\sin(t))^2$$ $$ 49 + 144\cos^2 t=144\sin^2 t$$
$$144\cos^2 t-144\sin^2 t=-49$$
$$144\sin^2 t-144\cos^2 t=49$$
$$144(\cos^2 t-\sin^2 t)=49$$
$$144\cos(2t)=49$$
which yields a radius of 144 but surely this can't be correct since this is not a circle! Any detailed help would be much appreciated.
AI: Here $\textbf{r}(t)=\langle 7,12\cos(t),12\sin(t) \rangle$, so the $x$-component is fixed for every $t$, which means the curve lies in the plane $x=7$. Working inside that plane, you can drop the $x$-component and look only at $y(t)=12\cos(t),~~z(t)=12\sin(t)$. What does this mean? Indeed $y^2+z^2=144$: a circle of radius $R=12$ with center $(7,0,0)$, lying in the plane $x=7$.
|
H: Check Convergence of $\sum_{n=1}^{\infty}(-1)^n\cdot\left(\sqrt{n+1}-\sqrt{n-1}\right)$
I want to check the convergence of this series:
$$\sum_{n=1}^{\infty}(-1)^n\cdot\left(\sqrt{n+1}-\sqrt{n-1}\right)$$
what I did is:
$$\sum_{n=1}^{\infty}(-1)^n\cdot\left(\sqrt{n+1}-\sqrt{n-1}\right)=\sum_{n=1}^{\infty} \frac{2\,(-1)^n}{\sqrt{n+1}+\sqrt{n-1}}$$
now what is the best way to check it? from all the comparison tests?
1) I can say that in absolute value the terms are $>\frac{1}{n}$, so the series does not converge absolutely. And if I don't see the right comparison directly, what do I need to do?
thanks!
AI: Use the Alternating Series Test: here the sequence $a_n=\sqrt{n+1}-\sqrt{n-1}=\frac{2}{\sqrt{n+1}+\sqrt{n-1}}$ is decreasing with $a_n\to 0$.
|
H: If $G$ is disconnected and the vertices $x,y$ are adjacent in $G$, then there is a vertex that isn't adjacent to $x$ and isn't adjacent to $y$.
I'm just starting graph theory and I'm trying to prove the following:
Let $G$ be a simple disconnected graph with vertex set $V(G)$ and edge set $E(G)$. If $x,y\in V(G)$ and $xy \in E(G)$, then $\exists z \in V(G)$ such that $xz,yz \notin E(G)$.
Here's my attempt:
We proceed by contradiction. Suppose instead that $\forall z \in V(G)$, $xz\in E(G)$ or $yz \in E(G)$. Since $G$ is disconnected, we know by definition that $\exists a,b \in V(G)$ such that $a$ and $b$ do not belong to the same path. By letting $z=a$, we know that $ax\in E(G)$ or $ay \in E(G)$. In either case, there must be a path from $a$ to $x$ (the path will either be $ax$ or $ayx$). Similarly, by letting $z=b$, there must be a path from $b$ to $x$ (the path will either be $bx$ or $byx$). Hence, there is a path from $a$ to $b$ that passes through at least one of $\{x,y\}$ (the path will be either $axb$, $axyb$, $ayxb$, or $ayb$), a contradiction. So $\exists z \in V(G)$ such that $xz,yz \notin E(G)$, as desired.
Is my proof correct? I'm fairly sure it is, but it feels overly long. Can parts of my proof be omitted? Is there a shorter, more elegant proof that doesn't rely on case analysis? Maybe a direct proof (not a proof by contradiction) would be faster? Side Note: I haven't covered the definition of a connected component yet.
AI: I’m assuming from the wording of the question that for you a graph is disconnected if there are vertices $a,b\in V(G)$ such that no path contains both $a$ and $b$. If that’s the case, your proof is indeed correct, and you can’t really shorten it much. You could do something like this to organize it a little more clearly:
Suppose that every vertex of $G$ is adjacent to at least one of $x$ and $y$. Let $$X=\{v\in V(G):vx\in E(G)\}$$ and $$Y=\{v\in V(G):vy\in E(G)\}\;;$$ by hypothesis $X\cup Y=V(G)$. If $u,v\in X$, then $uxv$ is a path from $u$ to $v$, and if $u,v\in Y$, then $uyv$ is a path from $u$ to $v$. Finally, if $u\in X$ and $v\in Y$, then $uxyv$ is a path from $u$ to $v$. It follows that $G$ is connected.
|
H: Check Convergence of $\sum_{n=1}^{\infty}(-1)^n\cdot\frac{e^n}{n!}$
I'm trying to check the convergence of this series:
$$\sum_{n=1}^{\infty}(-1)^n\cdot\frac{e^n}{n!}$$
what I decided to do is to use d'Alembert's ratio test; the ratio I get is:
$$\frac{a_{n+1}}{a_n}=\frac{e^{n+1}}{(n+1)!}\cdot\frac{n!}{e^n} = \frac{e}{n+1}$$
1) the limit as $n\rightarrow \infty$ is $0$
2) the terms are monotonically decreasing
The answer is absolute convergence; what did I do wrong in my way of showing it?
I know that the limit of the terms being $0$ is only a prerequisite and not enough; how do I show it another way?
thanks!
AI: Let us put
$$a_n:=\frac{e^n}{n!}\implies \frac{a_{n+1}}{a_n}=\frac{e^{n+1}}{(n+1)!}\frac{n!}{e^n}=\frac e{n+1}\xrightarrow [n\to\infty]{}0$$
Thus, the series converges absolutely and thus it also converges, i.e.:
$$\sum_{n=1}^\infty \frac{e^n}{n!}\;\;\text{converges} \implies \sum_{n=1}^\infty (-1)^n \frac{e^n}{n!}\;\;\text{also converges}$$
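Numerically, the partial sums settle quickly; the full sum has the closed form $e^{-e}-1$ (from the Taylor series of $e^z$ at $z=-e$, minus the $n=0$ term). A quick sketch:
import math

partial = sum((-1)**n * math.exp(n) / math.factorial(n) for n in range(1, 60))
print(partial, math.exp(-math.e) - 1)   # both ~ -0.9340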
|
H: If $AA^T$ is the zero matrix, then $A$ is the zero matrix
Let $A$ be a $4 \times 4$ matrix. Show that if $A^TA$ or $AA^T$ is the zero matrix, then $A$ is the zero matrix.
I feel very close to solving the problem so far. I have said that
$$[0]_{ij}=\sum_{k=1}^4 [A]_{ik}[A^T]_{kj} =\sum_{k=1}^4 [A]_{ik}[A]_{jk} \qquad \text{for all }i,j \in \{1,2,3,4\} $$
and proven for the case if $i=j$. However, I cannot seem to find a good way to prove it to be true if $i\neq j$.
AI: An idea: if we put $\;A=(a_{ij})_{1\le i,j\le n}\;$ , then $\,A^t=(b_{ij})\;$ , with $\,b_{ij}=a_{ji}\,$ , so by definition:
$$AA^t=\left(\sum_{k=1}^n a_{ik}b_{kj}\right)=\left(\sum_{k=1}^n a_{ik}a_{jk}\right)$$
If you now look at the main diagonal's general entry of the above, you get
$$\sum_{k=1}^n a_{ik}a_{ik}=\sum_{k=1}^n a_{ik}^2$$
So if $\,AA^t=0\;$ then the above diagonal's entries are zero, but a sum of squared real numbers is zero iff each number is zero, so...
The same result is true with complex matrices if instead we require $\,AA^*=0\;,\;\;A^*:=\overline{A^t}\;$
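A one-line numerical illustration of the diagonal argument (a sketch; the matrix is an arbitrary example):
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
print(np.trace(A @ A.T), (A**2).sum())   # equal: the sum of squares of all entries
# If A A^T = 0 this trace is 0, forcing every entry of A to vanish.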
|
H: What does it mean to have an $L$-basis of $L\otimes_K V$?
Exercise: I have got a vector space $V$ over $K$, and $L$ a field extension of $K$. The task is to show that if $(v_1, ..., v_n)$ is a basis of $V$, then $(1\otimes_K v_1, ... ,1\otimes_K v_n)$ is an $L$-basis of $L\otimes_K V$.
Question: I know that I can consider $L$ as a vector space over $K$, so $L\otimes_K V$ is well-defined. But my question is more general: what does it actually mean to have a basis over $L$ for some vector space over $K$? Aren't we then getting a vector space over $L$?
And what would that mean for the given exercise? Because we have a tensor product over $K$, would a basis over $L$ then be a basis of a tensor product over $L$, like $L\otimes_L V$, or am I wrong?
I hope the question is clear. Thank you in advance!
AI: In fact, $L\otimes_K V$ already has the structure of a vector space over $L$; the addition is the same as the one it has normally (i.e. as a $K$-vector space), and the scalar multiplication from $L$ is simply
$$\lambda\cdot (\alpha\otimes v)\overset{\text{def}}{=}(\lambda\alpha)\otimes v$$
where $\lambda,\alpha\in L$ and $v\in V$. Your task is then to show that the set described is a basis for this $L$-vector space.
This process can be done more generally, with $K$ being replaced by a ring $R$, and $L$ being replaced by any $R$-algebra. The process is known as extension of scalars. The analog of your question in this more general case is found in this math.SE question.
|
H: Proving Continuity of a function
Prove that $$f(x)=\lim_{n\rightarrow \infty}{{x^{2n}-1}\over x^{2n}+1}$$
is continuous at all points of $\Bbb R$ except $x=\pm1$
AI: Hints:
$$|x|<1\implies \lim_{n\to\infty}x^{2n}=0$$
$$|x|>1\implies \lim_{n\to\infty} x^{2n}=\infty$$
Now use arithmetic of limits to get (justify!) that the limit function is
$$f(x)=\begin{cases}\!\!-1&,\;\!-1<x<1\\{}\\\;1&,\;\;|x|>1\\{}\\\;0&,\;\;x=\pm 1\end{cases}$$
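A small numeric sanity check of the limit function (a sketch; $n=50$ is large enough to see the limits without floating-point overflow):
def f(x, n=50):
    t = x ** (2 * n)
    return (t - 1) / (t + 1)

for x in (-1.5, -1.0, -0.5, 0.5, 1.0, 1.5):
    print(x, f(x))   # ~1 for |x|>1, ~-1 for |x|<1, exactly 0 at x = +-1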
|
H: Does every absorbing set of a Banach space contain a neighborhood of origin?
Let $X$ be a Banach space and $A$ be any absorbing subset of $X$. Does $A$ contain a neighborhood of the origin?
AI: We should expect the answer to this question to be "no", since being absorbent depends only on the algebraic structure of $B$, and is thus unlikely to imply the topological property of containing a neighbourhood of $0$. In fact, the answer is even no in the finite-dimensional case, where the algebra determines the topology.
Suppose $A$ is an absorbing set in $B$, and $v$ is a vector. We know that there exists an $r$ such that $v \in sA$ for $s>r$. In other words, $\lambda v$ is in $A$ for all sufficiently small positive $\lambda$. Since this is true for every $v$, $A$ must contain a radial neighbourhood of $0$: that is, it must contain an open line segment in every direction. Conversely, it's not hard to show that if $A$ contains a radial neighbourhood of $0$ then $A$ is absorbing.
Thus the question is whether a radial neighbourhood of $0$ must be an actual neighbourhood. The answer is no: containing open segments in every straight line direction doesn't stop $A$ omitting a curve that goes to $0$. For example, in $\Bbb R^2$, the set $A$ consisting of the whole space with the nonzero points of the parabola $y=x^2$ removed is an absorbing set.
|
H: Application of Hahn-Banach
Let $(E,\mathcal{E},\mu)$ be a measure space with finite measure $\mu$. We denote by $K$ the space of all real-valued functions on $E$, where functions that are $\mu$-a.s. equal are identified. This is a vector space. Now I have a function $\kappa:K\to \mathbb{R}$, which satisfies
$\kappa(\lambda f)=\lambda \kappa(f),\forall \lambda\ge 0,f\in K$
$\kappa(f+g)\le \kappa(f)+\kappa(g),\forall f,g\in K$
$\kappa(f+c)=\kappa(f)-c,\forall f\in K,c\in\mathbb{R}$
$\kappa(f)\le\kappa(g)$, if $f\ge g$.
Now we define $\eta(f):=\kappa(-f)$. We have $\eta(1)=1$ and $\eta(f)\le 0$ for $f\le 0$.
Now how can I use Hahn-Banach to get a linear functional $\Lambda:K\to\mathbb{R}$ with $\Lambda(1)=\eta(1)=1$ and $\Lambda(f)\le\eta(f)$ for all $f\in K$?
Clearly $\kappa$ is sublinear and so is $\eta$. I think as the linear subspace I choose the whole space $K$. But then, which is my linear functional from $K$ to the reals?
AI: Since you need $\Lambda(1)=1$, your subspace will have to contain $1$. Since you don't need $\Lambda$ to have any particular value at any other point, nothing else needs to be in your subspace. So just use the span of $1$, in other words the constant functions.
|
H: Sequences of integers with lower density 0 and upper density 1.
Is it possible to construct a sequence of integers with lower density 0 and upper density 1?
where lower and upper density means asymptotic lower and upper density (cf. References on density of subsets of $\mathbb{N}$)
EDIT: So, if this is true, then one can split $\mathbb{N}$ into two sequences each of null lower density. I find it paradoxical.
AI: Yes, it is possible. What follows isn't a completely rigorous proof, but you should be able to make one from that.
The basic idea is to construct a set $A$ such that, writing $I_n=\{1,\dots,n\}$, along one subsequence $\lim |A \cap I_{a_n}|/a_n = 1$ and along another subsequence $\lim |A \cap I_{b_n}|/b_n = 0$. We'll do this by adding or skipping consecutive "chunks" of integers.
Start by adding $1$ to $A$. We're at density $1$.
Then skip $2$. For $n = 2$, we have $|A \cap I_2|/2 = 1/2$.
Now add $3$ and $4$, so that $|A \cap I_4|/4 = 3/4$.
Now skip $5$ through $12$, so that $|A \cap I_{12}|/12 = 3/12 = 1/4$.
This should give you an idea of how the following works:
"Add" enough integers so that you're back to density $\geq (2^n - 1)/2^n$
Then "skip" enough integers so that you're below density $\leq 1/2^n$.
Repeat the two steps in order. Along the two subsequences constructed, the limit of the density is either 1 or 0.
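The construction is easy to simulate; here is a small sketch (my own code) that alternates the two phases and records the running density at each phase boundary:
N = 10**6
member, in_A, k, marks = [], 0, 1, []
while len(member) < N:
    # add integers until the running density reaches 1 - 2^-k ...
    while len(member) < N and in_A < (1 - 2.0**-k) * (len(member) + 1):
        member.append(1); in_A += 1
    marks.append(('high', len(member), in_A / len(member)))
    # ... then skip integers until it drops to 2^-k
    while len(member) < N and in_A > 2.0**-k * (len(member) + 1):
        member.append(0)
    marks.append(('low', len(member), in_A / len(member)))
    k += 1
for m in marks:
    print(m)   # densities tending to 1 and 0 along the two subsequences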
|
H: How to prove $XP = X'P$?
Two triangles $\triangle ABC$, $\triangle A'BC$ have the same base and the same height. Through the point $P$ where their sides intersect we draw a straight line parallel to the base; this line meets the other side of $\triangle ABC$ in X and the other side of $\triangle A'BC$ in $X'$. Prove that $XP = X'P$.
AI: Observe that $\triangle XPA \sim \triangle ABC$ and $\triangle X'PA' \sim \triangle A'BC$. As the $XX'$ line is parallel to $BC$, and $\triangle ABC$ and $\triangle A'BC$ are of the same height, the two similarity ratios are equal. The result then follows from the fact that the bases have the same length, that is, $|XP| = |X'P|$ because both equal the common ratio times the shared base $|BC|$.
I hope this helps ;-)
Edit: Solution using trapezium area (as asked for in the comments):
We start with the area for trapezium
$$|ABCA'| = |BCP| + |A'AP| + |BXP| + |CX'P| + |XPA| + |X'PA'|$$
then relate the areas of two triangles of the same height
\begin{align}
|BCP| + |BXP| + |XPA| + |A'AP| &= |ABC| + |A'AP| \\
&= |A'BC| + |A'AP| \\
&= |BCP| + |CX'P| + |X'PA'| + |A'AP|
\end{align}
and combining both we get
$$|BXP| + |XPA| = |CX'P| + |X'PA'|.$$
However $\triangle BXP$ and $\triangle CX'P$ are of the same height as well as $\triangle XPA$ and $\triangle X'PA'$. Writing this down we obtain
$$\frac{1}{2}b_1\cdot(h_1+h_2) = \frac{1}{2}b_2\cdot(h_1+h_2)$$
and after simplification ($h_1+h_2 \neq 0$)
$$b_1 = b_2.$$
|
H: Complex integration: $\int _\gamma \frac{1}{z}dz=\log (\gamma (b))-\log(\gamma (a))?$
Let $\gamma$ be a closed path defined on $[a,b]$ with image in the complex plane minus the upper imaginary axis (in particular, $0$ is not in this set).
Then $\frac{1}{z}$ has an antiderivative there and it is $\log z$. Therefore $\int _\gamma \frac{1}{z}dz=\log (\gamma (b))-\log(\gamma (a))=0$ because it is a closed path.
Now let $\psi(t)=e^{it}+3$, $0\leq t\leq 2\pi$. Then $\psi'(t)=ie^{it}, 0\leq t\leq 2\pi$.
So $$\int _\psi\frac{1}{z}dz=\int _0^{2\pi}\frac{ie^{it}}{e^{it}}dt=2\pi i$$
but $\psi$ is a closed path so there's something wrong.
What's going on here?
AI: The denominator in the second expression should be $e^{it}+3$ instead of $e^{it}$.
|
H: what does it mean when $\operatorname{E}[X^2]$ diverges?
Is it possible for a random variable $X$ that the expected value of $X^2$, $\operatorname{E}[X^2]$, is a divergent integral?
If it is impossible, does that mean the probability density function of $X$ is wrong (i.e., that the integral of the probability density function over the support does not equal $1$)?
Also, since $\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X] ^ 2$, does $\operatorname{Var}[X]$ still exist in this case?
AI: This simply means that the variance does not exist for this random variable. Nothing more! The probability density function can be perfectly correct.
There are several examples of this kind of random variable:
Some Student distributions
Cauchy random variables
...
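For instance, for the standard Cauchy distribution the empirical second moment never stabilizes as the sample grows; a quick sketch:
import numpy as np

rng = np.random.default_rng(1)
for n in (10**3, 10**5, 10**7):
    x = rng.standard_cauchy(n)
    print(n, np.mean(x**2))   # keeps growing instead of converging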
|
H: Determining Jordan form from ranks of matrix powers.
Suppose you're working over an algebraically closed field. If $J$ is a Jordan matrix, then one can determine the number of Jordan blocks and their sizes for any eigenvalue $\lambda$ by looking at the sequence of ranks $\operatorname{rank}(J-\lambda I)^k$ for $k=1,\ldots,m$ where $m$ is the first integer such that $\operatorname{rank}(J-\lambda I)^m=\operatorname{rank}(J-\lambda I)^{m+1}$.
I understand how this works for Jordan matrices, but not arbitrary matrices. If you're just given an arbitrary matrix $A$, and its set of eigenvalues $\{\lambda_i\}$, why does computing $\operatorname{rank}(A-\lambda_i I)^k$ for $k=1,\dots,m$ also determine the Jordan form for $A$, up to the order of blocks? Is there a reason why we may replace $A$ with its Jordan form for rank calculations?
AI: Let $A=T^{-1}JT$, where $\det T\ne 0$, $A$ - square matrix, $J$ - its Jordan form.
$$\operatorname{rank}((J-\lambda I)^{r}) = \operatorname{rank}(T^{-1}(J-\lambda I)^{r}T) =\operatorname{rank}((A-\lambda I)^{r}),$$
because multiplication of a matrix by non-singular matrix doesn't change the rank. Another way to see this is to interpret rank as a dimension of the image: $$\operatorname{rank}A = \dim\operatorname{Im} A = \dim\operatorname{Im }\left(TAT^{-1}\right)=\dim\operatorname{Im} J =\operatorname{rank}J.$$
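Concretely, the block structure can be read off numerically. A standard counting fact (stated here as background, not part of the answer above) is that the number of Jordan blocks of size $\ge k$ for $\lambda$ equals $\operatorname{rank}(A-\lambda I)^{k-1}-\operatorname{rank}(A-\lambda I)^{k}$. A sketch:
import numpy as np

A = np.array([[2., 1., 0., 0.],
              [0., 2., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.]])
lam = 2.0
B = A - lam * np.eye(4)
ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(B, k)) for k in range(4)]
print(ranks)        # [4, 2, 1, 1]
print([ranks[k - 1] - ranks[k] for k in range(1, 4)])
# [2, 1, 0]: two blocks for eigenvalue 2, exactly one of size >= 2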
|
H: Choose two disjoint three-element sets, so the product is a set of five non-identical numbers
So I want to create two unordered sets $x_1=(a,b,c)$ and $x_2=(d,e,f)$ so that all possible products of a term from $x_1$ with a term of $x_2$ constitute five different numbers. The sets can't overlap and have to consist of three distinct numbers each.
I have been trying this by trial and error, and I'm not sure what kind of math I can use to solve this. With one set with only two nonidentical numbers I can do it, but the task does not allow me this.
In the title I say product of sets; I'm not sure that is the right word. Above I try to describe this: {x*y for x in {1,2,3} for y in {2,3,4} if x != y}
AI: Straightforward solution: Choose $(1,2,3)$ and $(4,5,6)$. All numbers between $1+4=5$ and $3+6=9$ appear as sums of elements of this two sets.
Now use @DanShved's hint:
$$(2^1,2^2,2^3)=(2,4,8)$$
$$(2^4,2^5,2^6)=(16,32,64)$$
EDIT: You were asking, why this method works:
Let $(a_1,a_2,a_3)$ and $(b_1,b_2,b_3)$ be sets, such that there are five distinct numbers in the sums of the nine possible pairs, and $f:\mathbb R\to\mathbb R$ be an injective and multiplicative function, where multiplicative means $\forall x,y\in\mathbb R: f(x+y)=f(x)f(y)$.
Then for $i_1,i_2,j_1,j_2\in\{1,2,3\}$:
$$a_{i_1}+b_{j_1}=a_{i_2}+b_{j_2} \Leftrightarrow f(a_{i_1}+b_{j_1})=f(a_{i_2}+b_{j_2}) \Leftrightarrow f(a_{i_1})f(b_{j_1})=f(a_{i_2})f(b_{j_2})$$
The first equivalence holds because of injectivity and the second because $f$ is multiplicative.
It follows, that there are five distinct numbers in the product of the nine possible pairs of $(f(a_1),f(a_2),f(a_3))$ and $(f(b_1),f(b_2),f(b_3))$.
In the above example, I chose $f(x)=2^x$, which is injective and multiplicative.
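Checking the construction directly (a short sketch):
from itertools import product

x1, x2 = (2, 4, 8), (16, 32, 64)          # 2**(1,2,3) and 2**(4,5,6)
prods = sorted({a * b for a, b in product(x1, x2)})
print(prods, len(prods))                  # [32, 64, 128, 256, 512] 5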
|
H: What is the difference between a reflexive relation and an identitive relation
Given a set $X$ and a relation $R$ over $X$, we say that $R$ is reflexive if
\begin{equation}
xRx\ \forall\ x\in X.
\end{equation}
What does 'identitive' mean? Is it the same as antisymmetry?
Seen in Struwe's book Variational Methods: Applications to nonlinear PDE's and Hamiltonian systems, 4th ed., p. 52.
AI: Your guess is correct: Google Books gave me a look at p. $52$ of Struwe, Variational Methods, and it’s very clear that he’s using the term identitive to mean antisymmetric.
|
H: Is it possible to find the term (variable) $c$ from the equation $a^2=\sqrt{b^2+c^2}$
If I have the equation $a^2=\sqrt{b^2+c^2}$, is it possible for me to find the term (variable) $c$ from it?
$c=\,?$
AI: Are you simply talking about re-arranging the equation?
$$\left(a^{2}\right)^{2}=\left(\sqrt{b^{2}+c^{2}}\right)^{2}\implies a^{4}=b^{2}+c^{2}\implies c^{2}=a^{4}-b^{2} \implies c = \pm\sqrt{a^{4}-b^{2}}$$
|
H: If $M$ is complete is the closed ball compact?
Let $M$ be a Riemannian manifold and $p,q \in M$. Let $\Omega=\Omega(M;p,q)$ be the set of piecewise $C^\infty$ paths from $p$ to $q$. Let $\rho$ denote the topological metric on $M$ coming from its Riemann metric. Let $S$ denote the ball $\{x \in M : \rho(x,p) \le \sqrt{c}\}$ with $c >0$. Why if $M$ is complete is $S$ compact?
AI: Since $M$ is complete, the exponential map $\exp_p$ from $T_pM$ to $M$ is well defined. Denote the norm on $T_pM$ induced by the Riemannian metric of $M$ by $\|\cdot\|_p$. Then $$K:=\{v\in T_pM:\|v\|_p\le \sqrt{c}\}$$
is a compact subset of $T_pM$, and hence $\exp_p(K)$ is a compact subset of $M$. By the definition of $\rho$, $S$ is a closed subset of $\exp_p(K)$, so $S$ is compact.
|
H: Don't Gödel's completeness and incompleteness theorems contradict each other?
Gödel's completeness theorem: Given a set of axioms, if we cannot derive a contradiction, then the system of axioms must be consistent.
Gödel's incompleteness theorem:'Given any consistent, computable set of axioms, there's a true statement about the integers that can never be proved from those axioms'.
Since the incompleteness theorem assumes that the given set of axioms is consistent, then, by using the completeness theorem shouldn't all statements derived from this set of axioms be true thereby rendering the claim that 'there exists a true statement that cannot be proved' false?
AI: No. You seem to confuse "true" and "derivable" (or "provable"). The incompleteness theorem tells us that given a theory with certain constraints, if it is consistent then there are things it cannot prove.
Furthermore, the completeness theorem tells us that a statement is provable from a theory if and only if it is true in all the models of the theory. So by combining these two we simply have the following statement:
Given a theory which satisfies the requirements of the incompleteness theorem, there will be a statement about the integers which is true in some models, but false in other models.
One finer point here is that when we say that a statement about the integers is "true", we mean to say that it is true in their standard model, $\Bbb N$. But the integers have other models, which are called non-standard models. So if a statement is true in the standard model of the integers it tell us nothing about its truth value in other models. In fact, if the statement is not provable it will be false in some of these models.
|
H: Evaluating $\sum_{n=1}^{\infty} {(-1)^n \cdot \frac{2^{2n-1}}{(2n+1)\cdot 3^{2n-1}}}$
Calculate the summation $\sum_{n=1}^{\infty} {(-1)^n \cdot \frac{2^{2n-1}}{(2n+1)\cdot 3^{2n-1}}}$.
So I said:
Set $x = \frac{2}{3}$. Then our summation is $\sum_{n=1}^{\infty} {(-1)^n \cdot \frac{x^{2n-1}}{(2n+1)}}$.
But how exactly do I get rid of the $(-1)^n$? Also I notice it is a summation of the odd powers of $x$; how can I convert it to a full sum? (I know I should subtract from the full sum.) But the signs of this summation are different from the signs of the full sum.
AI: One can notice that
$$\frac{1}{x^2}\int x^{2n}\mathrm dx=\frac{x^{2n-1}}{(2n+1)}$$
So:
$$\sum_{n=1}^{\infty} {(-1)^n \frac{x^{2n-1}}{(2n+1)}}=\frac{1}{x^2}\sum_{n=1}^{\infty} {(-1)^n }\int x^{2n}\mathrm dx=\frac{1}{x^2}\int \left(\sum_{n=1}^{\infty} {(-1)^n }x^{2n}\right)\mathrm dx$$
And (keeping in mind that $|x|<1$, so the geometric series converges)
$$\sum_{n=1}^{\infty} {(-1)^n }x^{2n}=-\frac{x^2}{x^2+1}$$
Then
$$\sum_{n=1}^{\infty} {(-1)^n \frac{x^{2n-1}}{(2n+1)}}=\frac{1}{x^2}\int \left(-\frac{x^2}{x^2+1}\right)\mathrm dx=\frac{\tan^{-1}(x)-x}{x^2}$$
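As a quick numerical check (a small Python sketch; $x=2/3$ is the value coming from the original sum):

import math

x = 2 / 3

# Partial sum of  sum_{n>=1} (-1)^n x^(2n-1) / (2n+1)
s = sum((-1) ** n * x ** (2 * n - 1) / (2 * n + 1) for n in range(1, 200))

closed_form = (math.atan(x) - x) / x ** 2
print(s, closed_form)  # both approximately -0.17699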
|
H: Continuity of an integral with respect to one variable
Let $V\subseteq \mathbb{R}^n$ and $f:V\to\mathbb{R}^n$. Consider the function
$$g(x_1,x_2,...,x_n) = \int_{x_2}^{x_1} {f(t,x_2,...,x_n)dt}$$
on $V$. What conditions will I need to conclude that $g$ is also continuous on $V$? (I would also appreciate if you give me some justification for them.)
In the problem I am working on, I know for certain that $f$ is continuous and Lipschitz on $V$. But all I can deduce from this is that if I fix $x_2,...,x_n$, then $g$ (as a function of $x_1$ alone) is continuous. Will it help that $V$ is also compact, so that $f$ is uniformly continuous there?
If you are interested, I encounter this problem in the proof of a theorem in Coddington and Levinson's "Theory of differential equations." In particular, it is in the proof of Theorem 7.1 on pages 23-24. The successive approximations there are defined similarly as above; he says that they are continuous without much detail.
AI: I am assuming you are using the Euclidean norm $|.|$ on $V$. Let $x=(x_1,x_2,...,x_n)\in V$, and $\epsilon>0$. Since $f$ is uniformly continuous on the compact set $V$ (being continuous on a compact set), there exists $\sigma>0$ such that
$$
|x-y|\leq\sigma \implies |f(x)-f(y)|\leq\epsilon
$$
Now consider $y\in V$ such that $|x-y|\leq\sigma$. For any $t$ between $x_1$ and $x_2$, we have $|(t,x_2,...,x_n)-(t,y_2,...,y_n)|\leq|x-y|\leq\sigma$, and therefore $|f(t,x_2,...,x_n)-f(t,y_2,...,y_n)|\leq\epsilon$
\begin{align}
|g(x)-g(y)|&=\left|\int_{x_2}^{x_1}f(t,x_2,...,x_n)\mathrm{d}t
-\int_{y_2}^{y_1}f(t,y_2,...,y_n)\mathrm{d}t \right|\\
&=\left|\int_{x_2}^{x_1}(f(t,x_2,...,x_n)-f(t,y_2,...,y_n))\mathrm{d}t\right.\\
&\kern 3em \left.+\int_{x_2}^{y_2}f(t,y_2,...,y_n)\mathrm{d}t
+\int_{y_1}^{x_1}f(t,y_2,...,y_n)\mathrm{d}t
\right|\\
&\leq|x_1-x_2|\epsilon+\left|\int_{x_2}^{y_2}f(t,y_2,...,y_n)\mathrm{d}t\right|+\left|\int_{y_1}^{x_1}f(t,y_2,...,y_n)\mathrm{d}t\right|
\end{align}
Since $f$ is continuous on the compact $V$, it is also bounded, so there exists $M>0$ such that for all $x\in V$, $|f(x)|\leq M$. Finally, we find:
$$
|g(x)-g(y)|\leq|x_1-x_2|\epsilon+(|y_2-x_2|+|y_1-x_1|)M.
$$
Choosing $y$ such that $|y-x|\leq\min(\sigma,\epsilon)$ yields
$$
|g(x)-g(y)|\leq(|x_1-x_2|+2M)\epsilon,
$$
proving the continuity at point $x$. This reasoning is valid for any $x$ in $V$, and $g$ is therefore continuous on $V$.
|
H: Remainder problem using MOD
What's the remainder when $ 43^{101} + 23^{101}$ is divided by 66?
If we use the remainder obtained when $ 43^{101} + 23^{101}$ is divided by $66$, then it becomes,
$$13^{101}+23^{101}$$ then how can I use further MOD?
AI: Hint: $43\equiv -23\pmod{66}$. What happens when you raise $-1$ to an odd power?
Hint for a different approach. Do you know how the polynomial $x^n+y^n$ is divisible by $x+y$?
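A one-line check of the remainder (a Python sketch; the three-argument pow does modular exponentiation):

print((pow(43, 101, 66) + pow(23, 101, 66)) % 66)  # 0

This is consistent with the first hint: $43\equiv-23\pmod{66}$ and the exponent $101$ is odd, so the two terms cancel modulo $66$.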
|
H: Show that $\frac {a+b+c} 3\geq\sqrt[27]{\frac{a^3+b^3+c^3}3}$.
Given $a,b,c>0$ and $(a+b)(b+c)(c+a)=8$. Show that $\displaystyle \frac {a+b+c} 3\geq\sqrt[27]{\frac{a^3+b^3+c^3}3}$.
Obviously, AM-GM seems to be suitable for LHS.
For RHS, $a^3+b^3+c^3=(a+b+c)^3-3(a+b)(b+c)(c+a)=(a+b+c)^3-24$, then I don't know what to do.
Can someone please teach me? Thank you.
p.s. That $\sqrt [27]{}$ is really terrible...
AI: since
$$(a+b+c)^3=a^3+b^3+c^3+3(a+b)(b+c)(a+c)=a^3+b^3+c^3+24$$
so
$$(a+b+c)^3=a^3+b^3+c^3+3+3+3+3+3+3+3+3\ge 9\sqrt[9]{(a^3+b^3+c^3)\times 3^8}$$
so, raising both sides to the ninth power, and using $9^9\cdot 3^8=3^{26}=3^{27}/3$,
$$(a+b+c)^{27}\ge 9^9\cdot 3^8\,(a^3+b^3+c^3)=3^{27}\cdot\dfrac{a^3+b^3+c^3}{3},$$
so
$$\dfrac{a+b+c}{3}\ge\sqrt[27]{\dfrac{a^3+b^3+c^3}{3}}$$
|
H: Why in a totally disconnected separable metric space is every open set a disjoint union of clopen sets?
On page 14, Probability Measures on Metric Spaces, Parthasarathy (1967):
We recall that in a totally disconnected separable metric space, every open set can be expressed as a countable disjoint union of closed and open sets.
I'm only able to show it's true in some examples, say, $\Bbb Q$ and Cantor set, but with no luck in general.
Added:Here's a screenshot of relevant part of the statement:
AI: The result is false.
If the stated result were true, every totally disconnected, separable metric space would have a base of clopen sets and would therefore be zero-dimensional. However, Example $6.2.19$ of Engelking, General Topology, is an example of a totally disconnected, separable metric space that is not zero-dimensional. I shan’t go through the details, which are a bit lengthy, but the space itself is the subspace of $\ell_2$ consisting of the square-summable sequences of rational numbers, with the usual $\ell_2$ metric.
|
H: How to solve these three equations?
If $\alpha,\beta,\gamma$ are three numbers such that
$$\alpha+\beta+\gamma=-2,$$
$$\alpha^2+\beta^2+\gamma^2=6,$$
$$\alpha^3+\beta^3+\gamma^3=-5,$$
then $\alpha^4+\beta^4+\gamma^4$ is equal to?
I tried substituting the values of each equation into one another, but it became very complex. I also remember something like Cramer's rule for this, using matrices? Is that the way?
AI: Let $A_{n}=a^n+b^n+c^n$. Then we have
$$A_{n+3}=(a+b+c)A_{n+2}-(ab+bc+ac)A_{n+1}+abcA_{n}$$
and
$$2(ab+bc+ac)=(a+b+c)^2-(a^2+b^2+c^2)=4-6=-2$$
$$a^3+b^3+c^3=(a+b+c)^3-3(a+b+c)(ab+bc+ac)+3abc$$
Then we can easily find $abc$.
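Carrying the hints through (my own completion, for the record): $2(ab+bc+ac)=-2$ gives $ab+bc+ac=-1$, then $-5=(-2)^3-3(-2)(-1)+3abc$ gives $abc=3$, so the recurrence yields $A_4=(-2)(-5)-(-1)(6)+3(-2)=10$. A small numerical check in Python:

import numpy as np

# a, b, c are the roots of t^3 - e1 t^2 + e2 t - e3 with
# e1 = a+b+c = -2, e2 = ab+bc+ca = -1, e3 = abc = 3.
roots = np.roots([1, 2, -1, -3])
print(sum(roots ** 4).real)  # approximately 10.0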
|
H: To what extent are morphisms required to be functions?
Just beginning category theory, and I am looking for clarification on the precise nature of morphisms. The most familiar categories, e.g. $\mathbf{Top}$, have morphisms that are functions in the traditional sense, i.e. subsets of $A \times B$ such that for $(a_1,b_1)$ and $(a_2,b_2)$, $b_1 \not= b_2 \implies a_1 \not= a_2$.
But considering a category such as $\mathbf{Rel}$ where the morphisms are only relations, this seems not to be constant. It would seem a priori that the minimal structure required to be a morphism is having a well defined source and target and notion of composition; how is this formalized?
Apologies for the somewhat vague nature of this question. What I am asking is can we regard a morphism as an ordered pair (domain and codomain) which may or may not have additional structure?
AI: ".... how how is this formalizes ?"
Answer: It's formalized by the definition of category. In a category, the morphisms don't have to have any structure on their own. All that is required is that you know what the domain and codomain of each morphism are, and that you know how to compose morphisms (and, of course, that the category axioms are satisfied).
|
H: Measurability of restriction
Let $(X,\mathcal{A})$ and $(Y,\mathcal{B})$ be measurable spaces and $f:X\rightarrow Y$ a measurable function (that is, $f^{-1}(B)\in \mathcal{A}$ whenever $B\in \mathcal{B}$).
Let $A_1,...,A_n$ be measurable subsets of $X$ such that $X=\cup A_i$.
Is the following equivalence true: $f_{|A_1},\ldots,f_{|A_n}$ are measurable $\Leftrightarrow$ $f$ is measurable?
I think I have proved this to be true, so I'm essentially asking for confirmation.
AI: Yes, it's true, if we use the trace
$$\mathcal{A}_i = \{A_i \cap M\colon M \in \mathcal{A}\}$$
$\sigma$-algebra on $A_i$.
The injections $\iota_i \colon A_i \hookrightarrow X$ are then measurable, and therefore $f \circ \iota_i$ is measurable for all $i$ if $f$ is measurable.
Conversely, if $f \circ \iota_i$ is measurable for all $i$, and $B \in \mathcal{B}$, then
$$f^{-1}(B) = \bigcup_{i = 1}^n \iota_i\bigl((f\lvert_{A_i})^{-1}(B)\bigr) = \bigcup_{i=1}^n \iota_i(M_i)$$
is the union of finitely many measurable sets $M_i \in \mathcal{A}_i \subset \mathcal{A}$.
(The last is a little abuse of notation, strictly, $\iota_i(M_i) \in \iota_i(\mathcal{A}_i) \subset \mathcal{A}$.)
|
H: Any way to tell when an algebraic expression takes on values that are a square?
Say I have the expression $256x^2 -480x$. As a polynomial this isn't a perfect square. However that doesn't stop it from taking real values that are a perfect square for given x, such as x = 8. Is there any way to determine what other values of x make it into a perfect square? Or is it always guess and check? Thanks.
AI: Suppose
$256x^2-480x= b^2.$
Completing the square gives
$(16x-15)^2=b^2+15^2.$
Now look for Pythagorean triples involving $15$:
http://en.wikipedia.org/wiki/Formulas_for_generating_Pythagorean_triples
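A brute-force search (a small Python sketch; the search range is arbitrary) agrees with the Pythagorean-triple analysis:

import math

def is_square(n: int) -> bool:
    return n >= 0 and math.isqrt(n) ** 2 == n

print([x for x in range(-1000, 1001) if is_square(256 * x * x - 480 * x)])
# [0, 2, 8]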
|
H: Geometric progression problem
Please help me solve this:
A man decided to save 50 cents on the first day and on each successive day double the amount saved the previous day.
how long does it take for his total savings to be 1 USD million.
AI: This is simpler than it might seem. Each day, he more than doubles his total money. On the second day, he has 1.5 dollars = 2 dollars - 0.5 dollars. On the third day, he has 3.5 dollars = 4 dollars - 0.5 dollars.
On the $k$th day, he has a total of $2^{k-1}$ dollars - 50 cents, so you only need to find the first value of $k$ that makes this more than 1 million dollars. $2^{20} = 1048576$, so he will have 1 million dollars on the 21st day.
Written formally, you have the sequence $a_n = 2^{n-2}$, representing the amount of money saved on the $n$th day. You're looking for the value of $k$ such that
$$\sum_{n=1}^k a_n > 1000000.$$
Since
$$\begin{eqnarray*} \sum_{n=1}^k 2^{n-2} &=& \frac{1}{2} \sum_{n=1}^k 2^{n-1} \\
&=& \frac{1}{2} \sum_{n=0}^{k-1} 2^n \\
&=& \frac{1}{2} (2^k - 1) \end{eqnarray*}$$
and solving $1000000 = \frac{1}{2}(2^k - 1)$ using logs gives $k = 20.93\ldots$, so we need $k = 21$ days to save up a million dollars.
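The same count as a tiny Python loop (a sketch, using floats for the half dollar):

total, day = 0.0, 0
while total < 1_000_000:
    day += 1
    total += 0.5 * 2 ** (day - 1)  # saves 0.50 on day 1, doubling each day
print(day, total)  # 21 1048575.5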
|
H: Find for which values of $\alpha\in\mathbb R$ the sequence $f_n(x)$ converges uniformly
The question is
"Find for which values of $\alpha\in\mathbb R$ such that $ f_n(x)$ converge uniformally in $[0,\infty)$ where $f_n(x)=n^\alpha x \dot e^{-nx} $".
For $\alpha<0$, the sequence converges uniformally to $0$ (since $\forall x\in[0,\infty):e^{nx}>x $ hence the denominator is bigger then the counter and tend to infinity). My problem is I don't know how to procedd further... The book claims the sequence converges $\forall \alpha<1$. Why is this answer correct?
AI: We know that the $f_n$ are continuous and non-negative, that $f_n(0) = 0$ and $\lim_{x\to\infty} f_n(x) = 0$, since the exponential function grows much faster than $x$.
So let's try to find $\max \{ f_n(x) \colon x \in [0,\,\infty)\}$. That must be a critical point of $f_n$, so let's try to find the zeros of $f_n'$.
$$f_n'(x) = n^\alpha e^{-nx} + n^\alpha x \cdot(-n)e^{-nx} = n^\alpha e^{-nx} (1-nx).$$
The only zero of $f_n'$ is therefore $x = \frac1n$, and thus that must be the point where $f_n$ attains its maximum.
$$f_n(\frac1n) = n^\alpha \frac1n e^{-n\frac1n} = n^{\alpha-1}/e,$$
and $n^{\alpha-1} \to 0 \iff \alpha < 1$.
|
H: How many times does the function $y=e^x $ meet $y=x^2$?
As you know $y=e^x$ and $y= x^2$ meet once on $x<0$.
But I want to know whether or not they meet on $x>0$.
Since $\lim_{x\rightarrow \infty } e^x/x^2=\infty$, if they meet once on $x>0$, they
must meet again.
To summarize, my question is whether or not they meet on $x>0$.
Thank you in advance.
AI: Consider the function $f(x) = x^2 e^{-x}$, which has a maximum value of $4/e^2$ at $x=2$. As this max value is less than $1$, this implies that $x^2$ and $e^x$ do not meet for $x>0$.
|
H: Re-arranging the equation $t=\arctan\left(\frac{a}{b}\tan\theta\right)$ to find $\theta$
How can I rearrange the equation
$t=\arctan\left(\frac{a}{b}\tan \theta\right)$
to solve for $\theta$?
$\theta=\,?$
Actually I tried this equation:
$\theta=\frac{\arctan\left(\tan t\right)}{\frac {a}{b}}$
but it didn't work.
AI: \begin{array}{rl}
t
=
\arctan\left(\frac{a}{b}\,\tan\,\theta\right)
\implies
&
\tan (t) =\tan\left( \arctan\left(\frac{a}{b}\,\tan\,\theta\right)\right)
\\
\implies
&
\tan (t) =\frac{a}{b}\,\tan\,\theta
\\
\implies
&
\frac{b}{a}\tan (t) =\,\tan\,\theta
\\
\implies
&
\tan\,\theta = \frac{b}{a}\tan (t)
\\
\implies
&
\arctan\left(\tan\,\theta\right) = \arctan\left(\frac{b}{a}\tan (t)\right)
\\
\implies
&
\theta = \arctan\left(\frac{b}{a}\tan (t)\right)+k\pi,\quad k\in\mathbb Z
\\
\end{array}
|
H: Simple probability problem
A bag contains $(2m+1)\,$ coins. It is known that
$m$ of these coins have a head on both sides and the
remaining coins are fair. A coin is picked up at
random from the bag and tossed.
If the probability
that the toss results in a head is $14/19 \,$, then $m$ is
equal to ?
How should I go about this? What sample space should I consider?
AI: HINT:
Break it into two parts--first find the probability of getting heads with a "guaranteed" coin, then the probability of getting heads with a fair coin. You can do one or the other, so you add those two probabilities.
I'm not entirely sure about the sample space to consider, but the way I would approach this problem is as follows:
You can pick any one of the $m$ coins that are double-headed, and be guaranteed a head. So, the probability of getting heads this way is the number of double-headed coins over the total number of coins:
$$\displaystyle\frac{m}{2m+1}$$
You can pick any one of the remaining $m+1$ coins, and have $50:50$ chances of getting heads or tails. So, we first find the probability of picking a fair coin, then multiply it by the chances of getting heads with that coin:
$$\displaystyle\frac{m+1}{2m+1}\cdot\frac{1}{2}$$
So, the overall probability of getting a head is:
$$\displaystyle\underbrace{\frac{m}{2m+1}}_{\text{double headed}} + \underbrace{\frac{1}{2}\cdot\frac{m+1}{2m+1}}_{\text{flip a fair head}} = \frac{3m+1}{4m+2}$$
Setting $\frac{3m+1}{4m+2}=\frac{14}{19}$ and cross-multiplying gives $19(3m+1)=14(4m+2)$, i.e. $57m+19=56m+28$, so $m=9$. Equivalently, the non-reduced form $\frac{28}{38}$ of $\frac{14}{19}$ matches directly with $m=9$.
|
H: find length of semi major axes of ellipse
Suppose the equation of an ellipse is given by
$$4x^2+3y^2=25.$$
We want the length of the major axis. First transform the equation into standard form by dividing by $25$:
$$\frac{x^2}{25/4}+\frac{y^2}{25/3}=1.$$
Comparing with $x^2/a^2+y^2/b^2=1$, we get $a=5/2$ and $b=5/\sqrt{3}$. As far as I know the major axis $=2a$, therefore the major axis $=5$, right? But in the book it is equal to $10/\sqrt{3}$. Why is that? Please help me.
AI: The length of the semi-major axis is $b$ if the equation of the ellipse is $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ with $a<b$; here that is $\frac5{\sqrt3}$, so the major axis has length $2b=\frac{10}{\sqrt3}$.
|
H: "Any finite poset has a maximal element." How to formalize the proof that is given?
For fun, I'm reading The Mathematics of Logic, and the author gives the following theorem
Theorem. Any finite poset has a maximal element.
and a proof thereof. But, I can't figure out how to formalize the proof. Help, anyone? I was thinking of using reductio ad absurdum.
Here's the proof (I've cut a bit of the fluff, but its more or less verbatim).
Proof. Let $X$ be our finite poset and let $a_0 \in X$. We're going to define a sequence of elements $a_n \in X$ such that $a_n < a_{n+1}$ for each $n$. Given $a_n \in X$, there are two possibilities. Either $a_n$ is maximal, in which case we're finished, or else there is some $b \in X$ with $a_n < b$. In the latter case, choose $a_{n+1}$ to be such a $b$.
Now the argument in the previous paragraph cannot give an infinite sequence of elements of $X$. This is because $X$ is finite and the sequence $a$ is strictly increasing, hence all the terms of $a$ are distinct. Therefore, we cannot always have the second option in the previous paragraph, so at some point the $a_n$ we have obtained will be maximal. That is, our finite poset has a maximal element, as required.
Remark. We're allowed to use König's lemma, since this has already been proved.
AI: The given proof glosses over some details, but it's basically complete.
You could use induction on the cardinality of $X$. If $|X|=1$ there's nothing to prove. So let's assume $|X|=n>1$ and that the result holds for posets having cardinality less than $n$.
Pick an element of $X$, call it $a$; if $a$ is maximal, then we're done. Otherwise there's at least one element $b$ such that $a<b$. Consider $Y=\{x\in X: a<x\}$; note $b\in Y$, so $Y\neq\emptyset$. Then $Y$, with the induced order relation, is a finite poset and $a\notin Y$, so that $|Y|<|X|$. By the induction hypothesis, $Y$ has a maximal element $c$. Let's show that $c$ is also maximal in $X$. If $d\in X$ and $c<d$, then we have $a<c<d$, so $a<d$ and hence $d\in Y$, contradicting the fact that $c$ is maximal in $Y$.
|
H: Calculating this double integral in polar coordinates
Calculate the double integral $\iint_D {(1+x^2 + y^2)ln(1+x^2+y^2)dxdy} $ where $D = \{(x,y) \in \mathbb R^2 | \frac{x}{\sqrt3} \leq y \leq x , x^2 + y^2 \leq 4\}$.
I heard there is a method called polar coordinates, but the more I looked and read about it, the less I understood.
I started by drawing $D$, and Wolfram gave this:
But doesn't $D$ also include the opposite direction of this? And either way, how would I calculate it with polar coordinates?
I know polar coordinates is a wide subject and I am sorry for asking it this way, but I did not understand the scholarly papers.
AI: Taking @Daniel Fisher's comment to its logical conclusion, the lines imply a circular sector in which $\theta \in [\pi/6,\pi/4]$. Thus the area integral is
$$\int_{\pi/6}^{\pi/4} d\theta \, \int_0^2 dr \, r \, (1+r^2) \, \ln{(1+r^2)}$$
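For what it's worth, the substitution $u=1+r^2$ turns the radial integral into $\frac12\int_1^5 u\ln u\,\mathrm du$, giving the closed form $\frac{\pi}{12}\left(\frac{25}{4}\ln 5-3\right)$. A crude numerical check (a Python sketch of my own):

import math

exact = (math.pi / 12) * (25 / 4 * math.log(5) - 3)

# Midpoint rule for the radial integral; the angular part contributes pi/12.
n = 100_000
h = 2 / n
radial = 0.0
for i in range(n):
    r = (i + 0.5) * h
    radial += r * (1 + r * r) * math.log(1 + r * r) * h

print(exact, (math.pi / 12) * radial)  # both approximately 1.848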
|
H: Irreducible elements are not associates
I would like to know whether irreducible elements in a ring are never associates of one another. I'm asking because of this part on page 61 of this book:
Thanks a lot
AI: Two primes are associates if one of them is a unit times the other. For example, in the integers $5$ and $-5$ are associates (as are $p$ and $-p$). More trivially a prime is always an associate of itself.
You should think of "not associates" as a slightly stronger version of "distinct."
|
H: Show that the limit exists and find it's value?
The Fibonacci sequence is defined recursively by $x(1) = 1$, $x(2) = 2$ and
$x(n+1) = x(n) + x(n-1)$
Find$$\lim_{n\rightarrow\infty}\frac{x(n+1)}{x(n)}$$
AI: Suppose the limit exists (this needs a separate argument, e.g. via the closed form for $x(n)$ or monotonicity of the even/odd subsequences of ratios), and let $L= \lim_{n\rightarrow \infty} \dfrac{x(n+1)}{x(n)}$. By the recursive definition we also have $\lim_{n\rightarrow \infty} \dfrac{x(n+1)}{x(n)}=\lim_{n\rightarrow \infty}\dfrac{x(n)+x(n-1)}{x(n)}= 1+\dfrac{1}{L}$. Therefore,
$$1+\dfrac{1}{L}=L \quad\Longrightarrow\quad L^2 - L - 1=0,$$
so
$$L=\dfrac{1\pm\sqrt{5}}{2},$$
and since all terms are positive (so $L\ge 1$), $L=\dfrac{1+\sqrt{5}}{2}$.
|
H: Re-arranging the equation $L=\sqrt{a^2\sin^2t+b^2\cos^2t\,}$ to find $\left(t\right)$?
How can I rearrange the equation
$L=\sqrt{a^2\sin^2t+b^2\cos^2t\,}$
to solve for $t$?
$t=\,?$
I tried to solve it but I'm stuck at:
$L^2=a^2\sin^2t+b^2\cos^2t$
AI: Hint: $\cos^2 t+\sin^2 t = 1$
SOLUTION:
$L^2=\left(\sqrt{a^2\sin^2t+b^2\cos^2t\,}\right)^2$
$L^2=a^2\sin^2t+b^2\cos^2t$
$L^2=a^2\sin^2t+b^2\left(1-\sin^2t\right)$
$L^2=a^2\sin^2t+b^2-b^2\sin^2t$
$L^2=\sin^2t\,\left(a^2-b^2\right)+b^2$
$L^2-b^2=\sin^2t\,\left(a^2-b^2\right)$
$\sin^2t=\frac{L^2-b^2}{a^2-b^2}$
$\sqrt{\sin^2t}=\sqrt{\frac{L^2-b^2}{a^2-b^2}}$
$\sin t=\sqrt{\frac{L^2-b^2}{a^2-b^2}}$ (taking $\sin t\ge 0$; in general $\sqrt{\sin^2 t}=|\sin t|$)
$\arcsin \left(\sin t\right)=\arcsin \left(\sqrt{\frac{L^2-b^2}{a^2-b^2}}\right)$
$t=\arcsin \left(\sqrt{\frac{L^2-b^2}{a^2-b^2}}\right)$
|
H: Where does the symbol $\mathcal O$ for sheaves come from?
Sheaves are often denoted by the letter $\mathcal O$. What does this O stand for? To me it seems that more natural choices of symbols for sheaves would be $\mathcal S$ or $\mathcal F$ (for the French faisceau).
AI: Quoting the historical footnote in Grauert/Remmert, Coherent Analytic Sheaves, concerning the origin of the notation $\mathcal{O}$ for the rings of holomorphic functions (and their associated sheaves):
Some people think the symbol $\mathcal{O}$ was chosen in honor of Oka, sometimes it is even said that $\mathcal{O}$ reflects the French pronunciation of holomorphe. The truth is that the symbol was chosen accidentally. In a letter to the authors from March 22, 1982, H. Cartan writes: "Je m'étais simplement inspiré d'une notation utilisée par van der Waerden dans son classique traité 'Moderne Algebra' (cf. par exemple §16 de la 2e édition allemande, p.52)" (roughly: "I was simply inspired by a notation used by van der Waerden in his classic treatise 'Moderne Algebra'; cf. for example §16 of the 2nd German edition, p. 52.")
|
H: Determining the angle degree of an arc in ellipse?
Is it possible to determine the angle in degrees of an arc of an ellipse, knowing the arc length and the ellipse's semi-major and semi-minor axes?
If I have an arc length in the first quarter of an ellipse and I want to know its angle, what data will I need and what is the exact method to use?
Please take a look at this picture:
Actually I know how to determine the arc length of an ellipse here, but I want to do the reverse.
AI: Parameterization of an ellipse by angle from the center is
$$
\gamma(\phi)=(\cos(\phi),\sin(\phi))\frac{ab}{\sqrt{a^2\sin^2(\phi)+b^2\cos^2(\phi)}}
$$
and
$$
\left|\gamma'(\phi)\right|=ab\sqrt{\frac{a^4\sin^2(\phi)+b^4\cos^2(\phi)}{\left(a^2\sin^2(\phi)+b^2\cos^2(\phi)\right)^3}}
$$
and integrating $\left|\gamma'(\phi)\right|$ gets extremely messy.
So instead, we use $\theta$, where
$$
b\tan(\theta)=a\tan(\phi)
$$
Then
$$
\gamma(\theta)=(a\cos(\theta),b\sin(\theta))
$$
and
$$
\left|\gamma'(\theta)\right|=\sqrt{a^2\sin^2(\theta)+b^2\cos^2(\theta)}
$$
Now, integrating $\left|\gamma'(\theta)\right|$ is a lot simpler.
$$
\int\left|\gamma'(\theta)\right|\,\mathrm{d}\theta
=b\,\mathrm{EllipticE}\left(\theta,\frac{b^2-a^2}{b^2}\right)
$$
However, to go from arc length to angle, we still need to invert the elliptic integral.
Solution of the problem given using Mathematica
Here we get the solution to 30 places.
In[]= Phi[a_,b_,s_,opts:OptionsPattern[]] := Block[ {t}, t+ArcTan[(b-a)Tan[t]/(a+b Tan[t]^2)] /. FindRoot[b EllipticE[t,(b^2-a^2)/b^2] == s, {t, 0}, opts]]
In[]= Phi[4, 2.9`32, 3.31`32, WorkingPrecision->30]
Out[]= 0.87052028743193111752524449959
Thus, $\phi\doteq0.87052028743193111752524449959\text{ radians}$.
Let me describe the algorithm a bit.
FindRoot[b EllipticE[t,(b^2-a^2)/b^2] == s, {t, 0}, opts]
inverts the elliptic integral to get $\theta$ from $a$, $b$, and $s$.
Now, we want to find $\phi$ so that $b\tan(\theta)=a\tan(\phi)$. However, simply using $\tan^{-1}\left(\frac ba\tan(\theta)\right)$ will only return a result in $\left(-\frac\pi2,\frac\pi2\right)$. To get the correct value, we use the relation
$$
\begin{align}
\tan(\phi-\theta)
&=\frac{\tan(\phi)-\tan(\theta)}{1+\tan(\phi)\tan(\theta)}\\
&=\frac{\frac ba\tan(\theta)-\tan(\theta)}{1+\frac ba\tan(\theta)\tan(\theta)}\\
&=\frac{b\tan(\theta)-a\tan(\theta)}{a+b\tan(\theta)\tan(\theta)}\\
\phi
&=\theta+\tan^{-1}\left(\frac{(b-a)\tan(\theta)}{a+b\tan^2(\theta)}\right)
\end{align}
$$
This is why we have
t+ArcTan[(b-a)Tan[t]/(a+b Tan[t]^2)]
|
H: Weak deformation retraction (exercise 0.4 from Hatcher) - Proof check
A deformation retraction in the weak sense of a space $X$ to a subspace $A$ is a homotopy $f_t:X\to X$ such that $f_0=Id_X$, $f_1(X)\subset A$ and $f_t(A)\subset A$ for all $t$. Show that if $X$ deformation retracts to $A$ in this weak sense, then the inclusion $i:A\to X$ is a homotopy equivalence.
Proof: We need to find a $g:X\to A$ such that $i\circ g\simeq Id_X$ and $g\circ i\simeq Id_A$.
If we put $g=f_1$ (viewed as a map into $A$), then $i\circ g=f_1\simeq Id_X$, exactly because of the deformation retraction. For the other composite, note that $f_t(A)\subset A$ for all $t$, so the restrictions $f_t|_A$ give a homotopy in $A$ from $Id_A$ to $f_1|_A=g\circ i$.
Are there any mistakes?
AI: This answer is just to remove the question from the unanswered queue.
You are right in what you have done. Could you be more specific about your doubt? Which step are you unsure of?
|
H: Integral of $\int \frac{dx}{\sqrt{x^2 -9}}$
$$\int \frac{dx}{\sqrt{x^2 -9}}$$
$x = 3 \sec \theta \implies dx = 3 \sec\theta \tan\theta d\theta$
$$\begin{align} \int \frac{dx}{\sqrt{x^2 -9}} & = \frac{1}{3}\int \frac{3 \sec\theta \tan\theta d\theta}{\tan\theta} \\ \\ & = \int \sec\theta d\theta \\ \\ & = \ln | \sec\theta + \tan\theta| + c\end{align}$$
$x = 3\sec \theta \implies \sec\theta = \frac {x}{3}$
$\tan\theta = \frac{\sqrt{x^2 - 9}}{x}$
I may have confused $x$ with $3$, but I cannot get the proper answer, which is
$$\ln | x + \sqrt{x^2 - 9}| + c$$
I always get $\dfrac{x}{3}$ or $\dfrac{\sqrt{x^2 - 9}}{x}$ or some variation of that, and I can't eliminate them to get their answer.
AI: What you should end with is
$\sec\theta = \dfrac x3\quad$ and $\quad\tan \theta = \dfrac{\sqrt{ x^2 - 9}}{3}$.
Then you have $$\begin{align} \log \Big| \frac 13\left(x + \sqrt{x^2 - 9}\right)\Big| + C & = \log|x +\sqrt{x^2 - 9}| -\log 3 + C \\ \\ & = \log|x + \sqrt{x^2 - 9}| + C'\end{align}$$
|
H: Projection of closed set
Let $A \subset \mathbb R^2$ and let $B$ be the projection of $A$ onto the $x$-axis. Do you know a counterexample to the statement: if $A$ is closed, then $B$ is closed?
AI: Hint: Consider the graph of $\tan(x)$ between two asymptotes.
|
H: Conjecture on limit of $1-(n^{p-1}\mod p)$
Given $p \in \Bbb P$ prime, $n \in \Bbb N$ and
$$\mathcal V_p=1-(n^{p-1}\mod p)$$
let me conjecture that
$$\lim_{p\rightarrow \infty}\mathcal V_p = \operatorname{sinc}(2\pi \,n)$$
Question: Is this conjecture true?
AI: If $p$ does not divide $n$, we have $n^{p-1}\equiv 1\pmod p$, so $\mathcal V_p=0$ for almost all $p$.
And $\operatorname{sinc}(2\pi n)=0$ for all $n\in\mathbb N$.
So yes, your conjecture is true.
It is even true if you have $0\in\mathbb N$, for then we get $\mathcal V_p=1$ for all $p$ and $\operatorname{sinc}(0)=1$.
|
H: Why I cannot solve this limit problem this way?
$\lim_{x\to -1} \ \, \frac{1}{{1+x}} \left( \frac{1}{{x+5}}+ \frac{1}{{3x-1}} \right) $
Since the limit is not of the form $\frac{0}{0}$, I put $x = -1$ and get the answer $0$.
What is wrong with this?
AI: If you write the limit as
$$
\frac{\frac1{x+5}+\frac1{3x-1}}{x+1}
$$
it becomes of the form $\frac00$. From here try combining the fractions in the numerator:
$$
\frac{\frac{4x+4}{(x+5)(3x-1)}}{x+1}
$$
now you can cancel the common factor $x+1$ and try to plug in $x=-1$.
|
H: The orthogonal projection onto a plane - explanation
Could somebody explain, why orthogonal projection onto a plane with equation $x_1+x_2+x_3=0$ is given by $$y=(x_1,x_2,x_3)-\bigg( \frac{x_1+x_2+x_3}{3}\bigg)(1,1,1)$$
I don't understand why we sum the three coordinates and divide by $3$. I thought we needed to use the dot product of the point and the normal instead of taking a weighted average, so I'm a bit confused.
I suppose this has to do with the geometry of how the plane sits in 3D space relative to an arbitrary point. We basically need to find the distance of an arbitrary point to the plane, but I still don't see how this translates into the given form.
Thanks!
AI: Notice that the unit normal to your plane $x_1+x_2+x_3=0$ is $\frac1{\sqrt3}(1,1,1)$. Use the dot product formula with this unit normal and you'll get the formula in your question.
|
H: solving arduous limit
How to simplify the following:
$$\lim_{x\rightarrow 0} \frac {\sin (\pi / 2 - 10 \sqrt x) \ln ( \cos (2x))}{(2^x -1)((x+1)^5-(x-1)^5)}$$
Here is what I've done:
$$ \sin (\pi / 2 - 10 \sqrt x) = \cos 10 \sqrt x$$
$$(x+1)^5-(x-1)^5 = 10x^4 +20 x^2 +2$$
So:
$$\lim_{x\rightarrow 0} \frac {\cos (10 \sqrt x) \ln ( \cos (2x))}{(2^x -1)(10x^4 +20 x^2 +2)}$$
What else can be done? I've started to use L'Hopital's rule, but it makes things worse, i.e. more and more terms. Probably there is a way to make the fraction simpler?
AI: Hint: Note that
$$
\lim_{x\rightarrow0}\frac{\sin(\pi/2-10\sqrt{x})}{(x+1)^5-(x-1)^5}=\frac{\sin(\pi/2)}{1-(-1)}=\frac{1}{2}.
$$
So, it remains to consider
$$
\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{2^x-1}.
$$
Try applying L'Hopital's rule to this last limit. Then, combine your results.
|
H: Cover of a Grassmannian by an open set
I am reading this document here and in exercise 1, the author asks to show the Grassmannian $G(r,d)$ in a $d$ dimensional vector space $V$ has dimension $r(d-r)$ as follows. For each $W \in G(r,d)$ choose $V_W$ of dimension $d-r$ that intersects $W$ trivially, and show one has a bijection
$$\{ \text{Subspaces of dimension $r$ that intersect $V_W$ trivially}\} \leftrightarrow \operatorname{Hom}(W,V_W).$$
Now I can set up a bijection as follows. For each $U$ on the left, we have $U \oplus V_W = V$. Thus any $w \in W$ can be written uniquely as $u + v_w$ for $u\in U$ and $v_w \in V_W$ and so we obtain a linear map $T : W \to V_W$ by sending $w$ to this $v_w$. Conversely for any linear map $T \in \text{Hom}(W,V_W)$ we have a subspace on the left consisting of vectors $w + T(w)$ for $w \in W$.
My question is: The author has not defined why the set on the left hand side above is an open set. Presumably he meant that if we look in $\Bbb{P}(V)$, the set of all linear varieties of dimension $r -1$ that do intersect a given linear variety of dimension $d-r - 1$ is closed (in the Zariski topology), but why is this so?
AI: One way to think about this is to pass to the frames (so we're looking at a Stiefel variety). Fix a basis $u_{r+1},\dots,u_d$ for $V_W$. The set of frames $v_1,\dots,v_r\in V$ so that $v_1,\dots,v_r,u_{r+1},\dots,u_d$ form a linearly independent set is an open set ($\det\ne 0$) in the space of all $r$-frames. This descends, taking quotients, to an open set in $G(r,d)$.
|
H: A bounded holomorphic function
If $\Omega$ is a region which is dense in $\mathbb{C}$, $f\in H(\Omega)$ and is continuous on $\mathbb{C}$, moreover $f$ is bounded on $\mathbb{C}$, can we claim that $f$ is a constant?
AI: This answer is incorrect.
You can consider the function $g(x):=\sum_{k=0}^{\infty}\frac{x^{2^k}}{2^{k^2}}$. This function, and many examples like it, is going to be analytic on the unit disc, and continuous and even differentiable infinitely many times on the unit circle (but not analytic there!). We can have a biholomorphism between the unit disc and the plane minus the non-negative real line. Composing $g$ with this biholomorphism we get a function analytic in the whole plane minus the non-negative real line, and continuous on the non-negative real line.
|
H: First order linear differential equation, wrong result
I have a differential equation:
$y'= \frac{y}{x} + x^2$
I apply this formula:
$y(x)= e^{A(x)} \int {e^{-A(x)} b(x) dx} $
With $a(x) = \frac{1}{x}$ and $b(x)= x^2$, and $A(x)$ primitive of $a(x)$.
It gives:
$y(x)= e^{log|x|} \int{ e^{-log|x|} x^2 dx }$ = $|x| - \int{ |x| \cdot x^2 dx } $
With $x>0$ it is equal to $x - \frac{x^4}{4} $
But the result is wrong, it is $\frac{1}{2} \cdot x^3 + c \cdot x $
What am I doing wrong? I have this problem solved in the book using a "trick" instead of this formula. I would like to know what I am doing wrong and how this problem can be solved using the formula above, instead of another solution which doesn't follow an ordinary pattern.
AI: The solution is
$$
y(x) = e^{A(x)} \cdot \bigg ( \int e^{-A(x)} b(x) dx + C \bigg )
$$
Then computing you get
$$
y(x) = x \cdot \bigg ( \int x dx + C \bigg ) = \frac{x^3}{2} + C x
$$
Your manipulations with the $\log$ contained many mistakes.
|
H: Finite extension of integrally closed ring again integrally closed
Let $S\subset R$ be a finite ring extension, i.e. $R$ is finitely generated as an $S$-module. Assume that $S$ is integrally closed. Does this imply that also $R$ is integrally closed (in its quotient field)?
AI: Take $ S = \Bbb{Z}$ and take $R = \Bbb{Z}[\sqrt{-3}]$. Then $R$ is f.g. over $S$ (in fact free of rank $2$) but now $R$ is not integrally closed: The quotient field of $R$ is $\Bbb{Q}(\sqrt{-3})$ and $ \frac{1 + \sqrt{-3}}{2}$ is in here, and this is integral over $R$ being a solution of $x^2 - x + 1$. But this element is not in $R$.
|
H: Uses of the incidence matrix of a graph
The incidence matrix of a graph is a way to represent the graph. Why go through the trouble of creating this representation of a graph? In other words what are the applications of the incidence matrix or some interesting properties it reveals about its graph?
AI: There are many advantages, especially if the total number of edges is $|E| = \Omega(|V|^2)$. First of all, worst-case constant time for adding, deleting, and testing edges (adjacency lists/sets might have some additional $\log n$ factors). Second, simplicity: no advanced structures needed, easy to work with, etc. Moreover some algorithms like to store data for each edge (like flows); matrix representation is then very convenient, and sometimes has nice properties like (strictly speaking, the following two concern the adjacency matrix, the most common matrix representation):
if $A$ contains $1$ for edges and $0$ otherwise, then $A^k$ contains the number of walks of length $k$ between all vertices,
if $A$ contains weights (with $\infty$ meaning no edge), then $A^k$ using the min-tropical semiring gives you the lightest walks of length $k$ between all the pairs.
Finally, there is spectral graph theory.
I hope this explains something ;-)
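To illustrate the walk-counting property (a small Python/NumPy sketch; the graph is a made-up example):

import numpy as np

# Adjacency matrix of the path graph 0 - 1 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# (A^2)[i, j] = number of walks of length 2 from i to j.
print(np.linalg.matrix_power(A, 2))
# [[1 0 1]
#  [0 2 0]
#  [1 0 1]]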
|
H: How to fit a formula to three data points?
I need a very basic formula that will be used to determine a CSS line-height based on a provided font pixel size.
So in essence, I need the formula to covert
13 == > 17
15 == > 22
19 == > 27
Been going crazy all morning trying to derive a formula to do this...
AI: Well, three non-collinear points uniquely specify a parabola, so a simple solution would simply be to perform a quadratic regression on the points $(13, 17)$, $(15, 22)$, and $(19, 27)$. This gets you the following quadratic equation:
$\displaystyle-\frac{5}{24}x^2+\frac{25}{3}x-\frac{449}{8}$
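In code (a small Python sketch; note the parabola opens downward, so extrapolating far outside the 13-19 px range will eventually misbehave, and you will likely want to round to whole pixels):

def line_height(font_px: float) -> float:
    """Quadratic fit through (13, 17), (15, 22), (19, 27)."""
    return -5 / 24 * font_px ** 2 + 25 / 3 * font_px - 449 / 8

for px in (13, 15, 19):
    print(px, line_height(px))  # 17.0 22.0 27.0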
|
H: A basis such that $A$ is of a certain type.
Let $A$ be a $2\times 2$ real matrix without real eigenvalues, and let the roots of its characteristic polynomial be $\alpha+i \beta$ and $\alpha - i \beta$. Show that there exists a basis of $\mathbb{R^2}$ such that
$A=\begin{pmatrix} \alpha & \beta \\ - \beta & \alpha \end{pmatrix}$.
I've tried to use the fact that those are its eigenvalues, when considering it a complex matrix, and try to construct a basis to $\mathbb{R^2}$ from the eigenvectors somehow, but I didn't achieve anything.
AI: Hint: Suppose that $x$ is a complex eigenvector of $A$ corresponding to the eigenvalue $\alpha+i\beta$ and note that $$A\overline x=\overline{Ax}=\overline{(\alpha+i\beta)x}=(\alpha-i\beta)\overline x.$$
|
H: How does $a^2 + b^2 = c^2$ work with ‘steps’?
We all know that $a^2+b^2=c^2$ in a right-angled triangle, and therefore, that $c<a+b$, so that walking along the red line would be shorter than using the two black lines to get from top left to bottom right in the following graphic:
Now, let's assume that the direct way using the red line is blocked, but instead, we can use the green way in the following picture:
Obviously, the green way isn't any shorter than the black one, it's just $a/2+b/2+a/2+b/2 = a+b$. Now, we can divide the green path again, just like the black path, and get to the purple path. Dividing this one in two halfs again, we get the yellow path:
Now obviously, the yellow path is still as long as the black path from the beginning, it's just $8*a/8+8*b/8=a+b$. But if we do this segmentation again and again, we approximate the red line - without making the way any shorter. Why is this so?
AI: Essentially,
it is because the distance of the
stepped curve from the line
does not get small compared
to the length of the steps.
An example where the limit
is properly found
is dividing a circle
into $n$ equal parts
and computing the sum of the
line segments
connecting the endpoints
of the arcs.
This $does$ converge
to the length of the circle
because the height of each arc
gets arbitrarily small
compared to the length of each arc
as $n$ gets large.
|
H: Testing polynomial equivalence
Suppose I have two polynomials, P(x) and Q(x), of the same degree and with the same leading coefficient. How can I test if the two are equivalent in the sense that there exists some $k$ with $P(x+k)=Q(x)$? P and Q are in $\mathbb{Z}[x].$
AI: The condition of $\mathbb{Z}[x]$ isn't required.
Suppose we have two polynomials $P(x)$ and $Q(x)$, whose coefficients of $x^i$ are $P_i$ and $Q_i$ respectively. If they are equivalent in the sense that $P(x+k) = Q(x)$, then
Their degree must be the same, which we denote as $n$.
Their leading coefficient must be the same, which we denote as $a_n=P_n=Q_n$
$P(x+k) - a_n(x+k)^n = Q(x) - a_n x^n$
By considering the coefficient of $x^{n-1}$ in the last equation, this tells us that $P_{n-1} + nk a_n = Q_{n-1}$.
This allows you to calculate $k$ in terms of the various knowns, in which case you can just substitute in and check if we have equivalence.
We can simply check that $Q(i) = P(i+k)$ for $n+1$ distinct values of $i$, which tells us that they agree as degree $n$ polynomials.
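A small SymPy sketch of this test (my own; the helper name is made up):

import sympy as sp

x = sp.symbols('x')

def shift_equivalent(P, Q):
    # Assumes, as above, equal degrees and equal leading coefficients.
    p, q = sp.Poly(P, x), sp.Poly(Q, x)
    n, a_n = p.degree(), p.LC()
    # Coefficient of x^(n-1):  P_{n-1} + n k a_n = Q_{n-1}
    k = (q.nth(n - 1) - p.nth(n - 1)) / (n * a_n)
    return k if sp.expand(P.subs(x, x + k) - Q) == 0 else None

print(shift_equivalent(x**2, x**2 + 2*x + 1))  # 1, since (x+1)^2 = x^2 + 2x + 1
print(shift_equivalent(x**2, x**2 + 1))        # None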
|
H: What does δA mean in differentiation?
To be more specific, I met this when doing analytical mechanics involving the principle of least action:
AI: The easiest way to understand this notation is as follows:
A variation is a transformation of the independent variables in a problem; it says that for each $\epsilon$ we can transform $x_i$ to $x_i(\epsilon)=x_i + \epsilon\,\delta x_i + O(\epsilon^2)$ for some $\delta x_i$. You can think of it as a path in $\mathbf x$ space parametrized by $\epsilon$.
Notice $$\left[\frac{\mathrm d x_i}{\mathrm d \epsilon}\right]_{\epsilon=0} = \delta x_i$$Thus the vector $\mathbf{\delta x}$ is a tangent to that path.
In general, we define $$\delta F(x_1,\cdots,x_n) = \delta F(x_1(\epsilon),\cdots,x_n(\epsilon)) = \left[\frac{\mathrm d F}{\mathrm d \epsilon}\right]_{\epsilon=0}$$Hence $\epsilon\times \delta F$ is the small change in $F$ under this variation.
Using the chain rule $$\delta F = \left[\frac{\mathrm d F}{\mathrm d \epsilon}\right]_{\epsilon=0} = \sum_i\frac{\partial F}{\partial x_i}\left[\frac{\mathrm d x_i}{\mathrm d \epsilon}\right]_0 = \sum_i\frac{\partial F}{\partial x_i}\delta x_i$$
This can be formalized using differential forms, but this above is key to a more intuitive understanding. The message is that $\delta F$ is the small change in $F$ due to the small change in $\delta \mathbf x$ in $\mathbf x$.
For completeness, we can very quickly sketch how one moves towards differential forms.
Idea: What derivatives $F'(x)$ are for is telling you how to figure out what a small change in $F$ is given a small change in $x$. Define $\mathrm d F$ to be something mapping a change in $x$ to the change in $F$ to first order in the Taylor expansion. $\mathrm d F(U) = U \times \partial_x F$.
In more arbitrary settings, you are allowed to change $\mathbf x$, thinking of it as a set of coordinates, by moving along any path in $\mathbf x$ space. (Here, we're thinking of allowing any path. In general, we might be constrained to manifolds like the unit sphere.) Then a small change along a path is determined by derivatives along that path; this is simply the tangent vector dotted with the gradient, $\mathbf U \cdot \nabla F$. In analogy with the above, we define
$$\mathrm d F(\mathbf U) = \text{derivative of $F$ along curve tangent to $\mathbf U$} = U_i \cdot \partial F/\partial x_i$$
using the chain rule for the last step.
This is essentially the idea of (exact) differential 1-forms: they take vectors to the derivatives of functions.
|
H: Using trig substitution to evaluate $\int \frac{dt}{( t^2 + 9)^2}$
$$\int \frac{\mathrm{d}t}{( t^2 + 9)^2} = \frac {1}{81} \int \frac{\mathrm{d}t}{\left( \frac{t^2}{9} + 1\right)^2}$$
$t = 3\tan\theta\;\implies \; dt = 3 \sec^2 \theta \, \mathrm{d}\theta$
$$\frac {1}{81} \int \frac{3\sec^2\theta \mathrm{ d}\theta}{ \sec^4\theta} = \frac {1}{27} \int \frac{ \mathrm{ d}\theta}{ \sec^2\theta} = \dfrac 1{27}\int \cos^2 \theta\mathrm{ d}\theta $$
$$ =\frac 1{27}\left( \frac{1}{2} \theta + 2(\cos\theta \sin\theta)\right) + C$$
$\arctan \frac{t}{3} = \theta \;\implies$
$$\frac{1}{27}\left(\frac{1}{2} \arctan \frac{t}{3} + 2 \left(\frac{\sqrt{9 - x^2}}{3} \frac{t}{3}\right)\right) + C$$
This is a mess, and it is also the wrong answer.
I have done it four times, where am I going wrong?
AI: The integral $$ \int \cos^2 \theta \, \mathrm d\theta=\frac{\theta}{2}+\frac{1}{2} \sin \theta \cos \theta+C. $$
(You had the coefficient wrong)
|
H: The indefinite integral $\int{\frac{\mathrm dx}{\sqrt{1+x^2}}}$
I need to solve this integral:
$$\int{\frac{\mathrm dx}{\sqrt{1+x^2}}}$$
First I thought it was easy, so I tried integration by parts with $g(x)=x$ and $g'(x)=1$:
$$\int{ \frac{x^2}{(1+x^2)^{\frac{3}{2}} }}\,\mathrm dx $$
But I've made it even more complicated than before, and if I want to solve it again by parts I'll have $g(x)= \frac{x^3}{3}$ , and I will never end integrating.
How should I solve it?
Edit
Trying this way: $x= tg(t)$, then I get:
$$ \int{ \frac{1+tg^2(t)}{ \sqrt{1+ tg^2(t)} } dt}= \int{ \sqrt{ 1 + tg^2(t) } dt } $$
But it doesn't remind me anything, I still can't solve it.
AI: Your integral can be solved by the substitution
$x=\sinh t$ and $dx=\cosh t\, dt$, which gives
$$\int \frac{\cosh t}{\cosh t} \, dt = \int 1\, dt=t=\sinh^{-1} x = \ln (x+\sqrt{x^2+1})+C,$$
since $\sqrt{1+x^2}=\sqrt{1+\sinh^2 t}=\cosh t$.
|
H: Quadratic integer Programming
Would anyone mind helping me solve this problem
$$
\min\space f(x) = \frac12 x^\mathrm TQx + bx + c \qquad \text{s.t. } \sum_i x_i=\lambda
$$
where $x$ is a vector whose entries are positive integers and $Q$ is positive definite.
AI: These kinds of (mixed) integer quadratic programming (MIQP) problems are typically solved by branch and bound algorithms. One widely used commercial code is IBM's CPLEX. You might also be interested in Sven Leyffer's MIQPBB code.
More general codes for mixed integer nonlinear programming (MINLP) can also be used for this problem. See for example the BonMin code.
|
H: Adjoint matrix eigenvalues and eigenvectors
I just wanted to make sure that the following statement is true:
Let $A$ be a normal matrix with eigenvalues $\lambda_1,...,\lambda_n$ and eigenvectors $v_1,...,v_n$. Then $A^*$ has the same eigenvectors with eigenvalues $\bar{\lambda_1},..,\bar{\lambda_n}$, correct?
AI: Yes. If $Av=\lambda v$ then $(A-\lambda I_n)v=0$ whence, using the inner product $(x,y)=x^*y$:
$$
0=(v,(A^*-\overline{\lambda}I_n)\underbrace{(A-\lambda I_n)v}_{=0})\stackrel{\mbox{normality}}{=}(v,(A-\lambda I_n)(A^*-\overline{\lambda}I_n)v)
$$
$$
=((A^*-\overline{\lambda}I_n)v,(A^*-\overline{\lambda}I_n)v)=\|(A^*-\overline{\lambda}I_n)v\|^2.
$$
So $(A^*-\overline{\lambda}I_n)v=0$, i.e. $A^*v=\overline{\lambda}v$.
I prefer to see it as below, though.
For any matrix, $\ker B=\ker B^*B$. The inclusion $\subseteq $ is clear. Then we usually use that $(x,B^*Bx)=(Bx,Bx)=\|Bx\|^2$ from which the other inclusion follows.
Hence, for $B$ normal, $\ker B=(\ker B^*B=\ker BB^*=)\ker B^*$.
Just apply this to each normal matrix $B=A-\lambda_jI_n$.
|
H: Proving the function f , which has zero first order parital derivatives, is constant
Let $f: \Bbb R^{2} \to \Bbb R$ be a function whose first-order partial derivatives are zero,
i.e. $f_x(x,y) = f_y(x,y) = 0$ for all $(x,y)$.
How can I prove that $f(x,y)$ is constant for all $(x,y)$
AI: If $\operatorname{f}_x \equiv 0$ then $\operatorname{f}$ is independent of $x$. (Changing $x$ does not change $\operatorname{f}$.)
If $\operatorname{f}_y \equiv 0$ then $\operatorname{f}$ is independent of $y$. (Changing $y$ does not change $\operatorname{f}$.)
If $\operatorname{f}_x \equiv 0$ and $\operatorname{f}_y \equiv 0$ then $\operatorname{f}$ is independent of both $x$ and $y$.
If $\operatorname{f}$ is independent of both $x$ and $y$ then it must be constant. (To make "independent" precise: by the Mean Value Theorem, $f(x_1,y)-f(x_0,y)=f_x(\xi,y)(x_1-x_0)=0$, and similarly in $y$; since any two points of $\Bbb R^2$ can be joined by a horizontal and a vertical segment, $f$ takes the same value everywhere.)
|
H: Elementary set proof
On a statistics trial exam I encountered the following proof I was supposed to give but I have no idea how to start with this proof and solve it:
$P(A\cap B)$ $\geq$ $1 - P(A') - P(B')$ where $A'$ is the complement of A
If anyone could help me, that would be great!
AI: First of all, split up the sample space: $S=(A\cap B) \cup A^C \cup B^C$, since $A^C\cup B^C=(A\cap B)^C$. By inclusion-exclusion,
$$P(S)=P(A\cap B) + P(A^C) + P(B^C) - P(A^C \cap B^C),$$
so
$$P(A\cap B) = \underbrace{P(S)}_{=1} - P(A^C) - P(B^C) + P(A^C \cap B^C).$$
Now since $P(A^C \cap B^C) \geq 0$,
$$P(A\cap B) \geq 1 - P(A^C) - P(B^C).$$
|
H: Mix GP and AP question
I appreciate your good help:
Two consecutive terms of a GP are the 2nd, 4th and 7th terms of an AP respectively. Find the common ratio of the GP.
AI: I’m going to assume that you meant that three consecutive terms of the geometric progression are the second, fourth, and seventh terms, respectively, of an arithmetic progression.
Let $r$ be the common ratio of the geometric progression. Let $a$ be the first term of the arithmetic progression, and let $d$ be its difference. Finally, let $b$ be the first of the three terms of the geometric progression. Then $b=a+d$, $br=a+3d$, and $br^2=a+6d$. Subtracting the first of these equations from the second, we find that $b(r-1)=2d$. Subtracting the second from the third yields the equation $br(r-1)=3d$.
From $b(r-1)=2d$ we get $r-1=\dfrac{2d}b$. Substituting this into $br(r-1)=3d$, we get
$$br\left(\frac{2d}b\right)=3d\;,$$
which is easily solved for $r$.
Note that I did make a couple of silent assumptions about $b$ and $d$ in these calculations; I’ll leave it to you to work out what they are, but if you have a question, just leave a comment.
|
H: For non-negative $f$ such that $\int_1^\infty |f'(t)|dt < \infty$, $\sum f(k)$ and $\int_1^\infty f(t)dt$ converge or diverge together
Suppose that $f\in C^1([1, \infty))$, $f>0$, and $\int_{1}^\infty |f'(t)|dt < \infty$. I want to show that $\sum_1^\infty f(k)$ and $\int_1^\infty f(t)dt$ are either both convergent or both divergent.
One approach might be to try to show that $\lim_{n\to \infty}(\sum_1^n f(k) - \int_1^n f(t)dt)< \infty$, which is true under the additional hypothesis that $f$ is monotonically decreasing. But I haven't gotten anywhere in proving this, and I suspect it isn't true.
One thing I have come up with is that $\forall \epsilon >0$, $f$ is eventually Lipschitz with Lipschitz constant $\epsilon$, which we can see as follows: take $x_{\epsilon}$ such that $\forall x > x_{\epsilon}$ $|f'(x)|<\epsilon$. Then $\forall x,y > x_{\epsilon}$ we have $|f(x) - f(y)| = |\int_{y}^{x}f'(t)dt| \leq \int_x^y|f'(t)|dt \leq |y-x|\varepsilon$. But I'm having trouble leveraging this information into a solution. Any ideas?
AI: Hint: Consider
$$
\sum_{k=1}^\infty\color{#C00000}{\int_k^{k+1}|f(x)-f(k)|\,\mathrm{d}x}
$$
Note that
$$
\begin{align}
\left|\int_k^{k+1}f(x)\,\mathrm{d}x-f(k)\right|
&=\left|\int_k^{k+1}(f(x)-f(k))\,\mathrm{d}x\right|\\
&\le\color{#C00000}{\int_k^{k+1}|f(x)-f(k)|\,\mathrm{d}x}\\
&=\int_k^{k+1}\left|\int_k^xf'(t)\,\mathrm{d}t\right|\,\mathrm{d}x\\
&\le\int_k^{k+1}\int_k^x|f'(t)|\,\mathrm{d}t\,\mathrm{d}x\\
&=\int_k^{k+1}\int_t^{k+1}|f'(t)|\,\mathrm{d}x\,\mathrm{d}t\\
&=\int_k^{k+1}(k+1-t)|f'(t)|\,\mathrm{d}t\\
&\le\int_k^{k+1}|f'(t)|\,\mathrm{d}t
\end{align}
$$
|
H: If $\sum a_k^2 /k$ converges, then $1/N \sum_1^{N}a_k \to 0$
I want to show that if $\sum a_k^2 / k$ converges, then $1/N \sum_{1}^Na_k \to 0$.
Now, if $a_n \to 0$, then the result follows. But of course $a_n\to 0$ is not a necessary condition for $\sum a_{n}^2/n$ to converge. We might want to use Cauchy-Schwarz to say that $a_k^2/k \leq a_k^4 + 1/k^2$, but of course the inequality is pointing the wrong way. Any ideas?
AI: Split the sum at $k = \sqrt{N}$.
If $\sqrt{N} < k \leq N$ then by Cauchy-Schwarz
$$
\sum_{\sqrt{N} < k \leq N} a_k \leq N^{1/2} \cdot \bigg ( \sum_{\sqrt{N} < k \leq N}
a_k^2 \bigg )^{1/2}
$$
And notice that
$$
\sum_{\sqrt{N} < k \leq N} a_k^2 \leq N \sum_{N > k > \sqrt{N}} \frac{a_k^2}{k}
= o(N)
$$
because $\sum_{k > \sqrt{N}} a_k^2 / k = o(1)$.
If $k < \sqrt{N}$ then
$$
\sum_{k \leq \sqrt{N}} a_k \leq \bigg ( \sum_{k \leq \sqrt{N}} k \bigg )^{1/2}
\cdot \bigg ( \sum_{k} \frac{a_k^2}{k} \bigg )^{1/2} \leq C N^{1/2}
$$
with $C = (\sum a_k^2 / k)^{1/2}$.
Combining the two sums we conclude that
$$
\sum_{k \leq N} a_k \leq C N^{1/2} + o(N) = o(N)
$$
This is the desired claim. Note that we could have split at any $k = f(N)$ with $f$ going to infinity.
|
H: Proving a complex sum equals factorial
I have just stumbled across the equality:
$$
\sum_{j=0}^{n}(-1) ^ {n + j} j ^ {n} \binom{n}{j} = n!
$$
How would I go about proving this equality?
Also, what is the left hand side equal to if the power of j is increased:
$$
\sum_{j=0}^{n}(-1) ^ {n + j} j ^ {n+k} \binom{n}{j} = ?
$$
where k=1,2,3...
Thanks!
AI: It’s an instance of the inclusion-exclusion principle. First let’s rewrite the expression. Let $k=n-j$:
$$\sum_{j=0}^n(-1)^{n+j}j^n\binom{n}j=\sum_{k=0}^n(-1)^k(n-k)^n\binom{n}{n-k}=\sum_{k=0}^n(-1)^k(n-k)^n\binom{n}k\;.$$
Now imagine that you have $n$ toys to distribute to $n$ children, and you want each child to get a toy; clearly you can distribute the toys in $n!$ different ways. On the other hand, you can consider the number of ‘bad’ distributions, in which some child gets no toy. For $k=1,\dots,n$, $\binom{n}k(n-k)^n$ is the number of ways to choose $k$ of the children to receive no toy and to distribute the $n$ toys to the other $n-k$ children. Of course some of the $n-k$ potentially lucky children may end up with no toy; that’s why you need an inclusion-exclusion argument. It tells you that there are
$$\sum_{k=1}^n(-1)^{k-1}\binom{n}k(n-k)^n$$
bad distributions, and since there are $n^n=(-1)^0\binom{n}n(n-0)^n$ distributions altogether, there must be
$$n^n-\sum_{k=1}^n(-1)^{k-1}\binom{n}k(n-k)^n=\sum_{k=0}^n(-1)^k\binom{n}k(n-k)^n$$
good ones.
This is a special case of the explicit formula for the Stirling numbers of the second kind,
$${n\brace k}=\frac1{k!}\sum_{j=0}^k(-1)^{k-j}\binom{k}jj^n\;.$$
Note that with a change of variables this can be rewritten as
$$\frac1{n!}\sum_{j=0}^n(-1)^{n-j}\binom{n}jj^{n+k}\;,$$
and $(-1)^{n-j}=(-1)^{n+j}$; this should point you in the right direction for the second part of the question.
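Both parts can be checked numerically (a small Python sketch):

from math import comb

def s(n: int, k: int = 0) -> int:
    return sum((-1) ** (n + j) * j ** (n + k) * comb(n, j)
               for j in range(n + 1))

print([s(n) for n in range(1, 7)])  # [1, 2, 6, 24, 120, 720], i.e. n!
print(s(3, 1))  # 36 = 3! * S(4,3), a Stirling number of the second kind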
|
H: Formal Definition of Yang Mills Lagrangian
I have a question regarding the Lagrangian in non-abelian gauge theory. Say $G$ is the gauge group and $\mathfrak g$ the associated Lie algebra. The Lagrangian is often written as
$$
\mathcal L=-\frac {1}{4} \text{tr} (F_{\mu \nu} F^{\mu \nu}) + \overline \psi (i \tilde D-m)\psi
$$
with
$$
\tilde D=\gamma^\mu D_\mu \qquad D_\mu=\partial_\mu-igA_\mu
$$
As far as I understood, in this equation $\psi$ is an element of a representation $V$ of $\mathfrak g$ and $A_\mu \in \text{End}(V)$, right? What's puzzling me now is that this equation has to be Lorentz-invariant as well, so the $\psi$'s are still Dirac spinors, that is, they are elements of the fundamental representation $\Delta$ of the Clifford algebra $\text{Cliff}(1,3)$. How should I interpret this ambiguity, and in particular how should I calculate $\gamma^\mu A_\mu \psi$ in local coordinates?
AI: The field $\psi$ is not one Dirac spinor but a collection of those: it belongs to tensor product representation $V\otimes\Delta$, and accordingly has two indices $\alpha$, $a$. In components (I guess this is what you mean by local coordinates) one has
$$(\gamma^{\mu}A_{\mu}\psi)_a^{\alpha}=\gamma_{ab}^{\mu}A_{\mu}^{\alpha\beta}\psi^b_{\beta},$$
where $a,b$ are spinor labels and $\alpha,\beta$ are gauge group representation labels.
For example, when $V$ is the fundamental representation of $SU(3)$, the multicomponent field $\psi$ contains three Dirac spinors corresponding to different quark colors.
|
H: Solve $x\sqrt{10} = \prod\limits_{k = 1}^{90} \sin(k), x\in \mathbb Q$.
Can someone help me with this question? I've found a solution, but it's not a very nice one: I used the relation $\sin(2\theta) = 2\sin(\theta)\cos(\theta)$ six times. There's got to be a better way.
$x\sqrt{10} = \prod\limits_{k = 1}^{90} \sin(k)$, k in degrees.
AI: We use this, which is somewhat complicated.
Let $S$ be the product $\prod_{k=1}^{90} \sin k^\circ $. Then $S^2 = \prod_{k=1}^{179} \sin k^\circ = \frac{ 180} { 2^{179}}$
Hence $S = \sqrt{ 10} \frac{3}{2^{89} }$.
I believe your method of using $\sin 2\theta$ repeatedly is better, in part because the proof of the quoted theorem is complicated.
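The product is small but still representable in double precision, so a direct numerical check is possible (a Python sketch; math.prod needs Python 3.8+):

import math

S = math.prod(math.sin(math.radians(k)) for k in range(1, 91))
print(S, 3 * math.sqrt(10) / 2 ** 89)  # both approximately 1.53e-26

so $x = 3/2^{89}$.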
|
H: Separability requirement for Measurable distance?
I was looking up Egorov's Theorem on Wikipedia. https://en.wikipedia.org/wiki/Egorov%27s_theorem
One of the conditions is that the functions attain values in a separable metric space M, and in the "Discussion of assumptions" section following the theorem, the following is stated:
"The separability of the metric space is needed to make sure that for M-valued, measurable functions f and g, the distance d(f(x), g(x)) is again a measurable real-valued function of x."
Can someone explain how to prove the boldfaced statement?
[EDIT]To be precise, I want to know how to prove "If the metric space M is separable, and the functions f and g are measurable functions attaining values in M, then the distance function d(f(x),g(x)) is measurable."
Thanks!
AI: A more elegant and intuitive way (that I've come up with upon finishing the direct proof, so I'll leave it anyway...) to show it is as follows: if $M$ is a separable metric space, it is second-countable.
From that it follows that open sets are Borel in $M^2$ (with respect to the square of Borel algebra of $M$; this is because we don't need uncountable unions to form open sets).
In particular, continuous functions from $M^2$ to $\bf R$, such as $d$, are Borel with respect to product algebra. On the other hand, $x\mapsto (f(x),g(x))$ is obviously measurable with respect to the product Borel algebra, so the composition of this and $d$ is measurable.
|
H: Simple algebra loss calculation
Kylie bought an item for $x$ and sold it for \$10.56. If Kylie incurred a loss of $x$ percent, find $x$.
The answer is apparently "12 or 88" but I cannot see how they got there. I have tried
$$\frac{10.56-x}{x}=x$$
But the result is nowhere near the given answer.
AI: The loss is $x-10.56$ (cost minus selling price), and a loss of "$x$ percent" means the fraction $\frac{x}{100}$ of the cost, so you want to solve the equation $$\frac{x-10.56}{x}=\frac{x}{100},$$
and cross-multiplying and bringing all the terms to one side gives $$x^2-100x+1056=0,$$ which factors as $(x-12)(x-88)=0$, so $x=12$ or $x=88$.
|
H: Statements equivalent to $A\subset B$
Each of the following statements is equivalent to $A\subset B$:
(1) $A \cap B = A$
(2) $A \cup B = B$
(3) $B^{c} \subset A^{c}$
(4) $A \cap B^{c}= \emptyset$
(5) $B \cup A^{c}= U$
I only understand (1) and (4). Why is (5) $U$ and not $B$?
Can you explain/elaborate on the other ones, perhaps with a proof? Is $\subset$ used in the main statement as a subset or as a proper subset?
Thanks.
AI: Most of them will become clearer if you draw Venn diagrams to illustrate them.
For (2), note that it’s always true that $B\subseteq A\cup B$, so what you really need to show is that $A\subseteq B$ if and only if $A\cup B\subseteq B$. Certainly if $A\subseteq B$, then $A\cup B\subseteq B$, since it’s trivially true that $B\subseteq B$. On the other hand, $A\subseteq A\cup B$, so if $A\cup B\subseteq B$, then $A\subseteq B$.
For (3), suppose that $A\subseteq B$. If $x\in B^c$, then $x\notin B$, so certainly $x\notin A$; but that means that $x\in A^c$, and we’ve shown that $B^c\subseteq A^c$. Now suppose that $B^c\subseteq A^c$. By what we just proved, we know that $(A^c)^c\subseteq(B^c)^c$. But $(A^c)^c=A$ and $(B^c)^c=B$, so $A\subseteq B$, as desired.
(5) is really very clear if you draw a Venn diagram, but you can argue as follows. Suppose that $A\subseteq B$. Let $x\in U$ be arbitrary. If $x\in B$, then certainly $x\in B\cup A^c$. If $x\notin B$, then $x\notin A$, so $x\in A^c$, and again we see that $x\in B\cup A^c$. Thus, everything in $U$ is in $B\cup A^c$, and since $B\cup A^c\subseteq U$, we must have $B\cup A^c=U$. Conversely, suppose that $B\cup A^c=U$, and let $x\in A$. Then $x\notin A^c$, but certainly $x\in U=B\cup A^c$, so it can only be that $x\in B$. This shows that $A\subseteq B$.
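A concrete instance may help fix intuition (the example values are mine, not from the original): take $U=\{1,2,3\}$, $A=\{1\}$, $B=\{1,2\}$, so that $A\subseteq B$. Then $A\cap B=\{1\}=A$, $A\cup B=\{1,2\}=B$, $B^c=\{3\}\subseteq\{2,3\}=A^c$, $A\cap B^c=\emptyset$, and $B\cup A^c=\{1,2\}\cup\{2,3\}=\{1,2,3\}=U$, exactly as statements (1)-(5) predict.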
|
H: Line Integral where C is Line Segment
Evaluate the line integral
$$ \int_C xe^y \,\mathrm ds, \;\text{where $C$ is the line segment from $(-1,2)$ to $(1,1)$}$$
(note: $\int_C$ rather than $\oint_C$, since the segment is not a closed curve).
My answer was $\sqrt{5}e(e-3)$. Did I get this right?
Thanks!
AI: Let's set up the computation carefully and check:
If you let $x=-1+2t$ and $y=2-t$ for $0\le t\le 1$,
I think you get the integral $$\int_0^1 (-1+2t)e^{2-t}\sqrt5\,dt,$$
and then you can integrate by parts.
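Carrying the integration by parts through (a step added here for completeness, since it was left to the reader above): taking $u = 2t-1$ and $dv = e^{2-t}\,dt$, so $v = -e^{2-t}$,
$$\sqrt5\int_0^1 (2t-1)e^{2-t}\,dt = \sqrt5\left(\Big[-(2t-1)e^{2-t}\Big]_0^1 + 2\int_0^1 e^{2-t}\,dt\right) = \sqrt5\big((-e-e^2)+(2e^2-2e)\big) = \sqrt5\,e(e-3).$$
This in fact agrees with the value proposed in the question; the negative sign is fine, since the integrand $xe^y$ changes sign along $C$.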
|
H: Use of Multiple "if and only if" statements
I apologize in advance if this question is too basic to warrant a post. I just ran into the following question:
Let $f: A \to B$. If $A$ and $B$ are finite sets with the same number of elements, then $f: A \to B$ is bijective if and only if $f$ is injective if and only if $f$ is surjective.
The use of two "if and only if" statements has confused me; I am not exactly sure what I am trying to prove. I understand the function concepts being presented in the question, but I'm just not sure what I need to show. Therefore, I ask not for hints toward the solution of the question, but only the meaning of the question.
Thanks!
AI: The question is precisely equivalent to the following question:
Let $f:A\to B$, where $A$ and $B$ are finite sets with the same number of elements. Show that the following three statements are equivalent:
$f:A\to B$ is bijective.
$f:A\to B$ is injective.
$f:A\to B$ is surjective.
In other words, given the hypotheses, each of the three statements implies and is implied by each of the others. In practice it suffices to prove a cycle of implications (bijective $\Rightarrow$ injective $\Rightarrow$ surjective $\Rightarrow$ bijective), since implications compose and every statement then follows from every other.
|
H: Bounded Derivative of a differentiable and Lipschitz function
Let $E, F$ be normed spaces and $f:A\subseteq E\to F$ with $A$ an open set; suppose that $f$ is differentiable at $a\in A$ and that $f$ is locally Lipschitz with constant $k>0$ at $a$. Show that $\|Df(a)\|\leq k$.
Note: $f$ is differentiable at $a$ iff there exists $Df(a)\in \mathcal{L}(E, F)$ such that $$\lim_{h\to 0}\frac{\|f(a+h)-f(a)-Df(a)h\|_F}{\|h\|_E}=0$$
AI: If $f$ is locally Lipschitz with constant $k > 0$ about the point $a \in A$, then we have a neighborhood $U \subseteq A$ on which $$\|f(x) - f(y)\|_F \le k \|x-y\|_E \quad\text{for all } x,y\in U.$$
Notice also that, by the reverse triangle inequality, $$\frac{\big|\|f(a+h) - f(a)\|_F - \|Df(a)h\|_F\big|}{\|h\|_E} \le \frac{\|f(a+h) - f(a) - Df(a)h\|_F}{\|h\|_E}$$ for all $h$. By the definition of differentiability you have provided, the right-hand side tends to $0$ as $h \to 0$, so for every $\varepsilon > 0$ and all sufficiently small $h$ we have $$\frac{\|Df(a)h\|_F}{\|h\|_E} \le \frac{\|f(a+h) - f(a)\|_F}{\|h\|_E} + \varepsilon \le k + \varepsilon.$$
The second inequality is the Lipschitz property, since for small enough $h$ the point $a+h$ lies in $U$. This gives $$\left\| Df(a) \frac{h}{\|h\|} \right\| \le k + \varepsilon$$ for all $h$ sufficiently small. Since $Df(a)$ is linear, only the direction of $h$ matters, so the bound holds for every unit vector; letting $\varepsilon \to 0$ shows that the operator norm of $Df(a)$ is at most $k$.
|
H: Differential equation of a mass on a spring
I have the following differential equation which is motivated by the dynamics of a mass on a spring:
\begin{equation}
my'' - ky = 0
\end{equation}
I split this into a system of equations by letting $x_1=y$ and $x_2=y'$
\begin{equation}
x' = \begin{pmatrix} 0&1\\\dfrac{k}{m}& 0\end{pmatrix}x
\end{equation}
Find an Eigenvalue:
\begin{align}
\det(A-\lambda I) =& 0 \\
\det\begin{pmatrix} -\lambda&1\\\dfrac{k}{m}&-\lambda\end{pmatrix} =& 0 \\
\lambda^2 - \dfrac{k}{m} =& 0 \\
\lambda = \pm\sqrt{\dfrac{k}{m}}
\end{align}
Find the matching eigenvector:
\begin{align}
(A-\lambda I)x =& 0 \\
\begin{pmatrix} -\sqrt{\frac{k}{m}}&1\\\dfrac{k}{m}&-\sqrt{\frac{k}{m}}\end{pmatrix}x =& 0
\end{align}
This matrix row-reduces to:
\begin{equation}
\begin{pmatrix}-\sqrt{\frac{k}{m}}&1\\0&0\end{pmatrix}x = 0
\end{equation}
$x_2$ is free so let $x_2$ = 1. We have:
\begin{align}
-\sqrt{\frac{k}{m}}x_1 + 1 =& 0 \\
x_1 =& \dfrac{1}{\sqrt{\dfrac{k}{m}}}
\end{align}
So the corresponding eigenvector is
$\begin{pmatrix}\dfrac{1}{\sqrt{\dfrac{k}{m}}}\\1\end{pmatrix}$.
A solution to the vector differential equation $x'=Ax$ is $x = e^{\lambda t}x_0$, where $\lambda$ is an eigenvalue and $x_0$ is a corresponding eigenvector.
Our solution is then
\begin{equation}
e^{\sqrt{\frac{k}{m}} t}\begin{pmatrix}\dfrac{1}{\sqrt{\dfrac{k}{m}}}\\1\end{pmatrix}
\end{equation}
I am just using a technique I learned in a differential equations book. This doesn't make sense from a physics standpoint, where solutions should be periodic and thus expressible in terms of sines and cosines. Perhaps I did not set up the system correctly, because if I had gotten imaginary eigenvalues then I could have had sines and cosines in my solution.
I want to know what I did wrong; let me know what you think.
AI: Your computation is consistent with the equation you wrote down; the problem is the sign. Hooke's law gives a restoring force, so the equation of motion is $my''+ky=0$ (equivalently, the lower-left entry of your matrix should be $-\frac{k}{m}$). The characteristic equation is then $$mr^2+k=0, ~~m>0,\ k>0,$$ so $r=\pm\sqrt{\frac{k}{m}}\,i$, which leads us to $$y(t)=C_1\exp\left(\sqrt{\frac{k}{m}}\,it\right)+C_2\exp\left(-\sqrt{\frac{k}{m}}\,it\right)$$ Now Euler's formula $e^{i\theta}=\cos(\theta)+i\sin(\theta)$ can help us convert this to the familiar oscillatory form: $$y(t)=C_1\cos\left(\sqrt{\frac{k}{m}}~t\right)+C_2\sin\left(\sqrt{\frac{k}{m}}~t\right)$$
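A quick numerical check (a minimal Python sketch; the values $m=1$, $k=4$ are illustrative assumptions, not from the original):

```python
import numpy as np

m, k = 1.0, 4.0

# With the restoring-force sign, the companion matrix has -k/m in the corner
# and purely imaginary eigenvalues +/- i*sqrt(k/m) -> oscillatory solutions.
A = np.array([[0.0, 1.0],
              [-k / m, 0.0]])
print(np.linalg.eigvals(A))  # [0.+2.j 0.-2.j]

# The sign used in the question gives real eigenvalues +/- sqrt(k/m),
# i.e. exponential growth/decay instead of oscillation.
B = np.array([[0.0, 1.0],
              [k / m, 0.0]])
print(np.linalg.eigvals(B))  # [ 2. -2.]
```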
|
H: Show that $|\lambda|\leq 1$ for each eigenvalue $\lambda$ of a partial isometry
Let $V$ be an inner product space. A linear map $U:V\rightarrow V$ is a partial isometry if there is a subspace $M\subset V$ such that $\parallel Ux\parallel =\parallel x \parallel$ for all $x\in M$ and $\parallel Ux\parallel =0$ for all $x\in M^{\perp}$.
Prove that if $\lambda$ is an eigenvalue of a partial isometry, then $|\lambda|\leq 1$.
This problem was taken from Halmos's Finite Dimensional Vector Spaces (sec. 76).
Thanks.
AI: Hint: what can you say about $\|U\|$? That's usually the first thing one should look at when trying to estimate a spectral radius.
Recall $V=M\oplus M^\perp$ for any subspace $M$ of a finite-dimensional inner-product space (or any closed subspace of a Hilbert space, if you care about generalizing this fact). Then every $v\in V$ can be written as $v=x+y$ with $x\in M$ and $y\in M^\perp$. Hence $$\|v\|^2=\|x\|^2+\|y\|^2,\qquad Uv=Ux+Uy=Ux\quad(\text{since }\|Uy\|=0\text{ forces }Uy=0),$$ so $$\|Uv\|=\|Ux\|=\|x\|\leq \|v\|.$$ Now just apply this to any eigenpair $(\lambda,v)$.
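Spelling out that last step (added for completeness): if $Uv=\lambda v$ with $v\neq 0$, then
$$|\lambda|\,\|v\|=\|\lambda v\|=\|Uv\|\le\|v\|,$$
and dividing by $\|v\|>0$ gives $|\lambda|\le 1$.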
|