H: In how many ways can you bet thirteen different football matches to get exactly 12 right
In how many ways can you bet thirteen different football matches to get exactly 12 wins?
My attempt
First I notice that there is only $1$ way to bet $13$ different matches to get $13$ wins.
Second, I notice that there are $3^{13}$ ways to bet $13$ different football matches.
Now, I rephrase the question: In how many ways can you bet thirteen different football matches to get exactly $1$ loss?
This gives
$$\binom{13}{1}\cdot2^{1} = \frac{13!}{1! (12)!}\cdot2=13\cdot2=26.$$
Now, assume you want to know in how many ways you can bet thirteen different football matches to get exactly $2$ losses?
Well that would give $$\binom{13}{2}\cdot2^{2} = \frac{13!}{2! (11)!}\cdot4=78\cdot4=312.$$
So, the general formula is $\binom{n}{r}\cdot2^{r} = \frac{n!}{r! (n-r)!}\cdot2^{r}.$
On the other hand, with $13$ matches to be bet on and exactly $1$ to be a loss, I could argue that the losing match could be chosen in $13$ different ways, and that this match has only $2$ possible losing outcomes, so
$$13\cdot2=26.$$ This result corresponds well to the answer above.
With the same argument, with $13$ matches to be bet on and exactly $2$ to be losses, I could argue that the first losing match could be chosen in $13$ different ways, and the second losing match in $12$ different ways, and that each losing match has only $2$ possible outcomes, giving $$13\cdot12\cdot2=312.$$
This result also corresponds to the answer above.
But, when repeating the calculation for the choice of $3$ lost games (corresponding to $10$ wins), the magic disappears:
$$\binom{13}{3}\cdot2^{3} = \frac{13!}{3! (10)!}\cdot8=286\cdot8=2288\ne13\cdot12\cdot11\cdot2=3432.$$
Where do I go wrong?
AI: Your calculation of $13 \cdot 12 \cdot 11 \cdot 2$ is wrong in two ways. The first three factors are the number of ways to choose the games you get wrong, but you count each ordering separately: if the games you lose are $ABC$, you are counting $CBA$ as different. That is the factor $3!$ between ${13 \choose 3}$ and $13 \cdot 12 \cdot 11$. Then, having chosen the three games to lose, there are two ways to lose each, so the final factor should be $2^3$, not $2$. The two corrections combine to multiply $3432$ by $\frac 46$, giving $2288$.
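As a cross-check (my own sketch, not part of the answer), the $2288$ can be confirmed by brute force in Python, fixing outcome $0$ as the correct result of every match:

```python
# Brute-force check: over all 3^13 possible bets, count those that get
# exactly 10 of the 13 matches right (i.e. exactly 3 wrong).
from itertools import product
from math import comb

count = sum(1 for bet in product(range(3), repeat=13)
            if sum(1 for pick in bet if pick != 0) == 3)
assert count == comb(13, 3) * 2**3 == 2288
print(count)  # 2288
```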
|
H: Why it is not possible to construct a set function that is defined for all sets of real numbers with the following 3 properties?
I was reading page 30 of Royden and Fitzpatrick's "Real Analysis", fourth edition, and the book said: "it is not possible to construct a set function that is defined for all sets of real numbers with the following 3 properties:
1- The measure of an interval is its length.
2- The measure is translation invariant.
3- The measure is countably additive over countable disjoint unions of sets." And then the book said, as a justification for this, that we should look at page 48. Here is a part of pg. 48:
After that, on the remaining part of the page, the book started to prove thm. 17.
My question is:
What on pg. 48 says that it is not possible to construct a set function that is defined for all sets of real numbers with the previous 3 properties? Could anyone explain this to me, please?
AI: Suppose we have such a function $f$. Now let's use some choice set $\mathcal{C} = \mathcal{C}_E$, where we took $E=[0,1]$.
First, $\{\lambda + \mathcal{C}\}_{\lambda\in \mathbb{Q},\, 0\leq \lambda \leq 1}$ is countable and disjoint, so by property 3 we have
$$
f\left(\bigcup_{\lambda\in\mathbb{Q}, \, 0\leq \lambda \leq 1} (\lambda + \mathcal{C}) \right) = \sum_{\lambda \in \mathbb{Q},\,0\leq \lambda \leq 1} f(\lambda + \mathcal{C}).
$$
Since each point $x\in \mathopen[0,1\mathclose]$ is rationally equivalent to a point $c\in \mathcal{C}$, we have $[0,1] \subset \bigcup_{\lambda\in\mathbb{Q},\, 0\leq \lambda \leq 1} (\lambda + \mathcal{C}) \subset [0,2]$. But $f$ is nondecreasing, so our left hand side lies between $1$ and $2$ by property 1.
Now for the right hand side: by property 2, each $f(\lambda + \mathcal{C})$ is equal to $f(\mathcal{C})$. We have a countable sum of the same real number. Either $f(\mathcal{C})=0$, and then the RHS is zero; or $f(\mathcal{C})>0$, which implies that the RHS is $+\infty$. Both cases lead to a contradiction.
|
H: Proving $g: \mathbb{Z} \to \mathbb{R}$, $g(x)=2x+3$ is one to one.
Working on the book: Daniel J. Velleman. "HOW TO PROVE IT: A Structured Approach, Second Edition" (p. 242)
We can define a function $g: \mathbb{Z} \to \mathbb{R}$ by the rule that for every $x \in \mathbb{Z}$, $g(x) = 2x + 3$.
Assume $a,a' \in \mathbb{Z} \land g(a)=g(a')$
$g(a)=g(a')$
$2a+3=2a'+3$
$2a=2a'$
...
I know $a \in \mathbb{Z}$ and division is not closed in $\mathbb{Z}$. Now, my question is: how can I justify $a=a'$ ?
AI: You know what set you're working in by knowing $g(a), g(a') \in \mathbb{R}$. Therefore performing the following computation in $\mathbb{R}$ allows you to deduce $a = a'$ from $2a = 2a'$ since $\mathbb{R}$ is a field.
|
H: continuity and norm from a Hilbert space to a Hilbert space
Let $H$ be a Hilbert space and let $e$ be a unit vector of $H$. Let $A$ be the map from $H$ to $H$ defined by
$$x \mapsto x-2\langle x, e\rangle e.$$
Show that $A$ is linear and continuous, with norm at most $1$.
Show, by a suitable choice of $x$, that $\|A\|_{\mathcal{L}(H)}=1$.
Show by a certain choice of $x$ that $\|A\|_{\mathcal{L}(H)}=1$
It was easy to show that it was linear, but then the continuity and a norm I didn't see a way:
$$\|A x\|^{2}=\langle A x, A x\rangle=\langle x-2\langle x, e\rangle e,\; x-2\langle x, e\rangle e\rangle =\|x\|^{2}-4\langle x, e\rangle\langle e, x\rangle+4|\langle x, e\rangle|^{2}\|e\|^{2}$$
Since $\langle x, e\rangle\langle e, x\rangle=|\langle x, e\rangle|^{2}$ and $e$ is a unit vector, we have
$\|A x\|^{2}=\|x\|^{2}$
but then how to continue?
AI: $\|Ax\|=\|x\|$ implies that $A$ is bounded and $\|A\|=\sup \{\|Ax\|: \|x\| \leq 1\}=\sup \{\|x\|: \|x\| \leq 1\}=1$. The norm is attained when $x=e$.
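As a numerical illustration (my own finite-dimensional sketch, taking $H=\mathbb{R}^5$ and a random unit vector $e$, so that $A=I-2ee^{\mathsf T}$):

```python
# Check numerically that A = I - 2*e*e^T is an isometry (operator norm 1)
# and that the norm is attained at x = e, where A e = -e.
import numpy as np

rng = np.random.default_rng(0)
e = rng.standard_normal(5)
e /= np.linalg.norm(e)          # unit vector
A = np.eye(5) - 2 * np.outer(e, e)

x = rng.standard_normal(5)
assert np.isclose(np.linalg.norm(A @ x), np.linalg.norm(x))  # ||Ax|| = ||x||
assert np.isclose(np.linalg.norm(A, 2), 1.0)                 # operator norm 1
assert np.allclose(A @ e, -e)                                # attained at x = e
```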
|
H: Making intuition rigorous that integral of some positive function on set should be monotone in the Haar measure of the set
Let $\mathcal{M}$ be a compact Riemannian manifold with geodesic distance function $d$ and $\Omega$ its volume measure.
Pick some $A,B\subseteq\mathcal{M}$ such that $\Omega(A)\ll\Omega(B)$, but: (1) $A$ is not contained in $B$ (otherwise what I want to ask is trivial) and (2) $A$ is not contained in $B$ after an isometry of $\mathcal{M}$.
Let $f:[0,\infty)\to[0,\infty)$ be some nice smooth monotone decreasing function.
How to prove (is it even true??) that $$ \int_{x,y\in A}f(d(x,y))\mathrm{d}\Omega(x)\mathrm{d}\Omega(y) \leq \int_{x,y\in B}f(d(x,y))\mathrm{d}\Omega(x)\mathrm{d}\Omega(y)\,? $$
The statement is clear in case $f=1$ because then we simply have $\Omega(A)^2 \ll \Omega(B)^2$ which is true as $\cdot^2$ is monotone increasing. However, when $f$ is not equal to a constant and there is no isometry bringing $A$ into $B$, I am not sure how to approach this, though it seems intuitive that it must be true, or at least, can anyone think of a counter-example?
AI: Let me write $$V(A)=\int_{x,y\in A}f(d(x,y))\mathrm{d}\Omega(x)\mathrm{d}\Omega(y).$$ Note that since $f$ is decreasing, $f(r)\leq f(0)$ for all $r$ so $V(A)\leq f(0)\Omega(A)^2.$ In particular, this goes to $0$ as $\Omega(A)\to 0$, so for any fixed $B$ of positive measure, $V(A)\leq V(B)$ for all $A$ of sufficiently small measure.
(Note that unless $f$ is identically $0$, $V(B)>0$ for any $B$ of positive measure, since $B$ must have positive measure in some small ball in which $f(d(\cdot,\cdot))$ is always close to $f(0)$. In fact, using compactness of $\mathcal{M}$ to cover it with finitely many (say, $n$) small balls, $B$ must have measure at least $\Omega(B)/n$ in one of those balls, and so we get a lower bound on $V(B)$ depending only on $\Omega(B)$ (and $f$). So, the corresponding upper bound on $\Omega(A)$ needed to guarantee $V(A)\leq V(B)$ depends only on $\Omega(B)$ (and $f$), not on $B$ itself.)
|
H: If R is a ring, and A has all the sets in R and it's complements, is A an algebra? (Halmos Measure Theory question)
The question I have is related to problem 4.5 in chapter 1 of Halmos' text. Some definitions related to the question are the following. If $X$ is a set then a ring $\textbf{R}$ is a non-empty class of subsets such that if $E,F\in \textbf{R}$ then $E\cup F, E-F\in \textbf{R}$, where $E-F=E\cap F^c$.
Similarly, an algebra is a non-empty class of sets such that if $E,F\in \textbf{R}$ then $E\cup F\in \textbf{R}$ and $E\in\textbf{R}$ then $E^c\in\textbf{R}$.
The problem in the text is as follows. If $\textbf{R}$ is a ring, and $A=\{E\subseteq X|E\in \textbf{R}$ or $E^c\in\textbf{R}\}$, then show that $A$ is an algebra.
My trouble is in the following line of argument in showing so. If $E\in\textbf{R}$ and $F\in A$ is such that $F^c\in\textbf{R}$, why does it imply that $E\cup F\in A$?
I'm surely missing something. Here's what I've got till now. $E\cup F^c\in R\subseteq A\Rightarrow$ either $E\cup F^c\in R$ or $E^c \cap F\in R$. In the case of the latter, $E\cup F=(E-F)\cup (F-E)\cup (E\cap F)\in R$ by definition, but I'm stuck with the former and have tried other arguments like showing that it's in the intersection of all algebras that contain the ring and $X$, but can't wrap my head around why $E\cup F$ in this case must be in $A$, and am starting to doubt whether $A$ would even be an algebra.
Any pointers to this effect would help, thanks!
AI: Note that $E$ and $F^c$ are in $\mathbf{R}$, hence so is $F^c-E=F^c\cap E^c = (F\cup E)^c = (E\cup F)^c$. But if $(E\cup F)^{c}\in\mathbf{R}$, then $E\cup F\in A$.
|
H: Best method to find how many solutions are there to the equation $a + b + c = 21?$
The positive integers $a, b, $ and $c$ are such that $ a + b + c =21$.
$a = 5, b = 5, c = 11$ is a solution. $a = 5, b = 11, c = 5$ is another solution (i.e. order matters).
How many different solutions are there?
I can see a very slow way of writing each individual case out, but this is definitely not ideal... Is there a faster way?
AI: If $a=1$ there are 19 ways to choose $b$ and then only one way to choose $c$.
If $a=2$ there are 18 ways to choose $b$ and then only one way to choose $c$.
If $a=3$ there are 17 ways to choose $b$ and then only one way to choose $c$.
This pattern clearly continues, so there are $19 + 18 + 17 + \dots + 1$ ways altogether. This is simply an arithmetic series with first term $19$ and common difference $-1$, so its sum is $\frac{19 \cdot 20}{2}$. Hence your answer is $190$.
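As a cross-check (not part of the original answer), a brute-force count in Python agrees, and matches the stars-and-bars value $\binom{20}{2}=190$:

```python
# Count ordered triples of positive integers (a, b, c) with a + b + c = 21.
from math import comb

count = sum(1 for a in range(1, 22) for b in range(1, 22)
            if 21 - a - b >= 1)          # c = 21 - a - b is then determined
assert count == 190 == comb(20, 2)       # stars-and-bars agrees
print(count)
```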
|
H: Is there a bijection between $\mathcal{P}(\mathbb{N})$ and $\mathcal{P}(\mathbb{N} \times \mathbb{N})$?
I tried using the Cantor-Schröder-Bernstein theorem. I defined $f_1 \colon \mathcal{P}(\mathbb{N}) \to \mathcal{P}(\mathbb{N} \times \mathbb{N})$ as $\{a_1, a_2, a_3,...\} \mapsto \{(a_1,a_1), (a_2, a_2), (a_3,a_3),...\}$ and $f_2 \colon \mathcal{P}(\mathbb{N} \times \mathbb{N}) \to \mathcal{P}(\mathbb{N})$ as $\{(a_1, b_1), (a_2, b_2),...\} \mapsto \{2^{a_1-1}(2b_1-1), 2^{a_2-1}(2b_2-1),... \}$. Then, I showed that both $f_1$ and $f_2$ are injective. Therefore, by the Cantor-Schröder-Bernstein theorem, there exists a bijection $f \colon \mathcal{P}(\mathbb{N}) \to \mathcal{P}(\mathbb{N} \times \mathbb{N})$. Is this a correct approach?
AI: Your $f_1$ is a function from $\Bbb N$ to $\Bbb N\times\Bbb N$, not from $\wp(\Bbb N)$ to $\wp(\Bbb N\times\Bbb N)$, and your $f_2$ is a function from $\Bbb N\times\Bbb N$ to $\Bbb N$, not from $\wp(\Bbb N\times\Bbb N)$ to $\wp(\Bbb N)$, so the Cantor-Schröder-Bernstein theorem tells you that there is a bijection $f:\Bbb N\to\Bbb N\times\Bbb N$. You still have some work to do to get a bijection from $\wp(\Bbb N)$ to $\wp(\Bbb N\times\Bbb N)$, though it’s pretty easy work: there is a very natural, easy way to use $f$ to define a bijection $F:\wp(\Bbb N)\to\wp(\Bbb N\times\Bbb N)$. Can you find it?
|
H: What are the ordered pairs when A = {1, 3, 5, 15, 18} and R be defined by xRy if and only if x|y.
I just wanted to confirm I understand correctly:
When trying to find the pairs for:
A = {1, 3, 5, 15, 18} and R be defined by xRy if and only if x|y
First I determine the factors:
x|y 1 is a factor of 3
x|y 1 is a factor of 5
x|y 1 and 3 are factors of 18
x|y 1, 3 and 5 are factors of 15
Therefore the pairs should be as follows:
R = {(1,3),(1,5),(1,15),(1,18),(3,15),(3,18),(5,15)}
Is this correct?
AI: Almost correct. Don't forget that $x \mid x$ for all $x \in A$ though! $3$ divides itself, for instance, so $(3,3) \in R$.
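If it helps, the relation is small enough to enumerate by machine; a quick Python sketch listing the full $R$, reflexive pairs included:

```python
# Enumerate R = {(x, y) : x divides y} over A, including the pairs (x, x).
A = [1, 3, 5, 15, 18]
R = [(x, y) for x in A for y in A if y % x == 0]
print(R)
# [(1, 1), (1, 3), (1, 5), (1, 15), (1, 18), (3, 3), (3, 15), (3, 18),
#  (5, 5), (5, 15), (15, 15), (18, 18)]
```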
|
H: Hausdorff Property for a Covering Space of a Manifold $E\to M$.
I want to show that if $\pi : E \to M$ is a topological covering map and $M$ is a manifold then $E$ is a manifold. I was reading this post which helped me for the second-countability. The OP says it is simple to show that $E$ is Hausdorff but I don't see it. So I take $p\neq q \in E$.
Either $\pi(p) \neq \pi(q)$. In this case, since $M$ is Hausdorff, take $U,V$ that separate $\pi(p)$ and $\pi(q)$; then $\pi^{-1}(U)$ and $\pi^{-1}(V)$ will separate $p$ and $q$.
Or $\pi(p) = \pi(q)$. I now consider an open $U \ni \pi(p)=\pi(q)$. Then I was told that the sheets of $\pi^{-1}(U)$ containing $p$ and $q$ are disjoints but I don't understand why.
AI: Like @ArcticChar commented, part of the definition of a covering map $\pi\colon E\to M$ is that if $U$ is an evenly covered neighborhood in $M$, then $\pi^{-1}(U)$ is a disjoint union $\bigsqcup_\alpha V_\alpha$ in $E$ such that for each $\alpha$, $\pi|V_\alpha\colon V_\alpha\to U$ is a homeomorphism. In particular, $\pi|V_\alpha$ is injective.
Do you see how to finish from here?
|
H: The intuition of the round value of $e^{i \pi}$
I can see the value of $e^{i\pi}$ is $-1$; this value is round, without decimals. The value $e$ can be defined as the value $v$ such that the derivative of $v^{x}$ is still $v^{x}$. And the value $\pi$ is defined as the circumference over the diameter of any circle. And $i$ is $\sqrt{-1}$, as everybody knows.
These 3 values $e$ and $i$ and $\pi$ have no direct relations, but $e^{i\pi}$ is a round value, what is the intuition (the simple explanation) behind this?
AI: You see, there are other seemingly unlikely identities... $${\left(\sqrt{2}^{\sqrt2}\right)}^{\sqrt{2}} = 2,$$ $$\log 2 + \log 5 = 1,$$ etc.
The fact is, $\pi$ is defined such that the identity holds. Usually $\pi$ is defined as the smallest positive number such that $2\pi$ is a period of $f(x)=e^{ix}$. Then $(e^{i\pi})^2 = e^{2\pi i} = 1$, so $e^{i\pi}$ must be a square root of $1$; but it cannot be the positive square root, since $e^{i\pi}=1$ would make $\pi$ itself a period, contradicting that $2\pi$ is the smallest positive period. Therefore $e^{i\pi}=-1$.
To see why this is a good definition, consider doing it the other way: defining $\pi$ via circumference requires a definition of curve length, which requires the definition of improper integrals. The smallest-positive-period approach only requires the definition of $e^x$ as a power series sum, which only requires the definition of limits. You can also define $e^x$ as the unique solution to a differential equation, which is also straightforward. This is the definition used in Rudin's real analysis book.
|
H: Intuitive proof for distributive property of dot product using $\overrightarrow{u}\cdot\overrightarrow{v} = u_{x}v_{x} + u_{y}v_{y}$
I understand the intuitive way to think of the distributive property using $\overrightarrow{A}\cdot\overrightarrow{B} = AB\cos\theta$:
Then, this makes me wonder if it's possible to prove distributive property using the more general form for dot product: $\overrightarrow{u}\cdot\overrightarrow{v} = u_{x}v_{x} + u_{y}v_{y}$
AI: It is much simpler using the coordinate definition because it just comes down to the distributivity of real multiplication. If you write $R_x=B_x+C_x$ and the same for $y$, then $A\cdot R=A_xR_x+A_yR_y=A_x(B_x+C_x)+A_y(B_y+C_y)$ and now use distributivity and associativity in the reals.
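Writing it out in full, with $\vec R=\vec B+\vec C$:
$$\vec A\cdot(\vec B+\vec C)=A_x(B_x+C_x)+A_y(B_y+C_y)=(A_xB_x+A_yB_y)+(A_xC_x+A_yC_y)=\vec A\cdot\vec B+\vec A\cdot\vec C.$$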
|
H: Does Hilbert's theorem 90 hold for local rings?
Let $R$ be a local ring (commutative with 1). Let $G$ be a finite subgroup of $\text{Aut}(R)$ preserving the maximal ideal. Then it seems to me that we also have:
$$H^1(G,R^\times) = 0$$
Is this correct? (The classical Hilbert theorem 90 states this when $R$ is a field).
Here's the argument:
First, you need the Lemma: If $g_1,\ldots,g_n$ are distinct automorphisms of $R$, then if for $c_i\in R$, $\sum_{i=1}^n c_ig_i = 0$ (as a function $R\rightarrow R$), then each $c_i = 0$. Indeed, one may assume that there is a minimal such relation, where $c_1g_1 + \cdots + c_rg_r = 0$ with $c_1,\ldots,c_r$ all nonzero. In this case we must have $r > 1$ since $c_1g_1 = 0$ means $c_1g_1(1) = c_1\cdot 1 = 0$ so $c_1 = 0$. Now since $g_1\ne g_r$, let $a\in R$ be such that $g_1(a)\ne g_r(a)$. Let $x\in R$ be arbitrary. We have:
$$\sum_{i=1}^r c_ig_i(ax) = \sum_{i=1}^r c_ig_i(a)g_i(x) = \sum_{i=1}^rc_ig_i(x) = 0$$
Multiplying the last sum by $g_r(a)$ and subtracting from the second sum, we get:
$$\sum_{i=1}^rc_i(g_i(a)-g_r(a))g_i(x) = \sum_{i=1}^{r-1}c_i(g_i(a)-g_r(a))g_i(x) = 0$$
Since this holds for all $x\in R$, this gives a shorter relation and by our choice of $a$, the coefficients are not all zero, since $c_1(g_1(a) - g_r(a))\ne 0$.
To prove the theorem, we now proceed as usual: Let $\alpha : G\rightarrow R^\times$ be a 1-cocycle, so $\alpha$ satisfies $\alpha(gh) = \alpha(g)\cdot (g.\alpha(h))$. In particular, we have $g.\alpha(h) = \alpha(g)^{-1}\alpha(gh)$.
Applying the above result to the residue field $R/\mathfrak{m}$, the linear combination
$$\sum_{g\in G}\alpha(g)\cdot g$$
is nonzero in the residue field. Thus, there exists a $\theta\in R$ such that
$$\beta := \sum_{g\in G}\alpha(g)\cdot g(\theta) \in R^\times$$
Then, for each $h\in G$, we have
$$h(\beta) = h\left(\sum_{g\in G}\alpha(g)\cdot g(\theta)\right) = \sum_{g\in G}h(\alpha(g))\cdot (hg)(\theta) = \sum_{g\in G}(\alpha(h)^{-1}\alpha(hg))\cdot (hg)(\theta) = \alpha(h)^{-1}\sum_{g\in G}\alpha(hg)\cdot (hg)(\theta) = \alpha(h)^{-1}\beta$$
Thus, $\alpha(h) = \frac{\beta}{h(\beta)} = \frac{h(\beta^{-1})}{\beta^{-1}}$, so $\alpha$ is a coboundary.
Does this seem right? I just want to record this here since I find it strange that I've never seen this simple generalization, which doesn't require any real additional technology to state or prove.
AI: This is false. Let $R$ be $\mathbb{Z}_2[i]$ (where $i$ denotes a choice of square root of negative one). It has an automorphism $\sigma$ exchanging $i$ and $-i$, and (writing $G = \{\text{id}, \sigma\}$) one has
$$
H^1(G, R^\times) = \frac{\text{ker} R^\times \stackrel{N}{\to} R^\times}{\text{im} R^\times \stackrel{\sigma - 1}{\longrightarrow} R^\times}
$$
Now the element $i$ of $R$ has norm 1. But it is not of the form $\alpha/\alpha^\sigma$ for any $\alpha \in R^\times$. Indeed, if
$$
\frac{a + bi}{a-bi} = i
$$
then cross-multiplying gives
$$
a + bi = ai + b.
$$
so $a=b$. But any element $a(1+i)$ of $\mathbb{Q}_2(i)$ with $a \in \mathbb{Z}_2$ has positive valuation, so it is not a unit in $\mathbb{Z}_2[i]$.
|
H: Conservation of energy in three dimension
I'm trying to derive the conservation of energy in 3D from the equation $\vec{F}=m\vec{a}$.
David Morin, in his book "Introduction to Classical Mechanics With Problems and Solutions" p. 138-139, proves the conservation of energy in 1D in the following way:
I wanted to prove the 3D version in the same way, so I got the term
$$\int_C m\vec{v} \cdot d\vec{v}$$
or, if parametrized,
$$\int_{t_0}^t m\vec{v}(t) \cdot \frac{d\vec{v}(t)}{dt} \ dt$$
This should obviously yield $$\frac{1}{2}m|\vec{v(t)}|^2 - \frac{1}{2}m|\vec{v(t_0)}|^2.$$
But what I'm wondering is, how do I deal with the dot product? Please see the diagram below.
The angle between $d\vec{v}$ and $\vec{v}$ looks too complicated to be taken into account at infinitesimal level. (and note that even $d\theta$ is not the angle between these two)
AI: You can write the $d\vec v$ in terms of the components along $\vec v$ and perpendicular to it.
$$d\vec v=d|\vec v| \hat v+v d\theta\hat\theta$$
When you take the dot product with $\vec v=|\vec v|\hat v$, the second term will vanish. So all you need to consider is the radial component (one dimension).
|
H: Prove that $f_{2k} \cdot f_{(2k+4)} + 1$ is a perfect square, where $f_n$ is the $n$th Fibonacci number, $n\geq0$.
I found a pattern that makes me conjecture that this is also true for $f_{2k+4}$, as well as $f_{2k+2}$. I actually accidentally found this pattern while thinking about the proof for $f_{2k+2}$! But I haven't been able to prove it yet. Any help would be appreciated, thanks!
AI: Nice to see another Fibonacci question from you, Danny! It's really cool you found that pattern. I think pattern finding is a really important skill in math in general. Keep it going!
Before you look at my full solution below, try and see if you can use my hint and figure it out.
Hint: Use a direct proof (no induction, which we used for the previous problem), applying basic properties of Fibonacci numbers, i.e. $ f_{n} = f_{n-1} + f_{n-2} $.
Full proof:
We aim to prove that $f_{2k} \cdot f_{2k+4} + 1$ is a perfect square, where $f_n$ is the $n$th Fibonacci number, $n\geq0$.
$f_{2k} \cdot f_{2k+4} + 1 = f_{2k} \cdot (f_{2k+2} + f_{2k+3}) + 1 = f_{2k} \cdot f_{2k+2} + f_{2k} \cdot f_{2k+3} + 1$.
Based on what we previously proved (for your previous question), this is equivalent to:
$ (f_{2k+1}^2 - 1) + f_{2k} \cdot f_{2k+3} + 1 = (f_{2k+1}^2 - 1) + f_{2k} \cdot (f_{2k+1} + f_{2k+2}) + 1 = f_{2k+1} \cdot (f_{2k+1} + f_{2k}) + f_{2k} \cdot (f_{2k+2}) - 1 + 1$.
By the definition of a Fibonacci number, $f_{2k+2} = f_{2k+1} + f_{2k}$, so this is equivalent to:
$f_{2k+1} \cdot f_{2k+2} + f_{2k} \cdot (f_{2k+2}) = (f_{2k+2}) \cdot (f_{2k+1} + f_{2k}) = (f_{2k+2})^2$.
Because every Fibonacci is by definition, an integer (by its recursive definition, it is the sum of two previous integers), $ (f_{2k+2})^2 $ is a perfect square, which proves the claim.
$ Q.E.D. $
Hope this helps!
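As a sanity check (not needed for the proof), the identity $f_{2k}\,f_{2k+4}+1=f_{2k+2}^2$ can be tested numerically; a small Python sketch with the convention $f_0=0$, $f_1=1$:

```python
# Verify f(2k) * f(2k+4) + 1 == f(2k+2)^2 for the first several k,
# using the convention f(0) = 0, f(1) = 1.
fib = [0, 1]
while len(fib) < 50:
    fib.append(fib[-1] + fib[-2])

for k in range(0, 20):
    assert fib[2*k] * fib[2*k + 4] + 1 == fib[2*k + 2] ** 2
print("identity holds for k = 0, ..., 19")
```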
|
H: Unboundedness of a continuous function
Given: a function $f$ on $\mathbb{R}$ is continuous and satisfies $|f(x)-f(y)|\geq K|x-y|$ for all $x,y$ in $\mathbb{R}$ and some constant $K>0$. This function is one-to-one, which is clear, but how do I prove such functions are unbounded? Thanks in advance.
AI: Take $y=0$. Then $|f(x)-f(0)|\ge K |x|$. Then the triangle inequality implies $|f(x)|+|f(0)| \ge |f(x)-f(0)| \ge K |x|$. Then $|f(x)| \ge K|x| - |f(0)|$.
Suppose, by way of contradiction, that $f$ is bounded by $M$. If you choose $x$ such that $ M< K |x| - |f(0)|$, which is accomplished by $|x| > (M+|f(0)|)/K$, you break the bound. Therefore, $f$ is unbounded.
I've had three beers, factor that into your estimation of the quality of the answer.
|
H: understanding the limits in calculation of expectation
My question is from the book Bertsekas, "Introduction to probability".
Let's say X is continuous first
I believe I should manipulate the given expression to look like the known definition of expectation.
$ E[X] = \int_{-\infty}^\infty x \times f_X(x) dx $
equivalently,
$ \int_{0}^\infty P(X > x)dx = \int_{0}^\infty \Bigl(\int_{x}^\infty f_X(x) dx\Bigr) dx $
The solution is given below.
Why is a new variable y introduced? and why did the limits change from $(x,\infty)$ to $(0,y)$
Thanks.
AI: The solution uses the definition of the tail probability for a nonnegative random variable, $P(X>x)=\int_x^\infty f(\tau)\, d\tau$. The next step is switching the order of integration (Fubini's theorem).
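Spelled out: the new dummy variable $y$ is needed because $x$ is already in use as the outer variable of integration. After renaming, swapping the order of integration (Fubini) turns the region $\{0\le x<\infty,\ y>x\}$ into $\{0\le y<\infty,\ 0\le x<y\}$, which is exactly where the limits $(0,y)$ come from:
$$\int_0^\infty P(X>x)\,dx=\int_0^\infty\int_x^\infty f_X(y)\,dy\,dx=\int_0^\infty\int_0^y f_X(y)\,dx\,dy=\int_0^\infty y\,f_X(y)\,dy=E[X].$$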
|
H: Prove that $\sqrt{2}$ must be in the open set strings which covers the $\mathbb{R}$
I read a book about the set theory and met with the following:
I really do not understand the note in the parentheses: does it mean that the irrational numbers are very close to the rational numbers? And why is $\sqrt{2}$ such a special irrational number? And how does one prove it? Thank you.
AI: The point of the paragraph is that you can make a countable union of intervals that covers all the rationals (or any countable set of points we want) but has very small total length. The point of the parenthetical note is to counteract the feeling some people have that all the irrationals must be covered as well because they are very close to rationals. There is nothing special about $\sqrt 2$ except that it is irrational. Depending upon the $\epsilon$ chosen and the order we list the rationals in, it might be covered, but it is not guaranteed. Their point is that you cannot prove $\sqrt 2$ (or any other number not in $A$) is covered by any of the intervals.
|
H: Is there a tighter upper bound for $\sum_{k=1}^n|k\sin k|$ than $\frac12n(n+1)$?
Consider the sum
$$\sum_{k=1}^n|k\sin k|$$
An obvious upper bound for this is clearly $\sum_{k=1}^n k=\frac12n(n+1)$. But it seems that this upper bound is too "loose", and so I was wondering if it is possible to find a tighter upper bound for it?
Below please find a Mathematica plot for the sum in the interval $n\le1000$.
AI: The average value of $\left|\sin k\right|$ is $2/\pi$ so an upper bound is probably near $$\frac{n(n+1)}{\pi}$$
As the plot below shows, this is quite accurate when $900\le n\le1000$. $n(n+1)/\pi$ is the red curve, the scatterplot in blue represents the actual sums.
At most one in every three of $|\sin k|$ is more than $\cos 0.5$ so one bound is near $$\frac{1+2\cos0.5 }6n(n+1)$$
In the same way, the average of $22$ consecutive values of $|\sin k|$ is always between $0.635$ and $0.638$. So the sum is bounded above by
$$\sum_{k=1}^N k\left|\sin k\right|\lt 0.638\sum_{k=1}^N \left(N-22\lfloor (N-k)/22 \rfloor\right)$$
where each $k$ has been rounded up to the nearest number of the form $N-22m$.
This has a polynomial sum if $N$ is a multiple of $22$, and a finite correction if not.
$$\sum_{k=1}^N k\left|\sin k\right|\lt 0.638\frac{N(N+22)}2+C$$
By noting that $\sum_{k=1}^{22}k|\sin(M+k)|$ is always between $153$ and $169$, this can be improved to
$$0.3175N(N-0.1)+C_1\lt\sum_{k=1}^N k\left|\sin k\right|\lt0.319N(N+2)+C_2$$
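For the curious, a quick numerical comparison of the sum against the heuristic $n(n+1)/\pi$ (my own sketch, not the answer's plot):

```python
# Compare S(n) = sum_{k<=n} k*|sin k| with the heuristic n(n+1)/pi.
import math

n = 1000
S = sum(k * abs(math.sin(k)) for k in range(1, n + 1))
print(S, n * (n + 1) / math.pi)  # the two values are close for large n
```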
|
H: Consider $v=v_1+v_2$, where $v_1 \in M$ and $v_2 \in M^{\perp}$
$M = span\{\begin{pmatrix}8\\0\\-6\end{pmatrix}, \begin{pmatrix}8\\6\\-6 \end{pmatrix}\}$
I am trying to calculate $v_1$ and $v_2$ when $v=\begin{pmatrix}2 \\ 4 \\ 6 \end{pmatrix}$.
I know that since $v_1 \,\in \, M$ and $v_2 \, \in \, M^{\perp}$, $v_1\cdot v_2 = 0$.
Let $v_1 = a_1\begin{pmatrix}8\\0\\-6\end{pmatrix} + a_2\begin{pmatrix}8\\6\\-6 \end{pmatrix}$. The orthogonal basis of $M = \begin{pmatrix}8\\0\\-6\end{pmatrix},\begin{pmatrix}0\\1\\0\end{pmatrix}$. From this, we can say that $v=a_1\begin{pmatrix}8\\0\\-6\end{pmatrix} + a_2\begin{pmatrix}8\\6\\6 \end{pmatrix}+a_3\begin{pmatrix}8\\0\\-6\end{pmatrix}+a_4\begin{pmatrix}0\\1\\0\end{pmatrix}$
But if we put this into an augmented matrix, it gives an inconsistent system (which is clearly wrong). What is the correct way to solve this?
AI: It is quite easy to see that $w=\begin{bmatrix}6\\0\\8\end{bmatrix}$ is orthogonal to the basis vectors given for $M$. Thus $w \in M^{\perp}$. Moreover $\text{dim}(M)=2$ and $M \subset \Bbb{R}^3$, so $\text{dim}(M^{\perp})=1$. This means we can say that $\{w\}$ is a basis for $M^{\perp}$.
So we want to solve for $a,b,c$ such that
$$\begin{bmatrix}2\\4\\6\end{bmatrix}=\underbrace{a\begin{bmatrix}8\\0\\-6\end{bmatrix}+b\begin{bmatrix}8\\6\\-6\end{bmatrix}}_{v_1 \in M}+\underbrace{c\begin{bmatrix}6\\0\\8\end{bmatrix}}_{v_2 \in M^{\perp}}.$$
This yields the system
\begin{align*}
4a+4b+3c&=1\\
3b&=2\\
-3a-3b+4c&=3
\end{align*}
Upon solving this, we get $a=-\frac{13}{15}, b=\frac{2}{3}$ and $c=\frac{3}{5}$.
Thus
$$v_1=\color{red}{\frac{-13}{15}}\begin{bmatrix}8\\0\\-6\end{bmatrix}+\color{red}{\frac{2}{3}}\begin{bmatrix}8\\6\\-6\end{bmatrix},$$
and
$$v_2=\color{red}{\frac{3}{5}}\begin{bmatrix}6\\0\\8\end{bmatrix}$$
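The arithmetic can be double-checked mechanically; a small NumPy sketch (array names are mine):

```python
# Solve for (a, b, c) and check v1 + v2 = v with v1 in M, v2 orthogonal to M.
import numpy as np

u1, u2, w = np.array([8, 0, -6]), np.array([8, 6, -6]), np.array([6, 0, 8])
v = np.array([2, 4, 6])
a, b, c = np.linalg.solve(np.column_stack([u1, u2, w]), v)
print(a, b, c)                            # -13/15, 2/3, 3/5
v1, v2 = a * u1 + b * u2, c * w
assert np.allclose(v1 + v2, v)
assert np.isclose(v2 @ u1, 0) and np.isclose(v2 @ u2, 0)
```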
|
H: Sum of $s_n=10-8+6.4-5.12+...$
I'm asked to find the sum for $s_n=10-8+6.4-5.12+...$ as $n\rightarrow \infty$. I discovered that the sum can be written as $$10\sum_{n=1}^{\infty}(-1)^{n-1}\left(\frac{8}{10}\right)^{n-1}$$
I know from the ratio/roots test the series indeed converges.
My problem is figuring out what it converges to. I don't see how I can use the geometric formula $\frac{a}{1-r}$.
AI: $$\sum_{n=1}^{\infty}10(-1)^{n-1}\biggr(\frac{8}{10}\biggr)^{n-1}
= \sum_{n=0}^{\infty}10\left(\frac{-8}{10}\right)^{n}=10\frac{1}{1+\frac{8}{10}}=\frac{100}{19}
$$
|
H: Linear Least Squares with Monotonicity Constraint
I'm interested in the multidimensional linear least squares problem: $$\min_{x}||Ax-b||^2$$
subject to a monotonicity constraint for $x$, meaning that the elements of $x$ are monotonically increasing: $x_0 \leq x_1$, $x_1 \leq x_2$, ... , $x_{n-1} \leq x_n$.
I basically have two questions regarding this problem:
1.) Is there maybe literature regarding this problem out there? I wasn't able to find anything online so far.
2.) If not, is it maybe possible to rewrite my problem in such a way that I could use already existing methods like Non-Negative Least Squares (NNLS) or a Constrained Least Squares (CLS) method?
Regarding the NNLS, I had the idea to formulate my problem in terms of $\tilde{x} := (x_0, x_1-x_0,\; ...\;,x_n - x_{n-1})$, as this would also achieve monotonicity if every term is non-negative, but I can't seem to do it; maybe I'm missing something here?
Many thanks in advance!
AI: Let $L$ be an $n\times (n+1)$ matrix such that $$ L =
\begin{pmatrix}
-1 & 1 & 0 & ... &0 \\
0 & -1 & 1 & ... &0 \\
& & \\
0 & 0 & ...& -1 &1 \\
\end{pmatrix}$$
Then you can formulate this as a constrained least squares problem $$\min_{x}||Ax-b||^2\quad s.t.\quad Lx \geq 0$$
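For what it's worth, the asker's reparametrization idea does go through; here is a hedged Python sketch (function and variable names are mine, not from any particular package) that reduces the monotone problem to NNLS by writing $x$ as cumulative sums of increments, with the free first entry split into positive and negative parts:

```python
# Monotone least squares via NNLS: write x = T z where T is the lower-
# triangular matrix of ones, so z_0 = x_0 and z_i = x_i - x_{i-1} for i >= 1.
# Requiring z_1, ..., z_{n-1} >= 0 enforces monotonicity; z_0 is free,
# so it is split as z_0 = p - q with p, q >= 0.
import numpy as np
from scipy.optimize import nnls

def monotone_lstsq(A, b):
    m, n = A.shape
    T = np.tril(np.ones((n, n)))
    AT = A @ T
    A_aug = np.hstack([AT, -AT[:, :1]])    # extra column is the -q part of z_0
    z_aug, _ = nnls(A_aug, b)
    z = z_aug[:n].copy()
    z[0] -= z_aug[n]                       # recombine z_0 = p - q
    return T @ z

# tiny usage example with random data
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
x = monotone_lstsq(A, b)
assert np.all(np.diff(x) >= -1e-10)        # x is (numerically) nondecreasing
```

The design choice here is that the lower-triangular matrix of ones plays the role of a discrete antiderivative, turning the ordering constraint into plain nonnegativity.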
|
H: Suppose $H
I read in some text the following statement:
Let $H$ be a subgroup of $G$. Denote $N=\bigcap_\limits{x\in G} xHx^{-1}$, then $N$ is the largest normal subgroup of $G$ contained in $H$.
It's easy to show $N<G$, since $H$ is a subgroup of $G$ any conjugate $xHx^{-1}~(x\in G)$ of $H$ is also a subgroup of $G$, and the intersection of subgroups is also a subgroup. $N\lhd G$ is also easily shown, if $n\in N$ then for any $g\in G$ there exists $h\in H$ such that $n=ghg^{-1}$, and for any $x\in G$ we have $xnx^{-1}=x(ghg^{-1})x^{-1}=(xg)h(xg)^{-1}$. Since for any $g$ such an $h$ always exists and $x\mapsto xg$ is surjective, clearly $xnx^{-1}\in N$. $N\subseteq H$ because $1H1^{-1}=H$ is one of the intersecting subgroups.
It's left to show $N$ is the largest normal subgroup contained in $H$, which I do not know how to achieve. I appreciate any help or hint, thanks.
AI: Suppose $N'$ is any normal subgroup of $G$ contained in $H$ and containing $N$, i.e.
$$N \subseteq N' \subseteq H.$$
We need to show that $N' = N$; since $N \subseteq N'$ by assumption, it remains to show that $N' \subseteq N$.
Take any $g \in N'$ and $x \in G$. Then $x^{-1} g x \in N' \subseteq H$, thus $g \in xHx^{-1}$. This is true for any $x \in G$, hence
$$g \in \bigcap_{x \in G} xHx^{-1} = N,$$
completing the proof.
|
H: Parallelogram Inequality
Let M be a point inside parallelogram ABCD. Then prove that $MA + MB + MC + MD < 2(AB + BC)$
I tried this problem using Triangle Inequality but couldn't proceed. Please help.
AI: Let $PP'$ be parallel to $BC$ and $QQ'$ parallel to $AB$, both through $M$. Note that you can apply the triangle inequality to $AM$, $AP'$ and $MP'$. Same for others.
|
H: Why is subspace $\mathcal{C}$ the intersection of the kernels of $n-d$ linear forms?
I was reading Waldschmidt's notes on Finite fields and error coding, where I came across the following statement, in section $\S 3.3$:
A subspace $\mathcal{C}$ of $F_q^n$ of dimension $d$ can be described by giving a basis ${e_1, . . . , e_d}$ of $\mathcal{C}$ over $F_q$, so that $\mathcal{C} = \{m_1e_1 + · · · + m_de_d | (m_1, . . . , m_d) \in F_q^d \}$. An alternative description of a subspace $\mathcal{C}$ of $F_q^n$ of codimension $n−d$ is by giving $n−d$ linearly independent linear forms $L_1, . . . , L_{n−d}$ in n variables $x = (x_1, . . . ,x_n)$ with coefficients in $F_q$, such that $$(*)\quad\mathcal{C} = \ker L_1 \cap · · · \cap \ker L_{n−d}.$$
I am aware that a subspace is always the kernel of a linear map and vice versa. However, I don't see how $\mathcal{C}$ can alternatively be represented as the intersection of kernels of $n-d$ linear maps.
AI: Extend $e_1, \ldots, e_d$ to a basis $e_1, \ldots, e_n$ of $F_q^n$. For each $i$ between $1$ and $n$, define a linear form $e_i^*$ by its action on the basis: let $e_i^*(e_j)$ be $0$ when $i \neq j$ and $1$ when $i = j$. The linear forms $e_1^*, \ldots, e_n^*$ form the dual basis to the basis $e_1, \ldots, e_n$.
I claim that $\mathcal{C} = \bigcap_{i = d + 1}^n \ker e^*_i$. Note that if $1 \le j \le d$ and $d + 1 \le i \le n$, then
$$e_i^*(e_j) = 0 \implies e_j \in \ker e_i^*,$$
hence
$$\mathcal{C} = \operatorname{span}(e_1, \ldots, e_d) \subseteq \bigcap_{i=d+1}^n \ker e_i^*.$$
Conversely, suppose $x \in \bigcap_{i=d+1}^n \ker e_i^*$. Since $x \in F_q^n$, we have $x = a_1 e_1 + \ldots + a_n e_n$ for some scalars $a_1, \ldots, a_n \in F_q$. We have, for $d + 1 \le i \le n$,
$$0 = e_i^*(x) = a_1 e_i^*(e_1) + \ldots + a_{i - 1} e_i^*(e_{i - 1}) + a_i e_i^*(e_i) + a_{i + 1} e_i^*(e_{i + 1}) + \ldots + a_n e_i^*(e_n) = a_i,$$
hence
$$x = a_1 e_1 + \ldots + a_d e_d + 0 + \ldots + 0 \in \mathcal{C}.$$
Thus, $\mathcal{C}$ can indeed be expressed as the intersection of the kernels of $n - d$ linear forms.
|
H: Definition by Abstraction in Axiomatic Set Theory by Suppes
I am just starting self-studying Axiomatic Set Theory by Patrick Suppes. I have a doubt on the definition by abstraction.
Just to give some context, Suppes defines a set as
$y$ is a set $\leftrightarrow (\exists x)(x\in y)\ \vee\ y=0$ (empty set)
I found the first element in Def 11's disjunction is enough. If $y$ is not an empty set, we don't go to the second element; if $y$ is indeed an empty set, then for any $x$, $x\in y$ is not true, which requires that $\phi(x)$ cannot be satisfied. Conversely, if $\phi(x)$ is not satisfiable, then $x\notin y$ for every $x$. Then $y$ is an empty set. This already expresses our intuitive understanding.
I further found the second element confusing. Suppose indeed $\phi(x)$ is not satisfiable, in which case our intuition tells us $y=0$ (empty set). I can indeed find $B=0$ (empty set) too. $\forall x\ \neg \phi(x)$, and $\forall x\ \neg x\in B$. Thus, I conclude there exists $B=0$ (empty set) such that $\forall x,\ x\in B\leftrightarrow \phi(x)$.
This contradicts the definition of Suppes. Am I wrong with the reasonings in both paragraphs above? Thanks.
AI: The author is defining the "set-builder" operator: $\{ x \mid \varphi(x) \}$ that "maps" a predicate (a formula $\varphi$ with free variable $x$) into a term (i.e. the "name" of an object).
It is well-known that the so-called unrestricted Comprehension Principle is inconsistent [see §1.3]: thus, not every predicate can meaningfully define a set.
The author uses the definitional schema illustrated at page 19 with the $x/y=z$ example.
Going back to Definition 11, the author illustrates it at page 34.
There are two possible cases:
(i) either there is a set $A$ such that $(\forall x)(x \in A \leftrightarrow \varphi(x))$,
in which case we define that the "set-builder" operator maps formula $\varphi$ to that set, or
(ii) there is no set $B$ such that $(\forall x)(x \in B \leftrightarrow \varphi(x))$,
in which case we "arbitrarily" define that the "set-builder" operator maps formula $\varphi$ to the empty set.
Maybe your confusion is due to the incorrect way of reading the definition:
"if $y$ is not an empty set, we don't go to the second element; if $y$ is indeed an empty set, then for any $x, x∈y$ is not true, which requires $\varphi(x)$ cannot be satisfied.
We are defining $y$, i.e. we have to start from the formula and "manufacture" the corresponding set.
Regarding:
I further found the second element confusing. Suppose indeed $\varphi(x)$ is not satisfiable,...
The issue is not the satisfiability of the formula; consider the discussion about Russell's paradox (page 6).
In the formula $(\forall x) (x \in y \leftrightarrow \lnot (x \in x))$ we are using $\lnot (x \in x)$ as $\varphi(x)$, and the formula is indeed satisfiable: $\lnot (\emptyset \in \emptyset)$.
|
H: Dictionary ordering in $\mathbb{R}^2$ is not complete.
Show that the order "$\leq$" on $\Bbb{R}^2$ defined by,
$(a,b)\leq(c,d)$ if ($a<c$) or, $(a=c$ and $b\leq d)$
is not complete.
Hint: Use the set $E=\{(\frac1 n, 1-\frac1 n): n\in \Bbb{N}\}$.
Can any one help me with this? How can the set $E$ be used to show that, the ordering on $\mathbb{R}^2$ is not complete? Thank you.
AI: The set should be
$$E=\big\{(1-1/n, 1/n)\ \big|\ n\in\mathbb{N}\big\}$$
I've swapped coordinates (your original $E$ has $(1,0)$ as the least upper bound).
With that assume that $(p,q)$ is an upper bound of $E$. By looking at the first coordinate we get that $1-1/n\leq p$ for any $n\in\mathbb{N}$. Therefore we conclude that $p\geq 1$. But then we have
$$(1-1/n,1/n)<(1,r)$$
for any $n\in\mathbb{N}$ and any $r\in\mathbb{R}$ by our order definition. In particular if $(p,q)$ is an upper bound of $E$ then $(1,q-1)$ is an upper bound of $E$ which is lower than $(p,q)$. Therefore the least upper bound of $E$ does not exist.
|
H: Proof feedback: if $f$ is differentiable at $c$, it is also continuous at $c$
Hi, I have done my own proof of this theorem, and it's likely wrong as it's different from the proof in the book, which is also very simple.
I would love some feedback on what wrong assumptions I am making. I think it may be that I'm bounding $x-c$. Or maybe I cannot use the absolute value?
Here it goes:
$ f $ is differentiable at $ c $ so we can say
$$ f'(c) = \lim_{ x \to c } \frac{f(x) - f(c)}{x-c}$$
I thought I may be able to say that: $$ |x-c| < \delta $$
so in absolute terms and assuming x-c is smaller than 1.
$$ f'(c) > \lim_{ x \to c } \frac{f(x) - f(c)}{\delta}$$
then:
$$ f'(c)\delta > \lim_{ x \to c } f(x) - f(c)$$
Where I thought I can say that $f'(c)\delta = \epsilon $
Or is the problem here as $f'(c) = 0 $ is a possibility?
If this is wrong, can I make it work by adding something?
Thank you very much
AI: You have to be careful with the sign of $f(x) - f(c)$ when writing your second inequality, and above all with handling limits. You cannot write an inequality involving a limit right away: in your case, you have to write the inequality for a given $x$ and then let $x$ go to $c$. As a result, you will not end up with a strict inequality even if you have a strict inequality for every $x$. (Think of letting $x$ go to zero in $x > 0$, for example.)
Here is how I would do:
$\frac{f(x) - f(c)}{x-c}$ converges as $x$ tends to $c$, so this quantity is bounded in a neighborhood of $c$, i.e. there exists $M > 0$ and $\delta > 0$ such that for all $x \neq c$ in $]c-\delta, c+\delta[$, $\left|\frac{f(x) - f(c)}{x-c}\right| \leq M$. Therefore, for all such $x$ we have
$$
|f(x) - f(c)| \leq M |x-c|
$$
and thus $f(x)$ converges to $f(c)$ as $x$ converges to $c$.
|
H: combination of points in the open unit disk also lie in the unit disk
Suppose $a,b$ are points in the open unit disk $\{z\in\mathbb{C}:|z|<1\}$. Then, does the combination $$\frac{(1-|a|^2)b+(1-|b|^2)a}{1-|ab|^2}$$ also lie in the unit disk (open)?
I think yes, but am unable to prove. The triangle inequality does not give any hopes here. I think Cauchy-Schwarz should be used in some way here. Any hints how to proceed? Thanks beforehand.
AI: Using the triangle inequality, the absolute value of the expression is at most
$$
\frac{(1-|a|^2)|b|+(1-|b|^2)|a|}{1-|ab|^2} =
\frac{(|a|+|b|)(1-|a||b|)}{1-|a|^2 |b|^2} = \frac{|a|+|b|}{1+|a||b|} \\
= 1 - \frac{(1-|a|)(1-|b|)}{1+|a||b|} < 1 \, .
$$
Remark: This is related to Conformal automorphism of unit disk that interchanges two given points. If $a, b$ are distinct points in the unit disk then
$$
T(z) = \frac{c- z}{1- \bar cz}
$$
with
$$
c = \frac{(1-|a|^2)b+(1-|b|^2)a}{1-|ab|^2}
$$
is the (unique) automorphism of the unit disk with $T(a) = b$ and $T(b) = a$.
|
H: Prove that the limit of a convergent subsequence has to be greater than or equal to 4.
So, here's the problem:
Let $\{a_n\}_{n \in \mathbb{N}}$ be a sequence of real numbers such that all the terms of the sequence belong to the interval $[4,9)$. Then, prove or disprove the assertion that there exists a convergent subsequence $\{b_n\}$ such that $\lim_{n \to \infty}b_n \geq 4$.
Proof Attempt:
I claim that there exist no convergent subsequences that have limit strictly less than 4. Since this is a bounded sequence of real numbers, it will have a convergent subsequence and that subsequence must have limit greater than or equal to 4.
To prove this, suppose that all convergent subsequences must have limit strictly less than 4. We will pick one of them and say that the limit is $c$. Then, consider an $\epsilon$ neighbourhood of $c$ such that $c+\epsilon < 4$.
We can certainly define this because, for instance, $\epsilon = \frac{4-c}{2}$. Then, this neighbourhood of $c$ must contain infinitely many terms of the subsequence. In other words, there are terms of the original sequence that are outside of the given interval. This is a contradiction.
It follows that such a convergent subsequence cannot exist.
Does the proof above work? If it doesn't, then why? How can I fix it?
AI: I claim that there exist no convergent subsequences that have limit strictly less than 4.
To prove this, suppose that all convergent subsequences must have limit strictly less than 4.
That is not how you would argue by way of contradiction: what you would then have shown is that not all convergent subsequences have limit strictly less than $4$; however, one of them still could.
However, your proof actually only uses one arbitrary such subsequence, so just modifying your statement to
To prove this, suppose that there exists a convergent subsequence with limit strictly less than 4.
will do the job.
|
H: Interval of convergence of $\sum_{n=1}^\infty\frac{(x-2)^n\prod_{i=1}^n(n+i)}{n^n}$
$$\sum_{n=1}^\infty\left(\frac{\prod_{i=1}^n(n+i)}{n^n}\cdot(x-2)^n\right)$$
I am here to ask about another part of the power series given above. I am to find the "interval of convergence". I have the radius of convergence, which is $\frac{e}{4}$.
To find the interval of convergence, I need to determine whether the inequality is strict or not, in other words, whether the series converges at the extreme points:
$$x\in\left(2-\frac{e}4,2+\frac{e}4\right)$$
How can I find the interval of convergence? I mean, I try putting $2+\frac{e}4$ instead of $x$ in the power series given at the top, and check if it converges. But I don't know how to examine it. How do I examine if it is convergent? Is there a way to simplify the power series that is given?
And I will check $2-\frac{e}4$ after that, too.
AI: Hint: By Stirling's formula
$$
\frac{{(n + 1)(n + 2) \cdots (2n)}}{{n^n }} = \frac{{(2n)!}}{{n!n^n }} = \sqrt 2 \left( {\frac{4}{e}} \right)^n \left( {1 +\mathcal{ O}\!\left( {\frac{1}{n}} \right)} \right).
$$
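Following the hint: at either endpoint $x=2\pm\frac e4$, the absolute value of the general term is
$$\frac{\prod_{i=1}^n(n+i)}{n^n}\left(\frac{e}{4}\right)^n=\sqrt2\left(1+\mathcal O\!\left(\frac1n\right)\right)\longrightarrow\sqrt2\neq0,$$
so the terms do not tend to $0$, the series diverges at both endpoints, and the interval of convergence is the open interval $\left(2-\frac e4,\,2+\frac e4\right)$.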
|
H: Are there 3 out of 17 students A, B, C such that the height, id and phone number of B are between those of A and C?
In a classroom there are 17 students such that no 2 students are of the same height. Are there 3 students A, B, C such that the height, id and phone number of B are between those of A and C,
meaning:
$A_{height} > B_{height} > C_{height}$
$C_{id} > B_{id} > A_{id}$
$C_{phone} > B_{phone} > A_{phone}$
This is a question from a past test that I'm trying to solve, but I had a hard time with it.
I think it has something to do with the Erdős–Szekeres theorem, but I wasn't able to build the right subsequences.
My attempt:
We have a sequence of 17 heights, so there is a subsequence of 5 heights which is increasing or decreasing (let's say increasing), so they have 5 distinct ids, and there is a subsequence of 3, let's say again increasing, so I have
$A_{height} < B_{height} < C_{height}, A_{id} < B_{id} < C_{id}$
First I'm not sure if it's correct, but if it is, what do I do next with the phones?
AI: You are requiring too much. Let the $17$ people stand in line according to increasing heights. Then, you say, there is a subsequence of length $5$ where the IDs are increasing or decreasing, and among this subsequence there should be a subsubsequence of length $3$ where the phone numbers are monotone as well.
|
H: How to solve $\sum_{k=1}^{2500}\left \lfloor{\sqrt{k}}\right \rfloor $?
I was trying to solve $\sum_{k=1}^{2500}\left \lfloor{\sqrt{k}}\right \rfloor $ using Iverson's brackets but I can't get the bounds right. I think I'm also missing something.
Here's what I did:
$ m = \left \lfloor{\sqrt{k}}\right \rfloor$
$\sum_{m,k} \space m \space[ m = \left \lfloor{\sqrt{k}}\right \rfloor][0 \leq k < 2,500]$
$\sum_{m,k} \space m \space [ m^2 \leq k < (m+1)^2 ][0 \leq k \leq 2,500]$
$\sum_{m,k} \space m \space[ m^2 \leq k \leq 2,500 < (m+1)^2 ]$
from here I'm not sure what will be the bounds of $m$. I'm new to this kind of sum manipulation so please bear with me.
AI: You have
$$\sum_{m=1}^{50}m\sum_k[m^2\le k<\min\left((m+1)^2,2501\right)].$$
The inner sum is just the number of $k$ for which the inequality holds.
For $m<50$ that number is just $(m+1)^2-m^2$, and for $m=50$ it's $1$. So
one gets
$$\sum_{m=1}^{49}m\left((m+1)^2-m^2\right)+50.$$
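A direct computation confirms the grouping (my own quick check; both expressions evaluate to $82125$):

```python
# Check: direct sum of floor(sqrt(k)) vs. the grouped formula.
from math import isqrt

direct = sum(isqrt(k) for k in range(1, 2501))
grouped = sum(m * ((m + 1)**2 - m**2) for m in range(1, 50)) + 50
assert direct == grouped == 82125
print(direct)
```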
|
H: Maps passing to Quotient Topology
Let $f:X\rightarrow Y$ be a continuous map between topological spaces and let $q_X:X\rightarrow Z$ and $q_Y:Y\rightarrow W$ be quotient maps. Then, is there a unique map $F:Z\rightarrow W$ making the following diagram commute:
$$
F\circ q_X = q_Y\circ f?
$$
AI: The short answer is "it depends".
You can define an equivalence relation $\sim_X$ on $X$ by $x_1\sim_X x_2$ if $q_X(x_1) = q_X(x_2)$ and similarly for $\sim_Y$ on $Y$. Then a necessary condition for $F$ to exist is that $f$ preserves the equivalence classes, i.e. that $x_1 \sim_X x_2 \Rightarrow f(x_1) \sim_Y f(x_2)$.
If I were to guess I would say that this condition is also sufficient, but right now I am not convinced it is true.
Edit: After a short search (i.e. Wikipedia) I am quite confident that this condition is also sufficient. The space $Z$ "is" the quotient space $X/\sim_X$, and the properties of the quotient space ensure the existence of your map $F\colon X/\sim_X \to W$ (since, by the condition mentioned above, $x_1 \sim_X x_2 \Rightarrow q_Y\circ f(x_1) = q_Y\circ f(x_2)$).
|
H: A function which integrates to 0 over any set of measure 1
Problem:
$f$ is integrable on $[0,2]$, and for any measurable set $E \subseteq [0,2]$ with $m(E) = 1$ we have $\int_E f\,dm = 0$. Prove $f = 0$ a.e. on $[0,2]$.
I think the following theorem might be of help, but I'm not sure.
Theorem. If $f$ is integrable on $[a,b]$ and $\int_a^x f(t)dt = 0$ $\forall x \in [a,b]$, then $f = 0$ $a.e.$ on $[a,b]$.
The proof is in this link.
AI: Let $E_- = \{f < 0\}$ and $E_+ = \{f > 0\}$ and $A = [0,2] \setminus E_- \setminus E_+ = \{f = 0\}$. Suppose that $m(E_-) \geq 1$ or $m(E_+) \geq 1$. In either case we get a contradiction when integrating over the larger set. Thus $m(E_-),m(E_+) < 1$. Now we integrate over $E_- \cup A$ and $E_+ \cup A$ to find that in fact $m(E_-) = m(E_+) = 0$.
|
H: What is the fundamental error in my reasoning?
What is fundamentally wrong in writing $(-a)^{1/2}$ as $((-a)^{2})^{1/4}$ when $a$ is positive and thus equating it to $a^{1/2}$?
Edit:
I'm basically asking if there is anything wrong with this operation like multiplying $1$ and $2$ with $0$ and equating it to "prove" that $1=2$.
AI: When defining $a^{m/n}$ for $a > 0$, $m, n \in \Bbb N_{\ge 1},$ it is an exercise to verify that the following things hold:
$$a^{m/n} = (a^m)^{1/n} = (a^{1/n})^m = a^{m'/n'}$$
for any $m', n'\in \Bbb N_{\ge 1}$ such that $m'n = mn'$.
The point I want to make is that in a manipulation like $$a^{1/2} = a^{2/4} = (a^{2})^{1/4},$$
each equality has to be justified and does not just follow on its own.
It is to be checked that the usual manipulation of rational numbers goes through and that everything works the way you would desire to.
When you have $a < 0$, the things just don't hold anymore even if you involve complex numbers.
|
H: Class of 16 Participants Answering 6 Questions in Subsets of 4, Each One in A Different Combination With All Pairings Covered
16 Participants are arranged into 4 groups of 4. The participants work together on a question within their groups. Next the groups are rearranged into another 4 groups of 4, where they work on the second question, and so on for six questions. Each participant must work with every other participant at least once at some point in the exercise.
Is this possible? If not, how can you prove that it is impossible. If it is possible, what are the six configurations of the participants?
The initial configuration of the 16 participants into groups A, B, C & D is:
A: 1, 2, 3, 4
B: 5, 6, 7, 8
C: 9, 10, 11, 12
D: 13, 14, 15, 16
AI: This is a variant of the "social golfer problem", but there you want each pair to be together at most once.
1-2-3-4, 5-6-7-8, 9-10-11-12, 13-14-15-16
1-5-9-13, 2-6-10-14, 3-7-11-15, 4-8-12-16
1-6-11-16, 2-5-12-15, 3-8-9-14, 4-7-10-13
1-7-12-14, 2-8-11-13, 3-5-10-16, 4-6-9-15
1-8-10-15, 2-7-9-16, 3-6-12-13, 4-5-11-14
So after five sessions, every pair has worked together exactly once, so you can do whatever you like for the sixth session.
This was taken from "four groups of four golfers for five weeks" at http://web.archive.org/web/20050407074608/http://www.icparc.ic.ac.uk/~wh/golf/solutions.html#4-4-5
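The schedule is easy to verify mechanically; a short Python check (my own) that the five rounds cover each of the $\binom{16}{2}=120$ pairs exactly once:

```python
# Verify that across the 5 rounds every pair of the 16 participants
# appears together in exactly one group.
from itertools import combinations

rounds = [
    [(1,2,3,4), (5,6,7,8), (9,10,11,12), (13,14,15,16)],
    [(1,5,9,13), (2,6,10,14), (3,7,11,15), (4,8,12,16)],
    [(1,6,11,16), (2,5,12,15), (3,8,9,14), (4,7,10,13)],
    [(1,7,12,14), (2,8,11,13), (3,5,10,16), (4,6,9,15)],
    [(1,8,10,15), (2,7,9,16), (3,6,12,13), (4,5,11,14)],
]
pairs = [frozenset(p) for rnd in rounds for g in rnd
         for p in combinations(g, 2)]
assert len(pairs) == 120 and len(set(pairs)) == 120  # each pair exactly once
```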
|
H: Computation of the integral of the product of a double exponential and an exponential
I would like to compute the following integral:
$$\int_0^t\exp\left(\frac{\alpha^2}{2\lambda}e^{-2\lambda s}-\lambda s\right)ds\space\space\space(1)$$
An integral near from this one is:
$$\int_0^t\exp\left(\frac{\alpha^2}{2\lambda}e^{-2\lambda s}\right)ds\space\space\space (2)$$
and may be computed by setting the following change of variables $u=e^{-2\lambda s}$. This leads to a integral of the following kind:
$$\int_c^d \frac{e^{au}}{u}du$$
By using the power series expansion of the exponential, a calculation of integral (2) may be achieved. The remaining issue is the speed of convergence, but that is a separate, numerical matter.
Unfortunately, if I am not wrong, this trick is not usable to compute integral (1). Any idea how to obtain a usable expression for integral (1)?
AI: Considering $$I=\int\exp\left(\frac{\alpha^2}{2\lambda}e^{-2\lambda s}-\lambda s\right)\,ds$$ let (not so obvious, I agree but quite close to your $u$)
$$s=-\frac 1{\lambda }\log \left(\frac{x}{\sqrt{\frac{\alpha ^2}{2\lambda}}}\right)\implies ds=-\frac{dx}{\lambda x}$$
$$I=-\frac 1\alpha\sqrt{\frac{2}{\lambda }}\int e^{x^2}\,dx=-\frac 1\alpha\sqrt{\frac{\pi}{2\lambda }}\,\text{erfi}(x)$$
Back to $s$, this gives
$$I=-\frac 1 \alpha\sqrt{\frac{\pi}{2\lambda }}\,\text{erfi}\left(\sqrt{\frac{\alpha ^2}{2\lambda }}\,e^{-\lambda s}\right)$$ and evaluating between the bounds gives
$$J=\int_0^t\exp\left(\frac{\alpha^2}{2\lambda}e^{-2\lambda s}-\lambda s\right)\,ds=\frac 1 \alpha \sqrt{\frac \pi {2\lambda}}\left(\text{erfi}\left(\frac{\alpha }{\sqrt{2\lambda}}\right)-\text{erfi}\left(\frac{\alpha }{\sqrt{2\lambda }}\,e^{-\lambda t}\right) \right)$$
Edit
Making, as you thought, the substitution $u=e^{-2\lambda s}$ is good and leads to the antiderivative $$I=\int -\frac{e^{\frac{\alpha ^2 u}{2 \lambda }}}{2 \lambda \sqrt{u}}\,du$$ Now
$${\frac{\alpha ^2 u}{2 \lambda }}=x^2\implies u=\frac{2 \lambda x^2}{\alpha ^2}\implies du=\frac{4 \lambda x}{\alpha ^2}\,dx$$
$$I=-\frac{ \sqrt{\frac{2\lambda }{\alpha ^2}}}{\lambda }\int e^{x^2} \,dx=-\frac{ \sqrt{\frac{2\lambda }{\alpha ^2}}}{\lambda }\frac{1}{2} \sqrt{\pi } \text{erfi}(x)$$
Assuming $\alpha >0$ and $\lambda>0$, the definite integral is
$$\color{blue}{\int_0^t\exp\left(\frac{\alpha^2}{2\lambda}e^{-2\lambda s}-\lambda s\right)\,ds=\frac 1 \alpha \sqrt{\frac{\pi }{2\lambda}}\Big[ \text{erfi}\left(\frac{\alpha }{\sqrt{2\lambda}}\right)-\text{erfi}\left(\frac{\alpha }{\sqrt{2\lambda }}\,e^{-\lambda t}\right)\Big]}$$
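The closed form can be checked against numerical quadrature; a sketch using SciPy (the parameter values are arbitrary choices of mine):

```python
# Compare the erfi closed form with direct numerical integration.
import numpy as np
from scipy.integrate import quad
from scipy.special import erfi

alpha, lam, t = 1.3, 0.7, 2.0
numeric, _ = quad(lambda s: np.exp(alpha**2 / (2*lam) * np.exp(-2*lam*s)
                                   - lam*s), 0, t)
c = alpha / np.sqrt(2 * lam)
closed = np.sqrt(np.pi / (2*lam)) / alpha * (erfi(c) - erfi(c * np.exp(-lam*t)))
assert np.isclose(numeric, closed)
```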
|
H: Let $A=\begin{bmatrix}1 & 2\\-1 & 1\end{bmatrix}$. Find a Matrix $B$ s.t for any $u,v \in \Bbb{R}^2, (u,Av)=(Bu,v)$
Let $A=\begin{bmatrix}1 & 2\\-1 & 1\end{bmatrix}$. Find a $B\in \Bbb{M}_2(\Bbb{R})$ such that for any $\textbf{u,v} \in \Bbb{R}^2, (\textbf{u},A\textbf{v})=(B\textbf{u},\textbf{v})$ or prove that no such $B$ exists.
AI: The most basic and mechanical way to do this is by computing the products explicitly.
$$
<u,Av> = u_1v_1 + 2u_1v_2 - u_2v_1 + u_2v_2
$$
$$
<Bu,v> = b_{11}u_1v_1 + b_{21}u_1v_2 + b_{12}u_2v_1 + b_{22}u_2v_2
$$
and you get $b_{11}=1; b_{12}=-1; b_{21}=2 ; b_{22}=1$, that is, $B=A^T$, as a comment suggests.
|
H: On classification of groups of order $p^5$
Can someone suggest me some source where the author has classified all non-isomorphic groups of order $p^5$ ?
Edit 1: I need the complete classification (not up to isoclinism), and also in finitely presented form. I found that as the prime $p$ grows, the number of groups increases. So, can we completely classify all groups of order $p^5$ for any prime $p$, in finitely presented form, or get their structure description?
AI: There are several papers in the literature on the classification of groups of order $p^5$. For example,
R. James, The groups of order $p^6$ (p an odd prime), Math. Comp. 34 (1980), 613-637,
also contains the case $p^5$.
|
H: How do I integrate an unknown function of a variable e.g. $a(t)$?
How would I integrate equations of the following form:
$$\frac{d a(t)}{dt}=ka(t)$$
where $k$ is constant.
I have the feeling that this is quite simple, but I seem to be stuck.
My initial thought was that I could do:
$$a(t) da(t) = k dt \iff \frac{a^2(t)}{2} = kt + c $$
But for some reason I don't think this is right.
Should I instead do integration by parts as done in Integrating an unknown function?
AI: This is quite an infamous differential equation, one of the basic ones you learn in calculus courses. Use separation of variables.
$$
\frac{da(t)}{dt} = ka(t) \implies \frac{da(t)}{a(t)}= k\,dt \implies \int \frac{da(t)}{a(t)}=\int k\, dt $$
and so on.
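Carrying the separation through (a one-line completion of the "and so on"):
$$\ln|a(t)| = kt + C \implies a(t) = a(0)\,e^{kt}.$$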
|
H: In a math competition with $8$ students and $8$ problems, if each problem is solved by $5$ students, then two students together solve all problems.
Eight students are entered in a math competition. They all have to solve the same set of $8$ problems. After grading, we see that each problem was correctly solved by exactly $5$ students. Show that there are two students who together have solved all the problems.
AI: For each pair of students, consider the set of those problems which neither of them solved. There exist
$${8 \choose 2} = 28$$
sets; we have to prove that at least one set is empty.
For each problem, there are at most $8-5=3$ students who did not solve it.
From these students at most
$${3\choose 2}= 3$$ pairs can be selected, so the
problem can belong to at most 3 sets. The 8 problems together can
belong to at most $8\cdot3 =24$ sets.
Hence, at least $28 - 24 = 4$ sets must be empty, which proves the claim.
|
H: Expected Number of Distinct Numbers in N trials from a set.
Given the set of numbers from 1 to n: { 1, 2, 3 .. n } We draw n numbers randomly (with uniform distribution) from this set (with replacement). What is the expected number of distinct values that we would draw?
I came across this question on Brainstellar. I have understood a way to solve this problem using Stack Exchange but I am not able to understand the fault in my methods.
$X_i$ represents if I have picked up a unique number on the $i^{th}$ pick. Naturally,
$X_i$
= 1 when I have picked up a number that was not picked earlier.
= 0 when I have picked up a number that was already picked earlier.
$E[\sum_{i=1}^n X_i]$ should be my answer to the problem.
$E[X_1] = 1$ as I would definitely pick a unique number on the first trial.
$E[X_i] = \frac{(n-1)^{i-1}}{n^{i-1}}$ where $i \neq 1$
The first part is the probability that in the $i-1$ trials a particular number was never picked. This is how I got the $E[X_i]$.
Now all i need to do is $1 + \sum_{i=2}^n E[X_i]$
And what I get is $$1 + \frac{n-1}{n} + \cdots + \left(\frac{n-1}{n}\right)^{n-1}$$
I am not able to find the folly in this approach.
PS: I am not a maths student so kindly forgive any silly mistakes you might find. I have been struggling for hours trying to find the mistake but to no avail.
AI: OK, I have figured it out.
Mistake made: I tried adding $1$ even after summing over from $i = 1$ to $i = n$.
Also, $E[X_i]$ was calculated wrongly the first time as $\frac{1}{n}\left(\frac{n-1}{n}\right)^{i-1}$.
The correct value is
$$E[X_i] = \left(\frac{n-1}{n}\right)^{i-1}.$$
So the answer is $\sum_{i=1}^n \left(\frac{n-1}{n}\right)^{i-1}$, which evaluates to
$$n\left(1 - \left(\frac{n-1}{n}\right)^{n}\right).$$
Thanks to @lulu in the comments for pointing this out.
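A quick Monte Carlo check of the corrected formula (my own sketch; the parameters are arbitrary):

```python
# Simulate drawing n times with replacement from {1,...,n} and compare the
# average number of distinct values with n * (1 - ((n-1)/n)**n).
import random

n, trials = 20, 200_000
avg = sum(len({random.randint(1, n) for _ in range(n)})
          for _ in range(trials)) / trials
print(avg, n * (1 - ((n - 1) / n) ** n))  # the two values should be close
```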
|
H: Spreading tickets in a lottery actually diminishes your chances?
Here is the scenario: There is a lottery running for $n$ terms, which means that it is repeated. In each term, there are a total of $T$ tickets and one prize. You currently own $t$ tickets, and your dilemma is to either use all of your tickets in one go or spread them over $n$ terms. The probabilities of winning at least one prize (naturally, exactly one in the first scenario) when you group or evenly spread your tickets are, respectively:
$P_1=\frac{t}{T}$ and $P_2=1-(\frac{T-\frac{t}{n}}{T})^n$
Let's say that $T=100$, $t=12$ and $n=2$:
$P_1=\frac{12}{100}=0.12$ and $P_2=1-(\frac{100-\frac{12}{2}}{100})^2=0.1164$, hence $P_1>P_2$.
Even if I try to spread the tickets unevenly, like 11 tickets in one term and 1 in the other, the relation is the same:
$P_1=0.12$ and $P_2=1-(\frac{100-11}{100})\cdot(\frac{100-1}{100})=0.1189$, still $P_1>P_2$.
The mathematical model where the tickets are distributed unevenly becomes:
$P_2=1-\prod_{i=1}^n\frac{T-t_i}{T}$ where $t_i$ is the number of tickets spent in each term.
I tried plotting the mathematical models on Desmos and playing around with different combinations of variables, but it always seemed that using all the tickets together gives better chances of winning anything at all than spreading them, even if by a minuscule margin, in every case.
Will this always be the case; should we always use all of our tickets in one go? How can it be mathematically proved then? I think that the number of prizes shouldn't change the outcome, should it?
Thank you for reading!
AI: Suppose $n=2$. Using $x$ tickets in the first lottery and $t-x$ in the second yields a probability of winning $1-(1-\tfrac{x}{T})(1-\tfrac{t-x}{T})$. The maximum in the range $0\leq x \leq t$ is at the bounds, so it is better to attend in only one lottery, either the first or the second.
This is true in general for $n>2$ (by induction). The idea is that you can always repeat the previous argument for two of the lotteries (say, the 4th and the 10th) keeping the number of tickets used in the others fixed, and deduce that the optimal strategy would be to use all the tickets dedicated to the 4th and the 10th lottery in only one of them.
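To see the claim numerically, here is a small exhaustive check over all ways to split the tickets across $n = 3$ lotteries (my own sketch; the parameter values $T = 100$, $t = 12$ are the ones from the question):

```python
from itertools import product

T, t = 100, 12
splits = [s for s in product(range(t + 1), repeat=3) if sum(s) == t]
win = lambda s: 1 - (1 - s[0]/T) * (1 - s[1]/T) * (1 - s[2]/T)
best = max(splits, key=win)
print(best, win(best))            # e.g. (12, 0, 0) with probability 0.12
print((4, 4, 4), win((4, 4, 4)))  # even split: ~0.1153, strictly worse
```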
|
H: Balls in connected metric spaces
Let $(X,d)$ be a connected metric space and let $x\in X$. I was wondering if it can happen that there exists $\delta>0$ such that $B_\epsilon(x)=X$ for any $\epsilon>\delta$ and $B_\epsilon(x)\neq X$ for every $\epsilon<\delta$.
If we remove the hypothesis that $X$ is connected, then the discrete metric shows that this can happen. Is it sufficient to assume $X$ connected to remove this circumstance? Also, in general, is there some standard assumption that prevents this circumstance?
AI: Take $X=[-1,1]$, endowed with the usual metric. Then $B_\varepsilon(0)=X$ if $\varepsilon>1$ and $B_\varepsilon(0)\ne X$ if $\varepsilon<1$.
|
H: Fish weight - Normal distribution
If the weight of a fish follows a Normal distribution with $\mu = 900$ and $\sigma ^{2} = 150^2$, what is the probability that out of $10$ fish chosen randomly, at least $2$ and at most $9$ weigh not less than $667.5$?
So I assumed by the central limit theorem that the mean of the $10$ fish has $\mu = 900$ and $\sigma ^{2} = 150^2/10$. When I try to turn this into a standard score $Z$ I get $Z= - 4.901$. Checking the $Z$ table, I can assume that the probability will be (essentially) $1$.
So if it's $1$, is getting $2<X<9$ also probability $1$? Or am I not in the right direction?
AI: Notes:
The CLT is not an option here: the question is about how many individual fish weigh at least $667.5$, not about the average weight. Each fish either weighs "at least $667.5$" or it does not; the probability of weighing exactly $667.5$ is $0$.
Solution
First: calculate the probability that one fish is in that weight range
Second: conclude with the binomial distribution
$P(X=k)= \binom{10}{k}p^k(1-p)^{10-k}$
where $p$ is the probability that one fish is in the desired weight range and $k\in\{0,1,2,\dots,10\}$ is the number of successes
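Carrying the two steps out numerically (a sketch of my own; it uses math.erf for the normal CDF so no external packages are needed):

```python
from math import comb, erf, sqrt

def phi(z):  # standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

p = 1 - phi((667.5 - 900) / 150)  # P(one fish weighs >= 667.5) ~ 0.939
answer = sum(comb(10, k) * p**k * (1 - p)**(10 - k) for k in range(2, 10))
print(p, answer)                  # P(2 <= successes <= 9) ~ 0.47
```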
|
H: Uncountably many disjoint dense subsets in $\Bbb{R}$
Show that there are uncountably many disjoint dense subsets in $\Bbb{R}$.
I know $\Bbb{Q}$ is a dense subset in $\Bbb{R}$. But other than this and disjoint to $\Bbb{Q},$ I have no idea.
Please help me. Thank you.
AI: Consider an equivalence relation $\sim$ on $\Bbb R$ given by $x\sim y$ iff $x-y\in \Bbb Q$. Using the axiom of choice, take a representative from each equivalence class and consider the corresponding translate of $\Bbb Q$. Note that any translate of $\Bbb Q$ is dense in $\Bbb R$.
Actually, any equivalence class other than $\Bbb Q$ itself can be written as $i+\Bbb Q$ for some $i\in \Bbb R\backslash \Bbb Q$; in particular any equivalence class is countable. So there are uncountably many equivalence classes, since a countable union of countable sets is countable, but $\Bbb R$ is uncountable.
Note another fact, $i+\Bbb Q=(i+r)+\Bbb Q$ for any $r\in \Bbb Q$ and any $i\in \Bbb R\backslash \Bbb Q$. In other words, two distinct irrational translates of $\Bbb Q$ may give same equivalence class. That's why we need to consider the equivalence relation $\sim$ to get uncountably many disjoint dense subsets of $\Bbb R$.
|
H: Prove $\lim_{h\rightarrow0}m(E\Delta(E+h)) = 0$ for measurable set $E$ with finite measure
Here is my attempt:
Define $f_n=\chi_{E\Delta(E+ \frac{1}{n})}$. Then $f_n$ decreases with regard to $n$. Since $$m(E\Delta(E+\frac{1}{n})) = \int_\mathbb{R}\chi_{E\Delta(E+ \frac{1}{n})}dm,$$ it suffices to show $$\lim_{n\to\infty}\int_\mathbb{R}\chi_{E\Delta(E+ \frac{1}{n})}dm = 0.$$ According to the Lebesgue dominated convergence theorem, $$\lim_{n\to\infty}\int_\mathbb{R}\chi_{E\Delta(E+ \frac{1}{n})}dm = \int_\mathbb{R}\lim_{n\to\infty}\chi_{E\Delta(E+ \frac{1}{n})}dm.$$ Thus we only need to show $$\chi_{E\Delta(E+ \frac{1}{n})}\overset{a.e.}\to0.$$
EDIT:
By the Approximation Theorem of Measure Theory, $\forall \epsilon > 0$ there exist a finite number of disjoint intervals $\{I_k\}_{k=1}^N$ such that $m(E\Delta(\cup_{k=1}^NI_k)) < \epsilon$. Assume $F = \cup_{k=1}^NI_k$, then $m(E\Delta F) < \epsilon$. Define $f_n = \chi_{E\Delta (E+1/n)}$, $g_n = \chi_{F\Delta (F+1/n)}$.
Step 1. I will show $\int \mid f_n - g_n\mid dm < 2\epsilon$. Since $$(E\Delta (E+1/n))\Delta (F\Delta (F+1/n))\subseteq (F\Delta E)\cup ((F + 1/n)\Delta (E + 1/n))$$ we have $$\int \mid f_n - g_n\mid dm = m((E\Delta (E+1/n))\Delta (F\Delta (F+1/n))) \leq m(F\Delta E) + m((F + 1/n)\Delta (E + 1/n)) < 2\epsilon$$
Step 2. I will show $\lim_{n\to \infty}\int \mid g_n\mid dm = 0$. $\{I_k\}_{k=1}^N$ can be written as $\{[a_k,b_k)\}_{k=1}^N$, then $$\int \chi_{F\Delta (F+1/n)}dm = m(\cup_{i=1}^N([a_i,a_i+1/n)\cup [b_i,b_i+1/n)))\leq \frac{2}{n}N$$ Therefore $$\lim_{n\to \infty}\int\mid g_n\mid dm = \lim_{n\to \infty}\int g_n\, dm = 0$$
Step 3. $$\int \mid f_n - g_n\mid dm < 2\epsilon$$ $$\implies \int \mid f_n\mid dm - \int \mid g_n\mid dm < 2\epsilon$$ $$\implies \lim_{n\to \infty}\int \mid f_n \mid dm < 2\epsilon$$ Let $\epsilon \to 0$, we get $\lim_{n\to \infty}\int\mid f_n\mid dm=0$. Since $f_n$ is non-negative, $\lim_{n\to \infty}\int f_n dm=0$
AI: It is neither true that $f_n$ is decreasing nor is it true that $f_n \to 0$ a.e.
By the Approximation Theorem of Measure Theory (Ref. Halmos's book) we can find a finite disjoint union $F$ of intervals of the type $[a_i,b_i), 1 \leq i \leq N$ such that $m (E\Delta F) <\epsilon$. Let $g_n= \chi_{F\Delta (F+\frac 1 n)}$. I will let you verify that $\int |f_n-g_n| <2 \epsilon$ and $\int g_n \leq \frac 2 n N \to 0$.
|
H: What's the differentiation of $2x(\frac{dx}{dt})$ with respect to the variable $t$? Is it $2(\frac{dx}{dt})^2+2x(\frac{d^2x}{dt^2})$?
What's the differentiation of $2x\left(\frac{dx}{dt}\right)$ with respect to the variable $t$?
Is it $$2\left(\frac{dx}{dt}\right)^2+2x\left(\frac{d^2x}{dt^2}\right)?$$
AI: Let $g(t) = 2 x(t) \frac{dx}{dt}(t)$.
By the product rule for differentiation we get
$$
\frac{dg}{dt}(t) = 2 \left(\frac{dx}{dt}(t)\right)^2 + 2x(t)\frac{d^2x}{dt^2}(t)
$$
so yes you are correct!
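For completeness, a one-line symbolic check (my own sketch using sympy):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
g = 2 * x(t) * sp.diff(x(t), t)
print(sp.diff(g, t))  # 2*Derivative(x(t), t)**2 + 2*x(t)*Derivative(x(t), (t, 2))
```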
|
H: Defining the Polar set
For a subset $P$ of $\mathbb{R}^n$ the polar set is defined by:
$$
P^*:= \{ y\in \Bbb R^n\mid y\cdot x \leq 1 \text{ for all } x\in P \}.
$$
Can someone break the definition into plain English, as I'm struggling to understand the notation used and the structure of such a set.
The best guess I've got is: $P^*$ is defined as the set of all $(x,y)$ which follows that, for all real $y$ ($\in \mathbb{R}^n$) there exists an $x$ in $P$ that satisfies $y\cdot x \le 1$.
Even if I have this correct, how is this useful in general geometry? The original question is to show that if $P$ is a convex set that contains the origin then $(P^*)^*=P$.
AI: One way to read the definition literally is to say the following:
$P^*$ is the set of all vectors $y$ for which $y \cdot x \leq 1$ holds for all $x \in P$.
For more information on how to make sense of definitions like these, see the wiki page for "set-builder notation". The idea behind a polar set is that it provides a "test" by which we can check whether or not an element fails to be in $P$.
To see how this works, let's consider an example. Take the set
$$
P = \{(x_1,x_2) \in \Bbb R^2 \mid 0 \leq x_2 \leq x_1\}.
$$
That is, $P$ is the region in the $xy$-plane satisfying $0 \leq y \leq x$. As it turns out, the polar set will be equal to
$$
P^* = \{(x_1,x_2) \in \Bbb R^2 \mid x_1 \leq 0 \text{ and } x_2 \leq -x_1\}.
$$
Now, consider the point $v = (1,2)$, which we can see fails to be an element of $P$. The polar set gives us another way to show that $v$ fails to be an element of $P$: if there is an element $w \in P^*$ for which $w \cdot v > 1$, then we can be sure (by the definition of the polar set) that $v$ is not an element of $P$.
In this case, it is useful to consider the element $w = (-2,2) \in P^*$. We find that
$$
v \cdot w = 1 \cdot (-2) + 2 \cdot (2) = 2 > 1.
$$
If $v$ were an element of $P$, then by the definition of the polar set, any $w \in P^*$ would have to satisfy $v \cdot w \leq 1$. Because that fails to happen here, we can be sure that $v$ is not an element of $P$.
|
H: What does this colon notation mean?
I was reading a paper and found the following, "$k$-th entry of sorted $S((i,:))$". I don't understand what does this colon notation mean.
Suppose we have $$S=\begin{pmatrix}1.01330\dots &1.00958\dots &0.96263\dots &0.35814\dots &0.75399\dots \\ 0.59616\dots &0.79699\dots &0.56635\dots &0.24665\dots &0.51927\dots \\ 0.32087\dots &0.31970\dots &0.45483\dots &0.11341\dots &0.23876\dots \\ 0.55846\dots &0.70586\dots &0.53054\dots &0.40040\dots &0.52716\dots \\ 0.86933\dots &1.03356\dots &0.82586\dots &0.42076\dots &0.88582\dots \end{pmatrix}$$, then what is the "$k$-th entry of sorted $S((i,:))$"? Can someone explain it to me? (For example, let's say $k=2$ here.)
AI: Without having read the original passage, I guess $S(i, :)$ is the $i^{\text{th}}$ row of $S$. The colon simply means "everything" $-$ in this case every entry from the $i^{\text{th}}$ row. This colon notation is pretty common in MATLAB/Octave. Then the $k^{\text{th}}$ entry of $S(i, :)$ is just $S(i, k)$, the entry at the $i^{\text{th}}$ row and $k^{\text{th}}$ column.
In contrast, $S(:, j)$ is the $j^{\text{th}}$ column of $S$.
Edit: If "sorted" just means selecting that row by context, you can skip my edit. But in case "sorted" means some kind of rearranging the entries within that particular row then you need to dump away the $S(i, k)$ stuff and do the rearrangement by yourself. Oscar got me.
E.g. In case you pick the $3^{\text{rd}}$ row: $(0.321\cdots \text{ } 0.320\cdots \text{ } 0.455\cdots \text{ } 0.113\cdots \text{ } 0.239\cdots )$ and the "sorted" means sort the entries in ascending order, then $k = 2$ would pick $0.239\cdots$ instead of $0.320\cdots$.
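NumPy uses the same colon convention, so here is an illustration (my own sketch; the matrix entries below are 3-decimal placeholders for the truncated values in the question, and indexing is 0-based, so $k=2$ in the question corresponds to index 1):

```python
import numpy as np

S = np.array([[1.013, 1.010, 0.963, 0.358, 0.754],
              [0.596, 0.797, 0.566, 0.247, 0.519],
              [0.321, 0.320, 0.455, 0.113, 0.239],
              [0.559, 0.706, 0.531, 0.400, 0.527],
              [0.869, 1.034, 0.826, 0.421, 0.886]])

i, k = 2, 1                 # third row, second entry (0-based)
row = S[i, :]               # the whole i-th row, like S(i,:) in MATLAB
print(row[k])               # 0.320: the k-th entry of the unsorted row
print(np.sort(row)[k])      # 0.239: the k-th entry after sorting ascending
```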
|
H: skew-diagonalizing an anti-symmetric matrix
Let's assume that i have a (real) $2N\times2N$ anti-symmetric matrix
$B=\left\{ b_{ij}\right\} $ with the property that
$BB^{T}=\boldsymbol{1}$
where $\boldsymbol{1}$ is the identity matrix.
Is it true that I can always find an orthonormal matrix $U$ such that
$\tilde{B}=U^{T}BU$
is an anti-symmetric matrix with $\pm a$ on the counter diagonal,
and zeros everywhere else? And if so, how to prove it?
For instance in the $4\times4$ case, if I start with the (almost)
most general case
$B=\left(\begin{array}{cccc}
0 & b_{1} & b_{2} & b_{3}\\
-b_{1} & 0 & b_{3} & -b_{2}\\
-b_{2} & -b_{3} & 0 & b_{1}\\
-b_{3} & b_{2} & -b_{1} & 0
\end{array}\right)$
where $b_{1}^{2}+b_{2}^{2}+b_{3}^{2}=1$ by an appropriate choice
of $U$ i can bring it to the form
$B=\left(\begin{array}{cccc}
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & -1 & 0 & 0\\
-1 & 0 & 0 & 0
\end{array}\right)$
AI: What is true in general is that you can find an orthonormal matrix $P$ such that $\bar{B}=P^TBP$ is a block diagonal matrix with blocks of the form $M_b=\pmatrix{0 & b \cr -b & 0}$ (general theory of orthonormal reduction of normal matrices + some playing around the normal form using skewsymmetry).
Now $\bar{B}\bar{B}^T=P^TBP P^TB^TP=I_n,$ since $B$ and $P$ are orthonormal. Looking at the blocks, you see that you need $M_b M_b^T=I_2$, and that it implies $b^2=1$.
Hence the blocks involved are $M_1$ and $M_{-1}$. Changing the order of the vectors in your orthonormal basis (aka the columns of $P$), you get the result you seek.
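A quick numerical check of the $4\times4$ example (my own sketch; the values of $b_1, b_2$ are arbitrary subject to $b_1^2+b_2^2+b_3^2=1$):

```python
import numpy as np

b1, b2 = 0.3, 0.5
b3 = np.sqrt(1 - b1**2 - b2**2)
B = np.array([[0,   b1,  b2,  b3],
              [-b1,  0,  b3, -b2],
              [-b2, -b3,  0,  b1],
              [-b3,  b2, -b1,  0]])
print(np.allclose(B @ B.T, np.eye(4)))    # True: B is both skew and orthogonal
print(np.round(np.linalg.eigvals(B), 6))  # eigenvalues +-1j, matching blocks M_{+-1}
```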
|
H: Prove that a particle will never pass through the centre of a sphere under a condition.
Question: A particle was fired inside of a sphere. There was no gravity acting on the particle, no air resistance and each time it hit the inside of the sphere, it reflected without losing any velocity. If the particle doesn't pass through the centre of the sphere before the second bounce, show that it will never pass through the centre.
My attempts:
I considered representing the points where the particle reflects as a variable point and showing that the angle will never equal zero, no matter where the particle comes from (assuming it isn't coming from the centre).
I also considered looking for a vector representation of each reflection to see if there were any interesting results, although I personally couldn't find anything.
I also considered the possibility of a recurrence relation that related each angle of reflection, although this was also futile.
Note: Although a geometric proof would be helpful, I was looking for a more vector-related proof. If there are no ways to do this, then I am happy to accept a geometric one. Vectors would be preferred, or at least some algebraic proof, but if nothing can be done, there's no issue.
Any help or guidance will be appreciated!
AI: Let's assume the particle passes through the origin at some iteration. It follows a straight line and hits the sphere. Notice that a straight line passing through the origin meets the sphere along a radius, and is therefore normal to the sphere. Because of the normality to the surface, the particle returns along the same straight line (this can be proven by symmetry arguments) - and it must pass through the origin again.
We conclude that if at some iteration we pass through the origin, we are forever confined to the same line, passing the origin at every iteration after the initial one.
If we use the same logic in reverse-time we conclude that it passed the origin at every iteration before the initial one.
We are told that the particle did not pass the origin at the first iteration, so we can conclude that it never will.
|
H: Sobolev Spaces inner product
The inner product in the Sobolev space is defined as
$$ \langle u,v\rangle _{H^m(\Omega)} = \int_{\Omega}\sum_{k=0}^m \sum_{|\beta|=k} D^{\beta}u\,D^{\beta}v\, d\Omega $$
where
$$D^{\beta}u = \frac{\partial^{|\beta|}u}{\partial x_1^{\beta_1}\partial x_2^{\beta_2}\cdots\partial x_n^{\beta_n}},$$
$$ \beta_1+\cdots+\beta_n =|\beta|.$$
So for example assume $ \Omega \subset R^2; m=1$ then we have
$$ \langle u,v\rangle _{H^1(\Omega)} = \int_{\Omega}uv + \frac{\partial u}{\partial x}\frac{\partial v}{\partial x} +\frac{\partial u}{\partial y}\frac{\partial v}{\partial y} d\Omega $$
if $m=2$
$$ \langle u,v\rangle _{H^2(\Omega)} = \int_{\Omega}uv + \frac{\partial u}{\partial x}\frac{\partial v}{\partial x} +\frac{\partial u}{\partial y}\frac{\partial v}{\partial y} +
\frac{\partial^2 u}{\partial x^2}\frac{\partial^2 v}{\partial x^2} +
\frac{\partial^2 u}{\partial y^2}\frac{\partial^2 v}{\partial y^2} +
\frac{\partial^2 u}{\partial x \partial y}\frac{\partial^2 v}{\partial x \partial y} d\Omega $$
My question is: why don't we need a factor of $2$ before $ \frac{\partial^2 u}{\partial x \partial y}\frac{\partial^2 v}{\partial x \partial y} $, since we have another partial derivative $ \frac{\partial^2 u}{\partial y \partial x}\frac{\partial^2 v}{\partial y \partial x} $? Am I missing the point?
Also, how will this integral look if $m=3$?
AI: Yes, the 2 should be needed if you want to write it like that. That is because in the definition of weak derivative you "discharge" the derivatives on a function that is $\mathcal{C}^{\infty}_0$ and thus regular enough to have symmetry of partial derivatives.
If $m=3$, then you have to keep combining derivatives, i.e. $\frac{\partial^3u}{\partial x^3}\frac{\partial^3v}{\partial x^3}, \frac{\partial^3u}{\partial x^2 \partial y}\frac{\partial^3v}{\partial x^2 \partial y},$ and so on.
|
H: Intersection of two disconnected sets
I have to prove or disprove that the intersection of two disconnected sets is a disconnected set. I tried finding a counterexample on the real line with the usual metric, but I can't seem to find one. I want to proceed to prove that the intersection is indeed a disconnected set, but want to know if I might be wrong.
Thanks in Advance
AI: Take $\{0,1\}$ and $\{1,2\}$. They are both disconnected sets of $\mathbb R$ and yet their intersection $\{1\}$ is connected.
|
H: Shouldn't I be able to use both rads and degrees in complex exponentials?
Up until now, I've been using rads and degrees interchangeably, simply using the $^{\circ}$ symbol to signify degrees, and then using the correct trigonometric function, so that:
$$\sin(90^\circ)=\sin(\pi/2)$$
I would think that the same line of thought could be used when dealing with complex exponentials, since x appears to always end up in a trigonometric function:
$$e^{ix}=\cos(x) + i\sin(x)$$
However this seems to completely break down when logarithms are brought into the picture:
$$\ln(e^{ix})=i(x + 2\kappa\pi),\hspace{1em}\kappa\in\mathbb{Z}$$
But (assume $\kappa=0$ for simplicity's sake)
$$\ln(5e^{i90^\circ}) = \ln 5 + i90^\circ$$
Isn't the same number as
$$\ln(5e^{i\pi/2}) = \ln 5 + i\pi/2$$
My textbook (on electronic circuit analysis) tells me to use radians here, but there is no mention as to why. Any help would be greatly appreciated.
AI: Why it matters:
In degrees, Euler's formula would read
$$e^{iz\pi/180}=\cos(z)+i\sin(z)$$ and the whole world would hate that $\frac\pi{180}$ factor.
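A concrete illustration of why the convention matters (my own sketch):

```python
import cmath, math

print(cmath.exp(1j * math.pi / 2))         # ~ 1j: e^{i pi/2} = i, radians work directly
print(cmath.exp(1j * math.radians(90)))    # same value: degrees must be converted first
print(cmath.exp(1j * 90))                  # (-0.448+0.894j): feeding "90 degrees" in raw is wrong
```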
|
H: If all elements of Lie algebra are nilpotent , is the Lie algebra nilpotent?
Suppose $\mathfrak{g}$ be a Lie algebra over $\mathbb{F}$. Then $\mathfrak{g}$ is
nilpotent if and only if, for all $x \in \mathfrak{g}$, $\mathrm{ad}~ x$ is a nilpotent linear operator on $\mathfrak{g}$.
This is Engel's theorem
My doubt is this:
Suppose $\mathfrak{g}$ is a Lie algebra consisting of nilpotent operators on a finite-dimensional vector space $V$. Can we say that $\mathfrak{g}$ is nilpotent?
AI: Let $\mathfrak{g}$ be a Lie algebra consisting of nilpotent operators on a finite-dimensional vector space $V$. Then $\mathfrak{g}$ is a subalgebra of $\mathfrak{gl}(V)$. Since $x$ nilpotent implies that $\operatorname{ad}(x)$ is nilpotent, we have that $\operatorname{ad}(x)$ is nilpotent for all $x\in \mathfrak{g}$.
By Engel's Theorem $\mathfrak{g}$ is nilpotent.
|
H: Prove that $ \sum_{k=0}^{n}\left\lvert x-\frac{k}{n}\right\rvert\binom{n}{k}x^k(1-x)^{n-k} \le \frac{1}{2\sqrt{n}} $
I have to prove that $$\sum_{k=0}^{n}\left\lvert x-\frac{k}{n}\right\rvert\binom{n}{k}x^k(1-x)^{n-k} \le \frac{1}{2\sqrt{n}} $$ where $n\in \mathbb{N}$ and $x \in [0,1]$
I have already proven that $\displaystyle \sum_{k=0}^{n}\left(x-\frac{k}{n}\right)^2\binom{n}{k}x^k(1-x)^{n-k} \le \frac{1}{4n}$
and that I have to use the Cauchy–Schwarz inequality, but I'm stuck because it never yields the wanted formula.
Thank you.
AI: In the language of random variables, you already have $E |x - Y/n|^2 \le \frac{1}{4n}$, where $Y$ is a binomial random variable of parameters $x$ and $n$. Using that $E Z^2 \geq (E Z)^2$ leads to the desired inequality. This inequality is indeed a consequence of Cauchy-Schwarz, since it can be obtained by:
$$ EZ = E[1 \cdot Z] \le E[1^2]^{1/2} \cdot E[Z^2]^{1/2}.$$
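A numeric check that the bound indeed holds (a sketch of mine; the grid over $x$ is an arbitrary choice):

```python
import math

def s(n, x):
    return sum(abs(x - k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

for n in (5, 20, 100):
    worst = max(s(n, j / 100) for j in range(101))
    print(n, worst, 1 / (2 * math.sqrt(n)))  # the worst case stays below the bound
```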
|
H: How do i find the normal?
Find the equations of the tangent and normal to $y = x^2$ at the point $H(2, 4)$.
I've already found the equation of the tangent which is $y=4x-4$
but I'm not sure how to approach the actual question, which is finding the normal.
AI: By definition, a normal line is the line that touches a curve at one point and is perpendicular to the tangent line at the same point.
Thus, if your tangent line is $(y-y_1) = m (x-x_1) $,
The slope of a line $\perp$ to it is given by $-\frac{1}{m}$
Thus, to write the equation of a line with a slope of $-\frac{1}{m}$, passing through the same point $(x_1,y_1)$ :
$$(y-y_1) = -\frac{1}{m}(x-x_1)$$
In your example, slope of tangent $m$ is $4$. Thus slope of normal is $-\frac{1}{4}$.
Thus the line should be :
$$(y-4) = -\frac{1}{4}(x-2)$$
|
H: Finite extensions
I'm trying to solve the following exercise in Dummit & Foote (Chapter 13, Section 2, Question 13):
Suppose $F={Q}(\alpha_1,\ldots, \alpha_n)$ where $\alpha_i^2\in Q$ for each $i$. Prove that $2^{1/3}$ is not in $F$.
Q denotes the set of rational numbers.
According to a solution that I read, it says $[Q(\alpha_1,\ldots, \alpha_k):Q(\alpha_1,\ldots,\alpha_{k-1})]=1 \quad \text{or} \quad 2.$ I could not understand this part. Why do we have the conclusion =1 or 2? Is it because of the condition given in the question $\alpha_i^2 \in Q$ for each i? If $\alpha_{l} \in Q(\alpha_1,\ldots,\alpha_{l-1})$, then $[Q(\alpha_1,\ldots, \alpha_l):Q(\alpha_1,\ldots,\alpha_{l-1})]=1.$
Is this true? A little bit confused here.
Any help is appreciated.
AI: The explanation you give is correct.
$$
[\mathbb Q(\alpha_1,\ldots,\alpha_{k}):\mathbb Q(\alpha_1,\ldots,\alpha_{k-1})]
$$
is equal to the degree of the minimal polynomial of $\alpha_k$ over $\mathbb Q(\alpha_1,\ldots,\alpha_{k-1})$. Since $\alpha_k$ is the root of a quadratic polynomial $x^2-q$ for some $q\in\mathbb Q$, the degree of its minimal polynomial is either 1 or 2, so the extension degree is either 1 or 2.
Also, to write $\mathbb Q$ properly in LaTeX, you should use \mathbb Q
|
H: Representing a Rayleigh distribution by Gamma distribution
Rayleigh distribution is formulated as $$P(x\mid\sigma)=\frac{x}{\sigma^2}\exp\left(-\frac{x^2}{\sigma^2}\right) \,,\tag{1}$$ where $\sigma^2$ is the variance. $Z$ is a complex variable specified as $Z=a+jb$.
A paper said that the Gamma distribution $\Gamma(k,\theta)$ can represent the Rayleigh distribution when $k=\theta=1$.
I know that $\Gamma(k,\theta)=\frac{x^{k-1}e^{-\frac{x}{\theta}}}{\theta^k\Gamma(k)}$ and $\Gamma(1,1)=e^{-x}$ which is unequal to the above Rayleigh distribution $(1)$.
I don't understand why the Gamma distribution can represent the Rayleigh distribution, and, if the conclusion is correct, how to prove it.
Thanks a lot! : )
AI: The statement is wrong! (Or better: the statement is correct, but something was left implied.)
Some notes:
The density you call Rayleigh is not quite the Rayleigh density. The Rayleigh density is the following, for $x>0$:
$$f_X(x;\sigma^2)=\frac{x}{\sigma^2}e^{-\frac{x^2}{2\sigma^2}}$$
The variance is $V[X]=(2-\frac{\pi}{2})\sigma^2$
To represent a Rayleigh with a Gamma you have to do a variable transform:
If $X$ is a $Gamma(1,1)$ (i.e., a standard exponential) and you transform it with
$Y=\sqrt{X}$, then your statement is correct:
$Y \sim Ray(\sigma^2 = \tfrac{1}{2})$
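A simulation confirming the transformed variable matches the claimed Rayleigh (my own sketch; note numpy's rayleigh is parametrized by $\sigma$, so scale $=1/\sqrt{2}$ corresponds to $\sigma^2 = 1/2$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
y = np.sqrt(rng.gamma(shape=1.0, scale=1.0, size=n))  # sqrt of Gamma(1,1) = Exp(1)
r = rng.rayleigh(scale=1 / np.sqrt(2), size=n)        # Rayleigh with sigma^2 = 1/2
print(np.mean(y), np.mean(r))  # both ~ sqrt(pi)/2 ~ 0.8862
print(np.var(y), np.var(r))    # matching variances as well
```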
|
H: Frobenius norm involving Kronecker Product
Consider $ J = ||\mathbf{G} - ( \mathbf{B} \otimes \mathbf{X} )||_F^2 $, where $\mathbf{G}$ and $\mathbf{B}$ are complex matrices, and $||.||_F$ is the Frobenius norm. Find the derivative with respect to $ \mathbf{X} $
Note: My question is related to this post: Derivative of a trace with second order Kronecker product. I would like a solution that does not involve SVD decomposition. Any help/hints on how to solve this problem are welcomed.
AI: Let $\langle \cdot , \cdot \rangle$ denote the Frobenius inner product $\langle A,B \rangle = A:B = \operatorname{Tr}(AB^T)$. One approach is to expand $J(X + H)$, then extract the total derivative. In this case, we have
$$
J(X + H) = \langle G - (B \otimes (X + H)), G - (B \otimes (X+ H)) \rangle\\
= \langle (G - (B \otimes X)) - B \otimes H, (G - (B \otimes X)) - B \otimes H \rangle\\
= J(X) - 2 \operatorname{Re} \langle G - (B \otimes X),B \otimes H \rangle + o(H).
$$
So, the derivative of $J$ with respect to $X$ is
$$
J'(X)(H) = - 2 \operatorname{Re} \langle G - (B \otimes X),B \otimes H \rangle.
$$
The trick, however, is to extract the matrix form of this derivative. For the numerator-layout derivative, we're looking for a matrix $\frac{\partial J}{\partial X} = M$ (that depends on $X$) for which $\langle M,H \rangle = J'(X)(H)$.
The post you linked explains how this matrix can be found using SVD. For another approach, we can use the fact that $M_{ij} = J'(X)(E_{ij})$, where $E_{ij}$ denotes the matrix with a $1$ as the $(i,j)$ entry and zeros elsewhere. We therefore have
$$
M_{ij} = - 2 \operatorname{Re} \langle G - (B \otimes X),B \otimes E_{ij} \rangle.
$$
We can make a bit more sense out of this if we break $G$ into a sum. If $X$ has size $m \times n$, then we can write
$$
G = \sum_{i=1}^m \sum_{j=1}^n G_{ij} \otimes E_{ij},
$$
where each $G_{ij}$ has the same size as $B$ (note that $G_{ij}$ is actually a submatrix of $G$). With that, we have
$$
M_{ij} = - 2 \operatorname{Re} \left\langle \sum_{p=1}^m \sum_{q=1}^n G_{pq} \otimes E_{pq} - (B \otimes X),\;B \otimes E_{ij} \right\rangle
= - 2 \operatorname{Re}[\langle G_{ij},B \rangle - \langle B,B\rangle x_{ij}],
$$
using the identity $\langle A_1 \otimes A_2, B_1 \otimes B_2 \rangle = \langle A_1,B_1\rangle\langle A_2,B_2\rangle$ together with $\langle E_{pq}, E_{ij}\rangle = \delta_{pi}\delta_{qj}$ (the dummy indices of the sum are named $p,q$ to avoid clashing with the fixed entry $(i,j)$).
|
H: Finding an algebraic number $z \in \mathbb{C}$ with Galois group over $\mathbb{Q}(\sqrt{5})$ equal to $\mathbb{Z}/7\mathbb{Z}$
Let $K = \mathbb{Q}(\sqrt{5})$. I'm trying to find an algebraic number $z \in \mathbb{C}$ such that $K(z)/K \cong \mathbb{Z}/7\mathbb{Z}$. How to go about this? The only thing I could think of as useful is the fact that $\mathbb{Q} \subset \mathbb{Q}(\sqrt{5}) \subset \mathbb{Q}(\zeta_5)$ is a tower of field extensions.
AI: How about this? Take $\mathbb{Q}(\zeta_{29})$. Its Galois group over $\mathbb{Q}$ is cyclic of order 28, so it has a unique subgroup of order $4$; let $L$ be the fixed field of this subgroup.
Now $L$ is linearly disjoint from $K$, so $LK$ (as an extension of $K$) has Galois group $\mathbb{Z}/7\mathbb{Z}$ as desired.
|
H: Evaluate $\frac{1}{1 \cdot 2 \cdot 3}+\frac{2}{4 \cdot 5 \cdot 6}+\frac{3}{7 \cdot 8 \cdot 9}+ \cdots$
Evaluate $$\frac{1}{1 \cdot 2 \cdot 3}+\frac{2}{4 \cdot 5 \cdot 6}+\frac{3}{7 \cdot 8 \cdot 9}+ \cdots$$
I see this is the same as $$\frac{1}{3} \sum_{i=0}^{\infty} \frac{1}{(3i+1)(3i+2)} $$
$$\frac{1}{3} \sum_{i=0}^{\infty} \left(\frac{1}{(3i+1)} - \frac{1}{(3i+2)}\right) $$
Here I am confused. Any suggestions?
AI: With $\frac1n=\int_0^1x^{n-1}dx$ your sum is$$\frac13\sum_{i\ge0}\int_0^1x^{3i}(1-x)dx=\frac13\int_0^1\frac{1-x}{1-x^3}dx=\frac13\int_0^1\frac{1}{1+x+x^2}dx=\frac{\pi}{9\sqrt{3}}.$$
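Numerically, the partial sums of the original series do converge to this value (my own check):

```python
import math

partial = sum(k / ((3*k - 2) * (3*k - 1) * (3*k)) for k in range(1, 200_001))
print(partial, math.pi / (9 * math.sqrt(3)))  # both ~ 0.2015392...
```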
|
H: Finding a counter-example for Gaussian-periods for non-primes
I need to give a counter-example against the following theorem:
Suppose $H \subset \operatorname{Gal}(\mathbb{Q}(\zeta_n)/\mathbb{Q})$ is a subgroup. Then we have $\mathbb{Q}(\zeta_n)^H = \mathbb{Q}(\eta_H)$, with $\eta_H = \sum_{\sigma \in H} \sigma(\zeta_n)$, the Gaussian-period.
This theorem is true for $n = p$ prime, but not for general $n$. So far, my attempt is the following. We take $n = 8$. Then $\operatorname{Gal}(\mathbb{Q}(\zeta_8)/\mathbb{Q}) \cong (\mathbb{Z}/8\mathbb{Z})^\ast = \{1,3,5,7\} \cong \mathbb{Z}/4\mathbb{Z}$.
Now we have the subfield $\mathbb{Q}(\zeta_4) = \mathbb{Q}(i)$, since $4 \mid 8$. We also have the subfield $\mathbb{Q}(\sqrt{2})$ since $\zeta_8 + \zeta_8^{-1} = 2 \cos(2\pi/8) = \sqrt{2}$. Hence, we have also the quadratic subfield $\mathbb{Q}(\sqrt{-2})$. Notice that $1,5$ keep $i$ fixed, so according to the above theorem, we have
$$
\mathbb{Q}(i) = \mathbb{Q}(\zeta_8)^{\{1,5\}} = \mathbb{Q}(\zeta_8 + \zeta_8^5).
$$
I am stuck at this point, how to derive a contradiction from this?
AI: This is because $\zeta_8^5=-\zeta_8$, so that $\zeta_8+\zeta_8^5=0$.
|
H: Characterization of invariants of a matrix
The coefficients of the characteristic polynomial of an $n\times n$ matrix $A$, which are invariant with respect to similarity transformations $T A T^{-1}$ (where $T\in GL(n)$), are given by polynomials of traces of powers of $A$.
However, these invariants do not completely characterize a matrix, since two matrices with different Jordan canonical forms can have a same characteristic polynomial. As far as I remember, one actually has to specify the elementary divisors or invariant polynomials of $A$ for a complete characterization.
My question is: can one express all invariants that completely characterize $A$ as polynomials of traces of powers $A^k$?
AI: No. Take any two nilpotent matrices of the same order that are not similar, say
$$ A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, B = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}. $$
The matrices $A^k$ and $B^k$ (for $k \geq 1$) are also nilpotent and so $\operatorname{tr}( A^k) = \operatorname{tr}(B^k) = 0$ but $A$ and $B$ are not similar.
More generally, if $\lambda_1, \dots, \lambda_n$ are the eigenvalues of $A \in M_n(\mathbb{F})$ (maybe over some splitting field), then $\operatorname{tr}(A^k) = \lambda_1^k + \dots + \lambda_n^k$ are symmetric functions in the eigenvalues of $A$ so any polynomial expression in them will be a polynomial symmetric function in the eigenvalues. The only invariants you'll be able to construct from them are related to the coefficients of the characteristic polynomial (which are also symmetric functions in the eigenvalues) but there are other similarity invariants which don't "depend symmetrically" on the eigenvalues.
|
H: Mittag-Leffler condition for De Rham complexes
I am currently looking at a proof of De Rham's theorem.
Denote by $\Omega^p(M)$ the space of $p$-forms. At some point, one considers a manifold $M=\bigcup U_i$ where each $U_i$ is an open subset, such that $U_i\subset U_{i+1}$ and $\overline{U_i}$ is compact.
Then by considering the inclusion maps $U_i \to U_{i+1}$, for any $p$ their pullbacks make $(\Omega^p(U_i))_{i}$ into an inverse system. This system satisfies the Mittag-Leffler condition, that is to say that for any integer $n$, there is an integer $m\geq n$ such that for any $i\geq m$, we have $$\mathrm{Im}(\Omega^p(U_i)\to \Omega^p(U_n))=\mathrm{Im}(\Omega^p(U_m)\to \Omega^p(U_n))$$
My question is: why is it true that it satisfies the Mittag-Leffler condition ?
AI: Let's try $m=n+1$. It suffices to prove that if $\omega$ is a smooth form on $U_{n+1}$
then there's a smooth form $\omega'$
on $M$ whose restriction to $U_n$ is the same as that of $\omega$.
Let $K=\overline{U_n}$. Then $K$ is a compact subset of $U_{n+1}$ and indeed of $M$.
There is a smooth "bump function" $\phi:M\to\Bbb R$ which equals $1$ on $K$
and has compact support contained in $U_{n+1}$. Then $\phi\omega$ is well-defined
on $U_{n+1}$, it has compact support, and restricts to $\omega$ on $U_n$. We can
extend $\phi\omega$ by zero to a form on all of $M$ with these same properties.
Then we can take $\omega'$ to be this extension.
|
H: Uniform continuity of composition?
If we consider $f\circ u$ and know that
f is bounded and uniformly continuous
u is bounded and continuous
does this imply that $f\circ u$ is bounded and uniformly continuous?
It is clear that it is bounded.
But it is not clear to me whether it is uniformly continuous. Surely it is continuous.
AI: The answer is no; $f\circ u$ need not be uniformly continuous. For a counterexample, let the domain be $(0,1)$, take $u(x)= \sin(1/x)$, which is bounded and continuous on $(0,1)$, and let $f$ be the identity on $[-1,1]$, which is bounded and uniformly continuous. Then $(f\circ u)(x)= \sin(1/x)$ is bounded and continuous, but it is not uniformly continuous on the given domain: it oscillates between $-1$ and $1$ on intervals of arbitrarily small length near $0$.
|
H: Proof that $f(x)=x|x|$ is differentiable on $\mathbb{R}$
I want to show that
$$f(x) = x|x|$$
is differentiable for all reals.
My approach would be:
Since $ \forall x \in \mathbb{R}$ such that $x < 0$ we have $f(x) = -x^2$ which is differentiable.
also $\forall x \in \mathbb{R}$ such that $x > 0$ we have $f(x) = x^2$ which is also differentiable.
Last but not least I would have to show that $f$ is differentiable at $x=0$ and from this it would follow that $f$ is differentiable $\forall x \in \mathbb{R}$ and of course also continuous.
Is this correct?
AI: Yes. If you prove the differentiability at $x<0, x>0$ and $x=0$ then $f$ is differentiable. For the point $x=0$, note that the difference quotient is $\frac{f(h)-f(0)}{h} = \frac{h|h|}{h} = |h| \to 0$ as $h \to 0$, so $f'(0)=0$.
|
H: The radius of a sphere is measured as 5 cm ± 0·1 cm. Use differentiation to find the volume of the sphere in the form Vcm^3 ± bcm^3.
I have tried:
$V = \frac{4}{3}\pi r^3$
If the radius is $5 \pm 0.1$, then $V = \frac{4}{3}\pi(5 \pm 0.1)^3$,
giving $V = 555.65$ or $V = 492.81$.
The provided solution is $(524 \pm 31)\ \mathrm{cm}^3$ but I am not sure how you derive this result. This is from a Year 12 Maths Methods textbook.
AI: This is a "linearization" or "approximation by differentials" problem. The goal is to approximate the error in calculating the volume, knowing the error in measuring the radius. Since $V=\frac{4}{3}\pi r^3$, then $V'= 4 \pi r^2$, and you have:
$$\Delta V \approx dV = 4 \pi r^2 \; dr$$
Let the error in the radius be $\Delta r = dr = 0.1$ and $r=5$ to get
$$\Delta V \approx 4 \pi (5)^2 (0.1)= 31.4\ldots.$$
|
H: Solve for $y$ in $\frac{dy}{dx}-\frac{3y}{2x+1}=3x^2$
I saw a challenge problem on social media by a friend, solve for $y$ in $$\frac{dy}{dx}-\frac{3y}{2x+1}=3x^2$$
I think this is an integrating-factor ODE.
$$\frac{1}{{(2x+1)}^{\frac{3}{2}}} \cdot \frac{dy}{dx}-\frac{3y}{{(2x+1)}^{\frac{5}{2}}}=\frac{3x^2}{{(2x+1)}^{\frac{3}{2}}}$$
Is this correct?
$$\left(\frac{y}{{(2x+1)}^{\frac{3}{2}}} \right)'=\frac{3x^2}{{(2x+1)}^{\frac{3}{2}}}$$
$$\left(\frac{y}{{(2x+1)}^{\frac{3}{2}}} \right)=\int \frac{3x^2}{{(2x+1)}^{\frac{3}{2}}} \mathop{dx}$$
AI: $$\frac{dy}{dx} -\frac{3}{2x + 1}y = 3x^2$$
For an integrating factor we have
$$\frac{dy}{dx} + Py = Q, \quad I = \exp(\int P \; dx)$$
then $$Iy = \int IQ\;dx$$
For our method
$$I = \exp\left(\int -\frac{3}{2x + 1} \; dx\right) = \exp\left(-\frac{3}{2}\ln(2x + 1)\right) = (2x + 1)^{-\frac{3}{2}}$$
so
$$y = (2x + 1)^\frac{3}{2} \int \frac{3x^2}{(2x + 1)^{\frac{3}{2}}} \; dx= C_1(2x + 1)^\frac{3}{2} + (2x + 1)(x^2 - 2x -2)$$
since the integral gives
$$\frac{x^2 - 2x - 2}{(2x + 1)^\frac{1}{2}}$$
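One can confirm the closed form with a CAS (my own sketch; sympy may present the constant or the polynomial part in an algebraically equivalent arrangement):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x) - 3 * y(x) / (2 * x + 1), 3 * x**2)
print(sp.dsolve(ode, y(x)))
# expected: y(x) = C1*(2*x + 1)**(3/2) + (2*x + 1)*(x**2 - 2*x - 2)
```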
|
H: Finding column space - why does this algorithm work?
When I want to find the column space of a matrix, I can row reduce it to echelon form and choose only the columns of the original matrix corresponding to the pivot columns (those without free variables) of the reduced row echelon form. I have proved it mathematically, but I fail to see why it works intuitively. Is there any intuition for why that happens, or can it only be seen with a rigorous proof? The linear independence is intuitive, but the fact that these columns span the column space is far from intuitive for me.
AI: Sure. Consider for example
$$ A = \begin{pmatrix} 1 & 2 & 6 & 4 \\ 1 & 0 & 4 & 3 \\ 1 & 2 & 6 & 0 \end{pmatrix} \xrightarrow{\textrm{row reduce}} R = \begin{pmatrix} 1 & 0 & 4 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. $$
The basic property of row reduction is that $Ax = 0$ iff $Rx = 0$ (the solution space stays the same). You can look at this property in terms of columns: If we write $A = (a_1,a_2,a_3,a_4)$ and $R = (r_1,r_2,r_3,r_4)$ where $a_i$ are the columns of $A$ and $r_i$ are the columns of $R$ then
$$ A \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = x_1 a_1 + x_2 a_2 + x_3 a_3 + x_4 a_4 = 0 \iff R \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = x_1 r_1 + x_2 r_2 + x_3 r_3 + x_4 r_4 = 0. $$
That is, any linear dependence which is satisfied by the columns of $A$ must be satisfied by the columns of $R$ and vice versa. So let's say that you want to find the column space of $A$. You look instead at $R$ and see immediately that $r_1,r_2,r_4$ are linearly independent which means that
$$ x_1 r_1 + x_2 r_2 + x_4 r_4 = 0 \iff x_1 = x_2 = x_4 = 0. $$
By the above observation, this also holds for $a_1,a_2,a_4$ so $a_1,a_2,a_4$ are also linearly independent. Do they span the column space? Let's look again at $R$. We immediately have
$$ r_3 = 4r_1 + r_2 \iff 4r_1 + r_2 - r_3 = 0. $$
But this means that we also have
$$ 4a_1 + a_2 - a_3 = 0 \iff a_3 = 4a_1 + a_2. $$
That is, if you are able to write a column of $R$ as a linear combination of the other columns of $R$, this means that the corresponding column of $A$ is the same linear combination of the other columns of $A$. In particular, $a_3$ belongs to the span of $a_1,a_2,a_4$ so you have found a basis for the column space.
This obviously generalizes. In the row reduced form, the columns corresponding to the free variables can be immediately expressed in terms of the columns which correspond to the bound variables (which are just the standard basis vectors) which means that in that also in the original matrix, the corresponding columns belong to the span of the other columns. The columns corresponding to the bound variables are linearly independent in $R$ and so they are also linearly independent in $A$.
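The same bookkeeping, done by a CAS (a sketch of mine; sympy's rref reports the pivot columns directly):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 6, 4],
               [1, 0, 4, 3],
               [1, 2, 6, 0]])
R, pivots = A.rref()
print(R)        # the reduced matrix R from above
print(pivots)   # (0, 1, 3): columns a1, a2, a4 form a basis of the column space
print(A[:, 2] == 4 * A[:, 0] + A[:, 1])  # True: a3 = 4 a1 + a2 holds in A too
```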
|
H: What is the inverse of a matrix with the following structure?
I have come across an $N \times N$ square matrix of the following structure
\begin{bmatrix}
m_1 & m_2 & m_3 & \ldots & m_{N-1} & m_N \\
1 & -1 & 0 & \ldots & 0 & 0 \\
0 & 1 & -1 & \ldots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \dots & 1 & -1
\end{bmatrix}
where the $m_i \neq 0$s are real constants. Is there an analytic expression for the inverse of this matrix?
AI: Assuming the matrix is invertible, its inverse can be expressed as follows. Define
$$
A = \begin{bmatrix}
m_1 & m_2 & m_3 & \cdots & m_{N-1} & m_N \\
1 & -1 & 0 & \cdots & 0 & 0 \\
0 & 1 & -1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & -1
\end{bmatrix}.
$$
We can write $A = M - I + xy^T$, where $I$ is an identity matrix,
$$
M = \pmatrix{0\\1&0\\&\ddots&\ddots\\&&1&0}, \quad x = (1,0\dots,0)^T, \quad y = (m_1 + 1,m_2,\dots,m_N).
$$
Using the Sherman–Morrison formula, we have
$$
A^{-1} = (M - I)^{-1} - \frac{(M - I)^{-1}xy^T(M - I)^{-1}}{1 + y^T(M - I)^{-1}x}.
$$
$(M-I)^{-1}$ is easy to compute: since $M$ is nilpotent of order $N$, we have
$$
(M - I)^{-1} = -I - M - \cdots - M^{N-1} =
\pmatrix{-1&0\\ \vdots & \ddots \\
-1&\cdots &-1}.
$$
|
H: Proving angles in a circle are equal
$A$, $B$, $R$ and $P$ are four points on a circle with centre $O$.
$A$, $O$, $R$ and $C$ are four points on a different circle.
The two circles intersect at the points $A$ and $R$.
$CPA$, $CRB$ and $AOB$ are straight lines.
Prove that angle $CAB$ = angle $ABC$.
Not really sure how to start here. I am thinking about proving the sides $AC = CB$, but not sure how I can do that. The only striking thing is that $AB$ is a diameter, so angle $APB = 90$. Then letting $PAB = x$, one gets $PBA = 90 - x$ and also $CPB = 90$.
However, can't get much further from here.
AI: Since $AB$ is a diameter, as you've already stated, this means $\measuredangle ARB = 90^{\circ}$ as well. As such, you also have $\measuredangle ARC = 90^{\circ}$. Thus, in the circle on the left side, $AC$ is its diameter. This means $\measuredangle COA = 90^{\circ}$ (note you could also get $\measuredangle COA = \measuredangle ARC$ from the fact that both are inscribed angles subtending the same chord $AC$) and, thus, $\measuredangle COB = 90^{\circ}$ also.
Since $|OA| = |OB|$, you have with $CO$ being a common side that the Pythagorean theorem in $\triangle COA$ and $\triangle COB$ gives $|CA| = |CB|$. This means $\triangle ABC$ is isosceles so $\measuredangle CAB = \measuredangle ABC$.
Update: Instead of using the Pythagorean theorem, I could've used that side-angle-side (SAS) matches so $\triangle COA \cong \triangle COB$ which directly gives $\measuredangle CAB = \measuredangle ABC$.
|
H: Calculate $\int_D\sin(\frac{x\pi}{2y})dxdy$
Calculate $\int_D\sin(\frac{x\pi}{2y})\,dx\,dy$, where $D=\{(x,y)\in\mathbb{R}^2:y\geq x,\ y\geq 1/\sqrt{2},\ y\leq x^{1/3}\}$.
I've calculated the limits of the integral as $2^{-1/6}\leq x\leq1$, $1/\sqrt{2}\leq y\leq1$, and after doing the integral over $x$ first I got stuck.
AI: First, your region of integration is wrong. The correct region is described by $1/\sqrt{2} \le y \le 1$ and $y^3 \le x \le y$.
This suggests integrating first with respect to $x$:
\begin{align*}\int_{1/\sqrt{2}}^1\int_{y^3}^y \sin\left(\frac{x \pi}{2y}\right)\,dx\,dy
&= \int_{1/\sqrt{2}}^1\left(-\frac{2 y \cos \left(\frac{\pi x}{2 y}\right)}{\pi }\right)\bigg\lvert_{y^3}^y\,dy\\
&= \int_{1/\sqrt{2}}^1\frac{2 y \cos \left(\frac{\pi y^2}{2}\right)}{\pi }\,dy\\
&= \frac{2 \sin \left(\frac{\pi y^2}{2}\right)}{\pi ^2}\bigg\lvert_{1/\sqrt{2}}^1 \\
&= \frac{2-\sqrt{2}}{\pi ^2}.
\end{align*}
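scipy reproduces the value (a sketch of mine; note scipy's dblquad integrates the integrand's first argument over the inner limits):

```python
import numpy as np
from scipy.integrate import dblquad

val, err = dblquad(
    lambda x, y: np.sin(np.pi * x / (2 * y)),  # inner variable x comes first
    1 / np.sqrt(2), 1.0,                       # outer: y from 1/sqrt(2) to 1
    lambda y: y**3, lambda y: y,               # inner: x from y^3 to y
)
print(val, (2 - np.sqrt(2)) / np.pi**2)        # both ~ 0.05935
```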
|
H: Under what conditions is $A^T \Sigma A$ positive (semi-)definite for $\Sigma$ p.s.d?
Consider a positive (semi) definite, symmetric matrix $$\Sigma \in \mathbb{R}^{k\times k}$$ and a matrix $$A \in \mathbb{R}^{k \times m}$$.
Under what conditions is $$B = A^T \Sigma A$$ positive (semi) definite?
I had the following idea: consider any $x$ and decompose it into eigenvectors $v_i$ of $A$, with corresponding eigenvalues $\lambda_i$,i.e.
$$ x = \sum_{i} c_i v_i$$
Then we have $$x^T A^T \Sigma A x = \sum_{i} \lambda_i^2 c_i^2 v_i^T \Sigma v_i + \sum_{p \neq q} c_p \lambda_p v_p^T \Sigma c_q \lambda_q v_q.$$
The first part of this sum will be $\geq 0$ by the positive definite-ness of $\Sigma$.
If we choose $v_i$ to be orthogonal w.r.t. the scalar product $\langle x, y \rangle_\Sigma = x^T \Sigma y$, then we are done. Can we always choose them in such a way?
AI: If $A$ is not square, you cannot talk about its eigenvalues / eigenvectors. Even if it is, you cannot assume that the eigenvectors are orthogonal w.r.t. the scalar product that you give. Think for instance of $\Sigma = I$; then you would need the eigenvectors to be orthogonal, which means that $A$ itself is orthogonal.
In general, the easiest way to prove p.s.d.-ness is from the definition: for $x \in \mathbb{R}^m$, compute
$$
x^T (A^T \Sigma A) x = (A x)^T \Sigma (Ax),
$$
and this is nonnegative since $\Sigma$ is p.s.d., so $B$ is always p.s.d. In particular, if $\Sigma$ is positive definite, then $B$ is positive definite exactly when $A$ has full column rank (so that $Ax \neq 0$ whenever $x \neq 0$).
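A quick numerical illustration of the rank caveat (my own sketch; with $m > k$ the product must be singular):

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 4, 6
S = rng.normal(size=(k, k))
Sigma = S @ S.T                         # p.s.d. (almost surely p.d.)
A = rng.normal(size=(k, m))             # m > k, so A cannot have full column rank
B = A.T @ Sigma @ A
print(np.linalg.eigvalsh(B).round(10))  # all >= 0, with zeros: p.s.d. but not p.d.
```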
|
H: Equivalence of EVT consequence
I was going over the Extreme Value Theorem (EVT), which states that if a real-valued function $f$ is continuous on the closed interval $[a,b]$ then $f$ must attain a maximum and a minimum, each at least once. Then there is one consequence which states the following: if $f:[a,b]\rightarrow\mathbb{R}$ is continuous $\implies$ $f([a,b])=[f(x_{\min}),f(x_{\max})]$, where $x_{\min}$ and $x_{\max}$ are the minimizer and maximizer. Also there is some theorem which states that if $f$ is monotonic, then $\Longleftarrow$ holds.
My question is, why doesn't $\Longleftarrow$ hold if $f$ is not monotonic?
I would be very grateful if someone would explain this to me.
AI: Take $[a,b]=[0,4\pi]$ and
$$f(x)=\begin{cases}\sin(x) & \text{if } x\ne \frac{\pi}{4},\\ \frac 12 & \text{if } x=\frac{\pi}{4}.\end{cases}$$
$f$ is not continuous on $[a,b]$.
$f$ is not monotonic.
$$f([a,b])=[-1,1]=[f(\frac{3\pi}{2}),f(\frac{\pi}{2})]=[\min f,\max f]$$
|
H: Prove that any countable cartesian product of countable sets is countable
I want to prove that an infinite (yet countable) cartesian product of countable sets is countable.
Here's what I tried:
Step 1:
I proved that for 2 countable sets $ A_1,A_2 $ , the product $ A_{1}\times A_{2} $ is countable.
Step 2:
I proved by induction that for any $n\in \mathbb{N} $ if $ A_{1},...,A_{n} $ are countable sets, then
$ A_{1}\times A_{2}\times\cdots\times A_{n} $ is countable.
Now, I want to show that any countable infinite cartesian product would be countable.
How do I show that $ A_{1}\times A_2\times\cdots $ (a countably infinite product) is countable?
Thanks in advance.
AI: I guess you know that $[0,1)$ is uncountable. If your statement was true, then $\mathbb{N}^\mathbb{N}$ would be countable. But there is a surjection of $\mathbb{N}^\mathbb{N}$ onto $[0,1)$, since every $x\in[0,1)$ can be represented by a countable string of digits (writing its decimal part in base 10, for instance). Then $[0,1)$ would be countable, which is absurd. So a countable cartesian product in general is not countable.
|
H: Proof that this function is an isomorphism
If $K = (k_1, k_2, k_3, k_4)$ is a basis of $R^4$ and $f: R^4 \rightarrow R^4$ is the linear map with
$f(k_1) = k_4 , f(k_2) = k_1 + 2k_2 , f(k_3) = 2k_1 + k_2 + k_3 , f(k_4) = 2k_2 - k_3$
show that f is an Isomorphism.
So it has to be bijective to be an isomorphism. I would start with $\ker(f)= 0$ to prove that it is injective; then $\dim(\operatorname{Im}(f))$ would equal $\dim(R^4)$, so it would also be surjective. But I'm out of ideas on how to show $\ker(f)=0$.
AI: We will solve $f(c_1 k_1+c_2 k_2+c_3k_3+ c_4k_4)=c_1 f(k_1)+c_2 f(k_2)+c_3 f(k_3)+c_4 f(k_4)=c_1 k_4+c_2 (k_1+2k_2)+c_3 (2k_1+k_2+k_3)+c_4 (2k_2-k_3)=0$.
Since $(k_1,k_2,k_3,k_4)$ forms a basis, they are linearly independent. Thus $c_1=0, c_2+2c_3=0,2c_2+c_3+2c_4=0,c_3-c_4=0$. Solving this system gives $c_1=c_2=c_3=c_4=0$, so $\ker f=0.$
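Equivalently, the matrix of $f$ in the basis $K$ has nonzero determinant (my own check with sympy):

```python
import sympy as sp

# columns are the coordinates of f(k1), f(k2), f(k3), f(k4) in the basis K
F = sp.Matrix([[0, 1, 2,  0],
               [0, 2, 1,  2],
               [0, 0, 1, -1],
               [1, 0, 0,  0]])
print(F.det())  # -1, nonzero, so ker f = 0 and f is an isomorphism
```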
|
H: Linear independence between functions
I'm trying to solve some exercises on the linear independence of functions, and in most of them we use the "trick" of differentiating. I would like to know why this works - whether there is a theorem, a proposition, or a simple observation about functions that I missed which explains this method.
For example:
Let $f(x)=e^x$ and $g(x)=e^{2x}$; we suppose $\lambda_1e^x+\lambda_2e^{2x}=0$.
Differentiating, we have $\lambda_1e^x+2\lambda_2e^{2x}=0$.
Subtracting the first from the second, $\lambda_2e^{2x}=0$, so $\lambda_2=0$ and then $\lambda_1=0$.
AI: First, it is notable that proving independence using your differentiation trick is equivalent to considering the Wronskian of the function.
Second, here is a justification of this method as you have presented it. The only fact being used here is the following:
Claim: If two differentiable functions $f,g:\Bbb R \to \Bbb R$ satisfy $f(x) = g(x)$ for all $x$, then they also satisfy $f'(x) = g'(x)$ for all $x$.
I'd say that with the statement written in the above form, there is really nothing to prove: if $f$ and $g$ are the same function, then they have the same derivative. With that established, we begin with your equation
$$
\lambda_1e^x+\lambda_2e^{2x}=0.
$$
Keep in mind that, since we are determining whether the functions $e^{x},e^{2x}$ are linearly independent, the above should be interpreted as stating that $\lambda_1e^x+\lambda_2e^{2x}=0$ holds for all $x \in \Bbb R$. In other words, if we define two functions $f,g$ by $f(x) = \lambda_1e^x+\lambda_2e^{2x}$ and $g(x) = 0$, then $f(x) = g(x)$ for all $x \in \Bbb R$.
Now, since $f'(x) = \lambda_1e^x+2\lambda_2e^{2x}$ and $g'(x) = 0$, it follows that the new equation
$$
f'(x) = g'(x) \implies \lambda_1e^x+2\lambda_2e^{2x} = 0
$$
holds for all $x$.
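The Wronskian mentioned at the start, made explicit (my own sketch):

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.exp(x), sp.exp(2 * x)
W = sp.Matrix([[f, g], [sp.diff(f, x), sp.diff(g, x)]])
print(sp.simplify(W.det()))  # exp(3*x): never zero, so f and g are independent
```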
|
H: Contour integral of a non-analytic function
Let $a,b\in \mathbf{C}$ with $|b|<1$. I want to calculate
$$\int_{|z|=1} \frac{|z-a|^2}{|z-b|^2}\frac{dz}{z}\, .$$
I'm not sure what tools I can use here since the function is not analytic so (I think) this rules out Cauchy's integral formula, etc.
AI: Note that when $|z| = 1$, $\overline{z} = 1/z$, so this integral is the same as
$$ \oint_{|z|=1} \frac{(z-a)(1/z - \overline{a})}{(z-b)(1/z-\overline{b})}\; \frac{dz}{z} $$
which can be done with residues.
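Following through numerically (my own worked example; the values of $a, b$ are arbitrary with $|b| < 1$ and $b \neq 0$, and the residue sum below is my own hand computation at the poles $z = 0$ and $z = b$ of $\frac{(z-a)(1-\bar a z)}{z(z-b)(1-\bar b z)}$, not something stated in the answer):

```python
import numpy as np

a, b = 0.3 + 0.4j, 0.5 - 0.2j
t = np.linspace(0, 2 * np.pi, 400_000, endpoint=False)
z = np.exp(1j * t)
# on |z| = 1 we have dz/z = i dt, so integrate i*|z-a|^2/|z-b|^2 over t
numeric = np.mean(1j * np.abs(z - a)**2 / np.abs(z - b)**2) * 2 * np.pi
residues = a / b + (b - a) * (1 - np.conj(a) * b) / (b * (1 - abs(b)**2))
print(numeric, 2j * np.pi * residues)  # the two agree
```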
|
H: Epsilon-delta on a function with restricted range
On James Stewart's Calculus Early transcendental it says:
The definition of limit says that if any small interval $(L - \epsilon , L + \epsilon)$ is given around $L$,
then we can find an interval $(a - \delta, a + \delta)$ around a such that $f$ maps all the points in
$(a - \delta, a + \delta)$ (except possibly a) into the interval $(L - \epsilon , L + \epsilon)$.
However, "small" is not specific, which contradicts the notion of a formal definition.
In the definition it says $\forall \epsilon$ but functions with a restricted range e.g. $\sin(x)$ it is impossible for $f$ to map all the points in
$(a - \delta, a + \delta)$ (except possibly a) onto the interval $(L - \epsilon , L + \epsilon)$, for all $\epsilon$.
AI: The definition requires $(a - \delta, a + \delta)$ to map into $(L - \varepsilon, L + \varepsilon)$, not onto it. There's no requirement for all of $(L - \varepsilon, L + \varepsilon)$ to be covered; we simply want the image of every point within $\delta$ of $a$, to be within $\varepsilon$ of $L$.
As a concrete example (simpler than $\sin$), consider the constant function $f(x) = 1$. Let's also take $a = 2$. We show that it approaches the limit $L = 1$ as $x \to 2$. For any given $\varepsilon > 0$, I'll choose $\delta = 10$. Then,
$$x \in (a - \delta, a + \delta) = (-8, 12) \implies f(x) = 1 \in (L - \varepsilon, L + \varepsilon).$$
This proves that $\lim_{x \to 2} f(x) = 1$. Note that not every point in $(L - \varepsilon, L + \varepsilon)$ is in the range of $f$, but more importantly, the points around $a$ map within the interval. This is in the spirit of continuity: we need points nearby $a$ to map near to $L$; we don't really care if they take the full tour.
|
H: Statistical sampling and random variables?
I'm studying statistical sampling and there is a point which is not very clear to me. Let us discuss the following example.
Suppose we would like to study the heights of 3.000 students in a given school. Let us do this taking 80 samples of size 25 each. For each sample there should be an associated random variable, let us say $X_1, \ldots, X_{25}$, which will give the heights.
My question is, what are the domains of $X_1, \ldots, X_{25}$?
As it seems it can't be the set of the respective $25$ samples because usually one wants to perform some kind of algebraic manipulation using the values of $X_i(\omega)$. For instance, in order to perform
$$X_1(\omega)+\ldots+X_{25}(\omega)$$ the variable $\omega$ should belong to the intersection of all the domains of $X_1, \ldots, X_{25}$. On the other hand, if we defined $X_1, \ldots, X_{25}$ on the set of all students, what would $X_i(\omega)$ be if $\omega$ is not in the $i$-th sample?
The only way I see to figure this out is to allow the random variable to be defined on the set of all students although the real data we know about the variable is their values on the respective sample set.
Can someone clear this up?
Thanks.
AI: The domain / sample space $\Omega$ needs to be rich enough to encompass all possible outcomes of your experiment. Remember that after you conduct the experiment, you will have in front of you just one element $\omega$ of the sample space $\Omega$.
A reasonable domain $\Omega$ for conducting sampling is the set of all possible sequences of individuals that could result from your sampling. For example, if your experiment samples $100$ individuals from a population of people, or a box of tickets, or an urn of marbles, then take $\Omega$ to be the set of all possible vectors of size $100$ from this population, conformant with your sampling protocol. (If your sampling is performed without replacement, then those vectors should not contain duplicates.)
Given this sample space, it is natural to associate $X_i(\omega)$ with whatever measurement you're taking on the $i$th member of your vector $\omega$, and it is straightforward to perform numerical manipulation of your random variables $X_1,\ldots,X_n$.
|
H: Taking second partial derivative of spherically symmetrical wavefunction $\psi(r)$ with respect to $x$ only
I am currently studying Optics, fifth edition, by Hecht. In chapter 2.9 Spherical Waves, when discussing the spherical coordinates $x = r \sin(\theta) \cos(\phi)$, $y = r \sin(\theta)\sin(\phi)$, $z = r \cos(\theta)$, the author says that the Laplacian operator is
$$\nabla^2 = \dfrac{1}{r^2} \dfrac{\partial}{\partial{r}} \left( r^2 \dfrac{\partial}{\partial{r}} \right) + \dfrac{1}{r^2 \sin(\theta)} \dfrac{\partial}{\partial{\theta}} \left( \sin(\theta) \dfrac{\partial}{\partial \theta} \right) + \dfrac{1}{r^2 \sin^2 \theta} \dfrac{\partial^2}{\partial \phi^2}. \tag{2.67}$$
I just asked how this is derived. User David Quinn commented with this, showing me that this derivation was far more tedious than I had expected.
The author then continues as follows:
We can obtain this result without being familiar with Eq. (2.67). Start with the Cartesian form of the Laplacian, Eq. (2.61); operate on the spherically symmetrical wavefunction $\psi(r)$; and convert each term to polar coordinates. Examining only the $x$-dependence, we have
$$\dfrac{\partial{\psi}}{\partial{x}} = \dfrac{\partial{\psi}}{\partial{r}} \dfrac{\partial{r}}{\partial{x}}$$
and
$$\dfrac{\partial^2{\psi}}{\partial{x}^2} = \dfrac{\partial^2{\psi}}{\partial{r^2}} \left( \dfrac{\partial{r}}{\partial{x}} \right)^2 + \dfrac{\partial{\psi}}{\partial{r}} \dfrac{\partial^2{r}}{\partial{x}^2}$$
I don't understand why the second derivative with respect to $x$ is $\dfrac{\partial^2{\psi}}{\partial{r^2}} \left( \dfrac{\partial{r}}{\partial{x}} \right)^2 + \dfrac{\partial{\psi}}{\partial{r}} \dfrac{\partial^2{r}}{\partial{x}^2}$. We have that $\psi$ is only a function of $r$, so should we not get
$$\dfrac{\partial^2{\psi}}{\partial{x}^2} = \dfrac{\partial{\psi}}{\partial{r}} \dfrac{\partial^2{r}}{\partial{x}^2}?$$
I would greatly appreciate it if people would please take the time to clarify this.
AI: I think there might be a typo in what you wrote. You have a $\frac{\partial^2 \psi}{\partial x^2}$ on both sides of your equation.
Anyway, you can just use the product and chain rule. Start with
$$\frac{\partial \psi}{\partial x}=\frac{\partial \psi}{\partial r}\frac{\partial r}{\partial x}$$
We know that $\frac{\partial}{\partial x}=\frac{\partial}{\partial r} \frac{\partial r}{\partial x}$, so then
$$\frac{\partial^2 \psi}{\partial x^2}=\frac{\partial}{\partial x} \left(\frac{\partial \psi}{\partial r}\frac{\partial r}{\partial x}\right)$$
$$=\frac{\partial^2 \psi}{\partial r^2}\frac{\partial r}{\partial x} \frac{\partial r}{\partial x} + \frac{\partial^2 r}{\partial x^2} \frac{\partial \psi}{\partial r}$$
$$=\frac{\partial^2 \psi}{\partial r^2}\left(\frac{\partial r}{\partial x}\right)^2 + \frac{\partial \psi}{\partial r}\frac{\partial^2 r}{\partial x^2}.$$
|
H: Question regarding the correlation of limit points and convergent sequences
I've been told, that:
$(i)$ $M$ is a closed set
$(ii)$ For every convergent sequence $a_{n}$ in $M$, $\lim \limits_{n \to \infty } a_{n} \in M$
are equivalent. I understand that $M$ being a closed set implies that every limit point is in $M$, so the limit of a sequence being a limit point, is also in $M$. $(i)\implies(ii)$ is clear to me. What I don't quite understand is why $(ii)\implies(i)$. Not every limit point is a limit, so you can't prove it analogously. So my question is: How can you prove $(ii)\implies(i)$?
AI: The property you are talking about is being a sequential space. A sufficient (but not necessary) condition for a space to be sequential is for a space to be first-countable. Since the majority of spaces that we work with are first-countable (and usually second-countable too), oftentimes we assume that these two definitions are the same. Let us show that the equality of statements holds in first-countable spaces.
Let us first show that if $p$ is a limit point of $M$, then there exists a sequence $p_n \in M$ such that $p_n \to p$. Take a countable neighbourhood basis of $p$; we can modify it using intersections so that we have open sets $\{V_i\}_1^\infty$ such that $V_i \subset V_j$ if $i \geq j$. This is called a nested neighbourhood basis. Now, if $p \in M$, then just take the constant sequence $p_n = p$ and you are done. If not, then you can pick a $p_n \in (V_n \cap M)$ and, from the way it's set up, $p_n \to p$. So, in a first-countable space, every limit point of $M$ can be detected by a sequence in $M$. (ii) implies that, as a result, every limit point is in $M$, so $M = \overline{M}$ and $M$ is closed.
On the other hand, if $M$ is closed, then $M$ contains all its limit points. Note that $\lim_{n \to \infty} a_n$ is a limit point of $M$ (do you see why?). Thus it belongs to $M$.
Last, we note that this is only a sufficient condition. There are spaces that are sequential but not first-countable. Wikipedia lists $\mathbb{R}/\mathbb{Z}$ as one such space.
Edit: Since the tag you have is analysis, I'm assuming you're only working with metric spaces, which are automatically first-countable.
|
H: How to refer to the single element inside a unit set?
The other similar questions/answers refer to advanced set theory notation or to defining special-purpose functions for this.
[I have answered here my own question with a middle ground solution.]
If sets were ordered, I could just do $s_1$ and get the first (and only) element of the unit set $s$. But they are not ordered. Is there a shorthand for that?
Related: how to extract an element according to a single-match criterion, like pertaining to another set?
AI: Based on parts of answers pointed to by the comments, I think a good approach would be $[\{x\} \mapsto x](s)$.
|
H: Inequality from Geometric Series
My Calculus II textbook shows the following example:
I do not understand how they got $-2$ from solving the inequality $\left|\left(\frac{x}{2}\right)^2\right| < 1$. Doesn't it result in an imaginary part when you take the square root of $-4$ in the last step?
Therefore, we have
\begin{align*}
\frac{x^2}{4-x^2}&=\frac{x^2}{4\left(1-\left(\frac{x}{2}\right)^2\right)}\\
&=\frac{\frac{x^2}{4}}{1-\left(\frac{x}{2}\right)^2}\\
&=\sum\limits_{n=0}^\infty\frac{x^2}{4}\left(\frac{x}{2}\right)^{2n}
\end{align*}
This series converges as long as $\left|\left(\frac{x}{2}\right)^2\right|<1$ (note that when $\left|\left(\frac{x}{2}\right)^2\right|=1$ the series does not converge). Solving this inequality, we conclude that the interval of convergence is $(-2,2)$ and
AI: It sounds like you successfully went from $|(\frac x2)^2|<1$ to $-4<x^2<4$.
However, to solve this inequality for $x$, it is not correct to simply apply square roots to all three expressions. Mechanical algebraic steps (while great) need to be supplemented by knowledge about the functions involved.
Here, we know that $x^2$ is always nonnegative; therefore the inequalities $-4<x^2<4$ are equivalent to the inequalities $0\le x^2<4$.
Now the square root function is defined and increasing on the entire range of values we care about, and so we can apply it to get $\sqrt 0\le\sqrt{x^2}<\sqrt4$, which evaluates to $0\le|x|<2$.
|
H: What is a central algebra?
I'm studying the Brauer group.
But I have a problem at the starting point: what is a central algebra?
The definition of the center of a $\mathbb{k}$-algebra $S$ is as follows (Noncommutative Algebra, Farb & Dennis, p. 86, 1991):
$ Z(S) = \{x\in S : x.s=s.x \, \forall s\in S\}$ that is, Z(S) is just the center of S considered as a ring. Note that for an algebra S over $\mathbb{k}$, it is always true that $\mathbb{k} \subset Z(S)$.
But why?
By the definition, $Z(S)$ must be a subset of $S$. How can we say $\mathbb{k}\subset S$?
For this I think we define a new map $\alpha:\mathbb{k} \rightarrow S$, $k\mapsto k.1_S$ (using the module operation). If we think of it this way, we say $\alpha(\mathbb{k})\subset Z(S)$. Is this what the definition wants to say?
Once it is explained what $\mathbb{k} \subset Z(S)$ means, it is easy to reach the definition of a central algebra.
AI: Yes, there is an abuse of notation going on here and your suggested interpretation is correct.
There is a natural embedding of $\mathbb{k}$ into $S$ whenever $S$ is a $\mathbb{k}$-algebra - namely, via the map $$x\mapsto x 1_S,$$ where $1_S$ is the multiplicative identity in $S$. Generally we conflate $\mathbb{k}$ with its image under this embedding.
So more precisely, a $\mathbb{k}$-algebra $S$ is central iff $$Z(S)=\{x 1_S: x\in\mathbb{k}\}.$$ For example, $\mathbb{C}$ as an $\mathbb{R}$-algebra is not central since it is fully commutative but the relevant inclusion $\mathbb{R}\hookrightarrow\mathbb{C}$ is not surjective. Meanwhile, the quaternions $\mathbb{H}$ do form a central $\mathbb{R}$-algebra since the center is the set of quaternions of the form $a+0i+0j+0k$, which is exactly the image of the canonical embedding of $\mathbb{R}$ into $\mathbb{H}$.
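To see concretely why nothing with a nonzero $i$, $j$, or $k$ component is central in $\mathbb{H}$, note for instance that $ij=k$ while $ji=-k$, so $i\notin Z(\mathbb{H})$; the analogous computations rule out $j$ and $k$, and a short calculation shows that a quaternion commuting with all of $i,j,k$ must have vanishing imaginary part.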
|
H: True/false question about compact sets in real analysis
This question was a true/false question in my real analysis quiz today, and I am unable to provide a reason for my answer.
The question is: A closed and bounded subset of a complete metric space is compact.
I think it need not be compact, because some open cover may fail to have a finite subcover, but I am unable to provide a counterexample.
AI: For a counterexample, you can consider $[0,1]$ with the discrete metric $d(x,y) = 1$ if $x \ne y$, $d(x,y) = 0$ if $x = y$. This is complete because any Cauchy sequence is eventually constant and hence convergent, but the open cover $\{B(x,\frac 12) : x \in [0,1]\}$ has no finite subcovers because each ball $B(x, \frac 12) = \{x\}$ is a singleton.
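More generally, under the discrete metric every subset is closed and bounded (any set has diameter at most $1$), while a subset is compact if and only if it is finite, since the cover by singleton balls has no finite subcover when the set is infinite. So any infinite subset yields a counterexample of this kind.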
|
H: A Simple Application of Mean Value Theorem?
If $f:\mathbb{R}\to \mathbb{R}$ is continuous, $a>0$ and $\int_{-a}^a f(x)dx=0$, prove that there exists some $\xi\in (0,a)$ such that
$$\int_{-\xi}^{\xi}f(x)dx=f(\xi)+f(-\xi)$$
A natural idea is to consider the function $g:[0,a]\to \mathbb{R}$, where
$$g(y)=\int_0^y \left\{\int_{-x}^x f(t)dt-\left[f(x)+f(-x)\right]\right\}dx$$
It is easy to see that $g$ is differentiable on $(0,a)$ and that $g(0)=0$. Furthermore, note that
$$\int_0^y f(-x)dx=-\int_0^{-y}f(z)dz=\int_{-y}^0 f(z)dz$$
Therefore, we know that
$$g(a)=\int_0^a \int_{-x}^x f(t)dtdx-\int_{-a}^a f(x)dx=\int_0^a \int_{-x}^x f(t)dtdx$$
If $\int_0^a \int_{-x}^x f(t)dtdx=0$, then by the mean value theorem, there exists some $\xi$ such that
$g'(\xi)=0$, i.e. that
$$\int_{-\xi}^{\xi}f(x)dx=f(\xi)+f(-\xi)$$
However, in general $\int_0^a \int_{-x}^x f(t)dtdx$ need not be $0$.
Let $h(x)=\int_{-x}^x f(t)dt$. Then $h'(x)=f(x)-(-1)f(-x)=f(x)+f(-x)$. Also, $h(0)=h(a)=0$. So there exists some $w\in (0,a)$ such that
$$f(w)+f(-w)=h'(w)=0$$
But this only proves that $\int_{-a}^a f(x)dx=f(w)+f(-w)$ though. I'm not sure how to proceed.
What can we do when $\int_0^a \int_{-x}^x f(t)dtdx\neq 0$? Or is there some clever trick that I didn't notice? Any hint will be appreciated. Thank you!
AI: As you note, it suffices to show that for a differentiable function $h(\xi)$ with $h(0) = h(a) = 0$, there exists a $\xi \in (0,a)$ such that $h(\xi) = h'(\xi)$.
To that end, note that
$$
h(\xi) - h'(\xi) = 0 \iff h'(\xi)e^{-\xi} - h(\xi)e^{-\xi} = 0.
$$
If we define the new function $H(x) = e^{-x}h(x)$, it suffices to show that, given $H(0) = H(a) = 0$, there is a $\xi$ with $H'(\xi) = 0$. However, this is precisely the content of Rolle's theorem.
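Spelling out that last step: $H'(x)=-e^{-x}h(x)+e^{-x}h'(x)=e^{-x}\big(h'(x)-h(x)\big)$, and since $e^{-x}$ never vanishes, the point $\xi$ produced by Rolle's theorem indeed satisfies $h'(\xi)=h(\xi)$.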
|
H: Find the domain of convergence for the series as well as the sum $S(x)$.
The given series: $$\sum^{\infty}_{n=1} \frac{\cos (\pi n) \sin \left(\pi x \right)}{(n+1)n \cot^n x}$$
Here is what I did:
$$\sum^{\infty}_{n=1} \frac{\cos (\pi n) \sin \left(\pi x \right)}{(n+1)n \cot^n x} \le \frac{1}{n(n+1) \cot^n x} = \left[ y = \cot^n x \right]$$
According to the necessary condition for the series: $$\lim_{n \to \infty} \frac{1}{n(n+1)y} = 0 \Rightarrow y \neq 0$$ Then $\cot^n x \neq 0 \Rightarrow x \neq \frac{\pi k}{2} \ \ \forall k \in Z \Rightarrow x \in ] \pi k + \frac{\pi}{2}; \frac{3\pi k}{2}+\frac{\pi}{2}[$, where $k \in Z$
I was thinking that I had found the domain of convergence, but I was wrong. I guess I found an interval on which convergence is possible, but it is not the domain of convergence? Also, how do I find the sum $S(x)$ for this series?
$$\sin(\pi x)\sum_{n=1}^\infty \frac{(-1)^n}{n(n+1)}\tan^n(x)$$
$\tan x = y$. Applying ratio test:
$$\lim_{n\to \infty} \left|\frac{n(n+1)y^{n+1}}{y^n(n+1)(n+2)}\right| = \lim_{n\to \infty}\left|\frac{ny}{n+2}\right| = |y|$$
Then $|\tan x|< 1 \iff x\in ]-\frac{\pi}{4}+\pi k; \frac{\pi}{4}+\pi k [, \ \ k \in Z$
AI: Using the Taylor series for $\log(1+z)=\sum_{n=1}^\infty \frac{(-1)^{n-1}z^n}{n}$, we have for $|z|=|\tan(x)|<1$
$$\begin{align}
\sum_{n=1}^\infty \frac{\cos(\pi n)\sin(\pi x)}{n(n+1)\cot^n(x)}&=\sin(\pi x)\sum_{n=1}^\infty \frac{(-1)^n}{n(n+1)}\tan^n(x)\\\\
&=\sin(\pi x)\sum_{n=1}^\infty \left(\frac{(-1)^n\tan^n(x)}{n}-\frac{(-1)^n\tan^n(x)}{n+1}\right)\\\\
&=\sin(\pi x)\sum_{n=1}^\infty \frac{(-1)^n\tan^n(x)}{n}\\\\&+\sin(\pi x)\cot( x)\sum_{n=1}^\infty \frac{(-1)^{n+1}\tan^{n+1}(x)}{n+1}\\\\
&=\sin(\pi x)-\sin(\pi x)(1+\cot(x))\log(1+\tan(x))
\end{align}$$
The series also converges for $|\tan(x)|=1$. That evaluation is left as a simple exercise for the reader.
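(Indeed, at $|\tan(x)|=1$ the series converges absolutely, because $\sum_{n\geq 1}\frac{1}{n(n+1)}$ converges.)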
|
H: Does continuous extension exists under specific conditions
I was unable to solve this problem from my Topology exam and would appreciate help.
True or False: A continuous function on $\mathbb{Q} \cap [0, 1]$ can always be extended to a continuous function on $[0, 1]$.
I couldn't think of what theorem or counterexample I should use and hence I am posting here.
Any help please.
AI: Hint: What is the simplest function you can think of that blows up at $1/\sqrt 2?$
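Spelling the hint out (this is one standard counterexample; the intended one may differ): since $\frac{1}{\sqrt 2}$ is irrational, $f(x)=\frac{1}{x-1/\sqrt 2}$ is defined and continuous at every point of $\mathbb{Q}\cap[0,1]$, yet it is unbounded near $\frac{1}{\sqrt 2}$. A continuous function on the compact set $[0,1]$ must be bounded, so $f$ admits no continuous extension, and the statement is false.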
|
H: Open and Closed subset of $\mathbb{R}^n$
Let $A,B\subseteq \mathbb{R}^n$, define $A+B=\{a+b : a\in A,b\in B\}$ Then which of the following is/are true?
$(1)$ If $A$ and $B$ are open, then $A+B$ is open.
$(2)$ If $A$ is open and $B$ is closed, then $A+B$ is closed.
$(3)$ If $A$ is closed and $B$ open, then $A+B$ is open.
$(4)$ If $A$ and $B$ are closed, then $A+B$ is closed.
My attempt
To prove $(1)$
Let $a+b \in A+B$. Since $A$ and $B$ are open, there are $\epsilon_1,\epsilon_2 \gt 0$ such that $B(a,\epsilon_1)\subseteq A$ and $B(b, \epsilon_2) \subseteq B$
Here $B(a,\epsilon_1)=\{x\in \mathbb{R}^n : ||x-a||\lt \epsilon_1\}$
Let $\epsilon=\min \{\epsilon_1,\epsilon_2\}$
Let $z\in B(a+b,\epsilon)$ be arbitrary.
Then $||(a+b)-z||\lt \epsilon$
or $||a-(z-b)||\lt \epsilon \le \epsilon_1$
$\Rightarrow z-b=a_1 \in A$
$\Rightarrow z=a_1+b \in A+B$
Hence $B(a+b,\epsilon) \subseteq A+B$, implying $A+B$ is open if $A$ and $B$ are open.
$(2)$ It is false by taking $A=(0,1) $ and $B=\{0\}$ in $\mathbb{R}$
To prove $(3)$
This is much the same as $(1)$.
Let $a+b\in A+B$. Since $B$ is open, there is $\epsilon \gt 0$ such that $B(b,\epsilon)\subseteq B$
Let $z\in B(a+b,\epsilon)$
$||(a+b)-z||\lt \epsilon$
$\Rightarrow ||b-(z-a)||\lt \epsilon $
$z-a=b_1 \in B$
$z=a+b_1 \in A+B$ and thus $A+B$ is open.
$(4)$ It is false; I have two examples.
Eg $(a)$: $A=\mathbb{N}$ and $B=\{-n+\frac 1n: n\in \mathbb{N}\}$ in $\mathbb{R}$. Then $A$ and $B$, being discrete subsets, are closed. But $A+B$ contains the sequence $\{1/n \}$ converging to $0$, while $0\notin A+B$.
Eg $(b)$: A book I referred to asked me to think of $\tan x$ on $(-\pi/2,\pi/2)$ in $\mathbb{R}^2$, but I can't proceed. Can you please help me complete it?
Sorry for the long solution, but being a budding mathematician, I want correct proofs. Can you please go through it and point out mistakes, if any?
Any alternative ideas will be appreciated. Thanks for your valuable time.
AI: Your answers seem right to me.
Here is a more general way to look at (1) and (3).
Claim 1. If $A$ is open and $B$ is arbitrary then $A+B$ is open.
Proof. Note that $A+B=\bigcup_{b\in B}A+\{b\}$. Each set $A+\{b\}$ is open (by your argument for "open + closed is open"; but this specific case is easier.) Now $A+B$ is a union of open sets so it's open.
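In particular, Claim 1 settles both (1) and (3) at once, since in each case one of the two summands is open and the other can be arbitrary.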
For $\tan x$ I guess you could do something like let $A$ be the graph of $\tan x$ on that interval and let $B=\{(0,r):r\in\mathbb{R}\}$. Those are closed sets but $A+B$ is $(-\pi/2,\pi/2)\times\mathbb{R}$ which is not closed. (I like your example better.)
|
H: why is it called $f^{-1}(x)$?
Why do we name the inverse function $f^{-1}(x)$? Is it nonstandard to say $f^{0}(x)=x$, $f^{1}(x)=f(x)$, $f^{2}(x)=f(f(x))$, $f^{\infty}(x)=f(f(f(\cdots f(x)\cdots)))$?
Can some (or all) functions have an $f^{1/2}(x)$?
Like if $f(x)=x+1$ then $f^{1/2}(x)=x+1/2$ so $f^{1/2}(f^{1/2}(x))=(x+1/2)+1/2=x+1=f(x)$
Or if $f(x)=x^4$ then $f^{1/2}(x)=x^2$ so $f^{1/2}(f^{1/2}(x))=(x^2)^2=x^4=f(x)$
What about superscripts with complex numbers, which are supposed to be a natural phenomenon appearing anywhere that the negative reals can?
$f(f(x))$ adds the superscripts; what about multiplying them? This notation could be formalized with recursion. It could help describe the Mandelbrot set more elegantly as $f^{\infty}(0)$, where $f^n(z)=\left(f^{n-1}(z)\right)^2+z$. It could help people realise there exist smooth transitions between functions and themselves called on themselves: $f^{1.5}(x)$
Most importantly, who invented this syntax? Is there an area of mathematics formally exploring this? Thanks!
AI: I have no idea who invented this syntax. However, I typically use $f^{\circ n}(x)$ rather than $f^n(x)$ to denote repeated composition, since $\circ$ is the symbol used to denote function composition, and $f^n$ can be easily confused with exponentiation. (Especially when using trig functions, since $\sin^2 x$ is often used to denote $(\sin x)^2$.)
Regarding why $f^{-1}$ denotes the inverse function: notice that repeated function composition satisfies the following nice addition property:
$$f^{\circ m}\circ f^{\circ n}=f^{\circ (m+n)}$$
If we want to make $f^{\circ n}$ well-defined for negative $n$, but still want this nice addition property to hold true, then we need
$$f^{\circ -1}\circ f^{\circ 1}=f^{\circ 1}\circ f^{\circ -1}=f^{\circ 0}$$
and the inverse function $f^{-1}$ satisfies the desired property.
If you’re looking for a multiplicative analogue of the additive law
$$f^{\circ m}\circ f^{\circ n}=f^{\circ (m+n)}$$
then the following might be what you’re looking for:
$$(f^{\circ m})^{\circ n}=f^{\circ mn}$$
Be careful referring to $f^{\circ 1/2}$ as if it were a unique function. For example, the function $f(x)=2x+1$ has at least two half-iterates:
$$f^{\circ 1/2}(x) =^? \sqrt{2}x+\frac{1}{1+\sqrt{2}}$$
and
$$f^{\circ 1/2}(x) =^? -\sqrt{2}x+\frac{1}{1-\sqrt{2}}$$
So again, be careful! It’s not always uniquely defined.
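One can verify the first of these directly:
$$f^{\circ 1/2}\big(f^{\circ 1/2}(x)\big)=\sqrt{2}\left(\sqrt{2}\,x+\frac{1}{1+\sqrt{2}}\right)+\frac{1}{1+\sqrt{2}}=2x+\frac{\sqrt{2}+1}{1+\sqrt{2}}=2x+1=f(x),$$
and the same computation with $-\sqrt{2}$ and $\frac{1}{1-\sqrt{2}}$ confirms the second.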
Regarding which functions have half-iterates: lots of them, but if you impose the restriction that the half-iterate must be continuous, it turns out that continuous decreasing functions cannot have continuous half-iterates.
Related blog posts:
Fractional iterates
A summary of functional iteration
N-involutory rational functions
Iterated polynomials
|
H: Online Calculator for Complex Calculus - path integrals (e.g. $\int_C \frac{1}{z^3(z-1)^2}\,dz$ over $|z - 2| = 5$)
Does anyone know of an online calculator/tool that allows you to calculate integrals in the complex number set over a path?
I've searched the standard websites (Symbolab, Wolfram, Integral Calculator) and none of them has this option for complex calculus (they do have, as has been pointed out, regular integration in the complex plane, but none has an option to integrate over paths).
AI: Wolfram Alpha can certainly work with complex integrals. Here I'm integrating $e^{i \theta}$ with respect to $\theta$, between $0$ and $2 \pi$:
https://www.wolframalpha.com/input/?i=Integrate%5Be%5E%28I+theta%29%2C%7Btheta%2C0%2C2*Pi%7D%5D
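So one workaround for path integrals is to parametrize the contour yourself and feed Wolfram Alpha the resulting ordinary definite integral. For instance, for a circle such as $|z-2|=5$, substituting $z=2+5e^{i\theta}$, $dz=5ie^{i\theta}\,d\theta$ gives
$$\oint_{|z-2|=5} f(z)\,dz=\int_0^{2\pi} f\!\left(2+5e^{i\theta}\right)5ie^{i\theta}\,d\theta,$$
which is exactly the kind of integral shown above.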
|
H: Lower Bound for the square root of the sum of squares
I am looking for a good lower bound for the square root of the sum of squares:
Let's say we have some known parameters $x_i > 0$, where $i \in \{1,\dots,n\}$.
I am looking for a good lower bound for this term (a sum or something similar, but not a single square-root term):
$\sqrt{\sum_{i}x_i^2}$
PS: $\min{x_i}$ is a lower bound but not of a good quality.
Thank you.
AI: The bound $\sqrt{n} \min x_i$ is better than $\min x_i$, and it is sharp in the sense that it becomes an equality when the $x_i$ are all equal.
EDIT: Sorry - I just saw the comment on your answer. The one in the link is better than mine, in that it is generally greater than the one I gave.
SECOND EDIT: That is, $(\sum x_i)/\sqrt{n}$.
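For completeness, that bound is the Cauchy-Schwarz inequality in disguise:
$$\left(\sum_{i=1}^n x_i\right)^2=\left(\sum_{i=1}^n 1\cdot x_i\right)^2\le n\sum_{i=1}^n x_i^2,
\qquad\text{hence}\qquad
\sqrt{\sum_{i=1}^n x_i^2}\ \ge\ \frac{1}{\sqrt n}\sum_{i=1}^n x_i,$$
again with equality when all the $x_i$ are equal.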
|
H: standard normal law is rotation invariant
Prove that the standard normal law is rotation invariant, i.e., $X \overset{d}{=} UX$ (equality in distribution) for any unitary matrix $U$.
I've no idea how to prove it; any suggestions?
AI: If an $n$-dimensional variable $X$ has PDF $(2\pi)^{-n/2}\exp(-\tfrac12x^Tx)$, then $Y:=UX$ has PDF $(2\pi)^{-n/2}\exp(-\tfrac12(U^{-1}y)^T(U^{-1}y))$, because the Jacobian factor of the change of variables is $|\det U|=1$ (indeed, $U^TU=I$ gives $(\det U)^2=1$). But, using $(U^{-1})^T=U$ (which holds since $U^{-1}=U^T$),$$(U^{-1}y)^T(U^{-1}y)=y^T(U^{-1})^TU^{-1}y=y^TUU^{-1}y=y^Ty.$$
|
H: Tournament championship proof
Let's define a tournament as a contest among $n$ players where each player plays a game against each other player and there are no draws. Now let me define a tournament champion.
A tournament champion is a player $c$ where, for each other player $p$ in the tournament, either
$c$ won his/her game against $p$, or
there’s a player $q$ where $c$ won his/her game against $q$ and $q$ won his/her game against $p$.
I need to prove the following:
Let T be an arbitrary tournament and p be any player in that tournament. Prove the following statement: if $p$ won more games than anyone else in $T$ or is tied for winning the greatest number of games, then $p$ is a tournament champion in $T$.
My proof is:
Let $c$ be any player in $T$ who won more games than anyone else or is tied for winning the greatest number of games. We want to show that $c$ is also a champion in $T$. We proceed by contradiction. Assume that $c$ is not a champion. Then there must exist a player $p$ who beat $c$ and such that, for every other player $q$ who beat $p$, $c$ also lost his/her game against $q$.
Assume that each player plays $N$ games and that there were $n$ players $q$ who beat $p$. It means that the maximum number of victories of $c$ is $c_v = N - n - 1$, because he/she lost to all the $q$'s and to $p$. Notice that $n$ is exactly the number of losses of $p$, and therefore the number of victories of $p$ is $p_v = N - n$. We see that $p_v > c_v$, which means that $c$ did not win the greatest number of games (and was not even tied for winning the most games), contradicting our assumption. Consequently, $c$ is a champion.
Could you please review my proof, point out what's wrong with it, and suggest how it can be improved? I'm especially interested in variable introduction - do I do that right? And can I do something like this?
Assume that each player plays $N$ games and that there were $n$
players $q$ who beat $p$.
I'm not sure if I can manipulate a group of $n$ objects in the proof, because, as I saw earlier, other proofs do something like
Let $k$ be any number/player/whatever in $T$
AI: Your proof is correct. In particular, the way in which you introduced $N$ and $n$ is fine. Your argument could be presented more clearly and efficiently, but that’s partly because you’re not writing in your first language. Here is a more polished version of the same argument.
Let $T$ be a tournament with $n$ players, so that each player plays $n-1$ games, and let $c$ be a player in $T$ who won at least as many games as any other player in $T$; we want to show that $c$ is a champion. If not, there is another player, $p$, who beat $c$ and also beat every player whom $c$ beat. Thus, if $c$ beat $m$ players, $p$ beat at least $m+1$ players, contradicting our hypothesis that no player won more games than $c$.
We don’t actually need to argue by contradiction here: essentially the same argument proves the contrapositive, i.e., that if $c$ is not the champion, then some player won more games than $c$. It’s even possible to give a direct proof that a player who won at least as many games as any other player is a champion:
Assume that no player won more games than $c$, let $p$ be any other player, and suppose that $p$ beat $c$. Let $n_c$ be the number of games won by $c$ and $n_p$ the number won by $p$. Let $A$ be the set of players other than $c$ and $p$. Then $p$ beat $c$ and $n_p-1$ members of $A$, and $c$ beat $n_c\ge n_p>n_p-1$ members of $A$, so there is at least one $a\in A$ such that $c$ beat $a$, and $p$ did not beat $a$. But that means that $a$ beat $p$. Thus, every player who beat $c$ was beaten by someone whom $c$ beat, and therefore $c$ is a champion.
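A small example illustrating the theorem: in the $3$-player tournament where $a$ beats $b$, $b$ beats $c$, and $c$ beats $a$, all three players are tied with one win each, and indeed each is a champion. For example, $a$ beat $b$ directly and reaches $c$ through $b$.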
|
H: Prove asymptotic equivalence of $\text{li}(n)$ and $n/\ln(n)$
The prime number theorem, PNT, states that the prime counting function $\pi(n)$ is asymptotically equivalent to Gauss' first approximation:
$$\pi(n) \sim \frac{n}{\ln(n)}$$
We know this means that
$$\lim_{n \rightarrow \infty}\frac{\pi(n)}{n/\ln(n)} = 1$$
Gauss' second approximation is the logarithmic integral $\text{li}(n)$, and this produces better approximations for $\pi(n)$.
$$\pi(n) \sim \text{li}(n) = \int_{0}^{n}\frac{1}{\ln(x)}dx$$
The prime number theorem is also stated in terms of this $\text{li}(n)$.
For the PNT to be valid with both approximations, the two approximations must be asymptotically equivalent to each other. That is,
$$\text{li}(n) \sim \frac{n}{\ln(n)}$$
Question: How does one prove the two approximations are asymptotically equivalent?
We can expand the logarithmic integral using integration by parts, and each application leaves behind an integral. Repeated applications extract terms of the form $\frac{An}{\ln^k(n)}$.
$$\text{li}(n) = \frac{n}{\ln(n)} + \frac{n}{\ln^2(n)} + \frac{2n}{\ln^3(n)} + \int_0^n\frac{6\,dx}{\ln^4(x)} + C$$
Can we argue that dividing each term by $\frac{n}{\ln(n)}$, and taking the limit $n \rightarrow \infty$, leaves terms that all tend to zero except the first term which tends to 1?
Can we argue that arbitrary applications of integration by parts result in terms that tend to zero, and that the remaining integral is itself smaller because the $\ln(x)$ in the denominator of the integrand has higher and higher powers?
Note: I am not mathematically trained so would appreciate responses with minimal assumptions about terminology.
AI: One way to prove it is to use de l'Hôpital's rule.
Let $f(x) = \frac{x}{\log{x}}$ and $g(x) = \int_2^x \frac{dt}{\log{t}}$ (the lower limit of integration is immaterial for the asymptotics; starting at $2$ avoids the singularity of $\frac{1}{\log t}$ at $t=1$). Then
$$
\lim_{x \to \infty}f(x) = \lim_{x\to\infty}g(x)=\infty
$$
so the quotient
$$
\frac{f(x)}{g(x)}
$$
is an indeterminate form of type $\frac{\infty}{\infty}$ as $x \to \infty$. We can apply de l'Hôpital's rule once and we obtain
$$
\lim_{x \to \infty}\frac{f(x)}{g(x)} = \lim_{x \to \infty}\frac{f'(x)}{g'(x)}.
$$
We have
$$
f'(x) = \frac{\log{x}-1}{\log^2 x} \qquad \text{ and } \qquad g'(x)=\frac{1}{\log{x}},
$$
the latter following from the fundamental theorem of calculus. Finally we have
$$
\lim_{x \to \infty}\frac{f(x)}{g(x)} = \lim_{x \to \infty}\frac{f'(x)}{g'(x)} = \lim_{x \to \infty}\frac{(\log x - 1)\log x}{\log^2 x} = 1.
$$
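As a remark on the rate of convergence: the last quotient simplifies to $1-\frac{1}{\log x}$, so the ratio approaches $1$ only at the slow rate $\frac{1}{\log x}$, which is consistent with the second term $\frac{n}{\ln^2 n}$ in the integration-by-parts expansion from the question.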
|
H: Prove that a linear operator $T$ on a finite-dimensional vector space is invertible if and only if zero is not an eigenvalue of $T$.
(a) Prove that a linear operator $T$ on a finite-dimensional vector space is invertible if and only if zero is not an eigenvalue of $T$.
(b) Let $T$ be an invertible linear operator. Prove that a scalar $\lambda$ is an eigenvalue of $T$ if and only if $\lambda^{-1}$ is an eigenvalue of $T^{-1}$.
MY ATTEMPT
(a) Let us suppose that $\lambda = 0$ is an eigenvalue of $T$. Thus $\det([T]_{\mathcal{B}}) =\det([T]_{\mathcal{B}} - 0 I_{n}) = 0$, and $T$ is not invertible.
Conversely, let us suppose that $T$ is not invertible. Then $0 = \det([T]_{\mathcal{B}}) = \det([T]_{\mathcal{B}} - 0I_{n})$, and $0$ is an eigenvalue of $T$.
(b) Indeed, provided that $\lambda \neq 0$, one has
\begin{align*}
Tv = \lambda v \Rightarrow T^{-1}(Tv) = T^{-1}(\lambda v) \Rightarrow (T^{-1}T)v = \lambda(T^{-1}v) \Rightarrow T^{-1}v = \lambda^{-1}v
\end{align*}
Once again, I am mainly interested in the wording of my proof. Any comments or contributions are appreciated.
AI: Your proofs are correct.
I would edit the second assertion to put the equations on separate lines separated by the implications, rather than running them all together.
I think that a proof that relies more directly on the concepts and avoids determinants would be better. If $0$ is an eigenvalue, then there is a nonzero $v$ with $Tv=0$, so $T$ is not injective, and hence not invertible. Conversely, if $T$ is not invertible then it is not bijective. For a linear transformation from a finite-dimensional space to itself, bijective is equivalent to injective. But a noninjective linear transformation has a nontrivial kernel, so it has $0$ as an eigenvalue.
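One small remark on part (b): the proviso $\lambda \neq 0$ is automatic, since by part (a) an invertible operator cannot have $0$ as an eigenvalue. Also, the displayed chain only proves one direction; the converse follows by applying the same computation to $T^{-1}$, which is itself invertible with inverse $T$.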
|