H: slot machine, probability of stops
Could you please tell me, am I correct with my answers in this task?
A slot machine has 3 wheels. Each wheel has 11 stops: a bar and the digits 0,1,2,...,9.
When the handle is pulled, the 3 wheels spin independently before coming to rest.
Find the probability that the wheels stop on the following positions:
3 bars ,
the same digit on each wheel ,
at least one bar
my answers:
1/1331 ,
10 / 1331 ,
331 / 1331
AI: You are correct on the first one: $(\frac{1}{11})^3$ is indeed the probability, which turns out to be $\frac{1}{1331}$ which is what you got.
You are also correct on the second one: the first wheel may show any digit (probability $\frac{10}{11}$, since a bar is not a digit), and then each of the other two wheels must match it (probability $\frac{1}{11}$ each), giving $(\frac{1}{11})^2 \cdot \frac{10}{11} = \frac{10}{1331}$, which is what you got.
For at least one bar, it's better to calculate the probability that we get zero bars (complementary counting): there are $10$ non-bar stops out of $11$, so $11^3-10^3 = 331$ outcomes contain at least one bar, and that over $11^3$ is $\frac{331}{1331}$, which is also what you got.
You got all the answers correct. Good job!
-FruDe
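If you want to double-check these numerically, here is a minimal brute-force sketch in Python, enumerating all $11^3$ equally likely outcomes (the bar is encoded as the symbol 'B', an arbitrary choice):
```python
from fractions import Fraction
from itertools import product

# Each wheel has 11 stops: the digits 0-9 and a bar (encoded here as 'B').
stops = list(range(10)) + ['B']
outcomes = list(product(stops, repeat=3))   # 11**3 = 1331 equally likely outcomes

def prob(event):
    # Exact probability of an event over the uniform sample space.
    return Fraction(sum(event(o) for o in outcomes), len(outcomes))

print(prob(lambda o: all(s == 'B' for s in o)))                  # 1/1331   (3 bars)
print(prob(lambda o: o[0] != 'B' and o[0] == o[1] == o[2]))      # 10/1331  (same digit on each wheel)
print(prob(lambda o: 'B' in o))                                  # 331/1331 (at least one bar)
```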
|
H: Find side length of square with vertices on line $y=x+8$ and parabola $y=x^2$
Let ABCD be a square with the side AB lying on the line $y=x+8$. Suppose C, D lie on the parabola $x^2=y$. Find the possible values of the length of the side of the square.
I'm not sure how to start, I thought of taking the four vertices as $(t_1,t_1+8), (t_2,t_2+8), (t_3,t_3^2), (t_4,t_4^2)$ but I don't think that helps.
AI: Let $y=x+b$ be the parallel line on which the other two vertices lie and substitute it into $y=x^2$ to get $x^2-x-b=0$. Then we have $x_1+x_2=1$ and $x_1x_2=-b$, and the side length $a$ of the square satisfies
$$a^2 = (y_1-y_2)^2+(x_1-x_2)^2= (x_1-x_2)^2((x_1+x_2)^2+1)\\
= ((x_1+x_2)^2-4x_1x_2)((x_1+x_2)^2+1)=2+8b
$$
Note that the distance between the two parallel lines is $\frac{|8-b|}{\sqrt2}$ to establish the equation for the side length as
$$a= \sqrt{2+8b}=\frac{|8-b|}{\sqrt2}$$
Solve to obtain $b=2,\>30$ and the corresponding side lengths of the square $3\sqrt2,\> 11\sqrt2$.
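A quick numerical check of the two cases, as a small Python sketch (for each $b$ it prints the chord $CD$ on the parabola and the distance between the parallel lines, which agree: $3\sqrt2$ for $b=2$ and $11\sqrt2$ for $b=30$):
```python
import math

def side_lengths(b):
    # Intersections of y = x + b with y = x^2 are the roots of x^2 - x - b = 0.
    d = math.sqrt(1 + 4 * b)
    x1, x2 = (1 + d) / 2, (1 - d) / 2
    chord = math.hypot(x1 - x2, x1**2 - x2**2)   # length of the side CD on the parabola
    dist = abs(8 - b) / math.sqrt(2)             # distance between y = x + 8 and y = x + b
    return chord, dist

for b in (2, 30):
    print(b, side_lengths(b))
print(3 * math.sqrt(2), 11 * math.sqrt(2))
```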
|
H: What is the final intuition of Galois solvable groups and radical solutions?
At the end of the fundamental theorem of Galois theory, and after some intermediate moments of clarity realizing, for example, that the subfield lattice is built on fixed elements by different automorphisms, and that the corresponding subgroups are permutations of roots, there comes the idea that the quintic doesn't have necessarily radical solutions because we can find polynomials with a subgroup lattice that is not "solvable."
Unfortunately, and despite the suggestive name of "solvable," it feels like hitting an unmotivated new definition involving a chain of normal subgroups and abelian quotient groups.
What is the non-axiomatic intuition of the solvability of subgroups to see their need to have radical solutions?
AI: There's no real intuition behind calling solvable groups solvable other than the fact that a solvable Galois group is equivalent to solvability by radicals. But in a typical lecture, the definition comes first, and then the proof that solvability of the Galois group and solvability of the equation by radicals are equivalent. Historically, this was the other way around: People were looking for conditions under which a polynomial equation is solvable by radicals. When they actually found a necessary and sufficient condition, they called it solvability, since that's what it's really used for: determining whether polynomial equations are solvable (by radicals) or not.
Solving a polynomial equation by radicals is essentially nothing else than taking the field we're working with, throwing a radical into the mix, then adding another radical, then another, and so on, until the solution of the polynomial equation is included in the field. If this is possible, then the equation is solvable by radicals. And the nice thing is, we can throw in additional, possibly unneeded radicals to make working with the corresponding field extensions easier. It would be optimal if we could make the extensions Galois! To start, we can throw in as many roots of unity as we like. So we do that. Luckily, adding primitive $n$-th roots of unity to $\mathbb Q$ results in a Galois extension with Galois group $\mathbb Z_n^\times$ (the unit group of $\mathbb Z_n$), which is Abelian. Now comes the important part: If our field already contains suitable roots of unity, then adding a root of any other element will also result in a Galois extension, this time with cyclic Galois group. This means that as we add all the roots we need to solve our polynomial equation, we always get a Galois extension with Abelian Galois group (cyclic groups are also Abelian). Now consider the finished extension and its Galois group. All the Abelian groups we considered beforehand are the factor groups in the chain of normal subgroups used to define solvable groups!
So, if a polynomial equation is solvable by radicals, then there is a Galois extension in which the polynomial splits and which has a chain of normal subgroups with Abelian factors. And we can also show that if a group has this property, then its quotients have this property as well. And the Galois group of the splitting field of our polynomial is a quotient of the one we just constructed. So if a polynomial equation is solvable, then the Galois group of its splitting field has the property that there exists a chain of normal subgroups with Abelian factors. This is big, since it gives us a necessary condition for solvability by radicals.
Even better! It can be shown that this condition is not only necessary. It is also sufficient. So this property of the Galois group completely determines whether the underlying equation is solvable by radicals. Such a cool property deserves a name. And since it completely determines solvability of an equation, why not call the property solvability?
|
H: Correct way of proving if and only if statements?
When proving that
\begin{equation}
A\iff B
\end{equation}
We generally split the proof into two parts:
\begin{equation}
A\implies B \tag{1}
\end{equation}
\begin{equation}
\tag{2}
B\implies A
\end{equation}
In the cases I have seen, these proofs are completely independent. Today I've come across a different proof, though. After proving $(1)$, it proved $(2)$ by using $(1)$ in the last part of the proof. Is it fine to do so from a logical standpoint?
AI: This is OK. After proving $A\implies B$ you can use it for whatever proof you like,
even for the proof of $B\implies A$. This does not introduce a logical flaw of some sort.
|
H: Finding the probability that each child gets at least 1 ball when we are distributing 5 DISTINCT balls among 4 children(who are distinct of course).
My approach:-
First, I found the total number of cases(the sample space). We can find that out from, $$4^5=1024$$
Now for favourable cases, I did this:-
I selected any 4 balls out of the given 5 in $${5\choose 4}$$ ways, then I gave each child 1 ball out of the selected 4 balls, which can be done in $4!$ ways. With this we have provided each child 1 ball, hence satisfying the condition given in the question that no one should go home empty-handed. Now we have 1 ball left which has to be given to any of the 4 children. So that ball can be given in 5 ways.
Hence the total favourable cases become, $${5\choose4}(4!)(5)=600$$ So the probability becomes $$600/1024$$But the answer given in the book is 15/64.
AI: You’re overcounting, as described in aryan bansal’s comment. A valid method would be to use Inclusion-Exclusion to count the number of ways at least one child gets no ball, which is $$S={4\choose 1} 3^5 -{4\choose 2} 2^5 +{4\choose 3} 1^5 $$ and then subtract this from the total number of ways, giving the number of ways where every child gets at least one ball. The answer therefore is $$ \frac{4^5-S}{4^5} =\frac{15}{64}$$
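As a sanity check, a brute-force enumeration of all $4^5$ assignments (a small Python sketch) confirms the count $240$ and the probability $15/64$:
```python
from itertools import product

# Assign each of the 5 distinct balls to one of the 4 distinct children.
assignments = list(product(range(4), repeat=5))                   # 4**5 = 1024 total
favourable = sum(1 for a in assignments if len(set(a)) == 4)      # every child gets at least one ball

print(favourable, len(assignments))     # 240 1024
print(favourable / len(assignments))    # 0.234375 = 15/64
```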
|
H: Problems with Universal Generalization
I’ve been studying universal generalization recently and, according to the textbooks, $\forall x Q(x)$ can be derived from $Q(a)$, if the variable $a$ is arbitrary. A variable is arbitrary, when it does not appear in any of the undischarged assumptions throughout the derivation.
Example $1$: Prove that $\forall x Q(x)$ derives from $\forall x [P(x) \rightarrow Q(x)]$ and $\forall x P(x)$.
$$\begin{array}{lll}
1. & \forall x [P(x) \implies Q(x)] & \\
2. & \forall x P(x) & \\ \hline
3. & P(a) \implies Q(a) & (1.; \text{U.I.}) \\
4. & P(a) & (2.; \text{U.I.}) \\
5. & Q(a) & (3.,4.;\text{M.P.})\\
6. & \forall xQ(x) & (5.; \text{U.G.})
\end{array}$$
We could generalize $Q(a)$ in step $6$, because the variable $a$ was not in the premises of our proof (steps $1$ and $2$).
However, consider the following example:
Example $2$: Prove that $\forall x Q(x)$ derives from $\forall x [P(a) \rightarrow Q(x)]$ and $P(a)$.
$$\begin{array}{lll}
1. & \forall x[P(a) \implies Q(x)] & \\
2. & P(a) & \\ \hline
3. & P(a) \implies Q(a) & (1.; \text{U.I.}) \\
4. & Q(a) & (2.,3.; \text{M.P.}) \\
5. & \forall xQ(x) & (4.; \text{U.G.}) \\ & & \text{MISTAKE: $a$ appears in $P(a)$}
\end{array}$$
In this case, according to the definition of arbitrariness presented above, we are actually not able to universally generalize $Q(a)$ in step $5$, since the variable $a$ does appear in one of the premises (step $2$). Nevertheless, $\forall x Q(x)$ does derive from $\forall x [P(a) \rightarrow Q(x)]$ and $P(a)$, so universal generalization should be possible here. Where is my reasoning flawed?
AI: Prove that $(\forall x)Q(x)$ derives from $(\forall x)[P(a) \implies Q(x)]$ and $P(a)$.
$$\begin{array}{lll}
1. & \forall x[P(a) \implies Q(x)] & \\
2. & P(a) & \\ \hline
3. & P(a) \implies Q(x_0) & (1.; \text{U.I.}) \\
4. & Q(x_0) & (2.,3.; \text{M.P.}) \\
5. & \forall xQ(x) & (4.; \text{U.G.})
\end{array}$$
Note that, when we use the U.I. rule, we are considering that $P(a) \implies Q(x_0)$, for any chosen $x_0$. So, in the end, you have $Q(x_0)$ for all $x_0$. Then, we only have to use the U.G. rule. $\square$
|
H: Convergence of infinite series of log function
Check the convergence of the infinite series $\sum\limits\frac1{(\log n)^{3/2}}$.
I have tried to use comparison test but got no success.
AI: More generally, consider $\sum\limits_{n=2}^\infty \frac{1}{(\log n)^p}$.
We use the Cauchy condensation test:
\begin{eqnarray}
\sum\limits_{n=2}^\infty 2^n \frac{1}{(\operatorname{log}2^n)^p}=\frac{1}{(\operatorname{log}2)^p} \sum\limits_{n=2}^\infty \frac{2^n}{n^p}
\end{eqnarray}
which diverges for all $p>0$, since $\frac{2^n}{n^p}\to \infty$ for all $p>0$. Hence the original series diverges for every $p>0$; in particular, it diverges for $p=3/2$.
|
H: Finding first term of arithmetic sequence given first three terms and no common difference
Assuming that the first three terms of an arithmetic sequence are $x, \frac{1}{x}, 1$ and $x<0$.
I seem to be unable to figure out what the first term is.
I know that $a_n = a_1+d(n-1)$, but how do we work out the common difference in order to calculate $a_1$?
Is there any way to calculate this recursively, perhaps, given that we know the value of $a_3$?
I've tried manipulating the arithmetic formula above to figure this out but seem to be stuck. Can someone please point me in the right direction without flat-out giving the answer away?
AI: By calculating the common difference in two different ways we have
$$\frac1x-x=1-\frac1x$$
which simplifies to
$$(x-1)(x+2)=0$$
and hence the first term is $x=-2$ as $x\lt0$.
|
H: On the existence of a pullback
I’m not sure about my answer to the following problem:
Problem: Let $A,B$ and $C$ be sets, and let $f:A \rightarrow C$ and $g:B \rightarrow C$ be maps. Show that there exists a set $P$ and maps $h:P \rightarrow A$ and $k:P \rightarrow B$ such that $f \circ h = g \circ k$, and that for any set $X$ and maps $s:X \rightarrow A$ and $t:X \rightarrow B$ such that $f \circ s = g \circ t$, there is a unique map $u:X \rightarrow P$ such that $s = h \circ u$ and $t = k \circ u$.
Here it is my solution.
Solution: I’m dividing my solution in three parts in order to be more organised:
I started by defining the set $P$ as $P = \{(x,y) \in A \times B \mid f(x) = g(y)\}$ and the maps $h:P \rightarrow A$ and $k:P \rightarrow B$ as $h((x,y))=x$ and $k((x,y))=y$ for all $(x,y) \in P$. Then it follows that $f \circ h, g \circ k:P \rightarrow C$. For $x \in P$, we deduce that $x = (a,b)$ with $a \in A$, $b \in B$ and $f(a)=g(b)$. So $(f \circ h)(x)=f(h(x))=f(h((a,b)))=f(a)=g(b)=g(k((a,b)))=g(k(x))=(g \circ k)(x)$. Therefore $f \circ h = g \circ k$.
For the next step, I defined the map $u:X \rightarrow P$ as $u(x)=(s(x),t(x))$ for all $x \in X$. Now, let $x \in X$. We know that $(f \circ s)(x)=(g \circ t)(x)$, so $f(s(x))=g(t(x))$. Then $(s(x),t(x)) \in P$, which means that $u(x) \in P$. We observe that $h \circ u:X \rightarrow A$. Hence $(h \circ u)(x)=h(u(x))=h((s(x),t(x)))=s(x)$. Therefore $h \circ u = s$. By the same reasoning, we conclude that $k \circ u = t$. This proves the existence of the map $u$.
Now we turn our attention to the uniqueness of map $u$. Suppose that $u_1,u_2:X \rightarrow P$ are maps such that $h \circ u_1 = s = h \circ u_2$ and $k \circ u_1 = t = k \circ u_2$. Let $x \in X$, then $u_1(x) = (u_{1,1}(x),u_{1,2}(x))=((h \circ u_1)(x),(k \circ u_1)(x)) = (s(x),t(x))=((h \circ u_2)(x),(k \circ u_2)(x))=(u_{2,1}(x),u_{2,2}(x))=u_2(x)$. Therefore $u_1 = u_2$. So such map is unique.
What is concerning me on this solution is:
Is it really necessary that $f \circ s = g \circ t$?
Since I rarely use the above condition, I feel like something is missing in the solution (specially in the uniqueness part).
Any ideas or comments about that? Thank you for your time!
AI: The thing about pullbacks in general is that it is a pair of maps $h:P\to A$ and $k:P\to B$ that are universal with the property $f\circ h=g\circ k$. In other words, it is somehow the "best pair of functions" that achieve this property.
Therefore, it's necessary to compare the pair $(h,k)$ only against pairs $(s,t)$ that also satisfy this property; that is, $f\circ s=g\circ t$. The metric of being "better" is measured by the existence of a unique map $u$ through which $s$ and $t$ factor to recover $h$ and $k$ (which you've stated precisely in your question).
You mention that you "hardly use" the property $f\circ s=g\circ t$, and sure it may have been used only once, but it was used in a crucial way: the map $u:X\to P$ you defined would not exist otherwise. The map $u:X\to P$ is necessarily unique without this condition, because like you've shown, the set $P$ is a subset of $A\times B$ and so functions into $P$ are determined by their action on the components. Since $h$ and $k$ are just projections into the respective components, any two $u_1,u_2:X\to P$ that agree on components will be equal.
You can use this fact to realise the necessity of $f\circ s=g\circ t$ for the existence portion: by the uniqueness argument, you are forced to define $u:X\to P$ as $u(x) := (s(x),t(x))$ as you've done, but this is only a well-defined function $X\to P$ iff $(s(x),t(x))\in P$ for all $x$; that is, $f(s(x))=g(t(x))$ for all $x\in X$.
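If it helps, the whole construction can be seen concretely for finite sets; here is a small illustrative Python sketch (the particular sets and maps are made up for the example):
```python
# Hypothetical finite example: sets A, B, C and maps f: A -> C, g: B -> C as dicts.
A, B, C = {1, 2, 3}, {'x', 'y'}, {0, 1}
f = {1: 0, 2: 1, 3: 1}
g = {'x': 0, 'y': 1}

# The pullback P with its projections h and k.
P = {(a, b) for a in A for b in B if f[a] == g[b]}
h = lambda pr: pr[0]
k = lambda pr: pr[1]

# Given s: X -> A and t: X -> B with f(s(x)) = g(t(x)), the unique map u is x -> (s(x), t(x)).
X = {'p', 'q'}
s = {'p': 1, 'q': 3}
t = {'p': 'x', 'q': 'y'}
assert all(f[s[x]] == g[t[x]] for x in X)      # the compatibility condition
u = {x: (s[x], t[x]) for x in X}
assert all(u[x] in P and h(u[x]) == s[x] and k(u[x]) == t[x] for x in X)
print(P, u)
```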
|
H: Units in $A = \mathbb{Z}_3[x]/(x^2+1)$
Let $A = \Bbb{Z}_3[x]/(x^2+1)$, the quotient ring by the ideal $(x^2+1)$. Which ones are units?
I did this question in a very boring way, merely listing all the possibilities and checking. I cannot find an efficient way to find the units. Is there any efficient way to do this?
AI: Perhaps the most efficient elementary way of doing it in this case is to note that
everything nonzero in $\mathbb Z/3\mathbb Z$ is a unit
everything in $A$ can be written (uniquely) in the form $a+bx$ for $a,b\in\mathbb Z/3\mathbb Z$
and then ask what an inverse for $a+bx$ would look like in general. For this, since we are identifying $x^2+1\sim0$, we can think of $x$ as a square root of $-1$, so you can seek inspiration from the complex numbers.
For complex numbers, the inverse of $a+bi$ is given by rationalising the denominator:
$$
\frac1{a+bi} = \frac{a-bi}{(a+bi)(a-bi)} = \frac{a-bi}{a^2+b^2} = \frac a{a^2+b^2} + \frac{-b}{a^2+b^2}i
$$
you can check that if the right-hand side is well-defined in $A$ (using $x$ instead of $i$), then it will also serve as an inverse. You can now ask when the right-hand side is defined, which is whenever $a^2+b^2\neq0\pmod3$, and this is true whenever $a$ and $b$ are not both zero.
Therefore, the units are $a+bx$ for $a$ and $b$ not both zero, and this constitutes all nonzero elements of $A$.
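If you do want to confirm this by brute force, a short Python sketch over the nine elements $a+bx$ (using $x^2=-1$ for the multiplication) does it quickly:
```python
# Elements of A = Z_3[x]/(x^2+1) are pairs (a, b) representing a + b*x.
def mul(p, q):
    a, b = p
    c, d = q
    # (a + b x)(c + d x) = (ac - bd) + (ad + bc) x, since x^2 = -1 in A
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

elements = [(a, b) for a in range(3) for b in range(3)]
units = [p for p in elements if any(mul(p, q) == (1, 0) for q in elements)]
print(units, len(units))   # the eight nonzero elements; only (0, 0) is not a unit
```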
|
H: Exponentiating similar Laplacians
Let $L_c$ be the $n\times n$ Laplacian matrix of the complete graph, and $L$ be the $n\times n$ Laplacian of any simple, connected graph possessing a vertex $k$ of maximum degree $d_k=n-1$. Clearly, there is a column of $L$ that looks like the corresponding column of $L_c$.
Now, consider the exponential matrices $e^{-itL_c}$ and $e^{-itL}$, where $t>0$ is a real parameter. Is the equality of the same columns preserved? I've been playing around with this in Mathematica and haven't come up with a counterexample.
Edit. Here are a few examples. For the star graph we obtain with $n=5$ the following matrix: $$\left(
\begin{array}{ccccc}
\frac{1}{5} \left(1+4 e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) \\
\frac{1}{5} \left(1-e^5\right) & \frac{1}{20} \left(4+15 e+e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) \\
\frac{1}{5} \left(1-e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) & \frac{1}{20} \left(4+15 e+e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) \\
\frac{1}{5} \left(1-e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) & \frac{1}{20} \left(4+15 e+e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) \\
\frac{1}{5} \left(1-e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) & \frac{1}{20} \left(4-5 e+e^5\right) & \frac{1}{20} \left(4+15 e+e^5\right) \\
\end{array}
\right)$$ where clearly the first column is the same as the first column of the exponential of $L_c$, given by $$\left(
\begin{array}{ccccc}
\frac{1}{5} \left(1+4 e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) \\
\frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1+4 e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) \\
\frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1+4 e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) \\
\frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1+4 e^5\right) & \frac{1}{5} \left(1-e^5\right) \\
\frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1-e^5\right) & \frac{1}{5} \left(1+4 e^5\right) \\
\end{array}
\right).$$ It's easy enough to see that this works for a generic $n$, as the dependance of the coefficients on $n$ is straightforward. A simpler example is the path graph on three vertices, which gives $$\left(
\begin{array}{ccc}
\frac{1}{6} \left(2+3 e+e^3\right) & \frac{1}{3} \left(1-e^3\right) & \frac{1}{6} \left(2-3 e+e^3\right) \\
\frac{1}{3} \left(1-e^3\right) & \frac{1}{3} \left(1+2 e^3\right) & \frac{1}{3} \left(1-e^3\right) \\
\frac{1}{6} \left(2-3 e+e^3\right) & \frac{1}{3} \left(1-e^3\right) & \frac{1}{6} \left(2+3 e+e^3\right) \\
\end{array}
\right)$$ to be compared with the $3\times 3$ $e^{L_c}$ matrix.
AI: Okay, so after fiddling around with it for a bit, I got a proof.
It can perhaps be made a little bit shorter.
Also, everything should go through with the $-it$ term, but I didn't include this in the proof.
We begin with a lemma which provides a sufficient condition for $\exp(A)$ and $\exp(B)$ to have the same first column.
Lemma
Let $A$ and $B$ be symmetric matrices with the same first column, denoted $v = A e_1 = B e_1$.
If $A v = B v = \alpha v$ for some $\alpha \in \mathbb{R}$, then $\exp(A)$ and $\exp(B)$ have the same first column.
Proof
Recall that the matrix exponential is defined as
$$\exp (A) = I + A + \frac{1}{2} A^2 + \frac{1}{6} A^3 + \dots = \sum_{k=0}^\infty \frac{1}{k!} A^k$$
Using this, we have that the first column of $\exp(A)$ is given by
$$
\exp (A) e_1
= \sum_{k=0}^\infty \frac{1}{k!} A^k e_1
= I e_1 + \sum_{k=1}^\infty \frac{1}{k!} A^k e_1
= e_1 + \sum_{k=1}^\infty \frac{1}{k!} A^{k-1} v .$$
The same argument may be used to obtain the first column of $\exp(B)$.
Note that the first term $e_1$ is independent of the matrix being exponentiated.
Thus, by canceling this common term, we have that the first column of $\exp(A)$ is equal to the first column of $\exp(B)$ if and only if
$$
\sum_{k=1}^\infty \frac{1}{k!} A^{k-1} v = \sum_{k=1}^\infty \frac{1}{k!} B^{k-1} v
$$
We now show that the corresponding terms in both series are equal by induction on $k$.
The base case of $k=1$ follows as
$$A^{k-1} v = A^{0} v = I v = B^{0} v = B^{k-1} v.$$
Suppose that for each $k \leq r$, $A^{k-1} v = B^{k-1} v$.
Then for $k=r+1$ with $r \geq 1$,
\begin{align}
A^{(r+1)-1} v &= A^{r} v \\
&= A^{r-1} (A v) \\
&= A^{r-1} (\alpha v) &\text{(By assumption, $Av = \alpha v$)}\\
&= \alpha (A^{r-1} v) \\
&= \alpha (B^{r-1} v) &\text{(inductive hypothesis)}\\
&= B^{r-1} (\alpha v) \\
&= B^{r-1} B v &\text{(By assumption, $Bv = \alpha v$)}\\
&= B^r v\\
&= B^{(r+1)-1} v
\end{align}
Thus, we have shown that all terms in the series are equal, so that the first columns of $\exp(A)$ and $\exp(B)$ are equal. $\square$
The last thing we need is the following claim, which makes use of Laplacian matrices. I'll leave it as an exercise to prove since it's very similar to the fact that the all ones vector $\mathbf{1}$ is an eigenvector with eigenvalue $0$.
Claim If $L$ is a Laplacian matrix whose first row/column is $v = \begin{pmatrix} n-1, & -1, & -1, & \dots, & -1 \end{pmatrix}$, then $L v = n \cdot v$.
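For what it's worth, here is a quick numerical illustration of the lemma and claim for your $n=5$ star-graph example, as a sketch using numpy and scipy's matrix exponential `expm`:
```python
import numpy as np
from scipy.linalg import expm

n = 5
L_complete = n * np.eye(n) - np.ones((n, n))            # Laplacian of the complete graph K_n
L_star = np.diag([n - 1.0] + [1.0] * (n - 1))           # Laplacian of the star graph
L_star[0, 1:] = -1
L_star[1:, 0] = -1

# Both Laplacians share the first column v, and L v = n v for each of them.
v = L_star[:, 0]
print(np.allclose(L_complete[:, 0], v))
print(np.allclose(L_star @ v, n * v), np.allclose(L_complete @ v, n * v))

# Hence exp(L) (and exp(-i t L)) have the same first column for both Laplacians.
print(np.allclose(expm(L_star)[:, 0], expm(L_complete)[:, 0]))
t = 0.7
print(np.allclose(expm(-1j * t * L_star)[:, 0], expm(-1j * t * L_complete)[:, 0]))
```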
|
H: UFD is also an ideal of a ring
Is it true that when a UFD is another ring $R$'s ideal, then ring $R$ is also a UFD?
I found an example but I'm not sure: the holomorphic ring $\mathcal O_x$ is a UFD, and the meromorphic ring $\mathcal M_x$ is also a UFD (I guess, but I'm not sure).
AI: If your ideal shares its identity with the containing ring, then trivially yes, since an ideal containing the identity is the entire ring.
If it is an ideal with a different identity, then no: that identity would be a nontrivial idempotent of $R$, and domains (in particular UFDs) don’t have nontrivial idempotents, so $R$ could not even be a domain.
|
H: Proof of the weak Mordell-Weil theorem and showing that torsion part of $A(k)$ is finite
Reading a proof of the weak Mordell-Weil theorem, I'm stuck somewhere. We have the following theorem :
Let $A$ be an abelian variety defined over a number field $k$ and $v$ a finite place of $k$ at which $A$ has good reduction. Let $\tilde{k}$ be the residue field of $v$ and let $p$ be the characteristic of $\tilde{k}$. Then, the map :
$$ A_m(k) \rightarrow \tilde{A}(\tilde{k})$$
is injective for any $m \geq 1$, $p \nmid m$, where $A_m$ denotes the kernel of multiplication by $m$ on $A$, i.e. the $m$-torsion subgroup.
From there, it is written that we can directly show that $A_{\text{tors}}$, i.e. the torsion part of $A$, is finite. Actually, we choose two places $v$ and $w$ at which $A$ has good reduction, with residue fields of different characteristics $p$ and $q$. It is then written that we obtain an injection :
$$ i : A(k)_{\text{tors}} \hookrightarrow \tilde{A}_v(\tilde{k_v}) \times \tilde{A}_w(\tilde{k_w})$$
But I can't justify that this map is an injection. By composing with the projection onto $\tilde{A}_v(\tilde{k_v})$ or $\tilde{A}_w(\tilde{k_w})$, if we take some $x$ in the kernel of $i$, with $x$ of $m$-torsion and $p \nmid m$ or $q \nmid m$, we can then deduce that $x= 0$ using the theorem above. But if $x$ is of $pq$-torsion, this argument doesn't work anymore, right?
I really can't manage to justify the fact that $i$ is injective...
Thank you for the help !
AI: It follows from the first fact that anything in the kernel of the reduction
$$A_{tors}(k) \rightarrow \tilde{A}(\tilde{k})$$
must have order a power of $p$. Actually, that fact is how one usually proves the first statement (the kernel is associated to a formal group, and the torsion of formal groups can only have elements of prime power order, for the prime in question) so your trouble here might just be from picking a slightly less than optimal statement to apply.
Take some $x$ on the LHS of order $p^r m$ with $p\nmid m$. That means we can decompose it into a sum $y+z$ where $y$ has order $p^r$ and $z$ has order $m$.
Reducing, $\tilde y$ has order dividing $p^r$ and possibly strictly smaller than $p^r$, and $\tilde z$ has order exactly $m$, because the above map is injective on prime-to-$p$ torsion, and hence preserves orders of elements prime to $p$. In an abelian group, when you add two elements of coprime order, the order of the sum is the product of their orders, and so the order of $\tilde y + \tilde z$ is $m$ times whatever power of $p$ is associated to $\tilde y$. In particular it is at least $m$ and so it can only have order 1 (i.e. be the identity) when $m=1$, which is to say that $x$ has order a power of $p$.
Thus when you put together two such maps, the kernel of $i$ is the intersection of the kernels of the two reduction maps. By the above, that means everything in the kernel has order that is both a power of $p$ and a power of $q$, but the primes $p,q$ are distinct, so everything in the kernel must have order $1$, i.e. it is just the identity.
In the case of your particular example, when you reduce such an $x$ of order $pq$ mod two such primes, in the first component only the ``$p$'' part of its order can be killed, and so it has order $q$ or $pq$ in the first component. Likewise in the second component it has order $p$ or $pq$. In any of those cases the image must have order at least $pq$, hence exactly $pq$, so $x$ does not lie in the kernel.
|
H: Under what circumstances does $\lvert \lambda\rvert \lvert \lvert x \rvert \rvert = \lvert \lvert y \rvert \rvert $ imply $\lambda x = y$
In a proof I saw, we made use of the fact that
for some $y = \lambda_{1}x_{1} + \lambda_{2}x_{2}$ , if we have
$\lvert \lambda_{1}\rvert \lvert \lvert x_{1} \rvert \rvert = \lvert \lvert y \rvert \rvert $, then we can conclude that $y = \lambda_{1}x_{1}$.
This can certainly not be true in a general case, right? What assumptions are needed for it to be true. In the proof I mentioned, the space we were investigating is a Hilbert space.
Edit:
Let $\mathcal{H}$ be a Hilbert space, and $F_{1},F_{2}$ two bounded linear functionals such that $F_{1}\neq 0$ and $F_{2}\neq 0$. Suppose that
$\forall x \in \mathcal{H}: \lvert F_{1}(x)\rvert=\lvert \lvert F_{1}\rvert \rvert \cdot \lvert\lvert x \rvert \rvert \implies F_{2}(x)=0$
Now show that
$\forall x \in \mathcal{H}: \lvert F_{2}(x)\rvert=\lvert \lvert F_{2}\rvert \rvert \cdot \lvert\lvert x \rvert \rvert \implies F_{1}(x)=0$
Proof:
Identify, $y_{1},y_{2}$ with $F_{1}(\cdot)=\langle y_{1}, \cdot\rangle$ and $F_{2}(\cdot)=\langle y_{2}, \cdot\rangle$ by Riesz representation, then we can clearly see that:
$\lvert F_{1}(y_{1})\rvert=\langle y_{1},y_{1}\rangle = \lvert \lvert y_{1}\rvert \rvert^{2}\implies \langle y_{2},y_{1}\rangle = 0 $
Now consider the closed subspace $K:=\operatorname{span}\{y_{1},y_{2}\}$. Then, by the orthogonal projection theorem, every $x \in \mathcal{H}$ can be written as $x = \alpha_{1}y_{1}+\alpha_{2}y_{2}+k$ where $k \in K^{\perp}$
And hence we assume that for some $x \in \mathcal{H}$ that $\lvert \langle y_{2}, x\rangle \rvert= \lvert \lvert y_{2}\rvert \rvert \cdot \lvert \lvert x \rvert \rvert$. Note that
$\lvert\langle y_{2},x\rangle\rvert = \lvert \alpha_{2} \rvert \cdot \lvert \lvert y_{2}\rvert \rvert^{2}\implies \lvert \alpha_{2} \rvert \cdot \lvert \lvert y_{2}\rvert \rvert^{2}=\lvert \lvert x \rvert \rvert \cdot \lvert \lvert y_{2}\rvert \rvert\implies \lvert \lvert x \rvert \rvert =\lvert \alpha_{2} \rvert \cdot \lvert \lvert y_{2}\rvert \rvert$
And then the implication which I do not understand is stated:
$x = \alpha_{2}y_{2}$, hence implying that $\langle y_{1},x\rangle = 0$.
AI: Note that $y_1, y_2, k$ are pairwise orthogonal, so by Pythagoras:
$||x||^2=||\alpha_1y_1||^2+||\alpha_2y_2||^2+||k||^2$
But during the proof we also got that $||x||^2=||\alpha_2y_2||^2$. Hence we must have $||\alpha_1y_1||^2+||k||^2=0$, which implies $\alpha_1y_1=k=0$.
|
H: 2 questions in Statement of Primary Decomposition Theorem and it's Corollary in Linear Algebra
I am self studying Linear Algebra from Hoffman Kunze and I have 2 questions in section 6.8 whose image I am adding->
Questions: (1) In the last paragraph, how can one deduce that the $W_{i}$'s are invariant under $T$?
(2) In the last line of the corollary: how is each subspace $W_{i}$ invariant under $U$?
AI: Well, (2) implies (1), as $T$ commutes with itself, so let's look at (2): For $i = 1, …, k$, as $W_i = \ker p_i^{r_i}(T)$,
$$(p_i^{r_i}(T)∘U)(W_i) = (U∘p_i^{r_i}(T))(W_i) = U(p_i^{r_i}(T)(W_i)) = U(0) = 0,$$
so $U(W_i) ⊆ \ker p_i^{r_i}(T) = W_i$.
|
H: Prove or disprove: $A$ is a subgroup of $G$ if and only if $AA=A$.
I have a question about groups.
I need to prove or disprove:
Let $G$ be a group, and $A$ non-empty subset of $G$. $A$ is a subgroup of $G$ if and only if $AA=A$, where $AA=$ $\{a*a'|a,a' \in A\}$.
If $A$ is a subgroup then of course $AA=A$.
However, I couldn't prove the other direction. I know $A$ is closed under multiplication, but I think something must be wrong with the inverse. However, I couldn't prove it. Any help will be appreciated!
AI: $G=\mathbb{Z}$ and $A=\mathbb{N}\cup\{0\}$ is a counterexample.
The statement is true if $G$ is finite though. (because then the inverse of $g$ is a power of $g$, so if a subset is closed under multiplication then it has to be closed under inverses)
|
H: A matrix polynomial converging to $A^T$
Does there exist a sequence of matrices $A_i$ such that $$\sum^\infty_{i=0}A_iA^i=A^T$$
I tried inputting $A= 0$ and $A=I$, but these don't give any substantial information except $$\sum^\infty_{i=0}A_i=I$$
if we take the classical example of a nilpotent matrix $\left[\begin{matrix} 0 & 1 \\ 0 & 0 \\ \end{matrix}\right]$
$$A_1 \left[\begin{matrix} 0 & 1 \\ 0 & 0 \\ \end{matrix}\right] = \left[\begin{matrix} 0 & 0 \\ 1 & 0 \\ \end{matrix}\right]$$
I don't know how to progress from here, any help?
AI: That is impossible already for the last equation you wrote, $A_1 A = A^T$: just multiply both sides by $A$ on the right. Since $A^2=0$, the LHS becomes $0$,
and the RHS becomes $A^TA$, which is non-zero.
|
H: If $\alpha,\beta,\gamma$ are the roots of $x^3+x+1=0$, then find the equation whose roots are: $(\alpha-\beta)^2,(\beta-\gamma)^2,(\gamma-\alpha)^2$
Question:
If $\alpha,\beta,\gamma$ are the roots of the equation, $x^3+x+1=0$, then find the equation whose roots are: $({\alpha}-{\beta})^2,({\beta}-{\gamma})^2,({\gamma}-{\alpha})^2$
Now, the normal way to solve this question would be to use the theory of equations and find the sum of roots taken one at a time, two at a time and three at a time. Using this approach, we get the answer as $(x+1)^3+3(x+1)^2+27=0$. However, I feel that this is a very lengthy approach to this problem. Is there an easier way of doing it?
AI: Let $a,b,c$ be the roots of $x^3+x+1=0$ so we have $a+b+c=0, ab+bc+ca=1,abc=-1$, so $a^2+b^2+c^2=-2$ and $c^3=-c-1$
We would explore a transformation from $x$ to $y$ to get the required cubic equation of $y$.
Let $$y=(a-b)^2=a^2+b^2-2ab\implies y=-2-c^2+2/c \implies c=\frac{3}{1+y}$$
Replacing $c$ by $x$ we get the required transformation $x=\frac{3}{1+y}$, putting it in the given $x$ equation, we get:
$$\frac{27}{(1+y)^3}+\frac{3}{(1+y)}+1=0 \implies y^3+6y^2+9y+31=0,$$
which is the required cubic equation.
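A quick numerical check of this (a small numpy sketch comparing the squared root differences with the roots of $y^3+6y^2+9y+31$):
```python
import numpy as np

a, b, c = np.roots([1, 0, 1, 1])                     # roots of x^3 + x + 1
squared_diffs = [(a - b)**2, (b - c)**2, (c - a)**2]

key = lambda z: (round(z.real, 8), round(z.imag, 8))
lhs = sorted(squared_diffs, key=key)
rhs = sorted(np.roots([1, 6, 9, 31]), key=key)       # roots of the transformed cubic
print(np.allclose(lhs, rhs))                         # True
```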
|
H: $\lim_{(x,y)\to(0,0)} \frac{x^2y^3}{x^4+2y^6}$ limit calculation
I have tried to write the limit using polar coordinates, but I am left with a $\cos(\theta)$ in the denominator. Thanks for the help.
AI: Along the path $y=0$
$$\lim_{(x,0)\to(0,0)} \frac{0}{x^4+0} = 0$$
but along the path $ y=x^{\frac{2}{3}}$
$$\lim_{(x,x^{\frac{2}{3}})\to(0,0)} \frac{x^4}{3x^4} = \frac{1}{3}$$
thus the limit does not exist.
|
H: Simple cardinal arithmetic
How can I see that $$2^{2^\lambda}>2^\lambda$$ ? Is it used here that $\lambda \geq 2^{\aleph_0}$ ?
The reference is here, pages 4 and 8.
AI: Let $X$ be a set of cardinality $2^\lambda$, then there is no bijection between $X$ and its powerset; that is, $2^{2^\lambda}=2^{|X|}>|X|=2^\lambda$ by Cantor's theorem. This holds for any cardinal $\lambda$.
|
H: Newton’s method to estimate a root of $f(x)=x^5-3x^2+1$
Taking $x_1=3$ as my initial estimate.
My work so far
The derivative of $f$, which is
$$f'(x)=\frac{d}{dx}(x^5-3x^2+1)=5x^4-6x$$
And, applying Newton's method to the table below.
\begin{array}{|c|c|c|c|c|}
\hline
x_n& f(x_n) & f'(x_n) & \frac{f(x_n)}{f'(x_n)} & x_n-\frac{f(x_n)}{f'(x_n)} \\ \hline
x_1=3.000000 & 217.000000 & 387.000000 & 0.560724 & 2.439276 \\ \hline
x_2=2.439276 & 69.508302 & 162.380993 & 0.428057 & 2.011220 \\ \hline
x_3=2.011220 & 21.772682 & 69.742981 & 0.312185 & 1.699035 \\ \hline
x_4=1.699035 & 6.498158 & 31.471553 & 0.206477 & 1.492558 \\ \hline
x_5=1.492558 & 1.724044 & 15.858533 & 0.108714 & 1.383844 \\ \hline
x_6=1.383844 & 0.329922 & 10.033520 & 0.032882 & 1.350962 \\ \hline
x_7=1.350962 & 0.024737 & 8.549144 & 0.002894 & 1.348068 \\ \hline
x_8=1.348068 & 0.000181 & 8.424276 & 0.000021 & 1.348047 \\ \hline
x_9=1.348047 & 0.000000 & 8.423353 & 0.000000 & 1.348047 \\ \hline
\end{array}
After $x_8$, it becomes apparent that the iteration converges to about $1.348$, a root of the polynomial. Is my process correct? Also, would there be a better way of computing this?
EDIT - The table has been amended. Thanks Alexey Burdin!
AI: This should really be a comment. Consider this python script. I let myself borrow your table structure and the header. The script produces this:
\begin{array}{|c|c|c|c|c|}
\hline
x_n& f(x_n) & f'(x_n) & \frac{f(x_n)}{f'(x_n)} & x_n-\frac{f(x_n)}{f'(x_n)} \\ \hline
x_1=3.000000 & 217.000000 & 387.000000 & 0.560724 & 2.439276 \\ \hline
x_2=2.439276 & 69.508302 & 162.380993 & 0.428057 & 2.011220 \\ \hline
x_3=2.011220 & 21.772682 & 69.742981 & 0.312185 & 1.699035 \\ \hline
x_4=1.699035 & 6.498158 & 31.471553 & 0.206477 & 1.492558 \\ \hline
x_5=1.492558 & 1.724044 & 15.858533 & 0.108714 & 1.383844 \\ \hline
x_6=1.383844 & 0.329922 & 10.033520 & 0.032882 & 1.350962 \\ \hline
x_7=1.350962 & 0.024737 & 8.549144 & 0.002894 & 1.348068 \\ \hline
x_8=1.348068 & 0.000181 & 8.424276 & 0.000021 & 1.348047 \\ \hline
x_9=1.348047 & 0.000000 & 8.423353 & 0.000000 & 1.348047 \\ \hline
\end{array}
There's no place for a mistake in the script, you see?)
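The linked script itself is not reproduced above; a minimal sketch along the same lines (plain floating-point Newton iteration, printing rows in the same format as the table) could be:
```python
f = lambda x: x**5 - 3 * x**2 + 1
fp = lambda x: 5 * x**4 - 6 * x          # derivative of f

x = 3.0
for n in range(1, 10):
    step = f(x) / fp(x)                  # Newton step f(x_n) / f'(x_n)
    print(f"x_{n}={x:.6f} & {f(x):.6f} & {fp(x):.6f} & {step:.6f} & {x - step:.6f} \\\\ \\hline")
    x -= step
```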
|
H: Prove that $\lim(x_n)=0$ using definition of limit of sequences.
Let $x_n:=\dfrac{1}{\ln(n+1)}\space\forall \space n\in \mathbb N$
Use the definition of limit of sequences to prove that $\lim(x_n)=0$
I tried to use $e^n>n+1\Rightarrow n>\ln(n+1)$, but that gives me $\dfrac{1}{n}<\dfrac{1}{\ln(n+1)}$, which doesn't seem of much use.
Please help.
AI: To show that a sequence $(x_n)$ converges to $0$, you need to prove that for every $\epsilon > 0$, there exists a natural number $N$ such that for every natural number $n > N$, you have $\vert x_n\vert < \epsilon$.
So let's take $\epsilon > 0$ arbitrarily small. We have
$$
\dfrac{1}{\ln(n+1)} < \epsilon
$$
if and only if
$$
n+1 > e^{1/\epsilon}.
$$
So you can fix some natural number $N$ larger than $e^{1/\epsilon}$ and you will have for every natural number $n > N$ that (using monotonicity of log and positivity of the sequence)
$$
0 < x_n < x_N < \epsilon,
$$
which gives the result.
|
H: If $X$ is not full rank, are $X^TX$ or $X^TX + \lambda I_p$ invertible?
Suppose $X$ is an $n \times p$ matrix, where $\operatorname{rank}(X) < p$. Since $X$ is not full rank, it is not invertible.
I'm trying to understand whether functions of $X$ are invertible:
$X^TX$
$X^TX + \lambda I_p$ ($\lambda > 0$ is some scalar, and $I_p$ is a $p \times p$ identity matrix)
$X(X^TX + \lambda I_p)^{-1}X^T$
My intuition is that 1) is NOT invertible, but 2) IS invertible. Given that 2) is invertible, however, 3) is NOT invertible.
Is this correct? Can anyone help me understand why this is?
AI: $\DeclareMathOperator{\rank}{rank}$
$1$ is not invertible, and it is because $\rank X = \rank X^TX$. See Prove that $\text{rank}(X^TX)=\text{rank}(X)$.
$2$ is invertible, as long as $\lambda > 0$. See When is $\mathbf{X}^{T}\mathbf{X}+\lambda\mathbf{I}$ invertible?.
$3$ is known as the hat matrix for ridge regression. Thanks to Brian Borchers. If $n > p$,
$$\rank X(X^TX + \lambda I)^{-1}X^T \leq \min (\rank X, \rank (X^TX + \lambda I)^{-1}X^T)$$
We know that $\rank X <p$ and $(X^TX + \lambda I)^{-1}X^T$ is a $p \times n$ matrix. Thus $\rank (X^TX + \lambda I)^{-1}X^T \leq \min(n, p) = p$ and it must be the case that
$$\rank X(X^TX + \lambda I)^{-1}X^T\leq \min (\rank X, \rank (X^TX + \lambda I)^{-1}X^T) < p < n$$
hence $X(X^TX + \lambda I)^{-1}X^T$ (an $n \times n$ matrix) is not invertible.
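A small numerical illustration of all three claims, as a sketch with numpy (the rank-deficient $X$ below is an arbitrary example with $n=6$, $p=4$, $\operatorname{rank}(X)=2$):
```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 6, 4, 0.5
X = rng.standard_normal((n, 2)) @ rng.standard_normal((2, p))     # rank(X) = 2 < p

XtX = X.T @ X
ridge = XtX + lam * np.eye(p)
H = X @ np.linalg.inv(ridge) @ X.T                                 # ridge "hat" matrix

print(np.linalg.matrix_rank(X), np.linalg.matrix_rank(XtX))        # 2 2 -> X^T X is singular
print(np.linalg.matrix_rank(ridge))                                # 4   -> X^T X + lam*I is invertible
print(np.linalg.matrix_rank(H))                                    # 2   -> the 6x6 matrix H is not invertible
```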
|
H: Question about Rudin (Principles of Math Analysis) theorem 7.26. Why does $Q_n \to 0$ uniformly?
In the proof of the Stone-Weierstrass theorem (7.26), Rudin claims $Q_n \to 0$ uniformly. Can someone explain why this is the case? I don't see how that immediately follows from the bound.
AI: Asserting that a sequence $(f_n)_{n\in\Bbb N}$ of functions from a set $D$ into $\Bbb R$ converges uniformly to a function $f$ is the same thing as asserting that the functions $f-f_n$ are bounded (at least if $n$ is large enough) and that $\lim_{n\to\infty}\sup_{x\in D}|f(x)-f_n(x)|=0$. So, asserting that $(Q_n)_{n\in\Bbb N}$ converges uniformly to $0$ is the same thing as asserting that each $Q_n$ is bounded (again, at least if $n$ is large enough) and that$$\lim_{n\to\infty}\sup_{x\in D}|Q_n(x)|=0.$$But$$\sup_{x\in D}Q_n(x)\leqslant\sqrt n\left(1-\delta^2\right)^n\text{ and }\lim_{n\to\infty}\sqrt n\left(1-\delta^2\right)^n=0,$$since $0\leqslant1-\delta^2<1$.
|
H: Find function $f(x)$ whose expansion is $\sum_{k=0}^{+\infty}k^2x^k$.
I know the expansion of $\frac{1}{1-x}$ is $$1+x+x^2+\cdots+x^k+\cdots$$
So taking the derivative of $$\frac{\partial}{\partial{x}} \frac{1}{1-x}=\frac{1}{(1-x)^2}$$
And subsequently the expansion is (after taking derivative): $$1+2x+3x^2+4x^3+...+ kx^{(k-1)}$$
And multiplying by $x$: $$x+2x^2+3x^3+\cdots+kx^k+\cdots$$
I can't seem to figure out how to work with knowing the above expansion and knowing I need to arrive at: $$x+2^2 x^2+3^2 x^3 +\cdots+k^2 x^k +\cdots$$
I thought it would be a matter of $f(x)=\frac{1}{(1-x^2)^2}$ or some form of squaring or adding a constant in there but I'm stumped.
AI: Simplify $\displaystyle\sum_{k=0}^{+\infty}k^2x^{k}$.
Starting from the geometric series below, differentiate and then multiply by $x$, twice:
\begin{align*}
\sum_{k=0}^{+\infty}x^k&=\frac{1}{1-x}\\
\sum_{k=0}^{+\infty}kx^{k-1}&=\frac{1}{(1-x)^2}\\
\sum_{k=0}^{+\infty}kx^{k}&=\frac{x}{(1-x)^2}=-\frac{1}{(1-x)}+\frac{1}{(1-x)^2}\\
\sum_{k=0}^{+\infty}k^2x^{k-1}&=-\frac{1}{(1-x)^2}+\frac{2}{(1-x)^3}=\frac{x+1}{(1-x)^3}\\
\sum_{k=0}^{+\infty}k^2x^{k}&=\frac{x(x+1)}{(1-x)^3}\\
\end{align*}
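A quick numerical check of the closed form at a sample point (a short Python sketch comparing a truncated sum with $\frac{x(x+1)}{(1-x)^3}$):
```python
x = 0.3
partial = sum(k**2 * x**k for k in range(200))   # truncated series
closed = x * (x + 1) / (1 - x)**3                # closed form derived above
print(partial, closed)                           # both approximately 1.13703...
```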
|
H: Jensen's inequality tells us variation of $x$ will increase the average value of $f(x)$?
This is from Boyd's convex optimization 6.4.1 stochastic robust approximation (p. 319):
"When the matrix $A$ is subject to variation, the vector $Ax$ will have more variation the larger $x$ is, and Jensen's inequality tells us that variation in $Ax$ will increase the average value of $\|Ax -b\|_2$."
I'm confused about "Jensen's inequality tells us that variation in $Ax$ will increase the average value of $\|Ax -b\|_2$". Taking $Ax$ as one variable, I think this statement can be expressed similarly as "larger $\textbf{var}(z) \Rightarrow \text{larger} E f(z)$, where $f$ is convex".
Jensen's inequality tells us $E f(z) \geq f(E(z))$, from which we can get the sense of larger $E(z)$ brings larger $E f(z)$. However, how can we connect to $\textbf{var}(z)$ instead?
AI: You cannot connect to variance in general. For example if a random vector $X$ varies but is always in the nullspace of a (potentially random) matrix $A$, then $AX=0$ always and any function of $AX$ is constant (having variance 0), even though both $A$ and $X$ might vary with nonzero measures of variation. [You can easily compare that to examples with new matrix $\tilde{A}$ and/or new vector $\tilde{X}$, where both $\tilde{A}$ and $\tilde{X}$ have smaller measures of variation than their counterparts $A$ and $X$, but $\tilde{A}\tilde{X}$ is nonconstant.]
In the special case when $f:\mathbb{R}^n\rightarrow\mathbb{R}$ is $c$-strongly convex (for some $c>0$) and $X\in \mathbb{R}^n$ is a random vector with finite mean $m=E[X]$ then you can connect to variance
\begin{align}
&f(X) \geq f(m) + f'(m)^{\top}(X-m) + \frac{c}{2}||X-m||^2 \\
&\implies E[f(X)] \geq f(m) + \frac{c}{2}\underbrace{E[||X-m||^2]}_{\sum_{i=1}^n Var(X_i)}
\end{align}
where $f'(m)$ is a subgradient of $f$ at $m$, which exists because $f$ is convex and $m$ is interior to $\mathbb{R}^n$. So the lower bound on the Jensen inequality gap grows with increasing $\sum_{i=1}^n Var(X_i)$.
When Boyd is saying it "will increase the average value" he is just giving an intuitive interpretation of Jensen's inequality for a convex function $f$. He is not trying to state any new theorem. More precise language would replace "will increase" with "will not decrease," meaning if you compare $E[f(X)]$ with $f(E[X])$ then one will always be at least as large as the other. Note also that he is just comparing $E[f(X)]$ and $f(E[X])$; he is not comparing $E[f(X)]$ and $E[f(\tilde{X})]$ for some other non-constant $\tilde{X}$.
|
H: $C(S \times T)$ is isomorphic to $C(S) \otimes C(T).$
Let $S$ and $T$ be two arbitrary sets and consider the vector spaces $C(S)$ and $C(T)$
generated respectively by S and T. Show that
$C(S \times T)$ is isomorphic to $C(S) \otimes C(T).$
I am starting to read Werner Greub's Multilinear Algebra and I came across this exercise. I have tried to find a bilinear mapping relating these two vector spaces; can you help me?
AI: By the universal property of the free space, there is a unique linear map $\varphi:C(S\times T)\to C(S)\otimes C(T)$ with $\varphi(s,t)=s\otimes t$ for $s\in S$ and $t\in T$.
Observe that $\varphi$ is injective since
$$0=\varphi\bigl(\,\sum_i\lambda_i(s_i,t_i)\bigr)=\sum_i\lambda_i\,s_i\otimes t_i$$
implies $\lambda_i=0$ by linear independence of the $s_i$ in $S$ and the $t_i$ in $T$ (1.5.1). Also $\varphi$ is surjective since
$$\bigl(\,\sum_i\lambda_i s_i\bigr)\otimes\bigl(\,\sum_j\mu_j t_j\bigr)=\sum_{i,j}\lambda_i\mu_j\,s_i\otimes t_j=\sum_{i,j}\lambda_i\mu_j\,\varphi(s_i,t_j)$$
and the elements on the left generate $C(S)\otimes C(T)$.
|
H: Measure 0 of set of points where $f$ is discontinuous
Let $f: [a,b] \rightarrow \mathbb{R}$ be an increasing function. Show that $\{x: f \text{ is discontinuous at $x$}\}$ has measure $0$. Hint: Show that $\{x: o(f,x) > \frac{1}{n}\}$ is finite for each integer $n$. Use the fact that given $f: [a,b] \rightarrow \mathbb{R}$ an increasing function, if $x_1, ..., x_n \in [a,b]$ are distinct, then $\sum_{i=1}^n o(f, x_i) \leq f(b) - f(a)$.
Here is my proof:
Suppose for contradiction that there exists an $N \in \mathbb{N}$ such that for all $n \geq N$, $\{x: o(f,x) > \frac{1}{N}\}$ is infinite. Pick distinct $x_1, ..., x_k \in [a,b]$ where $k$ satisfies $k > N \cdot (f(b)-f(a))$. Then, $\sum_{i=1}^k o(f, x_i) > \sum_{i=1}^k \frac{1}{N} = \frac{k}{N} > \frac{N \cdot (f(b)-f(a))}{N} = f(b) - f(a)$. But this contradicts the fact given in the hint, so our assumption is false and the set must be finite for any $n \in \mathbb{N}$. Since the set is finite for each $n \in \mathbb{N}$, and the condition of the set is equivalent to $f$ being discontinuous at point $x$, so the set has measure $0$. Since the union of measure $0$ sets is also measure $0$, so the set where $f$ is discontinuous at $x$ has to have measure $0$.
I feel like the steps make sense, but let me know if I'm missing something in the proof. Thanks!
AI: I assume that "$o(f,x)$" means the oscillation of $f$ around $x$. It seems that you understand the overall procedure of the proof, but your presentation could be improved.
Suppose for contradiction that there exists an $N \in \mathbb{N}$ such that for all $n \geq N$, $\{x: o(f,x) > \frac{1}{N}\}$ is infinite.
You don't need the "for all $n\geq N$" bit. It is also a good idea to start explaining what you are doing, something like "We will follow the hint given in the question. Suppose[...]"
Pick distinct $x_1, ..., x_k \in [a,b]$ where $k$ satisfies $k > N \cdot (f(b)-f(a))$. Then, $\sum_{i=1}^k o(f, x_i) > \sum_{i=1}^k \frac{1}{N} = \frac{k}{N} > \frac{N \cdot (f(b)-f(a))}{N} = f(b) - f(a)$. But this contradicts the fact given in the hint, so our assumption is false and the set must be finite for any $n \in \mathbb{N}$.
This is ok, but you should avoid just referring to "the set". Give it a name! Why not start the whole proof with "Given a positive integer $n$, define $O_n=\left\{x:o(f,x)>\frac{1}{n}\right\}$"?
Since the set is finite for each $n \in \mathbb{N}$, and the condition of the set is equivalent to $f$ being discontinuous at point $x$, so the set has measure $0$.
Substituting every instance of "the set" by the name you gave it makes the presentation clearer. Also there is a small part which is not right:
the condition of the set is equivalent to $f$ being discontinuous at point $x$.
What you really want to say is the following:
Recall that the function $f$ is discontinuous at a point $x$ if and only if there exists some $n$ such that $o(f,x)>\frac{1}{n}$. This means that the set of points of discontinuity of $f$ is the union $\bigcup_{n=1}^\infty O_n$.
You should be careful with the difference: The way you are phrasing it, it seems that $O_n$ is the set of points of discontinuity of $f$, which is not true!
Since the union of measure $0$ sets is also measure $0$, so the set where $f$ is discontinuous at $x$ has to have measure $0$.
Not any union, but countable unions.
Overall you seem to get the idea for the proof, but you should be more careful in your explanations. Do not try to write things in a concise manner if it makes the explanation worse, especially if it allows for a wrong interpretation (and you would probably lose marks for it). Naming your objects is usually preferred; a careful marker would take issue with vague references like "the set".
So here is my suggested improved version (with a few extra changes):
We will follow the hint given in the question.
Given a positive integer $n$, define $O_n=\left\{x:o(f,x)>\frac{1}{n}\right\}$. Suppose for contradiction that there exists an $N \in \mathbb{N}$ such that $O_N$ is infinite.
Pick distinct $x_1, ..., x_k \in [a,b]$ where $k$ satisfies $k > N \cdot (f(b)-f(a))$. Then, $\sum_{i=1}^k o(f, x_i) > \sum_{i=1}^k \frac{1}{N} = \frac{k}{N} > \frac{N \cdot (f(b)-f(a))}{N} = f(b) - f(a)$. But this contradicts the fact given in the hint, so our assumption is false and $O_N$ must be finite for any $N \in \mathbb{N}$.
Since $O_n$ is finite for each $n \in \mathbb{N}$, then it has measure $0$.
Recall that the function $f$ is discontinuous at a point $x$ if and only if there exists some $n$ such that $o(f,x)>\frac{1}{n}$. This means that the set of points of discontinuity of $f$ is the union $\bigcup_{n=1}^\infty O_n$. Since a countable union of measure $0$ sets also has measure $0$, the set of points of discontinuity of $f$ has measure $0$.
|
H: $K_0(C(\mathbb{T}^{n})) \cong \mathbb{Z}^{2^{n-1}}$
I am new to this website and I have a question.
I want to show that $K_0(C(\mathbb{T}^{n})) \cong \mathbb{Z}^{2^{n-1}}$ but first I want to show that $C(\mathbb{T}^{n}) \cong C(\mathbb{T} \rightarrow C(\mathbb{T}^{n-1}))$, where $C(\mathbb{T}^{n})$ is the continuous functions on the $n$-torus.
Can anybody help me out?
AI: The algebras $C(\mathbb{T}^{n})$ and $C(\mathbb{T}^{n} , C(\mathbb{T}^{n-1}))$ are not isomorphic, as their spectra ($\mathbb T^n$ and $\mathbb T^n\times\mathbb T^{n-1}$ respectively) are not homeomorphic.
For computing $K_0(C(\mathbb{T}^{n}))$, cutting a copy of $\mathbb T^{n-1}$ out of $\mathbb T^n$ in a nice way yields an exact sequence
$$0\to C_0(\mathbb T^{n-1}\times\mathbb R)\to C(\mathbb T^n)\leftrightarrows C(\mathbb T^{n-1})\to 0$$
And thus
\begin{align*}
K_i(C(\mathbb T^n))&\cong K_i(C(\mathbb T^{n-1}))\oplus K_i(C_0(\mathbb T^{n-1}\times\mathbb R))\\
&\cong K_i(C(\mathbb T^{n-1}))\oplus K_{1-i}(C(\mathbb T^{n-1})).
\end{align*}
Now apply induction.
EDIT To see that $C(\mathbb T^n)$ and $C(\mathbb T,C(\mathbb T^{n-1}))$ are isomorphic, note that if $f\in C(\mathbb T^n)$ then for each $z\in\mathbb T$, the map $f_z:\mathbb T^{n-1}\to\mathbb C$ given by $f_z(z_1,\ldots,z_{n-1})=f(z,z_1,\ldots,z_{n-1})$ is in $C(\mathbb T^{n-1})$, and the map $z\mapsto f_z$ is continuous from $\mathbb T$ to $C(\mathbb T^{n-1})$. Now define a $*$-homomorphism $\varphi:C(\mathbb T,C(\mathbb T^{n-1}))\to C(\mathbb T^n)$ by
$$\varphi(f)(z_0,z_1,\ldots,z_{n-1})=f_{z_0}(z_1,\ldots,z_{n-1}).$$
This map is the (or one of the possible) desired isomorphism.
|
H: How do you find the interval in which a parametric equation will be traced exactly once
I have been all over the internet and I can't find an answer to what seems like a simple question. I want to be able to find the interval for a parametric equation so that it is only traced once. My equations are:
\begin{align}
x &= 11\cos(u) - 4\cos\left(\frac{11u}{2}\right)\\
y &= 11\sin(u) - 4\sin\left(\frac{11u}{2}\right)
\end{align}
After looking at the graph, I realize the answer is $4\pi$, but how would I solve this if I am not able to look at the graph. I have seen solutions where people check one loop at a time until they arrive back at the original starting point, but it always seems as if they know how long one loop is and to me it seems they are just taking arbitrary numbers. In other words, how would I know to check every $\frac{\pi}{4}$ versus every $10\pi$.
AI: What is the interval after which the parameterization $x = 11\cos(u) - 4\cos\left(\frac{11u}{2}\right),
y = 11\sin(u) - 4\sin\left(\frac{11u}{2}\right)$ repeats itself?
Formally, $4\pi$ is just the result of calculating the LCM of the periods of individual terms.
Recall that
period $T\left[\cos\left(\frac abx\right)\right]=2\pi\frac ba$
$LCM(\frac ab,\frac pq)=\frac{LCM(a,p)}{HCF(b,q)}$
The first term in both expressions repeats after $2\pi$ (i.e., it has a period of $2\pi$), and the second repeats after $\frac{4\pi}{11}$. So, after $4\pi$, both terms must repeat.
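You can also confirm the $4\pi$ answer numerically without looking at the graph; a small numpy sketch:
```python
import numpy as np

def point(u):
    x = 11 * np.cos(u) - 4 * np.cos(11 * u / 2)
    y = 11 * np.sin(u) - 4 * np.sin(11 * u / 2)
    return np.array([x, y])

u = np.linspace(0, 4 * np.pi, 1000)
print(np.allclose(point(u), point(u + 4 * np.pi)))   # True: the curve repeats after 4*pi
print(np.allclose(point(u), point(u + 2 * np.pi)))   # False: 2*pi is not a full period
```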
|
H: Complex numbers limits
$\lim\limits_{z\to\infty} \sqrt{z-2i} - \sqrt{z-i},$ where $z$ is a complex number.
How do I evaluate this?
I tried assuming $z = x+iy$ and evaluated $z-2i = x+ i(y-2)$ and $z-i = x + i(y-1)$, but after putting these values into the given expression I couldn't think of the next step at all.
AI: Typically, to find the limit of $\sqrt{a}- \sqrt{b}$ where both a and b go to infinity, one multiplies by the "unit fraction" $\frac{\sqrt{a}+ \sqrt{b}}{\sqrt{a}+ \sqrt{b}}$. Since $(\sqrt{a}- \sqrt{b})(\sqrt{a}+ \sqrt{b})= a- b$ the fraction becomes $\frac{a- b}{\sqrt{a}+ \sqrt{b}}$.
In this problem, $a= z- 2i$ and $b= z- i$, so $a- b= (z- 2i)-(z- i)= -i$. The fraction is now $\frac{-i}{\sqrt{z- 2i}+ \sqrt{z- i}}$. As $z$ goes to infinity, the denominator goes to infinity and, since the numerator is constant, the fraction goes to 0.
|
H: Prove that $f_x(1,1)=-f_y(1,1)$
I have a function $f(x,y) \in C^1$. I need to prove that if $f(x,x)=1$ for any real number $x$, then $f_x(1,1)=-f_y(1,1)$.
I tried to prove it using the definition of the partial derivative, but couldn't figure it out.
$$ f_x(1,1)=\lim\limits_{h \to 0} \frac{f(1+h,1)-f(1,1)}{h} = \lim\limits_{h \to 0} \frac{f(1+h,1)-1}{h}$$
And the same for $f_y(1,1)$.
How can I prove it?
AI: If $f$ is differentiable, the directional derivative along $(1,1)$ can be computed as
$$
\partial_{(1,1)} f(1,1) = f'_x(1,1) \cdot 1 + f'_y(1,1)\cdot 1.
$$
Since the directional derivative is zero ($f$ is constant along that direction), it must be true that $f'_x(1,1) + f'_y(1,1)=0.$
|
H: Trunk in a tree
Just to make things clear:
the author defines a trunk of a tree by formula in [...] in $4.3.1(a)$ in the snippet below.
Is it true that since $\eta_0\in T$ is maximal then for all $\nu\in T$ it holds vacuously that
$lg(\nu)\leq lg(\eta_0)$ ?
AI: No, the definition allows $\nu\in T$ with $\operatorname{\ell g}(\nu)>\operatorname{\ell g}(\eta_0)$. It imposes just two requirements on them:
$\nu\upharpoonright\operatorname{\ell g}(\eta_0)=\eta_0$, since $\operatorname{\ell g}\big(\nu\upharpoonright\operatorname{\ell g}(\eta_0)\big)\le\operatorname{\ell g}(\eta_0)$; and
there is a $\mu\in T$ such that $\operatorname{\ell g}(\mu)\le\operatorname{\ell g}(\nu)$, and $\mu\ne\nu\upharpoonright\operatorname{\ell g}(\mu)$.
The first follows from the fact that $\eta_0$ is the trunk of $T$, and the second follows from the fact that $\nu$ is longer than the trunk.
The terminology really is descriptive: every sequence in $T$ that is no longer than $\eta_0$ is an initial segment of $\eta_0$, and every longer sequence in $T$ has $\eta_0$ as an initial segment. All of the branching occurs above $\eta_0$.
|
H: Prove that the series $\sum_{n=1}^\infty {|a_n b_n|}$ and $\sum_{n=1}^\infty {(a_n + b_n)^2}$ converges
Prove that the series $\sum_{n=1}^\infty {|a_n b_n|}$ and $\sum_{n=1}^\infty {(a_n + b_n)^2}$ converges
If the series $\sum_{n=1}^\infty {a^2_n}$ and $\sum_{n=1}^\infty {b^2_n}$ converges.
I had thought that the part $\sum_{n=1}^\infty {(a_n + b_n)^2}$ does not necessarily converge, but
I am doubting the first condition. Can someone help me, please?
AI: If the series $\sum_{n=1}^{\infty}a^{2}_{n}$ and $\sum_{n=1}^{\infty}b^{2}_{n}$ converge, then $\sum_{n=1}^{\infty}(a_{n}+b_{n})^{2}$ converges due to the comparison test. Indeed, one has that
\begin{align*}
(a_{n} + b_{n})^{2} = a^{2}_{n} + 2a_{n}b_{n} + b^{2}_{n} \leq a^{2}_{n} + 2|a_{n}b_{n}| + b^{2}_{n} \leq 2(a^{2}_{n} + b^{2}_{n})
\end{align*}
where we have used the AM-GM inequality:
\begin{align*}
\frac{a^{2}_{n} + b^{2}_{n}}{2}\geq \sqrt{a^{2}_{n}b^{2}_{n}} = |a_{n}b_{n}|
\end{align*}
Similarly, the comparison test works for the other series, and we are done.
Hopefully this helps.
|
H: Derivative of $y = \log_{\sqrt[3]{x}}(7)$.
I've never dealt with a derivative of this type. My approach was $$y = \log_{\sqrt[3]{x}}(7) \iff 7 = (\sqrt[3]{x})^y.$$ Then,
$$\frac{d}{dx}(7) = \frac{d}{dx}\left(\sqrt[3]{x}\right)^y \Rightarrow (\sqrt[3]{x})^y = e^{\frac{y\ln(x)}{3}} $$
From here,
$0 = e^u\dfrac{du}{dx}$ and $u = \dfrac{y\ln(x)}{3}.$ Thus,
$$0 = \frac{du}{dx} = \frac{y}{3x} +\frac{\ln(x)}{3}\frac{dy}{dx}.$$
Which implies that $$\frac{dy}{dx}= \frac{-\log_{\sqrt[3]{x}}(7)}{x\ln(x)}.$$
Is this the correct derivative? Can I alternatively use $\log_{b}(a) = \dfrac{\ln(a)}{\ln(b)}$, with $b = \sqrt[3]{x}$ and $a=7$? In that case, I arrive at
$$\frac{dy}{dx}= \dfrac{-3\ln(7)}{x(\ln(x))^2}.$$
AI: Yes, you are right. Simplify as follows
$$y=\log_{\sqrt[3]{x}}(7)=\frac{\ln 7}{\ln (\sqrt[3]{x})}=\frac{\ln (7)}{\frac13\ln x}=\frac{3\ln (7)}{\ln x}$$
$$\therefore \frac{dy}{dx}=3\ln (7)\left(\frac{-1}{(\ln x)^2}\frac1x\right)=-\frac{3\ln (7)}{x(\ln x)^2}$$
|
H: What does w ∈ a, b* mean?
What does $w ∈ a, b^*$ mean?
The context is the language of an automaton, which is $L=\{w∈ a, b^* : |w|$ is even and the central symbols of $w$ are $aa\}$
I really don't understand what $w ∈ a, b^*$ means. I think it means that $w$ can be $a$ or $b^*$, but that makes no sense because $w$ has to have the central symbols $aa$.
[EDIT] I tried to apply the pumping lemma to show that the language is not regular. I chose the string $w=b^paab^p$ and I divided it as $x=b^r$, $y=b^s$ and $z=b^{p-r-s}aab^p$. For $i=0$ I get $xy^iz=xz=b^{p-s}aab^p$, so $|b^{p-s}|<|b^p|$ because $s>0$, but this proves nothing because $p$ could be 4 and $s$ could be 2, so the word could still be even-length. What can I do?
AI: It’s almost certainly missing a pair of curly braces and should be $$L=\big\{w\in\{a,b\}^*:|w|\text{ is even and the central symbols of }w\text{ are }aa\big\}\;.$$
Here $\{a,b\}^*$ is the set of all finite strings of symbols from the set $\{a,b\}$, i.e., the set of all finite strings composed exclusively of the letters $a$ and $b$, including the empty string. $L$ is the set of all such strings with even length and $aa$ as the middle two characters, so $L$ includes the strings $aa$, $aaaa$, $aaab$, $baaa$, $baab$, $aaaaaa$, $abaabb$, etc.
Added: For showing that $L$ is not regular using the pumping lemma, you were on the right track when you took $w=b^paab^p$, where $p$ is the pumping length. If $L$ were regular, you could write $w$ as $xyz$, where $|xy|\le p$, $|y|\ge 1$, and $xy^kz\in L$ for each $k\ge 0$. Since $|xy|\le p$, $xy$ must, as you saw, be a string of $b$s, say $x=b^r$ and $y=b^s$, where $r+s\le p$ and $s\ge 1$, so for any $k\ge 0$ we have $$xy^kz=b^{p+(k-1)s}aab^p\;.$$
Now take $k=2$: $xy^2z=b^{p+s}aab^p$. And $p+s>p$, so the word cannot have $aa$ at the centre: there is only one subword $aa$, and there are more characters before it than after it. (It’s true that if $s$ is odd, you can eliminate this word even more easily, since its length is then odd, but showing that the word doesn’t a central segment $aa$ does the job no matter what $s$ is.)
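If it helps to see the pumping step concretely, here is a tiny Python sketch (the pumping length $p=4$ and the particular split are illustrative choices only):

```python
def in_L(w):
    # membership test: w over {a,b}, even length, central two symbols equal "aa"
    n = len(w)
    return n % 2 == 0 and n >= 2 and w[n // 2 - 1: n // 2 + 1] == "aa"

p = 4                                                    # an illustrative pumping length
x, y, z = "", "b" * 2, "b" * (p - 2) + "aa" + "b" * p    # a split with |xy| <= p, |y| >= 1
for k in range(4):
    w = x + y * k + z                                    # pumped word b^(p+(k-1)*2) aa b^p
    print(k, w, in_L(w))                                 # only k = 1 (the original word) is in L
```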
|
H: Show that $\lim_{x\to 0^+} xf'(x)=0$.
Suppose $f$ is a continuous function on $[0,1]$. Suppose $f$ is differentiable on $(0,1)$ and its derivative is continuous on $(0,1)$. Then is it true that
$$\lim_{x\to 0^{+}} xf'(x)=0 \ ?$$
I only thought about functions like $x^{a}$ for $a>0$. It seems to be true. Indeed, in fact if we assume $f'$ is monotonically decreasing and non negative then $$0\le xf'(x) \le \int_0^x f'(t) \, dt = f(x)-f(0)$$
But I am not sure how to do it generally. Any help suggestions?
AI: Nope. Consider the function $$f(x) =\begin{cases}
x\sin(\frac{1}{x})\hspace{4mm}x > 0 \\
0 \hspace{17mm}x = 0\end{cases}$$
Then on $(0,1)$ we have $f'(x) = \sin(\frac{1}{x}) -\frac{1}{x}\cos(\frac{1}{x})$. Hence $$ \lim_{x\to0^+}xf'(x) = \lim_{x\to 0^+}x\sin\Big(\frac{1}{x}\Big)-\cos\Big(\frac{1}{x}\Big) = DNE$$
So the limit does not even exist.
|
H: Schur's lemma for finite-dimensional unitary representations
I am reading the book 'Representations of Linear Groups' by Rolf Berndt, and on page 19 they state the following theorem:
'Let $(\pi,\mathbb{C}^n)$ be a unitary matrix representation of a group $G$, i.e. $\pi(g) = A(g)$. Let $M\in GL(n,\mathbb{C})$ be a matrix commuting with all $A(g)$. Then $M$ is a scalar multiple of the unit matrix.'
So far, this is easy to understand. But in the proof they give the argument, that if $a_{i,j} = a_{j,i} = 0$ for all such matrices $A(g)$, then the representation is reducible. I do not see how this holds, there must be some argument why all $A(g)$ then share an invariant subspace, but I fail to find it.
AI: By the sound of it, you are happy that $\tilde a_{i,j}=\tilde a_{j,i}=0$ when the corresponding diagonal entries are different (here $D$ denotes a diagonalisation of $M$ and $\tilde A(g)$ the representation matrices written in that basis). What this means is that $\tilde A(g)$ is a block-diagonal matrix, with the same blocks for all $g\in G$. Thus there is either one block, in which case $M$ is a scalar matrix, or the representation can be written as a direct sum. The invariant subspaces are the eigenspaces of $D$.
|
H: Simplify $\frac{d}{dt}\int_x^t f(t,y)dy$
I am trying to simplify $\frac{d}{dt}\int_x^t f(t,y)dy$ as a part of a proof.
I am somewhat confused on how I can proceed with this. Do I define a function $g(t,y)$ such that $\frac{\partial g}{\partial y} = f(t,y)$ and then say $\frac{d}{dt}\int_x^t f(t,y)dy = \frac{d}{dt}(g(t,t) - g(t,x))$?
Or can I just say that $\frac{d}{dt}\int_x^t f(t,y)dy = \frac{d}{dt}(f(t,t) - f(t,x))$? If so, why can I say this?
Also note that I am trying to avoid using leibniz integral rule since it has not been covered yet.
AI: You can think of the integral as being a function of three parameters:
$$g(a,b,c) = \int_a^b f(c,y)\:dy$$
Thus the derivative you want can be derived from chain rule
$$\frac{d}{dt}g(a(t),b(t),c(t)) = \frac{\partial g}{\partial a}\frac{da}{dt} + \frac{\partial g}{\partial b}\frac{db}{dt} + \frac{\partial g}{\partial c}\frac{dc}{dt}$$
$$= 0 + f(t,t) +\int_x^t \frac{\partial f}{\partial t}(t,y)\:dy $$
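If a sanity check is useful, the formula can be verified symbolically for a concrete integrand (a sketch; the test function $e^t\sin y$ is an arbitrary choice):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
f = sp.exp(t) * sp.sin(y)                    # arbitrary test integrand f(t, y)

g = sp.integrate(f, (y, x, t))               # g(t) = integral of f(t, y) dy from x to t
lhs = sp.diff(g, t)                          # d/dt of the integral

# right-hand side of the chain-rule formula: f(t, t) + integral of df/dt dy from x to t
rhs = f.subs(y, t) + sp.integrate(sp.diff(f, t), (y, x, t))

print(sp.simplify(lhs - rhs))                # prints 0
```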
|
H: What is the error solving this problem about instantaneous rate of change?
I have the following problem:
A hot air balloon rising straight up from a level field is tracked by
a range finder $150$ meters from the liftoff point. At the moment that
the range finder’s elevation angle is $\frac{\pi}{4}$, the angle is
increasing at the rate of $0.14$ rad/min. How fast is the balloon
rising at that moment?
My development was:
Let $h$ the altitude of the hot air balloon, $\theta$ the angle.
Using trigonometry, I got: $\sin(\theta) \cdot 150\sqrt{2} = h$, where $150\sqrt{2}$ is the hypotenuse.
Using implicit differentiation with respect to time $t$, I get:
$\frac{d}{dt}\sin(\theta) \cdot 150\sqrt{2}=\frac{d}{dt}h$
Since $\sin(\theta)$ is a composition of the functions $\sin(x)$ and $\theta(t)$ I need to use the chain rule, so I have: $\cos(\theta) \cdot \frac{d}{dt}\theta \cdot 150\sqrt{2}=\frac{d}{dt}h \implies \frac{d}{dt}h=21$.
But the correct answer is $42$ that is exactly the double of my answer, what is wrong with my development? Thanks in advance.
AI: When you write $\sin(\theta) \cdot 150\sqrt{2} = h$ you are implicitly assuming that the hypotenuse isn't changing, which is false. If you were going to set it up like this you would need to say $\sin(\theta)\cdot c = h$ where here $c$ is the hypotenuse. You can then use the fact that $c^2 = 150^2 + h^2$ to eliminate $c$ and then proceed with differentiating with respect to $t$.
I should also note that another route you could take is to say $\tan(\theta) = \frac{h}{150}$ and go from there. In that case you wouldn't need to worry about the hypotenuse.
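For completeness, carrying the tangent route through gives the expected value (a quick sketch of the computation):
$$\tan\theta=\frac{h}{150}\;\Longrightarrow\;\sec^2\theta\,\frac{d\theta}{dt}=\frac{1}{150}\frac{dh}{dt}\;\Longrightarrow\;\frac{dh}{dt}=150\,\sec^2\!\Big(\frac{\pi}{4}\Big)\cdot 0.14=150\cdot 2\cdot 0.14=42\ \text{m/min}.$$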
|
H: Show that $X= \ker((u-\lambda)^p) \oplus (u-\lambda)^p(X)$ if $u$ is compact.
Consider the following theorem in Murphy's book "C*-algebras and operator theory"
Can someone explain why the marked lines are true? I think this must be a matter of pure linear algebra but I can't work out the specifics. That is, my question: why is
$$X= \ker((u-\lambda)^p) + (u-\lambda)^p(X)$$Thanks!
AI: This is from dimensional reasons. You have that the kernel and the image intersect trivially, and that the complement of the image has dimension equal to the defect, which is the same as the dimension of the kernel.
More concretely, since $\ker(u-\lambda)^p$ and $(u-\lambda)^p(X)$ intersect only in $0$, the map
$$\ker(u-\lambda)^p\to X/(u-\lambda)^p(X)$$
is injective, but this is an injective map between vector spaces of the same finite dimension, hence a vector space isomorphism. Thus $\ker(u-\lambda)^p$ is a complement of $(u-\lambda)^p(X)$.
|
H: Find the function $f(x)$ knowing the volume of the solid of revolution for $a \gt 1$
Find the function $f(x)$ knowing that the volume of the solid of revolution of $y=f(x)$ (for $0\le x\le a$) around the x-axis is $V=a^2 -a$ for every $a \gt 1$, with $f(x)$ continuous and $f(x) \gt 0$.
Basically what I tried to do was use the mean value theorem for integrals by doing:
$$V=\pi\int_{0}^{a} (f(x))^2 dx = a^2 -a = a(a-1)$$
$$V=\int_{0}^{a} (f(x))^2 dx = \frac{a(a-1)}{\pi}$$
Then, by the Mean Value Theorem for Integrals,
$$\int_{0}^{a} (f(x))^2 dx = (f(c))^2(a-0) = (f(c))^2\,a = \frac{a(a-1)}{\pi}$$
and so we have that
$$(f(c))^2 = \frac{(a-1)}{\pi}$$ and
$$(f(c)) = \sqrt{\frac{(a-1)}{\pi}}$$
Is this the way to do it? Anyways, I'm stuck here. Any Hints? Thanks.
AI: Differentiate both sides with respect to $a$ (using the FTC):
$$\pi(f(a))^2=\frac{d}{da}(a^2-a)=2a-1$$
so, since $f(x)>0$ for all $x$, $$f(x)=\sqrt{\frac{2x-1}{\pi}}.$$
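As a quick sanity check (at least formally, since the integrand $2x-1$ is negative on $[0,\tfrac12)$, something the problem statement glosses over), this $f$ does reproduce the given volume:
$$\pi\int_0^a \big(f(x)\big)^2\,dx=\pi\int_0^a\frac{2x-1}{\pi}\,dx=\Big[x^2-x\Big]_0^a=a^2-a.$$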
|
H: Assume that $f$ is holomorphic in $B(0,R)$ and that $|f(z)|\leq e^{-\frac{1}{|z|}} $ for all $0<|z|
Assume that $f$ is holomorphic in $B(0,R)$ and that $|f(z)|\leq e^{-\frac{1}{|z|}} $ for all $0<|z|<R$ then $f=0$
Not sure how to proceed, usually we try Liouville but $ e^{-\frac{1}{|z|}}$ has an essential singularity, so that will not work. Another method uses Cauchy inequality, but that is certainly not helpful here due to Little Picard theorem. Any hints/solutions appreciated.
AI: The given estimate implies that $f(0) = 0$. The idea is that for $z \to 0$, the upper bound $e^{-1/|z|}$ decreases so fast to zero that $f$ cannot have a zero of finite multiplicity at $z=0$.
Concretely: If $f$ is not identically zero then $f(0) = 0$ with some multiplicity $k \ge 0$, i.e.
$$
a_k = \lim_{z \to 0} \frac{f(z)}{z^k}
$$
exists and is not zero. But
$$
\left|\frac{f(z)}{z^k} \right| \le \frac{e^{-1/|z|}}{|z|^k} \to 0
$$
for $z \to 0$ and any non-negative integer $k$, which is a contradiction.
The last limit is zero because
$$
\lim_{r \to 0 } \frac{e^{-1/r}}{r^k} = \lim_{x \to \infty }\frac{x^k}{e^x} \le \lim_{x\to \infty }\frac{x^k}{x^{k+1}/(k+1)!} = 0 \, ,
$$
i.e. because the exponential function grows faster than any polynomial.
One can also use the Cauchy estimates for the derivatives: If $f(z) = \sum_{n=0}^\infty a_n z^n$ is the Taylor series of $f$ in $B(0, R)$ then
$$
|a_n| \le \frac{1}{r^n} \max_{|z|=r} |f(z)| \le \frac{e^{-1/r}}{r^n}
$$
for all $n$ and $0 < r < R$. As above, the right-hand side tends to zero for $r \to 0$, so that all $a_n = 0$.
|
H: Converging / Diverging sum with a constant power:
I need to determine whether this series diverges, converges, or converges conditionally, but I am pretty sure it converges to a value:
$$\sum_{k=1}^{\infty} \frac{(1+\frac{1}{k})^{k^a}}{k!}$$
For some constant:
$a >0 , a \in \mathbb{R}$
I tried to prove it using:
$\frac{(1+\frac{1}{n})^{k^a}}{k!} \leq \frac 1k \rightarrow \frac{k \cdot (1+\frac{1}{n})^{k^a}}{(k)!} \leq 1 \rightarrow \frac{(\frac{k+1}{k})^{k^a}}{(k-1)!} \rightarrow$ now I divide both by $k^{k^a}$ and get:
$\frac{(k+1)^{k^a}}{k^{k^a}} \cdot \frac{1}{(k-1)!}$ and I check the $\text{limit}$ as the first term in this product $k \rightarrow \infty$ and get $0$
And thus it is divergent because $\frac{(1+\frac 1k)^{k^a}}{k!} \leq \frac 1n$ (Harmonic divergent)
How can I solve this for any given $a \in \mathbb{R} , ~~ a > 0 $ ?
Thank you very much!
AI: For $a \leq 1$ we have $k^a\le k$, so the numerator satisfies
$$
(1+\frac{1}{k})^{k^a} \le (1+\frac{1}{k})^{k} < e
$$
because $a_n = (1+\frac{1}{n})^n$ is an increasing sequence that converges to $e$. So the series converges, by comparison with $\sum_k \frac{e}{k!}$.
EDIT: For $a=2$ (and in fact for any $1<a\le 2$, since then $k^a\le k^2$), we have the upper bound
$$
(1+\frac{1}{k})^{k^a}\le(1+\frac{1}{k})^{k^{2}}<e^k,
$$
so the series converges by comparison with $\sum_k \frac{x^k}{k!}$ (taken at $x=e$), which converges for every $x$.
For $a>2,$ consider ratio test with
$$
a_k = \frac{(1+\frac{1}{k})^{k^a}}{k!}
$$
Then $\frac{a_{k+1}}{a_k}=\frac{1}{k+1}\exp\Big((k+1)^a\ln\big(1+\tfrac{1}{k+1}\big)-k^a\ln\big(1+\tfrac1k\big)\Big)$. Expanding the logarithms with their Taylor series, the exponent behaves like $(k+1)^{a-1}-k^{a-1}\sim(a-1)k^{a-2}$, so
$$
\frac{a_{k+1}}{a_k}\approx\frac{e^{(a-1)k^{a-2}}}{k+1}\longrightarrow\infty\quad\text{for } a>2,
$$
and the series diverges (indeed the general term itself blows up).
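As an informal numerical cross-check of the three regimes above (a sketch; the cut-off $k=199$ and the sample exponents are arbitrary choices):

```python
import math

def log_term(k, a):
    # natural log of the k-th term (1 + 1/k)^(k^a) / k!
    return k ** a * math.log1p(1.0 / k) - math.lgamma(k + 1)

for a in (1.5, 2.0, 2.5):
    print(a, log_term(199, a))
# for a = 1.5 and a = 2.0 the log of the general term is hugely negative
# (the terms vanish fast, consistent with convergence); for a = 2.5 it is
# large and positive, so the terms do not even tend to 0 and the series diverges
```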
|
H: Why a subspace is assumed instead of a vector space?
Why does the following theorem start with "Let $\{u_1, ..., u_p\}$ be an orthogonal basis for a subspace $W$ of $\mathbb{R}^n$" instead of "Let $\{u_1, ..., u_p\}$ be an orthogonal basis for a vector space $W$"? In other words, what is the point of starting with a subspace of a vector space instead of a vector space?
Thanks
AI: Note that the inner product (dot product) of $\Bbb R^n$, and in particular orthogonality are used in the statement, which are not given in an abstract vector space.
Well, indeed, one could say 'let $W$ be an inner product space with orthogonal basis $u_1,\dots,u_p$', but probably abstract inner product spaces were not yet introduced in the course/book.
|
H: Suppose $A$ is Artinian and commutative with 1. If $J(A)M=M$, then $M=\{0\}$. $J(A)$ is the Jacobson radical of A.
I just want to know if the following proof is valid for the above theorem. (Note: $M$ is a $A$-module)
Sketch: Since $A$ is Artinian we know that $J(A)$ is nilpotent, i.e. there exists $k\geq 1, k\in \mathbb{Z}$ such that $J(A)^k=\{0\}$. Hence, $\{0\}=\{0\}M=J(A)^kM=J(A)^{k-1}M=\dots=J(A)M=M$. Is this valid? Does it also mean that Nakayama's Lemma holds for non-finitely generated modules over Artinian rings?
AI: It clearly shows that (this formulation of) Nakayama’s lemma holds for all modules, not just finitely generated ones, over a ring with nilpotent Jacobson radical.
The formulation of Nakayama’s lemma you gave is also still true for noncommutative rings.
|
H: Understanding Lang's proof that every closed and bounded set is compact
In his book Serge Lang provides the following proof that every bounded set that is closed is also compact.
" Let S be a closed and bounded set. This implies that there exists $B$ s.t. $|z|\leq B$ where $z\in s$. For all $z$ write $z=x+iy$, then $|x|\leq B$ and $|y| \leq B$.
Let ${z_n}$ be a sequence in $S$, and write $z_n=x_n+iy_n$.
There is a subsequence {$z_{n_1}$} s.t. {${x_{n_1}}$} converges to a real number $a$ and similarly there is a subsequence {${z_{n_2}}$} s.t. {${y_{n_2}}$} converges to a real number $b$.
Then {$z_{n_2}= x_{n_2}+i{y_2}$} converges to $a+ib$ which implies that $S$ is compact."
What I don't understand is two things.
How does proving $\{x_{n_1}\}$ converges to $a$ prove that $\{x_{n_2}\}$ converges to $a$ as well?
How is it implied that there exist such subsequences such that $\{x_{n_1}\}$ and $\{y_{n_2}\}$ converge to these values? After all, isn't this what we are supposed to prove?
If anybody could shine some light, I will greatly appreciate it.
AI: This is indeed a mistake. You should take a convergent subsequence of $\{y_{n_1}\}$, not just a convergent subsequence of $\{y_n\}$. In that case $\{x_{n_2}\}$ is a subsequence of $\{x_{n_1}\}$ and hence it converges to $a$ as well.
As for your second question, I believe the author assumes you already know that a bounded sequence of real numbers has a convergent subsequence (the Bolzano-Weierstrass theorem), and now he uses it to prove this property holds for complex sequences as well.
|
H: Prove that the series $\sum_{n=1}^\infty \frac{1}{n(n + a)}$ converges
I was trying to solve this problem but I got stuck. I used the Ratio Test, but it is inconclusive in this case; I need help...
Prove that the series $\sum_{n=1}^\infty \frac{1}{n(n + a)}$ converges
Calculate the sum of the series
AI: hint
Let $$u_n=\frac{1}{n(n+a)}$$
and
$$v_n=\frac{1}{n^2}$$
As you know
$$\lim_{n\to+\infty}\frac{u_n}{v_n}=1$$
$$\implies \frac{u_n}{v_n}\le 2 \text{ for large enough } n$$
$$\implies 0< u_n\le 2v_n$$
but $$\sum v_n \text{ converges}$$
thus
$$\sum u_n \text{ converges}$$
For the sum, observe that
$$au_n = \frac 1n - \frac{1}{n+a}$$
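To finish the sum, assuming $a$ is a positive integer (which is what makes the hinted decomposition telescope cleanly):
$$a\sum_{n=1}^{N}u_n=\sum_{n=1}^{N}\Big(\frac1n-\frac1{n+a}\Big)=\sum_{n=1}^{a}\frac1n-\sum_{n=N+1}^{N+a}\frac1n\;\xrightarrow[N\to\infty]{}\;H_a,$$
so $\displaystyle\sum_{n=1}^\infty\frac{1}{n(n+a)}=\frac{H_a}{a}$, where $H_a=1+\frac12+\cdots+\frac1a$.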
|
H: Index of intersection of two subgroups
Let $H$ and $K$ be subgroups of $G$ with indices $3$ and $5$ in $G$.
I need to show that the index of $H\cap K$ is a multiple of $15$.
ATQ
$\frac{|G|}{|H|}$ = 3
So $|G|$ is a multiple of $3$.
Similarly, $|G|$ is a multiple of $5$.
So we conclude that $|G|$ is a multiple of $15$.
Now $H\cap K$ is a subgroup of $G$. So by Lagrange's theorem , it will divide $|G|$.
From here, how can I conclude that its order is a multiple of $15$?
AI: Hint. $[A : C] = [A : B] [B : C]$ for (finite) groups $A$ with subgroups $B ⊆ A$ and $C ⊆ B$.
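Spelling the hint out (a sketch, assuming all the indices involved are finite): taking $A=G$, $B=H$, $C=H\cap K$ gives $[G:H\cap K]=[G:H]\,[H:H\cap K]=3\,[H:H\cap K]$, so $3\mid[G:H\cap K]$; taking $B=K$ instead gives $5\mid[G:H\cap K]$. Since $\gcd(3,5)=1$, it follows that $15\mid[G:H\cap K]$.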
|
H: Proof that a closed set contains all the limit points
In my general topology textbook there is the following proposition:
Let $A$ be a subset of a topological space $(X, τ)$. Then $A$ is closed in $(X, τ )$ if and only if $A$ contains all of its limit points.
And then they give the following proof:
Assume that $A$ is closed in $(X, \tau)$. Suppose that $p$ is a limit point of $A$ which belongs to $X \setminus A$. Then $X \setminus A$ is an open set containing the limit point $p$ of $A$. Therefore $X \setminus A$ contains elements of $A$ (1). This is clearly false and so we have a contradiction to our supposition. Therefore every limit point of $A$ must belong to $A$
Conversely, assume that $A$ contains all of its limit points. For each $z \in X \setminus A$, our assumption implies that there exists an open set $U_z \ni z$ such that $U_z \cap A = \emptyset$; that is, $U_z \subseteq X \setminus A$. Therefore $X \setminus A = \bigcup_{z \in X \setminus A} U_z$. So $X \setminus A$ is the union of open sets and hence is open. Consequently $A$ is closed.
My question is in the marker (1). How do they conclude that $X \setminus A$ contains an element of $A$?
AI: The definition of a limit point that your text uses (I'd call that notion an adherence point BTW) going by the second part of the proof:
$p$ is a limit point of $A$ iff every open set that contains $p$, intersects $A$.
But in your first part $X\setminus A$ is an open set (because we assumed $A$ is closed) that contains $p$, and we also assumed that $p$ is a limit point of $A$, so the definition can be applied to the open set $X\setminus A$ to reach that conclusion.
|
H: Limit of a monotone sequence of measurable functions is measurable.
I'm currently studying measure theory as a non mathematician out of self interest but I've come across a typical proof style that I find rather hard. Generally it's showing that a typical set can be written as a countable union of measurable sets. An example I'm having trouble with is below.
Theorem
Let $(X, \mathcal{A})$ be a measurable space and let $(f_n:n \in \mathbb{N})$ be a monotone sequence of of extended real valued $\mathcal{A}$- measurable functionson a set $D \in \mathcal{A}$. Then $\lim_{n \rightarrow \infty}f_n$ exists on $D$ and is $\mathcal{A}$- measurable on $D$.
Proof
For the increasing case. Usual idea in that we would like to write the set $\{D: \lim_{n \rightarrow \infty} f_n > \alpha\}$ as a union of measurable sets. The proof states that
\begin{equation}
\{D: \lim_{n \rightarrow \infty} f_n > \alpha\} = \bigcup_{n \in \mathbb{N}} \{D: f_n > \alpha\} \in \mathcal{A}
\end{equation}
However, I'm not particularly confident in my ability to show the equality above. The book gives the hint that $\lim_{n \rightarrow \infty} f_n > \alpha $ iff $f_n > \alpha$ for some $n \in \mathbb{N}$.
I've tried to prove this as follows:
Write $f=\lim_{n \rightarrow \infty} f_n$. Assume that $f \leq \alpha$. Since the pointwise limit $f_n(x)$ exists for each $x \in D$ by the monotone convergence theorem. Therefore, for all $n > K\in \mathbb{N}$ we have that
\begin{equation}
|f-f_n|< \epsilon
\end{equation}
Or
\begin{equation}
-\epsilon + f_n < f < f_n + \epsilon
\end{equation}
In particular, $f > f_n - \epsilon$ so $f \geq f_n > \alpha$ which is a contradiction. Hence $f_n > \alpha$ for some $n \in \mathbb{N}$ $\implies$ $\lim_{n \rightarrow \infty} f_n > \alpha$. I'm not so sure about the other case.
As for showing the sets are equal I'm quite happy that $x \in \bigcup_{n \in \mathbb{N}} \{D: f_n > \alpha\}$ implies that $x \in \{D: f_n > \alpha\}$ for some $n$ which from the hint above implies that $x \in \{D: \lim_{n \rightarrow \infty} f_n > \alpha\}$ hence $ \bigcup_{n \in \mathbb{N}} \{D: f_n > \alpha\} \subset \{D: \lim_{n \rightarrow \infty} f_n > \alpha\}$.
If you could provide any hints or advice I would be very grateful.
AI: It is a standard result in calculus that weak inequalities are preserved in a limit. Suppose $x\in\{D: f>\alpha\}$. If we had $x\notin\cup_{n=1}^\infty \{D: f_n>\alpha\}$ then it would mean that we have $f_n(x)\leq\alpha$ for all $n\in\mathbb{N}$. But then passing to the limit this would imply $f(x)\leq\alpha$, a contradiction.
Other direction: suppose $x\in\cup_{n=1}^\infty \{D: f_n>\alpha\}$. Then there exists $n_0\in\mathbb{N}$ such that $f_{n_0}(x)>\alpha$. By monotonicity this implies we have $f_n(x)\geq f_{n_0}(x)>\alpha$ for all $n\geq n_0$. Again, since weak inequalities are preserved in a limit we conclude that $f(x)=\lim_{n\to\infty} f_n(x)\geq f_{n_0}(x)>\alpha$. So $x\in\{D: f>\alpha\}$.
|
H: A player rolls four 20-sided dice, takes the lowest value, ignores the rest. What is the probability of this value being at least 7?
I'm designing a tabletop game, and I need to figure out how to calculate a few probabilities:
Roll 3 20-sided dice, take the highest value. What is the probability of it being 7 or higher? 15 or higher?
Roll 4 20-sided dice, take the highest value. What is the probability of it being 7 or higher? 15 or higher?
Roll 3 20-sided dice, take the lowest value. What is the probability of it being 7 or higher? 15 or higher?
Roll 4 20-sided dice, take the lowest value. What is the probability of it being 7 or higher? 15 or higher?
How can I do this? Could you explain to me how this works, or even better - give me a simple formula?
AI: If you roll one $20$-sided die, then the probability that it is $7$ or higher is $\frac{14}{20}$. In general, to get a $k$ or higher, the probability is $\frac{21 - k}{20}$.
If you have $n$ $20$-sided dice and you take the highest value, then the probability that this max is $7$ or higher is the probability that at least one die is also $7$ or more. The complement is that all of the dice are $6$ or less, so we have:
$$
1 - \left( \frac{6}{20} \right)^n
$$
Likewise, if you have $n$ $20$-sided dice and you take the lowest value, then the probability that this min is $7$ or higher is the probability that all of the dice are $7$ or higher, so we have:
$$
\left( \frac{14}{20} \right)^n
$$
So to generalize...
Roll $n$ $20$-sided dice, take the highest value. What is the probability of it being $k$ or higher?
$$
1 - \left( \frac{k - 1}{20} \right)^n
$$
Roll $n$ $20$-sided dice, take the lowest value. What is the probability of it being $k$ or higher?
$$
\left( \frac{21 - k}{20} \right)^n
$$
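If you want to double-check the formulas, a brute-force enumeration over all $20^n$ rolls agrees with them (a sketch in Python; the check is restricted to $n=3,4$ and $k=7,15$ to keep it quick):

```python
import itertools
from fractions import Fraction

def exact(n, k, use_max):
    """Exact probability, by enumerating all 20**n rolls, that the highest
    (use_max=True) or lowest (use_max=False) of n d20s is at least k."""
    hits = total = 0
    for roll in itertools.product(range(1, 21), repeat=n):
        total += 1
        value = max(roll) if use_max else min(roll)
        hits += value >= k
    return Fraction(hits, total)

def formula(n, k, use_max):
    if use_max:
        return 1 - Fraction(k - 1, 20) ** n
    return Fraction(21 - k, 20) ** n

for n in (3, 4):
    for k in (7, 15):
        for use_max in (True, False):
            assert exact(n, k, use_max) == formula(n, k, use_max)
print("formulas match brute force for n in {3,4} and k in {7,15}")
```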
|
H: Cubic Spline Interpolation - Constructing the Matrix
Interpolate a cubic spline between the three points $(0, 1), (2, 2) \text{ and } (4, 0).$
I'm trying to understand how to interpolate a given set of points using cubic splines with the help of this solved example. I don't quite get how they arrived at the matrix shown in [s11]. I'm aware of the conditions that have to be imposed so that we don't get an underdetermined linear equation system, but I'm not sure how these equations look like (why are there $8$ unknowns in each equation?) Can someone explain in detail how to get these equations?
AI: Okay let's consider the [s11].
$$\begin{pmatrix}
0&0&0&1&0&0&0&0\\
8&4&2&1&0&0&0&0\\
0&0&0&0&8&4&2&1\\
0&0&0&0&64&16&4&1\\
12&4&1&0&-12&-4&-1&0\\
12&2&0&0&-12&-2&0&0\\
0&2&0&0&0&0&0&0\\
0&0&0&0&24&2&0&0
\end{pmatrix}
\begin{pmatrix}
\delta_1 \\
\gamma_1 \\
\beta_1 \\
\alpha_1 \\
\delta_2 \\
\gamma_2 \\
\beta_2 \\
\alpha_2
\end{pmatrix}
=\begin{pmatrix}
1\\
2\\
2\\
0\\
0\\
0\\
0\\
0
\end{pmatrix}
$$
You know that matrices are multiplied "row by column" -- a row from the first matrix by a column from the second one and then sum.
The first row corresponds to $\alpha_1=1$ i.e. [s3], the second row is [s4], then [s5], [s6], the 5th row is certainly [s7] in the form $p_1'(2)-p_2'(2)=0$, then [s8], the 7th row is [s9], and [s10] is clearly $24\delta_2+2\gamma_2=0$, i.e. $(\delta_2x^3+\gamma_2x^2+\beta_2x+\alpha_2)''(4)=0$.
Does it make sense now or do I have to present the equations in an explicit form with all $\delta_1,\,\ldots,\,\alpha_2$? Thanks.
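If it helps, here is a quick numerical cross-check of the system (a sketch with numpy; the coefficient ordering follows $\delta_1,\gamma_1,\beta_1,\alpha_1,\delta_2,\gamma_2,\beta_2,\alpha_2$ as above):

```python
import numpy as np

A = np.array([
    [ 0, 0, 0, 1,   0,  0,  0, 0],   # p1(0) = 1
    [ 8, 4, 2, 1,   0,  0,  0, 0],   # p1(2) = 2
    [ 0, 0, 0, 0,   8,  4,  2, 1],   # p2(2) = 2
    [ 0, 0, 0, 0,  64, 16,  4, 1],   # p2(4) = 0
    [12, 4, 1, 0, -12, -4, -1, 0],   # p1'(2)  = p2'(2)
    [12, 2, 0, 0, -12, -2,  0, 0],   # p1''(2) = p2''(2)
    [ 0, 2, 0, 0,   0,  0,  0, 0],   # p1''(0) = 0  (natural boundary)
    [ 0, 0, 0, 0,  24,  2,  0, 0],   # p2''(4) = 0  (natural boundary)
], dtype=float)
rhs = np.array([1, 2, 2, 0, 0, 0, 0, 0], dtype=float)

d1, g1, b1, a1, d2, g2, b2, a2 = np.linalg.solve(A, rhs)
p1 = lambda u: d1*u**3 + g1*u**2 + b1*u + a1
p2 = lambda u: d2*u**3 + g2*u**2 + b2*u + a2
print(p1(0), p1(2), p2(2), p2(4))   # prints 1, 2, 2, 0: the spline interpolates the data
```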
|
H: Help to solve $y'=y$, building exp function
I come to ask for help building the exponential function as the solution to $y'=y$.
This question is different from :
Prove that $C\exp(x)$ is the only set of functions for which $f(x) = f'(x)$
Since I would like help to prove it using the following arguments :
show that the solution should verify : $f(a+b)=f(a)f(b)$
show that $f(x)$, for any $x$ in $\Bbb R$, can be written as $f(x)=c\, a^x$.
show that if the function value is $1$ at $0$, then using a numerical tool we will be able to find the value of the constant $a$, which we denote $e$.
For the moment here are my ideas :
no idea – this is where I need the most help
prove it for naturals, rationals then all real numbers using density arguments.
using Euler's method, I can show that $a$ is given by $f(1) = \lim_{n\to\infty} (1+1/n)^n$
As you can see here, the computation will tend to $e$:
https://www.freecodecamp.org/news/eulers-method-explained-with-examples/
Many thanks, I'll appreciate your help
G
AI: A hint for 1: consider the function $g$ defined as
$$g(x)=\frac{f(a+x)}{f(x)}$$
and calculate $g'(x)$. What can you conclude?
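In case it helps to see the hint carried out (a sketch): since $f'=f$,
$$g'(x)=\frac{f'(a+x)f(x)-f(a+x)f'(x)}{f(x)^2}=\frac{f(a+x)f(x)-f(a+x)f(x)}{f(x)^2}=0,$$
so $g$ is constant, and $g(x)=g(0)=\dfrac{f(a)}{f(0)}$; if $f(0)=1$ this is exactly $f(a+x)=f(a)f(x)$, i.e. your step 1. (Dividing by $f(x)$ is legitimate: $\big(f(x)f(-x)\big)'=f'(x)f(-x)-f(x)f'(-x)=0$, so $f(x)f(-x)=f(0)^2$, which is nonzero once $f(0)=1$ as in your step 3, and then $f$ never vanishes.)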
|
H: Stuck on a particular step of finding the integral closure of $\mathbb Z$ in $\mathbb Q[\sqrt{n}]$.
Here, $n\in \mathbb Z$ is square-free, $R$ is the integral closure of $\mathbb Z$ in $\mathbb Q[\sqrt{n}]$, and $\alpha = a+b\sqrt{n}$.
I have shown that $a \in \frac{1}{2}\mathbb Z$ statement.
I have shown the if and only if statement.
I have shown that if $a=\frac{1}{2}$ then $b \in \frac{1}{2}\mathbb Z$ statement.
But I am stuck on the last sentence. What do we need to do for this?
AI: This is just asserting that $1/2$ is not in $R$, which is clear because it is not the solution of a monic polynomial equation with coefficients in $\mathbb Z$, i.e. it is not integral over $\mathbb Z$.
|
H: Prove that $\int_{x}^{1} \frac{1}{1+ t^2}\,dt = \int_{1}^{\frac{1}{x}} \frac{1}{1+ t^2}\,dt$ for $x \gt 0$
Prove that $$\int_{x}^{1} \frac{1}{1+ t^2}\,dt = \int_{1}^{\frac{1}{x}} \frac{1}{1+ t^2}\,dt $$ for $x \gt 0$
Basically what I did was to form the function $$f(x)=\int_{x}^{1} \frac{1}{1+ t^2}\,dt - \int_{1}^{\frac{1}{x}} \frac{1}{1+ t^2}\,dt$$ and then I differentiated using the FTC, which gives
$$(\frac{1}{1+(\frac{1}{x})^2})\frac{1}{x^2} - \frac{1}{1+x^2} = 0$$ after doing some algebra.
So then I know that the function I made has to be a constant $\forall x \in \Bbb R$, but the problem is that the equality between the integrals is valid only for $x \gt 0$.
I know that the function I made changes value at $0$, but I don't know how to prove that it changes at $0$ and that for $x \gt 0$ the value is the same ($0$). Thanks in advance.
AI: Let $A = \Bbb{R}\setminus\{0\}$. Then, the function you defined is $f:A \to \Bbb{R}$. This function has the property that for every $x\in A$, $f'(x) = 0$. However, from this, you cannot conclude that $f$ is constant. All you can conclude is that $f$ is constant on each connected component of $A$ (look carefully at the proof of this theorem and try to find where the connectedness assumption comes into play). In this case, there are two connected components, namely $A_- = (-\infty, 0)$ and $A_+= (0,\infty)$. So, all you know is that $f|_{A_+}$ and $f|_{A_-}$ are constant. To evaluate what is the constant value of $f|_{A_+}$, simply evaluate at $x=1$, and you'll see that $f|_{A_+} = 0$.
However, there is no reason at all to expect that $f|_{A_-} = 0$. But since $f|_{A_-}$ is constant and $-1\in A_-$, we can evaluate this constant as follows: for every $x\in A_-$, we have
\begin{align}
f(x) &= f(-1) \\
&=\int_{-1}^1\dfrac{dt}{1+t^2} - \int_1^{-1}\dfrac{dt}{1+t^2} \\
&= 2 \int_{-1}^1 \dfrac{dt}{1+t^2} \\
&= 4\int_0^1 \dfrac{dt}{1+t^2} \\
&= 4 \arctan(t)\bigg|_0^{1} \\
&= \pi
\end{align}
Thus, $f$ is not a constant function. It is in fact given by
\begin{align}
f(x) &=
\begin{cases}
0 & \text{if $x\in A_+$} \\
\pi & \text{if $x \in A_-$}
\end{cases}
\end{align}
|
H: Exercise 4.16 in Brezis' Functional Analysis (Counterexample)
The exercise 4.16 in the Brézis book - Functional Analysis, Sobolev Spaces and PDE's, is as follows:
Let $1<p<\infty$. Let $(f_n)$ be a sequence in $L^p(\Omega)$ such that
(i) $f_n$ is bounded in $L^p(\Omega)$.
(ii) $f_n \rightarrow f$ a.e. on $\Omega$.
Prove that $f_n \rightharpoonup f$ weakly $\sigma(L^p,L^{p'})$;
Same conclusion if assumption (ii) is replaced by
(ii') $\|f_n - f\|_1\rightarrow 0$.
Assume now (i), (ii), and $|\Omega|<\infty$. Prove that $\|f_n -f\|_{q}\rightarrow 0$ for every $q$ with $1\leq q<p$.
My question: Is it possible to build a sequence satisfying (i), (ii), and $|\Omega|<\infty$ and $\|f_n -f\|_q \not\rightarrow 0$ for $q=p$, i.e. $\|f_n -f\|_p \not\rightarrow 0$?
AI: Yes. On $(0,1)$ with Lebesgue measure take $f_n(x)=\sqrt n$ for $x<\frac 1 n$ and $0$ for $x \geq \frac 1 n$. Take $f=0$ and $p=2$.
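Indeed, $\|f_n\|_2^2=n\cdot\frac1n=1$ for every $n$, so (i) holds; for each fixed $x\in(0,1)$ one has $f_n(x)=0$ as soon as $\frac1n<x$, so $f_n\to 0$ a.e. and (ii) holds with $f=0$; yet $\|f_n-f\|_2=1\not\to0$.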
|
H: Borel probability measure vs Probability measure
Is there a difference between these two terminologies?
space of all Borel probability measure on $\mathbb R^n$ or some complete, separable metric space.
space of all Probability measure on $\mathbb R^n$ or some complete, separable metric space.
In other words, what would be differences in the definitions of a Borel probability measure and a probability measure on the above mentioned spaces.
Thanks for explaining to me.
AI: In 2) the sigma-algebra is not specified.
In 1) the sigma-algebra is the sigma-algebra of all Borel sets, the one generated by open sets.
|
H: A Question on "Stars and Bars" and why it doesn't apply to a problem asked earlier today.
The problem below was presented earlier today and I am wondering why a "stars and bars" approach wouldn't be an appropriate method to solving this problem. The problem reads as follows:
Find the probability that each child gets at least 1 ball when we are distributing 5 DISTINCT balls among 4 children (who are distinct of course).
Here is how I interpreted this problem.
Counting the number of ways we can distribute $n=5$ balls among $k=4$ children in such a way so that each child gets at least one ball is equivalent to counting the number of ways we can express $n=5$ as a sum of $k=4$ positive integers, of which there are nCr($5-1$,$4-1$) ways via stars and bars.
Meanwhile, the number of ways we can distribute the $n=5$ balls among the $k=4$ children in any way we'd like is equivalent to counting the number of ways we can express $n=5$ as a sum of $k=4$ non$-$negative integers, of which there are nCr($5+4-1$,$5$) ways.
Dividing these two numbers gives us a probability of $\frac{1}{14}$ which is incorrect.
Is my problem with this approach that the "star and bars" approach doesn't account for the fact that the balls are distinct?
AI: Yes, that’s the problem. If the children are $A$, $B$, $C$, and $D$, there are actually $\binom52\cdot 3!=60$ different ways to distribute the balls so that $A$ gets $2$ balls and $B$, $C$, and $D$ get one each: there are $\binom52=10$ different pairs of balls that can be given to $A$, and the remaining $3$ balls can be permuted amongst $B,C$, and $D$ in $3!=6$ distinguishable orders.
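For reference, the count that does respect the distinctness of the balls is the number of surjections from the $5$ balls onto the $4$ children: $4\cdot\binom52\cdot3!=240$ (choose the child who gets two balls, the pair of balls that child gets, and then distribute the remaining three balls). Dividing by the $4^5=1024$ equally likely assignments gives the probability $\dfrac{240}{1024}=\dfrac{15}{64}$.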
|
H: Transforming constraints into linear inequality
I want to model the following two constraints in terms of LP, but after trying various ways without success, I wonder if it is possible at all?
Given $x$ and $y_{ij}$ are binary variables. We need the following two constraints:
If $x = 1$, then $\sum_{i = 1}^{n} y_{ij}\leq 1$ for any $j=1,2,\ldots, n$
If $x=0$, then $\sum_{i = 1}^{n} y_{ij} = 2$ for any $j = 1,2,\ldots, n$.
Can someone please help me write these two constraints in terms of linear inequality? It seems so simple yet surprisingly difficult to me. Any help would really be appreciated.
AI: Answer: \begin{cases} \sum\limits_{i=1}^n y_{ij} \leq 2 -x &\mbox{for every } j = 1, 2, \dotsc, n \\ \sum\limits_{i=1}^n y_{ij} \geq 2 - 2x &\mbox{for every } j = 1, 2, \dotsc, n \end{cases}
Motivation:
What you want can be rewritten as:
If $x = 1$, then $0 \leq \sum\limits_{i=1}^n y_{ij} \leq 1$ for every $j = 1, 2, \dotsc, n$;
If $x = 0$, then $2 \leq \sum\limits_{i=1}^n y_{ij} \leq 2$ for every $j = 1, 2, \dotsc, n$.
Therefore, we just need to find two linear functions $f, g$ such that $f(1) = 0$, $f(0) = 2$, $g(1) = 1$, $g(0) = 2$. This is easy to do, using slope-intercept or whichever is your favourite way to compute equations for straight lines.
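If you want to convince yourself mechanically, a brute-force check over the possible values confirms the encoding (a sketch; the choice $n=3$ is an arbitrary small instance):

```python
n = 3
for x in (0, 1):
    for S in range(n + 1):                       # S plays the role of sum_i y_ij for one fixed j
        linear_ok = (S <= 2 - x) and (S >= 2 - 2 * x)
        intended = (S <= 1) if x == 1 else (S == 2)
        assert linear_ok == intended
print("the two inequalities reproduce the conditional constraints")
```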
|
H: If $(m,n)\neq 1$, prove $\mathbb{Z}_{mn} \not \cong \mathbb{Z}_{m} \times \mathbb{Z}_{n}$.
I am really struggling with this problem. We just learned isomorphism in rings. We have not learned groups. From reading multiple sources, so far I have:
Suppose $(m,n) = d > 1 \Longrightarrow \frac{m}{d}, \frac{n}{d}$ integers.
Suppose $k = \textrm{lcm} (m,n)$, then $k = \frac{m n}{d}$ integer $\Longrightarrow m \vert k, n \vert k$.
How would I proceed from here?
AI: Hint: If $m,n$ are not relatively prime, then $k = \mathrm{lcm}(m,n)$ is strictly smaller than $mn$.
Show that $k x= 0$ whenever $x \in \mathbb{Z}_m \times \mathbb{Z}_n$.
Find an element $x \in \mathbb{Z}_{mn}$ such that $k x \neq 0$.
|
H: A property of the function $\frac{\sin x}{x}$
How can one prove, that $0$ is the only value of $\frac{\sin x}{x}$ taken infinitely often?
What I tried:
To see how the graph looks like https://www.wolframalpha.com/input/?i=%28sin+x%29%2Fx
The function is continuous and takes positive and negative values at arbitrarily large $x$, so by Darboux (the intermediate value property) it has infinitely many zeroes.
Also, the line $y=0$ is an asymptote both in $\pm\infty$, but this thing only, doesn't imply the result. What else should I use?
AI: Let $0\ne r\in [-1,1].$ Let $f_r(x)=-rx+\sin x.$
Since $\frac {\sin x}{x}\to 0$ as $|x|\to \infty,$ take $M>0 $ such that $|x|>M\implies \left|\frac {\sin x}{x}\right|<|r|\implies f_r(x)\ne 0.$
Now $f_r'(x)=-r+\cos x$ so the set $S=\{x\in [-M,M]: f'_r(x)=0\}$ is finite. So let $S\cup \{-M,M\}=\{x_j: 1\le j\le n\}$ for some $n\in \Bbb N,$ where $x_j<x_{j+1}$ for each $j<n.$
Now $f_r$ is strictly monotonic on each interval $[x_j,x_{j+1}]$ for $j<n$ because $f'_r$ is continuous and non-$0$ on $(x_j,x_{j+1}).$ So there is at most one $x\in [x_j,x_{j+1}]$ such that $f_r(x)=0.$
We could also say there is a member of $(f'_r)^{-1}\{0\}$ between any 2 members of $f_r^{-1}\{0\}$ so if $[-M,M]\cap f_r^{-1}\{0\}$ was infinite then $[-M,M]\cap (f_r')^{-1}\{0\}$ would be infinite.
|
H: Sum of dot products of linearly independent vectors
Suppose I have $n$ column vectors of equal length $\{\vec{a}_i\}_{i \in \{1,...,n\}}$ which are linearly independent of one another. Suppose I have a further $n$ column vectors $\{\vec{b}_i\}_{i \in \{1,...,n\}}$, each of the same length as $\{\vec{a}_i\}_{i \in \{1,...,n\}}$, which I know are also linearly independent of one another. Finally, suppose I now know that:
$$\sum_i \vec{a}_i'\vec{b}_i = 0$$
Is this information enough to conclude any of the following results? If so why/why not?
$\vec{a}_i=\vec{0}$ for all $i$.
$\vec{b}_i=\vec{0}$ for all $i$.
$\vec{a}_i'\vec{b}_i=\vec{0}$ for all $i$.
(Where $\vec{0}$ is the zero vector of appropriate dimension).
AI: I'll stick to two dimensions for ease of notation, then you can consider the following:
$$
a_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad a_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad b_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad b_2 = \begin{bmatrix} 0 \\ -1 \end{bmatrix}
$$
Then $a_1^Tb_1+a_2^Tb_2 = 1-1 = 0$ and yet none of $a_i$, $b_i$, nor $a_i^Tb_i$ are zero, despite them being linearly independent pairs of vectors.
|
H: Prove that $0<\lim\limits_{x\to \infty}f(x)<1\implies \lim\limits_{x\to \infty}\big(f(x)\big)^x=0$
Prove that
$$0<\lim_{x\to \infty}f(x)=l<1\implies \lim_{x\to \infty}\big(f(x)\big)^x=0.$$
I'm looking for a delta - epsilon proof for this.
Starting from the LHS, I got to
$$\forall\varepsilon_1 >0,\; \exists m>0,\;\forall x>m,\; (l-\varepsilon_1)^x<\big(f(x)\big)^x<(l+\varepsilon_1)^x$$
But I can´t find a way to manipulate that inequality to arrive at
$$\forall\varepsilon>0, \; \exists m'>0,\; \forall x>m',\; -\varepsilon<\big(f(x)\big)^x<\varepsilon$$
($\varepsilon$ maybe as a function of $\varepsilon_1$).
Any hints or solutions are more than welcomed. Thanks in advance.
AI: As a hint: since $0<\lim_{x\to\infty}f(x)<1$, you can especially find some $B<1$ and some $m>0$ so that $0<f(x)<B$ for all $x>m$.
Therefore, when $x>m$, we get $0<f(x)^x<B^x$. Can you see how to finish this off?
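For a concrete way to finish (filling in the hint): take $\varepsilon_1=\tfrac12\min(l,1-l)$ and $B=l+\varepsilon_1<1$; the definition of the limit gives $m$ with $0<l-\varepsilon_1<f(x)<B$ for all $x>m$, hence $0<\big(f(x)\big)^x<B^x$ there, and since $B^x\to0$ as $x\to\infty$, the squeeze theorem gives $\lim_{x\to \infty}\big(f(x)\big)^x=0$.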
|
H: two local homeomorphisms
I am being silly here.
Suppose we have two local homeomorphisms $f: E \to X$ and $g: E' \to X$. If $S$ is a sheet of $E$.
Would $g^{-1}f(E)$ be homeomorphic to $f(E)$? My guess is yes as $g$ is a local homeomorphism. Any help would be appreciated!
AI: Take $X$ to be any nonempty space, then the inclusion $g:\varnothing\hookrightarrow X$ is vacuously a local homeomorphism. Take $f := \operatorname{id}_X:X\to X$, then $f$ is certainly also a local homeomorphism.
Now, $g^{-1}f(X) = \varnothing$ cannot be homeomorphic to $f(X)=X$.
|
H: Why are homogeneous and non-homogeneous first order differential equations called homogeneous an vice versa?
So I have recently been studying differential equations and I am extremely confused as to why the properties of homogeneous and non-homogeneous equations were given those names. It seems to have very little to do with what their properties are. Is there some reason for their naming scheme?
AI: A function is homogeneous of degree $n$ if
$$f(kx,ky) = k^nf(x,y).$$
This is a two variable example, but you could have more.
A homogeneous equation is one that might look like
$$f(x,y,z) =0$$
where $f$ is a homogeneous function. For instance
$$x^n+y^n-z^n =0$$.
What's special about a homogeneous equation is that you can multiply all the variables by a constant and it doesn't really change the equation. If you multiply $x$, $y$ and $z$ by a constant $k$ in the last equation, you can factor out a $k^n$ and divide it out. In a homogeneous equation, if $(x,y,z)$ is a solution, then so is $(kx,ky,kz).$
So now, a homogeneous differential equation is one where you can multiply a solution by a constant and it's still a solution. When you solve a homogeneous linear equation and find a solution like $y=e^{2t}$, then you know that $y=ke^{2t}$ is also a solution. You get infinitely many solutions for the price of one.
Non-homogeneous equations don't have that nice property.
|
H: Does uniform convergence of a function sequence imply the convergence of the corresponding level set?
Suppose $f_n(x),f(x)$ are both continuous functions with domain $M\subset R^2$ and codomain $R$, and define $L=\{x:f(x)\geq c\}$ and $L_n=\{x:f_n(x)\geq c\}$. Suppose $\underset{x\in M}{sup}|f_n(x)-f(x)|\rightarrow 0$. Do we have convergence of $L_{n}$ to $L$ in the following sense:
$\mu(L\Delta L_{n})\rightarrow 0$, where $\mu(\cdot)$ denotes the Lebesgue measure and $L\Delta L_{n}$ denotes the symmetric set difference: $L\Delta L_{n}=(L\cap L_{n}^c)\cup( L_{n}\cap L^c)$ (in plain words, the Lebesgue measure of the difference of these two sets vanishes)?
AI: Let $f(x)=0$ and $f_n(x)=-1/n$ for all $x\in M$ and all $n$. Let $c=0$. Then $L=M$ but $L_n=\phi$, and $\mu(L\Delta L_n)$ does not converge to $0$. (Unless $\mu(M)=0$.)
|
H: Express difference between two positive numbers as a number between $0$ and $1$
It looks like a relatively simple problem, but I found some difficulties finding a way to express the difference between two numbers as a number in between $0$ and $1$ (you can say as a percentage).
Rules are simple: both numbers are always positive, and regardless of how big each of two numbers is, the difference between them must be expressed as any number between $0$ and $1$.
A usual
$\frac{x-y}{y}=$ difference
Doesn't work here, since if $X$ is much larger than $Y$, their difference will be expressed as a number larger than $1$. Any simple variation of this formula gives the same results.
How would you make sure that the difference can be expressed within the boundaries of $0$ and $1$, regardless of how big or small either $X$ or $Y$ is?
AI: Let the two numbers be $a$ and $b$, which are positive, and let $x=a-b$. You could use the sigmoid function, or any variant of it, as it is used in neural networks in computer science, since it only returns a number between 0 and 1. The function is:
$$f(x)=\frac{e^x}{e^x+1}$$
If you feel this goes from 0 to 1 too quickly and only expresses the difference between two small numbers accurately, you could change the value of $c,n\in\mathbb{R}^+$ in:
$$f(x)=\frac{e^{\frac{1}{n}(x-c)}}{e^{\frac{1}{n}(x-c)}+1}$$
You can use this https://www.desmos.com/calculator/x0budxt4uq as a visualisation, though really large differences (denoted by d in the desmos graph) eventually lead to the function equalling $1$ (or coming so close that the computer can't calculate the decimal points). You can also use any of the functions on https://en.wikipedia.org/wiki/Sigmoid_function under the examples section, which also all give out a number from $0$ to $1$ no matter the input. To summarise, it is probably impossible to find a function which expresses the difference between two numbers linearly, as I am pretty certain it would have to exceed one at some point. You will probably have to use functions which asymptote at $0$ and $1$, such as the sigmoid function or similar functions. I hope this helps!
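As a small illustration of the scaled version (a sketch; the function name and sample inputs are made up for this answer):

```python
import math

def gap01(x, y, n=1.0, c=0.0):
    """Map the (possibly huge) difference x - y into (0, 1) using a scaled,
    shifted logistic curve: n stretches the curve, c recentres it."""
    d = ((x - y) - c) / n
    return math.exp(d) / (math.exp(d) + 1.0)

print(gap01(5, 3))              # a bit above 0.5: x slightly larger than y
print(gap01(3, 5))              # the symmetric value below 0.5
print(gap01(10**6, 1, n=10**5)) # still strictly below 1 even for a huge difference
```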
|
H: Confused About Irreducibility - Fraleigh
Fraleigh says:
"It is worthwhile to remember that the units in $F[x]$ are precisely the nonzero elements of F. Thus we could have defined an irreducible polynomial $f(x)$ as a nonconstant polynomial such that in any factorization $f(x) = g(x)h(x)$ in $F[x]$, either $g(x)$ or $h(x)$ is a unit."
I'm quite confused about this. I think his first sentence is saying that because $F$ is a field, every element is either $0$ or a unit. But I don't see how the second sentence follows at all. Just let $g(x)$ and $h(x)$ be two polynomials with degree lower than $f(x)$, and $g(x)h(x) = f(x)$. Aren't $g(x)$ and $h(x)$ automatically units since they are in $F[x]$ and $f(x)$ is nonconstant (and hence nonzero)? So I don't see how restricting $g(x)$ or $h(x)$ to be a unit is a restriction at all, or shows that we don't have $g(x)h(x) = f(x)$.
AI: Every nonzero element of $F$ is a unit. But not every element of $F[x]$ is a unit.
For instance every nonzero real number is a unit in $\mathbb R$, but nonconstant polynomials are not units in ${\mathbb R}[x]$.
|
H: Prove that $g$ is injective or surjective
I have two questions which are related to mappings, as follows:
Given 3 sets E, F, and G such that $f: E \rightarrow F, g: F \rightarrow G$ are two mappings. Prove that:
a. If $g \circ f$ is injective and $f$ is surjective then $g$ is injective.
b. If $g \circ f$ is surjective and $g$ is injective then $f$ is surjective.
For these two problems. I have solved halfway for each.
For the problem (a), I have proved that $f$ is bijective.
For the problem (b), I have proved that $g$ is bijective.
I don't know what to do next because I can't find the relation needed to conclude.
AI: In part a, you have already shown that $f$ is bijective, so $f^{-1}$ exists and is injective. Since $g \circ f$ is injective, we therefore have
$$g = (g \circ f) \circ f^{-1}$$
is the composition of injective functions, which hence makes it injective. Part b can be solved dually: there $g^{-1}$ exists and is surjective, and $f = g^{-1} \circ (g \circ f)$ is a composition of surjective functions, hence surjective.
|
H: What is the set of functions such that any quotients of two of them at infinity is real or infinity?
In this question How to quantify asymptotic growth? I was told that to assume $\lim_{x\to\infty}\frac{f(x)}{g(x)}$ is either real or $\pm\infty$, I have to assume a certain subset of non-oscillating functions for $f$ and $g$. I am wondering if simply eliminating trigonometric functions or any compositions of functions involving them from $\mathbb{R}\to\mathbb{R}$ is enough.
AI: The variety of functions is far more than you can imagine. Functions with a limit as $x \to \infty$, even if you accept $\pm \infty$ as a limit, are rare. You can't eliminate the ones that don't have a limit by something simple like banning trig functions.
As an example, take any function $f(x)$ that converges to a limit. Now I define $g(x)=f(x)$ at all points except the integers. At each integer, I make $g(x)=\pm 1$ randomly. This is an uncountable family of functions that (except for a set of measure $0$) has no limit.
|
H: How is $\mathbb Z[\frac{1}{2}+\frac{1}{2}\sqrt{n}]=\{a+b\sqrt{n} \mid a,b \in \mathbb Z \text{ or } a,b \in \mathbb Z +\frac{1}{2}\}$?
Let $n \in \mathbb Z$ be square-free.
If $n \equiv 1 \pmod 4$, how is $\mathbb Z[\frac{1}{2}+\frac{1}{2}\sqrt{n}]=\{a+b\sqrt{n} \mid a,b \in \mathbb Z \text{ or } a,b \in \mathbb Z +\frac{1}{2}\}$?
I can see that $\mathbb Z[\frac{1}{2}+\frac{1}{2}\sqrt{n}] \subset \{a+b\sqrt{n} \mid a,b \in \mathbb Z \text{ or } a,b \in \mathbb Z +\frac{1}{2}\}$?
How do we show the reverse containment?
AI: In general, $a+b\sqrt{n}=(a-b)+2b(\frac12+\frac12\sqrt n)$.
Note that both $a-b$ and $2b$ are integers, when either $a,b\in\mathbb Z$ or $a,b\in\mathbb Z+\frac12$. This proves $\mathbb Z[\frac{1}{2}+\frac{1}{2}\sqrt{n}] \supset \{a+b\sqrt{n} \mid a,b \in \mathbb Z \text{ or } a,b \in \mathbb Z +\frac{1}{2}\}$.
|
H: Conditional probability on dice rolling
Let's roll four dice. What is the probability that there is no "4" on any of the dice conditional on each dice having different values.
My answer is the following:
$$\frac{5 \times 4 \times 3 \times 2}{6 \times 5 \times 4 \times 3}$$
The denominator being all events with different values; The numerator being all events with different values but "4".
Is my reasoning correct?
AI: Your reasoning seems fine.
You can also reason alternatively as follows:
Notice that since you are given that the four dice are all different, then your four rolls will include exactly $4$ of the $6$ values from $1$ through $6$. So any given value has a $\frac46$ chance of occurring.
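(Numerically the two routes agree: $\frac{5\cdot4\cdot3\cdot2}{6\cdot5\cdot4\cdot3}=\frac{120}{360}=\frac13$, and $1-\frac46=\frac13$ as well.)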
|
H: Prove that if $G$ is a finite group in which every proper subgroup is nilpotent, then $G$ is solvable.
Prove that if $G$ is a finite group in which every proper subgroup is nilpotent, then $G$ is solvable. (Hint: Show that a minimal counterexample is simple. Let $M$ and $N$ be distinct maximal subgroups chose with $|M\cap N|$ as large as possible and apply Part 2 of Theorem 3. Now apply the methods of Exercise 53 in Section 4.5.)
This is Exercise 6.1.35 in Dummit and Foote. Using the idea from the hint, I tried the following proof. But I couldn't prove that $M\cap N=1$. Does anyone know how to prove this? Thanks.
Here is what I have done so far:
We proceed by induction. If $|G|=2$, then $G$ is clearly solvable. Let $|G|\geq6$. Assume that the statement is true for all groups of order $<|G|$.
If $G$ is of prime order, then clearly $G$ is solvable. So we assume that $G$ is not of prime order. Since $G$ is finite, $G$ contains nontrivial maximal subgroups.
Claim: There exists a maximal subgroup of $G$ which is normal. Suppose not. Since conjugates of a maximal subgroup are maximal subgroups, $G$ has more than one maximal subgroup. Let $M$ and $N$ be distinct maximal subgroups chosen such that $|M\cap N|$ is maximal. Since $M$ and $N$ are nilpotent, $M\cap N<N_M(M\cap N)$ and $M\cap N<N_N(M\cap N)$. (Here I want to show that $M\cap N=1$ following the hint.)
Now since $G\neq\bigcup_{g\in G}gMg^{-1}$, there exists $H\leq G$ maximal such that $H$ is not a conjugate of $M$. So $G$ has at least the following number of nonidentity elements:
\begin{equation*}
\begin{split}
(|M|-1)|G:N_G(M)|+(|H|-1)|G:N_G(H)|=&(|M|-1)|G:M|+(|H|-1)|G:H|\\=&2|G|-|G:M|-|G:H|\\\geq&2|G|-\frac{1}{2}|G|-\frac{1}{2}|G|=|G|
\end{split}
\end{equation*}
which is a contradiction. Hence there exists a maximal subgroup of $G$ which is normal.
Now let $M\unlhd G$ be a maximal subgroup. Then $M$ is nilpotent and hence solvable. Now $|G/M|<|G|$. Since every subgroup of $G$ is nilpotent, by the correspondence theorem, every subgroup of $G/M$ is nilpotent. So $G/M$ is solvable. Hence $G$ is solvable.
AI: Your proof is not complete and also incorrect. Here is a proof. If $G$ is not solvable, then one of the composition factors $B/A$ is simple non-Abelian. If $B$ is a proper subgroup then $B$ is nilpotent, hence $B/A$ is nilpotent, a contradiction. So $B=G$. Similarly if $A\ne 1$, then $|G/A|<|G|$, all proper subgroups of $G/A$ are nilpotent, hence $G/A$ is solvable.Since $A$ is nilpotent, $G$ is solvable.
Thus $G=G/A=B/A$ is simple non-Abelian.
The shortest way to finish is then by using J. Thompson's famous theorem about classification of all simple finite groups where all proper subgroups are solvable. Each of these groups contains non-nilpotent solvable subgroups. QED
Now a longer way suggested by D-F. $G$ contains at least two maximal subgroups $M,K$ since $G$ is simple. Assume that $L=M\cap K$ is maximal possible. Since $M,K$ are nilpotent $N_1=N_M(L)>L<N_2=N_K(L)$ (Theorem 3 in D-F). If $L$ is not 1, its normalizer in $G$ is not $G$ (again because $G$ is simple), whence that normalizer must be inside some maximal subgroup $K'$ of $G$. But then $K'\ge N_1$, hence $K'\cap M$ is bigger than $L$, a contradiction. Thus $L=1$ and the intersection of any two maximal subgroups of $G$ is trivial.
We can assume that $M$ and $K$ are not conjugate. Note that $N_G(M)=M, N_G(K)=K$ since these subgroups are maximal and not normal. Hence there are $[G:M]$ conjugates of $M$ each two intersecting trivially and there are $[G:K]$ conjugates of $K$ each two intersecting trivially. Altogether these subgroups contain $2|G|-[G:K]-[G:M]+1$ elements. Note that the indices of these maximal subgroups are not bigger than $|G|/2$ by Lagrange's theorem. So $2|G|-[G:K]-[G:M]+1\ge |G|+1$ a contradiction.
|
H: If $y=f(x)=\frac{3x-5}{2x-m}$ find $m$ so that $f(y)=x$.
Question: If $y=f(x)=\dfrac{3x-5}{2x-m}$ find $m$ so that $f(y)=x$.
We have $y=\dfrac{3\left(\frac{3x-5}{2x-m}\right)-5}{2\left(\frac{3x-5}{2x-m}\right)-m} $
How can I find $m$? It is given than $m=3$.
AI: Now, $$\dfrac{3\left(\frac{3x-5}{2x-m}\right)-5}{2\left(\frac{3x-5}{2x-m}\right)-m}=x$$ or
$$\frac{3(3x-5)-5(2x-m)}{2(3x-5)-m(2x-m)}=x$$ or
$$\frac{-x+5m-15}{x(6-2m)+m^2-10}=x$$ or $$-x+5m-15=x^2(6-2m)+x(m^2-10).$$
We need $$5m-15=0,$$ $$6-2m=0$$ and $$m^2-10=-1,$$ which gives $m=3.$
|
H: Trignometric Substitution Problem. Can't find right answer
I am trying to solve an integral that involves using Trig Sub (I know it can also be done with partial fractions).
However, no matter how many times I try, I still cannot find the right answer. I hope someone can point out where I went wrong :)
$\displaystyle \int \frac{1}{1-x^{2}} dx$
$\displaystyle\int \frac{1}{(\sqrt{1-x^{2}})^{2}} d x$
$\displaystyle x = \sin\theta, dx=\cos\theta \, d\theta, \sqrt{1-x^{2}} =\cos\theta$
$\displaystyle \int \frac{1}{(\cos \theta)^{2}} \cos \theta \, d \theta$
$\displaystyle \int \sec \theta \, d \theta$
$\displaystyle \ln |\sec \theta+\tan \theta|+ C$
$\displaystyle \ln \left|\frac{x}{\sqrt{1-x^{2}}}+\frac{1}{\sqrt{1-x^{2}}}\right|+C$
I hope someone can tell me where I committed an error. Thanks!
AI: You are absolutely correct. You just need to simplify the answer a bit.
$$\ln \Big(\dfrac{x+1}{\sqrt{1-x^2}}\Big)=\ln \Big(\dfrac{x+1}{\sqrt{(1-x)(1+x)}}\Big)=\ln \Big(\dfrac{\sqrt{1+x}}{\sqrt{1-x}}\Big)=\dfrac{1}{2}\ln\Big(\dfrac{1+x}{1-x}\Big) $$
|
H: Evaluating limit of the function at $\frac{\pi}{2}$
I'm trying to solve this
$$\lim_{x \to \frac{\pi}{2}} \frac{\cos{x}}{(x-\frac{\pi}{2})^3}$$
I have tried using the L'Hôpital's rule
But I'm stuck at
$$\lim_{x \to \frac{\pi}{2}} \frac{-\sin{x}}{3(x-\frac{\pi}{2})^2}$$
Since the above equation is not in the $\frac{0}{0}$ , $\frac{\infty}{\infty}$
or $\frac{anything}{\infty}$ form
Then I tried expanding the $\cos{x}$ as taylor series at $x=\frac{\pi}{2}$. Which on simplifying I am left with
$$ \lim_{x \to \frac{\pi}{2}} \frac{-1}{(x - \frac{\pi}{2})^2} + \frac{1}{6} - \frac{1}{120} (x - \frac{\pi}{2})^2 + \frac{(x - \frac{\pi}{2})^4}{5040} - ... $$
and I'm stuck again. How do I proceed ahead?
AI: welcome to MSE
AS a hint
$$\lim_{x \to \frac{\pi}{2}} \frac{\cos{x}}{(x-\frac{\pi}{2})^3}=\lim_{x \to \frac{\pi}{2}} \frac{\sin(\frac{\pi}{2}-x)}{(x-\frac{\pi}{2})^3}$$now take $x-\frac{\pi}{2}=a $ when $x$ tends to $\frac{\pi}{2}$ ,a tends to zero
$$\lim_{x \to \frac{\pi}{2}} \frac{\sin(\frac{\pi}{2}-x)}{(x-\frac{\pi}{2})^3}=\\
\lim_{a\to 0} \frac{\sin(-a)}{(a)^3}\\=
\lim_{a\to 0} \frac{-\sin(a)}{(a)^3}\\=
\lim_{a\to 0} \frac{-1}{(a)^2}\to -\infty$$
|
H: What concept of order is introduced in the twentyfold way?
Four of the folds not present in the twelvefold way but introduced in the twentyfold way, rows $5$ and $6$ of the linked table, are defined by the statement that order matters.
However, my understanding is that labeling/de-labeling the elements of the domain and the codomain determine whether or not order matters in the domain and codomain, respectively. These distinctions are already considered in the twelvefold way.
While a physical example might suggest that the relation itself could have an order, i.e., dropping the same balls into the same bins but in a different temporal sequence, in general a relation does not join elements in a particular order.
What concept of order is being used to define these combinatoric categories?
AI: Let’s start with the fairly familiar settings of Row $3$ of the table. The Stirling numbers of the second kind $n\brace k$ count partitions of $[n]$ distinct objects into $k$ non-empty parts; we don’t care about the order of the parts or the order of the objects within each part. If we care about the order of the parts, the number is $k!{n\brace k}$.
Row $5$ is part of what we get when we do care about the order of the objects in each part. Bogart’s example is shelving $n$ books in an empty bookcase with $k$ shelves and then pushing the contents of each shelf to the left. If you imagine shelving the books one at a time, processing them in alphabetical order by author, there are $k$ places to put the first book: you can put it on any shelf. There are $k+1$ places to put the second book, because you can put it on any shelf, and if you put it on the same shelf as the first book you can put it on either side of that book. (Remember, order on the shelf now matters.) Each book that you add to the shelves increases the number of identifiable spots for the next book by $1$, so in the end you have $k^{\overline n}$ possible arrangements (where $k^{\overline n}=k(k+1)\cdots(k+n-1)$ is a rising factorial). The shelves have an inherent order (e.g., from top to bottom), so here we’re partitioning the $n$ books into an ordered collection of $k$ ordered subsets, any of which may be empty.
If instead we simply divide the books into $k$ stacks scattered around the room, allowing any of the stack so be empty, but we care about the order of the books within each stack, the count is different. The Lah number $L(n,i)$ is the number of ways to divide $n$ books into $i$ non-empty linearly ordered subsets, and we allow any number of non-empty stacks from $1$ through $k$, so in this case the number of arrangements is $\sum_{i=1}^kL(n,i)$.
What distinguishes the partitions in Row $5$ from those in Row $3$ is that we now care about the order of the objects within each part. To use your example, if we imagine putting balls into bins, not only are the individual balls identifiable, so that it matters which balls are in which bins, but we also care about the order in which the balls in each bin were placed there. One might imagine the bins as cylinders just big enough in diameter to accommodate a ball, so that the balls in a bin end up sorted from bottom to top in the order in which they were placed in the bin, and different orders are counted as different arrangements of the balls.
Row $6$ is the same thing, except that we have to have at least one book on each of the $k$ shelves or in each of the $k$ stacks, and the reasoning that leads to $n^{\underline k}\,k^{\overline{n-k}}$ arrangements of books on the shelves and $L(n,k)$ ways to distribute the books into $k$ stacks is similar.
In short, we’re not just counting the ways to divide $n$ distinct objects into parts of some kind: we’re counting the number of ways to divide them into linearly ordered parts. Because the objects are distinct, a part that has $\ell$ elements can be linearly ordered in $\ell!$ different ways, and different orders are counted as distinct arrangements. There need not be any natural or intrinsic order of the objects involved: all that matters is that we can distinguish the $\ell!$ different linear orderings of $\ell$ objects.
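If a sanity check is welcome, a small brute-force count reproduces both formulas (a sketch in Python; the values $n=4$ books and $k=2$ shelves are arbitrary):

```python
from itertools import product

def brute(n, k, onto):
    """Count placements of n distinct books on k ordered shelves, where the
    order of books on each shelf matters; optionally require no empty shelf."""
    count = 0
    for shelves in product(range(k), repeat=n):   # which shelf each book lands on
        if onto and len(set(shelves)) < k:
            continue
        ways = 1
        for s in range(k):
            m = shelves.count(s)
            for i in range(2, m + 1):             # m! orderings within that shelf
                ways *= i
        count += ways
    return count

def rising(x, m):
    out = 1
    for i in range(m):
        out *= x + i
    return out

def falling(x, m):
    out = 1
    for i in range(m):
        out *= x - i
    return out

n, k = 4, 2
print(brute(n, k, onto=False), rising(k, n))                     # both 120
print(brute(n, k, onto=True), falling(n, k) * rising(k, n - k))  # both 72
```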
|
H: Let $f(x)$ satisfy Rolle's theorem conditions and have three successive solutions $x_1, x_2, x_3$. How to prove that $f'(x)$ is differentiable?
Rolle's Theorem
Let $f(x):[a,b]\to\mathbb{R}$, where $f$ is differentiable on $(a,b)$ and continuous on $[a,b]$, with $f(a) = f(b)$.
We know from Rolle's theorem that there exists at least one $x_0$ with $f'(x_0)=0$.
The problem
Let $x_1,x_2,x_3$ be three successive zeros of $f$.
Prove that $f''(x)=0$ has at least one solution.
Solution Attempt
From Rolle's theorem it is obvious that $f'(x)$ has at least two zeros $$f'(c_1) = f'(c_2) = 0$$
$$x_1<c_1<x_2<c_2<x_3$$
Therefore, if we could prove $f'(x)$ is differentiable, then $f'(x)$ would also satisfy Rolle's theorem.
Thus, we will be able to prove that $f''$ has at least one zero in $(c_1,c_2)$.
The Question
How to prove that $f'(x)$ is differentiable (given the fact $f$ satisfies Rolle's conditions)?
AI: You will have to assume from the beginning that both $f$ and $f'$ satisfy the hypothesis of Rolle's theorem. For a counter-example, think of a function $g$ which is continuous, but not differentiable (like $|x|$), but modify it slightly (think of "piecing together" a few absolute value functions, so that the graph looks like a bunch of letter "W" stuck side-by-side, like a jagged sine graph). Then integrate: take $f(x):= \int_0^x g(t)\, dt$.
Then, $f$ is not twice differentiable on $\Bbb{R}$, but wherever $f''(x)$ exists, it is always $\pm 1$ (in particular non-zero).
|
H: Any finite connected graph in which every vertex has degree $\ge 2$ has a circuit
Is my proof for the following statement correct?
Any finite connected graph in which every vertex has degree $\ge 2$ has a circuit.
My attempt: Let $G$ be a finite connected graph. Let $|G|=n$. Suppose that degree of any vertex $\ge 2$. Now, pick any vertex $v_1$. By hypothesis $v_1$ must have at least two distinct edges incident on it; pick one, call it $e_1$. Call the end vertex of $e_1$ different from $v_1$ as $v_2$. Now pick an edge $e_2$ different from $e_1$ incident on $v_2$. Let $v_3$ be the end vertex of $e_2$ different from $v_2$. If there is an edge joining $v_3$ and $v_1$, we are done. If not, pick any edge $e_3$ different from $e_2$ incident on $v_3$ and repeat the previous argument. Now, we proceed by induction and find vertices $\{v_1 , v_2, \ldots, v_n , v_{n+1} \}$ such that there is an edge between $v_i$ and $v_{i+1}$. Since $G$ is connected, the component of $G$. containing $v_1$ which is a superset of $\{ v_1, v_2, \ldots , v_{n+1} \}$ is equal to $G$. So, $v_{n+1}=v_1$ and hence we are done.
Is this proof correct? Is there an easier way to do this?
AI: There is no guarantee $v_{n+1}=v_1$. For example, take a "bow-tie" graph (i.e., two 3-cycles with a vertex in common) and start at $v_1$ any degree-2 vertex, $v_2$ the cutvertex and now only walk the other triangle for $v_3,v_4,v_5,v_6$.
Instead, note that $v_1,v_2,\dots,v_{n+1}$ are $n+1$ vertices from the set of $n$ vertices of $G$, so by pigeonhole there must be $1\leq i<j\leq n+1$ with $v_i=v_j$. By construction we know $j\neq i+1$ and so ...
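Here is a short Python sketch (my own, not part of the answer) of the repaired argument: walk the graph without immediately reusing the edge you just came along; by pigeonhole a vertex repeats, and the piece of the walk between the two visits is a circuit.

```python
def find_circuit(adj):
    """adj: dict vertex -> list of neighbours in a simple graph with all degrees >= 2."""
    start = next(iter(adj))
    walk, seen, prev = [start], {start: 0}, None
    while True:
        v = walk[-1]
        w = next(u for u in adj[v] if u != prev)   # degree >= 2, so this always exists
        if w in seen:                              # pigeonhole: a vertex repeats
            return walk[seen[w]:] + [w]            # closed sub-walk = circuit
        seen[w] = len(walk)
        walk.append(w)
        prev = v

# the "bow-tie" from the answer: two triangles sharing the cut vertex 2
bowtie = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
print(find_circuit(bowtie))   # e.g. [0, 1, 2, 0]
```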
|
H: Let $f$ be continuous on $[a,b]$ and differentiable on $(a,b)$ where $f(b)=0$. How to prove that $f'(x_0) = \frac{f(x_0)}{a-x_0}$?
The Problem
Let $f$ be continuous on $[a,b]$ and differentiable on $(a,b)$, where $f(b)=0$.
How to prove that:
$$\exists x_0 \in (a,b): f'(x_0) = \frac{f(x_0)}{a-x_0} \quad (1)$$
My solution attempt
$f$ satisfies the Mean Value Theorem's requirements, thus $\exists x_0 \in (a,b): f'(x_0) = \frac{f(b) - f(a)}{b-a} \quad (2)$
Given the fact that $f(b)=0$,
$$(2) \to f'(x_0) = \frac{f(a)}{a-b} \quad (3)$$
It seems we are getting closer to $(1)$. But we can't let $a=x_0$ because $x_0 \in (a,b)$.
Any ideas?
AI: Write the desired conclusion as
$$
f(x) + (x-a) f'(x) = 0
$$
for some $x \in (a, b)$, and note that the left-hand side is the derivative of $(x-a)f(x)$.
This suggests applying Rolle's theorem to $g(x) = (x-a) f(x)$: since $g(a) = 0$ and $g(b) = (b-a)f(b) = 0$, there is an $x_0 \in (a,b)$ with $g'(x_0) = 0$, which is exactly $(1)$.
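A quick numerical illustration with a concrete $f$ of my own choosing ($f(x)=(x-3)e^x$ on $[0,3]$, so $f(b)=0$):

```python
import numpy as np
from scipy.optimize import brentq

a, b = 0.0, 3.0
f  = lambda x: (x - 3) * np.exp(x)
fp = lambda x: (x - 2) * np.exp(x)              # f'(x)

x0 = brentq(lambda x: fp(x) - f(x) / (a - x), 0.1, 2.9)
print(x0, fp(x0), f(x0) / (a - x0))             # the last two numbers agree
```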
|
H: (Weak convergence $\implies$ strong convergence) $\implies \mathcal{H}$ finite-dimensional
Let $\mathcal{H}$ be a Hilbert space. Show:
$(\forall \psi \in \mathcal{H}: \lim \langle \psi, \phi_{n}\rangle = \langle \psi, \phi\rangle \implies \lim \phi_{n}=\phi )\implies \mathcal{H}$ finite-dimensional. $(*)$
The idea:
Assume that $\mathcal{H}$ is infinite-dimensional; then in particular there exists a countable orthonormal system $(e_{n})_{n\in \mathbb N}$, and by the Bessel inequality we have:
$\sum\limits_{n \in \mathbb N}\lvert \langle e_{n}, \phi_{m}\rangle\rvert^{2}\leq \lvert \lvert \phi_{m}\rvert \rvert^{2}<\infty $
Thus for any $m \in \mathbb N$, we obtain $ \lim\limits_{n \to \infty}\langle e_{n}, \phi_{m}\rangle=0$.
I do not see how this shows that the left-hand side of $(*)$ is false.
AI: $e_n \to 0$ weakly, because $\sum_n |\langle e_n, x \rangle|^{2} <\infty$ (Bessel) implies $\langle e_n, x \rangle \to 0$ for all $x$. But $\|e_n\|=1$, so $e_n$ does not tend to $0$ in norm. Hence the left-hand side of $(*)$ fails whenever $\mathcal{H}$ is infinite-dimensional.
|
H: Proof Check: $x \leq y+ \epsilon$ for all $\epsilon >0$ iff $x \leq y$.
Synopsis
I want to be sure I'm utilizing proof by contradiction correctly, so please check my proof of the exercise below. It's relatively simple, so it shouldn't take you too much time.
Exercise
Let $x$ and $y$ be real numbers. Show that $x \leq y + \epsilon$ for all real numbers $\epsilon > 0$ if and only if $x \leq y$.
Proof
Suppose $x \leq y + \epsilon$ for all $\epsilon > 0$ and $x > y$. Then $x - y > 0$ and $x - y + \epsilon > \epsilon$ for all $\epsilon > 0$. But if $x \leq y + \epsilon$, then $y - x + \epsilon \geq 0$. So $0 \leq y-x+\epsilon < y-x+(x-y+\epsilon) = \epsilon$, a contradiction. For the converse, suppose $ x \leq y$. Then it is obvious that $x \leq y+\epsilon$. This concludes our proof.
Update
This proof is obviously wrong. It is not a contradiction that $\epsilon >0$. For some reason, I deluded myself that my conclusion stated that $\epsilon < 0$, but that's just due to my occasional stupidity and habitual lack of double checking. Instead, consider some $\epsilon$ such that $0 < \epsilon < x-y$. Then $x \leq y + \epsilon < y+x-y < x$, a contradiction. Thank you to the various people who commented on the issues with my proof. This was a very stupid mistake, and I don't even know how I overlooked what I did.
AI: $0 \leq y-x+\epsilon < y-x+(x-y+\epsilon) = \epsilon$, a contradiction.
Why is that a contradiction? $0$ is $< \epsilon$.
....
Instead, note that $\epsilon$ is not fixed. If $x-y >0$ then we can let $\epsilon$ be some value $0 < \epsilon < x-y$ and ... then what happens?
$x \le y + \epsilon < y+(x-y) = x$ so $x < x$. Which certainly is a contradiction!
|
H: Can $\int_0^\infty f (x) \, dx$ exist if $\lim_{x \to \infty} f(x)$ does not exist?
Is it possible to have a function for which $\lim_{x \to \infty} f(x)$ does not exist, but $\int_0^\infty f(x) \, dx$ exists and is finite?
I think I've found an example actually, but I'm not sure it works. Let $H_n$ be the $n$th harmonic number. Consider $f$ such that $f(x) = 1$ for $x \in [0,1)$ and $f(x) = (-1)^{n}$ for $x \in [H_n , H_{n + 1})$. It seems that
$$
\int_0^\infty f(x) \, dx = \sum_{n = 1}^\infty \frac{(-1)^{n + 1}}{n} = \log 2
$$
even though $\lim_{x \to \infty} f(x)$ doesn't exist. Does this work?
AI: Your example is correct. Note that you based your idea on an oscillating function; however, we can also give an example with a non-negative one.
Consider hat functions with maximum value $1$ centered at the integers, such that the width of the $n$-th hat is $\frac 1 {n^2}$. Thus, the area of the $n$-th hat function is $\frac 1{2n^2}$. Now, define $f$ as the sum of those hat functions, so that $$\int_0^\infty f(x)\, dx =\sum_{n>0} \frac 1{2n^2} = \frac{\pi^2}{12}$$
And of course the limit of $f$ does not exist since $f(n) = 1$ and $f(n + 0.5) =0$ for $n$ large enough.
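A small numerical check of this construction (my own sketch, not part of the answer):

```python
import math

def f(x):
    """Sum of hat functions: height 1, centred at n >= 1, base width 1/n^2."""
    n = round(x)
    if n < 1:
        return 0.0
    return max(0.0, 1.0 - 2 * n * n * abs(x - n))

# the n-th hat has area 1/(2 n^2), so the areas sum to pi^2/12
print(sum(0.5 / n**2 for n in range(1, 10**6)), math.pi**2 / 12)

# but f keeps returning to 1 at the integers and to 0 between them,
# so lim_{x -> infinity} f(x) does not exist
print(f(1000), f(1000.5))
```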
|
H: Give an example of a sequence of functions having the following property.
I am looking for an example of a sequence $(f_n)$ of differentiable functions on an interval $I$ such that $f_n\to 0$ uniformly on $I$ but $f'_n(x)\to \infty$ for all $x\in I$ as $n\to \infty$. Can someone provide an example where this can occur? I was trying trigonometric functions like $f_n(x)=\sin(nx)/n$, etc., but they are not working.
AI: Partial answer: such an example has to be weird. Assuming continuity of the derivatives I will show that no such sequence can exist.
Write $I$ as $\cup_m\{x:f_n'(x) \geq 1 \,\forall n >m\}$. By the Baire Category Theorem there is an integer $m$ and an interval $[a,b] \subset I$ (with $a <b$) such that $f_n'(t) \geq 1$ for all $t \in [a,b]$ and all $n > m$. But now we get a contradiction from $f_n(b)-f_n(a)=\int_a^{b} f_n'(t)\,dt \geq b-a$ for all $n >m$, since $f_n \to 0$ uniformly.
|
H: Defining a family of self-adjoint operators via a bilinear form
I started reading an article and I'm having some trouble understanding a certain family of operators they defined. Here are the relevant parts:
I'm trying to understand how exactly $L_\sigma$ are defined. I noticed that if I take $\sigma=0$ then I get $B_0(u,v)=\langle Lu,v \rangle$, with respect to the standard $L^2$ inner product (and $L$ is the original Schrodinger operator). This seems like a possible direction on understanding how the operators are defined, but since they mentioned the space $H^1_0$ (which uses a different inner product), this might not be a correct idea.
I also found out that one can define self-adjoint operators from symmetric bilinear forms using the Riesz representation theorem, as described here. But this seems like a not very concrete way to define the $L_\sigma$ operators. If this is indeed the case, then I'm wondering if there's a more concrete way to interpret these operators (can we find a more concrete formula or something?), and once again, I'm not sure exactly if the writers are referring to the $L^2$ inner product or the $H^1_0$ inner product. Moreover, what's the relation to the original Schrodinger operator $L$, and why is the limit $L_\infty$ a Dirichlet boundary condition operator?
Anyway, if anyone has any ideas and can explain to me how to think of the $L_\sigma$ operators, I'd appreciate it. The original article is Nodal deficiency, spectral flow, and the
Dirichlet-to-Neumann map, by Berkolaiko, Cox and Marzuola.
Thanks in advance!
AI: The general construction is as follows: Let $H$ be a Hilbert space and
$$
b\colon D(b)\times D(b)\to \mathbb{C}
$$
a symmetric nonnegative bilinear form. Here $D(b)$ is a dense subspace of $H$. The form $b$ is called closed if $D(b)$ endowed with the inner product $\langle\cdot,\cdot\rangle_b$ given by
$$
\langle u,v\rangle_b=b(u,v)+\langle u,v\rangle_H
$$
is complete.
Then there is a positive self-adjoint operator $B$ associated with $b$ that can be described in two different ways (Kato's first and second representation theorems):
The domain of $B$ is given by
$$
D(B)=\{u\in D(b)\mid \exists v\in H\,\forall w\in D(b)\colon \langle v,w\rangle=b(u,w)\}
$$
and $Bu=v$, where $v$ is defined as in the definition of $D(B)$ (if it exists, it is unique). This definition is similar to the one using the Riesz representation theorem, but if $D(b)$ is a proper subspace of $H$, then one has to be careful with the domain of $B$.
The domain of $B^{1/2}$ is $D(b)$ and
$$
\langle B^{1/2}u,B^{1/2}v\rangle_H=b(u,v)
$$
for all $u,v\in D(b)$.
As noted in the OP, neither of these two representations is very explicit, but for a good reason: In many cases, the domain of $B$ does not have a nice explicit description, while the domain of $b$ can be easily written down.
If you are only interested in the action of $B$ on "nice functions", you can typically just integrate by parts. In your case that would be
$$
L_\sigma u=-\Delta u+Vu+\sigma 1_{\Gamma}u.
$$
In particular, for $\sigma=0$ you get your original operator $L$ and as $\sigma\to \infty$ the last summand forces $u$ to be zero on $\Gamma$, which means that it satisfies Dirichlet boundary conditions on $\partial \Omega\cup \Gamma$ (the $\partial \Omega$ part comes from the fact that $b_\sigma$ is defined on $H^1_0(\Omega)$, which already forces Dirichlet boundary conditions on $\partial \Omega$).
If you are working with Schrödinger operators and the like, I suggest you read up on these form methods, as they are widely used. I can recommend Kato's Perturbation Theory for Linear Operators and Reed and Simon's Methods of Modern Mathematical Physics (you certainly don't have to read all 4 volumes, but I don't have them at hand right now and don't know in which one form methods are treated).
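To see the $\sigma\to\infty$ statement in action, here is a crude finite-difference toy model of my own (one dimension, $V=0$, $\Gamma=\{1/2\}$, and a plain diagonal penalty standing in for the surface term); it is only meant to illustrate how a large penalty on $\Gamma$ pushes the spectrum towards the Dirichlet-decoupled one, not to reproduce the paper's construction:

```python
import numpy as np

n = 199                              # interior grid points on (0, 1), h = 1/200
h = 1.0 / (n + 1)
lap = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h**2      # Dirichlet Laplacian on (0, 1)

for sigma in [0.0, 1e2, 1e4, 1e6]:
    penalty = np.zeros(n)
    penalty[n // 2] = sigma          # the grid point sitting on Gamma = {1/2}
    evals = np.linalg.eigvalsh(lap + np.diag(penalty))
    print(sigma, evals[:2])

# Dirichlet eigenvalues of -u'' on an interval of length 1/2 start at (2*pi)^2:
print("limit:", (2 * np.pi) ** 2)
```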
|
H: Finding the Locus if $z$ Purely Imaginary/Real
If $\omega=\frac{z-1}{z+i}$, find the locus of $z$ if $\omega$ is:
Purely Imaginary
Purely real
This is for the applying complex numbers topic of an advanced HSC maths course. I was asked to describe the loci.
One of my friends suggested that I rationalise $\omega$ and split it into its real and imaginary parts like below:
$\omega=\frac{z^2+z}{z^2+1}-i\frac{z+1}{z^2+1}$
but I'm not convinced that this works, because $z$ could stand for a complex number as well? And wouldn't that make this whole process invalid?
AI: Hint:
$z=x+yi$, $z\not=-i$
i. $\omega = ai$ with $a\in \mathbf{R}$
ii. $\omega = a$ with $a\in \mathbf{R}$
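If it helps, here is a short sympy computation (my own working, so treat it as a sanity check rather than a model solution) that carries the hint through with $z=x+yi$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
w = (z - 1) / (z + sp.I)

wr, wi = w.as_real_imag()
print(sp.simplify(wr))   # vanishes exactly on the circle x^2 + y^2 - x + y = 0 (z != -i)
print(sp.simplify(wi))   # vanishes exactly on the line   y = x - 1             (z != -i)
```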
|
H: Finding the value of a variable in a quadratic
Question:
The number of negative integral values of $m$ for which the expression $x^2+2(m-1)x+m+5$ is positive $\forall$ $x>1$ is?
For me, solving this question would be quite easy if the condition "$\forall\, x>1$" were not given. But how do I solve it under the given condition?
AI: There are two ways for having $\;p(x)=x^2+2(m-1)x+m+5>0\enspace\forall x>1$:
either $p(x)$ has no real root, which means that the reduced discriminant $\Delta'=(m-1)^2-(m+5)=(m+1)(m-4)$ is negative;
or $p(x)$ has real roots, say $\xi_0\le \xi_1$, i.e. $\Delta'\ge 0$. In this case, the condition means $1$ is to the right of the interval of the roots. This is satisfied if and only if $p(1)\ge 0$ ($1$ is outside of the interval of the roots or is a root) and, using Vieta's relations, $1>\frac12(\xi_0+\xi_1)=1-m$.
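A brute-force cross-check of the two cases (my own sketch, not part of the hint): scan the negative integers $m$ and test $p(x)>0$ on a fine grid of $x>1$.

```python
def positive_for_x_gt_1(m, xs):
    return all(x * x + 2 * (m - 1) * x + m + 5 > 0 for x in xs)

xs = [1 + k / 1000 for k in range(1, 20001)]          # x in (1, 21]
print([m for m in range(-20, 0) if positive_for_x_gt_1(m, xs)])
# the list comes out empty, consistent with the case analysis above
# (both cases force m > -1; e.g. m = -1 already fails at x = 2, where p(x) = (x-2)^2)
```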
|
H: Convergence in probability of a root of an equation
Let $X_1, \dots, X_n$ be iid Uniform $(0, 1)$ random variables, and set $\theta_n$ to be the root of the equation
$$
\sum_{k=1}^n \theta^{X_k} = \sum_{k=1}^n X_k^2.
$$
Apparently, this $\theta_n$ converges in probability to a constant. Is that easy to see?
AI: First, we note that the equation
$$
\mathsf{E}\theta^X-\mathsf{E}X^2=0,\tag{1}\label{1}
$$
where $X\sim U[0,1]$, has the solution $\theta_0=\exp(-W(-3/e^3)-3)$ (here $W$ denotes the Lambert W function). Replacing expectations on the LHS of Equation $\eqref{1}$ with sample averages yields its finite sample version:
$$
\Psi_n(\theta):=\frac{1}{n}\sum_{i=1}^n \left(\theta^{X_i}-X_i^2\right).
$$
One may refer to $\hat{\theta}_n$ that solves $\Psi_n(\theta)\approx 0$ as an estimator of $\theta_0$. To show consistency (i.e. $\hat{\theta}_n\xrightarrow{p}\theta_0$) one needs to use the ULLN. However, since $\theta\mapsto\Psi_n(\theta)$ is continuous and nondecreasing, we may resort to Lemma 5.10 in van der Vaart's "Asymptotic Statistics":
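A quick Monte Carlo sanity check (my own sketch): solve the sample equation for several $n$ and compare the root with $\theta_0=\exp(-W(-3/e^3)-3)$.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import lambertw

theta0 = np.exp(-lambertw(-3 * np.exp(-3)).real - 3)   # about 0.0595

rng = np.random.default_rng(0)
for n in (10**2, 10**4, 10**6):
    x = rng.uniform(size=n)
    root = brentq(lambda t: np.sum(t ** x) - np.sum(x ** 2), 1e-12, 1 - 1e-12)
    print(n, root, theta0)       # the roots settle down near theta0 as n grows
```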
|
H: The Explanation of these steps
I was following a lecture on Tangent Spaces, where I found expressions such as:
$$(f\circ\gamma\circ\mu)'(0) = (f\circ\gamma)'(\mu(0)).\mu'$$
And in some other place, I find:
$$((f\circ x^{-1})\circ(x\circ\sigma))'(0) = (x\circ\sigma)^{i'}(0).(\partial_{i}(f\circ x^{-1}))(x(\sigma(0))) $$
Now, I am new to undergraduate analysis, and cannot understand how the derivatives of the above compositions are being computed. So can anyone please explain to me how these steps come about, or provide me with some materials to study?
Thanks in advance.
AI: The first is the simple chain rule applied to $f\circ\gamma$ and $\mu$.
The second looks like the same chain rule in several variables: rewrite it as
$$((f\circ x^{-1})\circ(x\circ\sigma))'(0) = \sum_{i}(\partial_{i}(f\circ x^{-1}))(x(\sigma(0)))(x\circ\sigma)^{i'}(0).$$
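If it helps to see the multivariable chain rule in a concrete case, here is a small sympy check with a toy $F$ (playing the role of $f\circ x^{-1}$) and a toy curve $c$ (playing the role of $x\circ\sigma$); the specific $F$ and $c$ are my own arbitrary choices:

```python
import sympy as sp

t, u, v = sp.symbols('t u v', real=True)
F = sp.sin(u) * v + u**2                 # any smooth F(u, v)
c = (sp.cos(t), t**3 + 2 * t)            # any smooth curve c(t) = (c^1(t), c^2(t))

lhs = sp.diff(F.subs({u: c[0], v: c[1]}), t)                 # (F o c)'(t)
rhs = sum(sp.diff(F, w).subs({u: c[0], v: c[1]}) * sp.diff(ci, t)
          for w, ci in zip((u, v), c))                       # sum_i d_iF(c(t)) (c^i)'(t)
print(sp.simplify(lhs - rhs))            # 0
```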
|
H: Interpretation of the convergence in the mean square sense
I'm new to the notion of convergence in the mean square (or convergence in $L^2$). So, I want to ask about the intuition/interpretation of an algorithm whose iterates converge in the mean square to a certain value, say I have the following:
$$ \underset{k \rightarrow \infty}{\lim} \mathbb{E}\left[\|x_k - x^*\|^2\right] = 0, \label{eq.1}\tag{1}$$
In the deterministic world (without the expectation), I could say that $x_k$ converges to $x^*$; but, at least to my understanding, in the probabilistic world convergence in $L^2$ does not imply convergence almost surely, and I fail to interpret Eq. (\ref{eq.1}).
AI: From the bias-variance decomposition of the mean squared error (that is, the $L^2$ norm), we have that convergence in $L^2$ is equivalent to the bias and the variance vanishing. Intuitively, this means that as $n\to\infty$, the distribution of $x_n$ will become more and more concentrated around $x^*$.
Something that may also be useful is that convergence in mean square implies convergence in probability.
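For completeness (this step is not spelled out above), the last remark is just Markov's inequality applied to the squared distance: for every $\varepsilon>0$,
$$ \Pr\left(\|x_k - x^*\| > \varepsilon\right) = \Pr\left(\|x_k - x^*\|^2 > \varepsilon^2\right) \le \frac{\mathbb{E}\left[\|x_k - x^*\|^2\right]}{\varepsilon^2} \longrightarrow 0 \quad (k\to\infty). $$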
|
H: Prove: if $\sum^\infty_{n=0}a_nx^n$ converges for every $x$, then $\sum^\infty_{n=0}a_n$ converges absolutely
Prove:
if $\sum^\infty_{n=0}a_nx^n$ converges for every $x$, then $\sum^\infty_{n=0}a_n$ converges absolutely.
I get why the statement is correct (because it means that the convergence of the series doesn't dependent on $x$), but can't find a formal way to prove it.
AI: It suffices to require that $\sum^\infty_{n=0}a_nx^n$ converges for one $x$ of absolute value larger than one. Then
$$
|a_n x^n| \to 0
$$
implies that
$$
|a_n| \le \frac{C}{|x|^n}
$$
for some $C > 0$. The conclusion follows because $\sum_{n=1}^\infty \frac{1}{r^n}$ is convergent for $r > 1$.
In the context of power series: If $\sum^\infty_{n=0}a_nx^n$ converges for all $x$ then its radius of convergence is infinity, and that implies absolute convergence for all $x$, in particular for $x=1$.
|
H: equivalence class and partially ordered sets question
The question is:
1. We have 4 sets $A,B,C,D$; show that if $A\Delta B\subseteq D$ and $B\Delta C\subseteq D$ then $A\Delta C\subseteq D$.
2. Given the power set $P(\{1,2,3\})$ and two relations on it: for $A,B\in P(\{1,2,3\})$, $ARB$ if and only if $A\Delta B\subseteq \{1,2\}$, and $ASB$ if and only if $A\Delta\{1,2\}\subset B\Delta\{1,2\}$. Which one of the relations is an equivalence relation? Prove it and find its classes.
Which one is a partial order / a total order?
My attempts:
I will start with the second part. $R$ is the equivalence relation, because it is reflexive: $A\Delta A\subseteq\emptyset$ and $\emptyset\subseteq\{1,2\}$ (not sure about this). $R$ is symmetric because $A\Delta B=B\Delta A$, and it is transitive because if $A\Delta B$ and $B\Delta Z$
then we can do $(A\Delta B)\Delta(B\Delta Z)=A\Delta Z$. As for the equivalence classes, what I got was $\emptyset$, $\{1\}$, $\{2\}$ and $N/\{1\},\{2\}$ (I'm not sure about this one).
For the first part I didn't know how to do it; it feels like a transitivity statement, but I could not figure out how to prove it. For the third part: anti-reflexive would mean $A\Delta\{1,2\}\subset A\Delta\{1,2\}$, whereas it should be $A\Delta\{1,2\}\subseteq A\Delta\{1,2\}$ if it is reflexive; transitivity is the same as in the second part; and we also need to check whether elements are comparable, but I did not know how. How can I check whether I can compare (not sure if that is the right word in English; I searched but couldn't find it)?
Thanks for any help and explanation! Sorry if there are translation mistakes; I will edit if there are.
AI: For the relation $\mathcal{R}$, there are two equivalence classes here: the class of subsets that contain 3 and the class of subsets that don't contain 3. Indeed, $A\Delta B\subset\{1, 2\} \Longleftrightarrow 3\notin A\Delta B$. Note that an equivalence class is never empty. So, here the classes are
\begin{equation}
Cl_\emptyset =\{\emptyset, \{1\},\{2\}, \{1, 2\} \}
\end{equation}
and
\begin{equation}
Cl_{\{3\}} = \{\{3\}, \{1, 3\},\{2, 3\}, \{1, 2, 3\} \}
\end{equation}
For the relation $\mathcal{S}$, it is only a partial order because
\begin{equation}
\{1\}\Delta \{1, 2\} = \{2\}\quad \text{and} \quad \{1,2,3\}\Delta \{1, 2\} = \{3\}
\end{equation}
so that $\{1\}$ and $\{1, 2, 3\}$ are not comparable.
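Since the ground set is tiny, everything above can also be confirmed by brute force; here is a short Python check of my own (not part of the answer):

```python
from itertools import combinations

U = {1, 2, 3}
P = [set(c) for r in range(4) for c in combinations(U, r)]
R = lambda A, B: (A ^ B) <= {1, 2}               # A R B  iff  A Δ B ⊆ {1,2}
S = lambda A, B: (A ^ {1, 2}) < (B ^ {1, 2})     # A S B  iff  A Δ {1,2} ⊊ B Δ {1,2}

# R is reflexive, symmetric and transitive:
print(all(R(A, A) for A in P),
      all(not R(A, B) or R(B, A) for A in P for B in P),
      all(not (R(A, B) and R(B, C)) or R(A, C) for A in P for B in P for C in P))

# its two classes: the sets without 3 and the sets containing 3
print([[A for A in P if R(A, rep)] for rep in (set(), {3})])

# S is not total: {1} and {1,2,3} are incomparable
print(S({1}, {1, 2, 3}), S({1, 2, 3}, {1}))
```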
|
H: Given $\cos(a) +\cos(b) = 1$, prove that $1 - s^2 - t^2 - 3s^2t^2 = 0$, where $s = \tan(a/2)$ and $t = \tan(b/2)$
Given $\cos(a) + \cos(b) = 1$, prove that $1 - s^2 - t^2 - 3s^2t^2 = 0$, where $s = \tan(a/2)$ and $t = \tan(b/2)$.
I have tried using the identity $\cos(a) = \frac{1-\tan^2(a/2)}{1+\tan^2(a/2)}$, but manipulating this seems to have got me nowhere.
AI: With $s=\tan(a/2)$ and $t=\tan(b/2)$, the half-angle identity $\cos\theta=\frac{1-\tan^2(\theta/2)}{1+\tan^2(\theta/2)}$ turns $\cos(a)+\cos(b)=1$ into
$\frac{1-t^2}{1+t^2} +\frac{1-s^2}{1+s^2}=1.$
Multiplying through by $(1+s^2)(1+t^2)$ gives
$1+s^2-t^2-s^2t^2+1+t^2-s^2-s^2t^2=1+s^2+t^2+s^2t^2$
$\Rightarrow 1-s^2-t^2-3s^2t^2=0$
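A quick numerical spot-check of the identity (my own addition):

```python
import math, random

random.seed(0)
for _ in range(3):
    a = random.uniform(0.1, 1.0)
    b = math.acos(1 - math.cos(a))        # enforce cos(a) + cos(b) = 1
    s, t = math.tan(a / 2), math.tan(b / 2)
    print(1 - s**2 - t**2 - 3 * s**2 * t**2)   # ~ 0 up to rounding error
```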
|
H: Prove inequality $\tan(x) \arctan(x) \geqslant x^2$
Prove that for $x\in \left( - \frac{\pi} {2},\,\frac{\pi}{2}\right)$ the following inequality holds
$$\tan(x) \arctan(x) \geqslant x^2.$$
I have tried proving that the function $f(x) := \tan(x) \arctan(x) - x^2$ is $\geqslant 0$ by using derivatives, but it gets really messy and I couldn't make it to the end. I also tried using the inequality $\tan(x) \geqslant x$ on the positive part of the interval, but this estimate is too weak and gives the opposite bound, i.e. $x\arctan(x) \leqslant x^2$.
AI: It's enough to prove this for $0<x<\pi/2$. Let $f(x)=(\tan x)/x$. Then $f$ is increasing
on $(0,\pi/2)$. To prove this, note for instance that $f(x)$ has nonnegative Maclaurin coefficients.
Let $x\in(0,\pi/2)$, and let $y=\arctan x$. Then $x=\tan y\ge y$ as $f(y)=(\tan y)/y\ge1$.
Therefore $f(y)\le f(x)$, that is
$$\frac{\tan y}y\le\frac{\tan x}x$$
or
$$\frac{x}{\arctan x}\le\frac{\tan x}x$$
etc.
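A quick numerical spot-check of the inequality (my own addition):

```python
import math

for x in (0.1, 0.5, 1.0, 1.4):
    print(x, math.tan(x) * math.atan(x) - x * x)   # all values are >= 0
```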
|
H: Evaluate: $\int_0^{\infty}\frac{\ln x}{x^2+bx+c^2}\,dx.$
Prove that:$$\int_0^{\infty}\frac{\ln x}{x^2+bx+c^2}\,dx=\frac{2\ln c}{\sqrt{4c^2-b^2}}\cot^{-1}\left(\frac{b}{\sqrt{4c^2-b^2}}\right),$$ where $4c^2-b^2>0, c>0.$
We have:
\begin{align}
4(x^2+bx+c^2)&=(2x+b)^2-\left(\underbrace{\sqrt{4c^2-b^2}}_{=k\text{ (say)}}\right)^2\\\\
&=(2x+b+k)(2x+b-k).
\end{align}
Thus,
\begin{align}
I&=4\int_0^{\infty}\frac{\ln x}{(2x+b+k)(2x+b-k)}\,dx\\\\
&=\frac4{2k}\int_0^{\infty}\left(\frac1{2x+b-k}-\frac1{2x+b+k}\right)\ln x\, dx\\\\
&=\frac2k\left[\underbrace{\int_0^{\infty}\frac{\ln x}{2x+b-k}\,dx}_{=I_1}-\underbrace{\int_0^{\infty}\frac{\ln x}{2x+b+k}\,dx}_{=I_2}\right]
\end{align}
For $I_1:$ Letting $2x+b-k=t$ yields
\begin{align}
I_1&=\int_{b-k}^{\infty} \frac{\ln(t-b+k)-\ln 2}t\,dt\\\\
&=\int_{b-k}^{\infty}\frac{\ln(t-b+k)}t\, dt-\ln 2\int_{b-k}^{\infty}\frac{dt}t
\end{align}
Now $\ln(t-b+k)=\ln(k-b)+\ln\left(\frac{t}{k-b}+1\right).$ So,$$I_1=\ln(k-b)\int_{b-k}^{\infty}\frac{dt}t+\int_{b-k}^{\infty} \frac{\ln\left(\frac t{k-b}+1\right)}t\,dt-\ln 2\int_{b-k}^{\infty}\frac{dt}t.$$
If we let: $\frac t{k-b}=-u.$ Then $$I_1=\ln\left(\frac{k-b}2\right)\int_{b-k}^{\infty} \frac{dt}t+\int_1^{\infty}\frac{\ln(1-u)}u\,du.$$
It's getting messier and messier. Also it seems that $I_1$ and hence $I$ diverges. Please tell me, am I heading towards something useful? It has become difficult for me to handle this integral. Please show me a proper way.
AI: Hint: Substitute $xy=c^2$, i.e. $x=\frac{c^2}{y}$; then the integral becomes $$ I= \int_0^{\infty}\frac{\ln(c^2)-\ln y}{y^2+by+c^2}\,dy,$$ and adding this to the original integral gives $$2I= \int_0^{\infty}\frac{\ln(c^2)}{y^2+by+c^2}\,dy.$$ Since $$ y^2+by+c^2= \left(y+\frac{b}{2}\right)^2+\left(c^2-\frac{b^2}{4}\right),$$ using the elementary antiderivative $$\int\frac{dx}{x^2+a^2}=\frac{1}{a}\tan^{-1}\left(\frac{x}{a}\right)+C$$ we obtain the desired closed form.
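A numerical spot-check of the closed form for one choice of $b$ and $c$ (my own addition):

```python
import numpy as np
from scipy.integrate import quad

b, c = 1.0, 2.0                                   # 4c^2 - b^2 > 0, c > 0
k = np.sqrt(4 * c**2 - b**2)
integrand = lambda x: np.log(x) / (x**2 + b * x + c**2)
lhs = quad(integrand, 0, 1)[0] + quad(integrand, 1, np.inf)[0]
rhs = 2 * np.log(c) / k * (np.pi / 2 - np.arctan(b / k))   # arccot(b/k) = pi/2 - arctan(b/k)
print(lhs, rhs)                                   # the two agree
```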
|
H: Compute the $\int \sqrt[3]{1+\sin x}\ dx$ by help of Taylor series.
I want to compute the integral $\int \sqrt[3]{1+\sin x}\ dx$ via Taylor series.
My idea is : find Taylor expansion around zero of the function $f(x)= \sqrt[3]{1+\sin x}=\displaystyle\sum_{n=0}^{\infty} c_nx^n$, and after to integrate the Taylor expansion. Then $\int \sqrt[3]{1+\sin x}\ dx=\displaystyle\sum_{n=0}^{\infty}\frac{c_n}{n+1} x^{n+1}.$
First question: Am I right?
Second question: Is it difficult to find the Taylor expansion? I believe that finding $f^{(n)}(0)$ for all $n$ is difficult, in the sense of computing the derivatives.
AI: It's not difficult $-$ you don't need to compute the derivatives $-$ but it does get messy. Suppose we want the Taylor expansion up to $x^4$. First, the Taylor expansion of $\sqrt[3]{1+u}$ is
$$\sqrt[3]{1+u}=1+\frac13u-\frac19u^2+\frac{5}{81}u^3-\frac{10}{243}u^4+O(u^5)$$
Second, the Taylor expansion of $\sin x$ is
$$\sin x=x-\frac16x^3+O(x^5)$$
From this you get the powers of $\sin x$:
$$\sin^2 x=x^2-\frac13x^4+O(x^5)$$
$$\sin^3 x=x^3+O(x^5)$$
$$\sin^4 x=x^4+O(x^5)$$
And now you can substitute these expansions for $u, u^2,u^3,$ and $u^4$ to get your answer.
As you will appreciate, this becomes more and more complicated as you need more and more terms. But for small $x$, it will converge rapidly.
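For comparison, sympy reproduces the composed expansion directly (a sketch of my own):

```python
import sympy as sp

x = sp.symbols('x')
f = (1 + sp.sin(x)) ** sp.Rational(1, 3)
series = sp.series(f, x, 0, 5)
print(series)                                # 1 + x/3 - x**2/9 + ... + O(x**5)
print(sp.integrate(series.removeO(), x))     # term-by-term antiderivative
```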
|
H: Matrix calculation / operation
Assume there is a vector,
$$
A = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{bmatrix} \in \mathbb R^{mn},
$$
and each subvector $a_i$, $i\in \{1,2,\cdots,m\}$, is an $n$-by-$1$ vector.
The output is
$$
B = \begin{bmatrix} a_1^T & 0 & \cdots & 0 & 0 \\ 0 & a_2^T & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & a_{m-1}^T & 0 \\ 0 & 0 & 0 & 0 & a_m^T \end{bmatrix}\in \mathbb R^{m\times mn}.
$$
How to use matrix operations to obtain $B$ from $A$?
(Here, we exclude the method of defining a mapping from $A$ to $B$. )
AI: The transformation can be described by
$$B = (I_m\otimes{\tt1}_n^T)\cdot\operatorname{Diag}(A)$$
where $I_m$ is the $m\times m$ identity matrix, ${\tt1}_n$ is the $n\times 1$ all-ones vector, $\otimes$ denotes the Kronecker product, and the Diag operation creates a diagonal matrix from its vector argument.
If Diag is deemed an unacceptable operation,
it can be replaced with a Hadamard $(\odot)$ product.
$${\rm Diag}(A) = I_{mn}\odot (A{\tt1}_{mn}^T)$$
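A quick numpy check of the Kronecker formula (my own sketch):

```python
import numpy as np

m, n = 3, 2
A = np.arange(1, m * n + 1, dtype=float)     # the stacked vector [a_1; ...; a_m]
B = np.kron(np.eye(m), np.ones((1, n))) @ np.diag(A)
print(B)
# row i contains a_i^T in block i and zeros elsewhere, as required
```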
|
H: Proving a vector space has an uncountable basis
I am required to prove that a vector space $V=C[0,1]$ (the space of continuous functions over complex field) has an uncountable basis. The approach I took was to show that there is a subspace $W$ of $V$ with an uncountable basis and concluded that the vector space $V$ itself must have an uncountable basis.
Is this approach correct? I'm just unsure about the last step, if you have a subspace with an uncountable basis does it imply that the space itself must have an uncountable basis?
AI: The answer to your general question is yes. If $W$ is a subspace of $V$ and $W$ has an uncountable basis $B\subseteq W$ then $B$ is still linearly independent as a subset of $V$. So $B$ can be extended to a basis for $V$ which means $V$ also has an uncountable basis.
|
H: Positive semi-definiteness of the adjoint matrix
I am studying the conditions of positive semi-definiteness of a $(n+1)\times(n+1)$ symmetric matrix $\mathbf{M}$ built in the following way:
$$
\mathbf{M}=\begin{pmatrix}
\mathbf{A} & \mathbf{b} \\
\mathbf{b}^T & c
\end{pmatrix}
$$
where $\mathbf{A}$ is a symmetric $n\times n$ matrix, $\mathbf{b}$ is an $n$-dimensional column vector and $c$ is a real number.
The first $n$ leading principal minors of $\mathbf{M}$ are the leading principal minors of $\mathbf{A}$, so $\mathbf{A}$ should be positive semi-definite.
The last condition is $\det\mathbf{M}=|\mathbf{M}|\geq0$. By a simple calculation, I obtained
$$
|\mathbf{M}|=c|\mathbf{A}|-\mathbf{b}^T\mathbf{A}^*\mathbf{b}\geq0
$$
where $\mathbf{A}^*$ is the adjoint matrix of $\mathbf{A}$, i.e. the transpose of the matrix of cofactors.
This condition can be written
$$
c|\mathbf{A}|-\mathbf{b}^T\mathbf{A}^*\mathbf{b}=
\begin{cases}
|\mathbf{A}|\left(c-\mathbf{b}^T\mathbf{A}^{-1}\mathbf{b}\right), & \text{if }|\mathbf{A}|>0 \\
-\mathbf{b}^T\mathbf{A}^*\mathbf{b}, & \text{if }|\mathbf{A}|=0
\end{cases}
$$
So, when $|\mathbf{A}|>0$ the condition simply becomes
$$
c\geq\mathbf{b}^T\mathbf{A}^{-1}\mathbf{b}\geq0,
$$
given that $\mathbf{A}^{-1}$ is positive definite.
When $|\mathbf{A}|=0$ the condition becomes
$$
\mathbf{b}^T\mathbf{A}^*\mathbf{b}\leq0,
$$
so I am interested to know if $\mathbf{A}^*$ is positive semi-definite when $\mathbf{A}$ is positive semi-definite.
In the case $|\mathbf{A}|>0$, using spectral decomposition
$$
\mathbf{A}=\sum_{i=1}^n\lambda_i\mathbf{e}_i\otimes\mathbf{e}_i,
$$
where $\lambda_i$ are the eigenvalues and $\mathbf{e}_i$ the unit eigenvectors, so we have
$$
\mathbf{A}^*=|\mathbf{A}|\mathbf{A}^{-1}=\left(\prod_{k=1}^n{\lambda}_k\right)\sum_{i=1}^n\frac{1}{\lambda_i}\mathbf{e}_i\otimes\mathbf{e}_i = \sum_{i=1}^n\left(\prod_{k=1,k\neq i}^n{\lambda}_k\right)\mathbf{e}_i\otimes\mathbf{e}_i,
$$
so $\mathbf{A}^*$ is positive definite when $\mathbf{A}$ is, given that its eigenvalues are expressed as the product of eigenvalues of $\mathbf{A}$, excluded one in turn.
I suspect that this last expression represents $\mathbf{A}^*$ also when $|\mathbf{A}|=0$, probably by considering a positive semi-definite matrix with vanishing determinant as the limit of a positive definite matrix when one or more eigenvalues tends to zero.
So my questions:
are my calculations correct?
is the last expression for $\mathbf{A}^*$ also valid when $|\mathbf{A}|=0$?
how can this be proved?
AI: Yes, your equations are correct. Yes, the last expression you wrote is valid when $|A| = 0$. Note in particular that $\mathbf A^* = 0$ whenever the kernel of $\mathbf A$ has dimension at least $2$.
For a quick proof, we could simply note that both sides of the equation
$$
\mathbf{A}^* = \sum_{i=1}^n\left(\prod_{k=1,k\neq i}^n{\lambda}_k\right)\mathbf{e}_i\otimes\mathbf{e}_i
$$
are continuous functions of the entries of $\mathbf A$. If the equation holds for all strictly positive definite $\mathbf A$, then it must hold for positive semidefinite $\mathbf A$ "by continuity". In particular, if we define $\mathbf A_{\epsilon} = \mathbf A + \epsilon \mathbf I$ and $\lambda_{k}^{\epsilon}$ to be the $k$th eigenvalue of $\mathbf A_{\epsilon}$, then we can say that for a positive semidefinite $\mathbf A$ we have
$$
\mathbf{A}^* =
\lim_{\epsilon \to 0^+}\mathbf{A}_{\epsilon}^*
=
\lim_{\epsilon \to 0^+}\sum_{i=1}^n\left(\prod_{k=1,k\neq i}^n{\lambda}_k^{\epsilon}\right)\mathbf{e}_i\otimes\mathbf{e}_i
=
\sum_{i=1}^n\left(\prod_{k=1,k\neq i}^n{\lambda}_k\right)\mathbf{e}_i\otimes\mathbf{e}_i.
$$
For a direct proof: we note that $\dim\ker \mathbf A \geq 2$ implies that $\mathbf A^* = 0$, which is positive semidefinite. For the case where $\dim\ker \mathbf A = 1$, we see that $\mathbf A$ is symmetric and $\mathbf A \mathbf A^* = 0$ implies that $\mathbf A^*$ has rank at most $1$, which means that $\mathbf A^*$ can be written in the form $\mathbf A^* = k \mathbf {xx}^T$ for some unit vector $\mathbf x$ and some $k \in \Bbb R$. We note that $k$ satisfies $\operatorname{tr}(\mathbf A^*) = k$.
With that, it suffices to note that
$$
\operatorname{tr}(\mathbf A^*) = -\frac{d}{dt}|_{t = 0} \det(\mathbf A - t\mathbf I) = -\frac{d}{dt}|_{t = 0} (\lambda_1 - t) \cdots (\lambda_n - t) = \sum_{i=1}^n\prod_{k\neq i}\lambda_k \geq 0.
$$
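A numerical sanity check of both representations (my own sketch, not part of the answer): build a random PSD matrix with a one-dimensional kernel, form its adjugate from cofactors, and compare with the eigenvalue formula.

```python
import numpy as np

def adjugate(M):
    n = len(M)
    C = np.zeros_like(M)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T                                   # transpose of the cofactor matrix

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
lam = np.array([0.0, 0.7, 1.3, 2.1, 3.4])        # PSD spectrum with one zero eigenvalue
A = Q @ np.diag(lam) @ Q.T

adjA = adjugate(A)
print(np.linalg.eigvalsh(adjA))                  # all >= 0 up to rounding: adj(A) is PSD
print(np.trace(adjA), np.prod(lam[1:]))          # the single nonzero eigenvalue of adj(A)
```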
|