H: How can I solve this definite integral: $\int_{0}^{a}\frac{x^4dx}{\sqrt{a^2-x^2}}$
Evaluate $$\int_{0}^{a}\dfrac{x^4dx}{\sqrt{a^2-x^2}}$$
I tried taking $t$ as
$$t = \sqrt{a^2-x^2}$$
Thus my final integral became
$$\int_{0}^{a}(a^2-t^2)^{3/2}dt$$
but I couldn't go any further in solving this integral.
I also tried by taking $t$ as
$$t = a\sin^{-1}{x}$$
But I don't know how to solve the resulting integrand.
Also, can the king's rule be applied here? If yes then how?
AI: With $x=a\sin t$, which isn't quite what you said you tried, the integral is$$\begin{align}a^4\int_0^{\pi/2}\sin^4tdt&=\frac14a^4\int_0^{\pi/2}(1-\cos2t)^2dt\\&=\frac14a^4\int_0^{\pi/2}(1-2\cos2t+\cos^22t)dt\\&=\frac18a^4\int_0^{\pi/2}(3-4\cos2t+\cos4t)dt\\&=\frac18a^4[3t-2\sin2t+\tfrac14\sin4t]_0^{\pi/2}\\&=\frac{3\pi}{16}a^4.\end{align}$$
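If you want to double-check the value, a short sympy computation confirms it:
```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
result = sp.integrate(x**4 / sp.sqrt(a**2 - x**2), (x, 0, a))
print(sp.simplify(result))   # expect 3*pi*a**4/16
```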
|
H: Is an open bounded subset of $\mathbb{R}^n$ a Banach space?
Let $\Omega$ be an open bounded subset of $\mathbb{R}^n$. Is $\Omega$ a Banach (sub)space?
AI: Every linear subspace of $\mathbb R^n$ is closed, and every nonzero subspace is unbounded. In particular, the only subspace that is open is $\mathbb R^n$ itself, so a bounded open set $\Omega$ is never a linear (Banach) subspace.
|
H: What is the remainder when $2019^{2019}-2019$ is divided by $2019^2+2020$
After some time I gave up and cheated using Wolfram Alpha and got the result $4076363$.
I played around with the general statement
What is the remainder when $x^x-x$ is divided by $x^2+x+1$ where $x$ is an integer.
After trying few values
I noticed that when $x$ is a multiple of $3$ the remainder is $x^2+2$. As $2019$ is a multiple of $3$, I tested $x=2019$ and, surprisingly, got the correct answer.
Let $x=3k$. After some modular arithmetic manipulation, this all boils down to proving
$(3k)^{3k} \equiv 1 \pmod{9k^2+3k+1}$ where $k = 0,1,2,\ldots$
Again after playing around I noticed that $ 3k|\phi(9k^2+3k+1)$. I don't know whether this would be helpful in proving. How can I progress from here? Please try to post an elementary solution as I am just a high school student.
AI: $2019$ is divisible by $3$, so $2019^{2019}-1$ is divisible by $2019^3-1$.
In turn, $2019^3-1$ is divisible by $2019^2+2019+1$.
Therefore, $2019^{2019}-1$ is divisible by $2019^2+2019+1$.
Therefore, $2019^{2019}-2019 \equiv 1-2019 = -2018 \pmod{2019^2+2019+1}$, so the remainder when $2019^{2019}-2019$ is divided by $2019^2+2019+1$
is $2019^2+2019+1-2018=2019^2+2$.
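You can also confirm the remainder directly in Python with the three-argument `pow`:
```python
m = 2019**2 + 2020                    # note: 2019**2 + 2020 == 2019**2 + 2019 + 1
r = (pow(2019, 2019, m) - 2019) % m   # remainder of 2019**2019 - 2019 mod m
print(r, r == 2019**2 + 2)            # 4076363 True
```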
|
H: Find numbers $A,B$ such that the function is differentiable at $x=0$
I have the following function:
And the statement says:
Find $A,B$ such that $f$ is differentiable at $x=0$
My attempt was:
$f$ will be differentiable at $0 \iff \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$ exists.
Solving the one-sided limits, I got:
$\lim_{h\to 0^+}\frac{f(0+h)-f(0)}{h} = \lim_{h \to 0^+} -A\frac{1-\cos(3h)}{h} +\lim_{h \to 0^+}3A\frac{\sin(h)}{h} + \lim_{h\to 0^+}\frac{Bh}{h} = 3A+B $
$\lim_{h\to 0^-} \frac{f(0+h)-f(0)}{h} = \lim_{h\to0} h^4+h = 0$
So, if $3A + B = 0$ holds, the function is differentiable at $x = 0$.
However, this doesn't work. For example, the values $A = 2, B = -3$ satisfy the condition but create a discontinuity at $x = 0$. This is weird to me: since the condition for differentiability is $3A + B = 0$, it should also guarantee continuity, because differentiability implies continuity.
So, what is wrong?
Thanks in advance.
AI: Note that $f(0)=A+B$. Therefore$$\lim_{h\to0^-}\frac{f(h)-f(0)}h=\lim_{h\to0^-}\frac{h^5+h^2+6-A-B}h$$and this limit exists if and only if $A+B=6$. So, the only solution of your problem is the only pair $(A,B)$ such that$$\left\{\begin{array}{l}A+B=6\\3A+B=0.\end{array}\right.$$Can you take it from here?
|
H: Small Confusion in definition of a limit of function
Consider the definition of a limit of function. Suppose that $E\subseteq \mathbb{R}$ and the function $f:E\to \mathbb{R}$ and $x_0$ is the limit point of $E$.
Definition: We say that $A=\lim \limits_{x\to x_0} f(x)$ iff $\forall \varepsilon>0$ $\exists \delta=\delta(\varepsilon)>0$ : $\forall x\in E$ with $0<|x-x_0|<\delta$ $\Rightarrow$ $|f(x)-A|<\varepsilon$.
We note that in this definition $x_0$ may not be an element of $E$.
But I have a question: Why do we care that $0<|x-x_0|<\delta$? I guess that even if I take $|x-x_0|<\delta$ it should be OK (in both cases when $x_0\in E$ or $x_0\notin E$) because I am restricting the inequality $|x-x_0|<\delta$ over all $x\in E$.
Can anyone answer am I right or not? Would be very grateful for detailed answer!
AI: Actually, when you define the limit at $x_0$ you want to exclude $x_0$ itself. This is the reason for the condition $|x-x_0|>0$. If you don't do so, then (when the function is defined at $x_0$) you would need continuity at $x_0$ for the limit to exist, which is not convenient in general.
Indeed, consider for instance $f(x) = 0$ for $x\neq 0$ and $f(0) = 1$. Then with your definition of limit the limit for $x\to 0$ of $f(x)$ would not exist.
Why is the limit not defined with your definition?
Let's consider the usual definition of limit. For any $x\neq 0$ you have $f(x) = 0$. In particular, you can say that $\lim_{x\to 0}f(x) = 0$, since for $0<|x|\leq \delta$ (whatever $\delta>0$) we have $|f(x)|=|0|=0<\epsilon$ (whatever $\epsilon>0$).
Now your definition requires that this is true even if $x=0$. But this is not the case since $|f(0)|=|1|>\epsilon$ if $\epsilon<1$.
So with your definition the function $f(x)$ does not admit a limit.
Now why should we want it to have a limit?
The point is that the concept of limit aims to measure the behaviour of a function in a neighbourhood of a point, as a separate thing from the value of the function at the point itself. Otherwise the definition would work for continuous functions only, and then, at each point where the function is defined, it would not really make sense to give such a complicated definition, since the limit would just be the value of the function at that point.
|
H: Epsilon delta definition with restricted epsilon
Here, I tried to prove:
$\lim\limits_{x \to 4} \sqrt{x} = 2$
$2 -\epsilon < \sqrt{x} < 2 + \epsilon$
Edit:
Mu Prime Math has told me that before squaring, both sides of the inequality must be non-negative: $2 - \epsilon \geq 0$
$2 \geq \epsilon$
$\epsilon>0$
$0 < \epsilon \leq 2$
End of edit
$(2 -\epsilon)^2 < x < (2 + \epsilon)^2 $
$4- 4\epsilon +\epsilon^2 < x < 4 + 4\epsilon + \epsilon^2$
$4- (4\epsilon -\epsilon^2) < x < 4 + (4\epsilon + \epsilon^2)$
$\delta \leq \text{min}\{4\epsilon + \epsilon^2,4\epsilon - \epsilon^2\} =4\epsilon - \epsilon^2$
$ \epsilon \ngeq 4 $
but the epsilon-delta definition requires there to be a $\delta$ for every $\epsilon$, so: how can you prove it for all $\epsilon$?
AI: Let $\epsilon\gt0$. Choose $\delta=2\epsilon$. Then
\begin{align}
0\lt|x-4|\lt\delta\implies|\sqrt{x}-2|&=\frac{|x-4|}{\sqrt{x}+2}\\
&\lt\frac{\delta}{\sqrt{x}+2}\\
&\le\frac{\delta}2\\
&=\epsilon\\
\end{align}
Hence $\lim_{x\to4}\sqrt{x}=2$ by definition.
|
H: $eSe$ in a finite semigroup
Where is the following argument going wrong?
Let $S$ be a finite semigroup. There exists $e\in S$ such that $ee=e$. The subsemigroup $eSe = \{ese \mid s\in S\}\subseteq S$ is a monoid with the identity $e$. The map $ese\mapsto s$ is an injection from $eSe \to S$. Therefore $eSe = S$. Thus, every finite semigroup is a monoid ?! What?!
AI: The reason is that the formula $ese\mapsto s$ does not always define a function: there might be two different elements $s$ and $s'$ such that $ese = es'e$, so the "map" is not well-defined, and in particular not an injection.
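A minimal concrete instance, sketched in Python: in the two-element left-zero semigroup (where $xy=x$), $e s e$ is the same element for both choices of $s$:
```python
# left-zero semigroup on {'a','b'}: x*y = x (associative, since (xy)z = x = x(yz))
mul = lambda x, y: x
e = 'a'                            # idempotent: mul(e, e) == e
for s in ('a', 'b'):
    print(s, mul(mul(e, s), e))    # both print 'a': e s e does not determine s
```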
|
H: How do we know that the P versus NP problem is an NP problem itself?
I have been doing some research on the P versus NP problem. On multiple occasions, I have seen people say that the problem itself is an NP problem. I have been curious about how we know this. If we know that the problem is NP, then has anyone come up with an algorithm that could be run on a nondeterministic Turing machine to solve the problem in polynomial time? Or is there some other reason that we know that the problem is NP?
Thanks for any replies in advance
AI: Determining for any statement if there is a proof with $n$ symbols or less is an $NP$ problem (i.e. the proof can be checked in polynomial time with respect to the length of the proof and the statement), that's probably the sense in which they meant that "P versus NP is itself NP". However, it does not really make sense to assign a complexity class to proving any particular statement (such as $P\neq NP$), as that technically takes constant time.
|
H: How does Lebesgue integration solve the problem that changing the order of integration can change the value of the integral?
Our professor started a course in measure theory by stating the problems of Riemann integration. One of the problems he/she stated is the following double integration:
$\int_{0}^{1}\int_{0}^{1} \frac{x^2 - y^2}{(x^2 + y^2)^2} dy dx = \frac{\pi}{4}$ but $\int_{0}^{1}\int_{0}^{1} \frac{x^2 - y^2}{(x^2 + y^2)^2} dx dy = \frac{-\pi}{4}.$
My question is:
I have studied Lebesgue integration but it is still not clear to me how Lebesgue integration solves the problem that changing the order of integration can change the value of the integral. Is it by Fubini? If so, what is the solution?
Could anyone explain this to me, please?
AI: If $f:[0,1]^2 \to \mathbb{R}$ were Lebesgue integrable, then we would have $\int_{[0,1]^2} |f| < \infty$ and Fubini's theorem would guarantee that the iterated integrals are equal.
However, in this case, consider the quarter disc
$$\{(r\cos\theta,\, r\sin\theta): 0\leqslant r \leqslant 1,\ 0 \leqslant \theta \leqslant \pi/2\} \subset [0,1]^2.$$
Passing to polar coordinates, we see that
$$\int_{0}^{1}\int_{0}^{1} \frac{|x^2 - y^2|}{(x^2 + y^2)^2} \,dy\, dx > \int_0^{\pi/2}\int_0^1\frac{|r^2\cos^2 \theta - r^2\sin^2 \theta|}{(r^2)^2}\, r \, dr \, d\theta \\= \int_0^1 \frac{dr}{r}\int_0^{\pi/2}|\cos^2 \theta - \sin^2 \theta| \, d\theta = \infty$$
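Both iterated integrals can also be checked symbolically; here is a small sympy sketch (the inner $y$-integral comes out to $\frac{1}{1+x^2}$, which explains the $\pi/4$):
```python
import sympy as sp

x, y = sp.symbols('x y')
f = (x**2 - y**2) / (x**2 + y**2)**2
I_yx = sp.integrate(f, (y, 0, 1), (x, 0, 1))   # integrate in y first, then x
I_xy = sp.integrate(f, (x, 0, 1), (y, 0, 1))   # integrate in x first, then y
print(I_yx, I_xy)                              # pi/4 -pi/4
```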
|
H: What do the open sets in the Urysohn Metrization Theorem look like?
I am following Munkres' Topology and I am a bit confused about the Urysohn metrization theorem construction (Theorem 34.1). The proof goes as follows:
Show that there is a countable collection of continuous functions $f_n : X \to [0,1]$ having the property that given any point $x_0 \in X$ and any neighborhood $U$ of $x_o$, there exists an index $n$ such that $f_n$ is positive at $x_0$ and vanishes outside $U$. (This is done using Urysohn's lemma and a countable basis)
Taking the functions from step one the map $F: X \to \mathbb{R}^\omega $ is an imbedding ($\mathbb{R}^\omega $ is in the product topology), where $F(x)=\langle f_1(x), f_2(x),...\rangle$.
While I see that each step is correct, it seems to me that the function $F$ does not carry open sets to open sets. For example, take $X$ to be $\mathbb{R}$ in its usual topology, with countable basis the $\epsilon$-balls of rational radius $\epsilon = q\in \mathbb Q$, and $U=(0,1)$; then $F(U)$ must have an infinite number of coordinate projections equal to $\{1\}$, hence not open:
For there are an infinite number of base elements in which $U$ is strictly contained, like $(-1,2), (-2,3)$, etc.; and for each of these open sets $W$ there is a function $f_i$ that is $1$ on $U$ and vanishes outside of $W$. So there are infinitely many functions that map $U$ to $1$, and so $\pi_i (F(U))$ is $\{1\}$ for an infinite number of $i$'s and hence is not open ($\pi_i$ here is just the projection onto the $i$th coordinate).
There must be something wrong with this argument but I can't find what, what is it?
*Edit: Explaining my example better.
Take $X = \mathbb R$. Give it as a countable basis all the sets all the open sets with rational endpoints.
For each pair of pairs of rationals $(a,b)$ and $(c,d)$ such that $(a,b)\subset (c,d)$ by Urysohn's lemma there is a function $g_{ab,cd}$ such that $g([a,b])=\{1\}$ and $g\left((-\infty, c] \cup [d, \infty)\right)=\{0\}$. Map each pair of pairs of rationals to a natural number, and applying that function to the indices of the $g_{ab,cd}$ gives us the required $f_n$.
Construct $F(x)=\langle f_1(x), f_2(x),...\rangle$.
My question is what $F((0,1))$ looks like here. It seems to me that there are infinite $f_i$ such that $f_i((0,1))=1$. For example, $g_{(0,1),(-1,2)}$ has this property. Because by point 1. $g_{(0,1),(-1,2)}$ is $1$ in the closed set $[0,1]$. Similarly $g_{(0,1),(-2,2)}$, and in general $g_{(0,1),(-i,i)}$ for $i \geq 2$. So it seems to me that $F((0,1))$ is not open.
AI: I’m afraid that I don’t understand your argument. (By the way, the function $f_n$ defined from some particular $B$ in a countable base is not necessarily $1$ at every point of $B$; in fact, this is possible only if $B$ is clopen.)
Suppose that $F(x)\in F[U]$ for some open $U$ in $X$. Then $x\in U$, and there is an $n_x$ such that $f_{n_x}(x)>0$ and $f_{n_x}[X\setminus U]=\{0\}$. Let $B_x=\{\langle y_n:n\in\omega\rangle\in\Bbb R^\omega:y_{n_x}>0\}$; this is a basic open set in $\Bbb R^\omega$, and $F(x)\in B_x\cap F[X]\subseteq F[U]$. It follows that
$$F[U]=\{F(x):x\in U\}\subseteq\bigcup_{x\in U}(B_x\cap F[X])\subseteq F[U]$$
and hence that
$$F[U]=F[X]\cap\bigcup_{x\in U}B_x\;,$$
which is open in $F[X]$.
|
H: What is the Solution to this sum $\sum \limits_{n=1}^{\infty}(1-(-1)^{\frac{n(n+1)}{2}})(\frac{1}{2})^n$
What is the value of this series $\sum \limits_{n=1}^{\infty}(1-(-1)^{\frac{n(n+1)}{2}})(\frac{1}{2})^n$?
The only solid thing I have so far is
$\sum \limits_{n=1}^{\infty}(1-(-1)^{\frac{n(n+1)}{2}})(\frac{1}{2})^n$ = $\sum \limits_{n=1}^{\infty}(1-i^{n^2+n})(\frac{1}{2})^n$
and the right part reminds me of the geometric series.
AI: Note that$$(1-(-1)^{\frac{n(n+1)}{2}})=\begin{cases}0 & \text{ if } n \equiv 0,3 \pmod{4} \\ 2 & \text{ if } n \equiv 1,2 \pmod{4}\end{cases}$$
Thus
\begin{align*}
\sum_{n=1}^{\infty}(1-(-1)^{\frac{n(n+1)}{2}})\left(\frac{1}{2}\right)^n&=\sum_{\substack{n=1 \\{\small n \equiv 0 \pmod{4}}}}^{\infty}0+\sum_{\substack{n=1 \\{\small n \equiv 1 \pmod{4}}}}^{\infty}\left(1-(-1)^{\frac{n(n+1)}{2}}\right)\left(\frac{1}{2}\right)^n+\sum_{\substack{n=1 \\{\small n \equiv 2 \pmod{4}}}}^{\infty}\left(1-(-1)^{\frac{n(n+1)}{2}}\right)\left(\frac{1}{2}\right)^n+\sum_{\substack{n=1 \\{\small n \equiv 3 \pmod{4}}}}^{\infty}0\\
&= \sum_{\substack{n=1 \\{\small n \equiv 1 \pmod{4}}}}^{\infty}2 \left(\frac{1}{2}\right)^n+\sum_{\substack{n=1 \\{\small n \equiv 2 \pmod{4}}}}^{\infty}2 \left(\frac{1}{2}\right)^n\\
&= 2\left(\sum_{\substack{n=1 \\{\small n \equiv 1 \pmod{4}}}}^{\infty}\frac{1}{2^n}+\sum_{\substack{n=1 \\{\small n \equiv 2 \pmod{4}}}}^{\infty} \frac{1}{2^n}\right)\\
\end{align*}
Now we just have to focus on these geometric series. For example,
\begin{align*}
\sum_{\substack{n=1 \\{\small n \equiv 1 \pmod{4}}}}^{\infty}\frac{1}{2^n}&=\frac{1}{2}+\frac{1}{2^5}+\frac{1}{2^9}+\dotsb\\
&=\frac{\frac{1}{2}}{1-\frac{1}{2^4}}\\
&=\frac{8}{15}.
\end{align*}
Hopefully you can complete it from here.
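For reference, the same computation gives $\frac{4}{15}$ for the second series, so the total would be $2\left(\frac{8}{15}+\frac{4}{15}\right)=\frac{8}{5}$; a quick numerical check of the partial sums in plain Python agrees:
```python
s = sum((1 - (-1)**(n*(n+1)//2)) * 0.5**n for n in range(1, 60))
print(s)   # 1.5999999... which is ~ 8/5
```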
|
H: If an abelian group has subgroups of relatively prime orders $r$ and $s$ (which are cyclic), is there a subgroup of order $rs$?
Task is:
Let $G$ be an abelian group and let $H$ and $K$ be finite cyclic subgroups with $|H| = r$ and $|K| = s$.
Show that if $r$ and $s$ are relatively prime then $G$ contains a cyclic subgroup of order $rs$.
I'm thinking about how to solve this.
My intuition is that because the orders are coprime and the groups are cyclic, the generators must be $r$ and $s$, by Lagrange's theorem. That would mean that the elements of the groups are distinct and since the group is abelian, we can simply count the pairs, giving us a group of $rs$ order.
Is this correct?
AI: generators must be $r$ and $s$
The above statement does not make much sense since $r$ and $s$ are not the group elements. You mean to say "generators must be $x$ and $y$ where $x$ and $y$ are the elements of order $r$ and $s$."
Your idea is correct. You would, however, need to prove that $\langle x, y\rangle$ does indeed have order $rs$. To do this, you would need to show that if
$$x^a = y^b$$
for $0 \le a < r$ and $0 \le b < s$, then it is forced that $a = 0 = b$.
This will then tell you that the set $\{x^ay^b \mid 0 \le a < r,\;0 \le b < s\}$ has cardinality no less than $rs$. (Why?)
Alternately, you could try to show that the element $xy$ itself has order $rs$. (The proof will be almost identical to the above one.) Then, you can simply consider the subgroup $\langle xy\rangle$.
EDIT: Details on how to prove that $a = 0 = b$ claim.
We can note that $x^a \in H$ and $y^b \in K$. Since the two elements are assumed to be equal, we see that $x^a \in H \cap K$.
However, note that $H \cap K$ is a subgroup of $H$ and of $K$. (Why? Show that the intersection of subgroups is again a subgroup.)
Thus, the order of $H \cap K$ must divide that of $H$ and $K$. This gives us that $|H\cap K| = 1$ and thus, its only element is the identity. Thus, $x^a = 1 = y^b$. Conclude.
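A tiny numeric illustration in the additive group $\mathbb Z_{35}$ (using that the order of $g$ in $\mathbb Z_m$ is $m/\gcd(g,m)$): elements of coprime orders $7$ and $5$ sum to an element of order $35$:
```python
from math import gcd

m = 35                                           # the additive group Z_35
x, y = 5, 7                                      # orders r = 7 and s = 5, coprime
order = lambda g: m // gcd(g, m)                 # order of g in Z_m
print(order(x), order(y), order((x + y) % m))    # 7 5 35 = 7*5
```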
|
H: Implicit differential of diameter
In my physics course there is a problem where a sphere of volume $V$ is filled with a gas. The sphere is released in a liquid, and the amount of gas in this volume $V$ decreases because of concentration differences. Evaluating a time-dependent mass balance, the following differential equation is to be solved for the diameter $D(t)$, where $\alpha$ is a physical constant.
Boundary conditions are: $D(t=0)=D_{0}$
$$
\frac{d}{dt}D^3(t)=(-24 \pi\alpha) D(t)
$$
I would like to have some help evaluating the $\frac{d}{dt}D^3(t)$ term...
AI: We can simplify the above equation as:
$$\frac{\mathrm{d}(D(t))^3}{\mathrm{d}t}=3(D(t))^2\frac{\mathrm{d}D(t)}{\mathrm{d}t}=-24\pi\alpha D(t)$$
Using the chain rule of differentiation.
Now let's simplify it a little more so that it looks like a neat Differential equation:
$$D(t)\;{\mathrm{d}D(t)}=-8\pi\alpha \mathrm{d}t$$
This should be simple to integrate with the boundary conditions given. Can you proceed from here?
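If it helps, a sympy sketch of the remaining integration (assuming `dsolve` selects the positive branch from the initial condition $D(0)=D_0$); the expected closed form is $D(t)=\sqrt{D_0^2-16\pi\alpha t}$:
```python
import sympy as sp

t, alpha, D0 = sp.symbols('t alpha D_0', positive=True)
D = sp.Function('D')

ode = sp.Eq(D(t) * D(t).diff(t), -8 * sp.pi * alpha)   # the separated form above
sol = sp.dsolve(ode, D(t), ics={D(0): D0})
print(sol)   # expect D(t) = sqrt(D_0**2 - 16*pi*alpha*t)
```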
|
H: Galois correspondence of subgroups of $D_4$ with subfields of $\mathbb Q (\sqrt[4]{2},i)$
The Galois group of $\mathbb Q (\sqrt[4]{2},i)$ over $\mathbb Q$ is the dihedral group $D_4 = \{id, \sigma, \sigma^2, \sigma^3, \tau, \sigma\tau, \sigma^2\tau, \sigma^3\tau\}$.
Denoting $\sqrt[4]{2}$ by $\theta$, the action of the elements is $\sigma(i) = i$, $\sigma(\theta) = i\theta$, $\tau(i) = -i$, $\tau(\theta) = \theta$.
Then the following are the subgroups with corresponding fixed fields that I have been able to conclude are correctly associated:
$H_0$ = {id} with $\mathbb Q (\theta,i)$
$H_8$ = $D_4$ with $\mathbb Q$
$H_1$ = {id, $\tau$} with $\mathbb Q(\theta)$
$H_5$ = {id,$\tau, \sigma^2, \sigma^2\tau$ } with $\mathbb Q(\theta^2)$
$H_7$ = {id, $\sigma, \sigma^2, \sigma^3$} with $\mathbb Q(i)$
Further I think these two are also correctly associated:
$H_6$ = {id, $ \sigma\tau, \sigma^2, \sigma^3\tau $ } with $\mathbb Q(i\theta^2)$
$H_2$ = {id, $\sigma^2\tau $} with $\mathbb Q(i\theta)$
Are these two also correct?
Assuming the above are correct, it still leaves me to find the corresponding fixed fields of these two subgroups:
$H_3$ = {id, $\sigma\tau$}
$H_4$ = {id, $\sigma^3\tau$}
what will be the corresponding fixed fields?
I thought the two missing subfields are $\mathbb Q(\theta^3)$ and $\mathbb Q(i\theta^3)$ but they don't seem to be fixed under $H_3$ or $H_4$
AI: As $\Bbb Q(\theta^3)=\Bbb Q(\theta)$, you already have that, as the fixed
field of $H_1$.
To find elements fixed by $H_3$ let's look at elements of the form $a+\sigma\tau(a)$,
which will automatically be in the fixed field.
We compute
$$\sigma\tau(\theta)=\sigma(\tau(\theta))=\sigma(\theta)=i\theta.$$
Therefore
$$\beta=\theta+i\theta=(1+i)\theta$$ lies in the fixed field $K_3$ of $H_3$.
We find
$$\beta^2=2i\theta^2$$
and
$$\beta^4=-8.$$
Then $\beta$ has degree $4$ over $\Bbb Q$ and $K_3=\Bbb Q(\beta)=\Bbb Q((1+i)\theta)$.
The $H_4$ case is very similar.
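If you want to double-check the degree claim, sympy can compute the minimal polynomial of $\beta=(1+i)\sqrt[4]{2}$:
```python
import sympy as sp

x = sp.symbols('x')
beta = (1 + sp.I) * 2**sp.Rational(1, 4)
print(sp.minimal_polynomial(beta, x))   # x**4 + 8, so [Q(beta):Q] = 4
```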
|
H: Prove that if $A=A^2$ and $0\ne \bar{v}\in \text{Col}\,A$, then $\bar{v}$ is an eigenvector corresponding to the eigenvalue $1$
Suppose $A=A^2$ for $A\in \Bbb{M}_{n\times n}(\Bbb{R})$ and $0\ne \bar{v}\in \text{Col}(A)$. Then $\bar{v}$ is an eigenvector corresponding to the eigenvalue $1$.
AI: So, $v=Ax$ for some $x\in\Bbb R^n$. Now, $Av=A(Ax)=A^2x=Ax=v$. Since $v\not=0$, we can say $v$ is an eigenvector of $A$ corresponding to the eigenvalue $1$.
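The argument is easy to test numerically with any idempotent matrix, for instance an orthogonal projection (a small numpy sketch):
```python
import numpy as np

A = np.array([[0.5, 0.5],
              [0.5, 0.5]])             # A @ A == A: projection onto span{(1,1)}
v = A @ np.array([3.0, -1.0])          # a nonzero vector in Col(A)
print(np.allclose(A @ v, v))           # True: Av = v, so eigenvalue 1
```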
|
H: What does $[5.9]$ mean?
I came across this notation in the CAA module 0 sample questions. See photo:
It looks like it means a lower bound, but I'm not sure. I can't find any info on Google either. Has anyone come across this notation before in this context? If so, what does it mean?
AI: It looks like the greatest integer (also known as floor) function.
It can also be denoted as $\lfloor 5.9\rfloor$.
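In code this is the standard floor function, e.g. Python's `math.floor`:
```python
import math

print(math.floor(5.9))    # 5
print(math.floor(-5.9))   # -6: floor rounds toward negative infinity
```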
|
H: Is it true that the following definition of a convex function cannot be used in a more general way?
One definition for convex functions I found on Wikipedia was that 'the line segment between any two points on the graph of the function always lies above or on the graph'. That makes sense. However, am I right in suggesting that this is not a suitable definition for convexity on a particular interval? To explain why I think this, consider the graph $y=x^3$ on the interval $[-0.5,1]$:
This function is clearly not convex on the interval $[-0.5,1]$. However, using the definition above, it does meet the criterion that the line segment between those two points always lies above or on the graph. What the above definition really tells us is that $y=x^3$ is not a convex function. It doesn't say whether the function is convex or concave over a particular interval, and cannot be used as such. Otherwise, we get contradictions such as '$y=x^3$ is convex over the interval $[-0.5,1]$', when it clearly is not.
So this definition helps us show that a function is convex everywhere, or not convex everywhere. It tells us whether a function is a convex function or not a convex function. What it can't tell us is whether a function is convex for every point in an interval. (To show this, we might use the second derivative test, for example.) Is this reasoning correct?
AI: You are correct: $f(x) = x^3$ is not convex on the interval $[-0.5, 1]$ (I assume you made a typo when you wrote $[0.5,1]$). However, this agrees with the Wikipedia definition: try the points $(-0.5, (-0.5)^3)$ and $(-0.4, (-0.4)^3)$. Is the line segment passing through these points still above the graph? (It's not!) So in fact the Wikipedia definition does tell you if a real-valued function defined on an interval is convex. Remember, the definition states "for any two points on the graph."
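A quick numeric check of that chord in plain Python: its midpoint lies strictly below the graph, witnessing the failure of convexity on $[-0.5,1]$:
```python
f = lambda x: x**3
x0, x1 = -0.5, -0.4
chord_mid = (f(x0) + f(x1)) / 2      # value of the chord at the midpoint
graph_mid = f((x0 + x1) / 2)         # value of the function there
print(chord_mid, graph_mid, chord_mid < graph_mid)   # -0.0945 -0.091125 True
```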
|
H: Is a differentiable non-constant real function with continuous derivative strictly monotone on an interval?
Let $f : [0,1] \to \mathbb{R_+}$ be a differentiable, non-constant real function with continuous derivative.
My question:
Is it true that $\exists \lambda \in \mathbb{R_+}$ such that $f$ is strictly monotone on $[0, \lambda)$?
Thanks
AI: Consider $$f(x)=\begin{cases}x^2+x^5\sin\frac{1}{x^3} &\text{ if }x\in(0,1],\\ 0 &\text{ if }x=0.\end{cases}$$
The factor $x^5$ is there for continuous differentiability of $f$: indeed $f'$ is continuous, with $f'(0)=0$ and $f'(x)=2x+5x^4\sin\big(\frac{1}{x^3}\big)-3x\cos\big(\frac{1}{x^3}\big)$ for $x>0$. At points where $\cos\big(\frac{1}{x^3}\big)=1$ we get $f'(x)=-x<0$, while at points where $\cos\big(\frac{1}{x^3}\big)=-1$ we get $f'(x)=5x>0$. Both kinds of points occur arbitrarily close to $0$, so $f$ is strictly monotone on no interval $[0,\lambda)$.
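A numeric sanity check in plain Python that $f'$ takes both signs arbitrarily close to $0$, evaluated at points where $\cos(1/x^3)=\pm1$:
```python
import math

fp = lambda x: 2*x + 5*x**4*math.sin(x**-3) - 3*x*math.cos(x**-3)

for k in (10, 100, 1000):
    x_neg = (2*k*math.pi) ** (-1/3)         # cos(1/x^3) =  1 here, so f'(x) = -x < 0
    x_pos = ((2*k + 1)*math.pi) ** (-1/3)   # cos(1/x^3) = -1 here, so f'(x) = 5x > 0
    print(fp(x_neg) < 0, fp(x_pos) > 0)     # True True
```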
|
H: Proving $(x_1 x_2 \cdots x_n)^{-1} = x_n^{-1} x_{n-1}^{-1} \cdots x_2^{-1}x_1^{-1}$ for $x_i $ in group $G$
Let $x_1, x_2, \ldots, x_n \in G$ for some group $G$. We wish to prove that
$$(x_1 x_2 \cdots x_n)^{-1} = x_n^{-1} x_{n-1}^{-1} \cdots x_2^{-1} x_1^{-1}.$$
I'm not sure if the correct way to proceed is by showing the multiplication out, which doesn't seem to me to be required for the inductive step. Here is what I have so far.
Proof. Let $x_1, x_2, \ldots, x_n \in G$ for some group $G$. We proceed by induction on $n$.
When $n = 1$, we have
$$x_1^{-1} = x_1^{-1}.$$
Less trivially, when $n = 2$, we have
$$\begin{align}
(x_1 x_2)(x_2^{-1} x_1^{-1}) &= x_1 (x_2 x_2^{-1})x_1^{-1} \\
&= x_1 e x_1^{-1} \\
&= (x_1 e)x_1^{-1} \\
&= x_1 x_1^{-1} \\
&= e,
\end{align}$$
and
$$\begin{align}
(x_2^{-1} x_1^{-1})(x_1 x_2) &= x_2^{-1} (x_1^{-1} x_1)x_2\\
& = x_2^{-1} e x_2 \\
&= x_2^{-1} (ex_2) \\
&= x_2^{-1} x_2 \\
&= e,
\end{align}$$
so $(x_1 x_2)^{-1} = x_2^{-1} x_1^{-1}$.
Supposing inductively that the result holds when $n = k$,
$$
(x_1 x_2 \cdots x_k)^{-1} = x_k^{-1} x_{k-1}^{-1} \cdots x_2^{-1} x_1^{-1},$$
we prove the result when $n = k + 1$:
\begin{align*}
(x_1 x_2 \cdots x_k x_{k+1})^{-1} & = ((x_1 x_2 \cdots x_k)x_{k+1})^{-1} = x_{k+1}^{-1} (x_1 x_2 \cdots x_k)^{-1} \\
& = x_{k+1}^{-1} (x_k^{-1} x_{k-1}^{-1} \cdots x_2^{-1} x_1^{-1}) \\
& = x_{k+1}^{-1} x_k^{-1} x_{k-1}^{-1} \cdots x_2^{-1} x_1^{-1}.
\end{align*}
How does this look?
AI: Your proof is fine.
Nitpicking: your use of associativity misses a few steps; also, it suffices in a group to check whether a candidate inverse of an element is a one-sided inverse for it to be an inverse.
|
H: How does $ f_{( T_1, T_2 )} (t_1, t_2 ) = \frac {\partial ^2 }{ \partial t_1 \partial t_2 } \mathbb P ( T_1 > t_1 , T_2 > t_2 )$?
I saw that expression in a paper :
$$ f_{( T_1, T_2 )} (t_1, t_2 )= \frac {\partial ^2 }{ \partial t_1 \partial t_2 } \mathbb P ( T_1 > t_1 , T_2 > t_2 )$$
And it seems to me that it is false. The algebra is so simple that I can't figure out where I could be wrong.
Basically, I think that :
$$ f_{( X )} (x) \overset{?}{=} \frac {\partial }{ \partial x } \mathbb P ( X > x ) = \frac {\partial }{ \partial x } \big( 1 - \mathbb P ( X \leq x ) \big) = - f_{( X )} (x) $$
Am I wrong? Is it possible for the first expression to be correct?
AI: Assuming that the derivatives of $\mathbb P(T_1 > t_1,T_2 > t_2)$ exist and that the density of the vector $(T_1,T_2)$ is $f_{(T_1,T_2)}$, we have, by Fubini and the Schwarz rule (on the order of differentiation):
$$ \frac{\partial}{\partial t_2} \frac{\partial}{\partial t_1} \mathbb P(T_1 > t_1, T_2 > t_2) = \frac{\partial}{\partial t_2} \frac{\partial}{\partial t_1} \int_{t_1}^\infty \int_{t_2}^\infty f_{(T_1,T_2)}(x,y)dydx $$
Now, by the fundamental theorem of calculus, setting $G(t)$ to be such that $G'(t)= \int_{t_2}^\infty f_{(T_1,T_2)}(t,y)dy$, we get:
$$ \frac{\partial}{\partial t_2} \frac{\partial}{\partial t_1} \big( G(\infty) - G(t_1) \big) = -\frac{\partial}{\partial t_2} G'(t_1) = -\frac{\partial}{\partial t_2} \int_{t_2}^\infty f_{(T_1,T_2)}(t_1,y)dy $$
Apply that rule one more time to a function $H(t)$ such that $H'(t) = f_{(T_1,T_2)}(t_1,t)$, getting:
$$ - \frac{\partial}{\partial t_2} \big( H(\infty) - H(t_2) \big) = (-1) \cdot (-H'(t_2)) = H'(t_2) = f_{(T_1,T_2)}(t_1,t_2)$$
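A quick symbolic check on a toy example, assuming sympy: for independent $\mathrm{Exp}(1)$ variables, $\mathbb P(T_1>t_1,T_2>t_2)=e^{-t_1-t_2}$, and the mixed derivative recovers the joint density with the correct (positive) sign:
```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2', positive=True)
S = sp.exp(-t1) * sp.exp(-t2)     # P(T1 > t1, T2 > t2) for independent Exp(1)s
print(sp.diff(S, t1, t2))         # exp(-t1)*exp(-t2): the two sign flips cancel
```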
|
H: Answer true or false: For A and B sets, A ∩ B = B ∩ B'
Answer true or false: For sets $A$ and $B:$ $A \cap B = B \cap B'.$
The statement is false. Let $A$ and $B$ be non-empty sets with $A = B$ and let $X = \{ a , b , c \}.$ Then
$A \cap B = \{ a \} \cap \{ a \} = \{ a \} $ and $B \cap B'= \{ a \} \cap \{ b , c\}.$ Since $\emptyset \subseteq A$ for every set $A$, note that $\{ a \} \cap \{ b , c \} = \emptyset.$
But then $A \cap B \neq B \cap B'$ because $\{a\} \neq \emptyset.$
Is my answer correct? This is an exercise taken from my workbook.
AI: You're right but it needs to be written in a slightly better manner. For instance, you never write what $A$ actually is.
Let $X=\{a,b,c\}$ and $A=B= \{a\}.$ Then $$A\cap B=\{a\}\cap \{a\}=\{a\}$$ and $$B \cap B^c =\emptyset$$ and so $$A\cap B \neq B\cap B^c.$$
|
H: Eisenstein criterion on f(x+1)
I need to show that the polynomial $f(x)=x^6+x^5+x^4+x^3+x^2+x+1$ is irreducible in $\mathbb{Z}[X]$ and in $\mathbb{F}_2[X]$. As we can't find a prime number p satisfying the conditions for the Eisenstein criterion, I did not know how to solve it. I looked into the solutions and they apply the Eisenstein criterion to $f(x+1)$ instead of $f(x)$. I don't understand why we can do this.
Could somebody explain this to me? And is proving irreducibility for $f(x+1)$ enough?
AI: I looked into the solutions and they apply the Eisenstein criterion to $f(x+1)$ instead of $f(x)$. I don't understand why we can do this.
It suffices to show irreducibility of $f(x+1)$. To see this, assume that $f(x)$ is reducible, then $f(x) = g(x)h(x)$ for some proper factors $g(x)$ and $h(x)$. In that case, you get that
$$f(x+1) = g(x+1)h(x+1)$$
where the right factors are still proper factors. Thus, reducibility of $f(x)$ implies that of $f(x+1)$. The contrapositive shows that it's sufficient to prove the irreducibility of $f(x+1)$.
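For this particular $f$, the shift can be expanded with sympy; Eisenstein then applies at $p=7$:
```python
import sympy as sp

x = sp.symbols('x')
f = sum(x**k for k in range(7))       # x**6 + x**5 + ... + x + 1
print(sp.expand(f.subs(x, x + 1)))
# x**6 + 7*x**5 + 21*x**4 + 35*x**3 + 35*x**2 + 21*x + 7: Eisenstein at p = 7
```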
|
H: Calculate a vector that lies on plane X and results in vector b when projected onto plane Y.
I'm working on a computer program and ran into this problem that I'm struggling to figure out.
Given:
two planes X and Y in $\mathbf{R^3}$ that both pass through the origin and are defined by their normal (perpendicular) vectors x and y respectively
vector b that lies on plane Y
How would you find vector a that lies on plane X and results in vector b when it is projected onto plane Y?
I tried projecting b onto X to find a, but I'm pretty sure that doesn't work.
AI: The projection of $\mathbf a$ on $Y$ (taking $\mathbf y$ to be a unit normal) is
$$
\mathbf a - \langle\mathbf a,\mathbf y\rangle\mathbf y,
$$
so you need a vector $\mathbf a\in X$ that can be written as $\mathbf b + \lambda\mathbf y$ for some $\lambda$. The condition $\mathbf a\in X$ is equivalent to $\langle\mathbf a, \mathbf x\rangle = 0$, so
$$
0 = \langle\mathbf b + \lambda\mathbf y, \mathbf x\rangle \implies \lambda =-\frac{\langle\mathbf b , \mathbf x\rangle}{\langle\mathbf y, \mathbf x\rangle}.
$$
As a consequence,
$$
\mathbf a = \mathbf b -\frac{\langle\mathbf b , \mathbf x\rangle}{\langle\mathbf y, \mathbf x\rangle}\mathbf y
$$
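A small numpy check of this formula (a sketch: here $\mathbf x$ and $\mathbf y$ are taken as unit normals, and the denominator $\langle\mathbf y,\mathbf x\rangle$ must be nonzero, i.e. the planes must not be perpendicular):
```python
import numpy as np

x = np.array([1.0, 0.0, 0.0])                # unit normal of plane X (the yz-plane)
y = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # unit normal of plane Y
b = np.array([1.0, -1.0, 2.0])               # b @ y == 0, so b lies on plane Y

a = b - (b @ x) / (y @ x) * y
print(np.isclose(a @ x, 0))                  # True: a lies on plane X
print(np.allclose(a - (a @ y) * y, b))       # True: projecting a onto Y recovers b
```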
|
H: After applying a sequence of involutory real matrices to a vector, is the norm of this vector bounded from below?
For $n,N \in \mathbb{N}$, let $A_1, \ldots, A_n$ be a finite sequence of involutory $(N \times N)$-matrices over $\mathbb{R}$, i.e. $A = A^{-1}$.
We know, that the eigenvalues of any involutory matrix lie in the set $\{-1,+1\}$. Further, each involution is diagonalizable, i.e. there are no generalized eigenvectors.
Assume that each matrix $A_i$, $i \in \{1, \ldots, n\}$, does not have all eigenvalues equal to $-1$ or $+1$ and let $i_k \in \{1, \ldots ,n\}$, for every $k \in \mathbb{N}$.
I conjecture that, for every $v \in \mathbb{R}^{N} \setminus \{\vec{0}\}$
\begin{align}
\lim_{k \to \infty} \|A_{i_k}A_{i_{k-1}} \ldots A_{i_1}v\|
\end{align}
is bounded away from zero.
Here is my (geometrical) try: For every $v \in \mathbb{R}^{N} \setminus \{\vec{0}\}$, there exist coordinates $\alpha_{1,{i_1}}, \ldots, \alpha_{N,{i_1}}$, such that
\begin{align}
v = \alpha_{1,{i_1}}v_{1,{i_1}} + \ldots + \alpha_{N,{i_1}}v_{N,{i_1}},
\end{align}
where $v_{1,{i_1}}, \ldots, v_{N,{i_1}}$ are the eigenvectors corresponding to the matrix $A_{i_1}$. When applying $A_{i_1}$ to $v$, we see that the eigenvectors corresponding to the eigenvalue $1$ stay put and the rest changes orientation. This means, $v$ is reflected on the eigenspace corresponding to eigenvalue $1$ and therefore does not decrease in length. Finish by induction.
Any thoughts?
AI: Counterexample:
$$\frac{1}{\sqrt5}\begin{pmatrix}1&2\\2&-1\end{pmatrix} \begin{pmatrix}1&2\\0&-1\end{pmatrix}\begin{pmatrix}2\\-1\end{pmatrix}=\frac{1}{\sqrt5}\begin{pmatrix}2\\-1\end{pmatrix}$$ So each application of this pair divides the vector by $\sqrt5$, and the products converge to $0$. The error in the OP's argument is to assume that the eigenvectors are orthogonal.
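A numpy check of the counterexample: both matrices square to the identity, each has eigenvalues $+1$ and $-1$, and yet the product shrinks $v$:
```python
import numpy as np

A1 = np.array([[1.0, 2.0], [2.0, -1.0]]) / np.sqrt(5)
A2 = np.array([[1.0, 2.0], [0.0, -1.0]])
v = np.array([2.0, -1.0])

print(np.allclose(A1 @ A1, np.eye(2)), np.allclose(A2 @ A2, np.eye(2)))  # True True
print(np.linalg.eigvals(A1), np.linalg.eigvals(A2))   # +1 and -1 each, in some order
print(np.allclose(A1 @ A2 @ v, v / np.sqrt(5)))       # True
```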
|
H: Projection onto a subspace
Let $W \subset \mathbb{R}^{4}$ be the subspace generated by the two vectors
$$W := span \left\lbrace \begin{pmatrix}
1\\
1\\
0\\
0
\end{pmatrix},\begin{pmatrix}
1\\
1\\
1\\
2
\end{pmatrix}
\right\rbrace.
$$
Find $w \in W$ which minimizes $||w-v||$, where $v= \begin{pmatrix}
1\\
2\\
3\\
4
\end{pmatrix}$ ($||\cdot ||$ is the usual norm in $\mathbb{R}^{4}$).
I found the projection matrix, and it is
$$P= \begin{pmatrix}
\frac{1}{2} & \frac{1}{2} & 0 & 0\\
\frac{1}{2} & \frac{1}{2} & 0 & 0\\
0 & 0 & \frac{1}{5} & \frac{2}{5}\\
0 & 0 & \frac{2}{5} & \frac{4}{5}
\end{pmatrix} $$
so, $w= Pv=\begin{pmatrix} \frac{3}{2} \\
\frac{3}{2}\\
\frac{11}{5}\\
\frac{22}{5}
\end{pmatrix}$
Am I right? I'm not sure whether I've fully solved the problem.
AI: Yes, your solution is correct and complete.
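For what it's worth, the computation is easy to reproduce with numpy via $P=A(A^{T}A)^{-1}A^{T}$, with the two spanning vectors as the columns of $A$:
```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [0.0, 2.0]])                # spanning vectors of W as columns
P = A @ np.linalg.inv(A.T @ A) @ A.T      # orthogonal projection onto W = Col(A)
v = np.array([1.0, 2.0, 3.0, 4.0])
print(P @ v)   # [1.5 1.5 2.2 4.4] = (3/2, 3/2, 11/5, 22/5)
```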
|
H: Where did 3.5 come from?
This is a homework question.
The problem. I know the solution, but I don't know where it came from. The videos say nothing. The equation is $d(v) = \frac{2.15v^2}{64.4f}$. I need to solve for $f$, so I tried plugging the numbers from the table into the equation and solving. I got approximately $0.018$, yet apparently the answer is $3.5$. How do I solve problems like this, and what was my mistake?
AI: Let's try plugging in, for example, the values $v = 20$ and $d = 38$ from the table. This gives us
$$
d = \frac{2.15 v^2}{64.4 f} \implies 38 = \frac{2.15 \cdot 20^2}{64.4 f}.
$$
We now solve for $f$. Multiply both sides by the denominator, then divide both sides by the coefficient to solve for $f$.
$$
38 = \frac{2.15 \cdot 20^2}{64.4 f} \implies (38 \cdot 64.4) f = 2.15 \cdot 20^2 \implies f = \frac{2.15 \cdot 20^2}{38 \cdot 64.4} \approx 0.35.
$$
If we forget the exponent attached to $v$, then we instead end up with
$$
f = \frac{2.15 \cdot 20}{38 \cdot 64.4} \approx 0.018.
$$
I suspect that this is the source of your error.
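The arithmetic in a couple of lines of Python, with and without the square:
```python
v, d = 20, 38
f_correct = 2.15 * v**2 / (64.4 * d)   # ~0.351: keeps the square on v
f_wrong = 2.15 * v / (64.4 * d)        # ~0.018: dropping the square reproduces the error
print(round(f_correct, 3), round(f_wrong, 3))
```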
|
H: Why is $C^5[a,b]$ infinite dimensional
Let $V$ denote the vector space $C^5[a,b]$ over $\mathbb{R}$.
How to show it is infinite dimensional?
I know that we can write:
$C^5[a,b]$ = { $f\in$ $C[a,b]$ : $5$th derivative exists and is continuous}
How does one show that there is no finite linearly independent subset of $V$ which spans $V$?
AI: Hint:
Note that $\mathbb{R}[x] \subset C^5([a,b],\mathbb{R})$ and that $\mathbb{R}[x]$ is not finite dimensional.
|
H: Question on proof of existence of a maximum of a continuous function on a closed set. - Proof inspiration
I'm trying to get stronger at constructing mathematical arguments, so through that process I attempt to prove as many as possible of the theorems presented in the textbook I'm reading, in this case Spivak's Calculus. When attempting the following theorem I did not succeed; looking at the proof, Spivak applied the following trick:
The function $$g(x) = \frac{1}{\alpha - f(x)}$$
does seem esoteric, yet still it had to come from somewhere. It had to come from some line of thinking that allowed Spivak to introduce this function and know the consequences of introducing it. My question is: what line of thinking was Spivak looking at this question with? What kind of questions did he ask himself when working through this?
As an example of what I mean, I approached the question in this way:
I KNOW that $f$ is continuous on a closed set. This means that the function is bounded. I would then probably write out the $\delta - \epsilon$ definition of continuity. I would also ask myself what I WANT. In this case we are trying to show the existence of a value, $y$, in our closed interval. I would most likely eventually get to the point of concluding that it suffices to show $\alpha = f(y)$. But then I would ask myself "what or how can we show such a thing on an abstract set?"... and I would be stuck. How did Spivak proceed from here? Even if I had stayed with it for a day or a few days, I probably would never have thought of introducing a new function. So what line of reasoning would bring about such a "moment of brilliance"?
AI: In this case it comes from looking at what it means for $\alpha$ to be the least upper bound of $\big\{f(x):x\in[a,b]\big\}$.
Since $\alpha=\sup\big\{f(x):x\in[a,b]\big\}$, we know that for each $\epsilon>0$ there is an $x_\epsilon\in[a,b]$ such that $\alpha-f(x_\epsilon)<\epsilon$. Thus, we can make $\alpha-f(x)$ as small as we like by choosing a suitable $x\in[a,b]$. But that immediately tells us that we can make $\frac1{\alpha-f(x)}$ as big as we like by choosing a suitable $x\in[a,b]$. Oops!
|
H: The ratio of moments in a normal distribution
I'm reading a paper where they (Mann and Whitney) want to show the limiting distribution they get is normal. They do this by looking at a ratio of moments. They do a computation then conclude the limiting distribution is normal by a "well known theorem". Can someone provide a reference? The relevant part of the paper is copied below:
AI: This fact is in Billingsley's Probability and Measure, although that's not how Mann and Whitney knew it.
Section 30, "The Method of Moments", notes that the normal distribution is "determined by its moments", that is, is the only probability distribution with the same moments, and states Theorem 30.2, on page 344 in the first (1979) edition, and page 390 of the third (1995) edition:
Suppose that the distribution of $X$ is determined by its moments, that the $X_n$ have moments of all orders, and that $\lim_n E[X^r_n] = E[X^r]$ for $r=1,2,\ldots.$ Then $X_n\Rightarrow X$.
(Mann and Whitney possibly could have known the result as stated in Appendix II (see especially p.384) of Uspensky's 1937 Introduction to Mathematical Probability, which presents the Chebyshev theory of the method of moments described in a wikipedia article. This article is largely by Michael Hardy, who supplied the other answer to this question.)
M&W's business about "ratio of moments" is a notational paper tiger, an artifact of standardization. To show that $Y_n/\sigma(Y_n)=X_n$ converges in distribution to $X$ this way, Mann and Whitney verify (in separate even $r$ and odd $r$ cases) $E[Y_n^r]/E[Y_n^2]^{r/2}\to EX^r$, and so on.
The treatment of the limiting normality of the Mann-Whitney test is much slicker (= less ham-fisted) in Hájek and Šidák's Theory of Rank Tests.
|
H: Example such that dimension of subspace is 24
If $S$ and $T$ are two subspaces of the vector space $\Bbb R^{24}$ of dimensions $19$ and $17$ respectively, what $S$ and $T$ can I choose such that $\dim(S+T)=24$?
Help, please!
AI: Since $S$ is a $19$-dimensional subspace of $\Bbb R^{24}$, the orthogonal complement of $S$ in $\Bbb R^{24}$, denoted by $S^{\perp}$, is of dimension $5$. So, since $T$ is of dimension $17$, $\textbf{if we have } S^{\perp} \subset T$, then $\dim(S+T)=24$.
EDIT (more details): The linear sum of the two subspaces $S,T$, namely $S+T$, is the smallest subspace containing both $S$ and $T$. Now note that $S \oplus S^{\perp}=\Bbb R^{24}$, and since (by our assumption) $S^{\perp} \subset T$, it follows that $S+T$ contains a basis of $\Bbb R^{24}$ and thus is all of $\Bbb R^{24}$.
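A concrete choice in coordinates, checked with numpy: take $S=\operatorname{span}(e_1,\dots,e_{19})$ and $T=\operatorname{span}(e_1,\dots,e_{12},e_{20},\dots,e_{24})$, so that $S^{\perp}=\operatorname{span}(e_{20},\dots,e_{24})\subset T$:
```python
import numpy as np

E = np.eye(24)
S = E[:, :19]                                    # dim S = 19
T = np.hstack([E[:, :12], E[:, 19:]])            # dim T = 12 + 5 = 17
print(np.linalg.matrix_rank(np.hstack([S, T])))  # 24 = dim(S + T)
```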
|
H: Intuitive steps can be used during a proof?
I'm trying to prove that if $A \cup C = B \cup C$ and $A \cap C = B \cap C$, then $A = B$.
I started by assuming that $A \neq B$; then, in an intuitive way, I can see that $$((A \cup C = B \cup C) \land A \neq B) \Rightarrow A \subseteq C \land B \subseteq C$$
And from this, in an intuitive way, I concluded that:
$$((A \subseteq C \land B \subseteq C) \land A \neq B) \Rightarrow A \cap C \neq B \cap C$$
This, I think, contradicts the original premises, so $A = B$ needs to be true for $A \cup C = B \cup C \land A \cap C = B \cap C$ to hold, and so I ended with:
$$(A \cup C = B \cup C \land A \cap C = B \cap C) \Rightarrow A = B$$
I feel like I missed steps to make it correct; I want to know how I can fix this, and if my attempt was totally wrong, I want directions to do it properly.
Other implication from accepted answer
$x \in B \Rightarrow x \in B \cup C$, and by premise $B \cup C = A \cup C$ then we have
$$x \in B \Rightarrow x \in A \cup C$$
$$x \in B \Rightarrow x \in A \lor x \in C$$
$$(x \in B \Rightarrow x \in A) \lor (x \in B \Rightarrow x \in C)$$
But $x \in B \land x \in C \Leftrightarrow x \in B \cap C$ and from this we have
$$(x \in B \Rightarrow x \in A) \lor (x \in B \land x \in C\Rightarrow x \in B \cap C)$$
And by our premise we have $B \cap C = A \cap C$, then we got
$$(x \in B \Rightarrow x \in A) \lor ((x \in B \land x \in C)\Rightarrow x \in A \cap C)$$
$$(x \in B \Rightarrow x \in A) \lor ((x \in B \land x \in C)\Rightarrow x \in A \land x \in C)$$
$$(x \in B \Rightarrow x \in A) \lor (x \in B \Rightarrow (x \in C \Rightarrow x \in A \land x \in C))$$
So in any case we have $x \in B \Rightarrow x \in A$, and hence $B \subseteq A$.
AI: The proper way to do this proof would be to show, on the premise $A \cup C = B \cup C$ and $A \cap C = B \cap C$, that $x \in A \implies x \in B$ and $x \in B \implies x \in A$. These two implications give $A \subseteq B$ and $B \subseteq A$ respectively, and in turn $A = B$.
First, we want to begin with the statement $x \in A$.
Since $x \in A$, we know $x \in A \cup C$ as a result.
However, by our premise, this means $x \in B \cup C$ as a well.
Thus, at least one of $x \in B$ or $x \in C$ holds.
Obviously if $x \in B$ we're done. We continue instead on the premise $x \in C$ and see that it, in turn, leads to $x \in B$.
So suppose we have $x \in C$.
Thus, $x \in A \cap C$ since it is in both.
But $A \cap C = B \cap C$. Thus, $x \in B \cap C$.
But this means that $x \in B$ and $x \in C$.
Thus, $x \in B$.
Thus, $x \in A \implies x \in B$. This gives $A \subseteq B$.
I'll leave the reverse implication ($x \in B \implies x \in A$) to you.
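As a sanity check (not a substitute for the proof), the identity can be verified exhaustively over a small universe in Python:
```python
from itertools import combinations

U = (0, 1, 2, 3)
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

# A|C == B|C and A&C == B&C should force A == B
for A in subsets:
    for B in subsets:
        for C in subsets:
            if A | C == B | C and A & C == B & C:
                assert A == B
print("identity holds for all subsets of", U)
```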
|
H: Radius of convergence and absolute convergence
Say I have a power series $\sum_{n=1}^{\infty} a_nx^n$ with a known interval of convergence $-1< x \leq 1$. What can I know in general about the series' convergence at $x=1$? I'm asking because I've seen that it can't converge absolutely at $x=1$ and I don't understand why.
AI: Consider series
$$
\sum_{k=0}^\infty x^k.
$$
At points $\pm 1$ it diverges.
Consider series
$$
\sum_{k=0}^\infty \frac{x^k}{k^2}.
$$
At points $\pm 1$ it converges.
Consider series
$$
\sum_{k=0}^\infty \frac{x^k}{k}.
$$
It converges at $-1$ and diverges at $1$.
For all three series the radius of convergence is equal to $1$. So, anything can happen at the endpoints of the interval.
If the series
$$
\sum_{n=0}^{\infty} a_n x^n \tag{1}
$$
converges absolutely at $x = 1$, then the series
$$
\sum_{n=0}^{\infty} |a_n|
$$
converges and that's why (1) converges absolutely at $x=-1$ as well. So, if we have $(-1,1]$ as an interval of convergence, the convergence at the endpoint must be conditional.
|
H: Finite group of even order has an element $g \neq e$ such that $g^ 2 = e$
I am trying to prove the following result.
Let $G$ be a finite group of even order. Prove that there exists $g \in G$ where $g^2 = e$ and $g \neq e$.
Here is my attempt.
Since $G$ has even order, $|G| \geq 2$. Hence, there exists some $g \neq e$. Since $G$ is of a finite order, there must exist some power, possibly not minimal, such that $g^m = e$. (Otherwise, the order is infinite.) Let $n$ be the order of $G$. Then $n \mid m$, so $m = nk$ for some $k \in \mathbb{N}$. But $G$ is of even order, so $n = 2j$ for some natural number $j$, so $m=nk=(2j)k = 2(jk)$. We have
$$e = g^m = g^{2jk} = (g^{jk})^2.$$
The one remaining thing to show is that $g^{jk} \neq e$, but I'm having trouble accomplishing this. (I worry, actually, that we may have $jk = n$, in which case this wouldn't work.)
AI: If $G$ has no $x\ne 1$ with $x^2=1$, then every non-identity element $x$ has the property that $x^{-1}\ne x$. Then we can represent $G$ as a union of $\{1\}$ and several $2$-element subsets $\{x,x^{-1}\}$. Hence $|G|$ is odd.
|
H: Show that if $\phi_{X}(t)=1$ in a neighborhood of $0$, then $X=0$ a.s.
Let $\phi(t),t\in\mathbb{R}$, be the characteristic function of a random variable $X$. Show that if $\phi(t)=1$ in a neighborhood of $0$, then $X=0$ a.s.
The problem comes with the following hint: Show that $1-Re(\phi(2t))\le4(1-Re(\phi(t)))$ for $t\in \mathbb{R}$. I am stumped by this one; I am not even sure where to begin or how to prove/use the hint. Any help here would be greatly appreciated.
AI: Let's assume that $\varphi(t) = 1$ for any $t \in [0,\delta]$. Then in particular $\varphi(\delta)=1$. We'll show that it is the case that $\mathbb P(X \in \{\frac{2k\pi}{\delta} : k \in \mathbb Z \}) = 1$ . Let $\mu_X$ be distribution of $X$.
Note that $\varphi(\delta)=1$ means:
$$ 0 = 1 -\varphi(\delta) = 1 - \int_{\mathbb R} \cos(\delta x) d\mu_X(x) = \int_{\mathbb R} (1 - \cos(\delta x)) d\mu_X(x)$$
Since $1-\cos(\delta x) \ge 0$, we must have $\cos(\delta x) = 1$ for $\mu_X$-almost every $x$, so that $x = \frac{2k\pi}{\delta}$ for some $k\in\mathbb Z$, $\mu_X$-almost surely, which means $\mu_X( \{\frac{2k\pi}{\delta} : k \in \mathbb Z \})=1$.
Now note that the set $\{\frac{2k\pi}{\delta} : k \in \mathbb Z \}$ is countable. For every $k \in \mathbb Z \setminus \{0\}$ we can find $t_k \in (0,\delta)$ such that $\frac{2k\pi}{\delta}$ is not equal to $\frac{2 m \pi}{t_k}$ for any $m \in \mathbb Z$: for every $m \in \mathbb Z$ there is at most one $s \in (0,\delta)$ with $\frac{2m \pi}{s} = \frac{2k\pi}{\delta}$, and there are only countably many $m \in \mathbb Z$ but continuum-many $s \in (0,\delta)$, so such a $t_k$ exists. Since $\varphi(t_k)=1$, the same argument gives $\mu_X( \{ \frac{2m\pi}{t_k} : m \in \mathbb Z \}) = 1$, and hence $\mu_X(\{\frac{2k\pi}{\delta}\}) = 0$ for that given $k$. Since $k \in \mathbb Z \setminus \{0\}$ was arbitrary, and there are only countably many of them, we get $\mu_X( \{ \frac{2k \pi}{\delta} : k \in \mathbb Z \setminus \{0\} \} ) = 0$, so that $\mu_X(\{0\}) = 1$, which was to be proven.
EDIT: If you're interested, here's an approach with your hint. Let's prove it beforehand. $$ 1 - Re(\varphi(2t)) = \int_{\mathbb R} (1-\cos(2tx))d\mu_X(x) = 2\int_{\mathbb R} (1 - \cos^2(tx))d\mu_X(x) $$
It would be sufficient to show $1-\cos^2(s) \le 2(1- \cos(s))$ which is equivalent to $0 \le \cos^2(s) - 2\cos(s) + 1 = (\cos(s)-1)^2$, so true. Hence $$ 1- Re(\varphi(2t)) \le 4\int_{\mathbb R}(1 - \cos(tx))d\mu_X(x) = 4(1-Re \varphi(t))$$
Having the lemma, it is pretty easy. We have a $\delta$ such that $\varphi(t) = 1$ for any $t \in [-\delta,\delta]$. Now let's prove the same for any $t \in [-2\delta,2\delta]$ using the hint: take $s \in [-\delta,\delta]$. We have $$ 1 - Re(\varphi(2s)) \le 4(1 - Re(\varphi(s))) = 0.$$ Moreover, $|\varphi(2s)| \le 1$, so $\varphi(2s) = 1$, and $2s$ ranges over all of $[-2\delta,2\delta]$. Iterating, the fact holds on every $[-2^k\delta,2^k\delta]$, so $\varphi(t)=1$ for every $t \in \mathbb R$; and the only random variable whose characteristic function is identically $1$ is $X=0$ a.s.
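A quick numeric sanity check of the hint's inequality (it follows from $(\cos s-1)^2\ge0$ as shown above):
```python
import numpy as np

s = np.linspace(-10, 10, 10001)
print(np.all(1 - np.cos(2*s) <= 4*(1 - np.cos(s)) + 1e-12))   # True
```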
|
H: If $\alpha$ is an algebraic number then so is $\alpha+1$
I have to prove that if $\alpha$ is an algebraic number then so is $\alpha+1$. I've tried to construct the polynomial for $\alpha+1$ using the polynomial for $\alpha$, but that didn't lead me to anything.
Let $W(x)=a_n x^n+a_{n-1} x^{n-1} +\dots + a_1x+a_0$ be a polynomial with $W(\alpha)=0$. Consider
$$W(\alpha+1)=a_n (\alpha+1)^n+a_{n-1} (\alpha+1)^{n-1} +\dots + a_1(\alpha+1)+a_0$$
$$W(\alpha+1)=a_n(\alpha^n+{n\choose 1}a^{n-1}+\dots+{n\choose n-1}\alpha +1)+\\a_{n-1}(\alpha^{n-1}+{n-1\choose 1}a^{n-2}+\dots+{n-1\choose n-2}\alpha +1)+\dots +\\a_2\alpha+a_1+a_0$$
Then, taking the first element of every bracket, I can obtain $W(\alpha)$, which is $0$. Now I'm left with the rest of the terms and don't know where to go from there. Is this strategy any good? If not, what would be a proper way to prove this? If I'm doing it correctly, what's next?
AI: If you substitute $x \mapsto x - 1$ in your polynomial $W(x)$ then you obtain a polynomial $$W'(x) = a_n (x-1)^n + \dots + a_1 (x -1) + a_0.$$ It should be clear that $W'(\alpha + 1) = W(\alpha) = 0$. If you want to find the coefficients $a'_k$ of $W'(x)$, you will have to use the binomial expansion, similar to what you have already done, to calculate them.
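A concrete instance with sympy, taking $\alpha=\sqrt2$ and $W(x)=x^2-2$:
```python
import sympy as sp

x = sp.symbols('x')
W = x**2 - 2                               # W(sqrt(2)) = 0
W_shifted = sp.expand(W.subs(x, x - 1))    # the polynomial W'(x)
print(W_shifted)                           # x**2 - 2*x - 1
print(sp.simplify(W_shifted.subs(x, sp.sqrt(2) + 1)))   # 0: alpha + 1 is a root
```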
|
H: Proof Verification: Equivalent Definition for Locally Compact Hausdorff Space
The main theorem is as follows. I think most people are familiar with that:
Theorem. Let $X$ be a Hausdorff space. Then $X$ is locally compact if and only if for every $x\in X$ and every open set $U$ containing $x$, there exists a neighborhood $V$ of $x$ such that ${\rm Cl}(V)$ is compact and ${\rm Cl}(V)\subseteq U$.
One direction is trivial, so we only need to show that the condition holds if $X$ is locally compact.
In my definition:
Definition. A topological space $X$ is locally compact if for every $x\in X$, there is a compact subset $C$ of $X$ such that $x\in{\rm Int}(C)$.
I know there are many proofs available to that theorem, but I wonder if I can prove it without referring to the one-point compactification. Here follows my proof, which uses the regularity of locally compact Hausdorff space.
Proof. Suppose $X$ is locally compact. For each $x\in X$, let $C$ be a compact subset of $X$ with $x\in{\rm Int}(C)$. For every neighborhood $U$ of $x$, since $X$ is regular, there exists a neighborhood $V'$ of $x$ such that ${\rm Cl}(V')\subseteq U$. Then we set
\begin{equation*}
V=V'\cap{\rm Int}(C).
\end{equation*}
Apparently, $V$ is a neighborhood of $x$ where
\begin{equation*}
{\rm Cl}(V)={\rm Cl}(V'\cap{\rm Int}(C))\subseteq{\rm Cl}(V')\cap{\rm Cl}({\rm Int}(C))\subseteq{\rm Cl}(V')\cap C.
\end{equation*}
On the one hand, we have ${\rm Cl}(V)\subseteq{\rm Cl}(V')\subseteq U$. On the other hand, since ${\rm Cl}(V)$ is closed in $C$ and $C$ is compact, we can see that ${\rm Cl}(V)$ is also compact, as desired.
If anyone finds it interesting, could you please help me check whether my proof is valid? Any help will be appreciated.
AI: I think it's better not to rely on $X$ being (completely) regular (which is also most easily proved by using the one-point compactification) but by using the classic fact that a compact Hausdorff space is normal (and hence regular).
So if $x \in O \subseteq C$ with $O$ open and $C$ compact (as the assumption of local compactness gives us) and $U$ is any open set containing $x$, then $U \cap O$ is open in $C$ which is (as said) regular and so we find an open neighbourhood $V$ of $x$ (open in $C$, so of the form $V=V' \cap C$ for some $V'$ open in $X$) such that $\operatorname{cl}_C(V) \subseteq U\cap O$ and then check that $V' \cap O$ is as required.
|
H: If the limit of the difference of two random variables goes to 0, are the limits of their expectations the same?
I have two discrete random variables $X_n$ and $Y_n$ and a relation between them that looks like, for $n\geq 1$
$$X_n = a_nP\{A_n\} + Y_n P\{{A_n}^c\},$$
where $A_n$ is an event and $a_n$ depends on $n$ but is not a random variable. The thing is, the sequence of probabilities $P\{A_n\}$ goes to $0$ as $n\to \infty$, so I believe this means that $X_n$ converges to $Y_n$ in some meaningful way, but I'm not sure which. From here, can one conclude that
$$\lim_{n\to \infty} E\{X_n\} = \lim_{n\to\infty} E\{Y_n\}?$$
I'm sorry if this is very elementary. I have difficulties getting my head around limits and expectations.
AI: Since $X_n = a_n P(A_n) + Y_nP(A_n^c)$, we know by the linearity of the expected value that:
$$E(X_n) = E(a_n P(A_n) + Y_nP(A_n^c))= E(a_n P(A_n)) + E(Y_nP(A_n^c)) =$$
$$
=a_nP(A_n) + P(A_n^c)E(Y_n)$$
If $a_n \rightarrow a$ and if $E(Y_n)$ converges (note that this is necessary for the limit to exist), then
$$
\lim_{n \to \infty}E(X_n) =\lim_{n \to \infty}a_nP(A_n) + P(A_n^c)E(Y_n)=
\lim_{n \to \infty}a_nP(A_n) + \lim_{n \to \infty}P(A_n^c)E(Y_n)
$$
Since $P(A_n) \rightarrow 0$, we conclude that
$$
\lim_{n \to \infty}E(X_n) =\lim_{n \to \infty}P(A_n^c)\lim_{n \to \infty}E(Y_n)=\lim_{n \to \infty}E(Y_n)
$$
|
H: Covariance of a uniform distribution
I have the following problem:
The variables $X$ and $Y$ have the joint probability: $f(x,y)=2 $ for
$0 \le y \le x \le 1$. What is the covariance between $X$ and $Y$?
The answer is 1/36
I know that $\operatorname{Cov}(x,y) = E[XY] - E[X]E[Y]$
First I calculated the marginal PDFs since it was the first part of the question.
$f_x(x)=2x $ and $f_y(y) = 2(1-y)$
To calculate $E[X], E[Y]$ I need to multiply with $x$ and $y$ and integrate. Do I need to integrate for both $x$ and $y$ from 0 to 1? If not what will be the integration boundaries?
How do I calculate $E[XY]$? I tried with the following integral:
$$\int_0^{1} \int_0^{x} (2xy) \mathrm{d}y \mathrm{d}x$$
But I'm not sure if this is correct, since I don't get the right answer.
AI: You could use the expression $Cov(X,Y)=E((X-\mu_X)(Y-\mu_Y))$ instead. Since you have the marginals, $\mu_X$ and $\mu_Y$ should be easy to compute.
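For what it's worth, your $E[XY]$ integral is set up correctly, and the marginal expectations do integrate from $0$ to $1$; a sympy check using the joint density reproduces $1/36$:
```python
import sympy as sp

x, y = sp.symbols('x y')
f = 2  # joint density on 0 <= y <= x <= 1

EX  = sp.integrate(x * f,     (y, 0, x), (x, 0, 1))   # 2/3
EY  = sp.integrate(y * f,     (y, 0, x), (x, 0, 1))   # 1/3
EXY = sp.integrate(x * y * f, (y, 0, x), (x, 0, 1))   # 1/4
print(EXY - EX * EY)                                  # 1/36
```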
|
H: Combinatorics of binning data with repetitions
I'm trying to model random arrival times in discrete time bins.
Suppose I have $n$ (integer) arrival times, which are between $1$ and $m$, with $m$ possible time bins. I randomly draw $n$ integers between $1$ and $m$, and I place every one of the (possibly alike) random numbers in the bin with its number. Thus if I draw $\{1,5,9,5\}$, the bin count for this draw looks like $\{1,0,0,0,2,0,0,0,1,0\}$ and I call this a $\{2,1,1\}$ configuration.
What is the probability of finding a configuration $\{p_1,p_2,\ldots,p_n\}$, with $p_1\ge p_2\ge p_3$ etc, containing $p_1$ count in any bin, $p_2$ count in any other bin, and so forth until $p_n$ (which may or may not be $0$)?
For clarity I imagine I have $n=4$ arrival times and $m=10$ bins. There are $10^4$ possible outcomes. The probability of getting all different arrivals times is the number of permutations of a string like $\{0,0,0,0,0,0,1,2,3,4\}$, containing $4$ distinct symbols and $6$ other identical symbols.
This works out to $10\times 9\times 8\times 7=5040$ as I can choose to place $1$ in any of the $10$ slots, place $2$ in any of the remaining $9$ open slots etc. Thus this type of outcomes occurs with probability $5040/10000$.
Now if I try to compute the probability of getting two like arrival times, and the remaining two arrivals times different - say I draw $\{1,8,2,8\}$ something like $\{0,0,0,0,0,0,1,2,8,8\}$ - there are $10\times 9\times (8\times 7/2)=2520$ permutations of these. The logic is simple: I can place my first symbol in any of the 10 empty bins, my second symbol in any of the remaining $9$ empty bins, and my like symbols in any of the remaining bins, but I must divide by $2$ because they are identical.
However, by running a big numerical experiment where I randomly pick $4$-tuples between $1$ and $10$ and simply count the configurations, I find the correct number ought to be something like $10\times 9\times 8\times 6 = 10\times 9\times 8\times {4\choose 2}=4320$. Not good.
The results of the computer simulation (for $10^5$ draws) are
$$\left(
\begin{array}{cc}
\{1,1,1,1\} & 50371 \\
\{2,1,1\} & 43076 \\
\{3,1\} & 3690 \\
\{2,2\} & 2772 \\
\{4\} & 91 \\
\end{array}
\right)
$$
By hook or by crook I somehow produced the following table:
\begin{align}
\begin{array}{ccc}
\hbox{configuration}&\hbox{combinatorics}&\hbox{Prob}\\
\{1,1,1,1\}& 10!/6!&5040/10^4\\
\{2,1,1\}& 10\times 9\times 8\times {4\choose 2}&4320/10^4\\
\{3,1\}&10\times 9 \times {4\choose 3} & 360/10^4\\
\{2,2\}& 10\times 9 \times {4\choose 2}\times \frac{1}{2}& 270/10^4\\
\{4\} & 10 &10/10^4
\end{array}
\end{align}
The probabilities sum to $1$, ($10^5\times$Prob) more or less matches the numbers of the simulation, and there's definitely a pattern, but I am at a loss to understand how to generalize this to $n$ arrival times in $m$ time bins. It seems there is a prefactor which depends on the number of distinct symbols, and some combinatorial factor to account for identical entries.
However, trying $n=5$ times in $m=10$ bins, it's not clear how to infer from the pattern how to compute the probability of the configuration $\{2,2,1\}$ arriving in $10$ different bins.
Since my "configurations" $\{p_1,p_2,\ldots,p_n\}$, with $p_1\ge p_2\ge p_3$ etc., are similar to Young tableaux, I thought counting them might help, but it's not clear at all how this would be useful. Moreover, the pattern found for the case $n=4$ does not obviously generalize.
AI: So you have $n$ objects labelled $1,2, \cdots, n$, whose value ranges in $[1,m]$ and might be repeated.
A) Disregarding the time-sequence label, the different arrangements of the objects according to value (the frequency histogram)
correspond to the number of ways of arranging $n$ indistinguishable objects into $m$ distinguishable bins, or, what is the same,
to the number of weak compositions of
$n$ into $m$ parts, which is
$$\binom{n+m-1}{n}.$$
Assigning them the time labels corresponds to making all the possible permutations of the $n$ objects, which are $n!$
The total number thus comes out to be
$$
\left( \matrix{
n + m - 1 \cr
n \cr} \right)n! = {{\left( {n + m - 1} \right)^{\,\underline {\,n\,} } } \over {n!}}n! = \left( {n + m - 1} \right)^{\,\underline {\,n\,} } = m^{\,\overline {\,n\,} }
$$
However, this way of counting distinguishes among the histograms by
the number of balls in each bin;
the labels of the balls in each bin;
and, as well, the order of the ball labels within each bin.
For example, for two balls and two bins the $ 2^{\,\overline {\,2\,} } =6$ configurations are:
$$
\eqalign{
& \left( {\left. {\matrix{ a \cr b \cr } } \right|\emptyset } \right),
\;\left( {\emptyset \left| {\matrix{ a \cr b \cr } } \right.} \right),
\;\left( {\left. a \right|b} \right), \cr
& \left( {\left. {\matrix{ b \cr a \cr } } \right|\emptyset } \right),
\;\left( {\emptyset \left| {\matrix{ b \cr a \cr } } \right.} \right),
\;\left( {\left. b \right|a} \right) \cr}
$$
B) Now consider the expansion of the multinomial of degree $n$ in $m$ variables
$$
\eqalign{
& \left( {x_{\,1} + \,x_{\,2} + \, \cdots + \,x_{\,m} } \right)^{\,n}
= \left( {x_{\,1} + \,x_{\,2} + \, \cdots + \,x_{\,m} } \right) \cdots \left( {x_{\,1} + \,x_{\,2} + \,
\cdots + \,x_{\,m} } \right) = \cr
& = \cdots \; + x_{\,k_{\,1} } x_{\,k_{\,2} } \cdots x_{\,k_{\,n} } + \; \cdots \quad \left| {\;k_{\,j}
\in \left\{ {1, \cdots ,\,m} \right\}} \right. = \cr
& = \sum\limits_{\left\{ {\matrix{ {0\, \le \,r_{\,j} \, \le \,n} \cr {r_{\,1} + r_{\,2} + \,
\cdots + \,r_{\,m} \, = \,n} \cr } } \right.}
{\left( \matrix{
n \cr
r_{\,1} ,\,r_{\,2} ,\, \cdots ,\,r_{\,m} \cr} \right)x_{\,1} ^{\,r_{\,1} } x_{\,2} ^{\,r_{\,2} }
\cdots x_{\,m} ^{\,r_{\,m} } } \cr}
$$
The second line tells you that you have all the possible sequences of $n$ elements from the set
$\{ {x_{\,1} ,\,x_{\,2} ,\, \cdots ,\,x_{\,m} } \} $ with repetition allowed (any, from $0$ to $n$).
The third line gives you the number of ways to arrange the $n$ elements into a frequency histogram
with occupation profile $\left( {r_{\,1} ,\,r_{\,2} ,\, \cdots ,\,r_{\,m} } \right)$, considered as an $m$-tuple, i.e.
occurring exactly in that order.
The expansion of the multinomial consists in picking one of the $m$ values from the first parenthesis, one from the second, etc.,
which corresponds to taking ball No. $1$ and assigning it to one of the $m$ bins, and the same for the second ball up to the $n$th.
In this process the balls enter each bin naturally ordered according to their timing label, and we no longer distinguish
by the order inside a single bin.
The example $m=2,\, n=2$ now gives $m^n=4$ different arrangements as
$$
\left( {\left. {a,b} \right|\emptyset } \right),\;\left( {\emptyset \left| {a,b} \right.} \right),
\;\left( {\left. a \right|b} \right),\;\left( {\left. b \right|a} \right)
$$
and
$$
\left( \matrix{ 2 \cr 2,\,0 \cr} \right) = 1,
\quad \left( \matrix{ 2 \cr 0,\,2 \cr} \right) = 1,
\quad \left( \matrix{ 2 \cr 1,\,1 \cr} \right) = 2
$$
for each different $m$-tuple of the frequency profile.
C) The problem you pose corresponds to case B), but you are interested not just in a specific $m$-tuple,
but in any permutation of a given $m$-tuple.
Let's order the representative $m$-tuple in an increasing way (multiset) and let's count how many of its elements have value $0,1,\cdots,n$
$$
\left( {r_{\,1} ,\,r_{\,2} ,\, \cdots ,\,r_{\,m} } \right)\; \Rightarrow \;
\left\{ {\underbrace {0, \cdots ,0}_{q_{\,0} }\;,\;\underbrace {1, \cdots ,1}_{q_{\,1} }\;,\,\; \ldots \;,
\;\underbrace {n, \cdots ,n}_{q_{\,n\;} }\;} \right\}\quad \left| \matrix{
\;0 \le q_{\,j} \le n \hfill \cr
\;q_{\,0} + q_{\,1} + \cdots + q_{\,n} = m \hfill \cr
\;0q_{\,0} + 1q_{\,1} + \cdots + nq_{\,n} = n \hfill \cr} \right.
$$
Now the number of ways to permute $n+1$ different objects, each replicated $q_j$ times (null values included) for a total of $m$, is just the multinomial coefficient $\binom{m}{\mathbf q}$.
Therefore the required No. of ways would be
$$ \bbox[lightyellow] {
\eqalign{
& N = \left( \matrix{ n \cr r_{\,1} ,\,r_{\,2} ,\, \cdots ,\,r_{\,m} \cr} \right)
\left( \matrix{ m \cr q_{\,0} ,q_{\,1} , \cdots ,q_{\,n} \cr} \right) = \cr
& = {{n!} \over {r_{\,1} !\,\;r_{\,2} !\,\; \cdots \,\;r_{\,m} !}}{{m!} \over {q_{\,0} !\;\;q_{\,1} !\;
\cdots \;q_{\,n} !}} = \cr
 & = {{n!} \over {r_{\,1} !\,\;r_{\,2} !\,\; \cdots \,\;r_{\,m} !\;0! \cdots 0!}}{{m!} \over {q_{\,0} !\;\;q_{\,1} !\;
 \cdots \;q_{\,n} !}} = \cr
& = {{n!} \over {\left( {0!} \right)^{\,q_{\,0} } \;\left( 1 \right)!\,^{\,q_{\,1} } \; \cdots \,\;
\left( {n!} \right)^{\,q_{\,n} } }}{{m!} \over {q_{\,0} !\;\;q_{\,1} !\; \cdots \;q_{\,n} !}} \cr}
}$$
In your example with $n=4, m=10$
$$
\eqalign{
& \left\{ {1,1,1,1} \right\}\; \Rightarrow \;{\bf r} = \left( {0, \cdots ,0,1,1,1,1} \right)\;
\Rightarrow \;{\bf q} = \left( {6,4,0, \cdots ,0} \right) \Rightarrow \cr
& \Rightarrow \;N = {{n!} \over {\left( {0!} \right)^{\,6} \;\left( 1 \right)!\,^{\,4} }}{{m!} \over {6!\;\;4!\;}}
= {{10!} \over {6!}} = 10^{\,\underline {\,4\,} } = 5040 \cr
& \left\{ {1,1,2} \right\}\; \Rightarrow \;{\bf r} = \left( {0, \cdots ,0,1,1,2} \right)\; \Rightarrow \;{\bf q}
= \left( {7,2,1, \cdots ,0} \right) \Rightarrow \cr
& \Rightarrow \;N = {{n!} \over {\left( {0!} \right)^{\,7} \;\left( 1 \right)!\,^{\,2} \;\left( 2 \right)!\,^{\,1} }}
{{m!} \over {7!\;\;2!\;\;1!\;}} = {{4!10!} \over {7!\, \cdot 4}} = 6 \cdot 10^{\,\underline {\,3\,} } = 4320 \cr
& \left\{ {1,3} \right\}\; \Rightarrow \;{\bf r} = \left( {0, \cdots ,0,0,1,3} \right)\; \Rightarrow \;{\bf q}
= \left( {8,1,0,1,0 \cdots ,0} \right) \Rightarrow \cr
& \Rightarrow \;N = {{n!} \over {\left( {0!} \right)^{\,8} \;\left( 1 \right)!\,^{\,1} \;\left( 3 \right)!\,^{\,1} }}
{{m!} \over {8!\;\;1!\;1!\;}} = {{4!10!} \over {3!\, \cdot 8!}} = 4 \cdot 10^{\,\underline {\,2\,} } = 360 \cr
& \left\{ {2,2} \right\}\; \Rightarrow \;{\bf r} = \left( {0, \cdots ,0,0,2,2} \right)\; \Rightarrow \;{\bf q}
= \left( {8,0,2,0 \cdots ,0} \right) \Rightarrow \cr
& \Rightarrow \;N = {{n!} \over {\left( {0!} \right)^{\,8} \;\left( 2 \right)!\,^{\,2} }}{{m!} \over {8!\;\;2!\;}}
= {{4!10!} \over {4 \cdot 2\, \cdot 8!}} = 3 \cdot 10^{\,\underline {\,2\,} } = 270 \cr
& \left\{ 4 \right\}\; \Rightarrow \;{\bf r} = \left( {0, \cdots ,0,0,4} \right)\; \Rightarrow \;{\bf q}
= \left( {9,0,0,0,1,0 \cdots ,0} \right) \Rightarrow \cr
& \Rightarrow \;N = {{n!} \over {\left( {0!} \right)^{\,9} \;\left( 4 \right)!\,^{\,1} }}{{m!} \over {9!\;\;1!\;}}
= {{4!10!} \over {4! \cdot 9!}} = 1 \cdot 10^{\,\underline {\,1\,} } = 10 \cr
& {\rm Tot} = 10000 = m^{\,n} \cr}
$$
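A quick sketch (in Python; the function names are my own) that implements the boxed formula and cross-checks it against brute-force enumeration of all $m^n$ assignments for $n=4$, $m=10$:

from collections import Counter
from itertools import product
from math import factorial

def N(config, m):
    # config = nonzero occupation numbers, e.g. (2, 1, 1); n = total balls
    n = sum(config)
    q = Counter(config)          # q[j] = number of bins holding exactly j balls
    q[0] = m - len(config)       # empty bins
    den = 1
    for j, qj in q.items():
        den *= factorial(j) ** qj * factorial(qj)
    return factorial(n) * factorial(m) // den

brute = Counter()
for assign in product(range(10), repeat=4):
    brute[tuple(sorted(Counter(assign).values(), reverse=True))] += 1

for config in [(1, 1, 1, 1), (2, 1, 1), (3, 1), (2, 2), (4,)]:
    print(config, N(config, 10), brute[config])   # 5040, 4320, 360, 270, 10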
|
H: The curl operator: Why does it map $C^k$ functions in $\mathbb{R}^3$ to $C^{k-1}$ functions in $\mathbb{R}^3$?
My understanding of the curl operator is the following: it maps continuously differentiable functions
$f\colon \mathbb{R}^3 \to \mathbb{R}^3$ to continuous functions $g\colon \mathbb{R}^3 \to \mathbb{R}^3$; in particular it maps $C^k$ functions in $\mathbb{R}^3$ to $C^{k-1}$ functions in $\mathbb{R}^3$. However, my question is this:
why does it map functions with $k$ continuous derivatives to functions with $k-1$ continous derivatives?
Implicitly, the curl of a vector field F is defined as (where $p$ is any point in the field):
$$(\nabla \times \mathbf{F})(p) \cdot \mathbf{n} \stackrel{\mathrm{def}}{=} \lim_{|A| \to 0}\frac{1}{|A|}\oint\limits_{C}\mathbf{F} \cdot d\mathbf{r},$$
where the limit is taken as the area $|A|$ of a surface patch through $p$, with boundary curve $C$ and unit normal $\mathbf{n}$, shrinks to zero.
My only thought as to why this is true is that, in the definition shown above, $\nabla \times \mathbf{F}$ (before it acts on the point $p$) is simply the formal determinant of a matrix whose second row consists of the partial-derivative operators, which act on the component functions of the vector field in standard Cartesian coordinates.
AI: The curl is also defined using derivatives. Check the Wikipedia definition. The components of the curl of a vector field $\mathbf{F}$ are linear combinations of the partial derivatives of $\mathbf{F}$. If $\mathbf{F}$ is $k$-times continuously differentiable, then its partial derivatives are $k-1$-times continuously differentiable. So $\nabla \times \mathbf{F}$ is in $C^{k-1}$.
You can easily construct a vector field in $C^k$ whose curl is NOT in $C^k$; for example $\mathbf{F}(x,y,z) = \min(y^{k+1}, 0)\, \hat{x}$. (The component must depend on a variable other than $x$: an $\hat x$-component depending only on $x$ has zero curl. Here the $k$-th derivative of $\min(y^{k+1},0)$ is continuous, but the $k$-th derivative of its $y$-derivative, which appears in the curl, is not.)
|
H: Coin Flipping Game - Wins at 20 Heads
Game Rules
Let's say you have a coin, with $50/50$ chance of landing on Heads or Tails.
You win the game when you get $20$ Heads.
Question
Now, knowing we have already thrown the coin $50$ times, what are the odds that we have fewer than $10$ throws left to win?
Answer
The first thing that came to my mind is that to get the answer, I could do the sum of all the possible outcomes from $1$ throw left to win to $9$ throws left to win.
I'm not sure whether this is a problem that involves a cumulative distribution or maybe a negative binomial distribution.
AI: Note that winning after exactly $n$ more throws means that, out of the total $(50+n)$ throws, the last throw has to result in a head; before that there were exactly $19$ heads and the remaining $(30+n)$ throws were tails.
So the required probability is $\sum_{n=0}^9 {50+(n-1) \choose 19}(\frac{1}{2})^{50+n}$
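A short Python sketch that evaluates this sum as stated:

from math import comb

p = sum(comb(50 + n - 1, 19) / 2 ** (50 + n) for n in range(10))
print(p)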
|
H: Understanding Wittgenstein's proof of Infinitude of prime
Can someone please tell me why the last claim "It is thus the case..." is true?
I tried considering negation of the last claim. But it didn't help.
Any help would be appreciated. Thanks in advance.
AI: The product on the right expands into the sum of $\frac 1n$ where $n$ is any number divisible only by the primes $\le m$. These numbers may occur with some multiplicity; that doesn't matter. It is also easy to see that the product on the right is $$\frac 2{2-1}\times \frac 3{3-1}\times \cdots \times \frac m{m-1}=m$$
Thus, if every natural number from $1$ to $4^m$ factored completely using the primes $≤m$ we'd have that the left hand sum was $≤ m$ contrary to the stated premise.
|
H: How to evaluate $\int_{-\pi/2}^{\pi/2} \tan x \cos (A \cos x +B \sin x) \, dx$?
$$\int_{-\pi/2}^{\pi/2} \tan x \cos (A \cos x +B \sin x) \, dx$$
Is it possible to calculate this? Both A and B are non-zero and assumed to be real numbers.
I tried Integrate[Tan[x]*Cos[A*Cos[x]+B*Sin[x]],{x,-Pi/2,Pi/2},PrincipalValue->True],
but it didn't work.
I would be very grateful if you could share some of the good integration skills, ideas, or any advice.
p.s. I think the integration result should be expressed as a combination of Bessel functions.
AI: One can find a representation in terms of infinite series
Integrate[Tan[x]Series[Cos[b Sin[x]+a Cos[x]],{a,0,5}]//Normal,{x,-Pi/2,Pi/2},PrincipalValue->True]
It yields
$$\frac1{\pi}I= -a J_1(b)+\frac{a^3}{6b} J_2(b)-\frac{a^5}{40b^2}J_3(b)+\ldots$$
We can continue to find the following representation
$$
\frac1{\pi}I=\sum_{i=1}^{\infty} \frac{(-1)^{i}a^{2i-1}}{2^{i-1} (i-1)! (2i-1)b^{i-1}} J_{i}(b).
$$
So, you were right about the Bessel functions.
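A rough numerical cross-check of this series (a Python sketch; the test values of $a,b$ and the symmetric truncation $\varepsilon$ used to approximate the principal value are my own arbitrary choices):

import numpy as np
from math import factorial, pi
from scipy.integrate import quad
from scipy.special import jv

a, b, eps = 0.7, 1.3, 1e-6
f = lambda x: np.tan(x) * np.cos(a * np.cos(x) + b * np.sin(x))
I, _ = quad(f, -pi / 2 + eps, pi / 2 - eps, limit=500)

S = pi * sum((-1) ** i * a ** (2 * i - 1)
             / (2 ** (i - 1) * factorial(i - 1) * (2 * i - 1) * b ** (i - 1)) * jv(i, b)
             for i in range(1, 15))
print(I, S)   # the two should agree to several digits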
Note
I am cautiously optimistic that a closed form may be found due to the existence of recursive relations that can be used to reduce $J_i$ to a sum of just a few Bessel functions of lower order. However, Mathematica seems not to recognize these relations. Maybe someone here can help.
|
H: Find inverse element of $1+2\alpha$ in $\mathbb{F}_9$
Let $$\mathbb{F}_9 = \frac{\mathbb{F}_3[x]}{(x^2+1)}$$ and consider $\alpha = \bar{x}$. Compute $(1+2 \alpha)^{-1}$
I think I should use the extended Euclidean algorithm: so I divide $x^2 +1 $ by $(1+2x)$:
$$x^2 + 1 = (1+2x)(2x+2)+2$$
$$(2x+2)(1+2x) + 2(x^2+1) = 1$$
Therefore, reducing modulo $(x^2+1)$, I have $$(2x+2)(1+2x) \equiv 1 \pmod{x^2+1}$$
and so $2x+2 = (1+2x)^{-1}$
Is it okay, or did I misunderstood something?
AI: $(2x+2)(2x+1)=2(x+1)(2x+1)=2(2x^2+3x+1).$
Since we are in $\mathbb{F}_3,$ then $3x=0$ and $4x^2=x^2$ so
$(2x+2)(2x+1)=2(2x^2+1)=4x^2+2=x^2+2=(x^2+1)+1,$
reducing modulo $(x^2+1)$ we get $1,$ so yeah, you are correct.
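For what it's worth, a tiny Python check of this product (coefficients listed lowest degree first; the reduction uses $x^2 \equiv -1 \equiv 2 \pmod 3$):

def mul_mod3(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % 3
    return r

c0, c1, c2 = mul_mod3([2, 2], [1, 2])   # (2 + 2x)(1 + 2x) over F_3
print(((c0 + 2 * c2) % 3, c1 % 3))      # reduce x^2 -> 2; expect (1, 0), i.e. 1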
|
H: Understanding surjectivity proof of $f(n)=2^n$.
Working on the book: Richard Hammack. "Book of Proof" (p. 252)
Let $B=\{2^n:n \in \mathbb{Z}\}$. Show that the function $f: \mathbb{Z}\to B$, defined as $f(n) = 2^n$ is bijective. Then find $f^{-1}$.
The author proves surjectivity:
The function $f$ is surjective as follows. Suppose $b \in B$. By definition of $B$ this means $b = 2^n$ for some $n \in \mathbb{Z}$. Then $f(n) = 2^n = b$.
Perhaps I'm missing something, but I think this does not proves surjectivity. Instead, would be neccesary to take an arbitrary element $b \in B$, and show there exists an element $a \in \mathbb{Z}$ such that $f(a) = b$. In this case, letting $a = \log_2(b)$, we see
$$
f(a)=f(\log_2(b))=2^{\log_2(b)}=b
$$
Is my observation correct ?
AI: Perhaps I'm missing something, but I think this does not proves surjectivity. Instead, would be neccesary to take an arbitrary element b∈B, and show there exists an element a∈Z such that f(a)=b.
That is exactly what the author did do!
S/he picked $b$ to be an arbitrary element of $B$. By the definition of $B$ all the elements of $B$ are of the form $2^n$ for some $n\in \mathbb Z$.
So s/he just let $a$ be that $n$. She just didn't want to spend the $25$ cents to buy a second variable. (If she asked me, I know a place where she can get variables wholesale.....)
In this case, letting a=log2(b),
That's one way to get the $a$.... but you have to then prove that $\log_2 b\in \mathbb Z$[1].
But another way to get that $a$ is to say $b = 2^n$ for some $n$, so let $a = n$.
That way you can save on the cost of the function turning handcranks. Variables are cheap but the function turning handcranks are mucho bucks if you don't need them.
====
[1] How would you prove $\log_2 b \in \mathbb Z$?
Well, really the only way I see is: ... Let $b \in B$. Then there is an integer $n$ so that $b = 2^n$. So $\log_2 b =\log_2 2^n = n$ which is an integer.
|
H: Two different cases of uniform hypothesis testing
I have two different p-value uniform-distribution problems. I know that the definition of the p-value is: the probability of observing a new $X$ at least as extreme as the initial $X$.
Problem I:
$X$ has a uniform distribution on interval $[0, z]$. We test $H_0: z=3$
against $z >3$ as test static we take $X$. We observe $x=1$, what is the
p-value?
Problem II:
We have a collection of tanks numbered from $1$ to $K$, and $20$ of them are
chosen as a sample with replacement. We want to test $H_0: K = 100000$
versus $H_1: K<100000$. The test statistic is the max number from the
sample $M.$ Assume $M= 81115$ what is the p-value?
The first p-value is the region to the right of the observed value $(2/3)$. The second p-value is the region to the left of the observed value, $(81115/100000)^{20}$. Initially I thought I had to look at the sign of the $H_1$. But that 'assumption' doesn't hold with the theory.
I see how the definition of the p-value holds in the first one. You want to know the chance of observing a value $x$ bigger than or equal to $1$, so you take the right region.
I don't see how the definition holds in the case of problem 2. I think I'm confused by the test statistic $\max\{\}$. So, with the same line of thought: what is the probability of achieving the same or a more extreme result? Here I get stuck; I don't see why the left region is calculated. I do see why it is raised to the power $20$, because of the independence of the $20$ observations.
Question 1: Why is the left region taken in problem 2?
Question 2: Here the test statistic is $\max\{x_1,x_2,\ldots,x_n\}$, but what should I do when the test statistic is $\operatorname{MEAN}\{x_1,x_2,\ldots,x_n\}$ or $\min\{x_1,x_2,\ldots,x_n\}$? Is there a derivation somewhere I can look into?
AI: The $p$-value is the probability of observing a value of the test statistic that is at least as extreme as what you observed, given that the null hypothesis is true. In the case of the second question, this means observing a maximum tank number that is $81115$ or smaller, given that there are $K = 100000$ tanks. The reason why is because if $M$ is the maximum tank number observed in the sample, and the alternative hypothesis is that there are fewer than $100000$ tanks, the smaller the value of $M$ you observe, the more evidence you have in favor of rejecting $H_0$. Consequently, smaller $M$ values are considered "more extreme" than larger ones. To illustrate, if indeed it was true that there are $100000$ tanks, and you observed $M = 37$, is that very likely? You'd have to pick, out of $20$ tries, tanks with numbers not exceeding $37$ every time. The probability of such an event is $(37/100000)^{20} \approx 2.31225 \times 10^{-69}$.
This is why your $p$-value for the second question is $(81115/100000)^{20} \approx 0.0152063$, because this is the probability that the maximum tank number in a sample of size $n = 20$ is at most $81115$, assuming there are $100000$ tanks. It's not impossible, but somewhat unlikely: each tank in your sample had only a $0.81115$ probability of bearing a number not exceeding the value of your statistic.
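(A quick Python check of these numbers:)

print((3 - 1) / 3)              # problem I: P(X >= 1 | z = 3) = 2/3
print((81115 / 100000) ** 20)   # problem II: about 0.0152063
print((37 / 100000) ** 20)      # the illustrative M = 37 case: about 2.3e-69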
|
H: Question about integral in measure theory
I have a question About excersice 4.E of Bartle's the elements of integration and measure theory, the problem says: If $f\in M^+ (X,\mathbb{X})$ and
$$\int fd\mu < +\infty,$$
then for every $\varepsilon>0$ there exist a set $E \in \mathbb{X}$ such that $\mu(E)< +\infty$ and
$$\int fd\mu \leq \int_E fd\mu +\varepsilon.$$
My attempt: Let $\varepsilon>0$, define $E_n = \{x\in X \mid f(x) \geq \varepsilon/n\}$ so that $\{E_n\}$ is an increasing sequence of sets, and let
$$f_n = f\chi_{E_n},$$
where ${\chi}_{E_n}$ is the characteristic function. $\{f_n\}$ is an increasing sequence and $f_n \rightarrow f$
But I don't know how to proceed. Could you suggest me a hint?
AI: By the Monotone Convergence Theorem, $\int f_n \,d\mu \to \int f \,d\mu$. Hence there exists $n$ such that $\int f_n \,d\mu > \int f \,d\mu-\epsilon$. Now take $E=E_n$ and note that $\int_E f\,d\mu$ is the same as $\int fI_E \,d\mu$.
Note that $\int f d\mu \geq \int fI_{E_n} d\mu \geq \int \epsilon /n I_{E_n} d\mu =(\epsilon /n)\mu(E_n)$ so $\mu(E_n) <\infty$ for all $n$.
|
H: Containment of arbitrary families of sets
Apparently for $\{ A_{ij} \}_{(i,j)\in I \times J}$,
$$\bigcup\limits_{j \in J} \left(\bigcap\limits_{i \in I} A_{ij}\right) \subseteq \bigcap\limits_{i \in I} \left(\bigcup\limits_{j \in J} A_{ij}\right).$$
I'm having a hard time proving that containment. Here's what I've got so far:
"Let $x \in \bigcup_{j \in J} \left( \bigcap_{i \in I} A_{ij} \right)$. That means there exists some $j \in J$ such that, for all $i \in I$, $x$ is in $A_{ij}$." I'm not sure if I can therefore argue that for every $i \in I$ there is a $j \in J$ such that $x \in A_{ij}$, especially because the other containment doesn't always hold according to my textbook. I.e.,
$$\bigcup\limits_{j \in J} \left(\bigcap\limits_{i \in I} A_{ij}\right) \nsupseteq \bigcap\limits_{i \in I} \left(\bigcup\limits_{j \in J} A_{ij}\right).$$
I suppose this is a matter of quantifiers, but I just don't know what I'm missing.
AI: You’ve started off fine. Fix $j_0\in J$ such that $x\in A_{ij_0}$ for each $i\in I$. Then for each $i\in I$ we have
$$x\in A_{ij_0}\subseteq\bigcup_{j\in J}A_{ij}\;,\tag{1}$$
and since $(1)$ is true for every $i\in I$, we must have
$$x\in\bigcap_{i\in I}\bigcup_{j\in J}A_{ij}\;.$$
Finally, $x$ was an arbitrary element of $\bigcup_{j\in J}\bigcap_{i\in I}A_{ij}$, so
$$\bigcup_{j\in J}\bigcap_{i\in I}A_{ij}\subseteq\bigcap_{i\in I}\bigcup_{j\in J}A_{ij}\;.$$
|
H: counterexample of reflexive space not hilbert
We know that all Hilbert spaces are reflexive. My problem is to show that the converse is not true, but I can't find a counterexample. An idea, please?
AI: $\ell^{p}, L^{p}([0,1])$ with $1 <p <\infty$, $p \neq 2$, are reflexive spaces which are not inner product spaces.
|
H: Simplification of $\sqrt{2\zeta^2-1+2\zeta\sqrt{\zeta^2-1}}+\sqrt{2\zeta^2-1-2\zeta\sqrt{\zeta^2-1}}$
If I try to evaluate $\sqrt{2\zeta^2-1+2\zeta\sqrt{\zeta^2-1}}+\sqrt{2\zeta^2-1-2\zeta\sqrt{\zeta^2-1}}$ numerically for real $\zeta$, it looks like it is just equal to $2|\zeta|$ for $\zeta \ne 0$ and $2j$ for $\zeta=0$, but I can't figure out how to simplify to get there...
It's of the form $\sqrt{b+c} + \sqrt{b-c}$ with $b=2\zeta^2-1$ and $c=2\zeta\sqrt{\zeta^2-1}$. I can write:
$$\sqrt{b+c} + \sqrt{b-c} = \frac{(b+c) - (b-c)}{\sqrt{b+c} - \sqrt{b-c}}$$
but that doesn't seem to help either....
AI: Oh, I figured it out:
$$\begin{align}
(\sqrt{b+c}+\sqrt{b-c})^2 &= (b+c)+2\sqrt{b^2-c^2}+(b-c) \\
&= 2b+2\sqrt{b^2-c^2}
\end{align}$$
and in this case $b^2 - c^2 = 4\zeta^4-4\zeta^2+1 - 4\zeta^4 +4\zeta^2 = 1$
so
$$\begin{align}
(\sqrt{b+c}+\sqrt{b-c})^2 &= (b+c)+2\sqrt{b^2-c^2}+(b-c) \\
&= 2b+2 \\
&= 4\zeta^2
\end{align}$$
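A numeric check (Python sketch; note the $2j$ — Python's imaginary unit — at $\zeta=0$, matching the observation in the question):

import numpy as np

for zeta in [0.0, 0.3, 1.0, 2.5]:
    z = complex(zeta)
    b = 2 * z**2 - 1
    c = 2 * z * np.sqrt(z**2 - 1)
    print(zeta, np.sqrt(b + c) + np.sqrt(b - c))   # approx. 2j, 0.6, 2.0, 5.0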
|
H: State Diagrams Probability Question
A bug is sitting on vertex A of a regular tetrahedron. At the start of each minute, he randomly chooses one of the edges at the vertex he is currently sitting on and crawls along that edge to the adjacent vertex. It takes him one minute to crawl to the next vertex, at which point he chooses another edge (at random) and starts crawling again. What is the probability that, after 6 minutes, he is back at vertex A?
This problem can be solved with a simple state diagram with 2 states (on A, not on A).
$P_n = \frac{1}{3}(1-P_{n-1})$. Basically, the probability that the bug is on A is the probability that it wasn't on A on the previous timestep, times $1/3$ (it chooses to go to A). Using this recurrence relation, we get the answer of $61/243$.
Now this is my question. If we do the same problem, but now the bug is crawling on a cube, why is the answer the same? On a cube, if we drew the state diagram, there would be 4 states: on A, 1 away from A, 2 away from A, and then 3 away from A. How come the answer comes out to be the same, 61/243?
AI: Let $X_n\in\{T_0,T_1\}$ denote the position of the ant on the tetrahedron and let $Y_n\in\{C_0,C_1,C_2,C_3\}$ denote the position of the ant on the cube, where $n=0,1,2,\dots$ is the time step, and the state index represents distance from initial position. Both $X_n$ and $Y_n$ are Markov processes.
Observe that if the cube ant starts in position $C_0$, then for even $n$ we must have $P(Y_n=C_1)=P(Y_n=C_3)=0$; this is because on the cube the ant is forced to move to an adjacent state each step. Thus, if we only consider even $n$, we can form a reduced system consisting only of the states $\{C_0,C_2\}$. It is not hard to check that this reduced (two-step) system has the same transition probabilities as the two-step chain of the tetrahedron process ($1/3$ from the home state back to home, $2/9$ from away back to home), so the two answers agree at every even time, in particular at $n=6$.
|
H: If the function does not depend on the indicated parameter, why is the derivative zero?
If we have the derivative $\dfrac{dy}{dx}$ but $y$ doest not depend on $x$, why is $\dfrac{dy}{dx} = 0 ?$
I think that a possible correct thought is that if we see the derivative as a rate of change, it is clear that since the variable $x$ does not affect $y$, no change occurs and therefore the derivative is zero.
But, what is the interpretation if we see the derivative as slope ?
AI: If you think of $y$ as a function not depending on $x$, then what you're saying is that $y$ is a constant, for any value of $x$. Pictorially, that means the graph is a horizontal line at some height $c \in \mathbb{R}$. What is the slope of a horizontal line? (Zero!)
|
H: Showing an operator is not bounded.
There are two spaces $C^1 [0,1]$ and $C[0,1]$ with supremum norm, which is defined by
$$ \|f\| = \sup_{x\in[0,1]} |f(x)|$$
for any $f$. I have to show that if the operator $A:C^1[0,1] \rightarrow C[0,1]$ is defined by $Af=f'$, then $A$ is not bounded.
I tried to find some counterexample function $f\in C^1[0,1]$ not satisfying $\|Af\| \le C\|f\|$ for some uniformly $C$ . But I failed. How can I show that?
AI: Boundedness of a linear map is equivalent to continuity.
Let $f_n(x)=\frac {x^{n}} n, f(x)=0$. Then $f_n \to f$ uniformly but $f_n'$ does not tend to $f'$ uniformly.
Also $\|f_n\|=\frac 1 n$ and $\|f_n'\|=1$ so your constant $C$ does not exist.
|
H: Area between $5e^x$ and $5xe^{x^2}$ using substitution
I need to find the answer to this problem:
and was told to substitute $u=x^2$. I tried that and couldn't get to the correct answer. When substituting, I get $\frac{5}{2}\, du = 5x \, dx$, and I pull out $\dfrac{5}{2}$ from the integral, and my integrand turns out to be $x^{-1}e^x + e^u$. Am I messing up somewhere? I can't get to $\dfrac{5}{2}(e-1)$ (the correct answer) no matter what I try. Any guidance or help is appreciated.
AI: Split the integral into a subtraction of integrals, and substitute $u=x^2, \mathrm d u=2x\mathrm d x$ only on the second integral.$$\begin{align}\int_0^1 5(\mathrm e^x-x\mathrm e^{x^2})~\mathrm d x&=5\int_0^1 \mathrm e^x~\mathrm d x-\tfrac 52\int_0^1 \mathrm e^u\mathrm d u\\[1ex]&=\tfrac 52\int_0^1\mathrm e^x\,\mathrm d x \end{align}$$
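(A sympy one-liner confirming the value, for what it's worth:)

from sympy import exp, integrate, symbols

x = symbols('x')
print(integrate(5 * (exp(x) - x * exp(x**2)), (x, 0, 1)))   # -5/2 + 5*E/2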
|
H: Integration of $\frac{1}{u^4 + (4\zeta^2-2)u^2 + 1}$
I am trying to compute
$$I(\zeta) = \int_{-\infty}^{\infty} \frac{1}{u^{4} + \left(4 \zeta^{2} - 2\right)u^{2} + 1}\, du$$
for positive real $\zeta$. Can anyone help?
I'm way out of practice for integrals except for simple stuff like $\int 1/(1+u^2)\, du = \tan^{-1} u + C$.
Sympy fails on the definite integral and gives me this weird RootSum expression for the indefinite integral:
$$\operatorname{RootSum} {\left(t^{4} \left(4096 \zeta^{8} - 8192 \zeta^{6} + 4096 \zeta^{4}\right) + t^{2} \left(256 \zeta^{6} - 384 \zeta^{4} + 128 \zeta^{2}\right) + 1, \left( t \mapsto t \log{\left (- 512 t^{3} \zeta^{6} + 768 t^{3} \zeta^{4} - 256 t^{3} \zeta^{2} - 32 t \zeta^{4} + 32 t \zeta^{2} - 4 t + u \right )} \right)\right)}$$
Wolfram Alpha gives me the following for the indefinite integral :
$$\begin{align}
& \frac{\frac{1}{a_1}\tan^{-1} \frac{u}{a_1} - \frac{1}{a_2}\tan^{-1} \frac{u}{a_2}}{4\zeta\sqrt{\zeta^2-1}} + C \\
\\
a_1 &= \sqrt{2\zeta^2-2\zeta\sqrt{\zeta^2-1}-1} = \sqrt{b-c}\\
a_2 &= \sqrt{2\zeta^2+2\zeta\sqrt{\zeta^2-1}-1} = \sqrt{b+c}\\
\end{align}$$
(with $b=2\zeta^2-1$ and $c=2\zeta\sqrt{\zeta^2-1}$) but I'm a bit lost how it got there, and then I'm not exactly sure what to do if $\zeta \le 1$ (is the formula still valid?!)
edit: OK, partial fraction expansion is sloooowwwwly coming back to me. It looks like $a_1a_2 = 1$ and $a_1{}^2 + a_2{}^2 = 4\zeta^2-2$, so I guess they used the expansion
$$
\frac{1}{u^{4} + \left(4 \zeta^{2} - 2\right)u^{2} + 1} = \frac{1}{4\zeta\sqrt{\zeta^2-1}}\left(\frac{1}{u^2+a_1{}^2} - \frac{1}{u^2+a_2{}^2}\right)
$$
AI: Note
\begin{align}
& \int_{-\infty}^{\infty}\frac{du}{u^4 + (4\zeta^2-2)u^2 + 1}\\
= & \int_{0}^{\infty}\left( \frac{1+\frac1{u^2}}{u^2+\frac1{u^2} + 4\zeta^2-2} -\frac{1-\frac1{u^2}}{u^2+\frac1{u^2} + 4\zeta^2-2}\right)du\\
= & \int_{0}^{\infty}\left( \frac{d(u-\frac1{u})}{(u-\frac1{u} )^2+ 4\zeta^2} -\frac{d(u+\frac1{u})}{(u+\frac1{u} )^2+ 4\zeta^2-4}\right)\\
= & \int_{-\infty}^{\infty} \frac{dt}{t^2+ 4\zeta^2}- \int_{\infty}^{\infty} \frac{dt}{t^2+ 4\zeta^2-4}\\
=&\frac\pi{2\zeta}-0 =\frac\pi{2\zeta}
\end{align}
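A quick numerical sanity check of this result (Python sketch; the test values of $\zeta$ are arbitrary):

from math import inf, pi
from scipy.integrate import quad

for zeta in [0.5, 1.0, 2.0]:
    I, _ = quad(lambda u: 1 / (u**4 + (4 * zeta**2 - 2) * u**2 + 1), -inf, inf)
    print(zeta, I, pi / (2 * zeta))   # the last two columns should agree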
|
H: IS $(\mathbb{Z}_4,+) \rightarrow (\mathbb{Z}_5^{*},\cdot), n\pmod 4 \mapsto 2^n \pmod 5 $ well-defined??
For the following relation
$(\mathbb{Z_4},+) \rightarrow (\mathbb{Z_5^{*}},\cdot), n\bmod 4 \mapsto 2^n \bmod 5 $
Determine if it is well-defined (so that it is a mapping)
Can someone show me how to do it?
So i know that to know if it is well-defined I have to show it doesn't depend on the member of the class chosen to represent it.
So I take
$n_1 \equiv n_2$ that is $n_1-n_2=4k$
then
$2^{n_1} =2^{n_2+4k}=16^k2^{n_2}=(5n+1)2^{n_2}=5n'+2^{n_2}$
so $2^{n_1}\equiv 2^{n_2} \pmod 5$
I am not sure, if it is correct, but still if it is I am not happy with the notation, in which I have mixed $\equiv$ with variables introduced to show one quantity is a multiple of the other. Can someone rewrite it better?
AI: Let's be careful. Notice that $(\mathbb Z_4, +)$ as set is finite and has four elements and those elements are not integers.
$(\mathbb Z_4, +)$ is (using the notation used in the, rather dubious, IMO, phrase "$n\pmod 4\mapsto 2^n \pmod 5$") the set $\{[0]_4,[1]_4, [2]_4,[3]_4\}$ where the element $[k]_4$ is a class of integers $\{n\in \mathbb Z: 4|(n-k)\}$ or $\{n\in \mathbb Z: n\equiv k \pmod 4\}$ or $\{k+4m|m\in \mathbb Z\}$.
So what ther are saying is the relationship maps $[n]_4\to $ the equivalence class so that if $k\in [n]_4$ then the mapped value will be $[2^k]_5$.
The BIG assumption is that for all the $k\in [n]_4$ that all the integers $2^k$ will be in the same equivalence classes $\mod 5$.
so what we must show is that if $n \equiv m\pmod 4$ then i) $2^n \equiv 2^m\pmod 5$ and that ii) $2^n\not \equiv 0 \pmod 5$.
And that is straightforward, and you did it correctly.
There is one caveat we should be aware of. Suppose $n\ge 0$, $n \equiv k\pmod 4$, and $k < 0$: what do we mean by $2^k\pmod 5$? For instance, what does $\frac 18 \pmod 5$ mean? Well, that actually just means: which class $[a]_5$ satisfies $[a]_5\cdot 8\equiv 1\pmod 5$? That is $[a]_5 = [2]_5$.
This is fine. If $k < 0$, then since $2^4\equiv 1 \pmod 5$ we have $2^k\equiv 2^k\cdot(2^{4m})\equiv 2^{k+4m}\pmod 5$, and we can just pick an $m$ that makes the exponent positive.
|
H: I have this identity that I'd like to prove. $\sum_{k=0}^{n}\left(\frac{n-2k}{n}\binom{n}{k}\right)^2=\frac{2}{n}\binom{2n-2}{n-1}$
I have this identity that I'd like to prove.
$$\displaystyle{\sum_{k=0}^{n}\bigg(\dfrac{n-2k}{n}\binom{n}{k}}\bigg)^2=\dfrac{2}{n}\binom{2n-2}{n-1}$$
Here's what I have done so far: (using a binomial indentity)
$$=\displaystyle{\sum_{k=0}^{n}\bigg({\binom{n}{k}-2\binom{n-1}{k-1}}\bigg)^2}$$
$$=\displaystyle{\sum_{k=0}^{n}\bigg({\binom{n-1}{k}-\binom{n-1}{k-1}}\bigg)^2}$$
At this point I expanded the square. Here's where I made a mistake:
$$=\displaystyle{\sum_{k=0}^{n}\binom{n-1}{k}^2+\sum_{k=0}^{n}\binom{n-1}{k-1}^2-\sum_{i=0}^{n}\sum_{j=0}^{n}\binom{n-1}{j-1}\binom{n-1}{i}}$$
$$=\displaystyle{2\binom{2n-2}{n-1}-\sum_{i=0}^{n}\sum_{j=0}^{n}}\binom{n-1}{j-1}\binom{n-1}{i}$$
because I can separate the sums
$$=\displaystyle{2\binom{2n-2}{n-1}-2^{2n-2}}$$
Clearly, at some point here I made a stupid mistake. I was hoping someone will point the error to me and perhaps give me a hint. I prefer hints to complete solutions. Thank you for your time.
AI: After expanding the square, the summation index should be the same for the last summation, making it a single summation over $i,$ not a double summation with $i,j.$ You then have a sum which can be evaluated with Vandermonde's identity.
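If it helps, the identity is easy to sanity-check numerically before proving it (a Python sketch, in exact rational arithmetic):

from fractions import Fraction
from math import comb

for n in range(1, 12):
    lhs = sum(Fraction(n - 2 * k, n) ** 2 * comb(n, k) ** 2 for k in range(n + 1))
    rhs = Fraction(2, n) * comb(2 * n - 2, n - 1)
    assert lhs == rhs
print("identity verified for n = 1..11")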
|
H: For Galois extension $L:K$, does $L = K(\alpha)$ imply $\{\sigma_1(\alpha), \dots, \sigma_n(\alpha)\}$ is a basis for $L$ over $K$?
For Galois extension $L:K$ with Galois group $\{\sigma_1, \dots, \sigma_n\}$, does $L = K(\alpha)$ imply $\{\sigma_1(\alpha), \dots, \sigma_n(\alpha)\}$ is a basis for $L$ over $K$?
The proof that I've seen for the normal basis theorem starts with a primitive element $\alpha \in L$ and then switches to another element $\beta \in L$ to show $\{\sigma_1(\beta), \dots, \sigma_n(\beta)\}$ is a basis for $L$ over $K$.
Does the result still hold for $\alpha \in L$?
AI: Consider $L=\mathbb{C}$, $K=\mathbb{R}$. Then $\mathbb{C}=\mathbb{R}(i)$, but $i,-i$ is not a basis for $\mathbb{C}$ as an $\mathbb{R}$-vector space, since $i$ and $-i$ are linearly dependent over $\mathbb{R}$. Generally, not just any primitive element will do.
|
H: Generating function of recursive algorithm with random subcalls
I was presented with the following algorithm. As input the algorithm gets an array of length $n \geq 0$. If $n \geq 2$, then for each $k \in \{1, 2, ..., n\}$ the algorithm calls itself recursively, with probability $\frac{1}{2}$, on an array of length $k$. Using generating functions I have to derive a formula which estimates the average number of calls depending on $n$. I have checked the analysis of the QuickSort algorithm that was performed in similar terms. My proposed recursive equation for this problem is: $q_n = 1 + \frac{1}{2}\sum_{k=1}^nq_k$.
Is the proposed recursive equation correct (will it estimate the number of calls correctly)? If so, then how, using generating functions, can I get a closed-form expression for $q_n$?
AI: The recurrence $q_n=1+\frac12\sum_{k=1}^nq_k$ for $n\ge 2$ appears to be correct, and you have the initial conditions $q_0=0$ and $q_1=1$. I would modify the recurrence slightly to make it correct for all $n\ge 0$ on the assumption that $q_n=0$ for all $n<0$:
$$q_n=1+\frac12\sum_{k=1}^nq_k-[n=0]-\frac12[n=1]\;,\tag{1}$$
where the square brackets are Iverson brackets, and we can include $k=0$ because $q_k=0$. Now multiply $(1)$ by $x^n$ and sum over $n\ge 0$:
$$\sum_{n\ge 0}q_nx^n=\sum_{n\ge 0}x^n+\frac12\sum_{n\ge 0}\left(\sum_{k=0}^nq_k\right)x^n-1-\frac{x}2\;.\tag{2}$$
The lefthand side of $(2)$ is the desired generating function, say $g(x)$, so we have
$$\begin{align*}
g(x)&=\frac1{1-x}-1-\frac{x}2+\frac12\sum_{n\ge 0}\left(\sum_{k=0}^nq_k\right)x^n\\
&=\frac12\left(\frac{x+x^2}{1-x}+\sum_{n\ge 0}\left(\sum_{k=0}^nq_k\right)x^n\right)\;.
\end{align*}$$
Now recognize $\sum_{n\ge 0}\left(\sum_{k=0}^nq_k\right)x^n$ as the Cauchy product of $\sum_{n\ge 0}q_nx^n$ and a very simple power series whose corresponding function $f(x)$ you know, so that
$$2g(x)=\frac{x+x^2}{1-x}+f(x)g(x)\;.$$
You can then solve for $g(x)$:
$$g(x)=\frac{x+x^2}{(1-x)(2-f(x))}\;.$$
And if you’ve done this correctly, you’ll easily be able to expand $g(x)$ into a power series from which you can read off the coefficients $q_n$.
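(Spoiler, in case it helps to check your work: the simple series has $f(x)=\frac1{1-x}$, and a quick sympy sketch comparing the resulting $g(x)$ with the recurrence looks like this:)

from sympy import series, symbols

x = symbols('x')
g = (x + x**2) / ((1 - x) * (2 - 1 / (1 - x)))   # assuming f(x) = 1/(1-x)
print(series(g, x, 0, 8))                        # x + 3*x**2 + 6*x**3 + 12*x**4 + ...

q = [0, 1]                                       # q_0, q_1
for n in range(2, 8):
    q.append(2 + sum(q[1:]))                     # q_n = 2 + sum_{k=1}^{n-1} q_k
print(q)                                         # [0, 1, 3, 6, 12, 24, 48, 96]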
|
H: Find a sequence of $\alpha(t)$ such that $\sum_{t=1}^\infty\alpha(t)=\infty$ while $\sum_{t=1}^\infty{\alpha(t)}^2<\infty$
As described in the title, can we find an $\alpha(t)$ sequence that satisfies those two requirements?
AI: As suggested in the comment, a good choice is $a(t) = \frac{1}{t}$, since we know from comparison with integrals that: $\sum_{t=1}^{\infty} \frac{1}{t^p} = \infty$ for $p \leq 1$ and $\sum_{t=1}^{\infty} \frac{1}{t^p} < \infty$ for $p > 1$
|
H: Prove that for A, B and C sets, A - ( B - C ) = ( A - B ) ∪ ( A ∩ C ) (Alternative to the proof given)
My proof:
Let A, B and C be arbitrary sets.
As a means to prove such a statement we are going to verify that
x ∈ A - ( B - C ) ⇔ x ∈ ( A - B ) ∪ ( A ∩ C )
Note that
x ∈ A - ( B - C )
⇔ x ∈ A ∧ x ∉ ( B - C )
⇔ x ∈ A ∧ ( x ∉ B ∨ x ∈ C )
⇔ ( x ∈ A ∧ x ∉ B ) ∨ ( x ∈ A ∧ x ∈ C )
⇔ x ∈ ( A - B ) ∨ x ∈ ( A ∩ C )
⇔ x ∈ ( A - B ) ∪ ( A ∩ C )
Hence,
A - ( B - C ) = ( A - B ) ∪ ( A ∩ C )
AI: $$A\setminus(B\setminus C) = A \cap (B\setminus C)^C $$
$$=A\cap(B\cap C^c)^c = A\cap(B^c \cup C)$$
$$= (A\cap B^c)\cup(A\cap C)$$
$$=(A\setminus B)\cup(A\cap C)$$
|
H: Is $( \mathbb{ Z}_{10}^{*},\cdot) \rightarrow (\mathbb{Z}_5^{*},\cdot), n\pmod {10} \mapsto n \pmod 5 $ well-defined?
Is $( \mathbb{ Z}_{10}^{*},\cdot) \rightarrow (\mathbb{Z}_5^{*},\cdot), n\pmod {10} \mapsto n \pmod 5 $ well-defined?
So what I think is that it is not, because the odd multiples of $5$ in $\mathbb{ Z}_{10}^{*}$ would map to the class of $5\equiv0$, which is the only class taken out of $\mathbb{Z}_5$ to yield $\mathbb{Z}_5^{*}$; and since not all elements in the domain would be mapped, it would not be a mapping, i.e. it would not be well-defined (in a mapping, all elements of the first set must be part of the domain).
What do you think? Feel free to elaborate
AI: The first question you need to ask to determine whether the map is well defined is whether the following always holds for $x, y \in \Bbb Z_{10}^*$:
$$x \equiv y \pmod{10} \Rightarrow x \equiv y \pmod{5}.$$
And that's obviously true: $x \equiv y \pmod{10} \Rightarrow 10 \mid y-x \Rightarrow 5 \mid y-x \Rightarrow x \equiv y \pmod{5}$.
And it's also clearly true that $x \in Z_{10}^* \iff (x, 10)=1 \Rightarrow (x, 5)=1 \iff x \in Z_5^*.$
The map is therefore well defined.
Now whether the map is a homomorphism is an entirely separate question.
|
H: What kind of problem is the membership problem of a recursive enumerable language?
Is it correct that the membership problem (i.e. the characteristic function) of a recursive language is a decidable i.e. computable problem?
What kind of problem is the membership problem of a recursive enumerable language, as opposed to the membership problem of a non r.e. language? Is the membership problem of a recursive enumerable language also decidable?
Thanks.
AI: By membership problem, you mean whether a word is an element of the recursive language? Then yes, by definition: a language is recursive if there is a Turing machine (TM) that halts on every input and accepts exactly the words which are elements of the language.
The difference between a recursive and a recursively enumerable language is that a TM for a recursive language always halts, but a TM for a recursively enumerable language is not guaranteed to halt on words not in the language (it might loop forever). We therefore call the latter semidecidable.
|
H: Is there another way to prove this expression over $1/(1-z)$
I came across the following relationship:
$$
\frac{1}{1-z} = (1+z)(1+z^2)(1+z^4)(1+z^8)...
$$
If induction is used, the statement can be proven given that:
$$
(1+z)(1+z^2)=1+z+z^2+z^3
$$
and
$$
(1+z)(1+z^2)(1+z^4)=1+z+z^2+z^3+z^4+z^5+z^6+z^7
$$
and so on and so forth ...
Since:
$$
\frac{1}{1-z}=\sum_k z^k
$$
The relationship follows... However, am wondering, is there another way to prove the first equation aside from using induction ?
AI: If $|z|<1$, multiply by $1-z$ and FOIL it out: the product telescopes, since $(1-z)(1+z)=1-z^2$, $(1-z^2)(1+z^2)=1-z^4$, and so on, giving $(1-z)\prod_{k=0}^{n}(1+z^{2^k}) = 1-z^{2^{n+1}} \to 1$ as $n\to\infty$.
|
H: Enumerate the possible combinations
We have three windows of opportunity, say W1, W2, W3. And we have 4 competitors, say, C1, C2, C3, C4.
Each competitor wants to be allocated at least two windows (so will get either 2 or all 3). Each window can be allocated to any number of competitors (so, from 1 to 4; or even 0 to 4 if that makes much difference).
The question is to enumerate all possible allocations of the windows to the competitors, if I do not want to differentiate amongst the windows themselves. (so, e.g., the allocation [(C1,C2,C3); (C2,C3,C4); (C3,C4,C1)] is considered identical to the allocation [(C2,C3,C4); (C3,C4,C1); (C1,C2,C3)])
If I did want to distinguish amongst the windows, it would be easy to list all combinations: basically each competitor can be placed in $4$ ways (it can go in $2$ of the $3$ windows ($\binom{3}{2}=3$ ways), or in all $3$ windows ($1$ way)), so the total number of combinations should be $4^4=256$. But how do I enumerate the "unique" ones here (which do not distinguish amongst the windows) without doing it manually by comparing?
AI: If all three windows are the same then everybody must be in all three windows, so $1$ undistinguished-windows possibility and $1$ distinguished-windows possibility
If two of the windows are the same then everybody must be in those two windows but not all will be in the other window, so $2^4-1=15$ undistinguished-windows possibilities and ${3\choose 2}\times 15 = 45$ distinguished-windows possibilities
For all three windows different there are $256-1-45=210$ distinguished-windows possibilities and so $\frac{210}{3!} =35$ undistinguished-windows possibilities
That gives $1+15+35=51$ undistinguished-windows possibilities in total
Added: We could have counted these $35$ for all three windows different from the bottom up:
If each number appears two times then the window sizes will be $4+3+1$ with ${4 \choose 1}=4$ ways of choosing the one, or $4+2+2$ with ${4 \choose 2}/2!=3$ ways of choosing the pairs, or $3+3+2$ with ${4 \choose 2}=6$ ways of choosing the pairs, so $4+3+6=13$ ways
If one number appears three times and the others twice then the window sizes will be $4+3+2$ with ${4 \choose 1}{3 \choose 2}=12$ ways of choosing the one appearing three times and splitting the others, or $3+3+3$ with ${4 \choose 1}=4$ ways of choosing the one appearing three times, so $12+4=16$ ways
If two numbers appear three times and the others twice then the window sizes will be $4+3+3$ with ${4 \choose 2}=6$ ways of choosing the two appearing three times
and $13+16+6=35$ as calculated earlier.
Whether this is easier than a full enumeration is another question. That enumeration of $51$ could have been:
1234 1234 1234
1234 1234 123
1234 1234 124
1234 1234 12
1234 1234 134
1234 1234 13
1234 1234 14
1234 1234 1
1234 1234 234
1234 1234 23
1234 1234 24
1234 1234 2
1234 1234 34
1234 1234 3
1234 1234 4
1234 1234 -
1234 123 4
1234 124 3
1234 134 2
1234 234 1
1234 12 34
1234 13 24
1234 14 23
123 124 34
123 134 24
123 234 14
124 134 23
124 234 13
134 234 12
1234 123 14
1234 124 13
1234 134 12
1234 123 24
1234 124 23
1234 234 12
1234 123 34
1234 134 23
1234 234 13
1234 124 34
1234 134 24
1234 234 14
123 124 134
123 124 234
123 134 234
124 134 234
1234 123 124
1234 123 134
1234 124 134
1234 123 234
1234 124 234
1234 134 234
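A brute-force confirmation of the count (a Python sketch; it enumerates all $4^4=256$ assignments and forgets window identity by sorting):

from itertools import combinations, product

windows = range(3)
options = [frozenset(s) for k in (2, 3) for s in combinations(windows, k)]

seen = set()
for assign in product(options, repeat=4):
    alloc = [frozenset(c for c in range(4) if w in assign[c]) for w in windows]
    seen.add(tuple(sorted(alloc, key=sorted)))
print(len(seen))   # 51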
|
H: Use Slutsky's theorem to show that: $\sqrt{n}(e^{\frac{S_n}{n}}-e^{\mu}) \xrightarrow{d} \sigma e^{\mu}Z$
Let $\{X_n\}_{n\ge1}$ be a sequence of i.i.d. random variables with common mean $\mu$ and variance $\sigma^2 \in (0,\infty)$. Use Slutsky's theorem to show that:
\begin{align}
\sqrt{n}(e^{\frac{S_n}{n}}-e^{\mu}) \xrightarrow{d} \sigma e^{\mu}Z
\end{align}
where $S_n=\sum\limits_{k=1}^{n}X_k$ and $Z\in N(0,1)$.
I have used Slutsky's theorem in plenty of problems before, but I cannot make any progress on this one; any help here would be greatly appreciated.
AI: Define
$$
G(t)=\begin{cases}\dfrac{e^t-\mu}{t-\mu}, & t\neq \mu,\cr e^\mu, & t=\mu.\end{cases}
$$
This function is continuous everywhere since $\lim\limits_{t\to\mu}\dfrac{e^t-e^\mu}{t-\mu}=(e^t)'_\mu=e^\mu$. By the LLN, $\frac{S_n}{n}\xrightarrow{p}\mu$, and by the continuous mapping theorem,
$$
G\left(\frac{S_n}{n}\right)\xrightarrow{p}G(\mu)=e^\mu.
$$
Then
$$
\sqrt{n}\left(e^{\frac{S_n}{n}}-e^{\mu}\right)=\sqrt{n}\left(\frac{S_n}n-\mu\right)\cdot G\left(\frac{S_n}{n}\right).
$$
Here the first term $\sqrt{n}\left(\frac{S_n}n-\mu\right)\xrightarrow{d}\sigma Z$ by CLT, where $Z\sim N(0,1)$. And the second term converges in probability to $e^\mu$. Slutsky's theorem implies that the product converges in distribution to $\sigma e^\mu Z$.
Note that this is almost the same reasoning as in the previous answer, and indeed the same as the so-called delta method.
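(If a simulation helps to see the convergence, here is a minimal Python sketch; the choice of normal data and the parameter values are arbitrary:)

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 0.5, 1.0, 2000, 20000
X = rng.normal(mu, sigma, size=(reps, n))
T = np.sqrt(n) * (np.exp(X.mean(axis=1)) - np.exp(mu))
print(T.std(), sigma * np.exp(mu))   # both close to 1.6487...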
|
H: How to evaluate $\int \sqrt{1-n\cos(\omega t)}\,dt$?
How can I evaluate $$\int \sqrt{1-n\cos(\omega t)}\,dt$$
I don't even know if this is elementary. I just found it in an astronomy problem. If necessary, evaluate from $0$ to $2\pi/\omega$.
AI: I think the answer involves incomplete elliptic integral of second kind, which can be seen here: https://en.wikipedia.org/wiki/Elliptic_integral#Incomplete_elliptic_integral_of_the_second_kind
You can rewrite the integral as
$$\int\sqrt{1-n}\sqrt{\frac{2n\sin^2(\frac{\omega t}{2})}{1-n}+1} \, dt$$ and substitute $u=\frac{\omega t}{2}$. After some calculation, you can see how the elliptic integral works.
|
H: Does there exist a gradient chain rule for this case?
My question comes from this article in Wikipedia. I noticed that there is a chain rule defined for the composition of $f:\mathbb{R}\to\mathbb{R}$ and $ g: \mathbb{R}^n \to \mathbb{R}$ given by
$$
\nabla (f \circ g) = (f' \circ g) \nabla g \tag{1}
$$
My question is if instead we had some functions $f: \mathbb{R}^m \to \mathbb{R}$ and $g: \mathbb{R}^n \to \mathbb{R}^m$ such that $(f \circ g): \mathbb{R}^n \to \mathbb{R}$, does there exist an expression for $\nabla (f \circ g)$ similar to equation $(1)$?
I tried looking for any resource who answered this but had no luck. If someone could point me in the right direction I would greatly appreciate it. Thank you!
AI: Background info: If $g:\mathbb R^n \to \mathbb R^m$ is differentiable at $x$, then $g'(x)$ is an $m \times n$ matrix. If $f:\mathbb R^m \to \mathbb R$ is differentiable at $u$, then $f'(u)$ is a $1 \times m$ matrix (row vector). If we use the convention that the gradient of $f$ at $u$ is a column vector, then $\nabla f(u) = f'(u)^T$.
The multivariable chain rule is actually easy. Let $h(x) = f(g(x))$. The chain rule tells us that
$$
h'(x) = f'(g(x)) g'(x).
$$
This formula is wonderful because it looks exactly like the formula from single variable calculus. This is a great example of the power of matrix notation.
If we use the convention that the gradient is a column vector, then
$$
\nabla h(x) = h'(x)^T = g'(x)^T \nabla f(g(x)).
$$
By the way, if $f:\mathbb R \to \mathbb R$ and $g:\mathbb R^n \to \mathbb R$, then the chain rule tells us that the derivative of $h(x) = f(g(x))$ is $h'(x) = f'(g(x)) g'(x)$. If we use the convention that the gradient is a column vector, then
$$
\nabla h(x) = h'(x)^T = \underbrace{g'(x)^T}_{\text{column vector}} \underbrace{f'(g(x))}_{\text{scalar}} = f'(g(x)) \nabla g(x).
$$
So the version of the chain rule you mentioned in your post is just a special case of the standard chain rule.
|
H: A complete DVR $A$ is a $\Bbb Z_p$ module, Serre's local field
I am having trouble understanding how one obtains a $\Bbb Z_p$ action in the last line of this statement on p. 36 of Serre's Local Fields.
In particular
Observe that $\Bbb Z$ injects into $A$ and
by continuity $\Bbb Z_p$ injects into $A$.
May someone elaborate on the details?
AI: The injection $f:\mathbb{Z}\to A$ can be extended to $\mathbb{Z}_p$ by writing each element of $\mathbb{Z}_p$ as a limit of elements of $\mathbb{Z}$, using the completeness of $A$. Specifically, given $x\in\mathbb{Z}_p$, choose a sequence $(x_n)$ of integers converging to $x$ in the $p$-adic topology. Note then that the sequence $(f(x_n))$ is Cauchy with respect to the valuation topology of $A$: if $m$ and $n$ are large, then $x_n-x_m$ is divisible by a large power of $p$, which means $f(x_n)-f(x_m)$ has large valuation since $v(p)\geq 1$. So by completeness of $A$, $(f(x_n))$ converges to an element of $A$ we can define as $f(x)$. It is easy to see that this element is actually independent of the sequence chosen (the difference between any two such sequences is eventually divisible by arbitrarily large powers of $p$, so their images under $f$ are getting close in $A$). Similarly, it is easy to see that the extension of $f$ to $\mathbb{Z}_p\to A$ defined in this way is a homomorphism. Finally, the extension is injective because every nonzero ideal in $\mathbb{Z}_p$ is generated by some power of $p$, but the kernel cannot contain any power of $p$ since the original $f:\mathbb{Z}\to A$ was injective.
(What does this have to do with "continuity"? Well, $f:\mathbb{Z}\to A$ is continuous with respect to the $p$-adic topology on $\mathbb{Z}$, again because of the assumption that $v(p)\geq 1$, so this extension is just the unique way to extend $f$ continuously to all of $\mathbb{Z}_p$.)
|
H: Is $GL_{n}(\Bbb{C})$ isomorphic to a subgroup of $GL_{2n}(\Bbb{R})$?
A problem in the Algebra by Artin.
Is $GL_n(\Bbb{C})$ isomorphic to a subgroup of $GL_{2n}(\Bbb{R})$?
I think there is an isomorphism, because I know that when $n=1$, $$\{A\mid A=\left(\begin{matrix}a&b\\-b&a\end{matrix}\right)\}\simeq\mathbb{C},$$ so I think maybe it can be generalized to $n\ge1$. But I have no idea how to do it.
AI: You can take the idea of replacing $a+bi$ by the matrix $\pmatrix{a&b\\-b&a}$
and run with it. For instance for $n=2$ map
$$\pmatrix{a_{11}+b_{11}i&a_{12}+b_{12}i\\
a_{21}+b_{21}i&a_{22}+b_{22}i}\mapsto\pmatrix{a_{11}&b_{11}&a_{12}&b_{12}\\
-b_{11}&a_{11}&-b_{12}&a_{12}\\a_{21}&b_{21}&a_{22}&b_{22}\\
-b_{21}&a_{21}&-b_{22}&a_{22}}.$$
More theoretically, an $n$ by $n$ matrix over $\Bbb C$ represents
a linear map from $V=\Bbb C^n$ to itself. But $V$ is also a vector space
over $\Bbb R$, of dimension $2n$, so choosing an $\Bbb R$-basis allows one
to express that linear map as a $2n$ by $2n$ real matrix.
|
H: Does the Borel-Cantelli Lemma imply countable additivity?
Let $(\Omega, \mathcal F, P)$ be a finitely additive probability space. If $P$ is not only finitely additive but also countably additive, then it satisfies the Borel-Cantelli Lemma:
For all sequences $A_1, A_2,...$ in $\mathcal F$, if $\sum_n P(A_n) < \infty$, then $P(\limsup_n A_n) = 0$.
I'm wondering if the converse holds as well.
Question. If $P$ (a finitely additive probability) satisfies the Borel-Cantelli Lemma, is $P$ countably additive?
Suppose that $P$ satisfies the Borel-Cantelli Lemma and that $A_1, A_2,\ldots$ is a disjoint sequence in $\mathcal F$. By finite additivity,
$$\sum_n P(A_n) \leq P(\bigcup_n A_n) < \infty.$$
So, by the Borel-Cantelli Lemma $P(\limsup_n A_n)=0$, which implies $P(\liminf_n A_n^c)=1$. I tried using this fact to manipulate
$$P(\bigcup_n A_n) = P(\bigcup_nA_n \cap \liminf_n A_n^c)$$
into something useful, but I wasn't able to get anywhere.
I suspect the result doesn't hold, but it seems like coming up with a counterexample (a merely finitely additive probability that satisfies the Borel-Cantelli Lemma) will be pretty difficult.
AI: Here is a counterexample. Pick a nonprincipal ultrafilter $U$ on $\mathbb{N}$ and consider the finitely additive probability space $(\mathbb{N},\mathcal{P}(\mathbb{N}),P)$ where $$P(A)=\sum_{n\in A}\frac{1}{2^{n+2}}$$ if $A\not\in U$ and $$P(A)=\frac{1}{2}+\sum_{n\in A}\frac{1}{2^{n+2}}$$ if $A\in U$. (So, we have a weighted counting measure on $\mathbb{N}$ with total weight $\frac{1}{2}$, and we give an extra $\frac{1}{2}$ weight to being in $U$.) This is not countably additive since $U$ is nonprincipal so the measures of all the singletons only add up to $\frac{1}{2}$. However, I claim it satisfies the Borel-Cantelli lemma.
Indeed suppose a sequence of sets $(A_n)$ satisfies $\sum P(A_n)<\infty$. If some $k$ were in infinitely many $A_n$, then $P(A_n)$ would be at least $\frac{1}{2^{k+2}}$ for infinitely many $n$, and $\sum P(A_n)$ would diverge. Thus no $k$ is in infinitely many $A_n$, and $\limsup A_n=\emptyset$, so $P(\limsup A_n)=0$.
|
H: Question about "commutation" relation
I'm reading Dummit & Foote, Abstract Algebra and they briefly mention something about a ' "commutation" relation', such as $xy = yx^2$. In general, if we have the relation $xy = y^i x^j$, where $x,y$ generate the (finite) group $G$, it seems to me that any element of the group can be written in the form $y^m x^n$ for some (nonnegative integers) $m,n$. This is because any group element is just some string of $x$'s and $y$'s, and we can use the commutation relation to "move each $y$ to the left".
My question is: What about moving the $y$'s to the right? I.e. when can we write any group element as $x^m y^n$ for some $m,n$? Obviously we could do this if we have a "commutation" relation $yx = x^sy^t$, but what about if we only have the "commutation" relation $xy = y^i x^j$? Are there certain additional properties/relations of the group (aside from being abelian) that will guarantee the ability to write any element of the group both as a string of $y$'s then $x$'s as well as a string of $x$'s then $y$'s?
I'm asking this question because it seems fairly natural: if a group is generated by two elements and it has a "commutation" relation, then any "action"/"move" can be done by first performing the first move $x$ repeatedly, and then performing the second move $y$ repeatedly. Then naturally we might ask if we can also do it the other way around: first repeat the move $y$, then repeat the move $x$.
Edit: As an example, the dihedral group for a regular $n$-gon has the relation $rs = sr^{-1}$ (or $rs = sr^{n-1}$ if we require nonnegative powers). By multiplying both sides on the left by $s$ and on the right by $s$ as well, we get $sr = r^{-1}s$ (since $s^2 = 1$). So we have both "commutation" relations for $rs$ and $sr$. This tells us that any symmetry can be obtained either by rotations then reflections or by reflections then rotations.
AI: Given $g\in G$ write $g^{-1}=x^iy^j,$ with $0\leq i<o(x), 0\leq j<o(y).$ Then $$g=y^{-j}x^{-i}=y^{o(y)-j}x^{o(x)-i}$$
|
H: How to build unambiguous binary expression with property
Is there a strategy for building unambiguous binary expressions that fit some property like "no consecutive 1's", "can't start with 0", "Blocks of 1 can only be divisible by 5", etc.?
I can't seem to figure out a list of steps I want to follow after practicing.
Is there some sort of strategy I can follow for each problem?
For example I just start off by writing down the block decomposition unambiguous string for all binary strings $\{0\}^*\{\{1\}\{1\}^* \{0\}\{0\}^*\}^*\{1\}^*$ and then start tearing it apart/adding stuff randomly to make my property work, which I feel is not right.
For example, for the "blocks of 1 divisible by 5" case I mentioned, I just go "Ok, let's just add $\{11111\}\{1\}^*$ in the middle instead of $\{1\}\{1\}^*$... and it should work??"
But I don't even know if that's right.
Can I get some guidance on how to start these questions?
Thanks!
edit: more specifically, it would be great to get sort of a checklist of things to check to make sure the expression is valid. For example, I've kind of picked up that I should check that I'm not restricting my string to starting with something specific, by adding a {0} at the start, forcing it to start with 0.
AI: What you are asking about is regular expressions. A regular expression is just a way to compactly describe a set of strings formed using characters from an alphabet $A$ (which we will refer to as a language) using three operations:
Concatenation: We can indicate the concatenation of characters $a, b$ by $ab$. $ab$ simply denotes the language $\{ab\}$, consisting of the single two-element string $ab$.
Or: Our second operation gives us the ability to specify alternatives that may occur in a pattern. We write $a | b$ to denote the language $\{a,b\}$, or $a|b|c|d|e$ for $\{a,b,c,d,e\}$. Concatenation generally is given precedence over or, thus as an example the expression $abc | de$ denotes the language $\{abc, de\}$.
Closure: Our final operation is closure. Given a language $L$, we write $L^*$ to denote a new language that consists of all possible concatenations of strings in $L$. Thus as an example, if $L = \{a\}$, then $L^* = \{ \ \ ,a, aa, aaa, \dots\}$. If $L = \{a,b\}$, then $L^* = \{\ \ , a, b, aa, ab, ba, bb, aaa, aab, \dots\}$. Notice that the first element I've listed in $L^*$ is an empty string. This means that we can choose to include no strings from the language $L$ that we have chosen.
Lastly, we can use parentheses to order regular expressions. Thus to address your question, "how do we build unambiguous binary strings that fit a particular pattern?", let's do an example:
Strings with no consecutive 1's
One way to describe this language is to find some building blocks with which to build the language. Here are two types of strings that appeal as good building blocks for this language: those that start with a 1 and are followed only by zeros, and those that start with 0's and end with a single 1.
More compactly, we have languages $L_0 = 0\{0\}^*1$ and $L_1 = 10\{0\}^*$. Using these as building blocks, note that $L_0^*$ describes all strings without a repeated 1 that start with a 0 and end with a 1. Likewise $L_1^*$ describes all strings without a repeated 1 that start with a 1 and end with a 0. Thus all we are missing is all such strings starting and ending with a 0, and similarly a 1. Concatenating $0\{0\}^*$ to $L_0^*$ generates all missing strings starting and ending with a 0, except for those that contain no 1's (note that the trailing run of 0's may be longer than one, as in $00100$, so appending a single 0 would not suffice). Similarly, concatenating a 1 to $L^*_1$ generates all missing strings starting and ending with a 1. All such strings fall into one of the above categories.

Hence we may describe all strings without consecutive 1's by: $1 \ | \ \{0\}^* \ | \ L^*_{0} \ | \ L^*_0\, 0\{0\}^* \ | \ L^*_1 \ | \ L^*_1 1$
As you can see, regular expressions can often be tricky. I don't think there is a "one size fits all" algorithm that you can apply to learn how to describe all regex with ease, but what can help is to modularize the problem as I have done above. Instead of describing the entire language at once, break it into building blocks, and try to reconstruct it from those building blocks. Keep track of what you cannot form so far with the blocks you select, as you may not have enough to describe the entire language.
When you realize your regex cannot construct some family of strings you desire, just create a new regex to describe that family and use the Or operation. Without using all of the operations available to you, generating arbitrary languages becomes quite difficult indeed.
To check that an expression you have made is valid, it suffices to check two things:
For every element of the language, the regex is able to describe it. For example, in the no consecutive 1's case, I showed that for strings starting and ending with either 0,0; 0,1; 1,0; or 1,1 that contained no consecutive 1's that the regex was able to generate these strings.
The regex does not generate any element not in the language. You've got to ensure that no unintended elements are generated. For example, in the no consecutive 1's problem, if we were to have accidentally have done $\{1\{0\}^*\}^*$, then this could allow for consecutive 1's (when we select no zeros for the first string in the concatenation). Recognizing pitfalls like these allows you to ensure that you are not including extraneous strings.
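Both points can often be automated for short strings. Here is a Python sketch that checks the regex assembled above (with the trailing-zeros fix) against the defining property, exhaustively up to a given length:

import re
from itertools import product

pat = re.compile(r"1|0*|(?:00*1)*|(?:00*1)*00*|(?:100*)*|(?:100*)*1")

for length in range(12):
    for bits in product("01", repeat=length):
        s = "".join(bits)
        assert (pat.fullmatch(s) is not None) == ("11" not in s), s
print("agrees with the no-consecutive-1s language up to length 11")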
|
H: Decomposition of symmetric matrices over $\mathbb{F}_2$
Can every $n\times n$ symmetric matrix over $\mathbb{F}_2$ be decomposed into $$\sum_{v=1}^k v_i v_i^T$$ for vectors $v_1, \dots,v_k\in\mathbb{F}_2^n$ and integer $k$?
As far as I know, for symmetric real matrices this is true and these vectors are orthogonal, but I am not sure if these properties hold over finite fields.
AI: This is definitely possible over $\Bbb{F}_2$. If $v$ is the column vector with $1$s at positions $i$ and $j$ and zeros elsewhere, $i<j$, then $vv^T$ has $1$s at positions $(i,i),
(i,j),(j,i)$ and $(j,j)$. Similarly if $w$ has a single $1$ at position $i$, then $ww^T$ has a single $1$ on the diagonal.
We can write every symmetric matrix as a linear combination of these. Use vectors of the first type to get the non-diagonal entries, and then "fix" the diagonal with vectors of the second type.
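A small Python sketch of exactly this construction, verified on a random symmetric matrix (the naming is my own):

import numpy as np

rng = np.random.default_rng(1)

def decompose(A):
    # off-diagonal entries: one vector e_i + e_j per 1 above the diagonal
    n = A.shape[0]
    vs = []
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j]:
                v = np.zeros(n, dtype=int)
                v[i] = v[j] = 1
                vs.append(v)
    # then fix the diagonal with standard basis vectors
    S = sum((np.outer(v, v) for v in vs), np.zeros((n, n), dtype=int)) % 2
    for i in range(n):
        if S[i, i] != A[i, i]:
            e = np.zeros(n, dtype=int)
            e[i] = 1
            vs.append(e)
    return vs

n = 5
U = np.triu(rng.integers(0, 2, (n, n)))
A = (U + U.T - np.diag(np.diag(U))) % 2           # random symmetric 0/1 matrix
vs = decompose(A)
S = sum((np.outer(v, v) for v in vs), np.zeros((n, n), dtype=int)) % 2
assert (S == A).all()
print(len(vs), "vectors suffice")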
On the other hand, asking for orthogonality is a bit strange given that over $\Bbb{F}_2$ vectors are often orthogonal to themselves. Anyway, to get the symmetric matrix
$$
A=\pmatrix{0&1\cr1&0\cr}
$$
you need to use all three non-zero vectors $v_1,v_2,v_3\in\Bbb{F}_2^2$, and $k=3$ is the smallest possible value. The conclusion is that we cannot ask for the vectors $v_i$ to be linearly independent.
At this time I don't want to say anything about the minimal required $k$ that works for larger matrices :-)
|
H: Power series approximation for $\ln((1+x)^{(1+x)}) + \ln((1-x)^{(1-x)})$ to calculate $ \sum_{n=1}^\infty \frac{1}{n(2n+1)} $
Problem
Approximate $f(x) = \ln((1+x)^{(1+x)}) + \ln((1-x)^{(1-x)})$ and then calculate $ \sum_{n=1}^\infty \frac{1}{n(2n+1)} $
My attempt
Let
$$f(x) = \ln((1+x)^{(1+x)}) + \ln((1-x)^{(1-x)}) \iff $$
$$f(x) = (1+x)\ln(1+x) + (1-x)\ln(1-x) \quad $$
We know that the basic Taylor series for $\ln(1+x)$ is
$$ \ln(1+x) = \sum_{n=0}^\infty (-1)^n \frac{x^{n+1}}{n+1} \quad (1)$$
As far as $\ln(1-x)$ is concerned
$$y(x) = \ln(1-x) \iff y'(x) = \frac{-1}{1-x} = - \sum_{n=0}^\infty x^n \text{ (geometric series)} \iff$$
$$y(x) = \int -\sum_{n=0}^\infty x^n = - \sum_{n=0}^\infty \frac{x^{n+1}}{n+1} \quad (2)$$
Therefore from $f(x), (1), (2)$ we have:
$$ f(x) = (1+x)\sum_{n=0}^\infty (-1)^n \frac{x^{n+1}}{n+1} - (1-x)\sum_{n=0}^\infty \frac{x^{n+1}}{n+1} \iff$$
$$ f(x) = \sum_{n=0}^\infty \frac{2x^{n+2} + (-1)^n x^{n+1} - x^{n+1} }{n+1} $$
Why I hesitate
It all makes sense to me up to this point. But the exercise has a follow up sub-question that requires to find:
$$ \sum_{n=1}^\infty \frac{1}{n(2n+1)} $$
I am pretty sure that this sum is somehow connected with the previous power series that we've found, but I can't find a way to calculate it, so I assume that I have made a mistake.
Any ideas?
AI: $$ f(x) = \sum_{n=0}^\infty \frac{2x^{n+2} + (-1)^n x^{n+1} - x^{n+1} }{n+1} $$
Supposing the above is right. We want to change the $n+2$'s to $n+1$'s. To do this, write, by letting $m+1 = n+2$,
$$ \sum_{n=0}^\infty \frac{2x^{n+2}}{n+1} = \sum_{m=1}^\infty \frac{2x^{m+1}}{m} = \sum_{n=1}^\infty \frac{2x^{n+1}}{n},$$
where, in the last step, we simply changed the dummy variable $m$ to $n$.
I haven't read it very carefully, but sometimes you can get $2n+1$ in the denominator when you're only summing over odd integers.
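Before chasing the closed form, it is cheap to test the derived series numerically; if the partial sums and $f$ disagree, the algebra above needs revisiting. A minimal Python sketch (the helper names are my own):

    import math

    def f(x):
        return (1 + x) * math.log(1 + x) + (1 - x) * math.log(1 - x)

    def partial_sum(x, N):
        # partial sum of the series derived in the question
        return sum((2 * x**(n + 2) + ((-1)**n - 1) * x**(n + 1)) / (n + 1)
                   for n in range(N))

    x = 0.5
    print(f(x), partial_sum(x, 200))   # compare the two values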
SPOILER
|
H: Upper bound on probability of binomial exceeding expectation
For iid $X_i$ taking values in $\{0, 1\}$ with parameter $E[X_i]$ show that when $nE[X_i] > 1$:
$$P\left(\frac{1}{n}\sum_i^n X_i > E[X_i]\right) \leq1/4$$
This inequality is from the proof of Lemma 4.1 in Vapnik's Statistical Learning Theory.
My first thought is to bound this with a normal approximation, but I need this for all n such that $nE[X_i] > 1$. Also, of course Markov's inequality is no help here.
AI: Your probability is $P(n^{-1/2} \sum_1^n (X_i - E[X_i])>0)$. By the CLT, this converges to $1/2$ as $n\to\infty$ (for $0<E[X_i]<1$), and your condition $n E[X_i]>1$ is satisfied for all $n$ large enough.
So I doubt that the claimed inequality is true.
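A quick exact computation (a minimal Python sketch) supports this doubt:

    from math import comb

    def prob_mean_exceeds(n, p):
        # P((1/n) * sum X_i > p) for sum X_i ~ Bin(n, p), computed exactly
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n + 1) if k > n * p)

    # n * p > 1 in both cases, yet the probability exceeds 1/4:
    print(prob_mean_exceeds(2, 0.6))     # P(S = 2) = 0.36
    print(prob_mean_exceeds(100, 0.5))   # ~0.46, close to the CLT value 1/2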
|
H: Give an example of two sets A and B such that |A| = |B|, and a function f : A → B such that f is one-to-one but not onto
Actually, I am a new student in discrete mathematics and this question appeared in my textbook. Just before it, I read that if |A| = |B|, then a function between them that is one-to-one, onto, or invertible must be all of these, but now the book is asking for the opposite. Can you please explain this to me by giving the example?
I am reposting this question to ask for clarification: is the function $f(x) = x+5$ on the natural numbers a correct answer? If you want to check the conversation about this question, see the exact question posted approximately 50 minutes ago. We messed up in the comment section there and lost the discussion.
Thanks
AI: Yes, the function $f:\mathbb{N} \to \mathbb{N}$ given by $f(x)=x+5$ is indeed one-one but not onto. One-one: assume $f(x)=f(y)$; then $x+5=y+5$, so $x=y$, since we may subtract $5$ from both sides. Also, $f$ is not onto, since $f(x)=1$ has no solution.
Edit: technically, "subtract $5$ from each side" isn't quite valid. What if one side were $5$? Then subtracting $5$ gives $0$, which is not in $\mathbb{N}$. But an easy inductive argument takes care of that gap in logic.
|
H: Laplace equation with boundary condition
Solve Laplace's equation in polar coordinates $$ \frac {1}{r} \frac {\partial u} {\partial r} + \frac {\partial^2 u} {\partial r^2} + \frac {1} {r^2} \frac {\partial^2 u} {\partial \theta^2} = 0$$
on the disk $$ \{(r, \theta) \mid 0 \leq r \leq R ,\ 0 \leq \theta \leq 2\pi\} $$
subject to the boundary condition $ u (R, \theta) = T\sin^2 (\theta) $
I got $ u (r, \theta) = \sum_{n=0}^{\infty}r^n [a_n\cos (n\theta)+b_n\sin (n\theta)] $.
Solving for the coefficients using the boundary condition,
$$ a_n= \frac{1}{\pi} \int_{0}^{2\pi} T\sin^2 (\theta)\cos (n\theta)\, d\theta $$ and
$$ b_n= \frac{1}{\pi} \int_{0}^{2\pi} T\sin^2 (\theta)\sin(n\theta)\, d\theta, $$
I get
$$ a_n = \frac{2T\sin (2\pi n)}{4n \pi-n^3 \pi} $$ and
$$ b_n = \frac{4T\sin^2(\pi n)}{4n \pi-n^3 \pi}. $$
But $ \sin (n\pi)=0 $ for $n \in \mathbb{N} $, so I would get $ u (r, \theta) = 0 $.
What should I consider to solve it?
AI: You have to work a little harder to write down the exact values of $a_n$ and $b_n$.
$\int_0^{2\pi} \sin ^{2}\theta \cos (n\theta)\, d\theta=\frac 1 2\int_0^{2\pi} (1-\cos (2\theta)) \cos (n\theta)\, d\theta$. Using the identity $\cos (2\theta) \cos (n\theta)=\frac 1 2 (\cos ((n+2) \theta) +\cos ((n-2) \theta))$, this becomes $\pi \delta_{0,n} -\frac {\pi} 2 \delta_{2,n} -\frac {\pi} 2 \delta_{-2,n}$, where $\delta_{i,j}=1$ if $i=j$ and $0$ if $i \neq j$.
I will let you handle $b_n$ by a similar method.
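If symbolic integration is available, these coefficient integrals are easy to double-check; a minimal sympy sketch of mine:

    import sympy as sp

    theta = sp.symbols('theta', real=True)
    for n in range(5):
        a = sp.integrate(sp.sin(theta)**2 * sp.cos(n*theta), (theta, 0, 2*sp.pi))
        b = sp.integrate(sp.sin(theta)**2 * sp.sin(n*theta), (theta, 0, 2*sp.pi))
        print(n, a, b)
    # a-integrals: pi, 0, -pi/2, 0, 0; all b-integrals vanish,
    # i.e. pi*delta_{0,n} - (pi/2)*delta_{2,n} for n >= 0, as claimed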
|
H: Is an invariant or reducing subspace necessarily the image of a spectral projection?
In the following, the section numbers I mention are from Rudin's Functional Analysis text,
Chapter 12.
Let $T$ be a bounded normal operator in the (not necessarily separable) Hilbert space $\mathfrak{H}$.
Let $E$ be the resolution of the identity for $T$ on the Borel subsets of the spectrum $\sigma(T)$.
Let $f$ be
a bounded measurable complex function on $\sigma(T)$. Suppose $\mathfrak{M}$ is a closed subspace of
$\mathfrak{H}$ which is reducing for $T$. That is, $T\mathfrak{M}\subseteq\mathfrak{M}$ and
$T^*\mathfrak{M}\subseteq\mathfrak{M}$, or equivalently, $T\mathfrak{M}\subseteq\mathfrak{M}$ and
$T\mathfrak{M}^\perp\subseteq\mathfrak{M}^\perp$. I would like to show that $f(T)\mathfrak{M}\subseteq\mathfrak{M}$, that is, that $\mathfrak{M}$ is an invariant
subspace with respect to $f(T)$.
If I could show that there is a borel subset $\omega\subseteq\sigma(T)$ such that the range
$\mathscr{R}(E(\omega))=\mathfrak{M}$ (that is, $E(\omega)$ is a projection on $\mathfrak{M}$), then
I would be done, because by the spectral theorem (12.23), every $E(\omega')$ commutes with $T$, and by the properties of the resolution of the identity (12.17(c)), $E(\omega)$ commutes with every
$E(\omega')$, so by 12.21, $E(\omega)$ commutes with $f(T)$. I could then write
for $x\in\mathfrak{M}=\mathscr{R}(E(\omega))$, say $x=E(\omega)y$,
$$f(T)x=f(T)E(\omega)y=E(\omega)f(T)y\in\mathscr{R}(E(\omega))=\mathfrak{M}.$$
So, is it true that such an $\omega$ must exist and how do I show it? If not, is it still true that
$\mathfrak{M}$ is $f(T)$-invariant, and how would I show it?
AI: It is not true in general that the orthogonal projection $P$ onto a reducing subspace $M$ is of the form $P = 1_\omega (T)$ for some subset $\omega$. Indeed, if $T$ is the identity operator, then every subspace is reducing, but the only spectral projections are the trivial projections.
Nevertheless, you have $T P= PT$ (why?), so that Theorem 12.24 in Rudin's functional analysis shows $f(T)P = Pf(T)$, which implies that $f(T)M \subset M$.
|
H: If $\mathbb{R}^k= \bigcup_{n=1}^{\infty} F_n $ where each $ F_n $ is closed, then at least one $ F_n $ has non-empty interior.
Question: If $\mathbb{R}^k= \bigcup_{n=1}^{\infty} F_n $ where each $ F_n $ is closed, then at least one $ F_n $ has non-empty interior.
Definition of closed: a set $E$ is closed if every limit point of $E$ is a point of $E$.
Proof: Assume that $\mathbb{R}^k= \bigcup_{n=1}^{\infty} F_n $ where each $ F_n $ is closed and has empty interior. Let $N_0$ be a ball of finite radius around a point $ x_1 \in F_1 $, so that $\bar{N_0} $ is compact. Assume $ N_{i-1} $ is open and does not contain any points of $ F_1,\dots,F_{i-1} $. This set must contain a point $ x_i $ not in $ F_i $, since otherwise $N_{i-1}$ would lie inside $F_i$ and $F_i$ would have nonempty interior. $ x_i $ must be contained in a neighborhood $ N_i \subset N_{i-1} $ that does not intersect $ F_i $, as $ x_i $ would otherwise be a limit point of $ F_i $ and therefore belong to $ F_i $. We can choose $ N_i $ such that $\bar{N_i} \subset N_{i-1} $, and we observe that $N_i$ does not contain any points of $ F_1,\dots,F_i $.
Since each $\bar{N_i} $ is compact and $\bar{N_{i+1}} \subset \bar{N_i} $, by the corollary (if $\{K_n\} $ is a sequence of nonempty compact sets such that $ K_{n+1} \subset K_n $, then $\bigcap_{1}^{\infty} K_n $ is not empty), $ I=\bigcap_i \bar{N_i} $ is nonempty. By construction, if $ x \in I $, then $ x \notin F_i $ for any $i$. This implies $ x \notin \bigcup F_i= \mathbb{R}^k $, a contradiction.
I know this is a proof by contradiction, but I don't get the overall idea of the proof. Can someone help me out with this? Thanks
AI: I have tried to explain it using only words.
Let $X$ be a complete metric space. Write $X$ as a countable union of closed sets. If possible, let each of these closed sets have empty interior. Choose a relatively compact open set whose closure does not intersect the first closed set appearing in the union (possible precisely because that set is closed and has empty interior). Inside it, find another, smaller relatively compact open set, whose closure is contained in the previously chosen set, that does not intersect the second closed set.
Iterating this process, we have a decreasing sequence of relatively compact open sets such that the closure of the $(n+1)$-th is contained in the $n$-th, and the $n$-th avoids the first $n$ closed sets. Taking closures of these sets we have a decreasing sequence of nonempty compact subsets, so its intersection is non-empty. Take a point in this non-empty intersection; then this point belongs to $X$ but not to any closed set in the countable union, a contradiction.
|
H: Find integers $1+\sqrt2+\sqrt3+\sqrt6=\sqrt{a+\sqrt{b+\sqrt{c+\sqrt{d}}}}$
Root numbers Problem (Math Quiz Facebook):
Consider the following equation:
$$1+\sqrt2+\sqrt3+\sqrt6=\sqrt{a+\sqrt{b+\sqrt{c+\sqrt{d}}}}$$
Where $a,\,b,\,c,\,d$ are integers. Find $a+b+c+d$
I've tried it like this:
Let $w=\sqrt6,\, x=\sqrt3, \, y=\sqrt2, z=1$
$$\begin{align}
(y+z)^2 &= (y^2 + z^2) + 2yz\\
y+z &= \sqrt{(y^2 + z^2) + 2yz}\\
y+z &= \sqrt{3 + \sqrt{8}}
\end{align}$$
Let $y+z=f$
$$\begin{align}
(x+f)^2 &= (x^2 + f^2) + 2xf\\
x+f &= \sqrt{(x^2 + f^2) + 2xf}\\
x+f &= \sqrt{(6+\sqrt8) + 2\sqrt{9+3\sqrt8}}
\end{align}$$
And I don't think this is going to work, since there is still a root term inside the bracket, namely $6+\sqrt8$. I need another way to make it an integer.
AI: Expand out enough to get to
\begin{align*}
(a^2-24a+476-b)+\sqrt{2}(336-16a)+\sqrt{3}(272-12a)+\sqrt{6}(192-8a)&=\sqrt{c+\sqrt{d}}.
\end{align*}
This means, when we square the left side, we need to only have two terms with nonzero coefficient. Note that
$$(w+x\sqrt2+y\sqrt3+z\sqrt6)^2=(w^2+2x^2+3y^2+6z^2)+2\sqrt2(wx+3yz)+2\sqrt3(wy+2xz)+2\sqrt6(wz+xy),$$
so we need two of $\{wx+3yz,wy+2xz,wz+xy\}$ to be $0$. However, if the first two are $0$, then
$$wxy+3y^2z=wxy+2x^2z=0$$
implies that either $z=0$ or $x=y=0$; in the first case, $w=0$. We may get similar conclusions for each of the other selections to be $0$, so we must have that two of the parameters $\{w,x,y,z\}$ are $0$. In particular, since none of our polynomials in $a$ for $x,y,z$ have common roots, we must have that $w=0$. Then, $y\neq 0$ since $y$ has a noninteger root for $a$, so we have $a\in\{21,24\}$ and $a=21\implies b=413$, with $a=24\implies b=476$. If $a=24$, the left side is actually negative (it's $-48\sqrt2-16\sqrt3$), so it can't be the square root of anything. For $a=21$, $b=413$, we may find by direct calculation that
$$1+\sqrt2+\sqrt3+\sqrt6=\sqrt{21+\sqrt{413+\sqrt{4656+\sqrt{16588800}}}}.$$
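A quick high-precision check of the final identity (a minimal Python sketch using the standard decimal module):

    from decimal import Decimal, getcontext
    getcontext().prec = 50

    def dsqrt(x):
        return Decimal(x).sqrt()        # high-precision square root

    lhs = 1 + dsqrt(2) + dsqrt(3) + dsqrt(6)
    rhs = dsqrt(21 + dsqrt(413 + dsqrt(4656 + dsqrt(16588800))))
    print(lhs - rhs)                    # ~1E-49: zero to working precision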
|
H: How to solve $\int_{-\infty}^{\infty} exp(-\sqrt{2\pi}x-\dfrac{(y-x)^2}{2})dy$
What I tried was:
$\int_{-\infty}^{\infty} exp(-\sqrt{2\pi}x-\dfrac{(y-x)^2}{2})dy = -\dfrac{1}{y-x}exp(-\sqrt{2\pi}x-\dfrac{(y-x)^2}{2})|_{-\infty}^{\infty}$
The exponential dominates the expression, so it goes to $0$ as $y$ approaches positive or negative infinity, and so the whole expression would be zero. However, when I use Wolfram to solve this integral I get $\sqrt{2\pi}\exp(-\sqrt{2\pi}x)$. Going to a polar representation doesn't seem to be effective here. How did they come by this result?
AI: $e^{-\sqrt {2\pi}\, x} $ does not depend on $y$. So the answer is $e^{-\sqrt {2\pi}\, x} \int_{-\infty}^{\infty} e^{-(y-x)^{2}/2}\, dy=\sqrt {2 \pi}\, e^{-\sqrt {2\pi}\, x} $, using the standard Gaussian integral $\int_{-\infty}^{\infty} e^{-u^2/2}\,du = \sqrt{2\pi}$.
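A quick numerical confirmation at one test point (a minimal sketch, assuming scipy):

    import numpy as np
    from scipy.integrate import quad

    x = 0.7                                   # arbitrary test point
    val, _ = quad(lambda y: np.exp(-np.sqrt(2*np.pi)*x - (y - x)**2 / 2),
                  -np.inf, np.inf)
    print(val, np.sqrt(2*np.pi) * np.exp(-np.sqrt(2*np.pi)*x))   # agree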
|
H: How to construct a linear system that has no solution
Construct a linear system that has no solution, in which the number of unknown variables is greater than the number of equations. Is such a system possible? What is needed?
AI: Here is an example which might be useful.
$$
\begin{bmatrix}
1 & 1 & 1\\
2 & 2 & 2\\
\end{bmatrix}
\mathbf{x}
=
\begin{bmatrix}
1 \\
0 \\
\end{bmatrix}
$$
Mainly, the rank of the matrix is 1, which is less than the dimension of the right-hand side (which is two), so the column space is a proper subspace of $\mathbb{R}^2$. Therefore, you can find a right-hand side which doesn't have a solution vector $\mathbf{x}$; here $(1,0)^T$ is not a multiple of the column $(1,2)^T$.
I hope this helps.
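A quick numpy check of this example (a minimal sketch of mine): the rank of the augmented matrix exceeds the rank of the coefficient matrix, so the system is inconsistent.

    import numpy as np

    A = np.array([[1., 1., 1.],
                  [2., 2., 2.]])
    b = np.array([1., 0.])
    print(np.linalg.matrix_rank(A))                        # 1
    print(np.linalg.matrix_rank(np.column_stack([A, b])))  # 2 -> no solution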
|
H: Show that $p \to q$ is a tautology if and only if $P \subseteq Q$.
I really don't know how to approach this.
I wrote out the truth table for $p' + q$ in which each row has a value of $1$ apart from when $p = 1$ and $q = 0$.
Intuitively, if $q = 0$ then $Q$ is a null set. And if $P\subseteq Q$, then that means $\{1\}\subseteq \{0\}$, which is false, so this row doesn't apply.
All other rows have a value of $1$, and so the said proposition is a tautology.
Please point out if/where I'm wrong in this and how to approach this more mathematically.
Thanks.
AI: $P$ and $Q$ are the truth sets for $p$ and $q$ respectively, i.e. the sets of valuations that make $p$ and $q$ True respectively.
To say that $p → q$ is a tautology means that there is no valuation such that $p$ is True and $q$ is False, i.e. in every valuation where $p$ is True also $q$ is True.
Thus, the truth set $P$ is a subset of $Q$.
The same for the "only if" part.
|
H: What are the values of $a$ for which this integral converges?
What are the values of a for which this integral converges?
$$I = \int_{0}^{\infty} \frac{\sin x}{x^a}\,dx.$$
I tried comparing it with the integral $$\int_{0}^{\infty} \frac{1}{x^a}\,dx,$$ but I couldn't get anything out of it.
Any help would be appreciated. :)
AI: First, let's split $\int_{0}^{\infty} \frac{\sin x}{x^a}\,dx $ into the integrals from $0$ to $1$ and from $1$ to $\infty$.
For the first, use that $\dfrac{\sin x}{x^a}$ for $x \to 0+$ can be majorized by $\dfrac{1}{x^{a-1}}$ (since $|\sin x| \le x$), and $\int_0^1 \frac{dx}{x^{a-1}}$ converges when $a<2$.
The second can be majorized by $\dfrac{1}{x^a}$ and converges absolutely when $a>1$. For plain convergence $a>0$ is enough, by Dirichlet's test: the antiderivative of $\sin$ is bounded and $\dfrac{1}{x^a}$ tends to $0$ monotonically.
Joining the two pieces, we arrive at $0<a<2$.
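For what it's worth, such oscillatory integrals can also be evaluated numerically; a minimal sketch assuming mpmath's quadosc (which takes the period of the oscillation):

    import mpmath as mp

    def I(a):
        return mp.quadosc(lambda x: mp.sin(x) / x**a, [0, mp.inf],
                          period=2 * mp.pi)

    print(I(0.5), I(1.0))   # finite values inside 0 < a < 2; I(1.0) is pi/2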
|
H: Find the area between $(y-x+2)^2=9y, \ \ x=0, \ \ y=0.$
Find the area between $$(y-x+2)^2=9y , \ \ x=0, \ \ y=0.$$ The graph is attached below.
The area between these lines is
$$A=\int_0^2 ydx$$
From $(y-x+2)^2=9y \ \ (*)$ we get $$x=y+3\sqrt{y}+2 \implies dx=(1+\frac{3}{2\sqrt{y}})dy$$ Thus,
$$A=\int_0^2 ydx=\int_0^2y(1+\frac{3}{2\sqrt{y}})dy=2+2^{3/2}$$
Question: When taking square root from both sides in $(*)$, one could also get $x=y-3\sqrt{y}+2$ which would lead to the answer $2-2^{3/2}$ (negative). Is checking both cases and choosing the one with a positive answer the right thing? Or should I take any conditions into account to restrict one?
Attached the graph:
AI: Did you notice that your answer, $2 + 2^{3/2} > 2 + 2^1 = 4$, yet the intended region, which I presume to actually be the set satisfying all inequalities $$0 \le x \le 2, \\ 0 \le y \le 1, \\ (y-x+2)^2 \ge 9y, $$ obviously has area less than $1$, being bounded above by the triangle with vertices at $(0,0)$, $(2,0)$, and $(0,1)$?
Let's do this the correct way. When you solved the equation $$(y - x + 2)^2 = 9y,$$ you chose $$x = y + 3\sqrt{y} + 2.$$ But when $y = 1$, this gives $x = 6$, whereas we would expect instead $x = 0$ if we are to be on the portion of the curve bounding the region of interest. If we choose the other root, we get $$x = y - 3\sqrt{y} + 2,$$ and now when $y = 1$, we get $x = 0$ as expected. Now we integrate, but since this equation gives us the boundary as a function of $y$, we have to integrate with respect to $y$, not $x$: $$A = \int_{y=0}^1 y - 3\sqrt{y} + 2 \, dy = \frac{1}{2},$$ and this is consistent with our requirement that $0 < A < 1$.
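The computation is quick to confirm symbolically (a minimal sympy sketch of mine):

    import sympy as sp

    y = sp.symbols('y', nonnegative=True)
    print(sp.integrate(y - 3 * sp.sqrt(y) + 2, (y, 0, 1)))   # 1/2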
|
H: Why is grouping $2n$ students into pairs NOT equal to $^{2n}C_2$?
I just don't understand.
The number of ways to couple $2n$ people into pairs is $\dfrac{(2n)!}{n!\,(2!)^n}$.
I get this reasoning: $(2n)!$ counts the rearrangements of the $2n$ students; divide by $n!$ (the order of the pairs) and by $2!$ for each pair (the order within a pair) to get rid of repeated groupings.
Shouldn't $^{2n}C_2$ give me the same number of groupings though?
AI: Why should it? $\binom{2n}{2}$ simply counts the number of ways to select $2$ people out of a group of $2n$ people. It doesn't say anything about how the other $2n-2$ people not selected are arranged.
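A brute-force enumeration for small $n$ (a minimal Python sketch) makes the distinction concrete:

    from math import comb, factorial

    def count_pairings(people):
        # pick a partner for the first person, then recurse on the rest
        if not people:
            return 1
        first, rest = people[0], people[1:]
        return sum(count_pairings([p for p in rest if p != partner])
                   for partner in rest)

    for n in range(1, 5):
        formula = factorial(2 * n) // (factorial(n) * 2**n)
        print(n, count_pairings(list(range(2 * n))), formula, comb(2 * n, 2))
    # brute force matches the formula (1, 3, 15, 105), while C(2n, 2)
    # gives 1, 6, 15, 28; any agreement at small n is coincidental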
|
H: question on Natural Log, $\lim \limits_{n\to∞ }(1+\frac{1}{n} + \frac{1}{n^2})^n $
I'm curious what the solution of this is. Is it just the same as the ordinary limit for $e$?
$\lim \limits_{n\to∞ }(1+\frac{1}{n} + \frac{1}{n^2})^n =e?$
A few people say the $\frac{1}{n^2}$ just goes to $0$, so it's the same as $e$.
But why? Why does $\frac{1}{n}$ remain meaningful, while $\frac{1}{n^2}$ goes to zero?
I'm asking this because I'm stuck while deriving a formula.
I post the part of the derivation as a picture. Look: from the 1st line, they go to the 3rd line by using the 2nd. And I assume this supposes that $\lim \limits_{n\to∞ }(1+\frac{1}{n} + \frac{1}{n^2})^n $ is just the same as $e$.
Thank you genius
AI: Hint: For any $\epsilon >0$ we have $1+\frac 1 n \leq 1+\frac 1 n+\frac 1 {n^{2}} \leq 1+\frac {1+\epsilon} n$ for $n$ sufficiently large. Apply logarithm, multiply by $n$ and take the limit.
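A quick numerical look (a minimal Python sketch) makes the hint plausible:

    import math

    for n in (10, 100, 10_000, 1_000_000):
        print(n, (1 + 1/n + 1/n**2)**n)
    print('e =', math.e)   # the sequence approaches e (from above)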
|
H: Prove $\frac{a^2}{(a+b)^2} \geqslant \frac{4a^2-b^2-bc+7ca}{4(a+b+c)^2}$
Let $a,\,b,\,c$ be positive numbers. Prove that
$$\frac{a^2}{(a+b)^2} \geqslant \frac{4a^2-b^2-bc+7ca}{4(a+b+c)^2}. \quad (*)$$
Note. My proof uses SOS. From $(*)$ we get the known problem
$$\frac{a^2}{(a+b)^2}+\frac{b^2}{(b+c)^2}+\frac{c^2}{(c+a)^2} \geqslant \frac{3}{4}.$$
AI: We need to prove that:
$$4a^2c^2+(a+b)(a^2-6ab+b^2)c+b^2(a+b)^2\geq0,$$ which is a quadratic inequality in $c$.
If $a^2-6ab+b^2\geq0,$ it's obviously true.
But for $a^2-6ab+b^2\leq0$ it's enough to prove that
$$(a^2-6ab+b^2)^2-16a^2b^2\leq0$$ or
$$(a^2-2ab+b^2)(a^2-10ab+b^2)\leq0,$$ which is obvious.
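A quick randomized check of $(*)$ (a minimal Python sketch, just as a sanity test of the statement):

    import random

    def gap(a, b, c):
        return (a**2 / (a + b)**2
                - (4*a**2 - b**2 - b*c + 7*c*a) / (4 * (a + b + c)**2))

    random.seed(1)
    worst = min(gap(*(random.uniform(0.01, 10.0) for _ in range(3)))
                for _ in range(100_000))
    print(worst)   # stays >= 0 over the whole sample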
|
H: Prove that V is an identity operator
Recently I attended a linear algebra course; unfortunately, it was more focused on practice. When I discovered that, I decided to focus more on theory to boost my skills during the summer vacation. Yesterday I came across the following task and literally have no idea how to solve it. Can you help me with this one, or set some direction to digest?
It is known that a linear operator $V$ on a unitary (complex inner product) space has a unitary matrix $V_e$ in a basis $e$ and a positive-definite matrix $V_f$ in a basis $f$. The task is to prove that $V$ is the identity operator.
AI: Fix any basis $b$ and consider the matrix $A=V_b$. Then you know that there exist invertible matrices $S$ and $T$ such that $S^{-1}AS$ is unitary and $T^{-1}AT$ is positive definite.
Since similar matrices share eigenvalues, you see that the eigenvalues of $A$ have modulus $1$ (because $S^{-1}AS$ is unitary) and positive real (because $T^{-1}AT$ is positive definite).
This only leaves the possibility that $A$ has the single eigenvalue $1$.
Now use that positive definite matrices are diagonalizable.
|
H: A question about convergence of a sequence
Theorem. Let $X$ be a uniformly convex Banach space. Let $\{x_n\}$ be a sequence in $X$ such that $x_n \rightharpoonup x$ and $$\limsup\lVert x_n\rVert\le\lVert x\rVert;$$ then $x_n\to x.$
Proof. We assume that $x\ne 0$. Set $\lambda_n=\max\{{\lVert x_n\rVert,\lVert x \rVert}\}$, $y_n=\lambda_n^{-1}x_n$ and $y=\lVert x \rVert^{-1}x.$
Question. The proof is clear to me except for this point; I think it's a small thing, but I can't understand why $$\lambda_n\to \lVert x\rVert.$$
AI: Let $\epsilon >0$. Then $\lim \sup \|x_n\|<\|x\|+\epsilon$. This implies $\|x_n\| <\|x\|+\epsilon$ for $n$ sufficiently large and hence $\|x\| \leq \lambda_n <\|x\|+\epsilon$ for $n$ sufficiently large.
|
H: On a real square matrix of order $10$
Let $M_{10}$ be the set of $10×10$ real matrices; if $U\in M_{10}$, then let $\rho(U)=rank(U)$.
Which of the following are true for every $A\in M_{10}$?
$(1)\rho(A^8)=\rho(A^9)$
$(2)\rho(A^9)=\rho(A^{10})$
$(3)\rho(A^{10})=\rho(A^{11})$
$(4)\rho(A^8)=\rho(A^7)$
I am able to discard options $(1),(2)$ &$ (4)$ by taking nilpotent matrices of order $9,10$ & $8$ respectively. This leaves $(3)$ as true but I am finding it difficult to write a general proof.
I think the characteristic polynomial can be used.
Can you give any suggestions? Thanks for your time.
AI: If $\rho(A^{10}) = 0$ then the problem is directly solved, since then $\rho(A^{11})=0$ as well. Otherwise, note that the sequence $10 = \rho(A^0) \geq \rho(A^1) \geq \rho(A^2) \geq \cdots$ is non-increasing and cannot be negative, so it cannot decrease strictly at every one of the first ten steps; hence $\rho(A^{k}) = \rho(A^{k-1})$ for some $k \leq 10$.
Once two consecutive ranks agree, they agree forever: $\operatorname{ran}(A^{k}) \subseteq \operatorname{ran}(A^{k-1})$ together with equal dimensions forces $\operatorname{ran}(A^{k}) = \operatorname{ran}(A^{k-1})$, and applying $A$ gives $\operatorname{ran}(A^{k+1}) = \operatorname{ran}(A^{k})$.
In particular, $\rho(A^{10})=\rho(A^{11})$.
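A minimal numpy check with a single nilpotent Jordan block disposes of (1), (2) and (4) at once, and illustrates (3):

    import numpy as np

    A = np.diag(np.ones(9), k=1)        # 10x10 nilpotent Jordan block
    ranks = {k: np.linalg.matrix_rank(np.linalg.matrix_power(A, k))
             for k in range(7, 12)}
    print(ranks)                        # {7: 3, 8: 2, 9: 1, 10: 0, 11: 0}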
|
H: What if $\epsilon$ is infinity in the $\epsilon$-$\delta$ definition of limits?
The epsilon delta definition of limits says that if the limit as $x\to a$ of $f(x)$ is L, then for any $\delta>0$, there is an $\epsilon>0$ such that if $0<|x-a|<\delta$, then $|f(x)-L|<\epsilon$.
But the problem is that this definition says very generally that for ANY $\delta$, there is SOME $\epsilon$. So what if I always choose $\epsilon=\infty$? Then it is guaranteed that the distance between $f(x)$ and $L$ is less than $\epsilon$, and, as a bonus, $L$ can literally be anything, which means that the limit can be any value you like. Which is obviously absurd. What am I missing here?
Also, most people say that this definition intuitively tells us that $f(x)$ can be as close to $L$ as you like, because if $\delta$ gets smaller and smaller and approaches zero, then epsilon gets smaller and smaller and approaches zero as well. But this can't be right, as $\epsilon$ is not a function of $\delta$ or something, so you can't say that if one approaches 0, then the other will as well.
Edit: I feel like the problem has to do with the fact that usually when people use this definition to solve limit problems, then they obtain some expression for epsilon as a function of delta (like I write about above), and using this expression, you usually find that as delta goes to zero, then epsilon goes to zero as well. If it was assumed in the definition itself that this should ALWAYS be the case, then the definition would make total sense to me, but it doesn't seem like it does to me. If someone could share some thoughts on this, then I would be very happy.
AI: You seem to have the definition backwards in your first sentence.
$$\forall \epsilon > 0 \; \exists \delta > 0 \; ...$$
In English: for all $\epsilon > 0$ there exists $\delta > 0$ ...
An intuitive way to think about it is a game. If I am claiming the limit then you can challenge me with any accuracy you want, a positive $\epsilon$, and I need to be able to respond with a positive $\delta$ that achieves it. $\epsilon$ and $\delta$ need to be numbers so $\infty$ is implicitly excluded. Anyway, even if we allowed $\infty$ with obvious naive rules and you challenged me to get within $\epsilon = \infty$ of my claimed limit, then it would be easy for me to achieve. It wouldn't change things.
Limits are an area where you see the symbol $\infty$ frequently and it is easy to get the impression that it is being treated as a number. It isn't; it is just a suggestive notation for a separate definition. The definition of the limit when $x \rightarrow \infty$ is different from the one when $x \rightarrow a$.
Some extra based on comments: note that although I must be able to supply a suitable $\delta$ for any $\epsilon$ that you give me, it does not in any sense have to be the best or optimal one. Suppose that I am claiming that $x^2 \rightarrow 0$ as $x \rightarrow 0$. In a sense, the best $\delta$ is $\sqrt \epsilon$, which only just does the job, but I could just reply $1$ if your $\epsilon$ is $> 1$ and give you your own $\epsilon$ back (as my $\delta$) if it is $< 1$. This would be more than good enough, and that is okay.
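To make the game concrete, here is a minimal Python sketch of a responder for the claim $x^2 \rightarrow 0$ (the function name is my own):

    def delta_for(eps):
        # respond to a challenge eps > 0 for the claim x**2 -> 0 as x -> 0
        return min(1.0, eps)            # any positive value <= sqrt(eps) works

    for eps in (10.0, 1.0, 0.01):
        d = delta_for(eps)
        x = 0.999 * d                   # any point with 0 < |x| < delta
        assert abs(x**2 - 0) < eps
        print(eps, d)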
Some more based on the edited question. Again, it is backwards: $\delta$ is a function of $\epsilon$, not the reverse. $\epsilon$ is the desired accuracy and $\delta$ is how close you need to get to achieve it.
Yes, in general, as $\epsilon$ gets smaller, so will $\delta$. This seems quite intuitive to me: in my game, as you challenge me to get closer to my claimed limit, I need to go closer to the limit point.
It is not always true but the exceptions are not interesting. Consider the function $f(x) = 1$, a constant function. I claim that $f(x) \rightarrow 1$ as $x \rightarrow 0$. Now for whatever $\epsilon$ you give me, I can just reply $1$, or a googolplex if that amused me.
|
H: f is a continuous real valued function with period $2 \pi$. Determine which of the following cases are always true.
This is a NBHM Phd 2019 question.
Let $f$ be a continuous real-valued function of period $2\pi$. Which of the following cases are always true?
Case 1: $\exists $ $t_0 \in \mathbb{R}$ such that $f(t_0) = f(t_0 + \frac{\pi}{2})$
Case 2: $\exists $ $t_0 \in \mathbb{R}$ such that $f(t_0) = f(t_0 + \frac{\pi}{4})$
I tried with $f(x) = \sin(x)$, and verified that for this particular f both the options are true. The answer key says that both the options are correct.
I tried with $g(x)= f(x) - f(x + \frac{\pi}{2})$ and $g(x) = f(x) - f(x + \frac{\pi}{4})$, to use the intermediate value property, but I wasn't able to verify the statements.
Any help is highly appreciated.
AI: Both are true. If 1) is false then the continuous function $f(t+\frac {\pi} 2)-f(t)$ never vanishes and hence it is always positive or always negative. If it is always positive you get $$0=f(2\pi)-f(0)=\left[f(2\pi)-f\left(2\pi-\tfrac {\pi} 2\right)\right]+\left[f\left(2\pi-\tfrac {\pi} 2\right)-f\left(2\pi-2\cdot\tfrac {\pi} 2\right)\right]+\cdots+\left[f\left(\tfrac {\pi} 2\right)-f(0)\right],$$ which is a contradiction since each term in the sum is $>0$. You can handle 2) in a similar way.
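A numerical illustration for one sample $f$ (a minimal sketch assuming numpy and scipy; the particular $f$ is arbitrary):

    import numpy as np
    from scipy.optimize import brentq

    f = lambda t: np.sin(t) + 0.3 * np.cos(2 * t)   # some 2*pi-periodic function
    g = lambda t: f(t + np.pi / 2) - f(t)

    ts = np.linspace(0, 2 * np.pi, 1001)
    i = int(np.argmax(np.sign(g(ts[:-1])) != np.sign(g(ts[1:]))))
    t0 = brentq(g, ts[i], ts[i + 1])                # g changes sign, so a root exists
    print(t0, f(t0), f(t0 + np.pi / 2))             # the last two values coincide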
|
H: why the basis generates the ideal as an R-module?
This is from Page 82 of Rotman's homological algebra book.
Definition: Let k be a commutative ring. Then a ring R is a k-algebra if R is a k-module satisfying: a(rs) = (ar)s = r(as) for all a in k and r,s in R.
If k is a field and R is a finite-dim k-algebra, then every left or right ideal I in R is a subspace of R. A basis of I generates I as a k-module; a fortiori, it generates I as an R-module, and so I is finitely generated.
I don't quite follow why the basis generates I as an R-module as the scalars are now from R. Any help would be appreciated!
AI: Assuming $R$ has identity element, we obtain a ring homomorphism $\varphi:k\to R$ by $\lambda \mapsto \lambda 1$.
Already the $k$-linear combination of the basis elements give all elements of $I$, and these are all $R$-linear combinations (through $\varphi$) as well.
|
H: Verification of a logarithmic inequality
Verify the inequality
$ \frac{(\log (x) + \log (y))}{2} \le \log\frac{(x+y)}{2}$, where $x,y>0$
I'm still struggling with how to solve the inequality; I have tried AM-GM and Bernoulli without any success. My guess is that the solution is very elementary, but I can't see it.
AI: $\log$ is a concave function, so your inequality is exactly Jensen's inequality with weights $\frac12,\frac12$.
About Jensen see here: https://en.wikipedia.org/wiki/Jensen%27s_inequality
|
H: Variance of Univariate Gaussian Mixture
Let $\mathbb{P}_1,\dots,\mathbb{P}_n$ be univariate Gaussian measures with respective means $m_1,\dots,m_n \in \mathbb{R}$ and respective variances $\sigma_1^2,\dots,\sigma_n^2$. Let $r_1,\dots,r_n$ be numbers in $(0,1)$ which sum to $1$. Is the variance of a random variable distributed according to $\sum_{i=1}^n r_i\mathbb{P}_i$ equal to $\sum_{i=1}^n r_i^2 \sigma_i^2$?
AI: Define independent random variables $X_i$ with distribution $\mathbb{P}_i$.
Thus $\mathbb{E}[X_i]=m_i$ and $Var(X_i)=\sigma_i^2$.
Then $Var(\sum_{i=1}^n r_iX_i)=\sum_{i=1}^n Var(r_iX_i)$ because the terms are uncorrelated (they are independent), and so $\sum_{i=1}^n Var(r_iX_i)=\sum_{i=1}^n r_i^2 \sigma_i^2$, even if you do not assume $r_1,...,r_n$ have sum $1$.
Note, however, that the linear combination $\sum_{i=1}^n r_iX_i$ is not distributed according to the mixture $\sum_{i=1}^n r_i\mathbb{P}_i$: a draw from the mixture picks a single component $i$ with probability $r_i$ and then samples from $\mathbb{P}_i$, and its variance is $\sum_{i=1}^n r_i(\sigma_i^2+m_i^2)-\left(\sum_{i=1}^n r_im_i\right)^2$, which in general differs from $\sum_{i=1}^n r_i^2\sigma_i^2$.
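A simulation sketch (assuming numpy) contrasting the two constructions:

    import numpy as np

    rng = np.random.default_rng(0)
    m = np.array([0.0, 3.0]); s = np.array([1.0, 0.5]); r = np.array([0.3, 0.7])
    N = 1_000_000

    comp = rng.choice(2, size=N, p=r)            # mixture: pick a component,
    mix = rng.normal(m[comp], s[comp])           # then sample from it
    lin = r[0] * rng.normal(m[0], s[0], N) + r[1] * rng.normal(m[1], s[1], N)

    print(mix.var(), (r * (s**2 + m**2)).sum() - (r * m).sum()**2)  # mixture
    print(lin.var(), (r**2 * s**2).sum())        # linear combination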
|
H: Dimensional property of kernel for sum of two linear maps
Let $T: V \longrightarrow V, S: V \longrightarrow V$ be two linear operators. Let $P: V \longrightarrow V$ be another linear operator. Suppose $P \circ S=S \circ P$, then prove or disprove that
$$
\operatorname{dim}(\operatorname{ker}(T \circ S+P))=\operatorname{dim}(\operatorname{ker}(S \circ T+P))
$$
Is there any isomorphism possible from the space $R_1=\{T(S(x))+P(x):~x \in V\}$ to $R_2=\{S(T(y))+P(y): y \in V\}$ using the fact $P(S(x))=S(P(x))$ so that we can apply Rank-Nullity theorem? Is there any result regarding $\dim \ker (T+P)$ for linear maps $T$ and $P$?
AI: This is not true. Counterexample:
$$
T=\left(\begin{array}{cc|cc}0&0&1&0\\ 0&0&0&1\\ \hline 1&0&0&0\\ 0&1&0&0\end{array}\right),
\ S=\left(\begin{array}{cc|cc}0&0&0&0\\ 0&1&0&0\\ \hline 0&0&0&1\\ 0&0&0&0\end{array}\right),
\ P=\left(\begin{array}{cc|cc}0&0&0&0\\ 0&0&0&0\\ \hline 0&0&1&0\\ 0&0&0&1\end{array}\right),
$$
$$
TS+P=\pmatrix{0&0&0&1\\ 0&0&0&0\\ 0&0&1&0\\ 0&1&0&1},
\ ST+P=\pmatrix{0&0&0&0\\ 0&0&0&1\\ 0&1&1&0\\ 0&0&0&1}.
$$
We have $PS=SP$ but $\operatorname{rank}(TS+P)=3\ne2=\operatorname{rank}(ST+P)$.
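One can confirm the counterexample mechanically (a minimal numpy sketch):

    import numpy as np

    T = np.array([[0,0,1,0],[0,0,0,1],[1,0,0,0],[0,1,0,0]])
    S = np.array([[0,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,0,0]])
    P = np.array([[0,0,0,0],[0,0,0,0],[0,0,1,0],[0,0,0,1]])

    assert (P @ S == S @ P).all()                   # the hypothesis PS = SP
    r = np.linalg.matrix_rank
    print(r(T @ S + P), r(S @ T + P))               # 3 2, so the kernel
                                                    # dimensions differ (1 vs 2)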
|
H: $X_i \sim^{iid}\operatorname{Ber}(p)$ and $Y_m = \sum_{i=1}^{m}X_i$. find $E[Y_m|Y_n]$
I have a math problem regarding conditional expectation.
Let there be $$X_i \sim^{iid}\operatorname{Ber}(p), Y_m = \sum_{i=1}^{m}X_i$$.
Now we know that $$Y_m\sim \operatorname{Bin}(m,p), m \leq n$$
I'm trying to find $E[Y_m\mid Y_n]$, but I don't know how to approach it to begin with, since I'm not sure whether to keep it with the $\Sigma$ or to change to two conditioned binomials.
Suppose $m \leq n$, so the smaller sum can definitely give us information about the greater one, so I know Bayes is needed. So far what came to mind was:
Create a variable $Y_k = Y_n-Y_m$ so that I can divide the bigger sum into the dependent part and the rest. But I'm not so sure about it. I'd like a guide for how to proceed; not even a full solution, but a hint.
AI: Since the $X_i$'s are i.i.d., $E(X_i|X_1+X_2+...+X_n)$ is the same for each $i \leq n$. The sum of these conditional expectations is $$E(X_1+X_2+...+X_n|X_1+X_2+...+X_n)$$ $$=X_1+X_2+...+X_n=Y_n.$$ Hence $$E(X_i|X_1+X_2+...+X_n)=Y_n/n$$ for each $i\leq n$. Adding the first $m$ of these we get $E(Y_m|Y_n)= \frac m n Y_n$.
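A quick simulation check of the formula (a minimal sketch assuming numpy):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, p = 10, 4, 0.3
    X = rng.binomial(1, p, size=(500_000, n))
    Ym, Yn = X[:, :m].sum(axis=1), X.sum(axis=1)

    for k in (2, 5):                    # empirical E[Y_m | Y_n = k] vs (m/n) k
        print(k, Ym[Yn == k].mean(), m * k / n)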
|
H: Inequality 6 deg
For $a,b,c\ge 0$, prove that $$4(a^2+b^2+c^2)^3\ge 3(a^3+b^3+c^3+3abc)^2.$$
My attempt: $$LHS-RHS=12(a-b)^2(b-c)^2(c-a)^2+2(ab+bc+ca)\sum_{sym} a^2(a-b)(a-c)$$
$$+\left(\sum_{sym} a(a-b)(a-c)\right)^2+14\left(\sum_{sym} a^3b^3+3a^2b^2c^2-abc\sum_{sym} ab(a+b)\right)+\sum_{sym} c^2(a-b)^2[2(a+b)^2+(a-b)^2]$$
Since $\sum_{sym} x^3+3xyz-\sum_{sym} xy(x+y)\ge 0$, set $(x,y,z)\rightarrow (ab,bc,ca)$ we obtain $$\sum_{sym} a^3b^3+3a^2b^2c^2-abc\sum_{sym} ab(a+b)\ge 0$$
Thus $LHS-RHS\ge 0$
I think this is a complicated solution and hard to find by hand :">
Please give me a simpler solution. Thank you very much.
AI: By the Cauchy-Schwarz inequality we have
$$(a^3+b^3+c^3+3abc)^2 = \left[\sum a(a^2+bc)\right]^2 \leqslant \sum a^2 \sum (a^2+bc)^2.$$
Therefore we will show that
$$4(a^2+b^2+c^2)^2 \geqslant 3[(a^2+bc)^2+(b^2+ca)^2+(c^2+ab)^2],$$
equivalent to
$$a^4+b^4+c^4 + 5(a^2b^2+b^2c^2+c^2a^2) \geqslant 6abc(a+b+c). \quad (1)$$
Which is true because
$$a^4+b^4+c^4 \geqslant a^2b^2+b^2c^2+c^2a^2 \geqslant abc(a+b+c).$$
Note. A sum-of-squares decomposition of $(1)$:
$$a^4+b^4+c^4+5(a^2b^2+b^2c^2+c^2a^2)-6abc(a+b+c) = (a^2+b^2+c^2+ab+bc+ca)(a^2+b^2+c^2-ab-bc-ca)+2 \sum a^2(b-c)^2 \geqslant 0.$$
Zhaobin has posted it before.
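The decomposition is easy to verify symbolically (a minimal sympy sketch of mine):

    import sympy as sp

    a, b, c = sp.symbols('a b c')
    lhs = (a**4 + b**4 + c**4 + 5*(a**2*b**2 + b**2*c**2 + c**2*a**2)
           - 6*a*b*c*(a + b + c))
    sos = ((a**2+b**2+c**2+a*b+b*c+c*a) * (a**2+b**2+c**2-a*b-b*c-c*a)
           + 2*(a**2*(b-c)**2 + b**2*(c-a)**2 + c**2*(a-b)**2))
    print(sp.expand(lhs - sos))   # 0, confirming the identity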
|