H: Limit $\lim _{x \to 0}\sqrt {x+\sqrt {x+\sqrt{x+\sqrt{x...}}}}=1$
I've been investigating some interesting infinite square roots, and I've arrived at the hypothesis that $$\lim_{x\to 0}\sqrt {x+\sqrt {x+\sqrt{x+\sqrt{x...}}}}=1$$
However, I have tried to prove this but have found myself unable to do so. For example, I've tried rewriting this as $$1=\sqrt{x+1}$$ so $1=x+1$, which leads us to $x=0$, which doesn't exactly work; replacing $x$ with $0$ yields a value of $0$.
That's another side point: obviously the method I just used yields an incorrect result, but where is the maths flawed?
Please could you either prove or disprove my hypothesis? Thank you in advance.
AI: Take $a_0 = \sqrt{x}$ and $a_{n+1} = \sqrt{x+a_n}$. We need to show that
$1$. $a_{n+1} > a_n$ (the sequence is monotonically increasing)
$2$. There exists an $m$ such that $a_n \leq m$ for all $n$ (the sequence is bounded)
$1$ is easy. We have $a_0 = \sqrt{x}$ and $a_1 = \sqrt{x+\sqrt{x}}$. First, $a_1 > a_0$ as we have
$$\sqrt{x} > 0 \iff x+\sqrt{x} > x \iff \sqrt{x+\sqrt{x}}>\sqrt{x} \iff a_1 > a_0$$
Assume $1$ holds up to $n$. Then $a_{n+1} = \sqrt{x+a_n} > \sqrt{x+a_{n-1}} = a_n$ so $1$ holds for $a_{n+1}$. By induction, $1$ holds, so $a_n$ is monotonically increasing.
Now for $2$, we do the following. Use the common way to solve this kind of radical, which is to assign a value $y$ to it:
$$y = \sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}}$$
$$y^2 = x+\sqrt{x+\sqrt{x+\ldots}}$$
$$y^2=x+y$$
$$y^2-y-x=0$$
Using the quadratic equation,
$$y=\frac{1 \pm \sqrt{1+4x}}{2}$$
$y > 0$ so the smaller solution is extraneous.
$$y = \frac{1+\sqrt{1+4x}}{2}$$
This means that, provided the limit exists (which we verify below),
$$\lim_{n \to \infty} a_n = \frac{1+\sqrt{1+4x}}{2}$$
and we can prove the bound
$$a_n \leq \frac{1+\sqrt{1+4x}}{2} $$
inductively. First, we have $a_0 = \sqrt{x} < \frac{1}{2} + \sqrt{\frac{1}{4}+x} = \frac{1+\sqrt{1+4x}}{2}$. Now assume the inequality is true for all $a_i$ for $i \leq n$. Then, for $x\geq 0$,
$$\Big(\frac{1+\sqrt{1+4x}}{2}\Big)^2 = \frac{1+2\sqrt{1+4x}+(1+4x)}{4} = x + \frac{1+\sqrt{1+4x}}{2}$$
so
$$\frac{1+\sqrt{1+4x}}{2} = \sqrt{x+\frac{1+\sqrt{1+4x}}{2}}$$
Then
$$a_{n+1} = \sqrt{x+a_n} \leq \sqrt{x+\frac{1+\sqrt{1+4x}}{2}} = \frac{1+\sqrt{1+4x}}{2}$$
Therefore, this sequence is bounded and monotonically increasing, so it converges. Now we can evaluate: at $0$, it comes out to
$$y\vert_0=\lim_{x \to 0}\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}}= \frac{1+\sqrt{1}}{2}=1$$
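As a quick numerical sanity check, here is a minimal Python sketch (standard library only) that iterates $a_{n+1}=\sqrt{x+a_n}$ and compares with the closed form; the iteration count of $100$ is an arbitrary choice that is more than enough here, since the error roughly halves each step:

```python
import math

def nested_radical(x, n_iter=100):
    """Iterate a_{k+1} = sqrt(x + a_k) starting from a_0 = sqrt(x)."""
    a = math.sqrt(x)
    for _ in range(n_iter):
        a = math.sqrt(x + a)
    return a

# Compare the iterated value with the closed form (1 + sqrt(1 + 4x)) / 2;
# as x -> 0 both approach 1.
for x in [1.0, 0.1, 0.01, 1e-4, 1e-8]:
    closed = (1 + math.sqrt(1 + 4 * x)) / 2
    print(f"x = {x:8.0e}:  a_n ~ {nested_radical(x):.10f},  closed form = {closed:.10f}")
```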
|
H: How To Think About Measurability in $\mathbb{R}$
How do Platonist-leaning mathematicians think about the measurability/non-measurability of subsets of $X=\mathbb{R}\cap [0,1]$? For clarity, let's use "size" for the informal concept of length/area/volume, and "measure" for usual formalized version of this concept. In the context of subsets of $X$, would most mathematicians agree, disagree or have no opinion about the following statements (feel free to just answer for yourself):
If a set is Lebesgue-measurable, then its size is its Lebesgue measure. If a set is not Lebesgue measurable, then it is completely meaningless to ask about its size.
Lebesgue measure has little or nothing to do with the intuitive notion of size. It is a purely formal concept, and there is no deep philosophical significance to a set being non-measurable.
Lebesgue measure is a correct but incomplete formalization of the notion of size. There are canonical extensions of Lebesgue measure that allow one to meaningfully talk about the size of certain non-Lebesgue-measurable sets.
For some sets, the question of their size is fundamentally meaningless. That is, there is absolutely no reasonable way to assign them a size, even if one extends beyond Lebesgue measure.
There are multiple, conflicting notions of "size" which are all compatible with Lebesgue measure. Some non-Lebesgue-measurable sets may be assigned different sizes, depending on which notion one has in mind.
Please feel free to add to this list, if you feel that something is missing!
AI: I cannot talk for "Platonist-leaning mathematicians", but here is my take.
The Lebesgue measure $\lambda$ is determined on all Lebesgue-measurable sets if you prescribe that
$\lambda(X)=1$
$\lambda(\varnothing)=0$
$\lambda$ is translation-invariant
for a disjoint sequence $\{E_n\}\subset X$, $$\tag1\lambda(\bigcup_nE_n)=\sum_n\lambda(E_n).$$
So yes, if a set is Lebesgue measurable, its size is its Lebesgue measure. For non-Lebesgue-measurable sets, one can easily define Lebesgue's outer measure (and its definition is fairly intuitive). But things like $(1)$ fail, so it is hard to defend that the outer measure of a (non-measurable) set is its size when things like joining two disjoint sets will give you a "size" that is not the sum of the sizes.
As (clearly, I hope!) said in 1, Lebesgue measure has everything to do with "size". It's defined in terms of "size" and extended logically from there.
One of those "canonical extensions" (I don't know what they are) of Lebesgue measure would have to assign measure to non-measurable sets that does not agree with Lebesgue's outer measure. So I cannot see where "meaningful" would come from: you would have a "size" of a set that does not agree with the size obtained by covering it with smaller and smaller segments. The latter is the notion of "size" on which all of Calculus is built, so you seem to be willing to stir quite a few things here (all of calculus, basically). How would this be "meaningful"?
Lebesgue measure is a super-common-sense way to assign "size" to subsets of $X$, so I cannot imagine where you are going here.
"There are multiple, conflicting notions of "size" which are all compatible with Lebesgue measure". Don't agree. See 1.
The only "natural" way of assigning measure to non-Lebesgue-measurable sets is by denying the Axiom of Choice, using something like that Solovay Model. So now you have extended Lebesgue measure to all non-measurable sets. And you cannot exhibit any of them, because you don't have the Axiom of Choice. So now you have a "natural" measure on all subsets of $X$; all the sets of $X$ where this would make a difference are not accessible to you, and meanwhile you have broken huge parts of analysis by moving to an ad-hoc model of set theory that gives you something useless, at the cost of losing lots of useful things.
|
H: Show that $\ker\sigma \subset \varphi_1(\ker\rho)$ and $\operatorname{im}\tau \subset \psi_2(\operatorname{im}\sigma)$.
Given the homomorphism of short exact sequences, I must show that $\ker\sigma \subset \varphi_{1}(\ker\rho)$ and $\operatorname{im}\tau \subset \psi_{2}(\operatorname{im}\sigma)$. For the first part, let $ y \in \ker\sigma $, that is, $ \sigma(y) = 0$. (Is it correct to say that $ \psi_1(y) \in \psi_1 (\ker \sigma)$ ?). I must find a $ x \in \ker \rho$ such that $ \varphi_1(x) = y $
On the other hand, I have the following question: If $ \tau $ is surjective, can I say that $ \sigma $ is surjective?
AI: As stated and illustrated in the comments by Mindlack the first part is clearly wrong. Consider $\rho=\sigma=\tau=0$ the trivial homomorphism. Then $\ker\rho=F_1$ and $\ker\sigma= E_1$ and you are asked to show that $E_1\subset\varphi_1(F_1)$ which is only the case for $\varphi_1$ being surjective. Pick your favorite non-surjective $\varphi_1$ and see that this cannot work. However, the reverse inclusion holds.
To see this, consider $x\in\ker\rho$. Then by commutativity
$$(\varphi_2\circ\rho)(x)=(\sigma\circ\varphi_1)(x)=0$$
and thus $\varphi_1(\ker\rho)\subset\ker\sigma$. The second part is correct. Pick $\tau(x)\in{\rm im}\,\tau$ for some $x\in G_1$. By exactness, $\psi_1$ is surjective and therefore there is some $y\in E_1$ such that $\psi_1(y)=x$. Then
$$(\psi_2\circ\sigma)(y)=(\tau\circ\psi_1)(y)=\tau(x)$$
and thus ${\rm im}\,\tau\subset\psi_2({\rm im}\,\sigma)$.
Arguments like this are commonly known as proofs by diagram chase.
As $y\in\ker\sigma$ it is correct to say $\psi_1(y)\in\psi_1(\ker\sigma)$ as this is nothing else than applying $\psi_1$.
EDIT: Concerning the question below: consider the diagram where the arrows are the respective inclusions and projections from the direct sums (and the identity for the rightmost column). $\tau$, in your notation, is surjective but $\sigma$ clearly is not.
|
H: Inequalities in matrix norm.
My book says, for any $t$
$$e^{tA} = C \operatorname{diag}(e^{tJ_1}, \cdots, e^{tJ_k})\, C^{-1}.$$
Hence,
$$|e^{tA}| \leq |\operatorname{diag}(e^{tJ_1}, \cdots, e^{tJ_k})|$$
where $J_1, \ldots, J_k$ are the Jordan blocks of $A$.
I don't understand why this has to be true. Please help.
($|A|$ here denotes the induced $p$-norm of $A$.)
AI: It is not true in general: all you can say is
$$ |e^{tA}| \le |C| |C^{-1}| |\text{diag}(e^{t J_1},\ldots,e^{t J_k})|$$
but there's no reason to think $|C| |C^{-1}| = 1$.
You might try some $2 \times 2$ (non-normal) examples. The computations can get rather messy, but you should find that the inequality is false most of the time.
One fairly simple example is
$$ A = \pmatrix{1 & 2\cr -1 & -1\cr}$$
This has eigenvalues $\pm i$, and $|e^{tJ}| = 1$ for all $t$. But
$$ \exp(tA) = \pmatrix{\cos(t)+\sin(t) & 2 \sin(t)\cr -\sin(t) & \cos(t) - \sin(t) \cr}$$
and since $2 \sin(t)$ can be as much as $2$, it is evident that $|\exp(tA)|$ can be at least $2$. Indeed, for $t=\pi/2$ the norm turns out to be $\sqrt{7/2 + 3 \sqrt{5}/2} \approx 2.618$.
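If you want to check this numerically, here is a small sketch (assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [-1.0, -1.0]])
t = np.pi / 2
M = expm(t * A)                       # equals [[1, 2], [-1, -1]] at t = pi/2
print(np.linalg.norm(M, 2))           # induced 2-norm, ~2.618
print(np.sqrt(7/2 + 3*np.sqrt(5)/2))  # the claimed value
```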
|
H: Finding distribution function when pdf is $f(x) = |x|$ for $ -1 < x < 1$
In my probability/stats course, they have defined a probability density function as:
$$f(x) = |x|\quad,\quad -1 < x < 1$$
I am having difficulty with how they have integrated this to find the cumulative distribution function:
$F(x) = (1-x^2)/2, \quad -1 \leq x \leq 0$, and
$F(x) = (1+x^2)/2, \quad 0 \leq x \leq 1$.
AI: We have
$$f(x) = \begin{cases}
|x|, & -1 \leq x \leq 1 \\
0, & \text{otherwise}.
\end{cases} = \begin{cases}
-x, & -1 \leq x < 0 \\
x, & 0 \leq x \leq 1 \\
0, & \text{otherwise}.
\end{cases}$$
When finding the CDF, you must partition the interval $(-\infty, \infty)$ appropriately and consider each case. In this situation, we must consider $(-\infty, -1)$, $[-1, 0)$, $[0, 1)$, and $[1, \infty)$.
Case 1 is the interval $(-\infty, -1)$:
For any $x < -1$, $F(x) = 0$, since if $x < -1$:
$$F(x) = \int_{-\infty}^{x}\underbrace{f(t)}_{=\text{ }0\text{ for } t < -1}\text{ d}t = \int_{-\infty}^{x}0\text{ d}t = 0$$
Case 2 is the interval $[-1, 0)$:
Suppose now that $-1 \leq x < 0$. Then
$$\begin{align}
F(x) &= \int_{-\infty}^{x}f(t)\text{ d}t \\
&= \int_{-\infty}^{-1}\underbrace{f(t)}_{=\text{ }0\text{ for } t < -1}\text{ d}t + \int_{-1}^{x}\underbrace{f(t)}_{=\text{ }-t\text{ for } -1 \leq t < 0}\text{ d}t\\
&= 0 + \int_{-1}^{x}-t\text{ d}t \\
&= -\left[\dfrac{t^2}{2} \right]^{x}_{-1} \\
&= \dfrac{1}{2}\left(1 - x^2\right)\text{.}
\end{align}$$
Case 3 is the interval $[0, 1)$:
Suppose that $0 \leq x < 1$. Then
$$\begin{align}
F(x) &= \int_{-\infty}^{x}f(t)\text{ d}t \\
&= \int_{-\infty}^{-1}\underbrace{f(t)}_{=\text{ }0\text{ for } t < -1}\text{ d}t + \int_{-1}^{0}\underbrace{f(t)}_{=\text{ }-t\text{ for } -1 \leq t < 0}\text{ d}t + \int_{0}^{x}\underbrace{f(t)}_{=\text{ }t\text{ for } 0 \leq t < 1}\text{ d}t\\
&=0 + \int_{-1}^{0}-t\text{ d}t + \int_{0}^{x}t\text{ d}t \\
&= \underbrace{\dfrac{1}{2}(1-0^2)}_{\text{from Case 2}} + \left[\dfrac{t^2}{2} \right]^x_{0} \\
&= \dfrac{1}{2}+\dfrac{x^2}{2} \\
&= \dfrac{1+x^2}{2}
\end{align}$$
Case 4 is the interval $[1, \infty)$:
If $x \geq 1$, we have
$$\begin{align}
F(x) &= \int_{-\infty}^{x}f(t)\text{ d}t \\
&= \int_{-\infty}^{1}f(t)\text{ d}t + \int_{1}^{x}\underbrace{f(t)}_{=\text{ }0\text{ for } t > 1}\text{ d}t \\
&= \underbrace{\dfrac{1+1^2}{2}}_{\text{from Case 3}} + 0 \\
&= 1
\end{align}$$
Thus,
$$F(x) = \begin{cases}
0, & x < -1 \\
\dfrac{1-x^2}{2}, & -1 \leq x < 0 \\
\dfrac{1+x^2}{2}, & 0 \leq x < 1 \\
1, & x \geq 1\text{.}
\end{cases}$$
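As a sanity check of the piecewise formula, here is a small Python sketch (assuming SciPy) that compares the closed-form CDF against numerical integration of the density:

```python
from scipy.integrate import quad

f = lambda t: abs(t) if -1 <= t <= 1 else 0.0

def F_closed(x):
    if x < -1: return 0.0
    if x < 0:  return (1 - x**2) / 2
    if x < 1:  return (1 + x**2) / 2
    return 1.0

for x in [-1.5, -0.7, 0.0, 0.4, 0.9, 2.0]:
    numeric = quad(f, -1, min(x, 1))[0] if x >= -1 else 0.0
    print(f"x = {x:5.2f}:  quad = {numeric:.6f},  closed form = {F_closed(x):.6f}")
```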
|
H: A Systematic way to solve absolute value inequalities?
So, I had to solve this problem: $\left\vert \dfrac{x^2-5x+4}{x^2-4}\right\vert \leq 1$
I factored it in the form: $\left\vert \dfrac{(x-4)(x-1)}{(x-2)(x+2)} \right\vert \leq 1$.
After that I found the intervals in which the expression is positive: $x \in(-\infty, -2) \cup [1,2)\cup[4, \infty) $. And when expression is negative: $x \in (-2,1) \cup(2,4) $.
So, then I went ahead and solved the inequality in the case where the expression is positive and in the case where it's negative. The answer I got for the positive case is $x \in (-2,8/5] \cup(2, \infty)$. I'm not sure how to proceed from here; I figured that I'd find the intersection of the answer I got (when solved for the positive case) and the set where the expression is positive, but that doesn't seem to be the correct way to proceed, since my answer is way off.
I've tried searching how to solve these kinds of problems, but couldn't find anything close to this. So I'd really like to know how to solve this problem and how to approach problems like these in general. I really lack intuition and would like to understand it more. I'd also appreciate any textbook / information about how to solve more complex problems in general.
AI: I can think of two methods to solve these kinds of equations, so I'll write them both down and you can see which you prefer. The first is a lot more algebraic, the second you can argue is more geometric. It may also not fly in your class, depending on the syllabus and professor.
First, we have $-1\leq\frac{x^2-5x+4}{x^2-4}\leq 1$, so we've gotten rid of the absolute value. If this was an equation rather than two inequalities we'd definitely want to multiply by the denominator, and here is no different. The problem is we don't know if the denominator is positive or negative. So we first assume $x^2-4=(x-2)(x+2)>0$. Then multiplying by the denominator gives us
\begin{align*}
&\left\{\begin{array}{ll}-x^2+4\leq x^2-5x+4,\\ x^2-5x+4\leq x^2-4\end{array}\right. \\
\Rightarrow &\left\{\begin{array}{ll}0\leq 2x^2-5x,\\ 0\leq 5x-8\end{array}\right. \\
\Rightarrow &\left\{\begin{array}{ll}0\leq x(2x-5),\\ 0\leq 5x-8.\end{array}\right. \\
\end{align*}
So if $x^2-4=(x-2)(x+2)>0$, we need these equations to be true as well. Now looking at the intersection of the sets that cause each of the equations to be true, we have that the original absolute inequality is true in the following set, where I use curly braces rather than parentheses to show order:
\begin{align*}
&\{(-\infty,-2)\cup(2,\infty)\}\bigcap\left\{(-\infty,0]\cup\left[\frac{5}{2},\infty\right)\right\}\bigcap \left[\frac{8}{5},\infty\right) \\
=&\left\{(-\infty,-2)\bigcap\left\{(-\infty,0]\cup\left[\frac{5}{2},\infty\right)\right\}\bigcap \left[\frac{8}{5},\infty\right)\right\}\bigcup\left\{(2,\infty)\bigcap\left\{(-\infty,0]\cup\left[\frac{5}{2},\infty\right)\right\}\bigcap \left[\frac{8}{5},\infty\right)\right\} \\
=&\emptyset\bigcup \left[\frac{5}{2},\infty\right) \\
=&\left[\frac{5}{2},\infty\right).
\end{align*}
If you repeat the above steps yourself, assuming instead that $x^2-4<0$, you should get that $x$ needs to be in the set $\left[0,\frac{8}{5}\right]$. So our overall answer is $\left[0,\frac{8}{5}\right]\cup\left[\frac{5}{2},\infty\right)$.
I've made one assumption here which is that you know how to solve inequalities like $0<(x-2)(x+2)$, let me know if that's not the case and we can go through that as well.
Now, the second method. Your function inside the absolute value is continuous everywhere except the asymptotes, which is where the denominator is equal to 0. So let's first look at the boundaries of the set where the equation is satisfied.
\begin{align*}
&\left\vert\frac{x^2-5x+4}{x^2-4}\right\vert=1 \\
\Rightarrow&\frac{x^2-5x+4}{x^2-4}=\pm 1. \\
&\left\{\begin{array}{ll}\frac{x^2-5x+4}{x^2-4}=1\\ \frac{x^2-5x+4}{x^2-4}=-1\end{array}\right. \\
\Rightarrow&\left\{\begin{array}{ll}x^2-5x+4=x^2-4\\ x^2-5x+4=-x^2+4\end{array}\right. \\
\Rightarrow&\left\{\begin{array}{ll}-5x+8=0\\ 2x^2-5x=0\end{array}\right. \\
\Rightarrow&\left\{\begin{array}{ll}-5x+8=0\\ x(2x-5)=0\end{array}\right. \\
\Rightarrow &x=0,\frac{8}{5},\frac{5}{2}.
\end{align*}
At these points, the function $\left\vert\frac{x^2-5x+4}{x^2-4}\right\vert$ potentially switches from being less than 1 to being greater than 1, or from being greater than 1 to being less than 1. This can also happen when the denominator is equal to 0, which is when $x=-2,2$, as the function can switch from being close to $-\infty$ to being close to $+\infty$ for example. These values together are then the only times the inequality $\left\vert\frac{x^2-5x+4}{x^2-4}\right\vert\leq 1$ potentially changes from being true to not true. Therefore in between these points, if the inequality is true (or false) for one value it's true (or false) for all values. So we just need to pick values in the intervals \begin{align*}
(-\infty,-2),(-2,0],\left(0,\frac{8}{5}\right],\left(\frac{8}{5},2\right),\left(2,\frac{5}{2}\right],\left(\frac{5}{2},\infty\right)
\end{align*}
to see if the inequality is true in the interval. Note when I use open and closed ends in the intervals, it doesn't matter in which interval you include $x=\frac{5}{2}$, but you don't include $x=2$ in either interval because the function is not defined there.
So for $(-\infty,-2)$ let $x=-3$. Then $\left\vert\frac{x^2-5x+4}{x^2-4}\right\vert=\left\vert\frac{9+15+4}{9-4}\right\vert=\left\vert\frac{28}{5}\right\vert=\frac{28}{5}> 1$. So the inequality is false when $x=-3$ and so is false in the entire interval $(-\infty,-2)$. Repeat for the other intervals to get the same answers as before.
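If you want to double-check the final answer $\left[0,\frac85\right]\cup\left[\frac52,\infty\right)$ numerically, here is a minimal Python sketch that tests one interior point per interval (the switch points $x=0,\frac85,\frac52$ themselves satisfy the inequality with equality):

```python
# one interior test point per interval from the second method
for x in [-3, -1, 1, 1.8, 2.3, 3]:
    val = abs((x**2 - 5*x + 4) / (x**2 - 4))
    print(f"x = {x:4}:  |expr| = {val:.4f}  ->  {'holds' if val <= 1 else 'fails'}")
```

The inequality holds at $x=1$ and $x=3$ and fails at the other four points, matching the solution set above.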
|
H: Showing openness in the topology of point-wise convergence
I want to check if the set
$$ P_{x_0}=\{f\in C([a,b])\mid f(x_0)=0\},\quad x_0\in[a,b] $$
is open in the topology of point-wise convergence.
I already have a problem with an intuitive picture of this topology. I know it has the basis
$$ O(x_1,\ldots,x_n,t_1,\ldots,t_n,\varepsilon)=\{f\in C([a,b])\mid f(x_i)\in B_\varepsilon(t_i),\quad i=1,\ldots,n\}. $$
If I had to guess I would say $P_{x_0}$ is not open because the property in $O$ only needs a finite number of points, so I think there's a way to construct a function which lies in $O$ but has a non-zero "spike" at $x_0$.
Is my approach correct or am I wrong with the way I imagine this basis set?
AI: $P_{x_0}$ is not open in $C([a,b])$, but it is closed, as it equals $e_{x_0}^{-1}[\{0\}]$ where $e_p: C([a,b]) \to \Bbb R$ is the evaluation map $e_p(f)=f(p)$ for any $p \in [a,b]$.
The point-wise convergence topology is the minimal topology that makes all $e_p$ continuous, by definition. And the inverse of a closed set under a continuous map is closed.
To see that $P_{x_0}$ is not open, note that $c_{\frac{1}{n}}$ (the constant map with value $\frac1n$) defines a sequence not in $P_{x_0}$ that converges point-wise to $c_0$ (the zero function), which is in the set. So $c_0$ is not an interior point of $P_{x_0}$.
|
H: Prove $f(x) = \sqrt{x}\ln(x)$ is uniformly continuous for $x = [1, \infty)$
Original question is to show this is true for all $x > 0$ with the hint to split cases on $x \in (0,1]$ and $x \in [1, \infty)$. I can show this is true for $x \in (0,1]$ by extending the interval to include $0$ (because $f(0) = 0 = \lim_{x\rightarrow 0^+}f(x)$) and use Heine-Cantor Theorem. But how do I prove it for $x \in [1,\infty)$ with the $\epsilon - \delta$ definition of uniform continuity?
Here is my attempt: let $x,y \in [1,\infty)$, then for all $\epsilon > 0$, we want to find $\delta$ such that $\vert x-y \vert < \delta$ implies:
$$
\begin{align*}
\vert \sqrt{x}\ln(x) - \sqrt{y}\ln(y)\vert
&= \vert \sqrt{x}\ln(x) - \sqrt{x}\ln(y) + \sqrt{x}\ln(y) - \sqrt{y}\ln(y)\vert \\
&\leq \sqrt{x}\,\bigg\vert\ln\left(\frac{x}{y}\right)\bigg\vert + \ln(y)(\sqrt{x} - \sqrt{y}) \\
& < \epsilon
\end{align*}
$$
Take cases on $x<y$ and $x\geq y$ and we can use the property that for any $x' > 0$, $1+x' < e^{x'}$ to further simplify the $\bigg\vert\ln\left(\frac{x}{y}\right)\bigg\vert$ term. But I'm not sure how to find the $\delta$ that is independent from $x,y$ from this. Any hints? Or maybe I'm on the wrong track?
AI: $$f'(x)=\frac{\ln(x)}{2\sqrt{x}}+\frac{1}{\sqrt{x}}$$
When $x\to\infty$, $f'(x)\to 0$; combined with the continuity of $f'$ on $[1,\infty)$, this shows $f'$ is bounded there, thus $f$ is uniformly continuous (in fact Lipschitz) by the Mean Value Theorem.
|
H: Showing "Right hand continuity" , critique and help on solution.
Suppose that $\lim_{x \to a^{+}}f(x) = f(a)$ and $f(a) > 0$. Prove there is a number $\delta > 0$ such that $f(x) >0$ for all $x$ satisfying $0 \leq x - a < \delta$
The issue I'm having is linking the ideas properly to make the solution sound.
So the rough work I have to this point is:
$$|f(x) - f(a)| < \epsilon \\ f(a) - \epsilon < f(x) < \epsilon + f(a)$$
So my thinking leads along the lines of "if I can somehow make $(f(a) - \epsilon) = 0$ from my initial conditions then I would be done"
This revolves around me finding "a good $\epsilon > 0$". From the definition of the limit given I have
$$\forall \ \epsilon > 0 \, \exists \ \delta > 0 \ \text{such that,} \text{for all } x\ \text{if} \ 0 < x-a < \delta \ \Rightarrow \ |f(x) - f(a)| < \epsilon$$
Using $$0 < x-a < \delta \\ a< x < \delta + a < \delta + x$$
Here is where I'm stuck....I know from seeing a solution that I'm supposed to eventually draw a conclusion that $|f(x) - f(a)| < f(a)$ and this would lead to $f(x) > 0$. What I'm stuck with is how to "choose the $\delta$" based on the information I have....I also think I'm missing a line of reasoning for myself internally on how to manipulate the expressions.
AI: Let's consider $\epsilon = \frac{f(a)}{2}$. Starting from the limit definition with this $\epsilon$, there exists $\delta >0$ such that for all $x$ satisfying $0 < x-a < \delta$ we'll have $$0<f(a) - \epsilon = \frac{f(a)}{2}< f(x),$$ and for $x = a$ the inequality $f(x) = f(a) > \frac{f(a)}{2} > 0$ holds trivially, so $f(x) > 0$ for all $x$ with $0 \leq x-a < \delta$.
|
H: Is there a shortcut to find a Taylor series not centered at 0 with a Taylor series centered at 0?
We know that the Taylor series of $\ln(1+x)$ centered at 0 is $x-\frac{x^2}{2} + \frac{x^3}{3} - \dots$.
We can find the Taylor series of $\ln(2+x)$ by writing $\ln(1+(1+x))$, so this is equal to $(x+1)-\frac{(x+1)^2}{2} + \frac{(x+1)^3}{3} - \dots$, but then this is centered at $-1$, since $1+x$ is centered at $0$.
Other than calculating the derivatives and applying the general formula, is there a quicker way to find a Taylor series centered at $a \neq 0$ from the Taylor series centered at $0$? Or is brute force the only method here?
AI: There is no simple formula in general.
You can't even find the value of $f(a)$ without using all the Taylor coefficients of the series around $x=0$.
However, in the case of $\ln$, you can say
$$ \ln(2+x) = \ln(2 (1+x/2)) = \ln(2) + \ln(1+x/2)$$
and substitute $x/2$ for $t$ in the series of $\ln(1+t)$.
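A quick check with a CAS (assuming SymPy) confirms the identity term by term:

```python
import sympy as sp

x = sp.symbols('x')
direct = sp.series(sp.log(2 + x), x, 0, 6)                      # expand ln(2+x) at 0 directly
via_identity = sp.log(2) + sp.series(sp.log(1 + x/2), x, 0, 6)  # ln 2 + series of ln(1 + x/2)
print(sp.simplify(direct.removeO() - via_identity.removeO()))   # expect 0
```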
|
H: Converse to a proposition on algebraically closed fields
This is a follow up to a previous question. Let us call a field $F$ root-closed if every element $x$ of $F$ has at least one $n$-th root for every positive integer $n$. It is very easy to show that every algebraically closed field of characteristic $0$ is root-closed. Is the converse true? That is, is every root-closed field of characteristic $0$ algebraically closed?
AI: Abel's impossibility theorem explicitly says "no". For instance, start with $K_0 = \Bbb Q\subseteq \Bbb C$. Then recursively define $K_i$ as the extension of $K_{i-1}$ by all roots of all polynomials of the form $x^n - k$, for $k\in K_{i-1}$. The union of all these $K_i$ will be a root-closed subfield of $\Bbb C$ (it is the smallest subfield of $\Bbb C$ where each non-zero element has all its $n$ $n$-th roots). It consists exactly of all complex numbers that can be reached from the rational numbers by some finite application of the four standard arithmetic operations as well as taking complex $n$-th roots.
Abel's theorem says that there are polynomials over the rationals whose roots are impossible to describe in this form. One such polynomial is $x^5-x-1$.
|
H: Show for some subsets of $G$ we have subgroups of $(G, \ast)$
Let $G$ be an abelian group. Show that for the following subsets $H_n$, we have subgroups of $G$.
$H_1= \lbrace g \in G | g^n=e \rbrace $, with $n$ being a certain fixed natural number.
$H_2 = \lbrace g \in G | g^{-1}=g \rbrace$
$H_3 = \lbrace g \in G | g=x^2 $ for a $x \in G \rbrace$
For $H_1$:
$e \in H_1$ is obvious.
Let be $k \in H_1$, so $k^n=e\Longleftrightarrow k \ast(k)^{n-1}=e\Longleftrightarrow k^{-1}=(k)^{n-1}$
We need to show that $(k)^{n-1} \in H_1$
So we show:
$((k)^{n-1})^n=e$
$((k)^{n-1})^n=\underbrace{k^{n-1}\ast k^{n-1} \ast...\ast k^{n-1}}_{n}=\overbrace{\underbrace{k^{n}\ast k^{n} \ast...\ast k^{n}}_{n-2}\ast \underbrace{k^{n-1}\ast k^{1}}_{=e}}^{\text{since $\ast$ is associative}}=\underbrace{e\ast e \ast...\ast e}_{n-2}\ast e=e$
$\Longrightarrow \forall k \in H_1:k^{-1} \in H_1$
We show $\forall k,t \in H_1: k\ast t \in H_1$:
To show that we need to show: $(k \ast t)^n = e$
$(k \ast t)^n=\underbrace{(k \ast t) \ast (k \ast t) \ast ... \ast (k \ast t)}_{n}=\overbrace{\underbrace{(k \ast... \ast k \ast k)}_{n} \ast \underbrace{(t \ast ...\ast t \ast t)}_{n}}^{\text{since $(G,\ast)$ is associative and kommutative}}=k^n\ast t^n=e \ast e= e$
$\Longrightarrow \forall k,t \in H_1: k\ast t \in H_1$
$\Box$
For $H_2$:
$e \in H_2$ is obvious.
Let $k \in H_2 \Longrightarrow k=k^{-1}\Longrightarrow \forall k \in H_2:k^{-1} \in H_2$
We now show that $\forall k,t \in H_2: k \ast t \in H_2$:
In order for $k \ast t \in H_2$, $\,\,\,(k \ast t)=(k \ast t)^{-1}$ has to hold!
Here again the kommutativity of $(G,\ast)$ plays a role!
$k \ast t \ast k^{-1} \ast t^{-1}= k \ast t \ast t^{-1} \ast k^{-1}=k \ast e \ast k^{-1}= k \ast k^{-1}=e$
This tells us indeed: $(k\ast t)^{-1}=k^{-1}\ast t^{-1}=k \ast t= (k \ast t)$
$\Box$
For $H_3$:
Again $e=e^2 \Longrightarrow e \in H_3$
Let $k \in H_3 \Longrightarrow k = x^2$ for some $x \in G$
Then $k^{-1}=(x^2)^{-1}$ which is again since we have an abelian group $(x^2)^{-1}=(x^{-1})^2=k^{-1}$
$\Longrightarrow \forall k \in H_3:k^{-1} \in H_3$
We now show $\forall k,t \in H_3: k\ast t \in H_3$:
$k \ast t= x^2 \ast y^2$ with $x,y \in G$
$k \ast t= x^2 \ast y^2=x \ast x \ast y \ast y= x \ast y \ast x \ast y=(x\ast y)^2 \longleftarrow$ because it's still an abelian group
Since $x,y \in G \Longrightarrow x \ast y \in G$
Let $(x \ast y):= z$
$\Longrightarrow k \ast t=z^2$
$\Longrightarrow \forall k,t \in H_3: k\ast t \in H_3$
$\Box$
It would be great if someone could look over it and give me some feedback :)
AI: For $H_1$ note that
$$g^n=e\implies \left(g^n\right)^{-1}g^n=\left(g^n\right)^{-1}\implies \left(g^{-1}g\right)^n=e=(g^{-1})^n$$
That $\left(g^{-1}\right)^n=\left(g^n\right)^{-1}$ can be deduced from inverses being unique (this implies that the order of an element and of its inverse coincide). Your proof is fine too but a bit lengthy.
Besides that, I cannot spot any significant shortcuts based on a case-by-case consideration. Anyway, the following, known as the one-step subgroup test (EDIT: I realized you are aware of this), might be of interest
Claim. A non-empty subset $H$ of a group $G$ is a subgroup if and only if $a,b\in H\implies a\circ b^{-1}\in H$.
Proof. $H$ being subgroup implying the latter condition should be clear. For the converse take $x\in H$ (there is such an $x$ as $H$ is non-empty) and let $a=b=x$. Then $a\circ b^{-1}=x\circ x^{-1}=e\in H$. Now, take $a=e$ and $b=x$ to deduce that for all $x\in H$ we have $a\circ b^{-1}=e\circ x^{-1}=x^{-1}\in H$. Finally, for $x,y\in H$ we have $y^{-1}\in H$ and thus $a=x$, $b=y^{-1}$ implying $a\circ b^{-1}=x\circ(y^{-1})^{-1}=x\circ y\in H$ finishing the proof.$~~~\square$
This is a useful criterion which usually shortens the amount of calculation necessary. Take $H_1$ and observe that for $g,h\in H_1$ we have $g^n=h^n=e$ and thus
$$(g\circ h^{-1})^n=(g\circ h^{-1})\circ\dots\circ(g\circ h^{-1})=g^n\left(h^{-1}\right)^n=e$$
And as $e\in H_1$ the One-Step subgroup test yields the result (arguably, more calculations are hidden within the fact $\left(h^{-1}\right)^n=\left(h^n\right)^{-1}$ if not yet established). I encourage you to try it for $H_2,H_3$ too!
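If it helps to see the test in action, here is a minimal brute-force sketch in an additive abelian group, $(\mathbb{Z}/12\mathbb{Z},+)$, where $g^n$ means $n\cdot g \bmod 12$ (the choice of modulus $12$ and exponent $n=4$ is arbitrary):

```python
from itertools import product

m = 12                                   # the abelian group (Z/12Z, +)
G = set(range(m))
op = lambda a, b: (a + b) % m            # the group operation
inv = lambda a: (-a) % m                 # inverses
e = 0                                    # identity

n = 4
H1 = {g for g in G if (n * g) % m == e}  # "g^n = e"
H2 = {g for g in G if inv(g) == g}       # "g^{-1} = g"
H3 = {(x + x) % m for x in G}            # "squares" x^2

def is_subgroup(H):
    # one-step test: H nonempty, and a, b in H  =>  a * b^{-1} in H
    return bool(H) and all(op(a, inv(b)) in H for a, b in product(H, H))

for H in (H1, H2, H3):
    print(sorted(H), is_subgroup(H))     # all three should print True
```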
Minor spelling remark: it is written 'commutative' in English, not 'kommutative'. Anyway, as a German native speaker I understand the tendency towards writing the latter.
|
H: Question regarding the Quotient Rule, How does the textbook reach this intermediate step?
$$\begin{align*}
\left(\frac{f(x)}{g(x)}\right)' &= \frac{(x-3)^{1/3}\frac{1}{2}(x+2)^{-1/2}}{(x-3)^{2/3}} - \frac{(x+2)^{1/2}\frac{1}{3}(x-3)^{-2/3}}{(x-3)^{2/3}}\\
&= \frac{(x-3)^{-2/3}(x+2)^{-1/2}}{(x-3)^{2/3}}\cdot\left[\frac{1}{2}(x-3) - \frac{1}{3}(x+2)\right]
\end{align*}$$
Can someone advise how they got to the step after the second equality sign? Step by Step would be most helpful.
AI: $$\dfrac{(x-3)^{1/3}\frac12(x+2)^{-1/2}}{(x-3)^{2/3}}-\dfrac{(x+2)^{1/2}\frac13(x-3)^{-2/3}}{(x-3)^{2/3}}$$
$$=\dfrac{(x-3)\color{blue}{(x-3)^{-2/3}}{\frac12\color{blue}{(x+2)^{-1/2}}}-(x+2)\color{blue}{(x+2)^{-1/2}}\frac13\color{blue}{(x-3)^{-2/3}}}{(x-3)^{2/3}}$$
$$=\dfrac{\color{blue}{(x-3)^{-2/3}(x+2)^{-1/2}}}{(x-3)^{2/3}}\left[\frac12(x-3)-\frac13(x+2)\right].$$
Let me know if you need further details.
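For what it's worth, the whole simplification can be verified with a CAS (assuming SymPy); here $f(x)/g(x)=\sqrt{x+2}\,/\,(x-3)^{1/3}$ is inferred from the terms above:

```python
import sympy as sp

x = sp.symbols('x')
expr = sp.sqrt(x + 2) / (x - 3)**sp.Rational(1, 3)
lhs = sp.diff(expr, x)                   # the quotient-rule derivative, done by SymPy
rhs = (x - 3)**sp.Rational(-2, 3) * (x + 2)**sp.Rational(-1, 2) / (x - 3)**sp.Rational(2, 3) \
      * (sp.Rational(1, 2)*(x - 3) - sp.Rational(1, 3)*(x + 2))
print(sp.simplify(lhs - rhs))            # expect 0
```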
|
H: Coin Flip Problem
So my friend gave me this question this other day, and I've tried to start it (I'll show my logic below), but I couldn't find any efficient way to do the problem.
You start out with 1 coin. At the end of each minute, all coins are flipped simultaneously. For each heads that is flipped, you gain another coin, but for every tails that is flipped, a coin is lost. (Note that any new coins are not flipped until the next minute.) Once there are no more coins remaining, the process stops. What is the probability that the process will have stopped exactly after 5 minutes (that's 5 sets of flips), no earlier and no later?
I've taken a few approaches to this problem. What I've tried to do is to find the total number of possibilities for each amount of coins by the 5th minute, and then multiply that by the probability that all coins will have vanished on the 5th minute. But I'm just not able to calculate how many possible ways exist to get to each total amount of coins by the end. Does anyone have any other ideas, or perhaps a formula to solve this problem?
AI: Let $q(k)$ be the probability that the process initiated by a single coin will stop
on or before $k$ minutes. We write $q(k+1)$ in terms of $q(k)$:
\begin{align}
q(1) &= 1/2\\
q(2) &= (1/2) + (1/2)q(1)^2 = 5/8\\
q(3) &= (1/2) + (1/2)q(2)^2 = 89/128\\
q(4) &= (1/2) + (1/2)q(3)^2 = 24305/32768\\
q(5) &= (1/2) + (1/2)q(4)^2 = 1664474849/2147483648
\end{align}
and the probability we stop at 5 minutes exactly is:
$$q(5)-q(4) = \frac{71622369}{2^{31}} \approx 0.0333517645...$$
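Here is a short Python sketch that reproduces these exact fractions with the standard library:

```python
from fractions import Fraction

q = Fraction(0)              # q(0) = 0: the process cannot have stopped before any flips
qs = [q]
for _ in range(5):
    q = Fraction(1, 2) + Fraction(1, 2) * q**2   # q(k+1) = 1/2 + (1/2) q(k)^2
    qs.append(q)

print(qs[5])                 # 1664474849/2147483648
print(qs[5] - qs[4])         # 71622369/2147483648
print(float(qs[5] - qs[4]))  # ~0.0333517645
```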
|
H: finding the point of tangency for two circles
The two circles $x^2 + y^2−16 x−20 y + 115 =0$ and $x^2 + y^2+8 x−10 y + 5 =0$ are tangent.
How could I find the point of tangency?
AI: $x^2+y^2-16x-20y+115=x^2+y^2+8x-10y+5$
$\Longleftrightarrow -16x-20y+110=8x-10y$
$\Longleftrightarrow -24x-10y+110=0$
$-24x-10y+110=0$ is the tangent to both circles.
The tangent point of the two circles is where the line between the two center points of the circles and the tangent above cross each other.
$x^2+y^2-16x-20y+115=(x-8)^2+(y-10)^2-49\Longrightarrow C_1=(8|10)$
$x^2+y^2+8x-10y+5=(x+4)^2+(y-5)^2-36 \Longrightarrow C_2=(-4|5)$
Now you only need to connect $C_1$ and $C_2$ to a line and see where it intersects with the tangent we found above!
Are you able to continue from here? :)
Okay, we continue:
We found out with rewriting our circle equations where our Center $C_1$ and $C_2$ lie on the plane.
Now we first "draw" a vector between thoose two points.
$\begin{pmatrix}
8 \\
10
\end{pmatrix}-\begin{pmatrix}
-4 \\
5
\end{pmatrix}=\begin{pmatrix}
12 \\
5
\end{pmatrix}$; this vector represents our line's direction/slope,
and now we "move" the vector to the right position by fixing it at one center point (of course it will also pass through the second ;))
with picking $C_1$ our line will be $h:\begin{pmatrix}
8 \\
10
\end{pmatrix}+\lambda\begin{pmatrix}
12 \\
5
\end{pmatrix}$
Now we need to change the line we have into a form which we can calculate with:
$x=8+12\lambda $
$y=10+5 \lambda$
$\Longrightarrow \lambda=\frac{x-8}{12}$
$\Longrightarrow y=10+5\frac{x-8}{12}\Longleftrightarrow y - \frac{5x}{12}-\frac{20}{3}=0$
So your line we were looking for is also called:
$y - \frac{5x}{12}-\frac{20}{3}=0$
Now we only need to set our tangent line equal to this line, which passes through both $C_1$ and $C_2$.
(With $C_1,C_2$ being the center of our circle)
$y - \frac{5x}{12}-\frac{20}{3}=0$
$-24x-10y+110=0$
$\Longrightarrow x= \frac{20}{13},y=\frac{95}{13}$
So finally our tangent point $T=\left(\frac{20}{13}|\frac{95}{13}\right)$
Any more questions? ;)
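For completeness, a short exact-arithmetic check of the result (standard library only):

```python
from fractions import Fraction as F

x, y = F(20, 13), F(95, 13)
print(x**2 + y**2 - 16*x - 20*y + 115)   # 0: the point lies on the first circle
print(x**2 + y**2 + 8*x - 10*y + 5)      # 0: the point lies on the second circle
print(-24*x - 10*y + 110)                # 0: the point lies on the common tangent

# distance between the centers (8,10) and (-4,5) equals the sum of radii 7 + 6
print(((8 - (-4))**2 + (10 - 5)**2) ** 0.5)  # 13.0
```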
|
H: Volume of a solid of revolution with change of variable
I want to calculate the volume of the solid of revolution around the x-axis of this figure
$x = (1-t^2)/(t^4+4)$
$y = (t+1)*(1-t^2)/(t^4+1)$
for $t$ between $-1$ and $1$. (The plot shows a closed loop through the origin, with an upper and a lower branch.)
In my opinion to do this I have to calculate the integral of
$\int_{0}^{0.25}\pi\, (y_{\mathrm{upper}}^2 - y_{\mathrm{lower}}^2)\, dx$.
The solution in my book is
$\int_{-1}^{1}\pi\, y(t)^2 \,\frac{dx}{dt}\, dt$
I do not understand why, after changing the variable of integration to $t$, the upper and lower parts of the curve are not subtracted anymore.
AI: Divide the integral into two parts, one from $0$ to $1$ and one from $-1$ to $0$. In fact you should plot $(x(t),y(t))$ separately for those two intervals. You will notice that these two intervals form the upper and lower branches of your curve. Moreover, when you vary $t$ from $-1$ you go on the lower curve from $(0,0)$ and return on the upper branch. But that means that you increase $x$ then you decrease it. You can therefore write $$\int_0^{0.25}(y_{upper}^2-y_{lower}^2)dx=\int_0^{0.25}y_{upper}^2dx-\int_0^{0.25}y_{lower}^2dx=\int_0^{0.25}y_{upper}^2dx+\int_{0.25}^0y_{lower}^2dx$$
|
H: Clarification on mutual singularity of probability measures
Let $P_1$ and $P_2$ be two probability measures on a measurable space, $(\Omega, \mathcal{F})$. Then $P_1$ and $P_2$ are mutually singular (denoted $P_1 \perp P_2$) if there exists $A \in \mathcal{F}$ such that $P_1(A) = 1$ and $P_2(A) = 0$. The book Gaussian Random Processes by Ibragimov and Rozanov gives the following sufficient condition for mutual singularity of $P_1$ and $P_2$.
If there exist $A_1, A_2, \ldots \in \mathcal{F}$ such that $\lim\limits_{n\rightarrow\infty}P_1(A_n) = 0$ and $\lim\limits_{n\rightarrow\infty}P_2(A_n) = 1$, then $P_1 \perp P_2$.
They give the following explanation as to why, but I'm still unsure as to why this implies mutual singularity.
We can decompose $P_2$ as $P_2 = P_2^\prime + P_2^{\prime\prime}$ where $P_2^\prime \perp P_1$ and $P_2^{\prime\prime} \ll P_1$. Under the condition $\lim\limits_{n\rightarrow\infty}P_1(A_n) = 0$, we have,
\begin{eqnarray}\lim\limits_{n\rightarrow\infty}P_2(A_n) &=& \lim\limits_{n\rightarrow\infty}P_2^\prime(A_n) + \lim\limits_{n\rightarrow\infty}P_2^{\prime\prime}(A_n)\\
&=& \lim\limits_{n\rightarrow\infty}P_2^\prime(A_n)\leq P_2^\prime(\Omega) < 1
\end{eqnarray}
I can follow the argument they give, but not the conclusion. Why can we deduce that $P_1 \perp P_2$ in this case? If we look at the definition of mutual singularity, what are the authors using as the measurable set $A$ here?
AI: If $P_2'(\Omega) <1$ we get a contradiction to the assumption that $P_2(A_n) \to 1$. Hence $P_2'(\Omega) =1$. But then $P_2(\Omega) =P_2'(\Omega)+ P_2''(\Omega) $ shows that $P_2''(\Omega) =0$. Thus $P_2''$ is the zero measure. So $P_2=P_2' \perp P_1$.
|
H: Find the equation of the two tangent planes to the sphere $x^2+y^2+z^2-2y-6z+5=0$ which are parallel to the plane $2x+2y-z=0$
Find the equation of the two tangent planes to the sphere $x^2+y^2+z^2-2y-6z+5=0$ which are parallel to the plane $2x+2y-z=0$
My Attempt
We need to find a point which is shortest distance from the plane to the sphere. Let it be $(a,b,c).$ Then Equation of the tangent plane at $(a,b,c)$ is given by the formula
$$2a(x-a)+2(b-1)(y-b)+2(c-3)(z-c)=0$$
Finding the diametrically opposite point of $(a,b,c)$. I can find the equation of another tangent parallel to the plane.
I don't know how to find the shortest distance from the plane to the sphere.
Is there any short method to find the tangents?
AI: Instead complete the square
$$x^2+(y-1)^2+(z-3)^2 = 5$$
then take the gradient
$$\langle x, y-1, z-3 \rangle = \lambda\langle 2, 2, -1\rangle \implies x = y-1 = 2(3-z)$$
which means
$$x^2 + x^2 + \frac{x^2}{4} = \frac{9x^2}{4} = 5 \implies x = \pm \frac{2\sqrt{5}}{3}$$
This gives you your two points once you plug in for $y$ and $z$.
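A small sketch (assuming SymPy) that reproduces the two tangency points and the corresponding parallel planes:

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam')
sphere = x**2 + y**2 + z**2 - 2*y - 6*z + 5
# gradient of the sphere proportional to the plane normal (2, 2, -1), point on the sphere
sols = sp.solve([sp.Eq(sp.diff(sphere, x), 2*lam),
                 sp.Eq(sp.diff(sphere, y), 2*lam),
                 sp.Eq(sp.diff(sphere, z), -lam),
                 sp.Eq(sphere, 0)], [x, y, z, lam], dict=True)
for s in sols:
    point = (s[x], s[y], s[z])
    c = sp.simplify(2*s[x] + 2*s[y] - s[z])   # tangent plane: 2x + 2y - z = c
    print(point, " plane: 2x + 2y - z =", c)
```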
|
H: Partial Derivatives from Loring Tu
I attempt to understand the definition of partial derivatives from An Introduction to Manifolds by Loring Tu (Second Edition, page no. 67). The definition is given below.
My Confusions & Questions
I am confused about how the following argument works.
The partial derivative $\partial f/\partial x^i$ is $C^{\infty}$ on $U$ because its pullback $(\partial f/\partial x^i) \circ \phi^{-1}$ is $C^{\infty}$ on $\phi(U)$.
My understanding is as follows.
Given that $f: U \to \mathbb{R}$ is $C^{\infty}$ on $U$. According to the definition of a smooth function on a smooth manifold (Definition 6.1. on page no. 59), if $p \in U$, then there exists a chart $(U, \phi)$ about $p$ s.t. $f \circ \phi^{-1}: \phi(U) \to \mathbb{R}$ is $C^{\infty}$ at $\phi(p)$. This conclusion is applicable for all $p \in U$ and it follows that $f \circ \phi^{-1}: \phi(U) \to \mathbb{R}$ is $C^{\infty}$ on $\phi(U)$. (Here, I have used the fact that $U$ being an open set of smooth manifold $M$ of dim $n$ is itself a smooth manifold of dim $n$, so that I can apply Definiton 6.1. .)
Then $f \circ \phi^{-1}$ is $C^{\infty}$ on $\phi(U)$ $\Rightarrow$ $\frac{\partial \left(f \circ \phi^{-1}\right)}{\partial r^i}$ is $C^{\infty}$ on $\phi(U)$ $\Rightarrow$ $\frac{\partial f}{\partial x^i} \circ \phi^{-1}$ is $C^{\infty}$ on $\phi(U)$.
I am not sure how to deduce that $\frac{\partial f}{\partial x^i}$ is $C^{\infty}$ on $U$ from here. Can you please help me to clear up the confusion?
If it is given that $f: U \to \mathbb{R}$ is $C^{\infty}$ on $U$, then why can't we immediately deduce that $\frac{\partial f}{\partial x^i}$ is $C^{\infty}$ on $U$? Why do we need to use that 'pullback' argument?
I don't understand why '$:=$' (by definition symbol) is used before $\left.\frac{\partial}{\partial r^i}\right|_{\phi(p)} \left(f \circ \phi^{-1} \right)$. I think it should be an '$=$' sign, as the definition of the partial derivative of $f$ wrt $x^i$ at $p$ has been used to write it:
$$\left.\frac{\partial}{\partial x^i}\right\vert_p f := \frac{\partial f}{\partial x^i}(p).$$
AI: For your first question:
Note that a real-valued function $F$ on $M$ is smooth if and only if $F \circ \phi^{-1}$ is smooth on $\phi(U)$ for every coordinate chart $(U,\phi)$ on $M$. The reverse implication is easy: If $F\circ\phi^{-1}$ is smooth for every coordinate chart, you can obviously find some chart as in your definition. Conversely, if there is some chart $(V,\psi)$ such that $F\circ\psi^{-1}$ is smooth, then the identity $F\circ\phi^{-1}=F\circ\psi^{-1}\circ(\psi\circ\phi^{-1})$ and the definition of smoothness of charts implies the forward implication.
Now, as you point out, if $f$ is smooth on $M$, then $\frac{\partial(f\circ\phi^{-1})}{\partial r^i}$, and hence $\frac{\partial f}{\partial x^i}\circ\phi^{-1}$, is smooth on $\phi(U)$ for every coordinate chart $(U,\phi)$. Thus, by the first sentence, $\frac{\partial f}{\partial x^i}$ is smooth on $U$.
Note that I didn't really need to worry about arbitrary charts: I only did it this way because of the definition of smoothness you gave. Had you instead started with "$F$ is smooth in a coordinate chart $(U,\phi)$ if and only if $F\circ\phi^{-1}$ is smooth", the result would be immediate from what you wrote.
For your second question:
The issue is that smoothness is defined in terms of pullbacks by coordinate maps. In particular, a function on a manifold is not defined to be smooth if it is infinitely differentiable, since differentiation on manifolds is not yet defined (you are working through the first part of the definition now).
|
H: Is the set $\{\langle \varnothing, a \rangle ,\langle \{ \varnothing \}, b \rangle \}$ considered a function?
Is the set defined as
$$F=\{\langle\varnothing,a \rangle, \langle \{\varnothing\},b\rangle\}$$
a function?
AI: Assuming you're using the "functions-as-sets-of-ordered-pairs" approach, then yes, it is: its domain is $\{\emptyset,\{\emptyset\}\}$, it sends $\emptyset$ to $a$, and it sends $\{\emptyset\}$ to $b$. Crucially $\emptyset\not=\{\emptyset\}$, so there's no inconsistency here.
(The other standard approach to functions views a function as a set of ordered pairs together with a specific mention of domain and codomain. Of course the domain is determined by the set of ordered pairs (at least, if we don't allow partial functions), but the codomain is not, so this is a genuinely different approach. If we take this stance then what you've written is not a function.)
EDIT: note that the parenthetical remark is exactly the point Arturo Magidin makes in his comment above.
|
H: If $a\frac {dy}{dx} + by = c$ has constant coefficients, does that mean that $a=b=c$?
I am trying to identify if a differential equation has constant coefficients.
Let $A$ denote the equation $a\dfrac {dy}{dx} + by = c$.
Then $A$ has constant coefficients only if $a=b=c$, correct?
AI: The general form of an ODE is
$$F(x,y,y',y'',...,y^{[n]})=g(x)$$
We call it homogeneous if $g(x)=0$. (Well, this isn't technically precise, but I hope you get the idea.) We call it linear if it is of the form
$$p_0(x)y+p_1(x)y'+...+p_n(x)y^{[n]}=g(x)$$
We call it constant coefficient if it is of the form
$$c_0q_0(y)+c_1q_1(y')+...+c_nq_n(y^{[n]})=g(x)$$
Where $c_0,...,c_n$ are constants but are not necessarily equal.
|
H: $E$ is a collection of sets. How do we prove that the class/collection of all sets that can be covered by a finite union of sets in $E$ is a ring?
Given that $E$ is any collection of sets $F_i$,
let $E_1$ be the collection of all sets that can be covered by a finite union of sets in $E$.
How do we show that $E_1$ is a ring?
Or how can we show that $E_1$ is closed under difference?
(I am currently thinking along the lines of single-element sets, but I am not sure how to prove this.)
AI: If $A$ is finitely coverable, then any subset of $A$ is finitely coverable, hence since
$$A-B=A\cap B^c\subseteq A$$
it follows that $A-B$ is finitely coverable.
|
H: Show that $E(|S_n-np|) = 2vq b(v; n, p) $.
(Feller Vol.1, P.241, Q.35) Let $S_n$ be the number of successes in $n$ Bernoulli trials. Prove
$$E(|S_n-np|) = 2vq b(v; n, p) $$
where $v$ is the integer such that $np < v \le np+1$ and $b(v; n,p)$ is the binomial probability of $v$ successes in $n$ trials. Hint: the left side $= 2\sum_{k=0}^{v-1} (np - k) \binom{n}{k} p^k q^{n-k}$.
My attempt: I found that $P(|S_n - np| = j)= b(np +j ; n, p)$ if $S_n \ge np$, $b(np-j; n,p)$ if $S_n < np$. Therefore,
$$E(|S_n-np|)= \sum_{k=0}^{v-1} (np-k)b(k;n,p) + \sum_{k=v}^{n}(k-np)b(k; n,p).$$
I am stuck here, and don't know how to proceed. I would appreciate if you give some help.
AI: I'd be tempted to take the sum in this hint and see how I can simplify it first (we restore the factor $2$ at the end). And by simplify I mean get rid of the summation. So we have
\begin{align*}
&\sum_{k=0}^{v-1}(np-k)\left(\begin{array}{cc} n \\ k\end{array}\right)p^kq^{n-k} \\
=&np\sum_{k=0}^{v-1}\left(\begin{array}{cc} n \\ k\end{array}\right)p^kq^{n-k}-n\sum_{k=1}^{v-1}\left(\begin{array}{cc} n-1 \\ k-1\end{array}\right)p^kq^{n-k}
\end{align*}
using the identity $\left(\begin{array}{cc} n \\ k\end{array}\right)=\frac{n}{k}\left(\begin{array}{cc} n-1 \\ k-1\end{array}\right)$ which is easy to prove, and which should often be in the back of your head when you have these combination expressions multiplied by $n$ or $k$. I replace $k=0$ with $k=1$ in the second sum also as the substitution is only valid when $k\geq 1$ and the $k=0$ term contributed nothing to the sum anyway.
Now we don't know how to sum any of these really, unless they refer to an expectation or probability. If I'd kept the $k$ in the second sum it would look more like an expectation but as the sum is partial it will never be one, which is what motivated me to replace it with $n$ which can be removed from the sum. It's easy to see that the first sum is $\mathbb{P}[S_n\leq v-1]$, it would be nice if we could make the second sum look similar. It would have to be a probability related to $S_{n-1}$; see that
\begin{align*}
\sum_{k=1}^{v-1}\left(\begin{array}{cc} n-1 \\ k-1\end{array}\right)p^kq^{n-k}&=\sum_{l=0}^{v-2}\left(\begin{array}{cc} n-1 \\ l\end{array}\right)p^{l+1}q^{n-(l+1)} \\
&=p\sum_{l=0}^{v-2}\left(\begin{array}{cc} n-1 \\ l\end{array}\right)p^{l}q^{n-1-l} \\
&=p\mathbb{P}[S_{n-1}\leq v-2].
\end{align*}
Then we have
\begin{align*}
np\sum_{k=0}^{v-1}\left(\begin{array}{cc} n \\ k\end{array}\right)p^kq^{n-k}-n\sum_{k=1}^{v-1}\left(\begin{array}{cc} n-1 \\ k-1\end{array}\right)p^kq^{n-k}
&=np\left(\mathbb{P}[S_n\leq v-1]-\mathbb{P}[S_{n-1}\leq v-2]\right).
\end{align*}
It's annoying that these probabilities are for different variables, but consider the following. If, instead of being independent variables, $S_{n-1}$ was the number of successes in $n-1$ trials and $S_n$ was the number of successes in $n$ trials, and these trials were from the $\textbf{same sequence}$, then by sketching a Venn diagram if necessary we can see that
\begin{align*}
\mathbb{P}[S_n\leq v-1]-\mathbb{P}[S_{n-1}\leq v-2]=&\mathbb{P}[S_{n-1}=v-1,\;n\text{-th trial is a failure}] \\
=&\left(\begin{array}{cc} n-1 \\ v-1\end{array}\right)p^{v-1}q^{n-1-(v-1)}q
\end{align*}
so that ultimately we get
\begin{align*}
\mathbb{E}[\vert S_n-np\vert]=2np\left(\begin{array}{cc} n-1 \\ v-1\end{array}\right)p^{v-1}q^{n-1-(v-1)}q=2vq\left(\begin{array}{cc} n \\ v\end{array}\right)p^vq^{n-v}=2vqb(v;n,p),
\end{align*}
as required.
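A quick numerical confirmation of the identity (standard library only; the parameters below are arbitrary, chosen so that $np$ is not an integer):

```python
from math import comb, floor

def check(n, p):
    q = 1 - p
    v = floor(n * p) + 1                        # the integer with np < v <= np + 1
    b = lambda k: comb(n, k) * p**k * q**(n - k)
    lhs = sum(abs(k - n * p) * b(k) for k in range(n + 1))
    rhs = 2 * v * q * b(v)
    print(f"n={n}, p={p}:  E|S_n - np| = {lhs:.10f},  2vq b(v;n,p) = {rhs:.10f}")

check(10, 0.29)
check(25, 0.47)
check(100, 0.615)
```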
|
H: Can $\{(x,y) \mid x^2 + y^2 < 1\}$ be written as the cartesian product of two subsets of $\mathbb{R}$?
We consider the set
$$S := \{(x,y) \mid x^2 + y^2 < 1\}.$$
The exercise in Munkres asks whether it is possible to write this set as the cartesian product of two subsets of $\mathbb{R}$. We, naturally, consider restricting ourselves to the interval $(-1,1)$. If $|x|, |y| \geq 1$, then $x^2 + y^2 \geq 1$, so we require $|x|, |y| < 1$. Hence, the set $(-1,1)$ should work. However, if I take, say, $x, y = \frac{3}{4}$, I get
$$\left(\frac{3}{4}\right)^2 + \left(\frac{3}{4}\right)^2 = 2 \cdot \frac{9}{16} = \frac{9}{8} > 1,$$
so this construction fails.
I cannot figure out how to prove rigorously (and generally) that no such construction would work because it's possible that there is some kind of restrictions, maybe to $\left(-\frac{1}{2}, \frac{1}{2}\right)$, for example, that I am missing. Any help on this would be appreciated.
AI: Suppose $S = A \times B$ with $A,B \subset \mathbb{R}$.
Since $(t,0)^T,(0,t)^T \in S$ for any $t\in (-1,1)$ we must have $t \in A$ and $t \in B$ and so we must have $(t,t)^T \in S$, and we have $t^2+t^2 = 2t^2$.
We can choose any such $t$ with $\frac{1}{\sqrt{2}} \le |t| < 1$ (e.g. $t = \frac34$) to get a contradiction.
|
H: Maximum and Supremum
Please, I want to know
$\max_{x \in \{0,\frac{3}{4},1\}} (x-1/2)^2$, $\max_{x \in [0,1)} \min\{x,1/2\}$ and their $\arg \max$.
Thanks!
AI: For the first part
$\max_{x\in\{0,\frac34, 1\}}(x-\frac12)^2=\max\{\frac14, \frac1{16}, \frac14\}=\frac 14$ and the argmax is $\{0,1\}$.
Note you could have easily predicted the max and the argmax by observing that the function to be maximised represents a parabola with vertex at $(\frac 12,0)$ and axis parallel to the $y$-axis (the parabolic function is decreasing to the left of the vertex and increasing to its right).
Now for the second part
Take $A=[0,\frac12]$ and $B=(\frac 12,1)$
Then for $x \in A$, $\min\{x,\frac12\}=x$, and for $x\in B$, $\min \{x,\frac12\}=\frac12$.
So $\max_{x\in[0,1)}\min\{x,\tfrac12\}=\max\big(\{\min\{x,\tfrac12\} : x\in A\}\cup\{\min\{x,\tfrac12\} : x\in B\}\big)$
$=\max\big(A\cup\{\tfrac12\}\big)=\frac12$
Clearly the argmax is $B\cup \{\frac12\}=[\frac12,1)$
Hope this helps!
|
H: An artinian ring is a product of local rings
I am rather confused with the last line of the argument used in 00JA, Stacks Project.
Lemma 00JA. Any ring with finitely many maximal ideals and
locally nilpotent Jacobson radical is the product of its localizations
at its maximal ideals. Also, all primes are maximal.
Proof. Let $R$ be a ring with finitely many maximal ideals
$\mathfrak m_1, \ldots, \mathfrak m_n$.
Let $I = \bigcap_{i = 1}^n \mathfrak m_i$
be the Jacobson radical of $R$. Assume $I$ is locally nilpotent.
Let $\mathfrak p$ be a prime ideal of $R$.
Since every prime contains every nilpotent
element of $R$ we see
$ \mathfrak p \supset \mathfrak m_1 \cap \ldots \cap \mathfrak m_n$.
Since $\mathfrak m_1 \cap \ldots \cap \mathfrak m_n \supset
\mathfrak m_1 \ldots \mathfrak m_n$
we conclude $\mathfrak p \supset \mathfrak m_1 \ldots \mathfrak m_n$.
Hence $\mathfrak p \supset \mathfrak m_i$ for some $i$, and so
$\mathfrak p = \mathfrak m_i$. By the Chinese remainder theorem
(Lemma 00DT)
we have $R/I \cong \bigoplus R/\mathfrak m_i$
which is a product of fields.
Hence by Lemma 00J9
there are idempotents $e_i$, $i = 1, \ldots, n$
with $e_i \bmod \mathfrak m_j = \delta_{ij}$.
Hence $R = \prod Re_i$, and each $Re_i$ is a
ring with exactly one maximal ideal. $\square$
How is the "Hence $R=\prod Re_i$..." deduction made?
Why does $Re_i$ have exactly one maximal ideal?
Relating back to the statement, how is this a localization at the maximal ideals?
AI: An important thing that is kind of suppressed is that the idempotents form a complete orthogonal family of idempotents, i.e. they satisfy
$$e_ie_j=\delta_{ij}e_j \;\;\;\text{ and }\;\;\;e_1+e_2+\dots+e_n=1.$$
To see the orthogonality, note that for $i \neq j$ we in fact have $e_ie_j \in \mathfrak{m}_1 \cap \mathfrak{m}_2 \cap \dots \cap \mathfrak{m}_n=I$. Since $I$ is locally nilpotent, $(e_ie_j)^N=0$ for some $N$. But $(e_ie_j)^N=e_ie_j$ as these are idempotent. So $e_ie_j=0$.
To see the completeness, note that $e:=e_1+e_2+\dots+e_n$ is again an idempotent (thanks to orthogonality above) and we have $e\equiv 1 \pmod{\mathfrak{m}_i}$ for all $i$. So $1-e$ is again an idempotent (one can check) and, by the same argument as above, it is contained in $I$, hence nilpotent, hence $1-e=0$. That is, $e_1+e_2+\dots+e_n=1$, as desired.
Now,
Any element $x \in R$ is uniquely written as a sum of elements from $Re_i$'s, $x=x(e_1+e_2+\dots +e_n)=xe_1+xe_2+\dots+xe_n$. The fact that $e_i$ is an idempotent means that $Re_i$ gets a ring structure whose unit element is $e_i$. The correspondence $x \leftrightsquigarrow (xe_1, \dots, xe_n)$ then defines the isomorphism $R \simeq \prod_i Re_i$. (Checking that this correspondence is a ring homomorphism will involve the orthogonality.)
Each of the rings $Re_j$ will have at least one maximal ideal, pick one and call it $M_j$. Then clearly $$\mathfrak{M}_j:= Re_1 \times Re_2 \times \dots \times Re_{j-1} \times M_j \times Re_{j+1} \times \dots \times Re_n$$
is a maximal ideal of $\prod_iRe_i$, hence it corresponds to a unique maximal ideal of $R$. But there are only $n$ of those, so by the pigeonhole principle, each $Re_j$ has only the one maximal ideal $M_j$.
To see that $R$ is the product of its localizations, it is enough to see that the same is true for $\prod_iRe_i$. Well, in this case it is easy to see that the localization of $\prod_iRe_i$ at $\mathfrak{M}_j$ is naturally isomorphic to $Re_j$ (meaning that the localization map is just the projection onto the $j$th component). So the claim is true for $\prod_iRe_i,$ hence for $R$ as well.
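A concrete instance may help: $\mathbb{Z}/12\mathbb{Z}$ has the two maximal ideals $(2)$ and $(3)$, and its Jacobson radical $(6)$ is nilpotent. Here is a minimal Python sketch exhibiting the idempotents and the resulting decomposition $\mathbb{Z}/12 \cong \mathbb{Z}/3 \times \mathbb{Z}/4$:

```python
m = 12
print([e for e in range(m) if (e * e) % m == e])  # idempotents: [0, 1, 4, 9]

e1, e2 = 4, 9                          # the nontrivial pair
print((e1 + e2) % m, (e1 * e2) % m)    # 1 0: complete and orthogonal

# x <-> (x*e1, x*e2) realizes Z/12 = (Z/12)e1 x (Z/12)e2
for x in range(5):                     # first few elements
    print(x, ((x * e1) % m, (x * e2) % m))
```

Here $(\mathbb{Z}/12)e_1=\{0,4,8\}$ is a ring with unit $e_1=4$ (isomorphic to $\mathbb{Z}/3$), and $(\mathbb{Z}/12)e_2=\{0,3,6,9\}$ has unit $e_2=9$ (isomorphic to $\mathbb{Z}/4$).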
|
H: For which sequences $\{a_n\}$ is $\sum_{n=1}^\infty a_n$ conditionally convergent while $\sum_{n=1}^\infty (-1)^n a_n$ converges?
For which sequences $\{a_n\}$ is $\sum_{n=1}^\infty a_n$ conditionally convergent while $\sum_{n=1}^\infty (-1)^n a_n$ also converges?
I tried $a_n= \frac{\sin(n)}{n}$ and it seems to work, but I would like to know if there are others.
AI: Take your favorite conditionally convergent series $\sum_{n=1}^\infty b_n$, e.g. $b_n = (-1)^{n+1} \frac{1}{n}$. Then define $a_{2n} = b_n$ and $a_{2n - 1} = 0$. Then,
$$ \sum_{k=1}^\infty a_k = \sum_{n=1}^\infty b_n \in\mathbb R$$
and
$$ \sum_{k=1}^\infty |a_k| = \sum_{n=1}^\infty |b_n| = \infty $$
and
$$ \sum_{k=1}^\infty (-1)^k a_k = \sum_{n=1}^\infty b_n \in\mathbb R.$$
|
H: Do Biconditionals Have to be Logically Related?
I'm studying real analysis from Terence Tao's book, Analysis 1, and was familiarizing myself with the mathematical logic that Tao explains in the appendix. In it, he covers the biconditional, or "if and only if" statements. From what I understand, a biconditional is only true when both sides are held to be true, or "logically equivalent". The examples he gives of a biconditional which evaluates to true and one that evaluates to false were:
if $x$ is a real number, then the statement
“$x = 3$ if and only if $2x = 6$” is true: this means that whenever $x = 3$
is true, then $2x = 6$ is true, and whenever $2x = 6$ is true, then $x = 3$ is
true. On the other hand, the statement “$x = 3$ if and only if $x^2 = 9$”
is false; while it is true that whenever $x = 3$ is true, $x^2 = 9$ is also
true, it is not the case that whenever $x^2 = 9$ is true, that $x = 3$ is also
automatically true
From what I see, these biconditional statements contain statements that seem to be logically related, or logically relevant to each other: by being given that $x = 3$, we are then able to assess the truth of the statement $2x = 6$ for example.
My question is: is it necessary for the statements to be logically relevant to each other? For example, if I have the statement "It is sunny today if and only if it is a Tuesday", and I were given that the statements "it is sunny" and "it is a Tuesday" were both true, would the biconditional statement hold true, despite the fact that the truth of each statement is determined independently of the other, with no logical correlation? Is it necessary in a biconditional that each statement be logically related to the other, where each statement holds relevant information which is then utilized to assess the truth of the other statement?
AI: The biconditional ($\iff$) is a logical connective that is true when both operands are true, or both are false $(0)$, it is false otherwise.
$$(\color{red}{\lnot A}\land\color{red}{\lnot B})\lor(\color{blue}{A}\land \color{blue}{B})\tag{0}$$
The truth table of the biconditional is the following:
\begin{array}{c|cc}
&\lnot A&A\\\hline
\lnot B&\color{red}{1}&0\\
B&0&\color{blue}{1}
\end{array}
Looking at the truth table alone, it is clear that the biconditional is true whenever both its constituents are true. So given that it is Tuesday, and it is sunny, the statement "It is sunny today if and only if it is a Tuesday", is true by virtue of the biconditionals definition $(0)$.
So to answer your question, it is not necessary for the statements to be relevant to another. A subtle point is that the two statements "It is Tuesday" and "It is sunny" are correlated, because they both have the same truth value (today at 3:37 am UTC in my (and possibly your) location at least).
|
H: Does $\frac{\sum_{k=1}^{n-1} k! }{n!}$ converge?
I want to know if $$\frac{\sum_{k=1}^{n-1} k! }{n!}$$ converges as $n \to \infty$. I know that the sequence is bounded by one since $\sum_{k=1}^{n-1} k! \leq (n-1)(n-1)!$.
Any help is appreciated.
AI: If you take your inequality one step further,
\begin{align*}
\sum_{k=1}^{n-1}k!=\sum_{k=1}^{n-2}k!+(n-1)!\leq (n-2)(n-2)!+(n-1)!
\end{align*}
you can show your sequence is bounded above by $$\frac{(n-2)(n-2)!+(n-1)!}{n!}=\frac{n-2}{n(n-1)}+\frac{1}{n}\leq \frac{2}{n}\rightarrow 0$$ as $n\rightarrow \infty$, and since the terms are nonnegative the sequence converges to $0$.
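A quick numeric check of this bound (standard library only):

```python
from math import factorial

for n in [5, 10, 20, 50]:
    ratio = sum(factorial(k) for k in range(1, n)) / factorial(n)
    print(f"n = {n:3d}:  sum/n! = {ratio:.3e},  bound 2/n = {2/n:.3e}")
```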
|
H: Norm of sesquilinear form bounded by norm of associated quadratic form
I have the following question from Teschl's "Mathematical Methods in Quantum Mechanics":
A sesquilinear form is called bounded if $$\|s\|=\sup_{\|f\|=\|g\|=1}|s(f,g)|$$ is finite. Similarly, the associated quadratic form $q$ is bounded if $$\|q\|=\sup_{\|f\|=1}|q(f)|$$ is finite. Show $$\|q\|\le\|s\|\le2\|q\|.$$
There is a hint that says to use the parallelogram law and the polarization identity. Applying the polarization identity to $s$, we have $$s(f,g)=\frac14(q(f+g)-q(f-g)+iq(f-ig)-iq(f+ig)).$$ Now, the parallelogram law gives us $$q(f+g)=2q(f)+2q(g)-q(f-g)$$ and
$$q(f+ig)=2q(f)+2q(ig)-q(f-ig)=2q(f)+2q(g)-q(f-ig).$$ So
$$|s(f,g)|=|\frac14(2q(f)+2q(g)-2q(f-g)+2iq(f-ig)-2iq(f)-2iq(g))|$$
$$=|\frac12(q(f)+q(g)-q(f-g)+iq(f-ig)-iq(f)-iq(g))|.$$
If the $q(f-g)$ and $iq(f-ig)$ terms were absent, we could use the triangle inequality and then take the supremum to obtain the second inequality. However, $$-q(f-g)=-q(f)-q(g)+s(f,g)+s(g,f)$$ and
$$iq(f-ig)=iq(f)+iq(g)+s(f,g)-s(g,f)$$ do not seem to cancel. Any help with this problem would be appreciated.
AI: The first inequality follows straightforwardly from the fact that $s(f, f) = q(f)$. To prove the second inequality, we can note that if $\|q\| = A$, then
$$|q(f)| \leq n^2 A$$
for all $\|f\| = n$. This follows from the property that $q$ scales quadratically when its argument is scaled, i.e. $q(af) = |a|^2 q(f)$ for all $a \in \mathbb{C}$.
Using the first equation of your work
$$s(f,g)=\frac14(q(f+g)-q(f-g)+iq(f-ig)-iq(f+ig))$$
we can apply the triangle inequality, which tells us that
$$\|s(f, g)\| \leq \frac14 \left[ \|q(f + g)\| + \|q(f - g)\| + \|q(f - ig)\| + \|q(f + ig)\| \right]$$
Let us assume that $\|f\| = \|g\| = 1$. Then it follows that all of $f + g$, $f - g$, $f - ig$, and $f + ig$ have norm at most $\sqrt{2}$ (why?). From here, we can use what we discussed previously to conclude that each of the four terms in the parentheses above are at most $2 \|q\|$, from which the second inequality follows. $\square$
|
H: Is it true that minimizing the square of the expectation is the same as minimizing the expectation of the square?
Is it true that minimizing the square of the expectation is the same and minimizing the expectation of the square?
Consider a random variable $X_c$ depending on some parameter $c$. Do we have that $$\arg\min_c E[X_c^2] = \arg\min_c (E[X_c])^2$$
This seems very natural.
AI: No: for $c\ge1$ say, let $X_c$ be uniformly distributed in the interval $[-c,c]$. Then $E[X_c^2]$ is positive (in fact $\frac{c^2}{3}$) while $E[X_c]^2=0$.
|
H: Birthday Problem Proof?
I was looking at the Birthday Problem (the probability that at least 2 people in a group of n people will share a birthday) and I came up with a different solution and was wondering if it was valid as well. Could the probability be calculated with this formula:
$$1-(364/365)^{n(n-1)/2}$$
The numbers don't seem to perfectly match up with the normal proof, but I don't see the flaw in my logic, so if someone could clear it up, that would be much appreciated.
To find the formula, I found the probability that one person didn't share a birthday first, which is:
$(364/365)^{n-1}$ for the first person, $(364/365)^{n-2}$ for the next, and so on. The probability that none of them do would be the product, and considering exponent laws, would be $(364/365)^{n(n-1)/2}$. We subtract that from $1$ to find the converse of our statement.
AI: The problem is that the probability of the conjunction of two events is the product of individual probabilities only if the events are independent. And more generally, for your line of logic to be correct, all events you are considering must be mutually independent from one another. That is, the outcome of any event is independent, regardless of the outcomes of any of the other events.
Here, what you are doing is calculating the probability of event $A_{ij}$, where person $i$ does not share a birthday with person $j$, for every pair of $i$ and $j$. But the problem is that the $A_{ij}$ are not mutually independent. For example, given $A_{ij}$ and $A_{jk}$ both to be false, $A_{ik}$ is most certainly false as well. In plain English, if I told you that $i$ and $j$ in fact did share the same birthday, and so did $j$ and $k$, then the probability of $i$ and $k$ sharing the same birthday is no longer $1/365$, but in fact $1$. This suffices to show that the events are not mutually independent, so you cannot justify the step where you claim the probability of none of the pairs sharing birthdays is a product of probabilities. And for good reason as well, because unfortunately the formula you gave is not the correct one.
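To see the discrepancy concretely, here is a short Python sketch (an added illustration; it assumes 365 equally likely birthdays) comparing the pairwise formula, the exact complement count, and a simulation for $n=23$:

    import random

    n = 23
    pairwise = 1 - (364/365) ** (n*(n-1)/2)   # treats all pairs as independent

    exact = 1.0
    for k in range(n):
        exact *= (365 - k) / 365
    exact = 1 - exact                          # the standard birthday answer

    trials = 200_000
    hits = sum(len(set(random.randrange(365) for _ in range(n))) < n
               for _ in range(trials))

    print(pairwise)        # about 0.5005
    print(exact)           # about 0.5073
    print(hits / trials)   # agrees with exact, not with pairwise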
|
H: Proof of convergence of series (progression)
First, I'm a beginner in this site. In addition, my mother tongue is not English. Thus, I'm sorry if the sentences I write are difficult to understand.
I'll move on to the main topic.
I can't solve this problem.
Precondition (given):
There exists a progression $\{x_n\}$ such that all terms in the progression are $0$ or more ($x_n \geq 0$).
Moreover, the infinite series $\sum_{n=1}^\infty n^2 (x_n)^2$ converges.
Problem:
If the above precondition holds, prove that $\sum_{n=1}^\infty x_n$ converges.
I think that I have to determine the condition for the fact that the infinite series $\sum_{n=1}^\infty n^2 (x_n)^2$ converges, and have to use the condition in order to prove that $\sum_{n=1}^\infty x_n$ converges.
However, I can't solve this problem.
If you find how to solve this problem, I want you to teach it.
AI: The series $\sum_{n=1}^{\infty}1/n^2$ converges to $K\in \Bbb R^+$. (We do not need to know what $K$ is.)
Let $x_n=y_n/n.$ By the Cauchy-Schwarz Inequality, if $M\in \Bbb Z^+$ then $$\sum_{n=1}^Mx_n=\sum_{n=1}^M(1/n)y_n\le$$ $$\le \left(\sum_{n=1}^M1/n^2\right)^{1/2}\cdot \left(\sum_{n=1}^My_n^2\right)^{1/2}\le$$ $$\le \sqrt K \cdot \left(\sum_{n=1}^My_n^2\right)^{1/2}=$$ $$=\sqrt K\cdot
\left(\sum_{n=1}^Mn^2x_n^2\right)^{1/2}.$$
The Cauchy-Schwarz Inequality: $(\,\sum_{n=1}^Mw_n^2 \,)\cdot (\,\sum_{n=1}^My_n^2\,)-(\sum_{n=1}^Mw_ny_n)^2=$ $=\sum_{1\le i<j\le M}(w_iy_j-w_jy_i)^2 \ge 0.$
|
H: Explicit bijection between $\mathbb{N}$ and $\mathbb{N}^2$
I am having some trouble writing a bijection between $\mathbb{N} = \{0, 1, 2, 3, \ldots\}$ and $\mathbb{N}^2$, particularly using the definition that $\mathbb{N}$ includes $0$. (Otherwise, it is straightforward.) Here is what I have so far.
Consider an arbitrary positive integer, $z$. I claim that we can write $z$ as the the unique product of some power of $2$ and an odd number, that is, we may write $z = 2^a m^b$ for some odd $m$ (since $m^b$ is odd for any odd $m$ and integral power $b$).
Proof of claim. If $z$ is odd, we may write
$$z = 1 \cdot z = 2^0 z,$$
and the result is proved. If $z$ is even, $z$ is divisible by $2$. Let $k$ be the highest power of $2$ that divides evenly into $z$. Then,
$$z = 2^k \cdot r,$$
for some integer $r$. If $r$ is even, then write $r = 2j$ for some integer $j$, but then
$$z = 2^k \cdot 2j = 2^{k+1} j,$$
meaning that there exists a higher power of $2$ that divides evenly into $z$, a contradiction. Hence, $r$ must be odd, and the construction is proved.
Hence, for any positive $z \in \mathbb{N}$, we may write
$$z = 2^a m^b$$
for integers $a$ and $b$. We may map $z$ to the pair $(a,b)$ when $z$ is positive. When $z = 0$, map $z$ to $(0,0)$.
The problem is I cannot figure out how to write this bijection as an explicit formula from $\mathbb{N}$ to $\mathbb{N}^2$. Is there a way to do this?
AI: The decomposition $2^a m^b$ is not unique for all integers, but it would be without the $b$. For example, the number $18$ could be written as $2^1 \times 9^1$ or $2^1 \times 3^2$.
Instead, just use the decomposition $2^a(2b + 1)$, where $a, b \in \Bbb{N}$ (including $0$). To prove this, we basically use the proof you presented. Dividing by the highest power of $2$ of a positive integer must yield an odd number, which means that every positive integer can be expressed in this form. This expression is unique, since the $a$ in $2^a(2b + 1)$ can be recovered by finding the highest power of $2$ that divides the given number, and the $b$ follows instantly from this.
Every positive integer can be expressed uniquely in this form, so every natural number can be expressed in the form $2^a(2b + 1) - 1$. We can therefore write an explicit bijection:
$$f : \Bbb{N}^2 \to \Bbb{N} : (a, b) \mapsto 2^a(2b + 1) - 1.$$
If you want a bijection from $\Bbb{N}$ to $\Bbb{N}^2$, consider $f^{-1}$. It's actually simplest to express as a procedure: to compute $f^{-1}(n)$, let $2^a$ be the highest power of $2$ that divides $n+1$, and let $b = \frac{2^{-a}(n+1)-1}{2}$. If you want a symbolic formula for whatever reason, you could try:
$$f^{-1}(n) = \left(\log_2(\gcd(n + 1,2^{n+1})), \frac{1}{2}\left(\frac{n + 1}{\gcd(n + 1, 2^{n + 1})}-1\right)\right).$$
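If it helps, here is a quick Python sketch of $f$ and the procedural $f^{-1}$ (an added illustration), with a round-trip check on an initial segment of $\Bbb N$:

    def f(a, b):
        # (a, b) in N^2  ->  2^a * (2b + 1) - 1 in N
        return 2**a * (2*b + 1) - 1

    def f_inv(n):
        # a = exponent of 2 in n + 1; b is recovered from the odd part
        m = n + 1
        a = 0
        while m % 2 == 0:
            m //= 2
            a += 1
        return a, (m - 1) // 2

    assert all(f(*f_inv(n)) == n for n in range(100_000))
    assert all(f_inv(f(a, b)) == (a, b) for a in range(10) for b in range(10))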
|
H: Why is $\frac{a-ar^n}{1-r}$ always an integer when $a$, $r$ (except $1$), and (positive) $n$ are integers?
If $a_1$ and $r$ are integers, explain why the value of $\dfrac{a_1-a_1r^n}{1-r}$ must also be an integer.
Does anyone have any ideas to rigorously explain/prove it? I can't really think of anything. (This excludes the case $r=1$, of course.)
EDIT: $n$ is any positive integer.
AI: If $n$ is a positive integer, then the expression $r^n - 1$ can be factored:
$r^n - 1 = (r - 1)(r^{n-1} + r^{n-2} + ... + r^2 + r + 1)$
Therefore: $\frac{a_1-a_1r^n}{1-r} = \frac{a_1r^n-a_1}{r-1}=a_1(r^{n-1} + r^{n-2} + ... + r^2 + r + 1)$
|
H: Find the flaw in this proof that $1$ is the greatest natural number
I think the flaw is in assuming that $N^{2} \in \mathbb{N}$, but I don't know.
AI: The proof is logically correct, and leads to a contradiction (we know that $1<2$). Thus it is a valid proof that there is no greatest natural number.
|
H: Lambda Calculus: What is the difference between a $\lambda$ term with and w/o parenthesis?
Eg. what is the difference between $(\lambda y.M)[x:=N]$ and $\lambda y.M[x:=N]$?
AI: The substitution applies to the whole expression in the first term.
The substitution only applies to M in the second term.
In this case the two are equal (assuming $y \not\equiv x$ and $y$ does not occur free in $N$, so no variable capture occurs). If you had $(\lambda x.M)[x:=N]$ they would not be equal any longer, as the substitution would end rather than recursively work inside the body of the lambda.
|
H: How can I evaluate this complex integral equation on Wolfram?
I need to evaluate the complex line integrals in the following equation:
$$g(z)=\frac{\int_0^z\zeta^{-5/6}(\zeta-1)^{-1/2}d\zeta}{\int_0^1\zeta^{-5/6}(\zeta-1)^{-1/2}d\zeta}.$$
Can someone advise me on how to evaluate this expression on Wolfram? For those interested in the background, the source of the above equation is
"Conformal mapping between two right-angled triangles". If not possible on Wolfram, is there an alternate means to evaluate this?
AI: In this case, Mathematica seems to have no trouble performing the integral directly.
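If Mathematica is unavailable, the integrals can also be evaluated numerically, for example with Python's mpmath library. The sketch below assumes principal branches for the fractional powers, which may need adjusting to match the branch cuts intended in the cited paper:

    import mpmath as mp

    mp.mp.dps = 25  # working precision

    # integrand zeta^(-5/6) * (zeta - 1)^(-1/2), principal branches
    f = lambda t: t**(mp.mpf(-5)/6) * (t - 1)**(mp.mpf(-1)/2)

    den = mp.quad(f, [0, 1])   # both endpoint singularities are integrable

    def g(z):
        return mp.quad(f, [0, z]) / den

    print(g(0.5))
    print(g(1))    # equals 1 by construction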
|
H: How to compute $P(X>Y\mid Y<1)$ given pdf of $(X,Y)$?
I have the following function
$$f(x,y)=\begin{cases} e^{-(x+y)} &, x,y > 0 \\ 0 &, \text{otherwise} \end{cases}$$
I want to compute the following conditional:
$$P(X>Y\mid Y<1)$$
I'm trying to solve this using the following :
$$f(x\mid y) = \frac{f(x,y)}{f_2(y)}$$
$$f_2(y) = \int_0^{\infty} f(x,y) \,dx = e^{-y}\int_0^{\infty}e^{-x}\,dx = e^{-y}$$
So the conditional probability should be like:
$$\int_y^{\infty}\int_0^1\frac{e^{-(x+y)}}{e^{-y}}dy\,dx$$
I would like to know if this is a good approach.
AI: $X>Y, Y<1$ is equivalent to $Y <X\wedge 1$. Hence the required probability is $$\frac {\int_0^{1} \int_0^{x} e^{-(x+y)}\,dy\,dx +\int_1^{\infty} \int_0^{1} e^{-(x+y)}\,dy\,dx } { \int_0^{\infty} \int_0^{1} e^{-(x+y)}\,dy\,dx}.$$ I will let you carry out the integrations.
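If you want to check your integrations afterwards, here is a small sympy sketch (an added verification; the ratio simplifies to $(1+e^{-1})/2 \approx 0.684$):

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    f = sp.exp(-(x + y))

    num = (sp.integrate(f, (y, 0, x), (x, 0, 1))
           + sp.integrate(f, (y, 0, 1), (x, 1, sp.oo)))
    den = sp.integrate(f, (y, 0, 1), (x, 0, sp.oo))

    print(sp.simplify(num / den))   # equals (1 + exp(-1))/2
    print(sp.N(num / den))          # about 0.6839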
|
H: In how many ways can you form a committee of three from a set of $10$ men and $8$ women, such that there is at least one woman in the committee?
My textbook employs a brute force method: add the number of committees that could be formed with one woman, two women, and three women in them. Then, the total number of such committees will be:
$$\left(\begin{smallmatrix} 8 \\ 1 \end{smallmatrix}\right)\cdot\left(\begin{smallmatrix} 10 \\ 2 \end{smallmatrix}\right) + \left(\begin{smallmatrix} 8 \\ 2 \end{smallmatrix}\right)\cdot\left(\begin{smallmatrix} 10 \\ 1 \end{smallmatrix}\right) + \left(\begin{smallmatrix} 8 \\ 3 \end{smallmatrix}\right)\cdot\left(\begin{smallmatrix} 10 \\ 0 \end{smallmatrix}\right) = 360 + 280 + 56 = 696$$
(The number of ways of choosing one woman from eight times that of choosing two men from ten + ...)
I had solved the question with this reasoning: You can choose one woman from eight as a member of the committee, and for the remaining two positions, you could choose either a man or a woman. This is equivalent to choosing two people from $(10 + 8) - 1 = 17$. Thus, the number of possible ways to form such a committee is:
$$\left(\begin{smallmatrix} 8 \\ 1 \end{smallmatrix}\right)\cdot\left(\begin{smallmatrix} 17 \\ 2 \end{smallmatrix}\right) = 8 \cdot 136 = 1088$$
What mistake have I made?
AI: Suppose $w_1$ was the first woman chosen among possible $8$, then say $m_1,w_3$ were chosen. So your committee is $w_1,m_1,w_3$.
In your way of counting: what if $w_3$ is chosen as the first woman, followed by $m_1, w_1$? Then too the committee is the same as before ($w_3,m_1,w_1$), but you are counting this as a different committee.
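A brute-force check confirms the book's count and also illustrates the quicker complement route $\binom{18}{3}-\binom{10}{3}$ (a sketch added for illustration):

    from itertools import combinations
    from math import comb

    people = ['M'] * 10 + ['W'] * 8
    count = sum(1 for c in combinations(range(18), 3)
                if any(people[i] == 'W' for i in c))
    print(count)                        # 696

    print(comb(18, 3) - comb(10, 3))    # 696: all committees minus the all-male ones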
|
H: Is it reasonable to say that each random variable has one and only one variance?
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean.
Is it reasonable to say that each random variable has one and only one variance?
AI: No. Variance is defined only when the relevant expectation is finite, so there are random variables with no variance (a Cauchy-distributed random variable, for example).
|
H: Gaussian distribution with mean zero and variance $\sigma^2$
For a Gaussian distribution with mean zero and variance $\sigma^2$, let $X \sim N(0, \sigma^2)$. Is it true that
$$P(X>0)=\frac{1}{2}?$$
AI: For any random variable with a continuous symmetric distribution this is true. $N(0,\sigma^{2})$ has these properties.
[ $P(X=0)=0$ and $P(X>0)=P(X<0)$. Hence $P(X>0)=P(X<0)=\frac 1 2$].
|
H: Prove that distance between foot of perpendiculars from an arbitrary point on circle to two given diameters is constant.
https://www.geogebra.org/geometry/xaj5mjuz
ORIGINAL QUESTION : You have given a circle with two diameters drawn. Select any point $P$ on the circle and drop perpendiculars to given diameters. The foot of perpendiculars are $X$ and $Y$. Prove that $XY$ is constant
The only things that remain invariant are the two given diameters. After some time playing with GeoGebra I noticed that we can consider the extreme case, that is, when $P$ coincides with $A$ or $B$ or $C$ or $D$.
Now we can rephrase the original question as
You have given a circle with two diameters drawn. Select any point $P$ on the circle and drop perpendiculars to the given diameters. The feet of the perpendiculars are $X$ and $Y$. Let $Q$ be the foot of the perpendicular from point $A$ to $CD$. Prove that $XY$ is equal to $AQ$.
Pure angle chasing does not solve the problem (I might have missed something). Normally, olympiad geometry questions which are easy to state are generally more difficult. I think some kind of ingenious auxiliary construction is required. If possible, please provide your intuition and thought process along with the solution.
AI: The quadrilateral $PYOX$ is obviously cyclic. It follows
$\angle PYX=\angle POX$. Continue the lines $(PY)$ and $(XO)$ till intersection at point $Z$. We have
$$
\triangle ZYX\sim \triangle ZOP\implies \frac{XY}{OP}=\frac{ZY}{ZO}\implies XY=R\sin\theta,
$$
where $R$ is the circle radius, and $\theta$ is the angle between the diameters.
|
H: Questions about parametric equations
Consider the parametric equations: $$x=t^3-3t, \; \; y=t^2+t+1.$$
What is the lowest point on this parametric curve?
For what values of $t$ does the curve move left, move right, move up and move down?
When is the curve concave up?
Find the area contained inside the loop of this curve if the curve intersects itself at the point $(-2,3)$.
Well for 1. I get $(2,1)$ after calculating $\frac{dy}{dx}=0$ for $t$.
I’m not sure on 2.
For 3. we just find $t$ such that $\frac{d^2y}{dx^2}>0$.
For 4. I’m not sure on this one too.
Thanks for the help.
AI: $$\dot x=3t^2-3$$ is negative for $-1<t<1$, meaning that the curve is traversed from right to left in this range, and conversely.
$$\dot y=2t+1$$ is negative when $t<-\dfrac12$ and the curve is traversed from top to bottom in this range, and conversely. The lowest point is reached at $t=-\dfrac12$.
For the final question, use a curvilinear integral,
$$A=\oint x\,dy=\int x\dot y\,dt.$$
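Carrying this out (an added sketch with sympy): the self-intersection at $(-2,3)$ occurs at the parameter values $t=-2$ and $t=1$, so the loop is traced for $t\in[-2,1]$, and the enclosed area comes out as $81/20$:

    import sympy as sp

    t = sp.symbols('t')
    x = t**3 - 3*t
    y = t**2 + t + 1

    # parameter values where the curve passes through (-2, 3)
    print(sp.solve([x + 2, y - 3], t))   # expect t = -2 and t = 1

    A = sp.integrate(x * sp.diff(y, t), (t, -2, 1))
    print(A, abs(A))                     # -81/20, so the enclosed area is 81/20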
|
H: $T:\Bbb{R}^2\rightarrow\Bbb{R}^2$ has 2 distinct eigenvalues. Showing that $v$ or $T(v)− \lambda_1v$ are eigenvectors of $T$
$T:\Bbb{R}^2\rightarrow\Bbb{R}^2$ which is diagonalizable with 2 distinct eigenvalues. Showing that either $v$ is an eigenvector for $\lambda_1$ or else $T(v)− \lambda_1v$ is an eigenvector for $\lambda_2$.
I tried to use the matrix representation of $T$ and the Cayley-Hamilton theorem, but I can't reach the conclusion. Is there a way to prove it without the matrix representation?
I would prefer to get guidance rather than a full solution.
AI: I will assume that $v\ne0$; otherwise, the statement is false.
Let $v_1$ be an eigenvector corresponding to the eigenvalue $\lambda_1$ and let $v_2$ be an eigenvector corresponding to the eigenvalue $\lambda_2$. If $v\in\Bbb R^2$, then $v$ can be written as $\alpha_1v_1+\alpha_2v_2$. There are two possibilities now:
$\alpha_2=0$: then $v=\alpha_1v_1$ and therefore it is an eigenvector corresponding to the eigenvalue $\lambda_1$.
$\alpha_2\ne0$: then $T(v)-\lambda_1v=\alpha_2(\lambda_2-\lambda_1)v_2$ and therefore $T(v)-\lambda_1v$ is an eigenvector corresponding to the eigenvalue $\lambda_2$.
|
H: n vertex graph without isolated vertices - maximum vertex degree
Provided there is a $n$-vertex graph without isolated vertices which is disconnected, prove that the maximum vertex degree does not exceed $n-3$
AI: The graph has at least 2 connected components, which have (since there is no isolated vertex) at least two vertices each. Hence the maximal size of a connected component is given by $n-2$. The maximal degree of a vertex cannot exceed one minus the size of the connected component it belongs to, which in this case is at most $n-3$.
|
H: Majorisation inequality/upper bound
I saw the following relation and now I'm trying to prove it
$$\sum_{i=1}^l a^{\downarrow}_i \geq \sum_i \{a_i | a_i \geq 1/l \} \, ,$$
but I'm stuck. Here $a^{\downarrow}_i$ is an element, in non-increasing order, of a probability vector $\textbf{a}$. For example, considering the probability vector $\textbf{p} = (1/2, 3/8, 1/8)$ and choosing $l = 2$, one has
$$ \sum_{i=1}^2 p^{\downarrow}_i = \frac{7}{8} \, ,$$
while the right-hand side yields
$$ \sum_i \{p_i | p_i \geq 1/2 \} = \frac{1}{2} \, ,$$
then easily the inequality is verified. Could you please give me a hint or recommend something to read that will help me to prove it?
Thank you very much,
Alex.
AI: This is easily proved as follows:
In the new array $a^{\downarrow}$, which is in non-increasing order, at most $l$ of the elements can be $\geq 1/l$; if more than $l$ elements were $\geq 1/l$, their sum would exceed $1$. Therefore, the right hand side of the inequality must be
\begin{align}
&\sum_i \{ a_i | a_i \geq 1/l \} \\
=& \sum_i \{ a_i^{\downarrow} | a_i^{\downarrow} \geq 1/l \} \\
=& \sum_{i=1}^m a_i^{\downarrow},
\end{align}
where the $m$ is smaller than or equal to $l$ by the above analysis. Therefore, it must be that
\begin{align}
&\sum_i \{ a_i | a_i \geq 1/l \} \\
=& \sum_{i=1}^m a_i^{\downarrow} \\
\leq & \sum_{i=1}^l a_i^{\downarrow}.
\end{align}
Comment: This problem actually has nothing to do with probability, except that it used the property that the elements of a probability vector sum to one.
|
H: Combinatorics counting problem- double counting?
How many different numbers can be formed by the product of two or more of the numbers 3,4,4,5,5,6,7,7,7?
My answer is 138. However, the book says 134.
Which answer is correct? My working is:
(2)(3)(3)(2)(4)-6=138
However, the book subtracts 10 rather than 6.
Please help.
AI: I'm sorry, I don't have enough reputation to comment, but I think the book made a slight error: subtracting 10 rather than 6 from your product of 144 does give its stated answer of 134, yet the correct count is 138.
I wrote a Python script to test the actual number, and it turns out the answer is in fact 138. Something along these lines (a minimal sketch of such a script; math.prod needs Python 3.8+):
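    from itertools import combinations
    from math import prod

    nums = [3, 4, 4, 5, 5, 6, 7, 7, 7]

    products = set()
    for r in range(2, len(nums) + 1):      # products of two or more factors
        for combo in combinations(nums, r):
            products.add(prod(combo))

    print(len(products))                    # 138

Its sorted output is the following list of 138 distinct products: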
12, 15, 16, 18, 20, 21,
24, 25, 28, 30, 35, 42,
48, 49, 60, 72, 75, 80,
84, 90, 96, 100, 105, 112,
120, 126, 140, 147, 150, 168,
175, 196, 210, 240, 245, 288,
294, 300, 336, 343, 360, 400,
420, 450, 480, 504, 525, 560,
588, 600, 630, 672, 700, 735,
784, 840, 882, 980, 1029, 1050,
1176, 1200, 1225, 1372, 1440, 1470,
1680, 1715, 1800, 2016, 2058, 2100,
2352, 2400, 2520, 2800, 2940, 3150,
3360, 3528, 3675, 3920, 4116, 4200,
4410, 4704, 4900, 5145, 5488, 5880,
6174, 6860, 7200, 7350, 8232, 8400,
8575, 10080, 10290, 11760, 12600, 14112,
14700, 16464, 16800, 17640, 19600, 20580,
22050, 23520, 24696, 25725, 27440, 29400,
30870, 32928, 34300, 41160, 50400, 51450,
58800, 70560, 82320, 88200, 98784, 102900,
117600, 123480, 137200, 154350, 164640, 205800,
352800, 411600, 493920, 617400, 823200, 2469600
|
H: For $C$ open, bounded, and convex, is it true that $x+r C\subset x + 3rC$, $x\in \mathbb R^d, r>0$?
I have a question regarding transformations of convex sets. Given an open, bounded and convex set $C$, is it true that
$$
x+r C\subset x + 3rC \qquad x\in \mathbb R^d, \ r>0 \quad?
$$
The reason I am asking is that the set $(x - a, x + a)$ is contained in $(x - 3a, x+ 3a)$. Is this situation true more generally for convex sets $C$ in $\mathbb R^d $?
Further, if the claim in the title is true, are the conditions of openness and boundedness of $C$ necessary?
Thanks in advance!
AI: No. Take $d=1, C=(1,2), x=0$ and any $r>0$.
|
H: $|z+2|=|z|-2$; Represent on an Argand Diagram
Represent on an Argand Diagram the set given by the equation $|z+2|=|z|-2.$
My attempt:
Apparently the answer is $x\leq 0$ $(z = x + yi)$ and $y = 0$, based on the idea that $-x = \sqrt{(x^2 + y^2)}$, but I am struggling to derive this.
I originally assumed the answer was $y = 0, x\leq-2$, going from the idea that the distance of $z$ from $(-2,0)$ is the same as the distance from the origin minus $2.$ However, this is not the solution.
Any help would be much appreciated
AI: The solution given to you was wrong. Notice, for instance, if $x=y=0$, then $z=0$ but $z=0$ leads to $|0+2| =|0|-2 \implies 2 = -2$. Obviously, this is nonsense.
Indeed, your reasoning is correct. Through simple algebraic manipulation you can conclude $y=0$, and from there you can reduce the equation to $|x+2| = |x|-2$. Graphing $f(x) = |x+2| - |x| + 2$ shows that $f(x) = 0$ (i.e. the desired $x$ values) holds precisely when $x \le -2$.
|
H: Show if $f_x \rightarrow \eta \,\,\,$ and $g_x \rightarrow \zeta$ so $f_x+g_x \rightarrow \eta + \zeta$
Let $(f_x)$ and $(g_x)$ be two nets on a directed set $X$.
Show if $f_x \rightarrow \eta \,\,\,$ and $g_x \rightarrow \zeta$ so $f_x+g_x \rightarrow \eta + \zeta$
For $(f_x)$ holds:
$$\forall \epsilon > 0 \,\,\, \exists x_1 \in X\,\,\, \forall x\in X:x \succ x_1 \Longrightarrow |f_x-\eta|<\epsilon$$
And for $(g_x)$ holds:
$$\forall \epsilon > 0 \,\,\, \exists x_2 \in X\,\,\, \forall x\in X:x \succ x_2 \Longrightarrow |g_x-\zeta|<\epsilon$$
From the 3rd axiom of directed sets we know:
$$\forall x,y \in X \,\,\,\exists z\in X: z\succ x \,\,\wedge\,\, z\succ y$$
This means
$$\exists x_3 \in X: x_3\succ x_1 \,\,\wedge\,\, x_3\succ x_2$$
So:
$$\forall \epsilon > 0 \,\,\, \exists x_3 \in X\,\,\, \forall x\in X:x \succ x_3 \Longrightarrow|f_x-\eta|<\epsilon$$
and
$$\forall \epsilon > 0 \,\,\, \exists x_3 \in X\,\,\, \forall x\in X:x \succ x_3 \Longrightarrow|g_x-\zeta|<\epsilon$$
$$|(f_x+g_x)-(\eta+\zeta)|=|(f_x-\eta)+(g_x-\zeta)|\le |f_x-\eta|+|g_x-\zeta|<2\epsilon$$
Let's call $\vartheta:=2\epsilon$.
$$\Longrightarrow \forall \vartheta > 0 \,\,\, \exists x_3 \in X\,\,\, \forall x\in X:x \succ x_3 \Longrightarrow|(f_x+g_x)-(\eta+\zeta)|<\vartheta$$
Since the limit is unique,
$$f_x+g_x \rightarrow \eta+\zeta$$
$\Box$
Would be great if someone could look over it, and give me feedback, if my work is correct, and if not, what I should improve! Thank you
AI: It is almost correct, but you shouldn't say "call $\vartheta:=2\epsilon$". You should start with $\vartheta > 0$ and take $\epsilon =\frac {\vartheta} 2$ in your argument.
|
H: Rouché's theorem example verification
I have to use Rouché's Theorem to check how many zeros the following functions have in $D(0,2)$ (the disk with center $0$ and radius $2$):
$z^3+6z-1$
$z^3+6z+1$
Now, the first one: $f(z) = 6z, g(z) = z^3+6z-1$ so for $|z| = 2$
$$
|f(z)-g(z)| = |-z^3+1| \leq |-z^3| + |1| \leq 9 < 12 = |6z| = |f(z)|
$$
Thus Rouché's Theorem is satisfied and $z^3+6z-1$ has one zero in $D(0,2)$, because $f(z)=6z$ has just one zero. Is that correct?
Is the second example any different?
AI: Your proof for $f_1(z) = z^3 + 6z -1$ is correct.
$f_2(z) = z^3 + 6z + 1$ can be treated in the same way, or simply by noting that
$$
f_2(z) = - f_1(-z) \, .
$$
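A quick numerical cross-check (an added illustration): computing all three roots of each cubic shows exactly one root lies in $|z|<2$ in both cases, consistent with Rouché's Theorem:

    import numpy as np

    for c in (-1, 1):                    # z^3 + 6z + c
        roots = np.roots([1, 0, 6, c])
        inside = [r for r in roots if abs(r) < 2]
        print(c, np.round(np.abs(roots), 3), len(inside))   # one root inside each time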
|
H: Statistics and Probability- Cumulative Distribution
There are $10,000$ people in front of you in line at the airport. Each person takes $\text{Exp}(1/3)$ minutes to be served once they get to the front of the line. Approximate the probability that you get to the front of the line in less than $29,000$ minutes.
I am not sure how to solve this question. Do I have to use some distribution function? Can I get some hints. My options are:
a) $\phi(-\frac{10}{3})$
b) $\phi(-\frac{11}{3})$
AI: Your answer (a) is correct
This because the CLT states that
$$\frac{\sum_i X_i-n\mu}{\sigma\sqrt{n}}\sim \Phi$$
The $\Phi$ has to be a capital letter because it is a CDF
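Numerically (a sketch with scipy; $\text{Exp}(1/3)$ here means mean $3$ minutes, hence standard deviation $3$):

    import math
    from scipy.stats import norm

    n, mu, sigma = 10_000, 3.0, 3.0
    z = (29_000 - n * mu) / (sigma * math.sqrt(n))
    print(z)              # -10/3
    print(norm.cdf(z))    # about 4.3e-4, i.e. Phi(-10/3)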
|
H: Give an example of a non-zero linear operator $T$ on a vector space $V$ such that $T^{2}=O$ but $ \operatorname{Ker} T \neq \operatorname{Im} T$.
Give an example of a non-zero linear operator $T$ on a vector space $V$ such that $T^{2}=O$ but $ \operatorname{Ker} T \neq \operatorname{Im} T$.
AI: $T(e_1)=T(e_2)=0, T(e_3)=e_1$ in a $3$-dimensional vector space.
|
H: If $\cos 3x=\cos 2x$, then $3x=\pm 2x + 2\pi k$. Why the "$\pm$"?
There is an equation I'm solving at the moment which involves $\arccos$. In the correction my teacher gave me, it seems after taking the $\arccos$ of an angle, you must take the positive and negative value of the angle plus a multiple of 2π:
Hence, solve:
$$ 4\cos^3(x) - 2\cos^2(x) - 3\cos(x) + 1 = 0, $$
For $0 \leq x < \pi$
Solution:
$$\cos(3x) = 2\cos^2(x)-1$$
$$\cos(3x) = \cos(2x)$$
$$3x = \pm 2x + 2\pi k$$
$$x=0, \ x=\frac{2}{5}\pi k$$
I was wondering why that is, and if there is an intuitive way of understanding this.
Thanks,
AI: $$ \cos x = \cos (-x) $$
So inverse function solution necessarily include its negative also.
$$ \cos x = \cos (\alpha) $$
$$ x= \pm \alpha \pm 2 k \pi $$
In the present case
$$ 3x= \pm 2x + 2 k \pi$$
$$x=2 \pi k ,\ \frac{2}{5}\pi k$$
Intuitive way is the graphical visual way, as an aid. For any inverse even function we have $\pm$ values necessarily, as the graph is symmetric to the x-axis.
EDIT1:
After posting the above, I realized that using the same symbol is part of the problem, which came from not noticing that the expression can be factored. By setting each factor to zero, disambiguation is possible: two frequency waves are superimposed, and two symbols $(m,n)$ can, or rather should, be used.
$$ \cos 3x- \cos 2x =0,\quad -2 \sin \frac{5x}{2} \sin \frac{x}{2} =0 $$
$$x=2 \pi m ,\ \frac{2}{5}\pi n$$
The $m$ wave roots are colored blue, and $n$ wave roots are green. Negative $x-$ axis graph (plotted in units of $\pi$) is not plotted because it is anyhow symmetrical as said above.
It can be seen why the roots labelled $(1,2,3,4,...)$ are double roots. In the interval required there is one real double root and two other real roots.
|
H: Let $G$ be the group of all the maps from closed interval $[0,1]$ to $\mathbb{Z}$.
Let $G$ be the group of all the maps from closed interval $[0,1]$ to $\mathbb{Z}$. The subgroup $H= \left \{ f \in G :f(0)=0 \right \}$
Then
$1)$ $H$ is countable
$2)$ $H$ is uncountable
$3)$ $H$ has countable index
$4)$ $H$ has uncountable index
Solution I tried: In this question he is asking about maps, not functions. The number of maps from $[0,1]$ to $\mathbb{Z}$ must be $\aleph_0^\mathfrak{c}$. Now I am confused here about how to proceed further, because I have no idea what $\aleph_0^{\mathfrak{c}}$ is. Please give me a hint so that I can solve this further.
Thank you.
AI: Consider the map $\varphi\colon G\to\mathbb{Z}$ defined by $\varphi(f)=f(0)$.
Then this map is a group homomorphism (verify it), it is surjective (just consider constant functions) and $H=\ker\varphi$.
Therefore by the homomorphism theorem, $G/H=G/\ker\varphi\cong\mathbb{Z}$.
This answers the question about the index, doesn't it?
Also, since you have just fixed the image at $0$, you can easily see that $H$ is isomorphic to the group of all functions $(0,1]\to\mathbb{Z}$, which has cardinality $\aleph_0^{\mathfrak{c}}=2^{\mathfrak{c}}$.
|
H: Where to find English translations of Euler's collected works?
I will highly appreciate if someone can provide me the link to the book or link to the paper where I can find the following papers $(90-94)$ with English translation.
Any help would be appreciated. Thanks in advance.
AI: The Euler Archive provides links with english translations:
$\textbf{90}:$ E41 -- De summis serierum reciprocarum [english translation]
$\textbf{91}:$ E47 -- Inventio summae cuiusque seriei ex dato termino generali [english translation]
$\textbf{92}:$ E72 -- Variae observationes circa series infinitas [ english translation]
$\textbf{93}:$ E596 -- De summa seriei ex numeris primis formatae [ english translation]
$\textbf{94}:$ E792 -- Tractatus de numerorum doctrina [english translation]
|
H: Does $\lim \limits_{n\to\infty} \int_0^1 \sin(\frac{1}{x}) \sin(nx)dx$ exist?
Is $\lim \limits_{n\to\infty} \int_0^1 \sin(\frac{1}{x}) \sin(nx)dx$ convergent and if so, what is the limit?
Neither Riemann-Lebesgue lemma nor Dirichlet lemma can be applied directly.
The limit seems to be 0, but I'm not completely certain.
Dirichlet lemma is as follows.
Let $f:(0,1)\to \mathbb{R}$ be monotone and bounded. Then $\lim \limits_{n\to\infty}\int_0^1f(t)\frac{\sin(tn)}{t}dt=\frac{\pi}{2}\lim \limits_{t\to 0^+}f(t).$
AI: $\int_0^{1}|\sin (\frac 1 x)| dx=\int_1^{\infty} \frac { |\sin y|} {y^{2}}dy <\infty$. So $\sin (\frac 1 x)$ is integrable on $(0,1)$ and Riemann Lebesgue Lemma shows that the limit is $0$.
|
H: Values of c for which the given quotient ring is a field.
I am stuck with the problem :
Find all values of $c$ in $F_{5}=\frac{\mathbb{Z}}{5\mathbb{Z}}$ such
that the quotient ring $\frac{F_{5}[X]}{\langle X^3 + 3X^2 + cX + 3\rangle}$ is a
field. Justify your answer.
My approach was: we have a theorem for a commutative ring $R$ that if $I$ is a maximal ideal in $R$, then $R/I$ is a field. Now, to prove that $\langle X^3 + 3X^2 + cX + 3\rangle$ is a maximal ideal in $F_5[X]$ we need to show that the polynomial is irreducible. So the set of values of $c$ for which this polynomial is irreducible will be the set for which the above quotient ring is a field.
But I don't know how to find all the values of $c$ for which $X^3 + 3X^2 + cX + 3$ is irreducible, except to try each value of $c$ individually and then use some irreducibility test.
Is there a proper and simpler way to find such $c$? Please help me in finding such values.
AI: Actually, there is a theorem by Dickson to decide whether a cubic polynomial is irreducible over a finite field.
For $f(x)=x^3+bx+c$ over $\mathbb F_q$, where $q=p^n$ with $p>3$, its discriminant is $D(f)=-4b^3-27c^2$. Then $f$ is irreducible over $\mathbb F_q$ if and only if $D(f)$ is a square in $\mathbb F_q$, say $D(f)=81d^2$, and $\frac12(-c+dw)$ is a cube in $\mathbb F_q$ if $q\equiv1\pmod3$, or in $\mathbb F_{q^2}$ if $q\equiv2\pmod3$, where $w$ is a root of $x^2+3$ ($w\in\mathbb F_q$ if $q\equiv1\pmod3$ and $w\in\mathbb F_{q^2}$ if $q\equiv2\pmod3$).
In your case, $f(x)=x^3+3x^2+cx+3=(x+1)^3+(c-3)(x+1)+(5-c)$, so we can consider $g(x)=x^3+(c-3)x+(5-c)$ WLOG. It is kind of complicated, but there is another criterion which is more practical in this case.
A polynomial of degree $2$ or $3$ is irreducible over a field $F$ if and only if it has no root in $F$.
Therefore we can just let $c$ vary over $\mathbb F_5$. Then $f$ is irreducible, if and only if $f(\alpha)\ne0$ for all $\alpha\in\mathbb F_5$.
|
H: Given a set with 6 vertices, can you prove or disprove G is planar
$p,q$ are distinct prime numbers.
On the set $V=\{1,p,q,pq,p^2q,pq^2\}$ of vertices,
let $G(V,E)$ be an undirected simple graph.
We define:
for all distinct $x,y \in V$,
$xy \in E$ iff $x\mid y$ or $y\mid x$.
Prove or disprove that the graph is planar.
I tried to prove it, and it seems the graph is planar, but I am not sure.
AI: For a connected planar graph with $n\ge 3$, we must have $e\le 3n-6$
$\;\;\;\;$ https://sites.math.rutgers.edu/~sk1233/courses/graphtheory-F11/planar.pdf
but for the given graph, we have $n=6$ and $e=13$.
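The edge count $e=13$ is quick to verify, for instance with $p=2$, $q=3$ (any two distinct primes give the same divisibility pattern); a small sketch:

    from itertools import combinations

    p, q = 2, 3
    V = [1, p, q, p*q, p**2*q, p*q**2]
    E = [(a, b) for a, b in combinations(V, 2) if b % a == 0 or a % b == 0]
    print(len(V), len(E))   # 6 vertices, 13 edges, and 13 > 3*6 - 6 = 12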
|
H: Given $f\in Hol(|z|<1)$ and $image(f(z))\subset K\subset D_0(1)$ where K is compact subset Prove that $f$ has one fixed point.
Given $f\in Hol(|z|<1)$ and $image(f(z))\subset K\subset D_0(1)$, where $K$ is a compact subset.
Prove that $f$ has one fixed point.
My try is:
$g(z)=z-f(z)$; we know that $|f(z)|<1$, so using Rouché's theorem with $1=|z|>|f(z)|$, and because $z$ has only one zero, we get what is needed.
I'm not sure if my prove is correct
AI: You cannot apply Rouché's Theorem to the unit circle. But there exists $t\in (0,1)$ such that $|f(z)| <t$ for all $z$. For any $r \in (t,1)$ we can apply Rouché's Theorem to see that $f$ has a unique fixed point in $\{z: |z| <r\}$. This implies that $f$ has a unique fixed point in $\{z: |z|<1\}$.
|
H: Propositional logic - wrong proof
Given the propositions A, B and C, we assume that
\begin{equation}
(A\wedge B)\rightarrow C \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)
\end{equation}
I want to demonstrate the other way around implication
\begin{equation}
C \rightarrow (A\wedge B) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)
\end{equation}
Obviously statement (1) doesn't necessarily imply statement (2); anyway, I started my strange reasoning.
Attempted proof
by contraposition of the statement (1), we get
$$\neg C\rightarrow \neg A \vee \neg B \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)$$
and the proposition (2) implies the following statement
$$\neg C\vee(A\wedge B) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4)$$
Now, here comes my doubt!
Combining statements (3) and (4) do I get the following statement?
$$(\neg A \vee \neg B)\vee(A\wedge B) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (5)$$
the intuition suggests that writing $\neg C$ or $\neg A \vee \neg B$ doesn't change anything in statement (4) since the implication (3) holds by assumption.
If yes, since the statement (5) is always true therefore the statement 2 would be true as well.
Am I right? Am I wrong?
Thank you!
AI: Long comment
(1) holds by assumption and it is equivalent to (3) [and (4)].
We want to prove that also (2) holds.
What does it mean "combining statements (3) and (4)" ?
Conjunction? If so, the "combination" implies (5): $(¬A∨¬B) ∨ (A∧B)$, and this formula is a tautology, i.e. always true.
Now we have the problem: "the statement (5) is always true therefore the statement (2) would be true as well." Why ?
We are trying to prove (2) and thus we are not entitled to assert that it is true.
The fact that (2) implies a True proposition does not mean that (2) is true: a tautology is implied by every proposition.
For example:
$\varphi \vDash A \lor \lnot A$
is a valid inference for a formula $\varphi$ whatever.
The short answer is that $(A∧B) → C \nvDash C → (A∧B)$.
We can check it with truth-table: for $C$ True and $A$ False, we have that (1) is True while (2) is False.
|
H: convergence of $ \sum_{n=1}^{\infty} \frac{n !}{n !+3} $?
determine the convergence of
$$
\sum_{n=1}^{\infty} \frac{n !}{n !+3}
$$
I tried using the ratio test, and for $n!$ I also used Stirling's approximation. Still, I got stuck.
AI: Since $\lim_{n\to\infty}\frac{n!}{n!+3}=1\ne0$, your series diverges.
|
H: Finding maximal rings and Laurent Series - solution verification
I am to find the regions where $f(z) = \frac{1}{z-2}$ has a Laurent series around $i$. As Jose pointed out, the natural regions are the rings $A(i, 0, \sqrt{5})$ and $A(i, \sqrt{5}, \infty)$, because $\sqrt5$ is the distance between the center and the singularity.
Starting with $|z-i|<\sqrt5$.
$$
\frac{1}{z-i+i-2} = \frac{1}{i-2}\frac{1}{1+\frac{z-i}{i-2}} = \frac{1}{i-2}\frac{1}{1-\frac{-(z-i)}{i-2}} = \frac{1}{i-2}\sum_\limits{n=0}^{\infty}\frac{(-1)^n(z-i)^n}{(i-2)^{n}}= \sum_\limits{n=0}^{\infty}\frac{(-1)^n(z-i)^n}{(i-2)^{n+1}}
$$
If $|z-i|>\sqrt5$ we do as follows:
$$
\frac{1}{z-i+i-2} = \frac{1}{z-i}\frac{1}{1+\frac{i-2}{z-i}} = \frac{1}{z-i}\frac{1}{1-\frac{-(i-2)}{z-i}} = \frac{1}{z-i}\sum_\limits{n=0}^{\infty}\frac{(-1)^n(i-2)^n}{(z-i)^{n}} = \\
= \sum_\limits{n=0}^{\infty}\frac{(-1)^n(i-2)^n}{(z-i)^{n+1}} = \sum_\limits{k=-\infty}^{-1}(-1)^{-k-1}(i-2)^{-k-1}(z-i)^{k}
$$
So for ring $A(i, 0, \sqrt{5})$ Laurent series is $\sum_\limits{n=0}^{\infty}\frac{(-1)^n(z-i)^n}{(i-2)^{n+1}}$ and for $A(i, \sqrt{5}, \infty)$ it equals to $\sum_\limits{k=-\infty}^{-1}(-1)^{-k-1}(i-2)^{-k-1}(z-i)^{k}$.
Is this correct?
AI: No, it is not correct. The natural regions are $\left\{z\in\Bbb C\,\middle|\,|z-i|<\sqrt5\right\}$ and $\left\{z\in\Bbb C\,\middle|\,|z-i|>\sqrt5\right\}$, since the distance from $i$ to $2$ is $\sqrt5$.
|
H: how do I differentiate this function implicitly
How do I differentiate $$\frac{(x^2 - 4y^2)} {(x^2 + xy^2)} = 2$$ implicitly? I did it by bringing the denominator over to the other side, and I got $$\frac{-(2x + 2y^2)}{(4xy + 8y)}$$
AI: Ok, here's how you'd do it:
We have $$\frac{x^2-4y^2}{x^2+xy^2}=2$$
So$$x^2-4y^2=2x^2+2xy^2$$
Differentiating implicitly:
$$2x-8y\frac{dy}{dx}=4x+(2y^2+4xy\frac{dy}{dx})$$(using product rule as well) so
$$4xy\frac{dy}{dx}+8y\frac{dy}{dx}=-2x-2y^2$$
so$$\frac{dy}{dx}(4xy+8y)=-2x-2y^2$$
and finally $$\frac{dy}{dx}=\frac{-2x-2y^2}{4xy+8y}$$
I hope that helped!
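If you want to double-check it, a short sympy sketch reproduces the result (the printed form $-(x+y^2)/(2y(x+2))$ is the same as $\frac{-2x-2y^2}{4xy+8y}$):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')(x)

    # lhs - rhs of the cleared equation x^2 - 4y^2 = 2(x^2 + x y^2)
    expr = x**2 - 4*y**2 - 2*(x**2 + x*y**2)
    dydx = sp.solve(sp.diff(expr, x), sp.Derivative(y, x))[0]
    print(sp.simplify(dydx))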
|
H: Primitive elements in fields and finite fields
I have the following two definitions:
If $K$ is an extension field of $F$ and $K = F(a)$ for some $a \in K$, then $a$ is a primitive element of $K$.
If $K$ is a finite field and $a$ is a generator for its multiplicative group $K^*$, then $a$ is a primitive element of $K$.
It's clear that if $a$ is a primitive element (def2) of the finite field $K$ (characteristic $p$), then we can write $K = Z_p(a)$ (because $K^* = \langle a \rangle $). So in this case $a$ is also primitive (in the sense of def1).
However, if I'm not wrong, it is possible for $K = Z_p(a)$ to be a finite field, without $a$ being a generator for $K^*$. Can someone provide a counterexample?
AI: Sure. Let $p\equiv 3 \ [4]$ be a prime number. Then $X^2+1\in\mathbb{F}_p[X]$ is irreducible. Take an element $\alpha$ in an algebraic closure of $\mathbb{F}_p$ satisfying $\alpha^2=-1$.
Set $K=\mathbb{F}_p(\alpha)$. Then $K$ is a finite field with $p^2$ elements, and $\alpha\in K$ is a primitive element in the sense of $1$, by definition.
Now $\alpha^2=-1$ and $\alpha^4=1\in K$, so $\alpha$ has order $4$ in $K^*$, while $K^*$ has order $p^2-1\geq 8$. Hence $\alpha$ is not a primitive element in the sense of $2$.
For $p=3$, one may check that $\alpha+1$ is a primitive element in the sense of $2$.
|
H: Show that the map is a linear functional and determine the dimension of the quotient space $V/ker(y)$
During an exam I was given the following task:
Consider the vector space
$$V:=\{p \in \mathbb{Q}[x]\ |\ p \text { has degree at most } 3\}$$
Show that the map
$$y(p)=p(0),\quad V\rightarrow\mathbb{Q}$$
is a linear functional and determine the dimension of the quotient space $V/ker(y)$.
I argued that:
$$y\left(\gamma_{1} p_{1}+\gamma_{2} p_{2}\right)=p(0)=\sum_{i=0}^{n} \alpha_{i} 0^{i}=0$$
and
$$\gamma_{1} y\left(p_{1}\right)+\gamma_{2} y\left(p_{2}\right)=\gamma_{1} p(0)+\gamma_{2} p(0)=\gamma_{1} \sum_{i}^{n} \alpha_{i} 0^{i}+\gamma_{2} \sum_{i}^{n} \alpha_{i} 0^{i}=0$$
However, I am not sure if this is correct, because what is $0^0$?
To determine the dimension of $V/ker(y)$, I argued that $y$ is the zero map with dimension $0$ and that the dimension of a polynomial of degree at most $3$ is $4$, and therefore that the dimension of $V/ker(y)$ is
$$\operatorname{dim} V-\operatorname{dim}(\operatorname{ker} y)=4-0=4$$
Will someone tell me if I messed up?
AI: In such expressions, $0^0$ is defined to be $1$.
Clearly $y$ is NOT the zero map. For example, the polynomial $1 \in V$ gets mapped to $1$. Also $X+1$ gets mapped to $1$.
By the isomorphism theorem, we have $$V/\ker y \cong y(V)= \mathbb{Q}$$
and thus $\dim_\mathbb{Q}(V/\ker y) = 1$.
|
H: How do you prove that $\ln(x) = \int_0^\infty \frac{e^{-t}-e^{-xt}}{t}$?
I got the following result using the technique "Integral Milking":
$$\ln(x) = \int_0^\infty \frac{e^{-t}-e^{-xt}}{t} dt= \lim_{n\to0}\left(\operatorname{Ei}(-xn)-\operatorname{Ei}(-n)\right)$$
for $x > 0$. So, I have a proof of the result, but now I would like to know how to prove it starting with either the integral or the limit. I'm not that familiar with the exponential integral $\operatorname{Ei}(x)$, so my attempts were pretty bad (I'm not so familiar with the technique, but I'm still going to try to differentiate under the integral sign). Personally I have never seen an integral representation of the natural logarithm like this before, and I can't find it anywhere (e.g. here), but WolframAlpha directly gets the limit right.
Question: How do you prove that the integral (or limit) is equal to $\ln(x)$ by starting with the integral (and not using e.g. integral milking)?
AI: The fastest way to prove this is using Frullani's theorem, as you've seen in the comments, but you can also use a double integral to quickly solve this problem (which is one method how Frullani's theorem is proven).
$$I=\int_0^{\infty} \frac{e^{-t}-e^{-tx}}{t} \; dt = \int_0^{\infty} \frac{1}{t}\int_{1}^{x} te^{-tw} \; dw \; dt$$
Where $w$ and $t$ are dummy variables. Now switch the order of integration:
$$I=\int_1^x \int_0^{\infty} \frac{1}{t} \cdot te^{-tw} \; dt \; dw= \int_1^x \int_0^{\infty} e^{-tw} \; dt \; dw = \int_1^x \frac{1}{w} \; dw = \boxed{\ln{x}}$$
You can generalize this to the integral $\int_0^{\infty} \frac{f(ax)-f(bx)}{x} \; dx$. Following these steps, you should see that the generalized integral is just $$[f(0)-f(\infty)] \ln{\left(\frac{b}{a}\right)}$$
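A numeric spot-check of the special case above (an added sketch with scipy; the integrand extends continuously to $t=0$ with value $x-1$):

    import numpy as np
    from scipy.integrate import quad

    for x in (0.5, 2.0, 10.0):
        val, _ = quad(lambda t, x=x: (np.exp(-t) - np.exp(-x*t)) / t, 0, np.inf)
        print(x, val, np.log(x))   # the last two columns agree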
|
H: Universal completions of *algebras
I am dealing with two "universal completions" but I am not sure if they are the same thing and would appreciate some guidance.
Let $\mathcal{A}$ be a unital *-algebra. A $\mathrm{C}^*$-seminorm on $\mathcal{A}$ is a seminorm $p:\mathcal{A}\rightarrow \mathbb{R}^+$ that satisfies, for all $a,\,b\in\mathcal{A}$:
$$p(ab)\leq p(a)p(b)\text{ and }p(a^*a)=p(a)^2.$$
We get a 'large' (pre?)-$\mathrm{C}^*$-norm by taking:
$$a\mapsto \|a\|_{u_1}:=\sup\{p(a)\mid p\text{ is a $\mathrm{C}^*$-seminorm on $\mathcal{A}$}\}.$$
Denote $A_{u_1}$ the corresponding norm-completion of $\mathcal{A}$.
$\mathcal{A}$ as before, for $H$ Hilbert spaces, another 'large' (pre?)-$\mathrm{C}^*$-norm:
$$a\mapsto \|a\|_{u_2}:=\sup\{\|\pi(a)\|\mid\pi:\mathcal{A}\rightarrow B(H),\text{ a unital *-homomorphism}\}.$$
Denote $A_{u_2}$ the corresponding norm-completion of $\mathcal{A}$.
Question: Are these two norms (and hence $\mathrm{C}^*$-algebras) the same thing?
AI: Remark: The Hilbert space $H$ should vary as well when taking the supremum, otherwise the two can be different and you generally only have (depending on what $H$ you chose) $\|\cdot\|_{u_2}\leq\|\cdot\|_{u_1}$.
Suppose $p$ is a non-zero $C^*$ semi-norm. The zero locus of $p$ is a two-sided $*$ ideal, as such $\mathcal A/N(p)$ is also a unital $*$-algebra, if we complete it then $p$ makes it into a $C^*$-algebra. By using for example the GNS construction you find a Hilbert space $H$ and an injective unital representation $\pi:\overline{\mathcal A/N(p)}\to B(H)$. Injective $*$-morphisms are isometric for $C^*$-algebras so you have $$p(a) = p([a]) = \|\pi([a])\|$$
so the representation $\mathcal A\to B(H)$, $a\mapsto \pi([a])$ is a unital isometric $*$ representation if you give $\mathcal A$ the $C^*$ semi-norm $p$.
Conclusion: Every $C^*$ semi-norm comes from a unital representation to operators on a Hilbert space, as such the two sets over which you take the supremum are the same.
|
H: How to get information from events per week to events per day?
I was given the following homework question:
If a company's computer has around 28.14 errors per week, what is the probability of fewer than 3 errors in one day?
Can someone give me a little start up aid to solve the question? (no demand here, for solving my homework!).
What I got so far:
I shall estimate $p(X<3)$,
therefore I could simply add the probabilities for $p(X=0)+p(X=1)+p(X=2)$.
But this is where I am struggling: How to get a useful information for any of those probabilities out of 28.14 per week?
$28.14/7 = 4.02$. So on average there are 4.02 errors per day. But how do I know how likely it is that there are fewer or more errors than average on a given day?
Once again I am not asking for a solution but for a little help getting started.
AI: Using the Poisson distribution, you have a $Po(4)$ count per day (approximating $4.02 \approx 4$).
So the requested probability is
$\mathbb{P}[X<3]=e^{-4}[1+4+\frac{4^2}{2!}]\approx 24\%$
It is understood that it is not forbidden to use exactly $4.02$ as the mean number of errors per day; the calculation process is the same.
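A one-line check with scipy (a sketch):

    from scipy.stats import poisson

    print(poisson.cdf(2, 4))          # about 0.2381, with the rounded mean 4
    print(poisson.cdf(2, 28.14 / 7))  # about 0.2352, with the exact daily mean 4.02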
|
H: How can this implication be proved?
What I have given is:
(i) $\operatorname{f}(a_{n})\leq0$ and $\operatorname{f}(b_{n})\geq0.$
(ii)$\forall n\in\Bbb N_{0}: a\leq a_{n}\leq a_{n+1}\leq b_{n+1}\leq b_{n}\leq b.$
(iii)$\forall n \in \Bbb N_{0}:b_{n}-a_{n}=\frac{b-a}{2^{n}}.$
Now it is stated $a_{n}$ and $b_{n}$ are bounded monotone increasing/decreasing sequences. The bounds are 0 for $a_{n}$ and $b_{0}$ for $b_{n}$, right?
I understood (if my reasoning for why both sequences are bounded is correct) everything so far.
What I understand only partly is:
$\lim \limits_{n\to\infty}a_{n} = \lim \limits_{n\to\infty}b_{n}$
I understand that $b_{n}-a_{n}$ is a null sequence.
But which theorem is used to state:
$\lim \limits_{n\to\infty}a_{n} = \lim \limits_{n\to\infty}b_{n}$
AI: $(a_n)$ and $(b_n)$ are both monotone bounded sequences. Hence they have finite limits. Also $\lim b_n-\lim a_n=\lim (b_n-a_n) =\lim \frac {b-a} {2^{n}}=0$. Hence $\lim b_n=\lim a_n$
|
H: System $\,x+y+z=1\,$ and $\,\frac{1}{x}+\frac{1}{y}+\frac{1}{z}=1$
I'm trying to solve the following system of equations:
I. $\,x+y+z=1$
II. $\,\frac{1}{x}+\frac{1}{y}+\frac{1}{z}=1$
And an elegant solution just eludes me.
It should be a rather easy problem, but I'm having slight problems solving it.
I tried just brute force substituting it, but didn't seem to get anywhere with that approach either...
AI: As pointed out in the comments, this is an indeterminate system of equations. However, we can still solve for a general form of the solutions. Notice that $\dfrac1x+ \dfrac1y+ \dfrac1z=\dfrac{xy+yz+xz}{xyz}$; by Vieta's formulas for cubic equations we may construct an equation in $t$ which has solutions $t= x,y,$ or $z$:
$t^3-t^2-a^2t+a^2=0\implies (t^2-a^2)(t-1)=0$, where $a$ is an arbitrary constant.
So $t=a, -a$ (these two solutions exhibit the non-negativity restriction on $a^2$ if you're solving in $\Bbb R$), or $t=1$.
Set $x=a,y=-a, z=1$, and you have a set of solutions.
Edit: Made the substitution $A=-a^2$ as suggested by Yves in the comments.
|
H: Probability that construction fails
I have the following problem:
The resistance D has a normal distribution with expectation 11 and
variance = 2. The force B has a normal distribution with expectation 9
and variance 1. Assume independence. What is the probability that the
construction doesn't fail.
The answer is 0.8749
What I did already is draw the two normal distributions. I know that the construction will fail if the force is bigger than the resistance. So I have two intersecting normal distributions and I drew a vertical line through the intersection. Right of the line is the region where the resistance is bigger than the force.
But how do I calculate this region?
AI: The random variable
$Y=(D-B)\sim N(2;3)$
so you can easily calculate
$\mathbb{P}[Y>0]=1-\Phi(\frac{-2}{\sqrt{3}})=1-\Phi(-1.15) \approx 87.49\%$
|
H: Induce Metric From Topological Space
Given any topological space $(X, T)$,
can we induce a metric $d$ on $X$, such that the set of all open sets in the metric space $(X, d)$ is equal to the set $T$?
Just trying to grab the intuition behind topological spaces.
AI: Below I'll focus on metrizable topologies - plenty of spaces fail to come from any metric whatsoever, but here I want to discuss the inability to "pick out" a specific metric even when we know that one exists. For example, as Kavi Rama Murthy observes every metrizable topology is Hausdorff, or phrased in the contrapositive we can't hope to get a metric from a non-Hausdorff topology.
I'm choosing to focus on this aspect of the question because of the word "induce" - this generally signifies a choice of object which is somehow "canonical," and the point I want to make here is that this aspect is difficult (to put it mildly!) even if we sidestep the metrizability issue.
Certainly we can't hope to do so in a unique way, except in the trivial case of the one-point space (or the zero-point space if you allow that): we can always scale a metric by an arbitrary positive factor without changing the topology.
Precisely, given a metric space $(X,d)$ and a positive $\alpha$ consider the new function $$d_\alpha(x,y)=\alpha d(x,y).$$ This is a metric on $X$ and generates the same topology as $d$, but - as long as $X$ has at least $2$ elements - we have $d_\alpha\not=d$.
Of course this is in a sense pretty silly. A better sense of the gap between topologies and metrics can be gotten by considering some concrete examples. For example, in the context of $\mathbb{R}^2$ we have:
The usual Euclidean metric $d((a,b),(c,d))=\sqrt{(a-c)^2+(b-d)^2}$.
The "taxicab" metric $d_{taxi}((a,b),(c,d))=\vert a-c\vert + \vert b-d\vert$.
The "square" metric $d_{square}((a,b),(c,d))=\max\{\vert a-c\vert, \vert b-d\vert\}$.
These are genuinely different$^1$ metrics - consider in each case the ball centered on the origin with radius $1$ - but they all generate the usual topology.
$^1$What exactly do I mean by this? Well, I'm deliberately being a bit vague here, but one observation we can make is that none is a "scaling" of any other.
But we can say more: there are fairly simple geometric differences we can see between the metrics. For example:
We can think about decompositions of squares/diamonds versus circles. In the Euclidean metric we can never write a given closed ball with positive radius as a union of four closed balls which overlap only on their boundaries - but we can do this in the taxicab and square metrics.
We can also think about corners of squares/diamonds versus circles. In both the taxicab and square metrics, we can show that given a nonempty open ball $B$ there are exactly four points $c_1,c_2,c_3,c_4$ (the corners) such that for all sufficiently small $\epsilon>0$ and each $i\in\{1,2,3,4\}$ the intersection of the ball of radius $\epsilon$ centered on $c_i$ with $B$ is itself an open ball. But no such points exist for the Euclidean metric (the intersection of two circles is never a circle).
As a fun problem, think about why I haven't described any "simple geometric" way to distinguish the taxicab and square metrics from each other ...
|
H: Function that produce a spike in 3D
I am looking for a function that would produce a spike, of any size, in the z-direction at a given x and y coordinate. A little bit like the https://en.wikipedia.org/wiki/Dirac_delta_function but that could be graphed in 3D.
I am not sure if those specifications are clear, if not please let me know I will clarify according to feedback.
Thanks a lot
AI: There are a few ways of doing it, but a 'nice' way (continuous, infinitely differentiable) would be to consider a pulse as a normal distribution with close to zero variance, that is
$$z = A\exp\left(-\frac{(x-x_0)^2 + (y-y_0)^2}{2\sigma^2}\right)$$
You can keep taking variances smaller and smaller till you get the desired result
|
H: How can I scale a sigmoid curve to fit the criteria I would like
Is there a way I can make a scaled sigmoid function $f(x)$ such that $f(0) \to -1$ (or as close to it as possible) and $f(n) \to 1$ (or as close to it as possible), for whatever $n$ I choose?
AI: While I agree with 5xum, I will give another solution, using the sigmoid function.
The function that I would present is
$$
f(x) = 2\left(1+\exp\left\{ -\ln(40~000) \frac{x-n}{n} + \ln(0.005) \right\} \right)^{-1} -1
$$
Here is a visualization: https://www.desmos.com/calculator/2ah3qgjbim
Notice that I have determined the coefficients $\ln (40~000)$ and $\ln (0.005)$ by setting certain values for $f(0)$ and $f(n)$. They can be replaced by $\ln(\text{a large number})$ and $\ln(\text{a small but positive number})$.
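A quick numerical check of the endpoint values (a sketch; $n=10$ is an arbitrary choice, any $n>0$ behaves the same):

    import math

    def f(x, n):
        expo = -math.log(40_000) * (x - n) / n + math.log(0.005)
        return 2.0 / (1.0 + math.exp(expo)) - 1.0

    n = 10
    print(f(0, n), f(n, n))   # about -0.990 and 0.990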
|
H: Prime number $p>3$ is congruent to $2$ modulo $3$. Let $a_k= k^2+k+1$. Prove that $a_1a_2\cdots a_{p-1}$ is congruent to $3$ modulo $p$.
Prime number $p>3$ is congruent to $2$ modulo $3$. Let $a_k= k^2+k+1$ for $k=1,2,\ldots,p-1$. Prove that product $a_1a_2\cdots a_{p-1}$ is congruent to $3$ modulo $p$.
I tried to solve this problem for two hours, I realized this:
$$
a_1\equiv a_{p-2}\pmod p \\
a_2\equiv a_{p-3}\pmod p \\
\vdots \\
a_{(p-3)/2}\equiv a_{(p+1)/2}\pmod p
$$
But now I don't know what to do.
AI: Since $a(k) = (k^3-1)/(k-1)$ for $k\ne1$, we have $a(k) \equiv (k^3-1)(k-1)^{-1}\pmod p$ for $2\le k\le p$. Then
$$
a(2)a(3)\cdots a(p) \equiv \prod_{k=2}^p (k^3-1) \bigg( \prod_{k=2}^p (k-1) \bigg)^{-1} \pmod p.
$$
But since $p\equiv2\pmod3$, the cubes $k^3$ run through a complete set of residues $\pmod p$ as $k$ does (and of course $1^3=1$ is mapped to itself). Therefore, making the change of variables $j=k^3$, we obtain
$$
a(2)a(3)\cdots a(p) \equiv \prod_{j=2}^p (j-1) \bigg( \prod_{k=2}^p (k-1) \bigg)^{-1} \equiv 1 \pmod p.
$$
The problem now follows on noting that $a(1)=3$ and $a(p) = p^2+p+1\equiv1\pmod p$.
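The statement is also easy to test computationally; a small Python sketch over the primes $p\equiv2\pmod3$ below $200$:

    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

    for p in range(5, 200):
        if is_prime(p) and p % 3 == 2:
            prod = 1
            for k in range(1, p):
                prod = prod * (k*k + k + 1) % p
            assert prod == 3
    print("a_1 * a_2 * ... * a_{p-1} = 3 (mod p) for all tested primes")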
|
H: Regarding ln|f| being upper semicontinuous
Let $f$ be a holomorphic function on the open unit disc $\mathbb{D}$ in $\mathbb{C}$. Can anyone tell me why $\ln|f|$ is upper semicontinuous but not continuous? In particular $\ln|z|$, $z\in \mathbb{D}$.
AI: $\{z: \ln |f(z)| < a\}=\{z: |f(z)| <e^{a}\}$ is open by continuity of $|f|$. So the function is upper semi-continuous.
$\ln |f|$ is not even finite valued in general, so it is not continuous. But it can be continuous in special cases. If $f(z)=e^{z}$ then $\ln |f(z)|=\Re z$ which is continuous.
More generally whenever $f$ is holomorphic and $f(z) \neq 0$ for all $z$ we get continuity. [This is because $f(z)=e^{g(z)}$ for some holomorphic function $g$ and $\ln |f(z)|=\Re g(z)$].
|
H: Isomorphisms Between Multiplicative and Additive Modulo Groups.
While working with the fundamental theorem of Abelian groups, I noticed something and would like to confirm my assumption.
$$(\mathbb{Z}/n\mathbb{Z})^* \cong \mathbb{Z}/\varphi(n)\mathbb{Z} $$
Is the above true? If so why exactly?
AI: Your statement is true for some $n$, but not for all $n$. For example, it's not true for $n=8$.
Addendum: I recommend this Wikipedia article, which states that
the group $ (\mathbb{Z}/n\mathbb{Z})^\times$ is cyclic if and only if $n$ is $ 1, 2, 4,$ $p^k$ or $2p^k$,
where $ p$ is an odd prime and $k > 0$.
|
H: solve $x' = t^\alpha +x^\beta$
I need to solve the following ODE for some nonzero $\alpha, \beta$:
$x' = t^\alpha + x^\beta$. I don't have any initial conditions.
I am not sure how to proceed with this. I tried substituting $x=y^m$ to get $x'=my^{m-1} y'$, but I don't know if it helps.
Any help will be appreciated
AI: I don't think you'll get a closed-form solution in general. Maple doesn't find one. Even in the special case $\alpha=2, \beta=3$ it doesn't find one. Nor does Wolfram Alpha.
Of course if $\beta = 1$ you have a linear equation.
If $\beta = 2$ a solution can be found in terms of Bessel functions.
|
H: Exact value of limit
Find the exact value of
$ \lim_{x\to \infty } (1+ \frac {1}{2x} - \frac {15}{16x^2})^{6x}$.
I feel tempted to reduce $(1+ \frac {1}{2x} - \frac {15}{16x^2})$ to $1$ as $x$ approaches infinity, but a quick check on the GC yields the final answer as $e^3$.
I'm not sure how to arrive there.
AI: When it comes to limits, you have to be very careful about steps like you mentioned, reducing the inside to ${1}$ - because what you really have is a ${1^{\infty}}$ situation, which is an indeterminate form.
Hint: rewrite the limit as ${e^{\lim_{x\rightarrow \infty}\ln\left(\left(1+\frac{1}{2x}-\frac{15}{16x^2}\right)^{6x}\right)}}$. If you want to see the full answer, continue reading.
This can be simplified to ${e^{\lim_{x\rightarrow \infty}6x\ln\left(1+\frac{1}{2x}-\frac{15}{16x^2}\right)}}$. Now since ${e^x}$ is continuous, we can simply find the limit of ${\lim_{x\rightarrow\infty}6x\ln\left(1+\frac{1}{2x}-\frac{15}{16x^2}\right)}$ and raise $e$ to this power. This is,
$${\Rightarrow \lim_{x\rightarrow\infty}\frac{\ln\left(1+\frac{1}{2x}-\frac{15}{16x^2}\right)}{\frac{1}{6x}}}$$
And now this is a ${\frac{0}{0}}$ situation in the limit, so we can use the infamous L'Hôpital's rule. Differentiating the top and bottom gives some ugly expression:
$${\lim_{x\rightarrow\infty}\frac{\frac{2(15-4x)}{x(16x^2+8x-15)}}{\frac{-1}{6x^2}}=\lim_{x\rightarrow\infty}\frac{12x(4x-15)}{16x^2+8x-15}=3}$$
(if you are confused why this limit is $3$, let me know in the comments and I will explain). And hence the overall limit is ${e}$ to this power, which is ${e^3}$. As expected
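As a quick numerical sanity check (a minimal sketch, not part of the original argument), the expression indeed approaches $e^3 \approx 20.0855$ for large $x$:

```python
import math

def f(x):
    # the expression inside the limit
    return (1 + 1 / (2 * x) - 15 / (16 * x**2)) ** (6 * x)

for x in (10, 100, 10_000, 1_000_000):
    print(x, f(x))
print("e^3 =", math.exp(3))
```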
|
H: Absolute convergence and continuity of $\sum_{n=1}^\infty \sin(\frac{x}{n^4})\cos(nx)$
Question:
Prove that the series $$\sum_{n=1}^\infty \sin\left(\frac{x}{n^4}\right)\cos(nx)$$ converges absolutely, and it is continuous on $\mathbb{R}$.
Attempt: I can readily see that the series converges absolutely on some closed interval $[-b,b]$, since for any $x\in\mathbb{R}$ we have $$|\sin(x)|\leq |x|$$
Such that
$$\sum_{n=1}^\infty \bigg|\sin\bigg(\frac{x}{n^4}\bigg)\cos(nx)\bigg| \leq \sum_{n=1}^\infty \bigg|\frac{x}{n^4}\cos(nx)\bigg| \leq \sum_{n=1}^\infty \bigg|\frac{x}{n^4}\bigg| \leq \sum_{n=1}^\infty \bigg|\frac{b}{n^4}\bigg|$$
Which converges absolutely by comparison with $\sum_{n=1}^\infty \frac{1}{n^2}$, and I've used $|\cos(nx)|\leq 1$.
However, this is only on some closed interval, I do not think the series converges uniformly on $\mathbb{R}$? So how do I show continuity? - and absolute convergence on $\mathbb{R}$ in general, rather than just a closed interval?
Can I perhaps say, so long as the series is absolutely convergent on $[-\pi,\pi]$, it must be convergent on the rest of $\mathbb{R}$, given the periodic behaviour of the functions? Would this also imply continuity? On a closed interval, the series converges uniformly by Weierstrass $M$-test, and uniform converges preserves continuity.
AI: You have shown that $\sum_{n=1}^\infty \sin(\frac{x}{n^4})\cos(nx)$ converges uniformly on every interval $[-b, b]$; absolute convergence at each fixed $x \in \mathbb{R}$ is immediate from your bound $\sum_{n=1}^\infty \frac{|x|}{n^4} < \infty$.
It follows that $f(x) = \sum_{n=1}^\infty \sin(\frac{x}{n^4})\cos(nx)$ is continuous on $[-b, b]$ for arbitrary $b > 0$, and therefore continuous on $\Bbb R$, since every point lies in some such interval.
|
H: Show that $f_{n}(x):=nx(1-x)^{n}$ is uniformly bounded on $[0,1]$ for all $n\geq 1$.
Consider $f_{n}(x):=nx(1-x)^{n}$ defined for $n=1,2,3,\cdots$ and $x\in [0,1]$. The exercises have two parts:
(a) Show that for each $n$, $f_{n}(x)$ has a unique maximum $M_{n}$ at $x=x_{n}$. Compute the limit of $M_{n}$ and $x_{n}$ as $n\rightarrow\infty.$
(b) Prove that $f_{n}(x)$ is uniformly bounded in $[0,1]$.
I have computed that for each $n$, $f_{n}(x)$ has a unique maximum in $[0,1]$ at $$x_{n}=\dfrac{1}{1+n}$$ with the maximum value $$M_{n}=\Big(\dfrac{n}{n+1}\Big)^{n+1}.$$ Thus $$\lim_{n\rightarrow\infty}x_{n}=0\ \ \ \text{and}\ \ \ \lim_{n\rightarrow\infty}M_{n}=\lim_{n\rightarrow\infty}\Big(1-\dfrac{1}{1+n}\Big)^{n+1}=e^{-1}.$$
The solution says that since $|f_{n}(x)|\leq |M_{n}|$ for each $n=1,2,\cdots$, the above shows that $|f_{n}(x)|\leq e^{-1}$ for all $x\in [0,1]$ and $n=1,2,\cdots$.
I don't understand this. To show the uniform boundedness, don't we need to show $$\sup_{n}|f_{n}(x)|\leq C,\ \ \text{for some constant}\ C\ \text{and for all}\ x\in [0,1]?$$
It is true that since $|f_{n}(x)|\leq M_{n}$ for each $n$ and for all $x\in [0,1]$, we have $$\sup_{n}|f_{n}(x)|\leq\sup_{n}|M_{n}|,$$ but why does the limit of $M_{n}$ being $e^{-1}$ imply the sup is $e^{-1}$?
Thank you!
AI: It is well known that $(1 + 1/n)^n \nearrow e$ and $(1+1/n)^{n+1} \searrow e.$ See one of the many proofs on this site here.
Thus, $M_n = \left(\frac{n}{n+1}\right)^{n+1} = \frac{1}{(1+1/n)^{n+1}} \nearrow e^{-1}$. Since the sequence $M_n$ increases to $e^{-1}$, we get $\sup_n M_n = e^{-1}$; it is this monotonicity, not merely the value of the limit, that makes $e^{-1}$ a bound for every $n$.
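A quick numerical check (a sketch, not needed for the proof) confirms that $M_n$ increases toward $e^{-1} \approx 0.3679$:

```python
import math

# M_n = (n/(n+1))**(n+1) should increase toward 1/e
prev = 0.0
for n in (1, 2, 5, 10, 100, 10_000):
    m = (n / (n + 1)) ** (n + 1)
    assert m > prev  # monotonically increasing
    prev = m
    print(n, m)
print("1/e =", 1 / math.e)
```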
|
H: Solution of the ODE $x'=t+x$
How can i find the general solution of
$$x'=t+x$$
I tried a few things, but I couldn't get to the solution, I don't think I'm remembering the right method to solve this problem.
AI: You can use the "variation of constants" technique. You start by solving the homogeneous equation $x'-x = 0$, which yields the general solution $x(t)=c e^t$. Then you assume that the solution of your equation is of the form $x(t)=c(t) e^t$ and determine $c(t)$:
$$
(c(t) e^t)' = c(t) e^t + t\Leftrightarrow c'(t) e^t+c(t) e^t = c(t) e^t+t\Leftrightarrow c'(t)=t e^{-t}\Rightarrow c(t)=-(t+1)e^{-t} + k
$$
Finally, you get $x(t)=-(t+1) + ke^t$.
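One can verify the solution by substitution; here is a minimal SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

t, k = sp.symbols('t k')
x = -(t + 1) + k * sp.exp(t)              # the general solution found above
print(sp.simplify(x.diff(t) - (t + x)))   # prints 0, so x' = t + x holds
```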
|
H: Show that $U:=\left\{x_{1} \otimes v | v \in V\right\}$ is a subspace
Let $V$ be vector space with basis $\{x_1,...,x_m\}$. I want to show that
$$U:=\left\{x_{1} \otimes v | v \in V\right\}$$
Is a subspace of $V\otimes V$.
My attempt:
To show that $0\in U$, I pick $v=0$:
$$x_{1} \otimes 0=0, \quad \Rightarrow 0 \in U$$
And since $V$ is a vector space:
$$v_{1}, v_{2} \in V \Rightarrow \alpha v_{1}+v_{2} \in V$$
Where $\alpha$ is a scalar. I can then write:
$$x_{1} \otimes \alpha_{1} v_{1}+x_{1} \otimes v_{2}=x_{1} \otimes\left(\alpha_{1} v_{1}+v_{2}\right) \in U$$
Which shows that $U$ is a subspace of $V\otimes V$. Any flaws in my proof?
AI: Your proof is mostly correct. However, keep in mind that to show that $U$ is a subspace, we must show that $\alpha_1(x_1 \otimes v_1) + (x_1 \otimes v_2) \in U$; since $\alpha_1(x_1 \otimes v_1) = x_1 \otimes (\alpha_1 v_1)$, your equation at the end should have this "extra step".
|
H: Semicircle law theorem (Math notation)
I will put part of the sentence here because I am interested in something very specific. It follows that
Let $I\subset \mathbb{R}$ be an interval. Define the random variables
$$
E_n(I)=\frac{\#\left( \{\lambda_1(\mathbf{X}_n),...,\lambda_n(\mathbf{X}_n)\}\cap I\right)}{n}.
$$
What does the $\#$ operator mean? Does $E_n(I)$ represent some average of the eigenvalues? What is the meaning of $\{\lambda_1(\mathbf{X}_n),...,\lambda_n(\mathbf{X}_n)\}\cap I$?
AI: Since the title of your question references the semicircle law, I assume that $X$ is a random Hermitian matrix. $\{\lambda_{1}(X),\ldots,\lambda_{n}(X)\}$ is the spectrum of the matrix $X$, its set of eigenvalues in $\mathbb{R}$. Then $\{\lambda_{1}(X),\ldots,\lambda_{n}(X)\}\cap I$ is the set of the eigenvalues of $X$ that also lie in the interval $I$. The $\#$ operator takes the cardinality of the set that follows it, so $\#\left(\{\lambda_{1}(X),\ldots,\lambda_{n}(X)\}\cap I\right)$ is the number of eigenvalues of $X$ (without multiplicity) that lie in the interval $I$.
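To make the definition concrete, here is a small numerical sketch (the matrix model and normalization are assumptions for illustration, not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
X = (A + A.T) / np.sqrt(2 * n)   # a Wigner-type symmetric matrix (assumed normalization)
eigs = np.linalg.eigvalsh(X)     # the spectrum {lambda_1(X), ..., lambda_n(X)}
a, b = -1.0, 1.0                 # the interval I = [a, b]
E_n = np.count_nonzero((eigs >= a) & (eigs <= b)) / n   # #({...} ∩ I) / n
print(E_n)                       # fraction of eigenvalues lying in I
```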
|
H: Question about a proof of a theorem regarding monotonicity and countable additivity of a measure
I have a question regarding the proof of the following theorem:
If $\mu$ is a measure on a ring $\mathbf{R}$, if $E \in \mathbf{R}$, and if $\left\{E_{i}\right\}$ is a finite or infinite sequence of sets in $\mathbf{R}$ such that
$$E \subset \bigcup_{i} E_{i}, \quad\text{then}\quad \mu(E) \leq \sum_{i} \mu\left(E_{i}\right).$$
Proof Steps:
If $\left\{F_{i}\right\}$ is any sequence of sets in a ring $\mathbf{R}$, then there exists a disjoint sequence $\left\{G_{i}\right\}$ of sets in $\mathbf{R}$ such that
$$G_{i} \subset F_{i} \quad\text{and}\quad \bigcup_{i} G_{i}=\bigcup_{i} F_{i}.$$
Applying this result to the sequence $\left\{E \cap E_{i}\right\}$, the desired result follows from the countable additivity and monotonicity of $\mu$.
When I tried to apply the result to $\left\{E \cap E_{i}\right\}$, I got stuck. I said there is a disjoint sequence $\left\{E \cap F_{i}\right\}$ such that $\mu(E) = \sum_{i} \mu(F_{i})$, and I don't see how to apply monotonicity to complete the proof.
AI: You're almost there. By the Lemma you mentioned, extract disjoint $G_i$ such that $G_i \subset E_i$ and $\bigcup_i G_i = \bigcup_i E_i$. Note that $E = \bigcup_i (E \cap G_i )$, and $(E \cap G_i) \subset G_i$, then
$$\mu(E) = \mu \left( \bigcup_i (E \cap G_i ) \right) = \sum_i \mu (E \cap G_i) \leq \sum_i \mu(G_i) \leq \sum\mu(E_i),$$
where the second equality uses countable additivity and both inequalities use the monotonicity of $\mu$.
|
H: Three-Handed Gambler’s Ruin - end of the game
Three players start with $a,b$, and $c$ chips, respectively, and play the following game. At each stage, two players are picked at random, and one of those two is picked at random to give the other a chip. This continues until one of the three is out of chips, and quits the game; the other two continue until one player has all the chips.
Okay so let us consider the following stopping time: $T$- end of the game, the moment of time, when one of the players has all the chips.
How can one show that $ET<\infty$? In other words: how do we know that the game will end in finite time?
AI: Assuming you know the result for the two-player game, it follows quickly. From the point of view of player A, he faces a coalition of B and C; he wants to get all their chips, and he doesn't care how they transfer chips among themselves.
This game almost surely ends in finite time. Then either A has won, and the game is over, or B and C play a two-person game, which again almost surely ends in finite time.
The result extends in the same way to the $n$-person game.
EDIT
For the two-person game, suppose there are $n$ chips in play, and let $E_k$ be the expected time till the end of the game if player A has $k$ chips. We have the boundary conditions $$E_0=E_n=0$$ and the recurrence relation $$E_k=1+\frac12E_{k+1}+\frac12E_{k-1},\ k=1,2,\dots,n-1\tag1$$ because we always have to make one play, and A gains or loses a chip with equal probability. $(1)$ is a second-order linear recurrence relation with constant coefficients, and may be solved by standard methods. One can easily check that the solution is $$E_k=k(n-k),\ 0\leq k\leq n$$
One can extend this result to the three-person game by first computing the probability that the player with $k$ chips wins the two-person game in a manner similar to the way we computed the expectation of the length of the game.
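A quick linear-algebra check of the closed form (a sketch with a hypothetical chip count $n$, not part of the original answer):

```python
import numpy as np

n = 10  # total chips in play (example value)
# E_k = 1 + (E_{k+1} + E_{k-1})/2 for k = 1..n-1, with E_0 = E_n = 0
A = np.zeros((n - 1, n - 1))
b = np.ones(n - 1)
for i in range(n - 1):           # row i corresponds to k = i + 1
    A[i, i] = 1.0
    if i > 0:
        A[i, i - 1] = -0.5
    if i < n - 2:
        A[i, i + 1] = -0.5
E = np.linalg.solve(A, b)
print(np.allclose(E, [k * (n - k) for k in range(1, n)]))  # True: E_k = k(n-k)
```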
|
H: Twice differentiable function f(x) satisfying $f(x)+f''(x)=2f'(x)$
Consider a twice differentiable function f(x) satisfying $f(x)+f''(x)=2f'(x)$ where $f(0)=0,f(1)=e$.
Find the value of $f'(-1)$ and $f''(2)$
I used the ansatz $f(x)=\alpha e^{ax}-\alpha$, as it satisfies $f(0)=0$, with $\alpha =\frac{e}{e^a-1}$; up to this point I am satisfied with my steps. Now using $f(x)+f''(x)=2f'(x)$,
$\alpha e^{ax}-\alpha+\alpha a^2e^{ax}=2\alpha ae^{ax} $ which is equal to $ e^{ax}-1+a^2e^{ax}=2 ae^{ax} $
I am not able to use it in my formula
AI: hint
Let $$g(x)=f(x)-f'(x)$$
the equation becomes
$$g'(x)=g(x)$$
then
$$g(x)=Ce^{x}$$
thus
$$f'(x)=f(x)-Ce^{x}$$
This is a first-order linear equation: multiplying by the integrating factor $e^{-x}$ gives $\left(f(x)e^{-x}\right)'=-C$, so
$$f(x)=\lambda e^x-Cxe^{x}$$
$$f'(x)=\lambda e^x-C(1+x)e^{x}$$
$$f''(x)=2f'(x)-f(x)$$
It is up to you to find $\lambda$ and $C$ such that
$$f(0)=0\;\; and \;\; f(1)=e$$
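A SymPy check of the whole problem (a sketch, assuming a recent SymPy with `ics` support in `dsolve`):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
# solve f + f'' = 2 f' with f(0) = 0, f(1) = e
sol = sp.dsolve(sp.Eq(f(x) + f(x).diff(x, 2), 2 * f(x).diff(x)),
                f(x), ics={f(0): 0, f(1): sp.E}).rhs
print(sol)                        # x*exp(x)
print(sol.diff(x).subs(x, -1))    # f'(-1) = 0
print(sol.diff(x, 2).subs(x, 2))  # f''(2) = 4*exp(2)
```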
|
H: A conditional probability question: Three identical jewelry boxes with two drawers each. Each drawer contains a watch
The following problem is from the book "Probability and Statistics" which is part of the Schaum's outline series. It can be found on page 30 and is problem number 1.57.
Problem:
Each of three identical jewelry boxes has two drawers. In each drawer of the first box there is a gold watch. In each drawer of the second box there is a silver watch. In one drawer of the third box there is a gold watch, while in the other there is a silver watch. If we select a box at random, open one of the drawers and find it contains a silver watch, what is the probability that the other drawer has the gold watch?
Answer:
Let $A_1$ be the event that we selected the first box, $A_2$ the event that we selected the second box, and $A_3$ the event that we selected the third box. Let $A$ be the event that the drawer we open contains a silver watch. Now my goal will be to find the posterior probabilities $P(A_1\mid A)$, $P(A_2\mid A)$ and $P(A_3\mid A)$. Given these, I can find the probability that the other drawer has the gold watch. Since the first box contains only gold watches, the first box was not selected in this case.
\begin{align*}
P( A_2 | A ) &= \frac{ P(A_2)P( A|A_2) } { \sum_{\substack{j=1 }}^n { P(A_j)P(A|A_j) } } \\
P(A_1) &= P(A_2) = P(A_3) = \frac{1}{3} \\
P( A|A_1) &= 0 \\
P( A|A_2) &= 1 \\
P( A|A_3) &= \frac{1}{2}
\end{align*}
\begin{align*}
P( A_2 | A ) &= \frac{ \left( \frac{1}{3} \right) (1) } { \sum_{\substack{j=1 }}^n { P(A_j)P(A|A_j) } } \\
3 P( A_2 | A ) &= \frac{ 1 } { P(A_1)P(A|A_1) + P(A_2)P(A|A_2) + P(A_3)P(A|A_3) } \\
3 P( A_2 | A ) &=
\frac{ 1 } { \left( \frac{1}{3} \right) P(A|A_1) + \left( \frac{1}{3} \right) P(A|A_2)
+ \left( \frac{1}{3} \right) P(A|A_3) } \\
3 P( A_2 | A ) &= \frac{ 3 } { P(A|A_1) + P(A|A_2) + P(A|A_3) } \\
P( A_2 | A ) &= \frac{ 1 } { 0 + 1 + \frac{1}{2} } \\
P( A_2 | A ) &= \frac{3}{2} \\
P( A_3 | A ) &= \frac{ P(A_3)P( A|A_3) } { \sum_{\substack{j=1 }}^n { P(A_j)P(A|A_j) } } \\
P( A_3 | A ) &= \frac{ P(A_3)P( A|A_3) } { P(A_1)P ( A | A_1 ) + P(A_2)P ( A | A_2 ) + P(A_3)P ( A | A_3 ) } \\
P( A_3 | A ) &= \frac{ \left( \frac{1}{3} \right) \left( \frac{1}{2} \right) }
{\left( \frac{1}{3} \right)( 0 ) + \left( \frac{1}{3} \right)(1) + \left( \frac{1}{3} \right) \left( \frac{1}{2 }\right) } \\
P( A_3 | A ) &= \frac{ \frac{1}{6} } { \frac{3}{6} } \\
P( A_3 | A ) &= \frac{1}{3} \\
\end{align*}
\begin{align*}
P(A) &= P(A_1)P(A|A_1) + P(A_2)P(A|A_2) + P(A_3)P(A|A_3) \\
P(A) &= \left( \frac{1}{3 }\right) (0) + \frac{1}{3} \left( 1 \right) + \frac{1}{3} \left( \frac{1}{2}\right) \\
P(A) &= \frac{1}{3} + \frac{1}{6} \\
P(A) &= \frac{1}{2}
\end{align*}
The book's answer is $\frac{1}{3}$. Where did I go wrong? I am also thinking I did a lot of unnecessary work but I am not
sure about that.
AI: I am also thinking I did a lot of unnecessary work but I am not sure about that.
You have already chosen a silver drawer, so the gold drawers become irrelevant.
The issue is now: of the three relevant drawers, what is the probability that your random choice is the one drawer that is paired with gold?
The answer is obviously $1/3$, which is exactly your $P(A_3 \mid A)$. (The only arithmetic slip in your work is at the end of the $P(A_2 \mid A)$ computation: $\frac{1}{0+1+\frac{1}{2}} = \frac{2}{3}$, not $\frac{3}{2}$.)
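A short simulation (a sketch, not part of the original answer) confirms the $1/3$:

```python
import random

boxes = [("G", "G"), ("S", "S"), ("G", "S")]
hits = total = 0
for _ in range(100_000):
    box = random.choice(boxes)      # pick a box at random
    i = random.randrange(2)         # open a random drawer
    if box[i] == "S":               # condition on seeing silver
        total += 1
        hits += box[1 - i] == "G"   # is the other drawer gold?
print(hits / total)                 # ≈ 1/3
```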
|
H: Weak convergence implies norm inequality.
When I was reading Mathematical Methods in Quantum Mechanics With Applications to Schrodinger Operators by Gerald Teschl. Link here:
http://www.ams.org/bookstore-getitem?item=gsm-157.
I found in page 56 that if $\psi_n \rightharpoonup \psi$ on a Hilbert space $H$ then
$$\liminf \langle\psi,\psi_n\rangle \leq \|\psi\|\liminf \|\psi_n\|$$
And I don't know why. It looks like Fatou's theorem, but not exactly.
What am I missing here?
AI: This is just Cauchy-Schwarz. For any $n$
$$|\langle\psi,\psi_n\rangle|\leq\|\psi\|\|\psi_n\|.$$
Taking the $\liminf$ of both sides gives the result.
|
H: Determine values of $\theta$ for which $\arg(z-4+2i)=\theta$ and $|z+6+6i|=4$ have no common solutions
So there is this question that's asking for a "range of values for theta from $-\pi$ to $\pi$, for which $\arg(z-4+2i)=\theta$ and $|z+6+6i|=4$ have no common solutions."
I'm not really sure how to do it, as my teacher didn't explain this sort of question to us at all. I just don't know where to start or what exactly to equate.
Any help would be appreciated. Thanks in advance!
AI: Hint: Let $a=-6-6i,\,b=4-2i$ be points on the complex plane. $|z-a|=4$ is a circle of radius $4$ and center at $a$. What is $\arg(z-b)$?
If you start with all the values of $z$ on the circle and determine somehow the range of $\arg(z-b)$ for all these $z$ (say the range is the set $B$), what will the answer be?
|
H: How to find values of variables for which $f(x)= x^3+3x^2+4x+b \sin x + c \cos x$ is one-one?
I have a question that tells me to find the range of $b^2+c^2$ for which $$f(x)= x^3+3x^2+4x+b \sin x + c \cos x,$$ $\forall x \in R$ is one-one function.
[The answer says that $b^2+c^2 \leq 1$]
My book does this question as follows:
$f'(x) = 3x^2+6x+4+b \cos x -c \sin x$
Now, the book states that the only possibility for $f(x)$ to be one-one is that $f'(x) \geq 0$, i.e., that it is monotonically increasing. I have a doubt here: why can't $f'(x) \leq 0$, i.e., a monotonically decreasing function, also satisfy the criterion that the function is one-one?
Why can the function only be monotonically increasing and not decreasing for all $x$ in $\mathbb{R}$? I know that the parabola given by $f'(x)$ opens upwards, but depending on the values of $b$ and $c$, can't the derivative of $f(x)$ be negative? Why is it said that for a one-one function the only possibility is that the function is increasing?
AI: By the Cauchy-Schwarz inequality, $$f'(x)=3x^2+6x+4+b\cos{x}-c\sin{x}\geq$$
$$\geq3(x+1)^2+1-\sqrt{(b^2+c^2)(\cos^2x+\sin^2x)}\geq1-\sqrt{b^2+c^2}.$$
We see that for $b^2+c^2\leq1$ we have $f'(x)\geq0.$
From here it is easy to see that $b^2+c^2\leq1$ is the condition for $f$ to be increasing.
$f$ cannot be decreasing, because for any $b$ and $c$ we have $$\lim\limits_{x\rightarrow+\infty}f(x)=+\infty$$ and $$\lim\limits_{x\rightarrow-\infty}f(x)=-\infty$$
|
H: Is the limit of an increasing function a bound?
I have a function $f(x)$ which is defined for positive $x$. I know that $f'\geq 0$ and $\lim_{x \to \infty} f(x) = m$. Can I say that $f(x)$ is bounded by $m$?
AI: No. "$f(x)$ is bounded by $m$" (for $m \geq 0$) means that all values of $f$ have magnitude no greater than $m$. That is, $-m \leq f(x) \leq m$ for all $x$ in the domain of $f$.
You can say $f$ is bounded above by $m$. (In this Question, that claim depends on the fact that $f$ is monotonically increasing: if $f(x_0)>m$ for some $x_0$, then $f(x)\geq f(x_0)>m$ for all $x>x_0$, contradicting the limit. In general, a function can oscillate around its limit at infinity; for example, $\frac{\sin x}{x}$ has limit $0$ as $x \rightarrow \infty$, but passes above and below $0$, so $0$ bounds this function neither above nor below.)
|
H: Examples of functions satisfying two particular properties
Give some examples of a general function $f(a,b)$ with $b > 0$ satisfying the following two properties:
$f(a,b) > a$;
$f(a_1, b) - f(a_2, b) \leq a_1 - a_2$.
Obviously $f(a, b) = a + g(b)$ with $g(b) > 0$ satisfies those two properties. Are there any other examples except for $f(a, b) = a + g(b)$?
AI: No, there are no other examples.
If (2) is true for all $a_1$ and $a_2$, then it must be true with $a_1$ and $a_2$ interchanged, i.e.
$$ f(a_2,b) - f(a_1,b) \le a_2 - a_1$$
Combined with (2), this says $f(a_1, b) - f(a_2, b) = a_1 - a_2$,
and thus $f(a,b) - a$ is the same for all $a$. If we let $f(a,b) - a = g(b)$, that says $f(a,b) = a + g(b)$.
|
H: Prove that for any positive integer $a,$ $a^{561} \equiv a \pmod{561}.$
Prove that for any positive integer $a$, $a^{561} \equiv a \pmod{561}$.
(Hence, $561$ is a pseudoprime with respect to any base. Such a number is called a Carmichael number.)
This obviously works for $1$ but how do I find $2^{561}$ or any other number to the power of $561?$
AI: Since $$a^{561}-a=\left(a^{560}-1\right)a$$ and $2$, $10$, $16$ all divide $560$, we see that $a^{561}-a$ is divisible (as a polynomial in $a$) by $a^3-a$, by $a^{11}-a$ and by $a^{17}-a$. By Fermat's little theorem these are divisible by $3$, by $11$ and by $17$ respectively, so $a^{561}-a$ is divisible by $3\cdot11\cdot17=561$.
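For the skeptical, a one-line brute-force check (not part of the proof):

```python
# verify a^561 ≡ a (mod 561) for every residue a
print(all(pow(a, 561, 561) == a for a in range(561)))  # True
```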
|
H: How to prove that f(w)=0
Let $I=[a,b]$, let $f:I\rightarrow\mathbb{R}$ be continuous on $I$, and assume $f(a) < 0$ and $f(b)>0$. Let $W=\{x\in I:f(x)<0\}$, and let $w:=\sup W$. Prove that $f(w)=0$.
My work: since $w=\sup W$, either $w\in W$ or $w$ is a limit point of $W$. If $w$ is in $W$ then the claim cannot hold, so $w$ is a limit point of $W$ and $f(w)\geq 0$. Since $w$ is a limit point, there exists a sequence $(x_n)$ in $W$ that converges to $w$. But after that I cannot proceed.
AI: $$w=\sup W\implies w=\lim_{n\to+\infty}w_n\;\;$$
with $ w_n\in W$.
$$f(w_n)<0\implies \lim_{n\to+\infty}f(w_n)\le 0$$
$$\implies f\left(\lim_{n\to+\infty}w_n\right)\le 0 \quad\text{(by continuity of }f\text{)}$$
$$\implies \color{red}{f(w)\le 0}$$
On the other hand, for $n$ large enough, $f(w+\frac 1n)\ge 0$,
because $w+\frac 1n \notin W$ (note that $w<b$, since $f>0$ on a neighbourhood of $b$ by continuity, so $w+\frac 1n\in I$ for large $n$);
thus
$$\lim_{n\to+\infty}f(w+\frac 1n)\ge 0$$
$$\implies f(\lim_{n\to+\infty}(w+\frac 1n))\ge 0$$
$$\implies \color{red}{f(w)\ge 0}$$
|
H: Probability of balls $(0,\dots,9)$ never being drawn in $10$ draws from $10$ balls with replacement
I want to calculate the expected value and variance for the random variable
$$X = \text{number of balls which were never drawn}$$
when drawing $10$ times, with replacement, from $10$ different balls.
To calculate the expected value and variance I wanted to first find a closed formula for the probability distribution of $X$ on the numbers $0,\dots,9$. I tried to reduce it to a product of binomial distributions, without success.
How can this problem be modeled? Is there another way to calculate the expected value or variance?
AI: If you use indicator random variables you can calculate expectation and variance without getting a pmf.
$$\text{Let } I_i = \begin{cases} 1: & \text{ball $i$ is never drawn} \\ 0: &\text{ball $i$ is drawn} \end{cases} \\ \mathsf E[I_i] = P(I_i = 1) = P(\text{ball $i$ is never drawn}) = .9^{10} \\ \mathsf E[X] = \mathsf E[I_0 + I_1 + \cdots + I_9] = 10\left(.9^{10}\right)$$
$$\mathrm{Var}(I_i) = P(I_i=1)(1-P(I_i=1)) = .9^{10}\left(1-.9^{10}\right) \\ \forall \ i \ne j, \ \mathrm{Cov}(I_i,I_j) = .8^{10}-(.9^{10})^2 \\ \mathrm{Var}(X) = \mathrm{Var}(I_0 + I_1 + \cdots I_9) = \sum_{i=0}^9\mathrm{Var}(I_i) + \sum_{i\ne j}\mathrm{Cov}(I_i,I_j) \\ =10(.9^{10})(1-.9^{10}) + 90(.8^{10}-.9^{20})$$
Reference: https://en.wikipedia.org/wiki/Indicator_function#Mean%2C_variance_and_covariance
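A Monte Carlo sanity check of both formulas (a sketch, not part of the original answer):

```python
import random

trials = 200_000
xs = []
for _ in range(trials):
    drawn = {random.randrange(10) for _ in range(10)}  # 10 draws with replacement
    xs.append(10 - len(drawn))                         # balls never drawn
mean = sum(xs) / trials
var = sum((x - mean) ** 2 for x in xs) / trials
print(mean, 10 * 0.9**10)                                            # ≈ 3.487
print(var, 10 * 0.9**10 * (1 - 0.9**10) + 90 * (0.8**10 - 0.9**20))  # ≈ 0.992
```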
|
H: What is a simple example of a reduced, noetherian, local ring of dimension $0$ which is not Gorenstein?
As the title says, I am looking for a noetherian local ring $R$ of dimension 0 which is reduced (and thus Cohen-Macaulay) but not Gorenstein.
Due to Bruns, Herzog $-$ Cohen-Macaulay Rings Theorem 3.2.10 every noetherian local ring which is not Gorenstein fails to be Cohen-Macaulay or fails to be of type 1. Since every reduced ring of dimension $\leq 1$ is Cohen-Macaulay (see Stacks-Reference), we are thus looking for a noetherian, reduced local ring of dimension 0 that fails to be of type 1.
What constitutes a simple example of such a ring?
I am grateful for any kind of help or input! Cheers!
AI: If you mean Krull dimension $0$, then I guess there is no example.
A reduced ring with Krull dimension $0$ is von Neumann regular, and a local VNR ring is a field.
The reducedness condition really kills things. $\mathbb{F}_2[x,y]/(x,y)^2$ satisfies everything you said except that it is not reduced: its socle $(x,y)/(x,y)^2$ is two-dimensional, so the ring has type $2$ and is not Gorenstein.
|
H: Annihilator, vector space
Let $\dim_K(V)\geq 1$ and $M \subset V$ with $\emptyset \neq M \subsetneq V$.
Can $M$ exist with $M^0=V$? $M^0$ refers to the annihilator of $M$ and $V$ is a finite dimensional vector space.
AI: Edit: the spoiler was wrong! The set $M$ is assumed to be non-empty, but it could very well be $\{0\}$, in which case every $\varphi \in V^\ast$ annihilates $M$.
Hint: note that your question amounts to proving or disproving that, given a non-empty subset $M \subsetneq V$, there exist $\varphi \in V^\ast$ and $m \in M$ such that $\varphi(m) \neq 0$.
This shows that indeed no such $M$ exists if $M \neq \{0\}$: in that case $M$ contains a nonzero vector $x$. The set $\{x\}$ is linearly independent and so it can be extended to a basis $B = \{x,y_1, \cdots, y_n\}$ of $V$. Now take $\varphi$ to be the first element of the dual basis, i.e. define $\varphi : V \to \Bbbk$ such that $\varphi(y_j) = 0$ and $\varphi(x) = 1$.
|