H: Example of function where improper integral doesn't exist
Let $f \colon [1,\infty) \to \mathbb{R}$ be continuous, with $\lim_{x \to \infty}f(x) =0$. Does the integral \begin{equation*}\int_{1}^{\infty}\frac{f(x)}{x}\mathop{}\!\mathrm{d}x\end{equation*} necessarily converge?
I think the answer is "no", because from $\lim_{x \to \infty}f(x) = 0$, the best bound we can get is that $\left\lvert f(x) \right\rvert \leq \epsilon$ for $x \geq \delta$ (where $\delta$ is some number $\geq 1$), but $\int_{1}^{\infty}\frac{\epsilon}{x}\mathop{}\!\mathrm{d}x$ doesn't converge. However, I couldn't find a specific example of $f$ for which the integral doesn't converge.
AI: Take $f(x) = \frac{1}{\ln (x+1)}$. It has limit $0$ at infinity and is continuous. But $\int_1^{\infty} \frac{1}{x\ln (x+1)} \mathrm{d}x = +\infty$.
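For completeness, here is one standard way (not in the original answer) to see that this integral diverges, by comparison:
$$\int_1^{R}\frac{\mathrm{d}x}{x\ln(x+1)}\ \ge\ \int_1^{R}\frac{\mathrm{d}x}{(x+1)\ln(x+1)}\ =\ \ln\ln(R+1)-\ln\ln 2\ \longrightarrow\ \infty\quad(R\to\infty).$$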
|
H: Proving $\pi=(27S-36)/(8\sqrt{3})$, where $S=\sum_{n=0}^\infty\frac{\left(\left\lfloor\frac{n}{2}\right\rfloor!\right)^2}{n!}$
I have to prove that:
$$\pi=\frac{27S-36}{8\sqrt{3}}$$
where I know that $$S=\sum_{n=0}^\infty\frac{\left(\left\lfloor\frac{n}{2}\right\rfloor!\right)^2}{n!}$$
Where do I get started?
AI: Assignment:
Prove:
$$\pi=\frac{27\mathcal{S}-36}{8\sqrt{3}}$$
Where:
$$\mathcal{S}:=\sum_{\text{n}\space\ge\space0}\frac{\left(\left\lfloor\frac{\text{n}}{2}\right\rfloor!\right)^2}{\text{n}!}$$
where $\left\lfloor x\right\rfloor$ is the floor function.
Solution:
First, let's split the sum into its odd and even parts:
\begin{equation}
\begin{split}
\mathcal{S}&=\frac{\left(0!\right)^2}{0!}+\frac{\left(0!\right)^2}{1!}+\frac{\left(1!\right)^2}{2!}+\frac{\left(1!\right)^2}{3!}+\frac{\left(2!\right)^2}{4!}+\frac{\left(2!\right)^2}{5!}+\dots\\
\\
&=\underbrace{\sum_{\text{n}\space=\space0}^\infty\frac{\text{n}!\cdot\text{n}!}{\left(2\text{n}+1\right)!}}_\text{odd part}+\underbrace{\sum_{\text{n}\space=\space0}^\infty\frac{\text{n}!\cdot\text{n}!}{\left(2\text{n}\right)!}}_\text{even part}\\
\\
&=\sum_{\text{n}\space=\space0}^\infty\frac{\text{n}!\cdot\text{n}!}{\left(2\text{n}+1\right)!}+\sum_{\text{n}\space=\space0}^\infty\frac{\text{n}!\cdot\text{n}!}{\left(2\text{n}+1\right)!}\cdot\left(2\text{n}+1\right)
\end{split}\tag1
\end{equation}
Now, let's recall that the Gamma function satisfies:
$$\Gamma\left(\text{n}+1\right)=\text{n}!\tag2$$
$\forall\text{n}\in\mathbb{N}_0$.
We can now re-write the factorials using the Gamma function and combine the sums:
\begin{equation}
\begin{split}
\mathcal{S}&=\sum_{\text{n}\space=\space0}^\infty\frac{\Gamma\left(\text{n}+1\right)\cdot\Gamma\left(\text{n}+1\right)}{\Gamma\left(2\text{n}+2\right)}+\sum_{\text{n}\space=\space0}^\infty\frac{\Gamma\left(\text{n}+1\right)\cdot\Gamma\left(\text{n}+1\right)}{\Gamma\left(2\text{n}+2\right)}\cdot\left(2\text{n}+1\right)\\
\\
&=\sum_{\text{n}\space=\space0}^\infty\left\{\frac{\Gamma\left(\text{n}+1\right)\cdot\Gamma\left(\text{n}+1\right)}{\Gamma\left(2\text{n}+2\right)}+\frac{\Gamma\left(\text{n}+1\right)\cdot\Gamma\left(\text{n}+1\right)}{\Gamma\left(2\text{n}+2\right)}\cdot\left(2\text{n}+1\right)\right\}\\
\\
&=\sum_{\text{n}\space=\space0}^\infty\frac{\Gamma\left(\text{n}+1\right)\cdot\Gamma\left(\text{n}+1\right)}{\Gamma\left(2\text{n}+2\right)}\cdot\left(2\text{n}+2\right)
\end{split}\tag3
\end{equation}
Now, recall that the Beta function is given by:
$$\beta\left(x,\text{y}\right)=\frac{\Gamma\left(x\right)\Gamma\left(\text{y}\right)}{\Gamma\left(x+\text{y}\right)}\tag4$$
Applying this to our sum gives:
$$\mathcal{S}=\sum_{\text{n}\space=\space0}^\infty\left(2\text{n}+2\right)\beta\left(\text{n}+1,\text{n}+1\right)\tag5$$
The integral representation of the Beta function is:
$$\beta\left(x+1,\text{y}+1\right)=\int\limits_0^1 t^x\left(1-t\right)^\text{y}\space\text{d}t\tag6$$
Applying this to the sum gives:
$$\mathcal{S}=\sum_{\text{n}\space=\space0}^\infty\left(2\text{n}+2\right)\int\limits_0^1 t^\text{n}\left(1-t\right)^\text{n}\space\text{d}t=2\int\limits_0^1\sum_{\text{n}\space=\space0}^\infty\left(\text{n}+1\right)\left(t\left(1-t\right)\right)^\text{n}\space\text{d}t\tag7$$
Let's recall that:
$$\sum_{\text{k}\space=\space0}^\infty\left(\text{k}+1\right)x^\text{k}=\frac{1}{\left(1-x\right)^2}\tag8$$
This can easily be proven using the Geometric series and differentiating both sides.
Using that we can re-write $(7)$:
$$\mathcal{S}=2\int\limits_0^1\frac{1}{\left(1-t\left(1-t\right)\right)^2}\space\text{d}t\tag9$$
Using a substitution $t-\frac{1}{2}=\frac{\sqrt{3}}{2}\cdot\tan\left(\theta\right)$, it is not hard to prove that:
$$\mathcal{S}=\frac{4}{3}+\frac{8\pi\sqrt{3}}{27}\tag{10}$$
which I leave to you, and that proves your result, because:
$$\mathcal{S}=\frac{4}{3}+\frac{8\pi\sqrt{3}}{27}\space\Longleftrightarrow\space\pi=\frac{27\mathcal{S}-36}{8\sqrt{3}}\tag{11}$$
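Not part of the original answer, but a quick numerical sanity check of the identity is easy in plain Python (the terms decay fast, so a short partial sum suffices):

```python
from math import factorial, sqrt, pi

# partial sum of S = sum_{n >= 0} (floor(n/2)!)^2 / n!
S = sum(factorial(n // 2) ** 2 / factorial(n) for n in range(60))

print((27 * S - 36) / (8 * sqrt(3)))  # ~3.141592653589793
print(pi)
```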
|
H: Exclude $x$ from this system of equations.
How do you go about excluding $x$ from the system of equations?
\begin{cases}x^3-xy-y^3+y=0\\x^2+x-y^2=1\end{cases}
AI: The resultant of the two polynomials $x^3 - xy - y^3 + y$ and $x^2 + x - y^2 - 1$ with respect to $x$ is $5\,{y}^{5}-7\,{y}^{4}+6\,{y}^{3}-2\,{y}^{2}-y-1$. That is, the equation with $x$ removed is
$5\,{y}^{5}-7\,{y}^{4}+6\,{y}^{3}-2\,{y}^{2}-y-1=0$.
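If you have a computer algebra system handy, the resultant is easy to check; a minimal sketch with sympy (assuming sympy is available):

```python
from sympy import symbols, resultant, expand

x, y = symbols('x y')
f = x**3 - x*y - y**3 + y
g = x**2 + x - y**2 - 1

# eliminate x: the resultant vanishes exactly when f and g share a root in x
print(expand(resultant(f, g, x)))  # 5*y**5 - 7*y**4 + 6*y**3 - 2*y**2 - y - 1
```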
|
H: Estimate definite integral by Maclaurin series with an error at most 10^-1
$\int_{0}^{1} \frac{\sinh x}{x}\mathrm{d}x$ with an error at most $10^{-1}$.
I tried to find an expression for the error in order to decide the order $n$, but I can't express it because this Maclaurin series is not an alternating series.
$$\frac{\sinh x}{x}=\sum_{n=0}^{\infty} \frac{x^{2n}}{(2n+1)!}$$
How can I estimate this integral within error $10^{-1}$ by using Maclaurin series?
AI: You are using $$ \int_0^1 \frac{\sinh(x)}{x}\; dx = \sum_{n=0}^\infty \int_0^1 \frac{x^{2n}}{(2n+1)!}\; dx = \sum_{n=0}^\infty \frac{1}{(2n+1)(2n+1)!}$$
If you stop at $n=m$, the error will be
$$ E_m = \sum_{n=m+1}^\infty \frac{1}{(2n+1)(2n+1)!}$$
Since you only need an error less than $10^{-1}$, you don't have to be very sophisticated in bounding $E_m$. Take some $m$ for which the first omitted term is less than $1/10$, and see if you can bound the rest by a geometric series.
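For instance, one possible way to finish (with $m=0$): the terms $a_n=\frac{1}{(2n+1)(2n+1)!}$ satisfy $\frac{a_{n+1}}{a_n}\le\frac{1}{10}$ for $n\ge1$, so
$$E_0=\sum_{n=1}^{\infty}\frac{1}{(2n+1)(2n+1)!}\le\frac{1}{18}\sum_{j=0}^{\infty}10^{-j}=\frac{1}{18}\cdot\frac{10}{9}<10^{-1},$$
and the estimate $\int_0^1\frac{\sinh x}{x}\,dx\approx 1$ is already within the required error.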
|
H: Given $F(x)=\int\limits_x^{x^2}\frac{\sin t}{t}dt$, find $\lim_{x\rightarrow 0}F(x)$ and $\lim_{x\rightarrow 0}F'(x)$
Given $F(x)=\int\limits_x^{x^2}\frac{\sin t}{t}dt$, find $\lim_{x\rightarrow 0}F(x)$ and $\lim_{x\rightarrow 0}F'(x)$
Assume the options are $\{-1, 0, 1\}$.
Intuitively I'm pretty sure the answer is $\lim_{x\rightarrow 0}F(x)=0$ , because $\frac{\sin t}{t}$ approaches 1 and we look at a smaller segment as $x$ approaches $0$, so the area under the graph would approach $0$.
Also, I'm pretty sure $\lim_{x\rightarrow 0}F'(x)=-1$, because $x^2 < x$ for a small enough $x$ so we would get minus the value of $\frac{\sin 0}{0}$.
I'm having trouble formalizing this, would appreciate help with finding more precise arguments, and also of course let me know if I'm wrong.
AI: I will start with the second limit. You can easily evaluate the derivative of $F$ as follows: let $g(t):= \frac{\sin t}{t}$ and let $G$ be an antiderivative of $g$, so that $F(x)=G(x^2)-G(x)$. Then, computing the derivative of the composite function, you obtain:
$$ F'(x)=2x g(x^2) - g(x) = 2x \cdot \frac{\sin x^2}{x^2} - \frac{\sin x}{x} $$
Now, recall that:
$$ \lim_{x \rightarrow 0} \frac{\sin x}{x} = 1$$
This gives you:
$$ \lim_{x \rightarrow 0} F'(x) = 2 \cdot 0 \cdot 1 - 1 = -1 $$
For the other limit, consider the MacLaurin series of the integrand, and integrate term by term. It is not difficult to see that the result of the limit is $0$.
EDIT: we have the Taylor series:
$$ \frac{\sin t}{t} = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} t^{2n} $$
Now integrate term by term (it is known that this is possible in a case like this one):
$$ F(x) = \int_{x}^{x^2} \frac{\sin t}{t} dt = \sum_{n=0}^{\infty} \int_{x}^{x^2} \frac{(-1)^n}{(2n+1)!} t^{2n} dt = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)! (2n+1)} x^{2n+1} (x^{2n+1} - 1) $$
Then, you can see that, as $x \rightarrow 0$, $F(x) \rightarrow 0$.
EDIT: another solution. Note that $g(t)=\frac{\sin t}{t}$ is bounded. To see this, recall that $|\sin t | \leq |t|$, which implies that $|g(t)| \leq 1$, and thus the function is indeed bounded. Then, we can write:
$$ \left |\int_{x}^{x^2} \frac{\sin t}{t} dt \right | \leq |x^2 - x| \cdot 1 = |x^2 - x| $$
When $x \rightarrow 0$, the RHS goes to $0$, and thus:
$$ \lim_{x \rightarrow 0} F(x) = 0 $$
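As an aside (not part of the original answer), both limits are easy to check numerically, e.g. with scipy; a small sketch:

```python
import numpy as np
from scipy.integrate import quad

def F(x):
    # F(x) = integral of sin(t)/t from x to x^2
    return quad(lambda t: np.sin(t) / t, x, x * x)[0]

h = 1e-5
for x in [0.1, 0.01, 0.001]:
    dF = (F(x + h) - F(x - h)) / (2 * h)  # central-difference estimate of F'(x)
    print(x, F(x), dF)                    # F(x) -> 0 and F'(x) -> -1
```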
|
H: Entropy of the Uniform Mixture of Discrete Probability Distributions
Consider the following inequality:
\begin{equation}
H\left(\frac{1}{3}p_{1} + \frac{1}{3}p_{2} + \frac{1}{3}p_{3}\right) \geq H(0.5p_{1} + 0.5p_{2})
\end{equation}
where H(.) denotes the Shannon entropy of the probability distribution in its argument. Does this hold for discrete probability distributions $p_{1}, p_{2}$ and $p_{3}$ over the same alphabet such that the distributions are permutations of each other (i.e. such that $H(p_{1}) = H(p_{2}) = H(p_{3})$)? Furthermore, can this be generalised to say that the entropy of a larger uniform mixture of permutations of discrete probability distributions always increases (or maintains) the entropy?
AI: Consider domain $\{0,1\}$, and trivial point masses $p_1,p_2,p_3$ as follows: $p_1(0)=p_3(0)=1$ and $p_2(1)=1$. Then
$$
H\left(\frac{p_1+p_2}{2}\right) = h_2(1/2) =\log 2
$$
but
$$
H\left(\frac{p_1+p_2+p_3}{3}\right) = H\left(\frac{2}{3}p_1+\frac{1}{3}p_2\right) = h_2(1/3) < \log 2
$$
so your inequality does not hold, even for a domain of size $2$.
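A quick numerical check of this counterexample (an addition, plain Python):

```python
from math import log

def H(p):  # Shannon entropy in nats, ignoring zero entries
    return -sum(q * log(q) for q in p if q > 0)

p1, p2, p3 = [1, 0], [0, 1], [1, 0]
mix2 = [(a + b) / 2 for a, b in zip(p1, p2)]
mix3 = [(a + b + c) / 3 for a, b, c in zip(p1, p2, p3)]
print(H(mix3), H(mix2))  # ~0.6365 < ~0.6931 = log 2, so the inequality fails
```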
|
H: Why is the directional derivative not defined for $v_1=0$?
I have some trouble understanding how my prof. got to this conclusion.
I'm asked to find the directional derivative at point $\zeta=(0,0)$ of $$\left\{
\begin{array}{c} f(x,y)=\frac{x+xy}{\sqrt{x^2+y^2}} ,\forall(x,y)\neq(0,0) \\f(0,0)=0 \end{array}
\right.$$
Applying the definition of the directional derivative and using $v_1^2+v_2^2=1$, I get:
$$\lim_{h\to 0} \frac{f(hv_1,hv_2)-f(0,0)}{h} =\lim_{h\to 0}\frac{hv_1+h^2v_1v_2}{h^2\sqrt{v_1^2+v_2^2}}=\lim_{h\to 0} \frac{v_1+hv_1v_2}{h},$$ which does not exist.
Given this result, my professor now says that at $f(0,0)$ the directional derivative is not defined.
My question now is:
Shouldn't the directional derivative for $v_1=0$ exist? Because if that's the case, I have:
$$\lim_{h\to 0} \frac{f(0,hv_2)-f(0,0)}{h}=\lim_{h\to 0}0=0$$
Which would imply that that the directional derivative does exist and is not undefined.
Does this make any sense to you guys or is my logic and understanding of directional derivatives just flawed?
AI: You are right, the directional derivative in direction $(0,1)$ exists and is zero. This directional derivative is nothing else than $\dfrac{\partial f}{\partial y} (0,0)$.
However, strictly speaking it does not make sense to say that the directional derivative exists at some point - directional derivatives always depend on a given direction. I guess your professor wants to say that the directional derivative of $f$ exists at some point $(a,b)$ if directional derivatives in all directions exist. This is not the case at $(0,0)$.
|
H: Show that if $\phi$ is an odd function on $(-l,l)$, its full Fourier series on $(-l,l)$ only has sine terms.
Show that if $\phi$ is an odd function on $(-l,l)$, its full Fourier series on $(-l,l)$ only has sine terms.
The full Fourier series is defined as $$\phi(x)=\frac{1}{2}A_0+\sum_{n=1}^{\infty}\left(A_n \cos \frac{n \pi x}{l}+B_n \sin \frac{n \pi x}{l}\right)$$ for $-l <x< l$, and $$A_n=\frac{1}{l} \sum_{-l}^l \phi(x) \cos \frac{n \pi x}{l}\, dx\quad(n=0,1,2,\dots)$$ $$B_n=\frac{1}{l} \sum_{-l}^l \phi(x) \sin \frac{n \pi x}{l}\, dx\quad(n=0,1,2,\dots)$$
I also know that an odd function is like this $\phi(-x)=-\phi(x)$, but am not sure how to prove it.
AI: The result holds for both the discrete version of the coefficients and the ones involving integrals. Consider the term $A_n$. We will show that it is always $0$. First of all, write:
$$ {A_n} = \frac{1}{l} \sum_{x=-l}^{-1} \phi(x) \cos(\frac{n \pi x}{l}) + \frac{1}{l}\phi(0) + \frac{1}{l} \sum_{x=1}^{l} \phi(x) \cos(\frac{n \pi x}{l}) $$
Since $\phi$ is odd, $\phi(0)=-\phi(0)$, which implies that $\phi(0)=0$. Moreover, recall that $\cos(\cdot)$ is an even function, so that we can write:
$$ A_n = \frac{1}{l} \sum_{x=1}^{l} -\phi(x) \cos(\frac{n \pi x}{l}) + 0 + \frac{1}{l} \sum_{x=1}^{l} \phi(x) \cos(\frac{n \pi x}{l}) = 0$$
And the result is thus proved, because the coefficients of the cosines in the series are always $0$. The same can be proved also for integrals, using the fact that $\phi$ is odd.
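For the record, here is the integral version of the same computation (substituting $u=-x$ on $[-l,0]$ and using that cosine is even and $\phi$ is odd):
$$A_n=\frac1l\int_{-l}^{l}\phi(x)\cos\frac{n\pi x}{l}\,dx=\frac1l\int_{0}^{l}\big(\phi(x)+\phi(-x)\big)\cos\frac{n\pi x}{l}\,dx=0.$$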
|
H: Niemytzki continuous function
In the Wikipedia article, they state that this function, whose image is in $[0,1]$ and which is defined on $X$ with the Niemytzki topology, is continuous:
Therefore, the preimages of sets of the form $(x-r, x+r) $ with $ x \in R $ and $r>0$ (that are exactly the open sets in R with the standard topology), have to be of the form:
I don't see how the preimages satisfy this: for instance, $(x_0,0)$ will be in none of the preimages, as zero is not in any open set. Therefore $y$ has to be positive and the preimages are balls contained in $X$. That is not clear to me.
AI: No, the preimages of open intervals don’t have to be of the form $U_r(x_0,y_0)$: they just have to be unions of sets of that form. But that’s the hard way to check that $f_{r,x_0}$ is continuous. Let $H=\{\langle x,y\rangle\in\Bbb R\times\Bbb R:y>0\}$. It’s not too hard to check that $f_{r,x_0}\upharpoonright H$ is continuous in the Euclidean topology on $H$ and hence in the Niemytzki topology, since they agree on $H$, so all that remains is to check that $f_{r,x_0}$ is continuous on $\Bbb R\times\{0\}$. This is also pretty easy if you use the fact that the Niemytzki plane $X$ is first countable and check that if $\langle p_n:n\in\Bbb N\rangle\to\langle x,0\rangle$ in $X$, then $\langle f_{r,x_0}(p_n):n\in\Bbb N\rangle$ converges to $0$ if $x=x_0$ and to $1$ otherwise.
(Those functions might be worth adding to the English Wikipedia article, too; I’ll have to think about it.)
|
H: Proving that $f(n)=n\log(n)$ is a $b$-smooth function
First I start with the definition: a function $f:\mathbb{N} \rightarrow \mathbb{R}^{+}$ is b-smooth for an integer $b \geq 2$ if $f$ is eventually non decreasing and if
$$ \exists c \in \mathbb{R}^{+}\ \exists n_0 \in \mathbb{N}\ \forall n \geq n_0 \hspace{1cm} f(bn) \leq c\,f(n)$$
The function is smooth if the b-smoothness holds for all integers $b \geq 2$.
Now I need to prove that $f(n)=n\log(n)$ is $b$-smooth for all $b \geq 2$.
I started like this
$f(bn)=bn\log(bn) \leq c\,n\log(n)$
$bn\log(bn) \leq c\,n\log(n)$
I could divide both sides by $n$, then
$b\log(bn) \leq c\log(n)$
now
$\frac{\log(bn)}{\log(n)} \leq \frac{c}{b}$
with the rules of logarithms I can write it as
$\frac{\log(bn)}{\log(n)}=\log_{n}(bn) \leq \frac{c}{b}$
But how can I show that the function is bounded by $\frac{c}{b}$? Maybe my steps were completely wrong.
AI: Just take $n_0=b$ and $c=2b$. Then, for $n \ge n_0$, since $\log$ is increasing:
$$
\log(b)\leq \log(n)
$$
$$
\implies bn\log(b)\leq bn\log(n)
$$
$$
\implies bn\log(b)+bn\log(n)\leq bn\log(n)+bn\log(n)
$$
$$
\implies bn\log(bn)\leq 2bn\log(n)
$$
|
H: Why does this "gradient field test" not work on the spin field $S / r^2$? (from Strang's Calculus)
In section 15.2, Strang's Calculus explains that for any gradient field $\bf{F} = Mi + Nj$, ${\partial M \over \partial y} = {\partial N \over \partial x}$. (Strang calls this "test D" for identifying a vector field as being a gradient of some function.)
This makes sense, since the components of a "gradient field" are the partial derivatives of some function $f$, and we know that for any such $f$, ${\partial^2 f \over \partial x\, \partial y} = {\partial^2 f \over \partial y\, \partial x}$.
The gradient of $f = \tan^{-1}\left({y \over x}\right)$ is ${-y \over x^2 + y^2}i + {x \over x^2 + y^2}j$. However, this vector field does not seem to pass "test D", since
$$
{\partial \over \partial x}\left({-y \over x^2 + y^2}\right) = {2xy \over (x^2 + y^2)^2}
$$
But
$$
{\partial \over \partial y}\left({x \over x^2 + y^2}\right) = -{2xy \over (x^2 + y^2)^2}
$$
I'm sure something is wrong with my reasoning, but I am struggling to find the mistake. Can anyone point it out?
AI: You have a tiny mistake.
What you did:
$$
{\partial \over \partial x}\left({-y \over x^2 + y^2}\right)
$$
$$
{\partial \over \partial y}\left({x \over x^2 + y^2}\right)
$$
but you should compute:
$$
{\partial \over \partial y}\left({-y \over x^2 + y^2}\right) \tag{1}
$$
$$
{\partial \over \partial x}\left({x \over x^2 + y^2}\right) \tag{2}
$$
(Notice that the first is differentiated with respect to $y$, not $x$ as you did.)
We get $(1)=(2)=\frac{y^2-x^2}{(x^2+y^2)^2}$, which confirms that this is indeed a gradient field.
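If you want to double-check such computations symbolically, a small sympy sketch (my addition, assuming sympy is available):

```python
from sympy import symbols, diff, simplify

x, y = symbols('x y')
M = -y / (x**2 + y**2)
N = x / (x**2 + y**2)

# test D uses dM/dy and dN/dx; both equal (y^2 - x^2)/(x^2 + y^2)^2
print(simplify(diff(M, y)), simplify(diff(N, x)))
print(simplify(diff(M, y) - diff(N, x)))  # 0
```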
|
H: If $t<0$, what is $t\sum a_n$?
Let $t$ be a negative real number (i.e. $t<0$) and $\{a_n\}$ be a nonnegative sequence. If $$\sum a_n<\infty$$ then how do we prove or disprove that $$t\sum a_n<\infty?$$
AI: Since $a_n \geq 0$ by assumption, we have:
$$ \infty > \sum a_n \geq 0 $$
Now, if you multiply for some real number $t < 0$, the result will be:
$$ - \infty < t \sum a_n \leq 0 < + \infty $$
Notice that the sum is not infinite, because you have multiplied a finite sum by the finite value $t$.
|
H: Weak version of Hahn-Banach separation
Let $K\subset \mathbb{R}^n$ be a compact convex subset and $p\in \mathbb{R}^n$ be a point not in $K$. Then $p$ and $K$ can be strongly separated with a hyperplane $c_1 x_1+\cdots+c_nx_n=b$ (I mean "$c_1p_1+\cdots+c_np_n <b$ and $c_1 k_1+\cdots+c_nk_n > b$ for all $k\in K$").
I want to prove this statement for second year undergraduate students. I don't want to use functional analysis. Any help?
AI: Let $K \subseteq \mathbb{R}^n$ be compact and convex and $p \not\in K$. There is a unique point $x_0 \in K$ that is closest to $p$, that is,
\begin{equation}
||x_0 - p || = \min\limits_{x \in K} ||x - p||.
\end{equation}
Let $B$ denote the closed ball with center $p$ and radius $||x_0 - p|| > 0$. Then $B \cap K = \{x_0\}$ by uniqueness of the minimizer. Let $l$ denote the plane perpendicular to $p-x_0$ passing through $(p + x_0)/2$. Then $l \cap K = \varnothing$: if some $k \in K$ lay on the same side of $l$ as $p$, the segment from $x_0$ to $k$ (contained in $K$ by convexity) would enter the open ball around $p$ of radius $||x_0 - p||$, contradicting the minimality of $x_0$ (this can also be seen graphically). So we've found a strongly separating hyperplane. That $x_0$ does in fact exist follows by continuity of the function $x \mapsto ||x-p||$ over the compact set $K$. Uniqueness can be proven in a few lines using the standard argument involving the parallelogram law.
|
H: Solve $9x(1-x)y^{\prime\prime} - 12y^\prime + 4y = 0$ using power series
Let $y(x) = \sum_{n=0}^\infty c_nx^n \Rightarrow y^\prime(x) = \sum_{n=0}^\infty nc_nx^{n-1}, y^{\prime\prime}(x) = \sum_{n=0}^\infty n(n-1)c_nx^{n-2}$. Now:
$$9x(1-x)\sum_{n=0}^\infty n(n-1)c_nx^{n-2} - 12\sum_{n=0}^\infty nc_nx^{n-1} + 4\sum_{n=0}^\infty c_nx^n = 0$$
$$9\sum_{n=0}^\infty n(n-1)c_nx^{n-1}- 9\sum_{n=0}^\infty n(n-1)c_nx^{n} - 12\sum_{n=0}^\infty nc_nx^{n-1} + 4\sum_{n=0}^\infty c_nx^n = 0,$$
$$9\sum_{n=1}^\infty n(n-1)c_nx^{n-1}- 9\sum_{n=0}^\infty n(n-1)c_nx^{n} - 12\sum_{n=1}^\infty nc_nx^{n-1} + 4\sum_{n=0}^\infty c_nx^n = 0,$$
Make $t = n-1$,
$$9\sum_{t=0}^\infty (t+1)(t)c_{t+1}x^{t}- 9\sum_{n=0}^\infty n(n-1)c_nx^{n} - 12\sum_{t=0}^\infty (t+1)c_{t+1}x^{t} + 4\sum_{n=0}^\infty c_nx^n = 0,$$
Make $t = n$,
$$9\sum_{n=0}^\infty (n+1)(n)c_{n+1}x^{n}- 9\sum_{n=0}^\infty n(n-1)c_nx^{n} - 12\sum_{n=0}^\infty (n+1)c_{n+1}x^{n} + 4\sum_{n=0}^\infty c_nx^n = 0,$$
So, $9(n+1)(n)c_{n+1} - 9n(n-1)c_n - 12(n+1)c_{n+1} + 4c_n = 0 \Rightarrow (n+1)c_{n+1}(9n - 12) = c_n(9n^2-9n -4) \Rightarrow c_{n+1} = c_n\frac{9n^2-9n -4}{(n+1)(9n - 12)} = \frac{3n+1}{3(n+1)}c_n$
But I'm having troubles with the numerator when I try to solve the recurrence relation.
AI: Hints:
First, manipulate the recurrence as: $$c_n =c_0\prod_{k=1}^n\dfrac{3k-2}{3k}=\dfrac{c_0}{n!}\prod_{k=1}^n\left(k-\frac 23\right) .$$
On the other hand, recall the series expansion of $(1+x)^\alpha:$
$$(1+x)^\alpha = \sum_{n=0}^\infty\binom{\alpha}{n}x^n,$$
where:
$$\binom{\alpha}{n} = \prod_{k=1}^n\dfrac{\alpha-k+1}{k}.$$
So now all you need to do is to recognize the correct $\alpha$ and do some small dirty work.
|
H: Finding a Pivotal Quantity
Let $X_1, \dots, X_n$ be i.i.d with probability density function
$$f(x;\theta) = \theta \frac{8^\theta}{x^{\theta +1}}, \qquad x\geq 8,\qquad \theta > 0 $$
And given the statistic $W(X_1, \dots, X_n) = \sum_{i=1}^n \ln\left(\frac{X_i}{8}\right)$, I want to find a function $g(\theta)$ such that:
$$Q(\theta, W) = g(\theta) W \sim \chi^2_k$$
I've tried many things, unfortunately, without any success. Any hints would be appreciated.
Thanks in advance!!
AI: First of all, calculate the distribution of $Y_i=\log(\frac{X_i}{8})$.
By the fundamental transformation theorem you find
$Y_i \sim \mathrm{Exp}(\theta)$, so the rv $W=\sum_i Y_i \sim \mathrm{Gamma}(n,\theta)$.
Thus, taking $g(\theta)=2\theta$, you get your $\chi_{(k=2n)}^2$.
FYI, $f(x;\theta)$ is a known distribution: the Pareto.
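A Monte Carlo sanity check of this pivot (my addition; draws Pareto samples by inverse CDF and compares the first two moments of $Q$ with those of $\chi^2_{2n}$):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.5, 7, 200_000

# inverse CDF of the Pareto pdf theta * 8^theta / x^(theta+1), x >= 8
U = rng.random((reps, n))
X = 8 * (1 - U) ** (-1 / theta)

Q = 2 * theta * np.log(X / 8).sum(axis=1)
print(Q.mean(), Q.var())  # chi^2 with k = 2n = 14: mean 14, variance 28
```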
|
H: Having trouble working out how two vector expressions are equivalent
I'm doing some coursework on linear regression, and part of it requires finding a closed-form solution of the mean-squared error minimisation problem:
$$\min_{\bf w} \frac{1}{n} \sum_{i=1}^{n} (\mathbf{w}^T \mathbf{x}_i - y_i)^2$$
This is how I tried to do it (with some help from online sources):
$$
\frac{1}{n} \sum_{i=1}^{n} \nabla_{\bf{w}} (\mathbf{w}^T \mathbf{x}_i - y_i)^2 = 0 \\
\Rightarrow \frac{2}{n} \sum_{i=1}^{n}(\mathbf{w}^T \mathbf{x}_i - y_i) \mathbf{x}_i = 0 \\
\Rightarrow \sum_{i=1}^{n}(\mathbf{w}^T \mathbf{x}_i) \mathbf{x}_i = \sum_{i=1}^{n} y_i \mathbf{x}_i \\
\Rightarrow \sum_{i=1}^{n} \mathbf{x}_i \mathbf{x}_i^T \mathbf{w} = \sum_{i=1}^{n} y_i \mathbf{x}_i
$$
My only issue here is rearranging the left-hand side to get from the second-last to the last line. Could someone show me how this is done? Thanks.
AI: Notice that $a_{i} := \mathbf{w}^T \mathbf{x}_i$ are scalars and so $a_{i} = a_{i}^T = (\mathbf{w}^T \mathbf{x}_i)^T = \mathbf{x}_i^T \mathbf{w}$. By virtue of being scalars they also commute with the vectors $\mathbf{x}_i$, giving us $$a_{i} \mathbf{x}_i = \mathbf{x}_i a_{i} = \mathbf{x}_i (\mathbf{x}_i^T \mathbf{w}) = \mathbf{x}_i \mathbf{x}_i^T \mathbf{w}.$$ Hence $$ \sum_{i=1}^{n} y_i \mathbf{x}_i = \sum_{i=1}^{n}(\mathbf{w}^T \mathbf{x}_i) \mathbf{x}_i = \sum_{i=1}^{n} \mathbf{x}_i \mathbf{x}_i^T \mathbf{w}.$$
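A quick numerical confirmation that this rearrangement gives the usual least-squares solution (my addition, using numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))  # rows are the x_i
y = rng.normal(size=50)

A = sum(np.outer(xi, xi) for xi in X)        # sum_i x_i x_i^T
b = sum(yi * xi for xi, yi in zip(X, y))     # sum_i y_i x_i
w = np.linalg.solve(A, b)

w_ls = np.linalg.lstsq(X, y, rcond=None)[0]  # minimizer of ||Xw - y||^2
print(np.allclose(w, w_ls))                  # True
```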
|
H: Derive $P[D = 1|X]$ from $X = f(X,-1)P[D =-1|X] + f(X,1)P[D =1|X]$
Given the random variables $X: \Omega \to \mathbb R$ and $D: \Omega \to \{-1,1 \}$, and the (measurable) function $f: \mathbb R \times \{-1, 1 \} \to \mathbb R$.
Show that if
$$
X = f(X,-1)P[D =-1|X] + f(X,1)P[D =1|X],
$$
then
$$
\frac{X - f(X, -1)}{f(X, 1) - f(X, -1)} = P[D = 1 |X].
$$
All I can see is the immediate algebraic manipulation
$$
\frac{X - f(X, -1)P[D=-1|X]}{f(X, 1)} = P[D = 1|X]
,$$
but then I am perplexed how to proceed.
Most grateful for any help provided!
AI: Using the given identity for $X$, we can write,
\begin{align*}
X - f(X,-1) & = f(X, -1)P[D = -1|X] - f(X, -1)+ f(X, 1)P[D = 1|X] \\
& = f(X, -1)(P[D = -1|X] - 1)+ f(X, 1)P[D = 1|X] \\
& = -f(X, -1)P[D = 1|X] + f(X, 1)P[D = 1|X] \\
& = P[D = 1|X](f(X, 1)-f(X, -1)) \\
\end{align*}
On rearranging (and noting that the third line used $P[D = -1|X] + P[D = 1|X] = 1$), we get the required expression.
|
H: Are affine morphisms with coherent direct image finite?
Let $f:X \longrightarrow Y$ be a morphism of Noetherian schemes. I was doing exercise 5.5 of Hartshorne's Algebraic Geometry, and in (c) I showed that finite morphisms preserve coherence (i.e. if $\mathscr{F}$ is coherent on $X$ then $f_*\mathscr{F}$ is coherent on $Y$).
Now I am wondering about something like a converse: suppose we have a morphism $f:X \longrightarrow Y$ of Noetherian schemes with $f_*\mathcal{O}_X$ a coherent $\mathcal{O}_Y$-module. What conditions do we need on $f$ for it to be finite? (I was thinking maybe about affine morphisms.)
Or maybe, a more precise question: are affine morphisms with the condition that $f_*\mathcal{O}_X$ is coherent, proper? That would work if it is the case.
AI: Any proper morphism has the property that $f_*\mathscr{F}$ is coherent if $\mathscr{F}$ is. So, in particular, such morphisms need not be finite. Affine morphisms with $f_*\mathcal{O}_X$ coherent are proper. Putting these together, a morphism (of reasonable schemes) is finite if and only if it is proper and affine.
|
H: In a casino a player can win 1 euro with a probability of 18/38 and lose 1 euro with a probability of 20/38.
In a casino, for each bet on the wheel, a player wins 1 euro with the
probability of $\frac{18}{38}$ and loses 1 euro with a probability of $\frac{20}{38}$.
a) What is the average value won per game?
b) What is the probability the player loses money if he plays 6 times?
Not sure how to solve either. Not sure how I can calculate the average without knowing how many games the player plays, and for b) I tried to use the binomial distribution with $x = 0$ and $p = \frac{18}{38}$, but it didn't work. Help?
The right answers are $\frac{1}{19}$ for a) and $0.394296$ for b).
AI: For the first part, you are asked for the expected winnings per game; that is to say, after one play. If $X$ is a random variable representing the winnings after one game, then either $X = 1$ with probability $18/38$, or $X = -1$ with probability $20/38$, where we use $-1$ to represent a loss of $1$ euro.
We can express this as
$$\Pr[X = 1] = \frac{18}{38} \\ \Pr[X = -1] = \frac{20}{38}.$$
The expected value of $X$ after one play is therefore $$\operatorname{E}[X] = (1) \Pr[X = 1] + (-1)\Pr[X = -1].$$ All we did is take each possible outcome ($1$ or $-1$), multiply it by the probability of observing that outcome, and then taking the sum.
For the second part, the player loses money in $n = 6$ plays if they have at least 4 losses, because if they win $3$ times and lose $3$ times, their net earnings is $0$. So they have to lose more times than they win. If $Y$ is a binomial random variable that represents the number of wins in $6$ tries, where the probability of any try resulting in a win is $p = 18/38$, then $$\Pr[Y = y] = \binom{n}{y} p^y (1-p)^{n-y}.$$ You then must find $$\Pr[Y \le 2] = \Pr[Y = 0] + \Pr[Y = 1] + \Pr[Y = 2],$$ representing the sum of probabilities for all outcomes in which there are fewer wins than losses.
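Both numbers quoted in the question drop out directly (a small check I added, plain Python):

```python
from math import comb

p = 18 / 38
print(p * 1 + (1 - p) * (-1))  # part a): -1/19 ~ -0.0526, an average loss of 1/19 euro
print(sum(comb(6, y) * p**y * (1 - p)**(6 - y) for y in range(3)))  # part b): ~0.394296
```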
|
H: ${\log}_{a}{x}\neq {\int}^{x}_{1}{\frac{1}{t}}dt$
In most calculus textbooks, $\ln{x}$ is defined to be ${\int}^{x}_{1}{\frac{1}{t}}dt$. Some textbooks validate this definition by demonstrating that this function $\int^{x}_{1}{\frac{1}{t}}dt$ has all the properties of a logarithmic function (I've included pictures of this). I'm skeptical of this particular approach, since we could also define ${\log}_{a}{x}={\int}^{x}_{1}{\frac{1}{t}}dt$. We can still show that the laws of logarithms are properties of this integral; it's also obvious how the algebra will work out. And that means we've justified our claim?
Hell no! The derivative of ${\log}_{a}{x}$ is $\frac{1}{x}{\log}_{a}{e}$. Isn't this approach erroneous then? How, then, could we show that this integral does not equal ${\log}_{a}{x}$? We could try showing that some of the properties of the logarithmic functions ($a\neq{e}$) do not hold for ${\int}^{x}_{1}{\frac{1}{t}}dt$. But how do we go about it?
(img-1, img-2: pictures of the textbook's demonstration)
AI: Your critique is completely correct: the properties shown in those proofs are not enough to distinguish between $\ln x$ and $\log_a x$ for any $a>1$. Indeed, all of the proofs would go through for the function $\int_1^x \frac Ct\,dt$ for any positive constant $C$ as well. (This is secretly the same ambiguity in disguise....)
So yes, you're right that this is not a proof that $\int_1^x \frac 1t\,dt$ must equal $\ln x$ instead of some other $\log_a x = \frac1{\ln a}\ln x$. To be fair, the textbook didn't claim it was such a proof—only that the integral does have logarithm-like properties.
A proof that we really do get $\ln x$ itself, instead of some multiple of it, would need to use some property of the number $e$, which itself depends on what definition of $e$ you choose. One common definition: of all the exponential functions $a^x$ with $a>1$, the number $e$ is the only base $a$ with the property that $\frac d{dx}(a^x)\big|_{x=0} = 1$.
From this one can derive (using the relationship between the derivatives of a function and its inverse function) that $e$ is the only base $a$ for which the inverse function $\log_a x$ satisfies $\frac d{dx} \log_a x\big|_{x=1}=1$. And this additional property is enough to show that $\int_1^x \frac1t\,dt$ is equal to $\log_e x=\ln x$, since the derivative of $\int_1^x \frac1t\,dt$ equals $\frac 1x$ by the fundamental theorem of calculus.
|
H: Given that $n^4-4n^3+14n^2-20n+10$ is a perfect square, find all integers n that satisfy the condition
So, I tried solving that by
$$n^4-4n^3+14n^2-20n+10=x^2\\10=x^2-a^2, a^2=n^4-4n^3+14n^2-20n+10\\10=(x+a)(x-a)$$ but I couldn't find any integers when I solved it
AI: Forming squares starting from the biggest power is always a good strategy:
$$n^4-4n^3+14n^2-20n+10=n^2(n^2-4n+4)+10n^2-20n+10=\\
n^2(n-2)^2+10(n-1)^2=(n(n-2))^2+10(n-1)^2=((n-1)^2-1)^2+10(n-1)^2=\\
(n-1)^4-2(n-1)^2+1+10(n-1)^2=(n-1)^4+8(n-1)^2+1=\\
((n-1)^2+4)^2-15=x^2$$
I think this quite large hint will make it a bit easier to solve it.
Just as a note: it just happens we can make a nice square, that is not always the case!
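A brute-force search (my addition) confirms that the only integer solutions in a large range are the ones this hint leads to:

```python
from math import isqrt

def is_square(k):
    return k >= 0 and isqrt(k) ** 2 == k

print([n for n in range(-10**4, 10**4)
       if is_square(n**4 - 4*n**3 + 14*n**2 - 20*n + 10)])  # [-1, 1, 3]
```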
|
H: Bound on integral on function implies bound order of entire function
Let $f$ be an entire function such that $\int_{\mathbb{C}}|f(z)|^2e^{-|z|^2} <\infty$ (with Lebesgue measure on $\mathbb{C}$). I need to prove that $f(z)$ has order $\le 2$.
My ideas:
Try to find bounds on coefficients and derive information about order from this bound.
Try to rewrite the integral as an integral of integrals over circles and use some tricks, like moving the constant on the right-hand side under the integral via $1 = \int_0^\infty a e^{-a t}\,dt$, then connect the vanishing of some integral with the order of the function.
Try to find some information about areas where function is "bad" and if they are not empty find some contradiction.
None of them were successful, so I'm asking for hints/help.
AI: Combine idea 1 with the start of idea 2.
On the circle $\lvert z\rvert = r$, writing $z = re^{i\varphi}$ yields
$$\lvert f(z)\rvert^2 = \sum_{m,n = 0}^{\infty} a_n\overline{a_m} r^{n+m} e^{i\varphi(n-m)}\,.$$
Plugging this into the integral and using polar coordinates gives
\begin{align}
\int_{\mathbb{C}} \lvert f(z)\rvert^2 e^{-\lvert z\rvert^2}\,d\lambda
&= \int_0^{\infty} \int_0^{2\pi} \sum_{n,m = 0}^{\infty} a_n\overline{a_m} r^{n+m} e^{i\varphi(n-m)}\,d\varphi\; e^{-r^2} r\,dr \\
&= \pi \int_0^{\infty} \sum_{n = 0}^{\infty} \lvert a_n\rvert^2 r^{2n} e^{-r^2}\: 2r\,dr \\
&= \pi \sum_{n = 0}^{\infty} \lvert a_n\rvert^2 \int_0^{\infty} u^n e^{-u}\,du \\
&= \pi \sum_{n = 0}^{\infty} \lvert a_n\rvert^2 \cdot n!\,.
\end{align}
In particular, $\sqrt{n!}\,\lvert a_n\rvert$ is bounded. From this you can deduce that the order of $f$ is at most $2$ (e.g. using the argument here).
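In case it is useful, here is how that last step can go, using the standard formula for the order of an entire function in terms of its Taylor coefficients (an addition to the original answer): with $\lvert a_n\rvert\le C/\sqrt{n!}$ and $\log n!\sim n\log n$,
$$\rho=\limsup_{n\to\infty}\frac{n\log n}{-\log\lvert a_n\rvert}\le\limsup_{n\to\infty}\frac{n\log n}{\tfrac12\log n!-\log C}=2.$$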
|
H: $N(0,\sigma^2_n)$ and $\sigma^2_n\to\sigma^2$ imply $N(0,\sigma^2_n)\overset{d}{\to}N(0,\sigma^2)$?
The following result seems to be natural to me:
$$N(0,\sigma^2_n) \text{ and } \sigma^2_n\to\sigma^2 \implies N(0,\sigma^2_n)\overset{d}{\to}N(0,\sigma^2)$$
as $n\to\infty$, where $\overset{d}{\to}$ denotes convergence in distribution. But I can't find the exact argument to show this.
Can you point me in the right direction?
My attempt
The moment generating function uniquely determines the distribution function. We can then establish
$$\lim M_{X_n}(t)=M_X(t), \forall t\in (-a,a) \implies X_n\overset{d}{\to}X,$$
for some $a>0$.
As $\lim e^{\sigma^2_nt^2/2}=e^{\sigma^2t^2/2}$ , we have the result for the gaussian case.
I printed below Curtiss's theorem, where he states the above result:
For the Gaussian case, the moment generating function exists for any real $t$ and $\lim_{n\to\infty} M_{X_n}(t)=M_X(t)$ holds for any real $t$, given that $\sigma^2_n\to\sigma^2$.
AI: Perhaps the fastest way is to use characteristic functions (CFs), that is, $\varphi_n(t) = \exp (-\frac{t^2\sigma_n^2}{2}) \to \exp(-\frac{t^2\sigma^2}{2}) = \varphi(t)$, where $\varphi_n,\varphi$ are respectively the CFs of the $\mathcal N(0,\sigma_n^2)$ and $\mathcal N(0,\sigma^2)$ distributions. Now use the Lévy-Cramér continuity theorem and you have your result.
But your reasoning is correct, too. Note that convergence in distribution is equivalent to convergence of the CDFs at the continuity points of the limiting distribution. It is even enough that there exists $\delta > 0$ such that $M_n(t) \to M(t)$ for $t \in (-\delta,\delta)$, because then we get $\lim F_n(t) = F(t)$ for every continuity point $t$ of $F$. So if $M_n(t) \to M(t)$ for every $t$ in a small interval with $0$ inside, then the corresponding distributions converge in the weak sense.
If $\sigma > 0$ we can apply Scheffé's theorem. Since the densities converge pointwise (note that for $n>N$ we would have $\sigma_n > 0$, so we have densities), the corresponding distributions converge, too (even in the stronger sense of total variation).
Another way is to try to tackle it by definition, but that would be a bit problematic because of the cases $\sigma = 0$ and $\sigma > 0$.
|
H: How to calculate determinant of a (N-1) order Matrix?
For $n \geq 2,$ consider the following square matrix of order $(n-1)$
$$\begin{pmatrix}
3 & 1 & 1 & 1 & & 1 \\
1 & 4 & 1 & 1 & \dots & 1 \\
1 & 1 & 5 & 1 & \cdots & 1 \\
1 & 1 & 1 & 6 & & 1 \\
& & & & \ddots & \vdots \\
1 & 1 & 1 & 1 & \cdots & n+1
\end{pmatrix}$$
Find its determinant using only elementary row operations and denote it by $A_{n}$. Hence or otherwise, check whether the sequence $\left\{A_{n}/n!\right\}_{n\geq2}$ is bounded.
Basically, I have no clue.
Should I proceed with echelon form?
AI: Basic idea:
1. We get rid of $1$s except for the first row
2. We get rid of $1$s in the first row
3. We consider $(A)_{1,1}$ as the rest is known
So 1. we subtract the 1st row from every other row:
$$\begin{align*}
\left|\begin{array}{cccccc}
3 & 1 & 1 & 1 & & 1 \\
1 & 4 & 1 & 1 & \dots & 1 \\
1 & 1 & 5 & 1 & \cdots & 1 \\
1 & 1 & 1 & 6 & & 1 \\
& & & & \ddots & \vdots \\
1 & 1 & 1 & 1 & \cdots & n+1
\end{array}\right|
&=
\left|\begin{array}{cccccc}
3 & 1 & 1 & 1 & & 1 \\
-2 & 3 & 0 & 0 & \dots & 0 \\
-2 & 0 & 4 & 0 & \cdots & 0 \\
-2 & 0 & 0 & 5 & & 0 \\
& & & & \ddots & \vdots \\
-2 & 0 & 0 & 0 & \cdots & n
\end{array}\right|
\end{align*}$$
2. we subtract each row (say the $k$th, whose diagonal entry is $k+1$), multiplied by $\frac{1}{k+1}$, from the first row:
$$=
\left|\begin{array}{cccccc}
a_{1,1} & 0 & 0 & 0 & & 0 \\
-2 & 3 & 0 & 0 & \dots & 0 \\
-2 & 0 & 4 & 0 & \cdots & 0 \\
-2 & 0 & 0 & 5 & & 0 \\
& & & & \ddots & \vdots \\
-2 & 0 & 0 & 0 & \cdots & n
\end{array}\right|$$
3. As it's lower-triangular now, we consider $a_{1,1}$ as
$|A|=a_{1,1}\prod\limits_{k=3}^n k$,
$a_{1,1}=3+2\sum\limits_{k=3}^n \frac{1}{k}$
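A numerical spot check of the closed form (my addition; builds the matrix and compares with $a_{1,1}\prod_{k=3}^n k = a_{1,1}\,n!/2$):

```python
import numpy as np
from math import factorial

def A(n):  # the (n-1) x (n-1) matrix with diagonal 3, 4, ..., n+1 and 1s elsewhere
    return np.ones((n - 1, n - 1)) + np.diag(range(2, n + 1))

for n in (4, 6, 9):
    a11 = 3 + 2 * sum(1 / k for k in range(3, n + 1))
    print(np.linalg.det(A(n)), a11 * factorial(n) / 2)  # matching values
```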
|
H: Finding an identity to simplify this combinatorics solution
Steve flips 499 coins and Marissa flips 500 coins. What is the probability that Marissa flips more heads than Steve does?
We use casework for each of the possible number of heads that Steve flips. The first is $$\frac{1}{2^{499}\cdot2^{500}}\left(\binom{500}{1}+\binom{500}{2}+\dots+\binom{500}{500}\right),$$ because there is a $1/2^{499}$ chance that Steve flips 0 heads, and each of the possible numbers of heads that Marissa can flip has probability with denominator $2^{500}$, because that is the total number of arrangements of heads and tails for Marissa. Following this, we have $$\frac{\dbinom{499}{1}}{2^{499}\cdot2^{500}}\left(\binom{500}{2}+\binom{500}{3}+\dots+\binom{500}{500}\right),$$
omitting the possibility of Marissa flipping 1 head, because in this case Steve flips 1 head. Then, we just keep going with this summation $$\sum_{k=0}^{499}\frac{\dbinom{499}{k}}{2^{499}\cdot2^{500}}\left(\sum_{i=k+1}^{500}\dbinom{500}{i} \right).$$
I am unsure of how to simplify it.
I understand that Marissa has a $1/2$ chance of winning by a $1-1$ correspondence between the number of non-winning and winning outcomes, but I would just like to see how the above could be simplified. In other words, it would be nice if I could get a solution similar to mine.
AI: It can be simplified algebraically, but it’s a bit of a pain. I actually did the first five steps of the second computation first, then came back and did the first computation, and finally finished it off.
First observe that
$$\begin{align*}
\sum_{0\le k<i\le 500}\binom{499}k\binom{500}i&=\sum_{0\le k<i\le 500}\binom{499}k\left(\binom{499}{i-1}+\binom{499}i\right)\\
&=\sum_{0\le k\le i\le 499}\binom{499}k\binom{499}i+\sum_{0\le k<i\le499}\binom{499}k\binom{499}i\;.
\end{align*}$$
Then
$$\begin{align*}
\sum_{k=0}^{499}\binom{499}k\sum_{i=k+1}^{500}\binom{500}i&=\sum_{0\le k<i\le 500}\binom{499}k\binom{500}i\\
&=\left(\sum_{k=0}^{499}\binom{499}k\right)\left(\sum_{i=0}^{500}\binom{500}i\right)\\
&\quad\quad-\sum_{0\le i\le k\le 499}\binom{499}k\binom{500}i\\
&=2^{499}\cdot2^{500}-\sum_{0\le i\le k\le 499}\binom{499}k\binom{500}i\\
&=2^{499}\cdot2^{500}-\sum_{0\le i\le k\le499}\binom{499}k\left(\binom{499}{i-1}+\binom{499}i\right)\\
&=2^{499}\cdot2^{500}-\sum_{0\le i\le k\le499}\binom{499}k\binom{499}i\\
&\quad\quad-\sum_{0\le i<k\le499}\binom{499}k\binom{499}i\\
&=2^{499}\cdot2^{500}-\sum_{0\le k<i\le 500}\binom{499}k\binom{500}i\\
&=2^{499}\cdot2^{500}-\sum_{k=0}^{499}\binom{499}k\sum_{i=k+1}^{500}\binom{500}i\;,
\end{align*}$$
so
$$\sum_{k=0}^{499}\binom{499}k\sum_{i=k+1}^{500}\binom{500}i=\frac12\cdot2^{499}\cdot2^{500}\;,$$
and
$$\frac1{2^{499}\cdot2^{500}}\sum_{k=0}^{499}\binom{499}k\sum_{i=k+1}^{500}\binom{500}i=\frac12\;.$$
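Since the binomials are exact integers, the whole identity can be verified directly (my addition, plain Python; takes a moment but is exact):

```python
from math import comb

S = sum(comb(499, k) * comb(500, i)
        for k in range(500) for i in range(k + 1, 501))
print(2 * S == 2**499 * 2**500)  # True
```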
|
H: Definition of a chain in a tree
I am having trouble understanding the definition of a chain of a tree at p. 15.
Here is a rooted tree. The root is "a".
abc is clearly a chain.
However, I cannot understand whether bc is a chain or not.
a
|
b-d
|
c
At first, I thought the only chains were "abc", "ab", "a", "abd".
But looking at the proof of Lemma 1.5.5(ii), "bc" should be included. Which is correct?
AI: They are giving the down-set as an example of a chain. A chain is a subset of the graph whose elements are pairwise comparable (a linear ordering); the down-set of a vertex is one example of a chain. So, indeed, bc is a chain in the graph: b and c are comparable.
|
H: Resolvent definition: bounded operator vs. unbounded operator
Maybe my question is obvious in some sense, but I ask here because I didn’t find a “satisfactory” answer on the web.
If we have a bounded or an unbounded operator, does the definition of the resolvent change? And, in general, why should one prefer to work with a bounded operator instead of an unbounded one?
Also some references will be appreciated.
Thank you!
AI: There isn't significant difference.
Let $V$ be a Banach space, and $T:V\rightarrow V$ be unbounded, closed, and densely-defined with domain $D(T)$. The resolvent set $\rho (T)$ is the collection of all $\zeta\in \mathbb{C}$ for which $\zeta-T:D(T)\rightarrow V$ is bijective. The resolvent is the operator $R_\zeta:=(\zeta-T)^{-1}:V\rightarrow D(T)$, which is defined for $\zeta\in\rho(T).$ Just like in the case of $T$ bounded, the resolvent is a bounded operator, by the closed-graph theorem.
One should note that this requires an appropriate choice of domain, and the resolvent set (and spectrum) can look quite a bit different for unbounded operators.
To answer your second question, bounded operators are easier to work with. Unbounded operators lead to more technicalities, much of which is a byproduct of needing to make a "nice" choice of domain.
For a reference, see the functional analysis appendix of Michael Taylor's Partial Differential Equations I: Basic Theory.
|
H: Radius of convergence of a power series summing from $- \infty$ to $0$.
How would one go about computing the radius of convergence of, say, the following power series:
$$\sum_{n=-\infty}^0 n \, 3^{-n} z^n.$$
It is tempting to directly apply the Cauchy-Hadamard theorem here, but the statement is true for power series summing from $n=0$. I tried to make a substitution by realizing the sum above is equal to :
$$\lim_{m \to \infty} \sum_{n=-m}^0 n \, 3^{-n} z^n,$$
and letting $k = n + m$ for $n= -m, -m+1, \ldots, 0$ we have
$$\lim_{m \to \infty} \sum_{k=0}^m (k-m) \, 3^{m-k} z^{k-m}.$$
But this seems like making things worse. Any thoughts on tackling this problem?
AI: Reindexing with $n\mapsto-n$, the series becomes $$\sum_{n=0}^{\infty}(-n)3^nz^{-n},$$ a power series in $w=1/z$ with radius of convergence $1/3$ (by the root test), so it converges for $|z|>3$.
|
H: What do $\min{[f,g]}$ and $\max{[f,g]}$ mean in my proof of continuity?
I must understand this proof of continuity. I understand the basics of continuity and the algebra of limits of continuous functions, so I have added a picture of the proof as it appears in the book of Real Analysis. I would really appreciate it if someone could explain it to me as simply as possible. I'm struggling to understand it: what do $\min{[f,g]}$ and $\max{[f,g]}$ mean? Thank you.
AI: $\max(f,g)(x)=\begin{cases}f(x) & \text{if }f(x)\ge g(x) \\ g(x) & \text{otherwise}\end{cases}$
$\min(f,g)(x)=\begin{cases}f(x) & \text{if }f(x)\le g(x) \\ g(x) & \text{otherwise}\end{cases}$
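A remark I would add: the proof in your book most likely rests on the pointwise identities
$$\max(f,g)=\frac{f+g+|f-g|}{2},\qquad\min(f,g)=\frac{f+g-|f-g|}{2},$$
so continuity of $\max[f,g]$ and $\min[f,g]$ follows from the algebra of continuous functions plus the continuity of the absolute value.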
|
H: Computing a 3D rotation matrix aligning 1 orthonormal basis to another
I have 2 sets of 3 vectors ($\vec{u_1}$, $\vec{v_1}$, $\vec{w_1}$ and $\vec{u_2}$, $\vec{v_2}$, $\vec{w_2}$), and each set of 3 vectors forms an orthonormal basis. That is:
$$
|\vec{u}|=|\vec{v}|=|\vec{w}|=1 \\
\vec{u} \cdot \vec{v}=0 \\
\vec{u} \times \vec{v}= \vec{w}
$$
Using the values for the vectors, how do I compute the 3D rotation matrix $R$ that rotates the first orthonormal basis onto the other? That is, $R \vec{u_1} = \vec{u_2}$, $R \vec{v_1} = \vec{v_2}$, etc.
AI: Let us define the (orthogonal) matrices :
$$B_1=[u_1|v_1|w_1],\ \ \ B_2=[u_2|v_2|w_2]$$
where $v$ stands for the column vector with entries the coordinates of $\vec{v}$.
Their columns being issued from orthonormal direct bases, these matrices are orthogonal with a determinant $1$.
You want a matrix $R$ such that
$$Ru_1=u_2, \ \ Rv_1=v_2, \ \ Rw_1=w_2$$
These 3 equations can be gathered into a single matrix equation
$$R[u_1|v_1|w_1]=[u_2|v_2|w_2] \ \ \iff \ \ RB_1=B_2\tag{1}$$
Conclusion : right multiplying (1) by $B_1^{-1}$, we arrive at the conclusion that :
$$R=B_2B_1^{-1}=B_2B_1^T\tag{2}$$
(the last expression being due to the fact that $B_1$ is an orthogonal matrix.)
Remark :
a) (2) gives in particular $\det(R)=\det(B_2)\det(B_1)=1$ : $R$ is indeed a rotation.
b) expression (2) says that we can compute the matrix $R$, as remarked by @Jyrki Lahtonen, as the matrix of mutual dot products of the old basis with respect to the new one (it was usual, now less common, to designate these numbers as the "direction cosines" of the transformation; see for example here).
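A small numerical illustration of formula (2) (my addition, using numpy; the basis construction here is just one way to generate random direct orthonormal bases):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_direct_basis():
    u = rng.normal(size=3); u /= np.linalg.norm(u)
    t = rng.normal(size=3)
    v = t - t.dot(u) * u; v /= np.linalg.norm(v)  # Gram-Schmidt step
    w = np.cross(u, v)
    return np.column_stack([u, v, w])             # B = [u|v|w]

B1, B2 = random_direct_basis(), random_direct_basis()
R = B2 @ B1.T                                     # formula (2)
print(np.allclose(R @ B1, B2))                    # True
print(np.isclose(np.linalg.det(R), 1.0))          # True: R is a rotation
```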
|
H: Is there a rigorous proof that $|G|=|\text{Ker}(f)||\text{Im}(f)|$, for some homomorphism $f\,:\,G\rightarrow G'$.
Is there a rigorous proof that $|G|=|\text{Ker}(f)||\text{Im}(f)|$, for some homomorphism $f\,:\,G\rightarrow G'$? Can anyone provide such a proof with explanations?
AI: Here is a proof in full detail.
You said in the comments that you know the first isomorphism theorem, which will make the proof quite simple. Let $f : G \to G'$ be the group homomorphism. The first isomorphism theorem tells us that $G / \ker(f)$ is isomorphic to $\operatorname{img}(f)$; hence $\lvert G / \ker f \rvert = \lvert \operatorname{img}(f) \rvert$. Next, we claim that $\lvert G \rvert = \lvert \ker f \rvert \lvert G / \ker f \rvert$. Once we can show this, we will have that $\lvert G \rvert = \lvert \ker f \rvert \lvert G / \ker f \rvert = \lvert \ker(f) \rvert \lvert \operatorname{img}(f) \rvert$, as desired! So here's the proof that $\lvert G \rvert = \lvert \ker f \rvert \lvert G / \ker f \rvert$:
Proof. Let $\pi : G \to G / \ker f$ be the canonical projection, and pick any splitting $\ell : G / \ker f \to G$ (all we're doing here is picking a representative of each coset), so that $\pi \circ \ell = \operatorname{id}_{G / \ker f}$. Now define $\varphi : G \to (\ker f) \times (G / \ker f)$ by $$\varphi(g) = (g^{-1} \ell(\pi(g)), \pi(g)).$$
$\varphi$ is well-defined because $$\pi(g^{-1}\ell(\pi(g))) = \pi(g)^{-1}\pi(\ell(\pi(g))) = \pi(g)^{-1} \operatorname{id}_{G / \ker f}(\pi(g)) = \pi(g)^{-1} \pi(g) = 1,$$
so $g^{-1} \ell(\pi(g)) \in \ker \pi = \ker f$ for all $g \in G$. Next, we will show that $\varphi$ is injective. Suppose $\varphi(x) = \varphi(y)$ for some $x, y \in G$. Since $\pi(x) = \pi(y)$, we know that $\ell(\pi(x)) = \ell(\pi(y))$. At the same time, we have $x^{-1} \ell(\pi(x)) = y^{-1} \ell(\pi(y))$, so we conclude that $x^{-1} = y^{-1}$, whence $x = y$. Finally, we need to show that $\varphi$ is surjective. Let $(a, b) \in (\ker f) \times (G / \ker f)$ be arbitrary, and let $x = \ell(b)a^{-1}$. Then
$$\pi(x) = \pi(\ell(b)a^{-1}) = \pi(\ell(b)) \pi(a)^{-1} = \operatorname{id}_{G / \ker f}(b)\, 1^{-1} = b,$$
so
$$\varphi(x) = (x^{-1} \ell(\pi(x)), \pi(x)) = (a \ell(b)^{-1}\ell(b),b) = (a,b).$$
We have now shown that $\varphi$ is a bijection, so
$$\lvert G \rvert = \lvert (\ker f) \times (G / \ker f) \rvert = \lvert \ker f \rvert \lvert G / \ker f \rvert.$$
|
H: Help with understanding how to find sum of series
Find the sum to $\sum_{n=0}^{\infty} \frac{(n+1)(n+2)}{3^n}x^n$
I do this:
$\sum_{n=0}^{\infty}x^n=\frac{1}{1-x}$ I then take the derivative twice and get
$\sum_{n=1}^{\infty}nx^{n-1}$
$\sum_{n=2}^{\infty}(n-1)nx^{n-2}$. I then change the sum index to $n=0$ and get
$\sum_{n=0}^{\infty} (n+1)(n+2)x^n=\frac{2}{(1-x)^3}$ and substitute $x$ for $x/3$.
$\sum_{n=0}^{\infty} \frac{(n+1)(n+2)}{3^n} x^n=\frac{2}{(1-x/3)^3}=\frac{54}{(3-x)^3}$
But my question is, why do I not get the same answer if I substitute from the beginning?
$\sum_{n=0}^{\infty} (x/3)^n=\frac{1}{1-x/3}=\frac{3}{3-x}$
$\sum_{n=1}^{\infty} n(x/3)^{n-1}=\frac{3}{(3-x)^2}$
$\sum_{n=2}^{\infty} (n-1)n (x/3)^{n-2}=\frac{6}{(3-x)^3}$
$\sum_{n=0}^{\infty} \frac{(n+1)(n+2)}{3^n} x^n=\frac{6}{(3-x)^3}$
When should one substitute in general to get the right answer? Before or after all the different operations?
AI: Your derivative on the LHS is wrong.
$$
\begin{split}
\frac{d}{dx} \left(\frac{x}{3}\right)^n &= n (x/3)^{n-1} \frac13\\
\frac{d}{dx} \left[\frac{d}{dx} \left(\frac{x}{3}\right)^n \right]
&= n (n-1) (x/3)^{n-2} \frac19.
\end{split}
$$
That factor of $1/9$ is what you are missing: $$\frac{6}{1/9} = 54$$
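A quick numerical check of the corrected closed form (my addition):

```python
x = 0.7  # any point with |x| < 3
S = sum((n + 1) * (n + 2) / 3**n * x**n for n in range(200))
print(S, 54 / (3 - x)**3)  # both ~4.438
```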
|
H: Finding the area of the region defined in polar coordinates by $0\leq\theta\leq\pi$ and $0\leq r\leq\theta^3$
Find the area of the region defined in polar coordinates by $0 \leq \theta \leq \pi$ and $0 \leq r \leq {\theta}^3$.
I tried using the formula
$$A = \int_{0}^{\pi} \frac{1}{2} r(\theta)^2 d\theta$$
However, I don't know which value I should use for $r(\theta)$. I tried using ${\theta}^3$ but it marked it wrong. Here is my work:
$$A = \frac{1}{2} \int_{0}^{\pi} {\theta}^3 d\theta$$
$$A = \left(\frac{1}{2}\right) \left(\frac{1}{4}\right) \theta^4 \Big\rvert_{0}^{\pi}$$
$$A = \frac{1}{8}(\pi)^4$$
What value should I use for $r(\theta)$? What am I missing?
AI: $$\large A=\int_0^\pi\int_0^{\theta^3}rdrd\theta=\int_0^\pi\frac12r^2\Big|_0^{\theta^3}d\theta=\int_0^\pi\frac12\theta^\color{red}6d\theta$$
|
H: Lipschitz continuity and boundedness of derivatives.
Suppose $f(x)$ is a function that has a derivative on the domain of our concern. We all know that if the derivative is bounded then $f$ is Lipschitz continuous. I was wondering if the above statement is an if-and-only-if statement, since if the derivative is unbounded then surely we can find two points that fail the Lipschitz continuity property?
Many thanks in advance!
AI: Yes it is.
Assume that $f$ is differentiable and Lipschitz on $A$; then there exists some $K>0$ such that for all $x, x_0 \in A$ with $x\ne x_0$,
$$\left|\frac{f(x)-f(x_0)}{x-x_0}\right|\le K$$
then if we pass to the limit when $x\to x_0$,
$$|f'(x_0)|\le K$$
so the derivative is necessarily bounded on $A$.
|
H: $\sigma$-algebras and sample space
Does a $\sigma$-algebra always contain the sample space (or the full set over which it is defined) ?
I know the smallest $\sigma$-algebra over $\Omega$ can be defined as $G = \{\emptyset, A, A^{c}, \Omega\}$, and the largest is the power set $\mathscr{P}(\Omega)$, where
$A \subset \Omega$ and $A^{c} \subset \Omega$.
The third condition for the definition of a $\sigma$-algebra states :
$A_{i} \in G$ $\rightarrow$ $\bigcup\limits_{i=1}^{\infty}A_{i} \in G$
But since this union will always equal $\Omega$, this seems to imply that the $\sigma$-algebra always contains the sample space.
Am I missing something here ? Are there any counter-examples ?
AI: The three properties of a $\sigma$-algebra $\mathscr{G}$ are:
i) $\emptyset \in \mathscr{G}$
ii) $A \in \mathscr{G}$ $\rightarrow$ $A^{c} \in \mathscr{G}$
iii) If $A_{i} \in \mathscr{G}$ $\rightarrow$ $\bigcup\limits_{i=1}^{\infty}A_{i} \in \mathscr{G}$
As pointed out in the comment by @Dominik, by property ii. we must have $\Omega \in \mathscr{G}$.
Also, since $\emptyset \cup \Omega = \Omega$, the smallest possible $\sigma$-algebra is actually $\mathscr{G}=\{\emptyset, \Omega\}$ ... and not $\{\emptyset, A, A^{c}, \Omega\}$. The latter would instead be the smallest $\sigma$-algebra containing the set $A$.
|
H: Let $h(Y)$ be a random variable such that $E|h(Y)| < \infty$ and $K(X)$ a random variable. Show that $E[h(Y)|X,K(X)] = E[h(Y)|X]$
I have an idea but I don't know how to write it formally. Well, if we are conditioning over $X$ and $K(X)$, this second random variable could be a transformation of $X$ but since we are also conditioning over $X$, then we can forget about $K(X)$
AI: Conditioning on a random variable or a set of them is actually conditioning on the $\sigma$-algebra generated by that random variable or set of them. Presumably $K$ is
a Borel measurable function. Then for each $\alpha$, $\{K(X) \le \alpha\} = \{X \in K^{-1}((-\infty, \alpha])\}$ is already in the $\sigma$-algebra generated by $X$, so the $\sigma$-algebra generated by $X$ and $K(X)$ is the same as the $\sigma$-algebra generated by $X$.
|
H: Solve $\int_{|z|=2}\frac{e^{3z}}{(z-1)^3}dz$ using residue.
I'm trying to evaluate $$\int_{|z|=2}\frac{e^{3z}}{(z-1)^3}dz$$ using the residue theorem. I get a pole of order $3$ at 1 with a residue of $\frac{9}{2}e^3$. But since the absolute value of the residue (which in this case is exactly the residue) is bigger than 2, does that mean the integral is equal to zero? I'm just really confused, since this is the first in a series of practice problems and I was expecting to have to use the residue theorem.
AI: The absolute value of the residue is irrelevant here. Since$$\operatorname{res}\left(1,\frac{e^{3z}}{(z-1)^3}\right)=\frac92e^3,$$then$$\int_{|z|=2}\frac{e^{3z}}{(z-1)^3}\,\mathrm dz=9e^3\pi i.$$
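If you want to confirm the residue itself, sympy can do it (an aside, assuming sympy is available):

```python
from sympy import symbols, exp, residue, pi, I

z = symbols('z')
r = residue(exp(3 * z) / (z - 1)**3, z, 1)
print(r)               # 9*exp(3)/2
print(2 * pi * I * r)  # 9*I*pi*exp(3), the value of the contour integral
```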
|
H: Number of full binary trees is Catalan, What is the number of Binary trees?
In exercise 12-4 of "Introduction to Algorithms" by Cormen et al. (third edition), they mention that the number of binary trees with $n$ nodes is given by the Catalan numbers,
$$b_n = \frac{1}{n+1}{2n \choose n}$$
In the Wikipedia article on Catalan numbers here however, they say that this is the number of "Full binary trees". Meaning every node has zero or two children.
This seems to be a contradiction. Is this an omission in Cormen's book? Or is the Wikipedia article incorrect? And if $b_n$ is the number of full binary trees with $n$ nodes, what is the number of binary trees? And why should only one of them satisfy the recurrence given in part (a) of exercise 12-4:
$$b_n = \sum_{k=0}^{n-1}b_k b_{n-1-k}$$
AI: You omitted an essential part of the Wikipedia statement: $C_n$ is the count of full binary trees with $n+1$ leaves, not with $n$ nodes as in the other statement. So there’s no contradiction; both statements are correct.
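The recurrence in part (a) indeed generates the Catalan numbers, which is easy to confirm (a check I added, plain Python):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def b(n):  # number of binary trees with n nodes (b(0) = 1: the empty tree)
    if n == 0:
        return 1
    return sum(b(k) * b(n - 1 - k) for k in range(n))

print([b(n) for n in range(8)])
print([comb(2 * n, n) // (n + 1) for n in range(8)])  # identical: Catalan numbers
```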
|
H: What field do entries of eigenvectors belong in?
I have the following problem:
"Given the matrix $A = \begin{pmatrix} 1&i\\ i&1 \end{pmatrix}$, find the eigenspaces of the respective eigenvalues".
First I found the eigenvalues to be $\lambda_1 = 1+i$ and $\lambda_2 = 1-i$. Then, using that the eigenspace of an eigenvalue $\lambda$ is $E_{\lambda}=\text{Ker} (A -\lambda I)$ I found that if $(x,y) \in E_{\lambda_1}$ then $x=y$, and similarly if $(x,y) \in E_{\lambda_2}$ then $x=-y$.
My question arose when I wanted to write down where the $x$ and $y$ entries of the eigenvectors live. WolframAlpha defaults to real entries for the example eigenvectors (it gives $v_1 = (1,1)$ and $v_2 = (1,-1)$), but can the entries be complex too? And if so, is there any type of matrix for which the entries can only be real?
Thank you!
AI: Yes, the eigenvectors can also have complex coefficients. For instance, since $(1,1)$ is an eigenvector, then so is $(i,i)$, since it is equal to $i(1,1)$.
|
H: Problem about inequality with symetric matrices and inner product
Let $A$ and $B$ be two matrices of order $n$ with entries in $\mathbb{R}$.
$\newcommand{\lg}{\langle}$ $\newcommand{\rg}{\rangle}$
a) If $A$ and $B$ are symmetric then
$$ \lg(A^{2} + B^{2})x, x \rg \geq \lg(AB+BA)x,x\rg $$
for any $x \in \mathbb{R}^{n}$ where $\lg,\rg$ means the usual inner product in $\mathbb{R}^{n}$.
hint: Consider $\lg(A-B)^{2}x,x\rg.$
b) If $A$ and $B$ are not symmetric then find a counterexample.
c) If $C$ is another matrix of order $n$ with entries in $\mathbb{R}$ and $\lg Cx,x\rg = \lg Bx,x\rg$ for all $x \in \mathbb{R}^{n}$, what can you say about $B-C$? (Don't suppose $B$ is symmetric.)
I tried to use the hint but I don't see how it helps. I got $$\lg (A-B)^{2}x,x\rg = \lg (A^{2}-AB-BA+B^{2})x,x\rg = \lg (A^{2}+B^{2})x,x\rg - \lg (AB+BA)x,x\rg$$
AI: I suppose your problem focuses on part a). The key is that since $A$ and $B$ are both symmetric, then so is $A-B$, and thus
$$0\le\langle(A-B)x,(A-B)x\rangle=\langle(A-B)^2x,x\rangle,
$$
and then the conclusion follows from your calculation.
|
H: Power series method to solve $(1-x)y^\prime + y = 1 + x, y(0) = 0$
Let $y(x) = \sum_{n=0}^\infty c_nx^n \Rightarrow y^\prime(x) = \sum_{n=0}^\infty nc_nx^{n-1}, y^{\prime\prime}(x) = \sum_{n=0}^\infty n(n-1)c_nx^{n-2}$. Now:
$$(1-x)\sum_{n=0}^\infty nc_nx^{n-1} + \sum_{n=0}^\infty c_nx^n = 1 + x$$
$$\sum_{n=0}^\infty nc_nx^{n-1} -\sum_{n=0}^\infty nc_nx^{n}+ \sum_{n=0}^\infty c_nx^n = 1 + x$$
But I'm having trouble with the right side of the equation.
AI: $$\sum_{n=0}^\infty nc_nx^{n-1} -\sum_{n=0}^\infty nc_nx^{n}+ \sum_{n=0}^\infty c_nx^n = 1 + x$$
Start at $n=1$, since for $n=0$ the first and second sums are zero.
$$\sum_{n=1}^\infty nc_nx^{n-1} -\sum_{n=1}^\infty nc_nx^{n}+ \sum_{n=0}^\infty c_nx^n = 1 + x$$
Now shift the index of the first sum $n \to n+1$:
$$\sum_{\color{red}{n+1}=1}^\infty (\color{red}{n+1})c_{\color{red}{n+1}}x^{\color{red}{n+1}-1} -\sum_{n=1}^\infty nc_nx^{n}+ \sum_{n=0}^\infty c_nx^n = 1 + x$$
$$\sum_{n=0}^\infty (n+1)c_{n+1}x^{n} -\sum_{n=1}^\infty nc_nx^{n}+ \sum_{n=0}^\infty c_nx^n = 1 + x$$
$$c_1+c_0+\sum_{n=1}^\infty ((n+1)c_{n+1}+(1-n)c_n)x^n = 1 + x$$
$$c_1+c_0+2c_2x+\sum_{n=2}^\infty ((n+1)c_{n+1}+(1-n)c_n)x^n = 1 + x$$
From this you deduce that:
$$c_1+c_0=1$$
$$2c_2x=x \implies c_2=\dfrac 12$$
And the recurrence relation (for $n \ge 2$):
$$c_{n+1}=\dfrac {(n-1)}{(n+1)}c_n$$
Finally, the initial condition $y(0)=0$ gives $c_0=0$, and hence $c_1=1$.
|
H: Proving the orthogonality of $\sin\frac{2\pi x}{\pi-e}$ and $\cos\frac{2\pi x}{\pi-e}$
I want to prove the orthogonality of the functions: $\sin\left(\dfrac{2\pi x}{b-a}\right)$ and $\cos\left(\dfrac{2\pi x}{b-a}\right)$, where $b=\pi$ and $a = e$
My work:
$$\begin{align}
\int^{\pi}_{e} \frac{1}{2} \sin\left(\frac{4\pi x}{\pi - e}\right)dx
&= -\frac{\pi - e}{8 \pi} \left[\cos\left(\frac{4\pi x}{\pi - e} \right)\right]^{\pi}_e \tag{1}\\[6pt]
&= \frac{e-\pi}{8\pi}\left[\cos\left(\frac{4\pi^2}{\pi - e} \right) - \cos\left(\frac{4\pi e}{\pi - e}\right) \right] \tag{2}\\[6pt]
&= \frac{\pi - e}{4\pi} \left[\sin\left(\frac{2\pi (\pi - e)}{\pi - e} \right)\sin\left(\frac{2\pi (\pi + e)}{\pi - e} \right) \right] \tag{3}\\[6pt] = 0
\end{align}$$
Have I made any mistake?
AI: No mistake.
In general, two functions $f$ and $g$ are said to be orthogonal on an interval if the integral of their product (the analogue of the dot or inner product of two vectors) over that interval is zero:
$$\langle f,g\rangle =\int f(x)g(x)\,dx=0$$
|
H: Birkhoff ergodic theorem on a lattice
Let $\mathbb{P}_0$ be a probability measure on $\mathbb{R}$. Let $\Omega = \mathbb{R}^{\mathbb{Z}^d}$ and $\mathbb{P} = (\mathbb{P}_0)^{\otimes \mathbb{Z}^d}$ so the the canonical process $X:\Omega \rightarrow \Omega$, defined by $X(\omega) =\omega$, can be regarded as iid random variables on the lattice $\mathbb{Z}^d$. The system is translationally-invariant (ergodic), in the sense that if we define $\tau_n :\Omega \rightarrow \Omega$ by $X_m (\tau_n (\omega)) =X_{m-n} (\omega)$, then $\tau_n$ preserves probability $\mathbb{P}$ and all invariant sets (sets $A$ such that $\tau_n^{-1}(A) =A$ for all $n\in \mathbb{Z}^d$) either have probability 0 or 1. Therefore, it seems that we should be able to have a variation of Birkhoff's ergodic theorem, in the sense that, almost surely, we have
$$
\frac{1}{(2L+1)^d}\sum_{n\in \Lambda_L} f(X_n)\rightarrow\mathbb{E}f(X_0)
$$
where $\Lambda_L =\{-L,-L+1,\dots,L\}^d$ (so that $|\Lambda_L| = (2L+1)^d$).
However, I have only seen Birkhoff's theorem on $\mathbb{N}$ with a single ergodic $\tau: \Omega \rightarrow \Omega$. I'm not sure whether what I'm stating here is true, and if so, the details of its proof. Any references or sketches of the proof would be appreciated.
AI: The averaging often depends on the particular choice of $\Lambda_L$, but by now results in the direction that you mention should be considered well known.
See for example the paper "Pointwise theorems for amenable groups" by Elon Lindenstrauss in Inventiones Mathematicae 146 (2001), 259-295.
https://link.springer.com/article/10.1007/s002220100162
For a more down to earth reference and specifically for $\mathbb Z^d$ only, have a look at Keller's book Equilibrium States in Ergodic Theory, Cambridge University Press, 1998.
https://www.cambridge.org/core/books/equilibrium-states-in-ergodic-theory/4D3BF3EC4BD0C957FE1F076BD1379710
As always, one should be careful for $d>1$ with the lack of uniqueness of equilibrium measures.
|
H: Schwartz space $S(\mathbb{R})$ is dense inside $L_p(\mathbb{R})$-spaces
I was wondering why the Schwartz functions $S(\mathbb{R})$ are dense inside the $L_p(\mathbb{R})$ spaces and I was reading this answer, but I don't understand why the $g_t$ are in $S(\mathbb{R})$. Could someone explain this?
AI: Note the following first:
The convolution with any function with compact support is given by an integral on a compact interval.
So now you can establish the second point:
The convolution with any $C^\infty$ function given by an integral on a compact interval gives a $C^\infty$ function (basically you can take the derivatives inside).
|
H: Is the definition of the Riemann sum from Thomas' Calculus correct?
I'm having trouble with the theoretical understanding of the Riemann sum, given this explanation/definition from Thomas' Calculus. I checked Wikipedia and it seems to state virtually the same:
On each subinterval we form the product $f(c_k)\,\Delta x_k$. This product is positive, negative, or zero, depending on the sign of $f(c_k)$. When $f(c_k) > 0$, the product $f(c_k)\,\Delta x_k$ is the area of a rectangle with height $f(c_k)$ and width $\Delta x_k$. When $f(c_k) < 0$, the product $f(c_k)\,\Delta x_k$ is a negative number, the negative of the area of a rectangle of width $\Delta x_k$ that drops from the $x$-axis to the negative number $f(c_k)$.
Finally we sum all these products to get:
$$
S_p = \sum_{k=1}^{n}{f(c_k)}\,\Delta x_k
$$
… Any Riemann sum associated with a partition of a closed interval [a, b] defines rectangles that approximate the region between the graph of a continuous function ƒ and the x-axis. Partitions with norm approaching zero lead to collections of rectangles that approximate this region with increasing accuracy
To illustrate the problem, suppose we want to approximate the area between $f(x) = -x$ and the $x$-axis on the interval $[-1, 1]$. The area is $1$, but the Riemann sum should give something close to $0$.
Is the statement that any Riemann sum with the norm approaching $0$ approximates the area with increasing accuracy correct? It seems not, since in the example above the sum tends to $0$ as the norm approaches $0$, which is not "increasing accuracy". Does it miss the part that one should take the absolute values of the rectangles' areas?
Thank you.
AI: The Riemann sum approaches the signed area. In your picture, the green area is positive, and the red area is negative. The Riemann sum should approach $0$, which is the accurate signed area for $f(x)=-x$ on the interval $[-1,1]$. If you don't like that, try $f(x)=|x|$.
|
H: Find linear map depending on parameter t
I'm currently sitting on the following problem:
For what $t \in \mathbb{R}$ does there exist a linear map $\varphi_t: \mathbb{R}^{1\times 3}\rightarrow \mathbb{R}^{2}$ with
$\varphi_t:\left\{
\begin{array}{lcl}
\left( 0, -1, 0 \right) & \mapsto &
\left( \begin{array}{c} -1 \\ 1 \end{array}\right), \\
\left( 0, -1, 1 \right) & \mapsto &
\left( \begin{array}{c} t \\ 1 \end{array} \right), \\
\left( t-1, -1, 1 \right) & \mapsto &
\left( \begin{array}{c} t \\ t \end{array} \right)
\end{array} \right.$
Give a linear mapping for every such $t$ by specifying the image $\varphi(\left( a, b, c \right))$ for $a,b,c \in \mathbb{R}$.
For what $t$ is this mapping unique?
This looks like it should just be a calculation, however I don't even know where to start on this one. Could someone enlighten me please or give a hint?
AI: If $t = 1$, the last two domain vectors $(0,-1,1)$ and $(t-1,-1,1)$ are equal, while the first two vectors are linearly independent, so the three domain vectors span a plane in $\mathbb{R}^3$. So there is no unique linear map, since to define a linear map we can choose any third vector not in the span of those two vectors and send this to whatever point in $\mathbb{R}^2$ we like.
If $t \neq 1$, the three domain vectors are linearly independent, hence they form a basis for $\mathbb{R}^3$. Then the given definition of $\phi_t$ defines a unique linear map, because a linear map is determined by its values on a basis.
To find the image, we can proceed as follows. First, we can find out how to express any vector $(x,y,z)$ as a linear combination of the given basis $\{ (0, -1, 0), (0, -1, 1), (t - 1, -1, 1) \}$ by solving the augmented matrix: $$\left[ \begin{array}{ccc | c} 0 & 0 & t - 1 & x\\ -1 & -1 & -1 & y\\ 0 & 1 & 1 & z \end{array} \right].$$ If we reduce this, the coefficients are $\displaystyle -y-z, z - \frac{x}{t-1}$ and $\displaystyle \frac{x}{t-1}$.
Therefore, for any vector $(x,y,z)$, we have $$ \displaystyle \phi_t(x,y,z) = (-y-z) \phi_t(0,-1,0) + (z - \frac{x}{t-1})\phi_t (0, -1, 1) + \frac{x}{t-1}\phi_t (t-1, -1, 1) = (-y-z)(-1,1) + (z-\frac{x}{t-1})(t, 1) + \frac{x}{t-1}(t, t) = … = (y+z+zt,\; x-y) = x(0,1) + y(1, -1) + z(1+t, 0).$$
Therefore, the image can be described as $$\text{image of $\phi_t$} = \{ \phi_t(x, y, z): (x, y, z) \in \mathbb{R}^3 \} = \{ x(0,1) + y(1, -1) + z(1 + t, 0): x, y, z \in \mathbb{R} \} = \text{span$\{ (0,1), (1,-1),(1+t, 0)\}$},$$
which is just all of $\mathbb{R}^2$.
Remark 1: Actually, you don't need to do any calculations if you are only interested in finding what the image of $\phi_t$ is. We can see that the given codomain vectors $(-1, 1), (t, 1)$ and $(t, t)$ will span all of $\mathbb{R}^2$ no matter what $t$ is, so the image is always all of $\mathbb{R}^2$.
Remark 2: Instead of finding out how $(x,y,z)$ can be written as a linear combination of those basis vectors, which is what I did above, you could also just find out what $\phi_t$ does to each standard basis vector $e_i$. Then $\phi_t(x,y,z) = x\phi_t(e_1) + y\phi_t(e_2) + z\phi_t(e_3)$.
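A quick symbolic cross-check of the formula above (my addition; it assumes SymPy and a generic $t\neq1$):

```python
# Solve for the matrix of phi_t from the three prescribed values and confirm
# phi_t(x, y, z) = (y + z + t*z, x - y).
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
unknowns = sp.symbols('m0:6')
M = sp.Matrix(2, 3, unknowns)                # unknown 2x3 matrix of phi_t
eqs = [M * sp.Matrix([0, -1, 0]) - sp.Matrix([-1, 1]),
       M * sp.Matrix([0, -1, 1]) - sp.Matrix([t, 1]),
       M * sp.Matrix([t - 1, -1, 1]) - sp.Matrix([t, t])]
sol = sp.solve([entry for eq in eqs for entry in eq], unknowns, dict=True)[0]
print(sp.simplify(M.subs(sol) * sp.Matrix([x, y, z])))
# Matrix([[t*z + y + z], [x - y]])
```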
|
H: Distance from a point to a complement of a set
Let $D=\{(x,y) \in \mathbb{R}^{2}: x^2+y^2 \leq 1\}$ be the unit disk and consider $U$ be a open subset of $\mathbb{R}^{2}$ such that $D \subset U$. Since $D$ is compact and $U^c$ is closed, $\operatorname{dist}(D,U^c)=r>0$.
Intuitively, it seems that $\|y\| \geq r+1$ for all $y \in U^c$ (or $B[0,1+r] \subset U$).
My question: Is this statement true? More generally, can we generalize this to any metric space?
My attempt: Let $y \in U^c$ and $\displaystyle x=\frac{y}{\|y\|}$, then $y \in D^c=\{z \in \mathbb{R}^{2}:\|z\|>1\}$ and
\begin{eqnarray*}
\|y\|^2&=&\|y-x+x\|^2\\
&=& \|y-x\|^2+2(x,y-x)+\|x\|^2\\
&\geq& r^2+2\left(\frac{y}{\|y\|},\|y\|\frac{y}{\|y\|}-\frac{y}{\|y\|}\right)+1\\
&=&r^2+2\left(\frac{y}{\|y\|},\frac{y}{\|y\|}(\|y\|-1) \right)+1\\
&=&r^2+2(\|y\|-1)+1\\
&\geq&r^2+1,
\end{eqnarray*}
since $\|y\|-1>0$. But this inequality does not provide the inclusion that I want.
AI: You have obtained $\|y\|^2\geq r^2+2(\|y\|-1)+1$. Continuing with this,
$$(\|y\|-1)^2=\|y\|^2-2\|y\|+1\geq r^2\implies\|y\|\geq 1+r$$
I don't think you can generalize to arbitrary metric space, though, for two reasons.
A metric space may not be equipped with a norm, i.e. $\|y\|$ may not be defined.
Even though we work with normed vector spaces, the unit ball may not be compact.
But the generalization to high dimensional Euclidean spaces should be fine.
|
H: Find $\lim\limits_{x \to \infty}{\mathrm{e}^{-x}\int_{0}^{x}{f\left(y\right)\mathrm{e}^{y}\,\mathrm{d}y}}$
Given $f(x)$ is a continuous function defined in $(0,\: \infty)$ such that $$\lim_{x \to \infty}f(x)=1$$ Then Find $$L=\lim_{x \to \infty}{\mathrm{e}^{-x}\int_{0}^{x}{f\left(y\right)\mathrm{e}^{y}\,\mathrm{d}y}}$$
My try:
we have $$L=\lim_{x \to \infty}{\frac{\int_{0}^{x}{f\left(y\right)\mathrm{e}^{y}\,\mathrm{d}y}}{e^x}}$$
If the numerator is finite then $L=0$ else by L'Hopital's Rule we have $\infty/\infty$ form we get
$$L=\lim_{x \to \infty}\frac{f\left(x\right)\mathrm{e}^{x}}{\mathrm{e}^x}=\lim_{x \to \infty}{f\left(x\right)}=1$$
But how to tell whether $\lim\limits_{x \to \infty}\int_{0}^{x}{f\left(y\right)\mathrm{e}^{y}\,\mathrm{d}y}$ is Finite or Infinite?
AI: Hint: Since $\lim_{x\to\infty}f(x)=1$, we have $f(x)>1/2$ for all $x>N$, where $N$ is sufficiently large. Ignoring the integral over $[0,N]$ (which is a fixed finite value), we consider the remaining part:
$$\int_N^\infty f(x)e^xdx\geq\frac{1}{2}\int_N^\infty e^xdx$$
|
H: What is $\|f\|_{L^{\infty}}$?
For
$$f(x):=\frac{e^x-e^{-x}}{e^x+e^{-x}}$$
what is $\|f\|_{L^{\infty}}$?
Is it just $1$?
AI: We have $f(x)=\tanh{x}$ which is continuous on the whole real line. Since $f^{\prime}(x)=1/\cosh^{2}{x}>0$ for all $x\in \mathbb{R}$, then $f$ is strictly increasing on $\mathbb{R}$. Therefore,
$$\|f\|_{\infty}=\sup_{x\in \mathbb{R}}|f(x)|=\lim_{x\longrightarrow +\infty}|f(x)|=1.$$
|
H: Prove universal morphism is unique up to unique isomorphism.
I'm following along Wikipedia page(https://en.wikipedia.org/wiki/Universal_property) on universal property, and this seems it should be trivial, but I couldn't finish the proof.
The definition I am working with:
The problem
So I want to prove that if $(A, u)$ and $(A', u')$ are universal morphism from $X$ to $F$, then there exists unique isomorphism $k: A \rightarrow A'$ such that $u' = F(k) \circ u$.
My attempt:
For any $B \in C$ and $f: X \rightarrow F(B)$, there exists a unique $h: A \rightarrow B$ such that $f = F(h) \circ u$.
Letting $B = A'$ and $f = u'$, we see that there exists a unique $k: A \rightarrow A'$ such that $Fk \circ u = u'$.
This shows existence and uniqueness. We only need that $k$ is an isomorphism, but this is the part I cannot prove.
Similarly, there exists unique $k': A' \rightarrow A$ such that $Fk' \circ u' = u$.
Plugging into each equation, we get
$$Fk \circ Fk' \circ u' = F(k \circ k') \circ u' = u'$$
$$Fk' \circ Fk \circ u = F(k' \circ k) \circ u = u$$
but this of course doesn't imply $F(k \circ k') = id$, so it seems useless.
Thanks for your help!
AI: You've derived the equation $F(k'\circ k)\circ u=u$. So $k'\circ k\colon A\to A$, but since $(A,u)$ is a universal morphism, by definition there is a unique morphism $h\colon A\to A$ such that $F(h)\circ u=u$. (This is the condition you get when you apply the universal property to the universal morphism itself.) But $h=1_A$ also works, so by uniqueness, $k'\circ k=1_A$.
Reversing the roles, as you've already done, shows $k\circ k'=1_{A'}$, so $k$ and $k'$ are isomorphisms.
|
H: Maximize area of isosceles triangle with given median
Given an isosceles triangle with sides $a, b, c$, $a = b$, and the median drawn to side $b$, denoted $m_b$.
Note: I need to maximize the area of the triangle, and I need to solve it using inequalities, not Lagrange multipliers etc.
Firstly I tried to solve it using the median formula $m_b^2 = \dfrac{2a^2 + 2c^2 - b^2}{4} = \dfrac{a^2}{4} + \dfrac{c^2}{2}$ and Heron's formula for the area of the triangle:
\begin{align}
S^2 &= p(p - a)(p-b)(p -c), \; p = \dfrac{2a+c}{2} = a + \dfrac{c}{2}, \\
S^2 &= \Big(a^2 - \dfrac{c^2}{4} \Big) \dfrac{c^2}{4} = \Big(\dfrac{a^2}{4} - \dfrac{c^2}{16}\Big)c^2
= \Big(m^2_b - \dfrac{9c^2}{16}\Big) c^2 = \\
&=\Bigg(\sqrt{\Big(m^2_b - \dfrac{9c^2}{16}\Big) c^2 }\Bigg)^2 \overset{AM-GM}{\leq}
\Big(\dfrac{m^2_b}{2}+ \dfrac{7c^2}{32}\Big)^2
\end{align}
Then, since the inequality turns into an equality iff $m^2_b - \dfrac{9c^2}{16} = c^2$, I got $c = \dfrac{4}{5}m_b$
and $S_{\max} = \dfrac{16}{25}m^2_b$.
Well, this seems to be the wrong answer.
How can I solve it the right way?
AI: You could try completing the square instead (your AM–GM bound still depends on $c$, so its equality case does not locate the maximum):
\begin{align}
S^2 &= \left(m^2_b - \dfrac{9c^2}{16}\right)c^2= m^2_b c^2 - \dfrac{9c^4}{16} \\ &= \left(\dfrac23m^2_b\right)^2 - \left(\dfrac23m^2_b\right)^2 + 2\left(\dfrac23m^2_b\right) \left(\dfrac34c^2\right) - \left(\dfrac34c^2\right)^2 \\ &= \dfrac49m^4_b - \left( \dfrac23m^2_b-\dfrac34c^2\right)^2 \\ &\le \dfrac49m^4_b
\end{align}
with equality (i.e. a maximum) when $\frac23m^2_b=\frac34c^2$, i.e. $c= \sqrt{\frac89}m_b$, giving $S_{\max}=\frac23m^2_b$.
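A small numeric cross-check (my addition), scanning $S(c)=c\sqrt{m_b^2-\frac{9c^2}{16}}$ on a grid for $m_b=1$:

```python
# The maximum should sit near c = sqrt(8/9) ~ 0.9428 with S_max = 2/3.
import math

m = 1.0
best = max((c * math.sqrt(max(m * m - 9 * c * c / 16, 0.0)), c)
           for c in (i / 10**4 for i in range(1, 13334)))
print(best)   # approximately (0.6667, 0.9428)
```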
|
H: Show that if $A$ and $B$ are nonempty, disjoint, compact subsets of $(\mathbb{R}^m,||.||_2)$, then $\inf\{||a-b||_2:a\in A,b\in B\} > 0$
Show that if $A$ and $B$ are compact subsets of $(\mathbb{R}^m,||.||_2)$, nonempty and disjoint, then $$\inf\{||a-b||_2:a\in A,b\in B\} > 0$$
I know the definitions and I have been trying for a while, but I'm stuck with this proof. Any suggestions would be great!
AI: Consider the function $f: A\times B \to \mathbb{R}$; $(a, b) \mapsto \|a-b\|_2$.
Since this function is continuous and the domain $A\times B $ is compact, there exist $a_0 \in A$, $b_0 \in B$ which satisfy $\|a_0 - b_0\|_2 = \inf \operatorname{im} f$. If this infimum were zero, we would have $\|a_0- b_0\|_2 = 0$, which implies $a_0 = b_0$, so $a_0 = b_0 \in A \cap B $, which is a contradiction.
|
H: Find a constant $c$, such that $f(x)\leq cx^2$ for every $x\geq 0$?
For
$$f(x):=\log(e^x+e^{-x})$$ and
$$f'(x)=\frac{e^x-e^{-x}}{e^x+e^{-x}}$$
Can we find a constant $c$, such that $f(x)\leq cx^2$ for every $x\geq 0$?
Clearly, $f'(0)=0$. It seems we need to have a bound on $f''$.
Based on some of the answers, $f(x)\leq x^2$ for $x>1$. But what I want is a bound valid for all $x\geq 0$. Is there any bound of the form
$$f(x)\leq c x^n+\log 2,$$ where $n\in \mathbb{N}$ ?
AI: Use the inequality $ \cosh(x) \leq e^{x^2/2} $, which gives $ \log(e^x+e^{-x}) \leq \log 2 + x^2/2 $.
|
H: Can $\frac{\mathrm d}{\mathrm dx}$ increase the support of a function?
Let "support" mean the closed support.
Can $\frac{\mathrm d}{\mathrm dx}$ increase the support of a function? That is, is there any $f=f(x)$ in one variable with $\operatorname{supp}(f)$ completely contained in $\operatorname{supp}(f')$? Can we choose $f$ smooth?
AI: It is the contrary. The derivative "decreases" the support. If $x\notin{\rm supp}f$, then for some neighborhood $N_x$ of $x$, $f\equiv0$ in $N_x\implies f'\equiv0$ in $N_x\implies N_x\subset({\rm supp}(f'))^c$. Hence by taking the union over $x\notin {\rm supp}(f)$,
$${\rm supp}(f)^c\subset{\rm supp}(f')^c\implies{\rm supp}(f')\subset{\rm supp}(f)$$
A remark: taking the derivative decreases the support strictly if $f$ is a nonzero constant on some open set.
|
H: What are the advantages of writing standard calculus in terms of Lie differentiation?
Say $\dot{x} = f(x)$, $x\in\mathcal{M}$, $\phi: \mathcal{M} \to \mathbb{R}$.
Then $\dot{\phi} = f \cdot \nabla_x \phi $.
However, one can define a Lie derivative and write
$f \cdot \nabla_x \phi = \mathcal{L}_f \phi$.
QUESTION:
What is the point of doing this? Are there any nice things we could get from writing things with the Lie derivative that are hard to get in standard calculus?
AI: The wikipedia article on Lie Derivative actually nicely answers this question:
"A 'naïve' attempt to define the derivative of a tensor field with respect to a vector field would be to take the components of the tensor field and take the directional derivative with respect to the vector field of each component. However, this definition is undesirable because it is not invariant under changes of coordinate system"
It is that last sentence that is of vital importance, as much of the power of the machinery of differential geometry is in its independence from choice of coordinates (hence you can choose convenient ones for the problem at hand).
|
H: Axiomatizability of a relative complement
Given a fixed first order lexicon $\mathcal{L}$, suppose $\mathcal{K}$ is an axiomatizable class such that $\mathcal{K}\subseteq Mod(\varphi)$ for some sentence $\varphi$. If $Mod(\varphi)-\mathcal{K}$ is axiomatizable, Does $\mathcal{K}$ necessarily to be finitely axiomatizable?
I proved the converse, but I don't know how to deal with this direction. I've already proved using compactness that if $\mathcal{K}$ and $\mathcal{K}^{c}$ are both axiomatizable classes, then $\mathcal{K}$ needs to be finitely axiomatizable. At this point, I tried to use that fact, but this is not the case here because $Mod(\varphi)-\mathcal{K}$ is a relative complement, whereas $\mathcal{K}^{c}$ is the universal complement.
I would appreciate some hint. Thanks in advance!
AI: One way to do this is to shift attention away from $\mathcal{K}$ itself, so that we don't get stuck inside $Mod(\varphi)$.
Let $$\mathcal{K}_\varphi=\mathcal{K}\cup\{\mathfrak{A}:\mathfrak{A}\models\neg\varphi\}.$$
Now $Mod(\varphi)-\mathcal{K}=(\mathcal{K}_\varphi)^c$, and we're assuming that that's axiomatizable. Our natural next step is to show that $\mathcal{K}_\varphi$ is axiomatizable:
Suppose $\mathcal{K}=Mod(\Gamma)$ and let $\Gamma'=\{\gamma\vee\neg\varphi:\gamma\in\Gamma\}$.
In fact, a minor tweak of this shows that the union of finitely many axiomatizable classes is axiomatizable:
Given $\Gamma_1,...,\Gamma_k$ we have $$Mod(\{\gamma_1\vee...\vee\gamma_k: \gamma_1\in\Gamma_1,...,\gamma_k\in\Gamma_k\})= \bigcup_{1\le i\le k}Mod(\Gamma_i).$$
OK, so we have that $\mathcal{K}_\varphi$ and $(\mathcal{K}_\varphi)^c$ are each axiomatizable. So we know that $\mathcal{K}_\varphi$ is in fact finitely axiomatizable. We now want to turn a finite axiomatization of $\mathcal{K}_\varphi$ into a finite axiomatization of $\mathcal{K}$:
If $\mathcal{K}_\varphi=Mod(\Theta)$, think about $Mod(\{\theta\wedge\varphi:\theta\in\Theta\})$.
|
H: Find the values of t for which the series is convergent
$$\sum _ { n = 1 } ^ { \infty } \frac { n ^ { 3 n } } { ( n + 2 ) ^ { 2 n + t } ( n + t ) ^ { n + 2 t } }$$
I used the ratio test, but then I saw that it failed (it was inconclusive).
Right or wrong?
AI: Using root test, I find:
$$\sqrt[n]{\frac { n ^ { 3 n } } { ( n + 2 ) ^ { 2 n + t } ( n + t ) ^ { n + 2 t } }} = \frac { n^3 } { ( n + 2 ) ^ { 2 + t/n } ( n + t ) ^ { 1 + 2 t/n } } = \frac { 1 } { n^{3t/n}\ ( 1 + 2/n ) ^ { 2 + t/n } ( 1 + t/n ) ^ { 1 + 2 t/n } } \to 1$$
therefore the root test is inconclusive and, by a well-known theorem, the ratio test also has to be inconclusive.
On the other hand, I also find:
$$\frac { n ^ { 3 n } } { ( n + 2 ) ^ { 2 n + t } ( n + t ) ^ { n + 2 t } } = \frac { n ^ { 3 n } } { n^{3n+3t} ( 1 + 2/n ) ^ { 2 n + t } ( 1 + t/n ) ^ { n + 2 t } } = \frac { 1 } {n^{3t} ( 1 + 2/n ) ^ { 2 n } (1+2/n)^{ t } ( 1 + t/n ) ^ { n } (1+t/n)^{2 t } } \approx \frac{1}{e^{4+t}\ n^{3t}}$$
therefore your series converges for $3t >1$ by the limit comparison test with $\sum n^{-3t}$.
|
H: How to prove that there is an open simply connected subspace containing a simple closed curve
Let $\Omega$ be an open set in $\mathbb C$. Let $\gamma$ be a simple closed curve (i.e., $\gamma$ is homeomorphic to $S^1$) in $\mathbb C$. Let $W$ be the bounded component of $\mathbb C\setminus\gamma$. Suppose $\gamma\cup W\subseteq\Omega$. Then is it true that there is an open simply connected $\Omega'$ such that $\gamma\cup W\subseteq\Omega'\subseteq\Omega$?
I tried to prove this as follows. For each $z\in\gamma$, choose an open ball $B(z,\epsilon_z)\subseteq\Omega$. Let $\Omega'=W\cup(\bigcup_{z\in\gamma}B(z,\epsilon_z))$, where $W$ is a bounded component of $\mathbb C\setminus\gamma$. I showed that $\Omega'$ is open path connected and that $\gamma\cup W\subseteq\Omega'\subseteq\Omega$. So it remains to show that a fundamental group of $\Omega'$ is trivial.
AI: You know that $\gamma \subset \mathbb{C}$ can be seen as a curve in $\mathbb{R}^2$. Using the Schoenflies strengthening of the Jordan curve theorem, there is a homeomorphism $f: \mathbb{R}^2 \to \mathbb{R}^2$ that sends $\gamma$ to the unit circle, and your $W$ gets sent to the open unit disk. It will send $\Omega$ to some open set around the unit disk. Let us construct a set $V$ so that $f(\gamma \cup W) \subseteq V \subseteq f(\Omega)$. Let $\epsilon_\theta$ be the radial distance along the angle $\theta$ between the unit circle $f(\gamma)$ and the first point where the outward ray meets $\partial f(\Omega)$. We can define $\epsilon = \inf_\theta \epsilon_\theta$.
Notice that this infimum cannot be $0$. That would require a sequence of points of $\partial f(\Omega)$ approaching a point of $f(\gamma)$, and then this point of $f(\gamma)$ would lie in $\partial f(\Omega)$, since $\partial f(\Omega)$ is closed. This can't happen: $\Omega$ is open, so $f(\Omega)$ is open and contains none of its boundary points, and $f(\gamma)$ meeting $\partial f(\Omega)$ would contradict the fact that $\gamma \subset \Omega$.
Now let $V$ be the open disk of radius $1 + \epsilon$, and let $\Omega' = f^{-1}(V)$. Since $\Omega'$ is homeomorphic to an open disk, it is simply connected.
|
H: Does this sequence of functions converge uniformly?
Does $$f_n (x)= n^3 x^n (1-x)^4$$ converge uniformly for $$ x\in [0,1]$$?
I have this; it's clear that $$\lim_{n\to \infty}f_n (x)= \lim_{n\to \infty} n^3 x^n (1-x)^4 =0$$
Then
$$ |f(x)-f_n (x)|=|f_n (x)| $$
So, note that
$$n^3 x^n (1-x)^4 \leq n^3 x^n \leq n^3 < \epsilon $$ for all $$ x\in [0,1]$$
Thus I took
$$ \frac {1 }{\sqrt[3]{\epsilon}}<\frac{1}{n},\frac {1 }{\sqrt[3]{\epsilon}}+1<\frac{1}{n}+1= \frac {n+1}{n}<n+1$$
Finally $$f_n(x)$$ converges uniformly for $$N=\frac {1 }{\sqrt[3]{\epsilon}}+1$$ for all $$N\leq n$$
I'm not sure if the epsilon is correct?
AI: You have $f_n(x)\geq 0$ in $[0,1]$ and $f_n \in C^\infty(]0,1[) \cap C([0,1])$, therefore you can use the $M$-test.
In order to find $M_n=\max |f_n - f| = \max f_n$, you can differentiate to find:
$$f_n^\prime (x) = n^3 \Big[ n x^{n-1}(1-x)^4 - 4x^n (1-x)^3\Big] = n^3 x^{n-1}(1-x)^3 \Big[n-(n + 4)x\Big] \geq 0\quad \text{iff}\quad 0< x\leq \frac{n}{n+4}$$
therefore in $\frac{n}{n+4}$ the function $f_n$ takes its maximum, which is:
$$M_n = f_n \left( \frac{n}{n+4}\right) = n^3 \left( \frac{n}{n+4}\right)^n \left( 1 - \frac{n}{n+4}\right)^4 = \frac{4^4n^3}{(n+4)^4} \left( 1 - \frac{4}{n+4}\right)^n\; .$$
Since $M_n \to 0$ as $n\to \infty$, your sequence converges uniformly on $[0,1]$. ;-)
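A quick numeric illustration (my addition) that these maxima really do decay:

```python
# Evaluate M_n = f_n(n/(n+4)) for growing n; the values shrink toward 0,
# which is exactly what uniform convergence requires.
def M(n):
    x = n / (n + 4)
    return n**3 * x**n * (1 - x)**4

for n in (10, 100, 1000, 10000):
    print(n, M(n))
```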
|
H: Convergence $\sum \frac {1} {k^2}$
Let $a_n=\frac{1}{1^2} + \frac{1}{2^2} + .....+\frac{1}{n^2}\;\forall n \in \mathbb{Z_+}$. Prove that $a_n \leq 2 - \frac{1}{n} \;\forall n \in \mathbb{Z_+}$.Deduce the convergence of ${a_n}$.
I have proved the inequality using mathematical induction, but I'm stuck at proving the convergence of $\{a_n\}$. Please help.
AI: $a_{n}=\displaystyle \sum_{k=1}^{n}\dfrac{1}{k^{2}}$. Note that $\dfrac{1}{k^{2}}$ is smaller than $\dfrac{1}{k(k-1)}$ for positive integer $k$ if $k$ is not $1$. So $a_{n}$ is smaller than $1+\displaystyle \sum_{k=2}^{n} \left(\dfrac{1}{k-1}-\dfrac{1}{k}\right)=2-\dfrac{1}{n}$. This proves inequality.
Since $a_{n}\le 2-\dfrac{1}{n}\le 2$, the sequence $a_{n}$ is bounded above, and $a_{n+1}-a_{n}=\dfrac{1}{(n+1)^{2}}>0$, so it is an increasing sequence. By the monotone convergence theorem, $a_{n}$ converges.
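A tiny empirical check of the inequality (my addition):

```python
# Verify a_n <= 2 - 1/n for n = 1..10; equality holds at n = 1.
a = 0.0
for n in range(1, 11):
    a += 1 / n**2
    print(n, a <= 2 - 1 / n)
```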
|
H: mapping between sets
If $f$ is a mapping between two sets $A$ and $B$ and if $ a\in A $ and $f(a)=a$ what is that called? I looked though some Abstract Algebra books but couldn't recall.
AI: If $f(a) = a$ that's called a "fixed point".
It is usually assumed that $f: X\to X$, but that isn't necessary. It is necessary that $a \in A\cap B$ (so in particular $A\cap B \ne \emptyset$), though.
|
H: Nonhomogenous examples in Fraïssé limit
On the wiki page on Fraïssé limits, it says that neither $⟨\Bbb{N}, < ⟩$ nor $⟨\Bbb{Z}, < ⟩$ is the Fraïssé limit of FCh (the Fraïssé class of finite chains), because although both of them are countable and have FCh as their age (the class of all finitely generated substructures), neither one is homogeneous.
Then it gives such examples as substructures $⟨ { 1 , 3 } , < ⟩$ and $⟨ { 5 , 6 } , < ⟩$, and the isomorphism $1 ↦ 5, 3 ↦ 6$ between them. It concludes that this cannot be extended to an automorphism of $⟨\Bbb{N}, < ⟩$ or $⟨\Bbb{Z}, < ⟩$, since there is no element to which we could map $2$, while still preserving the order.
I think the substructure $⟨ { 1 , 3 } , < ⟩$ and $⟨ { 5 , 6 } , < ⟩$ are sums of $a+3b$ and $5a+6b$. So the isomorphism is $\phi(a+3b)=5\phi(a)+6\phi(b)$. But if we take $a=-1, b=1$ and $a'=-2,b'=2$, then $\phi(2)=2$. So what does it mean really?
AI: The last paragraph makes no sense at all to me. How do you propose to read $\phi(a+3b) = 5a'+6b'$ as the definition of a function $\mathbb{Z}\to\mathbb{Z}$?
In any case, if $\phi\colon \mathbb{Z}\to \mathbb{Z}$ is a map with the property that $\phi(1) = 5$, $\phi(3) = 6$, and $\phi(2) = 2$, then $\phi$ is not an isomorphism, since $1 < 2$, but $\phi(1) = 5 > 2 = \phi(2)$, so $\phi$ does not preserve $<$.
In fact, for any integer $n$, if $\phi\colon \mathbb{Z}\to \mathbb{Z}$ is a map with the property that $\phi(1) = 5$, $\phi(3) = 6$, and $\phi(2) = n$, then $\phi$ is not an isomorphism. This is because $1 < 2 < 3$, so for $\phi$ to be an isomorphism, we'd have to have $5 = \phi(1) < \phi(2) < \phi(3) = 6$, but there is no integer strictly between $5$ and $6$.
Edit: Upon rereading, I've realized what you mean by "sums of $a + 3b$". In the structure $(\mathbb{Z};0,+)$, the substructure generated by $1$ and $3$ would be $\{a+3b\mid a,b\in \mathbb{N}\}$ (which is just $\mathbb{N}$). But note that $+$ is not in the language! The only symbol in the language is $<$, and the substructure generated by $1$ and $3$ is just $\{1,3\}$. The isomorphism $(\{1,3\},<)\to (\{5,6\},<)$ is exactly as stated: $1\mapsto 5$, $3\mapsto 6$.
|
H: Original Fraïssé's paper and texts on Fraïssé theory
I wonder where I can find Fraïssé's paper "Sur l'extension aux relations de quelques propriétés des ordres", which appeared in Annales Scientifiques de l'École Normale Supérieure, Troisième Série 71 (1954), 363–388 (or an English translation).
Also I'd like to know which texts on model theory discuss the details of Fraïssé theory, one main result being that a countable homogeneous structure is completely determined by its age.
AI: I'm not sure about an English translation of that particular paper, but Fraïssé's book Theory of Relations is available in English translation (this includes what is today called Fraïssé theory, in Section 11.1).
I agree with HallaSurvivor's comment that Hodges' A Shorter Model Theory is the best textbook reference for Fraïssé theory.
|
H: $(x,y)$ pairs in lattice $Z^2$ that are co-prime with euclidean-norm at most $k$
Let $B(k) = \{(x,y)\in \mathbb{Z}^2 ~|~ x^2+y^2\leq k^2\}$, where $\mathbb{Z}$ is the set of integers.
It is quite straightforward to show that $|B(k)|$ is $\Theta(k^2)$.
My question is whether the number of co-prime pairs $(x,y)$ in $B(k)$ is also $\Theta(k^2)$, or whether it is asymptotically smaller than $k^2$.
AI: This is the Primitive Circle Problem, the asymptotics are still $\Theta(k^2)$ (and the constant is known to be $6/\pi$).
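An empirical check of the $\frac{6}{\pi}k^2$ asymptotic (my addition):

```python
# Count primitive (coprime) lattice points in the disk of radius k and
# compare with (6/pi) * k^2; the ratio should tend to 1.
from math import gcd, pi

def coprime_count(k):
    return sum(1 for x in range(-k, k + 1)
                 for y in range(-k, k + 1)
                 if x * x + y * y <= k * k and gcd(x, y) == 1)

for k in (50, 100, 200):
    print(k, coprime_count(k) / ((6 / pi) * k**2))
```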
|
H: Infinite Cartesian Product: Understanding
I'm having a bit of trouble understanding the definition of the infinite cartesian product, particularly with the intuition behind it.
According to my textbook, Enderton's Elements of Set Theory, the infinite cartesian product takes the cartesian product of each set $X_i$ for $i \in I$. This idea makes sense to me, but the definition of $$\prod_{i \in I} X_i = \left\{\left. f: I \to \bigcup_{i \in I} X_i\ \right|\ (\forall i)(f(i) \in X_i)\right\}$$ does not.
For example, if I make a function $X = \{(1,\{2\}), (2,\{3\}), (3, \{4\})\}$ where $X_1 = \{2\}$, $X_2 = \{3\}$, and $X_3 = \{4\}$ if I take the cartesian product of them, don't I get $(2,3,4)$? How is this a function and how does it relate to the definition?
I am very aware that my misunderstanding most likely comes from an inadequate knowledge of cartesian products, and that my example may be incorrect. If so, please let me know what misconceptions I may have so that I can grow and learn!
AI: If $X_1=\{2\}$, $X_2=\{3\}$, and $X_3=\{4\}$, then by this definition $X_1\times X_2\times X_3$ is the set of functions $f$ from the index set $\{1,2,3\}$ to $X_1\cup X_2\cup X_3=\{2,3,4\}$ such that $f(1)\in X_1$, $f(2)\in X_2$, and $f(3)\in X_3$. As it happens, there is only one such function:
$$f=\{\langle 1,2\rangle,\langle 2,3\rangle,\langle 3,4\rangle\}\;,$$
so that $f(1)=2$, $f(2)=3$, and $f(3)=4$, and $X_1\times X_2\times X_3=\{f\}$.
We don’t usually use this definition for Cartesian products of finitely many sets; by the more familiar definition we would have
$$X_1\times X_2\times X_3=\{\langle 2,3,4\rangle\}\;,$$
a set with one member, the ordered triple $\langle 2,3,4\rangle$. But the difference is mostly cosmetic. The ordered triple with which you’re familiar is simply a way of specifying to which factor set each component belongs: if $\langle x_1,x_2,x_3\rangle\in X_1\times X_2\times X_3$, we know that $x_1\in X_1$, $x_2\in X_2$, and $x_3\in X_3$. The functions in Enderton’s definition$^1$ do the same thing: they associate an element of each factor set with an identifier of that set, namely, its index, so that even if all of the factors are the same set, we can tell which ‘component’ comes from which factor. You might notice that when we write an ordered triple as $\langle x_1,x_2,x_3\rangle$, we’re really doing the same thing, albeit in slightly different format, as writing it $\langle x(1),x(2),x(3)\rangle$, as if it were an ordered list of the outputs of some function $x$ on the index set $\{1,2,3\}$.
In fact, there are actually several ways to define ordered triples, and one of them is precisely Enderton’s definition of elements of a Cartesian product: by that definition the ordered triple $\langle 2,3,4\rangle$ is the function $f$ above. If one is using that definition of ordered triple, there is literally no difference between the Cartesian products with finitely many factors that you’ve seen before and these with infinitely many factors.
You’ve also probably seen some infinite Cartesian products in another setting: the product $\Bbb R^{\Bbb N}$, i.e., $\prod_{n\in\Bbb N}X_n$, where each $X_n=\Bbb R$, is just the set of infinite sequences of real numbers: each $x\in\Bbb R^{\Bbb N}$ is a sequence $\langle x_n:n\in\Bbb N\rangle=\langle x_0,x_1,\ldots\rangle$ of real numbers, which formally is simply a function
$$x:\Bbb N\to\Bbb R:n\mapsto x_n\;.$$
We could just as well write the terms of the sequence $x(n)$, emphasizing the functional nature of the sequence as an element of a Cartesian product, instead of as $x_n$. Either way, the $n$ identifies the factor $X_n$ of the product, the factor from which the term $x_n$ or $x(n)$ comes.
1 It’s not really Enderton’s definition: it’s standard.
|
H: System of equations involving 3 variables , whether it it solvable for real values of k
Can anyone confirm whether I am correct for this question? Thank you.
There are positive real numbers $x$ and $y$ which solve the equations $2x + ky = 4, \;x + y = k$,for
(a) all values of $k$
(b) no values of $k$
(c) $k = 2$ only
(d) only $k > −2$.
My attempt: $$k-y=x\\2(k-y)+ky=4\\2k+ky-2y=4\\k(2+y)=4+2y\\2+y=4+2y\implies y=-2\\k=4+2(-2)\implies k=0$$
There is a problem with my solution. $y$ is a positive real value, and my answer is not correct. I suspect it is line 4 - can I not treat $k$ as a factor similar to factorising quadratics?
AI: Hints for solving the problem:
You have $2k+ky-2y=4$, which means $y(k-2)=-2k+4$.
Solve for $y$ if $k\ne 2$.
And can you see that, if $k=2$, the two equations coincide, so every pair of positive reals with $x+y=2$ solves the system?
|
H: There is a function $f:X \to \Bbb{R}$ such that $a$ is a limit point of $X$, $f$ does not have limit at $a$, but $|f(x)|$ has a limit at $a$.
Is this true? There is a function $f:X \to \Bbb{R}$, $X \subseteq \Bbb{R}$, such that $a$ is a limit point of $X$, $f$ does not have limit at $a$, but $|f(x)|$ has a limit at $a$
If it had not said that $a$ is a limit point, then $f(x) = \dfrac{x}{|x|}$ would be an example with $a=0$.
But since $a \in X'$, I think it is false: There is no function such that this happens. But I do not know how to prove this.
Any help would be appreciated.
Thanks.
AI: Take $X=\{x\in(0,1]:\sin(1/x)\neq 0\}$, $a = 0$, and
$$
f(x) = \frac{\sin(1/x)}{|\sin(1/x)|}.
$$
Then $|f|\equiv 1$ has limit $1$ at $0$, while $f$ oscillates between $\pm 1$ and has no limit at $0$.
|
H: Uniform Continuity of Characteristic Function
I am trying to understand the concept of uniform continuity as it pertains to characteristic functions.
First my understanding of uniform continuity:
Def:
$$\forall \epsilon>0,\ \exists \delta>0,\ \forall x_0,\hspace{4mm} \text{if}\hspace{4mm} |x-x_0|<\delta \hspace{4mm}\text{then} \hspace{4mm} |f(x)-f(x_0)|<\epsilon$$
(Note the quantifier order: $\delta$ is chosen before $x_0$, so it must not depend on $x_0$; with $\forall x_0$ first, one only gets pointwise continuity.)
I see this as fixing an $\epsilon$ and then find an interval of size $2\delta$ such that as I slide this interval across the x axis then all points $f(x), f(y)$ of x and y in this interval will be within $2\epsilon$ distance of each other. In layman terms I see this as the rate at which any two points $f(x)$ and $f(y)$ approach each other is very similar irrespective of the points selected.
My understanding may be wrong please correct me if I am wrong.
Now for the characteristic function:
$$|\varphi(t)-\varphi(s)|=\bigg|\int \big(e^{itX}-e^{isX}\big)\,\mu(dx)\bigg|\leq\int |e^{itX}-e^{isX}|\,\mu(dx)
=\int|e^{iX(t-s)}-1|\,\mu(dx) \leq\int 2\,\mu(dx)=2$$
then by the dominated convergence theorem
$$\lim_{t\to s}|\varphi(t)-\varphi(s)|=0$$
But why does this imply uniform continuity not just continuity?
AI: Another way of stating your correct intuition: uniform continuity results when the bound you prove depends on $s$ and $t$ only through the difference $t-s$. A careful analysis of your proof shows that in fact
$$ \int | e^{iX(t-s)}-1|\, \mu(dx) \to 0 \quad\text{as } |t-s|\to 0,$$
and this upper bound $ \int | e^{iX(t-s)}-1|\, \mu(dx)$ depends only on $|t-s|$, as needed.
|
H: How to prove this problem?
Is this a Laplace-type partial differential equation? I am so confused.
Show that the function satisfies the equation.
\begin{equation}
\label{simple_equation0}
u = {\varphi }(xy)+\sqrt{xy}{\psi}(\frac{y}{x})
\end{equation}
\begin{equation}
\label{simple_equation1}
x^{2}\frac{\partial^2 u}{\partial x^{2}}-y^{2}\frac{\partial^2 u}{\partial y^{2}}= 0
\end{equation}
AI: Hint: \begin{align*}
\frac{\partial^2 u}{\partial x^2} &= -\frac{y^2 \psi '\left(\frac{y}{x}\right)}{x^2 \sqrt{x y}}+\sqrt{x y} \left(\frac{y^2 \psi ''\left(\frac{y}{x}\right)}{x^4}+\frac{2 y \psi '\left(\frac{y}{x}\right)}{x^3}\right)-\frac{y^2 \psi \left(\frac{y}{x}\right)}{4 (x y)^{3/2}}+y^2 \varphi ''(x y) \\
&= \frac{\sqrt{x y} \left(4 y \left(x^3 \sqrt{x y} \varphi ''(x y)+y \psi ''\left(\frac{y}{x}\right)+x \psi '\left(\frac{y}{x}\right)\right)-x^2 \psi \left(\frac{y}{x}\right)\right)}{4 x^4} \text{.}
\end{align*}
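If the hand computation feels error-prone, here is a symbolic verification of the whole identity (my addition; it assumes SymPy):

```python
# Substitute u = phi(x*y) + sqrt(x*y) * psi(y/x) into the PDE and simplify.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
phi, psi = sp.Function('phi'), sp.Function('psi')
u = phi(x * y) + sp.sqrt(x * y) * psi(y / x)
pde = x**2 * sp.diff(u, x, 2) - y**2 * sp.diff(u, y, 2)
print(sp.simplify(pde))   # 0
```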
|
H: Prove that 2 is not a primitive root of any prime of the form $3\cdot 2^n+1$ for $p>13$
I am really struggling with this proof. This doesn't seem like it should be that hard. All I have been trying to do is find a $k<3\cdot 2^n$ such that $2^k\equiv 1 \pmod{3\cdot 2^n+1}$, but it turns out there are a lot of numbers between $1$ and $3\cdot 2^n$.
I am just not really sure how to go about this otherwise, but I feel like I must be missing something that makes it more rigorous than just guessing until something works.
I also had a go at writing the congruence as $\left(2^{2^n}\right)^3-1\equiv0 \pmod{3\cdot 2^n+1}$, and I then did difference of cubes and got that either $2^{2^n}\equiv1$, which would mean $2$ is not a primitive root, or $\left(2^{2^n}\right)^2+2^{2^n}+1\equiv0 \pmod{3\cdot 2^n+1}$, but then I couldn't work out why the second one can't be zero.
AI: Hint:
By Euler's criterion, $2^{(p-1)/2}\equiv\left(\dfrac2p\right)\pmod p$, and $\left(\dfrac2p\right)=1$ if $p\equiv1\pmod8$.
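An empirical sanity check of the hint (my addition; it assumes SymPy's `isprime` and `n_order`):

```python
# For primes p = 3*2^n + 1 with n >= 3 we have p = 1 (mod 8), so 2 is a
# quadratic residue mod p and its order divides (p-1)/2: 2 is not primitive.
from sympy.ntheory import isprime, n_order

for n in range(3, 16):
    p = 3 * 2**n + 1
    if isprime(p):
        print(p, p % 8, n_order(2, p) < p - 1)   # always "... 1 True"
```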
|
H: The binomial coefficient $\left(\begin{array}{l}99 \\ 19\end{array}\right)$ is $107,196,674,080,761,936,xyz$. Find $xyz$
The binomial coefficient $\left(\begin{array}{l}99 \\ 19\end{array}\right)$ is a 21 -digit number:
$
107,196,674,080,761,936, x y z
$
Find the three-digit number $x y z$
I showed that $\left(\begin{array}{l}99 \\ 19\end{array}\right) \equiv 2(\bmod 4)$
and $\left(\begin{array}{l}99 \\ 19\end{array}\right) \equiv 19(\bmod 25)$
Now how do I combine them to find the last two digits ($y$ and $z$)?
because we can only combine $a \equiv b \pmod n$ and $a \equiv b \pmod m$ into $a \equiv b \pmod{mn}$ when $(n,m)=1$ — but here we have different $b$'s...
And also, can someone show an easier method to find $\left(\begin{array}{l}99 \\ 19\end{array}\right) \equiv 2 \pmod 4$
and $\left(\begin{array}{l}99 \\ 19\end{array}\right) \equiv 19 \pmod{25}$? My approach takes too long, so I want to see some easier method...
AI: Since $99 \equiv -1 \pmod {25}$, we have $99 \cdot 98 \cdots 81 \equiv (-1)^{19}19! \pmod {25}$. What we would like to do is to simply divide by $19!$ and be done, but you'll notice that $19! \equiv 0 \pmod{25}$ because of the multiples of $5$. So instead, we treat the multiples of $5$ separately and this gives
$$ \binom{99}{19} \equiv (-1)^{19 - 3} \frac{95 \cdot 90 \cdot 85}{15 \cdot 10 \cdot 5} \pmod{25}.$$
Now we simplify:
$$ (-1)^{19 - 3} \frac{95 \cdot 90 \cdot 85}{15 \cdot 10 \cdot 5} = 3 \cdot 17 \cdot 19 = 51 \cdot 19 \equiv 19 \pmod{25}.$$
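For the "how to combine" part of the question (my addition): with different residues one uses the Chinese remainder theorem rather than the identical-residue rule quoted in the question. Here $m\equiv2\pmod4$ and $m\equiv19\pmod{25}$ force $m\equiv94\pmod{100}$, so $y=9$ and $z=4$. A brute-force confirmation:

```python
# CRT by brute force, plus a direct computation of the last three digits.
from math import comb

m = next(m for m in range(100) if m % 4 == 2 and m % 25 == 19)
print(m)                     # 94
print(comb(99, 19) % 1000)   # the full three-digit tail xyz
```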
|
H: Finding eigenvalues and eigenvectors of a certain matrix
Find eigenvalues and eigenvectors of a matrix $A_{n\times n}$ where elements $a_{ij} $ of $A_{n\times n}$ are given as
\begin{cases}
\alpha, & \text{if }i=j \\[2ex]
1, & \text{if }|i-j|=1\\[2ex]
0 & \text{otherwise}
\end{cases}
where $\alpha$ is a constant.
I tried finding the eigenvalues via the characteristic polynomial, and the result I was getting was of the form:
$|A_{n\times n}-\lambda I_{n\times n}|=(\alpha-\lambda)\,|A_{(n-1)\times (n-1)}-\lambda I_{(n-1)\times (n-1)}|-|A_{(n-2)\times (n-2)}-\lambda I_{(n-2)\times (n-2)}|$
But I was not able to go further.
AI: A formula for the eigenvalues and eigenvectors of such a matrix is given here.
We can deduce these eigenvalues and eigenvectors nicely, however, if we correctly "guess" that we can find a complete set of eigenvectors in which each eigenvector is of the form
$$
v = (\sin(\theta), \sin(2 \theta), \dots , \sin (n \theta)).
$$
See this post for details on this approach.
Note that it suffices to consider the case of $\alpha = 0$, since for any matrix $M$, the matrices $M$ and $M + \alpha I$ have the same eigenvalues (where $I$ denotes the identity matrix).
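A numerical check of the closed form $\lambda_k=\alpha+2\cos\frac{k\pi}{n+1}$, $k=1,\dots,n$ (my addition; it assumes NumPy):

```python
# Build the tridiagonal matrix and compare its eigenvalues with the formula.
import numpy as np

n, alpha = 6, 2.5
A = alpha * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
computed = np.sort(np.linalg.eigvalsh(A))
formula = np.sort(alpha + 2 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
print(np.allclose(computed, formula))   # True
```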
|
H: Calculate $6^{1866}$ in $\mathbb{Z}_{23}$
Calculate $6^{1866}$ in $\mathbb{Z}_{23}$
Solution: Note that $1866=22\cdot 84 + 18$; then by Fermat's little theorem
$$[6^{1866}]=[6^{22}]^{84}[6^{18}]=[1]^{84}[6^{18}]=[6^{18}]$$
Then $6^6=46656=2028\cdot 23 + 12$, so $[6^6]=[12]=[-11]$. It follows that $$[6^{18}]=[6^6]^3=[-11]^3=[121][-11]=[6][-11]=[-66]=[3]$$
So $ 6 ^ {1866} $ is $3 $ in $ \mathbb {Z} _ {23} $, is that correct? Thank you for reading.
AI: Your work is correct.
You could have also said $6^{1866}\equiv6^{-4}\bmod23$.
$6^{-1}\equiv4$ and $4^4=256\equiv3\bmod 23.$
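One line of Python settles it (my addition):

```python
print(pow(6, 1866, 23))   # 3
```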
|
H: continuous and inverse function problem
Show that $f:\Bbb{R^n} \rightarrow \Bbb{R^m}$ is continuous if and only if for each subset $E \subseteq \Bbb{R^m}$ we have $$f^{-1}(E^\circ) \subseteq [f^{-1}(E)]^{\circ}$$, where $E^\circ$ denotes the interior of the set E.
Theorem: A function $f:\Bbb{R^n} \rightarrow \Bbb{R^m}$ is continuous if and only if for each open set V in $\Bbb{R^m}$, $f^{-1}(V)$ is open in $\Bbb{R^n}$. A function $f: \Omega \subseteq \Bbb{R^n} \rightarrow \Bbb{R^m}$ is continuous on $\Omega$ if and only if for each open set V in $ \Bbb{R^m}$, $f^{-1} (V)$ is open relative to $\Omega$
$f^{-1}(E^\circ) \subseteq [f^{-1}(E)]^{\circ}$ — what can we tell from that, and how does it lead to continuity?
AI: I’ll do one direction and start the other. Suppose that $f^{-1}[\operatorname{int}E]\subseteq\operatorname{int}f^{-1}[E]$ for every $E\subseteq\Bbb R^m$. Let $V\subseteq\Bbb R^m$ be open; then
$$f^{-1}[V]=f^{-1}[\operatorname{int}V]\subseteq\operatorname{int}f^{-1}[V]\subseteq f^{-1}[V]\;,$$
so $\operatorname{int}f^{-1}[V]=f^{-1}[V]$, and $f^{-1}[V]$ is open. $V$ was an arbitrary open set in $\Bbb R^m$, so $f$ is continuous.
Now suppose that $f$ is continuous, and let $E\subseteq\Bbb R^m$. Let $U=f^{-1}[\operatorname{int}E]$; since $f$ is continuous, $U$ is open in $\Bbb R^n$, and clearly $U\subseteq\;$ . . . what? Can you finish it from here?
|
H: Confusion on a directional derivative expression
I'm studying "A Visual Introduction to Differential Forms and Calculus on Manifolds" and came across a confusing part on intro to directional derivatives. First the definition
The directional derivative of $f:\mathbb{R}^2\to\mathbb{R}$ at $(x_0,y_0)$ in the direction of the unit vector $u=[a,b]^T$ is
$$D_uf(x_0,y_0)=\lim_{t\to 0}\frac{f(x_0+ta,y_0+tb)-f(x_0,y_0)}{t}$$
if this limit exists
This definition makes sense. But then there's:
To remind ourselves of some other equivalent notations, notice that if we let $p=(x_0,y_0)$ then we can also write
$$\lim_{t\to 0}\frac{f(x_0+ta,y_0+tb)-f(x_0,y_0)}{t}=\frac{d}{dt}\bigg(f(p+tu)\bigg)\bigg|_{t=0}$$
I don't understand how the expression on the right came about. The expression on the left indicates the incremental change in $f$ as we vary the vector $x$ from its initial value $[x_0,y_0]^T$, which is why it makes sense. I thought maybe we could write $g(t)=f(x_0+ta,y_0+tb)$, and so its derivative would be
$$\lim_{h\to 0}\frac{f(x_0+ta+ha,y_0+tb+hb)-f(x_0+ta,y_0+tb)}{h}$$
which is the same as the first $D_u$ expression when we put $t=0$. But I'm still not 100% certain since this seems like a very roundabout way of getting the same thing. Why even go through so much trouble if we have a perfectly good definition in the first place? Apologies if this is a naive question.
AI: Start on the other side:
$$\frac{d}{dt} \bigg( f(p+tu) \bigg)\bigg|_{t=0} = \lim_{h\to 0} \frac{f(p+(0+h)u)-f(p+(0)u)}{h}. $$
Now change the limit variable to $t$.
Addendum. This reworking of the directional derivative is useful because it makes it clear that it coincides with the standard one-dimensional derivative of the restriction of $f$ to the line defined by the point $p$ and the vector $u$. This links them to the usual intuition for partial derivatives (directional derivatives where said line is either horizontal or vertical) and allows us to think of subtleties such as "if all directional derivatives of $f$ exist at a point, does the differential of $f$ necessarily exist?" in a more geometrical sense. (Spoiler: it doesn't)
|
H: A problem regarding a maximal ideal in a polynomial ring in several variables
$\mathbf {The \ Problem \ is}:$ Is the ideal $I =\langle x^2-2,y^2+1,z\rangle$ maximal in the polynomial ring $R =\mathbb Q[x,y,z]$ ?
$\mathbf {My \ approach} :$ Actually, by the third isomorphism theorem of rings, quotienting both $R$ and $I$ by $\langle z \rangle$, we get $\frac{R}{I} \cong \frac{\mathbb Q[x,y]}{\langle x^2-2 , y^2+1\rangle}$ ;
Now , if we define a map $\phi : \mathbb Q[x,y] \to \mathbb Q(\sqrt 2,i)$ where $i^2=-1;$ by $\phi(f(x,y)) = f(\sqrt 2,i)$ then can we show that kernel of this map is $J =\langle x^2-2,y^2+1\rangle$ ?
Here, one inclusion is obvious, but how about the other ?
I have tried a lot, but I can't prove it .
A small hint is warmly appreciated.
AI: Hint. Let $f\in\mathbb{Q}[x,y]=\mathbb{Q}[y][x]=\mathbb{Q}[x][y]$. Using long division by $x^2-2$, we get $f=q(x,y)(x^2-2)+r_1(y)+r_2(y)x$.
Now, divide $r_1$ and $r_2$ by $y^2+1$.
|
H: Finding a bijective correspondence between $X^{\omega}$ and $\mathcal{P}(\mathbb{Z}_+)$
Let $X = \{ 0,1 \}$ and let $\mathcal{P} (\mathbb{Z}_+)$ be the power set of $\mathbb{Z}_+$. Find a
bijective correspondence between $\mathcal{P} (\mathbb{Z}_+) $ and the
cartesian product $X^{\omega} $, or ${\bf show}$ there isn't one
Attempt to solution:
I claim we can find one bijection. Here is my idea. Notice that the elements of $X^{\omega}$ are sequences $(a_n)$ where $a_n $ is either $1$ or $0$
Now, let $A \subset \mathbb{Z}_+$, then $0 \leq |A| \leq \infty $ and let $n = |A|$. Now, we define $f: \mathcal{P} (\mathbb{Z}_+) \to X^{\omega}$ as :
If $n=0$, then define $f(A) = (0,0,.....) $
if $n=1$, then define $f(A_k) = (a_k)$ where $a_k = 1$ in the $k$th position and $0$ everywhere else.
if $n=2$, then this approach becomes more complicated.
Is this a good way to start the construction? Is it possible to find a closed form function?
AI: First note that $Y^X$ is the notation in functional analysis, or just in general, for the space of all functions $f$ from $X$ to $Y$.
There is one, and the key to achieving it is to note that a function $f$ from $X$ to a two element set, say $\{0,1\}$, corresponds to the element of the power set, $S\in P(X)$, defined by $S=\{x\in X: f(x)=1\}$.
See my Easy proof that $\mathfrak c=\lvert P(\mathbb Z)\rvert$...
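The correspondence is easy to make concrete (my addition): restricting to $\{1,\dots,n\}$, a subset maps to its indicator sequence and back.

```python
# Subset <-> indicator sequence, the finite shadow of the bijection above.
def indicator(A, n):
    return tuple(1 if k in A else 0 for k in range(1, n + 1))

def subset(seq):
    return {k for k, bit in enumerate(seq, start=1) if bit}

print(indicator({1, 3}, 5))      # (1, 0, 1, 0, 0)
print(subset((1, 0, 1, 0, 0)))   # {1, 3}
```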
|
H: If $a_n$ converges to $a$, can we say $a_n^c$ converges to $a^c$?
I'm doing a lot of practice problems with sequences, and I've noticed a number of problems ask about the convergence of the sequence raised to a positive power. It seems like in all the examples that I've tried, if $a_n$ converges to $a$, then $a_n^c$ converges to $a^c$, where $c$ is some positive real number. Is this always true?
I want to say yes, since we can define a new sequence $b_n$ as the product of $a_n$ and use the Algebraic Limit Theorem, but I'm wondering if there are any special cases I'm failing to consider.
AI: The statement does not hold.
For example, take $a_n = -1/n$ and $c=0.5$. Then $c$ is a positive real number and $a_n$ converges to $0$, but $a_n^c$ is not defined (as a real number) for any $n$, so the sequence cannot converge. If all the $a_n$ are nonnegative, however, the claim does hold, since $x\mapsto x^c$ is continuous on $[0,\infty)$.
|
H: Method to solve factored quadratic diophantine equations?
Is there a method that can solve all quadratic diophantine equations of the following type
$$X (X + a) = Y (Y + b)$$
where $a,b$ are given integers?
AI: $X (X + a) = Y (Y + b) \implies (2 X + a)^2 - (2 Y + b)^2 = a^2 - b^2$
Get the (finite, when $a^2 \neq b^2$) set of solutions of the difference of squares $x^2 - y^2 = a^2 - b^2$ and check whether $X=\frac{x-a}{2}$ and $Y=\frac{y-b}{2}$ are integers.
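A sketch of the resulting search (my addition; it assumes $a^2\neq b^2$, so $N=a^2-b^2$ is nonzero and has only finitely many factorizations):

```python
# Enumerate factorizations N = d*e, recover x = (d+e)/2 and y = (e-d)/2, and
# keep the pairs for which X = (x-a)/2 and Y = (y-b)/2 are integers.
def solve(a, b):
    N = a * a - b * b                      # assumed nonzero
    divs = [d for d in range(1, abs(N) + 1) if N % d == 0]
    sols = set()
    for d in divs + [-d for d in divs]:
        e = N // d
        if (d + e) % 2:                    # x and y must be integers
            continue
        x, y = (d + e) // 2, (e - d) // 2
        if (x - a) % 2 == 0 and (y - b) % 2 == 0:
            sols.add(((x - a) // 2, (y - b) // 2))
    return sols

print(solve(3, 1))   # {(0, 0), (0, -1), (-3, 0), (-3, -1)}
```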
|
H: Another series involving $\log (3)$
I will show that
$$\sum_{n = 0}^\infty \left (\frac{1}{6n + 1} + \frac{1}{6n + 3} + \frac{1}{6n + 5} - \frac{1}{2n + 1} \right ) = \frac{1}{2} \log (3).$$
My question is can this result be shown more simply then the approach given below? Perhaps using Riemann sums?
Denote the series by $S$ and let $S_n$ be its $n$th partial sum.
\begin{align}
S_n &= \sum_{k = 0}^n \left (\frac{1}{6k + 1} + \frac{1}{6k + 3} + \frac{1}{6k + 5} - \frac{1}{2k + 1} \right )\\
&= \sum_{k = 0}^n \left (\frac{1}{6k + 1} + \frac{1}{6k + 2} + \frac{1}{6k + 3} + \frac{1}{6k + 4} + \frac{1}{6k + 5} + \frac{1}{6k + 6} \right )\\
& \quad - \sum_{k = 0}^n \left (\frac{1}{2k + 1} + \frac{1}{2k + 2} \right ) - \frac{1}{2} \sum_{k = 0}^n \left (\frac{1}{3k + 1} + \frac{1}{3k + 2} + \frac{1}{3k + 3} \right )\\
& \qquad + \frac{1}{2} \sum_{k = 0}^n \frac{1}{k + 1}\\
&= H_{6n + 6} - H_{2n + 2} - \frac{1}{2} H_{3n + 3} + \frac{1}{2} H_{n + 1}.
\end{align}
Here $H_n$ denotes the $n$th harmonic number $\sum_{k = 1}^n \frac{1}{k}$. Since $H_n = \log (n) + \gamma + o(1)$ where $\gamma$ is the Euler-Mascheroni constant we see that
$$S_n = \log (6n) - \log (2n) - \frac{1}{2} \log (3n) + \frac{1}{2} \log (n) + o(1) = \frac{1}{2} \log (3) + o(1).$$
Thus
$$S = \lim_{n \to \infty} S_n = \frac{1}{2} \log (3).$$
AI: Your sum is$$\sum_{n\ge0}\int_0^1x^{6n}(1-2x^2+x^4)dx=\int_0^1\dfrac{(1-x^2)^2}{1-x^6}dx=\int_0^1\dfrac{1-x^2}{1+x^2+x^4}dx,$$where the first $=$ uses monotone convergence. Since$$1+x^2+x^4=(1+x^2)^2-x^2=\prod_\pm(1\pm x+x^2),$$you can show as an exercise that this integral is$$\left[\dfrac12\ln\dfrac{1+x+x^2}{1-x+x^2}\right]_0^1=\dfrac12\ln3.$$
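A numeric confirmation (my addition):

```python
# Partial sums of the series against log(3)/2.
from math import log

def partial(N):
    return sum(1/(6*n+1) + 1/(6*n+3) + 1/(6*n+5) - 1/(2*n+1) for n in range(N))

print(partial(10**6), 0.5 * log(3))   # both ~0.549306
```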
|
H: Asymptotically, can supercomputers easily solve the Travelling Salesman Problem, why or why not?
I just want to know whether supercomputers could easily solve the TSP or would still take a lot of time, as they do now.
AI: You must understand that no classical computer in the world, nor in the universe, will ever solve a large TSP instance by brute force.
Supercomputers are barely one million times faster than personal computers; this is nothing compared to, say, $100!$. A supercomputer would just allow you to add a handful of points for the same running time.
|
H: show $I(a,1) + I(-a,1)\ge2$
$I(a,b)$= $\int_1^e x^a\ln^bx \,dx, b > 0$
I need to show that $I(a,1) + I(-a,1)\ge2$
I took both integrals. For the first one I get:
$I(a,1)$= $\int_1^e x^a\ln x \,dx$ = $\frac1{(a+1)^2}$($ae^{a+1} + 1)$
The second one should be the same but with $-a$ in place of $a$.
Is there any elegant solution for this?
AI: Notice that $I(a,1)+I(-a,1)=\int_1^e (x^a+x^{-a})\ln x dx\ge \int_1^e 2\ln x dx=2$.
This is in fact the best possible since $I(0,1)+I(-0,1)=2$.
|
H: Number of homomorphisms from direct products of $\mathbb{Z}_n$ to $\mathbb{Z}_{18}$
How many homomorphisms are there from $\mathbb Z_3\times \mathbb Z_4\times\mathbb Z_9$ to $\mathbb Z_{18}$.
I tried to find possible kernals. The answer is $54$ but I'm getting something else. Can anyone show me some easy way to compute these homomorphisms.
AI: The images of the elements $(1,0,0),(0,1,0)$ and $(0,0,1)$ will determine the homomorphism. Also, those images need orders dividing $3,4,9$ respectively.
There are three choices for the image of $(1,0,0)$: it must be an element of order dividing $3$, that is, $e$ or one of the two elements of order $3$.
Next, there are two choices, since the order of $h(0,1,0)$ has to divide $4$, hence be $1$ or $2$ (there are no elements of order $4$).
Finally, there are nine choices for $h(0,0,1)$. Because the order must divide $9$, hence be $1,3$ or $9$. There are $\varphi(3)=2$ elements of order three, and $\varphi(9)=6$ elements of order nine in $\Bbb Z_{18}$.
Thus we have $3\cdot2\cdot9=54$.
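A brute-force confirmation (my addition): a homomorphism is determined by the images $(a,b,c)$ of the three generators, subject to $3a\equiv 4b\equiv 9c\equiv 0\pmod{18}$.

```python
# Count triples (a, b, c) in Z_18^3 satisfying the order constraints.
count = sum(1 for a in range(18) for b in range(18) for c in range(18)
            if 3 * a % 18 == 0 and 4 * b % 18 == 0 and 9 * c % 18 == 0)
print(count)   # 54
```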
|
H: Isomorphism of cohomology related to Kunneth formula
Let $S$ be a smooth complex (rational) algebraic surface, and $\mathcal{F}$ be a quasi-coherent sheaf such that $H^2(S, \mathcal{F}) = 0$.
Then, by Kunneth formula, we have $H^2(S \times S, \mathcal{F} \boxtimes \mathcal{F} ) \simeq H^1(S, \mathcal{F}) \otimes H^1(S, \mathcal{F})$.
Consider the action of $\mathcal{S}_2$ on $S \times S$ by swapping the two factors.
Is the map
$(H^1(S, \mathcal{F}) \otimes H^1(S, \mathcal{F}))^{\mathcal{S}_2} =Sym^2 (H^1(S, \mathcal{F})) \rightarrow H^2(S, \mathcal{F}^{\otimes 2})$
an isomorphism?
(In the first place, how does $\mathcal{S}_2$ act on $H^2(S \times S, \mathcal{F} \boxtimes \mathcal{F} ) \simeq H^1(S, \mathcal{F}) \otimes H^1(S, \mathcal{F})$?)
Thanks in advance.
AI: No, it is not true. Take for instance $S = \mathbb{P}^2$ and $F = \mathcal{O}(-2)$.
Then $H^2(F) = H^1(F) = 0$, but $H^2(F \otimes F) = H^2(\mathcal{O}(-4)) \ne 0$.
|
H: Proving a system of equations has only one solution
I want to find all critical points of $f(x,y)=xe^y-ye^x$. So I calculated $\nabla f(x,y)=(e^y-ye^x, xe^y-e^x)$ and tried to solve $$\begin{cases}e^y-ye^x=0 \\ xe^y-e^x=0 \end{cases}\qquad \Rightarrow \qquad \begin{cases}(1-xy)=0 \\ xe^y=e^x \end{cases}$$
A trivial solution is {$x=1, y=1$}.
I'm pretty sure this is the sole solution, but I wasn't able to prove that!
My attempt:
$$\begin{cases}(1-xy)=0 \\ xe^y=e^x \end{cases}\qquad \Rightarrow \qquad \begin{cases}y=\frac{1}{x} \\ y=x-\log{x} \end{cases}$$
I studied the difference $\left|x-\log{x} -\frac{1}{x}\right|$, and I supposed I would find one and only one global minimum, at $x=1$. $$D\left[\left|x-\log{x} -\frac{1}{x}\right|\right] \quad= \quad \operatorname{sgn}\left(x-\log{x} -\frac{1}{x}\right)\cdot \left(1-\frac{1}{x}+\frac{1}{x^2} \right)$$
The derivative has no stationary points, and $(\lim_{x\to0^+}=-\infty),$ $(\lim_{x\to+\infty}=+\infty)$; thus I must search among critical points. But here I got stuck, since the critical points lie where $\left(x-\log{x}-\frac{1}{x} \right)=0.$ So that's a circular definition...
AI: Consider the function $g(x)=x - \ln x - \frac{1}{x}$. If we prove that it is strictly increasing for $x > 0$, then, it cannot have more than one zero. But its derivative, namely
$$ g'(x)= \frac{1}{x^2} - \frac{1}{x} + 1 $$
is always $>0$ for $x > 0$ (as a quadratic in $1/x$ it has discriminant $1-4<0$), so we are done.
|
H: prove that $\int_{0}^{\cos^2{x}}\arccos\sqrt{t}\ dt + \int_{0}^{\sin^2{x}}\arcsin\sqrt{t}\ dt =\frac{\pi}{4}$
I have to prove the following problem:
$$\int_{0}^{\cos^2{x}}\arccos\sqrt{t}\ dt + \int_{0}^{\sin^2{x}}\arcsin\sqrt{t}\ dt =\frac{\pi}{4}$$
I know that $\arccos(x)+\arcsin(x)=\frac{\pi}{2}$ but now, I don't know what more to do...
AI: Let $ \arccos (\sqrt t) = z \Rightarrow t = \cos^2(z) $. Now the limits become $z : \frac\pi2 \to x$ (assuming $0\le x\le\frac\pi2$).
Do the same with the second one, using $\arcsin$.
So, $$\color{blue}{I = \int_\frac\pi2^x z (-2\cos z\sin z)dz+\int_0^x z (2\cos z\sin z)dz }$$
Using the facts that,
$\int_a^b-f(x)dx = \int_b^af(x)dx$
and
$\int_a^bf(x)dx+\int_b^cf(x)dx = \int_a^cf(x)dx$
$$\color{blue}{I = \int_0^{\frac\pi2} (2z\cos z\sin z)dz = \int_0^{\frac\pi2} z\sin(2z)dz = \frac\pi4} $$
|
H: Which is the spectrum of this operator?
Let $T : \ell_2 \to \ell_2$, $T(x_1,x_2,x_3,...,x_n,...) = (0,\frac{x_1}{1},\frac{x_2}{2},\frac{x_3}{3},..., \frac{x_n}{n},...)$. Which is the spectrum of this operator?
AI: It is easy to find the spectrum if you realize that this is a compact operator. To show that it is compact define $T_N(x_n)=(0,\frac {x_1}1 ,...,\frac {x_N} N,0,0...)$. $T_N$ is of finite rank and hence it is compact. I will let you verify that $\|T-T_N\| \leq \sqrt {\sum\limits_{k=N+1}^{\infty} \frac 1 {k^{2}}} \to 0$. Hence $T$ is compact.
Now non-zero points in the spectrum of a compact operator are eigenvalues. It is quite easy to see that $Tx=\lambda x$ with $\lambda \neq 0$ implies $x_n=0$ for all $n$. Hence there are no non-zero eigenvalues. It follows that $\sigma (T)=\{0\}$.
|
H: What does the notation $A\in\mathscr{B}(H_1, H_2)$ mean?
I am sorry for the trivial question, but I am a little bit confused about this notation in literature. Let $H_1$ and $H_2$ be two Hilbert spaces. I am interested in understanding what means that an operator $A$ is bounded from $H_1$ to $H_2$, i.e. $A\in\mathscr{B}(H_1, H_2)$. It means that taken $u\in H_1$ we have
$$\Vert Au\Vert_{H_2}\leq C\Vert A\Vert \Vert u\Vert_{H_1}?$$
Or it means that taken $u\in H_2$ we have
$$\Vert Au\Vert_{H_1}\leq C\Vert A\Vert \Vert u\Vert_{H_2}?$$
Thank you in advance!
AI: Let $T:H_1 \rightarrow H_2$ be an operator. This operator is bounded if:
$$ \| Tv \|_{H_2} \leq M \| v \|_{H_1} $$
for some constant $M >0$. The "best" constant, namely:
$$ \| T \| = \sup_{v \in H_1,\, v \neq 0} \frac{\| Tv \|_{H_2}}{\| v \|_{H_1}} $$
is called the norm of $T$.
|
H: The Axiom of Choice: Proof Validity
Synopsis
In Enderton's Elements of Set Theory, he introduces several forms of the Axiom of Choice. Currently, I've gotten through the first and second forms, mainly:
(1) For any relation $R$, there is a function $H \subseteq R$ with dom $H$ = dom $R$
(2) For any set $I$ and any function $H$ with domain $I$, if $H(i) \neq \varnothing$ for all $i \in I$, then $\prod_{i \in I} H_i \neq \varnothing$.
After introducing the second form, he asks us to show that the two forms are equivalent. I would greatly appreciate it if you would check the validity of my attempt, and also perhaps give me an explanation for how you personally understand and think about the axiom of choice. I have a vague notion right now in my head, and I think an alternative explanation of the same concept my give me a deeper understanding. Now, onto the proof.
Proof
Suppose the first form is true. Define a relation $R$ as follows: $$R = I \times \bigcup_{i \in I} H(i).$$ By the first form of the axiom of choice, we can construct a function $f \subseteq R$ with dom $f$ = dom $R$ $= I$. This means that $f(i) = R(i)$ for all $i \in I$ and by definition of $R$, $f(i) \in H(i)$. Hence, $f \in \prod_{i \in I} H_i$.
Now for the converse, suppose the second form is true. Then for a relation $R$, let $I =$ dom $R$. Define a function $H: I \rightarrow \mathscr{P}(\text{ran } R)$ where $H(i) := \{x \in \text{ran } R \mid iRx \}$. By the axiom of choice, $\prod_{i \in I} H_i \neq \varnothing$, so there exists a function $f$ with $\text{dom }f = I$ such that $(\forall i \in I) f(i) \in H(i)$. That means $(\forall i \in I) iRf(i)$. So $f \in R$ and $\text{dom } f = \text{dom } R$.
Thus, the two forms are equivalent.
Q.E.D.
Thank you so much for your time, and I'll diligently pay attention to any comments or takes on how you understand the Axiom of Choice and/or how I can better my proof-writing abilities.
AI: The first proof is not correct, the second one is fine with the exception of a typo saying $f\in R$ rather than $f\subseteq R$.
The problem with the first proof is that if I picked one $i$ and one $a\in H(i)$, then $f=I\times\{a\}$ is a function such that $f\subseteq R$ and they have the same domain. Instead you need to ensure that the relation captures the thing you're choosing from. This is the approach you're taking in the second proof, and it works just fine. You can correct this by taking $R=\bigcup_{i\in I}\{i\}\times H(i)$.
Your mistake lies in "this means", which is an unverified claim.
So, how can you do better? One way is to practice. With practice you develop a better intuition as to where you might be "cheating yourself out of a proof". You can go over your proof and question each statement that you made, and see how exactly it should follow, and if you can't convince yourself in full, assume there is a mistake, or at least a gap, until you've seen otherwise.
As for general intuition about the axiom of choice? That's easy. If you're choosing from infinitely many sets, and you haven't specified exactly what is the element you're choosing from which one, then you've used the axiom of choice. Just be wary that sometimes we delegate the use of the axiom to a background choice. Again, practice makes better, although it never makes perfect.
|
H: Riemann Integration vs Lebesgue Integration
If a function is not Riemann integrable, what does it mean geometrically?
Is it that we can't use integration to find the area under the curve?
So basically, when a function is Riemann integrable, the integral tells us about that area.
I am asking this because I saw a function which is not Riemann integrable but Lebesgue integrable.
What does it mean geometrically?
AI: Usually non-Riemann integrable functions are pathological functions like the Dirichlet function etc. But these are Lebesgue integrable.
One of the main implications of a function failing to be Riemann integrable is that it is not continuous enough for the limiting Riemann sums to settle on a single value, i.e. for the "area under the curve" to be well defined in that sense. The mathematical definition only asks for the existence of that limiting sum, and in that sense it is quite rigorous.
I further quote from this article:
We realize that both of them can help us to integrate functions. The difference is that the Riemann integral subdivides the domain of a function, while the Lebesgue integral subdivides the range of that function. The step function for the Riemann integral has a constant value in each of the subintervals of the partition, while the simple function for the Lebesgue integral provides finitely many measurable sets corresponding to each value of that function. The improvement from the Riemann integral to the Lebesgue integral is that the Lebesgue integral provides more generality than the Riemann integral does. From the reverse perspective, the Riemann integral can imply the Lebesgue integral.
Hope this helps.
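To make the sample-point dependence concrete, here is a small sketch (assuming sympy is available; the tag choices are my own illustration) that evaluates "Riemann sums" of the Dirichlet function on $[0,1]$ with rational versus irrational tags. The sums stay at $1$ and $0$ respectively for every partition size, so no common limit exists and the function is not Riemann integrable, while its Lebesgue integral is $0$ because the rationals have measure zero.

```python
import sympy as sp

def dirichlet_riemann_sum(n, rational_tags=True):
    """Riemann sum of the Dirichlet function (1 on Q, 0 off Q) on [0, 1],
    uniform partition with n subintervals."""
    total = 0
    for k in range(n):
        if rational_tags:
            t = sp.Rational(2 * k + 1, 2 * n)             # midpoint: rational
        else:
            t = sp.Rational(k, n) + sp.sqrt(2) / (2 * n)  # irrational tag in the same subinterval
        total += 1 if t.is_rational else 0
    return sp.Rational(total, n)

for n in (10, 100, 1000):
    print(n, dirichlet_riemann_sum(n, True), dirichlet_riemann_sum(n, False))
# rational tags always give 1, irrational tags always give 0
```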
|
H: infinite-dimensional inner product space
I've been asked to give an example of an infinite-dimensional inner product space, so I wrote the following.
Is there anything wrong with what I wrote as an example?
AI: Example: infinite dimensional Hilbert spaces are inner product spaces and they have infinite dimension. $l^2$ is a well known example of infinite dimensional Hilbert space (and actually, every infinite dimensional separable Hilbert space is isometrically isomorphic to $l^2$).
Your example, including the definition of the usual inner product on $l^2$, is correct. There is only a typo: you should write $\overline{\omega}_i$, not only in the "compact" sum notation, but also when you explicitly write the first terms of the series. Moreover, do not stop at $z_n \overline{\omega}_n$: write '$+...$'. Of course, recall that this series is evaluated w.r.t. the norm induced by the inner product.
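As a quick numerical illustration that such inner products are well defined (a sketch with two concrete $l^2$ sequences of my own choosing, $z_n = 1/n$ and $\omega_n = 1/(n+1)$), the partial sums of the series converge, and Cauchy-Schwarz bounds the limit:

```python
import math

def partial_inner(N):
    # real sequences, so conjugation is trivial
    return sum((1 / n) * (1 / (n + 1)) for n in range(1, N + 1))

for N in (10, 100, 10_000):
    print(N, partial_inner(N))    # telescoping: converges to 1

# Cauchy-Schwarz upper bound ||z|| * ||w||
norm_z = math.sqrt(sum(1 / n**2 for n in range(1, 10_000)))
norm_w = math.sqrt(sum(1 / (n + 1)**2 for n in range(1, 10_000)))
print(norm_z * norm_w)            # ~1.03 >= 1
```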
|
H: Show that $\sum_{i=1}^\infty\frac{42}{(i+1)^3}$ converges.
I am having some difficulties proving this statement. Any help would be appreciated. I have proved that $\left (\frac{1}{2^i} \right)_{i \in \mathbb{N}}$ is summable; however, I couldn't prove it for this one.
P.S. I am not sure if the formula is readable. It is the sequence $42/((i+1)^3)$ ($i$ is an element of the Nat. numbers) and I need to show that it is summable. Sorry if it is not readable.
AI: I'm not sure what you mean by summable, but if you mean that the series $\sum_{i=1}^{\infty}\frac{42}{(i+1)^3} $ converges, then it does.
$\sum_{i=1}^{\infty}\frac{42}{(i+1)^3}= 42\sum_{i=1}^{\infty}\frac{1}{(i+1)^3}< 42\sum_{i=1}^{\infty}\frac{1}{i^2}$, since $(i+1)^3 > i^2$ for every $i \geq 1$, and the latter series converges. Then, by the direct comparison test, our series converges.
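As a numerical sanity check (a sketch; shifting the index shows the exact value is $42(\zeta(3)-1)$), the partial sums stabilize quickly, well below the cruder comparison bound $42\cdot\frac{\pi^2}{6}$:

```python
import math

def partial_sum(N):
    return sum(42 / (i + 1)**3 for i in range(1, N + 1))

for N in (10, 100, 100_000):
    print(N, partial_sum(N))          # -> ~8.486

zeta3 = sum(1 / k**3 for k in range(1, 10**6))   # Apery's constant, ~1.2020569
print(42 * (zeta3 - 1))                          # exact value of the series
print(42 * math.pi**2 / 6)                       # comparison bound, ~69.1
```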
|
H: How to solve $\tanh(x-y)=\frac{y}{2t}?$
How to solve the following equation about variable $y$:
$$\tanh(x-y)=\frac{y}{2t}?$$
where $x$ is fixed and for small $t>0$ and large $t<\infty$, there would be different cases.
AI: With suitable change of variables/constants, the equation can be written in the cleaner form
$$\tanh z=mz+p.$$ You have the intersection(s) of a sigmoid with a straight line. The sigmoid can be roughly approximated by the lines $\tanh z\approx z$ and $\tanh z=\pm1$.
If $m<0$, you have a single real solution. For large $p$,
$$z\approx\frac{1-p}m.$$ For tiny $p$,
$$z\approx \frac p{1-m}$$ (and similar formulas for negative $p$).
For $m>1$, you have a single solution, similar to the above case. For $0<m<1$, you have from one to three solutions. The limit case (two roots, one of which is double) occurs when the line is tangent, i.e. when simultaneously
$$1-\tanh^2 z=m,$$
$$z=\pm\operatorname{artanh}\sqrt{1-m}$$ and $$p=\tanh z-mz.$$
So for given $m,p$ you can tell the number of roots and have them isolated in separate intervals.
This qualitative analysis should be enough to allow you to use a numerical method such as Newton's iterations and find all roots in all cases.
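A minimal numerical sketch along these lines (my own illustration, standard library only): scan $g(z)=\tanh z-mz-p$ for sign changes to isolate the roots, then polish each with Newton's method using $g'(z)=1-\tanh^2 z-m$.

```python
import math

def solve_tanh_line(m, p, lo=-20.0, hi=20.0, steps=4000):
    """Approximate all real roots of tanh(z) = m*z + p in [lo, hi]."""
    g = lambda z: math.tanh(z) - m * z - p
    dg = lambda z: 1 - math.tanh(z)**2 - m
    roots = []
    grid = [lo + k * (hi - lo) / steps for k in range(steps + 1)]
    for a, b in zip(grid, grid[1:]):
        if g(a) * g(b) > 0:
            continue                      # no sign change: no simple root here
        z = 0.5 * (a + b)
        for _ in range(50):               # Newton iterations from the midpoint
            if abs(dg(z)) < 1e-14:
                break
            z -= g(z) / dg(z)
        roots.append(z)
    return roots

print(solve_tanh_line(0.2, 0.05))   # 0 < m < 1, small p: three roots
print(solve_tanh_line(2.0, 0.3))    # m > 1: a single root
```

Note that a tangent (double) root produces no sign change, so near the limit case a finer grid, or the tangency formulas above, is needed.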
|
H: how does knowing the indeterminate form of a limit help in solving that limit?
I know what indeterminate forms are, but I fail to see their use when solving questions on limits. I know that the $0/0$ and $\infty/\infty$ forms indicate the use of l'Hopital's rule, but I don't know where the other indeterminate forms lead us. For example, how do you proceed when you get, say, the indeterminate form $0\cdot\infty$ or $\infty-\infty$?
AI: You can use the identities
$$f\cdot g=\frac f{\dfrac 1g}$$ and $$f-g=f\cdot\left(1-\frac gf\right),$$ which reduce the problems to known forms.
Assuming that the conditions for L'Hospital hold,
$$f\cdot g=\frac f{\dfrac 1g}\to\frac{f'}{-\dfrac{g'}{g^2}}=-\frac{f'g^2}{g'}$$
and
$$f-g=\dfrac{1-\dfrac gf}{\dfrac1f}\to\frac{-\dfrac{g'f-gf'}{f^2}}{-\dfrac{f'}{f^2}}=\frac{g'f-gf'}{f'}.$$
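For a concrete instance (a quick check, assuming sympy is available): $\lim_{x\to 0^+} x\ln x$ has the form $0\cdot(-\infty)$; rewriting it as $\ln x\,/\,(1/x)$ gives $\infty/\infty$, and l'Hopital reduces it to $-x\to 0$.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# 0 * oo form, and its quotient rewriting
print(sp.limit(x * sp.log(x), x, 0, '+'))          # 0
print(sp.limit(sp.log(x) / (1 / x), x, 0, '+'))    # 0, same limit in oo/oo form

# oo - oo form over a common denominator: 1/sin(x) - 1/x
print(sp.limit(1 / sp.sin(x) - 1 / x, x, 0, '+'))  # 0
```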
|
H: Can I always change the order of integration in an ordered multidimensional integral?
Imagine I have an integral of the following form:
$$I = \int_{-\infty}^{\infty} d\tau_1 \int_{\tau_1}^\infty d\tau_2 \int_{\tau_2}^\infty d\tau_3\ f(\tau_1,\tau_2,\tau_3) \tag{1}$$
Can I always commute the integrals, by changing the integration limits accordingly? And if not, when is it allowed/not allowed? For example:
$$I \overset{?}{=} \int_{-\infty}^\infty d\tau_2 \int_{-\infty}^{\tau_2} d\tau_1 \int_{\tau_2}^\infty d\tau_3\ f(\tau_1,\tau_2,\tau_3) \tag{2}$$
It seems to me that the region over which I integrate is the same; however, I did run into discrepancies when numerically integrating $(2)$ vs. $(1)$ in some instances. I am not sure if they are artifacts of the numerical integration, hence the question.
AI: In general, it is not always possible. Here both $(1)$ and $(2)$ do describe the same region $\{\tau_1\leq\tau_2\leq\tau_3\}$, and Fubini's Theorem gives conditions under which the order of integration can be changed: it suffices that $f\geq 0$ (Tonelli) or that $\int |f|$ is finite over the region (see, for instance, https://en.wikipedia.org/wiki/Fubini%27s_theorem). When neither condition holds, the iterated integrals can genuinely differ, so discrepancies need not be numerical artifacts.
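For an absolutely integrable example the two orders do agree. A sketch with scipy (my own choice of a Gaussian integrand, truncating $\pm\infty$ to $\pm L$); by symmetry the exact value over $\tau_1\le\tau_2\le\tau_3$ is one sixth of $\pi^{3/2}$:

```python
import numpy as np
from scipy.integrate import tplquad

L = 6.0                                   # cutoff standing in for +-infinity
f = lambda t1, t2, t3: np.exp(-(t1**2 + t2**2 + t3**2))

# order (1): t1 outermost, t2 in (t1, L), t3 in (t2, L)
# tplquad integrates func(z, y, x) with x outermost
I1, _ = tplquad(lambda t3, t2, t1: f(t1, t2, t3),
                -L, L,
                lambda t1: t1, lambda t1: L,
                lambda t1, t2: t2, lambda t1, t2: L)

# order (2): t2 outermost, t1 in (-L, t2), t3 in (t2, L)
I2, _ = tplquad(lambda t3, t1, t2: f(t1, t2, t3),
                -L, L,
                lambda t2: -L, lambda t2: t2,
                lambda t2, t1: t2, lambda t2, t1: L)

print(I1, I2, np.pi**1.5 / 6)             # all three agree to quadrature accuracy
```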
|
H: Prove that if $a_n \to 1$ then $\sqrt[n]{a_n} \to 1$ if $n \to \infty$
Prove that if $a_n \to 1$ then $\sqrt[n]{a_n} \to 1$ as $n \to \infty$.
What could be the way to prove that in that case also $\sqrt[n]{a_n} \rightarrow 1$?
AI: You are confused. The question has nothing to do with any series; it is a question about convergence of sequences.
For $n$ sufficiently large $\frac 12 \leq a_n \leq 2$ so $(\frac 12)^{1/n} \leq a_n^{1/n} \leq 2^{1/n}$. Use Squeeze Theorem and the fact that $x^{1/n} \to 1$ for any positive number $x$.
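A quick numerical illustration of the squeeze (with an example sequence of my own, $a_n = 1 + (-1)^n/\sqrt{n} \to 1$):

```python
# once 1/2 <= a_n <= 2, the n-th roots are pinched between (1/2)^(1/n) and 2^(1/n)
for n in (10, 101, 10_000, 1_000_001):
    a_n = 1 + (-1)**n / n**0.5
    print(n, 0.5**(1 / n), a_n**(1 / n), 2**(1 / n))
```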
|
H: Calculate the following integral $\int_{|z|=1}\frac{z^m}{(z-a)^n}dz$
Given $n,m\in\mathbb{N},|a|\neq1$
Calculate the following integral $\int_{|z|=1}\frac{z^m}{(z-a)^n}dz$
I thought of using Cauchy's integral formula, but I'm not sure what happens when $a$ is outside the unit disk.
AI: For $|a| <1$, Cauchy's integral formula for derivatives gives $\frac{2\pi i}{(n-1)!} f^{(n-1)}(a)$ where $f(z)=z^{m}$; explicitly, this is $\frac{2\pi i}{(n-1)!}\cdot\frac{m!}{(m-n+1)!}a^{m-n+1}$ when $n-1\leq m$, and $0$ otherwise. For $|a|>1$ the integrand is holomorphic on a neighbourhood of the closed unit disk, so the integral is $0$ by Cauchy's Theorem.
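A numerical cross-check (a sketch with numpy; the values $m=3$, $n=2$ are my own choice), parametrizing the circle as $z=e^{i\theta}$ so that $dz = ie^{i\theta}\,d\theta$:

```python
import numpy as np
from math import factorial

def contour_integral(m, n, a, K=20_000):
    """Numerically integrate z^m / (z - a)^n over |z| = 1."""
    theta = np.linspace(0, 2 * np.pi, K, endpoint=False)
    z = np.exp(1j * theta)
    return np.sum(z**m / (z - a)**n * 1j * z) * (2 * np.pi / K)

m, n = 3, 2
for a in (0.5, 2.0):
    if abs(a) < 1:
        predicted = (2j * np.pi * factorial(m)
                     / (factorial(n - 1) * factorial(m - n + 1))
                     * a**(m - n + 1))
    else:
        predicted = 0
    print(a, contour_integral(m, n, a), predicted)
```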
|
H: Parties and distributions
It is known that the number of people coming to the party has a Poisson distribution with rate $0.9$. The DJ is going to play only if people show up. What is the probability that the DJ plays in front of exactly one person, given that no more than two people came to the party?
I'm not sure how to do it: I see it's a conditional probability, but do I also need to first calculate the probabilities from the Poisson distribution? I would love any explanation.
AI: P(2 or fewer people arrive)$=\frac{(0.9)^0 e^{-0.9}}{0!}+\frac{(0.9)^1 e^{-0.9}}{1!}+\frac{(0.9)^2 e^{-0.9}}{2!}$
P(1 person arrives)$=\frac{(0.9)^1 e^{-0.9}}{1!}$
So P(1 person arrives | no more than 2 arrive)$=\frac{\frac{(0.9)^1 e^{-0.9}}{1!}}{\frac{(0.9)^0 e^{-0.9}}{0!}+\frac{(0.9)^1 e^{-0.9}}{1!}+\frac{(0.9)^2 e^{-0.9}}{2!}}$,
since the event that no more than 2 arrive and exactly 1 arrives is just the event that exactly 1 arrives.
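Evaluating this is a one-liner (a sketch with the standard library; note the $e^{-0.9}$ factors cancel):

```python
from math import exp, factorial

lam = 0.9
pmf = lambda k: lam**k * exp(-lam) / factorial(k)   # Poisson pmf

print(pmf(1) / (pmf(0) + pmf(1) + pmf(2)))          # 0.9 / (1 + 0.9 + 0.405) ~ 0.3905
```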
|
H: Modular calculation high exponent?
I want to show, that $5^{96}\equiv -1 \pmod{193}$, without using the formula for quadratic residue.
So far I have :
$$5^{96}\equiv 5^{4\cdot24} \equiv 625^{24}\equiv 46^{24}\equiv 186^{12}\equiv (-7)^{12}\equiv 7^{12}\equiv 7^{3\cdot4}\equiv 150^4\equiv (-43)^4\equiv 43^4\equiv 112^2\equiv (-81)^2\equiv 3^8 \pmod{193}$$
I think Euler's totient doesn't help, as $\varphi(193)=192>96=\frac{192}{2}$. Or can I write this?
$5^{96}\equiv 5^{192-96} \equiv 5^{-96}\equiv(5^{-1})^{96}\equiv 116^{96}\equiv -77^{96} \pmod{193} $
What am I doing wrong ? What am I missing ? Thanks in advance.
AI: Let $x=5^{96}$. Then in the field $\Bbb F_{193}$ we have
$$
1=5^{\phi(193)}=(5^{96})^2=x^2,
$$
so that $(x-1)(x+1)=0$ in $\Bbb F_{193}$. Since a field has no zero divisors, we must have either $x=1$ or $x=-1$.
But $5^4=625=46$, so that $5^8=46^2=186$, $5^{16}=186^2=49$, and
$5^{32}=49^2=85$, so that
$$
5^{96}=(5^{32})^3=85^3=-1.
$$
Your calculation is correct, too, since
$$
3^8=6561=-1.
$$
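As a quick machine check (Python's built-in three-argument pow does modular exponentiation):

```python
# verify 5^96 = -1 (mod 193) together with the intermediate squarings
print(pow(5, 96, 193))                  # 192, i.e. -1 mod 193
for e in (4, 8, 16, 32):
    print(e, pow(5, e, 193))            # 46, 186, 49, 85
print(pow(85, 3, 193), pow(3, 8, 193))  # both 192
```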
|
H: Infinite dimensional separable Hilbert spaces having an open countable dense subset
While working with infinite-dimensional Hilbert spaces, I came up with the following question: under what conditions is the existence of an open, countable, dense subset assured? If we assume that the space is separable, we already get a countable dense subset, so part of the question is answered. But what about an open, countable, dense subset? For instance (this example is not related to infinite-dimensional spaces), $\mathbb{Q}$ is countable and dense in $\mathbb{R}$ (under the usual topology), but it is neither open nor closed. Is there any condition that assures the existence of at least one such open subset?
AI: A non-empty open set in $\mathbb R$ (or in any Hilbert space $\neq \{0\}$) is necessarily uncountable: it contains an open ball, and a ball around $x$ contains the uncountably many points $x+ty$ for any fixed $y\neq 0$ and all sufficiently small $t>0$. Hence you can never have such a set.
|