H: Can I prove that $A \Rightarrow \neg B$ is false given $A\iff(B∨C)$?
Let $A$, $B$, $C$ be statements. Given $A\iff(B∨C)$.
I am trying to prove (if possible) that the implication $A \Rightarrow \neg B$ is false.
My motivation for doing this is that I am thinking of eliminating the "redundant" cases in an implication (by that I mean, e.g., the implication $\forall x\in \mathbb{R},x^2=1\Rightarrow(x=1∨x=2∨x=-1)$ is true but $x=2$ is redundant). The solution I came up with is to prove both directions of the implication as above, so that the equivalence should force both $B$ and $C$ not to be redundant. To verify this, I would like to prove that, with this equivalence, if $A$ is the case, then it is not the case that, say, $B$ could never happen. And, for me, this is the same as proving that, given $A\iff(B∨C)$, the implication $A \Rightarrow \neg B$ is false.
The argument I conduct is as follows:
Assume $B \Rightarrow \neg A$ is true. Since we have $B \Rightarrow (B∨C)\Rightarrow A$, by contradiction $B \Rightarrow \neg A$ is false. And since $A \Rightarrow \neg B$ is the contrapositive of $B \Rightarrow \neg A$, we have $A \Rightarrow \neg B$ is false.
However, maybe it is due to a lack of quantifiers, or maybe because of the way I use contraposition/contradiction, or something else: I find my proof really confusing in terms of its logical flow, and I have a feeling that some things are missing or misused, so there are flaws in it that need to be fixed or completed.
Could anyone please clarify this for me?
AI: Well, if $A\Leftrightarrow B\vee C$, then $A\Rightarrow \neg B$ becomes $B\vee C\Rightarrow \neg B$, which is not a tautology by taking $B,C$ to be true. Then the premise is true, but the conclusion is false.
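A quick truth-table check makes this concrete (a minimal Python sketch; the enumeration is mine):

```python
from itertools import product

# Enumerate the assignments satisfying A <-> (B or C)
# and evaluate A -> not B under each of them.
for A, B, C in product([False, True], repeat=3):
    if A == (B or C):                      # the given equivalence holds
        print(A, B, C, "A => not B:", (not A) or (not B))
# A = B = True makes the implication False, while A = False makes it True,
# so A => not B is neither provable nor refutable from the equivalence alone.
```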
|
H: Find the real numbers given their product
We have four real numbers $a,b,c,d$ and their six products should satisfy $\{ab,ac,ad,bc,bd,cd\}=\{2,2.4,3,4,5,6\}$. How do we find them?
The sequence of products is not necessarily in order. For example we don't know if $ab=2$ or $ac=2.4$.
AI: $a= \sqrt{1.2}$, $b= \sqrt{10/3}$, $c=\sqrt{4.8}$, $d=\sqrt{30/4}$
I take the products in the order $\{ab,ac,ad,bc,bd,cd\}$. If the products are assigned in a different order, the values get permuted among $a,b,c,d$, but the four values themselves always stay the same.
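As a numerical sanity check (a short Python sketch, mine):

```python
from math import sqrt, isclose

a, b, c, d = sqrt(1.2), sqrt(10 / 3), sqrt(4.8), sqrt(30 / 4)
products = sorted([a * b, a * c, a * d, b * c, b * d, c * d])
assert all(isclose(p, q) for p, q in zip(products, [2, 2.4, 3, 4, 5, 6]))
print(products)   # [2.0, 2.4, 3.0, 4.0, 5.0, 6.0] up to rounding
```

Note that negating all four numbers gives a second solution with the same six products.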
|
H: Why is $\frac{1}{\pi ^{1/4} \sqrt{a}}e^{-x^2/(2a^2)}$ not a gaussian function?
In a quantum mechanics exercise, we were asked to find the ground-state wavefunction of a perturbed harmonic system. The resulting wavefunction is $$\psi_0(x) = \frac{1}{\pi ^{1/4} \sqrt{a}}e^{-x^2/(2a^2)}$$
After that, there is a true or false statement:
The ground-state wave function of the perturbed system
is a gaussian. -> False.
May I ask you why this is not a gaussian? When looking at Wikipedia https://en.wikipedia.org/wiki/Gaussian_function , this seems to be a gaussian to me.
Thanks for your help !
AI: The PDF is $|\psi_0|^2=\frac{1}{a\sqrt{\pi}}\exp\frac{-x^2}{a^2}$, that of an $N(0,\,a^2/2)$ distribution, but $\psi_0$ is not itself a Gaussian PDF. In particular, $\int_{\Bbb R}\psi_0(x)\,\mathrm dx=\pi^{1/4}\sqrt{2a}$, which in general is not $1$, so $\psi_0$ is not normalized as a probability density.
|
H: Formula derivation of the Second derivative
I'm trying to prove the below equation:
The given function $f$ is twice differentiable. And the given formula is
$f''(x) = 2\lim_{h \rightarrow 0} \frac{f(x+h) -f(x) - f'(x)h } { h^2}$
I know this can be proved by l'Hospital's rule.
Since the numerator and denominator go to zero as $h \rightarrow 0$, the given formula becomes $\lim_{h \rightarrow 0} \frac{f'(x+h)-f'(x)} {h}$, which equals $f''(x)$.
But the reason I write this post is this:
When I tried to replace the $f'(x)$ in the numerator by the definition of the derivative, the equation is no longer true.
The trial is
$f''(x)$ = $2\lim_{h \rightarrow 0} \frac{f(x+h) -f(x) - f'(x)h } { h^2}$ = $2\lim_{h \rightarrow 0} \frac{f(x+h) -f(x) - h\cdot \lim_{h \rightarrow 0} \frac{f(x) - f(x-h)} {h} } { h^2}$ = $2\lim_{h \rightarrow 0} \frac{f(x+h) -2f(x) + f(x-h) } { h^2}$ = $2f''(x)$.
But $f''(x) = 2f''(x)$ can't be true when $f''(x)$ isn't zero.
The plausible explanation for the false result is that the replacement of $f'(x)$ by a limit, from the second to the third of the above equalities, is wrong.
I think the possible explanation is: when the derivative is replaced by the limit, the limit variable need not be $h$. The choice of the limit variable doesn't affect the derivative of $f$; for example, $f'(x) = \lim_{k \rightarrow 0} \frac{f(x+k) -f(x) } {k}$ is also okay. So the two limits can't simply be combined under one $\lim_{h \rightarrow 0}$.
Any help would be appreciated.
AI: Note that Taylor's theorem tells us that
$$
f(x + h) = f(x) + f'(x) h + \frac 12 f''(x) h^2 + o(h^2).
$$
Using a $k$ for the inner limit provides us with some insight. In particular, we have
$$
\lim_{h \to 0}\frac{f(x + h) - f(x) - f'(x) h}{h^2} =
\lim_{h \to 0}\frac{f(x + h) - f(x) - h\lim_{k \to 0}\frac{f(x) - f(x-k)}{k} }{h^2}\\
=
\lim_{h \to 0}\lim_{k \to 0} \frac{f(x+h) - f(x) - \frac hk f(x) + \frac hk f(x-k)}{h^2}.
$$
Here, $\lim_{h \to 0}\lim_{k \to 0}$ indicates that we take a limit first with respect to $k$, then with respect to $h$.
Note that taking $k$ to be a function of $h$ and computing a limit over $h$ produces a different result. For your example, we see that taking $k(h) = h$ yields a limit of $2f''(x)$. More generally, taking $k(h) = h/a$ for $a \neq 0$ yields
$$
\lim_{h \to 0} \frac{f(x+h) - f(x) - \frac h{k(h)} f(x) + \frac h{k(h)} f(x-k(h))}{h^2}\\
= \lim_{h \to 0}
\frac{f(x+h) - f(x) - af(x) + a f(x-h/a)}{h^2}\\
= \lim_{h \to 0} \frac{f(x+h) - (1 + a)f(x)+ a f(x-h/a)}{h^2}\\
= \lim_{h \to 0} \frac 1{h^2}[f(x) + hf'(x) + \frac 12f''(x)h^2 - (1 + a)f(x)\\
\qquad + a (f(x) - f'(x) (h/a) + \frac 12 f''(x) (h/a)^2) + o(h^2)]\\
= \lim_{h \to 0} \frac{\frac 12(1 + 1/a)f''(x)h^2 + o(h^2)}{h^2}
\\ = \frac{a + 1}{2a} f''(x).
$$
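A quick numerical illustration of this point (a Python sketch of mine, using $f=\sin$ at $x=0.5$, where $f''(x)=-\sin(0.5)\approx-0.4794$):

```python
import math

f, x = math.sin, 0.5          # f''(x) = -sin(0.5) = -0.4794...
for h in [1e-2, 1e-3, 1e-4]:
    one_sided = (f(x + h) - f(x) - math.cos(x) * h) / h**2  # limit f''(x)/2
    k_is_h = (f(x + h) - 2 * f(x) + f(x - h)) / h**2        # limit f''(x)
    print(h, 2 * one_sided, 2 * k_is_h)
# 2*one_sided tends to f''(x), while 2*k_is_h tends to 2 f''(x):
# freezing k = h before taking the limit doubles the answer.
```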
|
H: Prove that $\|R\|_2 = \|A\|_2^{1/2}$ where $A=R^* R$ is a Cholesky factorization of $A$
Prove that $\|R\|_2 = \|A\|_2^{1/2}$ where $A = R^* R$ is a Cholesky factorization of $A$.
In my book it says that I should use the Singular Value Decomposition.
I have that
$\rho(A)=\sqrt{\rho(A^*A)}=\sqrt{\rho(A\,A)}=\sqrt{\rho(A^2)}$
$$ \Rightarrow \rho^2(A)=\rho(A^2)$$
So, I have tried:
$R^*R=Q\Sigma V^*$, where $Q\Sigma V^*$ is the SVD of $A$.
Then $\rho(R^*R)=\rho(Q\Sigma V^*)$, where $\rho(\cdot)$ denotes the largest eigenvalue.
We conclude that $\rho^2(R)=\rho(A) \Leftrightarrow ||R||=\sqrt{||A||}$.
This last equation comes from the relation between the singular value of A in its norm.
I do not know if this process is well formulated.
AI: Hint: What is the relationship between the eigenvalues of $A$ and the singular values of $R$? Using the fact that $A$ is self-adjoint, what is $\|A\|$ in terms of the eigenvalues of $A$? What is $\|R\|$, in terms of the eigenvalues of $A$?
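For intuition, here is a quick numerical check (a numpy sketch of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = M.conj().T @ M                    # Hermitian positive definite
R = np.linalg.cholesky(A).conj().T    # upper triangular, A = R^* R
print(np.linalg.norm(R, 2))           # largest singular value of R
print(np.sqrt(np.linalg.norm(A, 2)))  # the same number: ||R||_2 = ||A||_2^{1/2}
```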
|
H: Showing properties of a specific maximization problem, as well as finding the maximum.
Let there be $p_1,p_2,..,p_n,q_1,q_2,...,q_n$ with $\sum_{i=1}^n p_i = 1 = \sum_{i=1}^nq_i$
For $M :=\{ x\in (0,\infty)^n: \sum_{i=1}^nq_ix_i = a \}$
With the following Maximization Problem.
$$(*) \sup \bigg\{\sum_{i=1}^np_i\ln (x_i) : x \in M \bigg \}$$
(i) I need to show: $\forall\eta >0$ $$\sup_{r>0}(\ln(r) -\eta r) = \ln(\frac{1}{\eta})-1$$
(ii) I also need to show: $\forall x\in M$ and $\forall \eta > 0$ $$\sum_{i=1}^np_i\ln(x_i) \leq \sum_{i=1}^np_i\ln(\frac{p_i}{\eta q_i}) + a\eta -1 $$
(iii) I also need to show that for the problem $(*)$ there is a maximizer $x^*\in M$, and to determine its value.
I currently have no idea how to approach this, as this is my first maximization problem. For now I know that for the maximizer I can use the Lagrange method to obtain the maximum of this function. But for parts (i) and (ii) I have more trouble figuring out what I need to do.
Any help is appreciated.
AI: i) Take the first derivative of the function $f(r)=\ln(r)-\eta r$, then do some calculation.
ii) Use i) with $r=x_i$ and $\eta=\dfrac{\eta q_i}{p_i}$, then multiply by $p_i$ and sum up all the inequalities.
iii) Notice that in ii) equality happens when $x_i=\dfrac{p_i}{\eta q_i}$. We need $\displaystyle\sum_{i=1}^n q_i x_i=a$, which is equivalent to $\eta =\dfrac{1}{a}$. Therefore, by just taking $\eta =\dfrac{1}{a}$ in ii), we're done.
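A numerical sanity check of (iii) (a numpy sketch of mine, with randomly chosen $p$, $q$ and $a$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, a = 5, 2.0
p = rng.random(n); p /= p.sum()
q = rng.random(n); q /= q.sum()

x_star = a * p / q                 # claimed maximizer, eta = 1/a
best = p @ np.log(x_star)

for _ in range(10_000):            # random feasible points of M
    y = rng.random(n)
    y *= a / (q @ y)               # rescale so that sum_i q_i y_i = a
    assert p @ np.log(y) <= best + 1e-12
print("maximum value:", best)      # sum_i p_i ln(a p_i / q_i)
```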
|
H: If $X$, $Y$ and $Z$ are mutually independent random variables, is it true that $X+Y (X \cdot Y,X/Y,\dots)$ and $Z$ are independent?
I just wonder that given that RVs $X, Y, Z$ are mutually independent, how should I quickly determine whether a combination of $X$ and $Y$, e.g. $X+Y, X\cdot Y, X/Y, X^Y$, is independent of $Z$?
Are there some general conclusions?
Thanks in advance.
AI: For any measurable function $f: \mathbb R^{2} \to \mathbb R$, $Z$ is independent of $f(X,Y)$. Hence the answer is YES in all these cases (assuming that $X/Y$ and $X^{Y}$ are defined).
Proof: $$P(f(X,Y) \in E, Z \in F)=P((X,Y) \in f^{-1}(E), Z \in F)$$ $$=P((X,Y) \in f^{-1}(E))P( Z \in F)$$ $$=P(f(X,Y) \in E) P( Z \in F)$$
for any Borel sets $E$ and $F$ in $\mathbb R$.
The last equality is proved by considering the collection of all Borel sets $D$ in $\mathbb R^{2}$ such that $P((X,Y) \in D, Z \in C)=P((X,Y) \in D)P(Z \in C)$, and verifying that this is a sigma algebra which contains sets of the form $A \times B$ where $A$ and $B$ are Borel sets in $\mathbb R$.
|
H: References on (more or less) explicit calculations of probability distributions of nonlinear transformations of random variables
Premise. After a former question and related answer, I searched for references on the calculation of the probability distributions of nonlinear functions of (one or more) random variables, in order to build a firm basis for dealing with engineering problems which require such calculations in a rigorous manner.
The question
What are the works where the methods for calculating probability distributions of nonlinear transformation of random variables occupy a central place?
I list here a few guidelines in order to ease answering the question.
The works should be abstract in the sense that the results they present should be rigorous and widely applicable (i.e. applicable to the widest range of problems, or abstract the Italian way).
The "flavor" has to be analytic in the sense that the reference should deal with the problems in a constructive way by using techniques inherited from real and complex analysis.
The work should aim at being comprehensive, therefore I prefer monographs/textbooks instead of single research works, even if survey papers on the topic are welcome.
What I have already found
Just to give a few examples, following more or less closely the above guidelines, I have identified the following works:
Deutsch, R. Nonlinear transformations of random processes, (English) Prentice-Hall International Series in Applied Mathematics. Englewood Cliffs, N.J.: Prentice-Hall, Inc., pp. XI+157 (1962), MR0148499, Zbl 0125.36801.
Rohatgi, V. K., An introduction to probability theory and mathematical statistics (English) Wiley Series in Probability and Mathematical Statistics. New York-Chichester-Brisbane: John Wiley & Sons, a Wiley-Interscience Publication, pp. XIV+684 (1976), MR0407916, Zbl 0354.62001.
Springer, M. D., The algebra of random variables, (English) Wiley Series in Probability and mathematical Statistics. New York-Chichester-Brisbane: John Wiley & Sons, pp. XIX+470 (1979), MR0519342, Zbl 0399.60002.
AI: I think this book will be very useful for your purpose
Probabilità e informazione
I suggest you to read the whole book but, in particular,
Chapter 4 for single rv
Chapter 6 for pairs of rv's
Chapter 8 for vectors of rv's
|
H: Is $G^2$ necessarily a subgroup of $G$?
Let $G$ be a group and $H=\{g^2 : g\in G\}$ then which of the following is/are true?
$(1)H $ is always a subgroup of $G$
$(2)H$ may not be a subgroup of $G$
$(3)$ If $H$ is a subgroup of $G$, then it must be normal in $G$
$(4)H$ is a normal subgroup of $G$ only if $G$ is abelian.
My attempt
$(3)$ is true
Let $H$ be a subgroup and $x^2 \in H$.
Then $gx^2g^{-1}=(gxg^{-1})^2 \in H$
So yes if $H$ is a subgroup , then it is normal.
$(4)$ Let $G=Q_8$ , the group of Quaternions.
Then $H=\{1,-1\}$ is a normal subgroup but $G$ is non-abelian.
I guess $H$ is not necessarily a subgroup since $H$ may not be closed under multiplication but I can't find one example.
I think I have to look for non-abelian groups (may be of odd order, but not sure..)
Can you give an example?
Thanks for your time.
AI: For abelian groups, $g^2h^2 = ghgh = (gh)^2 \in G^2$ so $H$ is always a (normal) subgroup of $G$ when $G$ is abelian. As you correctly pointed out, $Q_8$ is a counterexample to the "only if" direction of (4).
(3) is fine.
(1) and (2) are mutually exclusive, so you need to prove the statement or find a counterexample. I think that the smallest counterexample is $A_4$: trivially, every $3$-cycle is a square (as if $x$ has order $3$ then $x=(x^2)^2$), and there are eight $3$-cycles in $A_4$. Every other nontrivial element is the product of two disjoint $2$-cycles, and as such has order $2$, so it cannot be the square of anything as $A_4$ doesn't have elements of order $4$.
So, $H$ is the set of all $3$-cycles in $A_4$, and it has $9$ elements (eight plus the identity). But $|A_4|=12$, and $9$ does not divide $12$, so $H$ cannot be a subgroup. And, indeed:
$$(123)(124) = (13)(24) \not \in H$$
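This is easy to confirm by brute force (a small Python sketch of mine, with permutations written as tuples):

```python
from itertools import permutations

def parity(p):      # number of inversions mod 2
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2

def compose(p, q):  # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(4))

A4 = [p for p in permutations(range(4)) if parity(p) == 0]
H = {compose(g, g) for g in A4}     # the set of squares
print(len(H))                                         # 9
print(all(compose(g, h) in H for g in H for h in H))  # False: not closed
```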
|
H: Closure of a subgroup is again a subgroup
Let $G$ be a topological group and $H$ a subgroup. Then $\overline{H}$ is again a subgroup.
Attempt: Let $x,y \in \overline{H}$. Choose nets $\{x_\alpha\}_{\alpha \in I}$ and $\{y_\beta\}_{\beta \in J}$ with $x_\alpha \to x, y_\beta \to y$ and these nets are in $H$. Then we get a net $\{(x_\alpha, y_\beta)\}_{(\alpha, \beta) \in I \times J}$ where $I \times J$ is ordered in the obvious way such that $(x_\alpha, y_\beta) \to (x,y)$ in $G \times G$. By continuity of multiplication, we obtain $x_\alpha y_\beta \to xy$ and since $x_\alpha y_\beta \in H$ for all indices $\alpha, \beta$, we get $xy\in \overline{H}$.
Is this correct?
AI: Seems ok, but I'd use "pre-image of open is open" directly instead of converging nets.
Let $x,y\in\overline H$.
Suppose $xy\notin\overline H$. By openness of $\overline H^\complement$, there is a neighbourhood $U$ of $(x,y)$ in $G\times G$ such that $uv\notin \overline H$ for all $(u,v)\in U$. $U$ contains some $V_x\times V_y$ where $V_x,V_y$ are neighbourhoods of $x,y$, respectively. Pick $u\in V_x\cap H$, $v\in V_y\cap H$ and arrive at a contradiction.
|
H: Finding the equation of the normal to the parabola $y^2=4x$ that passes through $(9,6)$
Let $L$ be a normal to the parabola $y^2 = 4x$. If $L$ passes through the point $(9, 6)$, then $L$ is given by
(A) $\;y − x + 3 = 0$
(B) $\;y + 3x − 33 = 0$
(C) $\;y + x − 15 = 0$
(D) $\;y − 2x + 12 = 0$
My attempt: Let $(h,k)$ be the point on parabola where normal is to be found out. Taking derivative, I get the slope of the normal to be $\frac{-k}{2}$. Since the normal passes through $(9,6)$, so, the equation of the normal becomes:$$y-6=\frac{-k}{2}(x-9)$$$$\implies \frac{kx}{2}+y=\frac{9k}{2}+6$$
By putting $k$ as $2,-2,-4$ and $6$, I get normals mentioned in $A,B,C$ and $D$ above (not in that order).
But the answer is given as $A,B$ and $D$. What am I doing wrong?
AI: In addition to having slope $-k/2$ the normal must also pass through the point of contact $(h,k)$. The line in option $C$ does not pass through the point of contact for $k=2$ which is $(1,2)$. Your equation is the equation of a line having the slope of a normal at point $(h,k)$ on the parabola and passing through $(9,6)$. It is not necessarily a normal because you didn't make it pass through the point of contact.
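For completeness, here is the computation made explicit (a standard sketch; the parametrization $(k^2/4, k)$ of $y^2=4x$ is the usual one). Requiring the normal at $(k^2/4,k)$ to pass through $(9,6)$ gives
$$6-k=-\frac k2\left(9-\frac{k^2}4\right)\iff k^3-28k-48=0\iff(k+2)(k+4)(k-6)=0,$$
so $k\in\{-2,-4,6\}$, which produces exactly the normals in options A, D and B; $k=2$, which would give option C, is not a root.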
|
H: What is meant by $f : [a,b] \times D \rightarrow R^m$
I am working through Nonlinear Systems by Khalil.
Lemma 3.1 states the following
Let $f : [a,b] \times D \rightarrow R^m$ be continuous for some domain $D \subset R^n$. Suppose that $[\partial f/\partial x]$ exists and is continuous on $[a,b]\times D$. If, for a convex subset $W \subset D$, there is a constant $L \geq 0$ such that
$$\left|\left| \frac{\partial f}{\partial x}(t,x)\right|\right| \leq L$$
on $[a,b]\times W$, then
$$||f(t,x) - f(t,y)|| \leq L ||x-y||$$
for all $t \in [a,b],\space x \in W$, and $y \in W$.
I am a bit uncertain how to think about the $f : [a,b] \times D \rightarrow R^m$.
Is it the Cartesian product? Are we slicing out part of the domain?
AI: Yes, $[a,b]\times D$ is the Cartesian product of $[a,b]$ and $D$. And, no, the author is not slicing out part of the domain.
|
H: Calculating Residue of $\frac{1}{\sin\left(\frac{\pi}{z}\right)}$
How do I calculate the residue of $f(z)=\frac{1}{\sin\left(\frac{\pi}{z}\right)}$ at the points $z=\frac{1}{n}$, $n\in\mathbb{Z}\setminus\{0\}$?
I know that the set $\{\frac{1}{n}:n\in\mathbb{Z}\setminus\{0\}\}$ is the set of isolated singularities of $f$ and the points $\frac{1}{n}$ are the poles. So $$Res\left(f;\frac{1}{n}\right)=\lim_{z\to\frac{1}{n}}\left(z-\frac{1}{n}\right)f(z)=\lim_{z\to\frac{1}{n}}\frac{\left(z-\frac{1}{n}\right)}{\sin\left(\frac{\pi}{z}\right)}$$$$=\frac{1}{\pi}\lim_{z\to\frac{1}{n}}z\left(z-\frac{1}{n}\right)\left(1+\frac{\pi^2}{6z^2}+\frac{\pi^4}{120z^4}+\dots\right).$$
But is it possible to calculate the limit from here?
AI: If $f(z)=g(z)/h(z)$ with $g(z_0)\neq 0, h(z_0)\neq 0, h'(z_0)\neq 0$, then, $\operatorname{Res}(f,z_0)=\frac{g(z_0)}{h'(z_0)}$.
Here: $f(z)=1, g(z)=\sin(\pi/z)$ and $z_0=1/n$.
$$\implies \operatorname{Res}(f,1/n)=\left.\frac{1}{-\frac{\pi}{z^2}\cos(\pi/z)}\right|_{z=1/n}=\frac{(-1)^{n+1}}{n^2\pi}$$
|
H: Continuous Functions with countably many jump discontinuitites
Let $X$ be the set of functions from $\mathbb{R}$ to $\mathbb{R}$ which can be written as
$$
f = \sum_{i=1}^{\infty} f_i I_{[a_i,b_i]},
$$
where $a_i<b_i$, each $f_i$ is continuous, but $f$ need not be continuous at $a_i$ (or $b_i$). What can be said about this set of functions (e.g., do they contain all $L^p$ functions, etc.)?
More generally, if $X$ is the set of all functions from $\mathbb{R}^k$ to $\mathbb{R}^n$ with at most $\mathfrak{c}$ many discontinuities, does $X$ contain the set of measurable functions?
AI: Let $f$ be a nowhere continuous bounded function on $[0,1]$, for instance the indicator function of the rationals in $[0,1]$, extended by $0$ to all of $\Bbb R$. This function belongs to $L^p(\Bbb R)$, yet it does not belong to $X$.
Edit: this function needs to be measurable, but the result still holds.
|
H: Implicit function theorem on vector valued function
Above is implicit function theorem, and here is a special case
In the second case, $f(y)=(x,z)\in\mathbb{R}^2$ and also $F\in\mathbb{R}^2$; how do we find $\partial_y f$? I know that $\partial_y f=-\frac{\partial_y F}{\partial_x F}$,
but this only applies if $F\in\mathbb{R}$, right?
AI: In the most general case of the Implicit Function Theorem we have that if $F\colon\mathbb{R}^{n+m}\to\mathbb{R}^m$ and $f\colon\mathbb{R}^n\to\mathbb{R}^m$ is such that $F(x_1,\ldots,x_n,f(x_1,\ldots,x_n))=0$, then
$$\begin{pmatrix}\frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n}\\\vdots & \ddots & \vdots\\\frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n}\end{pmatrix} = -\begin{pmatrix}\frac{\partial F_1}{\partial y_1} & \cdots & \frac{\partial F_1}{\partial y_m}\\\vdots & \ddots & \vdots\\\frac{\partial F_m}{\partial y_1} & \cdots & \frac{\partial F_m}{\partial y_m}\end{pmatrix}^{-1}\cdot\begin{pmatrix}\frac{\partial F_1}{\partial x_1} & \cdots & \frac{\partial F_1}{\partial x_n}\\\vdots & \ddots & \vdots\\\frac{\partial F_m}{\partial x_1} & \cdots & \frac{\partial F_m}{\partial x_n}\end{pmatrix}$$
Therefore in this particular example for the functions $F\colon\mathbb{R^3}\to\mathbb{R}^2$ and $f\colon\mathbb{R}\to\mathbb{R}^2$ we have
$$D_yf=\begin{pmatrix}\frac{\partial f_1}{\partial y}\\\frac{\partial f_2}{\partial y}\end{pmatrix} = -\begin{pmatrix}\frac{\partial F_1}{\partial x} & \frac{\partial F_1}{\partial z}\\\frac{\partial F_2}{\partial x} & \frac{\partial F_2}{\partial z}\end{pmatrix}^{-1}\cdot\begin{pmatrix}\frac{\partial F_1}{\partial y}\\\frac{\partial F_2}{\partial y}\end{pmatrix}$$
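Here is a toy verification with a concrete $F$ of my own choosing (a sympy sketch):

```python
import sympy as sp

x, z, y = sp.symbols('x z y')
F = sp.Matrix([x + z - y, x - z - y**2])   # F : R^3 -> R^2, F(x, z, y) = 0

sol = sp.solve([F[0], F[1]], [x, z], dict=True)[0]
f = sp.Matrix([sol[x], sol[z]])            # explicit f(y) = (x(y), z(y))

J_xz = F.jacobian([x, z])
J_y = F.jacobian([y])
formula = (-J_xz.inv() * J_y).subs(sol)    # the implicit-function formula

print(sp.simplify(f.diff(y) - formula))    # Matrix([[0], [0]]): they agree
```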
|
H: When $a$ and $b$ are relatively prime, how is $ax - by= 1$ always possible?
If $a$ and $b$ are relatively prime, you can always find integers $x$ and $y$ which make $ax-by=1$.
How is it possible?
AI: Given $a,b$ with $\gcd(a,b)=1$, let $c$ be the smallest positive integer of the form $ax-by$. We claim $c$ divides every number of the form $ax-by$. For let $am-bn=c$ and $ax-by=d$, then by the Division Theorem $d=cq+r$ with $0\le r<c$, so $$r=d-cq=ax-by-(am-bn)q=au-bv$$ where $u=x-mq$ and $v=y-nq$; by the minimality of $c$, we must have $r=0$, so $c$ divides $d$. Then $c$ must divide $a$ (take $x=1$, $y=0$) and $c$ must divide $b$ (take $x=0$ and $y=-1$), so $c=1$, QED.
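The coefficients can be produced constructively by the extended Euclidean algorithm (a Python sketch of mine; for the $ax-by=1$ form of the question, flip the sign of the second coefficient):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(34, 21)     # 34 and 21 are coprime
print(g, x, y)                     # 1 -8 13: 34*(-8) + 21*13 = 1
```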
|
H: A Square and two Quarter-Circles problem
Square $ABCD$ has side equal to $a$. Points $A$ and $D$ are centers of two Quarter-Circles (see image below), which intersect at point K. Find the area defined by side $CD$ and arcs $KC$ and $KD$.
Here's what I did: the darkened area can be found by subtracting the area of the figure defined by points $AKD$ from the quarter-circle $CAD$. Area of quarter-circle $= \dfrac{a^2\pi}{4}$. Now onto the harder part:
The way I calculated the area of $AKD$ is by noticing that it's half of an ellipse (at least I'm pretty sure it is), with $R_1 = \dfrac{a}{2}$ (by symmetry) and $R_2=\dfrac{a\sqrt3}{2}$ (by Pythagoras). The area of $AKD$ will be half of an ellipse: $\dfrac{R_1R_2\pi}{2} = \dfrac{\ a^2\sqrt3}{8}\pi$
The area of darkened figure will be the difference between two areas: $\dfrac{a^2\pi}{4} - \dfrac{\ a^2\sqrt3}{8}\pi $.
But my answer, for some reason, is way off. What am I doing wrong? Does $AKD$ not represent a semi-ellipse?
AI: Find area $S$ first:
$$S=\frac16 a^2\pi-P_{\triangle ADK}$$
Area of ADK is:
$$P_{ADK}=2S+P_{\triangle ADK}=2(\frac16 a^2\pi-P_{\triangle ADK})+P_{\triangle ADK}=\frac13 a^2\pi-P_{\triangle ADK}$$
$$P_{ADK}=\frac13 a^2\pi-\frac14a^2\sqrt3$$
Shaded area is simply:
$$P_{shaded}=P_{ADC}-P_{ADK}=\frac14 a^2\pi-(\frac13 a^2\pi-\frac14a^2\sqrt3)$$
$$P_{shaded}=\frac14a^2\sqrt3-\frac1{12}a^2\pi=\frac1{12}a^2(3\sqrt3-\pi)$$
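A Monte Carlo sanity check (a Python sketch of mine, taking coordinates $A=(0,0)$, $B=(0,1)$, $C=(1,1)$, $D=(1,0)$ and $a=1$, so the shaded region is inside the quarter disk centred at $D$ and outside the one centred at $A$):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 2_000_000
x, y = rng.random(N), rng.random(N)          # uniform points in the square
inside = ((x - 1) ** 2 + y**2 <= 1) & (x**2 + y**2 >= 1)
print(inside.mean())                          # ~ 0.1712
print((3 * np.sqrt(3) - np.pi) / 12)          # 0.17121...
```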
|
H: What is wrong in the following method of obtaining the Maclaurin series of $\frac{2x}{e^{2x}-1}$?
$\frac{2x}{e^{2x}-1} = -2x(1-e^{2x})^{-1}$
We can obtain the binomial series expansion of $(1-e^{2x})^{-1}$:
$(1-e^{2x})^{-1} = \sum_0^\infty\begin{pmatrix}-1\\n\end{pmatrix}(-e^{2x})^{n} = \sum_0^\infty(e^{2x})^{n} = 1 + e^{2x} + e^{4x} + e^{6x}... $
Now, each of the expressions $e^{2x}, e^{4x}, e^{6x}, ...$ can itself be expanded using the Maclaurin series for an exponential function, i.e. $e^{x}=\sum_0^\infty\frac{x^{n}}{n!}$. Thus, we have
$\sum_0^\infty(e^{2x})^{n} = 1 + e^{2x} + e^{4x} + e^{6x}... = 1 + \sum_0^\infty\frac{(2x)^{n}}{n!} + \sum_0^\infty\frac{(4x)^{n}}{n!} + \sum_0^\infty\frac{(6x)^{n}}{n!} + ...= 1 + \sum_0^\infty[\frac{2^{n}x^{n}}{n!} + \frac{4^{n}x^{n}}{n!} + \frac{6^{n}x^{n}}{n!} + ...]$
And,
$1+\sum_0^\infty[\frac{2^{n}x^{n}}{n!} + \frac{4^{n}x^{n}}{n!} + \frac{6^{n}x^{n}}{n!} + ...] = 1 + \sum_0^\infty[\frac{2^{n}x^{n}}{n!}(1 + 2^{n} + 3^{n} + ...)] = 1 + \sum\limits_{n=0}^\infty\frac{2^{n}x^{n}}{n!}\sum\limits_{k=1}^\infty k^{n}$
Next, we can multiply the last expression by $-2x$ to obtain the required series. But that turns out to be wrong, for if I expand the series thus obtained, I get something completely different from the correct expansion, which is
$1 − x + x^{2}/3 − x^{4}/45 ...$
Where am I going wrong?
AI: This is wrong because the expansion $(1-y)^{-1}=1+y+y^2+...$ is legal only when $|y|\lt 1$. For $(1-e^{2x})^{-1}=1+e^{2x}+e^{4x}+...$ to be true, we must have $|e^{2x}|\lt 1$, which fails for every $x\ge 0$.
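The correct expansion is easy to confirm symbolically (a sympy sketch of mine):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(2 * x / (sp.exp(2 * x) - 1), x, 0, 5))
# 1 - x + x**2/3 - x**4/45 + O(x**5), matching the stated expansion
```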
|
H: How is $(2+i\sqrt{2}) \cdot (2-i\sqrt{2})$ calculated?
What is $(2+i\sqrt{2}) \cdot (2-i\sqrt{2})$ ?
Answer:
(a) $4$
(b) $6$
(c) $8$
(d) $10$
(e) $12$
I calculate like this:
$(2+i\sqrt2),(2-i\sqrt2)\\(2+1.41421i),\;(2-1.41421i)\\3.41421i,\;0.58579i\\3.41421i+0.58579i\\4i$
Therefore, the answer is $4$.
But the correct answer is $6$.
How is it calculated correctly?
AI: Use $(P+Q)(P-Q)=P^2-Q^2$ and $i^2=-1$
Then $$F=(2+i\sqrt{2})(2-i\sqrt{2})=4-2i^2=4+2=6$$
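Or just let the machine do the arithmetic (a one-line Python check of mine):

```python
from math import sqrt

z = complex(2, sqrt(2))
print(z * z.conjugate(), abs(z) ** 2)   # (6+0j) and 6, up to rounding
```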
|
H: Countable sets bijective function
Suppose we have the infinite set $\{\ldots,-2,-1,0,1,2,\ldots\}$
If the set is countable then there will be a bijective function which maps elements of this set to the set containing all natural numbers.
However, I can't find any such function. Does this prove the set is uncountable?
AI: To prove the set is uncountable,
you would have to prove there is no bijective function from it to $\mathbb N$.
In fact, here is one:
$f(j)=\begin{cases}-2j\;\text{ if } j\le0\\\\2j-1\;\text{ if } j>0.\end{cases}$
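A quick check on a finite range (a Python sketch of mine, with the convention $0\in\mathbb N$):

```python
def f(j):
    return -2 * j if j <= 0 else 2 * j - 1

images = [f(j) for j in range(-10, 11)]
print(sorted(images))                    # [0, 1, 2, ..., 20], no repeats
assert len(set(images)) == len(images)   # injective on this range
```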
|
H: From generalized eigenvector to Jordan form
I can't figure out the following part of Chen's Linear Systems book. How does he "readily obtain" $Av_2=v_1+\lambda v_2$?
AI: Note that the fact that we have a "chain of generalized eigenvectors" implies that $(A - \lambda I)v_i = v_{i-1}$. So, we have
$$
(A - \lambda I)v_2 = v_1 \implies Av_2 - \lambda v_2 = v_1 \implies Av_2 = v_1 + \lambda v_2.
$$
|
H: Recursive to explicit form involving Fibonacci
I have a recursive formula for a sequence O: $ O_n = O_{n-1} + O_{n-2} + F_{n-1}$ where $F_n$ is the n-th Fibonacci number, $O_1 = 1$ and $O_2 = 2$.
After playing around with it, I found a new formula that might be easier to convert to the form I'm searching for: $ O_n = F_{n-3}\cdot O_1 + F_{n-2}\cdot O_2 + \sum_{k=2}^{n-1} F_{n-1-k}\cdot F_k$.
Now what I am searching for is an explicit formula for $O_n$ that doesn't include a summation.
I also tried plugging in Binet's formula and simplifying, to no avail.
Here's a similar post, but the math is too hard for me, so I can't transform it to fit my problem.
An interesting property I found is that $\lim_{x\to\infty} \frac{O_x}{O_{x-1}}$ is equal to the golden ratio.
AI: I calculated a few terms and got $(O_n)=(1,2,4,8,15,28,51...)$.
I looked this up in OEIS and found this formula, which could be made explicit with Binet's formula:
$O_n=\dfrac{(n+4)F_n+2nF_{n-1}}5.$
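The OEIS formula is easy to confirm against the recursion (a Python sketch of mine):

```python
F = [0, 1]                     # Fibonacci, F[0] = 0, F[1] = 1
O = [None, 1, 2]               # O[1] = 1, O[2] = 2
for _ in range(2, 30):
    F.append(F[-1] + F[-2])
for n in range(3, 30):
    O.append(O[n - 1] + O[n - 2] + F[n - 1])

assert all(5 * O[n] == (n + 4) * F[n] + 2 * n * F[n - 1] for n in range(1, 30))
print(O[1:8])                  # [1, 2, 4, 8, 15, 28, 51]
```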
|
H: Totalizing a complex in a triangulated category
I am self-studying homotopy theory and trying to understand a proof in this paper on page 218
Let
$$ ... \to X_n \xrightarrow{{f_n}} X_{n-1} \xrightarrow{{f_{n-1}}}
... \xrightarrow{{f_2}} X_1 \to 0$$
be a sequence in a triangulated category $\mathcal{I}$.
(A sequence, or more conventionally a complex, means $f_{i} \circ f_{i+1}=0$.)
Complete $X_2 \xrightarrow{{f_2}} X_1$ to a triangle
$X_2 \xrightarrow{{f_2}} X_1 \xrightarrow{{f_y}} Y_1
\xrightarrow{{f_{s2}}} \Sigma X_2 $.
Then the text says "Because the composite $X_3 \xrightarrow{{f_3}} X_2 \xrightarrow{{f_2}} X_1$ is zero, we can lift to $\Sigma X_3 \to Y_1$".
I don't understand why it's possible to make this lift.
AI: The answer is that exact triangles have the property that each morphism is a weak kernel/weak cokernel for the following/previous morphism in the following precise sense.
Lemma In a (pre-)triangulated category, if
$$ \newcommand\toby\xrightarrow
X\toby{f} Y \toby{g} Z\toby{h} \Sigma X$$
is an exact triangle, and $k:A\to Y$ is such that $gk=0$, then there exists a (not unique!) map $\tilde{k}:A\to X$ such that $k=f\tilde{k}$, and dually, if $l:Y\to A$ is such that $lf=0$, then there exists a map $\tilde{l}:Z\to A$ such that $l=\tilde{l}g$.
Proof.
Apply the morphism axiom to the following diagram
$$
\require{AMScd}
\begin{CD}
A @>1_A>> A @>>> 0 @>>>\Sigma A\\
@. @VkVV @V0VV @. \\
X @>f>> Y @>g>> Z @>h>> \Sigma X\\
\end{CD}
$$
$\blacksquare$
Now, how this gets applied to your question.
We have $f_2f_3=0$, and we have an exact triangle
$$\Sigma^{-1}Y_1 \to X_2 \toby{f_2} X_1\to Y_1,$$
so we can apply the lemma to make the lift.
|
H: Number of ways to pick at least three of a kind in 5-card poker - what's wrong with C(49,2) for the last two cards?
In finding the number of ways to get a 5-card poker hand that contains at least three of a kind, what's wrong with the following
$$n=\binom{13}{1}\binom{4}{3}
\binom{49}{2}$$
So, we have 13 numbers to choose from, and for each of those there are 4 cards for which we choose 3. Finally, we don't care about the other two cards because we care only about the probability to get at least three of a kind, so we pick 2 out of the remaining 49 cards. Using this and a total of
$$N = \binom{52}{5}$$ ways to pick 5 cards, we get probability
$$P=\frac{n}{N}=0.0235294\approx2.353\text{ %}$$
However, poker probabilities Wikipedia page says that the probability to get three of a kind or better is 2.87%, so what is going on here ?
AI: Wikipedia gives the probability "$3$ of a kind or better" which includes straights, flushes, full house, and $4$ of a kind.
Your calculation allows for $3$ or $4$ of a kind, or full house, but does not capture straights or flushes. (It also counts each four-of-a-kind hand $\binom43=4$ times, once per choice of three of the four cards, which is why your $2.353\%$ slightly exceeds the exact $2.281\%$ for three of a kind or better, excluding straights and flushes.)
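The counts are quick to reproduce (a Python sketch of mine, using the standard hand counts):

```python
from math import comb

N = comb(52, 5)
n = 13 * comb(4, 3) * comb(49, 2)              # the question's count
print(n / N)                                   # 0.023529...

trips = 13 * comb(4, 3) * comb(12, 2) * 4 * 4  # exactly three of a kind
full = 13 * comb(4, 3) * 12 * comb(4, 2)       # full house
quads = 13 * 48                                # four of a kind
print((trips + full + quads) / N)              # 0.022809... (n overcounts quads)

sf = 10 * 4                                    # straight flushes (incl. royal)
straight = 10 * 4**5 - sf
flush = 4 * comb(13, 5) - sf
print((trips + full + quads + straight + flush + sf) / N)   # 0.028713...
```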
|
H: Exercise 24(a) Chapter 3 Baby Rudin Proof Verification
Let $X$ be a metric space.
(a) Call two Cauchy sequences $\left\{ p_n \right\}$, $\left\{ q_n \right\}$ in $X$ equivalent if $$ \lim_{n \to \infty} d \left( p_n, q_n \right) = 0.$$ Prove that this is an equivalence relation.
Can someone let me know if my proof for transitivity is correct?
Let $\{p_n\}, \{q_n\}, \{r_n\}$ be Cauchy sequences in $X$. Suppose $\lim\limits_{n \to \infty} d \left( p_n, q_n \right) = 0 \textrm{ and } \lim\limits_{n \to \infty} d \left( q_n, r_n \right) = 0$. Let $\epsilon > 0$. Then, $\exists N_1, N_2 \in \mathbb{N}$ such that
\begin{equation*}
\begin{split}
n \geq N_1 &\implies d(p_n, q_n) < \epsilon/2 \\
n \geq N_2 &\implies d(q_n, r_n) < \epsilon/2
\end{split}
\end{equation*}
Pick $N = \max\{N_1, N_2\}$. Then, by the triangle inequality, $n \geq N$ implies
\begin{equation*}
d(p_n, r_n) \leq d(p_n, q_n) + d(q_n, r_n) < \epsilon
\end{equation*}
showing that $p_n \to r_n$ which means that$\lim\limits_{n \to \infty} d \left(p_n, r_n \right) = 0$.
AI: Almost. You claimed to have proved that $p_n\to r_n$, whatever that means. What you did prove was that $\lim_{n\to\infty}d(p_n,r_n)=0$, and that is what you were supposed to prove.
|
H: Completion product is product of completions
Let $X,Y$ be metric spaces and $\tilde{X}, \tilde{Y}$ their completions. Is it true that $\tilde{X} \times \tilde{Y}$ is the completion of $X \times Y$? Here both these products have the product metric/topology. I guess this can be proven using the universal property of the completion?
AI: Indeed it can. Since $\overline X\times\overline Y$ is complete and since the natural inclusion from $X\times Y$ into $\overline X\times\overline Y$ is distance-preserving and its range is dense, $\overline X\times\overline Y$ is indeed a completion of $X\times Y$. That's so because, if $Z$ is a complete metric space and $f\colon X\times Y\longrightarrow Z$ is uniformly continuous, $f$ can be extended to one and only one continuous function $F\colon\overline X\times\overline Y\longrightarrow Z$: if $(x,y)\in\overline X\times\overline Y$, you take a sequence $(x_n)_{n\in\Bbb N}$ of elements of $X$ and a sequence $(y_n)_{n\in\Bbb N}$ of elements of $Y$ such that $\lim_{n\to\infty}x_n=x$ and $\lim_{n\to\infty}y_n=y$ and you define$$F(x,y)=\lim_{n\to\infty}f(x_n,y_n).$$
|
H: Let $a,b \in \mathbb{Z}$ and let $d = gcd(a,b)$. Show that $\{ ka + lb: k,l \in \mathbb{Z}\} = \{md : m \in \mathbb{Z} \}$
I know that, given $d = \gcd(a,b)$, this also means $xa + yb = d$ for some integers $x, y$. Using this we get (showing from left to right side)
$$xa + yb = d$$ $$m(xa + yb) = md$$ $$xma + ymb = md$$
Now I am unsure how to conclude this properly, because I do not think that showing that $\gcd(ma, mb) = md$ is enough to conclude that $\{ ka + lb: k,l \in \mathbb{Z}\} = \{md : m \in \mathbb{Z} \}$, since $k, l$ are arbitrary.
Conversely, showing from right side to left side we get
$$md = m\cdot\gcd(a,b)$$ $$md = m\cdot (xa + yb)$$ $$md = xma + ymb$$
Again, I do not believe that it is enough to conclude that $\{md : m \in \mathbb{Z} \} = \{ ka + lb: k,l \in \mathbb{Z}\}$
AI: Notice that $d$ divide $a$ and $b$. Therefore $$d\mathbb Z\supset a\mathbb Z+b\mathbb Z.$$
Moreover, $d$ is the "smallest" integer (up to a unit) with such property. Therefore, the equality follow (because $\mathbb Z$ is a PID).
|
H: How is the $\inf$ defined in a metric space?
In my lecture notes:
definition:
In a metric space $(X,d)$, the distance $d(p,E)$ between a point $p \in X$ and a subset $E\subseteq X$ is defined as: $d(p,E)=\inf\{d(p,x)\mid x \in E\}$
Proposition: Let $(X,d)$ be a metric space and let $E\subseteq X$,
$p\in Cl(E)$ if and only if $d(p,E)=0$
proof:
By definition of $\inf$ there exists a sequence $p_n$ in $E$ such that
$\lim_{n\to\infty} d(p_n,p)=0$.
So $p$ is the limit of a sequence in $E$ iff $p\in Cl(E)$
I am having trouble with the first part: how is the $\inf$ in a metric space defined, and how is it proved that the definition, applied to $d(p,E)$, is equivalent to the existence of that converging sequence?
I was trying to use this one
$y=\inf(X)$ iff $\forall \varepsilon >0, \exists x \in X$ such that $y\leq x \leq y+ \varepsilon$
The problem is that it is valid for $\mathbb{R}$, and couldn't be generalized to a metric space.
AI: The infimum $\inf\{d(p,x):x\in E\}$ is just the ordinary infimum of a set of non-negative real numbers. It doesn’t matter that the real numbers happen to have been obtained as distances in some metric space: this is still just a set of real numbers, and since it’s bounded below (by $0$), it must have an infimum.
If $d(p,E)=0$, i.e., if $\inf\{d(p,x):x\in E\}=0$, then by the definition of infimum for each $n\in\Bbb Z^+$ there is an $x_n\in E$ such that $d(p,x_n)<\frac1n$, which by definition means that $\lim\limits_{n\to\infty}d(p,x_n)=0$.
Added: To prove the proposition, suppose first that $p\in\operatorname{cl}E$. Then for each $n\in\Bbb Z^+$ there is an $x_n\in B_d\left(p,\frac1n\right)\cap E$, where $B_d(y,r)$ is the open ball of $d$-radius $r$ centred at $y$; clearly this means that $d(p,x_n)<\frac1n$.
Let $D=\{d(p,x):x\in E\}$; $d(p,x)\ge 0$ for each $x\in E$, so $0$ is a lower bound for $D$, so $D$ has an infimum, say $\alpha$. By definition $\alpha$ is a lower bound for $D$, and if $\beta$ is any lower bound for $D$, then $\beta\le\alpha$, so $0\le\alpha$. Suppose that $\alpha>0$; then there is an $n\in\Bbb Z^+$ such that $\frac1n<\alpha$. But then $d(p,x_n)<\frac1n<\alpha$, and $\alpha$ isn’t a lower bound for $D$ (since certainly $d(p,x_n)\in D$). Thus, $\inf D=0$.
(Note that the sequence $\langle x_n:n\in\Bbb Z^+\rangle$ does converge to $p$, though we don’t actually need to use this fact explicitly.)
For the other direction, suppose that $p\notin\operatorname{cl}E$. Then there is an $r>0$ such that $B_d(p,r)\cap E=\varnothing$. Thus, $d(p,x)\ge r$ for each $x\in E$, $r$ is a lower bound for $D=\{d(p,x):x\in E\}$, and by definition $\inf D\ge r$ and hence $\inf D>0$.
|
H: Math notation for modulo
I have a little trouble understanding how to write mathematical notation for r = x%n. How should I write this in math notation if I want to get the remainder value after dividing by $n$? Note that $r$ is not necessarily an integer here: it is the remainder of $x$ (a double) divided by $n$ (an integer), so "$r = \operatorname{mod}(5.4, 3) = 2.4$" in this case. I think $x \equiv r \mod n$ means something different.
Edit: After thinking a while, I am still confused by the notation. Why is the notation $x \equiv r \mod n$ used at all? If $\operatorname{mod}$ is a mathematical operator like $\sin$ or $\cos$, why is the notation $r = \operatorname{mod}(x,n)$ not always used?
AI: You can just write $``x \bmod n"$.
According to Wikipedia
Given two positive numbers, a and n, a modulo n (abbreviated as a mod n) is the remainder of the Euclidean division of a by n, where a is the dividend and n is
the divisor.
Note that it was not required that $a$ or $n$ be integers.
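Note also that programming languages disagree on the details for negative and fractional arguments (a Python illustration of mine):

```python
import math

print(5.4 % 3)             # 2.4000000000000004 (floating-point rounding)
print(math.fmod(5.4, 3))   # 2.4000000000000004
print(-5.4 % 3)            # 0.5999999999999996: % takes the divisor's sign
print(math.fmod(-5.4, 3))  # -2.4000000000000004: fmod takes the dividend's
```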
|
H: Find the Sum of the Series $\sum_{n=0}^\infty \frac{3n^2 -1}{(n+1)!}$
Find the sum of the series $$\sum_{n=0}^\infty \frac{3n^2 -1}{(n+1)!}$$ I separated the series into the sum of $\sum_{n=0}^\infty \frac{3n^2}{(n+1)!}$ and $\sum_{n=0}^\infty \frac{-1}{(n+1)!}$. First I proceeded to find the sum of the series $\sum_{n=0}^\infty \frac{-1}{(n+1)!}$. What I did is to integrate $e^x = \sum_{n=0}^\infty \frac{x^{n}}{n!}$, then $e^x = \sum_{n=0}^\infty \frac{x^n x}{(n+1)n!} = \sum_{n=0}^\infty \frac{x^n x}{(n+1)!}$. Finally I've got that $$\frac{e^x}{x}=\sum_{n=0}^\infty \frac{x^n }{(n+1)!}.$$ So $\sum_{n=0}^\infty \frac{-1}{(n+1)!}$ should be equal to $e$ if I choose $x=1$. The problem is that when I calculate the sum with Wolfram Alpha https://www.wolframalpha.com/input/?i=sum+%281%29%2F%28%28n%2B1%29%21%29+%2Cn%3D0+to+infinity the result is different. It seems to be missing a term. The sum $\sum_{n=0}^\infty \frac{3n^2}{(n+1)!}$ I'm not really sure how to calculate. Thanks in advance
AI: $$\sum_{n=0}^\infty\frac{1}{(n+1)!}=\sum_{m=1}^\infty\frac1{m!}=e-1,$$
$$\sum_{n=0}^\infty\frac{n+1}{(n+1)!}=\sum_{n=0}^\infty\frac1{n!}=e,$$
$$\sum_{n=0}^\infty\frac{(n+1)n}{(n+1)!}=\sum_{n=1}^\infty\frac1{(n-1)!}=e.$$
If you can express
$$\sum_{n=0}^\infty\frac{3n^2-1}{(n+1)!}$$
as a linear combination of these sums, then you're in business.
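One way to finish (my completion of the hint, not the original answerer's): since $3n^2-1 = 3(n+1)n - 3(n+1) + 2$, the sum equals $3e - 3e + 2(e-1) = 2e-2$. A numerical check:

```python
from math import e, factorial

total = sum((3 * n**2 - 1) / factorial(n + 1) for n in range(200))
print(total, 2 * e - 2)    # both are 3.43656365691809...
```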
|
H: Simplify $a.b+c.d$
Suppose that $(S, +, \cdot)$ is a semiring, where the operations are defined as $x\cdot y=\min(x, y)$ and $x + y=\max(x, y)$.
Can we further simplify the expression $a\cdot b+c\cdot d$, where $a, b, c, d\in S$, $a\leq d$ and $c\leq b$? Note also that $(S, \leq)$ is a partially ordered set.
I couldn't simplify the above expression into a simpler form.
AI: This answer has been revised in the light of further information regarding the question.
If $\min$ and $\max$ are graph intersection and graph union,
we also need to understand $x\leq y$ as a graph relationship.
I assume $\leq$ is defined consistently with the idea that min is a "minimum" and max is a "maximum", so
$ \min(x,y) \leq x, $
$ \min(x,y) \leq y, $
$ x \leq \max(x,y), $ and
$ y \leq \max(x,y). $
If by "simpler" you mean "fewer operations" then there must also be fewer variables, that is, you must eliminate at least one variable from
$a\cdot b+c\cdot d$ without changing the value of the expression
for any $a,b,c,d \in S.$
If you can show that the value of $a\cdot b+c\cdot d$ is independent of one of the variables then you might be able to write $a\cdot b+c\cdot d$
in terms of the three remaining variables, which might result in a simpler expression. (Or you might be able to eliminate two variables and write an expression in just the two variables remaining.)
On the other hand, if you can exhibit an example of $a,b,c,d$ in which it is impossible to know $a\cdot b+c\cdot d$ without knowing $a,$ then you have shown that $a$ cannot be eliminated. Similar statements apply to the other three variables.
So you have two possible paths, general proofs to eliminate one or more variables (followed by making sense out of the remaining variables),
or specific examples that show some variables cannot be eliminated.
Or maybe both.
This was the original answer, which does not hold in the context where $\min$ and $\max$ are graph intersection and graph union.
I'm assuming $\min(x,y)$ and $\max(x,y)$ are defined for all $x,y \in S$
so that for each $x$ and $y$ at least one of the following statements is true:
$\min(x,y) = x$ and $\max(x,y) = y.$
$\min(x,y) = y$ and $\max(x,y) = x.$
If we also say the first statement implies $x\leq y$
and the second implies $y \leq x,$
then either $x\leq y$ or $y\leq x$ for all $x,y\in S.$
If $\leq$ is a partial order, then we know that $x\leq y$ and $y\leq x$
cannot both be true unless $x = y.$
Moreover, $x\leq y$, $y\leq z,$ and $z\leq x$ cannot all be true
for some $x,y,z\in S$ unless $x = y = z.$
The value of $a\cdot b$ then is either $a$ or $b$.
The value of $c\cdot d$ is either $c$ or $d$.
The value of $x + y$ (for any $x,y\in S$) is either $x$ or $y$.
What does that say about the possible values of $a\cdot b+c\cdot d$?
If you can write a set that contains all possible values of $a\cdot b+c\cdot d$
(and may contain things that are not possible values of $a\cdot b+c\cdot d$),
you can try, for each element in that set, to show either that it is a possible value (by exhibiting a partial order that produces that value)
or is not a possible value.
That might give you some ideas about whether there is a simpler expression and (if there is) what that expression might be.
I can't think how to give more hints without spoiling the exercise.
|
H: Prove that $d(x,z) \leq d(x,y) + d(y,z)$ in $\textbf{R}^2$
One can prove $d(x,z) \leq d(x,y) + d(y,z)$ in $\textbf{R}^2$ using the relation $d(x + y, 0) \leq d(x, 0) + d(y, 0)$, where:
$0 = 0$-vector
$d(x, y) = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2}$ - the distance formula
I was able to find and use $(x_1z_2 - x_2z_1) \geq 0$
To form $d(x,z) \leq d(x, 0) + d(z, 0)$ then by symmetry
\begin{align*}
\begin{cases}
d(x,y) \leq d(x, 0) + d(y, 0)\\\\
d(y,z) \leq d(y, 0) + d(z, 0)
\end{cases}
\end{align*}
You get $d(x, y) + d(y, z) \leq d(x, 0) + d(y, 0) + d(y, 0) + d(z,0)$ and $d(x, z) \leq d(x, 0) + d(z, 0)$.
Any hints from here?
AI: $d(x,z)=d(x-z,0)=d(x-y+y-z,0)\leq d(x-y,0)+d(y-z,0)=d(x,y)+d(y,z)$.
|
H: Decomposition of a locally free sheaf as tensor product of sheaves
The setting is as follows: Let $X$ be an algebraic surface, $\mathcal{F}$ a locally free sheaf of rank 2 on $X$ contained in $\Omega^1_X$, and $D$ a divisor on $X$ such that $\mathcal{F}\otimes\mathcal{O}_X(-D)$ has a non-zero global section. Then there is a non-zero divisor $S$ on $X$ such that $\mathcal{F}\otimes\mathcal{O}_X(-D-S)$ admits a global section with at most isolated zeros.
Why is the last statement true? This is the first sentence in the proof of Proposition VII.4.3 in the book Compact Complex Surfaces by Barth, Hulek, Peters, van de Ven. Any help is appreciated, thanks!
AI: This is a general fact and has nothing to do with being a subsheaf of $\Omega^1_X$. If $E$ is a rank two vector bundle with a section, we have an inclusion $O_X\to E$. If you pull back the torsion subsheaf of $E/O_X$, we get an exact sequence $0\to L\to E\to G\to 0$, with $L=O_X(S)$, $S$ an effective divisor and $G$ torsion free. One easily checks that the section $O_X\to E(-S)$ vanishes only at isolated points.
|
H: Prove that commuting matrices over an algebraically closed field are simultaneously triangularizable.
Given an algebraically closed field $\mathbb K$ and matrices $A, B \in \mathbb K^{n \times n}$ such that $A B = B A$, show that $A$ and $B$ are simultaneously triangularizable, i.e., show that there exists a matrix $T$ such that $T^{-1} A T$ and $T^{-1} B T$ are both upper triangular.
AI: Observe that any eigenspace of $A$ is $B$-invariant. Explicitly, given any vector $v$ such that $Av = \lambda v,$ we have that $A(Bv) = (AB)v = (BA)v = B(Av) = B(\lambda v) = \lambda Bv$ so that $Bv$ is either $0$ or an eigenvector of $A$ with respect to $\lambda.$ Given any nonzero eigenspace $W_\lambda$ of $A$ corresponding to the eigenvalue $\lambda$ of $A,$ we conclude that $B$ restricts to a linear operator $B|_{W_\lambda} : W_\lambda \to W_\lambda.$ Considering that $\mathbb K$ is algebraically closed, the characteristic polynomial of $B|_{W_\lambda}$ splits into (not necessarily distinct) linear factors, hence there exists a scalar $\mu$ such that $B|_{W_\lambda} - \mu I$ is not injective on $W_\lambda.$ Consequently, there exists a nonzero vector $w$ in $W_\lambda$ such that $Bw = \mu w$ and $Aw = \lambda w.$ We conclude therefore that $A$ and $B$ have a common eigenvector.
Can you finish the proof now that you have proven the hint?
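A numerical illustration of the key step, that eigenvectors of $A$ are preserved by a commuting $B$ (a numpy sketch of mine; $B$ is taken to be a polynomial in $A$, which automatically commutes with it):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
A = A + A.T                          # symmetric, generically distinct eigenvalues
B = A @ A + 2 * A + 3 * np.eye(4)    # commutes with A
assert np.allclose(A @ B, B @ A)

w, V = np.linalg.eigh(A)
for i in range(4):
    v = V[:, i]                      # unit eigenvector of A
    mu = v @ B @ v                   # Rayleigh quotient
    print(np.linalg.norm(B @ v - mu * v))   # ~1e-14: v is an eigenvector of B
```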
|
H: Finding an element whose minimal polynomial is an Eisenstein polynomial
If we have an extension of number fields $L/K$ and $Q$ is a prime ideal of $L$ lying over $P$, do we know for certain if there exists an element $\pi \in Q \setminus Q^2$ whose minimal polynomial over $K$ is an Eisenstein polynomial?
If you don't mind, I would prefer arguments that do not explicitly involve local fields. Thank you very much in advance!
EDIT: if the answer is yes, can we ask for the minimal polynomial of $\pi$ to also have degree e(Q/P)?
AI: I think the answer should be no, in general. After localizing, we'd get an extension $L_{Q}/K_{P}$ of local fields with uniformizer $\pi$. This extension however may not be totally ramified, hence the minimum polynomial of $\pi$ may not be Eisenstein. Here we are using the fact that an extension $L/K$ of local fields is totally ramified if and only if the minimum polynomial of every uniformizer $\pi_L$ is Eisenstein over $K$.
|
H: How does $0 < 1/j$ and $1/k < 1/N$ imply that $| 1/j - 1/k | \leq 1/N$?
Synopsis
In Tao's Analysis 1, in his proof that the sequence $a_1,a_2, a_3, \dots$ defined by $a_n := 1/n$ is a Cauchy sequence, there is an inequality that I don't really feel comfortable with. I'll highlight this inequality below the proof and give some insight later.
Proof
We have to show that for every $\epsilon > 0$, the sequence $a_1, a_2, \dots$ is eventually $\epsilon$-steady. So let $\epsilon >0$ be arbitrary. We now have to find a number $N \geq 1$ such that the sequence $a_N, a_{N+1}, \dots$ is $\epsilon$-steady. This means that $d(a_j, a_k) \leq \epsilon$ for every $j,k \geq N$, i.e. $$|1/j - 1/k| \leq \epsilon \text{ for every $j,k \geq N$}.$$ Now since $j,k \geq N$, we know that $0<1/j$, $1/k < 1/N$, so that $|1/j - 1/k| \leq 1/N$. So it is sufficient for $N$ to be greater than $1/ \epsilon$.
The Inequality
The inequality that if $0<1/j$ and $1/k \leq 1/N$ is true, then $|1/j - 1/k| \leq 1/N$ is true just doesn't click for me. Probably it's my lack of sleep, but the intuition isn't there. I have, however, been able to prove it partially (I think, though there's probably something wrong since I'm mentally so incapacitated), and I'll highlight my partial proof below.
Partial Proof of Inequality That Is Probably Obvious to Most People Who Are Not Me
Suppose $j \geq k$. Then $1/j \leq 1/k$. So $1/N \geq 1/k > 1/k - 1/j = |1/j - 1/k|$ (since $1/j - 1/k \le 0$). Now suppose $j < k$. Then by a similar argument, $1/N \geq 1/j \geq 1/j - 1/k = |1/j - 1/k|$ (isn't $1/j \leq 1/N$ since both $j,k \geq N$? I hope so.). Therefore, $|1/j - 1/k | < 1/N$.
Finale
As you can see, $|1/j - 1/k | < 1/N$ is slightly different from $|1/j - 1/k | \leq 1/N$. What's wrong with my "proof"? Why is my brain so bewildered by this inequality that probably isn't even that important? What is some intuition for this?
EDIT
I now realize that I simply suck at reading textbooks and that Tao was implying that $0<1/j<1/N$ and $0<1/k<1/N$ were both true at the same time. Wow, I am actually a mess right now. This is now so painfully obvious and I cannot believe I spent like 15 minutes just bewildered. Thank you so much for your kind answers.
AI: Perhaps what you're missing is this: when Tao writes
we know that $0 < 1/j, 1/k < 1/N$ is true
he means that there are two inequalities that are true, namely
$$
0 < 1/j < 1/N \qquad \text{and} \qquad 0 < 1/k < 1/N.
$$
So, now you have two numbers, $1/j$ and $1/k$, sandwiched strictly between $0$ and $1/N$. So, the distance between these two "inner numbers" is definitely smaller than the distance between the two "outer numbers", that is,
$$
\lvert 1/j - 1/k \rvert < 1/N - 0 = 1/N.
$$
|
H: Fourier transform of $\sqrt{f(t)}$
If the Fourier transform of $f(t)$ is $F(f)$, can you conclude that the Fourier transform of $\sqrt{f(t)}$ is $\sqrt{F(f)}$?
Probably this is not always the case, but what are the cases in which this is true?
AI: Counterexample: Let $f=\chi_{[-1,1]}.$ Then $F(f)(x)=2(\sin x)/x$. But here $\sqrt f =f.$ If the result held, you'd have
$$\sqrt {2(\sin x)/x} = 2(\sin x)/x$$
for all $x,$ contradiction.
|
H: Find the mistake - 4th order homogeneous ODE with constant coefficients
Solve the following $4$th order ODE:
$\varphi''''+\varphi=0$
I've tried the standard approach and computed the zeros of $x^4+x$, which consist of $0,\cos(\frac{\pi}{3})+i\sin(\frac{\pi}{3}),-1,\cos(\frac{\pi}{3})-i\sin(\frac{\pi}{3})$.
This gives us the general solution $\varphi(t)=c_1e^{0t}+c_2e^{\cos(\frac{\pi}{3})t}\cdot \cos(\sin(\pi/3)\,t)+c_3e^{\cos(\frac{\pi}{3})t}\cdot\sin(\sin(\pi/3)\,t)+c_4e^{-t}$; however, when I plot $\varphi$ and $\varphi''''$, I can see that this does in fact not solve the ODE. Where have I made a mistake?
AI: For the DE: $$y''''+y=0$$
The characteristic polynomial should be:
$$r^4+1=0$$
And not:
$$r^4+r=0$$
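For completeness (a standard computation, sketched here): the roots of $r^4+1=0$ are $r=\frac{\pm1\pm i}{\sqrt2}$, so the real general solution is
$$y(t)=e^{t/\sqrt2}\left(c_1\cos\tfrac{t}{\sqrt2}+c_2\sin\tfrac{t}{\sqrt2}\right)+e^{-t/\sqrt2}\left(c_3\cos\tfrac{t}{\sqrt2}+c_4\sin\tfrac{t}{\sqrt2}\right).$$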
|
H: A quiz question based on subspace topology i am unable to solve
While trying sample quiz papers I am unable to solve this particular question in topology.
Its image:
Questions:
(1) Why is the 4th option false? I think $B$ can be deformed to a unit circle, so it must hold.
I am also unable to reason about whether the 3rd option must hold or not.
Any help would be really appreciated.
AI: C: The image of a connected set is connected. The image of $A$ intersects some connected component $X_1$ non-trivially. The image of $A$ must be contained in $X_1$, otherwise its union with $X_1$ would be a larger connected set, contradicting the maximality of $X_1$. Similarly the image of $B$ is contained in some connected component $X_2$. As the map is surjective $X=X_1 \cup X_2$.
D: If you remove any point from $S^1$ the result is contractible. If you remove any point from $B$, the result has fundamental group $F_2$ (free on two generators).
D: (without algebraic topology) If you remove any two distinct points from $S^1$ the result is not connected. If you remove any two distinct points from $B$, the result is connected.
|
H: Baby Rudin 2.17 Perfect Set?
I'm confused about the solution to 2.17 in Baby Rudin. Let $E$ be the set of all $x \in [0,1]$ whose decimal expansion contains only the digits $4$ and $7$. Is $E$ countable? Is $E$ dense in $[0,1]$? Is $E$ compact? Is $E$ perfect?
The solution says that $E$ is perfect. However, I don't see how $E$ has a single limit point. For example, take $0.7$ and let $\epsilon = 0.03$. Any other point $q$ in $E$ must be $\geq 0.74$ or $\leq 0.48$; $0.74 - 0.7 = 0.04 \geq 0.03$, and $0.7 - 0.48 = 0.22 \geq 0.03$. So $N_{0.03}(0.7)$ contains only $0.7$ from $E$, so $0.7$ is not a limit point.
What am I doing wrong?
AI: The number $0.7$ does not belong to $E$, since $0.7=0.70000\ldots$. Elements of $E$ are, for instance, $0.77777\ldots$, $0.7474474447\ldots$ and so on. In general, every point of $E$ is a limit point of $E$: changing the $k$-th decimal digit of $x\in E$ (a $4$ to a $7$ or vice versa) produces another element of $E$ within $3\cdot 10^{-k}$ of $x$, and $k$ can be taken arbitrarily large.
|
H: How many maximum binary pairs are possible in a Poset?
Answer : $n(n+1)/2$
The maximum number of pairs is achieved iff the poset is a toset (a totally ordered set).
A toset is reflexive, so I don't have control over self-loops like $(1,1),(2,2)$: they have to be there.
My approach to this was:
If we consider $n$ elements in a set, we have $n^2-n=n(n-1)$ non-diagonal entries, considering a matrix view of the toset. Thus, the pairs possible: $n(n-1)/2$ (by antisymmetry, at most one of $(a,b)$ and $(b,a)$ can be present, though that's not the main point here).
I'm missing some cases, because the final answer is my answer $+\,n$, which makes it the sum of the first $n$ natural numbers.
Please help me find out what case I am missing, or whether I have got this all wrong.
AI: I suspect that binary pair here means ordered pair belonging to the partial order. If $P$ is a partial order on $[n]=\{1,2,\ldots,n\}$, then $P$ contains all of the pairs $\langle k,k\rangle$ on the diagonal of your matrix (by reflexivity) as well as at most half of the off-diagonal pairs (by antisymmetry), so
$$|P|\le n+\frac{n(n-1)}2=\frac{n^2-n+2n}2=\frac{n^2+n}2=\frac{n(n+1)}2\;.$$
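For small $n$ the bound can be confirmed by exhaustive search (a Python sketch of mine, for $n=3$):

```python
from itertools import combinations

def is_poset(rel, n):   # rel already contains the diagonal
    for a, b in rel:
        if a != b and (b, a) in rel:                    # antisymmetry
            return False
        for c in range(n):
            if (b, c) in rel and (a, c) not in rel:     # transitivity
                return False
    return True

n = 3
diag = {(i, i) for i in range(n)}
off = [(i, j) for i in range(n) for j in range(n) if i != j]
best = max(len(diag | set(extra))
           for k in range(len(off) + 1)
           for extra in combinations(off, k)
           if is_poset(diag | set(extra), n))
print(best, n * (n + 1) // 2)    # 6 6
```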
|
H: Safe packing Constraint satisfaction problem - is it optimal?
Problem:
You need to pack several items into your shopping bag without squashing anything. The items are to be placed one on top of the other. Each item has a weight and a strength, defined as the maximum weight that can be placed above that item without it being squashed. A packing order is safe if no item in the bag is squashed, that is, if, for each item, that item’s strength is at least the combined weight of what’s placed above that item. For example, here are three items and a packing order:
https://i.stack.imgur.com/PZbeW.png
This packing is not safe. The bread is squashed because the weight above it, 5, is greater than its strength, 4. Swapping the apples and the bread, however, gives a safe packing.
Goal:
Construct a CSP model for this problem, i.e. one which finds safe packings. In constructing the model, consider the need to place N items, where item i is placed in position Pi (0 means “at the top”), has weight Wi and strength Si.
What I tried:
https://i.stack.imgur.com/KI5xK.jpg
So I wanted to know: is my CSP model correct and optimal?
AI: Your approach seems correct. Just to ease the notation a bit, consider a new variable for the amount of weight placed on each item, say $W$. If you are actually coding the problem, it might help to reduce the overhead of recalculating $\sum_T w_i$ in your notation.
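To make the safety constraint $S_i \ge \sum_{j:\,P_j<P_i} W_j$ concrete, here is a brute-force checker (a Python sketch of mine, not a CSP encoding; the item data are made up, except that the bread's strength $4$ is taken from the example):

```python
from itertools import permutations

items = [("bread", 3, 4), ("apples", 5, 10), ("pie", 2, 1)]  # (name, weight, strength)

def is_safe(order):              # order[0] is at the top of the bag
    above = 0
    for name, weight, strength in order:
        if above > strength:     # combined weight above exceeds strength
            return False
        above += weight
    return True

for order in permutations(items):
    if is_safe(order):
        print([name for name, _, _ in order])
```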
|
H: How to show that $|z_1+z_2|+|z_1-z_2| = |z_1+\sqrt{z^2_1-z^2_2} |+|z_1-\sqrt{z^2_1-z^2_2} | $?
How to show that $$|z_1+z_2|+|z_1-z_2| = |z_1+\sqrt{z^2_1-z^2_2} |+|z_1-\sqrt{z^2_1-z^2_2} | $$
Is this identity correct?
I tried my best to prove it,
but got nowhere near. Any help?
AI: Let $a=z_1+z_2$ and $b=z_1-z_2$, then we want to show that
$$|a|+|b|=\left|\frac{a+b}{2}+\sqrt{ab}\right|+\left|\frac{a+b}{2}-\sqrt{ab}\right|=\frac{1}{2}\left|\sqrt{a}+\sqrt{b}\right|^2+\frac{1}{2}\left|\sqrt{a}-\sqrt{b}\right|^2$$
Now we know that
\begin{align*}
|z+w|^2 & =(z+w)(\bar{z}+\bar{w})=|z|^2+(z\bar{w}+w\bar{z})+|w|^2\\
|z-w|^2 & =(z-w)(\bar{z}-\bar{w})=|z|^2-(z\bar{w}+w\bar{z})+|w|^2
\end{align*}
Let $z=\sqrt{a}$ and $w=\sqrt{b}$. Recall that $\left| \sqrt{a}\right|^2=\left|(\sqrt{a})^2\right|=|a|$. Hopefully you can take it from here.
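A random numerical test of the identity itself (a Python sketch of mine; the branch of the square root is immaterial, since the right-hand side is unchanged under $\sqrt{\cdot}\mapsto-\sqrt{\cdot}$):

```python
import cmath, random

random.seed(0)
for _ in range(10_000):
    z1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w = cmath.sqrt(z1 * z1 - z2 * z2)
    lhs = abs(z1 + z2) + abs(z1 - z2)
    rhs = abs(z1 + w) + abs(z1 - w)
    assert abs(lhs - rhs) < 1e-9
print("identity holds on all samples")
```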
|
H: System of equations of non-relativistic scattering in the laboratory system
Considering the system of equations of non-relativistic scattering in the laboratory system:
$$\begin{cases} \dfrac{1}{2} m_{1} v_{1}^{2} &=\dfrac{1}{2} m_{1} v_{2}^{2}+T_2 \,,\\
m_{1} v_{1} &=m_{1} v_2 \cos \psi_{1}+p_{2} \cos \psi_{2} \,,\\
0 &=m_{1}v_{2} \sin \psi_{1}+p_{2} \sin \psi_{2} \,,\\
T_2 &= \dfrac{p_2^2}{2 m_2}\,. \end{cases}$$
Prove that
$$T_2= 2 \frac{m_1^2 m_2}{(m_1+m_2)^2}v_1^2 \cos^2 \psi_2\,.$$
Is there a way to find the solution without going into the centre-of-mass reference system, i.e., considering only the 4 equations above?
AI: I assume that we are to solve for $T_2$ in terms of $m_1$, $m_2$, $v_1$, and $\psi_2$, and I read the first equation of the given system as conservation of kinetic energy, $$\frac{1}{2}m_1v_1^2=\frac12m_1v_2^2+T_2\,.$$
From the second and the third equations in the given system of equations, we have
$$m_1v_2\cos(\psi_1)=m_1v_1-p_2\cos(\psi_2)$$
and
$$m_1v_2\sin(\psi_1)=-p_2\sin(\psi_2)\,.$$
By squaring the two equations above and then adding them, we obtain
$$m_1^2v_2^2=\big(m_1v_1-p_2\cos(\psi_2)\big)^2+\big(-p_2\sin(\psi_2)\big)^2=m_1^2v_1^2-2m_1v_1p_2\cos(\psi_2)+p_2^2\,.$$
Hence,
$$m_1^2(v_1^2-v_2^2)-2m_1v_1p_2\cos(\psi_2)+p_2^2=0\,.\tag{*}$$
From the first equation in the given system of equations, we get
$$2T_2=m_1(v_1^2-v_2^2)\,.\tag{#}$$
From the fourth equation in the given system of equations, we have
$$2T_2=\frac{p_2^2}{m_2}\,.\tag{@}$$
Plugging (#) and (@) into (*) yields
$$\frac{m_1}{m_2}\,p_2^2-2m_1v_1p_2\cos(\psi_2)+p_2^2=0\,.$$
Thus, $p_2=0$ or
$$\frac{m_1}{m_2}\,p_2-2m_1v_1\cos(\psi_2)+p_2=0\,.\tag{%}$$
The case $p_2=0$ corresponds to the initial condition, so we eliminate it. From (%), we get
$$p_2=\frac{2m_1m_2}{m_1+m_2}\,v_1\cos(\psi_2)\,.$$
That is,
$$T_2=\frac{p_2^2}{2m_2}=\frac{2m_1^2m_2}{(m_1+m_2)^2}\,v_1^2\cos^2(\psi_2)\,.$$
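As a cross-check, one can simulate a concrete elastic collision and compare (a numpy sketch of mine; the post-collision velocities below come from the standard centre-of-mass construction, used here only to generate consistent data, not in the derivation above):

```python
import numpy as np

m1, m2, v1, theta = 2.0, 3.0, 4.0, 0.7        # arbitrary test values
v_cm = m1 * v1 / (m1 + m2)
u1 = np.array([v_cm + (m2 * v1 / (m1 + m2)) * np.cos(theta),
               (m2 * v1 / (m1 + m2)) * np.sin(theta)])   # projectile, lab frame
u2 = np.array([v_cm * (1 - np.cos(theta)), -v_cm * np.sin(theta)])  # target

assert np.allclose(m1 * u1 + m2 * u2, [m1 * v1, 0])                  # momentum
assert np.isclose(0.5 * m1 * u1 @ u1 + 0.5 * m2 * u2 @ u2,
                  0.5 * m1 * v1**2)                                  # energy

T2 = 0.5 * m2 * u2 @ u2
cos_psi2 = u2[0] / np.linalg.norm(u2)
print(T2, 2 * m1**2 * m2 / (m1 + m2) ** 2 * v1**2 * cos_psi2**2)     # equal
```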
|
H: The sum of these 9! determinants is?
(image is attached for those who think I have changed the statement of the question while copying from the book)
Choose any 9 distinct integers. These 9 integers can be arranged to form 9! determinants, each of order 3. The sum of these 9! determinants is?
My approach
For any determinant $\Delta$, its negative $-\Delta$ also occurs among the arrangements,
∴ sum = 0
I am looking for another approach!
AI: You can pair up the determinants: For any matrix $M$, define $M'$ by flipping the first and second rows. Then $\det(M)+\det(M')=0$. To avoid duplications, sum over all $M$ such that $m_{11} \lt m_{21}$. Then each matrix appears exactly once as either $M$ or $M'$.
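The cancellation is easy to confirm by brute force over all $9!$ arrangements (a Python sketch of mine):

```python
from itertools import permutations

nums = [1, 2, 3, 4, 5, 7, 8, 9, 11]      # any 9 distinct integers

def det3(m):                             # row-major 3x3 determinant
    a, b, c, d, e, f, g, h, i = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print(sum(det3(p) for p in permutations(nums)))   # 0
```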
|
H: Are all prime ideals of $\mathbb C[x,y]/(y^2-x^3+x)$ maximal?
In fact I'm trying to prove that $\mathbb C[x,y]/(y^2-x^3+x)$ is a Dedekind domain. Till now I believe I was able to show that it is a Noetherian integral domain (easy) which is an integrally closed domain. If I prove that all prime ideals of $\mathbb C[x,y]/(y^2-x^3+x)$ are maximal, the job is done.
We know that prime ideals of $\mathbb C[x,y]/(y^2-x^3+x)$ are in bijective, inclusion-preserving correspondence with prime ideals of $\mathbb C[x,y]$ which contain $y^2-x^3+x$. If I'm not wrong, these may be of the form $(f, y^2-x^3+x)$, where $f$ is an irreducible polynomial in $\mathbb C[x]$ (so it should be of the form $x-\alpha$, where $\alpha \in \mathbb C$ and $\alpha \neq 1, -1$, right?) and such that $y^2-x^3+x$ is irreducible mod $f$. But maximal ideals of $\mathbb C[x,y]$ are of the form $(x-a,y-b)$, where $a$ and $b$ are from $\mathbb C$. Can we somehow represent $(f, y^2-x^3+x)$ in such a way?
Correction that I realized later: we don't need to represent this ideal differently! It is indeed maximal since $(f)$, for $f$ irreducible in $\mathbb C[x]$, is a prime ideal of $\mathbb C[x,y]$, and then $(0) \subset (f) \subset (f, y^2-x^3+x)$, so $(f, y^2-x^3+x)$ has height 2; any other prime (maximal) ideal containing that ideal would then have height 3 - a contradiction! The Krull dimension of $\mathbb C[x,y]$ is 2 and all its maximal ideals have height 2. Thus it is a maximal ideal and all prime ideals of $\mathbb C[x,y]/(y^2-x^3+x)$ are maximal.
Do you find any mistake? Thank you!
AI: Consider the ring $R = \mathbb C[x, y] / (y^2 - x^3 + x).$ Like you have already established, $R$ is an integral domain. We claim that $\dim R = 1$ so that every nonzero prime ideal of $R$ is maximal. Considering that $S = \mathbb C[x, y]$ is a finitely generated $\mathbb C$-algebra, it follows that $\operatorname{height} I + \dim(S/I) = \dim S$ for every ideal $I$ of $S.$ Consequently, it suffices to show that $\operatorname{height}(y^2 - x^3 + x) = 1.$
By Krull's Height Theorem, we have that $\operatorname{height}(y^2 - x^3 + x) \leq 1.$ On the other hand, as $S$ is a domain, $0$ is the unique minimal prime ideal of $S,$ i.e., we have that $\operatorname{height}(y^2 - x^3 + x) \geq 1.$
|
H: Absolute convergence of a series with integral inside
I'm having trouble deciding the absolute convergence of this series. None of the common tests seem to work, and so far I couldn't find any function to compare it to:
$\sum_{n=1}^\infty (-1)^{n} \int_{n}^{n+1}\frac{e^{-x}}{x}dx$
I would appreciate any suggestions.
AI: You have$$\sum_{n=1}^\infty\int_n^{n+1}\frac{e^{-x}}x\,\mathrm dx=\int_1^\infty\frac{e^{-x}}x\,\mathrm dx<\int_1^\infty e^{-x}\,\mathrm dx$$and this last integral converges.
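Numerically (a quick check, assuming scipy is available), the comparison reads:

from scipy.special import exp1
import math

print(exp1(1))       # integral of e**(-x)/x over [1, oo), about 0.2194
print(math.exp(-1))  # integral of e**(-x)  over [1, oo), about 0.3679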
|
H: If continuous images of $X$ are closed in every $Y$, is $X$ a compact space?
Suppose $X$ is a topological space. We have the following criterion for compactness:
Theorem. $X$ is compact if and only if for every space $Y$, the second projection $\pi_2: X\times Y \to Y$ is a closed map.
This property is known as being universally closed, and also plays an important role in algebraic geometry (in the definition of a proper morphism). The proof of the result above can be found on this MSE thread.
My question is whether we can strengthen this result by only asking that the graphs of continuous functions have closed image in $Y$. From now on, we will consider only Hausdorff topological spaces.
Question. Is it true that a Hausdorff space $X$ is compact if and only if every continuous map $f: X\to Y$ into a Hausdorff space $Y$ has a closed image $f(X)$ in $Y$?
More context for the question: let $X$ and $Y$ be any two Hausdorff spaces. For any continuous function $f: X\to Y$ we can consider its graph $\Gamma(f) = \{(x, y)\in X\times Y: y=f(x)\}$. Note that $\Gamma(f)$ is a closed subset of $X\times Y$ (see this MSE thread for proof). If $X$ were compact, then we know that the image of $\Gamma(f)$ under the second projection map $X\times Y\to Y$ would be closed. Note that the image of $\Gamma(f)$ is precisely the image $f(X)=\{f(x)\in Y: x\in X\}$, and so $f(X)$ would be closed in $Y$. This shows that the forward implication is true (one can show this implication in a more direct way). It makes sense to ask whether the converse also holds.
AI: A Hausdorff space $X$ is H-closed if, for every Hausdorff space $Y$ and a topological embedding $f:X\to Y$, the image $f(X)$ is closed.
Lemma. A Hausdorff space $X$ is H-closed if and only if for every Hausdorff space $Y$ and a continuous map $f:X\to Y$, the image $f(X)$ is closed.
Lemma. The topological space $[0,1]$, with the smallest topology containing both the standard one and the set $\Bbb{Q}\cap [0,1]$, is H-closed.
The latter space is not compact, since it contains $[0,1]\setminus\Bbb{Q}$ (with the standard topology) as a closed subset, while $[0,1]\setminus\Bbb{Q}$ is not compact (with the standard topology).
|
H: Evaluating an Infinite Limit that Wolfram doesn't like!
Evaluate
$$\lim _{n \rightarrow \infty} \ln (n +1) n(n+1)^{-n/(n+1)}- \ln (n)n^{1/n}. $$
According to Wolfram, this is equivalent to $0$, yet everything I've tried (like log-exponent) doesn't lead me to the answer. Could someone show why this is true? And why doesn't Wolfram have a step-by-step for this?
AI: Asymptotically expand the individual terms:
$$\underbrace{\ln (n +1)}_{\sim \log(n) + \frac{1}{n} + \mathcal O(\frac{1}{n^2})} \underbrace{n(n+1)^{-n/(n+1)}}_{\sim 1+\frac{\log(n)-1}{n} + \mathcal O(\frac{1}{n^2})} - \ln (n)\underbrace{n^{1/n}}_{\sim 1 + \frac{\log(n)}{n} + \mathcal O(\frac{1}{n^2})}$$
Then in total you get
$$\begin{aligned}
&\sim \left(\log(n) + \tfrac{1}{n} + \mathcal O(\tfrac{1}{n^2})\right)\left(1+\tfrac{\log(n)-1}{n} + \mathcal O(\tfrac{1}{n^2})\right) - \log(n)\left(1 + \tfrac{\log(n)}{n} + \mathcal O(\tfrac{1}{n^2})\right)
\\&\sim \log(n) + \tfrac{\log(n)^2-\log(n)}{n} + \frac{1}{n}+\tfrac{\log(n)-1}{n^2} - \log(n) -\tfrac{\log(n)^2}{n} + \mathcal O(\tfrac{1}{n^2})
\\&\sim -\tfrac{\log(n)-1}{n} + \tfrac{\log(n)-1}{n^2} + \mathcal O(\tfrac{1}{n^2})
\end{aligned}$$
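so the expression tends to $0$ like $-\log(n)/n$, in agreement with Wolfram. A quick numerical check (a sketch, assuming mpmath is available):

from mpmath import mp, mpf, log

mp.dps = 30
for n in (10, 100, 10000, 10**6):
    n = mpf(n)
    print(log(n + 1)*n*(n + 1)**(-n/(n + 1)) - log(n)*n**(1/n))
# the printed values shrink toward 0, roughly like -log(n)/n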
|
H: the local extrema and the saddle point - i could not find the critical points
$$y=(e^y)-(ye^x)$$
I want to find the local extrema and the saddle points. What should I do?
I could not find the critical points.
What can I do?
AI: What one can do in a situation like this is to try and prove that no critical points exist, thus ending the search to find them. I proceed in this direction as follows:
We may differentiate the equation
$y = e^y - ye^x \tag 1$
with respect to $x$, to obtain
$y' = e^y y' - y'e^x - ye^x, \tag 2$
whence
$y' - e^yy' + e^x y' = -ye^x, \tag 3$
or
$(1 - e^y + e^x)y' = -ye^x, \tag 4$
from which we may isolate $y'$:
$y' = -\dfrac{ye^x}{1 - e^y + e^x}; \tag 5$
it follows from this equation that
$y' = 0 \Longleftrightarrow ye^x = 0; \tag 6$
since
$\forall x \in \Bbb R, \; e^x \ne 0, \tag 7$
we see that
$y' = 0 \Longleftrightarrow y = 0; \tag 8$
however, setting $y = 0$ in (1) yields
$0 = e^0 - 0(e^x) = e^0 = 1, \tag 9$
a contradiction from which we conclude that
$y'(x) \ne 0,\; \forall x \in \Bbb R; \tag{10}$
thus the function $y$ implicitly defined by (1) has no critical points, making moot the question of the existence of extrema or inflection (saddle) points.
|
H: Is a Riemann integral of a real-valued function a number or a function?
For example, if we define $F(x)=\int^x_a f(t)dt$, where $f$ is Riemann integrable, then $F(x)$ is a function. Or, for a real-valued integrable function $f(x, y)$ of two variables, if $G(x)=\int^b_a f(x,y)dy$, then $G(x)$ is a function. But is $\int_a^b f$ just a number rather than a function?
AI: The Riemann integral is a number.
If $f : [a,b]\to\mathbb{R}$ is a Riemann integrable function then, by definition, the Riemann integral of $f$ is given by
$$ \int_{a}^{b} f(x)\,\mathrm{d}x
= \lim_{\delta\to 0} \sum_{n=1}^{N} f(x_n^*) (x_{n}-x_{n-1}),
$$
where $a = x_0 < x_1 < \dotsb < x_N = b$ is any partition of $[a,b]$ with $x_n - x_{n-1} < \delta$ for all $n$, and $x_n^* \in [x_{n-1}, x_n]$.
For any partition of $[a,b]$, the sum on the right will be a real number (since $f(x_n^*)$ and $x_n-x_{n-1}$ are real for every $n$). Thus the term on the right is the limit of a sequence of real numbers. The limit of a sequence of real numbers is, again, a real number, hence the Riemann integral is a real number. Since the variable is not really important here, the above notation will often be abbreviated to
$$ \int_{a}^{b} f, $$
but this notation indicates the same number discussed above.
That being said, there is something else going on, which might explain your confusion. Roughly speaking, the Fundamental Theorem of Calculus tells us that if there is a function $F$ with derivative $f$, then
$$ F(x) = \int_{a}^{x} f(t)\,\mathrm{d}t, $$
where $a$ is an arbitrarily chosen constant in the domain of $F$, and $x$ is any element in the domain of $F$. Note that, even here, the Riemann integral is a real number: first, we fix some point $x$ in the domain of $F$, and then we determine the value of $F$ at that point (that is, we determine $F(x)$) by evaluating a Riemann integral. This process can be used to define a function—it gives us a way of evaluating $F$ at every point in its domain—but the integral itself is not a function.
Finally, there is one other notational wrinkle: if the derivative of $F$ is $f$, then we say that $F$ is an antiderivative of $f$.[1] The Fundamental Theorem of Calculus could be restated as "If $F$ is an antiderivative of $f$, then
$$ F(x) = \int_{a}^{x} f(t)\,\mathrm{d}t, $$
where $a$ and $x$ are as above." Because of this link between derivatives, antiderivatives, and (Riemann) integrals, it is common to write
$$
F = \int f
$$
in order to indicate that $F$ is an antiderivative of $f$. In this notation, the "integrals" have no limits, and $\int f$ doesn't really refer to an integral at all (though it is often called the "indefinite integral"). In this case, it would be fair to say that $\int f$ is a function,[2] but $\int f$ no longer denotes a Riemann integral.
[1] Note the use of the indefinite article: $F$ is an antiderivative, not the antiderivative. This is important, because if $F'(x) = f(x)$ for all appropriate $x$, then
$$ \frac{\mathrm{d}}{\mathrm{d}x} \bigl(F(x) + C\bigr) = f(x) $$
for any constant $C$. Hence antiderivatives are not unique; both $x \mapsto F(x)$ and $x\mapsto F(x) + C$ are antiderivatives of $f$.
[2] It may also be worth noting that it is a little bit of a lie to say that
$$ \int f $$
is a function—in reality, it is probably better to think of it as an equivalence class of functions, which are equivalent up to addition of a constant. But this distinction is typically irrelevant in elementary courses, and is easily dealt with in more advanced settings.
|
H: Steps for computing Tor$(\mathbb{Z}, \mathbb{Z}\times\mathbb{Z})$
I'm reviewing algebraic topology, in particular the Kunneth Formula. I can't find online or in my book (by Hatcher) an explanation for how to calculate $\mbox{Tor}(G,H)$ for any two groups. My understanding is that Tor measures the failure of a ses to be exact. Could you please tell me the steps for calculating Tor in general?
I'm piecing together tidbits from my notes and other questions on Stack Exchange. Here's what I could come up with for Tor$(\mathbb{Z}, \mathbb{Z}\times\mathbb{Z})$.
First take a free resolution of the first group, $\mathbb{Z}$ such as $0 \rightarrow 0 \rightarrow \mathbb{Z} \rightarrow \mathbb{Z} \rightarrow 0$, since the integers form a free group without relations. Then tensor each group over integers and ditch the group on the left (since tensoring is right exact) to get $? \rightarrow 0 \rightarrow \mathbb{Z} \rightarrow \mathbb{Z} \rightarrow 0$. Now find what $?$ must be in order for the sequence to be exact. In this case, $?=0$ so Tor$(\mathbb{Z}, \mathbb{Z}\times\mathbb{Z})=0$.
AI: Consider a commutative ring $R$ and some $R$-modules $M$ and $N.$ One can compute the $R$-modules $\operatorname{Tor}_i^R(M, N)$ in essentially two ways.
1.) Begin with a projective resolution of $N,$ e.g., $P_\bullet : \cdots \to P_2 \to P_1 \to P_0 \to N \to 0;$ then, apply the functor $M \otimes_R -$ to this to obtain a chain complex $$T_\bullet : \cdots \to M \otimes_R P_2 \to M \otimes_R P_1 \to M \otimes_R P_0 \to 0.$$ We define $\operatorname{Tor}_i^R(M, N)$ as the homology of this chain complex, i.e., $\operatorname{Tor}_i^R(M, N) = H_i(T_\bullet).$
2.) Begin with a short exact sequence of $R$-modules $0 \to K \to N \to I \to 0.$ By applying the right-exact functor $M \otimes_R -$ to this exact sequence, we obtain a long exact sequence of Tor, i.e., $$\begin{align*} \cdots \to \operatorname{Tor}_1^R(M, K) \to \operatorname{Tor}_1^R(M, N) &\to \operatorname{Tor}_1^R(M, I) \\ \\ &\to \operatorname{Tor}_0^R(M, K) \to \operatorname{Tor}_0^R(M, N) \to \operatorname{Tor}_0^R(M, I) \to 0. \end{align*}$$ One can prove that $\operatorname{Tor}_0^R(M, -) = M \otimes_R -,$ so this long exact sequence measures the failure of $M \otimes_R -$ to be left-exact. Under some mild assumptions, in this setting, one could say something about the vanishing of the higher Tor, or one could explicitly compute $\operatorname{Tor}_i^R(M, N).$
Here are just a few facts about $\operatorname{Tor}_i^R(M, N)$ that could come in handy for computations.
We have that $\operatorname{Tor}_i^R(M, N) \cong \operatorname{Tor}_i^R(N, M),$ so always compute the easier of the two.
If $R$ is Noetherian and $M$ and $N$ are finitely generated, then $\operatorname{Tor}_i^R(M, N)$ is finitely generated.
Given that $M$ is a flat $R$-module, we have that $\operatorname{Tor}_i^R(M, N) = 0$ for all integers $i \geq 1.$ Conversely, if $\operatorname{Tor}_1^R(M, N) = 0$ for all $R$-modules $N,$ then $M$ is flat as an $R$-module.
Given a commutative ring $R$ with ideals $I$ and $J,$ we have that $\operatorname{Tor}_1^R(R/I, R/J) \cong (I \cap J) / IJ.$
Given an $R$-regular sequence $I = (x_1, \dots, x_n),$ we have that $\operatorname{Tor}_n^R(R/I, M) \cong (0 :_M I).$
Tor commutes with direct sums, direct limits, and localization.
|
H: Find the average value of $x^2 - y^2 + 2y$ over the circle $|z - 5 + 2i| = 3$.
Find the average value of $x^2 - y^2 + 2y$ over the circle $|z - 5 + 2i| = 3$.
Could someone please explain how to do this. I keep getting an answer of 17, but my professor says that is incorrect.
AI: First, recognize that the circle has center at $5-2i$ and radius $3$. So you can parametrize it using $x=5+3\cos(t)$ and $y=-2+3\sin(t)$.
So you are now asking for the average value of
$$
\begin{align}
&(5+3\cos(t))^2-(-2+3\sin(t))^2+2(-2+3\sin(t)) \\
&=17+30\cos(t)+18\sin(t)+9\cos^2(t)-9\sin^2(t)
\end{align}
$$
where $t$ runs over $[0,2\pi]$. Can you take it from there? Note that $\sin(t)$ and $\cos(t)$ average to $0$, and whatever $\sin^2(t)$ averages out to, it matches what $\cos^2(t)$ averages out to.
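If you want to double-check the final average numerically, here is a quick sketch (assuming numpy is available; the average is taken with respect to the angle parameter $t$):

import numpy as np

t = np.linspace(0, 2*np.pi, 200001)
x = 5 + 3*np.cos(t)
y = -2 + 3*np.sin(t)
print((x**2 - y**2 + 2*y)[:-1].mean())  # close to 17.0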
|
H: Show that these two diffeomorphisms cannot exist simultaneously
Let $d\in\mathbb N$, $x\in M\subseteq\mathbb R^d$ and $\psi^{(i)}:\Omega_i\to\psi^{(i)}(\Omega_i)$ be a diffeomorphism with $x\in\Omega_i$, $$\psi^{(1)}(M\cap\Omega_1)=\psi^{(1)}(\Omega_1)\cap(\mathbb R^k\times\{0\})\tag1,$$ $$\psi^{(2)}(M\cap\Omega_2)=\psi^{(2)}(\Omega_2)\cap(\mathbb H^k\times\{0\})\tag2,$$ where $\mathbb H^k:=\{u\in\mathbb R^k:u_k\ge0\}$, and $\psi^{(2)}_k(x)=0$.
I want to conclude that both $\psi^{(i)}$ cannot exist simultaneously.
Let $\Omega:=\Omega_1\cap\Omega_2$. One argument that I found started with observing that $\psi^{(1)}(M\cap\Omega)$ is open (in $\mathbb R^d$). But I don't get that. Why is that necessarily the case? By definition of a diffeomorphism, all we know should be that $\Omega_i$ and $\psi^{(i)}(\Omega_i)$ are open.
AI: This has been asked and answered before on MSE.
Why can there not be a diffeomorphism $\phi$ from an open neighborhood of a boundary point—say the origin—in $\Bbb H^k$ to an open neighborhood of a point in $\Bbb R^k$? View $\Bbb H^k$ as a subset of $\Bbb R^k$, and suppose $\phi(0)=a\in\Bbb R^k$. By the inverse function theorem, $\phi^{-1}$ maps some open neighborhood $U$ of $a$ onto an open neighborhood of $0\in\Bbb R^k$. (This follows from the fact that $d\phi^{-1}(a)$ is nonsingular.) So its image cannot be contained in $\Bbb H^k$.
|
H: Show a locally bounded Lipschitz function space is compact for sup metric
Just show $\Omega$ = {$f\in\text{Lip}(\alpha,M):|f(u)|\leq M$} is totally bounded and complete.
AI: Just show $\Omega$ = {$f\in\text{Lip}(\alpha,M):|f(u)|\leq M$} is totally bounded and complete.
Note that $\Omega$ is equicontinuous. Given any $\epsilon>0$ and any $x\in S$, there exists a open neighborhood $B(x,\delta,d)=$ {$y\in S:d(x,y)<\delta$} of $x$ such that $|f(x)-f(y)|\leq Md(x,y)^\alpha<M\delta^\alpha<\epsilon$ for all $f\in\Omega\subset\text{Lip}(\alpha,M)$. Just let $\delta=(\epsilon/M)^{1/\alpha}/2$.
Since $\Omega$ is equicontinuous, every $f\in\Omega$ is continuous. Since the domain $S$ is compact, by the extreme value theorem, each $f\in\Omega$ is bounded. Thus $\Omega$ is uniformly bounded. By the Arzela-Ascoli Theorem, $\Omega$ is totally bounded.
$\Omega$ is complete by the following theorem.
|
H: Converting function containing summation into function without it.
Disclaimer: terminology and syntax may be incorrect as I do not use them often, please excuse the errors.
Function
The function I am trying to convert:
$ f(a,b,c) = \displaystyle\sum_{n=0}^{a} \left( \frac{b}{c} \right)^n$
$ a \ge 0$
$ b,c > 0$
$ c > b $
$ b $ and $ c $ are listed as separate parameters as they have to be interchangeable in a possible solution.
Limitations & Thoughts
This function has to be implemented programmatically. But due to system structure limitation this implementation can't use any form of loop or recursion. Thus the summation in the formula can't be implemented.
Testing the function with different value combinations reveals what looks to me like a logarithmic curve.
How can I convert the function above correctly?
AI: That's not a logarithm. The sum of this geometric progression is famously $(1-(b/c)^{a+1})/(1-b/c)$. You can prove this by induction on $a$.
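In code, the closed form gives the loop-free implementation you asked for; a sketch in Python (the constraints $0<b<c$ guarantee $b/c\neq 1$, so the denominator is safe):

def f(a, b, c):
    # sum of (b/c)**n for n = 0, 1, ..., a, without any loop
    r = b / c
    return (1 - r**(a + 1)) / (1 - r)

print(f(3, 1, 2))  # 1 + 1/2 + 1/4 + 1/8 = 1.875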
|
H: Solving Modular System with 2 different Moduli
Is there any way to solve for $a$ and $b$ in:
$$ a*b \equiv s_0 \mod r_0 $$
$$ a - b - a*b \equiv s_1 \mod ( r_0 - 1) $$
I have the values of $r_0$, $s_0$, $s_1$ and would like to find values for $a$ and $b$. The $*$ is just ordinary integer multiplication.
AI: Write the second equation as
$$(a+1)(b-1) = ab + b - a - 1 \equiv -s_1 - 1 \mod (r_0 - 1) $$
Take any $x,y,z,w$ such that
$$x y \equiv s_0 \mod r_0$$
$$z w \equiv -s_1 - 1 \mod (r_0-1)$$.
Since $r_0$ and $r_0-1$ are coprime, we can find $a$ and $b$ such that:
$$a \equiv x \mod r_0$$
$$b \equiv y \mod r_0$$
$$a \equiv z-1 \mod (r_0 - 1)$$
$$b \equiv w+1 \mod (r_0 - 1)$$.
Then these satisfy your equations.
EDIT:
For example, if $r_0 = 5$, $s_0 = 1$ and $s_1 = 2$,
you want $x y \equiv 1 \mod 5$ and $zw \equiv -3 \equiv 1 \mod 4$.
One possibility (there are many) is $x \equiv 2 \mod 5$, $y \equiv 3 \mod 5$, $z \equiv 3 \mod 4$, $w \equiv 3 \mod 4$. Then
using the Chinese Remainder Theorem, $a \equiv 2 \mod 20$ and $b \equiv 8 \mod 20$. And indeed
$ab \equiv 2 \cdot 8 \equiv 1 \mod 5$ and $a-b-ab \equiv 2 - 8 - 2\cdot 8 \equiv 2 \mod 4$.
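The worked example can be reproduced with a CRT routine; a sketch assuming sympy is available:

from sympy.ntheory.modular import crt

r0 = 5
a = crt([r0, r0 - 1], [2, 3 - 1])[0]  # a = x mod 5, a = z - 1 mod 4
b = crt([r0, r0 - 1], [3, 3 + 1])[0]  # b = y mod 5, b = w + 1 mod 4
print(a, b)                      # 2 8
print(a*b % r0)                  # 1 = s0
print((a - b - a*b) % (r0 - 1))  # 2 = s1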
|
H: Let $G'\triangleleft G$ be a normal subgroup and $K<G$ a subgroup
Let $G'\triangleleft G$ be a normal subgroup and $K<G$ a subgroup. Is there any relation between the normalizers $N_{G'}(G'\cap K)$ and $N_{G}(K)$?
I'm working with topological groups and I would like to know if $\dim N_{G'}(G'\cap K)\leq \dim N_G(K)$.
AI: Short answer: $N_G(K)$ can be finite while $N_{G'}(G'\cap K)$ can be infinite.
Long answer: If $g\in G$ normalizes $K$ then, since $g$ certainly normalizes $G'$, we see that $g$ normalizes their intersection, $G'\cap K$. Thus
$$N_G(G'\cap K)\geq N_G(K).$$
You want something like the other way round. So we have to analyse more.
Notice that if $N\lhd G$ and $N\leq H\leq G$, then $N_G(H)/N=N_{G/N}(H/N)$. Thus to understand $N_G(K)$ we may replace $G$ by $N_G(G'\cap K)$, work modulo $G'\cap K$, and replace $G'$ by some subgroup $X$ such that $G/X$ is abelian. (but $X$ is no longer $G'$). Thus we assume that $X\cap K=1$. Thus $K$ is abelian, and acts on $X$ in the semidirect product $XK$.
But now this gives us a good idea of what to look for. So, in the infinite dihedral group, for example, $G'$ is infinite cyclic, $K$ is the subgroup of order $2$ inverting the cyclic subgroup, and we see that $N_G(K)=K$, while $N_{G'}(G'\cap K)=G'$ is infinite.
For a topological group as an example, choose any simple topological group $X$ and any outer automorphism of order $2$ of $X$, and let $G$ be their semidirect product. Then $G'=X$. In $G$, the normalizer of the outer automorphism $K$ is just its centralizer on $X$, whereas the normalizer of $G'\cap K=1$ is all of $X$.
|
H: The equation $x^2-y^2 =a^2$ changes to the form $xy=c^2$ if the coordinate axes are rotated through an angle (keeping the origin fixed)
The equation $x^2-y^2 =a^2$ changes to the form $xy=c^2$ if the coordinate axes are rotated
(keeping the origin fixed) through the angle
a) $ \frac \pi 2 $
b) $ - \frac \pi 2 $
c) $ \frac \pi 4 $
d) $ \frac \pi 3 $
AI: If we put
$$x=\frac{X+Y}{\sqrt{2}}$$
and
$$y=\frac{X-Y}{\sqrt{2}}$$
then
$$x^2-y^2=2XY=a^2$$
or
$$XY=(\frac{a}{\sqrt{2}})^2=c^2$$
But $$\cos\left(\frac{\pi}{4}\right)=\frac{1}{\sqrt{2}}\,,$$
so the right answer is (c).
|
H: If $\forall n \in \mathbb Z_{\ge0} \ $ and $\forall x \in \mathbb R$, we know that $\big|f^{(n)}(x)\big|\le \big|p(x)\big|$, then $f=0$.
If $p(x)$ is an odd degree polynomial such that $\forall n \in \mathbb Z_{\geq 0}$ and $\forall x \in \mathbb R$ we know that $$\big|f^{(n)}(x)\big|\le \big|p(x)\big|\,.$$
I need to show that $\forall x \in \mathbb R \ $ $f(x)=0$.
My thoughts till now: I tried to use Taylor polynomial but it didn't help. and I really need help.
Thanks in advance.
AI: Let $M_R=\max_{|x| \le R}|p(x)|$. We have $\frac{|f^{(n)}(x)|}{n!} \le \frac{M_R}{n!}, |x| \le R$.
This immediately implies that $f$ is analytic on $(-R,R)$ and its Taylor series there at $0$ has a radius of convergence at least $R$.
But $f^{(n)}(0)=0$ for all $n$, since $p$ odd gives $p(0)=0$ and hence $|f^{(n)}(0)|\le|p(0)|=0$; therefore $f$ is identically zero on $(-R,R)$. As $R>0$ is arbitrary, we are done!
(If we are only given that $p$ has odd degree, we apply the above with easy modifications, centering the Taylor series at a real zero of $p$.)
|
H: Struggling To Follow How to Convert expression to Logarithmic Form | Binary Search Problem
I am reading "Problem Solving with Algorithms and Data Structures using Python" and the author is currently explaining the relation between comparisons and the Approximate Number of Items Left in an Ordered List.
I am struggling to perform the conversion of: $ \left(\frac{n}{2^i}\right) = 1 $ to $ i = \log n $
I put this expression into MathWay and got back $ i = \log_2(1) $ and I'm a little confused on how these results are equivalent. I'm pretty rusty with logarithms so if you could help explain this conversion to me I'd greatly appreciate it.
Please see this image from the book with a full explanation of the problem being referred to.
Image From Book
AI: So you have
$${\frac{n}{2^i}=1}$$
Multiplying both sides by ${2^i}$, one gets
$${\Rightarrow \frac{n}{2^i}\times 2^i = 1\times 2^i}$$
You can cancel the ${2^i}$'s on the left hand side, giving us
$${n=2^i}$$
Now - we want some way of turning ${2^i}$ into just ${i}$. How do we do this? Well, applying ${\log}$ base ${2}$ to both sides gives us
$${\Rightarrow \log_2(n) = \log_2(2^{i})}$$
The right hand side, by the definition of the Logarithm is saying "what number do I raise ${2}$ by to get ${2^i}$?" Clearly, the answer is $i$. And so
$${\Rightarrow i=\log_2(n)}$$
Edit: So technically, the Logarithm in the book should also be base ${2}$; however, I think he may have left it out for ${2}$ potential reasons:
(1) As you say, in many places ${\log}$ without a base specified usually refers to base ${10}$; However, Mathematicians also use ${\log}$ without a specified base to mean base ${e}$ (where $e$ is Euler's number - don't worry if you don't know what this is) - it could be that the author is just using ${\log}$ without a base to mean base $2$ (unlikely, I think - but possible).
(2) I noticed he talked about big $O$ notation. In big $O$ notation, it doesn't really matter whether it's ${O(\log_2(n))}$, ${O(\log_{10}(n))}$ etc etc. One property of the Logarithm is that
$${\log_{a}(n) = k\times \log_{b}(n)}$$
That is, the log function in one base can be written as a scalar multiple (just some number) multiplied by the log of another base. And in big $O$ notation
$${k\times O(f(n)) = O(f(n))}$$
So it doesn't really matter which base you put (provided the base is, of course, positive). Ultimately, the choice is arbitrary in big $O$ notation, and so many times people just write ${\log}$ with the base omitted. I guess in some sense, you could say there is only really one log function, and the base only really affects what multiple of it you take.
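To tie this back to binary search: the quantity $i$ really is the number of halvings, which is what the logarithm counts. A tiny sketch:

import math

def halvings(n):
    # how many times n items can be halved before one is left,
    # i.e. the i solving n / 2**i = 1
    i = 0
    while n > 1:
        n //= 2
        i += 1
    return i

print(halvings(1024), math.log2(1024))  # 10 10.0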
|
H: Role of Tautologies in logic
I apologize if the title is inadequate. I am reading Loomis and Sternberg's Advanced Calculus textbook. After introducing the notation of a quantification and defining a tautology, they state:
Indeed, any valid principle of reasoning that does not involve quantifiers must be expressed by a tautologous form.
I cannot figure out what they mean by this. Surely I can reason in terms of implications with free, unbounded variables. Other than this, I cannot think of a good counterexample, but I must be misunderstanding them.
AI: This is dealt with in Shoenfield's Mathematical Logic, Chapter 3 (especially the first section and the first exercise), though be warned that that book is terse! I don't wish here to prove the relevant theorem, but just state the definitions in order to clarify the situation. To simplify, let's assume the language has only existential quantifiers (the universal quantifier can be defined in terms of existential and negation in the usual way). I'm assuming you know the definition of atomic formula, etc., and that the background language is fixed.
Before starting, here's the idea behind the whole thing. We usually talk of tautologies only with respect to propositional languages. But there is a sense in which it can be extended to first-order languages in a natural way, so that, e.g., $Px \vee \neg Px$ or $\exists x Px \vee \neg \exists x Px$ can be said to be tautologies. The definitions below show how to accomplish that; the gist of it is that you treat existential and atomic formulas as if they were propositional variables. The quoted excerpt then is claiming that if you can derive a formula without using any logical rule that is specific to the first-order calculus, that formula is a tautology. So let us see the definitions in question.
Definition 1: A formula of the form $\exists x A$ is called an instantiation of $A$.
Definition 2: A formula is called elementary if it is either an atomic formula or an instantiation.
Definition 3: A truth valuation is a mapping from the set of elementary formulas to the set of truth-values.
Fact 1: If $V$ is a truth valuation, it is possible to extend it to all formulas of the language in the usual way, i.e. by setting $V^*(\neg A) = T$ iff $V^*(A) = F$, etc. Call $V^*$ a boolean valuation for the language.
Definition 4: $B$ is a tautological consequence of $A_1, \dots, A_n$ if $V^*(B)=T$ for every boolean valuation $V^*$ such that $V^*(A_1)= \dots =V^*(A_n)=T$.
Definition 5: A formula $A$ is a tautology if it is a tautological consequence of the empty set.
So pick now your favorite first-order calculus. It will probably have rules that say how to reason with the quantifiers (e.g. existential instantiation and generalization), with identity (say, substitution axioms), and with the propositional connectives (say, introduction and elimination rules, or modus ponens, or whatever). We have the following:
Theorem: If a formula is derivable from the empty set using only propositional rules, then the formula is a tautology. (Cf. Shoenfield, Chap. 3, Exercise 1)
|
H: Prove that $a_{n}$ converges for $a_{1} \ge 2$ and $a_{n+1} = 1+ \sqrt{a_{n}-1}$ and find its limit
Prove that $a_{n}$ converges for $a_{1} \ge 2$ and $a_{n+1} = 1+ \sqrt{a_{n}-1}$ and find its limit. First of all, what I did was find the first four terms of the sequence.
$a_{2} = 1+ \sqrt{a_{1}-1} \ge 2$
$a_{3} = 1+ \sqrt{a_{2}-1} = 1+ \sqrt{\sqrt{a_{1}-1}} \ge 2$
$a_{4} = 1+ \sqrt{\sqrt{\sqrt{a_{1}-1}}} \ge 2$
Then I assume $a_{n} \ge 2$, which means that $a_{n}$ is bounded below. I need to prove that it is a decreasing sequence, and then $a_{n}$ would converge.
$$\frac{a_{n+1}}{a_{n}} = \frac{1+\sqrt{a_{n}-1}}{a_{n}} \le1$$ but I don't know how to prove that, nor how to find the limit.
Thanks in advance.
AI: Then I assume $a_n\ge 2$
Note that you would have to prove that as well. Just because the first four terms are $\ge 2$, doesn't mean that you can conclude that for all $n$. However, induction should easily help you out there.
If you assume $a_n \ge 2$, then you get
$$a_{n+1} = 1 + \sqrt{\color{blue}{a_n} - 1} \ge 1 + \sqrt{\color{blue}{2} - 1} = 2.$$
Now, since $a_1 \ge 2$, induction tells you that $a_n \ge 2$ for all $n$.
Now, showing that $a_n$ is decreasing is easy because we have
$$a_n - 1 \ge 1$$
and thus,
$$a_n - 1 \ge \sqrt{a_n - 1}.$$
Rearranging the above gives us that
$$a_n \ge 1 + \sqrt{a_n - 1} \ge a_{n+1}.$$
This finishes the proof the way you intended it.
Note that the other answer gives a simpler alternative by considering $b_n = a_n - 1$.
Once you know that the sequence converges, you can find the limit as following:
$$a_{n+1} = 1 + \sqrt{a_n - 1} \implies \lim_{n\to\infty}a_{n+1} = \lim_{n\to\infty}1 + \sqrt{a_n - 1}.$$
If $\displaystyle\lim_{n\to\infty}a_n = A$, then $\displaystyle\lim_{n\to\infty}a_{n+1} = A$ as well.
Moreover, $\sqrt{.}$ is continuous and thus, you may "take the limit inside". Thus, the above equation tells you
$$A = 1 + \sqrt{A - 1}$$
or $$A - 1 = \sqrt{A - 1}.$$
Squaring the above and solving it gives two solutions: $A = 1, 2$.
However, note that $a_n \ge 2$ for all $n$ and thus, $A \ge 2$ as well. With this, you can conclude that
$$\lim_{n\to\infty}a_n = \boxed{2}.$$
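If it helps to see the monotone convergence concretely, here is a quick numerical iteration (a sketch; any starting value $a_1 \ge 2$ works):

a = 10.0
for _ in range(50):
    a = 1 + (a - 1)**0.5
print(a)  # close to 2.0, decreasing toward the limit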
|
H: If $f(x) \leq g(x)$ is it true that $\sup_I(f(x))\leq \sup_I(g(x))$?
If $f(x) \leq g(x)$ is it true that $\sup_I(f(x))\leq \sup_I(g(x))$ ?
I haven't been able to find this anywhere.I need to prove if this holds:
$\sup_I(e^{g(x)}|x^k|) \leq e$ with $x \in (-1,1)$ and $|g(x)|\in (0,x)$ , $k \in \mathbb{N}$
My try:
Since the exponential is strictly increasing and $g(x) \leq x \leq 1$:
$e^{g(x)}|x^k|\leq e\cdot 1$, and then I use the property that I am not sure is true:
$\sup_I(e^{g(x)}|x^k|)\leq \sup_I(e)=e$
What do you think? If the property in question is true, how does one prove it?
AI: Obviously this is true. Justification: suppose $f(x)\leq g(x)$ for all $x$ in a set $E$. Set $s_f=\sup_{x\in E}f(x)$ and $s_g:=\sup_{x\in E}g(x)$. Fix $x_0\in E$. Then $f(x_0)\leq g(x_0)\leq \sup_{x\in E}g(x)=s_g$. So $f(x_0)\leq s_g$ for any $x_0$ ($x_0$ was fixed, but arbitrary). So $s_g$ is an upper bound for $\{f(x): x\in E\}$. Therefore the least upper bound of this set, which is precisely $s_f$, is at most $s_g$, i.e. $s_f\leq s_g$.
|
H: Moduli of complex numbers
In $\mathbb{C}$, a set of complex numbers, there are z and w, such that $|z|=|w|$.
How can I show that: $|z+w|^2+|z-w|^2=4|z|^2$?
I'm trying to do:
$|z+w|^2+|z-w|^2=\sqrt{(z+w)^2}+\sqrt{(z-w)^2}$
The problem is that solving the square of the binomial I cannot cut the elements inside the roots, because I cannot join them in a single root. How can I demonstrate that expression?
AI: Recall that
$${z\bar{z}=|z|^2\ \forall\ z \in \mathbb{C}}$$
So
$${|z+w|^2 = (z+w)(\overline{z+w})=(z+w)(\bar{z}+\bar{w})}$$
If you expand out and reuse that fact again
$${\Rightarrow z\bar{z}+z\bar{w}+\bar{z}w+w\bar{w}=|z|^2 + |w|^2 + z\bar{w} + \bar{z}w}$$
I won't write all the working once again, but similarly for ${|z-w|^2}$ you end up with
$${|z|^2 + |w|^2 - \bar{z}w - z\bar{w}}$$
And so adding both together, we have
$${\Rightarrow |z+w|^2 + |z-w|^2 = 2|z|^2 + 2|w|^2}$$
And since ${|w| = |z|}$,
$${=4|z|^2}$$
Note also that the expression in your question:
$${|z+w|^2 = \sqrt{(z+w)^2}}$$
is incorrect. We actually have
$${|z+w|^2 = \left(\Re(z) + \Re(w)\right)^2 + \left(\Im(z) + \Im(w)\right)^2}$$
|
H: If $\int_{-1}^1 fg = 0$ for all even functions $f$, is $g$ necessarily odd?
Suppose for a fixed continuous function $g$, all even continuous real-valued functions $f$ satisfy $\int_{-1}^1 fg = 0$, is it true that $g$ is odd on $[-1,1]$?
My intuition is telling me that this is correct, as I have not found any counterexamples. I've tried proving this by generating a contradiction by assuming $g$ is not odd and so there is a point such that $g(-x) \neq -g(x)$ on $[-1,1]$, but this doesn't yield any information that I think I can readily use to prove the claim.
Any help would be very much appreciated!
AI: For the edited question: The answer is yes. Indeed, let $f$ be an arbitrary continuous function on $[0,1]$ and extend it to a continuous even function
$$ \tilde{f}(x) = \begin{cases}
f(x), & x \geq 0; \\
f(-x), & x < 0;
\end{cases} $$
on $[-1, 1]$. (Check that $\tilde{f}$ is indeed continuous!) Then by the assumption,
$$ 0 = \int_{-1}^{1} \tilde{f}(x)g(x) \, \mathrm{d}x = \int_{0}^{1} f(x)(g(x)+g(-x)) \, \mathrm{d}x. $$
Now pick $f(x) = g(x)+g(-x)$ and note that
$$ \int_{0}^{1} (g(x)+g(-x))^2 \, \mathrm{d}x = 0. $$
Together with the continuity of $g$, this implies that $g(x)+g(-x) = 0$ for all $x \in [0, 1]$, which then implies that $g$ is odd.
|
H: Proving that $D = \{(x,y) \in \mathbb R ^2: x^2 + y^2 < 1\}$ is open
In a general topology exercise I have to prove the following:
Prove that the disk $D = \{(x,y) \in \mathbb R ^2: x^2 + y^2 < 1\}$ is open in the euclidean topology.
This reminded me of how in multi-variable calculus we approximate regions in the plane with little rectangles in order to integrate over them. But I have no idea how to approach the problem. How should I prove this? Any tips?
Edit: This exercise is in the beginning of the book, At this point the only concepts that I'm allowed to use in the proof are the following definitions and concepts:
Definition of topology on a set
Definition of basis of a topology on a set
The basis for the euclidean topology consists of the open rectangles $B=\{(x,y)\in \mathbb R ^2:a < x < b \wedge c < y < d \}$
I have not yet reached the chapter about metric spaces, so I'm not allowed to define the disk as an open ball using some metric $d$. I think that the objective of the exercise is to prove that it is open using the basis of the euclidean topology.
AI: You can use the more general fact that, if $V\subset\mathbb{R}$ is an open set and $f:\mathbb{R}^2\rightarrow\mathbb{R}$ is a continuous function, then $f^{-1}(V)=\{x\in \mathbb{R}^2:f(x)\in V\}$ is an open set in $\mathbb{R}^2$. In your case, take $f(x,y)=x^2+y^2$ and $V=(-\infty,1)$.
Edit: a more direct approach is as follows. Let $D$ be the open unit disk and $x\in D$. If we take $\delta=1-|x|>0$, where $|x|$ denotes the norm, then you can check that the open disk centered at $x$ with radius $\delta>0$ is completely contained in $D$, i.e.
$$B_{\delta}(x)\overset{\text{def}}{=}\{y\in\mathbb{R}^2:|x-y|<\delta\}\subset D.$$
Then you draw a suitable open rectangle (that's part of the basis) inside $B_{\delta}(x)$, let's call it $R_x$. Then observe that
$$D=\bigcup_{x\in D}R_x.$$
Therefore, $D$ is an union of basis sets, so $D$ is open.
|
H: Condition on $(x_n)$ equivalent to $\lim x_n \in U$
Let $(x_n)_{n=1}^\infty \subseteq \mathbb{R}$ be a Cauchy sequence of real numbers. Let $U \subseteq R$ be an open set - for simplicity, we can suppose $U$ is an open interval $(a,b)$. Is there a condition only involving the sequence $(x_n)$ that is equivalent to the limit $x = \lim_{n \to \infty} x_n$ being inside $U$?
The obvious candidate would be: "there is $N \in \mathbb{N}$ such that for all $n \geq N$, we have $x_n \in U$". While this is necessary, it is not sufficient - e.g. if $U = (0,1)$, the sequence $x_n = 1/n$ satisfies this condition, but the limit $x = 0 \notin U$.
AI: Yes. One could be "there exists $\;\epsilon>0\;$ s.t. for any
$$n\in\Bbb N\;,\;\;x_n\in(a+\epsilon,\,b-\epsilon)"$$
In fact, the above condition can be weakened to
$$\;\exists\,\epsilon>0\;\,\exists\,N\in\Bbb N\;\;s.t.\;\;\forall\,n>N\,,\,\,x_n\in(a+\epsilon,\,b-\epsilon)\;$$
|
H: If $L=\lim_{n \to \infty} \sum_{i=1}^{n-1} {\left(\frac{i}{n}\right)}^{2n}$ what is $\lfloor \frac{1}{L} \rfloor$
If $$L=\lim_{n \to \infty} \sum_{i=1}^{n-1} {\left(\frac{i}{n}\right)}^{2n}$$What is $\lfloor \frac{1}{L} \rfloor$
I really am confused. Can this be converted into a Riemann sum? If not, what do I do? The answer is $7$.
AI: The limit is
$$
L = \frac{1}{e^2-1}
$$
and so $1/L = e^2 - 1 \approx 6.4$, so the result should be 6.
To see this, note that the last element of the sum gives
$$
\left ( \frac{n-1}{n} \right )^{2n} \to e^{-2},
$$
the two last ones give
$$
\left ( \frac{n-2}{n} \right )^{2n} + \left ( \frac{n-1}{n} \right )^{2n} \to e^{-4} + e^{-2}
$$
and so on. Then
$$
e^{-2} + e^{-4} + \cdots = \frac{1}{e^2-1} = L.
$$
Of course, this is not a proof, because there is a whole bunch of inversion of limits that occur. But the argument definitely shows that you have
$$
\liminf S_n \geq L,
$$
where I denote the sum by $S_n$. But what about the limsup? To see this, you can indeed use a Riemann sum, or rather compare directly with an integral. Draw a picture to see that, for any fixed $1 \leq k \leq n$
$$
\frac1n \sum_{i=0}^{n-k} \left (\frac{i}{n} \right)^{2n} \leq \int_0^{1-(k-1)/n} x^{2n} \: \mathrm{d}x = \frac{1}{2n+1} \left ( 1- \frac{k-1}{n} \right )^{2n+1} ,
$$
so
$$
\limsup \sum_{i=0}^{n-k} \left (\frac{i}{n} \right)^{2n} \leq \frac12 e^{-2(k-1)}.
$$
You therefore get that, by splitting the sum in the $k - 1$ last elements and the rest,
$$
\limsup S_n \leq e^{-2} + e^{-4} + \cdots + e^{-2k+2} + \frac12 e^{-2k+2},
$$
so taking $k \to + \infty$ gives
$$
\limsup S_n \leq e^{-2} + e^{-4} + \cdots = \frac{1}{e^2 - 1} = L,
$$
and we are done.
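A quick numerical check agrees (a sketch; terms with small $i$ underflow harmlessly to $0$):

from math import e

def S(n):
    return sum((i/n)**(2*n) for i in range(1, n))

print(S(10**5))      # about 0.1565
print(1/(e**2 - 1))  # 0.15651764...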
|
H: Evaluating $\int_{-a}^a x^{2n+1}\mathrm{d}x$ for all non-negative integers $n$ simultaneously
My assumption would be
$$\int_{-a}^a x\ dx=0$$
Am I on the right track here? Also, for indefinite integrals
$$\int f(x)\ dx$$
would this be correct as well?
Background
My professor raised this question in his lecture and I provided the following
\begin{align}\int_{-a}^{a}\left(x^3\right)dx&= 0\end{align}
and
\begin{align}\int_{-a}^{a}\left(x^7\right)dx&= 0\end{align}
to support that odd degrees will always equal to zero. The professor stated my evaluations were correct, however, I couldn't use the fact that it works for two positive odd exponents to deduce conclusively that the result will hold for all positive odd exponents. Thus, my assumption is that
$$\int_{-a}^a x\ dx=0$$
covers all non-negative integers $n$ simultaneously. Any help in this would be appreciated!
AI: Here it is a more general result which may help you.
Consider that the function $f:\textbf{R}\to\textbf{R}$ is odd. This means that $f(-x) = -f(x)$. Thus one has that
\begin{align*}
\int_{-a}^{a}f(x)\mathrm{d}x & = \int_{-a}^{0}f(x)\mathrm{d}x + \int_{0}^{a}f(x)\mathrm{d}x\\\\
& = \int_{0}^{a}f(-x)\mathrm{d}x + \int_{0}^{a}f(x)\mathrm{d}x\\\\
& = -\int_{0}^{a}f(x)\mathrm{d}x + \int_{0}^{a}f(x)\mathrm{d}x = 0
\end{align*}
where it has been used the change of variable $u = -x$.
In particular, in your case, $f(x) = x^{2n+1}$, which is odd.
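For concreteness, sympy confirms the vanishing for a few odd powers (a sketch, assuming sympy is available):

from sympy import symbols, integrate

x, a = symbols('x a', real=True)
for k in (1, 3, 7):
    print(integrate(x**k, (x, -a, a)))  # 0 each time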
|
H: Why $i^{-3}$ equals to $i$?
What is $i^{-3}$?
Select one:
(a). 0
(b). i
(c). -1
(d). 1
(e). -i
I calculated like this:
$i^{-3}=\frac1{i^3}=\frac1{i^2\times i}=\frac1{-1\times i}=-\frac1i$
And therefore, $i^{-3}$ is $-\frac1i$.
But the correct answer is: $i$.
How does it calculate?
AI: Because $\frac{-1}{i}=\frac{i^{2}}{i}=i$
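You can confirm this with sympy, which simplifies integer powers of $i$ automatically (a quick sketch):

from sympy import I

print(I**(-3))  # I
print(-1/I)     # I, matching the expression you derived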
|
H: Primes And Quadratic Residues
Below is the question:
Let $p$ be a prime. Prove there exists an integer $1\le x\le9$ such that $x$ and $x+1$ are quadratic residues mod $p$.
Please include a proof
AI: If $2$ is a quadratic residue, then $1$ and $2$ are consecutive quadratic residues.
If $5$ is a quadratic residue, then $4$ and $5$ are consecutive quadratic residues.
But if $2$ and $5$ are not quadratic residues, then $9$ and $10$ are.
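A brute-force check over the first primes supports this (a sketch, assuming sympy is available; $0$ is counted as a square here, which handles the small primes dividing one of the candidates):

from sympy import primerange

def ok(p):
    squares = {pow(t, 2, p) for t in range(p)}  # all squares mod p, including 0
    return any(x % p in squares and (x + 1) % p in squares
               for x in range(1, 10))

print(all(ok(p) for p in primerange(2, 10000)))  # True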
|
H: A confusion about the proof: every open set in $R^1$ is the union of an at most countable collection of disjoint segments.
I have met two types of solutions about this question, one is by the partition of the equivalence relation, and I understand that, and the other is the following:
I don't understand why the disjoint sets built from an enumeration of the rational numbers must cover all the irrational numbers. Isn't it possible that, for some irrational number $r$, the two chosen open sets are $(a,r)$ and $(r,b)$, so that they cover all the rational numbers but do not contain the irrational number $r$?
Thank you in advance.
EDIT: I am sorry for leaving out the statement "open". I meant to say open set.
AI: The statement is not correct. The set of all irrational numbers is a counterexample. What the proof in the OP proves is that every open set is a disjoint union of countably many open intervals. That statement and the proof are correct.
Update 1: I looked at the proof again and I think that the objections by OP are valid. The proof is wrong. Indeed, it is so not only because we may not cover all irrational points in $U$ but also not all rational points there: it could happen that the rational point we are trying to cover is a limit point of the already covered set. In that case the "maximal interval" does not exist.
Update 2: It looks like the answers and comments here contain a correct proof:
Any open subset of $\Bbb R$ is a countable union of disjoint open intervals
Update 3: The proof in OP can be saved if one takes each time the maximal interval $(p_n-a, p_n +b)$ contained in $U$. That interval cannot intersect with any of the previously chosen intervals because those were maximal too. This way we cover all rational points. Then if we take an irrational point, its maximal interval must contain one of the $p_n$, so it must belong to the maximal interval of the $p_n$ and so that point is also covered.
|
H: The solutions of $y'' = 2$ don't form a subspace
"The solutions of $y'' = 2$ don't form a subspace - the right side $b=2$
is not zero."
This is a quote on page 172 of Intro to Linear Algebra by Strang. What does this mean? Can someone explain how a differential equation relates to the idea of subspaces, and then why this statement is correct (e.g. what characteristics of subspaces does $y''=2$ violate).
AI: $y=x^2$ and $y=x^2+1$ are solutions of $y''=2$, but their sum is not.
|
H: Help finding the length of a line inside a rectangle.
I need the formula to calculate the length of the red line in the image attached. I always have the point that starts the line and the angle is always 45° but I don't know how to calculate the length.
check the shape here
I apologize that my description is not enough, I just don't know how to explain it better. That is why I draw the shape.
Thank you.
AI: Method-1(Pythagoras): $H^2 = B^2 + P^2$
where H(Hypotenuse) is the side opposite to the right angle and B and P are the other two sides.
We have red line as hypotenuse. As the angle is 45 so other is also 45 and both sides are equal(sides opposite to same angle are equal).
So putting them in formulae we get:
$$H^2=2^2+2^2$$
$$H=2\sqrt{2}$$
Method 2 (Trigonometry): We can use the $\cos$ function.
We have $\cos 45^\circ = \frac{1}{\sqrt{2}}$.
So $$\frac 2H = \frac{1}{\sqrt{2}}$$
$$H=2\sqrt{2}$$
If you don't know about Trigonometric functions please refer:https://www.mathsisfun.com/sine-cosine-tangent.html
|
H: Questions about contraction
Let $X$ be a metric space and $f:X\to X$. What's true and what's false?
a. If $f$ is bijective and has a unique fixed point, then $f^{-1}:X \to X$ also has a unique fixed point.
b. If $f$ is bijective, then $f$ is a contraction iff $f^{-1}$ is a contraction.
c. $f:\mathbb {R}^2\to \mathbb {R}^2$, $[f]=\begin {pmatrix}\frac {1}{2}&1\\0&\frac {1}{2}\end {pmatrix}$ is a contraction.
I think that a and b are true and I don't really understand c. Am I right about a and b and could someone explain c please?
AI: a) is true
b) is not true: take $f:\mathbb{R}\to \mathbb{R}$, $x\mapsto x/2$. It is a contraction, but $f^{-1}:x\to 2x$ is an expansion.
c) is not true because the image of the vector $(1,1)$ of length $\sqrt{2}$ is $(1/2, 3/2)$ of bigger length $\sqrt{10}/2>\sqrt{2}$.
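For c), a quick numerical check of that computation (a sketch, assuming numpy is available):

import numpy as np

A = np.array([[0.5, 1.0], [0.0, 0.5]])
v = np.array([1.0, 1.0])
print(np.linalg.norm(A @ v), np.linalg.norm(v))  # 1.5811... > 1.4142...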
|
H: Questions about distributing $k$ objects to $n$ recipients
Rule: Distributions of $k$ objects to $n$ recipients can be done in $n^k$ ways with no restrictions and $n!$ ways when each recipient receives exactly one object.
Obvious Examples:
In how many ways can we distribute $70$ computers to $6$ schools s.t. no two schools share a computer? The schools are recipients so for each computer we choose one of six schools which can be done in $6^{70}$ ways by product rule.
In how many ways can we permute the word "house"? Each word has five places like this: __ __ __ __ __. And each place can receive any one of h, o, u, s, e. So for each letter we choose a place in one of $5, 4, 3, 2, 1$ ways so that there are $5!$ permutations by product rule.
Confusing example:
How many PINs of length four are there if each symbol in a PIN is chosen from the $26$ uppercase letters in the Roman alphabet and the ten digits?
This below is how I thought about the confusing example:
Let __ __ __ __ represent an arbitrary PIN where __ is a recipient. Then by the rule above, for every symbol we choose one of four places. But the problem is that after the fourth symbol we run out of places for symbols. Also, the answer given for this problem is $36^4$ which means the symbols are the recipients, not the places in a PIN.
My questions:
In problems like those above, how do we know which objects are recipients and which ones are receivables(receive-ees?) ? Also, in what way is the confusing example above different from the other two problems? Thanks.
AI: One recipient can receive more than one object, but one object can't go to more than one recipient. Similarly, a character can be placed at more than one position, hence it is a recipient, while a position can't hold more than one character, hence it is analogous to an object.
Now, each of the $4$ objects (in our case, positions) can go to any of the $36$ recipients (in our case, characters) . Thus the total no. of ways of such distribution is $36^4$.
|
H: Why is the additional time added to faster expression in this uniform motion problem?
In the following and similar uniform motion problems, when equating unequal times, the additional time seems to be added to the faster object in order to solve the problem correctly.
However, to me this seems unintuitive, as $r = \frac Dt$ implies a faster object should take less time. Nevertheless, adding the time in this way seems to be the only way to solve the problem.
I do not understand why this is the case.
The problem is as follows:
A man cycles downhill 12 miles from his home to the beach and then
later cycles back to his home. The man's speed returning home uphill
is 8 mph slower than his downhill speed. He also takes 2 hours longer
to return home than it took him to get to the beach. What is the man's
speed cycling downhill?
(Question sourced from OpenStax; Intermediate Algebra, pp. 715-716, https://openstax.org/details/books/intermediate-algebra)
I believe I understand most of the principles to solve this problem.
We know the distance, we know the distance is the same in both directions, but we don't know the rate or time of travel in either direction.
We choose to equate the time of travel because this will provide an equation in one variable; equating distance would result in an equation in two variables.
We therefore equate the two expressions of time based on $t = \frac Dr$.
Let $r$ be the rate of travel downhill.
The expression of time for the man travelling downhill would be $\frac{12}{r}$ and the expression of time for travel uphill would be $\frac{12}{r-8}$.
To equate the two expressions, we need to include the fact that the travel uphill was two hours longer than the travel downhill, i.e. the uphill travel is equal to the downhill travel plus two.
Hence, we have for the expression of time for the uphill travel: $\frac{12}{r-8} + 2$.
The equation I end up with is therefore: $$\frac{12}{r} = \frac{12}{r-8} + 2$$.
However, solving this leads me to a quadratic equation that I don't think can be factored: $$2(r^2-8r+48)=0.$$
The correct equation is: $$\frac{12}{r} + 2 = \frac{12}{r-8}$$
But here the extra 2 hours are added to the expression for the downhill travel.
I do not understand why the extra 2 hours are being added to the expression of time for downhill travel. The downhill travel is faster and to add 2 hours of time would imply it is 2 hours slower.
AI: The mistake is in the expression for time of uphill travel.
Suppose we have $T_u = {12 \over r - 8}$ as uphill travel time, and $T_d = {12 \over r}$ for downhill. From the problem we know that traveling uphill takes 2 hours longer than downhill i.e.
$$
T_u = T_d + 2
$$
So the equation should instead be
$$
{12 \over r - 8} = {12 \over r} + 2
$$
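Solving this gives $r = 12$ mph downhill (so $3$ hours uphill at $4$ mph versus $1$ hour downhill, which is indeed $2$ hours longer). A quick check with sympy (a sketch):

from sympy import symbols, Eq, solve

r = symbols('r', positive=True)
print(solve(Eq(12/(r - 8), 12/r + 2), r))  # [12]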
|
H: Suppose that $0 < a < b$. Show that $\displaystyle 1 + \frac{1+a}{1+b} + \frac{(1+a)(1+2a)}{(1+b)(1+2b)} + \ldots$ converges.
Suppose that $0 < a < b$. Show that
\begin{align*}
1 + \frac{1+a}{1+b} + \frac{(1+a)(1+2a)}{(1+b)(1+2b)} + \ldots
\end{align*}
converges.
MY ATTEMPT
This is what I've tried:
\begin{align*}
S(a,b) = 1 + \frac{1+a}{1+b} + \frac{(1+a)(1+2a)}{(1+b)(1+2b)} + \ldots = 1 + \frac{1/b + a/b}{1/b + 1} + \frac{(1/b + a/b)(1/b + 2a/b)}{(1/b + 1)(1/b + 2)} + \ldots
\end{align*}
But I do not know how to proceed from here. Any help is appreciated.
AI: Apply the ratio test.
The ratio of consecutive terms is of the form $\frac{1+na}{1+nb} = \frac{a+1/n}{b+1/n} \to \frac{a}{b} < 1$.
|
H: A positive integer has $1001$ digits all of which are $1$'s. When this number is divided by $1001$ find the remainder
A positive integer has $1001$ digits all of which are $1$'s. When this number is divided by $1001$ find the remainder.
I tried to think on it but couldn't get through. Please help.
AI: $$A=111111... (1001 \text { times})$$
$$A= 10^{1000}+10^{999}+10^{998}+\cdots +10^0$$
$$A= (10^{1000}+10^{997})+(10^{999}+10^{996})+\cdots+
(10^4+10^1)+ (10^3+10^0)+10^2$$
Now, any number of the form $10^{m+3}+10^m (m\geq 0)$ is divisible by $1001$.
$$A=1001n+10^2$$ So the remainder is $100$.
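This is easy to confirm directly, since Python handles the $1001$-digit integer exactly:

A = int('1' * 1001)
print(A % 1001)  # 100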
|
H: Show that there is a sequence $(m_{j})_{j=0}^{\infty}$ s.t. $m_{j}\to\infty$ as $j\to\infty$ and $\sum_{j=0}^{\infty}m_{j}a_{j}$ converges.
Suppose that $(a_{j})_{j=0}^{\infty}$ is a sequence of non-negative real numbers for which $\sum_{j=0}^{\infty}a_{j}$ converges. Show that there is a sequence $(m_{j})_{j=0}^{\infty}$ of positive real numbers such that $m_{j}\to\infty$ as $j\to\infty$ and $\sum_{j=0}^{\infty}m_{j}a_{j}$ converges.
MY ATTEMPT
I tried to consider $m_{j} = j$. Then we may apply the ratio test to the series $\sum_{j=0}^{\infty}m_{j}a_{j}$ in order to check for convergence:
\begin{align*}
\lim_{n\to\infty}\frac{(n+1)a_{n+1}}{na_{n}} = \lim_{n\to\infty}\left(\frac{n+1}{n}\right)\frac{a_{n+1}}{a_{n}}
\end{align*}
I also know that $a_{n}\to 0$ and $s_{n} = a_{1} + a_{2} + \ldots + a_{n} \leq M$, but then I got stuck.
Could someone help me with this?
AI: Let $b_j = a_j + 2^{-j}$ and $T_j = \sum_{k=j}^{\infty} b_k$. Also, define $m_j$ by
$$ m_j = \frac{\sqrt{T_j} - \sqrt{T_{j+1}}}{b_j} $$
Since $\sum_{j=0}^{\infty} b_j$ converges, $T_j$ converges to $0$ as $j\to\infty$. So
$$ \sum_{j=0}^{n} m_j a_j \leq \sum_{j=0}^{n} m_j b_j = \sum_{j=0}^{n} \bigl( \sqrt{T_j} - \sqrt{T_{j+1}} \bigr) \leq \sqrt{T_0} $$
shows that $\sum_{j=0}^{n} m_j a_j$ is bounded and hence converges. On the other hand,
$$ m_j = \frac{\sqrt{T_j} - \sqrt{T_{j+1}}}{b_j} = \frac{1}{b_j} \int_{T_j - b_j}^{T_j} \frac{\mathrm{d}x}{2\sqrt{x}} \geq \frac{1}{2\sqrt{T_j}} \xrightarrow[j\to\infty]{} \infty. $$
Therefore all desired conditions are satisfied.
|
H: Is the resulting vector part of row/column space?
Per Wikipedia: In linear algebra, the column space (also called the range or image) of a matrix A is the span (set of all possible linear combinations) of its column vectors.
Wiki also gives an example:
Is the vector (c1, c2, 2c1) part of the column space as well? Or just the original column vectors and their respective scalars? Like what exactly is included in the column space - I know it's all possible linear combinations. Is it the entire expression itself?
AI: Any linear combination of the column vectors $\pmatrix{1\\0\\2}$ and $\pmatrix {0\\1\\0}$ is in the column space.
That means $\pmatrix{c_1\\c_2\\2c_1}$, where $c_1$ and $c_2$ are any scalars.
This includes $\pmatrix{1\\0\\2}$, $\pmatrix{0\\1\\0}$, $\pmatrix{1\\1\\2}$, $\pmatrix{1\\-1\\2}$, and a whole host of others
-- in fact, any vector whose third component is twice its first component.
|
H: Finding $\sum_{r=1}^{\infty}\left(\frac{2r+5}{r^2+r}\right)\left(\frac{3}{5}\right)^{r+1}$
$\text{Find the value of}$ $$\lim_{n\to \infty}\left(\sum_{r=1}^{n}\left(\frac{2r+5}{r^2+r}\right)\left(\frac{3}{5}\right)^{r+1}\right)$$
$\text{Answer}: \frac{9}{5}$
Firstly I split "linear-upon-quadratic" term:
$$\frac{2r+5}{r^2+r}=\frac{5}{r}-\frac{3}{r+1}
\\ =\frac{2}{r}+3\left(\frac {1}{r} -\frac {1}{r+1}\right)$$
If it hadn't been for the $\left(\frac 35\right)^{r+1}$ term, the above step would have been very useful - splitting the single summation into two and summing individually.
Unfortunately, that's not the case. I'm unable to proceed further, though my gut says telescoping is the way.
Thanks in advance.
AI: Your idea was right:$$\sum_{r=1}^{\infty}\left(\frac{5}{r}-\frac{3}{r+1}\right)\left(\frac{3}{5}\right)^{r+1}$$
$$=\sum_{r=1}^{\infty}\left(\frac{5}{r}\right)\left(\frac{3}{5}\right)^{r+1}-\sum_{r=1}^{\infty}\left(\frac{3}{r+1}\right)\left(\frac{3}{5}\right)^{r+1}$$
$$=3\sum_{r=1}^{\infty}\left(\frac{1}{r}\right)\left(\frac{3}{5}\right)^{r}-3\sum_{r=2}^{\infty}\left(\frac{1}{r}\right)\left(\frac{3}{5}\right)^{r}$$
$$
=3\cdot \underbrace{\frac{3}{5}}_{r=1}=\frac{9}{5}
$$
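A numerical partial sum agrees (a quick sketch; the tail is negligible by $r = 200$):

s = sum((2*r + 5)/(r*r + r) * (3/5)**(r + 1) for r in range(1, 200))
print(s, 9/5)  # 1.7999999... 1.8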
|
H: True or False: If $A$ is square and $|\det(A)|=1$, then $A^{-1}$ has integer entries.
Thanks for the feedback on my most recent question. So it is not explicitly stated that $A$ has integer entries. Therefore I conclude that the statement is false. Can you please help me think of a counterexample? I am having a hard time doing so. Thanks
AI: Hint:
For each $\theta$, the matrix $\begin{pmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta\end{pmatrix}$ has determinant $1$.
I hope this helps ^_^
|
H: Vectorizing Regularization in Linear Regression
I am wondering if someone could elaborate on the vectorized partial derivative of the MSE cost function. When writing code, I noticed that there seemed to be something wrong with the partial derivative terms that the class was outputting. I used the following formulas:
$$\frac{\partial J(\theta)}{\partial \theta}=\frac{1}{m}X^T(X\theta-y) \in R^{n+1}$$
$$\theta := \theta-\alpha (\frac{\partial J(\theta)}{\partial \theta})$$
Below, I have attached my work through the first iteration, where $\theta_0$ increases despite already being at an ideal value (0). If someone could explain where my error is, either in the assumption that $J(\theta)$ always decreases or in my math/formulas, I would appreciate it. Thanks.
AI: The first coordinate need not stay at $0$ and in this case, will not stay at $0$ since the coordinate of the gradient is non-zero. It will still converge back to $0$ if the step size is chosen carefully.
We have $\theta_0 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$.
$\theta_1 = \theta_0 - \alpha\begin{bmatrix} -\frac52 \\ -\frac{15}2 \end{bmatrix}$
In general,
\begin{align}\theta_{k+1} &= \theta_k -\alpha X^T(X\theta_k - y) \\
&=(I-\alpha X^TX)\theta_k + \alpha X^Ty \\
&= \begin{bmatrix} 1-4\alpha & -10\alpha \\ -10\alpha & 1-30\alpha \end{bmatrix} \theta_k +\alpha \begin{bmatrix} 10 \\ 30\end{bmatrix}\end{align}
We have to choose the step size carefully such that it converges. Let me pick $\alpha =0.01$ since the spectral radius is less than $1$.
octave:1> alpha = 0.01
alpha = 0.010000
octave:2> eig([1-4*alpha, -10*alpha; -10*alpha, 1-30*alpha])
ans =
0.66599
0.99401
Using the following Python code, we can see the progress:
import numpy as np
alpha = 0.01
A = np.array([[1-4*alpha, -10*alpha], [-10*alpha, 1-30*alpha]])
b = np.array([10*alpha, 30*alpha])
theta = np.array([0,0])
for i in range(1000):
    theta = np.matmul(A,theta) + b
    if i % 100 == 0:
        print(theta)
and we get the following output:
[0.1 0.3]
[0.16620987 0.94346837]
[0.09116498 0.96899279]
[0.05000337 0.98299276]
[0.02742651 0.99067164]
[0.01504325 0.99488346]
[0.00825112 0.99719361]
[0.00452568 0.99846072]
[0.00248231 0.99915571]
[0.00136153 0.99953691]
|
H: What is wrong with my approach for CLRS 5.4-6 : Given n balls and n bins,find expected number of empty bins?
I am trying to find the expected number of empty bins after n balls are tossed into n bins. Each toss is independent and equally likely to end up in any bin. Below is my approach.
My indicator variable is
$X_i$ : i bins are empty
$$ Pr[X_i]= \frac{\binom{n}{n-i} \cdot n^{n-i}}{n^n}$$
And excepted number of empty bins is :
$$
\sum_{i=1}^{n-1} i\cdot Pr[X_i]
$$
After simplifying the above equation I get:
$$
\sum_{i=1}^{n-1} \frac{(n-1)!}{(i-1)!\,(n-i)!\,n^{i-1}}
$$
But in the solution that I found on the web, the indicator variable is: let $X_i$ be the event that all the balls fall in bins other than the $i$th. And then the expected number of empty bins is:
$$
\sum_{i=1}^n \left(\frac{n-1}{n}\right)^n
$$
But it seems to me that the indicator variable chosen above is wrong, since they are adding the probabilities that the $i$th bin is empty, so at a time only one bin is considered empty, whereas more than one bin can be empty at a time.
Is there something wrong with my understanding of the above problem?
AI: The web solution is correct; it uses linearity of expectation. Suppose we label the boxes $1$ through $n$, and let $X_1$ be the number of boxes numbered $1$ that are empty. Since there is obviously only one box numbered $1$, its expected value is equal to the probability that that one box is empty:
$$
E(X_1) = P(\text{box $1$ is empty}) = \left(\frac{n-1}{n}\right)^n
$$
By symmetry, we have $E(X_1) = E(X_2) = \cdots = E(X_n)$, and then by linearity of expectation, the expected number of boxes that are empty is
\begin{align}
E(X) & = E(X_1+X_2+\cdots+X_n) \\
& = E(X_1)+E(X_2)+\cdots+E(X_n) \\
& = nE(X_1) \\
& = \frac{(n-1)^n}{n^{n-1}}
\end{align}
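If you want to convince yourself numerically, here is a short Monte Carlo check (a sketch; the choices of $n$ and the trial count are arbitrary, not from the original problem):
import random

n, trials = 20, 100_000
total = 0
for _ in range(trials):
    bins = [0] * n
    for _ in range(n):
        bins[random.randrange(n)] += 1   # toss one ball into a uniform bin
    total += bins.count(0)               # count this trial's empty bins
print(total / trials)                    # simulated expectation
print((n - 1)**n / n**(n - 1))           # closed form, ~7.17 for n = 20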
You can also proceed the way you set out. However, in the expression you give for the probability that exactly $k$ boxes are empty, the numerator $\binom{n}{k} n^{n-k}$ does not, unfortunately, count only those cases; it also includes cases where more than $k$ boxes are empty, too (and overcounts them, to boot).
The correct expression is not trivial; it is
$$
\frac{n!}{k!} S(n, n-k)
$$
where $S(\cdot, \cdot)$ are Stirling numbers of the second kind. See this OEIS entry for more details on the particular count. It is also written
$$
\frac{n!}{k!} \left\{ n \atop n-k \right\}
$$
All in all, you're better off using linearity of expectation. :-)
|
H: Selections with repetitions
Suppose I have the list of numbers $[1,2,3,\ldots,20]$. In how many ways can I select three numbers, with repetitions allowed, from this list?
Can somebody explain how to solve this question? I thought about it, and it seems to me that there are 20 identical objects of the first type, 20 identical objects of the second, and 20 of the third, and we have to select three items from the three types, which gives me $$\binom{20}{1} \times \binom{20}{1}\times \binom{20}{1}.$$ Is this correct?
AI: If, as clarified in the comments, the order does not matter, then the selections differ only by the counts of the corresponding items. This is equivalent to distributing 3 balls between 20 bins. The number of ways to perform this can be computed by stars and bars as:
$$
\binom{3+20-1}3=\binom{22}3.
$$
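A brute-force check in Python (a minimal sketch) confirms the count:
from itertools import combinations_with_replacement
from math import comb

count = sum(1 for _ in combinations_with_replacement(range(1, 21), 3))
print(count, comb(22, 3))   # both print 1540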
|
H: Recursion relation for the moment of the normal distribution.
I am currently studying the Statistical Inference, 2nd, Casella & Berger.
On page 72, the authors assert that for the normal distribution with mean $\mu$ and variance $1$, $$EX^{n+1}=\mu EX^n - \frac{d}{d\mu}EX^n.$$
I cannot deduce it myself.
I know that pdf for this normal distribution is $f(x)=\frac{1}{\sqrt{2\pi}}\exp({\frac{-(x-\mu)^2}{2}})$.
Now
$$\frac{d}{d\mu}EX^n= \frac{d}{d\mu}\int_{-\infty}^{\infty}x^n\frac{1}{\sqrt{2\pi}}\exp({\frac{-(x-\mu)^2}{2}})dx=\int_{-\infty}^{\infty}\frac{d}{d\mu}(x^n\frac{1}{\sqrt{2\pi}}\exp({\frac{-(x-\mu)^2}{2}}))dx$$
$$=\int_{-\infty}^{\infty}x^n\frac{1}{\sqrt{2\pi}}\exp({\frac{-(x-\mu)^2}{2}})(x-\mu)dx=EX^{n+1}-\mu EX^n$$.
However by interchanging terms I get only $$EX^{n+1}=\mu EX^n + \frac{d}{d\mu}EX^n$$.
Am I missing something? Or is this just a typo?
AI: You are right! I checked on my copy of CB. It's an obvious typo, easy to verify, as you correctly proved.
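For anyone who wants to verify the sign symbolically, here is a short check using sympy's stats module (a sketch, assuming sympy is installed); it confirms that the '+' version of the recursion is the correct one:
import sympy as sp
from sympy.stats import Normal, E

mu = sp.symbols('mu', real=True)
X = Normal('X', mu, 1)   # mean mu, variance 1

for n in range(1, 4):
    lhs = sp.expand(E(X**(n + 1)))
    rhs = sp.expand(mu * E(X**n) + sp.diff(E(X**n), mu))
    print(n, sp.simplify(lhs - rhs) == 0)   # True for each n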
|
H: Change in volume of sphere given change in radius
Find the change in volume $$V=\frac{4}{3}\pi a^3$$ of a sphere when the radius changes from $a_{0}$ to $a_{0}+da$.
What I tried:
Using the differential formula:
$$\frac{\Delta V}{\Delta a}=\frac{d V}{da}=\frac{d}{da}\bigg(\frac{4}{3}\pi a^3\bigg)=4\pi a^2$$
$$\Delta V=4\pi a^2 da$$
Is my answer right? Actually, I don't have the solution. If it is not right, then how do I solve it? Thanks.
AI: $$\begin{align}\Delta V&=\frac 43\pi(a_0+da)^3-\frac 43\pi a_0^3\\&=\frac 43\pi((a_0+da)^3-a_0^3)\\&=\frac 43 \pi((a_0+da)-a_0)((a_0+da)^2+a_0(a_0+da)+a_0^2)\\&=
\frac43\pi da(3a_0^2+3a_0da+da^2)\end{align}$$
If $|da|\ll a_0$ then $\Delta V=4\pi a_0^2 da$.
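A quick numeric comparison (a sketch; the values of $a_0$ and $da$ are arbitrary choices) shows how good the linear approximation is when $|da|\ll a_0$:
import math

a0, da = 2.0, 1e-3
exact = 4/3 * math.pi * ((a0 + da)**3 - a0**3)
approx = 4 * math.pi * a0**2 * da
print(exact, approx)   # agree to about 0.1%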
|
H: Most general linear transformation of $|z|=r$ into itself using cross ratio
This question (without the cross ratio part) was asked earlier today, as well as a few times before. Here was the question that was asked earlier today: Find all Möbius transformations that map the circle $|z|=R$ into itself
Now, I am wondering if this problem could instead be set up using the cross ratio, but I suppose I am having a hard time setting everything up.
So, I want $(z,z_1,z_2,z_3)=(w,w_1,w_2,w_3)$, where $w=f(z)$.
Now, here are my thoughts, and this is where I suppose I am going wrong: I suppose that we would have $(z,0,1,a)=(w,0,1,\infty)$, which, after setting up the cross ratio and solving for $w$, would give $w=\frac{z(a-1)}{-z+a}$. But I believe this is wrong. For instance, I have that the center of the original circle (the origin) is mapped back to the origin. But I don't believe this to be necessary, since we want the MOST GENERAL linear transformation. Also, I am mapping $1$ to $1$ based on some of the other answers that were given.
My question is in two parts: first, is this a problem we could do using the cross ratio, and second, if the first part is doable, can you provide some insight into how you are going about setting up the "$(z,z_1,z_2,z_3)=(w,w_1,w_2,w_3)$" bit?
Thank you!!
AI: Consider the case $r=1$ first, i.e. $f$ is a Möbius transformation which maps the unit circle $|z|=1$ onto itself.
Let $f^{-1}(0) = a$. Then, because $f$ preserves symmetry with respect to the unit circle, $f^{-1}(\infty) = 1/\overline a$. The image of a third point determines $f$ uniquely, so let us set $c = f(1)$. Note that $|c| = 1$. $f$ preserves the cross ratio, so we can conclude that
$$ \tag{*}
(z, 1, a, 1/\overline a) = (f(z), c, 0, \infty)
$$
and we get
$$
cf(z) = \frac{z-a}{z-1/\overline a} \cdot \frac{1-1/\overline a}{1-a} \\
\iff f(z) = \frac 1c \frac{\overline a - 1}{a-1} \cdot \frac{z-a}{1-\overline a z} \, .
$$
The factor $\frac 1c \frac{\overline a - 1}{a-1}$ has absolute value one, therefore
$$ \tag{**}
f(z) = e^{i \lambda} \frac{z-a}{1-\overline a z}
$$
for some $\lambda \in \Bbb R$ and some $a \in \Bbb C$ with $|a| \ne 1$.
So any Möbius transformation which maps the unit circle onto itself is necessarily of the form $(**)$.
On the other hand, if $f$ is defined by $(**)$ then $f$ satisfies $(*)$ with some $c$ of absolute value one, which implies that $f$ maps the unit circle onto itself.
(Depending on whether $|a| < 1$ or $|a| > 1$, $f$ maps the interior of the unit circle to the interior or to the exterior of the unit circle. The case $|a| < 1$ gives exactly the conformal automorphisms of the unit disk.)
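As a quick numeric confirmation that $(**)$ maps the unit circle onto itself (a sketch with arbitrarily chosen $\lambda$ and $a$, $|a| \ne 1$):
import numpy as np

lam, a = 0.7, 0.3 + 0.4j                       # sample parameters, |a| = 0.5
z = np.exp(1j * np.linspace(0, 2 * np.pi, 8))  # points on the unit circle
f = np.exp(1j * lam) * (z - a) / (1 - np.conj(a) * z)
print(np.abs(f))                               # all entries ~1.0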
For arbitrary $r > 0$ you can consider the mapping $\tilde f(z) = f(rz)/r$ which must be of the form $(**)$, or repeat the above argument with mirroring at the circle $|z|=r$:
$$
(z, r, a, r^2/\overline a) = (f(z), c, 0, \infty) \, .
$$
|
H: Choosing the sign of determinant when taking a square root
Calculate the determinant $$\det(A)=\begin{vmatrix}a&b&c&d\\ \:\:\:-b&a&d&-c\\ \:\:\:-c&-d&a&b\\ \:\:\:-d&c&-b&a\end{vmatrix}$$
I found that $$\det(A)\det(A^T)=\det(A)^2=(a^2+b^2+c^2+d^2)^4$$
From this we get
$$\det(A) = \pm (a^2+b^2+c^2+d^2)^2$$
Now, how to choose the sign? Any help is appreciated.
AI: Here is one quick way: Use the standard cofactor formula for the determinant. Expand only what you need. What is the sign of $a^4$?
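If you want to confirm the sign symbolically, a short sympy computation (a sketch) factors the determinant directly:
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([
    [ a,  b,  c,  d],
    [-b,  a,  d, -c],
    [-c, -d,  a,  b],
    [-d,  c, -b,  a],
])
print(sp.factor(A.det()))   # (a**2 + b**2 + c**2 + d**2)**2, so the sign is +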
|
H: Solve $x^{x^{x^{2017}}}=2017$
I have tried to use $\ln$, but couldn't solve:
\begin{equation}
\ln x^{x^{x^{2017}}}=x^{x^{2017}}\ln x=\ln 2017.
\end{equation}
I found that $x=\sqrt[2017]{2017}$ is a solution, and it is easy to check. But how does one find that solution without guessing, and how does one prove that it is the only one?
AI: $$x^{x^{x^{2017}}}=2017$$
Raise $x$ to the power of both sides:
$$x^{x^{x^{x^{2017}}}}=x^{2017}$$
Let $y=x^{2017}$. Then the equality becomes:
$$x^{x^{x^{y}}}=y$$
Since this is in the form of the first equation, $y$ can equal 2017. Therefore:
$$y=2017=x^{2017}$$
Which gives us our one real solution $2017^{\frac{1}{2017}}$. By the fundamental theorem of algebra, there are 2016 more complex solutions, however it will take you a while if you don't have a calculator. Hope this helps!
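A quick floating-point check of the real solution (a minimal sketch):
x = 2017 ** (1 / 2017)
print(x ** (x ** (x ** 2017)))   # ~2017.0, confirming the solution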
|
H: Questions in study of Adjoint and inverse in Linear Algebra
While studying Linear Algebra from Hoffman and Kunze, I am unable to understand a few arguments given in the lesson on determinants. As my institute is closed, I have no help other than asking questions here:
Here is the image:
Questions: (1) How can formula (5-21) be summarized in matrix form as (5-23)?
(2) I have a question about the paragraph before Theorem 4: the authors write that if the inverse of $\det(A)$ exists in $K$, then $A^{-1} = (\det A)^{-1}\operatorname{adj}(A)$ is the inverse of $A$. To prove this I need to show that $A^{-1}A = I = AA^{-1}$, but if I try to solve for $A$ from $(\operatorname{adj} A)\,A = \det(A)\,I$ (see 5-23), then $(\operatorname{adj}(A))^{-1}$ must exist. How can I be sure that $(\operatorname{adj}(A))^{-1}$ exists?
Can anyone please help.
AI: For your first question, since $C^T=\operatorname{adj} A$, from $(5-21)$ we obtain
$$
\left((\operatorname{adj} A)A\right)_{jk}=(C^TA)_{jk}=\sum_{i=1}^n (C^T)_{ji}A_{ik}=\sum_{i=1}^n C_{ij}A_{ik}=\delta_{jk}\det A.
$$
Therefore $(\operatorname{adj} A)A=(\det A)I$.
For your second question, you don't need to prove that $\operatorname{adj} A$ is invertible. As stated in the second paragraph on p.160, by definition, $A$ is called invertible if there exists a matrix $B$ such that $AB=BA=I$. And if this is the case, $B$ is called the inverse of $A$ and we denote $B$ by $A^{-1}$. Now, if $(\det A)^{-1}$ exists and if we put $B=(\det A)^{-1}\operatorname{adj}(A)$, then by $(5-23)$ and $(5-25)$, we have $BA=AB=I$. Hence $A$ is invertible and its inverse is $B$.
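If it helps, the identity $(\operatorname{adj} A)A = (\det A)I$ can also be checked symbolically for a generic matrix (a sketch using sympy; `adjugate` is sympy's name for the classical adjoint):
import sympy as sp

A = sp.Matrix(3, 3, sp.symbols('a0:9'))   # generic 3x3 matrix
print(sp.simplify(A.adjugate() * A - A.det() * sp.eye(3)))   # zero matrix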
|
H: False statement about a continuous and non-negative function
It was asked in the University of Hyderabad exam (2017).
I have shown the 2nd and 3rd statements to be true with the help of the Intermediate Value Theorem for integrals.
But I can't see why statement 1 is false.
AI: The integral can be made as small as you please. To see this, consider the family of functions $f_\delta:\mathbb{R}\to\mathbb{R}$ defined by
$$f_\delta(x)=\begin{cases} \:\frac{100}{\delta}(x-(c-\delta)) &\text{$c-\delta\leqslant x\leqslant c$}\\ \: \frac{100}{\delta}((c+\delta)-x) &\text{$c\leqslant x\leqslant c+\delta$}\\ \:\: 0 &\text{elsewhere} \end{cases}$$
with $0<\delta<\min(c,\,1-c)$ (these functions are triangular spikes of height $100$ centered at $c$). We have
$$\int_0^1 f_\delta(x)\:dx=100\cdot\delta$$
and $\delta$ can be made as small as you please.
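A quick Riemann-sum check of the spike's integral (a sketch; $c$ and $\delta$ are arbitrary admissible values):
import numpy as np

c, delta = 0.5, 1e-2
x = np.linspace(0, 1, 200001)
f = np.where(np.abs(x - c) <= delta,
             (100 / delta) * (delta - np.abs(x - c)), 0.0)
print(f.sum() * (x[1] - x[0]))   # ~1.0 = 100 * delta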
|
H: $rkA+rkB=n$, and A and B are diagonalizable
Let $A$ and $B$ be diagonalizable $n\times n$ matrices. Suppose each of $A$ and $B$ has no eigenvalues other than $0$ and $1$. Show that such $A$ and $B$ do not exist.
Any help would be appreciated, thank you.
P.S. Sorry, I missed an important condition at first: I assume $A+B=E$.
AI: Your question's wording is confusing, but if you really meant what is written, then the claim is false:
$$A=\begin{pmatrix}1&0\\0&0\end{pmatrix}\;,\;\;B=\begin{pmatrix}0&0\\0&1\end{pmatrix}$$
are both $\;2\times2\;$ diagonal (and thus trivially diagonalizable) matrices with only $\;0,1\;$ eigenvalues, and also
$\;A+B=E\;$ ...
|
H: Topologies for graphs
Some of the basic definitions in Graph theory made me wonder if there is by any chance a way to give a graph $G$ a topology, such that these definitions can be understood as versions of analogous definitions given in topology. For example, is there a topology for $G$ such that the definition of "connected graph" can be understood as equivalent to $G$ being a connected (or arc-connected) topological space?
AI: There is a notion of topological realization of a graph, basically given by taking points in $\Bbb R^n$ and gluing copies of the unit interval $[0,1]$ to it, wherever the graph has an edge. This results in a metric space.
Some results can be translated between this topological realization and the graph. For example the graph is connected, if and only if the realization is (path)-connected. Similarly the graph has a cycle, if and only if there is an embedding of a circle $\Bbb S^1$ into the realization etc.
|
H: Prove that n^3 - n is a multiple of 6 for all positive integral values of n
Prove that
$$n^3 - n$$
is a multiple of 6 for all positive integral values of n
Does "positive integral values of $n$" refer to values of $n$ once the expression is integrated to $$\tfrac{1}{4}n^4 - \tfrac{1}{2}n^2 + C\,?$$
How do you deal with the constant of integration in a proof like this?
AI: Fermat's little theorem implies that $n^3\equiv n \pmod 3$. Also, $n^3-n=n(n^2-1)=n(n-1)(n+1)$, and $n(n-1)$ is even, so $2\mid n^3-n$.
Alternatively, $n-1,n,n+1$ are $3$ consecutive integers, so one of them is divisible by $3$. Hence $6\mid n^3-n$.
"Positive integral values" are the elements of $\mathbb{N}$ that are strictly positive; the phrase has nothing to do with integration, so no constant of integration ever enters the proof.
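A brute-force check over the first ten thousand positive integers (a minimal sketch):
print(all((n**3 - n) % 6 == 0 for n in range(1, 10001)))   # True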
|
H: Find all natural numbers $n$ with $φ(8n)< φ(5n)$ [Answer provided - Questioning for explanation]
Find all natural numbers $n$ such that $φ(8n)< φ(5n)$.
Answer (originally given as images; only the conclusion is legible):
$n$ is an odd number that is a multiple of $5$.
Question: Can I get an elaboration on how it was solved? Also, why did they use $4n$?
AI: Note that for $n > 1$, $\varphi(n)$ has a formula in terms of its prime factors.
Given $n = p_1^{a_1}\cdots p_k^{a_k}$ in the standard way, we have
$$\varphi(n) = n\left(1-\dfrac{1}{p_1}\right)\cdots\left(1-\dfrac{1}{p_k}\right).$$
Now, if $5$ is already a factor of $n$, then the prime factors of $5n$ are the same as that of $n$ and thus, we get
$$\varphi(5n) = 5n\left(1-\dfrac{1}{p_1}\right)\cdots\left(1-\dfrac{1}{p_k}\right) = 5\varphi(n).$$
However, if $5$ does not divide $n$, then $5n$ has the additional prime factor of $5$ and thus we get
$$\varphi(5n) = 5n\left(1-\dfrac{1}{5}\right)\left(1-\dfrac{1}{p_1}\right)\cdots\left(1-\dfrac{1}{p_k}\right) = 4\varphi(n).$$
This gives us the correct formula for $\varphi(5n)$ as
$$\varphi(5n) = \begin{cases}5\varphi(n) & 5 \mid n\\4\varphi(n) & 5 \nmid n\end{cases}$$
Note that even though we assumed $n > 1$, to begin with, the above formula does work for $n = 1$ as well. (As seen by a manual check.)
Similarly, by considering the case whether $2\mid n$ or not, we get the formula for $\varphi(8n)$ as
$$\varphi(8n) = \begin{cases}8\varphi(n) & 2 \mid n\\4\varphi(n) & 2 \nmid n\end{cases}$$
Note that the above result is slightly different from what you had written.
The above shows you that if you want $\varphi(8n) < \varphi(5n)$, the only possibility is that $\varphi(8n) = 4\varphi(n)$ and $\varphi(5n) = 5\varphi(n)$.
Thus, these conditions force $$\boxed{2 \nmid n \text{ and } 5 \mid n}.$$
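One can spot-check this characterization with sympy's totient function (a sketch):
from sympy import totient

sols = [n for n in range(1, 101) if totient(8 * n) < totient(5 * n)]
print(sols)   # [5, 15, 25, 35, 45, 55, 65, 75, 85, 95]: the odd multiples of 5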
|
H: Finding limits of complex functions using Taylor expansion
I am supposed to compute the following limit:
$$ \lim_{z \to 0} \frac{(1-\cos z)^2}{(e^z-1-z)\sin^2z} $$
I guess I have to use a Taylor expansion somehow, but I'm not sure what to expand and how, it looks a bit complicated. Any ideas?
AI: $$1-\cos(z)=\frac{z^2}{2}+o(z^2),$$
therefore $$(1-\cos(z))^2=\frac{z^4}{4}+o(z^4).$$
Also,
$$\sin(z)=z+o(z),$$
and thus $$\sin^2(z)=z^2+o(z^2).$$
Moreover,
$$e^{z}=1+z+\frac{z^2}{2}+o(z^2),$$
and thus $$\sin^2(z)(e^z-1-z)=\frac{z^4}{2}+o(z^4).$$
I leave the conclusion to you.
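For anyone wanting to double-check the final value of $\frac{1/4}{1/2}=\frac12$, sympy agrees (a minimal sketch):
import sympy as sp

z = sp.symbols('z')
expr = (1 - sp.cos(z))**2 / ((sp.exp(z) - 1 - z) * sp.sin(z)**2)
print(sp.limit(expr, z, 0))   # 1/2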
|
H: Exterior measure on $\mathbb{Q}$
I'm preparing for the final exam in real analysis and I'm trying to solve this exercise. Show that there is no exterior measure $\mu^*$ on $\mathbb{Q}$ such that for all $p, q \in \mathbb{Q}$ with $p<q$,
$$
\mu^*(\{x \in \mathbb{Q} : p\leq x\leq q\})=q-p.$$
I was thinking of assuming that there is such an exterior measure and then deriving a contradiction, but I don't know how it should be done...
AI: $\mu^{*}(\{a\})=0$ for each $a\in\mathbb Q$, since $\{a\}\subseteq\{x \in \mathbb Q : a-\varepsilon\leq x\leq a+\varepsilon\}$ forces $\mu^{*}(\{a\})\leq 2\varepsilon$ for every rational $\varepsilon>0$. As $\mathbb Q$ is countable, $\mu^{*}(A)=0$ for every $A\subseteq\mathbb Q$ by countable sub-additivity of exterior measures, contradicting, say, $\mu^{*}(\{x \in \mathbb Q : 0\leq x\leq 1\})=1$.
|