H: Epp's proof that if a graph $G$ has a vertex of degree $k$ and $G$ is isomorphic to $G'$, then $G'$ has a vertex of degree $k$ I'm very confused by this last paragraph shown here: What exactly does she mean by "there are no edges incident on $g(v)$ other than the ones that are images under $g$ of edges incident on $v$"? Her $g$ function maps from $V(G) \to V(G')$, so why would an edge be an image under $g$? AI: Yes, $g$ is a map from $V(G)$ to $V(G')$. But the isomorphism gives that if $uv$ is an edge in $G$ then $g(u)g(v)$ is an edge in $G'$. So here the image of the edge $e=uv$ under $g$ refers to the edge $e'=g(u)g(v)$ in $G'$.
H: How to know the possible order of a coset when the order of an element is given? I have a group $G$ and an element $a\in{G}$ which has an order of 17 ( o(a)=17 ). Also I'm given that $H$ is a normal subgroup of $G$, and I'm supposed to find the possible orders of the right coset $Ha$ from this. I'm trying to better my understanding of the order of a group and using Lagrange's theorem. I'm not seeing a way to approach this; from the given I figured out that if $H$ is a normal subgroup then $\forall{h}\in{H},\forall{}a\in{G}$ it is known that $a*h*a^{-1}\in{H}$ where * is the operation of $G$, and also it is known that the right and left cosets are equal. AI: The order of the coset $Ha$ in the quotient group $G/H$ divides the order of a representative, since $(Ha)^{o(a)}=Ha^{o(a)}=H$. As $o(a)=17$ is prime, the answer is 17 (if your element is not in the normal subgroup) or 1 (otherwise).
H: Let $G$ be a group, $H\le G$. Define $X=\cup_{g\in G}gHg^{-1}.$ Prove $X=G$ when $[G:H]<\infty$. Let $G$ be a group and $H \leq G$. Define $$X = \bigcup_{g\in G} gHg^{-1}.$$ I want to prove that $X=G$ when $[G:H] < \infty$. I had a couple of observations: $g_1 H g_1^{-1} = g_2 H g_2^{-1} \iff g_1 N_G(H) = g_2 N_G(H)$. Thus, the number of different subgroups $g H g^{-1}$ as $g$ varies is $[G: N_G(H)]$. Since all $g H g^{-1}$ contain $1_G$, they are not disjoint. Then I'm stuck. Any help will be appreciated. AI: The statement is wrong whenever $H$ is a proper subgroup: if $H$ is normal then $X=H\neq G$ (and $H$ is automatically normal when the index is 2).
H: Improper Integral Property of a Positive Function Let $g:[0,\infty)\rightarrow\mathbb{R}$ be a positive function satisfying $$\int_{0}^{\infty}{\frac{dr}{g(r)}}=\infty.$$ Can we say that $\int_{0}^{\infty}{g(r)dr}<\infty$ ? Thanks in advance! AI: Take $g(r) = r$. Since the harmonic series diverges, by the integral test we have $$\sum_{n = 1}^{\infty} \frac{1}{n} = +\infty \Rightarrow \int_{0}^{\infty} \frac{dr}{g(r)} = \int_{0}^{\infty} \frac{dr}{r} = +\infty$$ On the other hand, we also have $$\int_{0}^{\infty} g(r)dr = \int_{0}^{\infty} rdr = +\infty$$ This makes $g(r) = r$ a counterexample to the claim.
H: Why $\int_{ \mathbb{R}^2 } \frac{dx\,dy }{(1+x^4+y^4)} $ converges? Why $\int_{ \mathbb{R}^2 } \frac{dx\,dy }{(1+x^4+y^4)} $ converges? Apparently this integral is quite similar to the integral $\iint_{\mathbb R^2} \frac{dx \, dy}{1+x^{10}y^{10}}$ diverges or converges? and it converges. So this is quite remarkable that it does converge. AI: Answer. Yes. Note that $$ 1+x^4+y^4\ge 1+\frac{1}{2}(x^2+y^2)^2 $$ and hence, using polar coordinates ($x=r\cos\theta, \,y=r\sin\theta$), we have $$ \int_{\mathbb R^2}\frac{dx\,dy}{1+x^4+y^4}\le \int_{\mathbb R^2}\frac{dx\,dy}{1+\frac{1}{2}(x^2+y^2)^2}= \int_0^{2\pi}\int_0^\infty\frac{r\,dr\,d\theta}{1+\frac{1}{2}r^4}\le \int_0^{2\pi}\int_0^\infty\frac{2r\,dr\,d\theta}{1+r^4}\\ =2\pi \int_0^\infty\frac{ds}{1+s^2}=2\pi\cdot\frac{\pi}{2}=\pi^2. $$ Note. If we set $$ A=\{(x,y): |xy|\le 1\}, $$ then $A$ has infinite area, since $\int_0^\infty\frac{dx}{x}=\infty$. Meanwhile, $$ \frac{1}{1+x^{10}y^{10}}\ge \frac{1}{2}, \quad \text{for all $(x,y)\in A$} $$ and hence $$ \int_{\mathbb R^2}\frac{dx\,dy}{1+x^{10}y^{10}}\ge \int_{A}\frac{dx\,dy}{1+x^{10}y^{10}}\ge \int_{A}\frac{1}{2}\,dx\,dy=\infty. $$
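A quick numerical sanity check of the bound (not part of the argument above, just an illustration) can be run with SciPy; the plane is truncated to a large box, which drops only a tiny tail because the integrand decays like $1/r^4$:

    import numpy as np
    from scipy import integrate

    # estimate the integral of 1/(1+x^4+y^4) over a large box and compare with pi^2
    val, err = integrate.dblquad(lambda y, x: 1.0 / (1.0 + x**4 + y**4),
                                 -50, 50, lambda x: -50, lambda x: 50)
    print(val, "<=", np.pi**2)   # the estimate comes out comfortably below pi^2 ~ 9.87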
H: Proving exponent law for real numbers using the supremum definition only I was working on a problem displaying the expansion of the definition of exponents, and naturally the final question was to prove the exponent laws when the exponents are real numbers. For $a>1$, define $a^x=\sup\{a^r:r\in\mathbb Q, r\le x\}$ for any $x\in\mathbb R$. Prove the exponent laws hold for real exponents and $a>1$. Now I have proved that $a^{x+y}=a^xa^y$ for any real number $x,y$. The next thing I was to prove was $(a^x)^y=a^{xy}$. So I first fixed $x$ and let a rational number $r$ less or equal to $y$, and show $(a^x)^r=a^{rx}$. This is my attempt: First suppose that $(a^x)^r<\sup\{a^{rs}:s\in\mathbb Q,s\le x\}$. Then there exists a rational number $t\le x$ such that $(a^x)^r<(a^t)^r\le\sup\{a^{rs}:s\in\mathbb Q,s\le x\}$, and hence $a^x<a^t$, so we reach a contradiction. Now I have to show a contradiction when $(a^x)^r>\sup\{a^{rs}:s\in\mathbb Q,s\le x\}$, and I'm completely stuck. Can anyone help me out with this? Any help would be appreciated. AI: $ \def\qq{\mathbb{Q}} $ ... there exists a rational number $t≤x$ such that $(a^x)^r < (a^t)^r ≤ \sup\{a^{rs}:s∈\qq,s≤x\}$ ... This is correct. After this you then concluded $a^x < a^t$, but I am not sure whether you actually understood how to get it, because you cannot simply "take $r$-th root" unless you have already proven the needed inequalities involving rational powers of reals. Some work is needed here. Moreover, you should not immediately seek a proof by contradiction when doing real analysis. Instead, focus on the underlying structure. Here, we can in fact directly prove the desired result. Here is a sketch (I'll leave the proof of each substep to you): $(a^x)^y = \sup \{ ( \sup\{ a^r : r∈\qq_{<x} \} )^s : s∈\qq_{<y} \}$ $ = \sup \{ \sup\{ (a^r)^s : r∈\qq_{<x} \} : s∈\qq_{<y} \}$ $ = \sup \{ (a^r)^s : r∈\qq_{<x} ∧ s∈\qq_{<y} \}$ $ = \cdots$ It should be easy to finish now, using the definition of multiplication of reals and the properties of exponentiation for rationals.
H: Composition of orthogonal projections, $P_1 P_2 = P_2 P_1 \rightarrow P_1 P_2$ is the orthogonal projection on $W_1 \cap W_2$ Let $V$ be an inner product space of finite dimension. $P_1, P_2$ are the orthogonal projections on sub-spaces $W_1, W_2$. Prove that if $P_1 P_2 = P_2 P_1$ then $P_1 P_2$ is the orthogonal projection on $W_1 \cap W_2$ Hint (could be irrelevant to this part of the question, there are more parts): $(W_1 \cap W_2)^{\bot} = W_1^{\bot}+W_2^{\bot}$ My attempt was to first define an orthonormal basis of $W_1 \cap W_2$, and then to "complete" it to an orthonormal basis of $W_1$, and to an orthonormal basis of $W_2$; then I used the formula $P(v)=\sum \left \langle v, x_i \right \rangle x_i$, but I somehow got that $P_1 P_2 = P_2 P_1$ is always true, which makes no sense unless the question is misleading. So I'm looking for a hint on how to approach this question. If necessary I can show how I got to $P_1 P_2 = P_2 P_1$ AI: I am not sure how exactly the hint is supposed to be used (if it is indeed meant for this question), but the following should suffice. Verify that $P_1P_2$ is self-adjoint with $(P_1P_2)^2 = P_1P_2$; it follows that $P_1P_2$ is an orthogonal projection onto its image. The image of $P_1P_2$ lies in the image of $P_1$ and in the image of $P_2$, so the image of $P_1P_2$ is a subspace of $W_1 \cap W_2$. On the other hand, we see that for $x \in W_1 \cap W_2$, $P_1P_2 x = x$, so the image is indeed all of $W_1 \cap W_2$. The conclusion follows.
H: Evaluate the integral $\int\limits_{0}^{b}\frac{dx}{\sqrt{a^2+x^2}}$ Show that $$\int\limits_{0}^{b}\frac{dx}{\sqrt{a^2+x^2}}=\sinh^{-1}\frac{b}{a}$$ However, when I use Maple or WolframAlpha to calculate the integral on the left, both gave me $-\frac{\ln(a^2)}{2}+\ln(b+\sqrt{a^2+b^2})$, which does not seem to agree with the result on the right. [This integration is from the book Introduction to Superconductivity Second Edition By Michael Tinkham Page 56] A snapshot of the textbook . The $\Delta$ is independent of $\xi$. This is from the famous BCS theory and I think it should be correct. Also, I have checked another book, which shows the same result ($\sinh^{-1}\frac{b}{a}$). AI: There is a link between $ \sinh^{-1} $ and $ \ln $ via: $$\sinh(x)=\frac{e^x-e^{-x}}{2}=y$$ $$x=\sinh^{-1}(y)$$ Solving the equation $$e^{2x}-2ye^x-1=0$$ for $ e^x $ (taking the positive root), we find $$e^x=y+\sqrt{y^2+1}$$ and then $$x=\sinh^{-1}(y)=\ln(y+\sqrt{y^2+1})$$ So if $a>0$, $$\sinh^{-1}\Bigl(\frac ba\Bigr)=\ln\Bigl(\frac ba+\sqrt{\frac{b^2}{a^2}+1}\Bigr)$$ $$=\ln\Bigl(\frac{b+\sqrt{a^2+b^2}}{a}\Bigr)$$ $$=\ln(b+\sqrt{b^2+a^2})-\ln(a)$$ Since $\frac{\ln(a^2)}{2}=\ln(a)$ for $a>0$, this is exactly the expression Maple and WolframAlpha returned, so the two results do agree.
H: Prove that there is a constant $ M $ such that $ \int|fg|dm\leq M \| f\|_{L^{p}} $ for all $ f\in L^{p}(\mathbf{R}) $. I could not understand the last part of the proof of the following theorem: Let $ p\geq 1 $ and $ g $ be a measurable function such that $ \int|fg|dm<\infty $ for every $ f\in L^{p}(\mathbf{R}) $. Prove that there is a constant $ M $ such that $ \int|fg|dm\leq M \| f\|_{L^{p}} $ for all $ f\in L^{p}(\mathbf{R}) $. Proof. Let $$ g_{n}(x)=\begin{cases}g(x)&\text{if $|g(x)|\leq n$ and $|x|\leq n$},\\ 0&\text{otherwise}\end{cases}$$ and $ \frac{1}{p}+\frac{1}{q}=1 $. Then $ g_{n}\in L^{q} $ for all $ n $. We observe that $ |g_{1}f|\leq |g_{2}f|\leq\cdots\leq |gf| $. So $ \lim_{n\rightarrow \infty} g_{n}(x)f(x)=f(x)g(x) $ for all $x$. It follows that the sequence of bounded linear functionals $ f\mapsto \int f(x)g_{n}(x)dx $ on $ L^{p}(\mathbf{R}) $ (since $ g_{n}\in L^{q} $, these linear functionals are bounded) converges to $ f\mapsto \int f(x)g(x)dx $. Then, by the Banach–Steinhaus Theorem, $ f\mapsto \int f(x)g(x)dx $ is a bounded linear functional on $ L^{p}(\mathbf{R}) $. So $ f\mapsto \int f(x)g(x)dx $ is continuous on $ L^{p}(\mathbf{R}) $ and therefore there is an $ h\in L^{q}(\mathbf{R}) $ such that $\int f(x)g(x)dx=\int f(x)h(x)dx $, which then implies that $ g=h $ a.e. I could not understand why $ h $ exists. I think $ h $ should exist because of the continuity of the linear functional. But I could not understand the relation. AI: You have proved that the map $f\mapsto \int f(x)g(x)\,\mathrm dx$ is a continuous linear functional on $L^{p}(\mathbb{R})$. Now use the isomorphism between the dual of $L^{p}(\mathbb{R})$ and $L^{q}(\mathbb{R})$ to find the function $h\in L^{q}(\mathbb{R})$.
H: If $f$ is differentiable for any value except $x=0$, and $e^{f(x)} = x$, show that $f’(x) = 1/x$ If $f$ is differentiable for any value except $x=0$, and $e^{f(x)} = x$, show that $f’(x) = 1/x$. AI: Knowing that $e^{f(x)}=x$, we can take the derivative of both sides with respect to $x$: $$\begin{align*} \dfrac{d}{dx}e^{f(x)}&=\dfrac{d}{dx}x\\f'(x)e^{f(x)}&=1\\f'(x)=\dfrac{1}{e^{f(x)}}&=\dfrac 1x \end{align*} $$ Note: there exists no function $f$ that satisfies $e^{f(x)}=x$ when $x$ is not positive. This is because exponentials are strictly positive functions; this was perhaps a mistake in the problem statement?
H: To Prove $ \bigcap_{i \in I} A_i \in \bigcap_{i \in I} P(A_i) $ $$ \bigcap_{i \in I} A_i \in \bigcap_{i \in I} P(A_i) $$ , $ I \neq \phi $ MY ATTEMPT I use proof by Contradiction. Assume $ \bigcap_{i \in I} A_i \notin \bigcap_{i \in I} P(A_i) $. Let $ x \in \bigcap_{i \in I} A_i $ i.e. $ \{ x \} \notin ( P(A_1) \land P(A_2) \land \dots \land P(A_n) )$ So $ \{x\} \not \subseteq A_i \forall i \in I$ So $ x \notin A_i \forall i \in I$ So $x \notin \bigcap_{i \in I} A_i $ Hence we arrive at a contradiction AI: When you start by letting $x\in\bigcap_{i\in I}A_i$ and supposing that $x\notin\bigcap_{i\in I}\wp(A_i)$, you’re already getting off on the wrong foot: that would be a reasonable start if you were trying to show that $\bigcap_{i\in I}A_i$ was a subset of $\bigcap_{i\in I}\wp(A_i)$, but that isn’t what you want to show. You need to show that $\bigcap_{i\in I}A_i$ is an element of $\bigcap_{i\in I}\wp(A_i)$. In symbols, you’re setting out to try to prove that $\bigcap_{i\in I}A_i\color{red}{\subseteq}\bigcap_{i\in I}\wp(A_i)$, but what you need to prove is that $\bigcap_{i\in I}A_i\color{red}{\in}\bigcap_{i\in I}\wp(A_i)$. Let’s back up for a minute and take a good look at the objects involved. In fact, let’s look at a very simple example. Suppose that $I=\{1,2,3\}$, so that you have sets $A_1,A_2$, and $A_3$. To be absolutely definite, let’s suppose that $A_1=\{1,2,4,5\}$, $A_2=\{2,3,4,5\}$, and $A_3=\{4,5,6\}$. Then $$\begin{align*} \bigcap_{i\in I}A_i&=A_1\cap A_2\cap A_3\\ &=\{1,2,4,5\}\cap\{2,3,4,5\}\cap\{4,5,6\}\\ &=\{4,5\}\;. \end{align*}$$ This is a subset of each of the sets $A_1,A_2$, and $A_3$, so like $A_1,A_2$, and $A_3$, it is a set of integers. Now what is $\bigcap_{i\in I}\wp(A_i)$? $$\begin{align*} \wp(A_1)=&\big\{\varnothing,\{1\},\{2\},\{4\},\{5\},\{1,2\},\{1,4\},\{1,5\},\{2,4\},\{2,5\},\{4,5\},\\ &\{1,2,4\},\{1,2,5\},\{1,4,5\},\{2,4,5\},\{1,2,4,5\}\big\}\;, \end{align*}$$ $$\begin{align*} \wp(A_2)=&\big\{\varnothing,\{2\},\{3\},\{4\},\{5\},\{2,3\},\{2,4\},\{2,5\},\{3,4\},\{3,5\},\{4,5\},\\ &\{2,3,4\},\{2,3,5\},\{2,4,5\},\{3,4,5\},\{2,3,4,5\}\big\}\;, \end{align*}$$ and $$\wp(A_3)=\big\{\varnothing,\{4\},\{5\},\{6\},\{4,5\},\{4,6\},\{5,6\},\{4,5,6\}\big\}\;,$$ and the intersection of these three sets is $$\big\{\varnothing,\{4\},\{5\},\{4,5\}\big\}\;:$$ $\varnothing,\{4\},\{5\}$, and $\{4,5\}$ are the only sets of integers that are elements of all three power sets. $\bigcap_{i\in I}A_i=\{4,5\}$ cannot possibly be a subset of $\bigcap_{i\in I}\wp(A_i)$: it’s the wrong kind of object. If it were a subset of $\bigcap_{i\in I}\wp(A_i)$, its elements would also be elements of $\bigcap_{i\in I}\wp(A_i)$. But the elements of $\{4,5\}$ are integers, while the elements of $\bigcap_{i\in I}\wp(A_i)$ are sets of integers. $\{4,5\}$ can, however, be an element of $\bigcap_{i\in I}\wp(A_i)$, and indeed we see that it is: $$\bigcap_{i\in I}\wp(A_i)=\big\{\varnothing,\{4\},\{5\},\color{red}{\{4,5\}}\big\}\;.$$ Now let’s go back and consider how to prove the result. You don’t need a proof by contradiction: you can show directly that $\bigcap_{i\in I}A_i\in\bigcap_{i\in I}\wp(A_i)$. For each $i\in I$, $\bigcap_{i\in I}A_i$ is a subset of $A_i$: $\bigcap_{i\in I}A_i\subseteq A_i$. By definition this means that $\bigcap_{i\in I}A_i\in\wp(A_i)$. Thus, $\bigcap_{i\in I}A_i\in\wp(A_i)$ for each $i\in I$, and that by definition means that $\bigcap_{i\in I}A_i$ is in the intersection of those power sets: $\bigcap_{i\in I}A_i\in\bigcap_{i\in I}\wp(A_i)$. And that’s what we wanted to prove.
Once you understand what’s going on here, you might try to prove the stronger result that $$\bigcap_{i\in I}\wp(A_i)=\wp\left(\bigcap_{i\in I}A_i\right)\;.$$
H: Flux through the positive part of a sphere centered at $(0,0,1)$ Let $$B=\{(x,y,z):x^2+y^2+(z-1)^2<4, \ z\geq 0\}$$ and consider the vector field $$F:(x,y,z)\mapsto(x^3,y^3,z)$$ I want to compute the flux of $F$ through $\partial B.$ We have $$\text{Div}F=3x^2+3y^2+1$$ and so by the divergence theorem we could compute the flux as $$\int_B 3x^2+3y^2+1 \ dx dy dz$$ but this triple integral does not look very friendly to me using spherical coordinates, because the sphere is centered at $(0,0,1).$ Another approach is by computing $$\partial B =\{x^2+y^2\leq 3, \ z=0\}\cup\{x^2+y^2+(z-1)^2=4, \ z>0\}$$ I can parametrize $\{x^2+y^2+(z-1)^2=4, \ z>0\}$ as $$x=2\sin \phi \cos \psi$$ $$y=2 \sin \phi \sin \psi$$ $$z= 2 \cos \phi +1$$ $$\phi\in[0,\pi], \psi \in [0,2 \pi)$$ but it does not seem to make calculations easier. AI: You'll want to use a change of variables, then use spherical coordinates. Starting with $$\int_B\left(3x^2+3y^2+1\right)dx\ dy\ dz$$ we let $u=x$, $v=y$, and $w=z-1$. Let's call our new volume of integration $C$; with the new variables, $C$ is the part of the ball of radius 2 centred at the origin that lies above $w=-1$: $$C=\{(u, v, w) : u^2+v^2+w^2<4, w\geq-1\}$$ Noting that the Jacobian determinant for this is just 1, we get that the flux is equivalently stated as $$\int_C\left(3u^2+3v^2+1\right)du\ dv\ dw$$ From here, you can use either spherical or cylindrical coordinates to finish the computation, and it should be a lot more pleasant this time.
H: What is the probability that the second toss is heads? One bag contains two coins. One is fair, the other is biased with Heads probability = $0.6$. One coin is randomly picked and it is tossed. It lands heads up. What is the probability that the same coin will land heads up if tossed again? Now, the probability that for a coin randomly picked up the toss will result in a head is given by the formula for total probability. Writing $B_1, B_2$ for the events that the fair coin and the biased coin is selected, respectively, and $E_1$ for the event that the coin will land heads up in the first toss, we obtain $$ P(E_1) = P(E_1|B_1)P(B_1) + P(E_1|B_2)P(B_2) = 0.5\cdot0.5 + 0.6\cdot0.5 = 11/20. $$ However, it is not clear to me how to calculate now the probability $P(E_2)$. I would like to use again the total probability formula for $E_2$, i.e., $$ P(E_2) = P(E_2|E_1)P(E_1) + P(E_2|E_1^c)P(E_1^c) = P(E_2|E_1)\cdot 11/20 + P(E_2|E_1^c)\cdot 9/20 $$ But I don't see how to calculate $P(E_2|E_1), \ P(E_2|E_1^c)$. So in the end I suspect that my method doesn't work and a better solution should be given. AI: Next you need to use Bayes' theorem to figure out the posterior probability $P(B_1|E_1)$ that the coin picked is fair $$P(B_1|E_1)=\frac {P(E_1|B_1)P(B_1)}{P(E_1)}=\frac {0.5 \cdot 0.5}{11/20} = 5/11$$ To determine the probability of $E_2$ given $E_1$ $$P(E_2|E_1) = P(E_2 B_1|E_1) + P(E_2 B_1^c|E_1) = P(E_2|B_1 E_1)P(B_1|E_1) + P(E_2|B_1^c E_1)P(B_1^c|E_1) $$ Using the fact that flips of the same coin are independent, $P(E_2|B_1 E_1)=P(E_2|B_1)=0.5$ and $P(E_2|B_1^c E_1)=P(E_2|B_1^c)=0.6$, and substituting these probabilities in gives $$P(E_2|E_1) = P(E_2|B_1)P(B_1|E_1) + P(E_2|B_1^c)P(B_1^c|E_1) = .5 \cdot 5/11 + .6 \cdot 6/11 = \frac{61}{110} \approx 0.5545$$
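To sanity-check this value ($0.5\cdot 5/11+0.6\cdot 6/11=61/110\approx 0.5545$), here is a small Monte Carlo sketch in Python; the number of trials is arbitrary:

    import random

    exact = 0.5 * (5 / 11) + 0.6 * (6 / 11)

    trials, first_heads, both_heads = 10**6, 0, 0
    for _ in range(trials):
        p = random.choice([0.5, 0.6])        # pick the fair or the biased coin
        toss1 = random.random() < p
        toss2 = random.random() < p
        if toss1:
            first_heads += 1
            if toss2:
                both_heads += 1

    print("exact P(E2|E1)     =", exact)
    print("simulated P(E2|E1) =", both_heads / first_heads)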
H: Get a random circle C inside a bigger circle, and C encompasses a specific point So basically I want to draw a random circle of radius R inside a bigger one, but the drawn circle should encompass a specific point. If my math is right, it comes down to solving the following and getting the valid ranges of x and y: x^2 + y^2 - xF - yG < H, where F, G, H are known. But there may be an easier approach. Any kind of help, even partial, is appreciated. More info on how I got to the above equation, for those interested: Given: Ca = bigger circle (Center=C0, Radius=R0), Cb = smaller circle (C1, R1) inside Ca, P = point to be contained by Cb; Then: d(C1, P) < R1 and d(C1, C0) < R0 - R1, where d(A, B) is the distance between points A and B. I've squared both sides of each of the inequalities (I'm allowed to because everything's positive) and then summed them together (left with left and right with right), and after some - hopefully correct - steps I got to that inequality. Edit: Alright, I eventually solved the inequalities via the wolfram alpha website. It turned out solving for the range of x then sampling a random x from it works, but further solving for ranges of y from that point yields erroneous results. Either summing up these kinds of inequalities is not allowed, or another way of solving both x and y at once is needed. It's way over my math skills so I went with solution #1, which actually isn't that bad. I'm using a fallback method if the number of iterations is bigger than 50, just to keep it finite. AI: If the bigger circle is centered at $A$ and has radius $L>R$ and the small circle must encompass point $B$, then the random center $M$ can be any point in the intersection of two disks \begin{equation} S = D(A, L-R)\cap D(B, R) \end{equation} From there I see at least two methods to generate a random point in $S$ with respect to the Lebesgue measure: You can compute the smallest rectangle $T$ containing $S$, with sides parallel and orthogonal to the line $AB$. Using a uniform random generator, shoot random points in $T$ until one of them is in $S$. This is the required random point. With some work and an equation solver, you can use inverse transform sampling. Indeed the pdf of the random coordinates needed involve functions such as $\sqrt{c-x^2}$ which antiderivatives are well known. An equation solver could find inverse points of the cdf of such random variables.
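A minimal Python sketch of the first (rejection-sampling) method described above; the names $A$, $B$, $L$, $R$ follow the answer's notation, and the numeric values at the bottom are made up purely for illustration:

    import math, random

    def random_center(A, B, L, R, max_iter=10_000):
        """Uniform random center M with dist(M, A) < L - R and dist(M, B) < R."""
        bx, by = B
        for _ in range(max_iter):
            # sample in the bounding box of the disk D(B, R); all of S lies inside it
            x = random.uniform(bx - R, bx + R)
            y = random.uniform(by - R, by + R)
            if math.hypot(x - A[0], y - A[1]) < L - R and math.hypot(x - bx, y - by) < R:
                return (x, y)
        return None  # the admissible region S is empty (or extremely thin)

    A, B, L, R = (0.0, 0.0), (3.0, 0.0), 10.0, 2.0   # example values only
    print(random_center(A, B, L, R))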
H: How to integrate $z^{\prime\prime}+ 2 \eta z^\prime=0$? From the given ODE, $$z^{\prime\prime} + 2 \eta z^\prime = 0 $$ The rest of the procedure is as below $$\int \frac{z^{\prime\prime}}{z^\prime} d \eta = \int -2 \eta \, d \eta$$ $$ \ln z^\prime = - \eta ^2 + c_0 $$ $$ z^\prime = \frac{d z}{d \eta} = C_1 \exp (- \eta ^2 )$$ $$ z(\eta) = c_1 \int _0 ^{\eta} {\exp (-x^2) } \, {d x } + c_2$$ I'd like to know how the fraction of derivatives ($z^{\prime\prime}/z^\prime$) is integrated as $\ln z^\prime$. There are similar equations around, but none exactly the same as this one. AI: You have to take into account that $z'=\frac{dz}{d\eta}$ and $\frac{d\log z'}{d\eta}=\frac{1}{z'}\frac{dz'}{d\eta}=\frac{z''}{z'}$. The rest is just an application of the fundamental theorem of calculus.
H: How to prove the following, for $n>4$? Let $G$ be an undirected graph with $n>4$ nodes. Prove that there is a cycle in $G$ or in $G'$. Note: $G'$ is the graph which includes all nodes in $G$ and includes edges iff those edges do not exist in $G$. AI: If $G$ does not contain cycles then it is a forest, hence $|E|\le |V|-1$. In that case $G'$ has at least $\binom{n}{2}-(n-1)=\frac{(n-1)(n-2)}{2}$ edges, and for $n>4$ this is at least $n=|V|$, so $G'$ is not a forest and contains a cycle. Here $V$ is the set of vertices and $E$ is the set of edges in $G$.
H: transformation of Bern random variable I've encountered this question: Let X ~ Bern(1/2) and let a and b be constants with a < b. Find a simple transformation of X that yields an r.v. that equals a with probability 1 - p and equals b with probability p. I have been working on this for some time without an answer; is it possible to construct a transformation of X such that a,b,p are all arbitrary? If p is 1/2, then the problem is quite simple, but p is also arbitrary, how can we make this p generic instead of fixing it to 1/2? Any help is appreciated. Thanks. AI: Sketch: The idea is to use extra randomness and specify the conditional distribution of a new random variable $Y$ (taking the values $a$ and $b$) given $X$, so that $Y$ has the desired law. If $q_1=P(Y=a|X=0)$ and $q_2=P(Y=b|X=1)$, then $P(Y=b)=\frac12(1-q_1)+\frac12 q_2$, so every choice of $q_1,q_2$ satisfying $q_2-q_1=2p-1$ gives a conditional distribution producing the desired law for $Y$.
H: Norm of Position Operator in $L^2[0,1]$ I was wondering what is the norm of the position operator $Xf(x)=xf(x)$ in $L^2[0,1]$. I have two different results. The first one is the simplest and most reasonable one: $$||X|| \overset{||f(x)||=1}{=} \sup||xf(x)||=\sup||x||=1, $$ since $x\in[0,1]$. The second method is the usual one I have always applied for $L^2$ operators: $$||Xf(x)||^2=\left(\int_0^1xf(x)\text{d}x\right)^2\le\left|\int_0^1x^2\text{d}x\right| \left|\int_0^1f^2(x)\text{d}x\right|=\frac13||f(x)||^2 \qquad \implies \qquad ||X||=\frac{1}{\sqrt{3}}.$$ The two results are different, and I am not able to find the mistake in the second method. Can you help me? Thank you in advance :) AI: The "mistake" is in both methods. In the first one, you are mixing norms. The result is correct, though. In the second, you have $\|Xf\|_2^2=\int_0^1 x^2 f^2(x)$, and you cannot apply Cauchy-Schwarz. The usual calculation is $$ \|Xf\|=\left(\int_0^1 x^2 f^2(x)\,dx\right)^{1/2}\leq\left(\int_0^1 f^2(x)\,dx\right)^{1/2}=\|f\|, $$ so $\|X\|\leq 1$. Further, for each $\delta>0$ you can find $f$ with $\|Xf\|\geq(1-\delta)\|f\|$, proving that $\|X\|=1$.
H: Proving that the inverse in subgroup $H \leq G$ is the same as the inverse in $G$ : does it follow from uniqueness? Let $H$ be a subgroup of $G$. I am trying to show that the inverse element of some $h \in H$ is the same as its inverse in $G$. I know how to prove it without uniqueness, but I am trying to understand why this proof method would fail, assuming that it would. I am taking as proved that the identity in $H$ is the same as the identity in $G$, and am just calling it $e$. Let $h \in H$. Then there exists an inverse $x \in H$ such that $hx = xh = e$. Furthermore, $H \subset G$, so there exists an inverse $y \in G$ such that $hy = yh = e$. By uniqueness of the inverse element, $x = y$. Is the reason that uniqueness fails because the first equation $hx = e$ holds only for elements in $H$, and the second equation, $hy$, holds for elements in $G \setminus H$? If $H$ is the improper subgroup, this should work, but that would really boil down to the standard proof of uniqueness. AI: Take $h \in H$. Let $x$ be the inverse of $h$ in $H$. Let $y$ be the inverse of $h$ in $G$. We will prove that $x = y$ by contradiction. Assume that $x \neq y$. By definition of the inverse, $x \in H$. Since $H \subseteq G$, then we also have $x \in G$. Hence, $x$ is an element of $G$ that satisfies $hx = xh = e$, making it also an inverse of $h$ in $G$. Since $x \neq y$, then we have two distinct inverses of $h$ in $G$. This contradicts with the uniqueness property of the inverse in a group. Hence, it must be the case that $x = y$. That is, the inverse of $h$ in $H$ is just equal to the inverse of $h$ in $G$. In other words, the uniqueness argument in your proof is just fine.
H: Knight on a $3\times 4$ board: Hamiltonian graphs A chess knight sits on a $3\times 4$ board. Is it possible for the knight to jump into the $12$ squares without jumping twice in any of them and ending and starting in the same box? What if it starts and ends in the different boxes? I have drawn the graph that represents this problem and by looking at it I know that the answer to the first question is that it is impossible, but the second one is possible. However I can't find a mathematical reasoning to prove this. I know that my problem is equivalent to finding a hamiltonian cycle in the first case and a hamiltonian path in the second, but I don't know how to do this in any other way that trying to draw different paths. Could someone please help me with the mathematical reasoning? AI: Let's number the squares on the board in the conventional way: a4 b4 c4 a3 b3 c3 a2 b2 c2 a1 b1 c1 If we look at squares a1, b4, and c1, there are only two possible moves from each of them. If these squares are in the middle of a tour - and that's always the case for a closed tour - then both of those moves have to be part of the tour: one entering the square and one leaving. However, putting those two moves together gives a cycle of length $6$: a1 - b3 - c1 - a2 - b4 - c2 - a1. Instead of a tour that visits all $12$ squares, we end up with two subtours: this cycle, and its mirror image a4 - b2 - c4 - a3 - b1 - c3 - a4. So a closed tour is impossible. For an open tour, we can just pick a move, such as a3 - c2, that goes between the two cycles of length $6$, and use it to stitch them together into a path. Just finding a path that works should be enough for a complete solution to this part.
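For completeness, here is a small brute-force search (a sketch, not part of the original argument) that confirms both conclusions for the $3\times 4$ board:

    from itertools import product

    cols, rows = 3, 4
    squares = list(product(range(cols), range(rows)))            # (file, rank) pairs
    moves = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]
    adj = {s: [(s[0] + dx, s[1] + dy) for dx, dy in moves
               if (s[0] + dx, s[1] + dy) in squares] for s in squares}

    def extend(path, closed):
        # try to extend a partial knight path to a full tour (cycle if closed=True)
        if len(path) == len(squares):
            return (path[0] in adj[path[-1]]) if closed else True
        return any(extend(path + [n], closed)
                   for n in adj[path[-1]] if n not in path)

    print("closed tour exists:", any(extend([s], True) for s in squares))   # False
    print("open tour exists:  ", any(extend([s], False) for s in squares))  # True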
H: How to show $(a_1a_2\ldots a_n)^{\frac{1}{n}}\leq \frac{\sum_{i=1}^{n}a_i}{n}$ How to show $(a_1a_2\ldots a_n)^{\frac{1}{n}}\leq \frac{\sum_{i=1}^{n}a_i}{n}$ with $a_i$ positive. Well, I tried by induction: with $n=2$ then $\sqrt{ab}\leq \frac{a+b}{2}$ is equivalent (squaring both sides) to $4ab\leq a^2 +2ab + b^2$ and this is equivalent to $0\leq(a-b)^2$ and this is true. I suppose it is true for some $n$. But with $n+1$, I don't know how to proceed. Can you please help me with a hint or another way? Thank you so much. AI: A very simple proof which doesn't require induction (except implicitly); compare the logarithms. You only need to show that $$\frac{\sum_{i=1}^n\ln a_i}n\le\ln\frac{\bigl(\sum_{i=1}^na_i\bigr)}n, $$ and this results from $\ln$ being a concave function (it is Jensen's inequality).
H: In which ratio does the point $P$ divide the segment $\overline{AN}$? In an arbitrary triangle $\triangle ABC$, let $M\in\overline{AC}$ s. t. $|AM|:|MC|=2:1$ and let $N\in\overline{BC}$ s. t. $|BN|:|NC|=1:2$. Let $P$ be the intersection point of the segments $\overline{AN}$ and $\overline{BM}$. In which ratio does the point $P$ divide the segment $\overline{AN}$? My attempt: I thought I could apply the intercept theorem to find the ratio in which the point $P$ divides the segment $\overline{BM}$ and then express $\overrightarrow{AP}$ as a linear combination of $\overrightarrow{BM}$ and some vector in $\triangle ABC$ linearly independent of $\overrightarrow{BM}$. Let $S\in\overline{NC}$ s. t. $\overline{AN}\parallel\overline{MS}$. From the given ratios, it follows: $|AM|=2\lambda,\ |MC|=\lambda,\ |BN|=\mu, |NC|=2\mu, \ \lambda,\mu\in\Bbb Q$. By the intercept theorem, $$\begin{aligned}&|SC|:|NS|=|MC|:|AM|=1:2\\\implies&|SC|=\nu,\ |NS|=2\nu,\ \nu\in\Bbb Q\\\implies&|NC|=|NS|+|SC|=3\nu=2\mu\implies\mu=\frac32\nu\end{aligned}$$ Then $$\begin{aligned}&|BP|:|PM|=|BN|:|NS|=\frac{\mu}{2\nu}=\frac{\frac32\nu}{2\nu}=\frac34\\\implies&\overrightarrow{PM}=\frac47\overrightarrow{BP}=\frac47\left(\frac13\overrightarrow{AC}-\overrightarrow{BC}\right)\end{aligned}$$ but it doesn't seem I've accomplished anything by finding $\frac{|BP|}{|PM|}.$ It would be perfect if I could find $\frac{|AP|}{|PN|}$ the same way, but there isn't enough information to do that and compare that result with $\overrightarrow{AP}=\alpha\left(\overrightarrow{AC}-\overrightarrow{NC}\right),\ \alpha\in\Bbb Q$. Another option was to consider a midpoint $T$ of the segment $\overline{NC}$, so $\overrightarrow{BT}=\frac23\overrightarrow{BC}$. Then $$\overrightarrow{AP}=\alpha\left(\overrightarrow{AB}+\overrightarrow{AT}\right)$$ May I ask for advice on solving this task? Thank you in advance! AI: I will use vectors too, so a picture is redundant. Let $\overrightarrow{CA}=a,\,\overrightarrow{CB}=b$ and $C$ be the origin. Then $M=\frac{1}{3}a,\,N=\frac{2}{3}b$, $$P\in AN:\quad P=uA+(1-u)N=ua+\frac{2}{3}(1-u)b,$$ $$P\in BM:\quad P=vB+(1-v)M=vb+\frac{1}{3}(1-v)a,$$ $$\hbox{as }P=ua+\frac{2}{3}(1-u)b=vb+\frac{1}{3}(1-v)a$$ and $a,\,b$ forms a basis, then $$\begin{cases} u=\frac{1}{3}(1-v)\\ v=\frac{2}{3}(1-u) \end{cases}$$ $$\begin{cases} u=\frac{1}{7}\\ v=\frac{4}{7} \end{cases}$$ $$\hbox{So }\frac{AP}{PN}=\frac{1-u}{u}=6.$$
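The little $2\times 2$ linear system for $u$ and $v$ can also be checked symbolically (an illustrative sketch using SymPy):

    from sympy import symbols, Rational, solve

    u, v = symbols('u v')
    sol = solve([u - Rational(1, 3) * (1 - v),
                 v - Rational(2, 3) * (1 - u)], [u, v])
    print(sol)                          # {u: 1/7, v: 4/7}
    print((1 - sol[u]) / sol[u])        # AP/PN = 6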
H: Subspace $\operatorname{null}(T^2+bT+c)^j$ has even dimension when $b^2<4c$ Question: Let $V$ be a finite-dimensional real vector space and $T$ be a linear operator on $V$. Let $b,c\in \mathbb{R}$ such that $b^2<4c$. Prove that for every $j$, $\operatorname{null}(T^2+bT+c)^j$ has even dimension. What I have done is, If $W=\operatorname{null}(T^2+bT+c)^j$ then $T$ restricted on $W$ has the annihilating polynomial $(x^2+bx+c)^j$. Now $b^2-4c<0$ and $V$ is real vector space so $x^2+bx+c$ can not be factored. So minimal polynomial of $T|_W$ has minimal polynomial of the form $(x^2+bx+c)^k$, where $k\leq j$. After that I have no idea what to do. Any help is welcome. AI: Every linear operator acting on an odd dimensional vector space has an eigenvalue, so if $W$ is odd dimensional, $T|_W$ has an eigenvalue. But this eigenvalue must necessarily be a root of the minimal polynomial. This is a contradiction, since $(x^2 + bx + c)^k$ is not divisible by a linear factor.
H: The interval $(a,b) \subseteq \mathbb{R}^{2}$ is bounded - metric spaces I am trying to show that the interval $(a,b) \subseteq \mathbb{R}^{2}$ is a bounded set. By $(a,b) \subseteq \mathbb{R}^{2}$ I mean $(a,b) \times \{0\} = \{(x,y) \in \mathbb{R}^{2}: a<x<b, y=0\}$ Bounded: If I can show that $(a,b) \times \{0\}$ is contained in some open/closed ball then I am done. Would the following work? $B((\frac{a+b}{2}, 0), 5(b-a))$ i.e. a ball centred at $(\frac{a+b}{2}, 0)$ with a radius 5 times the length of the interval. I'm getting a bit confused because we are in $\mathbb{R}^{2}$. Also, am I correct in thinking that the interval $(a,b)$ is not open as a subset of $\mathbb{R}^{2}$ but it is open as a subset of $\mathbb{R}$? At least that is what I seem to have proven. AI: Yes, your ball will work. You could also have taken a ball centered at the origin, e.g. $B((0,0),\max(|a|,|b|))$. You are correct that $(a,b)$ is not open in $\mathbb{R}^2$, but is open in $\mathbb{R}$.
H: Two definitions of singular cohomology I am reading Hatcher's Algebraic Topology and Milnor's Characteristic Classes. In these two books, the definition of singular cohomology is little bit different, as follows: Fix a topological space $X$ and a (commutative) ring $R$ (with $1$). First, in Hatcher, we form a chain complex $\dots \to C_n(X) \xrightarrow{\partial} C_{n-1}(X)\to \cdots$ where $C_n(X)$ is the free abelian group with one generator for each singular simplex $\sigma\colon\Delta^n\to X$. Then we take $\text{Hom}(-,R)$ of this complex to obtain a cochain complex $\cdots \to \text{Hom}(C_{n-1}(X),R) \to \text{Hom}(C_n(X),R)\to \cdots$ . Hatcher defines the $n$-th cohomology group $H^n(X;R)$ from this complex. On the other hand, in Milnor we form a chain complex $\dots \to C_n(X;R) \xrightarrow{\partial} C_{n-1}(X;R)\to \cdots$ where $C_n(X;R)$ is the free $R$-module with one generator for each singular simplex $\sigma\colon\Delta^n\to X$. Then we take $\text{Hom}_R(-,R)$ of this complex to obtain a cochain complex $\cdots \to \text{Hom}_R(C_{n-1}(X;R),R) \to \text{Hom}_R(C_n(X;R),R)\to \cdots$ . Milnor defines the $n$-th cohomology group $H^n(X;R)$ from this complex. Is there no difference between these two definitions? AI: I will elaborate a bit on Angina Seng's comment that there is a natural isomorphism $F\colon\text{Hom}_\mathbb{Z}(C_n(-), R) \to \text{Hom}_R(C_n(-;R), R)$ as functors to the category of $R$-modules. In essence this is due to the fact that the functors $\text{Hom}_\mathbb{Z}(-,R)$ and $\text{Hom}_R(-\otimes R, R)$ on the category of abelian groups are isomorphic. First recall that $C_n(X;R) = C_n(X)\otimes_\mathbb{Z} R$. Given a group homomorphism $\varphi\colon C_n(X)\to R$, define the $R$-linear homomorphism $F(\varphi)\colon C_n(X;R)\to R$ by $F(\varphi)(g\otimes r) = \varphi(g)\cdot r$. As an exercise you should verify for yourself that $F$ is a natural transformation of functors $\mathbf{Top}\to R\mathbf{-mod}$ (along the way you will need to verify that $F(\varphi)$ is well-defined wrt to the tensor product relation), and that for any given space $F$ is injective and surjective. Hints: for injectivity, once you establish that $F$ is an $R$-linear homomorphism for each space you only need to show $F(\varphi) = 0$ implies $\varphi = 0$, and for surjectivity if you are given an $R$-linear $\varphi'\colon C_n(X;R) \to R$, consider $\varphi'$ restricted to the subgroup $\{g\otimes 1\}$.
H: Behmann's proof of Infinitude of primes. I am having difficulty in understanding Behmann's proof of the infinitude of primes. Can someone please explain the last part 'The proof is concluded by noticing....' which is on page $178$? Any help would be appreciated. Thanks in advance. AI: Suppose there are only $m$ primes. Each factor can be expanded as $$\frac{p_i}{p_i-1} = \frac{1}{1-\frac{1}{p_i}} = 1 + \frac{1}{p_i} + \frac{1}{p_i^2} +\cdots,$$ by the geometric series. So if the product of these expressions over the $m$ primes is multiplied out, you get the reciprocal of every positive integer, since every integer factors into the primes $p_1,\dots,p_m$. So that product must be equal to the harmonic series. But we've just shown that the harmonic series is strictly larger. So there must be more than $m$ primes.
H: Find Recursive Definition from given formula I've read some ways about how to derive a formula from a recursive definition, but what about the other direction? Starting from the formula $$ a_n = 2^n + 5^n n , \quad n \in \mathbb{N} $$ how do you figure out the recursive definition it satisfies? Any tips for me? My calculations so far: $$ (r-2)(r-5) = r^2 - 7r + 10 $$ Am I doing this right? AI: Because of the coefficient $n$ here, you'll need a double root at $r=5$. Thus you should look at the characteristic polynomial $$(r-2)(r-5)^2=r^3-12r^2+45r-50$$ It follows that the recursion we want is $$a_n=12a_{n-1}-45a_{n-2}+50a_{n-3}$$ with initial conditions $$a_0=1\quad a_1=7\quad a_2=54$$
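A short numerical check (an illustrative sketch) that the closed form really satisfies this recursion and the stated initial conditions:

    # verify a_n = 2^n + n*5^n against a_n = 12 a_{n-1} - 45 a_{n-2} + 50 a_{n-3}
    def a(n):
        return 2**n + n * 5**n

    assert (a(0), a(1), a(2)) == (1, 7, 54)
    for n in range(3, 15):
        assert a(n) == 12 * a(n - 1) - 45 * a(n - 2) + 50 * a(n - 3)
    print("recursion and initial conditions verified for n = 0..14")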
H: Show that length of sine is equal to length of cosine on the same interval. Let $$f(x)=\sin(x)\\\ g(x)=\cos(x)$$ Let $L_1$ be $$\int_0^{2\pi}\sqrt{1+\cos^2(x)}\space dx$$ And $L_2$ $$\int_0^{2\pi}\sqrt{1+\sin^2(x)}\space dx$$ I.e. $L$ is the arc length of sine/cosine over its period interval. A numerical approach shows that these two integrals are equal. It seems reasonable, as both have the same wavelength and frequency. How can one show the above relationship holds? ($L_1=L_2$) AI: Basically, we want to exploit the fact that $\sin$ and $\cos$ "look the same". The theorem that lets us do this is integration by substitution. Let's define, for convenience, the functions $\ell_1(x) = \sqrt{1 + \cos^2 x}$, and $\ell_2(x) = \sqrt{1 + \sin^2 x}$, so \begin{align*} L_1 &= \int_0^{2\pi} \ell_1(x) \,\mathrm dx \\ L_2 &= \int_0^{2\pi} \ell_2(x) \,\mathrm dx \end{align*} Now note that \begin{align*} L_1 = \int_0^{\pi/2} \ell_1(x) \,\mathrm dx + \int_{\pi/2}^{2\pi} \ell_1 (x) \,\mathrm dx \\ L_2 = \int_0^{\pi/2} \ell_2(x) \,\mathrm dx + \int_{\pi/2}^{2\pi} \ell_2 (x) \,\mathrm dx \end{align*} However, using the substitution $u = \pi/2 - x$, as given in the comments, \begin{align*} \int_0^{\pi/2} \ell_1(x) \,\mathrm dx &= \int_{\pi/2}^0 (-\ell_2(u)) \,\mathrm du \\ &= \int_0^{\pi/2} \ell_2(u) \,\mathrm du \end{align*} And from the substitution $u = \tfrac 52 \pi - x$, \begin{align*} \int_{\pi/2}^{2\pi} \ell_1 (x) \,\mathrm dx &= \int_{2\pi}^{\pi/2} (-\ell_2 (u)) \,\mathrm du \\ &= \int_{\pi/2}^{2\pi} \ell_2 (u) \,\mathrm du \end{align*} It is an exercise in trigonometric identities to establish that $\ell_1(\pi/2 - u) = \ell_2(u)$, and $\ell_1(\tfrac 52 \pi - u) = \ell_2(u)$. From these equalities, it follows that $L_1 = L_2$. In fact, you can use this approach to show that for any function $f$, \begin{equation*} \int_0^{2\pi} f(\sin x) \,\mathrm dx = \int_0^{2\pi} f(\cos x) \,\mathrm dx \end{equation*} This also holds on smaller intervals like $[0, \tfrac 12 \pi]$ or $[0, \pi]$.
H: Jensen inequality in measure theory : why doesn't the convex function need to be nonnegative? This section of the Wikipedia article on Jensen's inequality states that if $g$ is an integrable function on a measure space with mass $1$ and $\varphi$ is a convex function, then $$\varphi \left( \int g \right) \leq \int \varphi \circ g $$ What troubles me is that I can see no mention of a hypothesis ensuring that $\varphi \circ g$ actually has an integral (like that it is integrable or else $\varphi$ is nonnegative). I assumed it was simply missing, but then I noticed that not only does the same article in French also omit these hypotheses but furthermore, it even bothers to mention that the integral on the right may be infinite, suggesting in my opinion that $\varphi$ should be assumed to be nonnegative. AI: $\varphi \circ g$ always has integrable negative part so that the integral on the right hand side is well-defined via $$\int \varphi \circ g d \mu = \int (\varphi \circ g)^+ d\mu - \int(\varphi \circ g)^- d\mu$$ To prove this, note that convex functions possess subderivatives so that there are numbers $a,b$ such that $$ax + b \leq \varphi(x)$$ In particular, $(\varphi \circ g) \geq a g + b$ so that $$0 \leq (\varphi \circ g)^{-} \leq |a g + b| \in L^1$$
H: Finding a function to fit a curve. I would like to fit an equation to the curve shown below. A selection of the data points (x,y) are given too. I have tried to fit equations $y = a \, e^{(-b \, x)}$ and $y = a x^{-b}$ using software minitab 19 without getting a good fit. So if an equation can be found or suggested, that would be good. {{1., 3.97364}, {3., 2.65259}, {5., 2.12207}, {7., 1.81891}, {9., 1.61681}, {11., 1.46983}, {21., 1.07533}, {31., 0.888459}, {41., 0.77407}, {51., 0.694874}, {61., 0.63588}, {71., 0.589742}, {81., 0.552379}, {91., 0.521322}, {101., 0.494974}, {201., 0.349951}, {301., 0.278213}, {401., 0.226841}, {501., 0.18593}, {601., 0.152562}, {701., 0.12521}, {801., 0.102766}, {901., 0.0843466}, {1001., 0.0692285}, {2001., 0.00960411}, {3001., 0.00133238}, {4001., 0.000184842}, {5001., 0.0000256433}, {6001., 3.55751*10^-6}, {7001., 4.93536*10^-7}, {8001., 6.84686*10^-8}, {9001., 9.49869*10^-9}} AI: Using Matlab's curve fitting tool, I got the following fit: with the parameters as below: General model Exp2: f(x) = a*exp(b*x) + c*exp(d*x) Coefficients (with 95% confidence bounds): a = 3.513 (3.237, 3.788) b = -0.1935 (-0.2215, -0.1655) c = 0.9278 (0.7994, 1.056) d = -0.004471 (-0.006014, -0.002928) and the goodness of fit as below: Goodness of fit: SSE: 0.2513 R-square: 0.9903 Adjusted R-square: 0.9893 RMSE: 0.09473 which seems reasonable.
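Roughly the same two-exponential fit can be reproduced in Python with SciPy; this is only a sketch, using the data points up to $x=1001$ and a starting guess taken from the coefficients reported above:

    import numpy as np
    from scipy.optimize import curve_fit

    x = np.array([1, 3, 5, 7, 9, 11, 21, 31, 41, 51, 61, 71, 81, 91, 101,
                  201, 301, 401, 501, 601, 701, 801, 901, 1001], dtype=float)
    y = np.array([3.97364, 2.65259, 2.12207, 1.81891, 1.61681, 1.46983, 1.07533,
                  0.888459, 0.77407, 0.694874, 0.63588, 0.589742, 0.552379,
                  0.521322, 0.494974, 0.349951, 0.278213, 0.226841, 0.18593,
                  0.152562, 0.12521, 0.102766, 0.0843466, 0.0692285])

    def model(x, a, b, c, d):
        return a * np.exp(b * x) + c * np.exp(d * x)

    popt, _ = curve_fit(model, x, y, p0=[3.5, -0.2, 0.9, -0.0045])
    print(popt)   # should land near the a, b, c, d quoted above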
H: Relation of Adjacency matrices of 2 graphs Let $G$ be the subgraph of $\bar G$ such that there is an edge between $v_i \in V$ and $v_j \in V$ in $G$ if and only if there is at least one edge between $v_i$ and $v_j$ in $\bar G$. Any tips or hints on finding $A$ from $\bar A$? AI: $A_{ij}$ is equal to 1 if $i\neq j$ and $\bar A_{ij}$ is not equal to zero. This means that there is an edge between $v_i$ and $v_j$ on $G$ if there is at least one edge between them on $\bar G$, as stated in the problem. $A_{ij}$ is zero if $i=j$ or $\bar A_{ij}=0$. $$A_{ij}=\begin{cases}1 & i\neq j\textrm{ and }\bar A_{ij}>0\\0 & \textrm{otherwise}\end{cases}$$
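As an illustration, the formula translates directly into NumPy (the matrix below is made up):

    import numpy as np

    A_bar = np.array([[0, 2, 0],
                      [2, 1, 3],
                      [0, 3, 0]])     # hypothetical multigraph adjacency matrix
    # A_ij = 1 exactly when i != j and there is at least one edge in the multigraph
    A = ((A_bar > 0) & ~np.eye(len(A_bar), dtype=bool)).astype(int)
    print(A)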
H: If $\nabla\cdot u=0$ and $w=\operatorname{curl}u$, then $\int w=0$ Let $\Lambda\subseteq\mathbb R^2$ be open, $u\in C^1(\Lambda,\mathbb R^2)$ with $\nabla\cdot u=0$ and $$w:=\frac{\partial u_2}{\partial x_1}-\frac{\partial u_1}{\partial x_2}.$$ How can we show that $$\int_\Lambda w=0?\tag1$$ Since $\nabla\cdot u=0$, $$\int_\Lambda(u\cdot\nabla)\varphi=-\int_\Lambda(\nabla\cdot u)\varphi=0\;\;\;\text{for all }\varphi\in C_c^\infty(\Lambda)\tag2.$$ On the other hand, $$\int_\Lambda w\varphi=\int u_1\frac{\partial\varphi}{\partial x_2}-u_2\frac{\partial\varphi}{\partial x_1}\;\;\;\text{for all }\varphi\in C_c^\infty(\Lambda)\tag3.$$ I guess I've made a mistake at any point above, since the desired conclusion seems to require that $\int w\varphi=\int u_1\frac{\partial\varphi}{\partial x_1}-u_2\frac{\partial\varphi}{\partial x_2}$ instead (since this is equal to $\int_\Lambda(u\cdot\nabla)\varphi$). AI: This is false. Take $u=(-x_2,x_1)$. Then $w=2$ everywhere. EDIT: With periodic boundary conditions on the square $S$, by Green's Theorem $\int_S w = \int_{\partial S} u\cdot dr = 0$.
H: How many ways can we make a 3-senator community where no 2 of the members are from the same state? Part (a): There are $2$ senators from each of the $50$ states. We wish to make a $3$-senator committee in which no two of the members are from the same state. In how many ways can we do it? Part (b): Suppose for this problem (though it may not be accurate in real life) that the Senate has $47$ Republicans and $53$ Democrats. In how many ways can we form a $3$-senator committee in which neither party holds all $3$ seats? For part (a), I got $100*98*96$ because for the first spot, there are 100 choices and person in the second spot cannot be from the same state so there are 98 choices and for the third spot there are 96. I'm not sure if this is right or not and I do not know how to do part (b). AI: In (a) you’ve counted each committee $6$ times, once for each of the $6$ possible orders in which you could have picked it: ABC, ACB, BAC, BCA, CAB, and CBA. As a check we can calculate it a bit differently: there are $\binom{50}3$ ways to pick the $3$ states from which the chosen senators come, and for each state we have a choice of $2$ senators, so there are $\binom{50}32^3$ ways to choose the committee. And indeed $$\binom{50}32^3=\frac{50!2^3}{3!47!}=\frac{50\cdot49\cdot48\cdot2^3}{3\cdot2\cdot1}=\frac{100\cdot98\cdot96}6\;.$$ HINT for (b): First calculate the total number of possible $3$-person committees, and then subtract the number in which all $3$ senators are Republicans or all $3$ senators are Democrats. It’s also feasible to calculate directly the number of committees with $2$ Republicans and $1$ Democrat and the number with $2$ Democrats and $1$ Republican and add these two figures.
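Both counts are easy to verify directly (a quick sketch):

    from math import comb

    part_a = comb(50, 3) * 2**3                          # = 100*98*96/6 = 156800
    part_b = comb(100, 3) - comb(47, 3) - comb(53, 3)    # exclude single-party committees
    print(part_a, part_b)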
H: How is $nq-1 = \sum_{m=0}^n (m-1){n \choose m} p^{n-m}q^m$ $$nq-1 = \sum_{m=0}^n (m-1){n \choose m} p^{n-m}q^m$$ I found this formula in my facility planning book. The math part of it I don't understand. I tried taking it apart like this $\sum_{m=0}^n (m){n \choose m}p^{n-m}q^m-\sum_{m=0}^n {n \choose m}*p^{n-m}q^m$ and calculate each term of the summation and it doesn't work. Btw we have $\ p+q=1$. I appreciate it if anyone can help me. AI: $$\begin{align*} \sum_{m=0}^n(m-1)\binom{n}mp^{n-m}q^m&=\sum_{m=0}^nm\binom{n}mp^{n-m}q^m-\sum_{m=0}^n\binom{n}mp^{n-m}q^m\\ &\overset{*}=\sum_{m=0}^nn\binom{n-1}{m-1}p^{n-m}q^m-(p+q)^n\\ &=n\sum_{m=0}^{n-1}\binom{n-1}mp^{n-(m+1)}q^{m+1}-1\\ &=nq\sum_{m=0}^{n-1}\binom{n-1}mp^{(n-1)-m}q^m-1\\ &\overset{*}=nq(p+q)^{n-1}-1\\ &=nq-1 \end{align*}$$ The starred steps use the binomial theorem.
H: bounds for conditional expectation and variance let X and Y be random variables with density: $$f_{x,y}(x,y)=\frac{1}{\pi}\mathbf{1}_{\{x^2+y^2\le1\}}$$ Find $$E[X|Y] \& Var(X|Y)$$ So what I did was: $$E[X|Y]=\int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}}\frac{x}{2\sqrt{1-y^2}}dx=0$$ $$Var(X|Y)=\int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}}\frac{x^2}{2\sqrt{1-y^2}}dx=-\frac{y^2-1}{3\sqrt{1-y^2}}$$ Does this seem right? I'm not sure about the bounds I used to get the marginal distributions. AI: The variance is wrong. But without a lot of calculations, when you got your conditional density $f_{X|Y}(x|y)=\frac{1}{2\sqrt{1-y^2}}$ $-1\leq y \leq 1$ You can observe that this is a Uniform$(a;b)$ distribution $$(X|Y)\sim U(-\sqrt{1-y^2};\sqrt{1-y^2})$$ Then you can calculate mean as: $\frac{a+b}{2}=0$ variance as: $\frac{(b-a)^2}{12}=\frac{1-y^2}{3}$
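A Monte Carlo sanity check of these formulas (an illustrative sketch; the conditioning value $y_0=0.5$ and the tolerance are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, size=(2_000_000, 2))
    pts = pts[(pts**2).sum(axis=1) <= 1]            # uniform points in the unit disk

    y0, eps = 0.5, 0.01
    x_cond = pts[np.abs(pts[:, 1] - y0) < eps, 0]   # X-values with Y close to y0
    print("empirical E[X|Y=y0]   :", x_cond.mean())      # ~ 0
    print("empirical Var(X|Y=y0) :", x_cond.var())       # ~ (1 - y0^2)/3 = 0.25
    print("theoretical Var       :", (1 - y0**2) / 3)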
H: Proof that the perpendicular bisectors of the sides of a general triangle meet at a point I have been studying Lang's basic math and I am stuck on the problem below: This is my representation: I am not sure on how I should proceed in this proof, I think this page is related to the problem, if so should I use a similar approach in the exercise? Thanks in advance. AI: HINT: Points on the perpendicular bisector of a segment are equidistant from the endpoints of that segment (this is not difficult to show if you look at the right triangles formed by the bisector, the segment, and the distances to the endpoints). Then, in your figure, you have $OQ=OM$ and $OQ=OP$. Now you can easily complete the proof.
H: An application of the dominated convergence theorem to approximate a function. In "Measure theory and probability theory (p. 58)" by Krishna and Soumendra we can find the following: Let $\mu$ be a Lebesgue-Stieltjes measure, and $f \in L^p(\mathbb{R}, \mathcal{B}(\mathbb{R}), \mu)$ with $0<p<\infty$. Define $B_n=\{x: |x|\le n, |f(x)|\le n\}$ and $f_n=fI_{B_n}$ ($I$ is the indicator function). By the DCT we have, for every $\epsilon > 0$, there exists an $N_\epsilon$ such that for all $n \ge N_\epsilon$, $\int \left| f(x) - f_n(x)\right|^p d\mu < \epsilon$. How is the DCT applied here? Thanks. AI: Note that $|f(x)-f_n(x)| \le |f(x)|$ and $|f|^p$ is integrable. Since $|f|^p$ is integrable, $|f(x)|$ is finite a.e. and so $f_n(x) \to f(x)$ a.e. Hence, by the DCT (with dominating function $|f|^p$), $\int |f(x)-f_n(x)|^p d \mu \to 0$.
H: Maximum likelihood estimator for bombing planes. Bombing planes cross two lines of anti-aircraft defense. Each plane, independently of the others, can be knocked down with probability $\theta$ by the first line of defense and with probability $\theta$ by the second line of defense. The probability $\theta$ is not known. Of the $n = 100$ aircraft, $K_1 = 40$ were knocked down by the first line and $K_2 = 20$ by the second. Compute the likelihood for $K_1$ and $K_2$, $P_{\theta}(K_1=40, K_2=20)$. Compute the MLE for $\theta$. I assumed that $\theta$ is the probability of Bernoulli trials. But how do I compute the MLE and the likelihood? AI: If I understand correctly, each plane has a probability $\theta$ to be hit when passing through each line of defense. For the first line, $40$ planes were hit, probability is $\binom{100}{40}\theta^{40}(1-\theta)^{60}$. For the second line, of the $60$ remaining planes, $20$ were hit, probability is $\binom{60}{20}\theta^{20}(1-\theta)^{40}$. So the likelihood is $$P_\theta(K_1=40,K_2=20) = \binom{100}{40}\theta^{40}(1-\theta)^{60}\cdot\binom{60}{20}\theta^{20}(1-\theta)^{40} = \frac{100!}{(40!)^2 20!}\theta^{60}(1-\theta)^{100}$$ Taking the derivative, you find $$\frac{d}{d\theta}P_\theta(K_1=40,K_2=20) = \frac{100!}{(40!)^2 20!}\theta^{59}(1-\theta)^{99}\left[60-160\theta\right]$$ so the maximum likelihood estimate is $\theta=\dfrac{60}{160}=\dfrac38$.
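A quick numerical confirmation (sketch) that the likelihood $\theta^{60}(1-\theta)^{100}$ peaks at $\theta=3/8$, by minimising the negative log-likelihood:

    import math
    from scipy.optimize import minimize_scalar

    def negloglik(t):
        return -(60 * math.log(t) + 100 * math.log(1 - t))

    res = minimize_scalar(negloglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
    print(res.x, 3 / 8)   # both approximately 0.375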
H: If $f: X \to [0,\infty]$ is measurable, $\lim_{n \to \infty} \int_X f^n d \mu$ exists. Let $(X, M, \mu)$ be a finite measure space. Let $f: X \to [0,\infty]$ be a measurable function. Prove: a) $\lim_{n \to \infty} \int_X f^n d \mu$ always exists on $[0,\infty]$. b) The latter limit is finite iff $\mu\{x \in X : f(x)>1\}=0$. I'd like to know if my proof is correct: What I did is to separate the limit in three parts: $$\lim_{n \to \infty} \int_X f^n d \mu= \lim_{n \to \infty} \int_{\{x:f(x)<1\}} f^n d \mu + \lim_{n \to \infty} \int_{\{x:f(x)=1\}} f^n d \mu+ \lim_{n \to \infty} \int_{\{x:f(x)>1\}} f^n d \mu.$$ For the first limit, I used the result which says that if a sequence $\{g_n\}$ decreases pointwise to $g$ and $\int g_1 < \infty$, then $\int g = \lim \int g_n$. So, since $f^n$ decreases to $0$ on ${\{x:f(x)<1\}}$, the first limit is $0$. The second limit equals $\mu(X)<\infty$. For the third limit, since $f^n \to \infty$ on ${\{x:f(x)>1\}}$, by the monotone convergence theorem, it's equal to $\int_{\{x:f(x)>1\}} \infty d \mu$, whis is $0$ if $\mu\{x \in X : f(x)>1\}=0$ and is $\infty$ otherwise. I also wonder if the double implication in (b) is proved with this. AI: Your arguments seem fine to me. If $\{f>1\}$ has positive $\mu$ measure then $$\int f^n\,d\mu\geq \int_{\{f>1\}} f^n\,d\mu\xrightarrow{n\rightarrow\infty}\infty$$ by monotone convergence. If $\mu(\{f>1\})=0$ then \begin{aligned} \int f^n\,d\mu &=\int_{\{f<1\}}f^n\,d\mu +\int_{\{f=1\}}f^n\,d\mu +\int_{\{f>1\}}f^n\,d\mu \\ &= \int_{\{f<1\}}f^n\,d\mu + \mu(f=1) \xrightarrow{n\rightarrow\infty}\mu(f=1) \end{aligned} again by monotone or dominated convergence.
H: If $G$ is a directed and finite graph whose underlying graph is a clique, then does $G$ have a root? Since I have failed to prove the following I think it's mostly false. Let $G$ be a directed and finite graph such that its underlying graph is a Clique. Then, $G$ has a root. Can someone give a contrary example? Note: Finite Graph means that the number of the graph's vertices and edges is finite. $v$ is called a root of the graph if there is a path from $v$ to every node in the graph. AI: Actually, I think the statement is true. We will proceed by induction over the number of vertices $n$. Base Case $n = 1$: Clearly the unique vertex in any graph with one node is a root. Step Case $n \rightarrow n + 1$: Assume as induction hypothesis (IH) that the statement holds for all graphs with $n$ vertices. Now let $G$ be an arbitrary directed graph on $n + 1$ vertices such that the underlying graph of $G$ is a clique. Let $v$ be an arbitrary vertex in $G$ and consider the graph $G' = G - v$ we get by deleting $v$: It is a directed graph on $n$ vertices whose underlying graph is a clique. Hence by IH we know that there exists a root $u$ in $G'$. Now consider the edges incident to $v$ in $G$. If there exists an incoming edge to $v$ in $G$ then $u$ is also a root in $G$. On the other hand, if there does not exist an incoming edge to $v$ in $G$ then $v$ is a root in $G$ because it has an outgoing edge to all other nodes in the graph. Thus there always exists a root in $G$.
H: RMO 1990 question Prove that the inradius of a right angled triangle having integer sides is also integral. I tried it and got $r=\frac{ab}{a+b+c}$. How do I proceed after this? AI: First, note that $$r=\frac{a+b-c}{2}$$ (this agrees with your formula, since $(a+b-c)(a+b+c)=(a+b)^2-c^2=2ab$). Also, the Pythagorean theorem says $$a^2+b^2=c^2$$ $\textbf{Case 1: }$ If both $a$ and $b$ are even numbers then so is $c$, which implies that $a+b-c$ is also even and $r=\frac{a+b-c}{2}$ is an integer. $\textbf{Case 2: }$ If both $a$ and $b$ are odd numbers then $c$ is even, which implies that $a+b-c$ is also even and $r=\frac{a+b-c}{2}$ is an integer. $\textbf{Case 3: }$ If $a$ is an even and $b$ is an odd number then $c$ is odd, which implies that $a+b-c$ is even and $r=\frac{a+b-c}{2}$ is an integer. $\textbf{Case 4: }$ If $a$ is an odd and $b$ is an even number then $c$ is odd, which implies that $a+b-c$ is even and $r=\frac{a+b-c}{2}$ is an integer.
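A small empirical check (sketch) over all integer right triangles with legs up to 100, verifying both that $a+b-c$ is even and that the two formulas for $r$ agree:

    from math import isqrt

    for a in range(1, 101):
        for b in range(a, 101):
            c = isqrt(a * a + b * b)
            if c * c == a * a + b * b:                    # (a, b, c) is a Pythagorean triple
                assert (a + b - c) % 2 == 0               # r = (a+b-c)/2 is an integer
                assert 2 * a * b == (a + b - c) * (a + b + c)   # the two formulas agree
    print("checked all integer right triangles with legs <= 100")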
H: Typo on page 92 of Spivak's Calculus 4th edition? On page 92 of Spivak's Calculus 4th edition in the middle of the page it states: "We just have to be 1,000,000 times as careful, choosing $|x- a|<\epsilon/3,000,000$ in order to ensure that $|f(x) - a|<\epsilon$." Is it not $|f(x) - f(a)|<\epsilon$ or $|f(x) - l|<\epsilon$? Can it be a tiny typo, or am I missing the point? Any clarification on this is very much appreciated. AI: If it is about the continuity of the function $ f $ at the point $ x=a $, it will be $$|f(x)-f(a)|<\epsilon$$ If it is about the limit of $ f $ at $ x=a $, it will be $$|f(x)-l|<\epsilon$$ and in this case, $$\lim_{x\to a}f(x)=l.$$
H: For $a,b$ in abelian group $G$ of orders $m,n$ where $\gcd(m,n) = 1$, $|ab| = mn$. I am trying to prove this following result. Let $G$ be an abelian group. Let $a,b$ be elements with orders $m,n$, respectively, where $m$ and $n$ are relatively prime. Prove that $|ab| = mn$. Here is my attempt. There is one final step that I cannot figure out. Let $G$ be an abelian group with identity $e$, and let $a,b \in G$ with the property that $|a| = m$ and $|b| = n$. We have: \begin{align*} (ab)^{mn} & = a^{mn} b^{mn} & & \text{since $G$ is abelian} \\ & = (a^n)^m (b^m)^n \\ & = e^m e^n \\ & = ee \\ & = e \end{align*} Define $t := |ab|$. I claim that $t \mid mn$. By the division algorithm, there exist unique $k \in \mathbb{Z}$ and $r$ with the property that $0 \leq r < t$ such that $$mn = kt + r.$$ We therefore have \begin{align*} (ab)^{mn} & = (ab)^{kt + r} \\ & = (ab)^{kt} (ab)^r \\ & = ((ab)^t)^k (ab)^r \\ & = e^k (ab)^r \\ & = e(ab)^r \\ & = (ab)^r \end{align*} Since $(ab)^{mn} = e$, and $(ab)^{mn} = (ab)^r$, we deduce $(ab)^r = e$ by transitivity of equality. But, $r$ is non-negative by definition and is strictly less than $t$, the order of $ab$. If $r$ is positive, this contradicts the definition of order, so we must have $r = 0$. Hence, $mn = kt$. That is, $t \mid mn$, or $|ab| \mid mn$. Now, since $t$ is the order of $ab$, we have $$(ab)^t = a^t b^t = e,$$ but we have $$e = e^n = (a^t b^t)^n = (a^t)^n (b^t)^n = a^{tn} b^{tn} = a^{tn} (b^n)^t = a^{tn} e^t = a^{tn} e = a^{tn}.$$ Since $a^{tn} = e$, by the earlier result, the order of $a$ divides the $tn$. That is, $m \mid tn$. Since $m$ and $n$ are relatively prime, $m \mid t$. By an exactly analogous argument, we deduce $n \mid t$. Indeed, $$e = e^m = (a^t b^t)^m = (a^m)^t b^{tm} = e^t b^{tm} = eb^{tm} = b^{tm},$$ which again implies that the order of $b$ divides $tm$, that is, $n \mid tm$, and since $m$ and $n$ are relatively prime, $n \mid t$. The solution I am looking at goes one step further and argues that $n \mid t$ and $m \mid t$, so $mn \mid t$. I cannot figure out why this is the case. If in fact it were the case, the proof would be done since we earlier found that $t \mid mn$, so together with this fact, we'd have $t = mn$. AI: $$(ab)^{|a| |b|}=a^{|a||b|}b^{|a||b|}=1$$ Therefore, $|ab|$ divides $|a||b|$. Conversely, $$1=(ab)^{|ab|}=a^{|ab|}b^{|ab|}$$ implies that $$1=1^{|b|}=a^{|b||ab|}b^{|b||ab|}=a^{|b||ab|}$$ Therefore, $|a|$ divides $|b||ab|$. Since $|a|$ and $|b|$ are coprime, you may write $$1=|a|x+|b|y$$ for suitable integers $x$ and $y$. Then $$|ab|=|a||ab|x+|b||ab|y$$ Since $|a|$ divides the right-hand-side, it also must divide $|ab|$. With a similar argument, you get that also $|b|$ divides $|ab|$. Hence $|a||b|=\operatorname{lcm}(|a|,|b|)$ divides $|ab|$.
H: Computing the signed curvature of a surface in an arbitrary direction If I have a surface defined as the graph of the function $z = f(x,y)$, is there a closed-form expression for the signed curvature of this surface in an arbitrary direction? That is, if $x(t) = x_0 + t\Delta x$ and $y(t) = y_0 + t\Delta y$, how can I compute the signed curvature of the intersection curve $\mathbf{C}(t) = \big(x(t), y(t), f(x(t), y(t))\big)$ at $t=0$? AI: Apply the second fundamental form of the surface (at the point $P=(x_0,y_0,f(x_0,y_0))$) to the unit tangent vector $\mathbf v$ of the curve at the point. See pages 45-47 of my differential geometry text. EDIT: This is computing the normal curvature (i.e., agrees with the actual space curvature of the plane curve only when the slicing plane is normal to the surface at the point).
H: Show that: $\left[\underset{n\to \infty }{\text{lim}}\int_1^{\infty } \frac{\sin (x)}{x^{n+1}} \, dx\right] = 0 $ Show that: $$\left[\underset{n\to \infty }{\text{lim}}\int_1^{\infty } \frac{\sin (x)}{x^{n+1}} \, dx\right] = 0 $$ My attempt: I use the Taylor series $$ \sin(x)= x-\frac{x^3}{6}+\frac{x^5}{120}-\frac{x^7}{5040}+\frac{x^9}{362880}+O(x^{11}) $$ $$\left[\underset{n\to \infty }{\text{lim}}\int_1^{\infty } \frac{x-\frac{x^3}{6}+\frac{x^5}{120}-\frac{x^7}{5040}+\frac{x^9}{362880}+O(x^{11})}{x^{n+1}} \, dx\right] = 0 $$ The main part is $$ \left[\underset{n\to \infty }{\text{lim}}\int_1^{\infty } \frac{1}{x^{n}} \, dx\right] = \frac{x^{1-n}}{1-n} $$ But this does not converge. Do you have a better idea? AI: No need to use the Taylor series. For $x > 0$, $$\left|\frac{\sin(x)}{x^{n+1}}\right| \leq \frac{1}{x^{n+1}}$$ so that $$\left|\int_1^\infty \frac{\sin(x)}{x^{n+1}}\,dx\right| \leq \int_1^\infty \left|\frac{\sin(x)}{x^{n+1}}\right|\,dx \leq \int_1^\infty \frac{1}{x^{n+1}}\,dx = \frac{1}{n} \xrightarrow{n\to\infty} 0$$
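A numerical illustration (sketch) of the bound $1/n$ for a few values of $n$, using SciPy's Fourier-type quadrature for the oscillatory integrand:

    import numpy as np
    from scipy.integrate import quad

    for n in (1, 2, 5, 10, 20):
        # computes the integral of x^{-(n+1)} * sin(1*x) over [1, infinity)
        val, _ = quad(lambda x: x**(-(n + 1)), 1, np.inf, weight='sin', wvar=1)
        print(n, val, "<=", 1 / n)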
H: Metric spaces with two conditions If $X$ is an non-empty set and $d: X \times X \rightarrow \mathbb {R}$ has the following properties $d(x,y)=0$ if and only if $x=y$ $d(x,y) \leq d(x,z)+\color{red}{d(z,y)}$ Prove that d defines a metric on X. I need to prove that $d(x,y) \geq 0$ $d(x,y)=d(y,x)$ I know this result. But the conditions that are set are different, I have tried to do it in an analogous way, but I think that with the conditions that are given it does not meet that it is a metric. The question falls on the fact that in the statement that I mention you have to d(x,y) $\leq d(x,z)+\color{red}{d(y,z)}$ I would appreciate any hint or if you can help me prove that it is not metric. AI: Consider $X = \{1, 2\}$ and $d$ defined as $$d(1, 1) = d(2, 2) = 0; \; d(1, 2) = 1; \; d(2, 1) = 2.$$ This clearly satisfies the first condition. The second condition is trivially satisfied in the following cases: $z = x,$ $z = y,$ $x = y$. Therefore, the only case to be checked is when the three are distinct. However, that clearly cannot happen. Thus, we conclude that the given conditions do not ensure a metric.
H: If $f$ is continuous and satisfies $|f(x) - f(y)| \ge \log(1+|x-y|)$, how do I show that $f$ is bijective? Let $f:\mathbb R\to \mathbb R$ be a continuous function satisfying $|f(x)-f(y)|\ge \log(1+|x-y|)$ for all $x$ and $y$ in $\mathbb R$. Prove that $f$ is bijective. $f(x)=f(y)$ at once implies $x=y$ and hence $f$ is injective. How do I prove surjectivity? AI: $f$ is continuous on $I=(-\infty,+\infty)$, therefore $f(I)$ is an interval $J$. But $$(\forall x\in I) \;\; |f(x)-f(0)|\ge \ln(1+|x|)$$ and $$\lim_{|x|\to \infty}\ln(1+|x|)=+\infty$$ imply that $$\lim_{|x|\to \infty}|f(x)|=+\infty.$$ You proved that $f$ is injective; a continuous injective function on $\Bbb R$ is strictly monotone, so together with $\lim_{|x|\to\infty}|f(x)|=+\infty$ this forces $f(x)\to+\infty$ at one end of the line and $f(x)\to-\infty$ at the other. The interval $J$ is therefore unbounded above and below, so $$f(I)=(-\infty,+\infty)=\Bbb R,$$ which shows that $f$ is surjective.
H: proves and disproves about inner product spaces $l_p=\{[{a_n}]_{n=1}^ {\infty}|\sum_{n=1}^{\infty}|a_n|^p < \infty \}$ with the norm $||a_n||_p = (\sum_{n=1}^{\infty}|a_n|^p)^\frac{1}{p} $ prove or disprove: $L_2\subset L_1$ I know its true for functions but is it also for sequnces? If $\lim\limits_{n \to \infty}a_n=0$ then $a_n\in l_1$ I think this is not true but can't think of a counter example If $a_n , b_n \in l_2$ then $\sum_{n=1}^{\infty}(a_nb_n)^2 \le \sum_{n=1}^{\infty}|a_n|^2 \sum_{n=1}^{\infty}|b_n|^2$ Is it just saying $||a_nb_n||_2^2 \le||a_n||_2^2||b||_2^2$ using cauchy schwartz? every clue is a big help, thanks in advance. AI: The first statement is false. As you remarked, it is true for functions but only on certain conditions: if $f$ is in $L^2(\Omega)$ with $\Omega$ a bounded set, then $f$ is in $L^1(\Omega)$. This is because of Cauchy-Scharz inequality: $$ \int_\Omega |f| \leq \left( \int_\Omega |f|^2 \right)^{\frac{1}{2}} \left( \int_\Omega 1^2 \right)^{\frac{1}{2}} < \infty $$ because $\Omega$ is bounded so $\int_\Omega 1 = \lambda(\Omega)$ is finite. However, it is usually false otherwise. Here the equivalent would be to write $$ \sum_n |a_n| \leq \left( \sum_n |a_n|^2 \right)^{\frac{1}{2}} \left( \sum_n 1^2 \right)^{\frac{1}{2}} $$ but obviously this does not tell us anything since $\sum_n 1 = \infty$ when $n$ runs through the positive integers. A simple counter-example example is $a_n = \frac{1}{n}$ which is in $L^2$ but not in $L^1$. This is also false: take the same counter-example $a_n = \frac{1}{n}$. Suppose $(a_n)$ and $(b_n)$ are in $L^2$. $$ \left( \sum_{n=1}^N |a_n|^2 \right) \left( \sum_{n=1}^N |b_n|^2 \right) = \sum_{n=1}^N \sum_{m=1}^N |a_n b_m|^2 \geq \sum_{n=1}^N |a_n b_n|^2 $$ so for all $N$ $$ \sum_{n=1}^N |a_n b_n|^2 \leq \left( \sum_{n=1}^{\infty} |a_n|^2 \right) \left( \sum_{n=1}^{\infty} |b_n|^2 \right) $$ with the right member of the inequality independent of $N$, and therefore the sum of the $|a_n b_n|^2$ converges and $$ \sum_{n=1}^{\infty} |a_n b_n|^2 \leq \left( \sum_{n=1}^{\infty} |a_n|^2 \right) \left( \sum_{n=1}^{\infty} |b_n|^2 \right). $$
H: How Was Tarski's Undefinability Theorem Used? How is Tarski's undefinability used in the following stack-exchange answer(s)? To recall, the result is that for language $L$ for which diagonal lemma (i.e. $ZFC, PA)$ applies there is no formula $T$ with one free variable such that for all wffs $\phi$ $$L\vdash \phi\leftrightarrow T(\#(\phi))$$ Recall also that (please correct if wrong - I was not able to find a precise definition) a set $A$ is definable in $L$ if there is a function $\underline\space :S\rightarrow E$, where $S$ is some set and $E$ are the constants (or in some sense the objects) of language such that $$A=\{x\in S:L\vdash\phi(\underline x)\}$$ Then a number (real or natural) is definable if the set $A$ contains only that number, and $S$ is then the set of natural or real numbers. The formula $\phi$ is said to define the number. (Note we work here in $ZFC$ and $L\vdash \phi$ can be taken to be the short hand for $Provable_L(\#(\phi))$) How was Tarski used in In $PA$ the "smallest number not definable in $PA$ in under $90$ characters" is not definable in $PA$. (This actually seems to not need Tarski, rather the different language levels solve it). By Tarski, we can not in general map $\phi$ to a real number it defines. In other words, we can not say that $\phi$ is true only at some point $r$. AI: There are a couple issues here. To begin with, you're conflating languages (just sets of symbols) with theories (sets of sentences using the symbols in the language, in addition to the "purely logical" symbols - the latter being equality, parentheses, Booleans and quantifiers, and variables). For example, $\mathsf{ZFC}$ is a theory in the language $\{\in\}$. You're also mixing up definability (which is relative to a structure) with provable behavior (which is relative to a theory). Recall that a set $X\subseteq\mathfrak{A}^n$ is definable in $\mathfrak{A}$ iff there is some formula $\varphi(x_1,...,x_n)$ in the language of $\mathfrak{A}$ such that $$X=\{(a_1,...,a_n):\mathfrak{A}\models\varphi(a_1,...,a_n)\}.$$ (See this old answer of mine for more about this.) Definability, not provability, is the notion which is important here: provability is irrelevant to Tarski's undefinability theorem. (OK, that's not strictly true, but it's true at the beginning.) In general definability is much "broader" than provable behavior in any sense. For example, Godel's incompleteness theorem says (for example) that there is some sentence $\sigma$ such that $\mathsf{PA}$ can't prove or disprove "$\sigma$ is a theorem of $\mathsf{PA}$," but the set of $\mathsf{PA}$-theorems is definable over the standard model $\mathfrak{N}=(\mathbb{N};+,\cdot)$ of $\mathsf{PA}$. I'm not sure what your proposed definition of definability is getting at. One key issue with your proposed definition is that definability only makes sense for subsets of (Cartesian powers of) the structure; it doesn't make sense to ask whether $X$ is definable in $\mathfrak{A}$ if $X\not\subseteq\mathfrak{A}^n$ for some $n$. Note that this indicates an abuse of terminology in the present situation: when we say "$Th(\mathfrak{N})$ is undefinable in $\mathfrak{N}$" what we really mean is "$\{\#(\varphi): \varphi\in Th(\mathfrak{N})\}$ is undefinable in $\mathfrak{N}$." To drive this point home, note that every countable set is in bijection with some definable subset of $\mathfrak{N}$, so that wouldn't give a useful notion. OK, now let's turn to the statement of Tarski's undefinability theorem. 
The usual "concrete" version of Tarski is the following: "$\{\#(\varphi):\mathfrak{N}\models\varphi\}$ is undefinable in $\mathfrak{N}$," where $\mathfrak{N}$ is as mentioned above the standard model of arithmetic and "$\#$" is the usual Godel numbering map. Note that - as indicated earlier - theories and provability are not mentioned here at all. The result above may feel a bit arbitrary; this is especially true since we often want to apply Tarski outside of the realm of $\mathfrak{N}$ specifically (e.g. to the whole set-theoretic universe). Rather than proving a new version of Tarski each time we need it, there is a more general form of the theorem which says roughly that if $\mathfrak{A}$ is a structure and $f$ is a map from the set of sentences in the language of $\mathfrak{A}$ to $\mathfrak{A}$ satisfying some basic properties, then $$\{f(\varphi):\mathfrak{A}\models\varphi\}$$ is not a definable subset of $\mathfrak{A}$. Now we're finally in a position to talk about how it's used. Often it appears not literally as a step in a proof, but rather as an indicator that a given proof attempt will not work (and indeed that the opposite result is likely true). For example, the natural way to attempt to show that the function $F$ sending $n$ to the least natural number not definable in $\mathfrak{N}$ by a first-order formula of length $<n$ is definable in $\mathfrak{N}$ would be to try to directly express "the least natural number not definable in $\mathfrak{N}$ by a first-order formula of length $<n$" in the language of first-order arithmetic; but when unfolded, that relies on being able to talk about truth in a definable way, which Tarski says we can't do. Note that this isn't yet a proof that $F$ isn't definable in $\mathfrak{N}$ - maybe there's some clever other way to define it - but it kills the obvious approach and strongly suggests that $F$ isn't definable. The actual proof that $F$ is undefinable however is strongly indicated by the proof of Tarski's theorem, so we generally get away with saying "By Tarski, $F$ isn't definable." That said, Tarski also gets used more "directly." For an example I personally find quite neat see here, where a slight generalization of the usual "concrete" phrasing of the theorem can be used to prove the first incompleteness theorem. Also, note that the way you've stated $(1)$ is incorrect: the least natural number $k_{90}$ not definable in $\mathfrak{N}$ by a formula with $<90$ symbols is definable in $\mathfrak{N}$, since indeed every natural number is definable in $\mathfrak{N}$, but the formula "the least natural number not definable in $\mathfrak{N}$ by a formula with $<z$ symbols" is not definable in $\mathfrak{N}$.
H: Triangular numbers divisible by $3$ I can't understand any of sentences from the images below. Since I don't understand almost every possible lines, I'm very troubled for what I should even ask. But I'll try to. Firstly, how is it possible for the triangular number with 3k+1 as the last number to be followed by the one with 3k+3 as its last number? As you can see on the example above, 3k = 6, then 3k+1 = 7, which makes sense since the next triangular number for 21 is 21+7, 28. But how could 30, 21+9, where 9 come from 3k+3, be the second next triangular number? Is it perhaps that 3k in 3k+1 is a different number with 3k in 3k+3? Secondly, how can they be sure they can show the pattern continues just by showing 3 cases? And, in the last pictures, how does it make sense to say the first two figures are triangular numbers? They just aren't, judging from the look of them? It's not stair-like figure at all. Simply put, I can't believe the last figures are consecutive triangular numbers. AI: I agree: the sentence "second number after $3k$: $3k + 1 + 2$, or $3k + 3$" is extremely misleading. They change the meaning of $k$ mid-sentence. I think to understand what is going on we should make as clear as possible the difference between the small numbers we add up in each step and the big triangular numbers. The situation they start from is that for some small number (such as 6) that is itself divisible by 3 we find a big triangular number (in the example: 21) that is also divisible by 3. Now we want to introduce the number $k$ to write numbers that are divisible by 3 as 3k. The author of the picture does not specify which of the two they rewrite in this way and this remains unclear forever so let's do it better ourselves. Let's say the small number (6 in our example) is denoted 3k (so k = 2 in our example) and the big number (21 in our example) I will write 3K (so the big number K is 7 in our example). Now the next triangular number is formed by adding the next small number (3k + 1) to the big number 3K, so we get $3K + 3k + 1$. This can be rewritten as $3(K + k) + 1$ so this number is one more than a multiple of 3 and hence not a multiple of 3 itself. It is of the form $3m + 1$ (with m = K + k) but saying that it is 3k + 1 as the picture does is extremely misleading. Now the next triangular number we get by adding the next small number: $3k + 2$ and so the next triangular number is $3K + (3k + 1) + (3k + 2)$. This can be expanded to $3(K + k + k) + (1 + 2)$, a number of the form $3n + 3$ (just take $n = K + k + k$) and hence divisible by $3$ as the picture argues. However calling this number $3k + 3$ is serious abuse of notation. The next triangular number we get by adding the next small number $3k + 3$ to the total we already had and hence we end up with $3(K + k + k) + 3 + (3k + 3) = 3(K + k + k + k) + 6 = 3(K + k + k + k + 2)$ so again this number is divisible by 3. I hope this helps! The final thing you need to understand is that after three steps we are back in the same situation we started with: The last small number we added (3k + 3) is divisible by 3 and the large triangular number we arrived at, 3(K + k + k + k + 2), also. Since these two properties was what defined our starting position we can repeat the same reasoning over and over.
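If it helps to see the pattern being described, here is a tiny script (illustrative only) that prints $T_n \bmod 3$ for the first few triangular numbers; the residues repeat as $1,0,0,1,0,0,\dots$, i.e. $T_n$ is divisible by $3$ exactly when $n\not\equiv 1\pmod 3$.

```python
T = 0
for n in range(1, 22):
    T += n                      # T is now the n-th triangular number
    print(n, T, T % 3)          # remainders cycle through 1, 0, 0
```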
H: $f$ is absolutely continuous implies that $f$ is continuous I am going to show that if f is absolutely continuous it implies that f is continuous, the test seems somewhat trivial to me of the definition, however I would like to know if I am doing it well. If $f$ is absolutely continuos on [a,b], let $\varepsilon >0$, there exist $\delta>0$ such that $\displaystyle\sum_{k=1}^{n}|f(b_{k})-f(a_{k})|<\varepsilon$, if $\sum_{k=1}^{n}(b_{k}-a_{k})<\delta$, for some collection finite $\lbrace (a_{k},b_{k}),1\leq k \leq n\rbrace$. If $n=1$, $|b_{1}-a_{1}|<\delta$ implies $|f(b_{1})-f(a_{1})|<\varepsilon$, then $f$ is uniformly continous on $[a,b]$ AI: Let $ c\in [a,b]$ and $\epsilon>0$. $f $ is absolutely continuous at $ [a,b]$ $$\implies$$ $$\exists \delta>0\;\; : \; \forall (a_1,b_1)\in [a,b]^2\;$$ $$ |a_1-b_1|<\delta \implies |f(a_1)-f(b_1)|<\epsilon$$ $$\implies$$ $$\exists \delta>0\;\; :\;\; \forall x\in[a,b]$$ $$|x-c|<\delta \;\;\implies \;\;|f(x)-f(c)|<\epsilon$$ This proves that $ f$ is continuous at any point $ c\in [a,b].$
H: Does a first integral of $\dot x = f(x)$ satisfy $\nabla H \cdot f = 0$? Let $\dot x = f(x)$ be an ODE, where $f: U\to \mathbb{R}^n$, and $U\subset \mathbb{R}^n$ is open. Then I know that a first integral is a function $H: U\to \mathbb{R}^n$ so that $H(\varphi(t,x_0))=const$ for every solution $\varphi(t,x_0)$ of the ODE. My notes prove that $\nabla H \cdot f = 0 \Rightarrow H$ is a first integral. My question is whether the reciprocal is true, that is, if $H$ is a first integral does it satisfy $\nabla H \cdot f = 0$? I cannot find any clues online. My notes say that this is true for a planar complex polynomial system, that is, $n=2$ and $f(x,y)=\big(P(x,y),Q(x,y)\big)$, where $P,Q$ are polynomials. Is this true for an $n-$dimensional polynomial system? How about for an arbitrary function $f$? Thank you very much! AI: Indeed,$$ 0=\frac{d}{dt}H(\varphi(t,x_0))\Big|_{t=0}=\nabla H(\varphi(t,x_0))\Big|_{t=0}\cdot \frac{d}{dt}\varphi(t,x_0)\Big|_{t=0}=\nabla H(x_0)\cdot f(x_0), $$ since $$ \frac{d}{dt}\varphi(t,x_0)=f(\varphi(t,x_0)). $$
H: Show that the card drawn at time $(S+1)$ is equally likely to be a club or a diamond. Let $k \geq 2$ be an integer. At time $n=0,$ we shuffle a deck of $2k$ cards of which $k$ are clubs and $k$ are diamonds. We draw a card each turn without replacing. Denote by $C_{n}$ the number of clubs drawn by the turn $n$ and by $\left(\mathcal{F}_{n}\right)$ the natural filtration of $\left(C_{n}\right)$ Let $M_{n}=\frac{k-C_{n}}{2 k-n}, \quad 0 \leq n \leq 2 k-1$ be the proportion of clubss left in the deck after turn $n$. I could show that $\left(M_{n}\right)$ is a martingale. now let $S$ be the first time at which the card draw is a club. it is a stopping time. show that the card drawn at time $(S+1)$ is equally likely to be a club or a diamond. I know I have to use Doob's optional stopping with $M_n$ but I don't know how exactly I should proceed, any help will be greatly appreciated, thanks ! AI: If you know $S=s$ then the probability the $s+1^{\text{th}}$ card is also a club is the same as the probability the last, i.e. $2k^{\text{th}}$, card is a club. Both are $\frac{k-1}{2k-s}$ and the denominator must be positive since $S < 2k$ when $k \ge 2$ If you do not yet know $S$, the probability the next card after the first club is also a club must then be the marginal probability the last card is a club. That is $\frac12$ since you started with an equal number of each type.
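A Monte Carlo sanity check of the $\tfrac12$ answer (an illustrative sketch; the value of $k$ and the number of trials are arbitrary choices):

```python
import random

k, trials = 5, 200_000
count = 0
for _ in range(trials):
    deck = ['C'] * k + ['D'] * k
    random.shuffle(deck)
    s = deck.index('C')              # position of the first club (0-based)
    count += deck[s + 1] == 'C'      # s + 1 < 2k always holds for k >= 2
print(count / trials)                # should be close to 0.5
```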
H: Determine whether $F$ is continuous as a map from $\big({\cal C}[0,1],d_1\big)$ to $\big({\cal C}[0,1],d_1\big)$. Define $F:{\cal C}[0,1]\to{\cal C}[0,1]$ by $F(f)(x)=\int_0^x{f(t)\over\sqrt t}\,dt$. I have a feeling that the map is continuous. I have shown that the map is Lipschitz continuous from $(\mathcal C[0, 1], d_\infty)$ to $(\mathcal C[0, 1], d_\infty)$. $$\left| F(f)(x) \right| = \left| \int_{0}^x \frac{f(t)}{\sqrt{t}} dt \right| \leq \int_{0}^x \left| \frac{f(t)}{\sqrt{t}} \right| dt \leq \|f\|_\infty \int_{0}^x \left| \frac{1}{\sqrt{{t}}} \right| dt \leq \|f\|_\infty \int_{0}^1 \frac{1}{\sqrt{t}} dt = 2\|f\|_\infty$$ for all $x \in [0, 1]$. Thus: $$\sup_{0 \leq x \leq 1} |F(f)(x)| = \|F(f)\|_\infty \leq 2 \|f\|_\infty$$ However, the same technique does not seem to apply to $\big({\cal C}[0,1],d_1\big)$. AI: $F$ is not continuous for the $d_1$ metric. Take $f_n(t) = \min\left(\frac{1}{\sqrt{t}},n\right)$, then $$\lVert f_n \rVert_1 \leq \int_0^1 \frac{1}{\sqrt{t}} \, dt.$$ But we have $$\lVert F(f_n)\rVert_1 = \int_0^1 |F(f_n)(x)|\,dx = \int_0^1\int_0^x \frac{f_n(t)}{\sqrt{t}} \, dt dx \xrightarrow[n\to \infty]{} \int_0^1\int_0^x \frac{1}{t}\, dt dx = \infty$$ by the monotone convergence theorem.
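To turn the blow-up above into an explicit failure of continuity (a small completion using the linearity of $F$): for large $n$ set $$g_n:=\frac{f_n}{\sqrt{\lVert F(f_n)\rVert_1}},\qquad \lVert g_n\rVert_1=\frac{\lVert f_n\rVert_1}{\sqrt{\lVert F(f_n)\rVert_1}}\xrightarrow[n\to\infty]{}0,\qquad \lVert F(g_n)\rVert_1=\sqrt{\lVert F(f_n)\rVert_1}\xrightarrow[n\to\infty]{}\infty,$$ so $g_n\to 0$ in $d_1$ while $F(g_n)\not\to F(0)=0$; hence $F$ is not continuous at $0$ for the $d_1$ metric.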
H: Bounded Variation Functions My question is; $f(x)=\left\{\begin{array}{cc}x^{2} \sin \left(\frac{1}{x^{2}}\right), & 0<x \leq 1 \\ 0, & x=0\end{array}\right.$ "Show that this function is Bounded Variation on [0,1] or not. " i know , It's not a bounded variation function. Because i read a theorem from a book; where a,b>0 ; $f(x)=\left\{\begin{array}{cc}x^{a} \sin \left(\frac{1}{x^{b}}\right), & 0<x \leq 1 \\ 0, & x=0\end{array}\right.$ if a>b; f is a bounded variation if a≤b ; f is not a bounded variation. So; its clear that this function is not bounded variation. But i want to solve in different ways. I need your helps. Thanks for your answers. AI: Define $a_n = \sqrt{\dfrac{2}{(2n+1)\pi}}$ (which is a monotone sequence in $[0,1]$) Then the total variation $$V^{\!1}_0(f) \geq \sum_{n=2}^N|f(a_{n-1})-f(a_n)| = \frac{2}{\pi}\sum_{n=2}^N\left(\frac{1}{2n-1} + \frac{1}{2n+1}\right) \xrightarrow{N\to\infty}\infty$$
H: Implication of probabilistic ordering from convergence in probability For a sequence $X_1, X_2, \ldots X_n$ with $E X_n = a$ I proved that $X_n \to_p a$, i.e. $$ \lim_{n \to \infty}P(|X_n - a|>\varepsilon)=0 $$ Does this type of convergence imply a probabilistic $ordering$ on the sequence, i.e. $$ P(|X_1-a|>\varepsilon) > P(|X_2-a|>\varepsilon)> \ldots > P(|X_n-a|>\varepsilon) $$ This would be useful for deriving a lower bound, e.g. for proof of convergence a.s., e.g. if $P(|X_1-a|>\varepsilon) = \pi$, the sum of probabilities is lower-bounded by $n \pi $ and hence diverges. Intuitively it makes sense, but it sounds a bit too general and simplistic, hence probably wrong, but I couldn't find a good explanation anywhere. AI: Certainly false. In such questions it is always advisable to look at constant (deterministic) random variables: if something fails for a sequence of real numbers, it cannot be true for general sequences of random variables. Take $X_n=\frac1 n$ if $n$ is even and $X_n =\frac 1 {4n}$ if $n$ is odd. Then $a=0$. Take any $\epsilon$ between $\frac 1 4$ and $\frac 1 2$. Then $P(|X_1-a| >\epsilon)=0$ and $P(|X_2-a| >\epsilon)=1$, so the proposed ordering fails.
H: Let $Y = X − [X]$, where $X\sim U(0,\theta)$. Show that $Y \sim U (0, 1)$ Let $X \sim U(0,\theta)$, where $\theta$ is a positive integer. Let $Y = X − [X]$, where $[x]$ is the largest integer $≤ x$. Show that $Y \sim U (0, 1)$ Clearly, the support of $Y$ is $S_Y = [0,1]$. In order to show this, I want to be able to prove that for $y\in [0,1]$ $F_Y(y) = y$ But for $y\in [0,1]$ $$F_Y(y) = Pr(X \leq y + [X]) = \sum_{i \; = \;0}^\theta Pr(X \leq y + i, \; [X] = i) =\frac1\theta \sum_{i \; = \;0}^\theta \int_{i}^{y+i} dx =\frac1\theta \sum_{i \; = \;0}^\theta y = \frac{\theta +1}\theta y \neq y$$ Is this question wrong or am I solving it incorrectly? AI: Hint: Your integral cannot extend up to $y+i$. For example when $i=\theta$ the interval $(i,y+i)$ is completely outside $(0,\theta)$ so the density function is $0$ there.
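For the record, the corrected sum (running only up to $i=\theta-1$, as the hint indicates) gives exactly the uniform c.d.f.: for $0\le y\le 1$, $$F_Y(y)=\Pr(X-[X]\le y)=\sum_{i=0}^{\theta-1}\Pr(i\le X\le i+y)=\sum_{i=0}^{\theta-1}\frac{y}{\theta}=y,$$ so $Y\sim U(0,1)$.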
H: [Proof Verification]: $\overline{A\times B}=\overline{A}\times\overline{B}$ Suppose $X$ and $Y$ are topological spaces, and $A\subset X$ and $B\subset Y$ are subspaces. I'm trying to prove $\overline{A\times B}=\overline{A}\times\overline{B}$. For the backward inclusion, choose $(x_1,x_2)\in \overline{A}\times \overline{B}$. Every neighbourhood of $x_1$ intersects $A$, and every neighbourhood of $x_2$ intersects $B$, so clearly every neighbourhood of $(x_1,x_2)$ intersects $A\times B$. Conversely, take a point $(x_1,x_2)\in \overline{A\times B}$ and for a contradiction, suppose $(x_1,x_2)\not\in\overline{A}\times\overline{B}$. Then every neighbourhood of $(x_1,x_2)$ intersects $A\times B$. However since neighbourhoods of $x_1$ and $x_2$ which do not intersect $A$ and $B$ respectfully exist, call them $U$ and $V$, the product $U\times V$ does not intersect $A\times B$.$\hspace{1em}\Box$ AI: Here is another way to look at this. Since $A\times B= (A\times Y)\cap(X\times B)$, $$\overline{A\times B}\subset \overline{A\times Y}\cap \overline{X\times B}\subset (\overline{A}\times Y) \cap (X\times \overline{B}) = \overline{A}\times\overline{B}$$ Here we have used the fact that $\overline{U\cap V}\subset \overline{U}\cap \overline{V}$ for any subsets in a topological space, along with the facts that $A\times Y\subset \overline{A}\times Y$, and $\overline{A}\times Y$ is closed in $X\times Y$. (similarly argument for $X\times B$.) It remains to show that $\overline{A}\times\overline{B}\subset\overline{A\times B}$. Let $(x,y)\in\overline{A}\times\overline{B}$. Any neighborhood $W$ in $X\times Y$ of $(x,y)$ contains an open set of the form $U\times V$ where $x\in U$, $y\in V$ and $U$, $V$ one in $X$ and $Y$ respectively. Then $$W\cap(A\times B)\supset (U\times V)\cap (A\times B)=(U\cap A)\times (V\cap B)\neq\emptyset$$ for $x\in\overline{A}$ and $y\in\overline{B}$. Consequently, $(x,y)\in \overline{A\times B}$
H: The product of an arbitrary family of locally convex spaces is locally convex. Let $\{E_\alpha\ : \ \alpha\in I\}$ be a family of a locally convex sets, where $I$ is an index family. I want to prove that $$E:= \prod_{\alpha\in I}E_\alpha$$ is locally convex. I know that, by definition, for each $\alpha\in I$, $E_\alpha$ is locally convex, that is, $E_\alpha$ is topological vector space such that there is a basis of neighborhoods in $E_\alpha$ consisting of convex sets. I also know that I must prove that there is a basis of neighborhoods in $E$ formed by convex sets, but I do not know how to prove it from hypotheses. AI: Let $C$ be a collection of convex spaces. Let $a = \prod_{s \in C}a_s$ and $b = \prod_{s \in C}b_s$ be two points in $\prod C$. Show that the line from $a$ to $b$ is within $\prod C$.
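A sketch of how this observation finishes the proof, using the standard description of the product topology (details to check): basic neighbourhoods of a point $x=(x_\alpha)_{\alpha\in I}$ in $E$ have the form $$U=\prod_{\alpha\in I}U_\alpha,\qquad\text{with } U_\alpha=E_\alpha \text{ for all but finitely many } \alpha,$$ where each $U_\alpha$ is a neighbourhood of $x_\alpha$ in $E_\alpha$. Since each $E_\alpha$ is locally convex, the finitely many proper factors $U_\alpha$ may be shrunk to convex neighbourhoods, and $E_\alpha$ itself is convex; a product of convex sets is convex, so these products form a basis of convex neighbourhoods at every point of $E$.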
H: Integral Criteria for Functions to be Zero Almost Everywhere While reading the proof of Lemma 2 in the following link, I realized they only proved the case of a nonnegative function $f$, but that's not an hypothesis of the lemma. So, what happens if $f$ takes negative values? Does the lemma remain true? Is the proof similar to the nonnegative case? Lemma 2: Let $f$ is a Lebesgue integrable function on $[a, b]$ and let $F(x)=\int_a^x f(t) d t$. If for all $x \in[a, b]$ we have that $F(x)=0$ then $f(x)=0$ almost everywhere on $[a, b]$ Link of the proof: http://mathonline.wikidot.com/integral-criteria-for-functions-to-be-zero-almost-everywhere AI: Following the proof of the Lemma 2 in the link it shows that $m(\{x\in[a,b]:f(x)>0\})=0$, therefore $f$ is non-positive almost everywhere. Then it follows from the Lemma 1 that $-f=0$ almost everywhere, so we conclude that $f=0$ a.e. That is, if we set $S:=\{x\in[a,b]: f(x)>0\}$ then $$ \int_{[a,b]}f\mathop{}\!d \lambda =\overbrace{\int_{S}f\mathop{}\!d \lambda}^{=0} +\int_{[a,b]\setminus S}f \mathop{}\!d \lambda =-\int_{[a,b]\setminus S}|f|\mathop{}\!d \lambda =0\\[2ex] \therefore\quad \mathbf{1}_{[a,b]\setminus S}\,|f|=0\text{ a.e. }\implies \mathbf{1}_{[a,b]\setminus S}\,f=0\text{ a.e. }\implies f=0\text{ a.e. } $$
H: $U \sim U(0,1)$ and let $X$ be the root of $3t^2 −2t^3 −U = 0.$Show that $X$ has p.d.f. $f(x)=6x(1−x)$, if $0\leq x\leq 1$ Let $U \sim U(0,1)$ and let $X$ be the root of the equation $3t^2 −2t^3 −U = 0.$ Show that $X$ has p.d.f. $f(x)=6x(1−x)$, if $0\leq x\leq 1,\;=0$,otherwise. I really don't know how to begin solving this problem. I don't even know what the roots of $3t^2 −2t^3 −U = 0$ are Help will be much appreciated. AI: First of all, for $U \in (0,1)$ there are three roots of $3t^2 -2t^3-U=0$ (two roots if $U = 0$ or $1$) so the correct definition of $X$ would be the unique root in $[0,1]$. Now set $f(t) = 3t^2-2t^3-U$. Taking the derivative, it is easy to check that $f$ is increasing on $[0,1]$ and that $f(0) = -U$ and $f(1) = 1-U$ (proving that $X$ is well defined). Using this, we can show that for $x \in [0,1]$, we have $$\{X \leq x\} = \{U \leq 3x^2 - 2x^3\}.$$ Indeed, $X \leq x$ if and only if $f(X) \leq f(x)$ if and only if $0 \leq 3x^2 - 2x^3 - U$. It follows that $$\mathbb{P}(X\leq x) = \mathbb{P}(U \leq 3x^2 - 2x^3) = 3x^2 - 2x^3,$$ and the right-hand side is the cdf of the given distribution (just take the derivative).
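If you would like to see this numerically (an illustrative sketch with NumPy/SciPy; the seed, sample size and test points are arbitrary), the empirical c.d.f. of the simulated root should match $3x^2-2x^3$:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
U = rng.uniform(0, 1, 20_000)
# X is the unique root of 3t^2 - 2t^3 - u = 0 in [0, 1]
X = np.array([brentq(lambda t, u=u: 3*t**2 - 2*t**3 - u, 0.0, 1.0) for u in U])

for x in (0.25, 0.5, 0.75):
    print(x, (X <= x).mean(), 3*x**2 - 2*x**3)   # empirical vs. theoretical cdf
```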
H: Sum of divisors function inequality Prove that if $n<m$ and $n$ divides $m$, then $\frac{\sigma(n)}{n} < \frac{\sigma (m)}{m}$, where $\sigma(x)$ denotes the sum of all the divisors of $x$. I know that $\sigma (x)$ is multiplicative but not completely multiplicative. So, if $m=nk$ and $\gcd(n,k)=1$, then the inequality rewrites to $\frac{\sigma(n)}{n} < \frac{\sigma(n)\sigma(k)}{nk}$, which gives a lot of cancellation and can be easily proven. But how can it be solved if the gcd is not $1$? AI: Working one prime at a time, it will suffice to show that the inequality $$ \frac{\sigma(np)}{np} > \frac{\sigma(n)}{n} $$ holds for all positive integers $n$ and all primes $p$. Fix a positive integer $n$ and a prime $p$. First suppose $p\not\mid n$.$\;$Then $$ \frac{\sigma(np)}{np} = \frac{\sigma(n)\sigma(p)}{np} = \frac{\sigma(n)}{n}\cdot \frac{\sigma(p)}{p} = \frac{\sigma(n)}{n}\cdot \frac{p+1}{p} > \frac{\sigma(n)}{n} $$ Next suppose $p\mid n$. Then we can write $n=ap^k$, where $p\not\mid a$, so \begin{align*} \sigma(n) &= \sigma(ap^k) = \sigma(a)\sigma(p^k) = \sigma(a)\left(\frac{p^{k+1}-1}{p-1}\right)\\[4pt] \sigma(np) &= \sigma(ap^{k+1}) = \sigma(a)\sigma(p^{k+1}) = \sigma(a)\left(\frac{p^{k+2}-1}{p-1}\right)\\[4pt] \end{align*} so $$ \frac{\sigma(np)}{\sigma(n)} = \frac{p^{k+2}-1}{p^{k+1}-1} $$ hence $$ \frac {\left({\Large{\frac{\sigma(np)}{np}}}\right)} {\left({\Large{\frac{\sigma(n)}{n}}}\right)} = \frac{1}{p}\cdot \frac{p^{k+2}-1}{p^{k+1}-1} = \frac{p^{k+2}-1}{p^{k+2}-p} > 1 $$ and thus $$ \frac{\sigma(np)}{np} > \frac{\sigma(n)}{n} $$ Now suppose positive integers $n,m$ with $n < m$ are such that $n{\,\mid\,}m$. Then we can write $$ m=n{\,\cdot\,}(p_1\cdots p_j) $$ where $p_1,...,p_j$ are primes, not necessarily distinct. Let $x_0,...,x_j$ be defined recursively by $$ \left\lbrace \begin{align*} x_0&=n\\[4pt] x_i&=x_{i-1}p_i\;\text{for}\;\,1\le i\le j\\[4pt] \end{align*} \right. $$ Then for $1\le i\le j$ we get $$ \frac{\sigma(x_i)}{x_i} = \frac{\sigma(x_{i-1}p_i)}{x_{i-1}p_i} > \frac{\sigma(x_{i-1})}{x_{i-1}} $$ hence $$ \frac{\sigma(n)}{n} = \frac{\sigma(x_0)}{x_0} < \cdots < \frac{\sigma(x_j)}{x_j} = \frac{\sigma(m)}{m} $$
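A brute-force check of the inequality over a small range (purely illustrative; exact rational arithmetic avoids rounding issues):

```python
from fractions import Fraction

def sigma(n):
    """Sum of the positive divisors of n, by trial division."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

for m in range(2, 400):
    for n in range(1, m):
        if m % n == 0:
            assert Fraction(sigma(n), n) < Fraction(sigma(m), m), (n, m)
print("sigma(n)/n < sigma(m)/m holds whenever n properly divides m, for m < 400")
```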
H: Conceptual reason why height of unit tetrahedron is the same as the distance between opposite faces of an octahedron? One of my favorite mathematics visualizations shows why attaching a tetrahedron to a triangular face of a square pyramid results in a polyhedron with five faces instead of the seven faces one might expect. One thing that I've noticed is that if you "subtract" a tetrahedron from an octahedron along a face, something interesting happens: the fourth vertex of the tetrahedron lands on the octahedron's opposite face. This means that the distance between opposite faces of an octagon is precisely the same as the distance from a vertex of a tetrahedron to its opposite face. Is there a clear way to see this is the case without simply computing it? (It looks like this may follow from heropup's answer, but I'd prefer an explanation that would convince a high school student.) AI: The answer you cited might actually be satisfactory for a high school student if you work it through carefully. But I would suggest taking this answer and gluing pyramids to the bottoms of the pyramids in that figure so that you have two octahedra as shown in the figure below. You can still fit a regular tetrahedron into the gap between the two octahedra; the black line segment is the same length as the edge of either octahedron and is one of the edges of the tetrahedron. (The other five edges are shared with the octahedra.) The "upper front" faces of the two octahedra are still coplanar and the "front" face of the tetrahedron is still coplanar with them. The "lower rear" faces of the two octahedra (opposite the "upper front" faces) also are coplanar and meet at the "rear" vertex of the tetrahedron (opposite the "front" face). But the two planes in which the "upper front" and "lower rear" faces lie are parallel. Hence if you put one face of a tetrahedron anywhere in the plane of the two "upper front" faces of the octahedra, the remaining vertex of the tetrahedron will be in the plane of the "lower rear" faces, just like the tetrahedron shown in the figure. In particular, if you put a face of the tetrahedron coincident with one of those "upper front" faces, the fourth vertex of the tetrahedron lands precisely in the middle of the opposite face of the octahedron. If you (or the high school student) still have trouble visualizing this, try making paper models of a couple of regular octahedra and a regular tetrahedron with congruent edges and lay them on a flat surface so that they fit together like this. The alternative construction suggested by Will Jagy (which I think is even nicer than the one above) is to take a single regular octahedron and place four regular tetrahedra on its faces as shown in the figure below. It should not take much to figure out that you can get the same combined figure by taking a single large regular tetrahedron and subdividing each face into four equilateral triangles.
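For anyone who does want the computation anyway, here is a quick coordinate check (an illustrative sketch, not a replacement for the synthetic argument above; it uses the standard fact that a regular tetrahedron of edge $a$ has height $a\sqrt{2/3}$):

```python
import numpy as np

# Octahedron with vertices +-e_i: one pair of opposite faces lies in the
# parallel planes x + y + z = 1 and x + y + z = -1.
edge = np.linalg.norm(np.array([1, 0, 0]) - np.array([0, 1, 0]))  # sqrt(2)
face_gap = 2 / np.sqrt(3)              # distance between those two planes

tet_height = edge * np.sqrt(2 / 3)     # regular tetrahedron with the same edge

print(edge, face_gap, tet_height)      # face_gap == tet_height == 2/sqrt(3) ~ 1.1547
```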
H: Proving $\lim_{n\to\infty}\frac1n\sum_{i=0}^{n-1}\sum_{j=n}^{n+i}\frac{\binom{n+i}j}{2^{n+i}}=0$ I found the following question online: How can I prove that $$\lim_{n\to\infty}\frac1n\sum_{i=0}^{n-1}\sum_{j=n}^{n+i}\frac{\binom{n+i}j}{2^{n+i}}=0$$ ? One notices that the inner sum is equal to the probability $\mathsf P\left(\mathrm B\left(n+i;\frac12\right)\geq n\right)$, where $\mathrm B$ denotes the binomial distribution. Using Hoeffding's inequality, one gets $\mathsf P\left(\mathrm B\left(n+i;\frac12\right)\geq n\right)\le\exp\left(-\frac{(n-i)^2}{2(n+i)}\right)$, i.e. $$\tag1\label1\frac1n\sum_{i=0}^{n-1}\sum_{j=n}^{n+i}\frac{\binom{n+i}j}{2^{n+i}}\le\frac1n\sum_{i=0}^{n-1} \exp\left(-\frac{(n-i)^2}{2(n+i)}\right).$$ Based on numerical experiments, the right-hand side converges to $0$. If you apply $\exp(-x)\le\frac{1}{1+x}$, you get $$\tag2\label2\frac1n\sum_{i=0}^{n-1} \exp\left(-\frac{(n-i)^2}{2(n+i)}\right)\le\frac1n\sum_{i=0}^{n-1} \frac{1}{1+\frac{(n-i)^2}{2(n+i)}},$$ and the right-hand side still seems to converge to $0$. However, it is 2am so I lack the stamina to find a proof for this. I am asking for a sketch of proof that either the right-hand side in \eqref{1}, or even better, the right-hand side in \eqref{2} converges to $0$. Note: Here, I answered a similar question. AI: We have \begin{align} \frac{1}{n}\sum_{i=0}^{n-1} \frac{1}{1 + \frac{(n-i)^2}{2(n+i)}} &= \frac{1}{n}\sum_{i=0}^{n-\lfloor \sqrt{n} \rfloor} \frac{1}{1 + \frac{(n-i)^2}{2(n+i)}} + \frac{1}{n}\sum_{i=n+1-\lfloor \sqrt{n} \rfloor}^{n-1} \frac{1}{1 + \frac{(n-i)^2}{2(n+i)}}\\ &\le \frac{1}{n}\sum_{i=0}^{n-\lfloor \sqrt{n} \rfloor} \frac{1}{0 + \frac{(n-i)^2}{2(n+n)}} + \frac{1}{n}\sum_{i=n+1-\lfloor \sqrt{n} \rfloor}^{n-1} \frac{1}{1 + 0}\\ &= 4 \sum_{i=0}^{n-\lfloor \sqrt{n} \rfloor}\frac{1}{(n-i)^2} + \frac{\lfloor \sqrt{n} \rfloor - 1}{n}\\ &= 4 \sum_{m=\lfloor \sqrt{n} \rfloor}^n \frac{1}{m^2} + \frac{\lfloor \sqrt{n} \rfloor - 1}{n}. \end{align} From $\sum_{m=1}^\infty \frac{1}{m^2} = \frac{\pi^2}{6}$, we know that $\lim_{n\to \infty} 4 \sum_{m=\lfloor \sqrt{n} \rfloor}^n \frac{1}{m^2} = 0$. Also, clearly, $\lim_{n\to \infty} \frac{\lfloor \sqrt{n} \rfloor - 1}{n} = 0$. The desired result follows. (Q. E. D.)
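A direct numerical evaluation of the quantity $\frac1n\sum_{i=0}^{n-1}\sum_{j=n}^{n+i}\binom{n+i}{j}2^{-(n+i)}$, just to see the decay (an illustrative sketch using exact binomial coefficients):

```python
from math import comb

def avg_tail(n):
    total = 0.0
    for i in range(n):
        tail = sum(comb(n + i, j) for j in range(n, n + i + 1)) / 2 ** (n + i)
        total += tail            # P(Binomial(n+i, 1/2) >= n)
    return total / n

for n in (10, 50, 200, 400):
    print(n, avg_tail(n))        # decreases towards 0
```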
H: Checking the derivative of function of two variables Lef $f:\mathbb R^2\rightarrow\mathbb R $ be defined by $$f(x,y)=\sin\left(\frac{y^2}{x}\right)\sqrt{x^2+y^2}$$ if $x\neq0$ and $f(x,y)=0$ if $x=0$. Then show that $f$ is not differentiable at $(0,0)$. It is clear that $f$ is continuous at $(0,0)$. Now by the definition of derivatives of function of several variables, the derivative of $f$ at $(0,0)$ is going to be a linear map say $\lambda$ which satisfies the condition $$\lim_{h \to 0}\frac{ |f((0,0)+(h_{1},h_{2}))-f(0,0)-\lambda(h_{1},h_{2})|}{\Vert h \Vert} = 0$$ So how can we show that f is not differentiable at $(0,0)$. Should we assume that f is differentiable at (0,0) and then try to get some contradiction in the condition of differentiability? Or there is some other way to do it? AI: Note that if $f$ were differentiable at $(0,0)$, then this limit $$\lim_{(x,y)\to(0,0)}\left\lvert\dfrac{f(x,y)-f(0,0)}{(x,y)-(0,0)}\right\rvert=\lim_{(x,y)\to(0,0)}\left\lvert\sin\left(\dfrac{y^2}{x}\right)\right\rvert$$ would exist. But it is not the case, since, taking $y=a\sqrt x,x\to0$ we get the limit to be $\sin a^2$ which is different for different values of $a$.
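One small step is worth making explicit before taking that limit: both partial derivatives of $f$ at the origin vanish, since $f(x,0)=\sin(0)\sqrt{x^2}=0$ for every $x\neq 0$ and $f(0,y)=0$ by definition. Hence, if $f$ were differentiable at $(0,0)$, its derivative would have to be the zero map, and differentiability would force $$\lim_{(x,y)\to(0,0)}\frac{|f(x,y)-f(0,0)|}{\lVert(x,y)\rVert}=0,$$ whereas for $x\neq0$ this quotient equals $\left|\sin\!\left(\frac{y^2}{x}\right)\right|$, which (as the paths $y=a\sqrt x$ show) has no limit at all.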
H: Curve family whose velocities are normalized exponentials For a fixed $\gamma > 0$, consider the family of curves parametrized by $\Theta=(\vec a,\vec b)$: $$\vec v_\Theta(t) = \frac{\vec a + e^{\gamma t}\vec b}{||\vec a + e^{\gamma t}\vec b||_2}$$ I'm interested in their integrals $$\vec x_\Theta(t) = \int_0^t \vec v(t')\,dt'$$ $$\vec w_\Theta(t) = \int_0^t e^{\gamma t'}\vec v(t')\,dt'$$ I don't need $\vec x_\Theta(t)$ and $\vec w_\Theta(t)$ as functions of $\Theta=(\vec a,\vec b)$. Rather, I'd like to know if there's a nice form for a general element of the set $$\{\vec x_\Theta(\cdot): \Theta\in \mathbb R^n \times \mathbb R^n\}$$ and similarly for $\vec w_\Theta(\cdot)$. For example, an answer might take the form of another parametric family, parametrized by $\Theta$ or by something more convenient. I suppose there is no loss of generality in assuming we're in two dimensions (spanned by $\vec a$ and $\vec b$), in which case we can say that $\vec v$ is a unit vector whose direction satisfies $$\tan\theta(t) = \frac{a_y + e^{\gamma t}b_y}{a_x + e^{\gamma t}b_x}$$ Are these well-known types of curves? At the very minimum, is there a faster and higher-precision way to compute the curves than numerical integration? AI: Project $\vec x$ into directions along $\hat a$ and $\hat b$. Then $$x_a(t)-x_a(0)=\int_0^t\frac 1{\sqrt{a^2+b^2e^{2\lambda t'}}}dt'$$ If either $a$ or $b$ are zero, the integral is trivial. If both are nonzero, use $c=b/a$ and $-2\lambda t'=\tau$ to get an integral $$\int\frac1{\sqrt{1+c^2e^{-\tau}}}d\tau$$ Now use $u=1+c^2e^{-\tau}$ to get an integral of the form $$\int\frac1{(u-1)\sqrt u}du$$ Substitute $v=\sqrt u$ and you get $$\int\frac1{v^2-1}dv$$ You will need to get all the constants and the limits. The last integral is trivial. Similarly for the other direction: $$x_b(t)-x_b(0)=\int_0^t\frac {be^{\lambda t'}}{\sqrt{a^2+b^2e^{2\lambda t'}}}dt'$$ Use $u=\frac bae^{\lambda t'}$ to get an integral of the form $$\int\frac1{\sqrt{1+u^2}}du$$ This is a standard integral
H: Solving $(D^2-1)y=e^x(1+x)^2$ I did like this: $$\text{Let,} y=e^{mx} \text{ be a trial solution of } (D^2-1)y=0$$ $$\therefore \text{The auxiliary equation is } m^2-1=0$$ $$\therefore m=\pm1\\ \text{C.F.} = c_1e^x+c_2e^{-x}$$ $$\begin{align} \text{P.I.}& =\frac{1}{D^2-1}e^x(1+x)^2\\ & =e^x\frac{1}{(D+1)^2-1}(1+2x+x^2)\\ & =e^x\frac{1}{D^2+2D}(1+2x+x^2)\\ & =\frac{e^x}{2}\left[\frac{1}{D}-\frac{1}{D+2}\right](1+2x+x^2)\\ & =\begin{aligned} \frac{e^x}{2}\frac{1}{D}(1+2x+x^2)-\frac{e^x}{2}\frac{1}{D+2}(1+2x& +x^2)\\ \end{aligned}\\ & =\frac{e^x}{2}\left(x+x^2+\frac{x^3}{3}\right)-\frac{e^x}{4}\left(x^2+x+\frac{1}{2}\right)\\ & =e^x\left(\frac{x^3}{6}+\frac{x^2}{4}+\frac{x}{4}-\frac{1}{8}\right)\\ \end{align}$$ $$\therefore \text{The solution is}$$ $$y=c_1e^x+c_2e^{-x}+e^x\left(\frac{x^3}{6}+\frac{x^2}{4}+\frac{x}{4}-\frac{1}{8}\right)$$ But, in my book the answer is: $$y=c_1e^x+c_2e^{-x}+\frac{xe^x}{12}(2x^2+3x+3)$$ $$\text{Please, check if there is any } \color{red}{mistake}.$$ AI: The answers are essentially the same. $-\frac18e^x$ can be absorbed in $c_1e^x$.
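A quick symbolic check with SymPy that the particular integral found above really satisfies $y''-y=e^x(1+x)^2$ (illustrative only; the exact printed form of the first line may vary):

```python
import sympy as sp

x = sp.symbols('x')
y_p = sp.exp(x) * (x**3/6 + x**2/4 + x/4 - sp.Rational(1, 8))
lhs = sp.simplify(sp.diff(y_p, x, 2) - y_p)
print(lhs)                                        # equivalent to exp(x)*(x + 1)**2
print(sp.simplify(lhs - sp.exp(x) * (1 + x)**2))  # 0
```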
H: Monotone Convergence theorem Application $$ \lim _{n \to \infty} \int_{-\infty}^{\infty} \frac{e^{-x^{2} / n}}{1+x^{2}} d x=? $$ My opinion is using Monotone Convergence Theorem here. For every $x \in \mathbb{R}$ the sequence $\left\{e^{-x^{2} / n}\right\}$ monotonically increases and converges to $e^0=1$. Thus by the Lebesgue Monotone Convergence Theorem, $$ \lim _{n \to \infty} \int_{-\infty}^\infty \frac{e^{-x^2/n}}{1+x^2} dx = \int_{-\infty}^\infty \frac{dx}{1+x^2} = \left.\tan^{-1} x \right|_{-\infty}^\infty = \pi $$ I think my answer is right. But I am especially interested in its details.I think applying Lebesgue Classical monotone convergence is enough in this question, not need Beppo Levi version. But i am not sure.What about you?May you give some details ? Thanks for your helps. AI: $\newcommand{\D}{\,\mathrm{d}}$The notation $$\int_{-\infty}^{\infty} \frac{e^{-x^{2} / n}}{1+x^{2}} \D x$$ is most commonly used to denote the iterated improper Riemann integral $$\lim_{a \to \infty} \lim_{b \to \infty} \int_a^b \frac{e^{-x^{2} / n}}{1+x^{2}} \D x$$ while $$\int_\mathbb{R} \frac{e^{-x^{2} / n}}{1+x^{2}} \D\lambda(x)$$ the analogous Lebesgue integral, where $\lambda$ is the Lebesgue measure. If you want to add more details to your proof then you could make the relation to Lebesgue integral clearer. For that, let $$f_n : \mathbb{R} \to [0, \infty[ : x \mapsto \frac{e^{-x^{2} / n}}{1+x^{2}}$$ and $F$ its pointwise limit as $n \to \infty$. Having in mind that the proper Riemann integral, when it exists, is equal to the corresponding Lebesgue integral, reason as follows \begin{align*} \lim_{n \to \infty} \int_{-\infty}^{\infty} f_n(x) \D x &= \lim_{n \to \infty} \lim_{a \to \infty} \lim_{b \to \infty} \int_a^b f_n(x) \D x \\ &= \lim_{n \to \infty} \lim_{a \to \infty} \lim_{b \to \infty} \int_{[a, b]} f_n \D\lambda \\ &= \lim_{n \to \infty} \lim_{a \to \infty} \lim_{b \to \infty} \int_{\mathbb{R}} f_n \chi_{[a, b]} \D\lambda \\ \end{align*} Then using the MCT three times, one time for each limit, it follows that \begin{align*} \lim_{n \to \infty} \lim_{a \to \infty} \lim_{b \to \infty} \int_{\mathbb{R}} f_n \chi_{[a, b]} \D\lambda &= \int_{\mathbb{R}} \lim_{n \to \infty} \lim_{a \to \infty} \lim_{b \to \infty} \left( f_n \chi_{[a, b]} \right) \D\lambda \\ &= \int_{\mathbb{R}} \lim_{n \to \infty} f_n \lim_{a \to \infty} \left( \lim_{b \to \infty} \chi_{[a, b]} \right) \D\lambda \\ &= \int_{\mathbb{R}} F \lim_{a \to \infty} \left( \lim_{b \to \infty} \chi_{[a, b]} \right) \D\lambda \\ \end{align*} Now use MCT just two more times to get back to the Riemann integral \begin{align*} \int_{\mathbb{R}} F \lim_{a \to \infty} \left( \lim_{b \to \infty} \chi_{[a, b]} \right) \D\lambda &= \lim_{a \to \infty} \lim_{b \to \infty} \int_{\mathbb{R}} F \chi_{[a, b]} \D\lambda \\ &= \lim_{a \to \infty} \lim_{b \to \infty} \int_{[a, b]} F \D\lambda \\ &= \lim_{a \to \infty} \lim_{b \to \infty} \int_a^b \frac{1}{1+x^2} \D x \\ \end{align*} which you already computed. Furthermore you could also argue why all $f_n$ and $F$ are Lebesgue measurable.
H: Prove $(\mathbb{Z} \times \mathbb{Z})/ \langle (2,3)\rangle$ is isomorphic to $\mathbb{Z}$. I'm trying to prove the following but im stumped: Prove that $(\mathbb{Z} \times \mathbb{Z})/\langle (2, 3)\rangle \cong \mathbb{Z}$. My attempts so far have been to try and find a single generator of this group. Since its obviously infinite, a single generator would mean its cyclic, and an infinite cyclic group is trivially isomorphic to $\mathbb{Z}$ by simply mapping the generator to $1$. However, i don't see how its possible for this to have a single generator. AI: Since $\gcd(2,3)=1$ we have $x,y\in\Bbb Z$ such that $2x+3y=1$. For example, $x=2,y=-1$. Now, consider the element $$a:=(y,-x)+\big\langle(2,3)\big\rangle.$$ Now, $3a=(1,0)+\big\langle(2,3)\big\rangle$ and $-2a=(0,1)+\big\langle(2,3)\big\rangle$. Note that, $(1,0),(0,1)$ generates $\Bbb Z\times\Bbb Z$. Hence, $a$ generates $\frac{\Bbb Z\times\Bbb Z}{\langle(2,3)\rangle}$. Notice that $3a=(3y,-3x)+\big\langle (2,3)\big\rangle=\big\{(3y,-3x)+(2n,3n)\big|n\in\Bbb Z\big\}$. So, $(1,0)-(3y,-3x)=(2x,3x)\in \big\langle(2,3)\big\rangle$.
H: Let $p$ be an odd prime, and $f(x):=x^{p^2-p} -1+(x+1)^p.(x+2)$ Solve $f(x)\equiv0\pmod p.$ Find the number of incongruent solutions of Solve $f(x)\equiv0\pmod{p^2}.$ I think i can raise solutions to the second one from the first one by Hensel's Lemma but i am not sure how to do the first part. Any help? AI: Let's quit first the case when $p\mid x$: If this happens, you can just delete the $x$ in all the equation and see if the result is $0$ mod $p$. So it becomes: $$f(x)= -1+(1)^p\cdot(2)=-1+2=1\not\equiv 0\pmod p$$ Now if $p\not\mid x$: First let's see that $x^{p^2-p}=(x^{p-1})^p$ and by the Fermat's little theorem, $x^{p-1}\equiv 1 \pmod p$, (because $\gcd(x,p)=1$) So you will get that $(x^{p-1})^p\equiv 1^p\equiv 1 \pmod p$. Now you can do the next step: $$f(x)\equiv 1-1+(x+1)^p\cdot(x+2)\equiv (x+1)(x+2)$$ And the only cases you must solve are when $x+1\equiv 0 \pmod p$ and $x+2\equiv 0 \pmod p$. So $x\equiv -1,-2\pmod p$ are your solutions.
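A sketch of the lift to modulus $p^2$, which is the part the question actually asks about (the Hensel step you mention; check the details yourself). Since $f'(x)\equiv(x+1)^p\equiv x+1\pmod p$, the root $x\equiv-2$ is nonsingular ($f'(-2)\equiv-1\not\equiv0$) and lifts to exactly one solution mod $p^2$. The root $x\equiv-1$ is singular, so check it directly: writing $x=-1+tp$, $$(x+1)^p=(tp)^p\equiv0\pmod{p^2},\qquad x^{p^2-p}=(-1+tp)^{p^2-p}\equiv1-(p^2-p)tp\equiv1\pmod{p^2},$$ so $f(-1+tp)\equiv1-1+0\equiv0\pmod{p^2}$ for every $t$, i.e. all $p$ lifts are solutions. Altogether this gives $p+1$ incongruent solutions of $f(x)\equiv0\pmod{p^2}$.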
H: Applications of Integration: PDF The probability of showing the first symptoms at various times during the quarantine period is described by the probability density function: f(t) = (t-5)(11-t) (1/36) Find the probability that the symptoms will appear within 7 days of contact. F(t) = (1/36)(8.t^2 -t^3/3 -55t) P(x<7) = F(7) - F(0) After taking the integral I equate t to 7 but that doesn't give me the right answer. Expected answer is: 0.259 AI: It is understood that that the denisty is $0$ for $x <5$ since you cannot have negative values for the density. Hence the answer is $\int_5^{7} \frac {(t-5)(11-t)} {36}dt$ and this does work out to the approximate value $0.259$.
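Explicitly, substituting $u=t-5$: $$\int_5^7\frac{(t-5)(11-t)}{36}\,dt=\frac1{36}\int_0^2u(6-u)\,du=\frac1{36}\left[3u^2-\frac{u^3}{3}\right]_0^2=\frac1{36}\cdot\frac{28}{3}=\frac{7}{27}\approx0.259.$$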
H: Unable to Solve a quiz question asked in mathematics exam ( Quantitative Aptitude) I am self studying for an exam and I am unable to solve this quiz question. Adding it's image -> I tried by finding numbers in the sentences but couldn't find and I think that's a wrong approach. Can anyone please tell how to solve this question. Answer is B. AI: Numbers are spelled out in each phrase. The first has eleven (Tinselevent), the second nine. Look for the other two. There is no excuse for this being called mathematics.
H: area of triangle using sides (ratio?) Given a triangle ABC, points D, E and F are placed on sides BC, AC and AB, respectively, such that BD : DC = 1 : 1, CE : EA = 1 : 3 and AF : FB = 1 : 4. A line parallel to AB is drawn from D to G on side AC. Lines DG and EF meet at X. If the area of triangle ABC is 120, what is the area of triangle DEX? I found that AFE is 20, ECD is 20 and BFXD is 75 (i think this is wrong). Subtracting those from 120, I got 5 but it doesn't seem correct. Can anyone please tell me where i went worng? AI: Notice DEX is CDG-ECD-EGX. You can see that ABC is similar to CDG with a 1:2 ration of side lengths so that means that the ratio of their areas is 1:4. Thus the area of CDG is $120*\frac{1}{4}=30$. Now, we can find the area of CDE by saying the ratio between CE and AC is 1:4 and by noticing that if we use EC as the base of CDE then its height would be half of what it would be if we used B instead of D. So we can say EDC has area $120*\frac{1}{2}*\frac{1}{4}=15$. Now to get the area of DEX, we have to subtract out the EGX from the triangle CDG having already subtracted out the area of CDE. We can say that because DG is parallel to AB that the ratio between AG and CG is 1:1. This implies that the ratio between EG and AE is 1:3. Finally, because EGX is similar to AEF, by calculating the area of AFE we can find the area of GXE. The area of AFE can be calculated by noticing the ratio between AF and AB is 1:5 and the ratio of AE and AC 3:4. So the area of AEF is $120*\frac{1}{5}*\frac{3}{4} = 18$ making the area of EGX $18*\frac{1}{9} = 2$. Thus, we can calculate the area of DEX which is finally $30-15-2 = 13$.
H: Unable to think about a question based on Directions in Aptitude I am self studying quantitative aptitude and I am unable to solve this particular question. Question is -> Starting from Point A you fly one mile south, one mile east, then 1 mile north, which brings you back to Point A. Point A is not the North Pole. In which hemisphere are you? Attempt -> I assumed a starting point (A) and then proceeded with the directions given in the question, but the problem is that I could not say which hemisphere I was actually in at A. So, I can't solve this problem and need help. AI: You are in the southern hemisphere. As you travel south you land on a point $B$; when you then travel one mile east, you need to end up back at the same point $B$, so that the final mile north returns you to $A$. This is only possible if you are close enough to the South Pole. An explicit construction: consider a circle of latitude of circumference $1$ mile near the South Pole, and let your starting point $A$ be the point one mile due north of it. Travelling south you reach the circle, the one-mile eastward leg takes you exactly once around it back to the same point, and the mile north brings you back to $A$.
H: How do I show that $\lfloor{(\sqrt 2+1)^{2020}}\rfloor\equiv 1\left(\text{mod}~ 4\right)$? I think I can use that $$(a\sqrt b+c)^{2n}+(a\sqrt b-c)^{2n}\in\mathbb{Z},$$ but I have no idea about the next step. AI: Let $r,s$ be given by \begin{align*} r&=1+\sqrt{2}\\[4pt] s&=1-\sqrt{2}\\[4pt] \end{align*} Noting that $r,s$ are the roots of the equation $$x^2-2x-1=0$$ it follows that for all integers $n$, we have \begin{align*} &r^n-2r^{n-1}-r^{n-2}=r^{n-2}(r^2-2r-1)=0\\[4pt] &s^n-2s^{n-1}-s^{n-2}=s^{n-2}(s^2-2s-1)=0\\[4pt] \end{align*} hence by summing the above and letting $t_n=r^n+s^n$, we get $$t_n-2t_{n-1}-t_{n-2}=0$$ for all integers $n$. Noting that $t_0=2$ and $t_1=2$, an easy induction shows that $t_n$ is an integer$\\[4pt]$ $t_n\equiv 2\;(\text{mod}\;4)$ for all nonnegative integers $n$. Now suppose $n$ is an even positive integer. Noting that $-1 < s < 0$, it follows that $0 < s^n < 1$, hence $$t_n-1=r^n+s^n-1=r^n+(s^n-1) <\, r^n <\, r^n + s^n=t_n$$ so $$ \lfloor{r^n}\rfloor=t_n-1\equiv 1\;(\text{mod}\;4) $$
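This is easy to confirm with exact integer arithmetic (an illustrative check, not part of the proof): writing $(1+\sqrt2)^n=a_n+b_n\sqrt2$ with integers $a_n,b_n$, we have $\lfloor(1+\sqrt2)^n\rfloor=a_n+\lfloor\sqrt{2b_n^2}\rfloor$.

```python
from math import isqrt

a, b = 1, 0                      # (1 + sqrt 2)**0 = a + b*sqrt(2)
for n in range(1, 2021):
    a, b = a + 2 * b, a + b      # multiply by (1 + sqrt 2)
    if n in (2, 4, 10, 100, 2020):
        print(n, (a + isqrt(2 * b * b)) % 4)   # prints 1 for every even n listed
```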
H: Separable metric spaces have countable bases. Suppose $X$ is a separable metric space, and $E$ is a countable dense subset of $X$. I want to show that the set $$B=\{B_r(x)\mid (r,x)\in\Bbb Q^+\times E\}$$ is a basis for $X$. We want to show $$X=\bigcup_{(r,x)}B_r (x),$$ The backward inclusion is easy. For the forward inclusion, suppose $y\in X$. Fix $x\in E$, and take $r\in\Bbb Q^+$ so that $d(x,y)<r/2$. Then $y\in B_r (x)$. Fix $p,q\in E$ and $r,s\in\Bbb Q^+$, and consider $x\in B_r(p)\cap B_s(q)$. Since $B_r(p)\cap B_s(q)$ is open, there is a neighbourhood $B_h(x)\subset B_r(p)\cap B_s(q)$ for some $h\in \mathbb{R}^+$. Fix $y\in E$ so that $y\in B_h(x)$. Choose a $h'\in\mathbb{Q}$ so that $d(x,y)<h'<h$. Then $B_{h'}(y)\subset B_r(p)\cap B_s(q)$ and $x\in B_{h'}(y)$. Does this work? AI: No, it does not work. By that argument, if $x\in X$, then $\{B_r(x)\mid r\in\Bbb Q^+\}$ is a basis of the topology of $X$, since $X=\bigcup_{r\in\Bbb Q^+}B_r(x)$. What you are supposed to prove is that, if $A$ is an open subset of $X$, then $A$ can be written as a union of balls of the type $B_r(x)$, with $x\in E$ and $r\in\Bbb Q^+$. That's not hard. For each $a\in A$, take $r\in\Bbb Q^+$ such that $B_r(a)\subset A$. Since $E$ is dense, there is some $x\in E\cap B_{r/2}(a)$. So, $a\in B_{r/2}(x)$ and $B_{r/2}(x)\subset A$. Can you take it from here?
H: Without calculating, tell which is a perfect square: 1022121; 2042122; 3063126; 4083128 I am trying aptitude questions but was struck on this problem. Which of the following numbers is a perfect square? A) $1022121\quad$ B) $2042122\quad$ C) $3063126\quad$ D)$4083128$ (original problem image) As a perfect square always has last digit $0$, $1$, $4$, $5$, $6$, $9$. So, B and D are eliminated. But I don't know how to eliminate A or C. The answer is A. AI: Perfect squares give remainder $0$ or $1$ after division by $3$ and also $4$, both of which can be checked quickly. The C gives same remainder as $26$ after division by $4$, which is $2\neq 0,1$.
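For anyone who wants to confirm afterwards (against the spirit of the exercise, but a useful illustration of the two congruence tests):

```python
from math import isqrt

for n in (1022121, 2042122, 3063126, 4083128):
    print(n, n % 3, n % 4, isqrt(n) ** 2 == n)
# only 1022121 has residues in {0, 1} for both moduli, and indeed 1011**2 == 1022121
```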
H: Continuous mapping not exist on $\mathbb{R}^2$ Suppose that $\Delta\subseteq\mathbb{R}^2$ is a closed triangle area, $\partial \Delta$ its boundary (namely the 3 sides of the triangle). I'd like to show that there doesn't exist a continuous mapping from $\Delta$ to $\partial \Delta$, which maps each side of $\partial \Delta$ to itself. This problem may have something to do with Brouwer's fixed point theorem, but I have no idea now. Thanks in advance for any thoughts. AI: If we want to fix $\partial\Delta$ (i.e. $f(x)=x$ for points in the boundary) then such a map does not exist by the no retraction theorem which is indeed a consequence of Brouwer's fixed point theorem, as $(\Delta,\partial \Delta) \simeq (\Bbb B^2,\Bbb S^1)$.
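For the question as literally stated (each side mapped into itself, but not necessarily fixed pointwise), a sketch of the reduction, with details left to check: a vertex lies on two sides, and its image must lie in both of those sides, so every vertex is fixed; for any $x$ in a side $S$, both $x$ and $f(x)$ lie in the convex set $S\subset\partial\Delta$, so $$H(x,t)=(1-t)x+t\,f(x)$$ is a homotopy within $\partial\Delta$ from the identity of $\partial\Delta$ to $f|_{\partial\Delta}$. Thus $f|_{\partial\Delta}$ has degree $1$ as a self-map of the circle $\partial\Delta$, while a map $\partial\Delta\to\partial\Delta$ that extends continuously over all of $\Delta$ must have degree $0$, a contradiction, so no such $f$ exists.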
H: Proving a function$f:\mathbb{R}^n\longrightarrow\mathbb{R}^n$ is onto. Question: Let $f:\mathbb{R}^n\longrightarrow\mathbb{R}^n$ be a continuously differentiable function satisfying $||f(x)-f(y)||\geq||x-y||$ for all $x,y\in \mathbb{R}^n$. Show that $f$ is onto. 1. I know that, a function $f:\mathbb{R}\longrightarrow \mathbb{R}$ is continuously differentiable if $f'$ exists and continuous on $\mathbb{R}$. But what is the definition of continuously differentiable function on $\mathbb{R}^n$? 2. Next, how to prove $f$ is onto under that assumption in the question? Thank you. AI: Given distinct points $x_1,x_2\in\mathbb{R}^n$, $\|f(x_1)-f(x_2)\|\geq\|x_1-x_2\|>0$ implies that $f(x_1)\neq f(x_2)$. Hence $f$ is injective. By the invariance of domain theorem, the image $U$ of $f$ is open in $\mathbb{R}^n$. Let $y$ be a limit point of $U$ and $(y_k)_{k\in\mathbb{N}}$ be a sequence in $U$ converging to $y$. Then $(y_k)$ is a Cauchy sequence. Put $x_k=f^{-1}(y_k)$ for each $k$. Since $\|x_k-x_m\|\leq\|y_k-y_m\|$ for all $k,m\in\mathbb{N}$, the sequence $(x_k)_{k\in\mathbb{N}}$ is Cauchy, hence has a limit $x$. By continuity of $f$, we have \begin{align} f(x)=\lim_{k\to\infty}f(x_k)=\lim_{k\to\infty}y_k=y\,. \end{align} Hence $y\in U$. This proves $U$ to be closed. The only subsets of $\mathbb{R}^n$ both open and closed are the empty set and $\mathbb{R}^n$ itself. The set $U$ cannot be empty, so it has to be $\mathbb{R}^n$. Note that $f$ is not required to be continuously differentiable here, only continuous. Perhaps it's because the invariance of domain theorem is not assumed, and instead we have, for example, the inverse function theorem.
H: Orthogonal complements really confuse me, I think its the notation? For example what do I do here, I know wha to do for part a but then...? Let$$W=\operatorname{Span}\left\{\left(\begin{array}{c}1\\1\\0\\0\end{array}\right),\,\left(\begin{array}{c}0\\1\\1\\-1\end{array}\right)\right\}$$ (a) Show $v=\left(\begin{array}{c}1\\-1\\1\\0\end{array}\right)\in W^{\perp}$. (b) Determine a basis for $W^{\perp}$. (c) Determine a matrix $A$ such that $W$ is the row space of $W$ and $W^{\perp}$ is the null space of $A$. AI: Let $S$ be some set of vectors. Then the 0rthogonal complement of $S$ is denoted by $S^{\perp}$ and is defined as the space of all those vectors in the vector space $V$ such that they are orthogonal to every vector in $S$. So $$S^{\perp}=\{x \in V\, | \, x \cdot s=0 \,\, \forall s \in S\}.$$ In your question $W$ is a subspace generated by two given vectors (call them) $a,b$. So any vector $w \in W$ can be written as a linear combination of $a$ and $b$. This means $w=c_1a+c_2b$ for some $c_1,,c_2 \in \Bbb{R}$ (assuming the field is real numbers). So if we want to test that the given $v \in W^{\perp}$, then all we have to do is to verify if $v$ is orthogonal to BOTH $a$ and $b$. Because once it is then it will be orthogonal to every $w \in W$. So try testing that. For finding a basis, there are a couple of ways of doing that. Here is a (sort of) standard approach. First find $W^{\perp}$. Let $X=\begin{bmatrix}x &y&z&w\end{bmatrix}^T \in W^{\perp}$, then $X \cdot a=0$ and $X \cdot b=0$ implies \begin{align*} x+y&=0\\ y+z-w&=0 \end{align*} Thus the solution set is $$W^{\perp}=\left\{z\begin{bmatrix}1\\-1\\1\\0\end{bmatrix}+w\begin{bmatrix}-1\\1\\0\\1\end{bmatrix} \, \Big| \, z,w \in \Bbb{R}\right\}.$$ Thus a basis for $W^{\perp}$ is $$\left\{\begin{bmatrix}1\\-1\\1\\0\end{bmatrix}, \,\begin{bmatrix}-1\\1\\0\\1\end{bmatrix} \right\}.$$
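For part (c) (a remark building on the computation above): the matrix whose rows are the two spanning vectors does the job, $$A=\begin{pmatrix}1&1&0&0\\0&1&1&-1\end{pmatrix}.$$ Its row space is $W$ by construction, and $A\mathbf{x}=\mathbf 0$ says precisely that $\mathbf x$ is orthogonal to both rows, i.e. $\operatorname{null}(A)=W^{\perp}$ (the same system solved above). Part (a) also drops out of the computation: the given $v$ is exactly the first basis vector of $W^{\perp}$ found there, and indeed $v\cdot(1,1,0,0)=0$ and $v\cdot(0,1,1,-1)=0$.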
H: Proving that $0<e-S_{n}<\frac{4}{(n+1)!}$ I need to prove that $0<e-S_{n}<\frac{4}{(n+1)!}$ $\quad\forall n\in\mathbb{N}\quad$ for $\quad S_{n}=1+1+\frac{1}{2!}+\ldots+\frac{1}{n!}$ Previously, I proved that $2<e<4$, and that $S_n$ is the Taylor polynomial of $e$. I thought of proving it using induction: Base: $S_1=1<2<e<4$ Step - for $n+1$: $e-S_{n+1}=e-S_{n}-\frac{1}{(n+1)!}$ What am I missing here? It's supposed to be simple. Thank you! AI: You have $$\begin{aligned}0 < e- S_n &= \sum_{k=n+1}^\infty \frac{1}{k!}\\ &= \frac{1}{(n+1)!}\left(1 + \frac{1}{n+2} + \frac{1}{(n+2)(n+3) }+ \dots\right )\\ &< \frac{1}{(n+1)!}\left(1 + \frac{1}{n+2} + \frac{1}{(n+2)^2 }+ \dots\right )\\ &= \frac{1}{(n+1)!} \frac{1}{1-\frac{1}{n+2} } = \frac{1}{(n+1)!} \frac{n+2}{n+1} \le \frac{2}{(n+1)!} \end{aligned}$$ Or using Taylor's theorem: $$e - S_n = \frac{e^c}{(n+1)!}$$ with $c \in (0,1)$ and with what you already proved $$0 < e- S_n < \frac{4}{(n+1)!}$$
H: Show that for any positive integer $m$ there are $m$ consecutive positive integers each of which has at least $10$ positive divisors. In other words, show that for any positive integer $m$ there are $m$ consecutive positive integers $a+1, a+2, ..., a+m$ such that $τ(a + i) ≥ 10$ for each $i = 1, 2, ..., m)$ I don't think i even understood the question, let alone solve it. Can anyone help me with it? AI: If $n=p_1^4p_2$ (product of prime p0wers), then $\tau(n)=10$. So let us consider a sequence of numbers $N_1, N_2, N_3, \ldots $ such that $N_k=p_{1k}^4p_{2k}$, where $p_{1k}, p_{2k}$ are distinct primes in increasing order for all $k \geq 1$ (i.e. $p_{11}<p_{21}<p_{12}<p_{22}<\dotsb$). Then $\gcd(N_i,N_j)=1$ for $i \neq j$. Now we want to find a number $a$ such that \begin{align*} a& \equiv -1 \pmod{N_1}\\ a& \equiv -2 \pmod{N_2}\\ a& \equiv -3 \pmod{N_3}\\ \vdots & \equiv \vdots\\ a& \equiv -m \pmod{N_m}. \end{align*} Since the $N_i's$ are pairwise relatively prime so we can apply Chinese Remainder Theorem to guarantee that such an $a$ exists. Now consider $a+1, a+2, a+3, ...$. This is a sequence of consecutive integers, each of which is divisible by a number $N_i$ that has exactly $10$ positive divisors. Note: If you want more positive divisors than $10$, then just increase the power of one of the primes in $N_i$.
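A small computational illustration of the construction for $m=3$ (a sketch; the particular primes and the hand-rolled CRT are arbitrary choices):

```python
from math import isqrt

def tau(n):
    """Number of positive divisors of n, by trial division."""
    cnt = 0
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            cnt += 1 if d * d == n else 2
    return cnt

# pairwise coprime moduli of the form p1**4 * p2, so each has exactly 10 divisors
N = [2**4 * 3, 5**4 * 7, 11**4 * 13]

# Chinese Remainder Theorem, built up one congruence at a time: a ≡ -i (mod N[i-1])
a, mod = 0, 1
for i, Ni in enumerate(N, start=1):
    t = ((-i - a) * pow(mod, -1, Ni)) % Ni
    a, mod = a + mod * t, mod * Ni

for i in range(1, len(N) + 1):
    print(a + i, tau(a + i))     # three consecutive integers, each with >= 10 divisors
```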
H: how to determine dimension of bases? Hello guys let's say we have 3 vectors $u,v,w$ in $R^4$. Does bases dimension depends on dimension of subspace? For example in $R^4$ can we have bases which has $\dim=2$ or $\dim=3$? I was confused when I saw dim=3 bases in $R^4$ and then in another page I saw dim=4 bases in $R^4$. So can we say dimension of bases must be equal or less than subspace? so in this case if we have $R^4$ subspace then dimension of bases can be 1,2,3. SO we just need to select 1 or 2 or 3 independent vectors then it will be bases? AI: I'm not entirely following your question, but perhaps this might be helpful. I believe it is more clear to speak about a basis which spans a subspace of a given dimension. If we take $\mathbf{R}^{4}$, it is possible to have subspaces that have dimension 1, 2, 3, or 4. A subspace of dimension 1 would require a single vector to span it, a subspace of dimension 2 would require 2 linearly independent vectors to span it, etc. The dimension 1 subspace has a basis consisting of one vector which spans it, and the dimension 2 subspace consists of a basis with two vectors which spans it. Please note that since we are in $\mathbf{R}^{4}$, each of the vectors mentioned has four components, like $\mathbf{x}=\begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{bmatrix}$, but the number of components is not the same as the "dimension" as used when talking about the dimension of subspaces. I hope this helps.
H: Proof Verification: Cartesian Product of Finitely Many Nonempty Sets is Nonempty (Without AoC) Synopsis Please verify my proof. I would also appreciate any tips on how I might improve my mathematical writing. Thank you. Exercise Assume that $H$ is a function with finite domain $I$ and that $H(i)$ is nonempty for each $i \in I$. Without using the axiom of choice, show that there is a function $f$ with domain $I$ such that $f(i) \in H(i)$ for each $i \in I$. [Suggestion: Use induction on $\text{card }I$.] Proof Let $P(n)$ be the condition that $\text{card }I = n$ and that $H(i)$ is nonempty for each $i \in I$. Let $Q(n)$ be the condition that there exists a function $f$ with $\text{dom}f = I$ and with $f(i) \in H(i)$ for all $i \in I$. Consider the set $S = \{n \in \omega \mid P(n) \Rightarrow Q(n)\}$. We will show through induction that $S$ coincides with $\omega$. The base case is vacuously true. For the inductive hypothesis, suppose that $P(k) \Rightarrow Q(k)$. We wish to show that $P(k^+) \Rightarrow Q(k^+)$. Suppose that $\text{card }I = k^+$. Then there exists a bijection $f$ from $I$ onto $k^+$. Consider the set $A \subseteq I$ where $A = f^{-1}[\![k]\!]$. Clearly, $\text{card }A = k$, and by the inductive hypothesis, there exists a function $g$ with domain $A$ and with the property that for each $i \in A$, $g(i) \in H(i)$. Now consider the set $B \subseteq I$ where $B = f^{-1}[\![k^+ - \{0\}]\!]$. Similarly, $\text{card }B = k$, and the inductive hypothesis suggests the existence of a function $h$ such that $\text{dom }h = B$ and $h(i) \in H(i)$ for all $i \in B$. With these two functions at our disposal, we can now construct a third function $F = g \cup (h \upharpoonright \{f^{-1}(k)\})$ such that $\text{dom}F = I$ and $F(i) \in H(i)$ for all $i$ in $I$, satisfying $Q(k^+)$. Hence, $S$ is inductive, and coincides with $\omega$. Note that $h \upharpoonright \{f^{-1}(k)\}$ denotes the ordered pair $\langle f^{-1}(k), h(f^{-1}(k)) \rangle$ AI: Your proof is correct apart from one error in the definition of $F$: You meant to write $f^{-1}(k)$ instead of $f^{-1}(k^+)$. The second application of the induction hypothesis seems unnecessary. You can just write $i_0$ for the element of $I\backslash A$, choose an element $x$ of $H_{i_0}$ and define $F=g\cup\{(i_0,x)\}$. Style Critique: You focus a bit too much on technicalities, which sometimes makes it hard to quickly grasp the idea behind an argument. In particular, it is much better to use the language of natural numbers instead of ordinals, i.e. you should not use the "fact" that natural numbers are elements and subsets of each other, but simply consider them as elements of an ordered set. This also makes notations like $f^{-1}(k)$ (where $k$ could be interpreted as an element or a subset) less ambiguous.
H: Backwards direction of Cauchy Criterion for Sequences of Functions I am reviewing the proof of the Cauchy Criterion for sequences of functions and have a question regarding the backwards direction. Statement: Let $A\subseteq \mathbb{R}$ and $(f_n)$ be a sequence of real-valued functions with domain $A$. Then $(f_n)$ converges uniformly if and only if $\forall \epsilon >0$, $\exists N\in \mathbb{N}$ such that $m,n\geq N$ and $x\in A$ implies $|f_n(x)-f_m(x)|<\epsilon$. My question: For the backwards direction, we know that for each $x$ the sequence $(f_n(x))$ converges (to say $L_x$), but how do we know there is a single $N$ such that $n\geq N$ implies $|(f_n(x)-L_x|, |f_n(y)-L_y|<\epsilon$ for any choice of $x,y\in A$? The difficulty I am having is that the $N$ given in the proof of the Cauchy criterion for real sequences is dependent on the subsequence chosen (I am assuming a proof that uses the BW theorem). An answer is appreciated, but I would prefer a hint at this point to help. AI: Take $\varepsilon>0$, and take $\varepsilon'\in(0,\varepsilon)$. Then there is a $N\in\Bbb N$ such that$$m,n\geqslant N\implies|f_m(x)-f_n(x)|<\varepsilon'.$$ But then, if $n\geqslant N$,\begin{align}|L_x-f_n(x)|&=\left|\lim_{m\to\infty}f_m(x)-f_n(x)\right|\\&\leqslant\varepsilon'\\&<\varepsilon.\end{align}
H: Can one show that there exists a matrix $M$ such that $BA=M^k$ with the condition obtained? In a problem, I got $ AB=N^k $ where $\det(N)=1$. Can one show that there exists a matrix $M$ such that $BA=M^k$, where all $A,B$, and $M$ are $3 \times 3$ matrices with integer entries? AI: I'm not entirely sure what the question is, but I presume it is "Suppose $A$, $B$ and $N$ are $3\times 3$ integer matrices with the property that $AB=N^k$, where $\det(N)=1$. Prove that $BA=M^k$ where $M$ is a $3\times 3$ integer matrix". As $A$ and $B$ are integer matrices, $\det(A)$ and $\det(B)$ are integers, and since $\det(A)\det(B)=\det(AB)=\det(N)^k=1$ we get $\det(A)=\pm1$. Therefore $A^{-1}$ is also an integer matrix. Then $$BA=A^{-1}(AB)A=A^{-1}N^kA=(A^{-1}NA)^k=M^k$$ where $M=A^{-1}NA$ has integer entries.
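A tiny numerical illustration (my own sketch in Python with numpy, using made-up matrices that satisfy the hypotheses):

```python
# With det(A) = 1, A^{-1} is an integer matrix, B := A^{-1} N^k satisfies AB = N^k,
# and BA equals (A^{-1} N A)^k.
import numpy as np

N = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]])   # det(N) = 1
A = np.array([[2, 1, 0], [1, 1, 0], [0, 0, 1]])   # det(A) = 1
k = 3
A_inv = np.round(np.linalg.inv(A)).astype(int)     # integral because det(A) = 1
B = A_inv @ np.linalg.matrix_power(N, k)
M = A_inv @ N @ A
assert (A @ B == np.linalg.matrix_power(N, k)).all()
assert (B @ A == np.linalg.matrix_power(M, k)).all()
print(M)
```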
H: Find the solution set of $(\log_4x)^2+4\sqrt{(\log_4x)^2-\log_2x-4}=\log_2x+16$ My attempt: $$\dfrac{1}{4}(\log_2x)^2+4\sqrt{\dfrac{1}{4}(\log_4x)^2-\log_2x-4}=\log_2x+16$$ $$(\log_2x)^2+16\sqrt{\dfrac{1}{4}(\log_2x)^2-\log_2x-4}=\log_2x+16$$ Let $a=\log_2x$ $$a^2+8\sqrt{a^2-4a-16}=a+16$$ Because this leads to a fourth-degree polynomial, I think this might not be the most efficient way to solve it. AI: It should be $$a^2+8\sqrt{a^2-4a-16}=4a+64.$$ Now, we can use a substitution $a^2-4a-16=t^2,$ where $t\geq0$.
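Carrying the hint to its end (my own continuation, not stated in the answer): with $t^2=a^2-4a-16$, $t\ge0$, the corrected equation becomes $t^2+8t-48=0$, so $t=4$, giving $a\in\{8,-4\}$ and hence $x\in\{256,\tfrac1{16}\}$. A quick numerical check in Python:

```python
# Verify that x = 256 and x = 1/16 satisfy the original equation.
import math

def lhs(x):
    u = math.log(x, 4)
    return u ** 2 + 4 * math.sqrt(u ** 2 - math.log(x, 2) - 4)

def rhs(x):
    return math.log(x, 2) + 16

for x in (256, 1 / 16):
    print(x, lhs(x), rhs(x))
    assert math.isclose(lhs(x), rhs(x))
```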
H: Probability of getting 3 blue skittles in even-sized allotments from the same bag Let's say Skittles come in 3 colors: red, green, and blue. If I pour a bag of 90 Skittles into 10 bowls of 9 Skittles each, the probability of getting >= 3 blue Skittles in any bowl is 1/3. Is the probability of getting >= 3 blue Skittles in all 10 bowls also 1/3? I would think the probability of all bowls having >= 3 blue Skittles decreases with every additional bowl because of random variance: e.g. how is the all-bowl probability affected if 1 bowl has, say, 5/9 blue Skittles? AI: Assuming the bag contains $30$ skittles of each color, the probability of getting at least $3$ blue skittles in a bowl is $$ 1-\sum_{k=0}^2\frac{\binom{30}{k}\binom{60}{9-k}}{\binom{90}{9}}\approx.632509$$ There are $\binom{30}{k}$ ways to choose $k$ blue skittles, $\binom{60}{9-k}$ ways to choose $9-k$ non-blue skittles, and $\binom{90}{9}$ ways to choose $9$ skittles, so the fraction gives the probability of getting exactly $k$ blue skittles. The probability of getting at least $3$ blue skittles is the same as the probability of not getting $0,\ 1,$ or $2$. The number of ways to put $3$ blue skittles in each bowl is $$\frac{30!}{(3!)^{10}}$$ because we can line the skittles up in $30!$ ways, put the first $3$ in the first bowl, the next $3$ in the second bowl, and so on. However, the order in which the skittles go into the bowl doesn't matter, so we have to divide by $3!$ in each case. Similarly, there are $$\frac{60!}{(6!)^{10}}$$ ways to put $6$ non-blue skittles in each bowl, and $$\frac{90!}{(9!)^{10}}$$ ways to put $9$ skittles in each bowl, so the probability that there are exactly $3$ blue skittles in each bowl is $$\frac{30!}{(3!)^{10}}\cdot\frac{60!}{(6!)^{10}}\bigg/\frac{90!}{(9!)^{10}}=\frac{\binom{9}{3}^{10}}{\binom{90}{30}}\approx.00002598$$ I don't understand the last part. If one of the bowls has $5$ blue skittles, then the probability of having exactly $3$ blue skittles in each bowl is $0$.
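A quick numerical check of both numbers (my own sketch in Python, standard library only):

```python
from math import comb

# P(a single bowl of 9 contains at least 3 of the 30 blue skittles)
p_one_bowl = 1 - sum(comb(30, k) * comb(60, 9 - k) for k in range(3)) / comb(90, 9)
print(p_one_bowl)        # ~0.632509

# P(every one of the 10 bowls contains exactly 3 blue skittles)
p_all_bowls = comb(9, 3) ** 10 / comb(90, 30)
print(p_all_bowls)       # ~0.00002598
```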
H: Conditional Probability with regard to Discarding. An urn contains four (4) red chips and six (6) white chips. Two (2) chips are drawn out and discarded and a third chip is drawn. What is the probability that the third chip is red? Would the hypergeometric distribution be the best possible method for solving this? AI: The best solution is obviously the one written in the comment by @salulpatz: the probability that the first, the second, the third (and so on) chip drawn is red is always $\frac{4}{10}$. But if you do not notice that, then in a simple example like this exercise you can calculate it directly... After drawing 2 chips, the urn will contain one of the following: $\{2R;6W\}$ with a probability of $\frac{\binom{4}{2}}{\binom{10}{2}}=\frac{2}{15}$ $\{3R;5W\}$ with a probability of $\frac{\binom{4}{1}\binom{6}{1}}{\binom{10}{2}}=\frac{8}{15}$ $\{4R;4W\}$ with a probability of $\frac{\binom{6}{2}}{\binom{10}{2}}=\frac{5}{15}$ Now the probability of drawing a red chip is $$\mathbb{P}[R]=\frac{2}{15}\frac{2}{8}+\frac{8}{15}\frac{3}{8}+\frac{5}{15}\frac{4}{8}=\frac{2}{5}$$ as was already known before these tedious calculations.
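For completeness, a brute-force enumeration (my own sketch in Python) confirms the symmetry argument:

```python
# Enumerate all ordered draws of 3 distinct chips and count how often the third is red.
from itertools import permutations
from fractions import Fraction

chips = ['R'] * 4 + ['W'] * 6
draws = list(permutations(range(10), 3))
favourable = sum(chips[k] == 'R' for _, _, k in draws)
print(Fraction(favourable, len(draws)))   # 2/5
```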
H: If $\sum_{k=0}^2 c_kf^{(k)}(x) \ge 0$ show that $f(x)$ is a non-negative polynomial If for some polynomial $f(x)$ with real coefficients there exist $c_1,c_2 \in \mathbb{R}$, satisfying $c_1^2\ge 4c_2$, such that for all real values of $x$ the following inequality holds: $$\sum_{k=0}^2 c_kf^{(k)}(x) \ge 0, \ \ (c_0=1)$$ then show that $f(x)$ is a non-negative polynomial. $f^{(k)}(x)$ stands for the $k^{th}$ derivative of $f$. Assuming $f$ looks like $$f(x)=\sum_{i=0}^n a_ix^i$$ the given sum will become of the form $$\sum_{k=0}^2 c_kf^{(k)}(x) = \sum_{i=0}^na_ix^i + c_1\sum_{i=0}^n ia_ix^{i-1} + c_2\sum_{i=0}^n i(i-1)a_ix^{i-2}=$$$$=a_0+c_1a_1+a_1x+\sum_{i=2}^n a_ix^{i-2}(x^2+c_1ix+c_2i(i-1)) \ge0$$ But this does not simplify the problem. Any help is appreciated. AI: We have that $$ \left(1+c_1\frac{d}{dx}+c_2\frac{d^2}{dx^2}\right)f(x)\ge 0, \quad\text{for all $x\in\mathbb R$}. $$ Since $c_1^2\ge 4c_2$, the polynomial $\xi^2+c_1\xi+c_2$ has real roots, say $\mu,\nu$, i.e. $$ \xi^2+c_1\xi+c_2=(\xi-\mu)(\xi-\nu), $$ and $$ 1+c_1\frac{d}{dx}+c_2\frac{d^2}{dx^2} =\left(1-\mu\frac{d}{dx}\right)\left(1-\nu\frac{d}{dx}\right) $$ Now it suffices to show the following: If $\mu\in \mathbb R$ and $f(x)-\mu f'(x)\ge 0$, for all $x$, then $f(x)\ge 0$, for all $x$. If $\mu=0$, there is nothing to prove. Let $\mu> 0$. Clearly $$ f(x)-\mu f'(x)\ge 0 \quad\Longrightarrow\quad \frac{1}{\mu}f(x)- f'(x)\ge 0 \quad\Longrightarrow\quad \mathrm{e}^{-x/\mu}\big(f(x)/\mu-f'(x)\big)\ge 0 \quad\Longrightarrow\quad -\big(\mathrm{e}^{-x/\mu}f(x)\big)'\ge 0 \\ \quad\Longrightarrow\quad \int_{x_1}^{x_2}\big(\mathrm{e}^{-x/\mu}f(x)\big)'\,dx\le 0 \quad\Longrightarrow\quad \mathrm{e}^{-x_2/\mu}f(x_2)\le\mathrm{e}^{-x_1/\mu}f(x_1) $$ for all $x_2\ge x_1$. Since $\lim_{x_2\to\infty}\mathrm{e}^{-x_2/\mu}f(x_2)=0$, we obtain from the above that $$ 0\le\mathrm{e}^{-x_1/\mu}f(x_1)\quad\Longrightarrow\quad 0\le f(x_1) $$ for all $x_1\in\mathbb R$. If $\mu<0$, setting $\nu=-\mu>0,\,$ we have $$ f(x)+\nu f'(x)\ge 0 \quad\Longrightarrow\quad \frac{1}{\nu}f(x)+ f'(x)\ge 0 \quad\Longrightarrow\quad \mathrm{e}^{x/\nu}\big(f(x)/\nu+f'(x)\big)\ge 0 \quad\Longrightarrow\quad \big(\mathrm{e}^{x/\nu}f(x)\big)'\ge 0 \\ \quad\Longrightarrow\quad \int_{x_1}^{x_2}\big(\mathrm{e}^{x/\nu}f(x)\big)'\,dx\ge 0 \quad\Longrightarrow\quad \mathrm{e}^{x_2/\nu}f(x_2)\ge\mathrm{e}^{x_1/\nu}f(x_1) $$ for all $x_2\ge x_1$. Since $\lim_{x_1\to-\infty}\mathrm{e}^{x_1/\nu}f(x_1)=0$, we obtain from the above that $$ 0\le\mathrm{e}^{x_2/\nu}f(x_2)\quad\Longrightarrow\quad 0\le f(x_2) $$ for all $x_2\in\mathbb R$.
H: Differentiability of $x^2\sin(1/x)$ Using first principle, when we try to check the differentiability of $x^2\sin(1/x)$ at $x=0$, we get 0. But if we differentiate the function first, and then try to find differentiability at $x=0$, we find it's not differentiable. I have encountered similar questions on Stack Exchange, but none of them gave clarity on which one is right. Is the function differentiable or not? Why do the first principle and checking differentiability after differentiating give different answers? I know that the derivative of $x^2\sin(1/x)$ is not continuous. Is this the reason why we get different answers? AI: Presumably, you are talking about the continuous function $f:\Bbb R \to \Bbb R$ given by $$ f(x) = \begin{cases} x^2 \sin(1/x) & x \neq 0\\ 0 & x = 0. \end{cases} $$ As can be shown "using first principle" (by which I assume you mean the definition of the derivative), we find that the derivative of this function is given by $$ f'(x) = \begin{cases} 2x \sin(1/x) - \cos(1/x) & x \neq 0\\ 0 & x = 0. \end{cases} $$ Because the derivative of $f(x)$ exists at all $x \in \Bbb R$, the function $f(x)$ is indeed differentiable. This function is unusual, however, in that the derivative $f'(x)$ is not continuous. For a more typical function, we would find that if $f$ is differentiable at $x = 0$, then it would necessarily satisfy $f'(0) = \lim_{x \to 0}f'(x)$. As this example illustrates, this does not always need to be the case. Interestingly, Darboux's theorem implies that we cannot have a removable discontinuity or jump discontinuity in $f'(x)$. In other words, if $f'(x)$ exists but is discontinuous at $x = 0$, then it must be the case that $\lim_{x \to 0^+}f'(x)$ or $\lim_{x \to 0^-}f'(x)$ fails to exist.
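A small numerical illustration of the two behaviours (my own sketch in Python): the difference quotient at $0$ tends to $0$, while $f'(x)=2x\sin(1/x)-\cos(1/x)$ keeps oscillating as $x\to0$.

```python
import math

def f(x):
    return x * x * math.sin(1 / x) if x != 0 else 0.0

for h in (1e-1, 1e-3, 1e-5):
    difference_quotient = (f(h) - f(0)) / h                        # tends to 0, so f'(0) = 0
    derivative_near_0 = 2 * h * math.sin(1 / h) - math.cos(1 / h)  # does not settle down
    print(h, difference_quotient, derivative_near_0)
```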
H: Do not understand this basic conditional probability This example came from Tsitsiklis' Intro to Probability 2nd edition. I don't understand why $P(A|B)=2/5$ when $m=3$ or $m=4$. Surely it should be $4/5$? I get that $P(A|B)=1/5$ when $m=2$, because when we restrict our domain to the highlighted area (i.e. $B$), only 1 out of a total of 5 combinations matches the description. So if we apply the same logic, out of the 5 combinations, $(4,2),(3,2),(2,3),(2,4)$ fulfill the description, so shouldn't it be $4/5$? In fact in the MITx course that goes with this textbook, Tsitsiklis mentioned that $P(M=3|B)=2/5$ which makes sense; so how can the probability stay constant when $m$ is now larger (since we allow $m=3$ or $m=4$), which surely should allow more possibilities? AI: I think the confusion stems from the interpretation of the phrase $m = 3$ or $m = 4$. The probability $P(A \mid B)$ when $m = 3$ is indeed $2/5$ because the only pairs that fall under event $A = \left\{\max(X, Y) = 3\right\}$ are $(2, 3)$ and $(3, 2)$. Similarly, the probability $P(A \mid B)$ when $m = 4$ is also $2/5$ because the only pairs that fall under $A = \left\{\max(X, Y) = 4\right\}$ are $(2, 4)$ and $(4, 2)$. Hence, it is true that $P(\left\{\max(X, Y) = m\right\} \mid B)$ is $2/5$ when $m = 3$ or $m = 4$. Here, we note that the event $A$ is not defined as having $\max(X, Y) = 3 \text{ or } 4$. But if we take $A = \left\{\max(X, Y) = 3 \text{ or } 4\right\}$, then in this case, the pairs that fall under event $A$ are $(2, 3)$, $(2, 4)$, $(3, 2)$ and $(4, 2)$. In this case, $P(A \mid B)$ would be $4/5$. I think you misunderstood the book to imply this case. We have to be careful with how we interpret the phrase $m = 3$ or $m = 4$ in the book. Indeed, if we take $m = 3$ or $m = 4$, we get $2/5$ as the answer in both cases if we follow the definition of $A$ in the book. However, if we want to find the probability $P(A \mid B)$ with $A$ defined differently as the event where $\max(X, Y)$ is $3$ or $4$, then in this case, $4/5$ is the answer. Also, at the end of the page, $P(\left\{\max(X, Y) = m\right\} \mid B)$ is presented as a piecewise function whose value varies with $m$. In this case, the book is indeed correct in saying that $P(\left\{\max(X, Y) = m\right\} \mid B) = 2/5$ when $m = 3$ or $m = 4$.
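A quick enumeration (my own sketch in Python) matching this discussion; I am assuming the textbook's setup of two independent rolls of a fair four-sided die with $B = \{\min(X, Y) = 2\}$, which fits the five pairs listed in the question:

```python
from fractions import Fraction
from itertools import product

B = [(x, y) for x, y in product(range(1, 5), repeat=2) if min(x, y) == 2]

for m in (2, 3, 4):
    hits = sum(1 for x, y in B if max(x, y) == m)
    print(m, Fraction(hits, len(B)))              # 1/5, 2/5, 2/5

hits_3_or_4 = sum(1 for x, y in B if max(x, y) in (3, 4))
print(Fraction(hits_3_or_4, len(B)))              # 4/5
```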
H: Probability question: choosing between two options I am having trouble understanding the following exercise in probability and statistics: A person has to choose between two jobs. In Job1, they have a profit of 12k with a 75% probability, while they have a loss of 3k with probability 25%. In Job2, they have a profit of 18k with probability 50%, while they have a loss of 4.5k with probability 50%. If the person needs to earn at least 15k, which job should they choose? I cannot understand what I have to do here. This is an exercise in a revision, so anything is on the table. I am really confused by this, since I don't even know if any random variable follows some distribution. Could you please help me out? AI: Of course they have to choose Job2, independently of any profit expectation. This is because the goal is to earn AT LEAST 15k, but Job1 has a maximum profit of only 12k. In other words, choosing Job2 they have a 50% probability of reaching the goal, while choosing Job1 they have a 0% probability of reaching it.
H: Left adjoint of exponential functor Does the contravariant exponential functor have a left adjoint ? And if so what is it ? To elaborate, I know that in many categories, like $\textbf {Set}$ for example, the covariant exponential functor $\left(\_ \right)^A$ has a left adjoint (which is the product functor $\left(\_ \right)\times A$). However, given an object $B,$ there is also a contravariant functor $B^{\left(\_ \right)}$ associated with exponentiation. Here $B^{\left(\_ \right)}$ sends an object $A$ to $B^A.$ Also, $B^{\left(\_ \right)}$ sends an arrow $g: A\rightarrow A'$ to the arrow $B^f: B^{A'} \rightarrow B^A,$ where $B^f$ is the exponential transpose of $e \left( 1_{B^{A'}} \times g \right).$ So my question is can the functor $B^{\left(\_ \right)}$ have a left adjoint ? And if so what is it ? Or to be more specific, I could ask, does $B^{\left(\_ \right)}$ have a left adjoint when we are working in $\textbf {Set},$ or some more general Cartesian closed category ? Basically, I am trying to understand the nature of the contravariant exponential functor. If it helps, I would also like to know about its right adjoint (if it exists). AI: Let $\cal C$ be your category. Contravariant functors confuse me, so I'll think of $B^{(-)}$ as a functor $\cal C^{\textrm{op}}\to\cal C$. So I presume we want a functor $F:\cal C\to\cal C^{\textrm{op}}$ with $${\text{Hom}}_{\cal C}(Y,B^X)\cong{\text{Hom}}_{\cal C^{\textrm{op}}}(F(Y),X),$$ naturally, that is $${\text{Hom}}_{\cal C}(Y,B^X)\cong{\text{Hom}}_{\cal C}(X,F(Y)).$$ I reckon that $F(Y)=B^Y$ works here, certainly in the case of the category of sets, as $${\text{Hom}}_{\cal C}(Y,B^X)\cong{\text{Hom}}_{\cal C}(Y\times X,B) \cong{\text{Hom}}_{\cal C}(X\times Y,B)\cong{\text{Hom}}_{\cal C}(X,B^Y).$$
H: Does the Fundamental Theorem of Calculus tell us that integration is the 'opposite' of differentiation? I have often read that the Fundamental Theorem of Calculus (FTC) tells us that integration is the opposite of differentiation. I have always found this summary confusing, so I will lay out what I think people mean when they make such a statement. The First FTC implies the existence of antiderivatives for every function, $f$, that is continuous on a particular interval, say $[a,b]$. Generally, we denote this antiderivative as $F$. Differentiating $F$ gets back to our original function, $f$. So when people say that 'integration is the opposite of differentiation', what they mean is that an antiderivative of a function can be computed using a definite integral. The Second FTC is more powerful than the First FTC, as it tells us that definite integrals can be computed using the antiderivative of a function (which is generally more useful than knowing that one possible antiderivative of $f$ can be computed using a definite integral, $F$). For the Second FTC, I don't understand how this is related to 'integration being the opposite of differentiation' at all. The Second FTC shows us the link between antiderivatives (indefinite integrals) and definite integrals. It is extremely useful for trying to find the area under a curve, but I'm not sure how this relates to integration and differentiation being 'opposites'. Is there something about the First FTC or the Second FTC that has a bigger implication about integration being the opposite of differentiation, or is my understanding correct? AI: I think the first FTC: If $f: [a,b] \to \Bbb R$ is continuous then $F: [a,b] \to \Bbb R$ defined by $F(x)=\int_a^x f(t)dt$ is differentiable and $F'(x)=f(x)$ for all $x \in [a,b]$. is what people mean by saying that integration (which defines $F$) is the inverse of differentiation (as we have found a function with derivative $f$). The second FTC If $f: [a,b] \to \Bbb R$ is Riemann-integrable on $[a,b]$ and we have a function $F: [a,b] \to \Bbb R$ such that $F'(x)=f(x)$ on $[a,b]$, then $\int_a^b f(x)dx=F(b)-F(a)$. is more of a "recipe" to find an integral: the target is to compute the definite integral and the tool we're given is to find an antiderivative. So not an inverse as such but a method. It's a bit of an iffy one, as an antiderivative $F$ need not exist at all (except when $f$ is continuous, in which case the first FTC gives us one, though not explicitly: we know some antiderivative exists, but we don't have it in computable form). I think the first is closer to giving a direct "inverse" connection between integration and differentiation (and is often used in other contexts when we differentiate wrt boundaries of integrals, etc.). But that's just one view. The first FTC can be summarised as $$\frac{d}{dx}\int_a^x f(t)dt = f(x)$$ so "Applying the integration operator to $f$, followed by the differentiation operator gives us back $f$ again".
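A small numerical illustration of that summary (my own sketch in Python): approximate $F(x)=\int_0^x f(t)\,dt$ by a Riemann sum and differentiate it numerically; the result comes back close to $f(x)$.

```python
import math

def f(t):
    return math.cos(t)

def F(x, n=10_000):
    # crude Riemann-sum approximation of the integral from 0 to x
    h = x / n
    return sum(f(i * h) for i in range(n)) * h

x, eps = 1.3, 1e-4
numeric_derivative = (F(x + eps) - F(x - eps)) / (2 * eps)
print(numeric_derivative, f(x))   # both are close to cos(1.3)
```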
H: A non-unit which is neither irreducible, nor a product of irreducibles. I am having trouble with this exercise Find a non-unit $d \in D = \{f \in \mathbb{Q}[x] \mid f(0) \in \mathbb{Z}\}$ where $d$ is neither irreducible, nor a product of irreducibles. I find it contradictory that an element can be non-irreducible, yet not a product of irreducibles. For those interested, this is exercise 32.10 from A First Course in Abstract Algebra (Rings, Groups and Fields) The Third Edition by Marlow Anderson and Todd Feil. AI: Consider the element $x \in D$. Notice that $x = 2(\frac{1}{2} x)$. Now $2$ is not a unit since $\frac{1}{2} \notin D$, and neither $x$ nor $\frac{1}{2}x$ is a unit (neither is a unit in $\mathbb{Q}[x]$, having degree $1$). Hence $x$ is a non-unit which is not irreducible. Moreover a similar argument shows that $\alpha x$ is not irreducible for any $\alpha \in \mathbb{Q}^*$. It is not hard to see that if $f(x), g(x) \in D$ are such that $f(x)g(x) = x$ then one of $f(x)$ or $g(x)$ is of the form $\alpha x$. Consequently, any factorization of $x$ in $D$ contains a factor of the form $\alpha x$, and since no such factor is irreducible, $x$ cannot be a product of irreducibles.
H: Logical implication and conjunction in transitive relation definition In the properties of relations, the transitive relation is defined as follows. If I read it informally, it says, "If $(a,b) \in R$ and $(b,c) \in R$, then $(a,c) \in R$." What surprised me was when the author of the paper said that this definition meant, if a relation contains just $(a, b) \in R$ but not $(b, c)$, then R is transitive. Also, in his examples, he shows that $R_1$, $R_2$ and $R_5$ are transitive relations, whereas to me, since one statement of the conjunction is false, the conjunction is false, and it doesn't proceed to the then part, so $R_1$, $R_2$ and $R_5$ are not transitive. Q. Am I reading and applying this conjunction incorrectly? Q. Isn't the author incorrectly translating the conjunction in the definition? AI: Let $p$ and $q$ be any propositions. Then the implication $p \rightarrow q$ is the proposition defined as follows: $$ \begin{matrix} p \ & q \ & p \rightarrow q \\ F \ & F \ & T \\ F \ & T \ & T \\ T \ & F \ & F \\ T \ & T \ & T \end{matrix} $$ The truth of $p \rightarrow q$ when $p$ is False is said to be vacuous. Note that the implication is False when and only when $p$ is True but $q$ is False. Now in your case, $p$ and $q$ are given as follows: $$ p \ \colon \ (a, b) \in R \land (b, c) \in R $$ and $$ q \ \colon \ (a, c) \in R. $$ In fact your $p$ and $q$ are the propositional functions defined by $$ p(a, b, c) \ \colon \ (a, b) \in R \land (b, c) \in R $$ and $$ q(a, c) \ \colon \ (a, c) \in R. $$ Thus for each triple $(a, b, c)$ of elements of the set on which $R$ is defined, you are to show that $$ p(a, b, c) \rightarrow q(a, c) $$ is True. Hope this helps.
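As a quick sanity check of the vacuous-truth point (my own sketch in Python): a relation that contains $(a,b)$ but no pair starting with $b$ is transitive, because the hypothesis of the implication is never satisfied.

```python
def is_transitive(R):
    return all((a, c) in R for a, b in R for b2, c in R if b == b2)

print(is_transitive({(1, 2)}))                   # True (vacuously: no chain a->b->c exists)
print(is_transitive({(1, 2), (2, 3)}))           # False: (1, 3) is missing
print(is_transitive({(1, 2), (2, 3), (1, 3)}))   # True
```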
H: Why is this assumption needed in Cauchy's theorem? I am studying complex analysis and Cauchy's theorem states: Suppose that a function $f$ is analytic in a simply connected domain $D$ and that $f'$ is continuous in $D$. Then for every simple closed contour $C$ in $D$, $\oint_C f(z)dz = 0$ Next after this theorem the book presents Cauchy-Goursat theorem which states that we don't actually need $f'$ to be continuous as assumption. My question: If it is given that function $f$ is analytic in a domain $D$ doesn't it mean that function $f$ is infinitely differentiable in that domain? Then we know that $f'$ is differentiable and so we know that $f'$ must be continuous. What I don't understand is why it is a big deal removing the assumption of continuous derivative if it is already implied by analyticity of the function. What am I missing? AI: The version I know of this theorem states only the hypothesis that $f$ has a (complex) derivative, except possibly at a finite number of points. Furthermore, the proof that, if $f$ is holomorphic, it is infinitely differentiable depends on this theorem.
H: Naive Bayes Classifier - With Lagrange Variable - Derivation I am running through this link to understand better the derivation of the MLE for Naive Bayes: https://mattshomepage.com/articles/2016/Jun/26/multinomial_nb/ In particular, I am confused as to this part: $L=\sum_{i=1}^{N}\sum_{j=1}^Pf_{ij}\log(\theta_j)+\lambda(1-\sum_{j=1}^P\theta_j)$ When taking the derivative: $\frac{\partial L}{\partial {\theta_k}} = \sum_{i=1}^N \frac{f_{ik}}{\theta_k} - \lambda =0$ why does the derivative of $f_{ij}$ go to $f_{ik}$ (similarly for the $\log(\theta_j)$: the differential goes to $\theta_k$)? Can someone help explain why this is the case? Thanks AI: $$\frac{\partial L}{\partial {\theta_k}} = \frac{\partial}{\partial {\theta_k}} (\sum_{i=1}^{N}\sum_{j=1}^Pf_{ij}\log(\theta_j)+\lambda(1-\sum_{j=1}^P\theta_j)) = \sum_{i=1}^{N}\sum_{j=1}^P\frac{\partial f_{ij}\log(\theta_j)}{\partial {\theta_k}}-\lambda(\sum_{j=1}^P\frac{\partial \theta_j}{\partial {\theta_k}})$$ As $f_{ij}$ does not depend on $\theta_k$ and neither does $\theta_j$ for $j \neq k$ we have $$\frac{\partial f_{ij}\log(\theta_j)}{\partial {\theta_k}} = \begin{cases} 0 & \quad j \neq k \\ \frac{f_{ij}}{\theta_k} & \quad j = k \end{cases}$$ $$\frac{\partial \theta_j}{\partial {\theta_k}} = \begin{cases} 0 & \quad j \neq k \\ 1 & \quad j = k \end{cases}$$ If you insert this into the above-mentioned formula you will indeed get $$\frac{\partial L}{\partial {\theta_k}} = \sum_{i=1}^N \frac{f_{ik}}{\theta_k} - \lambda$$
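A numerical spot-check of that partial derivative (my own sketch in Python with numpy), comparing the closed form with a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.integers(1, 10, size=(4, 3)).astype(float)   # counts f_ij with N = 4, P = 3
lam = 2.0

def L(theta):
    return np.sum(f * np.log(theta)) + lam * (1 - np.sum(theta))

theta = np.array([0.2, 0.3, 0.5])
k, eps = 1, 1e-6
analytic = np.sum(f[:, k]) / theta[k] - lam
bump = np.zeros_like(theta)
bump[k] = eps
numeric = (L(theta + bump) - L(theta - bump)) / (2 * eps)
print(analytic, numeric)   # the two values agree to several decimal places
```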
H: Calculating the volume with a triple integral Calculate the volume of the region $V$ bounded by the planes $$x+y=1, x+y=2, x=0, y=0, z=0, z=1$$ The projection onto the $xy$-plane is the trapezoid in the first quadrant between the lines $x+y=1$ and $x+y=2$. I tried finding the volume without integrals first: $$V=Sh=\frac{1}{2}(2\cdot2-1\cdot1)\cdot (1-0)=\frac{3}{2}$$ And I got a completely different answer when calculating with integrals: $$\int_{0}^{1}dz\int_{0}^{2}dx\int_{1-x}^{2-x}dy=\int_{0}^{1}dz\int_{0}^{2}dx(2-x-1+x)=\int_{0}^{1}dz\cdot2=2$$ Where did I go wrong? AI: What you did using integrals is not correct. The part corresponding to $z$, yes, it is fine, but the part corresponding to $x$ and $y$ should be$$\int_0^1\int_{1-x}^{2-x}1\,\mathrm dy\,\mathrm dx+\int_1^2\int_0^{2-x}1\,\mathrm dy\,\mathrm dx=\frac32.$$
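For a quick cross-check of the corrected setup (my own sketch in Python with sympy):

```python
from sympy import symbols, integrate, Rational

x, y = symbols('x y')
part1 = integrate(1, (y, 1 - x, 2 - x), (x, 0, 1))   # strip with 0 <= x <= 1
part2 = integrate(1, (y, 0, 2 - x), (x, 1, 2))       # strip with 1 <= x <= 2
total = part1 + part2                                 # the z-integral just contributes a factor 1
print(total)                                          # 3/2
assert total == Rational(3, 2)
```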