H: Russian roulette statistics I have the following probability problem. "You're playing a game of Russian roulette, the revolver has 6 chambers in which you put 3 bullets completely at random. What is the probability that all bullets are next to each other?" The solution says 0.3. I have no clue how to start with this problem or which kind of distribution to choose. Can I get feedback on this problem? Thanks, Ter AI: Since all the bullets are interchangeable, the number of ways we can load 3 bullets into a six-shooter is "6 choose 3", which is 20. Of those 20 configurations, we can see that there are exactly 6 that have all the bullets adjacent - one with chambers 1,2,3 loaded, one with chambers 2,3,4, and so on, including the ones that wrap around like 6,1,2. With 20 equiprobable configurations and 6 that satisfy the criterion, the probability of randomly getting one is 6/20, or 0.3. You can also arrive at this answer by sequentially picking positions. It doesn't matter where the first bullet goes. For the second bullet, you have a 1/5 chance of putting it one chamber to the left, and then you have a 2/4 chance of putting the third bullet on either side of the first two, for a total chance of 2/20. You could also put the second bullet two chambers to the left of the first one with 1/5 chance, but that only gives you a 1/4 chance of putting the third bullet between the first two, for a total chance of 1/20. These probabilities are exactly mirrored by considering bullets placed on the right instead of the left, so your overall chance is 2 x (2/20 + 1/20), which is again 6/20, or 0.3.
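A quick way to sanity-check the count is to enumerate all $\binom63=20$ loadings directly. Below is a minimal Python sketch (my addition, not part of the original exchange) that counts the cyclically adjacent configurations:

```python
from itertools import combinations

def all_adjacent(chambers, n=6):
    # True iff the 3 loaded chambers form a cyclic run {s, s+1, s+2} mod n.
    return any(set((s + k) % n for k in range(3)) == set(chambers)
               for s in range(n))

configs = list(combinations(range(6), 3))       # the 20 equiprobable loadings
hits = sum(all_adjacent(c) for c in configs)    # the 6 cyclic runs
print(hits, len(configs), hits / len(configs))  # 6 20 0.3
```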
H: Solve: $\tan\frac x2 = x,\> x\in(0,\pi)$ I'm trying to calculate the shortest distance between two points on a sphere in terms of its radius and the angle they subtend at the centre. I'm stuck at the equation: $$\tan(x/2) = x,\ 0<x<\pi$$ I have no idea how to solve this. AI: Given that there is no analytic solution, an approximate closed form may be desirable. Recognizing that the root within $(0,\pi)$ is near $\frac{3\pi}{4}$, let $f(x) = x\cot\frac x2 -1$ and approximate the root $r$ via the first-order expansion $$ 0= f(r) \approx f\left(\frac{3\pi}{4}\right)+ f'\left(\frac{3\pi}{4}\right)\left(r-\frac{3\pi}{4}\right)$$ Solve to obtain $$r= \frac{16+8\sqrt2-9\pi^2}{8\sqrt2-12\pi}= 2.3313$$ vs. the exact numerical root $2.3311$
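If SciPy is available, you can compare the linearized root with a high-precision numerical one; a sketch (my addition; the bracket $[2, 3.1]$ is an arbitrary choice that stays below the pole of $\tan(x/2)$ at $x=\pi$):

```python
import math
from scipy.optimize import brentq

# Numerical root of tan(x/2) - x on (0, pi).
root = brentq(lambda x: math.tan(x / 2) - x, 2.0, 3.1)

# The linearized closed form from the answer.
approx = (16 + 8 * math.sqrt(2) - 9 * math.pi ** 2) / (8 * math.sqrt(2) - 12 * math.pi)

print(round(root, 4), round(approx, 4))  # 2.3311 2.3313
```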
H: Finding the 2017th power of a nondiagonalizable matrix The matrix (encoded in the link below) is $$A=\begin{pmatrix}110&55&-164\\42&21&-62\\88&44&-131\end{pmatrix}.$$ The eigenvectors and eigenvalues of the matrix are as follows: https://matrixcalc.org/en/vectors.html#eigenvectors%28%7B%7B110,55,-164%7D,%7B42,21,-62%7D,%7B88,44,-131%7D%7D%29. This matrix cannot be diagonalized since the eigenvectors are not linearly independent. How can I find the 2017th power in this case? AI: Hint: By Cayley-Hamilton, $A^3=A$. Deduce by an easy induction that, if $n>0$, $A^n$ depends only on $n$ being even or odd.
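A quick NumPy check of the hint, using the matrix from the link (a sketch I added):

```python
import numpy as np

A = np.array([[110, 55, -164],
              [ 42, 21,  -62],
              [ 88, 44, -131]])

print(np.array_equal(A @ A @ A, A))  # True: A^3 = A, as Cayley-Hamilton gives
# Hence A^n = A for odd n and A^n = A^2 for even n >= 2, so A^2017 = A.
print(np.array_equal(np.linalg.matrix_power(A, 2017), A))  # True
```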
H: Volume of $CP^1$ wrt Fubini Study metric I have shown that the FS form on an affine chart of $CP^1$ is $1/4$ times the usual area form (written in $(\theta, h)$ coordinates). So the area of $CP^1$ wrt the FS form should be $1/4$ of the usual area, right? But when I integrate the FS form, that is $(dx\wedge dy)/(1+x^2+y^2)^2$, over $S^2$ (since it is diffeomorphic to $CP^1$) I am getting $\pi /2$. The area of $S^2$ is $4\pi$, right? So it doesn't add up. Where am I making a mistake? AI: We use polar coordinates $dx\wedge dy=r\, dr\, d\theta$, then set $t=r^2$: $\int \int {dx\wedge dy\over (1+x^2+y^2)^2}= \int_0^\infty {r\, dr\over (1+r^2)^2}\int_0^{2\pi}d\theta = (2\pi)\times \int _0^\infty {1\over 2} {dt\over (1+t)^2 }= 2\pi \times {1\over 2}= \pi.$ So the integral is $\pi$, not $\pi/2$, and $\pi=\frac14\cdot 4\pi$ is indeed $1/4$ of the usual area, consistent with your first computation; note that the affine chart misses only a single point of $CP^1$, which has measure zero, so integrating over the chart gives the full area.
H: Show that when $n$ balls are thrown into $n^3$ cells, the chance that (at least) two balls will collide goes to zero as $n\to \infty$ Show that when $n$ balls are thrown into $n^3$ cells, the chance that (at least) two balls will collide goes to zero, as a function of $n$: $$\lim_{n\to \infty }P(\exists\, x,y:x,y\text{ collide})=0$$ AI: Markov's inequality is the right idea. Let $X$ be the number of pairs of balls which are in the same cell. For example, when $n=6$, and the arrangement of balls is Cell 1: Balls $1$ and $2$. Cell 2: Balls $3,4$ and $5$. Cell 3: Ball 6. All other cells: empty. then $X=4$, since the pairs are $(1,2)$, $(3,4),(3,5)$ and $(4,5)$. You want to prove the probability of $X\ge 1$ goes to zero, so we can use Markov's inequality $P(X\ge 1)\le E[X]$ to get an upper bound on this probability and instead show that $E[X]$ goes to zero. By linearity of expectation, $E[X]$ is easy to calculate. There are $\binom{n}2$ pairs of balls. For each pair, the probability they are in the same cell is $\frac1{n^3}$. Therefore, $E[X]=\underline{\hspace{1.5cm}}$ (fill in the blank). Does this go to zero as $n\to\infty$?
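A quick Monte Carlo comparison of the collision probability with the Markov bound $E[X]=\binom n2/n^3$ (a sketch I added; the trial count is arbitrary):

```python
import random
from math import comb

def collision_prob(n, trials=100_000):
    # Throw n balls into n^3 cells; estimate P(some cell receives two balls).
    hits = 0
    for _ in range(trials):
        cells = [random.randrange(n ** 3) for _ in range(n)]
        hits += len(set(cells)) < n
    return hits / trials

for n in (5, 10, 20):
    print(n, collision_prob(n), comb(n, 2) / n ** 3)  # estimate vs. E[X]
```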
H: $M$ is projective iff $\operatorname{Ext}_R ^1 (M,P)= 0 $ for $P$ projective? We know that an $R$-module $M$ is projective iff $\operatorname{Ext}_R ^1 (M,N)= 0 $ for every $R$-module $N$. Is it true that: $M$ is projective iff $\operatorname{Ext}_R ^1 (M,P)= 0 $ for every projective $R$-module $P$? AI: No. Let $R$ be a self-injective noetherian ring. Then any projective module is also injective. In particular, for any $M$ and any projective $P$, $\mathrm{Ext}^1_R(M,P) = 0$, as $P$ is in particular injective. However, $R$ can have non-projective modules: for instance take $R = k[C_p]$ where $C_p$ is the cyclic group of order $p$ (prime) and $k$ has characteristic $p$. Then $R$ is self-injective, but $k$ (as an $R$-module, via the augmentation $k[C_p]\to k$) is not projective. ($R$ is self-injective because for any finite group $G$ and any field $k$, $k[G]$ is self-injective.)
H: Proof: $x_n\to p \iff d(x_n,p)\to 0$ Let $(M,d)$ be a metric space, let ${x_n}\in M$ and $p\in M$ Prove: $x_n\to p \iff d(x_n,p)\to 0$ $\Leftarrow:$ by definition of a limit, for all $0 < \varepsilon$ there is $N\leq n$ such that $$|d(x_n,p) - 0|< \varepsilon \iff |d(x_n,p)|< \varepsilon \iff d(x_n,p)< \varepsilon$$ last $\iff$ is due to $d:M\times M\to [0,\infty)$ $\Rightarrow:$ $x_n\to p$, let $\varepsilon = \frac{1}{n}$ therefore for all $N\leq n$ $$0 \leq d(x_n,p)<\frac{1}{n}$$ using the squeeze theorem we get $$d(x_n,p)\to 0$$ Have I missed something? is it correct? AI: The ideas are all there, but your argument is somewhat muddled. Note the ordering, especially when it comes to $N$ and $n$: If $d(x_n,p) \to 0$ then for any $\epsilon > 0$ there exists a natural number $N$ such that for any $n \geq N$ we have $$d(x_n,p) < \epsilon.$$ (Absolute values not necessary as distances are non-negative.) But this is precisely the condition for $x_n \to p$ in a metric space, so the two statements are equivalent.
H: $ X_n\stackrel{P}\rightarrow X\text{, }Y_n\stackrel{P}\rightarrow Y \Rightarrow X_nY_n\stackrel{P}\rightarrow XY $. How to show this? Let $\{X_n\}$ and $\{Y_n\}$ be two sequences of random variables such that $X_n$ converges to $X$ in probability and $Y_n$ converges to $Y$ in probability. Show that $X_n Y_n$ converges to $XY$ in probability. Hence, we have to show that: $$ X_n\stackrel{P}\rightarrow X\text{, }Y_n\stackrel{P}\rightarrow Y \Rightarrow X_nY_n\stackrel{P}\rightarrow XY $$ I was thinking about some way to 'rewrite' $X_nY_n$. In particular, $$X_nY_n=X_n(Y_n-X_n)+X_n^2$$ At this point, I was thinking that, for a sufficiently large $M\in\mathbb{N}$, for $n\geq M$, since $\lim(a\cdot b)=\lim(a)\cdot\lim(b)$ and $\lim(a+b)=\lim(a)+\lim(b)$, $$\lim\limits_{n}(X_nY_n)=\lim(X_n)\cdot\lim(Y_n)-\lim(X_n)\cdot\lim(X_n)+\lim(X_n^2)=$$ $$=\lim(X_n)\cdot\lim(Y_n)-\lim(X_n)\cdot\lim(X_n)+\lim(X_n)\lim(X_n)=$$ $$=X\cdot Y -X^2 +X^2=XY$$ However, I am pretty sure this is not the right way for the following reason: convergence in probability of $X_nY_n$ to $XY$ requires that $\lim\limits_{n}\mathbb{P}\left(|X_nY_n-XY|>\varepsilon\right)=0$ and I think that my reasoning - should it be correct - is not sufficient. How could I proceed to solve such an exercise alternatively? AI: We use the following equivalence: $X_n \stackrel{\mathbb{P}} \to X$ if and only if every subsequence of $(X_n)_n$ has a further subsequence that converges almost surely to $X$. Now, we show $X_n Y_n \stackrel{\mathbb{P}}\to XY$. Let $(X_{n_k} Y_{n_k})_k$ be a subsequence of $(X_nY_n)_n$. Then $(X_{n_k})$ has a further subsequence $(X_{n_{k_l}})_l$ that converges almost surely to $X$ and $(Y_{n_{k_l}})_l$ has a further subsequence $(Y_{n_{k_{l_j}}})_j$ that converges almost surely to $Y$. Then since also $(X_{n_{k_{l_j}}})_j$ converges to $X$ almost surely, we can deduce that $X_{n_{k_{l_j}}}Y_{n_{k_{l_j}}} \to XY$ almost surely (refer to $X_n\stackrel{a.s.}\rightarrow X, Y_n \stackrel{a.s.}{\rightarrow} Y \implies X_n Y_n \stackrel{a.s.}\rightarrow XY$). We can conclude, because we have found a further subsequence of $(X_{n_k}Y_{n_k})_k$ that converges almost surely. I have to admit that this looks ugly but the idea is simple.
H: Let $f(x)=7-3x$, evaluate and simplify the following $\frac{f(1)}{x}$ Using this equation, how would I be able to simplify this? My initial thoughts are, this would be an inverse function and it would be as follows: $f(x)=7-3x$ Then, multiply both sides by $-3x+7$, which would be $-3fx^2+7fx=1$ Then, factoring it out $f(-3x^2+7x)=1$ Divide both sides by $-3x^2+7x$, like such: $\frac{(-3x^2+7x)}{(-3x^2+7x)}=\frac{1}{-3x^2+7x}$ So, $f=\frac{1}{-3x^2+7x}$ Would I be on the right track here? Many thanks in advance! UPDATE - I have a practice question and this was how the directions were written. Apologies for the confusion. I'll try to reword this post. AI: $$\frac{f(1)}{x}=\frac{7-3}x=\frac4x.$$ There is nothing that you can simplify.
H: What does the solution space of differential equation mean? I am trying to understand what I am actually getting when I solve a differential equation. For example, the last one I solved was $$y^\prime + y x \sin(x) = 0$$ and I got the solution $$y(x) = C e^{x \cos(x)- \sin(x)}$$ I checked its graph on GeoGebra but what does it actually mean? How should I think about solutions for this kind of equation? I would like an intuitive explanation. AI: At each point $(x,y)$ the equation tells you the slope of a solution through that point: $y'=-xy\sin(x)$. For example, at $(1,1)$ we get $y'=-1\times1\times \sin(1)$. This means that the gradient of a little line segment drawn at $(1,1)$ should be plotted as approximately $-0.84$. Can you see it sloping down? Now imagine joining up all the little sticks: you'd get a solution curve, one for each point (equivalently, each constant $C$) you want included. [The original answer showed two slope-field plots here.] Hope this helps.
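If you want to double-check the solution formula symbolically, here is a small sketch of mine using SymPy:

```python
import sympy as sp

x, C = sp.symbols('x C')
y = C * sp.exp(x * sp.cos(x) - sp.sin(x))

# Plug the candidate solution into y' + y*x*sin(x); it should vanish identically.
print(sp.simplify(sp.diff(y, x) + y * x * sp.sin(x)))  # 0
```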
H: Relations of set operators: closure, interior, complement For a proof related to one of the versions of Baire's category theorem I needed the following equality: $$\operatorname{cl} \! \big( \!\operatorname{int} (A^c)\big) = \operatorname{int} \! \big( \! \operatorname{cl} (A)\big)^c \quad \text{ for a set } A.$$ I was a little baffled that the author thought I should just come up with this "out of the blue" as it didn't seem obvious that something like this could be appropriate. I was looking for a comprehensive list of rules (à la De Morgan's), which I could just memorize for the future, but I couldn't really find anything. What's the philosophy behind being able to apply such transformations when needed? I saw that there are plenty of specific equalities proven on here – but I'd like to approach this more generally. Thankful for any pointers to "main rules" to keep in mind or a general direction/works/articles where to look for answers. AI: If $X$ is the space in which you're working, you always have $$\operatorname{int}(X\setminus A)=X\setminus\operatorname{cl}A$$ and $$\operatorname{cl}(X\setminus A)=X\setminus\operatorname{int}A\;.$$ These complementary relationships between interiors and closures are pretty fundamental and should be part of your basic 'tool set', and they're exactly what's needed here: $$\operatorname{cl}\operatorname{int}(X\setminus A)=\operatorname{cl}(X\setminus\operatorname{cl}A)=X\setminus\operatorname{int}\operatorname{cl}A\;.$$ In fact they're a bit reminiscent of De Morgan's laws: the closure (interior, resp.) of the complement is the complement of the interior (closure, resp.) versus the union (intersection, resp.) of the complements is the complement of the intersection (union, resp.).
H: Homework Problem, Implicit Function Theorem Given the following equation: $$ x^3y -y^3x -6 = 0 $$ determine using implicit differentiation $y'(2)$. (exact wording) Where $y' = \frac{\partial y}{\partial x}$ I called $$ f(x,y) = x^3y -y^3x -6 = 0 $$ Here is what I have done so far: $$ \begin{align*} \frac{\partial}{\partial x}(f(x,y)) &= 3x^2y - y^3 \\ \frac{\partial}{\partial y}(f(x,y)) &= x^3 - 3y^2x \\ f'(x) &= -\frac{\frac{\partial}{\partial x} (f(x,y))}{\frac{\partial}{\partial y} (f(x,y))} = \frac{y^3- 3x^2y}{x^3 - 3y^2x} \end{align*} $$ Here I am stuck because I cannot express y as a function of x: $$ x^3y -y^3x -6 = 0 \\ $$ or at least I do not see a way. Without expressing $y = g(x)$ I cannot solve $f'(x)$. Am I missing something here? AI: For $x=2$ we obtain: $$8y-2y^3-6=0$$ or $$y^3-4y+3=0$$ or $$y^3-y^2+y^2-y-3y+3=0$$ or $$(y-1)(y^2+y-3)=0,$$ which gives three points. Can you end it now? Also, I got $$y'=\frac{y^3-3x^2y}{x^3-3xy^2}.$$
H: Understanding the disk model of projective space $\mathbb{P}^n(\mathbb{R})$ I'm working on Exercise 2.3.3 in Riemann Surfaces and Algebraic Curves: A First Course in Hurwitz Theory by Cavalieri and Miles: In a previous definition, we realized $\mathbb{P}^n(\mathbb{R})$ as an identification/orbit space: let $\mathbb{R}^\ast = \mathbb{R} - \{0\}$ act on $\mathbb{R}^{n+1}$ by component-wise multiplication: $\lambda \cdot (X_0, X_1, \dots, X_n) = (\lambda X_0, \lambda X_1, \dots, \lambda X_n).$ Then $$\mathbb{P}^n(\mathbb{R}) = (\mathbb{R}^{n+1} - \{0\}) / \mathbb{R}^\ast.$$ We now present two alternative models for $\mathbb{P}^n(\mathbb{R})$ as an identification space, and leave it as an exercise that they yield homeomorphic results. Sphere quotient. Consider the $n$-dimensional unit sphere $S^n \subset \mathbb{R}^{n+1}.$ The multiplicative cyclic group $\mu_2 = \{1,−1\}$ acts on the sphere by $$\pm 1 · (X_0, X_1,\dots, X_n) = (\pm X_0,\pm X_1,..., \pm X_n).$$ Then $\mathbb{P}^n(\mathbb{R})$ is the quotient space $S^n/\mu_2.$ Disk model. Consider the $n$-dimensional closed unit disk $\bar{D}^n \subset \mathbb{R}^n,$ and consider the antipodal equivalence relation on the points of its boundary: $x ∼ −x $ if and only if $||x|| = 1.$ Then $\mathbb{P}^n(\mathbb{R})$ is the identification space $\bar{D}^n / ∼.$ I can understand the sphere quotient easily enough, but I can't seem to wrap my head around the disk model, and I can't tackle this problem very well until I can do that. Does anyone have a good way of picturing and understanding this model? I can't seem to find a good resource explaining this anywhere. AI: The upper hemisphere, i.e. the subset $S^n_{+} \subset \mathbb{R}^n \times \mathbb{R}_{\geq 0}$ of the embedded sphere $S^n \subset \mathbb{R}^{n+1}$, is homeomorphic to the unit disc (it is the graph of the function $x \mapsto \sqrt{1 - |x|^2}$). Since every point in $\mathbb{P}^n(\mathbb{R})$ has a representative in the upper hemisphere, the map $S^n_+ / \mu_2 \to \mathbb{P}^n(\mathbb{R})$ is bijective and hence a homeomorphism (since its domain is compact and the target Hausdorff). Here (by abuse of notation) the quotient by $\mu_2$ means that you quotient by the restriction of the equivalence relation you used for the sphere quotient. On the upper hemisphere, however, most points do not have an antipodal point. Actually only those on the boundary (the equator) do. Hence the restricted equivalence relation consists only of identifying opposite points on the boundary. If you push all of this down to $\mathbb{R}^n \times \{0\}$ (remember that this is a homeomorphism), you get the disc model described in your book.
H: Counting pair in a poker hand - counting confusion I want to be good enough to do math competitions, and one of the main topics in them is always combinatorics. So I'm going through a combinatorics book, "A Walk through Combinatorics." The confusion I have is with the following problem: Find the prob that a poker hand has a pair (and nothing better than a pair). My attempt: We know that a poker hand has five cards from a deck of 52 cards. Each card is equally likely so the total combinations is: $${52} \choose 5$$ The number of hands that contain only a pair and nothing better than a pair are: $${{13} \choose {4}} * {{4} \choose {2}}* {{4} \choose {1}}^3$$ There are 13 card values. We choose 4 of the 13 values. Then from one of the groups we want a pair, and from the remaining groups we want a single each. So, the probability I got was: $$\frac {{{13} \choose {4}} * {{4} \choose {2}}* {{4} \choose {1}}^3}{{52} \choose 5}$$. However, according to the answers: The correct probability should be: $\frac {{{13} \choose {1}} *{{12} \choose {3}} * {{4} \choose {2}}* {{4} \choose {1}}^3} {{52} \choose 5}$ Where is my logic flawed? Where am I going wrong? Am I double counting somewhere? I think my answer produces a smaller probability so am I undercounting? AI: The difference is your ${13 \choose 4}$ versus their ${13 \choose 1} {12 \choose 3}$ (your ${4 \choose 2}$ and ${4 \choose 1}$ factors are correct). You're choosing the four of 13 ranks in the hand, but not specifying which rank has the pair. They choose the rank for the pair with ${13 \choose 1}$ and then the three other ranks with ${12 \choose 3}$. Indeed ${13 \choose 1}{12 \choose 3} = 4{13 \choose 4}$, since choosing 4 ranks and then designating one of them as the pair rank gives the same collection of choices; you are undercounting by exactly a factor of $4$.
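Here is a sketch (mine, not from the book) that evaluates both counting formulas with `math.comb` and checks the correct one against a Monte Carlo estimate:

```python
import random
from collections import Counter
from math import comb

deck = [(rank, suit) for rank in range(13) for suit in range(4)]

def is_one_pair(hand):
    # Exactly one pair and nothing better: rank multiplicities 2,1,1,1.
    return sorted(Counter(rank for rank, _ in hand).values()) == [1, 1, 1, 2]

trials = 200_000
hits = sum(is_one_pair(random.sample(deck, 5)) for _ in range(trials))

book = comb(13, 1) * comb(12, 3) * comb(4, 2) * comb(4, 1) ** 3 / comb(52, 5)
mine = comb(13, 4) * comb(4, 2) * comb(4, 1) ** 3 / comb(52, 5)

print(hits / trials, book, mine)  # ~0.4226, 0.42256..., 0.10564... (= book/4)
```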
H: Linear Functional is Discontinuous I'm trying to show that the linear functional $f:(\ell^1, \|\cdot\|_\infty) \longrightarrow (\mathbb{R}, \|\cdot\|)$ given by $$f((x_n)_{n \in \mathbb{N}})=\sum\limits_{n=1}^\infty x_n$$ is discontinuous. Here, $\|(x_n)_{n \in \mathbb{N}}\|_\infty=\sup\limits_{n \in \mathbb{N}} |x_n|$ and $\|\cdot\|$ is any norm on $\mathbb{R}$. I've tried to show that it is discontinuous at the origin, by exhibiting a sequence $(x^k_n)_{k \in \mathbb{N}}$ on $\ell^1$ such that $(x^k_n)_{n \in \mathbb{N}} \to (0,0,...)$ on $\ell^1$, as $k \to \infty$, but $f((x^k_n)_{n \in \mathbb{N}})\not\to 0$, but no success. AI: Equivalently, you can show that there is no constant $C > 0$ such that $$\lvert f(x) \rvert \le C \| x \|_\infty$$ for all $x \in \ell^1$. This would show that $f$ is not bounded, and hence not continuous. Consider $$x^{(k)} = (\underbrace{1,1,\ldots,1}_{k},0,0,\ldots).$$ Then each of $x^{(k)}$ is in $\ell^1$ and $\|x^{(k)}\|_\infty = 1$. However, $$\lvert f(x^{(k)}) \rvert =\sum^\infty_{n=1} x^{(k)}_n = k.$$
H: When a sigma algebra is closed under countable intersection? In "Measure Theory and Probability Theory" by Krishna and Soumendra, p. 43, there is the following passage. Let $(\Omega, \mathcal{F})$ be a measurable space and $\{f_n\}$ a sequence of functions such that $f_n:\Omega \rightarrow \mathbb{R}$ is a measurable function for each $n\ge 1$. Let $\bar{\mathbb{R}}=\mathbb{R} \cup \{+\infty, -\infty\}$ be the extended real line and $\mathcal{B}(\bar{\mathbb{R}})=\sigma\left<\mathcal{B}(\mathbb{R}) \cup \{+\infty\} \cup \{-\infty\}\right>$ be the extended Borel $\sigma$-algebra on $\bar{\mathbb{R}}$. Define $g: \Omega \rightarrow \bar{\mathbb{R}}$ as the pointwise supremum $g=\sup_{n \ge 1}f_n$. Then the book says that to show $g$ is measurable it is enough to show that $g^{-1}((-\infty, r]) \in \mathcal{F}$ for each $r \in \mathbb{R}$. Thus it continues with the proof below. $g^{-1}((-\infty, r]) = \{\omega: g(\omega) \le r \} = \bigcap_{n=1}^{\infty} \{\omega: f_n(\omega) \le r \}= \bigcap_{n=1}^{\infty}f_n^{-1}((-\infty, r]) \in \mathcal{F}$ since $f_n^{-1}((-\infty, r]) \in \mathcal{F}$ for all $n \ge 1$ by the measurability of $f_n$. What assures us that the countable intersection of those sets is in $\mathcal{F}$? AI: You prove this using De Morgan's law. Since the $\sigma$-algebra is closed under countable unions and under complements, and since by De Morgan the complement of the union is the intersection of the complements, $$\bigcap_{n=1}^{\infty} A_n = \left(\bigcup_{n=1}^{\infty} A_n^c\right)^c,$$ you get that the countable intersection of measurable sets must also be measurable.
H: Some intuition for $\sin\left(x+\frac\pi5\right)+\sin\left(x+\frac{3\pi}5\right)+\sin\left(x+\frac{7\pi}5\right)+\sin\left(x+\frac{9\pi}5\right)=1$ Can anyone give me some intuition how to start solving this equation please? Or maybe a fast way to understand that this whole thing is equal to $\sin(x) = 1$. $$\sin\left(x+\frac\pi5\right)+\sin\left(x+\frac{3\pi}5\right)+\sin\left(x+\frac{7\pi}5\right)+\sin\left(x+\frac{9\pi}5\right)=1$$ Thank you! AI: Note \begin{align} & \sin(x + \frac{\pi}{5}) + \sin(x + \frac{3\pi}{5}) + \sin(x + \frac{7\pi}{5}) + \sin(x + \frac{9\pi}{5}) \\ =& \sin(x + \frac{\pi}{5})+ \sin(x - \frac{\pi}{5}) + \sin(x + \frac{3\pi}{5}) + \sin(x -\frac{3\pi}{5}) \\ =& 2\sin x \cos\frac\pi5 + 2\sin x \cos\frac{3\pi}5\\ = & 2\sin x (\cos\frac\pi5 + \cos\frac{3\pi}5)= \sin x, \end{align} using $\cos\frac\pi5 + \cos\frac{3\pi}5 = \frac12$ in the last step. Thus, $\sin x =1$, or $x= \frac\pi2+2n\pi$.
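A short numerical check of the collapsed identity (my addition):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 17)
lhs = sum(np.sin(x + k * np.pi / 5) for k in (1, 3, 7, 9))
print(np.allclose(lhs, np.sin(x)))  # True: the four terms collapse to sin(x)
```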
H: Show that $\frac{(J-1)!}{2!(J-3)!}= \sum_{h=2}^{J-1}(h-1) $ Could you help me to show that $$ \frac{(J-1)!}{2!(J-3)!}= \sum_{h=2}^{J-1}(h-1) $$ where "$!$" denotes the factorial function? AI: $$\sum_{h=2}^{J-1} (h-1) = \sum_{h=1}^{J-2} h = \frac{(J-2)(J-1)}{2}= \frac{(J-1)!}{2!(J-3)!}$$
H: Find a(n) with characteristic equation Find $a(n)$ with the characteristic equation of the recurrence: $a(1)=3$, $a(2)=10$, $a(n)=6a(n-1)-3a(n-2)$. Started from the equation $t^2-6t+3=0$, then found $t_1,t_2$, then substituted them into $a(n)=C t_1^n+ D t_2^n$; then $C$ and $D$ have to be found from $a(1)=3$ and $a(2)=10$, but I'm stuck here. AI: $t^2-6t+3=0 \implies t=3 \pm \sqrt{6}$. Thus $$a_n=C(3+\sqrt{6})^n + D(3-\sqrt{6})^n.$$ Now substitute $n=1$, to get $$a_1=3=C(3+\sqrt{6})^{\color{red}{1}}+D(3-\sqrt{6})^{\color{red}{1}}.$$ Likewise with $n=2$ we get $$a_2=10=C(3+\sqrt{6})^{\color{red}{2}}+D(3-\sqrt{6})^{\color{red}{2}}=C(15+6\sqrt{6})+D(15-6\sqrt{6}).$$ So now solve for $C$ and $D$ from the following set \begin{align*} C(3+\sqrt{6})+D(3-\sqrt{6})&=3\\ C(15+6\sqrt{6})+D(15-6\sqrt{6})&=10. \end{align*} I believe \begin{align*} C&=\frac{1}{12}(16-5\sqrt{6})\\ D&=\frac{1}{12}(16+5\sqrt{6}) \end{align*}
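A quick numerical cross-check of the closed form against the recurrence (a sketch I added):

```python
from math import sqrt

C = (16 - 5 * sqrt(6)) / 12
D = (16 + 5 * sqrt(6)) / 12
closed = lambda n: C * (3 + sqrt(6)) ** n + D * (3 - sqrt(6)) ** n

a = {1: 3, 2: 10}
for n in range(3, 11):
    a[n] = 6 * a[n - 1] - 3 * a[n - 2]   # a(n) = 6 a(n-1) - 3 a(n-2)

print(all(abs(closed(n) - a[n]) < 1e-6 for n in a))  # True
```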
H: Linear constrained form to free parameter form I came across this fact that for a linear system of equations $Ax=b$ one can write a free parameter form as $x=Fz+x_0$, where the columns of $F$ span the null space of $A$, and $x_0$ is some specific solution. I am wondering if this is always true? I am also wondering why I did not encounter this in a basic linear algebra course? Any reference is appreciated. AI: Whenever you have a linear equation system $Ax = b$, the set of all solutions $\mathbb{L}$ can be written as $$\mathbb{L} = \mathbb{L}_{\text{h}} + \{ y_{\text{p}}\} = \{x_0 + y_{\text{p}} : x_0 \in \mathbb{L}_h \}, \tag{$\ast$}$$ with $\mathbb{L}_{\text{h}} = \{ x_0 : Ax_0 = 0 \}$ and $y_p$ any solution of $Ax = b$. This means that the general solution of a linear equation system is the sum of the general solution of the associated homogeneous system and a particular solution of the inhomogeneous system. Now assume for the moment that $(\ast)$ holds. Then it follows that every solution $x$ of $Ax = b$ has a representation $x = \sum_{i} \lambda_i x_i + y_{\text{p}}$, where the vectors $x_1, \dots, x_k$ span the kernel of $A$. But this equation can be rewritten in matrix form as $$x = \left(\begin{array}{ccc}x_1^1 & \dots & x_1^k \\ \vdots &\ddots & \vdots \\ x_n^1 & \dots & x_n^k\end{array}\right) \begin{pmatrix} \lambda_1 \\ \vdots \\ \lambda_k \end{pmatrix} + y_p.$$ This equation has the desired form $x = Fz + x_0$. For the sake of completeness, we now show $(\ast)$. Two inclusions have to be shown. Let $x, y, z \in \mathbb{K}^n$. "$\supseteq$": Let $y \in \mathbb{L}$ and $x \in \mathbb{L}_{\text{h}}$. Then $y + x$ solves the system since $$A(y + x) = Ay + Ax = b + 0 = b.$$ "$\subseteq$": Let $z \in \mathbb{L}$ and $y \in \mathbb{L}$. Then $$A(z - y) = Az - Ay = b - b = 0$$ implies that $z - y \in \mathbb{L}_{\text{h}}$. But then every $z \in \mathbb{L}$ has a representation $$z = y + (z - y) \in \{ y \} + \mathbb{L}_{\text{h}}.$$
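A small numerical illustration with SciPy (a sketch of mine; `null_space` returns an orthonormal basis for the kernel and least squares supplies one particular solution):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])       # rank 1, so the kernel is 2-dimensional
b = np.array([6., 12.])            # consistent right-hand side

F = null_space(A)                          # columns span {x : Ax = 0}
x0 = np.linalg.lstsq(A, b, rcond=None)[0]  # a particular solution

z = np.random.randn(F.shape[1])    # arbitrary free parameters
x = F @ z + x0
print(np.allclose(A @ x, b))       # True: every x = Fz + x0 solves Ax = b
```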
H: Upper bound and limit of $\frac{x^5}{\sqrt{(1+\frac{x^2}{2})^5}}$ Let $Y=\sigma X$ be a scaled Student's t-distributed variable with scale parameter $\sigma=1/\sqrt{2}$ and $4$ degrees of freedom. I'm proving that for $k>0$ $$\frac{P(Y>x)}{kx^{-4}}\rightarrow k_0 \qquad\text{for}\ x\rightarrow\infty$$ where $k_0>0$. And to do so I'm using L'Hôpital's rule for "$0/0$" limits and the fundamental theorem of calculus. I have that $$f(x)=\frac{\Gamma(5/2)}{\sqrt{2\pi}}\bigg(1+\frac{x^2}{2}\bigg)^{-5/2}$$ is the density function for $Y$. This is where I'm stuck at $$\frac{f(x)}{4kx^{-5}}=\frac{\Gamma(5/2)}{4k\sqrt{2\pi}} \frac{x^5}{\sqrt{\big(1+\frac{x^2}{2}\big)^5}}.$$ How does one prove that $$\frac{x^5}{\sqrt{\big(1+\frac{x^2}{2}\big)^5}}\leq 4\sqrt{2}$$ for all $x\in\mathbb{R}$? This is really all I need, since the left-hand side defines an increasing function on $\mathbb{R}$, hence this bound is going to be its upper limit. Thanks! AI: Once we have $$\frac{x^2}{2}<1+\frac{x^2}{2}$$ we can take the $5$th power of both sides to get $$\left(\frac{x^2}{2}\right)^5<\left(1+\frac{x^2}{2}\right)^5,$$ then after some rearranging we get the desired result $$\frac{x^{10}}{32}<\left(1+\frac{x^2}{2}\right)^5,$$ $$x^{10}<32\left(1+\frac{x^2}{2}\right)^5,$$ $$\frac{x^{10}}{\left(1+\frac{x^2}{2}\right)^5}<32,$$ $$\frac{|x^5|}{\sqrt{\left(1+\frac{x^2}{2}\right)^5}}<4\sqrt{2}.$$
H: The connectivity and diameter properties of a replacement product of graphs I have difficulties with proving the properties of replacement product operation between two graphs. Actually, I'm not sure where to start on this proof. Let G and H be connected graphs. Prove that replacement product of G and H be also connected. Let R denote the replacement product of G and H. Prove that $\operatorname{diam}(R)≤\operatorname{diam}(G)·\operatorname{diam}(H)$. I will be very grateful for every hint or solution. AI: The first problem isn’t too hard; I’ll sketch the argument and leave it to you to write it up carefully. Let $R$ be the replacement product. Because $H$ is connected, so is each cloud of $R$. Now suppose that $v_{ij}$ and $v_{k\ell}$ are vertices of $R$ in different clouds (i.e., $i\ne k$). $G$ is connected, so there is a path in $G$ from $i$ to $k$. In the simplest case this is simply an edge $\{i,k\}$ in $G$, and in that case by definition there are $j_1$ and $\ell_1$ such that $\{v_{ij_1},v_{k\ell_1}\}$ is an edge of $R$. In other words, if there is an edge between two vertices in $G$, there is an edge between their clouds in $R$. That edge connects the $i$ and $k$ clouds of $R$, and those clouds are connected, so there must be a path from $v_{ij}$ to $v_{k\ell}$. It should be clear that this idea can be extended to longer paths; perhaps the easiest way is to do it by induction on the length of the path from $v_{ij}$ to $v_{k\ell}$. The same idea can be used in the second problem: in the worst case the shortest path from $v_{ij}$ to $v_{k\ell}$ would require $\operatorname{diam}(H)$ steps in each cloud and $\operatorname{diam}(G)$ steps between clouds.
H: Proof for any value of x greater than 1, the following statement is true: The limit of $\sum_{n=0}^{\infty} {\frac{1}{x ^ n}}$ is ${\frac{x}{x-1}}$ I've been looking into the following sum for values of x: $$\sum_{n=0}^{\infty} {\frac{1}{x ^ n}}$$ And after plugging in different values of x, I became confident enough to make the following conjecture: The limit of the expression, $\sum_{n=0}^{\infty} {\frac{1}{x ^ n}}$, will be equal to ${\frac{x}{x-1}}$ for any value of x greater than one. My question: Can someone provide a proof or explanation of this conjecture? And if so, what is it? If unable or unwilling to provide a proof, suggestions on how to prove my conjecture would also be satisfactory, though be ready to clarify and elaborate some of your points. I attempted to prove the conjecture via a contradiction proof by seeing what would result if this conjecture was false (then search for a contradiction thus making my conjecture true). However, I couldn't extrapolate any resulting truths which would occur if my conjecture was false. I tried rewriting the sum in various forms such as $\sum_{n=0}^{\infty} {\frac{x}{x ^ {n+1}}}$ and $\sum_{n=0}^{\infty} {x^{-n}}$ to help, yet I was unable to get any closer to proving my conjecture. Important note: My formal education of mathematics only goes up to Algebra 1 in high school, so if simpler notation and mathematical vocabulary can be used in an answer it would be greatly appreciated. Thank you. AI: This is what is known as a geometric series. Start with some $r$ with $r\neq 1$ (for simplicity): $$ S_n=1 + r + r^2 + \cdots +r^n $$ Note that if we multiply by $(1-r)$ we have $$ (1-r)S_n = (1 + r + r^2 + \cdots +r^n)-( r + r^2 + \cdots +r^n+r^{n+1}); $$ then all the intermediate terms cancel: $$ (1-r) S_n = 1 - r^{n+1};\qquad S_n = \frac{1-r^{n+1}}{1-r} $$ For example, if $r=3,n=4$ we have $S_n = 1+3+3^2+3^3+3^4=121 = \frac{1-243}{1-3}$. Now if $|r|<1$, we can take the limit and obtain the infinite sum: $$ \sum_{n=0}^{\infty}r^n = \lim_{N\to\infty} S_N = \lim_{N\to\infty}\frac{1-r^{N+1}}{1-r}=\frac{1}{1-r}$$ For example, if $r=1/2$, we have $1+1/2+1/4+1/8+1/16+\cdots =2 = 1/(1-1/2)$. Try drawing some pictures to see this. Now for your question: if $|x|>1$, $|x|^{-1}<1$. So we have $$ \sum_{n=0}^{\infty}\frac{1}{x^n} = \sum_{n=0}^{\infty}\left(\frac{1}{x} \right)^n = \frac{1}{1-1/x}=\frac{x}{x-1} $$
H: Meaning of power matrix function Suppose $t\in \mathbb{C}$, $\Lambda=\begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}$, what does $t^\Lambda$ mean? Is it $\begin{bmatrix} t^2 & 0 \\ 0 & t \end{bmatrix}$? AI: The easiest way to understand powers of matrices is through the exponential function $f(x)=e^x$. We know that for all $x$: $$ e^x=\sum\limits_{n=0}^{\infty}\frac{x^n}{n!}=1+x+\frac{x^2}{2}+\frac{x^3}{6}+\dots $$ Since taking natural powers of matrices is something we are already (probably) acquainted with, it is almost a "must" to define: $$ e^{\Lambda}=I+\Lambda+\frac{\Lambda^2}{2}+\dots=\sum\limits_{n=0}^{\infty}\frac{\Lambda^n}{n!} $$ Now, if we want to use some base other than $e$ we can use the fact that for all $t>0$ we have $t^x=e^{x\ln{(t)}}$ (for other complex $t$ one must pick a branch of the logarithm), so for matrices: $$ t^{\Lambda}=e^{\ln{(t)}\Lambda}=I+\ln{(t)}\Lambda+\frac{\left(\ln{(t)}\Lambda\right)^2}{2}+\dots $$ which is usually much more complicated to use. So in your case, the result is correct, since we can use the fact that $\Lambda$ is diagonal to compute: $$ e^{\Lambda}=\sum\limits_{n=0}^{\infty}\frac{1}{n!}\Lambda^n=\sum\limits_{n=0}^{\infty}\left(\begin{array}{cc}\frac{2^n}{n!} & 0 \\ 0 & \frac{1}{n!}\end{array}\right)=\left(\begin{array}{cc}\sum\limits_{n=0}^{\infty}\frac{2^n}{n!} & 0 \\ 0 & \sum\limits_{n=0}^{\infty}\frac{1}{n!}\end{array}\right)=\left(\begin{array}{cc}e^2 & 0 \\ 0 & e^1\end{array}\right) $$ And finally, since $t^{\Lambda}=e^{\ln{(t)}\Lambda}$ is also diagonal, we get using the same calculation: $$ t^{\Lambda}=\left(\begin{array}{cc}e^{2\ln{(t)}} & 0 \\ 0 & e^{\ln{(t)}}\end{array}\right)=\left(\begin{array}{cc}t^2 & 0 \\ 0 & t\end{array}\right) $$
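A numerical check with SciPy (a sketch of mine; it compares $e^{\ln(t)\Lambda}$, computed via the matrix exponential, with the entrywise guess):

```python
import numpy as np
from scipy.linalg import expm

t = 3.0
L = np.diag([2.0, 1.0])

lhs = expm(np.log(t) * L)      # t^Lambda defined via the exponential series
rhs = np.diag([t ** 2, t])     # the diagonal guess from the question
print(np.allclose(lhs, rhs))   # True
```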
H: Probability puzzle on Inference. A couple of months ago, I was asked this weird probability puzzle in an interview. Problem: There are two persons X and Y. Given a statement, X says it is true w.p. $\frac{1}{3}$ and false w.p. $\frac{2}{3}$. Y always agrees with whatever X says. What is the probability that the statement is actually true? A seemingly related question is Truth or Lie? Please help with a proper explanation. Is the information provided in the question sufficient to infer anything? AI: As stated, it should be clear that it is unanswerable. Let $P$ be a statement. Suppose that it is true. What happens? $X$ will tell you it's true with probability $1/3$ and false with probability $2/3$. $Y$ will agree. Let $P$ be a statement. Suppose that it is false. What happens? $X$ will tell you it's true with probability $1/3$ and false with probability $2/3$. $Y$ will agree. You're given the same information in both situations. What $X$ tells you is independent of the truth or falsehood of the statement. Assuming you're not missing any part of the question, I would speculate that the following is what your interviewer was getting at. Sometimes in ML, you will find yourself in a situation where your model has been trained in a way that its predictions become independent of your input data. Think for example about training on a dataset with very imbalanced class ratios-- maybe even only on one label. Would this be detectable? How would one figure out it was happening? Perhaps the interviewer wanted to lead into this sort of discussion.
H: Prove $\sum^{\infty}_{n=1} n^{13} q^{n}$ where $q \in \mathbb{R}$ converges if and only if $q \in (-1, 1)$ If the sum converges, we can argue that if $q \notin (-1, 1)$ then the necessary condition isn't met, which is a contradiction. For $q=0$ the sum is 0, and if $q \in (0, 1)$ then we can use the comparison test. However, I don't know how to approach this for $q \in (-1, 0)$ (or if there's an argument which works for any $q \in (-1, 1)$). AI: A nice way to use the comparison test is to choose some $q'\in\left(-1,1\right)$ such that $|q'|>|q|$, so: $$ \lim\limits_{n\to\infty}\frac{n^{13}|q|^n}{|q'|^n}=\lim\limits_{n\to\infty}n^{13}\left|\frac{q}{q'}\right|^n=0 $$ And so for all $n$ bigger than some $N$ we get $0\le n^{13}|q|^n<|q'|^n$, so by the comparison test, since the right series converges, so does the left.
H: Simplify $(1-x)^3+2y^3-3(1-x)y^2=0$ to $-(x - 2 y - 1) (x + y - 1)^2 = 0$ How to simplify $(1-x)^3+2y^3-3(1-x)y^2=0$? I tried it and the result I'm aiming for is $$-(x - 2 y - 1) (x + y - 1)^2 = 0.$$ AI: $$(1-x)^3+2y^3-3(1-x)y^2=0$$ $$\underbrace{(1-x)^3-(1-x)y^2}+\underbrace{2y^3-2(1-x)y^2}=0$$ $$(1-x)((1-x)^2-y^2)+2y^2(y-1+x)=0$$ $$(1-x)((1-x+y)(1-x-y))+2y^2(x+y-1)=0$$ $$-(1-x)(x+y-1)(1-x+y)+2y^2(x+y-1)=0$$ $$-(x+y-1)((1-x)^2+y-xy-2y^2)=0$$ $$-(x+y-1)(\underbrace{(1-x)^2-y^2}+\underbrace{y-xy-y^2})=0$$ $$-(x+y-1)((1-x-y)(1-x+y)+y(1-x-y))=0$$ $$-(x+y-1)(1-x-y)(1-x+y+y)=0$$ $$(x+y-1)(x+y-1)(1-x+2y)=0$$ $$-(x-2y-1)(x+y-1)^2=0$$
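The factorization can also be confirmed symbolically; a sketch using SymPy (my addition):

```python
import sympy as sp

x, y = sp.symbols('x y')
lhs = (1 - x) ** 3 + 2 * y ** 3 - 3 * (1 - x) * y ** 2
rhs = -(x - 2 * y - 1) * (x + y - 1) ** 2

print(sp.expand(lhs - rhs))  # 0, so the two sides agree identically
print(sp.factor(lhs))        # recovers -(x - 2*y - 1)*(x + y - 1)**2
```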
H: How do I show $\lim_{\epsilon\to0}\int_\epsilon^1x^n(\ln(x))^mdx=\frac{-m}{n+1}\lim_{\epsilon\to0}\int_\epsilon^1x^n(\ln(x))^{m-1}dx$ I have an assignment where I'm to prove $$\int_0^1x^{-x}dx=\sum_{k=1}^\infty k^{-k}$$ And one of the steps towards doing this is showing that $$\lim_{\epsilon\to0}\int_\epsilon^1x^n(\ln(x))^mdx=\frac{-m}{n+1}\lim_{\epsilon\to0}\int_\epsilon^1x^n(\ln(x))^{m-1}dx$$ $n,m\in\mathbb{N}\setminus\{0\}$ And I really am not sure where to start with this. In the previous steps I have shown that $$\int_0^1e^{-x\cdot\ln(x)}dx=\int_0^1x^{-x}dx=\sum_{n=0}^\infty \frac{(-1)^n}{n!}\int_0^1(x\cdot\ln(x))^ndx$$ And that $$\int_0^1(x\cdot\ln(x))^ndx=\lim_{\epsilon\to0}\int_\epsilon^1x^n(\ln(x))^ndx$$ But I'm not sure how any of that helps me here. Thanks in advance AI: You're overcomplicating it. Do integration by parts with $dv=x^n dx$ and $u={(\ln{x})}^m$: $$I= \lim_{\varepsilon \to 0} \frac{1}{n+1} x^{n+1} {(\ln{x})}^m \big \rvert_{\varepsilon}^1-\frac{m}{n+1} \int_{\varepsilon}^1 x^{n+1} \frac{{(\ln{x})}^{m-1}}{x} \; dx$$ $$I=\lim_{\varepsilon \to 0}-\frac{m}{n+1} \int_{\varepsilon}^1 x^{n} {(\ln{x})}^{m-1} \; dx$$
H: Show that no entire function $f\big(\frac{1}{n}\big)= \frac{n}{2n-1}$ exists using the identity theorem. Show that no entire function $f: \mathbb{C} \to \mathbb{C}$ exists with $$f\bigg(\frac{1}{n}\bigg)= \frac{n}{2n-1}$$ for $n \in \mathbb{N}$. What is the domain of a holomorphic function with such values? I was thinking of putting $g(z) = \frac{1}{2-z}$, which is holomorphic on $\mathbb{C}\setminus \{2\}$ and satisfies $g\big(\frac1n\big)=\frac{n}{2n-1}$, and then arguing that because both functions "meet" at the points $\frac1n$, which accumulate at $0$, according to the identity theorem $g(z) = f(z)$, and because $g$ is not an entire function per definition, there is no entire function $f$ with these values on $\mathbb{C}$. The domain for which a function with such values is defined would be $(0, 1]$, right? AI: Let $g(z)=\frac1{2-z}$, for each $z\in\Bbb C\setminus\{2\}$. Suppose that such an entire function $f$ exists. Then$$(\forall n\in\Bbb N):f\left(\frac1n\right)=\frac n{2n-1}=g\left(\frac1n\right).$$On the other hand, since $f$ is continuous,\begin{align}f(0)&=\lim_{n\to\infty}f\left(\frac1n\right)\\&=\lim_{n\to\infty}\frac n{2n-1}\\&=\frac12\\&=g(0).\end{align}So,$$\{z\in\Bbb C\setminus\{2\}\mid f(z)=g(z)\}\supset\{0\}\cup\left\{\frac1n\,\middle|\,n\in\Bbb N\right\}.$$In particular, the set $\{z\in\Bbb C\setminus\{2\}\mid f(z)=g(z)\}$ contains an accumulation point ($0$) and therefore, by the identity theorem and because $\Bbb C\setminus\{2\}$ is connected,$$(\forall z\in\Bbb C\setminus\{2\}):f(z)=g(z).$$But this is impossible, since the limit $\lim_{z\to 2}f(z)$ exists in $\Bbb C$ (it is $f(2)$), whereas the limit $\lim_{z\to 2}g(z)$ does not exist in $\Bbb C$.
H: Is the function $\phi:\Bbb C\to\Bbb C$ given by $\phi(z)=\bar{z}$ a ring isomorphism (where $\bar{z}$ is the complex conjugate of $z$). Okay so I'm working on this problem: The function $\phi:\mathbb{C}→\mathbb{C}$ given by $\phi(z)=\overline{z}$ is a ring isomorphism (where $\overline{z}$ is the complex conjugate of $z$). I'm still very new to abstract algebra, so maybe it's obvious, but here's what I have: A set is a ring if it has two binary operations - addition and multiplication - that are closed and satisfy commutativity, associativity, and distributivity, with a zero or identity element and inverse elements. A function is an isomorphism if it forms a bijection between the two spaces - or rings in this case - meaning it is surjective and injective (1-1 and onto). The complex field is a ring, given that it is closed under multiplication and addition, has the distributive, commutative, and associative properties, has a zero element (0), and has inverses - I believe in this case the conjugate. Now, all I need to do is determine if the function $\phi$ is an isomorphism. I think it is, but I'm not sure how to go about deciding for certain. Any help or pointers would be appreciated, or if you see that I've done something wrong in saying $\mathbb{C}$ is a ring. Thanks! AI: You would have to verify the definition: Is $\phi$ a ring homomorphism? What does this mean? What do you have to verify? Is $\phi$ a bijection? Is it one-one and onto? Is there an easy inverse of it that you can see? (As Liddo pointed out in the comments, what is $\phi(\phi(z))$?)
H: Find the values $a$ and $b$ such that the function is differentiable at $x=0$ $\mathbf{Question:}$ Find the values $a$ and $b$ such that the function is differentiable at $x=0$ $$ f(x)= \begin{cases} x^{2}+1 &x≥0\\ a\sin x+b\cos x & x<0\\ \end{cases} $$ $\mathbf{Solution:}$ $f(x)$ is differentiable at $x=0$ if $f'(0)$ exists. This implies that for $f$ to be differentiable at $x=0$, the left hand limit and the right hand limit must exist and be equal. $$ \begin{align} \lim_{x\to 0-}f'(0) & =\lim_{x\to 0-}\frac{f(x)-f(0)}{x} \\ & =\lim_{x\to 0-}\frac{a\sin x +b\cos x-1}{x} \\ & =\lim_{x\to 0-}\frac{a\sin x}{x}+\frac{b\cos x-1}{x}=a \\ \end{align} $$ $$ \begin{align} \lim_{x\to 0+}f'(0) & =\lim_{x\to 0+}\frac{f(x)-f(0)}{x} \\ & =\lim_{x\to 0+}\frac{x^{2}+1-1}{x} = 0 \end{align} $$ Therefore, $a=0$ To find $b$, we can use the fact that if $f(x)$ is differentiable at $x=0$ then, it must be continuous at $x=0$. So if $f(x)$ is continuous, $\lim_{x \to0-}f(x) = \lim_{x \to0+}f(x)=b$ $$ \begin{align} \lim_{x\to 0-}f(x) & =\lim_{x\to 0-}a\sin x +b\cos x \\ & = a\sin (0) + b\cos (0) = b \end{align} $$ $$ \begin{align} \lim_{x \to 0+}f(x) & = \lim_{x \to0+}x^{2}+1 =1 \end{align} $$ Therefore, $b=1$ Thus, $ f(x)= \begin{cases} x^{2}+1 &x≥0\\ \cos x & x<0\\ \end{cases} $ is differentiable at $x=0$ AI: $$f(0)=1$$ $$\lim_{x\to 0^-}f(x)=\lim_{x\to0^-}(a\sin(x)+b\cos(x))=b$$ $f$ is continuous at $x=0$ if $b=f(0)=1$. $$f'(0^+)=\lim_{x\to 0^+}\frac{f(x)-f(0)}{x-0}$$ $$=\lim_{x\to0^+}x=\color{red}{0}$$ $f$ is then differentiable at $x=0$ if $$\lim_{x\to0^-}\frac{f(x)-f(0)}{x-0}=\color{red}{0}$$ or $$\lim_{x\to0^-}\frac{a\sin(x)+\cos(x)-1}{x}=0$$ but $$\sin(x)\sim x \text{ and } \cos(x)-1\sim \frac{-x^2}{2}$$ thus $a=0$.
H: How to compute $\frac{\partial}{\partial X}tr(BXX^tA)$? I know that $\frac{\partial}{\partial X}tr(BXX^t) = BX + B^tX$ according to the matrix cookbook equation 109. However, I need to calculate $\frac{\partial}{\partial X}tr(BXX^tA)$. Is there a simple way to derive this formula from the previous one? AI: The trace is traditionally defined for a square matrix. Assuming that is the case, let $B \in \mathbb{R}^{k \times n}$, $X \in \mathbb{R}^{n \times p}$, and $A \in \mathbb{R}^{n \times k}$. Then by the cyclic property of the trace \begin{align*} tr(BXX^{\top}A) = tr(ABXX^{\top}) = tr(CXX^{\top}) \end{align*} for $C = AB \in \mathbb{R}^{n \times n}$. From there, you can directly apply the derivative equation. The definition of the trace and its cyclic property should be easily encountered by any math or applied math major ;)
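A finite-difference check of the resulting formula $\frac{\partial}{\partial X}tr(BXX^{\top}A) = CX + C^{\top}X$ with $C=AB$ (a sketch; the shapes are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, p = 3, 4, 5
B = rng.standard_normal((k, n))
A = rng.standard_normal((n, k))
X = rng.standard_normal((n, p))

C = A @ B
grad = C @ X + C.T @ X                     # claimed gradient of tr(B X X^T A)

f = lambda X: np.trace(B @ X @ X.T @ A)
num = np.zeros_like(X)
h = 1e-6
for i in range(n):                         # central finite differences
    for j in range(p):
        E = np.zeros_like(X)
        E[i, j] = h
        num[i, j] = (f(X + E) - f(X - E)) / (2 * h)

print(np.allclose(grad, num, atol=1e-5))   # True
```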
H: Differential of Derivative I have encountered with a problem, and I can't follow the intermediate steps. Consider the following differential equation: $$ y^{\prime \prime} = -2y + f(y)-0.5y^3 $$ Here, the first derivative is expressed as the following integral: $$ (y^{\prime})^2 = 2\int_0^{y^\prime} y^\prime dy^\prime $$ Without giving the details of the intermediate steps, this equation is converted into the following equation using the differential equation given above: $$ (y^{\prime})^2 = -2 \int_{Y}^{y} \left( 2y - f(y)+0.5y^3 \right) dy$$ where $Y$ is the maximum value of $y$ when $y^\prime$ is zero. Here, I would like to learn how to express differential of a derivative $ dy^\prime $ in terms of $dy$, and end up with the above equation. If I am not wrong, the following equality must hold: $$ y^\prime dy^\prime = y^{\prime \prime} dy $$ How can I prove this equality? I am confused to think a differential like $ dy^\prime = d(\frac{dy}{dx}) $. Any help would be appreciated. Thanks. AI: $$y^\prime dy^\prime = y^{\prime \prime} dy$$ Simply because: $$y''dy=\left (\dfrac {dy'}{dx} \right )dy=y'dy'$$ Note that $$y^{\prime \prime} = -2y + f(y)-0.5y^3$$ Multiply by $2y'$ $$2y'y^{\prime \prime} = -4yy' + 2y'f(y)-y^3y'$$ $$(y'^2)'= -4yy' + 2y'f(y)-y^3y'$$ And integrate since $2y''y'=(y'^2)'$ $$\int (y'^2)'dx=\int ( -4yy' + 2y'f(y)-y^3y')dx$$ But since $y'dx=dy$ we simplify the integrand: $$y'^2=\int ( -4y + 2f(y)-y^3)dy$$ $$y'^2=-2\int ( 2y -f(y)+\dfrac 12 y^3)dy$$
H: Let $n \geqslant 0$ be an integer. Show that $2^n | \lceil (3+\sqrt5)^n \rceil.$ Let $n \geqslant 0$ be an integer. Show that $2^n | \lceil (3+\sqrt5)^n \rceil.$ This turned out to be relatively hard. The idea that our lecturer gave was that one should work towards a linear recurrence relation here, but it doesn't seem to be a trivial task. Using the binomial theorem on $(3+\sqrt5)^n$ was my first approach to see if there would be any cancellations, but that turned out to be also very messy. What would be the way to approach this? AI: Let $a_n=(3+\sqrt5)^n+(3-\sqrt5)^n$. Then can you show $a_0=2$, $a_1=6$, and $a_{n+2}=6a_{n+1}-4a_{n}$? [$3+\sqrt5$ and $3-\sqrt5$ are roots of $x^2-6x+4=0$.] It follows that $2^n|(3+\sqrt5)^n+(3-\sqrt5)^n$. Since $(3+\sqrt5)^n$ is not an integer (for $n\ge1$) and $0<(3-\sqrt5)^n<1$, it follows that $(3+\sqrt5)^n+(3-\sqrt5)^n=\lceil(3+\sqrt5)^n\rceil$.
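An exact integer check of the hint (my sketch): run the recurrence and compare against the ceiling computed at high precision.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
s5 = Decimal(5).sqrt()

a = [2, 6]                            # a_0 = 2, a_1 = 6
for n in range(2, 31):
    a.append(6 * a[-1] - 4 * a[-2])   # a_{n+2} = 6 a_{n+1} - 4 a_n

for n in range(1, 31):
    ceiling = int(((3 + s5) ** n).to_integral_value(rounding='ROUND_CEILING'))
    assert a[n] == ceiling and a[n] % 2 ** n == 0
print("ok: 2^n divides ceil((3+sqrt(5))^n) for n = 1..30")
```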
H: How does this alternate form come about from Euler's identity? I don't understand how this identity comes about; my tutor wrote it out and I checked it on Wolfram so I know it's not a mistake, and now I'd like to understand how to derive it. $\sin(\theta)\cos(\phi) -i\sin(\theta)\sin(\phi) = \sin(\theta)e^{-i\phi}$ I know Euler's identity can be used to substitute $e^{-i\phi} = \cos(\phi) - i \sin(\phi)$ but what on earth happens to the $i\sin(\theta)$? Why isn't the result $(\sin(\theta) -i\sin(\theta))e^{-i\phi}$ AI: Let $\color{red}A = \color{red}{\sin \theta}$ then $\color{red}{\sin \theta}\cos(\phi) -i\color{red}{\sin \theta}\sin(\phi)=$ $\color{red}A\cos\phi - \color{red}Ai\sin \phi = $ $\color{red}A(\cos \phi - i\sin \phi) =$ $\color{red}A(\cos (-\phi) + i\sin (-\phi))=$ $\color{red}A e^{-i\phi} =$ $\color{red}{\sin \theta}\, e^{-i\phi}$. But what on earth happens to the $i\sin(\theta)$? The $\sin \theta$ factored out, and the $i\sin \phi$ got incorporated into the conversion to polar coordinates. Why isn't the result $(\sin(\theta)-i\sin(\theta))e^{-i\phi}$? Because $\cos \phi \ne \sin \phi \ne e^{-i\phi}$ and you can't factor out unlike terms.
H: Are Quasi-Polynomial Time Functions Regularly Varying? Is $$\lim_{x\to\infty} \frac{e^{(\ln ax)^k}}{e^{(\ln x)^k}}$$ finite for all positive real $a$ and $k$? I have tried this on Desmos with $a$ and $k$ less than 2 where it seems to converge after around $10^{13}$, but with larger values the function is essentially a vertical line, though I'm not sure if it eventually converges. This would answer if $e^{(\ln x)^k}$ (a quasi-polynomial time function) is regularly varying. Here is the Wikipedia definition of regularly varying https://en.wikipedia.org/wiki/Slowly_varying_function. AI: I'm not sure how you define convergence, and I'm not familiar with SVF, but I tried the expression for small $k$. For $k=1$, $e^{\log (ax) - \log x} = a$ is a constant, but for $k=2$ $$ e^{\log^2(ax) - \log ^2 x} = e^{\log a \times \log (ax^2)} = a^{\log a} \times a^{2 \log x} $$ which converges for $0<a \leq 1$ and diverges for $a>1$.
H: Determine the edge-chromatic number of the next graph. Determine the edge-chromatic number of the next graph. By Vizing's Theorem $\chi'(G)=4$ or $\chi'(G)=5$. I have tried to show that $\chi'(G)=4$ but I cannot do it. I think that $\chi'(G)=5$, so I suppose that $G$ has a proper edge coloring with 4 colors, but I don't find a contradiction. Could you give me any advice or recommendation? AI: I think I can do it with $4$ colors. I don't have a quick way to make a diagram, so I'll try to explain in words. (It would be easier if you had labelled the vertices.) I'll use the colors Red, Green, Blue and White. Color one side of the outer triangle R, another side B, and the third side G. Color each side of the smaller triangles the same color as the parallel side of the big triangle. Now we color the $4$ segments on the line from each vertex of the big triangle to the center alternately W and X where X is the color of the side opposite this vertex. We start with W at the outer vertex and end with X at the center. Since X takes the values R,G,B, the edges that meet at the center are different colors.
H: Sum $\sum_{n = 1}^{\infty}\left[\frac1n\sin nx + \frac{1}{n^2}\cos nx\right]$ I want to find the following sum by using the complex methods for series ($z^n = \cos nx + i \sin nx$, i.e. $z=e^{ix}$). $$ \sum_{n = 1}^{\infty}\left[\frac1n\sin nx + \frac{1}{n^2}\cos nx\right] $$ Here is my attempt: $$ S_N = \sum_{n = 1}^{N}\frac1n\sin nx + \sum_{n = 1}^{N}\frac{1}{n^2}\cos nx $$ I'm done with the first sum: $$ \sum_{n = 1}^{N}\frac1n\sin nx = \Im \sum_{n = 1}^{N}\frac{z^n}{n} \Rightarrow \lim_{N \to \infty} \sum_{n = 1}^{N}\frac1n\sin nx = -\Im \ln(1 - z) $$ But I'm stuck with the second one: $$ \sum_{n = 1}^{N}\frac{1}{n^2}\cos nx = \Re \sum_{n = 1}^{N}\frac{z^n}{n^2} $$ AI: Note that we have $$\int_0^x \sin(nt)\,dt=\frac{1-\cos(nx)}{n}$$ Therefore we can write for $x\in [0,2\pi]$ $$\begin{align} \sum_{n=1}^\infty \frac{\cos(nx)}{n^2}&=\frac{\pi^2}{6}-\sum_{n=1}^\infty \frac1n \int_{0}^x \sin(nt)\,dt\\\\ &=\frac{\pi^2}{6}-\int_0^x \sum_{n=1}^\infty \frac{\sin(nt)}{n}\,dt\\\\ &=\frac{\pi^2}{6}+\int_0^x \text{Im}\left(\log(1-e^{it})\right)\,dt\\\\ &=\frac{\pi^2}6-\int_0^x \arctan(\cot(t/2))\,dt \\\\ &=\frac{\pi^2}{6}-\int_0^x \left(\frac{\pi}{2}-\frac t2\right)\,dt\\\\ &=\frac{\pi^2}{6}-\frac\pi 2 x +\frac{x^2}{4} \end{align}$$
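A numerical sanity check of the closed form (my addition; the truncation point is arbitrary):

```python
import numpy as np

x = 1.3
n = np.arange(1, 200_000)
partial = np.sum(np.cos(n * x) / n ** 2)
closed = np.pi ** 2 / 6 - np.pi * x / 2 + x ** 2 / 4
print(partial, closed)  # agree to about 5 decimal places
```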
H: Combinatorial argument for the number of pairs whose sum is less than or equal to k Example: How many pairs $(x,y)$ such that $x+y <= k$, where x y and k are integers and $x,y>=0, k > 0$. My solution: If we fix x then we can choose y in $(k - x + 1)$ ways (i.e. $y = 0,1,2,3 .... (k-x)$). Therefore, the final answer can be calculated as follows: $\begin{align*} &=\sum_{x = 0}^{k}\left ( k - x + 1 \right ) \\ &=\sum_{p = 1}^{k+1}\left ( p \right ) \qquad \left [ \text{ replacing k - x + 1 with p} \right ] \\ &=\frac{(k+1)(k+2)}{2} \end{align*}$ It seems that the final answer is equals to ${k+2\choose 2}\qquad$ (or choosing 2 out of k+2 distinct objects). Question: What can be the (possible) combinatorial argument for this answer ${k+2\choose 2}$ w.r.t to the counting of $(x,y)$ pairs ? AI: Take $k$ items and $2$ 'dividers'. Arrange them in any order you wish. There are ${k+2}\choose{2}$ ways to do this. The number of items (perhaps $0$) to the left of the first divider is your first addend; and the number of items between the two dividers is your second addend. For example, with $k = 5$, you could have $|**|***$ corresponding to $0+2$. Or you could have $*|****|$ corresponding to $1+4$.
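A brute-force check of the count (a sketch I added):

```python
from math import comb

for k in range(1, 10):
    count = sum(1 for x in range(k + 1) for y in range(k + 1) if x + y <= k)
    assert count == comb(k + 2, 2)   # matches the stars-and-bars count
print("ok")
```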
H: Show that $\|A\|_{\infty} \leq \sqrt n \|A\|_2$ Question: Let $A \in \Bbb R^{m\times n}$. Show that $$\|A\|_{\infty} \leq \sqrt n \|A\|_2$$ Attempt: First, I tried invoking the SVD (Singular Value Decomposition) of $A$: $$\|A\|_\infty = \|UDV\|_\infty \leq \|U\|_\infty\|D\|_\infty\|V\|_\infty \leq \sqrt m \|A\|_2\sqrt n$$ since $\|D\|_\infty$ is just the largest singular value of $A$, which is precisely $\|A\|_2$. However, the RHS of the inequality is already too large. Next, I tried doing $$n\|A\|_2^2 = n\bigg(\sup_x \frac{\|Ax\|_2}{\|x\|_2}\bigg)^2 \geq n \max_i \bigg(\frac{\|A e_i\|_2}{\|e_i\|_2}\bigg)^2 = n \max_i \sum_{j=1}^m a_{ji}^2$$ but this doesn't work either because the RHS has already gotten too small. Any hints? AI: Hint: Let $i^*$ be a row with maximal absolute row sum, i.e. $$\sum_{j = 1}^{n}|A_{i^*,j}| \ge \sum_{j = 1}^{n}|A_{i,j}|$$ for all $i = 1,\ldots,m$. Also, let $v \in \mathbb{R}^n$ be a vector whose entries are $$v_j = \begin{cases}+1 & \text{if} \ A_{i^*,j} \ge 0 \\ -1 & \text{if} \ A_{i^*,j} < 0\end{cases}.$$ Now, what can you say about $\|Av\|_2$?
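A randomized check of the inequality (a sketch of mine; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    m, n = rng.integers(2, 8, size=2)
    A = rng.standard_normal((m, n))
    inf_norm = np.max(np.abs(A).sum(axis=1))        # max absolute row sum
    two_norm = np.linalg.norm(A, 2)                 # largest singular value
    print(bool(inf_norm <= np.sqrt(n) * two_norm))  # True every time
```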
H: Prove that the number of subsets of [n] equals the number of n-digit binary numbers. So, I am going through David Mazur's Combinatorics: A Guided Tour and I came across this exercise question: Give a bijective proof: The number of subsets of [n] equals the number of n-digit binary numbers. So, I know how to go about the proof - create two sets and a function to map between them, then prove that the function is bijective and then use the bijective principle to show that the two sets have the same number of elements. The part I am having trouble with is defining a function between the two sets. Suppose A is the set of all subsets of [n] and B is the set of all n-digit binaries. Then I define f:A->B but by what relationship? If anyone could point me toward how to think through this function definition it would be of great help! Thanks in advance :) AI: Well, how many $n$-digit binary numbers are there? $2^n$. How do you know? Well, for each digit place we have two choices, $0$ or $1$. So for $n$ places each with $2$ choices we have $2^n$ choices. How many subsets of $[n]$ are there? $2^n$. How do you know? Well, when creating a subset of $[n]$ we have a choice to make for each element of $[n]$; we can choose to put it in the subset we are creating or to leave it out. For each element there are $2$ choices and there are $n$ choices to make, so there are $2^n$ possible ways of making the choices, and each one creates a unique subset ...... Okay, so "making" a binary number and "making" a subset both come down to a series of binary choices. So we ought to be able to make a bijection by matching up every choice we made for a string with a choice we made for a set: $f:$ strings to $P([n])$ via $f(string)=$ the set where for every place value $k$ that has a $1$ the set $f(string)$ will contain $k$ as an element, and for every place value $j$ that has a $0$ the set $f(string)$ will not contain $j$ as an element. ...... That ought to do it, and it is "obvious and clear to a dope" that it must be bijective because... well, everything we do to make a string is perfectly matched to everything we do to make a set. But..... there's not a math teacher in the world who will accept something that informal. ...... But if we represent a string as $s=[s_1,s_2,s_3......,s_n]$ where each $s_i$ is either $0$ or $1$ then we want the set of indexes $k$ where $s_k = 1$, or in other words: $f([s_1,s_2,s_3......,s_n]) = \{k| s_k = 1\}$ I'll leave it to you to prove formally it is a bijection.
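In code, the bijection is the familiar bitmask correspondence; a sketch (the function names are my own):

```python
def subset_from_string(s):
    # Element k (1-based) is in the subset exactly when the k-th digit is '1'.
    return {k + 1 for k, bit in enumerate(s) if bit == '1'}

def string_from_subset(subset, n):
    return ''.join('1' if k in subset else '0' for k in range(1, n + 1))

n = 4
strings = [format(i, f'0{n}b') for i in range(2 ** n)]
# Round-tripping all 2^n strings shows the map is a bijection:
print(all(string_from_subset(subset_from_string(s), n) == s for s in strings))
```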
H: How to calculate expected code length? How to calculate expected code length? AI: Should be: $$L = \sum p_i l_i = (0.25 + 0.25 + 0.2) \times 2 + (0.15 + 0.15) \times 3 = 2.3 $$
H: Finding a general formula for the function $f$ I need some help! During a moment of research, I wanted to find a general formula for a function $f$. $$f(x)=\begin{cases} 0&& \text{if $x\in\mathbb R^*$}\\ 1&&\text{if $x=0$} \end{cases}$$ The domain of $f(x)$ is $\mathbb R$. Try to determine an explicit formula for this function. And if we can't, does an explicit formula really exist? Thank you for your help! AI: Well, heck: $$f(x) = 1 - \left( \left\lfloor \frac{x}{|x|+1}\right\rfloor - \left\lfloor \frac{-x}{|x|+1}\right\rfloor\right)^2.$$
H: Locally constant sheaf on irreducible space is constant Let $X$ be an irreducible space, and let $\{U_{i}\}_{i\in I}$ be an open covering of $X$. Let $\mathcal{F}$ be a sheaf on $X$ such that the restriction of $\mathcal{F}$ to each open $U_{i}$ is constant. I want to show that $\mathcal{F}$ has to be constant. Note that the constant sheaf is the sheafification of the constant presheaf with value $A$ which assigns to every open the value $A$. Notice that since $X$ is irreducible every pair $U_{i}$ and $U_{j}$ from the open covering have non-empty intersection. Define for $i\in I$ the sheaf $\mathcal{F}_{i}:=\mathcal{F}|_{U_{i}}$, I tried to show that we have isomorphisms $\mathcal{F}_{i}|_{U_{i}\cap U_{j}}$ to $\mathcal{F}_{j}|_{U_{i}\cap U_{j}}$ which satisfies the desired properties such that we can glue them uniquely to the constant sheaf. But I failed doing this. Any help would be appreciated! I think my main struggles is how to deal with the sheafification here. Do I have a explicit description of the sets $\mathcal{F}_{i}(V)$ for $V\subset U_{i}$? AI: One way of doing this is to work like you have a generic point, a point with closure the whole space, contained in every open set. That is, define the set $A$ to be the colimit of $F(U)$ over all nonempty open sets, ordered by inclusion. Then, we have a canonical map $F(U)\rightarrow A$, so we can view $A$ as the constant sheaf, and we have our proposed isomorphism to a constant sheaf, since the constant presheaf is a sheaf on an irreducible space. To verify that this map is an isomorphism, we can work locally, noting that this colimit can be taken with respect to the (cofinal) system of subsets contained in $U_i$, for any $i$. The reason we can do this is that our original colimit was a directed set, which is equivalent to our space being irreducible, after unravelling the definitions.
H: $f$ entire and $f(n) = n$ with $\lim_{|z| \to \infty} |f(z)| = \infty$, show $f(z) = z$ The question is: $f$ entire and $f(n) = n$ with $\lim_{|z| \to \infty} |f(z)| = \infty$, show $f(z) = z$. I know that $f$ must be a polynomial which can be deduced from just the entire and $\lim_{|z| \to \infty} |f(z)| = \infty$ hypothesis. However, I do not see how to use the $f(n) = n$ for each $n$. AI: $f(z)-z$ is a polynomial with infinitely many zeros. Hence it is identically $0$.
H: Spheric equation to Cartesian equation I have this equation $$\rho^2=2 \sin(2\theta)\cos(\phi)$$ and I want to convert it into a Cartesian equation. I've already applied a trigonometric identity and I have this $$\rho^2=4\sin(\theta)\cos(\theta)\cos(\phi)$$ and I don't know what else to do. AI: I suppose that you have the following spherical coordinates (writing $r$ for your $\rho$): $x=r \sin \phi \cos \theta$, $y=r \sin \phi \sin \theta$, $z = r \cos \phi$. From these we have $$y^2+z^2=r^2\sin^2 \phi \sin^2 \theta + r^2 \cos^2 \phi=r^2\left( \sin^2 \phi \sin^2 \theta +1- \sin^2 \phi \right)=r^2\left( 1+ \sin^2 \phi \left(\sin^2 \theta -1 \right) \right)= r^2 \big(1- \sin^2 \phi \cos^2 \theta \big) \ \ \ (1)$$ and $$x^2 + y^2=r^2\sin^2 \phi \cos^2 \theta + r^2\sin^2 \phi \sin^2 \theta=r^2 \sin^2 \phi \ \ \ (2)$$ Your equation can be written as $$r^4=16\sin^2(\theta)\cos^2(\theta)\cos^2(\phi) = 16(1- \cos^2 \theta)\cos^2 \theta (1-\sin^2 \phi)$$ Then we will have an equation in $x,y,z$. Happy simplification.
H: Using the remainder theorem to prove a quadratic is a factor of a polynomial For example, if I have $P(x) = 3x^4 + 5x^3 -17x^2 -13x + 6$ then to show that $x^2 + x - 6$ is a factor I individually show that $x+3$ and $x-2$ are factors using the factor theorem (i.e. $P(-3) = 0$ and $P(2) = 0$). However, if $x+3$ and $x-2$ are individually factors, why does that conclude their product would be a factor? Because I have learnt if two individual numbers are factors, they also need to be "coprime" for the product to be a factor. (For example: $12$ is divisible by both $6$ and $12$ but not their product $72$) How would you know that $x+3$ and $x-2$ are coprime? E.g. for $x=7$ they do not seem to be? Thanks in advance! AI: $x+3$ and $x-2$ (or any two distinct monic polynomials of degree $1$) are coprime as polynomials. That is, there is no polynomial of positive degree that divides both of them. Indeed, $(x-2)-(x+3)=-5$. If $P(x)$ is a polynomial that is divisible by both $x-2$ and $x+3$, i.e. there are polynomials $q(x)$ and $r(x)$ such that $P(x) = (x-2) q(x) = (x+3) r(x)$, we have $$ (x+3) r(x) = P(x) = (x-2)(q(x)) = (x+3) q(x) -5 q(x)$$ so $$ q(x) = (x+3)(q(x)-r(x))/5$$ and $$P(x) = (x-2) q(x) = (x-2)(x+3)(q(x)-r(x))/5$$ is divisible by $(x-2)(x+3)$.
H: Conditions under which the torsion subgroup of an abelian group is a direct summand. Hi: Let $T$ be the torsion subgroup of an abelian group $G$. If $T$ is the direct sum of a divisible group and a bounded group, then $T$ is a direct summand of $G$. The author says "This follows easily from 4.1.3 and 4.3.8". 4.1.3 A divisible subgroup is a direct summand. 4.3.8 A pure bounded subgroup of an abelian group is a direct summand. Suppose $T=0\oplus B$ where $B$ is bounded. Of course $0$, the trivial subgroup of $G$, is divisible. Then $T$ is bounded. How do I apply 4.3.8 to prove $T$ is a direct summand of $G$? Note: bounded group = group of bounded order. AI: For your example, $T$ is bounded, so to apply 4.3.8, it suffices to show $T$ is pure. Let $t\in T$, and suppose $n{\,\mid\,}t$ in $G$. We want to show $n{\,\mid\,}t$ in $T$. Since $n{\,\mid\,}t$ in $G$, we have $t=ng$ for some $g\in G$. Since $t\in T$, we have $mt=0$ for some positive integer $m$, so $$(mn)g=m(ng)=mt=0$$ hence $g\in T$. Thus $n{\,\mid\,}t$ in $T$, so $T$ is pure. Now consider the general case . . . Thus suppose $T=D\,{\large{\oplus}}B$, where $D$ is divisible and $B$ is bounded. Our goal is to show that $T$ is a direct summand of $G$. Since $D$ is divisible, it follows by 4.1.3 that $G=D\,{\large{\oplus}}H$ for some $H\le G$. Let $S$ be the torsion subgroup of $H$. Since $B$ is bounded, we have $mB=0$ for some positive integer $m$. Let $s\in S$. Clearly $s\in T$ so $s=d+b$, where $d\in D$ and $b\in B$.$\;$Then \begin{align*} & s=d+b\\[4pt] \implies\;& ms=md+mb\\[4pt] \implies\;& ms=md\\[4pt] \implies\;& ms\in D\\[4pt] \implies\;& ms=0\;\;\;\text{[since $ms\in S\subseteq H$ and $D\cap H=0$]}\\[4pt] \end{align*} Thus $mS=0$ so $S$ is bounded. Since $S$ is the torsion subgroup of $H$, it follows (as in the first part of this post) that $S$ is pure in $H$, hence since $S$ is bounded, it follows by 4.3.8 that $S$ is a direct summand of $H$. Thus we have $H=S\,{\large{\oplus}}K$ for some $K\le H$, hence we can rewrite $G=D\,{\large{\oplus}}H$ as $G=D\,{\large{\oplus}}S\,{\large{\oplus}}K$. Claim $T=D\,{\large{\oplus}}S$. The inclusion $D\,{\large{\oplus}}S\subseteq T$ is clear. For the reverse inclusion, let $t\in T$. Since $G=D\,{\large{\oplus}}S\,{\large{\oplus}}K$, we have $t=d+s+k$, where $d\in D$, $s\in S$, and $k\in K$. Then from $k=t-d-s$, it follows that $k$ has finite order, hence $k=0$. Thus $t=d+s\in D\,{\large{\oplus}}S$, which yields $T\subseteq D\,{\large{\oplus}}S$. Hence $T=D\,{\large{\oplus}}S$, as claimed. Then we can rewrite $G=D\,{\large{\oplus}}S\,{\large{\oplus}}K$ as $G=T\,{\large{\oplus}}K$, so $T$ is a direct summand of $G$ as was to be shown.
H: Local characterization for convex $C^1$ functions Let $f:\mathbb R^n\to \mathbb R$ be $C^1$ and satisfy the following condition: For every $x\in\mathbb R^n$ there exists some $\varepsilon_x > 0$ such that for every $y$ with $\|y - x\| < \varepsilon_x$ it follows that $$ f(y) \ge f(x) + \nabla f(x)^T (y - x). $$ Is $f$ convex? Note: This is a follow-up question to 'Locally' Convex Function, whose accepted answer assumes $C^2$. Obviously a proof for $n=1$ suffices. AI: This is a much uglier proof than I had anticipated. The key result is that if a real valued $h$ is convex on an interval and attains a $\max$ in the interior of the interval then $h$ is constant on the interval. It is sufficient to prove that $\phi:I \to \mathbb{R}$ is convex, where $I=[0,1]$ and $\phi$ is defined by $\phi(t) = f(ty+(1-t)x)$. Let $\eta(t) = \phi(t) -l(t)$ where $l(t) = t \phi(1)+(1-t)\phi(0)$. Note that $\eta(0)=\eta(1) = 0$. Let $t^* \in I$ maximise $\eta$ and suppose $\eta(t^*) > 0$. Note that $\eta(t) \le \eta(t^*)$ for $t \in I$. It is clear that the set $M= \{t \in I \mid \eta(t)=\eta(t^*) \}$ is closed. Note that $\eta'(t) =0$ for all $t \in M$ (since all points are maximisers). We will show that $M$ is open; being nonempty, open and closed in the connected set $I$, it is then equal to $I$, which will give a contradiction. Pick $t' \in M$. There is some $\epsilon>0$ such that $\phi(t'+h)-\phi(t') \ge \phi'(t')h$ for $|h|< \epsilon$. Since $l(t'+h)-l(t') = l'(t') h$, we see that $\eta(t'+h)-\eta(t') \ge \eta'(t')h = 0$ for $|h| <\epsilon$. In particular, $\eta(t'+h) = \eta(t')$ for $|h| <\epsilon$ and so $(t'-\epsilon,t'+\epsilon)\cap I \subset M$. Hence $M$ is open, which means that $\eta(0) = \eta(t^*)>0$, a contradiction. Hence $\eta(t) \le 0$ for all $t \in I$ and so $f$ is convex. Note that only differentiability was used.
H: How to sketch the level curves of the following function Consider the function f defined by $f(x,y)=ln(x-y)$ How do I sketch the level curves for this function for the values of $k=-2,0,2$? AI: Observe that the level curves of $f(x, y)$ are given by $f(x, y) = k$ for some constant $k.$ Considering that $f(x, y) = \ln(x - y),$ the level curves of $f(x, y)$ are given by $\ln(x - y) = k.$ Of course, this is equivalent to $e^k = x - y,$ and this is equivalent to $y = x - e^k.$ Consequently, the level curves are the family of parallel lines $y = x - e^k$ with slope $1$ and $y-$intercept $e^k$ for each value of $k.$
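If you want to see the picture, here is a minimal matplotlib sketch (NumPy and matplotlib assumed installed) of the three parallel lines $y = x - e^k$ for $k=-2,0,2$:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 10, 200)
for k in (-2, 0, 2):
    plt.plot(x, x - np.exp(k), label=f"k = {k}")  # level curve ln(x - y) = k
plt.xlabel("x"); plt.ylabel("y"); plt.legend()
plt.title("Level curves of f(x, y) = ln(x - y)")
plt.show()
```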
H: Constant functionals on a set generated by one element: are they linear? In functional analysis we say that if $X$ is a normed space (say a real normed space) and $f\in X'$ (the dual of $X$), then $f:X\to \mathbb R$ satisfies $$f(x+y)=f(x)+f(y),\\f(\alpha x)=\alpha f(x) \\ \forall x,y \in X, \forall \alpha \in \mathbb R$$ But if I want to consider $Z=\operatorname{span}(\{x_0\})\subset X$ and define $f$ as $$f(x)=1,\quad \forall x\in Z$$ how can I formally show that $f$ is linear? The intuition seems clear because $f$ is a constant functional on $Z$, so it should be linear, but formally $$1=f(x+y)\neq f(x)+f(y)=1+1=2$$ So what is the problem? I want such a thing to show that a functional defined on $Z$ is bounded. My problem arises from a solution to the functional analysis problem below: And another example of using "linear functional" in the sense that I want to understand is from Kreyszig's Functional Analysis book. (Look at the line described by $(10)$) AI: The proposed constant function is not linear. While the graph of $f$ forms a line, it doesn't pass through the origin, making it an affine function, not a linear function. In the given solution, $f$ is not being defined to be constantly $1$, but is being defined to be $1$ at the single point $x - y$. Since $x - y$ is a basis for $\operatorname{span}(x - y)$, this is sufficient to define a linear map. In particular, the linear map takes an arbitrary point $\alpha(x - y) \in \operatorname{span}(x - y)$, and maps it to $\alpha$. Really though, you don't need to understand what happens further along the line $\operatorname{span}(x - y)$, so long as $f$ maps $x - y$ to something non-zero. Your third question has much the same answer. Don't forget that the $x$ in $(10)$ takes the form $x = \alpha x_0$ (which is the form that all elements of $\operatorname{span}(x_0)$ take). Then, using linearity, and the fact that $f(x_0)$ is defined to be $\|x_0\|$, we get $$f(x) = f(\alpha x_0) = \alpha f(x_0) = \alpha \|x_0\|.$$
H: If $f: X \to Y$ and continuous in $D \in \mathcal{B}_{X}$, then $V$ open implies $f^{-1}(V) = U \cap D$ with $U$ open I am reading through Yeh's Real Analysis 3rd edition, pp. 21-22. Yeh proves the following theorem: Given two measurable spaces $(X, \mathcal{B}_X)$ and $(Y, \mathcal{B}_Y)$ where $X$ and $Y$ are topological spaces and $\mathcal{B}_X$ and $\mathcal{B}_Y$ are the Borel $\sigma$-algebras of subsets of $X$ and $Y$ respectively, if $f$ is a continuous mapping defined on a set $D \in \mathcal{B}_X$, then $f$ is a $\mathcal{B}_X$/$\mathcal{B}_Y$-measurable mapping of $D$ into $Y$. In the above, it is understood that the domain and range of $f$ satisfy $\mathcal{D}(f) \subset X$ and $\mathcal{R}(f) \subset Y$ respectively. Yeh starts off the proof as follows: Let $V$ be an open set in $Y$. The continuity of $f$ on $D$ implies that $f^{-1}(V) = U \cap D$ where $U$ is an open set in $X$ so that $f^{-1}(V) \in \mathcal{B}_X$. I don't follow this part of the proof, and I suspect it's because my background in topology is lacking. Every topology text I have that I've opened up has something like the following: Let $X$ and $Y$ be topological spaces. $f: X \to Y$ is continuous if $f^{-1}(V)$ is open in $X$ for every open set $V$ in $Y$. Great, so that means that $f^{-1}(V)$ should be open, but why does it equal $U \cap D$? Why does this matter anyway, since $\mathcal{B}_X$ is the $\sigma$-algebra generated by the collection of all open sets of $X$ anyway, so $f^{-1}(V) \in \mathcal{B}_X$ should be obvious? AI: That $f\colon D\to Y$ is continuous means that $f^{-1}(V)$ is open in $D$ whenever $V$ is open in $Y$. However, we did not specify a topology on $D$. Typically, when nothing is explicitly stated, we take the subspace topology on $D$. In the subspace topology, a set $E \subseteq D$ is open if there is some $F\subseteq X$ that is open in $X$ such that $E=D\cap F$. Thus, that $f^{-1}(V)$ is open in $D$ means that there is a $U$ that is open in $X$ so that $f^{-1}(V) = D \cap U$.
H: Determine whether a function on the even integers is injective and surjective Let $E$ be the set of even integers. Define $f:E \rightarrow \{0, 1, 2, 3, 4\}$ by $f(n) = n \bmod 5$. a) Determine whether $f$ is one-to-one (injective). b) Determine whether $f$ is onto (surjective). AI: Take $m=4 \in E$; then $f(4)=4 \bmod 5 = 4$. And take $n=14 \in E$; then $f(14)=14 \bmod 5 = 4$. Since $m, n \in E$ are such that $m\neq n$ and $f(m)=f(n)$, the function is not one-to-one. Now try to decide whether $f$ is onto using the definition: a function $F:A \rightarrow B$ is onto if for every $y\in B$ there exists an $x \in A$ with $F(x)=y$. (For instance, which even numbers map to $1$?)
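A brute-force scan in Python over a window of even integers (a made-up check, just for illustration) confirms both parts: every residue in $\{0,\dots,4\}$ occurs, and residues repeat.

```python
def f(n):
    return n % 5  # Python's % returns a nonnegative residue, also for negatives

evens = range(-20, 21, 2)
values = [f(n) for n in evens]
print(sorted(set(values)))             # [0, 1, 2, 3, 4] -> every residue occurs
print(len(values) > len(set(values)))  # True -> some residue repeats
```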
H: Equinumerosity: Proof Validation Synopsis In my studying of Set Theory, I've had the privilege of being able to compare much of my work to a solutions manual. This solutions manual, however, ended after the chapter on the various number systems. As such, I'm going to start posting a lot of my own personal solutions or questions onto the site for either proof validation or proof hints (if I'm unable to find the solution in other places), and I would greatly appreciate any feedback on my mathematical writing. Thank you. Exercise Show that the equation $$f(m, n) = 2^m(2n + 1) - 1$$ defines a one-to-one correspondence between $\omega \times \omega$ and $\omega$. Proof First, we show that $f$ is injective. Consider $(m, n)$ and $(m', n')$ in $\omega \times \omega$. Now suppose that $f(m, n) = f(m', n')$. Then $2^{m}(2n+1) - 1 = 2^{m'}(2n' + 1) - 1$ and $2^{m}(2n+1) = 2^{m'}(2n' + 1)$. Note that every value of this equation will only have one odd factor, mainly $(2n + 1)$. As such, $n = n'$ and $2^m = 2^{m'}$. So, $m = m'$ and $f$ is injective. Next, we wish to show that $f$ is surjective. Consider any $x \in \omega$. If $x$ is even, take $m = 0$ and $n = x/2$. If $x$ is odd, note that $x = 2k + 1$ for some $k \in \omega$. So $2k = 2^m(2n + 1)$ if $m$ and $n$ exist, and since every even number contains a factor that is odd (for powers of two, we consider the factor one), there is always a $m$ and $n$ that satisfies the equality. Therefore, for any $x \in \omega$, there is always a $m$ and $n$ such that $f(m, n) = 2^m(2n + 1) - 1$ and $f$ is surjective. As such, because $f$ is both surjective and injective, $f$ is bijective, and the equation $f(m, n) = 2^m(2n + 1) - 1$ defines a one-to-one correspondence between $\omega \times \omega$ and $\omega$. AI: Proof looks fine. For the $x$ odd case, you might want to clarify whether when using $n=0$ for the "odd factor one" interferes with $k$ itself being odd (hence $m=1$). It might be easier to simply write down $m$ and $n$ in terms of the factorization of $k$ (at least for prime $2$ and everything else). This tracks closer to why this entire construction works: you're essentially splitting up all natural numbers into the direct product of two copies of itself, one half encoded by powers of 2, the other half by all odd numbers. Fundamental theorem of arithmetic makes this work both ways (per Anonymous).
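A finite sanity check of the bijection (a sketch, not a substitute for the proof) can be run in Python: every value below a bound should be hit exactly once.

```python
# Count how often each value below N is produced by f(m, n) = 2^m(2n+1) - 1.
N = 512
hits = {}
for m in range(10):          # 2^9 = 512 already exceeds the range for m >= 10
    for n in range(N):
        v = 2**m * (2*n + 1) - 1
        if v < N:
            hits[v] = hits.get(v, 0) + 1
print(sorted(hits) == list(range(N)))      # True: every value 0..N-1 is hit
print(all(c == 1 for c in hits.values()))  # True: each value is hit exactly once
```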
H: Find solution set of $\dfrac{8^x+27^x}{12^x+18^x}=\dfrac{14}{12}$ What I've done is factoring it. $$\dfrac{2^{3x}+3^{3x}}{2^{2x}\cdot 3^{x}+3^{2x}\cdot{2^{x}}}=\dfrac{7}{2\cdot 3}$$ This looks like it can be factored more but it doesn't work from my attempts. AI: Well you've reached a good point: $$\dfrac{2^{3x}+3^{3x}}{2^{2x}\cdot 3^{x}+3^{2x}\cdot{2^{x}}}$$ $$=\dfrac{2^{3x}+3^{3x}}{2^{x}\cdot 3^{x}(2^x+3^{x})}$$ We can let $2^x=a$ and $3^x=b$ and get $$\frac{a^3+b^3}{ab(a+b)}=\frac{a^2-ab+b^2}{ab}=\frac{a}{b}-1+\frac{b}{a}$$ Now let $z=\frac{a}{b}$, we get $$z-1+\frac{1}{z}=\frac{7}{6} \iff z^2-\frac{13}{6}z+1=0 \implies z=\frac{2}{3},\frac{3}{2} \implies x=\pm 1$$
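A quick numerical confirmation (standard library only) that both roots satisfy the original equation:

```python
def lhs(x):
    return (8**x + 27**x) / (12**x + 18**x)

print(lhs(1), lhs(-1), 7 / 6)  # all three print 1.1666...
```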
H: How to prove that the limit is equal to the function? I have the following statement to prove: Prove that if $g$ is differentiable, then $\lim_{h\to 0}\frac{1}{g(x+h)}=\frac{1}{g(x)}$ My attempt was: If $g$ is differentiable, then it is continuous on its domain, and the limit of a continuous function at a point of its domain is just the image of that point, that is $\frac{1}{g(x)}$. Is my proof correct? A second proof that I made was: I know that $\lim_{h\to 0} g(x+h)=g(x)$ since $g$ is continuous. Then $[\lim_{h\to 0} g(x+h)]^{-1}=[g(x)]^{-1}$, and since the limit exists, using the algebra of limits I get the desired result. Are these proofs correct? Thanks in advance. AI: Of course we assume $g(x) \ne 0$, but nevertheless we need $\lim_{h \rightarrow 0}g(x +h) \ne 0$ in both cases, so that $1/g(x+h)$ is defined near $h=0$ and the quotient rule for limits applies; continuity grants this, since the limit equals $g(x)\neq 0$.
H: Why did Spivak choose this definitions? ("Calculus on Manifolds", the definition of the norm, the definition of open sets) I am reading "Calculus on Manifolds" by Michael Spivak. In this book, the norm of $x \in \mathbb{R}^n$ is defined by $|x| := \sqrt{(x^1)^2 + \cdots + (x^n)^2}$. In this book, a set $U \subset \mathbb{R}^n$ is called open if for each $x \in U$ there is an open rectangle $A$ such that $x \in A \subset U$. Why did Spivak choose this definitions? What are the advantages of these definitions? I wonder if we define the norm of $x \in \mathbb{R}^n$ as $|x| := \sqrt{(x^1)^2 + \cdots + (x^n)^2}$, then it is natural we define a set $U \subset \mathbb{R}^n$ is open if for each $x \in U$ there is an open ball $B$ such that $x \in B \subset U$. And I wonder if we define the norm of $x \in \mathbb{R}^n$ as $|x| := \max \{|x^1|, \cdots, |x^n|\}$, then it is natural we define a set $U \subset \mathbb{R}^n$ is open if for each $x \in U$ there is an open rectangle $A$ such that $x \in A \subset U$. AI: Both of them are equivalent. If there is an open rectangle containing the point, there is an open ball within that open rectangle, containing the point. Conversely if there is an open ball containing the point, there is an open rectangle within that open ball, that contains the point. His definition has the additional advantage that Cartesian product of open rectangles in $\Bbb R^m$ and $\Bbb R^n$ gives us an open rectangle in $\Bbb R^{m+n}$.
H: Composition of Irreducible Representation and Non-surjective homomorphism: counterexample I came across this problem, where it is proved that if $\phi: G\rightarrow H$ is an epimorphism and $\psi: H \rightarrow GL (V)$ is an irreducible representation, then $\psi \circ \phi$ is an irreducible representation of $G$. I learned from elsewhere that this statement is actually "if and only if", and so I was trying to find a counterexample to this problem where $\phi$ is not surjective, but I couldn't find one. AI: Take $G$ to be the trivial group and $H$ a non-Abelian finite group. There is a unique homomorphism $\phi:G\to H$. But $H$ has an irreducible representation $\psi$ of degree $d\ge2$. Then $\psi\circ\phi$ also has degree $d$ but the only irreducible representation of the trivial group is the trivial representation of degree one.
H: To solve a polynomial equation of a special form. It is known that there is no general method (in radicals) for solving polynomial equations of degree five or higher. However, some equations of special form are known to be solvable. While I was studying today, I faced the following problem. $Ax^{n}+Bx^{m}=C$ where $n > m >0$, and $A$, $B$ and $C$ are constants. My attempt is here. Let $y = x^m$; then, assuming $m$ divides $n$, $x^n$ can be rewritten as $y^p$ with $p=n/m>0$, so the problem is converted into $Ay^p+By =C$. I do not know what to do anymore. I tried a change of variables but failed. Does this equation have a solution? If so, could anyone offer me some helpful advice? Thank you for reading. AI: Unfortunately there's no general approach. You can look for rational roots and try to factor the polynomial; or sometimes, if you get very lucky, you can find a clever substitution that turns the polynomial into one where all powers of the unknown have a common divisor, so that you reduce the degree of the polynomial (this is the essence of the trick behind Cardano's formula for solving cubics). But even for something as simple as $-x^5 + x =1$, there is no solution in terms of radicals.
H: Prove or disprove the proposition Let $l_1=(\cos\alpha_1,\cos\beta_1,\cos\gamma_1)$, $l_2=(\cos\alpha_2,\cos\beta_2,\cos\gamma_2)$, $l_3=(\cos\alpha_3,\cos\beta_3,\cos\gamma_3)$ be three mutually perpendicular unit vectors in space ($l_1\perp l_2 \perp l_3$), and let $u=f(x,y,z)$. Prove or disprove that $$\left(\frac{\partial u}{\partial l_1}\right)^2+\left(\frac{\partial u}{\partial l_2}\right)^2+\left(\frac{\partial u}{\partial l_3}\right)^2=\left(\frac{\partial u}{\partial x}\right)^2+\left(\frac{\partial u}{\partial y}\right)^2+\left(\frac{\partial u}{\partial z}\right)^2$$ where $\frac{\partial u}{\partial l_1}$ denotes the directional derivative of $u$ in the direction of $l_1$. My attempt: I believed the statement to be false. Let $v_1=\frac{1}{\sqrt{14}}(1,2,3)$, $v_2=\frac{1}{\sqrt{27}}(-5,1,1)$ and $v_3=\frac{1}{\sqrt{378}}(-1,-16,11)$ be three perpendicular unit vectors in space. Using the fact that the derivative of $u$ in the direction of $l_1$ is $\nabla f \cdot l_1$ where $\|l_1\|=1$, I computed $\frac{\partial u}{\partial l_1}$, $\frac{\partial u}{\partial l_2}$ and $\frac{\partial u}{\partial l_3}$ and the sum of their squares, but I obtained something different from the sum of the squares of the partial derivatives of $u$ with respect to the variables $x,y,z$. I would appreciate any correction of my attempted disproof or, failing that, the respective proof; I should mention that I tried to prove the statement but could not, and so decided to look for a counterexample. AI: Recall that the directional derivative in direction of a unit vector $\vec v$ is (even though I do not like this notation) $$\frac{\partial u}{\partial\vec v} = \nabla u\cdot \vec v.$$ So, if $\vec v_1,\vec v_2,\vec v_3$ are three mutually orthogonal unit vectors, we have \begin{align*}\left(\frac{\partial u}{\partial\vec v_1}\right)^2 + \left(\frac{\partial u}{\partial\vec v_2}\right)^2 + \left(\frac{\partial u}{\partial\vec v_3}\right)^2 &= (\nabla u\cdot\vec v_1)^2 + (\nabla u\cdot\vec v_2)^2 + (\nabla u\cdot\vec v_3)^2 \\ &= \|\nabla u\|^2.\end{align*}
H: Associative Law - Number Sequence I'm stuck on a problem, need help. The associative property states that $(a + b) + c = a + (b + c)$, which is true, but what if I change the position of the numbers: $(a + c) + b = a + (c + b)$? Does this still hold by the associative law, given that the answer will still be the same? Stuck on this problem; need your help with proofs. Thanks in advance. AI: Yep, it's true, even when you permute the symbols. Associativity means that $$(a + b) + c = a + (b + c)$$ for any $a, b, c$ in the appropriate set (real numbers perhaps?). It doesn't matter what I substitute in for these variables, this will always be true. Here are some true statements that follow from this: \begin{align*} (q + l) + v &= q + (l + v) \\ (1 + 3) + 5 &= 1 + (3 + 5) \\ (a + b) + (x + 1) &= a + (b + (x + 1)) \\ a^x + (b^y + c^z) &= (a^x + b^y) + c^z \\ a + (c + b) &= (a + c) + b \\ a + (a + a) &= (a + a) + a. \end{align*} Basically, you can substitute any expression you like in for $a, b, c$, so long as they evaluate to a real number, and you will get a true statement.
H: Function from $\mathbb{Z^+}$ to $\mathbb{Z^+}$ that is neither one-to-one nor onto? I am thinking of something like $f(x) = 8$. Does this make sense? It seems a bit simple to me so I'm not sure. My reasoning is that this function is not one-to-one because $f$ takes the same value on the whole domain. It's also not onto because the range isn't equal to the codomain, since here the range is just $\{8\}$. Thanks! AI: This is a fine example. Few functions are one-to-one or onto, so finding one that is neither should not be hard. You have succeeded. The point is to understand the definitions and you seem to have done that.
H: $\sum_{n=1}^{\infty} \frac{d_{n}(x,y)}{2^{n}}$ is a metric Could you help me with the following, please? If $d_1,d_2,...$ is a sequence of metrics on a space $X$ such that $d_{n}(x, y) \leq 1$ for each $x, y \in X$ and each $n$, prove that the function $d: X\times X \rightarrow \mathbb{R}$, defined by $$d(x,y)=\sum_{n=1}^{\infty} \frac{d_{n}(x,y)}{2^{n}}$$ is a metric on $X$. My question in particular is: why is the hypothesis that $d_{n}(x, y) \leq 1$ important, and where is it applied? Thank you very much for your help. AI: First of all, we will see that $d(x,y)$ is well-defined: note that $$\sum_{n=1}^N \frac{d_n(x,y)}{2^n} \leq \sum_{n=1}^N \frac{1}{2^n} \leq \sum_{n=1}^\infty \frac{1}{2^n}<\infty \textrm{ for all $N \geq 1$,}$$ which means that $\Big( \displaystyle \sum_{n=1}^N \frac{d_n(x,y)}{2^n} \Big)_{N=1}^\infty$ is bounded above, and hence, converges$^1$. Next, we will verify that $d(x,y) \leq d(x,z) + d(y,z)$. But this is also easy: since $$\sum_{n=1}^N \frac{d_n(x,y)}{2^n} \leq \sum_{n=1}^N \frac{d_n(x,z)}{2^n} + \sum_{n=1}^N \frac{d_n(y,z)}{2^n} \leq d(x,z)+d(y,z) \textrm{ for all $N \geq 1$,}$$ the set $$S = \bigg\{ \sum_{n=1}^N \frac{d_n(x,y)}{2^n} :\, N \geq 1 \bigg\}$$ is bounded above by $d(x,z)+d(y,z)$, and therefore, $d(x,y) = \sup S \leq d(x,z)+d(y,z)$. I will leave to you the proof that $d(x,y) = 0$ if and only if $x=y$ (why are the other properties in the definition of a metric immediate?). $^1$ It is also a non-decreasing sequence because $d_n(x,y) \geq 0$.
H: Why these different forms of the Triangle Removal Lemma are equivalent The triangle removal lemma can be stated as follows. $\forall \epsilon>0,\exists \delta>0$ such that any graph on $n$ vertices with $\leq \delta n^3$ triangles can be made triangle-free by removing at most $\epsilon n^2$ edges. I can also find another version of this result. Any graph on $n$ vertices with $o(n^3)$ triangles can be made triangle-free by removing $o(n^2)$ edges. How to show these two are equivalent? AI: The second statement is tricky to parse. You can't, looking at a graph on $n$ vertices, tell if it has $o(n^3)$ triangles or not, because $o(n^3)$ is only meaningful as $n \to \infty$. The statement really means: Given any $f(n)$ which is $o(n^3)$, there is some $g(n)$ which is $o(n^2)$ such that any $n$-vertex graph with at most $f(n)$ triangles can be made triangle-free by removing at most $g(n)$ edges. (The only way to know that this is a for-all, there-exists type of statement is to check that all the other possibilities either don't say anything interesting, or are false.) Here's one way to deduce this from the first lemma. Take any $f(n) = o(n^3)$. Let $\delta(\epsilon)$ be some function realizing the dependency between $\epsilon$ and $\delta$ in the first lemma. For each positive $k$, let $n_k$ be the value such that for all $n > n_k$, $f(n) < \delta(\frac1k) n^3$; this must exist, because $f(n) = o(n^3)$. Define $g(n) = \frac1k n^2$ for all $n \in [n_k, n_{k+1})$. (We can set $g(n) = n^2$ for $n<n_1$, because deleting $n^2$ edges can always make any graph triangle-free.) Then, the first lemma will tell us that for any graph with $f(n)$ triangles, we can make it triangle-free by removing $g(n)$ edges.
H: Application of the finite intersection property Let $\mathscr{F}$ be a family of compact sets with the finite intersection property in a metric space $(X, d)$. Show that $\bigcap \mathscr{F} \neq \emptyset$. My thinking: I want to prove this by contradiction. Suppose $\bigcap \mathscr{F} = \emptyset$. Then $\{X-F: F\in \mathscr{F}\}$ is an open cover of $X$. I don't know what to do next. If I am on the right track, please somebody help me; any independent idea is also welcome. Thanks in advance. AI: You have the right general idea, but you need to refine it a bit. Pick any $F_0\in\mathscr{F}$, and consider $$\big\{X\setminus F:F\in\mathscr{F}\setminus\{F_0\}\big\}\;;$$ this is an open cover of the compact set $F_0$. Can you finish it from here?
H: Design a rank one matrix $\boldsymbol{C} = \boldsymbol{a} \boldsymbol{b}^T$ such that no two entries of the matrix $\boldsymbol{C}$ are equal. I want to design a rank one matrix $\boldsymbol{C} = \boldsymbol{a} \boldsymbol{b}^T \in \mathbb R^{m \times n}$. The matrix $C$ should not contain two equal entries, and the vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ are required to satisfy $\| \boldsymbol{a} \|_2 \leq 1$ and $\| \boldsymbol{b} \|_2 \leq 1$. In my question, $m ,n$ are supposed to be large, meaning $\min\{m,n\} \geq 256$. In fact, when $m,n$ are small, I can carefully choose values to satisfy these constraints. However, is there any systematic rule for designing a huge matrix like that? Any comments would be appreciated! AI: If you make your values simple fractions, then you can take advantage of some elementary number theory. For example, $2^a3^b=2^c3^d$ if and only if $a=c$ and $b=d$. Can you see how to use that fact? Edit: Okay, here's my full answer: $$\pmatrix{\frac12\\\frac14\\\frac18\\\vdots\\\frac{1}{2^m}}\pmatrix{\frac13&\frac19&\frac{1}{27}&\cdots&\frac{1}{3^n}}= \pmatrix{ \frac16&\frac{1}{18}&\frac{1}{54}&\cdots&\frac{1}{2\cdot 3^n} \\ \frac{1}{12}&\frac{1}{36}&\frac{1}{108}&\cdots&\frac{1}{4\cdot 3^n} \\ \frac{1}{24}&\frac{1}{72}&\frac{1}{216}&\cdots&\frac{1}{8\cdot 3^n} \\ \vdots&\vdots&\vdots&\ddots&\vdots \\ \frac{1}{2^m\cdot3}&\frac{1}{2^m\cdot9}&\frac{1}{2^m\cdot27}&\cdots&\frac{1}{2^m\cdot 3^n} }$$
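Here is a small NumPy sketch of this construction (sizes chosen for illustration; the question's $m,n\geq 256$ work the same way since the norms only shrink as more terms are added):

```python
import numpy as np

m, n = 8, 8
a = np.array([1.0 / 2**i for i in range(1, m + 1)])
b = np.array([1.0 / 3**j for j in range(1, n + 1)])
C = np.outer(a, b)  # entries 1/(2^i 3^j), pairwise distinct by unique factorization
print(len(set(C.ravel())) == m * n)                     # True: all entries distinct
print(np.linalg.norm(a) <= 1, np.linalg.norm(b) <= 1)   # True True
```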
H: Is the terminal object the kernel of the identity morphism? Let's say that there is a category $\mathbf{C}$ with $A$ being an object of that category, and a zero object exists in that category. If we have an identity morphism ${id}_A: A\to A$, is the kernel of this morphism a terminal object of the category? My reasoning for this stems from the fact that the universal property of the kernel requires a unique morphism going into $\ker(id)$, hence the terminal object. AI: EDIT: As pointed out in the comments by @Geoffrey Trang, the same argument applies to monomorphisms and dually to epimorphisms. In fact, you can show that any isomorphism $f : A \longrightarrow B$ satisfies $\ker(f) = 0$. Indeed, let $g: C \longrightarrow A$ be such that $f \circ g = 0$. We want to show that $g$ factors through $0$. Well, since $f$ is an isomorphism, composing with $f^{-1}$ gives $g = 0$. Hence, $g$ factors as $C \longrightarrow 0 \longrightarrow A$ and we have shown that $0$ satisfies the universal property of the kernel. Dually, the cokernel of an isomorphism is $0$.
H: Prove that $\lim_{n \to \infty} \sqrt{a_n} = \sqrt{L}$ $1.$ Let $\{a_n\}$ be a sequence of positive terms such that $\lim_{n \to \infty} a_n = L$ where $L > 0$. Prove that $\lim_{n \to \infty} \sqrt{a_n} = \sqrt{L}$. Proof of $1:$ \begin{align*} \forall \epsilon > 0\: \exists N >0\:\: s.t\:\:n>N &\implies |a_n-L|<\epsilon\\ &\implies|(\sqrt{a_n}-\sqrt{L})(\sqrt{a_n}+\sqrt{L})|<\epsilon\\ &\implies |\sqrt{a_n}-\sqrt{L}||\sqrt{a_n}+\sqrt{L}|<\epsilon\\ &\implies|\sqrt{a_n}-\sqrt{L}|<\epsilon\\ &\therefore \lim_{n \to \infty} \sqrt{a_n} = \sqrt{L} \end{align*} Is this proof correct ? AI: Start with $N$ such that $|a_n-L| <\epsilon \sqrt L$ for $n >N$. Then you get $|\sqrt a_n -{\sqrt L}||\sqrt a_n +{\sqrt L}| <\epsilon \sqrt L$. Hence $|\sqrt a_n -{\sqrt L}| <\frac 1 {\sqrt a_n +{\sqrt L}} \epsilon \sqrt L<\epsilon$ since $\sqrt a_n +{\sqrt L} >{\sqrt L}$.
H: Simple way to count points in a finite projective plane What is an easy way to count points and lines in a finite projective plane over a finite field $\mathbb F_q$, accessible to a student with minor knowledge of linear algebra (spaces, subspaces, etc.)? Points in the projective plane over the finite field $\mathbb F_q$ are $1$-dimensional subspaces of $\mathbb F_q^3$. Lines are $2$-dimensional subspaces of $\mathbb F_q^3$. I want to show that there are $\frac{q^3-1}{q-1}=q^2+q+1$ points and the same number of lines, as well as that there are $q+1$ points on each line and $q+1$ lines going through each point. How can one show this in the easiest way? AI: Once you have evaluated one of the two numbers, you conclude by the duality between points and hyperplanes (in this case lines) in projective space. Your task, for example, is to count one-dimensional subspaces: choose a nonzero generator, which gives $q^3-1$ choices. There are $q-1$ nonzero multiples of this vector (I am excluding the zero of the field) that generate the same space, so we divide by this number and conclude. If you want to avoid the duality argument, you can approach the problem in this way (using the duality argument without knowing it): a two-dimensional subspace of $\mathbb F_q^3$ is the datum of a cartesian equation. The row vector that describes the equation, modulo multiplication by an invertible element of $\mathbb F_q$, describes the space uniquely, and you can conclude as in the previous point. For the second part, take a line and obtain with analogous steps $\frac{q^2-1}{q-1}= q+1$ (one-dimensional subspaces inside a two-dimensional subspace); then conclude by duality. Do you still want to avoid this tool? :)
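For a concrete sanity check, here is a short Python script (illustrative only) that enumerates the $1$-dimensional subspaces of $\mathbb F_3^3$ by normalizing each nonzero vector so that its first nonzero coordinate is $1$:

```python
from itertools import product

q = 3  # a prime, so inverses mod q exist via Fermat's little theorem
vectors = [v for v in product(range(q), repeat=3) if any(v)]

def normalize(v):
    # scale by the inverse of the first nonzero coordinate, mod q
    c = next(x for x in v if x)
    inv = pow(c, q - 2, q)
    return tuple((inv * x) % q for x in v)

points = {normalize(v) for v in vectors}
print(len(points))  # 13 = q^2 + q + 1
```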
H: Investigate the solution (no solution, a unique solution, or infinitely many solutions) according to the constants $a$ and $b$. Investigate the solution (no solution, a unique solution, or infinitely many solutions) of the following system of linear equations according to the constants $a$ and $b$. $$\begin{align*} x + bz &= 1 \\ ax + y + z &= a+1 \\ bx -a^2y + a^3bz &= a-a^3 \end{align*}$$ I tried to solve the problem by checking whether $\det A$ is $0$ or not. In short, I don't quite see what I should do. If you can give any ideas for this simple question, I'll be so happy. Sincerely, thanks. AI: Our system of equations gives rise to the matrix equation $A \mathbf x = \mathbf b,$ where we have that $$A = \begin{pmatrix} 1 & \phantom - 0 & b \\ a & \phantom - 1 & 1 \\ b & -a^2 & a^3 b \end{pmatrix},$$ $\mathbf x = (x, y, z)^t,$ and $\mathbf b = (1, a + 1, a - a^3)^t.$ Consider the induced augmented matrix $$\begin{pmatrix} \begin{array}{ccc|c} 1 & \phantom - 0 & b & 1 \\ a & \phantom - 1 & 1 & a + 1 \\ b & -a^2 & a^3 b & a - a^3 \end{array} \end{pmatrix}.$$ By performing elementary row operations, we obtain the augmented matrix $$\begin{pmatrix} \begin{array}{ccc|c} 1 & 0 & b & 1 \\ 0 & 1 & 1 - ab & 1 \\ 0 & 0 & a^2 - b^2 & a + a^2 - a^3 - b \end{array} \end{pmatrix}.$$ Can you finish the problem from here?
H: There are $20$ females and $15$ males in a party. In how many ways can $15$ couples be created? There are $20$ females and $15$ males in a party. In how many ways can $15$ couples be created? I think it is $P^{20}_{15}$ but I feel I'm wrong. AI: Similar to @David G. Stork, I'm assuming you define "couple" as one man and one woman. You have an excess of women. To counter that, select (choose) $15$ from the total of $20$. This can be done in $\binom{20}{15}$ ways. "Assign" a woman to each man: the first can be done in $15$ ways, the second in $14$ ways, and so on, giving $15!$ assignments. In total there are $\binom{20}{15}\cdot 15!$ ways, which equals $\frac{20!}{5!}=P^{20}_{15}$, so your guess was in fact right.
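A quick check with Python's standard library that the two counts agree:

```python
from math import comb, factorial

print(comb(20, 15) * factorial(15))   # 20274183401472000
print(factorial(20) // factorial(5))  # same count, as the permutation P(20, 15)
```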
H: Geometric interpretation to a system of equations For part (a) these are clearly two parallel lines so no points of intersection. For part (b) this has one point of intersection because these two lines cross at exactly one point. For parts (c) and (e) we have $z=0$ and $x=2y+1$ but what does this mean geometrically? For part (d) there are no points of intersection so does that mean the three planes are parallel or the planes never cross anywhere? Thanks for the help. AI: $z=0$ and $x=2y+1$ gives the equation of a line; $x=2y+1$ is clearly a line in the $xy$-plane, and $z=0$ forces us to stay in this plane. For d), you'll notice that all three planes are parallel, since the left hand sides are multiples of each other. (Two of the planes are in fact the same; namely those given by the first and second equations.)
H: Questions About Combinatorial Games I've been reading about combinatorial games, in an article by Brilliant: https://brilliant.org/wiki/combinatorial-games-winning-positions/#chomp-and-strategy-stealing. The article makes 4 statements about combinatorial games, which I am confused about: Due to the deterministic nature of the game, one can show via backward induction that every position can be uniquely characterized as a winning or a losing position. The empty game (the game where there are no moves to be made) is a losing position. A position is a winning position if at least one of the positions that can be obtained from this position by a single move is a losing position. A position is a losing position if every position that can be obtained from this position by a single move is a winning position. What I am confused about is the following: I don't understand why every position can be uniquely characterized as a winning or a losing position. Can't there be positions that don't bring a player closer to winning, but don't cause them to lose either? I still don't fully understand what an empty game is. Is it a game where you can't make any moves (like when you are checkmated in chess)? But then why is an empty game an automatic loss? In a misere game, it would be a win, because your opponent is the last to move! I don't understand why the fact that "one of the positions that can be obtained from this position by a single move is a losing position" is sufficient to show something is a winning position. That means the position will only cause the opponent to lose if he or she chooses to move to the worst possible position; otherwise your opponent won't be any closer to losing than before! So how can that be considered a winning position? AI: Let's address points 2 and 3 first. 2: The empty game is the game where neither player has any legal move. The authors of that article are (implicitly, although they don't say it, it applies to all of their examples) only considering games under the normal (as opposed to misere) play convention, where the last player to move wins. 3: Let's make sure you have correct definitions of winning and losing positions. A winning position is a position where the side to move can win, and a losing position is a position where the side to move loses no matter what they do. To show a position is winning, I just need to exhibit a single winning move. It must lead to a position where the opponent loses no matter how they respond, i.e., the new position is a losing position for the opponent who is now on move. Now 1 follows by an inductive argument from 2 and 3, assuming that (1) the game has finitely many positions; (2) it is impossible to repeat a position; (3) the game is impartial, i.e., the two players have the same legal moves. (These assumptions are not stated, but apply to all their examples.) If (3) did not hold, there would be four types of positions, not just two (winning and losing). The inductive argument runs like this: By the additional assumptions above, the game can be represented as a finite tree where the starting position is at the root, and the leaves are all terminal positions where neither player can move. Argue by induction on the height of the nodes. At height 0 (the leaves), we have the empty position which we know is losing. At any higher node, the children are already classified as winning or losing (by induction).
If all the children are winning, then the position is losing; otherwise, at least one child is losing, so the position is winning. Therefore, all nodes can be classified as winning or losing.
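A minimal sketch of this backward induction in Python, under the normal play convention (the Nim-like pile game here is a made-up example, not from the article):

```python
from functools import lru_cache

def moves(n):
    # a move removes 1, 2, or 3 stones from a pile of n stones
    return [n - k for k in (1, 2, 3) if n - k >= 0]

@lru_cache(maxsize=None)
def is_winning(n):
    # empty game (no moves) is losing; winning iff some move reaches a losing child
    return any(not is_winning(m) for m in moves(n))

print([n for n in range(12) if not is_winning(n)])  # [0, 4, 8] -> the losing positions
```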
H: Evaluating $\lim_{n\to\infty} \frac{S_1(n)S_5(n)-S_3(n)^2}{S_7(n)}$ If $S_k(n)=\sum_{r=1}^n r^k$, then compute $\lim_{n\to\infty} \frac{S_1(n)S_5(n)-S_3(n)^2}{S_7(n)}$ ($n$ ranging over the naturals). What I did: $S_1(n)$ is a polynomial of degree 2. The last term of $S_5(n)$ is $n^5$, so in the product of $S_5(n)$ with $S_1(n)$ the maximum power of $n$ would be 7. Similarly, in the denominator $S_7(n)$ will also have maximum power 7. What I was thinking of was to take the $n^7$ common and reduce, but now I am stuck because in $S_3(n)^2$ the maximum power of $n$ will be $8$. Any hint will also be helpful, thanks. AI: Note that $$S_{\gamma}(n) \sim \frac{n^{\gamma+1}}{\gamma+1}~~~~(*)$$ So $$L=\lim_{n\to \infty} \frac{S_1(n) S_{5}(n)-S^2_{3}(n)}{S_7(n)}= \lim_{n \to \infty}\frac{(n^2/2)(n^6/6)-(n^4/4)^2}{n^8/8}= \frac{(1/12)-(1/16)}{(1/8)}=\frac{1}{6}.$$
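A numeric confirmation (standard library only) that the ratio tends to $1/6$:

```python
def S(p, n):
    return sum(r**p for r in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, (S(1, n) * S(5, n) - S(3, n)**2) / S(7, n))  # -> 1/6 = 0.1666...
```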
H: Property of a set of positive Lebesgue measure Let $E\subseteq \mathbb{R}$ be a compact set of positive Lebesgue measure. Does there exist $a>0$ such that $$\bigcap_{0\leq x\leq a}(E+x)$$ is a set of positive Lebesgue measure? I proved that the above is true if $E$ is an interval. Edit: I changed the question and added the condition that $E$ is compact. AI: Take $E$ to be any compact set with positive Lebesgue measure and empty interior, for example the fat Cantor set. Then for any $a>0$, $$\bigcap_{0\leq x \leq a} (E +x) = \emptyset.$$ (In particular its Lebesgue measure is zero). Indeed, assume by contradiction that there exists $a >0$ such that $\bigcap_{0\leq x \leq a} (E+x)$ is not empty. Then, there exists $y$ such that $y-x \in E$ for every $0\leq x \leq a$. In other words $[y-a,y] \subset E$. This contradicts the empty interior assumption.
H: Find all functions $f : \mathbb R \to \mathbb R$ such that: $f\left(x^3\right)+f\left(y^3\right) = (x + y)f\left(x^2\right)+f\left(y^2\right)-f(xy)$ Here is the question: Find all functions $f : \mathbb R \to \mathbb R$ such that: $$f\left(x^3\right)+f\left(y^3\right) = (x + y)f\left(x^2\right)+f\left(y^2\right)-f(xy)$$ What I tried: Note that the left-hand side $f\left(x^3\right)+f\left(y^3\right)$ is symmetric in $x$ and $y$. From here we get $$(x + y)f\left(x^2\right) + f\left(y^2\right) - f(xy) = (x + y)f\left(y^2\right) + f\left(x^2\right) - f(xy)$$ which gives $(x + y - 1)f\left(y^2\right) = (x + y - 1)f\left(x^2\right)$. From here I cannot proceed further. I could have cancelled $(x + y - 1)$, but I haven't proved that $(x + y - 1) \neq 0$, and I don't know how to proceed with $f\left(y^2\right) = f\left(x^2\right)$ even if we can cancel $x + y - 1$. Any hints or suggestions will be greatly appreciated. AI: Here's how to get $f(x^2) = f(y^2)$. If $x+y-1 \neq 0$, we must have $f(x^2) = f(y^2)$. For any $x^2$ and $y^2$, pick a $z$ such that $x+z-1$ and $y+z-1$ are nonzero, which is always possible by taking $z \neq 1-x, 1-y$. Then $f(x^2) = f(z^2) = f(y^2)$. So $f(x^2) = f(y^2)$ for all reals $x, y$, and hence $f$ is constant on the nonnegative reals.
H: Is $\binom{n}{n/2}\sim 2^n?$ We say $a_n\sim b_n$ if $\lim_n(a_n/b_n)=1$. Is $$\binom{n}{n/2}\sim 2^n?$$ Actually, we know $n!\sim n^{n+1/2}e^{-n}\sqrt{2\pi}$. So $$\binom{n}{n/2}\sim \frac{n^{n+1/2}e^{-n}\sqrt{2\pi}}{(n/2)^{n+1}e^{-n}2\pi}=c2^{n+1}n^{-1/2}?$$ How about the next step? AI: Since you've already gotten $\binom{n}{n/2} \sim c2^{n+1} n ^{-1/2}$, we just want to ensure that $c2^{n+1} n ^{-1/2}$ doesn't go to $2^n$. But $$\frac{c2^{n+1} n ^{-1/2}}{2^n} = 2cn^{-1/2},$$ which approaches $0$. So $$\binom{n}{n/2} \not \sim 2^n.$$ (Computing $c$, you can rewrite the asymptotic as $$\binom{2n}{n} \sim \frac{4^n}{\sqrt {\pi n}},$$ which you can find here: https://en.wikipedia.org/wiki/Central_binomial_coefficient)
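A numeric sanity check (standard library only; note that the huge integers are divided exactly before any conversion to float):

```python
from math import comb, sqrt, pi

for n in (10, 100, 1000):
    r = comb(2 * n, n) / 4**n       # exact big-integer division, result is a float
    print(n, r * sqrt(pi * n), r)   # first column -> 1, second column -> 0
```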
H: How To Represent Draws In Combinatorial Games I've been reading about combinatorial games, specifically about how positions in such games can be classified as either winning or losing positions. However, what I'm not sure about now is how I can represent draws using this: situations where neither player wins or loses. Do I use a winning position or a losing position to represent a position where a draw has occurred? Why is such a representation correct? Thanks in advance. AI: The analysis on the Brilliant page only works with games in which a draw is not possible. In general, you'd need to represent drawn positions as their own type of position; a drawn position can move to another drawn position (because both players can force a draw), or move to a winning position (if the person who just played messes up and moves to a position where the other can win). It can't move to a losing position, because otherwise it'd be possible to win from the current state, and hence it would be a winning state, not a drawn state.
H: Point of Inflection Inequality in a Quartic The following is a multiple choice question from a workbooklet (year 12): The family of graphs of the form $y=x^4+ax^3+bx^2+cx$ has two points of inflection if and only if: A. $b<\frac{3a^2}{8}$ B. $b^2>3c$ C. $a<b$ D. $b^2<4c$ E. none of these Answer: A My question is how do I solve this? It would be great if you could show your working. Thanks in advance. My working: I've taken the second derivative, but that's it. I'm kinda stuck. $y=x^4+ax^3+bx^2+cx$ $y'=4x^3+3ax^2+2bx+c$ $y''=12x^2+6ax+2b$ AI: Hint: Inflection points are points where the function changes concavity, i.e. from being "concave up" to being "concave down" or vice versa. They can be found by considering where the second derivative changes sign. Similarly to critical points of the first derivative, inflection points occur where the second derivative is either zero or undefined. $y'=4x^3 + 3ax^2 + 2bx + c$ $y'' = 12x^2+6ax+2b$ We now have to solve $y''=0 \iff 12x^2+6ax+2b=0$. Next, we require the family of graphs to have two points of inflection, so it is necessary (and, since the leading coefficient of $y''$ is positive, also sufficient) that the discriminant $D=(6a)^2-4\cdot12\cdot2b>0$, i.e. $$36a^2-96b>0 \iff b< \frac{3a^2}{8}.$$
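If SymPy is available, the discriminant computation in the hint can be checked directly:

```python
from sympy import symbols, discriminant

x, a, b = symbols('x a b')
ypp = 12*x**2 + 6*a*x + 2*b
print(discriminant(ypp, x))  # 36*a**2 - 96*b, positive exactly when b < 3a^2/8
```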
H: If $L(y)=\min_{x\in \mathbb{R}}h(x,y)$ is well defined in $(a,b]$ then is it also well defined at $a$? Let $h:\mathbb{R}\times \mathbb{R}\to \mathbb{R}$ be a continuous function such that $\lim_{x\to \pm\infty}h(x,y)=0$ for all $y$. Then we know that if $L(y)=\min_{x\in \mathbb{R}}h(x,y)$ exists, it must be $L(y)\leq 0$. Now, is it true that if $L(y)$ is well defined on an interval $(a,b]$ with $a<b$ then it is also well defined at the point $a$? Intuitively this seems to be right, but I couldn't find a way to prove it. Any hint or suggestion would be appreciated. AI: The answer is no. Let $f(t)$ be a positive-valued function with $f(0)>1$ and $f(-1)<1$ and $\lim_{t\to\pm\infty} f(t)=0$, such as $f(t)=2/(1+2t^2)$ or $f(t)=2e^{-t^2}$ (its exact value is unimportant). Then define $$ h(x,y) = f(x)\big(1-f(xy-1)\big). $$ Note that $h(x,0) = f(x)\big(1-f(-1)\big) > 0$ for all $x\in\Bbb R$. However, for $y\ne0$, $$ h\big(\tfrac1y,y\big) = f\big(\tfrac1y\big)\big(1-f(0)\big) < 0. $$ Once the limits as $x\to\pm\infty$ are verified, these inequalities show that $L(y)$ exists for all $y\ne0$ but does not exist for $y=0$. (At the core, this example exploits the useful fact that the hyperbola $xy=1$ is a closed set in $\Bbb R^2$ whose projection onto either coordinate is not closed.)
H: How do I calculate the price of 50g of a product, if the total grams of the product cost a certain amount I am looking for a formula that can calculate cost per 50g. e.g. How do I calculate the price of 50g of Pepper, if 300g costs 132? or How do I calculate the price of 50g of Salt, if 1000g costs 100? What standard formula could I use? AI: You have to divide the monetary cost by the given grams, and multiply the result by $50~\text{g}$. Example: $$ \text{Pepper}: \qquad \text{price per 50 g} = \frac{132}{300~\text{g}} \cdot 50~\text{g} \approx 22 $$ If you forget what should be multiplied by what, it's also rather easy to "test out" the numbers. Example: You know that 300 g costs 132. Therefore, one gram costs $132/300= 0.44$, right? Then two grams costs $2\times 0.44 = 0.88$, and 50 grams costs $50 \times 0.44 = 22$. Easy!
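As a tiny sketch of the formula in code (the function name below is made up for illustration):

```python
def price_per(grams_wanted, total_price, total_grams):
    # price of one gram, scaled up to the desired weight
    return total_price / total_grams * grams_wanted

print(price_per(50, 132, 300))   # 22.0 (pepper)
print(price_per(50, 100, 1000))  # 5.0  (salt)
```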
H: Limit $\lim_{n\to \infty} \frac{1}{n^{n+1}}\sum_{k=1}^n k^p$ The evaluation of $$\lim_{n\to \infty} \frac{1}{n^{n+1}}\sum_{k=1}^n k^p$$ as an integral (evaluating this as a right Riemann sum) requires the form $f(a + k\cdot \Delta x_i)$, where $a$ is our lower bound of integration, and $\Delta x_i$ is the length of our subinterval, on some partition of the interval of integration. However, the given expression does not seem to give way to any meaningful expression for this. I think it's safe to assume our lower bound is $0$ though. The bigger problem is the interpretation of $\frac{1}{n^{n+1}}$. I tried $$\frac{1}{n^{n+1}}=\frac{1}{e^{\ln(n^{n+1})}}=\frac{1}{e^{(n+1)\cdot \ln(n)}}$$ and since the inverse of $e^{x}$ is $\ln(x)$, we'd have $$\ln\big((n+1) \cdot \ln(n)\big)=\ln(n+1)+\ln(\ln(n))$$ which doesn't seem very useful either. I'd appreciate any advice. AI: The reason why your approach seems to fail is because this expression is trivially just $0$. I'm assuming there's been a transcription error, but if not, then we have $$\begin{align}&\lim_{n\to\infty}\frac{1}{n^{n+1}}\sum_{k=1}^nk^p\\ =&\lim_{n\to\infty}\frac1{n^{n-p}}\left(\frac 1n\sum_{k=1}^n\left(\frac kn\right)^p\right)\end{align}$$ where the right bracket is the integral expression you're looking for (which is bounded by $1$), and the left goes to $0$. Perhaps your sum should be $$\lim_{n\to\infty}\frac{1}{n^{p+1}}\sum_{k=1}^nk^p$$ in which case you just want the right bracket, which is evaluatable as a Riemann Sum.
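For the corrected limit $\lim_{n\to\infty}\frac{1}{n^{p+1}}\sum_{k=1}^n k^p=\frac{1}{p+1}$, a quick numeric check (standard library only), shown here for $p=3$:

```python
p = 3
for n in (10, 100, 10000):
    s = sum(k**p for k in range(1, n + 1)) / n**(p + 1)
    print(n, s)  # tends to 1/(p+1) = 0.25
```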
H: If $a$, $b$, $c \geq 1$; $y \geq x \geq 1$; $p$, $q$, $r > 0$ Prove the insane inequality $$\left(\frac{1+y\left(a^pb^qc^r\right)^{\frac{1}{p+q+r}}}{1+x\left(a^pb^qc^r\right)^{\frac{1}{p+q+r}}}\right)^{\frac{p+q+r}{\left(a^pb^qc^r\right)^{\frac{1}{p+q+r}}}}\left(\frac{1+xa}{1+ya}\right)^{\frac{p}{a}}\left(\frac{1+xb}{1+yb}\right)^{\frac{q}{b}}\left(\frac{1+xc}{1+yc}\right)^{\frac{r}{c}}\geq \prod \limits_{cyc}\left(\frac{1+y\left(a^pb^q\right)^{\frac{1}{p+q}}}{1+x\left(a^pb^q\right)^{\frac{1}{p+q}}}\right)^{\frac{p+q}{\left(a^pb^q\right)^{\frac{1}{p+q}}}}$$ This is an inequality problem from the 2019 Jozsef Wildt International Mathematics Competition. I have tried almost everything with AM-GM and Cauchy-Schwarz but am unable to do it. Honestly, I don't even know how to start thinking about this problem. Please help. I know I have not uploaded any attempts, but there is nothing to upload: I am completely blank about this problem. AI: We need to prove that $g(y)\geq0,$ where $$g(y)=\tfrac{p+q+r}{\left(a^pb^qc^r\right)^{\tfrac{1}{p+q+r}}}\ln\tfrac{1+y\left(a^pb^qc^r\right)^{\tfrac{1}{p+q+r}}}{1+x\left(a^pb^qc^r\right)^{\tfrac{1}{p+q+r}}}+\sum_{cyc}\tfrac{p}{a}\ln\tfrac{1+ya}{1+xa}-\sum_{cyc}\tfrac{p+q}{\left(a^pb^q\right)^{\tfrac{1}{p+q}}}\ln\tfrac{1+y\left(a^pb^q\right)^{\tfrac{1}{p+q}}}{1+x\left(a^pb^q\right)^{\tfrac{1}{p+q}}}.$$ But $$g'(y)=\tfrac{p+q+r}{1+y\left(a^pb^qc^r\right)^{\tfrac{1}{p+q+r}}}+\sum_{cyc}\tfrac{p}{1+ya}-\sum_{cyc}\tfrac{p+q}{1+y\left(a^pb^q\right)^{\tfrac{1}{p+q}}}=$$ $$=\frac{p+q+r}{1+ye^{\frac{\sum\limits_{cyc}p\ln{a}}{p+q+r}}}+\sum_{cyc}\frac{p}{1+ye^{\ln{a}}}-\sum_{cyc}\frac{p+q}{1+ye^{\frac{p\ln{a}+q\ln{b}}{p+q}}}.$$ Now, let $f(t)=\frac{1}{1+ye^t},$ where $t\geq0$ and $y\geq1$. Thus, $$f''(t)=-\left(\frac{ye^t}{(1+ye^t)^2}\right)'=-\left(\frac{ye^t+1-1}{(1+ye^t)^2}\right)'=$$ $$=\frac{ye^t}{(1+ye^t)^2}-\frac{2ye^t}{(1+ye^t)^3}=\frac{ye^t(ye^t-1)}{(1+ye^t)^3}\geq0,$$ which says that $f$ is a convex function. Thus, by Tiberiu Popoviciu's inequality $g'(y)\geq0,$ which says $$g(y)\geq g(x)=0$$ and we are done! About Popoviciu see here: https://arxiv.org/pdf/0803.2958v1.pdf
H: Diagonalizable projection operator How can one show that a projection mapping is diagonalizable, and further that it can be represented by a diagonal matrix containing an identity block of order $r$, where $r$ is the rank of the operator? AI: In my book a linear projection mapping is by definition one that is diagonalisable with no other eigenvalues than $0$ and $1$. It might be said in a slightly different way, for instance whenever $V$ and $W$ are complementary subspaces, there is a projection operator $p$ onto $V$ and parallel to $W$ such that whenever $u=v+w$ is a vector, $p(u)=v$. But that is just saying that $V$ is an eigenspace for the eigenvalue $1$ (since $p(v)=v$) and $W$ is an eigenspace for the eigenvalue $0$ (since $p(w)=0$), and since their direct sum fills the whole space, that is all there is. On any basis formed by combining a basis of $V$ and a basis of $W$, the matrix of $p$ will be a diagonal one, with diagonal entries $1$ for the initial "$V$-part", and remaining diagonal entries $0$. Since the eigenspace for $0$ does not contribute to the rank, the rank of the operator is equal to the dimension of the eigenspace for $1$. The fact that a linear operator $\phi$ is a projection operator if and only if $\phi^2=\phi$ is something (easy) to prove, but I would think not the proper definition of a projection mapping.
H: Does the analytic function have a square root? Let $f:U\to \mathbb{C}\setminus(-\infty,0]$ be a holomorphic function. Does it always have a square root function $g^2=f$ which is also analytic in the same region $U$? Note that $U$ represents the open unit disk in $\mathbb{C}$. I think yes, but I am not even able to provide an example of such an $f$. Should we make use of the Riemann mapping theorem to produce such a function and its square root? Any hints? Thanks beforehand. AI: The principal branch of the logarithm is holomorphic in $\mathbb{C}\setminus(-\infty,0]$. Therefore, for any holomorphic function $f:U\to \mathbb{C}\setminus(-\infty,0]$, you can define $$ g: U \to \Bbb C, g(z) = \exp(\frac 12 \log(f(z))) $$ which satisfies $$ g(z)^2 = \exp( \log(f(z))) = f(z) $$ in $U$, i.e. it is a holomorphic square root of $f$. Note that even more is true: In your case $U$ is the unit disk and that is a simply connected domain. It follows that any non-zero holomorphic function $f$ in $U$ has a holomorphic logarithm and therefore a holomorphic square root.
H: How to determine the degree of these polynomials I have two functions, $p_1,p_2\in\mathcal{H}(\mathbb{C})$ verifying $p_1(p_2(z))=z^2$. I have to prove that $p_1$ and $p_2$ are polynomials and, in addition, show that one of them has degree equal to 1 and the other equal to 2. After reading this question I know how to solve the first part, but I have no idea how to prove the second one. AI: If $p_1$ and $p_2$ are polynomials, then $\deg(p_1 \circ p_2) = \deg (p_1) \cdot \deg (p_2)$. But $\deg(p_1 \circ p_2) = \deg(z^2) = 2$, and this only factorises as $1 \cdot 2$ or $2\cdot 1$.
H: GCD of Gaussian Integers $\text{gcd}(4, 36+18i)$ I have to compute $\text{gcd}(4, 36+18i)$. I computed the norms: $16$ and $1620$. I am sure $2$ is the gcd. Is there any method to prove $2$ is the gcd, other than using the Euclidean Algorithm (which I don't know how to use)? Or if it couldn't be proved directly, can you please explain me how to compute it using Euclidean Algorithm? I've searched here but I really don't understand the steps. Thank you! AI: The $\gcd$ of two numbers is the same if you subtract a multiple of one from the other. In other words, $\gcd(a,b) = \gcd(a,b-ak)$ for any $k$. So here, notice that we can immediately simplify the expression by subtracting $36$ from the second term -- $$\gcd(4,36+18i)=\gcd(4,18i).$$ Next, we subtract $16i$ from the latter term, remembering that this is a multiple of $4$ in the Gaussian integers. $$\gcd(4,18i)=\gcd(4,2i).$$ Finally, we subtract $4$ from the LHS, as $4=2i\times(-2i)$ is a multiple of $2i$: $$\gcd(4,2i)=\gcd(0,2i)$$ Since the $\gcd$ of $0$ and anything is the latter, the answer is $2i$. Note that this only differs from $2$ by a unit, and it's conventional to give the $\gcd$ in simplest form, so $2$ is the simplified answer. Another way to approach the question is to full factorise both sides, noting that $\mathbb Z[i]$ is a UFD -- in other words, $$4=(1+i)^2(1-i)^2, 36+18i=(1+i)(1-i)3^2(2+i)$$ and then just picking the common factors (remembering that multiples of units is fine!), $$(1+i)(1-i)=2$$
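For completeness, here is a from-scratch Euclidean algorithm in $\mathbb Z[i]$ (a sketch using Python complex numbers; rounding the exact quotient to the nearest Gaussian integer guarantees a remainder of strictly smaller norm):

```python
def gdiv(a, b):
    q = a / b                                  # exact complex quotient
    qi = complex(round(q.real), round(q.imag)) # nearest Gaussian integer
    return qi, a - qi * b                      # remainder has norm < norm(b)

def ggcd(a, b):
    while b != 0:
        _, r = gdiv(a, b)
        a, b = b, r
    return a

print(ggcd(4 + 0j, 36 + 18j))  # 2j, an associate (unit multiple) of 2
```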
H: Is this proof of a harmonic progression question correct? This question is from RMO 2017, the second round of selection for the IMO in India. Question: Prove that there cannot exist a strictly increasing infinite harmonic progression consisting of terms which are positive integers. My proof: Let the first term of the HP be a positive integer $a$. Suppose $$\frac{1}{a}, \frac{1}{a}+d,\frac{1}{a}+2d,\dots (d \in \mathbb R)$$ or $$\frac{1}{a}, \frac{1+ad}{a},\frac{1+2ad}{a},\dots$$ is an infinite AP such that the reciprocals of its terms form a strictly increasing HP consisting of positive integers. So the HP becomes $$a,\frac{a}{1+ad},\frac{a}{1+2ad},\frac{a}{1+3ad},\dots$$ Since the term $\frac{a}{1+ad}$ is a positive integer, $1+ad\mid a$. This implies that $1+ad$ is a factor of $a$. Similarly $1+ad, 1+2ad, 1+3ad, \dots$ are all factors of $a$. But it is impossible for a number to have infinitely many factors unless all of $1+ad, 1+2ad, 1+3ad, \dots$ are equal. That is $$1+ad=1+2ad=1+3ad=\dots$$ or $$ad=2ad=3ad=\dots$$ $$\implies ad=0$$ Since $a$ is a positive number, $$d=0$$ But if $d=0$ then all the terms of the HP become equal, which contradicts the hypothesis that it is strictly increasing. Hence, no such HP exists. For some reason, I think there's a flaw in my proof. Is my proof correct? And even if it is, can it be used as a rigorous proof in a contest? AI: This proof is indeed flawed. The statement that a number can't have infinitely many factors is wrong when it comes to $\mathbb R$; for instance, $1=n(\frac{1}{n})$ is a trivial factorisation of $1$ in infinitely many ways. What you want to do is show that eventually $|1+nad|>|a|$ (assuming $d\neq 0$), so $$\left|\frac a{1+nad}\right|<1.$$ I'll leave this as an exercise.
H: For $A, B \subset \mathbb{R}^+$, $\sup(A \cdot B) = \sup A \sup B$. I am trying to prove that: For nonempty subsets of the positive reals $A,B$, both of which are bounded above, define $$A \cdot B = \{ab \mid a \in A, \; b \in B\}.$$ Prove that $\sup(A \cdot B) = \sup A \cdot \sup B$. Here is what I have so far. Let $A, B \subset \mathbb{R}^+$ be nonempty and bounded above, so $\sup A$ and $\sup B$ exist by the least-upper-bound property of $\mathbb{R}$. For any $a \in A$ and $b \in B$, we have $$ab \leq \sup A \cdot b \leq \sup A \cdot \sup B.$$ Hence, $A \cdot B$ is bounded above by $\sup A \cdot \sup B$. Since $A$ and $B$ are nonempty, $A \cdot B$ is nonempty by construction, so $\sup(A \cdot B)$ exists. Furthermore, since $\sup A \cdot \sup B$ is an upper bound of $A \cdot B$, by the definition of the supremum, we have $$\sup(A \cdot B) \leq \sup A \cdot \sup B.$$ It suffices to prove that $\sup(A \cdot B) \geq \sup A \cdot \sup B$. I cannot figure out the other half of this. A trick involving considering $\sup A - \epsilon$ and $\sup B - \epsilon$ for some $\epsilon > 0$ and establishing that $\sup(A \cdot B) < \sup A \cdot \sup B + \epsilon$ did not seem to work, though it did in the additive variant of this proof. I haven't anywhere used the assumption that $A$ and $B$ are contained in the positive real numbers, and it seems to me that this assumption must be important, probably as it pertains to the inequality sign, so I assume that at some point I will need to multiply inequalities by some positive number. I cannot seem to get a good start on this, though. A hint on how to get started on this second half would be very much appreciated. AI: If $\varepsilon>0$, take $a\in A,b\in B$ such that $\sup A-\varepsilon<a$ and $\sup B-\varepsilon<b$. Then it is $$(\sup A-\varepsilon)\cdot(\sup B-\varepsilon)<ab\leq\sup(A\cdot B) $$ So, $$(\sup A-\varepsilon)\cdot(\sup B-\varepsilon)<\sup(A\cdot B) $$ is true for any $\varepsilon>0$. What happens if you let $\varepsilon\to0^+$?
H: $y'=10^{x+y}$: how should I proceed from here? $$y'=10^{x+y}$$ $$\int10^{-y}dy=\int10^xdx$$ $$-\frac{10^{-y}}{\ln(10)}=\frac{10^{x}}{\ln(10)} + C$$ $$10^{-y}=-10^x -C\cdot \ln(10)$$ How should I get $y$ from here? And also, can I call $C\cdot \ln(10)$ just another constant $D$? Is that fine to do such a move like that? And if it is fine, what about if I have been given a boundary value problem? AI: This is a separable equation, so we have: $$\text{y}'\left(x\right)=\text{n}^{x+\text{y}\left(x\right)}\space\Longleftrightarrow\space\text{n}^{-\text{y}\left(x\right)}\cdot\text{y}'\left(x\right)=\text{n}^x\tag1$$ Now, integrate both sides with respect to $x$: $$\int\text{n}^{-\text{y}\left(x\right)}\cdot\text{y}'\left(x\right)\space\text{d}x=\int\text{n}^x\space\text{d}x\tag2$$ These are pretty standard integrals, so we get: $$-\frac{\text{n}^{-\text{y}\left(x\right)}}{\ln\left(\text{n}\right)}=\frac{\text{n}^x}{\ln\left(\text{n}\right)}+\text{C}\tag3$$ Solving for $\text{y}\left(x\right)$, gives: $$\text{y}\left(x\right)=-\frac{\ln\left(-\text{n}^x-\ln\left(\text{n}\right)\text{C}\right)}{\ln\left(\text{n}\right)}\tag4$$ Now, we can set $-\ln\left(\text{n}\right)\text{C}=\text{k}$ (renaming a constant this way is exactly the move you asked about, and it is fine): $$\text{y}\left(x\right)=-\frac{\ln\left(\text{k}-\text{n}^x\right)}{\ln\left(\text{n}\right)}=-\log_{\text{n}}\left(\text{k}-\text{n}^x\right)\tag5$$
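A SymPy spot-check that the solution in $(5)$ satisfies the original equation (SymPy assumed installed; $n=10$ and $k=3$ are arbitrary sample values, and $x=0.2$ keeps $k-n^x>0$):

```python
from sympy import symbols, log, diff

x = symbols('x')
n, k = 10, 3
y = -log(k - n**x) / log(n)
residual = diff(y, x) - n**(x + y)
print(residual.subs(x, 0.2).evalf())  # ~ 0, up to rounding
```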
H: Divergence of $\int_{0}^{1}\frac{\sin^{2}x}{x^{2}}dx$ Does $\int_{0}^{1}\frac{\sin^{2}x}{x^{2}}dx$ diverge or converge? Symbolab says it diverges, and I can see how it arrives at that, but I don't get the logic behind it, because you can clearly see that the graph is bounded and continuous (also $\underset{x\to0^{+}}{\lim}\frac{\sin^{2}x}{x^{2}}=1$). Wolfram gives a definite answer involving $Si(2)$ — what is $Si(2)$? AI: You are correct that the integral exists because the function can be continuously extended to the closed interval $[0, 1]$. Symbolab writes the integral as the difference of two divergent integrals $$ \int_{0}^{1}\frac{\sin^{2}x}{x^{2}}\, dx = \int_{0}^{1}\frac{1}{x^{2}}\, dx - \int_{0}^{1}\frac{\cos^{2}x}{x^{2}}\, dx $$ but of course nothing can be concluded from that. You can compute the integral with integration by parts: $$ \int_{0}^{1}\frac{\sin^{2}x}{x^{2}}\, dx = \Bigl[ -\frac 1x \sin^2(x) \Bigr]_{x=0}^{x=1} + \int_0^1 \frac{2 \sin(x)\cos(x)}{x} \, dx \\ = -\sin^2(1) + \int_0^1 \frac{\sin(2x)}{x} \, dx = -\sin^2(1) + \int_0^2 \frac{\sin(t)}{t} \, dt \\ = \operatorname{Si}(2) - \sin^2(1) $$ where $$ \operatorname{Si}(x) = \int_0^x \frac{\sin(t)}{t} \, dt $$ is the sine integral function.
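A numerical cross-check of the closed form (assuming SciPy is available; `scipy.special.sici` returns the pair $(\operatorname{Si}(x),\operatorname{Ci}(x))$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

# quad evaluates only at interior points, so the removable singularity at 0 is harmless
val, _ = quad(lambda x: np.sin(x)**2 / x**2, 0, 1)
si2 = sici(2.0)[0]                    # Si(2)
print(val, si2 - np.sin(1.0)**2)      # both ~0.8974
```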
H: Consider a matrix $A$ with $n=2m$ and $a_{ii}=a_{n+1-i,n+1-i} = a_{i,n+1-i}=a_{n+1-i,i} = i$ for $i=1,\dots, m$. Find the eigenvalues of $A$. Consider a matrix $A$ with $n=2m$ and $a_{ii}=a_{n+1-i,n+1-i} = a_{i,n+1-i}=a_{n+1-i,i} = i$ for $i=1,\dots, m$. Find $||A||_2$. My attempt: An example of such a matrix would be $(n=4, m=2)$, $$ \begin{pmatrix} 1&0&0&1\\0&2&2&0\\0&2&2&0\\1&0&0&1\end{pmatrix}.$$ The matrix is symmetric, therefore $||A||_2=\rho(A)$, where $\rho(A)=\max_i |\lambda_i|$ (where $\lambda_i$ are the eigenvalues of $A$). The rank of this matrix is $m$, meaning that the algebraic multiplicity of $0$ is at least $m$. Now, $\operatorname{tr}(A) = \sum_i \lambda_i = 2\sum_{i=1}^m i = m(m+1)$. The eigenvalues of the example that I gave are $0$ (algebraic multiplicity 2), $2$ (AM 1) and $4$ (AM 1). So, I would expect that in the general case $0$, $m$ and $m^2$ are eigenvalues, in which case $\rho(A)=m^2$ and therefore $||A||_2=m^2$. How can I show this rigorously? Thanks! AI: If you reorder the basis by reversing the order of the last $m$ basis elements then the matrix is the tensor product of $\text{Diag}(1,2,\dots,m)$ and $\begin{bmatrix}1 &1\\1 &1\end{bmatrix}$. The latter matrix diagonalises to $\text{Diag}(2,0)$. The upshot is that your original matrix is similar to $\text{Diag}(2,4,6,\dots, 2m,0,0,\dots, 0)$ and all is clear: the eigenvalues are $2,4,\dots,2m$ together with $0$ of multiplicity $m$, so $||A||_2=\rho(A)=2m$ (consistent with your example, since $m^2=2m$ exactly when $m=2$).
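A quick numerical check of the answer for a sample size ($m=4$, so $n=8$); the spectrum should be $0$ with multiplicity $4$ together with $2,4,6,8$, giving $\|A\|_2=2m=8$:

```python
import numpy as np

m = 4
n = 2 * m
A = np.zeros((n, n))
for i in range(1, m + 1):           # 1-based indices as in the problem statement
    for r, c in [(i, i), (n + 1 - i, n + 1 - i), (i, n + 1 - i), (n + 1 - i, i)]:
        A[r - 1, c - 1] = i
print(np.sort(np.linalg.eigvalsh(A)))   # ~[0, 0, 0, 0, 2, 4, 6, 8]
```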
H: Does expectation inequality imply conditional expectation inequality? Given a probability space $\left(\Omega\text{, }\mathcal{F}\text{, }\mathbb{P}\right)$ and two random variables defined on it, does it hold true that $$ \mathbb{E}\left(X\right)<\mathbb{E}\left(Y\right)\hspace{0.5cm}\Rightarrow\hspace{0.5cm}\mathbb{E}\left(X|\mathcal{F}\right)<\mathbb{E}\left(Y|\mathcal{F}\right) $$ ? If not, could you please give me some good counterexample? AI: Let $X$ take values only in $\{0,1\}$ with $P(X=1)=p\in(0,0.5)$, and let $Y=1-X$. Then: $$\mathbb EX=p<1-p=\mathbb EY$$ If $\mathcal F=\sigma(X)$ then: $$\mathbb E(X\mid\mathcal F)=X\text{ and }\mathbb E(Y\mid\mathcal F)=Y$$ But we do not have $X(\omega)<Y(\omega)$ for every $\omega\in\Omega$, because $P(X=1,Y=0)=p>0$, and on that event $X=1>0=Y$. So we cannot state that $\mathbb E(X\mid\mathcal F)<\mathbb E(Y\mid\mathcal F)$.
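A small simulation of this counterexample (with the arbitrary choice $p=0.3$), just to see the global inequality together with its failure on $\{X=1\}$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = (rng.random(100_000) < 0.3).astype(float)   # Bernoulli(0.3)
Y = 1 - X
print(X.mean(), Y.mean())               # ~0.3 < ~0.7, so E[X] < E[Y]
# yet on the event {X = 1} (probability ~0.3): E(X|F) = 1 > 0 = E(Y|F)
print(X[X == 1].mean(), Y[X == 1].mean())
```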
H: Let $N$ be a normal subgroup of $G$ and $H$ be a subgroup of $G$ such that $ H N =G$. Prove that $H$ is a system of representatives of $N$ in $G$. Let $G$ be a group and let $N \subseteq G$ be a normal subgroup. Suppose there is a subgroup $H \subseteq G $ such that : $HN=\{ hn : h \in H , n \in N\} = G$ $ H \cap N = \{e \} $ Prove that $H$ is a system of representatives for the cosets of $N$ in $G$. Now I see that for every element $g \in G$ there are an element $h \in H $ and an element $n \in N$ such that $g=hn$, but how do I show that $ \#H = [G:N] $? AI: The first condition $HN=G$ says that, for every coset $gN$ of $N$ in $G$, there exists an element $h\in H$ such that $gN=hN$. Just write $g=hn$ for some $h\in H$ and $n\in N$. Another way to say this is to observe that $G$ may be decomposed as follows $$G=\bigcup_{h\in H}hN$$ Another way could be to consider the map $H\to \frac{G}{N}$, defined by $h\longmapsto hN$, and observe that it is surjective. The second condition $H\cap N=\{e\}$ says that each left coset $gN$ of $N$ in $G$ can be written uniquely in the form $gN=hN$, because if you have $hN=h'N$ for some $h,h'\in H$ then $h'^{-1}h\in H\cap N=\{e\}$ and so $h=h'$. This is equivalent to saying that the union $$G=\bigcup _{h\in H}hN$$ is indeed a disjoint union, or that $\{hN:h\in H\}$ is a partition of $G$, or equivalently that the map $h\longmapsto hN$ is also injective, hence is a bijection. (It is something more, namely a group isomorphism.)
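A concrete instance sketched with SymPy's permutation groups (assuming the `sympy.combinatorics` API): take $G=S_3$, $N=A_3$ (which is normal) and $H$ generated by a transposition, and check both hypotheses and the conclusion:

```python
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

G = SymmetricGroup(3)
N = AlternatingGroup(3)
H = PermutationGroup([Permutation([1, 0, 2])])       # H = {e, (0 1)}

HN = {h * n for h in H.elements for n in N.elements}
print(len(HN) == G.order())                           # HN = G
print([p for p in H.elements if p in N.elements])     # intersection: identity only
print(H.order() == G.order() // N.order())            # #H = [G : N]
```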
H: Finding the ratio between the area of a circle inscribed in a kite and the area of the circle circumscribing the kite In the following problem, $\angle DAB = 2\alpha$, and $ABCD$ is a kite ($AD=AB, DC=CB$). I need to prove that the ratio between the area of the circle inscribed in the kite and the area of the circle circumscribing the kite is $\frac{\sin^22\alpha}{1+\sin2\alpha}$. I proved $\angle ADC = \angle ABC = 90^\circ$, and that $AC$ is the diameter of the circle circumscribing the kite. I have difficulties with finding the area of the circle inscribed in the kite, and hope one of you will be able to help me. Sorry if my English is bad, it is not my native language. AI: Draw the segments from the center of the inscribed circle to the vertices of the polygon, forming triangles. Each triangle has one side of the quadrilateral as its base and the radius of the circle as the corresponding altitude. Thereby we have $\text{Radius of inscribed circle}=2×\text{Area of polygon}/\text{Perimeter of polygon}$ In this case the area of the polygon, as you can see, is given by two right triangles having the sides of the polygon as legs, and the perimeter is the sum of the legs of both right triangles. That plus what you already know should get you to the radius of the inscribed circle, and then you get its area.
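A numerical check of the target ratio, using the facts already established ($\angle ABC=90^\circ$, $AC$ a diameter) together with the inradius formula from the answer; $\alpha$ below is an arbitrary sample angle:

```python
import numpy as np

alpha = 0.6                    # any 0 < alpha < pi/2
AC = 2.0                       # take circumradius R = 1, so AC = 2R = 2
AB, BC = AC * np.cos(alpha), AC * np.sin(alpha)
area = AB * BC                 # two right triangles with legs AB, BC
perimeter = 2 * (AB + BC)
r = 2 * area / perimeter       # inradius formula from the answer
print((r / 1.0) ** 2, np.sin(2 * alpha) ** 2 / (1 + np.sin(2 * alpha)))  # equal
```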
H: Sum of elements in two different subsets of a set Let $a$ be any positive integer. Consider the set $$S=\{ a^i:i\in \mathbb N \cup \{0\} \}.$$ Let $S_1$ and $S_2$ be any two different finite subsets of $S$. Then show that the sums of the elements of $S_1$ and of $S_2$ are different. Attempt: The case $a=1$ is very trivial, because in this case $S=\{1\}$ and there's essentially nothing to prove. What can be done for $a>1$? AI: I will assume that $a>1$ and that $S_1$ and $S_2$ are finite. Suppose first that the largest element of $S_1$ is $a^N$ and that it is greater than every element of $S_2$. Then every element of $S_2$ is one of $a^0,a^1,\dots,a^{N-1}$, so$$\sum_{n\in S_2}n\leqslant\sum_{k=0}^{N-1}a^k=\frac{a^N-1}{a-1}\leqslant a^N-1<a^N\leqslant\sum_{n\in S_1}n.$$ So, if the largest element of one of the sets is larger than the largest element of the other set, we are done. Suppose now that the largest element $a^N$ of $S_1$ is also the largest element of $S_2$. Then all you have to do is to prove that the sum of all elements of $S_1\setminus\{a^N\}$ is not the sum of all elements of $S_2\setminus\{a^N\}$. Now you can start all over again with the largest elements of these two smaller sets. Since the sets are finite, the process must stop at some point, and since $S_1\neq S_2$ it must stop in the first case, where the largest elements differ.
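A brute-force confirmation for small parameters (the base $a=3$ and the exponent range $K=10$ are arbitrary sample choices):

```python
from itertools import combinations

a, K = 3, 10
powers = [a**i for i in range(K)]
# enumerate the sums of all 2**K finite subsets and check they are distinct
sums = [sum(c) for r in range(K + 1) for c in combinations(powers, r)]
print(len(sums) == len(set(sums)))   # True
```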
H: Finding a recurrence relation for the number of ternary strings of length $n$ containing $11$ Find a recurrence relation for $$a_n = \text{number of ternary strings of length $n$, containing $11$}$$ Some of these strings are: $11$, $112$, $1102$, ... . I'm thinking it is the same as if the question were about $00$, because I have found many solutions for that one on Google; at least the idea should be the same. My first step is to take all $3^n$ ternary strings, but I'm stuck there... . AI: First, let's define $b(n) := 3^n-a(n)$, the number of ternary strings that don't contain the substring $11$, as it is easier to find a recurrence relation for this in the first place. Since the empty string and any of the strings of length $1$ do not contain the substring $11$, we find that $b(0)=1$ and $b(1)=3$. Now, for $n\ge 1$, we try to build a feasible string (not containing $11$) of length $n+1$. Such a string can start with either $0$ or $2$ followed by any feasible string of length $n$, or it can start with $1$ followed by either $0$ or $2$ and then any feasible string of length $n-1$. This leads to the recurrence formula $b(n+1)=2\cdot b(n)+2\cdot b(n-1)$. Now, since $b(n)=3^n-a(n)$ for all $n \ge 0$, we see that $$\begin{align} 3^{n+1}-a(n+1)&=2\cdot(3^n-a(n))+2\cdot(3^{n-1}-a(n-1))\\ \iff a(n+1)-2\cdot a(n)-2\cdot a(n-1)&=3^{n+1}-2\cdot3^n-2\cdot3^{n-1}\\ &=3^{n-1} \end{align}$$ This is a recurrence relation for $n \ge 1$, but it contains the term $3^{n-1}$, which we can still eliminate for $n \ge 2$ in the following way: $$\begin{align} 3^{n-1}&=3\cdot 3^{n-2}\\ \iff a(n+1)-2a(n)-2a(n-1)&=3\cdot(a(n)-2a(n-1)-2a(n-2)) \end{align}$$ Solving this for $a(n+1)$, we find the linear recurrence relation for all $n \ge 2$: $$ a(n+1)=5\cdot a(n)-4\cdot a(n-1)-6\cdot a(n-2) $$ Of course, we have to specify the initial values for the recursion, which are $a(0)=a(1)=0$ and $a(2)=1$. Added: Just in case you need an explicit formula: $$ a(n) = 3^n+\frac{2-\sqrt{3}}{2\sqrt{3}}\left(1-\sqrt{3}\right)^n-\frac{2+\sqrt{3}}{2\sqrt{3}}\left(1+\sqrt{3}\right)^n $$
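A brute-force check of the recurrence against direct counting over all ternary strings of small lengths:

```python
from itertools import product

def direct(n):
    # count ternary strings of length n containing the substring '11'
    return sum('11' in ''.join(s) for s in product('012', repeat=n))

a = [0, 0, 1]                        # a(0), a(1), a(2)
for n in range(3, 9):
    a.append(5 * a[-1] - 4 * a[-2] - 6 * a[-3])
    assert a[n] == direct(n)
print(a)                             # [0, 0, 1, 5, 21, 79, 281, 963, 3217]
```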
H: $\sigma$-algebra over a set of measures In Theorem 6.6 of the book Probability Theory by Varadhan, he proved the existence of a probability measure over a set $M_e$ of ergodic measures. The context is as follows. Let $(\Omega,\mathcal{F})$ be a complete separable metric space with its Borel sets. And let $T:\Omega \to \Omega$ be a measurable map. A probability measure $P$ is invariant (with respect to $T$) if $$P[T^{-1}(A)]=P[A]$$ for every $A \in \mathcal{F}$. And here comes the theorem. Theorem 6.6. For any invariant measure $P$, there is a probability measure $\mu_P$ on the set $M_e$ of ergodic measures such that $$P=\int_{M_e}Q\mu_P(dQ).$$ I think the set $M_e$ in this theorem is not necessarily countable. For an existence of probability measure on $M_e$, we should first have a $\sigma$-algebra on $M_e$. How is it defined? Thanks for any comment! AI: For any measurable space $(\Omega, \mathcal F)$, the natural sigma-algebra $\mathcal D$ on the set $\Delta$ of probability measures on $(\Omega, \mathcal F)$ is defined as follows. For each $A \in \mathcal F$, let $X_A: \Delta \to \mathbb R$ be the function defined by $X_A(P) = P(A)$. Define $\mathcal D$ to be the smallest sigma-algebra on $\Delta$ that makes every member of $\{X_A: A \in \mathcal F\}$ measurable (we assume, as is standard, that $\mathbb R$ is equipped with its Borel sigma-algebra). Equivalently, $\mathcal D$ is the sigma-algebra generated by $\{X_A^{-1}(B): A \in \mathcal F, B \subseteq \mathbb R \ \text{Borel}\}$.
H: Can an object be a proper direct summand of itself? Let $\mathcal{A}$ be an additive category, $A,C$ two objects in it. If $A\oplus C\cong A$, is it true that $C=0$? It seems rather clear, but I am not finding anything on the web and can't really understand how to prove/disprove it. AI: No. Consider the category of Abelian groups, and let $C=\Bbb Z$ and $A=\Bbb Z\oplus\Bbb Z\oplus\cdots$ (the free Abelian group of countable rank). Then $A\oplus C\cong A$, yet $C\neq 0$.
H: Are these conditional probabilities related? Knowing $P(A|B)$ & $P(A|C)$, would it be possible to infer $P(A|B,C)$? I looked at this other question: Any mathematical relation between these conditional probabilities but it doesn't solve my problem, as it only deals with mutually exclusive events. AI: Just look at simple examples. Suppose we are tossing a fair penny and a fair dime. Let $A$ be the event "the penny comes up $H$". Let $B$ be the event "the dime comes up $H$" and let $C_1$ be the event "the two coins do not match". Then $$P(A|B)=\frac 12\quad \&\quad P(A|C_1)=\frac 12\quad \&\quad P(A\,|\,(B\cap C_1))=0$$ Now let $C_2$ be the event "the coins do match". Then $$P(A|B)=\frac 12\quad \&\quad P(A|C_2)=\frac 12\quad \&\quad P(A\,|\,(B\cap C_2))=1$$ So the values of $P(A|B)$ and $P(A|C)$ alone do not determine $P(A|B,C)$.
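An exhaustive check of the counterexample over the four equally likely outcomes (penny, dime):

```python
outcomes = [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]

def P(event, given=lambda o: True):
    # probability of `event` conditional on `given`, by enumeration
    g = [o for o in outcomes if given(o)]
    return sum(event(o) for o in g) / len(g)

A = lambda o: o[0] == 'H'            # penny is H
B = lambda o: o[1] == 'H'            # dime is H
C1 = lambda o: o[0] != o[1]          # coins do not match
print(P(A, B), P(A, C1), P(A, lambda o: B(o) and C1(o)))   # 0.5 0.5 0.0
```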