H: Variance of a piecewise pdf There is a $95\%$ chance of event A happening and a $5\%$ chance of event B happening. Events A and B are exclusive, so both cannot happen; it's one or the other. The pdf of event A is $f(x)=5e^{-5x}$ and the pdf of event B is $f(y)=7e^{-7y}$. What is the variance when an event is triggered? This question is basically about a piecewise pdf, but I'm not sure how to tackle this one. I started off by finding an overall $E(Y)$, but how would I utilize that to find $\operatorname{Var}(Y)$? AI: This is a mixture distribution known as the hyperexponential distribution (which in general is a convex combination of exponential distributions). The density of this distribution (call the random variable $X$) is $f_X(t) = \frac{19}{20}5e^{-5t} + \frac1{20}7e^{-7t}$. We may compute the moment-generating function of $X$ by integration: \begin{align} M_X(\theta) &= \mathbb E[e^{\theta X}]\\ &= \int_0^{\infty } \left(\frac{19}{20} 5e^{-5t} +\frac1{20} 7 e^{-7t}\right) e^{\theta t} \, dt\\ &= \frac{19}{4 (5-\theta)}+\frac{7}{20 (7-\theta)},\ \mathsf{Re}(\theta)<5. \end{align} It follows then that \begin{align} \mathbb E[X] &= \lim_{\theta\to 0} \frac{\mathsf d}{\mathsf d\theta} M_X(\theta) = \frac{69}{350}, \end{align} \begin{align} \mathbb E[X^2] &= \lim_{\theta\to 0} \frac{\mathsf d^2}{\mathsf d\theta^2} M_X(\theta) = \frac{478}{6125}, \end{align} and hence \begin{align} \mathsf{Var}(X) = \mathbb E[X^2] - \mathbb E[X]^2 = \frac{4799}{122500}\approx 0.0391755. \end{align}
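As a numerical cross-check of the closed form, here is a minimal Python sketch (standard library only) using the mixture-moment formulas $\mathbb E[X^j]=\sum_i w_i\,\mathbb E[X_i^j]$ with the exponential moments $\mathbb E[X_i]=1/\lambda_i$ and $\mathbb E[X_i^2]=2/\lambda_i^2$:

```python
from fractions import Fraction

# mixture weights and exponential rates from the problem
w = [Fraction(19, 20), Fraction(1, 20)]
rates = [5, 7]

# Exponential(rate) moments: E[X] = 1/rate, E[X^2] = 2/rate^2
EX = sum(wi / r for wi, r in zip(w, rates))
EX2 = sum(wi * 2 / (r * r) for wi, r in zip(w, rates))

print(EX)           # 69/350
print(EX2)          # 478/6125
print(EX2 - EX**2)  # 4799/122500, i.e. ~0.0391755
```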
H: Proving DeMorgan's law for arbitrary unions/intersections I am trying to prove DeMorgan's law for arbitrary unions and intersections using Munkres's notation. One of the laws takes the form $$B - \bigcup\limits_{A \in \mathcal{A}} A = \bigcap\limits_{A \in \mathcal{A}} (B - A).$$ This is not the notation I am accustomed to, which would instead take the form $$\left(\bigcup\limits_{A \in \mathcal{A}} A\right)^c = \bigcap\limits_{A \in \mathcal{A}} A^c,$$ but I am trying to prove this fact using Munkres's notation, which uses set differences in place of complements. Here is what I have so far. \begin{align*} x \in B - \bigcup\limits_{A \in \mathcal{A}} A & \iff x \in B \text{ and } x \not \in \bigcup\limits_{A \in \mathcal{A}} A \\ & \iff x \in B \text{ and } \forall A \in \mathcal{A}, \; x \not \in A \end{align*} At this point, I am immediately stuck because I want to say something to the effect of: \begin{align*} & \iff x \in (B - A_1) \text{ and } x \in (B - A_2) \ldots \end{align*} But the collection is arbitrary, so I cannot quite do that. In effect, I am using some sort of "pairing" and using the rule $p \wedge (q \wedge r)$ an arbitrary number of times. If I were to do that without writing it out in a misleading way, I would get something like: \begin{align*} & \iff x \in \bigcap\limits_{A \in \mathcal{A}} (B - A). \end{align*} But the problem is, I am essentially asserting the conclusion without showing any of the steps. The proof using the usual, complement notation I know to be far more involved than this. It seems that I am missing intermediary steps that are difficult to formalize with this notation. What am I missing? AI: You have deduced that $x \in B \text{ and } \forall A \in \mathcal{A}, \; x \not \in A$, and you are trying to prove that $\forall A \in \mathcal{A}, \; ( x \in B \text{ and } x \not \in A )$. This is an instance of a general rule for any statements $P$ and $Q(A)$: $$ P\land (\forall A\in\mathcal A,\, Q(A)) \implies (\forall A\in\mathcal A,\, P\land Q(A)). $$ (Indeed this is an equivalence, at least when $\mathcal A\ne\emptyset$, but you only need this one implication.) This is easy to prove: given an arbitrary $A\in\mathcal A$, you need to prove the statement $P\land Q(A)$, which can be done from the assumptions $P$ and $\forall A\in\mathcal A,\, Q(A)$. Edited to add: For the backward implication, we need to prove two things: $P$, and $\forall A\in\mathcal A,\, Q(A)$. For a proof of $P$, we can do: Since $\mathcal A$ is nonempty, we can choose $A_0\in \mathcal A$. Since $\forall A\in\mathcal A,\, P\land Q(A)$, we know in particular that $P\land Q(A_0)$. In particular, we know $P$. For a proof of $\forall A\in\mathcal A,\, Q(A)$, we can do: Suppose that $A_1$ is any element of $\mathcal A$. Since $\forall A\in\mathcal A,\, P\land Q(A)$, we know in particular that $P\land Q(A_1)$. In particular, we know $Q(A_1)$. Since $A_1\in\mathcal A$ was arbitrary, we have proved $\forall A\in\mathcal A,\, Q(A)$. (Note the subtle difference between the two parts; the first part requires $\mathcal A$ to be nonempty, but the second part is perfectly valid if $\mathcal A$ is empty, since a universal statement is vacuously true in that case.) Moral of the story, at least for me: the logical structure of the statement to be proved is what tells us the structure of the proof itself, and hence how we should arrange our steps in that proof.
H: nth term of a sequence which according to me is not an AP, GP or HP The first term of a sequence is 2014. Each succeeding term is the sum of the cubes of the digits of the previous term. Then the $2014^{\text {th}}$ term of the sequence is I thought of doing it by writing a recurrence relation but was unable to do so AI: Write out a few terms and you will notice that after some terms $370$ starts repeating, since the sum of the cubes of the digits of $370$ is $370$ itself: $2014 \to 2^3+0^3+1^3+4^3 = 73 \to 7^3+3^3 = 370 \to 370 \to \cdots$, so the $2014^{\text{th}}$ term is $370$.
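A few lines of Python make the pattern visible (a throwaway sketch; `step` is a hypothetical helper name):

```python
def step(n):
    # sum of the cubes of the decimal digits of n
    return sum(int(d) ** 3 for d in str(n))

x = 2014
for term in range(1, 8):
    print(term, x)   # 1 2014, 2 73, 3 370, 4 370, ...
    x = step(x)
# the sequence is constant 370 from the third term on,
# so the 2014th term is 370
```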
H: Calculus: derivative of logarithm with respect to logarithm I have this expression: $$ \dfrac{d\ln\left(\dfrac{x_2}{x_1}\right)}{d\ln(\theta)} $$ That I’m hoping to get some help solving, Where $\ln\left(\dfrac{x_2}{x_1}\right)= \ln\left(\dfrac{1-a}{a}\right)+\ln(\theta)$ and $\theta= \dfrac{a}{1-a} \cdot \dfrac{x_2}{x_1}$ My confusion stems from having a derivative with ln in both the numerator and denominator and I’m not sure how to correctly proceed. I know the answer is $1$, however, I’m more interested in knowing the technique to get there. Any help would be appreciated AI: Welcome to Mathematics Stack Exchange. In the general form, you can find a derivative of $f(x)$ with respect to $g(x)$ ,in other hand $\frac{d(f(x))}{d(g(x))}$ by dividing $f,g$ to $dx$ like below $$\frac{d(f(x))}{d(g(x))}=\frac{\frac{d(f(x))}{dx}}{\frac{d(g(x))}{dx}}=\frac{f'(x)}{g'(x)}$$ in your case this can be show $$\frac{d\ln(x2/x1)}{d\ln(\theta)}=\frac{\frac{d \ln(x2/x1)}{d\theta}}{\frac{d\ln \theta}{d\theta}}=\\\frac{\dfrac{d(\ln(\frac{1-a}{a})+\ln(\theta))}{d\theta}}{\dfrac{d\ln \theta}{d\theta}}=\frac{0+1\over \theta}{1\over \theta}=1$$
H: Logic behind Pisano period I just came to know about Pisano periods, and their amazing application in computing residues of large Fibonacci numbers. I know that a Pisano period periodically starts at $0,1$, but I haven't been able to figure out why this pattern occurs periodically. Is there any underlying concept behind this pattern, or was it just pure luck that it was discovered? AI: We consider the Fibonacci sequence with $f_0=0,f_1=1$ and $f_{n+2}=f_{n+1}+f_n\;\forall\;n\geq0$. Consider the pairs of consecutive Fibonacci numbers $(f_k,f_{k+1})$ modulo $n$. There are only $n^2$ possible pairs like this. Hence by the pigeonhole principle there exist indices $r>s$ such that the pairs $(f_r,f_{r+1})$ and $(f_s,f_{s+1})$ are identical modulo $n$. Then $$f_r\equiv f_s\pmod{n}$$ $$f_{r+1}\equiv f_{s+1}\pmod{n}$$ which implies $$f_{r+2}=f_r+f_{r+1}\equiv f_s+f_{s+1}=f_{s+2}\pmod{n}$$ and $$f_{r-1}=f_{r+1}-f_r\equiv f_{s+1}-f_s=f_{s-1}\pmod{n}$$ Proceeding this way we get that $$f_{r-s}\equiv f_0\pmod{n}, f_{r-s+1}\equiv f_1\pmod{n}$$ hence the sequence $f_n$ is periodic with period $r-s$. Moreover, let $$\mathcal{F}=\begin{bmatrix} 1&1\\1&0 \end{bmatrix}$$ in $GL_2(\mathbb{Z}/n\mathbb{Z})$. If $\pi(n)$ is the Pisano period modulo $n$ then $$\mathcal{F}^{\pi(n)}=I.$$ From this we can conclude that the Pisano period is even for $n>2$: $$\mathrm{det}(\mathcal{F}^{\pi(n)})=(-1)^{\pi(n)}=\mathrm{det}(I)=1 \text{ in } \mathbb{Z}/n\mathbb{Z},$$ which forces $\pi(n)$ to be even whenever $-1\not\equiv 1\pmod n$, i.e. for $n>2$ (and indeed $\pi(2)=3$ is odd).
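The pigeonhole bound above also gives a direct way to compute $\pi(n)$: scan at most $n^2$ consecutive pairs until $(0,1)$ recurs. A minimal sketch:

```python
def pisano(n):
    # period of the Fibonacci sequence mod n: first return of the pair (0, 1)
    a, b = 0, 1
    for k in range(1, n * n + 1):   # at most n^2 pairs, by pigeonhole
        a, b = b, (a + b) % n
        if (a, b) == (0, 1):
            return k

print([(n, pisano(n)) for n in range(2, 11)])
# [(2, 3), (3, 8), (4, 6), (5, 20), (6, 24), (7, 16), (8, 12), (9, 24), (10, 60)]
# note pisano(2) = 3 is odd; the period is even for every n > 2
```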
H: Integral roots in quadratic equation The smallest possible natural number $n$, for which the equation $x^{2}-n x+2014=0$ has integral roots, is I know the discriminant will be a perfect square, but I am stuck on the equation for the discriminant. AI: The prime factorisation of $2014$ is $2 \times 19 \times 53$. $2$ is obviously a factor, so you need to find the factors of $1007$ by hand. To do this, you only need to check the primes less than $\sqrt{1007} \approx 31.7$, of which there aren't many. This gives the factor pairs $(1, 2014); (2, 1007); (19, 106)$ and $(38, 53)$. By Vieta's formulas, the sum of the two roots is $\frac{-b}{a}$ in $ax^2+bx+c$, or $-\frac{-n}{1} = n$ in your case. From here, you can immediately observe the smallest value of $n$: the closest factor pair $(38, 53)$ minimizes the sum of the roots, giving $n = 38 + 53 = 91$.
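A brute-force confirmation of the factor pairs and the minimal sum (assuming nothing beyond the standard library):

```python
N = 2014
pairs = [(a, N // a) for a in range(1, 45) if N % a == 0]  # 45^2 > 2014
print(pairs)                         # [(1, 2014), (2, 1007), (19, 106), (38, 53)]
print(min(a + b for a, b in pairs))  # 91
```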
H: Formula for first difference is not the derivative? I was messing around in Desmos and wanted to create a chart to show the x values, function values, first differences, and second differences of some quadratics. The graph I was using is here. However, I had to create explicit formulas for the first differences and found some really strange stuff going on. It's common knowledge that the second difference of a quadratic is just the $a$ value times $2$, assuming that the function is in the form $f(x) = ax^2 + bx + c$. If I recall correctly this is because that is the slope of the derivative of the function. However, I found that the first difference of a quadratic at $x$ is $2ax+a+b$, where the first difference is the change in y over the next step of one unit in the x axis. This is definitely not the derivative, but for all of the values I tested (I used Desmos's little slider thingies) it seemed to work. However, I couldn't find this formula anywhere on the internet. I haven't proved anything, so have I just had happy mistakes and it doesn't actually work? If it does work, why does it work? Additionally, an explanation that explains the difference between the 'first difference' and the derivative would be great. AI: Let $f(x) = ax^2 + bx + c$ be a quadratic function. The first difference of this quadratic can be expressed as: $$f(x+1) - f(x) = a(x+1)^2 + b(x+1) + c - (ax^2 + bx + c) = 2ax + a + b$$ For comparison, the derivative is $f'(x) = 2ax + b$; the first difference $2ax+a+b$ equals $f'\left(x+\frac12\right)$, the derivative taken at the midpoint of the unit step, which is why the two differ by the constant $a$.
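A quick empirical check of the identity over a few integer inputs (the coefficient values here are arbitrary, chosen only for illustration):

```python
a, b, c = 3, -5, 7                      # arbitrary (hypothetical) coefficients
f = lambda x: a * x * x + b * x + c

for x in range(-5, 6):
    assert f(x + 1) - f(x) == 2 * a * x + a + b   # first difference formula
print("first difference = 2ax + a + b checked")
```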
H: Determine the type of object (e.g. lines or plane) given the intersection Given $B=\begin{pmatrix} 1 & -1 & 1\\ 1 & 1 & 3\end{pmatrix}$, determine the general solution of the homogeneous system $Bx=0$. This describes the intersection of two objects (e.g. a line or a plane). Determine and find the Cartesian equation of this object. I have solved the homogeneous equation, getting $x=t(-2, -1, 1)$, but how do I determine what two objects this intersection represents along with their Cartesian equations? AI: Note that the first two columns are linearly independent; this means the rank (dimension of the range space) is at least $2$. But the range is $\subseteq \Bbb{R}^2$. Thus the range is the entire $\Bbb{R}^2$. By the rank-nullity theorem, the nullity is $3-2=1$. This means the solution space for $B\mathbf{x}=\mathbf{0}$ is $1$-dimensional. Thus it must be a line passing through the origin. Note: we didn't have to really solve the system to determine the geometry of the solution space. From your answer it is clear that any solution is parallel to the vector $\begin{bmatrix}-2\\-1\\1\end{bmatrix}$. Hence it is the line with this direction vector passing through the origin. As for the two objects: each row of $B$ is the normal vector of a plane through the origin, so the system describes the intersection of the planes $x-y+z=0$ and $x+y+3z=0$; in Cartesian form the line is $\frac{x}{-2}=\frac{y}{-1}=\frac{z}{1}$.
H: Are finitely generated modules over a commutative local ring cancellative? Let $M,N,P$ be finitely generated modules over a (Noetherian) local ring $R$. If $M \oplus N \cong M \oplus P$, do we have $N \cong P$ ? If not, what if we further assume that $M \cong R^n$ for some positive integer $n$, or even $M=R$ ? AI: This is true. See Proposition 1 in this paper which contains many other interesting results on decompositions of modules over local rings. Update: Also see this paper of T.Y. Lam for a more recent reference (thanks to rschwieb for suggesting this).
H: A symmetric point of the inverses I have the graphs of $y = F(x) = e^x$, $y = G(x) = \ln(x)$, and $L : y = x$ drawn. Of course, $y = F(x)$ and $y = G(x)$ are inverses of each other and therefore they are symmetric about $L$. Let $P(p, q)$ be a point on $y = G(x)$. Through $P$, I drop a perpendicular to $L$, cutting it at $M$, and $y = F(x)$ at $H(h, k)$ such that $M$ is the midpoint of $HP$. I know that $p = k$ and $h = q$ is a fact. However, I want to prove that via the equations: (1) $k = e^h$ (2) $q = \log_e (p)$ (3) $h + p = k + q$ Using (1) and (2) to eliminate $p$ and $k$ from (3), I end up with $h + e^q = e^h + q$. A trivial solution is $h = q$. My questions are (1) is that solution unique? And (2) how can I prove that in a more rigorous manner? PS Please ignore the circle. AI: The equation $h+e^q=e^h+q$ is equivalent to $e^q-q=e^h-h$, or equivalently $f(q)=f(h)$ where $f(x) = e^x-x$. And it is easy to check that $f$ is increasing for $x>0$ (since $f'(x) = e^x-1>0$), hence one-to-one, so that $f(q)=f(h)$ implies $q=h$. (Indeed, if you allow $h$ or $q$ to be negative then the solution is no longer unique.)
H: Let $T$ and $U$ be non-zero linear transformations from $V$ to $W$. If $R(T)\cap R(U) = \{0\}$, prove that $\{T,U\}$ is LI Let $V$ and $W$ be vector spaces, and let $T$ and $U$ be non-zero linear transformations from $V$ to $W$. If $R(T)\cap R(U) = \{0\}$, prove that $\{T,U\}$ is a linearly independent subset of $\mathcal{L}(V,W)$. My solution Let us consider the following linear combination: \begin{align*} \alpha T + \beta U = 0 \end{align*} If they were linearly dependent, we could assume without loss of generality that $\alpha \neq 0$. Thus we would have \begin{align*} T = -\frac{\beta}{\alpha}U \end{align*} Consequently, if we consider a basis $\mathcal{B}_{V} = \{v_{1},v_{2},\ldots,v_{n}\}$, we get the following relation \begin{align*} T(v_{j}) = -\frac{\beta}{\alpha}U(v_{j}) \Rightarrow T(v_{j}) \in R(T)\cap R(U) \Rightarrow T(v_{j}) = 0 \Rightarrow T = 0 \end{align*} which contradicts the given assumption. Hence the proposed result holds. Could someone please double-check my solution? Is there another way to approach it, just for the sake of curiosity? AI: Your proof looks fine. One refinement might be to avoid taking a basis (after all, there is no guarantee that $V$ has finite dimension). Rather you could just note that $\beta \neq 0$ since neither mapping is zero, and then see that any vector in the range of $U$ is also in the range of $T$. Also, you might emphasise that you move the $-\beta/\alpha$ inside the operator $U$'s argument, noting that linearity allows you to do that.
H: Example of unequal iterated integral but that does not contradict Fubini's Theorem Consider counting measures $\mu_1$ and $\mu_2$ on $X=Y=\mathbb{N}$. Define a function, $$ f(x,y) = 2-2^{-x} \ \text{if} \ \ x=y \\ \text{and}\\ f(x,y) = -2 + 2^{-x} \ \text{if} \ \ x=y+1 $$ I showed that $$ \int_X(\int_Y f(x,y)d\mu_2)d\mu_1 =1 $$ and $$ \int_Y(\int_X f(x,y)d\mu_1)d\mu_2 =-\frac{1}{2} $$ Therefore the two iterated integrals are not equal. But I cannot show why this does not contradict Fubini's theorem. Thanks in advance AI: $\int \int |f(x,y)| \,d\mu_2(y)\, d\mu_1(x) \geq \int |2-2^{-x}| \,d\mu_1(x)=\sum_x |2-2^{-x}| =\infty$. Hence $f$ is not integrable on the product. Neither is it non-negative. So the Fubini/Tonelli theorem is not applicable. [$ \sum_x |2-2^{-x}| \geq \sum_x 2 -\sum_x 2^{-x} = \infty$ since $\sum_x 2^{-x} <\infty$].
H: Verify Proof that N cannot be expressed as ${\frac{a}{b}} + {\frac{c}{d}}$ Given Certain Parameters I'm trying to prove the statement for any integer, N: N cannot be expressed as ${\frac{a}{b}} + {\frac{c}{d}}$ Given the two parameters: a, b, c, and d are all different integers which are not 0 The fractions, ${\frac{a}{b}}$ & $\frac{c}{d}$, are irreducible. In other words, a and b are co-prime, and c and d are co-prime. My current proof is a proof by contradiction and goes as follows: Propose ${\frac{a}{b}} + {\frac{c}{d}} = N$; we can then conclude that $N - {\frac{a}{b}} = {\frac{c}{d}}$. The integer, N, can be rewritten as ${\frac{Nb}{b}}$ because multiplying then dividing by the same value yields the original number. If we replace N with this new ${\frac{Nb}{b}}$, we get: $${\frac{Nb}{b}} - {\frac{a}{b}} = {\frac{c}{d}}$$ Since the terms share a common denominator, this can be simplified to: $${\frac{Nb - a}{b}} = {\frac{c}{d}} $$ Due to parameter 2, the only way for these two fractions to be equal is for the numerator and denominator to be the same values, since no reducible fractions are allowed. This would mean that b and d would have to be equal in order for the equation above to hold true, yet parameter 1 states b and d cannot be equal. This is a contradiction which thus proves that ${\frac{a}{b}} + {\frac{c}{d}} = N$ is false, or alternatively: $${\frac{a}{b}} + {\frac{c}{d}} \neq N$$ is true. My Question: Is this proof valid? Is it true that N cannot be expressed as ${\frac{a}{b}} + {\frac{c}{d}}$? And as a bonus question: The fraction, ${\frac{5}{4}}$, can be written as ${\frac{a}{b}} + {\frac{c}{d}}$ given the parameters mentioned (${\frac{1}{2}} + {\frac{3}{4}}$). If my proof is indeed valid for all integers, n, why is it not also valid for fractions which are not whole, such as ${\frac{5}{4}}$? AI: You are correct in your proof for when N is an integer! Now, it is important to emphasize that the new fraction $\frac{Nb-a}{b}$ is irreducible. To do so, we can use the Euclidean algorithm to show the gcd of the numerator and denominator is $1$: $\gcd(Nb-a,b) = \gcd((N-1)b-a,b) = \cdots = \gcd(a,b) = 1$; this is true because $N$ is an integer so $Nb$ is an integral multiple of $b$. However, this only works because $N$ is an integer. If $N$ is a non-integral rational (i.e. $N = \frac{e}{f}$), then if $N = \frac{cb+ad}{bd}$ we get: $\frac{Nb-a}{b} = \frac{\frac{cb+ad}{bd}b-a}{b} = \frac{c}{d}$
H: Finding integer solutions to sum of reciprocals of x and y $$\frac{1}{x}+\frac{1}{y}=\frac{1}{13}$$ Given the sum of reciprocals of $(x,y)$, what's a method to find integer solutions for an equation similar to the above? I've been wondering and I haven't really found something online. If you could point me to resources on stuff related to this, that would be greatly appreciated as well. AI: In general, you start by manipulating the equation as follows $$\dfrac{1}{x}+\dfrac{1}{y} = \dfrac{1}{n}$$ $$nx+ny = xy$$ $$0 = xy-nx-ny$$ $$n^2 = xy-nx-ny+n^2$$ $$n^2 = (x-n)(y-n)$$ Then, all the solutions are of the form $(x,y) = (n+a,n+b)$ where $a$ and $b$ are complementary factors of $n^2$, i.e. $n^2 = ab$. The idea to add $n^2$ to both sides in line 4 is known as Simon's Favorite Factoring Trick in case you are curious. For your case $n=13$: $n^2=169$ has the factor pairs $(1,169)$, $(13,13)$, $(169,1)$ together with their negative counterparts, giving the positive solutions $(x,y)=(14,182)$, $(26,26)$, $(182,14)$.
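A short enumeration for the case $n=13$, following exactly this factoring recipe (a sketch, standard library only):

```python
n = 13
solutions = []
for a in range(-n * n, n * n + 1):
    if a != 0 and (n * n) % a == 0:
        b = n * n // a
        solutions.append((n + a, n + b))      # (x, y) = (n + a, n + b)

for x, y in solutions:
    if x and y:                               # skip the degenerate pair (0, 0)
        assert abs(1 / x + 1 / y - 1 / n) < 1e-12
print(sorted(s for s in solutions if s[0] > 0 and s[1] > 0))
# [(14, 182), (26, 26), (182, 14)]
```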
H: Calculate the dot product when for one of the vectors we only know the sum The problem is to calculate the following quantity: $$ax+by$$ I know $a, b \in [0, 1]$, $x, y\in \mathbb Z^+$, and I know the sum of $x$ and $y$. Is it possible to calculate $ax+by$? I have tried using substitution of one variable, trying notable products like $(a+b)(x+y)$, etc., with no luck. AI: If $S$ is the sum of $x$ and $y$, then we know $x=t,y=S-t$ for some $t$. Then your dot product is: $$at+b(S-t)=(a-b)t+bS$$ and clearly, except when $a=b$, different admissible values of $t$ (here $t\in\{1,\dots,S-1\}$, since $x,y\in\mathbb Z^+$) give different values of this expression. So knowing $S$ alone tells us nothing about the dot product. If $a=b$ then we can calculate the dot product, since it's just $a(x+y)=aS$.
H: Prove that there exists $n\in \mathbb{N}$ s.t. $x_n=\frac12$ Let $(x_n)_n$ be a sequence given by $2x_{n+1}=2x_n^2-5x_n+3$ with $x_1\in \mathbb{Q}$. I know that the sequence is convergent. I know that the limit of the sequence should be $\dfrac{1}{2}$ or $3$. I want to prove that there exists $n\in \mathbb{N}$ s.t. $x_n=\dfrac{1}{2}$ if the sequence goes to $\dfrac{1}{2}$. Similarly if the sequence goes to $3$. I tried by definition with $\epsilon$ but I didn't succeed. AI: Let $f(x)=x^2-\frac{5}{2}x+\frac{3}{2}$, so that $x_{n+1}=f(x_n)$. Suppose that $x_n$ converges to $l$, where $l\in\lbrace \frac{1}{2},3\rbrace$. Then $$ y_n=\frac{x_{n+1}-l}{x_n-l}=\frac{f(x_{n})-f(l)}{x_n-l} \to f'(l) \textrm{ when } n \to \infty \tag{1} $$ Note that $f'(\frac{1}{2})=-\frac{3}{2}$ and $f'(3)=\frac{7}{2}$. So $|f'(l)| \geq \frac{3}{2}$ in both cases, and hence $|f'(l)| \gt \frac{5}{4}$ in both cases. It follows that there is an $n_0$ such that $|y_n|\gt \frac{5}{4}$ for all $n\geq n_0$. Then $$|x_{n+1}-l| \geq \frac{5}{4} |x_n-l| \textrm{ for all } n\geq n_0 \tag{2}$$ By induction, we deduce $$|x_n-l| \geq \big(\frac{5}{4}\big)^{n-n_0}|x_{n_0}-l| \textrm{ for all } n\geq n_0 \tag{3}$$ If $x_{n_0}\neq l$, we would deduce $\lim_{n\to\infty}{|x_n-l|}=\infty$, which is impossible. So $x_{n_0}=l$, which finishes the proof.
H: Metrizability of RxR in the dictionary order topology The question is one in Munkres where we are asked to prove the metrizability of $\mathbb R\times\mathbb R$ in the dictionary order topology. My attempts at defining a metric seem to falter at the end. For example, I have tried the standard bounded metric, the usual metric, etc., but all of them give open balls which can't be contained in a basis element of the dictionary order topology of the form $(a\times b, a\times c)$, where $b<c$. How do I proceed? AI: HINT: Each vertical $\{x\}\times\Bbb R$ is a clopen subset of $\Bbb R\times\Bbb R$ in this topology, so the space is homeomorphic to $\Bbb R_d\times\Bbb R$, where $\Bbb R_d$ is the real line with the discrete topology, and the second factor has the usual topology. $\Bbb R_d$ and $\Bbb R$ are both metric spaces. The product of two metric spaces is metrizable; do you know how to construct a metric for it from metrics on the factors? If not, or if you get stuck, take a look at the second paragraph of this answer.
H: Poisson arrival conditional probability A meteorite shower is a Poisson arrival process with a rate of 16.6 per minute. Given that 7 meteorites were observed during the first minute, what is the expected value of the time passed until the 10th meteorite is observed? Extracting info from the question gives us: $$N_t \sim Pois(16.6t)$$ $$T_{i+1}-T_i \sim Exp(16.6)$$ $$T_i \sim Gamma(i,16.6)$$ We want to find $E(T_{10}|N_1=7)$. These two variables are dependent so by definition: $$E(T_{10}|N_1=7)=\int_{0}^{\infty} tP(T_{10} = t|N_1=7) dt$$ The only thing I could extract from that conditional probability is that the integral should start from 1, because the tenth arrival could not have been during the first minute. Also, I don't know any good way of opening up the following: $$=\int_{1}^{\infty} t\dfrac{P(T_{10} = t,N_1=7)}{P(N_1=7)} dt$$ Is there an easier way to approach this? AI: The question is equivalent to asking how much time, on average, it will take to observe three more meteorites. This is just the expectation of a gamma random variable with shape $n = 3$ and rate $\lambda = 16.6$; i.e., it will take on average $3/16.6 \approx 0.180723$ minutes, or about $10.8$ seconds. This does not include the one minute that it took to observe $7$ meteorites; including it, $E(T_{10}\mid N_1=7) = 1 + 3/16.6 \approx 1.18$ minutes.
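A Monte Carlo sanity check of this memorylessness argument (a rough sketch; the seed and sample count are arbitrary), conditioning on $N_1=7$ by rejection:

```python
import random

rate = 16.6
random.seed(0)
samples = []
while len(samples) < 1000:
    t, times = 0.0, []
    # exponential inter-arrival gaps; run past t = 1 and past the 10th arrival
    while len(times) < 10 or t <= 1.0:
        t += random.expovariate(rate)
        times.append(t)
    if sum(1 for s in times if s <= 1.0) == 7:  # condition on N_1 = 7
        samples.append(times[9])                # time of the 10th arrival
print(sum(samples) / len(samples))  # ~ 1 + 3/16.6 = 1.1807 minutes
```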
H: Find $E[X\mid Y]$ for $y\in[0,x]$ I'm having a dilemma regarding the integral of a density: Suppose: $$f_{X,Y}(x,y) = \frac{1}{2}xy\times\textbf{1}_{x\in[0,2]}\times\textbf{1}_{y\in[0,x]}$$ In order to find $E[X\mid Y]$ I know I want to solve the integral: $$\int_{\mathbb{R}}x\frac{f_{X,Y}(x,y)}{f_{Y}(y)}dx$$ And for that I need to find $f_{Y}(y)$. Since I have this condition where $y\in[0,x]$ I'm not sure how to take that into consideration when I integrate. When looking for $f_{Y}(y)$ how should I approach it? It can either be: $f_{Y}(y)= \int_{y}^{2}{f_{X,Y}}(x',y)dx'$ $f_{Y}(y)= \int_{0}^{2}{f_{X,Y}}(x',y)dx'$ If that were the other way around, I'd say with no doubt that: $$f_{X}(x)= \int_{0}^{x}{f_{X,Y}}(x,y')dy'$$ My intuition says that $x$ can run independently on the interval $[0,2]$ and is not affected by $y$ even though $y$ IS affected by $x$. Therefore I think integral number 2 is correct, but I'm not $100\%$ sure about it. AI: In order to find $f_{Y}(y)$, you should compute it in this way: $f_{Y}(y)= \int_{\mathbb R}{f_{X,Y}}(x',y)\,dx'=\int_{\mathbb R}\frac{1}{2}x'y\times\textbf{1}_{x'\in[0,2]}\times\textbf{1}_{y\in[0,x']}\,dx' = \int_{[0,2] \cap [y,+\infty)}\frac{1}{2}x'y \,dx'=\frac 1 2 y \int_{[0,2] \cap [y,+\infty)}x'\,dx'$ and the last one depends on $y$. If $y \leq 0$ then $[0,2] \cap [y,+\infty)=[0,2]$, if $0 < y < 2$ then $[0,2] \cap [y,+\infty)=[y,2]$ and if $y > 2$ then $[0,2] \cap [y,+\infty)= \emptyset$. The case $y=2$ does not really matter. So on the support $0<y<2$, the indicator $\textbf{1}_{y\in[0,x']}$ turns the integral into your first option, $f_Y(y)=\int_y^2 f_{X,Y}(x',y)\,dx'$, not the second: the lower limit is $y$, not $0$.
H: Multivariable Non-degenerate Critical Points Question Let $f:\mathbb{R}^2 \rightarrow \mathbb{R}$ be a $C^2$ function, suppose the origin is a non-degenerate critical point, and suppose $f(x,mx)$ has a local minimum at the origin for all $m$; then does $f$ have a local minimum at the origin? I understand that if the critical point is not degenerate, then we either have a local maximum (which we can rule out, since the function has local minima as you approach the origin via straight lines), a local minimum, or a saddle point. I am not sure how to rule out the saddle point case. In particular, I'm not sure how to apply the non-degeneracy condition rigorously. AI: Suppose that $f$ has a saddle point at $(0,0)$. Let $v\in\Bbb R^2$ be an eigenvector of the Hessian of $f$ at $(0,0)$ such that the corresponding eigenvalue $\mu$ is negative; $v$ must exist, since $(0,0)$ is a non-degenerate critical point. Consider the map $\varphi\colon\Bbb R\longrightarrow\Bbb R$ defined by $\lambda\mapsto f(\lambda v)$. Then $\varphi'(0)=0$ (since $(0,0)$ is a critical point of $f$), and $\varphi''(0)<0$ (since $\mu<0$). Therefore, $0$ is a local maximum of $\varphi$.
H: Limit of sum function for infinite series $f(x)=\sum_{n=1}^{\infty}\frac{1}{n^6+x^4}$ as $x\rightarrow\infty$ As the title states I would like to determine the limit of $f(x)=\sum_{n=1}^{\infty}\frac{1}{n^6+x^4}$ as $x\rightarrow\infty$. My gut instinct here tells me that the limit should be 0 as each of the terms would go to 0, however I am having difficulty finding any formal reason as to why: $\lim_{x\rightarrow\infty}\sum_{n=1}^{\infty}f_n(x)=\sum_{n=1}^{\infty}\lim_{x\rightarrow\infty}f_n(x)$ If it were a finite sum then this would be easy enough, however the fact that I'm working with an infinite sum is causing me some trouble, I'd deeply appreciate any help! AI: The result follows directly by Dominated Convergence. But here's an elementary argument. Fix $\varepsilon>0$. Choose $m$ such that $\sum_{n>m}\frac1{n^6}<\frac\varepsilon2$. If $x>(2m/\varepsilon)^{1/4}$, then $$ \sum_n\frac1{n^6+x^4}\leq\frac\varepsilon2+\sum_{n=1}^m\frac1{n^6+x^4} \leq\frac\varepsilon2+\sum_{n=1}^m\frac1{x^4} =\frac\varepsilon2+\frac m{x^4}<\varepsilon. $$
H: Prove that $\sqrt[3]{2} + \sqrt[3]{4}$ is irrational. Prove that $\sqrt[3]{2} + \sqrt[3]{4}$ is irrational. My steps so far: I found that the polynomial $y^3-6y-6=0$ has $\sqrt[3]{2} + \sqrt[3]{4}$ as a root. Can I use this to prove that $\sqrt[3]{2} + \sqrt[3]{4}$ is irrational? If so, how? I was thinking of using Proof by Contradiction, but I'm not so sure. AI: Indeed, $\sqrt[3]2+\sqrt[3]4$ is a root of $x^3-6 x-6$: expanding gives $(\sqrt[3]2+\sqrt[3]4)^3 = 6 + 6(\sqrt[3]2+\sqrt[3]4)$. By the rational roots theorem, the only rational roots that that polynomial can have are $\pm1$, $\pm2$, $\pm3$, and $\pm6$. Since none of them is actually a root, your number is irrational.
H: Swapping variables in an equation Consider this function, $$ f\left(\frac{ax+by}{a+b} \right) = \frac{af(x) + bf(y)}{a+b}$$ would it be correct to write $$ f\left( \frac{ay+bx}{a+b} \right) = \frac{ a f(y) + bf(x)}{a+b}$$ Reasoning: the equation should hold even if you switch the variables AI: Sure. Perhaps it might be easier to see if you make some intermediate substitutions. Let $x \mapsto p$ and $y \mapsto q$. Then $$f\left(\frac{ax+by}{a+b} \right) = \frac{af(x) + bf(y)}{a+b}$$ becomes $$f\left(\frac{ap+bq}{a+b} \right) = \frac{af(p) + bf(q)}{a+b}$$ Now let $p \mapsto y$ and $q \mapsto x$. $$f\left(\frac{ay+bx}{a+b} \right) = \frac{af(y) + bf(x)}{a+b}$$ Alternatively, consider defining $$g(x,y) := f\left(\frac{ax+by}{a+b} \right) = \frac{af(x) + bf(y)}{a+b}$$ Then if we were to swap $x$ and $y$, we would want to find $g(y,x)$, no? In doing so, then, the same variables are swapped in $f$'s equality, and so $$g(y,x) = f\left(\frac{ay+bx}{a+b} \right) = \frac{af(y) + bf(x)}{a+b}$$
H: Extending the Dirac delta to $L^p$ In this question the Dirac delta is extended from $\mathcal C^0([-1,1])$ to $L^\infty([-1,1])$ by the Hahn-Banach theorem. My question is: why can't it be extended to an arbitrary $L^p([-1,1])$ for $p \geq 1$? AI: To extend $\delta$ to act on $L^p$ along the idea you mention, you need to consider $C[-1,1]$ as a subspace of $L^p[-1,1]$. In that case $\delta $ is not bounded, so Hahn Banach does not apply. For instance let $f_n(t)=\max\{0,1-n|t|\}$. Then $\delta(f_n)=1$ for all $n$, while $$\|f_n\|_p=\frac{2^{1/p}}{(p+1)^{1/p}}\,\frac1{n^{1/p}}\to0. $$
H: Given two functions $f(x),g(x)$ so that $f(x)=-\frac{x^3}{3}+x^2+1,g(x)=5-2x$. Find the ranges of $x$ so that $f(g(x))<g(g(x))$ Given two functions $f(x), g(x)$ so that $f(x)= -\dfrac{x^{3}}{3}+ x^{2}+ 1, g(x)= 5- 2x$. Find the ranges of $x$ so that $$f\left ( g(x) \right )< g\left ( g(x) \right )$$ It is an easy Desmos problem but my teacher hates it https://www.desmos.com/calculator/jjfaugkkku I see that $f\left ( g(x) \right )\gtreqqless g\left ( g(x) \right )$ is related to $f(x)\gtreqqless g(x)$. So how can I use $g(x)= 5- 2x$ in a smarter way? Thank you. AI: You have to find the $y$-intervals where $$f(y)<g(y)\qquad(y\in{\mathbb R})\ .$$ There are two such intervals, say $A$ and $B$; you get them by solving a cubic equation. You then have to find the $x$-intervals for which $$y:=g(x)\in A\cup B\ .$$
H: three variable inequality $x+y+z\le xyz+2$ with constraint $x^2+y^2+z^2=2$ Let $x$, $y$ and $z$ be three real numbers such that $x^2+y^2+z^2=2$. It is asked to prove that $$x+y+z \le xyz+2$$ I tried using Lagrange multipliers but I'm stuck with the following system $$\begin{cases} x^2 + y^2 + z^2 = 2 \\ 1-yz = 2\lambda x \\1-xz = 2\lambda y \\ 1-xy = 2\lambda z \end{cases} $$ Thanks for any advice or hint. AI: By C-S $$x+y+z-xyz=x\cdot(1-yz)+(y+z)\cdot1\leq$$ $$\leq\sqrt{(x^2+(y+z)^2)((1-yz)^2+1^2)}=\sqrt{2(1+yz)(2-2yz+y^2z^2)}\leq2$$ because the last inequality is just $$y^2z^2(1-yz)\geq0,$$ which is true because $$yz\leq\frac{y^2+z^2}{2}\leq1.$$
H: How to calculate this conditional probability? I was given a conditional probability task with the following wording: The probability of having black hair is 60%. The probability of having blue eyes if your hair is not black is 40%. The probability of having blue eyes given that your hair is black is 10%. What is the probability of having black hair if your eyes are blue? I am not a native speaker so I was conflicted if the task implies conditional probability or joint probability. Anyway I formulated the task as follows: P(BH) = .6 P(BE | not BH) = .4 P(BE | BH) = 0.1 P(BH | BE) = ? 1. Is this correct? Did I understand the wording correctly? After formulating these equations I could derive the following: P(BH) = .6 P(not BH) = .4 ================== P(BE | not BH) = .4 P(BE and not BH) / P(not BH) = .4 P(BE and not BH) / .4 = .4 P(BE and not BH) = 0.16 ================== P(BE | BH) = 0.1 P(BE and BH) / P(BH) = 0.1 P(BE and BH) / 0.6 = 0.1 P(BE and BH) = 0.06 ================== P(BH | BE) = P(BH and BE) / P(BE) 2. Now I am stuck as I don't know how I could get the P(BE) value. How to get value of P(BE)? AI: This question is most readily answered by the use of Bayes theorem. From Bayes we have $$ \mathbb P(BH\mid BE) = \frac{\mathbb P(BE\mid BH)\mathbb P(BH)}{\mathbb P(BE)}. $$ We are given all of these quantities aside from $\mathbb P(BE)$, but we can compute $\mathbb P(BE)$ using the law of total probability and definition of conditional probability: \begin{align} \mathbb P(BE) &= \mathbb P(BE\cap BH)+\mathbb P(BE\cap BH^c)\\ &=\mathbb P(BE\mid BH)\mathbb P(BH) + \mathbb P(BE\mid BH^c)\mathbb P(BH^c)\\ &= \frac1{10} \cdot\frac35 + \frac25\cdot \frac25\\ &= \frac{11}{50} \end{align} It follows that $$ \mathbb P(BH\mid BE) = \frac1{10}\cdot \frac35 \cdot\frac{50}{11} = \frac3{11}. $$
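The same computation in a few lines of Python, mirroring the total-probability and Bayes steps of the answer:

```python
p_bh = 0.6                 # P(black hair)
p_be_given_not_bh = 0.4    # P(blue eyes | not black hair)
p_be_given_bh = 0.1        # P(blue eyes | black hair)

# law of total probability, then Bayes' theorem
p_be = p_be_given_bh * p_bh + p_be_given_not_bh * (1 - p_bh)
p_bh_given_be = p_be_given_bh * p_bh / p_be

print(p_be)           # 0.22       (= 11/50)
print(p_bh_given_be)  # 0.2727...  (= 3/11)
```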
H: How to understand derivative of reciprocal function with taylor series With taylor series, $f(x + h) = f(x) + \frac{df}{dx}h + O(h^2)$, we know the derivative of $f(x)$ is the coefficient of the first order term $\frac{df}{dx}h$. Using this definition of derivative, we can easily understand the linearity of derivative: $\frac{d(u+v)}{dx} = \frac{du}{dx} + \frac{dv}{dx}$ as $u(x + h) = u(x) + \frac{du}{dx}h + O(h^2)$ $v(x + h) = v(x) + \frac{dv}{dx}h + O(h^2)$ Thus: $(u+v)(x+h) = u(x+h) + v(x+h)$ $= u(x) + \frac{du}{dx}h + O(h^2) + v(x) + \frac{dv}{dx}h + O(h^2)$ $= u(x) + v(x) + (\frac{du}{dx} + \frac{dv}{dx})h + O(h^2)$ from the coefficient of the first order term, we know the derivative of $u+v$ is $\frac{du}{dx} + \frac{dv}{dx}$ The question is how to use the same reasoning process to verify reciprocal rule, i.e. $\frac{d(1/v)}{dx} = -\frac{1}{v^2}\frac{dv}{dx}$ I was starting with: $\frac{1}{v}(x + h) = \frac{1}{v(x + h)} = \frac{1}{v(x) + \frac{dv}{dx}h + O(h^2)}$ And I stuck here, could you please give some directions on how to continue? AI: Assuming things are well defined etc... $$\frac{1}{v+\frac{dv}{dx}h + O(h^2)} = \frac{1}{v}\cdot\frac{1}{1+\frac{1}{v}\frac{dv}{dx}h + O(h^2)}$$ $$ = \frac{1}{v}\left(1 - \frac{1}{v}\frac{dv}{dx}h + O(h^2)\right) = \frac{1}{v} - \frac{1}{v^2}\frac{dv}{dx}h + O(h^2)$$ by geometric series.
H: Discrete Uniform Distribution with random variables Let X be the random variable that records the number of "heads" when two coins are tossed. Let Y be a random variable with the discrete uniform distribution on the probability space {1, 2, 3}. Assume that X and Y are independent. Let U be the random variable defined by U = X + Y. Find the probability distribution for the random variable U. How is the probability distribution calculated in this case? AI: First of all check the support of the new rv $U=X+Y$: $u \in \{1;2;3;4;5\}$. To calculate the probability of each $u$, the easiest way is to construct a $3 \times 3$ table with the joint probabilities inside (easy to fill because of independence), then collect all the possible cases for each $u$. Example, for $u=2$: $$\mathbb{P}[U=2]=\mathbb{P}[X=0;Y=2]+\mathbb{P}[X=1;Y=1]=\frac{1}{4}\times\frac{1}{3}+\frac{2}{4}\times\frac{1}{3}=\frac{3}{12}$$ Now you only have to calculate the probabilities of the remaining values of $u$.
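Instead of the table, the same bookkeeping can be done by enumerating the joint support (a small sketch using exact fractions):

```python
from fractions import Fraction
from collections import defaultdict

# P(X = x) for the number of heads in two fair coin tosses
pX = {0: Fraction(1, 4), 1: Fraction(2, 4), 2: Fraction(1, 4)}
# Y uniform on {1, 2, 3}
pY = {y: Fraction(1, 3) for y in (1, 2, 3)}

pU = defaultdict(Fraction)
for x, px in pX.items():
    for y, py in pY.items():
        pU[x + y] += px * py   # independence: joint = product

print(dict(sorted(pU.items())))
# {1: 1/12, 2: 1/4, 3: 1/3, 4: 1/4, 5: 1/12}
```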
H: Find the surface of a triangle between two slopes and x axis Find the area of the triangle between these two lines and the $x$-axis. $$f(x)=x+1$$ $$g(x)=(3-2\sqrt{2})x$$ So: $\text{k}_1= \arctan(1)=45°$ and $\text{k}_2=\arctan(3-2\sqrt{2})=?$ It's very hard to get this angle without a calculator, which I must not use. Then I tried: $$\tan(\psi)=\left|\frac{\text{k}_1-\text{k}_2}{1+\text{k}_2\text{k}_1}\right| $$ I got: $\psi=45°$ So the angles are: $\alpha=45°$, $\psi=45°$, $\beta=90°$ But how to get the area of the triangle only from these data? EDIT: Also, $\arctan(3-2\sqrt2)$ is not 90 degrees, which frustrates me. What should I do? Actually Mathematica says it is $9.736°$. AI: $f(x)$ and $g(x)$ intersect when $$x+1=(3-2\sqrt2)x \implies x=-\frac{(\sqrt 2+1)}{2} $$ and $$y=x+1=\frac{1-\sqrt 2}{2}$$ The other vertices of the triangle are the $x$-intercepts of the lines, namely $(0,0)$ and $(-1,0)$. So the area is simply $$\frac 12\left|\begin{matrix} -\frac{(\sqrt 2+1)}{2} &\frac{1-\sqrt 2}{2} & 1 \\ 0 & 0 & 1 \\ -1 & 0 & 1 \end{matrix}\right| = \frac{\sqrt 2-1}{4}. $$
H: Maximize/Minimize $ \sum_{i=1}^n x_i^3$ subject to $\sum_{i=1}^n x_i = 0$ and $\sum_{i=1}^n x_i^2 =1 $ This question comes from a master's course I followed in Optimization. The problem is the following: Maximize/Minimize $ \sum\limits_{i=1}^n x_i^3$ subject to $\sum\limits_{i=1}^n x_i = 0$ and $\sum\limits_{i=1}^n x_i^2 =1 $. Or, if we prefer \begin{align*} \min/\max f(\pmb x) &= \sum_{i=1}^n x_i^3 \\ \text{s.t.} \quad h_1(\pmb x) &= \sum_{i=1}^n x_i = 0 \\ h_2(\pmb x) &= \sum_{i=1}^n x_i^2 - 1 = 0 \end{align*} My approach: Let $\Omega = \{\pmb x \in \mathbb R^n : h_1(\pmb x) = 0, h_2(\pmb x) = 0 \}$. The interesting cases are for $n \ge 3$; $\Omega$ is compact; $\Omega$ is symmetric (i.e. for every $\pmb x \in \Omega$ we have that $-\pmb x \in \Omega$); $f$ is globally odd (i.e. $f(- \pmb x) = -f(\pmb x)$ for every $\pmb x \in \Omega$); if $\pmb x_{\text{max}}$ is a local maximum, then $-\pmb x_{\text{max}} $ is a local minimum; hence we can reduce to the minimization case. I applied the KKT conditions and with a lot of calculus I obtained $$ f(\pmb x_\text{min}^*)= - \frac{n-2}{\sqrt{n^2-n}} $$ and the minimizer is $$ \pmb x_\text{min}= \left(\frac{1}{\sqrt{n^2-n}},\frac{1}{\sqrt{n^2-n}},\dots,\frac{1}{\sqrt{n^2-n}},- \sqrt{\frac{n-1}{n}} \right) $$ (and, of course, every permutation with this structure). Assuming my solution is correct, the question is: is there another way to prove this with some clever algebraic trick? AI: With Lagrange multipliers we want to minimize$$L:=\sum_ix_i^3-\lambda\sum_ix_i+\mu\left(1-\sum_ix_i^2\right)$$using$$0=\frac{\partial L}{\partial x_i}=3x_i^2-\lambda-2\mu x_i.$$Each solution can only have at most two values for the $x_i$, as they must be roots of $3x^2-2\mu x-\lambda$. A one-value option requires $x_i=0$, contradicting $\sum_ix_i^2=1$. Instead let $k$ of the $x_i$ be $(n-k)c$ and the other $n-k$ be $-kc$, so$$1=k(n-k)^2c^2+(n-k)k^2c^2=nk(n-k)c^2\implies c=\pm\frac{1}{\sqrt{nk(n-k)}},$$obtaining$$\sum_ix_i^3=k(n-k)^3c^3-(n-k)k^3c^3=\pm\frac{n-2k}{\sqrt{nk(n-k)}}.$$This is easiest to extremize by writing$$\frac{n-2k}{\sqrt{nk(n-k)}}=\frac{1}{\sqrt n}\left(\sqrt{\frac{n-k}{k}}-\sqrt{\frac{k}{n-k}}\right).$$As you found, the minimum is$$k=n-1,\,c=\frac{1}{\sqrt{n(n-1)}},\,\sum_ix_i^3=\frac{2-n}{\sqrt{n(n-1)}}.$$By contrast, the maximum is$$k=1,\,c=\frac{1}{\sqrt{n(n-1)}},\,\sum_ix_i^3=\frac{n-2}{\sqrt{n(n-1)}}.$$This is an important sanity check: $x_i\mapsto-x_i$ preserves the constraints and negates the objective, so the extrema should be of the form $\pm M$.
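A numeric spot-check of the closed-form extrema for one value of $n$ (a sketch; $n=5$ is arbitrary):

```python
import math

n = 5
c = 1 / math.sqrt(n * (n - 1))

# candidates from the answer: k = n-1 gives the minimum, k = 1 the maximum
x_min = [c] * (n - 1) + [-(n - 1) * c]
x_max = [(n - 1) * c] + [-c] * (n - 1)

for x, numer in [(x_min, 2 - n), (x_max, n - 2)]:
    assert abs(sum(x)) < 1e-12                       # constraint: sum = 0
    assert abs(sum(t * t for t in x) - 1) < 1e-12    # constraint: sum of squares = 1
    print(sum(t ** 3 for t in x), numer / math.sqrt(n * (n - 1)))
# both columns agree: ~ -0.6708 and ~ +0.6708 for n = 5
```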
H: Trying to understand the statement of Nakayama's lemma for coherent modules in Mumford's red book Here is the statement of a version of Nakayama's lemma in Mumford's red book. Let $X$ be a noetherian scheme, $F$ a coherent $O_X$-module and $x \in X$. If $U$ is a neighbourhood of $x$ and $a_1, \ldots, a_n \in \Gamma(U,F)$ have the property: the images $\overline{a_1}, \ldots, \overline{a_n}$ generate $F_x \otimes_{O_{X,x}} k(x)$, then there exists a neighbourhood $U_0 \subset U$ of $x$ such that $a_1, \ldots, a_n$ generate $F |_{U_0}$. There are two things I don't understand in this statement, which I haven't been able to deduce looking at the proof. -"$\overline{a_1}, \ldots, \overline{a_n}$ generate $F_x \otimes_{O_{X,x}} k(x)$" over $O_{X,x}$ or $k(x)$? -what does it mean by $a_1, \ldots, a_n$ generate $F |_{U_0}$? thank you. AI: The map $O_{X,x} \to k(x)$ is surjective, so generated over $O_{X,x}$ or $k(x)$ is the same thing. For the second question, it means that the images of $a_1, ..., a_n $ in $F_y$ are generators for this $O_{X,y}$-module for each $y\in U_0$.
H: $ a^2 <10^{\sqrt{a}}$ for $a\geq 2$. How to show $ a^2 <10^{\sqrt{a}}$ for $a\geq 2 $ and $ a \in \mathbb{N}$? Should I try considering a new function which is the difference and then differentiating it? I could solve it using two cases, when $ a \in [10^{2m}, 10^{2m+1})$ or $[10^{2m-1}, 10^{2m})$. Is there any other elementary way? Any help would be appreciated. Thanks in advance. AI: That inequality is equivalent to $\ln a < \sqrt{a} \frac{\ln 10}{2}$ because $t \to \ln t$ is strictly increasing. Define $f(a)=\ln a$ and $g(a)=\sqrt{a} \frac{\ln 10}{2}$ for $a \geq 2$. We want to show that $f(a)<g(a)$ for all $a \geq 2$. Noting that $f'(a)=\frac 1 a \leq \frac {1}{\sqrt{a}}\frac {\ln 10} {4}=g'(a)$ holds if and only if $\sqrt{a} \geq \frac{4}{\ln 10}$, define $\phi(a)=g(a)-f(a)$. Due to the fact that $\phi'(a) \geq 0$ if and only if $a \geq (\frac{4}{\ln 10})^2$, as we have observed, and that $\phi((\frac{4}{\ln 10})^2)=g((\frac{4}{\ln 10})^2)-f((\frac{4}{\ln 10})^2)=2(1+\ln (\frac {\ln 10}{4})) >0$, we have what we want: $\phi$ attains its minimum over $[2,\infty)$ at $a=(\frac{4}{\ln 10})^2\approx 3.02$, where it is positive, so $f(a)<g(a)$ for all $a \geq 2$.
H: Condition for an operator to be compact Suppose we have a Hilbert space $X$, a compact operator $T$, and an operator $S$ such that $TT^*-SS^*\geq 0$; then $S$ will be a compact operator. Using the condition I can see that $||Tx||\geq ||Sx|| ,\forall x\in X$, and we know that $T(B_X(0,1))$ is relatively compact, so if I can show that $S(B_X(0,1))\subset T(B_X(0,1))$ we will get that $S$ is compact. Now I can't seem to see why we would get that inclusion of the sets, so any hint is appreciated, thanks in advance. AI: Recall that a bounded linear operator is compact if and only if its adjoint is compact (Schauder's Theorem). I will prove that $S^*$ is compact. Let $x_n$ be a bounded sequence. We want to show that $S^*x_n$ has a convergent subsequence. To do this, note that $TT^* - SS^* \geq 0$ implies that $\|T^*(x_n - x_m)\| \geq \|S^*(x_n - x_m)\|$ for every $n, m > 0$. Now $T^*$ is compact so that $T^* x_n$ has a convergent subsequence, $T^* x_{n_k}$. In particular, $T^* x_{n_k}$ is Cauchy. This implies that $S^*x_{n_k}$ is Cauchy since $$\|S^* x_{n_k} - S^* x_{n_j}\| \leq \|T^* x_{n_k} - T^* x_{n_j}\|$$ and hence $S^*x_{n_k}$ converges. Hence $S^*$ is compact.
H: Proof of $\tan{x}>x$ when $x\in(0,\frac{\pi}{2})$ I have read Why $x<\tan{x}$ while $0<x<\frac{\pi}{2}$? If I want to get $\tan{x}\gt x$ instead of the weaker inequality $\tan{x} \ge x$, do I need only to show that $\tan{x} \gt x$ when $x\to 0$? Because from @David Mitra's picture, it is obvious to see $\tan{x}\gt x$ when $x$ is not near $0$. Since $$\lim_{x\to 0}\frac{\tan{x}}{x}=1,$$ for $$\varepsilon=\frac{1}{n} \;\;\text{ where }n\in\Bbb N,$$ we can find $\delta \gt 0 $ s.t. $$\forall x \in (0,0+\delta)$$ we have $$\frac{\tan{x}}{x}\gt1-\frac{1}{n}$$ Letting $n\to \infty$, we get $$\frac{\tan{x}}{x}\gt1$$ So $$\tan{x}\gt x \text{ when }x\in(0,\frac{\pi}{2})$$ Or does someone have a more analytical proof instead of a geometric proof for $x\in(0,\frac{\pi}{2})$, since I only proved the case where $x$ is near the origin? Thanks for helping. AI: It is a direct consequence of the Mean value theorem: for any $x\in \bigl(0,\frac\pi2\bigr)$, we have $$\frac{\tan x}x=\frac{\tan x-\tan 0}{x-0}=(\tan)'(\xi)=\frac1{\cos^2\xi}\quad\text{for some }0<\xi<x,$$ and on the interval $\bigl(0,\frac\pi2\bigr)$, $\;0<\cos\xi<1$, so $$\frac{\tan x}x=\frac1{\cos^2\xi}>1.$$
H: Finding $P(X>Y)$ where $X\sim U(0,2)$ and $Y\sim U(1,3)$ are independent I have the following problem: Two stochastic variables $X\sim U(0,2)$ and $Y\sim U(1,3)$ are independent. What is $P(X>Y)$? The answer is $\frac18$, but I don't know how to solve it. I did the following: I drew both distributions in one plot with height 1/2 --> (1/(2-0)) I see that the distributions have an overlap on the interval $[1,2]$, so if $X$ is going to be bigger than $Y$, it's going to be in that interval. So I know that the probability of $X$ being in that interval is $(2-1)\times\frac12 = 0.5$. I don't know how to proceed. Can I have some feedback? Ter AI: As commented, a picture is a good idea here. If you want a proof that does not depend on a picture, then the following might help. For independent $X,Y$ where $Y$ has a PDF we have the equalities: $$P\left(X>Y\right)=\int P\left(X>Y\mid Y=y\right)f_{Y}\left(y\right)dy=\int P\left(X>y\mid Y=y\right)f_{Y}\left(y\right)dy=$$$$\int P\left(X>y\right)f_{Y}\left(y\right)dy$$ Here the last equality rests on independence. In your case $f_Y(y)=\frac12$ on $[1,3]$, $P(X>y)=\frac{2-y}{2}$ for $1\le y\le 2$, and $P(X>y)=0$ for $y>2$, so $$P(X>Y)=\int_1^2\frac{2-y}{2}\cdot\frac12\,dy=\frac14\int_1^2(2-y)\,dy=\frac14\cdot\frac12=\frac18.$$
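A Monte Carlo check of the value $\frac18$ (a quick sketch; seed and trial count are arbitrary):

```python
import random

random.seed(0)
trials = 10 ** 6
hits = sum(random.uniform(0, 2) > random.uniform(1, 3) for _ in range(trials))
print(hits / trials)   # ~ 0.125 = 1/8
```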
H: Residue fields at points on $\mathbb{A}^n$ Let $k=\bar k$ be a field. I'm trying to "write down" the residue fields at various points on $\mathbb{A}^n=\operatorname{Spec} k[x_1,\cdots, x_n]$, but am having some trouble with the non-closed points. The definition I'm using is that the residue field at a point $x$ on an integral scheme $X$ is the residue field of the local ring $\mathcal{O}_x$. Thus, if $\mathfrak{p}$ is a point on the affine scheme $X=\operatorname{Spec} A$ then the residue field at $\mathfrak{p}$ is $A_\mathfrak{p}/\mathfrak{p}A_\mathfrak{p}$, which is isomorphic to the fraction field of $A/\mathfrak{p}$. On $\mathbb{A}^1=\operatorname{Spec} k[x]$ these fields are not hard to determine. The only points are the generic point $(0)$ and the closed points $(x-a)$ for $a\in k$. Clearly the residue field at the generic point (i.e., the function field) is $k(x)$ and the residue fields at closed points are all just $k$. Already for $\mathbb{A}^2$, though, I'm finding this a bit trickier. The points in this case are again $(0)$ (generic), $(x-a,y-b)$ for $a,b\in k$ (closed), and $(f)$ for $f\in k[x,y]$ irreducible (nonclosed). The function field at the generic point is clearly $k(x,y)$, and it's also not hard to see that the function fields at closed points are all $k$. But what do we get for the function field at a nonclosed point $(f)$? I think this should come out to be an algebraic extension of a transcendence degree 1 extension of $k$, but would like some clarification on this. My thoughts are as follows. Given $f\in k[x,y]$ irreducible, the function field, which we denote $k(f)$, will be the fraction field of $k[x,y]/(f)$. Setting $R=k[x]$ and $g(y)=f(x,y)\in R[y]$, we have $k[x,y]/(f)=R[y]/(g(y))$, which I believe is isomorphic to $R(\alpha)$, where $\alpha\in \overline{k(x)}$ satisfies $g(\alpha)=0$, i.e., is a root of a polynomial equation with coefficients in $k[x]$. Thus, the fraction field of $k[x,y]/(f)$ should be an algebraic extension of $k(x)$. Morally, this feels right to me in that the various function fields one obtains for $\mathbb A^2$ include transcendence degree 0,1,2 extensions of $k$. What makes me somewhat skeptical is the choice $R=k[x]$ was completely arbitrary. The same reasoning would also show that $k(f)$ is an algebraic extension of $k(y)$. In either case, $k(f)$ seems to be a trans. degree 1 extension of $k$, but the degrees $[k(f):k(x)]$ and $[k(f):k(y)]$ need not be equal, which I find a bit disturbing. Can someone tell me what's going on here? In the general case $\mathbb{A}^n$, what I expect then is that the function fields are again $k(x_1,\cdots, x_n)$ at the generic point, $k$ at the closed points, and ... transcendence degree $n-1$ extensions of $k$ at the other nonclosed points? AI: Let's take an example. Consider an elliptic curve, for argument's sake $$y^2=x^3+x$$ over $k=\Bbb C$. The residue field at its generic point is the fraction field of $$\frac{\Bbb C[x,y]}{(y^2-x^3-x)}.$$ This is $\Bbb C(x)[y]$ where $y$ is a square root of $x^3+x$, so a quadratic extension of $\Bbb C(x)$ and also it is $\Bbb C(y)[x]$ where $x$ is a root of the equation $x^3+x-y^2=0$. Thus the function field is a degree $3$ extension of $\Bbb C(y)$. This behaviour is quite typical. If you like consider the rational curve $y^2=x^3$: its function field is $\Bbb C(t)$ where $t=y/x$, $y=t^3$ and $x=t^2$. Then $|\Bbb C(t):\Bbb C(x)|=|\Bbb C(t):\Bbb C(t^2)|=2$ and $|\Bbb C(t):\Bbb C(y)|=|\Bbb C(t):\Bbb C(t^3)|=3$.
H: Convergence of $\int_0^1 x^p \ln^q \left(\frac{1}{x}\right)dx$ without using Gamma function Determine all values of $p$ and $q$ for which the integral $$\int_0^1 x^p \ln^q \left(\frac{1}{x}\right)dx$$ converges, without using the Gamma function. First attempt: Since $$\int_0^1 x^p \ln^q \left(\frac{1}{x}\right)dx \le \int_0^1x^{p-q}dx$$ the integral converges when $p-q>-1$. Second attempt: Substituting $x=1/t$ the integral will look like $$\int_1^{\infty}\frac{\ln^qt}{t^{p+2}}dt \le \int_1^\infty\frac{1}{t^{p-q+2}}dt$$ which converges when $p-q+2>1$ or $p-q>-1$. But the answer in my book says the integral converges when $p>-1$ and $q>-1$. Why is this? AI: Hint: The substitution $y=-\log x$ changes the integral to $\int_0^{\infty} e^{-(1+p)y} y^{q}dy$. $\int_0^{1} e^{-(1+p)y} y^{q}dy$ converges iff $q >-1$ and $\int_1^{\infty} e^{-(1+p)y} y^{q}dy$ converges iff $1+p >0$ or $p >-1$.
H: Does the heat equation have a unique solution with these mixed boundary conditions Does the heat equation $u_t - u_{xx} = 0$ on the unit square with $\forall 0 \leq x \leq 1: u(x,0)=0$, $\forall 0 \leq t \leq 1: u(0,t)=0$, $\forall 0 \leq t \leq 1: u_x(1,t)=0$ have a unique solution? Here's my attempt: Let $u, v$ be two solutions to the above IBVP. Let $w=u-v$. Then $w$ solves the IBVP $w_t - w_{xx}=0$ and $w=0$ on the boundary of the unit square (*). By the maximum principle this means that $w \leq 0$. Similarly $w \geq 0$, applying the maximum principle to $-w$. Hence $w=0$ and the solution is unique. (*) This seems wrong to me. $w$ hasn't been specified on the upper edge of the unit square, so $w$ need not be $0$ on the entire boundary of the unit square. So I suspect there is no unique solution. Also, what if instead we had the wave/Laplace equation with these same conditions? Can anyone help me out here? Thanks! AI: When you define $\omega$, you know $\omega(x,0)=0$, $\omega(0,t)=0$, and $\omega_x(1,t)=0$; a priori you cannot state $\omega(1,t)=0$, so you cannot conclude $\omega\equiv 0$ from here. The usual trick for uniqueness with Neumann (or mixed) boundary conditions is to use an energy method. We define an auxiliary functional $$ E = \frac{1}{2}\int_{0}^{1}u^2 \mathrm{d}x \geq 0, $$ and observe that if $u$ is a solution to your problem, this energy is dissipated in time: \begin{align} \frac{\mathrm{d}E}{\mathrm{d}t} = \frac{1}{2}\int_{0}^{1}\frac{\partial}{\partial t}u^2 \mathrm{d}x = \int_{0}^{1} uu_t \mathrm{d}x = \int_{0}^{1} uu_{xx} \mathrm{d}x = \left[uu_{x}\right]_{x=0}^{x=1} - \int_{0}^{1} (u_{x})^2 \mathrm{d}x \leq 0. \end{align} Now, you define $\omega=u-v$, and study what happens to its energy: $\omega$ satisfies the same homogeneous problem, its energy starts at $E(0)=\frac12\int_0^1 \omega(x,0)^2\,\mathrm{d}x=0$ and is non-increasing, and $E \geq 0$, so $E \equiv 0$. From this you conclude $\omega\equiv 0$, like you wanted, and uniqueness follows.
H: Trying to understand example, determine fx and fy I'm trying to learn 2nd order Taylor methods but I can't understand the example I have. This is an example I have from a book: $$y'=\frac{y}{2}+x$$ $$-1\le x \le 1 $$ $$y(-1)=1$$ How does one get $f_x = 1$ and $f_y = \frac12$? $$f'(x,y)= f_x+f_yf = 1+\frac{1}{2}\left(\frac{y}{2}+x\right) = 1+\frac{y}{4}+\frac{x}{2}$$ AI: I guess $f_x,f_y$ mean: $$f_x=\dfrac {\partial f}{ \partial x}=1$$ $$f_y=\dfrac {\partial f}{ \partial y}=\frac 12$$ where $f(x,y)=\dfrac y 2 +x$, so that you have: $$\dfrac {df(x,y)}{dx}=\dfrac {\partial f}{ \partial x}+\dfrac {\partial f}{ \partial y}\dfrac {dy}{dx}=f_x+f_yf$$ $$\dfrac {df(x,y)}{dx}=1+\dfrac y 4+\dfrac x 2$$
H: Inequality proof (perhaps inductive?) Came up with this on my own and although it seems true (due to Desmos), I was interested to see a proof of it. I tried an inductive approach myself but unfortunately couldn't come up with anything concrete (just by assuming the statement, proving the base case and fiddling with it). Prove that $$ (5^{k})! > 5^{k!} \ \forall \ k \in \mathbb{N} $$ AI: Check small cases first. For $k = 0$ we obtain $$ \left( 5^{k} \right)! = 1! = 1 \not> 5 = 5^{0!} = 5^{k!}, $$ so the claim already fails at $k = 0$. For $k = 1$ we obtain $$ \left( 5^{k} \right)! = 5! = 120 > 5 = 5^{1!} = 5^{k!}, $$ and for $k = 2$, $$ \left( 5^{k} \right)! = 25! > 25 = 5^{2} = 5^{2!} = 5^{k!}. $$ However, the inequality is not true for all $k$. Taking logarithms to base $5$, the claim reads $\log_5\left(\left(5^k\right)!\right) > k!$, and Stirling's formula with $N = 5^k$ gives $$ \log_5\left(\left(5^k\right)!\right) = \frac{N\ln N - N + O(\ln N)}{\ln 5} = k\,5^{k} - \frac{5^{k}}{\ln 5} + O(k). $$ Since $k!$ eventually outgrows $k\,5^k$ (a factorial grows faster than $k$ times any fixed exponential), the right-hand side $5^{k!}$ wins for all large $k$. Numerically, $\left(5^k\right)! > 5^{k!}$ holds exactly for $1 \leq k \leq 13$: it fails at $k = 0$ and again for every $k \geq 14$ (already $14! \approx 8.72\times 10^{10}$ exceeds $\log_5\left(\left(5^{14}\right)!\right) \approx 8.17\times 10^{10}$). So no induction can prove the statement as written; it has to be restricted to $1 \leq k \leq 13$, where it can be checked directly.
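A log-scale comparison in Python bears this out (a sketch using `math.lgamma`, so both sides are compared via natural logarithms):

```python
from math import lgamma, log

fact = 1                          # running value of k!
for k in range(17):
    if k > 0:
        fact *= k
    lhs = lgamma(5.0 ** k + 1)    # ln((5^k)!)
    rhs = fact * log(5)           # ln(5^(k!))
    print(k, lhs > rhs)
# prints True exactly for k = 1..13: the inequality fails at k = 0
# and again for every k >= 14
```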
H: Dimension of intersection of distinct subspaces Let $W_1, W_2, W_3$ be $3$ distinct subspaces of $\Bbb{R}$$^{10}$ such that each $W_i$ has dimension 9. Let $W = W_1 \cap W_2 \cap W_3$. Then which of the following can we conclude? $W$ may not be a subspace of $\Bbb{R}$$^{10}$ $\dim W \le 8$ $\dim W \ge 7$ $\dim W \le 3$ I know that first and fourth options are incorrect. I have even find examples where dimension of W is $7$ and $8$. But still from here, how can I conclude that the third option is correct. I know that dimension of $W$ should be less than or equal to $9$. It can't be $9$ since those subspaces are distinct. So second option will be correct. But how do I know that the dimension of $W$ Will be greater than or equal to $7$? AI: For any two subspaces $X$ and $Y$ of a vector space $V$ you have $$ \dim(X+Y) = \dim(X) + \dim(Y) - \dim(X\cap Y). $$ Starting with $W_1$ and $W_2$ this gives you $$ \dim(W_1\cap W_2) = \dim(W_1) + \dim(W_2) - \dim(W_1+W_2) = 9 + 9 - 10 = 8, $$ where $\dim(W_1+W_2)$ is $10$ since the two $9$-dimensional subspaces of $\mathbb R^{10}$ are distinct. Since $W_1\cap W_2\cap W_3$ is a subspace of $W_1\cap W_2$ we immediately conclude $$ \dim(W_1\cap W_2\cap W_3) \le 8. $$ Finally $$ \dim(W_1\cap W_2\cap W_3) = \underbrace{\dim(W_1\cap W_2)}_{{}=8} + \underbrace{\dim(W_3)}_{{}=9} - \dim((W_1\cap W_2) + W_3). $$ Since $\dim((W_1\cap W_2) + W_3)\le 10$ we can conclude $$ \dim(W_1\cap W_2\cap W_3) \ge 7. $$
H: Change of variables in a sum If we have the following function $$f(k) = \frac{1}{k-m} \sum_{j=k+1}^{m+n} \frac{1}{j-m}\ \text{where}\ k\in \{m,.....,m+n-1\}$$ how will the function look if we change the variables $i= j-m$ and $r=k-m$? I have tried to solve it but am still having trouble with the final result. First, since $j\in \{k+1,...m+n \}$ and $i = j-m$ it results that $$ \sum_{j=k+1}^{m+n} \frac{1}{j-m} = \sum_{i=k+1-m}^{n} \frac{1}{i}\ ?$$ Do we have now that $$f(k) = \frac{1}{k-m} \sum_{i=k+1-m}^{n} \frac{1}{i}\ \ ?$$ Also if next we have $r = k-m$ how will the sum change? $k\in \{m,.....,m+n-1\} \Rightarrow r\in \{0,.....,n-1\}$... Any help is much appreciated, thanks. AI: Given that $$f(k) = \frac{1}{k-m} \sum_{j=k+1}^{m+n} \frac{1}{j-m}, $$ where $$ k \in \{m,.....,m+n-1\}, $$ and given that $$ i= j-m $$ and $$ r=k-m, $$ we obtain $$ j = i+m $$ and $$ k = r+m, $$ with $$ r \in \{ 0, \ldots, n-1 \}, $$ and hence $$ f(k) = f(r+m) = \frac{1}{r} \sum_{i = k+1-m}^{n} \frac{1}{i} = \frac{1}{r} \sum_{i=r+1}^n \frac{1}{i}. $$ So both of your steps are correct: in the new variable $r = k-m \in \{0,\ldots,n-1\}$, the function reads $\frac{1}{r} \sum_{i=r+1}^n \frac{1}{i}$.
H: Why is the annual interest rate given if the principal isn't compounded annually? Back in middle school, we learned that if the interest rate given, $r$, is annual, it must be divided by the number of times compounded per year, $n$, to get the interest rate per period. I was reviewing compound interest for a standardised test and it got me thinking. Why is an "annual rate" given in the first place? The interest rate per period applies to a different principal every period, so you can't just find that rate by dividing the annual rate equally into $n$ periods, right? What does, say, an interest rate of 10% compounded quarterly even mean? Is it something just to confuse you? AI: I mean, you can. You just have to take powers. Using your example: you have an annual interest rate of 10%, compounded quarterly. Suppose your principal is $100$; then, after one year, you have: $$100 \times (1 + \frac{10}{4\cdot 100})^4 = 100 \times (1 + 0.025)^4 = 110.38.$$ This is standard practice in finance, and you should get used to it, so that you don't get swindled. Mess around with some numbers. At the end, the definitions are just long-standing conventions in banking.
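In code, the convention is just this one formula (a minimal sketch of the example above):

```python
principal, annual_rate, periods_per_year, years = 100.0, 0.10, 4, 1

balance = principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)
print(round(balance, 2))  # 110.38

# the "effective annual rate" the quarterly compounding actually delivers:
print((1 + annual_rate / periods_per_year) ** periods_per_year - 1)  # ~0.10381
```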
H: Show equivalence with BPI: every Boolean algebra has a prime ideal I want to prove that the following statements are equivalent: (1) Every non-trivial Boolean algebra has a prime ideal. (2) In every non-trivial Boolean algebra, every ideal is contained in a prime ideal. (3) For every set $X$, every filter on $X$ is contained in an ultrafilter on $X$. I only know about the definitions of all these concepts, and nothing more. Is there an elementary proof I can understand? A reference is fine. I checked Herrlich's book "The axiom of choice" but the proofs there used Lattice theory, which I don't know. AI: (1) to (2) is simple. If $B$ is a Boolean algebra, and $I$ is an ideal, then $B/I$ is a Boolean algebra, so it contains a prime ideal $J$, but we can take its pullback to $B$, and it will be a prime ideal in $B$ containing $I$. (2) to (3) is also simple. If $X$ is a set, then $\mathcal P(X)$ is a Boolean algebra, and a filter has a dual ideal, which is then contained by a prime ideal, whose dual is an ultrafilter. Finally, (3) to (1) is the tricky part, and for the life of me I do not recall a direct proof. This part is proved in Herrlich's book without referring to lattice theory, by topological means. You can find a proof through the completeness theorem for first-order logic in Jech's "The Axiom of Choice" in Chapter 2. You can find another proof through analysis in Eric S. Schechter's "Handbook of Analysis and its Foundations" in Chapter 13.
H: How to solve $T(n) = 4T(n-1) - 3T(n-2) +1$? Which method should I use and how can I solve this recurrence to find the complexity (order) of the recurrence relation? The equation is: $T(n) = 4T(n-1) - 3T(n-2) +1$ Find $O(T(n))$. AI: We have $$T(n) - 4T(n-1) + 3T(n-2) = 1. \qquad \cdots (1)$$ We solve it like we solve differential equations, by summing a complementary function $C(n)$ and a particular solution $P(n)$, where $C$ solves the homogeneous equation $$T(n) - 4T(n-1) + 3T(n-2) = 0, \qquad \cdots (2)$$ and $P$ is any solution to (1). Can you continue? Do you know what an eigenfunction of (2) is? The characteristic equation is $x^2-4x+3=0$, so $(x-3)(x-1)=0$. Thus $C(n) = A\cdot 3^n + B\cdot 1^n$, where $A$ and $B$ are constants, to be determined by initial data (such as $T(0)$, $T(1)$). Now, we may try to find a particular solution. We usually start by trying constants. However, constants can only solve the homogeneous equation in this case, so we proceed to try linear functions $P(n) = c\cdot n$. According to a helpful commenter Zarrax, we get $ c = -1/2$. However, you only care about the complexity (order) of the solution, so the order is $O(3^n)$ (indeed $\Theta(3^n)$ whenever the initial data give $A\neq 0$).
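If you want to see the asymptotics concretely, here is a short Python check; the initial values $T(0)=T(1)=0$ are my own choice (any other choice only changes the constants $A$ and $B$ in $T(n)=A\cdot 3^n+B-\tfrac n2$):

```python
def T(n, memo={0: 0, 1: 0}):
    # iterate T(n) = 4 T(n-1) - 3 T(n-2) + 1 with memoisation
    if n not in memo:
        memo[n] = 4 * T(n - 1) - 3 * T(n - 2) + 1
    return memo[n]

# with T(0) = T(1) = 0, solving for the constants gives A = 1/4, B = -1/4
closed = lambda n: 3**n / 4 - 1 / 4 - n / 2
for n in range(12):
    assert abs(T(n) - closed(n)) < 1e-6
print("closed form matches; the 3**n term dominates, so T(n) = O(3**n)")
```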
H: Jack d'Aurizio's exercise on Chebyshev polynomials I am working through Jack D'Aurizio's "Superior Mathematics from an Elementary point of view", and I found (Lemma 61) the following lemma: $\sum_{k=1}^{n-1}\frac{1}{\sin^2(\pi k/n)}=(n^2-1)/3$. He does not provide a proof, but says that it follows by considering the roots of the Chebyshev polynomial $U_n$ or $T_n$. I know that this is about relating the roots to the coefficients, but what bothers me is that the $\sin$ terms (closely related to the roots) appear in the denominator; I can solve the other problems in Lemma 61. Could someone point out (as a hint) how to reformulate the sum in a more manageable way? Thanks! AI: Hint Recall that if $p(x)=x^n+p_1 x^{n-1} + \cdots + p_{n-1}x + p_n$ and $(r_i)_{i=0}^{n-1}$ are the roots of $p$, then $$\sum_{i=0}^{n-1} \frac{1}{r_i}= -\frac{p_{n-1}}{p_n}$$ Of course provided that none of the roots is $0$, that is $p_n \neq 0$.
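While working toward the Chebyshev argument, it never hurts to confirm the target identity numerically first; a throwaway Python check:

```python
import math

# check sum_{k=1}^{n-1} 1/sin^2(pi k / n) == (n^2 - 1)/3 for small n
for n in range(2, 12):
    s = sum(1.0 / math.sin(math.pi * k / n) ** 2 for k in range(1, n))
    assert abs(s - (n * n - 1) / 3) < 1e-9, (n, s)
print("identity holds numerically for n = 2..11")
```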
H: Evaluate the limit using L'Hôpital's rule $$\lim_{x\to 0} (\csc x - \frac 1x)$$ I have tried using L'Hôpital's rule on it in 3 successive differentiations and haven't been able to come to a solid conclusion. The denominator just keeps getting longer and harder to differentiate while the numerator keeps switching between $\sin x$ and $\cos x$. I believe there might be a simpler approach but unfortunately I can't seem to get there. AI: The first step should be to find a common denominator: $$\lim_{x \to 0} \frac{x-\sin{x}}{x\sin{x}}$$ Then apply L'Hôpital's rule: $$\lim_{x \to 0} \frac{1-\cos{x}}{\sin{x}+x\cos{x}}$$ Apply L'Hôpital's rule again: $$\lim_{x \to 0} \frac{\sin{x}}{\cos{x}+\cos{x}-x\sin{x}}=0$$
H: Find all real values of $m$ such that all the roots of $f(x)=x^3-(m+2)x^2+(m^2+1)x-1$ are real I have the following polynomial with real coefficients: $$f(x)=x^3-(m+2)x^2+(m^2+1)x-1$$ I have to find all real $m$'s so that all of the roots of $f$ are real. Trying to guess a root didn't get me anywhere. I computed $x_1^2+x_2^2+x_3^2$ using Vieta's relations to be $-(m-2)^2+6$. This has to be non-negative if the roots are real, so $m\in[-\sqrt6+2, \sqrt6+2]$. I tried using the derivative of $f$ and Rolle's theorem, but the calculations get complicated quite fast. I managed to prove that $m$ has to be somewhere in the interval $(-\sqrt\frac32+1, \sqrt\frac32+1)$, though I can't guarantee that this is correct. I could continue this way and I'll probably reach a solution sooner or later, but I hope there's a much more elegant solution that I've missed. Thanks for your help! AI: I shall assume that we want three different real roots Consider $$f(x)=x^3-(m+2)x^2+(m^2+1)x-1$$ The first condition is that $$f'(x)=3x^2-2(m+2)x+(m^2+1)$$ has two real roots, which are $$x_\pm=\frac{1}{3} \left(m+2\pm\sqrt{-2 m^2+4 m+1}\right)$$ This gives the first condition $$-2 m^2+4 m+1 > 0$$ Now, you need that $$f(x_-) \times f(x_+) <0$$ that is to say $$3 m^6-4 m^5+6 m^4-22 m^3-9 m^2+26 m+23 < 0$$ which cannot be solved in closed form. Numerical calculations give $$1.558 < m < 1.756 $$
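The numerical window can be reproduced with a few lines of Python (a brute-force scan, so it is a sanity check rather than a proof; the tolerance below is my choice):

```python
import numpy as np

def all_roots_real(m, tol=1e-7):
    # roots of x^3 - (m+2) x^2 + (m^2+1) x - 1
    roots = np.roots([1, -(m + 2), m * m + 1, -1])
    return bool(np.all(np.abs(roots.imag) < tol))

ms = np.linspace(1.4, 1.9, 5001)
good = [m for m in ms if all_roots_real(m)]
print(min(good), max(good))  # roughly 1.558 ... 1.756, matching the answer
```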
H: Prove that there exists a positive integer $k$ such that $k2^n + 1$ is composite for every positive integer $n$. Prove that there exists a positive integer $k$ such that $k2^n + 1$ is composite for every positive integer $n$. (Hint: Consider the congruence class of $n$ modulo 24 and apply the Chinese Remainder Theorem.) I am struggling with this problem. I have not made any meaningful progress on it. Most of my time was spent on trying to understand the hint. I find it baffling that I should be concerned with $n \mod 24$, which is the exponent. Does anyone have any hints? Or can someone clarify the hint a bit more? I prefer hints and guiding questions to complete solutions. Thank you for your time. AI: The idea here is to find a covering set $\{ (a_i, b_i) \}$ of the integers, such that every integer $n\equiv a_i \pmod{b_i}$ for at least one pair. Then, for any prime $p_i$ that divides $2^{b_i} - 1$, if $k \equiv - 2 ^ { b_i-a_i } \pmod{p_i}$, then $ p_i \mid k 2^n + 1 $. If $k$ is large enough relative to $p_i$ (e.g. $k> p_i$), then this guarantees the term is composite. Requirements: primes $p_i$ are distinct, in order to cleanly apply CRT to get $k$ -> We could allow $p_i$ to not be distinct, and then deal with it. Or we could make $p_i$ be distinct and have a much easier path. Your choice. $\sum \frac{ 1}{ b_i } \geq 1$ so that we can have a hope of covering the integers. -> This is a necessary, and may not be sufficient, condition for a covering set. It is a simple enough first check that it's worthwhile to list out separately. $\{(a_i, b_i)\}$ is a covering set of the integers. Note: We do not require $b_i$ to be distinct, just that the corresponding $p_i$ must work. With large enough $b_i$, it could contribute multiple $p_i$ and so we could use distinct values of $a_i$. If the prime $p$ divides $ 2^b - 1$, we could have $(a, 2b), (a+b, 2b)$ that use the same prime $p$, but in which case we should reduce it to $(a, b)$. Let $B= \operatorname{lcm} (b_1, b_2, \ldots)$. We would want $B$ to have as many divisors as possible, so focusing on the terms $ 2^a 3^b 5^c \ldots$ makes sense. The requirements make it such that "too small" $B$ are unlikely to work, so we'd have to test up to larger values. But, for now, let's just work through small $B$ so that we can see these in play: With $B = 6$, we have $ 2^2 - 1 = 3, 2^3 - 1 = 7, 2 ^6 - 1 = 63 = 3^2 \times 7 $, which doesn't give us distinct primes for requirement 1, so we have to drop one of these. Then, there is no covering set of the form $ (a_1, 2), (a_2, 3)$ since $ \frac{1}{2} + \frac{1}{3} < 1$, violating requirement 2. In particular, this tells us that if $ 6 \mid b$, then we'd have to drop (at least) one of these values. With $ B = 10$, we have $ 2^2 - 1 = 3, 2^5 - 1 = 31, 2^{10} - 1 = 3 \times 11 \times 31$, so we can get our distinct primes, but again $ \frac{1}{2} + \frac{1}{5} + \frac{1}{10} < 1 $ violates requirement 2. With $B = 8, 9, 12, 15, 16, 20$, it is left as an exercise to the reader to show why they work or do not work. (My guess is that they do not, since otherwise the hint/solution would have used them, but you never know.) With $ B = 24$, $ \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{6} + \frac{1}{8} + \frac{1}{12} + \frac{1}{24} = \frac{3}{2}$, so we could drop some residue classes (e.g. 6 as indicated above) to force the distinct primes condition. Work this out yourself, and determine your value of $k$. Now pick some other $B = 2^a 5 ^c $ and try to make this work.
H: Prove that a polynomial has degree two Let $f(x)$ be a polynomial that satisfies $x\cdot f(x-2)=(x-4)\cdot f(x)$. Prove that $f(x)$ has degree $2$. What I tried was substituting $f(x)$ with $ax^2+bx+c$, and I ended up with $4ax+2bx+4c=0$, which didn't help much. AI: Substitute $x=0$, then $x=4$, to get that $f(0) = f(2) = 0$. Hence $f(x) = x(x-2)h(x)$ for some polynomial $h$. Plugging this back into the functional equation gives $x(x-2)(x-4)h(x-2)=(x-4)x(x-2)h(x)$, so $h(x-2)=h(x)$ for all but finitely many $x$, hence identically. A periodic polynomial is constant, so $h$ is a (nonzero) constant and $f$ has degree $2$.
H: Prove $\lim\limits_{n\to \infty}\frac{n!}{(n-k)!(n-a_n)^k}=1$ Let $a_n\to \lambda\in \mathbb{R}$ such that $\frac{a_n}{n}\to 0$ and let $k\in \mathbb{N_0}$. I have to prove that $$\lim\limits_{n\to \infty}\frac{n!}{(n-k)!(n-a_n)^k}=1$$ I tried this: $$\frac{n!}{(n-k)!(n-a_n)^k}=\frac{n!}{\sum_{i=0}^{k}(n-k)!\binom{k}{i}n^ia_n^{k-i}}=\frac{\binom{n}{k}}{\sum_{i=0}^{k}\frac{n^ia_n^{k-i}}{i!(k-i)!}}$$ But it doesn't help much. Any help is appreciated. AI: Consider $$S_n=\frac{n!}{(n-k)!(n-a_n)^k}$$ Take logarithms $$\log(S_n)=\log(n!)-\log((n-k)!)-k\Big[\log(n)+\log \left(1-\frac{a_n}{n}\right)\Big]$$ Using Stirling approximation twice and continuing with Taylor expansions $$\log(S_n)=\frac{k (2 a_n-k+1)}{2 n}+O\left(\frac{1}{n^2}\right)$$ So, $\log(S_n)\to 0$ and then $S_n \to 1$.
H: Proof verification: $\mathbb{E}[X] = \int_0^\infty P(X > \alpha) d\alpha$ I want to show that given a probability density $P: \mathbb R^+ \rightarrow [0, 1]$, its expectation obeys the identity: $\mathbb{E}[X] = \int_0^\infty P(X > \alpha) d\alpha$. We assume that the density $P$ is defined on $[0, u]$. We will get the final version by setting $u \rightarrow \infty$ (nit 1) Begin by defining the cumulative density $C(x) = \int_0^x P(\alpha) d\alpha$. $C'(x) = P(x) - P(0)$ from fundamental theorem of calculus. We need to assume that $P(0) = 0$ so that $C'(x) = P(x)$. (nit 2). This gives us $dC(x) = P(x)$. Now compute expectation: \begin{align*} &\mathbb E[X] = \int_0^u xP(x)dx \quad [UdV]\\ &\left[ \text{use integration by parts: }\int U dV = UV - \int V dU \right] \\ &= [xC(x)]_0^u - \int_0^u C(x) \cdot 1 dx \\ &= uC(u) - \int_0^u C(x)dx \\ &\text{[$C(u) = 1$ since $u$ was upper bound of distribution]} \\ &= u - \int_{0}^uC(x) dx \\ &= \int_0^u 1 dx - \int_{0}^uC(x) dx \\ &= \int_0^u [1 - C(x)] dx \\ &= \int_0^u [1 - P(X \leq x)] dx \\ &= \int_0^u P(X > x) dx \\ \blacksquare \end{align*} Set $u \rightarrow \infty$ to get the final desired expression. Is this proof watertight? I'm nervous about (i) first proving it for finite $u$ and then setting the limit; (ii) the assumption that $P(0) = 0$. I believe (i) is all right since that's the definition of integral with limit infinity. As for (ii), I believe this is also okay since we are assuming something about the distribution over a set of measure zero (a single point $0$). Still, I'm nervous, so a proof verification would be very appreciated. AI: You can fix your proof just by noticing that $$ \int_{0}^\infty tf_X(t) \mathop{}\!d t=\lim_{n\to\infty}\int_{0}^n tf_X(t) \mathop{}\!d t $$ from the monotone convergence theorem. Hence, following your work, you get that $$ \mathrm{E}[X]=\lim_{n\to\infty}\int_{0}^n(F_X(n)-F_X(t))\mathop{}\!d t=\int_{0}^\infty (1-F_X(t))\mathop{}\!d t $$ where we have used again the monotone convergence theorem, as $\mathbf{1}_{[0,n]}(t)(F_X(n)-F_X(t))$ increases to $1-F_X(t)$ as $n\to\infty $.
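The identity is also easy to spot-check numerically for a concrete distribution; for instance, for $X\sim\text{Exponential}(\text{rate}=2)$ we have $\mathbb E[X]=\tfrac12$ and $P(X>t)=e^{-2t}$ (a sketch, using scipy only for the quadrature):

```python
import numpy as np
from scipy import integrate

# integrate the tail probability P(X > t) = exp(-2t) over [0, infinity)
tail_integral, _ = integrate.quad(lambda t: np.exp(-2 * t), 0, np.inf)
print(tail_integral)  # ~0.5, which equals E[X] for the Exponential(2) distribution
```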
H: Prove $\sum_{n=1}^{\infty}|x_n y_n|^p\leq\left(\sum_{n=1}^{\infty}|x_n |^p\right)\left(\sum_{n=1}^{\infty}| y_n|^p\right)$ for $1<p<\infty$ Suppose that $(x_n)_{n=1}^{\infty}$ and $(y_n)_{n=1}^{\infty}$ are in $\ell_p$ for some $1<p<\infty$. Prove that $$\sum_{n=1}^{\infty}|x_n y_n|^p\leq\left(\sum_{n=1}^{\infty}|x_n |^p\right)\left(\sum_{n=1}^{\infty}| y_n|^p\right)$$ AI: Let $w=x/\|x\|_p$ (assuming $x\neq 0$; the case $x=0$ is trivial). Then $\|w\|_p=1$, and $|w_n|\leq1$ for all $n$, so $|w_n|^p\leq 1$. We have $$ \sum_n|x_ny_n|^p=\|x\|_p^p\,\sum_n|w_ny_n|^p\leq\|x\|_p^p\sum_n|y_n|^p=\Big(\sum_n|x_n|^p\Big)\Big(\sum_n|y_n|^p\Big). $$
H: Using a determinant to calculate the area of a triangle $$\frac 12\left|\begin{matrix} -\frac{(\sqrt 2+1)}{2} &\frac{1-\sqrt 2}{2} & 1 \\ 0 & 0 & 1 \\ -1 & 0 & 1 \end{matrix}\right| $$ With this we calculate the area of a triangle whose first vertex is the point with coordinates $$x=-\frac{\sqrt 2+1}{2} \quad\text{and}\quad y=\frac{1-\sqrt 2}{2}.$$ The other vertices of the triangle are the $ x$-intercepts of the lines, namely $(0,0)$ and $(-1,0)$. So the area is simply $$\frac 12\left|\begin{matrix} -\frac{(\sqrt 2+1)}{2} &\frac{1-\sqrt 2}{2} & 1 \\ 0 & 0 & 1 \\ -1 & 0 & 1 \end{matrix}\right| $$ So my question is: why does this determinant compute the area? I already have experience with determinants but never to calculate areas. AI: In two dimensions it's simpler: let $\vec a=(a_x,a_y)$ and $\vec b=(b_x,b_y)$; then the area of the parallelogram spanned by $\vec a$ and $\vec b$ is just the absolute value of the determinant of these vectors, see https://de.wikipedia.org/wiki/Parallelogramm#Beweis_der_Flächenformel_für_ein_Parallelogramm. The triangle on $\vec a$ and $\vec b$ is half of that parallelogram, which explains the factor $\frac12$. Your $3\times3$ determinant with a column of ones reduces to exactly this: subtracting the second row from the first and third rows and expanding along the last column leaves (up to sign) the $2\times2$ determinant of the two edge vectors emanating from the second vertex.
H: Show the inequality $ |f(x)-p(x)| \leq h^{n+1}\cdot \frac{\|f^{(n+1)}\|_{\infty}}{4(n+1)}$ Let $ f \in C^{n+1}([a,b])$ and let $p \in P_{n}$ be the interpolating polynomial. The support points $ x_{i} = a +ih ,\ i =0,\ldots,n$, are equidistant with $ h \in \mathbb{R} $ chosen so that $ x_{n}=b$. Show $ |f(x)-p(x)| \leq h^{n+1}\cdot \frac{\|f^{(n+1)}\|_{\infty}}{4(n+1)}\ \forall x \in [a,b]$. My first idea was to show this by induction, but I don't know how to start. In the lecture we had a corollary stating that the interpolation error satisfies $ |f(x)-p(x)| \leq |\omega (x)|\cdot \max_{\xi \in I}\frac{|f^{(n+1)}(\xi)|}{(n+1)!} ,\ x \in I=[a,b]$, with $\omega (x) = \prod_{j=0}^n (x-x_{j})$. Maybe this could be useful? AI: Use the error formula for polynomial interpolation $$ |f(x)-p(x)|\le\frac{|f^{(n+1)}(\xi)|}{(n+1)!}\prod_{k=0}^n|x-x_k| $$ for some $\xi$ inside the interval spanned by the $x_k$ and $x$, together with the standard bound $\prod_{k=0}^n|x-x_k|\le \frac{n!}{4}\,h^{n+1}$ for equidistant nodes (the two factors from the subinterval containing $x$ contribute at most $h^2/4$, the remaining ones at most $n!\,h^{n-1}$). See There is a way to determine error of an interpolation outside of the given range, Error term in polynomial interpolation of non-differentiable function, or similar interpolation error using higher derivatives
H: Understanding definition of derivative of a scalar field Let $f$ be a scalar field, i.e., $f:S\subseteq\mathbb R^n \to \mathbb R$ is defined on a set $S\subseteq \mathbb R^n$. Let $B(a,r)=\{x\in S:\|x-a\|\lt r\}$ be an $n$-ball inside $S$. Let $v\in S$ be a vector such that $\|v\|\lt r$, so that $a+v\in B(a,r)$. Then, $f$ is said to be differentiable at $a$ if $\exists$ a linear transformation $T_a:\mathbb R^n\to \mathbb R$ and a scalar function $E(a,v)$ such that: $f(a+v)-f(a)=T_a(v)+\|v\|E(a,v)$ and $E(a,v)\to 0$ as $\|v\|\to 0$. The linear transformation $T_a$ is called the total derivative of $f$ at $a$. What I don't understand is why it has to be $\mathbf{\|v\|\lt r}$ in "Let $v\in S$ be a vector such that $\mathbf{\|v\|\lt r}$"? $a+v\in B(a,r)$ is understandable as we want to define $f(a+v)$, but how does $\|v\|\lt r$ guarantee that $a+v\in B(a,r)$? Does $S$ have to contain the $0$ vector so that we can take the limit $\|v\|\to 0$? Please guide. Thanks for your time. AI: Recall that $B(a,r) = \{ x \in \mathbb{R}^n : \|x-a\| < r \},$ i.e. $x \in B(a,r)$ iff $\|x-a\| < r.$ If $\|v\|<r$ then $\|(a+v)-a\| = \|v\| < r$ so $a+v \in B(a,r).$
H: Question regarding proof of a limit which equals e (the compound interest one). To prove the limit is $e$ you do the following $$ L = \lim_{n \to \infty} \left(1 + \frac{1}{n} \right)^n $$ \begin{align} \ln L &= \lim_{n \to \infty} \ln \left(1 + \frac{1}{n} \right)^n \\ &= \lim_{n \to \infty} n \ln \left(1 + \frac{1}{n} \right) \\ &= \lim_{n \to \infty} \frac{\ln \left(1 + \frac{1}{n} \right)}{1/n}, \end{align} which you can evaluate with L'Hôpital's rule (take the derivative of top and bottom, since both go towards 0): \begin{align} \ln L &= \lim_{n \to \infty} \frac{\ln \left(1 + \frac{1}{n} \right)}{1/n} \\ &= \lim_{n \to \infty} \frac{\frac{1}{1 + \frac{1}{n}}\left(\frac{-1}{n^2}\right)} {\frac{-1}{n^2}}\\ &= \lim_{n \to \infty} \frac{1}{1 + \frac{1}{n}}\\ &= 1. \end{align} Since the natural log of your limit is $1$, the limit itself must be $e$. I can't understand the second step, in which we take the log of both sides and switch the limit and $\ln$. The argument for doing such a thing is that the limit exists and the function is continuous (log is continuous), but how do we know the limit exists? So don't we need to prove that $$ L = \lim_{n \to \infty} \left(1 + \frac{1}{n} \right)^n $$ exists before giving the above proof (and how do we prove it exists), or am I missing something? AI: The sequence $\left(1+\frac1n\right)^n$ is increasing and bounded above (by $4$); both facts can be proved with the binomial theorem or Bernoulli's inequality. Therefore, it converges. The remaining problem is therefore to find its limit.
H: Need help understanding step in the proof of multivariable chain rule Theorem: Let $k\in\mathbb{N}$, $x_{0},x\in A\subseteq\mathbb{R}^{n}$, $x_{0}\neq x$ and $f:A\to\mathbb{R}$ satisfy that $\partial^{\alpha}f$ exists and is differentiable on $L=\{(1-t)x_{0}+tx\in\mathbb{R}^{n}\mid t\in[0,1]\}$ for $\lvert\alpha\rvert\leq k$. Then there exists $y\in\{(1-t)x_{0}+tx\in\mathbb{R}^{n}\mid t\in(0,1)\}$ such that: \begin{equation} f(x)=\hspace{-3pt}\sum_{\lvert\alpha\rvert\leq k}\frac{\partial^{\alpha}f(x_{0})}{\alpha!}(x-x_{0})^{\alpha}\hspace{1pt}+\hspace{-6pt}\sum_{\lvert\alpha\rvert=k+1}\hspace{-6pt}\frac{\partial^{\alpha}f(y)}{\alpha!}(x-x_{0})^{\alpha}. \end{equation} Proof: From the assumptions it follows that $\partial^{\alpha}f$ is differentiable on $L$ for $\lvert\alpha\rvert\leq k$, and thus $L\subseteq A$ and $x_{0},x\in A$ are interior points. Moreover, there exists $r>0$ such that $F:(-r,1+r)\to\mathbb{R}$ is well defined by $F(t)=f\left((1-t)x_{0}+tx\right)$ and $k+1$-times differentiable on $[0,1]$. (...) My question: What ensures that $F$ is $k+1$-times differentiable on $[0,1]$? AI: You write that $\partial^\alpha f$ exists and is differentiable on $L$ for $|\alpha| \leq k$. Does that not mean that $f$ is differentiable $k$ times, and then THAT derivative is differentiable once more on $L$? Since $f$ is then differentiable $k+1$ times, $F$ is too. Sorry if I misunderstood in my answer; I cannot leave comments yet as my reputation is too low.
H: Can any undirected connected graph (UCG) with $N$ cycles be decomposed as 2 UCG with $N-1$ cycles? Consider any (arbitrary) undirected connected graph $\mathcal{G}_{AB} = (V,E_{AB})$ which has $N$ cycles, where $V$ is the set of vertices and $E_{AB}$ the set of edges. I'm wondering if it is always possible to decompose $\mathcal{G}_{AB}$ as the superposition (union?) of two undirected connected graphs $\mathcal{G}_{A}=(V,E_{A})$ and $\mathcal{G}_B=(V,E_{B})$ which both have $N-1$ cycles? Maybe it's redundant to say it (since all graphs are connected), but note that $\mathcal{G}_{AB},\mathcal{G}_A,\mathcal{G}_B$ all share the same vertex set $V$. If the answer is no: is it possible to decompose $\mathcal{G}_{AB}$ with $\mathcal{G}_A, \mathcal{G}_B$ with $N_A,N_B<N$ cycles respectively? Thanks in advance! EDIT: By decomposition, I don't mean that $E_A$ and $E_B$ are disjoint, as in the usual sense. I mean that $E_A,E_B$ may have non-empty intersection and that $E_{AB}=E_A\cup E_B$. AI: Assuming that by the number of cycles you mean the dimension of the cycle space: for a connected graph, that dimension is just $m-n+1$ (source: Wikipedia), where $m$ is the number of edges and $n$ is the number of vertices. If $G$ is a connected graph with $m-n+1>0$, then it has at least one cycle. Pick two edges $e, e'$ on that cycle; let $G_A = G - e$ and $G_B = G - e'$. Then $G = G_A \cup G_B$ and each of them has a cycle space of dimension one less than in $G$.
H: Can a limit have 2 values? The limit of a function is said to have only one value. But say I have the function $$\lim_{l\to \infty}\left(1+\frac 1{{(1+x^2)}^l}\right)$$ Is this limit not defined, or is it a function of $x$? AI: There are several related but separate notions. If you consider $x$ as standing for some fixed (but arbitrary) real number then you can consider the question of what the limit of $$\lim_{l\to \infty}\left(1+\frac 1{{(1+x^2)}^l}\right)$$ is for this one fixed $x$. The limit, and whether it exists, will then usually depend on $x$; you could denote it $L_x$ if it exists. (If it exists it is unique for a fixed $x$, for example for $x=2$ there is just one limit; in your context, real numbers, a limit is always unique. There is a more general theory of spaces, called topological spaces, where one can consider limits and in some of those limits are not unique. But that's several years beyond what seems to be your current level.) You can then define a function, $L$, given by $x \mapsto L_x$ for each $x$ for which the limit exists. This $L$ then can be seen as the limit (function) of the functions $f_l(x) = \left(1+\frac 1{{(1+x^2)}^l}\right)$ as $l \to \infty$. However, there are different notions of the limit of a function. The one above is that of pointwise limit. In your example: for $x\neq 0$ one has $(1+x^2)^l\to\infty$, so $L_x=1$, while for $x=0$ the expression equals $2$ for every $l$, so $L_0=2$. Each limit is a single well-defined number, but the limit function $L$ depends on $x$ (and here is discontinuous at $0$).
H: How do you prove that $\int_0^\infty \frac{\sin(2x)}{1-e^{2\pi x}} dx = \frac{1}{2-2e^2}$? I know the following result thanks to the technique "Integral Milking": $$\int_0^\infty \frac{\sin(2x)}{1-e^{2\pi x}} dx = \frac{1}{2-2e^2}$$ So I have a proof (I might list it here later, if it turns out this question seems very hard to solve) of the result, but I wouldn't be able to solve it if I started from the integral. I tried a few things, e.g. expanding and substitution, but I didn't get anywhere. WolframAlpha doesn't have the closed form, but you can check numerically if you want. How would you solve the integral without knowing the result? AI: Divide the numerator and denominator by $e^{2\pi x}$: $$I=-\int_0^{\infty} \frac{e^{-2\pi x} \sin{(2x)}}{1-e^{-2\pi x}} \; dx$$ $$I=-\int_0^{\infty} \sum_{n=1}^{\infty} e^{-2\pi x n} \sin{(2x)} \; dx$$ By Fubini's theorem (justified since $|\sin(2x)|\le 2x$ makes the double sum absolutely convergent) we can interchange the summation and integral: $$I=-\sum_{n=1}^{\infty} \int_0^{\infty} e^{-2\pi x n} \sin{(2x)} \; dx$$ Then, use integration by parts (or the standard formula $\int_0^\infty e^{-ax}\sin(bx)\,dx=\frac{b}{a^2+b^2}$): $$I=-\sum_{n=1}^{\infty} \frac{1}{2 \pi^2 n^2+2}$$ Using the classical expansion $\sum_{n=1}^{\infty}\frac{1}{n^2+a^2}=\frac{\pi\coth(\pi a)}{2a}-\frac{1}{2a^2}$ with $a=\frac1\pi$: $$I=-\frac{1}{4} \left( \coth{1}-1\right)$$ $$I=\frac{1}{2-2e^2}$$
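A quick numerical cross-check never hurts with closed forms like this; a scipy sketch (the rewrite inside `integrand` is the same "divide by $e^{2\pi x}$" trick, which also avoids overflow for large $x$):

```python
import numpy as np
from scipy import integrate

def integrand(x):
    if x == 0:
        return -1 / np.pi  # removable singularity: sin(2x)/(1 - e^{2 pi x}) -> -1/pi
    e = np.exp(-2 * np.pi * x)
    return -e * np.sin(2 * x) / (1 - e)

value, _ = integrate.quad(integrand, 0, np.inf)
print(value, 1 / (2 - 2 * np.e**2))  # both ~ -0.0783
```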
H: Let $G$ and $X$ be groups with a surjective homomorphism $\phi : G \to X $. Show that if $H \trianglelefteq G$ then $\phi(H) \trianglelefteq X$ I proceeded as follows: To show $\forall \bar{g} \in G, \bar{g^{-1}} H \bar{g} \subset H $ Let $$z \in \bar{g^{-1}} H \bar{g} \\ z=\bar{g^{-1}}h\bar{g} \ , \exists h\in H \\ $$ Since $\phi$ is surjective, $ \forall \bar{g} \in \phi(G), \exists g \in G: \phi(g)=\bar{g}. $ Furthermore, $\bar{g^{-1}}=\phi(g^{-1})$ Hence it's enough to show $\phi(g)\phi(H)\phi(g^{-1})\subset \phi(H)$. Since $\phi$ is a homomorphism this simplifies to $\phi(gHg^{-1})\subset \phi( H)$ Also $H \trianglelefteq G$ thus $\forall g \in G , g^{-1} H g \subset H $ Letting $$z \in \phi(gHg^{-1}) \\ z=\phi(ghg^{-1}), \exists h \in H \\ \text{but } ghg^{-1} \in H\\ \to z \in \phi(H) \to \phi(gHg^{-1})\subset \phi(H)$$ Therefore $\phi(H) \trianglelefteq X$ Is the proof correct? AI: We have to prove that for every $x \in X$ it is $x^{-1}\phi(H)x \subseteq \phi(H)$. Since the homomorphism is onto, for every $x \in X$, it does exist $g \in G$ such that $x=\phi(g)$. But then, $x^{-1}\phi(H)x=\phi(g)^{-1}\phi(H)\phi(g)=\phi(g^{-1})\phi(H)\phi(g)=\phi(g^{-1}Hg)\subseteq \phi(H)$, because $g^{-1}Hg \subseteq H$ by hypothesis.
H: Convergence with respect to a metric in a locally convex space Suppose $X$ is a locally convex space with topology generated by a countable family of seminorms $\mathcal{P}=\{||\cdot||_{k}\}_{k\in \mathbb{N}}$. Suppose $\{x_{n}\}_{n\in \mathbb{N}}$ is a sequence on $X$ which converges to $x \in X$ in the locally convex topology. I know this is equivalent to convergence with respect to each seminorm, that is, $x_{n}\to x$ iff $||x_{n}-x||_{k} \to 0$ for every $k \in \mathbb{N}$. Now, because $\mathcal{P}$ is countable, it is actually a Fréchet space, so it is metrizable with (a possible metric) given by: $$d(x,y) :=\sum_{k=1}^{\infty}2^{-k}\frac{||x-y||_{k}}{1+||x-y||_{k}}$$ I suppose that $x_{n}\to x$ also implies $x_{n}\to x$ with respect to the metric $d$ on $X$, since this metric defines the topology. But I'm having a hard time trying to prove it. Can someone give me any hints on how to address this problem? AI: A remark: I believe the word Fréchet space is only used if the metric is complete. Now suppose $x_\alpha\to x$ with respect to the locally convex topology, meaning $\|x_\alpha -x\|_k\to0$ for all $k$; in particular, for each $k$ you find an $a_k$ so that for all $\alpha≥a_k$ you have $\|x_\alpha-x\|_k≤\epsilon$. In fact, by choosing $a$ to be an upper bound of $\{ a_1,\ldots, a_N\}$ you get $\|x-x_\alpha\|_k≤\epsilon$ simultaneously for all $k\in\{1,\ldots,N\}$, for any finite $N$. Now choose $N$ so that $2^{-N}<\epsilon$ and $a$ as before to get: $$\sum_{k=1}^\infty 2^{-k}\frac{\|x_\alpha-x\|_k}{1+\|x_\alpha-x\|_k}=\sum_{k=1}^N2^{-k}\frac{\|x_\alpha-x\|_k}{1+\|x_\alpha-x\|_k} + \sum_{k=N+1}^\infty 2^{-k}\frac{\|x_\alpha-x\|_k}{1+\|x_\alpha-x\|_k}\\ ≤\sum_{k=1}^N2^{-k}\epsilon + \sum_{k={N+1}}^\infty 2^{-k}≤\epsilon + 2^{-N}≤2\epsilon$$ for all $\alpha≥a$. Now remember that $\epsilon$ was arbitrary. This step shows that the topology of the metric is coarser than the topology of the semi-norms, since any convergent net in the locally convex topology is also convergent (with the same limit) in the metric topology. For the other direction suppose that $x_\alpha\to x$ in the metric topology. The easiest way to show that $x_\alpha\to x$ for the semi-norms is to suppose that there is some semi-norm $\|\cdot\|_N$ for which $\|x_\alpha-x\|_N\not\to0$ and to get a contradiction. Well if $\|x_\alpha -x\|_N\not\to0$ then $$\sum_{k=1}^\infty 2^{-k}\frac{\|x_\alpha-x\|_k}{1+\|x_\alpha-x\|_k} ≥ 2^{-N}\frac{\|x_{\alpha}-x\|_N}{1+\|x_{\alpha}-x\|_N}$$ and $d(x_\alpha, x)$ majorises something positive that does not converge to $0$; as such $d(x_\alpha,x)$ does not converge to zero, a contradiction.
H: Compute norm of linear functional over $c_0$ I am given a functional $f$, that is defined as \begin{align} f:c_0 &\rightarrow \mathbb{R} \\ x &\mapsto \sum_{k=1}^\infty \frac{\xi_k}{3^k} ~~~x = (\xi_k)_{k\ge1} \end{align} and I am supposed to calculate $||f||$. I started by estimating an upper bound for $||fx||$: $$ ||fx|| = \left| \sum_{k=1}^{\infty} \frac{\xi_k}{3^k} \right| \le \sum_{k=1}^{\infty} \left| \frac{\xi_k}{3^k} \right| \le \sum_{k=1}^{\infty} \frac{\sup_\limits{k \ge 1} |\xi_k|}{3^k} \le ||x|| \sum_{k=1}^{\infty} 3^{-k} \le \frac{1}{2} ||x|| $$ This tells me that $||f|| \le \frac{1}{2}$. So now I'd like to show that there exists an $x$ such that $||f|| \ge \frac{1}{2}$, but I can't find a suitable $x$ to do the job. Am I approaching this problem the wrong way or is there some obvious $x$ that I'm somehow not seeing? AI: You can take the sequences $a^n = (\underbrace{1,\dots,1}_{n\text{ times}},0,0,\dots)$. Indeed, each $a^n$ lies in $c_0$ with $\|a^n\|_\infty = 1$, and $$f(a^n)=\sum_{k=1}^n 3^{-k}=\frac12\left(1-3^{-n}\right)\xrightarrow[n\to\infty]{}\frac12,$$ so $\|f\|\ge \sup_n f(a^n) = \frac12$, which combined with your upper bound gives $\|f\|=\frac12$.
H: An old APMO problem involving combinatorial geometry $\textbf{Question:}$ (APMO 1999.) Let S be a set of $2n+1$ points in the plane such that no three are collinear and no four concyclic. A circle will be called good if it has 3 points of S on its circumference, n − 1 points in its interior and n − 1 points in its exterior. Prove that the number of good circles has the same parity as n . $\textbf{My progress so far:}$ I have been able to show that for any two points (say A, B) among those there exists at least one good circle that goes through them. Take a line through those two points (say l). Then one side of that line contains at least $n-1$ points. Then there is a circle (for which the larger arc AB is on the "more points" side) large enough to contain all the points from that side. Now, we can use some "sweeping argument": we continuously transform the circle into one that contains none of the points from that side. But the no-four-concyclic condition then gives us that the number of points in the circle can't decrease by more than one at any moment. My claim follows from here. I had no luck with this problem after this. AI: I'd like to try answering the question. The statement we need is that for any pair of points we can find an odd number of good circles (Lemma 1). We know that the number of pairs of points we can choose is $$ \frac{(2n+1)2n}{2} = n(2n+1) $$ and has the parity of $n$. Obviously, the sum of $n(2n+1)$ odd numbers will have the parity of $n$. Sketch of proof of Lemma 1. Consider your "sweeping argument". Let there be $m$ points on the left and $k$ points on the right. Consider the "sweeping" function -- the number of points inside the circle. Points are crossing the circle one by one (no $4$ points on the circle). This function has a plot that starts at $m$ and ends at $k$. How many times does this plot cross the horizontal line $n-1$? At least once (since $m+k = 2n-1$ it is obvious that one of them is greater than $n-1$ and the other is smaller). It can cross the line more times ($3$, $5$, etc.), but it should always be an odd number, otherwise it cannot start below (above) $n-1$ and end above (below) $n-1$. UPDATE It seems there are certain details I have overlooked (thanks to @Calvin Lin for pointing out the issue). I will insert here a link to the solution and take some time to reconsider Lemma 1.
H: The equation $x^2-x-1$ has no solution over finite fields of even order. Does the equation $x^2-x-1=0$ really have no solution over $GF(2^i)$ for every $i$? I can verify it directly for some arbitrarily chosen small fields of even order, but can this be generalized? AI: Note that in any characteristic $2$ field, $-1 = 1$ and the polynomial is equal to $x^2 + x + 1$. This has a root over the field of four elements. We can think of that field very explicitly; it is given by $0, 1, \alpha, \beta$, with $\alpha = 1 + \beta$ and the rest of the addition table following from the fact that $1 + 1 = 0$ in characteristic $2$. The multiplication rules are $\alpha^2 = \beta$, $\alpha\beta = 1$, $\beta^2 = \alpha$, and again the rest of the table follows from standard rules for rings. You can check that both $\alpha$ and $\beta$ are roots of your polynomial. So the claim fails in general: in fact, $x^2+x+1$ has a root in $GF(2^i)$ exactly when $i$ is even, since its roots generate $GF(4)$ and $GF(4)$ embeds in $GF(2^i)$ iff $2\mid i$.
H: Given $S =\{ x-y \mid (x,y) \in \Bbb R^2 , x^2+y^2=1\}$, find $\max S$ My solution: $x^2+y^2=1$ is the equation of a circle. Let $x-y=k$. That becomes the equation of a line. Since $(x-y) \in S$, the point $(x,y)$ satisfies both the circle and the line equation. So we know that the graphs of the circle and the line have at least $1$ intersection. Now here's the part I have a question about. I drew the graph of the circle and visualised the line equation (slope $=1$), and one can easily conclude that the maximum and minimum values of $k$ occur when the line is a tangent to the circle. After concluding this, the rest is just calculation. (The answer is $\sqrt2$, by the way). My questions are: 1. Can we prove (without visualization) that the maximum and minimum values of $k$ occur when the line is tangent to the circle? 2. Is there any other method to solve such a question? AI: You can do it as follows: since $x=\cos\theta$ and $y=\sin\theta$ for some $\theta\in[0,2\pi]$, then\begin{align}x-y&=\cos(\theta)-\sin(\theta)\\&=\sqrt2\left(\frac1{\sqrt2}\cos(\theta)-\frac1{\sqrt2}\sin(\theta)\right)\\&=\sqrt2\left(\cos\left(\frac\pi4\right)\cos(\theta)-\sin\left(\frac\pi4\right)\sin(\theta)\right)\\&=\sqrt2\cos\left(\frac\pi4+\theta\right)\end{align}and, since the maximum of $\cos$ is $1$ and it is attained here (take $\theta=2\pi-\frac\pi4$, i.e. $(x,y)=\left(\frac1{\sqrt2},-\frac1{\sqrt2}\right)$), the maximum that you're after is $\sqrt2$. As for your first question: yes. At the extreme value of $k$ the line $x-y=k$ meets the circle but cannot cross it (otherwise nearby lines with slightly larger $k$ would still meet the circle), and a line that meets a circle without crossing it is tangent to it.
H: Is $e-1/e$ rational? My intuition tells me that it's not rational, and even not algebraic (i.e., it's transcendental). But I'm having a hard time showing it. Taking it slightly further, $e-\frac1e=\frac{e^2-1}{e}=\frac{(e+1)(e-1)}{e}$, but I don't feel I'm in the right direction. Any ideas how to tackle this one? Thank you! AI: If $p$ and $q$ are integers and $$\frac{e^2-1}{e} = \frac{p}{q}$$ Then $qe^2-pe-q = 0$, so $e$ would satisfy a quadratic polynomial over the integers. Since $e$ is not algebraic, that gives a contradiction.
H: Example of dense sets Provide an example of infinitely many dense subsets of a space $(X,d)$ such that the intersection of all of them is not a dense subset. I have tried taking $(\mathbb{Q}, d_{usual})$ and, as dense subsets, the intervals $(0,1), (1,2), (2,3), \ldots$ But it seems to me that this does not work, since the closure of each of the subsets is not all of $\mathbb{Q}$, and so it is not even true that they are dense. AI: For every $x\in\mathbb{Q}$, take $A_x=\mathbb{Q}-\{x\}$. Each $A_x$ is dense in $\mathbb{Q}$ (its closure is all of $\mathbb{Q}$), but $\bigcap_{x\in\mathbb{Q}}A_x=\emptyset$, which is certainly not dense.
H: A property of a kind of product integral Please, could you tell me which property was used here? $$\int_0^\infty \int_0^x f(a)g(x-a)\,da\, dx=\int_0^\infty f(a)\,da\int_0^\infty g(x)\, dx$$ Many thanks! AI: $$\int_0^\infty \int_0^x f(a)g(x-a)\,da\, dx=\int_0^\infty \int_a^\infty f(a)g(x-a)\,dx\, da=\int_0^\infty f(a)\,da\int_0^\infty g(x)\, dx$$ The first "=" is due to Fubini and the fact that $a\leq x$: you integrate both sides over the set $\{(a,x)\in \mathbb{R}^2 : 0\leq a\leq x\}$. For the second "=", substitute $u=x-a$ in the inner integral, which turns it into $\int_0^\infty g(u)\, du$, a constant that factors out.
H: Power of point hypothesis test With $X_i$ i.i.d. $N(\theta,1)$ and $$H_0: \theta=0 \quad\text{vs.}\quad H_1: \theta=1$$ I got that the test rejects when $\bar{X_n} > \frac{q_{\alpha}}{\sqrt{n}}+\theta$, where $\theta$ is either $0$ for $H_0$ or $1$ for $H_1$. The type 1 error level is given as $\alpha = 0.05$. To compute the power, I know that I have to compute the probability that I reject $H_0$ when $H_1$ is true (using the alternative distribution $N(1,1)$). So I need to get $$P_\theta(\bar{X_n}>\frac{q_{\alpha}}{\sqrt{n}}+\theta) = P_\theta(\bar{X_n}>\frac{1.64485}{\sqrt{n}}+1)$$ But from here on I am stuck on how to proceed; I suppose I need the CDF in the form $1-CDF(c)$ where $\theta=1$? Can someone point me to the solution? AI: To calculate the power, first of all you have to set the decision rule. To calculate it, $n$ is needed. Let's suppose, just for an example, that $n=4$. Then the decision rule is the following: we reject $H_0$ iff $z>1.64 \rightarrow \bar{X}_n \sqrt{n}>1.64 \rightarrow \bar{X}_n >\frac{1.64 }{2}=0.82 $ Now you can calculate the power $$\mathbb{P}[\bar{X}_n>0.82|\theta=1]=\mathbb{P}[Z>(0.82-1)\sqrt{4}]\approx 0.64$$ Note that the higher $\theta$ is under $H_1$, the higher the power.
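The same computation in a few lines of Python (a sketch using scipy's `norm` for the quantile and survival function, with $n=4$ as in the example):

```python
import numpy as np
from scipy import stats

n, alpha, theta1 = 4, 0.05, 1.0
crit = stats.norm.ppf(1 - alpha) / np.sqrt(n)      # reject H0 when xbar > crit
power = stats.norm.sf((crit - theta1) * np.sqrt(n))  # P(xbar > crit | theta = 1)
print(crit, power)  # crit ~ 0.822, power ~ 0.64
```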
H: Isomorphism of algebraic structures as an isomorphism of relational structures In connection with the question Is a set together with an operation always a relational structure?, I am trying to represent an isomorphism of algebraic structures as an isomorphism of relational structures. Let's say a relational structure is a set with one or more n-ary relation on it. An n-ary relation on a set $A$ is a non-empty subset of the Cartesian power $A^n$. Let's say an algebraic structure is a set with one or more n-ary operation on it. An n-ary operation on a set $A$ is a map of a non-empty subset of the Cartesian power $A^n$ onto $A$. Clearly, an algebraic structure is a relational structure with the following relations: The product relation is the image (a subset of $A^1$) of the map; For each element $p$ from the products relation there is also an operand relation which is the preimage of $p$ in the Cartesian power $A^n$ (a set of subsets of $A^n$). We can define an isomorphism between relational structures as a regular relation-preserving isomorphism. Then, we can say that two algebraic structures are isomorphic if: They are product relation isomorphic, and They are operand relation isomorphic for each pair from the product relation isomorphism. Would it be a correct and an equivalent definition of isomorphism of algebraic structures? AI: You don't quite have the right notion of isomorphism between relational structures; rather, an isomorphism needs to preserve and reflect each relation in question. A relation on the left has to hold iff it holds on the right. In full generality - and this is the notion of isomorphism coming from model theory - an isomorphism between two structures $\mathcal{A}$ and $\mathcal{B}$ in the same language consisting of some relation symbols and some function symbols (thinking of constant symbols as $0$-ary function symbols) is a map $I:\mathcal{A}\rightarrow\mathcal{B}$ such that: $I$ is a bijection. For each $n$-ary function symbol $f$ in the language and each $a_1,...,a_n\in\mathcal{A}$, we have $$I(f^\mathcal{A}(a_1,...,a_n))=f^\mathcal{B}(I(a_1),...,I(a_n)).$$ For each $k$-ary relation symbol $R$ in the language and each $a_1,...,a_k\in\mathcal{A}$, we have $$R^\mathcal{A}(a_1,...,a_k)\iff R^\mathcal{B}(I(a_1),...,I(a_k)).$$ Note that we're distinguishing between function/relation symbols ($f, R$) and the actual functions/relations in the structures they name ($f^\mathcal{A},f^\mathcal{B},R^\mathcal{A},R^\mathcal{B}$). This can often feel tedious at first, but it's important (although down the road once well-understood the distinction can be elided). Now as to the "relationalization" process, you have the right idea but your implementation is not ideal. Specifically, your process for going from a functional language to a relational language is "structure-dependent:" exactly how many new relation symbols we introduce depends on the structure in question, so it's not a uniform change of language across all structures. However, your basic idea is absolutely correct: you want to keep track of the "basic facts" of the form "This tuple of elements gets sent to that element" in a relational way. 
The right way to do this is via the graph of a function: given an $n$-ary function $f$ on some set $X$, the graph of $f$ is the $(n+1)$-ary relation on $X$ given by $$\{(x_1,...,x_n,x_{n+1})\in X^{n+1}: f(x_1,...,x_n)=x_{n+1}\}.$$ (Indeed, in the usual set-theoretic formalism a function literally is its graph, but that's getting a bit needlessly "under-the-hood.") So in general we "relationalize" a language by replacing each $n$-ary function symbol $f$ with an $(n+1)$-ary relation symbol $Graph_f$, and we "relationalize" a structure in this language by interpreting $Graph_f$ as the graph of the interpretation of $f$. We then have: Two structures are isomorphic iff their relationalizations are isomorphic.
H: If $\|x_1\| \leq \|y_1\|$ and $\|x_2\| \leq \|y_2\|$ then $\|\lambda x_1 + (1 - \lambda) x_2\| \leq \|\lambda y_1 + (1 - \lambda) y_2\|$ I have been running in circles on this problem. Let $x_1,x_2,y_1,y_2 \in \mathbb{R}^n_+$ be such that $\|x_1\| \leq \|y_1\|$, $\|x_2\| \leq \|y_2\|$, and $y_1$ and $y_2$ are linearly dependent. Is it true that for any $\lambda \in (0,1)$ we have $$\|\lambda x_1 + (1- \lambda)x_2\| \leq \|\lambda y_1 + (1 - \lambda)y_2\|?$$ I attempted to square the inequality and use the convexity of $f(x) = x^2$, but it was fruitless. AI: Without the linear-dependence assumption it is false: take $x_1=x_2=y_1=e_1$ and $y_2=e_2$, where $(e_n)_n$ is the standard orthonormal basis in $\mathbb{R}^n$ (for $n\geq 2$). You have $$ \lVert x_1\rVert_2 = \lVert x_2\rVert_2 = \lVert y_1\rVert_2 =\lVert y_2\rVert_2 =1 $$ and, for all $\lambda \in (0,1)$, $$ \lVert \lambda x_1+(1-\lambda)x_2\rVert_2 = \lVert x_1\rVert_2 =1 $$ but $$ \lVert \lambda y_1+(1-\lambda)y_2\rVert_2 = \sqrt{\lambda^2+(1-\lambda)^2} < 1 $$ Under the extra assumption that $y_1,y_2$ are linearly dependent, but without the requirement that they have non-negative coordinates: take $x_1=e_1$, $x_2=e_2$, $y_1=2e_1$ and $y_2=-e_1$, so that $y_1=-2y_2$, $\lVert x_1\rVert_2=1\le 2=\lVert y_1\rVert_2$ and $\lVert x_2\rVert_2=1=\lVert y_2\rVert_2$. Then, for $\lambda = 1/3$, $$ \lVert \lambda y_1 + (1-\lambda) y_2\rVert_2 = |3\lambda-1|\,\lVert y_2\rVert_2 = 0 $$ but $$ \lVert \lambda x_1 + (1-\lambda) x_2\rVert_2 = \frac{\sqrt{5}}{3} > 0. $$ With all the assumptions: it's true. Since $y_1,y_2$ are linearly dependent, there exist $(\alpha,\beta)\neq (0,0)$ such that $\alpha y_1+\beta y_2=0_n$. Suppose either $\alpha=0$ or $\beta=0$; say, wlog, $\alpha =0$. Then $y_2=0_n$, and therefore $x_2=0$, and therefore the statement holds since $\lambda\lVert x_1\rVert_2 \leq \lambda\lVert y_1\rVert_2$. Otherwise, rewrite for convenience $y_2 = \gamma y_1$ for $\gamma> 0$ (the sign of $\gamma$ follows from the fact that $y_1,y_2\in\mathbb{R}_+^n$ and that, in this case, neither is zero). $$ \lVert \lambda y_1+(1-\lambda)y_2\rVert_2 = \lVert \lambda y_1+\gamma(1-\lambda)y_1\rVert_2 = (\lambda+\gamma(1-\lambda)) \lVert y_1\rVert_2 $$ (note that $\lambda+\gamma(1-\lambda) \geq 0$); while $$\begin{align} \lVert \lambda x_1+(1-\lambda)x_2\rVert_2 &\leq \lambda\lVert x_1\rVert_2 + (1-\lambda)\lVert x_2\rVert_2 \leq \lambda\lVert y_1\rVert_2 + (1-\lambda)\lVert y_2\rVert_2\\ &= \lambda\lVert y_1\rVert_2 + \gamma (1-\lambda)\lVert y_1\rVert_2\\ &= (\lambda+\gamma(1-\lambda)) \lVert y_1\rVert_2 \end{align}$$ proving the claim.
H: Ways to write $n=p^k$ as a product of integers Let's say that $F(n)$ is the number of ways to write $n$ as a product of integers greater than $1$. For example, $F(12)=4$ since $12=2\cdot 2 \cdot 3$, $12=2\cdot 6$, $12=3\cdot 4$ and $12=12$. Given $n=p^k$ where $p$ is a prime number, what is the value of $F(n)$? I know how to manage this problem when $n=p_1\cdots p_k$ where $p_1,\ldots , p_k$ are different primes (the result is $B(k)$, where $B(n)$ is the Bell number), but how can I do it in this case? Note: The order of the factors does not matter; that is, $a\cdot b$ and $b \cdot a$ do not count as different ways to write a number AI: Clearly, any factor of $n$ will be of the form $p^m$ for some $m\le k$. The question thus is equivalent to asking how many ways there are to write $k$ as a sum of positive integers, where order doesn't matter. This is simply the number of partitions of $k$. I'll admit that I'm not super familiar with partitions myself; as far as I know there is no simple closed formula for the partition function $p(k)$, though it has the well-known generating function identity $\sum_{k\ge 0}p(k)x^k=\prod_{i\ge1}\frac{1}{1-x^i}$ and is easy to compute recursively.
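Partition numbers are easy to generate if you want data to play with; the standard "coin counting" dynamic program in Python (so $F(p^k)$ is just `partitions(k)`):

```python
def partitions(k):
    # ways[t] = number of partitions of t using the parts considered so far
    ways = [1] + [0] * k
    for part in range(1, k + 1):
        for total in range(part, k + 1):
            ways[total] += ways[total - part]
    return ways[k]

print([partitions(k) for k in range(1, 8)])  # [1, 2, 3, 5, 7, 11, 15]
```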
H: Is this a valid proof that the boundary of a set on a metric space is closed? The definition that I have been given of the boundary of a subset $A$ of a metric space $X$ is: $$\partial A=\{x\in X:\forall r>0,\ B_r(x)\cap A \neq\emptyset \text{ and } B_r(x)\cap A^c\neq\emptyset\}$$ So with this definition, we would have $$(\partial A)^c=\{x\in X:\exists r>0,\ B_r(x)\cap A =\emptyset \text{ or } B_r(x)\cap A^c=\emptyset\}$$ $$(\partial A)^c=\{x\in X:\exists r>0,\ B_r(x)\cap A =\emptyset\}\cup\{x\in X:\exists r>0,\ B_r(x)\cap A^c=\emptyset\}$$ $$(\partial A)^c=\operatorname{int}A\cup \operatorname{int}A^c$$ Thus, the complement of the boundary of $A$ is the union of two open sets and, in consequence, is open. Then, $((\partial A)^c)^c=\partial A$ is closed. AI: Yes, that’s fine. You can also do it a little more directly: the definition of $\operatorname{bdry}A$ says that $x\in\operatorname{bdry}A$ iff $x\in\operatorname{cl}A\cap\operatorname{cl}(X\setminus A)$, i.e., that $\operatorname{bdry}A=\operatorname{cl}A\cap\operatorname{cl}(X\setminus A)$, and since this is the intersection of two closed sets, it is closed. While your definition of the boundary is stated in terms of metric spaces, it’s just a special case of the definition for topological spaces in general, and the same argument shows that $\operatorname{bdry}A$ is closed in all topological spaces.
H: Two vector sequences that are bases of a vector space The sequence of vectors $(u_{1}, \cdots, u_{n})$ forms a basis of the vector space $E$. I need to show that for every $v \in E \setminus \{ 0_{E} \}$ there is a $k \in \{1, \cdots, n \}$ such that $ (u_{1}, \cdots, u_{k-1}, v, u_{k+1}, \cdots, u_{n}) $ is a basis of $E$. AI: As you wrote in the comments, $v$ can be written as $b_1u_1+b_2u_2+\cdots+b_nu_n$. Choose $k$ such that $b_k\ne0$; this is possible because $v\ne 0_E$. Then $u_k$ is a linear combination of $\{u_1,\ldots,u_{k-1},v,u_{k+1}\ldots,u_n\}$. So, since the set$$\{u_1,\ldots,u_{k-1},v,u_{k+1}\ldots,u_n\}\tag1$$spans $E$ and $\dim E=n$, $(1)$ is a basis of $E$.
H: How is it possible that if $A \implies B $ is true then $ \lnot ( \lnot B \implies A )$ can be false? While I was playing around with the material implication I made a proof by contradiction which I think is wrong, but I can't find the mistake: Say that $A \implies B $ is true; then suppose the truth of $ \lnot B \implies A $. But this can't be the case, because otherwise $ \lnot B \implies A \implies B $; then $\lnot( \lnot B \implies A) $ is true. However, the truth table of the statement $(A \implies B )\land \lnot( \lnot B \implies A) $ isn't always true when $A \implies B $ is true, though I think it should be if my reasoning were correct. What's wrong with my proof? AI: The flaw is in treating $\lnot B\implies B$ as a contradiction: it isn't one, because $\lnot B\implies B$ is true whenever $B$ is true. Concretely, when $A$ is false and $B$ is true, both $A\implies B$ and $\lnot B\implies A$ hold (the latter vacuously, since $\lnot B$ is false), so $\lnot(\lnot B\implies A)$ is false.
H: If $\phi\in W^U$ and if $\psi\in W^V$ and if $W$ is a topological vector space then $f(u,v):=\phi(u)+_{_{W}}(-1)*_{_{W}}\psi(v)$ is continuous Statement Let $W$ be a topological vector space and $\phi:U\rightarrow W$ and $\psi:V\rightarrow W$ two continuous functions. So if we define $f:U\times V\rightarrow W$ through the condition $$ 1.\quad f(u,v):=\phi(u)+_{_{W}}(-1)*_{_{W}}\psi(v) $$ for any $u\in U$ and for any $v\in V$, then $f$ is continuous in the product topology. Unfortunately I can't prove the statement: I have tried to show that $f$ is a composition of continuous functions, defining the function $\Delta:U\times V\rightarrow W\times W$ through the condition $$ \Delta(u,v):=\big(\phi(u),\psi(v)\big) $$ which by the universal mapping theorem for products is continuous (is this correct?), but then I can't continue because, although I see that $f(u,v)=+_{_{W}}\Big(\phi(u),*_{_{W}}\big(-1,\psi(v)\big)\Big)$, I can't prove that the function $\tilde\Delta:U\times V\rightarrow W\times W$ defined through the condition $$ \tilde\Delta(u,v)=\Big(\phi(u),*_{_{W}}\big(-1,\psi(v)\big)\Big) $$ for any $u\in U$ and $v\in V$ is continuous. Naturally $+_{_{W}}$ is the vectorial sum in $W$ and $*_{_{W}}$ is the scalar multiplication in $W$. So could someone help me, please? AI: We can write $f$ as the composition of simpler maps in the form $f = +_W \circ s \circ \Delta$, where the addition $+_W$ is continuous by the definition of topological vector spaces, the map $\Delta \colon (u,v) \mapsto (\phi(u),\psi(v))$ is continuous by general properties of maps between product spaces, and $s \colon (w_1, w_2) \mapsto (w_1, -w_2)$ remains to be shown continuous. Again by general properties of maps between product spaces, the continuity of $s$ is equivalent to the continuity of the negation map $n \colon W \to W$, $n(w) = -w$. We can write this as the composition $$w \mapsto (-1,w) \mapsto (-1)\ast_W w = -w$$ of the embedding $W \to \{-1\}\times W \subset K \times W$ and the scalar multiplication $K \times W \to W$. The embedding is continuous since each component is, and the scalar multiplication is continuous by definition of a TVS. Hence $n$ is continuous, and thus $s$ is continuous, and finally the continuity of $f$ follows.
H: Small problem on minus sign $({-5}+{9}x)y'={4}x+{9}y$ I have the equation $$({-5}+{9}x)y'={4}x+{9}y$$ So I am trying to solve it the way one solves a linear differential equation. I recognise that I can write it as: $$({-5}+{9}x)y' -{9}y={4}x$$ Now I don't know what I should do. I know that I could write $(({-5}+{9}x)y)^\prime=\int4x\,dx$ if only my expression differed by a minus sign. So what is the technique in such problems? This is not the first time I have gotten into a situation like this. AI: Have you heard of the integrating factor? Divide the equation by $9x-5$ and change the minus sign to a plus sign by distributing $-1$ to the $9x-5$ when you divide $9y$ by it: $$y'+\frac{9}{5-9x} y=\frac{4x}{9x-5}$$ Then this is just a standard integrating factor problem. Multiply both sides of the equation by: $$\exp\left(\int \frac{9}{5-9x} \; dx \right)= \frac{1}{5-9x}$$ $$\left(\frac{y}{5-9x}\right)'=\frac{-4x}{(9x-5)^2}$$ Lastly, integrate both sides and don't forget the constant.
H: analytical geometry problem with locus A line segment $AB$ of constant length $10$ units moves with the end $A$ always on the $x$-axis and the end $B$ always on the line $y = 4x$. Find the equation of the locus of the midpoint of $AB$. How could I solve this problem? AI: Let the point on the $x$-axis be $A(h,0)$; a general parametric point on the line $y=4x$ is $B(t,4t)$. We can write $$(h-t)^2+16t^2=100~~~(1)$$ Let $P(x,y)$ be the midpoint of $AB$; then $x=(h+t)/2$ and $y=2t$. Putting $h=2x-t$ and then $t=y/2$ in (1), we get $$(2x-2t)^2+16t^2=100 \implies (2x-y)^2+4y^2=100 \implies 4x^2-4xy+y^2+4y^2=100 \implies 4x^2+5y^2-4xy=100~~~~(2)$$ Eq. (2) gives the required locus.
H: Integrate $ \int_a^b \frac{1}{\sqrt{Ax-\frac{x^2}{2}}}dx$ From online integral calculators I am aware that: $$ \int_a^b\frac{1}{\sqrt{Ax-\frac{x^2}{2}}}dx=\sqrt{2}\left[\arcsin\left(\frac{x}{A}-1\right)\right]\Bigg|_a^b$$ When I work backwards starting with: $$y=\sqrt{2}\left[\arcsin\left(\frac{x-A}{A}\right)\right]\Bigg |_a^b$$ $$=\sqrt{2}\left[\arcsin\left(\frac{x}{A}-1\right)\right]\Bigg |_a^b$$ I can show that the integral is correct. But how would I go about integrating the initial expression in the first place? I can't think of any suitable substitution. AI: Let $u=\sqrt{x}$: $$\int \frac{2}{\sqrt{A-\frac{u^2}{2}}} \; du$$ Then let $t=\frac{u}{\sqrt{2A}}$: $$\int \frac{2 \sqrt{2}}{\sqrt{1-t^2}} \; dt$$ $$2\sqrt{2}\arcsin{t}+C$$ $$2\sqrt{2}\arcsin{\left(\sqrt{\frac{x}{2A}}\right)} \bigg \rvert_a^b$$ The expression that I have and the one that your integral calculator gave are equivalent: the two antiderivatives differ only by a constant (compare them via $\cos(2\theta)=1-2\sin^2\theta$), and that constant cancels in the definite integral.
H: How to determine whether given nonlinear equation system cannot be solved analytically? I am currently studying nonlinear equations that require numerical analysis methods to solve them. But I do not understand why some equations can't be solved analytically. For example: $x^2 + 4y^2 - 16 = 0$ and $x y^2 - 3 = 0$. How can I determine that this equation system cannot be solved analytically before using numerical analysis methods? AI: We can solve this system even algebraically. The solution is given by $$ x=\frac{1}{3}(4y^2( 4 - y^2)), $$ where $y$ satisfies the equation $$ 4y^6 - 16y^4 + 9=0. $$ This is a cubic equation in $z=y^2$, for which we have a formula (Cardano's). We obtain four real solutions and two complex ones.
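For this particular system, the elimination can be done mechanically; a sympy sketch (substituting $x=3/y^2$ from the second equation into the first and clearing denominators):

```python
import sympy as sp

y = sp.symbols('y')
# (3/y^2)^2 + 4 y^2 - 16 = 0, multiplied through by y^4
sextic = sp.expand(((3 / y**2)**2 + 4 * y**2 - 16) * y**4)
print(sextic)                     # 4*y**6 - 16*y**4 + 9
print(sp.nsolve(sextic, y, 0.9))  # one of the four real roots, numerically
```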
H: Maximizing profits in a game with a given number of shots and a given capacity We are given $7$ shots to choose randomly (with uniform distribution) from the interval $[0,1]$. After choosing each number we can decide whether we want to keep this number or throw it away. Once we get to keeping $3$ numbers we stop playing the game. What is the best strategy to maximize the sum of those $3$ numbers? What is the expected return of this strategy? Here are my thoughts on the best strategy (which are wrong!): If we choose $7$ random numbers uniformly, the expected value of the smallest one is $1/8$, of the second smallest one $2/8$, and so on. So on our first shot, if we get a number greater than or equal to $5/8$, we should keep the number and continue the game with $6$ shots remaining and capacity of $2$ numbers to hold. This can happen with probability $3/8$. If this doesn't happen then we don't keep the first shot and continue the game with $6$ shots and capacity of $3$ numbers to hold. So we can write a recursive formula for the expected return of this strategy. Let $E(n,k)$ be the expected return with $n$ shots and capacity of $k<n$ numbers to hold. We have the following recursion: $$E(n,k)=\frac{k}{n+1}(E(n-1,k-1)+\frac{2n+2-k}{2n+2})+\frac{n+1-k}{n+1}E(n-1,k)$$ The boundary condition should be $E(k,k)=\frac{k}{2}$. This strategy gives $E(2,1)=11/18$, which is less than the $5/8$ given by the strategy of keeping the first shot if it is greater than $1/2$ and otherwise taking the second shot. AI: This is a dynamic programming problem. First compute $E(n,1),\ n=1,\dots,7$. We have $E(1,1)=\frac12$. Then $E(2,1)=\frac58$. To compute $E(n,1)$, we should accept the first draw if it is $\geq E(n-1,1)$ and otherwise reject it. Therefore, $$E(n,1)=\int_0^{E(n-1,1)}E(n-1,1)\,\mathrm{d}x+\int_{E(n-1,1)}^1x\,\mathrm{d}x=\frac12+\frac12E(n-1,1)^2$$ Use the same idea to compute $E(n,k)$ for larger values of $k$. If the current draw is $x$, we should accept it if $$x+E(n-1,k-1)\geq E(n-1,k)$$ and reject it otherwise.
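The recursion is easy to run; a minimal Python sketch of this dynamic program (with the boundary conventions $E(n,0)=0$ and $E(k,k)=k/2$ taken from the discussion above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def E(n, k):
    """Optimal expected sum with n draws left and k numbers still to keep."""
    if k == 0:
        return 0.0
    if n == k:                       # forced to keep every remaining draw
        return k / 2
    # accept a draw x iff x + E(n-1, k-1) >= E(n-1, k), i.e. x >= c
    c = min(max(E(n - 1, k) - E(n - 1, k - 1), 0.0), 1.0)
    return c * E(n - 1, k) + (1 - c * c) / 2 + (1 - c) * E(n - 1, k - 1)

print(E(2, 1))  # 0.625 = 5/8
print(E(7, 3))  # ~2.13, the expected return of the optimal strategy
```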
H: Show that two definitions for a subgroup are equivalent Show that (1) $\Longleftrightarrow$ (2): (1) For $H \subseteq G$ with $H \ne \varnothing$ of a group $(G,\ast)$, $(H,\ast)$ is a subgroup of $(G,\ast)$ if: (G1): $\forall a,b \in H: a \ast b \in H$ (G2): $\forall a \in H: a^{-1} \in H$ (2) For $H \subseteq G$ with $H \ne \varnothing$ of a group $(G,\ast)$, $(H,\ast)$ is a subgroup of $(G,\ast)$ if: (U): $\forall a,b \in H: a \ast b^{-1} \in H$ (Note: that $\ast$ is associative follows from $(G, \ast) $ being a group) (1) $\Longrightarrow$ (2) (G1) states $\forall a,b \in H: a \ast b \in H$, but since (G2) says $\forall a \in H: a^{-1} \in H$, $(\forall a,b \in H: a \ast b \in H) \overbrace{\Longleftrightarrow}^{(G2)} (\forall a,b \in H: a \ast b^{-1} \in H)$ (2) $\Longrightarrow$ (1) We choose $a \in H$ and $a \in H$ another time. For (U) this means, since $a$ is definitely an element of $H$, that $a,a \in H: a \ast a^{-1} \in H$ But $a \ast a^{-1}= e$ by definition of the inverse. So $e \in H$ can be concluded by (U). So since $e \in H$ we now choose any $a \in H$ and as second element $e \in H$. (U) states $e,a \in H: e \ast a^{-1} \in H$ But $e \ast a^{-1}=a^{-1}$, so (U) gives us $e,a \in H: a^{-1} \in H$ which is (G2), since $e \in H$ no matter what. But since every element of $H$ has its inverse in $H$: $(\forall a,b \in H: a \ast b^{-1} \in H) \overbrace{\Longleftrightarrow}^{(G2)} (\forall a,b \in H: a \ast b \in H)$ which is (G1) $\Box$ It would be great if someone could look over it and give me some feedback on whether this is correct :) AI: I think you could improve a bit the implication $(U)\implies (G1)$. So you know that $a\star b^{-1}\in H$ for all $a,b\in H$ and you want to show that $H$ is closed under the group operation, that is, $a\star b\in H$ for all $a,b\in H$. To do this, just note that you may express the product $a\star b$ as $a\star (b^{-1})^{-1}$.
H: What's incorrect in this word problem solution? I had a question and answered it, but I've been told that my solution is incorrect. What's the mistake here? The Question Runner A is running at the speed of x in a triangular path (each side of the triangle is of length a) and Runner B is running in the same track at the speed of x+3. Runner A passed 3a+60 (a>60) while at the same time Runner B passed 6a-60. Express the value of the perimeter (3a, I believe) using the speed x. My solution Using this table | time | speed | location | -----------------|---------|------------| Runner A | t | x | tx | -----------------|---------|------------| Runner B | t | x+3 | t(x+3) | ----------------------------------------- We can understand that tx=3a+60 and t(x+3)=6a-60 so t(x+3)-3a+120=3a+60=tx and then we get tx+3t-3a+120=tx and then 3t=3a-120 => t=a-40. If we put that back in the first equation tx=3a+60 we get (a-40)x=3a+60 => xa-40x=3a+60 => (x-3)a=40x+60 => a=(40x+60)/(x-3) meaning that the perimeter is P=3a=3*(40x+60)/(x-3) What is incorrect here? (Sadly I'm not sure what the correct answer is so I can't add it here) AI: You need to eliminate $t$, so write $$tx=3a+60\\t(x+3)=6a-60\\t=\frac {3a+60}x\\t=\frac {6a-60}{x+3}$$ $$(3a+60)(x+3)=(6a-60)x\\ 3ax+9a+60x+180=6ax-60x\\ 120x+180=3ax-9a\\ \frac{120x+180}{x-3}=3a$$ Note that this equals your $3a=3\cdot\frac{40x+60}{x-3}$, so the perimeter you obtained is consistent with a correct elimination of $t$.
H: Sequence of vectors with specific conditions forming a basis $E$ is a vector space with $e_{1}, e_{2}, e_{3}, e_{4}, e_{5} \in E$ with the following conditions: $E = \langle e_{1}, e_{2}, e_{3}, e_{4}, e_{5} \rangle$. $ e_{1}, e_{2}, e_{3}, e_{4}, e_{5} $ are linearly dependent. $ e_{1} + e_{2}, e_{2} + e_{3}, e_{3} $ are linearly independent and don't generate $E$. $ e_{1} + e_{4}, e_{2}, e_{3} + e_{4}$ are linearly dependent. I need to use this information to show that $(e_{1}, e_{2}, e_{3}, e_{5})$ is a basis of $E$, but I don't know how. AI: This is a cool problem. Set $V=\langle e_1+e_2,e_2+e_3,e_3 \rangle$ and $W=\langle e_1+e_4,e_2,e_3+e_4 \rangle$, and set $X=\langle W,e_4\rangle$. Finally, set $Z = \langle e_1,e_2,e_3,e_5 \rangle$; we're trying to prove that $Z=E$. Note that $V = \langle e_1,e_2,e_3 \rangle$ and $X = \langle e_1,e_2,e_3,e_4 \rangle = \langle V,e_4\rangle$. Now we do some dimension-counting. From 1 and 2, we know that $\dim E < 5$. From 3, we know that $\dim E > 3$. Therefore $\dim E = 4$. To prove that $e_1,e_2,e_3,e_5$ is a basis for $E$, it suffices to prove that $\dim Z\ge 4$. From 3, we know that $\dim V = 3$. From 4, we know that $\dim W < 3$. It follows that $\dim X \le \dim W + \dim \langle e_4 \rangle \le \dim W + 1 < 4$. Therefore $V\subseteq X$ but $\dim X \le 3 = \dim V$, and therefore $V=X$. It follows that $e_4$ is a linear combination of $e_1,e_2,e_3$. Finally, $V\subseteq Z$; if $\dim Z \le 3 = \dim V$, then it would follow that $e_5$ is a linear combination of $e_1,e_2,e_3$. But since $e_4$ is also, it would follow that $\dim E \le 3$, a contradiction. Therefore $\dim Z > 3$, as required.
H: Gilbert Strang 1.3 #4 Question about Linear Dependence In Gilbert Strang's Linear Algebra Book 4th Edition, the question asks for a combination of three vectors that gives the zero vector: $w_1$ = (1,2,3), $w_2$ = (4,5,6), $w_3$ = (7,8,9). Working this out to reduced row echelon form, I get to a certain point where it's obvious that there is no solution since the bottom row is all 0's. \begin{bmatrix} 1 & 4 & 7 & 0 \\ 2 & 5 & 8 & 0 \\ 3 & 6 & 9 & 0 \end{bmatrix} becomes: \begin{bmatrix} 1 & 4 & 7 & 0 \\ 0 & 3 & 6 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} In the solutions, though, it's pointed out that $w_1$ - 2$w_2$ + $w_3$ = 0 and thus, no solution. Plugging the original vectors in this equation, it's obviously true. However, how does one get to $w_1$ - 2$w_2$ + $w_3$ = 0 from the original vectors if RREF can't get you there? Is it just a matter of simply looking at the vectors (should I have seen the dependence?), or is there another way of working it out besides RREF? AI: You wish to find $a_1, a_2, a_3$ such that $a_1 w_1 + a_2 w_2 + a_3 w_3 = 0$. The fact that the third row of the reduced matrix is zeroes means that any number may be chosen for $a_3$ and you will be able to find $a_1, a_2$ satisfying the above equation. You may choose $a_3 = 1$. Then from the second row you have $3a_2 + 6a_3 = 0 \implies a_2 = -2$. Finally, from the first row you have $a_1 + 4a_2 + 7a_3 = 0 \implies a_1 - 8 + 7 = 0 \implies a_1 = 1$.
H: The spectral radius of an $n\times n$ matrix I would like to know the spectral radius of this $n\times n$ matrix: $$\begin{pmatrix} 0 & 1 & \cdots & 1 \\ 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 0 & \cdots & 0 \end{pmatrix}$$ I know that the spectral radius is the maximum of the absolute values of the eigenvalues, but I don't know how to calculate it for this matrix... I also know that for a symmetric matrix the spectral radius equals $\|A\|_2$, but I don't know how to calculate that either. AI: Your matrix has rank $2$, and in particular it can be written in the form $$ A = xy^T + yx^T, $$ where $x = (1,0,\dots,0)^T$ and $y = (0,1,\dots,1)^T$. Because $A$ has rank $2$, it has $0$ as an eigenvalue with algebraic multiplicity at least $n-2$; let $\lambda_1,\lambda_2$ denote the two possibly non-zero eigenvalues of $A$. We can find the eigenvalues of $A$ by noting that the trace of a matrix is the sum of its eigenvalues. In particular, it is clear that $\operatorname{tr}(A) = 0$. Thus, we see that $$ \lambda_1 + \lambda_2 + 0 + \cdots + 0 = 0 \implies \lambda_1 = -\lambda_2. $$ On the other hand, we find that $$ A^2 = (xy^T + yx^T)^2 = xy^Txy^T + xy^Tyx^T + yx^Txy^T + yx^Tyx^T. $$ Using the cyclic property of the trace, and noting that here $x^Ty = 0$, $x^Tx = 1$, and $y^Ty = n-1$, conclude that $$ \lambda_1^2 + \lambda_2^2 = \operatorname{tr}(A^2) \\= \operatorname{tr}[xy^Txy^T] + \operatorname{tr}[xy^Tyx^T] + \operatorname{tr}[yx^Txy^T] + \operatorname{tr}[yx^Tyx^T] \\= \operatorname{tr}[y^Txy^Tx] + \operatorname{tr}[x^Txy^Ty] + \operatorname{tr}[y^Tyx^Tx] + \operatorname{tr}[x^Tyx^Ty] \\= 2(x^Ty)^2 + 2(x^Tx)(y^Ty) = 2(n-1). $$ Conclude that the non-zero eigenvalues of $A$ are $\pm \sqrt{n-1}$, and the spectral radius is $\sqrt{n-1}$.
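A numerical spot check, assuming numpy is available: for several values of $n$, the largest eigenvalue in absolute value should equal $\sqrt{n-1}$:

```python
import numpy as np

for n in (3, 5, 10):
    A = np.zeros((n, n))
    A[0, 1:] = 1                           # first row: 0, 1, ..., 1
    A[1:, 0] = 1                           # first column: 1's below the corner
    rho = max(abs(np.linalg.eigvalsh(A)))  # A is symmetric
    print(n, rho, np.sqrt(n - 1))          # the last two values agree
```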
H: On the proof of $\;(1+x)^p\equiv1+x^p \pmod p$ I know the proof of $(1+x)^p\equiv 1+x^p\pmod p$ using the binomial theorem. Moreover, I know that $x^p \equiv x \pmod p$ by Fermat's theorem. Hence, is $(1+x)^p\equiv(1+x)\equiv1+x^p \pmod p$ a correct proof of this relation? After thinking about it for a bit: one of the proofs of Fermat's theorem uses the binomial theorem, so my argument might be circular (although Fermat's theorem can also be deduced from Lagrange's theorem). I guess you can try to prove Fermat's theorem by pigeonhole: assuming $a\not\equiv 0$, the powers $$a^i,\;\;1\le i \le p$$ take $p$ values, but $\bmod\ p$ they can take at most $p-1$ distinct values ($a^i \not \equiv 0$ since $a\not\equiv 0$), and so $a^i\equiv a^j$ for some $i\ne j$; take inverses and the rest follows. If $a\equiv 0$, the statement is immediate. This can be extended to $(1+x)^{p^n}\equiv 1+x^{p^n}$ without using induction, since $p \mid p^n$: $$(1+x)^{p^n}\equiv (1+x)^{p\cdot{p^{n-1}}}\equiv 1+x \equiv 1+x^{p^n}.$$ I have a gut feeling that says I overlooked something important. If this is correct, what additional instructive value does the proof using the binomial theorem have that the Fermat-based proof doesn't? PS: I don't know what to say in the title. Feel free to edit it. AI: $$(1+x)^p = 1+x^p$$ is true not only for $x\in \Bbb{F}_p$ but also for $x\in \Bbb{F}_p[t]$, or indeed in any commutative ring of characteristic $p$. By contrast, $a^p = a$ is true only for $a\in \Bbb{F}_p$ (as $t^p\ne t$ in $\Bbb{F}_p[t]$).
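The distinction in the answer can be made concrete in $\Bbb F_p[t]$. A small check, assuming sympy is available (Poly with modulus=p works with coefficients in $\mathrm{GF}(p)$):

```python
from sympy import symbols, Poly

t = symbols('t')
p = 5
# Freshman's dream holds in F_p[t]: (1 + t)^p - (1 + t^p) vanishes mod p.
print(Poly((1 + t)**p - (1 + t**p), t, modulus=p).is_zero)  # True
# ... but Fermat's a^p = a does not carry over: t^p - t is nonzero there.
print(Poly(t**p - t, t, modulus=p).is_zero)                 # False
```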
H: Residues of $ \frac{z^4}{1+z^6} $ I am trying to compute all 6 residues of $ \frac{z^4}{1+z^6} $. I tried the straightforward way first, finding all the points where the denominator is $0$, etc., but it became way too complicated. Any ideas? AI: Let $a$ be a pole of $f(z)=z^4/(z^6+1)$. Then $a^6=-1$, and $a$ is a simple pole. The residue is thus $$\lim_{z\to a}(z-a)f(z)=\lim_{z\to a}\frac{z^4(z-a)}{z^6-a^6} =\frac{a^4}{\lim_{z\to a}\frac{z^6-a^6}{z-a}}.$$ But $$\lim_{z\to a}\frac{z^6-a^6}{z-a}=g'(a)=6a^5$$ when $g(z)=z^6$, so the residue at $a$ is $$\frac{a^4}{6a^5}=\frac{1}{6a}.$$
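A direct verification, assuming sympy is available: at each pole $a$ the residue should come out to $1/(6a)$:

```python
from sympy import symbols, solve, residue, simplify

z = symbols('z')
f = z**4 / (1 + z**6)
for a in solve(z**6 + 1, z):                     # the six poles, a^6 = -1
    print(simplify(residue(f, z, a) - 1/(6*a)))  # 0 for every pole
```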
H: Prove the inconsistency of this definition of limit I want to prove that the following fake definition of limit doesn't capture the property that $f(x)$ tends to infinity as $x$ does: $$\lim_{x\rightarrow +\infty} f(x) = +\infty \iff \forall M > 0\ \exists X > 0\mid( f(x) > M \implies x > X ).$$ If I take a strictly increasing and continuous function such that $\lim_{x\rightarrow +\infty} f(x) = a$ with $a \in \mathbb R$, then for $M < a$ the definition is satisfied (there $f(x)>M$ exactly when $x>f^{-1}(M)$, so one can take $X=f^{-1}(M)$, provided this is positive). However, if I take $M > a$, then $f(x)>M$ is always false, so the implication is vacuously true. Because the implication is always true in this case, can I say that I have proven the definition is wrong (since the definition would then also hold for a function whose limit is the finite number $a$)? AI: For example, $f(x) = x^2$ (defined on all of $\mathbb R$) doesn't satisfy your definition: for any $M>0$, the inequality $f(x) > M$ also holds at large negative $x$, and such $x$ are not greater than any $X > 0$. On the other hand, $f(x) = 1 - e^{-x}$ satisfies your definition (for $M \ge 1$ the premise $f(x) > M$ never holds, so the implication is vacuously true, and for $0 < M < 1$ one can take $X = -\ln(1-M) > 0$), but $\lim_{x \to \infty} f(x) = 1 \ne +\infty$.
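A minimal numeric illustration of the first counterexample, assuming $f(x)=x^2$ is considered on all of $\mathbb R$: take $M=4$; then $x=-3$ satisfies $f(x)>M$ but not $x>X$ for any $X>0$:

```python
f = lambda x: x**2
M = 4
x = -3.0
for X in (0.5, 10.0, 1000.0):
    print(f(x) > M, x > X)  # always (True, False): the implication fails
```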
H: a function that is perpendicular to another and goes through a given point Find a linear function that meets the line $g(x)=(3-2\sqrt2)x$ at the point $T(x_0,1)$ and is perpendicular to it. My attempt: $$\phi=\pi/2$$ $$\pi/2=\frac{3-2\sqrt2-k}{1+(3-2\sqrt2)k}$$ $$\frac{\pi}{2}\big(1+(3-2\sqrt2)k\big)-3+2\sqrt{2}=-k$$ Then we get: $$k=\frac{-3+2\sqrt{2}+2\pi}{\sqrt{2}\pi-1}.$$ The point at which the $x$-axis intercept should be is the zero of the function $g$, so it is $x=0$, giving $$h(x)=\frac{-3+2\sqrt{2}+2\pi}{\sqrt{2}\pi-1}\,x.$$ I find it very strange that such a function would be the solution. AI: If $g(x)$ is linear (which it is) and has slope $m$, then a line perpendicular to it has slope $-\frac{1}{m}$. (Note that your first step is where things went wrong: the angle formula gives $\tan\phi$, not $\phi$ itself, and $\tan(\pi/2)$ is undefined.) So the perpendicular line $h(x)$ has slope $\frac{-1}{3-2\sqrt{2}}=-(3+2\sqrt2)$. The point $\big(\frac{1}{3-2\sqrt2},1\big)=(3+2\sqrt2,\,1)$ is on $g(x)$, and $g(x)$ and $h(x)$ share that point, so by point-slope the perpendicular line is $$h(x) = -(3+2\sqrt{2})\big(x-(3+2\sqrt2)\big)+1.$$ If I interpreted the question wrong please let me know; I was confused by some of the notation.
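A quick check of the answer, assuming sympy is available: $h$ passes through $T$ and the product of the slopes is $-1$:

```python
from sympy import symbols, sqrt, simplify, radsimp

x = symbols('x')
m = 3 - 2*sqrt(2)               # slope of g
x0 = 1/m                        # g(x0) = 1
h = (-1/m)*(x - x0) + 1         # the perpendicular through (x0, 1)
print(radsimp(x0))              # 2*sqrt(2) + 3
print(simplify(h.subs(x, x0)))  # 1: h passes through T
print(simplify(m*(-1/m)))       # -1: g and h are perpendicular
```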
H: Can every square matrix be written as a product of an invertible matrix and a projection matrix? Let $A$ be a square matrix. Will there always be an invertible matrix $B$ and a projection matrix $P$ such that $A = BP$? Thanks AI: Sure: $A = (A+E)\Pi$, where $\Pi$ is the (orthogonal) projection onto $(\ker A)^\perp$ and $E$ is any matrix with $\ker E = (\ker A)^{\perp}$ and $\operatorname{Im} E = (\operatorname{Im} A)^\perp$. (Such an $E$ exists because $\dim\ker A = \dim(\operatorname{Im} A)^\perp$ by rank-nullity.) Indeed, $A\Pi = A$ because $v-\Pi v\in\ker A$ for every $v$, and $E\Pi = 0$ because $\operatorname{Im}\Pi = (\ker A)^\perp = \ker E$; hence $(A+E)\Pi = A$. Moreover, $A+E$ is invertible: if $(A+E)v = 0$, then $Av = -Ev$ lies in $\operatorname{Im} A \cap (\operatorname{Im} A)^\perp = \{0\}$, so $Av = Ev = 0$, and then $v \in \ker A \cap \ker E = \ker A \cap (\ker A)^\perp = \{0\}$.
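The construction can be carried out numerically with an SVD. A sketch, assuming numpy is available: with $A=USV^T$ of rank $r$, take $\Pi=V_rV_r^T$ (the projection onto the row space $(\ker A)^\perp$) and $E=U_\perp V_\perp^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))  # rank 2
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

P = Vt[:r].T @ Vt[:r]   # orthogonal projection onto (ker A)^perp
E = U[:, r:] @ Vt[r:]   # ker E = (ker A)^perp, Im E = (Im A)^perp
B = A + E

print(np.allclose(P @ P, P))          # True: P is a projection
print(np.allclose(B @ P, A))          # True: A = B P
print(abs(np.linalg.det(B)) > 1e-10)  # True: B is invertible
```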
H: Is my development about the continuity of the function correct? I have the following statement: prove that $\sqrt{\log(x^2+7)}$ is continuous at $x=-4$. My development was: let $g(x)=\log(x)$ and $f(x)=x^2+7$. I will first prove that $\log(x^2+7)$ is continuous at $x=-4$: $\log(x^2 +7)$ is continuous at $x = -4 \iff \lim_{x\to -4}g(f(x)) = g(f(-4))$. Since $f$ is a polynomial, it is continuous at $-4 \in \mathbb{R}$. Also, $g(x)$ is continuous on $\mathbb{R^+}$, hence it is continuous at $f(-4) = 23 \in \mathbb{R^+}$. $(\star)$ With these conditions, I can say that $\lim_{x\to -4}g(f(x)) = g(f(-4))$ is true, i.e., $(g\circ f)(x)$ is continuous at $x = -4$. Now let $h(x) = \sqrt{x}$ and $p(x)= g(f(x)) = \log(x^2 + 7)$. Since $h$ is continuous on $\mathbb{R^+_0}$ and $p(-4) = \log(23) \in \mathbb{R^+_0}$, $h$ is continuous at $p(-4)$. Furthermore, using $(\star)$ I can affirm that $\lim_{x\to -4}h(p(x)) = h(p(-4)) = \sqrt{\log(23)}$, that is, $h(g(f(x))) = \sqrt{\log(x^2+7)}$ is continuous at $x=-4$. Is my development correct? Thanks in advance for taking the time to read this. AI: Your method is absolutely correct. Nicely done! I, however, like approaching this via the $\epsilon$ method. We note: $$\lim_{x\to a}f(x)=f(a)\iff \lim_{\epsilon\to0^+}f(a\pm\epsilon)=f(a).$$ Here: $$\lim_{\epsilon\to0^+} f(-4\pm\epsilon)=\lim_{\epsilon\to0^+}\sqrt{\log((-4\pm\epsilon)^2+7)}=\lim_{\epsilon\to0^+}\sqrt{\log(23+\epsilon^2\mp8\epsilon)}=\sqrt{\log(23)}=f(-4).$$
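For an independent check, assuming sympy is available:

```python
from sympy import symbols, sqrt, log, limit

x = symbols('x')
f = sqrt(log(x**2 + 7))
print(limit(f, x, -4))  # sqrt(log(23))
print(f.subs(x, -4))    # sqrt(log(23)): limit = value, so f is continuous at -4
```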
H: Let $U$ be orthogonal. How can I prove that $\|UA\|_2=\|A\|_2$? Let $U$ be orthogonal. How can I prove that $\|UA\|_2=\|A\|_2$? I know that $\|UA\|_2\le\|U\|_2\|A\|_2$, and I also know that since $U$ is orthogonal, $U^{-1}=U^T$. But I don't know what else to do... AI: Hint: since $U^TU=I$, we have $(UA)^T(UA)=A^TU^TUA=A^TA$. Now recall that $\|M\|_2^2=\lambda_{\max}(M^TM)$, the largest eigenvalue of $M^TM$, so $\|UA\|_2$ and $\|A\|_2$ are computed from the same matrix $A^TA$.
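A numerical spot check, assuming numpy is available (a random orthogonal $U$ can be produced by a QR factorization):

```python
import numpy as np

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # U is orthogonal
A = rng.standard_normal((5, 3))
# ord=2 gives the spectral norm (largest singular value)
print(np.allclose(np.linalg.norm(U @ A, 2), np.linalg.norm(A, 2)))  # True
```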
H: Probability of sum of IID variables I have $X_1, X_2$ two IID random variables and I know $P[X_1<\epsilon]=P[X_2<\epsilon]\le c$. Can I claim that $P[X_1+X_2<2\epsilon]=P[X_1<\epsilon]\le c$ I'm confused as it seems right but I don't know how to prove it. AI: No, it's not true. For example, consider tossing two fair dice, $X_1$ and $X_2$ the numbers appearing on the dice, and $\epsilon = 1.1$. $$P(X_1 + X_2 \le 2.2) = P(X_1 + X_2 = 2) = \frac{1}{36}$$ while $$P(X_1 < 1.1) = P(X_1 = 1) = \frac{1}{6} $$
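A quick simulation of the dice counterexample, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.integers(1, 7, size=1_000_000)  # fair die: values 1..6
x2 = rng.integers(1, 7, size=1_000_000)
print((x1 + x2 < 2.2).mean())  # ~ 1/36 = 0.0278
print((x1 < 1.1).mean())       # ~ 1/6  = 0.1667
```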
H: Can you claim $1+i>1$? The title says it all: I know you can claim $i+1>i$. But can you also claim $1+i >1$? If not, why can't I? AI: As mentioned in the comments, there is no way to "order" $\mathbb{C}$ in a way that is compatible with the operations of addition and multiplication that $\mathbb{C}$ is equipped with - $\mathbb{C}$ is not an ordered field with the usual operations. (A quick reason: in an ordered field the square of any nonzero element is positive, so we would have $-1 = i^2 > 0$, contradicting $1 > 0$.) What this means in practice is that we cannot compare the sizes of complex quantities in a meaningful way using $<$ or $>$; in fact, for the same reason, the claim $i+1>i$ is not valid either. One way we commonly get around this problem is to instead consider the relative magnitudes of complex quantities. For example, $|1+i|=\sqrt{2}>1=|i|$, and $|1+i|=\sqrt{2}>1=|1|$. This is perfectly valid, because we have moved away from $\mathbb{C}$ to $\mathbb{R}$, which is an ordered field and hence allows us to make these comparisons.
H: $\frac{dy}{dx} - 8 - 2x^2 + 4y^2 + y^2x^2 = 0$: how should I proceed from here? Having the equation $$\frac{dy}{dx} - 8 - 2x^2 + 4y^2 + y^2x^2 = 0,$$ I am getting to the following: $$\frac{1}{2^{3/2}}\ln \left(y+\sqrt{2}\right)-\frac{1}{2^{3/2}}\ln \left(y-\sqrt{2}\right)=\frac{x^3}{3}+4x+c.$$ From here I can write $\frac{1}{2^{3/2}}\ln\frac{y+\sqrt{2}}{y-\sqrt{2}}=\frac{x^3}{3}+4x+c$. How do I obtain an explicit (non-implicit) function for $y$? AI: $$\frac{dy}{dx} = 8+2x^2-4y^2-y^2x^2 = (4+x^2)(2-y^2)$$ $$\frac{dy}{2-y^2}=(4+x^2)dx \iff 2^{-3/2}\ln \Big(\Big \vert\frac{y+\sqrt{2}}{y-\sqrt{2}}\Big \vert\Big) = 4x+\frac{1}{3}x^3+C$$ And this is where you got to. Now we can multiply by $2^{3/2}$ (absorbing the constant into $C$ and noting $2^{3/2}\cdot 4x = 2^{7/2}x$) and take $\exp$ of both sides to get $$\Big \vert \frac{y+\sqrt{2}}{y-\sqrt{2}}\Big \vert= \exp \Big(2^{7/2}x+\frac{2^{3/2}}{3}x^3+C\Big)$$ (considering the positive case of the absolute value) $$y+\sqrt{2} = y\exp \Big(2^{7/2}x+\frac{2^{3/2}}{3}x^3+C\Big) - \sqrt{2}\exp \Big(2^{7/2}x+\frac{2^{3/2}}{3}x^3+C\Big)$$ $$y\Big(1-\exp \Big(2^{7/2}x+\frac{2^{3/2}}{3}x^3+C\Big)\Big) = -\sqrt{2}\Big( 1+\exp \Big(2^{7/2}x+\frac{2^{3/2}}{3}x^3+C\Big)\Big)$$ $$y=-\sqrt{2}\, \frac{1+\exp \Big(2^{7/2}x+\frac{2^{3/2}}{3}x^3+C\Big)}{1-\exp \Big(2^{7/2}x+\frac{2^{3/2}}{3}x^3+C\Big)}$$ if $|y| > \sqrt{2}$, and (by the same method, considering the negative case) $$y=-\sqrt{2}\, \frac{1-\exp \Big(2^{7/2}x+\frac{2^{3/2}}{3}x^3+C\Big)}{1+\exp \Big(2^{7/2}x+\frac{2^{3/2}}{3}x^3+C\Big)}$$ if $|y| < \sqrt{2}$.
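To double-check the explicit formula, assuming sympy is available, one can verify that the $|y|<\sqrt2$ branch (with $C=0$) satisfies the original equation:

```python
from sympy import symbols, exp, sqrt, Rational, simplify, diff

x = symbols('x')
E = exp(2**Rational(7, 2)*x + (2**Rational(3, 2)/3)*x**3)
y = -sqrt(2)*(1 - E)/(1 + E)
# Should be identically 0 if y solves y' = (4 + x^2)(2 - y^2):
print(simplify(diff(y, x) - (4 + x**2)*(2 - y**2)))  # 0
```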
H: How can we decide two elliptic curves over Q are isomorphic over Q? How can we decide which of the following elliptic curves over $\mathbb{Q}$ are isomorphic over $\mathbb{Q}$? $$E_1:y^2=x^3+1$$ $$E_2:y^2=x^3+2$$ $$E_3:y^2=x^3+x+1$$ AI: See Silverman's book, where this is answered. Namely, given $$E_i/K:y^2=x^3+a_ix+b_i,\qquad a_ib_i\ne 0,$$ with $\operatorname{char}(K)\ne 2,3$: since $E_i\cong y^2=x^3+c^4 a_ix+c^6 b_i$ for any $c\in K^*$, and $$j(E_i) = 1728\, \frac{-4 a_i^3}{-4a_i^3-27b_i^2},$$ it suffices to compare $j(E_i)\in K$ and $a_i/b_i\in K^*/K^{*2}$ to decide whether $E_i\cong E_l$ over $K$ (use $K^*/K^{*4}$ or $K^*/K^{*6}$ in the few degenerate cases $a_ib_i=0$, i.e. $j\in \{0,1728\}$). For the curves above: $j(E_1)=j(E_2)=0$ and $j(E_3)=6912/31$, so $E_3$ is not isomorphic to the other two even over $\overline{\mathbb{Q}}$. For $E_1$ and $E_2$ (the degenerate case $a_i=0$), an isomorphism over $\mathbb{Q}$ would require $b_2/b_1=2$ to be a sixth power in $\mathbb{Q}^*$, which it is not. So no two of these curves are isomorphic over $\mathbb{Q}$.
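Computing the $j$-invariants of the three given curves, assuming the short-Weierstrass formula $j = 1728\cdot 4a^3/(4a^3+27b^2)$:

```python
from fractions import Fraction

def j_invariant(a, b):
    # j-invariant of y^2 = x^3 + a*x + b over Q
    return Fraction(1728 * 4 * a**3, 4 * a**3 + 27 * b**2)

for name, (a, b) in {'E1': (0, 1), 'E2': (0, 2), 'E3': (1, 1)}.items():
    print(name, j_invariant(a, b))  # E1: 0, E2: 0, E3: 6912/31
```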