H: Vector Congruence (Beachy & Blair 2.2 - Equivalence Classes) proofs I need some help with exercise 10 in Chapter 2.2 of Beachy and Blair's Abstract Algebra with a Concrete Introduction. The question is as follows: Let $W$ be a subspace of a vector space $V$ over $\mathbb{R}$ (that is, the scalars are assumed to be real numbers). We say two vectors $u,v \in V$ are congruent modulo $W$ if $u-v \in W$, written $u \equiv v \pmod{W}$. Show that $\equiv$ is an equivalence relation. Show that if $r,s$ are scalars and $u_1, u_2, v_1, v_2$ are vectors in $V$ such that $u_1 \equiv v_1 \pmod{W}$ and $u_2 \equiv v_2 \pmod{W}$, then $ru_1 + su_2 \equiv rv_1 + sv_2 \pmod{W}$. Let $[u]_W$ denote the equivalence class of the vector $u$. Set $U = \{[u]_W \mid u \in V\}$. Define $+$ and $\cdot$ on $U$ by $[u]_W + [v]_W = [u+v]_W$ and $r \cdot [u]_W = [ru]_W$ for all $u,v \in V$ and $r \in \mathbb{R}$. Show that $U$ is a vector space with respect to these operations. Let $V = \mathbb{R}^2$ and let $W = \{ (x,0) \mid x \in \mathbb{R} \}$. Describe the equivalence class $[(x, y)]_W$ geometrically. Show that $T : \mathbb{R} \to U$ defined by $T(y) = [0,y]_W$ is a linear transformation and is one-to-one and onto. This is what I've got so far (verification on what I have would be great, but I need help on part 4): To prove that $\equiv$ is an equivalence relation, we first show that for $u \in V$, we have $u \equiv u \pmod{W}$ because $u - u = \textbf{0}$ and $\textbf{0} \in W$ since $W$ is a subspace, thus showing $\equiv$ is reflexive. Now, for $u,v \in V$ such that $u \equiv v \pmod{W}$, we have $u - v \in W$, thus we must have the inverse of $u - v$ also in $W$, that is $v - u \in W$ so that $v \equiv u \pmod{W}$, showing that $\equiv$ is symmetric. Lastly, for $u,v,w \in V$ such that $u \equiv v \pmod{W}$ and $v \equiv w \pmod{W}$, we have $u - v \in W$ and $v - w \in W$, thus by the closure of addition in $W$, we have $(u - v) + (v - w) \in W$ or $u - w \in W$, that is $u \equiv w \pmod{W}$, showing that $\equiv$ is transitive. If $u_1 \equiv v_1 \pmod{W}$ and $u_2 \equiv v_2\pmod{W}$, then we have $u_1 - v_1 \in W$ and $u_2 - v_2 \in W$. This means that $ru_1 - rv_1 \in W$ and $su_2 - sv_2 \in W$ for $r,s \in \mathbb{R}$ by closure under scalar multiplication and the distributive property of scalar multiplication over addition. This means that $ru_1 - rv_1 + su_2 - sv_2 \in W$ by the closure property of addition, or $(ru_1 +su_2) - (rv_1 + sv_2) \in W$ by the axioms of vector spaces. By definition, $(ru_1 +su_2) - (rv_1 + sv_2) \in W$ is $ru_1 + su_2 \equiv rv_1 + sv_2 \pmod{W}$. We know that $U$ must be closed under $+$ and $\cdot$ because $V$ is a vector space. We prove the commutativity of addition by seeing $[u]_W + [v]_W = [u + v]_W$ and $[v]_W + [u]_W = [v + u]_W$, but since vector addition is commutative in $V$, $[v + u]_W = [u + v]_W$, thus $[u]_W + [v]_W = [v]_W + [u]_W$. To show associativity of addition, we see for $u,v,w \in V$, $([u]_W + [v]_W) + [w]_W = [u + v]_W + [w]_W = [(u + v) + w]_W$ and $[u]_W + ([v]_W + [w]_W) = [u]_W + [v + w]_W = [u + (v + w)]_W$ and since $(u + v) + w = u + (v + w)$ by the associativity of addition in $V$, we have $([u]_W + [v]_W) + [w]_W = [u]_W + ([v]_W + [w]_W)$. Now we prove that $U$ has an additive identity: for $\textbf{0}_V, u \in V$, we have $[ \textbf{0}_V ]_W + [u]_W = [u]_W + [\textbf{0}_V]_W = [u + \textbf{0}_V]_W = [\textbf{0}_V + u]_W = [u]_W$, thus showing that $[ \textbf{0}_V ]_W$ is the additive identity in $U$.
We continue by showing that $1 \in \mathbb{R}$ is the multiplicative identity in $U$: we see that for $u \in V$, $ 1 \cdot [u]_W = [1 \cdot u]_W = [u]_W$, thus $1$ is the multiplicative identity in $U$. Next, we show that there is an additive inverse in $U$: for $u \in V$, we have $-u \in V$, the additive inverse of $u$ in $V$, thus we see $[u]_W + [-u]_W = [-u]_W + [u]_W = [u - u]_W = [-u + u]_W = [\textbf{0}_V]_W$ which shows that $[-u]_W$ is the additive inverse of $[u]_W$ in $U$. Now we show the compatibility of scalar multiplication in $U$: for $r,s \in \mathbb{R}$ and $u \in V$, we have $(rs) \cdot [u]_W = [rsu]_W$ and $r \cdot (s \cdot [u]_W) = r \cdot [su]_W = [rsu]_W$, thus $(rs) \cdot [u]_W = r \cdot (s \cdot [u]_W)$. To show the distributive property of scalar multiplication over addition, we see that for $r \in \mathbb{R}$ and $u,v \in V$, $r \cdot ( [u]_W + [v]_W ) = r \cdot [u + v]_W = [r (u + v)]_W = [ru + rv]_W = [ru]_W + [rv]_W = r \cdot [u]_W + r \cdot [v]_W$. Lastly, to show that scalar multiplication distributes over field addition, we see that for $r,s \in \mathbb{R}$ and $u \in V$, $(r + s) \cdot [u]_W = [(r + s)u]_W = [ru + su]_W = [ru]_W + [su]_W = r \cdot [u]_W + s \cdot [u]_W$. The equivalence class $[ (x, \ y)]_W$ is the set of vectors in $\mathbb{R}^2$ that can be mapped via a shear transformation to the vector $(x, \ y) \in \mathbb{R}^2$ (that is, all vectors in $\mathbb{R}^2$ that have the same $y$ value). To prove that $T$ is a linear transformation, we see that for $y_1, y_2 \in \mathbb{R}$, $T( y_1 + y_2 ) = [ (0, \ y_1 + y_2) ]_W = [ (0, \ y_1) + (0, \ y_2) ]_W = [ (0, \ y_1) ]_W + [ (0, \ y_2) ]_W = T(y_1) + T(y_2)$ and for $c,y \in \mathbb{R}$, we have $T(cy) = [ (0, \ cy) ]_W = [ c(0, \ y) ]_W = c \cdot [ (0, \ y) ]_W = cT(y)$, thus showing that $T$ is a linear transformation. To show that $T$ is one-to-one, assume that $T(y_1) = T(y_2)$ for $y_1,y_2 \in \mathbb{R}$, thus we have $[ (0, \ y_1) ]_W = [ (0, \ y_2) ]_W$; but these classes are equal only if $(0, \ y_1) - (0, \ y_2) = (0, \ y_1 - y_2) \in W$, which forces $y_1 = y_2$. I don't know how to show that $T$ is onto. (I hope I didn't make any typos) Thanks! AI: Your work for 1.-3. is all correct. For 4., observe that $(x,y)\equiv (0,y)\pmod W$ for any $(x,y)\in\Bbb R^2$, and thus, on one hand, the equivalence class of $(x,y)$ is the same as that of $(0,y)$, which is the horizontal line $\{(a,y):a\in\Bbb R\}$. (Note that the congruence classes modulo any subspace $W$ are always the affine subspaces parallel to $W$, that is, the shifts $W+v$ of $W$ with all possible vectors $v$.) On the other hand, it means $[(x,y)]_W=[(0,y)]_W=T(y)$, and since $(x,y)$ was arbitrary, it shows surjectivity of $T$.
H: Does Fubini's Theorem hold for the Polar Form of Double Integrals? Simply put, does Fubini's Theorem hold for double integrals expressed in polar terms? I.e., does the following hold given that all integration bounds are constants: $$ \int_\alpha^\beta \int_a^b f(r, \theta) \, r \, \text{d}r \, \textrm{d}\theta \, \, \, \stackrel{?}{=} \, \, \, \int_a^b\int_\alpha^\beta f(r, \theta) \, r \, \textrm{d}\theta \, \text{d}r $$ AI: Fubini's Theorem applies to any integral of the form $$\iint f\,du\,dv$$ as far as the two dimensional case is concerned (the same applies to any dimension). In other words, Fubini's theorem holds for any function of the variables $u$ and $v$, independently of what those represent (whether they are polar or Cartesian coordinates). In reality, Fubini's theorem is about functions of multiple variables, and it treats those variables as abstract quantities. The same applies to any theorem of multivariable calculus. So, you must see those variables as changing quantities, independent of one another, and avoid, when possible, the geometric interpretation.
H: Using Rouche's Theorem to find the number of solutions of $f(z)=z$ in the open unit disc How many roots does the equation $f(z)=z$ have in the circle $|z|<1$ if for $|z|\leq 1$, $f(z)$ is analytic and satisfies $|f(z)|<1$? My idea: I figured I could do this pretty easily using Rouche: Consider $|z|=1$, and let $g(z)=z$, then $|f(z)-g(z)|=z-z=0<1=|g(z)|$. So, since $g$ has only $1$ root in $|z|<1$, then so does $f$. I just feel like I am missing something. In particular, can I define $g$, and use it, in the way I did? Any thoughts are greatly appreciated! Thank you. AI: Let $h(z) = f(z)-z$, $g(z) = z$, then for $|z|=1$ we have $|h(z)+g(z)| = |f(z)| < 1 = |g(z)|$, hence $h,g$ have the same number of zeroes inside the circle. Since $g$ has exactly one zero, so does $h$.
H: Solve the equation $\frac{1}{x^2+11x-8} + \frac{1}{x^2+2x-8} + \frac{1}{x^2-13x-8} = 0$ Problem Solve the equation $$\frac{1}{x^2+11x-8} + \frac{1}{x^2+2x-8} + \frac{1}{x^2-13x-8} = 0$$ What I've tried First I tried factoring the denominators but only the second one can be factored as $(x+4)(x-2)$. Then I tried substituting $y = x^2 - 8$ but that didn't lead me anywhere. Where I'm stuck I don't know how to start this problem. Any hints? P.S. I would really appreciate it if you give me hints or at least hide the solution. Thanks for all your help in advance! AI: You started very well. To make things easier, set $A=x^2+7x-8$ (*) and the equation rewrites $$\frac{1}{A+4x} + \frac{1}{A-5x} + \frac{1}{A-20x} = 0.$$ Denominators are not allowed to be zeros. We solve $$(A-5x)(A-20x)+(A+4x)(A-20x)+(A+4x)(A-5x)=0$$ or equivalently $$3A^2-42Ax=0,$$ or even $$3(x^2+7x-8)(x^2-7x-8)=0,$$ which is easy to finish. (*) I noticed that $x=1$ satisfies, and decided to make profit from it.
H: How do two conjugate elements of a group have the same order? I'm reading group action in textbook Algebra by Saunders MacLane and Garrett Birkhoff. I have a problem of understanding the last sentence: Since conjugation is an automorphism, any two conjugate elements have the same order. Assume $x,y \in G$ are conjugate, then they are equivalent. As such, $gxg^{-1} = y$ for some $g \in G$. This means $gx = yg$. From here, I could not get how $x,y$ have the same order. Could you please elaborate on this point? AI: Mac Lane and Birkhoff are saying that it's not obvious (at least not directly) that $x$ and $gxg^{-1}$ have the same order. But once we know that $x \mapsto gxg^{-1}$ is an automorphism then it becomes obvious, since all automorphisms preserve order. To see why, let $\varphi : G \to G$ be an automorphism. Then let $x \in G$ have order $n$, and let $\varphi x$ have order $m$. Now $$(\varphi x)^n = \varphi (x^n) = \varphi e = e$$ So $m$ divides $n$. Similarly, $$(\varphi^{-1} \varphi x)^m = \varphi^{-1}((\varphi x)^m) = \varphi^{-1} e = e$$ And $n$ divides $m$ too, so they must be equal. There is also a direct computational proof for the conjugation isomorphism. It is basically the exact same proof as above, but writing $gxg^{-1}$ everywhere I wrote $\varphi$ above. I encourage you to try to prove it yourself! I hope this helps ^_^
H: The same result for $\mathbb{C}$ is true for algebraically closed field? The following result about polynomials is known: Proposition: Let $K$ be a subfield of $\mathbb{C}$, $f(x) \in K[x]$ a polynomial with degree $n \geq 1$ and $\alpha \in \mathbb{C}$ a root of $f(x)$. Then a) $\alpha$ is a simple root of $f(x)$ $\iff$ $f(\alpha) = 0$ and $f'(\alpha) = 0$; b) if $f(x)$ is irreducible over $K$ then all the roots of $f(x)$ are simple. Is this result true if we exchange $\mathbb{C}$ for any algebraically closed field? AI: I believe you want to say that $\alpha$ is a simple root implies that $f(\alpha)=0, f'(\alpha)\neq 0$. Part b) is not true if the characteristic is $p$: take $f(X)=X^p-a$ with $a$ not a $p$-th power in $K$; then $f$ is irreducible over $K$, yet $X^p-a=(X-\alpha)^p$ where $\alpha^p=a$, so its only root is not simple.
H: Proof of the Fundamental Theorem of Algebra: filling in some intermediate steps I'm familiar with Rouche's theorem in the following form: If $f, g$ are analytic on a domain $\Omega$ with $|g(z)| < |f(z)|$ on $\partial \Omega$, then $f$ and $f+g$ have the same number of zeros in $\Omega$. I'm walking through how to use this to prove the fundamental theorem of algebra, but am stuck on how the "usual" lower bound is used and where it comes from. The argument goes something like this: Suppose $p(z) = a_n z^n + \cdots + a_1 z + a_0$ Set $f(z) = a_n z^n$ Set $g(z) = a_{n-1}z^{n-1} + \cdots + a_1 z + a_0$ Choose $R$ large enough such that $$R > \max\left( {|a_{n-1}| + \cdots + |a_1| + |a_0| \over |a_n| }, 1\right)$$ Then $|g(z)| < |f(z)|$ on the circle $|{z}|= R$, so apply Rouche and note that $z^n$ has $n$ zeros at $z_0 = 0$, which is in this region. I am stuck on part 4: I buy that such an $R$ can be chosen, since these coefficients are fixed. What I can't work out is the explicit inequality that shows 4 $\implies$ 5. The argument I'm looking for would essentially fill in the vdots in this chain of inequalities: $$\begin{align*} |g(z)| &:= |a_{n-1}z^{n-1} + \cdots + a_1 z + a_0 | \\ &\leq |a_{n-1}z^{n-1}| + \cdots + |a_1 z| + |a_0 | \quad\text{by the triangle inequality} \\ &= |a_{n-1}|\cdot |z^{n-1}| + \cdots + |a_1|\cdot| z| + |a_0 | \\ & \quad\vdots \quad ? \\ &\leq |a_n z^n| \\ &:= |f(z)| \end{align*}$$ I would prefer a proof that doesn't necessarily assume that $p$ is monic, since I'd like to trace $a_n$ throughout the inequality. I would also welcome answers that show how you might run this problem backwards, i.e. writing out some inequalities in order to deduce what $R$ should be. AI: $$\begin{align*} |a_{n-1}|\cdot |z^{n-1}| + \cdots + |a_1|\cdot| z| + |a_0 | &= |a_{n-1}|\cdot R^{n-1} + \cdots + |a_1| R + |a_0 | \\ &\leq |a_{n-1}|\cdot R^{n-1}+|a_{n-2}|\cdot R^{n-1} + \cdots + |a_1| \cdot R^{n-1} + |a_0 |\cdot R^{n-1} \qquad\text{(1)}\\ &= R^{n-1} \left( |a_{n-1}| + |a_{n-2}| + \cdots + |a_1| + |a_0| \right) \\ &\leq R^{n-1} \left( |a_n|\cdot R \right) \qquad\text{(2)}\\ &= R^{n} |a_n| \end{align*} $$ In inequality (1) I have used the fact that $R^{i} \leq R^{n-1}$ for $0 \leq i \leq n-1$ and inequality (2) follows by choice of $R$.
H: Denotation of the range of a function using its definition Is this expression allowed in a strict sense? Consider a function $f:[a,b] \rightarrow [f(a),f(b)]$ defined by $f(x) = x$. What I mean by it is this: let $g:[a,b]\rightarrow \mathbb{R}$ be defined by $ g(x) = x $. Consider $f:[a,b]\rightarrow g([a,b])$ where $f(x) = g(x)$ for all $x \in [a,b]$. In summary, when the values of a function are explicitly defined, can I express the range of the function using its values? In the example, I wanted to make $f$ surjective. AI: I have not seen that notation used in practice before. However, I have seen similar notation used in topology to denote a chart map: where $M$ is a manifold and $U \subseteq M$, let $x: U \to x(U)$ be a chart map such that $x(U) \subseteq \mathbb{R}^d$. One thing to keep in mind is the relation between the two sets, i.e., the map $x : U \to x(U)$ is always given before the rule that defines it, e.g. $x(t) := t^{2}$ $\forall t \in U$. With that in mind, the order in which you have presented the information about the map $f$ is a little unconventional.
H: Calculate $\int_{-2}^{2}\ln(x+\sqrt{1+x^2})\ln(1+x^2)dx$ $\int_{-2}^{2}\ln(x+\sqrt{1+x^2})\ln(1+x^2)dx$ My work: The original integral $=-\int_{-2}^{2}\ln(-x+\sqrt{1+x^2})\ln(1+x^2)dx$; I think maybe we can exploit some integral properties related to odd functions by playing with the bounds, but I don't see it. EDIT: The original integral $=-\int_{-2}^{2}\ln(-x+\sqrt{1+x^2})\ln(1+x^2)dx$. I accidentally made a mistake in my original step. It is actually $ \int_{-2}^{2} \ln( x + \sqrt{1+x^2} ) \ln( 1+x^2) dx$ AI: Let $u=-x$, so $-du=dx$: $$\int_{2}^{-2} -\ln{\left(\sqrt{1+u^2}-u\right)} \ln{\left(1+u^2\right)} \; du$$ Now, use the $-1$ to switch the bounds of the integral: $$\int_{-2}^2 \ln{\left(\sqrt{1+u^2}-u\right)} \ln{\left(1+u^2\right)} \; du$$ Now, add the original integral $I$ to the integral above: $$2I=\int_{-2}^2 \ln{\left(1+x^2\right)}\bigg[\ln{\left(\sqrt{1+x^2}-x\right)}+\ln{\left(\sqrt{1+x^2}+x\right)} \bigg] \; dx$$ Use log properties to get: $$2I=\int_{-2}^2 \ln{\left(1+x^2\right)} \left(\ln{1} \right) \; dx$$ $$2I=0$$ $$\boxed{I=0}$$
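As a quick numerical cross-check of the answer above (the integrand is odd, since $\ln(x+\sqrt{1+x^2})$ is odd and $\ln(1+x^2)$ is even, so the integral over $[-2,2]$ should vanish), here is a small Python sketch:

```python
# Numerical sanity check: the integral over [-2, 2] should be 0 up to rounding.
from scipy.integrate import quad
import numpy as np

f = lambda x: np.log(x + np.sqrt(1 + x**2)) * np.log(1 + x**2)
value, err = quad(f, -2, 2)
print(value)  # ~1e-16, i.e. zero up to numerical error
```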
H: Diameter of a ball in a metric normed space Could you help me with the following please: Prove that the diameter of a ball in a normed space is twice its radius. My attempt: $diam(A)=\sup \{d(x,y): x,y\in A\}$ The first inequality is evident: $diam(B_\epsilon(a))\leq 2\epsilon$, but for the second I have the following: Suppose that $diam(B_\epsilon(a))< 2\epsilon$; then there exists $k$ such that $diam(B_\epsilon (a))<k<2\epsilon$. We choose $z$ nonzero and define $x=a+\dfrac{kz}{2||z||},\ y=a-\dfrac{kz}{2||z||}$; in addition we have that $||x-a||=\dfrac{k}{2}<\epsilon,\ ||y-a||= \dfrac{k}{2}<\epsilon$ and $||x-y||=k$ Here it is mentioned that this contradicts the definition of diameter and I would like you to help me understand where the contradiction is, or if you can think of any other demonstration, thank you. AI: For any $x,y \in B_\epsilon (a)$, $||x - y|| \leq \rm{diam}\, B_\epsilon (a)$. So, $k = ||x - y|| \leq \rm{diam}\, B_\epsilon(a) < k$, a contradiction.
H: Optimize a multivariable function when the constraint is an inequality. I know how to optimize functions with given constraints when those constraints are equalities, i.e. $g(x, y) = c$ with Lagrange multipliers. However, I have a problem where the constraint is an inequality and I'm wondering how (or if) I could do this. I'm trying to minimize the function $f(x, y) = (4x^2 + y^2)/4xy$ with $y > x > 0$. I'm aware that there are probably simpler ways to do this, but I'd like to know if it's doable with direct partial differentiation. Is there a way to reduce this to a typical problem with a constraint that can be solved with Lagrange multipliers? AI: The standard approach is to introduce a new "slack" variable for each constraint such that the inequality becomes an equality when written in terms of the slack variable. In your case, since the inequalities are strict, we have $x>0\Leftrightarrow \exists s\neq 0,\ x=s^2$ and $y>x\Leftrightarrow \exists t\neq 0,\ y-x=t^2$, so the problem is equivalent to solving the 4-variable optimization in the variables $x,y,s,t$ subject to the constraints $x=s^2, y-x=t^2$ (with $s,t\neq 0$).
H: prove that if $E$ is connected and $E \subseteq F \subseteq \overline{E}$, then $F$ is connected. Define a set $A$ to be disconnected iff there exist nonempty relatively open sets $U$ and $W$ in $A$ with $U\cap W = \emptyset$ and $A = U\cup W.$ Define a set $A$ to be connected iff it is not disconnected.(there are many equivalent definitions, but I want to prove this lemma using this one). Prove that if $E$ is connected and $E\subseteq F \subseteq \overline{E},$ then $F$ is connected. Let $U, W$ be a separation for $F$. Find open sets $O_U$ and $O_W$ so that $U = F \cap O_U$ and $W = F\cap O_W.$ I claim that $E\cap O_U, E\cap O_W$ separate $E$. However, I'm unable to show that $U' = E\cap O_U, W' =E\cap O_W \neq \emptyset$ (I think this should be straightforward, but for some reason, I can't figure this out). Suppose $U' = \emptyset.$ Then $E\cap O_U = \emptyset.$ Since $E \subseteq F = U\cup W = F\cap (O_U \cup O_W)\subseteq O_U\cup O_W,$ we have that $E \subseteq O_W,$ so $E\cap O_W = E\subseteq F\cap O_W = W\subseteq F\subseteq \overline{E}.$ Observe that since $E\cap O_U = \emptyset, F\cap O_U = (F\backslash E)\cap O_U\subseteq F\backslash E.$ Similarly, $W'\neq \emptyset.$ Clearly, $U', W'$ are relatively open in $E$. Suppose $U'\cap W' \neq \emptyset.$ Let $x\in U'\cap W'.$ Then $x\in E\cap O_U\cap O_W \subseteq F\cap O_U\cap O_W = U\cap W = \emptyset,$ a contradiction. So $U'\cap W' = \emptyset.$ Also, $U'\cup W' = (E\cap O_U)\cup (E\cap O_W) = E\cap (O_U \cup O_W)$ and $E\subseteq (O_U \cup O_W),$ so $U'\cup W' = E.$ AI: This follows from the following result: Theorem: If $Y$ is a connected subset of a topological space $X$, then $\overline{Y}$ is connected. Here is a short proof Suppose $\overline{Y}$ is the union of two disjoint clopen sets $A$ and $B$ in $\overline{Y}$. Then $A\cap Y$ and $B\cap Y$ are clopen in $Y$. Hence, either $A\cap Y=\emptyset$ or $B\cap Y=\emptyset$. Suppose $Y\cap B=\emptyset$. Then $Y\subset A$ and so, $\overline{Y}=\overline{A}=A$ since $A$ is closed in $\overline{Y}$. Thus, $B=\emptyset$. In your case, if $E\subset F\subset \overline{E}$ and $E$ is connected, then the closure of $E$ relative to $F$, given by $\overline{E}\cap F=F$, is connected.
H: $|f(z)| \leq \frac{1}{1-|z|}$ implies $|f'(z)| \leq \frac{4}{(1 - |z|)^2}$ I have the following question: Suppose that $f$ is analytic in the unit disk and $|f(z)| \leq \frac{1}{1-|z|}$. Prove that $|f'(z)| \leq \frac{4}{(1 - |z|)^2}$ in the open unit disk. I am not really sure where to start. Any help would be appreciated. AI: Fix $z$, $|z|=r$, $0 \le r <1$. The idea for this is to apply Cauchy on a disc centered at $z$ of some radius $\rho < 1-r$ so it is contained in the unit disc, and then minimize the estimate over $\rho$ to find the best such. Concretely $2\pi if'(z)=\int_{|w-z|=\rho}\frac{f(w)}{(w-z)^2}dw$, and since $|f(w)| \le 1/(1-|w|) \le 1/(1-(r+\rho))$ on the integration circle by hypothesis (as $|w| \le r+\rho$ there), while $\int_{|w-z|=\rho}\frac{|dw|}{|w-z|^2}=\frac{2\pi\rho}{\rho^2}=\frac{2\pi}{\rho}$, we immediately get: $|f'(z)| \le \frac{1}{(1-(r+\rho))\rho}, 0 < \rho < 1-r$. A simple AM-GM inequality (or calculus or pick your favorite method) shows that $\frac{1}{(1-(r+\rho))\rho}$ is smallest when the two factors in the denominator are equal, or $\rho=\frac{1-r}{2}$, giving by substitution precisely that: $|f'(z)| \le \frac{4}{(1-r)^2}=\frac{4}{(1-|z|)^2}$ so we are done!
H: If $f$ is odd and periodic then a translation of $f$ is even? Let $f: \mathbb{R} \longrightarrow \mathbb{R}$ be an odd and periodic function, with period $L>0$. If we define $$g(x):=f\left(x-\frac{L}{2}\right), \; \forall \; x \in \mathbb{R},$$ then is $g$ even? I tried to prove it, as follows: let $x \in\mathbb{R}$ be arbitrary. Thus, $$g(-x)=f\left(-x-\frac{L}{2}\right)=f\left(-\left(x+\frac{L}{2}\right)\right)=-f\left(x+\frac{L}{2}\right).$$ But I couldn't conclude that $g(-x)=g(x)$. Is this true in general? Is what I did right? AI: From your last line, $$g(-x)=-f(x+L/2)$$ $$=-f(x-L/2+L)$$ $$=-f(x-L/2)=-g(x)$$ $g$ is in fact odd.
H: How to compute a simple sum In the book I am reading the author left some exercises for the reader, and I happened to get stuck on this sum $$\sum_{n=1}^k{n!(n^2+n+1)}$$ So far I have tried to factorize the polynomial, and also tried to split the sum. I know how to compute $n\cdot n!$, but I have no idea about the other terms; any help is appreciated! AI: Note that $$\sum_{n=1}^k n!(n^2+n+1)=\sum_{n=1}^k n![(n+1)^2-n]=\sum_{n=1}^k [(n+1)!(n+1)-n!n]$$ This is a telescoping sum
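Since the sum above telescopes, it collapses to $(k+1)!\,(k+1)-1$ (the last term minus the first). A short Python sketch checking this closed form for a few values of $k$:

```python
# Check the closed form implied by the telescoping sum:
# sum_{n=1}^{k} n!(n^2 + n + 1) = (k+1)! (k+1) - 1
from math import factorial

for k in (1, 2, 5, 10):
    lhs = sum(factorial(n) * (n**2 + n + 1) for n in range(1, k + 1))
    rhs = factorial(k + 1) * (k + 1) - 1
    print(k, lhs, rhs, lhs == rhs)  # always True
```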
H: How can I find a Fraisse paper I want to see Fraisse's "Sur certaines relations qui généralisent l'ordre des nombres rationnels". But at https://gallica.bnf.fr/ark:/12148/bpt6k3188h, the search returns nothing. I wonder how to find this paper. AI: I believe it is here. (EDIT: as kimchi lover comments, the issue seems to be that you want volume $237$ instead of $236$.) Note that the "paper" itself, if this is indeed the right citation, seems to really just be an announcement of results.
H: How do I integrate $\int\frac{x^3}{\sqrt{1+x^2}}$ by parts? So I've just recently begun integration and now we're doing integration by parts. We've been told about ILATE (some sort of acronym to help us remember which function to integrate and which to differentiate). But that aside, can we perform integration by parts for two functions of the same type? For example, $$\int\frac{x^3}{\sqrt{1+x^2}}.$$ In the above problem, can I consider $x^3$ as one part and $\dfrac{1}{\sqrt{1+x^2}}$ as the other part? Since then, if I choose to integrate the second function, there is a standard integral, and then differentiating $x^3$ is no big deal. Am I thinking right? Or is there some fundamental mistake in my approach? AI: The trick to using integration by parts is to be able to resolve the resulting integral more easily than the one you started with. You view the original integrand as a product of two terms. Then you think about which term to differentiate and which to integrate. The point is that you'll need to integrate the new product of the differentiated and integrated terms easily. That's, in a sense, the "limiting step" in IBP. With your proposed method you need to integrate something of the form $x^2 \ln\left(x+\sqrt{1+x^2}\right)$, which is hardly easier than what you started with. Think slightly more creatively. $x^3 = x^2 \cdot x$. Now $\frac x{\sqrt {1+x^2}} $ is easy to integrate, think of the form $g'(x) \cdot f(g(x)) $. Obviously $x^2$ is trivial to differentiate. What's most important is that the product you end up with, of the form (ignoring constant multipliers) $x\cdot \sqrt {1+x^2}$ is also easy to integrate - again, think of $g'(x) \cdot f(g(x)) $. That's the key. Break up the integrand into a product of two terms. Get creative in how you do this. You have to differentiate one term, integrate the other and the product needs to be easily integrable.
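For reference, carrying the suggested split $x^3=x^2\cdot x$ through to the end gives the antiderivative $\frac13(1+x^2)^{3/2}-\sqrt{1+x^2}+C$ (this closed form is my own completion of the hint, not stated in the answer above). The short sympy sketch below just checks that it differentiates back to the integrand:

```python
# Verify that F(x) = (1/3)(1+x^2)^(3/2) - sqrt(1+x^2) has derivative x^3 / sqrt(1+x^2).
import sympy as sp

x = sp.symbols('x')
F = (1 + x**2)**sp.Rational(3, 2) / 3 - sp.sqrt(1 + x**2)
print(sp.simplify(sp.diff(F, x) - x**3 / sp.sqrt(1 + x**2)))  # 0
```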
H: Proving a bound on the difference between expected value of a continuous random variable and the expected value sampled on all integers I need to prove that for a random variable $X$ with $$ X \geqslant 0 $$ it's true that $$ \sum_{n=1}^{\infty} P(X \geqslant n) \leqslant E[X] \leqslant 1 + \sum_{n=1}^{\infty} P(X \geqslant n) $$ I proved that $$ E[X] = \int_{0}^{\infty} P(X \geqslant t) dt $$ so I have the first part of the inequality proved by comparing the integral with the sum. I'm not sure how to go about proving the other part, i.e. $$ E[X] \leqslant 1 + \sum_{n=1}^{\infty} P(X \geqslant n) $$ I was trying to move some things around and reason why $\int_{0}^{\infty} P(X \geqslant t) dt - \sum_{n=1}^{\infty} P(X \geqslant n) \leqslant 1$ but I didn't get far. I was thinking of (somehow) using the fact that the integral of the pdf of $X$ is bounded by 1 but no luck. Any suggestions for how to get started on this? AI: Split the integral up and estimate the probability $P(X\ge t)$ for $t\in[n,n+1]$ from above by $P(X\ge n)$. \begin{align*} EX &= \int_0^1P(X\ge t)\,dt + \int_1^\infty P(X\ge t)\,dt\\ &\le 1 + \sum_{n=1}^\infty\int_{n}^{n+1}P(X\ge t)\,dt \\ &\le 1 + \sum_{n=1}^\infty\int_n^{n+1}P(X\ge n)\,dt \\ &= 1 + \sum_{n=1}^\infty P(X\ge n) \end{align*}
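As a concrete illustration of the two bounds above, take $X\sim\mathrm{Exp}(1)$: then $E[X]=1$ while $\sum_{n\ge1}P(X\ge n)=\sum_{n\ge1}e^{-n}=1/(e-1)\approx0.582$, so the sandwich reads $0.582\le 1\le 1.582$. A small sketch:

```python
# Illustrate the sandwich  sum_{n>=1} P(X>=n) <= E[X] <= 1 + sum_{n>=1} P(X>=n)
# for X ~ Exp(1), where P(X >= n) = e^{-n} and E[X] = 1.
import math

s = sum(math.exp(-n) for n in range(1, 200))  # tail terms are negligible long before n = 200
print(s, "<=", 1.0, "<=", 1 + s)              # ~0.582 <= 1 <= ~1.582
```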
H: Find the basis for column space $A=\left[\begin{smallmatrix}1&-1&3\cr 5&-4&-4\cr 7&-6&2\end{smallmatrix}\right]$ Find the basis for column space $$A=\begin{bmatrix}1&-1&3\cr 5&-4&-4\cr 7&-6&2\end{bmatrix}.$$ I'm quite confused because I thought there were two methods: using the transpose or not. By not using the transpose of $A$ and computing the RREF, I would get $$A=\begin{bmatrix}1&0&-16\cr 0&1&-19\cr 0&0&0\end{bmatrix},$$ for which I find that columns 1 and 2 have leading 1s. Therefore, I would go to the original matrix's columns 1 and 2 and say the basis for the column space is $\begin{bmatrix}1 & 5 & 7\end{bmatrix}$ and $\begin{bmatrix}-1 & -4 & -6 \end{bmatrix}.$ However, my textbook solution did the transpose method to get $$A^T=\begin{bmatrix}1&5&7\cr -1&-4&-6\cr 3&-4&2\end{bmatrix},$$ and the reduced form is $$A=\begin{bmatrix}1&5&7\cr 0&1&1\cr 0&0&0\end{bmatrix}.$$ The book got $\langle 1,5,7 \rangle$ and $\langle 0,1,1 \rangle;$ however, if we reduce it further, we get $\langle 1,0,2 \rangle$ and $\langle 0,1,1 \rangle.$ I'm not sure whether the textbook sol of $\langle 1,5,7 \rangle$ and $\langle 0,1,1 \rangle$ vs. $\langle 1,0,2 \rangle$ and $\langle 0,1,1 \rangle$ is right. And why are all these numbers different from finding the corresponding columns from the original matrix? Is this method wrong to begin? AI: Both of these solutions give rise to the same basis. Observe that for $v_1 = (1, 5, 7)$ and $v_2 = (-1, -4, -6),$ we have that $v_3 = v_1 + v_2 = (0, 1, 1),$ from which it follows that $\operatorname{span}_k \{v_1, v_2 \} = \operatorname{span}_k \{v_1, v_3 \}.$ Of course, we could go one step further to see that $v_4 = v_1 - 5 v_3 = (1, 0, 2)$ so that $\operatorname{span}_k \{v_1, v_3 \} = \operatorname{span}_k \{v_3, v_4 \}.$
H: Which Greek letter is commonly used to represent a count? Which Greek letter is commonly used to represent a count? For example, the Greek letter sigma ($\Sigma$) is commonly used to represent a sum. AI: While your "count" is quite vague, here is a possibility: The count of elements in a set: What symbol gives the count of elements in a set? The latin letter N is, as pointed out in the comments, also sometimes used.
H: Let $f$ be a continuous function on $\mathbb{R}$ satisfying $\int_\mathbb{R}|f(x)|dx<\infty$. Can we conclude that $\sum_\mathbb{Z}|f(k)|<\infty$? Let $f$ be a continuous function on $\mathbb{R}$ satisfying $$\int_\mathbb{R}|f(x)|dx<\infty.$$ Can we conclude that $$\sum_\mathbb{Z}|f(k)|<\infty?$$ Note: Continuity is necessary otherwise $f=\chi_\mathbb{Z}$ would give a counterexample. AI: No; having finite integral does not constrain how $f$ behaves on the integers (a set of measure zero). To give a smooth counterexample, we just want to "smooth out" your $\chi_{\mathbb{Z}}$ example. Let $\phi$ be a smooth bump function on $\mathbb{R}$ with compact support in $(-1/2, 1/2)$, with $\phi(0) = 1$, and with $\int_{-1}^1 \phi(x)\, dx = 1$. For $n \in \mathbb{Z}\setminus\{0\}$ define $\phi_n(x) = \phi(n^2(x - n))$. These are narrow versions of $\phi$. They are supported in the interval $\left(n - \frac{1}{2n^2}, n + \frac{1}{2n^2}\right)$. Then set $$ f(x) = \sum_{n \in \mathbb{Z}\setminus\{0\}} \phi_n(x). $$ Note that for each $x$, the sum is actually finite since the supports of $\phi_n$ are all disjoint. Then $f(n) = 1$ for each nonzero $n \in \mathbb{Z}$ so certainly the sum over the integers isn't finite. However we can compute that $$ \int_{\mathbb{R}} \phi_n(x) \, dx = n^{-2}\int_{\mathbb{R}}\phi(y) dy = n^{-2} $$ and thus $$ \int_{\mathbb{R}} f(x) \, dx = \sum_{n \in \mathbb{Z}\setminus\{0\}}\int_{\mathbb{R}} \phi_n(x)\, dx = \sum_{n \in \mathbb{Z}\setminus\{0\}} \frac{1}{n^2} < \infty. $$ To get your desired conclusion from the hypotheses, we'd have to suppose some monotonicity on $|f(x)|$, I believe.
H: How can I find $\int _0^{\infty }\ln ^n\left(x\right)\:e^{-ax^b}\:dx$ I tried using certain substitutions like $u=ax^b$ but that led to $\displaystyle\frac{1}{a^{\frac{1}{b}}b^{n+1}}\int _0^{\infty }e^{-u}\:\ln ^n\left(\frac{u}{a}\right)u^{\frac{1}{b}-1}du\:$ I tried to use special functions to evaluate this but that $\ln ^n\left(\frac{u}{a}\right)$ is very annoying; I'd appreciate any help. AI: You can start using the following identity, $$\int _0^{\infty }x^m\:e^{-ax^b}\:dx=\frac{\Gamma \left(\frac{m+1}{b}\right)}{b\:a^{\frac{m+1}{b}}}$$ You can now differentiate both sides $n$ times with respect to $m$ and then let $m\to 0$, $$\int _0^{\infty }x^m\:\ln ^n\left(x\right)\:e^{-ax^b}\:dx=\frac{\partial ^n}{\partial m^n}\frac{\Gamma \left(\frac{m+1}{b}\right)}{b\:a^{\frac{m+1}{b}}}$$ $$\boxed{\int _0^{\infty }\ln ^n\left(x\right)\:e^{-ax^b}\:dx=\lim _{m\to 0}\frac{\partial ^n}{\partial m^n}\frac{\Gamma \left(\frac{m+1}{b}\right)}{b\:a^{\frac{m+1}{b}}}}$$
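A numerical spot check of the boxed formula is easy for the case $n=1$, $a=1$, $b=2$ (my own example values), approximating the $m$-derivative by a central finite difference:

```python
# Compare quad( ln(x) e^{-x^2}, 0, inf ) with d/dm [ Gamma((m+1)/2) / 2 ] at m = 0.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a, b = 1.0, 2.0
f = lambda x: np.log(x) * np.exp(-a * x**b)
integral = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]   # split at 1 to tame the log singularity

F = lambda m: gamma((m + 1) / b) / (b * a**((m + 1) / b))
h = 1e-5
derivative = (F(h) - F(-h)) / (2 * h)                  # central difference approximation

print(integral, derivative)  # both ~ -0.8701
```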
H: Converting between bound on probability measures and densities Suppose that $P$ and $Q$ are two probability measures on the same probability space with $P(A) \leq c Q(A)$ for each (measurable) set $A$. Is it true that $dP/dQ$ is then bounded by $c$ $P$-almost surely? AI: Let $f$ denote the Radon-Nikodym derivative $\frac{dP}{dQ}$ (which exists, since $P(A)\leq cQ(A)$ for all $A$ implies $P\ll Q$). Then $\int_Af\,dQ\leq cQ(A)$ for all $A$ and so $$\int_A (c-f)dQ\geq0\quad\text{for all} \quad A$$ From this, it follows that $f\leq c$ $Q$-a.s., and hence also $P$-a.s. since $P\ll Q$. Take for instance $A=\{f>c\}$ (you may use the well-known fact that if $\phi\geq0$ and $\int \phi\,d\mu=0$, then $\phi=0$ $\mu$-a.s.)
H: Can I determine if the result of a pairing function comes from the inverse of a given pair? For example, say you had the following results: f(a, b) = c, f(b, a) = d. Is there a pairing function that would allow for determining that c is sort of the "inverse" of d without de-pairing the results? AI: For this pairing function it's not hard to test: $c$ and $d$ are related in this way iff there is an $n\in\omega$ such that $\binom{n}2<c,d\le\binom{n+1}2$ and $c+d=\binom{n}2+\binom{n+1}2-1$. Getting rid of the binomial coefficients, we have $c$ and $d$ so related iff there is an $n\in\omega$ such that $$\frac{n(n+1)}2<c,d\le\frac{(n+1)(n+2)}2$$ and $$c+d=\frac{n(n+1)}2+\frac{(n+1)(n+2)}2-1=n(n+2)\;.$$ Added: In practice this is more straightforward than it may at first appear. Note that $$n(n+2)=(n+1)^2-1\;,$$ so we first check whether $c+d+1$ is a square; if not, $c$ and $d$ are not related in this way. If it is, we need only check whether $$m-1<\frac{2c}m\le m+1\quad\text{and}\quad m-1<\frac{2d}m\le m+1\;,$$ where $m=\sqrt{c+d+1}$.
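The test in the answer can be transcribed directly into a short function. The sketch below assumes integer outputs $c,d$ of the pairing function under discussion and simply implements the perfect-square and interval checks described above (it does not depend on de-pairing):

```python
# Check whether c and d could be the values f(a, b) and f(b, a) of the pairing
# function discussed above: c + d + 1 must be a perfect square m^2, and both
# values must lie in the interval ( m(m-1)/2, m(m+1)/2 ].
from math import isqrt

def mirrored(c: int, d: int) -> bool:
    m = isqrt(c + d + 1)
    if m * m != c + d + 1:
        return False
    lo, hi = m * (m - 1) // 2, m * (m + 1) // 2
    return lo < c <= hi and lo < d <= hi
```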
H: How can we find the crossing point of two lines with a prescribed angle? In the 3D space, we have two given points of $P$ and $Q$. Line $A$ passes through the point $P$ and whose angle with the x-axis is $\theta$ and with the z-axis $\phi$. Line $B$ passes through the point $Q$ and has an angle of $\alpha$ with the line $A$. How can we find the crossing point of the lines $A$ and $B$? AI: Using the given point $P$ and the properties of line $\theta$ and $\phi$, any point on the line can be written as $$\vec{p_A} = \vec{P} + r \vec{l} $$ Here, $\vec{l} = \cos\theta \vec{i} + \sqrt{1-\cos^2\theta - \cos^2\phi}\vec{j} + \cos\phi \vec{k}$ Now, you need to find $r$ such that $$\frac{(\vec{p_A} - \vec{Q}).(\vec{l})}{||\vec{p_A} - \vec{Q}||} = \cos \alpha$$
H: What's the proof for this summation with dependent variables? Given $0\leqslant i<j\leqslant n$ $\sum_{i=0}^n\sum_{j=0}^n i = \sum_{i=0}^n(n+1)*i$ It seems to work like a nested loop and gave the right answer when substituting n for any number but I don't know how to derive it. AI: $ \sum\limits_{j=0}^{n}i=i(n+1)$ because there are $n+1$ terms in the sum each having one fixed value $i$. Now sum over $i$ to finish.
H: Conditions for symmetric, Toeplitz $\mathbf{M}$ with nonnegative elements to have inverse with nonnegative elements Problem Suppose we have symmetric, Toeplitz matrix $\mathbf{M}$ such that $$ \mathbf{M} = \begin{bmatrix} m_0 & m_1 & m_2 & m_3 & \cdots &m_{n-1} \\ m_1 & m_0 & m_1 & m_2 & \cdots & m_{n-2} \\ m_2 & m_1 & m_0 & m_1 & \cdots & m_{n-3} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ m_{n-1} & m_{n-2} & m_{n-3} & m_{n-4} & \cdots & m_0 \\ \end{bmatrix} $$ where $m_0,m_1,\cdots,m_{n-1}$ all nonnegative, i.e. $m_i \ge 0, \forall i=0,1,\cdots,n-1$ . What are the conditions for $\mathbf{M}^{-1}$ to have nonnegative elements? The above problem assumes $\mathbf{M}$ is invertible (related). Some notes A very simple example may be an identity matrix $I$, with $I^{-1} = I$ has all nonnegative elements. A diagonal dominance $m_0 > \sum_{j\neq i} m_{ij}, \forall i$ may guarantee the existence of inverse, but not what I want. For instance, $\tilde{\mathbf{M}} := \begin{bmatrix} 1 & 1/2 \\ 1/2 & 1 \end{bmatrix}$ has inverse $\tilde{\mathbf{M}}^{-1} = \begin{bmatrix} 4/3 & -2/3 \\ -2/3 & 4/3 \end{bmatrix}$. Try A paper gives an explicit form of the inverse of $\mathbf{M}$, but it does not help for finding an explicit representation. Although it seems evident that if $m_0 > 0$ the diagonal of $\mathbf{M}^{-1}$ may be all positive, I think off-diagonals are not quite straight-forward. Any help will be appreciated. AI: In general, if $A$ and $B$ are two nonnegative $n\times n$ matrices such that $AB=I$, one can prove by mathematical induction on $n$ that the sign patterns of $A$ and $B$ must be some permutation matrices (that are transposes of each other). In the inductive step, since $1=\delta_{11}=\sum_ka_{ik}b_{ki}$, both $a_{1k}$ and $b_{k1}$ must be positive for some $k$. By permuting the columns of $A$ and the rows of $B$ if necessary, we may assume that $a_{11}$ and $b_{11}$ are positive. As $$ \begin{cases} 0=\delta_{1j}=\sum_k a_{1k}b_{kj}\ge a_{11}b_{1j}\ge0,\\ 0=\delta_{i1}=\sum_k a_{ik}b_{k1}\ge a_{i1}b_{11}\ge0, \end{cases}\tag{1} $$ we see that $a_{i1}=b_{1j}=0$ for all $i,j\ne1$. However, as $BA$ is also equal to $I$, if we interchange the roles of $A$ and $B$ in $(1)$, we also have $b_{i1}=a_{1j}=0$ for all $i,j\ne1$. Therefore $A=a_{11}\oplus X$ and $B=b_{11}\oplus Y$ for some $X$ and $Y$ with $XY=I$. By induction assumption, the sign patterns of $X$ and $Y$ are permutation matrices. Hence the sign patterns of $A$ and $B$ are permutation matrices too. So, in your case, the sign pattern of $M$ must be a permutation matrix. As $M$ itself is a symmetric Toeplitz matrix, it must be in the form of $mI_n$ when $n$ is odd. When $n$ is even, $M$ must take the form of either $mI_n$ or $\pmatrix{0&mI_{n/2}\\ mI_{n/2}&0}$.
H: Why can't the "chain rule" of derivatives be used to differentiate 3sin(x)? My understanding is that functions of the form $f(g(x))$ can be differentiated using the "chain rule", where $$\frac{d}{dx}f(g(x)) = f'(g(x)) \cdot g'(x)$$ I was trying to apply that logic to the following expression: $$\frac{d}{dx}3sin(x)$$ To solve this, I tried parsing the problem in terms of 2 functions applied to an input: $$f(x) = 3x$$ $$g(x) = sin(x)$$ $$f(g(x)) = 3sin(x)$$ which led me to this solution: $$\frac{d}{dx}3(sin(x)) = 3sin(x) \cdot cos(x)$$ I've since learned that the answer is simply $3cos(x)$. It seems like I am missing a fundamental concept. Is there an error in those function substitutions I tried? Or am I messing something up in the differentiation itself? AI: $f'(y)=3$ for all $y$. So $f'(g(x))=3$ and $f'(g(x))g'(x) =3 \cos x$.
H: Showing a function is constant - Complex analysis I am trying to solve the following problem. $f(z)=u(x,y)+iv(x,y)$ is an analytic function in $D$ ($D$ is connected and open). If $u, v$ fulfill the relation $G(u(x,y), v(x,y))= 0 $ in $D$ for some function $G:\mathbb{R^2}\to\mathbb{R}$ with the property that the partial derivatives of $G$ are never both zero, show that $f$ is constant. In this question, does the condition imply that $G$ is differentiable? If so, which part? And should I use the chain rule to solve this? AI: If $f$ is non-constant and analytic it is an open map. Then $f(B(z_0,\epsilon))$ contains an open set $O$ containing $f(z_0)$, and $G(x,y) = 0$ for $x+iy \in O$. In particular, ${\partial G(x,y) \over \partial x} = {\partial G(x,y) \over \partial y} = 0$ for such $x+iy$, which contradicts the hypothesis. Hence $f$ is constant on each component and hence on $D$.
H: Write $f(x) = \int_{-2}^{x}t|t-1|\,dt$ without the sign of integral. Here is my attempt: $\displaystyle \int_{-2}^{x}t|t-1|dt$ =$\begin{cases} \displaystyle\int_{-2}^{x}t^2-tdt, \; t \ge 1 \\\displaystyle \int_{-2}^{x}-t^2+tdt, \; t <1\end{cases}$ =$\begin{cases} \displaystyle\dfrac{x^3}{3}-\dfrac{x^2}{2}-\dfrac{(-2)^3}{3}+\dfrac{(-2)^2}{2}=\dfrac{x^3}{3}-\dfrac{x^2}{2}+\dfrac{14}{3}, \; x \ge 1 \\\displaystyle -\dfrac{x^3}{3}+\dfrac{x^2}{2}--\dfrac{(-2)^3}{3}-\dfrac{(-2)^2}{2}=-\dfrac{x^3}{3}+\dfrac{x^2}{2}-\dfrac{14}{3}, \; x <1\end{cases}$ However, the solution in my list is: =$\begin{cases} \displaystyle\dfrac{x^3}{3}-\dfrac{x^2}{2}-\dfrac{13}{3}, \; x \ge 1 \\\displaystyle -\dfrac{x^3}{3}+\dfrac{x^2}{2}-\dfrac{14}{3}, \; x <1\end{cases}$ Is my attempt wrong or is the solution in the list wrong? Thanks AI: When $x>1$, I get $$\int_{-2}^xt|t-1|\,dt=\int_{-2}^1t(1-t)\,dt+\int_1^xt(t-1)\,dt.$$
H: Simplifying $\frac{1}{\sqrt{x-i}}\left\{1-2 \sum_{n=1}^{\infty}(-1)^{n-1} \exp \left(-\pi n^{2}\left(\frac{1}{x-i}+i\right)\right)\right\}$ This is related to another recent question of mine. Considering that $$ \psi(x)=\sum_{n=1}^{\infty} \exp \left(-n^{2} \pi x\right) $$ has the functional equation $$ \frac{1+2 \psi(x)}{1+2 \psi(1 / x)}=\frac{1}{\sqrt{x}} $$ my goal was to find a functional equation for $$ f(x)=\sum_{n=1}^{\infty}(-1)^{n-1} \exp \left(-n^{2} \pi x\right) $$ and using the hint given in the comments there, I got this $$ (1-2f(x))=\frac{1}{\sqrt{x-i}}\left\{1-2 \sum_{n=1}^{\infty}(-1)^{n-1} \exp \left(-\pi n^{2}\left(\frac{1}{x-i}+i\right)\right)\right\} $$ But as we can see, on the right side of the equation we have some complex terms and this expression must be purely real valued. So my question is, how to get rid of the complex terms? Thanks. AI: $$f(x)=\sum_{n=1}^{\infty}(-1)^{n-1} \exp \left(-n^{2} \pi x\right)=\frac{1}{2} \left(1-\vartheta _4\left(0,e^{-\pi x}\right)\right)$$ where the Jacobi theta function appears (have a look here and here). Edit: If you expand the complex exponential, you have $$ \exp \left(-\pi n^{2}\left(\frac{1}{x-i}+i\right)\right)=$$ $$\exp \left(-\frac{\pi n^2 x}{x^2+1}\right)\Big[\cos \left(\pi n^2 \left(\frac{1}{x^2+1}+1\right)\right)-i \sin \left(\pi n^2 \left(\frac{1}{x^2+1}+1\right)\right) \Big]$$ and, in the argument of the trigonometric functions, the extra $\pi n^2$ only contributes a factor $(-1)^{n^2}=(-1)^n$, so it can be pulled out of the trigonometric functions.
H: Proving that $\vec{r'}(t)$ is orthogonal to $\vec{r''}(t)$ With a given nonzero vector function $\vec{r}(t)$, how do I show that $\vec{r'}(t)$ is orthogonal to $\vec{r''}(t)$? (The length $||\vec{r'}(t)||$ is constant.) This is what I have tried so far. Let $\vec{r}(t)= <f(t),g(t),h(t)>$. Then, $\vec{r'}(t)$ is $<f'(t),g'(t),h'(t)>$. And $\vec{r''}(t)$ is $<f''(t),g''(t),h''(t)>$. For vectors to be orthogonal to each other, their dot product must be 0, meaning $\vec{r'}(t)\cdot\vec{r''}(t)=0$. This is where I get stuck. How do I prove that $\vec{r'}(t)\cdot\vec{r''}(t)=f'(t)\cdot f''(t)+g'(t)\cdot g''(t)+h'(t)\cdot h''(t) =0$? AI: \begin{align*} f'(t)\cdot f''(t)+g'(t)\cdot g''(t)+h'(t)\cdot h''(t)&=\frac{1}{2}\frac{d}{dt}(f'^2+g'^2+h'^2)\\ &=\frac{1}{2}\frac{d}{dt}\|r'(t)\|^2 \\ &=0. &&(\because \|r'(t)\|=c) \end{align*}
H: Rather easy arctan limit (without L'Hôpital) It's not an abomination of a limit, but I can't wrap my head around it. This is a factor of a bigger limit that was plausible enough, but this little bit kept me stuck for too much time. Here it is: $$\lim_{x\to0^+}\frac{x}{\pi-3\arctan{\frac{\sqrt{3}}{1+x}}}$$ I would really appreciate even a hint, thank you very much. AI: Hint $$\arctan(\frac{\sqrt 3}{1+x})=\arctan(\sqrt3-\sqrt 3x+o(x))=\frac{\pi}{3}-\frac{\sqrt 3}{4}x+o(x).$$
H: Root test for complex series and cancelling powers with absolute values The root test for convergence of a complex power series is given as $$\lim_{n \rightarrow \infty} \sqrt[n]{\left|a_{n}\right|} = L$$ If $a_n = \frac1{(1+i)^n}$ then I read that when applying the root test I can just remove the powers since they cancel out: $$\lim_{n \rightarrow \infty} \sqrt[n]{\left|a_{n}\right|} = \lim_{n \rightarrow \infty}\sqrt[n]{\left|\frac1{(1+i)^n}\right|} = \lim_{n \rightarrow \infty} \left|\frac1{(1+i)}\right|$$ Why is it ok to cancel the root outside the absolute function with the power inside the absolute function? For any given $n$ the expression might be negative so I feel this shouldn't be possible. Thank you. AI: This is because $$|z^n|=|z|^n\qquad\forall z\in\mathbb{C},n\in\mathbb{N}$$ which can be proven by considering both sides in exponential form.
H: Example of dependent random variables $X,Y$ such that there is no measurable $f$ with $Y=f(X)$ Let $(\Omega,\mathcal A,\operatorname P)$ be a probability space and $X,Y$ be real-valued random variables on $(\Omega,\mathcal A,\operatorname P)$. If $X$ and $Y$ are dependent, can we always find a Borel measurable $f:\mathbb R\to\mathbb R$ with $Y=f(X)$? This is actually equivalent to the claim that $Y$ is $\sigma(X)$-measurable. I guess the answer in general is "no", if we understand "dependent" as "not independent" and I guess $Y=f(X)$ for some Borel measurable $f:\mathbb R\to\mathbb R$ is a particular kind of dependence ... But what could dependence mean otherwise? AI: Let $A,B,C$ be a partition of the sample space where each of the three sets has positive probability. Let $X=1$ on $A \cup B$ and $0$ on $C$. Let $Y=2$ on $A$, $3$ on $B$ and $1$ on $C$. Note that $P(X=1,Y=1) =0$ but $P(X=1) P(Y=1) >0$. Hence $X$ and $Y$ are not independent. However there is no $f$ such that $Y=f(X)$; this is because if such a function existed then $X=1$ would imply $Y=f(1)$. But $Y$ takes two distinct values on the set $X=1$.
H: Getting a probability curve (Central Limit Theorem) in a game I play there's a chance to get a good item with 1/1000. After 3200 runs I only got 1. So how can I calculate how likely that is and I remember there are graphs which have 1 sigma and 2 sigma as vertical lines and you can tell what you can expect with 90% and 95% sureness. Sorry if that's been asked before, but I don't remember the name of such a graph! Thanks in advance. AI: It's an application of the CLT: to apply it one usually requires $np \geq 5$. We have $np=3.2$; it is low but not very low...say borderline. With 95% sureness the interval of what you are expected to find is given by the formula $$[np-2\sqrt{np(1-p)};np+2\sqrt{np(1-p)}]$$ where $p=\frac{1}{1000}$. Substituting your numbers, you get that the number of successes you can expect, with a confidence of about 95%, lies in $$[3.2-3.6;\ 3.2+3.6]\approx[0;7]$$ (the lower endpoint is truncated at $0$ since a count cannot be negative), so your single success is within the range.... If you want a confidence interval at 90% you must substitute 2 with 1.64
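For comparison with the normal approximation used above, the exact binomial probability of seeing at most one success in 3200 runs can also be computed directly; it comes out to roughly 0.17. A small sketch:

```python
# Exact binomial probability of at most 1 success in 3200 trials with p = 1/1000,
# together with the 2-sigma interval used in the answer above.
from scipy.stats import binom
import math

n, p = 3200, 1 / 1000
mean = n * p                       # 3.2 expected successes
sd = math.sqrt(n * p * (1 - p))    # ~1.79

print("2-sigma interval:", (mean - 2 * sd, mean + 2 * sd))  # ~(-0.4, 6.8)
print("P(at most 1 success) =", binom.cdf(1, n, p))         # ~0.17
```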
H: In how many ways you can make up 20 pence using 20p, 10p, 5p, 2p and 1p coins I would like to ask in how many ways you can make up 20 pence using 20p, 10p, 5p, 2p and 1p coins. What I was thinking is that I have counted there are 11 ways to make up 10 pence using 10p, 5p, 2p and 1p coins. So, the number of way to make up 20 pence would be $11\times 11+1=122$ (using 1 20 pence). But. the answer turns out to be wrong. May I know why? Thank you so much for you guys. AI: The problem is that this counts some combinations several times. First, you double-counted all the ways where the two ways to make 10p are different, since you can have those in either order. But it's worse than that, because there are some ways to make 20p that can be split into two halves in more than one way (for example, two 5ps and ten 1ps - the 5ps can be in the same half or different halves). Probably the best way to do it is to compute the following: the number of ways to make 10p the number of ways to make 15p without using a 10p coin the number of ways to make 18p only using 2p and 1p then add these and add $2$. This works because you are conditioning on the largest coin used, and working out how many ways to do the rest without violating that condition. (The extra $2$ cases are using one 20p and all 1ps.)
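The case analysis in the answer above can be checked with a short dynamic-programming count; the sketch below counts the combinations for each of the listed cases and for the full problem (the "largest coin" breakdown and the total agree).

```python
# Count the ways to make a total from given coin denominations (order does not matter).
def count_ways(total, coins):
    ways = [1] + [0] * total            # ways[v] = number of ways to make value v
    for c in coins:
        for v in range(c, total + 1):
            ways[v] += ways[v - c]
    return ways[total]

print(count_ways(10, [1, 2, 5, 10]))       # ways to make 10p            -> 11
print(count_ways(15, [1, 2, 5]))           # 15p without a 10p coin      -> 18
print(count_ways(18, [1, 2]))              # 18p with only 2p and 1p     -> 10
print(count_ways(20, [1, 2, 5, 10, 20]))   # total: 11 + 18 + 10 + 2 = 41
```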
H: If $X_n$ converges to $X$ in $L_p$ and $Y_n$ converges to $Y$ in $L_p$ then $X_n + Y_n $ converges to $X + Y$ in $L_p$ I want to show that if $X_n \xrightarrow{L^p} X$ and $Y_n \xrightarrow{L^p} Y$ then $X_n + Y_n \xrightarrow{L^p} X + Y$ ($p \geq 1)$. My idea is to use the following facts (whose proofs I won't give here): $L_p$ convergence implies convergence in probability If $X_n$ converges to $X$ in probability and $Y_n$ converges to $Y$ in probability then $X_n + Y_n $ converges to $X + Y$ in probability I also want to use my hunch that $L_p$ convergence implies domination, i.e. $X_n \xrightarrow{L^p} X$ $\implies$ there exists some random variable $Z_x$ s.t. $|X_n|$< $Z_x$. Finally I want to use Show that convergence in probabiltiy plus domination implies $L_p$ convergence on the sum $X_n+Y_n$, which, if my facts and my hunch are correct, converges in probability and is dominated by $Z_x+Z_y$ and thus converges to $X+Y$ in $L_p$. Problem is: is my hunch even correct? And if yes, how to prove it rigorously? AI: Hint. $\|\cdot \|_{L^p}$ is a norm. So $$\|X_n+Y_n-X-Y\|_{L^p}\leq \|X_n-X\|_{L^p}+\|Y_n-Y\|_{L^p}.$$
H: how to integrate $\int\frac{1}{x^2-12x+35}dx$? How do I integrate the following? $$\int\frac{1}{x^2-12x+35}dx$$ What I did is here: $$\int\frac{dx}{x^2-12x+35}=\int\frac{dx}{(x-6)^2-1}$$ substitute $x-6=t$, $dx=dt$ $$=\int\frac{dt}{t^2-1}$$ partial fraction decomposition, $$=\int{1\over 2}\left(\frac{1}{t-1}-\frac{1}{t+1}\right)dt$$ $$=\frac12(\ln|1-t|-\ln|1+t|)+c$$ $$=\frac12\left(\ln\left|\frac{1-t}{1+t}\right|\right)+c$$ substitute back for $t$ $$=\frac12\ln\left|\frac{7-x}{x-5}\right|+c$$ I am not sure if my answer is correct. Can I integrate this without substitution? Thank you AI: Can you skip the substitution? Certainly! $\frac {1}{x^2 - 12 x + 35} = \frac {1}{(x-7)(x-5)} = \frac {A}{x-7} + \frac {B}{x-5}$ $A+B = 0\\ -5A - 7B = 1\\ A = \frac {1}{2}, B = -\frac {1}{2}$ $\frac 12\int \frac {1}{x-7} - \frac {1}{x-5} \ dx\\ \frac 12 (\ln |x-7| - \ln |x-5|)\\ \frac 12 \ln \left|\frac {x-7}{x-5}\right|$
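A quick symbolic check that the antiderivative obtained above differentiates back to the integrand (dropping the absolute values, which is valid on any interval avoiding $x=5$ and $x=7$):

```python
# Verify d/dx [ (1/2) ln((x-7)/(x-5)) ] = 1 / (x^2 - 12x + 35).
import sympy as sp

x = sp.symbols('x')
F = sp.Rational(1, 2) * sp.log((x - 7) / (x - 5))
print(sp.simplify(sp.diff(F, x) - 1 / (x**2 - 12*x + 35)))  # 0
```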
H: Local trivialization of two vector bundles? Let $M$ be a smooth manifold, and let $E_1\xrightarrow{\pi_1}M,E_2\xrightarrow{\pi_2}M$ be two vector bundles of rank $m$ and $n$. By definition, there exist two open covers $\{U_i\}_{i\in I}$ and $\{V_j\}_{j\in J}$ such that $\pi_1^{-1}(U_i)\cong U_i\times \mathbb{R}^{m}$ and $\pi_2^{-1}(V_j)\cong V_j\times \mathbb{R}^{n}$. My question is: can we choose an open cover $\{W_k\}$ such that $\pi_1^{-1}(W_k)\cong W_k\times \mathbb{R}^{m}$ and $\pi_2^{-1}(W_k)\cong W_k\times \mathbb{R}^{n}$ for any $k$? I guess we can take all $U_i\cap V_j$, but in this case, I don't know if we can prove $\pi_1^{-1}(U_i\cap V_j)\cong (U_i\cap V_j)\times \mathbb{R}^{m}$. Could someone explain this? Thanks! AI: The homeomorphism $\phi_{1,i}:\pi_1^{-1}(U_i)\cong U_i\times \mathbb R^m$ covers $\pi_1$, that is, composing with the projection recovers $\pi_1$: $$\mathrm{pr}_{U_i}(\phi_{1,i}(x))=\pi_1(x)$$ This is one of the axioms on a vector bundle. So, $$\mathrm{pr}_{U_i}\phi_{1,i}(\pi_1^{-1}(U_i\cap V_j))=\pi_1(\pi_1^{-1}(U_i\cap V_j))=U_i\cap V_j$$ because $\pi_1$ is onto. So, restricted to $\pi_1^{-1}(U_i\cap V_j)$, $\phi_{1,i}$ is a homeomorphism to $(U_i\cap V_j)\times \mathbb R^m$.
H: Can an autonomous differential equation be nonhomogeneous? If a differential equation $\dfrac{dy}{dt}=f(y)$ does not depend on $y$, then it is autonomous, but can it be non-homogeneous? Can anyone explain this? Thanks! AI: $\frac{dy}{dt}=f(y)$ is an autonomous differential equation irrespective of whether $f$ depends on $y$. The only condition for autonomy is that $f$ should not be a function of $t$. If $f$ does not depend even on $y$, then it is a constant function. There are two definitions of homogeneous. A first-order ODE is said to be homogeneous if it can be written in the form$$\frac{dy}{dt}=g\left(\frac yt\right)$$If you use this definition, then $\frac{dy}{dt}=k\in\mathbb R$ is homogeneous. Other examples include $$\frac{dy}{dt}=\frac{y^2+t^2}{yt},~\frac{dy}{dt}=\frac{y^3+ty^2}{t^3}~\text{ etc.}$$ A linear ODE is also said to be homogeneous if it has no term independent of the dependent variable or its derivatives with respect to the independent variable. Since $f$ is independent of $y$, $\frac{dy}{dt}=k\in\mathbb R$ is not homogenous using this definition.
H: How can I differentiate ($2x+1$)($x-5$) by expanding then differentiating? The question is to differentiate $(2x+1)(x-5)$ by expanding and then differentiating each term. But I am running into problems and getting the wrong answer. I end up with $4x-15$ but the given answer is $2x^2(2x-3)$. How? AI: Both answers are incorrect. The answer is $4x-9$ since $(2x+1)(x-5)=2x^2-9x-5$ and its derivative is $4x-9$.
H: Did my math textbook make a typo? My textbook defined connectedness in graphs in the following way: A graph G(V, E) is said to be connected if for every pair of vertices u and v there is a path in G from u to v. The textbook then asks the reader to complete the following exercise: Show by giving an example that there are connected graphs in which there is no path that goes through all the vertices. I take the exercise as impossible to complete. By definition, connected graphs are those in which there is some path that goes through all the vertices. Might I have misunderstood either the definition or exercise? Could it be possible that the textbook made a typo? Any advice or clarification would be greatly appreciated. AI: A connected graph means that between any two vertices, there is some path. For example if the two vertices share an edge, there is a path between them consisting of just that single edge. This says nothing about whether or not there is a single path that goes through all the vertices in some order. Remember that by definition a path cannot visit any vertex more than once. So as you go from one vertex to the next, it might not be possible to visit all vertices without re-visiting a previous one.
H: Prove a function is Lebesgue measurable Problem $f$ is defined on a measurable set $E$, $D$ is a dense subset of $\mathbb{R}$ ($\overline{D}=\mathbb{R}$), prove $\forall\ r\in D$, if the set $\{x\in E:f(x)>r\}$ is measurable then $f$ is measurable on $E$ Since $D$ is dense in $\mathbb{R}$, $\forall\ a\in\mathbb{R}$, there exist $\{r_n\}_{n=1}^\infty\subset D$ such that $\lim_{n\to\infty}r_n=a$. We know $\{x\in E:f(x)>r_n\}$ is measurable for any $n$, so we only need to relate $\{x\in E:f(x)>r_n\}$ with $\{x\in E:f(x)>a\}$. AI: You have to take sequence $(r_n)$ from $D$ decreasing to $a$. In that case $\{x \in E: f(x) >a\}=\bigcup_n\{x\in E: f(x) >r_n\}$.
H: Proving two different expressions of non-centrality parameters are equivalent I am stuck in proving $$\sum_{i=1}^{K}\xi_i(\mu_i - \bar{\mu})^2 = \sum_{i,j}\xi_i\xi_j(\mu_i - \mu_j)^2,$$ where $\bar{\mu} = \sum_{i=1}^{K}\xi_i\mu_i$ and $\sum_{i=1}^{K}\xi_i = 1$. I am not sure whether it is true or not. AI: Let $P = \sum_{i=1}^K \xi_i \mu_i^2.$ Notice that: $$\sum_{i=1}^K \sum_{j=1}^K \xi_i\xi_j(\mu_i - \mu_j)^2 = \sum_{i=1}^K \xi_i\sum_{j=1}^K \xi_j(\mu_i^2 + \mu_j^2 - 2\mu_i\mu_j) = \\ = \sum_{i=1}^K \xi_i\left(\sum_{j=1}^K \xi_j\mu_i^2 + \sum_{j=1}^K \xi_j\mu_j^2 - \sum_{j=1}^K \xi_j2\mu_i\mu_j\right)=\\ = \sum_{i=1}^K \xi_i\left(\mu_i^2 + \sum_{j=1}^K \xi_j\mu_j^2 - 2\mu_i \sum_{j=1}^K \xi_j\mu_j\right)=\\ = \sum_{i=1}^K \xi_i\left(\mu_i^2 + P - 2\mu_i \overline{\mu}\right)=\\ = \sum_{i=1}^K \xi_i\mu_i^2 + \sum_{i=1}^K \xi_iP - 2\sum_{i=1}^K \xi_i\mu_i \overline{\mu}=\\ = P + P - 2\overline{\mu}^2 = 2(P - \overline{\mu}^2). $$ On the other hand: $$\sum_{i=1}^{K}\xi_i(\mu_i - \bar{\mu})^2 = \sum_{i=1}^{K}\xi_i \mu_i^2 + \sum_{i=1}^{K}\xi_i \bar{\mu}^2 - 2\sum_{i=1}^{K}\xi_i \mu_i \bar{\mu} = \\ = P + \bar{\mu}^2 - 2\bar{\mu}^2 = P - \bar{\mu}^2.$$ Therefore: $${\color{red}2}\sum_{i=1}^{K}\xi_i(\mu_i - \bar{\mu})^2 = \sum_{i,j}\xi_i\xi_j(\mu_i - \mu_j)^2.$$
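A quick numerical check of the identity above (with the factor $2$) for random weights summing to one and arbitrary means:

```python
# Verify  2 * sum_i xi_i (mu_i - mubar)^2  ==  sum_{i,j} xi_i xi_j (mu_i - mu_j)^2.
import numpy as np

rng = np.random.default_rng(0)
xi = rng.random(6); xi /= xi.sum()        # weights summing to 1
mu = rng.normal(size=6)

mubar = xi @ mu
lhs = np.sum(xi * (mu - mubar) ** 2)
rhs = np.sum(xi[:, None] * xi[None, :] * (mu[:, None] - mu[None, :]) ** 2)
print(2 * lhs, rhs)                       # equal up to rounding
```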
H: Diamond Distribution in system K (Garson Modal Logic exercise 1.8) I want to prove $\Diamond (P \lor Q) \Rightarrow \Diamond P \lor \Diamond Q$ It was a biconditional, but I have proved the other one. Thanks for the answer. Please use Garson's method. Thanks. I am stuck on this, any help. Please. I don't have any work to show for it because I don't know how to proceed. AI: Maybe try proof by contradiction Here is one possible approach $\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline#2\end{array}}$ \begin{align} \fitch{\Diamond(p\lor q)}{ \neg\square\neg(p\lor q)\hspace{7.8ex}\Diamond\text{Def}\\ \fitch{\neg(\Diamond p\lor\Diamond q)} {\neg\Diamond p\land\neg\Diamond q\hspace{5ex}\text{DM}\\ \neg\Diamond p\hspace{12ex}\&\text{Out}\\ \fitch{\neg\square\neg p}{\Diamond p\hspace{10ex}\Diamond\text{Def}\\ \bot\hspace{11ex}\bot\text{In}}\\\square \neg p\hspace{13.5ex}\text{IP}\\ \neg\Diamond q\hspace{12ex}\&\text{Out}\\ \fitch{\neg\square\neg q}{\Diamond q\hspace{10ex}\Diamond\text{Def}\\ \bot\hspace{11ex}\bot\text{In}}\\\square\neg q\hspace{13.5ex}\text{IP}\\ \fitch{\square} {\neg p\hspace{10.2ex}\square\text{Out}\\ \neg q\hspace{10.2ex}\square\text{Out}\\ \fitch{p\lor q}{\fitch{p}{\bot}\fitch{q}{\bot}\\\bot\hspace{7ex}\lor\text{Out}}\\\neg(p\lor q)\hspace{5ex}\text{IP}} \\\square\neg(p\lor q)\hspace{6.5ex}\square\text{In}\\ \bot\hspace{14.8ex}\bot\text{In}}\\ \Diamond p\lor\Diamond q\hspace{12.4ex}\text{IP}} \end{align}
H: Joint density of $(V,Z)$ The joint density of $(X,Y)$ is $f_{XY}(x,y)=k(1-\sqrt{\frac{y}{x}}),0<x<1,0<y<x$. Find the value of $k$ and say if $X$ and $Y$ are independent or not. $\rightarrow k=6 \Rightarrow f_{XY}(x,y)=6(1-\sqrt{\frac{y}{x}})$; $f_X(x)=2x$ and $f_Y(y)=6(1+y-2y^{\frac{1}{2}})$, so $f_X(x)f_Y(y)\neq f_{XY}(x,y)\Rightarrow X,Y$ are not independent Let $V=X$ and $Z=\sqrt{\frac{Y}{X}}$. Say if $V$ and $Z$ are independent or not. $\rightarrow cov(V,Z)=\mathbb{E}(VZ)-\mathbb{E}(V)\mathbb{E}(Z)=\mathbb{E}(X^{\frac{1}{2}}Y^{\frac{1}{2}})-\mathbb{E}(X)\mathbb{E}(X^{-\frac{1}{2}}Y^{\frac{1}{2}})=\frac{1}{3}-\frac{2}{3}\cdot \frac{1}{2}=0 \Rightarrow V \perp Z$ Find the joint density of $(V,Z)$. Well, I'm having a bit of a problem with this: indeed I know that if $X \perp Y \Rightarrow f_{VZ}(v,z)=f_X(x)f_Y(y)|\det(J)|$, but in this case $X$ and $Y$ are not independent. So I thought I'd do it this way: $F_{V,Z}(v,z)=\mathbb{P}(V\geq v,Z\geq z)=\mathbb{P}(X\geq x, \sqrt{\frac{Y}{X}}\geq z)$ $=\mathbb{P}(X\geq x,X>\frac{y}{z^2})=\mathbb{P}(\frac{y}{z^2}<X<x)$; $f_{X|Y}(x|y):=\frac{f_{XY}(x,y)}{f_Y(y)}=\frac{x^{\frac{1}{2}}-y^{\frac{1}{2}}}{x^{\frac{1}{2}}(1+y-2y^{\frac{1}{2}})}$; $\mathbb{P}(\frac{y}{z^2}<X<x|Y=y)=\int_{\frac{y}{z^2}}^{x}f_{X|Y}(x|y)dx$ and then $\mathbb{P}(\frac{y}{z^2}<X<x)=\int_{\frac{y}{z^2}}^{x}\mathbb{P}(\frac{y}{z^2}<X<x|Y=y)f_X(x)dx$ Is it correct? Thanks in advance. AI: To say if $X,Y$ are independent or not no calculation is needed. In fact, a necessary condition for independence is that the joint domain is rectangular....you have a triangle so they are NOT independent. To calculate the joint density of $(Z,V)$ let's use the Fundamental Transformation Theorem, setting $$ \begin{cases} v=x \\ z=\sqrt{\frac{y}{x}} \end{cases}$$ $$ \begin{cases} x=v \\ y=z^2v \end{cases}$$ The jacobian is $|J|=2zv$ So the joint density is $$f_{ZV}(z,v)=12(1-z)zv\cdot\mathbb{1}_{(0;1)}(z)\cdot\mathbb{1}_{(0;1)}(v)$$ It is very simple to see that $$f_{ZV}(z,v)=6z(1-z)\mathbb{1}_{(0;1)}(z)\times 2v\mathbb{1}_{(0;1)}(v)=f_V(v)f_Z(z)$$ Thus the rv's V and Z are independent. IMPORTANT OBSERVATION: The calculation of the covariance $Cov(Z,V)$ is not requested and finding $Cov(Z,V)=0$ says nothing about independence. Further explanation. Fact: $X$ and $Y$ are not independent as their joint domain is a triangle: $$\mathbb{1}_{(0;1)}(x)\mathbb{1}_{(0;x)}(y)$$ using the above system $$ \begin{cases} x=v \\ y=z^2v \end{cases}$$ we calculate the jacobian $$ |J|= \begin{vmatrix} \frac{\partial x}{\partial v} & \frac{\partial x}{\partial z} \\ \frac{\partial y}{\partial v}&\frac{\partial y}{\partial z} \\ \end{vmatrix}= \begin{vmatrix} 1 & 0 \\ z^2&2zv \\ \end{vmatrix}=2zv $$ Now simply changing the variables (this is called the Fundamental Transformation theorem but it is just a change of variables) you get $$f_{ZV}(z,v)=f_{XY}(v,z^2v)\times |J|=6(1-z)\times 2zv=12zv(1-z)$$ When I look at this density, defined on the unit square $[0;1]\times[0;1]$, it is immediate to see that $f_{ZV}=f_Z\times f_V=6z(1-z)\times 2v$ If this is an issue for you, no problem at all!
Integrate with respect to $z$ to derive $f_V$ and vice versa: $f_V=\int_0^1 f_{ZV}(z,v)dz=2v$ $f_Z=\int_0^1 f_{ZV}(z,v)dv=6z(1-z)$ Further question in the same exercise: It is also requested to calculate $\mathbb{P}[V^2+Z^2>\frac{1}{2}]$ Remembering that $z^2+v^2=\frac{1}{2}$ is a circle centered at the origin with radius $r=\frac{\sqrt{2}}{2}$, we get $$\mathbb{P}[V^2+Z^2<\frac{1}{2}]=\int_0^{\frac{\sqrt{2}}{2}}6z(1-z)dz\int_0^{\sqrt{\frac{1}{2}-z^2}}2v dv\approx 0.234$$ Thus $$\mathbb{P}[V^2+Z^2>\frac{1}{2}]\approx 0.766$$
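For anyone who wants to double-check the numbers, here is a small sympy sketch (my own verification, not part of the original solution); it confirms $k=6$, that the transformed density integrates to $1$, and the two probabilities above:

from sympy import symbols, sqrt, integrate, Rational

x, y = symbols('x y', positive=True)
z, v = symbols('z v', positive=True)

# k = 6 normalises the original density on the triangle 0 < y < x < 1
print(integrate(6*(1 - sqrt(y/x)), (y, 0, x), (x, 0, 1)))    # 1

# joint density of (Z, V) on the unit square, and the circle probability
f_zv = 12*z*v*(1 - z)
print(integrate(f_zv, (z, 0, 1), (v, 0, 1)))                 # 1
p_in = integrate(f_zv, (v, 0, sqrt(Rational(1, 2) - z**2)), (z, 0, sqrt(2)/2))
print(p_in.evalf(3), (1 - p_in).evalf(3))                    # ~0.234, ~0.766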
H: Show $A=(I-S)(I+S)^{-1}$ is an orthogonal matrix if $S$ is a real antisymmetric matrix I am trying to show that if $S$ is a real antisymmetric matrix ($S^T=-S$), then $A=(I-S)(I+S)^{-1}$ is an orthogonal matrix. $I$ is the identity matrix. To show that $A$ is orthogonal, i.e. $A^T=A^{-1}$, I first calculated $A^T$ and $A^{-1}$: $$A^T=[(I+S)^{-1}]^T(I-S)^T=[(I+S)^T]^{-1}(I-S)^T=(I+S^T)^{-1}(I-S^T)=(I-S)^{-1}(I+S)$$ $$A^{-1}=(I+S)(I-S)^{-1}$$ How can I then show that $(I-S)^{-1}(I+S)=(I+S)(I-S)^{-1}$ so that I can prove $A$ is orthogonal? AI: $$(I-S)^{-1}(I+S)=(I+S)(I-S)^{-1}$$ $$ \iff$$ $$(I+S)=(I-S)(I+S)(I-S)^{-1}$$ $$ \iff$$ $$(I+S)(I-S)=(I-S)(I+S)$$ $$ \iff$$ $$I-S+S-S^2=I+S-S+S^2.$$
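As a quick numerical sanity check of this fact, here is a small sketch (my own addition; the size 4 and the seed are arbitrary, and numpy is assumed):

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
S = M - M.T                         # real antisymmetric: S.T == -S
I = np.eye(4)

A = (I - S) @ np.linalg.inv(I + S)  # I + S is invertible: S has purely imaginary eigenvalues
print(np.allclose(A.T @ A, I))      # True, so A is orthogonal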
H: How can I prove this? I have to prove the following proposition, and I think I have to argue by contradiction, but I don't know how to do it. Let $V$ be a finite-dimensional vector space and $v_1,v_2\in V$ with $v_1 \neq v_2$. Prove that $\exists \phi\in V^*$ such that $\phi(v_1)\neq\phi(v_2)$. AI: Obviously we have to assume that $v_1 \neq v_2$. Let $x_1=v_1-v_2$. Extend $\{x_1\}$ to a basis $\{x_1,x_2,..,x_n\}$ and define $\phi (\sum a_ix_i)=a_1$. Then $\phi \in V^{*}$ and $\phi (x_1)=1$ so $\phi (v_1)=\phi (v_2)+1$.
H: Example of Devil's nested radicals Let the following nested radical : $$\sqrt{665+2x}=x$$ There is a hidden quadratic equation and the result is : $$x=\sqrt{666}+1$$ So we see the number $666$ appear . My question : Do you know (more or less trivial) nested radical where the devil's number appear ? Can we build it as in my example ? Thanks in advance and have fun . Ps: I don't speak about Galois's theory . AI: I wrote a small program looking for equations of the form $ax^2 + 2bx + c = 0$ and found these quadratic equations below. Their roots all contain the number $\sqrt {666}$. My program requires that $|b| \le 20$ and $|a| \le 700,\ |c| \le 700$. Of course these limits can be extended. By the way, your equation happens be to number 66 in the list below. Equations 64 and 68 also look interesting to me since their solutions don't have a denominator. Equation (1): ( -266 ) * (x**2) + ( -40 ) * x + ( 1) = 0 Solutions (1): [-10/133 + 3*sqrt(74)/266, -3*sqrt(74)/266 - 10/133] Equation (2): ( 1 ) * (x**2) + ( -40 ) * x + ( -266) = 0 Solutions (2): [20 - 3*sqrt(74), 20 + 3*sqrt(74)] Equation (3): ( -133 ) * (x**2) + ( -40 ) * x + ( 2) = 0 Solutions (3): [-20/133 + 3*sqrt(74)/133, -3*sqrt(74)/133 - 20/133] Equation (4): ( 2 ) * (x**2) + ( -40 ) * x + ( -133) = 0 Solutions (4): [10 - 3*sqrt(74)/2, 10 + 3*sqrt(74)/2] Equation (5): ( -305 ) * (x**2) + ( -38 ) * x + ( 1) = 0 Solutions (5): [-19/305 + 3*sqrt(74)/305, -3*sqrt(74)/305 - 19/305] Equation (6): ( 1 ) * (x**2) + ( -38 ) * x + ( -305) = 0 Solutions (6): [19 - 3*sqrt(74), 19 + 3*sqrt(74)] Equation (7): ( -342 ) * (x**2) + ( -36 ) * x + ( 1) = 0 Solutions (7): [-1/19 + sqrt(74)/114, -sqrt(74)/114 - 1/19] Equation (8): ( 1 ) * (x**2) + ( -36 ) * x + ( -342) = 0 Solutions (8): [18 - 3*sqrt(74), 18 + 3*sqrt(74)] Equation (9): ( -171 ) * (x**2) + ( -36 ) * x + ( 2) = 0 Solutions (9): [-2/19 + sqrt(74)/57, -sqrt(74)/57 - 2/19] Equation (10): ( 2 ) * (x**2) + ( -36 ) * x + ( -171) = 0 Solutions (10): [9 - 3*sqrt(74)/2, 9 + 3*sqrt(74)/2] Equation (11): ( -114 ) * (x**2) + ( -36 ) * x + ( 3) = 0 Solutions (11): [-3/19 + sqrt(74)/38, -sqrt(74)/38 - 3/19] Equation (12): ( 3 ) * (x**2) + ( -36 ) * x + ( -114) = 0 Solutions (12): [6 - sqrt(74), 6 + sqrt(74)] Equation (13): ( -377 ) * (x**2) + ( -34 ) * x + ( 1) = 0 Solutions (13): [-17/377 + 3*sqrt(74)/377, -3*sqrt(74)/377 - 17/377] Equation (14): ( 1 ) * (x**2) + ( -34 ) * x + ( -377) = 0 Solutions (14): [17 - 3*sqrt(74), 17 + 3*sqrt(74)] Equation (15): ( -410 ) * (x**2) + ( -32 ) * x + ( 1) = 0 Solutions (15): [-8/205 + 3*sqrt(74)/410, -3*sqrt(74)/410 - 8/205] Equation (16): ( 1 ) * (x**2) + ( -32 ) * x + ( -410) = 0 Solutions (16): [16 - 3*sqrt(74), 16 + 3*sqrt(74)] Equation (17): ( -205 ) * (x**2) + ( -32 ) * x + ( 2) = 0 Solutions (17): [-16/205 + 3*sqrt(74)/205, -3*sqrt(74)/205 - 16/205] Equation (18): ( 2 ) * (x**2) + ( -32 ) * x + ( -205) = 0 Solutions (18): [8 - 3*sqrt(74)/2, 8 + 3*sqrt(74)/2] Equation (19): ( -441 ) * (x**2) + ( -30 ) * x + ( 1) = 0 Solutions (19): [-5/147 + sqrt(74)/147, -sqrt(74)/147 - 5/147] Equation (20): ( 1 ) * (x**2) + ( -30 ) * x + ( -441) = 0 Solutions (20): [15 - 3*sqrt(74), 15 + 3*sqrt(74)] Equation (21): ( -147 ) * (x**2) + ( -30 ) * x + ( 3) = 0 Solutions (21): [-5/49 + sqrt(74)/49, -sqrt(74)/49 - 5/49] Equation (22): ( 3 ) * (x**2) + ( -30 ) * x + ( -147) = 0 Solutions (22): [5 - sqrt(74), 5 + sqrt(74)] Equation (23): ( -470 ) * (x**2) + ( -28 ) * x + ( 1) = 0 Solutions (23): [-7/235 + 3*sqrt(74)/470, -3*sqrt(74)/470 - 7/235] Equation (24): ( 1 ) * (x**2) + ( 
-28 ) * x + ( -470) = 0 Solutions (24): [14 - 3*sqrt(74), 14 + 3*sqrt(74)] Equation (25): ( -235 ) * (x**2) + ( -28 ) * x + ( 2) = 0 Solutions (25): [-14/235 + 3*sqrt(74)/235, -3*sqrt(74)/235 - 14/235] Equation (26): ( 2 ) * (x**2) + ( -28 ) * x + ( -235) = 0 Solutions (26): [7 - 3*sqrt(74)/2, 7 + 3*sqrt(74)/2] Equation (27): ( -497 ) * (x**2) + ( -26 ) * x + ( 1) = 0 Solutions (27): [-13/497 + 3*sqrt(74)/497, -3*sqrt(74)/497 - 13/497] Equation (28): ( 1 ) * (x**2) + ( -26 ) * x + ( -497) = 0 Solutions (28): [13 - 3*sqrt(74), 13 + 3*sqrt(74)] Equation (29): ( -522 ) * (x**2) + ( -24 ) * x + ( 1) = 0 Solutions (29): [-2/87 + sqrt(74)/174, -sqrt(74)/174 - 2/87] Equation (30): ( 1 ) * (x**2) + ( -24 ) * x + ( -522) = 0 Solutions (30): [12 - 3*sqrt(74), 12 + 3*sqrt(74)] Equation (31): ( -261 ) * (x**2) + ( -24 ) * x + ( 2) = 0 Solutions (31): [-4/87 + sqrt(74)/87, -sqrt(74)/87 - 4/87] Equation (32): ( 2 ) * (x**2) + ( -24 ) * x + ( -261) = 0 Solutions (32): [6 - 3*sqrt(74)/2, 6 + 3*sqrt(74)/2] Equation (33): ( -174 ) * (x**2) + ( -24 ) * x + ( 3) = 0 Solutions (33): [-2/29 + sqrt(74)/58, -sqrt(74)/58 - 2/29] Equation (34): ( 3 ) * (x**2) + ( -24 ) * x + ( -174) = 0 Solutions (34): [4 - sqrt(74), 4 + sqrt(74)] Equation (35): ( -545 ) * (x**2) + ( -22 ) * x + ( 1) = 0 Solutions (35): [-11/545 + 3*sqrt(74)/545, -3*sqrt(74)/545 - 11/545] Equation (36): ( 1 ) * (x**2) + ( -22 ) * x + ( -545) = 0 Solutions (36): [11 - 3*sqrt(74), 11 + 3*sqrt(74)] Equation (37): ( -566 ) * (x**2) + ( -20 ) * x + ( 1) = 0 Solutions (37): [-5/283 + 3*sqrt(74)/566, -3*sqrt(74)/566 - 5/283] Equation (38): ( 1 ) * (x**2) + ( -20 ) * x + ( -566) = 0 Solutions (38): [10 - 3*sqrt(74), 10 + 3*sqrt(74)] Equation (39): ( -283 ) * (x**2) + ( -20 ) * x + ( 2) = 0 Solutions (39): [-10/283 + 3*sqrt(74)/283, -3*sqrt(74)/283 - 10/283] Equation (40): ( 2 ) * (x**2) + ( -20 ) * x + ( -283) = 0 Solutions (40): [5 - 3*sqrt(74)/2, 5 + 3*sqrt(74)/2] Equation (41): ( -585 ) * (x**2) + ( -18 ) * x + ( 1) = 0 Solutions (41): [-1/65 + sqrt(74)/195, -sqrt(74)/195 - 1/65] Equation (42): ( 1 ) * (x**2) + ( -18 ) * x + ( -585) = 0 Solutions (42): [9 - 3*sqrt(74), 9 + 3*sqrt(74)] Equation (43): ( -195 ) * (x**2) + ( -18 ) * x + ( 3) = 0 Solutions (43): [-3/65 + sqrt(74)/65, -sqrt(74)/65 - 3/65] Equation (44): ( 3 ) * (x**2) + ( -18 ) * x + ( -195) = 0 Solutions (44): [3 - sqrt(74), 3 + sqrt(74)] Equation (45): ( -602 ) * (x**2) + ( -16 ) * x + ( 1) = 0 Solutions (45): [-4/301 + 3*sqrt(74)/602, -3*sqrt(74)/602 - 4/301] Equation (46): ( 1 ) * (x**2) + ( -16 ) * x + ( -602) = 0 Solutions (46): [8 - 3*sqrt(74), 8 + 3*sqrt(74)] Equation (47): ( -301 ) * (x**2) + ( -16 ) * x + ( 2) = 0 Solutions (47): [-8/301 + 3*sqrt(74)/301, -3*sqrt(74)/301 - 8/301] Equation (48): ( 2 ) * (x**2) + ( -16 ) * x + ( -301) = 0 Solutions (48): [4 - 3*sqrt(74)/2, 4 + 3*sqrt(74)/2] Equation (49): ( -617 ) * (x**2) + ( -14 ) * x + ( 1) = 0 Solutions (49): [-7/617 + 3*sqrt(74)/617, -3*sqrt(74)/617 - 7/617] Equation (50): ( 1 ) * (x**2) + ( -14 ) * x + ( -617) = 0 Solutions (50): [7 - 3*sqrt(74), 7 + 3*sqrt(74)] Equation (51): ( -630 ) * (x**2) + ( -12 ) * x + ( 1) = 0 Solutions (51): [-1/105 + sqrt(74)/210, -sqrt(74)/210 - 1/105] Equation (52): ( 1 ) * (x**2) + ( -12 ) * x + ( -630) = 0 Solutions (52): [6 - 3*sqrt(74), 6 + 3*sqrt(74)] Equation (53): ( -315 ) * (x**2) + ( -12 ) * x + ( 2) = 0 Solutions (53): [-2/105 + sqrt(74)/105, -sqrt(74)/105 - 2/105] Equation (54): ( 2 ) * (x**2) + ( -12 ) * x + ( -315) = 0 Solutions (54): [3 - 3*sqrt(74)/2, 3 + 3*sqrt(74)/2] Equation (55): 
( -641 ) * (x**2) + ( -10 ) * x + ( 1) = 0 Solutions (55): [-5/641 + 3*sqrt(74)/641, -3*sqrt(74)/641 - 5/641] Equation (56): ( 1 ) * (x**2) + ( -10 ) * x + ( -641) = 0 Solutions (56): [5 - 3*sqrt(74), 5 + 3*sqrt(74)] Equation (57): ( -650 ) * (x**2) + ( -8 ) * x + ( 1) = 0 Solutions (57): [-2/325 + 3*sqrt(74)/650, -3*sqrt(74)/650 - 2/325] Equation (58): ( 1 ) * (x**2) + ( -8 ) * x + ( -650) = 0 Solutions (58): [4 - 3*sqrt(74), 4 + 3*sqrt(74)] Equation (59): ( -325 ) * (x**2) + ( -8 ) * x + ( 2) = 0 Solutions (59): [-4/325 + 3*sqrt(74)/325, -3*sqrt(74)/325 - 4/325] Equation (60): ( 2 ) * (x**2) + ( -8 ) * x + ( -325) = 0 Solutions (60): [2 - 3*sqrt(74)/2, 2 + 3*sqrt(74)/2] Equation (61): ( -657 ) * (x**2) + ( -6 ) * x + ( 1) = 0 Solutions (61): [-1/219 + sqrt(74)/219, -sqrt(74)/219 - 1/219] Equation (62): ( 1 ) * (x**2) + ( -6 ) * x + ( -657) = 0 Solutions (62): [3 - 3*sqrt(74), 3 + 3*sqrt(74)] Equation (63): ( -662 ) * (x**2) + ( -4 ) * x + ( 1) = 0 Solutions (63): [-1/331 + 3*sqrt(74)/662, -3*sqrt(74)/662 - 1/331] Equation (64): ( 1 ) * (x**2) + ( -4 ) * x + ( -662) = 0 Solutions (64): [2 - 3*sqrt(74), 2 + 3*sqrt(74)] Equation (65): ( -665 ) * (x**2) + ( -2 ) * x + ( 1) = 0 Solutions (65): [-1/665 + 3*sqrt(74)/665, -3*sqrt(74)/665 - 1/665] Equation (66): ( 1 ) * (x**2) + ( -2 ) * x + ( -665) = 0 Solutions (66): [1 - 3*sqrt(74), 1 + 3*sqrt(74)] Equation (67): ( -665 ) * (x**2) + ( 2 ) * x + ( 1) = 0 Solutions (67): [1/665 - 3*sqrt(74)/665, 1/665 + 3*sqrt(74)/665] Equation (68): ( 1 ) * (x**2) + ( 2 ) * x + ( -665) = 0 Solutions (68): [-1 + 3*sqrt(74), -3*sqrt(74) - 1] Equation (69): ( -662 ) * (x**2) + ( 4 ) * x + ( 1) = 0 Solutions (69): [1/331 - 3*sqrt(74)/662, 1/331 + 3*sqrt(74)/662] Equation (70): ( 1 ) * (x**2) + ( 4 ) * x + ( -662) = 0 Solutions (70): [-2 + 3*sqrt(74), -3*sqrt(74) - 2] Equation (71): ( -657 ) * (x**2) + ( 6 ) * x + ( 1) = 0 Solutions (71): [1/219 - sqrt(74)/219, 1/219 + sqrt(74)/219] Equation (72): ( 1 ) * (x**2) + ( 6 ) * x + ( -657) = 0 Solutions (72): [-3 + 3*sqrt(74), -3*sqrt(74) - 3] Equation (73): ( -650 ) * (x**2) + ( 8 ) * x + ( 1) = 0 Solutions (73): [2/325 - 3*sqrt(74)/650, 2/325 + 3*sqrt(74)/650] Equation (74): ( 1 ) * (x**2) + ( 8 ) * x + ( -650) = 0 Solutions (74): [-4 + 3*sqrt(74), -3*sqrt(74) - 4] Equation (75): ( -325 ) * (x**2) + ( 8 ) * x + ( 2) = 0 Solutions (75): [4/325 - 3*sqrt(74)/325, 4/325 + 3*sqrt(74)/325] Equation (76): ( 2 ) * (x**2) + ( 8 ) * x + ( -325) = 0 Solutions (76): [-2 + 3*sqrt(74)/2, -3*sqrt(74)/2 - 2] Equation (77): ( -641 ) * (x**2) + ( 10 ) * x + ( 1) = 0 Solutions (77): [5/641 - 3*sqrt(74)/641, 5/641 + 3*sqrt(74)/641] Equation (78): ( 1 ) * (x**2) + ( 10 ) * x + ( -641) = 0 Solutions (78): [-5 + 3*sqrt(74), -3*sqrt(74) - 5] Equation (79): ( -630 ) * (x**2) + ( 12 ) * x + ( 1) = 0 Solutions (79): [1/105 - sqrt(74)/210, 1/105 + sqrt(74)/210] Equation (80): ( 1 ) * (x**2) + ( 12 ) * x + ( -630) = 0 Solutions (80): [-6 + 3*sqrt(74), -3*sqrt(74) - 6] Equation (81): ( -315 ) * (x**2) + ( 12 ) * x + ( 2) = 0 Solutions (81): [2/105 - sqrt(74)/105, 2/105 + sqrt(74)/105] Equation (82): ( 2 ) * (x**2) + ( 12 ) * x + ( -315) = 0 Solutions (82): [-3 + 3*sqrt(74)/2, -3*sqrt(74)/2 - 3] Equation (83): ( -617 ) * (x**2) + ( 14 ) * x + ( 1) = 0 Solutions (83): [7/617 - 3*sqrt(74)/617, 7/617 + 3*sqrt(74)/617] Equation (84): ( 1 ) * (x**2) + ( 14 ) * x + ( -617) = 0 Solutions (84): [-7 + 3*sqrt(74), -3*sqrt(74) - 7] Equation (85): ( -602 ) * (x**2) + ( 16 ) * x + ( 1) = 0 Solutions (85): [4/301 - 3*sqrt(74)/602, 4/301 + 3*sqrt(74)/602] Equation 
(86): ( 1 ) * (x**2) + ( 16 ) * x + ( -602) = 0 Solutions (86): [-8 + 3*sqrt(74), -3*sqrt(74) - 8] Equation (87): ( -301 ) * (x**2) + ( 16 ) * x + ( 2) = 0 Solutions (87): [8/301 - 3*sqrt(74)/301, 8/301 + 3*sqrt(74)/301] Equation (88): ( 2 ) * (x**2) + ( 16 ) * x + ( -301) = 0 Solutions (88): [-4 + 3*sqrt(74)/2, -3*sqrt(74)/2 - 4] Equation (89): ( -585 ) * (x**2) + ( 18 ) * x + ( 1) = 0 Solutions (89): [1/65 - sqrt(74)/195, 1/65 + sqrt(74)/195] Equation (90): ( 1 ) * (x**2) + ( 18 ) * x + ( -585) = 0 Solutions (90): [-9 + 3*sqrt(74), -3*sqrt(74) - 9] Equation (91): ( -195 ) * (x**2) + ( 18 ) * x + ( 3) = 0 Solutions (91): [3/65 - sqrt(74)/65, 3/65 + sqrt(74)/65] Equation (92): ( 3 ) * (x**2) + ( 18 ) * x + ( -195) = 0 Solutions (92): [-3 + sqrt(74), -sqrt(74) - 3] Equation (93): ( -566 ) * (x**2) + ( 20 ) * x + ( 1) = 0 Solutions (93): [5/283 - 3*sqrt(74)/566, 5/283 + 3*sqrt(74)/566] Equation (94): ( 1 ) * (x**2) + ( 20 ) * x + ( -566) = 0 Solutions (94): [-10 + 3*sqrt(74), -3*sqrt(74) - 10] Equation (95): ( -283 ) * (x**2) + ( 20 ) * x + ( 2) = 0 Solutions (95): [10/283 - 3*sqrt(74)/283, 10/283 + 3*sqrt(74)/283] Equation (96): ( 2 ) * (x**2) + ( 20 ) * x + ( -283) = 0 Solutions (96): [-5 + 3*sqrt(74)/2, -3*sqrt(74)/2 - 5] Equation (97): ( -545 ) * (x**2) + ( 22 ) * x + ( 1) = 0 Solutions (97): [11/545 - 3*sqrt(74)/545, 11/545 + 3*sqrt(74)/545] Equation (98): ( 1 ) * (x**2) + ( 22 ) * x + ( -545) = 0 Solutions (98): [-11 + 3*sqrt(74), -3*sqrt(74) - 11] Equation (99): ( -522 ) * (x**2) + ( 24 ) * x + ( 1) = 0 Solutions (99): [2/87 - sqrt(74)/174, 2/87 + sqrt(74)/174] Equation (100): ( 1 ) * (x**2) + ( 24 ) * x + ( -522) = 0 Solutions (100): [-12 + 3*sqrt(74), -3*sqrt(74) - 12] Equation (101): ( -261 ) * (x**2) + ( 24 ) * x + ( 2) = 0 Solutions (101): [4/87 - sqrt(74)/87, 4/87 + sqrt(74)/87] Equation (102): ( 2 ) * (x**2) + ( 24 ) * x + ( -261) = 0 Solutions (102): [-6 + 3*sqrt(74)/2, -3*sqrt(74)/2 - 6] Equation (103): ( -174 ) * (x**2) + ( 24 ) * x + ( 3) = 0 Solutions (103): [2/29 - sqrt(74)/58, 2/29 + sqrt(74)/58] Equation (104): ( 3 ) * (x**2) + ( 24 ) * x + ( -174) = 0 Solutions (104): [-4 + sqrt(74), -sqrt(74) - 4] Equation (105): ( -497 ) * (x**2) + ( 26 ) * x + ( 1) = 0 Solutions (105): [13/497 - 3*sqrt(74)/497, 13/497 + 3*sqrt(74)/497] Equation (106): ( 1 ) * (x**2) + ( 26 ) * x + ( -497) = 0 Solutions (106): [-13 + 3*sqrt(74), -3*sqrt(74) - 13] Equation (107): ( -470 ) * (x**2) + ( 28 ) * x + ( 1) = 0 Solutions (107): [7/235 - 3*sqrt(74)/470, 7/235 + 3*sqrt(74)/470] Equation (108): ( 1 ) * (x**2) + ( 28 ) * x + ( -470) = 0 Solutions (108): [-14 + 3*sqrt(74), -3*sqrt(74) - 14] Equation (109): ( -235 ) * (x**2) + ( 28 ) * x + ( 2) = 0 Solutions (109): [14/235 - 3*sqrt(74)/235, 14/235 + 3*sqrt(74)/235] Equation (110): ( 2 ) * (x**2) + ( 28 ) * x + ( -235) = 0 Solutions (110): [-7 + 3*sqrt(74)/2, -3*sqrt(74)/2 - 7] Equation (111): ( -441 ) * (x**2) + ( 30 ) * x + ( 1) = 0 Solutions (111): [5/147 - sqrt(74)/147, 5/147 + sqrt(74)/147] Equation (112): ( 1 ) * (x**2) + ( 30 ) * x + ( -441) = 0 Solutions (112): [-15 + 3*sqrt(74), -3*sqrt(74) - 15] Equation (113): ( -147 ) * (x**2) + ( 30 ) * x + ( 3) = 0 Solutions (113): [5/49 - sqrt(74)/49, 5/49 + sqrt(74)/49] Equation (114): ( 3 ) * (x**2) + ( 30 ) * x + ( -147) = 0 Solutions (114): [-5 + sqrt(74), -sqrt(74) - 5] Equation (115): ( -410 ) * (x**2) + ( 32 ) * x + ( 1) = 0 Solutions (115): [8/205 - 3*sqrt(74)/410, 8/205 + 3*sqrt(74)/410] Equation (116): ( 1 ) * (x**2) + ( 32 ) * x + ( -410) = 0 Solutions (116): [-16 + 3*sqrt(74), -3*sqrt(74) 
- 16] Equation (117): ( -205 ) * (x**2) + ( 32 ) * x + ( 2) = 0 Solutions (117): [16/205 - 3*sqrt(74)/205, 16/205 + 3*sqrt(74)/205] Equation (118): ( 2 ) * (x**2) + ( 32 ) * x + ( -205) = 0 Solutions (118): [-8 + 3*sqrt(74)/2, -3*sqrt(74)/2 - 8] Equation (119): ( -377 ) * (x**2) + ( 34 ) * x + ( 1) = 0 Solutions (119): [17/377 - 3*sqrt(74)/377, 17/377 + 3*sqrt(74)/377] Equation (120): ( 1 ) * (x**2) + ( 34 ) * x + ( -377) = 0 Solutions (120): [-17 + 3*sqrt(74), -3*sqrt(74) - 17] Equation (121): ( -342 ) * (x**2) + ( 36 ) * x + ( 1) = 0 Solutions (121): [1/19 - sqrt(74)/114, 1/19 + sqrt(74)/114] Equation (122): ( 1 ) * (x**2) + ( 36 ) * x + ( -342) = 0 Solutions (122): [-18 + 3*sqrt(74), -3*sqrt(74) - 18] Equation (123): ( -171 ) * (x**2) + ( 36 ) * x + ( 2) = 0 Solutions (123): [2/19 - sqrt(74)/57, 2/19 + sqrt(74)/57] Equation (124): ( 2 ) * (x**2) + ( 36 ) * x + ( -171) = 0 Solutions (124): [-9 + 3*sqrt(74)/2, -3*sqrt(74)/2 - 9] Equation (125): ( -114 ) * (x**2) + ( 36 ) * x + ( 3) = 0 Solutions (125): [3/19 - sqrt(74)/38, 3/19 + sqrt(74)/38] Equation (126): ( 3 ) * (x**2) + ( 36 ) * x + ( -114) = 0 Solutions (126): [-6 + sqrt(74), -sqrt(74) - 6] Equation (127): ( -305 ) * (x**2) + ( 38 ) * x + ( 1) = 0 Solutions (127): [19/305 - 3*sqrt(74)/305, 19/305 + 3*sqrt(74)/305] Equation (128): ( 1 ) * (x**2) + ( 38 ) * x + ( -305) = 0 Solutions (128): [-19 + 3*sqrt(74), -3*sqrt(74) - 19] Equation (129): ( -266 ) * (x**2) + ( 40 ) * x + ( 1) = 0 Solutions (129): [10/133 - 3*sqrt(74)/266, 10/133 + 3*sqrt(74)/266] Equation (130): ( 1 ) * (x**2) + ( 40 ) * x + ( -266) = 0 Solutions (130): [-20 + 3*sqrt(74), -3*sqrt(74) - 20] Equation (131): ( -133 ) * (x**2) + ( 40 ) * x + ( 2) = 0 Solutions (131): [20/133 - 3*sqrt(74)/133, 20/133 + 3*sqrt(74)/133] Equation (132): ( 2 ) * (x**2) + ( 40 ) * x + ( -133) = 0 Solutions (132): [-10 + 3*sqrt(74)/2, -3*sqrt(74)/2 - 10] In case anyone wants to play further with it, here is the Python code.

import math as mt
from sympy import Symbol
from sympy.core.sympify import sympify
from sympy.solvers import solve

MAX_B = 20
MAX_A = 700
MAX_C = 700

x = Symbol('x')
z = solve(x**2 - 1, x)

def is_int(a):
    return abs(a - mt.floor(a)) < 0.001

def generate_eq_str(a, b, c):
    return ("( % 3d" % a + " ) * (x**2) + " + "( % 3d" % (2*b) + " ) * x + " + "( % 3d" % c + ")")

def main():
    ind = 1
    for b in range(-MAX_B, MAX_B + 1):
        for c in range(1 + int(mt.floor(mt.sqrt(abs(b))))):
            if (c > 0):
                # a is chosen so that b**2 - a*c = 666, hence sqrt(666) = 3*sqrt(74) appears in the roots
                a = (b**2 - 666) / c
                if (is_int(a)):
                    if (abs(a) <= MAX_A and abs(b) <= MAX_B and abs(c) <= MAX_C):
                        str1 = generate_eq_str(a, b, c)
                        print("Equation (%d): " % ind, str1, " = 0")
                        eq = sympify(str1)
                        z = solve(eq, x)
                        print("Solutions (%d): " % ind, z)
                        print()
                        ind += 1
                        str2 = generate_eq_str(c, b, a)
                        print("Equation (%d): " % ind, str2, " = 0")
                        eq = sympify(str2)
                        z = solve(eq, x)
                        print("Solutions (%d): " % ind, z)
                        print()
                        ind += 1

if __name__ == "__main__":
    main()
H: Finding a basis given two matrix representations Given $T:\mathbb{R^3}\rightarrow \mathbb{R^3}$ defined by: $$ [T]^E_E=\begin{pmatrix} -2 & 4 & 5 \\ -8 & 12 & 12 \\ 8 & -11 & -10 \\ \end{pmatrix} $$ I need to find a basis $B=(\vec b_1,\vec b_2,\vec b_3)$ such that: $$ [T]^B_B=\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ \end{pmatrix} $$ I tried using the relation $[I]^B_E[T]^B_B[I]^E_B=[T]^E_E$ but it lead me nowhere since I couldn't find $[I]^B_E, [I]^E_B$. Any help would be appretiated, thank you! AI: The first column of $[T]_B^B$ is the zero vector. This means that $$T(\vec{b}_1) = 0\vec{b}_1 + 0\vec{b}_2 + 0\vec{b}_3 = 0.$$ The second column of $[T]_B^B$ is $(1, 0, 0)^\top$. This means that $$T(\vec{b}_2) = 1\vec{b}_1 + 0\vec{b}_2 + 0\vec{b}_3 = \vec{b}_1.$$ The third column of $[T]_B^B$ is $(0, 1, 0)^\top$. This means that $$T(\vec{b}_3) = 0\vec{b}_1 + 1\vec{b}_2 + 0\vec{b}_3 = \vec{b}_2.$$ So, I would start by finding a non-zero solution to $T(\vec{x}) = 0$, and let it be $\vec{b}_1$. Then, take this solution, find a solution to $T(\vec{x}) = \vec{b}_1$, and let $\vec{b}_2$ be this solution. Finally, solve $T(\vec{x}) = \vec{b}_2$, and let $\vec{b}_3$ be one of the solutions.
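Concretely, carrying out that recipe for the matrix above gives, for example, the chain below (this particular choice is not unique; I worked it out by hand), which can be verified with a short sympy sketch:

from sympy import Matrix

T = Matrix([[-2, 4, 5], [-8, 12, 12], [8, -11, -10]])

# a chain with T*b1 = 0, T*b2 = b1, T*b3 = b2
b1 = Matrix([-3, -4, 2])
b2 = Matrix([-4, -4, 1])
b3 = Matrix([-4, -3, 0])
assert T*b1 == Matrix([0, 0, 0]) and T*b2 == b1 and T*b3 == b2

P = Matrix.hstack(b1, b2, b3)   # columns are b1, b2, b3 in standard coordinates
print(P.inv()*T*P)              # Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])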
H: Writing an integral in terms of Lebesgue measure I have seen the following identity, but I think that it is not correct: Given a positive measure $\mu$ in $\mathbb{R}^N$, we have that $$\displaystyle\int_{\|x-y\|^{-\gamma}\leq R}\|x-y\|^{-\gamma}d\mu(y)=\displaystyle\int_0^R \mu\{y\in \mathbb{R}^N:\|x-y\|^{-\gamma}>t\} dt.$$ It seems to me that these must be an inequality. Could someone tell me if this is correct or not? AI: For any non-negative measurable function $f$ we have $\int f(y)d\mu (y)=\int_0^{\infty} \mu (\{y: f(y) >t\})dt$. This is a consequence of Fubini's Theorem. Just put $f(y)=1_{\|x-y\|^{-\gamma} \leq R} \|x-y\|^{-\gamma}$ in this equation and note that the integrand vanishes for $t >R$. $\int_0^{\infty} \mu \{x: f (x) >t) dt=\int_0^{\infty} \int \chi_{\{f>t\}}(x) d\mu (x) dt=\int \int_0^{f(x)} dt d\mu(x)=\int f(x) d\mu(x)$. The interchange of integrals is justified by Tonelli's Theorem.
H: How to determine when the following sum will be prime? I was playing around with dates in my head and thought of the following prime number problem. Problem: The following (numerical) days of the month are prime: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31. From this, we know the following (numerical) months are prime: Feb (2), Mar (3), May (5), July (7), Nov (11). Interestingly, some upcoming prime years are 2027 and 2029. Let $y$ be the year (four figures), $m$ the month, and $d$ the day. When is the next day that they are all prime and sum to a prime? Question: I think I have already shown that the sum must be odd by assuming that $y, m$ and $d$ are odd (which I now realise doesn't account for 2). However, I'm trying to prove that a sum of three primes can be prime and can't think of a way to go about it. Any ideas? AI: Using brute force: the next prime year is $2027$, so we need a prime month $m$ and a prime day $d$ such that $2027+m+d$ is also prime. In February ($m=2$) the only candidate giving an odd total is $d=2$, and $2027+2+2=2031=3\cdot 677$ is not prime. In March ($m=3$) the odd prime days $d=3,5,\dots,23$ give the totals $2033,2035,2037,2041,2043,2047,2049,2053$, and the first prime among these is $2053=2027+3+23$. So the next such date is the 23rd of March 2027.
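A quick brute-force check (a sketch; the starting date below is an assumption and should be adjusted to "today", and sympy.isprime is used for the primality tests):

from datetime import date, timedelta
from sympy import isprime

d = date(2022, 1, 1)   # hypothetical starting point
while not (isprime(d.year) and isprime(d.month) and isprime(d.day)
           and isprime(d.year + d.month + d.day)):
    d += timedelta(days=1)
print(d)   # 2027-03-23, since 2027 + 3 + 23 = 2053 is prime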
H: How to prove $f$ is an even function? Suppose $g$ is any odd function, and we have $\int_{-1}^{1} f(t)g(t)dt=0$ and $\int_{-1}^{1} f(-t)g(t)dt=0$, $f$ and $g$ are continuous on $[-1,1]$, how to show that $f$ is an even function, that is $f(t)=f(-t)$? I considered the integral $\int_{-1}^{1} (f(t)-f(-t))g(t)dt=0$, and this is for all odd function $g(t)$. First I looked the case $f(t)-f(-t)$ is always $0$, then it is obvious. Then I discussed $f(t)-f(-t)$ is not always $0$, this implies $(f(t)-f(-t))g(t)$ must be an odd function, then it is easy to get $f(t)=f(-t)$ since $g$ is an odd function. But I do not know whether this is correct, it seems I missed something. Thanks in advance. AI: The function $g(t)=f(t)-f(-t)$ is odd, as $g(-t)=f(-t)-f(t)=-g(t)$. This gives $\int_{-1}^1|f(t)-f(-t)|^2dt=0$. But this is possible if and only if $|f(t)-f(-t)|^2$ is identically $0$, which means that $f(t)=f(-t)$ for all $t$, i.e. $f$ is even. Your reasoning is mostly correct, be confident about what you did! It is unnecessary to distinguish the cases if $f(t)-f(-t)$ is identically $0$ or not, you can get the result without splitting into cases (as I explained above). P.S: If it is not clear to you why $\int|h(t)|=0\implies h\equiv0$ when $h$ is continuous, do it as an exercise. If you have questions, feel free to ask about it.
H: Deriving formula for cross-product. It is given on pg. #106, 107 in the book by: Thomas Banchoff, John Wermer; titled: Linear Algebra Through Geometry, second edn. Consider a system of two equations in three unknowns: $$a_1x_1 + a_2x_2 + a_3x_3 = 0$$ $$b_1x_1 + b_2x_2 + b_3x_3 = 0$$ We set $A = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}, B = \begin{bmatrix} b_1\\ b_2 \\ b_3 \end{bmatrix}$ in $\mathbb{R}^3$. A solution vector $X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$ gives $A.X=0, B.X=0$. We may find such an $X$ by multiplying the first equation by $b_1$ and the second by $a_1$ and subtracting $$a_1b_1x_1 + a_2b_1x_2 + a_3b_1x_3 = 0,$$ $$a_1b_1x_1 + a_1b_2x_2 + a_1b_3x_3 = 0,$$ $$(1): \ (a_2b_1 - a_1b_2)x_2 + (a_3b_1 - a_1b_3)x_3 = 0.$$ Similarly, we may multiply the first equation by $b_2$ and the second by $a_2$ and subtract to get $$(2):\ (a_1b_2 - a_2b_1)x_1 + (a_3b_2 - a_2b_3)x_3 = 0.$$ We can obtain a solution to the system $(1), (2)$ by choosing $$x_1 = (a_2b_3-a_3b_2), x_2=(a_3b_1-a_1b_3), x_3 =(a_1b_2-a_2b_1)$$ I am confused over how the author made such a choice for the coefficients of $x_1, x_2, x_3$ so as to derive it from $(1),(2)$. AI: The derivation of (1) (or (2)) follows the logic that we want to eliminate $x_1$ (or $x_2$). Write $A=a_2b_3-a_3b_2$, $B=a_3b_1-a_1b_3$ and $C=a_1b_2-a_2b_1$ for the three proposed values (these are scalars here, not the vectors from the question). Then the system (1), (2) can be seen as: $$ -Cx_2=-Bx_3,\\ Cx_1=Ax_3. $$ Now it is almost obvious that if $x_3=C$, $x_2=B$ and $x_1=A$, then the system is satisfied.
H: Given $f:U\to\mathbb{R}^n$, $U\subset\mathbb{R}^n$ convex and $\|D_f(x)\|<1$ for every $x\in U$; show $g(x)=x+f(x)$ is a diffeomorphism. Given $f:U\to\mathbb{R}^n$ with $U\subset\mathbb{R}^n$ convex and $\|D_f(x)\|<1$ for every $x\in U$, set $g(x)=x+f(x)$ and prove that $g$ is a diffeomorphism. I've managed to prove that $g$ is one-to-one, but I have no idea how to prove $g^{-1}\in C^1$. AI: Hint: for every vector $v\neq 0$, $$\|D_g(x)v\| = \|v + D_f(x)v\| \geq \|v\| - \|D_f(x)\|\,\|v\| = \bigl(1-\|D_f(x)\|\bigr)\|v\| > 0,$$ so $D_g(x)$ is invertible at every $x\in U$; now the inverse function theorem shows that $g^{-1}$ is $C^1$.
H: Identity proof with binomial coefficients I have to prove the following identity and have no idea how to start. $ { {-n} \choose k} = (-1)^k {{n+k-1} \choose k} $ I know that $ \sum_{k=0}^{n} (-1)^k{n \choose k}=0 $ but I cannot see a way in which it could help prove the identity above. Thanks for your help. AI: We can formally write $\binom {-n}{k}$ with the factorial-style formula: $$ \binom {-n}{k}=\frac {(-n)(-n-1)(-n-2)\dotsm (-n-k+1)}{k!} =\\ (-1)^{k}\frac {(k+n-1)(k+n-2)\dotsm (n)}{k!}=(-1)^{k}\binom {k+n-1}{k} $$
H: Unable to derive an expression in study of Linear Transformations While self-studying Linear Algebra from Hoffman and Kunze I am unable to derive the following expression. My question is about the 5th line after the underlined line in blue, i.e. the line "The only terms which survive in this huge sum are the terms where q=r", not anything after it. I think we should add the condition $i=s$ along with the condition $q=r$. This $i=s$ appeared when evaluating $E^{r, s} ( \alpha_{i} )$: doing so I got $ \delta_{i s}\,\beta_{r} $, and then applying the other factor gives the condition $q=r$. Here I am using the result from Theorem 5 (page 76), which is $E^{p, q} (\alpha_{i} ) = 0$ if $i\neq q$ and $\beta_{p}$ if $i=q$. Kindly guide what should be the case. AI: $E^{p,q}$ takes the $q$th $\alpha$-vector to the $p$th, and sends all others to zero. It's nice to look at this in the case where the $\alpha$-vectors are just the standard basis vectors, so that $$ \alpha_1 = \pmatrix{1\\0\\ 0\\ \vdots \\0}, \alpha_2 = \pmatrix{0\\1\\ 0 \\\vdots \\0}, \ldots, \alpha_n = \pmatrix{0\\0\\ 0 \\\vdots \\1}, $$ for in that case, the matrix for $E^{p,q}$ is all zeroes, except for a $1$ in row $p$, column $q$. If you multiply this by $E^{a,b}$ on the right, it's clear that you get all zeroes unless $E^{a,b}$ has a nonzero entry in row $q$. But the only $1$ in $E^{a,b}$ sits in row $a$, column $b$, so you need $q = a$. In that case the composite takes $\alpha_b$ to $\alpha_a = \alpha_q$, which is then transformed, by $E^{p,q}$, to $\alpha_p$, so you've got $E^{p,b}$ as a result. In your notation this is $E^{p,q}E^{r,s}=\delta_{qr}E^{p,s}$: the condition $i=s$ you mention is not an extra restriction on which terms survive in the sum; it simply describes what the surviving operator $E^{p,s}$ does when applied to $\alpha_i$.
H: Maximum number of possible disjoint subsets of unequal size Given a set $S$ of size $n$, there is a maximum number of disjoint subsets where the size of each of the subsets is larger than zero and different for each of the subsets. Which is obtained by counting the number of times $(k)$ we must add up the sequence of integers starting at 1, where at each addition we increase the number by which we add with one, until the our sum is larger than $n$, and then taking $k-1$. For example, if $n = 11$ we find that the maximum number of subsets is obtained by summing 1 + 2 + 3 + 4 + 5 = 15 > 11, e.g. k = 5, and the maximum number of subsets with different sizes for $n = 11$ is 4. My question is if there exists a simple expression for this number that does not require to iteratively sum until some condition is reached? A more elegant representation of the problem (thanks ab123) is which expression gives back the the maximum value for $i$ in under the constraint that: $$ i(i+1) \leq 2n $$ This function should for $n$ is 1:15 return the following answers: $1,1,2,2,2,3,3,3,3,4,4,4,4,4,5$ AI: If you need largest $i$ such that $\dfrac{i(i + 1)}{2} \leq n$, then $\implies i^2 < 2n$ $i = \lfloor\sqrt{2n}\rfloor - 1$ Edit: I think the other (now deleted) answer posted by Giorgio was correct, the above is an incorrect simplification. As worked out by Giorgio, solving $k^2 + k - 2n \leq 0$ gives the correct solution $i = \left\lfloor \dfrac{\sqrt{1+ 8n} - 1}{2}\right\rfloor$.
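A small sketch comparing the corrected closed form with direct summation (it uses Python's integer square root to avoid floating-point issues):

import math

def greedy_count(n):
    # largest k with 1 + 2 + ... + k <= n, found by direct summation
    k, total = 0, 0
    while total + (k + 1) <= n:
        k += 1
        total += k
    return k

def closed_form(n):
    return (math.isqrt(1 + 8*n) - 1) // 2

for n in range(1, 16):
    assert greedy_count(n) == closed_form(n)
print([closed_form(n) for n in range(1, 16)])
# [1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5]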
H: An alternate definition of Limit Let's say we were to define the concept of limit as follows: $\displaystyle{\lim_{x \to c}}f(x)=L$ means that for every $x$ in the domain of $f$, there exists an $x_0 \neq x$ in the domain of $f$ such that: $$|x-c|>|x_0-c|$$and$$|f(x)-L|\ge|f(x_0)-L|$$ I have two questions: Does there exist a limit that can be proven using the (ε, δ)-definition but not by using the above definition? Does there exist a limit that can't be proven using the (ε, δ)-definition but can be proven using the above definition? AI: The issue is not what you can prove, but that these two definitions are different. Let us define the function: $$ f(x)=\begin{cases} 1 & x=0\\ 0 & x=1\\ x & x\ne 0 \text{ and } x\ne 1 \end{cases} $$ Then, with the conventional definition, $\lim_{x\to 0} f(x)=0$. Take $x=1$. There is no point $x_0\ne x$ with $|f(x_0)|\le|f(x)|=0$. So the limit does not exist with your definition. Let $f(x)=\sin\dfrac{1}{x}$. For every $x\ne 0$, you can choose an integer $k$ large enough so that: $$ x_0=\frac{1}{2\,k\,\pi}<|x| $$ Then: $$ |f(x_0)|=\left|f\left(\frac{1}{2\,k\,\pi}\right)\right|=|\sin 2\,k\,\pi|=0\le\left|\sin\dfrac{1}{x}\right|=|f(x)| $$ This "proves" that $\lim_{x\to 0}f(x)=0$ with your definition. But the conventional limit does not exist. ADDITION: As mentioned in other answers, your definition does not even define an unique limit. In the last example, choose $x_0=\frac{1}{2k\pi+m}$ with $\sin m=L$. Then, again $|f(x_0)-L|=0$ and $L$ would also be a limit for any $-1\le L\le 1$.
H: $\int\frac {dx}{\sqrt{2ax-x^2}}=a^n \sin^{-1}(\frac {x}{a}-1)$ The question asks us to find $n$. The options are: a) 0 b) -1 c) 1 d) none of these. I have tried to solve this by using some general formulas of integration, yet I am unable to find the answer. AI: Hint:$$\sqrt{2ax-x^2} =\sqrt{a^2-(x-a)^2}$$ Also, $$\int\frac {dx}{\sqrt{A^2-x^2}}=\sin^{-1}\frac xA+C$$ Now, can you continue from here?
H: Graph Theory - Prove complement graph has a diameter of at most $3$ Given a graph $G$ with a diameter of at least 3. Prove that the diameter of $\bar{G}$ (Complement graph) is at most 3. I got stuck at the very beginning: why those specific numbers? I mean, I would understand if the graph $H$ had a diameter of $5$ and then $H = G \cup \bar{G}$ or something that contains the number $5$ somewhere, but nothing else is given. I would appreciate your kind help! Thank you! AI: First simplifying assumption: assume the graph is connected, since if $G$ is disconnected then $\overline{G}$ has diameter two or less (so we don't need to worry about 'infinite' diameter nonsense). Here's a hint: if a graph has diameter three or more, then it has two vertices $w$ and $z$ such that $d_G(w,z) = 3$. Consider a shortest path from $w$ to $z$; it will look like $w,x,y,z$, where the pairs $(w,y), (x,z)$ and $(w,z)$ are NOT adjacent in $G$. A full proof is below: Let $u$ and $v$ be vertices of $G$. Since $d_G(w,z) = 3$, it is not possible that $u$ is adjacent to both $w$ and $z$ in $G$. Thus $u$ IS adjacent to either $w$ or $z$ in $\overline{G}$. Similarly, $v$ is adjacent to either $w$ or $z$ in $\overline{G}$. Finally, $w$ and $z$ themselves are adjacent in $\overline{G}$, so $d_{\overline{G}}(u,v) \leq 3$.
H: On a 6th degree polynomial Say $p(x)$ is a 6th degree polynomial. We know that $p \geq 1, \forall x \in \mathbb R$ and that $$p(2014) = p(2015) = p(2016) = 1$$ while $p(2017)=2$. What is the value of $p(2018)$? My first approach was to define $q(x) = p(x)-1$ such that its factorization is something like $$q(x) = (x - 2014) (x - 2015) (x - 2016) \left(ax^3 + bx^2 + cx + d \right)$$ for some $a, b, c, d \in \mathbb R$ with $a \neq 0$. Now it is all to find those values that respect the conditions but in the system something goes wrong and makes difficult or even impossible to find useful solution. Do you have better ideas? AI: Continuing with the same notation. Note that $q(x) \ge 0$ for all $x \in \Bbb R$. We already see that $2014, 2015, 2016$ are roots of $q$. The claim is that they are all repeated twice. We do this by first proving the following claim: Claim. Let $r$ be a root of $q$. Then, $r$ is a root with an even multiplicity. Proof. Suppose not. Let $2k+1$ be the multiplicity of $r$. Then, we can write $$q(x) = (x - r)^{2k+1}g(x)$$ for some polynomial $g(x)$ with $g(r) \neq 0$. As polynomials are continuous, we can find a neighbourhood $U$ around $r$ such that $g(x)$ is of the same sign for all $x \in U$. However, on the other hand, $(x-r)^{2k+1}$ changes its sign in that neighbourhood and thus, we see that $q$ changes its sign. This contradicts that $q(x) \ge 0$ for all $x \in \Bbb R$. $\blacksquare$ Thus, each of $2014$, $2015$, and $2016$ has multiplicity at least $2$. Since $q$ has degree $6$, we see that they all have multiplicity exactly $2$. Thus, we get $$q(x) = a(x - 2014)^2(x - 2015)^2(x - 2016)^2$$ for some $a \in \Bbb R$. Using $q(2017) = 1$, we see that $a = 1/3!^2$. Thus, we get that $$q(2018) = \dfrac{1}{3!^2} \cdot 4!^2 = 4^2 = 16$$ and hence, $$p(2018) = 16 + 1 = \boxed{17}.$$
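For a quick symbolic check of this conclusion (a sketch using sympy):

from sympy import symbols, Rational

x = symbols('x')
p = Rational(1, 36)*(x - 2014)**2*(x - 2015)**2*(x - 2016)**2 + 1

assert all(p.subs(x, v) == 1 for v in (2014, 2015, 2016))
assert p.subs(x, 2017) == 2
print(p.subs(x, 2018))   # 17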
H: Identifying extrema of a functional I'm new on Calculus of variations and I don't figure out how to find a minimum (or maximum) for the following functional $$ J(f) = \int_{-3}^{-2}(f^2(t)+f'(t)) ~dt . \tag{1}$$ I have tried to use the Euler-Lagrange equation $$\frac{\partial\mathcal{L}}{\partial f}-\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial f'}\right) = 0, \tag{2}$$ where $\mathcal{L}\left(f(t),f^\prime(t),t\right) = f^2(t)+f'(t).$ However, the only solution that I have found was $f(t) =0$, but I don't know if $f(t)=0$ is a local minimum, local maximum or a saddle path of $J$. Question: If $f$ satisfies the Euler-Lagrange equation, how to test if $f$ is a minimum, maximum or saddle path ? In ordinary calculus, we can compute the Hessian, but I don't know how to proceed in calculus of variations. Thanks in advance for any help! AI: $f(t)=0$ is a saddle, indeed there exist $f_1(t)=-\frac{3}{38}t$ and $f_2(t)=t$ such that $J(f_1)=-\frac{3}{76}<0=J(f)<\frac{22}{3}=J(f_2)$. Since from $g(t)=kt$ it follows that $J(g)=\frac{19}{3}k^2+k$, it means that the functional $J$ is unbounded from above, so it does not have maximum value and supremum of $J$ is $+\infty$. Moreover there exists a sequence of functions $\{h_n(t)\}_{n\in\mathbb{N}-\{0\}}$ defined as $h_n(t)=\frac{1}{2\left(t+2-\frac{1}{n}\right)}$ such that $J(h_n)=-\frac{n}{4}\left(1-\frac{1}{n+1}\right)$. It means that the functional $J$ is also unbounded from below, so it does not have minimum value and infimum of $J$ is $-\infty$.
H: Integration by parts but not in terms of dx I have a question regarding the following integration. How do we integral $\int_0^\infty x d(e^{-sx})$. The answer is $xe^{-sx}|_0^\infty - \int_0^\infty e^{-sx}dx$. I am not sure how to solve this. I can only solve integrals in terms of $dx$ so this one really confuses me. Thanks in advance. AI: The integration by parts formula is $$\int u\,\mathrm{d}v=uv-\int v\,\mathrm{d}u$$ which you can use in this case with $u=x$ and $v=e^{-sx}$.
H: Does the Fréchet derivative need continuity? I want to ask a question that I feel everyone treats as if it were common knowledge. I am really new to functional analysis. How does Fréchet differentiability imply continuity? In the definition I have, only the continuity of the derivative itself, $A$, is mentioned: A mapping F : U ⊂ U → V is said to be Fréchet differentiable at u ∈ U if there exist an operator A ∈ L(U,V ) and a mapping r(u,·) : U → V with the following properties: for all h ∈ U such that u + h ∈ U, we have F(u + h) = F(u) + Ah + r(u,h) where the so-called remainder r satisfies the condition $\frac{\|r(u,h)\|_V}{\|h\|_U} \to 0$ as $\|h\|_U \to 0$. The operator A is then called the Fréchet derivative of F at u, and we write $A = F'(u)$. If $F$ is Fréchet differentiable at every point u ∈ U, then $F$ is said to be Fréchet differentiable in U. AI: $F(u+h)-F(u) \to 0$ as $h \to 0$ because $Ah \to 0$ as $h \to 0 $ and $r(u,h) =\|h\|_U \frac {r(u,h)} {\|h\|_U} \to 0$ too. [$\|Ah\| \leq \|A\| \|h\|_U \to 0$].
H: Solving $\int_{-1}^{1}8x^3-5x^2+4dx$ I've done the following so far: $$\left.\int_{-1}^{1}\left(8x^3-5x^2+4\right)dx=\left(\frac{8}{4}x^4-\frac{5}{3}x^3+4x\vphantom{\int}\right)\right|_{-1}^{1}$$ $$=\left(\frac{8}{4}-\frac{5}{3}+4\right)=\frac{13}{3}$$ However, I double-checked on wolfram alpha and the solution is actually $-\dfrac{14}{3}$. Would you know where I went wrong? I have no idea where the negative came from, in the solution, or how it's one value above mine. AI: You only evaluated the antiderivative at the upper limit; the value at the lower limit $x=-1$ must be subtracted as well (the last term alone contributes $4-(-4)=8$): $$\int_{-1}^1(8x^3-5x^2+4)dx=\left(2x^4-\frac53x^3+4x\right)_{-1}^1$$ $$=\left(2(1)^4-\frac53(1)^3+4(1)-2(-1)^4+\frac53(-1)^3-4(-1)\right) $$ $$=\frac{14}{3}$$
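A one-line symbolic check (sketch, using sympy):

from sympy import symbols, integrate

x = symbols('x')
print(integrate(8*x**3 - 5*x**2 + 4, (x, -1, 1)))   # 14/3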
H: Proving that every constructible number is algebraic I am starting to self study Galois Theory from J.S. Milne's notes on the subject. He claims that If $\alpha$ is a constructible number then $\alpha$ is algebraic over $\mathbb{Q}$ and $[\mathbb{Q}[\alpha],\mathbb{Q}]$ is a power of $2$. Now to prove this he uses 2 facts that are $1)$ If $F\subset E\subset L$ are fields then $[L:F]=[L:E][E:F]$ $2)$ A number is contructible if and only if it is contained in a subfield of $\mathbb{R}$ of the form $\mathbb{Q}[\sqrt{a_1},...,\sqrt{a_n}]$ with $a_i \in Q[\sqrt{a_1},...,\sqrt{a_{n-1}}]$ and $a_i>0$. Now to prove the result he just claims that $[\mathbb{Q}[\alpha],\mathbb{Q}]$ divides $[\mathbb{Q}[\sqrt{a_1},...,\sqrt{a_n}],\mathbb{Q}]$ and the latter is a power of $2$. Now I have some doubts about this , I get that since the number is algebraic we will have that $\alpha \in \mathbb{Q}[\sqrt{a_1},...,\sqrt{a_n}]$ and so we will have that $\mathbb{Q}[\alpha]$ will also be there and then we can use the formula , what I don't get is how do we know that $[\mathbb{Q}[\sqrt{a_1},...,\sqrt{a_n}],\mathbb{Q}]$ is a power of $2$? If I have this I get both the results but I don't even know how this would be finite since I don't know that $\sqrt{a_n}$ are algebraic right ? Maybe I am confusing things up but any help is aprecciated. Thanks in advance. AI: $\sqrt{a_n}$ is a root of $X^2-a_n$ and thus algebraic of degree 1 or 2. Edit. I will add details. You have a tower $\mathbb{Q}\subset ...\subset\mathbb{Q}[\sqrt{a_1},...,\sqrt{a_{i-1}}]\subset \mathbb{Q}[\sqrt{a_1},...,\sqrt{a_i}]\subset ...$ and each step is of degree 1 or 2, since $a_i \in Q[\sqrt{a_1},...,\sqrt{a_{i-1}}]$, so the total degree will be of the form $1^a2^b$.
H: Given the equation $\alpha \mathbf{v} + \mathbf{v}\times\mathbf{a} = \mathbf{b}$, solve for $\mathbf{v}$. I'm reading a textbook at the moment that provides the following linear equation, $$ \alpha \mathbf{v} + \mathbf{v}\times\mathbf{a} = \mathbf{b}, $$ and asks to solve for $\mathbf{v}$. The form of $\mathbf{v}$ is given as $$ \mathbf{v} = \frac{\alpha^2 \mathbf{b} - \alpha (\mathbf{b} \times \mathbf{a}) + (\mathbf{a}\cdot\mathbf{b})\mathbf{a}}{\alpha(\alpha^2+\lvert \mathbf{a} \rvert^2)}. $$ It's easy enough to verify that this is the correct solution. However, I can't figure out how I'd solve for $\mathbf{v}$ if given just the original equation. Are there any general approaches to solving this kind of equation systematically? Edit: $\mathbf{a}, \mathbf{b}$ and $\mathbf{v}$ are all vectors, whereas $\alpha$ is a scalar such that $\alpha \neq 0$. AI: Taking cross product with $\mathbf{a}$ on both sides, we get, \begin{align*} &\alpha \mathbf{v} + \mathbf{v}\times \mathbf{a} = \mathbf{b}\\ \implies &\alpha(\mathbf{v}\times \mathbf{a})+(\mathbf{v}\times \mathbf{a})\times \mathbf{a}=\mathbf{b}\times \mathbf{a}\\ \implies &\alpha(\mathbf{b}-\alpha \mathbf{v})+(\mathbf{v}\cdot \mathbf{a})\mathbf{a}-|a|^2\mathbf{v}=\mathbf{b}\times \mathbf{a}\\ \implies &\alpha \mathbf{b}-\alpha^2\mathbf{v}+\dfrac1\alpha (\mathbf{b}\cdot \mathbf{a})\mathbf{a}-|a|^2\mathbf{v}=\mathbf{b}\times \mathbf{a}&&\Big(\text{Using }\alpha (\mathbf{v}\cdot \mathbf{a})=\mathbf{b}\cdot \mathbf{a}\Big) \end{align*} Now solve for $\mathbf{v}$ directly.
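A quick numerical spot-check of the closed form (my own sketch with randomly generated vectors; numpy is assumed):

import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)
alpha = 1.7   # any nonzero scalar

v = (alpha**2*b - alpha*np.cross(b, a) + np.dot(a, b)*a) / (alpha*(alpha**2 + np.dot(a, a)))
print(np.allclose(alpha*v + np.cross(v, a), b))   # True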
H: If every element of a subset is strictly smaller than the ones of another, what is the relation between their bounds? Suppose S is a set with the least-upper-bound property and the greatest-lower-bound property. Let $X$ and $Y$ be subsets where every $x \in X$ and every $y\in Y$ is such that $x < y$. I'm asked to show that $\sup{X} < \inf{Y}$ or provide a counterexample where this is doesn't hold. I've previously showed that when $x\leq y$ for every $x\in X$ and $y \in Y$ we have that $\sup{X} \leq \inf{Y}$. It feels as if a similar proof employing strict inequalities should do the trick or this there some subtility with changing the inequalilties that breaks everything. Or is there a simple counterexample that one can employ? AI: Your previous result still obviously holds, that is, $\sup X \le \inf Y$. However, this inequality cannot be made strict. A natural counterexample is to take $S = \Bbb R$, $X = (-\infty, 0)$, and $Y = (0, \infty)$. Every negative number is strictly less than every positive number but the supremum and infimum coincide. Even more natural might be to take $S$ and $X$ as above but $Y = \{0\}$. Some more details A more intuitive way to think is that even though you have $x < y$ for all $x \in X$ and $y \in Y$, you can choose the numbers to be "arbitrarily close". In fact, this is something usual: you don't expect strict inequality to hold in inequalities involving $\sup$ and $\inf$. For example, suppose you have $s \in S$ such that $x < s$ for all $x \in X.$ Even then you cannot conclude that $\sup X < s$.
H: Quantum-mechanical Schwarz inequality: Proving $\langle \psi \mid \phi \rangle^* \langle \psi \mid \phi \rangle \ge 0$ for 1D case. I am currently studying the textbook Mathematical methods of quantum optics by Ravinder R. Puri. When discussing the postulates of quantum mechanics, the author introduces the quantum-mechanical version of the Schwarz inequality as follows: An important consequence of the axioms defining the scalar product is the Schwarz inequality $$\langle \phi \mid \phi \rangle \langle \psi \mid \psi \rangle \ge \langle \phi \mid \psi \rangle \langle \psi \mid \phi \rangle, \tag{1.5}$$ where the equality holds if and only if the two vectors in question are linearly dependent i.e. if $$\mid \psi \rangle = \mu \mid \phi \rangle, \tag{1.6}$$ $\mu$ being a complex number. In order to establish this, show that the minimum value of $\langle \Psi(\mu) \mid \Psi(\mu) \rangle$, where $\mid \Psi \rangle \ = \ \mid \psi \rangle - \mu \mid \phi \rangle$, as a function of $\mu$ is $\langle \psi \mid \psi \rangle - |\langle \psi \mid \phi \rangle |^2 / \langle \phi \mid \phi \rangle$. The requirement that this value, due to axiom 3 of the scalar product, be positive leads to the Schwarz inequality in (1.5). Also, according to the axiom 4 above, $\langle \Psi(\mu) \mid \Psi(\mu) \rangle = 0$ iff $\mid \Psi(\mu) \rangle - 0$ i.e. iff (1.6) holds. it may be verified easily that (1.5) then holds with equality. In a similar way we can derive the generalized Schwarz inequality $$\det(\langle \psi_\mu \mid \psi_\nu \rangle ) \ge 0, \tag{1.7}$$ where $\det(\langle \psi_\mu \mid \psi_\nu \rangle )$ is the determinant of the matrix constituted by the elements $\det\langle \psi_\mu \mid \psi_\nu \rangle$, $\mu, \nu = 1, \dots, n$. Invoking the fact that the determinant of a matrix is zero if its rows (or columns) are linearly dependent, it follows that the equality in (1.7) holds iff $\mid \psi_\mu \rangle$ are linearly dependent. Axiom 3 is as follows: $$\langle \psi \mid \psi \rangle > 0$$ Axiom 4 is as follows: $$\langle \psi \mid \psi \rangle = 0 \ \text{if and only if $\mid \psi \rangle = 0$}$$ My goal is to prove this case of the Schwarz inequality for myself. In researching this problem, I found this webpage. The author claims that $$| (\psi, \phi) |^2 \le (\psi, \psi)(\phi, \phi).$$ Trying to connect this with (1.5), I get $$\langle \phi \mid \phi \rangle \langle \psi \mid \psi \rangle \ge \langle \phi \mid \psi \rangle \langle \psi \mid \phi \rangle = \langle \psi \mid \phi \rangle^* \langle \psi \mid \phi \rangle \ge 0,$$ where $*$ is the complex conjugate. So then I wonder: Is it true that $\langle \psi \mid \phi \rangle^* \langle \psi \mid \phi \rangle \ge 0$? It isn't clear to me that this is true. So my objective now is to prove that this is true. To simplify things, I will first try to prove that it is true for the 1-dimensional case: $\psi$ and $\phi$ are complex numbers, right? So let $\psi = x_1 + i y_1$ and $\phi = x_2 + i y_2$. 
$$\begin{align} \langle \psi \mid \phi \rangle &= (x_1 - i y_1) \cdot (x_2 + i y_2) \\ &= x_1 x_2 - i x_1 y_2 - i x_2 y_1 + y_1 y_2 \\ &= (x_1 x_2 + y_1 y_2) - i(x_1 y_2 + x_2 y_1) \end{align}$$ $$\langle \psi \mid \phi \rangle^* = (x_1 x_2 + y_1 y_2) + i (x_1 y_2 + x_2 y_1)$$ $$\begin{align} \langle \psi \mid \phi \rangle^* \langle \psi \mid \phi \rangle &= [(x_1 x_2 + y_1 y_2) + i (x_1 y_2 + x_2 y_1)] \cdot [(x_1 x_2 + y_1 y_2) - i(x_1 y_2 + x_2 y_1)] \\ &= (x_1 x_2 + y_1 y_2) (x_1 x_2 + y_1 y_2) + i (x_1 x_2 + y_1 y_2)(x_1 y_2 + x_2 y_1) - i (x_1 y_2 + x_2 y_1)(x_1 x_2 + y_1 y_2) - (x_2 y_1 + y_1 y_2)(x_1 y_2 + x_2 y_1) \\ &= (x_1^2 x_2^2 + 2 x_1 x_2 y_1 y_2 + y_1^2 y_2^2) + i (x_1^2 x_2 y_2 + x_1 x_2^2 y_1 + x_1 y_1 y_2^2 + x_2 y_2 y_1^2) - i (x_1^2 x_2 y_2 + x_1 y_1 y_2^2 + x_2^2 x_1 y_1 + x_2 y_1^2 y_2) - (x_1 x_2 y_1 y_2 + x_2^2 y_1^2 + y_2^2 y_1 x_1 + y_1^2 y_2 x_2) \\ &= (x_1^2 x_2^2 + 2 x_1 x_2 y_1 y_2 + y_1^2 y_2^2) - (x_1 x_2 y_1 y_2 + x_2^2 y_1^2 + y_2^2 y_1 x_1 + y_1^2 y_2 x_2) \end{align}$$ I'm unsure of how to proceed from here. Have I made an error somewhere? Am I going about this incorrectly? I would greatly appreciate it if people would please take the time to review my this. AI: No, $|\phi\rangle$ and $|\psi\rangle$ are not complex numbers. They're element in a Hilbert space and $\langle \phi |\psi\rangle$ is their inner product. So it's $\langle \phi |\psi\rangle$ which is a complex number. And then $$\langle \phi |\psi\rangle^\star\langle \phi |\psi\rangle=|\langle \phi |\psi\rangle|^2$$ is the norm of a complex number so it's a real non negative value.
H: Calculate a meromorphic function on $\mathbb{C}$ s.t. $g(z) = \sum_{n\geq0}(n+1)T^n$ I have the following power series $f(T)=\sum_{n\geq0}(n+1)T^n$ and I have to calculate a meromorphic function $g$ on $\mathbb{C}$ such that $\forall z \in B_1(0), g(z)=f(z)$. I have tried the following: $f(z) = \sum_{n\geq0}nz^n + \sum_{n\geq0}z^n$ and I used that $z\sum_{n\geq0}nz^{n-1} = z \frac{1}{(1-z)^2}$ and $\sum_{n\geq0}z^{n} = \frac{1}{1-z}$ because $|z|<1$. Then $f(z)= \frac{z}{(1-z)^2} + \frac{1}{1-z} = \frac{1}{(1-z)^2}$. I believe this is wrong, but I could not find the error. Can anyone help me? Thank you in advance. AI: As mentioned in a comment, your computation is correct, but you can make it simpler, using the formal derivative in $\mathbf C[[T]]$: $$\sum_{n\ge 0}(n+1)T^n=\sum_{n\ge 1}nT^{n-1}=\Bigl(\sum_{n\ge 0}T^n\Bigr)'=\Bigl(\frac1{1-T}\Bigr)'=\frac1{(1-T)^2}.$$
H: numerical instability of integer programming Let us assume the objective function $f$ of some IP looks as follows $$ f = \sum x_i + \varepsilon \sum y_i.$$ With $\varepsilon$ being very small ($\approx 0.00001$) and $x_i$ and $y_i$ some variables. There may also be some constraints, but let us not focus on them. It seems that IP-solvers have problems to solve such an IP, because of numerical instability. (I am by now means an expert on IP solvers.) Is there a way to reformulate the IP, such that the IP can still be solved efficiently? The purpose of the small $\varepsilon$ is to make minimizing the $\sum y_i$ a secondary priority. In other words, among the solutions that minimize $\sum x_i$, try to minimize $\sum y_i$. AI: You can try doing it in two stages. First solve the problem with the original objective to minimize $\sum x_i$ and original constraints. Suppose the optimal solution has $\sum x_i = s$. Then solve the problem with additional constraint $\sum x_i = s$ and objective to minimize $\sum y_i$.
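A minimal two-stage (lexicographic) sketch using PuLP; the variables, bounds and the two constraints below are made-up toy data, not from the original model, and the default solver bundled with PuLP is assumed:

from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

xs = [LpVariable("x%d" % i, lowBound=0, upBound=10, cat="Integer") for i in range(3)]
ys = [LpVariable("y%d" % i, lowBound=0, upBound=10, cat="Integer") for i in range(3)]

# Stage 1: minimise sum x_i subject to the original constraints.
stage1 = LpProblem("stage1", LpMinimize)
stage1 += lpSum(xs)
stage1 += xs[0] + ys[0] >= 4          # example constraint
stage1 += xs[1] + xs[2] + ys[1] >= 3  # example constraint
stage1.solve()
s = value(stage1.objective)

# Stage 2: among the solutions with sum x_i == s, minimise sum y_i.
stage2 = LpProblem("stage2", LpMinimize)
stage2 += lpSum(ys)
stage2 += lpSum(xs) == s
stage2 += xs[0] + ys[0] >= 4
stage2 += xs[1] + xs[2] + ys[1] >= 3
stage2.solve()
print(value(lpSum(xs)), value(lpSum(ys)))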
H: Natural logarithm calculation in RGR example I'm studying the book Statistical Methods for Research Workers by R.A. Fisher (1934). I'm in chapter 2, called Diagrams. It contains an example of the relative growth rate (RGR) of babies’ weight in their first 13 weeks of life. I understand how the RGR is calculated, but when trying to replicate the table, I could not work out how to obtain the numbers in the natural-log-of-weight column. Does anyone know how those numbers were calculated? For example, $\ln(110)$ gives $4.70$, not $0.0953$ as the book shows. On the other hand, if I take the book's logarithm values as given, the increase-column calculation does fit the table. Can a natural log have two equivalent decimal representations? AI: For some reason, the weight was converted to units of $100$ ounces before the natural logarithm was taken. Thus, the numbers in the column labeled Natural Log. of Weight are obtained by dividing the Weight in Ounces by $100$ and then taking the natural logarithm, or by taking the logarithm of the Weight in Ounces and then subtracting the natural logarithm of $100$.
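A two-line check of the arithmetic (sketch):

import math

print(round(math.log(110 / 100), 4))             # 0.0953, the tabulated value
print(round(math.log(110) - math.log(100), 4))   # the same number, computed the other way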
H: Uniqueness of homomorphism - universal property If $\varphi: R\to A$ is a homomorphism such that, for a subset $S$, $\varphi(s)$ is invertible for every $s\in S$, then there is a unique homomorphism $\varphi^\prime:S^{-1}R\to A$ such that $\varphi = \varphi^{\prime}\circ \pi $, where $\pi:R\to S^{-1}R$ is defined by $\pi(r)=\frac{r}{1}$. My problem is to prove the uniqueness of the homomorphism. Can someone give me a hint? AI: For any ring map $ψ \colon S^{-1}R → A$ with $φ=ψ∘π$ and given $r ∈ R$, $s ∈ S$: what is $ψ(\frac r 1)$, and what must $ψ(s)\,ψ(\frac r s)$ then be?
H: How to prove $a^x\times a^y=a^{x+y}$ for cardinals? How can I prove this: $$a^x\times a^y=a^{x+y} $$ when $card(A)=a$ , $card(X)=x$ and $card(Y)=y$. AI: Assuming the axiom of choice, and assuming that $a$ is an infinite cardinal and that $x,y>0$: use the fact that if $\kappa$ and $\lambda$ are infinite cardinals then $\kappa + \lambda = \kappa \cdot \lambda$ ($= \mathrm{max} \{ \kappa, \lambda \}$). This reduces the problem to proving that $a^x \cdot a^y = a^{x+y}$, which you have likely already proved. Edit: I see you've updated your question to ask about the product instead of the sum. In this case, try to find a bijection between the following sets: The set of functions $f : X \sqcup Y \to A$, where $\sqcup$ denotes the disjoint union operation; and The set of pairs of functions $(f_X : X \to A, f_Y : Y \to A)$. The first of these sets has cardinality $a^{x+y}$, and the second has cardinality $a^x \cdot a^y$. Bigger hint: Consider the restrictions of a function $X \sqcup Y \to A$ to $X$ and $Y$, respectively.
H: Finding the plane where the line lies on Having a line and a normal orthogonal to it, how do I find the plane in which the normal will come out from and the line will lie on? AI: In the three-dimensional affine space $\mathbb{A^3}_{\mathbb{R}}$ (I'm assuming $\mathbb{K}=\mathbb{R}$) a line is defined, in its cartesian form, by the intersection of two planes. Let $l$:\begin{cases} ax+by+cz+d=0 \\ a'x+b'y+c'z+d'=0 \end{cases} and $\textbf{u}$ a vector orthogonal to the line. We can consider the pencil of planes with support the line $l$ such that $$\Lambda_{\mu,\eta}: \mu (ax+by+cz+d) + \eta(a'x+b'y+c'z+d')=0$$ that can be written as $x(\mu a + \eta a')+y(\mu b + \eta b')+z(\mu c + \eta c')+\mu d+\eta d'=0$, where $$\textbf{v}=(\mu a + \eta a', \mu b + \eta b',\mu c + \eta c')$$ is the parametrization of its orthogonal vector. So, in order to find the plane we only have to solve the system $\textbf{v}= \rho \textbf{u}$ (the two vectors have to be proportional).
H: Finding Radon-Nikodym derivative $d\mu/dm$ where $m$ is the Lebesgue measure on $[0,1]$, $f(x)=x^2$, and $\mu(E)=m(f(E))$ Consider the function $f:[0,1]\to \Bbb R$, $f(x)=x^2$. Let $m$ denote the Lebesgue measure on $[0,1]$ and define $\mu(E)=m(f(E))$. Since $f$ is absolutely continuous and nondecreasing, $f$ maps measure zero sets to measure zero sets, so $\mu$ is absolutely continuous with respect to $m$, so we can consider the Radon-Nikodym derivative $d\mu/dm$. Can we explicitly find $d\mu/dm$? AI: For interval $[a,b]\subseteq[0,1]$ we find: $$\mu\left(\left[a,b\right]\right)=m\left(f\left(\left[a,b\right]\right)\right)=m\left(\left[a^{2},b^{2}\right]\right)=b^{2}-a^{2}=\left[x^{2}\right]_{a}^{b}=\int_{a}^{b}2xdx$$ Indicating that the derivative is $2x$.
H: Online resource recommendation for learning about vector analysis In the last semester I had Vector Analysis lecture. We have seen about some basic geometry of sphere, then we got into basics if vectors that we seen in first semester in mechanics lecture but this was more like linear algebra. Then I learned about lines and planes. Then we have seen about partial derivative, double integrals, triple integrals, scalar field, vector field, gradient vector, divergence of a scalar and vector fields, rotations of vector fields, curl of vector fields, line integrals and so on. Due to covid-19 I couldn't be able to learn and understand the topics I have seen in the lecture. Also I'm interested in these divergence, curl, rotation and vector fields topics. I want to learn them all from zero. I need your online resource recommendation. If you can recommend me PDF file, it could be very helpful. (I like demonstrations and illustrations of vector analysis. If the PDF has good illustrations, I would be very happy) AI: I know two good courses on this topic that could help you to visualize the demonstrations and illustrate them. A good start is the Vector Calculus for Engineers available on Coursera. The other one is the Multivariable Calculus available on Mit opencourseware. Both of them will give you the tools that you are looking for.
H: Proof that in a field $x=0$ is equivalent to $x=-x$ Let $F$ be a field and $x\in F$. If $x=-x$ and $1\neq -1$, then $$0=\frac{x+x}{1+1}=\frac{1+1}{1+1}x=x.$$ This means that the statement in the title is true if and only if $1\neq -1$. But how do we know that $1\neq -1$ for an arbitary field? AI: But how do we know that $1+1\neq 0$ for an arbitrary field? Completely valid question, and the answer is: you don't! There are fields for which $1 + 1 = 0$, called fields of characteristic $2$. The simplest example is the field with two elements, $\mathbb{F}_2 := \{ 0, 1 \}$, where $1 + 1 = 0$. Indeed, in $\mathbb{F}_2$ (or any field of characteristic $2$) we have $1 = -1$ but $1 \neq 0$, so the result you are trying to prove is true only when the characteristic of the field is not $2$.
H: Distribution of sum of two independent random variables Here's a small problem I tried to solve. 2 dice are thrown, let $X$ denote the result of the first dice and let $Y$ denote the result of the second dice. I'm asked to describe the law of $Z=X+Y$. I tried to solve this problem using the law of total probabilities using the partition given by the events $\{ X=i \}$, for $i=1,2,...,6$. Now let $j \in \{ 1,2,3,...,11,12 \}$, then \begin{align*} P(Z=j) &=\sum_{i=1}^6 P(Z=j\mid X=i)P(X=i)\\ &=\frac{1}{6}\sum_{i=1}^6 P(X+Y=j\mid X=i)\\ &=\frac{1}{6}\sum_{i=1}^6 P(Y=j-i\mid X=i)\\ &=\frac{1}{6}\sum_{i=1}^6 P(Y=j-i). \end{align*} I cannot justify the last step. I just found out that it leads to the correct answer. I guess that it has something to do with the fact that the variables $X$ and $Y$ are independant. In my book the definition of independant variables is the following : $X$ and $Y$ are said to be independant iff for all $i,j$ the events $\{X=i\}$ and $\{Y=j\}$ are independant, i.e. $$ P(X=i,Y=j)=P(X=i)P(Y=j). $$ Also I guess that $P(X=i,Y=j)$ means $P(\{X=i\} \cap \{Y=j\})$. Can anyone explain why I am allowed to write : $$ P(Y=j-i\mid X=i)=P(Y=j-i). $$ Are the two events $\{ Y=j-i\}$ and $\{ X=i \}$ independant ? Thanks for your help. AI: If $X,Y$ are independent random variables then: $$P(X\in A,Y\in B)=P(X\in A)P(X\in B)$$ for all Borel measurable sets $A,B$. Using that we find: $$P(Y=j-i\mid X=i)=\frac{P(Y=j-i,X=i)}{P(X=i)}=\frac{P(Y=j-i)P(X=i)}{P(X=i)}=$$$$P(Y=j-i)$$ where the second equality rests on independence.
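As a sanity check (my own sketch, not part of the original answer), one can enumerate all $36$ equally likely outcomes and confirm that the convolution formula reproduces the law of $Z$:

from fractions import Fraction
from collections import Counter

pY = {k: Fraction(1, 6) for k in range(1, 7)}   # law of a single die
dist = Counter()
for i in range(1, 7):
    for j in range(1, 7):
        dist[i + j] += Fraction(1, 36)

# P(Z = j) = (1/6) * sum_i P(Y = j - i)
for j in range(2, 13):
    conv = Fraction(1, 6) * sum(pY.get(j - i, Fraction(0)) for i in range(1, 7))
    assert conv == dist[j]
print(dict(dist))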
H: Prove that triangle $\triangle ABC \cong \triangle GHI$. Explain each step. My question: Prove that triangle $\triangle ABC \cong \triangle GHI$. Explain each step. Here are my triangles I proved that $\triangle ABC \cong\triangle DEF$ by the first congruence criterion (SAS): angle $ABC = $ angle $DEF$, $AB = DE$, $BC = EF$. Now my problem is how to prove $\triangle ABC \cong \triangle GHI$. Thanks again! AI: First you should prove that $\triangle DEF \cong \triangle GHI$ by applying the ASA (Angle-Side-Angle) criterion: angle $EDF \cong$ angle $HGI$; $DF \cong GI$; angle $DFE \cong$ angle $GIH$. Finally you can prove that $\triangle ABC \cong \triangle GHI$ by applying the transitive property of congruence.
H: Find a suitable polynomial function for the data points: $(-1,1),(0,1),(1,3),(2,1)$. I'm working through examples on different kinds of interpolation methods, and I came across this video, with the following question: Find a polynomial equation that best fits the following data points: $$(-1,1),(0,1),(1,3),(2,1)$$ To solve this I constructed the following matrix equation: $$V \cdot c = f, \text{V is the Vandermonde matrix}$$ $$\begin{pmatrix}1&-1&1&-1\\ \:1&0&0&0\\ \:1&1&1&1\\ \:1&2&4&8\end{pmatrix}\begin{pmatrix}c_0\\ \:c_1\\ \:c_2\\ \:c_3\end{pmatrix}=\begin{pmatrix}1\\ \:1\\ \:3\\ \:1\end{pmatrix}$$ Using Reduced Row Echelon Form (done on a piece of paper and confirmed by Symbolab), I get the following matrix equation: $$\DeclareMathOperator{rref}{rref} \left[\begin{array}{rrrr|r} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & -1 \\ \end{array}\right]$$ This means that $c_0=c_1=c_2=1, c_3=-1$ and my polynomial function is $f_1(x)=1+x+x^2-x^3$. However, the answer in the video is slightly different: $f_2(x) = 1+2x+x^2-x^3$. To check which one is correct, I simply checked which one gave the correct results for $x_i$, $i=0,1,2,3$. Unfortunately, my equation is off by a little: $f_1(-1)=2$ and $f_2(-1)=1$. I'm not sure why my function is wrong, especially since I checked on Symbolab to reconfirm my solutions for $\vec c$. I suspect it may be a slight error in my RREF but I'm not 100% sure. AI: Let $f(x)=ax^3+bx^2+cx+d$ and let this polynomial pass through $(-1,1),(0,1),(1,3),(2,1)$. Then we get four equations $$-a+b-c+d=1~~~~(1),~~ d=1~~~(2),~~ a+b+c+d=3~~~~(3),~~ 8a+4b+2c+d=1~~~~~(4)$$ These simple simultaneous equations can be solved to get $a=-1,b=1, c=2, d=1$. Finally we have $$f(x)=-x^3+x^2+2x+1$$
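Since the discrepancy comes down to a row-reduction slip, it is easy to check numerically which coefficient vector actually satisfies $Vc=f$. A small NumPy sketch (the variable names are mine):

```python
import numpy as np

x = np.array([-1.0, 0.0, 1.0, 2.0])
f = np.array([1.0, 1.0, 3.0, 1.0])

# Columns [1, x, x^2, x^3], matching the matrix in the question.
V = np.vander(x, 4, increasing=True)

c = np.linalg.solve(V, f)
print(c)   # [ 1.  2.  1. -1.]  ->  f(x) = 1 + 2x + x^2 - x^3

# The candidate from the question, c = (1, 1, 1, -1), does not reproduce f:
print(V @ np.array([1.0, 1.0, 1.0, -1.0]))   # [ 2.  1.  2. -1.], not [1, 1, 3, 1]
```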
H: Prove that a group has injective homomorphism into direct product of quotients Herstein problem 2.13.10 Let $G$ be a group, $K_1,..,K_n$ normal subgroups of $G$. Suppose that $K_1\cap K_2\cap...\cap K_n=(e)$. Let $V_i=G/K_i$. Prove that there is an isomorphism of $G$ into $V_1\times V_2\times .. \times V_n$ I tried to prove $G$ is isomorphic to internal direct product of $V_i$'s but this will give an isomorphism onto $V_1\times V_2\times .. \times V_n$ while we are asked to prove merely into. I am unable to deal with two concepts together: external direct product and quotient group. Please give a hint. Please do not give the solution. AI: Hint: The embedding: $$g\mapsto (gK_1, gK_2,...).$$
H: Why $\frac{1}{m}\frac{dm}{dt}= \frac{d}{dt}\left ( \log_{e}m \right )$ is true? In the explanation of the relative growth rate calculation, in chapter 2 of R.A. Fisher's book Statistical Methods for Research Workers, the following equality is shown: $$\frac{1}{m}\frac{dm}{dt}= \frac{d}{dt}\left ( \log_{e}m \right )$$ I could not see why this is true, since $(1/x)' = -1/x^{2}$ and $(\ln x)'= \frac{1}{x}$. Am I missing something? Why is the equality true? AI: Here, $m$ is some function of $t$. By the chain rule: $(f\circ g)'(t) =f'(g(t))\, g'(t)$. In your case, $f(t) =\ln|t|$, $g(t) =m \implies g'(t) =dm/dt$ and $f'(t) =1/t$ for all $t$ except $t=0$. Hence, $(f\circ g)'(t) = f'(g(t))\, g'(t) =\frac{1}{g(t) } \frac{dm} {dt} =\frac{1} {m}\frac{dm} {dt}$.
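If it helps to see the chain-rule computation confirmed symbolically, SymPy reproduces the identity directly (this is just a check, with a generic unspecified function $m(t)$):

```python
from sympy import Function, symbols, log, diff

t = symbols('t')
m = Function('m')(t)          # m is an unspecified (positive) function of t

# d/dt log(m(t)) comes out as m'(t)/m(t), i.e. (1/m) dm/dt
print(diff(log(m), t))        # Derivative(m(t), t)/m(t)
```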
H: Find every equation of the line that passes through the point $(5,13)$ "Find every equation of the line that passes through the point $(5,13)$ and crosses both axes at non-negative whole values." Here's my attempt: Finding the first two equations, with $k=\pm1$, is fairly simple. After that, plugging $x=5$ and $y=13$ into the equation yields $b=13-5k$. Since the line crosses the $y$ axis at $(0,b)$, $b$ has to be whole. That means $13-5k$ has to be whole, $\implies 5k \in Z$. The only non-negative value of $k$ that crosses the $x$ axis at a non-negative value is $k=1$, so for every other line $k<0$. For lines with $k<0$, $b>13$ and the value of $x \geq 6$. $$kx+b=0$$ $$x=\frac{-b}{k}$$ $$\frac{5k-13}{k} \geq 6 $$ $$ k \leq -13 \implies b\leq 78$$ I'm not sure how to proceed from here on, I could just check for each value of $b \in (13,78]$ but that doesn't seem very efficient. What am I missing? Is my way of doing this correct? Or is there a better way? And if my attempt is correct, how do I proceed? AI: $$y-13= m(x-5)$$ We can't have $m=0$ or there is no $x$-intercept. The line has $x$-intercept $-\frac{13}m+5$ and $y$-intercept $13-5m$. We require $-\frac{13}m+5 \ge 0$ and $13-5m \ge 0$, that is $$m(-13 +5m) \ge 0 \land m \le \frac{13}5 $$ $$m=\frac{13}{5} \lor m \le 0$$ Notice that $m$ can't be irrational and it can't be zero. Suppose $m= \frac{p}{q}, \gcd(p,q)=1$; we need $p$ to divide $13$ and $q$ to divide $5$. Hence $p\in \{-13, -1, 1, 13\}$ and $q \in \{-5,-1,1,5\}$. Hence the possible values of $m$ are $\frac{13}5, -13, \frac{-13}5, -1, \frac{-1}5$.
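The divisibility argument can be checked by brute force: enumerate the candidate slopes $m=p/q$ with $p\mid 13$ and $q\mid 5$ and test both intercepts with exact rational arithmetic. A short sketch (names are mine):

```python
from fractions import Fraction

def is_nonneg_int(x):
    return x >= 0 and x.denominator == 1

solutions = []
for p in (-13, -1, 1, 13):          # p must divide 13
    for q in (1, 5):                # q must divide 5 (the sign is absorbed into p)
        m = Fraction(p, q)
        x_int = 5 - Fraction(13, 1) / m     # x-intercept of y - 13 = m(x - 5)
        y_int = 13 - 5 * m                  # y-intercept
        if is_nonneg_int(x_int) and is_nonneg_int(y_int):
            solutions.append(m)

print(sorted(set(solutions)))   # exactly five slopes: -13, -13/5, -1, -1/5, 13/5
```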
H: Sum of reciprocals of odd prime numbers equal to one I was wondering if there is any known sum of reciprocals of distinct odd prime numbers such that $$\sum_{k=1}^{n}\frac{1}{p_k}=1$$ Could someone give an example of one, or tell if there is none known? Or maybe it is impossible to find one, then it would be great to know the proof. Thanks! AI: Multiply both sides by $\prod_{j=1}^n p_j$ and you obtain $$\sum_{k=1}^n\left(\prod_{j\ne k} p_j\right)=\prod_{j=1}^n p_j\\ \prod_{j=1}^{n-1}p_j+p_n\sum_{k=1}^{n-1}\prod_{\substack{j\ne k\\ j<n}}p_j=p_n\prod_{j=1}^{n-1} p_j\\\prod_{j=1}^{n-1}p_j=p_n\left(\prod_{j=1}^{n-1}p_j-\sum_{k=1}^{n-1}\prod_{\substack{j\ne k\\ j<n}}p_j\right)$$ And therefore $p_n\mid\prod_{j=1}^{n-1}p_j$, which is impossible by $p_1,\cdots,p_n$ being distinct primes.
H: How can I prove this derangement question I need to prove this equation while using combinations but I have no idea how to proceed. I just need a hint. Thank you $n!=D_n{n\choose 0} + D_{n-1} {n\choose 1} + D_{n-2}{n\choose 2} +... D_0{n\choose n} $ $D_0=1$ $D_n$ is the number of derangements of an $n$-element set AI: Remember that $n!$ counts the number of permutations of $\{1,\ldots, n\}$. So let $S_{n}$ be the set of permutations. We can partition $S_{n}$ into $n+1$ sets $X_{0},X_{1},\ldots, X_{n}$ where a permutation $f$ is in $X_{k}$ if and only if it has exactly $k$ fixed points. So $n!=|S_{n}|=|X_{0}|+|X_{1}|+\ldots +|X_{n}|$. Now notice that $|X_{k}|={n\choose k}D_{n-k}$ (first choose $k$ fixed points and then choose a derangement of the remaining $n-k$ points.)
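The counting argument can be cross-checked numerically using the standard derangement recurrence $D_n=(n-1)(D_{n-1}+D_{n-2})$; a small verification sketch:

```python
from math import comb, factorial

def derangements(n):
    """D_n via the recurrence D_n = (n-1)(D_{n-1} + D_{n-2}), with D_0 = 1, D_1 = 0."""
    d = [1, 0]
    for k in range(2, n + 1):
        d.append((k - 1) * (d[k - 1] + d[k - 2]))
    return d[n]

for n in range(0, 10):
    assert factorial(n) == sum(comb(n, k) * derangements(n - k) for k in range(n + 1))
print("identity verified for n = 0..9")
```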
H: Example of a Graph What is the smallest simple graph in which all but one node have degree 3 and the remaining node has degree 2? I have tried looking in relevant graph theory books but couldn't find how to proceed. AI: The graph must have at least 4 vertices for any vertex to have degree three. Since the sum of the degrees is always even, this graph must have an even number of degree-three vertices. So the total number of vertices (including the one degree-2 vertex) in this smallest graph has to be odd. So the graph has at least 5 vertices (it can't have 4), and you can't get a smaller graph than this with your degree conditions.
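The parity argument above gives the lower bound of 5 vertices; such a graph does in fact exist, so the bound is attained. Below is a quick check of one explicit 5-vertex example (the vertex labels and edge set are mine, not from the answer):

```python
# One 5-vertex simple graph with degree sequence (3, 3, 3, 3, 2):
edges = {(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 5), (4, 5)}

degree = {v: 0 for v in range(1, 6)}
for a, b in edges:
    assert a != b                      # no loops; using a set rules out repeated edges
    degree[a] += 1
    degree[b] += 1

print(degree)   # {1: 3, 2: 3, 3: 3, 4: 3, 5: 2}
```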
H: Mandelbrot Set Main Shape What is the exact shape of the main component of the Mandelbrot set? I’m referring to the heart-shaped area centered at the origin. Is there a simple way to express this shape in Cartesian or polar coordinates? AI: It's a cardioid. The Wikipedia article on the Mandelbrot set explains this in some detail, and even has a section titled “main cardioid”: Upon looking at a picture of the Mandelbrot set, one immediately notices the large cardioid-shaped region in the center. This main cardioid is the region of parameters $c$ for which $P_c$ has an attracting fixed point. It consists of all parameters of the form $$c = \frac\mu2(1-\frac\mu 2)$$ for some $\mu$ in the open unit disc. The answers at Why does the boundary of the Mandelbrot set contain a cardioid? have a presentation of the cardioid's boundary that may be easier to deal with.
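For a picture, the boundary of the main cardioid can be traced by letting $\mu=e^{i\theta}$ run over the unit circle in the formula quoted above; a minimal plotting sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 1000)
mu = np.exp(1j * theta)            # mu on the boundary of the unit disc
c = mu / 2 * (1 - mu / 2)          # c = (mu/2)(1 - mu/2)

plt.plot(c.real, c.imag)
plt.gca().set_aspect("equal")
plt.title("Boundary of the main cardioid")
plt.show()
```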
H: Points at infinity of $x^2-6xy+9y^2-3z+1=0$ (in the projective space $\mathbb{P^3}$) (Have I done it well?) We have to calculate the points at infinity of the following curve: $x^2-6xy+9y^2-3z+1=0$ (in the projective space $\mathbb{P^3}$) I know how are done this type of exercise but in this one exactly, I've got a doubt. I've done the following: $D=${$(x,y,z)\in\mathbb{R^3}|x^2-6xy+9y^2-3z+1=0$} To have a homogeneous equation: $\mathbb{D}=${$(x:y:z:t)\in\mathbb{P^3}|x^2-6xy+9y^2-3zt+t^2=0$} So the points at infinity: $I.P.(\mathbb{D})=${$(x:y:z:0)\in{\mathbb{P^3}|(x:y:z:0)\in{\mathbb{D}}}$}={$(x:y:z:0)\in{\mathbb{P}^3}|x^2-6xy+9y^2=0$} So, we can differentiate the following cases: If $\space $ $y=0 \to x=0 \to(x:y:z:0)=(0:0:z:0)$ And if $z=0$, $(0:0:z:0)=(0:0:0:0)$ and this is not possible because $(0:0:0:0)$ is not a projective point. But if $z\neq0$, we have $(0:0:z:0)=(0:0:1:0)$ If $y\neq 0\to x=3y$ so we have $(x:y:z:0)=(3y:y:z:0)$ But what does this mean? That it has $\infty$ points at infinity? Which are like $(3:1:z:0)$ ??? Have I done the exercise well? Or, if not, I would like to know what's wrong and how can I do it in a correct way. AI: You're doing well: first consider the associated homogeneous polynomial $$ x^2-6xy+9y^2-3zt+t^2=0 $$ and then set $t=0$, which yields $x^2-6xy+9y^2=0$, so $x=3y$. There's no condition on $z$, so you get a whole line, namely $(3:1:k:0)$ and the point $(0:0:1:0)$.
H: definite Integral with limit approaching $\infty$ Evaluation of $$\lim_{n\rightarrow \infty}\int^{\infty}_{0}\bigg(1+\frac{t}{n}\bigg)^{-n}\cdot \cos\bigg(\frac{t}{n}\bigg)dt$$ What I tried: put $\displaystyle \frac{t}{n}=u.$ Then $dt=ndu$, so $$I_{n}=\lim_{n\rightarrow \infty}\int^{\infty}_{0}n(1+u)^{-n}\cos(u)du$$ Using integration by parts, $$I_{n}=\lim_{n\rightarrow \infty}n\bigg[-n(1+u)^{n}\cos(u)\bigg|^{\infty}_{0}+\int^{\infty}_{0}n(1+u)^{n-1}\cos(u)du\bigg]$$ How do I solve it? Help me please. Thanks. AI: Taking the limit inside (this interchange of limit and integral can be justified by dominated convergence: for $n\ge 2$ the integrand is bounded in absolute value by the integrable function $(1+t/2)^{-2}$), we get: \begin{equation} I=\int\limits_{0}^{+\infty}\lim_{n\rightarrow \infty} \left(1+\frac{t}{n}\right)^{-n}\cos\left(\frac{t}{n}\right)\,dt \end{equation} Let $L$ be the limit; then using the product rule, we can split the limit as follows: \begin{equation} L=\underbrace{\lim_{n\rightarrow \infty}\left(1+\frac{t}{n}\right)^{-n}}_{L_{1}}\times\underbrace{\lim_{n\rightarrow \infty} \cos\left(\frac{t}{n}\right)}_{L_{2}} \end{equation} The second limit is equal to $1$ and the first limit is the definition of the exponential function $e^{-t}$. Then, we conclude that $L=e^{-t}$. Plugging this into $I$: \begin{equation} I=\int\limits_{0}^{+\infty}e^{-t}\,dt=\Gamma(1)=0!=1 \end{equation}
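A quick numerical check of the limiting value (not a proof): compute the integral for increasing $n$ and watch it approach $1$. The finite cutoff below replaces the infinite upper limit and is harmless because the integrand decays at least like $t^{-2}$ for $n \ge 2$; the helper name is mine.

```python
import numpy as np
from scipy.integrate import quad

def I(n, cutoff=500.0):
    f = lambda t: (1 + t / n) ** (-n) * np.cos(t / n)
    val, _ = quad(f, 0, cutoff, limit=400)
    return val

for n in (2, 10, 100, 1000):
    print(n, I(n))      # values tend to 1 = integral of e^{-t} over [0, inf) as n grows
```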
H: Eigenvalues from the eigenvectors $T\colon\mathbb{R}^{3}\to\mathbb{R}^{3}$ is a linear transformation such that $T^{3}(v)=T(v)$. I know the matrix $[T]$ in the canonical basis has trace and determinant both equal to zero. Also $$[T]=[Q][D][Q]^{-1}$$ such that $$[Q]=\begin{bmatrix} 1/\sqrt{2} & -1/\sqrt{3} & -1/\sqrt{6} \\ 0 & 1/\sqrt{3} & -2/\sqrt{6} \\ 1/\sqrt{2} & 1/\sqrt{3} & 1/\sqrt{6} \end{bmatrix}$$ and surely, $[D]$ is a diagonal matrix. I have to find the eigenvalues of $T$. I know its eigenvectors are the columns of $[Q]$. I tried computing the matrix product above, but I think this is not the way - I believe there are some properties I don't know which make things easier. AI: If $v $ is an eigenvector with eigenvalue $\lambda$, then $T^3(v)=T(v)\iff \lambda^3v=\lambda v$. But then $\lambda\in\{-1,0,1\}$. Since both the sum and the product of the eigenvalues are $0$, then, unless $T$ is the null function, the only possibility is that one of the eigenvalues is $1$, another one is $-1$ and the third one is $0$.
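To see that assigning the eigenvalues $1,-1,0$ to the columns of $[Q]$ is consistent with $T^3=T$, trace $0$ and determinant $0$, one can check numerically; the particular ordering $D=\mathrm{diag}(1,-1,0)$ below is just one choice, since any permutation of the three values works equally well.

```python
import numpy as np

Q = np.array([[1/np.sqrt(2), -1/np.sqrt(3), -1/np.sqrt(6)],
              [0.0,           1/np.sqrt(3), -2/np.sqrt(6)],
              [1/np.sqrt(2),  1/np.sqrt(3),  1/np.sqrt(6)]])

D = np.diag([1.0, -1.0, 0.0])          # one possible assignment of eigenvalues
T = Q @ D @ np.linalg.inv(Q)           # Q is orthogonal, so inv(Q) equals Q.T

print(np.allclose(T @ T @ T, T))       # True: T^3 = T
print(round(np.trace(T), 10), round(np.linalg.det(T), 10))   # 0.0 0.0
```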
H: Is every sufficiently smooth function on a compact manifold approximated by a linear combination of a 'few' Laplacian eigenfunctions? Let $M$ be some smooth, compact manifold. Let $\Delta$ be the usual Laplacian, and $f_0, f_1, \ldots, $ its eigenfunctions in order of increasing eigenvalues. I know that minimizing Dirichlet energy subject to being norm 1 and orthogonal to the first $k$ eigenfunctions gives the $k+1$st eigenfunction: https://en.wikipedia.org/wiki/Dirichlet_eigenvalue This seems to be a sense in which the canonical smooth functions on $M$ are the Laplacian eigenfunctions. Along those lines, I'm interested in whether there is a sense in which every sufficiently smooth (low Dirichlet energy) function is 'near' to a linear combination of the first $n$ eigenfunctions, where $n = n(K,\epsilon)$ will be a function of the Dirichlet energy and the desired approximation. More precisely: Question: Is there a function $n(K, \epsilon)$, such that for any $f \in C^{\infty}(M)$ with $ \int || \nabla f||^2 \leq K$ and $||f||_2 = 1$, there are $a_1,\ldots, a_{n(K, \epsilon)}$ such that $|| \sum_{i = 0}^{n(K, \epsilon)} a_i f_i - f ||_2 < \epsilon$? AI: I think the answer is yes, although manifold stuff is pretty far out of my wheelhouse so I could easily be missing something. First, we'll write $f = \sum_{i = 0}^{\infty} a_i f_i$, where the $f_i$ are the orthonormal eigenfunctions ($\Delta f_i = \lambda_i f_i$) normalized so $||f_i||_2 = 1$. In particular, we have $\sum a_i^2 = 1$ by the assumption $\| f\|_2 = 1$. Additionally, if $d\omega$ is the volume element on the manifold, we have $$K \geq \int_M || \nabla f||_2^2 d\omega = \int_M f \Delta f d\omega = \sum_{i = 0}^{\infty} a_i^2 \lambda_i \tag{1}$$ We have that $||f - \sum_{i = 0}^{N - 1} a_i f_i ||_2^2 = \| \sum_{i = N}^{\infty} a_i f_i\|_2^2 = \sum_{i = N}^{\infty} a_i^2$. So, mainly the question is how large $N$ has to be so that $\epsilon^2 > \sum_{i = N}^{\infty} a_i^2$. If $\epsilon^2 \leq \sum_{i = N}^{\infty} a_i^2$, since $\lambda_i \leq \lambda_{i + 1}$, from $(1)$ we have that $K \geq \epsilon^2 \lambda_N$. Since $\lambda_N \to \infty$, we know that we can pick $N$ large enough so that $\lambda_N > K/\epsilon^2$ and obtain a contradiction to $K/\epsilon^2 \geq \lambda_N$. Moreover, we can estimate how large $N$ needs to be: We know, by Weyl's law, that $\lambda_j \sim \frac{ (2 \pi)^2 }{ ( w_n Vol(M))^{2/n} } j^{2/n} = C_M j^{2/n}$. (Throughout I'll set $C_M = \frac{ (2 \pi)^2 }{ ( w_n Vol(M))^{2/n} }$.) Thus, we want $\inf \{ N : \lambda_N > K/\epsilon^2 \} \sim \inf \{ N : C_M N^{2/n} > K/\epsilon^2\}$. Modulo the handwaving in the previous line comparing the two infimums, this tells us that taking $N = \lceil (\frac{ K}{ C_M \epsilon^2})^{n/2} \rceil$ suffices so that $||f - \sum_{i = 0}^{N - 1} a_i f_i ||_2 < \epsilon$ when $K \geq \int_M || \nabla f||_2^2 d\omega$ and $||f||_2 = 1$.
H: How many different fractions can be made up of the numbers How many different fractions can be made up of the numbers 3, 5, 7, 11, 13, 17 so that each fraction contains 2 different numbers? How many of them will be proper fractions? AI: The number of fractions which use two different numbers is, as @Benjamin Wang noted, $\binom{6}{2}\times 2!=30$. For the proper fractions we do not multiply by $2!$, since for each pair only one of the two orderings puts the smaller number in the numerator, so the answer to the second question is $\binom{6}{2}=15$.
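A brute-force check of both counts (just a sanity check of the $\binom{6}{2}\cdot 2!=30$ and $\binom{6}{2}=15$ figures):

```python
from itertools import permutations

nums = [3, 5, 7, 11, 13, 17]

pairs = list(permutations(nums, 2))            # ordered pairs (numerator, denominator)
proper = [(a, b) for a, b in pairs if a < b]   # proper fractions: numerator < denominator

print(len(pairs), len(proper))                 # 30 15
```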
H: is linear projection sufficient for capturing all extreme points? Given a set $X \subset R^n$ with $m$ points. We can find it's Convex Hull and together with set of extreme points $E(X)$. And none of any points are linear multiplier of each other. Under a linear projection of $f: R^n \to R^{n-1}$, we can find extreme points of $Y:=\{f(x)| x \in X\}$, denote $E(Y)$. which would again be extreme points of $X$, since linear projection preserves convexity. $PI((E(Y))) \cap E(X) \neq \emptyset$, the pre-image of extreme points of $Y$ contains a subset of extreme points of $X$ There are ${m \choose n }$ such linear projection defined by the data points. Namely picking $n$ data points, looking at the affine subspace they span, and then projecting orthogonally onto that affine subspace Would the union of ${m \choose n}$ linear projection and find extreme points in the lower dimension be sufficient to recover all the extreme points in the original set? would this hold $E(X)\subseteq \cup_{i \in {m \choose n}}PI(E(Y_i))$ ? For example, if in 2d space(n=2) and 10 points (m=10). Any two points can define a line, we would have ${10 \choose 2}$ lines defined by the data. If we project the 10 points to any line, the convex hull would form a line segment, we would detect 2 extreme points. If we project towards all the lines and collect all the extreme points there would be $2*{10 \choose 2}$ points, which contains duplicates of course. but if we deduplicate them, would all the extreme points of the 2d space being captured by these procedures? AI: There are $\binom{m}{n-1}$ such linear projection defined by the data points. I don't understand how: do you mean that you are picking some data points, looking at the affine subspace they span, and projecting orthogonally onto that affine subspace? If, so $n-1$ points will define an $(n-2)$-dimensional affine subspace, so you probably want to choose $n$ data points to project onto an $(n-1)$-dimensional space. Apologies if this isn't what you meant — your question is pretty imprecise as stated so this was my best attempt at interpreting it. Also, what do you mean by an "extreme point of $X$", since $X$ is supposed to be finite (hence not convex unless $\lvert X \rvert \leq 1$)? Finally, how is an extreme point of $f(X)$ supposed to be the same as an extreme point of $X$, since these live in different spaces? I think this question is answerable but needs to be clarified first. I'll edit this answer once the question is clear and precise :) Edit The question has been clarified, and the answer is no. Indeed, this fails immediately in $\mathbb{R}^2$ (and in all higher dimensions): Let $X$ be the convex hull of $p_1 = (0,0)$, $p_2 = (0,1)$, $p_3 = (1,3)$, and $p_4 = (1,2)$ (these four points are the extreme points of $X$). Then $p_2$ is not sent to an extreme point of $f(X)$ for any orthogonal projection $f$ onto the affine span of any two of the $p_i$'s.
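The counterexample in the edit is easy to verify numerically: project the four points onto the line through each pair and check that the image of $p_2$ is never an endpoint (hence never an extreme point) of the projected segment, even though $p_2$ is an extreme point of the hull. A small sketch (names are mine):

```python
import numpy as np
from itertools import combinations

pts = np.array([[0.0, 0.0],   # p1
                [0.0, 1.0],   # p2
                [1.0, 3.0],   # p3
                [1.0, 2.0]])  # p4

for i, j in combinations(range(4), 2):
    direction = pts[j] - pts[i]
    u = direction / np.linalg.norm(direction)
    proj = pts @ u                       # 1-D coordinates along the line through p_i, p_j
    p2 = proj[1]
    interior = proj.min() + 1e-9 < p2 < proj.max() - 1e-9
    print(f"projection onto span(p{i+1}, p{j+1}): p2 extreme? {not interior}")
# p2 is strictly interior in all six projections, although it is an extreme point of X.
```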
H: Is the derivative of a periodic function always periodic? True or False : The derivative of a periodic functions is always periodic. I thought it to be true , as everything about a periodic function repeats itself at regular intervals, and so should it's derivative . But , to my surprise it is given false , which suggests that it might be true most of the time but not always , I have given all my thoughts to finding a counter example but I just can't seem to find even one counter example . One possibility was $\{x\}$ , which is not differentiable at every integer , and I am confused about whether I should call it periodic or not , because it's graph will be a straight line with holes at every integer , so in a sense it is periodically not defined , just like $\tan x$ wich is not defined at every odd multiple of $\pi\over 2$ but still it is said to be periodic . Could someone please help me find a counter example and clarify about periodicity of derivative of $\{x\}$. Thanks ! $\{x\}$ is fractional part of x . AI: If $f$ is periodic with period $T$, then $f'$ is also periodic with period $T$, because, if $f$ is differentiable at $x$,\begin{align}f'(x+T)&=\lim_{h\to0}\frac{f(x+T+h)-f(x+T)}h\\&=\lim_{h\to0}\frac{f(x+h)-f(x)}h\\&=f'(x).\end{align}And $\{x\}'$ is periodic with period $1$ (although its domain is not $\Bbb R$).
H: hermitian forms are related by linear transformations Suppose $(-,-)$ and $[-,-]$ are two positive definite hermitian forms on an $n$-dimensional vector space, show that there exists an invertible linear transformation $\phi$ such that $(u,v) = [\phi(u),\phi(v)]$. Attempt: I tried to write the hermitian forms in matrix form, that is $(v,w) = vH\overline{w}^\intercal $, and $[v,w] = vJ\overline{w}^\intercal$, with the associated matrices $H$ and $J$ of the hermitian forms, and tried to relate the two matrices by a linear transformation, but I can't seem to get a concrete linear transformation to do so. Can someone help me with this? AI: A positive definite Hermitian matrix $H$ can always be decomposed as $$H = Q D Q^\dagger = A A^\dagger, ~~ \text{where } A = Q \sqrt{D} Q^\dagger$$ with $D$ being a diagonal matrix with strictly positive entries and $Q$ unitary. Let us then decompose $H = Q D Q^\dagger$ and $J = S B S^\dagger$ and choose $\phi$ to be represented by the matrix $T = Q \left(\sqrt{D} \big/ \sqrt{B}\right) S^\dagger $ where $\sqrt{D} \big/ \sqrt{B}$ is the diagonal matrix obtained by dividing the diagonal entries of $\sqrt{D}$ by the respective ones of $\sqrt{B}$. Here it is crucial that both $H$ and $J$ are positive definite, as this guarantees that the division is well-defined: none of the diagonal elements of $\sqrt{B}$ can be zero. It also guarantees that the diagonal elements of $\sqrt{D} \big/ \sqrt{B}$ are not zero, as none of the diagonal elements of $\sqrt{D}$ can be zero. Then it follows that \begin{align*} [\phi(u), \phi(v)] = u T (S B S^\dagger) T^\dagger v^\dagger &= u Q \left(\sqrt{D} \big/ \sqrt{B}\right) B \left(\sqrt{D} \big/ \sqrt{B}\right) Q^\dagger v^\dagger \\ &= u Q D Q^\dagger v^\dagger = u H v^\dagger = (u, v) \end{align*} Note that $\left(\sqrt{D} \big/ \sqrt{B}\right) = \left(\sqrt{D} \big/ \sqrt{B}\right)^\dagger$ as it is a diagonal matrix with real entries. It remains to check that $T$ is invertible. Indeed it is, as it is the product of three invertible matrices: $Q$ and $S^\dagger$ are unitary and hence invertible, and $\left(\sqrt{D} \big/ \sqrt{B}\right)$ as noted before is a diagonal matrix with strictly positive entries (and hence invertible). So $\phi$ is invertible, and we are done. $\square$
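The construction can be sanity-checked numerically: build two random positive definite Hermitian matrices, form $T$ as above from their eigendecompositions, and verify $T J T^\dagger = H$, which is exactly the identity used in the computation of $[\phi(u),\phi(v)]$. A sketch (variable names mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_pos_def_hermitian(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T + 0.1 * np.eye(n)    # Hermitian and positive definite

H = random_pos_def_hermitian(n)
J = random_pos_def_hermitian(n)

d, Q = np.linalg.eigh(H)       # H = Q diag(d) Q^dagger, with d > 0
b, S = np.linalg.eigh(J)       # J = S diag(b) S^dagger, with b > 0

T = Q @ np.diag(np.sqrt(d) / np.sqrt(b)) @ S.conj().T

print(np.allclose(T @ J @ T.conj().T, H))      # True: T transfers [.,.] to (.,.)
print(abs(np.linalg.det(T)) > 0)               # True: T is invertible
```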
H: The distance from the center to the perimeter of a square, given an angle theta Given a square with some width w, and an angle theta, what's the distance d from the center to the perimeter? Letting $r=w/2$, we clearly have $d=r$ at $\theta = \frac{n\pi}{2}$, and $d=\sqrt{2}r$ at $\theta=\frac{n\pi}{2}+\frac{\pi}{4}$. But I can't figure out the general formula for any $\theta$ AI: This will be periodic in $\theta$ because of the symmetric nature of the problem. Now, to answer the question, you have the right idea, but you can take it a step further by using trigonometric functions. We can see that $\cos(\theta) = \frac{\frac{w}{2}}{d} = \frac{w}{2d}$ when $\theta$ is in the range $[-\frac{\pi}{4}, \frac{\pi}{4}]$. Solving for $d$ we get $d = \frac{w}{2\cos(\theta)} = \frac{r}{\cos(\theta)}$. This is the same result @user4815162342 got. But to take it a step further, this is periodic in $\theta$. So, we can get a more general answer: $d = \frac{r}{\cos(\phi)}$ where $\phi = \theta+k\frac{\pi}{2}$ for some integer $k$ such that $\phi$ lies inside the range we discussed earlier.
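Equivalently, the periodic reduction can be folded into a single expression: on $[-\pi/4,\pi/4]$ we have $|\cos\theta|\ge|\sin\theta|$, and by the square's symmetry $d=r/\max(|\cos\theta|,|\sin\theta|)$ for every $\theta$. This reformulation is my own addition, not from the answer above; a tiny check of the special cases:

```python
import math

def distance_to_perimeter(theta, w):
    r = w / 2
    return r / max(abs(math.cos(theta)), abs(math.sin(theta)))

w = 2.0   # so r = 1
print(distance_to_perimeter(0.0, w))             # 1.0        (= r)
print(distance_to_perimeter(math.pi / 4, w))     # 1.414...   (= sqrt(2) * r)
print(distance_to_perimeter(math.pi / 2, w))     # 1.0        (= r)
```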
H: The winning probability in a card game Suppose that I am playing a card game with my friend - a $1$ vs $1$ card game. All cards in a standard deck (52 cards) are shuffled randomly, then two cards are dealt to each player (without replacement). Each player is required to play one of these cards. A card is ranked according to its standard value, regardless of suit, but the absolute weakest card beats the absolute strongest card, i.e. a $2$ beats an A. The winner is the player who shows the larger value on his card. If both cards have the same value, then we have a tie. Cards are reshuffled after each match. The following are the probabilistic assumptions on this game in order to compute the probability of winning: The probability of playing either of the two cards is the same for me and for my opponent. There is no other factor that affects the match. If this is the case, then I have calculated that the result is $P(\text{I win})=P(\text{I lose})=\dfrac{8}{17}$ and $P(\text{Tie})=\dfrac{1}{17}$. This sounds reasonable because the probability of winning equals the probability of losing by a symmetry argument. However, if I define a new parameter for the tendency of a player to play the larger-value card as $p$, then I should get a new function $f(p,q)$ for my probability of winning, where $q$ is the tendency of my opponent. Note that $0\leq p,q\leq 1$. (Why define such a parameter? Because a player is not guaranteed to play each card with equal probability.) This changes the probabilistic assumptions, and I intended to do so. But now I have no idea how to calculate $f(p,q)$ because the sample space involved is too large. As a quick example, $$\begin{align*} P(&\text{I win with a }4)\\ &=P(\text{4 being the smaller card and I choose it})P(\text{win }\lvert\text{ 4 being the smaller card and I choose it})\\&\quad +P(\text{4 being the larger card and I choose it})P(\text{win }\lvert\text{ 4 being the larger card and I choose it}) \end{align*}$$ Writing this out does not seem to get me any closer to a solution. How do I proceed? With the help of python, the function is \begin{align} f(p,q)=\dfrac{564}{1225}pq+\dfrac{5137056}{6497400}p(1-q)+\dfrac{1110432}{6497000}(1-p)q+\dfrac{564}{1225}(1-p)(1-q) \end{align} AI: Sketch for the solution: Note: I'm assuming here that only player two has a strategy and player one plays randomly. You are over-complicating it a bit, I think; the sample space can be "reduced" drastically by thinking about a generic draw of four cards, that is $$ \Pr [P_1 \text{ win }]=\Pr [P_1 \text{ win }|P_2 \text{ plays its lower card }]\Pr [P_2 \text{ plays its lower card }]\\ +\Pr [P_1 \text{ win }|P_2 \text{ plays its higher card }]\Pr [P_2 \text{ plays its higher card }] $$ As the cards are assumed to be drawn randomly (i.e. each card has the same probability of coming up), the probability $$ \Pr [P_1 \text{ win }|P_2 \text{ plays its higher card }] $$ is the same as the probability of drawing three cards randomly and the first one being higher than the other two cards, which is easy to handle, and the probability $$ \Pr [P_1 \text{ win }|P_2 \text{ plays its lower card }] $$ is equivalent to drawing three cards randomly and the second or the third card being lower than the first one. Well, you also need to count (if you want) the rare case where the lower rank beats the higher. But overall it seems that this probability is small, and the change in the probabilities from discarding this possibility will be small.
EDIT: if you want to add some strategy for the first player as well, and $H_1$ and $H_2$ are the hands of player one and two respectively, then you can build the model as $$ \Pr [P_1 \text{ win }]=\Pr [\max H_1>\max H_2]\Pr [\max H_2]\Pr [\max H_1]\\ +\Pr [\min H_1>\max H_2]\Pr [\max H_2]\Pr [\min H_1]\\ +\Pr [\max H_1>\min H_2]\Pr [\max H_1]\Pr [\min H_2]\\ +\Pr [\min H_1>\min H_2]\Pr [\min H_2]\Pr [\min H_1] $$ where, for example, the probability $$ \Pr [\max H_1>\max H_2] $$ is equivalent to the probability that, after four cards have been drawn randomly, the first or the second has a higher rank than the third and the fourth. (Again, I am not counting the case where the lowest rank beats the highest.)
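If it helps, the model with the tendency parameters can also be estimated by simulation rather than exact enumeration. Below is a minimal Monte Carlo sketch under the stated assumptions (with probability $p$, respectively $q$, a player plays the higher of their two cards, and the 2-beats-Ace rule is included); it does not attempt to reproduce the exact rational coefficients quoted in the question, only to estimate $f(p,q)$. With $p=q=\frac12$ the estimates should land near $8/17\approx0.4706$ for a win and $1/17\approx0.0588$ for a tie, matching the values computed in the question.

```python
import random

def beats(a, b):
    """Return +1 if rank a beats rank b, -1 if it loses, 0 for a tie (Ace = 14)."""
    if a == b:
        return 0
    if {a, b} == {2, 14}:          # the absolute weakest card beats the strongest
        return 1 if a == 2 else -1
    return 1 if a > b else -1

def play(hand, tendency, rng):
    """Play the higher card with probability `tendency`, else the lower one."""
    return max(hand) if rng.random() < tendency else min(hand)

def estimate(p, q, trials=200_000, seed=0):
    rng = random.Random(seed)
    deck = [r for r in range(2, 15) for _ in range(4)]   # 52 cards, suits ignored
    wins = losses = ties = 0
    for _ in range(trials):
        rng.shuffle(deck)
        my_hand, opp_hand = deck[:2], deck[2:4]
        outcome = beats(play(my_hand, p, rng), play(opp_hand, q, rng))
        if outcome > 0:
            wins += 1
        elif outcome < 0:
            losses += 1
        else:
            ties += 1
    return wins / trials, losses / trials, ties / trials

print(estimate(0.5, 0.5))   # roughly (0.4706, 0.4706, 0.0588)
```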