H: Prove that the sum of products of the entries of one row of a matrix with the co-factors of another row is 0. Let A = ($a_{ij}$) be an $n×n$ matrix and $A_{ij}$ be the co-factor of $a_{ij}$. Show that: $\sum_k a_{ik}A_{jk} = \det(A)$ if $i=j$, else $0$. I couldn't think of any way to do this other than trying to expand the co-factors, but I think that would be laborious. EDIT- I've tried expanding, but there were a lot of terms, and I got confused about how I could accommodate them all. Also, as $i$ is not equal to $j$, there are lots of cases. How can we generalize all that? AI: Hint Replace the column $j$ of the matrix $A$ with column $i$. Then use cofactor expansion with regard to column $j$. If $j\neq i$, then obviously the result is equal to zero as we're expanding the determinant of a matrix having two columns equal. And if $j = i$, this is the classical formula of the determinant expansion according to column $j$ of the matrix $A$. A proof of this formula is provided at Laplace expansion in Wikipedia. Another way to prove it is to prove that the determinant is an $n$-linear alternating form of the column vectors. See properties of the determinant.
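A quick numerical sanity check of the identity above can be reassuring; here is a minimal Python sketch (the $3\times 3$ matrix is an arbitrary example chosen for illustration, not from the question):

```python
import numpy as np

# Check sum_k a_{ik} * A_{jk}: it equals det(A) when i == j, and 0 otherwise.
def cofactor(A, j, k):
    """(j,k) cofactor: (-1)^{j+k} times the determinant of the minor."""
    minor = np.delete(np.delete(A, j, axis=0), k, axis=1)
    return (-1) ** (j + k) * np.linalg.det(minor)

A = np.array([[2., 1., 3.],
              [0., 4., 1.],
              [5., 2., 6.]])
n = A.shape[0]

for i in range(n):
    for j in range(n):
        s = sum(A[i, k] * cofactor(A, j, k) for k in range(n))
        expected = np.linalg.det(A) if i == j else 0.0
        assert abs(s - expected) < 1e-9
print("identity verified, det(A) =", round(np.linalg.det(A), 6))
```

The assertions pass for any square matrix, since for $i\neq j$ the sum is exactly the cofactor expansion of a matrix with a repeated row.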
H: Enlarging the ring of $S$-integers to get a PID Let $K$ be a number field and let $S$ be a finite set of places of $K$ containing the archimedean ones. Then we can define: $$\mathcal O_S:=\{a\in K\colon a\in\mathcal O_{\mathfrak p},\;\forall\mathfrak p\notin S\}\supset \mathcal O_K$$ We know that $\mathcal O_S$ is a Dedekind domain and that $\mathcal O_S$ is the localization of $\mathcal O_K$ at the multiplicative set $$T=\left( \prod_{\mathfrak q\in S\setminus S_{\infty}}\mathfrak q\right)\setminus \{0\}$$ Moreover the prime ideals of $\mathcal O_S$ are of the type $\mathfrak p\mathcal O_S$ with $\mathfrak p\notin S$. It is easy to see that $\operatorname{Cl}(\mathcal O_S)\subseteq \operatorname{Cl}(\mathcal O_K)$ so if we enlarge $S$ to a set of places $S'$ we get $\mathcal O_S\subset\mathcal O_{S'}$ and $$\operatorname{Cl}(\mathcal O_{S'})\subseteq \operatorname{Cl}(\mathcal O_S)\subseteq \operatorname{Cl}(\mathcal O_K)$$ Why we can always find a set $S'$ such that $\operatorname{Cl}(\mathcal O_{S'})=1$? Since the class number $h_k$ of $K$ is finite, it is clear that $\operatorname{Cl}(\mathcal O_{S'})$ divides $h_k$, but why can we eventually reach $1$? AI: What one does, it to write down a finite set of ideals $I_1,\ldots,I_k$ whose classes $[I_1],\ldots,[I_k]$ generate the class-group. Let $S'$ contain all the prime ideal factors of all the $I_j$. Then $\mathcal{O}_{S'}$ will have trivial class-group.
H: Why are piecewise continuous functions on $[a,b]$ bounded? Consider a piecewise continuous function $f:[a,b] \to \mathbb{R}$, i.e. there are $a=t_0<\dots <t_n=b$ such that $f$ is continuous on each open interval $(t_i,t_{i+1})$ and the limits $\lim\limits_{x \uparrow t_i}f(x)$ and $\lim\limits_{x \downarrow t_i}f(x)$ exist for all $i=1,\dots,n$. Why does there exist a constant $M>0$ such that $|f(x)| \leq M$ for all $x \in [a,b]$? AI: The extreme value theorem tells us that if $f$ is continuous on a closed and bounded interval, then it is bounded (and that it achieves these bounds as well). Let us see how we can apply that to your case. Let $i \in \{0, \ldots, n-1\}$ and consider the function $g_i$ defined on $[t_{i}, t_{i+1}]$ as follows: $$g_i(t) := \begin{cases}f(t) & t \in (t_i, t_{i+1})\\\displaystyle\lim_{x\to t_i^+}f(x) & t = t_i\\\displaystyle\lim_{x\to t_{i+1}^-}f(x) & t = t_{i+1}\\\end{cases}$$ It is an easy check that $g_i$ is continuous on $[t_i, t_{i+1}]$. Thus, there exists $M_i$ such that $|g_i(x)| \le M_i$ for all $x \in [t_i, t_{i+1}]$. In turn, this gives us that $|f(x)| \le M_i$ for all $x \in (t_i, t_{i+1})$. Now, to conclude, we simply choose $$M = \max\{M_0, \ldots, M_{n-1}, |f(t_0)|, \ldots, |f(t_n)|\}.$$ It is easy to see that $$|f(x)| \le M \qquad \forall \; x \in [a, b].$$
H: Prove that the image of $D$ is the right half-plane I have an exercise left without correction, and I'm struggling; if anyone could help, it would be a lot appreciated. Let $D$ be the open disk centered at $0$ with radius $r = 1$. We define $h(z)=\frac{1+iz}{1-iz}$ How can we prove that $h(D) = \left\{z \in \mathbb{C} : \Re(z) > 0 \right\}$? Maybe by computing $h(\exp(i\theta))$ ? AI: $\Re (\frac {1+iz} {1-iz})= \Re (\frac {(1+iz) (1+i\overline {z})} {|1-iz|^{2}})$ and $(1+iz) (1+i\overline {z})=1-|z|^{2} +i(z+\overline {z})$; the real part of this is $1-|z|^{2}$ which is $>0$. So LHS $\subseteq$ RHS. For the reverse inclusion use the inverse map $g(w)=\frac {w-1} {i(1+w)}$. I will leave the calculation in this case to you. [What you have to show is $|w-1|< |w+1|$ if $\Re w >0$. Square both sides and expand the square].
H: Almost sure convergence of a sum of independent r.v. Let $S_n:=\sum_{i=1}^nX_i$ where $X_1,X_2,...$ are independent r.v.'s such that: $P(X_n=n^2-1)=\frac{1}{n^2}$ and $P(X_n=-1)=1-\frac{1}{n^2}$ Show that $\frac{S_n}{n}\rightarrow-1$ almost surely. It is easy to see that you can't apply the strong law of large numbers (SLLN) because the $X_i$ are not identically distributed, even though $E(X_i)=E(X_j)=0$ for all $i\neq j$. This is why we cannot conclude that it converges a.s. to $0$. Now I thought about applying the Borel–Cantelli Lemma on ... $\{\lim_{n\rightarrow\infty}\frac{S_n}{n}=-1\}=\bigcap_{k\geq1}\bigcup_{m\geq1}\bigcap_{n\geq m}\{\omega\mid |\frac{1}{n}\sum_{i=1}^nX_i+1|\leq\frac{1}{k}\}$ ... but did not succeed. So how do I prove almost sure convergence here? Thanks in advance. AI: If you have a sequence $(a_n)$ of real numbers such that $a_n=-1$ for all but finitely many values of $n$ then it is easy to check that $\frac 1 n \sum\limits_{k=1}^{n} a_k \to -1$. So we only have to show that $X_n=-1$ for all $n$ sufficiently large, with probability $1$. But this follows from the Borel–Cantelli Lemma since $\sum P(X_n=n^{2}-1)=\sum \frac 1{n^{2}}<\infty$ so $P( X_n=n^{2}-1 \ i.o.)=0$.
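The almost-sure statement cannot be verified by simulation, but a Monte Carlo run illustrates the behaviour of $S_n/n$; a small Python sketch (the seed and sample sizes are arbitrary choices for illustration):

```python
import numpy as np

# Monte Carlo illustration (not a proof): sample the X_n and watch S_n / n.
rng = np.random.default_rng(0)

def sample_path(N):
    n = np.arange(1, N + 1)
    hit = rng.random(N) < 1.0 / n**2         # X_n = n^2 - 1 with prob 1/n^2
    X = np.where(hit, n**2 - 1.0, -1.0)      # otherwise X_n = -1
    return np.cumsum(X) / n                  # the sequence S_n / n

for N in (10**3, 10**5):
    print(N, sample_path(N)[-1])
# Typically close to -1: by Borel-Cantelli only finitely many X_n differ from -1.
```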
H: Determine $\arctan{e^i}$ In this answer, the quantity $\arctan(e^i)$ must be determined. It's obviously $e^i = \cos(1) + i\sin(1)$, but there is no formula for $\arctan(x + y)$ like $\sin(x + y)$, for example, and I'm stuck. In the comments, the author suggests: $$\arctan(e^i) = \frac{1}{2} \arctan \left( \frac{\cos (1)}{1 - \sin (1)} \right) + \frac{1}{2} \arctan \left( \frac{\cos (1)}{1 + \sin (1)} \right) + i \left[ \frac{1}{4} \log \left( 2 - 2 \sin(1)\right) - \frac{1}{4} \log \left( 2 + 2 \sin(1)\right) \right]$$ But, again, I am not able to obtain this from $\arctan(e^i) = \arctan \left[ \cos(1) + i\sin(1) \right]$. What relation has been used? And then, if I consider the real part of $\arctan(e^i)$ above, and use the arctan angle-addition formula, I obtain: $$\arctan \left( \frac{\cos (1)}{1 - \sin (1)} \right) + \arctan \left( \frac{\cos (1)}{1 + \sin (1)} \right) = \arctan(w)$$ $$w = \frac{ \frac{\cos (1)}{1 - \sin (1)} + \frac{\cos (1)}{1 + \sin (1)} }{1 - \frac{\cos (1)}{1 - \sin (1)} \frac{\cos (1)}{1 + \sin (1)} } = \frac{ \frac{\cos (1) \left[ 1 + \sin (1) \right] + \cos(1) \left[ 1 - \sin (1) \right]}{1 - \sin^2(1) } }{\frac{1 - \sin^2(1) - \cos^2 (1)}{1 - \sin^2(1)}} = \frac{\cos (1) \left[ 1 + \sin (1) \right] + \cos(1) \left[ 1 - \sin (1) \right]}{1 - \sin^2(1) - \cos^2 (1)}$$ But this denominator is $0$. How to proceed? Any other method (not necessarily using the above steps) to obtain this result is ok. AI: You want to solve $\tan z=a$ where $a=e^{i}$. Note first that one expects the solution to be unique only modulo $\pi$. $$\tan z=\frac{e^{iz}-e^{-iz}}{i(e^{iz}+e^{-iz})}=\frac{w-1}{i(w+1)}$$ where $w=e^{2iz}$. $$\tan z=a\implies\frac{w-1}{w+1}=ia\implies \frac{2}{1+w}=1-ia \implies w+1=\frac{2}{1-ia}\implies w=\frac{1+ia}{1-ia}.$$ In your case, $$e^{2iz}=\frac{1+ie^i}{1-ie^i}=\frac{1+e^{i(1+\pi/2)}}{1-e^{i(1+\pi/2)}} =\frac{e^{-i(1/2+\pi/4)}+e^{i(1/2+\pi/4)}}{e^{-i(1/2+\pi/4)}-e^{i(1/2+\pi/4)}} =i\frac{\cos(1/2+\pi/4)}{\sin(1/2+\pi/4)}=i\cot(1/2+\pi/4).$$ So $$2iz=2n\pi i+\frac{\pi i}2+\log\cot(1/2+\pi/4)$$ etc.
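As a numeric cross-check of the answer's computation (not of the formula quoted in the question's comments), one can compare $w=\frac{1+ie^i}{1-ie^i}$ with $i\cot(1/2+\pi/4)$ and confirm that the recovered $z$ satisfies $\tan z=e^i$; a short Python sketch using the standard cmath module:

```python
import cmath, math

a = cmath.exp(1j)                      # e^{i}
w = (1 + 1j * a) / (1 - 1j * a)        # should equal e^{2iz} where tan z = a

# The answer's closed form: w = i * cot(1/2 + pi/4)
claimed = 1j / math.tan(0.5 + math.pi / 4)
print(abs(w - claimed))                # ~1e-16

# Recover one solution z (any branch shift by pi also works) and check tan z = e^i
z = cmath.log(w) / (2j)
print(z, abs(cmath.tan(z) - a))        # difference ~1e-16
```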
H: Why does Stolz- Cesaro fail to evaluate the limit of $\dfrac{n + n^2 + n^3 + n^4 + \ldots + n^n}{1^n + 2^n + 3^n + 4^n + \ldots +n^n}$, I need to find the limit of the sequence $\dfrac{n + n^2 + n^3 + n^4 + \ldots + n^n}{1^n + 2^n + 3^n + 4^n + \ldots +n^n}$, My strategy is to use Stolz's Cesaro theorem for this sequence. Now, the numerator is given by : $x_r = n^1+ n^2 +n^3 + \ldots +n^r$, so $x_{n+1} - x_{n} = n^{n+1}$ Similarly for denominator $y_r = 1^n + 2^n + 3^n +\ldots +r^n$, so $y_{n+1}- y_{n} = (n+1)^n$ Using Stolz Cesaro, this limit is equivalent to $\displaystyle \lim \dfrac{n^{n+1}}{(n + 1)^n}$, which diverges to $ +\infty$, However ans given to me is $\dfrac{e-1}{e}$, Can anyone tell where is the error in my solution ? Thanks. AI: Note that, as mentioned in the comments below, your computation of the ratio is incorrect. Regardless, the hypothesis of Stolz-Cesaro assumes that the limit $\lim_{n \to \infty} \frac{a_{n+1} - a_n}{b_{n+1} - b_n}$ exists. If it doesn't exist, it does not imply that the original limit does not exist. A better way to approach is to write it as follows: $$ \frac{n + n^2 + \cdots + n^n}{1^n + 2^n + \cdots +n^n} = \frac{n^{-(n-1)} + n^{-(n-2)} + \cdots + n^{-1} + 1}{\left(\frac{1}{n}\right)^n + \left(\frac{2}{n}\right)^n + \cdots + \left(\frac{n-1}{n}\right)^n + 1} $$ As $n \to \infty$, clearly the numerator $\to 1$. For the denominator, see this.
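A quick numerical look at the sequence supports the stated limit $\frac{e-1}{e}\approx 0.632$; a small Python sketch (the chosen values of $n$ are arbitrary, and convergence is slow):

```python
import math

def ratio(n):
    # (n + n^2 + ... + n^n) / (1^n + 2^n + ... + n^n), using exact integers
    num = sum(n**k for k in range(1, n + 1))
    den = sum(k**n for k in range(1, n + 1))
    return num / den

for n in (10, 40, 80):
    print(n, ratio(n))
print("(e-1)/e =", (math.e - 1) / math.e)   # approx 0.632
```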
H: I would like to know what the topics are involved in this question. I would like to know what the topics are involved in this question that I can use to solve it. I have tried searching for exponents with variables or using functions although I can't find a way to start on this question. If $3^{m+1} = 5^{n-1} = 15^k$, show that $k(m+n) = (m+1)(n-1)$ Thank you. AI: Exponent rules. Hint: Show that $ 15 ^ { k (m+n) } = 15 ^ { (m+1) (n-1)} $. Further hint: $ 15 ^ {k (n-1)} = 3^{(m+1)(n-1) }$. Write a similar equation for $ 5^ { (m+1)(n-1)}$. Complete details: By raising the given equation to the $n-1$ power, we get $ 15 ^ {k (n-1)} = 3^{(m+1)(n-1) }$. By raising the given equation to the $m+1$ power, we get $15^{k(m+1) } = 5 ^ { (m+1) (n-1) } $ $ $ Multiplying these 2 identities, we get $15^{ k ( n - 1 + m + 1 ) } = 3^{(m+1)(n-1) } \times 5 ^ { (m+1) (n-1) } $ which simplifies to $15^{ k ( n +m ) } = 15^{(m+1)(n-1) }$. Hence $ k (n+m) = (m+1)(n-1)$
H: Volume by Rotation Using Integration How to find the volume if the shown area is rotated around the $y$-axis? The area is bounded by $y=x^2+1$, $y=2x$ and $x=0$. AI: Hint: $V_y=\pi\int_c^d x^2 dy$. So, $V_{y_1}=\pi\int_0^2 \left(\frac{y}{2}\right)^2dy$ is the volume of the cone, and $V_{y_2}=\pi\int_1^2\left( y-1 \right)dy$ is the volume of a rotationally symmetric body given by a rotation of the parabola $x^2=y-1$ around the $y$-axis. Hence, the desired volume is: $$V_{y_1} - V_{y_2}.$$
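The two one-dimensional integrals in the hint can be evaluated symbolically; a short sketch using SymPy (assuming, as in the question, the region bounded by $y=x^2+1$, $y=2x$ and $x=0$):

```python
import sympy as sp

y = sp.symbols('y')

V_cone = sp.pi * sp.integrate((y / 2)**2, (y, 0, 2))    # rotation of y = 2x
V_parab = sp.pi * sp.integrate(y - 1, (y, 1, 2))        # rotation of y = x^2 + 1
print(V_cone, V_parab, sp.simplify(V_cone - V_parab))   # 2*pi/3, pi/2, pi/6
```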
H: Lottery variance The chance to win in a Lottery game is $0.1$. Michael decided to buy a ticket every week until he wins or until he has bought 5 tickets. If X is the number of weeks Michael bought a lottery ticket, what is the variance of $X$? So I calculated for X=5, which means LLLLL or LLLLW. I calculated the probabilities for both, added them up together and then calculated the variance by the formula $\frac{1-p}{p^2}$. I get $0.798$ which I'm not sure makes sense, am I doing something wrong? Do I need to compute all weeks? Because if so, in the end I get 1. AI: The issue is that $X$ does not have a geometric distribution, precisely because you have the extra condition that $X \leq 5$. e.g. A geometric distribution would have $\mathbb{P}(X = 6) > 0$, but clearly $X$ can never be 6. So $X$ is a discrete random variable that takes values in $\{1,2,3,4,5\}$. Let $Y$ be a geometrically distributed random variable with $p = 0.1$. If $k \leq 4$, then $X$ is behaving geometrically, i.e. $$ \mathbb{P}(X = k) = \mathbb{P}(Y = k) = (1-p)^{k-1}p $$ But we have $$ \mathbb{P}(X = 5) = \mathbb{P}(Y \geq 5). $$ Can you see now how to proceed computing statistics for $X$?
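Following the answer's hint, the full distribution of $X$ and its variance take only a few lines; a small Python sketch:

```python
# Exact distribution of X (number of tickets bought), p = 0.1, capped at 5 weeks.
p = 0.1
pmf = {k: (1 - p)**(k - 1) * p for k in range(1, 5)}    # win in week k < 5
pmf[5] = (1 - p)**4                                      # reach week 5 (win or lose there)

mean = sum(k * q for k, q in pmf.items())
var = sum(k**2 * q for k, q in pmf.items()) - mean**2
print(sum(pmf.values()), mean, var)   # sums to 1; E[X] ~ 4.095, Var(X) ~ 1.988
```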
H: Showing that map $\phi(z)=z(z-1)$ is not covering projection Map $\phi : \mathbb{C} - \{0,1/2\} \to \mathbb{C} - \{-1/4\}$ given by $\phi(z)=z(z-1)$ is not covering projection. Can someone give hint how to proceed. AI: Hint For $z \in \mathbb C \setminus \{0, -1/4\}$, the inverse image of $z$ under $\phi$ includes two points. However, the inverse image of $0$ is equal to $\{0,1\}$ but $0$ has been excluded from the definition set of $\phi$.
H: Why does having an even number of edges in every cut imply $\mathcal{B}(G) \subset \mathcal{C}^\perp(G)$ and $\mathcal{C}(G)\subset\mathcal{B}^\perp(G)$? $\mathcal{C}(G)$ is the cycle space. $\mathcal{B}(G)$ is the cut space. Diestel says at Theorem 1.9.4 that, since every cycle has an even number of edges in every cut, we get $\mathcal{B}(G) \subset \mathcal{C}^\perp(G)$ and $\mathcal{C}(G)\subset\mathcal{B}^\perp(G)$. Why? If F and F' have an even number of edges in common, $\langle F,F'\rangle =0$. So, $\mathcal{B}(G) , \mathcal{C}^\perp(G) , \mathcal{C}(G) , \mathcal{B}^\perp(G)$ should be related to each other, but it's not clear to me. AI: No offence but it looks like you are rushing through Diestel (this is your third question on it in 1 or 2 days). It is a very good introduction book, but you need to take the time to understand everything, otherwise you won't get much from it. In this question, you saw the important part : $\langle F,F'\rangle=0$ if and only if $F$ and $F'$ have an even number of common edges. You have the definition, for any subspace $\mathcal{F}$ of the edge space $\mathcal{E}(G)$, $$ \mathcal{F}^\perp := \left\{ D\in\mathcal{E}(G) \ \mid \ \langle F,D\rangle=0\text{ for all }F\in\mathcal{F}\right\}$$ And you also know that any cycle $C\in\mathcal{C}$ has an even number of edges in any cut $B\in\mathcal{B}$. You are asked to prove first that these imply $\mathcal{B}\subseteq\mathcal{C}^\perp$. You need to "break down" this statement. What do you need to prove ? $\forall B\in \mathcal{B},\ B\in \mathcal{C}^\perp$ Which can be written $\forall B\in \mathcal{B},\ \forall C\in \mathcal{C},\ \langle C,B\rangle=0$ Is this true? If so, can you write the other statement in a similar fashion? Hope this helps and that you are enjoying the Diestel.
H: How does the linear independence or dependence of the set of column vectors of a matrix depend on that of the set of row vectors of the same? Let A = ($a_{ij}$) be an mxn matrix. If the set of row vectors of A is linearly independent, is the set column vectors too? What happens if the row vectors are linearly dependent. Does it affect the linear dependence of the column vectors? I believe that since the row rank = column rank and m and n are not equal, the linear dependence and independence of the set of row vectors and column vectors should not depend on each other. I'm not sure how to build up a solid argument though. AI: You can build an intuition for that by using the number of pivots (in other words "rank") Consider the following matrix: \begin{equation*} A_{5,4} = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,4} \\ \vdots & \vdots & \ddots & \vdots \\ a_{5,1} & a_{5,2} & \cdots & a_{5,4} \end{pmatrix} \end{equation*} Assuming it can be transformed to reduced row echelon form, we get the following (this is just a concrete example for that sake of understanding): \begin{equation*} A'_{5,4} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix} \end{equation*} we have $4$ pivots, $4$ columns, and $5$ rows, the number of pivots is enough for the columns to be linearly independent but it's not enough for the rows to be so, one row won't have a pivot. if in the systems of equations $A\vec x=0$ one row doesn't have a pivot then we don't have a unique solution such that $\vec x=0$ and by definition the rows aren't linearly dependent. This applies to any non-square matrix (number of rows $\neq $ number of columns), I chose a $5 X 4$ one just as a concrete example. To directly answer your questions: What happens if the row vectors are linearly dependent. Does it affect the linear dependence of the column vectors? No, this is clearly shown in the above example, rows are linearly depedent and columns aren't. If the set of row vectors of A is linearly independent, is the set column vectors too? No, this can be shown in a similar fashion where we have number of columns bigger than the number of rows. It's worth noting that if you are dealing with a square matrix the linear dependence/independence of the rows and columns are related (one imply the other) as a pivot for a column will be a pivot for the row.
H: Show that $f$ can be extended to a Cauchy-sequence preserving continuous mapping on $\overline{A}$. Question: Let $(X,d)$ be a metric space and $A\subset X.$ If $f: A\to\mathbb R$ be a Cauchy-sequence preserving continuous mapping then show that $f$ can be extended to a Cauchy-sequence preserving continuous mapping on $\overline{A}$. I have tried it as follows: Let $x\in\overline{A}.$ Then there exists $(x_n)\subset A$ such that $\lim x_n=x.$ Thus $\lim f(x_n)$ exists, $(f(x_n))$ being Cauchy. Let $f_*:\overline{A}\to\mathbb R$ be such that $f_*(a)=\lim f(x_n)$ where $(x_n)\subset A$ with $\lim x_n=a.$ I have proved that $f_*$ is well-defined. However I could not show $f_*$ is Cauchy-sequence preserving continuous mapping. Please help me. AI: Some extensive hints: Let $(\bar x_n)\subset \bar A$ be a Cauchy sequence. There exists a sequence $(x_n)\subset A$ such that $d(f(x_n), f_*(\bar x_n)) \to 0$ and $d(x_n, \bar x_n) \to 0$ (why?). From $d(x_n, \bar x_n)\to 0$ it follows $(x_n)$ is Cauchy (why?). As $(x_n)$ is Cauchy, $(f(x_n))$ is Cauchy by assumption. Therefore $(f_*(\bar x_n))$ is Cauchy (why again?)
H: The closure of a rational polyhedron is a rational polyhedron. I'm reading the following proof, where the closure of the rational polyhedron $P$ is denoted $P'$. I don't get the line where $Y$ is defined. This is a set of linear expressions of the form $y^TA$, and since $y \in \mathbb{R}$, I don't suppose that the "boundedness" here is on the size of the set, but instead on the space it carves out in $\mathbb{R}^n$? But if so how do we define it without the right hand sides of $y^TA$? AI: $y$ is not supposed to be a real number, it is a vector. A better notation would be $$Y:=\{y^TA: y \in [0,1]^m\}$$ Since $A \in \mathbb{Z}^{m \times n}$ and $y^T \in \mathbb{R}^{m \times 1}$, we have $y^TA \in \mathbb{R}^{1\times n}$ and hence we have $Y \subseteq \mathbb{R}^n$, it is bounded since it is a continuous image of a compact set. By boundedness of a set $S$, it means we can find $r>0$, such that for a distance function $d$, for all $u, v \in S$, we have $d(u,v) \le r$.
H: Define a relation R on Z × N by (a, α)R(b, β) if and only if aβ = bα. Prove that R is a reflexive relation. I'm a bit confused about how to prove that R is reflexive. By definition, R, a relation in a set S, is reflexive if and only if ∀x∈S, xRx. Since (a, α)R(b, β), we know that aβ = bα. Then to prove that this is reflexive, based on the definition, we would have to show that ((a, α)R(b, β)) R ((a, α)R(b, β). After this, I'm not sure as to how to prove why this is reflexive. Could we possibly do something like (aβ = bα) R (aβ = bα) is reflexive? Or does ((a, α)R(b, β)) R ((a, α)R(b, β)) already show that it is reflexive itself? AI: To prove that $R$ is reflexive is to prove that, for each $(a,\alpha)\in\Bbb Z\times\Bbb N$, $(a,\alpha)\mathrel R(a,\alpha)$. But $(a,\alpha)\mathrel R(a,\alpha)$ means that $a\alpha=a\alpha$, and it is clear that this holds.
H: How to convert this integral to a polar integral, $\int_{0}^{1}\int_{0}^{x}ydydx$ I was browsing through some past final exams, and I ran into this integral: $\int_{0}^{1}\int_{0}^{x}ydydx,$ The question wants us to convert this integral to a polar integral. I'm wondering how we convert this integral? I started drawing the region, which I got as a triangle in quadrant 1 with vertices (0,0), (1,0), (1,1). Then I tried using $y=rcos(\theta)$ and $dA = rdrd\theta$. But now I'm stuck with the limits of integration. I tried converting the vertices of the triangle into polar coordinates and working them but that got me nowhere. So I'm not sure what to do next. AI: To convert the given triangle to polar coordinates, draw a line at angle $\theta$, and the range of $r$ that falls within the region. For a line at $\theta$ with x-axis, the length of hypotenuse would be $\sec \theta$ . Also, the maximum angle you can have is $\theta = \frac{\pi}{4}$. Hence $$0 \leq \theta \leq \frac{\pi}{4} \\ 0 \leq r \leq \sec \theta$$
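The polar limits given in the answer can be cross-checked numerically against the original Cartesian integral; a short Python sketch using scipy.integrate.dblquad (both values should come out to $1/6$):

```python
import numpy as np
from scipy.integrate import dblquad

# Cartesian: integrate y over 0 <= y <= x, 0 <= x <= 1
cart, _ = dblquad(lambda y, x: y, 0, 1, lambda x: 0, lambda x: x)

# Polar: same region; integrand y = r sin(theta), area element dA = r dr dtheta
polar, _ = dblquad(lambda r, t: r**2 * np.sin(t),
                   0, np.pi / 4,
                   lambda t: 0, lambda t: 1 / np.cos(t))

print(cart, polar)   # both ~ 0.1667 = 1/6
```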
H: Determine second degree polynomial by least squares method Determine the polynomial of the second degree that approximates the following function by the method of least squares, on the interval $[0.5, 1.5]$. $f(x)=\frac{3}{\sqrt{x}}$ I've done least squares before using a table of points, but now all I have is a function! How can I solve this? AI: Assume your polynomial to be $$P(x) = ax^2 + bx + c$$ Now, the error in approximation is defined as $$E(P, f) = \left(\frac{1}{b-a}\int_a^b(P - f)^2dx\right)^\frac{1}{2}$$ Hence, your task is to find $a,b,c$ to minimise $$I = \int_{0.5}^{1.5}\left(ax^2+bx+c-\frac{3}{\sqrt{x}}\right)^2dx$$ Now, to solve for $a,b,c$, you will need to solve the system $$\frac{\partial I}{\partial a} = \frac{\partial I}{\partial b} = \frac{\partial I}{\partial c} = 0$$
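The system $\partial I/\partial a=\partial I/\partial b=\partial I/\partial c=0$ is linear in $a,b,c$ (it is the set of normal equations for the basis $1,x,x^2$), so it can be solved numerically; a minimal Python sketch, assuming the interval $[0.5,1.5]$ from the question:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 3 / np.sqrt(x)
basis = [lambda x: 1.0, lambda x: x, lambda x: x**2]   # 1, x, x^2
lo, hi = 0.5, 1.5

# Gram matrix and right-hand side of the normal equations
G = np.array([[quad(lambda x: bi(x) * bj(x), lo, hi)[0] for bj in basis] for bi in basis])
rhs = np.array([quad(lambda x: bi(x) * f(x), lo, hi)[0] for bi in basis])

c, b, a = np.linalg.solve(G, rhs)   # coefficients of 1, x, x^2 respectively
print(a, b, c)                      # P(x) = a x^2 + b x + c
```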
H: ${d\over dx}f(2x+1)=\sin(4x^2-1)$, then is $f$ differentiable? Question: ${d\over dx}f(2x+1)=\sin(4x^2-1)$, then is $f$ differentiable? Since $\varphi:x\rightarrow2x+1$ is bijective, I thought that $f$ is differentiable. But, I wanted a precise solution, so I tried to use the chain rule in order to get the form of $df/dx$. (And since it is from a past calculus mid-term exam, I cannot write the function $\varphi$ and finish the proof) When we use the chain rule, by substituting $t = 2x+1$ we might say that ($dt/dx=2$) $$ {df(t)\over dt}{dt\over dx}={d\over dx}\sin(4x^2-1)=8x\cos(4x^2-1)$$, so we can determine $df(t)/dt$. However, since we cannot ensure the existence of $df(t)/dt$ we cannot actually step into the procedure. Using the definition of derivatives can be a possible way to solve this. $$\sin(4a^2-1)=\lim_{x\to a}\frac{f(2x+1)-f(2a+1)}{x-a}\\ =\lim_{x\to a}\frac{f(2x+1)-f(2a+1)}{(2x+1)-(2a+1)}\cdot\frac{(2x+1)-(2a+1)}{x-a}\\$$ But, again, we don't know whether $$\lim_{x\to a}\frac{f(2x+1)-f(2a+1)}{(2x+1)-(2a+1)}$$ has a value or not. I also thought of a solution analogous to the 'proof' of the chain rule, which uses an epsilon function to approximate the function values (the epsilon function is defined as: for $h\ne0: f(x+h)-f(x)=f'(x)h+\epsilon(h)\cdot h$, for $h=0: \epsilon(0)=0$), but I got stuck here. Is there any idea to solve it? Thanks. AI: Let $g(x):= f(2x+1)$. Then $g$ is differentiable. Since we have $$f(x)=g(\frac{x-1}{2}),$$ $f$ is differentiable.
H: Help with calculating the integral $\int_{-\pi}^\pi \cos\left(x/2\right) \cdot e^{ix} dx$ by using Eulers formula I have to determine the following integral $$ \int_{-\pi}^\pi \cos\left(x/2\right) \cdot e^{ix} dx $$ by using Eulers formula $$ \cos\left(x/2\right) = \frac{e^{ix/2}+e^{-ix/2}}{2} $$ we have that \begin{align*} \int_{-\pi}^\pi \cos\left(x/2\right) e^{ix} dx & = \frac{1}{2} \int_{-\pi}^\pi \left(e^{ix/2}+e^{-ix/2}\right) e^{ix} dx \\ & = \frac{1}{2} \int_{-\pi}^\pi e^{\frac{i3x}{2}}+e^{-\frac{i3x}{2}} dx \\ & = \frac{1}{2} \left[ \frac{2}{3i} e^{\frac{i3x}{2}} - \frac{2}{3i} e^{\frac{-i3x}{2}}\right]_{-\pi}^\pi \\ & = \frac{1}{2}\left(\frac{2}{3i}\cdot \left(-i\right)-\frac{2}{3i}\cdot \left(\frac{1}{-i}\right)-\left(\frac{2}{3i}\cdot \left(\frac{1}{-i}\right)-\frac{2}{3i}\cdot \left(-i\right)\right)\right) \\ & = -4/3 \end{align*} but the answer should be $4/3$. Why do I get a minus instead? Do you mind helping me? Thanks in advance. AI: You have a small error going from the first to the second line in your last formula... $e^{-ix/2} e^{ix} = e^{ix/2}$.
H: Are there sufficient conditions on $H,K \lneq G$, $H\ne K$, such that $[G:H]=[G:K]$ for infinite $G$? Let $G$ be a group and $H,K$ distinct proper subgroups of $G$. If $G$ is finite, then $|H|=|K|\Rightarrow [G:H]=[G:K]$. Now, if we move to the infinite $G$ case, and take e.g. $G=\mathbb{Z}$ and $H_n:=n\mathbb{Z}$, then $[G:H_n]=n$ and then, in spite of the fact that $|H_n|=|H_m|, \forall m,n>1$ ( here I consider the bijection $H_m\to H_n$ defined by $k \mapsto (k/m)n$ ), we get $[G:H_n]\ne[G:H_m]$ as soon as $m\ne n$. Are there sufficient conditions on $H,K \lneq G$, $H\ne K$, such that $[G:H]=[G:K]$ for infinite $G$? I was guessing that the case $K=gHg^{-1}$ worked, but I can't build up a bijection $f\colon G/H\to G/gHg^{-1}$ (here by "$/$" I denote the set of left cosets). AI: Conjugates do have the same index. The bijection you are looking for is: $$aH\mapsto gaHg^{-1}$$ The main thing to worry about is whether or not $gaHg^{-1}$ is actually a coset of $gHg^{-1}$, and you can see that it is by viewing it as $gag^{-1}\cdot gHg^{-1}$. The map is well-defined and injective as: $$\begin{align*} gaHg^{-1}&=gbHg^{-1}\\ \Leftrightarrow\: &gag^{-1}(gbg^{-1})^{-1}\in gHg^{-1}\\ \Leftrightarrow\: &gab^{-1}g^{-1}\in gHg^{-1}\\ \Leftrightarrow\: &ab^{-1}\in H\\ \Leftrightarrow\: &aH=bH \end{align*} $$ It is surjective as it is reversible (just as $gHg^{-1}$ is a conjugate of $H$, we have that $H$ is a conjugate of $gHg^{-1}$ so we can analogously define $bgHg^{-1}\mapsto bH$).
H: Need clarification on Abbott's Proof of the Algebraic Limit Theorem for Sequences Here is the excerpt of his proof that I do not understand I don't understand why he concludes that $|b_n| > |b|/2$ from this. AI: Observe that by the triangle inequality $|b|-|b_n|\leq |b_n-b|<|b|/2$. Then rearranging for $|b_n|$ you obtain the result. To see that the trick with the triangle inequality, note $|b|=|b_n+(b-b_n)|\leq |b_n|+|b-b_n|=|b_n|+|b_n-b|$. Therefore subtracting $|b_n|$ from both sides you obtain the result.
H: The Hausdorff measure of the unit interval I am trying to calculate the Hausdorff measure of the unit interval. Here's my attempt: Fix $\epsilon>0$. Consider the open balls $B(x,\epsilon)$ with $x\in[0,1]$. How many are needed to cover the unit interval? At least $\frac{1}{2\epsilon}$. So if we use $\lfloor{}\frac{1}{2\epsilon}\rfloor{}+1$ $\epsilon$-balls we will definitely cover the interval. So we have: $$H^\delta([0,1])\le{}H^\delta_\epsilon([0,1])\le{}\sum_{i=1}^{\lfloor{}\frac{1}{2\epsilon}\rfloor{}+1}{(2\times\epsilon)^\delta}=(\lfloor{}\frac{1}{2\epsilon}\rfloor{}+1)(2\epsilon)^\delta\le{}(\frac{1}{2\epsilon}+1)(2\epsilon)^\delta$$ But here, I don't think this works, as taking the limit $\epsilon\to0$ blows up the upper bound. Is there any way to lower bound this to show that it is 1? AI: (answer for the original question.) Take $\delta=1$, then the limit $\epsilon \to 0$. Your calculation shows $$ H^1([0,1]) \le 1 $$ Proof of the lower bound will also be needed: $$ H^1([0,1]) \ge 1 $$ But this is much harder. It can be done using properties of the Lebesgue measure. This is often the case when estimating Hausdorff measures: the upper bound is easy, the lower bound is hard. Now a comment for the edited question. With $\delta < 1$ you should get $H^\delta([0,1]) = \infty$. Your calculation shows $$ H^{\delta}([0,1]) \le +\infty $$ which is not very useful. You will need to prove a lower bound, $$ H^{\delta}([0,1]) > 0 $$ As I remarked before, the lower bound is the harder one to prove. It is easy to show $H^\delta(\mathbb Q \cap [0,1]) = 0$, so then you will know that the measures of the set and its closure are different.
H: Show that for every m and n value, $\int_0^1 x^m (1-x)^n \,dx= \int_0^1 x^n (1-x)^m \,dx$ Show that for every m and n value, $$\int_0^1 x^m (1-x)^n \,dx= \int_0^1 x^n (1-x)^m \,dx$$ I have no idea how to solve a question like that. Do I have to solve both parts of the equation and show that they're equal, or is there any other aspect that I'm not aware of? AI: $$\int_0^1 x^m (1-x)^n \,dx= \int_0^1 x^n (1-x)^m \,dx$$ Substitute $x = 1-t$, so that $dx = -dt$ and the limits $0$ and $1$ swap: $$\int_0^1 x^m (1-x)^n \,dx = \int_0^1 (1-t)^m t^n \,dt.$$ Finally, replace $t$ by $x$ to get the required answer.
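For concrete exponents the identity is easy to confirm symbolically; a small SymPy sketch (the pairs $(m,n)$ below are arbitrary examples):

```python
import sympy as sp

x = sp.symbols('x')
for m, n in [(2, 5), (3, 7), (4, 4)]:
    lhs = sp.integrate(x**m * (1 - x)**n, (x, 0, 1))
    rhs = sp.integrate(x**n * (1 - x)**m, (x, 0, 1))
    print(m, n, lhs, rhs, lhs == rhs)   # both equal the Beta value B(m+1, n+1)
```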
H: Roots of product of two quadratic. consider the quadratics, $ P(x) = ax^2 + bx +c$ ,and, $Q(x)= -ax^2 + dx+c$ , given that ac $\neq$ 0. What can we say about the roots of PQ? More particularly, does PQ have at least two real roots? my work: I don't really know where to start even, I suppose it has something to do with discriminant but what does one do after writing discriminant of each quadratic? AI: Taking complex numbers into consideration, P(x) and Q(x) both have two roots, and PQ, as a quartic polynomial, has four roots. But I think you are actually refering to Real roots rather than roots. We could easily find that any root of P(x) and Q(x) is the root of PQ, since PQ is the product of P(x) and Q(x). And we could also find that any root for PQ is a root for either P(x) or Q(x) or both. We could prove that by assuming this is not true, then there would be $PQ=0=P(x)*Q(x)$ Where $P(x)\ne0,Q(x)\ne0$ And that is impossible because the product of two non-zero numbers cannot be zero. Therefore we could deduce that the root for PQ, as a set, is the union of the roots of the P(x) and Q(x). So, as a result, We could find the roots for PQ by simply finding the roots for P(x) and Q(x), which is done by considering the signs of$\Delta_p=b^2-4ac, \Delta_q=d^2+4ac$. Noticing that $ac\ne0 ,b^2\ge0,d^2\ge0$, we could infer that there is at least one positive $\Delta$, since it is impossible for both $4ac$ and $-4ac$ to be negative. Thus, at least one polynomial has two real roots, implying that PQ has at least two real roots. By Victor, Huang
H: First trigonometric differential equation Show that: $$\tan(x) \frac {dy}{dx}-y=\sin^2(x)+2\sec(x)$$ where $y=\sin^2(x)-2\cos(x)$ I get: $\frac {dy}{dx}=2sin(x)cos(x)+2sin(x)$ =$tan(x)(2sin(x)cos(x)+2sin(x))-sin^2(x)-2cos(x)$ From here I go into many directions but not towards the RHS. Guidance is much appreciated AI: You have a little sign mistake here: $$E=\tan(x)(2\sin(x)\cos(x)+2\sin(x))-\sin^2(x)\color{red}{+2\cos(x)}$$ $$E=2\sin^2(x)+2\dfrac {\sin^2(x)}{\cos x}-\sin^2(x)+2\cos(x)$$ $$E=\sin^2(x)+2\dfrac {\sin^2(x)}{\cos x}+2\cos(x)$$ $$E=\sin^2(x)+2\dfrac {(\sin^2(x)+\cos^2(x))}{\cos x}$$ $$E=\sin^2(x)+\dfrac 2{\cos x}=\sin^2(x)+ 2{\sec x}$$ You can also solve the DE: $$\tan(x) \frac {dy}{dx}-y=\sin^2(x)+2\sec(x)$$ Multiply by $\cos x$: $$\sin(x) \frac {dy}{dx}-y\cos x=\sin^2(x)\cos x+2$$ Divide by $\sin^2 x$: $$( \frac {y}{\sin x})'=\cos x+2\csc^2(x)$$ Integrate: $$ \frac {y}{\sin x}=\sin x+2\int \csc^2(x)dx+C_1$$ $$ y(x)=\sin^2 x-2\sin x\cot(x)+C_1\sin x$$ $$ y(x)=\sin^2 x-2\cos(x)+C_1\sin x$$ Take $C_1=0$
H: Line graph degree sequence Let $G= (V,E)$ be a simple graph. The line graph $L(G)$ is the graph whose vertices correspond to $G$'s edges and two vertices are connected with an edge in $L(G)$ if the respective edges share a vertex in $G$. Vertices: Obviously $|V(L(G))| = |E(G)|$ by definition. Edges: An edge appears in $L(G)$ for each two edges that share a vertex in $G$, thus $$ |E(L(G))| = \sum_{v_i \in V(G)} {\text{deg}(v_i) \choose 2} $$ However, I'm having trouble producing a closed form of $L(G)$'s degree sequence. My first thought was to take an edge $e = (u,v) \in E(G)$ with $e \in V(L(G))$ and find its degree in $L(G)$ as $$ \text{deg}_{L(G)}(e) = \text{deg}_G(u) + \text{deg}_G(v) - 2 $$ but I can't figure out how to express $L(G)$'s degree sequence if $G$'s degree sequence $d_1 \geq d_2 \geq \dots \geq d_n$ is given. Any form of help would be greatly appreciated. AI: I don't think it's possible to deduce the degree sequence of $L(G)$ given JUST the degree sequence of $G$. For example, consider the following graphs: $G_1$ is a path of length 4, its degree sequence is 2,2,2,1,1. Then $L(G_1)$ is a path of length 3, its degree sequence is 2,2,1,1. $G_2$ is the disjoint union of a triangle and an edge, its degree sequence is 2,2,2,1,1. Then $L(G_2)$ is the disjoint union of a triangle and an isolated vertex, its degree sequence is 2,2,2,0. Then $G_1$ and $G_2$ have the same degree sequence but $L(G_1)$ and $L(G_2)$ do not.
H: Amount of money earned only from interest - recursion POV. I have a bank account with $0€$. At the beginning of every month I put an additional $250€$ into my account and at the end of the month I get a $0.5\%$ interest on my money. Based only on this information, after $10$ years (or $120$ months), how much money have I earned only from the interest? I know I have to use the $2$ recursion equations I learned in my discrete structures class, but I'm not sure how to utilise them properly in this case: $$x_n = a_1 \cdot x_{n-1} + b_1 \\ \text{and} \\ x_n= \begin{cases} b_0 \cdot a^n + b_1 \cdot \frac{a^n -1}{a-1} ^\ , & a \neq 1\\ b_0 + n \cdot b_1, & a=1 \end{cases}$$ Additionally, in this case the starting amount in the bank is $0€$ but what if we had more than that, for example $1000€$? AI: Every inflow into the bank account increases the principal amount, on which you will earn interest. I do not know about those particular recursion equations, but I will define the recursion, and you can check back to see how it fits with what you've been taught. For, let's say, the $n^{th}$ month, the interest you've earned till the previous month is $I_{n-1}$. Hence, your total amount as of the start of the $n^{th}$ month is $\sum P_i + I_{n-1}$, where $P_i$ is the monthly fixed payment (in your case, it's 250€). Now, in the $n^{th}$ month, you will earn interest on your total, plus the added monthly payment for that month. Hence $I_n = 0.5\%*(\sum_1^{n-1} P_i + I_{n-1}) + 0.5\%*P_n $ Now if you had an initial sum, that would just get added to the overall term as a constant, not affecting the rest of the solution.
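The recursion can also be simulated directly month by month; a minimal Python sketch (the variable names are mine, and `start` can be set to 1000 to answer the follow-up about a nonzero initial amount):

```python
# Deposit 250 at the start of each month; 0.5% interest credited at month's end.
deposit, rate, months, start = 250.0, 0.005, 120, 0.0

balance, total_interest = start, 0.0
for _ in range(months):
    balance += deposit                 # payment at the beginning of the month
    interest = balance * rate          # interest on the full balance at month's end
    total_interest += interest
    balance += interest

print(round(balance, 2), round(total_interest, 2))   # final balance, interest earned
```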
H: If eigenvalue has $n$ eigenvectors, is the matrix the $0$ matrix? Let $\lambda_1$ be an eigenvalue of $A \in \mathbb{R}^{nxn}$, with $n$ eigenvectors, and $\lambda_2$ an eigenvalue of $B \in \mathbb{R}^{nxn}$ with $k \leq n$ eigenvectors. Prove that $\lambda_1 + \lambda_2$ is an eigenvalue of $A+B$ with $k$ eigenvectors. I'm thinking that if $\lambda_1$ has $n$ eigenvectors then $\lambda_1 = 0$ and the characteristic polynomial is $\lambda_1^n$. This means $A$ is a nil-potent matrix because by Cayley–Hamilton we get $$P_A(A) = A^n = 0$$ I'm confused to whether or not I can say that $A = 0$ in this case because the nil-potent matrix doesn't have to be $0$, but it seems right here. AI: First of all, if $A$ has an eigenvalue $\lambda$ with $n$ linearly independent eigenvectors, then is is a multiple of the identity. Ideed, these $n$ linearly eigenvectors form a basis of $\mathbb{R}^n$, so the matrix in this basis of $A$ is $\lambda I_n$, and $A = \lambda I_n$ because it is invariant under change of basis. Moreover, any non-zero vector will be an eigenvector of $A$. Thus, $A + B = \lambda I_n + B$ has the same eigenvectors than $B$, because for $v\neq 0$ \begin{align} \exists a \in \mathbb{R}, Bv = av \iff \exists a \in \mathbb{R}, (\lambda I_n + B)v = (\lambda + a)v \end{align}
H: How is 'Convergence' defined? I've noticed that some papers, e.g. in theoretical computer science and numerical mathematics, provide pseudo-algorithms for their proposed methods. Often these pseudo-algorithms have instructions like: While not converged do: .... end or Repeat: .... Until convergence This makes me wonder how convergence is defined in mathematical terms and how one is testing if something has converged, i.e. what are the criteria for convergence? One common criterion I'm familiar with is the euclidean norm of two succeeding intermediate solutions being smaller than some $\epsilon$, $\left\lVert \theta_{t-1} - \theta_t \right\rVert_2 < \epsilon$. But are there other measures? What are the trade-offs of different measures? What if the solution we like to obtain doesn't live in a vector space in the euclidean sense? Think of a scenario where we like to estimate probability densities, could we use the Kullback-Leibler-Divergence $D_{KL}$ of two succeeding iterations in this case? AI: In mathematical terms, to say that a sequence has converged means that it has reached the limit, which in most cases simply never happens. What we mean when we say that a method has converged, is that for our purposes, the sequence has come close enough to the limit. But what close enough means strongly depends on the situation and is very subjective. Some examples that you haven't mentioned yet: If you are minimizing something, a stopping criterion could be that the norm of the gradient is smaller than some $\varepsilon$. In the case of solving an equation, here you could stop once the norm of the difference between both sides (i.e. residual norm) is smaller than some $\varepsilon$. When you run out of patience. This one is very bad but also very popular. In practice, finding a good stopping criterion can be very hard. In pseudo code it is easy to sweep this under the rug and pretend that you have a complete algorithm that nicely spits out what you want. In practice, this is something that you might have to tune, and sometimes it is simply dictated by how much time you have. What if the solution we like to obtain doesn't live in a vector space in the euclidean sense? This happens quite often. Notice that in the examples above I spoke about norms. These do not have to be the euclidean norm. You can replace it by other norms, metrics, distance/similarity functions that suit your case. In the case of probability density functions, you could use the $L_1$ norm for example. Using Kullback-Leibler divergence between successive terms does not look like a very natural idea to me.
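The stopping criteria mentioned above are easy to picture in code; a toy Python sketch of gradient descent with two of them (the objective, tolerance and step size are arbitrary illustrative choices, not from the answer):

```python
import numpy as np

# Minimize f(x) = ||x||^2 / 2, whose gradient is simply x.
grad = lambda x: x
x = np.array([3.0, -2.0])
eps, step, max_iter = 1e-8, 0.5, 10_000

for k in range(max_iter):
    g = grad(x)
    if np.linalg.norm(g) < eps:          # criterion 1: small gradient norm
        break
    x_new = x - step * g
    if np.linalg.norm(x_new - x) < eps:  # criterion 2: small change between iterates
        x = x_new
        break
    x = x_new

print(k, x)   # stops after ~30 iterations, x is essentially the minimizer 0
```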
H: Why do we have $Q(\alpha, z) = \mathbb{Q}(\alpha, z, \overline{z})$ where $\alpha, z, \overline{z}$ are the roots of $X^3+X+1 \in \mathbb{Q}[x]$? We are given the polynomial $f = X^3+X+1 \in \mathbb{Q}[x]$. It is easy to show that $f$ has only one real root, call it $\alpha$, and the other two roots are complex conjucates: $z, \overline{z}$. Let $L=\mathbb{Q}(\alpha, z, \overline{z})$ be the splitting field. I want to prove that $L = \mathbb{Q}(\alpha, z)$. Is there a straightforward way to show this? I have a proof for this fact, it is not long, but I feel like there should be a simpler explanation, and maybe a more general one. How would you prove this fact? AI: $(X-\alpha)(X-z)(X-\overline{z})=X^3+X+1$ so $\alpha+z+\overline{z}=0$, so that $\overline{z}=-z-\alpha$.
H: Writing explicitly $(s^2-1)^2+(t^2-1)^2$ as a polynomial in $st$ and $s+t$? Consider the symmetric polynomial $$ P(s,t)=(s^2-1)^2+(t^2-1)^2.$$ How can we write $P$ as a polynomial in the variables $st,t+s$? The Fundamental theorem of symmetric polynomials implies this is possible, but I am having trouble doing it in practice. AI: $P(s,t)=s^4+t^4-2s^2-2t^2+2$. Now, $s^2+t^2=\sigma^2-2\pi$, where $\sigma$ and $\pi$ are the sum and product of $s$ and $t$. Thus, $s^4+t^4=(s^2+t^2)^2-2s^2t^2=(\sigma^2-2\pi)^2-2\pi^2=\sigma^4-4\pi\sigma^2+2\pi^2$. So $P(s,t)=2-2\sigma^2+4\pi+\sigma^4-4\pi\sigma^2+2\pi^2$.
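For this particular polynomial the rewriting can be confirmed symbolically; a short SymPy sketch (`pi_` denotes the product $st$, to avoid clashing with the constant $\pi$):

```python
import sympy as sp

s, t = sp.symbols('s t')
sigma, pi_ = s + t, s * t   # elementary symmetric polynomials

P = (s**2 - 1)**2 + (t**2 - 1)**2
Q = sigma**4 - 4*pi_*sigma**2 + 2*pi_**2 - 2*sigma**2 + 4*pi_ + 2
print(sp.expand(P - Q))     # 0, so the two expressions agree
```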
H: Prove that a normal matrix can be completely determined by its eigenvalues and a unitary matrix. Prove that a normal matrix can be completely determined by its eigenvalues and a unitary matrix. I tried using the Spectral decomposition theorem: It states that a normal matrix N with spectrum $\{k_1, k_2,\dots, k_n\}$ can be represented as N = $k_1P_1 + k_2P_2 + ... + k_nP_n$ where, for $i,j$ belonging to $[n]$, $P_iP_j = P_i$ if $i = j$, else $0$. I tried making it all a matrix with multiplication of a diagonal matrix containing all the eigenvalues and another matrix containing all the $P_i$'s, but it didn't work. AI: You have the right idea. A consequence of the spectral decomposition theorem is that any normal matrix is unitarily diagonalizable: $$A=U^{-1}DU$$ Where $U$ is unitary and $D$ is diagonal. This means that given the corresponding unitary diagonalizing matrix and the eigenvalues of $A$ (which give you its diagonal form), you can find $A$ using this exact formula.
H: Induced mapping into mapping cylinder Can anyone please tell what is the induced map from mapping cone of f into Z? Also please explain how H and g induced G. Thanks in advance AI: A map $f: X \to Y$ is nullhomotopic iff there is an extension to $\tilde{f}:CX \to Y$ - you can just define $\tilde{f}\left(x, t\right) = h_t\left(x\right)$ where $h_t$ is the nullhomotopy. $C_f$ can be constructed as the pushout $CX \cup_f Y$, so by the universal property of pushout, mapping out of it is the same as mapping from $CX \sqcup Y$ in a way that that agrees on the image of $f$. A map from a disjoint union is the same as a pair of maps from each of the components. $H$ is then the extension of the composition to the cone, and $g$ is the given map, and they agree on $f$, thus define a map from the mapping cone.
H: Prove that $\sum_{k=1}^{\infty}\frac{k^2}{e^k}< \sum_{k=1}^{\infty}\frac{k}{2^k}$ Prove that : $$\sum_{k=1}^{\infty}\frac{k^2}{e^k}< \sum_{k=1}^{\infty}\frac{k}{2^k}$$ Without calculating the value of these series. The partial sum formulas are : $$\sum_{k=1}^n \frac{k}{2^k} = 2^{-n}(-n+2^{n+1}-2)$$ $$\sum_{k=1}^n \frac{k^2}{e^k} =\frac{ (e^{-n} (-e^2 n^2 + 2 e n^2 - n^2 - 2 e^2 n + 2 e n + e^{n + 1} + e^{n + 2} - e^2 - e))}{(e - 1)^3}$$ I have tried a straightforward comparison like : $$\frac{x^2}{e^x}\leq \frac{x}{2^x}$$ which is true for $x\geq 6$. So we have a problem with the first terms. If you have a trick (maybe using integrals) or a good answer, I'll take it. Any help is appreciated. Thanks a lot! AI: Observe $$ \sum_{k=1}^{11}(\frac{k^2}{e^k}-\frac{k}{2^k}) = -0.002905 $$ Further, include your finding that for $x > 6$, $$ \frac{x^2}{e^x}- \frac{x}{2^x} \leq 0 $$ This shows that the partial sum for more than 11 terms will always obey the required inequality, hence also the infinite sum.
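Both numerical facts used in the answer — the value of the first eleven terms and the sign of the tail — are easy to check by machine; a short Python sketch:

```python
import math

# Term-by-term difference k^2/e^k - k/2^k
d = lambda k: k**2 / math.e**k - k / 2**k

partial_11 = sum(d(k) for k in range(1, 12))
print(partial_11)                           # ~ -0.0029, as stated in the answer

total = sum(d(k) for k in range(1, 200))    # terms are negligible long before k = 200
print(total)                                # still negative, so the inequality holds
```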
H: Limit superior and inferior of Césaro means are almost surely constant Reading the book of probability of Achim Klenke I came across the assertion that if $(X_n)$ is a sequence of independent real valued random variables then the limit superior and inferior of the sequence of Cesáro means defined by $S_n:=\frac1n \sum_{k=1}^n X_k$ are almost surely constant, but I cannot see why. I know that the limit superior and inferior of $(X_n)_n$ are almost surely constant and I already knows that $$ \liminf X_n \leqslant \liminf S_n\leqslant \limsup S_n\leqslant \limsup X_n $$ but from here I cannot see why then must be also surely constant $\liminf S_n$ or $\limsup S_n$. The book just assert that but doesn't gives a proof (just gives a proof for the case of $\liminf X_n$). Can someone show me or give me a reference about how to prove this assertion? Note: at this point of the book there is no central limit theorem neither theorems related to convergence or weak or strong law of large numbers, the major theorem at this point is the Kolmogorov 0-1 law (that is: the tail $\sigma$-algebra of independent $\sigma$-algebras is $P$-trivial). AI: If $T_n=\frac1{n}\sum_{k=2}^nX_k$ then $|S_n-T_n|\leq\frac1{n}|X_1|$. From this it follows that for every $\omega\in\Omega$ we will have: $$\limsup S_n(\omega)=\limsup T_n(\omega)$$ In short we have $\limsup S_n=\limsup T_n$ so that we may conclude that $\limsup S_n$ is measurable wrt to $\sigma\{X_2,X_3,\dots\}$. This can easily be made broader to find that $\limsup S_n$ is measurable wrt to $\sigma\{X_k,X_{k+1},\dots\}$ for every positive integer $k$. So $\limsup S_n$ is measurable wrt to $\bigcap_{k=1}^{\infty}\sigma\{X_k,X_{k+1},\dots\}$ and it is well known that all sets in that tail $\sigma$-algebra have probability $0$ or $1$. Consequently random variables that are measurable wrt to it are degenerated.
H: Number of ordered Pairs satisfying $4^m-3^n=1$ Find the Number of ordered Pairs $(m,n)$ of positive integers satisfying $4^m-3^n=1$ My try: Trivially $m=n=1$ satisfies the equation. Let $m \gt 1$ $$4^m-3^n=(1+3)^m-3^n=1$$ $\implies$ $$3\binom{m}{1}+3^2\binom{m}{2}+3^3\binom{m}{3}+\cdots+3^m=3^n$$ Now since LHS is not a power of $3$ and RHS is, this is possible only when $m=1$ Hence the only ordered pair is $(1,1)$ Is this the right way? AI: Let $(m,n)$ be a pair with $n>1$ such that $4^m-3^n=1$. Looking at both sides modulo $4$, we see that $n$ must be odd, so of the form $n=2k+1$. Thus we now have $4^m-3\cdot 9^k=1$. Now looking at the equation modulo $9$, we see that $m$ must be a multiple of $3$, so of the form $m=3l$. Thus we have $64^l-3\cdot 9^k=1$. However, comparing both sides modulo $7$, we now must have $1-3\cdot 2^k\equiv 1\pmod 7$, or $2^k\equiv 0\pmod 7$. This is clearly impossible, hence $(1,1)$ in fact is the only solution. PS: I use $n>1$, or, equivalently, $k>0$, to say $3\cdot 9^k\equiv0\pmod9$.
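A brute-force search is not a proof, but it is a quick sanity check that no small solutions other than $(1,1)$ exist; a tiny Python sketch (the search bounds are arbitrary):

```python
# Exact integer arithmetic: look for 4^m - 3^n = 1 in a modest range.
solutions = [(m, n) for m in range(1, 100) for n in range(1, 200) if 4**m - 3**n == 1]
print(solutions)   # [(1, 1)]
```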
H: What do the components of the solution to a system of equations to find the parameters of a circle represent? The homogenous representation of a circle is given by $x^2 + y^2 + 2gxz + 2fyz + cz^2 = 0$ (or, equivalently, if we set $z=1$, $x^2 + y^2 + 2gx + 2fy + c = 0$). Now, given 3 points (in a homogenous form), we can solve a system of linear equations and retrieve the unknowns $f$, $g$ and $c$. This is all very nice (because of linear algebra), but what do these unknowns actually represent with respect to the circle? Which of these numbers represent the x and y coordinates of a circle and which one represents the radius? Apparently, $-f$ and $-g$ would be the $x$ and $y$ coordinate of the center of the circle? Why is that the case? I would like to see a proof/derivation of it. Also, what is the radius then? AI: $$ \begin{aligned} 0&=x^2+y^2+2gx+2fy+c\\ &= x^2+2gx+g^2+y^2+2fy+f^2+(c-g^2-f^2)\\ &=(x+g)^2+(y+f)^2-(f^2+g^2-c) \end{aligned} $$ This equation says that the squared distance of the point $(x,y)$ from the point $(-g,-f)$ is $f^2+g^2-c$, which describes a circle centered at $(-g,-f)$ with radius $\sqrt{f^2+g^2-c}$.
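The derivation above can be turned into a small computation: solve the linear system for $g,f,c$ from three points and read off the centre $(-g,-f)$ and radius $\sqrt{g^2+f^2-c}$; a Python sketch (the three points are an arbitrary example):

```python
import numpy as np

# Fit x^2 + y^2 + 2 g x + 2 f y + c = 0 through three given points.
pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 2.0]])

A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(3)])
b = -(pts[:, 0]**2 + pts[:, 1]**2)
g, f, c = np.linalg.solve(A, b)

center = (-g, -f)
radius = np.sqrt(g**2 + f**2 - c)
print(center, radius)   # (2.0, 1.0), sqrt(5) for this example
```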
H: Rewrite the feasible set of a linear programming problem Consider a $K\times 1$ vector $x$ and define $$ \mathcal{X}\equiv\{x\in \mathbb{R}^K: A*x\leq b\} $$ where $A$ is a $J\times K$ matrix containing known real scalars and $b$ is a $J\times 1$ vector containing known real scalars. Is it correct to say that $$ \mathcal{X}\equiv [\min_{x\in \mathcal{X}} x_1, \max_{x\in \mathcal{X}} x_1]\times [\min_{x\in \mathcal{X}} x_2, \max_{x\in \mathcal{X}} x_2]\times ...\times [\min_{x\in \mathcal{X}} x_K, \max_{x\in \mathcal{X}} x_K] $$ where "$\times$" denotes Cartesian product? AI: No, you are suggesting that the feasible region must be rectangular but this is not true. For example, we could have $x \ge 0, y \ge 0, x+y \le 1$. The feasible region is triangular. It is not equal to $[0,1] \times [0,1]$.
H: Variance of Linear Combination of Standard Normals I have a random variable $X_n=\frac{1}{n} \sum^n_{i=1} Z_i$ for $n\in \mathbb{Z}^+$ where $X_0=0$ and the $\{ Z_i \} \sim N(0, 1)$ for all $i$. The $Z_i$ are independent. I need to find the distribution of $X_n$. I know that a linear combination of normal random variables is also normal. I also know that $\mathbb{E}(X_n)=0$. However, I'm not sure how to calculate the variance of $X_n$. I know $\mathrm{Var}(\sum^n_{i=1} Z_i)=1+1+1...$ n times $= n$. But am unsure what happens when I try to find the $\mathrm{Var}(\frac{1}{n}\sum^n_{i=1} Z_i)$. My friends have gotten answers as $\frac{1}{n+1}$, but I got $\frac{1}{n}$ as my answer from the following working: $$\mathrm{Var}(\frac{1}{n}\sum^n_{i=1} Z_i)=(\frac{1}{n})^2\mathrm{Var}(\sum^n_{i=1} Z_i)$$ $$\mathrm{Var}(\frac{1}{n}\sum^n_{i=1} Z_i)=\frac{1}{n^2}\cdot n$$ $$\mathrm{Var}(\frac{1}{n}\sum^n_{i=1} Z_i)=\frac{1}{n}$$ Can anyone offer some insight? AI: It should indeed be $\frac1n$. More generally,$$\operatorname{Var}(\sum_ia_iZ_i)=\operatorname{Cov}(\sum_ia_iZ_i,\,\sum_ja_jZ_j)=\sum_{ij}a_ia_j\operatorname{Cov}(Z_i,\,Z_j)=\sum_{ij}a_ia_j\delta_{ij}=\sum_ia_i^2,$$where the penultimate $=$ uses the fact that the $Z_i$ are independent, and each of mean $0$ and variance $1$, to write $\operatorname{Cov}(Z_i,\,Z_j)=\delta_{ij}$ in terms of the Kronecker delta. Now take $a_i=\frac1n,\,1\le i\le n$.
H: How to find the maximal element of a set and its minimal elements? $X \neq \emptyset$ is a set, and we know that $(P(X), \subseteq)$ (the power set of $X$ ordered by inclusion) is a partially ordered set. For each $a\in X$, how can I show that $X\setminus \{a\}\in P(X)$ is a maximal element of the subset $A=P(X) \setminus \{X, \emptyset\}$? It is a made-up problem and I am unable to find anything that resembles this question. And I am asked to find $A$'s minimal elements, too. AI: An element is said to be maximal if it is not contained in any other element. Here, for any $a$ in $X$, $X\setminus \{a\}$ is only contained in $X$. Since $X$ is not present in $A$, $X\setminus \{a\}$ will not be completely contained in any other set belonging to $A$. Hence it is maximal for each $a$ in $X$. An element is minimal if it isn't bigger than any element in that set. Here, for any $a$ in $X$, $\{a\}$ doesn't contain any smaller set in it (though it contains the empty set, the empty set is not present in $A$, so you can't consider it to be contained in $\{a\}$). Hence every singleton is a minimal set.
H: Quick Question: Variance of a two related Gaussian distributions I have two random variables, $X$ and $Y$. Both follow a Gaussian distribution, and $$X \sim N(0,1)\;.$$ After some manipulation, I got that $$P(X \leq z) = P(kY \leq z)$$ where $k$ is some constant. Would this imply that $X=kY$, and therefore $\operatorname{Var}(kY)=k^2\operatorname{Var}(Y)=k^2$? AI: If $kY = X$ and $\mathsf{Var}(X) = 1$, then it must be true that $$\mathsf{Var}(kY) = 1 $$ and thus $$k^2\mathsf{Var}(Y) = 1$$ and thus $$\mathsf{Var}(Y) = \frac{1}{k^2}$$
H: Let p be a prime. If a group has more than $p-1$ elements of order $p$, then prove that the group can't be cyclic. Result: Let G be a group which has more than $p-1$ elements of order $p$. I need to prove that such a group can't be cyclic. $p$ is a prime number. Let's consider the case when G is finite. I want to prove the result using contradiction. Proof: On the contrary, assume that G is cyclic. Hence, there exists $a\in G$ such that $G=<a>$. Let $|G|=n$ Case 1: $p$ divides $n$ In this case, the number of elements of order $p$ is $\phi (p)$, where $\phi$ is the Euler Phi function. Since $\phi (p)= p-1$, we have a contradiction. Case 2: $p$ doesn't divide $n$, i.e. $\gcd (p,n)=1$ Let $|a^p|=t$. This implies that $a^{pt}=e$, where $e$ is the identity of $G$. $\implies$ $n$ divides $pt \implies \exists$ integer $k$ such that $ kn=pt$. How do I arrive at a contradiction from here? PS: I don't know Lagrange's theorem, Cosets, Sylow's theorem, Isomorphisms yet. AI: Here is a proof without using Lagrange's theorem. Suppose $p$ does not divide $|G|=n$. Since $G=\langle a\rangle $ we must have $|a|=n$. Now, by our assumption there is an element $x\in G$ of order $p$. Since it is an element in the group there must be some $0\leq t\leq n-1$ such that $x=a^t$. Then: $a^{pt}=x^p=e$ $n$ is the order of $a$, so this implies $n|pt$. But by our assumption $\gcd(p,n)=1$, so we conclude that $n|t$. But since $0\leq t\leq n-1$ this implies $t=0$. So $x=a^0=e$. It is a contradiction because $e$ has order $1$, not $p$.
H: Greatest integer less than or equal to the 50th term of the sequence defined below I found this question in one of my practice papers for the JEE. $$a_{1}= \sqrt[3]{28}$$ and $$a_{n+1}=\sqrt[3]{a_{n} + 60} \space \space \forall \space n \in\Bbb N $$ The question asks for the greatest integer less than or equal to the 50th term of the sequence defined above. How to approach this? AI: It is easy to see that $a_n$ is a strictly increasing sequence and is bounded above by $4$. (Both these facts can be proven via induction.) This shows that $a_n$ converges. (Of course, as a JEE student, you probably do not care about this and could have assumed this anyway.) Now, let us calculate that limit $L = \displaystyle\lim_{n\to\infty} a_n$. Since we know that $(a_n)$ converges, we can take limits on both sides of the recurrence relation to get $$L = \sqrt[3]{L + 60}.$$ Cube both sides and note that $L = 4$ is the only (real) solution. Thus, $L = 4$. From this we conclude that $a_n < 4$ for all $n \in \Bbb N$. Moreover, since $(a_n)$ was increasing, we have that $a_1 \le a_n$ for all $n \in \Bbb N$. Noting that $a_1 = \sqrt[3]{28} > \sqrt[3]{27} = 3$, we see that $$3 < a_n < 4.$$ You can now conclude that the answer is $3$.
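Although the intended solution is the monotonicity argument above, the recursion is easy to iterate numerically as a check; a tiny Python sketch (floating point only illustrates the exact-arithmetic fact that $a_n<4$ for every $n$):

```python
a = 28 ** (1 / 3)            # a_1
for _ in range(49):          # a_1 is given, so 49 more steps reach a_50
    a = (a + 60) ** (1 / 3)
print(a, int(a))             # a_50 is just below 4, so the greatest integer is 3
```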
H: Maths area question I want to answer the following question. [The figure is not included here; from the answer, it involves a triangle with sides 8 and 19 and area 70, together with a sector of a circle of radius 8 at the included angle.] I've worked out the area of the circle to be 200.96. Where would I go from here? AI: Hint $$\text{Area of triangle}=\frac{1}{2}(a)(b)\sin(C)$$ where a,b are sides and C is the included angle. So we get $70=\frac{1}{2}(8)(19)\sin(A)$, so $\sin(A)=\frac{35}{38}$. You can get the value of angle A in radians and find the area of the sector as $\frac{A}{2\pi}\pi (8)^2$
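Carrying the hint through numerically — a small Python sketch; it assumes the acute solution for $A$, which is what the hint appears to intend:

```python
import math

# 70 = (1/2) * 8 * 19 * sin(A), then the sector of radius 8 subtending A.
A = math.asin(35 / 38)                      # acute angle, in radians
sector = (A / (2 * math.pi)) * math.pi * 8**2
print(math.degrees(A), sector)              # ~ 67.1 degrees, sector area ~ 37.5
```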
H: If $X$ is symmetric about $a$, then $Y= X - a$ is symmetric about $0$ Suppose $X$ has a density function $f$ that is symmetric about $a$. Let $Y = X - a$. Show that the density function $g$ of $Y$ is symmetric about $0$. Setting $f(x) = g(x-a)$ gives you the result through basic algebra. But I'm having an inordinate amount of difficulty justifying the equality to myself. Could someone explain to me very clearly why/ if the equality is allowed? AI: You have that $P(X\leq x) = \int_{-\infty}^x f(t)dt$. Now using the variable change $t\mapsto t+a$ gives $$ P(Y\leq y) = P(X-a\leq y) = P(X\leq y+a) = \int_{-\infty}^{y+a}f(t)dt = \int_{-\infty}^y f(t+a)dt $$ so $Y$ has density $g(t) = f(t+a)$. But by symmetry of $f$ about $a$ we have $g(-t) = f(a-t) = f(a+t) = g(t)$ for each $t$, so $g$ is symmetric about $0$.
H: Intuition behind multiplication of decimals that are greater than 0.1 How can I visualize the multiplication of 2 decimals that are both greater than 0.1? For example, 0.2×0.25. I understand that 0.1×0.1 is to: step 1. divide a size into 10 parts 2. divide 1 of those parts into another 10 parts 3. retrieve 1 part from the product of step 2 Overall, it's equivalent to dividing the initial size into 100 parts, then getting 1 part from it. AI: You can do the same with your example by writing $0.2=\frac 15, 0.25=\frac 14, \frac 15 \cdot \frac 14=\frac 1{20}=0.05$ The problem comes when your decimals do not convert to simple fractions. I think it is better to think of doing the multiplication, then finding where the decimal point goes. Counting places past the decimal point works for me. If you multiply $0.1416 \cdot 0.71828$, ignoring the decimal points you get $101708448$. Now the number of decimals in the product is the sum of the numbers in the things you multiplied, so there are $9$ decimals and the result is $0.101708448$. You can also approximate $0.1416 \approx \frac 17, 0.71828 \approx \frac 7{10},$ so the product is about $\frac 1{10}$
H: A simple faithful module for $k\langle x,y\rangle/\langle xy-1\rangle$ Let $S$ be the free algebra $k\langle x,y\rangle$ and $R$ be the quotient $S/\langle xy-1\rangle$. In talking with someone, they mentioned they thought this algebra is left primitive. So, I set about looking how to prove that by finding a faithful simple left $R$ module. Here's my candidate: let $M$ be the abelian group that is a countable direct sum of copies of $k$. Let $y$ act as a right shift on the entries: $y(f_0)=0$ and $y(f_i)=f_{i-1}$ for $i>0$, and $x$ as the left shift $x(f_i)=f_{i+1}$, and extend the action to all of $S$. Obviously $xy-1$ annihilates $M$, so we can consider $M$ as a left $R$ module, and I convinced myself it's simple. The stumbling block for me is decisively concluding that the annihilator is no bigger than $(xy-1)$, proving that $_RM$ is faithful. A little digression: I'm speaking more generally of a problem where one knows an ideal contains a desired set of relations, but needs also to conclude that the relations generate the entire ideal. I have heard that Bergman's Diamond lemma is a good thing to learn with regards to this, but I have not found a good introduction to it. This seems tricky sometimes. Here's an example of intuition going wrong: Take the algebra $\mathbb Q\langle x,y\rangle/(xy-yx-1)$ and mod out by the ideal generated by $yx$. Alternative description of the algebra above, right? Except there's this inconvenient fact that $\mathbb Q\langle x,y\rangle/(xy-yx-1)$ is a simple ring, and therefore the quotient is the zero ring. Modulo $\langle xy-1\rangle$ we'd just be left with sums of monomials of the form $p(y)x^m$ where $p(y)$ is a polynomial in $y$. I want to reason that one can always produce an element of $M$ not annihilated by such an element, but I haven't gotten any traction with this. The $x^m$ doesn't cause any trouble, of course, but how do you prevent $p(y)$ from scrambling your element to produce $0$? AI: I'll only address the first part of the question, i.e. find a simple faithful module for your ring $R$. Let $V = \bigoplus_{n=1}^\infty e_n k$ and let $E= \operatorname{End}(V_k)$. Define $f(e_1) = 0$, $f(e_i)= e_{i-1}$ for $i \ge 2$. Let $g \in E$ such that $g^m e_1 = e_{r(m)}$ and $\lim_m r(m) = \infty$. Now, let $S$ be the $k$-algebra generated by $g$ and $f$. Then we can prove that $V$ is a simple faithful left $S$-module. How is this useful? For example, choosing $g$ such that $g(e_i) = e_{i+1}$, the map $$k\langle x,y\rangle \to S: x \mapsto f, y \mapsto g$$ is a ring epimorphism with kernel $(xy-1)$ so we get $$k\langle x,y \rangle/(xy-1) \cong S$$ and we can thus view the module above as a simple faithful $R$-module. For more information and details, see "A first course in non-commutative rings" by Lam, starting p195, where this example is discussed in more detail.
H: Calculate the number of triplets in a graph I try to compute the global clustering coefficient of a graph in Matlab using the adjacency matrix. I know how to find the number of closed triangles: trace(A^3), but I do not have any idea how to find open triplets in a graph efficiently. I wrote a code traversing each node and checking for open triangles, but it does not work for large datasets. Is there any option to calculate the number of all triplets (three connected nodes) in a graph? AI: With the same matrix calculations that gave you closed triangles, we can find open triplets. For $i\neq j$, the $(i,j)$ entry of $A^2$ counts the number of paths of length $2$ from $i$ to $j$; the diagonal entry $(A^2)_{ii}$ is just the degree of vertex $i$ (it counts walks $i\to k\to i$), which we do not want. So compute $\mathbf 1^{\mathsf T}\!A^2 \mathbf 1-\operatorname{tr}(A^2)$ (where $\mathbf 1$ is the all-ones vector): this adds up all the off-diagonal entries, telling you the total number of paths of length $2$ with distinct endpoints in your graph. I think Matlab can do this with sum(A^2,'all')-trace(A^2) or sum(sum(A^2))-trace(A^2), but I'm not a Matlab user so I can't be sure. A set of three vertices $\{i,j,k\}$ will contribute: $0$ to this total if the graph has $0$ or $1$ of the edges between them, since then there are no paths of length $2$ involving $i,j,k$. $2$ to this total if the graph has $2$ of the edges between them, since then there are two paths (one in either direction). $6$ to this total if the graph has all $3$ of the edges between them, since then any permutation of $\{i,j,k\}$ is a path. Therefore $\mathbf 1^{\mathsf T}\!A^2 \mathbf 1-\operatorname{tr}(A^2)$ gives you $2$ times the number of open triplets plus $6$ times the number of closed triplets. You already have the closed triplets from $\operatorname{tr}(A^3)$, so this lets you solve for the open ones. (Double-check that I'm getting the terminology right - I always forget how symmetry is counted for clustering coefficients - but the idea should be sound either way.)
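A small NumPy sketch of the count above (the function names and the example graph, a triangle with one pendant vertex, are mine):

import numpy as np

def triplet_counts(A):
    # A is a 0/1 symmetric adjacency matrix with zero diagonal.
    A = np.asarray(A, dtype=np.int64)
    A2 = A @ A
    closed = np.trace(A2 @ A) // 6          # each triangle is counted 6 times in trace(A^3)
    # Paths of length 2 with distinct endpoints: drop the diagonal of A^2 (the degrees).
    paths2 = A2.sum() - np.trace(A2)        # equals 2*open + 6*closed
    open_ = (paths2 - 6 * closed) // 2
    return closed, open_

def global_clustering(A):
    closed, open_ = triplet_counts(A)
    return 3 * closed / (3 * closed + open_)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(triplet_counts(A))     # (1, 2): one triangle, two open triplets
print(global_clustering(A))  # 0.6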
H: Show that pushforward measure is inner regular Let $X, Y$ be compact Hausdorff spaces, $\tau:Y \to X$ continuous and $\mu^\tau:=\mu \circ \tau^{-1}$ the pushforward measure of the measure $\mu$ which is a inner regular measure. Show that $\mu^\tau$ is inner regular as well. We have to show that $\mu^\tau(A)=\sup\{\mu^\tau(K):K\subseteq A, K\ \mathrm{compact}\}, \forall \ \mathrm{measurable\ } A \subseteq X$ I tried: $\mu^\tau(A)=\mu(\tau^{-1}(A))=\sup\{\mu(K):K \subseteq \tau^{-1}(A), K\ \mathrm{compact}\}=\sup\{\mu(\tau^{-1}(K)):K \subseteq (A), K\ \mathrm{compact}\}=\sup\{\mu^\tau(K):K \subseteq (A), K\ \mathrm{compact}\}$ Where do we use the fact that $X$ and $Y$ are compact Hausdorff spaces? It should also work for non compact Hausdorff spaces. But where do we use the hausdorff property? AI: You use (parts of) the assumption that $X$ and $Y$ are compact Hausdorff spaces in the justification of the third equals sign of your try. The other equals signs are immediately justified from the definition of $\mu^{\tau}$ and the assumption of inner regularity of $\mu$. Let's go from right to left, and start with $$\sup \: \{\mu(\tau^{-1}(K)) : K \subseteq A, K \text{ compact}\}\,.$$ Since $X$ is a Hausdorff space, we know that all the $K$ considered there are closed subsets of $X$. Hence $\tau^{-1}(K)$ is closed in $Y$ for all these $K$. Since $Y$ is compact, it follows that $\tau^{-1}(K)$ is compact. And clearly $\tau^{-1}(K) \subset \tau^{-1}(A)$, thus the family $\mathscr{F} = \{\tau^{-1}(K) : K \subseteq A, K \text{ compact}\}$ is a subfamily of the family of compact subsets of $\tau^{-1}(A)$, which shows $$\sup \: \{\mu(\tau^{-1}(K)) : K \subseteq A, K \text{ compact}\} \leqslant \sup\: \{ \mu(L) : L \subseteq \tau^{-1}(A), L \text{ compact}\}\,.$$ For this inference we need that compact subsets of $X$ are closed (which is a weaker assumption than Hausdorffness, but $X$ being Hausdorff is a less exotic assumption, satisfied in most cases one may be interested in), and that $Y$ is compact. For the other inequality, it suffices to see that every compact $L \subseteq \tau^{-1}(A)$ is contained in some member of $\mathscr{F}$. This is straightforward, since $L \subseteq \tau^{-1}(\tau(L))$, and $\tau(L)$ is compact by the continuity of $\tau$ — hence closed by the assumption on $X$ — and therefore $\tau^{-1}(\tau(L))$ is a member of $\mathscr{F}$ containing $L$. Thus $$\mu(L) \leqslant \sup \: \{\mu(\tau^{-1}(K)) : K \subseteq A, K \text{ compact}\}$$ for every compact $L \subseteq \tau^{-1}(A)$, and the third equals sign is established. Thus we used the compactness of the domain and (a consequence of) the Hausdorffness of the codomain for the proof. We see that the assumptions are stronger than needed, but not entirely superfluous.
H: Möbius Transformation from a unit disk to the upper half plane Consider the unit-disk $\mathbb{D} = \{ z : |z|\leq 1 \}$. I need to find a Möbius Transformation $w=Tz$ that maps $\mathbb{D}$ to the upper-half plane $\mathbb{H} = \{ w : Im(w) \geq 0\}$. I have searched and found that the linear fractional transformation $f(z) = \frac{i (1+ z)}{1-z}$ maps $\mathbb{D}$ to $\mathbb{H}$. But I am not sure about the logic behind how one can come to this result. If any hints could be given, it would be much appreciated. Thanks. AI: Geometrically $\{z\in \Bbb C | \ \ |z+1|=|z-1|\}$ is the locus of all $z$ equidistant from $1$ and $-1$, which is nothing but the imaginary axis in the complex plane. Whereas $A=\{z\in \Bbb C | \ \ |z+1| \geq |z-1|\}$ is the right half plane. $\tag{1}$ Now consider the map $f:A \to \Bbb D$ defined as $$z \mapsto \frac{z-1}{z+1};$$ from $(1)$ we can infer that $f$ is well-defined and is a bijective map. The map $g:A \to \Bbb H$ defined as $z\mapsto iz$ rotates the right half plane to the upper half plane. Hence the desired map is $g\circ f^{-1}: \Bbb D \to \Bbb H$.
H: Clarification on limits arithmetics Given a limit in the form of: $\lim_{x \to \infty} ((1+\frac{1}{f(x)})^{f(x)})^{g(x)}$ Assuming we know $f(x)\xrightarrow[x \to \infty]{}\infty, g(x)\xrightarrow[x \to \infty]{}c \in \mathbb{R}$ Why is it true that the limit is $e^c$? or isn't it? This is just a little thing I'm trying to understand, not h.w or something. I'm looking for a formal explanation obviously so $\lim_{x \to \infty} (1+\frac{1}{f(x)})^{f(x)}=e$ is obvious and not an answer. AI: If $u(x)\to e$ and $v(x)\to c,$ then $u(x)^{v(x)} \to e^c.$ Proof: Apply $\ln$ to the expression to get $v(x)\cdot \ln u(x).$ Note $\ln u(x)\to \ln e =1.$ By the product rule for limits, $v(x)\cdot \ln u(x)\to c\cdot 1 = c.$ Exponentiating back gives $e^c$ for the original limit. So in your problem all you need to verify is that $(1+1/f(x))^{f(x)} \to e,$ which is pretty standard.
H: Can $g\cdot A < A$ for some group action? Can $g\cdot A$ be a strict subset of $A$ for some group action '$\cdot$' from $G\times G$ to $G$. For some subset $A$ of $G$, for some element $g$ other than identity in $G$. Where $g\cdot A = \{ g\cdot a \mid a \in A \}$. AI: Let $\mathbb Q^*$ be the multiplicative group of nonzero rational numbers and let $A=\mathbb Z\setminus\{0\}$. Then if we take $g=2$ we have that $g.A$ is a proper subset of $A$.
H: Does polynomial generated by repeated application of matrix divide characteristic polynomial? Given a square matrix $A\in \mathbb C^{n,n}$ and a vector $v\ne0$, the vectors $$ v, Av, A^2 v, \dots, A^n v $$ are linearly dependent. Let now $m\le n$ be the smallest number such that $$ v, Av, A^2 v, \dots, A^m v $$ are linearly dependent. Then there are coefficients, not all of them zero, such that $$ \sum_{i=0}^m a_i A^iv=0, $$ or equivalently, $$ p(A)v=0 $$ for $p$ given by $p= \sum_{i=0}^m a_i t^i$. My question is: does this polynomial divide the characteristic polynomial $p_A$ of $A$? Of course, $p$ and $p_A$ share a non-trivial factor. I think a proof of the claim above can be achieved using Jordan decomposition, but it looks like it would be complicated. Is there a more elementary proof? Does such a proof also work for other fields different from $\mathbb R,\mathbb C$? (Using this polynomial is the way how Axler proves existence of eigenvectors for the complex case and existence of small invariant subspaces for the real case. ) AI: By the definition of $m$, the coefficient $a_m$ must be nonzero. So, we may assume that $p$ is monic. Note that $V=\operatorname{span}\{v,Av,\ldots,A^{m-1}v\}$ is an invariant subspace of $A$ and $p(A)A^kv=A^kp(A)v=0$ for every integer $k\ge0$. Hence $p$ annihilates the restriction of $A$ on $V$. It must be the minimal polynomial of $A|_V$, for, if the minimal polynomial $q$ of $A|_V$ has degree $r<m$, we will have $q(A)v=0$ and hence $v,Av,\ldots,A^rv$ will be linearly dependent, which is a contradiction to the definition of $m$. Now, since $p_A(A)=0$, we have $p_A(A|_V)=0$. As $p$ is the minimal polynomial of $A|_V$, it must divide $p_A$.
H: Let $X$ be compact and $f:X\to\mathbb{R}$ s.t. each $x\in X$ has a nbh where $f$ attains its minimum. Show $f$ attains minimum on $X$. Consider a compact topological space $X$ and a map $f:X\to\mathbb{R}$ such that each $x\in X$ has a neighborhood where $f$ attains its minimum. Show that $f$ attains its minimum on $X$. My attempt: I was thinking of covering $X$ with all the neighborhoods where $f$ attains a minimum, so something like $X\subseteq \bigcup_{x\in X} U_x$. Then, by compactness, there would be a finite subcover. I don't see how I can conclude that a minimum is attained from this information: I could possibly go over all sets of the subcover and take the overall minimum that $f$ attains (which exists, as the subcover is finite), but am I guaranteed that this minimum is the minimum of $f$ on $X$? AI: For each $x\in X$ we have an open neighborhood $U_x$ of $x$ and a point $t_x\in U_x$ such that $f(t_x)\leq f(t) \ \forall t\in U_x$. By compactness of $X$, we have a finite subcover $\{U_{x_1},...,U_{x_n}\}$ of $X$. Hence $f(t_{x_i})\leq f(t) \ \forall t\in U_{x_i}$ for $i=1,2,...,n.$ Let $f(t_0)=\min\{f(t_{x_i})\}_{i=1}^n$. Every $t\in X$ lies in some $U_{x_i}$, so $f(t)\geq f(t_{x_i})\geq f(t_0)$; hence $f(t_0)$ is the required minimum.
H: I am not able to understand these lines from deductive reasoning " Conclusion can only be false if at least one of the premises is also false. If both premises are true, then conclusion is also true. We will say that argument is valid if the premises cannot be all true without the conclusion being true as well. " I am not able to understand meaning of these lines in the textbook. Can anyone clarify? Thank You AI: You know contraposition. The contrapositive of "if A then B" is "if B is false, then A is false", or, more precisely, "if not-B, then not-A". A proposition and its contrapositive sentence are equivalent, they mean exactly the same thing. The definition of deductive validity says that a reasoning is valid just in case: if all the premises are true, then (necessarily) the conclusion is true. By contraposition, it can also be phrased: if the conclusion is false (i.e. not true), then not all the premises are true (meaning that at least one premise is false). So (1) if I know that a reasoning is valid (2) and that its conclusion is actually false, (3) then I can claim with certainty that at least one of its premises is false (one or more, possibly all).
H: Not Understanding the Key Step in a Linear Algebra Proof I'm self-studying some linear algebra from the Schaum's Outline of Linear Algebra by Lipschutz and I came across a problem where I don't fully follow the key step. Below I make a note with respect to where I don't follow: Problem 6.26: (Note that the notation $[T]_S$ and $[v]_S$ refer to the matrix representation of the operator $T$ and the coordinate vector of the vector $v$, respectively). The proof is clear to me until it comes time to make the key step to conclude that the equality after "Hence," implies that the two matrices acting on the coordinate vector are the same. I understand that the coordinate mapping is surjective (and, indeed, an isomorphism) as stated, and (I think) that this implies that there is a $v$ in $V$ such that this equality holds - and since it holds for all $v$ in $V$ the proof is complete? I feel like I'm missing something here. AI: I think we can complete the argument as follows: Once we have that $$P^{-1}[T]_SP[v]_{S'} = [T]_{S'}[v]_{S'} \quad \textrm{for any $v \in V$;} \tag{$*$}$$ call $A = P^{-1}[T]_SP$ and $B = [T]_{S'}$, and suppose that $S'$ consist of the $n$ ordered vectors $v_1,\dots,v_n$. Now observe that, for all $j$ between $1$ and $n$, the coordinate vector $[v_j]_{S'}$ is the column vector in $K^n$ whose only nonzero entry is a $1$ in the $j$-th place, let's denote this vector as $[v_j]_{S'} = e_j$. Applying $(*)$ with $v = v_j$ we have $Ae_j = Be_j$. Since the left hand side is just the $j$-th column of $A$ and the right hand side the $j$-th column of $B$, we conclude that all the columns of $A$ and $B$ coincide, hence, they are equal.
H: factorizing the given polynomial. factorize $(bc-a^2)(ac-b^2)-(c^2-ab)^2$. The one method I know is to something like let $ a=x$ and if equation becomes zero then $a-x$ is a root but how to exactly think? AI: I think it's better to do this using determinants; the given expression can be converted to the following matrix: $$\begin{pmatrix}{bc-a^2} & {ab-c^2}\\\ {ab-c^2} & {ac-b^2}\end{pmatrix}$$ and we just need to find the determinant of this matrix to recover the given expression. This can be easily done with the help of row and column transformations $$\det(\begin{pmatrix}{bc-a^2} & {ab-c^2}\\\ {ab-c^2} & {ac-b^2}\end{pmatrix})=({bc-a^2})({ac-b^2})-({ab-c^2})^2$$and using $$C_1\to C_1-C_2$$we get $$\det(\begin{pmatrix}{(c-a)(a+b+c)} & {ab-c^2}\\\ {(b-c)(a+b+c)} & {ac-b^2}\end{pmatrix})$$ and taking $a+b+c$ common from $C_1$ we get $$(a+b+c)\det(\begin{pmatrix}{c-a} & {ab-c^2}\\\ {b-c} & {ac-b^2}\end{pmatrix})$$ and now evaluating the det value we get $$({bc-a^2})({ac-b^2})-({ab-c^2})^2=(a+b+c)(c)(ab+ac+bc-a^2-b^2-c^2)$$ which simplifies to $$({bc-a^2})({ac-b^2})-({ab-c^2})^2=-(a+b+c)(c)(\frac{1}{2})((a-b)^2+(b-c)^2+(c-a)^2)$$ And a simpler way is to just simplify it directly: $$({bc-a^2})({ac-b^2})-({ab-c^2})^2=abc^2-b^3c-a^3c+a^2b^2-c^4-a^2b^2+2abc^2$$now taking $-c$ common from the whole expression gives us $$(-c)(a^3+b^3+c^3-3abc)=(-c)\left(\frac{1}{2}\right)\left((a-b)^2+(b-c)^2+(c-a)^2\right)(a+b+c)$$
H: Does the infinite power tower $x^{x^{x^{x^{x^{x^{x^{.{^{.^{.}}}}}}}}}}$ converge for $x < e^{-e}$? As per the answer at https://math.stackexchange.com/a/573040/23890, the infinite power tower $x^{x^{x^{x^{x^{x^{x^{.{^{.^{.}}}}}}}}}}$ converges if and only if $ x \in [e^{-e}, e^\frac{1}{e} ] $. Is $ e^{-e} = 0.065988... $ really the lower bound for $ x $ for this power tower to converge? I wrote a Python program to check how this power tower behaves for $ x < e^{-e} $, for example, $ x = 0.01 $. It seems for $ x = 0.01 $, the power tower converges to $ 0.941... $. from math import * def power_tower_iterations(x, iterations): result = 1 for i in range(iterations): result = x ** result return result def power_tower(x): for iterations in [10, 100, 1000, 10000, 100000, 1000000]: print(power_tower_iterations(x, iterations)) power_tower(0.01) Output: 0.9415490240371601 0.9414883685756696 0.9414883685756696 0.9414883685756696 0.9414883685756696 0.9414883685756696 So does the power tower converge for $ x < e^{-e} $ or does it not? If it does not, what error have I made above that led me to the false conclusion that the power tower converged to $0.941...$ for $x = 0.01$? AI: Hint: a nice answer is given here by @Simply. For $0<x<e^{-e}$, it diverges. Note that for such small values of $x$, we get $t>x^{x^t}$ when $t>y$ and $t<x^{x^t}$ when $t<y$, where $y=x^y$. In other words, adding more powers of $x$ ends up pushing it farther and farther away from $y$ instead of closer. Particularly, it approaches $0$ and $1$, for even and odd powers of $x$. At the same time, the limit of the odd iterations is not the same as the limit of the even iterations for $0<x<e^{-e}$, as @Rob Bland claimed above in a comment; your code only prints even iteration counts ($10, 100, \dots, 1000000$), so it only ever sees the even subsequence, which is why it appears to converge.
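A short sketch (function name mine) making the oscillation visible by comparing consecutive even and odd iteration counts for $x=0.01$:

from math import e

def tower_after(x, iterations):
    # Value of the tower after a fixed number of applications of t -> x**t, starting from 1.
    result = 1.0
    for _ in range(iterations):
        result = x ** result
    return result

x = 0.01
print(x < e ** (-e))  # True: x is below the convergence threshold e^(-e) ~ 0.0660
for n in (100000, 100001, 1000000, 1000001):
    print(n, tower_after(x, n))
# Even iteration counts settle near 0.9415..., odd counts near 0.0131...,
# so the tower oscillates between two values rather than converging.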
H: Show that the sequence $a_n=\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{n}$ is rising and is unbounded. We have a sequence: $$a_n=\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{n}$$ We need to show that it is rising and above unbounded. So I did the following: $$a_{n+1}=a_n+\frac{1}{n} $$ $$a_{n+1}>a_n $$ $$a_n+\frac{1}{n}>a_n $$ $$\frac{1}{n}>0$$ $$n>0$$ Which holds. First question: Is the upper proof enough to satisfy that the sequence is rising As far as I understand, I need to also now show that supremum doesn't exist. So I have done the following: $$ \sup{a_n}=\lim{a_n} = M $$ $$M>0$$ $$M>a_n$$ for every $n$ $$a_n+\frac{1}{n}<M$$ $$1<(M-a_n)n $$ Because $M>a_n$ we can divide without sign direction change So we get: $$\frac{1}{M-a_n}<n$$ Which I think isn't the end of the proof. I do not know how to continue. AI: Here is a proof by grouping terms and using elementary inequalities (Due to Nicole Oresme). For non-negative integers $j$ we have, $$\sum_{k=1}^{2^j} \frac{1}{2^j+k}\geq\sum_{k=1}^{2^j}\frac{1}{2^{j+1}}=\frac{2^j}{2^{j+1}}=\frac{1}{2}$$ Now, for $n\geq1$, $$\sum_{r=1}^{2^n}\frac{1}{r}=1+\sum_{j=0}^{n-1}\left( \sum_{k=1}^{2^j}\frac{1}{2^j+k}\right)\geq\left(1+\frac{n}{2}\right)=\frac{n+2}{2}$$ Equality holds for $n=1$. Hence $a_{2^n}\geq\frac{n+2}{2}$. So the sequence $\{a_n\}$ grows unboundedly.
H: Finding the diagonal of a rectangle. What fraction of the rectangle is shaded? (You may assume that each line, other than the diagonal of the rectangle, is parallel to some side of the rectangle.) Is there a way to solve this without doing algebraic manipulations to get the other side length of the rectangle? How can the two rectangles being formed inside help in the calculation of the diagonal? AI: You do not need the length of the diagonal to find the proportion. If $H$ is the height of the whole rectangle, then the small rectangle in the bottom-left corner is of height $\frac H9$ because of similarity. Hence, the proportion of the shaded region is $$\frac{1\cdot \frac 89H + 8\cdot \frac 19 H}{9H} = \frac{16}{81}$$. Another way is to see that the whole rectangle is tiled by $9\times 9$ rectangles congruent to the small bottom-left rectangle. Now, counting gives as well $\frac{16}{81}$.
H: There is a $C > 0$ such that $\vert f(z)-f(w)\vert \leq C\vert z-w\vert$ for a holomorphic map into $B(0, R)$. Let $0<\delta<r$ and $R>0$, and $a\in\mathbb C$. I want to show that there exists $C>0$ such that for all holomorphic functions $f\colon B(a,r)\to B(0,R)$, we have $$ \vert f(z)-f(w)\vert\leq C\vert z-w\vert $$ for $z,w\in B(a,\delta)$. I'm not sure how to approach this. I was wondering if I could use that $$ \left\vert\frac{f^{(n)}(a)}{n!}\right\vert\leq\frac{\Vert f\Vert_{C(a,\rho)}}{\rho^n} $$ for some $0<\rho<r$. I also have an expression from biholomorphic maps from $B(0,1)$ to itself, but I'm not sure if I could use that here. Any hints? AI: Let $\rho = (r - \delta)/2$. Then for any $z \in B(a, \delta)$ the closure of the disk $B(z, \rho)$ is contained in $B(a, r)$, so that Cauchy's estimate for the derivate gives $$ |f'(z)| \le \frac{\Vert f \Vert_{C(a,\rho)}}{\rho} \le \frac{2R}{r - \delta} =: C \, . $$ This $C$ satisfies the desired inequality: Let $\gamma(t) = z + t (w-z)$, $0 \le t \le 1$, be the straight path from $z$ to $w$. Then $$ f(z) - f(w) = \int_\gamma f'(\zeta) \, d\zeta = (w-z)\int_0^1 f'(z + t(w-z)) \, dt $$ and therefore $$ |f(z) - f(w)| \le |z-w| \int_0^1 |f'(z + t(w-z))| \, dt \le C |z-w| \, . $$
H: Stability of these fixed points Say we have the set of nonlinear equations, where $\alpha>0$: $$\begin{matrix} \frac{dx}{dt}=x[1-\alpha x-y]\\ \frac{dy}{dt}=y[1-x-\alpha y] \end{matrix}$$ I have determined that the fixed points of this system when $\alpha \neq 1$ are $$(0,0), (0,\frac{1}{\alpha}), (\frac{1}{\alpha},0), (\frac{\alpha (1-\alpha)}{1-\alpha ^{2}},\frac{1-\alpha}{1-\alpha^{2}})$$ And the Jacobian is $$J= \begin{pmatrix} 1-2\alpha x-y& -x\\ -y & 1-x-2\alpha y \end{pmatrix}$$ So, the fixed point $(0,0)$ has eigenvalues $\lambda=1,1$, so this fixed point is unstable. But what about the stability of the other fixed points? For example, for the fixed point $(0,\frac{1}{\alpha})$ the Jacobian becomes $$J= \begin{pmatrix} \frac{\alpha -1}{\alpha}& 0\\ -\frac{1}{\alpha} & -1 \end{pmatrix}$$ which gives eigenvalues of $\lambda=\frac{\alpha -1}{\alpha},-1?$ But I am struggling at this point because the first eigenvalue could be positive or negative depending on the value of $\alpha$, correct? Have I made a mistake somewhere or am I missing something? Thanks. AI: There is no problem with your answer. Just do it on a case by case basis depending on $\alpha$. For example: $$\bigg(0,\frac 1\alpha\bigg) \text{ is a }\begin{cases} \text{saddle} & \alpha<0 \\ \text{stable node} & 0<\alpha <\frac 12 \\ \text{stable inflected node } & \alpha = \frac 12 \\ \text{stable node} & \frac 12<\alpha < 1 \\ \text{saddle} & \alpha > 1\end{cases}$$ where of course, a saddle is unstable by definition. When $\alpha = 1$, you get a zero eigenvalue, so you will have to expand out the center manifold in order to determine the stability in this case. EDIT: Have you been taught how to find the eigenvalues for a general matrix? For any matrix $A$, $\lambda$ is an eigenvalue of $A$ if and only if $\lambda$ satisfies $\det(A-\lambda I)=0$ In particular, the determinant of a $2 \times 2$ matrix is $$ \det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad-bc$$ Here, the Jacobian is $$J = \begin{pmatrix} -\frac{\alpha}{\alpha+1} & -\frac{1}{\alpha+1} \\ -\frac{1}{\alpha+1} & -\frac{\alpha}{\alpha+1} \end{pmatrix} $$ So we want to find $\lambda$ such that $\det(J-\lambda I)=0$. \begin{align} 0 & =\det(J-\lambda I) \\ & = \det \begin{pmatrix} -\frac{\alpha}{\alpha+1}-\lambda & -\frac{1}{\alpha+1} \\ -\frac{1}{\alpha+1} & -\frac{\alpha}{\alpha+1}-\lambda \end{pmatrix} \\ & = \bigg(-\frac{\alpha}{\alpha+1}-\lambda\bigg)\bigg(-\frac{\alpha}{\alpha+1}-\lambda\bigg) - \bigg(-\frac{1}{\alpha+1}\bigg)\bigg(-\frac{1}{\alpha+1}\bigg) \\ & = \bigg(\lambda + \frac{\alpha}{\alpha+1}\bigg)^2 - \bigg(\frac{1}{\alpha+1}\bigg)^2 \\ \end{align} Solving for $\lambda$, we find that $$\lambda = \color{red}{-}1 \qquad \text{ or } \qquad \lambda = \color{red}{-}\frac{\alpha-1}{\alpha+1}$$ I leave it up to you to do the analysis.
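A quick numerical check of this analysis (a sketch; the alpha values are arbitrary, and the interior fixed point is taken to be the symmetric point x = y = 1/(1+alpha), which is the point whose Jacobian appears in the edit above):

import numpy as np

def jacobian(x, y, a):
    # Jacobian of (x[1 - a x - y], y[1 - x - a y]) from the question.
    return np.array([[1 - 2*a*x - y, -x],
                     [-y, 1 - x - 2*a*y]])

for a in (0.4, 0.5, 0.8, 2.0):
    boundary = (0.0, 1/a)            # fixed point (0, 1/alpha)
    interior = (1/(1+a), 1/(1+a))    # symmetric interior fixed point
    lb = np.linalg.eigvals(jacobian(*boundary, a))
    li = np.linalg.eigvals(jacobian(*interior, a))
    print(f"alpha={a}: eigenvalues at (0,1/alpha) = {np.round(lb, 3)}, at interior = {np.round(li, 3)}")
# For alpha < 1 both eigenvalues at (0, 1/alpha) are negative (stable node);
# for alpha > 1 one is positive (saddle), matching the case table above.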
H: Show that a function f, continuous on (a,b), has an abs minimum value. The limits as x approaches either bounds of the interval is +Infinity Can you show that the function f must have an absolute minimum value on the interval (a,b), if f is continuous on (a,b) and the right hand limit as x->a along with the left hand limit as x-> b are both equal to positive infinity? Now before I waste anyone's time I should explain my point of confusion with this question. Its a concept question from the differential calculus section of a text I own, so this was really for my own practice. My initial attempts consisted of establishing an interval [c, d] within (a,b) and applying the min and max closed interval prop. Then I'd increase the bounds of the closed interval so that for sufficiently large da, there exists a point satisfying what I need. This felt incomplete to me. AI: Consider some $c\in(a,b)$. Since $\lim_{x\to a}f(x)=\lim_{x\to b}f(x)=\infty$, there exists some $\epsilon >0$ such that for any $y\in(a,a+\epsilon)$ or $z\in(b-\epsilon,b)$, $f(y),f(z)>f(c)$. So, the minima can't be in those regions. So, the minima of the function on $(a,b)$ is the minima of the function on $[a+\epsilon, b-\epsilon]$. But, this interval is compact, so there is in fact a minimum.
H: Randomizing a regular polygon along its "spokes" while maintaining area I have a 2d graphics question that seems like it'd fit better here than at stackoverflow. Please forgive any breach of ettiquette as I am new to mathematics exchange. I did search for a solution before posting this. I am taking a regular polygon (of arbitrary n sides) and adding a random value (between 1 and -1) to the length of each "spoke" (ie. each circumradius line segment). I'd like to shift the random values so that when they are added to the polygon's spokes, the polygon's area is the same as the original polygon's area. I naively assumed that subtracting the total average of the values from each value (thus making the average of the values be 0) would do this, but this clearly does not work. Here is an example of what I'd like to achieve: I have an n=4 regular polygon (a square) with a circumradius of 1. The area of this polygon is 2. I also have a random value for each point of the polygon: (0.85, -0.75, 0.6, -0.4). If I add these values to the "spokes" of the polygon then the resulting polygon has spokes of length (1.85, 0.25, 1.6, 0.6) and an area of: 1.85 * 0.25 * 0.5 + 0.25 * 1.6 * 0.5 + 1.6 * 0.6 * 0.5 + 0.6 * 1.85 * 0.5 1.46625 I'd like to shift the random values so that the resulting polygon has an area of 2 (ie. the original area). If I reduce the values by their average of 0.075 then I get (0.775, -0.825, 0.525, -0.475). Adding these shifted values to the polygon results in a polygon with spokes of length (1.775, 0.175, 1.525, 0.525), which average out to 1. The polygon's area is then: 1.775 * 0.175 * 0.5 + 0.175 * 1.525 * 0.5 + 1.525 * 0.525 * 0.5 + 0.525 * 1.775 * 0.5 1.155 So shifting to get an average of 0 does not work. My question: What would I shift by instead so that the resulting polygon has the same area as the original polygon? EDIT: joriki provided a solid answer to this question that was akin to normalizing a vector. Unfortunately, this technique involves calculating the area of the resulting polygon, which becomes a bottleneck in software implementation (I am solving this problem each frame for a number of regular polygons of n=64 each). I can try to optimize the solution if this is the only option, but a less computationally expensive solution would be preferred. AI: Compute the area of the randomized polygon and multiply all spoke lengths by the square root of the quotient of the desired area over the current area.
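A minimal Python sketch of this (names are mine; it assumes the spokes are equally spaced in angle, so the area is (1/2)·sin(2π/n)·Σ r_i·r_{i+1}, which reduces to the 0.5·r_i·r_{i+1} terms used in the question when n = 4):

import math, random

def spoke_area(spokes):
    # Area of a polygon whose vertices sit on equally spaced spokes of the given lengths.
    n = len(spokes)
    wedge = math.sin(2 * math.pi / n)   # equals 1 for n = 4
    return 0.5 * wedge * sum(spokes[i] * spokes[(i + 1) % n] for i in range(n))

def renormalize(spokes, target_area):
    # Area scales with the square of the lengths, so scale by sqrt(target/current).
    scale = math.sqrt(target_area / spoke_area(spokes))
    return [r * scale for r in spokes]

original = [1.0] * 4                                          # regular square, circumradius 1, area 2
jittered = [r + random.uniform(-0.9, 0.9) for r in original]  # jitter kept small enough that spokes stay positive
fixed = renormalize(jittered, spoke_area(original))
print(spoke_area(original), spoke_area(jittered), spoke_area(fixed))

The area computation itself is a single pass over the spokes, so even at n = 64 it is only a few dozen multiply-adds per polygon per frame; the square-root rescaling should not be a bottleneck in practice.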
H: Finding a limit given a function is continous if $ f(x) = \frac{\sin 3x+ A \sin 2x + B \sin x}{x^5} $ for $x\neq 0 $ is continuous at $x=0$ , then $A+B+f(0)$ is..? Answer is supposedly '$2$'.. According to my friend we have to find '$A$' and '$B$' such that this limit exists but would that be correct interpretation? AI: The given $f$ (for general $A$ and $B$) is undefined at $x = 0$, which is the obstacle we must overcome. We are told that $f$ is actually continuous at $x = 0$, so (by definition) $f(0)$ agrees with its limit as $x \rightarrow 0$, $\lim_{x \rightarrow 0} f(x) = f(0)$. So somehow, $f$ is actually defined at $x = 0$. The only thing(s) in sight that could be varied to make this happen are the parameters $A$ and $B$. So your friend is right: you are to find $A$ and $B$ such that $ \lim_{x \rightarrow 0} f(x)$ exists (which implies it is finite). From the given $f$, for the limit to exist, we should have the form $\left[\frac{0}{0}\right]$. (This suggests that $x = 0$ is a zero of the numerator -- which is true.) So we will have to use some technique to evaluate limits of this form...
H: Reference for $p$-adic Haar integral I have stumbled upon the notion of a $K$-valued Haar integral on a locally compact group, where $K$ is a non-Archimedean field, as well as the $K$-valued modular function, in an article of Schikhof. He cites his thesis "Non-archimedean harmonic analysis", but I have not been able to find that book anywhere. Is there any other more accessible reference where I could read about this? I am particularly interested in the $p$-adic case. AI: You will find it in the following link: https://repository.ubn.ru.nl/bitstream/handle/2066/107512/mmubn000001_234785616.pdf?sequence=1
H: How close $f(n)$ is to $\phi(n)^2$ asymptotically? Consider the number $f(n)=\#\{(a,b) \mid 1\leq a,b\leq n, (a,n)=(b,n)=1,~\text{gcd}(a-b,n)=1\}.$ It is clear that $f(n) \leq \phi(n)^2.$ Is it it known how close $f(n)$ is to $\phi(n)^2$ asymptotically ? AI: Well (unless $n=1$) it is at most $\phi(n)^2-\phi(n)$, since every such pair has $a\neq b$. This can be an equality, for example if $p$ is prime then $f(p)=(p-1)(p-2)=\phi(p)^2-\phi(p)$. On the other hand $f(n)=0$ whenever $n$ is even, since any $a,b$ coprime to $n$ must both be odd, meaning that $a-b$ is even and has a common factor with $n$. So I don't think you can say very much beyond $0\leq f(n)\leq \phi(n)^2-\phi(n)$.
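A brute-force sketch (function names mine) comparing f(n) with the bound φ(n)² − φ(n) for a few small n:

from math import gcd

def f(n):
    units = [a for a in range(1, n + 1) if gcd(a, n) == 1]
    return sum(1 for a in units for b in units if gcd(abs(a - b), n) == 1)

def phi(n):
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

for n in [2, 3, 5, 7, 9, 10, 15, 21]:
    print(n, f(n), phi(n) ** 2 - phi(n))
# Even n give f(n) = 0; primes p give f(p) = (p-1)(p-2), hitting the upper bound.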
H: Majorization of $(5,5,0) $ by $(10,0,0)$ Majorization definition: $x $ is majorized by $y$ iff $x$ is in the convex hull of all the points formed by permuting the components of $y$. Question: $(5,5,0) $ is majorized by $(10,0,0)$ according to the alternate definition of majorization given at wikipedia. My confusion is: the convex hull formed by $(10,0,0)$ is the planar equilateral-triangle with three vertices at $(10,0,0), (0,10,0)$ and $ (0,0,10)$; and $(5,5,0)$ is not the part of this convex set. I see the two definitions contradicting. AI: We have $(5,5,0) = 0.5(10,0,0) + 0.5(0,10,0)+0(0,0,10)$, so it is in the convex hull.
H: Is this a correct solution to this problem? 2018 AIME II Problem 10 Problem: Find the number of functions $f(x)$ from $\{1, 2, 3, 4, 5\}$ to $\{1, 2, 3, 4, 5\}$ that satisfy $f(f(x)) = f(f(f(x)))$ for all $x$ in $\{1, 2, 3, 4, 5\}$. Solution: Note that there are $5^5$ possible functions $f(x)$. Now consider the probability of picking a function from those $3125$ functions that satisfies, exclusively, one of the following criteria: $P(x = f(x) = f(f(x)) = f(f(f(x)))) = \frac{1}{5^5}$ $P(x \neq{f(x)} = f(f(x)) = f(f(f(x)))) = \frac{1}{5^4}$ $P(x = f(x) \neq{f(f(x))} = f(f(f(x)))) = \frac{1}{5^2}$ $P(x \neq{f(x)} \neq{f(f(x))} = f(f(f(x)))) = \frac{1}{5}$ Thus the number of functions $f(x)$ satisfying the condition stated in the problem is given by: $\frac{5^5}{5^5} + \frac{5^5}{5^4} + \frac{5^5}{5^2} + \frac{5^5}{5} = 1 + 5 + 125 + 625 = \boxed{756}$. AI: Let $S = \{1,2,3,4,5\}$. We are told that $f(x)=x$ on $f^2(S) = \{f(f(s)): s \in S\}$. On the other hand, if $f(x)=x$ then $x = f(f(x)) \in f^2(S)$. Now $f^2(S) \subseteq f(S) \subseteq S$. If $f(S) = S$, then $f^2(S) = S$ as well, and $f(x)=x$ for all $x$. If not, $f$ maps $S \backslash f(S)$ into $f(S)$, and maps $f(S) \backslash f(f(S))$ into $f(f(S))$. Each point of $f(S) \backslash f(f(S))$ is the image of a point of $S \backslash f(S)$. There are the following possibilities for cardinalities of $S \backslash f(S)$, $f(S) \backslash f(f(S))$, and $f(f(S))$: $0,0,5$ $1,0,4$ $1,1,3$ $2,0,3$ $2,1,2$ $2,2,1$ $3,0,2$ $3,1,1$ $4,0,1$ (this last case is the one where all five points map to a single fixed point). You can then compute the number of possibilities in each case. For example, for the case $2,2,1$, there are $5!/(2!2!1!)= 30$ ways to choose which points go in which set, then $2$ ways to map the two points of $S \backslash f(S)$ to $f(S) \backslash f(f(S))$, and only one way to do the rest (both points of $f(S) \backslash S$ must map to the one point of $f(f(S))$, and that point maps to itself), so $30 \times 2 = 60$ functions corresponding to this case.
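Since the state space is tiny (5^5 = 3125 functions), the count is easy to confirm by brute force; a short sketch:

from itertools import product

count = 0
for f in product(range(1, 6), repeat=5):   # f[x-1] is the value f(x)
    if all(f[f[x - 1] - 1] == f[f[f[x - 1] - 1] - 1] for x in range(1, 6)):
        count += 1
print(count)  # 756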
H: Finding triangle side lengths given altitudes or lengths of sectioned triangles within triangle. In right triangle $ABC,$ $\angle C = 90^\circ.$ Let $P$ and $Q$ be points on $\overline{AC}$ so that $AP = PQ = QC.$ If $QB = 67$ and $PB = 76,$ find $AB.$ How do I use ratios and given side lengths to create a proportion to solve for $AB$? Is there any other way to solve this? I would think the best way to approach this is to relate $QB/CB = AB/CB$, though that would make $CB$ for both the same. I guess the relation of $AB/AC = QB/QC$ can also be used. AI: Let $BA= y, BC=x. $ We can solve numerically using Pythagoras thm twice: $$ 2 \sqrt{67^2 - x^2} = \sqrt{76^2 - x^2}$$ $$ 3 \sqrt{67^2 - x^2} = \sqrt{y^2 - x^2}$$ Since there are 2 equations and two unknowns, we can solve by squaring both equations. Solve the first for $x^2$ and substitute into the second to find $y$: $$ y=89,\quad x=63.7181\ldots$$
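A short numerical sketch of the same elimination (variable names mine; it uses the squared relations QB² = t² + x², PB² = 4t² + x², AB² = 9t² + x² with t = QC and x = BC, which are just the squared forms of the two equations above):

from math import sqrt

QB, PB = 67, 76
t2 = (PB**2 - QB**2) / 3   # subtract the first relation from the second
x2 = QB**2 - t2
AB = sqrt(9 * t2 + x2)
print(sqrt(x2), AB)        # BC = 63.7181..., AB = 89.0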
H: Can I infer this about probability of drawing a queen or heart from deck of a cards (twice in a row) no replacements? So I know the probability of drawing a queen or a heart from a deck of cards is 4/13 Let A be the event a heart or queen is drawn on the first draw Let B be the event a heart or queen is drawn on the second draw If I were to draw twice from the deck, with no replacements would, could I assume that P(A⋂B) + P(A⋂BC) = P(A) = 4/13 I found myself making this assumption and was like...wait...can I? AI: Yes, that’s correct, because $A\cap B$ and $A\cap\overline B$ are mutually exclusive, and their union is $A$.
H: A wine taster claims she can distinguish four vintages of a particular Cabernet. What is prob this can be done by guessing? Feedback A wine taster claims she can distinguish four vintages of a particular Cabernet. What is prob this can be done by guessing? She is confronted with four unlabeled glasses. I have a solution to this question, but I also saw a solution elsewhere and I am confident that my solution is actually the correct one. My Solution Since there are four unlabeled glasses, the taster can either guess correctly or guess wrong for each glass. As such there are $2^{4}$ possible sets of four guesses. There is also only one way of getting all four glasses correct (call this event $A$). As such the probability will be: $$P(A) = \frac{1}{2^{4}} = \frac{1}{16}$$. Other Solution Now thinking about it while writing this post, I think in my solution I considered the glasses as "ordered" whereas in the other solution they treated them as unordered hence difference in sample space size. Could somebody confirm that is actually the case? AI: While your approach could work in considering each glass to be the right wine or the wrong wine, the probability for each is not $1/2$. Instead, the probability is $1/4$ for the first glass, $1/3$ for the second glass, $1/2$ for the third glass, and $1/1$ for the last glass. The probability for each is chosen by assuming that all previous glasses were correctly assigned. It is easier to look at the first glass as having $4$ possible states: Wine $A$, $B$, $C$, or $D$. Naively, this would make it seem like there are $4^4 = 256$ possible states, but this reduces to $4! = 24$ possible states (with only one right one), since each glass must map one-to-one to the guess of a wine.
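A tiny enumeration check of the 1/4! answer (the labels are mine):

from itertools import permutations
from fractions import Fraction

vintages = ("A", "B", "C", "D")
guesses = list(permutations(vintages))         # all 4! ways to assign the four glasses
correct = sum(g == vintages for g in guesses)  # only the identity assignment gets every glass right
print(Fraction(correct, len(guesses)))         # 1/24 = (1/4)(1/3)(1/2)(1/1)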
H: Why does the integral $\int_{-\infty}^{\infty} \frac{1}{x^2}dx$ diverge? The integral $\int_{-\infty}^{\infty} \frac{1}{x^2}dx$ according to many websites diverges. But, by manual calculation, the answer is clearly $0$. Why is this the case? AI: I just realized it's because the integrand blows up at $x = 0$: it approaches infinity from both sides as $x \to 0$, so the value at $x = 0$ does not exist. The improper integral therefore has to be split at $0$, and already $\int_0^1 \frac{1}{x^2}\,dx = \lim_{\varepsilon\to 0^+}\left(\frac{1}{\varepsilon}-1\right)=\infty$, so the whole integral diverges. The manual calculation that gives $0$ applies the antiderivative $-\frac1x$ across the singularity at $0$, which is not valid.
H: Nilradical is a prime ideal, then the ring is not a product ring. Let $A$ be a commutative ring such that $nil(A)$ is a prime ideal. This is just a characterization of $Spec A$ being irreducible. Then, according to the argument of general topology, irreducible implies connected. So, to translate this into ring theory, we should like to say 'if $nil(A)$ is a prime ideal, then $A$ cannot be a product of non-zero rings.' But I cannot show the last proposition with ring theory. Thank you for your help, my teachers. AI: I suppose the rings have a unit. Suppose that $A=U\times V$ with $U,V$ non-zero. Then $(1,0)(0,1)=(0,0)\in Nil(A)$, so primality of $Nil(A)$ implies $(1,0)\in Nil(A)$ or $(0,1)\in Nil(A)$, a contradiction since $(1,0)$ and $(0,1)$ are non-zero idempotents and hence not nilpotent.
H: Notations from Rudin's book This is taken from Rudin's book Notations: If $x=(x_1,...,x_n) \in R^n$ and $y=(y_1,..,y_m)$ in $R^m$. and $(x_1,...,x_n,y_1,..,y_m) \in R^{n+m}$. Every $A \in L(R^{n+m},R^n)$ can be split into two linear maps $A_x$ and $A_y$, defined by $A_x h = A(h,0), A_y k=A(0,k)$ for any $h \in R^n, k \in R^m$. Then $A_x \in L(R^n), A_y \in L(R^m,R^n)$ and $A(h,k)=A_xh +A_yk$. My question is how do $A_x h = A(h,0), A_y k=A(0,k)$ and $A(h,k)=A_xh +A_yk$ work in terms of matrix operations? Specifically how do I check that those equations hold? AI: If you want to treat $A$, $A_x$ and $A_y$ as matrices, then you should interpret all variables $0, h, k$, etc. as column vectors. For example, "$(h, k)$" actually means a $(n + m)$-dim column vector by stacking the $n$-dim column vector $h$ and the $m$-dim column vector $k$ together. Moreover, $A$ can be seen as the juxtaposition of $A_x$ and $A_y$: $A = \begin{bmatrix} A_x & A_y \end{bmatrix}$. With this, we have \begin{align} A(h, k) = \begin{bmatrix}A_x & A_y\end{bmatrix}\begin{bmatrix} h \\ k\end{bmatrix} = A_xh + A_y k. \\ \end{align} Setting $h = 0$ or $k = 0$ in the above equation, you can get other equalities verified.
H: Counterexample: Two groups $H$ and $G$, with surjective homomorphismus So I need to find a counter-example, i.e. I need to find two groups $H$ and $G$, with $N$ being normal subgroup of $G$, with a NON-surjective Homomorphism $\phi: G \rightarrow H$, such that $\phi(N)$ is NOT normal subgroup of H. I know for sure, that H isn´t supposed to be abelian group, but I am not sure which one to take! I would appreciate any kind of help. AI: Every group homomorphism $\Bbb Z\to S_3$ which maps $1$ into an odd permutation.
H: Show that $M_p^p\equiv 1 \mod p^2$ Can it be shown that $M_p^p\equiv 1 \mod p^2$ where $M_p=2^p-1$ is a Mersenne prime. I tried to develop the left part into into $2^{p^2}-1-pk2^p$ and use $2^{p^2}\equiv 2^p \mod p^2$, but I get nowhere Thanks AI: From FLT $$2^{p-1}\equiv 1\pmod{p} \Rightarrow 2^{p}\equiv 2\pmod{p} \Rightarrow 2^p-1\equiv 1 \pmod{p} \Rightarrow$$ $$M_p \equiv 1 \pmod{p} \tag{1}$$ And from $$M_p^p-1=\color{red}{(M_p-1)}\color{blue}{(M_p^{p-1}+M_p^{p-2}+M_p^{p-3}+...+M_p+1)}$$ we have the expression in red is divisible by $p$, from $(1)$ also from $(1)$ follows $$M_p^{p-1} \equiv 1\pmod{p}$$ $$M_p^{p-2} \equiv 1\pmod{p}$$ $$..$$ $$M_p \equiv 1\pmod{p}$$ $$1 \equiv 1\pmod{p}$$ if we sum them $$M_p^{p-1}+M_p^{p-2}+M_p^{p-3}+...+M_p+1\equiv \underbrace{1+1+1+...+1}_{p\text{ times}}=p\equiv 0\pmod{p}$$thus, the expression in blue is also divisible by $p$. As a result $p^2 \mid M_p^p-1$.
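Note that the argument only uses that $p$ is an odd prime (via FLT), not that $2^p-1$ is itself prime, so the congruence is easy to spot-check over small primes; a sketch:

def check(p):
    M = 2**p - 1
    return pow(M, p, p * p) == 1   # tests M_p^p == 1 (mod p^2)

print(all(check(p) for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]))  # True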
H: For every projection $p$ and normal $a$ in a C*-algebra $A$ (with $ap=pa$), there is a $*$-isomorphism $C(\sigma(a))\to C^{*}(a,p)$ such that ... Let $a$ be a normal element of a (non-unital) C*-algebra $A$. I am trying to prove that for every projection $p\in A$ that commutes with $a$ (i.e. $p=p^{2}=p^{*}$ and $ap=pa$), there is a $*$-isomorphism $\phi:C(\sigma(a))\to C^{*}(a,p)$ such that $\phi(z)=a$ and $\phi(1)=p$. Summary of the notation: $\sigma(a)$ is the spectrum of $A$, $z\colon\sigma(a)\to\mathbb{C}$ is the inclusion map, $C^{*}(a,p)$ is the C*-subalgebra of $A$ generated by $a$ and $p$. My attempt: I think that $p$ is a unit for $C^{*}(a,p)$ (but I am not able to prove it yet). If $C^{*}(a)_{1}$ is the unitization of $C^{*}(a)$ (= the C*-subalgebra of $A$ generated by $a$), then the map $$\pi_{1}\colon C^{*}(a)_{1}\to C^{*}(a,p),\qquad x+\lambda\mapsto x+\lambda p$$ is a $*$-isomorphism. Note that $a$ is normal. Let $\pi_{2}\colon C(\sigma(a))\to C^{*}(a)_{1}$ be the functional calculus of $a$ in $C^{*}(a)_{1}$ (note that $\sigma(a)$ does not depend on the C*-subalgebra of $A$ containing $a$). Then $\pi_{1}\circ\pi_{2}(1)=p$ and $\pi_{1}\circ\pi_{2}(z)=a$. Hence $\phi:=\pi_{1}\circ\pi_{2}$ should do the job. So (I think that) this proof only works if $p$ is unit for $C^{*}(a,p)$. I still haven't used that $ap=pa$, so this is probably needed to prove my claim (if my claim is true). Any suggestions are greatly appreciated! AI: For your $\pi_1$ to be multiplicative, you need $ap=a$. Otherwise, you could have for instance that $ap=p$; in that case, $$ \pi_1((a+\lambda)^2)=\pi_1(a^2+2\lambda a+\lambda^2)=a^2+2\lambda a+\lambda^2 p, $$ while $$ \pi_1(a+\lambda)^2=(a+\lambda p)^2=a^2+(2\lambda+\lambda^2)p. $$ And you have already used that $ap=pa$; if not, then you couldn't possibly get that $\pi_1$ is multiplicative. When $pa=a$, then $C^*(a,p)$ is a unitization of $C^*(a)$, so the isomorphism is automatic. In general, it is not true that $C(\sigma(a))\simeq C^*(a,p)$. Let $A=\mathbb C^4$, and $$ a=(2,1,0,1),\ \ \ p=(0,1,1,0). $$ Then $pa=ap=(0,1,0,0)$. We have $\sigma(a)=\{0,1,2\}$, so $C(\sigma(a))=\mathbb C^3$. The algebra $C^*(a,p)$, on the other hand, contains $(1,0,0,0)=f(a)$ for $f$ a polynomial with $f(2)=1$, $f(1)=f(0)=0$. Also $(0,1,0,0)=pa$. Also $(0,0,1,0)=p-pa$. And $(0,0,0,1)=a-2(1,0,0,0)-(0,1,0,0)$. So $C^*(a,p)=\mathbb C^4$. If you want the above $A$ to be non-unital, just use $A=\mathbb C^4\oplus c_0$ and work on the first four coordinates.
H: Predicate logic: Negation My book says that the negation of "Everyone likes coffee" is "Not everyone likes coffee". But if i apply that to quantors (c = likes coffee), then... $\neg(\forall x(c(x))) \Longleftrightarrow \exists x (\neg c(x))$ Which means that "Not everyone doesn't like coffee". Can someone explain? Is this the same as the sentence above? AI: What you have written here: $\exists x (\neg c(x))$ does not mean "Not everyone doesn't like coffee". It means "There exists someone who is not a coffee-liker." That is: "at least someone doesn't like coffee" which is the same thing as "Not everyone likes coffee".
H: Find $\int_{0}^{\infty} \frac{\log(x) }{\sqrt{x} (x+1)^{2}}\,dx$ Need solve the next integral $$\int_{0}^{\infty} \frac{\log(x) }{\sqrt{x} (x+1)^{2}}\,dx$$ Tried something with Laurent’s series, but i can’t conclude anything. Thanks AI: METHODOLGY $1$: CONTOUR INTEGRATION Enforce the substitution $x\mapsto x^2$ to find that $$\int_0^\infty \frac{\log(x)}{\sqrt x(x+1)^2}\,dx=4\int_0^\infty \frac{\log(x)}{(x^2+1)^2}\,dx\tag1$$ Let $f(z)$ be given by $$f(z)=\oint_C \frac{\log^2(z)}{(z^2+1)^2}\,dz\tag2$$ where we choose to cut the plane along the positive real axis and where $C$ is the classical keyhole contour. We then have from $(2)$ $$\begin{align} \int_0^\infty \frac{\log^2(x)-\left(\log(x)+i2\pi\right)^2}{(x^2+1)^2}\,dx&=2\pi i \text{Res}\left(\frac{\log^2(z)}{(z^2+1)^2}, z=e^{i\pi/2}\right)\\\\&+2\pi i \text{Res}\left(\frac{\log^2(z)}{(z^2+1)^2}, z=e^{i3\pi/2}\right)\tag3\end{align}$$ The left-hand side of $(3)$ becomes $$\begin{align} \int_0^\infty \frac{\log^2(x)-\left(\log(x)+i2\pi\right)^2}{(x^2+1)^2}\,dx&=-i4\pi\int_0^\infty \frac{\log(x)}{(x^2+1)^2}\,dx\\\\& +4\pi^2\int_0^\infty \frac{1}{(x^2+1)^2}\,dx\tag4 \end{align}$$ Note that the imaginary part of the right-hand side of $(4)$ is $-\pi$ times the integral of interest on the right-hand side of $(1)$. Thus, we find that $$\begin{align} \int_0^\infty \frac{\log(x)}{\sqrt x(x+1)^2}\,dx&=-2\text{Re}\left(\text{Res}\left(\frac{\log^2(z)}{(z^2+1)^2}, z=e^{i\pi/2}\right)\right)\\\\ &-2 \text{Re}\left(\text{Res}\left(\frac{\log^2(z)}{(z^2+1)^2}, z=e^{i3\pi/2}\right)\right)\\\\ &=-2\left(-\frac\pi4+\frac{3\pi}{4}\right)\\\\ &=-\pi \end{align}$$ METHODOLGY $2$: REAL ANALYSIS ONLY We begin by enforcing the substitution $x\mapsto \tan(x)$ in the integral on the right-hand side of $(1)$ to reveal $$\begin{align} 4\int_0^\infty \frac{\log(x)}{(x^2+1)^2}\,dx&=4\int_0^{\pi/2}\cos^2(x) \log(\tan(x))\,dx\\\\ &=4\int_0^{\pi/2}\cos^2(x) \log(\sin(x))\,dx\\\\&-4\int_0^{\pi/2}\cos^2(x) \log(\cos(x))\,dx\tag5 \\\\ &=4\int_0^{\pi/2}(2\cos^2(x)-1)\log(\sin(x))\,dx\tag6\\\\ &=4\int_0^{\pi/2}\cos(2x)\log(\sin(x))\,dx\tag7\\\\ &=4\left(-\int_0^{\pi/2}\cos^2(x)\,dx\right)\tag8\\\\ &=-\pi \end{align}$$ as expected. In going from $(5)$ to $(6)$ we made use of the transformation $x\mapsto \pi/2 -x$ in the second integral on the right-hand side of $(5)$. In going from $(7)$ to $(8)$, we used integration by parts with $u=\log(\sin(x))$ and $v=\sin(x)\cos(x)$
H: Line integral $\int_{AB}y\,ds$ I need to calculate this line integral: $$\int_{AB}y\,ds$$ Where $AB$ is an arc defined by $\begin{cases} x=t \\ y=\sqrt{t} \end{cases}$ with $t\in [1,2]$ I got $I=\int_1^2 t\sqrt{1+\frac{1}{4t}} \, dt$ and now I'm stuck with this integral. I tried substitution method but it's getting heavier. How to approach this? AI: $$\int_1^2t\sqrt{1+\frac{1}{4t}}\ dt$$ $$=\int_1^2\sqrt{t^2+\frac{t^2}{4t}}\ dt$$ $$=\int_1^2\sqrt{t^2+\frac{t}{4}}\ dt$$ $$=\int_1^2\sqrt{\left(t+\frac{1}{8}\right)^2-\left(\frac{1}{8}\right)^2}\ d\left(t+\frac18\right)$$ using standard result, $\int \sqrt{x^2-a^2}dx=\frac12\left(x\sqrt{x^2-a^2}-a^2\ln|x+\sqrt{x^2-a^2}|\right)$, $$=\frac{1}{2}\left(\left(t+\frac18\right)\sqrt{t^2+\frac t4}-\frac{1}{64}\ln\left|t+\frac18+\sqrt{t^2+\frac t4}\right|\right)_1^2$$ $$=\frac{51\sqrt2-9\sqrt5}{32}-\frac{1}{128}\ln\left(153+108\sqrt2-68\sqrt5-48\sqrt{10}\right)$$
H: Brownian motion increments - are they random variables or random processes If $W_t$ is a Brownian motion process and $0 \le t_1 \le t_2$ then is the increment $W_{t2} - W_{t1}$ a random variable or a random process? My lectures say "random variable" but I believe it makes more sense to call this a random process because any segment of a Brownian motion process is still a Brownian motion process. AI: If $t_1$ and $t_2$ are fixed it is a random variable. If you consider it as a mapping from $\mathbb{R}^2$ $(t_1,t_2)\mapsto W_{t_2}-W_{t_1}$, so that $t_1$ and $t_2$ are variables, then you can see it as a process on $\mathbb{R}^2$. In practice you can see a random process as a set of random variables which are labelled on some space (in your case by $(t_1,t_2)\in\mathbb{R}^2$). So even when you look at $W_t$ itself, if you consider it as a mapping $t\mapsto W_t$ it is a random process, if you consider it at a fixed time $t$ it is a random variable.
H: Let $A,B,X$ be sets such that $A\cup B = X$ and $A \cap B = ∅$. Show that $A = X\backslash B$ and $B = X\backslash A$. I'm trying to prove this Let $A,B,X$ be sets such that $A\cup B = X$ and $A \cap B = ∅$. Show that: (1) $A = X\backslash B$ and (2) $B = X\backslash A$. My proof is Let $x \in A$. We know that $x \notin B$ by definition of intersection. It follows that $x \in A \cup B$ by definition of union so we have $x \in X$. Therefore all elements of A must not be in B and must also be in X so $A=X \backslash B$ by definition of difference sets. We can repeat the same process on an arbitrary element of B to get $B= X \backslash A$. Is this correct and is there any way I could word my argument better? AI: This argument is a correct but is not complete. In your proof of (1) you proved only that if $x \in A$ then $x \in X\backslash B$. This is just half of proving (1). To have a complete proof of (1) you need to also prove the reverse implication i.e. you need to prove that if $x \in X\backslash B$ then $x \in A$. The statement (2) you don't really need to prove, it follows by symmetry as you noticed.
H: Help to evaluate the integral $\iint_D\frac{y}{\sqrt{x^2+y^2}}dxdy$ I'm solving a problem about integrals in curves, and I got this integral: $$\int_1^2\int_1^2\frac{y}{\sqrt{x^2+y^2}}dxdy.$$ I have been struggling to solve it. I'm sure i have to do some variable change to polar coordinates (to simplify the denominator expression), to be said, $$x=r\cos\theta \phantom{a},\phantom{a}y=r\sin\theta.$$ $$\text{being: } \phantom{a}r=\sqrt{x^2+y^2}\phantom{a},\phantom{a}\theta=\arctan\frac{y}{x}$$ My problem is finding the new integration limits. The integration region is the square of vertices: $(1,1),(1,2),(2,1),(2,2)$. I'm not sure how is the square transforming to a polar coordinates region. How do i find the new integration limits? AI: Use change of order of integration $$\int_1^2\int_1^2\frac{y}{\sqrt{x^2+y^2}}dxdy$$ $$=\int_1^2\int_1^2\frac{y}{\sqrt{x^2+y^2}}dydx$$ $$=\int_1^2\left(\frac12\int_1^2\frac{d(x^2+y^2)}{\sqrt{x^2+y^2}}\right)dx$$ $$=\int_1^2\left(\sqrt{x^2+y^2}\right)_1^2dx$$ $$=\int_1^2\left(\sqrt{x^2+4}-\sqrt{x^2+1}\right)dx$$ $$=\left(\frac x2\sqrt{x^2+4}+2\ln|x+\sqrt{x^2+4}|-\frac x2\sqrt{x^2+1}-\frac12\ln|x+\sqrt{x^2+1}|\right)_1^2$$
H: I don't understand the Rank–nullity theorem.. $$\dim(U) = \dim (\ker\phi) + \dim(\text{im }\phi)$$ Hey all I'm just a little confused about the above mentioned theorem; namely, how does one measure the dimensions of a kernel? A kernel is normally defined as a set, and I don't see how I could measure the dimensions of a set.. and surely, if we were referring to a bijection in the above-mentioned linear mapping, $\phi$, then wouldn't the theorem no longer hold true at that point because the dimensions of the ker would surely be 1, as there is definitely exactly one identity-element mapped to the identitity element in the other set, right? $n = 1 + n$? a one-to-one mapping implies a Kernel with only one element and a set with only one element has how many dimensions? it sounds like I'm comparing apples to pears.. I hope it doesn't come across as a stoopid question, thanks yo. AI: A kernel is not only a set, it is a subspace of $U$. Which is to say, it is itself a vector space, and as such has a notion of dimension. It is actually a good exercise to show that this is indeed the case.
H: Is it true that if a function is bound by a constant so is it's derivative? I have an intuition that I haven't been able to prove nor disprove: Given a parametric function $p$ if $||p(t)|| < C$ then $||p'(t)|| < C$. The intuition behind that claim is that the change in the magnitude of a function cannot be higher than the magnitude of that function. I am not sure if this is true. Is it? AI: That's very false. Look at $f(x) = \sin(e^x)$. Here, the function is bounded, but the derivative isn't. Note that the derivative measures "how fast" the function is changing. But just because the function doesn't change very much, it doesn't impose any constraints on the rate of change. This is precisely what the example above illustrates: $\sin(\cdot)$ is a nice bounded function, so $\sin(e^x)$ is also bounded. However, the derivative is $e^x \cos(e^x)$. And because of the exponential multiplying the cosine, the derivative changes vary rapidly (and is actually oscillating). Try to sketch these two graphs side by side and compare them. The "converse" to what you're saying is true. If the derivative is bounded; i.e there is a $C>0$ such that for all $x$, $|f'(x)| \leq C$, then this imposes a limit to how much the function can change: for all $\xi,\eta$, we have $|f(\xi) - f(\eta)| \leq C |\xi - \eta|$. This can be proven easily using the mean-value theorem.
H: Is $\mathbb A^n(\mathbb Z)$ Zariski dense in $\mathbb P^n(\mathbb C)$? Since $\mathbb A^1(\mathbb Z)$ is infinite, it is obvious that $\mathbb A^1(\mathbb Z)$ is dense in $\mathbb P^1(\mathbb C)$ with respect to the Zariski topology. Does the same property hold for $\mathbb A^n(\mathbb Z)\subset \mathbb P^n(\mathbb C)$? I am considering the canonical embedding of the affine space. I think that the answer is positive, otherwise I'd need a system of polynomial equations which is zero on $\mathbb A^n(\mathbb Z)$ but not on the whole $\mathbb P^n(\mathbb C)$. This looks odd to me, but what is the formal argument? AI: Yes. You should be aware of this lemma: Lemma: Let $k$ be a field (or, more generally, an integral domain) and let $S_1,\cdots, S_n\subseteq k$ be infinite subsets. The only polynomial $p\in k[X_1,\cdots, X_n]$ such that $p(x)=0$ for all $x\in \prod_{j=1}^n S_j$ is the zero polynomial. This proves easily that $\Bbb A^n(\Bbb Z)$ is Zariski dense in $\Bbb A^n(\Bbb C)$, which is in turn Zariski dense in $\Bbb P^n(\Bbb C)$. We may also substitute $\Bbb C$ with any field of characteristic $0$. The proof of the lemma may be found in many places (I think I saw it in Lang's Algebra). However, it's an easy induction. Assume the lemma works for all $1\le n<m$, let $S_1,\cdots,S_m\subseteq k$ be infinite sets and let $p\in k[X_1,\cdots, X_m]\setminus\{0\}$. Then, $$p(X_1,\cdots,X_m)=\sum_{j=0}^h p_j(X_1,\cdots,X_{m-1})X_m^j$$ with $p_h\in k[X_1,\cdots, X_{m-1}]\setminus\{ 0\}$. It is understood that if $m=1$, then $k[X_1,\cdots,X_0]:=k$. For $m>1$, by hypothesis there is some $(\xi_1,\cdots,\xi_{m-1})\in \prod_{j=1}^{m-1}S_j$ such that $p_h(\xi_1,\cdots, \xi_{m-1})\ne 0$. Therefore, $p(\xi_1,\cdots, \xi_{m-1},X_m)$ is a univariate polynomial of degree $h$, and we know that it can have at most $h$ roots. This implies the existence of some $\xi_m\in S_m$ such that $p(\xi_1,\cdots,\xi_{m-1},\xi_m)\ne 0$. $\xi=(\xi_1,\cdots,\xi_m)$ is thus an element of $\prod_{j=1}^m S_j$ such that $p(\xi)\ne 0$, as desired.
H: We have to prove that $H=${$(x_1,...,x_n)\in{\mathbb{R^n}}|a_1x_1+...+a_nx_n=0$} is an hyperplane of $\mathbb{R^n}$. I've got $\space$ $V$ $K$ - vectorial space, and $H$ which is a subspace of $V$. We say that $H$ is a hyperplane when $dimH=n-1$. If we've got $\space a_1,a_2,...,a_n\in{\mathbb{R}}$ which are not all of them null. We have to prove that $H=${$(x_1,...,x_n)\in{\mathbb{R^n}}|a_1x_1+...+a_nx_n=0$} is an hyperplane of $\mathbb{R^n}$. What I understand is that what we have to prove is that the dimension of $H$ is $n-1$. As I know that $H$ is a subspace of $V$ and the dimension of $V$ is n, we know that $dimH\le{n}$. And now, I suppose that I would have to deduce something else owing to the fact that $\space a_1,a_2,...,a_n\in{\mathbb{R}}$ are not all of them null. But I don't know which information to take from there... AI: If you don't know what the rank-nullity theorem is or don't want to use kernels or linear maps, here is a direct argument. Let $ H = \{(x_1,x_2,...,x_n) \in V : a_1x_1 + a_2x_2 + \dots + a_nx_n = 0\}$ where at least one of the $a_i$'s is nonzero. WLOG consider $a_1 \neq 0$; then the equation defining $H$ can be rewritten as $$ x_1 = \frac{-1}{a_1}\sum_{i=2}^na_ix_i.$$ Therefore $\forall (x_1,...,x_n) \in H$: \begin{align*} (x_1,x_2,...,x_n) &= \left(\frac{-1}{a_1}\sum_{i=2}^na_ix_i ,\; x_2\,...,x_n \right) \\ &= x_2(-a_2/a_1,1,0,...,0) + x_3(-a_3/a_1,0,1,0,...,0) + ... + x_n(-a_n/a_1,0,...,0,1).\\ \end{align*} This means $H$ is generated by the vectors $\frac{-a_i}{a_1}e_1 + e_i$ for $i = 2,3,\dots,n.$ Since these vectors are also linearly independent they form a basis of $H$ with $n-1$ elements. Therefore $\dim H = n-1.$
H: sequence of entire functions that converges to $\sqrt{q}$ uniformly on compact subsets of the line Re $(z)=q$ for each positive rational $q$ I'm scheduled to host a prep course for the complex analysis qualifying exam at my university this summer, so I'm going through old qualifying exams to prepare. I'm stuck on the following: Show that there exists a sequence $\{f_n\}$ of entire functions with the following property: For each rational number $q\geq 0$, the sequence converges to $\sqrt{q}$ uniformly on compact subsets of the line $\{z\in\mathbb{C}\,:\,\text{Re }(z)=q\}$. I wish I could share my thoughts on this problem, but I honestly don't even know where to start. I imagine it uses the Mittag-Leffler or Weierstrass factorization theorem in some way but I can't put together how. I also assume that the countability of $\mathbb{Q}$ is exploited in some way, however the square root seems a bit mysterious. Would anyone mind nudging me in the right direction? Thanks! AI: Enumerate the nonnegative rationals as $q_n$. Use Runge's theorem to find $f_n$ so $|f_n(q_k+iy) - \sqrt{q_k}| \le 1/n$ for $k=1\ldots n$ when $|y| \le n$. The square root is a red herring. Any function on the nonnegative rationals would do.
H: Showing/finding sumfunction I will try again asking my question: I have $\displaystyle\sum_{n=0}^{\infty}\frac{n}{n+1}x^n$, for x$\in$R. Then I have used wolframalpha finding the sum function: https://www.wolframalpha.com/input/?i=sum_%7Bn%3D1%7D%5E%7B%5Cinfty%7D+%5Cfrac%7Bn*x%5E%7Bn%7D%7D%7Bn%2B1%7D And with Maple I have reduced the sum function to: $\frac{1}{1-x}+\frac{ln(1-x)}{x}$ for |x|<1. But how can I show it formally that this is the sum function? AI: Note that for $|x|<1$ $$\begin{align} \sum_{n=1}^\infty \frac{nx^n}{n+1}&=\sum_{n=1}^\infty x^n-\frac1x\sum_{n=1}^\infty \frac{x^{n+1}}{n+1}\\\\ &=\frac x{1-x}-\frac1x\int_0^x \sum_{n=1}^\infty t^n\,dt\\\\ &=\frac x{1-x}-\frac1x\int_0^x \frac{t}{1-t}\,dt \end{align}$$ Can you finish now?
H: What do more than or less than sign with a line through them mean? What do these symbols where the arrow point to mean? Does it mean not less then? If that's the case, why do we write it that way and not > if they are the same? AI: $0\not<1$ means ‘$0$ is not less than $1$’, just as $a\ne b$ means ‘$a$ is not equal to $b$’. This is not the same as $0>1$; it is the same as $0\ge 1$, but in this case I suspect that $p$ is $0<1$, and the writer preferred to write the negated relation in a way that made the connection with $\sim p$ (‘not $p$’) obvious
H: How do you convert a number from base $10$ to binary? I don't understand how you get the binary representation of a number. Say we have a number in base $10$, how do you change it into binary? I used the Google math converter as well. . All I know is that $a=a_n2^n+a_{n-1}2^{n-1}+\dotsb+a_0$ AI: You keep dividing your number by $2$, until you can't do it anymore. Say that, for instance, your number is $19$. Then:\begin{align}19&=9\times2+\color{red}1\\9&=4\times2+\color{red}1\\4&=2\times2+\color{red}0\\2&=1\times2+\color\red0\\1&=0\times2+\color{red}1\end{align}and therefore the binary expansion of $19$ is $10\,011$. This works for other bases too.
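A short Python sketch of the repeated-division method (the function name is mine); the binary digits are the remainders read from last to first:

def to_binary(n: int) -> str:
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)       # divide by 2, keep the remainder
        bits.append(str(r))
    return "".join(reversed(bits))

print(to_binary(19))                  # 10011
print(to_binary(19) == bin(19)[2:])   # True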
H: Minimal polynomial of $\varphi:g+(f)\mapsto xg+(f)$ is $f$. Let $K$ be a field and $a$ an element of a field extension of $K$ that is algebraic over $K$. Denote by $f\in K[x]$ the minimal polynomial of $a$ over $K$. Consider the $K$-vector space $V=K[x]/(f)$ and the $K$-linear map $$ \varphi: V\to V,\quad g+(f)\mapsto xg+(f). $$ Show that the minimal polynomial of $\varphi$ is given by $f$. My thoughts: I think we need to find a matrix representation of $\varphi$, then find its minimal polynomial, and show it equals $f$. To do this, we will need a basis for $V$. I know from basic field theory that we can do the following (let $n=\deg(f)$): One way to make a basis: Note that $V\cong K(a)$, so a basis for $V$ is given by $1,a,\ldots,a^{n-1}$. I don't know what to do with this, since the map $\varphi$ is defined in terms of the cosets. Maybe I can transform this basis into some usable form? Any hints? AI: Under the isomorphism $V \cong K(a)$, the coset $x+(f)$ corresponds to $a$, so a much better way to look at this question is to think of $\varphi: K(a) \to K(a)$ as the multiplication-by-$a$ map. Note that $f(\varphi)(v)=f(a)\,v=0$ for any $v \in K(a)$, using the fact that $f$ is the minimal polynomial of $a$. Therefore $f(\varphi)=0$, so the minimal polynomial of $\varphi$ divides $f$; since $f$ is irreducible and the minimal polynomial is nonconstant, it must be $f$ itself. The crucial insight is that while $\varphi$ is multiplication by $a$, the map $f(\varphi)$ is multiplication by $f(a)=0$, so the zero map.
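A small computational illustration of this (my own sketch, not part of the answer, with the concrete choice $f=x^2-2$ over $\mathbb{Q}$ and $a=\sqrt2$): in the basis $\{1,a\}$ of $K(a)$, multiplication by $a$ has the companion matrix of $f$ as its matrix, $f$ annihilates that matrix, and irreducibility forces $f$ to be the minimal polynomial.

```python
from sympy import Matrix, eye, symbols

x = symbols('x')

# f = x^2 - 2 is the minimal polynomial of a = sqrt(2) over Q.
# In the basis {1, a} of Q(sqrt(2)), multiplication by a sends 1 -> a and a -> 2,
# so its matrix is the companion matrix of f:
M = Matrix([[0, 2],
            [1, 0]])

print(M**2 - 2*eye(2))                 # zero matrix: f(M) = 0
print(M.charpoly(x).as_expr())         # x**2 - 2; since f is irreducible over Q and the
                                       # minimal polynomial divides it, the minimal polynomial is f
```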
H: Hartshorne's proof of Castelnuovo's theorem For those of us who have forgotten, Castelnuovo's theorem is the following: Theorem: If $Y$ is a curve on a surface $X$ with $Y \simeq \mathbb{P}^1$ and $Y^2 = -1$, then there is a morphism $f: X \to X_0$ to a smooth projective surface $X_0$ such that $X$ is the blow-up of $X_0$ at some point and $Y$ is the exceptional divisor. A proof of this theorem is given on page 414 of Hartshorne. To formulate my question, note that we will construct $X_0$ using the image of $X$ under a suitable map to projective space. In more detail, our aim is to show that the invertible sheaf $\mathcal{M} : = \mathcal{L}(H + kY)$ is semi-ample. Here, $H$ is some very ample divisor such that $H^1(\mathcal{L}(H))=0$ and $k = H.Y$ is assumed to be $\geq 2$. Problem: Consider the following sequence of sheaves $$0 \longrightarrow \mathcal{L}(H + (i-1)Y) \xrightarrow{ \ \alpha \ } \mathcal{L}(H+iY) \xrightarrow{ \ \beta \ } \mathcal{O}_Y \otimes \mathcal{L}(H + i Y) \to 0.$$ Hartshorne claims this sequence is exact. In trying to verify this, I find that I am not sure how $\alpha$ and $\beta$ are defined. AI: This exact sequence is just $0\to \mathcal{I}_Y\cong\mathcal{O}_X(-Y)\to \mathcal{O}_X\to \mathcal{O}_Y\to 0$ tensored with the line bundle $\mathcal{L}(H+iY)$.
H: Complex analysis: showing a family is normal Show that the family of functions $f_n(z)=z^n,\ n\geq 1$, is normal in $D(0;1)$ but not in any region which contains a point on the unit circle. I tried to show that the spherical derivatives of the $f_n$'s are uniformly bounded so that I can use Marty's theorem, but I could not do it as $n$ goes to infinity. Is there another way, or is it easy to show this in some way I cannot see? AI: $|f_n(z)| \leq 1$ for all $z \in D(0;1)$. If a sequence of holomorphic functions is uniformly bounded on compact subsets, then it is a normal family. (This is Montel's Theorem.) Now suppose we include a point $c$ on the boundary. Consider the compact set $\{z_1,z_2,... \} \cup \{c\}$, where $(z_k)$ is a sequence in the domain such that $|z_k| <1$ and $z_k \to c$. If there were a subsequence of $(f_n)$ which converges uniformly on compact subsets of our new domain, then the limit would be $0$ at each of the points $z_k$ (since $|z_k|<1$ implies $z_k^n \to 0$), but $|f_n(c)|=1$ for all $n$. Hence the limit function is not even continuous. This proves that the family is not normal.
H: A question about derivatives between Euclidean spaces: how should we construct and interpret them? As is known from single-variable calculus, given $X\subseteq\textbf{R}$, a function $f:X\to\textbf{R}$ and an adherent point $x_{0}\in X$ which is also a limit point, we define the derivative of the function $f$ at $x_{0}$ (if it exists) by the limit \begin{align*} L = \lim_{x\to x_{0};x\neq x_{0}}\frac{f(x) - f(x_{0})}{x - x_{0}} \end{align*} Based on it, we can extend the definition to functions between Euclidean spaces. More precisely, given a subset $E\subseteq\textbf{R}^{n}$, an interior point $a\in E$, a function $f:E\to\textbf{R}^{m}$ and a linear transformation $L:\textbf{R}^{n}\to\textbf{R}^{m}$, we say that $f$ is differentiable at $a$ with derivative $L$ if \begin{align*} \lim_{x\to a;x\neq a}\frac{\|f(x) - f(a) - L(x-a)\|}{\|x-a\|} = 0 \end{align*} From the ''algebraic'' point of view, it is kind of natural to set up this definition, because it is equivalent to the single-variable definition. Indeed, assuming the derivative exists, we have that \begin{align*} L = \lim_{x\to x_{0};x\neq x_{0}}\frac{f(x) - f(x_{0})}{x - x_{0}} & \Longleftrightarrow \lim_{x\to x_{0};x\neq x_{0}}\left(\frac{f(x) - f(x_{0})}{x - x_{0}} - L\right) = 0\\\\ & \Longleftrightarrow \lim_{x\to x_{0};x\neq x_{0}}\frac{f(x) - f(x_{0}) - L(x-x_{0})}{x-x_{0}} = 0\\\\ & \Longleftrightarrow \lim_{x\to x_{0};x\neq x_{0}}\frac{|f(x) - f(x_{0}) - L(x-x_{0})|}{|x-x_{0}|} = 0 \end{align*} Moreover, it is customary to say the derivative $L$ is the linear transformation which best (linearly) represents the behavior of $f$ near the point. But what about the second, third and $n$-th derivatives? How should we construct and interpret them? Could someone help me to better understand the notion of derivative between Euclidean spaces, either through analytic or geometric arguments? I am new to this, so any comment, contribution or explanation is welcome. AI: Pretty much all of the results of single-variable calculus can be generalized to higher dimensions, and for that matter, most of the calculus on $\Bbb{R}^n$ can easily be generalized to Banach spaces; take a look at Loomis and Sternberg's Advanced Calculus (chapter 3 mainly), or Henri Cartan's Differential Calculus texts. Let $V,W$ be (think finite-dimensional if you wish) Banach spaces over $\Bbb{R}$, let $A \subset V$ be open and let $f:A \to W$ be a map. We say $f$ is differentiable at a point $a \in A$ if there is a continuous linear map $L:V \to W$ such that \begin{align} \lim_{h \to 0}\dfrac{\lVert f(a+h) - f(a) - L(h) \rVert_W}{\lVert h\rVert_V} &= 0. \end{align} In this case, $L$ is unique, and we denote it by the symbol $Df(a)$, or $Df_a$, or $f'(a)$, or $df(a)$, or $df_a$... or any other notation. Now, if $f:A \to W$ is differentiable at every $a \in A$, then we get a new map $Df:A \to \mathcal{L}(V,W)$, which assigns to each $a \in A$ the derivative $Df_a$. Note that since $V$ and $W$ are vector spaces, and $W$ is complete, we can equip $\mathcal{L}(V,W)$ with a complete norm as well. (In the finite-dimensional case, all norms are equivalent, i.e. they generate the same topology, and the spaces are always complete, so you can ignore these technical details if you wish.) In other words, $Df$ maps the open subset $A \subset V$ of a Banach space into the Banach space $\mathcal{L}(V,W)$. We can now ask if this map itself is differentiable at a point $a$ (using the same definition as above).
In this case, we can take the derivative $D(Df)_a$. Said differently, we're considering the map $g:A \to \mathcal{L}(V,W)$ given by $g(a) = Df_a$, and asking if $g$ is differentiable. In this case, we denote \begin{align} Dg_a := D(Df)_a \equiv D^2f_a \end{align} We call $D^2f_a$ the second derivative of $f$ at $a$. What kind of object is it? Well, \begin{align} Dg_a = D^2f_a \in \mathcal{L}(V, \mathcal{L}(V,W)) \end{align} in words, it is a linear map from $V$ into $\mathcal{L}(V,W)$. One can show that there is an (isometric) isomorphism \begin{align} \mathcal{L}(V, \mathcal{L}(V,W)) \cong \mathcal{L}^2(V;W), \end{align} where the RHS is the (Banach) space of continuous bilinear maps $V \times V \to W$. Because of this isomorphism, we usually think of $D^2f_a$ as a bilinear map $V \times V \to W$, or a "quadratic form". Also, we never explicitly write the isomorphism in the notation; we just keep in mind that there is an isomorphism, and decipher from context which interpretation is intended. So, now we have the second derivative $D^2f:A \to \mathcal{L}^2(V;W)$. To construct the third derivative, we ask whether this map is differentiable. Then, once again, by composing with appropriate isomorphisms, you'll see that for each $a \in A$, the third derivative $D(D^2f)_a \equiv D^3f_a$ is a continuous trilinear map $V \times V \times V \to W$, i.e. an element of $\mathcal{L}^3(V;W)$. In general, the $n^{th}$ derivative (after using all the isomorphisms) will be $D^nf:A \to \mathcal{L}^n(V;W)$, so for each $a \in A$, $D^nf_a \in \mathcal{L}^n(V;W)$ will be an $n$-fold multilinear map. So, really, the definitions of the second and higher derivatives are the same: to get the second derivative, you differentiate the first derivative. To get the third derivative, you differentiate the second derivative. And so on. The only thing to keep track of is that the target space changes with each successive derivative. However, if we specialize to the case $V = W =\Bbb{R}$, then for every $n$, $\mathcal{L}^n(\Bbb{R};\Bbb{R}) \cong \Bbb{R}$, which is why in single-variable calculus we are able to think of higher derivatives at a point as being just numbers. But if the domain is not $\Bbb{R}$ anymore, then in order to keep track of all the information about the various directions along which the function can change, we need to invoke much more linear algebra. Two of the most important results about these higher derivatives are the following. First, with notation as above, for each $n$, $D^nf_a$ is a symmetric multilinear map. In other words, for every $\xi_1, \dots, \xi_n \in V$ and any permutation $\sigma: \{1, \dots, n\} \to \{1, \dots, n\}$, we have \begin{align} (D^nf)_a[\xi_{\sigma(1)}, \dots, \xi_{\sigma(n)}] &= (D^nf)_a[\xi_1, \dots, \xi_n]. \end{align} This is the "proper" way of thinking about the classic theorem on equality of mixed partial derivatives. Second, we have an analogue of Taylor's theorem: \begin{align} f(a+h) &= f(a) + Df_a[h] + \dfrac{1}{2!}D^2f_a[h,h] + \dots + \dfrac{1}{k!}D^kf_a[\underbrace{h, \dots, h}_{\text{$k$ times}}] + o(\lVert h \rVert^{k}) \end{align} (statements and proofs of Taylor's theorem in this setting, with and without an explicit bound on the remainder term, can be found elsewhere on this site). So, even in this multivariable case, the higher derivatives of the function $f$ give you the higher-order approximations to the function. This is of course very similar to the single-variable case.
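To make the symmetry statement concrete in a small case (a numerical sketch of my own, not part of the answer, with an arbitrarily chosen smooth map $f:\mathbb{R}^2\to\mathbb{R}$): the second derivative $D^2f_a$ is the Hessian viewed as a bilinear map, and its symmetry, i.e. the equality of mixed partials, is visible already in a finite-difference approximation.

```python
import numpy as np

def f(p):
    # an arbitrary smooth map R^2 -> R, chosen purely for illustration
    x, y = p
    return np.sin(x) * y**2 + x * y

def second_derivative(f, a, h=1e-4):
    """Approximate D^2 f_a by central finite differences; returned as the matrix of the
    bilinear map (u, v) -> D^2 f_a [u, v] = u^T H v."""
    n = len(a)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.zeros(n), np.zeros(n)
            ei[i], ej[j] = h, h
            H[i, j] = (f(a + ei + ej) - f(a + ei - ej)
                       - f(a - ei + ej) + f(a - ei - ej)) / (4 * h**2)
    return H

a = np.array([0.7, -1.2])
H = second_derivative(f, a)

u, v = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
print(u @ H @ v, v @ H @ u)              # the two values agree: D^2 f_a [u, v] = D^2 f_a [v, u]
print(np.allclose(H, H.T, atol=1e-5))    # True: equality of mixed partial derivatives
```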
H: Fourier Series Analysis Decomposition If I understand Fourier series correctly, the objective is to model a periodic function as a linear combination of sinusoidal functions with different amplitudes, frequencies, etc. Since the collection $\{\frac{\cos(nx)}{\pi}\}_{n=0}^{\infty}$ is orthogonal in $L^2([-\pi,\pi])$, essentially I can find the coordinates of $f(x)$ with respect to this family by taking the inner product $$C_n=<f(x),\frac{\cos(nx)}{\pi}>=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos(nx)dx$$ to form the series: $$\sum_{n=0}^{\infty}C_n\cos(nx)$$ Similarly we can find the projection onto the orthogonal collection $\{\frac{\sin(nx)}{\pi}\}_{n=1}^{\infty}$ to get $$f(x)=\sum_{n=0}^{\infty}C_n\cos(nx)+\sum_{n=1}^{\infty}B_n\sin(nx)$$ If I am mistaken in any way please correct me. My question is: why does the projection have to be onto both $\sin(nx)$ and $\cos(nx)$? Does this have to do with the fact that $\sin(nx)$ is an odd function and $\cos(nx)$ is an even function, so that their linear combinations can replicate more complicated periodic functions? That is, $\cos(nx)$ is well equipped to model even periodic functions and $\sin(nx)$ is well equipped to model odd periodic functions, but any periodic function can be uniquely decomposed into odd and even parts, therefore we need both to model any periodic function. Finally, when the projection is done using the exponential form, that is, the collection $\{e^{-inx}\}$, why does the Fourier series have to be two-sided, that is, $$\sum_{-\infty}^{\infty}A_ne^{-inx}$$ AI: Any periodic function $f$ can be uniquely decomposed into an odd part $f_o$ and an even part $f_e$ as below: $$ f_o(x)=(f(x)-f(-x))/2, $$ and $$ f_e(x)=(f(x)+f(-x))/2 $$ Not only that, a purely odd function cannot be represented as a linear combination of even functions, and vice versa. So just sines or just cosines are not enough to represent all periodic functions. As for the other question, think of the formulae $$ \cos(x)=\frac{e^{ix}+e^{-ix}}{2} $$ and $$ \sin(x)=\frac{e^{ix}-e^{-ix}}{2i} $$ and it all becomes clear.
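A small numerical illustration of the answer's point (my own sketch; the sampled function below is an arbitrary choice that is neither even nor odd): the cosine coefficients recover only the even part of the function, and the full reconstruction needs the sine coefficients as well.

```python
import numpy as np

# Sample a 2*pi-periodic function that is neither even nor odd
M = 4000
x = np.linspace(-np.pi, np.pi, M, endpoint=False)
dx = 2 * np.pi / M
f = np.exp(np.sin(x)) + 0.5 * x**2            # arbitrary illustrative choice

def coeff(basis):
    # approximate (1/pi) * integral_{-pi}^{pi} f(x) * basis(x) dx by a Riemann sum
    return np.sum(f * basis) * dx / np.pi

N = 25
a0 = np.sum(f) * dx / (2 * np.pi)                      # mean value (constant term)
A = [coeff(np.cos(n * x)) for n in range(1, N)]        # cosine coefficients
B = [coeff(np.sin(n * x)) for n in range(1, N)]        # sine coefficients

cos_only = a0 + sum(A[n - 1] * np.cos(n * x) for n in range(1, N))
both     = cos_only + sum(B[n - 1] * np.sin(n * x) for n in range(1, N))

print(np.max(np.abs(both - f)))       # small truncation error: both families recover f
print(np.max(np.abs(cos_only - f)))   # of order 1: cosines alone miss the odd part of f
```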
H: Is $a$ assigned to $b$ or is $b$ assigned to $a$? Being very interested in relations, functions and the best way to teach them at a set-theoretic level, the following is something that has begun to puzzle me. Relations. Let $A$ and $B$ be sets such that $a \in A$ and $b \in B$. Now, let $R \subset A \times B$ be a binary relation from the set $A$ to the set $B$. I have been reading the following article on relations not only to further my education on them, but also to put together a study guide for anyone who is new to set theory and wants a solid foundation. As I was reading, I came across the following remark: “Notice that a relation is simply a subset of $A \times B$. If $(a,b) \in R$, where $R$ is some relation from $A$ to $B$, we think of $a$ as being assigned to $b$.” $(1)$ Now, I didn’t think too much about the idea of an element $a$ being assigned to an element $b$ by the relation $R$. The reason is that this appeared synonymous with: if $(a,b) \in R$ then $a$ is related to $b$ (by definition of a relation). However, I began to think about it more when I read about functions in another source. Functions. Upon reading this article about functions in order to extend my citations, I came across the following remark after the definition of a function was stated. Just know that $f: A \to B$ with $a \in A$ and $b \in B$: “It is common usage to say “$f$ maps $A$ to $B$”. This expression arises from the usual arrow diagram where each element of $A$ is joined by an arrow to the element of $B$ assigned to it. Unfortunately, this tends to lead to the confusion that the elements of $A$ are somehow assigned to the elements of $B$, which is backwards! It is the elements of $B$ that are assigned to the elements of $A$.” $(2)$ As you can see, the bolded statement in remark $(2)$ is the opposite of the bolded statement in remark $(1)$. With that said: which is correct to say? Is an element $a$ assigned to an element $b$, or is an element $b$ assigned to an element $a$? I think what has stumped me on this is that in the definition of a function, we see the vernacular “$a$ is mapped to $f(a)$”. So, naturally, I have always believed it was the bolded portion of remark $(1)$ that was correct, as it seemed too similar to be wrong. Not to mention the definition of a relation, which you learn before functions: “if $(a,b) \in R$ then $a$ is related to $b$”. The only other time I have seen a variation of the bolded portion of remark $(2)$ is in the following definition of a function in some textbooks and lecture notes: for all $a \in A$ there is assigned a unique $b \in B$ such that $(a,b) \in f$. Please give an in-depth answer and not just an opinion. AI: As a professional mathematician who has taught set theory, this is my answer: In the context of a general relation, I would never use the word "assigned" (in either direction). The fact that any $a\in A$ can be related to many $b\in B$, and any $b\in B$ can have many $a\in A$ related to it, makes the word "assigned" sound off-the-mark to me. For example, $<$ is a perfectly good relation on $\Bbb N$. Does it sound right to say "under the less-than relation, $1$ is assigned to $2$, and $1$ is also assigned to $3$, and $2$ is assigned to $3$, ..."? Not to me. In the context of a function, I would probably avoid the word "assigned" as well. Mathematically my instinct is to say that the function values in the codomain $B$ are assigned to the elements of the domain $A$.
But on the other hand, in real-life examples of functions such as "every student is assigned to an advisor", the usage is that elements of the domain (students) are assigned to elements of the codomain (advisors). The mathematical (set-theoretic) definitions of relation and function are unambiguous, and so the question of what "assigned" means is not crucial to their definitions—it's a question of how we discuss their definitions in English. Simply avoiding the word "assigned" altogether solves the problem without losing any ability to work with functions and relations.
H: Definition of the equality referred to in the definition of a partial order A partial order $≤$ on a set $P$ must satisfy the following properties: Reflexivity: for all $a \in P$, $a ≤ a$; Anti-symmetry: for all $a,b \in P$, $a ≤ b \land b ≤ a → a = b$; Transitivity: for all $a,b,c \in P$, $a ≤ b \land b ≤ c → a ≤ c$. But regarding the anti-symmetry, what equivalence relation does $=$ mean? Does it have to be the finest equivalence relation on $P$, or can it be coarser? Say, if $P$ is the set of triangles on the Euclidean plane $\mathbb{R}^2$: usually, $=$ denotes equality of the areas of the triangles. There is also $≡$, which denotes congruence of the triangles; this is also an equivalence relation. At the extreme, triangles could be considered equivalent only if they have the exact same set of coordinates of vertices. It is practical to define an order over $=$: it would sort triangles by their area. Defining an order over the extreme equivalence would give a lexicographical ordering of triangles. And I don't see a practical order relation over $≡$. So which equivalence relation does a partial order refer to? AI: $a=b$ means $a$ and $b$ are the same element of $P$. If you want to put a partial order on the set of equivalence classes of $P$ (under some other equivalence relation) then you can do that. Then anti-symmetry has the same interpretation, but in the set of equivalence classes.
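As a small concrete illustration of the answer (my own sketch, using divisibility on a finite set of integers as the example): when the three axioms are checked computationally, the $=$ in anti-symmetry is just identity of the elements of $P$, here Python's `==` on integers, not any coarser equivalence.

```python
from itertools import product

P = range(1, 13)
leq = lambda a, b: b % a == 0     # the divisibility relation "a divides b" on P

reflexive     = all(leq(a, a) for a in P)
antisymmetric = all(not (leq(a, b) and leq(b, a)) or a == b    # '=' means: the same element
                    for a, b in product(P, P))
transitive    = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                    for a, b, c in product(P, P, P))

print(reflexive, antisymmetric, transitive)   # True True True: divisibility is a partial order on P
```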