H: In what type of series can we apply convergence tests? Let $(a_n)_n \in \mathbb{R}$ and let $(z_n)_n \in \mathbb{C}$ be two sequences, and let $f_n(x) \in \mathbb{R}$ and $g_n(z) \in \mathbb{C}$ be two sequences of functions. To check if $\sum a_n \in \mathbb{R}$ converges we can use tests such as the root test, the ratio test, the integral test, the comparison test, etc... My questions are the following: 1 - Can we apply those tests to the series $\sum z_n \in \mathbb{C}$? I've learned that we can always write $z_n = x_n + iy_n$ getting that: $\sum z_n = \sum x_n + i \sum y_n$ and we study the convergence of $\sum z_n$ by studying the convergence of $\sum x_n$ and $\sum y_n$. But those 2 series are also series of real valued sequences. My question is: can we apply the tests we apply to these real valued series to the complex valued series directly? 2 - Can we apply those tests to $\sum f_n(x)\in\mathbb{R}$? What about $\sum g_n(z)\in\mathbb{C}$? Can we apply the same test we use for regular series to series of functions? Because if we fix an $x$ or $z$ this basically becomes a regular series where $x$ or $z$ is just a constant. AI: 1 - As long as you don't rely on the order of $\mathbb{R}$, i.e., restrict your discussion to absolute convergence, you're good. You can apply those methods even if your series are formed by elements in a Banach space. 2 - As you said, if you choose some $x$ or $z$ beforehand, you can apply the methods for numerical series. But there are tests tailored specifically to series of functions, such as the Weierstrass M-test and an analogue of Dirichlet's theorem, that are helpful to detect uniform convergence, which is beautiful.
H: Show that there exist $a_1,\ldots, a_{2n-1}$ such that $ a_{2n-1}J^{2n-1}+\cdots+a_1 J=I_n,$ where $J$ is a Jordan matrix Let $J\in\mathbb{C}^{n\times n}$ be a Jordan normal form and assume that ${\rm tr~}J<2n$. Prove or disprove that there exist $a_1,\ldots, a_{2n-1}\in\mathbb{R}$ such that \begin{equation} a_{2n-1}J^{2n-1}+a_{2n-2}J^{2n-2}+\cdots+a_1 J=I_n. \end{equation} Any help is appreciated AI: The statement is false. For example, any Jordan matrix with zeros on the diagonal satisfies the condition ${\rm tr~}J=0<2n$, but no such coefficients exist: such a $J$ is nilpotent, so any polynomial in $J$ with zero constant term is again nilpotent and therefore cannot equal $I_n$.
H: Understanding some points in the proof that the Thomae function is continuous at irrationals (pg.74 in Petrovic book) Here is the proof as it is given in the book: My questions are: 1- Why the author assumed that $a \in (0,1),$ what about the case of $a=0$? 2- Why if $f(x) < \epsilon$ then $|f(x)| < \epsilon$ in our case here? Could anyone help me in answering those questions please? AI: The author says that he is only considering the cases where $a$ is an irrational number; this is why he picks $a \in (0,1)$ (in particular, $a=0$ is rational, so it is excluded). Well, since $f(x) \geq 0$ for any $x$, $f(x) < \varepsilon$ if and only if $-\varepsilon < f(x) < \varepsilon$, that is, $f(x) < \varepsilon$ if and only if $|f(x)| < \varepsilon$.
H: Addressing the probability of a category as a whole I think this is rather an English language question, but I've asked this in ell.sx, and a person there insists that this is a concern of mathematics. Let's say I have $k_g$ green, and $k_r$ red balls. I want to select one, randomly, but with a bias towards the red balls, as follows: Each of the $k_g$ green balls has $p_g$ probability of getting chosen. Each of the $k_r$ red balls has $p_r$ probability of getting chosen. Obviously (I hope), $k_g p_g + k_r p_r = 1$. Now, without giving any of the lengthy explanation above, and without saying $P_g = k_g p_g$, I want to summarize all this information with the following paragraph: The bag I have consists of $k_g$ green, and $k_r$ red balls. The setup is so arranged that the probability of choosing a green ball is $P_g$. Same color balls have the same probability of getting chosen. Is the part written in bold in the above paragraph at all ambiguous? If it is, how else should I say it? To clarify: What I'm trying to say is (1) "the chances of a selected ball being green is $P_g$", and not (2) "each green ball has a $P_g$ chance of getting chosen". I find the sentence (1) much more clear, but it is rather a weird way to put it. AI: I would consider your phrasing ambiguous; both interpretations (1) and (2) are plausible readings without more context. I think your version (1) (or a minor variation of it) is a perfectly good phrasing to use, actually. I would move to the forefront the condition that different balls can have different probabilities, though, since typically when describing this sort of situation, it is assumed that all balls are equally likely. So, I might write your paragraph as something like the following: The bag I have consists of $k_g$ green, and $k_r$ red balls. When I draw a ball from the bag, balls of the same color have the same probability of being chosen, but balls of different colors may have different probabilities. The probability that the selected ball is green is $P_g$. Another phrasing you could use is something like "the total probability of any green ball being chosen", where the word total clarifies that you mean the cumulative probability for all the green balls together, instead of one at a time.
H: Showing that the differential is an immersion If $f: X \rightarrow Y$ is an immersion of smooth manifolds, then show that $df: TX \rightarrow TY$ is also an immersion. The definition of immersion(when dim$X <$ dim$Y$) that I have is that for $f: X \rightarrow Y$, $f$ is an immersion if $df_{x}: TX \rightarrow TY$ is injective. So, for my problem I suppose I would have to show that the differential of $df$ is also injective. But, how do I go about showing that? AI: Here's a hint: Note that $f:X \to Y$ is an immersion if and only if for any $x \in X$, you can find chart maps $(U,\alpha)$ and $(V, \beta)$ such that $x \in U$ and $f(x) \in V$ and such that $\beta \circ f \circ \alpha^{-1} = \iota$ is the inclusion mapping from $\Bbb{R}^{\dim X} \to \Bbb{R}^{\dim Y}$. The "if" part is trivial, and the "only if" can be deduced as a consequence of the inverse/implicit function theorem. Now, $d\beta \circ df \circ (d\alpha)^{-1} = d \iota$, so just try to verify that $d \iota$ can be regarded as an inclusion mapping between (open subsets of) $\Bbb{R}^{2 \dim X} \to \Bbb{R}^{2 \dim Y}$. Once you do this, by the above characterization of immersions, you'll be done. Basically the idea is that the theorem is obviously true for the inclusion mapping $\iota$ of vector spaces. But locally (by the implicit function theorem), every immersion "looks like" such an immersion. A similar idea works with submersions.
H: Efficient methods to calculate incomplete beta $B[a,b;x]$ for $b=0$ I am looking for an efficient numerical method (or a module) to calculate the incomplete $\beta-$function for $b=0$. e.g. https://www.wolframalpha.com/input/?i=incomplete+beta%5B4%2F5%2C1.5%2C0.0%5D+ Most modules e.g. scipy.special.betainc in Python run into problems because they try to calculate it via computing gamma[0]. This is far from my domain of knowledge. AI: "Efficient" depends on how much precision you need. Note $$B_x(a,b) = \int_{t=0}^x t^{a-1} (1-t)^{b-1} \, dt$$ so with $b = 0$ we have the series expansion $$B_x(a,0) = \int_{t=0}^x \sum_{k=0}^\infty t^{k+a-1} \, dt = \sum_{k=0}^\infty \frac{x^{k+a}}{k+a}.$$ If $a$ is a positive integer, this series can be computed exactly as the difference of a natural logarithm minus a finite number of terms. When $a$ is not a positive integer, then convergence is most rapid when $x$ is close to $0$. If $x$ is close to $1$, it is better to compute instead $$\begin{align*}B_x(a,0) &= \int_{t=0}^x \frac{1}{1-t} - \sum_{k=0}^\infty \binom{a-1}{k+1} (t-1)^k \, dt \\ &= -\log(1-x) - \sum_{k=0}^\infty \binom{a-1}{k+1} \frac{(-1)^k + (x-1)^{k+1}}{k+1}. \end{align*}$$ The number of terms needed depends on the desired precision, but in many cases, you do not need to compute a lot unless $a$ is extremely large, in which case the binomial series will converge slowly due to the oscillatory summand.
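A minimal Python sketch of the first series, using example values $a=1.5$ and $x=0.8$ (these are illustrative assumptions, not taken from the question), cross-checked against direct numerical quadrature of the integrand $t^{a-1}/(1-t)$:

```python
import numpy as np
from scipy import integrate

def incbeta_b0_series(a, x, tol=1e-14, max_terms=100000):
    """Approximate B_x(a, 0) = sum_{k>=0} x**(k+a) / (k+a), truncating once terms are tiny."""
    total = 0.0
    for k in range(max_terms):
        term = x ** (k + a) / (k + a)
        total += term
        if term < tol:
            break
    return total

a, x = 1.5, 0.8  # example values; convergence of the series slows as x approaches 1

series_value = incbeta_b0_series(a, x)
quad_value, _ = integrate.quad(lambda t: t ** (a - 1) / (1 - t), 0, x)

print(series_value, quad_value)  # the two values should agree closely for these inputs
```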
H: Orthogonality relation of eigenvectors for a self-adjoint operator So everyone knows eigenvectors corresponding to different eigenvalues are orthogonal to each other, given that the operator is self-adjoint. If we have a self-adjoint operator, say $L$, is it possible that $\exists u, v$ such that $Lu=\lambda u$, $Lv=\lambda v$ and $\langle u, v\rangle=0$. In other words, we have eigenvectors with the same eigenvalues to $L$ and they are still orthogonal? This is motivated by considering the angular momentum operators in Quantum Mechanics, I was wondering if there is a simpler example in Linear Algebra. AI: Just take $L$ to be the identity operator on a finite dimensional Hilbert space (inner product space) and use an orthonormal basis: any two distinct basis vectors are orthogonal eigenvectors with the same eigenvalue $1$. In general, if an eigenspace has dimension greater than $1$ then it is always possible: just take two distinct vectors from an orthonormal basis of that eigenspace.
H: Evaluating $\sum_{n=1}^\infty\frac{1}{4n(2n+1)}$ How to evaluate this sum, derived from "Lockdown math" by 3Blue1Brown? $$\sum_{n=1}^\infty\frac{1}{4n(2n+1)}$$ AI: $$\frac{1}{4n(2n+1)}=\frac{1}{4n}-\frac{1/2}{2n+1}=\frac{1}{2}\left(\frac{1}{2n}-\frac{1}{2n+1}\right)$$ Hence $$\sum_{n=1}^{\infty} \frac{1}{ 4n(2n+1)} =\frac 12 \sum_{n=1}^{\infty} \left(\frac{1}{2n}-\frac{1}{2n+1}\right)= \frac 12\left(1-\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\right)=\ldots $$ where $${\displaystyle \ln(1+x)=\sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n}}x^{n},\quad \ln(2)=\sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n}}}$$
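Completing the telescoping computation gives the value $\tfrac12(1-\ln 2)$. A quick numerical sanity check (a sketch, not part of the original answer):

```python
import math

# partial sum of 1 / (4n(2n+1)) versus the closed form (1 - ln 2) / 2
partial = sum(1 / (4 * n * (2 * n + 1)) for n in range(1, 200001))
closed_form = (1 - math.log(2)) / 2

print(partial, closed_form)  # both are approximately 0.1534
```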
H: Quantifier question $!\exists x ! \exists y \forall w(w^2>x-y)$ I have a question about the following quantified sentence if it is true or false. $!\exists x ! \exists y \forall w(w^2>x-y)$ for the real numbers I think this this is true because if take two negative numbers $x<y$ then it will work Like if you have $-8-(-7)$ $x=-8$ $y=-7$ AI: The question asks whether there is a unique pair $x, y$ such that the square of any real number is greater than the difference $x - y$. This is clearly false: there are infinitely many pairs $x, y$ such that $x - y < 0$ (just choose any pair such that $x < y$) so that, for every real number $w$, we have $w^2 \geq 0 > x - y$.
H: The area of a triangle determined by two diagonals at a vertex of a regular heptagon In a circle of diameter 7, a regular heptagon is drawn inside of it. Then, we shade a triangular region as shown: What’s the exact value of the shaded region, without using trigonometric constants? My attempt I tried to solve it with the circumradius theorem : $A=(abc)/(4R)$, where $a$, $b$, and $c$ are the three sides, and $R$ is the circumradius of the triangle. However, I needed to find the exact value of $\cos(5\pi/14)$, $\cos(4\pi/7)$, and $\sin(5\pi/14)$. Finally, I found an explicit formula for this particular triangle, but the proof was missing. You can find the formula in Wikipedia's "Heptagonal triangle" entry. AI: Expressing the sides $a,b,c$ via the sines of corresponding central angles one obtains: $$ A=\frac{abc}{4R}=2R^2\sin\frac\pi7\sin\frac{2\pi}{7}\sin\frac{4\pi}7.\tag1 $$ For a product of sines we have the following theorem: $$ \prod_{0<m_i<n}2\sin\frac{\pi m_i}{n}=n. \tag2 $$ Therefore: $$2^6\sin\frac\pi7\sin\frac{2\pi}{7}\sin\frac{3\pi}7\sin\frac{4\pi}{7}\sin\frac{5\pi}7\sin\frac{6\pi}{7}=\left(8\sin\frac\pi7\sin\frac{2\pi}{7}\sin\frac{4\pi}7\right)^2=7.$$
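Since the product identity gives $8\sin\frac\pi7\sin\frac{2\pi}{7}\sin\frac{4\pi}7=\sqrt7$, formula (1) becomes $A=\frac{\sqrt7}{4}R^2$, which for diameter $7$ (so $R=7/2$) is $\frac{49\sqrt7}{16}$. A short numerical check (a sketch, not part of the original answer):

```python
import math

R = 7 / 2  # circumradius, given the diameter is 7
product_form = 2 * R**2 * math.sin(math.pi / 7) * math.sin(2 * math.pi / 7) * math.sin(4 * math.pi / 7)
closed_form = math.sqrt(7) / 4 * R**2  # = 49*sqrt(7)/16

print(product_form, closed_form)  # both are approximately 8.1027
```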
H: Deep doubt on a double surface integral I don't understand how to proceed with an exercise. I will write down what I have done so far. The exercise is: Evaluate the following integral $$\iint_{\Sigma}\dfrac{1}{x^2+y^2}\ \text{d}\sigma $$ Where $\Sigma = \{(x, y, z): x^2+ y^2 = z^2+1,\ 1\leq z \leq 2 \}$ My attempt I wrote $$z = \sqrt{x^2+y^2-1} ~~~~~~~ \text{with} ~~~~~~~ 2 \leq x^2+y^2 \leq 5$$ Hence I found a parametrisation like $$\phi: \begin{cases} x = t \\ y = s \\ z = \sqrt{t^2+s^2-1} \end{cases} $$ Now I have to calculate the vector product of the partial derivatives gradients (I know I am expressing myself in a bad language, I apologise): $$\frac{\partial \phi}{\partial t} = \left(1,\ 0,\ \dfrac{t}{\sqrt{t^2+s^2-1}}\right)$$ $$\frac{\partial \phi}{\partial s} = \left(0,\ 1,\ \dfrac{s}{\sqrt{t^2+s^2-1}}\right)$$ Hence $$\frac{\partial \phi}{\partial t} \wedge \frac{\partial \phi}{\partial s} = \text{det}\begin{pmatrix} i & j & k \\ 1 & 0 & \dfrac{t}{\sqrt{t^2+s^2-1}} \\ 0 & 1 & \dfrac{s}{\sqrt{t^2+s^2-1}} \end{pmatrix} $$ Which leads me to $$\bigg|\bigg| \frac{\partial \phi}{\partial t} \wedge \frac{\partial \phi}{\partial s}\bigg|\bigg| = \sqrt{\dfrac{2(t^2+s^2)-1}{t^2+s^2-1}} $$ Now I should evaluate the integral but I don't know how to proceed since I would get $$\iint_{\Sigma} \dfrac{1}{s^2+t^2} \sqrt{\dfrac{2(t^2+s^2)-1}{t^2+s^2-1}}\ \text{d}t\ \text{d}s$$ And I cannot continue... I should get $3\pi$ as a result. Can anyone help me? Thank you! AI: It is easy if you use the fact that $\Sigma$ has rotational symmetry around the $z$ axis: $$\iint_\Sigma\frac{d\sigma}{x^2+y^2}=\iint_\Sigma\frac{d\sigma}{z^2+1}=\int_1^2\frac{2\pi\sqrt{z^2+1}}{z^2+1}\sqrt{1+\bigg(\frac{d(\sqrt{z^2+1})}{dz}\bigg)^2}dz$$$$=2\pi\int_1^2\frac{\sqrt{2z^2+1}}{z^2+1}dz.$$ However, I don't think $3\pi$ is the value of the integral. When I did numerical calculation of this integral, it gave me approx. $4.5595$, which clearly is not $3\pi$. Also Mathematica yields : $$2\pi\int_1^2\frac{\sqrt{2z^2+1}}{z^2+1}dz=\pi\bigg(2\sqrt{2}\Big(\operatorname{arcsinh}(2\sqrt{2})-\operatorname{arcsinh}(\sqrt{2})\Big)+\log((2+\sqrt{3})/5)\bigg)$$
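A quick numerical check of the answer's single-variable integral (a sketch using scipy, not part of the original answer) reproduces both the value $\approx 4.5595$ and the closed form, and confirms the result is not $3\pi$:

```python
import math
from scipy import integrate

# numerical value of 2*pi * integral_1^2 sqrt(2z^2+1)/(z^2+1) dz
value, _ = integrate.quad(lambda z: math.sqrt(2 * z**2 + 1) / (z**2 + 1), 1, 2)
surface_integral = 2 * math.pi * value

# the closed form quoted in the answer (with inverse hyperbolic sine)
closed_form = math.pi * (2 * math.sqrt(2) * (math.asinh(2 * math.sqrt(2)) - math.asinh(math.sqrt(2)))
                         + math.log((2 + math.sqrt(3)) / 5))

print(surface_integral, closed_form, 3 * math.pi)  # about 4.5595, 4.5595, 9.4248
```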
H: Prove that subset of complex plane is open or is closed (or both or neither ) I have subset $$D = \{z: \text{Re}(z) > 2, \text{Im}(z) \leq1\}$$ So I think that the set is not open because it contains boundary points, but how do I prove this? Let's say it's open. So by definition we have: For all points of the set, they are contained together with some ball. Let's take an arbitrary point (with the property $\text{Im}(z_0)=1$), $$z_0 \in D$$ Let's look at the ball with the center at our point. $$B_r(z_0) = \{z: |z - z_0|<r\}$$ Where $r>0$ is a positive number. So... I have a problem with building a contradiction. Intuition tells us that if a number $r$ has $\text{Im}(r) > 0$, then we have $z_0 + r = c \notin D$. Is it close? Help me develop intuition and technique for this kind of task. (proving a subset of the complex plane is open or closed) AI: You are on the right track to proving $D$ is not open. It suffices to take $ \ z=z_{0}+\frac{i}{2}r \ $, then $ \ |z-z_{0}|=\frac{r}{2}<r \ $ and $ \Im(z)=1+\frac{r}{2}>1. $ $A \subset \mathbb{C} \ $ is closed if and only if the limit of every convergent sequence $ (z_{n})_{n} \subset A $ is also in $A$. Take $ \ z_{n}=2+\frac{1}{n} \ $, then $z_{n}\in D$ and $ \lim_{n\to \infty}z_{n}=2 \notin D $. Hence $D$ is not closed. Note that the only subsets of $\mathbb{C}$ which are both open and closed are $ \emptyset$ and $ \mathbb{C} $, since $\mathbb{C}$ is connected.
H: $P(X_1 We have two independent random variables $X_1$, $X_2$, with law $Exp(\rho_i)$ respectively. I want to find the probability of the following event $\{X_1<X_2\}$. Is the following correct? $P(X_1<X_2)=\int_0^\infty\int_0^{x_2=x_1}\rho_1\rho_2 e^{-(\rho_1x_1+\rho_2x_2)}dx_2dx_1=\int_0^\infty-\rho_1[e^{-(\rho_1x_1+\rho_2x_2)}]_0^{x_2=x_1}dx_1=-\rho_1[-\frac{1}{\rho_1+\rho_2}e^{-(\rho_1+\rho_2)x_1}+\frac{1}{\rho_1}e^{-\rho_1x_1}]^\infty_0=1-\frac{\rho_1}{\rho_1+\rho_2}$ Is this solution correct? AI: The answer is $\frac 1 2$ if $\rho_1=\rho_2$, not in general. ALlso you computed $P(X_2 <X_1)$ instead of $P(X_1<X_2)$. So the correct answer is $\frac {\rho_1} {\rho_1+\rho_2}$.
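A Monte Carlo sanity check of the formula $P(X_1<X_2)=\frac{\rho_1}{\rho_1+\rho_2}$ (a sketch; the rates $\rho_1=2$, $\rho_2=3$ are arbitrary example values, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
rho1, rho2 = 2.0, 3.0                          # example rates (assumption)
n = 10**6

x1 = rng.exponential(scale=1 / rho1, size=n)   # numpy parametrises by scale = 1/rate
x2 = rng.exponential(scale=1 / rho2, size=n)

print((x1 < x2).mean(), rho1 / (rho1 + rho2))  # both are approximately 0.4
```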
H: If $\alpha$ is a cycle of length $k$. Then $o(\alpha)=k$ I found a proof of this theorem which says: $ Proof: $ If $ \alpha=(i_1,i_2,...,i_{k-1},i_{k}) $ . Then, $\alpha^{k}(i_j)=\alpha^{j}\alpha^{k-j}(i_j)=\alpha^{j}(i_k)=i_j, \; \forall\; j $ Thus, $ \alpha^{k} =1$ and $ o(\alpha) \leq k $. Now, if $ 1\leq s < k $. Then, $ \alpha^{s}(i_1)=i_{s+1}\neq i_1 $. Therefore, $ o(\alpha) = k $ My problem is in the last line. I don't get how it concludes that $o(\alpha)=k$. AI: Assume $\alpha$ is a cycle of length $k \gt 1$. It is easy to see that $\alpha^k$ is the identity so $1 \le o(\alpha) \le k$. Now, if $ 1\leq s < k $. Then, $ \alpha^{s}(i_1)=i_{s+1}\neq i_1 $. So $ \alpha^{s}$ is not the identity permutation for any $1\leq s < k$, i.e. no smaller positive power of $\alpha$ is the identity. So the order is indeed equal to $k$.
H: Find the minimum value of $x+2y$ given $\frac{1}{x + 2} + \frac{1}{y + 2} = \frac{1}{3}.$ Let $x$ and $y$ be positive real numbers such that $$\frac{1}{x + 2} + \frac{1}{y + 2} = \frac{1}{3}.$$Find the minimum value of $x + 2y.$ I think I will need to use the Cauchy-Schwarz Inequality here, but I don't know how I should use it. Can anyone help? Thanks! AI: Cauchy-Schwarz implies $$((x+2)+2(y+2))\left(\frac 1{x+2}+\frac 1{y+2}\right)\geq (1+\sqrt{2})^2 $$ $$\Rightarrow x+2y+6\geq 3(1+\sqrt{2})^2,$$where equality is achieved when $$x+2=3(1+\sqrt{2}),y+2=\frac 3{\sqrt{2}}(1+\sqrt{2}).$$ This shows that the minimum of $x+2y$ is $3+6\sqrt{2}.$
H: Proof verification: the characteristic of an integral domain $D$ must be either 0 or prime. Claim: the characteristic of an integral domain $D$ must be either 0 or prime. Here is my attempt: Assume $D$ is an integral domain. Assume $k$ is the characteristic of $D$. Let $a \in D\setminus \{0\}$. Aiming for a contradiction, assume $k$ is neither prime nor $0$. Since $k$ is the smallest positive integer satisfying $k \cdotp a = 0$, $\exists m, n \in \mathbb{Z}^+$ s.t. \begin{equation} k = m \cdotp n \end{equation} Without loss of generality, assume that $m, n$ are the smallest positive integers satisfying $k = m \cdotp n$. Since $D$ is a ring with unity $1 \neq 0$, we have $k = (m \cdotp 1) \cdotp (n \cdotp 1)$. That is, $(m \cdotp 1) \cdotp (n \cdotp 1) \cdotp a= 0$. Since $D$ contains no divisors of $0$, either $(m \cdotp 1) = 0$ or $(n \cdotp 1) = 0$. If $(m \cdotp 1) = 0$, then by Theorem 19.15, $n$ is the characteristic of $D$ is $n$, which is a contradiction. If $(n \cdotp 1) = 0$, then by Theorem 19.15 again, the characteristic of $D$ is $m$, which is also a contradiction. $\square$ Theorem 19.15: Let $R$ be a ring with unity. If $n \cdotp 1 = 0$ for some $n \in \mathbb{Z}^+$, then the smallest such integer $n$ is the characteristic of $R$. My question: I am not sure if my use of Theorem 19.15 is correct/ justified in my proof. I know that I have "Without loss of generality, assume that $m, n$ are the smallest positive integers satisfying $k = m \cdotp n$" in my proof but I am not sure if this is sufficient to use Theorem 19.15 the way I have in the last couple lines of my proof. Can someone please verify if this proof is correct or if it needs any adjustments? Thanks! AI: Yes, it's all correct, though you don't need $n$ to be the smallest nontrivial divisor. (Note that saying the pair $n,m$ is 'smallest' makes no sense.) Theorem 19.15 can be easily seen, as if $n\cdot 1=0$, then $n\cdot a=(n\cdot 1)\cdot a=0$ for every element $a$.
H: Distribution of $Y$ if $Y=X$ if $|X|\leq c$ and $Y=-X$ if $|X|>c$, $X\in N(0,1)$ Let $X\in N(0,1)$, and $c\geq 0$. $Y$ is defined as $Y=\begin{cases}X & \text{for}\quad |X|\leq c,\\ -X&\text{for}\quad |X|>c.\end{cases}$ What is the distribution of $Y$? I would guess it has a normal distribution given that $X\overset{d}{=} -X$. AI: Yes, it is $N(0,1)$: $P(Y \leq y)= P(X \leq y, |X| \leq c)+P(-X \leq y, |X| > c)$ In the second term use the fact that $-X$ has the same distribution as $X$ to see that $P(Y \leq y)= P(X \leq y, |X| \leq c)+P(X \leq y, |X| > c)=P(X \leq y)$
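A simulation sketch (the threshold $c=1$ is an arbitrary choice, not from the question) comparing the empirical distribution of $Y$ with the standard normal via a Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
c = 1.0                               # any threshold works; 1.0 is an arbitrary example
x = rng.standard_normal(10**6)

y = np.where(np.abs(x) <= c, x, -x)   # the transformation from the question

print(stats.kstest(y, "norm"))        # the p-value is typically not small: consistent with N(0,1)
```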
H: Same remainders of a sequence Show that for the sequence $x_{1}=9$ and $x_{n+1} = 9^{x_{n}}$ the remainders for the third and fourth term are equal when divided by $100$. Determine this remainder. So the second term seems to be $x_{2}=9^9$ and therefore the third and fourth $x_{3}=9^{9^9}, x_{4}=9^{9^{9^9}}$. Is there some exponent rule we need to apply here or what would be the way to go about this? AI: All you need is to find $\operatorname{ord}_{100}(9)$, say $m$, and show that $x_2\equiv x_3\pmod m$. As $\gcd(4,25)=1,\,100=4\cdot 25$ it suffices to find $m=\operatorname{ord}_{25}(9)$ as $9\equiv 1\pmod 4$ so $9^m\equiv 1\pmod {100}\Leftrightarrow 9^m\equiv 1\pmod {25}$ by the Chinese Remainder Theorem. So, how to proceed with pen and paper, as OP is tagged 'contest-math': Since $9^{\varphi(25)}\equiv 1\pmod{25}$ by Euler's theorem, we have $m\mid\varphi(25)=20$, so we only have to test the positive divisors of $20$, namely $1,2,4,5,10,20$; and since $9\not\equiv 1\pmod{25}$, let's start with $2$: $9^2=81\equiv 6\pmod{25}$ $9^4=(9^2)^2\equiv 6^2\equiv 11\pmod{25}$ $9^5=9\cdot 9^4\equiv 9\cdot 11\equiv -1\pmod{25}$, and finally $9^{10}=(9^5)^2\equiv (-1)^2\equiv 1\pmod{25}$, so $m=10$. Now $x_2=9^9\equiv 9\pmod{10}$, so $x_3=9^{x_2}\equiv 9^9\pmod{100}$; in particular $x_3\equiv 9^9\equiv 9\pmod{10}$, so also $x_4=9^{x_3}\equiv 9^9\pmod{100}$, hence the desired result. Now we find $9^9\pmod{100}$ like $(9^3)^3$ or $9((9^2)^2)^2$ -- 4 multiplications either way (I'll omit the details). $9^9\equiv 89\pmod{100}$
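Python's three-argument pow makes the claim easy to check directly (a verification sketch, not part of the argument itself):

```python
x2 = 9**9                              # = 387420489, small enough to handle directly

# order of 9 modulo 100 (the answer shows it is 10)
order = next(m for m in range(1, 101) if pow(9, m, 100) == 1)
print(order)                           # 10

x3_mod_100 = pow(9, x2, 100)           # x_3 mod 100
x3_mod_10 = pow(9, x2, 10)             # x_3 mod 10, used to reduce the next exponent
x4_mod_100 = pow(9, x3_mod_10, 100)    # valid because 9**10 is congruent to 1 mod 100

print(x3_mod_100, x4_mod_100)          # both are 89
```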
H: Regular closed subset of H-closed space An H-closed space $X$ is a topological space which is closed in any Hausdorff space in which it is embedded. A well-known characterization is that $X$ is H-closed iff every open cover of $X$ has a finite proximate subcover, i.e. a finite subcollection whose union is dense. I need to show that this property is hereditary to regular closed subsets. I tried to do something analogous to the proof of closed subset of Compact spaces are compact. But got stuck. Any help is appreciated. AI: Let $X$ be H-closed, and let $F$ be a regular closed set in $X$. Let $\mathscr{U}$ be a relatively open cover of $F$. For each $U\in\mathscr{U}$ there is an open $V_U$ in $X$ such that $U=F\cap V_U$; let $$\mathscr{V}=\{X\setminus F\}\cup\{V_U:U\in\mathscr{U}\}\;.$$ $\mathscr{V}$ is an open cover of $X$, so it has a finite proximate subcover $\mathscr{V}_0$. Let $$\mathscr{U}_0=\{U\in\mathscr{U}:V_U\in\mathscr{V}_0\}\;;$$ clearly $\mathscr{U}_0$ is a finite subset of $\mathscr{U}$. Since $\operatorname{cl}(X\setminus F)\cap\operatorname{int}F=\varnothing$, and $\bigcup\mathscr{V}_0$ is dense in $X$, $\bigcup\{V_U:U\in\mathscr{U}_0\}$ must be dense in $\operatorname{int}F$, and hence $\bigcup\mathscr{U}_0$ must be dense in $\operatorname{int}F$. Thus, $$F=\operatorname{cl}\operatorname{int}F\subseteq\bigcup_{U\in\mathscr{U}_0}\operatorname{cl}U\subseteq F\;,$$ $\mathscr{U}_0$ is a proximate subcover of $\mathscr{U}$, and $F$ is H-closed. It is also true that a space $X$ is H-closed iff every open filter in $X$ has a cluster point, and we can use this characterization instead. Let $\mathscr{U}$ be a relatively open filter on $F$, and let $\mathscr{B}=\{U\cap\operatorname{int}F:U\in\mathscr{U}\}$. Clearly $U\cap\operatorname{int}F\ne\varnothing$ for each $U\in\mathscr{U}$, so $\mathscr{B}$ is an open filterbase in $X$. $X$ is H-closed, so the filter $\mathscr{V}$ generated by $\mathscr{B}$ has a cluster point $x\in X$, which is evidently also a cluster point of $\mathscr{U}$. And $\operatorname{int}F\in\mathscr{V}$, so every nbhd of $x$ meets $\operatorname{int}F$, and therefore $x\in\operatorname{cl}\operatorname{int}F=F$, so that $\mathscr{U}$ has a cluster point in $F$.
H: Convergence/divergence My series has a general term $\frac{(1+\frac{1}{n})^{n^2}}{e^n}$. I found that the Root test is inconclusive here. Wolfram says to use "limit test". Is that the limit comparison test? Which series can I compare this one to? I know it should diverge. AI: Let $u_n$ be the general term. $$\ln(u_n)=n^2\ln(1+\frac 1n)-n$$ $$\ln(1+\frac 1n)=\frac 1n -\frac{1}{2n^2}+\frac{1}{n^2}\epsilon(n)$$ where $\epsilon(n)\to 0$, so $$\ln(u_n)=-\frac 12+\epsilon(n)$$ thus $$\lim_{n\to+\infty}u_n=\frac{1}{\sqrt{e}}\ne0$$ the series diverges by the limit test (the $n$th term test): its general term does not tend to $0$.
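Numerically, the general term indeed approaches $1/\sqrt{e}\approx 0.6065$ rather than $0$ (a quick sketch; the term is evaluated through its logarithm to avoid floating-point overflow for large $n$):

```python
import math

for n in (10, 100, 1000, 10**6):
    log_u = n * n * math.log1p(1 / n) - n   # log of the general term
    print(n, math.exp(log_u))

print(1 / math.sqrt(math.e))                # approximately 0.6065307
```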
H: Measure theory: motivation behind monotone convergence theorem I am watching a very nice set of videos on measure theory, which are great. But I am not clear on what the motivation is behind the monotone convergence theorem--meaning why we need it? The statement of the theorem is that given a set of functions $\{f_n\} \rightarrow f$ such that $f_1 \leq f_2 \leq f_3 \leq \dots \leq f_n \leq f$ $$ \lim_{n \rightarrow \infty} \int_X f_n d\mu = \int_X \lim_{n \rightarrow \infty } f_n d\mu = \int_X f d\mu $$ So the theorem suggests the interchange of the limit and the integral sign. But I am not sure what the implications of this interchange is and under what conditions this interchange is not possible (for the Lebesgue integral)? Meaning, that this particular theorem of monotone convergence presupposes the Lebesgue integral as opposed to the Riemann integral. So, is monotone convergence not guaranteed for the Riemann integral--is that the key distinction? And second, are the cases where monotone convergence fails for the Riemann integral due to the fact that the Riemann integral gives mass to sets in $X$ of measure zero, while the Lebesgue integral does not have this problem? AI: Here's a simple example where the monotone convergence theorem fails for the Riemann integral. Fix an enumeration $(q_k)_{k\in\mathbb N}$ of the rational numbers in the interval $[0,1]$. Define $f_n(x)$ to equal $1$ for $x=q_0,q_1,\dots, q_n$ and to equal $0$ for all other $x\in[0,1]$. Then the sequence of functions $(f_n)$ converges monotonically pointwise to the characteristic function of $\mathbb Q\cap[0,1]$, which is not Riemann integrable on $[0,1]$, even though each $f_n$ is Riemann integrable with integral $0$. One of the main motivations (if not the motivation) for Lebesgue's theory of integration was to improve the behavior of integration vis à vis limits. The monotone convergence theorem, the dominated convergence theorem, and Fatou's lemma are among the instances of this improved behavior.
H: Clarification of finding this transition probability matrix Let $X_n$ denote the two-state Markov chain with transition probability matrix P= $ \begin{bmatrix} \alpha & 1-\alpha\\ 1-\beta & \beta \end{bmatrix} $ given states 0 and 1. Let $Z_n=(X_{n-1},X_n)$ be a Markov chain over the four states (0,0), (0,1), (1,0), and (1,1). Determine the transition probability matrix. Here are my thoughts so far: If $Z_n=(X_{n-1},X_n)$, then $Z_{n-1}=(X_{n-2},X_{n-1})$. I believe I'm looking for $P[Z_n=(x_1,y_1)|Z_{n-1}=(x_2,y_2)]=\frac{P[X_{n-1}=x_1, X_n=y_1, X_{n-2}=x_2, X_{n-1}=y_2]}{P[X_{n-2}=x_2, X_{n-1}=y_2]}$=$P(X_n=y_1|X_{n-1}=x_1, X_{n-2}=x_2)$=$P(X_n=y_1|X_{n-1}=x_1)$ by the Markov property. For example, using the points (0,0)$\rightarrow$(0,0) would be $P[Z_n=(0,0)|Z_{n-1}=(0,0)]=\frac{P[X_{n-1}=0, X_n=0, X_{n-2}=0, X_{n-1}=0]}{P[X_{n-2}=0, X_{n-1}=0]}$=$P(X_n=0|X_{n-1}=0, X_{n-2}=0)$=$P(X_n=0|X_{n-1}=0)$ which corresponds to $p_{00}=\alpha$. Following this process I obtained P=$\begin{bmatrix} \alpha &0 &\alpha &0\\ 1-\alpha &0 &1-\alpha &0\\ 0 &1-\beta &0 &1-\beta\\ 0 &\beta &0 &\beta\\ \end{bmatrix}$. $0$'s denote a probability that is not possible (i.e. $X_{n-1}$ cannot be both 0 and 1). I've seen the answer expressed as the matrix P= $\begin{bmatrix} \alpha &1-\alpha &0 &0\\ 0 &0 &1-\beta &\beta\\ \alpha &1-\alpha &0 &0\\ 0 &0 &1-\beta &\beta\\ \end{bmatrix}$ Based on the logic in when a probability equals zero I really think my transition probability matrix is correct rather than the alternative choice. Any thoughts? AI: Your matrix is just the transpose of the answer. It occurs probably because your definition is different from the answer's. For instance, for the transition probability matrix $P$, the row sums equal to 1. It means that $$ \sum_{j} p_{ij} = 1 $$ because it is summing over all possible outcomes of $i \to j$. It does not matter in computing the probability for $(0,0) \to (0,0)$, but it does matter for $(1,0) \to (0,0)$. Examine your calculation and make sure that it is defined as $$ p_{ij} = \mathbb{P}(Z_{n+1}=j|Z_n=i) $$ Yours may be the other way around and result in the transpose.
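A numpy sketch of the lifted chain (the values $\alpha=0.7$, $\beta=0.4$ are arbitrary examples, not from the question), using the state order $(0,0),(0,1),(1,0),(1,1)$ and the convention $p_{ij}=\mathbb{P}(Z_{n+1}=j\mid Z_n=i)$, which forces row sums of $1$:

```python
import numpy as np

alpha, beta = 0.7, 0.4   # example values (assumption, not from the question)

# entry [i, j] = P(Z_{n+1} = j | Z_n = i); rows/columns ordered (0,0), (0,1), (1,0), (1,1)
P = np.array([
    [alpha, 1 - alpha, 0,        0   ],   # from (0,0): next pair is (0, X_{n+1})
    [0,     0,         1 - beta, beta],   # from (0,1): next pair is (1, X_{n+1})
    [alpha, 1 - alpha, 0,        0   ],   # from (1,0)
    [0,     0,         1 - beta, beta],   # from (1,1)
])

print(P.sum(axis=1))   # every row sums to 1, as required of a transition matrix
```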
H: Is $x^5-2x+4$ irreducible in $\mathbb{Q}[x]$? I was asked this question on my exam but I didn't know how to solve it. Eisenstein criterion can't be applied, rational roots theorem shows that it doesn't have a linear factor because possible roots are $\lbrace 1,-1,2,-2,4,-4 \rbrace$ and it doesn't seem to be irreducible over $\mathbf Z/2\mathbf Z$ or $\mathbf Z/3\mathbf Z$ AI: Hint: $f(x)=x^5-2x+4$ has only one real root. By rational root test, $f(x)$ has no linear factor. If $f(x)$ is reducible, it must have a quadratic factor of the form $x^2+Ax+B,$ where $B$ divides $4$. But one has $A^2-4B<0$ and hence $B>0$, otherwise it will yield at least two real roots. There are just a few cases to consider.
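The conclusion the hint leads to can be confirmed with a computer algebra system; a sympy sketch:

```python
from sympy import symbols, factor, Poly

x = symbols('x')
f = x**5 - 2*x + 4

print(factor(f))                    # stays x**5 - 2*x + 4, i.e. no factorisation over Q
print(Poly(f, x).is_irreducible)    # True
```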
H: Is X always in any open neighborhood system for the topological space X, t? I have just been recently introduced to the concept of open neighborhood systems, but the fact that X is in any open neighborhood system $N_x$ in X, $t$ is not self-evident to me. Is this always true? AI: Yes. By definition any topology on $X$ includes the set $X$ itself, so $X$ is an open nbhd of every point of $X$.
H: When the piecewise constant integral independs of the partition's choice? Proposition Let $I$ be a bounded interval, and let $f:I\to\textbf{R}$ be a function. Suppose that $\textbf{P}$ and $\textbf{P}'$ are partitions of $I$ such that $f$ is piecewise constant both with respect to $\textbf{P}$ and with respect to $\textbf{P}'$. Then \begin{align*} p.c.\int_{[\textbf{P}]}f = p.c.\int_{[\textbf{P}']}f \end{align*} My question Although my question is quite elementary, I would like to ask the following: if $f$ is piecewise constant with respect to $\textbf{P}$ and $\textbf{P}'$, then necessarily $\textbf{P}$ is finer than $\textbf{P}'$ or $\textbf{P}'$ is finer than $\textbf{P}$? AI: No. For instance, consider a constant function and any two partitions.
H: Right Triangle within another right triangle inside a square As you can see in the below picture, We have a right triangle inside a big square, and within the triangle there is another small right triangle. The question as follows: Find the length of AB "With" using BE and FGin your solution. Well I came up with this question because I need to find AB without using the other data that we can simply take them from the image, because I am working in a project That I must use raycasting method anyway after analyzing and a lot of drawing I managed to answer the question as follows : We have BE = 4cm and FG = 0.5cm then AB = BE / FG => AB = 4 / 0.5 => AB = 8cm and I found this method works only if the AE ray stays within the square area, For example if point E = (10,8) this method will not work anymore as shown below : there is some area of the triangle is outside the square and also if you see the picture you will find that FG now is big than 1 cm FG > 1cm, so this method will work for you just if FG > 0 && FG <= 1 I believe that there is another way to explain this so my question is does anyone knows the logic behind AB = BE / FG or someone who knows a better explanation or some formulas that works with this kind of triangles. Edit after Daniel Mathias answered my question, I realized even the second example the "method" works with it, i just got confused with another triangle my bad. AI: Triangles $ABE$ and $AFG$ are similar, with $\frac{AB}{BE}=\frac{AF}{FG}$. Given $AF=1$, this means: $$\frac{AB}{BE}=\frac{1}{FG}\implies AB=\frac{BE}{FG}$$ This is true without exception. In your second image, you have $BE=10$, $FG=1.25$ and $AB=\frac{10}{1.25}=8$.
H: Differentiation of a function with 2 variables I have a function: $$ f: R^2 \to R $$ Which satisfies: $$ x\frac{\partial f}{\partial x} + y\frac{\partial f}{\partial y} = 0, \forall (x,y) \in \mathbb{R}^2 $$ Now the answers get in the end to: How did they conclude that the right side sums to zero? I assume that it stems from the fact in the conditions of the question that: $$ x\frac{\partial f}{\partial x} + y\frac{\partial f}{\partial y} = 0, \forall (x,y) \in \mathbb{R}^2 $$ But how? AI: The condition in the question can be written as $$x \frac{\partial f}{\partial x} (x,y) + y \frac{\partial f}{\partial y} (x,y) = 0 \textrm{ for any } (x,y) \in \mathbb R^2,$$ so, apply this with $(x,y) = (cx_0,cy_0)$.
H: Prove that if a sequence converges (in a metric space), then every subsequence does always converge as well. Let $(x_{n})_{n=m}^{\infty}$ be a sequence in $(X,d)$ which converges to some limit $x_{0}$. Then every subsequence $(x_{f(n)})_{n=m}^{\infty}$ of that sequence also converges to $x_{0}$. My solution Let $\varepsilon > 0$. Then there exists a natural number $N\geq m$ such that \begin{align*} n\geq N \Rightarrow d(x_{n},x_{0}) < \varepsilon \end{align*} Since $f:\textbf{N}\to\textbf{N}$ is stricly increasing, we conclude that $f(n) \geq n$. Consequently, for the same $N\geq m$, the following relation holds \begin{align*} f(n) \geq n\geq N \Rightarrow d(x_{f(n)},x_{0}) < \varepsilon \end{align*} whence we conclude that $x_{f(n)}$ converges to $x_{0}$ as well. I am mainly concerned with the wording of the proof. Can someone point out any flaw? AI: Your proof (and the wording as well) is fine. You proved that for all $\varepsilon > 0$, there exists an $N \in \mathbb{N}$ such that $n \geq N \implies x_{f(n)} \in B(x_0, \varepsilon)$, which shows exactly the convergence you desired.
H: When does a substructure of an algebraic structure exist? (from Fraleigh) I read in Fraleigh, A first course in Abstract Algebra, that If we have a set, together with a certain type of algebraic structure (groups, rings, integral domains, etc.), then any subset of this set, together with a natural induced algebraic structure that yields an algebraic structure of the same type, is a substructure. I am trying to understand this comment. Firstly, why does Fraleigh say "algebraic structure" instead of something like "algebraic operation"? Also, if we have, say, an integral domain $D$ and $A$ is a subdomain of $D$, then is it reasonable to infer, form this above comment, that $A$ is an integral domain? Thanks! AI: An operation is something like addition or multiplication. An algebraic structure is a lot more than that. Typically an algebraic structure is defined by one or more operations together with axioms they must satisfy. There are group axioms, ring axioms, etc., which define a group, ring, etc. It is a lot more than just an operation. Yes, in that case, $A$ would also be an integral domain.
H: Does this strategy of characterizing poles always work? I stumbled upon a fast way to characterize poles of order $m$ of a meromorphic function $f$ (on some open set $\Omega$) in this answer here. My question is, does this general strategy always work? Here's why I think it doesn't always work. Consider $$f(z) = \frac{\cos z -1}{z^2}.$$ According to the general strategy given in the answer, $z = 0$ should be a simple pole (pole of order $1$) as $0$ is a root of $z^2$ of multiplicity $2$ and $0$ is a root of $\cos z - 1$ of multiplicity $1$. Therefore, $0$ is a pole of order $2-1 = 1$. But then, considering the power series expansion of $\cos z$, we have $$f(z) = -\frac{1}{2!} + \frac{z^2}{4!} - \frac{z^4}{6!} + \ldots$$ So evidently we see here that $z= 0$ is a removable singularity of $f(z)$. Is there a flaw in how I used the answerer's argument or it is thus true that the general strategy does not always work? AI: Your mistake is that $0$ is a root of $\cos(z)-1$ of multiplicity $2$, not $1$.
H: Understanding Fraleigh's proof of: Every finite integral domain is a field Here's how Fraleigh proves: Every finite integral domain is a field in his book: Let \begin{equation*} 0, 1, a_1, \dots, a_n \end{equation*} be all the elements of the finite domain $D$. Now, consider \begin{equation*} a1, aa_1, \dots, aa_n \end{equation*} Since the multiplicative cancellation laws hold in $D$, it means that each of $a1, aa_1, \dots, aa_n$ are distinct from each other since $aa_i = aa_j \implies a_i = a_j$. Also, since $D$ has no divisors of $0$, neither of $a1, aa_1, \dots, aa_n$ can be zero. Hence, $a1, aa_1, \dots, aa_n$ are elements $1, a_1, \dots, a_n$ in some order. So, either $a1 = 1 \implies a = 1$ or $aa_i = 1$ for some $i$. My addition: If $a = 1$, then the conditional in question is trivially satisfied and there is nothing to prove. So, without loss of generality, assume $aa_i = 1$. This shows that $a$ has a multiplicative inverse, $a_i$. $\square$ I have two questions: firstly, is my addition to the proof valid? Secondly, how does $D$ has no divisors of $0$ imply "neither of $a1, aa_1, \dots, aa_n$ can be zero" (in bold above). The definition of 0 divisors that Fraleigh has given is: If $a$ and $b$ are two nonzero elements of a ring $R$ s.t. $ab = 0$, then $a$ and $b$ are divisors of 0. To conclude that "neither of $a1, aa_1, \dots, aa_n$ can be zero" from this definition, I think we would need to know that the product of any two terms from $a1, aa_1, \dots, aa_n$ is zero but we don't know this. What am I missing? Thanks! AI: The whole point is to show that none of the products $a1,aa_1,\ldots,aa_n$ is $0$. Suppose that some $aa_k$ were $0$. We know that $a$ and $a_k$ are not $0$; if $aa_k$ were $0$, $a$ and $a_k$ would by definition be divisors of $0$, but we know that $D$ has no divisors of $0$. Thus, $aa_k$ cannot be $0$. The same argument shows that $a1$ cannot be $0$, though in that case it’s even easier, since $a1=a$, and we know that $a\ne 0$. Your addition is correct but not really necessary: one would hope that the reader can be trusted to recognize that if $a=1$, we already know that it has a multiplicative inverse, so we’re really interested in the other cases.
H: t statistic formula I just have a quick question about t statistic. Of these two, which formula is correct? $$t(x) = \frac{\bar{X} - \mu }{\sqrt{\frac{1}{n}\sum \left ( X_{i} - \bar{X}\right )^{2}}}\sqrt{n-1}$$ $$t(x) = \frac{\bar{X} - \mu }{\sqrt{\frac{1}{n-1}\sum \left ( X_{i} - \bar{X}\right )^{2}}}\sqrt{n}$$ I came across both in different places (even within the same textbook) so don't know which one is correct. Thanks for helping! AI: They are mathematically equivalent, because $$\frac{\sqrt{n-1}}{\sqrt{1/n}} = \sqrt{n}\sqrt{n-1} = \frac{\sqrt{n}}{\sqrt{1/(n-1)}}.$$ That said, the second formula is a better reflection of how the $t$ statistic is calculated, because it is obtained by estimating the standard deviation of the population from the sample when it is unknown. Thus, $$s^2 = \frac{1}{n-1} \sum_{i=1}^n (X_i - \bar X)^2$$ is the unbiased estimator for the variance $\sigma^2$ when $X_i$ are iid normal random variables with unknown mean $\mu$ and unknown variance $\sigma^2$. Then the statistic $$\frac{\bar X-\mu}{s/\sqrt{n}}$$ is Student $t$ distributed where $s/\sqrt{n}$ is the standard error of the mean.
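A numeric check on made-up data that the two expressions coincide (a sketch; the sample and the hypothesised mean are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=30)   # made-up sample
mu = 5.0                                      # hypothesised mean
n = len(x)
xbar = x.mean()

t1 = (xbar - mu) / np.sqrt(np.sum((x - xbar)**2) / n) * np.sqrt(n - 1)
t2 = (xbar - mu) / np.sqrt(np.sum((x - xbar)**2) / (n - 1)) * np.sqrt(n)

print(t1, t2)   # identical up to floating-point rounding
```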
H: On a bus, 1/10 of the total number of passengers get off at one spot, then 1/3 of the rest... On a bus, 1/10 of the total number of passengers get off at one spot, then 1/3 of the rest (the passengers left on the bus) get off at another point. There are 42 passengers left on the bus. What's the total number of passengers who were originally on the bus? I was sent this question and I got 140 as the total. I used the calculation as such: Total - $\frac{1}{10}$ Total = NewTotal $\frac{1}{3}$NewTotal = 42 NewTotal = 126 Total - $\frac{1}{10}$ Total=126 Total$(1-\frac{1}{10})$ = 126 Total = $\frac{1260}{9}$ Total = 140 Now, I was then sent another solution that went like this $x - \frac{1}{10}x-\frac{1}{3}*\frac{9}{10}x=42$ $\frac{18}{30}x=42$ $x = \frac{1260}{18}$ $x=70$ Now, I maintain that the second solution used two-thirds of the nine tenths left one the bus rather than one third. But, I may very well be experiencing some blindness due to my desire to be right. Given that my answer is twice the answer I was given. I feel a little embarrassed to be writing this, but I claim that it is less to do with my mathematical abilities and more to do with my ability to see what I like to see when I want to be correct! So, can you all clear this up for me? AI: $\frac{\color{red}2}3 \cdot \text{New Total}=42$ as that is the leftover on the bus after the second point. $\text{New Total} = \frac{126}{2}$ $\text{Total} - \frac{1}{10}\cdot \text{Total }=\frac{126}{2}$ Total$(1-\frac{1}{10}) = \frac{126}2$ Total = $\frac{1260}{9}\cdot \frac12$ Total $= 70$
H: Show that $\mathbb{I}_A$ is continuous at $x_0$ if and only if $x_0$ is not a boundary point of $A$ Let $A \subset\mathbb{R}$ and $\mathbb{I}_A:\mathbb{R}\longrightarrow \mathbb{R}$ the indicator function of A. Show that $\mathbb{I}_A$ is continuous at $x_0$ if and only if $x_0$ is not a boundary point of $A$. If $C$ is the Cantor set, where is $\mathbb{I}_C$ continuous? Well, I have been struggling with this demonstration for a while, and I could only write down the definitions. Any suggestions would be great! $\Rightarrow$] $\mathbb{I}_A$ is continuous in $x_0$ if $\forall$ sequence $(x_n)$ in $\mathbb{R}$ such that $x_n\longrightarrow x_0$ we have that $\mathbb{I}_A(x_n)\longrightarrow \mathbb{I}_A(x_0)$. $\Leftarrow$] If $x_0$ is not a boundary point of $A$, it means that there exists $r>0$ such that $B_r(x_0)\cap A = \emptyset$ or $B_r(x_0)\cap A^c = \emptyset$. AI: If $x_0$ is a boundary point of $A$ then there exists a sequence $(x_n)$ contained in $A$ converging to $x_0$ and a sequence $(y_n)$ contained in $A^{c}$ converging to $x_0$. Now $I_{A}(x_n) =1$ for all $n$ and $I_{A}(y_n) =0$ for all $n$. Hence $\lim_{x \to x_0} I_A(x)$ does not exist and $I_A$ is not continuous at $x_0$. If $x_0$ is not a boundary point then there is an open interval around $x_0$ which is completely contained in $A$ or an open interval around $x_0$ which is completely contained in $A^{c}$. Can you finish? The boundary of the Cantor set $C$ is $C$ itself since it is closed with no interior points.
H: How do we prove that compact spaces in metric spaces are bounded? Let $(X,d)$ be a compact metric space. Then $(X,d)$ is both complete and bounded. My solution The space $(X,d)$ is indeed complete. This is because every Cauchy sequence which admits a convergent subsequence is also convergent. Now it remains to prove the bounded part. Let us suppose otherwise that $X$ is unbounded. Thus for every $r > 0$ there is an element $x\in X$ such that $x\not\in B(x_{0},r)$. In particular, if we choose $r = n$, there corresponds $x_{n}\in X$ such that $d(x_{n},x_{0}) \geq n$. But, since $x_{n}$ admits a subsequence $x_{f(n)}$ which converges, such subsequence is bounded: $d(x_{f(n)},x_{0}) < N$ for some natural $N$. Thus if we choose $n_{0} = N + 1$ and noticing that $f(n_{0}) \geq n_{0}$, we conclude that \begin{align*} N+1 = n_{0} \leq f(n_{0}) \leq d(x_{f(n_{0})},x_{0}) \leq N \end{align*} which is a contradiction. Therefore $X$ is bounded. Could someone please point out any theoretical flaw, if there is one? AI: The subsequence converges to some point $y_0$ and we get $d(x_{f(n)},y_0) <N$ instead of $d(x_{f(n)},x_0) <N$. (Though $d(x_{f(n)},x_0)$ is also bounded, you have to give some argument to justify this.)
H: Maximization inequality for Frobenius norm after adding orthogonal matrix Let $A$ be a matrix and $Q$ be an orthogonal matrix such that $AQ^T$ is symmetric, positive semidefinite. Show that $$||A+Q||_F\geq||A+P||_F$$ for any orthogonal matrix $P$. Here, $||\cdot||_F$ is the Frobenius norm. AI: Let $AQ^T=VDV^T$ be an orthogonal diagonalisation. Then $D$ is a nonnegative diagonal matrix and $U=V^TPQ^TV$ is orthogonal. Since Frobenius norm is unitarily invariant, we have \begin{aligned} \|A+Q\|_F^2-\|A+P\|_F^2 &=\|V^T(A+Q)Q^TV\|_F^2-\|V^T(A+P)Q^TV\|_F^2\\ &=\|D+I\|_F^2-\|D+U\|_F^2\\ &=2\operatorname{tr}(D)-\operatorname{tr}(DU)-\operatorname{tr}(U^TD)\\ &=2\operatorname{tr}(D(I-U)), \end{aligned} but this trace is nonnegative because $D(I-U)$ has a nonnegative diagonal.
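A numpy verification sketch (dimension, seed and sample counts are arbitrary assumptions): build $Q$ from the SVD of a random $A$, so that $A=SQ$ with $S=AQ^T$ symmetric PSD (a polar-type factorisation), then compare $\|A+Q\|_F$ against many random orthogonal $P$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.normal(size=(n, n))

# A = W diag(S) V^T, so Q = W V^T is orthogonal and A @ Q.T = W diag(S) W^T is symmetric PSD
W, S, Vt = np.linalg.svd(A)
Q = W @ Vt

norm_Q = np.linalg.norm(A + Q, 'fro')
for _ in range(1000):
    P, _ = np.linalg.qr(rng.normal(size=(n, n)))   # a random orthogonal matrix
    assert np.linalg.norm(A + P, 'fro') <= norm_Q + 1e-12

print("Frobenius norm of A+Q:", norm_Q, "- maximal among all sampled orthogonal P")
```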
H: Find a number $n \neq 2017$ such that $\phi(n) = \phi(2017)$ Find a number $n \neq 2017$ such that $\phi(n) = \phi(2017)$, as above. I know the formula for a general $\phi$ function, but I cannot see how this is helpful here. Any help would be appreciated! AI: Hint: $2017$ is a prime number. Solution: Fortunately, the totient function $\phi$ has the nice property that $\phi(p) = \phi(2p)$, where $p$ is an odd prime. This is because $2p$ is relatively prime to all odd numbers less than it except $p$. There are $p$ odd numbers less than $2p$, and discounting $p$ itself (since $\gcd(p, 2p) = p \neq 1$), we see that $$\phi(2p) = p - 1 = \phi(p)$$ You could have alternatively seen this from the multiplicative property of $\phi$, namely that $\phi(mn) = \phi(m)\phi(n)$ whenever $\gcd(m,n)=1$. Hence, $\phi(4034) = \phi(2017)$, because $\phi(2) = 1$.
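A quick check, both with sympy's totient and by brute-force counting (a verification sketch):

```python
from math import gcd
from sympy import totient

print(totient(2017), totient(4034))                         # 2016, 2016
print(sum(1 for k in range(1, 4034) if gcd(k, 4034) == 1))  # 2016, brute-force confirmation
```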
H: What does it mean to take the “ gradient with respect to the position $r_ i$”? Let’s say we have a number of particles (charged, massive or anything that can create potential energy). The total potential energy of any particle can be given by $$U_i (\vec r_1, \vec r_2, ... \vec r_N) = \sum_{j\gt i}^{N} U_{ij}(|r_i -r_j|)$$ and hence total potential of the system is $$ U_{total}(\vec r_1, \vec r_2, ..., \vec r_N)= \sum_{i=1}^{N} U_i$$ ($U_{ij}$ means potential energy of the particle $i$ due to the particle $j$) Now, to get the force on the particle $i$ we write $$ \mathbf F_i = -\nabla_{r_i} U_{total}(\vec r_1, \vec r_2, ..., \vec r_n) $$ Now, that subscript after $\nabla$ is posing some problems for me. What is meant by “gradient with respect to $r_i$? Isn’t $r_i$ a fixed position of $i$ th particle? It’s same as saying “take the derivative of $f$ with respect to number 2”. Why not to take the simple gradient, that is only $\nabla$, of $U_{total} (\vec r_1, \vec r_2, ... \vec r_N)$ and evaluate the gradient at $r_i$ ? Can you please resolve my doubts? Here is a Stackexchange post which address the similar problem, but I’m quite unsatisfied with the answers over there. AI: The total potential energy is a function $U: \left(\Bbb{R}^3\right)^N \to \Bbb{R}$. Where the physical idea is that the $i^{th}$ copy of $\Bbb{R}^3$ tells us the position $(x_i, y_i, z_i)$ of the $i^{th}$ particle. So, writing something like $\mathbf{F}_i = - \nabla_{\mathbf{r}_i}U$ means: $\mathbf{F}_i : \left(\Bbb{R}^3\right)^N \to \Bbb{R}^3$ is the vector valued function defined as \begin{align} \mathbf{F}_i &= - \left( \dfrac{\partial U}{\partial x_i}\, \mathbf{e}_1 + \dfrac{\partial U}{\partial y_i}\, \mathbf{e}_2 + \dfrac{\partial U}{\partial z_i}\, \mathbf{e}_3\right) \\ &\equiv - \left(\dfrac{\partial U}{\partial x_i}, \dfrac{\partial U}{\partial y_i}, \dfrac{\partial U}{\partial z_i} \right), \end{align} where I use $\equiv$ to mean they're the same thing, expressed in different notation. In other words, the force $\mathbf{F}_i$ on the $i^{th}$ particle is obtained by differentiating the total potential energy with respect to the $3$ cartesian coordinates of that particle. Edit: The short answer to the question in the comments is that "no, $\mathbf{F}_a = - \nabla_{\mathbf{r}_a}(U_{\text{total}})$" is a mathematical statement which needs to be proven and is not a "law". Here, we crucially make use of the fact that the potentials $U_{ij}$ depend only on $|\mathbf{r}_i - \mathbf{r}_j|$. By definition, the total force on $a^{th}$ particle, due to all other particles is \begin{align} \mathbf{F}_a = \sum_{j \neq a} \mathbf{F}_{\text{$j$ on $a$}} \end{align} What is $\mathbf{F}_{\text{$j$ on $a$}}$? Well, you simply take the potential energy $U_{aj}$ and differentiate with respect to $\mathbf{r}_a$, i.e $\mathbf{F}_{\text{$j$ on $a$ }} = - \nabla_{a}(U_{aj})$ (for ease of typing, I use $\nabla_a$ to mean $\nabla_{\mathbf{r_a}}$, which I defined above to mean differentiation with respect to $x_a, y_a, z_a$). Hence, \begin{align} \mathbf{F}_a = \sum_{j \neq a} \mathbf{F}_{\text{$j$ on $a$}} = - \sum_{j \neq a} \nabla_{a}(U_{aj}) \tag{$*$} \end{align} I now claim that this is also equal to $-\nabla_a U_{\text{total}}$. To prove this, note first of all that $U_{ij} = U_{ji}$ and that $\nabla_{a}(U_{ij}) = 0$ if $a \notin \{i,j\}$ (because the potential energy depends only on the distance between the $i$ and $j$ particles, so clearly it doesn't depend on some other $\mathbf{r}_a$). 
So, we have \begin{align} \nabla_a(U_{\text{total}}) &= \nabla_a\left( \sum_{i=1}^n \sum_{j>i} U_{ij}\right) \end{align} Now, we shall split the sum over $i$ into 3 pieces: $i=a$, $i<a$ and $i>a$. Then, we get: \begin{align} \nabla_a(U_{\text{total}}) &= \sum_{j>a} \nabla_a(U_{aj}) + \sum_{i<a} \sum_{j>i} \nabla_a(U_{ij}) + \sum_{i>a} \sum_{j>i} \nabla_a(U_{ij}) \end{align} Now, recall the above mentioned property that $\nabla_a(U_{ij}) = 0$ if $a \notin \{i,j\}$. So, the only non-zero terms in the above summation are: \begin{align} \nabla_a(U_{\text{total}}) &= \sum_{j>a} \nabla_a(U_{aj}) + \sum_{i<a} \nabla_a(U_{ia}) + 0 \\ &= \sum_{j \neq a} \nabla_a(U_{aj}), \tag{$**$} \end{align} where in the last line I used the fact that $U_{ia} = U_{ai}$, and I renamed summation indices and combined everything. So, if you combine $(*)$ and $(**)$ then you immediately see that \begin{align} \mathbf{F}_a &= - \nabla_a(U_{\text{total}}). \end{align}
H: Two players put fill $1$ and $0$ in a $3\times 3$ matrix and compute its determinant when it is full. Can Player $0$ win if $1$ starts at the center? In Determinant Tic-Tac-Toe, Player 1 enters a 1 in an empty 3 × 3 matrix. Player 0 counters with a 0 in a vacant position, and play continues in turn until the 3 × 3 matrix is completed with five 1’s and four 0’s. Player 0 wins if the determinant is 0 and player 1 wins otherwise. (a) If Player 1 goes first and enters a 1 in the middle square, is there a strategy that can give player 2 a guaranteed win? Note: I have seen a similar question at https://mathoverflow.net/questions/312034/matrix-tic-tac-toe, however this is based on the first number entered in the top left. Note: I have seen a similar question and solution at http://math.ucr.edu/~muralee/p4sols.pdf but I'm not quite sure how the proof provided extends to Player 1 starting in the middle. Perhaps the above could be used as a starting point? A website I have been using to visualise this is http://textbooks.math.gatech.edu/ila/demos/tictactoe/tictactoe.html (default set to 2 x 2 and Player 0 first but this can be changed). Thanks! AI: In the first place, it doesn't matter where ONE starts; the nine squares are all equivalent. (Permuting rows and columns can only change the sign of the determinant, not whether it is zero or nonzero.) Personally, for neatness, I'd start with a $1$ in a corner, but it's your question so I'll do it your way. Anyway, the game is a win for player ZERO; he can force not only the determinant but even the permanent of the $3\times3$ matrix to be zero. Notation. Let me write $a_{i,j}=1$ to mean that player ONE writes a $1$ in the $(i,j)$-square of the matrix, $a_{i,j}=0$ to mean that player ZERO writes a $0$ in that square. The game starts with the move $a_{2,2}=1$. I claim that ZERO can win by replying with $a_{1,1}=0$. Now, because of symmetry, there are only four choices for ONE's next move: $a_{2,1}=1,a_{3,1}=1,\ a_{3,2}=1,\ a_{3,3}=1$. First variation. $a_{2,2}=1,\ a_{1,1}=0,\ a_{2,1}=1,\ a_{1,3}=0,\ a_{1,2}=1,\ a_{3,3}=0$ with a double threat of $a_{2,3}=0$ and $a_{3,1}=0$. Second variation. $a_{2,2}=1,\ a_{1,1}=0,\ a_{3,1}=1,\ a_{1,3}=0,\ a_{1,2}=1,\ a_{2,3}=0$ threatening $a_{2,1}=0$ and $a_{3,3}=0$. Third variation. $a_{2,2}=1,\ a_{1,1}=0,\ a_{3,2}=1,\ a_{1,3}=0,\ a_{1,2}=1,\ a_{2,1}=0$ threatening $a_{2,3}=0$ and $a_{3,1}=0$. Fourth variation. $a_{2,2}=1,\ a_{1,1}=0,\ a_{3,3}=1,\ a_{1,2}=0,\ a_{1,3}=1,\ a_{3,1}=0$ threatening $a_{2,1}=0$ and $a_{3,2}=0$.
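The claim can also be verified by exhaustive search over the game tree; a brute-force sketch, independent of the line-by-line analysis above:

```python
from functools import lru_cache

def det3(m):
    """Integer determinant of a 3x3 matrix given as a flat tuple of 9 entries (row-major)."""
    a, b, c, d, e, f, g, h, i = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

@lru_cache(maxsize=None)
def zero_wins(board):
    """True if player ZERO can force det = 0 from this position (board: tuple of 0, 1, None)."""
    filled = sum(v is not None for v in board)
    if filled == 9:
        return det3(board) == 0
    one_to_move = filled % 2 == 0            # ONE moves on even counts, since ONE started
    value = 1 if one_to_move else 0
    outcomes = [zero_wins(board[:i] + (value,) + board[i + 1:])
                for i in range(9) if board[i] is None]
    # ZERO needs a winning reply to every ONE move, but only some move of his own
    return all(outcomes) if one_to_move else any(outcomes)

start = (None,) * 4 + (1,) + (None,) * 4     # ONE has just played a 1 in the centre
print(zero_wins(start))                      # True: ZERO has a winning strategy, as claimed
```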
H: Meaning of "$\exp[ \cdot ]$" in mathematical equations I am reading book "Fuzzy Logic With Engineering Applications, Wiley" written by Timothy J. Ross. I am reading chapter 7 and in this chapter, "Batch Least Squares Algoritm" has been defined. It illustrates the development of a nonlinear fuzzy model for the data in Table 7.1 using the Batch Least Squares algorithm. At the page 218 there is a mathematical equation: I have two questions: 1- As you can see, there is a "exp" phrase (in the red rectangle). What is this? Is it the exponential function? (https://en.wikipedia.org/wiki/Exponential_function) At the link, "What is the meaning of $\exp(\,\cdot\,)$?" it was stated at the link that it is an exponential function, but I noticed that ordinary paranthesis has been used. In my equation, square brackets is used. 2- What is the purpose of the equation? Thanks in advance. AI: The exponential function $\exp: \Bbb{R} \to \Bbb{R}$ is the function $\exp(x) = e^x$. There is no difference between $(\cdot)$ and $[\cdot]$ here. It is just a way to make things look nicer, and attempt to clarify the order in which the brackets should be read. For example, that red thing you circled can also be written as: \begin{align} \exp\left(-\dfrac{1}{2} \left( \dfrac{x_1 - c_1^1}{\sigma_1^1}\right)^2 \right) \quad \text{or} \quad e^{-\frac{1}{2} \left( \frac{x_1 - c_1^1}{\sigma_1^1}\right)^2} \end{align} These are all correct, but which one "looks the nicest"? Well, to the author, it seems his/her favourite is \begin{align} \exp\left[-\dfrac{1}{2} \left( \dfrac{x_1 - c_1^1}{\sigma_1^1}\right)^2 \right]. \end{align}
H: Closed ball is weakly closed The problem is in a Banach space, if $||x_n||\leq 1$ and $x_n\to x$ weakly, then $||x||\leq 1$ This question has an answer here: math.stackexchange.com/questions/714049/closed-unit-ball-in-a-banach-space-is-closed-in-the-weak-topology But for convenience, I will repost the answer. "If $x_n \to x$ weakly then we have that $\lambda x_n \to \lambda x$ for all $\lambda \in V^*$ and $|\lambda x_n| \leq \|\lambda\| \|x_n\|$. Dividing both sides by $\|\lambda\|$ gives $$ \frac{|\lambda x_n|}{\|\lambda \|} \leq \|x_n\|.$$ Taking $n \to \infty$ and substituting in $\|x\| = \sup_{\lambda \in V^*} \frac{|\lambda x|}{\|\lambda \|}$ gives $$\|x\| \leq \liminf_{n \to \infty} \|x_n\|.$$Then any limit $x$ of $x_n$ with $\|x_n \| \leq 1$ for all $n$ will necessarily have $\|x\| \leq 1$. " My question is, how does "liminf" appear? My understanding is that when we take $n\to \infty$, the left side is $||x||$, but would the right side be $\lim ||x_n||$? AI: Weak convergence does not guarantee existence of $\lim \|x_n\|$. So we cannot take the limit on both sides of the inequality $|\lambda x_n | \leq \|\lambda\| \|x_n\|$. But we can always take $\lim \inf $ or $\lim \sup$. When you take $\lim \inf$ on both sides the LHS becomes $|\lambda x|$ because if a sequence is convergent then its $\lim \inf$ is the same as its limit.
H: When does an increasing sequence $a_n$ lead to an increasing $\frac{1}{a_n}$? I was asked the following question: When does an increasing sequence $a_n$ lead to an increasing $\frac{1}{a_n}$? I could not think of an example. It looks like if $a_n$ is increasing then $\frac{1}{a_n}$ is decreasing. Is it true? If so, how do I prove it formally? AI: $\{-1,2,3,...\}$ is increasing and $\{-1,\frac 1 2, \frac 1 3,...\}$ is neither increasing nor decreasing. If $a_n$'s are all positive or all negative then $(\frac 1 {a_n})$ is necessarily decreasing because the function $f(x)=\frac 1 x$ is decreasing on both $(0,\infty)$ and $(-\infty,0)$.
H: Opposite determinant in Autonne-Takagi factorization Let us consider a complex symmetric matrix in $M_2(\mathbb C)$ \begin{equation} A = \begin{pmatrix} x_1+ix_2 & x_3 \\ x_3 & -x_1+ix_2 \end{pmatrix} \end{equation} where the $x_i\in \mathbb R,\;\;i=1,2,3$. The Autonne-Takagi factorization theorem tells that a unitary matrix $U$ exists such that \begin{equation} U^T A U=D \end{equation} where \begin{equation} D =\begin{pmatrix} d_1& 0 \\ 0 & d_2 \end{pmatrix} \end{equation} is a diagonal non negative matrix, with real entries. Given that \begin{equation} D^2 = D^\dagger D = (U^T A U)^\dagger U^T A U=U^\dagger A^\dagger (UU^\dagger)^TA U=U^\dagger (A^\dagger A) U \end{equation} we can see that $d_1^2$ and $d_2^2$ are the eigenvalues of the Hermitian matrix $A^\dagger A$. In our case, this matrix is already diagonal, as \begin{equation} A^\dagger A =\begin{pmatrix} x_1^2+x_2^2+x_3^2& 0 \\ 0 & x_1^2+x_2^2+x_3^2 \end{pmatrix} \end{equation} Given that the theorem tells us that $D$ is non negative, I would take $d_{1,2}=+\sqrt{x_1^2+x_2^2+x_3^2}$. However, if we compute the determinant of the factorization, we would obtain \begin{equation} \det(U^T)\det A \det U = \det A = - x_1^2-x_2^2-x_3^2 =\det D = x_1^2+x_2^2+x_3^2 \end{equation} hence the only solution would be $x_i=0$, $i=1,2,3$. What am I missing? AI: The factorization tells you that $U^\intercal AU = D$ for a suitable unitary matrix $U$. But $U$ is in general complex (even if $A$ is real), as can be easily seen by some examples. Hence, $U^\intercal U \neq I$ in general (this is only the case if $U$ is real). This implies that the equality $$\det(U^\intercal AU) = \det(U^\intercal U) \det(A) = \det(A)$$ does not hold in general if $\det(U^\intercal U) \neq 1$. Consider for example the matrix $U = \operatorname{diag}( i, 1)$, then $U$ is unitary with $\det(U) = \mathrm i$, but $\det(U^\intercal U) = - 1$. What you indeed always can say is that $\lvert \det (U) \rvert = 1$, therefore it follows that $$\lvert \det(U^\intercal AU) \rvert = \lvert \det(A) \rvert= \det(D).$$ Your given example is an example for the last equality.
H: Do I have the chain rule right? I was revising chain rule and I made up a problem to write down in my notes that uses it at least two times. Here it is, if a function $\zeta(x) = (z(x))^2$ where $z(x) = x + f(x), f(x) = \ln(g(x))$ and $g(x) = \frac{1}{2}x^2$ then $\zeta'$ or $\frac{d\zeta}{dx}$ is defined as, \begin{align*} \zeta'(x) & = \frac{d\zeta}{dz}\times \frac{dz}{df}\times \frac{df}{dg} \times \frac{dg}{dx}\\ \zeta'(x) & = 2(z(x))z'(x) \\ & = 2(z(x))(1 + f'(x)) \\ & = 2(z(x))(1 + (\ln(g(x)))') \\ & = 2(z(x))\Big(1 + \Big(\frac{1}{g(x)}\Big)g'(x)\Big) \\ & = 2(z(x))\Big(1 + \Big(\frac{1}{g(x)}\Big)x\Big) \\ \end{align*} Did I get it right? AI: Have a little more confidence! Your work is completely correct. About writing it in Leibniz notation: we have $\zeta(x) = \varphi(y)$, where $\varphi$ is defined by $x \mapsto x^2$ and $h$ by $y \mapsto h(y) = y + f(y)$. Therefore: $$ \begin{aligned} \frac{\mathrm{d}\zeta}{\mathrm{d}x} &= \frac{\mathrm{d} \varphi}{\mathrm{d}y} \frac{\mathrm{d}y}{\mathrm{d}x} \\ &= \frac{\mathrm{d} \varphi}{\mathrm{d}y} \left(1 + \frac{\mathrm{d}f}{\mathrm{d}y} \right) \end{aligned}$$ And from here on you can use the definitions of $f$ and $g$ to expand the last equality in terms of $f$ and $g$, but I'd rather leave it at just the first one.
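Not part of the original exchange, but a quick symbolic check of the derivative above is easy to run; this is a minimal sketch using sympy, with $g$, $f$, $z$, $\zeta$ defined exactly as in the question.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
g = x**2 / 2                      # g(x) = (1/2) x^2
f = sp.log(g)                     # f(x) = ln(g(x))
z = x + f                         # z(x) = x + f(x)
zeta = z**2                       # zeta(x) = z(x)^2

# derivative claimed at the end of the question: 2*z(x)*(1 + g'(x)/g(x))
claimed = 2*z*(1 + sp.diff(g, x)/g)

print(sp.simplify(sp.diff(zeta, x) - claimed))   # 0, so the chain-rule computation checks out
```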
H: How to write numbers in the language of first-order set theory. I saw this Numberphile video (link at bottom), and at around 10:10 they talk about writing numbers in the language of first-order set theory. For example, to write $0$, it showed the empty set: $$\exists x_1\neg\exists x_2(x_2\in x_1)$$ And to write $1$, it said: $$\exists x_1\forall x_2(x_2\in x_1\leftrightarrow(\neg\exists x_3(x_3\in x_2)\vee\forall x_3(x_3\in x_2\leftrightarrow\neg\exists x_4(x_4\in x_3))))$$ Edit: An answer has corrected this and showed that the above formula is for $2$, not $1$. I have left the above as this is what the video showed. The video then alluded to writing other numbers, saying that it takes less symbols as the numbers get larger. This made me curious and I wanted to find out more about this topic. I have done research and tried to find how to write other numbers in the language of first-order set theory, but I haven't been able to find anything. Question Does anyone know any resources/websites with information on how to write numbers in the language of first-order set theory? I have searched the internet many times but I haven't been able to find this. Or is it just that there are no websites about this because there is no use for them and nobody really cares about them? Thanks. Numberphile video-The Daddy of Big Numbers (Rayo's Number)-Numberphile https://youtu.be/X3l0fPHZja8 AI: The logical formulas you have written describe the Von Neumann Ordinals for 0 and 1. This is one particular encoding of numbers in terms of sets, which can be easily turned into numbers in terms of logical formulas by writing a sentence saying explicitly what is in that set (this describes the set uniquely by extensionality). For instance, to say $0 = \emptyset$, which it is, we would instead say $\forall x . x \not \in 0$. That formula forces $0 = \emptyset$. The formula you've written says "the number $0$ exists". Similarly, $1 = \{ \emptyset \}$. So we can express in the language of logic by writing $\forall x . x \in 1 \leftrightarrow x = 0$. If we want to be purists, and avoid using the defined symbol $0$: $$ \forall x . x \in 1 \leftrightarrow (\forall y . y \not \in x)$$ The second formula you've written is actually the number $2$. Well, it's the formula saying "the number $2$ (which it is calling $x_1$) exists", but they're very similar ideas. If you want a reference for this material, any decent book on set theory will do. In fact, just knowing that these are called "von Neumann Ordinals" will help tremendously. Good luck! Edit: The sentence you have written is quite long, so I'll color code it for ease of reference. Each part of this says something that points towards "$x_1 = 2$". Let's break it down: $$ \exists x_1 \forall x_2 ( x_2 \in x_1 \leftrightarrow ( \color{blue}{\lnot \exists x_3 (x_3 \in x_2)} \lor \color{green}{\forall x_3 (x_3 \in x_2 \leftrightarrow} \color{red}{\lnot \exists x_4 (x_4 \in x_3)}\color{green}{)} ) ) $$ This says: there exists a set $x_1$ (which we will soon see to be $2$) such that $x_2 \in x_1$ iff some condition holds on $x_2$. 
Remember, we want this condition to be "$x_2 = 0 \lor x_2 = 1$". The blue part of this condition says $x_2 = \emptyset$, so $x_2 = 0$. The green part says that the only thing in $x_2$ is the element described by the red condition; notice the red part says "$x_3 = \emptyset$", or, $x_3 = 0$. So the green part says $x_2 = \{ \emptyset \}$, equivalently, $x_2 = 1$. So the blue and green parts together say $x_2 = 0 \lor x_2 = 1$. This is exactly what we wanted, and $x_1 = \{0, 1\} = 2$. I hope this helps ^_^
H: An application of Uniform Boundedness Theorem I did this problem before "Let $X$ be a compact Hausdorff space and assume that $C(X)$ is equipped with a norm $||.||$ with which this is a Banach space. For each $x\in X$, define $\lambda_x:C(X)\to \mathbb{R}, f \mapsto f(x)$. Prove that $\sup_x ||\lambda_x||$ is bounded." What I did: Fix $f\in C(X)$. Then we observe that $\sup_x |\lambda_x(f)|=\sup_x |f(x)|=||f||_{\infty}<\infty$. Hence by the Uniform Boundedness Principle, we deduce $\sup_x ||\lambda_x||$ is bounded. This answer was marked correct. But now I read this, why is $||f||_{\infty}<\infty$? Sorry if this is a dumb question. AI: Because every continuous function from a compact space into $\Bbb R$ is bounded.
H: if $B$ is countable, then the following are equivalent Suppose $B \neq \varnothing$. Prove the following are equivalent ${\bf A.}$ B countable ${\bf B.}$ there is a surjection $f: \mathbb{Z}_+ \to B$ ${\bf C.}$ there is an injection $g: B \to \mathbb{Z}_+ $ Attempt: (I already proved $A \implies B$) First we prove $B \implies C$. Let $f$ be surjection. Since $B$ is not empty, it has a smallest element, say $b_1$ and $f$ surjection $\implies$ there is some $i_1 \in \mathbb{Z}_+$ such that $f(i_1) = b_1$ Now, consider $B \setminus \{ b_1 \}$. If this set is empty, then $g(b_1) = i_1$ is desired injection. If not, then there is smallest element in $B \setminus \{b_1\}$, call it $b_2$ and so $\exists i_2 \in \mathbb{Z}_+$ so that $f(i_2) = b_2$ Now, if $B \setminus \{ b_1, b_2\}$ is empty, then $g(b_k) = i_k $ for $k=1,2$ If we continue in this fashion, we obtain a list $\{ b_1,b_2,...... \}$ so that $g(b_k) = i_k $ where $i_1,i_2,.....$ are positive integers. ${\bf C \implies A}$ Take $g: B \to \mathbb{Z}_+$ an injection. We need to prove $B$ is countable. By contradiction if $B$ is uncountable, then there is ${\bf NO}$ bijection from $B \to \mathbb{Z}_+$ but this really doesn't help, we can still have injections. My other idea is to proceed as : since $g$ is injection then $g$ maps some $b_i$ from $B$ in one-to-one correspondence to positive integers: $g(b_i) = i$ say $i \leq n$ But I am having trouble seeing how to extend this to a surjection. Any help? Is my first implication correct? AI: C is usually taken as the definition of countable. To finish the proof, show that for a nonempty set $X$ there exists an injection $f\colon X \to Y$ if and only if there exists a surjection $g\colon Y \to X$. Left to right: let $g(y) = f^{-1}(y)$ for all $y$ in $f(X)$, and for all $y$ not in $f(X)$ let $g(y)$ be any fixed element of $X$. Conversely, for each $x$ in $X$ define $f(x)$ to be some element of $g^{-1}(x)$, which is nonempty because $g$ is surjective.
H: Find all positive integers $n$ such that $\varphi(n)$ divides $n^2 + 3$ This was the Question:- Find all positive integers $n$ such that $\varphi(n)$ divides $n^2 + 3$ What I tried:- I knew the solution and explanation of all positive integers $n$ such that $\varphi(n)\mid n$ . That answer was when $n = 1$, or $n$ is of the form of $2^a$ or $2^a3^b$ . I tried to relate this fact with this problem in many ways, but couldn't get to a possible solution. Any hints or suggestions will be greatly appreciated AI: First, we observe that $n$ can't be even. If it were even, $n^2+3$ would be odd and hence $\varphi(n)$ couldn't divide $n^2+3$, as $\varphi(n)$ is always even unless $n=1,2$. Since $\varphi(n)=1$ for $n=1,2$, these are two trivial solutions. Let $n=3^ap_1^{a_1}p_2^{a_2}\ldots p_k^{a_k}$, where the $p_j$'s are primes bigger than $3$ and $a_j\geq1$. If for some $j$ we have $a_j\geq2$, then $p_j\mid \varphi(n)\implies p_j\mid n^2+3\implies p_j\mid3$. A contradiction! Hence $a_j=1$ for all $1\leq j\leq k$. Now if $a>0$, $v_3(n^2+3)=1$. Since $v_3(\varphi(n))\geq(a-1)$, we must have $a\leq2$, and if $a=2$ then $3\nmid p_j-1$ for all $1\leq j\leq k$. case $1$: Let $a=0$. Then $n=p_1p_2\ldots p_k$ and $\varphi(n)=(p_1-1)(p_2-1)\ldots(p_k-1)$. Clearly $2^k\mid\varphi(n)$, i.e. $v_2(\varphi(n))\geq k$, and hence $v_2(n^2+3)\geq k$. Since $n$ is odd, $n^2\equiv1\pmod{8}\implies n^2+3\equiv4\pmod{8}\implies v_2(n^2+3)=2$. This means $k\leq2$. For $k=1$, we have $n$ is a prime $p$ and $(p-1)\mid (p^2+3)$. Now $(p-1)\mid(p-1)^2=p^2-2p+1\implies (p-1)\mid((p^2+3)-(p^2-2p+1))=2(p+1)$. Since $p$ is odd, $(p-1)=2$ or $(p-1)=4$; since we have assumed that $p>3$, in this case the only solution is $p=5$. For $k=2$ we have the situation $n=pq$ for two distinct primes bigger than $3$. We have to solve the congruence $(pq)^2+3\equiv0\pmod{(p-1)(q-1)}$. $(p-1)(q-1)\mid((pq)^2-p^2q-pq^2+pq)\implies (p-1)(q-1)\mid(p^2q+q^2p+3-pq)$. Also $(p-1)(q-1)\mid(p^2q-p^2-pq+p)$ and $(p-1)(q-1)\mid(q^2p-q^2-pq+q)$. Therefore, $(p-1)(q-1)\mid(p^2q+q^2p-p^2-q^2-2pq+p+q-(p^2q+q^2p+3-pq))\implies (p-1)(q-1)\mid(p^2+q^2+pq-p-q+3)\implies (p-1)(q-1)\mid(p^2+q^2+2)$. So $(p-1)\mid(p^2+q^2+2)\implies(p-1)\mid(p^2+q^2+2-p^2+2p-1)=(q^2+2p+1)\implies (p-1)\mid(q^2+2p+1-(2p-2))=(q^2+3)$. Similarly we get $(q-1)\mid(p^2+3)$. Let $\mathrm{WLOG}$ $3<p<q$. (Note that allowing $p=3$ would give $q=7$ and $n=21$, but then $3\mid n$, which belongs to case $2$ below.) Since $(pq)^2+3\equiv4\pmod{8}$ and $(p-1)(q-1)\mid((pq)^2+3)$ we get $v_2(p-1)=v_2(q-1)=1$. Any odd prime dividing $p-1$ or $q-1$ divides $(pq)^2+3$, and hence $-3$ is a quadratic residue modulo that prime, so it is either $3$ or of the form $6l+1$. But $3$ is impossible here: $3\mid\varphi(n)$ would force $3\mid n^2+3$, while $n^2+3\equiv1\pmod{3}$ because $3\nmid n$ in this case. Hence every odd prime factor of $(q-1)/2$ is of the form $6l+1$, so $(q-1)/2\equiv1\pmod{6}$ and $q-1\equiv2\pmod{6}$, which implies $3\mid q$. A contradiction! Therefore there is no solution with two prime factors bigger than $3$. case $2$: Let $a=1$. In this case $n=3$ is a solution as $\varphi(3)=2\mid3^2+3=12$. We investigate now the other possibilities. For $a=1$, if $n\neq3$, then $n$ must be of the form $3p$ for some odd prime $p>3$; otherwise $v_2(\varphi(n))>2$, which can't be possible as we have shown before. In this case, the situation is $\varphi(3p)=2(p-1)\mid(9p^2+3)$. We have $(p-1)\mid(9p^2+3)\implies (p-1)\mid(9p^2+3-9p^2+9p)=(9p+3)\implies (p-1)\mid12$. Then $p$ can be $5$, $7$ or $13$. For $p=5$ and $p=13$, $v_2(\varphi(n))=3$, which is not possible. So in this case the only possible solution is $n=3\cdot7=21$. last case: For $a=2$, we have that $9$ is a solution. By similar arguments as above, we can show that there can't be any other solutions.
Hence the only possible solutions are $n=1,2,3,5,9,21$. DONE!
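As a sanity check on the final list (not part of the original answer), a brute-force search is straightforward; the sketch below assumes sympy is available and simply tests the divisibility condition directly for small $n$.

```python
from sympy import totient

# brute-force check of phi(n) | n^2 + 3 for small n
solutions = [n for n in range(1, 10001) if (n*n + 3) % totient(n) == 0]
print(solutions)   # should reproduce [1, 2, 3, 5, 9, 21]
```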
H: Calculating the mean and standard deviation of a Gaussian mixture model of two curves An ELO rating is a Gaussian curve with a mean and a standard deviation. Assuming there are two such ratings that belong to the same player (he's using two separate online identities so he has two separate ratings) - How would I best merge the two curves into one curve representing the ELO of the persona? Extending the question based on comments below: The rating's mean is the approximate skill of the player, and the standard deviation is the level of confidence of the system in the skill approximation. The suggested model is to use a Gaussian mixture model with some probability of picking each of the identities, and then calculate the mean and standard deviation of the resulting distribution. I know the mixed distribution is not Gaussian, but I need just two parameters, so this is what I am after. In short How do you calculate the mean and standard deviation of a Gaussian mixture model of two Gaussian curves ($\mu_1$, $\sigma_1$), ($\mu_2$, $\sigma_2$) with probability of p and (1-p) for each curve? AI: $\newcommand{\N}{\mathcal{N}}\newcommand{\Var}{\mathrm{Var}}\newcommand{\E}{\Bbb{E}}$Assume that the two personas are represented by distributions $X_1\sim \N\left(\mu_1, \sigma_1^2\right)$ and $X_2\sim \N\left(\mu_2, \sigma_2^2\right)$, where $\mu_k$ and $\sigma_k^2$ are the mean and variance respectively of $X_k$, for $k=1,2$. Assume that $X_1$ and $X_2$ are independent. We can model the overall persona as coming from $X_1$ with some probability $p$, or coming from $X_2$ otherwise (with probability $1-p$). That is, if $Z$ is the overall persona, then $Z = IX_1 + (1-I)X_2$, where $I$ is a random variable that is $1$ with probability $p$ and $0$ with probability $1-p$, and $I,X_1,X_2$ are independent. In this case, $Z$ (the overall persona) is modelled as a Gaussian Mixture Model, with probability density function $f_Z(z) = pf_{X_{1}}(z)+(1-p)f_{X_{2}}(z)$, where $f_{X_{k}}$ is the probability density function of $X_k$, $k=1,2$. If you just want the mean and variance of the overall persona $Z$ (to use for a Gaussian model), the formulas are: $\Bbb{E}[Z] = p \mu_1 + (1-p)\mu_2$ and $\Var(Z) = p\sigma_1^2 +(1-p)\sigma_2^2 + p(1-p)\left(\mu_1-\mu_2\right)^2.$ Some hints to proving the formulas for the mean and variance of $Z$ are to recall the following facts: $\E[Z] = \E[\E[Z\mid I]]$ by the Law of Total Expectation $\Var(Z) = \E[\Var(Z\mid I)] + \Var(\E[Z\mid I])$ by the Law of Total Variance If $Y$ is a random variable that takes value $a$ with probability $p$ and value $b$ with probability $1-p$ (where $a,b$ are constants), then $\E[Y] = pa+(1-p)b$ and $\Var(Y) = p(1-p)(a-b)^2$.
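A small numerical illustration (not from the original answer) of the two formulas, with made-up values for $\mu_k$, $\sigma_k$ and $p$; the Monte Carlo estimate should match the closed-form mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical parameters for the two identities
mu1, sigma1 = 1500.0, 120.0
mu2, sigma2 = 1650.0, 80.0
p = 0.4                      # probability a game comes from identity 1

# closed-form mixture mean and variance
mean = p*mu1 + (1 - p)*mu2
var = p*sigma1**2 + (1 - p)*sigma2**2 + p*(1 - p)*(mu1 - mu2)**2
print(mean, np.sqrt(var))

# Monte Carlo check
n = 1_000_000
from_first = rng.random(n) < p
samples = np.where(from_first, rng.normal(mu1, sigma1, n), rng.normal(mu2, sigma2, n))
print(samples.mean(), samples.std())
```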
H: Find the $n$th term of sequence in the form of $a_{n+2}=ba_{n+1}+ca_n+d$ $a_1=1$, $a_2=3$, $a_{n+2}=a_{n+1}-2a_n-1$ How do you solve this? I only solve the sequence in the form $a_{n+2}=ba_{n+1}+ca_n$ before by writing it in $x^2-bx-c=0$ but for this I don't know how to. Please help AI: The method is fairly similar, you suppose the solution is composed of 2 parts, namely the particular solution (which takes care of the constant and the homogenous solution which is the case you know how to solve. $$a_{n} = a^{p}_{n} + a^{h}_{n}$$ To solve for $a_{n}^{p}$ say it is some constant $c$ and plug it in to the reccurence relation. $$c = c - 2c - 1$$ $$2c = -1$$ $$c = -\frac{1}{2}$$ For the homogenous solution suppose the $-1$ isn't there, you get the following reccurence relation: $$a_{n+2} = a_{n+1} -2a_{n}$$ Which has the characteristic polynomial: $$x^2 - x + 2 = 0$$ $$x_{1, 2} = \frac{1 \pm \sqrt{1 - 8}}{2} = \frac{1 \pm i \sqrt{7}}{2}$$ Then our solution has the following form: $$a_{n} = \alpha(\frac{1 - i \sqrt{7}}{2})^n + \beta(\frac{1 + i \sqrt{7}}{2})^n - \frac{1}{2}$$ All that remains is to solve for $\alpha, \beta$ by evaluating it at $n=1, 2$.
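As a check on the answer (my addition, not part of it), one can solve for $\alpha,\beta$ numerically from $n=1,2$ and compare the resulting closed form against the recurrence; a rough sketch:

```python
import numpy as np

# iterate the recurrence a_{n+2} = a_{n+1} - 2*a_n - 1 with a_1 = 1, a_2 = 3
a = {1: 1, 2: 3}
for n in range(1, 15):
    a[n + 2] = a[n + 1] - 2*a[n] - 1

# general solution: a_n = alpha*r1**n + beta*r2**n - 1/2
r1 = (1 - 1j*np.sqrt(7)) / 2
r2 = (1 + 1j*np.sqrt(7)) / 2
M = np.array([[r1, r2], [r1**2, r2**2]])
alpha, beta = np.linalg.solve(M, np.array([1 + 0.5, 3 + 0.5]))   # from n = 1, 2

for n in range(1, 17):
    closed = alpha*r1**n + beta*r2**n - 0.5
    assert abs(closed - a[n]) < 1e-6
print("closed form agrees with the recurrence for n = 1..16")
```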
H: For convex functions $f(x)>1,g(x)>1, x\in \mathbb{R}$, is the product $(f\cdot g)(x)$ necessarily convex? From what I understand, this is how it is: assume $f\cdot g$ is concave. then, $$(f\cdot g)(0)>1$$ by this and the assumption, the product must be less than $1$ for some real $x$. thus, at least one of the functions at $x$ must be less than $1$, which brings to a contradiction. hence, $(f\cdot g)(x)$ is convex $\square$. is this train of thought correct? AI: As mentioned in the comments, your arguments are wrong. For a counterexample, take $$\begin{cases} f(x) &= 2 +(x-2)^2\\ g(x) &= 2 +(x+2)^2 \end{cases}$$ which are both convex and greater than one while $f \cdot g$ is not convex.
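To see concretely that this product fails to be convex (a check I'm adding, not part of the original answer), one can expand it and inspect the second derivative at $0$:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = 2 + (x - 2)**2
g = 2 + (x + 2)**2
h = sp.expand(f*g)

print(h)                              # x**4 - 4*x**2 + 36
print(sp.diff(h, x, 2).subs(x, 0))    # -8 < 0, so f*g is not convex around x = 0
```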
H: Counter-example of Jacobi identity for antisymmetric bilinear operation For a bilinear, antisymmetric, alternating operator to be a Lie bracket, it must satisfy the Jacobi identity. I assume this is because a bilinear, antisymmetric, alternating operator does not always satisfy the Jacobi identity. If I consider this operator without Jacobi identity axiom to be defined on a finite dimensional vector space, I have $$[A_i,A_j]=C_{ij} ^kA_k.$$ I want to find $C_{ij}^k$ such that the Jacobi identity $$\text{Alt}\left([A_i,[A_j,A_k]]\right)=0$$ is not satisfied. Although I am quite sure that there must exist such structure coefficient, I cannot find one. Can someone shed some light on this? Or, is it the case that (although I do not think so) antisymmetry is enough to yield Jacobi identity? AI: For the bracket defined on generators by $\{x,y\} = x$ and $\{x,z\}=y$ and $\{y,z\}=0$, the Jacobiator $$\{x,\{y,z\}\}+\{y,\{z,x\}\}+\{z,\{x,y\}\}=\{x,0\}+\{y,-y\}+\{z,x\}=-y$$ is nonzero. More generally you can expand the Jacobi identity as a system of quadratic equations, e.g. in dimension $3$ for $$\begin{align*}\{x,y\}&=a_1x+a_2y+a_3z,\\\{x,z\}&=b_1x+b_2y+b_3z,\\\{y,z\}&=c_1x+c_2y+c_3z\end{align*}$$ you get the system of equations $$\begin{align*}-a_{2} c_{1} - b_{3} c_{1} + a_{1} c_{2} + b_{1} c_{3}&=0,\\ a_{2} b_{1} - a_{1} b_{2} - b_{3} c_{2} + b_{2} c_{3} &=0,\\ a_{3} b_{1} - a_{1} b_{3} + a_{3} c_{2} - a_{2} c_{3} &=0\end{align*}$$ and it easy to find a non-solution; the one above corresponds to $a_1=b_2=1$ and the rest equal to zero.
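A tiny numerical check of the first counterexample (my addition, not from the original answer): encode the bracket through its structure constants and evaluate the Jacobiator.

```python
import numpy as np

# basis order: x, y, z; C[i, j] holds the coefficients of {e_i, e_j}
x, y, z = np.eye(3)
C = np.zeros((3, 3, 3))
C[0, 1], C[1, 0] = x, -x    # {x, y} = x
C[0, 2], C[2, 0] = y, -y    # {x, z} = y
                            # {y, z} = 0

def br(u, v):
    # bilinear extension of the bracket
    return np.einsum('i,j,ijk->k', u, v, C)

jac = br(x, br(y, z)) + br(y, br(z, x)) + br(z, br(x, y))
print(jac)   # [ 0. -1.  0.], i.e. the Jacobiator equals -y, so the Jacobi identity fails
```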
H: Show that $KL$ is parallel to $BB_1$. The incircle of a triangle $ABC$ has center $I$ and touches $AB , BC , CA$ at $C_1 , A_1 , B_1$ respectively. Let $BI$ intersect $AC$ at $L$ and let $B_1I$ intersect $A_1C_1$ at $K$. Show that $KL$ is parallel to $BB_1$. I've drawn some lines and tried to find angles, but it didn't work. Can anyone give me some hints (or a solution) please? Thank you very much! (I didn't try trigonometry or complex bash yet because it would be very ugly.) AI: Here is my answer. Since $AI$ and $CI$ are the bisectors of $\angle BAL$ and $\angle BCL$, we have $$\dfrac{IL}{IB}=\dfrac{AL}{AB}=\dfrac{CL}{CB}=\dfrac{AC}{AB+BC}.$$ Now $\dfrac{IK}{IB_1}=\dfrac{S_{IA_1 C_1}}{S_{IA_1 B_1}+S_{IB_1 C_1}}$. We have $S_{IA_1 B_1}=\frac{1}{2}\,IA_1\cdot IB_1\sin\angle B_1IA_1= \frac{1}{2}\,IA_1\cdot IB_1\sin C$, and similarly for the other two triangles. So $\dfrac{IK}{IB_1}=\dfrac{\sin B}{\sin A+\sin C}$. We know that $\frac{AB}{\sin C}=\frac{BC}{\sin A}=\frac{CA}{\sin B}$, so $\frac{IL}{IB}=\frac{IK}{IB_1}$, which gives the desired result. Maybe there is another way without using sines and cosines, but I do not know one. Hope this answer is helpful.
H: Two questions about the proof that $\mathbb{Z}^{\oplus n}$ is a free abelian group on $A=\{1,...,n\}$ I am reading Algebra: Chapter 0 and have two questions about the proof that $\mathbb{Z}^{\oplus n}$ is a free abelian group on $A=\{1,...,n\}$. Proof: we first define a function $j:A\rightarrow \mathbb{Z}^{\oplus n}$ by $j(i):=(0,\ldots,0, \underset{i\text{-th place}}{1},0,\ldots,0) \in \mathbb{Z}^{\oplus n}$, so every element of $\mathbb{Z}^{\oplus n}$ can be written uniquely in the form $\sum_{i=1}^{n}m_i j(i)$. Now let $f:A\rightarrow G$ be any function from $A=\{1,...,n\}$ to an abelian group $G$. We define $\phi:\mathbb{Z}^{\oplus n} \rightarrow G$ by $$\phi\left(\sum_{i=1}^{n}m_i j(i)\right):=\sum_{i=1}^{n}m_i f(i)$$ to make the diagram commute. And it is obvious that $\phi$ is a homomorphism. Here are my questions: 1. Since $j$ is injective, for all $i\in A$ I can get $\phi(j(i))=f(i)$. But what is the condition on the elements of $\mathbb{Z}^{\oplus n}$ and $G$ that are not images of some $i\in A$? I think they also must make $\phi$ a homomorphism but needn't make the diagram commute, since their preimages are not involved in this process. Right? 2. However, the operation in $G$ in this proof is "addition". Is the proof still right if the operation in $G$ is multiplication? Is there a way to prove the claim for any commutative binary operation? AI: The commutativity of the diagram just means that $\phi$, restricted to the elements of $A$, coincides with $f$ or, equivalently, that every function from $A$ to an abelian group $G$ extends (uniquely) to a group homomorphism from $\mathbb{Z}^{\oplus n}$, or, as sometimes one says, that every function from $A$ to $G$ factors uniquely through $\mathbb{Z}^{\oplus n}$. Your assertion that the diagram is commutative only on the elements of $A$ just does not make sense, because $A$ is a part of that diagram, and commutativity means that you only have to consider elements of $A$. To have commutativity, you have to start from $A$ and reach $G$ following the two different paths you have (the one through $\mathbb{Z}^{\oplus n}$ and the one which goes directly to $G$), then check that the result is the same. What you can do "outside" $A$ is check that, with your definition of $\phi$, the function $\phi$ is a group homomorphism, but this comes almost automatically from the definition itself. As for the second question, a group is a set with just a single binary operation defined on it. You can call it addition or multiplication, and then use a coherent notation. Since the groups involved are abelian, that is, they satisfy the commutative property, it is customary to use the additive notation with $+$. If you want, you can just shift to product notation on $G$, writing $f(i)^{m_i}$ in place of $m_i f(i)$ and $\prod$ in place of $\sum$.
H: limit of $2^n\arcsin\frac{\sqrt{y_0^2-x_0^2}}{2^ny_n}$ I want to know why $\lim_{n\rightarrow \infty}2^n\arcsin\frac{\sqrt{y_0^2-x_0^2}}{2^ny_n}=\frac{\sqrt{y_0^2-x_0^2}}{y}$ when $y_n\rightarrow y$ as $n\rightarrow \infty$. Here $y_0$ and $x_0$ are constants. I thought about using a theorem $\lim ab=\lim a\lim b$. Then $\lim_{n\rightarrow \infty}2^n=\infty$ and $\lim_{n\rightarrow }\arcsin\frac{\sqrt{y_0^2-x_0^2}}{2^ny_n}=0$. So I think I need a different approach. AI: Hint: recall that: $$ \lim_{x \rightarrow 0} \frac{\arcsin x}{x}=1 $$ Now let $x=\frac{1}{2^n}$, which clearly goes to $0$ as $n \rightarrow \infty$. After some algebraic manipulations, you should obtain the given result.
H: How many degrees of freedom are there when generating a joint probability function for $n$ different binary random variables? To generate a proper probability function one should assign a probability to all $2^n$ possible events. One constraint is that the sum of the probabilities of all events must be equal to one - so this removes one degree of freedom. I tried to think of any other constraints but couldn't find any. Are there any other constraints? or is the answer $2^n - 1$? AI: No, there are no other contraints. Your formulation of the question is slightly unclear. You speak of $n$ events at first and then of $2^n$ events. I suspect that what you mean is: If $n$ events can occur independently of each other, then there are $2^n$ possible events, one for each subset of the $n$ events. Indeed there are $2^n-1$ degrees of freedom in the probability mass function in this case. More generally, if the sample space consists of $k$ elementary outcomes, there are $k-1$ degrees of freedom in the probability mass function.
H: Why does $\sum_{i=1}^n a = (n+1)a$ I'm currently working my way through Eccles's "An Introduction to Mathematical Reasoning" and an alternative proof of $\sum_{i=0}^n (a+ib) = \frac 12 (n+1)(2a+bn)$ states that $$\sum_{i=0}^n (a+ib) = \sum_{i=1}^n a + b\sum_{i=1}^n i = (n+1)a + \frac 12n(n+1)b$$ After some careful thought I understand everything except why $\sum_{i=1}^n a = (n+1)a$. Isn't this just the series "1+2+3+4..." which is represented by $$\sum_{k=1}^n k = \frac{n(n+1)}{2}$$ or am I misunderstanding? AI: We have$$\sum_{i=0}^na=\overbrace{a+a+\cdots+a}^{n+1\text{ times}}=(n+1)a.$$ Note that the sum of the constant $a$ starts at $i=0$, not $i=1$, so there are $n+1$ equal terms; it is not the sum $1+2+\cdots+n$ of the index values, which is $\sum_{i=1}^n i=\frac{n(n+1)}2$ and appears in the second summand.
H: Why am I getting the wrong answer when I factor an $i$ out of the integrand? Consider the following definite integral: $$I=\int^{0}_{-1}x\sqrt{-x}dx \tag{1}$$ With the substitution $x=-u$, I got $I=-\frac{2}{5}$ (which seems correct). But I then tried a different method by first taking out $\sqrt{-1}=i$ from the integrand: $$I=i\int^{0}_{-1}x\sqrt{x}dx=\frac{2i}{5}[x^{\frac{5}{2}}]^{0}_{-1}=\frac{2i}{5}{(0-(\sqrt{-1})^5})=-\frac{2i^6}{5}=+\frac{2}{5} \tag{2}$$ which is clearly wrong. I understand that $x\sqrt{x}$ is not even defined within $(-1,0)$, but why can't we use the same 'imaginary approach' ($\sqrt{-1}=i$) to treat this undefined part of the function (i.e. the third equality in $(2)$). I can't find a better way of phrasing my question so it may seem gibberish, but why is $(2)$ just invalid? AI: I had difficulty understanding the previous answer so am offering an expanded version. Taking your first step, you write $\sqrt{-x} = i\sqrt{x}$. Now try that with $x=-1$. It gives a contradiction, $$1 = \sqrt{1} = i \sqrt{-1} = i^2 = -1.$$ It is not really fixed if you use the alternative sign for $\sqrt{-1}$ because you obtain $$ 1 = \sqrt{1} = -i \sqrt{-1} = (-i) \times (-i) = -1 $$ Only if you take different signs for the imaginary part at each square root do you get the answer you want. Underlying this is a general point about complex valued functions. By convention for real $ x \geqslant 0$, $\sqrt{x}$ is always taken to be the positive root. When $x < 0$ there is no natural convention and $\sqrt{x} $ could be either one of $\pm i\sqrt{-x}$. The difficulty arises because there cannot be a consistent choice for the root of a negative number that at the same time satisfies the desirable identity $\sqrt{xy} = \sqrt{x}\sqrt{y}$. That is because in complex analysis the square root $\sqrt{z}$ has a branch point (that is, it is badly behaved) at $z=0$ and it cannot be extended to a well behaved function across the whole complex plane.
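For what it's worth (not in the original answer), a direct numerical evaluation of the real integral confirms the value $-\tfrac25$ obtained by the substitution $x=-u$:

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: x*np.sqrt(-x), -1, 0)
print(val)   # approximately -0.4 = -2/5
```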
H: Solve equation: $\log_2 \left(1+ \frac{1}{a}\right) + \log_2 \left(1 +\frac{1}{b}\right)+ \log_2 \left(1 + \frac{1}{c}\right) = 2$ $$ \log_2 \left(1 + \frac{1}{a}\right) + \log_2 \left(1 + \frac{1}{b}\right)+ \log_2 \left(1 + \frac{1}{c}\right) = 2 \quad \text{where $a$, $b$, $c \in N$.} $$ Apparently, the answer is $a= 1$, $b =2$, and $c\space = 3$. When I asked my math teacher I was told that the solution involved a bit of number theory, but didn't receive a complete explanation. Could someone clear that up for me? Edit: I had made a mistake in typing the question. I had left it as: $ \log_2 \left(a + \frac{1}{a}\right) + \log_2 \left(b + \frac{1}{b}\right)+ \log_2 \left(c + \frac{1}{c}\right) = 2 \quad \text{where $a$, $b$, $c \in N$.} $ My apologies for causing confusion. AI: The problem, as originally written (with $a + \frac{1}{a}$ etc. inside the logarithms), has no solution. Simplifying that LHS, we get $$(a^2+1)(b^2+1)(c^2+1) = 4abc$$ But, by the AM-GM inequality, we get $x^2+1\ge2x$, which gives $$(a^2+1)(b^2+1)(c^2+1) \ge 8abc > 4abc$$ so no positive integers work. For the corrected equation, $\log_2(1+\frac1a)+\log_2(1+\frac1b)+\log_2(1+\frac1c)=2$ is the same as $(1+\frac1a)(1+\frac1b)(1+\frac1c)=4$, and one checks that $(a,b,c)=(1,2,3)$ works, since $2\cdot\frac32\cdot\frac43=4$.
H: dice question: what is best? I got into a discussion about what is best if you have: 5 dice and 3 throws. You aim for getting 12345 or 23456 - it doesn't matter which one of the 2 combinations. First throw: 12356. So we are missing a 4 to get either 12345 or 23456. What is best for the 2nd throw? To keep 235 and throw 2 dice (to get a 4 plus a 1 or 6)? Or to keep 2356 and throw 1 die (to get a 4)? Remember there are 3 throws total, so I assume it might be smartest to keep the 6 and use 2 throws to try for a 4 with the same die? It just feels like you have more chances of getting the 4 throwing 2 dice - but of course you would then need a 4-and-1 or a 4-and-6, instead of only needing a 4 in 2 throws with 1 die. AI: It’s better to keep the $1$ or $6$. If you reroll it, you just need to get it again. Your chances of getting the straight in a single roll are $\frac16$ if you keep the $1$ or $6$ (since you need one specific number on one die) and $\frac4{36}=\frac19$ if you reroll the $1$ or $6$ (since you need one of the four ordered pairs $(1,4)$, $(6,4)$, $(4,1)$, $(4,6)$). Since you have two rolls, if you keep the $1$ or $6$, your chances of getting the straight are $1-\left(\frac56\right)^2=\frac{11}{36}\approx31\%$ (since you get it unless you don’t roll a $4$ on either roll). If you reroll the $1$ or $6$, with probability $\frac19$ you immediately get the straight; with probability $\frac14$ you get neither a $1$ nor a $4$ nor a $6$, and then you have another $\frac19$ chance on the second roll; with probability $\frac7{36}$ you get a $4$ but no $1$ or $6$, and then you have probability $\frac13$ to get the $1$ or $6$ on the second roll; and with probability $\frac{16}{36}=\frac49$ you get a $1$ or $6$ but no $4$, and then you have probability $\frac16$ to get the $4$ on the second roll, so in total your probability to get the straight in two rolls if you reroll the $1$ or $6$ is $$ \frac19+\frac14\cdot\frac19+\frac7{36}\cdot\frac13+\frac49\cdot\frac16=\frac5{18}\approx28\%\;. $$
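A quick Monte Carlo comparison of the two strategies (my addition, not part of the original answer); it assumes the greedy play described above, i.e. any useful die rolled on the second throw is kept for the third.

```python
import random

TRIALS = 200_000

def finishes_straight(need_four, need_one_or_six):
    # simulate the two remaining throws, keeping any useful die after the first of them
    for _ in range(2):
        rolls = [random.randint(1, 6) for _ in range(need_four + need_one_or_six)]
        if need_four and 4 in rolls:
            rolls.remove(4)
            need_four = 0
        if need_one_or_six and any(r in (1, 6) for r in rolls):
            need_one_or_six = 0
        if not (need_four or need_one_or_six):
            return True
    return False

keep_the_6 = sum(finishes_straight(1, 0) for _ in range(TRIALS)) / TRIALS
reroll_it  = sum(finishes_straight(1, 1) for _ in range(TRIALS)) / TRIALS
print(keep_the_6, 11/36)   # ~0.306: keep four dice, chase only the 4
print(reroll_it, 5/18)     # ~0.278: keep three dice, chase the 4 and a 1-or-6
```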
H: Determine convergence annulus of Laurent series without computing the Laurent series Take the holomorphic function: $$\mathbb{C} \setminus \{2k \pi i \ \mid \ k \in \mathbb{Z} \} \ni z \mapsto \frac{1}{e^z - 1} \in \mathbb{C}. $$ How can we determine the annulus of convergence of the Laurent series around the point $z = 0$ of the above function, without actually computing the Laurent series? I don't really know how to start. Of course, for $z = 0$, the above function is not defined, so the annulus of convergence will be $\{z \in \mathbb{C} \ \mid \ 0 < |z| < R, \}$ for some $R \in \mathbb{R}_+$. Also, the function is again not defined in $2\pi i$ (by periodicity of the exponential). So I believe that the annulus of convergence will be $$\{z \in \mathbb{C} \ \mid \ 0 < |z| < |2\pi i| = 2\pi \}. $$ Is this correct? Also, how would I formally prove this (besides saying that a bigger radius of convergence would include the point $2\pi i$, in which the function is not defined)? AI: The Laurent series $\sum_{n=-\infty}^\infty a_nz^n$ of $f$ must converge on any annulus contained in its domain. So, it must converge in$$\{z\in\Bbb C\mid0<|z|<2\pi\}.\tag1$$But, if it converged on a larger annulus, the limit$$\lim_{z\to2\pi i}\sum_{n=-\infty}^\infty a_nz^n\tag2$$would exist (in $\Bbb C$). But $(2)$ is equal to $\lim_{z\to2\pi i}f(z)$, which does not exist (again, in $\Bbb C$). So, the answer is $(1)$.
H: Doubt on conditional expected value Let $S$ and $T$ two random indipendent variables with exponential distribution, and let $\mathbb{E}(S)=\alpha,\mathbb{E}(T)=\beta$ . 1) Find the distribution of $Y=\min(S,T)$. $\rightarrow Y\sim Exp(\frac{1}{\alpha}+\frac{1}{\beta})$ 2) Find the probability of event $\mathbb{P}(S<T)$. $\rightarrow \mathbb{P}(S<T)=\frac{1}{2}$ Given that these two points I believe they're correct, I have a doubt on the following third point: 3) Find $\mathbb{E}(S+T|S>4)$. $\rightarrow \mathbb{E}(S+T|S>4)=\mathbb{E}(S+T|S>4,T>0)=\mathbb{E}(S|S>4)+\mathbb{E}(T|T>0)=$ $\mathbb{E}(S|S>4)+\mathbb{E}(T)=4+\alpha +\beta$. Is it correct? AI: @Francesco Totti (friendly... er pupone?) The integral is correct but the solution is $\frac{\beta}{\alpha+\beta}$ intermediate result $\frac{1}{\alpha}\int_0^\infty e^{-s \frac{\alpha+\beta}{\alpha \beta}}ds$ 3) it is correct but simply $\mathbb{E}[S+T|S>4]=\mathbb{E}[S|S>4]+\mathbb{E}[T|S>4]=4+\mathbb{E}[S]+\mathbb{E}[T]$ a) $\mathbb{E}[S|S>4]=4+\mathbb{E}[S]$ follows immediately from lack of memory property b) $\mathbb{E}[T|S>4]=\mathbb{E}[T]$ follows immediately from independence between S and T
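A short simulation (not from the original thread) makes both corrections easy to believe; the parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 2.0, 5.0                 # E[S] = alpha, E[T] = beta (arbitrary choices)
n = 2_000_000
S = rng.exponential(alpha, n)          # numpy's scale parameter is the mean
T = rng.exponential(beta, n)

print((S < T).mean(), beta/(alpha + beta))            # P(S < T) = beta/(alpha+beta)
mask = S > 4
print((S[mask] + T[mask]).mean(), 4 + alpha + beta)   # E[S+T | S>4] = 4 + alpha + beta
```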
H: Prove that $AB*CD+AD*BC\ge2*A(ABCD)$ Let $ABCD$ be a quadrilateral. Then prove $(AB\cdot CD+AD\cdot BC)\geq 2A(ABCD)$, where $A(ABCD)$ means the area of $ABCD$. AI: Define $D'$ s.t. $|A-D|=|C-D'|,\ |C-D|=|A-D'|$ so that ${\rm area}\ ABCD = {\rm area}\ ABCD'$. Further, \begin{align*} 2\,{\rm area}\ ABCD' &=|B-C||C-D'|\sin \angle BCD' + |A-B||A-D'|\sin \angle BAD' \\& \leq | B-C||A-D| + |A-B| |C-D|, \end{align*} since each sine is at most $1$ and $|C-D'|=|A-D|$, $|A-D'|=|C-D|$; this is exactly $BC\cdot AD+AB\cdot CD \ge 2A(ABCD)$.
H: $A$ is a matrix of 3 rows and 2 columns while $B$ is a matrix of 2 rows and three columns. If $AB=C$ and $BA=D$, then pick the correct option Options A) determinant of C and D are always equal B) determinant of C is zero C) determinant of D is zero The reasoning should be really simple. $$|C|=|A||B|$$ $$|D|=|B||A|$$ So $$|D|=|C|$$ But the right answer is option b. I have the written explanation for this answer, but I found it ridiculous and ultimately unsatisfactory. What could be the reason for b) being the answer? Given solution Let $A'$ be a 3x3 matrix whose first two columns are identical to $A$ and whose third column has all elements equal to zero. Similarly, construct $B'$ as a 3x3 matrix (first two rows those of $B$, third row zero). Then $AB=A'B'=C$ $\implies$ $|C|=|A'||B'|=0$. AI: Your identity for determinant multiplication is only true for square matrices, since other matrices do not have a proper determinant. What you can do to check if the determinant of a matrix is $0$ is ask: does this matrix have a non-trivial kernel? And indeed, the kernel of $C$ is non-trivial - $B$ sends a $3$-dimensional space into a $2$-dimensional one, which means that some non-zero vector must be mapped to $0$, and no matter what you do later with $A$, you already have a non-trivial kernel (which can only grow with $A$), and so $C$ must have determinant $0$.
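A quick random check of the rank argument (added here, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((2, 3))

C = A @ B    # 3x3 but of rank at most 2
D = B @ A    # 2x2, generically invertible

print(np.linalg.det(C))   # ~0 up to rounding error
print(np.linalg.det(D))   # generally nonzero
```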
H: What is the history of the different logarithm notations? There are two different notation of writing logarithms. In my country (Indonesia), the notation of writing logarithms is $$ ^a\log b $$ but the most commonly used notation is $$ \log_{a}b $$ I'm used to the widely used notation. I've searched the web for why there are two different notation, but I couldn't find any. This is only thing I could find relating to that which is what I've explained above that I already know. Are there any historical reasons for this? AI: Based on the comment by @J.G., I researched to the HSM Stack Exchange website. I found this answer there. It is the same book that is also suggested by @bjcolby15. Apparently, it is also widely used in the Netherlands. Considering that Indonesia was colonized by the Dutch, I think the most reasonable explanation is the Indonesians adapted the notation from the Dutch.
H: If densities converge then the corresponding RV converge in distribution I tried to prove the following Theorem: Given $(X_n)_{n\in\mathbb{N}}$ iid. random variables with $\mathbb{E}[X_i^2]<\infty$. If the rv's have respective densities $(f_n)_{n\in\mathbb{N}}$ and $f_n\rightarrow f$ pointwise, it follows that $X_n\stackrel{d}{\rightarrow}X$, meaning convergence in distribution. Proof: $$|F_n(x)-F(x)|\le \left | \int_{(-\infty, x]}f_n(t)\,\mathrm{d}t - \int_{(-\infty, x]}f(t)\,\mathrm{d}t\right |\\ \le \int_{(-\infty, x]}|f_n(t)-f(t)|\,\mathrm{d}t\le \int_\mathbb{R}|f_n(t)-f(t)|\,\mathrm{d}t$$ Now because of Scheffés-Lemma I know that the rhs. converges to zero. Is my proof correct? I did not use the finite variance which gives me doubt. AI: To apply Scheffé's lemma, you must prove that $$\int_{\mathbb{R}} f_n \to \int_{\mathbb{R}} f$$ Since each $f_n$ is a density, $\int_{\mathbb{R}} f_n = 1$ for every $n$, so this amounts to showing that the pointwise limit $f$ is itself a probability density (namely the density of the limiting variable $X$). However, your proof doesn't show why this should be the case.
H: Prove big O for a recursive function Let $t(n):=\begin{cases} \frac{2+\text{log}n}{1+\text{log}n}t(\lfloor\frac{n}{2}\rfloor) + log ((n!)^{\text{log} n}) \hspace{1cm} \text{if}\hspace{0.5cm} n>1 \\ 1 \hspace{0.5cm} \text{if}\hspace{0.2cm} n=1 \end{cases}$ We need to prove that $t(n) \in O(n²)$, thus $t(n) \leq c*n²$ I tried to play around with the master theorem (since $a,b > 1$) so $a=\frac{2+\text{log} n}{1+ \text{log} n}$, $b=2$, $f(n)=\text{log}((n!)^{\text{log} n})=\text{log}n(\text{log}(n!))$ I have difficulties with the asymptotics of the $f(n)$ due to all the logarithms, help would be much appreciated. AI: Hint: \begin{align} \log(n!) &= \log(1 \cdot 2 \cdots (n-1) \cdot n) \\ &= \log(1) + \log(2) + \ldots + \log(n-1) + \log(n) \\ &\le \int_1^{n+1} \log(x) ~ dx\\ &= C\left. (x \log x - x) \right|_1^{n+1}\\ &= C\left(\left[ (n+1) \log (n+1) - (n+1)\right]- \left[ 1 \log (1) - 1\right] \right)\\ &= C\left(\left[ (n+1) \log (n+1) - n-1)\right]+1\right) \\ &= C\left( (n+1) \log (n+1) - n\right)\\ &\le C \left[ (n+1) \log (n+1) \right] \end{align}
H: Confusion regarding $\models \forall x A \equiv \forall y A[y/x]$ I have some confusion regarding the following statement given in Jean H. Gallier's book "Logic for Computer Science". It says, For every formula $A$ $$\models \forall x A \equiv \forall y A[y/x]$$ Now if I take $A = \forall y \Phi(x,y)$, then $A[y/x] = \Phi(y,y)$. Hence using above statement I have $$\models \forall x \forall y \Phi(x,y) \equiv \forall y \forall y \Phi(y,y)$$ Which is just $$\models \forall x \forall y \Phi(x,y) \equiv \forall y \Phi(y,y)$$ And this statement is definitely wrong. Is this because I am using a bounded variable to substitute a free variable ? My end goal was to actually prove the following, $$\forall x \forall y \Phi(x,y) \rightarrow \forall y \Phi(y,y) $$ How do I prove this ? Here is the proof from tree proof generator https://www.umsu.de/trees/#%E2%88%80x%E2%88%80yF(x,y)%E2%86%92%E2%88%80yF(y,y) But a better proof will be much appreciated. AI: If $A = \forall y\,\Phi(x,y)$, then $y$ is not free for $x$ in $A$, so the substitution $A[y/x]$ is not legitimate: the stated equivalence carries the proviso that $y$ is free for $x$ in $A$ (and does not occur free in $A$), which fails here because substituting $y$ for $x$ would capture the variable. The proof of your goal must be: 1) $\forall x\forall y\,\Phi(x,y)$, 2) $\forall y\,\Phi(z,y)$ by Universal Instantiation, 3) $\Phi(z,z)$ by UI, and finally: $\forall y\,\Phi(y,y)$ by Universal Generalization. But the last formula is not equivalent to the first one: consider $\forall y (y \le y)$, which is true in $\mathbb N$, and $\forall x \forall y (x \le y)$, which is not. Regarding the title: 1) $\forall x A$, 2) $A[y/x]$, if $y$ is free for $x$ in $A$, 3) $\forall y A[y/x]$.
H: Probability Question using series There are $n$ dice with $f$ faces marked from $1$ to $f$; if these are thrown at random, what is the chance that the sum of the numbers exhibited shall be equal to $p$? Please help me understand the first part of the solution - the number of ways in which the numbers thrown will have $p$ for their sum is equal to the coefficient of $x^p$ in the expansion $(x+x^2+x^3+\cdots+x^f)^n$. How does this work? Also, can you help me with a solution. AI: Hint: each die shows a value between $1$ and $f$, so associate with each die the polynomial $x^1+x^2+\cdots+x^f$. Now multiply the $n$ such polynomials. Every way of choosing one term $x^{k_i}$ from the $i$-th factor corresponds to the $i$-th die showing $k_i$, and the product of the chosen terms is $x^{k_1+\cdots+k_n}$. Hence the coefficient of $x^p$ in $(x+x^2+\cdots+x^f)^n$ counts exactly the ordered outcomes whose sum is $p$, and dividing by the $f^n$ equally likely outcomes gives the probability. For example, with regular dice ($f=6$), $n=3$ and $p=7$, one possibility is $(1,3,3)$, i.e. choosing $x^1,x^3,x^3$, and the other permutations such as $(3,3,1)$ and $(3,1,3)$ are counted as well.
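If it helps to see the hint in action (this sketch is mine, not part of the original answer), the coefficient can be computed by repeated polynomial multiplication:

```python
def ways(n, f, p):
    # coefficient of x**p in (x + x**2 + ... + x**f)**n
    factor = [0] + [1]*f                     # coefficients of 1, x, ..., x**f
    result = [1]
    for _ in range(n):
        new = [0]*(len(result) + len(factor) - 1)
        for i, a in enumerate(result):
            for j, b in enumerate(factor):
                new[i + j] += a*b
        result = new
    return result[p] if p < len(result) else 0

print(ways(3, 6, 7))          # 15 ordered outcomes summing to 7 with three ordinary dice
print(ways(3, 6, 7) / 6**3)   # the corresponding probability
```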
H: If $A,B$ and $AB$ are symmetric matrices then is $A^{-1}B^{-1}$ also symmetric. if $ A,B$ and$ AB$ are symmetric matrices then is $A^{-1}B^{-1}$ also symmetric ? my approach: since $$AB=BA$$, pre multiply by $A^{-1}$ and then post multiply by $A^{-1}$ to get $$BA^{-1}=A^{-1}B$$ hence I was able to prove that $BA^{-1} and A^{-1}B $ are symmetric.(taking transpose and applying reversal property) but using this nothing could be concluded for $A^{-1}B^{-1}$ so is there any way to prove or disprove that $A^{-1}B^{-1}$ is symmetric? thanks for your help. AI: $$\begin{align}(A^{-1}B^{-1})^T &= (B^{-1})^T(A^{-1})^T\\ &= (B^{T})^{-1}(A^{T})^{-1}\\ &= (B)^{-1}(A)^{-1}\\ &= B^{-1}A^{-1}\\ &= (AB)^{-1}\\ &= (BA)^{-1}\\ &= A^{-1}B^{-1} \end{align}$$
H: Conditional-ish/joint probability? What is the overall probability of some event happening if it has a 15% chance of happening, but can only happen if another event with a 40% chance of happening takes place? AI: $0.40 \times 0.15 = 0.06$. For the second event to occur you need the first to occur, so 60 percent of the time neither occurs. Of the remaining 40 percent, the second event occurs 15 percent of the time, i.e. 6 percent of the total.
H: Hodge star operator and exterior calculation I am learning complex geometry by D. Huybrechts. Here is a formula that I can't understand $$\omega \wedge \beta\wedge \star \alpha=\beta\wedge(\omega \wedge \star \alpha)\tag 1$$ Here $\omega$ is the fundamental form which is a $2$-form actually. And I try to expand both sides by the definition $\alpha \wedge \star \beta= \langle \alpha, \beta \rangle \cdot vol$ For LHS, we have $\langle \omega \wedge \beta,\alpha \rangle \cdot vol$ and for RHS, we have $\beta \wedge \langle \omega, \alpha \rangle \cdot vol$ But here is the things I don't understand: (1) And let's suppose $\alpha,\beta \in \land^k V^{*}$. For $\langle \omega \wedge \beta,\alpha \rangle$, here $\omega \wedge \beta$ is a $(k+2)$- form so how can an inner product operated by a $k+2$-form and a $k$-form ? So does for RHS. (2) How to prove $(1)$ by expanding in my way or the other methods? AI: As $\omega$ is a $2$-form, it commutes for the $\wedge$-product with other $p$-forms, (because if $\alpha$ and $\beta$ are $p$ and $q$-forms, $\alpha\wedge \beta = (-1)^{pq} \beta\wedge \alpha$). Moreover, this product is associative. Forget about the $\star$ in $(\star \alpha)$ and consider only $(\star\alpha)$ as a differential form. Then : \begin{align} \omega \wedge \beta \wedge \star \alpha &= \left(\omega \wedge \beta\right) \wedge \star \alpha \\ &= \left(\beta \wedge \omega \right)\wedge \star \alpha \\ &= \beta \wedge \left( \omega \wedge \star \alpha\right) \end{align}
H: 'Completeness' of ordered topological space This is a follow-up question to an answer given by Henno Brandsma in this thread How to prove ordered square is compact. In the answer it is shown that: A non-empty LOTS (linearly ordered topological space) $(X,<)$ is compact iff every subset $A\subseteq X$ has a supremum $sup(A)$. Since $\mathbb{R}$ is a locally compact LOTS , with the completeness property, i.e., that every bounded subset $A\subseteq \mathbb{R}$ has a supremum $\sup(A)$, can a similar argument be made for general LOTS? More precisely, can we something along the lines of: A LOTS $(X,<)$ is locally compact if and only if every bounded subset $A\subseteq X$ has a supremum $\sup(A)$. Or perhaps some other condition can characterize such a 'completeness' property for ordered spaces. AI: If a LOTS $X$ obeys (order completeness:) For every non-empty subset $A$ that has an upperbound, $\sup(A)$ exists. then $X$ is locally compact: If $x \in X$ does not equal the min or max of $X$ (if it exists), pick any $a,b$ with $a < x, x < b$ and note that $Y=[a,b]$ is closed (true in any LOTS) and even compact using the characterisation I showed (for any order convex subset of a LOTS the subspace topology and induced-order topology are the same, so we can see $Y$ as a LOTS too): let $A \subseteq Y$ be any subset: $A=\emptyset$ is OK: $\sup(A) = \min(Y)=a$ in that case. If $A \neq \emptyset$, $b$ is always an upperbound and the assumption tells us $\sup(A)$ exists in $X$, but as $\sup(A)\in \overline{A}$ (true in any LOTS), and $Y$ is closed, $\sup(A) \in Y$ and we're done showing $Y$ compact. A similar argument can be held for $\min(X), \max(X)$ if they exist, and we see all points of $X$ have a compact neighbourhood (and as any LOTS is Hausdorff, $X$ obeys all equivalent definitions of local compactness). The lexicographically ordered plane (which is locally compact) has sets like $A=\{0\} \times \Bbb R$ that have upper (and lower) bounds but not a sup. So the completeness condition is quite a bit stronger than local compactness. It's customary (in "LOTS theory") to define the equivalence relation $a \sim b$ iff $[a,b]$ is compact, for locally compact LOTS, and study the equivalence classes, see this paper for an example. I'm currently not aware of a natural completeness condition on $(X,<)$ characterising that the corresponding LOTS is locally compact. So the order-completeness of $\Bbb R$ has indeed its local compactness (and connectedness) as a consequence. But the reverse is not so clear. Final remark: the compactness characterisation for LOTS also implies that a connected LOTS with min and max is compact. It's (I think) fun to see that where in general compactness and connectedness can be quite independent properties, and in LOTS they are quite closely linked. It's an interesting class.
H: Finding coefficients on a complex Taylor series I don't see a result that my book says is straightforward. Here's my try: Prove that the coefficients of the Taylor series of the function $$f(z)=\frac{1}{1-z-z^2}$$ around $z=0$ verify $$c_0=1,\\ c_1=1, \\ c_{n+2}=c_{n+1}+c_n, n\geq 0.$$ From here, what I've done is to find first $c_0$ and $c_1$ as follows: $$c_0=\frac{f^{0)}(0)}{0!}=\frac{1}{1-0-0^2}=1\\c_1=\frac{f^{1)}(0)}{1!}=\frac{-1\cdot(-1-(2\cdot 0))}{(1-0-0^2)^2}=1$$ I can take both results as straightforward, but my book's solution only says: "identifying coefficients, we have the result." That's the only information I have and I don't see how we can prove that $\ c_{n+2}=c_{n+1}+c_n, n\geq 0.$ Thanks for your time. AI: Note that since $f(z)=1+f(z)z+f(z)z^2$, for $n\ge2$, comparing the coefficient of $z^n$, we have that $c_n=c_{n-1}+c_{n-2}$, which is the stated recurrence. Comparing the constant and linear coefficients in the same identity gives $c_0=1$ and $c_1=c_0=1$.
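One can also confirm the identification of coefficients numerically (my addition, not from the original answer), e.g. with sympy:

```python
import sympy as sp

z = sp.symbols('z')

# first ten Taylor coefficients of 1/(1 - z - z^2) at z = 0
coeffs = sp.series(1/(1 - z - z**2), z, 0, 10).removeO().as_poly(z).all_coeffs()[::-1]
print(coeffs)   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]

# the same numbers from c_0 = c_1 = 1 and c_{n+2} = c_{n+1} + c_n
c = [1, 1]
while len(c) < 10:
    c.append(c[-1] + c[-2])
print(c)
```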
H: If $B\subset B(H)$ is a C*-subalgebra and $T\colon B''\to B''$ is linear, bounded and weakly continuous, then $\|T\|=\|T|_{B}\|$ Let $H$ be a Hilbert space and let $B\subset B(H)$ be a C*-subalgebra. Suppose that $T\colon M\to M$ is linear, bounded and operator-weakly continuous, then I want to prove that $\|T\|=\|T|_{B}\|$. Let $M$ be the von Neumann algebra generated by $B$. That is, $M=B''=\overline{B}^{\text{s}}$ (I think), where $B''$ is the double commutant of $B$ and $\overline{B}^{\text{s}}$ is the operator-strong closure of $B$. I think that I need Kaplansy's density theorem (Theorem 4.3.3 in Murphy's book on C*-algebras). In particular, this theorem tells us that $B_{\leq1}$ is strongly dense in $M_{\leq1}$. Furthermore, I also think that I have to use Theorem 4.2.7 of Murphy, which states that a convex subset of $B(H)$ is strongly closed if and only if it is weakly closed. I think this allows us to use the weak continuity of $T$. One clearly has $\|T|_{B}\|\leq\|T\|$ and I think one can use the above results to prove that \begin{align*} \|T|_{B}\|&=\sup\{\|T(b)\|:b\in B_{\leq1}\}\\ &\geq\sup\{\|T(m)\|:m\in M_{\leq1}\}=\|T\|. \end{align*} But I dont know how to connect the dots. Any help will be greatly appreciated! AI: Let $x$ be in the unit ball of $M$ with $\|Tx\|≥ \|T\|-\epsilon$. Now because the unit ball of $B$ is weakly dense in the unit ball of $M$ you have a net $x_\alpha$ in the unit ball of $B$ with $x_\alpha\to x$ in the weak operator topology, because $T$ is weakly (operator) continuous you have that $T x_\alpha \to Tx$ in that sense. Now let $\xi \in H$ with $\|\xi\|≤1$ so that $\|(Tx)[\xi]\| ≥ \|Tx\|-\epsilon ≥ \|T\|-2\epsilon$. Then: $$\|(Tx)[\xi]\|\ \|(Tx_\alpha)[\xi]\|≥|\langle (Tx)[\xi], (Tx_\alpha)[\xi]\rangle| \to \langle (Tx)[\xi],(Tx)[\xi]\rangle = \|(Tx)[\xi]\|^2 .$$ This gives: $$\liminf_\alpha\|(Tx_\alpha)[\xi]\|≥ \|(Tx)[\xi]\|≥ \|T\|-2\epsilon,$$ in particular since $\xi$ was in the unit ball of $H$ what you get is $\liminf_\alpha\|Tx_\alpha\|≥\|T\|-2\epsilon$. Since $x_\alpha$ was in the unit ball of $B$ you get $\|T\lvert_B\|≥\|T\|-2\epsilon$ for any $\epsilon$.
H: Why can we distribute the complex modulus? (looking for intuition/proof) if you have, $$ z= z_1 \cdot z_2 ... z_n$$ then, $$ |z| = |z_1 \cdot z_2 ... z_n| = |z_1| |z_2| |z_3| ... |z_n|$$ Now, why is this equality true? $$|z_1 \cdot z_2 ... z_n| = |z_1| |z_2| |z_3| ... |z_n|$$ AI: Think about the polar form of a complex number. Write each $\displaystyle z_j=r_je^{i\theta_j}$, where $r_j=|z_j|$ (the modulus) and $\theta_j$ is the argument. Then $z=z_1z_2z_3\cdots z_n=(r_1r_2r_3\cdots r_n)e^{i(\theta_1+\theta_2+\theta_3+\cdots+\theta_n)}$, and since $|e^{i\theta}|=1$ for real $\theta$, it follows that $|z|=r_1r_2\cdots r_n=|z_1||z_2|\cdots|z_n|$.
H: $\lim_{x \to 0+}\log_{10}{\tan^2x}$ $\lim_{x \to 0+}\log_{10}{\tan^2x}$ What is the value of this one? Could you please give me some hints on this problem? AI: Let $f(x) = \log_{10}(\tan^2(x))$. Since $f(x) = f(x+\pi)$ and $f$ isn't constant, $f$ is a periodic function, so $\displaystyle \lim_{x \to \infty} f(x)$ doesn't exist. FIXED QUESTION: Observe that as $x \to 0^+$, $\tan^2(x) \to 0^+$, so $\log_{10}(\tan^2(x)) \to -\infty$.
H: When can we say that a linear transformation is equivalent to a change of basis? I'm aware that every change of basis is a linear transformation, but the converse isn't true. What conditions must a linear transformation $T$ satisfy for us to call it a change of basis? One condition that I can think of is that $T$ should be invertible, but I'm not sure that's enough to call it a change of basis. AI: Invertibility is equivalent to the span's dimension staying the same, which in turn is equivalent to the linear transformation giving a basis. Let the original basis $B$ have $n$-dimensional span. (This is just the vector space's dimension. If the elements of $B$ are linearly independent so that $B$ has a minimal number of elements, $B$ has $n$ elements.) The dimension of $\operatorname{span}\{Tb|b\in B\}$ is then the rank of $T$, which is $n$ iff $T$ is invertible.
H: Computing $\int_{0}^{\infty} \frac{x}{x^{4}+1} dx$ using complex analysis. I want to compute $$\int_{0}^{\infty} \frac{x}{x^{4}+1} dx$$ using complex analysis. Now the first thing that strikes me is that $f(x)$ is not an even function. So this troubles me a bit since I would normally use $$\int_{0}^{\infty} f(x)dx = \frac{1}{2} \left[\lim_{R \rightarrow \infty} \int_{-R}^{R} f(z)dz + \int_{C_{R}}f(z)dz \right], $$ where $C_{R}$ is the semicircle connecting $R$ to $-R$ in the positive imaginary part. Now we see that we have to compute the singularities of $f(z)$, which we can do by computing the fourth root of $z$. We then find $$ \begin{align*} z^{4} &= e^{i (\pi + 2n\pi)} \\ z &= e^{i ( \frac{\pi}{4} + \frac{n\pi}{2} )} \end{align*}. $$ Since we are only interested in singularities above the real line, we find $z_{0} = e^{i \frac{\pi}{4}}$ and $z_{1} = e^{i \frac{3\pi}{4}}$. Then we let $p(z) = z$ and $q(z) = z^{4}+1$, which makes $q'(z) = 4z^{3}$. We then compute $p(z_{0}), q(z_{0})$ and $q'(z_{0})$ and finally $\frac{p(z)}{q'(z)}$ which equals the residue at $z_{0}$. However, when I do the above I find $\text{Res}(z_{0}) = - \frac{i}{4}$ and $\text{Res}(z_{1}) = \frac{i}{4}$ but this would make the integral equal zero since $2\pi i (\frac{i}{4} - \frac{i}{4})=0$. Can anybody point me to my mistake? Also, when we would find the value for this integral, I would argue we can not simply take half of it, since the initial function is not even. How would we fix that? AI: So I would take the following contour: $[0,R]$, $C_R$ and $[iR,0]$, where $C_R$ connects $R$ and $iR$. So, instead of a semicircle, you have a quarter of a circle. Notice that, by doing this, just one of your singularities is inside, namely $e^{i\frac{\pi}{4}}$, hence $$\int _{[0,R]}+\int _{C_R}+\int _{[iR,0]}=2\pi i\cdot\frac{-i}{4}=\frac{\pi}{2}.$$ Check that the integral over $C_R$ vanishes as $R\to\infty$, and make the change of variable $z=iy$ to convert the integral over $[iR,0]$ into $+\int_0^R \frac{y}{y^4+1}\,dy$. The two straight pieces therefore contribute equally, so $2\int_0^\infty \frac{x}{x^4+1}\,dx=\frac{\pi}{2}$, i.e. the integral equals $\frac{\pi}{4}$.
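A numerical cross-check of the value obtained from the quarter-circle contour (not part of the original answer):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: x/(x**4 + 1), 0, np.inf)
print(val, np.pi/4)   # both approximately 0.7853981...
```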
H: Justification of restricting to functions with finite support in the definition of ordinal exponentiation For ordinals $\alpha$ and $\beta$, the ordinal $\beta^\alpha$ is defined to be the epsilon-image of the well-ordered structure $\langle F,<\rangle$, where $F$ consists of the functions $f:\alpha\rightarrow\beta$ with finite support and $<$ is the antilexicographical order. I understand that dropping the constraint of having finite support destroys the well-orderedness of $F$. My question is about the justification of the choice of this constraint. For instance, how do we know that this particular choice of contraint faithfully generalizes the idea/intuition of the "length" of $\alpha\times \beta$ that is well ordered by the antilexicographical order in the definition ordinal multiplication? Can it be the case that the constraint of finite support deletes too many functions from $F$ so that the resulting well-ordered set becomes shorter than what it is supposed to be (i.e., without the constraint)? I hope my question makes sense. Please help. Thanks. EDIT: I appreciate Alessandro's answer, but I am looking for a more direct justification. As a matter of fact, even the idea of imposing any constraint on the functions in $F$ troubles me, since the resulting $F$ is different from what is generalized from the definition of ordinal multiplication. AI: A different way to define $\alpha^\beta$ is by transfinite recursion on $\beta$ as follows: If $\beta=0$, $\alpha^\beta=1$. If $\beta=\gamma+1$ for some $\gamma$, $\alpha^\beta=\alpha^\gamma\cdot\beta$ If $\beta$ is a limit ordinal, $\alpha^\beta=\sup_{\gamma<\beta}\alpha^\gamma$ The first case is clear, the second generalizes the idea that exponentiation is repeated multiplication, while the third says that exponentiation is continuous. If you agree that this definition makes sense intuitively then using finite support functions is the correct choice, because it agrees with this definition.
H: What does this theorem statement mean? This is from Royden's Real Analysis book (4th edition). On page 17, Proposition 9 says: Every nonempty open set is the disjoint union of a countable collection of open intervals. At first, I thought it meant that: if $X$ is a non-empty open set then there is a set of intervals $I=\{I_1,I_2,I_3,..\}$ such that: (a) $X=\cup_j(I_j)$; (b) if $j \neq k$ then $I_j \cap I_k = \emptyset$. But from the proof, this doesn't seem right. So I looked up "disjoint union" (https://mathworld.wolfram.com/DisjointUnion.html) but it looks like it gives you back a set of tuple - so I'm not sure how to interpret this. AI: Your understanding is correct. The definition of disjoint union in your link is perhaps a little misleading: it is best to see it as defining a binary operator DisjointUnion$(A,B)$ that takes two arbitrary sets as input, and outputs a set that consists of an image of $A$ and an image of $B$ under a simple mapping, such that these two images are disjoint. But when we say that a set $C$ is the disjoint union of $A$ and $B$, it usually means that (i) $C=A\cup B$ and (ii) $A\cap B=\emptyset$. Unfortunately this usage clashes with $C=$ DisjointUnion$(A,B)$ in the link.
H: Find the number of symmetric closure There is a set A with n elements. R is relation of set A. R has 3 elements. When n ≥ 4, the symmetric closure of the R was obtained. Find a minimal and maximum value of number of R elements. I want to figure out this question. I think I can draw a matrix to prove this, but I don't know the exact solution. Can someone give me the direction of this problem? AI: The minimum and maximum number of elements of $R$ is $3,$ because the problem told you that $R$ has $3$ elements. I suspect, though, that the question has to do with how many elements are in the symmetric closure of $R,$ instead. To figure this out, we need to ask ourselves what the symmetric closure looks like, compared to $R,$ itself. To obtain the symmetric closure, we look at each pair $\langle x,y\rangle$ in $R,$ and add the pair $\langle y,x\rangle,$ if it isn't already an element of $R.$ Put another way, we construct the set $$S=\bigl\{\langle y,x\rangle\in A\times A:\langle x,y\rangle\in R\bigr\}.$$ The symmetric closure of $R$ is the set $R\cup S.$ Observe that $R$ and $S$ have the same number of elements. (Can you justify this?) If they have no elements in common, then the symmetric closure of $R$ must have twice as many elements as $R$ does. If they have exactly the same elements--that is, if $R$ is already symmetric--then $R\cup S=R.$ Can you see what the maximum and minimum cardinality of $R\cup S$ is from there? It's worth noting that the number $n$ never came into play, here. In the cases $n\le 2,$ the maximum value from above will not be correct. (Can you see what it will be, instead?) However, it will hold for $n\ge 3.$ I'm not sure why the problem specifies that $n\ge 4.$ Added: You've found the correct numbers! To justify it, though, we need to be a bit more careful. After all, even if we assume that the pairs $\langle a,b\rangle, \langle c,d\rangle, \langle e,f\rangle$ are distinct, that doesn't mean that $R\cup S$ will have $6$ elements. After all, we could (for example) have $a=f,$ $b=e,$ and $c=d,$ in which case $R\cup S$ has $3$ elements. Instead, I'd consider the set $$S\setminus R:=\bigl\{\langle x,y\rangle\in S:\langle x,y\rangle\notin R\bigr\}.$$ At that point, there are a few facts to justify: $S\setminus R$ has no more than $3$ elements. If $R$ is symmetric, then $S\setminus R$ has $0$ elements. It is possible for $S\setminus R$ to have exactly $3$ elements. (This only works because $n\ge 3.$) $R$ and $S\setminus R$ have no common elements. $R\cup S = R\cup(S\setminus R)$ $\lvert R\cup S\rvert = \lvert R\rvert +\lvert S\setminus R\rvert.$ From there, the conclusion is immediate. If we'd had $n=2,$ then $S\setminus R$ could not have $2$ or $3$ elements, but could have $1$. If we'd had $n\le 1,$ then $R$ couldn't have had $3$ elements, in the first place. See if you can justify these claims, along with those above. Please let me know if you get stuck, or if you just want to have me check your reasoning. Welcome to the site!
H: When is a function not a local homeomorphism? My background is engineering and I am very new to topology. I think I got the concept of a local homeomorphism, but I cannot come up with a concrete example that is a continuous surjection but not a local homeomorphism. For example, if I have a map $f:\mathbb{R}^n\to\mathbb{R}^m$ ($n>m$) that is a continuous surjection, in what cases is it not a local homeomorphism? Also, is a projection map $f_{proj}:[0,1]\times[0,1]\to[0,1]$ where $(x,y)\mapsto x$ also a local homeomorphism? I thought it was, since for every $(x,y)$ there is a neighborhood, e.g., $\{(\tilde x,\tilde y)|x-\epsilon<\tilde x<x+\epsilon, \tilde y = y\}$, which is homeomorphic to $\{\tilde x|x-\epsilon<\tilde x<x+\epsilon\}$. Is this correct? EDIT: Thank you for the comments and answers. I think I misunderstood the concept of the local homeomorphism. My question came from the covering map, $p:C\to X$, where the article says that every covering map is a local homeomorphism. Does it then mean that $C$ and $X$ have to be of the same dimensionality, or is it saying something different? Thank you in advance. AI: For a function $f : X \to Y$ to be a local homeomorphism means that for every $x \in X$, there exists an open neighborhood $U$ of $x$ in $X$ and an open neighborhood $V$ of $f(x)$ in $Y$ such that $f : U \to V$ is a homeomorphism. For $f : \mathbb{R}^n \to \mathbb{R}^m$, the fact that $f$ being a local homeomorphism implies $n=m$ is a deep result known as the invariance of domain theorem. It is a really important theorem, which basically states that dimension is a topological notion. For $f : [0,1]^2 \to [0,1]$, it is easier to show, but still, you have to consider fundamental notions in topology. Basically, you have to look at the boundary and at the interior separately. On a little open subset in the interior, removing a point leaves it connected (on the left-hand side), but that is not the case for its image, an interval (on the right-hand side). In your last example, what you chose is not a neighborhood since it does not contain any open non-empty subset (because you fixed the $y$ coordinate). Edit Careful not to misunderstand the definition of local homeomorphism. What you noticed in your example is that there exists a subset $E$ of $X$ for which the restriction and corestriction $f : E \to f(E)$ provides a homeomorphism. But this does not take into account the ambient spaces $X$ and $Y$ of the definition here! A local homeomorphism tells something about $X$ and $Y$, not just about $f$. More edit Take $p : X \to B$ a continuous function. Informally, saying it is a covering map means that $X$ locally looks like $B$, but is bigger, and that $p$ is the function that makes it look like $B$. More formally, it means that there is no difference between the local topology of $X$ and $B$, but there is a global difference that you can understand and control.
H: How to figure out the next number in this sequence? This question has been asked as a puzzle in a forum of an online game, but I couldn't solve it, and neither could any of the members of the forum, so I am asking for help here. Given this sequence of numbers: 1 > 170 2 > 344 3 > 520 4 > 698 5 > 875 6 > 1052 8 > x 16 > 2811 20 > y What would x and y be, based on that sequence? The moderator gave us a hint after 2 days, but we still can't figure it out. The hint is: based on a > b c > d calculating the number $a*d - c*b$ for each row should give us a clue. I noticed that $d = (c*b + z) / a$ and $z = a*d - c*b$. The problem is that d and z are both unknowns and the 2 formulas are the same, so I can't figure out 2 unknowns from the same formula. How do I figure out x and y? AI: Using the hint: for each pair of consecutive rows $a > b$ and $c > d$, compute $a\cdot d - c\cdot b$. With $$x=1405,\qquad y=3514,$$ these values read $4,\ 8,\ 14,\ 10,\ 10,\ 14,\ 8,\ 4$, a palindrome, which is the promised symmetric pattern.
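A quick check of the pattern described above, in Python (my own addition, not part of the original thread):

```python
rows = [(1, 170), (2, 344), (3, 520), (4, 698), (5, 875),
        (6, 1052), (8, 1405), (16, 2811), (20, 3514)]

# a*d - c*b for each pair of consecutive rows (a, b), (c, d)
diffs = [a * d - c * b for (a, b), (c, d) in zip(rows, rows[1:])]
print(diffs)                  # [4, 8, 14, 10, 10, 14, 8, 4]
print(diffs == diffs[::-1])   # True: the sequence reads the same backwards
```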
H: Ways to solve the differential equation $y' + {{y\ln(y)}\over x}= xy$ I've been stuck on one of my homework problems. The problem states that the following equation is a non-linear equation of order 1 with $x>0$. $$y' + {{y\ln(y)}\over x}= xy$$ So far, I have tried 2 different methods to solve it, as suggested online (link below): Bernoulli's method, and treating it as an inexact equation. However, I get stuck every time. Bernoulli sample. As a small side note, can someone explain to me the following from the link: how, by passing ln(y) to the right side, do we get y^1+1? For the Bernoulli approach, when I try to solve it manually, I end up in a kind of endless chain of integrals, no matter how many times I integrate. Am I missing something? AI: $$y' + {{y\ln(y)}\over x}= xy$$ $$\dfrac {y'}{y}+\dfrac {\ln y}{x}=x$$ $$(\ln y)'+\dfrac {\ln y}{x}=x$$ It's a linear first order DE. Substitute $u=\ln y$ $$u'+\dfrac {u}{x}=x$$ $$xu'+u=x^2$$ $$(xu)'=x^2$$ Integrate: $$xu=\dfrac {x^3}3+c_1 \implies \ln y =\dfrac {x^2}3+\dfrac {c_1}{x}$$ This answer doesn't agree with the Symbolab link, but it agrees with Wolfram Alpha's answer $$ y(x) =\exp {\left (\dfrac {x^2}3+\dfrac {c_1}{x}\right)}$$
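One can sanity-check the solution symbolically; a minimal SymPy sketch (my own addition; it assumes $x>0$ and a real constant $c_1$, matching the answer above):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
c1 = sp.symbols('c1', real=True)
y = sp.exp(x**2 / 3 + c1 / x)

# residual of  y' + y*ln(y)/x - x*y ; it should simplify to 0
residual = sp.diff(y, x) + y * sp.log(y) / x - x * y
print(sp.simplify(residual))   # 0
```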
H: Floating-point numbers in combinations What could be the answer to ${\displaystyle {\binom {2.5}{2}}}$? Is it defined, or should it be considered as $0$ or $1$? AI: The binomial coefficient can be extended to non-integer arguments using the gamma function: $$\binom {2.5} 2=\dfrac{2.5!}{0.5!\times2!}=\dfrac{\Gamma(3.5)}{\Gamma(1.5)\times 2!}=\dfrac{2.5\times1.5}2=1.875.$$
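For a numerical confirmation, here is a small Python sketch (my own addition) using the gamma-function definition $\binom{x}{k}=\Gamma(x+1)/\bigl(\Gamma(k+1)\Gamma(x-k+1)\bigr)$:

```python
from math import gamma

def binom(x, k):
    # binom(x, k) = Γ(x+1) / (Γ(k+1) Γ(x-k+1))
    return gamma(x + 1) / (gamma(k + 1) * gamma(x - k + 1))

print(binom(2.5, 2))   # ≈ 1.875
```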
H: Degree of pole of $\frac{1}{\cosh(z)}$ I'm having difficulties classifying the singularities of $\frac{1}{\cosh(z)}$. So far, I have found the complex zeros of $\cosh$ at $z = i \frac{\pi}{2}+i \pi k$ with $k \in \mathbb{Z}$, from which it follows that these are the only singularities. My problem is: how would I show that each of them is a pole of degree 1? With $C = i \frac{\pi}{2}+i \pi k$ for $k \in \mathbb{Z}$, of course $\lim_{z \to C}\frac{1}{\cosh (z)}=\infty$, but when evaluating $\lim_{z \to C}\frac{1}{\cosh (z)}(z-C)^k$, I don't know how to show that $k$ has to be one. I see that setting $k$ to one leads to an existing limit through L'Hospital, but wouldn't it work for $k=2$ as well? AI: Note that\begin{align}\lim_{z\to i\pi/2}\frac{\cosh z}{z-i\pi/2}&=\lim_{z\to i\pi/2}\frac{\cosh (z)-\cosh(i\pi/2)}{z-i\pi/2}\\&=\cosh'\left(i\frac\pi2\right)\\&=\sinh\left(i\frac\pi2\right)\\&\ne0.\end{align}But then$$\lim_{z\to i\pi/2}\left(z-i\frac\pi2\right)\frac1{\cosh z}\ne0,$$and this limit is finite, therefore $i\frac\pi2$ is a simple pole of $\frac1{\cosh}$. The same argument works for every other singularity of $\frac1{\cosh}$.
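A numerical illustration of why the pole is simple (my own addition, not part of the original answer): as $z\to i\pi/2$, the product $(z-i\pi/2)\cdot\frac{1}{\cosh z}$ approaches the finite nonzero value $1/\sinh(i\pi/2)=-i$, while the corresponding product with exponent $2$ goes to $0$.

```python
import cmath

c = 1j * cmath.pi / 2                  # the singularity i*pi/2

for eps in (1e-2, 1e-4, 1e-6):
    z = c + eps
    f = 1 / cmath.cosh(z)
    print(eps, (z - c) * f, (z - c) ** 2 * f)
# first column of results tends to -1j (nonzero), second column tends to 0
```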
H: For (Lebesgue) measurable functions $f$ and $g$, if $f=g$ a.e., then $ \int_{E} f=\int_{E} g. $ Problem Let $f$ and $g$ be bounded (Lebesgue) measurable functions defined on a set $E$ of finite measure. If $f=g$ a.e., then $$ \int_{E} f=\int_{E} g. $$ Attempt Let $f,g:E\to \mathbb{R}$ be (Lebesgue) measurable functions with $m(E)<\infty$. Assume $f=g$ a.e. on $E$. It is enough to show that $$ \int_{E} f-g=0. $$ Let $\varphi$, $\psi$ be simple functions with $\varphi \leq f-g \leq \psi$. Since $f-g=0$ a.e., we have $\psi \geq 0$ a.e. and $\varphi\leq 0$ a.e.. Then, $$ 0 \leq \int_{E} \psi {\rm ~and~} 0\geq \int_{E} \varphi, $$ ${\it whence} $ $$ \int_{E} f-g \geq 0 {\rm ~and~} \int_{E} f-g \leq 0, $$ as required. Question Is the hypothesis that there is such a simple function $\varphi$ valid? What is the exact meaning of 'whence', which is in italics? At the end of a sentence, which of the two is the right expression: a.e. or a.e..? Thanks! AI: The assertion about the simple functions is neither clear nor justified. A simpler approach: let $A:=\{x\in E:f(x)-g(x)=0\}$, then $A$ is measurable and $$ \int_{E}(f-g) \mathop{}\!d \lambda =\int_{E \cap A}(f-g) \mathop{}\!d \lambda +\int_{E \cap A^\complement }(f-g) \mathop{}\!d \lambda =\int_{E \cap A}0 \mathop{}\!d \lambda+0=0 $$ where we used the fact that if a set $B$ has measure zero then $\int_B f \mathop{}\!d \lambda =0$ for any measurable function $f$.
H: If $\text{adj}A=\left[\begin{smallmatrix}1&-1&0\\2&3&1\\2&1&-1\end{smallmatrix}\right]$ and $\text{adj}(2A)$ is $2^k\text{adj}(A)$, find $k$ I have thought of a solution for this, but I know it’s wrong. I don’t know what’s wrong with the procedure, I just solved it instinctively. $$\text{adj} A =A^{-1}|A|$$ So for $2A$ $$\text{adj} 2A =(2A)^{-1} |2A|$$ $$=\frac 12 A 4 |A|$$ $$=2\text{adj} A$$ Which implies $k=1$. As I said, the answer is wrong, I am aware of that. The question remains: what’s wrong with my procedure and how do I get the correct answer? Thanks! AI: Note that for an $n\times n$ matrix $A$, $\det(mA) = m^n \det A$ where $m \in \mathbb R$ (you used $|2A|=4|A|$, but here $n=3$, so $|2A|=2^3|A|$). So the answer should really be $$(2A )^{-1} |2A| = \frac 12 A^{-1} \ 2^3 |A| = 4 \ \text{adj} (A), $$ that is, $k=2$.
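A quick numerical check of the identity $\operatorname{adj}(2A)=2^{n-1}\operatorname{adj}(A)$ for $n=3$ (my own addition; the matrix $A$ below is an arbitrary invertible example, not the matrix from the problem, since only $\operatorname{adj}A$ is given there):

```python
import numpy as np

def adj(M):
    # adjugate via adj(M) = det(M) * inv(M), valid for invertible M
    return np.linalg.det(M) * np.linalg.inv(M)

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

print(np.allclose(adj(2 * A), 2**2 * adj(A)))   # True: adj(2A) = 2^(n-1) adj(A), so k = 2 for n = 3
```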
H: Minimizing costs of a specific geometry shape I have geometry mathematical problem. I have a shape that is made of a cylinder and two half spheres as to top and bottom. How can I minimize the cost of this shape when the volume is known and the cylinder part costs $k_1$ dollar per square meters and the sphere part costs $k_2$ dollar per square meter. AI: Well, the volume of that shape is given by: $$\mathcal{V}=\mathcal{V}_\text{cylinder}+\mathcal{V}_\text{hemisphere}+\mathcal{V}_\text{hemisphere}=\mathcal{V}_\text{cylinder}+\mathcal{V}_\text{sphere}\tag1$$ Now, for the volume of a cylinder: $$\mathcal{V}_\text{cylinder}=\pi\cdot\text{r}_\text{cylinder}^2\cdot\text{h}\space_\text{cylinder}\tag2$$ And for the sphere: $$\mathcal{V}_\text{sphere}=\frac{4}{3}\cdot\pi\cdot\text{r}_\text{sphere}^3\tag3$$ Now, we also know that: $$\text{r}=\text{r}_\text{cylinder}=\text{r}_\text{sphere}\tag4$$ So the total volume is given by: $$\mathcal{V}=\pi\cdot\text{r}^2\cdot\text{h}\space_\text{cylinder}+\frac{4}{3}\cdot\pi\cdot\text{r}^3=\frac{\pi\text{r}^2}{3}\cdot\left(3\text{h}\space_\text{cylinder}+4\text{r}\right)\tag5$$ And the surface area is given by: $$\mathcal{S}=2\cdot\pi\cdot\text{r}\cdot\text{h}\space_\text{cylinder}+4\cdot\pi\cdot\text{r}^2\tag6$$ And for the costs we can set up a function: $$\mathcal{K}=\text{K}_1\cdot2\cdot\pi\cdot\text{r}\cdot\text{h}\space_\text{cylinder}+\text{K}_2\cdot4\cdot\pi\cdot\text{r}^2\tag7$$ Well, you know the volume: $$\mathcal{V}=\frac{\pi\text{r}^2}{3}\cdot\left(3\text{h}\space_\text{cylinder}+4\text{r}\right)\space\Longleftrightarrow\space\text{h}\space_\text{cylinder}=\frac{\mathcal{V}}{\pi\text{r}^2}-\frac{4\text{r}}{3}\tag8$$ So, in order to minimize the costs we can write: $$\frac{\partial\mathcal{K}}{\partial\text{r}}=\frac{\partial}{\partial\text{r}}\left\{\text{K}_1\cdot2\cdot\pi\cdot\text{r}\cdot\left(\frac{\mathcal{V}}{\pi\text{r}^2}-\frac{4\text{r}}{3}\right)+\text{K}_2\cdot4\cdot\pi\cdot\text{r}^2\right\}=$$ $$8\text{K}_2\pi\text{r}-\frac{16\text{K}_1\pi\text{r}}{3}-\frac{2\text{K}_1\mathcal{V}}{\text{r}^2}=0\space\Longrightarrow\space\text{r}=\left(\frac{1}{4}\cdot\frac{\text{K}_1\mathcal{V}}{\pi\text{K}_2-\frac{2\pi\text{K}_1}{3}}\right)^\frac{1}{3}\tag9$$ And so the height will be: $$\text{h}\space_\text{cylinder}=2\cdot\left(\frac{6}{\pi}\right)^\frac{1}{3}\left(\frac{\text{K}_2}{\text{K}_1}-1\right)\cdot\left(\frac{\text{K}_1\mathcal{V}}{3\text{K}_2-2\text{K}_1}\right)^\frac{1}{3}\tag{10}$$
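As a sanity check of formula (9), one can minimize the cost numerically; the following sketch is my own addition, and the values of $\text{K}_1$, $\text{K}_2$ and $\mathcal{V}$ are illustrative choices, not data from the problem.

```python
import numpy as np
from scipy.optimize import minimize_scalar

K1, K2, V = 1.0, 2.0, 1.0                        # illustrative cost rates and target volume

def cost(r):
    h = V / (np.pi * r**2) - 4 * r / 3           # cylinder height forced by the volume constraint
    return K1 * 2 * np.pi * r * h + K2 * 4 * np.pi * r**2

res = minimize_scalar(cost, bounds=(1e-3, 2.0), method='bounded')
r_formula = (0.25 * K1 * V / (np.pi * K2 - 2 * np.pi * K1 / 3)) ** (1 / 3)

print(res.x, r_formula)                          # both ≈ 0.3908 for these values
```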
H: Improving the Cauchy's bound on the absolute values of the roots of the monic polynomial $x^n=m \times \sum_{k=0}^{n-1} x^k$ Given a polynomial $x^n=m \cdot \sum_{k=0}^{n-1} x^k$ (for all $m,n \in \mathbb{N}, m \geq 2,n\geq 2$), the numerical computation of roots for different $n$ and $m$ shows, that the absolute values of roots are strictly less than $m + 1$. The Cauchy's bound gives a non-strict bound for the given polynomial: the absolute values of roots are less or equal to $m+1$. I have come up with the approach for showing the strict inequality, and would like to ask, if my approach is valid? Below is a description of my approach. If we show that the roots don't belong to the complex circle with radius $m+1$, then in combination with the Cauchy's bound we will obtain a strict inequality. As far as $x=1$ is not a root of this polynomial, we can transform the polynomial as follows: $$ x^n - m \cdot \sum_{k=0}^{n-1} x^k = x^n - m \cdot {x^n-1 \over x - 1} = 0 $$ So, equivalently, we need to consider the roots of the polynomial: $x^n \cdot ((m+1) - x) - m = 0$. For the sake of contradiction let's assume, that $x =(m+1) \cdot e^{i \phi}$ (for some $\phi \in \mathbb{R}$) is a root of the polynomial $x^n \cdot ((m+1) - x) - m = 0$. After doing the substitution we obtain: $$ (m+1)^n \cdot e^{i n \phi} \cdot ((m+1) - (m+1) \cdot e^{i \phi}) = m \iff \\ \iff (m+1)^{n+1} \cdot e^{i n \phi} \cdot (1 - e^{i \phi}) = m \iff \\ \iff e^{i n \phi} \cdot (1 - e^{i \phi}) = {m \over (m+1)^{n+1}} \iff \\ \iff e^{i n \phi} - e^{i (n+1) \phi} = {m \over (m+1)^{n+1}} $$ Using the trigonometric form of complex numbers we can rewrite the latter equality as follows: $$ \left( cos(n\phi) - cos((n+1)\phi) \right) - i \cdot \left( sin(n\phi) - sin((n + 1)\phi) \right) = {m \over (m+1)^{n+1}} $$ As far as the imaginary part on the left hand side is $0$ we have a following system: $$ \left\{ \begin{aligned} sin(n\phi) - sin((n+1)\phi) &= 0 \\ cos(n\phi) - cos((n+1)\phi) &= {m \over (m+1)^{n+1}} \end{aligned}\right. $$ Using the sum-to-product trigonometric identities we can rewrite the system as follows: $$ \left\{ \begin{aligned} 2 \cdot cos \left({2n + 1 \over 2} \phi \right) \cdot sin \left( - {\phi \over 2} \right) &= 0 \\ -2 \cdot sin \left({2n + 1 \over 2} \phi \right) \cdot sin \left( - {\phi \over 2} \right) &= {m \over (m+1)^{n+1}} \end{aligned}\right. $$ Looking at the first equation: $2 \cdot cos \left({2n + 1 \over 2} \phi \right) \cdot sin \left( - {\phi \over 2} \right)$ we conclude that some of its multipliers should be $0$. So, we have two cases: Case 1: $sin \left( - {\phi \over 2} \right) = 0$. In this case the second equation shows a contradiction: $$ -2 \cdot sin \left({2n + 1 \over 2} \phi \right) \cdot sin \left( - {\phi \over 2} \right) = 0 \neq {m \over (m+1)^{n+1}} $$ Case 2: $cos \left({2n + 1 \over 2} \phi \right) = 0$. In this case we can see that: $\phi = {2k + 1 \over 2n + 1} \pi$ for $k \in \mathbb{Z}$. Let's substitute $\phi$ into the second equation: $$ -2 \cdot sin \left({2n + 1 \over 2} \cdot {2k + 1 \over 2n + 1} \pi \right) \cdot sin \left( - {1 \over 2} \cdot {2k + 1 \over 2n + 1} \pi \right) = {m \over (m+1)^{n+1}} \iff \\ \iff sin \left({2k + 1 \over 2 \cdot (2n + 1)} \pi \right) = {m \over 2 \cdot (m+1)^{n+1}} $$ So, from the second equation we have obtained the equation $sin(a \cdot \pi) = b$, where $a$ and $b$ are rational numbers ($a={2k + 1 \over 2 \cdot (2n + 1)}$ and $b={m \over (m+1)^{n+1}}$). 
Furthermore, in the case $m \geq 2, n \geq 2$ we see that $b \not \in \{ 0, \pm 1, \pm {1 \over 2}\}$. In this case, we have a contradiction with Niven's theorem (which states that if $\sin(a \cdot \pi) = b$ with $a, b \in \mathbb{Q}$, then the sine can only take the values $0, \pm 1, \pm {1 \over 2}$). So, we see that the original assumption (that $(m+1) \cdot e^{i \phi}$ is a root of the polynomial) leads to a contradiction. Consequently, $(m+1) \cdot e^{i \phi}$ can't be a root of the polynomial. In combination with Cauchy's bound we have a strict bound on the absolute values of the roots: they are strictly less than $m + 1$. So, I would like to know if my approach is valid? And, the second question: are there any simpler approaches that allow one to obtain a strict bound in this case? AI: So, I would like to know if my approach is valid? Your proof seems to be good; I could not find an error. It should work even for $m \ge 1$ and $n \ge 1$. And, the second question: are there any simpler approaches that allow one to obtain a strict bound in this case? Yes: If $x$ with $|x| > 1$ is a root of that polynomial then $$ |x|^n \le m\left(1 + |x| + \ldots + |x|^{n-1} \right) = m \frac{|x|^n-1}{|x|-1} \, . $$ Multiplying by the (positive) number $(|x|-1)/|x|^n$ gives $$ |x| -1 \le m \frac{|x|^n-1}{|x|^n} < m $$ and therefore the strict inequality $|x| < m +1$. This also works for all positive real numbers $m$, not only for integers. More generally, an inspection of the proof of Cauchy's bound (e.g. the one here) shows that if $$ h = \max\{ |a_0|, |a_1|, \ldots, |a_{n-1}| \} > 0 $$ then all roots of $$ x^n + a_{n-1}x^{n-1} + \ldots + a_1 x + a_0 = 0 $$ satisfy the strict inequality $|x| < 1 + h$.
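A numerical spot check of the strict bound (my own addition, not part of the original exchange), using NumPy's root finder on $x^n - m(x^{n-1}+\cdots+1)$ for a few values of $m$ and $n$:

```python
import numpy as np

for m in (2, 3, 5):
    for n in (2, 3, 7, 15):
        coeffs = [1] + [-m] * n                  # x^n - m*(x^(n-1) + ... + x + 1)
        largest = np.abs(np.roots(coeffs)).max()
        assert largest < m + 1
        print(m, n, round(largest, 6), "< m+1 =", m + 1)
```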
H: Do common eigenvectors imply simultaneous block diagonalization? Suppose $A,B$ are symmetric matrices of the same size. If $A,B$ have a common eigenvector, can we block diagonalize them at the same time? More specifically, suppose $A$ and $B$ have one common eigenvector $v$, and $Av = \lambda v$ and $Bv =\mu v$. We say $A$ and $B$ are block diagonalized at the same time if there is an orthogonal matrix $V$ such that $$V^t A V=\begin{pmatrix} * & 0 \\ 0 & \lambda \end{pmatrix}\quad \text{and} \quad V^tBV=\begin{pmatrix} * & 0 \\ 0 & \mu \end{pmatrix}.$$ If $A$ and $B$ have more than one common eigenvector, it can be defined in a similar way as above. Can we find such an orthogonal matrix $V$? AI: Suppose $v \neq 0$ is a common eigenvector. Then $\mathrm{span}(v)$ is stable under $A$ and $B$. Since $A$ and $B$ are symmetric, $\mathrm{span}(v)^{\perp}$ is also stable under $A$ and $B$. Take $(e_2,\ldots,e_n)$ an orthonormal basis of $\mathrm{span}(v)^{\perp}$. Then $(e_2,\ldots,e_n,v/\lVert v\rVert)$ is an orthonormal basis of the ambient space; letting $V$ be the orthogonal matrix whose columns are these vectors (with $v/\lVert v\rVert$ placed last, to match the block layout in the question), the matrices of $A$ and $B$ in this basis have the wanted form.
H: How to solve $x^{17}\equiv 37$ in $\mathbb{Z}/101\mathbb{Z}$? I need to solve the equation $x^{17}\equiv 37$ in $\mathbb{Z}/101\mathbb{Z}$. I've looked into these topics (the calculation of the primitive root is missing, n is not prime) but couldn't derive a solution. So summarize what I know: 101 is prime $\implies \mathbb{Z}/101\mathbb{Z}$ is cyclic group (or even a field) since $\mathbb{Z}/101\mathbb{Z}$ is cyclic it has a generator with the same order of $\mathbb{Z}/101\mathbb{Z}$. In this case the generator has order $\phi(101)=101-1=100$ due to Fermat I have $x^{100}\equiv 1$ $mod(101)$ $\phi(101)=100=2^2\cdot 5^2$ I have tried in vain to orient myself to: $n-th$ root at the bottom of the page I know that the (only) solution is $x=52$ Can somebody help me? AI: Hint: Solve $17n \equiv 1 \bmod 100$. Then compute $37^n \bmod 101$ using exponentiation by squaring.
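Carrying out the hint in Python (my own addition; pow(17, -1, 100) requires Python 3.8 or later):

```python
n = pow(17, -1, 100)      # inverse of 17 modulo 100, i.e. 17*n ≡ 1 (mod 100); gives n = 53
x = pow(37, n, 101)       # 37^53 mod 101
print(n, x)               # 53 52
print(pow(x, 17, 101))    # 37, confirming 52^17 ≡ 37 (mod 101)
```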
H: How can I prove that $|\mathcal{P}(A)|=2^n$? This is a set theory question. I want to know how to derive that the cardinality of the power set satisfies $|\mathcal{P}(A)|=2^n$, where $A$ is a given set with $n$ elements, $\mathcal{P}(A)$ is the power set of $A$, and $|\cdot|$ denotes cardinality. Hint: the question says to prove it by induction. AI: By induction on $n$. Let $P(n)$ be the sentence $P(n)=(|A|=n\Rightarrow |\mathcal{P}(A)|=2^n)$. Then $P(1)=(|A|=1\Rightarrow |\mathcal{P}(A)|=2)$, which is true, because if $|A|=1$ then $\mathcal{P}(A)=\{\varnothing, A\}$. Let now $|A|=n+1$, and assume $P(n)$ holds. Let $x\in A$ and consider the set $A'=A-\{x\}$, which has $n$ elements, so by the induction hypothesis it has $2^n$ subsets. Counting the subsets of $A$: every subset of $A$ either does not contain $x$ (and is a subset of $A'$) or contains $x$ (and is a subset of $A'$ with $x$ added to it), hence in total there are $2^n+2^n=2^{n+1}$ subsets.
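A brute-force check of the statement for small $n$ (my own addition, only meant as an illustration of the count, not as a proof):

```python
from itertools import combinations

def power_set_size(A):
    # count the subsets of A of every possible size
    return sum(1 for k in range(len(A) + 1) for _ in combinations(A, k))

for n in range(6):
    A = list(range(n))
    print(n, power_set_size(A), 2**n)   # the last two columns always agree
```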
H: The norm $\|S-Q\|_F$ where $Q$ is orthogonal is minimised by $Q=I$ Problem: Suppose that $S$ is symmetric and semi-positive-definite. Let $\|\cdot \|_F$ be the Frobenius norm. Show that $$\|S-I \|_F \leq \|S-Q\|_F$$ for all orthogonal matrices $Q$, where $I$ is the identity matrix. Attempt: So from what I know, the Frobenius norm is defined as $$\|A\|_F:= \bigg(\sum_{i,j} a_{ij}^2\bigg)^{1/2}$$ and one of its properties is that $\|A\|_F = \big(\sum_i \sigma_i^2 \big)^{1/2}$ where $\sigma_i$ are the singular values of $A$. Also, $\|QA\|_F = \|AQ\|_F = \|A\|_F$ for any orthogonal matrix $Q$. Thus, if we consider the singular value decomposition of $S$, say $S=UDV$ where $U,V$ are orthogonal and $D = \text{diag}(\sigma_1,\dots,\sigma_n)$, then $$\|S-I\|_F = \|UDV - I\|_F = \|D - U^TV^T\|_F$$ but I feel like I'm not getting anywhere with this approach. $\color{red}{\text{In particular,}}$ I don't really know how to use the fact that $S$ is symmetric and semi-positive-definite. Does this have any effect on the form of the SVD for $S$? Any help would be much appreciated. Thanks! AI: Use the fact that $\|A\|_F^2 = \operatorname{Tr}(A^TA)$. In particular, we have $$ \|S-Q\|_F^2 = \operatorname{Tr}([S-Q]^T[S-Q]) = \|S\|_F^2 + n - 2 \operatorname{Tr}(Q^TS) $$ where $n$ denotes the common size of the matrices $S,Q$ (note that $\|Q\|_F^2=\operatorname{Tr}(Q^TQ)=\operatorname{Tr}(I)=n$ for orthogonal $Q$). With that said, it's clear that we want an orthogonal matrix $Q$ that maximizes $\operatorname{Tr}(Q^TS)$. Because $S$ is symmetric and positive semidefinite, there exists an orthogonal matrix $V$ and a non-negative diagonal matrix $D$ with $S = VDV^T$ (regarding your question about using the properties of $S$: note that this is an SVD). Note that $$ \operatorname{tr}(Q^TVDV^T) = \operatorname{tr}([V^TQ^TV]D) = \operatorname{tr}([V^TQV]^TD). $$ In other words, we want an orthogonal matrix $W = V^TQV$ that maximizes $\operatorname{tr}(W^TD)$. We see that this maximum is achieved with $W = I$. In particular, the entries of an orthogonal matrix all have absolute value at most $1$ (each column is a unit vector). So, we have $$ \operatorname{tr}(W^TD) = \sum_{i=1}^n w_{ii}d_{ii} \leq \sum_{i=1}^n d_{ii} = \operatorname{tr}(D) = \operatorname{tr}(I^TD). $$ Note that the only $Q$ for which $W = I = V^TQV$ is given by $Q = I$. The conclusion follows. Note: This problem is an instance of the orthogonal Procrustes problem.
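A numerical illustration of the inequality (my own addition; the random matrices and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

M = rng.standard_normal((n, n))
S = M @ M.T                                         # random symmetric positive semidefinite S
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))    # random orthogonal Q

d_identity = np.linalg.norm(S - np.eye(n), 'fro')
d_random_Q = np.linalg.norm(S - Q, 'fro')
print(d_identity <= d_random_Q)                     # True, as the result above predicts
```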
H: Prove that a function is uniformly continuous on $[a,\infty)$ Let $f:[a,\infty)\to\mathbb{R}$ be a continuous function. For every $\varepsilon>0$ there exist $0<\delta_{\varepsilon}$ and $a<c_{\varepsilon}\in\mathbb{R}$ so that for every $x_{1},x_{2}>c_{\varepsilon}$ with $|x_{1}-x_{2}|<\delta_{\varepsilon}$ it holds that $|f(x_{1})-f(x_{2})|<\varepsilon$. Prove that $f$ is uniformly continuous on $[a,\infty)$. Hint: $[a,\infty)=[a,c_{\varepsilon/2}]\cup[c_{\varepsilon/2},\infty)$ and use the Heine-Cantor Theorem. I get that the Heine-Cantor Theorem proves uniform continuity on $[a,c_{\varepsilon/2}]$, but I don't get how we get to that epsilon and the general idea of the proof. Could you give me a more obvious hint or a place to start? AI: The definition of a function $f(x)$ being uniformly continuous on $[a,\infty)$ is that for any real number $\epsilon>0$, there exists a real number $\delta >0$ such that whenever two points in the domain are close, $|x_i-x_j|<\delta $, the values of the function at those two points are close, $|f(x_i)-f(x_j)|<\epsilon$. So given $\frac{\epsilon}{2} >0$, from the condition given to us in the question, we know that there exist $c_{\frac{\epsilon}{2}} \in (a, \infty)$ and $\delta_{\frac{\epsilon}{2}}>0$ such that if $|x_k-x_j|<\delta_{\frac{\epsilon}{2}}, x_k,x_j >c_{\frac{\epsilon}{2}}$, then $|f(x_k)-f(x_j)|<\frac{\epsilon}{2}$. Also, on the interval $[a,c_{\frac{\epsilon}{2}}]$, we know that the function is uniformly continuous, so there is a $\delta'_{\frac{\epsilon}{2}}>0$ such that if $|x_i-x_j|<\delta'_{\frac{\epsilon}{2}}, x_i,x_j \in [a,c_{\frac{\epsilon}{2}}]$, then $|f(x_i)-f(x_j)|<\frac{\epsilon}{2}$. Now we choose $\delta := \min(\delta'_{\frac{\epsilon}{2}},\delta_{\frac{\epsilon}{2}})$, and we are ready to verify the condition for the function $f$ to be uniformly continuous on the interval $[a, \infty)$. So we choose two arbitrary numbers in the domain of the function whose distance is less than $\delta$, say $x_s<x_d \in [a,\infty), |x_s-x_d|<\delta$. We have three cases for the locations of the two numbers: if $x_s,x_d \in [a,c_{\frac{\epsilon}{2}}]$, then $|f(x_s)-f(x_d)|<\frac{\epsilon}{2}<\epsilon$; if $x_s,x_d \in [c_{\frac{\epsilon}{2}},\infty)$, then $|f(x_s)-f(x_d)|<\frac{\epsilon}{2}<\epsilon$. Finally, it could be that one of the two numbers belongs to $[a,c_{\frac{\epsilon}{2}}]$ and the other belongs to $[c_{\frac{\epsilon}{2}},\infty)$; since we assume that $x_s<x_d$, this means $x_s \in [a,c_{\frac{\epsilon}{2}}], x_d \in [c_{\frac{\epsilon}{2}},\infty)$, and then $|f(x_s)-f(x_d)| = |f(x_s)-f(c_{\frac{\epsilon}{2}})+f(c_{\frac{\epsilon}{2}})-f(x_d)| \leq |f(x_s)-f(c_{\frac{\epsilon}{2}})|+|f(c_{\frac{\epsilon}{2}})-f(x_d)|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$. Hence, in all possible cases for the locations of the two numbers $x_s,x_d$, we have shown that the definition of uniform continuity is satisfied. Hence the function is uniformly continuous on the interval $[a,\infty)$.
H: How to calculate percent dynamically? $65$ is $100 \%$ and $55$ is $0 \%$. How can I calculate the percentage for all the other numbers from 55 to 65? AI: If you consider the points $(65,100)$ and $(55,0)$, then the line joining them is $$ \frac{y-0}{x-55}=\frac{100-0}{65-55}.$$ This gives us $$ y=10(x-55).$$ So the percentage of any number $x$ with $55\le x\le 65$ is given by $$y=10(x-55) \% .$$ For example, if you take $x=60$, then $y=10(60-55) \%=50 \%$.
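The same linear map as a tiny Python helper (my own addition; the function name is mine):

```python
def to_percent(x, lo=55.0, hi=65.0):
    """Linearly map x in [lo, hi] to a percentage in [0, 100]."""
    return 100.0 * (x - lo) / (hi - lo)

print(to_percent(55), to_percent(60), to_percent(65))   # 0.0 50.0 100.0
```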
H: Integrate $\int_0^{2\pi} e^{\cos\theta} \sin(\sin\theta-n\theta)d\theta$ How do I integrate the following? $\int_0^{2\pi} e^{\cos\theta} \sin(\sin\theta-n\theta)d\theta$ $\int_0^{2\pi} e^{-\cos\theta} \cos(\sin\theta+n\theta)d\theta$ AI: (Take $n$ to be a non-negative integer.) The first integral is$$\Im\int_0^{2\pi}e^{\cos\theta+i(\sin\theta-n\theta)}d\theta=\Im\int_0^{2\pi}e^{e^{i\theta}-in\theta}d\theta.$$Expanding $e^{e^{i\theta}}$ as a power series in $e^{i\theta}$ and using $\int_0^{2\pi}e^{ik\theta}d\theta=2\pi\delta_{k0}$ for $k\in\Bbb Z$, only the $k=n$ term of the series survives, so this is$$\Im\frac{2\pi}{n!}=0.$$Similarly, the second integral is$$\Re\int_0^{2\pi}e^{-e^{-i\theta}+in\theta}d\theta=\frac{(-1)^n2\pi}{n!}.$$
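A numerical confirmation of both closed forms with SciPy's quadrature (my own addition, not part of the original answer):

```python
import numpy as np
from scipy.integrate import quad
from math import factorial

for n in range(5):
    i1, _ = quad(lambda t: np.exp(np.cos(t)) * np.sin(np.sin(t) - n * t), 0, 2 * np.pi)
    i2, _ = quad(lambda t: np.exp(-np.cos(t)) * np.cos(np.sin(t) + n * t), 0, 2 * np.pi)
    expected = (-1) ** n * 2 * np.pi / factorial(n)
    print(n, round(i1, 10), round(i2, 10), round(expected, 10))
# i1 is always ≈ 0 and i2 matches (-1)^n * 2π / n!
```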
H: convergence of $\sum_{n=1}^\infty \frac {\sin (nx)}{n^{\alpha}}$ I'm trying to understand why the series $$\sum_{n=1}^\infty \frac {\sin (nx)}{n^{\alpha}}$$ converges for $\alpha > 0$. At the end of the prof for $0 < \alpha \le 1 $ it is not clear to me two passages. My book says that this result is obvious for $\alpha > 1$ because the series converges absolutely. $|\sin(nx)| \le 1 \Rightarrow |\frac {\sin (nx)}{n^{\alpha}}| \le \frac {1}{n^{\alpha}} \forall x \in R$. For $\alpha >1 \sum_{n=1}^\infty \frac {1}{n^{\alpha}}$ converges. So $\sum_{n=1}^\infty \frac {\sin (nx)}{n^{\alpha}} \le \sum_{n=1}^\infty \frac {1}{n^{\alpha}}$ and for the comparison test $\sum_{n=1}^\infty \frac {\sin (nx)}{n^{\alpha}}$ converges. For $0 < \alpha \le 1 $ we can use the Dirichlet criterion. To do this we have to prove that $\sigma_n=\sum_{k=1}^n \sin (kx)$ is limited. Once we have done this, the sequence $\left \{ \frac{1}{n^{\alpha}}\right \}$ is positive and decreasing and we can say that $\sum_{n=1}^\infty \frac {\sin (nx)}{n^{\alpha}}$ respects the Dirichlet criterion and it converges. I have not understood the proof of the limitation of $\sigma_n=\sum_{k=1}^n \sin (kx)$. In my book it suggests to analyze the complex sequence $e^{ikx}= \cos (kx)+i\sin (kx)$ with $r_n=\cos (kx)$ and $s_n=\sin (kx)$. $\sum_{n=1}^\infty e^{ikx}= \frac{e^{i(n+1)x}-1}{e^{ix}-1}$ and then$|\sum_{n=1}^\infty e^{ikx}|=\sum_{n=1}^\infty \sqrt{r_n^2+s_n^2} =\sum_{n=1}^\infty \sqrt{{\cos(kx)}^2+{\sin(kx)}^2} \le \frac {2}{|e^{ix}-1|} $ Why this last passage and $|e^{i(n+1)x}-1| \le 2$? and then the book continues saying that for $x \ne 2k \pi$ both the sequences $r_n$ and $s_n$ are limited. I don't understand why. AI: Let me correct some mistakes. You should have $\sum_{k=1}^n e^{ikx} = e^{ix} \frac{e^{inx} - 1}{e^{ix} - 1}$ (There are two errors here: first the sum should not be to infinity, and second, you are using a sum that would work if the summation were for $k$ going from $0$ to $n$ rather than $1$ to $n$.) It is not true that $\sum_{k=1}^n e^{ikx} = \sum_{k = 1}^n \sqrt{r_k^2 + s_k^2}$. Here you appear to be making the mistake of believing that $\left| \sum_{k = 1}^n z_k \right| = \sum_{k = 1}^n |z_k|$. In fact, we have the triangle inequality $\left| \sum_{k = 1}^n z_k \right| \leq \sum_{k = 1}^n |z_k|$, but not equality. Using the triangle inequality is key here. You can prove the $|e^{inx} - 1| \leq 2$ part that way.
H: Is this a property of isomorphisms? In a homework problem I was doing, I was trying to show that $U(8)$ is not isomorphic to $U(10)$. They used that, supposing $f: U(10) \rightarrow U(8)$ was an isomorphism, $|f(3)| = |3| = 4$ and there are no elements of order $4$ in $U(8)$, thus disproving that any such isomorphism could exist. I am confused. For a given homomorphism $\varphi : G \rightarrow H$, I understand it is true that $|\varphi(x)|$ divides $|x|$, for every $x\in G$. For an isomorphism $\varphi : G \rightarrow H$, is it the case that $|\varphi(x)| = |x|$, for $x\in G$? If not, can you explain the reasoning in the proof given at the beginning? AI: It is the case that for an isomorphism $\phi:G\to H$, $|\phi(x)|=|x|$ for $x\in G$. Briefly: since $\phi$ is an injective homomorphism, $x^k$ is the identity of $G$ if and only if $\phi(x)^k=\phi(x^k)$ is the identity of $H$, so $x$ and $\phi(x)$ have exactly the same order. That is what the proof uses: an isomorphism would have to send $3\in U(10)$, which has order $4$, to an element of order $4$ in $U(8)$, and $U(8)$ has no such element.
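A short computation of the element orders (my own addition) makes the argument concrete: $U(10)$ contains an element of order $4$, while every non-identity element of $U(8)$ has order $2$.

```python
from math import gcd

def orders(n):
    # order of each unit a modulo n
    result = {}
    for a in range(1, n):
        if gcd(a, n) != 1:
            continue
        k, x = 1, a
        while x % n != 1:
            x = x * a % n
            k += 1
        result[a] = k
    return result

print(orders(10))   # {1: 1, 3: 4, 7: 4, 9: 2}  -> 3 has order 4
print(orders(8))    # {1: 1, 3: 2, 5: 2, 7: 2}  -> no element of order 4
```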