H: Find an equation of each plane tangent to a cone that is perpendicular to a given plane Let $z=\sqrt{x^2+y^2}$ be a cone. Find an equation of each plane tangent to the cone which is perpendicular to the plane $x+z=5$. I learned how to solve this kind of question for the parallel case in a previous question I asked today, but now I want to properly understand the perpendicular case. Again, I have the answer, but I cannot fully comprehend it. Please teach me this again properly. Thank you so much :) Again I define a function $f(x,y,z)=(x^2/z)+(y^2/z)-z$, and to get a normal vector to the cone at a point $(x,y,z)$: $\nabla f(x,y,z)= (2x/z)i+(2y/z)j -k $. And then? AI: Hint: You can let $f(x,y,z)=\sqrt{x^2+y^2}-z$. Then a normal vector to the tangent plane is given by $\nabla f =\frac{x}{\sqrt{x^2+y^2}}i+\frac{y}{\sqrt{x^2+y^2}}j-k$, and you want this to be orthogonal to the vector $i+k$, which is a normal vector to $x+z=5$ (since the planes are perpendicular). Now set the dot product of these two vectors equal to zero.
H: Simpson's Rule, find an $N$ for $\;\int_1^5 \ln x \, dx,\;\; $ error $\leq 10^{-6}$ $$\int_1^5 \ln x \, dx\;\qquad\text{ error } \leq 10^{-6}$$ I know that my $K_4 = 24$ since the fourth derivative is $24x^{-5}$ $$\frac{24 \cdot 4^5}{180 N^4} \leq 10^{-6}$$ $$\frac{24 \cdot 4^5}{10^{-6}} \leq 180 N^4$$ $$\frac{24 \cdot 4^5}{10^{-6} \cdot 180} \leq N^4$$ $$\left(\frac{24 \cdot 4^5}{10^{-6} \cdot 180}\right)^{1/4} \leq N$$ This gives me the wrong answer; I should be getting 23. Where am I going wrong? AI: You went "one too far" with the "fourth derivative" of $f(x)$: $$f^{(4)}(x) = -6x^{-4}\,.$$ The function you are starting with when taking derivatives needs to be $\,f(x) = \ln x,\,$ which is the integrand, prior to integrating. (You are using Simpson's rule to approximate the integral, after all.) You took the fifth derivative of $f(x) = \ln x$, giving you $\left(f^{(5)}(x) = 24x^{-5}\right).\;$ So try applying Simpson's rule using $\;K_4 = 6$. Note also that in the error bound it is $(b-a)=4$ that is raised to the fifth power, while $N$ remains raised to the fourth: $$\frac{6 \cdot 4^5}{180 N^4} \leq 10^{-6}$$
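A quick numeric check of this bound (a sketch, assuming the standard Simpson error estimate $|E|\le K(b-a)^5/(180N^4)$; the variable names are mine):

```python
import math

# Standard Simpson's rule error bound: |E| <= K*(b-a)^5 / (180*N^4).
# For f(x) = ln x on [1,5], f''''(x) = -6/x^4, so K = max|f''''| = 6.
K, a, b, tol = 6, 1, 5, 1e-6
N = math.ceil((K * (b - a) ** 5 / (180 * tol)) ** 0.25)
N += N % 2  # Simpson's rule requires an even number of subintervals
print(N)
```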
H: Distance of a function from a subspace Let $f \in L^2([-a,a])$. Trying to find $\mathrm{dist}(f,S)$ in $L^2([-a,a])$ (where S is the subspace of real polynomials of max degree $2$, like $a+bx+cx^2$) and knowing that $\langle f,a\rangle=0$ and $\langle f,bx\rangle=0$, can we proceed by considering just $$ \left(\int_{-a}^a |f-cx^2|^2\right)^{1/2} $$ and minimizing with respect to $c$? AI: The best $L^2$ approximation to $f$ is going to be the orthogonal projection of $f$ onto $S$, let's call it $P_S(f)$. With that in mind, the simplest way to find this projection is to find an orthonormal basis $\{e_1,e_2,e_3\}$ for $S$ using Gram-Schmidt (this will be easy for your subspace), then $$ P_S(f)=\sum_{j=1}^3\langle f,e_j\rangle e_j $$ The distance from $f$ to $S$ is then simply $\|f-P_S(f)\|_2$.
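For this particular subspace, Gram-Schmidt applied to $\{1, x, x^2\}$ over $[-a,a]$ (here $a$ is the interval endpoint) gives the following, offered as my own computation and worth double-checking: $$e_1=\frac{1}{\sqrt{2a}},\qquad e_2=\sqrt{\frac{3}{2a^3}}\,x,\qquad e_3=\sqrt{\frac{45}{8a^5}}\left(x^2-\frac{a^2}{3}\right),$$ using that odd powers of $x$ integrate to zero over the symmetric interval, $\int_{-a}^a x^2\,dx=\frac{2a^3}{3}$, and $\int_{-a}^a\big(x^2-\frac{a^2}{3}\big)^2\,dx=\frac{8a^5}{45}$.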
H: Prove inequality by induction Once again, I'm stuck in a proof by induction; this time, it's proving that an inequality is valid. So, here is the inequality: Prove that $\binom{2n}{n} \geq (n+5)^2 \ \forall n \geq 5, n \in \mathbb{N} $ Then, what I want to prove is that: $\binom{2n+2}{n+1} \geq (n+6)^2$ For $n=5$: $\binom{10}{5} = 252 \geq 100$ The inductive step would be: $\binom{2n+2}{n+1} \geq (n+6)^2$ $\binom{2n+2}{n+1} = \frac{(2n + 2)(2n+1)(2n)!}{(n+1)(n+1)n!n!} = \frac{2(2n+1)}{n+1}\frac{(2n)!}{n!n!}$ By the inductive hypothesis, I know that: $\frac{2(2n+1)}{n+1}\frac{(2n)!}{n!n!} \geq \frac{2(2n+1)}{n+1} (n+5)^2 $ I want to prove, indeed, that: $\frac{2(2n+1)}{n+1} (n+5)^2 \geq (n+6)^2$ And there I'm stuck. I've tried doing this: $ 2(2n+1)(n+5)^2 \geq (n+6)^2(n+1) $ $\Rightarrow 4n^3 + 42 n^2 + 120 n + 50 \geq n^3 + 13n^2+48n+36$ $\Rightarrow n(4n^2 + 42n + 120) + 50 \geq n(n^2 + 13n + 48) + 36$ Because $50 \geq 36 \Rightarrow n(4n^2+42n+120) \geq n(n^2+13n+48) $ But they told me here that this inequality is not necessarily true. So... any ideas how I should proceed? Thanks for your help! AI: What you have is right; you want to show that if $n \geq 5$, then $$2(2n+1)(n+5)^2 \geq (n+6)^2(n+1) $$ This can be shown in various ways. For example, if you just expand everything, you get that $$2(2n+1)(n+5)^2 \geq (n+6)^2(n+1) \Leftrightarrow 3n^3 + 29n^2 + 72n + 14 \geq 0 $$ which is obviously true for the region that we want. Note: what they are telling you is that $50 \geq 36$ does not by itself imply that $2(2n+1)(n+5)^2 \geq (n+6)^2(n+1) $.
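A quick symbolic check of that expansion (a sketch; sympy assumed):

```python
import sympy as sp

n = sp.symbols('n')
# Difference of the two sides of the inequality to be shown
print(sp.expand(2*(2*n + 1)*(n + 5)**2 - (n + 6)**2*(n + 1)))
# 3*n**3 + 29*n**2 + 72*n + 14, clearly nonnegative for n >= 0
```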
H: Solving for $x$ in $3^{2x+1} = 3^x + 24$ I'm having trouble solving this equation step by step: $$3^{2x+1} = 3^x + 24$$ I've tried to take the log of both sides but then I am stuck with the right hand side being $\log(3^x + 24)$. I've found the answer to '$x$' by trial and error but cannot arrive at the answer otherwise. Can anyone please show how to work it out properly? AI: Let $u = 3^x$, then we have: $$ 3u^2 = u + 24 $$ $$ 3u^2 - u - 24 = 0$$ Which is a quadratic in $u$. Solve, and then use that $u = 3^x$ in order to find $x$.
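Carrying the suggested substitution through (a sketch; sympy assumed):

```python
import sympy as sp

u, x = sp.symbols('u x', real=True)
roots = sp.solve(3*u**2 - u - 24, u)  # the quadratic in u = 3**x
print(roots)                          # [-8/3, 3]; only u = 3 > 0 is admissible
print(sp.solve(sp.Eq(3**x, 3), x))    # [1], so x = 1
```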
H: Find a point on a parabola that's closest to another point. Find the point on the parabola $3x^2+4x-8$ that is closest to the point $(-2,-3)$. My plan for this problem was to use the distance formula and then take the derivative to get my answer. I'm having a little trouble along the way. $$ d = \sqrt{(x_1-x_2)^2+(y_1-y_2)^2}.$$ AI: Suppose the closest point is at $p=(x_0,y_0)$, and set $q=(-2,-3)$. Then the tangent to the parabola at $p$ is perpendicular to $\ell$, the line through $p,q$. Since $y'=6x+4$, the slope of the tangent is $6x_0+4$, so the slope of $\ell$ is $-\frac{1}{6x_0+4}$. Since it passes through $q$, we see the equation for $\ell$ is $$y+3 = -\frac{x+2}{6x_0+4}$$ Combining with $y=3x^2+4x-8$ shows that $x_0$ is a root of $$ (6x+4)(3x^2+4x-5)+(x+2)=18x^3+36x^2-13x-18=0 $$ and we end up numerically computing the roots from here.
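A possible numeric finish (a sketch; numpy assumed, and the helper names are mine):

```python
import numpy as np

roots = np.roots([18, 36, -13, -18])  # coefficients of 18x^3 + 36x^2 - 13x - 18
xs = roots[np.isreal(roots)].real     # keep only the real roots
f = lambda x: 3*x**2 + 4*x - 8
# Pick the root whose point on the parabola is closest to (-2, -3)
d, x0 = min((np.hypot(x + 2, f(x) + 3), x) for x in xs)
print(x0, f(x0), d)
```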
H: Show that $f \in c_0^*$ and $||f||=\sum_{j=1}^{\infty} \frac{1}{j!}$ Let $$\begin{eqnarray} f: c_0 & \to & \mathbb{R}\\ (x_i)_1^{\infty} & \to & \displaystyle \sum_{j=1}^{\infty} \frac{x_j}{j!}\\ \end{eqnarray}$$ Show that $f \in c_0^*$ and $||f||=\sum_{j=1}^{\infty} \frac{1}{j!}$. I can show that $f \in c_0^*$ but can't show the rest. Here $c_0=\{(x_i)_1^{\infty} : x_i \to 0 \text{ as } i \to \infty\}$. AI: We have $$|f((x_i))|=\left|\sum_{j=1}^{\infty} \frac{x_j}{j!}\right|\leq \sum_{j=1}^{\infty} \frac{|x_j|}{j!} \leq ||(x_i)||_\infty \sum_{j=1}^{\infty} \frac{1}{j!} $$ so we have $$||f||\leq \sum_{j=1}^{\infty} \frac{1}{j!}$$ Now let $(x^k_i)$ be defined by $x^k_1=\cdots=x^k_k=1$ and $x^k_i=0$ if $i>k$, so $||(x^k_i)||_\infty=1$ and then $$\frac{|f((x^k_i))|}{||(x^k_i)||_\infty}=\sum_{j=1}^{k} \frac{1}{j!}\leq ||f||$$ and pass to the limit $k\to \infty$.
H: Write an expression in powers of $(x+1)$ and $(y-1)$ for $x^2+xy+y^2$ Write an expression in powers of $(x+1)$ and $(y-1)$ for $x^2+xy+y^2$ I calculated $f_x=2x+y $ $f_{xx}=2 $ $f_y=x+2y$ $f_{yy}=2$ And then what do I need to do? What is the formula to solve the question? AI: Let $u=x+1$, $v=y-1$, then $$ x^2+xy+y^2=(u-1)^2+(u-1)(v+1)+(v+1)^2=u^2+v^2+uv-u+v+1=\;\cdots $$
H: Force for electron movement $F= \frac{k}{d^2}$ An electron is fixed at $ x = 0$. Electrons repel each other with force $F= \frac{k}{d^2}$, where $k$ is a proportionality constant. Find the work done in moving a second electron along the x axis from x = 10 to x = 1. I don't know what this means or where to start; there are too many variables. I think this is the setup: $$\int_{10}^1 y*F dy$$ $$\int_{10}^1 y*\frac{k}{d^2} dy$$ $$\int_{10}^1 y*\frac{k}{y^2} dy$$ I am not sure about this, but I know work is force times distance, so my integral needs to take that into account, I think. AI: The work done in moving the electron between start $a=10$ and finish $b=1$ is $$\int_a^b dx \, F_e(x)$$ where $F_e$ is the electrostatic force between the electron and the fixed electron at $x=0$. This work done is equal to the change in electric potential energy between the points $a$ and $b$, as the force is conservative. The electrostatic force is $$F_e(x) = \frac{k q^2}{x^2}$$ where $q$ is the charge of an electron. The fact that $F_e(x) \gt 0$ means that the force is repulsive. The resulting integral is very simple to do.
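Evaluating the suggested integral symbolically (a sketch; sympy assumed, with the answer's sign convention $\int_a^b$, $a=10$, $b=1$):

```python
import sympy as sp

x, k, q = sp.symbols('x k q', positive=True)
W = sp.integrate(k*q**2 / x**2, (x, 10, 1))  # work done by the repulsive force
print(W)                                     # -9*k*q**2/10
```

The force does $-\frac{9}{10}kq^2$ of work, so the agent pushing the electron inward does $+\frac{9}{10}kq^2$ against the repulsion.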
H: Proving a statement about convergence of a complex sequence Let $x_k \in \mathbb C$ for $k \in \mathbb N \cup \{0\}$ and let $y_k = \frac{(x_0 + x_1 + ... + x_k)}{k+1}$. We want to prove that if $x_k$ converges to $x$ ($x \in \mathbb C$) as $k \rightarrow \infty$ then $y_k$ also converges to $x$ as $k \rightarrow \infty$. This seems like a simple, interesting exercise, but I'm having a little trouble. If $x_k \rightarrow x$ then from this I tried to make conclusions about the convergence of $y_k$, since it depends on $x_k$. As $k \rightarrow \infty$ then $y_k \rightarrow \frac{(x_0 + x_1 + ... + x)}{k+1}$, but how can we conclude $\frac{(x_0 + x_1 + ... + x)}{k+1} = x$? Intuitively we also notice that all the finitely many fixed terms over $k+1$ will go to 0 as $k \rightarrow \infty$. AI: We may start by proving the lemma that if $x_k \to 0$, then $y_k\to 0$. Choose $N=N(\epsilon)$ such that $|x_k|<\epsilon$ for all $k\geq N$. Then let $M\in \mathbb{Z}^+$, and look at $y_{N+MN}=y_{N(1+M)}$. If we let $A = |x_0+\cdots+x_{N-1}|$, then we have $$|y_{N+MN}|\leq \frac{A+MN\epsilon}{(M+1)N}.$$ Note that as $M\to\infty$, the first term goes to zero, and the second term goes to $\epsilon$. The general case $x_k \to x$ can be shown by defining an auxiliary sequence $\mu_k := x-x_k$. Note $\mu_k$ goes to zero, and you can apply the lemma to $\mu_k$ to get the result you want, because $$y_k = x - (\text{average of } \mu_0,\dots,\mu_k)$$
H: $p$-adic ring extensions vs. "ordinary" ring extensions I read about inverse limits in this post, and found the example by Arturo Magidin quite interesting (his "approximate" solution of $x^2 = -1$ in $\mathbb Z$). By his construction we get a ring which is an extension of $\mathbb Z$, i.e. the ring of $p$-adic (here $5$-adic) integers, where this equation has a solution. On the other hand, it is well known that $i$ solves this equation, so $\mathbb Z[i] \equiv \mathbb Z[x] / (x^2+1)$ is also a ring which is an extension of $\mathbb Z$ such that this equation is solvable. Could something be said about the relation between these different ring extensions? EDIT: Correction from comment $x^2 - 1 \to x^2 + 1$. AI: I’ll assume that you do understand the relation between $\mathbb Z$ and $\mathbb Z[i]$ on the one hand and between $\mathbb Z$ and $\mathbb Z_5$ on the other hand. I take it that you’re asking about what the relation between $\mathbb Z[i]$ and $\mathbb Z_5$ might be. Let’s call $I$ the solution that @Arturo told you about, of the equation $X^2+1=0$ in $\mathbb Z_5$. Maybe it was the solution that is congruent to $2$ modulo $5$ there. So the question becomes what the relation between $i\in\mathbb Z[i]$ and $I\in\mathbb Z_5$ might be. The two of them satisfy the same polynomial relation over $\mathbb Z$, after all. The answer is simple enough, if you know the language of ring (homo)morphisms. There is a morphism, and only one, $\varphi_1\colon\mathbb Z[i]\to\mathbb Z_5$, for which $\varphi_1(i)=I\,$; and similarly there is a unique morphism $\varphi_2$ with the same domain and target, for which $\varphi_2(i)=-I$. In other words, you can embed $\mathbb Z[i]$ into $\mathbb Z_5$ in precisely two ways, depending on where you choose to send $i$.
H: Why is it important to find the largest prime numbers? It always takes a lot of effort and money to find the next largest prime number. Why is it so important to do this work, and what is the application of those numbers? AI: Just to add to the previous answers: Usually, part of the discovery of these mathematical curiosities is not the result itself, but the new or improved method for finding these new results. It's not just about finding the number, which is meaningless in itself; it's about showing that mathematical techniques have advanced so much that we can even show that these incredibly huge numbers are prime. Similarly, who cares that we planted a flag on the moon, or that we sent people there at all? It's all about showing that we can do this.
H: An Equivalence Relation: Introspection into a Particular Well-Defined Quotient DATA: Let $f:\mathbb{Z}\setminus \{0\}\rightarrow \mathbb{N}$ be the function defined by $$f(n) = k,\quad\text{where } n=2^km,~m\in \cal{O},$$ and $\cal{O}$ is the set of odd integers. Let $v:\mathbb{Q}\setminus \{0\}\rightarrow \mathbb{Z}$ be a function defined by $$v\left(\frac{a}{b}\right)=f(a)-f(b).$$ QUESTION: Is $v$ well-defined? KNOWN: Let $X$ be a set and $\sim$ be an equivalence relation on $X$. Let $f:X\rightarrow Y$. If $\forall x,x'\in X$ we have that $x\sim x' \implies f(x)=f(x')$, then $f$ defines a function $X_{/\sim}\rightarrow Y$ by $[x] \mapsto f(x)$. In this case, we say $f$ is "well defined" on the quotient $X_{/\sim}$. AI: Recall that we obtain $\Bbb Q$ by quotienting the set $\Bbb Z\times (\Bbb Z-\{0\})$ with the equivalence relation $$(a,b)\sim (a',b')\iff ab'=a'b$$ This hints that we should see $v$ as a map $$\nu:(\Bbb Z-\{0\})\times (\Bbb Z-\{0\})\to\Bbb Z$$ defined as $$\nu(a,b)=f(a)-f(b)$$ and we ought to prove (or disprove) that $ab'=a'b\implies \nu(a,b)=\nu(a',b')$. Note that if $m$ is odd, $$\nu(mn,mk)=\nu(n,k)$$ since $\text{odd}\times \text{odd}=\text{odd}$. Similarly, if $m=2^j$ is a power of $2$, $$\nu(2^jn,2^jk)=j+f(n)-(j+f(k))=f(n)-f(k)=\nu(n,k)$$ Since this considers all possible alterations of the pair $n,k$, we conclude $\nu$ is well-defined. OBS: $\nu(a,b)$ simply returns the exponent of $2$ (negative or positive) in $$\frac{a}{b}$$
H: If every open subset of R is a disjoint union of open intervals, the number of the intervals is at most countable. Q: Assume that every open subset of R is a disjoint union of open intervals. Show that the number of the intervals is at most countable. Could you give me some help to solve this problem? Since R is uncountable, I thought that the number of the intervals is uncountable by intuition... I think each subset's being open is a key point to prove this one, but I'm not sure how to do it though... AI: Hint : Every open interval contains a rational number. Hence if you have a collection of disjoint open intervals, then each of these intervals contains a distinct rational number. Use the fact that the set of rational numbers is countably infinite to conclude.
H: Dirac Orthonormality Proof - Can't Make Sense of Complex Integral I'm having trouble rationalizing a particular statement that is, surely, present in many quantum mechanics textbooks. The following statement comes from the orthonormalization condition for eigenfunctions of the wavefunction, $\Psi (x, t) $, subject to the momentum operator, $\hbar/i (d/dx)$, with REAL eigenvalues, $\lambda \in \mathbb{R}$. The eigenfunctions are of the form: $\psi_\lambda = e^{i \lambda x / \hbar} $ and then, taking their inner product $ \langle \psi_{\lambda'} | \psi_\lambda \rangle = |A|^2 \int_{-\infty}^{\infty} e^{i (\lambda - \lambda')x/\hbar} dx = |A|^2 2 \pi \hbar \ \delta(\lambda - \lambda') $ So I'm quite unclear about that last part. Of course, if $\lambda' \neq \lambda$, you are integrating a sinusoid, and the answer is zero; however, isn't: $ \left. \int_{-\infty}^{\infty} e^{i (\lambda - \lambda')x/\hbar} dx \right|_{\lambda = \lambda'} = \int_{-\infty}^{\infty} 1 dx = \infty $ I mean, $ e^{i (\lambda - \lambda')z/\hbar}$ is analytic everywhere, so you could just complex integrate, treating $i$ as a constant, and take limits (same answer, no?). I also thought about Cauchy's Integral Theorem, and doing a contour integral, and using the maximum modulus principle: $ \lim_{p \rightarrow \infty} \left. \int_{-p}^{p} e^{i (\lambda - \lambda')x/\hbar} dx \right|_{\lambda = \lambda'} = 2 \pi \sum_i f(z_i) - \lim_{p \rightarrow \infty} p \left| e^{i(\lambda - \lambda') p / \hbar} \right| $ where $z_i$ is the location of singularity $i$. However, as far as I can tell... there are no singularities, and the term on the right is unbounded ($\rightarrow \infty$). So why does that evaluate to the delta function? Thanks AI: When you see a delta function, you should always understand any equation, identity or expression in a distributional sense, which means that $$\int_{-\infty}^{\infty} f(\lambda')\delta(\lambda-\lambda')d\lambda'=f(\lambda)$$ for any sufficiently smooth $f(\lambda)$. Now $$\int_{-\infty}^{\infty} f(\lambda')\int_{-\infty}^{\infty} e^{i (\lambda - \lambda')x/\hbar} dx d\lambda'$$ is, provided the integrations can be exchanged, $$\int_{-\infty}^{\infty} e^{i\lambda x/\hbar}\int_{-\infty}^{\infty} f(\lambda')e^{- i \lambda'x/\hbar} d\lambda' dx = \int_{-\infty}^{\infty} e^{i\lambda x/\hbar}\tilde{f}(x/(2\pi\hbar)) dx = 2\pi\hbar f(\lambda) $$ where I successively applied the Fourier transform and the inverse Fourier transform.
H: What's $P$ and what's $Q$ in this classic proof of the irrationality of $\sqrt 2$? In this proof extracted from Wikipedia: A classic proof by contradiction from mathematics is the proof that the square root of 2 is irrational. If it were rational, it could be expressed as a fraction $a/b$ in lowest terms, where a and b are integers, at least one of which is odd. But if $a/b = \sqrt 2$, then $a^2=$ $2b^2$. Therefore $a^2$ must be even. Because the square of an odd number is odd, that in turn implies that $a$ is even. This means that $b$ must be odd because $a/b$ is in lowest terms. On the other hand, if $a$ is even, then $a^2$ is a multiple of $4$. If $a^2$ is a multiple of $4$ and $a^2=2b^2$, then $2b^2$ is a multiple of $4$, and therefore $b^2$ is even, and so is $b$. So $b$ is odd and even, a contradiction. Therefore the initial assumption—that $\sqrt 2$ can be expressed as a fraction—must be false. Knowing that in a proof by contradiction you assume P and Not(Q), what's P and what's Not(Q) in this proof? AI: I'm assuming you are more accustomed to seeing proof by contradiction used largely with statements that are implications or conditionals. And indeed, when writing a proof by contradiction to prove statements of the form $$P \implies Q,$$ we typically assume $(P\land \lnot Q)$. But in this particular case, we do not seem to have an implication to prove. Rather, we have the proposition: The square root of $2$ is irrational. $\quad$( $Q$). There's no helpful "if, then", or "this implies that" to indicate any sort of implication being asserted. So we have an example of the use of a proof by contradiction to prove a statement other than an implication. What we can do is to think of the assertion to be proven as a simple "atomic" proposition: $\,Q.\,$ Then $\,\lnot Q\,$ is the statement to the effect: Suppose $\,\sqrt 2\,$ is not irrational. $\;$ Put differently, suppose $\,\sqrt 2\,$ is rational.$\quad(\lnot Q)$ The proof then proceeds, after having supposed $\,\lnot Q\,$, to invoke the definition of a rational number in order to arrive at a contradiction. In a sense then, the proof amounts to a "bare-bones" proof-by-contradiction: To prove that $\,Q,\,$ we assume $\,\lnot Q,\,$ and then we work to obtain a contradiction. Once we arrive at a contradiction, we can conclude that our assumption is false, and so we are justified in negating the false assumption: "therefore, $\lnot\lnot Q.$" $\;\;$ And this amounts to affirming the desired conclusion/assertion: therefore $Q$, since $\;\lnot \lnot Q\equiv Q$. The contradiction in this proof happens to come from our knowledge about the rational numbers, information which could be considered a premise: the "implicit" premise $P$ being the definition of a rational number.
H: how to show that $\int_0^1 \frac{t^{s-1}}{\sqrt{1-t^2}} d t = \frac{1}{2} B\left(\frac12, \frac{s}{2}\right) $ how to show that $$\int_0^1 \frac{t^{s-1}}{\sqrt{1-t^2}} d t = \frac{1}{2} B\left(\frac12, \frac{s}{2}\right) = \dfrac{\sqrt{\pi}\, \Gamma\left(\frac{s}{2}\right)}{2 \Gamma\left(\frac{s+1}{2}\right)}$$ AI: The standard definition is $B(a,b) = \int_0^1 x^{a-1} (1-x)^{b-1}\ dx$. Try $t = x^{1/2}$ in your integral. EDIT: For the equation $B(a,b) = \dfrac{\Gamma(a) \Gamma(b)}{\Gamma(a+b)}$, see e.g. http://en.wikipedia.org/wiki/Beta_function#Relationship_between_gamma_function_and_beta_function
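Carrying out the suggested substitution explicitly (a worked sketch): with $t=x^{1/2}$, so that $dt=\frac12 x^{-1/2}\,dx$, $$\int_0^1 \frac{t^{s-1}}{\sqrt{1-t^2}}\, dt = \frac12\int_0^1 x^{s/2-1}(1-x)^{-1/2}\, dx = \frac12 B\left(\frac{s}{2},\frac12\right),$$ and then using $\Gamma\left(\frac12\right)=\sqrt\pi$ in $B(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$ gives the right-hand side.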
H: How close could two local maxima be? How close could two local maxima be? Def Let $f$ be a real function defined on a metric space $X$. We say that $f$ has a local maximum at a point $p \in X$ if there exists $\delta > 0$ such that $f (q) \leq f(p)$ for all $q \in X$ with $d(p, q) < \delta$. Local minima are defined likewise. Edit: I'm also thinking that two (or more) local maxima may degenerate to one? And in that case, maybe it's not so obvious that the distance between two maxima can be arbitrarily small? AI: They can be as close as you want, since you can scale any graph $f(x)$ to become $f(nx)$, which shrinks it by a factor of $n$ in the x-direction. If you consider the graph of $f(x) = \sin \frac{1}{x} , x > 0$, you can see that the local maxima can occur within any arbitrarily small distance that you want.
H: Solve for x, when $ \log_3 (2 - 3x) = \log_9 (6x^2 - 19x + 2)$ How do you deal with the different bases when solving the equation: $$\log_3 (2 - 3x) = \log_9 (6x^2 - 19x + 2)$$ I'm going round in circles trying to reconcile the bases. AI: Exponentiate both sides with base $9$ to get: $$ 9^{\log_3(2 - 3x)} = 9^{\log_9 (6x^2 - 19x + 2)} $$ $$ (3^2)^{\log_3(2 - 3x)} = 6x^2 - 19x + 2 $$ $$ (3^{\log_3(2 - 3x)})^2 = 6x^2 - 19x + 2 $$ $$ (2-3x)^2 = 6x^2 - 19x + 2$$ This is a quadratic in $x$. Solve appropriately. One last thing (thanks to Calvin Lin): Keep in mind that in order for $\log x$ to be defined (over the real numbers), you need $x > 0$. After you solve your quadratic, you want to check for extraneous roots. Make sure that your solutions for $x$ satisfy $ 2 - 3x > 0$ and $ 6x^2 - 19x + 2 > 0 $. (You shouldn't have any problems here, but it's important to do so in general!)
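Finishing the algebra for completeness (my computation): $$(2-3x)^2 = 6x^2-19x+2 \iff 3x^2+7x+2=0 \iff (3x+1)(x+2)=0,$$ so $x=-\frac13$ or $x=-2$; both satisfy $2-3x>0$ and $6x^2-19x+2>0$, so both are valid.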
H: The connection between mathematical induction and implication What is the connection between mathematical induction and implication? I always see that mathematical induction is about $$P(k)\implies P(k+1).$$ From what I know, mathematical induction works by finding a way to transform $P(k)$ into $P(k+1)$ and if you can do it, then you prove it. Now, speaking in terms of logic formality, we can treat $P(k)$ as statement $A$, and treat $P(k+1)$ as statement $B$. We know that an implication is true if statement $A$ is true and statement $B$ is true; $A$ is false and $B$ is true/false (implication is false if $A$ is true and $B$ is false). How can we know the truth value of statement $B$ ($P(k+1)$)? Is it the act of deriving $P(k+1)$ from $P(k)$ that makes statement $B$ have a value of true? AI: Unless I am seriously misunderstanding you, I think your question really amounts to: "How does mathematical induction work, logically speaking? I don't get it." That's a perfectly good question to ask. In fact, it is one of the most common questions I have encountered when teaching undergraduate mathematics over the last ten years or so. (Moreover, many students should ask this question, in the sense that they actually don't understand the logic of mathematical induction but either don't perceive their own lack of understanding or are not willing to verbalize it.) It's up there with big conceptual questions like "How does the $\epsilon,\delta$ definition of a limit work, logically speaking?" or "What is uniform convergence, logically speaking?" or "What is linear independence..." These are key questions, but they are hard to answer in a vacuum. By that I mean that we often build a large portion of an undergraduate course around answering each of the above questions: there are prerequisites, training, several rounds of conceptual explanation of various kinds, examples, exercises, exams, and so forth. Without knowing where you are with respect to all these processes of undergraduate coursework, one (or at least, this one) can't give an explanation tailored to you: one can just try to repeat one of the blanket explanations that one gives in lectures. So, for instance, I can point you to Chapter 2 of these lecture notes of mine, where induction is explored in great detail. I can also say that in my department we have decided to place mathematical induction towards the end of an entire course devoted to introducing students to the concepts and methods of mathematical proof. I support that approach, as I have come to realize that induction is more logically complicated than most other proof techniques learned at this level. For instance, I think that in order to properly understand the logic of induction one needs to understand the logic of quantifiers. Thus one version of the Principle of Mathematical Induction is: Principle of Mathematical Induction for Subsets: Let $S$ be a subset of the positive integers $\mathbb{Z}^+$ satisfying both of the following properties: (POI1) $1 \in S$. (POI2) For all $n \in \mathbb{Z}^+$, $n \in S \implies n+1 \in S$. Then $S = \mathbb{Z}^+$. My guess is that the place that you should concentrate your efforts is not solely or even primarily in the implication, but rather in understanding the meaning of the universal quantification in (POI2). Notice that that quantification is entirely missing in your description of mathematical induction: that's a symptom of the problem, I suspect. I am honestly sorry not to be able to give a more useful answer. 
I have spent 2-3 weeks in undergraduate courses teaching induction and at the end had a substantial minority of the students who were still not fully comfortable with the logic of it. I wish I knew how to do better! Added: I just clicked on the OP's profile and saw that s/he is 17 years old. Knowing that, I would say -- again, with all possible goodwill and in full knowledge that it could be frustrating to hear it -- to just have a little faith and patience: in my experience, most high school students who grapple with mathematical induction find it just a bit too abstract for them in some (I am imagining!) Piagetian sense. Most of these teenagers find that by the time they are just a year or two older they are simply cognitively more receptive to abstraction in general and mathematical induction in particular. For instance, once in my late 20's I was reminiscing with an old friend from high school who had taken most of the same (rather advanced, for our tender years and by the standards of our school) math courses as I had, and at one point he vouchsafed to me that he remembered that every once in a while I would say something about "mathematical induction", and that he never knew what the heck I was talking about but kept that to himself. He went on to study electrical engineering at a top American university, and when he saw mathematical induction in his courses then he had no trouble understanding it and was bemused by the silent mental block he had had about it as a high school student. It may also be the case that high school courses which introduce mathematical induction (at least in the United States) are not very serious about it, do not expect most of the students to really grasp it, and in fact are not giving as good or careful an explanation as one would get in a university course. I learned about mathematical induction while taking a "self-paced precalculus course" over the summer at the age of 15. In self-paced courses, one actually starts at the beginning of the textbook, spends about five hours a day for three weeks reading the textbook, and keeps going until one gets to the end (and demonstrates at least some level of mastery of what was read). This meant that I got exposed to things that it turns out were not actually covered in most high school courses, or certainly not covered very well. Induction was always an "easy sell" for me (somewhat unfortunately, I think, for my current pedagogy), but for instance I remember reading about rotating conic sections to get rid of cross terms like $xy$ and being hella confused about what was going on. Once I learned linear algebra a couple of years later, all that stuff got a lot easier. (But there are still things that I read in my textbooks in that one self-paced course that I have literally never encountered again in 12 more years of schooling and 10 years of post-PhD mathematical research: for instance there is, I vaguely recall, something called a latus rectum. I am starting to think that it was not actually so important!)
H: Revisited$_2$: Are doubling and squaring well-defined on $\mathbb{R}/\mathbb{Z}$? Define a relation $\sim$ on $\mathbb{R}$ as follows: for any $a,b \in \mathbb{R}$, $$a\sim b \iff a-b\in \mathbb{Z}.$$ Let $S=\mathbb{R}/{\sim}$. That is, $S$ is the set of equivalence classes of elements of $\mathbb{R}$ under the equivalence relation $\sim$. Define $f:S\rightarrow S$ by $f([t])=[t^2]$ for $t\in \mathbb{R}$. Is this well-defined? Define $g:S\rightarrow S$ by $g([t])=[2t]$ for $t\in \mathbb{R}$. Is this well-defined? Let $C=\{(x,y)\in\mathbb{R^2}:x^2+y^2=1\}$, and define $h:S\rightarrow C$ by $$h([t])=(\cos(2\pi t),\sin(2\pi t)).$$ What is the process here to realize whether or not $h$ is well-defined? Or whether it is a bijection? I know that I have to show $$[t]=[t']\implies h([t])=h([t']),$$ and $$\forall x\in C \exists x'\in S : h([x'])=x,$$ but I'm not really sure how to go about this. AI: $$t-s\in\mathbb{Z}\implies t^2-s^2=(t-s)(t+s)\in\mathbb{Z}\tag{false}$$ $$t-s\in\mathbb{Z}\implies 2t-2s=2(t-s)\in\mathbb{Z}\tag{true}$$ $$t-s\in\mathbb{Z}\implies (\cos(2\pi t),\sin(2\pi t))=(\cos(2\pi s),\sin(2\pi s))\tag{true}$$
H: Examples of a monad in a monoid (i.e. category with one object)? I've been trying to figure out what having a monad in a monoid (i.e. a category with one object) would mean. As far as I can tell it would be a homomorphism (functor) $T : M → M$, with two elements (natural transformation components) $\eta, \mu : M$, such that $\forall x. \eta x = T(x) \eta$ $\forall x. \mu T(T(x)) = T(x) \mu$ $\mu \eta$ = $\mu T(\eta)$ = 1 $\mu T(\mu) = \mu \mu$ The identity monad $T(x) = x$, with $\eta = \mu = 1$, is an obvious example for any monoid. But no other examples really come to mind... These laws seem a bit strange. Are there any interesting examples, or any good intuition for what the laws would mean? AI: So you can interpret a functor $T$ as a homomorphism $T:M \rightarrow M$. What is the interpretation of a natural transformation between two such functors? Well, it should assign to the single object of the base category some morphism such that the necessary diagram commutes. So we can think of $\eta, \mu$ as being elements of the monoid $M$, satisfying the following identities: $$\begin{align*}\eta m &= T(m)\eta \\ \mu T^2(m) &= T(m) \mu \end{align*}$$ for all $m$. The associativity law for the monad becomes $$ \mu T(\mu) = \mu^2 $$ Note two things. First, that this is an identity of elements of the monoid $M$. The LHS corresponds to the natural transformation $\mu \circ T\mu$, while the RHS is the natural transformation $\mu \circ \mu T$. Since the functor $T$ fixes the only point in the category, $\mu T = \mu$. The unit law is: $$\mu T(\eta) = e = \mu \eta.$$ It's not clear to me right now what this means for a general monoid, but for a group, we see that $T$ is actually just conjugation by $\eta$, and $\mu$ is the inverse of $\eta$. So at least for groups, the monads are just inner automorphisms, which is nice. I can't think of an interpretation for a general monoid, but the intuition from groups might help.
H: Solving a problem using Cauchy's residue theorem, is there more to it? Let $z_1,...z_n$ be distinct complex numbers. Let $C$ be a circle around $z_1$ such that no other $z_j$ is in $C$ for $j>1$. Let $$f(z) = (z-z_1)(z-z_2)...(z-z_n)$$ Find $\oint_{C}{\dfrac{\mathrm{d}z}{f(z)}}$. Attempt: Using Cauchy's Residue Theorem, and $$g(z) = \dfrac{1}{f(z)}$$ we see that $g(z)$ is analytic inside and on $C$ (assuming no $z_j$ is on $C$ for $j>1$) except for $z=z_1$ $$\oint_{C}{\dfrac{\mathrm{d}z}{f(z)}}=\oint_{C}{g(z)}\ \mathrm{d}z=2\pi\textbf{i}\ \phi(z_1)$$ where $$\phi(z) = \dfrac{1}{(z-z_2)(z-z_3)...(z-z_n)}$$ $$\therefore \phi(z_1) = \dfrac{1}{(z_1-z_2)...(z_1-z_n)}$$ Any faults with this? Or something that I should add? AI: Cauchy's residue theorem states: Let $C$ be a simple closed contour, oriented in a positive direction. If a function is analytic inside and on $C$ except for a finite number of singular points denoted by $z_k\ \forall\ k \in \{1,2,\dots,n\}$ then: $$\oint_C f(z)\ \mathrm{d}z = 2\pi \textbf{i}\sum_{k=1}^{n}\mathrm{Res}_{z=z_k}f(z)$$ From Brown and Churchill, Complex Variables and Applications. From what you've written: Let $C$ be a circle around $z_1$ such that no other $z_j$ is in $C$ for $j>1$. I understand that the contour $C$ encloses $z=z_1$ and that all other points $z_j$ with $j>1$ do not reside in $C$. Therefore we need to find the residue at $z=z_1$, and only at that point, because it is the only point in the area enclosed by the contour at which the function "blows up". For that we can use your method: Let $g(z)=\frac{1}{f(z)}$, for which we have: $$\oint_C g(z)\ \mathrm{d}z = 2\pi \textbf{i}\ \mathrm{Res}_{z=z_1}\frac{1}{f(z)}$$ To find the residue, apply the theorem: An isolated singular point $z_0$ of some function $f$ is a pole of order $m$ iff $f(z)$ can be expressed thusly: $$f(z)=\frac{\phi{(z)}}{(z-z_0)^m}$$ with $\phi$ analytic and nonzero at $z_0$. In our case $z_0$ is $z_1$, which is isolated by virtue of the contour we have chosen to enclose it. So our isolated point is $z_1$ and the pole is simple, i.e. $m=1$: $$g(z) = \frac{1}{f(z)} = \frac{\phi(z)}{(z-z_1)}$$ From here we see that we can indeed express our function in the form indicated and that $\phi(z)$ is in fact $\frac{1}{(z-z_2)...(z-z_n)}$. So the residue is simply: $$\mathrm{Res}_{z=z_1}g(z) = \phi(z_1) = \frac{1}{(z_1-z_2)...(z_1-z_n)}$$ So: $$\oint_C g(z)\ \mathrm{d}z = 2\pi \textbf{i}\ \phi(z_1) = \frac{2\pi \textbf{i}}{{(z_1-z_2)...(z_1-z_n)}}$$ Hope this helps
H: I couldn't find the fault in $B_X(a,\epsilon)\times B_Y(b,\epsilon)=B_{X\times Y}((a,b),\epsilon)$ I know that the product of two balls of equal radius in metric spaces is not necessarily a ball in the product space. But I couldn't identify the fault in the proof where I showed $B_X(a,\epsilon)\times B_Y(b,\epsilon)=B_{X\times Y}((a,b),\epsilon):$ $(x,y)\in B_X(a,\epsilon)\times B_Y(b,\epsilon)\\\iff x\in B_X(a,\epsilon),y\in B_Y(b,\epsilon)\\\iff d_X(a,x)<\epsilon,d_Y(b,y)<\epsilon\\\iff \max\{d_X(a,x),d_Y(b,y)\}<\epsilon\\\iff d_{X\times Y}((a,b),(x,y))<\epsilon\\\iff(x,y)\in B_{X\times Y}((a,b),\epsilon)$ In the above figure, is the topology obtained by defining the product metric as $d_{X\times Y}=\max\{d_X,d_Y\}$ different from the topology obtained from $\|\cdot\|_2$? AI: Just because the max distance is less than $\epsilon$ does not mean that the distance in the product space is less than $\epsilon$. Both $x=0.9$ and $y = 0.9$ are within $\epsilon = 1$ of the origin in $\mathbb{R}^1$, but the point $(0.9,0.9)$ is distance $\sqrt{0.9^2 + 0.9^2} = 1.27 \cdots$ from the origin in $\mathbb{R}^2$.
H: Why is the sample correlation coefficient not $1?$ A reasonable value for the sample correlation coefficient $\rho$ between daily maximum temperatures and daily ice cream sales would be $A) 0$ $B) 1$ $C) 0.7$ $D) -0.7$ I am taking intro stats for the first time and this was a question that confounded me. I know the answer is neither $A$ nor $D$. So stuck between $B$ and $C$, I chose $B$. The correct answer was $C$ - indicating the correlation isn't that strong. I am assuming the higher the temperature, the higher the sales? Indicating some sort of straight linear trend - strong correlation? AI: If the correlation were $0.7$, it would be fairly strong. It wouldn't be $1$, because that would mean there would be a direct/"perfect" linear relation between the two.
H: On upper central series Let $G$ be a group and $Z(G)$ be the center of $G$. This is the upper central series $$1=Z_{0}(G)\leq Z_{1}(G)\leq...,$$ defined by $\frac{Z_{n+1}(G)}{Z_{n}(G)}=Z\left(\frac{G}{Z_{n}(G)}\right)$. Now prove that $Z_{i}\left(\frac{G}{Z_{j}(G)}\right)=\frac{Z_{i+j}(G)}{Z_{j}(G)}$. Attempt: We use induction on $i$. It is obvious for $i=1$. Now suppose that $Z_{i}(\frac{G}{Z_{j}(G)})=\frac{Z_{i+j}(G)}{Z_{j}(G)}$ (the inductive hypothesis). Now we have $\frac{Z_{i+1}(\frac{G}{Z_{j}(G)})}{Z_{i}(\frac{G}{Z_{j}(G)})}=Z(\frac{\frac{G}{Z_{j}(G)}}{Z_{i}(\frac{G}{Z_{j}(G)})})=Z(\frac{\frac{G}{Z_{j}(G)}}{\frac{Z_{i+j}(G)}{Z_{j}(G)}})\cong Z(\frac{G}{Z_{i+j}(G)})=\frac{Z_{i+j+1}(G)}{Z_{i+j}(G)}\cong \frac{\frac{Z_{i+j+1}(G)}{Z_{j}(G)}}{\frac{Z_{i+j}(G)}{Z_{j}(G)}}=\frac{\frac{Z_{i+j+1}(G)}{Z_{j}(G)}}{Z_{i}(\frac{G}{Z_{j}(G)})}$. Therefore $$\frac{Z_{i+1}(\frac{G}{Z_{j}(G)})}{Z_{i}(\frac{G}{Z_{j}(G)})}\cong \frac{\frac{Z_{i+j+1}(G)}{Z_{j}(G)}}{Z_{i}(\frac{G}{Z_{j}(G)})}.$$ Now can we say that $Z_{i+1}(\frac{G}{Z_{j}(G)})=\frac{Z_{i+j+1}(G)}{Z_{j}(G)}$? How? Thank you. AI: Very perceptive of you to notice the difference between isomorphisms and equalities and how that precludes a naive application of the lattice theorem. Here's a patch for the middle: $$\begin{array}{ccc} Z\left(\frac{G}{H}\right) & = & \frac{A}{H} \\ & \Large \Updownarrow & \\ Z\left(\frac{G/N}{H/N}\right) & = & \frac{A/N}{H/N} \end{array} $$ with $N=Z_j(G)$, $H=Z_{i+j}(G)$, and $A=Z_{i+j+1}(G)$. Hint to see why $A$ is the same in both: $$[aH,gH]=H\iff [aN(H/N),gN(H/N)]=H/N.$$
H: Why does the series $\sum_{n=1}^∞ \ln ({n \over n+1})$ diverge? And general tips about series and the logarithm Why does the series $\sum_{n=1}^∞ \ln ({n \over n+1})$ diverge? I'm looking for an answer using the comparison test; I'm just not sure what I can compare it to. Can I also have some tips on what to look at when handling series that have logarithms in the expression? Thanks in advance! AI: We have $$\log\frac{n}{n+1}=\log n-\log (n+1)$$ Telescope, telescope, telescope. Alternatively, $$\tag 1 \log\left(1+\frac{1}n\right)\sim\frac 1 n$$ as $n\to\infty$ cries out for the comparison test. ADD Recall (or prove) that $$\lim_{x\to 0}\frac{\log(1+x)}x=1$$ This means $$\lim_{n\to \infty}\frac{\log\left(1+\frac{1}n\right)}{\frac1n}=1$$ which is what I write in $(1)$.
H: Choices for course selection I read this question on brilliant.org: Winston must choose 4 courses for his final semester of school. He must take at least 1 science class and at least 1 arts class. If his school offers 4 science classes, 3 arts classes and 3 other classes, how many different choices for classes does he have? My solution: His schedule can be written as {Science, Arts, Any, Any}, in which: there are $4$ choices for the first Science class there are $3$ choices for the first Arts class Now that the requirements are satisfied, we can pool the rest of the classes, giving us $2$ slots for $3+(4-1) +(3-1) = 8$ classes, which can be filled in $8\times 7$ ways. In total, he has $4\times 3\times 8\times 7 = 672$ ways to choose his classes. However, the website marked my answer as incorrect. What's the correct answer and why? AI: Whenever I see the phrase "at least one", I am tempted to try to solve the complementary problem. Let's try to enumerate the schedules that either have no science class or no arts class. If there is to be no science class, then we can choose our four classes from the remaining six options in $\binom{6}{4}$ ways. If there is to be no arts class, then we can choose our four classes from the remaining seven options in $\binom{7}{4}$ ways. A priori, we can't simply add these values together to get the number of schedules with no science class or no arts class, because we have counted twice the schedules that have no science class and no arts class. Fortunately for us, there are zero such schedules, since there are only three "other" classes. That is, we can't possibly fill out a four-class schedule without using a science or arts class. Now, there are $\binom{6}{4} + \binom{7}{4}$ schedules that lack either a science class or an arts class. These are the bad schedules. We want to subtract this from the total number of unrestricted schedules, of which there are $\binom{10}{4}$. Finally, the number of schedules that have at least one science class and at least one arts class is $$ \binom{10}{4} - \binom{7}{4} - \binom{6}{4} = 160. $$
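A brute-force confirmation of the count (a sketch; the class labels are mine):

```python
from itertools import combinations

# 4 science ('S'), 3 arts ('A'), 3 other ('O') classes
classes = ['S1', 'S2', 'S3', 'S4', 'A1', 'A2', 'A3', 'O1', 'O2', 'O3']
count = sum(1 for sched in combinations(classes, 4)
            if any(c.startswith('S') for c in sched)
            and any(c.startswith('A') for c in sched))
print(count)  # 160
```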
H: Analog of modus ponens for semantics To pose my question, I must first quickly define a language, a model, semantics for such models, and a logical system called S4O. Consider a language $L$ with a set $PV$ of propositional variables, Boolean connectives $\neg$ and $\vee$ (with the other Boolean connectives as shorthands for these), the necessity modality $\Box$, and a "next" modality O. Define a model to be a triple $M = \langle X,f,V \rangle$ where $X$ is a topological space, $f$ is a continuous function on $X$, and $V: PV \rightarrow \mathcal{P}(X)$. For $A$ and $B$ wffs of $L$ and $p \in PV$, define: $M(p) = V(p)\\ M(A \vee B) = M(A) \cup M(B)\\ M(\neg B) = X - M(B)\\ M(\Box B) = \textit{Int}(M(B))\\ M(\text{O}B) = f^{-1}(M(B))\\ $ where $\textit{Int}$ denotes topological interior. Suppose $M = \langle X,f,V \rangle$ is a model. We can define semantics as follows: $ M \models B \text{ iff } M(B) = X \\ \langle X,f \rangle \models B \text{ iff } \langle X,f,V \rangle \models B \text{ for every valuation } V \\ X \models B \text{ iff } \langle X,f \rangle \models B \text{ for every continuous function } f. \\ \models B \text{ iff } X \models B \text{ for every topological space } X. $ The system S4O in the language $L$ is given by the following axioms: $ \text{the classical tautologies} \\ \text{S4 axioms for } \Box \\ (\text{O}(A \vee B) \leftrightarrow (\text{O}A \vee \text{O}B)) \\ (\text{O} \neg A \leftrightarrow \neg \text{O} A) \\ (\text{O} \Box A \leftrightarrow \Box \text{O} A) \\ $ and the inference rules of modus ponens, and necessitation for both O and $\Box$. I can finally pose my question. Suppose S4O proves the formula $(A \rightarrow B)$ for some particular $A, B$. Suppose we've proven soundness. Then $\models (A \rightarrow B)$. In particular, for any model $M$, $M \models (A \rightarrow B)$. If we suppose $M \models A$, does it follow that $M \models B$? AI: Since your semantics agrees with ordinary classical logic on the Boolean connectives, modus ponens will work just as it does classically. In detail, looking at the last two lines of your question, you have $M\vDash(A\to B)$ and $M\vDash A$. Since $\to$ is defined as usual from $\neg$ and $\lor$, the former means $M\vDash((\neg A)\lor B)$. According to your definitions, this means $(X-M(A))\cup M(B)=X$. But since $M\vDash A$, you also have $M(A)=X$, so $X-M(A)=\varnothing$ and therefore $(X-M(A))\cup M(B)=M(B)$. So $M(B)=X$, which means $M\vDash B$.
H: $a^2+b^3=c^5$: Are there infinitely many solutions? I am having trouble figuring out whether there are infinitely many integer solutions to the following equation: $$a^2+b^3=c^5$$ This is just a problem I thought of on my own, so sorry in advance if this is already an open problem. The way I tried to solve it is this: We know that there are infinitely many integers such that $a^2+y^2=z^2$. Therefore, if $b^3=y^2$ and $c^5=z^2$, we might have a solution. This reduces to $y=b^{3/2}$ and $z=c^{5/2}$. So $b$ and $c$ have to be squares. However, I don't know where to go from here. I don't know how to prove that there are infinitely many $y, z$ which satisfy $y=b^{3/2}$ and $z=c^{5/2}$ and $a^2+y^2=z^2$. Thanks. AI: We will cheat. Let $a=2^{3k}$ and let $b=2^{2k}$. Then the left-hand side is equal to $2^{6k+1}$. It is easy to find infinitely many $k$ such that $6k+1$ is divisible by $5$. Just let $k$ be any integer congruent to $4$ or $9$ modulo $10$. Then we can let $c=2^{(6k+1)/5}$. Remark: We get into interesting territory when we ask that the numbers be relatively prime. For some information, see the Wikipedia article on Beal's Conjecture.
H: $\mathbb{R}/{\sim}$: A Question about the Formal Definition of a Quotient For an equivalence relation $\sim$ what is $\mathbb{R}/{\sim}$? I mean explicitly and formally... AI: Define on $\mathbb R$ an equivalence relation $\sim$, i.e. a relation that's reflexive, symmetric and transitive, and for all $x\in \mathbb R$ let $$[x]=\{y\in\mathbb R\ |\ y\sim x\}$$ be the class of $x$, i.e. the set of elements related to $x$. Then it isn't difficult to prove that the set of classes, denoted by $\mathbb R/\sim$, $$\mathbb R/\sim=\{[x]\ |\ x\in\mathbb R\}$$ forms a partition of $\mathbb R$.
H: If $f$ is any function and $X_1 ... X_n$ are IID, are $f(X_1), f(X_2), ..., f(X_n)$ IID? Suppose that $f : \mathbb{R} \rightarrow \mathbb{R}$ be any function and let $X_1, X_2, ..., X_n$ be IID real-valued random variables drawn from any arbitrary distribution. Is it guaranteed that $f(X_1), f(X_2), ..., f(X_n)$ are IID as well? This came up when discussing randomized algorithms that generate IID variables - if an algorithm generates IID variables and then uses them deterministically to build up some real number, I was curious about whether those real numbers were necessarily IID as well. Thanks! AI: Yes, this is true. It isn't difficult to prove, and you might like to try it for yourself. A useful observation is that an event of the form $\{f(X) \in B\}$ can be written $\{X \in f^{-1}(B)\}$.
H: What makes a Maclaurin Series special or important compared to the general Taylor Series? I realize that the Maclaurin Series is a special form of the Taylor Series where the series is centered at $x=0$, but I have to wonder what's special about it such that it deserves its own special designation? On that point, how would you know (or care) which point to choose as the center of a Taylor Series? AI: Expanding on the comment above, the idea is that we really like the expression $$ \sum_{k=0}^\infty a_k z^k, $$ simply because it is easy to manipulate and involves less writing than a series with powers of $(z-a)$. So a lot of the time we like to shift our function so that the "point of interest" is simply $0$ (mathematicians try to be efficient, I suppose). Typically we expand in a Taylor series (or more generally, a Laurent Series) about the point $z=a$ to investigate the behavior of $f$ near $a$. Is $f$ well behaved, or does it blow up? Can it be approximated using polynomials? If so, how good is this approximation and how far away from $a$ will it hold? This third question is the basis of many classical numerical analysis algorithms, including numerical differentiation and integration, as well as solution methods for ODEs. The analysis of these methods relies heavily on Taylor series - for example, say we're at $x=a$ and want to approximate the value of the function $f$ at $a+h$, a little distance away. The Taylor series about $x=a$ reads: $$ f(x)=f(a)+f^\prime(a)(x-a)+\frac{f^{\prime\prime}(a)}{2}(x-a)^2+O((x-a)^3) $$ where the "big-O-$(x-a)^3$" means a quantity that grows as a constant multiple of $(x-a)^3$. If we evaluate this Taylor approximation at $x=a+h$, we arrive at the nice, simple expression $$ f(a+h)=f(a)+hf^\prime(a)+\frac{h^2}{2}f^{\prime\prime}(a)+O(h^3) $$ This says that if we know the value of the function and its first and second derivatives at $x=a$, we can approximate the value of $f$ at $a+h$ to an accuracy of $h$-cubed. So, for instance, if $h=0.1$, our approximation will only be off by a constant multiple of $0.001$. (This constant, incidentally, will depend on how bad the third derivative is near $a$). Of course, I'm only using this "numerical" idea as an example of why we might expand the Taylor series at a location other than 0 - the idea has plenty of other uses.
H: Use a known Maclaurin series to derive a Maclaurin series for the indicated function. $$f(x)=x\cos(x)$$ I'm not quite sure how to do this. I did two others, which I presume is the right way to do it, as follows: \begin{align} e^x&=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots\\ e^{-x/4}&=1-\frac{x}{4}+\frac{x^2}{4^22!}-\frac{x^3}{4^33!}+\cdots\\ &=\sum_{n=0}^{\infty}(-1)^n\frac{x^n}{4^nn!} \end{align} and \begin{align} \sin(x)&=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots\\ \sin(x^6)&=x^6-\frac{(x^6)^3}{3!}+\frac{(x^6)^5}{5!}-\frac{(x^6)^7}{7!}+\cdots\\ &=x^6-\frac{x^{18}}{3!}+\frac{x^{30}}{5!}-\frac{x^{42}}{7!}+\cdots\\ &=\sum_{n=1}^{\infty}(-1)^{n-1}\frac{x^{12n-6}}{(2n-1)!} \end{align} But since this one appears to be two separate functions multiplied together, I'm not sure how to approach it. AI: Hints: What is the Maclaurin series for $\cos x$? Calculate the first few derivatives of $x\cos x$ in order to compute the first terms of the Maclaurin series by hand. Take note of what values you get. Write out the first couple of terms in the series. Using the Maclaurin series for $\cos x$ and the information you just computed, make a reasonable guess as to how to obtain a power series for $x\cos x$. Test your conjecture. Why should it be true? If you are still stuck after these steps, I am more than happy to provide further information, but I hope these will be more illuminating than me telling you the answer.
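For reference, the series these hints lead to (obtained by multiplying the cosine series term by term by $x$): $$x\cos x = x\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n}}{(2n)!} = \sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{(2n)!}.$$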
H: $h:\mathbb{R}_{/\sim}\rightarrow \mathbb{R}^2$: A Bijection from a Quotient Space to the Unit Circle (Geometrically Considered) NOTE: This is not a duplicate. Define a relation $\sim$ on $\mathbb{R}$ as follows: for any $a,b \in \mathbb{R}$, $$a\sim b \iff a-b\in \mathbb{Z}.$$ Let $S=\mathbb{R}_{/\sim}$. That is, $S$ is the set of equivalence classes of elements of $\mathbb{R}$ under the equivalence relation $\sim$. Let $C=\{(x,y)\in\mathbb{R^2}:x^2+y^2=1\}$, and define $h:S\rightarrow C$ by $$h([t])=(\cos(2\pi t),\sin(2\pi t)).$$ TASK: Prove that $h$ is a bijection. I know that I have to show $$h([t])=h([t'])\implies [t]=[t'],$$ and $$\forall x\in C \exists x'\in S : h([x'])=x,$$ but I'm not really sure how to go about this. Further question: What is the geometric interpretation of this? WORK: I get that $\mathbb{R}_{/\sim}=\{y\in\mathbb{R} : y-x\in \mathbb{Z}\}$, right? AI: Hint $h$ is injective: $h([t])=h([t'])\Rightarrow (\cos(2\pi t),\sin(2\pi t))=(\cos(2\pi t'),\sin(2\pi t'))\Rightarrow e^{2\pi it}=e^{2\pi it'}$ so $t-t'\in \mathbb Z$ and then $[t]=[t']$. $h$ is surjective: Let $(x,y)\in C$; since $x^2+y^2=1$, there is $t\in\mathbb R$ with $(x,y)=(\cos(2\pi t),\sin(2\pi t))$, so taking $[t]\in S$ we have $h([t])=(x,y)$. Geometrically, $C$ is the unit circle and we have $h(S)=C$ with $h$ bijective, so we can identify $S$ geometrically with the unit circle.
H: combination related question Suppose a dinner cook has 500 mint, 500 orange and 500 strawberry candies, and he wishes to make packets containing 10 mint, 5 orange and 5 strawberry. The question is: what is the maximum number of packets he can make this way? As I see it, it is a combination related problem, which means we count how many ways we can choose 10 mint from 500 mint, i.e. $500!/(10!*490!)+500!/(5!*495!)+500!/(5!*495!)$, but no calculator can compute the factorial of $500$; how can I solve it more easily? AI: After making 50 packets, you've used up $500=50\times 10$ mint, $250=50\times 5$ orange, and $250=50\times 5$ strawberry. There is no mint left, so you can't make any more.
H: Faster mental arithmetic with powers of 10 Please excuse me if this question is too vanilla. What's a faster way to do mental arithmetic involving powers of ten? I've always had to do this and I do it using scientific notation, which I'm ambivalent about, but I am finding myself roaringly slow. Here's what I do. Suppose I need to calculate in my head $1.25\%$ of $75 \text{ billion.}$ I start by converting $1.25\%$ of $75$ billion to scientific notation. $\because 75E9 = 7.5(10^{10}) \,\, \& \,\, 1.25\% = 1.25(10^{-2}) $ $\therefore 1.25\% \text{ of } 75 \text{ billion} =1.25(10^{-2}) \times 7.5(10^{10}) = 1.25 \times 7.5 \times 10^8.$ $\because 1.25 \times 7.5 = (1 + 0.25) \times (7 + 0.5) = 7 + 0.5 + 1.75 + 0.125 = 9.375$, $\therefore 1.25 \times 7.5 \times 10^8 = 9.375E8 = 0.9375E9 = 937.5 \text{ million.}$ I'll devour Books or site/guides about calculations by hand and mental tricks?, Mental math tip needed; moving decimal around on larger and smaller numbers?, Is it possible to practice mental math too often?, & Fast arithmetic, without a calculator?. AI: As my father would do it: Take $\,75\,$ , calculate the easy $\;\frac{75}4=18.75\;$ , now add this to $\;75\;:\;\;75+18.75=93.75\;$ , and now go to the billions: $$1.25\%\;\;\text{of}\;\;75\;\;\text{billion is}\;\;937.5\;\;\text{million}$$
H: Eigenvalues of a self-adjoint operator necessarily distinct? Let's say we have a self-adjoint operator acting on an inner product space (real or complex), represented, of course, by a self-adjoint matrix. I'm looking at the proof of the spectral theorem in which you build up a basis out of eigenvectors, relying on the fact that the characteristic polynomial will always have roots, both over a real and over a complex field, because eigenvalues of a self-adjoint operator are real. But what I do not understand is, why must all eigenvalues necessarily be distinct? How do we conclude that? After all, the spectral theorem says that every self-adjoint operator is diagonalizable, and I know that for a matrix of order $n$ to be diagonalizable, it has to have $n$ distinct eigenvalues. So, what am I missing here? Edit: A matrix doesn't have to have $n$ distinct eigenvalues in order to be diagonalizable, but if it does have $n$ distinct eigenvalues it is diagonalizable; I guess I was too sloppy and tired to notice such a silly mistake! But I'm leaving the question here ^_^ AI: The identity matrix is self-adjoint and all of its eigenvalues are equal (to $1$). The problem is in your false understanding that for a matrix to be diagonalizable, it has to have $n$ distinct eigenvalues ($n$ being the relevant dimension). That is incorrect, as the identity matrix (and many others) show. You are probably confused with the true statement that if an $n\times n$ matrix has $n$ distinct eigenvalues, then it is diagonalizable.
H: number leaving different remainders with different divisors A number when divided by 17 leaves remainder 3, when divided by 16 leaves remainder 10, and is divisible by 15. Find the smallest such number. I tried the conventional method but it gave me the wrong answer, so please help. AI: By the Chinese Remainder Theorem, the number is $x=3930$, which is unique modulo $4080$.
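A quick check of the answer (a sketch; sympy assumed):

```python
from sympy.ntheory.modular import crt

# x = 3 (mod 17), x = 10 (mod 16), x = 0 (mod 15)
x, m = crt([17, 16, 15], [3, 10, 0])
print(x, m)  # 3930 4080, i.e. x = 3930 is unique modulo 4080 = 15*16*17
```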
H: Find all real polynomials $p(x)$ that satisfy $\sin( p(x) ) = p( \sin(x) )$ Find all real polynomials $p(x)$ that satisfy $\sin( p(x) ) = p( \sin(x) )$. Is there an easy way to prove this? AI: From $\sin(p(0))=p(0)$, we get $p(0)=0$. Therefore $\sin(p(2k\pi)) = p(\sin(2k\pi)) = p(0) = 0$ for every integer $k$. In turn, $\cos(p(2k\pi))=\pm1$ and \begin{align*} &\sin(p(x)) = p(\sin(x))\\ \Rightarrow\ &p'(x) \cos(p(x)) = p'(\sin(x)) \cos(x)\\ \Rightarrow\ &p'(2k\pi) = \pm p'(0). \end{align*} Hence $p'(2k\pi)$ is bounded for all integers $k$ and $\deg(p)$ is at most $1$. Since $p(0)=0$, we have $p(x)=ax$. It remains to show that $a\in\{-1,0,1\}$. That should be easy.
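One way to finish the last step (my sketch): setting $x=\pi$ in $\sin(ax)=a\sin x$ gives $\sin(a\pi)=0$, so $a\in\mathbb Z$; setting $x=\pi/2$ gives $\sin(a\pi/2)=a$, which forces $|a|\le 1$. Hence $a\in\{-1,0,1\}$, and each of these values clearly satisfies the identity.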
H: If $f$ is compact is $f$ continuous? If $f$ is a compact function (image of every compact set is compact) is $f$ continuous? Attempt: I can't find a counterexample. I can't prove it. I only know how to prove the converse. AI: Let $$f:\Bbb R\to\Bbb R:x\mapsto\begin{cases} 0,&\text{if }x\in\Bbb Q\\ 1,&\text{if }x\in\Bbb R\setminus\Bbb Q\;. \end{cases}$$
H: Rationalizing a numerator I'm having trouble rationalizing a numerator with radicals. After multiplying by the conjugate I get 0. Does anyone know where I went wrong? \begin{align} \frac{\sqrt{2+y} + \sqrt{2 - y}}{y} & = \left(\frac{\sqrt{2+y} + \sqrt{2 - y}}{y}\right) \left(\frac{\sqrt{2+y} - \sqrt{2 - y}}{\sqrt{2+y} - \sqrt{2 - y}}\right)\\ & = \frac{\sqrt{2+y}\sqrt{2+y} - \sqrt{2+y}\sqrt{2-y} + \sqrt{2-y}\sqrt{2+y} - \sqrt{2-y}\sqrt{2-y}}{y\sqrt{2+y} - y\sqrt{2-y}}\\ & =\frac{2 + y - 2 - y}{y\sqrt{2+y} - y\sqrt{2-y}} = 0 \end{align} AI: Careful: $$(a-b)(a+b)=a^2-b^2\implies $$ $$\left(\sqrt{2+y}+\sqrt{2-y}\right)\left(\sqrt{2+y}-\sqrt{2-y}\right)=(2+y)-(2-y)=2+y-2+y=2y$$ You forgot to change the second $\,-y\,$ into $\,+y$.
H: Factoring a given polynomial I am trying to factor the polynomial $$(a-1)x^2 + a^2xy+(a+1)y^2.$$ The problem previous to it in the book uses the method of factoring a polynomial of the form $$ax^2 + bx +c$$ by inspection, and the problem following it uses a formula related to cubes (I thought it's best you know). That said, I began by multiplying the coefficients of $x^2$ and $y^2$, but that did not yield anything good. So I started again by taking $ax$, $x$, and $y$ as common factors, and that yielded nothing good either. I would show some of my other work, but that would seem way too messy without proper formatting. AI: Convince yourself that it's going to be $$(rx+sy)(tx+uy)$$ where $r$, $s$, $t$, and $u$ are going to have formulas involving $a$. Note that $$rt=a-1,\quad su=a+1$$ How can you get two things that multiply to $a-1$? Don't look for anything really fancy; what are the simplest possibilities? Same question for $a+1$. Now you have some possibilities for $r$, $s$, $t$, and $u$; see which combination gives you the right coefficient for $xy$. One soon notes that $r=a-1$, $t=1$, $s=1$, and $u=a+1$ work fine. Putting in these values gives the required result; hence we get $(ax-x+y)(x+ay+y)$.
H: Number of primitive roots modulo p; asymptotic behavior I know that the number of primitive roots modulo $p$ is $\varphi(p-1)$, where $\varphi$ is the Euler totient function. I'm actually interested in the asymptotic behavior of $\frac{\varphi(p-1)}{p-1}$ (the percentage of primitive roots among the elements of $\mathbb{Z}_p^*$). It's easy to see it attains its maximum ($0.5$) on Fermat primes (those of the form $2^n + 1$), and you can calculate a little bit to see what kinds of primes correspond to some fractions, but that's not really interesting. Do we know whether that sequence has a limit? Does it get arbitrarily close to zero? Is there a way we can speak of the mean value of that sequence? AI: A theorem is stated here: Let $D(u)$ be the relative asymptotic density in the set of all primes of the set $$\{{\,p{\rm\ prime}:\phi(p-1)/(p-1)\le u\,\}}$$ Then $D(u)$ exists for every real number $u$, $D(u)$ is a continuous function of $u$, and $D(u)$ is strictly increasing on $[0,1/2]$, with $D(0)=0$, $D(1/2)=1$. The theorem is attributed to I. Katai, On distribution of arithmetical functions on the set prime plus one, Compositio Math. 19 (1968), 278-289.
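To experiment with the ratio $\frac{\varphi(p-1)}{p-1}$ numerically, here is a small dependency-free sketch (the naive totient and the trial-division primality test are only for illustration):

    from math import gcd

    def phi(n):
        # Naive Euler totient, fine for small n.
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

    # Fraction of Z_p^* consisting of primitive roots: phi(p-1)/(p-1).
    for p in range(3, 60):
        if is_prime(p):
            print(p, phi(p - 1) / (p - 1))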
H: Can a "nearly" harmonic series converge to an irrational number (say, $\pi$)? Suppose you take the set $X=\{\sum_{k \in A} \frac{1}{k}: A \in \mathcal{P}(\mathbb{N} \setminus \{1\})\}$. Suppose that we agree to introduce the symbol $\infty$ to encompass the cases where the series $\sum_{k \in A} \frac{1}{k}$ diverges (so $\infty \in X$). My question is if any irrational number (say, $\pi$) is in $X$. Surely this could only possibly happen for an infinite set $A$ (any finite sum would have to be a rational number). Considering the fact that you can get converging series by deleting some of the terms of the harmonic series, it might happen that you could somehow obtain a series that converges to $\pi$. AI: You can get every positive number, rational or otherwise, as a sum of infinitely many reciprocals of distinct positive integers. That's true for any divergent series of positive terms [EDIT: as long as the terms go to zero]. I could tell you exactly how to do this, but you might get more out of working it out on your own.
H: find angle in triangle Let us consider problem number 21 in the following link http://www.naec.ge/images/doc/EXAMS/math_2013_ver_1_web.pdf It is from the Georgian national exam; it is written (ამოცანა 21), where the word "ამოცანა" means "problem". We should find the angle $\angle ADE$. I have calculated angle $B$, which is equal to $87^\circ$, but is there any sign of similarity between these two triangles, or how else can I find it? I think I could calculate the angle using the inscribed-angle formula, but I don't remember exactly how it goes, or how to connect the arc's angle with the angle $\angle ADE$. Please help me. AI: Hint: A convex quadrilateral BCDE is cyclic if and only if its opposite angles sum up to $180^\circ$. See also Wikipedia. I hope this helps ;-)
H: Estimating the integral $\sqrt{n}\cdot \int\limits_0^\pi \left( \frac{1 + \cos t}{2} \right)^n dt$ Consider the sequence $\{a_n\}$ defined by $$ a_n = \sqrt{n}\cdot \int_{0}^{\pi} \left( \frac{1 + \cos t}{2} \right)^n dt.$$ An exercise in Rudin, Real and Complex Analysis, requires showing that this sequence is convergent to a real number $a$, with $ a > 0$. I don't have any idea of how to prove this. I only obtained the following estimation $$ \begin{align*} \int_{0}^{\pi} \left( \frac{1 + \cos t}{2} \right)^n dt &= 2 \int_{0}^{\frac{\pi}{2}} \left( 1 - \sin^2 t \right)^n dt \\ &> 2 \int_{0}^{\frac{1}{\sqrt{n}}} (1 - t^2)^n dt \\ &> 2 \int_{0}^{\frac{1}{\sqrt{n}}} (1 - n t^2) dt \\ & = \frac{4}{3 \sqrt{n}}, \end{align*}$$ which shows that $ a_n > \frac{4}{3}$. Thank you very much in advance for any help. AI: Note that $a_n=\int\limits_0^\infty u_n(t)\mathrm dt$, where $$ u_n(t)=2^{-n}(1+\cos(t/\sqrt{n}))^n\,\mathbf 1_{0\leqslant t\leqslant\pi\sqrt{n}}. $$ It happens that $u_n\to u$ pointwise (can you show this?), where $$ u(t)=\mathrm e^{-t^2/4}, $$ hence a natural conjecture is that $a_n\to a$, where $$ a=\int_0^\infty u(t)\mathrm dt=\sqrt\pi. $$ A tool to make sure this convergence happens is Lebesgue dominated convergence theorem, which requires to find some integrable $v$ such that $|u_n|\leqslant v$ for every $n$. It happens that $v=u$ fits the bill. To see why, note that $\cos(2t)+1=2\cos^2(t)$ hence $|u_n|\leqslant u$ for every $n$ as soon as, for every $x$ in $(0,\pi/2)$, $\cos(x)\leqslant\mathrm e^{-x^2/2}$. Any idea to show this last (classical) inequality?
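As for the closing question, one standard argument: for $x\in[0,\pi/2)$ set $g(x)=\ln\cos(x)+x^2/2$. Then $g(0)=0$ and $$g'(x)=x-\tan(x)\leqslant 0\quad\text{on }[0,\pi/2)$$ (since $\tan(x)\geqslant x$ there), so $g(x)\leqslant 0$, i.e. $\cos(x)\leqslant\mathrm e^{-x^2/2}$.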
H: Finding a Jordan base Let $\frac{\mathrm{d} }{\mathrm{d} x}: \mathbb{R}\underset{\leqslant 3}{[x]}\rightarrow \mathbb{R}\underset{\leqslant 3}{[x]}$ be the derivative operator. I am trying to find a base $B\subset V$ and a Jordan block matrix $J$ so that $\left [ \frac{\mathrm{d} }{\mathrm{d} x} \right ]_{B}=J$. I chose the standard base $E=(1,x,x^2,x^3)$ and found $\left [ \frac{\mathrm{d} }{\mathrm{d} x} \right ]_{E}$ = $\left(\begin{smallmatrix} 0 & 1 & 0 &0 \\ 0 & 0 & 2 &0 \\ 0 &0 &0 &3 \\ 0&0 &0 &0 \end{smallmatrix}\right)$. I found that the minimal polynomial is $x^4$ and therefore the suitable Jordan matrix is: $\left(\begin{smallmatrix} 0 & 1 & 0 &0 \\ 0 & 0 & 1 &0 \\ 0 &0 &0 &1 \\ 0&0 &0 &0 \end{smallmatrix}\right)$. I couldn't find a Jordan form base for it; how do I continue from here? AI: Your matrix has almost the right form; you are just off by multiplicative coefficients. By linearity, if you divide your third vector by two, its image will also be divided by $2$, right? So you obtain the matrix $\begin{pmatrix}0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0& 0 & 0 & 6\\ 0& 0 & 0 & 0\end{pmatrix}$ in the base $(1,x,\frac{x^2}{2},x^3)$. Apply the same argument to correct the $6$ in the third row!
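Concretely, carrying this out gives the basis $B=\left(1,\ x,\ \frac{x^2}{2},\ \frac{x^3}{6}\right)$, i.e. the vectors $x^k/k!$: the derivative sends each basis vector to the previous one, so $\left[\frac{\mathrm{d}}{\mathrm{d}x}\right]_B$ is exactly the Jordan matrix $\left(\begin{smallmatrix} 0 & 1 & 0 &0 \\ 0 & 0 & 1 &0 \\ 0 &0 &0 &1 \\ 0&0 &0 &0 \end{smallmatrix}\right)$ found above.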
H: Show that $span({v_1,...v_n})=span({v_1,...,v_{n-1},w})$ This is what's given: $v_1,...v_n,w ∈ V$ and $v_1+...+v_n+w=0$ then I need to show that $span({v_1,...v_n})=span({v_1,...,v_{n-1},w})$ I could think of a way to show that this is true if I was sure that every $v_i$ and $w$ were $0$, but they aren't, right? Would using this be helpful?: $span(v_1,...,v_n) = \left\{x*(v_1,...v_n)|x∈ℝ\right\}$ I really don't know how to start this, any hints would be great, I'm also pretty sure that I could understand this by seeing the solution, or at least a part of it. I have a few problems of this kind and feel like solving this would help me understand the others. AI: $$span\{v_1,\dots,v_{n-1},w\}=\{\sum_{j=1}^{n-1}\alpha_jv_j+\beta w| \alpha\in\mathbb R^{n-1},\,\beta\in \mathbb R\},$$ but $w=-\sum_{j=1}^nv_j$, hence $$span\{v_1,\dots,v_{n-1},w\}=\{\sum_{j=1}^{n-1}(\alpha_j-\beta)v_j-\beta v_n| \alpha\in\mathbb R^{n-1},\,\beta\in \mathbb R\}=\{\sum_{j=1}^n c_j v_j| c\in\mathbb R^n\}=span\{v_1,\dots,v_n\}$$ Note that every $c\in\mathbb R^n$ arises this way: take $\beta=-c_n$ and $\alpha_j=c_j+\beta$; conversely, every choice of $\alpha$ and $\beta$ produces such a $c$.
H: A morphism which fixes one root of an irreducible polynomial must also fix the others. Let $E/K$ be a field extension, let $p(x)$ be an irreducible polynomial in $K[x]$ which splits in $E$ with roots $\alpha_1$, $\alpha_2$, etc., and let $\sigma$ be an automorphism of $E$ which fixes $K$. Then $\sigma$ fixes $p(x)$ and so permutes the roots. The proposition I am trying to prove is that if $\sigma$ fixes one root $\alpha_1$ of $p(x)$ then it must also fix the other roots $\alpha_i$. At first I thought the statement is false for $x^3-2$ in $Q[x]$, but it's not. Are there perhaps additional conditions needed to make this statement true? AI: This fails exactly when the order of the Galois group $G$ exceeds the degree of $p(x)$, call it $n$. Remember that the action of $G$ is transitive, so the order of a point stabilizer is (for any root $\alpha$) $$ \left\vert\operatorname{Stab}_G(\alpha)\right\vert=\frac{|G|}n>1, $$ iff $|G|>n$. So under those circumstances there exists a non-trivial automorphism $\sigma$ fixing $\alpha$. Non-triviality implies that $\sigma(\alpha')\neq\alpha'$ for some other root $\alpha'$ of $p(x)$. OTOH, if $|G|=n$, then any point stabilizer is trivial, and the claim is true. Yet another way of saying the same things is that the claim is true iff $\alpha$ is a primitive element of $E/K$ (keeping the assumption that $E$ is the splitting field of the minimal polynomial of $\alpha$). As then $E=K(\alpha)$ this is almost a tautology.
H: What is periodic solution to a PDE? If I have a PDE $$u_t = Au + f$$ with conditions $$u(0,x) = u(T,x)$$ then if it has a solution, why is the solution called periodic? Isn't it only true that $u(0) = u(T)$? It does not follow that $u(0+\epsilon) = u(T+\epsilon)$, which I would have thought is what periodic should be. Is that all that is required for the solution to be called that? Finally, is there any literature that address weak periodic solutions of parabolic PDE via Galerkin method? AI: The implicit assumption is that your PDE has a well-posed Cauchy problem, and that $A,f$ are either independent of time $t$ or periodic with period $T$. Under the above two assumptions, the uniqueness of solutions for the Cauchy problem will mean that $$ u(0,x) = u(T,x) \implies u(t,x) = u(T+t,x) $$
H: I need help with an assignment question please for numerical methods Consider the function $f (x) = xe^x - 2,$ we want to study the properties of $f (x)$ so that we can apply numerical methods to solve the equation $f (x) = 0$. Which option is false ? the function, $f (x)$ is well defined and continuous for all $x$ in the interval $(0,2)$ the function, $f (x)$ has no discontinuity and no singularities the function, $f '(x)$ is well defined and continuous for all $x$ in the interval $(0,2)$ the function, $f '(x)$ has no discontinuity and no singularities all of the above I do not understand the different options given. How do I know if the function is well defined? AI: Hint 1: Here is a plot of $f(x)$ and $f'(x)$ over the range (do they look continuous over the range, see any singularities over $\mathbb{R}$). Can you make an analytical argument over the range $(0,2)$ and $\mathbb{R}$, respectively, regarding continuity and singularities? Are both continuous? Do either of them have singularities? Hint 2: Here is a plot of $f(x) = 0$. Hint 3: From hint 2, did you try solving $f(x) = 0$ analytically? Can you?
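Once the continuity questions are settled, here is a sketch of where this is presumably heading, namely solving $f(x)=0$ numerically on $(0,2)$ by bisection (the endpoint choices and iteration count are mine):

    from math import exp

    def f(x):
        return x * exp(x) - 2

    # Bisection on [0, 2]: f(0) = -2 < 0 and f(2) = 2e^2 - 2 > 0, and f is
    # continuous there, so a sign change guarantees a root in between.
    a, b = 0.0, 2.0
    for _ in range(50):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m

    print(0.5 * (a + b))  # about 0.8526 (the root is W(2), the Lambert W value)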
H: What makes irreducible representations nice? Let $\mathcal{A}$ be a C*-algebra and $(H,\pi,\Omega)$ a cyclic representation. What does it intuitively mean if the representation is irreducible? From what I've read, irreducible representations are nice and I can be happy if my algebra can be represented in such a way, but which nice properties does an irreducible representation actually bring about? AI: I am teaching myself representation theory, and it seems to me irreducible representations are nice in two ways. (Since you are talking about representations of $C^*$-algebras, we might as well restrict our attention to $*$-representations, that is, the representation respects the involution.) Extrinsically, irreps are nice just as prime numbers are nice since irreps are basic building blocks of general representations. Formally, this refers to Every representation is a direct sum of irreducible representations. So to study representations of an algebra $\mathcal{A}$, we might first try to find its irreps, and see how to break up a general representation into these blocks. Irreps are also nice intrinsically. Since the algebra acts transitively on an irreducible representation, the geometry of an irrep is completely determined by this algebra. You might see its manifestations in the following $\mathcal{H}$ is an irrep of $\mathcal{A}$ if and only if $\mathcal{A}\cdot v$ is dense in $\mathcal{H}$ for every $v\neq 0$. Or indirectly, the von Neumann bicommutant theorem. Well to conclude, irreps are nice because they are the smallest possible representations. The above two are just two faces of this smallness.
H: Seeking an analytic proof of a vector identity Show that for any vectors $\bf{u_1},\bf{u_2},\bf{v_1},\bf{v_2}\in\mathbb R^3$, we have $$(\bf{u_1}\times\bf{v_1})\cdot(\bf{u_2}\times\bf{v_2})= \left|\begin{matrix} \bf{u_1}\cdot\bf{u_2} & \bf{u_1}\cdot\bf{v_2}\\ \bf{v_1}\cdot\bf{u_2} & \bf{v_1}\cdot\bf{v_2} \end{matrix} \right|.$$ I really don't want to "explode" it, is there a more analytic way to prove this identity? Thanks. AI: First note that, $(\vec a\times\vec b)\cdot\vec c=\vec a\cdot(\vec b\times\vec c)$ and then $\vec a\times(\vec b\times\vec c)=(\vec a\cdot\vec c)\vec b-(\vec a\cdot\vec b)\vec c$ EDIT: Consider $u_1=\vec a,$ $v_1=\vec b$ and $\vec c=(u_2\times v_2),$ then, $(u_1\times v_1)\cdot(u_2\times v_2)=u_1\cdot(v_1\times(u_2\times v_2))$ Now use second identity to expand $v_1\times(u_2\times v_2)$
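For completeness, carrying the hint through: $$(\bf{u_1}\times\bf{v_1})\cdot(\bf{u_2}\times\bf{v_2})=\bf{u_1}\cdot\big(\bf{v_1}\times(\bf{u_2}\times\bf{v_2})\big)=\bf{u_1}\cdot\big((\bf{v_1}\cdot\bf{v_2})\,\bf{u_2}-(\bf{v_1}\cdot\bf{u_2})\,\bf{v_2}\big)=(\bf{u_1}\cdot\bf{u_2})(\bf{v_1}\cdot\bf{v_2})-(\bf{u_1}\cdot\bf{v_2})(\bf{v_1}\cdot\bf{u_2}),$$ which is exactly the stated determinant.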
H: Discrete topology on infinite sets I want to prove the following: Let $X$ be an infinite set and $\tau$ a topology on $X$. If every infinite subset of $X$ is in $\tau$, then $\tau$ is the discrete topology on $X$. Proof. Let $x\in X$. There exist two infinite subsets $A$ and $B$ of $X$ such that $\{x\}=A\cap B$. So every singleton is open in $X$. It follows that $\tau$ is the discrete topology on $X$. Edit. Justifying the existence of $A$ and $B$. Assume that $C$ is a countably infinite subset of $X$, with a fixed enumeration. Let $A$ be the set of odd-numbered points in $C$ and $B=(C-A)\cup\{x\}$ for some $x$ in $A$ (i.e. $B$ contains the even-numbered points together with $x$). $A\cap B=\{x\}$. This holds for every $x$ in $B$ as well (by symmetry), so such $A$ and $B$ always exist. AI: Hints: Choose an arbitrary $\,x\in X\;$ : == There exists an infinite countable $\,Y\subset X\setminus\{x\}\,$ , say $\,Y=\{y_1,y_2,\ldots\}\;$ == Define $\,A:=\{y_i\in Y\;;\;i\;\;\text{is an odd natural number}\}\cup\{x\}\\B:=\{y_i\in Y\;;\;i\;\;\text{is an even natural number}\}\cup\{x\}$ == Deduce that every singleton in $\;X\;$ is open ...
H: Elliptic Cylinder Coordinates Integral Could somebody show me an example of an integral that becomes easy when you change to elliptic cylinder coordinates $x = a\cosh(\eta)\cos(\phi)$, $y = a\sinh(\eta)\sin(\phi)$, $z = z$, or even (&?) an integral where you would think to change your variables to these coordinates? Thanks! AI: You can consider the integral of the constant function $1$ over an elliptic cylinder, for example $$\displaystyle \int_{x^2+2y^2\leq 1, 0\leq z\leq 1}dV.$$
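A sketch of how that computation goes, for the solid region $x^2+2y^2\leq 1$, $0\leq z\leq 1$: the ellipse $x^2+2y^2=1$ has semi-axes $1$ and $1/\sqrt2$, hence foci at $\pm a$ with $a^2=1-\tfrac12=\tfrac12$. In elliptic cylinder coordinates with this $a$, the boundary is the single coordinate surface $\cosh\eta_0=\sqrt2$ (so $\sinh\eta_0=1$), and the volume element is $$dV=a^2\left(\sinh^2\eta+\sin^2\phi\right)d\eta\,d\phi\,dz.$$ Hence $$\int_{x^2+2y^2\leq 1,\ 0\leq z\leq 1}dV=\int_0^1\!\!\int_0^{2\pi}\!\!\int_0^{\eta_0}a^2\left(\sinh^2\eta+\sin^2\phi\right)d\eta\,d\phi\,dz=a^2\pi\left(\sinh\eta_0\cosh\eta_0-\eta_0\right)+a^2\pi\,\eta_0=a^2\pi\sinh\eta_0\cosh\eta_0=\frac{\pi}{\sqrt2},$$ in agreement with the elementary answer (the area of the ellipse, $\pi\cdot1\cdot\frac1{\sqrt2}$, times the height $1$). Note how the $\eta_0$ terms cancel: the point of these coordinates is that the limits of integration become trivial.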
H: Proving that for certain ring of algebraic integers $R$, $R/bR$ is finite This is a part of a proof I am trying to understand. The situation is the following: Suppose that $a,b,x,y$ are algebraic integers such that $b \neq 0$ and $ax+by=1$. Set $K:=\mathbb{Q}(a,b,x,y)$ and $R:=O_K,$ that is, the subring of all algebraic integers contained in $K$. The next statement of the proof is (without any further comments): "Then $R/bR$ is a finite ring." Hence my question: How can I prove that $R/bR$ is finite? Or is it somewhat obvious? (I should probably add that my knowledge of alg. number theory is very limited.) My attempt so far: From the given relation, it is not difficult to see that every element of $R/bR$ can be expressed as a $\mathbb{Q}-$linear combination of elements of the type $a^jy^i+bR$, $x^jy^i+bR$ for some $i,j \leq N$, where $N$ is a sufficiently large integer. I would also guess that there are not many possibilities for the values of the rational coefficients in those linear combinations. But that seems to be far from the desired conclusion. AI: This is just a general thing about rings of integers; it has nothing to do with the specifics of your problem... Let $K$ be any number field, and $R$ its ring of integers. Then you can check that $R$ is free as a $\mathbb Z$-module, in particular $R \simeq \mathbb Z^n$ where $n$ is the degree of $K$ over $\mathbb Q$. Therefore if you have any nonzero integer $N \in \mathbb Z$, then $R / N R $ is clearly finite (it has cardinality $|N|^n$). Now let $b \in R$, so that $b$ satisfies a monic polynomial equation with coefficients in $\mathbb Z$: $$ b^n + a_{n-1}b^{n-1} + \dots + a_0 = 0$$ of minimal degree. In particular, $a_0 \neq 0$, and $$ a_0 = - a_1 b - a_2 b^2 - \dots - b^n = b ( -a_1 - a_2 b - \dots - b^{n-1}) \in b R,$$ i.e., $a_0 R \subseteq bR$. So $|R / bR | \leq |R / a_0R|$, and this last ring is finite as we showed above.
H: Formula to fit a straight line to data Theorem (Best Linear Prediction of $Y$ outcomes): Let $(X,Y)$ have moments of at least the second order, and let $Y'=a+bX$. Then the choices of $a$ and $b$ that minimize $Ed^2(Y,Y')=E(Y-(a+bX))^2$ are given by $$a= \left(E(Y) - \dfrac{cov(x,y)}{var(x)}\right)E(X)$$ and $$b=\dfrac{cov(x,y)}{var(x)}$$ Proof: Left to the reader. I want to prove this theorem, so I see that this $a$ and $b$ are very similar to the case when correlation is equal $1$, except that $cov(x,y)$ is not $= std(x)std(y)$, but I can do no more. Also, below the theorem: Now define $V=Y-Y'$ to represent deviation....Since $EY=EY'$, $EV=0$ (there is no mention of why $EV=0$) $var(Y) = var(Y')+var(V)+2\,Cov(Y',V)$ (I get this one) where: $var(V)=EV^2=Ed^2(Y,Y')=Ed^2(Y,a+bX)=var(Y)-\dfrac{cov(X,Y)^2}{var(X)}$ (Why?Why?Why? I have sat down for nearly 1 hour and can't understand this expression. Sometimes, the book goes too fast for me to understand) AI: We want to minimize the expected value of $(Y-a-bX)^2$. That expands to $Y^2+a^2+(bX)^2-2aY-2bXY+2abX$. So we want to minimize $$E(Y^2)+a^2+b^2E(X^2)-2aE(Y)-2bE(XY)+2abE(X)$$ That is a quadratic in $a$ and $b$. Do you know how to minimize that? For the answer, you don't want the large brackets in the expression for $a$; it should read $a=E(Y)-bE(X)$. Then $E(Y')=E(a+bX)=a+bE(X)=E(Y)-bE(X)+bE(X)=E(Y)$. So $E(V)=E(Y-Y')=E(Y)-E(Y')=0$.
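In case the last displayed expression is the sticking point, here is a short derivation: with $b=\frac{\operatorname{cov}(X,Y)}{\operatorname{var}(X)}$ and $a=E(Y)-bE(X)$ we get $V=Y-a-bX=\big(Y-E(Y)\big)-b\big(X-E(X)\big)$, hence $$\operatorname{var}(V)=\operatorname{var}(Y)-2b\operatorname{cov}(X,Y)+b^2\operatorname{var}(X)=\operatorname{var}(Y)-\frac{\operatorname{cov}(X,Y)^2}{\operatorname{var}(X)},$$ and since $EV=0$ this equals $EV^2=Ed^2(Y,Y')$.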
H: Abelian $2$-groups Is every abelian group $A$ where every element has order two isomorphic to a direct product of cyclic groups of order two, $A\cong C_2\times C_2\times\ldots$? I ask because I used this "fact" in one of my old answers here (which is relevant to some work I am doing), and have just realised that this is not obvious, and so perhaps not true. Am I perhaps just not seeing something which I thought was obvious at the time? Or is there something more subtle going on? (Note that there is no assumption that $A$ is finitely generated.) AI: An abelian group is a $\mathbb{Z}$-module. An abelian group of exponent dividing $n$ is a $\mathbb{Z}/n\mathbb{Z}$ module. In your case, $A$ is a $\mathbb{Z}/2\mathbb{Z}$-module, so a vector space over the field $\mathbb{Z}/2\mathbb{Z}$. By standard axioms such as the axiom of choice, $A$ has a basis, and so is the direct sum of one dimensional subspaces. In other words, $A$ is the restricted direct product or direct sum of copies of $C_2$. $\mathbb{Z}/n\mathbb{Z}$ is a self-injective ring if $n$ is nonzero, and this often gives you nice decompositions. See "DSC" group. $A$ need not be a direct product. For instance if $A$ is countably infinite, then it is not a direct product of copies of $C_2$, since finitely many $C_2$s produces finite cardinality, and infinitely many $C_2$s produces at least a continuum cardinality.
H: Why does $\gamma=\lim_{s\to1^+}\sum_{n=1}^{\infty}\left(\frac{1}{n^s}-\frac{1}{s^n}\right)=\lim_{s\to0}\frac{\zeta(1+s)+\zeta(1-s)}{2}$? To be clear, I'm having trouble with proving both equalities, and would appreciate a hint. I'm also not sure why $1^+$ must be used as opposed to $1^-$. I'm not sure about the definition of $\zeta(x), x\le1$ (I encountered these equations here, to provide context). The first one reduces thusly $$\gamma=\lim_{s\rightarrow 1^+}\sum_{n=1}^{\infty}\left(\frac{1}{n^s}-\frac{1}{s^n}\right)$$ $$=\lim_{s\rightarrow1^+}\left(\zeta(s)-\frac{\frac{1}{s}}{1-\frac{1}{s}}\right)=\lim_{s\rightarrow1^+}\left(\zeta(s)-\frac{1}{s-1}\right)$$ As $\gamma=\lim_{n\rightarrow \infty}\bigl(H_n-\ln(n)\bigr)$, the above equality is equivalent to $$\lim_{s\rightarrow 0^+}\frac{1}{s}-\lim_{n\rightarrow \infty}\ln(n)=0$$ , although I am implicitly using $$\lim_{n \rightarrow \infty}\sum_{k=1}^{n}\frac{1}{k}=\lim_{m \rightarrow 1^+}\sum_{k=1}^{\infty}\frac{1}{k^m}$$ which may be wrong as the limits are approached differently. Regardless, I'm not sure how to progress from there. I have even less of an idea about how to go about solving the second equality, perhaps because I have not dealt with antisymmetric limits before. AI: Dave Renfro's paper 'Euler's Constant $\gamma$' and David Speyer's link should be helpful for an elementary derivation of the first part, i.e. to get the limit: $$\tag{0}\gamma=\lim_{s\rightarrow 1^+}\left[\zeta(s)-\frac{1}{s-1}\right]$$ As indicated by Gerry, your problem was to go from a well defined limit (the limit of the difference $\,\zeta(s)-\frac{1}{s-1}\,$ as $\,s\rightarrow1^+$) to the difference of the limits when these limits are both infinite! $$-$$ Concerning your second limit: $$\tag{1}\gamma=\lim_{s\to0}\frac{\zeta(1+s)+\zeta(1-s)}{2}$$ this will require a better definition of $\zeta\,$ than $\,\displaystyle\zeta(s):=\sum_{n=1}^{\infty} \frac{1}{n^s}$ since this definition is valid only for $\Re(s)>1$. To go further you may use the Dirichlet eta function with the idea of converting a sum of positive terms to an alternating sum, so that $\zeta$ may then be written as: $$\tag{2}\zeta(s)=-\frac 1{1-2^{1-s}}\sum_{n=1}^{\infty} \frac{(-1)^n}{n^s}$$ which is convergent for any complex $s$ such that $\,\Re(s)>0,\ s\not =1\,$, or better use the analytic extension of $\zeta$ in the whole complex plane except $s=1$ where $\zeta\,$ has a simple pole (as you found). To see how to obtain the alternating series $(2)$ (and a convergence proof), as well as get some intuitive ideas about analytic continuation of $\zeta\,$, you may see this answer. Note that once the Laurent series of $\zeta$ at $s=1$ is obtained, with the simple pole at $1$: $$\tag{3}\zeta(s)=\frac 1{s-1}+\sum_{n=0}^\infty \frac{(-1)^n}{n!}\gamma_n\;(s-1)^n$$ with $\gamma_n$ the Stieltjes constants and $\gamma_0=\gamma$ your Euler constant, the limit of $\zeta(s)-\frac 1{s-1}$ at $s\to 1$ is rather straightforward.
Using the alternating series or the analytic extension, you'll get that the limit was in fact given by (note that $\,s\rightarrow1^+$ was replaced by $\,s\rightarrow1$): $$\gamma=\lim_{s\rightarrow 1}\left[\zeta(s)-\frac{1}{s-1}\right]=\lim_{z\rightarrow 0}\left[\zeta(1+z)-\frac{1}z\right]$$ The idea is simply to rewrite the $z$ at the right as $+s$ and as $-s$ and to take the mean value to get: $$\gamma=\lim_{s\rightarrow 0}\frac 12\left[\left(\zeta(1+s)-\frac{1}s\right)+\left(\zeta(1-s)-\frac{1}{-s}\right)\right]$$ or $$\gamma=\lim_{s\rightarrow 0}\frac {\zeta(1+s)+\zeta(1-s)}2$$ Let's conclude with an elementary proof using the alternating series $(2)$: \begin{align} \zeta(1+s)+\zeta(1-s)&=\sum_{n=1}^{\infty}\frac 1{2^{-s}-1} \frac{(-1)^n}{n^{1+s}}+\frac 1{2^{s}-1} \frac{(-1)^n}{n^{1-s}}\\ &=\sum_{n=1}^{\infty}\frac{(-1)^n}n\left[\frac 1{2^{-s}-1} \frac 1{n^{s}}+\frac 1{2^{s}-1} \frac 1{n^{-s}}\right]\\ &=\sum_{n=1}^{\infty}\frac{(-1)^n}n\left[\left(e^{-s\ln(2)}-1\right)^{-1}e^{-s\ln(n)} +\left(e^{s\ln(2)}-1\right)^{-1}e^{s\ln(n)}\right]\\ &=\sum_{n=1}^{\infty}\frac{(-1)^n}n\left[2\frac{\ln(n)}{\ln(2)}-1+\sum_{m=1}^\infty s^{2m}\,P_{2m}(\ln(n))\right]\\ \end{align} with $P_{2m}$ polynomials depending on $\ln(n)$ and constants only. But $\lim_{n\to\infty}\frac{\ln(n)^k}n=0$ for any nonnegative integer $k$, so that we get another nice series equal to $\gamma$: $$ \lim_{s\rightarrow 0}\frac {\zeta(1+s)+\zeta(1-s)}2=\sum_{n=1}^{\infty}\frac{(-1)^n}n\,\left(\frac{\ln(n)}{\ln(2)}-\frac 12\right)=\gamma$$
H: Explicit isomorphism $S_4/V_4$ and $S_3$ Let $S_4$ be a symmetric group on $4$ elements, $V_4$ - its subgroup, consisting of $e,(12)(34),(13)(24)$ and $(14)(23)$ (Klein four-group). $V_4$ is normal and $S_4/V_4$ is consisting of $24/4=6$ elements. Hence $S_4/V_4$ is the cyclic group $C_6$ or the symmetric group $S_3$ (really, there are only two groups consisting of $6$ elements). It is easy to see that the order of each element of $S_4$ is $1,2,3$ or $4$. So, $S_4/V_4$ is isomorphic to $S_3$. My question: how to build the isomorphism explicitly? AI: Let $A = \{a = (12)(34), b = (13)(24), c = (14)(23)\}$ (it is a conjugacy class in $S_4$). Let $S(A)$ be the group of permutations of $A$. $S_4$ acts by conjugation on $A$ : if $\sigma \in S_4$ and $a \in A$, $\sigma.a = \sigma a \sigma^{-1} \in A$. This gives a group morphism $S_4 \to S(A)$. Moreover, because $V_4$ is commutative and $A \subset V_4$, if $\sigma \in V_4$ then $\sigma.a = a$, hence $\sigma$ acts trivially, and so the kernel of that map contains $V_4$. Next, look at the action of a transposition, say $\tau = (12)$. You should find that $\tau.a = a, \tau.b = c, \tau.c = b$. Similarly, the other transpositions of $S_4$ should act as the other transpositions of $S(A)$. Since $S(A)$ is generated by transpositions, this means that the map $S_4 \to S(A)$ is surjective, so its kernel has cardinality $4$, but it contains $V_4$ so it has to be $V_4$, and finally, this gives an isomorphism $S_4 / V_4 = S(A)$
H: Proving statement - $(A \setminus B) \cup (A \setminus C) = B\Leftrightarrow A=B , C\cap B=\varnothing$ I'm trying to prove this claim and I need some advice on how to continue, $$(A \setminus B) \cup (A \setminus C) = B \Leftrightarrow A=B , C\cap B=\varnothing$$ what I did is: $$(A \setminus B) \cup (A \setminus C) = (A \cap B') \cup (A \cap C') = A \cup (B' \cap C')$$ thanks! AI: Hint for $\implies$ direction: When using the distributive law, note that $(A\cap B') \cup (A\cap C') = A\cap (B' \cup C')$: $$(A\setminus B) \cup (A\setminus C) = \color{blue}{(A \cap B')\cup(A\cap C')} = \color{blue}{\bf A \cap(B'\cup C')}$$ Note that by DeMorgan's $$A \cap(B'\cup C') = A\cap (B \cap C)'$$ Now recall that the premise is $$(A\setminus B) \cup (A\setminus C) = B$$ And now we're at $$\begin{align} (A\setminus B) \cup (A\setminus C) & = B \\ \\ A \cap(B'\cup C') & = B \\ \\ A\cap (B \cap C)' & = B \\ \\ A\setminus (B\cap C) & = B \end{align}$$ Now, what can you conclude about the relationship between $A$ and $B$, and about the intersection $B\cap C\;?$
H: Prove that $9\mid (4^n+15n-1)$ for all $n\in\mathbb N$ First of all I would like to thank you for all the help you've given me so far. Once again, I'm having some issues with a typical exam problem about divisibility. The problem says that: Prove that $\forall n \in \mathbb{N}, \ 9\mid4^n + 15n -1$ I've tried using induction, but that didn’t work. I've tried saying that: $4^n + 15n-1 \equiv 0 \pmod{9}$. Therefore, I want to prove that $4^{n+1} + 15(n+1) -1 \equiv 0 \pmod{9}$. I've proved it for $n=1$: it's $18\equiv 0 \pmod{9}$, which is OK. But for the inductive step, I get: $4\cdot4^n + 15n+15-1 \equiv 0 \pmod{9}$ And from there, I don't know where to apply my inductive hypothesis, and that's why I think induction is not the correct tool to use here. I guess I might use some tools of congruence or divisibility, but I'm not sure which ones. I do realize that for all $n\in \mathbb{N}/ \ 3 \ |\ n \Rightarrow 4^n \equiv 1 \pmod{9} \text{ and } 15n \equiv 0 \pmod{9}$. In that case, where $3$ divides $n$, I have proved that $4^n + 15n-1 \equiv 0 \pmod{9}$. But I don't know what to do with the other natural numbers, those not divisible by 3, that is, all $n \in \mathbb{N} / n \equiv 1 \pmod{3} \text{ or } n \equiv 2 \pmod{3}$. Any ideas? Thanks in advance! AI: By the Inductive Hypothesis, $4^n + 15n -1 \equiv 0$ so $4^n \equiv 1-15n$ and thus $$ 4^{n+1}+15(n+1)-1 = 4 \cdot 4^n + 15n + 14 \equiv 4 \cdot (1-15n) + 15n + 14 = 18 -45n \equiv 0 $$ since both $18$ and $45$ are divisible by 9.
H: Solving a simple quadratic equation I have problems every time I face a quadratic equation. What can I do to learn how to solve them? Can anyone please show me how to solve the one below and explain the basic principle of solving quadratic equations. $$x^2- xa - ab = 0$$ AI: There is a formula: $$Ax^2 + Bx + C = 0 \quad \Rightarrow \quad x_{1,2} = \frac{-B \pm \sqrt{B^2 - 4AC}}{2A}.$$ $A$ is whatever is next to $x^2$, $B$ is whatever is next to $x$, and $C$ is without $x$. In your case: $$x^2 - xa - ab = 1 \cdot x^2 + (-a)x + (-ab) = 0 \quad \Rightarrow \quad A = 1, \quad B = -a, \quad C = -ab,$$ so $$x_{1,2} = \frac{a \pm \sqrt{a^2 + 4ab}}{2}.$$
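If it helps to experiment, here is a direct transcription of the formula (the sample values and function name are mine; it assumes a nonnegative discriminant):

    from math import sqrt

    def solve_quadratic(A, B, C):
        """Real roots of A x^2 + B x + C = 0, assuming B*B - 4*A*C >= 0."""
        d = sqrt(B * B - 4 * A * C)
        return (-B + d) / (2 * A), (-B - d) / (2 * A)

    # The equation x^2 - a x - a b = 0 has A = 1, B = -a, C = -a*b, e.g.:
    a, b = 2.0, 3.0
    print(solve_quadratic(1.0, -a, -a * b))  # roots 1 +/- sqrt(7)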
H: Find the value of $\cos^{12}\theta + 3\cos^{10}\theta + 3\cos^{8}\theta + \cos^6\theta + 2\cos^4\theta + 2\cos^2\theta - 2$ We are given that $\sin\theta + \sin^3\theta + \sin^2\theta = 1$ Find the value of $\cos^{12}\theta + 3\cos^{10}\theta + 3\cos^{8}\theta + \cos^6\theta + 2\cos^4\theta + 2\cos^2\theta - 2$ Now, I was able to establish the following from the first equation: $\sin\theta + \sin^3\theta + \sin^2\theta = 1 = \sin^2\theta + \cos^2\theta \implies \sin\theta + \sin^3\theta = \cos^2\theta$ The next obvious step was to simplify the second expression. I let $\cos^2\theta = x$: $f(x) = x^6 + 3x^5 + 3x^4 + x^3 + 2x^2 + 2x - 2$ I looked for simple roots, but e.g. $f(-1) = -2 \neq 0$, so there is no obvious linear factor. I was stuck after this. AI: Observe that the powers of $\cos \theta$ are all even, suggesting that we should use the conversion $\cos^2 \theta = 1 - \sin ^2 \theta$. For simplicity, let $x = \sin \theta$. We are given that $$(x + x^2 + x^3) = 1$$ and want to find $$(1-x^2)^6 + 3(1-x^2)^5 + 3(1-x^2)^4 + (1-x^2)^3 + 2(1-x^2)^2 + 2(1-x^2) - 2 $$ By long division, we can reduce this modulo $x^3 + x^2 + x -1$ to get a remainder of $214x^2 + 20x - 72$, but then it's not clear what to do after that. You could use the fact that $x^3 + x^2 + x - 1 = 0 $ has 1 real root; but I doubt that is how they want you to proceed.
H: form of groups of motions of tessellations I have read from the book "Mathematics and Its History" by John Stillwell. Section 18.6 is about complex interpretations of geometry. The book says: The triangle and hexagon tessellations have similar group of motions, generated by $z \mapsto z+1 ,z \mapsto z+\tau,z \mapsto z\tau$, ($z=x+iy$) and more generally any motion of the Euclidean plane can be composed from translations $z \mapsto z+a$ and rotations $z \mapsto ze^{i\theta}$. (For example, the unit square pattern is mapped by the rotation of $\pi/2$ about the origin, and these three motions generate all motions of the tessellation onto itself. Then these generating motions are given by the transformations $z \mapsto z+1 ,z \mapsto z+i,z \mapsto zi$.) My question is why the rotation must be of the form $z \mapsto ze^{i\theta}$? Why must it be $ze^{i\theta}$? Can it take any other form? How do you conclude this form? Thanks in advance. AI: Any complex number $z$ has the form $z=re^{i\phi}=r\cos\phi+ir\sin\phi$. $r$ is its distance from the origin, and $\phi$ is the angle between $z$ and the positive real axis. When you multiply by $e^{i\theta}=\cos\theta+i\sin\theta$, the product is $$r( \cos\phi+i\sin\phi)(\cos\theta+i\sin\theta)$$ You should check that this equals $$r\cos(\phi+\theta)+ir\sin(\phi+\theta)$$ So the new point is the same distance from the origin, but the angle to the positive real axis is increased by $\theta$. That is why multiplying by $e^{i\theta}$ is a rotation by $\theta$.
H: Stuck at proving convergence of the series that is dependent on a converging series Suppose $\sum_{n=1}^{\infty}{a_n}$ converges, and $a_n > 0$. Does $$\sum_{n=1}^{\infty}{\dfrac{\sin(\sqrt{a_n})}{\sqrt{n}+na_n}}$$ converge or diverge? Attempt: I was able to prove that it diverges, as shown below, but could not find an example. Claim: $\sum_{n=1}^{\infty}{\dfrac{1}{\sqrt{n}+na_n}}$ diverges. Proof: Since $\sum_{n=1}^{\infty}{a_n}$ converges, there exists a $n\geq n_0$ such that $$0 \leq a_n \leq 1$$ which gives, $$\dfrac{1}{\sqrt{n}+na_n} \geq \dfrac{1}{\sqrt{n}+n}$$ proving the claim. Doing a limit comparison test for$\sum_{n=1}^{\infty}{\dfrac{\sin(\sqrt{a_n})}{\sqrt{n}+na_n}}$ with $\sum_{n=1}^{\infty}{\dfrac{1}{\sqrt{n}+na_n}}$ we get $$\lim_{n\rightarrow \infty}{\dfrac{\sin(\sqrt{a_n})}{\sqrt{n}+na_n}\cdot \dfrac{\sqrt{n}+na_n}{1}} = \sin(\sqrt{a_n}) < \infty$$ and hence the given series diverges. However, I am having trouble finding an example. Thanks in advance! AI: Hint: It is easy to see it converges for some $(a_n)$. To see it could diverge, you may consider $a_n=\frac{1}{n(\log n)^2}$ for $n\ge 2$.
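To spell out why the hinted example diverges: $\sum a_n$ converges by the integral test, while $\sqrt{a_n}=\frac{1}{\sqrt n\,\log n}$ and $na_n=\frac{1}{\log^2 n}=o(\sqrt n)$, so $$\frac{\sin(\sqrt{a_n})}{\sqrt n+na_n}\sim\frac{\sqrt{a_n}}{\sqrt n}=\frac{1}{n\log n},$$ and $\sum\frac{1}{n\log n}$ diverges (again by the integral test).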
H: Fix point of squaring numbers mod p Take the set of integers $\{0, 1, .., p-1\}$, square each element, you get the (smaller) set of quadratic residues. Repeat until you get a fix point set. The size of this set is a function of $p$. Does this function have a name? How can I efficiently calculate it? Some values: $$f(3) = 2, \\ f(5) = 2,\\ f(7) = 4,\\ f(11) = 6,\\ f(13) = 4, \\ f(17) = 2,\\ f(19) = 10,\\ f(23) = 12,\\ f(29) = 8$$ AI: Let us factor $p-1=2^am$, where $m$ is odd. The size of your terminal set is $m$. You are working with a cyclic group of order $p-1$. As $m$ is odd, the Chinese Remainder Theorem tells you that there is an isomorphism $$ \mathbb{Z}_p^*\cong C_m\times C_{2^a}. $$ Repeated squaring kills the $C_{2^a}$ after $a$ iterations, and squaring is bijective on $C_m$. If you include $0$ (I didn't), then add one.
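An empirical check of this description (plain Python; the helper names are mine):

    def terminal_size(p):
        s = set(range(p))                # start from {0, 1, ..., p-1}
        while True:
            t = {x * x % p for x in s}   # square every element mod p
            if t == s:
                return len(s)
            s = t

    def odd_part(n):
        while n % 2 == 0:
            n //= 2
        return n

    # Terminal set size vs. the formula m + 1, where p - 1 = 2^a * m.
    for p in [3, 5, 7, 11, 13, 17, 19, 23, 29]:
        print(p, terminal_size(p), odd_part(p - 1) + 1)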
H: Finding diagonal and unitary matrices Let $A=\begin{pmatrix} 1 & 1+i\\ 1-i & 2 \end{pmatrix}$ I'm trying to find a diagonal matrix $D$ and a unitary matrix $U$ so that $U^\star AU=D$. (We define $U^*=\overline{U}^t$ ). I found the eigenvalues: $\lambda_1 = 3$, $\lambda_2 = 0$. The eigenvectors are: $V_1=\begin{pmatrix} 1+i\\2 \end{pmatrix}$, $V_2=\begin{pmatrix} -1-i\\1 \end{pmatrix}$. What should I do from here? Thanks in advance. AI: Since $U$ is unitary, $U^*=U^{-1}$. So, let us first think about how to find a matrix $U$ such that $U^{-1}AU$ is diagonal, without worrying about the condition that $U$ be unitary; we'll come back to fix that detail at the end. Think about it this way: to diagonalize $A$, you want to find a change of basis: we know that the matrix for this transformation with respect to the basis $\{V_1,V_2\}$ is $$ \begin{pmatrix}3 & 0\\0 & 0\end{pmatrix}, $$ because it maps $V_1$ to $3V_1$ and $V_2$ to $0V_2$. If $U^{-1}AU$ is going to give you this, we want it to work like so: when we plug in the vector $(1,0)^{T}$, we want $U$ to transform that into $V_1$, so that $AU$ transforms it into $3V_1$, so that $U^{-1}AU$ transforms it into $(3,0)^T$. Does that make sense? If $U$ is going to transform $(1,0)^T$ into $V_1$, then the first column of $U$ must be exactly $V_1$. Similarly, the second column must be $V_2$. So, by this logic, we should take $$ U=\begin{pmatrix}1+i & -1-i\\2 & 1\end{pmatrix}. $$ For this matrix $U$, we certainly have $U^{-1}AU$ being the above matrix. However, there's one problem here: this matrix is not unitary. So, instead of using $V_1$ and $V_2$ directly, let's use $W_1=\frac{V_1}{\|V_1\|}$ and $W_2=\frac{V_2}{\|V_2\|}$. If you take the matrix $U$ whose first column is $W_1$ and second column is $W_2$, you should get the properties you want. In this case, we have $$ \|V_1\|=\sqrt{(1+i)(1-i)+2^2}=\sqrt{6}\qquad \|V_2\|=\sqrt{(-1-i)(-1+i)+1^2}=\sqrt{3} $$ and so we would take $$ U=\begin{pmatrix}\frac{1+i}{\sqrt{6}} & \frac{-1-i}{\sqrt{3}}\\\frac{2}{\sqrt{6}} & \frac{1}{\sqrt{3}}\end{pmatrix} $$ You can check that in this case, $U^*=U^{-1}$.
H: Linear homeomorphisms mapping an orthonormal basis into another orthonormal basis Consider $L^2(A)$ and $L^2(B)$. If $\{a_i\}$ is an o.n basis of $L^2(A)$, how many linear homeomorphisms $F:L^2(A) \to L^2(B)$ do there exist such that $Fa_i$ is an orthonormal basis of $L^2(B)$? Is this a very restrictive assumption on the maps, if I wanted to discuss something about homeomorphism between the spaces? AI: If you fix one such $F$, the rest are found by composition with elements of the unitary group of $L^2(A)$. The unitary group of a Hilbert space (sometimes called the Hilbert group, $\mathrm{Hilb}\,(L^2(A))$) is very large: it contains a copy of every compact group. At the same time, it is contractible, by Kuiper's theorem. You may also be interested in MO discussions on the subject: Compact subgroups of the unitary group of operators in a hilbert space Local cross sections for Unitary group in a hilbert space
H: find parameter for maximize area Suppose that we have a Cartesian coordinate system, and suppose that we have three points which depend on a parameter $t$, where $t$ belongs to $(0,1)$; the points are $A(\cos(3-t),\sin(3-t))$, $B(\cos(t),\sin(t))$, $C(-\cos(t),-\sin(t))$. Goal: find the $t$ for which the area of triangle $ABC$ is maximal. First of all, I was thinking that we could find the length of each side of the triangle; for example $BC=2$, but what about the other sides? We could use the determinant formula like here http://people.richland.edu/james/lecture/m116/matrices/applications.html and the goal would be to maximize the determinant, but can we? Also I have tried to calculate the length of $AB$; I got $2\cos(t)\cos(3-t)-2\sin(t)\sin(3-t)$, which is $2\cos(\alpha+\beta)$, in our case $2\cos\big(t+(3-t)\big)=2\cos 3$, but I am not sure this really is the length. Am I on the right track, or could the solution be simplified? EDITED: the rotation matrix in 2D has the form $\begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}$. AI: Hint: All these points lie on the unit circle. In particular, $B$ and $C$ are antipodal, meaning that the line segment $BC$ passes through the origin. Now, forget about the point $A$ of the problem. Where should you put a point $A'$ such that the area of $A'BC$ is maximized, where $BC$ is an antipodal line segment? The solution of this problem is to place $A'$ at maximal distance from the line $BC$, which makes $A'BC$ an isosceles right triangle (you need to prove this; note that any triangle on the diameter $BC$ is automatically right-angled at $A'$). Now, see if your parametric equations form this same triangle (up to rotations); and by the way, they will.
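With these particular points the hint can be carried out explicitly (a sketch). Since $C=-B$, the segment $BC$ is a diameter of the unit circle, of length $2$, lying on the line through the origin with direction $(\cos t,\sin t)$. Hence $$\text{area}(ABC)=\tfrac12\cdot 2\cdot\operatorname{dist}(A,BC)=\big|\cos(3-t)\sin t-\sin(3-t)\cos t\big|=\big|\sin(3-2t)\big|.$$ For $t\in(0,1)$ we have $3-2t\in(1,3)$, an interval containing $\pi/2$, so the maximum area $1$ is attained when $3-2t=\pi/2$, i.e. at $$t=\frac32-\frac{\pi}{4}\approx 0.715,$$ where $OA\perp OB$ and $ABC$ is the isosceles right triangle from the hint.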
H: Residues at poles What is the residue of $$f(x)=\frac{1}{(x^2+1)^a}$$ at $x=\pm i$, where $0<a<1$ ? My intuition tells me that there must be a non-zero residue, but my attempts to compute tell me the residue is $0$. How can this be so when $x^2+1=0$ when $x=\pm i$ ? AI: Although $(1+z^2)^a$ is not analytic in a neighborhood of $i$ or $-i$, we can still compute the circular integral around each point missing the branch cut. $\log(1+z^2)$ can be well-defined in a domain cut so that if a closed path circles $i$, it also circles $-i$. For example, we could have a branch cut that connects $i$ and $-i$ or a branch cut that extends from $i$ to $\infty$ and another cut that extends from $-i$ to $\infty$. On any such domain we can then define $(1+z^2)^{\large a}$ via the exponential function. In any case, near $i$, $$ |f(z)|\sim2^{\large-a}|z-i|^{\large-a} $$ On a small circle of radius $r$, the length of a circular path is $2\pi r$, and the value of the function would be $\sim2^{\large-a}r^{\large-a}$, so the integral around the circle would be at most $\sim2\pi\,2^{\large-a}r^{1\large-a}\to0$ if $0\lt a\lt1$. Thus, even though we cannot form a closed circuit around $i$ because of branch cuts, the integral around the point vanishes as the radius goes to $0$. Caveat: Although we have a $0$ "residue" at $i$ and $-i$, this cannot be extended to any useful contour. To extend the result for the small circle to a larger path, the paths are usually connected by two superimposed connectors oppositely directed that cancel each other. Here, the connectors would have to follow the branch cut and the function is not continuous across the branch cut so the integrals would not necessarily cancel. For example, consider $f(z)=z^{1/2}$ with a branch cut along the positive real axis. [Figure: the graph of $z^{1/2}$ with its branch cut along the positive real axis.] Limiting to the real axis from above, $f(z)\to\sqrt{x}$, the normal positive square root. Limiting to the real axis from below, $f(z)\to-\sqrt{x}$, the negative square root. Let's try to use the same construction to show that the integrals along two contours that circle the same singularities are the same. The only place to add the connectors, and keep the contour inside the domain of definition of $f$, is on each side of the branch cut. However, the integrals along the connectors do not cancel; in fact, they actually reinforce. The integral counterclockwise around a circle of radius $r$ is $$ \begin{align} \int_0^{2\pi}r^{1/2}e^{i\theta/2}\,\mathrm{d}re^{i\theta} &=\int_0^{2\pi}r^{3/2}ie^{i3\theta/2}\,\mathrm{d}\theta\\ &=\left.\frac23r^{3/2}e^{i3\theta/2}\right]_0^{2\pi}\\ &=-\frac43r^{3/2} \end{align} $$ This makes sense. As shown above, the integral along the circle as $r\to0$ is $0$. The integral along each of the connectors is $$ \int_0^r\sqrt{x}\,\mathrm{d}x=\int_r^0-\sqrt{x}\,\mathrm{d}x=\frac23r^{3/2} $$ So the total along all the contours is $0$.
H: A Well-Defined Bijection on An Equivalence Class DATA: Let $f:X\rightarrow Y$ be a surjective function. Define a relation $\sim$ on $X$ by $$a\sim b~\iff~f(a)=f(b).$$ Let $S=X/{\sim}$, namely let $S$ be the set of equivalence classes of elements of $X$ under the equivalence relation $\sim$. Define a function $q:X\rightarrow S$ by $$\forall~a\in X~q(a)=[a].$$ Lastly, define $\overline{f}:S\rightarrow Y$ by $$\forall~a\in X~\overline{f}([a])=f(a).$$ QUEST: $\dagger_1\hspace{0.5cm}$Is $\overline{f}$ well-defined? $\dagger_2\hspace{0.5cm}$Is $\overline{f}$ a surjection? $\leftarrow$ Unsure about this one. $\dagger_3\hspace{0.5cm}$Is $\overline{f}$ an injection? KNOWN: $\overline{f}\circ q = f$ $\leftarrow$ What is the utility of this for the quest? DIAGRAM: $\hspace{4.6cm}$ THOUGHTS: $\dagger_1^{\star}\hspace{.5cm}$"$\overline{f}$ well-defined": $ [a]=[b]\implies \overline{f}([a])=\overline{f}([b])$ $\dagger_2\hspace{.5cm}$"$\overline{f}$ surjection": Want to show that $\forall ~y\in Y~\exists~[x]\in S$ s.t. $\overline{f}([x])=y$ $\leftarrow$ ??? $\dagger_3^{\star}\hspace{.5cm}$"$\overline{f}$ injection": Want to show that $\overline{f}([a])=\overline{f}([b])\implies [a]=[b]$ $\star$ denotes the ones I think I've got so far. ATTEMPT: $\dagger_1^{\star}\hspace{.5cm}$ $[a]=[b]\implies a\sim b \iff f(a)=f(b) \implies \overline{f}([a])=\overline{f}([b])$ $\dagger_2\hspace{.5cm}$ ...PENDING... $\leftarrow$ Unsure about this one. $\dagger_3^{\star}\hspace{.5cm}$ $f([a])=f([b])\implies f(a)=f(b) \iff a\sim b \implies [a]=[b]$ NOTES: I suspect $q$ is to be used for the surjection proof. AI: The answer is yes. Suppose $a\sim b$. Then $f(a) = f(b)$. So the map $[a]\mapsto f(a)$ is well-defined. Suppose $f(a) = f(b)$. Then $a\sim b$ so $[a] = [b]$; $\overline f$ is 1-1. It is easy to see that the image of X under $f$ is equal to that of the image of $X/{\sim}$ under $\overline f$. Therefore it is onto.
H: How many numbers can I make with subseries of $\sum_{n=1}^{\infty} \frac{1}{2^n}$? Given $\sum_{n=1}^{\infty} \frac{1}{2^n}$, what real numbers in $\left[ \frac{1}{2},1 \right]$ can I generate with subseries of this series? Obviously we have every power of $\frac{1}{2^n}$ (by taking single terms), as well as 1 itself, which is the value of the original series. But can I get to any real number I want with the appropriate terms? Informally I would say yes, since we can approach with arbitrary precision by choosing the terms that are as small as I need. Are all the reals in the interval $\left[ \frac{1}{2},1 \right]$ reachable? If not, is there a way to characterize "how many" are reachable? AI: Just like any real number has a decimal expansion you can write any real number in base $2$. If you don't allow positive exponents or a minus sign you get all numbers between $0$ and $1$ (both included).
H: Equation $(a-3)cb=a(c+b)$ for natural numbers. Let $a$, $b$, and $c$ be positive integers. Suppose that $c \leq b \leq a$ and that they satisfy the relation $$ (a-3)cb=a(c+b). $$ What can be said about the solutions? AI: Dividing by $abc$, this equation can be rewritten as $$\frac{3}{a}+\frac{1}{b}+\frac{1}{c}=1.$$ Now: If $c>5$, then there are no solutions (the lhs is $<1$, since $a\geq b\geq c\geq 6$). If $c=5$, then the only solution is $a=b=c=5$. If $c=4$ and $b>5$, then there are no solutions (the lhs is $<1$). If $c=4$ and $b=5$, then $\frac3a=\frac{11}{20}$, which has no integer solution. If $c=4$ and $b=4$, then $a=6$. If $c=3$ and $b>6$, then there are no solutions (the lhs is $<1$). If $c=3$ and $b=6$, then $a=6$. If $c=3$ and $b=4,5$, then there are no solutions (direct verification). If $c=3$ and $b=3$, then $a=9$. If $c=2$ and $b>8$, then there are no solutions (the lhs is $<1$). If $c=2$ and $b=8$, then $a=8$. If $c=2$ and $b=7$, then there are no solutions. For $c=2$ and $b=6,5,4,3$ we get the solutions $(a,b,c)=(9,6,2), (10,5,2), (12,4,2), (18,3,2)$. (The remaining cases $c=1$, and $c=2$ with $b=2$, force $\frac3a\leq0$, which is impossible.) Hence the complete list of triples $(a,b,c)$ is: $$(5,5,5),\ (6,4,4),\ (6,6,3),\ (9,3,3),\ (8,8,2),\ (9,6,2),\ (10,5,2),\ (12,4,2),\ (18,3,2).$$
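A brute-force confirmation of this list (the search bounds are mine; they comfortably cover all cases ruled out above):

    # Positive integers c <= b <= a with (a-3)*c*b == a*(c+b),
    # equivalently 3/a + 1/b + 1/c == 1.
    solutions = [(a, b, c)
                 for c in range(1, 21)
                 for b in range(c, 41)
                 for a in range(b, 101)
                 if (a - 3) * c * b == a * (c + b)]
    print(solutions)  # nine triples, matching the list above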
H: Proving the length of a circle's arc is proportional to the size of the angle How can I prove that the length of the arc is proportional to the size of the angle? Every book uses this fact in explaining radians and the fundamental arc length equation $s = r\theta$. However no book proves this fact. Is this fact some axiom, some natural law like $\pi$ and the triangle side proportions? Can I prove the above? Or is it something that you should just accept? AI: I am not sure if this is completely true, but I am sure others will correct me if I am mistaken. Consider a circle of radius $R$. In polar form, its equation is $r(\theta)=R$. The formula for arc length is as follows: $$L = \int_0^{\theta} \sqrt {r^2+ \left(\frac {dr} {d\theta}\right)^2 }\, d\theta$$ We know that $r=R$ and $dr/d\theta$ is $0$, so the integral becomes: $$L = \int_0^{\theta} R\, d\theta$$ $$L=\theta R$$
H: An elementary problem in Group Theory: the unique noncyclic group of order 4 Following the advice given in this question, I have started to study Group Theory from the very basics. My reference text is Abstract Algebra by Dummit and Foote. While going through the exercises (page 24) I found one problem which required more effort than others: Assume $G = \{1, a, b, c\}$ is a group of order $4$ with identity $1$. Also assume that $G$ has no elements of order $4$. Use the cancellation laws to show that there is a unique group table for $G$. Deduce that $G$ is abelian. I proceeded with solution as follows. Since $G$ is of even order there must be at least one element of order $2$ (this statement itself was one of the problems in the exercises and is easy to handle). From the wording of the question it seems that none of the elements $a, b, c$ is singled out with specific properties and hence $a, b, c$ must behave in exactly the same manner. Hence each of them is of order $2$. Thus $a^{2} = b^{2} = c^{2} = 1$. Next I analyze product $ab$. Clearly it can't be $a$ (as $b$ is non-identity), or $b$ (as $a$ is non-identity) or $1$ (because $ab = 1 = aa$ implies $b = a$). Hence $ab = c$ and similarly $ba = c$. By the similar nature of all elements $a, b, c$ it follows that $bc = cb = a$ and $ca = ac = b$. Thus the operation of the group is defined properly for all elements and clearly it is abelian. I think my solution is OK, but I am not sure. Note that I have not used the fact that $G$ has no elements of order $4$. Please let me know : 1) if my solution is correct. If not then point out the flaws. 2) if there is any better / shorter solution. If so provide hints and not solution. The question may sound too easy / simple but I request to treat me like a beginner who has never heard of Group Theory and is reading fresh from the reference text I mentioned. Thanks in advance for your inputs. EDIT: Thinking further about this problem I wondered what would happen if an element of order $4$ was allowed in $G$. In that case I think that the group has to be isomorphic to the group $G_{1} = \{1, 2, 3, 4\}$ with modulo $5$ multiplication as the group operation. The argument is as follows. Suppose $a$ is of order $4$ and let $b$ be its inverse then $b$ is also of order $4$. Now $c$ has to be its own inverse so that $c^{2} = 1$. Again $a^{2} \neq 1$ (as $ab = 1$), $a^{2} \neq a$ (as $a \neq 1$), $a^{2} \neq b$ (as it would mean $b^{2} = 1$ and thus $b$ would be of order $2$). Hence $a^{2} = c$ and similarly $b^{2} = c$. Next we can see that $ac \neq a$, $ac \neq c$, $ac \neq 1$ (as $ab = 1$) so that $ac = b$. Similarly $ca = b$, $bc = a$, $cb = a$. Also it is clear that this group turns out to be cyclic with both $a$ as well as $b$ as generators. And it also follows that there are the only two ($G$ in original question and $G$ in new variation) groups of order $4$ upto isomorphism. I hope I have learnt something from the answers given for the original question and this solution is correct. Please let me know if there is any problem with this reasoning. AI: Here is a full solution: This group has only one element of order 1 ($a^1 = a$ so if $a^1=1$, then $a=1$). This group has no elements of order 4 or greater (4 is outlawed by hypothesis; order 5 or greater implies that $a^1,a^2,a^3,a^4,a^5$ are 5 distinct elements of a 4 element set). This group has no elements of order 3 (if $a$ has order 3, then $a^2 \neq 1$ and $a^2 \neq a$, so $a^2 =b$ (or $c$, but WLOG we choose $b$). Hence $ab=ba=1$ and $b^2=a$. 
What about $ca$? $ca=1$ implies $c=b$. $ca=a$ implies $c=1$. $ca=b$ implies $c=a$. $ca=c$ implies $a=1$. Oh no!) Therefore all elements have order 2, and we finish exactly as you did.
H: Prove that in an obtuse triangle the orthocentre is the excenter of the orthic triangle Consider an obtuse angled $\Delta ABC$ with altitudes $AD, BE, CF$ concurrent at $H$. Consider the orthic triangle $\Delta FED$. Extend $ED$ to $D'$ and $EF$ to $F'$. Prove that $\angle FDH = \angle HDD'$ and $\angle DFH = \angle HFF'$. In other words prove that $H$ is the excenter of $\Delta FED$. I tackled $\angle FDH = \angle HDD'$ first. I tried reducing the proof to a simpler statement: Since $\angle D'DH = \angle ADE$, and $\angle FDH = 90 - \angle FDB$, it is sufficient to prove that $\angle ADE + \angle FDB = 90$ Now, the proof hinges on the conjecture that in an orthic triangle of an obtuse triangle, the point with the obtuse angle is the incenter of the orthic triangle. I was unable to prove this conjecture. Is there a proof for this conjecture (or is it incorrect altogether?), or is there an alternative proof to the whole problem? AI: Hint: Consider angles inscribed in circumcircles of the following quadrilaterals: $BDHF$, $ADBE$ and $BFCE$. I hope this helps ;-)
H: law of large number modified statement The weak law of large number states that, given $Y_n = \sum_{k=1}^{n} X_k$, where $X_k$ are random variables independent and identically distributed with finite expectation $\mu$, $$ \forall \delta>0, \forall \epsilon>0 \, \, \exists N>0\, \,\, s.t.\, \, \, P ( |Y_n/n - \mu| > \delta ) < \epsilon, $$ From this statement, is there a simple way to prove that there is a nonzero probability that $$ \forall n>0 \, \, \, \, \, \, \, \, \, \, Y_n/n - \mu > 0 \, \, \, \,? $$ (Note: without absolute value in the difference). AI: No, this is false. A trivial counterexample to the statement as written (with strict inequality) is $X_k = 0$ for all $k$. If we replace the strict by a weak inequality, it is still false in all nontrivial cases: If $X_k$ has a nonconstant distribution, then almost surely, $Y_n / n - \mu > 0$ for infinitely many $n$, and $Y_n / n - \mu < 0$ for infinitely many $n$. Without loss of generality, we can assume $\mu = 0$ (replace $X_k$ by $X_k - \mu$). We will show that almost surely, $\limsup Y_n = +\infty$ and $\liminf Y_n = -\infty$; in particular, $Y_n$ takes on each sign infinitely often. We use the Hewitt-Savage zero-one law, which says that any event that is invariant under finite permutations of the $X_k$ has probability 0 or 1. In particular, any event of the form $\{\limsup Y_n = a\}$ or $\{\limsup Y_n > a\}$ has this property (any permutation of $X_1, \dots, X_m$ leaves $Y_n$ unchanged for $n \ge m$, because addition is commutative). First, note that the event $\{\limsup Y_n = -\infty\}$ has probability 0 or 1 by Hewitt-Savage. By symmetry, $\{\liminf Y_n = +\infty\}$ has the same probability, and they are mutually exclusive, so the probability cannot be 1 and must be 0. Thus $\limsup Y_n > -\infty$ almost surely. Therefore, there exists $a > -\infty$ such that $P(\limsup Y_n > a) > 0$; by Hewitt-Savage, $P(\limsup Y_n > a) = 1$. Let $Y_n' = X_2 + \dots + X_n$, so that $Y_n'$ has the same distribution as $Y_{n-1}$. In particular, $P(\limsup Y_n' > a) = P(\limsup Y_n > a) = 1$. Since $X_k$ has a nonconstant distribution with mean zero, there exists $\epsilon>0$ such that $P(X_1 > \epsilon) > 0$. So we have $$\begin{align*} P(\limsup Y_n > a + \epsilon) &\ge P(X_1 > \epsilon, \limsup Y_n' > a) \\ &= P(X_1 > \epsilon) P(\limsup Y_n' > a) > 0 \end{align*}$$ where we used the fact that $X_1$ and $\{Y_n'\}$ are independent. Using Hewitt-Savage a final time, we have shown $P(\limsup Y_n > a+\epsilon) = 1$. Iterating this, we have $P(\limsup Y_n > b) = 1$ for all $b$, which is to say $\limsup Y_n = +\infty$ almost surely. By symmetry we also have $\liminf Y_n = -\infty$.
H: numerical linear algebra 101 Since I'm a programmer and I need linear algebra, I'm starting to consider how to teach myself a little numerical linear algebra. I don't really want to optimize things right from the start; I would like to learn how to decompose and "linearize" a matrix or a vector into a form that can be written on a single line, or in a less "cryptic" way. For example, I would like to find a resource that, when mentioning vector normalization, outlines the fact that I can normalize a vector with just a square root, to make my life "easier" and let me think in a more algebraic way instead of a more cryptic one. As I mentioned earlier, I would also like to skip optimizations for now, so I do not want a resource that tells me that a division is not cheap and a multiplication is better in terms of computation power used. I would like to do that because I already have this kind of notion, and at this point performance is not my point; my point is really just how to translate a matrix or a vector into common algebra and how to perform all the linear algebra operations in "standard" calculus. Considering my goals and my field, where and with what should I start? AI: I would pick up "Linear Algebra" by Strang and "Numerical Linear Algebra" by Trefethen and Bau. Those will get you well on your way, along with math.stackexchange and scicomp.stackexchange.
H: What's the symbol m in this sum? I'm supposed to write some code to calculate the inertia moments of a shape, but I am afraid I have been given too little information. The matrix that I must obtain is this one: $$ \begin{vmatrix} J_{xx} = \sum \limits_i m_i y_i^2 & J_{xy} = -\sum \limits_i m_i x_i y_i\\ J_{xy} = -\sum \limits_i m_i x_i y_i & J_{yy} = \sum \limits_i m_i x_i^2 \end{vmatrix} $$ Which we can denote by $$ \begin{vmatrix} A & -F\\ -F& B \end{vmatrix} $$ Apparently, the eigenvectors $v_1$ and $v_2$ obtained from that matrix, with $$ v_n = \begin{vmatrix} -F \\ -A+r_n\end{vmatrix} $$ and $r_n$ being the corresponding eigenvalue, will determine the orientation of the shape. The problem is that it is nowhere stated what the $m$ in the sums is supposed to be. Knowing that it's just a shape, is it possible that this mass is always 1 in this case? In addition, I think the $x$ and $y$ values of each point have to be measured from the centre of mass of the shape, but I'm not sure. Is anyone familiar with these concepts and kind enough to clear up my doubts? AI: Regard the kinetic energy of an assembly of $N$ masses $m_i$ that lie at the distances $r_i$ from a pivot point $P$, which is the sum of the kinetic energy of the individual masses: $$E_{kin} = \sum_{i=1}^N \frac12\,m_i \mathbf{v}_i\cdot\mathbf{v}_i = \sum_{i=1}^N \frac12\,m_i (\omega\, r_i)^2 = \frac12\, \omega^2 \underbrace{\sum_{i=1}^N m_i \,r_i^2}_{J_{p}}$$ where $\mathbf{v}_i$ is the velocity of the $i$-th mass. From this it follows that the moment of inertia of the body is the sum of the $m_i r_i^2$ terms, that is: $$J_{p}=\sum_{i=1}^N m_i \,r_i^2 \qquad (1)$$ But I am not sure if this is what you are looking for, and perhaps it is what confuses you. The moment of inertia of a body moving in a plane and the second moment of area of a beam's cross-section are often confused. A beam along the z-axis has stresses in the cross-section in the x-y plane that are calculated using the second moment of this area around either the x-axis or y-axis depending on the load. The moment of inertia of mass distributed along a body with the shape of this cross-section is the second moment of this area about the z-axis weighted by its density. The second moment of area around an axis perpendicular to the area is called the polar moment of the area, and is the sum of the second moments about the x and y axes. The second moment of area for an arbitrary shape with respect to the x-axis is denoted $J_{xx}$ (see for instance the Wikipedia article on the second moment of area for details). So if I understood the question correctly, the key point is that you are looking at the moment of inertia of a body in the geometric context of the second moment of area. The result is simply that the second moment of area takes, in this case, the mathematical form of the moment of inertia of a body (equation 1); hence $r_i \sim y_i$ $$J_{xx}=\sum_{i=1}^N m_i \,y_i^2$$ To extend the above: the second moment of area is a property of a two-dimensional plane shape which characterizes its deflection under loading. The second moment of area has dimensions of length to the fourth power. Unfortunately, in engineering contexts, the second moment of area is often called simply "the" moment of inertia even though it is not equivalent to the usual moment of inertia of a body (which has dimensions of mass times length squared and characterizes the angular acceleration undergone by a solid when subjected to a torque).
The second moment of area about the $x$-axis is defined by (commonly this is written as an integral, but for your calculation it is discrete) $$J_{xx}=\sum_i y_i^2$$ while more generally, the product moment of area is defined by: $$J_{xy}=\sum_i x_i y_i$$ More generally still, the second moment of area tensor $J_{k,l}$ is given by: $$\begin{pmatrix} J_{xx} = \sum \limits_i y_i^2 & J_{xy} = -\sum \limits_i x_i y_i\\ J_{xy} = -\sum \limits_i x_i y_i & J_{yy} = \sum \limits_i x_i^2 \end{pmatrix}$$ This is the geometric context and involves no mass. In the physical context, "the" moment of inertia (the moment of inertia of a body) has the same mathematical form but a different dimension, since it takes the mass into account: $$\begin{pmatrix} J_{xx} = \sum \limits_i m_i y_i^2 & J_{xy} = -\sum \limits_i m_i x_i y_i\\ J_{xy} = -\sum \limits_i m_i x_i y_i & J_{yy} = \sum \limits_i m_i x_i^2 \end{pmatrix}$$ To summarize: in your case you do have mass, so you are dealing with "the" moment of inertia (the moment of inertia of a body) in the physical context, and you must keep the masses $m_i$ as long as this is a physical moment-of-inertia phenomenon.
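As a hedged illustration of the answer above (a sketch I am adding; the point coordinates and the unit-mass choice are made up for the example), the $2\times 2$ tensor and its orientation eigenvectors can be computed directly:

```python
import numpy as np

# hypothetical point cloud; for a pure shape, take m_i = 1 for every point
pts = np.array([[2.0, 0.5], [-2.0, -0.5], [1.0, -0.3], [-1.0, 0.3]])
pts = pts - pts.mean(axis=0)      # measure x, y from the centre of mass
x, y = pts[:, 0], pts[:, 1]

J = np.array([[np.sum(y**2), -np.sum(x*y)],
              [-np.sum(x*y), np.sum(x**2)]])

r, v = np.linalg.eigh(J)          # eigenvalues r_n, eigenvectors as columns
print(J)
print(r)
print(v)                          # the columns give the orientation axes
```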
H: Verify that the six matrices form the group I solved the problem myself and I want to check if my solution is legitimate. My solution usually has partial errors or is not solid enough. Thank you! Verify that the six matrices $$\begin{bmatrix}1 & 0 & 0\\0 & 1 & 0\\0&0&1\end{bmatrix},\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix},\begin{bmatrix}0&0&1\\1&0&0\\0&1&0\end{bmatrix},\begin{bmatrix}1&0&0\\0&0&1\\0&1&0\end{bmatrix},\begin{bmatrix}0&0&1\\0&1&0\\1&0&0\end{bmatrix},\begin{bmatrix}0&1&0\\1&0&0\\0&0&1\end{bmatrix}$$ form a group under matrix multiplication. [Hint: Don't try to compute all products of these matrices. Instead, think how the column vector $\begin{bmatrix}1\\2\\3\end{bmatrix}$ is transformed by multiplying it on the left by each of the matrices.] What group discussed in this section is isomorphic to this group of six matrices? AI: Note that $\sigma=(12)$ and $\tau=(123)$ generate $S_3$, so that it is enough to consider the map $\phi\colon S_3\rightarrow G$, given by $\phi(\sigma)=A_6$ and $\phi(\tau)=A_3$, where $A_1,\ldots ,A_6$ are the six given matrices. Then $\phi$ is an isomorphism. See also http://groupprops.subwiki.org/wiki/Element_structure_of_symmetric_group:S3.
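Following the hint, each matrix permutes the entries of the column vector $(1,2,3)$, so closure can be checked by composing the six corresponding permutations; a quick sketch (my addition, not part of the answer):

```python
from itertools import product

# images of (1,2,3) under the six matrices, read off by matrix-vector product
perms = {(1, 2, 3), (2, 3, 1), (3, 1, 2), (1, 3, 2), (3, 2, 1), (2, 1, 3)}

def compose(p, q):
    # apply q first, then p: (p o q)(i) = p(q(i))
    return tuple(p[q[i] - 1] for i in range(3))

print(all(compose(p, q) in perms for p, q in product(perms, perms)))  # True
```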
H: Is there an injective operator with a dense nonclosed range? Let $H$ be an infinite dimensional separable Hilbert space. Is there an operator $A \in B(H)$ such that $Im(A) \subsetneq \overline{Im(A)} = H$ and $Ker(A) = \{0\}$? Bonus: we can build such operators by using some compact or shift operators (see the answers). Are there other possibilities? How can one classify this phenomenon? AI: You can check that for $$ T:\ell_2\to\ell_2: (x_1,x_2,x_3,\ldots)\mapsto(1^{-1} x_1,2^{-1}x_2, 3^{-1}x_3,\ldots) $$ we have $\operatorname{Ker}(T)=\{0\}$, $\overline{\operatorname{Im}(T)}=\ell_2$, but $T$ is not surjective because $(1,2^{-1},3^{-1},\ldots)\notin\operatorname{Im}(T)$.
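To see concretely why $(1,2^{-1},3^{-1},\ldots)$ is not in the image (an illustrative sketch I am adding): since $T$ divides the $n$-th coordinate by $n$, the only candidate preimage is $x_n = n\cdot\frac1n = 1$, the constant sequence, which is not square-summable.

```python
# truncate the target y = (1, 1/2, 1/3, ...) at length N; the componentwise
# preimage under T is x_n = n * y_n = 1, whose truncated squared norm is N
for N in (10, 100, 1000):
    y = [1.0 / k for k in range(1, N + 1)]
    pre = [k * yk for k, yk in zip(range(1, N + 1), y)]
    print(N, sum(v * v for v in pre))   # grows without bound with N
```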
H: Resolving a counterexample to the most fundamental probability concept; mutually exclusive and independence Suppose you roll a die $(1-6)$ and toss a coin, each once. Let $A$ be the event that I get either heads or tails (let's say tails) and $B$ be the event that I roll a number $(1 - 6)$ (let's say $2$). So the sample space of $A$ is $\{H, T \}$ and the sample space of $B$ is $\{1, 2, 3, 4, 5, 6 \}$. Clearly the outcome of $A$ has no effect on $B$, yet $A = \{ T\}$ and $B = \{2\} \implies A \cap B = \emptyset$. So doesn't this mean I have a contradiction, since the two events are mutually exclusive and independent? I can't seem to wrap my head around $P(A \cap B) = P(A)P(B)$. EDIT: Wait, I am making another mistake, because I found $P(A) = \frac{|A|}{|S|} = \frac{|A|}{|A \times B|} = 1/6$ AI: You should consider elementary events. These consist of a coin toss combined with a roll of the die. Thus your sample space is the direct product $$ S = \{H,T\}\times\{1,2,3,4,5,6\} $$ The combined events you describe are projections of these atomic events. When you write $A=\{T\}, B=\{2\}$ then this is only a shorthand notation for \begin{align*} A &= \{(T,1), (T,2), (T,3), (T,4), (T,5), (T,6)\} \subseteq S \\ B &= \{(H,2), (T,2)\} \subseteq S \end{align*} Therefore $A\cap B=\{(T,2)\}\neq\emptyset$. You get the probabilities \begin{align*} P(A) &= \frac{\lvert A\rvert}{\lvert S\rvert} = \frac{6}{12} = \frac12 \\ P(B) &= \frac{\lvert B\rvert}{\lvert S\rvert} = \frac{2}{12} = \frac16 \\ P(A\cap B) &= \frac{\lvert A\cap B\rvert}{\lvert S\rvert} = \frac{1}{12} \end{align*}
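A brute-force enumeration (my addition, not from the thread) confirms the probabilities in the answer and shows the events are independent rather than exclusive:

```python
from fractions import Fraction
from itertools import product

S = list(product("HT", range(1, 7)))     # the 12 elementary outcomes
A = [s for s in S if s[0] == "T"]        # coin shows tails
B = [s for s in S if s[1] == 2]          # die shows 2
AB = [s for s in A if s in B]

P = lambda E: Fraction(len(E), len(S))
print(P(A), P(B), P(AB))                 # 1/2 1/6 1/12
print(P(AB) == P(A) * P(B))              # True
```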
H: Is there a specific reason to choose $3$ and $1$ for evaluating these limits? I'm reading Gemignani's Calculus and Statistics. There are two examples of limits that left me confused: Example 5. Let $f$ be defined by $f(x)=x^2$. We now evaluate $$\lim_{h\rightarrow 0}\frac{f(\color{red}{3}+h)-f(\color{red}{3})}{h}$$ And: Example 6. Let $f$ be defined by $f(x)=x^{1/2}$, where $x$ is any nonnegative number. We shall evaluate $$\lim_{h\rightarrow0}\frac{f(\color{red}{1}+h)-f(\color{red}{1})}{h}=\frac{(\color{red}{1}+h)^{1/2}-\color{red}{1}}{h}$$ I don't understand why he chose $\color{red}{3}$ and $\color{red}{1}$ for these examples; the second example adds "where $x$ is any nonnegative number." I'm not sure if I could choose any nonnegative number or if he chose $3$ and $1$ for some specific reason. The text also doesn't specifically say whether there is a reason for doing so (or I'm not seeing the reason). AI: For both of these, I think the goal is to allow you to compare the results of the $formulas$ for the derivatives ($2x$ for $x^2$ and $1/(2\sqrt{x})$ for $x^{1/2}$) with the results of working out the limits. For the $x^2$ case, $\begin{align} \dfrac{f(3+h)-f(3)}{h} &=\dfrac{(3+h)^2-(3)^2}{h}\\ &=\dfrac{(9+6h+h^2)-9}{h}\\ &=\dfrac{6h+h^2}{h}\\ &=6+h\\ \end{align} $, and the limit of this as $h \to 0$ is $6$, which is also $2x$ evaluated at $x = 3$ (as it should be, of course). For the $\sqrt{x}$ case, $\dfrac{f(1+h)-f(1)}{h} =\dfrac{\sqrt{1+h}-\sqrt{1}}{h} =\dfrac{\sqrt{1+h}-1}{h} $. Here, since it is not immediately obvious how $\dfrac{\sqrt{1+h}-1}{h}$ behaves as $h \to 0$, you have to apply some ingenuity and use this result (which you should understand and memorize for your future work with square roots): $\begin{align} \sqrt{a^2+b}-a &=(\sqrt{a^2+b}-a)\dfrac{\sqrt{a^2+b}+a}{\sqrt{a^2+b}+a}\\ &=\dfrac{(\sqrt{a^2+b}+a)(\sqrt{a^2+b}-a)}{\sqrt{a^2+b}+a}\\ &=\dfrac{(a^2+b)-a^2}{\sqrt{a^2+b}+a}\\ &=\dfrac{b}{\sqrt{a^2+b}+a}\\ \end{align} $ Setting $a=1$ and $b=h$, $(\sqrt{1+h}-1) = \dfrac{h}{\sqrt{1+h}+1} $ so $\dfrac{\sqrt{1+h}-1}{h} =\dfrac{1}{\sqrt{1+h}+1} $. Since $\sqrt{1+h}$ clearly goes to $1$ as $h \to 0$, $\sqrt{1+h}+1 \to 2$ as $h \to 0$, so $\dfrac{\sqrt{1+h}-1}{h} \to \dfrac{1}{2}$.
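Both limits are easy to sanity-check numerically (a quick sketch I am adding): the difference quotients drift toward $6$ and $\tfrac12$ as $h$ shrinks, matching $2x$ at $x=3$ and $1/(2\sqrt{x})$ at $x=1$.

```python
f1 = lambda x: x**2
f2 = lambda x: x**0.5

for h in (0.1, 0.01, 0.001):
    print(h,
          (f1(3 + h) - f1(3)) / h,   # approaches 6
          (f2(1 + h) - f2(1)) / h)   # approaches 0.5
```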
H: Proof that the topology of uniform convergence on $C(\mathbb R, X)$ is finer than the topology of pointwise convergence. The proof is from R. Engelking, General Topology and it relies on the following fact $$ f \textrm{ is continuous} \Leftrightarrow f(\overline{A}) \subseteq \overline{f(A)}. \qquad (1) $$ Proposition: For every topological space $X$ the topology of uniform convergence on $C(\mathbb R, X)$ is finer than the topology of pointwise convergence. Proof: The equivalence of (1) above shows that it suffices to prove that if $f \in C(\mathbb R, X)$ is in the closure of a set $A \subseteq C(\mathbb R, X)$ with respect to the topology of pointwise convergence, then $f$ is in the closure of $A$ with respect to the topology of pointwise convergence. Let $U = C(\mathbb R, X) \cap \bigcap_{i=1}^k p_{x_i}^{-1}(U_i)$ be a neighbourhood of $f$ in the topology of pointwise convergence; since the sets $U_i$ are open in $R$, there exists an $\epsilon > 0$ such that $(f(x_i) - \epsilon, f(x_i) + \epsilon) \subseteq U_i$ for $i = 1,2,\ldots,k$. As $f = \lim f_j$, where $f_j \in A$ for $j = 1,2,\ldots$ there exists a $j$ such that $|f(x) - f_j(x)| < \epsilon$ for every $x \in X$, in particular, $f_j(x_i) \in U_i$ for $i = 1,2,\ldots, k,$ and this shows that $U\cap A \ne \emptyset$. q.e.d. I don't understand the proof strategy, why "The equivalence of (1) above shows that it suffices to prove that if $f \in C(\mathbb R, X)$ is in the closure of a set $A \subseteq C(\mathbb R, X)$ with respect to the topology of pointwise convergence", I don't see the connection, because it is a characterisation of continuous functions, and has so nothing to do with the topology on the set $C(\mathbb R, X)$ (because here continuity refers to the topologies on $\mathbb R$ and $X$), can someone explain this to me? AI: It's a typo, it should read The equivalence of (1) above shows that it suffices to prove that if $f \in C(\mathbb R, X)$ is in the closure of a set $A \subseteq C(\mathbb R, X)$ with respect to the topology of uniform convergence, then $f$ is in the closure of $A$ with respect to the topology of pointwise convergence. The connection to $(1)$ is that $\mathcal{T_1} \supset \mathcal{T}_2 \iff \operatorname{id} \colon (X,\, \mathcal{T}_1) \to (X,\, \mathcal{T}_2)$ is continuous; and the inclusion of the closure with respect to uniform convergence in the closure with respect to pointwise convergence is, in light of $(1)$, just what's needed to show the continuity of $\operatorname{id}\colon \bigl(C(\mathbb R, X),\, \mathcal{T}_{uni}\bigr) \to \bigl(C(\mathbb R, X),\, \mathcal{T}_{pw}\bigr)$. Note however, that here it is much simpler to directly prove that every neighbourhood of $f$ in the topology of pointwise convergence contains a neighbourhood of $f$ in the topology of uniform convergence, $$\{g \in C(\mathbb R, X)\colon \lvert g(x_i) - f(x_i)\rvert < \varepsilon_i\} \supset \{h\in C(\mathbb R, X)\colon \sup_x \lvert h(x) - f(x)\rvert < \min_i \varepsilon_i\}.$$
H: Why is it so difficult to prove that the discrete Fourier transform (DFT) cannot be calculated in faster time than $N \log N$? As the title says, why is it so difficult to prove that the discrete Fourier transform (DFT) cannot be calculated in faster time than $O(N \log N)$? This is a famous open problem in mathematics/theoretical computer science. Still, I cannot find any explanation why it is so difficult to construct a proof for it. Actually, I don't have any idea at all how proofs for such problems are usually written. AI: Intuitively, proofs that something can't be done are harder than proofs that something can be done, because the latter need only exhibit a way of doing it, while the former need to show that any possible approach will run into problems. Most proofs of impossibility use additional structure in the problem to show that any approach needs some number of steps to 'clarify' the structure or to construct 'adversaries' who can make the problem more difficult. For instance, the straightforward proof that sorting requires $\Omega(n\log n)$ operations is of this form; since there are $n!\approx n^{n+1/2}e^{-n}$ different permutations of the data and any binary comparison can only cut the search space in half at best, then we need a minimum of $\lg(n!)\approx n\lg n-kn$ comparisons to 'pick out' the sorted data, using a binary comparison model of computation. This example also shows why specifying the model of computation matters; it's well known that sorting on bounded data can be done in linear time (using e.g. bin sorts), and even that sorting on arbitrary sized integer data can be done more quickly than $\Omega(n\lg n)$ if the algorithm is allowed to perform arithmetic operations (and not just comparisons) on the data. Contrast this against something like matrix multiplication, where even though the naive algorithm is $O(n^3)$ there are no specific constraints that 'force' algorithms to be slow, and in fact the time has been pushed down to roughly $O(n^{2.4})$ with no reason to believe that $O(n^2)$ (the naive lower bound) is impossible. In short, then, the reason why it's so hard to show that the DFT requires superlinear time is that we have no specific reason to believe that it does; since we only have $O(n)$ pieces of data coming out of the operation, it's at least conceivable that some very clever algorithm can find them all in $O(1)$ time per piece of data, or $O(n)$ time overall.
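The information-theoretic count in the sorting example is easy to see numerically (an illustrative sketch I am adding): $\lg(n!)$ tracks $n\lg n$ closely.

```python
import math

for n in (10, 100, 1000):
    lg_factorial = math.lgamma(n + 1) / math.log(2)   # lg(n!) via ln Gamma
    print(n, round(lg_factorial), round(n * math.log2(n)))
```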
H: Partitions and Orbit Sizes If $U,V \subset S_n$ are subgroups with $S_n//U = \{id,g_2,...,g_e\}$ and $\alpha_j$ is $\frac{1}{j}$ times the number of $i\in [e]$ s.t. $[V:V \cap g_i U g_i^{-1}]=j$ then $(\alpha_1,...,\alpha_e)$ is a partition of $e$. Now let $U=V \in \{V_4,D_4\}$ and $n=4$ (here $V_4$ is the subgroup consisting of the identity and the double transpositions). Then the number of $V_4$-orbits of $S_4/V_4$ of size $1$ is $6$, and there are none of greater size. One $D_4$-orbit of $S_4/D_4 = \{D_4,(12)D_4,(14)D_4 \}$ is $\{D_4\}$; for the other two, we have $|orb_{D_4}((ab)D_4)|=[D_4 : D_4 \cap (ab)D_4(ab)]=[D_4:V]=2$, so there is one orbit of size $1$ and two of size $2$. I don't understand what I've done wrong here. I got a partition of $[S_4 :V_4]$, $1^6$, but apparently something went wrong with $D_4$, and I have no idea why. In the article to which all this pertains, it is asserted that there is one orbit of size one and one of size two when $D_4$ acts on $S_4/D_4$. It cannot be that the two orbits should be considered as one, for if so then there would only be one orbit of size one wrt $S_4/V_4$. (This is one of those questions that's difficult to pose because there's too much background, but I think this info should do, hopefully. Otherwise, if you're feeling more helpful than usual, check out the beginning of section 3 and Table 2 here.) Edit: Some calculations. $orb_V(V)=\{V\}$, trivially ($V=V_4$). $orb_V((12)V)=\{(12)V,(34)V,(1324)V,(1423)V \}$ and $orb_V((13)V)=\{(13)V, (1234)V, (24)V, (1432)V\}$. To see that each of these orbits really consists of a single coset, use $\sigma V = \tau V \iff \tau^{-1} \sigma \in V$. Thus we have three orbits of size one so far. With $D_4$, we have $orb_{D_4}((12)D_4)=\{ (12)D_4, (134)D_4, (243)D_4, (123)D_4, (142)D_4, (34)D_4, (1423)D_4, (1324)D_4\}$ and $orb_{D_4}((14)D_4) = \{ (14)D_4, (234)D_4, (132)D_4, (143)D_4, (124)D_4, (1342)D_4, (1243)D_4, (23)D_4 \}$. Some manipulations show that these two orbits have size two, yet only one of them counts? AI: There are a few unstated conventions. Which $D_4$? How do we multiply permutations? I'll go with GAP conventions for both as they are likely to agree with the paper and most of the world of computational group theory and finite group theory. Take $U=D_4 = \{ (), (3,4), (1,2), (1,2)(3,4), (1,3)(2,4), (1,3,2,4), (1,4,2,3), (1,4)(2,3) \}$ and multiply permutations in reading order such that $(1,2)(2,3)=(1,3,2)$. $G//U=S_4//D_4 = \{ ()D_4, (1,2,3)D_4, (1,3,2)D_4 \}$ so $e=3$. If $V=D_4$ as well, then $V \cap g_1Ug_1^{-1} = V \cap ()U()^{-1} = V \cap V = V$ has index 1. Note that the orbit of $D_4$ on the seed $()D_4$ is $\{ D_4 \}$ of size 1. $V \cap g_2 U g_2^{-1}$ has size 4 (it is $V_4$, the Klein viergruppe of double transpositions). That means it has index 2. Note that the orbit of $D_4$ on the seed $(1,2,3)D_4$ is $\{ (1,2,3)D_4, (1,3,2)D_4 \}$. $V \cap g_3 U g_3^{-1}$ has size 4 (it is $V_4$, the Klein viergruppe of double transpositions). That means it has index 2. Note that the orbit of $D_4$ on the seed $(1,3,2)D_4$ is $\{ (1,2,3)D_4, (1,3,2)D_4 \}$. Now we get $\alpha_1 = \tfrac{1}{1} 1 = 1$ since only $g_1$ gave index 1. We get $\alpha_2 = \tfrac{1}{2} 2 = 1$ since $g_2$ and $g_3$ gave index 2. We get $\alpha_3=\tfrac{1}{3} 0 = 0$ since nothing gave index 3. Sure enough $1 \alpha_1 + 2\alpha_2 + 3 \alpha_3 = 1 + 2 + 0 = 3$ is a partition.
In terms of orbits, this is because the three cosets $\{ ()D_4, (1,2,3)D_4, (1,3,2)D_4 \}$ split into two orbits under the left action of $D_4$, namely, $\{ ()D_4 \}$ and $\{ (1,2,3)D_4, (1,3,2)D_4 \}$.
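The orbit count in the answer can be reproduced by brute force (a sketch I am adding; permutations are 0-indexed tuples, multiplied in the same reading order as the answer):

```python
from itertools import permutations

def mul(p, q):
    # reading order as in the answer: apply p first, then q
    return tuple(q[p[i]] for i in range(4))

def generate(gens):
    # close a generating set under multiplication (finite, so this terminates)
    G = {tuple(range(4))} | set(gens)
    while True:
        new = {mul(a, b) for a in G for b in G} - G
        if not new:
            return G
        G |= new

# D4 as in the answer, 0-indexed: generated by (1,2), (3,4), (1,3)(2,4)
D4 = generate([(1, 0, 2, 3), (0, 1, 3, 2), (2, 3, 0, 1)])
assert len(D4) == 8

S4 = set(permutations(range(4)))
cosets = {frozenset(mul(g, u) for u in D4) for g in S4}   # the left cosets gD4
assert len(cosets) == 3

seen, sizes = set(), []
for c in cosets:
    if c in seen:
        continue
    orb, frontier = {c}, [c]
    while frontier:
        cur = frontier.pop()
        for d in D4:
            img = frozenset(mul(d, g) for g in cur)   # d . (gD4) = (dg)D4
            if img not in orb:
                orb.add(img)
                frontier.append(img)
    seen |= orb
    sizes.append(len(orb))

print(sorted(sizes))   # [1, 2]: one orbit of size 1, one of size 2
```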
H: solve easy problem with group action Here is a simple problem which can be found in every elementary group textbook: $H,K$ are finite subgroups of group $G$, then $$|HK|=\dfrac{|H|\cdot|K|}{|H\cap K|}$$ Could you help me to prove it with group action? Thank you in advance AI: Show that the group $H^{\text{op}} \times K$ (where $H^{\text{op}}$ denotes the group $H$ with reversed multiplication) acts transitively on the set $HK$ via multiplication - i.e. $x^{(h,k)} := hxk$ for $x \in HK$. The claim follows now from the orbit stabilizer theorem.
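A concrete sanity check of the formula (my addition, not from the thread), with $H=\langle(1\,2)\rangle$ and $K=\langle(1\,2\,3)\rangle$ inside $S_4$, written as 0-indexed permutation tuples:

```python
def mul(p, q):
    # reading order: apply p first, then q
    return tuple(q[p[i]] for i in range(4))

H = {(0, 1, 2, 3), (1, 0, 2, 3)}                   # <(0 1)>
K = {(0, 1, 2, 3), (1, 2, 0, 3), (2, 0, 1, 3)}     # <(0 1 2)>

HK = {mul(h, k) for h in H for k in K}
print(len(HK), len(H) * len(K) // len(H & K))      # 6 6
```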
H: do fibres of morphisms of Noetherian rings have finite Krull dimension? Let $f:A \rightarrow B$ be a morphism of Noetherian rings. Let $p \in Spec(A)$ and let $C=B \otimes \kappa(p)$ be the fibre over $p$. Is it true that $\dim C < \infty$? How can we see that? Remark: $B \otimes \kappa(p) \cong B_S/pB_S$ where $S$ is the image of $A-p$ in $B$. Hence $C$ is not necessarily a semilocal ring and the fundamental theorem of dimension theory (i.e. that every Noetherian semilocal ring has finite dimension) does not apply. AI: No, it is not true: we can have $\operatorname {dim} (C)=\infty.$ Indeed for any field $k=A$, Nagata has shown that there exists a Noetherian $k$-algebra $B$ of infinite Krull dimension and this of course gives you the required example by taking $\mathfrak p=(0)$ and thus $C=B$. Nagata's example is developed as Exercise 9.6, page 229 of Eisenbud's Commutative Algebra.
H: Show $\cos(x+y)\cos(x-y) - \sin(x+y)\sin(x-y) = \cos^2x - \sin^2x$ Show $\cos(x+y)\cos(x-y) - \sin(x+y)\sin(x-y) = \cos^2x - \sin^2x$ I have got as far as showing that: $\cos(x+y)\cos(x-y) = \cos^2x\cos^2y -\sin^2x\sin^2y$ and $\sin(x+y)\sin(x-y) = \sin^2x\cos^2y - \cos^2x\sin^2y$ I get stuck at showing: $\cos^2x\cos^2y -\sin^2x\sin^2y - \sin^2x\cos^2y - \cos^2x\sin^2y = \cos^2x - \sin^2x$ I know that $\sin^2x + \cos^2x = 1$ and I have tried rearranging this identity in various ways, but this has not helped me so far. AI: Looking at $\cos^2x\cos^2y -\sin^2x\sin^2y - \sin^2x\cos^2y + \cos^2x\sin^2y$ (the last term is a negative times a negative, which forms a positive, I believe), you could pull $-\sin^2x$ out of the middle two terms to get this expression: $ -\sin^2x\sin^2y - \sin^2x\cos^2y = -\sin^2x(\sin^2y+\cos^2y) = -\sin^2x $ There is a similar reduction with the first and last terms, factoring out $\cos^2x$, that should make this appear easier.
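A one-line symbolic check (my addition, assuming SymPy is available); note the left-hand side is just $\cos((x+y)+(x-y))=\cos 2x$:

```python
import sympy as sp

x, y = sp.symbols('x y')
lhs = sp.cos(x + y)*sp.cos(x - y) - sp.sin(x + y)*sp.sin(x - y)
rhs = sp.cos(x)**2 - sp.sin(x)**2
print(sp.simplify(lhs - rhs))   # 0, so the identity holds
```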
H: A tough calculation involving hyperbolic cotangents. From here: http://en.wikipedia.org/wiki/Brillouin_function Define $$B_j(x)=\frac{2j+1}{2j} \coth \left( \frac{2j+1}{2j} x \right) - \frac{1}{2j} \coth \left( \frac{1}{2j} x \right)$$ I want to do this calculation ($m,j$ are integers): $$\langle m \rangle = \sum_{-j\le m\le j} m \ P(m)$$ where $$P(m)=\frac{e^{xm/j}}{Z}$$ and $$Z = \sum_{-j\le m\le j} e^{xm/j}$$ The answer is supposed to be $$\langle m \rangle = j \cdot B_j (x)$$ But I'm unable to grind through this calculation. I keep getting stuck with an awful expression with exponentials that I can't seem to simplify. I tried expressing them as the cotangents and the best I got was the two cotangents from the original formula plus a ton of garbage that didn't seem to cancel. I'm guessing there is some sort of trick to calculating stuff like this that I'm unaware of. AI: I will summarize the steps; they are a little messy, but no more difficult than geometric series. First of all, evaluate $Z$: $$Z = \sum_{m=-J}^J e^{m x/J} = e^{-x} \frac{e^{(2 J+1) x/J}-1}{e^{x/J}-1} = \frac{\sinh{\left ( \frac{2J+1}{2 J} x\right)}}{\sinh{\left( \frac{x}{2 J}\right)}}$$ Then note that $$\sum_{m=-J}^J m \, e^{m x/J} = J \frac{dZ}{dx}$$ This is the mere application of the quotient rule; the result is $$J \frac{dZ}{dx} = \frac{1}{2 \sinh^2{\left( \frac{x}{2 J}\right)}}\left [(2J+1) \cosh{\left ( \frac{2J+1}{2 J} x\right)}\sinh{\left( \frac{x}{2 J}\right)} - \sinh{\left ( \frac{2J+1}{2 J} x\right)}\cosh{\left( \frac{x}{2 J}\right)}\right ]$$ The ratio of this derivative to $Z$ is then $$\langle m \rangle = \frac12 \left [(2J+1) \coth{\left ( \frac{2J+1}{2 J} x\right)} - \coth{\left( \frac{x}{2 J}\right)} \right ] $$ The result follows. ADDENDUM A little more detail on the first equation. $$\begin{align}e^{-x} \frac{e^{(2 J+1) x/J}-1}{e^{x/J}-1} &= \frac{e^{(J+1) x/J}-e^{-x}}{e^{x/J}-1}\\ &= \frac{e^{x/(2 J)} \left (e^{(2 J+1) x/(2 J)} - e^{-(2 J+1) x/(2 J)} \right )}{e^{x/(2 J)} \left (e^{x/(2 J)}-e^{-x/(2 J)} \right )}\\ &= \frac{\sinh{\left ( \frac{2J+1}{2 J} x\right)}}{\sinh{\left( \frac{x}{2 J}\right)}}\end{align}$$ as was to be demonstrated.
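A numeric spot check of the final identity (my addition; the values of $j$ and $x$ are arbitrary):

```python
import math

def brillouin(j, x):
    a, b = (2*j + 1) / (2*j), 1 / (2*j)
    coth = lambda t: math.cosh(t) / math.sinh(t)
    return a * coth(a*x) - b * coth(b*x)

j, x = 3, 1.7
Z = sum(math.exp(m * x / j) for m in range(-j, j + 1))
mean_m = sum(m * math.exp(m * x / j) for m in range(-j, j + 1)) / Z
print(mean_m, j * brillouin(j, x))   # the two values agree
```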
H: Calculating $\lim_{x\to 0^+}\left(\frac{1}{\sqrt x}-\frac{1}{\sqrt{\log(x+1)}}\right)$ Find the limit $$\lim_{x\to 0^+}\left(\frac{1}{\sqrt x}-\frac{1}{\sqrt{\log(x+1)}}\right)$$ AI: Hint $$\frac{\log(1+x)}{x}\to 1$$ $$\frac{\log(1+x)-x}{x^2}\to -\frac{1}{2}$$ Further Hint $$\begin{align*} \frac{1}{\sqrt x} - \frac{1}{\sqrt{\log(x+1)}} &= \frac{\sqrt{\log(x+1)} - \sqrt x}{\sqrt{x\log(1+x)}}\left(\frac{\sqrt{\log(x+1)} + \sqrt x}{\sqrt{\log(x+1)} + \sqrt x}\right) \\ &= \frac{\log(x+1) - x}{\sqrt{x\log(1+x)}}\left(\sqrt{\log(x+1)} + \sqrt x\right)^{-1} \\ &= \frac{\log(x+1) - x}{\sqrt{x\log(1+x)}}\,\frac{1}{\sqrt x}\left(\sqrt{\frac{\log(x+1)}{x}} + 1\right)^{-1} \\ &= \frac{1}{\sqrt x}\,\frac{\log(x+1) - x}{x}\left(\sqrt{\frac{\log(1+x)}{x}}\right)^{-1}\left(\sqrt{\frac{\log(x+1)}{x}} + 1\right)^{-1} \\ &= \sqrt x\,\frac{\log(x+1) - x}{x^2}\left(\sqrt{\frac{\log(1+x)}{x}}\right)^{-1}\left(\sqrt{\frac{\log(x+1)}{x}} + 1\right)^{-1} \end{align*}$$ By the first two hints, as $x\to 0^+$ the last three factors tend to $-\frac12$, $1$ and $\frac12$ respectively, while $\sqrt x \to 0$, so the limit is $0$.
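Numerically (my addition), the factored form predicts that the difference behaves like $-\sqrt x/4$ near $0$, and that is exactly what one observes:

```python
import math

f = lambda x: 1/math.sqrt(x) - 1/math.sqrt(math.log(x + 1))
for x in (1e-2, 1e-4, 1e-6):
    print(x, f(x), -math.sqrt(x) / 4)   # f(x) -> 0 like -sqrt(x)/4
```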
H: Prove $A \bigtriangleup B = B \bigtriangleup A$ I'm trying to prove the following statement: $$A \bigtriangleup B = B \bigtriangleup A$$ I know that: $$A \bigtriangleup B = (A \cup B )\setminus (A \cap B )= (A \setminus B) \cup (B \setminus A)$$ I can do it with a truth table, but I want to prove it in a formal way. Any suggestions? Thanks! AI: Well if you're allowed to use the fact that the union and intersection operations are commutative, then we have: $$ A \bigtriangleup B = (A \cup B )\setminus (A \cap B )=(B \cup A )\setminus (B \cap A )= B \bigtriangleup A $$
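As an informal companion to the formal proof (my addition), Python's built-in set type exposes $\bigtriangleup$ as the `^` operator:

```python
A, B = {1, 2, 3, 4}, {3, 4, 5}
print(A ^ B, B ^ A, A ^ B == B ^ A)   # {1, 2, 5} {1, 2, 5} True
```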
H: Successive Differentiation and division I am working on successive differentiation. I have run into some confusion and would like some help with the process of differentiating when dealing with division. Here is the problem that sparked my intention to post here: $$y = \frac{x^2 + a}{x + a}$$ I am reading Calculus Made Easy by Silvanus P. Thompson and unfortunately there is no example on how I should go about this. I can find $\frac{dy}{dx}$ but can't seem to find $\frac{d^2 y}{dx^2}$ for this exercise. I find $\frac{dy}{dx}$ by doing this: $$\frac{dy}{dx} = \frac{(x + a)(2x) - (x^2 + a)(1)}{(x + a)^2}$$ to get $\frac{dy}{dx} = \frac{x^2 + 2xa - a}{(x + a)^2}$. This is where I am stuck. I have attempted to simply perform the same operation on this result to differentiate again but it isn't adding up. Any help on this is much appreciated! AI: $$ \dfrac{dy}{dx} = \frac{(x + a)(2x) - (x^2 + a)(1)}{(x + a)^2} =\dfrac{2x^2 + 2ax - x^2 - a}{(x+a)^2} = \frac{x^2 + 2xa - a}{(x + a)^2} $$ So far so good. Now we need to differentiate again, using the quotient rule, as you did when finding $\dfrac{dy}{dx}$. It gets messy-looking, but in this case, simplifies relatively nicely in the end: $$\dfrac{d^2y}{dx^2} = \dfrac{(x+a)^2(2x + 2a) - (x^2 + 2xa - a)[2(x+a)(1)]}{((x+ a)^2)^2}$$ We can factor the numerator, expand, and then simplify: $$\begin{align}\dfrac{d^2y}{dx^2} & = \dfrac{2(x+a)[(x+a)^2 - (x^2 + 2xa - a)]}{(x+ a)^4} \\ \\ & = \dfrac{2[x^2 + 2ax + a^2 - x^2 - 2ax + a]}{(x+ a)^3}\\ \\ & =\dfrac{2( a^2 + a)}{(x + a)^3} \end{align}$$ We could have gotten clever by rewriting $\dfrac{dy}{dx}$: We can rewrite $\dfrac{dy}{dx}$ thusly: $$\begin{align} \dfrac{dy}{dx} = \frac{x^2 + 2xa - a}{(x + a)^2} & = \dfrac{x^2 + 2xa + a^2 -a -a^2}{(x+ a)^2} \\ \\ & = \dfrac{(x+a)^2 - a(a + 1)}{(x + a)^2} \\ \\ & = 1 - \dfrac{a(a + 1)}{(x + a)^2}\end{align}$$ The result of differentiating this (using, essentially, the power rule) would be the same as given above.
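The whole computation can be verified symbolically (my addition, assuming SymPy is available):

```python
import sympy as sp

x, a = sp.symbols('x a')
y = (x**2 + a) / (x + a)
print(sp.simplify(sp.diff(y, x)))      # (x^2 + 2ax - a)/(x+a)^2, up to rearrangement
print(sp.simplify(sp.diff(y, x, 2)))   # 2a(a+1)/(x+a)^3, matching the answer
```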
H: How to denote an 'atomic' morphism in category? I want to distinguish between two disjoint classes of morphisms in a category: (1) those morphisms that are composed of other morphisms (other than identities) and could conceivably be factored into a sequence of other morphisms; and, (2) those morphisms that cannot be factored further. For the time being, I am referring to the first class of morphisms as "composed" and the second class as "atomic". What is the correct terminology for this distinction? AI: As far as I know there is no fixed or common terminology for these concepts. Atomic is a good name, as indecomposable or irreducible would be. I wouldn't use 'composed' for the non-atomic ones, but rather 'composite'. Of course few categories in nature will have interesting behaviour with respect to these concepts. For instance, isomorphisms may be atomic but then they decompose an identity. A category with reasonable structure will tend to have lots and lots of morphisms, making atomic ones very rare. You might want to consider morphisms that are atomic with respect to a fixed class of morphisms that you may call 'redundant'. These would be morphisms that for some reason you wish to ignore for the purposes of reducibility. It all depends on what you want to achieve.
H: Addition of ideals Given a ring $R$ and ideals $A, B, B', C$, suppose we have $A + B' = A + B = C.$ I was wondering what we can say about the relation between $B$ and $B'$. Clearly, $B$ may not equal $B'$, but can we say something? Does it follow that $B= B' + D$ where $D$ is an ideal contained in $A$? Thanks! AI: In $\mathbb Z$, $(p)+(q)=(p)+(r)$ for all distinct primes $p,q,r$. More generally, if $M$ is any maximal ideal in a ring $R$ and $I,J$ are any two ideals that are not contained in $M$, then $M+I=M+J$. So there is enormous freedom in constructing the situation you describe with no particular conditions on the ideals.
H: bijection between $\mathbb{N}$ and $\mathbb{N}\times\mathbb{N}$ I understand that both $\mathbb{N}$ and $\mathbb{N}\times\mathbb{N}$ are of the same cardinality by the Schröder-Bernstein theorem, meaning there exists at least one bijection between them. But I can't figure out what such a bijection would be. The paper that I'm reading gives an example of an injective function $f:\mathbb{N}\to\mathbb{N}\times\mathbb{N}$, $f(n)=(0,n)$, and an injective function $g:\mathbb{N}\times\mathbb{N}\to\mathbb{N}$, $g(a,b)=2^a3^b$. I was thinking perhaps if there were some way to combine these two, I could find a bijection, but I have no idea how to go about that or if it's even possible. What is an example of a bijection between these two sets, and please explain the process by which you found it? AI: I found such a bijection when I was a freshman. Cantor found it, but I don't know how he came to notice it. $$(m,n)\mapsto\frac{(m+n)(m+n+1)}2+m$$ Another function, which is simpler to prove is a bijection, is the following; I don't know who came up with that one. $$(m,n)\mapsto 2^m(2n+1)-1$$ The idea is that every pair encodes a unique number by writing it as an even number times an odd number, and reducing $1$ so we can get $0$ as well.
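Both maps in the answer are easy to implement and test on a finite grid (my addition):

```python
def cantor(m, n):
    return (m + n) * (m + n + 1) // 2 + m

def dyadic(m, n):
    return 2**m * (2*n + 1) - 1

for f in (cantor, dyadic):
    values = {f(m, n) for m in range(50) for n in range(50)}
    assert len(values) == 50 * 50      # no collisions: injective on the grid

hits = sorted(dyadic(m, n) for m in range(8) for n in range(8))
print(hits[:12])   # 0, 1, 2, ... with no gaps, hinting at surjectivity
```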
H: How to find $k[x_1,\dotso,x_n]/I$ concretely? I want to know if there is a way to find $k[x_1,\dotso ,x_n]/I$ in specific cases. For example how can we find concretely the ring $\mathbb{C}[X]/(X^2+1)$? How does one mod out $(X^2+1)$? AI: Over $\;\Bbb C\;$ : $$x^2+1=(x-i)(x+i)\implies $$ $$\Bbb C[x]/(x^2+1)=\Bbb C[x]/((x-i)(x+i))\cong \Bbb C[x]/(x-i)\times \Bbb C[x]/(x+i)$$ Now just figure out what $\,\Bbb C[x]/(x\pm i)\;$ is...:P
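Concretely, "modding out" by $x^2+1$ means computing with polynomials while replacing $x^2$ by $-1$ wherever it appears. A tiny sketch (my addition), with elements $a+bx$ stored as pairs of complex coefficients:

```python
class Mod:
    """Elements a + b*x of C[x]/(x^2 + 1); reduce products using x^2 = -1."""
    def __init__(self, a, b):
        self.a, self.b = complex(a), complex(b)
    def __mul__(self, other):
        # (a + bx)(c + dx) = ac + (ad + bc)x + bd*x^2 = (ac - bd) + (ad + bc)x
        return Mod(self.a*other.a - self.b*other.b,
                   self.a*other.b + self.b*other.a)
    def __repr__(self):
        return f"{self.a} + {self.b}*x"

xbar = Mod(0, 1)
print(xbar * xbar)   # (-1+0j) + 0j*x: the class of x squares to -1
```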
H: How to evaluate the definite integral? How to evaluate the definite integral? $$\int_0^4 \frac{7}{3x+1}\,dx$$ I am having difficulty finishing the question. Below is what I did: $$ =\left.\frac{7}{3}\ln|3x+1|\right|_0^4$$ $$=\frac{7}{3}\ln(\dots)$$ AI: Your integration is just fine: All you have left to do is evaluate $$\dfrac 73 \ln|3x + 1| \Big|_0^4 = \dfrac 73 (\ln(13) - \underbrace{\ln(1)}_{\large = 0}) = \quad \frac 73 \ln(13) \quad \approx \quad5.985$$
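A quick numeric cross-check with the midpoint rule (my addition):

```python
import math

N = 100_000
h = 4 / N
approx = sum(7 / (3 * ((i + 0.5) * h) + 1) * h for i in range(N))
print(approx, 7 / 3 * math.log(13))   # both about 5.985
```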