H: Assess the limit: $ \lim_{n\to\infty} \frac{1}{n}\int_0^n \frac{\arctan(x)}{\arctan{\frac{n}{x^2-nx+1}}}dx$ Compute the following limit: $$ \lim_{n\to\infty} \frac{1}{n}\int_0^n \frac{\arctan(x)}{\arctan{\frac{n}{x^2-nx+1}}}dx$$ I'm looking for an easy approach if possible. AI: For $0<x<n$ we have $$\arctan(x) + \arctan(n-x) = \arctan \left(\frac{n}{x^{2}-nx+1}\right) \pmod \pi,$$ so interpret the arctan in the denominator via this sum (i.e. add $\pi$ to the principal value when $x(n-x)>1$, so it takes values in $(0,\pi)$). The substitution $x\mapsto n-x$ then gives $$ I= \int_{0}^{n} {\frac {\arctan(x)}{ \arctan(x)+\arctan(n-x)}dx} =\int_{0}^{n} {\frac {\arctan(n-x)}{ \arctan(x)+\arctan(n-x)}dx} $$ Adding the two expressions, $$2I = \int_{0}^{n} 1\,dx = n, \qquad I = \frac{n}2$$ $$\lim_{n \to \infty }\frac1n \int_{0}^{n} \frac {\arctan(x)}{ \arctan \left(\frac{n}{x^{2}-nx+1}\right)}dx = \frac12 $$
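As a numerical sanity check of the $I=\frac n2$ step (a sketch only: the denominator is written as $\arctan(x)+\arctan(n-x)$, the branch-correct reading of the arctan quotient, to avoid the jump of the principal value):

```python
import numpy as np

def ratio(n, samples=200_000):
    # midpoint-rule estimate of (1/n) * I, with the denominator written as
    # arctan(x) + arctan(n - x) rather than the raw arctan quotient
    dx = n / samples
    x = (np.arange(samples) + 0.5) * dx
    f = np.arctan(x) / (np.arctan(x) + np.arctan(n - x))
    return f.sum() * dx / n

val = ratio(50.0)  # already 1/2 for every finite n, by the symmetry argument
```

Since $f(x)+f(n-x)=1$ pointwise, the midpoint samples pair up exactly and the quadrature reproduces $\frac12$ essentially to machine precision.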
H: What's the definition of limit of sets (esp. ordinals) in set theory? By the definition of the exponent operator on ordinals, we have $$0^\omega=\lim_{\xi\to\omega}0^\xi$$ However, note that $0^\xi$ is not increasing, so if we still let $\lim_{\xi\to\omega}0^\xi=\sup\{0^\xi|\xi<\omega\}$ then it follows that $$0^\omega=\sup\{1,0,0,\ldots\}=1$$ An incredible result. Nevertheless, if we use the definition of limit superior $$\overline{\lim}_{\eta\to\beta}\alpha^\eta:=\inf_{\eta < \beta}\sup_{\eta \le\xi<\beta}\alpha^\xi$$ and limit inferior $$\underline{\lim}_{\eta\to\beta}\alpha^\eta:=\sup_{\eta < \beta}\inf_{\eta \le\xi<\beta}\alpha^\xi$$ then we get $$\lim_{\xi\to\omega}0^\xi=\underline{\lim}_{\xi\to\omega}0^\xi=\overline{\lim}_{\xi\to\omega}0^\xi=0$$ Conceivable, but this is not the end of the story. Let's consider $m^n$ with $0<m,n< \omega$. Since the exponent operator is continuous in the second slot, the following would have to hold: $$m^n=\lim_{\xi\to n}m^\xi=\underline{\lim}_{\xi\to n}m^\xi=\overline{\lim}_{\xi\to n}m^\xi=m^{n-1}$$ which fails when $1<m$. So can we find a perfect definition of limit on ordinals? Update: By the way, if we treat the class of ordinals as a discrete topological space, then $\{\alpha\}$ is a neighborhood of $\alpha$, hence $\{\alpha\}$ must be in every filterbase which converges to $\alpha$. So if we let $$\overline{\lim}_{\eta\to\beta}\alpha^\eta:=\inf_{\eta \le \beta}\sup_{\eta \le\xi\le\beta}\alpha^\xi$$ and $$\underline{\lim}_{\eta\to\beta}\alpha^\eta:=\sup_{\eta \le \beta}\inf_{\eta \le\xi\le\beta}\alpha^\xi$$ then $$\alpha^\beta=\lim_{\xi\to \beta}\alpha^\xi=\underline{\lim}_{\xi\to \beta}\alpha^\xi=\overline{\lim}_{\xi\to \beta}\alpha^\xi$$ holds for every $(\alpha,\beta) \in \mathbb O^2$. But this cannot actually serve as a definition, since $$\inf_{\eta \le \beta}\sup_{\eta \le\xi\le\beta}\alpha^\xi:=\inf\{\sup\{\alpha^\xi|\eta \le\xi\le\beta\}|\eta \le \beta\}$$ but $\{\alpha^\xi|\eta \le\xi\le\beta\}$ already contains $\alpha^\beta$ itself! Update: Edited title. 
The eventual question is as the title illustrates. AI: You asked in your comment: Indeed, but what's the exact definition of ordinal limit? Let me try to address this, although this is not an answer to your original question. The notion of the limit of a transfinite sequence is defined only for limit ordinals, see e.g. Wikipedia: If $\alpha$ is a limit ordinal and $X$ is a set, an $\alpha$-indexed sequence of elements of $X$ is a function from $\alpha$ to $X$. This concept, a transfinite sequence or ordinal-indexed sequence, is a generalization of the concept of a sequence. An ordinary sequence corresponds to the case $\alpha=\omega$. Some authors consider only increasing sequences, see e.g. Kechris: Classical Descriptive Set Theory, p. 349; Sierpinski: Cardinal and Ordinal Numbers, p. 287; or Kuratowski, Mostowski: Set Theory, p. 231. In the case of an increasing (non-decreasing) sequence we have $\lim_{\xi\to\alpha} \beta_\xi=\sup\{\beta_\xi; \xi<\alpha\}$. However, it makes sense to define the limit of any transfinite sequence $(\beta_\xi)_{\xi < \alpha}$, if $\alpha$ is a limit ordinal. (We do not have to use only monotone sequences.) A transfinite sequence $(\beta_\xi)_{\xi < \alpha}$ converges to an ordinal $\beta\ne0$ if, for every ordinal number $\gamma<\beta$, there exists an ordinal number $\eta < \alpha$ such that $\gamma <\beta_\xi\le\beta$ whenever $\eta <\xi < \alpha$. $$(\forall \gamma <\beta) \quad (\exists \eta < \alpha) \quad (\forall \xi) \qquad (\eta<\xi<\alpha \Rightarrow \gamma < \beta_\xi \le \beta) $$ A transfinite sequence $(\beta_\xi)_{\xi < \alpha}$ converges to $0$ if it is eventually equal to zero. 
$$(\exists \eta < \alpha) \quad (\forall \xi) \qquad (\eta<\xi<\alpha \Rightarrow \beta_\xi =0) $$ If you are familiar with nets, you can notice that this is the same as the limit of the transfinite sequence $(\beta_\xi)_{\xi< \alpha}$ considered as a net in the order topology on $\lambda$, where $\lambda$ is any ordinal larger than all $\beta_\xi$'s. The basis for the order topology on some totally ordered set $X$ consists of sets of the form $(-\infty,b)=\{x\in X; x<b\}$, $(a,\infty)=\{x\in X; a<x\}$ and $(a,b)=\{x\in X; a<x<b\}$ for $a<b$. It is not difficult to notice that if $\beta>0$ then a neighborhood basis of $\beta$ (as an element of some $\lambda>\beta$ with the order topology) consists of all sets of the form $(\gamma,\beta+1)$ where $\gamma<\beta$. Precisely these sets were used in the above definition. The neighborhood basis at $0$ cannot be described in that way, which is why I had to treat the case $\beta=0$ separately. (In this case, we can take the neighborhood basis consisting of the single set $(-\infty,1)=\{0\}$.) Thanks for letting me know that the definition I suggested originally wasn't working.
H: Open set whose boundary is not a null set I've just seen a theorem about a bounded set $A \subset \mathbb{R}^n$. $$\chi_A \text{ is Riemann integrable} \Longleftrightarrow \partial A \text{ is a null set}$$ Then, I wonder if there's any open set whose boundary is not a null set. Can you give me an example of that? AI: Consider a fat Cantor set $A\subset[0,1]$, for example one of measure $\frac13$. $A$ is closed, so its complement is open, and also $A$ is nowhere dense, therefore its complement is dense. That is to say that $U=[0,1]\setminus A$ is open with $\partial U = A$, so for the Lebesgue measure $m$ we have: $$m(\partial U)=m([0,1]\setminus U)=m(A)=\frac13.$$
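One concrete construction (an assumption on my part — the answer does not fix a particular fat Cantor set): at stage $k$, remove from each of the $2^{k-1}$ surviving intervals a centered open gap, with total length $\frac23\cdot2^{-k}$ removed at stage $k$, so the remaining measure is exactly $1-\frac23(1-2^{-N})\to\frac13$:

```python
from fractions import Fraction

def fat_cantor_measure(stages):
    # surviving closed intervals, tracked with exact rational endpoints
    intervals = [(Fraction(0), Fraction(1))]
    for k in range(1, stages + 1):
        # total length (2/3) * 2^-k removed at stage k, split evenly
        gap = Fraction(2, 3) / 2**k / len(intervals)
        nxt = []
        for a, b in intervals:
            mid = (a + b) / 2
            nxt.append((a, mid - gap / 2))
            nxt.append((mid + gap / 2, b))
        intervals = nxt
    return sum(b - a for a, b in intervals)

m = fat_cantor_measure(15)  # exactly 1/3 + (2/3) * 2**-15
```

The exact rational arithmetic makes the convergence to $\frac13$ verifiable with no rounding error.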
H: Uniqueness theorem for harmonic function So far in complex analysis books I have studied the Uniqueness Theorem: if $f$ is analytic in a domain $D$ and its set of zeroes has a limit point in $D$, then $f\equiv 0$ on $D$. I want to know whether this result holds for harmonic functions. AI: In general harmonic functions are functions $u:\mathbb{R}^n\rightarrow \mathbb{R}$. If $0$ is a regular value of such a function (and by Sard's theorem almost every value is regular) then $u^{-1}(0)$ is locally an $(n-1)$-dimensional submanifold. Actually this is true for every $C^1$-function, regardless of whether it is harmonic or not. So the question only makes sense for $u: \mathbb{R}\rightarrow \mathbb{R}$. In this case the equation is $u''=0$, which implies $u$ is linear. So the answer is yes in one dimension, because there the solution is explicitly known, and no otherwise, since the zero set is usually locally a submanifold of dimension $\ge1$.
H: Height of this triangle? Each edge of the following cube is 1 and C is a point on the edge. What would the height of the triangle be in this case, and how would you measure it? AI: Assuming you mean "height" as "altitude passing through point $C$". You might want to clarify if this is not correct: The height of the triangle is just the distance from $C$ to the line $AB$. Cutting the cube perpendicular to $AB$ through point $C$, we find a right-angled triangle with both legs of length $1$, and the height of the triangle as its hypotenuse. So the height of the triangle is $\sqrt{1^2+1^2}=\sqrt 2$. It's also good to realize that the height doesn't change depending on where $C$ is, so you can just arbitrarily move it to one vertex and take the height along the corresponding face.
H: Formula to estimate sum to nearly correct : $\sum_{n=1}^\infty\frac{(-1)^n}{n^3}$ Estimate the sum correct to three decimal places : $$\sum_{n=1}^\infty\frac{(-1)^n}{n^3}$$ This problem is in my homework. I find that n = 22 when I use Maple to solve this. (with some programming) But, in my homework, the teacher said to find the formula for this problem. Thanks :) AI: By the Alternating Series Test, the error in truncating an alternating series with monotonically decreasing terms is bounded by the magnitude of the first omitted term. Thus, to get three decimal places (error less than $0.0005 = \frac{1}{2000}$), we need the first omitted term to satisfy $n^3>2000$, which gives $n=13$. Thus, summing the first 12 terms gets you to within 3 decimal places.
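A quick check of the claim — the true value is $-\frac34\zeta(3)\approx-0.9015427$, used here only as a high-precision reference computed from a long partial sum:

```python
from math import fsum

terms = [(-1)**n / n**3 for n in range(1, 10**6 + 1)]
reference = fsum(terms)        # converged far beyond three decimals
partial_12 = fsum(terms[:12])  # the truncation the answer proposes
error = abs(partial_12 - reference)
```

The alternating-series bound guarantees `error` is below the first omitted term $13^{-3}\approx0.000455$.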
H: Finding the equation of the tangent plane to the surface $z=x^2+ y^2$ at the point $(1,2)$ Just wanted to ask a simple question I've forgotten how to solve (lost my mind completely). Find the equation of the tangent plane to the surface $z=x^2 + y^2$ at $(1,2)$. The fact it has $3$ variables is what is putting me off. I don't know whether to differentiate with respect to $x$ or $y$. AI: This plane has normal $(-\dfrac{\partial z}{\partial x}(1,2),-\dfrac{\partial z}{\partial y}(1,2),1) = (-2,-4,1)$. Then the plane has the form \begin{equation} -2x -4y + z + d = 0. \end{equation} The plane passes through $(1,2,5)$ (since $z=1^2+2^2=5$ there), so $-2\cdot1-4\cdot2+5+d=0$ gives $d=5$, and the plane is \begin{equation} -2x -4y + z + 5 = 0. \end{equation}
H: Ordered numbers Let $0<a<b<1$, can we find a point $x\in (a,b)$ such that $a<x^{2}<x<b$. I know that we can find $x$ such that $a<x<b$ and this $x$ will satisfy $0<x^{2}<x<b$, but I'm not sure how to choose such $x$ with $a<x^{2}<x<b$? AI: This is not true. For example, take $a=\frac{1}{4}$ and $b=\frac{1}{3}$. Then, $x<b \implies x^2<b^2=\frac{1}{9}$. So $x<b \implies x^2<a$, for all $x$. This contradicts your statement. As Serkan noted in the comments, however, this holds if and only if $b \gt b^2 \gt a$ (it cannot be equal). If this condition holds, then $b^2>b^2-\frac{1}{n}>a$, for some natural number $n$. Set $x=\sqrt{b^2-\frac{1}{n}}$. Then $b^2>x^2>a$, so $b>x$. Since $x<1$, $x^2<x$, and the inequality is complete, with $a<x^2<x<b$.
H: Can an ordered field be finite? I came across this question in a calculus book. Is it possible to prove that an ordered field must be infinite? Also - does this mean that there is only one such field? Thanks AI: Recall that in an ordered field we have: $0<1$; $a<b\implies a+c<b+c$. Suppose that $F$ is an ordered field of characteristic $p$, then we have in $F$ that $$\underbrace{1+\ldots+1}_{p\text{ times}} = 0$$ Therefore: $$0<1<1+1<\ldots<\underbrace{1+\ldots+1}_{p\text{ times}} = 0$$ Contradiction! Therefore the characteristic of $F$ is $0$ and therefore it is infinite, since it contains a copy of $\mathbb Q$. A few fun facts on the characteristic of a field: Definition: The characteristic of a field $F$ is the least number $n$ such that $\underbrace{1+\ldots+1}_{n\text{ times}}=0$ if it exists, and $0$ otherwise. Exercises: If a field has a positive characteristic $n$ then $n$ is a prime number. If $F$ is a finite field then its characteristic is non-zero (Hint: the function $x\mapsto x+1$ is injective; start with $0$ and iterate it $|F|$ many times and you necessarily get $0$ again.) If $F$ is finite and $p$ is its characteristic then $p$ divides $|F|$.
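For a finite taste of the argument, one can brute-force check a tiny case (a sketch, not part of the proof): no strict total order on $\mathbb Z/3\mathbb Z$ is translation-invariant.

```python
from itertools import permutations

p = 3
compatible = []
for perm in permutations(range(p)):  # perm lists the elements from smallest to largest
    rank = {v: i for i, v in enumerate(perm)}
    # translation invariance: a < b implies a + c < b + c (addition mod p)
    ok = all(rank[(a + c) % p] < rank[(b + c) % p]
             for a in range(p) for b in range(p) for c in range(p)
             if rank[a] < rank[b])
    if ok:
        compatible.append(perm)
```

`compatible` comes back empty: any invariant order would force the cycle $0<1<2<0$ seen in the answer's chain of inequalities.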
H: Evaluating $\int \frac{x^3}{(x^2 + 1)^\frac{1}{3}}dx$ $$\int \frac{x^3}{(x^2 + 1)^\frac{1}{3}}dx$$ I am supposed to make a $u$ substitution to turn this into a rational integral and then evaluate it from there, but I have no idea how to do that. There aren't any good examples of this in the book and I cannot find any $u$ that makes this doable. AI: Let $u = x^2 + 1$, $du = 2x\,dx$: \begin{align*} \int \frac{x^3\,dx}{(x^2 + 1)^\frac{1}{3}} &= \int \frac{x^2\,x\,dx}{(x^2 + 1)^\frac{1}{3}} = \frac{1}{2} \int \frac{u-1}{u^\frac{1}{3}}\,du = \frac{1}{2} \int \left(u^\frac{2}{3} - u^{-\frac{1}{3}}\right) \,du \\ &= \frac{3}{10} u^{\frac{5}{3}} - \frac{3}{4} u^{\frac{2}{3}} + C \\ &= \frac{3}{10} \left(x^2+1\right)^{\frac{5}{3}} - \frac{3}{4} \left(x^2+1\right)^{\frac{2}{3}} + C \end{align*}
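Differentiating the antiderivative (with the coefficient $\frac{3}{10}$ from the $u$-line of the computation) recovers the integrand, which sympy can confirm:

```python
import sympy as sp

x = sp.symbols('x', real=True)
antideriv = (sp.Rational(3, 10) * (x**2 + 1)**sp.Rational(5, 3)
             - sp.Rational(3, 4) * (x**2 + 1)**sp.Rational(2, 3))
# difference between d/dx of the antiderivative and the original integrand
residual = sp.simplify(sp.diff(antideriv, x) - x**3 / (x**2 + 1)**sp.Rational(1, 3))
```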
H: Inequality $|f(1)-f(0)|\le g(1)-g(0)$ if $|f(t)|\le g'(t)$ I have the question: Let $I=[0,1]$ be the closed interval, $f:I\to\mathbb{R}^n$ and $g:I\to\mathbb{R}$ differentiable, with $|f(t)|\le g'(t)$, for all $t\in I$. Show that $|f(1)-f(0)|\le g(1)-g(0)$ The book suggests to use the same trick in proving the Mean Value Theorem: for any $\epsilon>0$, define $X=\{t\in I:|f(t)-f(0)|\le g(t)-g(0)+\epsilon t+\epsilon\}$ and prove that if $\alpha\in X$, with $\alpha<1$, then there exists $\delta>0$, such that $\alpha+\delta\in X$ (so, since it's easy to prove that $X$ is a interval and $\sup X\in X$, we have the question). Sincerely I have no progress... The condition $|f(t)|\le g'(t)$ is quite annoying, and I have no idea how to manipulate things to make it useful... I tried the basic: let $\alpha\in X,\alpha<1$, so, since $f$ is differentiable, we can find $\delta>0$ and write $|f(\alpha+\delta)-f(\alpha)|\le|f'(\alpha)\delta+r(\alpha)|\le|f'(\alpha)|\delta+\epsilon\delta$, where $\lim_{\delta\to0}r(\alpha)/|\delta|=0$. If we guarantee that $|f'(\alpha)|\delta\le g(\alpha+\delta)-g(\alpha)$, we prove the suggestion... Also, if we assume that $|f(\alpha)|<g'(\alpha)$ (and see what we do with the case $|f(\alpha)|=g'(\alpha)$ later), then, we can find $\delta>0$ such that $|f(\alpha)|\delta<g(\alpha+\delta)-g(\alpha)$, so, in this case, we have a relation between $|f|$ and $g$ (but it doesn't seem to help...) Any hint will be appreciated! Thanks! AI: Take $$f,g:I \to \mathbb{R}\,\,,\,f(t):=t^2\,,\,g(t)=\frac{t^2}{2}$$Then, for $\,t\in I\,\,,\,|f(t)|=t^2\leq t=g'(t)\,$ , but nevertheless $$f(1)-f(0)=1\not\leq \frac{1}{2}=g(1)-g(0)$$So either you omitted some condition (perhaps the hypothesis should be $|f'(t)|\le g'(t)$) or the above contradicts the claim (or, of course, I missed something)
H: Formal proof of De Morgan's laws for quantifiers Consider the set of inference rules for first order logic (analogous to the ones listed here : http://en.wikipedia.org/wiki/Sequent_calculus#Inference_rules) I am stuck in proving the following rule $$\vdash_{\gamma} \neg \forall x.\phi \implies \exists x. \neg \phi $$ I think it is easy to do this using the notion of soundness and completeness and checking that the left formula is valid when the right is. However I am not able to prove it using just the formalism of manipulating proof trees with inference rules. Somehow I do not see how to get rid of the negation in $\neg \forall x.\phi$ without applying the rule I want to prove. Any hints? AI: Would the following work? 1 $\vdash \neg \forall x. \phi$ | Hypothesis 2 $\vdash \neg \phi \implies \exists x. \neg \phi$ by Existential Generalization 3 $\vdash \neg \exists x. \neg \phi \implies \phi$ by 2, Contraposition (and double negation) 4 $\neg \exists x. \neg \phi \vdash \phi$ by 3 5 $\neg \exists x. \neg \phi \vdash \forall x. \phi$ by 4, Universal Generalization 6 $\vdash \neg \exists x. \neg \phi \implies \forall x. \phi$ by 5, Deduction 7 $\vdash \neg \forall x. \phi \implies \exists x. \neg \phi$ by 6, Contraposition
H: Evaluating $\int \frac{dx}{x^2 - 2x}$ $$\int \frac{dx}{x^2 - 2x}$$ I know that I have to complete the square so the problem becomes. $$\int \frac{dx}{(x - 1)^2 -1}dx$$ Then I set up my A B and C stuff $$\frac{A}{x-1} + \frac{B}{(x-1)^2} + \frac{C}{-1}$$ With that I find $A = -1, B = -1$ and $C = 0$ which I know is wrong. I must be setting up the $A, B, C$ thing wrong but I do not know why. AI: My book is telling me that I have to complete the square $$I=\begin{eqnarray*} \int \frac{dx}{x^{2}-2x} &=&\int \frac{dx}{\left( x-1\right) ^{2}-1}\overset{ u=x-1}{=}\int \frac{1}{u^{2}-1}\,du=-\text{arctanh }u+C \end{eqnarray*},$$ $$\tag{1}$$ where I have used the substitution $u=x-1$ and the standard derivative $$\frac{d}{du}\text {arctanh }u=\frac{1}{1-u^{2}}\tag{2}$$ You just need to substitute $u=x-1$ to write $\text{arctanh }u$ in terms of $x$. Added 2: Remark. If we use the logarithmic representation of the inverse hyperbolic function $\text{arctanh }u$ $$\begin{equation*} \text{arctanh }u=\frac{1}{2}\ln \left( u+1\right) -\frac{1}{2}\ln \left( 1-u\right),\qquad (\text{real for }|u|<1)\tag{3} \end{equation*}$$ we get for $u=x-1$ $$\begin{eqnarray*} I &=&-\text{arctanh }u+C=-\text{arctanh }\left( x-1\right) +C \\ &=&-\frac{1}{2}\ln x+\frac{1}{2}\ln \left( 2-x\right) +C \\ &=&\frac{1}{2}\left( \ln \frac{2-x}{x}\right) +C\qquad (0<x<2). \end{eqnarray*}\tag{4}$$ Added. If your book does require using partial fractions then you can proceed as follows $$\begin{equation*} \int \frac{1}{u^{2}-1}\,du=\int \frac{1}{\left( u-1\right) \left( u+1\right) }\,du=\int \frac{1}{2\left( u-1\right) }-\frac{1}{2\left( u+1\right) }du. \end{equation*}$$ $$\tag{5}$$
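Form (4) is easy to verify by differentiation; a sympy sketch on the interval $0<x<2$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
F = sp.Rational(1, 2) * sp.log((2 - x) / x)  # the antiderivative from (4), constant dropped
# d/dx F should equal the original integrand 1/(x^2 - 2x)
residual = sp.simplify(sp.diff(F, x) - 1 / (x**2 - 2*x))
```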
H: High school line equation question I hit a stumbling block below. I tried everything I could think of, but failed to find a satisfactory answer. My workout is this: I found the midpoint between points B and C, and then used A and that midpoint to find a slope, which I expected to be the bisector's. But the line equation $y = mx + c$ I got didn't match the answer at the back. The problem is as below: Let A(1,1), B(4,5), and C(6,13) be points in the xy-plane. Find the equation of the line which bisects $\angle$BAC. The solution given is 7x - 4y = 3. Your help is much appreciated. AI: Do you know (or perhaps, remember) that the angle bisector is the locus of all points that are equidistant from both angle's rays? In this case, the angle rays are the lines $\,AB\,,\,AC\,$:$$AB:\,4x-3y-1=0\,\,,\,AC:\,12x-5y-7=0$$So we want all the points $\,(x,y)\,$ which are at the same distance from both lines above: $$\frac{|4x-3y-1|}{\sqrt{4^2+3^2}}=\frac{|12x-5y-7|}{\sqrt{12^2+5^2}}\Longrightarrow 13^2(4x-3y-1)^2=5^2(12x-5y-7)^2\Longrightarrow$$Try to continue from here. 
Added In fact you don't need the headache of the squares in the last line above: just resolve the absolute values with the two possible sign choices (once with $|A|=A$, once with $|A|=-A$), taking into account that both lines $\,AB\,,\,AC\,$ have positive slope, and thus the bisector line must also have positive slope, lying between the two lines' slopes. Further added: Dropping the squares, you still cannot drop the absolute values, so we get $$13|4x-3y-1|=5|12x-5y-7|\Longrightarrow 52x-39y-13=\pm\left(60x-25y-35\right)$$$$(i) \,\,52x-39y-13=60x-25y-35\Longrightarrow 8x+14y-22=0$$and this is not what we want, as both given lines have positive slope (and thus their angle's bisector also has positive slope; what we get here is the bisector of the obtuse angle between the lines, which is usually NOT "the angle between the lines", defined almost always as the acute (or right) one), so we go with $$(ii)\,\,52x-39y-13=-60x+25y+35\Longrightarrow 112x-64y=48\Longrightarrow 7x-4y=3$$and we get what we wanted. Of course, there's also a well-known formula with tangents and differences of angles, but on purpose I left trigonometry out of the answer.
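A quick numerical check of the final line, comparing a few sample points on $7x-4y=3$ against the two distance formulas above:

```python
def d_AB(x, y):  # distance to line AB: 4x - 3y - 1 = 0
    return abs(4*x - 3*y - 1) / 5

def d_AC(x, y):  # distance to line AC: 12x - 5y - 7 = 0
    return abs(12*x - 5*y - 7) / 13

# sample points on the claimed bisector 7x - 4y = 3
points = [(x, (7*x - 3) / 4) for x in (1.0, 3.0, 5.0, 9.0)]
gaps = [abs(d_AB(x, y) - d_AC(x, y)) for x, y in points]
```

The first sample point is the vertex $A(1,1)$ itself, where both distances are $0$.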
H: Spivak Calculus Prologue I'm completely blown away by the difficulty of Spivak. I've managed to work through the first 3 problems, but I feel I'm missing something important to solve these basic inequalities in his 4th problem: $$x^2 + x + 1 > 2$$ & $$x^2 + x + 1 > 0$$ any suggestions? AI: Complete the square: $$ x^2 + x + 1 = (x^2 + x + \tfrac 1 4) + \frac 3 4 = \left(x + \frac 1 2 \right)^2 + \frac 3 4. $$ That gets you the second one. For the first one, put everything on one side of the inequality and $0$ on the other side, and proceed similarly. Later addendum in response to vitno's question in the comments below: In general, the process of completing the square looks like this: $$ \begin{align} ax^2 + bx + c & = a\left(x^2 + \frac b a x\right) + c \\[12pt] & = a\left(x^2 + \frac b a x + \frac{b^2}{4a^2}\right) + c - a\left(\frac{b^2}{4a^2}\right) \tag{$\begin{array}{c} \text{completing} \\ \text{the square}\end{array}$} \\[12pt] & = a\left(x + \frac{b}{2a}\right)^2 + \frac{4ac - b^2}{4a}. \end{align} $$ Say you have a particular case: $$ 3x^2 + 20 x + 7. $$ Proceed as follows: $$ 3\left(x^2 + \frac{20}{3} x \right) + 7. $$ Take half the coefficient of the first-degree term and square it, getting $(10/3)^2$. Add this in the appropriate place, and subtract it out later: $$ 3\underbrace{\left(x^2 + \frac{20}{3} x + \left(\frac{10}{3}\right)^2 \right)}_{\text{a perfect square}} + 7 - 3\left(\frac{10}{3}\right)^2 $$ $$ = 3\left(x + \frac{10}{3}\right)^2 - \frac{79}{3}. $$ Knowing how and when to complete the square is useful. Remember this: The purpose of completing the square is always to reduce a quadratic polynomial with a first-degree term to a quadratic polynomial with no first-degree term.
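The worked case can be checked by expanding the completed-square form back out:

```python
import sympy as sp

x = sp.symbols('x')
completed = 3 * (x + sp.Rational(10, 3))**2 - sp.Rational(79, 3)
# expanding should recover the original quadratic exactly
residual = sp.expand(completed - (3*x**2 + 20*x + 7))
```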
H: Evaluating $\int \frac{1}{(1-x^2)^{3/2}} dx$ I am struggling to evaluate the following integral: $$\int \frac{1}{(1-x^2)^{3/2}} dx$$ I tried a lot to factorize the expression but I didn't reach the solution. Please someone help me. AI: Hint: Set $x=\sin(t)$, then everything will turn out very well. This often helps when you have an expression like $1-x^2$ in your integral.
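Carrying the hint through: with $x=\sin t$ we get $dx=\cos t\,dt$ and $(1-x^2)^{3/2}=\cos^3 t$, so the integral becomes $\int\sec^2 t\,dt=\tan t=\frac{x}{\sqrt{1-x^2}}+C$. A sympy check of that antiderivative:

```python
import sympy as sp

x = sp.symbols('x', real=True)
F = x / sp.sqrt(1 - x**2)  # antiderivative obtained from the x = sin(t) substitution
residual = sp.simplify(sp.diff(F, x) - 1 / (1 - x**2)**sp.Rational(3, 2))
```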
H: Evaluate $\lim_{x \to \infty} \frac{1}{x} \int_x^{4x} \cos\left(\frac{1}{t}\right) \mbox {d}t$ Evaluate $$\lim_{x \to \infty} \frac{1}{x} \int_x^{4x} \cos\left(\frac{1}{t}\right) \mbox {d}t$$ I was given the suggestion to define two functions as $g(x) = x$ and $f(x) = \int_x^{4x}\cos\left(\frac{1}{t}\right)dt$ so then if I could prove that both went to $\infty$ as $x$ went to $\infty$, then I could use L'Hôpital's rule on $\frac{f(x)}{g(x)}$; but I couldn't seem to do it for $f(x)$. I can see that the limit is 3 if I just go ahead and differentiate both functions and take the ratio of the limits, but of course this is useless without establishing the indeterminate form. How do I show that $\frac{f(x)}{g(x)}$ is an indeterminate form? Or how else might I evaluate the original limit? AI: For other methods of solving the limit you could use the mean value theorem for integrals: $$\frac{1}{x} \int_x^{4x} \cos \frac{1}{t} \; dt = \frac{3x \cos \frac{1}{c}}{x} = 3\cos\frac{1}{c}$$ for some $c \in (x,4x)$. Now as $x \to +\infty$ we also have $c \to +\infty$, so $\cos \frac1c \to 1$ and the limit is $3$.
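A numerical sketch of the same limit (midpoint rule; the values stay strictly below $3$ because $\cos(1/t)<1$ on the interval):

```python
import numpy as np

def ratio(x, samples=200_000):
    # midpoint-rule estimate of (1/x) * \int_x^{4x} cos(1/t) dt
    dt = 3 * x / samples
    t = x + (np.arange(samples) + 0.5) * dt
    return np.cos(1.0 / t).sum() * dt / x

vals = [ratio(10.0**k) for k in (1, 2, 3, 4)]
```

The deviation from $3$ behaves like $\frac{3}{8x^2}$, since $1-\cos(1/t)\approx\frac1{2t^2}$.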
H: Tricky radius of convergence: $\sum\limits_{n=0}^\infty\cos\left(\alpha\sqrt{1+n^2}\right)z^n$ I encountered the following power series, and while I know a couple of ways to determine radius of convergence, I wasn't able to figure out how to evaluate the appropriate limit to get said radius. Can anyone help? What is the radius of convergence of the power series $$\sum_{n=0}^\infty\cos\left(\alpha\sqrt{1+n^2}\right)z^n,$$ where $\alpha$ is any real number? What if $\alpha$ is a complex number? AI: Hint: $\sqrt{1+n^2} = n + 1/(2n) + O(1/n^3)$.
H: Proof Using Truth Tables Please forgive the very basic question, but I know nothing really of formal logic and so would appreciate some feedback. The truth table defining the implication operator

P Q | P implies Q
T T |      T
T F |      F
F T |      T
F F |      T

together with the negation operator ~ defined in the obvious way enables one to construct the following table for ~Q $\implies$ ~P:

P Q | ~P ~Q | ~Q implies ~P
T T |  F  F |       T
T F |  F  T |       F
F T |  T  F |       T
F F |  T  T |       T

Evidently, the truth values for ~Q $\implies$ ~P are the same as those for P $\implies$ Q. Is this enough to prove that $P \implies Q$ if and only if ~Q $\implies$ ~P? My thinking is that yes, it is, because I believe that the logical operators involved are defined by their respective truth tables, and this being the case the observations above should be sufficient to prove the equivalence. AI: Yes, what you're saying is indeed the case and is the idea behind the technique of proof by contrapositive.
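The point of a truth-table proof is that the four rows exhaust all cases, which can also be checked mechanically:

```python
from itertools import product

def implies(p, q):
    return (not p) or q  # material implication

rows = list(product([True, False], repeat=2))  # the four rows of the table
equivalent = all(implies(p, q) == implies(not q, not p) for p, q in rows)
```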
H: Putting ${n \choose 0} + {n \choose 5} + {n \choose 10} + \cdots + {n \choose 5k} + \cdots$ in a closed form As the title says, I'm trying to transform $\displaystyle{n \choose 0} + {n \choose 5} + {n \choose 10} + \cdots + {n \choose 5k} + \cdots$ into a closed form. My work: $\displaystyle\left(1 + \exp\frac{2i\pi}{5} \right )^n = \displaystyle\sum_{p=0}^{n}\binom{n}{p}\exp\left(\frac{p\cdot2i\pi}{5} \right)$ $\displaystyle=\binom{n}{0} + \binom{n}{1}\exp\left(\frac{1\cdot2i\pi}{5} \right) + \binom{n}{2}\exp\left(\frac{2\cdot2i\pi}{5} \right) + \binom{n}{3}\exp\left(\frac{3\cdot2i\pi}{5} \right) + \binom{n}{4}\exp\left(\frac{4\cdot2i\pi}{5} \right) + \binom{n}{5} + \cdots = \left[\binom{n}{0} + \binom{n}{5} + \binom{n}{10} + \cdots\right ] + \exp\left(\frac{2i\pi}{5} \right)\left[\binom{n}{1} + \binom{n}{6} + \binom{n}{11} + \cdots\right ] + \exp\left(\frac{4i\pi}{5} \right)\left[\binom{n}{2} + \binom{n}{7} + \binom{n}{12} + \cdots\right ] + \exp\left(\frac{6i\pi}{5} \right)\left[\binom{n}{3} + \binom{n}{8} + \binom{n}{13} + \cdots\right ] + \exp\left(\frac{8i\pi}{5} \right)\left[\binom{n}{4} + \binom{n}{9} + \binom{n}{14} + \cdots\right ]$ I'll write $\left[\binom{n}{0} + \binom{n}{5} + \binom{n}{10} + \cdots\right ] = k$, $\left[\binom{n}{1} + \binom{n}{6} + \binom{n}{11} + \cdots\right ] = u$, $\left[\binom{n}{2} + \binom{n}{7} + \binom{n}{12} + \cdots\right ] = v$, $\left[\binom{n}{3} + \binom{n}{8} + \binom{n}{13} + \cdots\right ] = w$ and $\left[\binom{n}{4} + \binom{n}{9} + \binom{n}{14} + \cdots\right ] = z$. 
Thus $\displaystyle\left(1 + \exp\frac{2i\pi}{5} \right )^n = k + u\cdot\exp\frac{2i\pi}{5} + v\cdot\exp\frac{4i\pi}{5} + w\cdot \exp\frac{6i\pi}{5} + z\cdot\exp\frac{8i\pi}{5} = k + u\cdot\left (\cos\frac{2\pi}{5} + i\cdot\sin\frac{2\pi}{5} \right ) + v\cdot\left (\cos\frac{4\pi}{5} + i\cdot\sin\frac{4\pi}{5} \right ) + w\cdot\left (\cos\frac{6\pi}{5} + i\cdot\sin\frac{6\pi}{5} \right ) + z\cdot\left (\cos\frac{8\pi}{5} + i\cdot\sin\frac{8\pi}{5} \right )$ Noting that $\cos\frac{2\pi}{5} = \cos\frac{8\pi}{5}$, $\cos\frac{4\pi}{5} = \cos\frac{6\pi}{5}$, $\sin\frac{2\pi}{5} = -\sin\frac{8\pi}{5}$ and $\sin\frac{4\pi}{5} = -\sin\frac{6\pi}{5}$: $\displaystyle\left(1 + \exp\frac{2i\pi}{5} \right )^n = k + \left(u + z\right)\cos\frac{2\pi}{5} + i\cdot\left(u - z \right)\sin\frac{2\pi}{5} + \left(v + w\right)\cos\frac{4\pi}{5} + i\cdot\left(v - w \right)\sin\frac{4\pi}{5} = \left(k + \left(u + z\right)\cos\frac{2\pi}{5} + \left(v + w\right)\cos\frac{4\pi}{5}\right) + i\cdot\left(\left(u - z \right)\sin\frac{2\pi}{5} + \left(v - w \right)\sin\frac{4\pi}{5} \right)$ But $\displaystyle\left(1 + \exp\frac{2i\pi}{5} \right )^n = \left(2\cos\left(\frac{\pi}{5} \right)\cdot\exp\left(\frac{i\pi}{5}\right)\right)^n = 2^n\cos^n\left(\frac{\pi}{5} \right)\cdot\exp\left(\frac{ni\pi}{5}\right) = \left(2^n\cos^n\frac{\pi}{5} \right)\left(\cos\frac{n\pi}{5} + i\cdot\sin\frac{n\pi}{5} \right ) = \left(2^n\cos^n\frac{\pi}{5}\cos\frac{n\pi}{5} \right) + i\cdot \left(2^n\cos^n\frac{\pi}{5}\sin\frac{n\pi}{5} \right)$ So, $\displaystyle k + \left(u + z\right)\cos\frac{2\pi}{5} + \left(v + w\right)\cos\frac{4\pi}{5} = 2^n\cos^n\frac{\pi}{5}\cos\frac{n\pi}{5}$ and $\displaystyle\left(u - z \right)\sin\frac{2\pi}{5} + \left(v - w \right)\sin\frac{4\pi}{5} = 2^n\cos^n\frac{\pi}{5}\sin\frac{n\pi}{5}$ and I'm stuck here. I noted that $k + u + v + w + z = 2^n$ but I couldn't isolate $k$. 
So, any help finishing this result will be fully appreciated. Thanks. AI: Hint: with $\omega=\exp(2\pi i /5)$ a primitive $5$th root of unity, we have $$\sum_{r=0}^4 \omega^{rk}=\begin{cases}5 & k\equiv 0 \bmod 5 \\ 0 & k\not\equiv 0\bmod 5.\end{cases}$$ So then what is $$(1+\omega^0)^n+(1+\omega^1)^n+(1+\omega^2)^n+(1+\omega^3)^n+(1+\omega^4)^n~? $$ (Combine the binomial expansions...)
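The hint amounts to the roots-of-unity filter $\sum_k\binom{n}{5k}=\frac15\sum_{r=0}^4(1+\omega^r)^n$; a numerical check of that closed form:

```python
import cmath
from math import comb

def direct(n):
    # left-hand side: sum of C(n, 5k) computed term by term
    return sum(comb(n, k) for k in range(0, n + 1, 5))

def via_roots(n):
    # right-hand side of the roots-of-unity filter identity
    w = cmath.exp(2j * cmath.pi / 5)
    return sum((1 + w**r)**n for r in range(5)) / 5

rel_errors = [abs(via_roots(n) - direct(n)) / direct(n) for n in range(0, 41)]
```

Relative error is used because the sums grow like $2^n/5$, so an absolute tolerance would be misleading at large $n$.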
H: Given $x^2 + 2y^2 - 6x + 4y + 7 = 0$, find center, foci, vertex/vertices So the equation is: $$ x^2 + 2y^2 - 6x + 4y + 7 = 0 $$ Find the coordinates of the center, the foci, and the vertex or vertices. What I did was put the equation in the form: $$ \frac{(x-3)^2}{4}+ \frac{(y+1)^2}{4} = 1 $$ Now based on that, I said the center is at $(3,-1)$, the foci are at $\approx \pm 2.45$ (since $c = \sqrt {a^2 + b^2}$). So the coordinates of that are $(3+2.45,-1)$ and $(3-2.45,-1)$ and the vertices are $(1,-1)$ and $(5,-1)$. I also went ahead and found the asymptote, which is just done by setting the equation to $0$, correct? AI: The equation should be $$\frac{(x-3)^2}{4}+\frac{(y+1)^2}{2}=1.$$ You've correctly identified the center and vertices. The focal length should be $\sqrt{a^2-b^2}$, not $\sqrt{a^2+b^2}$: here $a^2=4$ and $b^2=2$, so $c=\sqrt{4-2}=\sqrt2$ and the foci are $(3\pm\sqrt2,-1)$. Ellipses don't have asymptotes, you're thinking of hyperbolae.
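A symbolic check that the standard form given in the answer matches the original conic:

```python
import sympy as sp

x, y = sp.symbols('x y')
original = x**2 + 2*y**2 - 6*x + 4*y + 7
standard = (x - 3)**2 / sp.Integer(4) + (y + 1)**2 / sp.Integer(2) - 1
# clearing the denominator: original == 4 * standard
residual = sp.expand(original - 4 * standard)
```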
H: Fourier and integral Given the trigonometric series below: $1 + \sum_{n=1}^{\infty} \frac{2}{1+n^{2}}\cos (nt)$ where $f(t)$ is the value of the series. Can I then deduce that $\int_{-\pi}^{\pi} f(x) dx$ is $2\pi$? I ask because the series for $f(t)$ looks like a Fourier series, and I can then recognize that $1 = \frac{1}{2} \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) dx$. AI: The series converges uniformly by Weierstrass's $M$-test, so it's ok to interchange integration and summation: $$\int_{-\pi}^{\pi} \left( 1+ \sum_{n=1}^\infty \frac{2}{1+n^2}\,\cos nt \right)\,dt = \int_{-\pi}^{\pi} 1\,dt + \sum_{n=1}^\infty \left(\frac{2}{1+n^2}\int_{-\pi}^{\pi}\cos nt \,dt\right) = 2\pi + \sum_{n=1}^\infty 0 = 2\pi$$
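A numerical version of the interchange, on a truncated series (every cosine term integrates to $0$ over the full period, so only the constant contributes):

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 20_001)
partial = 1 + sum(2 / (1 + n**2) * np.cos(n * t) for n in range(1, 40))
dt = t[1] - t[0]
# trapezoid rule written out by hand
integral = (partial.sum() - 0.5 * (partial[0] + partial[-1])) * dt
```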
H: Explanation of how models can differ on $\omega$? Assuming set theory (here, ZF) is consistent, there is a model $V$ of ZF, the universe of all sets. So, there is a $\omega^V\in V$. A set $A\in V$ is countable iff a bijection $f\in V$ exists between $A$ and $\omega^V$. By the downward Löwenheim–Skolem Theorem, there is a countable model $M$ of ZFC such that the domain M of $M$ is in $V$ and a bijection $B\in V$ exists between M and $\omega^V$. Since $M$ is a model of ZFC, there must be some $\omega^M\in M$ and some $\omega_1^M\in M$. There is no guarantee that $\omega^M$ is $\omega^V$, and no guarantee that $\omega_1^M$ is $\omega_1^V$. My questions: Can someone explain when/how/why/under what conditions this divergence (between models taking different sets to be $\omega$) happens? When/how/why/under what conditions is there a guarantee that sets will agree on $\omega$? AI: To the first question, consider $M$ to be a model of ZFC which is a set in the universe $V$. Suppose that $M$ and $V$ agree on $\omega$; we can take an ultrapower of $M$ by a free ultrafilter on the real $\omega$. This happens in $V$, and externally to $M$. The result is a model, $N$, which externally speaking has a different version of $\omega$. Namely, $\omega^N$ is the ultraproduct of $\omega^M$ by a free ultrafilter over a countable set, so it is not well-founded and therefore cannot be the real $\omega$. On the other hand, we can require that the models we work with are $\omega$-models, namely models whose copy of $\omega$ is isomorphic (from the external, and real, point of view) to the real $\omega$. In that case, even if the model does not fully agree on what $\omega$ looks like, it agrees on its behavior (again, from an external point of view). If we have a transitive $\omega$-model then as sets $\omega^M$ and the real $\omega$ of $V$ are the same set.
H: Let $f$ be a holomorphic function on D = $ ( z\in C : |z| <1 ) $ such that $ | f(z)|\leq1$. Let $f$ be a holomorphic function on D = $ ( z\in C : |z| <1 ) $ such that $ | f(z)|\leq1$. Let $ g : D: \rightarrow C $ be such that $ g(z) = \frac{ f(z)} {z} $ if $z\in D $, $ z\neq 0$ and $ g(0) = \ f' (0) $ . I have to select which are the correct options. 1) g is holomorphic (Seems correct by definition) 2) $ |g(z)|\leq 1$ for all $ z\in D$. 3) $ |f'(z)|\leq 1$ for all $ z\in D$. 4) $ |f'(0)|\leq 1$. The solution set says all four are correct. Please suggest. AI: The solution is wrong: Consider $f(z) = z^n$ for (3)... The rest is an application of the maximum modulus principle to $g$ and is correct. But we do need to have $f(0) = 0$ for $g$ to be holomorphic, as was already hinted at in the comments by Leonid Kovalev. If this additional information on $f$ is not given in the exercise statement, then the first three points stated there are wrong: For (1) and (2) consider the constant function $f(z) = 1$. (4) turns out to be right just from the assumption $|f(z)|\le 1$ alone. This follows from Cauchy's formula for the derivative. For any $0<r<1$ and $|z| < 1-r$ we have $$f'(z) = \frac{1}{2\pi i}\oint_{|\zeta| = 1-r} \frac{f(\zeta)}{(\zeta - z)^2} \, d\zeta$$ In particular $$|f'(0)| \le \frac{1}{2\pi}\oint_{|\zeta| = 1-r} \frac{|f(\zeta)|}{(1-r)^2} \, |d\zeta| \le \frac{1}{1-r}$$ for all $0<r<1$. Letting $r\to 0$ gives $|f'(0)| \le 1$. Remark: In the case where $f(0) = 0$, you might want to check out the Schwarz Lemma.
H: Fourier integral How can I evaluate the following integral? $$\int_{-\infty}^\infty dt \frac{\exp(-iut)}{|at|^{1/2+ib}}$$ Here $a$ and $b$ are positive real numbers, and $ |x| $ means the absolute value function. I think this integral is related to the Mellin transform $$\int_0^\infty dx \cos(ux)x^{s-1}$$ with $ s=1/2-ib $, but I do not know how to get this Mellin transform. Thanks. AI: Let's define $\displaystyle\ F(b):= \int_{-\infty}^\infty \frac{e^{-it}}{|t|^{1/2+ib}}\,dt\ = 2\int_0^\infty \frac{\cos(t)}{t^{1/2+ib}}\,dt$ Since $\displaystyle \int_0^\infty \frac{\cos(t)}{t^z}\,dt= \sin\left(\frac{\pi z}2\right)\Gamma(1-z)\ \ $ (for $\Re(z) \in (0,1)$) (you may indeed find this integral in tables of Mellin transforms or in tables of Fourier cosine transforms: Gradshteyn and Ryzhik (17.34.6) for example) This seems to be a result of Euler, who provided the integral (Whittaker and Watson, 'A Course of Modern Analysis', page 260 example 12; see too the Hankel integral of Gamma §12.22): $\displaystyle \int_0^{\infty}\frac {\cos(ux)}{x^z}\,dx=\frac {\pi}{2 \Gamma(z)} u^{z-1}\sec\left(\frac {\pi z}2\right)\ $ This corresponds to the previous result using $\frac {\pi}{\Gamma(z)}=\Gamma(1-z)\sin(\pi z)\ $ and $\ \sin(\pi z)=2\sin\bigl(\frac{\pi}2 z\bigr)\cos\bigl(\frac{\pi}2 z\bigr)$. so that $F(b)$ becomes : $$F(b)= 2\,\sin\left(\frac {\pi}2\left(\frac 12 +ib\right)\right)\Gamma\left(\frac 12 - ib\right)$$ A change of variable $t\to ut$ and a division by $|a|^{1/2+ib}$ should give your solution.
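The reflection and double-angle identities used above, and the value of the Mellin integral at $z=1/2$ (the classical Fresnel-type value $\sqrt{\pi/2}$), can be sanity-checked numerically with only the standard library:

```python
import math

# Reflection formula pi/Gamma(z) = Gamma(1-z) sin(pi z) and the
# double-angle identity, checked for a few real z in (0,1).
for z in (0.2, 0.5, 0.77):
    lhs = math.pi / math.gamma(z)
    rhs = math.gamma(1 - z) * math.sin(math.pi * z)
    assert abs(lhs - rhs) < 1e-10
    assert abs(math.sin(math.pi * z)
               - 2 * math.sin(math.pi * z / 2) * math.cos(math.pi * z / 2)) < 1e-12

# Consistency at z = 1/2: sin(pi/4) * Gamma(1/2) = sqrt(pi/2),
# the known value of the integral of cos(t)/sqrt(t) over (0, inf).
val = math.sin(math.pi / 4) * math.gamma(0.5)
assert abs(val - math.sqrt(math.pi / 2)) < 1e-12
```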
H: Hilbert space linear operator question Let $\mathcal{H}$ be the vector space of all complex-valued, absolutely continuous functions on $[0,1]$ such that $f(0)=0$ and $f^{'}\in L^2[0,1]$. Define an inner product on $\mathcal{H}$ by $$\langle f,g\rangle=\int_0^1f^{'}(x)\overline{g^{'}(x)}dx $$ for $f,g\in\mathcal{H}$. If $0<x\leq 1$, define $L:\mathcal{H}\rightarrow \mathbb{C}$ by $L(f)=f(x)$. Show $L$ is a bounded linear functional and find $\|L\|$. I was able to show $L$ is linear. That was easy. I am having trouble showing it is bounded and I cannot determine what $\|L\|$ is. AI: Use the Cauchy-Schwarz inequality $$ |L(f)|= \left|\int\limits_{0}^xf'(t)dt+f(0)\right|= \left|\int\limits_{0}^xf'(t)dt\right|\leq \left(\int\limits_{0}^x|f'(t)|^2dt\right)^{1/2} \left(\int\limits_{0}^x|1|^2dt\right)^{1/2}= $$ $$ \sqrt{x}\left(\int\limits_{0}^x|f'(t)|^2dt\right)^{1/2}\leq \sqrt{x}\left(\int\limits_{0}^1|f'(t)|^2dt\right)^{1/2} =\sqrt{x}\Vert f \Vert $$ This shows $\|L\|\le\sqrt{x}$, and the bound is attained: take $f(t)=\min(t,x)$, so that $f'=1$ on $[0,x]$ and $f'=0$ afterwards. Then $\Vert f\Vert=\sqrt{x}$ and $L(f)=f(x)=x=\sqrt{x}\,\Vert f\Vert$, hence $\|L\|=\sqrt{x}$.
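A numerical sketch (my own illustration, not part of the original answer) of the Cauchy–Schwarz bound $|f(x)|\le\sqrt{x}\,\Vert f\Vert$, using a simple Riemann sum and the "tent" function $f(t)=\min(t,x)$ as an extremal candidate:

```python
import math

# Check |f(x)| <= sqrt(x) * ||f|| for f(t) = sin(3t), which satisfies
# f(0) = 0 and has f' in L^2; the norm is a midpoint Riemann sum.
N = 100000
x = 0.7

fprime = lambda t: 3 * math.cos(3 * t)
norm_f = math.sqrt(sum(fprime((i + 0.5) / N) ** 2 for i in range(N)) / N)
assert abs(math.sin(3 * x)) <= math.sqrt(x) * norm_f

# Tent function f(t) = min(t, x): f' = 1 on [0, x] and 0 after, so
# ||f|| = sqrt(x) while L(f) = f(x) = x, giving the ratio sqrt(x).
norm_tent = math.sqrt(x)
assert abs(x / norm_tent - math.sqrt(x)) < 1e-12
```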
H: Composite number Determine for what numbers $n$ the number $n^4 + 4$ is a composite number. Sorry about my English. I found $n^4 + 4 = (n^2 + 2n +2)(n^2 -2n + 2)$, but I don't know what to do from here. AI: You did the non-obvious part, and are now essentially finished! Determine the $n$ for which one of your terms could be $\pm 1$. Not many! (And neither can be $-1$.) For all other $n$, you will have a non-trivial factorization. It may be useful to note that $n^2-2n+2=(n-1)^2+1$, with something similar for the other one. Added: It turns out that in effect the OP wondered whether $n^4+4$ can be a prime power. Except when $n=0$, it cannot. If a prime $p \gt 2$ divides both $n^2-2n+2$ and $n^2+2n+2$, then $p$ divides $n$, but then $p$ divides $2$, contradiction. So the only remaining possibility is that $n^2+2n+2$ and $n^2-2n+2$ are both non-trivial powers of $2$. Then $n$ has to be even. It follows that each of $n^2-2n+2$ and $n^2+2n+2$ is congruent to $2$ modulo $4$, so if each is a power of $2$, each must be $2$, giving $n=0$.
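A brute-force check of the factorization and of compositeness for small $n$ (the `is_prime` helper is just naive trial division):

```python
# Check that n^4 + 4 = (n^2+2n+2)(n^2-2n+2), and that the value is
# composite for every integer n >= 2 and prime only at n = 1 (value 5).
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

assert is_prime(1**4 + 4)                      # n = 1 gives 5
for n in range(2, 120):
    assert (n*n + 2*n + 2) * (n*n - 2*n + 2) == n**4 + 4
    assert not is_prime(n**4 + 4)
```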
H: Why is this covering map doubly periodic? The universal cover of the torus $T$ is the complex plane $\mathbb{C}$. If $p: \mathbb{C} \to T$ is the covering map, why is $p$ doubly periodic? AI: Since $\mathbb C$ is simply connected, it is a universal covering space. Any two covering maps $\mathbb C\to\mathbb T$ are related by an automorphism of $\mathbb C$. Such automorphisms are linear, therefore preserve periodicity. So if one covering map is doubly periodic, all of them are. To get one such map, use the quotient map that comes from the definition of torus as the quotient of $\mathbb R^2$ by a lattice.
H: there exist analytic function with $f(\frac{i^n}{n})=-1/n^2$ Does there exist an analytic function with $f(\frac{i^n}{n})=\frac{-1}{n^2}\ \forall n\ge 2$? Well, I guess yes, because $g(z)=f(z)+z^2$ satisfies $g(i^n/n)=\frac{(-1)^n-1}{n^2}$, so it vanishes on the set $\{\pm\frac{1}{n}: n \text{ even}\}$, which has limit point zero; hence $f(z)=-z^2$. Is my answer correct? AI: If you plug in $i^3/3$, you'll see that $f(z)=-z^2$ fails to satisfy the desired property.
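A one-line numerical check (my own sketch) that $f(z)=-z^2$ matches the data at even $n$ but fails at $n=3$:

```python
# f(z) = -z^2 gives +1/9 at z = i^3/3, but -1/9 was required.
f = lambda z: -z**2

z3 = 1j**3 / 3
assert abs(f(z3) - 1/9) < 1e-12        # what -z^2 actually gives
assert abs(f(z3) - (-1/9)) > 0.2       # not the required value

z4 = 1j**4 / 4
assert abs(f(z4) - (-1/16)) < 1e-12    # even n are fine
```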
H: Can the argument of an algebraic number be an irrational number times pi? This is mainly out of curiosity. Let $\nu$ be an algebraic number. Can Arg($\nu$) be of the form $\pi \times \mu$ for an irrational number $\mu$? AI: Yes. Try $\alpha + i \beta$ where $\alpha$ and $\beta$ are rational and nonzero and $\arctan(\beta/\alpha)$ is not a rational multiple of $\pi$. The only cases where $\theta/\pi$ and $\tan(\theta)$ are both rational are the "obvious" ones where $\tan(\theta) \in \{ 0, \pm 1\}$. However, $\mu$ can't be an irrational algebraic number. This follows from the Gelfond-Schneider theorem, since $\nu/|\nu| = e^{i\mu \pi}$ is a value of $(-1)^\mu$.
H: Multiplication in the field $F = \mathbb{Z}_2[x]/f(x)$ Let $f(x) = x^6 + x + 1$ and define the field $F = \mathbb{Z}_2[x]/f(x)$ Compute the following in this field: 1. $(x^5 + x + 1)(x^3 + x^2 +1)$ I start by multiplying (in $\mathbb{Z}_2[x]$): $(x^5 + x + 1)(x^3 + x^2 +1)$ = $(x^8 + x^7 + x^5 +x^4 + x^2 + x + 1)$ Then dividing the result with $f(x)$: $(x^2 + x)$ and the remainder $(x^5 + x^4 + x^3 + x^2 + 1)$ Is this the right approach for solving this problem? Do I understand it correct that I want the result of my multiplication mod $f(x)$? Can I think of it as a simple modulus calculation: $11$ mod $7 = a$ $7*1 + 4$ mod $7 = 4$ In my case I have $(x^6 + x +1)*(x^2 + x) + \mbox{remainder}$ mod $(x^6 +x + 1) = \mbox{remainder}$ So my answer to the question would be the remainder, $(x^5 + x^4 + x^3 + x^2 + 1)$? 2. $(x + 1)^{-1}$ I read (wiki) that the inverse to $(x + 1)$ could be found by using the extended euclidean alg. for $a = (x^6 + x +1)$, $b = (x+1)$ but I don't really get it since $a$ is irreducible. Any hints in the right direction would be appreciated! AI: Your idea for (1) is correct. As for (2), it is the same idea as for the ring of integers modulo $n$. The $gcd$ of $x+1$ and $x^6+x+1$ will be $1$ since $x+1$ and $x^6+x+1$ are coprime. Using the extended Euclidean algorithm you'll get $$ A(x+1) + B(x^6+x+1) = 1 $$ where $A, B \in \Bbb{F}_2[x]/f(x)$. So as you can see, modulo $x^6+x+1$ we have that $$ A(x+1) \equiv 1 \pmod{x^6+x+1} $$ and hence $A$ is the inverse of $x+1$. NOTE: remember that in $\Bbb{Z}_2$ (which I hope you don't mean the $p$-adic integers), $-1 \equiv 1 \pmod{2}$.
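The whole computation can be checked mechanically. Here is a sketch (my own, not from the answer) representing polynomials over $\Bbb{F}_2$ as bitmasks, bit $k$ standing for $x^k$; the brute-force inverse search over the 64 field elements stands in for the extended Euclidean algorithm:

```python
# Arithmetic in F_2[x]/(x^6 + x + 1), i.e. GF(64).
F = 0b1000011            # x^6 + x + 1

def mulmod(a, b, mod=F):
    # carry-less (XOR) multiply ...
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    # ... then reduce modulo f by repeatedly cancelling the top term
    while p.bit_length() > mod.bit_length() - 1:
        p ^= mod << (p.bit_length() - mod.bit_length())
    return p

a = 0b100011             # x^5 + x + 1
b = 0b001101             # x^3 + x^2 + 1
assert mulmod(a, b) == 0b111101      # x^5 + x^4 + x^3 + x^2 + 1

# Inverse of x + 1 by brute force (only 64 elements, so this is cheap;
# the extended Euclidean algorithm finds the same element).
inv = next(g for g in range(1, 64) if mulmod(0b11, g) == 1)
assert mulmod(0b11, inv) == 1
assert inv == 0b111110               # x^5 + x^4 + x^3 + x^2 + x
```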
H: Index notation clarification Previously, I have seen matrix notation of the form $T_{ij}$ and all the indices have been in the form of subscripts, such that $T_{ij}x_j$ implies contraction over $j$. However, recently I saw something of the form $T_i^j$ which seems to work not entirely differently from what I was used to. What is the difference, and how do they decide which index to write as a superscript and which a subscript? What is the point of writing them this way? Is there a difference? (A link to a good reference explaining how these indices work would also be appreciated!) Thanks. AI: Mostly it's just a matter of the author's preference. The staggered index notation $T^i{}_j$ works great in conjunction with the Einstein summation convention, where one of the rules is that an index that is summed over must appear once as a subscript and once as a superscript. Usually the indices of an ordinary vector's components are written as superscripts, so the contraction becomes $T^i{}_j x^j$. This rule can become relevant when one is working with multiple bases, in which case subscript and superscript indices behave differently under basis change. Writing the matrix with staggered indices then serves as a reminder that you're planning to use the matrix to represent a linear transformation, rather than to represent a bilinear form, for which both indices are always on the same level. This agrees with the fact that the matrix of a linear transformation and the matrix of a bilinear form respond differently to basis changes. These considerations are most weighty in contexts where one needs to juggle a lot of basis changes -- or just to be sure that what one is writing does not depend on the particular choice of basis -- such as differential geometry. 
On the other hand, in introductory texts where this is less of an issue, there's an argument that explaining the rules for different kinds of indices will just confuse the student without really adding to his understanding (as I may have confused you in the above paragraph).
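The contraction $T^i{}_j x^j$ is exactly the kind of repeated-index sum that numpy's `einsum` implements, which makes for a convenient playground (my own illustration, with arbitrary sample arrays):

```python
import numpy as np

# The contraction T^i_j x^j is "sum over the repeated index j";
# numpy's einsum uses precisely this summation convention.
T = np.array([[1., 2.], [3., 4.]])
x = np.array([5., 6.])

y = np.einsum('ij,j->i', T, x)     # contract over j
assert np.allclose(y, T @ x)       # same as matrix-vector product

# A bilinear form g_{ij} x^i y^j contracts both slots down to a scalar:
g = np.array([[1., 0.], [0., -1.]])
s = np.einsum('ij,i,j->', g, x, x)
assert np.isclose(s, x @ g @ x)
```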
H: In which cases is the inverse of a matrix equal to its transpose? In which cases is the inverse of a matrix equal to its transpose, that is, when do we have $A^{-1} = A^{T}$? Is it when $A$ is orthogonal? AI: If $A^{-1}=A^T$, then $A^TA=I$. This means that each column has unit length and is perpendicular to every other column. That means it is an orthonormal matrix.
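A quick numerical illustration (a rotation matrix, one arbitrary example of an orthogonal matrix):

```python
import numpy as np

# A rotation matrix is orthogonal: its columns are orthonormal,
# so Q^T Q = I and hence Q^{-1} = Q^T.
t = 0.73
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(np.linalg.inv(Q), Q.T)
# each column has unit length:
assert np.allclose(np.linalg.norm(Q, axis=0), 1.0)
```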
H: Is there any orthogonal matrix P that makes a symmetric A, diagonal by $PAP^{-1}$? Given a symmetric matrix A. Is there any orthogonal matrix P that makes $PAP^{-1}$ diagonal? I've found at wikipedia this: The finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: For every symmetric real matrix A there exists a real orthogonal matrix Q such that $D = Q^{T}AQ$ is a diagonal matrix. Every symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix. I think it fits with my problem, but I am not sure since I need this $PAP^{-1}$ (1) and what I get is this: $Q^{T}AQ$ I know that since P is an orthogonal matrix, this is true: $ P^{-1} = P^{T}$ So I get this: $Q^{-1}AQ$ (2) And I am not sure if expressions (1) and (2) are identical. Thank you for your time! (emphasizing on the fact that AI: I understand now where your confusion stems from. The thing is we don't really care about whether we say that there is a matrix $P$ such that $PAP^{-1}$ is diagonal, or that there is a matrix $P$ such that $P^{-1}AP$ is diagonal, because both definitions are equivalent, replacing $P$ by $P^{-1}$. The same thing happens with the concept of orthogonal diagonalization: there is a matrix $Q$ such that $QAQ^T$ is diagonal if and only if there is a matrix $Q$ such that $Q^TAQ$ is diagonal. Both definitions are equivalent replacing $Q$ by $Q^T$, so don't worry about where you write the transpose, or the inverse. Just be consistent once you decide upon one way to write that a matrix is diagonalizable or orthogonally diagonalizable. Gerry's answer shows exactly how one would be consistent once written down the definitions.
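Numerically, `numpy.linalg.eigh` produces exactly such an orthogonal $Q$ for a random symmetric $A$ (a sketch illustrating the spectral theorem, not part of the answer):

```python
import numpy as np

# Spectral theorem check: for real symmetric A, eigh returns an
# orthogonal Q with Q^T A Q diagonal; since Q^{-1} = Q^T, this is the
# same as saying P A P^{-1} is diagonal for P = Q^T.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T                          # symmetric

w, Q = np.linalg.eigh(A)
assert np.allclose(Q.T @ Q, np.eye(4))          # Q is orthogonal
D = Q.T @ A @ Q
assert np.allclose(D, np.diag(w))               # diagonalized
assert np.allclose(Q @ D @ Q.T, A)              # and back again
```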
H: Distribute pennies for children We distribute n pennies to k boys and l girls, so that (to be really unfair) we require that each girl gets at least one penny. In how many ways can we do this? AI: I'm assuming each boy and girl is labeled (that is, giving 3 pennies to boy 1 and 5 to boy 2 is counted as distinct from giving 5 to boy 1 and 3 to boy 2). Since each girl gets at least one penny, removing $l$ pennies (handing one to each girl up front) and removing the special condition for girls does not change the number of possibilities: we have the "sex change" equality $$f(n,k,l)=f(n-l,k+l,0)$$ Finally it is well known that $f(n,k,0)=\binom{n+k-1}{k-1}$: a $k-1$-element subset $x_1<\dots<x_{k-1}$ of $[1,n+k-1]$ corresponds bijectively to the assignment of $n$ pennies to $k$ boys $y_i=x_i-x_{i-1}-1$ (with the boundary values $x_0=0$ and $x_k=n+k$). So: $$f(n,k,l)=\binom{n+k-1}{k+l-1}$$
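The formula is easy to confirm by exhaustive enumeration for small parameters (my own sketch):

```python
from itertools import product
from math import comb

# Brute-force count of distributions of n pennies to k boys and l girls
# (each girl gets >= 1), compared against C(n+k-1, k+l-1).
def count(n, k, l):
    total = 0
    for split in product(range(n + 1), repeat=k + l):
        if sum(split) == n and all(g >= 1 for g in split[k:]):
            total += 1
    return total

for n, k, l in [(5, 2, 2), (6, 3, 1), (4, 0, 3)]:
    assert count(n, k, l) == comb(n + k - 1, k + l - 1)
```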
H: Maximally entropy preserving irreversible functions. (CS related) The topic/problem is related to hashing for data structures used in programming, but I seek formal treatment. I hope that by studying the problem I will be enlightened about the fundamental limitations of hashing. So, the purpose is educational. Let's consider a family of functions $f_i$ which will be applied on a random variable $X$, with unknown weight function $w$, drawn from some finite set $P$. $f_i: P \rightarrow Q$ $\Vert P\Vert=n>\Vert Q\Vert =m$ (therefore $f$ is irreversible) The entropy for $X$ is: $H(X) = q$ What is the function $f_i$ that maximizes $\min_w \{H(f_i(X))\}\qquad(\max)$. Clarification: The objective function we are trying to maximize is the minimum entropy for $f_i(X)$, taken over the various possible weight functions $w$ for $X$. To paraphrase, what is the transformation that promises to preserve the most entropy in its output, regardless of the input's distribution (as long as the entropy of the input is some fixed $q$)? The question can also be raised with the ratio $\min_w \left\{\dfrac{H(f_i(X))}{H(X)}\right\}\qquad(\max)$, or the difference $\min_w \{H(f_i(X)) - H(X)\}\qquad(\max)$, and then $q$ can be permitted to vary. I suspect there will be subtleties that I am not trained well enough to see (such as $q=0$). The asymptotic behavior is interesting - when $P$ is much larger than $Q$ and $Q$ is sufficiently large. That is, $\frac n m\to+\infty$ and $m\to+\infty$. Particularly, finding a sequence of functions that tends to preserve the entropy completely as $m$ and $n$ grow indefinitely, or at least preserves it up to $\max(q, \log_2 m)$, would have great significance. If there are no such functions, is there a proof of that being the case? I will be happy to have such a result for the continuous case, if it simplifies things. Also, I am interested in finding literature that performs a comparative study of the effect of certain well-known functions (modulo, division, etc.) 
on the entropy of their input, based on the general properties of the input's distribution (, not bound to particular distributions.) Post-graduate CS level. Not the most math savvy, but I have passed introductory course to probability and statistics, and to information theory. Thanks and Regards, AI: You won't find any very good solution that maximizes the minimal output entropy over all possible probability distributions for $X$. No matter which $f$ you decide on, the adversary could then choose a probability distribution that concentrates all of the entropy of $X$ in values that map to the same $Q$ value -- for a resulting entropy of zero. This makes your ratio of entropies rather uninteresting. It works slightly better to ask about the amount of lost entropy. In the worst case where the output entropy is 0, the input entropy cannot be greater than the log of the largest number of $p$ values that go into the same $q$ bucket. So in order to minimize the lost entropy you should choose an $f$ that maps either $\lfloor n/m\rfloor$ or $\lceil n/m\rceil$ different $p$s to each $q$. Any deviation from this will give the adversary a chance to make you lose more than $\log\frac nm$ entropy. However, that's the most specific advice you're going to get out of the problem as formulated, since you have abstracted away all structure of $P$, $Q$, and $X$ except for their cardinality. What one does in practice is to make further (often implicit) assumptions about $X$ and its probability distribution. In the pessimistic extreme, if only $X$ is generated by some stochastic process that does not involve inordinate amounts of computation, then one should hope that a good cryptographic hash function would mix up things enough to extract all of the entropy in it that $Q$ has room for. (Because if it didn't, then because of that the hash function wouldn't be "good" after all).
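The two phenomena described in the answer (an adversary collapsing the output entropy to zero, and the $\log\frac nm$ bound on lost entropy for a balanced $f$) can be illustrated numerically. This is my own sketch, with $n=16$, $m=4$ and the bucket map $i \mapsto i \bmod m$ chosen arbitrarily:

```python
from math import log2
import random

def H(p):
    # Shannon entropy in bits of a probability vector
    return -sum(q * log2(q) for q in p if q > 0)

n, m = 16, 4
f = lambda i: i % m                  # balanced: each bucket gets n/m = 4 values

def pushforward(p):
    out = [0.0] * m
    for i, q in enumerate(p):
        out[f(i)] += q
    return out

# Adversarial distribution: all mass inside a single bucket's preimage.
bad = [0.25 if i % m == 0 else 0.0 for i in range(n)]
assert abs(H(bad) - 2.0) < 1e-12     # input entropy is 2 bits ...
assert H(pushforward(bad)) < 1e-12   # ... output entropy is 0

# But the loss never exceeds log2(n/m) = 2 bits for this balanced f,
# since H(X) - H(f(X)) = H(X | f(X)) <= log2(max bucket size).
random.seed(1)
for _ in range(100):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    p = [v / s for v in w]
    assert H(p) - H(pushforward(p)) <= log2(n / m) + 1e-9
```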
H: Given a symmetric matrix $A$, are there any matrices $B$, $C$ that $BAC = I$? Given a $4 \times 4$ symmetric matrix $A$, are there any matrices $B,C$ that: $BAC = I_{4}$ ? I've thought of $B$ being a orthogonal matrix $P$ ($B=P$) and $ C = P^{T}$ so we get $PAP^{T} = \begin{bmatrix}\lambda_{1}&0&0&0\\0&\lambda_{2}&0&0\\0&0&\lambda_{3}&0\\0&0&0&\lambda_{4}\end{bmatrix} $ Where $\lambda_{i}$ with $i \in {1,2,3,4}$ are the eigenvalues of matrix $A$. And then demanded that $\{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\} = \{1,1,1,1\}$ But I am not sure that this is correct. Thank you for your help! AI: Expanding Qiaochu Yuan's comment, suppose that A is invertible. Then we can choose $B = I, C = A^{-1}$ so that $BAC = IAA^{-1} = I$. On the other hand, if $C$ is not invertible, then its null space is non-trivial (this follows because its a square matrix), so there exists $x \neq 0$ such that $Cx = 0$. Hence for any matrices $A,B$ $$BACx = BA0 = 0$$ If also $BAC = I$, then there would exist $x \neq 0$ such that $Ix = x = 0$ which is a contradiction. Therefore, if the identity is to hold, then $C$ must be surjective. Now if $A$ is not invertible then again its null space is non-trivial so there exists $x \neq 0$ such that $Ax = 0$ and since $C$ is surjective there exists $y \neq 0$ such that $Cy = x$. Hence $$BACy = BAx = B0 = 0$$ which shows that we cannot have $BAC = I$. Alternatively, if you have seen determinants, then $$\det(B)\det(A)\det(C) = \det(BAC) = \det(I) = 1$$ implies $A,B$ and $C$ cannot have determinant equal to $0$ and hence cannot be singular.
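Both directions of the argument can be illustrated numerically (a sketch with random matrices, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)

# If A is invertible, B = I and C = A^{-1} work:
M = rng.standard_normal((4, 4))
A = M + M.T + 5 * np.eye(4)         # symmetric, comfortably invertible
B, C = np.eye(4), np.linalg.inv(A)
assert np.allclose(B @ A @ C, np.eye(4))

# If A is singular, no B, C can work: det(B) det(A) det(C) = det(I) = 1
# is impossible when det(A) = 0.
v = rng.standard_normal(4)
A_sing = np.outer(v, v)             # symmetric, rank 1, hence singular
assert abs(np.linalg.det(A_sing)) < 1e-10
```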
H: What does face-width mean? What is the meaning of the term face-width? I have seen the term used as a property of an embedding of a graph on a surface. I haven't found a definition. AI: See here, here, and here for the definitions of face-width.
H: Impossibility of certain methods of proof? There are many methods available for proving a given statement: direct proof, proof by induction, proof by contrapositive, proof by contradiction, etc. In some cases there is an obvious method that should be employed, such as using induction to prove that $\displaystyle\sum_{i=0}^n i=\dfrac{n(n+1)}{2}$. However if we are interested not in the most straightforward proof of a statement but rather all possible proofs, is it possible to eliminate proofs via particular methods, or to put a bound on the number of possible proofs for a given statement? It is easy to prove the value of the above sum using mathematical induction, but can it be proved in any other way, say via contradiction? Starting off with "Suppose that the value of the sum does not equal..." does not seem very promising. Of course I am not asking specifically about this example, and alternate proofs of this equality are not relevant here. So my questions are: Is it ever the case that certain statements cannot be verified using particular lines of proof (contrapositive, contradiction, etc.)? Of course, induction is not suitable for many proofs, but this seems to be somewhat of a special case, being built on a theorem and not basic logic, such as with contradiction, etc. How does one go about proving that a particular method of proof cannot be used to prove a particular statement? Often one will begin employing a particular method and find it not to work or make sense, but how can one (or can one) actually prove that it cannot be used? As asked in the first paragraph, is it possible to bound the number of possible proofs or methods of proof for a given statement? If either of my tags are inappropriate, feel free to change them. I am not not well-versed in either area and that is partly why I'm interested in this question. AI: In formal logic there exists no bound on the number of proofs for any given statement, in principle. 
A (formal) proof gets defined basically as a sequence of logical formulas (wffs) such that each formula comes as permissible to the proof system involved. There exists no part of the definition of a proof such that it prohibits repeating formulas ad nauseam, or writing proofs with loops in, say, a natural deduction system where you introduce and discharge the same hypothesis over and over again in the same way, nor prohibiting the use of proof by contradiction in proving that (p->p). So, there does exist an infinity of proofs... even in classical propositional logic. No proof method (by which I here mean a rule of inference) can get eliminated either. Proofs can use rules of inference to infer formulas which we don't end up using to infer the conclusion. Those formulas not used to infer the conclusion still exist within the sequence of the proof, and thus still qualify as part of the proof. This entails that every single rule of inference for the system can ultimately get used in a proof of some conclusion if desired. To perhaps make this clearer, say we want to prove that (p->p) in a natural deduction framework for classical propositional logic, and proceed as follows:
1 | p          assumption
2 | (p v p)    1 disjunction introduction
3 (p->p)       1-1 conditional introduction
Since a formal proof consists of a sequence, and the sequence [p, (p v p), (p->p)] (proof 1) differs from [p, (p->p)] (proof 2), we have two different proofs there. If we could rule out disjunction introduction as a permissible rule for proving that (p->p), then we would have to rule out proof 1 above. But it does satisfy the definition of a proof, so we can't rule out disjunction introduction for proving that (p->p). Since the use of disjunction introduction here ultimately could work just like any other rule of inference of the proof system involved, all rules of inference not only can, but do appear in at least one proof of every theorem of classical propositional logic. 
Actually, if you combine that with the idea in the first paragraph, there exist an infinity of proofs of every theorem of classical propositional logic using every rule of inference of the proof system... so long as the number of rules of inference is finite. So to this "However if we are interested not in the most straightforward proof of a statement but rather all possible proofs, is it possible to eliminate proofs via particular methods, or to put a bound on the number of possible proofs for a given statement?" I answer "no, and no." "It is easy to prove the value of the above sum using mathematical induction, but can it be proved in any other way, say via contradiction?" yes. "Is it ever the case that certain statements cannot be verified using particular lines of proof (contrapositive, contradiction, etc.)?" no. "How does one go about proving that a particular method of proof cannot be used to prove a particular statement?" You can't do this under usual definitions of a formal proof at least, for a system with valid rules of inference. All methods can always get used somewhere in a proof of any given theorem. If they couldn't get used somewhere in some proof, such rules of inference wouldn't come as valid in the first place. If the rules of inference used aren't valid in the first place, then such a proof system isn't sound. Thus, you could start with a classically true premise and infer classically false conclusions from such a premise. "Often one will begin employing a particular method and find it not to work or make sense, but how can one (or can one) actually prove that it cannot be used?" You can't do it.
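The "padding" idea above can be made concrete (a toy sketch of my own, treating proofs as plain sequences of formula strings):

```python
# Pad the derivation of (p -> p) with k redundant disjunction-
# introduction steps.  Every padded sequence is still a legal proof,
# and no two are equal as sequences, so there is no bound on the
# number of proofs of this one theorem.
def proof_of_p_implies_p(k):
    steps = ['p']                                 # assumption
    for _ in range(k):
        steps.append('(' + steps[-1] + ' v p)')   # disjunction intro
    steps.append('(p -> p)')                      # conditional intro
    return steps

proofs = [proof_of_p_implies_p(k) for k in range(10)]
assert all(pf[-1] == '(p -> p)' for pf in proofs)        # same theorem
assert len({tuple(pf) for pf in proofs}) == len(proofs)  # all distinct
```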
H: Why is the range of arctan $[ -\frac{\pi}{2} , \frac{\pi}{2}]$? I've been taught in school and it says on Wikipedia that the range of arctan is $[ -\frac{\pi}{2} , \frac{\pi}{2} ]$. Why isn't it $[0,\pi]$ ? AI: As the graph of the function $\,\tan x\,$ shows, it is very far from being a $\,1-1\,$ function onto the reals, so just as is done with $\,\sin x\,,\,\cos x\,$ we restrict its domain in order to get a $\,1-1\,$, onto, and thus invertible, function. As the period of $\,\tan x\,$ is $\,\pi\,$ , we can take any interval of length $\,\pi\,$ to do this...but...if the interval contains a point of the form $\,\displaystyle{x=\frac{(2n+1)\pi}{2}}\,,\,n\in\mathbb{Z}$, we're going to have an ugly vertical asymptote there, so we choose an open interval of the form $$\left(\frac{(2n-1)\pi}{2}\,,\,\frac{(2n+1)\pi}{2}\right)\,\,,\,n\in\mathbb{Z}$$ with $\,n=0\,$ being customary. Note that the resulting range of $\arctan$ is then the open interval $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$: the endpoints are approached but never attained.
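Numerically, the standard library's `atan` behaves exactly this way (a small sanity check of my own):

```python
import math

# arctan maps all of R strictly inside (-pi/2, pi/2): the bounds are
# approached but never attained.
for x in (0.0, 1.0, 1e3, 1e6, -1e6):
    assert -math.pi / 2 < math.atan(x) < math.pi / 2

# On the chosen branch, atan inverts tan only for angles already in
# (-pi/2, pi/2); other angles come back shifted by a multiple of pi.
assert abs(math.atan(math.tan(0.5)) - 0.5) < 1e-12
assert abs(math.atan(math.tan(2.0)) - (2.0 - math.pi)) < 1e-9
```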
H: Proving a derivative equality How can I prove the following equality? $$ \frac{1} {{n!}}\frac{{d^n }} {{dx^n }}\left( {\left( {x^2 - 1} \right)^n } \right) = \sum\limits_{k = 0}^n {\left( {\frac{{n!}} {{k!\left( {n - k} \right)!}}} \right)} ^2 \left( {x + 1} \right)^{n - k} \left( {x - 1} \right)^k $$ And without the use of induction. Only with knowledge of derivatives and sums. EDITED AI: $(x^2-1)^n = (x-1)^n (x+1)^n = u^n v^n$ where $u=x-1$ and $v=x+1$. Now since $\dfrac{d}{dx} f(u,v) = \dfrac{du}{dx} \dfrac{\partial f}{\partial u} + \dfrac{dv}{dx} \dfrac{\partial f}{\partial v} = \dfrac{\partial f}{\partial u} + \dfrac{\partial f}{\partial v}$, which can be written as $\dfrac{d}{dx} = \dfrac{\partial}{\partial u} + \dfrac{\partial}{\partial v}$, we have, by the Binomial Theorem, $$ \dfrac{d^n}{dx^n} u^n v^n= \left(\dfrac{\partial}{\partial u} + \dfrac{\partial}{\partial v}\right)^n u^n v^n= \sum_{k=0}^n {n \choose k} \dfrac{\partial^k}{\partial u^k} \dfrac{\partial^{n-k}}{\partial v^{n-k}} u^n v^n$$ Now note that $\dfrac{\partial^k}{\partial u^k} u^n = \dfrac{n!}{(n-k)!} u^{n-k}$ and similarly $\dfrac{\partial^{n-k}}{\partial v^{n-k}} v^n = \dfrac{n!}{k!} v^{k}$ Well, that might reasonably be done by induction, or you could use a Taylor series and the binomial theorem: if $g(t) = (t+u)^n$, then $$g(t) = \sum_{k=0}^\infty \dfrac{t^k}{k!} g^{(k)}(0) = \sum_{k=0}^\infty \dfrac{t^k}{k!} \dfrac{\partial^k u^n}{\partial u^k}$$ but also $$ g(t) = (t+u)^n = \sum_{k=0}^n {n \choose k} t^k u^{n-k} $$ and comparing the terms in $t^k$ shows that for $0 \le k \le n$, $$\dfrac{\partial^k}{\partial u^k} u^n = k! {n \choose k} u^{n-k} = \dfrac{n!}{(n-k)!} u^{n-k}$$
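The identity can be checked mechanically for small $n$ with plain-Python polynomial arithmetic (a sketch of my own, not a proof):

```python
from math import comb, factorial

# Polynomials as coefficient lists, lowest degree first.
def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def pdiff(a):
    return [i * a[i] for i in range(1, len(a))]

def peval(a, x):
    return sum(c * x**i for i, c in enumerate(a))

for n in (1, 2, 3, 5):
    p = [1]
    for _ in range(n):
        p = pmul(p, [-1, 0, 1])          # build (x^2 - 1)^n
    for _ in range(n):
        p = pdiff(p)                     # take the n-th derivative
    for x in (0.0, 0.5, -1.3):
        lhs = peval(p, x) / factorial(n)
        rhs = sum(comb(n, k)**2 * (x + 1)**(n - k) * (x - 1)**k
                  for k in range(n + 1))
        assert abs(lhs - rhs) < 1e-9
```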
H: Uniform Convergence on a Closed and Bounded Interval Let $f_n\colon [a,b] \to \mathbb{R}$ be a sequence of continuous functions converging uniformly to a function $f$. Show that if each $f_n$ has a zero then $f$ also has a zero. Thanks for any help. AI: Since each $f_n$ has a zero, there is a number $x_n \in [a,b]$ such that $f_n(x_n) = 0$. The set $[a,b]$ is compact, so $x_n$ has a convergent subsequence, call it $x_{n_k}$, and let $x_{n_k} \to x$. Since $f_n$ converges to $f$ uniformly, and $f_n$ are continuous, then $f$ is continuous. Now consider $|f_{n_k}(x_{n_k}) - f (x_{n_k})| = | f (x_{n_k})|$. By uniform convergence we have $|f(x_{n_k})| \to 0$, and by continuity of $f$, we have $f(x) = 0$. Alternative proof: Another version would be to proceed by contradiction. Suppose $f(x) \neq 0 $ $\forall x \in [a,b]$. Then since $[a,b]$ is compact (and $|f|$ is continuous), there exists $\delta>0$ such that $|f(x)| \geq \delta$, $\forall x \in [a,b]$. By assumption of uniform convergence, we can choose $N$ so that $|f_n(x) - f(x)| < \frac{\delta}{2}$, $\forall n \geq N$, $\forall x$. Since $f_N$ has a zero, we have $f_N(z) = 0$ for some $z \in [a,b]$, then the previous inequality gives $|f(z)| < \frac{\delta}{2}$, which is a contradiction.
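A concrete illustration of the statement (my own toy example, not part of the proof): $f_n(x)=x-1/n$ on $[0,1]$ converges uniformly to $f(x)=x$, each $f_n$ has a zero, and the zeros accumulate at a zero of the limit:

```python
# f_n(x) = x - 1/n has the zero x_n = 1/n and converges uniformly to
# f(x) = x; the zeros accumulate at x = 0, which is a zero of f.
f = lambda x: x

for n in (1, 10, 1000):
    fn = lambda x, n=n: x - 1.0 / n
    sup = max(abs(fn(i / 1000) - f(i / 1000)) for i in range(1001))
    assert abs(sup - 1.0 / n) < 1e-12    # sup-distance is exactly 1/n
    assert fn(1.0 / n) == 0.0            # the zero of f_n

assert f(0.0) == 0.0                     # the limit of the zeros is a zero of f
```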
H: Norm and invertibility of operator $\left(-\Delta+\lambda I\right)$ with $\lambda>0$. Let $\lambda>0$ and $n\geq 1$. Prove that the operator $$-\Delta+\lambda I:H^2(\mathbb{R}^n)\to L^2(\mathbb{R}^n)$$ is invertible and find the norm $$\left|\left|\left(-\Delta+\lambda I\right)^{-1}\right|\right|_{L^2(\mathbb{R}^n)\to L^2(\mathbb{R}^n)}.$$ Furthermore, show that $$\left(-\Delta+\lambda I\right)^{-1}:H^s(\mathbb{R}^n)\to H^{s+2}(\mathbb{R}^n)\qquad s\in\mathbb{R}$$ is bounded and find the norm. I don't know how to find the norm of the inverse when $\lambda$ is not $0$, and I'm not sure if invertibility can be proved in the same way. AI: Consider $g(x) = \|x\|^2 + \lambda$. Then $$(-\Delta + \lambda I)u = F^{-1}M_gFu$$ where $F$ is the Fourier transform and $M_g$ is the operator defined by $M_gu = gu$. Since $g(x) \geq \lambda$ for all $x \in \mathbb{R}^n$, $1/g$ exists and is bounded by $1/\lambda$. Define $$Au = F^{-1}M_{1/g}Fu$$ Then since $M_g^{-1} = M_{1/g}$, $$(-\Delta + \lambda)A = F^{-1}M_gFF^{-1}M_{1/g}F = I$$ and similarly $$A(-\Delta + \lambda) = I$$ so that $A$ is the inverse of $-\Delta + \lambda$. Now we have that $$\|Au\|_2 = \|F^{-1}M_{1/g}Fu\|_2 = \|M_{1/g}Fu\|_2 \le \|1/g\|_\infty\|Fu\|_2 = \frac{1}{\lambda}\|u\|_2$$ so that $\|(-\Delta + \lambda)^{-1}\| \le 1/\lambda$. To see that the above inequality is optimal, choose $u_n$ such that $Fu_n$ is the characteristic function of the ball of radius $1/n$ around $0$ (where $1/g$ is close to $1/\lambda$) and consider $\|Au_n\|_2/\|u_n\|_2$ as $n \to \infty$. To deal with the case $(-\Delta + \lambda)^{-1}: H^s \to H^{s+2}$ recall that $$\|u\|_{H^s} = \|F^{-1}(1 + \|x\|^2)^{s/2}Fu\|_2$$ and proceed similarly.
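The Fourier-multiplier picture can be imitated on a finite periodic grid with the FFT (a discrete sketch of my own, not the continuum operator): $-\Delta+\lambda$ becomes multiplication by $k^2+\lambda$ in Fourier space, and Plancherel gives the $1/\lambda$ bound.

```python
import numpy as np

# Discrete analogue on a 2*pi-periodic grid: the inverse of
# -Delta + lambda divides by k^2 + lambda in Fourier space, so
# Plancherel gives ||u||_2 <= ||f||_2 / lambda.
N, lam = 256, 2.0
k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers

def solve(f):
    return np.real(np.fft.ifft(np.fft.fft(f) / (k**2 + lam)))

rng = np.random.default_rng(0)
f = rng.standard_normal(N)
u = solve(f)
assert np.linalg.norm(u) <= np.linalg.norm(f) / lam + 1e-9

# The bound 1/lambda is attained on the zero mode (constant functions):
c = np.ones(N)
assert np.allclose(solve(c), c / lam)
```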
H: Determine if a Turing Machine M, on input w, will move its head to the left, at least once Here is a problem from my formal languages class Consider the following problem: Determine if a Turing Machine M, on input w, will move its head to the left, at least once. Is this problem decidable? Can Rice's theorem be applied in this case? and here is my attempt to answer (1): Define M' as a turing machine that takes a pair (M,w) as input, where M is a turing machine encoded in some form recognized by M' and w is the input to M. M' stops and accepts (M,w) whenever the head of the simulated machine M moves to the left while processing input w For a particular input to M' (M,w), construct the turing machine P as follows: P executes M' on (M,w) P stops and accepts any input if M' accepts (M,w) We have reduced the Universal Turing Machine U to P. Since we know that L(U) is not decidable, we conclude that L(P) is not decidable. Consequently, M' is not decidable As to the applicability of Rice's theorem, I'm not sure... "Any nontrivial property about the language recognized by a Turing machine is undecidable" moving the head of the machine is not a property of the language itself, it is a property of the machine... thoughts, please? Thanks! AI: Your idea that Rice's theorem is inapplicable is correct. For the theorem to hold, we want that whenever $L(M_1)=L(M_2)$, $M_1$ satisfies the property iff $M_2$ does. This is evidently not true of the property presently under consideration. Think of two machines identifying a regular language (i.e. recognizable in a single pass), but one machine backs up all the way to the left, while the other does not. But your reduction does not make sense. At a very high level, the point of a reduction is something like: "I know problem $P_1$ is hard. If I could solve problem $P_2$, voila! Here is a method to solve problem $P_1$. But this should not be possible. Therefore, $P_2$ has to be hard." 
Typically, $P_1$ is the halting problem, and you want $P_2$ to be the "left-mover" problem. So, given an instance of the halting problem, you manufacture an instance of the left-mover problem. If I read your reduction correctly, you're doing it in the opposite direction. Now, something else is wrong with your solution. The problem is decidable! Here is the algorithm sketch: observe that if $M$ never moves left on $w$, it is making a single pass on the string. Eventually, $M$ finishes reading $w$ (which is of finite length), and starts reading spaces. Now, you simply look at which state $q$ $M$ is in when it finishes reading $w$, and look at the subgraph of states it visits on reading spaces. None of these should ever involve moving to the left.
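The algorithm sketch in the answer can be turned into code (my own sketch; the transition-table format, state names, and the assumption that moves are only L/R are mine): if no left move occurs within $|w|+|Q|+1$ steps, then by the pigeonhole argument none ever will.

```python
# If M (moves only L/R) never moves left on w, every step moves right,
# so after |w| steps the head is on blanks and the next state depends
# only on the current state.  Within |Q| more steps a state repeats,
# so simulating |w| + |Q| + 1 steps decides the question.
def moves_left(delta, states, start, w, blank='_'):
    tape = dict(enumerate(w))
    pos, state = 0, start
    for _ in range(len(w) + len(states) + 1):
        key = (state, tape.get(pos, blank))
        if key not in delta:             # halts without moving left
            return False
        state, sym, move = delta[key]
        tape[pos] = sym
        if move == 'L':
            return True
        pos += 1
    return False                         # stuck cycling rightwards

# Machine A scans right forever; machine B turns around at the blank.
A = {('q', s): ('q', s, 'R') for s in 'ab_'}
B = {('q', 'a'): ('q', 'a', 'R'), ('q', 'b'): ('q', 'b', 'R'),
     ('q', '_'): ('q', '_', 'L')}

assert moves_left(A, {'q'}, 'q', 'ab') is False
assert moves_left(B, {'q'}, 'q', 'ab') is True
```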
H: How to verify this function is continuous? Recently I have been reading the paper "Spaces with a regular Gδ-diagonal" by A.V. Arhangel'skii. I can't see why the function $d$ in Example 9 is continuous. Could someone help me? Thanks ahead:) AI: The space is $X=X_0\cup X_1\cup U$, where $X_0=\Bbb R\times\{0\}$, $X_1=\Bbb R\times\{-1\}$, and $U=\Bbb R\times(0,\to)$. For $x=\langle a,0\rangle\in X_0$ let $x'=\langle a,-1\rangle\in X_1$. For $n\in\Bbb Z^+$ and $x=\langle a,0\rangle\in X_0$ let $$V_n(x)=\{x\}\cup\left\{\langle s,t\rangle\in U:t=s-a\text{ and }0<t<\frac1n\right\}$$ and $$V_n(x')=\{x'\}\cup\left\{\langle s,t\rangle\in U:t=a-s\text{ and }0<t<\frac1n\right\}\;.$$ The topology $\tau$ on $X$ is obtained by isolating each point of $U$ and taking the families $\{V_n(x):n\in\Bbb Z^+\}$ and $\{V_n(x'):n\in\Bbb Z^+\}$ as local bases at $x\in X_0$ and $x'\in X_1$, respectively. Added: In other words, $X$ is the closed upper half-plane with the $x$-axis doubled. Points in the open upper half-plane are isolated; basic open nbhds of points in one copy, $X_0$, of the $x$-axis are spikes diagonally up and to the right; and basic open nbhds of points in the other copy, $X_1$, of the $x$-axis are spikes diagonally up and to the left. Now define $d:X\times X\to\Bbb R$ as follows. (1) If $x\in X_0$ and $y=\langle s,t\rangle\in V_1(x)\setminus\{x\}$, let $d(x,y)=d(y,x)=t$. (2) If $x'\in X_1$ and $y=\langle s,t\rangle\in V_1(x')\setminus\{x'\}$, let $d(x',y)=d(y,x')=t$. (3) For distinct $y=\langle s,t\rangle,z=\langle u,v\rangle\in V_1(x)\setminus\{x\}$, let $d(y,z)=\max\{t,v\}$. (4) For distinct $y=\langle s,t\rangle,z=\langle u,v\rangle\in V_1(x')\setminus\{x'\}$, let $d(y,z)=\max\{t,v\}$. (5) For all other distinct $x,y\in X$ let $d(x,y)=1$, and of course $d(x,x)=0$ for all $x\in X$. The claim in the paper is that $d$ is a continuous symmetric that generates $\tau$. That it’s a symmetric is obvious, and it’s not hard to check that it generates $\tau$. 
$X$ is first countable, so it suffices to show that if $\langle x_n:n\in\Bbb Z^+\rangle\to x$ and $\langle y_n:n\in\Bbb Z^+\rangle\to y$, then $\langle d(x_n,y_n):n\in\Bbb Z^+\rangle\to d(x,y)$. This is easily done by cases. If $x$ and $y$ are isolated points, we may assume without loss of generality that $x_n=x$ and $y_n=y$ for all $n\in\Bbb Z^+$, so that $d(x_n,y_n)=d(x,y)$ for all $n$, and the result is trivial. Suppose that $x\in X_0$ and $y=\langle s,t\rangle\in V_1(x)\setminus\{x\}$. Then $y$ is an isolated point in $X$, so we may assume that $y_n=y$ for all $n\in\Bbb N$. Either $\langle x_n:n\in\Bbb N\rangle$ is eventually constant at $x$, in which case the result is trivial, or we may assume (by passing to a subsequence if necessary) that $x_n=\langle s_n,t_n\rangle\in V_n(x)\setminus\{y\}$ for $n\in\Bbb Z^+$. But then $$d(x_n,y_n)=d(x_n,y)=\max\{t_n,t\}=t=d(x,y)$$ for all sufficiently large $n$, since $\langle t_n:n\in\Bbb Z^+\rangle\to 0$. The case of $x'\in X_1$ and $y\in V_1(x')\setminus\{x'\}$ is exactly similar. Suppose that $x$ and $y$ are distinct points of $X_0$. If both sequences are eventually constant, the result is trivial, so assume that the sequence $\langle x_n:n\in\Bbb Z^+\rangle$ is non-trivial. As before, we may assume that $x_n=\langle s_n,t_n\rangle\in V_n(x)$. We may further assume that $y_n\in V_1(y)$ for all $n\in\Bbb Z^+$. Then for all $n\in\Bbb Z^+$ we have $y_n\notin V_1(x)$, so $d(x_n,y_n)=1=d(x,y)$ by clause (5) of the definition of $d$. The case of distinct points of $X_1$ is exactly similar. Suppose that $x\in X_0$ and $y=x'\in X_1$. Then for each $u\in V_1(x)$ and $v\in V_1(x')$ we have $d(u,v)=1$, since $u$ and $v$ don’t fall under any of the first four clauses in the definition of $d$, and the result is trivial. Finally, suppose that $x\in X_0$, $y\in X_1$, and $y\ne x'$. The sets $V_1(x)$ and $V_1(y)$ intersect in at most one point, and there is an $n_0\in\Bbb Z^+$ such that $V_{n_0}(x)\cap V_{n_0}(y)=\varnothing$.
Thus, we may without loss of generality assume that $x_n\in V_{n_0}(x)$ and $y_n\in V_{n_0}(y)$ for all $n\in\Bbb Z^+$. But then, just as in Case 3, $d(x_n,y_n)=1=d(x,y)$ for all $n\in\Bbb Z^+$, and we’re done.
H: Set theory puzzles - chess players and mathematicians I'm looking at "Basic Set Theory" by A. Shen. The very first 2 problems are: 1) can the oldest mathematician among chess players and the oldest chess player among mathematicians be 2 different people? and 2) can the best mathematician among chess players and the best chess player among mathematicians be 2 different people? I think the answers are no, and yes, because a person can only have one age, but they can have separate aptitudes for chess playing and for math. Is this correct? AI: Yes, it’s correct. If $M$ is the set of mathematicians, and $C$ is the set of chess players, you’re looking at rankings of the members of $M\cap C$. If for $x\in M\cap C$ we let $m(x)$ be $x$’s ranking among mathematicians, $c(x)$ be $x$’s ranking among chess players, and $a(x)$ be $x$’s age, then there is a unique $x_a\in M\cap C$ such that $$a(x_a)=\max\{a(x):x\in M\cap C\}\;,$$ but there can certainly be distinct $x_m,x_c\in M\cap C$ such that $$m(x_m)=\max\{m(x):x\in M\cap C\}$$ and $$c(x_c)=\max\{c(x):x\in M\cap C\}\;.$$ All of which just says what you said, but a bit more formally.
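A toy illustration of the asymmetry (all names and scores below are invented): each person has exactly one age but two separate skill scores, so the two "best" questions can pick different people while the "oldest" question cannot.

```python
# Hypothetical data: (name, age, math_score, chess_score) for the people
# in M ∩ C. Ages are distinct, so "oldest" has a unique answer.
people = [
    ("Ada",   71, 95, 60),
    ("Boris", 54, 70, 98),
    ("Clara", 63, 80, 80),
]

oldest             = max(people, key=lambda p: p[1])
best_mathematician = max(people, key=lambda p: p[2])
best_chess_player  = max(people, key=lambda p: p[3])
```

Here the best mathematician (Ada) and the best chess player (Boris) are different people, while there is only one oldest person.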
H: Finding Polynomial Limit I am asked to find the limit: $$\lim_{x\rightarrow -\infty}(x^4+x^5)$$ The first thing I am tempted to do is divide the numerator and denominator by the highest power of $x$, in this case $x^5$: $$\lim_{x\rightarrow -\infty}\frac{\dfrac {x^4+x^5}{x^5}}{\dfrac1{x^5}}$$ Continuing with this I apply the limit laws which state that $\lim_{x\to\pm\infty}\frac1{x^n}=0$, and I end up with a denominator equal to zero. AI: Just factor it: $$\lim_{x\to-\infty}(x^4+x^5)=\lim_{x\to-\infty}x^4(1+x)\;.$$ As $x\to-\infty$, what’s happening to $x^4$ and to $1+x$? What’s the combined effect?
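A quick numerical look at the factored form (my addition, not part of the answer) supports the conclusion: $x^4$ blows up positively while $1+x$ is large and negative.

```python
# x**4 * (1 + x) equals x**4 + x**5; for x large and negative the first
# factor is huge and positive, the second large and negative, so the
# product runs off to -infinity.
values = [x**4 * (1 + x) for x in (-10.0, -100.0, -1000.0)]
print(values)    # heads to -infinity
```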
H: Discontinuous function sending compacts to compacts I know that the condition that $f(X)$ is compact if $X$ is compact should not be sufficient to say that $f$ is continuous, but I can't come up with an example of such discontinuous $f$. What is it? Thanks AI: Let $f:\Bbb R\to\Bbb R$ be such that $f(x)=0$ if $x\le 0$ and $f(x)=1$ if $x>0$. This $f$ is discontinuous at $0$, yet the image of every set is contained in $\{0,1\}$, so $f$ sends every compact set to a finite (hence compact) set.
H: equivalence of $E[X_\infty]=1$ and $X$ is a u.i. martingale on $[0,\infty]$ Let $(X_t)$ be a strictly positive supermartingale on $[0,\infty)$. Hence $X_t$ converges to $X_\infty$ a.s. Now how can I show the following: $E[X_\infty]=1$ is equivalent to $(X_t)$ is a uniformly integrable martingale on $[0,\infty]$. hulik AI: This result does not hold, as the following classical example shows. Let $U$ be uniform on $[0,1]$. For every $n\geqslant0$, let $U_n$ denote the integer part of $2^nU$, $\mathcal F_n=\sigma(U_n)$, and $X_n=2^n\cdot[U_n=0]$. Then $(\mathcal F_n)$ is a filtration and $(X_n)$ is a nonnegative martingale with respect to $(\mathcal F_n)$, starting from $X_0=1$. Furthermore, $X_n\to X_{\infty}=0$ almost surely, hence $\mathrm E(X_{\infty})=0$, and $(X_n)$ is not uniformly integrable since $\mathrm E(X_n)=1$ does not converge to $\mathrm E(X_\infty)=0$. For a strictly positive example, add a positive constant to each $X_n$. About the revised version: Assume first that $(X_n)$ is in fact a martingale and that $\mathrm E(X_0)=1$. In one direction, if $\mathrm E(X_\infty)=1$, then Scheffé's lemma ensures that $X_n\to X_\infty$ in $L^1$. Any sequence in $L^1$ which converges in $L^1$ is uniformly integrable hence $(X_n)$ is uniformly integrable. In the other direction, if $X_n\to X_\infty$ in $L^1$, $\mathrm E(X_n)\to \mathrm E(X_\infty)$. Since $\mathrm E(X_n)=1$ for every $n$, $\mathrm E(X_\infty)=1$. QED. Finally, if one only assumes that $(X_n)$ is a supermartingale, what could impose that $(X_n)$ is in fact a martingale? To wit, consider the deterministic example $X_n=1+\frac1{n+1}$.
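An exact-arithmetic check of this counterexample (my own addition; the helper names are mine): every $X_n$ has expectation $1$, yet along any fixed sample path with $U=u>0$ the sequence hits $0$ and stays there.

```python
from fractions import Fraction
from math import floor

# Exact check of the classical example above (no simulation needed).
# U is uniform on [0,1]; U_n = floor(2**n * U), and X_n = 2**n on the
# event {U_n = 0} = {U < 2**-n}, which has probability 2**-n.
def expectation_X(n):
    return Fraction(2**n) * Fraction(1, 2**n)   # 2**n * P(U_n = 0)

# Along a fixed sample path U = u > 0, X_n(u) vanishes as soon as
# 2**n * u >= 1 — this is the almost sure convergence X_n -> 0.
def X(n, u):
    return 2**n if floor(2**n * u) == 0 else 0
```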
H: Evaluating $\lim_{y \to 0^+} (\cosh (3/y))^y$ Evaluating $$\lim_{y \to 0^+} (\cosh (3/y))^y$$ This is what I have tried: $L = (\cosh(3/y))^y$ $\ln L = \frac{\ln\cosh(3/y)}{1/y}$, applying L'Hopital's rule, I get: $\ln L = \frac{-3y^2(\sinh (3/y))}{(y^2)}$ $\ln L = 3\sinh (3/y)$ Now I seem to be stuck in a loop between $\sinh$ and $\cosh$. I know the answer is supposed to be $e^3$, but how do I proceed from here? Any help would be greatly appreciated! AI: Hint: $$ \begin{align} \cosh(3/y)^y &=\left(\frac{e^{3/y}+e^{-3/y}}{2}\right)^y\\ &=e^3\left(\frac{1+e^{-6/y}}{2}\right)^y \end{align} $$
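A numerical check of the limit (my addition, not part of the hint). Evaluating $\cosh(3/y)$ directly overflows for small $y$, so the code uses the stable identity $\log\cosh t = t-\log 2+\log(1+e^{-2t})$ for $t>0$.

```python
import math

# Evaluate (cosh(3/y))**y as exp(y * log cosh(3/y)), computing
# log cosh t stably for large t > 0.
def f(y):
    t = 3.0 / y
    log_cosh = t - math.log(2) + math.log1p(math.exp(-2 * t))
    return math.exp(y * log_cosh)

for y in [0.1, 0.01, 0.001]:
    print(y, f(y))
```

The values approach $e^3\approx 20.0855$ from below, matching the factorisation in the hint.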
H: Multiplication inverse for Dedekind cut Let $\alpha \in P_R$ be a cut. Since there exists a cut that is not $\{q\in Q\mid q<r\}=r^*$ for every $r\in Q$, $\alpha$ doesn't need to be of the form $r^*$. Let $$\gamma= 0^* \cup \{0\} \cup \{q\in P_Q\mid\text{ there exists }r\in P_Q\text{ such that }r>q\text{ and }1/r \notin \alpha\}\;.$$ I have proved that $\alpha \gamma$ is a subset of $1^*$. I don't know how to prove $1^*$ is a subset of $\alpha \gamma$. Help AI: We have $\alpha\gamma=0^*\cup\{0\}\cup\{st:0<s\in\alpha\text{ and }0<t\text{ and }\exists r>t(1/r\notin\alpha)\}$. For positive $t$ the condition that there is some $r>t$ such that $1/r\notin\alpha$ is equivalent to the condition that there is a positive $r<1/t$ such that $r\notin\alpha$, i.e., that $\alpha\subsetneqq(1/t)^*$. Let $q\in 1^*$; clearly $q\in\alpha\gamma$ if $q\le 0$, so assume that $0<q<1$. Suppose that $q\notin\alpha\gamma$. Then for every positive $s\in\alpha$ and $t$ such that $\alpha\subsetneqq(1/t)^*$, $q\ne st$. Equivalently, for every positive $s\in\alpha$, $\alpha$ is not a proper subset of $(s/q)^*$, i.e., $(s/q)^*\subseteq\alpha$. Fix a positive $s_0\in\alpha$. Given $s_n$ for some $n\in\omega$, let $s_{n+1}=s_n/q$; an easy induction shows that ${s_n}^*\subseteq\alpha$ for each $n\in\omega$. But $s_n=s_0q^{-n}$, so you’ll have the desired contradiction showing that $q\in\alpha\gamma$ once you show that the sequence $\langle q^{-n}:n\in\omega\rangle$ is unbounded in $\Bbb Q$. We know that $q=a/b$ for some positive integers $a$ and $b$ such that $a<b$, so $$q^{-n}=\left(\frac{b}a\right)^n=\left(1+\frac{b-a}a\right)^n\;.$$ Let $p=\dfrac{b-a}a$; then $q^{-n}=(1+p)^n$, and it’s an easy induction to show that $(1+p)^n\ge 1+np$. Since $\langle 1+np:n\in\omega\rangle$ is clearly unbounded, we’re done.
H: Question about flat modules and exact sequences I have a basic question about exact sequences. I want to show that if, whenever $0 \to A \xrightarrow{f} B \xrightarrow{g} C \to 0$ is exact, the sequence $0 \to A \otimes N \to B \otimes N \to C \otimes N \to 0$ is also exact, then $N$ is flat. So let $\dots \to A \xrightarrow{f} B \xrightarrow{g} C \to D \dots$ be an exact sequence. Now I want to split it into short exact sequences so I can apply what I have: $0 \to A / \operatorname{Ker}{f} \to B \to \operatorname{Im}{g} \to 0$ is exact. From this I get that $ A / \operatorname{Ker}{f} \otimes N \to B\otimes N \to \operatorname{Im}{g}\otimes N \to 0$ is exact since $- \otimes N$ is right exact. But how do I get from that to $0 \to A \otimes N \to B\otimes N \to C \otimes N \to 0$? I think I can't. So how do I split the long exact sequence in a better way? AI: The point is that you need to show that $\ldots \rightarrow A\otimes N \rightarrow B \otimes N \rightarrow C \otimes N \rightarrow D\otimes N \rightarrow \ldots$ is exact at $B \otimes N$, not that $0 \rightarrow A\otimes N \rightarrow B \otimes N \rightarrow C \otimes N \rightarrow 0$ is an exact sequence. Now, the sequence which you have shown to be exact gives the desired result.
H: Computing: $\lim_{x\rightarrow0} \frac{\log(1+x)}{x^2}-\frac{1}{x}$ Could the following limit be computed without L'Hopital and Taylor? Thanks. $$\lim_{x\rightarrow0} \frac{\log(1+x)}{x^2}-\frac{1}{x}$$ AI: Here's an approach. Note that you can write the limit as $$\lim_{x\to 0} \frac{\log(1+x)-x}{x^2}$$ and use the following definition: $$\log(1+x) = \int_1^{1+x}\frac{dt}{t}$$ For $x$ small, the midpoint rule gives us that: $$\log(1+x) \approx \frac{x}{2} \left( 1 + \frac{1}{1+x}\right) = \frac{x+\frac{1}{2}x^2}{1+x}$$ and hence we have $$\lim_{x\to 0} \frac{\frac{x+\frac{1}{2}x^2}{1+x}-x}{x^2} = \lim_{x\to 0}\frac{x+\frac{1}{2}x^2-x-x^2}{x^2} = \lim_{x\to 0}\frac{-\frac{1}{2}x^2}{x^2} = -\tfrac{1}{2}$$ You may consider the midpoint rule to be a cheat, since it is typically justified using Taylor's theorem (I suspect that it can be proved without Taylor's theorem, though I haven't tried it).
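A numeric sanity check (my addition) agrees with the $-\tfrac12$ obtained above; `math.log1p` keeps $\log(1+x)$ accurate near $0$.

```python
import math

# (log(1+x) - x) / x**2 should approach -1/2 as x -> 0.
def g(x):
    return (math.log1p(x) - x) / x**2

for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(x, g(x))
```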
H: Probability of one person getting a pair A dealer is using a standard deck of $52$ cards. One extra ace of spades is put into the deck. So now he has $53$ cards in the deck with two aces of spades in total. The dealer deals $4$ hands with $5$ cards each. What is the probability that one $5$-card hand contains the two aces of spades? I got some work done on this already. The probability of two aces of spades in the first hand must be $\frac{\binom 22 \times \binom {51}3}{\binom {53}5}.$ How do I expand this to four hands? AI: There are $\binom{51}{18}$ sets of $20$ cards that include both of the aces of spades, so the probability that both aces of spades are in play is $$\frac{\binom{51}{18}}{\binom{53}{20}}=\left(\frac{51!}{18!33!}\right)\left(\frac{20!33!}{53!}\right)=\frac{20\cdot19}{53\cdot52}=\frac{95}{689}\;.$$ Pretend that one of the aces is marked. The marked ace can be in any of the four hands. The other ace is equally likely to be any of the other $19$ cards that are in play, so the probability that the other ace is in the same hand is $\frac4{19}$. Thus, the probability that both aces are in play and in the same hand is $$\frac{95}{689}\cdot\frac4{19}=\frac{20}{689}\approx0.02903\;.$$ Added: You can also get the $\frac4{19}$ by counting the deals that put both aces of spades into the same hand, but I don’t recommend doing it that way. If the two aces are in the first hand, there are $\binom{18}3$ ways to pick the rest of that hand and $\binom{15}5\binom{10}5\binom55$ ways to deal out the other three hands, so for a given set of $20$ cards that includes both aces of spades there are $\binom{18}3\binom{15}5\binom{10}5\binom55$ ways to deal the cards so that both aces are in the first hand. There are just as many ways with the aces in the second hand, the third, and the fourth, so the total number of deals of those $20$ cards that give both aces to the same person is $4\binom{18}3\binom{15}5\binom{10}5\binom55$.
There are altogether $\binom{20}5\binom{15}5\binom{10}5\binom55$ ways to deal the $20$ cards into four hands of $5$ cards each, so the probability that both aces of spades end up in the same hand is $$\left(\frac{4\cdot18\cdot17\cdot16\cdot15!}{3!\,5!\,5!\,5!}\right)\left(\frac{5!\,5!\,5!\,5!}{20!}\right)=\frac{4\cdot5!}{3!\cdot20\cdot19}=\frac4{19}\;.$$
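Both computations can be confirmed exactly with integer arithmetic (this check is my addition, not part of the answer):

```python
from fractions import Fraction
from math import comb

# Exact verification of the answer's arithmetic.
p_both_in_play = Fraction(comb(51, 18), comb(53, 20))   # should be 95/689
p_same_hand    = Fraction(4, 19)                        # marked-ace argument
answer         = p_both_in_play * p_same_hand           # should be 20/689

# The counting version of the 4/19 step:
deals_same_hand = 4 * comb(18, 3) * comb(15, 5) * comb(10, 5) * comb(5, 5)
deals_total     = comb(20, 5) * comb(15, 5) * comb(10, 5) * comb(5, 5)
```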
H: question about the bracket process of brownian motion Suppose I have a multidimensional brownian motion $W=\{W_t\}$. Why is the following true: $$\langle W^k,W^l\rangle_t = \delta_{k,l}t$$ where $W^k$ denotes the k-th coordinate, $\langle \cdot,\cdot\rangle$ denotes the bracket process and as usual $\delta_{k,l}$ the kronecker symbol. cheers math AI: The part which is not a definition is a consequence of the fact that two independent martingales $X$ and $Y$ are such that $\langle X,Y\rangle=0$. This fact itself is a consequence of the following simple computation. For every $s\leqslant t$, $X_tY_t$ is $X_sY_s$ plus a sum of three terms $(X_t-X_s)Y_s+(Y_t-Y_s)X_s+(X_t-X_s)(Y_t-Y_s)$, where each term has zero conditional expectation conditionally on $\sigma(X_u,Y_u;u\leqslant s)$. Thus, $XY$ is a martingale and, in particular, $\langle X,Y\rangle=0$.
H: Why parametrise a curve in this way (on the unit circle)? I saw papers saying something like "let $\gamma:S^1 \times [0,T] \to \mathbb{R}^2$ parametrise a curve. The second interval above just makes it time dependent, but why parametrise (for fixed time) the curve on S^1, the unit circle? I think it's to make it closed but what is confusing is that in papers they write $\gamma$ a function of a real variable $u$ and $t$, so as $\gamma(u, t)$ so how can the domain of the first variable $u$ be the unit circle? Should it not be $\gamma(u, v, t)$ for $u^2 + v^2 = 1$? Thanks AI: As you observe, it forces the curve to be closed without them needing to say it. The variables in a function don't need to be standard coordinates on $\mathbb{R}^n$ either - a circle is $1$-dimensional, so a single coordinate $u$ is perfectly good. Maybe it would help to think of $u$ as the angle round the circle (remembering that $u=a$ and $u=a+2\pi$ define the same point, so $\gamma(a,t)=\gamma(a+2\pi,t)$ for all $t$ and $a$). This isn't strictly accurate because I've now redefined $u$ as being a real number (and $\gamma$ as being $2\pi$-periodic in its first variable), rather than $u$ being a point on the circle, but if you're used to functions taking real inputs it might be helpful. Note: The value $t$ in Thomas's answer is precisely this angle in his parameterization of the circle. This is not the only way of parameterizing a circle by one variable though, which is why philosophically it's cleaner to just take $u$ to be a point in the circle.
H: Proving that $A=\{(-2)^n : n \in \mathbb{N} \}$ is unbounded I am trying to prove that $A=\{(-2)^n : n \in \mathbb{N} \}$ is unbounded. What I did was first to show that for every $n \in \mathbb{N}$ if $n$ is even then $(-2)^n = 2^n$ and if $n$ is odd then $(-2)^n = -2^n$ (I did it by induction on $n$). Then I show that for every $n \in \mathbb{N}$ there exists $k \in \mathbb{N}$ such that $(-2)^n \lt (-2)^k$ because if $n$ is even then $$(-2)^n=2^n\lt 2^k,$$ where the last inequality holds by the definition of natural powers. The case where $n$ is odd is obvious now. Am I right? Is there a more elegant way of proving this? Thanks! AI: What does it mean to be bounded? $A$ is bounded if there exists some positive number $M$ such that $|a|\leq M$ for all $a\in A.$ You want to show $A$ is unbounded, so that there is no such $M$ that satisfies that condition. One way to do this is to pick an arbitrary positive number, and find something in $A$ which has larger magnitude than what you picked. If you can do that for any $M>0$ then $A$ can't be bounded. So to start your proof: Let $M>0.$ We can find $a\in A$ such that $|a|> M$ by picking $n$ ...
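The proof template above ("let $M>0$, pick $n$...") can be made concrete; the helper below is a made-up name of mine, and it just picks an $n$ with $2^n > M$.

```python
import math

# Given any bound M > 0, produce an element of A = {(-2)**n : n in N}
# whose absolute value exceeds M: n = floor(log2(M)) + 1 gives 2**n > M.
def witness(M):
    n = max(1, math.floor(math.log2(M)) + 1)
    return (-2)**n

for M in [10, 1000, 10**9]:
    assert abs(witness(M)) > M
```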
H: Showing that the space $C[0,1]$ with the $L_1$ norm is incomplete Can anyone think of a relatively easy counter example to remember, which demonstrates that the space $C[0,1]$ with the $L_1$ norm is incomplete? Thanks! AI: This example works to show $C[0,1]$ is not complete with respect to the $L^p$ norm for all $1\leq p < \infty.$ Consider the piecewise linear function described by $$f_n(x)=\begin{cases} 1,&\text{if }0\le x\le\frac12\\ 1-n(x-\frac{1}{2}),&\text{if }\frac12\le x\le \frac{1}{2}+\frac{1}{n}\\\\ 0,&\text{if }\frac{1}{2}+\frac{1}{n}\le x\le 1\;. \end{cases}$$ Then $$ \| f_n - f_m \|_p = \left( \int^{1/2+1/n}_{1/2} |f_n(x)-f_m(x)|^p dx\right)^{1/p} \leq \left( \frac{1}{n} \right)^{1/p} \to 0$$ so $f_n$ is Cauchy. Suppose $f_n$ has limit $f\in C[0,1].$ Then $$\int^{1/2}_0 |f(x)-f_n(x)|^p dx \leq \|f-f_n\|_p^p \to 0$$ so $f(x)=1$ on $[0,1/2].$ Similarly we see $f(x) = 0$ on $[1/2,1]$, which is a contradiction. Of course, the calculations are easy to verify once you have the example, and no one actually remembers the explicit equations for such a thing. All you have to remember is that it is $1$ on $[0,1/2]$ then goes down very quickly to $0.$
H: Linearly dependent vectors over finite fields My problem is as follows: Assume you have a vector space of dimension $(d + 1)$, with values over $GF(q)$. Every vector in this vector space can be regarded as an element of the extension field $GF(q^{d+1})$. It is well known that every element of a finite field can be expressed as a power of a primitive element and as a linear combination of primitive powers from $0$ up to $d$. It turns out that the elements of the field expressed as the $(i + kn)$th powers of a primitive element, taken modulo $n$, have linearly dependent vector representations (the entries of the vectors are the coefficients of the linear combination), where $n = (q^{d + 1} - 1)/(q - 1)$. They basically lie on the same line in the original vector space... But I can't find a proof... I'm not really an expert of the field and I suspect the problem could be stated more formally. Any help? Thanks in advance! Francesco Edit: let's say $\beta = \alpha^i$, where $\beta$ is an element of the extension field and $\alpha$ is a primitive over that field. We also know that $\beta$ can be expressed as a linear combination of $1, \alpha, \alpha^2, \dots, \alpha^d$ with coefficients $(a_0, \dots, a_d)$ over $GF(q)$. Those coefficients actually determine a vector of the finite vector space of dimension $(d+1)$. Now take a second element $\beta' = \alpha^{i+kn}$, for some integer $k$. The coefficients of the linear combination of $1, \alpha, \alpha^2, \dots, \alpha^d$ that represent $\beta'$ form a vector $(a_0', \dots, a_d')$ which is linearly dependent with $(a_0, \dots, a_d)$. So in the vector space of dimension $(d+1)$ with values over $GF(q)$, those vectors lie on the same line through the origin. I know that this is true, but I can't find a proof. So what I am actually asking for is a proof. I hope this makes my point a bit clearer. AI: Assume that $\alpha$ is a primitive element of $GF(q^{d+1})$.
So $\alpha$ is of (multiplicative) order $q^{d+1}-1$, and $GF(q^{d+1})^*=\langle \alpha \rangle$. Consider the element $ \gamma=\alpha^n, $ where $$n=(q^{d+1}-1)/(q-1)=q^d+q^{d-1}+q^{d-2}+\cdots+q+1.$$ We have $$qn=q(q^d+q^{d-1}+q^{d-2}+\cdots+q^2+q+1)=q^{d+1}+q^{d}+q^{d-1}+\cdots+q^2+q= n+ (q^{d+1}-1),$$ so $$ \gamma^q=\alpha^{qn}=\alpha^{n+(q^{d+1}-1)}=\alpha^n\cdot \alpha^{q^{d+1}-1}=\alpha^n\cdot1=\gamma. $$ This means that $\gamma$ is an element of the smaller field $GF(q)$. Furthermore, the theory of cyclic groups tells that the order of $\gamma$ is $q-1$, so it is a primitive element of the field $GF(q)$. In other words the set $$ GF(q)^*=\{1,\gamma, \gamma^2,\cdots, \gamma^{q-2}\}. $$ This gives an explanation of the phenomenon that you observed. If we fix an integer $i, 0\le i<n$, and let $k$ vary over the range $0\le k <q-1$, then $$ \alpha^{i+kn}=\alpha^i\cdot(\alpha^n)^k=\alpha^i\gamma^k $$ ranges over the set $\{x \alpha^i\mid x\in GF(q)^*\}$. Together with $0$ these elements form the 1-dimensional subspace over $GF(q)$ generated by $\alpha^i$. In other words, they all lie on the same line through the origin in the $(d+1)$-dimensional space $GF(q^{d+1})$ over $GF(q)$.
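Here is a concrete check of the phenomenon in the smallest nontrivial setting, $GF(9)=GF(3)[x]/(x^2+1)$, i.e. $q=3$, $d+1=2$, $n=4$. The choice of field and of the primitive element $\alpha=1+x$ is mine, made only for illustration.

```python
# Concrete instance in GF(9) = GF(3)[x]/(x^2 + 1), so q = 3, d + 1 = 2
# and n = (9 - 1)/(3 - 1) = 4. Elements are coefficient pairs (a0, a1)
# meaning a0 + a1*alpha; alpha = 1 + x turns out to be primitive.
q, n = 3, 4

def mul(u, v):
    # (a0 + a1 x)(b0 + b1 x) with x^2 = -1 = 2 (mod 3)
    a0, a1 = u
    b0, b1 = v
    return ((a0 * b0 + 2 * a1 * b1) % q, (a0 * b1 + a1 * b0) % q)

alpha = (1, 1)
powers = [(1, 0)]            # powers[i] is alpha**i as a coefficient pair
for _ in range(7):
    powers.append(mul(powers[-1], alpha))

gamma = powers[n]            # alpha**n, which should lie in GF(3): (c, 0)
```

Each `powers[i + 4]` is the GF(3)-scalar multiple $2\cdot$`powers[i]`, i.e. $\alpha^{i}$ and $\alpha^{i+n}$ lie on the same line through the origin.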
H: A problem about conditional expectation. I'm studying advanced probability theory by myself and have encountered an exercise: Let $A>0$ be a constant, $\xi$ be a $r.v.$ such that $E|\xi|<\infty$ and $P(\xi\leq x) = P(-\xi\leq x),\quad x\in\mathbb{R}$ Compute the conditional expectation $E(\xi\ |\ \xi I_{\{|\xi|\leq A\}})$. I have no idea of this problem. Any help would be appreciated. I have written a solution (more mathematical, in my opinion) following did's. Please tell me if anything in it is wrong. For any $B\in \mathscr{B}_{\mathbb{R}}$, we have If $0\in B$, then $\{\xi I_{\{|\xi|\leq A\}}\in B\} = \{\xi\in B,|\xi|\leq A\}\cup\{|\xi|>A\}$. Hence $\begin{eqnarray*} & &\int_{\{\xi I_{\{|\xi|\leq A\}}\in B\}}\xi I_{\{|\xi|> A\}} dP \\ &=& \int_{ \{\xi\in B,|\xi|\leq A\}}\xi I_{\{|\xi|> A\}} dP +\int_{\{|\xi|>A\}}\xi I_{\{|\xi|> A\}} dP\\ &=& 0. \end{eqnarray*}$ The last integral is zero due to the symmetry of $\xi$. If $0\notin B$, it is easy to see that $ \int_{\{\xi I_{\{|\xi|\leq A\}}\in B\}}\xi I_{\{|\xi|> A\}} dP = 0. $ Consequently, we have $E(\xi I_{\{|\xi|> A\}}\ |\ \xi I_{\{|\xi|\leq A\}}) = 0, a.s.$. Therefore, $\begin{eqnarray*} E(\xi\ |\ \xi I_{\{|\xi|\leq A\}}) &=& E(\xi I_{\{|\xi|\leq A\}}\ |\ \xi I_{\{|\xi|\leq A\}}) + E(\xi I_{\{|\xi| > A\}}\ |\ \xi I_{\{|\xi|\leq A\}})\\ &=& \xi I_{\{|\xi|\leq A\}}. \end{eqnarray*}$ AI: Let $\xi_A=\xi\,\mathbf 1_{\{|\xi|\leqslant A\}}$. By definition, $\mathrm E(\xi\mid\xi_A)=u(\xi_A)$ for some measurable function $u$ such that, for every Borel set $B$, $\mathrm E(\xi:\xi_A\in B)=\mathrm E(u(\xi_A):\xi_A\in B)$. For every $x\ne0$ such that $|x|\leqslant A$, $[\xi_A=x]=[\xi=x]$ hence $u(x)=x$. For $x=0$, $[\xi_A=0]=[\xi=0]\cup[|\xi|\gt A]$ and the distribution of $\xi$ is symmetric hence the best predictor of $\xi$ on $[\xi_A=0]$ is $0$. For every $x$ such that $|x|\gt A$, $[\xi_A=x]=\varnothing$ hence the value of $u(x)$ is irrelevant. To sum up, a solution is $\mathrm E(\xi\mid\xi_A)=\xi_A$ almost surely.
H: Number of $4$ digit numbers with no repeated digit. Number of $4$ digit numbers with no repeated digit is $4536$ $3024$ $5040$ $4823$ Well, I am very weak in combinatorics. Please help. AI: OK, so let's write down any old $4$ digit number $abcd$. How many choices do we have for the digit $a$? We have $9$ choices (since the first digit cannot be $0$). Now for each possible choice of $a$ we have $9$ choices for $b$ (since we want $b$ to be a different digit to $a$ and we now allow $0$). So for choice of the $ab$ part we have $9\cdot9 = 81$ possibilities. Now for each of these we have $8$ choices for $c$ (to avoid $c$ being the same as either $a$ or $b$). And for each of these we have $7$ choices for $d$ (to avoid $d$ being the same as any of $a,b$ or $c$). So in total there are $9\cdot9\cdot8\cdot7 = 4536$ possible numbers.
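A brute-force check (my addition) confirms the count:

```python
# Count 4-digit numbers (1000..9999) whose digits are pairwise distinct.
count = sum(1 for m in range(1000, 10000) if len(set(str(m))) == 4)
print(count)    # 4536 = 9*9*8*7
```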
H: Rank and determinant of $D$, an $n\times n$ real matrix, $n\ge 2$ Let $D$ be an $n\times n$ real matrix, $n\ge 2$. Which of the following is valid? $\det(D)=0\Rightarrow \mathrm{rank}(D)=0$ $\det(D)=1\Rightarrow \mathrm{rank}(D)\neq 1$ $\det(D)=1\Rightarrow \mathrm{rank}(D)\neq0$ $\det(D)=n\Rightarrow \mathrm{rank}(D)\neq 1$ Well, (1) is wrong because there is a $3\times 3$ matrix with rank $2$ and determinant $0$, namely $$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}. $$ I am confused about the other three: please help! AI: HINT: For a square matrix, the following are equivalent: it is invertible; it has full rank; it has non-zero determinant. Also note that the rank of an $n\times n$ matrix is between $0$ and $n$, and that more than one of the options may be correct.
H: Needed an example to understand the concept . I would be glad if anyone could provide me with an example of a polar set. The polar set is defined as follows: Given a dual pairing $(X,Y)$ the polar set or polar of a subset $A$ of $X$ is the subset $A^0$ of $Y$ given by: $$A^0 = \{y \in Y : \sup_{x\in A} |\langle x,y\rangle|\le1 \}$$ AI: Let $E$ be a subspace of a normed linear space $X$ with topological dual $X^\ast$. Then: $$E^0=\{f\in X^\ast:|f(x)|\leq 1,\,\forall x\in E\}=\{f\in X^\ast:f=0\text{ on } E\}$$ This is known as the annihilator of $E$ in $X^\ast$.
H: Limit exercise from Rudin: $\lim\limits_{n \to \infty} \sqrt{n^2+n} -n$ This is Chapter 3, Exercise 2 of Rudin's Principles. Calculate $\lim\limits_{n \to \infty} \sqrt{n^2+n} -n$. Hints will be appreciated. AI: Hint: $$\frac{\sqrt{n^2+n}-n}{1} = \frac{\sqrt{n^2+n}-\sqrt{n^2}}{1}\times \frac{\sqrt{n^2+n}+\sqrt{n^2}}{\sqrt{n^2+n}+\sqrt{n^2}} = \cdots$$ I will expand more if needed.
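A quick numeric check of the hint's rationalised form (my addition): multiplying by the conjugate turns the expression into $n/(\sqrt{n^2+n}+n)$, which also avoids catastrophic cancellation.

```python
import math

# sqrt(n**2 + n) - n rewritten as n / (sqrt(n**2 + n) + n)
def h(n):
    return n / (math.sqrt(n**2 + n) + n)

for n in [10, 1000, 10**6]:
    print(n, h(n))    # tends to 1/2
```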
H: how to visualize binomial theorem geometrically? How does $ \binom{n}{k} $ 'n choose k' get involved with the coefficients of $ (a+b)^n $? Is there any intuitive geometric picture (interpretation) that makes it seem obvious? AI: Hint: Imagine writing $(a+b)^n$ as $(a+b)(a+b)\dots(a+b)$, and then multiplying out all the brackets. Ask yourself how many ways you can get a term involving $a^kb^{n-k}$.
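The hint can be checked mechanically for a small $n$ (my addition): enumerate every way of picking 'a' or 'b' from each of the $n$ brackets and tally how many products give $a^k b^{n-k}$.

```python
from itertools import product
from math import comb

# Multiplying out (a+b)(a+b)...(a+b): each term picks 'a' or 'b' from
# every bracket, so the number of terms equal to a**k * b**(n-k) is the
# number of ways to choose which k brackets contribute an 'a'.
n = 5
counts = [0] * (n + 1)
for choice in product("ab", repeat=n):
    counts[choice.count("a")] += 1
```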
H: Solving an equation with precondition How do I solve the following equation: $$x + y\ne0\text{ and }\frac{1}{x+y}=x$$ Wolfram Alpha came up with this solution $$x\ne0,\:y=\frac{1-x^2}{x}$$ but I don't know how to get there. thx alex AI: First of all, $x + y \neq 0$ since it is a denominator. Therefore $1=x(x+y)$. If $x=0$, then $1=0$, a contradiction. Therefore $x\neq 0$, and we can divide by $x$: $$\frac{1}{x}=x+y.$$ Now $$y=\frac{1}{x}-x = \frac{1-x^2}{x},$$ under the condition that $x \neq 0$.
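An exact spot-check of the derivation (my addition), using rational arithmetic:

```python
from fractions import Fraction

# For any x != 0, y = (1 - x**2)/x satisfies 1/(x + y) = x, i.e.
# x*(x + y) = 1 (note x + y = 1/x, which is never 0).
for x in [Fraction(1, 2), Fraction(-3), Fraction(7, 5)]:
    y = (1 - x**2) / x
    assert x * (x + y) == 1
```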
H: Solving P vs NP with computer Is it possible to build a computer program that would (eventually) bring a solution to the P vs. NP question? AI: Nobody knows. I suppose if there is a polynomial-time algorithm for 3-SAT (or some other NP-complete problem) then a computer could find it and prove P = NP. And if there is a proof that P isn't NP, well, I suppose a computer could find that, too. Why - are you looking for something to work on this summer?
H: How to check if a matrix is positive definite I want to know how to check if a matrix $M$ is positive definite; assume that $M$ is a $3\times3$ real matrix. I think one way is to put the matrix in a quadratic form $X^TMX$, where $X$ is a vector $X^T=[x_1\ x_2\ x_3]$. My question is: if I found that $X^TMX = ax_1^2 + bx_1x_2+ \cdots$, can I say that the matrix $M$ is not positive definite because the term $bx_1x_2$ can be negative, or do I have to try to put the value of $X^TMX$ in the form of a sum of squares, e.g. $(\ )^2+(\ )^2+\cdots$, and then decide? And what is the relation between the positive definiteness of a matrix and its determinant? AI: I don't think there is a nice answer for matrices in general. Most often we care about positive definite matrices for Hermitian matrices, so a lot is known in this case. The one I always have in mind is that a Hermitian matrix is positive definite iff its eigenvalues are all positive. Glancing at the wiki article on this alerted me to something I had not known, Sylvester's criterion which says that you can use determinants to test (a Hermitian matrix) for positive definiteness by checking to see if all the square submatrices whose upper left corner is the $(1,1)$ entry have positive determinant. Sorry if this is repeating things you already know, but it's the most useful information I can provide. Good luck!
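Since Sylvester's criterion comes up in the answer, here is a small self-contained implementation for real symmetric matrices; the code and function names are my own sketch, not a standard library routine.

```python
from fractions import Fraction

# Sylvester's criterion for real *symmetric* matrices: M is positive
# definite iff every leading principal minor is positive. Determinants
# are computed by exact Gaussian elimination over the rationals.
def det(M):
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            sign = -sign
        d *= M[i][i]
        for r in range(i + 1, n):
            factor = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= factor * M[i][c]
    return sign * d

def is_positive_definite(M):
    n = len(M)
    assert all(M[i][j] == M[j][i] for i in range(n) for j in range(n)), \
        "criterion stated here for symmetric matrices only"
    return all(det([row[:k] for row in M[:k]]) > 0 for k in range(1, n + 1))
```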
H: The Frobenius-Nakayama Formula I am currently reading a paper where it refers to the usual Frobenius-Nakayama formula describing quotients of an induced module. It is refering to the following result: If $k$ is a field, $P$ is a subgroup of a group $G$, $F$ is a $kP$-module and $W$ is a $kG$-module, then we have that $\operatorname{Hom}_{kP}(F,W_{P})\cong_{k}\operatorname{Hom}_{kG}(\operatorname{Ind}_{P}^{G}(F),W).$ However, having searched for the Frobenius-Nakayama formula, I cannot find it in any reliable sources. Does anybody know of a good source for this result, and/or if the result is normally referred to by an alternative name? AI: Over a field of characteristic not dividing the group order, this is called Frobenius reciprocity. The version you give is usually called Frobenius-Nakayama reciprocity. See page 58-59 of Alperin's Local Representation Theory for a proof. A very introductory treatment is given on page 231 of James–Liebeck's Representations and Characters of Groups. A more general version is proved on page 46 of Benson's Representations and Cohomology.
H: Example computation of $\operatorname{Tor_i}{(M,N)}$ Let $M = \mathbb Z / 284 \mathbb Z$ and $N = \mathbb Z / 2 \mathbb Z$. Can you tell me if my computation of $\operatorname{Tor_i}{(M,N)}$ is correct: (i) First we want a projective resolution of $M$: $$ 0 \to \mathbb Z \xrightarrow{\cdot 284 } \mathbb Z \xrightarrow{\pi} M \to 0$$ Then we chop off $M$ to get $$ 0 \to \mathbb Z \xrightarrow{d_1 = \cdot 284 } \mathbb Z \xrightarrow{d_0 = 0} 0$$ And apply $- \otimes N$ to get $$ 0 \to \mathbb Z \otimes N \xrightarrow{d_1^\ast } \mathbb Z \otimes N \xrightarrow{d_0^\ast} 0$$ (ii) Now we see that $\operatorname{Tor_i}{(M,N)} = 0$ for $i \geq 2$. (iii) We know that $\operatorname{Tor_0}{(M,N)} = M \otimes N = \mathbb Z / 284 \mathbb Z \otimes \mathbb Z / 2 \mathbb Z$ (iv) $\operatorname{Tor_1}{(M,N)} = \operatorname{Ker}{d_0^\ast} / \operatorname{Im}{d_1^\ast} = 0 / 0 = 0$. Thanks for your help. AI: In your calculation of the group $\textrm{Tor}_1(M, N)$, it seems that the calculation of the kernel is incorrect. The kernel must be the whole $\mathbb{Z} \otimes N$ since an elementary tensor $k \otimes [n]$ is mapped to $284k \otimes [n] = 142k \otimes [2n] = 142k \otimes [0] = 0$. Since the image of $d_1^\ast$ is therefore trivial, this gives $\operatorname{Tor}_1(M,N)\cong\mathbb Z\otimes N\cong\mathbb Z/2\mathbb Z$, consistent with the general formula $\operatorname{Tor}_1(\mathbb Z/m\mathbb Z,\mathbb Z/n\mathbb Z)\cong\mathbb Z/\gcd(m,n)\mathbb Z$. By the way, $i$ is generally written as a subscript in $\textrm{Tor}_i(M, N)$ since $\textrm{Tor}_i(M, N)$ is a homology group.
H: If $z$ is the unique element of a monoid such that $uzu=u$, is $u$ invertible? This question is a follow-up to this one. I tried to check whether the same statement as discussed for rings there is true for monoids too, but without success. Let $M$ be a monoid and $u\in M$. Suppose there exists a unique $z\in M$ such that $uzu=u$. Does it imply that $u$ is an invertible element of $M$, with $u^{-1}=z?$ The only thing I see is that it implies that $z=zuz,$ since $$u(zuz)u=(uzu)zu=uzu=u.$$ So $z$ must be a (unique) von Neumann generalized inverse of $u$. But this is far from enough... AI: EDIT: this has been reworked multiple times due to discussion below. I believe it is now correct. Let $S$ be the set of sequences of natural numbers, almost all of which are zero. It is endowed with a function $\Sigma$ which returns the sum in a sequence. Let $M$ be the monoid of endomorphisms $T$ of $S$ such that $\Sigma(Ts) \leq \Sigma(s)$ for all $s \in S$. Let $z$ be the "right shift operator" which inserts a 0 in the first slot of the sequence and shifts everything else to the right, and let $u$ be the left shift operator which shifts everything to the left (eliminating the number in the first slot). Clearly $uzu = u$, and $u$ can't be invertible. I claim that $z$ is unique with this property; clearly any other element with this property must be a right shift operator inserting something in the first spot (one gets this immediately upon writing $uzu(a_0, \ldots) = (a_1, \ldots)$), and that something has to be 0 or the sum will increase.
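The identities used for $u$ and $z$ are easy to sanity-check on finite prefixes. Below, sequences are modelled as tuples with trailing zeros left implicit, and $uzu$ denotes the map $s\mapsto u(z(u(s)))$ (this check is my addition).

```python
# Finite-sample check of uzu = u and zuz = z for the shift operators.
def u(s):                 # left shift: drop the first entry
    return s[1:]

def z(s):                 # right shift: insert a 0 in front
    return (0,) + s

samples = [(), (1,), (3, 0, 2), (5, 1, 4, 1)]
for s in samples:
    assert u(z(u(s))) == u(s)     # uzu = u
    assert z(u(z(s))) == z(s)     # zuz = z
```

Note that both operators respect the sum condition $\Sigma(Ts)\leq\Sigma(s)$: $z$ preserves the sum and $u$ can only decrease it.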
H: How is uncountability characterized in second order logic? How is uncountability characterized in second order logic? Also, why is this characterization of uncountability "absolute" in the way that FOL's characterization of uncountability is not? A very direct answer will be much appreciated. Much thanks. AI: The definition of uncountability is essentially the same as it is in first order logic: a set $X$ is uncountable if there is no surjective function $F$ from the natural numbers $\mathbb{N}$ onto $X$ (there are other ways of formulating uncountability, but they are equivalent to this definition). Consequently in order to provide a formal definition in second order logic we must first define the following: the set of natural numbers $\mathbb{N}$; functions and their properties in second order logic; and finally the definition of uncountability. Let's start with functions. Let $F$ be a unary function symbol, and let the domain of $F$ be given as $$ \mathrm{dom}(F) = \left\{ x : \exists{y} (F(x) = y) \right\} $$ while the range of $F$ is $$ \mathrm{ran}(F) = \left\{ y : \exists{x} (F(x) = y) \right\}. $$ These sets exist via the second order comprehension scheme. Now we characterise the natural numbers $\mathbb{N}$ in second order logic. Given a function symbol $S$ for the successor function and a constant symbol $0$, the natural numbers is the set satisfying the following axioms. By a theorem of Dedekind, this suffices to characterise the natural numbers up to isomorphism. $$ \forall{x}(0 \neq S(x)) \\ \forall{n}\forall{m} (S(n) = S(m) \rightarrow n = m) \\ \forall{P} ( P(0) \wedge \forall{n} (P(n) \rightarrow P(S(n))) \rightarrow \forall{n}\, P(n)). $$ Finally we are in a position to present a definition of uncountability in second order logic. A set $X$ is countable just in case it satisfies the following definition.
$$ \mathrm{countable}(X) =_{df} \exists{F} (\mathrm{dom}(F) = \mathbb{N} \wedge \mathrm{ran}(F) = X \wedge \forall{x \in X}\exists{y \in \mathbb{N}} (F(y) = x)). $$ The definition of uncountability is then just the negation of the countability condition. The question of why this characterisation is 'absolute' in a way that the first order characterisation is not can be found in the Dedekind categoricity theorem cited above. Given the standard semantics for second order logic, all structures satisfying the second order Peano axioms are isomorphic—the theory is categorical. Because of this, the Löwenheim–Skolem theorem is false for second order logic (with the standard semantics), so the kind of model-theoretic relativity you get in first order logic does not appear (at least to the same extent). We don't get models where the whole domain is countable (from the 'external' perspective), but the model itself thinks that there are uncountable sets just because certain functions don't exist.
H: Basic probability - either event occurs but not both I'm taking a graduate course in probability and statistics using Larsen and Marx, 4th edition and I'm struggling with a seemingly basic question. If A and B are any two events, not mutually exclusive: $$P((A \cup B) ^\complement) = 0.6, P(A \cap B) = 0.2$$ What is the probability that A or B occurs, but not both? Or in other words: $$P((A \cap B)^\complement \cap (A \cup B)) = ?$$ So far, I've been able to infer the following: $$ P(A \cup B) = 1 - P((A \cup B) ^\complement ) = 1 - 0.6 = 0.4 $$ and $$P((A \cap B) ^\complement) = 1 - P(A \cap B) = 1 - 0.2 = 0.8$$ Can someone kindly give a hint as to how to approach from here? Am I heading in the right direction? How should I think about these types of problems in general? I feel like the text gives you the basic set of axioms to define things but then neglects the showing of solutions for slightly more complicated examples of compound probability equations such as the above. AI: Hint: Find the probability of $A$ only plus the probability of $B$ only. Let $x= P(A~ \text{only})$, $y=P(B ~\text{only})$. Then $x +y + 0.2 =0.4$. Thus $x+y= 0.2$
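One way to see why the hint pins the answer down: build the four disjoint atoms explicitly and check that any split of the leftover mass gives the same result (a quick Python sketch; the function name is my own):

```python
# Atoms: "A only", "B only", "both", "neither".
# The data force P(both) = 0.2 and P(neither) = 0.6,
# so P(A only) + P(B only) = 1 - 0.2 - 0.6 = 0.2.
def p_exactly_one(p_a_only, p_b_only):
    p_both, p_neither = 0.2, 0.6
    assert abs(p_a_only + p_b_only + p_both + p_neither - 1.0) < 1e-12
    return p_a_only + p_b_only

# Any split of the remaining 0.2 of probability mass gives the same answer.
for x in (0.0, 0.05, 0.1, 0.2):
    assert abs(p_exactly_one(x, 0.2 - x) - 0.2) < 1e-12
```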
H: multiplication of a trigonometric series Let $f(x)$ be the value of a trigonometric series, which converges uniformly on $\left[ -\pi, \pi\right]$. If I multiply $f(x)$ with $e^{iax}$ where $a\in\mathbb{N}$ will the result then be a trigonometric series which converges uniformly? AI: Let $f_n$ denote the n-th partial sum of the series. We have $f_n \to f$ uniformly on $[-\pi,\pi].$ Clearly, $e^{iax} f_n(x) \to e^{iax} f(x)$ pointwise, and this is uniform since $$ \sup_{x\in [-\pi,\pi]} \| e^{iax}f_n(x) - e^{iax}f(x)\| =\sup_{x\in [-\pi,\pi]} \| f_n(x) - f(x)\| \to 0 \text{ as } n\to \infty.$$
H: $C^1$ function questions $f(x,t)$ is a function defined on the set $$S = \cup_{t \in [0,T]} A(t) \times \{t\}$$ where $A(t)$ is an open subset of $\mathbb{R}^n$ that depends on $t$ in some way. I am told that $f \in C^1(S)$. Since the set $S$ is not closed (I think, even though $\{t\}$ is closed), we cannot say that $f$ is bounded above and below. Am I right? Also we cannot say anything about its divergence either can we (if $f$ is vector valued)? Or maybe I am missing something. If I were given that $f \in L^\infty(S)$, then I can write $$|f(x,t)| \leq \text{const}$$ almost everywhere? AI: Taking $A(t)=(0,1)$, and $f(x,t) = \frac{1}{x}-\frac{1}{1-x}$ gives a $C^1$ function $f$ that is neither bounded below nor above on $S$. If $f \in L^{\infty} (S)$ then $|f(x,t)| \leq ||f||_{\infty}$ ($\operatorname{ess\,sup}$ norm) a.e. on $S$.
H: A problem about Riemann-Lebesgue lemma Let $f$ be an integrable function over $[a,b]$. Prove that: $$\lim_{n \rightarrow \infty } \int _a ^b f(x)|\sin(nx)| dx= \frac {2}{\pi} \int _a^b f(x) dx.$$ AI: Hint: Prove the theorem for characteristic functions of intervals, then prove the theorem for step functions. Approximate integrable functions with step functions and apply a convergence theorem.
H: Cheat sheet for various mathematics topics I was searching through the web to find a webpage which contains a consolidated list of mathematical cheat sheets on various topics in one place. I couldn't find one. It would be good if we could share the known cheat sheet links in this post for various mathematical topics. AI: I think this will be your best bet. I have been using it for years.
H: Riemann Surfaces Question (Complex Analysis) Let $X$ be a compact Riemann surface, and denote by $m_X$ the following field: $ m_X := \{ f:X \to \mathbb{P}_\mathbb{C} : f \text{ meromorphic} \} \setminus \{\infty \} $ What is the natural injection of the field of rational functions $\mathbb{C}(z)$ into $m_X$? P.S.: $\mathbb{P}_\mathbb{C}$ denotes the Riemann sphere. Thanks in advance!!! AI: Let me say that emphatically: There is no canonical injection of $\mathbb C(z)$ into $\mathcal M(X)$ To give an embedding $\mathbb C(z) \hookrightarrow\mathcal M(X)$ exactly amounts to choosing a non-constant morphism $m:X\to \mathbb P^1(\mathbb C)$. If such a choice is made, the deduced field embedding $\mathbb C(z) \hookrightarrow\mathcal M(X)$ will send $z\mapsto m$, where $m$ is now seen as a meromorphic function on $X$.
H: Resources for matrices and their applications I was preparing some presentation slides on the basics of matrices and their applications. Even though many of the participants are familiar with basic matrix operations, I planned to explain them by starting from matrix addition, matrix subtraction, matrix multiplication and the matrix inverse, in comparison with the corresponding real number operations. My theme is: learning a new concept = comparison with a known old concept + where the new concept differs from the known old one + why they differ. In this process, I thought of comparing real number operations with matrix operations. Though matrices are used in various algorithms for representing documents and finding out their page rank, I'm looking for an example which is simpler and easier to understand. In general, I'm looking for various resources on matrices and their applications in the real world which can be shown in a creative way. Any book similar to Mathematical Nature Walk, but for matrices, would be very useful. AI: You can't talk about matrices without talking about linearity. It would practically be a sin to not mention their role as linear transforms on vector spaces, especially since that's probably the best motivation for the initially unintuitive form of matrix multiplication. A practical application from this angle would be video game graphics and geometry: understanding matrices puts every transformation (shifts, rotations, scaling, etc.) in a very nice conceptual framework that can be easily applied to otherwise difficult things, such as scaling an object mesh by calculating the new positions of all vertices in it. The fact that this is so intensely visual (it is geometry after all) is a very nice bonus that students are rarely granted in algebra, and shouldn't be passed up. Another major application is the role of adjacency matrices in graph theory.
If your students have never seen graph theory before, the use of these might not seem as immediate (after all, you'd have to explain the uses of graph theory itself), but they are incredibly useful, and I would say that they're worth a mention at least. I believe Strang has a lecture as part of his Linear Algebra course where he uses these to solve circuitry problems, though I forget where the video is. In terms of linking operations from old to new concepts: You probably won't have much trouble with addition/subtraction: for those operations a matrix can be treated as a grid of values. In my experience most students have no trouble with this, and only run into trouble with matrix multiplication. But if you introduce matrices as linear transformations, and matrix multiplication as composition, then you can talk about the identity matrix, matrix multiplication, determinants, inverses, rank, and nullspaces in an extremely intuitive and easy to visualize manner. You can talk about what makes matrices matrices and not mere grids of numbers. I'd say this is your best bet for explaining "why they differ", since the why is all about linearity, which is motivated very well through geometric ideas.
H: is this continuous and differentiable Let $I=\{1\}\cup\{2\}$, for $x\in\mathbb{R}$,$f(x)=\operatorname{dist}(x,I)=\inf\{|x-y|:y\in I\}$ Then 1.$f$ is discontinuous some where on $\mathbb{R}$ 2.$f$ is continuous on $\mathbb{R}$ but not differentiable only at $1$ 3.$f$ is continuous on $\mathbb{R}$ but not differentiable only at $1,2$ 4.$f$ is continuous on $\mathbb{R}$ but not differentiable only at $1,2,3/2$ What I think is $f(x)=0$ when $x=1,2$ and $f(x)>0$ when $x\in \mathbb{R}\setminus\{1,2\}$ so it is continuous on $\mathbb{R}$ and it is not differentiable at $1,2$. Am I right? AI: If $x \in (-\infty,1)$, we have $f(x) = 1-x$, which is differentiable. Similar reasoning shows that $f$ is differentiable on $(1,1.5)$, $(1.5,2)$ and $(2,\infty)$. Now check differentiability of $f$ at $\{1,1.5,2\}$: If $x=1$, and $|h|<\frac{1}{2}$, we have $ f(1+h)=|h|$, this gives $\lim_{h\downarrow 0} \frac{f(1+h)-f(1)}{h} = +1$, but $\lim_{h\uparrow 0} \frac{f(1+h)-f(1)}{h} = -1$, hence $f$ is not differentiable at $x=1$. If $x=1.5$, then if $|h|<\frac{1}{2}$, we have $f(1.5+h) = .5-|h|$; similar reasoning applies. Finally, at $x=2$, if $|h|<\frac{1}{2}$, we have $f(2+h) = |h|$, which is the same as the $x=1$ case, hence not differentiable at this $x$.
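A numerical check of the one-sided difference quotients confirms answer 4 (a Python sketch; the helper names are my own):

```python
def f(x):
    # dist(x, {1, 2})
    return min(abs(x - 1), abs(x - 2))

def one_sided(x, h):
    return (f(x + h) - f(x)) / h

h = 1e-6
for x in (1.0, 1.5, 2.0):
    # left and right difference quotients disagree: not differentiable there
    assert abs(one_sided(x, h) - one_sided(x, -h)) > 1.0
# away from {1, 3/2, 2} the two sides agree
assert abs(one_sided(1.25, h) - one_sided(1.25, -h)) < 1e-3
```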
H: A limit that involves prime numbers Let $p_{n}$ be the $n$th prime number and $(a_{n}), n\geq1$ such that: $$a_{n}=\frac{1}{p_1}+\frac{1}{p_2}+\cdots+\frac{1}{p_n}$$ By using this result, $$\lim_{n\rightarrow\infty} \frac{p_{1}}{p_{1}-1} \frac{p_{2}}{p_{2}-1}\cdots\frac{p_{n}}{p_{n}-1}=\infty$$ I have to prove that $\lim_{n\to\infty} a_{n} = \infty$. AI: Absolute convergence of $\displaystyle \prod_{k=1}^{\infty} \left( 1+a_k\right)$ means that $\displaystyle \sum_{k=1}^{\infty} \lvert a_k \rvert$ converges. Further if $a_k$'s are positive, then we have the following inequalities. $$1 + \sum_{k=1}^{\infty} a_k \leq \displaystyle \prod_{k=1}^{\infty} \left( 1+a_k\right) \leq \exp \left( \sum_{k=1}^{\infty} a_k \right)$$ Hence, if $a_k$'s are positive, then $\displaystyle \prod_{k=1}^{\infty} \left( 1+a_k\right)$ converges iff $\displaystyle \sum_{k=1}^{\infty} a_k$ converges. Since we are given that $\displaystyle \prod_{k=1}^{\infty} \left(\dfrac{p_k}{p_k-1} \right) = \prod_{k=1}^{\infty} \left(1 + \dfrac1{p_k-1} \right)$ diverges, we have that $$\sum_{k=1}^{\infty} \dfrac1{p_k-1}$$ diverges i.e. if $b_n = \displaystyle \sum_{k=1}^{n} \dfrac1{p_k-1}$, then $\displaystyle \lim_{n \rightarrow \infty} b_n = \infty$. Now note that \begin{align} a_n & = \dfrac1{p_1} + \dfrac1{p_2} + \dfrac1{p_3} + \cdots + \dfrac1{p_n} \geq \dfrac1{p_2-1} + \dfrac1{p_3-1} + \dfrac1{p_4-1} + \cdots + \dfrac1{p_n-1} + \dfrac1{p_{n+1}-1}\\ & = b_{n+1} - \dfrac1{p_1-1} = b_{n+1} - 1 \end{align} The above is true since consecutive primes differ by at least $1$, i.e. $p_k \le p_{k+1}-1$, so that $\dfrac1{p_k} \geq \dfrac1{p_{k+1}-1}$. Now, letting $n \to \infty$, we get what you want.
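The term-by-term inequality $a_n \ge b_{n+1} - 1$ used above can be checked numerically (a Python sketch with a naive trial-division prime generator; the helper is my own):

```python
def primes(n):
    """First n primes by trial division."""
    ps, c = [], 2
    while len(ps) < n:
        if all(c % p for p in ps):
            ps.append(c)
        c += 1
    return ps

ps = primes(50)
a = [0.0]                     # a[n] = sum of 1/p_k for k <= n
for p in ps:
    a.append(a[-1] + 1.0 / p)
b = [0.0]                     # b[n] = sum of 1/(p_k - 1) for k <= n
for p in ps:
    b.append(b[-1] + 1.0 / (p - 1))

for n in range(1, 50):
    assert a[n] >= b[n + 1] - 1.0 - 1e-12   # a_n >= b_{n+1} - 1
```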
H: Explanation of Zeta function and why 1+2+3+4+... = -1/12 Possible Duplicate: Why does $1+2+3+\dots = {-1\over 12}$? I found this article on Wikipedia which claims that $\sum\limits_{n=0}^\infty n=-1/12$. Can anyone give a simple and short summary on the Zeta function (never heard of it before) and why this odd result is true? AI: The answer is much more complicated than $\lim_{x \to 0} \frac{\sin(x)}{x}$. The idea is that the series $\sum_{n=1}^\infty \frac{1}{n^z}$ is convergent when $Re(z) >1$, and this works also for complex numbers. The limit is a nice function (analytic) and can be extended in a unique way to a nice function $\zeta$. This means that $$\zeta(z)=\sum_{n=1}^\infty \frac{1}{n^z} \,;\, Re(z) >1 \,.$$ Now, when $z=-1$, the right side is NOT convergent, still $\zeta(-1)=\frac{-1}{12}$. Since $\zeta$ is the ONLY way to extend $\sum_{n=1}^\infty \frac{1}{n^z}$ to $z=-1$, it means that in some sense $$\sum_{n=1}^\infty \frac{1}{n^{-1}} =-\frac{1}{12}$$ and this is exactly what that means. Note that, in order for this to make sense, on the LHS we don't have convergence of series, we have a much more subtle type of convergence: we actually ask that the function $\sum_{n=1}^\infty \frac{1}{n^z}$ is differentiable as a function in $z$ and make $z \to -1$... In some sense, the phenomenon is close to the following: $$\sum_{n=0}^\infty x^n =\frac{1}{1-x} \,;\, |x| <1 .$$ Now, the LHS is not convergent for $x=2$, but the RHS function makes sense at $x=2$. One could say that this means that in some sense $\sum_{n=0}^\infty 2^n =-1$. Anyhow, because of the analyticity of the Riemann zeta function, the statement about $\zeta(-1)$ is actually much more subtle and true on a more formal level than this geometric statement...
H: A list of basic integrals I am in need of a list of basic integrals for my upcoming ODE test. I have searched on Math.SE for a post that might help but I didn't find such a post. When I write 'basic' I don't necessarily mean immediate integrals (such as $e^x,\sin x$) but also 'useful' ones such as $\ln x$. If there are also some common techniques (useful substitutions like the trigonometric substitutions, how to integrate a rational function, etc.) this would be helpful too (since it has been over a year since I took my exam on Calc II and I forgot some of this material, this might also save some time during the exam). I would appreciate any reference for this matter. AI: I have found the following "cheat sheets" helpful when I was going through ODE. Paul's Online Math Notes: Cheat Sheets The integral ones come in a full sized and condensed form. Integrals Cheat Sheet
H: what is derivative of determinant map Possible Duplicate: Derivative of Determinant Map consider $v=(v_1,v_2)\in \mathbb{R}^2$ ,$w=(w_1,w_2)\in\mathbb{R}^2$ consider the determinant map det:$\mathbb{R}^2\times \mathbb{R}^2$ define by $\det(v,w)=v_1w_2-w_1v_2$. The derivative of the determinant map at $(v,w)\in\mathbb{R}^2\times \mathbb{R}^2$ evaluated on $(H,K)\in \mathbb{R}^2\times \mathbb{R}^2$ is $1$. $\det(H,W) +\det(V,K)$ $2$. $\det(H,K)$ $3$. $\det(H,V)+\det(W,K)$ $4$. $\det(V,W) + \det(K,W)$. Well, I have no idea how to solve this one. AI: Compute the partials using the fact that $\det$ is multilinear: Thus the derivative at $(V,W)$ in the direction $(H,0)$ will be easily computed from the expression $\det(V+H,W) = \det(V,W)+\det(H,W)$. In particular, the derivative at $(V,W)$ in the direction $(H,0)$ is $\det(H,W)$. Similarly, the derivative at $(V,W)$ in the direction $(0,K)$ will be given by $\det(V,K)$. Since the derivative is linear, we have that the derivative at $(V,W)$ in the direction $(H,K)$ is just the sum of the derivatives in the direction $(H,0)$ and $(0,K)$. Hence the result is $\det(H,W)+\det(V,K)$.
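Since $\det$ on $\mathbb{R}^2\times \mathbb{R}^2$ is a polynomial map, the answer (option 1) is easy to confirm with a central finite difference (a Python sketch; the helper names are my own):

```python
def det(v, w):
    return v[0] * w[1] - w[0] * v[1]

def directional_derivative(v, w, h, k, eps=1e-6):
    # central difference of t -> det(v + t*h, w + t*k) at t = 0
    vp = [v[i] + eps * h[i] for i in range(2)]
    wp = [w[i] + eps * k[i] for i in range(2)]
    vm = [v[i] - eps * h[i] for i in range(2)]
    wm = [w[i] - eps * k[i] for i in range(2)]
    return (det(vp, wp) - det(vm, wm)) / (2 * eps)

V, W = (1.0, 2.0), (3.0, 5.0)
H, K = (0.7, -0.3), (0.2, 1.1)
exact = det(H, W) + det(V, K)          # the claimed derivative
assert abs(directional_derivative(V, W, H, K) - exact) < 1e-6
```

The central difference is exact here up to rounding, because $\det(V+tH, W+tK)$ is a quadratic polynomial in $t$ whose even-order terms cancel in the symmetric difference.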
H: Proving ${p-1 \choose k}\equiv (-1)^{k}\pmod{p}: p \in \mathbb{P}$ Possible Duplicate: Prove $\binom{p-1}{k} \equiv (-1)^k\pmod p$ The question is as follows: Let $p$ be prime. Show that ${p \choose k}\bmod{p}=0$, for $0 \lt k \lt p,\space k\in\mathbb{N}$. What does this imply about the binomial co-efficients ${p-1 \choose k}$? By the definition of binomial coefficients: $${p \choose k}=\frac{p!}{k!(p-k)!}$$ Now if $0 \lt k \lt p$, then we have $p\mid{p\choose k}$, therefore ${p \choose k}\equiv0\pmod{p}, \space 0 \lt k \lt p. \space \blacksquare$ Note that we can write: ${p \choose k}={p-1 \choose k}+{p-1 \choose k-1}$, and therefore: $${p-1 \choose k}={p \choose k}-{p-1 \choose k-1}=\frac{p!}{k!(p-k)!}-\frac{(p-1)!}{(k-1)!(p-k)!}=\frac{(p-1)!}{(k-1)!(p-k)!}\left(\frac{p}{k}-1\right)$$ However, I am unsure how to proceed with this question, the book I am working from states that: $${p-1 \choose k}\equiv(-1)^{k}\pmod{p}, \space 0 \le k \lt p$$ But I am unsure how the authors have derived this congruence, so I'd appreciate any hints. Thanks in advance. AI: Remember Wilson's Theorem for a prime $\,p\,$: $$(p-1)!\equiv-1\pmod p$$ and from what you already proved we get $$\binom {p-1}{k}=\binom{p}{k}-\binom{p-1}{k-1}\equiv-\binom{p-1}{k-1}\pmod p$$ Now just observe $$\frac{(p-1)!}{(p-k)!}=(p-k+1)(p-k+2)\cdot ...\cdot (p-1)\equiv(1-k)(2-k)\cdot ...\cdot (-1)\pmod{p}\Longrightarrow$$$$\Longrightarrow\binom{p-1}{k-1}=\frac{(p-1)!}{(k-1)!\,(p-k)!}\equiv\frac{(1-k)(2-k)\cdot ...\cdot (-1)}{1\cdot 2\cdot ...\cdot (k-1)}=(-1)^{k-1}\pmod p$$ (dividing is legitimate because $(k-1)!$ is invertible $\bmod\ p$), and therefore $$\binom{p-1}{k}\equiv-(-1)^{k-1}=(-1)^{k}\pmod p$$
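Both congruences are easy to spot-check for small primes (a Python sketch using the standard-library `math.comb`):

```python
from math import comb

for p in (3, 5, 7, 11, 13):
    # binom(p, k) ≡ 0 (mod p) for 0 < k < p
    for k in range(1, p):
        assert comb(p, k) % p == 0
    # binom(p-1, k) ≡ (-1)^k (mod p) for 0 <= k < p
    for k in range(p):
        assert comb(p - 1, k) % p == (-1) ** k % p
```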
H: Why is $\log_{-2}{4}$ complex? With the logarithm being the inverse of the exponential function, it follows that $ \log_{-2}{4}$ should equal $2$, since $(-2)^2=4$. The change of base law, however, implies that $\log_{-2}{4}=\frac{\log{4}}{\log{-2}}$, which is a complex number. Why does this occur when there is a real solution? AI: The exponential function is not invertible on the complexes. Correspondingly, the complex logarithm is not a function, it is a multi-valued function. For example, $\log(e)$ is not $1$ -- instead it is the set of all values $1 + 2 \pi \mathbf{i} n$ over all integers $n$. How are you defining $\log_a(b)$? If you are defining it by $\log(b) / \log(a)$, then it too is a multi-valued function. The value of $\log(4)/\log(-2)$ ranges over all values $(\ln 4 + 2 \pi \mathbf{i} m)/(\ln 2 + \pi \mathbf{i} + 2 \pi \mathbf{i} n)$, where $m$ and $n$ are integers. Do note that the set of values of this multi-valued function does include $2$; e.g. when $m =1$ and $n=0$.
H: solving a ODE with periodic boundary conditions Help me please to solve this problem: $u_{xx}+(\cos x+\cos^{2} x)u=e^{\cos x - 1}$ Thanks a lot! AI: The starting point could be changing the dependant variable: $$\cos x=t$$ $$\frac{du}{dx}=\frac{d(\cos x)}{dx}\frac{du}{d(\cos x)}=-\sin x\frac{du}{d(\cos x)}$$ $$\begin{aligned}\frac{d}{dx}\left(\frac{du}{dx}\right) & =-\cos x\frac{du}{d(\cos x)}-\sin x\frac{d(\cos x)}{dx}\frac{d^{2}u}{d(\cos x)^{2}}=-\cos x\frac{du}{d(\cos x)}+\sin^{2}x\frac{d^{2}u}{d(\cos x)^{2}}\\ & =-\cos x\frac{du}{d(\cos x)}+(1-\cos^{2}x)\frac{d^{2}u}{d(\cos x)^{2}}=-t\frac{du}{dt}+(1-t^{2})\frac{d^{2}u}{dt^{2}} \end{aligned}$$ $$(1-t^{2})\frac{d^{2}u}{dt^{2}}-t\frac{du}{dt}+(t+t^{2})u=e^{t-1}$$ This is a second-order linear inhomogeneous equation with regular singular points at $t=\pm 1$. It does not look immediately familiar, so perhaps Frobenius method could lead to a solution for the homogeneous part and Green's function (as long as you state the boundary conditions) could be invoked for a particular solution. EDIT: The original equation without RHS is actually a Hill equation
H: Are these proofs correct? (Number Theory) I'm finishing Chapter 1 of Apostol's book Introduction to Analytic Number Theory. I have made almost half of the 30 problems posed. I have some doubts on the proofs I produce, since sometimes I seem to assume extra information, or seem to assume things that are obvious, when that is precisely what is to be proven. I am unsure about these few, however $(3)$ and $(4)$ seem right. $(1)$ THEOREM If $(a,b)=1$ and $ab=c^n$, then $a=x^n$ and $b=y^n$ for some $x,y$. PROOF If $(a,b)=1$ then we have $$a = \prod p_i^{a_i}$$ $$b = \prod p_j^{b_j}$$ where $p_j \neq p_i$ for any $i,j$. Let $c=\prod p_m ^{c_m}$, and the $p_j$ and $p_i$ are uniquely determined. Then $$ab=\prod p_i^{a_i}p_j^{b_j}=\prod p_m ^{nc_m}$$ But this means, since $p_i \neq p_j$, that $$a_i =nc_{m_i}$$ $$b_j =nc_{m_j}$$ Thus $$a = \prod p_i^{nc_{m_i}}=x^n$$ $$b = \prod p_j^{nc_{m_j}}=y^n$$ What I seem to be saying is "if $a$ and $b$ have no common prime factors and $ab=c^n$, then the prime factors of $a$ and $b$ must occur with multiplicity divisible by $n$. Else, $c$ wouldn't be a perfect $n$th power." Apostol suggests considering $d=(a,c)$. $(2)$ THEOREM For every $n \geq 1$ there exist uniquely determined $a>0$, $b>0$ such that $n=a^2b$, where $b$ is squarefree. PROOF From the fundamental theorem of arithmetic, one has $$n=\prod p_i^{a_i}$$ where the $p_i$ are unique. Group the product into two factors, according to the parity of the $a_i$s. If $a_i=2m_i$, write $$n=\left(\prod p_i^{m_i} \right)^2 \prod p_l^{a_l}$$ The remaining $a_l$ are all odd, viz. $a_l=2n_l+1$. Then write $$n=\left(\prod p_i^{m_i} \prod p_l^{n_l}\right)^2 \prod p_l$$ $$n=a^2 b$$ Since the $p_i$ were unique, so are $a^2$ and $b$, and $b$ is clearly squarefree. $(3)$ THEOREM If $2^n-1=p$, where $p$ is prime, then $n$ is prime. PROOF Reductio ad absurdum. Suppose $2^n-1$ is prime, and write $n=qp$.
Then $$2^n-1=2^{qp}-1=(2^q-1)(1+2^q+2^{2q}+\cdots+2^{q(p-1)})$$ thus $2^{q}-1\mid 2^n-1$, $\Rightarrow \Leftarrow$ $(4)$ THEOREM If $2^n+1$ is prime, then $n$ is a power of two. PROOF Reductio ad absurdum. Suppose that $2^n+1=p$, $p$ a prime, and $n$ is composite $$n=ed$$ where $e$ is odd. Then it is clear $n \neq 2^m$ and $$2^n+1=2^{ed}+1=(2^d+1)(1-2^d+2^{2d}-\cdots+2^{d(e-1)})$$ Thus $2^d+1 \mid 2^n+1$. $\Rightarrow \Leftarrow$ Then $n$ can't have any odd factors, that is $n=2^m$ for some $m$. NOTE: I mostly care about the proofs being correct or not. If they aren't let me know what the flaw is, and please hint a correction. I'm not looking for alternative proofs unless the proof is absolutely hokum. AI: Your proof of 1 is correct; it is indeed saying that if a product of two relatively prime (positive) integers is a perfect $n$th power, then each is a perfect $n$th power. This is a generalization of a result of Euclid's which states that if $ab$ is a perfect square, and $a$ and $b$ are relatively prime, then each of $a$ and $b$ is a perfect square; this result is used to characterize Pythagorean triples in the Elements. I'll assume that $n\gt 1$. To follow Apostol's hint, let $d=\gcd(a,c)$; then $d^n|c^n = ab$, and since $\gcd(d,b)|\gcd(a,b) = 1$, then $d^n|a$. Since $\gcd(a/d, c/d) = 1$, if we write $a=d^ny$ and $c=dx$, then $\gcd(d^{n-1}y,x) = 1$. But since $y|x^n$, it follows that $y=1$, so $a=d^n$. Now use a symmetric argument to show that $b$ is an $n$th power. Note that (I think) we did not use unique factorization into irreducibles, so this argument should work in any gcd-domain, not just in UFDs. Your proof for 3 is almost okay, but you should really specify that $n=pq$ with $1\lt p,q\lt n$ if you want to argue by contradiction. Then your contradiction arises from the fact that you cannot have $2^q-1 = 1$ nor $1+2^q + \cdots + 2^{q(p-1)}=1$ (you never took this into account!).
Alternatively, you can do it directly, by noting that if $n=pq$, then the factorization you give forces either $2^{q}-1=1$, hence $q=1$, or $1+2^q + \cdots + 2^{q(p-1)}=1$, hence $p-1=0$. Your argument for 4 is not quite right, because we should not assume that $n$ is composite. If you want to argue by contradiction, you should only assume that $n$ is divisible by an odd number greater than $1$. And you also need to account for the possibility that the second factor in your factorization equals $1$; that is, that $$1-2^d + 2^{2d} -2^{3d}+ \cdots +2^{(2k)d} = 1.$$ The argument is that since $2^{2r} - 2^{2r-1}$ is positive, this can only happen if $2k=0$, hence the odd factor $e = 2k+1$ must be equal to $1$.
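The two factorization identities behind proofs (3) and (4) are easy to sanity-check numerically (a small Python sketch of the divisibilities, my own):

```python
# If n = q*p with q, p > 1, then 2^q - 1 divides 2^n - 1.
for q in (2, 3, 5):
    for p in (2, 3, 4):
        n = q * p
        assert (2 ** n - 1) % (2 ** q - 1) == 0

# If n = e*d with e odd and e > 1, then 2^d + 1 divides 2^n + 1.
for d in (1, 2, 4):
    for e in (3, 5, 7):
        n = e * d
        assert (2 ** n + 1) % (2 ** d + 1) == 0
```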
H: Limit involving $(\sin x) /x -\cos x $ and $(e^{2x}-1)/(2x)$, without l'Hôpital Find: $$\lim_{x\to 0}\ \frac{\dfrac{\sin x}{x} - \cos x}{2x \left(\dfrac{e^{2x} - 1}{2x} - 1 \right)}$$ I have factorized it in this manner in an attempt to use the formulae. I have tried to use that for $x$ tending to $0$, $\dfrac{\sin x}{x} = 1$ and that $\dfrac{e^x - 1}x$ is also $1$. AI: You are given $$\lim_{x\to 0}\ \frac{\dfrac{\sin x}{x} - \cos x}{2x \left(\dfrac{e^{2x} - 1}{2x} - 1 \right)}$$ I guess you know $$\lim_{x\to 0}\dfrac{\sin x}{x}=1$$ $$\lim_{x\to 0} \dfrac{e^{2x} - 1}{2x}=1$$ The most healthy way of solving this is using $$\frac{\sin x}{x} = 1-\frac {x^2}{6}+o(x^2)$$ $$\frac{e^{2x}-1}{2x}=1+x+o(x)$$ $$\cos x = 1-\frac {x^2}{2}+o(x^2)$$ This gives $$\lim_{x\to 0}\ \frac{\dfrac{\sin x}{x} - \cos x}{2x \left(\dfrac{e^{2x} - 1}{2x} - 1 \right)} = \lim_{x \to 0}\;\frac{1 - \dfrac{x^2}{6} + o(x^2) - 1 + \dfrac{x^2}{2} + o(x^2)}{2x\left( 1 + x + o(x) - 1 \right)} = \lim_{x \to 0}\;\frac{\dfrac{x^2}{3} + o(x^2)}{2x^2 + 2x\,o(x)} = \lim_{x \to 0}\;\frac{\dfrac{1}{3} + \dfrac{o(x^2)}{x^2}}{2 + \dfrac{2\,o(x)}{x}} = \frac{1}{6}$$ Note that $$\frac{o(x^2)}{x^2} \to 0, \qquad \frac{o(x)}{x} \to 0$$
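The value $1/6$ agrees with a direct numerical evaluation near $0$ (a Python sketch; the function name is my own):

```python
from math import sin, cos, exp

def g(x):
    return (sin(x) / x - cos(x)) / (2 * x * ((exp(2 * x) - 1) / (2 * x) - 1))

# g(x) -> 1/6 as x -> 0, with error of order x
for x in (1e-2, 1e-3, 1e-4):
    assert abs(g(x) - 1 / 6) < 10 * x
```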
H: Diamonds of ideals, part 3 I'd like to wrap up the line of questioning started first in this question and then continued in this question. The only variant left to try is: "How close can you get to the Diamond lattice with two-sided ideals of a ring?" Naturally, the commutative example in the first post is an example with six ideals, the Diamond with one ideal on top. I'm putting (what I think is) the solution below for review. If all is well then it contains an alternate proof of why the Diamond can never appear in a lattice of ideals with $R$ at the top, even for noncommutative rings. (The previous proof factored $R$ into local rings.) This brings the line of questioning to closure to me, but maybe someone else has a good variant too! AI: Suppose $R$ has three maximal two-sided ideals $M_1, M_2, M_3$, and that their pairwise intersection is an ideal $K$ so that $R, M_1, M_2, M_3, K$ forms the Diamond within the lattice of ideals of $R$. Now $R/K$ obviously has three distinct maximal two-sided ideals, which are the images of the three maximal ideals of $R$ in $R/K$. But on the other hand, the Chinese Remainder Theorem says that $R/K\cong R/M_1\oplus R/M_2$, which is a product of two simple rings. However it is clear that the product of two simple rings has exactly two maximal ideals. Contradiction! Reading back through this, I thought this was a little strange. The intersection of two different maximal ideals is never contained by a third distinct maximal ideal? Did I do something silly? Edit: I suppose it also means that given $n$ distinct maximal ideals, $\cap_{i=1}^{n-1}P_i\not\subseteq P_n$. This is sounding a little more believable now, since if it were a containment, then $\prod_{i=1}^{n-1} P_i\subseteq P_n$, whence by primeness of $P_n$, one of the $P_i\subseteq P_n$, an absurdity.
H: Prove that $R \otimes_R M \cong M$ Let $R$ be a commutative unital ring and $M$ an $R$-module. I'm trying to prove $R \otimes_R M \cong M$ but I'm stuck. If $(R \otimes M, b)$ is the tensor product then I thought I could construct an isomorphism as follows: Let $\pi: R \times M \to M$ be the map $(r,m) \mapsto rm$. Then there exists a unique linear map $l: R \otimes M \to M$ such that $l \circ b (r,m)= l(r \otimes m) =r l(1 \otimes m) = \pi(r,m) = rm$. Now I need to show that $l$ is bijective. Surjectivity is clear. But I can't seem to show injectivity. In fact, by now I think it might not be injective. But I can't think of a different suitable map $\pi$. Then I thought perhaps I should show that $l$ has a two sided inverse but for an $m$ in $M$ I can't write down its inverse. How do I finish the proof? AI: For an inverse, define $M \to R \otimes M$ by $m \mapsto 1 \otimes m$. I suppose you could also show directly that the map is injective, but the point remains the same: $R$ contains $1$. If $\sum r_i \otimes m_i$ is in the kernel then $\sum r_im_i = 0$, and we can write the tensor as $\sum 1 \otimes r_im_i = 1 \otimes \sum r_im_i = 1 \otimes 0 = 0$. Another way is to show that your map $R \times M \to M$ satisfies the universal property of the tensor product of $R$ and $M$.
H: Compute closed formula for a sum. Let $k$ be an integer. How can we compute the closed formula for $$ \sum_{m=0}^{k} (m+1)(m+2)(2m+3)(3m+4)(3m+5)? $$ AI: We can expand the polynomial we are summing over to give a degree 5 polynomial in $k$, as follows: $$(m+1)(m+2)(2m+3)(3m+4)(3m+5)=18m^{5}+135m^{4}+400m^{3}+585m^{2}+422m+120$$ Therefore, we can write the summation as: $$18\sum_{m=0}^{k}{m^{5}}+135\sum_{m=0}^{k}{m^{4}}+400\sum_{m=0}^{k}{m^{3}}+585\sum_{m=0}^{k}{m^{2}}+422\sum_{m=0}^{k}{m}+120\sum_{m=0}^{k}1$$ We can therefore compute the final answer without much difficulty using standard results: $$\begin{equation}\sum_{m=0}^{k}{m^{5}}=\frac{k^{2}(2k^{2}+2k-1)(k+1)^{2}}{12}\end{equation}$$ $$\begin{equation}\sum_{m=0}^{k}{m^{4}=\frac{k(2k+1)(k+1)(3k^{2}+3k-1)}{30}}\end{equation}$$ $$\begin{equation}\sum_{m=0}^{k}{m^{3}}=\frac{k^{2}(k+1)^{2}}{4}\end{equation}$$ $$\begin{equation}\sum_{m=0}^{k}{m^{2}}=\frac{k(k+1)(2k+1)}{6}\end{equation}$$ $$\begin{equation}\sum_{m=0}^{k}{m}=\frac{k(k+1)}2{}\end{equation}$$ $$\begin{equation}\sum_{m=0}^{k}{1}=k+1\end{equation}$$ Combining all these, we have the following simplified expression: $$(k+2)^{2}\left(3k^{4}+24k^{3}+67k^{2}+76k+30\right)$$ Testing for a few simple test cases: $$\begin{align}k = 1: 1800 \\ k=2: 11040 \\ k=4: 133560\end{align}$$ These correspond to what we get if we compute the summation by hand, so this is our final closed form.
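The closed form is easy to verify against direct summation for many values of $k$ at once (a Python sketch; the function names are my own):

```python
def term(m):
    return (m + 1) * (m + 2) * (2 * m + 3) * (3 * m + 4) * (3 * m + 5)

def closed_form(k):
    return (k + 2) ** 2 * (3 * k ** 4 + 24 * k ** 3 + 67 * k ** 2 + 76 * k + 30)

# exact integer arithmetic, so equality can be checked directly
for k in range(100):
    assert sum(term(m) for m in range(k + 1)) == closed_form(k)
```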
H: Arclength of the curve $y= \ln( \sec x)$ $ 0 \le x \le \pi/4$ Arclength of the curve $y= \ln( \sec x)$ $ 0 \le x \le \pi/4$ I know that I have to find its derivative which is easy, it is $\tan x$ Then I put it into the arclength formula $$\int \sqrt {1 - \tan^2 x}$$ From here I am not sure what to do, I put it in wolfram and it got something massive looking. I know I can't use u substitution and I am pretty certain I have to algebraically manipulate this before I can continue but I do not know how. AI: The arclength formula is $$\mathrm S_a^b(f) =\int_a^b \sqrt{1+f'(x)^2}dx$$ (note the $+$ sign, not the $-$ you wrote). You have $$f(x) = \log \sec x$$ This means $$f'(x) = \tan x$$ Then you need to find $$\mathrm S =\int_0^{\pi/4} \sqrt{1+\tan^2 x}dx$$ Remember that $$1+\tan^2 x=\sec ^2 x$$ Also, remember the secant is positive in the first quadrant, so $$\mathrm S =\int_0^{\pi/4} \sqrt{\sec^2 x}dx$$ $$\mathrm S =\int_0^{\pi/4} \sec xdx$$
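Since $\int \sec x\,dx = \ln|\sec x + \tan x| + C$, the exact arclength is $\ln(\sqrt2+1)$, and a midpoint-rule evaluation of $\int_0^{\pi/4}\sqrt{1+\tan^2 x}\,dx$ confirms it (a Python sketch; the helper name is my own):

```python
from math import tan, sqrt, log, pi

def arclength(n=100000):
    a, b = 0.0, pi / 4
    h = (b - a) / n
    # midpoint rule on sqrt(1 + tan(x)^2) = sec(x) over [0, pi/4]
    return h * sum(sqrt(1 + tan(a + (i + 0.5) * h) ** 2) for i in range(n))

exact = log(sqrt(2) + 1)     # ln|sec x + tan x| evaluated from 0 to pi/4
assert abs(arclength() - exact) < 1e-8
```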
H: "Negative" versus "Minus" As a math educator, do you think it is appropriate to insist that students say "negative $0.8$" and not "minus $0.8$" to denote $-0.8$? The so called "textbook answer" regarding this question reads: A number and its opposite are called additive inverses of each other because their sum is zero, the identity element for addition. Thus, the numeral $-5$ can be read "negative five," "the opposite of five," or "the additive inverse of five." This question involves two separate, but related issues; the first is discussed at an elementary level here. While the second, and more advanced, issue is discussed here. I also found this concerning use in elementary education. I recently found an excellent historical/cultural perspective on What's so baffling about negative numbers? written by a Fields medalist. AI: I would encourage (maybe insist is too strong) to use "negative". It's not the worst idiosyncrasy, though. I prefer this distinction so that the unary "-" and binary "-" are two different things. It irritates me a little more when students say "times-ing it by 5", or "matricee".
H: Simplification of the Expected Value via CDF: Does it work for ALL Probability Distributions? If a random variable $X$ has a density $f$, then the expected value can be simplified: $$\mathbb{E}[X]=a+∫_{a}^{b}(1-F(x))dx,$$ where $F$ is the cumulative distribution function, $F(x)=\Pr(X≤x)$. My question is: Should this simplification work for all probability distributions, even for those that are not absolutely continuous with respect to Lebesgue measure on $[a,b]$? If $X$ is any real-valued random variable with support $[a,b]$ and $F(x)=\Pr(X≤x)$, is it always true that $$ \mathbb{E}[X]=a+∫_{a}^{b}(1-F(x))dx$$ The answer to this question would be simple if one could generally extend integration-by-parts to Lebesgue-Stieltjes integration. However, this is not possible; see Wikipedia. AI: Let's use $X$ for the random variable, keeping $x$ for the variable of integration. In general, we have a probability measure $\mu$ on $I = [a,b]$ and $$\eqalign{E[X] &= \int_I x\ d\mu(x) = a + \int_I (x-a) d\mu(x)\cr &= a + \int_I \int_a^x 1 \ dt \ d\mu(x) = a + \int_a^b \int_{[t,b]} 1 \ d\mu(x)\ dt \cr &= a + \int_a^b (1 - F(t)) \ dt\cr}$$ (note that $\int_{[t,b]} 1 \ d\mu(x) = 1 - F(t-)$, but that is $1 - F(t)$ (Lebesgue) almost everywhere)
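As a concrete check with a distribution that is *not* absolutely continuous, take the mixture of a point mass at $1/2$ and a uniform density on $[0,1]$, each with weight $1/2$ (a Python sketch; the midpoint-sum approximation is my own):

```python
# X = 1/2 with probability 1/2; otherwise uniform on [0, 1].
# The CDF jumps by 1/2 at x = 1/2, so X has no density.
def F(x):
    return 0.5 * x + (0.5 if x >= 0.5 else 0.0)

E_exact = 0.5 * 0.5 + 0.5 * 0.5      # atom part + uniform part = 1/2

a, b, n = 0.0, 1.0, 200000
h = (b - a) / n
# midpoint sum for int_a^b (1 - F(t)) dt; midpoints never hit the jump
integral = h * sum(1 - F(a + (i + 0.5) * h) for i in range(n))
assert abs((a + integral) - E_exact) < 1e-6    # E[X] = a + int_a^b (1 - F)
```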
H: Inverses and orders in the group $Z_5$ I'd like some guidance on how to solve this kind of question (I have many like this), and I have no clue what I need to do here: I need to find $$g^{-1}$$ and $$o(g)$$ when $G$ is $Z_{_5}$. Adding to this: if I have $G = Z^\times_{_5}$, what group is this? AI: So $\mathbb{Z}_5 = \{[0], [1], [2], [3], [4]\}$ is an additive group (so we can/should write $-g$ for $g^{-1}$). You should try to write out the whole addition table. That way you will be able to see what the inverse of an element is. So for example, for $g = [2]$ you have that $[2] + [3] = [5] = [0]$, hence the inverse of $[2]$ is $[3]$. To find the order of an element you keep adding it to itself until you reach $[0]$. So for example if $g = [3]$, then you have $[3] + [3] + [3] + [3] + [3] = [15] = [0]$. So the order of $[3]$ is $5$. (Note here that adding $[3]$ to itself fewer times will not give you $[0]$.) Added: You ask about the group $\mathbb{Z}^{\times}_5$. It is a multiplicative group. We take the underlying set of the additive group and remove $[0]$. So we have $$ \mathbb{Z}_5^{\times} = \{[1], [2], [3], [4]\}. $$ Now the composition of elements is multiplication. So we have for example that $[3]\cdot[4] = [12] = [2].$ Or we have that $[2]^{-1} = [3]$, because $[2]\cdot [3] = [2\cdot 3] = [6] = [1]$. For the order of an element we now just use multiplication. So for example if $g = [2]$, then $[2]\cdot[2]\cdot[2]\cdot[2] = [16] = [1]$, so the order of $[2]$ is $4$ (again noting that any lower power of $[2]$ will not give $[1]$: the powers of $[2]$ are $[2], [4], [3], [1]$). Another note is that taking the units (i.e. putting the ${}^\times$ on $\mathbb{Z}_n$) doesn't always mean that you just take away zero. You are simply asking for all the elements that have a multiplicative inverse; for $n = 5$ that is every nonzero class because $5$ is prime. As suggested in the comments to your question, you can take a look at the Wikipedia article about the multiplicative group.
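If you want to check your hand computations, a brute-force sketch over residues reproduces the inverses and orders above (the helper names are my own):

```python
# Brute-force inverses and orders in Z_5 (additive) and Z_5^x (multiplicative).
n = 5

def additive_inverse(g):
    # -g in Z_n: the h with (g + h) % n == 0
    return (-g) % n

def additive_order(g):
    # smallest k >= 1 with k*g divisible by n
    k, s = 1, g % n
    while s != 0:
        s, k = (s + g) % n, k + 1
    return k

def mult_inverse(g):
    # the h in Z_n^x with (g * h) % n == 1
    return next(h for h in range(1, n) if (g * h) % n == 1)

def mult_order(g):
    # smallest k >= 1 with g**k congruent to 1 (mod n)
    k, s = 1, g % n
    while s != 1:
        s, k = (s * g) % n, k + 1
    return k

print(additive_inverse(2), additive_order(3))  # 3 5
print(mult_inverse(2), mult_order(2))          # 3 4
```

Changing `n` lets you explore $\mathbb{Z}_n$ for composite $n$, where `mult_inverse` fails for non-units, illustrating the final remark in the answer.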
H: boundary of multiple sets Is it possible for a boundary point of a set $A$ to be a boundary point with respect to multiple other sets, in the sense that any neighborhood of such a point contains points of BOTH $B$ and $C$? If so, can you give me an example? If so, is it then the case that the entire boundary of $A$ can be a boundary of $A$ with both $B$ and $C$? If so, what are the rules governing closed and open sets? If so, is there any necessary relation implied between $B$ and $C$? AI: So you want $\partial(A)\subseteq \partial(B)\cap\partial(C)$? Yes, it's possible, even with (though this was not specified) $A$, $B$, and $C$ pairwise distinct. For example, take $X$ to be the real line with the usual topology, $A=[0,1]$, $B=(-1,0)\cup (1,2)$, $C=(0,1)$. Then $\partial(A)=\partial(C)=\{0,1\}$ and $\partial(B)=\{-1,0,1,2\}$, so $\partial(A)\subseteq \partial(B)\cap\partial(C)$. I'm not sure what "what are the rules governing closed and open sets" means: the usual rules, I would guess: a set is open if and only if its complement is closed, and the open sets must include $\varnothing$ and the entire space, and be closed under finite intersections and under arbitrary unions. I don't think there is any relationship implied between $B$ and $C$. Above you see an example where $B\cap C=\varnothing$. Replace $C$ with $(-\frac{1}{2},0)\cup (\frac{1}{2},1)$ for an example where $B\cap C\neq\varnothing$, but you have neither $B\subseteq C$ nor $C\subseteq B$. And of course, if $B$ and $C$ work, then so do $X-B$ and $C$, $B$ and $X-C$, and $X-B$ and $X-C$ (since the boundary of a set equals the boundary of its complement).
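For a concrete check of the first example, here is a small sampling-based sketch (a sanity check only, not a proof, since it samples finitely many points of each interval):

```python
# Check for A = [0,1], B = (-1,0) U (1,2), C = (0,1): every open interval around a
# boundary point of A contains points of both B and C.
def in_B(x):
    return -1 < x < 0 or 1 < x < 2

def in_C(x):
    return 0 < x < 1

def meets_both(p, eps):
    # sample the open interval (p - eps, p + eps)
    samples = [p + eps * t / 1000.0 for t in range(-999, 1000)]
    return any(in_B(x) for x in samples) and any(in_C(x) for x in samples)

boundary_A = [0.0, 1.0]
result = all(meets_both(p, eps) for p in boundary_A for eps in (1.0, 0.1, 0.01, 0.001))
print(result)                  # True
print(meets_both(0.5, 0.01))   # False: 0.5 is interior to A; small neighborhoods miss B
```

The contrast with the interior point $0.5$ shows that the property really is special to the boundary points of $A$.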
H: Area of a surface of revolution of $y = \sqrt{4x+1}$ $y = \sqrt{4x+1}$ for $1 \leq x \leq 5$, revolved about the $x$-axis. I really have no idea what to do with this problem; I attempted something earlier which I will not type up because it took me two pages. With $y = \sqrt{4x+1}$ and $y' = 2/\sqrt{4x+1}$, the surface-area integral is $$\int 2 \pi \sqrt{4x+1} \sqrt{1 + \frac{4}{1+4x}}\,dx = 2 \pi \int \sqrt{4x+1} \sqrt{1 + \frac{4}{1+4x}}\,dx.$$ Nothing really seems obvious at this point. I attempted the substitution $u = 1+4x$ (so $dx = du/4$), giving $$\frac \pi 2 \int \sqrt{u} \sqrt{1 + \frac{4}{u}}\,du,$$ but it does not seem to simplify the problem. I thought about making a wonky trig substitution, but it seemed overly complicated and didn't help. AI: From the last step: $$ \frac{\pi}{2} \int \sqrt u\, \frac{\sqrt{4 +u}}{\sqrt u}\, du = \frac \pi 2 \int \sqrt{4 + u}\, du. $$ Substituting $4 + u = p \implies du = dp$, we get $$ \frac \pi 2 \int \sqrt p\, dp = \frac \pi 2 \cdot \frac{p^{3/2}}{3/2} = \frac \pi 3 (4+u)^{3/2}. $$ Since $u = 1+4x$ runs from $5$ to $21$ as $x$ runs from $1$ to $5$, the area is $$ \frac \pi 3 \Big[(4+u)^{3/2}\Big]_{5}^{21} = \frac \pi 3 \left(25^{3/2} - 9^{3/2}\right) = \frac \pi 3 (125 - 27) = \frac{98\pi}{3}. $$
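As a sanity check on the closed form $98\pi/3$, one can integrate the original surface-area integrand numerically (a sketch using Simpson's rule; the step count is an arbitrary choice):

```python
import math

# Numerically verify the surface area of y = sqrt(4x+1), 1 <= x <= 5, about the x-axis.
def integrand(x):
    # 2*pi*y*sqrt(1 + (dy/dx)^2) with y = sqrt(4x + 1), dy/dx = 2/sqrt(4x + 1)
    y = math.sqrt(4 * x + 1)
    dydx = 2 / math.sqrt(4 * x + 1)
    return 2 * math.pi * y * math.sqrt(1 + dydx ** 2)

a, b, n = 1.0, 5.0, 10000  # n must be even for Simpson's rule
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
area = s * h / 3

print(area, 98 * math.pi / 3)  # both about 102.6254
```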
H: Is the support of a random variable those values where the graph of its distribution is not "flat"? In the literature the support, $S$, of a random variable $X$ is defined as the smallest closed subset of the real line $\mathbb{R}$ with probability $1$. I am looking to prove that $S$ is exactly the set where the graph of $X$'s cdf, $F$, is not "flat". More formally, writing $O_x$ for an open interval containing $x$ and $P(O_x)$ for its probability, show that $S=\{x: P(O_x)\neq 0 \text{ for every } O_x\}$. To clarify: prove that this set is closed, has probability $1$, and is the smallest closed set with probability $1$. AI: If $x \notin S$, since $S$ is closed there is an open interval $O_x$ around $x$ that is disjoint from $S$, and since $P(S) = 1$ we must have $P(O_x) = 0$. Conversely, if there is an open interval $O_x$ around $x$ such that $P(O_x) = 0$, then ${\mathbb R} \backslash O_x$ is a closed subset of $\mathbb R$ with $P({\mathbb R} \backslash O_x) = 1$, so $S \subseteq {\mathbb R} \backslash O_x$, and in particular $x \notin S$.
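A rough numerical illustration of the characterization (my own sketch, not part of the proof; the grid and the interval width are arbitrary choices): for a Bernoulli$(1/2)$ variable, the grid points whose small neighborhoods carry positive empirical probability recover the support $\{0,1\}$, exactly where the cdf jumps rather than staying flat.

```python
import random

# Empirically recover the support of a Bernoulli(1/2) variable as the grid points x
# such that a small open interval around x has positive empirical probability.
random.seed(0)
samples = [random.randint(0, 1) for _ in range(100000)]

def prob_interval(lo, hi):
    # empirical probability of the open interval (lo, hi)
    return sum(lo < s < hi for s in samples) / len(samples)

eps = 0.1
grid = [i / 10 for i in range(-10, 21)]  # grid on [-1, 2] with spacing 0.1
support_approx = [x for x in grid if prob_interval(x - eps, x + eps) > 0]
print(support_approx)
```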
H: How is this an example of a linear system? Consider the following transfer function: $$\frac{Y}{X} = \frac{A_0}{\omega_0^2 s^2 + 1}$$ or something similar that is supposed to represent an undamped block on a spring. I encountered the following question about it which has me flummoxed: a linear system, when given a pure oscillating input, is supposed to produce an output signal that is also a pure oscillation at the same frequency. However, an oscillating input at resonance produces an output of the form $t \cos \omega_0 t$. That signal is not a single frequency. What is going on? Is this still a linear system? AI: It is still linear. As you say, you would expect that the system response to a "complex oscillating" input $e^{st}$ is a function of the same form, $H(s)e^{st}$, where $H$ is the transfer function. But as you notice, there are usually a number of exceptional frequencies, or eigenfrequencies, of the system, where this is not quite true. This happens at the values of $s$ for which $H(s)$ blows up. In your example, the transfer function has poles at $s = \pm i/\omega_0$. (The transfer function in your example should probably have been $H(s) = \dfrac{A_0}{s^2+\omega_0^2}$, with poles at $s = \pm i\omega_0$.) Note that a complex oscillation at the complex frequency $s = i\omega_0$ contains the resonant cosine term: $$e^{i\omega_0 t} = \cos \omega_0 t + i\sin\omega_0 t.$$
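A numerical sketch of the resonance phenomenon (my own illustration; $\omega_0 = 2$ and the time span are arbitrary): integrating the undamped oscillator $y'' + \omega_0^2 y = \cos\omega_0 t$ from rest reproduces the secular solution $t\sin(\omega_0 t)/(2\omega_0)$, whose amplitude grows linearly in $t$ instead of settling into a fixed-amplitude oscillation.

```python
import math

# Drive an undamped oscillator exactly at resonance, starting from rest, and compare
# with the known secular solution t*sin(w0*t)/(2*w0).
w0 = 2.0

def deriv(t, y, v):
    return v, math.cos(w0 * t) - w0 ** 2 * y

# classical fixed-step RK4 from t = 0 to t = 10
t, y, v, h = 0.0, 0.0, 0.0, 1e-3
while t < 10.0 - 1e-9:
    k1y, k1v = deriv(t, y, v)
    k2y, k2v = deriv(t + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
    k3y, k3v = deriv(t + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
    k4y, k4v = deriv(t + h, y + h * k3y, v + h * k3v)
    y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
    v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    t += h

exact = t * math.sin(w0 * t) / (2 * w0)
print(y, exact)  # the two agree closely
```

Linearity is untouched: doubling the drive doubles $y$ at every $t$. What fails at the pole is only the assumption that the steady response has the form $H(s)e^{st}$.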