H: What is the right solution for derivative of implicit function? There's a need to find $y'$ of: $$\arctan(y/x)=\ln\sqrt{x^2 + y^2}$$ Tried: $\dfrac{1}{1+(y/x)^2}\cdot\left(\dfrac{y}{x}\right)'=(x^2+y^2)^{-1/2}\cdot\dfrac{1}{2}\cdot(x^2+y^2)^{-1/2}\cdot(2x+2y\cdot y')$ AI: $$\arctan\left(\frac{y}{x}\right)=\ln\sqrt{x^2+y^2}$$ Differentiating both sides with respect to $x$, $$\frac{1}{1+(y/x)^2}\cdot\frac{y'x-y}{x^2}=\frac{x+yy'}{x^2+y^2};$$ solve for $y'$.
H: Evaluate $\sum\limits_{k=2}^n \frac{n!}{(n-k)!(k-2)!} $ Question is to evaluate $$\sum_{k=2}^n \frac{n!}{(n-k)!(k-2)!} $$ What I have done so far is $$\sum_{k=2}^n \frac{n!}{(n-k)!(k-2)!}=n(n-1)\sum_{k=2}^n \frac{(n-2)!}{(n-k)!(k-2)!}=n(n-1)\sum_{k=2}^n \binom{n-2}{k-2}$$ I could see that $$\sum_{k=2}^n \binom{n-2}{k-2}$$ looks very similar to $$\sum_{k=0}^n \binom{n}{k}=(1+1)^n$$ So we should have something like $$\sum_{k=2}^n \binom{n-2}{k-2}=(1+1)^{n-2}$$ But I am not very sure if this is even true. I tried adding $\binom{n-2}{k-2}$ to $\binom{n-2}{k-1}$ with the formula $\binom{n-1}{r-1}+\binom{n-1}{r}=\binom{n}{r}$, hoping to raise the $n-2$ in the expansion to $n$, but it was not helpful. I tried to consider the $(1+x)^n$ expansion and tried differentiating it, but this was also not very helpful. I would be thankful if someone can help me to clear this. Thank you AI: $$\sum_{k=2}^n{n-2\choose k-2}\quad=\quad\sum_{j=0}^{n-2}{n-2\choose j}=2^{n-2}\qquad\Longrightarrow\qquad S=n(n-1)2^{n-2}$$
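The closed form is easy to sanity-check numerically; here is a short Python sketch (the names `lhs` and `rhs` are ad hoc, not from the answer):

```python
from math import factorial

def lhs(n):
    # the original sum, evaluated term by term
    return sum(factorial(n) // (factorial(n - k) * factorial(k - 2))
               for k in range(2, n + 1))

def rhs(n):
    # the closed form n(n-1)2^(n-2)
    return n * (n - 1) * 2 ** (n - 2)
```

For $2 \le n \le 20$ the two sides agree exactly.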
H: Subnet vs. Subsequence I'm looking for an example of a topological space $X$, a sequence $(x_n)_{n \in \mathbb{N}}$ in $X$ and a converging subnet $(x_i)_{i\in I}$ of $(x_n)$, but with the property that $x_n$ does not have any converging subsequence. I have examples of $X$ and $(x_n)$ such that there are converging subnets $(x_i)$ and no converging subsequence of $(x_n)$, but I would like an explicit example of such a subnet $(x_i)$. (I still can't imagine how such subnets could look...) Thank you very much in advance AI: Let $X=\beta\Bbb N$, the Čech-Stone compactification of $\Bbb N$. Let $\mathscr{U}\in(\beta\Bbb N)\setminus\Bbb N$. Let $$D=\{\langle n,U\rangle\in\Bbb N\times\mathscr{U}:n\in U\}\;,$$ and for $\langle m,U\rangle,\langle n,V\rangle\in D$ write $\langle m,U\rangle\preceq\langle n,V\rangle$ iff $U\supseteq V$; $\langle D,\preceq\rangle$ is a directed set. The net $$\nu:D\to\Bbb N:\langle n,U\rangle\mapsto n$$ converges to $\mathscr{U}$ in $\beta\Bbb N$. Let $\sigma=\langle n:n\in\Bbb N\rangle$; the sequence $\sigma$ has no convergent subsequence in $\beta\Bbb N$. However, $\nu$ is a subnet of $\sigma$: for each $A\subseteq\Bbb N$, if $\sigma$ is eventually in $A$, then so is $\nu$. (Note that I use the definition of subnet from J.F. Aarnes & P.R. Andenæs, ‘On nets and filters’, Math. Scand. $31$ ($1972$), $285$-$292$, and these excellent notes by Saitulaa Naranong, not the slightly inferior one found, for example, in Kelley.)
H: Show that every open subset of a surface is a surface. Show that every open subset of a surface is a surface. I think, first of all, I need to define an atlas $\{\sigma_{\alpha}: U_{\alpha}\to \Bbb R^3\}$ for the surface $S$, where each $U_{\alpha}$ is an open set. From there, how do I continue the proof? Please help me. Thank you. AI: If $S'$ is an open subset of $S$, then $U' = \sigma^{-1}(S')$ is open, so you can take the restriction $\sigma'$ of $\sigma$ to $U'$, and you have a homeomorphism $\sigma' : U'\to S'\cap W$.
H: Find the Characteristic polynomial The characteristic polynomial of $A \in M_{4}(\Bbb R)$ is: $P(t)= t^4-t$ Find the Characteristic polynomial of: $A^2, A^4$ ($A^4$ was easy but with $A^2$ I'm stuck) Same question with the field $\Bbb F_3$ and $\Bbb F_2$ Thanks. AI: $A$ has distinct eigenspaces with eigenvalues $0,1,\omega,\bar{\omega}$. So $A^2$ has those same eigenspaces, but with eigenvalues $0^2,1^2,\omega^2,\bar{\omega}^2$. That is, $0,1,\bar{\omega},\omega$. So $A^2$ has characteristic polynomial $x(x-1)(x-\bar{\omega})(x-\omega)$, or $x(x-1)(x^2+x+1)$, which is just $x^4-x$ again. In those other fields, the factorization of $P$ to get the four eigenvalues may be different. In $\mathbb{F}_2$, the factorization is essentially the same and requires you to use a root in a degree $2$ extension. In $\mathbb{F}_3$, the polynomial fully factors as $x(x-1)^3$, so there is a repeated root. There may be a dimension three eigenspace, a dimension three shearing space, or a dimension one eigenspace paired with a dimension two shearing space. But whichever way, after squaring there will be a dimension three something associated to the $(x-1)^3$ factor. And the argument is basically the same, still leading back to a characteristic polynomial of $x^4-x$.
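As a concrete spot-check of the characteristic-zero case, one can square the companion matrix of $t^4-t$ and recompute its characteristic polynomial with the Faddeev–LeVerrier recurrence; `mat_mul` and `char_poly` below are ad hoc helpers:

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly(A):
    # Faddeev-LeVerrier recurrence: coefficients of det(tI - A), leading 1 first
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        N = mat_mul(A, M)
        c = -sum(N[i][i] for i in range(n)) / k
        coeffs.append(c)
        M = [[N[i][j] + (c if i == j else 0) for j in range(n)] for i in range(n)]
    return coeffs

# companion matrix of t^4 - t (eigenvalues 0, 1, w, conj(w) with w a cube root of 1)
C = [[0, 0, 0, 0],
     [1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]
C2 = mat_mul(C, C)
```

Both `char_poly(C)` and `char_poly(C2)` come out as $t^4 - t$, i.e. coefficients $[1,0,0,-1,0]$.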
H: Show that the following set has the same cardinality as $\mathbb R$ using CSB We have to show that the following set has the same cardinality as $\mathbb R$ using CSB (Cantor–Bernstein–Schroeder theorem). $\{(x,y)\in \Bbb{R^2}\mid x^2+y^2=1 \}$ I think that these are the two functions: $f:(x,y)\to \Bbb{R} \\f(x)=x,\\f(y)=y $ $g:\Bbb{R}\to (x,y)\\ g(x)=\cos(x),\\g(y)=\sin(y)$ Is this correct? Thanks. AI: Finding an injective function $g\colon\mathbb{R}\to X$, where $X$ is your set, is easy: just remember the formulas for $\cos x$ and $\sin x$ in terms of $\tan(x/2)$. Set $$g(t)=\left(\frac{1-t^2}{1+t^2},\frac{2t}{1+t^2}\right)$$ For an injective map $f\colon X\to \mathbb{R}$, observe that any point on the unit circle determines an angle. Alternatively, observe that at most two points of $X$ have the same $x$-coordinate: $$f(x,y)=\begin{cases}x & \text{if $y>0$}\\x+100 & \text{if $y<0$}\\1 &\text{if $x=1$ and $y=0$}\\-1&\text{if $x=-1$ and $y=0$}\end{cases}$$
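A quick numeric sanity check (finite sample only, so a sketch rather than a proof) that the answer's $g$ lands injectively on the unit circle:

```python
def g(t):
    # the injection R -> unit circle from the answer
    d = 1.0 + t * t
    return ((1 - t * t) / d, 2 * t / d)

samples = [i / 10 - 5 for i in range(101)]   # t in [-5, 5]
points = [g(t) for t in samples]
```

Every sampled image satisfies $x^2+y^2=1$ to machine precision, and distinct parameters give distinct points.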
H: Probability of a string of $8$ successive heads in a trial of $10$ tosses. Question is: A fair coin is tossed ten times. What is the probability that we observe a string of $8$ heads in succession at some time? I do not even understand this question properly. Probability of getting the first head would be $\frac{1}{2}$ as we want just $\{H\}$ from $\{H,T\}$. Probability of getting two heads would be $\frac{1}{4}$ as we want just $\{HH\}$ from the collection $\{HH, HT, TH, TT\}$. Probability of getting three heads would be $\frac{1}{2}\cdot\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{8}$. So, generalizing this further, I would like to say that the probability of getting $8$ heads consecutively is $(\frac{1}{2})^8$. But then, I am sure I am missing something, as there are not just $8$ coins but actually $10$ coins. So, I should even consider $THHHHHHHHT$ and $TTHHHHHHHH$. I guess in deciding that the probability is $(\frac{1}{2})^8$ I have considered only the case $HHHHHHHH**$. So, I have realized that I should mix this with something else, but I have no clue how to mix that and what to mix... I would be thankful if someone can help me to clear this issue. Thank you AI: We observe a string of $8$ consecutive heads exactly when the run of heads begins at toss $1$, $2$, or $3$: Run beginning at toss $1$: HHHHHHHHHH, HHHHHHHHHT, HHHHHHHHTH, HHHHHHHHTT. Run beginning at toss $2$ (preceded by a tail): THHHHHHHHH, THHHHHHHHT. Run beginning at toss $3$ (preceded by a tail): HTHHHHHHHH, TTHHHHHHHH. Hence $8$ favorable cases among $2^{10}$: $$P=\frac{8}{2^{10}}=2^{-7}$$
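The count of $8$ favorable outcomes is small enough to confirm by brute force over all $2^{10}$ sequences; `longest_head_run` is an ad hoc helper:

```python
from itertools import product

def longest_head_run(seq):
    # length of the longest block of consecutive H's in seq
    run = best = 0
    for toss in seq:
        run = run + 1 if toss == "H" else 0
        best = max(best, run)
    return best

favorable = [s for s in product("HT", repeat=10) if longest_head_run(s) >= 8]
probability = len(favorable) / 2 ** 10
```

The enumeration finds exactly the $8$ strings listed in the answer.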
H: If $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ is exact and $B\simeq A\oplus C$ as an $R$-module, does this sequence split? Suppose $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ is a short exact sequence such that $B\simeq A\oplus C$ as an $R$-module. Does this short exact sequence split? I think the answer is no, but then I need a counterexample, and I could not find one. Can you help me? AI: Although this isn't quite a duplicate, the case $R=\mathbb{Z}$ is thoroughly discussed in A nonsplit short exact sequence of abelian groups with $B \cong A \oplus C$.
H: Is the image of $\mathrm{Spec}(\varphi)$, where $\varphi$ is a localization, always open? Let $R$ be a commutative ring and $S \subseteq R$ a multiplicatively closed subset. Consider $\varphi: R \rightarrow S^{-1}R$, the localization of $R$ by $S$. Then it can be shown that the corresponding map $\mathrm{Spec}(\varphi): \mathrm{Spec}S^{-1}R \rightarrow \mathrm{Spec}R \;$ defined by $\;\mathfrak{a}\mapsto \varphi^{-1}(\mathfrak{a})$ is a homeomorphism (with respect to the Zariski topologies) between $\mathrm{Spec}S^{-1}R$ and its image $U:=\{\mathfrak{q} \in \mathrm{Spec}R \;|\; \mathfrak{q} \cap S= \emptyset\}$. If we assume the set $S$ to be "principal", i.e. of the form $\{f^n \; | \;n \in \mathbb{N}\}$ for some $f \in R$, then it is easy to see that the image $U$ is actually of the form $\mathrm{Spec}R \setminus V((f))$, therefore open. My question is: What happens in the case of a general multiplicative set? Namely, is the set $U$ always open? If not, is the above result (i.e. the mentioned homeomorphism) still interesting for some reason(s)? I am also interested in examples where the set $U$ is not open (if possible) and in conditions on the ring $R$ which would force the set $U$ to be open (for any choice of $S$). AI: $U$ can be any intersection of basic open subsets. If it were always open, this would mean that open subsets are closed under arbitrary intersections. Of course this fails in the common examples. You should be able to find an example for $R=\mathbb{Z}$. [Always look at examples; algebraic geometry is not as abstract as it looks at first sight ...] The result you mention is useful because it shows, for example, that the scheme-theoretic fiber of a morphism of schemes has as underlying topological space the usual fiber.
H: Floor equation, determine solutions There is an equation $\lfloor\sqrt{6}x\rfloor=\lfloor2.5x\rfloor$; determine how many integer and real solutions the equation has. I know that $\{-1,0,1\}$ satisfies the equation, but I don't know if there are any other solutions. AI: Since $\sqrt6<2.5$, $0\le\left\lfloor\sqrt6 x\right\rfloor=\lfloor 2.5x\rfloor$ if and only if there is an integer $n$ such that $$n\le\sqrt6 x\le 2.5x<n+1\;.$$ Suppose that $n=\sqrt6x$; then $x=\frac{n}{\sqrt6}$, so $2.5x=\frac{2.5}{\sqrt6}n$, and we require that this be less than $n+1$: $$\frac{2.5}{\sqrt6}n<n+1\;,\tag{1}$$ or $$n<\frac{\sqrt6}{2.5-\sqrt6}\approx48.5\;,$$ i.e., $n\le 48$. For example, if you take $n=48$, then $x=\frac{48}{\sqrt6}\approx19.59591794227$, and $$\lfloor 2.5x\rfloor\approx\lfloor48.98979485566\rfloor=48\;.$$ You can increase $x$ up to but not including $\frac{49}{2.5}=19.6$, so $n=48$ gives you solutions $$\frac{48}{\sqrt6}\le x<19.6\;.$$ If you take $n=0$, at the other extreme, then $0\le x<0.4$. I’ll leave it to you to look at the other integers and to do the analysis for the case $\left\lfloor\sqrt6 x\right\rfloor=\lfloor 2.5x\rfloor<0$.
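A grid search over $x\ge 0$ matches the analysis: the common floor values run up to exactly $n=48$ (a numeric sketch, accurate only to the grid resolution):

```python
import math

def matches(x):
    return math.floor(math.sqrt(6) * x) == math.floor(2.5 * x)

# common floor values hit on a fine grid over 0 <= x < 25
hits = {math.floor(2.5 * x) for x in (i / 2000 for i in range(50000)) if matches(x)}
```

The set of common values found contains $0$ and tops out at $48$, as derived above.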
H: Interior and closure of $\left\{ \frac{1}{n}: n \in \mathbb{N} \right\} $ in $\mathbb{R}$ and $\mathbb{R} \setminus \{ 0 \}$ What is the interior and closure of $A = \left\{ \frac{1}{n}: n \in \mathbb{N} \right\}$ in $\mathbb{R}$ and $\mathbb{R} \setminus \left\{ 0 \right\} $? I suppose that in both cases $\mbox{int}A = \emptyset$. Now the closure: in $\mathbb{R} \setminus \left\{ 0 \right\} $ we have $$\left( \mathbb{R} \setminus \left\{ 0 \right\} \right) \setminus A = ( - \infty,0) \cup (1,+\infty) \cup \bigcup_{n=1}^{ \infty } \left( \frac{1}{n+1} , \frac{1}{n} \right) $$ On the right side every set is open, so $\left( \mathbb{R} \setminus \left\{ 0 \right\} \right) \setminus A$ is open, hence $A$ is closed. Then we have $\mbox{cl}A = A$. But on the other hand, if we take $a_n = \frac{1}{n}$ we have that $a_n \rightarrow 0$ and of course $0 \not\in A$, so $A$ is not closed. Where is the mistake? In $\mathbb{R}$ now I'm really confused, so I don't know. AI: You made no mistake in the beginning: $A$ is closed in $\Bbb R\setminus\{0\}$, because its complement is open, just as you said. The fact that $\left\langle\frac1n:n\in\Bbb Z^+\right\rangle\to 0$ is irrelevant, because $0$ is not in the space $\Bbb R\setminus\{0\}$. That sequence is relevant when you consider $A$ as a subset of $\Bbb R$: it shows that $A$ is not closed in $\Bbb R$ and furthermore that $0\in\operatorname{cl}_{\Bbb R}A$. Is anything else in $\operatorname{cl}_{\Bbb R}A$?
H: $\limsup_{n\to\infty}$ of a sequence $\lt e$ Is there a sequence $c_n$ such that $$\limsup_{n\to\infty}\left(\frac{1+c_{n+1}}{c_n}\right)^n \lt e\;?$$ I tried to get the above sequence into a form that would reveal more about the potential upper limit, but I couldn't really figure out anything useful. How do I tackle this sort of problem? AI: Take $c_n=-1$ for $n$ even and $c_n=-2$ for $n$ odd. Then $\frac{1+c_{n+1}}{c_n}$ equals $\frac{1-2}{-1}=1$ for $n$ even and $\frac{1-1}{-2}=0$ for $n$ odd, so $\left(\frac{1+c_{n+1}}{c_n}\right)^n$ takes only the values $1$ and $0$, and $$\limsup_{n\to\infty}\left(\frac{1+c_{n+1}}{c_n}\right)^n = 1<e.$$
H: Quick way to determine if a rational function has a hole on the $x$-axis.... To sketch a rational function's graph, one step is to determine the sign $(+/-)$ of various intervals. I create intervals separated by the vertical asymptotes (VAs) and $x$-ints on a number line (since these are where a sign change can occur), and test a point in each interval. As you know, when you cancel a factor in the denominator, e.g. $(x-5)$, that is where a hole will be $(x=5)$. You then plug the $5$ into the simplified function to get the corresponding $y$-coordinate of the hole. Normally, I do not include the $x$-value of the hole on the interval number line, since a sign change can only happen at a hole if the hole is on the $x$-axis. This only happens when your cancelled factor cancels one in the denominator, and the subsequent plugging in of that $x$-value leads to a $y$-value of $0$. However, the only way that can happen is if the cancelled factor is repeated in the numerator (i.e. multiplicity $\gt 1$). For example, $$R(x)=\dfrac{(x-3)(x-3)}{(x-3)(x+5)}$$ The $(x-3)$ terms cancel, and when you do $f(3)$ you will get a $0$ in the numerator from the remaining $(x-3)$ term. So, can we conclude that the only way a hole can involve a sign change is if it's on the $x$-axis, and the cancelled factor must have a multiplicity of $\gt 1$ so that the function "equals" zero when you plug that $x$-value into the simplified version? Otherwise, with a multiplicity of just $1$, there is no way the hole can be on the $x$-axis, and there is no need to test for a sign change at that hole... Yea? AI: To graph a rational function you need to know where the non-removable singularities are. If you reduce the function to lowest terms, you will remove all the $x$ values where it appears to blow up (have a zero denominator) but actually doesn't. That leaves the real singularities (which are generally called poles), where the function will go to $\pm \infty$. 
Probably it's best to start your graphing by showing these asymptotes, and then fill in the rest of the sketch afterwards. Figuring out where R(x) = 0 and is thus crossing the x axis doesn't hurt, but isn't necessary. You can sketch using any known points, which is usually easier than finding the roots. For example if the numerator is not factored and is of the 5th degree, how will you find the roots? But you can sketch it fine without them.
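For the $R(x)=\frac{(x-3)(x-3)}{(x-3)(x+5)}$ example above, a quick numeric check confirms the hole at $x=3$ sits on the $x$-axis and that the sign changes across it (`simplified` is an ad hoc name for the reduced function):

```python
def simplified(x):
    # R(x) = (x-3)(x-3) / ((x-3)(x+5)) reduced to lowest terms, valid for x != 3
    return (x - 3) / (x + 5)
```

`simplified(3.0)` gives the $y$-coordinate of the hole, which is $0$, and the values just left and right of $3$ have opposite signs.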
H: How to solve this summation: $\sum_{k=1}^{n}\frac{k2^{k}}{(k+2)!}$ How do I solve this summation? I have tried simplifying k on the numerator with the factorial in the denominator but it just gets me nowhere. $$\sum_{k=1}^{n}\frac{k2^{k}}{(k+2)!}$$ Thanks! AI: Hint $$\frac{k2^k}{(k+2)!}=\frac{(k+2-2)2^k}{(k+2)!}=\frac{(k+2)2^k}{(k+2)!}-\frac{2\cdot2^k}{(k+2)!}=\frac{2^k}{(k+1)!}-\frac{2^{k+1}}{(k+2)!}$$ You get a telescopic sum.
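The telescoping can be confirmed in exact arithmetic; `partial_sum` and `telescoped` are ad hoc names, and the closed form used is $\frac{2^1}{2!}-\frac{2^{n+1}}{(n+2)!}=1-\frac{2^{n+1}}{(n+2)!}$:

```python
from fractions import Fraction
from math import factorial

def partial_sum(n):
    # the original sum, term by term, in exact rational arithmetic
    return sum(Fraction(k * 2 ** k, factorial(k + 2)) for k in range(1, n + 1))

def telescoped(n):
    # only the first and last terms survive the telescoping
    return 1 - Fraction(2 ** (n + 1), factorial(n + 2))
```

The two agree exactly for every $n$ tested.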
H: Show $\int_{\gamma}e^{iz}e^{-z^2}dz$ same value on every line parallel to $\mathbb{R}$ From an old qualifier: Show that $$\large\int_{\gamma}e^{iz}e^{-z^2}\mathrm dz$$ has the same value on every straight line path $\gamma$ parallel to the real axis. Justify the estimates involved. My first thought was to draw a long strip and integrate over it. By Cauchy's theorem the integral is zero, and I can compare the contributions. I'd like it if, for well-chosen such strips, the contribution on the sides were entirely imaginary and that on the top/bottom were entirely real, or vice-versa. But so far, no luck. AI: Take a rectangle with vertices $\pm R, \pm R + iY$, for an arbitrary but fixed $Y$. On the edges between $R$ and $R+iY$ resp $-R$ and $-R+iY$, the integrand can be estimated $$\bigl\lvert e^{iz}e^{-z^2} \bigr\rvert \leqslant e^{\lvert Y\rvert}e^{Y^2-R^2},$$ so the contribution of these integrals is dominated by $$\lvert Y\rvert e^{\lvert Y\rvert + Y^2 - R^2} \xrightarrow{R\to +\infty} 0,$$ so since by Cauchy's integral theorem the integral over the rectangle is $0$, the proposition follows.
H: A sufficient condition for irreducibility of a polynomial in an extension field. Here is another problem of the book Fields and Galois Theory by Patrick Morandi, page 37. Let $f(x)$ be an irreducible polynomial over $F$ of degree $n$, and let $K$ be a field extension of $F$ with $[K:F]=m$. If $\gcd(n,m)=1$, show that $f$ is irreducible over $K$. I'm preparing for my midterm exam so I'm trying to solve as many as this book problems. Thanks for your helps. AI: Let $L \supset K$ be obtained by adjoining a root of $f$ to $K$, say $\alpha$. Then consider the chains $F \subset K \subset L$ and $F \subset F(\alpha) \subset L$. You know $[F(\alpha):F] = n$, because $f$ is irreducible over $F$. What does this tell you about $[L:K]$?
H: Flip all to zero I have a square grid of size $N$, with rows numbered from $0$ to $N - 1$ starting from the top and columns numbered from $0$ to $N - 1$ starting from the left. A cell $(u, v)$ refers to the cell that is on the $u$-th row and the $v$-th column. Each cell contains an integer $0$ or $1$. I can pick any cell and flip the number in all the cells (including the picked cell) within the distance $D$ from the picked cell. A flip here means changing the number from $0$ to $1$ and vice-versa. The distance from the cell $(u, v)$ to the cell $(x, y)$ is equal to $|u - x| + |v - y|$ where $|i|$ is the absolute value of $i$. Now, I want to change all values in the grid to zero without using more than $N\cdot N$ flips. If this is not possible, I want to report that it is not possible. Otherwise, I want to report the total moves required to achieve it, along with the positions of the cells on which the operation is to be performed. EXAMPLE: Say I have a $3\times 3$ grid and the distance $d = 1$. $$\begin{array}{|c|c|c|} \hline 0&1&0\\ \hline 1&1&1\\ \hline 0&1&0\\ \hline \end{array}$$ Then clearly it is possible to change all cells to zero by performing the first operation on the center cell; this will flip all the elements within distance $1$ to $0$. For an impossible case we can take the example: say I have a $3\times 3$ grid and the distance $d = 2$. $$\begin{array}{|c|c|c|} \hline 1&0&1\\ \hline 1&1&0\\ \hline 0&0&0\\ \hline \end{array}$$ IT IS NOT POSSIBLE TO GET ALL ZEROS IN THIS CASE. How to tackle this problem? PLZ HELP AI: This is a linear algebra problem. First note that there is no point in playing the same square twice. Each square should be played once, or not at all. Consider the grid: \begin{array}{|c|c|c|c|} \hline a_{11}&a_{12}&a_{13}&a_{14}\\ \hline a_{21}&a_{22}&a_{23}&a_{24}\\ \hline a_{31}&a_{32}&a_{33}&a_{34}\\ \hline a_{41}&a_{42}&a_{43}&a_{44}\\ \hline \end{array} Define $a$ so that when $a_{yx} = 1$ you play the square, and when $a_{yx} = 0$ you don't play the square. 
Then you can see that $a$ is the variable you want to solve for, and it is constrained by a set of linear equations shown below. Let $b_{yx}$ be the initial values of the grid. Write out the statement "location $y$, $x$ becomes zero": $b_{y, x} + \sum_{u,v}^{N,N} a_{u,v} \cdot (|u - x| + |v - y| \le_{1/0} D) \equiv 0 \pmod 2$ (Here I'm using $m \le_{1/0} n$ to mean "1 if $m \le n$ and 0 otherwise".) ...for example, the equation "the top left corner becomes zero for D=2" is: $\begin{align} b_{11} & + a_{11}\cdot 1 + a_{12}\cdot 1 + a_{13}\cdot 1 + a_{14}\cdot 0 \\ & + a_{21}\cdot 1 + a_{22}\cdot 1 + a_{23}\cdot 0 + a_{24}\cdot 0 \\ & + a_{31}\cdot 1 + a_{32}\cdot 0 + a_{33}\cdot 0 + a_{34}\cdot 0 \\ & + a_{41}\cdot 0 + a_{42}\cdot 0 + a_{43}\cdot 0 + a_{44}\cdot 0 \equiv 0 \pmod 2 \end{align}$ This is a linear equation in $a$, the variable you need to solve for. Build your $N^2 \times N^2$ matrix and solve for $a$: $\begin {bmatrix} & \cdots \\ \vdots & \end{bmatrix} \begin{bmatrix} a_{11} \\ a_{12} \\ \vdots \end{bmatrix} \equiv \begin{bmatrix} -b_{11} \\ -b_{12} \\ \vdots \end{bmatrix} \pmod 2 $ Example as requested: Consider the following initial position: $$b = \begin{array}{|c|c|c|} \hline 1&0&0\\ \hline 0&1&0\\ \hline 0&0&1\\ \hline \end{array}$$ Let $D = 1$. The top left square is only affected by changes to 3 locations, $a_{11}, a_{12}, a_{21}$ : $$1 + a_{11} + a_{12} + a_{21} \equiv 0 \pmod 2$$ The center square is affected by changes to 5 locations: $$1 + a_{12} + a_{21} + a_{22} + a_{23} + a_{32}\equiv 0 \pmod 2$$ The bottom middle square is affected by changes to 4 locations: $$0 + a_{22} + a_{31} + a_{32} + a_{33} \equiv 0 \pmod 2$$ And so on; you can get 9 equations total like this. 
Write them as a matrix: $$\begin{bmatrix}1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\cr 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0\cr 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0\cr 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0\cr 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 0\cr 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1\cr 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0\cr 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1\cr 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1\end{bmatrix} \begin{bmatrix} a_{11} \\ a_{12} \\ a_{13} \\ a_{21} \\ a_{22} \\ a_{23} \\ a_{31} \\ a_{32} \\ a_{33} \\\end{bmatrix} \equiv \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \pmod 2$$ Begin Reduced Row Echelon Reduction: $$\begin{bmatrix}1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\cr 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\cr 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0\cr 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0\cr 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 1\cr 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0\cr 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0\cr 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0\cr 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1\end{bmatrix}$$ $$\vdots$$ $$\begin{bmatrix}1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\cr 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0\cr 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1\cr 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0\cr 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0\cr 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0\cr 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1\cr 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\cr 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\end{bmatrix}$$ $$\vdots$$ $$\begin{bmatrix}1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\cr 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\cr 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\cr 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\cr 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1\cr 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\cr 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\cr 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\cr 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\end{bmatrix}$$ Which corresponds to $a_{11} = 1$, $a_{22} = 1$, $a_{33} = 1$, and the other $a$ values are zero. 
And if you check, flipping those 3 will solve the problem. A more general example. Suppose $N=3$ and $D=2$. The resulting matrix: $$\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & b_{11}\cr 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & b_{12}\cr 1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & b_{13}\cr 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & b_{21}\cr 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & b_{22}\cr 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & b_{23}\cr 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & b_{31}\cr 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & b_{32}\cr 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & b_{33}\end{bmatrix}$$ Which has reduced row echelon form: $$ \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & b_{23}+b_{12} \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & b_{31}+b_{22}+b_{21}+b_{11} \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & b_{22}+b_{21} \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & b_{13}+b_{12} \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & b_{31}+b_{23}+b_{22}+b_{13}+b_{12} \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & b_{22}+b_{11} \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & b_{22}+b_{12} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & b_{32}+b_{23}+b_{21}+b_{12} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & b_{33}+b_{31}+b_{23}+b_{21}+b_{13}+b_{11} \end{bmatrix} $$ Meaning that you only have a solution if $$b_{32}+b_{23}+b_{21}+b_{12} \equiv 0 \pmod 2$$ $$b_{33}+b_{31}+b_{23}+b_{21}+b_{13}+b_{11} \equiv 0 \pmod 2$$ For the second example in your question, these sums are $$0+0+1+0 \equiv 1 \pmod 2$$ $$0+0+0+1+1+1 \equiv 1 \pmod 2$$ Neither condition holds, showing that your example is actually impossible.
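The whole procedure can be sketched in a few lines of Python (`influence_matrix` and `solve_gf2` are ad hoc names; free variables are simply set to $0$, and any returned solution uses at most $N\cdot N$ flips since each cell is played at most once):

```python
def influence_matrix(N, D):
    # row (x, y): which plays (u, v) flip the cell (x, y)
    cells = [(x, y) for x in range(N) for y in range(N)]
    return [[1 if abs(u - x) + abs(v - y) <= D else 0 for (u, v) in cells]
            for (x, y) in cells]

def solve_gf2(A, b):
    # Gaussian elimination mod 2; returns one solution, or None if inconsistent
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, n) if M[i][c]), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(n):
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    if any(M[i][n] for i in range(r, n)):
        return None          # a row reads 0 = 1: no set of flips works
    a = [0] * n
    for i, c in enumerate(pivots):
        a[c] = M[i][n]       # free variables are set to 0
    return a

# the solvable diagonal example (N = 3, D = 1) and the impossible one (N = 3, D = 2)
diag = solve_gf2(influence_matrix(3, 1), [1, 0, 0, 0, 1, 0, 0, 0, 1])
impossible = solve_gf2(influence_matrix(3, 2), [1, 0, 1, 1, 1, 0, 0, 0, 0])
```

On the diagonal example the solver recovers exactly the three diagonal plays found above, and on the second example it correctly reports no solution.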
H: Field extension, primitive root of unity Let $\xi_5$ be a primitive fifth root of unity in $\mathbb{C}$, then we know the extension $\mathbb{Q}(\xi_5)\supseteq\mathbb{Q}$ has degree 4 so the Galois group is of order 4. I am trying to find all intermediate extensions and it turns out the only one is $\mathbb{Q}(\sqrt{5})\supseteq\mathbb{Q}$. It is not hard to find this, but how to show that there are no more intermediate extensions? AI: The Galois group is of order $4$ because the degree of the extension is $4$, but more is true. It's canonically isomorphic to $(\mathbf{Z}/5\mathbf{Z})^\times\cong\mathbf{Z}/4\mathbf{Z}$, i.e. it is cyclic of order $4$. Galois theory gives a bijective correspondence between intermediate fields and subgroups of the Galois group, so, since $\mathbf{Z}/4\mathbf{Z}$ only has one proper, non-trivial subgroup, $\mathbf{Q}(\xi_5)/\mathbf{Q}$ has a unique non-trivial intermediate extension.
H: Series Expansion of $\arcsin\left(\frac{a}{a+x}\right)$ Can anyone think of a good approximation to: $$ \arcsin\left(\frac{a}{a+x}\right)\ $$ accurate at $x = 0$? The Taylor series is not available...perhaps some other kind of method? AI: To answer my own question, the most accurate I can get is by using $$ \arcsin\left(x\right)=2\arctan\left(\frac{x}{1+\sqrt{1-x^{2}}}\right)\ $$ so $$ \arcsin\left(\frac{a}{a+x}\right)=\frac{\pi}{2}-\frac{\sqrt{2}}{\sqrt{a}}x^{1/2}+\frac{5}{6a^{3/2}\sqrt{2}}x^{3/2}...\ $$
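A numeric check that the fractional-power expansion is right: the error of the three-term approximation should shrink like $x^{5/2}$ (the names below are ad hoc):

```python
import math

def exact(a, x):
    return math.asin(a / (a + x))

def approx(a, x):
    # the three-term expansion from the answer
    return (math.pi / 2 - math.sqrt(2 / a) * math.sqrt(x)
            + 5 / (6 * a ** 1.5 * math.sqrt(2)) * x ** 1.5)

# shrinking x by a factor of 100 should shrink the error by roughly 10^5
a = 1.0
err_big = abs(exact(a, 1e-2) - approx(a, 1e-2))
err_small = abs(exact(a, 1e-4) - approx(a, 1e-4))
```

The observed error ratio is consistent with a missing term of order $x^{5/2}$.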
H: complex logarithm I have $e^{jk}$ and I want to take a logarithm from it, $\log(e^{jk})$ must be $jk$, right? here some example I have tried to do with matlab. $$\log(e^{j2})=j2$$ $$\log(e^{j3})=j3$$ but for $e^{j4}$ it gives me: $\log(e^{j4})=-j2.2832$, why it doesn't give me $j4$ note: $\log$ is natural logarithm AI: See here. Since $e^{i\theta} = e^{i(\theta+2n\pi)}$ for any $n \in \mathbb{Z}$, it doesn't have a well-defined inverse. That is, to define 'the' complex logarithm we need to restrict the domain of $z \mapsto e^z$. There's no single 'correct' way of doing this, so it amounts to choosing a 'branch' of the logarithm. In general, $$\log(z) = \log|z| + i\arg(z)$$ for some choice of range of $\arg(z)$. (For a given $z$, $\arg(z)$ is only well-defined up to adding some integer multiple of $2\pi$.) Notice that $−2.283184\dots = 4 - 2\pi$. The long and short of it is: $\log z$ might take many different values, depending on the choice of branch implicit in writing '$\log$'. Yours may differ from MATLAB's.
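Python's `cmath.log` makes the same branch choice as MATLAB here, reporting the argument in $(-\pi,\pi]$:

```python
import cmath

w = cmath.log(cmath.exp(4j))
# principal branch: the imaginary part is reduced into (-pi, pi], hence 4 - 2*pi
```

The real part is (numerically) zero and the imaginary part is $4-2\pi\approx-2.2832$, matching the MATLAB output above.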
H: Cut circle with straight lines Problem: To cut a circle with $n$ straight lines into the greatest possible number of parts. What is the greatest number and how can you prove it is optimal? AI: The number of parts is $\binom{n}{2}+\binom{n}{1}+\binom{n}{0}$. I'll give you a hint first: consider what happens when you add a new line to the circle. How many lines intersect it, and how many new parts does that create? Note that this problem generalizes to $n$ planes cutting a sphere: for that, the number of parts would be $\binom{n}{3}+\binom{n}{2}+\binom{n}{1}+\binom{n}{0}$. Use recursion, together with the formula for the original problem, to prove this.
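The recursion hinted at above (the $k$-th line, crossing all $k-1$ earlier lines, adds $k$ new parts) agrees with the binomial formula; a quick Python check:

```python
from math import comb

def regions(n):
    # the k-th line crosses the k-1 earlier lines inside the circle,
    # passing through k existing parts and splitting each: k new parts
    r = 1
    for k in range(1, n + 1):
        r += k
    return r
```

The first few values are $1, 2, 4, 7, 11, \dots$, matching $\binom{n}{2}+\binom{n}{1}+\binom{n}{0}$.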
H: Find the sum of $\sum (n^2+n)x^n$ using integrals I'm having a hard time finding $\sum_{n=1}^\infty (n^2+n)x^n$. The solution is $\frac{2x}{(1-x)^3}$. This is my solution: $$1. \space\space\space\space S(x) = \sum_{n=1}^\infty (n^2 +n)x^n =$$ $$2. \space\space\space\space \sum_{n=1}^\infty n(n+1)x^n.$$ $$3. \space\space\space\space \int S(x) = \sum_{n=1}^\infty n(n+1)\frac{x^{n+1}}{n+1} = \sum_{n=1}^\infty nx^{n+1}$$ $$4. \space\space\space\space \int S(x) = x^2\sum_{n=1}^\infty nx^{n-1}$$ $$5. \space\space\space\space \frac{1}{x^2}\int S(x) = \sum_{n=1}^\infty nx^{n-1}$$ $$6.\space\space\space\space \int\big(\frac{1}{x^2}\int S(x)\big) = \sum_{n=1}^\infty n\frac{x^{n}}{n} = \sum_{n=1}^\infty x^{n} = \frac{x}{1-x}.$$ $$7.\space\space\space\space\frac{1}{x^2}\int S(x) = \bigg(\frac{x}{1-x}\bigg)' = \frac{1}{({x-1})^2}$$ $$8. \space\space\space\space\frac{1}{x^2}S(x) = \bigg(\frac{1}{(x-1)^2}\bigg)' = -\frac{2}{(x-1)^3}.$$ $$9.\space\space\space\space S(x) = -\frac{2x^2}{(x-1)^3} $$ My solution is wrong. I numbered every step; where is my problem? I have a test tomorrow, so I'd love it if you can help me. Thanks in advance! AI: Everything is fine up to and including line 7: $$\frac1{x^2}\int{S(x)}=\frac1{(x-1)^2}$$ The problem is in line 8: you differentiated first, and only then multiplied by $x^2$. Differentiating both sides of line 7 gives, by the product rule: $$-\frac{2}{x^3}\int{S(x)}+\frac{1}{x^2}S(x)=-\frac{2}{(x-1)^3}$$ and not what is written in line 8. Doing it the other way, multiplying first and then differentiating, gives: $$\int{S(x)}=\frac{x^2}{(x-1)^2}=\left(\frac{x}{x-1}\right)^2\\ S(x) = \left[\left(\frac{x}{x-1}\right)^2\right]'=2\left(\frac{x}{x-1}\right)\left(\frac{-1}{(x-1)^2}\right)\\ S(x) = -2\frac{x}{(x-1)^3}=\frac{2x}{(1-x)^3} $$
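The closed form can be sanity-checked numerically (ad hoc names; truncation at $500$ terms, so only meaningful for $|x|<1$):

```python
def closed_form(x):
    return 2 * x / (1 - x) ** 3

def partial_sum(x, terms=500):
    # truncation of sum_{n>=1} (n^2 + n) x^n
    return sum((n * n + n) * x ** n for n in range(1, terms + 1))
```

At several sample points inside the interval of convergence, the truncated sum matches $\frac{2x}{(1-x)^3}$ to high accuracy.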
H: L2 Norm of Gaussian Integer I'm writing a Java program that deals with Gaussian Integers. In my program, I have to compute the L2 norm of the GI (Gaussian Integer) and return it as a float. I've looked around but I cannot seem to find a formula to compute said value. Is this the correct formula? The l2 norm of a vector is the square root of the sum of the absolute values squared In that case, would the L2 Norm for a+bi be: int sum = abs(a) + abs(b); int sum = sum*sum; float norm = sqrt(sum); Thanks in advance AI: The description is correct, the code isn't. Where it says "sum of the absolute values squared", the "squared" applies to "absolute values" and not to "sum". So, correct would be $\sqrt{|a|^2 + |b|^2}$. Note there is no need for the absolute values, $\sqrt{a^2 + b^2}$ is the same.
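In Python for brevity (the same one-line fix carries over to Java): square the components before summing, which also makes the `abs` calls unnecessary:

```python
import math

def l2_norm(a: int, b: int) -> float:
    # |a + bi| = sqrt(a^2 + b^2): square each component, sum, then take the root
    return math.sqrt(a * a + b * b)
```

(`math.hypot(a, b)` computes the same value and avoids intermediate overflow for huge components.)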
H: Unfamiliar with notation : $S \subseteq [d]$ where {$d, w_{1},w_{2}, ..., w_{d}$} What does $S \subseteq [d]$ mean in the context of {$d, w_{1},w_{2}, ..., w_{d}$}? I don't get what [d] stands for. AI: Almost certainly $[d]=\{1,2,\ldots,d\}$; this is a fairly standard notation. Check to see whether the context shows that $S$ is being used as a set of subscripts on the $w$’s.
H: find the eigenvectors of the eigenvalue 1 - simple question Simple question, but I seem to be having difficulties. $A = \begin{bmatrix}1 & \gamma & 4\\0 & 2 & \beta \\ 0 & 0 & 1\end{bmatrix}$; find the eigenvectors of the eigenvalue $1$. What I did: $\begin{bmatrix}1 & \gamma & 4\\0 & 2 & \beta \\ 0 & 0 & 1\end{bmatrix} * \begin{bmatrix}x \\ y \\ z\end{bmatrix} = \begin{bmatrix}x+\gamma y+4z \\ 2y+\beta z \\ z \end{bmatrix} = \begin{bmatrix}x \\ y \\ z \end{bmatrix}$ So... 1) $x+\gamma y +4z = x$ 2) $2y+\beta z = y$ 3) $z=z$ From 1) we get that $y=\frac{-4}{\gamma}z$, but from 2) we get that $y = -\beta z$, so overall we get that $\beta = \frac{4}{\gamma}$ (suppose that $z$ is not $0$; if $z=0$ then the eigenvector is $(1,0,0)$, but if $z$ is not $0$, the result gets very complex). But how can that be? We don't know anything about $\beta$ or $\gamma$ and we can't say anything about them. We can't impose restrictions on them or anything like that. I don't know how to interpret this result. AI: Your system can be reduced to $$ \left\{ \begin{array}{ll} y+\beta z & =& 0, \\ (\beta\gamma-4)z &=& 0. \end{array} \right. $$ If $\beta\gamma \neq 4$ then $z=y=0$ and $x \in \mathbb{F}$. So, the eigenvectors are of the form $c\cdot(1,0,0)^T$, $c \in \mathbb{F}$. If $\beta\gamma = 4$ then the system reduces to $y+\beta z = 0$, which implies $y=-\beta z$ ($x,z \in \mathbb{F}$). And the eigenvectors are of the form $c_1\cdot(1,0,0)^T+c_2\cdot(0,-\beta,1)^T$, $c_1,c_2 \in \mathbb{F}$.
H: Why does the Kronecker Delta get rid of the summation? I am working with spherical harmonics and the radial equation (part of Laplace's equation in spherical coordinates). The coefficients and equations I am working with aren't important to my question. I am using the orthogonality relation for spherical harmonics: $<Y_{kn},Y_{lm}> = \int^{2\pi}_0\int^{\pi}_0 Y^*_{lm} Y_{kn} \sin(\theta)\, d\theta\, d\phi = \delta_{kl} \delta_{mn}$ In the problem below: $<Y_{kn},f> = \sum\limits_{l=0}^\infty\sum\limits_{m=-l}^{l} (A_{lm}a^l)<Y_{kn},Y_{lm}>$ We can use the orthogonality relation to get: $<Y_{kn},f> = \sum\limits_{l=0}^\infty\sum\limits_{m=-l}^{l} (A_{lm}a^l)\delta_{kl} \delta_{mn}$ Which simplifies to: $<Y_{kn},f> = A_{kn}a^k$ Why do the Kronecker deltas "get rid of" the summation signs, and why have we relabelled the coefficients on the right-hand side? AI: Because $$\delta_{kl}=\begin{cases}0&,\;\;k\neq l\\{}\\1&,\;\;k=l\end{cases}$$ so in that sum every term equals zero except the one with $\;l=k\;,\;m=n\;$
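A toy numeric version of the collapse, with made-up coefficients standing in for $A_{lm}a^l$ and a truncated index range:

```python
def delta(i, j):
    # Kronecker delta
    return 1 if i == j else 0

# toy coefficients in place of A_lm * a^l, over a small index range
A = {(l, m): 10 * l + m for l in range(4) for m in range(4)}
k, n = 2, 1
collapsed = sum(A[l, m] * delta(k, l) * delta(m, n)
                for l in range(4) for m in range(4))
```

Every term of the double sum vanishes except the single one with $l=k$ and $m=n$, so `collapsed` equals `A[k, n]` exactly.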
H: If $X_n\rightarrow X$ almost surely and $Y_n\rightarrow Y$ a.s., then is $X_n/Y_n\rightarrow X/Y$ almost surely true? If $X_n\rightarrow X$ almost surely and $Y_n\rightarrow Y$ a.s., then is $X_n/Y_n\rightarrow X/Y$ almost surely true? Is there a theorem for this or is this not correct? AI: Claim: Let $f : S \subset \mathbb{R}^2 \to \mathbb{R}$ be a continuous map, $\{X_1, X_2, ..., Y_1, Y_2, ..., X, Y\}$ be random variables on $(\Omega,P)$ taking values in $S$ such that, $X_n \to X$ a.s. and $Y_n \to Y$ a.s., then $f(X_n,Y_n) \to f(X,Y)$ a.s. Proof: Let $N = \{X_n \not\to X\}$ and $M = \{Y_n \not\to Y\}$. Notice that $P(N \cup M) = 0$. By assumption of continuity, $f(X_n,Y_n) \to f(X,Y)$ on $(N \cup M)^c$ and $P\big((N \cup M)^c\big) = 1$. Caution: You do need the continuity of $f$ here. So, if you are dividing by zero, say, you can't apply this claim.
H: How to show the following cool equality: I am looking for a proof of the following relationship: $\newcommand{\ds}[1]{\displaystyle{#1}}$ $$ \frac{\ds{\int_{0}^{\pi}\sin^{n-2}\left(t\right)\,{\rm d}t}} {\ds{\int_{0}^{\pi}\sin^{n-3}\left(t\right)\,{\rm d}t}} = {\ds{\Gamma^{\,2}\left(\left[n - 1\right]/2\right)} \over \ds{\Gamma\left(n/2\right) \Gamma\left(\left[n - 2\right]/2\right)}}, $$ where $\Gamma$ is the Gamma-function. But: The proof must only contain things that a person could prove who has just a knowledge of $1$-d calculus (you do not need to show everything, but the things you use should be provable with standard techniques). Especially no Complex Analysis and Functional Analysis. My problem is that I do not see any kind of relationship between the left and right-hand side. AI: First note that since $\sin(\pi-t)=\sin(t)$, we have $\int_0^\pi \sin^k(t)\,dt = 2\int_0^{\pi/2}\sin^k(t)\,dt$, so the ratio in question equals $I_{n-2}/I_{n-3}$, where $I_n = \displaystyle \int_{0}^{\frac{\pi}{2}} \sin^n(x) dx$. We then have $I_n = \displaystyle \int_{0}^{\frac{\pi}{2}} \sin^{n-1}(x) d(-\cos(x)) = -\sin^{n-1}(x) \cos(x) |_{0}^{\frac{\pi}{2}} + \int_{0}^{\frac{\pi}{2}} (n-1) \sin^{n-2}(x) \cos^2(x) dx$ The first expression on the right hand side is zero since $\sin(0) = 0$ and $\cos(\frac{\pi}{2}) = 0$. Now rewrite $\cos^2(x) = 1 - \sin^2(x)$ to get $I_n = (n-1) \left(\displaystyle \int_{0}^{\frac{\pi}{2}} \sin^{n-2}(x) dx - \int_{0}^{\frac{\pi}{2}} \sin^{n}(x) dx \right) = (n-1) I_{n-2} - (n-1) I_n$. Rearranging we get $n I_n = (n-1) I_{n-2}$, $I_n = \frac{n-1}{n}I_{n-2}$. Using this recurrence we get $$I_{2k+1} = \frac{2k}{2k+1}\frac{2k-2}{2k-1} \cdots \frac{2}{3} I_1$$ $$I_{2k} = \frac{2k-1}{2k}\frac{2k-3}{2k-2} \cdots \frac{1}{2} I_0$$ $I_1$ and $I_0$ can be directly evaluated to be $1$ and $\frac{\pi}{2}$ respectively and hence, $$I_{2k+1} = \frac{2k}{2k+1}\frac{2k-2}{2k-1} \cdots \frac{2}{3} = 4^k \dfrac{(k!)^2}{(2k+1)!}$$ $$I_{2k} = \frac{2k-1}{2k}\frac{2k-3}{2k-2} \cdots \frac{1}{2} \frac{\pi}{2} = \dfrac{(2k)!}{4^k (k!)^2} \dfrac{\pi}2$$ Now take the ratio of consecutive values and rewrite the factorials in terms of $\Gamma$, using $k! = \Gamma(k+1)$ and $\Gamma\left(k+\frac12\right) = \frac{(2k)!}{4^k k!}\sqrt{\pi}$, to get what you want.
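As a numerical cross-check of the identity (midpoint-rule quadrature, with $n=7$ as an arbitrary test value):

```python
import math

def integral_sin_pow(k, steps=200_000):
    # midpoint rule for the integral of sin(t)^k over [0, pi]
    h = math.pi / steps
    return h * sum(math.sin((i + 0.5) * h) ** k for i in range(steps))

n = 7
lhs = integral_sin_pow(n - 2) / integral_sin_pow(n - 3)
rhs = math.gamma((n - 1) / 2) ** 2 / (math.gamma(n / 2) * math.gamma((n - 2) / 2))
assert abs(lhs - rhs) < 1e-6
```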
H: Limit of power series Exercise 3.4.22 Let $f_n(x) = x^n$ for $n \in \mathbb{N}$. Show that the sequence $(f_n)_{n\in\mathbb{N}}$ converges pointwise to the function $f(x) = 0$ on the interval $(-1, 1)$. The definition of pointwise convergence: We say that a function $f : X \to F$ is the pointwise limit of the sequence $(f_n)_{n\in\mathbb{N}}$ if, for every $x \in X$, $\lim_{n\to\infty} f_n(x) = f(x)$. What I want: for all $\varepsilon>0$, for all $x\in(-1,1)$, there exists an $N$ s.t. if $n>N$, then $|x^n-0|=|x^n|<\varepsilon$. If $x = 0$, clearly $x^n = 0$. If $x\in(0,1)$, or if $x\in(-1,0)$, I'm unsure of what to do. Can I use the ratio test on $\sum x^n$? Since $|x^{n+1}/x^n|=|x|<1$, $\sum x^n$ converges absolutely. Thus (I'm not sure where I can find a proof/theorem for this but it seems very obvious to me), $\lim_{n\to \infty} x^n = 0$. What I don't get is why this would prove that $x^n$ is, in particular, pointwise convergent. AI: The ratio test gives that $\sum x^n$ converges for every $x\in (-1,1)$, so its terms tend to zero: $f_n(x) = x^n\to 0$. That is exactly the definition of pointwise convergence: for each fixed $x$ in the interval, the numerical sequence $f_n(x)$ converges to $f(x)=0$.
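The choice of $N$ in the pointwise definition can even be made explicit: for $x\neq 0$, $|x|^n<\varepsilon$ as soon as $n>\ln\varepsilon/\ln|x|$ (note that $N$ depends on $x$, which is why the convergence is only pointwise). A quick check of that threshold (the sample points are arbitrary):

```python
import math

eps = 1e-6
for x in [-0.9, -0.5, 0.5, 0.99]:
    # smallest integer N with |x|^N < eps; ln|x| < 0, so the division flips signs
    N = int(math.log(eps) / math.log(abs(x))) + 1
    assert abs(x ** N) < eps
```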
H: System of Equations- using profits Question: Keller industries' profits were up $20,000 this year over last year. This was an increase of 25%. a. Let T represent the profit this year and L the profit from last year and write a system of equations that can be used to determine the profits. b. Which method would be most efficient to solve this system (addition method or substitution method)? Explain c.Using your system to determine the profit for this year and last year. *Is this two different equations for both the profit made this year and the profit made last year? * AI: Two pieces of information are given: $\bullet$ This year's profit is $\$20000$ more than last year's profit. $\bullet$ The increase is $25\%$. You should write both in equation form. Then you get a system of two equations with two variables, which you can solve.
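One way to check the arithmetic once the system is written down (a sketch; the two equations $T = L + 20000$ and $T = 1.25L$ are my reading of the two bullet points above, solved here by substitution):

```python
# substitute T = 1.25 * L into T = L + 20000:  1.25 L = L + 20000
L = 20000 / 0.25          # since 0.25 * L = 20000
T = 1.25 * L
assert L == 80000 and T == 100000
assert T == L + 20000 and T == 1.25 * L
```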
H: Residues computation when we need power series I'm trying to compute residues in situation where we need to manipulate power series to get it, but I can't find a good way. Indeed for the sake of example, consider the residue of the following function at $z=0$ which wolfram says it's zero: $$f(z)=\csc z\cot z = \dfrac{1}{\sin z}\dfrac{\cos z}{\sin z},$$ the only thing I could think of was using power series, but a lot of "problems" appear. Well, we would have $$f(z)=\left(\sum_{n=0}^\infty (-1)^n \dfrac{z^{2n+1}}{(2n+1)!} \right)^{-2}\sum_{m=0}^\infty (-1)^n \dfrac{z^{2n}}{2n!},$$ but it seems like lots of trouble in expanding that. I would first need to invert $1/\sin^2(z)$ and when I tried it was pretty messy to get just 3 terms. The multiplying by the other series would also cause some confusion I think. What I mean is that in general I think that I didn't find a good approach to compute residues with power series yet, in the sense of being efficient. What's a good approach to this kind of problem? How can one compute this residue without messing everything up along the way? Thanks very much in advance. AI: Consider the function $$f(z) = \frac{p(z)}{q(z)^2}$$ where $q(z)=(z-z_0) r(z)$ and $r$ is analytic at $z=z_0$. Then the residue of $f$ at $z=z_0$ is $$\left [\frac{d}{dz} [(z-z_0)^2 f(z)] \right ]_{z=z_0} = \left [\frac{d}{dz} \frac{p(z)}{r(z)^2} \right ]_{z=z_0}$$ or $$\operatorname*{Res}_{z=z_0} f(z) = \frac{p'(z_0)}{r(z_0)^2} - \frac{2 p(z_0) r'(z_0)}{r(z_0)^3}$$ Now $q'(z_0) = r(z_0)$ and $q''(z_0) = 2 r'(z_0)$, so we now have $$\operatorname*{Res}_{z=z_0} f(z) = \frac{p'(z_0)}{q'(z_0)^2} - \frac{ p(z_0) q''(z_0)}{q'(z_0)^3}$$ Now, $p(z)=\cos{z}$, $q(z)=\sin{z}$, and $z_0=0$, so $p'(z_0)=0$, $q''(0)=0$, so the residue at the pole is zero.
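The residue can also be checked numerically, without any series manipulation, by discretizing the contour integral $\frac1{2\pi i}\oint f(z)\,dz$ over a small circle (radius $0.5$ keeps the other poles at $z=\pm\pi$ well outside):

```python
import cmath, math

def residue_at_zero(f, radius=0.5, n=4096):
    # trapezoid rule on |z| = radius: (1/2*pi*i) times the contour integral
    # of f(z) dz reduces to the average of f(z_k) * z_k over the circle points
    total = 0j
    for k in range(n):
        z = radius * cmath.exp(2j * math.pi * k / n)
        total += f(z) * z
    return total / n

f = lambda z: cmath.cos(z) / cmath.sin(z) ** 2   # csc(z) cot(z)
assert abs(residue_at_zero(f)) < 1e-10           # residue at z = 0 is 0
```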
H: Why do we use the natural base for the logarithm in the Kullback–Leibler divergence? Well known formula of KL divergence when we have a discrete probability distributions. $$D_{KL}(P \parallel Q)=\sum\limits_i \ln \left(\frac{P(i)}{Q(i)}\right) P(i)$$ Can someone explain why the natural base of the logarithm? That will probably not yield the information in bits as a result? Do I need to change the base of the logarithm to 2 in order to get the relative entropy in bits? Or there is another way? Thank you. M. AI: $$ \log_2 x = \frac{\log_e x}{\log_e 2} = \frac{\ln x}{\ln 2} $$ Expressing entropy in bits means using base-$2$ logarithms. Just divide the base-$e$ logarithms by $\ln 2$ and you've got it.
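In code, the base change is a one-line division (the two distributions below are arbitrary examples):

```python
import math

def kl_divergence(p, q, base=math.e):
    # D_KL(P || Q) = sum_i P(i) * log(P(i)/Q(i)) in the chosen log base
    return sum(pi * math.log(pi / qi, base) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]
q = [0.25, 0.25, 0.5]
nats = kl_divergence(p, q)           # natural log  -> nats
bits = kl_divergence(p, q, base=2)   # log base 2   -> bits
assert abs(bits - nats / math.log(2)) < 1e-12
```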
H: Divergence-free vector field from skew-symmetric matrix Let $[a_{i,j}(x_1,\ldots,x_n)]$ be a skew-symmetric $n\times n$ matrix of functions $a_{i,j}\in C^\infty(\mathbb{R}^n)$. Show that the vector field $$v=\sum\left(\dfrac{\partial}{\partial x_i}a_{i,j}\right)\dfrac{\partial}{\partial x_j}$$ is divergence-free. It looks like there's a typo somewhere in this problem. For one thing, it looks like there should be a double summation, one over $i$ and the other one over $j$. Should it be $$\sum_{j=1}^n\left(\sum_{i=1}^n\dfrac{\partial}{\partial x_i}a_{i,j}\right)\dfrac{\partial}{\partial x_j}?$$ The definition of a vector field $v$ to be divergence-free is that the divergence $\sum_{j=1}^n\dfrac{\partial v_j}{\partial x_j}=0$. AI: For your first point, the answer is yes, it is a double sum. You'll see a lot of shorthand summing notation while you study differential geometry! Read this as $$ \sum_{ij} \left(\frac{\partial}{\partial x_i} a_{ij}\right) \frac{\partial}{\partial x_j} $$ Hint for the proof of divergence free: Mixed partial derivatives commute on functions in $C^{\infty}(\mathbb{R})$ and, by assumption, $a_{ij} = -a_{ji}$.
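A finite-difference spot check of the claim for one smooth skew-symmetric example with $n=2$ (the entry $a_{12}$ is an arbitrary smooth function; nested central differences for the mixed partials agree up to rounding, mirroring the equality of mixed partials used in the proof):

```python
import math

h = 1e-3
a12 = lambda p: math.sin(p[0]) * math.exp(p[1])   # arbitrary smooth entry
a = [[lambda p: 0.0, a12],
     [lambda p: -a12(p), lambda p: 0.0]]          # skew-symmetric: a[j][i] = -a[i][j]

def d(g, i):
    # central-difference approximation of the partial derivative of g in direction i
    def dg(p):
        pp, pm = list(p), list(p)
        pp[i] += h
        pm[i] -= h
        return (g(pp) - g(pm)) / (2 * h)
    return dg

p0 = [0.3, -0.7]
v = [lambda p, j=j: sum(d(a[i][j], i)(p) for i in range(2)) for j in range(2)]
div_v = sum(d(v[j], j)(p0) for j in range(2))
assert abs(div_v) < 1e-8   # divergence vanishes up to rounding
```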
H: uniqueness of the solution of the beam equation Does anyone know how to prove there exists at most one smooth solution $ u $ of the following problem for the beam equation? $ u_{tt} + u_{xxxx} = 0 $ in $ (0,1) \times (0, T) $ $ u(0,t) = u(1,t) = u_x(0,t) = u_x(1,t) = 0 $ for all $ t \in (0,T) $ $ u(.,0) = g $ and $ u_t(.,0) = h $. Thank you so much in advanced. AI: Let $u$ and $\tilde u$ be two solutions, and let $v = u - \tilde u$. Then $$ v_{tt} + v_{xxxx} = 0,\\ v(0,t) = v(1,t) = v_x(0,t) = v_x(1,t) = 0, \\v(\cdot,0) = v_t(\cdot,0) = 0 .$$ Multiply the first equation by $v_t$, and integrate w.r.t. $x$ from $x=0$ to $1$, and integrate by parts twice on one of the integrals (noting $v_t(0,t) = \cdots = v_{tx}(1,t) = 0$), and use the product rule for differentiation to see $(v_t^2)_t = 2v_{tt}v_t$ and $(v_{xx}^2)_t = 2v_{xxt}v_{xx}$, to conclude $$ \frac12 \frac{\partial}{\partial t}\left(\|v_t\|_2^2 + \|v_{xx}\|_2^2\right) = 0$$ where we define $\|f\|_2^2 = \int_0^1 |f(x)|^2 \, dx $. Hence $$ \|v_t\|_2^2 + \|v_{xx}\|_2^2 = \text{constant} .$$ Setting $t=0$, we see that this constant is zero. Hence for all $t>0$ $$ v_t = 0, v_{xx} = 0 .$$ Since $v_{xx} = 0$, from the boundary conditions we obtain $v = 0$.
H: Task with local extreme I don't know where and what to start calculating in the following task: Determine the value of parameters $a,b$, so that the function $f(x) = x^3- 2ax + b$ has a local extreme $y=5$ at $x=1$. The solution is: $a=3/2$, $b=7$ I need to solve quite a few similar tasks as this and I would really appreciate if somebody could tell me how to solve it, so I could solve other similar ones. Thank you in advance! AI: Think about how you find local extrema of a function $f(x)$: you find $f\,'(x)$, set it to $0$, and solve to find critical points, which you then examine to see what kinds they are. Do the same here: $f(x)=x^3-2ax+b$, so $f\,'(x)=3x^2-2a$. Setting that to $0$ and solving, you should find that $$x=\pm\sqrt{\frac{2a}3}\;.\tag{1}$$ You want $y=f(x)$ to have a local extremum at $x=1$, so set $x$ to $1$ and solve $(1)$ for $a$. Once you’ve done that, substitute $x=1$ into $f(x)$ and see what value of $b$ will make $f(1)=5$; you’ll know $a$ at that point, so $b$ will be the only unknown, and you will be able to solve for it.
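A two-line check of the stated answer:

```python
a, b = 3/2, 7
f = lambda x: x**3 - 2*a*x + b
f_prime = lambda x: 3*x**2 - 2*a
assert f_prime(1) == 0   # x = 1 is a critical point
assert f(1) == 5         # and the extreme value there is y = 5
```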
H: Intermediate Analysis Book Suggestions the title basically says it all. I am looking for real/functional analysis books that would be considered "intermediate level" by that I mean books harder than baby rudin, and easier than big rudin. In general, I'm looking for something with good coverage of non measure theory material, although its understandable if such books contain measure theory. I guess I'm looking for something similar to royden. Mainly, I want to be able to find some difficult problems related to contraction mapping, normed spaces, stone-weierstrass, and arzela-ascoli. I realize that there may be some relevant questions already asked so if someone could point me to the right threads, that would be helpful too. Thanks AI: One nicely written book is Simmons.
H: If $\sum_n \|x_n\|< \infty$, how to show that $\sum x_n$ is convergent in the Hilbert space $H$. Let $\{x_n\}$ be a sequence in a Hilbert space $H$. If $\sum_n \|x_n\|< \infty$, how to show that $\sum x_n$ is convergent in $H$? There is no doubt that $x_n \rightarrow 0$ as $n \rightarrow \infty$ (right?) since we have that \begin{align*} \sum_n \|x_n\|< \infty &\implies \|x_n\|\rightarrow 0 \text{ as } n \rightarrow \infty \\ &\iff \langle x_n,x_n\rangle \rightarrow 0 \text{ as } n \rightarrow \infty \\ &\iff x_n \rightarrow 0 \text{ as } n\rightarrow \infty \end{align*} (it is a property of the inner product that $\langle x,x\rangle=0 \implies x=0$) What to do next? Can I use the completeness of $H$ somehow? AI: The series $\sum_n \|x_n\|$ is convergent if and only if its partial sums $\sum_{k=1}^n\|x_k\|$ form a Cauchy sequence, hence $$\forall \epsilon >0\ \exists\ n_0\quad \forall q\ge p\ge n_0,\ \Big\|\sum_{k=p}^q x_k\Big\|\le\sum_{k=p}^q\|x_k\|<\epsilon $$ and hence $\sum_{k=1}^n x_k$ is a Cauchy sequence in a complete space. Conclude.
H: Absolutely convergent sums in Banach spaces Let's say a sum of elements in a Banach space is absolutely convergent if even the sum of the norms converges, i.e. $\sum_{i=1}^\infty ||x_i|| < \infty$. This condition implies that the sum of the $x_i$ can be reordered, and in fact that it can be represented in this way that has no dependence on the ordering: You consider the directed set of finite subsets of $\mathbb{N}$ and the net formed on this directed subset is for each $J \subset \mathbb{N}$ with $J$ finite we assign the sum $\sum_{i \in J} x_i$. Do we have TFAE, or any more implications than what I've stated in the following: (1) The sum converges absolutely (2) The sum converges in the sense of the net (3) The sum converges in the ordered sense, but any permutation on the naturals gives the same result These are equivalent in $\mathbb{R}$ and $\mathbb{C}$. Otherwise I only see that (1) implies (2) implies (3). Actually also (3) implies (2) for general Banach spaces because one can check it against all continuous linear functionals and then it's back to familiar cases. AI: From http://en.wikipedia.org/wiki/Absolute_convergence: A theorem of A. Dvoretzky and C. A. Rogers asserts that every infinite-dimensional Banach space admits an unconditionally convergent series that is not absolutely convergent. Dvoretzky, A.; Rogers, C. A. (1950), "Absolute and unconditional convergence in normed linear spaces", Proc. Nat. Acad. Sci. U. S. A. 36:192–197.
H: Finite series help with an obvious fact Hi everyone, I apologize if the following question is too stupid, but I cannot figure out how to use the principle of induction on this obvious fact: Definitions: (Finite series): Let $m,n$ be integers and $a_i \in \mathbb{R}$ for each $m\le i \le n$. Then we define the finite sum by the recursive formula: $\sum_{i=m}^{n}a_i:=0$ whenever $n<m$, and $\,\,\sum_{i=m}^{n+1}a_i:=\big(\sum_{i=m}^{n}a_i\big)+a_{n+1}$ whenever $n\ge m-1$ Lemma: Let $m\le n < p$ be integers and let $a_i \in \mathbb{R}$ for each $m\le i \le p$. Then we have $$\sum_{i=m}^{n}a_i+\sum_{i=n+1}^{p}a_i= \sum_{i=m}^{p}a_i $$ The above definition and the principle of induction are the only tools that I have, but I cannot figure out how the induction should go, because of the negative integers. So, my first thought was $0\le n-m<p-m$, but this would again rely on the next lemma. So I'm not sure of how to begin. Could somebody give some hint please? Sorry for not putting my first attempt at first; it basically assumes $m\ge 0$, but the problem with this is that the lemma I would need to reach the negative case (the one for changing indices) comes after this one. If I understand correctly, first I need to show that whenever we have a sentence $P(n)$ whose domain is the set of the integers, where 1) $P(N_0)$ is true for some $N_0\in \mathbb{Z}$, and 2) $P(n)\implies P(n+1)$ for each $n\ge N_0$, then $P(n)$ is true for every $n\ge N_0$; using this fact the proof becomes almost trivial. Proof: Let $Q(k)$ be the predicate "$P(k+N_0)$ is true for $k\ge 0$". Then the base case is when $k=0$, and it clearly holds by hypothesis (since $P(N_0)$ is assumed to be true). Now we may assume $Q(k)$ to be true for $k\ge 0$. Thus, this means $P(k+N_0)$ is true, and since $k+N_0\ge N_0$ we must have that $P(k+1+N_0)$ is true, and thus $Q(k+1)$ holds, which closes the induction. Then $Q(k)$ is true for each $k\ge 0$, i.e., $P(k+N_0)$ is true for $k\ge 0$.
Setting $n=k+N_0$, this would imply that $P(n)$ holds for each $n\ge N_0$. Now, using the above claim, let us fix $m,n$ as arbitrary integers such that $m\le n$, and we can use $p=n+1$ as our base case. Thus, $\sum_{i=m}^{n}a_i+\sum_{i=n+1}^{n+1} a_i=\sum_{i=m}^{n}a_i+ a_{n+1}=\sum_{i=m}^{n+1}a_i $, which proves the base case. Suppose the lemma holds for some $p\ge n+1$; we will show that it also holds for $p+1$. Then, $\sum_{i=m}^{n}a_i+\sum_{i=n+1}^{p+1}a_i=\sum_{i=m}^{n}a_i+\sum_{i=n+1}^{p}a_i+a_{p+1}$ and by the inductive hypothesis we already know that $\sum_{i=m}^{n}a_i+\sum_{i=n+1}^{p}a_i=\sum_{i=m}^{p}a_i$. Thus, we have $\sum_{i=m}^{p}a_i+a_{p+1}= \sum_{i=m}^{p+1}a_i$. Hence the lemma holds for each $p>n$. What do you think, is this last one correct? Thanks to everybody :) AI: (1) For $p= n+1$ $$\sum_{i=m}^{n}a_i+\sum_{i=n+1}^{n+1}a_i= \sum_{i=m}^{n}a_i+\sum_{i=n+1}^{n}a_i + a_{n+1}= \sum_{i=m}^{n}a_i+0 + a_{n+1}= \sum_{i=m}^{n}a_i + a_{n+1}= \sum_{i=m}^{n+1}a_i$$ so the inductive hypothesis is true for $p=n+1$. (2) For $p\gt n+1$ assume the inductive hypothesis is true for $p=k-1$ and then $$\sum_{i=m}^{n}a_i+\sum_{i=n+1}^{k}a_i = \sum_{i=m}^{n}a_i+\sum_{i=n+1}^{k-1}a_i +a_k = \sum_{i=m}^{k-1}a_i +a_k= \sum_{i=m}^{k}a_i$$ so the inductive hypothesis is true for $p=k$. (3) So by induction the hypothesis is true for all $p \gt n$.
H: Extremal problem The task is to calculate $a'$ of a square (you cut out) if the volume of a cube is maximum. (you cut out white squares and put together grey squares so you get a cube without a cover/cap) I don't know what I'm doing wrong, but when I try to calculate critical points I always get $0$, and there isn't even a $\max$ but a $\min$! $V={a'}^3$ $\implies V'=3{a'}^2$ $\Rightarrow \text{critical points}: 0=3{a'}^2$ $\implies a'=0 (?!)$ The solution is $\dfrac{a}{6}$. AI: The mistake is in the very first line: the folded-up box is not a cube of edge $a'$. Its base is the square of side $a-2a'$ that remains after the four corner squares are cut out, and its height is $a'$, so $$V(a') = a'\,(a-2a')^2.$$ Differentiating with respect to $a'$ (here $a$ is a constant), $$V'(a') = (a-2a')^2 - 4a'(a-2a') = (a-2a')(a-6a'),$$ so the critical points are $a' = \dfrac a2$, where $V=0$ (a minimum), and $a' = \dfrac a6$, where $V$ attains its maximum value $\dfrac{2a^3}{27}$.
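The stated solution $a'=a/6$ corresponds to the standard open-box model $V(a') = a'(a-2a')^2$ (base $(a-2a')^2$, height $a'$); a brute-force scan confirms the maximizer (taking $a=1$ without loss of generality):

```python
# maximize V(x) = x * (a - 2x)^2 over 0 <= x <= a/2 by a fine grid scan (a = 1)
a = 1.0
V = lambda x: x * (a - 2 * x) ** 2
xs = [i * (a / 2) / 100_000 for i in range(100_001)]
best = max(xs, key=V)
assert abs(best - a / 6) < 1e-4          # maximizer is a/6
assert abs(V(best) - 2 * a**3 / 27) < 1e-8
```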
H: Compute the sum of the following series $$\sum_{j=0}^{\infty} \left(e^{-jx}\right)\left(x^j\right) $$for $x$ on $[0, \infty)$. I already proved that this series converges uniformly for $x$ on $[0, \infty)$, i.e. its partial sum converges uniformly to a function on $[0, \infty)$. But I just could not get a fair guess of what that limiting function is, or how to compute it. Thanks a lot! AI: This is an infinite series: $1 + \left(e^{-x}x\right) + (e^{-x}x)^2 + (e^{-x}x)^3 + \cdots = (1- e^{-x}x)^{-1}$
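Numerically, the partial sums do approach the closed form; the geometric ratio $xe^{-x}$ is at most $e^{-1}<1$ on $[0,\infty)$ (attained at $x=1$), so $200$ terms are far more than enough:

```python
import math

def partial_sum(x, terms):
    return sum((math.exp(-x) * x) ** j for j in range(terms))

for x in [0.0, 0.5, 1.0, 3.0]:
    closed = 1 / (1 - math.exp(-x) * x)
    assert abs(partial_sum(x, 200) - closed) < 1e-12
```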
H: How to show that if a 3x3 matrix A and its diagonal B are in SO(3) that their product is in SO(3) I can easily determine parts a and c but am stuck when it comes to parts b and d. Any help would be greatly appreciated. I apologize for any formatting errors; I am new to math.stackexchange.com. I promise I tried to do my best to make it readable for all you wonderful people! Looks like this was asked here: https://math.stackexchange.com/questions/577732/a-couple-questions-about-a-3-by-3-matrix-a but even with Any's answer I am unable to understand and complete parts b and d. Without further dilly-daddling here is the question: Let SO(3) denote the set of 3x3 matrices A such that A has real entries, $det(A)=1$, and $A^T$= $A^{-1}$. (a) Show that the matrix A is in SO(3): $$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(x) & -\sin(x) \\ 0 & \sin(x) & \cos(x) \end{pmatrix}$$ This matrix can be described as "a rotation of $R^3$ with axis v = $[1,0,0]^T$". Let v be any unit length vector in $R^3$ and let F = [v, v', v''] be any ordered orthonormal basis for $R^3$ containing this vector v. Let P = [v v' v''] be the transition matrix from F to the standard basis. Show that for the matrix A above, if we define $B = PAP^{-1}$, then B is in SO(3) and $Bv = v$ (b) Show that if $A$ and $B$ are in $SO(3)$, then their product $AB$ is also in $SO(3)$ (c) Show that if A is a 3x3 matrix, then $det(-A) = -det(A)$ (d) Show that if $A$ is in $SO(3)$ then there is a vector $v$ in $R^3$ such that $Av = v$. To do this, you need to show that $A - I$ is singular. Consider the determinant of $A - I$. Use the fact $A^T = A^{-1}$. Work attempted so far: a) I've easily determined the determinant of A (hehe) to be 1 using the fact that $\cos^2(x) + \sin^2(x) = 1$ and I have computed $A^T$ and verified that it is equal to $A^{-1}$. Since A contains all real values it meets the criteria of being in SO(3), so part (a) is completed without issue. b) I am not sure if I can do this...
but since we are given $B = PAP^{-1}$, can $AB$ be rewritten as $APAP^{-1}$ and furthermore rewritten $A^2PP^{-1}$, and since $PP^{-1} = I$ can AB be rewritten as $A^2I$? If so, proving that $A^2I$ is in SO(3) is rather trivial and I don't need any further help with this part of the question; I just want to make sure I am approaching the question correctly. c) This is easily done by taking the opposite of all the values in matrix A and checking if the value is equal to the opposite of det(A). As long as this is the correct approach I don't need further help. d) $$A-I = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \cos(x)-1 & -\sin(x) \\ 0 & \sin(x) & \cos(x)-1 \end{pmatrix}$$ Since the top row is all 0's, det(A-I)= 0, so we have shown the matrix A-I is singular. We have already shown in part (a) that $A^T = A^{-1}$, but I get stuck trying to show there is a vector v in $R^3$ such that $Av=v$; any suggestions/starting points for this question would be a huge help. AI: First of all, considering how the question is phrased I assume that in parts b) through d) the matrix $A$ is supposed to be an arbitrary element in $SO(3)$ and not the specific one from part a). My answers/hints are intended for an arbitrary $A$ in $SO(3)$ though they will work for the specific $A$ from part a). b) The question states that both $A$ and $B$ are in $SO(3)$. Therefore, $\text{det}A = 1 = \text{det}B$ and $B^T = B^{-1}$ and $A^T = A^{-1}$. From this you can directly check that $\text{det}AB = 1$ and that $(AB)^T = (AB)^{-1}$. c) Your approach is correct but presumably you should write it out for a general $(3 \times 3)$-matrix and not just the one from part a). d) A vector $v$ satisfies $Av = v$ precisely when it satisfies $(A-I)v = 0$. We know that a matrix $B$ has a non-trivial solution to $Bv= 0$ precisely if it is singular. Therefore, the only thing we have to do is to show that $\text{det}(A-I) = 0$. Hopefully this helps.
H: How to find equations of motion and the Hamilton function for this Lagrangian? $$L = \frac{m \dot{x}^2}{2} - \exp(|x|) $$ I would appreciate if you could explain the steps needed to get the answer AI: I feel the discussion was getting too long in the comments. Since we've established the system has one degree of freedom we find that there is only one Euler-Lagrange equation to solve: $$\frac{\partial \mathcal{L}}{\partial x} = \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{x}}.$$ Remember that $\dot{x}$ means the time derivative of $x$, i.e. $$\dot{x} := \frac{dx}{dt}.$$ Nevertheless, what matters is that you partially differentiate the Lagrangian and equate the parts. Are you sure that the potential is $\exp(|x|)$, the exponential of the modulus of $x$? Since $\frac{d}{dx}|x| = \operatorname{sign}(x)$, this would lead to $$\frac{\partial \mathcal{L}}{\partial x} = -\operatorname{sign}(x) \exp(|x|), \qquad \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{x}} = \frac{d}{dt}\left(m\dot{x}\right) = m \ddot{x},$$ so the equation of motion is $$m\ddot{x} = -\operatorname{sign}(x)\exp(|x|).$$ Quite hard to integrate. As for the Hamilton function: the conjugate momentum is $p = \partial\mathcal{L}/\partial\dot{x} = m\dot{x}$, so the Legendre transform gives $$H(x,p) = p\dot{x} - \mathcal{L} = \frac{p^2}{2m} + \exp(|x|).$$
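Even though closed-form integration is hard, the equation of motion $m\ddot x = -\operatorname{sign}(x)e^{|x|}$ (force = minus the gradient of the potential $e^{|x|}$) is easy to handle numerically; a leapfrog sketch with $m=1$ and arbitrary initial data checks that $H = p^2/2m + e^{|x|}$ stays conserved:

```python
import math

# leapfrog integration of x'' = -sign(x) e^{|x|}  (m = 1; initial data arbitrary)
def accel(x):
    return -math.copysign(math.exp(abs(x)), x)

x, v, dt = 0.5, 0.0, 1e-4
H0 = 0.5 * v * v + math.exp(abs(x))      # H = p^2/2m + e^{|x|}
for _ in range(100_000):                 # integrate up to t = 10
    v += 0.5 * dt * accel(x)
    x += dt * v
    v += 0.5 * dt * accel(x)
assert abs(0.5 * v * v + math.exp(abs(x)) - H0) < 1e-4
```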
H: series implication Let $(a_n)_{n \in\ \mathbb{N^*}}$, $(b_n)_{n \in\ \mathbb{N^*}}$ be sequences of real numbers. Show that: if $\sum\limits_{n=1}^{\infty}a_n$ converges and $(b_n)_{n \in\ \mathbb{N^*}}$ is bounded and monotone, then $\sum\limits_{n=1}^{\infty}a_nb_n$ converges. I've already proved that if $\sum\limits_{n=1}^{\infty}a_n$ converges absolutely and $(b_n)_{n \in\ \mathbb{N^*}}$ is bounded, then $\sum\limits_{n=1}^{\infty}a_nb_n$ converges absolutely. My idea is to use that $\sum\limits_{k=1}^{n}a_kb_k$ $= A_nb_{n+1}+\sum\limits_{k=1}^{n}A_k(b_k-b_{k+1})$ for $A_k=\sum\limits_{j=1}^{k}a_j$, $1 \leq k \leq n$, but I don't really know how to start. AI: Let $\epsilon > 0$ and let $N_0$ be large enough that if $N \geq M \geq N_0$ then $\bigg|\sum_{n=M}^Na_n\bigg|<\epsilon$. Since $(b_n)$ is monotone, $\sum_{k=M}^N|b_{k}-b_{k-1}| = |b_N-b_{M-1}| \le 2B$, where $B = \sup_n |b_n|$ is finite by boundedness. Now write $b_n = b_{M-1} + \sum_{k=M}^n (b_k - b_{k-1})$ for $n \ge M$, so that $$ \bigg| \sum_{n=M}^{N} a_n b_n \bigg| \le |b_{M-1}|\,\bigg|\sum_{n=M}^N a_n\bigg| + \bigg| \sum_{n=M}^{N}\sum_{k=M}^n a_n (b_k- b_{k-1})\bigg| \\ \le B\epsilon + \bigg| \sum_{k=M}^N \bigg\{ (b_k-b_{k-1}) \sum_{n=k}^N a_n \bigg\} \bigg| \leq B\epsilon + \sum_{k=M}^N \bigg\{|b_k-b_{k-1}| \bigg| \sum_{n=k}^N a_n \bigg|\bigg\} \leq B\epsilon + 2B\epsilon = 3B\epsilon $$ You can now conclude that the sum meets the Cauchy criterion for convergence.
H: $R\cong\mathbb{Z}$ or $R\cong\mathbb{Z}/p\mathbb{Z}$ ($p$ prime) Let $R$ be an integral domain and each subgroup of additive subgroup of $R$ forms an ideal of $R$. Prove that either $R\cong\mathbb{Z}$ or $R\cong \mathbb{Z}/p\mathbb{Z}$ ($p$ prime). Help me. AI: Let $n$ be the characteristic of $R$. If $n =0$, then we know that a copy of $\mathbb{Z}$ is sitting in $R$, namely the one generated by 1, which is an additive subgroup... If $n > 0$, we know that $n$ must be a prime p. (If you don't know this, think about how you can get a contradiction if n has two distinct prime factors.) Now, same as above: we have a copy of $\mathbb{Z}/p \mathbb{Z}$ sitting in $R$, which is an additive subgroup...
H: How to multiply permutations I didn't understand the rules of multiplying permutations. I'll be glad if you can explain me... For example: we have $f=(135)(27),g=(1254)(68)$. How do I calculate $f\cdot g$?? Thank you! AI: I’m going to assume that you evaluate products from left to right; note that many people use the opposite convention. Here we have the product $(135)(27)(1254)(68)$. First see what it does to $1$: the cycle $(135)$ sends $1$ to $3$, the transposition $(27)$ does nothing to the $3$, the cycle $(1254)$ does nothing to the $3$, and the transposition $(68)$ does nothing to the $3$, so the net effect is to send $1$ to $3$. Now let’s see what the product does to $3$. $(135)$ sends $3$ to $5$; $5$ is left unchanged by $(27)$, and $(1254)$ then sends $5$ to $4$, which is left unchanged by $(68)$. The net effect is to send $3$ to $4$. Next we’ll see what the product does to $4$. Nothing changes it until we get to $(1254)$, which sends it to $1$, and $1$ is left alone by $(68)$, so the product sends $4$ to $1$. Notice that this closes a cycle, $1\to 3\to 4\to 1$. Now start over with $2$, the smallest number whose image under $f\cdot g$ we’ve not yet found. $(135)$ does nothing to it, $(27)$ sends it to $7$, and $(1254)$ and $(68)$ do nothing to the $7$, so the net effect is to send $2$ to $7$. Next we see what the product does to $7$: $(135)$ does nothing, $(27)$ sends it to $2$, $(1254)$ sends that $2$ to $5$, and $(68)$ does nothing to the $5$, so $f\cdot g$ sends $7$ to $5$. $(135)$ sends $5$ to $1$, which is left alone by $(27)$ and then sent to $2$ by $(1254)$; $2$ is left alone by $(68)$, so $f\cdot g$ sends $5$ to $2$. This closes another cycle: $2\to 7\to 5\to 2$. We now know what $f\cdot g$ does to $1,2,3,4,5$, and $7$, so we move on to $6$. $(135)$, $(27)$, and $(1254)$ do nothing to $6$, and $(68)$ sends it to $8$. 
$(135)$, $(27)$, and $(1254)$ also do nothing to $8$, which is sent to $6$ by $(68)$, and we’ve closed off one last cycle: $6\to 8\to 6$. Putting the pieces together, we can write $f\cdot g$ as the product $(134)(275)(68)$. Alternatively, you could work out $f\cdot g$ in two-line notation. First see where it sends $1$, then where it sends $2$, and so on up through $8$; you’ll be doing the same work that I did above, but in a different order, and you’ll find that $f\cdot g$ is $$\binom{1\,2\,3\,4\,5\,6\,7\,8}{3\,7\,4\,1\,2\,8\,5\,6}$$ in two-line notation.
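The whole trace above can be automated; a small dictionary-based sketch using the same left-to-right convention:

```python
def apply(perm, i):
    # a permutation stored as a dict of its moved points; fixed points map to themselves
    return perm.get(i, i)

f = {1: 3, 3: 5, 5: 1, 2: 7, 7: 2}        # (135)(27)
g = {1: 2, 2: 5, 5: 4, 4: 1, 6: 8, 8: 6}  # (1254)(68)

# left-to-right: apply f first, then g
fg = {i: apply(g, apply(f, i)) for i in range(1, 9)}
assert fg == {1: 3, 2: 7, 3: 4, 4: 1, 5: 2, 6: 8, 7: 5, 8: 6}  # i.e. (134)(275)(68)
```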
H: Help with natural number problems Suppose that $\mathbb{F}$ is an ordered field with identities $0$ and $1^{\star}$ (in this problem, $1 \in \mathbb{N}$ ) Define inductively $f: \mathbb{N} \rightarrow \mathbb{F}$ by: $$f(1) = 1^{\star}\qquad f(n + 1) = f(n) + 1^{\star}$$ (Imprecisely, $f(n) = \underbrace{1^{\star} + 1^{\star} + \cdots + 1^{\star}}_{\text{n times}}$) c) Letting $\mathbb{N}^{\star} = \{f(n): n \in \mathbb{N}\}$, show that $f$ is a bijection from $\mathbb{N}$ to $\mathbb{N}^{\star}$. (Hint: Use this form $f(j + k) = f(j) + f(k)$ for $j,k\in\mathbb{N}$) d) Show that $\mathbb{N}^{\star}$ is a "copy" of $\mathbb{N}$ in $\mathbb{F}$. How can analogous $\mathbb{Z}^{\star}$ and $\mathbb{Q}^{\star}$ be defined, starting with $\mathbb{N}^{\star}$? I don't need help for the first two parts of the problem since it's all just induction. The first and second parts are easy. I need help with third and fourth parts of the problems. My Approach Part(c) - I know that in order for $f$ to be a bijection, it needs to be 1-1 and onto. If it's 1-1, then: $$f(x) = f(y) \rightarrow x = y$$ I attempt to apply induction to show $f$ is $1$-$1$, but I'm stuck. For the other part, I need to show that $f$ is onto. That is: I need to prove that $f(y) = x$ for every $x$ and $y$, such that $y$ is the inverse of $f$. Part(d) - My method would be to select $n \in \mathbb{N}$ and determine the range for each $n$. That is: $$f(1) = 1^{\star}$$ $$f(2) = f(1 + 1) = 1^{\star} + 1^{\star}$$ and so on. But I think this may be wrong. For the other part, I need to determine what are $\mathbb{Z}^{\star}$ and $\mathbb{Q}^{\star}$. I can list all the elements from $\mathbb{Z}$ and $\mathbb{Q}$ and determine their equivalences in the field $\mathbb{F}$, but this seems to be the slow way to determine these sets. Please help me. My instructor is totally disorganized with his teaching. He expects me to know how to solve these problems easily. 
AI: c) By the definition of $\mathbb{N}^*$ as the image of $f$, you know that $f$ is onto. For one-to-one, if you would like to use induction, you can do the following. Let $x, y \in \mathbb{N}$ such that $x < y$. Define $n$ by $y = x+n$ and use induction on this $n$ to conclude that $f(x) \neq f(y)$. In fact, what you might want to prove here inductively is that $f(x+n) = f(x) + f(n)$ here to help you with part d. d) As mentioned in the previous part, show that $f(x+n) =f(x)+f(n)$ and I think from here you get that $\mathbb{N}^*$ is a copy of $\mathbb{N}$. For $\mathbb{Z}^*$ you may want to extend $f$ by $f(-n) = -f(n)$ when $n \in \mathbb{N}$ and $f(0) = 0^*$. And for $\mathbb{Q}^*$, you may try extending $f$ again by $f(m/n) = f(m)f(n)^{-1}$ whenever $m, n \in \mathbb{Z}$ $(n \neq 0)$. Hope this helps.
H: $x\in R$ is a unit iff $(x+RadR)$ is a unit in $R/RadR$ Let $R$ be a commutative ring. Prove that the element $x\in R$ is a unit iff $(x+RadR)$ is a unit in $R/RadR$. ($RadR$ is the Jacobson radical of $R$.) Thanks in advance. AI: If $x$ is a unit in $R$, then its image $\hat x$ is clearly a unit in $R/J(R)$. Conversely, suppose $\hat x$ is invertible in $R/J(R)$; then $\hat x\hat y=\hat 1$ for some $y\in R$, that is, $1-xy\in J(R)$. A standard property of the Jacobson radical is that $1-z$ is a unit for every $z\in J(R)$; applied to $z=1-xy$, this gives that $1-(1-xy)=xy$ is invertible (in $R$), and therefore $x$ is invertible.
H: Prove $\lim_{x\to 3}\frac{1}{x+1}=\frac{1}{4}$ using $\epsilon$-$\delta$ definition of a limit First of all, I'd just like to say I'm fairly new to proving limits using $\epsilon-\delta$ definition, so I apologize in advance if I ask a very obvious question, or make an elementary error. Given $\displaystyle\lim_{x\to 3}\frac{1}{x+1}=\frac{1}{4}$, I want to prove it using the $\epsilon-\delta$ definition of a limit: The definition: $\forall\epsilon>0, \exists\delta>0\ni0<|x-c|<\delta\implies|f(x)-L|<\epsilon$ The definition applied to my example: $\forall\epsilon >0, \exists\delta >0\ni0<|x-3|<\delta\implies\left|\dfrac{1}{x+1}-\dfrac{1}{4}\right|<\epsilon$. $\text{Scratch Work For Determining}\space \delta$: $$\left|\dfrac{1}{x+1}-\dfrac{1}{4}\right|=\left|\dfrac{4-(x+1)}{(x+1)\cdot4}\right|=\left|\dfrac{4-x-1}{(x+1)\cdot 4}\right |=\left|\dfrac{3-x}{(x+1)\cdot 4}\right|=\dfrac{|x-3|}{|4(x+1)|}<\epsilon$$ I can let the latter inequality be $|x-3|<\epsilon\cdot|4(x+1)|=4\epsilon\cdot|(x+1)|$, and then try to bound it: Let $\delta=1$: $$|x-3|<\delta=1\implies -1<x-3<1\implies 3<x+1<5$$. Plugging the value $5$ instead of $|(x+1)|$, gives the following: $$|x-3|<4\epsilon\cdot 5=20\epsilon.$$ At this stage, I'd then say $\delta=\min\{1,20\epsilon\}$. I was told this is not the correct answer though. Can anyone explain why it's not right? AI: To maximize a fraction, we want to minimize its denominator. With this in mind, given any $\epsilon>0$, let $\delta = \min\{1,12\epsilon\}$. 
Then if $0 < | x-3| <\delta$, notice that: \begin{align*} 0<|x-3|<\delta \leq 1 &\implies -1 < x-3 < 1 \\ &\implies 3 < x+1 < 5 \\ &\implies 12 < 4(x+1) < 20 \\ &\implies \frac{1}{20} < \frac{1}{4(x+1)} < \frac{1}{12} \\ &\implies \left| \frac{1}{4(x+1)} \right| < \frac{1}{12} \\ \end{align*} Hence, it follows that: \begin{align*} \left| \frac{1}{x+1} - \frac{1}{4} \right| &= \left|\frac{1}{4(x+1)}\right||x-3| &\text{using your work}\\ &< \frac{1}{12}|x-3| & \text{from above}\\ &< \frac{1}{12}(12\epsilon) & \text{since $|x-3|<\delta \leq 12\epsilon$}\\ &= \epsilon \end{align*} as desired.
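To see concretely why $\delta=\min\{1,20\epsilon\}$ fails while $\delta=\min\{1,12\epsilon\}$ works, here is a quick numerical illustration (a sanity check, not a proof; the sample value of $\epsilon$ is an arbitrary choice):

```python
# Numerical sanity check (not a proof): with delta = min(1, 20*eps), points just
# below x = 3 can break the bound |f(x) - 1/4| < eps, while delta = min(1, 12*eps)
# keeps it.
def f(x):
    return 1 / (x + 1)

eps = 0.01

# A point inside the delta-ball for the (too large) choice 20*eps...
bad_delta = min(1, 20 * eps)
x = 3 - 0.99 * bad_delta           # |x - 3| < bad_delta
assert abs(f(x) - 0.25) >= eps     # ...where the target bound fails

# The choice 12*eps works for a sweep of points in its delta-ball.
good_delta = min(1, 12 * eps)
for k in range(1, 200):
    x = 3 + good_delta * (k / 100 - 1) * 0.999   # |x - 3| < good_delta
    assert abs(f(x) - 0.25) < eps
```

The failure with $20\epsilon$ comes from using the upper bound $5$ on $x+1$ where the lower bound $3$ was needed.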
H: Irreducible polynomial over a finite field? I am trying to solve a problem about irreducible polynomials over a finite field and i would like to ask you for a little help or any idea how to make this proof. Here is the problem: We have a finite field $\mathbb{F}_{q}$ and a prime number $p$. Let $q$ be the generator of the multiplicative group $\mathbb{F}_{p}^{\times }$. Prove that the polynomial $\sum_{i=0}^{p-1}X^{i}$ is irreducible over $\mathbb{F}_{q}$. $q$ as a generator of the multiplicative group has order $p-1$, but what kind of information i can take from this? Can anybody help me with an idea, please? Thank you in advance! AI: Suppose there is a root $x \in \Bbb F_{q^n}$ of this polynomial for some $n$. Then, since $x \neq 0$ and $x^p = 1$, $\Bbb F_{q^n}^\times$ contains a cyclic group of order $p$, so its order, $q^n-1$, has to be divisible by $p$. Since you supposed that $q$ was a primitive root of $\Bbb F_p^\times$, $q^n \equiv 1 \pmod p \iff n \equiv 0 \pmod {p-1}$. This shows that if $x$ is a root of this polynomial then it lives in an extension of $\Bbb F_{q^{p-1}}$. Since the polynomial is of degree $p-1$, it is irreducible.
H: Differential Calculus - Swapping a sup and a limit. Well, I'm doing some exercises on differential calculus and I'm stuck. (a) Let $U \subset \mathbb R^m$ and $f: U \rightarrow \mathbb R^n$ be a continuous function on a line segment $[x, x+h]\subset U$ and differentiable on $]x, x+h[$. Show that if $T:\mathbb R^m\rightarrow \mathbb R^n$ is a linear map, then: $$ \|f(x+h)-f(x)-T(h)\| \leq \sup_{t \in ]0, 1[}\|df(x+th)-T\|\,\|h\| $$ (b) Let $U \subset \mathbb R^m$ and $f:U\rightarrow \mathbb R^n$ a continuous function differentiable on $U- \{x\}$, where $x$ is an interior point of $U$. Suppose $\lim_{y \rightarrow x}df(y)=T$ for some linear map $T: \mathbb R^m \rightarrow \mathbb R^n$. Show that $f$ is differentiable at $x$ and $df(x)=T$. Suggestion: Use item (a). Progress: I had no problems with (a) but I'm stuck at (b). I must show that: $$\lim_{h \rightarrow 0} \frac{\|f(x+h)-f(x)-T(h)\|}{\|h\|}=0$$ We know that: $$\lim_{h \rightarrow 0} \frac{\|f(x+h)-f(x)-T(h)\|}{\|h\|}\leq \lim_{h \rightarrow 0}\sup_{t \in ]0,1[}\|df(x+th)-T\|$$ If I can swap the sup and the limit then I'm done, but I don't know if I'm allowed to. Ps: I don't know why, but I can't tag this question properly. AI: Hint: Let $\epsilon > 0$ be arbitrary and $\{h_n\}$ be any sequence $0 \neq h_n \to 0$. Then you can find $t_n \in \,]0,1[$ such that $\|f(x+h_n)-f(x)-Th_n\| \leq \big(\|df(x+t_nh_n) - T\|+\epsilon\big)\|h_n\|$ (why can we find such a $t_n$?). Deduce that $t_nh_n$ also $\to 0$ and then use this to conclude that $$ \overline{\lim_{n \to \infty}} \frac{\|f(x+h_n) - f(x) -Th_n\|}{\|h_n\|} \leq \epsilon. $$ Since $\epsilon$ and $\{h_n\}$ were chosen arbitrarily, you'll be done after this!
H: What is this curve formed by latticing lines from the $x$ and $y$ axes? Consider the following shape which is produced by dividing the line between $0$ and $1$ on the $x$ and $y$ axes into $n=16$ parts. Question 1: What is the curve $f$ when $n\rightarrow \infty$? Update: According to the answers this curve is not part of a circle but has very similar properties and behavior. In other words, this fact shows how one can produce a "pseudo-circle" with equation $x^{\frac{1}{2}}+y^{\frac{1}{2}}=1$ from some simple geometric objects (lines) by a limit construction. Question 2: Is there a similar "limit construction by lines" like the above drawing for producing a circle? AI: If we attach four curves like $f$ to each other in the following form, a "pseudo-circle" shape appears. Note its difference from a real circle. Its formula is $x^{\frac{1}{2}}+y^{\frac{1}{2}}=1$, a dual form of the circle equation $x^{2}+y^{2}=1$. You can find this equation simply by a geometric analysis of each line. A very interesting point about this curve is that there is a kind of $\pi$ for it which doesn't change with radius! Here we have $\pi'=\frac{10}{3}=3.3333...$, which is very near the $\pi$ of a circle ($=3.1415...$), but $\pi'$ is a rational number, not a transcendental number like $\pi$!
H: Is $x^2$ always congruent to $(y-x)^2$ modulo $y$? How could you prove the cases where its true? I've came across a problem recently which I've been thinking about, and am not sure if I am correct. Is $x^2$ always congruent to $(y-x)^2$ modulo $y$? Note: $y$ and $x$ are any integers. Since $y$ and $x$ are integers, can we say that $(y-x)^2 \le y$? Then, $(y-x)^2$ modulo $y$ $= (y-x)^2 \ne x^2$? This is probably wrong. If it is, what's the proper way to approach this question? Is there any specific cases where it would be true? AI: The definition of congruence modulo $n$ is: $a$ is congruent to $b$ modulo $n$ if $n$ divides evenly into $a-b$. Expand out $(y-x)^2-x^2$ to get $y^2-2xy+x^2-x^2 = y^2-2xy = y(y-2x)$. Notice that $y$ divides the result. The definition of congruence that you are probably more familiar with is that of "reducing" a number modulo $n$, and we say $a$ is congruent to $b$ modulo $n$ when "$a$ mod $n$ = $b$ mod $n$". This definition is equivalent, and typically works well for computer programming purposes, but it's not as conducive to proving results about congruence.
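The algebraic argument above can also be checked empirically; a small Python sketch (the sampling ranges are arbitrary, and $y=0$ is skipped since the check needs a nonzero modulus):

```python
# Empirical check of the identity above: for any integers x and a nonzero
# modulus y, y divides (y - x)^2 - x^2 = y(y - 2x).
import random

random.seed(0)
for _ in range(1000):
    x = random.randint(-100, 100)
    y = random.randint(-100, 100)
    if y == 0:
        continue  # skip: "% 0" is undefined
    assert ((y - x) ** 2 - x ** 2) % y == 0
```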
H: notation question re: function space This is a quick notation question: when one writes $X: C[0,\infty) \to \mathbb{R}$, what does that mean exactly? Is $C[0,\infty)$ the space of continuous functions with a domain of $[0,\infty)$ and thus $X$ is a functional? If so, shouldn't the range be a set of functions as well? AI: $X$ is just a function from $C[0,\infty)$ to $\mathbb{R}$. Since nothing is said about linearity of $X$, it need not be a linear functional.
H: Identify when $f(x) = 0$. I have the following problem to solve. My attempt a. $\int_0^{\pi} x^n f(x) dx =0$ $\forall$ $n \ge 0$ gives $ x^n f(x) =0$ almost everywhere in $[0,\pi]$ $\forall$ $n \ge 0$. Putting $n = 0$ we shall get $f(x) = 0$ almost everywhere. As $f(x) \in C[0,\pi]$ we shall say $f(x) = 0$ $\forall$ $x \in [0,\pi]$ b. It is the same as a. We shall put $n = 0$ and $\cos(nx) = 1$ $\Rightarrow$ $f(x) = 0$ $\forall$ $x \in [0,\pi]$ c. $\int_0^{\pi} f(x) \sin(nx) dx =0$ $\forall$ $n \ge 1$. Now $f(x)\sin(nx) = 0$ almost everywhere. But I am not getting any more here. Integrals of b and c are looking like Fourier coefficients of the function $f(x)$. Can we say anything from it? Thank you for your help. AI: Continuous functions can be approximated uniformly both by polynomials and trigonometric polynomials. The condition implies that $$\int_0^{\pi} pf=0$$ for any polynomial. Now choose $p$ such that $\lVert p-f\rVert_\infty<\varepsilon$, let $M$ be an upper bound for $f$ over $[0,\pi]$, and note that $$\int_0^\pi f^2=\int_0^\pi f(f-p)<M\pi\varepsilon $$ For $(b),(c)$ you should be able to find counterexamples if only one of the two conditions holds, but if both hold the above works analogously with $p$ a trigonometric polynomial.
H: Complex number and conjugate If $z$ is a complex number and $z^6=-16|z|^2$, how to prove that $|z|=0$ or $|z|=2$? I tried to change $z$ into the form $|z|$ and I get $|z|^4=-16(e^{-6iθ})$ But I don't know how to continue. AI: We start with $$ z^6= -16|z|^2.$$ $z=0$ is an obvious solution. Now, for the nonzero solutions, write $z=re^{i\theta}$ with $r>0$. Then $|z|^2= |re^{i\theta}|^2 =r^2 $, and $z^6=r^6 e^{i6\theta}$. Setting the two sides equal to each other: $$r^6e^{i6\theta}=-16r^2 .$$ Since $r\neq 0$ (we already considered $0$ as a solution), we cancel $r^2$ on both sides to get $$r^4e^{i6\theta}=-16=16(\cos\pi+i\sin\pi).$$ Comparing moduli gives $r^4=16$, so $r=|z|=2$; comparing arguments gives $6\theta=\pi+2k\pi$, which determines the possible arguments of $z$.
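Carrying the computation to the end numerically: comparing moduli in $r^4e^{i6\theta}=-16$ gives $r^4=16$, i.e. $r=2$; a quick check of the six resulting roots with Python's `cmath` (an illustration, not a proof):

```python
# Checking the six nonzero roots: |z| = 2 and arguments theta with
# e^{i 6 theta} = -1, i.e. theta = (pi + 2k pi)/6.
import cmath

r = 16 ** 0.25          # from comparing moduli: r^4 = 16
assert abs(r - 2) < 1e-12

for k in range(6):
    theta = (cmath.pi + 2 * cmath.pi * k) / 6
    z = r * cmath.exp(1j * theta)
    assert abs(z ** 6 - (-16 * abs(z) ** 2)) < 1e-9
```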
H: Markov Chain: I don't understand this solution to a conditional probability problem after n state transitions... What is the probability of being in state 4 after two steps, given that one is in state 5 after 8 steps? Markov Chain is at top of link. The sample solution is part (f). I have no idea what the summation means (e.g. p77p44p66p55) or why it was given, or the powers of x. I understand why I am finding p(state 4 after 2 steps | state 5 after 8 steps), but am lost otherwise. Any clarification on how I should read those summations and what they mean would be greatly appreciated! https://courses.cit.cornell.edu/info2950_2013fa/wex.pdf AI: To be in state 5 after eight steps starting from state 0, you must go from state 0 to state 7, from state 7 to state 4, from state 4 to state 6, from state 6 to state 5, and spend another four steps going from states to the same states. You need to sum the probabilities of the different patterns of steps which will make you be in state 5 after eight steps starting from state 0. Each pattern's probability is the product of the probabilities of its individual steps. You can ignore going from state 0 to state 0 as this has probability 0. So the probability of being in state 5 after eight steps starting from state 0 is the sum of the probabilities of the different patterns. The probabilities of the steps which change state appear in each pattern, so you can take these outside the summation, since they all have a common factor of $p_{07}p_{74}p_{46}p_{65}$ or $0.5 \times 0.4 \times 0.8 \times 0.3$, which needs to be multiplied by a sum over the probabilities of going from states to the same states. That sum is expressed as $\sum p_{77}^i p_{44}^j p_{66}^k p_{55}^l$ or $\sum 0.4^i\times 0.2^j\times 0.7^k\times 1.0^l$ provided that $i+j+k+l=4$ since you need to waste four steps doing this; you also need $i,j,k,l$ to be non-negative integers. 
If you want to be in state 4 after two steps and in state 5 after 8 steps, starting from state 0, then the calculation is the same with the additional condition that $i=0$ since you cannot waste any steps going from state 7 to itself.
H: Determining $a$ values for convergence in alternating series So we have this series: $$ \sum^{\infty}_{n = 1}\left(-1\right)^n\sin\left(a \over n\right) $$ The task is to find all $a$ values for which the series converges, and all $a$ values for which the series converges absolutely. My first question: is this even an alternating series? From my maths book: a series is an alternating series if it has the form $\sum^{\infty}_{n = 1}\left(-1\right)^{n - 1}a_n$, where $a_{n} > 0\,,\ \forall\ n\in N$. But $\sin\left(a/n\right)$ ranges from $-1$ to $1$. I know that the series $\sum^{\infty}_{n = 1}u_{n}$ converges absolutely if $\sum^{\infty}_{n = 1}\left\vert u_{n}\right\vert < \infty$. So I tried using d'Alembert's ratio test $\lim_{n \to \infty}\left\vert u_{n+1}/u_{n}\right\vert = D$ to find the series' convergence, but got $D = 1$, so this is no good. But I don't think these take into account the $a$ variable; how can I find out what the actual $a$ values need to be? Should I just start trying different values for $a$? Any suggestions or tips would be greatly appreciated! AI: Whatever value $a$ has, over the long term $a/n$ will be very close to $0$, and so $\sin \frac an\sim\frac an$. So the series might not be alternating for the first terms, but it will always be eventually. The series only converges absolutely for $a=0$. For $a\ne0$, the series does converge, but not absolutely. To see this, we can assume $a>0$ (the case $a<0$ is identical, since $\sin x=-\sin(-x)$). For $n$ big enough, we have $\sin \frac an\geq\frac a{2n}$, so $$ \sum_n\left|\sin\frac an\right|\geq\sum_n\frac a{2n}=\infty. $$ We also have $\sin \frac an\to0$ and, for $n$ big enough, $\sin \frac an>0$ and decreasing (since $a/n$ decreases to $0$ and $\sin$ is increasing near $0$). So by the Leibniz criterion, the series converges.
H: Help with a 'simple' sum of linear operators and their adjoints acting on an orthonormal basis Given an orthonormal basis $\{u_1,\cdots, u_n\}$ of a vector space $V$ I am asked to show that $$ \sum_{k=1}^n \|T^*u_k\|^2= \sum_{k=1}^n \|Tu_k\|^2 $$ for all $T\in \mathcal{L}(V)$ where $T^*$ represents the adjoint of $T$ and $\|u\|^2=\langle u,u \rangle$ for a positive definite hermitian inner product $\langle \cdot,\cdot \rangle.$ I know by the orthonormality of the basis we can represent $$T^*u_k = \sum_{i=1}^n \langle T^*u_k,u_i \rangle u_i $$ for each $k$ and this expression can be tinkered with by applying properties of the inner product and adjoint. I did a problem like this that was essentially a one line proof and I expect that a similar approach should work here but so far everything I've tried has expanded into a humongous mess and I get lost lines and lines of notation. A push in the right direction would be appreciated. AI: Another way to do it is to see $\|T^* u_k\|^2 = \sum_{i=1}^n |\langle T^*u_k,u_i \rangle|^2$, which follows from the formula you wrote. Hence $$ \sum_{k=1}^n \|T^* u_k\|^2 = \sum_{k=1}^n \sum_{i=1}^n |\langle T^*u_k,u_i \rangle|^2 $$ Or even easier: write $T$ as a matrix with respect to this orthonormal basis. Then $\sum_{k=1}^n \|T^* u_k\|^2$ is the sum of the squares of the entries of the transpose of this matrix. Now it is clear that this is no different to the sum of the squares of the entries of the matrix.
H: Finding formula for a cubic polynomial So I need to find a formula for a cubic polynomial with the given conditions: Local max at $x=1$ Local min at $x=3$ $Y$ intercept of $5$ $x^3$ term whose coefficient is $1$ So all I really have is $x^3+x^2+x+5$. I know that $F'(1) = 0$ and $F''(1) < 0$. Also $F'(3) = 0$ and $F''(3) > 0$. But as far as setting up the problem, I'm lost. AI: You don't have $f(x)=x^3+x^2+x+5$; you have $f(x)=x^3+bx^2+cx+5$ where the coefficients $b$ and $c$ are yet unknown. Since you also have $f'(1)=0$ and $f'(3)=0$, this gives you two (linear) equations in the two unknowns $b$ and $c$ which you can solve. This doesn't use the information that $1$ and $3$ are maximum and minimum rather than the other way around, but for a third-degree polynomial whose leading coefficient is positive, the local maximum (if any) will always come before the local minimum (if any), so that is automatically satisfied.
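For this particular problem, the two linear equations from the answer can be solved by hand; the short Python sketch below carries out the suggested steps (the values of $b$ and $c$ are derived from $f'(1)=0$ and $f'(3)=0$, so treat the block as a worked check rather than part of the original exchange):

```python
# Solving f'(1) = 0 and f'(3) = 0 for b and c, with f(x) = x^3 + b x^2 + c x + 5,
# so f'(x) = 3x^2 + 2bx + c:
#   f'(1) = 3 + 2b + c = 0
#   f'(3) = 27 + 6b + c = 0
# Subtracting: 24 + 4b = 0, so b = -6 and then c = -3 - 2b = 9.
b, c = -6, 9

def f(x):
    return x ** 3 + b * x ** 2 + c * x + 5

def fprime(x):
    return 3 * x ** 2 + 2 * b * x + c

assert fprime(1) == 0 and fprime(3) == 0
assert f(0) == 5                      # y-intercept condition
assert fprime(0.9) > 0 > fprime(1.1)  # sign change: local max at x = 1
assert fprime(2.9) < 0 < fprime(3.1)  # sign change: local min at x = 3
```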
H: A closed ball in $l^{\infty}$ is not compact Definition of compact in Real Analysis, Carothers, 1st ed. said that: In Example 8.1 (c), he claimed that the closed ball $\{x: \|x\| \leq 1\}$ in $l^{\infty}$ is not compact. Why? AI: Compact implies limit point compact. Hint: consider the set of standard basis vectors $\{(1, 0 , 0 , 0, \cdots) , \hspace{2mm} (0,1,0,0,0 \cdots), \hspace{2mm} (0,0,1,0,0,0, \cdots), \cdots \}$ and prove that it has no limit point; note that any two of these points are at $l^{\infty}$-distance $1$ from each other. In fact, you can prove the stronger result that this set is closed and discrete, and the only closed discrete subsets of a compact space are finite.
H: PDF of $Y = \cos(\pi X)$ I've run into a practice problem where $Y = \cos(\pi X)$, and $X$ is uniform on $[0, 1]$, and I'm supposed to prove that the PDF of $Y$ is $1/(\pi\sqrt{1 - y^2})$. I know that for derived distributions, you plug in the the equation for $Y$ in terms of $X$ into the PDF of $X$, integrate up to $y$, and then differentiate with respect to $y$. However, when I do this, I get the PDF of $Y = -1/(\pi\sqrt{1 - y^2})$. I'm sure that the problem here is that there's some property of the cosine and its inverse that I don't know, so if someone could point out how to do this, that would be great, thanks! AI: If $Y=\cos \pi X$, $X\sim U(0,1)$, then the probability (for $y\in[-1,1]$) $$\Pr(Y<y) = \Pr\left(X > \frac{\arccos y}\pi\right) = \int_{\frac{\arccos y}\pi}^{+\infty}f_X(x)\ dx = \int_{\frac{\arccos y}\pi}^{1}\frac{1}{1-0}dx = 1 - \frac{\arccos y}\pi$$ Then the PDF of Y is (for $y\in(-1,1)$) $$f_Y(y) = \frac d{dy}\Pr(Y<y) = 0-\frac1\pi\cdot\frac{-1}{\sqrt{1-y^2}} = \frac{1}{\pi\sqrt{1-y^2}}$$ I am not sure which negative sign you have missed. If you choose to jump a few steps between integration and differentiation, $$f_Y(y) = \frac d{dy}\int_{\frac{\arccos y}\pi}^{+\infty}f_X(x)\ dx = -f_X\left(\frac{\arccos y}\pi\right)\cdot\frac d{dy}\frac{\arccos y}\pi = \frac{1}{\pi\sqrt{1-y^2}}$$ (for $y\in(-1,1)$)
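The derived distribution can be sanity-checked by Monte Carlo simulation; a minimal Python sketch (the sample size and test point $y_0$ are arbitrary choices):

```python
# Monte Carlo check of the CDF derived above: P(Y < y) = 1 - arccos(y)/pi
# for Y = cos(pi * X), X uniform on [0, 1].
import math
import random

random.seed(0)
N = 200_000
y0 = 0.3   # arbitrary test point in (-1, 1)
count = sum(1 for _ in range(N) if math.cos(math.pi * random.random()) < y0)
empirical = count / N
theoretical = 1 - math.acos(y0) / math.pi
assert abs(empirical - theoretical) < 0.01
```

Differentiating the checked CDF in $y$ recovers the stated density $1/(\pi\sqrt{1-y^2})$.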
H: Notation for "parallel" morphisms in a diagram Suppose $f\colon A\to B$ and $g\colon A\to B$ are possibly-distinct morphisms. How do I stick them both in a diagram (along with, e.g., their (co)equalizer) without suggesting that they are equal? AI: In simple cases (such as for (co)equalizers) you can get away with simply showing them beside each other. Even if you assert that the diagram commutes, that will not be understood as requiring equality between two morphisms that are explicitly shown and named in the diagram. If the case is not simple, you can still draw the diagram (as an illustration of the situation) as long as you don't claim it commutes. You'll have to specify equations by saying that such-and-such specific subdiagrams commute. ... or perhaps it may be clearer just to duplicate an object in the diagram, such that the morphisms are not parallel anymore.
H: DE : $\frac{\mathrm d^2(F)}{\mathrm dx^2} -S^2 \cdot (F) = K \cdot e^{-ax}$ I have this ODE that I need to solve and I am not sure where to begin: $$\frac{d^2(F)}{dx^2} -S^2 \cdot (F) = K \cdot e^{-ax}$$ Does anyone know any techniques that I could use? AI: Hints: For the homogeneous equation, we have: $$m^2 - S^2 = 0 \rightarrow m_{1,2} = \pm ~ S$$ This gives: $$F_h(x) = c_1e^{-Sx} + c_2 e^{Sx}$$ For the particular solution, using Undetermined Coefficients, we can choose: $$F_p = w e^{-a x}$$ Substitute into the ODE and solve for $w$; this gives $(a^2-S^2)w=K$, so it works provided $a^2 \neq S^2$. In the resonant case $a=\pm S$, try $F_p = w x e^{-ax}$ instead.
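Carrying the hint one step further under the assumption $a^2 \neq S^2$ (the non-resonant case), substituting $F_p = we^{-ax}$ gives $w = K/(a^2 - S^2)$; a numerical spot-check with assumed sample parameters:

```python
# Spot-check of the particular solution F_p = w e^{-a x} with w = K/(a^2 - S^2),
# using sample parameters chosen so that a^2 != S^2 (non-resonant case).
import math

a, S, K = 1.5, 0.7, 2.0
w = K / (a ** 2 - S ** 2)

def Fp(x):
    return w * math.exp(-a * x)

# For this ansatz F_p'' = a^2 F_p, so F_p'' - S^2 F_p should equal K e^{-a x}.
for x in (0.0, 0.5, 1.3):
    lhs = (a ** 2 - S ** 2) * Fp(x)
    rhs = K * math.exp(-a * x)
    assert abs(lhs - rhs) < 1e-12
```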
H: sequence with infinitely many limit points I am looking for a sequence with infinitely many limit points. I don't want to use $\log,\sin,\cos$ etc.! It's easy to find a sequence like above, e.g. $1,1,2,1,2,3,1,2,3,4,1,2,3,4,5,1,\dots$ But how can you prove the limit points? The problem I am having is the recursion or definition of the sequence which I can't name exactly. But for a formal proof I need this. So what's a sequence with infinitely many limit points without using $\log,\sin,\cos$ or any other special functions? AI: You can use any sequence that enumerates the set $$\{1/n\}\cup\{1+1/n\}\cup\dots\cup\{k+1/n\}\cup\dots=\{k+1/n : k\ge 0,\ n\ge 1\}.$$ Every nonnegative integer $k$ is then a limit point, since $k+1/n\to k$ as $n\to\infty$.
H: Proving that $(abc)^2\geq\left(\frac{4\Delta}{\sqrt{3}}\right)^3$, where $a$, $b$, $c$ are the sides, and $\Delta$ the area, of a triangle Let $a$, $b$, $c$ be the sides of $\triangle ABC$. Prove $$(abc)^2\geq\frac{4\Delta}{\sqrt{3}}$$ where $\Delta$ is the area of the triangle. (Editor's note: As observed in the answers, the target relation is in error. It should be $$(abc)^2\geq\left(\frac{4\Delta}{\sqrt{3}}\right)^3$$ I incorporated the correction into the title. —@Blue) Clearly, $a$, $b$, $c$ are the sides opposite to the angles $A$, $B$, $C$ I considered the point $F$ inside the triangle as the Fermat point of the triangle and named $FA=x$, $FB=y$, and $FC=z$. Then $\angle AFB = \angle AFC = \angle BFC=120^\circ$, so we have $$a^2=y^2+z^2+yz$$ and similarly for $b^2$ and $c^2$. Observe that $$xy+yz+zx=\frac{4\Delta}{\sqrt{3}}$$ So, we have a big inequality to prove: $$\left(x^2+xy+y^2\right)\left(y^2+yz+z^2\right)\left(z^2+zx+x^2\right)\geq xy+yz+zx$$ but I can't prove it. Thanks for help AI: The actual inequality is $(abc)^2\ge \left(\dfrac{4\Delta}{\sqrt{3}}\right)^3$.I will show this one. Lemma: $\displaystyle \sin{\alpha}+\sin{\beta}+\sin{\gamma} \le \frac{3\sqrt{3}}{2}$ when $\alpha$,$\beta$,$\gamma$ are angles of a triangle.Now we see that $\sin x$ is concave in $(0,\pi)$ so applying Jensen's inequality we get $\dfrac{\sin{\alpha}+\sin{\beta}+\sin{\gamma}}{3}\le \sin\left({\dfrac{\alpha+\beta+\gamma}{3}}\right)=\dfrac{\sqrt{3}}{2}$ So our lemma is proved. Now to the actual problem.$a+b+c=2R(\sin{\alpha}+\sin{\beta}+\sin{\gamma})\le 3\sqrt{3}R$ (from the lemma). Now $$\dfrac{4\Delta}{\sqrt{3}}=\dfrac{abc}{R\sqrt{3}}\le \dfrac{3abc}{a+b+c}\le (abc)^{2/3}$$ where in the last step we have used AM-GM.Now cubing both sides we get the desired inequality.
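The corrected inequality $(abc)^2\geq\left(4\Delta/\sqrt3\right)^3$ can be checked empirically on random triangles (a sanity check, not a proof; the area is computed from the random vertices via the shoelace formula):

```python
# Empirical check of (abc)^2 >= (4*Area/sqrt(3))^3 on random triangles.
import math
import random

random.seed(1)
for _ in range(1000):
    pts = [(random.random(), random.random()) for _ in range(3)]
    (x1, y1), (x2, y2), (x3, y3) = pts
    # shoelace formula for the triangle area
    area = abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2
    if area < 1e-6:
        continue  # skip near-degenerate triangles
    a = math.dist(pts[1], pts[2])
    b = math.dist(pts[0], pts[2])
    c = math.dist(pts[0], pts[1])
    assert (a * b * c) ** 2 >= (4 * area / math.sqrt(3)) ** 3 - 1e-12
```

Equality holds exactly for the equilateral triangle, consistent with the AM-GM and Jensen steps in the proof.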
H: Minimizing/Maximizing triangle I plugged the $y$-value $e^{-x/3}$ into the area formula $A = \frac{1}{2}xy$, found the derivative and set it equal to zero. But I am only getting one value, $x=3$ (max). How do I get the min? AI: There is no local minimum. For the absolute minimum, you just plug in the endpoints of the domain, $x=1$ and $x=5$. The one with the smaller $A$-value is your absolute minimum.
H: Finding determinant using properties of determinant without expanding show that determinant $$\left|\matrix{ x^2+L & xy & xz \\ xy & y^2+L & yz \\ xz & yz & z^2+L \\ }\right| = L^2(x^2+y^2+z^2+L)$$ without expanding by using the appropriate properties of determinant. All i can do is LHS $$x^2y^2z^2\left|\matrix{ 1+L/x^2 & 1 & 1 \\ 1 & 1+L/y^2 & 1 \\ 1 & 1 & 1+L/z^2 \\ }\right|$$ Is it a must to relate to eigenvalue problem? AI: $$\left|\matrix{ x^2+L & xy & xz \\ xy & y^2+L & yz \\ xz & yz & z^2+L \\ }\right| = $$ $$xyz\left|\matrix{ x+L/x & y & z \\ x & y+L/y & z \\ x & y & z+L/z \\ }\right| = $$ $$=\left|\matrix{ x^2+L & y^2 & z^2 \\ x^2 & y^2+L & z^2 \\ x^2 & y^2 & z^2+L \\ }\right| = $$ $$=\left|\matrix{ x^2+y^2+z^2+L & y^2 & z^2 \\ x^2+y^2+z^2+L & y^2+L & z^2 \\ x^2+y^2+z^2+L & y^2 & z^2+L \\ }\right| = $$ $$=(x^2+y^2+z^2+L)\left|\matrix{ 1 & y^2 & z^2 \\ 0 & L & 0\\ 0 & 0 & L \\ }\right|=L^2(x^2+y^2+z^2+L)$$
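The identity can be spot-checked numerically without any linear-algebra library (the test values are arbitrary; note that an $x=0$ case also passes, even though the intermediate row-factoring step assumed $xyz\neq 0$ — the identity is polynomial, so it extends by continuity):

```python
# Numerical spot-check of det(M) = L^2 (x^2 + y^2 + z^2 + L).
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

for (x, y, z, L) in [(1.0, 2.0, 3.0, 0.5), (-2.0, 0.3, 1.7, 4.0), (0.0, 1.0, -1.0, 2.0)]:
    M = [[x * x + L, x * y,     x * z],
         [x * y,     y * y + L, y * z],
         [x * z,     y * z,     z * z + L]]
    assert abs(det3(M) - L ** 2 * (x * x + y * y + z * z + L)) < 1e-9
```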
H: How to find this Fibonacci sequence sum $\sum_{k=0}^{\infty}\frac{1}{F_{2^k}}$ let the sequence $\{F_{n}\}$ be such that $$F_{1}=1,F_{2}=1,F_{m+1}=F_{m}+F_{m-1},m\ge 2$$ Find this value $$I=\sum_{k=0}^{\infty}\dfrac{1}{F_{2^k}}$$ My try: I know this $$F_{n}=\dfrac{1}{\sqrt{5}}\left(\left(\dfrac{\sqrt{5}+1}{2}\right)^n-\left(\dfrac{\sqrt{5}-1}{2}\right)^n\right)$$ so $$\dfrac{1}{F_{2^k}}=\dfrac{\sqrt{5}}{\left(\dfrac{\sqrt{5}+1}{2}\right)^{2^k}-\left(\dfrac{\sqrt{5}-1}{2}\right)^{2^k}}$$ so we only need to find this sum $$I=\sum_{k=0}^{\infty}\dfrac{\sqrt{5}}{\left(\dfrac{\sqrt{5}+1}{2}\right)^{2^k}-\left(\dfrac{\sqrt{5}-1}{2}\right)^{2^k}}$$ But I can't. Thank you AI: Let $$a=\dfrac{1+\sqrt{5}}{2}\Longrightarrow \dfrac{\sqrt{5}-1}{2}=a^{-1}.$$ One caution first: Binet's formula is $F_{n}=\dfrac{a^{n}-(-a)^{-n}}{\sqrt{5}}$, so the formula $F_{n}=\dfrac{a^{n}-a^{-n}}{\sqrt{5}}$ you used is valid only for even $n$ (for $n=1$ it gives $\frac{1}{\sqrt{5}}$, not $1$). Since $2^n$ is even for $n\ge 1$, treat the $k=0$ term separately: $\dfrac{1}{F_{2^0}}=\dfrac{1}{F_1}=1$, and $$I=1+\sum_{n=1}^{\infty}\dfrac{\sqrt{5}}{a^{2^n}-a^{-2^n}}.$$ Now let $a^{2^n}=x$; then $$\dfrac{1}{a^{2^n}-a^{-2^n}}=\dfrac{x}{x^2-1}=\dfrac{1}{x-1}-\dfrac{1}{x^2-1}=\dfrac{1}{a^{2^n}-1}-\dfrac{1}{a^{2^{n+1}}-1},$$ so the sum telescopes: \begin{align*}I&=1+\sqrt{5}\sum_{n=1}^{\infty}\left(\dfrac{1}{a^{2^n}-1}-\dfrac{1}{a^{2^{n+1}}-1}\right)\\ &=1+\lim_{n\to\infty}\left(\dfrac{\sqrt{5}}{a^{2}-1}-\dfrac{\sqrt{5}}{a^{2^{n+1}}-1}\right)\\ &=1+\dfrac{\sqrt{5}}{a^{2}-1}=1+\dfrac{\sqrt{5}}{a}\qquad(\text{since } a^2=a+1)\\ &=1+\dfrac{\sqrt{5}\left(\sqrt{5}-1\right)}{2}\\ &=\dfrac{7-\sqrt{5}}{2}. \end{align*}
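As a numerical check, the partial sums converge (doubly exponentially, since the index doubles each term) to the claimed value $(7-\sqrt5)/2$; a short Python sketch, where the cutoff $k<9$ is an arbitrary choice:

```python
# Partial sums of sum_{k>=0} 1/F_{2^k} converge to (7 - sqrt(5))/2.
import math

F = [0, 1]
while len(F) < 260:
    F.append(F[-1] + F[-2])   # F[n] is the n-th Fibonacci number

target = (7 - math.sqrt(5)) / 2
partial = sum(1 / F[2 ** k] for k in range(9))   # terms 1/F_1, 1/F_2, ..., 1/F_256
assert abs(partial - target) < 1e-12
```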
H: Definite Integral $\int_2^4\frac{\sqrt{\log(9-x)}}{\sqrt{\log(9-x)}+\sqrt{\log(3+x)}}dx$ How can I find the value of this following definite integral? $$\int_2^4\frac{\sqrt{\log(9-x)}}{\sqrt{\log(9-x)}+\sqrt{\log(3+x)}}dx$$ AI: HINT: Use $$I=\int_a^bf(x)dx=\int_a^bf(a+b-x)dx$$ and $$2I=\int_a^bf(x)dx+\int_a^bf(a+b-x)dx=\int_a^b\left(f(x)+f(a+b-x)\right)dx$$ Observe that if $\displaystyle g(x)=\sqrt{\ln(9-x)},$ $\displaystyle g(4+2-x)=\sqrt{\ln(9-(6-x))}=\sqrt{\ln(x+3)}$
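Following the hint, $f(x)+f(6-x)=1$ on $[2,4]$ (since $9-(6-x)=3+x$ and $3+(6-x)=9-x$), so $2I=\int_2^4 1\,dx=2$ and $I=1$; a midpoint-rule check in plain Python, where the grid size is an arbitrary choice:

```python
# Numerical check that the integral equals 1, via a midpoint rule.
import math

def f(x):
    top = math.sqrt(math.log(9 - x))
    return top / (top + math.sqrt(math.log(3 + x)))

n = 100_000
h = 2 / n
I = sum(f(2 + (i + 0.5) * h) for i in range(n)) * h
assert abs(I - 1) < 1e-6
```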
H: Prove that there does not exist a surjective function from the set of rationals to reals. Prove that there does not exist a surjective function f: $\mathbb{Q}\rightarrow \mathbb{R} $. I think a proof by contradiction would work which means we want to prove $$\neg (\forall{y}\in \mathbb{R}, \exists{x}\in\mathbb{Q}, ~f(x) = y)$$ which is equivalent to $$(\exists{y}\in \mathbb{R}, \forall{x}\in\mathbb{Q}, ~f(x) \not= y).$$ My logic is that since the set of rational numbers, $\mathbb{Q}$, is countably infinite and the set of real numbers, $\mathbb{R}$, is uncountably infinite, then we can try to choose one element in $\mathbb{Q}$ at a time to pair with multiple elements in $\mathbb{R}$. However, we'll eventually find some element in $\mathbb{R}$ that has no pre-image in $\mathbb{Q}$. I'm a bit unsure if my logic is correct and how to write this formally. AI: Basically the comment of @nsanger but since I was already typing this I'll post it anyway. This follows directly from Cantor's diagonal argument: a countable sequence cannot contain all real numbers. Here is a slight variation of that idea. Let $f:\mathbb{N}\to \mathbb{R}$ be any function. Construct a sequence of closed intervals $$\mathbb{R} \supset I_0 \supseteq I_1 \supseteq \ldots $$ such that $f(k) \not \in I_m$ for all $k \leq m$. (You only need to consider one extra function value at each step.) The intersection of all these intervals is not empty but it does not contain any number $f(k)$. Therefore $f$ is not surjective. Now use that $\mathbb{Q}$ is countable to complete this argument.
H: If $S_n=\left[-\frac{n}{n+1},\frac{n}{n+1}\right]$ then $\bigcup\limits_{n\geq1}S_n=(-1,1)$ I was self-reading Mathematics for Economists by Simon and Blume. Consider the closed sets $S_n=\left[-\frac{n}{n+1},\frac{n}{n+1}\right]$ for $n\geq1,n\in\mathbb N$. Then $$\bigcup_{n\geq1}S_n=(-1,1),$$ is an open interval. How to show that $\bigcup\limits_{n\geq1}S_n=(-1,1)$? AI: If $-1 < x < 1$, then $\exists N_1 \in \mathbb{N}$ such that $$ x < 1-1/N_1 $$ and $N_2 \in \mathbb{N}$ such that $$ x > -1 + 1/N_2 $$ Now take $n = \max\{N_1, N_2\}$. Then, $$ -1 + 1/n < x < 1-1/n \Rightarrow x \in S_{n-1} $$ Hence, $$ (-1,1) \subset \bigcup_{n\geq 1} S_n $$ For the reverse inclusion, $S_n \subset (-1,1)$ for all $n$, since $\frac{n}{n+1}<1$; hence $\bigcup_{n\geq 1} S_n \subset (-1,1)$ as well.
H: Finding an affine combination of a point on a triangle I have a problem involving affine combinations that I can't figure out how to solve. Given the above picture, write q as an affine combination of u and w. Now, I understand how to write the simpler affine combinations. I can figure out p or s as an an affine combination of u, v, and w. q, however, has me stumped. I've tried a few different approaches. I started off by looking at the picture using triangles. I wasn't able to get anywhere with this method, given that I couldn't find anywhere where the various laws could be applied. Next, I tried writing everything as a combination of pretty much everything else, but - after using a number of pages as scrap - I was unable to come up with anything solid. The main problem is that I can't figure out what information will lead me to the answer. I've used variables to represent the distances between q and the other points, but that just gives me equations with too many unknowns. As for what I know: $r = \frac{2u + v}3$ $r = \frac{s + p}2$ $r = \frac{3s + w}4$ $p = \frac{2r + w}3$ $p = \frac{s + w}2$ $p = \frac{4u + 2v + 3w}9$ $s = \frac{8u + 4v -3w}9$ $p = \frac{xq + yv}{x + y}$ $q = \frac{p - yv}x$ $q = \frac{aw + bu}{a + b}$ All of these were obtained from the diagram. x is the distance between p and v y is the distance between p and q a is the distance between q and u b is the distance between q and w A variety of other equalities can be derived from the above relationships, but I've been running myself in circles trying to figure out q in terms of u and w. I would appreciate any help (or a direction to move in) in solving this question! Thanks in advance! AI: The key fact here is that $u,v,w$ are affinely independent. Then any point in the plane can be expressed as a unique affine combination of $u,v,w$. You have $r = \frac{2}{3} u + {1 \over 3} v $, $p = {2 \over 3} r + {1 \over 3 } w$. Combining gives $p = {4 \over 9} u + {2 \over 9} v + {1 \over 3} w$. 
We have $q = tp+(1-t) v$ for some $t$, and $t$ is such that the multiplier of $v$ is zero. So, $q = {4 \over 9}t u + ({2 \over 9}t + (1-t)) v + {1 \over 3}t w$. Setting the multiplier of $v$ to zero gives $t = {9 \over 7}$, and so we have $q = {4 \over 7} u + {3 \over 7} w$.
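With concrete (assumed) coordinates for the triangle, the result $q = \frac47 u + \frac37 w$ can be verified: $q$ lies on line $uw$ by construction, and it should be collinear with $v$ and $p$. A small Python sketch:

```python
# Coordinate check with an assumed concrete triangle.
u, v, w = (0.0, 0.0), (6.0, 0.0), (0.0, 6.0)

def comb(coeffs, pts):
    # affine combination of 2D points
    return tuple(sum(c * p[i] for c, p in zip(coeffs, pts)) for i in range(2))

p = comb((4 / 9, 2 / 9, 1 / 3), (u, v, w))
q = comb((4 / 7, 0.0, 3 / 7), (u, v, w))

# collinearity of v, p, q via the cross product of (p - v) and (q - v)
cross = (p[0] - v[0]) * (q[1] - v[1]) - (p[1] - v[1]) * (q[0] - v[0])
assert abs(cross) < 1e-12
```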
H: show H is a subgroup Let G be a finite abelian group. Let $H=\langle a,b\rangle = \{a^{i}b^{j}\;\;\;i,j\in\mathbb{Z}\}$ Show that $H$ is a subgroup of $G$ My solution. 1) Need to show $H\neq\emptyset$ This is true as we can take $i=j=0$ then $a^{0}b^{0}=1\in H$ 2) closure let $a^{i}b^{j}\in H$ and let $a^{k}b^{r}\in H$ then $a^{i}b^{j}a^{k}b^{r}=a^{i+k}b^{j+r}\in H$ 3) let $a^{i}b^{j}\in H$ then $a^{-i}b^{-j}\in H$ as we can take $i=j=-1\in\mathbb{Z}$ So $H$ is a subgroup of G Is this correct? Thank you AI: In 3), if you take $i=j=-1$, you have $a^ib^j=a^{-1}b^{-1}$. Better say: since $i,j\in \mathbb{Z}$, then also $-i,-j$ are integers, thus $a^{-i}b^{-j}$ is in $H$ as well.
H: $l^q \subset l^p$ for $q\leq p$ with counting measure Suppose $\Sigma = \mathcal{P}(\Omega)$ and $\mu$ is the counting measure. I'm seeking to show that $l^q \subset l^p$ for $1 \leq q\leq p \leq \infty$. The main obstacle to a proof is showing $$ \left(\sum_{\omega \in \Omega} |f(\omega)|^p\right)^{1/p} \leq \left(\sum_{\omega \in \Omega} |f(\omega)|^q\right)^{1/q}.$$ Could someone point me in the right direction? Thanks in advance! AI: Try proving it first in the case where $\left(\sum_{\omega \in \Omega} |f(\omega)|^q\right)^{1/q} = 1$: then $|f(\omega)| \leq 1$ for every $\omega$, so $|f(\omega)|^p \leq |f(\omega)|^q$. Can you see how the general case follows from that, by rescaling $f$?
H: How do you prove that there does not exist a $3 \times 3$ matrix over $\Bbb{Q}$ such that $A^8=I$ and $A^4 \ne I$? I am kind of stuck with the following problem: Prove that there does not exist a $3 \times 3$ matrix over $\Bbb{Q}$ such that $A^8=I$ and $A^4 \ne I$. I already tried some things and got that if $A^8=I$, then $(A^4)^2=I$. And I guess a similar reasoning would work for this one. I bet that I am missing something obvious but I can't find what it is :( AI: Hint: The minimal polynomial of $A$ divides $x^8-1=(x^4-1)(x^4+1)$, and $x^4+1$ is irreducible over $\mathbb{Q}$. Since $A$ is $3\times 3$, its minimal polynomial has degree at most $3$, so the degree-$4$ irreducible factor $x^4+1$ cannot divide it; hence the minimal polynomial divides $x^4-1$, which forces $A^4=I$.
H: interior of a nested increasing union over a sequence of sets What are the weakest hypotheses on a topological space $X$ so that for every increasing sequence $S_n$ of subsets of $X$ we have that $\bigcup _{n=1}^\infty (S_n^o)=(\bigcup _{n=1}^\infty S_n)^o$ AI: The proposition is false in the case of the compact regular space $X=[0,1]$, the closed unit interval with the usual topology. Let $r_1,r_2,\dots,r_n,\dots$ be an enumeration of the rational points of $X$. Let $S_0$ be the set of all irrational points of $X$. For each $n\in\mathbb N$, let $S_n=S_0\cup\{r_1,\dots,r_n\}$. Then $S_0,S_1,\dots$ is an increasing sequence of sets whose union is the whole space $X$. Each $S_n$ has empty interior, so the union of the interiors is the empty set, but the interior of the union is the whole space. More generally, the proposition is false in every T$_1$-space which has a countable subset which is not closed. (More generally still, it's false if there are countably many pairwise disjoint closed sets whose union is not closed.) On the other hand, it's true in a discrete space or a finite space. A P-space is a topological space in which countable intersections of open sets are open. (Some writers include some separation axiom in the definition.) Discrete spaces and finite spaces are trivial examples of P-spaces. Does every P-space have the property that, for any decreasing sequence of subsets, the interior of the intersection is equal to the intersection of the interiors?
H: Integration by trig substitution - Why can I draw a right triangle and use that to go back to the original variable? Suppose we are given an integral of the form $\int f(x) dx$ with $f(x)$ defined on $[a;b]$. We can then use inverse substitution using a function $g(t)$ defined over an interval $[\alpha;\beta]$ such that $g(\alpha) = a$ and $g(\beta) = b$. Also it makes the substitution easier if $g(t)$ is one-to-one, so that $g^{-1}(t)$ exists. Then $\int f(x) dx = F(x) = \int f(g(t))g^{'}(t) dt = F(g(t))$. So we have $F(g(t))$ as an anti-derivative of $f(x)$ with respect to the change of variable $x$ to $g(t)$. Since $g(t)$ is continuous it has image $g([\alpha;\beta]) = [a;b]$, so we can evaluate $F(g(t))$ for any two points in the original interval $[a;b]$ to get values for $F(x)$ by using appropriate values for $t \in [\alpha;\beta]$, so indeed $F(g(t))$ is an anti-derivative of $f(x)$ with respect to the variable $x = g(t)$. Now since we are dealing with an indefinite integral we must return to the original variable. We want an integral with $x$ instead of $t$, so we use the identity $x = g(t)$. In other words we are returning to the original interval $[a;b]$. Then setting $g(t) = a/2$ yields $t = g^{-1}(a/2)$, so removal of terms involving $g(t)$ and $t$ in our integral $F(g(t))$ is trivial. Please correct me if I am wrong about any of this. In the case of trig substitution of the form $x = \sin(\theta)$, substituting back to the original integral involving $x$ is easy when the terms involving $\theta$ are trivial. However, terms like $\cot(\theta)$ become much more difficult. I understand that using trig identities allows us to express $\cot(\theta)$ in terms of $x$; however, I don't see why we can draw a right triangle with $\theta$ as the angle and then just divide two sides to get $\cot(\theta)$ in terms of $x$. Could someone explain why this is possible and right to do?
Thanks Below I have attached an image for reference: In this case we know that $\sin(\theta) = x/3$. Why does this imply we can draw a triangle with side $\sqrt{9 - x^2}$? I see that for any value of $\theta$, $x$ must be in proportion for the identity $\sin(\theta) = x/3$ to be true. AI: I think I see the core of your question from your last paragraph, your image, and your response in the post. The essential answer to your question boils down to the definitions of the trig functions. The definitions of the trigs are nothing more than definitions, and they are defined over right triangles with $\theta$ being either of the non-right angles. That is to say, for example, as you surely know, $\sin\theta=\frac{\text{opposite}}{\text{hypotenuse}}$. By geometric principles, given any two sides of any right triangle we can always find the third side by the theorem of Pythagoras. The technique of trigonometric substitution (like any other substitution) works when we can make a total substitution, and then completely reverse-substitute the final result. In the case of successful trig substitutions, we can always go back and call out the value of any of the trigs, including $\cot \theta$. We can draw these triangles because ALL of the trigs are defined over right triangles, and we can always find a third side given two. No one trig is more difficult to find than another over a right triangle with defined side lengths. When all of the terms of an integral can be completely substituted by the method, the result, assuming you can find it, should always be re-translatable into the original terms, as all of the information required for the back substitution is included in the contents of a right triangle: a trig of an angle is nothing more than the ratio of two sides of a right triangle. Given any of the 6 trigs with variable $\theta$, we can always assign measures to two of three sides of a right triangle, as that is how the trigs are defined, hence we can always define the third side.
I hope this post is helpful; please let me know if it is less than clear. Trig sub is my favorite technique as far as methods go, and so I am happy to hash this out with you.
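The triangle picture can also be sanity-checked numerically (a quick sketch; the sample values of $x$ are arbitrary): reading $\cot\theta$ off the triangle gives $\sqrt{9-x^2}/x$ whenever $\sin\theta = x/3$ with $\theta$ in the first quadrant.

```python
import math

# Check cot(theta) = sqrt(9 - x^2) / x when sin(theta) = x/3,
# assuming 0 < x < 3 so that theta = arcsin(x/3) lies in (0, pi/2).
for x in [0.5, 1.0, 2.0, 2.9]:
    theta = math.asin(x / 3)
    assert math.isclose(1 / math.tan(theta), math.sqrt(9 - x**2) / x)
```
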
H: Composition of Differential Operators If I have: $A=\partial_x^2+u(x)$ $B=u(x)\partial_x$ How do I compose: $AB$ and $BA$? AI: This is just usual composition of functions: for all $f$ in an appropriate domain $(AB)(f)=A(B(f))=(\partial^2_x+u)(u\partial_x f)=\partial^2_x (u\partial_x f)+u^2\partial_x f=\cdots$ using the product rule, and similarly for $BA$. Notice that $u$ acts as a multiplication operator. Apart from that, it is totally crucial for composition that you specify both domain and codomain of $A$ and $B$ and in particular what is assumed about $u$ (say domain and codomain is $C^\infty$, and $u$ be also of class $C^\infty$).
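To make the composition concrete, here is a small check (a sketch; the specific $u(x)=1+x^2$ and $f(x)=x^3$ are arbitrary test choices, and polynomials stand in for smooth functions). Expanding with the product rule gives $ABf = u''f' + 2u'f'' + uf''' + u^2f'$, while $BAf = uf''' + uu'f + u^2f'$, so the two operators do not commute:

```python
# Polynomials as coefficient lists, lowest degree first.
def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def add(*ps):
    n = max(len(p) for p in ps)
    return [sum(p[i] if i < len(p) else 0 for p in ps) for i in range(n)]

def A(f, u):  # A f = f'' + u f
    return add(deriv(deriv(f)), mul(u, f))

def B(f, u):  # B f = u f'
    return mul(u, deriv(f))

u = [1, 0, 1]        # u(x) = 1 + x^2
f = [0, 0, 0, 1]     # f(x) = x^3

ABf = A(B(f, u), u)  # (AB) f = (u f')'' + u^2 f'
BAf = B(A(f, u), u)  # (BA) f = u (f'' + u f)'

# Product-rule expansion: AB f = u'' f' + 2 u' f'' + u f''' + u^2 f'
expected = add(mul(deriv(deriv(u)), deriv(f)),
               mul([2], mul(deriv(u), deriv(deriv(f)))),
               mul(u, deriv(deriv(deriv(f)))),
               mul(mul(u, u), deriv(f)))
assert ABf == expected
assert ABf != BAf  # the operators do not commute
```
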
H: Guide to sketching graphs of basic functions I'm taking a test soon, where I would be asked to sketch graphs. I wonder if there is any kind of guide or general tutorial on the net how to carry out sketching. For instance I would much like to know how would you sketch $f(x)=\frac{ln (x)} {x}$ and $g(x) = \frac{e^x}{x}$ or in general if I have a function $f(x)$ the graph of which I know, how should I sketch $f(x)/x$ or an even more complex question: If I am given the sketch of two functions $f(x), g(x)$ how may I sketch $\frac{f(x)}{g(x)}$? AI: Try not to think of a rational function ( f(x)/g(x) ) as being something completely different. The process is the same except that now the threat of a vertical asymptote is more imminent. Your process should be as follows: Find intercepts (where x=0 or y=0) Check for vertical asymptotes (where the denominator, or in your case g(x) is zero) Check for horizontal asymptotes (what happens as x goes to infinity) Find critical points (where f'(x) = 0) Determine concavity / whether or not your critical points are mins, maxes, or neither To address rational functions specifically, going from a function that has no denominator (your f) to one that does (your g) the main difference will be in step 2 where now you have a big vertical line in your graph that sort of sucks whatever the shape of f was into it. But the point I wanted to make was to not focus on how dividing by a function affects another, but to just follow the same steps regardless and you'll be fine no matter what the function looks like.
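As a quick numeric companion for the first example $f(x)=\ln(x)/x$ (a sketch; the sample points are arbitrary): step 4 gives $f'(x)=(1-\ln x)/x^2$, which vanishes only at $x=e$, and that critical point is the global maximum on $(0,\infty)$.

```python
import math

f = lambda x: math.log(x) / x
fprime = lambda x: (1 - math.log(x)) / x**2

x_crit = math.e                       # f'(e) = 0
assert abs(fprime(x_crit)) < 1e-12
for x in (1.0, 2.0, 3.0, 10.0, 100.0):
    assert f(x) <= f(x_crit)          # x = e is the global maximum
```
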
H: Find the $2013$th power of a given $3\times 3$ matrix Question from my linear algebra homework I'm struggling with: Let $D = \begin{bmatrix} -2 & 5 & 4 \\-1 & 0 & 0 \\0 & 4 & 3 \end{bmatrix}$ We are asked: Find $D^5+3D^2-D+I$ Find $D^{2013}$ Write $D^{-1}$ as a polynomial of $D$ I solved questions 1) and 3) but can't solve 2)... AI: Here is how I would attack such a question. You need to find a polynomial equation satisfied by$~D$ first. You could use the characteristic polynomial for that (by the Cayley-Hamilton theorem), but knowing that such a polynomial of degree at most$~3$ exists, you can also just try to find a relation between some powers of$~D$. The sparse second line suggests right-multiplying powers of $D$ to $(0~1~0)$, giving the sequence $(0~1~0),(-1~0~0),(2~{-}5~{-}4),(1~{-}6~{-}4)$. The first three are clearly linearly independent, and the fourth one gives the relation $(0~1~0)-(-1~0~0)-(2~{-}5~{-}4)+(1~{-}6~{-}4)=(0~0~0)$, so your polynomial equation should be $I-D^1-D^2+D^3=0$, which you can check to be true. Now $X^3-X^2-X+1=(X-1)(X^2-1)=(X-1)^2(X+1)$. Indeed your matrix is not diagonalisable because of the double root. To compute $X^{2013}$ you can take its remainder$~R$ after division by $P=(X-1)^2(X+1)$, so that $X^{2013}=PQ+R$ for some (quotient)$~Q$. For finding the remainder after division by a polynomial with such easy roots as $P$ has, the standard trick to avoid doing a (very) long division of polynomials is to write the remainder as a polynomial of degree${}<\deg P=3$ with unknown coefficients: $R=aX^2+bX+c$, and evaluate the equation $X^{2013}=PQ+R$ at the roots of$~P$; since these substitutions annihilate the term $PQ$ regardless of$~Q$, they give you linear equations in $a,b,c$. The problem here is that you have only two roots $1,-1$ to substitute, although $1$ is a double root of the minimal polynomial.
There is another trick to solve this shortage of equations: since $1$ is also a root of the derivative $P'$ of$~P$, you can take the derivative of the equation (being an identity of polynomials in$~X$, this gives an equation that must still hold) giving $2013X^{2012}=P'Q+PQ'+R'$. Now substitute $X=1$ into that to get a third equation that will let you solve $a,b,c$. The answer is $aD^2+bD+cI$.
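The whole computation can be verified with exact integer arithmetic (a sketch). Substituting $X=1$, $X=-1$, and the derivative at $X=1$ into the division $X^{2013}=PQ+R$ by the minimal polynomial $(X-1)^2(X+1)$ gives the system $a+b+c=1$, $a-b+c=-1$, $2a+b=2013$, whose solution is $a=1006$, $b=1$, $c=-1006$:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matpow(A, n):
    # fast exponentiation with exact Python integers
    R = [[int(i == j) for j in range(3)] for i in range(3)]
    while n:
        if n & 1:
            R = matmul(R, A)
        A = matmul(A, A)
        n >>= 1
    return R

D = [[-2, 5, 4], [-1, 0, 0], [0, 4, 3]]
I = [[int(i == j) for j in range(3)] for i in range(3)]
D2 = matmul(D, D)
D3 = matmul(D2, D)

# D satisfies I - D - D^2 + D^3 = 0
assert all(I[i][j] - D[i][j] - D2[i][j] + D3[i][j] == 0
           for i in range(3) for j in range(3))

# Remainder of X^2013 modulo (X-1)^2 (X+1): R = a X^2 + b X + c with
#   a + b + c = 1      (value at X = 1)
#   a - b + c = -1     (value at X = -1)
#   2a + b = 2013      (derivative at X = 1)
a, b, c = 1006, 1, -1006
lhs = matpow(D, 2013)
rhs = [[a * D2[i][j] + b * D[i][j] + c * I[i][j] for j in range(3)]
       for i in range(3)]
assert lhs == rhs   # D^2013 = 1006 D^2 + D - 1006 I
```
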
H: Sequences not strictly increasing. Let $d_n=a_{n+1}-a_n$ be the difference of two consecutive terms of a sequence of natural numbers. We can easily construct sequences of natural numbers $a_n$ using trigonometric functions or the floor function which have the property: *(1)* For infinitely many $n$, $d_{n+1}<d_n$. For example, let $a_n=\cos(\frac{n\pi}{2})$ or $a_n=\lfloor \frac{n}{3}\rfloor$ and $n\geq 1$ for both cases. My question is: Can anybody give me an example of a sequence having the property (1) but which has a closed formula not containing trigonometric functions, or the floor function? Note I would like to see a closed formula of such a sequence (if possible) and not something general like the primes or a sequence whose sum of reciprocals diverges. Thank you very much in advance. EDIT: $(-1)^n=\cos(n\pi)$, so don't try something like this. So what I would like to see is something without trigonometric functions, floor functions or $(-1)^n$. Everything else is acceptable. AI: I’m probably missing something, but doesn’t $a_n=42+(-1)^n$ already work? EDIT: Excluding $(-1)^n$ still leaves arbitrarily many sequences, e.g., A027642, (the absolute values of) A124449, or even A067029, but it all depends on your definition of “closed formula”. To me, something like $$a_n = \min\left\{v_p(n) \mid v_p(n)>0\right\}$$ (which encodes A067029) is a closed formula, but you may of course disagree.
H: Why are $g(1) < 0, g(0) > 0$ in proof of Fixed-point property for continuous function on $[0,1]$. Can someone please help me understand how, in the solution below, we can get to $g(1)<0$ and $g(0)>0$ from the fact that $g(1) \neq 0 $ and $ g(0) \neq 0 $? Thanks. Rudin Chapter 4, question 14 (Exercise 4.14) AI: I think that is because $f$ is a map into $[0,1]$, so $f$ can take a maximum value of $1$ and a minimum value of $0$. Because we assume that $f(1)\ne1$, it is clear that $f(1)<1$, and since $f(0)\ne0$, it is clear that $f(0)>0$, and we get $g(1)= f(1)-1 <1-1 =0$ and $g(0) =f(0)-0 =f(0) >0$. Have a nice day
H: Is this a partially ordered set? I was wondering if partially ordered sets could have loops in their diagrams. For example isn't the $S=\{1,2,3\}$ and relation $R=\{(1,1),(2,2),(3,3),(1,2),(2,3),(3,1)\}$ a partially ordered set that has a cycle? $R$ is reflexive, antisymmetric and transitive. AI: Transitivity fails for your relation $R$: for instance $(1,2)\in R$ and $(2,3)\in R$, but $(1,3)\notin R$. As for loops on the diagram, that is not possible due to transitivity; that should become apparent from the algorithm to draw diagrams I describe in this answer.
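A brute-force check of the three axioms (a sketch in Python) confirms that $R$ is reflexive and antisymmetric but not transitive:

```python
S = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (3, 1)}

reflexive = all((a, a) in R for a in S)
antisymmetric = all(a == b for (a, b) in R if (b, a) in R)
transitive = all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

assert reflexive and antisymmetric and not transitive
# a failing triple: (1,2) and (2,3) are in R, but (1,3) is not
assert (1, 2) in R and (2, 3) in R and (1, 3) not in R
```
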
H: Algebraic structures on proper classes From time to time, I see proper classes being endowed with algebraic structure. The ordinals with addition is one example, but I've seen a lot more, most of which have been above my head. The standard definitions of standard algebraic structures impose the requirement that the underlying class be a set though. I'm not exactly sure I know why that is. What kind of problems does removing this requirement cause? How much of algebra over sets carries over to algebra over classes? I'm asking this because I don't know anything about proper classes other than what the first introductory course in foundations told me, plus some other bits. This makes me uneasy whenever I see or hear someone do anything with proper classes. I'm often left unconvinced about the rigor of such considerations because of this uneasiness. AI: Set theories that explicitly embrace proper classes as part of the first-order theory do so by allowing formulas to define and mention classes in restricted ways. Provided the algebraic relations on a proper class, e.g. an ordering in the case of ordinals, observe these restrictions, there is no lack of rigor in the presentation. In some respects it does tie our hands, and so warrants caution in how we define operations. For example an operation that converts nonempty collections of ordinals into the least such ordinal in such a collection cannot be realized as a function using Kuratowski ordered pairs, since this would entail proper classes being elements of something. However the mapping relation can be expressed as a first-order formula, which is likely all that one needs to make a rigorous argument.
H: Given that $z= \cos \theta + i\sin \theta$, prove that $\Re\left\{\dfrac{z-1}{z+1}\right\}=0$ Given that $z= \cos \theta + i\sin \theta$, prove that $\Re\left\{\dfrac{z-1}{z+1}\right\}=0$ How would I do this? AI: $$\frac{z-1}{z+1}=\frac{z-1}{z+1}\frac{\overline z+1}{\overline z+1}=\frac{|z|^2-1+2i\,\text{Im}\,z}{|z+1|^2}\implies\text{Re}\,\left(\frac{z-1}{z+1}\right)=\frac{|z|^2-1}{|z+1|^2}$$ But we know that $\;z=\cos\theta+i\sin\theta\implies |z|=1\;$ , so...
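A numerical spot-check of the conclusion (a sketch; the sample angles are arbitrary and avoid $\theta=\pi$, where $z=-1$ makes the quotient undefined):

```python
import math

for theta in (0.3, 1.0, 2.5, 4.0, 5.7):
    z = complex(math.cos(theta), math.sin(theta))   # |z| = 1
    w = (z - 1) / (z + 1)
    assert abs(w.real) < 1e-12   # the quotient is purely imaginary
```
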
H: Proof on ring isomorphism- irreducible Consider the ring isomorphism $\phi: A \to B$ I have to prove that $a\in A $ is irreducible if and only if $\phi(a)$ is irreducible. By definition, $a$ is irreducible in $A$ if and only if: 1) $a$ is not zero 2) $a$ is not a unit 3) If $a=bc$ where $c,b\in A$, then $b$ or $c$ must be a unit. To prove the $\to$ direction I have seen that 1) $\phi(a)$ is not zero (by contradiction), and to prove that $\phi(a)$ is not a unit I have explained the following: By contradiction, let's suppose that $\phi(a)$ is a unit. So, there exists $b\in A$ such that $\phi(a)\phi(b)=1$. As $\phi$ is a homomorphism, $\phi(a)\phi(b)=\phi(ab)=1$ and $\phi(1)=1$. Therefore, as $\phi$ is injective, $ab=1$. We get a contradiction as $a$ is not a unit. Is my proof correct? Could you help me to prove the 3rd condition, please? 3) If $\phi(a)=\phi(b)\phi(c)$, then $\phi(b)$ or $\phi(c)$ is a unit. Thank you for your time and dedication. AI: Let us write $\phi(a)=\phi(b)\phi(c)=\phi(bc)$. This implies $a=bc$. We know that either $b$ or $c$ is a unit, as $a$ is irreducible. Thus, either $\phi(b)$ or $\phi(c)$ is a unit, as $B^\times=\phi(A^\times)$, where $R^\times$ denotes the group of units in the ring $R$. Actually, if you do not want to prove $B^\times=\phi(A^\times)$, you just need $\phi(A^\times)\subset B^\times$, and it is clear that a unit $u\in A^\times$ is sent to a unit (by any ring homomorphism): $$1_A=uu'\Rightarrow 1_B=\phi(1_A)=\phi(uu')=\phi(u)\phi(u').$$
H: Finding $\displaystyle \lim_{x\to 0} \frac{\ln(3^{x}+1)-\ln(2)}{x}$ I have to find the limit of $\displaystyle \lim_{x\to 0} \frac{\ln(3^{x}+1)-\ln(2)}{x}$ using only notable limits avoiding l'Hopital's method and derivatives. I tried to use logarithm's properties but then I came in a bad situation. Then I tried to use +1 and -1 and x/x in the argument of the logarithm in order to use the notable limit so I'm stuck here... Can somebody help me in a clear way please? (tomorrow I have the limit's test!) AI: You could use the two standard limits $$\lim_{x\to0}\frac{\ln(x+1)}{x}=1$$ and $$\lim_{x\to0}\frac{e^{x}-1}{x}=1.$$ We have $$ \lim_{x\to0}\frac{\ln(3^x+1)-\ln2}{x}= \lim_{x\to0}\frac{\ln(3^x-1+2)-\ln2}{x}= \lim_{x\to0}\frac{\ln\left(\frac{3^x-1}{2}+1\right)+\ln2-\ln2}{x}= \lim_{x\to0}\frac{\ln\left(\frac{3^x-1}{2}+1\right)}{\frac{3^x-1}{2}}\cdot\frac{\frac{3^x-1}{2}}{x}= \lim_{x\to0}\frac{\ln3}{2}\cdot\frac{\ln\left(\frac{3^x-1}{2}+1\right)}{\frac{3^x-1}{2}}\cdot\frac{e^{x\ln3}-1}{x\ln3}=\frac{\ln3}{2}. $$
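Numerically the quotient indeed approaches $\frac{\ln 3}{2}\approx 0.5493$ (a quick sketch; the sample values of $x$ are arbitrary):

```python
import math

target = math.log(3) / 2

def q(x):
    # the difference quotient from the problem
    return (math.log(3**x + 1) - math.log(2)) / x

# the error shrinks roughly linearly in x
assert abs(q(1e-3) - target) < 1e-3
assert abs(q(1e-6) - target) < 1e-5
```
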
H: Neighborhood in topological groups Let $G$ be a topological group, $e$ the neutral element and $U$ a neighborhood of $e$. Claim: Then there exists a neighborhood $V$ of $e$, such that $V^2 \subseteq U$. This should follow easily from the continuity of the group multiplication. But how? AI: The map $m\colon (x,y)\in G^2\mapsto x\cdot y$ is continuous, where $G^2$ is endowed with the product topology. Therefore, there is $(e,e)\in O\subset G^2$ open such that $m(O)\subset U$. By definition of the product topology, there is $V_1,V_2$ open set such that $e\in V_1\cap V_2$ and $V_1\times V_2\subset O$. Now take $V:=V_1\cap V_2$.
H: Why is a certain subset of a regular uncountable cardinal stationary? this is an excerpt from Jech's Set Theory (page 94). For a regular uncountable cardinal $\kappa$ and a regular $\lambda<\kappa$ let $$E^\kappa_\lambda= \{\alpha<\kappa:\mbox{cf}\ \alpha=\lambda \}$$ It is easy to see that each $E^\kappa_\lambda$ is a stationary subset of $\kappa$ Well, I don't see so easily why this should hold. Essentially, given a set which is disjoint to a set of this form, there is no reason for it to be bound, so I tried proving it can not be closed. However, I have no idea why this should hold either, as it might not have any ordinal with cofinality $\lambda$ as a limit point as well. Thanks in advance AI: Note that if $C$ is a club, then $C$ is unbounded, and therefore has order type $\kappa$. Since $\lambda<\kappa$ we have some initial segment of $C$ of order type $\lambda$. Show that this initial segment cannot have a last element. Its limit is in $C$ and by the regularity of $\lambda$ must have cofinality $\lambda$. Therefore $C\cap E^\kappa_\lambda\neq\varnothing$.
H: What is measure of the following set? Suppose $ D= \lbrace (x,y) \in [0,1]\times [0,1]: x-y \in \mathbb{Q} \rbrace $. Is the measure of $D$ zero? Thanks. AI: The set $D$ is a subset of the union over $q\in\Bbb Q$ of the lines $$D_q = \{(x,y) \in \Bbb R^2 : x-y=q\},$$ each of the $D_q$ having Lebesgue measure $0$ (it is an affine subspace of $\Bbb R^2$ of dimension 1). The union being countable, $D = [0,1]^2 \cap \left(\bigcup_{q\in\Bbb Q} D_q\right)$ also has measure zero.
H: Calculating the limit of some sum I want to calculate the following limit $$ \lim_{n\to\infty}\left(\frac{1}{n}+\frac{1}{n+1}+\ldots+\frac{1}{2n}\right)=? $$ Obviously the result is bounded between $1/2$ and $1$, but how can I calculate the exact result? AI: Rewrite the given sum as follows: \begin{align}\sum_{k=0}^{n}\frac{1}{n}\frac{1}{1+\frac{k}{n}}\end{align} Consider the function $f(x)=\frac{1}{1+x}$, for $x\in[0,1]$. Take the partition $x_0=0$, $x_k=\frac{k}{n}$. In the limit, we find \begin{align}\int_{0}^{1}f=\lim_{n\rightarrow\infty}\sum_{k=0} ^{n}f(x_k)(x_{k+1}-x_{k})=\lim_{n\rightarrow\infty} \sum_{k=0}^{n}\frac{1}{n}\frac{1}{1+\frac{k}{n}}.\end{align} So, \begin{align}\lim_{n\rightarrow\infty}\sum_{k=0}^{n}\frac{1}{n+k}=\lim_{n\rightarrow\infty}\sum_{k=0}^{n}\frac{1}{n}\frac{1}{1+\frac{k}{n}}=\int_{0}^{1}f=\int_{0}^{1}\frac{1}{1+x}\text{d}x=\log(2).\end{align}
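A quick numerical check (sketch) that the partial sums approach $\log 2\approx 0.6931$:

```python
import math

def s(n):
    # 1/n + 1/(n+1) + ... + 1/(2n)
    return sum(1.0 / k for k in range(n, 2 * n + 1))

assert abs(s(10**5) - math.log(2)) < 1e-4
assert abs(s(10**6) - math.log(2)) < 1e-5
```
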
H: Order of a subgroup on elliptic curve over a finite field May I ask you for a little help with a problem about elliptic curves? Here's the problem: Given an elliptic curve $E$ over the finite field $\mathbb{F}_{101}$. We know that there is a point of order $116$ on this curve. Find $\left | E(\mathbb{F}_{101}) \right |$. OK, I've noticed that the point generates a subgroup of $E(\mathbb{F}_{101})$ of order $116$. I guess here I should apply Lagrange's theorem, which says that the order of the subgroup divides the order of the group. How can I use the given order of the point? Do I have to apply Hasse's theorem? I would be really glad if someone could help me with this problem. Thank you in advance! Have a nice day! AI: Your idea is correct. It follows immediately from Lagrange's theorem and the Hasse bound. Could the group be twice as big or bigger?
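Making the hint explicit (a sketch): Hasse's theorem puts $|E(\mathbb{F}_{101})|$ in $[102-2\sqrt{101},\,102+2\sqrt{101}]\approx[81.9,\,122.1]$, and by Lagrange the order must be a multiple of $116$; only one multiple fits.

```python
import math

p = 101
lo = p + 1 - 2 * math.sqrt(p)   # Hasse lower bound, ~81.90
hi = p + 1 + 2 * math.sqrt(p)   # Hasse upper bound, ~122.10

order = 116
candidates = [m for m in range(order, 5 * order, order) if lo <= m <= hi]
assert candidates == [116]      # so |E(F_101)| = 116
```
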
H: How to calculate the ⌉ operator? In a book about finance, the following formula appears: $$\large a_{10 \,⌉\, 0.04}$$ What is this ⌉ operator and how to read / calculate it? AI: This is just standard notation in actuarial science: $a_{\overline{n|}i} := v + v^2 + \cdots + v^n = \frac{1-v^n}{i}$, where $v:=\frac{1}{i+1}$. Any list of standard actuarial notation covers these symbols. So, $a_{\overline{10|}0.04}=\frac{1-(1+0.04)^{-10}}{0.04} \approx 8.111$
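A direct computation of the annuity value (sketch):

```python
def annuity(n, i):
    # a_{n|i} = (1 - v^n) / i  with  v = 1 / (1 + i)
    v = 1 / (1 + i)
    return (1 - v**n) / i

assert abs(annuity(10, 0.04) - 8.1109) < 1e-4
```
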
H: Finding the Eigenvector for eigenvalue 0 I tried finding the eigenvectors of the following transformation: $T:\mathbb R_2[X]\rightarrow \mathbb R_2[X] $, $T(f(x))=f'(x)$. I found the matrix of this transformation to be $$\begin{matrix} 0&0&0 \\ 2&0&0 \\ 0&1&0 \end{matrix}$$ Thus I found the eigenvalue to be 0. But how do I find the eigenvector? I found it to be 0, which isn't right... Thanks in advance! AI: You have an eigenvalue $\lambda_1 = 0$, but it has algebraic multiplicity equal to three. I will show you how to find the first eigenvector, but only provide guiding hints for the other two. We can find the first eigenvector as: $$Av_1 = 0$$ This is the same as finding the nullspace of $A$, so we get: $$v_1 = (0,0,1)$$ Unfortunately, this only produces a single linearly independent eigenvector, as the space spanned only gives a geometric multiplicity of one. This means we need to find two generalized eigenvectors. We try: $$Av_2 = v_1$$ This gives us another linearly independent (generalized) eigenvector (I will let you fill in the details), but again only one. Next, we try: $$Av_3 = v_2$$ This produces a third linearly independent (generalized) eigenvector. Sometimes, we resort to using chains to find the generalized eigenvectors.
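The chain can be checked directly (a sketch; $v_2=(0,1,0)$ and $v_3=(\tfrac12,0,0)$ are one possible choice of solutions, filled in here since the answer leaves the details to the reader):

```python
A = [[0, 0, 0],
     [2, 0, 0],
     [0, 1, 0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

v1 = [0, 0, 1]
v2 = [0, 1, 0]
v3 = [0.5, 0, 0]

assert matvec(A, v1) == [0, 0, 0]  # genuine eigenvector for eigenvalue 0
assert matvec(A, v2) == v1         # first generalized eigenvector
assert matvec(A, v3) == v2         # second generalized eigenvector
```
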
H: What is the equation for this graph? Here is what the graph of y(x) = -x+100 looks like: I am trying to change this equation, so that the graph becomes a curve which still crosses the $y$-axis at (0,100) and the $x$-axis at (100,0): Can anybody give me a hint on this? AI: Probably the easiest way is to choose a third point $(a,b),\ \ b\in(0,100)$, which your curve should also contain, and then construct a quadratic function to fit these $3$ points, using $$ \begin{aligned} p_0(x) &:=\frac{(x-a)(x-100)}{(-a)(-100)} \\ p_a(x) &:=\frac{x(x-100)}{a(a-100)} \\ p_{100}(x) &:=\frac{x(x-a)}{100(100-a)} \end{aligned}$$ These satisfy $p_i(j)=1$ if $i=j$ and $p_i(j)=0$ if $i\ne j$ for $i,j\in\{0,a,100\}$. So, a quadratic curve can be obtained as $$f(x):=100\cdot p_0(x)+b\cdot p_a(x)+0\cdot p_{100}(x)\,.$$
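For instance, with the hypothetical third point $(a,b)=(50,30)$ (arbitrary sample values), the construction passes through all three points (a sketch):

```python
a, b = 50, 30   # hypothetical third point (a, b) chosen for illustration

def p0(x):   return (x - a) * (x - 100) / ((-a) * (-100))
def pa(x):   return x * (x - 100) / (a * (a - 100))
def p100(x): return x * (x - a) / (100 * (100 - a))

def f(x):
    return 100 * p0(x) + b * pa(x) + 0 * p100(x)

assert abs(f(0) - 100) < 1e-9   # crosses the y-axis at (0, 100)
assert abs(f(100)) < 1e-9       # crosses the x-axis at (100, 0)
assert abs(f(a) - b) < 1e-9     # passes through the chosen third point
```
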
H: Convergence of sequence defined with other sequence $a_n$ is a sequence with $\lim\limits_{n \rightarrow \infty}{\frac{a_{n+1}}{a_n}}=L$, $a_n>0$. So the task is to show that $c_n:=(a_n)^\frac{1}{n}$ converges and $\lim\limits_{n \rightarrow \infty}{c_n}=L$. I've been working on this for hours now and don't have any useful result. Please help :) AI: We will show that $$\lim\limits_{n \rightarrow \infty}{\frac{a_{n+1}}{a_n}}=\lim_{n\to \infty} \sqrt[n] {a_n}$$ (in particular, that the second limit exists). First we will show that $$\limsup_{n\to \infty} \sqrt[n] {a_n}\leq L.$$ Let $ε>0$; then there is an $n_1\in \Bbb N$ such that $\frac {a_{n+1}}{a_n}\leq L+\frac {ε}{2}$ for every $n\geq n_1$. Now write $$a_n=\frac {a_n}{a_{n-1}}\cdot \frac {a_{n-1}}{a_{n-2}}\cdot ...\frac {a_{n_1+1}}{a_{n_1}}\cdot a_{n_1}.$$ Thus we have that $$a_n\leq \frac {a_{n_1}}{(L+\frac {ε}{2})^{n_1}}\cdot (L+\frac {ε}{2})^n.$$ Let now $$θ=\frac {a_{n_1}}{(L+\frac {ε}{2})^{n_1}}$$ and, using that $\sqrt[n] {θ}\to 1$, there is an $n_0\geq n_1$ such that $\sqrt [n] {a_n}\leq \sqrt[n] {θ}\cdot (L+\frac {ε}{2})<L+ε$ for every $n\geq n_0$, and thus $$\limsup_{n\to \infty} \sqrt[n] {a_n}\leq L.$$ The same argument proves $$\liminf_{n\to \infty} \sqrt[n] {a_n}\geq L,$$ and together these show that $\sqrt[n]{a_n}$ converges to $L$.
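A numerical illustration (sketch; $a_n=n\,2^n$ is an arbitrary example with $L=2$) of both quantities approaching the same limit:

```python
import math

def ratio(n):
    # a_{n+1} / a_n for a_n = n * 2^n, which simplifies to 2 (n+1) / n
    return 2 * (n + 1) / n

def nth_root(n):
    # (n * 2^n)^(1/n), computed via logarithms to avoid overflow
    return math.exp((math.log(n) + n * math.log(2)) / n)

assert abs(ratio(10**4) - 2) < 1e-3
assert abs(nth_root(10**4) - 2) < 1e-2
```
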
H: Is this matrix positive-semidefinite in general? for the matrix written below I was wondering if one can show that it is positive-semidefinite for $n>3$ and $0< \alpha<1$. (Or not. For $n=2, 3$ it works by showing that all principal minors are non-negative.) $$ C_{n,n} = \begin{pmatrix} 1 & \alpha^1& \alpha^2 & \cdots & \alpha^{n-1} \\ \alpha^1 & 1 & \alpha^1&\cdots & \alpha^{n-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \alpha^{n-1} & \alpha^{n-2} & \alpha^{n-3}& \cdots & 1 \end{pmatrix} =\begin{pmatrix} \alpha^0 & \alpha^1& \alpha^2 & \cdots & \alpha^{n-1} \\ \alpha^1 & \alpha^0 & \alpha^1&\cdots & \alpha^{n-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \alpha^{n-1} & \alpha^{n-2} & \alpha^{n-3}& \cdots & \alpha^0 \end{pmatrix} $$ AI: Taking the first column and subtracting from it $\alpha$ times the second column, the first column becomes $(1-\alpha^2,0,\dots,0)^T$; expanding along it gives $\det C_{n,n}=(1-\alpha^2)\det C_{n-1,n-1}$, hence every leading principal minor equals $(1-\alpha^2)^{k-1}>0$, and we can conclude by Sylvester's criterion that $C_{n,n}$ is in fact positive definite.
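Sylvester's criterion can be checked numerically (a sketch; $\alpha=0.5$ and $n=6$ are arbitrary test values), confirming that the $k$-th leading principal minor equals $(1-\alpha^2)^{k-1}>0$:

```python
def det(M):
    # determinant by Gaussian elimination (pivots stay positive here,
    # so no row pivoting is needed)
    M = [row[:] for row in M]
    n = len(M)
    d = 1.0
    for i in range(n):
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

alpha, n = 0.5, 6
C = [[alpha ** abs(i - j) for j in range(n)] for i in range(n)]

for k in range(1, n + 1):
    minor = det([row[:k] for row in C[:k]])
    assert minor > 0                                      # Sylvester's criterion
    assert abs(minor - (1 - alpha**2) ** (k - 1)) < 1e-9  # the recurrence
```
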
H: Let $b>0.$ Evaluate $\lim_{n \to \infty} \int^{b}_{0} \frac{\sin nx}{nx}dx$ I'm stumped at the last part of this problem. Could anyone advise, please? Thank you. Here is my proof. Define $f: \mathbb{R} \to \mathbb{R}$ by $$ f(x) = \left\{\begin{aligned} &\frac{\sin nx}{nx}&&, x \neq0 \\ &1 &&, x=0 \end{aligned} \right.$$ Hence, $$f(x) = \sum^{\infty}_{k=0}(-1)^k\frac{(nx)^{2k}}{(2k+1)!} , \forall x \in \mathbb{R} \mbox{ and } f \in {\mathscr R[0,b]}.$$ $$\int^{b}_{0} \frac{\sin nx}{nx}dx = \int^{b}_{0}f(x)dx=\sum^{\infty}_{k=0}\int^{b}_{0}(-1)^k\frac{(nx)^{2k}}{(2k+1)!}dx= \sum^{\infty}_{k=0}(-1)^k\frac{n^{2k}b^{2k+1}}{(2k+1)(2k+1)!}$$ $$\lim_{n \to \infty} \sum^{\infty}_{k=0}(-1)^k\frac{n^{2k}b^{2k+1}}{(2k+1)(2k+1)!}= ?$$ AI: Put $u = nx$. Then your integral becomes $$\int_0^{nb} \frac{1}{n} \frac{\sin u}{u} du = \frac{1}{n}\int_0^{nb} \frac{\sin u}{u} du .$$ Taking the limit as $n \to \infty$, noting that if $\lim a_n = a$, $\lim b_n = b$ then $\lim a_nb_n = ab$, and using $\int_0^\infty \frac{\sin u}{u}\,du = \frac{\pi}{2}$, we have $$\lim_{n\to\infty}\int_0^b \frac{\sin nx}{nx}\, dx = \left(\lim_{n\to \infty} \frac{1}{n}\right) \times \frac{\pi}{2} = 0.$$
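A numerical check of the vanishing limit (a sketch; the midpoint rule, $b=2$, and the sample values of $n$ are arbitrary choices):

```python
import math

def I_n(n, b, steps=100_000):
    # midpoint rule for the integral of sin(nx)/(nx) over [0, b]
    h = b / steps
    return h * sum(math.sin(n * (i + 0.5) * h) / (n * (i + 0.5) * h)
                   for i in range(steps))

b = 2.0
vals = [I_n(n, b) for n in (1, 10, 100, 1000)]
# the values behave roughly like (1/n) * pi/2, hence tend to 0
assert all(abs(v2) < abs(v1) for v1, v2 in zip(vals, vals[1:]))
assert abs(vals[-1]) < 0.01
```
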
H: Prove that $d_{1}$ and $d_{2}$ induce the same topology Suppose $d_{1}(x,y) = |x-y|$, $d_{2}(x, y) = |\phi(x) - \phi(y)|$, where $\phi(x) = \frac{x}{1 + |x|}$. Prove that $d_{1}$ and $d_{2}$ are metrics on $\mathbb{R}$ which induce the same topology. It's easy to prove that $d_{1}$ and $d_{2}$ are metrics. Call the topologies induced by them $\tau_{1}$ and $\tau_{2}$. I can prove that $\tau_{1}$ is finer than $\tau_{2}$, but I can't prove the inverse. Can any body please help me. Thanks AI: Let's break it down like this: Given a metric space $(Y,d)$, a set $X$, and a bijection $f\colon X \to Y$, the pull-back of $d$ via $f$, $f^\ast d \colon X\times X \to [0,\infty);\; f^\ast d(a,b) = d(f(a),f(b))$ is a metric on $X$. Thus $f\colon (X,f^\ast d) \to (Y,d)$ is an isometry, in particular a homeomorphism. If, in the situation above, $X$ carries a prior topology $\tau$, we have two topologies on $X$, the original $\tau$, and the one induced by the pull-back of $d$, let's call it $\tau_{f^\ast d}$. We have $\tau = \tau_{f^\ast d}$ if and only if $\operatorname{id} \colon (X,\tau) \to (X,\tau_{f^\ast d})$ is a homeomorphism. Since the composition of homeomorphisms is a homeomorphism, and we know that $f \colon (X,\tau_{f^\ast d}) \to (Y,d)$ is a homeomorphism, that is the case if and only if $f\circ \operatorname{id} \colon (X,\tau) \to (Y,d)$ is a homeomorphism. It remains to identify $X$, $Y$, $f$, $d$, $\tau$, $\tau_{f^\ast d}$.