H: Why can't I use the chain rule to solve this trigonometric integration? The question is: what is the indefinite integral: $\int \sin^2(kx) \, \mathrm dx$? I get the correct answer using trig identities to change the $(\sin(kx))^2$ into $\dfrac{1}{2} - \dfrac{(\cos(2kx))}{2}$ and integrating that. But why can't I just integrate the outermost function $(x^2)$ and then divide by the derivative of the inner function giving $\dfrac{(\sin(kx))^3}{3k\cos(kx)} + c$? AI: The chain rule does say something about integrals, but not what you seem to think. The chain rule says $$ \dfrac{d}{dx}f(g(x)) = f'(g(x)) g'(x)$$ Integrating both sides gives you $$ \int f'(g(x)) g'(x)\ dx = f(g(x)) + C$$ Now you can't just divide out the $g'(x)$ from the left side, because that $g'(x)$ is inside the integral: $\dfrac{1}{g'(x)} \int f'(g(x)) g'(x) \ dx$ is not the same as $\int f'(g(x))\ dx$.
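For a quick symbolic check of why the proposed shortcut fails, here is a small sketch of my own (assuming SymPy is available): differentiating the candidate $\dfrac{\sin^3(kx)}{3k\cos(kx)}$ does not give back $\sin^2(kx)$.

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)
candidate = sp.sin(k*x)**3 / (3*k*sp.cos(k*x))   # the proposed "antiderivative"

# If the divide-by-g'(x) shortcut were valid, this difference would be 0.
difference = sp.simplify(sp.diff(candidate, x) - sp.sin(k*x)**2)
print(difference)   # a nonzero expression, equivalent to sin(kx)^4 / (3 cos(kx)^2)
```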
H: $\alpha > \omega$ is an $\epsilon$-number iff $\beta^\gamma < \alpha, \forall \beta, \gamma < \alpha$ Prove: $\alpha > \omega$ is an $\epsilon$-number iff $\beta^\gamma < \alpha, \forall \beta, \gamma < \alpha$. From left to right: if $\omega >\beta: \alpha=\omega^\alpha> \beta^\gamma$ if $\omega =\beta: \alpha=\omega^\alpha> \omega^\gamma=\beta^\gamma$ if $\omega <\beta: \alpha=\omega^\alpha> \omega^\gamma$ and then? Can anyone give a full proof? AI: $(\Rightarrow)$ Let $\alpha>\omega$ be an $\epsilon$-number. For any ordinal $\beta$ let us define the degree of $\beta$, $\deg(\beta)$, as the highest $\gamma$ that is an exponent of the Cantor normal form of $\beta$; so for instance an ordinal $\beta$ is an $\epsilon$-number if and only if $\deg(\beta)=\beta$. Let us prove first $\forall \beta,\gamma<\alpha[\beta+\gamma<\alpha]$. As $\alpha=\omega^\alpha$, it follows that $\deg(\beta),\deg(\gamma)<\alpha$; because of this, $\deg(\beta+\gamma)=\max(\deg(\beta),\deg(\gamma))<\alpha,$ and in consequence $\beta+\gamma<\alpha$. Now let us prove $\forall \beta,\gamma<\alpha[\beta\cdot\gamma<\alpha]$. We have, using this, that $\deg(\beta\cdot\gamma)\leq\max(\deg(\beta)+\deg(\gamma),\deg(\gamma)+\deg(\beta))<\alpha$, since $\deg(\beta),\deg(\gamma)<\alpha$ and by what was shown above. Now let us see $\forall \beta,\gamma<\alpha[\beta^\gamma<\alpha].$ Let $\beta'=\deg(\beta)$. Then as $\alpha=\omega^\alpha$, $\beta<\alpha$, but clearly $\alpha$ is a limit ordinal, so that $\beta'+1<\alpha$. But $\beta^\gamma\leq (\omega^{\beta'+1})^\gamma=\omega^{(\beta'+1)\cdot\gamma}<\omega^\alpha=\alpha$; since $(\beta'+1)\cdot\gamma<\alpha$ by what was shown above and the uniqueness of Cantor's normal form. $(\Leftarrow)$ As $\alpha>\omega$, in particular $\omega^\gamma<\alpha$ for all $\gamma<\alpha$, hence $\omega^\alpha=\lim_{\gamma\to\alpha}\omega^\gamma\leq\alpha,$ but clearly $\omega^\alpha\geq\alpha$.
H: Are limits allowed in a function? I wasn't sure how to specify my question correctly in my title, so I hope that my language is not too offensive. What I would like to do is specify a function such as f(x) = .999... * x, such that f(1) represents the largest value less than 1. I'm not sure if A: this somehow violates f's membership of the functions, and B: how this should be correctly specified if not A. My thoughts are that .999... is equivalent to 1, and that .999...9 is ambiguous or implies that the sequence of digits is finite and therefore not the largest value less than 1. Again I apologize for my ignorance. AI: Your function is only useful if it can specify the largest $x \in \mathbb{R}$ such that $x < 1$, but does such an $x$ even exist? Well, let's look at all the real numbers between $0$ (the next lowest integer) and $1$ (the upper limit of possible values): let the set $S$ be defined as $S:=\{x| x \in \mathbb{R},\ 0<x< 1 \}$; if $S$ has a largest value then we can specify your function. The least upper bound of our set $S$, also called the supremum, is the smallest value that bounds $S$ from above: it is the largest member of the set, or (if the set has no largest member) the smallest value that every member of the set is smaller than. Our set $S$ has the obvious upper bound of one, but because one is not in the set itself, the set has no largest value.* So, no, you cannot write such a function. However, so long as you have only one 'output' value for every one of your domain values, then it is a function. So, you can use limits or other things in your function. *A more rigorous proof: Define the set $S$ as above. Now we can describe a subset of $S$, $Q$, such that $$Q := \left\{y | \ \ \ y \in S \wedge \ y = \sum\limits_{i=1}^n 9 * 10^{-i} \right\}$$ if $S$ has a highest value it will be in $Q$ because $$\lim\limits_{n \to \infty}\left( \sum\limits_{i=1}^n 9 * 10^{-i}\right) = 1 \ \ \ \ .$$ By the same logic, we can see that the set $Q$ is infinite, therefore it has no largest value, and so by extension $S$ has no largest value.
H: Max length possible I have a cabinet that has 15" door. I can use $3$ baskets of $15 \times 15$ in the cabinet. I would like to know if there is any possibility that a $15 \times 20$ or bigger basket can fit in this cabinet. The problem is, it will not turn inside the cabinet if it is too large. Is there any mathematical way to find out the maximum length possible in this case? AI: For any $\epsilon>0$, you cannot fit a basket of size $15 \times (15+\epsilon)$ into the cupboard. It is clear that any basket that can fit inside a $15 \times 15$ square can fit straight in. So, taking the length $l$ to be the largest dimension, we need only consider $l > 15$. (It is clear that the corresponding width must satisfy $w<15$.) To simplify life, I am going to presume the cupboard is infinitely wide. The same analysis can deal with finite-width cupboards, but the analysis below is complicated enough as is. It should be clear that if we can get a basket with $l >15$ inside, we can always get it inside with the following 'trajectory': We start with the long edge vertical and the basket as far up and right as possible (that is, the right and top edges of the basket are flush against the right and top sides of the cupboard). Now rotate the basket anti-clockwise while keeping the top right edge of the basket against the top of the cupboard, with the basket as far right as possible (that is, either the bottom right corner of the basket touches the right wall of the cupboard, or the right edge of the basket touches the bottom right corner of the cupboard). It should be clear that each configuration above can be completely described by an angle $\theta \in [0,\frac{\pi}{2}]$. Given $l, \theta$, we can compute the maximum width $\bar{w}(\theta)$ that can exist for that configuration. Then a basket of length $l>15$ can be fit into the cupboard iff $w \le \bar{w}(\theta)$ for all $\theta \in [0,\frac{\pi}{2}]$. Now for the gory details. We need to consider two configurations. The first is when $l \cos \theta \ge 15$. Then we see that $\bar{w}(\theta) = 15 \cos \theta$. The second is when $l \cos \theta < 15$. Then we see $ \frac{15-l \cos \theta}{\sin \theta}\cos \theta + \frac{1}{\cos \theta} (\bar{w}(\theta) -\frac{15-l \cos \theta}{\sin \theta} ) = 15$, which simplifies to $\bar{w}(\theta) = 15 \cos \theta + (15-l \cos \theta)\sin \theta$. Combining gives $\bar{w}(\theta) = 15 \cos \theta + \max(0,(15-l \cos \theta)\sin \theta)$, for $\theta \in [0, \frac{\pi}{2}]$, $l > 15$. To finish, I need to compute $w^* = \min_{\theta \in [0, \frac{\pi}{2}]} \bar{w}(\theta)$. Let $w_1(\theta) = 15 \cos \theta$, $w_2(\theta) = 15 \cos \theta + (15-l \cos \theta)\sin \theta$, and note that $\bar{w}(\theta) = \max(w_1(\theta), w_2(\theta))$. If we let $\theta_0 = \arccos \frac{15}{l}$, we note that $\theta_0 \in (0, \frac{\pi}{2})$ and we see that $\bar{w}(\theta) = w_1(\theta)$ for $\theta \in [0,\theta_0]$ and $\bar{w}(\theta) = w_2(\theta)$ for $\theta \in [\theta_0, \frac{\pi}{2}]$. Furthermore, $\bar{w} $ is continuous and is differentiable for $\theta \ne \theta_0$. We have $\bar{w}'(0) < 0$, $\bar{w}(0) = \bar{w}(\frac{\pi}{2}) = 15$, hence $\bar{w}$ has a minimum in $(0, \frac{\pi}{2})$. Note that $w_1'(\theta) < 0 $ for all $\theta \in (0, \frac{\pi}{2})$, and so the minimum must occur in $[\theta_0, \frac{\pi}{2})$. We note that $w_2(0) = w_2(\frac{\pi}{2}) = 15$, and $w_2'(\theta) = (\sin \theta - \cos \theta) (l(\sin \theta + \cos \theta) -15)$.
This gives $w_2'(0) = 15-l <0$, and the only zero of $w_2'$ in $[0,\frac{\pi}{2}]$ is at $\theta = \frac{\pi}{4}$. Hence $w_2$ has a strict minimum at $\theta = \frac{\pi}{4}$. Consequently, if $\theta_0 \le \frac{\pi}{4}$ then $\bar{w}$ is minimized at $\frac{\pi}{4}$ and $w^*=\bar{w}(\frac{\pi}{4}) = 15 \sqrt{2}-\frac{l}{2}$, and if $\theta_0 > \frac{\pi}{4}$ then $\bar{w}$ is minimized at $\theta_0$, and $w^*=\bar{w}(\theta_0) = \frac{15^2}{l}$. Translating this into a function of $l$, we see that $\theta_0 \le \frac{\pi}{4}$ iff $l \le 15 \sqrt{2}$, and so we obtain an expression for the maximum width corresponding to a given length $l$: $$w_{\max}(l) = \begin{cases} 15,& 0 \le l \le 15 \\ 15 \sqrt{2}-\frac{l}{2},& 15 < l \le 15 \sqrt{2} \\ \frac{15^2}{l}, & 15 \sqrt{2} < l \end{cases}$$ As an aside, note that $w_{\max}(20) \ge 11$, so a $20 \times 10$ basket will fit. As a further aside (triggered by TonyK's answer), it is straightforward to show that $w_{\max} \le \frac{225}{l}$, and hence the area of the rectangle is bounded above by $225$ (and is attained at $l=15$ and for $l \ge 15 \sqrt{2}$).
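As a numerical sanity check (my own sketch, not part of the original answer), the piecewise formula for $w_{\max}$ can be compared against a direct grid minimisation of $\bar{w}(\theta)$; the function names and the 15-inch door parameter are just illustrative.

```python
import math

def w_bar(theta, l, door=15.0):
    # maximum width at angle theta, from the answer's two configurations
    return door * math.cos(theta) + max(0.0, (door - l * math.cos(theta)) * math.sin(theta))

def w_max_numeric(l, door=15.0, steps=100_000):
    # crude grid minimisation of w_bar over (0, pi/2]
    return min(w_bar(k * (math.pi / 2) / steps, l, door) for k in range(1, steps + 1))

def w_max_formula(l, door=15.0):
    if l <= door:
        return door
    if l <= door * math.sqrt(2):
        return door * math.sqrt(2) - l / 2
    return door**2 / l

for l in (16, 20, 25):
    # numeric and closed-form values agree to the grid resolution
    print(l, round(w_max_numeric(l), 3), round(w_max_formula(l), 3))
```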
H: Give an example of $f\in L^1$, $g\in L^{\infty}$, such that $f*g\notin C_0$ (meaning $\lim_{|x|\to\infty}(f*g)(x)\neq0$) Give an example of $f\in L^1$, $g\in L^{\infty}$, such that $f*g\notin C_0$ (meaning $\lim_{|x|\to\infty}(f*g)(x)\neq0$) Here's a theorem from my real analysis book: Assume $1\le p\le \infty$ and $f\in L^p(\mathbb{R}^n)$, $g\in L^{p'}(\mathbb{R}^n)$. Then the integral defining $(f*g)(x)$ exists for every $x\in\mathbb{R}^n$. The function $f*g$ thus defined is bounded and uniformly continuous on $\mathbb{R}^n$. Moreover, if $1 < p <\infty$, then also $f*g\in C_0$, meaning that $\lim_{|x|\to\infty}(f*g)(x)=0$. (here $p$ and $p'$ are Hölder conjugates, i.e., $1/p+1/p' = 1$) Unfortunately this theorem doesn't take care of the case $p=\infty$ for $f*g$ to be in $C_0$. Any idea for constructing $f$ and $g$? Thank you! AI: Let $g = 1$. Then $(f * g)(t) = \int f(x) g(t-x)\, dx = \int f$. If $\int f \neq 0$, then $\lim_{t \to \infty} (f * g)(t) = \int f \neq 0$, of course.
H: Computing Fourier sum for infinitely differentiable functions Let $f\in C^{\infty}(\mathbb{R})$ be a periodic function of period $2L$. I want to show that $$f(x)=\sum_{n=-\infty}^\infty \left(\dfrac{1}{2L}\int_{-L}^Lf(y)e^{-in\pi y/L}dy\right)e^{i\pi nx/L}$$ The sum on the right is equal to $$\dfrac{1}{2L}\sum_{n=-\infty}^\infty \left(\int_{-L}^Lf(y)e^{in\pi (x-y)/L}dy\right)$$ I can't see how this sum should be equal to $f(x)$. AI: There is an entire theory about Fourier series. Here is one article about when a Fourier expansion of f converges to f. http://en.wikipedia.org/wiki/Convergence_of_Fourier_series. There are entire books on the subject. There is no reason why you should immediately grasp a substantial amount of analysis simply by looking at the basic equation. The general idea is that a periodic function may be (and often is) expressible as a sum of $\sin(nx)$ and $\cos(nx)$ terms with suitable coefficients. It is possible that the sum is finite, but more often to get an exact equality you need an infinite series. You really do have sines and cosines here. They are buried in the fact that $e^{ix} = \cos(x) + i \sin(x)$. This kind of expansion is enormously useful in all sorts of problems. We know a lot about sines and cosines. If the Fourier series converges to the function, then a finite sum may be a good approximation, which is often easier to deal with than the original function. There are also many cases where simply seeing what the Fourier coefficients are provides a solution.
H: Definition of degree By Hatcher P134, degree is defined from a map $f: S^n \to S^n$ - but degree must be able to be applied to all maps. Can I arbitrarily generalize the definition of degree to a map between any two spaces? For a map $f: S^n \to S^n$ with $n > 0$, the induced map $f_*: H_n(S^n) \to H_n(S^n)$ is a homomorphism from an infinite cyclic group to itself and so must be of the form $f_*(\alpha) = d \alpha$ for some integer $d$ depending only on $f$. This integer is called the degree of $f$. AI: The most common generalization of this notion of degree is for a continuous map $f:X\to Y$ where $X$ and $Y$ are closed connected orientable manifolds of the same dimension $n$. These conditions imply that the top homology groups $H_n(X)$ and $H_n(Y)$ are both isomorphic to $\mathbb{Z}$. In order to define the degree we must fix an orientation of each of $X$ and $Y$; this amounts to choosing generators $[X]$ and $[Y]$ of $H_n(X)$ and $H_n(Y)$ respectively. Then the map $f$ induces on homology is defined in terms of these generators by $$f_*([X]) = d[Y]$$ for some $d\in\mathbb{Z}$, which is by definition the degree of $f$. Note that choosing a different orientation for either $X$ or $Y$ would change the degree of $f$ by a sign.
H: A question on the restriction of Euler's formula We all know the famous Euler's Formula. It says that if a polyhedron has F(Faces), V(vertices) and E(edges) then F + V – E = 2. My question is “is there any restriction on these variables?” By restriction, I mean something like aF + bV + cE > 0 (for some a, b and c) must be satisfied before the formula can be applied. For example, if there is no such restriction, can I ask the following question:- Given that E= 6 and V = 7, (i) find F; (ii) draw that figure and (iii) name that figure. AI: (ii)/(iii): Putting the other parts of your question aside, this is something that is an issue. For instance, a decagonal prism and a dodecahedron both have 12 faces, 20 vertices, and 30 edges, so these counts alone don't always determine which polyhedron you have. What the formula is for is finding F (or V or E) from the other two counts, whenever a simple (no holes) polyhedron with those counts exists. So if there were a polyhedron with 6 edges and 7 vertices, it would have to have only 1 face, but that wouldn't make for much of a polyhedron. There are some basic inequalities to cut down what's possible, though. Since every edge has two halves, and every vertex is attached to at least three "half-edges", we have $2E\ge3V$ (your E=6 V=7 example doesn't obey this). Similarly, since every edge bounds exactly two faces, and every face has at least three edges, we have $2E\ge3F$. Edit: These inequalities, with Euler's formula, basically reduce to $(V+4)/2\le F\le2V-4$ and this table of polyhedron counts suggests that all numbers satisfying that are possible (they certainly are up to 32 vertices/faces).
H: Orientation-preserving isometry of $R^n$ I am preparing for an exam, and would like to have a rigorous definition of the following: Orientation-preserving isometry of $R^n$ I know that it is something like the following (feel free to correct my wording): When the homomorphism $\pi:M_n \rightarrow O_n$ is applied to the unique representation $t_a\phi$ of an isometry $f$, and $\pi(f)=\phi$, define $\sigma:M_n \rightarrow \pm 1$. This map that sends an isometry of $R^n$ to $1$ is orientation-preserving. AI: The thing that is orientation-preserving is not the map that sends the isometry to $1$; rather it is the isometry itself, not that map, that is orientation-preserving. An isometry is a function $f:\mathbb R^n\to\mathbb R^n$ that preserves distances, i.e. for any two points $x,y\in\mathbb R^n$, the distance from $x$ to $y$ is the same as the distance from $f(x)$ to $f(y)$. To say that $f$ is orientation-preserving means that it won't map a left shoe to a right shoe or a left hand to a right hand, etc. In some contexts, that is demonstrably equivalent to saying that the determinant of a certain matrix is $1$ rather than $-1$.
H: Prove This Bool Expression Prove $x'z+xyz+xy'z=z$. Can you show how to solve this using Boolean algebra? My main problem is when I do this $xz (y + y') = 1 $ So $1$ times $x =$ ? AI: $$ x'z + xyz + xy'z $$ $$ = x'z + xz (y + y') \ // \ \text{distributive law} $$ $$ = x'z + xz \cdot 1 = x'z + xz \ // \ a + a' = 1; \ a \cdot 1 = a $$ $$ = z (x'+x) \ // \ \text{distributive law} $$ $$ = z \cdot 1 \ // \ a + a' = 1 $$ $$ = z $$
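If it helps, the identity can also be confirmed exhaustively; here is a small brute-force check of my own (reading ' as NOT, + as OR, and juxtaposition as AND).

```python
from itertools import product

def lhs(x, y, z):
    # x'z + xyz + xy'z
    return bool(((not x) and z) or (x and y and z) or (x and (not y) and z))

# Verify x'z + xyz + xy'z == z for every assignment of x, y, z in {0, 1}
assert all(lhs(x, y, z) == bool(z) for x, y, z in product([0, 1], repeat=3))
print("identity holds for all 8 assignments")
```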
H: Solve for huge linear congruence How do I solve a congruence involving a very large power? For example, $47^{27} \equiv x \pmod{55}$. My idea is to first break this into $47^{27} \equiv x \pmod{5}$ and $47^{27} \equiv x \pmod{11}$; then, by Fermat's little theorem, I can reduce this to: $47^3 \equiv x \pmod 5$ and $47^5 \equiv x \pmod{11}$. However, I don't know how to continue. AI: We can do a series of reductions to simplify the problem. So, we have: $47^1 \pmod{55} = 47$ $47^2 \pmod{55} = 9$ $47^3 \pmod{55} = 9 \times 47 \pmod{55} = 38$ $47^4 \pmod{55} = (47^2)^2 \pmod{55} = 9^2 \pmod{55} = 26$ $47^5 \pmod{55} = 47^2 \times 47^3 \pmod{55} = 9 \times 38 \pmod{55} = 12$ Now, using this approach, how can we reduce the problem for $47^{27} \pmod{55}$? You can also look at other approaches, such as modular exponentiation by repeated squaring, Montgomery reduction, and many others.
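The repeated reduction above is essentially square-and-multiply. A minimal sketch of my own in Python (the helper name modpow is mine; Python's built-in pow(base, exp, mod) does the same job):

```python
def modpow(base, exp, mod):
    """Right-to-left square-and-multiply: compute base**exp % mod."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # current bit of the exponent is 1
            result = (result * base) % mod
        base = (base * base) % mod       # square for the next bit
        exp >>= 1
    return result

print(modpow(47, 27, 55))   # 53
print(pow(47, 27, 55))      # built-in check, also 53
```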
H: Permutation in $S_n$ My book states the following: "Given the permutation $( 1 , 2)$ in $S_n$, what elements commute with it ? Certainly any permutation leaving both $1$ and $2$ fixed does. There are $(n - 2) !$ such. Also $( 1 , 2)$ commutes with itself. This way we get $2 (n - 2) !$ elements in the group generated by $( 1 , 2)$ and the $(n- 2) !$ permutations leaving $1$ and $2$ fixed. Are there others? There are $n(n - 1 ) /2$ transpositions and these are precisely all the conjugates of $( 1 , 2).$ Thus the conjugate class of $( 1 , 2)$ has in it $n(n - 1 ) /2$ elements." How did they get those results? AI: This is an excerpt from Dummit and Foote... The only logical step is that the size of the conjugacy class is the index of the centralizer, i.e. $|S_{n} :\; C_{S_{n}}((12))|$ = size of conjugacy class. Hence, $|C_{S_{n}}((12))| = \frac{n!}{\frac{n\cdot (n-1)}{2}} = 2\cdot (n-2)!$. Hence, we conclude that there are no other elements of $S_{n}$ that commute with $(12)$.
H: How to solve this mathematically This is a question given in my computer science class. We are given a global variable with initial value $5$. Then we are to use keyboard event handlers to do the following: On event keydown double the variable and on event keyup subtract $3$ from this variable. The question is after $12$ presses of any key on the keyboard (a keydown and keyup event is the same as one press), what will be the value of this variable? The value after $4$ presses is $35$. I can implement this and get the answer in less than 10 lines of code, but this is one of those questions I feel there should be a mathematical formula one can derive and apply to the question and arrive at an answer. So is it possible to solve this one with math? I already tried: $\lim\limits_{x \to 12} (2v - 3)x$ where $v$ is the initial variable and I am trying to represent an equation that will evaluate to the new value of the variable as $x$ gets to $12$. However since I am not very mathematically minded, that was as far as I got. Thanks AI: You can express this as a recurrence relation $a_n = 2 a_{n-1} - 3$ with initial condition $a_0 = 5$. To find a closed form, divide both sides by $2^n$ to get $$ \frac{a_n}{2^n} = \frac{a_{n-1}}{2^{n-1}} - \frac{3}{2^n}. $$ Let $s_n = a_n \cdot 2^{-n}$. This gives $s_n = s_{n-1} - 3 \cdot 2^{-n}$. Hence $s_n = -3 \sum_{j=1}^n 2^{-j} + a_0$. This is a geometric series with closed form $s_n = 3 \cdot 2^{-n} + a_0 - 3$. We conclude that $$ a_n = (a_0 - 3) 2^n + 3. $$ For $a_0 = 5$, this gives $a_4 = 35$.
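A short script of my own that both simulates the key presses and evaluates the closed form; with the initial value $5$ it reproduces $35$ after $4$ presses and gives the value after $12$ presses.

```python
def simulate(presses, start=5):
    v = start
    for _ in range(presses):
        v = 2 * v     # keydown: double
        v -= 3        # keyup: subtract 3
    return v

def closed_form(n, a0=5):
    # a_n = (a_0 - 3) * 2**n + 3
    return (a0 - 3) * 2**n + 3

assert simulate(4) == closed_form(4) == 35
print(simulate(12), closed_form(12))   # both give 8195
```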
H: Real Analysis Proof Verification Suppose $f$ is defined on all of ${\Bbb R}$, and satisfies, $|f(x)-f(y)|\leq(x-y)^2$ for all $x,y\in {\Bbb R}$. Prove that $f$ is constant. Basically I have, $|f(x)-f(y)|\leq(x-y)^2 \Rightarrow \frac{|f(x)-f(y)|}{(x-y)}\leq(x-y)$. $\Rightarrow \lim_{\ x\to\ y}\frac{|f(x)-f(y)|}{(x-y)}\leq \lim_{\ x\to\ y}(x-y)$. $\Rightarrow f^{'}(x)\leq 0, \ \forall x\in{\Bbb R}$. Case 1: if $f^{'}(x)= 0$ then we're done. Case 2: if $f^{'}(x)< 0$??? Well I don't even know if I'm going about this right. Any hints? AI: You've really shown more: Using the fact that $(x - y)^2 = |x - y|^2$, we have $$0 \le \left|\frac{f(x) - f(y)}{x - y}\right| \le |x - y|$$ Now take the limit as $y \to x$, and we see from the Squeeze (Sandwich) Theorem that $$0 \le |f'(x)| \le 0$$ Hence $f'(x) = 0$ for every $x$, and a function whose derivative vanishes on all of ${\Bbb R}$ is constant.
H: Proof for this relation regarding complex numbers This is what I have to prove : Re($z_1z_2$) = Re($z_1$)Re($z_2$) - Im($z_1$)Im($z_2$). I have two complex numbers : $z_1$ and $z_2$. Can anyone give at least some hints? AI: Why don't you try...? $$ (a+bi)\cdot(c+di) = \dots $$
H: $f$ is Differentiable on all of ${\Bbb R}$ and $\lim_{\ x\to\infty} f^{'}(x) = 0.$ Show $\lim_{\ x\to\infty} (f(x+1)-f(x))=0.$ Suppose $f$ is Differentiable on all of ${\Bbb R}$ and that $\lim_{\ x\to\infty} f^{'}(x) = 0.$ Show that $\lim_{\ x\to\infty} (f(x+1)-f(x))=0.$ I tried to prove by this logic, $\lim_{\ x\to\infty} f^{'}(x) = 0 \Rightarrow \lim_{\ x\to\infty}\left(\lim_{\ h\to\ 0}\frac{f(x+h)-f(x)}{h}\right) = 0 \Rightarrow f^{'}(x) = 0, \forall x\in{\Bbb R}$. Thus, $f$ is constant which implies $f(x+1)=f(x) \Rightarrow \lim_{\ x\to\infty} (f(x+1)-f(x))=0$. AI: No, your conclusion that $f$ is constant is wrong (a function that satisfies your conditions is $f(x) = \ln(1+x^2)$). What you need is the mean-value theorem. Suppose $\lim_{x\to \infty} f'(x) = 0$, then for any $\epsilon > 0$, there is $M > 0$ such that $$ z > M \Rightarrow |f'(z)| < \epsilon $$ Now, if $x > M$, then there is some $z\in (x,x+1)$ so that $$ |f(x+1) - f(x)| = |f'(z)| < \epsilon $$ Hence $$ \lim_{x\to\infty} f(x+1) - f(x) = 0 $$
H: Expected number of balls remaining A box contains $w$ white balls and $b$ black balls. Each time we pick a ball without replacement until there are no white balls left. What is the expected number of black balls remaining in the box ? AI: Let $X$ be the number of black balls remaining when the last white ball is drawn. We wish to find $E[X]$. Number the black balls; call them $B_1,B_2,\dots,B_b$. For $1\le j\le b$, let $X_j$ be the "indicator variable" which takes the value $1$ if $B_j$ remains in the box when the last white ball is drawn, $0$ otherwise. Observe that $X=\sum_{j=1}^bX_j$. By linearity of expectation, $E[X]=\sum_{j=1}^bE[X_j]=bE[X_1]=bP(X_1=1)=bp$, where $p$ is the probability that $B_1$ is still in the box when the last white ball is drawn. We have to determine the value of $p$. What is the probability that all of the white balls are drawn ahead of $B_1$? Note that the other black balls are irrelevant; we may as well throw them away. The box contains one black ball, $B_1$, and $w$ white balls. What is the probability that the black ball is drawn last? Plainly that probability is $\frac1{1+w}$, so the final answer is $E[X]=\frac b{1+w}$.
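A quick Monte Carlo check of the formula $E[X]=\frac{b}{1+w}$ (my own sketch; the parameters $w=5$, $b=7$ are arbitrary):

```python
import random

def remaining_black(w, b):
    """Draw balls without replacement until the last white ball appears;
    return how many black balls are still in the box at that moment."""
    balls = ['W'] * w + ['B'] * b
    random.shuffle(balls)                 # the shuffled list is the draw order
    last_white = max(i for i, ball in enumerate(balls) if ball == 'W')
    return balls[last_white + 1:].count('B')

w, b, trials = 5, 7, 200_000
estimate = sum(remaining_black(w, b) for _ in range(trials)) / trials
print(estimate, b / (w + 1))   # Monte Carlo estimate vs. exact value 7/6
```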
H: Prove that G is not simple If in a finite group G an element $a$ has exactly two conjugates, prove that G has a normal subgroup $N \ne e$. I know to find the number of conjugates of $a$ I can use the formula $\frac{|G|}{|N(a)|} = c_a$ where $N(a)$ is the normalizer thus $\frac{|G|}{|N(a)|} = c_a = 2$ but how can I show that G is not simple? AI: You're almost there. We know that $N(a) \leq G$ ($N(a)$ is a subgroup of $G$). Further, since we are working in a finite group, the index of the subgroup is given by $$|G:N(a)| = \frac{|G|}{|N(a)|}$$ which you correctly claim is 2. We know that any subgroup of index 2 is a normal one. So in particular we have that $N(a) \triangleleft G$. To see that any subgroup $H$ of index 2 in a group $G$ is normal, note that there are only two cosets, $H$ and $G-H$. Of course $1H = H1$ because the identity commutes. But also, if we choose some $g\notin H$, then $gH = G-H$. But at the same time, the right cosets also partition $G$ so $Hg = G-H$ and thus $gH = Hg$ so $H$ must be normal.
H: Order of some matrices in $GL(2,p)$ is coprime with $p$ Let $M$ belong to $GL(2,p)$ where $p$ is a prime number, and suppose $\det M$ generates $GL(1,p)$. I want to prove that the order of $M$ is coprime to $p$. I think if $M^{np}=I_2$ that should mean $M^n=I_2$, but how do I proceed? AI: If you look at the answer to this question, you can see that the only case to study is $M=\left(\begin{array}{cc} a & 1 \\ 0 & a \end{array}\right).$ (In the other two cases the order of the matrix divides $p-1$, respectively $p^2-1$, and therefore is coprime with $p$.) In this case $a^2$ generates $\mathbb F_p^{\times}$, that is, the order of $a^2$ in $\mathbb F_p^{\times}$ is $p-1$. But this is not possible for $p\ge 3$: if the order of $a$ is $m$, then $m\mid p-1$ and the order of $a^2$ is $m/\gcd(2,m)$ which is less than $p-1$.
H: Index of maximal proper subgroup of a solvable group This is problem 2.7.15 from Hungerford's Algebra: If $H$ is a maximal proper subgroup of a finite solvable group $G$, then $[G:H]$ is a prime power. If $G$ is abelian, then it's easy to show that $[G:H]$ is a prime power. I'm stuck on the non-abelian case. Any hints how to proceed? AI: In a solvable group, you can show that a minimal normal subgroup is abelian of prime power order. I think this is a lemma in Hungerford that is proven in the section this problem is in. Now here's a hint for your problem. Proceed by induction, and let $N \neq 1$ be a minimal normal subgroup of $G$. Consider the cases $N \leq H$ and $N \not\leq H$ separately.
H: Differentiation in several variables using projection Could you tell me how to differentiate a function with several variables? Our teacher gave us an example: $\pi_1 : (x,y) \rightarrow x, \ \ \ \pi_2: (x,y) \rightarrow y$ - these are differentiable, because they are linear. Consider $f(x,y) = e^x \cos y$ Let $f_1: t \rightarrow e^t, \ \ \ f_2: t \rightarrow \cos t$. Now $f(x,y) = F(f_1 \circ \pi_1, f_2 \circ \pi_2)$, where $F(x,y)=xy$, So $F'(x,y)(h,k) = F(x,k) + F(h,y) = xk+hy$, so $f'(x,y)(h,k)=he^x \cos y - k e^x \sin y$. I understand this, because we had this theorem in the analysis lecture: If $E_1, E_2, F$ - Banach spaces, $\phi \in \mathcal{L}(E_1, E_2; F)$ - bilinear and continuous, then $\phi$ is $C^1$ and $d_{(a_1, a_2)}\phi.(h_1, h_2) = \phi'(a_1,a_2)(h_1,h_2) = \phi(a_1, h_2) + \phi(h_1, a_2), \ \ (a_1, a_2), (h_1, h_2) \in E_1 \times E_2$. My problem is - what should I do when I have three or four variables? Is there any other way to quickly and safely determine derivatives? When doing it by calculating partial derivatives I first need to check if they are continuous and then $d_af = (\frac{ \partial f }{\partial x_1 }, ..., \frac{ \partial f }{\partial x_m })$ where $\frac{ \partial f }{\partial x_i } = d_af.e_i$ or $d_af = \sum _{i=1} ^m \frac{\partial f}{\partial e_i}(a) \circ \pi _i$, where $e_i$ is the standard basis. Is this the correct approach? I'd really appreciate all your help here. Thank you a lot. AI: In a finite-dimensional setting, the derivative, if it exists, is always the linear map $$ df(x^*)\colon (h_1,\ldots,h_n) \mapsto \sum_{j=1}^n \frac{\partial f}{\partial x_j}(x^*) h_j. \tag{1} $$ Hence it suffices to compute all the partial derivatives and then show that (1) satisfies the definition of the derivative (the best linear approximation etc.) So your approach is correct in $\mathbb{R}^n$. In Banach spaces, the situation is more complicated, since you do not have an algebraic basis to express a linear map. The approach via projections is intrinsic and works also for functions $f \colon E_1 \times E_2 \to Y$, where $E_1$ and $E_2$ are Banach spaces. However, if $E_1$ and $E_2$ have finite dimension, you are simply computing standard partial derivatives, so you can proceed as you were used to.
H: Adjoint of a Matrix Definition Tom M. Apostol in his book "calculus Vol. 2" page 122 (see image below) defines adjoint of a matrix as the transpose of the conjugate of the matrix. Is this definition always correct ? Does it agree with the adjoint defined here, i.e. transpose of the cofactor matrix? AI: These are two different concepts. The one that one hears a lot about is the usual adjoint of $A$, which is simply, as you say, the conjugate transpose of $A$, usually denoted by $A^*$.
H: How to find this integral $I=\int\int_{D}|\cos{(x+y)}|dxdy$ Find this $$ I = \int\int_{D}\left\vert\,\cos\left(x + y\right)\right\vert\,{\rm d}x\,{\rm d}y $$ where $D = \left\{\vphantom{\Large A}\left(x, y\right)\quad \left.\right\vert\quad \ \left\vert\,x\,\right\vert + \left\vert\,y\,\right\vert\ \leq\ 2\pi\right\}$ My try: let $$x+y=u,x-y=v\Longrightarrow x=\dfrac{u+v}{2},y=\dfrac{u-v}{2}$$ then $J=\left|\begin{vmatrix} \dfrac{1}{2}&\dfrac{1}{2}\\ \dfrac{1}{2}&-\dfrac{1}{2} \end{vmatrix}\right|=\dfrac{1}{2}$ and $D'=\{(u,v)\,:\,|u+v|+|u-v|\le 4\pi\}$ then $$I=\dfrac{1}{2}\int\int_{D'}|\cos{u}|dudv$$ and from here I can't continue. Thank you for any help. AI: Hint: $D': -2\pi\leq u\leq 2\pi,\quad -2\pi\leq v\leq 2\pi$.
H: I have confusion while translating propostions to logical expressions I have following propositions: p:Grizzly bears have been seen in the area. q:Hiking is safe on the trail. r:Berries are ripe along the trail. I need to convert following compound statement to logical expressions by using logical connectives. If berries are ripe along the trail,hiking is safe if and only if grizzly bears have not been seen in the area What I think is true is: ~p <-> (r^q) Note:I have considered comma in above statement as and. But answer given at the end of book is: r -> ( q <-> ~p ) AI: The comma is not to be read as an and, it is the "then" of the first if. So. If berries are ripe along the trail, then the following holds: hiking is safe if and only if bears have not been seen in the area Thus If $r$, then ($q$ if and only if $\neg p$).
H: What is Topology of compact-convergence? Munkres - Topology p. 283 Definition Let $(Y,d)$ be a metric space and $X$ be a topological space. Define $B_C(f,\epsilon)$ as the set $\{g\in Y^X : \sup\limits_{x\in C} \operatorname{d}(f(x),g(x)) < \epsilon \}$ for a given compact subspace $C$ and $\epsilon >0$ and $f\in Y^X$. Then, the topology generated by all $B_C(f,\epsilon)$ is called the "Topology of compact convergence". How is this a well-defined definition? Munkres mentioned in his book that we need some topology on $Y^X$ making $C(X,Y)$ closed which is stronger than the product topology. Then, he defined 'the topology of compact convergence' as given above. Since he considers a topology on $Y^X$, he didn't assume functions to be continuous, hence not bounded. Well, if functions are not continuous, then compactness of $C$ no longer guarantees that $\sup_{x\in C} d(f(x),g(x))$ exists even when $C$ is nonempty, and of course it does not exist when it is empty. Is he taking the supremum over the extended reals? Or should I take $d$ as a bounded metric? What would be the definition of this that makes sense? Off topic: I feel like Munkres defines topologies that nobody uses but that are really useful. An example is the uniform metric. And I think the 'topology of compact convergence' is one too. There's no definition of this topology in Wikipedia. AI: The definition given by Munkres is correct. The set $B_C(f,\epsilon)$ contains the functions $g:X\to Y$ for which $\sup_{x\in C}d\big(f(x),g(x)\big)$ exists and is less than $\epsilon$. If $g:X\to Y$ is such that the supremum doesn’t exist (or if you prefer, is infinite), then $g\notin B_C(f,\epsilon)$, that’s all. The topology of compact convergence is defined in Wikipedia; the definition is given in terms of which sequences of functions converge rather than directly in terms of the topology, but if you compare it with Theorem $46.2$ in Munkres, you’ll see that it’s the same topology. Both the uniform topology and the topology of compact convergence are extremely useful and widely used.
H: Proving infinite wedge sum of circles isn't first countable Let $\{S_i\}_{i=1}^\infty$ be a countable family of circles and $\{p_i\}_{i=1}^\infty$ be a family of points such that $p_i\in S_i$. let $X = \bigcup _{i=1}^\infty S_i/\{p_i\}_{i=1}^\infty$ be the topological space obtained from the quotient map $q: \bigcup _{i=1}^\infty S_i\longrightarrow X$ by collapsing all $p_i$ to one point $p\in X$. Show that $X$ is not first countable. I had this diagonalization idea: suppose for the sake of contradiction that there's a countable neighbourhood basis $\{U_i\}_{i=1}^\infty$ at point $p\in X$. let $V_{ij}=q^{-1}(U_i)\cap S_j$. At this point my intuition suggests that $W=\bigcup_{i=1}^\infty V_{ii}$ is a saturated open set that satisfies $$\forall i>0(U_i\nsubseteq q(W))$$ thus arriving at a contradiction but all my attempts so far to prove it rigorously have failed. Any help formalizing my argument will be appreciated. AI: HINT: For each $k\in\Bbb Z^+$ there is an $r_k>0$ such that $$B(p_k,r_k)\subseteq q^{-1}[U_k]\;,$$ where $B(x,\epsilon)$ is the usual open ball. For $k\in\Bbb Z^+$ let $\epsilon_k=\frac12r_k$, and consider the image under $q$ of the saturated open set $$\bigcup_{k\in\Bbb Z^+}B(p_k,\epsilon_k)\;.$$
H: $13$ is the largest prime that can divide two successive integers of the form $n^2+3$ How can I prove the following problem in number theory? Show that $13$ is the largest prime that can divide two successive integers of the form $n^2+3$. AI: Let $p=4k+1$ be a prime number and let $q$ be a prime number such that $$q|n^2+k, q|(n+1)^2+k$$ which implies that $q|2n+1$. Now note that $$4(n^2+k)=(2n+1)(2n-1)+4k+1$$ Because $q|4(n^2+k)$ and $q|(2n+1)(2n-1)$, we get $q|4k+1=p$, so $q=p$. (In your question $k=3$, so $p=13$.) To see that $13$ really does occur, note that $6^2+3=39$ and $7^2+3=52$ are successive integers of this form, both divisible by $13$.
H: Solving the heat equation - derivative of an integral I was reading a solution for the heat equation and, at some point, I had to calculate $Q_{x}$, where $$Q(x,t)=\frac{1}{2}+\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\frac{x}{\sqrt{4kt}}}e^{-s^2}ds$$ How can I do this? AI: First, perform a simple change of variables in the integral: $$ u := s\sqrt{4kt}.$$ Next, use the fundamental theorem of calculus to evaluate the derivative with respect to $x$.
H: Linear algebra statement - rank of a matrix How can I show that given an $m\times n $ matrix $A$ such that $\operatorname{rank} A = k \leq \min(m,n)$ , then there must exist a $k\times k $ minor of $A$ having $\det \neq 0 $ . I know that $\operatorname{rank} A=k$ implies that there exist $k$ linearly independent rows and $k$ linearly independent columns , but how can I deduce from this the above statement ? Thanks ! AI: First take the $k$ linearly independent columns to get an $m \times k$ matrix of rank $k$. Since column rank equals row rank, there are $k$ linearly independent rows. Take those rows to get a $k \times k$ matrix. The rank is $k$ since there are $k$ linearly independent rows. (The latter also implies that the column rank $= k$.) By the invertible matrix theorem any $k \times k$ matrix of rank $k$ has determinant $\neq 0$.
H: Prove that $\operatorname{trace}(ABC) = \operatorname{trace}(BCA) = \operatorname{trace}(CAB)$ Prove that $\operatorname{trace}(ABC) = \operatorname{trace}(BCA) = \operatorname{trace}(CAB)$ if $A,B,C$ matrices have the same size. AI: Is it already known that $\operatorname{Tr}(XY) = \operatorname{Tr}(YX)$ when $X$ and $Y$ are square matrices of the same size? If it is, then simply set $X= AB$ and $Y = C$. It will give you $\operatorname{Tr}(ABC) = \operatorname{Tr}(CAB)$. You can get $\operatorname{Tr}(ABC) = \operatorname{Tr}(BCA)$ in a similar fashion.
H: Order of an element as a product I apologise if this has been asked before, but I wasn't able to find it. I am trying to prove that if $g\in G$, where $G$ is an arbitrary group, $|g| = n$, where $n = ab$, with $\gcd(a, b) = 1$, then there exist $h,k\in G$ such that $g = hk$, and $|h| = a$, $|k| = b$. So far I have reasoned that the identity and $g$ itself would be suitable choices for $h$ and $k$. However, I am asked to determine if there are more than one choice for $h$ and $k$. It seems to me that if $G$ is cyclic, then there might be more than one choice for $h,k$ as in finite abelian groups, the product of two elements has order equal to the product of their orders. But I am wondering if they would satisfy the $\gcd(a,b)=1$ criteria, and what happens when $G$ is not cyclic? I hope someone might be able to help me out! AI: Since $\gcd(a, b) = 1$ there are $c,d$ such that $ac+bd=1$. Set $h=g^{bd}, k=g^{ac}$. The choice is unique. If we have another such pair $h_1,k_1$ then $hk=h_1k_1$ implies $h^{-1}h_1=k^{-1}k_1$. Denote this element by $x$, then $x^a=(h^{-1}h_1)^a=1$ and similarly $x^b=1$. Again from $ac+bd=1$ it follows $x=1$ etc.
H: Evaluate $\int_0^1\int_p^1 \frac {x^3}{\sqrt{1-y^6}} dydx$ I have been working on this sum for a while. The question asks to evaluate the double integral. $$\int_0^1\int_p^1 \frac {x^3}{\sqrt{1-y^6}} dydx$$ where $p$ is equal to $x^2$. I know that I have to solve the $y$ integral first and then the $x$. But I don't know how to solve the root integral. Applying the formula $$\int \frac{1}{\sqrt{1-t^2}}dt$$ where $t=y^3$ isn't working. Any ideas as to how I must proceed with the integral? Once I get the integral, I must substitute the limits and then the integral would be in terms of $x$ and I must integrate it. Am I correct? AI: Change the order of integration. You can do this by drawing the region of integration and seeing that the integral is just $$\int_0^1 \frac{dy}{\sqrt{1-y^6}} \, \int_0^{\sqrt{y}} dx \, x^3$$ which is $$\frac14 \int_0^1 dy \frac{y^2}{\sqrt{1-y^6}}$$ or, substituting $u=y^3$ (so that $du=3y^2\,dy$), $$\frac{1}{12} \int_0^1 \frac{du}{\sqrt{1-u^2}} = \frac{\pi}{24}$$
H: Find the inverse of a matrix with a variable $$X= \begin{pmatrix} 2-n & 1 & 1 & 1 & \ldots & 1 & 1 \\ 1 & 2-n & 1 & 1 & \ldots & 1 & 1 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & 1 & 1 & 1 & \ldots & 2-n & 1 \\ 1 & 1 & 1 & 1 & \ldots & 1 & 2-n\end{pmatrix}_{n\times n} $$ That is, the $n\times n$ matrix has $2-n$ along the diagonal and $1$ everywhere else. Help please, I am stuck with this problem. AI: HINT: Let $B$ be the $n\times n$ matrix of ones. Then $XB=B$, so $$X(B-I)=XB-X=B-X=(n-1)I\;,$$ where $I$ is the $n\times n$ identity matrix. By the way, manually calculating the inverses for $n=2$ and $n=3$ was enough to suggest what the answer ought to be, and discovering the argument suggested above was then quite easy.
H: Possible number of names from a certain alphabet I am trying to solve the following problem, but I am a bit stuck. The question is as follows. The language of a certain island has only the letters A, B, C, D, E. Every place name must start and end with a consonant, consist of exactly 6 letters, contain exactly 2 vowels, which may not be next to each other, and cannot have 2 consecutive copies of the same consonant. How many possible place names are there? From what I have established, since the start and end of the place name has to end with consonants, there are $3^{2}$ = 9 possibilities for this. This results in $\ ^{4}C_2$ different choices for placing the vowels. However, I am a bit confused after this. Any help would be greatly appreciated. AI: HINT: These names can only be of the forms CVCVCC, CCVCVC, and CVCCVC, where V represents a vowel and C a consonant, and consecutive consonants must be distinct. The CVCVCC and CCVCVC types are mirror images of each other, so there must be the same number of each. Thus, all you have to do is count the names of CVCVCC type, double it, and add the number of names of CVCCVC type. CVCVCC: There are $3$ possibilities for the first letter, $2$ for the second, $3$ for the third, ... CVCCVC: The same straightforward approach works here.
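For a sanity check, the count can also be brute-forced over all $5^6$ strings; this sketch is mine, not part of the answer, and it should agree with the pattern-by-pattern count from the hint ($3 \times 216 = 648$).

```python
from itertools import product

vowels, consonants = set("AE"), set("BCD")

def valid(name):
    # must start and end with a consonant
    if name[0] in vowels or name[-1] in vowels:
        return False
    # exactly two vowels
    if sum(c in vowels for c in name) != 2:
        return False
    for a, b in zip(name, name[1:]):
        if a in vowels and b in vowels:          # vowels may not be adjacent
            return False
        if a in consonants and a == b:           # no two consecutive identical consonants
            return False
    return True

print(sum(valid(name) for name in product("ABCDE", repeat=6)))   # 648
```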
H: Problem with integrating by parts \begin{eqnarray*} \int x^3\cos 4x \ dx &=& \int x^3(8\cos^4 x - 8\cos^2x+1) \ dx\\ &=&\int 8x^3\cos^4x \ dx - \int 8x^3\cos^2x + \int x^3 \ dx\\ &=&8\int x^3\cos^4x \ dx - 8\int x^3\cos^2x + \frac{x^4}{4}+c \\ &=& ? \end{eqnarray*} I'm kind of stuck there. I'm new to integration, maybe somebody should lend me a hand. AI: You seem to have got off on the wrong foot completely. When the integrand is a polynomial times $\cos ax$, $\sin ax$, or $e^{ax}$ you should always split it so that you differentiate the polynomial and integrate the rest. Let $u=x^3$ and $dv=\cos 4x\,dx$. Then $du=3x^2\,dx$, $v=\frac14\sin 4x$, and $$\int x^3\cos4x\,dx=\frac14x^3\sin4x-\frac34\int x^2\sin 4x\,dx\;.$$ Now you have $$\int x^2\sin 4x\,dx$$ to evaluate, with a lower power of $x$. Treat it the same way, and you’ll end up with an integral $$\int x\cos 4x\,dx\;.$$ still to be evaluated. Treat that the same way, and you’ll end up with a simple $$\int\sin 4x\,dx\;.$$
H: prove a limit about convergence in norm Given $f_n:\Bbb R \to \Bbb R $ converging to $0$ in the $L^2$ norm. Show that: $$\lim_{n\to\infty}{1\over n}\int_{-n}^n|f_n|dx=0$$ I think it has something to do with the C-S inequality but I'm having trouble with it. AI: Using Daniel Fischer's hint it is enough to use the C-S inequality: $$\frac{1}{n}\int_{-n}^n|f_n(x)|dx \leq\frac{1}{n}\big(\int_{-n}^n1^2dx\big)^{1/2}\big(\int_{-n}^n|f_n(x)|^2dx\big)^{1/2}$$ $$=\frac{\sqrt{2}}{\sqrt{n}}\|f_n\|_2 \leq \sqrt{2}\|f_n\|_2\rightarrow 0$$.
H: Almost uniform convergence of $f_n(x) = x^n$ on the interval $[0, 1]$? Consider $f_n(x) = x^n$ on the interval $[0, 1]$. This converges pointwise to $$f = \begin{cases} 0, & \mbox{if } 0 \le x < 1\ \\ 1, & \mbox{if } x = 1 \end{cases}$$ Now I know $f_n(x)$ doesn't converge uniformly to $f$ but does it converge almost uniformly? For any $\delta > 0$ we can take the subset $(1 - \epsilon, 1]$ of $[0, 1]$ such that $\epsilon < \delta$. Then as the measure of this subset is $\epsilon < \delta$ and $f_n(x)$ converges to $f$ on the complement of this subset we have almost uniform convergence. Is my understanding correct here? AI: You have the right idea! The sequence $f_n$ will converge almost uniformly to $f$ when: Given any $\epsilon > 0$ there exists some measurable subset $F_{\epsilon} \subset [0,1]$ such that $\mu(F_{\epsilon}) \leq \epsilon$ and $f_n \to f$ uniformly on $[0,1]\backslash F_{\epsilon}$. Note here I'm using $\mu$ as Lebesgue measure. So, take $\epsilon > 0$ (and $<1$ so we have something to talk about) and let $F_{\epsilon} = (1-\epsilon, 1]$ as you suggested. Now, $\mu(F_{\epsilon}) = \epsilon$ and so it remains to show that $f_n \to f$ uniformly on $[0,1-\epsilon]$. To do this, notice that $\sup_{0\leq x \leq 1-\epsilon}|f_n(x) - f(x)| = \sup_{0\leq x \leq 1- \epsilon}|f_n(x)| \leq (1-\epsilon)^n$, which since $\epsilon \in(0,1)$, means that $\lim_{n \to \infty} \sup_{0\leq x \leq 1-\epsilon}|f_n(x) - f(x)| = 0$, proving uniform convergence on $[0,1]\backslash F_{\epsilon}$.
H: What is the Krull dimension of this local ring I want to know what is the dimension of this ring $\mathbb C[x,y]_{(0,0)}/(y^2-x^7,y^5-x^3)$. I don't know how to do that. If I suppose $y^2=x^7$ I will get a higher degree of $x$. AI: We compute in the local ring: $$y^{6}=(y^2)^3=(x^7)^3=(x^3)^7=(y^5)^7=y^{35}$$ Hence, $y^6(1-y^{29})=0$. Since $y$ belongs to the maximal ideal, it follows that $1-y^{29}$ is a unit, hence $y^6=0$. Similarly one obtains $x^6=0$. Hence, the local ring modulo nilpotents (this doesn't change dimension) is just $\mathbb{C}$, which is $0$-dimensional.
H: How to find the normal plane from a tangent plane? $$f(x,y,z)=\frac{x^2}{4} +\frac{y^2}{9} +\frac{z^2}{25}=3 $$ I found the tangent plane from this surface at $P(2,3,5)$ by using the gradient vector, $\nabla F=\langle f_x, f_y, f_z\rangle$. I was wondering if I could find the normal plane at that same point using the information already given. The tangent plane is $(x-2)+2/3(y-3)+2/5(z-5)=0$ The missing key is to find the vector that is perpendicular to the normal. AI: I think you may be a little bit confused -- in three dimensions, a surface has a tangent plane and a normal line. The normal line is spanned by the gradient.
H: Is the submanifold compact? Let $M$ be the following subset of $\mathbb R^4$:$$M= \{(x,y,z,w), 2x^2+2=z^2+w^2, 3x^2+y^2=z^2+w^2 \}$$ we know $M$ is a submanifold of $\mathbb R^4$, is $M$ compact? AI: Yes, because $M$ is closed and bounded. Closedness: $M$ is given implicitly on the form $F=0$ with $F:\mathbb{R}^4\rightarrow\mathbb{R}^2$, $F$ continuous. Continuity guarantees closedness. To see this, recall that a set is closed if and only if it contains all its limit points. Let $u_n = (x_n,y_n,z_n,w_n) \in \mathbb{R}^4$ be a sequence on $M$, i.e., $F(u_n) = 0$ for all $n$. If $u_n$ converges to some $u\in\mathbb{R}^4$, it follows by continuity of $F$ that $F(u)=0$. Thus, the limit point $u\in M$ as well. Boundedness: The defining equations $F$ can be combined to give $$ 3x^2+y^2 = z^2 + w^2 = 2x^2 + 2$$ and hence $$ x^2 + y^2 = 2. $$ Thus the projection of $M$ on the $xy$ plane is a circle of radius $\sqrt{2}$. Hence, $x^2$ and $y^2$ are smaller than 2 for every point $(x,y,z,w)\in M$. Similarly, $$ z^2 + w^2 = 2 + 2x^2 \leq 6, $$ so the projection of $M$ on the $zw$ plane is contained in a circle of radius $\sqrt{6}$. Thus, both $z^2$ and $w^2$ are smaller than 6. It follows that for all $(x,y,z,w)\in M$, $x^2+y^2+z^2+w^2 \leq 8$, and $M$ is bounded.
H: Closures of Relations How to prove that the transitive closure of a symmetric closure of a relation is greater than the symmetric closure of a transitive closure of a relation? AI: Essential is here that the symmetric closure of a transitive relation is not necessarily transitive. For instance on positive integers look at the relation $n\mid m$, i.e $n$ divides $m$. The symmetric closure of it is the relation $n\mid m\vee m\mid n$. The numbers $4$ and $12$ are related and so are the numbers $12$ and $6$, but $4$ and $6$ are clearly not. On the other hand the transitive closure of a symmetric relation is symmetric itself. So if we denote the transitive and symmetric closure of a relation $R$ respectively by $\tau\left(R\right)$ and $\sigma\left(R\right)$ then we find in $\tau\left(\sigma\left(R\right)\right)$ a transitive and symmetric relation that contains $\tau\left(R\right)$. This leads to $\sigma\left(\tau\left(R\right)\right)\subset\tau\left(\sigma\left(R\right)\right)$. In $\sigma\left(\tau\left(R\right)\right)$ we recognize a relation that contains $\sigma\left(R\right)$ but is not necessarily transitive. If it is not transitive then $\sigma\left(\tau\left(R\right)\right)$ must be a proper subset of $\tau\left(\sigma\left(R\right)\right)$.
H: Using Euler's method and Taylor polynomial to solve a differential equation Consider the initial value problem $dy/dx=x+y^2$ with $y(0)=1$ a) Use Euler's Method with step-length $h=0.1$ to find an approximation to $y(0.3)$. HINT 1: Numerical methods. HINT 2: Differential equations videos. b) Let $P2(x)$ denote the second order Taylor polynomial for the solution of the initial value problem $y(x)$ at $x=0$. Find $P2(0.3)$. HINT: Differentiate the differential equation implicitly to find $y′′$. I just want to know if I've done it correctly. $y′(0)=0+1^2=1$ $y′′(0)=1+2⋅1⋅1=3$ y(0.3) by Euler method: $y1=y0+hf(x0,y0)=1+0.1(0⋅1^2)=1$ $y2=y1+hf(x1,y1)=1+0.1(0.1⋅1^2)=1.01$ $y3=y2+hf(x2,y2)=1.01+0.1(0.2⋅1.01^2)=1.030402$ Inserting into Taylor formula: $P2(0.3)=1+1\cdot (0.3-0)+\frac{3\cdot (0.3-0)^2}{2!}=1.435$ Is this correct? Shouldn't the result from $P2(0.3)$ be closer to that of the Euler method? AI: Given: $$\tag 1 \dfrac{dy}{dx}=x+y^2, y(0) = 1, h = 0.1$$ For $(1)$, using Euler's Method we have: $y_0 = \alpha$ $y_{i+1} = y_i + hf(x_i,y_i) = y_i + 0.1(x_i + y_i^2)$ Thus, the iterates are: $y_0 = 1$ $y_1 = y_0 + 0.1(x_0 + y_0^2) = 1 + 0.1(0 + 1^2) = 1.1$ $y_2 = y_1 + 0.1(x_1 + y_1^2) = 1.1 + 0.1(0.1 + 1.1^2) = 1.231$ $y_3 = y_2 + 0.1(x_2 + y_2^2) = 1.231 + 0.1(0.2+ 1.231^2) = 1.40254$ Next, you need to read what is being asked for in the Taylor Polynomial approach and rework that. This is a different approach than the Euler approach. They provide a hint for this, implicitly differentiate the DEQ to find $y''$. Implicitly differentiating $(1)$ yields: $$y'' = 1 + 2 y y' = 1 + 2 y (x+y^2) = 1 + 2 x y + 2 y^3$$ Hopefully, you can take it from here.
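Here is a small script of my own carrying out both computations: Euler's method with $h = 0.1$ for part (a) and the second-order Taylor polynomial for part (b).

```python
def f(x, y):
    return x + y**2          # right-hand side of y' = x + y^2

# Part (a): Euler's method with step h = 0.1 from x = 0 to x = 0.3
x, y, h = 0.0, 1.0, 0.1
for _ in range(3):
    y = y + h * f(x, y)
    x += h
print("Euler  y(0.3) ~", y)          # ~ 1.40254

# Part (b): P2(x) = y(0) + y'(0) x + y''(0) x^2 / 2, with
# y(0) = 1, y'(0) = 0 + 1^2 = 1, y''(0) = 1 + 2*y(0)*y'(0) = 3
p2 = 1 + 1 * 0.3 + 3 * 0.3**2 / 2
print("Taylor P2(0.3) =", p2)        # 1.435
```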
H: $L^2$ function on finite interval implies $L^1$? Let $a,b\in\mathbb{R}$. Suppse $f:\mathbb{R}\rightarrow\mathbb{C}$ is an $L^2$ function on the finite interval $(a,b)$. That is, $$\int_{a}^b|f(x)|^2dx<\infty$$ Is it always true that $f$ is an $L^1$ function on the same interval, that is, $$\int_{a}^b|f(x)|dx<\infty \text{ }?$$ AI: Yes, every square integrable function on a finite interval is integrable. Even more generally, if $(X,\mathcal{A},\mu)$ is a finite measure space ($\mu(X) < \infty$), then for all $1 \leqslant q < p \leqslant \infty$ we have $L^p(\mu) \subset L^q(\mu)$. Hölder's inequality states that for $r,s \geqslant 1$ with $\frac1r + \frac1s = 1$, and any measurable $u,v$, we have $$\int_X \lvert u(x)v(x)\rvert\,d\mu \leqslant \left(\int_X \lvert u(x)\rvert^r\right)^{1/r}\cdot \left(\int_X \lvert v(x)\rvert^s\right)^{1/s}.$$ Applying that with $u = 1$, $v = \lvert f\rvert^q$, $s = \frac{p}{q}$ and $r = \frac{p}{p-q}$ yields $$\lVert f\rVert_q^q \leqslant \mu(X)^{1-q/p}\cdot \lVert f\rVert_p^{q},$$ or $$\lVert f\rVert_q \leqslant \mu(X)^{1/q-1/p}\cdot \lVert f\rVert_p.$$ So $\lVert f\rVert_p < \infty$ implies $\lVert f\rVert_q < \infty$ for $q < p$ if $\mu(X) < \infty$.
H: Does $x \in [0, 1]$ mean $x = 0 \lor x = 1$? Consider the following notation: $x \in [0, 1]$ Does this mean that $x$ can be any rational number between 0 and 1 inclusive, or does it mean that either $x = 0$ or $x = 1$? AI: By definition $[0,1]$ is the set of real numbers $\{x\in \mathbb R \colon 0\le x \le 1\}$.
H: What does a symplectic vector field means in terms of the physics of a system? The mathematics of symplectic (as well as Hamiltonian) vector fields is something that has been quite clear to me for some time, but recently I have been thinking much more about what certain mathematical ideas are meant to capture from physics (as I do not have a physics background), and there is one notion that I just can't see: if some physical system admits a symplectic vector field, then what does this mean in terms of the physics of a system? What is the physical relevance of a vector field whose flow preserves the symplectic form? EDIT: I've come to decide that there is not really any physical meaning for a symplectic vector field, other than, perhaps, a vector field that locally models the dynamics of the system, but not globally (as $X$ symplectic implies that $\iota_X\omega$ is closed, and so the Poincaré lemma says it is locally exact; hence $X$ is locally hamiltonian, i.e. its integral curves satisfy the equations of motion). AI: I will try to give a short (and surely partial) motivation. A bit of notation. Let $(M,\omega)$ be a symplectic manifold, with symplectic form $\omega$ and let $H:M\rightarrow \mathbb R$ be a smooth function (the hamiltonian). The hamiltonian vector field $X_H$ associated to $H$ is defined through $$i_{X_H}\omega=dH.$$ An example: $(M,\omega)=(S^2, d\theta\wedge dh)$, with hamiltonian $H(\theta,h):=h$ (the height function). Then $X_H=\frac{\partial}{\partial\theta}$. A symplectic vector field is a vector field $X$ on $M$ which preserves the symplectic form $\omega$, i.e. $\mathcal L_X\omega=0\Leftrightarrow i_X\omega$ is closed, by Cartan's magic formula. In summary, $X$ is hamiltonian iff $i_X\omega$ is exact, while $X$ is symplectic iff $i_X\omega$ is closed. In general, not all symplectic fields are hamiltonian: the obstruction is given by non trivial $H^1(M)$. So I would see the presence of non trivial diff. geometry of the phase space as a motivation for dealing with symplectic vector fields. For example, given the 2-torus $(M, \omega) = (T^2, d\theta_1\wedge d \theta_2)$ the vector field $X = \frac{\partial}{\partial\theta_1}$ is symplectic but not hamiltonian.
H: Inequality solving. $\frac{(x+2)^2}{x+1}<4$ I am trying to solve this inequality, but I always get the wrong result. This is how I did it. $$ \frac{(x+2)^2}{x+1}<4\\ (x+2)^2< 4(x+1)\\ (x+2)^2 < 4x+4\\ x^2+4x+4 < 4x+4\\ x^2+4x+4-4x-4 < 0\\ x^2<0\\ x<0 $$ I know that I should get $x<-1$ but I always get $0$. What is my mistake??? Thanks AI: It goes like this $$ \frac{(x+2)^2}{x+1}<4\\ (x+2)^2 < 4(x+1),\space x+1>0 \space or \space (x+2)^2 > 4(x+1),\space x+1<0\\ (x+2)^2 < 4x+4, \space x>-1 \space or \space (x+2)^2 > 4x+4, \space x<-1\\ x^2+4x+4 < 4x+4, \space x>-1 \space or \space x^2+4x+4 > 4x+4, \space x<-1\\ x^2+4x+4-4x-4 < 0, \space x>-1 \space or \space x^2+4x+4-4x-4 > 0, \space x<-1\\ x^2<0, x>-1 \space (impossible)\space or \space x^2>0, x<-1\\ x<-1 $$ EDIT: There is another method: $$ \frac{(x+2)^2}{x+1}<4\\ \frac{(x+2)^2}{x+1}-4 < 0\\ \frac{(x+2)^2-4x-4}{x+1} < 0\\ \frac{x^2+4x+4-4x-4}{x+1} < 0\\ \frac{x^2}{x+1}<0 $$ Since $x^2>0$ for any real $x \neq 0$, the sign depends only on $x+1$. When $x+1$ is negative ($x<-1$), the expression is negative, and when it is positive ($x>-1$) the expression is positive; so again the solution is $x<-1$.
H: Simple Algebra Equation I have a simple part of a question to solve. The problem is my answer is different to the solution in my textbook. The equation is: $$\frac{5v}{6} = \frac{(\frac{1}{2}a+b+\frac{1}{2} c)v}{a+b+c}$$ I am supposed to get $$\frac{2}{3}(a+b+c) = b$$ But I simply get: $$b=2a +2c$$ I get my answer by cross multiplying. I then use my answer to get $\frac{b}{a+b+c}$ as some fraction. I have not worked onto this stage as I am unsure about the above work. What am I doing wrong here? AI: $b=2a+2c$ Adding $2b$ on both sides gives $b + 2b =2a+2b+2c$ Or better $3b =2a+2b+2c$ And if you throw the $3$ to the other side you get: $b =\frac{2}{3}(a+b+c)$
H: If there are four 2's, three 1's and two 0's, in how many ways can you arrange them in a 9-digit number? If there are four 2's, three 1's and two 0's, in how many ways can you arrange them in a 9-digit number? Using permutations only. Show your answer is correct by counting it in three different ways and getting the same answer each time. I got: p(9,4)*p(5,3)*P(2,2) =9! p(9,2)*p(7,3)*P(4,4) =9! p(9,3)*p(6,4)*P(2,2) =9! Am I correct? AI: $\require{cancel}$ This is a standard problem for using the multinomial coefficient: $$\binom{9}{4, 3, 2} = \binom{9}{4}\cdot \binom 53\cdot \binom 22 = \dfrac{9!}{4!\cancel{5!}} \cdot \frac{\cancel{5!}}{3!2!} \cdot \frac{\cancel{2!}}{\cancel{2!}} = \frac{9!}{4!\,3!\,2!}=1260$$ We have multiple occurrences of some digits, which can permute without changing the final number. So we need to divide by the number of ways each repeated digit can be permuted.
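Both the multinomial formula and a brute-force count over distinct permutations give the same value; a small check of my own:

```python
from math import factorial
from itertools import permutations

digits = "222211100"
# multinomial coefficient 9! / (4! 3! 2!)
formula = factorial(9) // (factorial(4) * factorial(3) * factorial(2))
# count the distinct orderings directly
brute = len(set(permutations(digits)))
print(formula, brute)   # both 1260
```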
H: Determine the core-nilpotent decomposition for P. Let P be a projector different from the identity. Determine the core-nilpotent decomposition for P. I know that I can use the formula $Q^{-1} P Q$ and got the result $N =0$, but I don't know how to find $C$. AI: If $P$ is a projector, $P$ is annihilated by $X^2-X=X(X-1)$, a polynomial with distinct roots. So $P$ is diagonalizable, its only eigenvalues are $0$ ($s$ times) and $1$ ($t$ times) with $s+t=n$. We then have $C=I_t$, $N=0_s$.
H: Finding $a_n$ such that $x^n+a_1x^{n-1}+\cdots+a_{n-1}x+a_n$ cannot be factored when $a_1,\cdots,a_{n-1}$ given Let $n\ge 4\in\mathbb N$. Suppose that $a_1,a_2,\cdots,a_{n-1}$ are given integers. Then, here is my question. Question : Is the following true for any $(a_1,a_2,\cdots,a_{n-1})$ ? There exists a composite $a_n$ such that $f(x)=x^n+a_1x^{n-1}+a_2x^{n-2}+\cdots+a_{n-1}x+a_n$ cannot be factored using integer coefficients. Motivation : I've known the following theorem : Theorem : There exists such a prime number $a_n$ for any $(a_1,a_2,\cdots,a_{n-1})$. Then, I got interested in the case that $a_n$ is not a prime number. The answer must be YES, but I can't prove that. Can anyone help? In the following, I'm going to write the proof for the above theorem. Proof for theorem : Let $a_n$ be a prime number. If $f(x)$ can be factored, then we can write $$\begin{align}f(x)&=x^n+a_1x^{n-1}+\cdots+a_n\\ &=(x^m+b_1x^{m-1}+\cdots+b_m)\cdot (x^{n-m}+c_1x^{n-m-1}+\cdots+c_{n-m})\\ &=g(x)\cdot h(x)\ \ \ \ \ (|b_m|\le|c_{n-m}|)\end{align}$$ Since $b_m\cdot c_{n-m}=a_n=\text{a prime number}$, we know that $b_m=\pm 1.$ Letting $\alpha_1,\alpha_2,\cdots,\alpha_m$ be the solutions of $g(x)=0$, since we can write $g(x)=(x-\alpha_1)(x-\alpha_2)\cdots (x-\alpha_m),$ we know that $(-1)^m\alpha_1\alpha_2\cdots\alpha_m=b_m.$ If $|\alpha_1|\gt 1,|\alpha_2|\gt 1,\cdots,|\alpha_m|\gt 1$, then $|b_m|=|\alpha_1||\alpha_2|\cdots |\alpha_m|\gt 1,$ which is a contradiction. Hence, we know that there exists $\alpha$ such that $f(\alpha)=0, |\alpha|\le 1$. Hence, we get $$\begin{align}|a_n|&=|{\alpha}^n+a_1{\alpha}^{n-1}+\cdots+a_{n-1}\alpha|\\ & \le |\alpha|^n+|a_1|\cdot |\alpha|^{n-1}+\cdots+|a_{n-1}|\cdot |\alpha|\\ &\le 1+|a_1|+\cdots+|a_{n-1}|\end{align}$$ Hence, we can take a prime number which is larger than $1+|a_1|+\cdots+|a_{n-1}|$ as $a_n$ in order that $f(x)$ cannot be factored using integer coefficients. We now know that the proof is completed. AI: Well, you can easily modify your proof to get a composite $a_n.$ The only difference you will get is that you need to take $|a_n|$ larger. Namely, take $a_n=2p$ where $p$ is a very large prime and run the same proof. Instead of choosing a root with $|\alpha|\le 1$ you will be able to choose a root with $|\alpha|\le 2$, and get the different bound $|a_n|\le 2^{n}+2^{n-1}|a_{1}|+\cdots+2|a_{n-1}|$.
H: Bijectivity of set sequences I've got this homework problem to prove in my introductory analysis course ... and right now, I really have no idea how to even go about that (and as such, don't really know the right questions to ask). Could you guys maybe give me few hints in the right direction? Problem 4: Define $\mathbb{N}_0 := \mathbb{N} \cup 0$ and $b \ge 2$. Let $$ M_b := \{ (a_1, a_2, ...)|\forall i \in \mathbb{N}: a_i \in \mathbb{N}_0 \land a_i < b \}. $$ A sequence $(a_n)_{n \in \mathbb{N}} $ is called eventually constant if there is a $n_0 \in \mathbb{N}$ so that $a_n = a_{n_0}$ for all $n \ge n_0$. Also, let $$ X_b := \{ (a_n)_{n \in \mathbb{N}} \in M_b |a_n \text{ is not eventually constant in } b-1 \}, $$ ie $(a_n)_{n \in \mathbb{N}} \in X_b$ if and only if $\forall m \exists n>m:a_n \neq b-1$. Please show that $$ f: M_b \to \mathbb{R} \cup \{\infty\}, (a_n)_{n \in \mathbb{N}} \mapsto \sup\{\frac{a_1}{b}+...+\frac{a_n}{b^n}|n \in \mathbb{N}\} $$ maps the set $X_b \subset M_b$ bijectively to [0,1). Please forgive my bad translation from German. Thanks in advance! :) AI: Perhaps it would help to think about a specific case, $b=10$. Consider a number $x$ in $[0,1)$ whose ordinary decimal expansion is $0,a_1a_2a_3\ldots$. Clearly $a_k\in\{0,1,\ldots,9\}$ for each $k\in\Bbb Z^+$, so $\langle a_1,a_2,a_3,\ldots\rangle\in M_{10}$, and $$x=\sum_{k\ge 1}\frac{a_k}{10^k}=\sup_{n\in\Bbb Z^+}\sum_{k=1}^n\frac{a_k}{10^k}\;.$$ $X_{10}$ corresponds to the set of decimal expansions that do not end in an infinite string of $9$’s, so it contains $\langle 5,0,0,0,\ldots\rangle$ but not $\langle 4,9,9,9,\ldots\rangle$. In terms of ordinary notation, it contains the representation $0,5000\ldots$ of $\frac12$ but not the representation $0,4999\ldots$. If you can prove the result for a specific $b$, you should be able to modify your argument fairly easily to handle the general case.
H: Homework - algebra, find constants The question is as follows, I think I solved it partially: Show that there are $a,b$ real positive numbers such that $an^7 \leq \frac{n!}{7!(n-7)!} \leq bn^7$ for $7\leq n$. My solution for b: $\frac{n!}{7!(n-7)!} \leq \frac{n!}{(n-7)!} = (n-6)(n-5)(n-4)(n-3)(n-2)(n-1)(n) \leq n\cdot n\cdot n\cdot n\cdot n\cdot n\cdot n = n^7$ so $b=1$ is a good answer for b. But for a... I'm stumped. AI: For $n\ge7$ we have $n-k\ge n/7$ for $0\le k\le6$, so $$\frac{n!}{7!(n-7)!} =\frac{(n-6)(n-5)(n-4)(n-3)(n-2)(n-1)n}{7!}\ge \frac{n^7}{7!7^7}$$ $$a=\frac{1}{7!7^7}$$
H: Congruence Equation x/20 = 7 ( mod 5) please tell me how to solve this equation : x/20 = 7 ( mod 5 ) I tried too many methods but it does not work. AI: It depends on where you are looking for $x$. Most likely you search for $x\in\Bbb Z$ when you are doing congruence relations; in that case $x$ must be divisible by$~20$ so that $x/20$ will be integer. Now multiply the whole equation by$~20$ to get $x\equiv 140\pmod{100}$ which gives the solution; it can also be written $x\equiv 40\pmod{100}$. Indeed $x=40$ satisfies $x/20=2\equiv7\pmod5$ and also for instance $x=-60$ gives $x/20=-3\equiv7\pmod5$.
H: Are integrable, essentially bounded functions in L^p? Given an arbitrary measure space (of possibly infinite measure), if $f \in L^1 \cap L^\infty$, then by Hölder's inequality, $f^2 \in L^1$, so $f \in L^2$. Intuition suggests that $f \in L^p$ even for any $1 \le p \le \infty$ (since we have eliminated the only two things that can go wrong for $f$ to be in $L^p$; blow-up & non-decay). This does not seem to follow from the common inequalities, hence my question: Is it true that $L^1 \cap L^\infty \subset L^p$ in general, and if so how can I prove it? Many thanks in advance for any hints! AI: Yes, it's true. Let us define $A = \{x : \lvert f(x)\rvert > 1\}$. Then $\mu(A) \leqslant \int_A \lvert f(x)\rvert\,d\mu \leqslant \lVert f\rVert_1$, and hence $$\int_X \lvert f(x)\rvert^p\,d\mu = \int_A \lvert f(x)\rvert^p\,d\mu + \int_{X\setminus A} \lvert f(x)\rvert^p\,d\mu \leqslant \lVert f\rVert_\infty^p\cdot \lVert f\rVert_1 + \lVert f\rVert_1 < \infty$$ since $\lvert f(x)\rvert^p \leqslant \lvert f(x)\rvert$ on $X\setminus A$.
H: Trigonometry prove question I am new to this website so please forgive me for my mistakes. I have a question of trigonometry to prove, and it is as follows (I don't know how to write the theta symbol, sorry for it): $(1-\sin \theta)/(1-\sec \theta) = 2\cot \theta(\cos \theta- \csc\theta)$ Thanks in advance!!! AI: Sometimes it helps to put everything in terms of $\sin\theta, \cos\theta$ first: $$\frac{1-\sin \theta}{1-\sec \theta} = \frac{1-\sin \theta}{1 - \frac{1}{\cos\theta}} = \frac{(1 - \sin\theta)\cos\theta}{\cos\theta - 1} $$ $$\overset{?}= \quad2\cot \theta(\cos \theta- \csc\theta) = 2\frac{\cos \theta}{\sin\theta}\left(\cos \theta - \frac{1}{\sin\theta}\right) = (\sin(2\theta) - 2)\cot \theta \csc\theta$$ In truth, the first expression and the second expression are not equivalent: the claimed identity fails for most $\theta$ (for instance, at $\theta=\pi/4$ the left side equals $-\frac{\sqrt2}{2}$ while the right side equals $-\sqrt2$), so there is nothing to "prove" as stated. Now, if the question asked you to solve the equation — to find the values of $\theta$ for which the two sides are equal — that we can do: clearing denominators in the simplified forms above, the equation reduces to $\cos\theta = 0$, i.e. $$\theta = \frac{\pi}{2} + \pi n,\;\; n \in \mathbb Z$$ (although at those values $\sec\theta$ itself is undefined, so strictly speaking the two sides only agree there in their simplified forms).
H: Are these statements correct? $A \subseteq f^{-1} \circ f(A)$ and $ f \circ f^{-1}(B) \subseteq B$ We wrote these two statements in class: $A \subseteq f^{-1} \circ f(A)$ $ f \circ f^{-1}(B) \subseteq B$ where $A$ and $B$ are sets and $f(A)= \lbrace f(x):x \in A \rbrace $ and $f^{-1}(B)= \lbrace x:f(x) \in B \rbrace $ My question is shouldn't we have equality for both statements? And why? I drew to gain intuition but it didn't work Any help would really be appreciated. Thank you! AI: Both of these inclusions are correct, but neither can be replaced by an equality in the general case. For a counterexample, consider $f: \mathbb{N} \to \mathbb{N}$ defined as $f(n) = 1$ for every $n \in \mathbb{N}$, $A=\{1\}$ and $B=\mathbb{N}$. Both inclusions in this example are proper. UPDATE: drawing can actually help you in a situation like that. But it also helps to keep in mind that counterexamples are more likely to be found when $f$ is not injective and/or surjective. The first inclusion would in fact be an equality if $f$ were injective, but when injectivity fails, many elements outside of $A$ can map into $f(A)$ under $f$. In a similar fashion, if $f$ were surjective, then the second inclusion would be an equality. But if $f$ isn't surjective then it's possible to have a situation when some element in $B$ doesn't have a preimage, and then $f(f^{-1}(B))$ just cannot cover all of $B$. UPDATE 2: one more thing that can help gain some intuition. It may be helpful to formulate what sets $f^{-1}(f(A))$ and $f(f^{-1}(B))$ are in "human" terms. Namely, $f^{-1}(f(A))$ is actually "all the elements in $A$ and also all those who are glued with them under $f$". The second set $f(f^{-1}(B))$ can be described as "all those elements in $B$ that have a preimage". If you convince yourself that these human-readable definitions are equivalent to the formal ones, then it will also become clear why exactly the inclusions in the original question are only one way and not the other.
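To make the counterexample above concrete, here is a small Python sketch (illustrative only, with $\mathbb N$ truncated to a finite range) that computes $f(A)$, $f^{-1}(f(A))$, $f^{-1}(B)$ and $f(f^{-1}(B))$ for the constant map and shows that both inclusions are proper.

```python
# Constant map f(n) = 1 on a finite slice of N (the truncation is just for illustration).
domain = set(range(1, 11))
f = lambda n: 1

def image(S):
    return {f(x) for x in S}

def preimage(S):
    return {x for x in domain if f(x) in S}

A = {1}
B = set(domain)

print(image(A))               # {1}
print(preimage(image(A)))     # {1, 2, ..., 10}  -- strictly larger than A
print(image(preimage(B)))     # {1}              -- strictly smaller than B
```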
H: Myhill-Nerode Theorem with constraint I am trying to understand the Myhill-Nerode theorem with the example $L = \{0^i1^j \mid j > i\}$. I have read some articles but still cannot fully understand it. What I know is that I have to choose strings $x$ and $y$ over $\Sigma$ and find a string $z$ that the language distinguishes them by; if there are infinitely many pairwise distinguishable strings, then the language is nonregular. But I don't know how to do this with the constraint $j > i$; can you demonstrate it on this example? AI: Just consider the sequence $x_n = 0^n, n \ge 0$ and prove that $ x_i \not\equiv_L x_j$ for every $i \ne j $. You can do this by finding a word $z$ such that $ x_iz \not \in L $ and $ x_jz \in L $, or vice versa. For example, if $i < j$ just take $z = 1^j$ and notice that $ x_iz \in L $ and $ x_jz \not \in L $. So you can conclude that all elements of the sequence are pairwise $L$-nonequivalent, and hence the language is not regular since the number of equivalence classes is not finite. For more info check this question.
H: Find the volume using the triple integral method Find the volume of a solid bounded by: $z=0$, $x^2+2y^2=2$, and $x+y+2z=2$ I got this triple integral: $$\int_{-1}^1\int_{-\sqrt{2-2y^2}}^\sqrt{2-2y^2}\int_0^{1-x/2-y/2}dzdxdy$$ I think it's wrong because I keep getting a negative value. I'd appreciate any help, thanks! AI: Your integral is correct, but there may be some computational mistakes (and I may make some below too). The integral reduces to a double integral $$\int_{-1}^1\int_{-\sqrt{2-2y^2}}^\sqrt{2-2y^2}(1-x/2-y/2)dxdy=\int\int_{D}(1-x/2-y/2)dxdy,$$ where $D: x^2+2y^2\leq 2$. To evaluate this integral we can make use of a change of variables: first let $x=\sqrt{2}u,\, y=v$, so the Jacobian is $J=\sqrt{2}$. Then $\int\int_{D}(1-x/2-y/2)dxdy=\displaystyle\int\int_{u^2+v^2\leq 1}\sqrt{2}\big(1-\sqrt{2}u/2-v/2\big)dudv$. Now we use polar coordinates $u=r\cos\theta$, $v=r\sin\theta$, where $0\leq r\leq 1$ and $0\leq\theta\leq 2\pi$. Thus we have $\displaystyle\int\int_{u^2+v^2\leq 1}\sqrt{2}\big(1-\sqrt{2}u/2-v/2\big)dudv=\displaystyle\int_0^{2\pi}\int_0^1\sqrt{2}\big(1-\sqrt{2}r\cos\theta/2-r\sin \theta/2\big)rdrd\theta=\sqrt{2}\pi.$ I don't claim this is the shortest solution.
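For a numerical cross-check of the value $\sqrt{2}\,\pi \approx 4.443$, a quick Monte Carlo estimate of the double integral over the ellipse works; this is only an illustrative sketch, not part of the original solution.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000

# Sample uniformly in the bounding box of the ellipse x^2 + 2y^2 <= 2.
x = rng.uniform(-np.sqrt(2), np.sqrt(2), N)
y = rng.uniform(-1.0, 1.0, N)
inside = x**2 + 2 * y**2 <= 2

box_area = 2 * np.sqrt(2) * 2.0
estimate = box_area * np.mean(np.where(inside, 1 - x / 2 - y / 2, 0.0))

print(estimate, np.sqrt(2) * np.pi)  # both close to 4.443
```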
H: Discrete Math need some help! I'm taking discrete math course now and need some help on this question. THX!! T/F or unknown? There is a function that is both $O(n^2)$ and $\Omega(n^3)$. Given two functions $f(n)$ and $g(n)$, if $g(n) = O(f(n))$ and $f(n) = O(n^2)$, then $g(n) = O(n^5)$. Given two functions $f(n)$ and $g(n)$, if $g(n) = O(f(n))$ and $f(n) = O(n^2)$, then $g(n) = \Omega(n)$. I think the first one is false, but not sure about other two. Can anyone help me? AI: You’re right about the first one; I’ll prove it. Then I’ll point you in the right direction for the other two; see if you can finish them on your own, but if you get stuck, feel free to leave a comment. Suppose that $f(n)$ is both $O(n^2)$ and $\Omega(n^3)$. Then there are positive constants $c_1$ and $c_2$ and an $m\in\Bbb N$ such that $|f(n)|\le c_1n^2$ and $|f(n)|\ge c_2n^3$ for all $n\ge m$. But then for all $n\ge m$ we must have $$c_2n^3\le|f(n)|\le c_1n^2$$ and hence $c_2n\le c_1$, which is clearly false when $n>\frac{c_1}{c_2}$. Thus, there is no such $f(n)$: you were correct. The hypotheses imply that there are $c_1,c_2>0$ and $m\in\Bbb N$ such that $|g(n)|\le c_1|f(n)|$ and $|f(n)|\le c_2n^2$ for all $n\ge m$. Can you find an inequality relating $|g(n)|$ to $n^2$? What about to $n^5$? What if $f$ and $g$ are both constant functions?
H: Evaluate the integral of a function defined by an infinite series I need to evaluate $\,\,\,\displaystyle \int \limits_0^{2\pi} \! \sum\limits_{n=1}^\infty \dfrac{\sin nx}{n^3} \, \mathrm{d}x$ and $\displaystyle\int \limits_0^{\pi} \! \sum\limits_{n=1}^\infty \dfrac{\cos nx}{n^2} \, \mathrm{d}x$ I have already proved that the infinite series' are continuous and that the derivative of the first is equal to the second. I'm not sure how to use that information to evaluate the integrals however. AI: Say $$\int\limits_0^{2\pi}\sum_{n=1}^\infty\frac{\sin nx}{n^3}dx=\sum_{n=1}^\infty\frac1{n^3}\int\limits_0^{2\pi}\sin nx\,dx=\sum_{n=1}^\infty\frac1{n^3}\left(\left.-\frac1n\cos nx\right|_0^{2\pi}\right)=$$ $$=\sum_{n=1}^\infty\frac1{n^3}\cdot 0=0$$ Can you explain the above?
H: Harmonic and Continuous everywhere but on a curve is harmonic throughout? Suppose u is a harmonic function everywhere in a domain $\Omega$, but on a curve inside $\Omega$ , say a segment, and is continuous throughout, i.e $u\in C(\Omega)$. Can we conclude that u is harmonic throughout $\Omega$ ? AI: I am not totally sure if the following example satisfies all the conditions you require, since I am not sure about the precise meaning of " ... but on a curve inside ...", but here we go: Take the domain $\Omega$ to be the open unit disc $\{ z : |z| < 1 \}$ in $\mathbb{C}$. Define $u(z)$ to equal $y$ on the subdomain $\{ x+iy \in \Omega : y>0 \}$, and define $u(z)$ to be 0 on the other half of the domain, i.e. on the subdomain $\{ x+iy \in \Omega : y \le 0 \}$. Clearly the function is harmonic in the 2 subdomains, and is continuous on the whole of $\Omega$, since the two definitions "match up" on the "curve" $y=0$. We can see that on the whole domain $\Omega$, the function $u$ fails to be harmonic since at any point on the line $y=0$ it fails to satisfy the mean value condition for harmonic functions. So, the question is: does this example satisfy your conditions? Edit: I am still unsure of the precise conditions of the question, but here is another possible counter-example. Take $\Omega$ to be the open disc $\{ |z| \in \mathbb{C} : |z| < 2\}$, and take the closed curve inside $\Omega$ to be the unit circle $|z|=1$. Then define $u(z)$ to equal $0$ in the subdomain $\{ |z| < 1\}$ and to equal $\log|z|$ on the remainder of $\Omega$ i.e. in $\{ 1 \le |z| < 2\}$. Again, $u$ is continuous on the whole domain and harmonic on each subdomain, but fails the mean value condition on the closed curve.
H: Linear dependence on union of intervals Suppose I have two functions $f,g$ which I want to exam for linear dependence. If I can conclude they are dependent on $(a,b] $ and $ [b,c)$ is it possible to conclude that they are dependent on $(a,b] \cup [b,c) = (a,c)$ ? For example $f= t^3$, $g=|t^3|$ which are clearly dependent on $(-\infty, 0]$ and $[0,\infty)$, but when examining the whole line, I can only solve for nonzero constants in a piecewise manner since the two constant vectors are orthogonal to one another. Feels like im missing something obvious here..Thanks. AI: You've almost answered that yourself: Your $f,g$ are (clearly) linearly independent. None of the two functions is a multiple of the other. A sufficient condition that would allow to conclude that $f,g$ are dependent on $(a,c)$ if they are known to be dependent on $(a,b]$ and $[b,c)$ would be that $f(b)$ and $g(b)$ are nonzero: In that case we have the linear dependency $g(b)\cdot f-f(b)\cdot g=0$ on $(a,b]$ and $[b,c)$ and all of $(a,c)$.
H: About described DFA I need to find a DFA (or NFA, $\epsilon$-NFA, it's not important; I know how to convert between them) that accepts all strings of $0$'s and $1$'s such that every block of five consecutive symbols contains at least two $0$'s. This is exercise 2.5 c) from Hopcroft's Introduction to Automata Theory. Could someone also explain and help me build a regular expression from this (I'm still learning)? AI: Hint: Try the opposite: find a DFA that accepts all strings of 0's and 1's such that there exists a block of five consecutive symbols containing one or zero 0's. And then build the complement of this DFA. Added: There was a typo: "every block" -> "there exists a block". Sorry.
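For experimenting before building the automaton, a direct membership test is handy. The sketch below is an illustration (it assumes strings shorter than five symbols are accepted vacuously, which is a reading of the exercise, not something stated in it) and checks the language and the complement from the hint against each other on all short strings.

```python
from itertools import product

# L: every block of five consecutive symbols contains at least two 0's.
# Strings shorter than five have no such block, so they are accepted vacuously
# (an assumption about the exercise's intent).
def in_L(w):
    return all(w[i:i + 5].count('0') >= 2 for i in range(len(w) - 4))

# The complement used in the hint: SOME block of five symbols has at most one 0.
def in_complement(w):
    return any(w[i:i + 5].count('0') <= 1 for i in range(len(w) - 4))

for n in range(10):
    for w in map(''.join, product('01', repeat=n)):
        assert in_L(w) != in_complement(w)   # the two languages partition {0,1}*

print(in_L('0011100110'), in_L('0111010'))   # True False
```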
H: Bounded, divergent series with terms approaching zero Is there an example, or proof that one cannot exist, of a sequence of real numbers $a_n$ such that (1) $a_n\rightarrow 0$ (perhaps non-monotonically), and (2) the sequence of partial sums $\sum_1^N a_n$ are uniformly bounded, but the sum $\sum_1^\infty a_n$ diverges? Condition (1) prevents things like $a_n = (-1)^n$, while condition (2) prevents things like $a_n = 1/n$. AI: Try $$a_n=\sin{\sqrt{n+1}}-\sin{\sqrt n}.$$ The partial sums telescope to $\sin\sqrt{N+1}-\sin 1$, so they are bounded but have no limit, and $a_n\to 0$ because, by the mean value theorem, $|a_n|\le\sqrt{n+1}-\sqrt{n}\to 0$.
H: Prove that if $\int f^2$ and $\int( f'')^2$ converge, so does $\int (f')^2$ Question: Let $f: [a,\infty) \to \Bbb R \in C^2$ and the two following integrals converge: $$\int _a^\infty (f''(x))^2\,dx ,~~~~~~~~~ \int _a^\infty (f(x))^2\,dx$$ Prove that $\int _a^\infty (f'(x)^2)\,dx$ converges as well. What we tried: Taylor expansion, Lagrange mean value theorem, integration by parts, comparison test, limit comparison test, none really helped us get there... AI: Hint 1: Integration by parts gives $$ \begin{align} \int_a^\infty f'(x)^2\,\mathrm{d}x &=\int_a^\infty f'(x)\,\mathrm{d}f(x)\\ &=\lim_{b\to\infty}f'(b)f(b)-f'(a)f(a)-\int_a^\infty f(x)f''(x)\,\mathrm{d}x\tag{1} \end{align} $$ Hint 2: As Giraffe points out, if $\int_a^\infty f'(x)^2\,\mathrm{d}x$ diverges, then by $(1)$, $\lim\limits_{x\to\infty}f'(x)f(x)=\infty$. Since $$ f(b)^2=f(a)^2+2\int_a^bf'(x)f(x)\,\mathrm{d}x\tag{2} $$ we get that $\int_a^\infty f(x)^2\,\mathrm{d}x$ diverges. By the comments, this seems to be a bit more involved than befits a hint, so I will explain in more detail. Note that what follows is not needed due to Hint 2, but the ideas used are more generally applicable, so I will leave it. Claim 1: $\displaystyle\lim_{x\to\infty}f'(x)=0$ Proof: Suppose not; then, for some $\epsilon\gt0$ and all $x_0$, there is an $x\ge x_0$ so that $|f'(x)|\ge\epsilon$. Since $\|f''\|_{L^2}\lt\infty$, we can choose a $b$ so that $$ \int_b^\infty f''(x)^2\,\mathrm{d}x\le\epsilon^4\tag{1} $$ Then, for any $x,y\ge b$ so that $|x-y|\le1$, Cauchy-Schwarz says $$ \begin{align} |f'(x)-f'(y)| &\le\int_x^y|f''(t)|\,\mathrm{d}t\\ &\le\left(\int_x^y|f''(t)|^2\,\mathrm{d}t\right)^{1/2}|x-y|^{1/2}\\[9pt] &\le\epsilon^2\tag{2} \end{align} $$ For any $x_0\ge b+1$, we can choose an $x\ge x_0$ so that $|f'(x)|\ge\epsilon$. If $f'(x)$ and $f(x)$ have the same sign, let $I=[x,x+1]$, otherwise, let $I=[x-1,x]$. By $(2)$, for $t\in I$, $|f'(t)|\ge\epsilon-\epsilon^2$ and $|f(t)|\ge\left(\epsilon-\epsilon^2\right)|t-x|$. Thus, $$ \int_If(x)^2\,\mathrm{d}x\ge\frac13\left(\epsilon-\epsilon^2\right)^2\tag{3} $$ By supposition, we can find infinitely many points so that $|f'(x)|\ge\epsilon$. Therefore, $$ \int_b^\infty f(x)^2\,\mathrm{d}x\quad\text{diverges}\tag{4} $$ giving us a contradiction. QED Claim 2: $\displaystyle\lim_{x\to\infty}f(x)=0$ Proof: Suppose not; then, for some $\epsilon\gt0$ and all $x_0$, there is an $x\ge x_0$ so that $|f(x)|\ge\epsilon$. Since $\displaystyle\lim_{x\to\infty}f'(x)=0$, we can choose a $b$ so that for all $x\ge b$, $$ |f'(x)|\le\epsilon^2\tag{5} $$ Then, for any $x,y\ge b$ so that $|x-y|\le1$, the Mean-Value Theorem says $$ \begin{align} |f(x)-f(y)| &\le\max_{t\in[x,y]}|f'(t)||x-y|\\ &\le\epsilon^2\tag{6} \end{align} $$ For any $x_0\ge b+1$, we can choose an $x\ge x_0$ so that $|f(x)|\ge\epsilon$. For any $t\in[x-1,x+1]$, $|f(t)|\ge\epsilon-\epsilon^2$. Thus, $$ \int_{x-1}^{x+1}f(t)^2\,\mathrm{d}t\ge2\left(\epsilon-\epsilon^2\right)^2\tag{7} $$ By supposition, we can find infinitely many points so that $|f(x)|\ge\epsilon$. Therefore, $$ \int_b^\infty f(x)^2\,\mathrm{d}x\quad\text{diverges}\tag{8} $$ giving us a contradiction. QED
H: Find $ \lim\limits_{x \to 0^{+}} x^x $ My guess we will have to reformulate the problem in order to be able to use L'Hopital's Rule. Could you give me a hint? Thank you. AI: Hint: Try $x^x=e^{\ln x^x}= e^{x \ln x}$.
H: Mean vs Expected Value I have a probability distribution like so, $$f(x)=\begin{cases} 1/(2x) & x\geq 2\\ 11/40 & x=1 \end{cases} $$ The question asks to "find the mean for X", which I calculate like so: $\bar{x}= 1/n \cdot \displaystyle\sum\limits_{i=1}^6 x$ Which gives me, 21/6=3.5. However, the answer to question is 2.775, which is the expected value of the above function. Am I misinterpreting the question incorrectly? Or is the solution incorrect? AI: $$ \mathbb E(X) = \sum_{x=1}^6 x f(x) = \frac{11}{40}\cdot 1 + \sum_{x=2}^6 x \frac{1}{2x}. $$
H: Krull dimension of this local ring I want to know what the Krull dimension of this ring $\mathbb C[x,y]_p/(y^2-x^7,y^5-x^3)$ is, where $p\neq (0,0)$. I know its dimension at the origin, but I don't know the other cases. AI: Since $y^2-x^7,y^5-x^3$ are irreducible polynomials (why?) they form a regular sequence in $\mathbb C[x,y]$, and hence also in $\mathbb C[x,y]_{\mathfrak m}$ for any maximal ideal $\mathfrak m$ containing them. Then the dimension of $\mathbb C[x,y]_{\mathfrak m}/(y^2-x^7,y^5-x^3)$ is $\dim\mathbb C[x,y]_{\mathfrak m}-2$, that is, $0$. If $\mathfrak m$ doesn't contain one or both of the polynomials $y^2-x^7,y^5-x^3$, then the quotient ring is $0$.
H: First Order linear differential equation problem I need help solving this DE: $(\cos\theta )v'+v=3$ I really was only able to get it here: $v'+(\sec\theta\ )v=3\sec\theta $ $I=e^{\int\sec\theta\ d\theta}$ Any help would be appreciated. Thanks AI: Hint: It is separable. Write it as: $$\dfrac{dv}{d\theta} = (3-v) \sec \theta$$ Now you can integrate both sides as: $$\int \dfrac{1}{3 - v}~dv = \int \sec\theta~ d\theta$$ Can you take it from here?
H: Example on relative homology I am trying to prove that $$H_p(B_{n+1},S_n;\mathbb{A}) \cong \left\{\begin{array}{ll} H_{p-1}(S_n,\mathbb{A}) & \text{if } p\geq2\\\ 0&\text{if } p=1, n\geq 1\\ \mathbb{A} &\text{if } p=1, n=0\\0 & \text{if } p=0 \end{array}\right.$$ For $p=0$ that's ok but I don't understand how to prove the case when $n=0$ or $n\geq 1$. AI: Recall that for a pair $(X,A)$ with $A$ a subspace of $X$, we get a long exact sequence in homology $$\cdots\to H_p(A)\to H_p(X)\to H_p(X,A)\to H_{p-1}(A)\to H_{p-1}(X)\to\cdots$$ over the given coefficient ring. Next, recall that $B^{n+1}$ is contractible for all $n$ and so $H_p(B^{n+1})=0$ for all $p\geq 1$. Also, $H_0(X)=\oplus_{i=1}^k\mathbb{A}$ for all $X$ with $k$ path components. Also, remember that if $A_1\to A_2\stackrel{f}{\to} A_3\to A_4$ is an exact sequence of groups, then $A_1=0=A_4$ if and only if $f$ is an isomorphism. Finally, recall that for the pair $(X,A)$, if there exists a neighbourhood $U\subset X$ of $A$ such that $A$ is a deformation retract of $U$, then $H_p(X,A)\cong \tilde{H}_p(X/A)$ where $\tilde{H}$ denotes reduced homology.
H: MLE of fourth moment of normal distribution Take $X\sim N(0,\theta)$, and let $\phi = E(X^4)$, the fourth moment. What is its MLE, $\hat{\phi}$, and what is the asymptotic distribution of $\sqrt{n}(\hat{\phi} - \phi) $ as $n\to \infty$? Any help with this question would be appreciated, as I really can't think of where to start!! Many thanks. I have integrated the expression for $E(X^4)$ by parts three times to get $\phi = 6\theta^3$, which gives $\hat{\phi} = 6\hat{\theta}^3$ as the function is continuous (is this correct?). However, I am not sure what the MLE of $\theta$ should be, and what that makes the asymptotic distribution $\sqrt{n}(\hat{\phi} - \phi) $. AI: I assume you mean $\theta=\operatorname E(X^2)$. The fourth moment is $$ \operatorname E(X^4) = 3\theta^2. $$ If you can find the MLE $\hat\theta$ for $\theta$, then the MLE for $3\theta^2$ is just $3\hat\theta^2$. Something useful to know about MLEs is that if $g$ is a function, and which function $g$ is does not depend on any parameters being estimated, then the MLE of $g(\alpha)$ is $g(\hat\alpha)$ where $\hat\alpha$ is the MLE of $\alpha$. (The asymptotic distribution is something I'll look at later. Probably the delta method will work.) PS: I see you're saying you got $6\theta^3$. Having a third power there doesn't make sense. Let $\varphi$ be the standard normal density, so the standard normal distribution is $\varphi(x)\,dx$. Then the $N(0,\theta)$ distribution (assuming you mean by that that the variance, not the standard deviation, is $\theta$) is $$ \varphi\left(\frac{x}{\sqrt{\theta}}\right)\,\frac{dx}{\sqrt{\theta}}. $$ So \begin{align} \operatorname E(X^4) & = \int_{-\infty}^\infty x^4 \varphi\left(\frac{x}{\sqrt{\theta}}\right)\,\frac{dx}{\sqrt{\theta}} = \theta^2 \int_{-\infty}^\infty \frac{x^4}{\theta^2} \varphi\left(\frac{x}{\sqrt{\theta}}\right)\,\frac{dx}{\sqrt{\theta}} \\[10pt] & = \theta^2 \int_{-\infty}^\infty u^4 \varphi(u)\,du = \theta^2 \operatorname E (Z^4) \end{align} where $Z\sim N(0,1)$. So it has to be proportional to $\theta^2$. PPS: OK, let's look at the integral \begin{align} & \phantom{={}}\int_{-\infty}^\infty z^4\varphi(z)\,dz = 2\int_0^\infty z^4\varphi(z)\,dz \\[8pt] & = \frac{2}{\sqrt{2\pi}}\int_0^\infty z^3 \exp(-z^2/2)\Big(z\,dz\Big) \\[8pt] & = \frac{2}{\sqrt{2\pi}} \int_0^\infty \sqrt{2w\,{}}^3 \exp(-w)\,dw \\[8pt] & = \frac{2}{\sqrt{2\pi}}\sqrt{2}^3\Gamma(5/2) = \frac{2}{\sqrt{2\pi}}\sqrt{2}^3\cdot\frac12\cdot\frac32\cdot\Gamma(1/2) \\[8pt] & = \frac{2}{\sqrt{2\pi}}\cdot\sqrt{2}^3\cdot\frac12\cdot\frac32\cdot\sqrt{\pi} \\[10pt] & = 3. \end{align}
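Since the answer defers the asymptotic distribution to the delta method, here is an illustrative simulation. It assumes that $\theta$ is the variance, that the MLE (with the mean known to be $0$) is $\hat\theta=\frac1n\sum_i X_i^2$, and that the delta method then gives $\sqrt n(\hat\phi-\phi)\to N(0,\,72\theta^4)$, since $\sqrt n(\hat\theta-\theta)\to N(0,2\theta^2)$ and $g(t)=3t^2$ has $g'(\theta)=6\theta$. None of this is spelled out in the original answer, so treat it as a sketch to be verified.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.0                      # assumed: theta is the variance of X ~ N(0, theta)
n, reps = 1_000, 5_000

x = rng.normal(0.0, np.sqrt(theta), size=(reps, n))
theta_hat = (x**2).mean(axis=1)  # assumed MLE of theta (mean known to be 0)
phi_hat = 3 * theta_hat**2       # plug-in MLE of E[X^4] = 3 theta^2
phi = 3 * theta**2

z = np.sqrt(n) * (phi_hat - phi)
print(z.var(), 72 * theta**4)    # sample variance should be close to 1152
```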
H: Entire function with positive real part is constant (no Picard) A problem asks to show that an entire function on $\mathbb{C}$ with positive real part must be a constant. I spoke to a professor, and asked why not just use the Picard theorem. He said that we should try to aim the solution at the level of the problem, and that Picard was a little too high-powered for this problem. How would I solve it in the absence of Picard's theorem? A related question: suppose we have an isolated singularity, near which $\Re (f)$ (alternately, $\Im (f)$) is bounded. How can we show the singularity is removable? Yet another related problem: why is a positive harmonic function on Rn a constant? Mean value property seems not to be the way... AI: Hint: a fractional linear transformation (aka Möbius transformation) takes a half-plane to a disk.
H: $f(x)=\sum_{n=0}^{+ \infty} \frac{(-1)^n}{(n!)^2}\left( \frac{x}{2}\right)^n $ is continuous Let \begin{align} f: \begin{cases} \mathbb{R} &\longrightarrow \mathbb{R} \\ x & \longmapsto \displaystyle \sum_{n=0}^{+ \infty} \frac{(-1)^n}{(n!)^2}\left( \frac{x}{2}\right)^n\end{cases} \end{align} Show that: (i) The function is well defined (ii) the function is continuous This is my first time I try to solve such a problem, especially (ii) is a concept I cannot grasp yet. For (i) I was told that I simply need to show that the function converges, if it wouldn't converge then it would not be considered as well defined. It is easy to show that the function converges using the d'Alembert Criteria (also known as the ratio test). For the sake of this question (and since it might be important later I will do this with some abbreviations). \begin{align} \text{Let } u_n = \frac{(-1)^n}{(n!)^2}\left( \frac{x}{2}\right)^n, \ \text{such that } \frac{u_{n+1}}{u_n}=-\frac{1}{(n+1)^2} \left(\frac{x}{2} \right) \\ \implies \left(\frac{x}{2} \right)\lim_{n \to + \infty} - \frac{1}{(n+1)^2}=0 \end{align} So the function converges for all $x \in \mathbb{R}$ For (ii) I am stuck, to be brief about my problem is that I don't know about a sufficient way to show how a function is continuous. I know the definition of continuity at a given point $x_0$ and it is easy to grasp for me that if I can generalize this for all $x_0$ in the domain, then the function must be continuous. In Analysis 1 by Zorich continuous functions are demonstrated on some very easy examples such as $f(x)=x$ and some of the trigonometric functions. In the hints given by my tutor he recommends to use uniform convergence, but I don't see the link between uniform convergence and a function $f$ being continuous everywhere. Maybe if it would be possible to get me started on this problem with some basic steps, or trying to elaborate the intuition behind this problem, I could complete it on my own. AI: Let's define $g_m(x)=\sum_{k=0}^m \frac{(-1)^k}{(k!)^2} \left( \frac{x}{2} \right)^k$. Of course $g_m$ is continuous, and it tends to $f$ pointwise. If we could show, that this convergence is uniform in neighbourhood of every point, it would imply that $f$ is continuous. This theorem is stated for example on wikipedia http://en.wikipedia.org/wiki/Uniform_convergence 'Uniform convergence theorem. If $(f_n)$ is a sequence of continuous functions which converges uniformly towards the function $ f$ on an interval $ S$, then $ f$ is continuous on $ S$as well.'
H: Probability in rolling a die 6 times If I have one die and I roll it $6$ times, what is the probability that all $6$ rolls give the same result? I know that the probability of each number on a $6$-sided die is $\frac{1}{6}$. If I want the result to be 2 every time, the probability is $(\frac{1}{6})^6$, right? So, since the common value can be any of the $6$ numbers, is it $6 \cdot (\frac{1}{6})^6$? The other issue is: what is the probability that $2$ of the $6$ rolls give the same result? AI: Your first computation is right: the probability that all six rolls show the same number is $6\cdot(1/6)^6=(1/6)^5$. For the second question — the probability that at least two of the six rolls show the same result — count the complementary event that all six results are different. For the first roll there are no restrictions; for the second there are 5 allowed faces out of 6, for the third 4 out of 6, and so forth. Hence, the probability of all results being different is $1\cdot \frac{5}{6}\cdot \frac{4}{6}\cdot ...\cdot \frac{1}{6} = \frac{5!}{6^{5}}$, and the probability that some result repeats is $1 - \frac{5!}{6^{5}} = \frac{6^{5} - 5!}{6^{5}}$.
H: What's the difference between these two transformations of functions? I'm about to graph the transformation of a function, but in this problem I encountered something new. The function transformation looks like this: y=12(f(x)+2) Thing is, I've never seen the f encapsulated in parentheses, so I'm unsure what effect it has. Essentially, I'm wondering what the difference between the above transformation and this one is: y=12f(x)+2 What do I do differently on the top one when compared to what is done to the bottom one? Is there even any difference? AI: It has to do with order of operations. Recall that when doing arithmetic, we do multiplication before addition. If we want to do addition before multiplication, we must use brackets. For example: $$ 1 + 2 \times 3 = 1 + 6 = 7 $$ while on the other hand: $$ (1 + 2) \times 3 = 3 \times 3 = 9 $$ Likewise, the transformation $y=af(x) + b$ can be interpreted to mean: "Vertically expand $f(x)$ by a factor of $a$, then vertically translate the result up by $b$ units." On the other hand, the transformation $y = a(f(x) + b)$ can be interpreted to mean: "Vertically translate $f(x)$ up by $b$ units, then vertically expand the result by a factor of $a$."
H: Variance for random variable X I have a probability distribution like so, $$f(x)=\begin{cases} 1/(2x) & x\geq 2\\ 11/40 & x=1 \end{cases}$$ I know the mean is $2.775$ so in order to find the variance for $X$, I use $$E(X) = \sum_{x=1}^6 (x - 2.775)^2 f(x)$$ I have tried subbing in all the values and cannot get any value other then $2.1567$ and the answer should be $2.5743$. Any clues about where I went wrong would be greatly appreciated! AI: So $x \ge 2$ means $x = 2,3,4,5,6$? Yes, $$\sum_{x=1}^6 (x-2.775)^2 f(x) = 2.574375$$ I suspect you've made an arithmetic error. Without seeing your actual calculations, it's impossible to tell where that occurred.
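For a quick arithmetic check of both the mean and the variance (assuming, as in the answer's sum, that the support is $x=1,\dots,6$), a few lines of Python with exact fractions suffice; this is just a verification sketch.

```python
from fractions import Fraction as F

# pmf: f(1) = 11/40 and f(x) = 1/(2x) for x = 2, ..., 6 (support assumed to be 1..6).
f = {1: F(11, 40), **{x: F(1, 2 * x) for x in range(2, 7)}}

mu = sum(x * p for x, p in f.items())
var = sum((x - mu)**2 * p for x, p in f.items())

print(sum(f.values()))        # 1, so this is a valid pmf
print(float(mu), float(var))  # 2.775 and 2.574375
```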
H: Intuition behind inner product of gradient Let $f: \mathbb{R}^{n} \to \mathbb{B}$ be continuously differentiable and take the points $x, p \in \mathbb{R}^{n}$. I am familiar with the idea of a directional derivative being given as an inner product of the gradient of a point with another vector in $\mathbb{R}^{n}$. For example, the directional derivative of $f$ in the direction $p$ with respect to the point $x$ can be denoted as $\langle \triangledown f(x), p \rangle$. However, I cannot seem to wrap my mind around the idea of the inner product of the gradient of a point with respect to itself itself. More succinctly, what can we learn about the function $f$ at $p$ given the identity: $$\langle \triangledown f(p), p \rangle$$ This identity can be rewritten as $\langle \triangledown f(p), p \rangle = \sum\limits_{i = 1}^{n}\frac{\partial{f}}{\partial{p_{i}}}p_{i}$, and I'm not quite sure what we can gather from this. I know this is somewhat vague, but if anyone could give me some insight onto the mechanics of what is going on here, I would be very appreciative. AI: A "direction" is generally given by a unit vector, not just any vector. If $p$ is not necessarily a unit vector, $\langle \nabla f(x), p \rangle$ would be the rate of change of $f$ as you pass through $x$ with velocity vector $p$. Now you're looking at the case $x=p$. So your velocity vector is the same as the position vector: the direction is directly away from the origin.
H: limit of a sequence $(1+1/\sqrt 2+\dots+1/\sqrt n)/\sqrt n$: Cesaro maybe? What is the limit of: $$\mathop {\lim }\limits_{n \to \infty } {1 \over {\sqrt n }}(1 + {1 \over {\sqrt 2 }} + ...{1 \over {\sqrt n }})$$ It looks like I need to use the Cesàro theorem but I'm not sure how exactly. AI: Hint: one approach would be Stolz–Cesàro with $b_n = \sqrt n$ and $a_n =\sum_{k=1}^{n} \frac{1}{\sqrt{k}}$. Another way would be to estimate the numerator by integrals, using that $1/\sqrt x$ is decreasing: $$ \int _{1}^{n+1} \frac{\mathrm{d}x}{\sqrt x} \leq \sum_{k=1}^{n} \frac{1}{\sqrt{k}} \leq 1+\int _{1}^{n} \frac{\mathrm{d}x}{\sqrt x}$$ Both bounds, divided by $\sqrt n$, converge to the same limit.
H: What does this infinite sum represent? Is there a better way to write the following? $$\sum_{i=0}^\infty \left(\frac{2}{3}\right)^i \frac{\left(\dot{f}\right)^i}{f^{2i-1}},$$ where $\dot{f} = df/dt$. AI: It's $$ \dfrac{f^3}{f^2 - (2/3) \dot{f}}$$ whenever the infinite series converges.
H: Expected Value Word Problem -Roulette Wheel The probability a roulette wheel stops on a red number is 18/37. If it lands on red and you bet red, then you receive double you bet (including your bet). If you bet $1 on 10 consecutive plays, what is the probability that you make a profit? My answer: To make a profit, you need 5 wins and 5 loses. Let X be the number of wins, X ~ Binomial(10, 18/37). $$\Bbb P(5) = \binom{10}{5}(18/37)^5(19/37)^{5}$$ I found this to be equal to .245 which I have been told is incorrect. I can't find any flaw in my thinking, any clue would be appreciated! Thank you! AI: You want to find P(X > 5), not P(X = 5).
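Numerically, with $X$ the number of wins, the profit probability $P(X\ge 6)$ for a Binomial$(10, 18/37)$ can be computed in a couple of lines; this is an illustrative check, not part of the original answer.

```python
from math import comb

p = 18 / 37
# Profit needs more wins than losses over 10 one-dollar even-money bets,
# i.e. at least 6 wins (5 wins / 5 losses only breaks even).
prob_profit = sum(comb(10, k) * p**k * (1 - p)**(10 - k) for k in range(6, 11))
print(prob_profit)  # approximately 0.344
```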
H: Normalizing Eigenvectors (Length equal to 1) I am a bit confused as I have seen texts that normalize the vectors to get unique solutions and others that do not. Is there an empirical rule about when we should set the length of the vector equal to 1 and when not? Thank you. AI: Sometimes it just makes the vectors easier to deal with. If that is the case, or you're dealing with basis vectors and the like, it's common to normalize the vector. Otherwise, it is up to the good judgement of whoever is writing the text as to whether or not to normalize the vector, though I think most of the time he or she would not.
H: Given two subspaces $N,W$ of $V$ find a linear transformation $T:V\to V$ such that its kernel is $N$ and its range is $W$. If $N$ and $W$ are subspaces of $V$ such that $\dim(V/N) = \dim W$, then there exists at least one element $T$ of $L(V,V)$ such that $\mbox{ker}(T) = N$ and $\mbox{range}(T) = W$. Two related problems: i) Given $N$ a subspace of $V$, find a linear transformation $T:V\to V$ with kernel $N$. ii) Given $W$ a subspace of $V$, find a linear transformation $S:V\to V$ with range $W$. I could solve these: for i), extend a basis $\{ v_i \}$ of $N$ to one $\{ v_i \} \cup \{ v_j \}$ for $V$ and set $T(v_i) = 0$ and $T(v_j) = v_j$. For ii), extend a basis $\{ v_i \}$ for $W$ to a basis $\{ v_i \} \cup \{ v_k \}$ for $V$ and set $S(v_i) = v_i$ and $S(v_k) = 0$. But I have problems with the more general situation. I have an idea of how it might work: by extending a basis for $N\cap W$ to a basis for $N$ and $W$ separately, the union of all these basis vectors must span a space of dimension $\dim(V) - \dim(N\cap W)$; then extend this basis to one of $V$ and, on the added vectors, use some isomorphism between their span and $N\cap W$ to construct a linear transformation. But I have trouble making these ideas precise; any help? (By the way, I am also considering infinite-dimensional vector spaces, which makes a clean formulation more difficult for me.) AI: Choose a basis of $\;N\;:\;\;\{v_1,...,v_k\}\;$ , complete it to a basis of the whole space: $\;\{v_1,...,v_k,v_{k+1},...,v_n\}\;$. By the given info, there exists a basis $\;\{w_1,...,w_{n-k}\}\;$ of $\;W\;$ , so define on the above basis of $\;V\;$ : $$T:V\to V\;,\;\;\;T(v_i)=\begin{cases}0&,\;\;1\le i\le k\\{}\\w_{i-k}&,\;\;k+1\le i\le n\end{cases}$$ and extend by linearity. No need to mess around with the intersection and stuff.
H: Visualization of rotation in $\mathbb R^3$ I am trying to visualize the following rotation of $\mathbb R^3$, but it is very difficult. I want to get the answer by intuition, and not by using the Rodrigues rotation formula or conjugation of matrices, etc. Help please. Problem statement: Determine the matrix that represents the following rotation of $\mathbb R^3$: an angle of $\pi/2$ about the fixed axis containing the vector $(1,1,0)^t$ Here is what I have tried: Should I find a 3x3 rotation matrix $A$ such that $A(1,1,0)^t=(-1,1,0)^t$? By working out the problem using the conjugate matrices, I end up getting the answer to be: $$A = \begin{pmatrix} 0.5 & 0.5 & \sqrt{2}/2 \\ 0.5 & 0.5 & -\sqrt{2}/2\\ -\sqrt{2}/2 & \sqrt{2}/2 & 0 \end{pmatrix}$$ This answer is not at all intuitive. AI: If you are rotating around an axis, all points on that axis will remain the same after transformation. (e.g. If you rotate around $(1,1,0)^t$, $\alpha(1,1,0)^t = A \alpha(1,1,0)^t ,\forall \alpha \in \Bbb R$) It may help to start by thinking of rotation about a different axis. ($\hat y$, for example) What happens to the points upon rotation around this axis? Which change? Which remain the same? Can you create a transformation matrix for this simpler case?
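Since the question mentions the Rodrigues rotation formula, here is a short NumPy sketch (illustrative only) that builds the matrix from that formula and confirms the axis $(1,1,0)^t$ is fixed; the printed matrix agrees with the matrix obtained by conjugation.

```python
import numpy as np

# Rodrigues formula: R = cos(t) I + sin(t) K + (1 - cos(t)) k k^T,
# where k is the unit rotation axis and K is its cross-product matrix.
k = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
t = np.pi / 2
K = np.array([[0, -k[2], k[1]],
              [k[2], 0, -k[0]],
              [-k[1], k[0], 0]])
R = np.cos(t) * np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * np.outer(k, k)

print(np.round(R, 3))
# approximately:
# [[ 0.5    0.5    0.707]
#  [ 0.5    0.5   -0.707]
#  [-0.707  0.707  0.   ]]
print(R @ np.array([1.0, 1.0, 0.0]))  # approximately [1. 1. 0.] -- the axis is fixed
```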
H: Simple limit calculation $\lim_{n\to\infty} \sqrt{n(n + 1)}/(n+1)$ Why is this limit equals 1? $$\eqalign{ & \mathop {\lim }\limits_{n \to \infty } {{\sqrt n \cdot\sqrt {n + 1} } \over {n + 1}} = 1 \cr & \cr} $$ I tried dividing by n, but it gives 0/0, which isn't so great.. AI: $$\frac{\sqrt n\sqrt{n+1}}{n+1}=\frac{\sqrt1\sqrt{1+\frac1n}}{1+\frac1n}\xrightarrow[n\to\infty]{}\frac{1\cdot 1}1=1$$
H: Determination of a constant based on continuity The following defines a function with a constant $b$ to be determined by using the continuity of the function: $$f(x)=\begin{cases} \dfrac{x-b}{b+1}, \quad x<0\\[1.75ex] x^2+b, \quad x>0 \end{cases}$$ In short, for what value of $b$ is $f(x)$ continuous for every $x$? What I did was to determine the right hand and left hand limit at zero to find that $b=0$ or $b=-2$, but I find this weird. AI: If we require continuity we have $$\lim_{x\to0^+}f(x)=b=\lim_{x\to0^-}f(x)=\frac{-b}{b+1}$$ Can you solve this equation to find the value of $b$? Solving $b=\frac{-b}{b+1}$ (with $b\neq-1$) gives $b(b+2)=0$, so $b=0$ or $b=-2$. Both values really do make $f$ continuous at $0$ (with $f(0)$ defined as the common limit), so there is nothing weird about getting two answers.
H: Tensor product of a ring with itself If $R$ is a commutative ring then $R \otimes_{R} R \cong R$. Is this still true if $R$ is non-commutative? AI: For some left $R$-module $M$, we have $R \otimes_R M \cong M$ as abelian groups (or in fact left $R$-modules): We have the linear map $M \to R \otimes_R M,\, m \mapsto 1 \otimes m$. By the universal property of the tensor product there is a linear map $R \otimes_R M \to M$ mapping $r \otimes m$ to $rm$. One checks that they are inverse to each other.
H: How to solve $\text{constant} = \sin(2\theta)\;?$ What would you do to solve $0.587 = \sin(2\theta)$? I know that this question is rather basic, but I've had no luck trying to find answers online. I was wondering if $\sin$ could be replaced by $\text{opposite} \over \text{hypotenuse}$, but I'm not exactly sure how that would work. AI: We take the inverse $\sin$ of each side of your equation: $$0.587 = \sin(2\theta) \implies \sin^{-1}(0.587) = \sin^{-1}(\sin(2\theta)) = 2 \theta $$ $$\iff \theta = \frac{\sin^{-1}(0.587)}{2}$$ That gives one solution, $\theta \approx 0.314$ radians; since $\sin$ is periodic, the complete set of solutions is $2\theta = \sin^{-1}(0.587) + 2\pi n$ or $2\theta = \pi - \sin^{-1}(0.587) + 2\pi n$ for integers $n$.
H: Rendezvous problem - Dynamical Systems I'm learning about graph-based distributed control and there's a problem called "the rendezvous problem" that uses the Laplacian matrix as the state matrix of the system. I have a graph with 4 nodes and have already computed the Laplacian matrix, but I cannot figure out how to analytically derive the value to which the vector x converges as time goes to infinity. What I was doing was computing the eigenvalues, but with the eigenvalues I can only check whether the system is stable or not; I still don't see how to get the point of convergence. The only thing I know about the Laplacian matrix is that it is time-invariant. Can someone give me a hint on how to proceed? Should I compute the eigenvectors and find the solution of the system? I mean, it's a good amount of work due to the size of the system. AI: Try taking the Laplace transform of the state equation (the system whose state matrix is the Laplacian), and then use the final value theorem to read off the limit: http://en.wikipedia.org/wiki/Final_value_theorem
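To make that hint concrete: assuming the usual rendezvous/consensus dynamics $\dot x = -Lx$ (the question only says the Laplacian is the state matrix, so the sign convention here is an assumption), the Laplace transform gives $X(s)=(sI+L)^{-1}x(0)$, and the final value theorem reads off $\lim_{t\to\infty}x(t)=\lim_{s\to 0}s\,(sI+L)^{-1}x(0)$. A small numerical sketch of that limit:

```python
import numpy as np

# Assumed dynamics: xdot = -L x  (the sign convention is an assumption).
# Example: graph Laplacian of a path graph on 4 nodes.
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
x0 = np.array([3.0, -1.0, 4.0, 2.0])

# Final value theorem: lim_{t->inf} x(t) = lim_{s->0} s (sI + L)^{-1} x(0).
s = 1e-9
x_inf = s * np.linalg.solve(s * np.eye(4) + L, x0)

print(x_inf)       # all four entries close to 2.0
print(x0.mean())   # for this symmetric Laplacian the common value is the average of x0
```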
H: Mean Value Property for Continuous Complex Functions Suppose I have an open set $U$ in the complex plane and a function $g$ that is continuous on $U$. Let $C(z_0$$,r)$ be a circle fully contained in $U$ of radius $r$ whose center is $z_0$. I know that if $g$ is harmonic, then the mean value of $g$ over $C(z_0$$,r)$ is equal to $g(z_0)$. But suppose we do not require $g$ to be harmonic. Does $g(z_0)$ equal the limit as $r$ tends to $0$ of the mean values of $g$ over the circles $C(z_0$$,r)$? I'd appreciate any suggestions on how to prove this result or some reading material where I might find the answer. AI: You should be able to prove this result using pretty much just the definition of continuity. If $g$ is continuous and we take $\epsilon > 0$ then for all $z$ sufficiently close to $z_0$, we have $| g(z) - g(z_0) |< \epsilon$, from which you should be able to show that the mean on any sufficiently small circle (centred at $z_0$) is within $\epsilon$ of $g(z_0)$.
H: Proof of the nonexistence of an identity $\phi$ involving convolution The Banach space $L^1(\mathbb{R}^n)$ is an algebra with a product (convolution) which is both commutative and associative. But this algebra does not have a multiplicative identity. An attempt to show the nonexistence: If $\phi$ exists, consider $\chi_{B(0,r)}*\phi=\chi_{B(0,r)}$ where $B(0,r)$ is the ball with radius $r$ centered at the origin, and $\chi_A$ is the characteristic function on set $A$. Then there exists $x\in B(0,r)$ such that $(\chi_{B(0,r)}*\phi)(x)=1$. Now, why does it follow that $\int_{B(0,2r)}|\phi(y)|dy\ge1$? Does this follow from the inequality $||f*g||_1\le||f||_1||g||_1$? Thank you! AI: Let me suggest a different path: $\chi_{B(0,r)} \in L^1(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n)$, therefore $\phi\ast \chi_{B(0,r)}$ is a continuous function for every $\phi \in L^1(\mathbb{R}^n)$ But a continuous function cannot be equal almost everywhere to $\chi_{B(0,r)}$. Hence there is no identity for convolution in $L^1(\mathbb{R}^n)$. For the route you took, we have by assumption $(\chi_{B(0,r)}\ast\phi)(x) = 1$ for almost all $x \in B(0,r)$. That means for such an $x$ $$\begin{align} 1 &= \int_{\mathbb{R}^n} \chi_{B(0,r)}(y) \phi(x-y)\,dy\\ &= \int_{B(0,r)} \phi(x-y)\,dy\\ &= \int_{B(x,r)} \phi(z)\,dz\\ &\leqslant \int_{B(x,r)} \lvert \phi(z)\rvert\,dz\\ &\leqslant \int_{B(0,2r)} \lvert \phi(z)\rvert\,dz, \end{align}$$ where the last inequality follows since $B(x,r) \subset B(0,2r)$ for $x\in B(0,r)$. Letting $r \searrow 0$, we obtain a contradiction to the integrability of $\phi$, since $$\lim_{r\searrow 0} \int_{B(0,2r)} \lvert f(z)\rvert\,dz = 0$$ for every $f \in L^1(\mathbb{R}^n)$ by the dominated convergence theorem.
H: Show that $\int_{0}^{1}{\frac{\sin{x}}{x}\mathrm dx}$ converges As title says, I need to show that the following integral converges, and I can honestly say I don't really have an idea of where to start. I tried evaluating it using integration by parts, but that only left me with an $I = I$ situation. $$\int \limits_{0}^{1}{\frac{\sin{x}}{x} \mathrm dx}$$ AI: Notice that for all $0 < x \le 1$ we have $0 \le \frac{\sin x}{x} \le 1$ (since $0 \le \sin x \le x$ there), and $\frac{\sin x}{x} \to 1$ as $x \to 0^{+}$, so the integrand is bounded on $(0,1]$. Hence \begin{eqnarray*} \left|\int_0^1 \frac{\sin x}{x} \, \operatorname{d}\!x\right| &\le& \int_0^1 \left|\frac{\sin x}{x} \right| \operatorname{d}\!x \\ \\ &\le& \int_0^1 1 \, \operatorname{d}\!x \\ \\ &=& 1, \end{eqnarray*} and the integral converges.
H: Prove that $X$ is a finite set Base case: $7 \in X$ Recursive case: If $x \in X$, either $\dfrac{x}{2} \in X$ (if $x$ is even) or $3 \times x + 1 \in X$ (if $x$ is odd) Prove that $X$ is a finite set by explicitly listing all of its elements. Show how each element has been derived. I notice that using my calculator after a while it gets back to $7$, but how can I prove it? I haven't been exposed to such questions yet. AI: You can't prove that $X$ is a finite set from what is given. You can only prove that $$X^{*}=\{7,22,11,34,17,52,26,13,40,20,10,5,16,8,4,2,1\}\subseteq X,$$and note that $X=X^{*}$ is possible. There are many infinite sets that are also consistent with the given information; for instance, $$ X^{*} \cup \{8,16,32,64,128,\ldots\} $$ is another perfectly good choice for $X$.
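To "explicitly list all of its elements" and show how each is derived, it is easy to compute the closure of $\{7\}$ under the rule mechanically; this small script (an illustration) reproduces the set $X^{*}$ from the answer.

```python
# Close {7} under the rule: x even -> x // 2, x odd -> 3x + 1.
X = {7}
frontier = [7]
while frontier:
    x = frontier.pop()
    nxt = x // 2 if x % 2 == 0 else 3 * x + 1
    if nxt not in X:
        X.add(nxt)
        frontier.append(nxt)

print(sorted(X))  # the 17 elements of X* listed in the answer
print(len(X))     # 17
```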
H: Finding a relationship between x and y of a DE I have the differential equation: $$ \frac{dy}{dx}=\frac{-5y-xy}{-4x-xy}$$ How do I go about finding a relationship between $x$ and $y$? AI: $$\frac{dy}{dx}=\frac{-5y-xy}{-4x-xy} = \frac{y(x+ 5)}{x(y + 4)} = \dfrac{y}{y+4} \cdot \frac{x + 5}{x}$$ Now, the strong hint - This is a separable differential equation: $$\dfrac{y+4}{y}\,dy \;\;= \;\dfrac{x+5}{x}\,dx$$
H: What is $\lim\limits_{i\to 0} \frac{2^n}{\frac{(n+1)\sin((n+1)\theta)}{\sin\theta } - \frac{(n-1)\sin((n-1)\theta)}{\sin\theta }} $? What is $$\lim\limits_{i\to 0} \dfrac{2^n}{\frac{(n+1)\sin((n+1)\theta)}{\sin\theta } - \frac{(n-1)\sin((n-1)\theta)}{\sin\theta }} $$ where $$\theta=\frac{i\pi}{n} $$ The second page of this document says that it's $$\frac{2^{n-2}}{n}$$ but I'm at a complete loss as to how to come to this conclusion. AI: It's not too hard to show that $$ \lim_{x \to 0} \frac{\sin{mx}}{nx} = \frac{m}{n},$$ and consequently $\lim\limits_{\theta \to 0} \frac{\sin(m\theta)}{\sin\theta} = m$. Here $\theta=\frac{i\pi}{n}\to 0$ as $i\to 0$, so each fraction $\frac{(n\pm1)\sin((n\pm1)\theta)}{\sin\theta}$ tends to $(n\pm1)^2$. Thus $$\lim\limits_{i\to 0} \dfrac{2^n}{\frac{(n+1)\sin((n+1)\theta)}{\sin\theta } - \frac{(n-1)\sin((n-1)\theta)}{\sin\theta }} = \frac{2^n}{(n+1)^2-(n-1)^2} = \frac{2^n}{4n} = \frac{2^{n-2}}{n}$$
H: lim n tends to infinity, $\frac{\sqrt{n}\sqrt{n+1} }{(n+1)}$ I am interested in the limit $\frac{\sqrt{n}\sqrt{n+1} }{(n+1)}$ as n grows without bound. There is already one question on this site asking about this limit. However this is not duplicate, since my question is rather specific. I understand the standard way of solving it by dividing the fraction by n. But my first thought on how to solve the problem was replace n+1 by n (difference becomes negligible as n grows) and simplify. Is this a valid approach of computing the limit? AI: The fact that $\frac{n+1}{n}\rightarrow 1$ is the reason why in rough working to guess the limit you can "replace $n+1$ by $n$", but there's no formal rule that allows straight up replacement of $n+1$ by $n$ while leaving the other $n$ alone.
H: Norms Induced by Inner Products (Complex Case) I have just proved that if $||\cdot||$ satisfies $||u+v||^{2}+||u-v||^{2}=2||u||^{2}+2||v||^{2}$, then there exists an inner product such that $||u||^{2}= \langle u,u \rangle$, given by $\langle u,v \rangle =\frac{1}{4}(||u+v||^{2}-||u-v||^{2})$. Now, I want to extend this to the complex numbers. My friend insists that one would have to repeat all the arguments necessary to prove the real case separately for the complex numbers, but is that really necessary? Isn't it sufficient to simply define $\langle u,v \rangle:= \frac{1}{4} \sum_{k=0}^{3}i^{k}||u+i^{k}v||^{2}$, show that $\langle iu,v \rangle =i\langle u,v \rangle $, $\langle u,v \rangle=\overline{\langle v,u \rangle}$ and then simply refer to the proof for the real numbers (pointing out, of course, that the imaginary and real parts are separated)? AI: You don't have to repeat all arguments. You are probably aware that the real and complex inner product are related by $\DeclareMathOperator{\real}{Re} \DeclareMathOperator{\ima}{Im}$ $$\langle u,v\rangle_{\mathbb{R}} = \operatorname{Re} \langle u,v\rangle_{\mathbb{C}}.$$ So if you have a complex inner product, you get a real inner product for free, and from a real inner product, you obtain a complex inner product via $$\begin{align} \langle u,v\rangle_{\mathbb{C}} &= \operatorname{Re} \langle u,v\rangle_{\mathbb{C}} + i\operatorname{Im} \langle u,v\rangle_{\mathbb{C}}\\ &= \real \langle u,v\rangle_{\mathbb{C}} - i \real \left(i\langle u,v\rangle_{\mathbb{C}}\right)\\ &= \real \langle u,v\rangle_{\mathbb{C}} - i\real \left(\langle u,iv\rangle_{\mathbb{C}}\right)\\ &= \langle u,v\rangle_{\mathbb{R}} - i \langle u,iv\rangle_{\mathbb{R}}. \end{align}$$ By that construction, it is clear that $\langle\cdot,\cdot\rangle_\mathbb{C}$ is a real-bilinear form, and it only remains to show that it is complex sesquilinear, that is, $\langle iu,v\rangle_\mathbb{C} = - i\langle u,v\rangle_{\mathbb{C}}$ and $\langle u, iv\rangle_\mathbb{C} = i\langle u,v\rangle_\mathbb{C}$, and hermitian, $\langle v,u\rangle_\mathbb{C} = \overline{\langle u,v\rangle_\mathbb{C}}$.
H: Show that a random variable is not dominated Let $((0,1], \mathcal{B}_{(0,1]}, \lambda)$ be a probability space, and define $$ X_n = n 1_{(0,1/n]} $$ This is an example where $\lim E(X_n) \neq E( \lim X_n)$, and the dominated convergence theorem doesn't apply because $X_n$ can't be dominated by an integral random variable. I can see intuitively that this random variable can't be dominated by an integrable random variable; $X(\omega) = 1/\omega$ "connects the dots" and its expectation isn't finite. However, how could one prove that $X_n$ can't possibly dominated by an integrable random variable? I was thinking about a proof by contradiction, but I didn't succeed. I suppose that one way to prove it is by saying that since 1) $\lim E(X_n) \neq E( \lim X_n)$, 2) $X_n \rightarrow 0$, and 3) DCT doesn't apply, we can't have domination (the result doesn't follow, so at least one premise of the DCT must be false, which in this case is domination). But I was looking for something more "constructive". Thanks! AI: If $X_n\leqslant X$ for every $n$, then $X\geqslant Y$ where $Y=\sup\limits_{n\geqslant1} X_n$. Note that $$ Y=\sum_{n\geqslant1}n\mathbf 1_{(1/(n+1),1/n]}, $$ hence $$ E[Y]=\sum_{n\geqslant1}n\left(\frac1n-\frac1{n+1}\right)=\sum_{n\geqslant1}\frac1{n+1}, $$ which is infinite. One sees that $Y$ is not integrable, and that neither is any dominating random variable $X$.
H: Series convergence test with geometric series This is my first question on the math stackexchange-website. This is an assignment question, but I've tried to detail my thought process as granularly as possible to show I'm not just being lazy. My goal in asking this question is to fill a gap in understanding. I'm being asked to determine whether the series below converges or diverges using the comparison test: $$\sum_{n=0}^{\infty}\frac1{(2^n)(n+1)}.$$ I've identified the dominant term as $\dfrac1{2^n}$. Rewritten as $1 \cdot 2^{-n}$, where $a = 1$ and $r = 2$, it seems like a classic geometric series test. However, $r = 2$ means that $r > 1$ which means the series should diverge. But, sampling a few terms of the series gives: $$\{1, \frac14, \frac1{12}, \frac1{80}\}$$ which implies the series converges. Troubleshooting my own reasoning, I believe I've either made a mistake with: Identifying the dominant term, Identifying the dominant term as an application of geometric series, Rewriting the dominant term into geometric series form, or Incorrectly applying the geometric series test. But: $n+1$ just acts as a linear offset, whereas $2^n$ grows exponentially, so $2^n$ must be the dominant term in the denominator, Geometric series have the form $ar^n$, and $1 \cdot 2^{-n}$ matches this perfectly (unless there is a condition that $n$ must be positive), $\dfrac1{2^n}$ is the same thing as $1 \cdot 2^{-n}$, and $r = 2$ is greater than $1$, and so by geometric series test it diverges. What have I done wrong? Thanks. AI: Your reasoning is correct, but notice that $r = 1/2$, not $r = 2$, so the series $\sum_ {n = 0}^\infty 2^{-n}$ converges. Also, regarding (3), $n = 0, 1, 2, \ldots$, so $n$ is nonnegative, not just positive. Moreover, your reasoning in step (4) is not completely correct because if something "larger" diverges, it does not mean something "smaller" also diverges. Think about it, the given series is $\leq \sum_{n = 0}^\infty 1/(n + 1)$, which is the harmonic series and it diverges, but your series converges.
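A numerical look at the partial sums also makes the comparison vivid; the sketch below (illustrative only) prints the series' partial sum next to the partial sum of the dominating geometric series with $r=1/2$.

```python
# Partial sums of the given series and of the dominating geometric series.
terms = [1 / (2**n * (n + 1)) for n in range(40)]
geom = [1 / 2**n for n in range(40)]

print(sum(terms))  # about 1.386, and bounded above by
print(sum(geom))   # about 2.0, the geometric series with r = 1/2
```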
H: Show that a positive operator on a complex Hilbert space is self-adjoint Let $(\mathcal{H}, (\cdot, \cdot))$ be a complex Hilbert space, and $A : \mathcal{H} \to \mathcal{H}$ a positive, bounded operator ($A$ being positive means $(Ax,x) \ge 0$ for all $x \in \mathcal{H}$). Prove that $A$ is self-adjoint. That is, prove that $(Ax,y) = (x, Ay)$ for all $x,y \in \mathcal{H}$. Here's what I have so far. Because $A$ is positive we have $\mathbb{R} \ni (Ax,x) = \overline{(x,Ax)} = (x,Ax)$, all $x \in \mathcal{H}$. Next, I have seen some hints that tell me to apply the polarization identity: $$(x,y) = \frac{1}{4}((\lVert x+y \rVert^2 + \lVert x-y \rVert^2) - i(\lVert x + iy \rVert^2 - \lVert x - iy \rVert^2)),$$ where of course the norm is defined by $\lVert \cdot \rVert^2 = (\cdot, \cdot)$. So my guess is that I need to start with the expressions: $$(Ax,y) = \frac{1}{4}((\lVert Ax+y \rVert^2 + \lVert Ax-y \rVert^2) - i(\lVert Ax + iy \rVert^2 - \lVert Ax - iy \rVert^2)),$$ $$(x,Ay) = \frac{1}{4}((\lVert x+Ay \rVert^2 + \lVert x-Ay \rVert^2) - i(\lVert x + iAy \rVert^2 - \lVert x - iAy \rVert^2)),$$ and somehow show they are equal. But here is where I have gotten stuck. Hints or solutions are greatly appreciated. AI: You should apply the polarization identity in the form $$4(Ax,y) = (A(x+y),x+y) - (A(x-y),x-y) -i(A(x+iy),x+iy) + i(A(x-iy),x-iy).$$ Since you already know $(Az,z) = (z,Az)$ for all $z \in \mathcal{H}$, it is not difficult to deduce $A^\ast = A$ from that.
H: How many ways to seat 9 couple around a round table You are a host/hostess at your local Applebee’s. You are seating a group consisting of 9 couples at a round table. A)In how many different ways can you do this, provided that each couple will sit together, and all that you care about is their position relative to one another? B)What is the probability that Al doesn’t end up within two seats of Ricky, AND Beth doesn’t end up within two seats of Charlene? Used a wrong tag before of order-statistics. Sorry about that. AI: For the number of arrangements, imagine that one of the chairs is a throne, and the Queen is one of the group at Applebee's. She sits down first, of course, on the throne. Her Consort has $2$ choices of chair. Now let us seat the other couples one at a time, counterclockwise from the Queen-Consort pair. The person chosen to occupy the chair immediately counterclockwise from the royal pair can be chosen in $16$ ways. Now the occupant of the next chair is determined. The person chosen to occupy the next chair after that can be chosen in $14$ ways, and then the occupant of the chair after that is determined. And so on. Multiply. We get $(2)(16)(14)(12)\cdots (4)(2)$. This can also be written as $(2^9)(8!)$.
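The count $(2)(16)(14)\cdots(4)(2)$ equals $2^9\cdot 8!$, and the same argument for $k$ couples gives $2^k\,(k-1)!$. One can verify this by brute force for small $k$ — an illustrative check only, since 9 couples is far too many to enumerate directly.

```python
from itertools import permutations
from math import factorial

def count_seatings(k):
    """Circular seatings of k couples with partners adjacent, counted up to
    rotation (person 0 is pinned to one chair to kill rotations)."""
    people = list(range(2 * k))          # persons 2j and 2j + 1 form couple j
    total = 0
    for rest in permutations(people[1:]):
        seating = (0,) + rest
        n = len(seating)
        ok = all(seating[(i - 1) % n] == (p ^ 1) or seating[(i + 1) % n] == (p ^ 1)
                 for i, p in enumerate(seating))
        total += ok
    return total

for k in (2, 3, 4):
    print(k, count_seatings(k), 2**k * factorial(k - 1))  # the two counts agree
```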