H: Line integral about a circle Here is the question: Evaluate$$\int Pdx+Qdy$$ where $$P(x,y)=\frac{y+x}{x^2+y^2}$$ and $$Q(x,y)=\frac{y-x}{x^2+y^2}$$ about the circle $$C: x^2+y^2=a$$ oriented clockwise. I tried finding $P_y$ and $Q_x$ and I got $0$. However, that is not the answer. I know it is because the function is discontinuous at $(x,y)=(0,0)$, but I don't know what to do about it. AI: If $\gamma(t)=(x(t),y(t)),t\in [\alpha,\beta]$ is an oriented parametrization of the circle $C:x^2+y^2=a$ then $$\int_C Pdx+Qdy=\int_\alpha^\beta\left[P(x(t),y(t))x'(t)+Q(x(t),y(t))y'(t)\right]dt$$ by definition. I am sure that you know at least one parametrization of a circle of radius $\sqrt{a}$, so all you need now is to compute the integral on the right-hand side.
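For what it's worth, the suggested computation can be carried out numerically (a quick sketch, not part of the original answer; the function name is my own). Plugging a clockwise parametrization into the definition, the integrand turns out to be identically $1$, so the clockwise integral is $2\pi$ for every $a>0$:

```python
import math

def line_integral(a, n=10000):
    """Midpoint rule for the line integral of P dx + Q dy over x^2 + y^2 = a, clockwise."""
    R = math.sqrt(a)
    total = 0.0
    for j in range(n):
        t = (j + 0.5) * 2 * math.pi / n
        x, y = R * math.cos(t), -R * math.sin(t)       # clockwise parametrization
        dx, dy = -R * math.sin(t), -R * math.cos(t)    # x'(t), y'(t)
        P = (y + x) / (x * x + y * y)
        Q = (y - x) / (x * x + y * y)
        total += (P * dx + Q * dy) * 2 * math.pi / n
    return total

print(line_integral(5.0))   # ~ 6.2832 = 2*pi, independent of a
```

The $1/(x^2+y^2)$ factor cancels the $R^2$ from the parametrization, which is why the radius (and hence $a$) drops out.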
H: Show that the tangent planes pass through the origin Show that all the tangent planes to the conic surface $z = xf(\frac{y}{x})$ at the point $M(x_o,y_o,z_o)$, where $x_o \neq 0$, pass through the origin of the coordinates First, I've found the tangent plane at this generic point $M$ of the surface: $z - z_o = (x-x_o)(f(\frac{y_o}{x_o}) - x_o \frac{y_o}{x_o^2}f'_x(\frac{y_o}{x_o})) + (y-y_o)(x_o \frac{1}{x_o} f'_y(\frac{y_o}{x_o}))$ We want to check if $(0,0,0)$ is a point of this plane. Then: $-z_o = -x_o f(\frac{y_o}{x_o}) + y_o f'_x(\frac{y_o}{x_o}) - y_o f'_y(\frac{y_o}{x_o})$ But I couldn't prove that this equality holds (i.e., $y_o f'_y(\frac{y_o}{x_o}) = y_o f'_x(\frac{y_o}{x_o})$) Thanks in advance! AI: I sense an apparent misconception on your part regarding the term $f(y/x)$. This is hiding a second function $g: \mathbb{R}^2 \to \mathbb{R}$ defined as $g(x,y) = f(y/x)$, with $f: \mathbb{R} \to \mathbb{R}$. (Possibly not all of $\mathbb{R}^2$ or $\mathbb{R}$, in which case just assume they are defined on the largest subset available for the exercise.) Therefore, expressions such as $f'_y$ and $f'_x$ don't make much sense, because $f$ is not a function of two variables, but $g$, as we defined it, is. Calculating the partial derivatives of $z$, we find $$\begin{align} \frac{\partial z}{\partial x} & = f \left( \frac{y}{x} \right) + x f' \left( \frac{y}{x} \right) \left( - \frac{y}{x^2} \right) = f \left( \frac{y}{x} \right) - \frac{y}{x} f' \left( \frac{y}{x} \right), \\ \frac{\partial z}{\partial y} & = x f' \left( \frac{y}{x} \right) \frac{1}{x} = f' \left( \frac{y}{x} \right). \end{align}$$ Notice also that $z_0 = x_0 f(y_0/x_0)$. 
Putting it all together in the tangent plane equation, we have $$z - x_0 f \left( \frac{y_0}{x_0} \right) = \left( f \left( \frac{y_0}{x_0} \right) - \frac{y_0}{x_0} f' \left( \frac{y_0}{x_0} \right) \right) (x - x_0) + \left( f' \left( \frac{y_0}{x_0} \right) \right) (y-y_0).$$ Expand out and you'll simplify it down to $$z = \left( f \left( \frac{y_0}{x_0} \right) - \frac{y_0}{x_0} f' \left( \frac{y_0}{x_0} \right) \right) x + \left( f' \left( \frac{y_0}{x_0} \right) \right) y.$$ We see then that $x=y=z=0$ is a solution to this plane equation and therefore it passes through the origin.
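A quick numerical sanity check of this conclusion (a sketch, not part of the original answer; the choice of $f$ and the base point are hypothetical): build the tangent plane from finite-difference partials and evaluate it at the origin.

```python
import math

def partials(f, x0, y0, h=1e-6):
    """Central finite differences for the surface z = x * f(y / x)."""
    z = lambda x, y: x * f(y / x)
    zx = (z(x0 + h, y0) - z(x0 - h, y0)) / (2 * h)
    zy = (z(x0, y0 + h) - z(x0, y0 - h)) / (2 * h)
    return z(x0, y0), zx, zy

# hypothetical choice of f and base point, just for illustration
f = math.sin
x0, y0 = 1.3, 0.7
z0, zx, zy = partials(f, x0, y0)

# tangent plane: z - z0 = zx*(x - x0) + zy*(y - y0); evaluate at (x, y) = (0, 0)
value_at_origin = z0 - zx * x0 - zy * y0
print(value_at_origin)   # ~ 0: the plane passes through the origin
```

Algebraically this is exactly the cancellation in the answer: $z_0 - z_x x_0 - z_y y_0 = x_0 f - x_0(f - \tfrac{y_0}{x_0}f') - y_0 f' = 0$.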
H: Fourier transform convention: $\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)e^{\pm ikx}dx $? I've come across the Fourier transform being defined as: $$\tilde{f}(k)=\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)e^{ikx}dx$$ But this convention is not present in the Wikipedia article. The one given there, under "Fourier transform: unitary, angular frequency" has a minus sign in the exponent. Are the two equivalent? Switching variables from $x$ to $-x$ wouldn't work because I would get $f(-x)$. Is it perhaps something to do with the symmetric nature of $e^{ikx}$ if expressed in trigonometric form? AI: The conventions each have a purpose, and there is a relationship between them all. A general relation which covers all of the standard conventions is (see here for details) $$\hat{f}(k) = \sqrt{\frac{|b|}{(2 \pi)^{1-a}}} \int_{-\infty}^{\infty} dx \, f(x) \, e^{i b k x}$$ $$f(x) = \sqrt{\frac{|b|}{(2 \pi)^{1+a}}} \int_{-\infty}^{\infty} dk \, \hat{f}(k) \, e^{-i b k x} $$ Note that the "Physics" convention has $(a,b)=(1,1)$, while the "Mathematics" convention has $(a,b)=(1,-1)$. I was also exposed to an "electrical engineering" convention that has $(a,b)=(0,2 \pi)$. The question of whether they are equivalent is tricky. Of course they are, but one must be careful in defining the scale of one's frequency space before blindly expecting equality.
H: Convert double integral from Cartesian coordinates to polar coordinates I have the integral $$\int_{-3}^3 \int_0^\sqrt{9-x^2} (x^2 + y^2)^{3/2} {dy}{dx}$$ I cannot solve this in its current form, so, realizing that the region is bounded by the circle ${x^2} + {y^2} = 9$, I attempted to convert the integral to polar coordinates. The integral I came up with is $$\int_{0}^{\pi} \int_{-3}^{3} r(r^2)^{3/2} {dr}{d{\theta}}$$ When I integrate that I get $$\frac{(2)9^{5/2}{\pi}}5$$ When I compute the original integral using Mathematica I get $$\frac{81{\pi}}{4}$$ Is my conversion to polar coordinates wrong or my integration? AI: Indeed, looking at the region of integration, we have $0\le\theta\le\pi$ and $0\le r\le 3$. So we have the following integral instead: $$\int_0^{\pi}d\theta\times\int_0^3 r^4\,dr=\frac{243}{5}\pi$$
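A rough midpoint-rule evaluation of the original Cartesian iterated integral (my own sketch, not part of the answer) agrees with $\frac{243\pi}{5}\approx 152.68$, confirming the polar computation:

```python
import math

def f(x, y):
    return (x * x + y * y) ** 1.5

def double_integral(nx=1500, ny=300):
    """Nested midpoint rule for the Cartesian integral over the upper half-disk."""
    total = 0.0
    hx = 6.0 / nx
    for i in range(nx):
        x = -3.0 + (i + 0.5) * hx
        ymax = math.sqrt(max(9.0 - x * x, 0.0))
        hy = ymax / ny
        inner = sum(f(x, (j + 0.5) * hy) for j in range(ny)) * hy
        total += inner * hx
    return total

exact = 243 * math.pi / 5   # the value from the polar computation
est = double_integral()
print(est, exact)
```

The midpoint rule is only first-order accurate near the curved boundary, so expect agreement to a few tenths of a percent at this resolution.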
H: Fundamental group of the topological space obtained by identifying the four vertices of a square The task is: Compute the fundamental group of the topological space obtained by identifying the four vertices of a square. So we identify the vertices with the same letter. Can we say something about the orientation of the sides? How can we use the van Kampen theorem here? AI: I wouldn't use van Kampen. I would just note that this space deformation retracts to the union of the two diagonals of the square, which is an X shape with the four ends glued together. This in turn is homotopy equivalent to a wedge of three circles by contracting one of the edges. Do you know the fundamental group of this? Or, to use van Kampen, a neighborhood of the perimeter has free fundamental group generated by the four edges. Gluing in the interior of the square adds the relation that the product of these four generators is trivial. So we have $\langle a,b,c,d\,|\, abcd=1\rangle$, which is isomorphic to a free group on three generators.
H: Evaluating the surface integral $\iint_\Sigma \mathbf{f} \cdot d \mathbf{a}$ where $\mathbf{f}(x,y,z)=(x^2,xy,z)$ Evaluate the surface integral $\iint_\Sigma \mathbf{f} \cdot d \mathbf{a}$ where $\mathbf{f}(x,y,z)=(x^2,xy,z)$ and $\Sigma$ is the part of the plane $6x+3y+2z=6$ with $x,y,z\geq 0$. I changed the function to parametric form, $\mathbf{r} (u,v)=(u^2,uv,3-3u-3v/2)$. Then $\mathbf{r}_u\times \mathbf{r}_v=(-3v/2+3u,3u,2u^2)$. So the integral is $\int^2_0\int^1_0(-3u^2v/2+6u^2-3u^3)\,dudv=3/2$. But the answer turns out to be $15/4$; what's wrong with my solution? AI: You can choose a parametrization of the plane as $$\textbf{r}(x,y) = \left( x, y, \frac{6-6x-3y}{2} \right).$$ We find the expression of the field on the surface by composing, yielding $$\textbf{f}(\textbf{r}(x,y)) = \left( x^2, xy, \frac{6-6x-3y}{2} \right).$$ To find the projected region in the $xy$-plane we set $z=0$, finding $$\frac{6-6x-3y}{2} = 0 \iff 6x+3y=6 \iff 2x+y=2.$$ The normal vector is computed as $\textbf{r}_x \times \textbf{r}_y = (3, 3/2, 1)$. We have that the surface integral is $$\iint\limits_{\Sigma} \textbf{f} \cdot d \textbf{a} = \int_0^1 \int_0^{2-2x} 3x^2 + \frac{3xy}{2} + \frac{6-6x-3y}{2} \, dy \, dx.$$
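For what it's worth, the iterated integral at the end of the answer can be evaluated numerically (my own sketch). It comes out to $7/4$, which also matches a divergence-theorem cross-check on the tetrahedron ($\nabla\cdot\mathbf{f}=3x+1$, and the three coordinate faces contribute zero flux), so the quoted $15/4$ may correspond to a different orientation or problem statement and is worth re-checking:

```python
def integrand(x, y):
    # f(r(x,y)) . (r_x x r_y) = 3x^2 + (3/2)xy + (6 - 6x - 3y)/2
    return 3 * x * x + 1.5 * x * y + (6 - 6 * x - 3 * y) / 2

def flux(nx=400, ny=400):
    """Midpoint rule over the triangle 0 <= x <= 1, 0 <= y <= 2 - 2x."""
    total = 0.0
    hx = 1.0 / nx
    for i in range(nx):
        x = (i + 0.5) * hx
        ymax = 2 - 2 * x
        hy = ymax / ny
        total += sum(integrand(x, (j + 0.5) * hy) for j in range(ny)) * hy * hx
    return total

val = flux()
print(val)   # ~ 1.75 = 7/4
```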
H: Mayer-Vietoris sequence for the figure eight On my professor's solutions for my last algebraic topology homework, he gets the following Mayer-Vietoris sequence for the figure eight space (the wedge of two circles): $0\to H_{2}(X)\to 0\to \mathbb{Z}\oplus \mathbb {Z}\to H_{1}(X)\to \mathbb{Z}\overset{\varphi_{*}}{\to}\mathbb{Z}\oplus\mathbb{Z}\to\mathbb{Z}\to0$ I didn't have a problem getting that far, but then he appeals to the "rank theorem" for finitely generated abelian groups, which we proved on a previous homework. For a short exact sequence of finitely generated abelian groups: $0\to A\to B\to C\to 0$ The ranks of the free parts of $A$ and $C$ sum to the rank of the free part of $B$. He appeals to this to claim that it forces $\varphi_{*}$ to be injective, but I don't see how this follows. (He doesn't have any explanation in between and my algebra is not great.) Could someone explain further how the rank theorem implies that $\varphi_{*}$ is injective? AI: It's because when you're given an exact sequence $$\cdots \to A_3 \stackrel{\psi}{\to} A_2 \to A_1 \to A_0 \to 0,$$ you get a short exact sequence $$0\to A_2/\psi(A_3) \to A_1 \to A_0 \to 0.$$ In your case, this gives $$0\to \mathbb{Z}/(\text{something})\to \mathbb{Z}\oplus \mathbb{Z}\to \mathbb{Z}\to 0.$$ The only way for this to happen is if the "something" is $0$. Indeed, by the rank theorem the rank of $\mathbb{Z}/(\text{something})$ plus the rank of $\mathbb{Z}$ must equal the rank of $\mathbb{Z}\oplus\mathbb{Z}$, so $\mathbb{Z}/(\text{something})$ has rank $1$; a quotient $\mathbb{Z}/(s)$ is free of rank $1$ only when $s=0$. Here the "something" is the image of the previous map, which by exactness is the kernel of $\varphi_{*}$, so $\varphi_{*}$ is injective.
H: How would I find the residue of $\text{sech}$ and $\coth$ at their poles? I thought I had understood this, but I'm now lost when trig. functions are introduced and I don't know how to continue. I attempted to apply the $\lim_{z \to a} (z-a)f(z)$ on it, but that didn't take me far. AI: The residues of $\coth$ are simple, since $$\coth z = \frac{\cosh z}{\sinh z} = \frac{\sinh' z}{\sinh z},$$ and the residue of $\dfrac{f'(z)}{f(z)}$ is always the multiplicity of the zero of $f$ (poles are counted as zeros of negative multiplicity). Since $\sinh$ has only simple zeros, all residues of $\coth$ are $1$. The poles of $\coth$ are the zeros of $\sinh$, that is $k\pi i$ for $k \in \mathbb{Z}$. For $\operatorname{sech}$, the matter is similar, but not quite as easy. If $f$ has a simple zero at $z_0$, the residue of $\dfrac1f$ at $z_0$ is $\dfrac{1}{f'(z_0)}$. Since $\cosh' = \sinh$, the residues of $\operatorname{sech}$ are then the reciprocals of the values of $\sinh$ at the zeros of $\cosh$. Since $\cosh^2 z - \sinh^2 z \equiv 1$, $\sinh z_0 = \pm i$ when $\cosh z_0 = 0$, and it remains to find the sign. The poles of $\operatorname{sech}$ are the zeros of $\cosh$, that is $(k+\frac12)\pi i$ for $k\in\mathbb{Z}$. We have $\sinh \frac{\pi i}{2} = i\sin \frac{\pi}{2} = i$, so the residue of $\operatorname{sech}$ at $\frac{\pi i}{2}$ is $\frac{1}{i}$, and since the sign of $\sinh z_k$ alternates, we have $$\operatorname{Res}\left(\operatorname{sech};z_k \right) = \frac{(-1)^k}{i},$$ where $z_k = (k+\frac12)\pi i$.
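These residues are easy to double-check numerically (my own sketch) by evaluating $\frac{1}{2\pi i}\oint f(z)\,dz$ on a small circle around each pole; the trapezoid rule on a periodic analytic integrand converges extremely fast.

```python
import cmath, math

def residue(f, z0, r=0.5, n=1024):
    """(1/2*pi*i) * contour integral of f on a circle of radius r around z0."""
    s = 0 + 0j
    for j in range(n):
        t = 2 * math.pi * j / n
        z = z0 + r * cmath.exp(1j * t)
        s += f(z) * 1j * r * cmath.exp(1j * t)   # f(z(t)) * z'(t)
    return s * (2 * math.pi / n) / (2j * math.pi)

coth = lambda z: cmath.cosh(z) / cmath.sinh(z)
sech = lambda z: 1 / cmath.cosh(z)

print(residue(coth, 0))                    # ~ 1
print(residue(sech, 1j * math.pi / 2))     # ~ -1j, i.e. 1/i
print(residue(sech, 3j * math.pi / 2))     # ~ 1j  (k = 1: the sign alternates)
```

The radius $0.5$ keeps the contour well away from the neighboring poles (a distance $\pi$ apart on the imaginary axis).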
H: Build regular grammar from regular expression Is there an algorithm for creating a regular grammar directly from a regular expression? All the discussions and notes I found so far go through an intermediary step of creating an FA for the reg ex and then the regular grammar from the FA. E.g., for a reg ex $a(b|c)*d$, what's the algorithm for figuring out the rewriting rules without first building the FA? I already built the grammar, but did so intuitively, I'd really like to see a generic algorithm that can be applied on any reg ex to build the grammar's productions. Thanks AI: You can do it by combining grammars. Notation: $A,B,C,...$ will be non-terminals $a,b,c,...$ will be terminals $\alpha,\beta, ...$ will be "whatever can be here" $x$ and $y$ will be regular expressions The grammar corresponding to a regular expression $x$ will be denoted $f(x)=(N_x,\Sigma,P_x,S_x)$ $f(a)=(\{S\},\Sigma, \{S\to a\}, S)$ $f(x\mid y)=(N, \Sigma, P, S)$ $N=N_x\cup N_y \cup \{S\}$ $P=P_x\cup P_y\cup\{S\to \alpha \mid S_x\to \alpha\in P_x\}\cup\{S\to \alpha \mid S_y\to \alpha\in P_y\} $ The idea is that you want to be able to go one way or the other, but if you add $S\to S_x$ and $S\to S_y$, then your grammar isn't regular anymore. You might want to replace both $S_x$ and $S_y$ and their associated rules, but if you do that and the grammars use $S_x$ or $S_y$ somewhere on the right-hand side, it might fail. For example if they represent $a^*$ and $b^*$ in such a way, then instead of $a^*\mid b^*$, your grammar would represent $(a\mid b)^*$ which isn't the same thing. 
Here's the grammar for $a^*$: $S\to \varepsilon$ $S\to aS$ $f(x\cdot y)=(N, \Sigma, P, S_x)$ $N=N_x\cup N_y$ $P=\{A\to bC\mid A\to bC\in P_x\} \cup \{A\to bS_y \mid A\to b \in P_x\}\cup \{A\to S_y \mid A\to \varepsilon \in P_x\} \cup P_y$ The idea is that since your grammar is regular, you'll always have a unique non-terminal until you finish your word with a rule not producing a non-terminal, so you simply replace those rules in the first grammar by rules saying to continue on the second grammar. $f(a^*)=(N_a,\Sigma, P_a\cup \{A\to bS_a\mid A\to b\in P_a\}\cup \{A\to S_a\mid A\to \varepsilon\in P_a\}\cup \{S_a\to \varepsilon\}, S_a)$ $f(x^*)=(N_x,\Sigma, P, S_x)$ $P=P_x \cup \{A\to bS_x \mid A\to b \in P_x\}\cup \{A\to S_x \mid A\to \varepsilon \in P_x\}\cup \{S_x\to \varepsilon\}$ Same thing as above, but you also take the rules giving simply the non-terminal or $\varepsilon$ to allow ending right away.
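The combinators above translate directly into code. Here is a compact Python sketch of the construction (the representation and names are my own): a right-linear rule is a pair `(t, B)` where `t` is a terminal or `None` and `B` is a non-terminal or `None`, so `(t, None)` is $A\to t$, `(None, B)` is the unit rule $A\to B$, and `(None, None)` is $A\to\varepsilon$. Membership is tested by simulating the grammar as an NFA.

```python
import itertools

_counter = itertools.count()

def fresh():
    """Fresh non-terminal name, so combined grammars never clash."""
    return f"N{next(_counter)}"

def atom(a):                              # f(a): S -> a
    S = fresh()
    return {S: {(a, None)}}, S

def union(g1, g2):                        # f(x | y)
    (p1, s1), (p2, s2) = g1, g2
    S = fresh()
    prods = {A: set(r) for A, r in p1.items()}
    prods.update({A: set(r) for A, r in p2.items()})
    prods[S] = set(p1[s1]) | set(p2[s2])  # S gets copies of both start rules
    return prods, S

def concat(g1, g2):                       # f(x . y)
    (p1, s1), (p2, s2) = g1, g2
    prods = {A: set(r) for A, r in p2.items()}
    for A, rules in p1.items():
        # keep A -> tB; turn A -> t into A -> t S2 and A -> eps into A -> S2
        prods[A] = {(t, B if B is not None else s2) for (t, B) in rules}
    return prods, s1

def star(g):                              # f(x*)
    p, s = g
    prods = {A: set(r) for A, r in p.items()}
    for A, rules in p.items():
        for (t, B) in list(rules):
            if B is None:
                prods[A].add((t, s))      # loop back to the start
    prods[s].add((None, None))            # S -> eps
    return prods, s

def accepts(grammar, w):
    """NFA-style simulation: the current non-terminal is the state."""
    prods, start = grammar
    def close(states):                    # follow unit rules A -> B
        stack, seen = [s for s in states if s != "DONE"], set(states)
        while stack:
            A = stack.pop()
            for (t, B) in prods.get(A, ()):
                if t is None and B is not None and B not in seen:
                    seen.add(B); stack.append(B)
        return seen
    cur = close({start})
    for ch in w:
        nxt = set()
        for A in cur:
            for (t, B) in prods.get(A, ()) if A != "DONE" else ():
                if t == ch:
                    nxt.add(B if B is not None else "DONE")
        cur = close(nxt)
    return "DONE" in cur or any(
        (None, None) in prods.get(A, ()) for A in cur if A != "DONE")

# the example from the question: a(b|c)*d
g = concat(atom("a"), concat(star(union(atom("b"), atom("c"))), atom("d")))
print(accepts(g, "abcbd"), accepts(g, "abc"))
```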
H: How to show that the number of triples of primes whose sum is a fixed integer equals an integral? Let $I_F = \int_0^1 F(\alpha)^3 e^{-\alpha n} d\alpha$, where $F(\alpha)=\sum_{p\leq n} e^{\alpha p}$, $n$ is an integer. It is said that $I_F$ is the number of $(p_1, p_2, p_3)$ such that $p_1, p_2, p_3$ are primes and $p_1+p_2+p_3=n$. How to prove this? Thank you very much. AI: Note that $$F(\alpha)^3 =\sum_{p_1,p_2,p_3 \leq n} e^{\alpha(p_1+p_2+p_3)} = \sum_k \sum_{p_1+p_2+p_3 =k} e^{\alpha k},$$ and for $k \neq n$, $$ \int_0^1 e^{\alpha k} e^{-\alpha n} d\alpha =0.$$ I am not sure about your notation, but I guess that $e^\alpha := \exp(2i\pi \alpha)$.
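With the convention $e^{\alpha} := \exp(2i\pi\alpha)$ this is easy to verify computationally (my own sketch): the integrand is a trigonometric polynomial, so a uniform Riemann sum with enough points ($N > 2n$ here) reproduces the integral exactly, and it matches a direct triple count.

```python
import cmath, math

def primes_upto(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, math.isqrt(p) + 1))]

def count_via_integral(n, N=256):
    """Discretized I_F with e^alpha := exp(2*pi*i*alpha); exact when N > 2n."""
    ps = primes_upto(n)
    total = 0 + 0j
    for k in range(N):
        a = k / N
        F = sum(cmath.exp(2j * math.pi * a * p) for p in ps)
        total += F ** 3 * cmath.exp(-2j * math.pi * a * n)
    return (total / N).real

def count_direct(n):
    ps = primes_upto(n)
    return sum(1 for p1 in ps for p2 in ps for p3 in ps if p1 + p2 + p3 == n)

print(count_via_integral(30), count_direct(30))
```

The discrete sum is exact because every frequency $p_1+p_2+p_3-n$ has absolute value at most $2n < N$, so the only surviving term is the one with $p_1+p_2+p_3=n$.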
H: $\frac{1}{n}\sum_{i=1}^nZ_i \rightarrow \int_0^1f(x)dx$ PROBLEM Let $X_1,Y_1,X_2,Y_2,...$ be a sequence of independent random variables, all distributed uniformly on $[0,1]$. Let $f: [0,1] \rightarrow [0,1]$ be a continuous function. Define $Z_i = 1_{f(X_i)>Y_i}$. $a)$ Show that almost surely $\frac{1}{n}\sum_{i=1}^nZ_i \rightarrow \int_0^1f(x)dx$ $b)$ Show that $E(\frac{1}{n}\sum_{i=1}^nZ_i - \int_0^1f(x)dx)^2 \leq \frac{1}{4n}$ I don't know how to attack this problem; I hope someone can help me. AI: Hints: a) Note that the $Z_i$ are i.i.d. random variables with mean $\mu = E[Z_i]$. Therefore the LLN tells you that $$ \frac{1}{n} \sum_{i=1}^n Z_i \to \mu $$ almost surely. Note also: $$ E[Z_i] = \iint_{\{(x,y):f(x)>y\}} \rho_{X_i,Y_i}(x,y) \,dx\,dy = \int_0^1 \int_0^{f(x)} \,dy\,dx $$ where I used $\rho_{X_i,Y_i}(x,y)$ as the joint density of $(X_i, Y_i)$. b) $$ E\big[\big(\frac{1}{n}\sum_1^n Z_i - \mu \big)^2\big] = \frac{1}{n^2}E\big[\big(\sum_{1}^n \{Z_i - \mu\}\big)^2 \big] $$ Since each $Z_i$ is Bernoulli, $\operatorname{Var}(Z_i)=\mu(1-\mu)\leq\frac14$, which gives the bound.
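Part (a) is the classic "hit-or-miss" Monte Carlo estimator, and it is easy to see experimentally (my own sketch, with a hypothetical choice $f(x)=x^2$, so $\int_0^1 f = 1/3$):

```python
import random

def monte_carlo(f, n, seed=12345):
    """Estimate the integral of f over [0,1] via Z = 1_{f(X) > Y}, X, Y ~ U[0,1]."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if f(rng.random()) > rng.random())
    return hits / n

est = monte_carlo(lambda x: x * x, 200_000)
print(est)   # ~ 1/3
```

The deviation observed here is on the order of $\sqrt{\mu(1-\mu)/n}\approx 0.001$, consistent with the $\frac{1}{4n}$ variance bound in part (b).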
H: Probability density problem Suppose that on each day that I cycle to work, there is a probability 0.33 that I get wet because of rain. Suppose I cycle to work on 12 days; what is the probability that I get wet on more than 2 days? Give your solution accurate to 4 decimal places. I don't know how to set up the distribution. AI: The probability $P$ wanted is $1$ minus the probability that you get wet on no days, $P_0$, on one day only, $P_1$, or on two days, $P_2$. The probability that you don't get wet on any day is simply $P_0=0.67^{12}$. The probability that you get wet on one day only is $P_1=12\times0.33\times 0.67^{11}$, since there are $12$ possible days for you to get wet. $P_2=\binom{12}{2}\times0.33^2\times0.67^{10}$, since there are $\binom{12}{2}$ ways of choosing the two days you will get wet. So, we have $$P=1-P_0-P_1-P_2=1-0.67^{12}-12\times0.33\times 0.67^{11}-\binom{12}{2}\times0.33^2\times0.67^{10}\approx0.8124$$
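The arithmetic is easy to double-check with the binomial pmf (my own one-liner sketch): the number of wet days is $\mathrm{Bin}(12, 0.33)$, and $P(X>2)=1-P(X\le 2)$.

```python
from math import comb

n, p = 12, 0.33
pmf = lambda k: comb(n, k) * p**k * (1 - p) ** (n - k)   # binomial pmf
prob_more_than_two = 1 - pmf(0) - pmf(1) - pmf(2)
print(round(prob_more_than_two, 4))   # 0.8124
```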
H: If $f(x) < g(x)$, prove that $\int_a^b f(x) dx < \int_a^b g(x)dx.$ 1) Let $f$ and $g$ be Riemann integrable functions on $[a,b]$. Suppose that $f(x) < g(x)$ for each $x\in [a,b]$. Prove that $\int_a^b f(x) dx < \int_a^b g(x)dx.$ Basically my idea was to break the integral down into partitions and show the inequality for each... here's my work... not sure if it's correct Let $f$ and $g$ be Riemann integrable functions on $[a,b]$ and suppose that $f(x) < g(x)$ for each $x\in [a,b]$. Since $f$ and $g$ are Riemann integrable, we know that the points at which they are discontinuous form a zero set. Then $\exists A \subseteq [a,b]$ such that A is not a zero set and $\forall z\in A$, $f$ and $g$ are continuous at $z$. Suppose $x_0\in A$. Then $f(x_0) < g(x_0)$, since $x_0$ is in the domain of $f$ and $g$. Now let $\epsilon = g(x_0) - f(x_0)$. By the continuity of $f$ and $g$ at $x_0$, we have that $\exists \delta > 0 $ such that if $x\in (x_0 - \delta, x_0 + \delta)$ then $\mid f(x_0) - f(x) \mid < {\epsilon \over 4}.$ Then $f(x_0) - {\epsilon \over 4} < f(x) < f(x_0) + {\epsilon \over 4}$. Similarly, $g(x_0) - {\epsilon \over 4} < g(x) < g(x_0) + {\epsilon \over 4}$. Then $(g(x_0) - {\epsilon \over 4}) - (f(x_0) + {\epsilon \over 4}) < g(x) - f(x)$, so $f(x) + {\epsilon \over 2} < g(x) $. Now take $$\int_a^b g(x)dx = \int_a^{x_0 - \delta} g(x)dx + \int_{x_0 - \delta}^{x_0 + \delta} g(x)dx + \int_{x_0 + \delta}^b g(x)dx. $$ We know $$\int_a^{x_0 - \delta} f(x)dx \leq \int_a^{x_0 - \delta} g(x)dx,$$ $$ \int_{x_0 + \delta}^b f(x) dx \leq \int_{x_0 + \delta}^b g(x) dx, $$ and $$ \int_{x_0 - \delta}^{x_0 + \delta} (f(x) + {\epsilon \over 2})dx \leq \int_{x_0 - \delta}^{x_0 + \delta} g(x)dx. $$ But \begin{align} \int_{x_0 - \delta}^{x_0 + \delta} (f(x) + {\epsilon \over 2})dx & = \int_{x_0 - \delta}^{x_0 + \delta} f(x) dx + \int_{x_0 - \delta}^{x_0 + \delta} {\epsilon \over 2} dx\\ &= \int_{x_0 - \delta}^{x_0 + \delta} f(x) dx + {\epsilon \over 2}\mid 2\delta \mid. 
\end{align} Thus $$ \int_{x_0 - \delta}^{x_0 + \delta} f(x) dx < \int_{x_0 - \delta}^{x_0 + \delta} g(x) dx.$$ Thus $\int_a^b f(x) dx < \int_a^b g(x)dx.$ Is this ok? AI: Let $h(x) = g(x) - f(x)$ then $h(x)$ is Riemann-integrable on $[a, b]$ and hence is continuous almost everywhere. Thus there must be an interior point $c \in (a, b)$ where $h(x)$ is continuous. Since $h(c) > 0$ it follows that there is an interval $[c -\delta, c + \delta]$ where $h(x)$ is positive and thus $m_{c} = \inf \{h(x)\mid x \in [c -\delta, c + \delta]\} > 0$. Clearly we have $$\int_{a}^{b}h(x)\,dx \geq \int_{c - \delta}^{c + \delta}h(x)\,dx \geq 2\delta\cdot m_{c} > 0$$ and hence $$\int_{a}^{b}f(x)\,dx < \int_{a}^{b}g(x)\,dx$$ Update: Looking at OP's comment to my answer even I figured out a much simpler solution which avoids Lebesgue's theorem ("a bounded function is Riemann integrable if and only if it is continuous almost everywhere"). This may or may not be what OP had in his mind. Since $h(x)$ is integrable over $[a, b]$ both lower and upper sums for $h(x)$ converge to same value $A$. And clearly since $h(x) > 0$ we must have lower sums as non-negative so that $A \geq 0$. Suppose $A = 0$. Then the upper sums also converge to $0$. It follows that the upper bound of $h(x)$ must be $0$ in some sub-interval. This contradicts that $h(x) > 0$. Hence we must have $A > 0$ i.e. $\int_{a}^{b}h(x)\,dx > 0$. Update: The argument in the preceding paragraph is wrong. The proper proof is not that easy. Do have a look at this answer.
H: Proving that $2^n$ is greater than a binomial expression This is from a friend's textbook. There is a really obvious counting argument, but it is a calculus, not a combinatorics, textbook, and the answer probably involves messing with algebraic equations. By considering $(1+x)^n$ for suitable $x$, show $$2^n > \frac{n(n-1)(n-2)}{3!}$$ The counting argument is that the LHS is the total number of ways to choose any set of items out of $n$ items, while the RHS is the number of ways to choose three items, which is clearly much less. However, I can't seem to think of an algebraic method of doing this. The hint seems to be related to the binomial theorem, but how does that involve $2^n$? Thanks a ton! AI: $2^n=(1+1)^n=\binom n 0+\binom n 1+\binom n 2+\binom n 3+\ldots+\binom n n$ Since every term on the right is nonnegative and $\binom n 0=1>0$, it follows that $2^n\ge\binom n 3+1>\frac{n(n-1)(n-2)}{3!}$ (and for $n<3$ the right-hand side is $\le 0$, so the inequality is immediate).
H: Calculation of coefficients of a Fourier series Calculating the Fourier series of a periodic function I need to evaluate these integrals: $$1) \int_{-\pi}^{\pi}dt\left(\cos^{-1}(\alpha t-1)+2(1-\alpha t)\sqrt{\frac{1}{2}\alpha t-\frac{1}{4}\alpha^2t^2}\right)\sin(t)$$ $$2) \int_{-\pi}^{\pi}dt\left(\cos^{-1}(\alpha t-1)+2(1-\alpha t)\sqrt{\frac{1}{2}\alpha t-\frac{1}{4}\alpha^2t^2}\right)\cos(t)$$ Knowing alpha, it's easy to calculate $1)$ and $2)$ numerically. The problem is that I don't know $\alpha$, so I would like to have an analytical evaluation of these two integrals. Is there some method to calculate them? Thanks. AI: Probably not too helpful (kind of obvious), but I think the first step is to recognize that each of them is an integral of a sum, which can be simplified to a sum of integrals. As for actually integrating the part on the right....not sure.
H: Does $ \log(x)^{x^a}$ eventually dominate $x^k$? Does $ \log(x)^{x^a}$ eventually dominate $x^k$ for all $a\gt 0$ and for all positive integers $k$? And if so, how does one prove this? Thanks a lot for your help. AI: $$\begin{align} \log\bigl((\log x)^{x^a}\bigr)&=x^a\log\log x\\ \log(x^k)&=k\log x \end{align}$$ For all $a,k>0$, $$ \lim_{x\to\infty}\frac{x^a\log\log x}{k\log x}=\infty, $$ so the logarithm of $(\log x)^{x^a}$ eventually exceeds that of $x^k$, and hence $(\log x)^{x^a}$ eventually dominates $x^k$.
H: Predicting data in many dimensions I have two matrices derived from one matrix of the original data. One is the training set, the other is the validation set. Each matrix has rows = examples, columns = features. The proportions are 65% vs 35% respectively. Given that the data is in many dimensions and it is not possible to visualize it, what would you suggest to use to make predictions? I was initially thinking about a polynomial fit, but how does one know which of the 65 features to square, cube, etc? AI: You are describing the very complex problem of feature selection (as it is known in the machine learning and statistics community). http://en.wikipedia.org/wiki/Feature_selection What do you want to predict, anyway? Do you have a classification for each example (each row) or a numerical outcome?
H: The probability of rolling a die There is a die with six faces numbered consecutively from 1 to 6. What is odd about it is that the probability of rolling the face with number $k$ on it is $cq^k$, where $c$ is a constant and $q = 0.9$. What is the expected value of a roll of the die? I could not get the constant $c$; I got stuck there first. AI: You need that $\sum_{i=1}^6 cq^i = 1 \Rightarrow c = 1/(\sum_{i=1}^6 0.9^i)$. Then the expected value is $\sum_{i=1}^6 i\,cq^i$.
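Carrying the hint through numerically (my own sketch): normalize the weights $q^k$, then take the weighted average of the faces.

```python
q = 0.9
weights = {k: q**k for k in range(1, 7)}
c = 1 / sum(weights.values())                    # chosen so the probabilities sum to 1
probs = {k: c * w for k, w in weights.items()}
expected = sum(k * pk for k, pk in probs.items())
print(c, expected)   # c ~ 0.2371, expected value ~ 3.195
```

Note the expected value is below $3.5$, as it should be: the weighting $q^k$ with $q<1$ favors the smaller faces slightly.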
H: Solving an equation by the fixed point theorem This is taken from a chapter on completeness of metric spaces in Real Analysis, Carothers, 1st ed., and I have been confused by an application of the fixed point theorem. The statement of the fixed point theorem is: An application of it is shown as follows: Why does the author consider the form $f(x)=x-\lambda F(x)$? I mean, how did he get this idea? It is a little interesting since the form looks like the Lagrangian used in optimization. Well, just guessing. Thanks^_^ AI: You want to solve an equation $F(x)=0$ by a fixed point method. To do so, you must transform it into an equation of the form $f(x)=x$ for an appropriate $f$ with exactly the same solutions. The simplest choice is $x-F(x)=x$, leading to $f(x)=x-F(x)$. However the function $f$ may not satisfy the conditions of the fixed point theorem. So, after a little thinking, you may come up with the equation $x-\lambda\,F(x)=x$, $\lambda\ne0$, which has the same solutions as $F(x)=0$. It is very difficult to describe the chain of thoughts that lead to this; all I can say is that someone with experience in problem solving will think about it sooner or later. The next problem is how to choose $\lambda$ in an optimal way, but that was not your question. A further step is to consider the equation $x-H(F(x))=x$, where $H$ is defined on a neighborhood of $0$ and $H(x)=0$ if and only if $x=0$ (the previous method appears if we take $H(x)=\lambda\,x$.)
H: Equations with exponents I can't remember how to solve equations that have exponent and a variable in them. This is somewhat embarrassing, because this used to be really easy for me. I know that logarithms are involved I just can't remember how they are involved. Would anybody be able to help me out? Here is the equation, which needs to be solved for $\alpha$ $$ c_1(e^{-2\alpha - c_2} -\alpha + c_3) = c_4 $$ Any help is appreciated. AI: This equation does not have solutions that can be expressed in terms of elementary functions. You'd need the Lambert W function, or solve numerically.
H: Sets and expectations Imagine two sets $A = \{1, 2, \dots, a\}$ and $B = \{1, 2, 3, \dots, b\}$ with $a \leq b$. Let $f:A\rightarrow B$ be a uniformly random map and let $F = f(A) = \{f(i) : i \in A\}$ be its image. If I pick different functions $f$ until I find one such that $|A| = |F|$ what is the expected number of functions I pick? What is the probability that none of the $p$ functions I pick first results in $|A| = |F|$? This looks like a sampling problem? How to think about it? AI: Let $G$ denote the set of functions such that $|F|=a$. Since $|G|=b!/(b-a)!$, the event that a function is in $G$ has probability $$ q=\frac{b!}{(b-a)!b^a}. $$ The rank $N$ of the first function in $G$ is such that $P[N=n]=(1-q)^{n-1}q$, thus, $P[N\gt n]=(1-q)^n$, for every $n\geqslant1$; in particular $E[N]=1/q$. Let $M$ denote the random set of the different functions not in $G$ picked before the first one in $G$. Any given function not in $G$ is in $M$ if and only if it was picked before any of the $|G|$ functions in $G$. This happens with probability $1/(|G|+1)$ and there are $b^a-|G|$ functions not in $G$, hence the expected size of $M$ is $(b^a-|G|)/(|G|+1)$. Thus, $$ E[|M|]=\frac{1-q}{b^{-a}+q}. $$ Edit: A new question asked in the comments is to compute $E[|F|]$. This can be done using the trick of evaluating the probability, for any given $f$, to be in $F$. Rather, the probability that $f$ is not in $F$ is $(1-b^{-a})^a$ hence $$ E[|F|]=\sum_fP[f\in F]=b^a(1-(1-b^{-a})^a). $$
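The probability $q$ (a picked function is injective, i.e. $|F|=a$) can be checked by brute-force enumeration of all $b^a$ maps for small $a,b$ (my own sketch):

```python
from itertools import product
from math import factorial

def q_exact(a, b):
    """q = b! / ((b-a)! * b^a), the probability a uniform map A -> B is injective."""
    return factorial(b) / (factorial(b - a) * b**a)

def q_enumerated(a, b):
    maps = list(product(range(1, b + 1), repeat=a))   # all functions A -> B
    good = sum(1 for f in maps if len(set(f)) == a)   # |F| = a, i.e. injective
    return good / len(maps)

for a, b in [(2, 3), (3, 5), (4, 6)]:
    print(a, b, q_exact(a, b), q_enumerated(a, b))
```

Given $q$, the number of picks $N$ is geometric, so $E[N]=1/q$ and the probability that the first $p$ picks all fail is $(1-q)^p$, matching the answer.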
H: deducing $\lnot B \implies \lnot A$ from $A \implies B$ One way to prove a statement of the form $A \implies B$ is to presume that $A$ is true and deduce $B$. Let's have $A \implies B$ and let's assume that $\lnot B$ is true. $A$ is true or it is false (duh). If it were true, $B$ would also be true. However, we know that $B$ is not true and therefore $A$ must not be true either. We conclude $\lnot B \implies \lnot A$. The other direction follows similarly. Is my deduction sound? Is there a more formal way to see these two are equivalent? AI: Yes, indeed, you've informally argued using a proof by contradiction. (1) Given $A \rightarrow B$. (2) Assume $\lnot B$. (3) Assume $A$. (4) Then $B\;$ ((1) & (3), modus ponens) (5) Then $\lnot B \land B\;$ ((2) & (4) $\land$-Introduction) (6) $\perp\;$ Contradiction. (5) (7) Therefore $\lnot A\;$ ((3) - (6), $\lnot$-Introduction) (8) Therefore $\lnot B \rightarrow \lnot A\;$ (2-7) Similarly, we can deduce $A\rightarrow B$ if given that $\lnot B \rightarrow \lnot A$. With both directions proven, we will have then proven (by natural deduction) the equivalence of an implication and its contrapositive: $$A \rightarrow B \iff \lnot B \rightarrow \lnot A$$
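Besides the syntactic (natural deduction) proof, the equivalence can be checked semantically by exhausting all truth assignments (a trivial script of my own):

```python
from itertools import product

# material implication: p -> q is (not p) or q
implies = lambda p, q: (not p) or q

for A, B in product([False, True], repeat=2):
    # an implication and its contrapositive agree on every assignment
    assert implies(A, B) == implies(not B, not A)
print("contrapositive equivalence verified on all truth assignments")
```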
H: weight of heaviest box? A shipping clerk has five boxes of different but unknown weights, each weighing less than 100 kg. The clerk weighs the boxes in pairs. The weights obtained are 110, 112, 113, 114, 115, 116, 117, 118, 120 and 121 kg. What is the weight of the heaviest box? The answer options are given as 60, 65, 64, 62. 62 is given as the correct answer in the book, and it is written that the other boxes shall then weigh 59, 54, 58, 56. I want to know the method used for arriving at the answer in these types of questions. AI: Hint: None of the weights can be the same, or there would be repeated results. Define the weights $a<b<c<d<e$ You know the smallest two boxes and the largest two boxes give the smallest and largest pair weights. So: $$110=a+b$$ $$121=d+e$$ The second-smallest sum must be $a+c$ and the second-largest must be $c+e$: $$112 = a+c$$ $$120 = e+c$$ Finally, if you add up all combinations: $$1156 = 4a+4b+4c+4d+4e$$ This is five equations in five unknowns. I think you should be able to do this.
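The deduction can be scripted step by step (my own sketch), confirming the book's answer:

```python
sums = [110, 112, 113, 114, 115, 116, 117, 118, 120, 121]

total = sum(sums) // 4          # 4(a+b+c+d+e) equals the sum of all 10 pair weights
c = total - 110 - 121           # subtract (a+b) and (d+e) to isolate the middle box c
a = 112 - c                     # second-smallest sum is a+c
b = 110 - a
e = 120 - c                     # second-largest sum is c+e
d = 121 - e
weights = sorted([a, b, c, d, e])
print(weights, max(weights))    # [54, 56, 58, 59, 62] 62
```

A final check that this is consistent: recomputing all ten pairwise sums from the recovered weights reproduces the given list exactly.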
H: How to compute the Hessian Matrix I want to compute the Hessian matrix of a 2-dimensional vector function. $$\begin{pmatrix}x_1 + x_2 + x_3\\x_2^2 -x_1x_2\end{pmatrix}$$ Can anyone please explain how to compute this, since I can find it nowhere. AI: The Hessian matrix of a vector function is a $(1,2)$ tensor whose entries are given by $\partial_{i}\partial_{j}f^{k}$. Now I'll let you do the rest, Sjoerd, my friend. Think of this as two matrices, each one the Hessian of a scalar function $f^{1}$ or $f^{2}$. How would you compute the Hessian of $f^{1}$? Well, $f^{1}$ is a first-order (linear) function, so all its second derivatives vanish: $\partial_{i}\partial_{j}f^{1}=0$. Now onto $f^{2}$: the only nonvanishing entries are $\partial_{2}\partial_{2}f^{2}=\partial_{2}\partial_{2}x_{2}^{2}=2$ and the mixed $\partial_{2}\partial_{1}f^{2}=-1$ (since $\partial_{1}f^{2}=-x_{2}$). Of course the Hessian is symmetric, so $\partial_{1}\partial_{2}f^{2}=-1$. These are all the nonvanishing entries; just 3.
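A finite-difference check of the two component Hessians (my own sketch; note that $\partial f^2/\partial x_1 = -x_2$, so the mixed entries come out as $-1$):

```python
def hessian(f, x, h=1e-4):
    """Finite-difference Hessian of a scalar function f taking a list of coordinates."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def at(di, dj):
                p = list(x)
                p[i] += di
                p[j] += dj
                return f(p)
            # central second difference, also valid when i == j
            H[i][j] = (at(h, h) - at(h, -h) - at(-h, h) + at(-h, -h)) / (4 * h * h)
    return H

f1 = lambda p: p[0] + p[1] + p[2]       # first component: linear, Hessian is zero
f2 = lambda p: p[1] ** 2 - p[0] * p[1]  # second component

H1 = hessian(f1, [1.0, 2.0, 3.0])
H2 = hessian(f2, [1.0, 2.0])
print(H1)   # ~ all zeros
print(H2)   # ~ [[0, -1], [-1, 2]]
```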
H: What is the focal width of a parabola? I'm not wondering what the formula is—I already know that. For a parabola in standard form of $(x-h)^2=4p(y-k)$ I know that the focal width is $|4p|$. But what does that mean, conceptually? What does that distance, $|4p|$, represent? If I were to graph the parabola, would that distance be some measurable value between the focus and something else? Or between the vertex and something else? It's easy enough to solve what the focal width is; I just want to know what the point of it is. Can someone please explain focal width, as a concept, in plain English? AI: This is the length of the focal chord (the "width" of a parabola at focal level). Let $x^2=4py$ be a parabola. Then $F(0,p)$ is the focus. Consider the line that passes through the focus and parallel to the directrix. Let $A$ and $A'$ be the intersections of the line and the parabola. Then $A(-2p,p)$, $A'(2p,p)$, and $AA'=4p$.
H: Proof of $|\int^{b}_{a}fg|^2\leq(\int^{b}_{a}|fg|)^2\leq (\int^{b}_{a}f^2)(\int^{b}_{a}g^2), \forall f,g \in {\mathscr R[a,b]}.$ There is an exercise in an Analysis textbook that requires one to establish the Cauchy- Schwarz Inequality: $|\int^{b}_{a}fg|^2\leq(\int^{b}_{a}|fg|)^2\leq (\int^{b}_{a}f^2)(\int^{b}_{a}g^2), \forall f,g \in {\mathscr R[a,b]}.$ I have managed to do so but I did not use the hints provided in the exercise (which I have successfully established) as follows: Let $f,g \in {\mathscr R[a,b]}.$ (1) $2|\int^{b}_{a}fg|\leq t\int^{b}_{a}f^2 + \frac{1}{t}\int^{b}_{a}g^2, t>0$ (2) If $\int^{b}_{a}f^2=0,$ then $\int^{b}_{a}fg=0.$ Hence, could anyone advise me on how to use that hint? Thank you. AI: If $\int_a^b f^2 = 0$ then hint 2 gives you the conclusion. Otherwise, apply hint 1 with $$t = \sqrt{\frac{\int_a^b g^2}{\int_a^b f^2}}.$$
H: How to solve trigonometric inequalities? How does one solve trigonometric inequalities? Is there a method to this, or is every solution done ad hoc? Simple equations of the type: $\cos 3x \leq 0$ when $0\leq x \leq 2\pi$. The attempt at a solution: equating $\cos 3x = 0$ yields $$ \frac{\pi}{6} + \frac{2}{3}\pi k\leq x \leq 2\pi -\frac{\pi}{6} - \frac{\pi k}{3} $$ as a general solution... what happens next? AI: There is a sort of general method. Take your example: $\cos 3x \leq 0$ for $0 \leq x \leq 2 \pi$. Like you did, first solve the equality $\cos 3x = 0$. You cannot always do this exactly, but here we can. We get $$ 3x = \frac{1}{2}\pi + 2 k \pi \quad\vee\quad 3x = \frac{3}{2} \pi + 2 k \pi. $$ By drawing a graph (or looking at the derivative of $\cos 3x$ at those points), we know that the inequality holds for $3x$ between them. So $$ \frac{1}{2}\pi + 2 k \pi \leq 3x \leq \frac{3}{2}\pi + 2 k \pi $$ It is important here that you get a representation for the interval. In your example solution, you gave representations for the endpoints of the interval with different meanings for $k$ on the left-hand side and the right-hand side. Your thing works for $k = 0$, but for $k = 10$ the left endpoint is to the right of the right endpoint. Dividing by 3 we get $$ \frac{1}{6}\pi + \frac{2}{3} k \pi \leq x \leq \frac{1}{2}\pi + \frac{2}{3} k \pi $$ This is a general solution for all of the real line. To confine ourselves to values of $x$ in $[0, 2\pi]$, we try different values of $k$ and see if the resulting interval lies within $[0, 2\pi]$. In this case, that holds for $k = 0, 1, 2$. We get: $$ k = 0 \;\rightarrow\; x \in [\frac{1}{6} \pi, \frac{1}{2} \pi] \\ k = 1 \;\rightarrow\; x \in [\frac{5}{6} \pi, \frac{7}{6} \pi] \\ k = 2 \;\rightarrow\; x \in [\frac{3}{2} \pi, \frac{11}{6} \pi] $$ The answer will be the union of these three intervals.
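The three intervals can be verified against the inequality directly by sampling the domain (my own sketch; points within floating-point distance of a boundary are skipped via a small tolerance):

```python
import math

def in_solution(x):
    """Is x in one of the three intervals found above?"""
    intervals = [(math.pi / 6, math.pi / 2),
                 (5 * math.pi / 6, 7 * math.pi / 6),
                 (3 * math.pi / 2, 11 * math.pi / 6)]
    return any(lo <= x <= hi for lo, hi in intervals)

# sample the domain and compare against the inequality directly
for k in range(1, 5000):
    x = 2 * math.pi * k / 5000
    c = math.cos(3 * x)
    if c < -1e-9:
        assert in_solution(x)
    elif c > 1e-9:
        assert not in_solution(x)
print("intervals agree with cos(3x) <= 0 on [0, 2*pi]")
```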
H: About $n$ consecutive integers such that each of them is a multiple of the element of a given set Let $S(a)$ be a set of the multiples of an integer $a$. Then, here is my question. Question : Is the following true for any $n\ge 2\in\mathbb N$ ? Supposing that any two of $n$ integers $a_1, a_2,\cdots,a_n$ are coprime, then there exists a set of $n$ consecutive integers $k,k+1,\cdots,k+n-1$ such that $$k+i\in S(a_j)\ \ \text{($i=0,1,\cdots,n-1$)}.$$ Suppose that if $k+p\in S(a_r), k+q\in S(a_s)$ and $p\not=q$ , then $r\not=s$. Example : Suppose that $a_b$ represents that $a$ is a multiple of $b$. The $(n,a_1,a_2,a_3)=(3,3,5,7)$ case has an example $[48_3, 49_7, 50_5]$. The $(n,a_1,a_2,a_3,a_4)=(4,2,3,5,11)$ case has an example $[8_2,9_3,10_5,11_{11}]$. The $(n,a_1,a_2,a_3,a_4,a_5)=(5,2,3,5,7,11)$ case has an example $[119_7,120_5,121_{11},122_2,123_3]$. Motivation : We know the $n=2$ case is true. I've been able to prove that the $n=3,4$ cases are true. The above proposition seems true for $n$ in general, but I'm facing difficulty proving it. Can anyone help? AI: We will prove a slightly stronger statement. Let $n \geq 2$ be an integer, and $a_1,\ldots,a_n$ pairwise coprime integers. Then there exists an integer $k$ such that $k + i \in S(a_{i+1})$ for all $0 \leq i \leq n-1$. The $n=2$ case is obvious. We will use induction on $n$. Let $n > 2$, and let $a_1,\ldots,a_n$ be pairwise coprime integers. By induction hypothesis, there exist integers $k,l$ such that $k + i \in S(a_{i+1})$ for all $0 \leq i \leq n-2$ and $l + i \in S(a_{i+2})$ for all $0 \leq i \leq n-2$. Write $A = a_1 a_2 \cdots a_{n-1}$, $B = a_2 a_3 \cdots a_n$ and $C = a_2 a_3 \cdots a_{n-1}$. For any $0 \leq i \leq n-3$ we have $l + i \in S(a_{i+2})$ and $k + 1 + i \in S(a_{i+2})$, and thus $l - (k+1)$ is divisible by every $a_j$ for $2 \leq j \leq n-1$. So $l - (k+1)$ is also divisible by $C$, and we can write $l - (k+1) = xC$ for some integer $x$.
Now note that gcd$(A,B) = C$, and thus there exist integers $s,t$ such that $sA + tB = C$. We see that $l-(k+1) = xC = xsA + xtB$, which we can write as $l - xtB = (k+1) + xsA$. Now consider the consecutive integers $k + xsA, k + xsA + 1, \ldots, k + xsA + (n-1)$. For $0 \leq i \leq n-2$, $A \in S(a_{i+1})$, and thus $k + xsA + i \in S(a_{i+1})$. Finally, we note that $k + xsA + (n-1) = l - xtB + (n-2)$. As $B \in S(a_n)$, we get $k + xsA + (n-1) \in S(a_n)$. So the integers $k + xsA, k + xsA + 1, \ldots, k + xsA + (n-1)$ satisfy the conditions, which completes the proof.
H: For what values of $\gamma > 0$ does $n^{\gamma} (\sqrt[n]{n} - 1)^2$ converge? This is not for homework, but I would please just like a hint. The question asks For what values of $\gamma > 0$ does $n^{\gamma} (\sqrt[n]{n} - 1)^2$ converge? I did a couple of tests, and believe that $n^{\gamma} (\sqrt[n]{n} - 1)^2 \to 0$ for $0 < \gamma < 2$, and $n^{\gamma} (\sqrt[n]{n} - 1)^2 \to \infty$ for $\gamma \geq 2$. I've been working on the second claim a bit, but haven't made any real progress. If I can show that $n^2 (\sqrt[n]{n} - 1)^2 \to \infty$ then I would be done with the second claim. The sequence $n^2 (\sqrt[n]{n} - 1)^2$ grows very slowly, so maybe I can get an easier lower bound for $\exp(n^2 (\sqrt[n]{n} - 1)^2)$? Any suggestions or hints would be greatly appreciated! AI: Hint : $$ n^{\frac{1}{n}}-1= \exp(\frac{\log(n)}{n})-1= (\frac{\log(n)}{n})+O((\frac{\log(n)}{n})^2) ... $$
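To see the hint numerically: the expansion gives $n^{1/n}-1 \approx \log(n)/n$, so the sequence behaves like $n^{\gamma-2}(\log n)^2$, which goes to $0$ for $\gamma<2$ and to $\infty$ for $\gamma\geq 2$. A short Python check (a sketch; the function name is mine):

```python
import math

def a(n, gamma):
    """n^gamma * (n^(1/n) - 1)^2"""
    return n**gamma * (n**(1.0 / n) - 1.0)**2

# the hint: n^(1/n) - 1 = log(n)/n + O((log(n)/n)^2),
# so a(n, gamma) behaves like n^(gamma - 2) * (log n)^2
for n in [10**3, 10**4, 10**5, 10**6]:
    print(n, n**(1.0 / n) - 1.0, math.log(n) / n)

print(a(10**6, 1.5), a(10**6, 2.0))   # small vs. large
```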
H: prove that $a_n$ is convergent if $\limsup a_n \cdot \limsup \frac1{a_n} = 1$ $(a_n)$ is a sequence of positive numbers, and I know that $\limsup a_n \cdot \limsup \frac1{a_n} = 1$. Prove that $a_n$ is convergent. What do I need to do? AI: Can you show that $$\limsup\frac1{a_n}=\frac1{\liminf a_n},$$ (for any sequence $(a_n)$ such that $a_n>0$)? Once you prove this, the equality you are given simply says that $\limsup a_n=\liminf a_n$. You can find this also in the book Wieslawa J. Kaczor, Maria T. Nowak: Problems in mathematical analysis: Volume 1; Real Numbers, Sequences and Series as Problems 2.4.22 and 2.4.23. The problems are given on p.45 and solved on p.203-204.
H: Existence of a norm K - compact, convex subset of $ \Bbb R^n $, 0 $\in$ int K, K is symmetrical about 0. I'm sorry, but I don't know how to write it properly. I mean: $ (x_1,x_2,...,x_n) \in K \Rightarrow (-x_1,-x_2,...,-x_n) \in K $ I need to prove that there exists only one norm determined by K such that K is, in this norm, the closed unit ball centered at 0. I know that K is bounded from the Heine-Borel theorem. I have problems with defining the norm. AI: The norm needs to be defined only on the unit sphere; all other points have their norm defined via rescaling. I'd try to study the function $$f(x) = \sup_{t>0 \text{ such that }tx\in K}\|tx\|,$$ then call $$\|x\|_K = \frac{\|x\|}{f(x)}$$ where $\|\cdot\|$ stands for the usual norm in $\Bbb R^n$. You need to check that this is indeed a norm, of course.
H: Simplify functions involving modular arithmetic In this question, the answer says that $f \circ g(x) = x$. But I am unable to get this result. The expression I am able to get is that $$f \circ g(x) = 7(x\text{ mod } 3) + 57(x\text{ mod }7) \pmod {21}.$$ I am unable to proceed any further. AI: Let $x \mod 3 = a$ and $x \mod 7 = b$. Thus $a = x + 3 y$ and $b = x + 7 z$ for some integers $y$ and $z$. Express $7 a + 57 b$ in terms of $y$ and $z$. What do you get mod $21$?
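Carrying out the suggested substitution gives $7a+57b = 64x - 21y - 399z \equiv x \pmod{21}$, since $64 = 3\cdot 21 + 1$ and $399 = 19\cdot 21$. A quick brute-force check (Python sketch; the function name is mine):

```python
def f_of_g(x):
    # the composite map from the question, reduced mod 21
    return (7 * (x % 3) + 57 * (x % 7)) % 21

# with a = x - 3y and b = x - 7z we get 7a + 57b = 64x - 21y - 399z,
# and 64x - 21y - 399z ≡ 64x ≡ x (mod 21), so f∘g is the identity mod 21
assert all(f_of_g(x) == x % 21 for x in range(210))
```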
H: hypothesis on bilinear form Let $H$ be a Hilbert space and $a:H\times H\to \mathbb{R}$ a bilinear form. Let $H_h\subset H$ be a finite-dimensional subspace and let $\{w_1,\ldots,w_n\}$ be a basis of $H_h$. What hypotheses must the bilinear form satisfy so that the matrix $K=(a(w_i,w_j))_{i,j=1,\ldots,n}$ is invertible? I think that $a$ must be symmetric and positive definite. Is this enough? AI: The matrix $K_{i,j}=a(w_i,w_j)_{i,j}$ is invertible if and only if the bilinear form $a$ is nondegenerate when restricted to $H_h \times H_h$. This means that the associated linear map $\tilde{a}: H_h \to H_h^*$ given by $\tilde{a}(x)=a(x,-):H_h \to \mathbf{R}$ is a linear isomorphism. In particular, $a$ need not be symmetric. The field of symplectic linear algebra is all about vector spaces equipped with a nondegenerate antisymmetric bilinear form. Of course, if $a$ is symmetric and positive definite then it defines an inner product on $H_h$, and the restricted bilinear form is nondegenerate. It might be a good idea to work this out in the case of the standard inner product on $\mathbf{R}^n$, for example. Hope this helps!
H: How many iterations does it take to cover a range with random values? Let's say I have a random number generator that generates integers uniformly from 0 to n-1 (where n is some positive integer). What is the expected number of iterations after which all the values 0..n-1 will be generated? I did some simulations and the results seem close to n*ln(n), but I wonder how to solve it mathematically. Possibly related SO question AI: This is known as the coupon collector's problem. See the link for details and references. You are right that asymptotically, the expected number of trials needed grows as $n \ln n$ (plus lower order terms).
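The exact expectation is $n H_n = n \sum_{k=1}^n 1/k$ (the coupon collector formula), which a few lines of Python (a sketch, not part of the original answer) can compare against the $n \ln n$ asymptotic:

```python
import math

def expected_draws(n):
    """Exact expected number of draws to see all n equally likely values:
    n * H_n, with H_n the n-th harmonic number."""
    return n * sum(1.0 / k for k in range(1, n + 1))

for n in [10, 100, 1000]:
    print(n, expected_draws(n), n * math.log(n))
```

The gap between the two columns is roughly $\gamma n \approx 0.577\,n$, the Euler-Mascheroni lower-order term mentioned in the answer.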
H: How do I prove that the range of log is $\mathbb{R}$? For a real analysis course. In the first part of the problem, I proved that $\log xy = \log x + \log y$. Here $\log x$ is defined as $\int_1^x\frac1t\mathrm dt$. Since it's a two part problem, I am assuming that will come in handy. I am not sure how to go about this one. Do I need to show that the log function is surjective? Here's what I did: Suppose $u \in \mathbb{R}$ and choose $n, m$ such that $n > {u\over \log 2}$ and $m < {u\over \log 2}$. Without loss of generality, we have that for all $m > 0$, we have $2^m \in (0, \infty)$, and for $m <0$ we have $(1/2)^{\mid m \mid } = 2^m$, and for $m = 0$ we have $2^m = 1$. Then $\log 2^n = \log 2 + \log 2 + \log 2 ...$, with $n$ factors of $\log 2$ by part (a). Then $\log 2^n = n\log 2$. Similarly, $\log 2^m = m\log 2$. Then by the Intermediate Value theorem, $n\log 2 > u > m\log 2$. Thus the range of $\log$ is $\mathbb{R}$. AI: If you've proved that $\log x$ is continuous, and $\log(e)=1$ then you can show that there exists $x$ such that $\log(x)=n$ for all $n\in\mathbb{Z}$ by using $\log xy=\log x +\log y$, and so from this you can use the intermediate value theorem to show that $\log$ attains every real number between the integers as well.
H: Inverse of a $4 \times 4$ matrix with variables I missed my class on the inverses of matrices. I'm catching up well, but there's a problem in the book that got me stumped. It's a $4 \times 4$ matrix that is almost an identity matrix, but whose bottom row is $a,b,c,d$ instead of $0,0,0,1$. $$\begin{pmatrix} 1 &0 &0 &0 \\ 0 &1 &0 &0 \\ 0& 0 & 1 &0 \\ a &b & c & d \end{pmatrix}$$ Any pointers? AI: Recall that one way to compute an inverse is by forming the augmented matrix $$ (A \vert I) $$ and then using Gaussian elimination to completely row-reduce $A$. The final result will be of the form $$ (I \vert A^{-1}). $$ So, in this case, you would write $$ \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ a & b & c & d & 0 & 0 & 0 & 1 \end{bmatrix} $$ and the first step would be to add $-a \times$ (first row) to (last row), i.e. $$ \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & b & c & d & -a & 0 & 0 & 1 \end{bmatrix} $$ etc.
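Carrying the elimination to the end, one finds the inverse keeps the identity block and replaces the last row by $(-a/d,\,-b/d,\,-c/d,\,1/d)$, which requires $d\neq 0$. A quick numerical check in Python (a sketch; sample values and helper names are mine):

```python
def inverse(a, b, c, d):
    # identity block unchanged; last row becomes (-a/d, -b/d, -c/d, 1/d)
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [-a / d, -b / d, -c / d, 1 / d]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# sample values chosen arbitrarily for the check
a, b, c, d = 2.0, -3.0, 5.0, 4.0
M = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [a, b, c, d]]
P = matmul(M, inverse(a, b, c, d))
assert all(abs(P[i][j] - (i == j)) < 1e-12 for i in range(4) for j in range(4))
```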
H: A quick question about the additive identity of a ring. Is it always the case that the additive identity annihilates all elements under multiplication? I can't think of an example; my course essentially relies on integers, polynomials, and matrices for most examples of any given concept. I'm curious if there is an example where this is not the case (and avoid erroneous proofs in exercises!) AI: $0 \cdot x = (0+0)\cdot x = 0 \cdot x + 0 \cdot x \implies 0 \cdot x = 0$
H: How do I find/predict the center of a circle while only seeing the outer edge? Question What formula would allow me to predict the center of this circle? In addition, what attributes of this image must be detected in order to predict the center? I figured understanding the math first will help me determine what parts of the image are relevant, then I can worry about trying to detect those features. Background I need to programmatically find the center of a circle while I'm only able to see the outer edge of the circle. I am using matlab for my prototype of this. This was a class project. We finished the project using matlabs imfindcircles(), but it is to slow. It takes about 2-3 seconds to process a single image. I want to improve my programs speed and hopefully learn some math in the process. The initial assignment was to create a rifle that blind people could use. Below is a test image I am trying to process: Ignore the black around the outer edge. I have my camera behind a scope, that is why you can only see a small magnified circle in the middle. The black half circles are actually a target, so there are multiple rings nested inside each other. Also, I initially posted this question on stackoverflow as a programming issue. Feel free to read it for more information about the code behind my imfindcircles() solution and my attempted alternative. However, there are quite a few votes to close it. I'm guessing that is because it is really more of a math problem than a programming problem. My Solution I have written a short tutorial about how I solved the problem and posted all my code to github. See the tutorial and code. The tutorial was written in the projects README.md file so it should appear when you view the github page and scroll down a bit. I used the math @vadim123 suggested. AI: The hard part is finding three points on the circle using the messy data you're given. 
Assuming you have found such points $A,B,C$, in that order, then: Let $L_1$ be the line that passes through the midpoint of segment $AB$ and is perpendicular to $AB$. Let $L_2$ be the line that passes through the midpoint of segment $BC$ and is perpendicular to $BC$. Then the intersection of $L_1$ and $L_2$ is the center of the circle. If $A,B,C$ are far apart, that will make the error smaller. Hence I advocate doing this process several times for different choices of $A,B,C$, to get several "centers", and averaging.
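The intersection of the two perpendicular bisectors has a closed form, which makes the construction a few lines of code. A small Python implementation (a sketch; names are mine):

```python
import math

def circumcenter(A, B, C):
    """Center of the circle through three non-collinear points,
    i.e. the intersection of the perpendicular bisectors of AB and BC."""
    ax, ay = A
    bx, by = B
    cx, cy = C
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# check: three points on the circle of radius 5 centered at (2, -1)
pts = [(2 + 5 * math.cos(t), -1 + 5 * math.sin(t)) for t in (0.3, 1.9, 4.0)]
x0, y0 = circumcenter(*pts)
assert abs(x0 - 2) < 1e-9 and abs(y0 + 1) < 1e-9
```

Averaging the results over several point triples, as suggested above, then reduces to averaging the returned coordinates.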
H: Understanding the intermediate field method for the $\phi^4$ interaction In Rivasseau's and Wang's How to Resum Feynman Graphs, on page 11 they illustrate the intermediate field method for the $\phi^4$ interaction and represent Feynman graphs as ribbon graphs. I had to read up about ribbon graphs as I've never heard of them before and still don't fully understand their use for the intermediate field method. For the $\phi^4$ interaction, we have: In that case each vertex has exactly four half-lines. There are exactly three ways to pair these half-lines into two pairs. Hence each fully labeled (vacuum) graph of order $n$ (with labels on vertices and half-lines), which has $2n$ lines can be decomposed exactly into $3^n$ labeled graphs $G'$ with degree $3$ and two different types of lines the $2n$ old ordinary lines $n$ new dotted lines which indicate the pairing chosen at each vertex (see Figure 5). The extension illustrated in Figure 5 looks as follows: I'm not sure I correctly understand the way to create such "3-body extensions". I understand it in that way that we "split up" the given vertex into two vertices and add an edge between them. Then, each vertex has two additional half-edges and there are three ways to pair them together. Joining the upper half-edges together, we get the second term on the RHS and joining the two half-edges on one side together, we will get the first term on the RHS (the dashed line would be the new edge between the two vertices). The only case left is the one of joining the upper left and the lower right (and the upper right and lower left) half-edge together. However, when drawing that specific case, I don't see any way to stretch or rotate it such that the third term on the RHS results. Am I misunderstanding the way 3-body extensions are intended to get or am I just missing a way to stretch the edges to get the third term on the RHS? Also, where exactly do we need the cyclic ordering characterising ribbon graphs? 
And why are there two dashed lines in the third term on the RHS? AI: Consider reading this post at mathoverflow where a similar question was asked and answered by several people. In order to get a geometric intuition of a ribbon graph, think about replacing vertices of the ordinary graph with little 2d disks and replacing edges of the usual graph with ribbons (thin rectangles) which are attached to the disks along the short sides. The cyclic order in the definition of the ribbon graph corresponds to specification of the way ribbons are attached to disks. I found a detailed discussion of the definition with many illustrative pictures in this paper. Edit: I still do not have much time, but here is an explanation of the difference between the last two figures on your picture: As ordinary graphs they are the same, of course, but they are different as Ribbon graphs. Just think of a ribbon graph as a graph together with a mapping of this graph to the plane (the map is not 1-1). The ribbon structure is determined by such a map as follows. If you have a path $p$ immersed in the plane, you also have its normal vector field $\nu$. This vector field defines an infinitesimal rectangle along this path; by integrating the normal vector field a bit you obtain a thin rectangle. Now, glue these thin rectangles to form a ribbon graph.
H: Mirror a function about the y axis I have a piecewise function from -1 to 0 in Maple, and I want to somehow get a mirror piecewise function about the y axis, just like here: Is there any built-in function for that? AI: One could think about the command reflect(p, [pt_2d, pt_2d]) in Maple, but it is easy to do directly as follows. I defined a function and its plots. > with(plots): h := x->piecewise(x < -2, x+3, x <= 2, 5-x^2, 3-x): t:=x->h(-x): a:= plot(h(x), x = 0 .. 10, color = red, thickness = 3): b:= plot(t(x), x = -10 .. 0, color = green, thickness = 3): display(a,b);
H: Three-Dimensional geometry + trigonometry question Dear all: I read this question yesterday and it is driving me crazy! I shall offer a bounty to whoever gives a reasonable answer... We have a straight pyramid with a square ABCD as its base and apex S. We're given the pyramid's height 8 and the angle 48 deg. between SA and SC. I've already managed to calculate the pyramid's volume (67.66 cubic meters), and now I'm asked to find the angle between the height SO (O = center of the square base = intersection point of its diagonals) and the pyramid's face SBC. I tried the triangle SOE, E = midpoint of BC, but I can't explain why this works: I know I must draw a perpendicular to plane SBC from some point on SO, yet OE definitely isn't this perpendicular. All I need is to show such a perpendicular MUST intersect the line SE at some point. Perhaps it is possible to express the pyramid's volume by means of the wanted angle? That way we could get the angle... AI: If I understand the question correctly, you can do the following: show that the planes SOE and SBC are perpendicular (because SBC contains BC, which is normal to SOE, since $BC \perp OE$ and $BC \perp SE$, which both lie in SOE; therefore, by definition, the planes SOE and SBC are perpendicular). Now, from each point of the plane SOE one can drop a unique perpendicular to the line SE, which will also be perpendicular to SBC (because the planes are perpendicular). Let OK be such a perpendicular, built from O until its intersection with SE at the point K. Because SOE and SOK lie in the same plane and share the angle in question, one can say that computing $\angle OSK$ (which is the definition you're referring to) and $\angle OSE$ are equivalent.
H: Convergence question in measure theory I have a convergence question in measure theory that requires assistance: Let $1\leq p<\infty$. Suppose $f,\ f_n \in L^p$, and $f_n\to f$ in $L^p$. (i.e. $(\int|f_n-f|^pd\mu)^{1\over p}\to 0$ as $n\to\infty$) Show that $\int|f_n|^pd\mu \to \int|f|^pd\mu$. For $p=1$, $$\lvert\int|f_n|d\mu-\int|f|d\mu\rvert=\lvert\int(|f_n|-|f|)d\mu\rvert\leq\int\lvert|f_n|-|f|\rvert d\mu\leq\int|f_n-f|d\mu\to 0$$ hence obtaining the required inequality. However, I have trouble with $p>1$. Kindly advise. Thank you. AI: Let us show it in two steps: $$ f_n\to f\;\;\text{in } L^p\;\;\Longrightarrow\;\;|f_n|^p\to |f|^p \;\;\text{in }L^1\;\;\Longrightarrow \;\;\int |f_n|^p\,\mathrm d\mu\to\int |f|^p\,\mathrm d\mu. $$ The last implication should be pretty straightforward. For the first implication we might use that for $x,y\in\mathbb{R}$ one has $$ ||x|^p-|y|^p|\leq p(|x|^{p-1}+|y|^{p-1})|x-y| $$ and hence $$ \begin{align} \int ||f_n|^p-|f|^p|\,\mathrm d\mu & \leq \int p(|f_n|^{p-1}+|f|^{p-1})|f_n-f|\,\mathrm d\mu \\ &=\int p|f_n|^{p-1}|f_n-f|\,\mathrm d\mu+\int p|f|^{p-1}|f_n-f|\,\mathrm d\mu\\ \end{align} $$ which by the use of Hölder's inequality is less than or equal to $$ p\left(\int |f_n|^{q(p-1)}\,\mathrm d\mu\right)^{1/q}\left(\int |f_n-f|^p\,\mathrm d\mu\right)^{1/p}+p\left(\int |f|^{q(p-1)}\,\mathrm d\mu\right)^{1/q}\left(\int |f_n-f|^p\,\mathrm d\mu\right)^{1/p}\\ $$ where $q>1$ is chosen such that $\frac1p +\frac1q=1$. This simplifies to (recall that $q(p-1)=p$) $$ p\left[\left(\int |f_n|^p\,\mathrm d\mu\right)^{1/q}+\left(\int |f|^p\,\mathrm d\mu\right)^{1/q}\right]\left(\int |f_n-f|^p\,\mathrm d\mu\right)^{1/p}. $$ The last factor tends to $0$ as $n\to\infty$ by assumption, and thus we only need to argue that $$ \sup_n\int |f_n|^p\,\mathrm d\mu<\infty. $$ This I'll leave for you to prove.
H: Prove that $ \sum_{1 \le t \le n, \ (t, n) = 1} t = \dfrac {n\phi(n)}{2} $ Problem: Prove that the sum of all integers $ t \in \{ 1, 2, \cdots, n \} $ with $ (t, n) = 1 $ is $ \dfrac {1}{2} n \phi (n) $, where $ \phi $ is the Euler Totient Function. My proof: Define the set $\mathcal{S}$ to be the set of all the elements $t$ such that $ 1 \le t \le n $ and $ (t, n) = 1 $. The cardinality of $\mathcal{S}$ is $\phi(n)$. For $ n = 2 $, the set $\mathcal{S}$ is $ \{ 1 \} $ and so the statement holds, since $ \dfrac {1}{2} \cdot 2 \cdot \phi(2) = 1 $. $ \\ $ $\mathrm{}\\$ Lemma: For natural $ n \ge 3 $, $\phi(n)$ is even. Proof: If $n$ is a power of $2$, say $2^k$, then $\phi(n)=n \cdot \left( 1 - \dfrac {1}{2} \right) = \dfrac{n}{2}=2^{k-1}$, which is even, since $ n > 2 $. If $n$ is not a power of $2$, so that it has at least one odd prime divisor, say $p$, from the Euler Totient Function formula, $$ \phi (n) = n \cdot \displaystyle\prod_{p \text { prime}, \ p \mid n} \left( \dfrac {p-1}{p} \right), $$ it follows that $(p-1)\mid\phi(n)$. Thus, since $p$ is odd, $2\mid\phi(n)$, which is to say that $\phi(n)$ is even. $ \Box $ $\mathrm{}\\$ From the Lemma, it follows that the cardinality of $\mathcal{S}$ is even, as long as $n\ge3$. We verified the statement for $n=2$, so what remains is $ n \ge 3 $. We will need the even cardinality of $\mathcal{S}$ shortly. Let $\mathcal{S}=\{t_1,t_2,\cdots,t_{\phi(n)}\}$. Let the desired sum be $R$. Now, note the following. If $t_k$ and $n$ are relatively prime, it follows that $n-t_k$ and $n$ are relatively prime. Note that there is a bijection between $t_k$ and $n-t_k$, from the Lemma. That is, for every $t_k$, it follows that there is an $n-t_k$ that corresponds to it. If the cardinality of $\mathcal{S}$ were odd, this would not be the case. We can now conclude that $ \{ n - t_1, n - t_2, \cdots, n - t_{\phi(n)} \} $ is the same set as $ \{ t_1, t_2, \cdots, t_{\phi(n)} \} $, namely $\mathcal{S}$. We add up the two representations of $\mathcal{S}$.
We have $$ t_1 + \cdots + t_{\phi(n)} = R = (n - t_1) + \cdots + (n - t_{\phi(n)}). $$ Therefore, we solve for $R$ to get $ 2R = n \cdot \phi (n) $, or $ R = \dfrac {1}{2} n \phi (n) $, as desired. $ \blacksquare $ Are there any other proofs or interesting results? AI: You don't need the lemma. For every $1\leq t \leq n$ with $(t, n)=1$, we have $(n-t,n)=1$. The map $t \mapsto n-t$ is a bijection, with itself as inverse: $t=n-(n-t)$. This has nothing to do with the parity of $\varphi(n)$. The rest of your proof is correct: if $S=\sum_{(t,n)=1} t$, we have: $$S = \sum_{(t,n)=1} t= \sum_{(t,n)=1} (n-t) = n\varphi(n) - S$$ so $$S = n\varphi(n)/2.$$
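A quick computational check of the identity for small $n$ (Python sketch; names are mine):

```python
from math import gcd

def totatives(n):
    return [t for t in range(1, n + 1) if gcd(t, n) == 1]

# the sum of totatives equals n * phi(n) / 2 for every n >= 2
for n in range(2, 200):
    ts = totatives(n)
    assert sum(ts) == n * len(ts) // 2
```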
H: Let $z \in \mathbb C$, $|z| = 1$. Assume the sequence $a_n = z^n$ is convergent. Prove $z = 1$. Let $z \in \mathbb C$, $|z| = 1$. Assume the sequence $a_n = z^n$ is convergent. Prove $z = 1$. The case $z = 1$ implies convergence of $a_n$ is easy to prove. It is also easy to prove that $z = -1$ implies divergence of $a_n$. So clearly $z$ must be a complex number on the complex unit circle of radius 1, and I must prove divergence for any $z \ne \pm 1$. My idea is: $z = a + ib$ with $|z| = 1$ can be written in polar form $e^{i \theta} = \cos(\theta) + i \sin(\theta)$. Then $a_n = e^{i \theta n} = \cos(\theta n) + i \sin(\theta n)$ for $n \in \mathbb N$. Now we may assume $\theta \neq \pi, 2\pi$, but how do I verify that the sequence does in fact diverge in this case? AI: If $z^n$ converges, then $\left|z^{n+1}-z^n\right|\to 0$. But you also know $\left|z^{n+1}-z^n\right|=\left|z^n\right|\left|z-1\right|=|z|^n\left|z-1\right|=|z-1|$. So $|z-1|\to 0$, that is, $z=1$.
H: Show that there is no natural number $n$ such that $3^7$ is the largest power of $3$ dividing $n!$ Show that there is no natural number $n$ such that $3^7$ is the largest power of $3$ that divides $n!$; that is, there is no $n$ with $E_3(n!)=7$, where $E_3(n!)$ denotes the exponent of $3$ in $n!$. After doing some research, I could not understand how to start or what to do to demonstrate this. We have $$E_3(n!)\neq7\;\;\forall n\in\mathbb{N}\\\left[\frac{n}{3} \right]+\left[\frac{n}{3^2} \right]+\left[\frac{n}{3^3} \right]+\dots\neq7$$ I do not know where to start, or what to do to solve it. AI: For $n=15$, $\left[\frac{15}{3} \right]+\left[\frac{15}{3^2} \right]+\left[\frac{15}{3^3} \right]+\;...=6$. For $n=18$ (the next multiple of $3$), $\left[\frac{18}{3} \right]+\left[\frac{18}{3^2} \right]+\left[\frac{18}{3^3} \right]+\;...=8$. If $n\geq 18$ then $\left[\frac{n}{3} \right]+\left[\frac{n}{3^2} \right]+\left[\frac{n}{3^3} \right]+\;...\geq 8$. So there is no possibility for $7$.
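Legendre's formula from the question is easy to tabulate, and a short Python check (a sketch; the function name is mine) confirms the jump from $6$ straight to $8$:

```python
def e3(n):
    """Exponent of the highest power of 3 dividing n! (Legendre's formula)."""
    total, p = 0, 3
    while p <= n:
        total += n // p
        p *= 3
    return total

# e3 is non-decreasing, equals 6 for n = 15, 16, 17 and jumps to 8 at n = 18,
# so the value 7 is skipped entirely
assert e3(15) == 6 and e3(17) == 6 and e3(18) == 8
assert 7 not in {e3(n) for n in range(1000)}
```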
H: Probability of getting something with a low probablity If there are 100 marbles in a bag (1 red one, 99 green ones), then the probability of picking the red one is 1/100. But if I do 100 trials then I believe it is likely that I will pick the red one at least once in those 100 trials. I'm curious as to what this probability is; that is, what is the probability of picking the red marble if I run 100 trials. NOTE: I replace the marble after I pick it so there are always 100 marbles in the bag. I've always wondered about this question, because I've always believed that the odds are >50% to pick the red marble, but I'd like to know exactly what they are. This experiment can obviously be done with any other type of probability problem. I just used the marbles to illustrate one example. AI: The probability of getting green every time is $0.99^{100} = 0.36603...$ So the probability of not getting green every time (which is another way to say getting red at least once) is $1-0.366=0.634$. When you have a large number of marbles and always pick as many times as there are marbles in all, the probability of getting any particular one of them at least once will converge to $1-e^{-1} = 0.63212...$. This is due to the well-known limit $$ \lim_{n\to\infty} \left(1+\frac{x}{n}\right)^n = e^x $$ (which is sometimes used as a definition of $e^x$) applied at $x=-1$. From the same limit: If you have $n$ marbles (and $n$ is large), the number of tries you need to do in order to have 50-50 chance of getting a selected one at least once is about $\ln(2) n \approx 0.693\,n$.
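The numbers in the answer are easy to reproduce in Python (a quick check, not part of the original answer):

```python
import math

# probability of at least one red in 100 draws (with replacement)
print(1 - 0.99**100)                      # about 0.634

# general n: 1 - (1 - 1/n)^n approaches 1 - 1/e as n grows
for n in [10, 100, 10**6]:
    print(n, 1 - (1 - 1 / n)**n, 1 - math.exp(-1))
```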
H: Is this a typo in Hoffman and Kunze's linear algebra 2e? On page 203 on the part about characterizing triangulability it looks like there's a typo in the indicies they sum over on eqn (6-12). AI: I agree with Hoffman and Kunze. The $j$th column of the matrix should be the coefficients of $T\alpha_j$ with respect to the ordered basis $\{\alpha_1, \ldots, \alpha_j \}$.
H: $P(P(\cdots(P(x))))$ and its integer solutions Problem: Suppose that $P(x)$ is a polynomial with degree at least $2$ and integer coefficients. Let $Q(x)$ have the form $$ Q(x) = P(P(P(\cdots P(x) \cdots))) $$ for some finite number of nested $P$s. Prove that the equation $Q(t)=t$ can have at most $ \text {deg}(P) $ integer solutions. Thoughts: Graphing it doesn't help as the key to this problem is the integer restriction. Taking inverses don't help as they are ugly to deal with. Induction, maybe? But how? Any thoughts would be greatly appreciated! AI: This is IMO 2006 Problem 5 A solution to it is this: Suppose $a$ is a solution to $Q(x) = x$. Let $a_0 = a$ and $P^m(a) = a_m$. Then we know $(a_z - a_{z-1})|(a_{z+1}-a_z)$ for all $z \ge 1$. In particular, $(a_k - a_{k-1})|(a_1 - a_0)$ so we quickly deduce that $P$ has order $2$ on $a$ or order $1$. This is because firstly we have $$\prod_{i=1}^k \frac{a_{i+1} - a_{i}}{a_{i} - a_{i-1}} = 1 \implies \frac{a_{i+1} - a_{i}}{a_{i} - a_{i-1}} = \pm 1$$ Now suppose $m$ is the minimum number of $P$'s you need to apply on $a$ to reach $a$ again where $m > 1$. Then the $a_i$ are distinct. One quickly deduces if one of them were $-1$, we would have $a_{i+1} = a_{i-1}$ which implies $m=2$. Thus we can assume all of them are equal to $1$, but then the $a_i$ form an arithmetic progression which is absurd. Thus it is only possible $m=1$ or $m=2$. Now, $P(x) = x$ has at most $\deg P$ solutions. Thus if there does not exist an $a$ such that $P(P(a)) = a$ while $P(a) \neq a$, we are done. Now suppose $P(P(a)) = a$ and $P(a) = b$ and $P(c) = d, P(d) = c$. Then $(a-c)|(b-d)$ and $(b-d)|(a-c) \implies a-c = \pm (b-d)$. One also knows $(a-d)|(b-c)$ and $(b-c)|(a-d) \implies a-d = \pm(b-c)$. But then one quickly deduces $a+b = c+d$. Note that this implies if $P(z) = z$ then $2z = a+b$ by letting $c=d=z$. Now let $a$ be a solution to $P(P(a)) = a$. 
Then any $z$ such that $Q(z) = z$ is a root of $x + P(x) - a - P(a)$, which has at most $\deg P$ roots so we are done.
H: Integration of $\int_{0}^{1.7}[x^2]dx$ So we have this problem $$I=\int_{0}^{1.7}[x^2]dx$$ where $[\,\cdot\,]$ denotes the greatest integer function, so I thought of this possible solution $$I=\int_{0}^{1.7}[x^2]dx =\int_{0}^{1}[x^2]dx+\int_{1}^{1.4}[x^2]dx+\int_{1.4}^{1.7}[x^2]dx$$ $$=0+(1.4-1)+2*(1.7-1.4)=1$$ which I think is very much incorrect. How to do this integration? AI: Hint: In general, to integrate this function you should figure out exactly where $[x^2]$ changes values in the interval you're integrating over and break up the integral at these locations. For the interval $[0,1.7]$, the only values the integrand takes are $0$, $1$, and $2$ (why?), and you still need to figure out at which value $a$ is it true that $[a^2] = 2$ but $[x^2] < 2$ for $0\leq x < a$. (You'll find that $a$ is irrational.) Once you recognize what $a$ is, rewrite the integral in a similar manner to the one in your solution as $$ I = \int_0^1 0 \,dx + \int_1^a 1 \,dx + \int_a^{1.7} 2\,dx = 0 + a-1 + 2(1.7-a). $$
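Completing the hint (not part of the original answer): the irrational value is $a=\sqrt2$, and since $1.7^2=2.89<3$ there is no further jump before the upper limit, giving $I=(\sqrt2-1)+2(1.7-\sqrt2)=2.4-\sqrt2$. A numerical check in Python:

```python
import math

a = math.sqrt(2)                      # where [x^2] jumps from 1 to 2
exact = (a - 1) + 2 * (1.7 - a)       # = 2.4 - sqrt(2)

# independent check: midpoint Riemann sum of floor(x^2) on [0, 1.7]
N = 1_000_000
h = 1.7 / N
numeric = h * sum(math.floor(((i + 0.5) * h)**2) for i in range(N))
assert abs(numeric - exact) < 1e-4
print(exact)
```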
H: Help me to prove that $|BA|\leq|B||A|$ holds Given the norm $|A|= \sqrt{tr(A^*A)}$, where $tr$ is the trace of a linear operator, help to prove that $|BA| \leq |B||A|$ holds. AI: Cauchy-Schwarz gives: $$ \begin{split} |BA|^2&=\mathrm{trace}((BA)^*(BA))=\sum_{i,j}|(BA)_{ij}|^2=\sum_{ij}\left|\sum_k b_{ik}a_{kj}\right|^2 \leq\sum_{ij}\left(\sum_k|b_{ik}|^2\sum_k|a_{kj}|^2\right) \\&= \sum_{i,k}|b_{ik}|^2\sum_{j,k}|a_{jk}|^2=|B|^2|A|^2. \end{split} $$
H: Number of ways to rearrange a line of $n$ marbles My friend challenged me to solve the following problem, and after having thought about it for a long time and not being able to find the answer, I decided to give up. His explanation which followed wasn't very clear, and I've already forgotten the answer, but I'm still curious. Here is the problem: You have green, blue and red marbles at your disposal, and you would like to arrange them in a single line of $n$ marbles long. Green marbles can never be put next to each other. Others can. In how many different ways can you rearrange a line of 10 marbles long? I would also like to know the general formula for the possibilities of a line of $n$ marbles long. An explanation would be appreciated. Thanks in advance. AI: The line of length $n$ can begin R or B followed by a line of length $n-1$, or GR or GB followed by a line of length $n-2$. If $S(k)$ counts the number of lines of length $k$, we have $$S(n)=2S(n-1)+2S(n-2)$$ with $S(1)=3$ and $S(2)=8$. Can you take it from there?
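The recurrence is easy to check against brute force for small $n$ (Python sketch; names are mine):

```python
from itertools import product

def count_recurrence(n):
    # S(n) = 2*S(n-1) + 2*S(n-2), with S(1) = 3 and S(2) = 8
    if n == 1:
        return 3
    s_prev, s = 3, 8
    for _ in range(n - 2):
        s_prev, s = s, 2 * (s + s_prev)
    return s

def count_brute(n):
    # count strings over {R, G, B} of length n with no two adjacent G's
    return sum("GG" not in "".join(w) for w in product("RGB", repeat=n))

for n in range(1, 11):
    assert count_recurrence(n) == count_brute(n)
print(count_recurrence(10))   # 24960
```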
H: The integral $\int_0^{\infty } \frac{L_m(-x)}{e^{2 \pi x}+1} \, dx$ Could you explain the following identity I saw in a forum $$\int_0^{\infty } \frac{L_m(-x)}{e^{2 \pi x}+1} \, dx=\sum _{n=0}^{\infty } \left(2^{-2 n-1} \left(2^n-1\right) \pi ^{-n-1} \zeta (n+1)\right) \binom{m}{n}$$ where $L_m$ is a Laguerre polynomial? AI: The result follows from a simple, explicit expression for the Laguerres: $$L_m(-x) = \sum_{n=0}^m \binom{m}{n} \frac{x^n}{n!}$$ The integral is then $$\sum_{n=0}^m \binom{m}{n} \frac{1}{n!} \int_0^{\infty} dx \frac{x^n}{e^{2\pi x}+1}$$ Now, each integral is easily evaluated in terms of the zeta function by expanding the denominator: $$\begin{align}\int_0^{\infty} dx \frac{x^n}{e^{2\pi x}+1} &= \sum_{k=0}^{\infty} (-1)^k \int_0^{\infty} dx\, x^n \, e^{-(k+1) 2\pi x}\\ &= \frac{n!}{(2\pi)^{n+1}} \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+1)^{n+1}} \\ &= \frac{n!}{(2 \pi)^{n+1}} \left (1-\frac{1}{2^n} \right ) \zeta(n+1)\end{align}$$ The sum is then $$\sum_{n=0}^m \binom{m}{n} \frac{1}{(2 \pi)^{n+1}} \left (1-\frac{1}{2^n} \right ) \zeta(n+1)$$ Note that $\binom{m}{n} = 0$ when $n \gt m$, so the infinite sum presented in the question is actually a finite sum.
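A numerical check for $m=2$ (a Python sketch, pure standard library). One caveat worth making explicit: the $n=0$ term of $(1-2^{-n})\zeta(n+1)$ must be read as its limiting value, the alternating sum $\sum_k (-1)^k/(k+1) = \ln 2$ (the Dirichlet eta value $\eta(1)$), since $\zeta(1)$ itself diverges.

```python
import math

# m = 2: L_2(x) = 1 - 2x + x^2/2, so L_2(-x) = 1 + 2x + x^2/2
def integrand(x):
    return (1 + 2 * x + 0.5 * x * x) / (math.exp(2 * math.pi * x) + 1)

# left-hand side by the midpoint rule on [0, 10]; the tail beyond 10
# is of order e^(-20*pi) and can be ignored
N = 200_000
h = 10.0 / N
lhs = h * sum(integrand((i + 0.5) * h) for i in range(N))

# right-hand side, term by term (eta(s) = (1 - 2^(1-s)) * zeta(s)):
# n = 0: eta(1) = ln 2,  n = 1: eta(2) = pi^2/12,  n = 2: eta(3) = 3*zeta(3)/4
zeta3 = 1.2020569031595943   # Apery's constant, hard-coded
rhs = (math.log(2) / (2 * math.pi)
       + 2 * (math.pi**2 / 12) / (2 * math.pi)**2
       + 0.75 * zeta3 / (2 * math.pi)**3)

assert abs(lhs - rhs) < 1e-6
```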
H: Good Probability Practice Problems I'm looking for a good probability textbook with lots of worked out examples and problems to prepare for my course's final exam. I'm in an introductory probability class in college, and we've covered basic probability, combinatorics, and discrete and random variables. We're using Introduction to Probability, 2nd Edition, by Bertsekas and Tsitisklis, but I've gone through a lot of problems in that already. Thanks! AI: I'd go with Sheldon Ross' A First Course in Probability. It has tons of worked problems, many (sometimes over a hundred) exercises per chapter, and solutions to those exercises in the back of the book (or online).
H: Finding $\lim_{n\to\infty}\frac{\prod_{k=1}^n(2k-1)}{(2n)^n}$ Recently got this on a test: $$\lim_{n\to\infty}\frac{\prod_{k=1}^n(2k-1)}{(2n)^n}$$ Because it's a freshman calculus course, I think we were expected to solve it like a physicist. Taking a look at the first few terms of the series: $$\{\frac{1}{2},\frac{3}{16},\frac{5}{216},\cdots\}$$And saying "this probably converges at $0$". Because this is all that's been covered in our text so far. I find this really sketchy, considering how carefully we normally tiptoe around infinities. What would be a more robust solution of this problem? Edit Sorry, the question in the title was different from the one here. Fixed. Attempted solution by ratio test $$\lim_{n\to\infty}\frac{\frac{3\cdot5\cdot7\cdot\cdots\cdot(n-1)\cdot n\cdot(n+1)}{(2n+2)^n(2n+2)}}{\frac{3\cdot5\cdot7\cdot\cdots\cdot(n-1)\cdot n}{(2n)^n}}=\lim_{n\to\infty}\frac{2^nn^n(n+1)}{(2n+2)^{(n+1)}}$$Both the top and the bottom have a largest term of $n^{n+1}$, but at the bottom there is a coefficient of $2^{n+1}$, so the series converges. Is this good reasoning? AI: $$ \frac{\prod_{k=1}^n(2\,k-1)}{(2\,n)^n}=\frac{1}{2\,n}\cdot\frac{3}{2\,n}\cdot\dots\cdot\frac{2\,n-1}{2\,n}<\frac{1}{2\,n}. $$
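Since the ratio is positive and (for $n\ge 2$) strictly below $1/(2n)$, the squeeze theorem forces the limit to be $0$. Exact rational arithmetic confirms the bound:

```python
from fractions import Fraction

def ratio(n):
    """prod_{k=1}^{n} (2k - 1) divided by (2n)^n, computed exactly."""
    num = 1
    for k in range(1, n + 1):
        num *= 2 * k - 1
    return Fraction(num, (2 * n) ** n)

assert ratio(1) == Fraction(1, 2) and ratio(2) == Fraction(3, 16)
for n in (2, 3, 10, 50):
    assert 0 < ratio(n) < Fraction(1, 2 * n)  # the squeeze bound from the answer
```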
H: If all $D_v f(P)$s are the same, what is it? A function $f$ is defined in a neighborhood of a point $P$ in $n$-space, and $f$ is differentiable there. If the directional derivatives $D_\mathbf{v}f(P)$ are the same for every unit vector $\mathbf{v}$, what can we conclude? And how can I show that? AI: When $f:\>{\mathbb R}^n\to{\mathbb R}$ is differentiable at ${\bf p}$ then there is a vector ${\bf a}$, called the gradient $\nabla f({\bf p})$, such that $$f({\bf p}+{\bf X})-f({\bf p})={\bf a}\cdot{\bf X}+o\bigl(|{\bf X}|\bigr)\qquad({\bf X}\to{\bf 0})\ .\tag{1}$$ Let ${\bf X}:=t{\bf U}$ with $t>0$ and ${\bf U}$ a unit vector. Then from $(1)$ it follows that $${f({\bf p}+t{\bf U})-f({\bf p})\over t}={\bf a}\cdot{\bf U}+o(1)\qquad(t\to 0+)\ ,$$ and this in turn is saying that $$D_{\bf U}f({\bf p})={\bf a}\cdot{\bf U}\ .$$ Here the right hand side is independent of ${\bf U}\in S^{n-1}$ only if ${\bf a}={\bf 0}$, which means that $\nabla f({\bf p})={\bf 0}$, i.e., that ${\bf p}$ is a critical point of $f$.
H: How to evaluate a limit with subtractions $\lim_{x \rightarrow -1}(\frac{3}{x^3+1}-\frac{1}{x+1})$? I'm having trouble thinking of a way to solve this. $$\lim_{x \rightarrow -1}\left(\frac{3}{x^3+1}-\frac{1}{x+1}\right)$$ AI: Because$$x^3+1=(x+1)(x^2-x+1)$$we get $$\frac{3}{x^3+1}-\frac{1}{x+1}=\frac{3}{(x+1)(x^2-x+1)}-\frac{1}{x+1}$$ $$=\frac{3-(x^2-x+1)}{(x+1)(x^2-x+1)}=\frac{x-x^2+2}{(x+1)(x^2-x+1)}=$$ $$=\frac{x+1-(x^2-1)}{(x+1)(x^2-x+1)}=\frac{(x+1)(2-x)}{(x+1)(x^2-x+1)}=\frac{2-x}{x^2-x+1}$$
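After the cancellation, the limit is just the value of the simplified form at $x=-1$, namely $\frac{2-(-1)}{1+1+1}=1$. A quick numeric sketch of both claims:

```python
def f(x):
    return 3 / (x ** 3 + 1) - 1 / (x + 1)

def g(x):
    return (2 - x) / (x ** 2 - x + 1)  # the simplified form from the answer

assert abs(g(-1) - 1.0) < 1e-15       # the limit value
for h in (1e-2, 1e-3, 1e-4):
    for x in (-1 + h, -1 - h):
        assert abs(f(x) - g(x)) < 1e-6   # the two forms agree near x = -1
        assert abs(g(x) - 1.0) < 0.05    # and both approach 1
```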
H: What do these characters mean in RDF/OWL Domain and Range logic? Does someone know what these strange looking characters are? I would like to learn what they mean. Can you send me a reference/hyperlink so I can understand what they mean? http://www.w3.org/TR/2004/REC-owl-semantics-20040210/rdfs.html#owl_ObjectProperty_rdf AI: These are just operators from set theory: ⊆ is 'subset', ∅ and {} are the empty set, ≠ is 'not equal to', and ∈ is 'is an element of'. So IOXP ⊆ $P_I$ just means "the set we call IOXP is a subset of the set we call $P_I$", and $S_I$("l"^^d) ∈ $LV_I$ means "the URI reference denotation for URI reference "l"^^d is an element of $LV_I$".
H: f(X ∩ Y) = f(X) ∩ f(Y) for all non empty subsets X and Y of A, given f:A→B is 1-1? Let A and B be nonempty sets and f:A→B be a 1-1 function. Then f(X ∩ Y) = f(X) ∩ f(Y) for all non empty subsets X and Y of A. I believe this statement is true? AI: If $f:A\to B$ is not one-one, we can find $x,y$ for which $x\neq y$ yet $z=f(x)=f(y)$. Consider $X=\{x\}$ and $Y=\{y\}$: then $f(X\cap Y)=f(\varnothing)=\varnothing$ while $f(X)\cap f(Y)=\{z\}$, so the identity fails. The converse is also true: if $f$ is one-one, then $f(X\cap Y)=f(X)\cap f(Y)$ for all $X,Y$, so your statement is indeed true.
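Both directions can be illustrated by brute force over every pair of subsets of a small set; a quick sketch (the helper names are illustrative, not standard):

```python
from itertools import combinations

def image(f, S):
    return {f(x) for x in S}

def identity_holds(A, f):
    """Check f(X ∩ Y) == f(X) ∩ f(Y) for every pair of subsets X, Y of A."""
    subsets = [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]
    return all(image(f, X & Y) == image(f, X) & image(f, Y)
               for X in subsets for Y in subsets)

A = {1, 2, 3}
injective = {1: "a", 2: "b", 3: "c"}.get
collides = {1: "a", 2: "a", 3: "c"}.get  # 1 and 2 map to the same value

assert identity_holds(A, injective)
assert not identity_holds(A, collides)   # fails, e.g. for X = {1}, Y = {2}
```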
H: If $f,g\in {\mathscr R[a,b]}$ and $\int^{b}_{a}f=\int^{b}_{a}g,$ then $\exists c \in[a,b]$ such that $f(c)=g(c). $ Could anyone provide some hint to the problem? Thank you. AI: As Christian Blatter has pointed out, the claim is not true. Take $g\equiv 0$, and let $f$ be the function which is the identity over $[-1,1]\setminus\{0\}$ and $1$ at the origin. Then $f$ is Riemann integrable with $\int_{-1}^{1}f=0=\int_{-1}^{1}g$, yet $f(c)\neq g(c)$ for every $c\in[-1,1]$. Observe, however, that if we ask additionally that $f\geqslant 0$, $\displaystyle\int_a^b f=0$ implies $f$ must vanish at every point of continuity.
H: Asymptotics of the logarithmic integral Problem Given $$ \gamma = \int_0^1 {1-e^{-u} \over u} du - \int_1^\infty {e^{-u} \over u} du, $$ prove that $$ \int_0^x {dt \over \log t} = \gamma + \log \log x + \sum_{k=1}^\infty {\log^k x \over k \cdot k!}. $$ Hint: Let $u = \log t$. Notes: $\gamma$ is the Euler-Mascheroni constant and $\log x$ is the natural logarithm. Progress To use the substitution, we need $u = \log t$, $e^u = t$, $du = dt/t$, and $e^u du = dt$. Plugging these into the first integrals gives $$ \gamma = \int_1^e {1 - 1/t \over \log t} {dt \over t} - \int_e^\infty {1/t \over \log t} {dt \over t} = \int_1^e {t-1 \over t^2 \log t} dt - \int_e^\infty {dt \over t^2 \log t}. $$ Using the hint on the desired integral gives $$ \int_{-\infty}^{\log x} {e^u du \over u} = \gamma + \log \log x + \sum_{k=1}^\infty {\log^k x \over k \cdot k!}, $$ and this series sort of looks like $$ \sum_{k=1}^\infty {(-\log x)^k \over k \cdot k!} = -\int_0^{\log x} {1 - e^{-u} \over u} du, $$ which I got from expanding the integrand into a power series. This makes me think to split the intervals $[0,1]$ and $[1,\infty)$ up at the point $\log x$. That gave me $$ \gamma = \int_0^{\log x} {1 - e^{-u} \over u} du - \log \log x - \int_{\log x}^\infty {e^{-u} \over u} du, $$ or using the value of the series, $$ \gamma + \log \log x = \sum_{k=1}^\infty {(-\log x)^k \over k \cdot k!} - \int_{\log x}^\infty {e^{-u} \over u} du. $$ For this integral, using the substitution from the hint and then $s = 1/t$ gives $$ \int_{\log x}^\infty {e^{-u} \over u} du = \int_x^\infty {dt \over t^2 \log t} = -\int_0^{1/x} {ds \over \log s}. $$ Replacing $x$ with $1/x$ would give the correct series with the wrong sign, and the correct upper limit for this last integral, but then the $\log \log x$ terms would be ruined. This seems like there might be a sign error somewhere. AI: Unfortunately I don't see a way to gently nudge you in the right direction from your start, so I'll begin with the beginning. 
If you know where you want to end, it's often very helpful to work from the target towards the starting point too. If we do that here, from the target we obtain $$\begin{align} \gamma &+ \log \log x + \sum_{k=1}^\infty \frac{\log^k x}{k\cdot k!}\\ &= \int_0^1 \frac{1-e^{-u}}{u}\,du - \int_1^\infty \frac{e^{-u}}{u}\,du + \int_1^{\log x} \frac{du}{u} + \sum_{k=1}^\infty \frac{\log^k x}{k\cdot k!}\\ &= \int_0^{\log x} \frac{1-e^{-u}}{u}\,du - \int_{\log x}^\infty \frac{e^{-u}}{u}\,du + \sum_{k=1}^\infty \frac{\log^k x}{k\cdot k!}\\ &= \int_0^{\log x} \frac{1-e^{-u}}{u}\,du - \int_{\log x}^\infty \frac{e^{-u}}{u}\,du + \sum_{k=1}^\infty \frac{1}{k!} \int_0^{\log x} u^{k-1}\,du\\ &= \int_0^{\log x} \frac{1-e^{-u}}{u}\,du - \int_{\log x}^\infty \frac{e^{-u}}{u}\,du + \int_0^{\log x}\sum_{k=1}^\infty \frac{1}{k!} u^{k-1}\,du\\ &= \int_0^{\log x} \frac{1-e^{-u}}{u}\,du - \int_{\log x}^\infty \frac{e^{-u}}{u}\,du + \int_0^{\log x} \frac{e^u-1}{u}\,du\\ &= \int_0^{\log x} \frac{e^u-e^{-u}}{u}\,du - \int_{\log x}^\infty \frac{e^{-u}}{u}\,du. \end{align}$$ Now, the integral $$\int_0^x \frac{dt}{\log t}$$ exists only in the principal value sense, since the singularity at $1$ isn't integrable. Instead of taking out a completely symmetric interval around $1$, let us consider $$\lim_{\varepsilon \searrow 0} \int_0^{e^{-\varepsilon}} \frac{dt}{\log t} + \int_{e^{\varepsilon}}^x \frac{dt}{\log t}$$ for $x > 1$. It is easy to see that that limit is the same as the one you get by removing the interval $(1-\varepsilon, \, 1+\varepsilon)$. Now substituting $u = \pm \log t$ produces something from which the meeting point is easily reached.
H: Help with differential equation problem Could someone give any directions on this problem: A ball is thrown into upright direction. Acceleration $a$ satisfies the following equation: $$a = -g$$ where $(g = 9.81 \frac{m}{s²}, a = s''(t))$. Solve the function for distance travelled $s = s(t)$, when at $t = 0$, the ball has initial velocity $v_o$ and initial height $s_o$ How should I start solving this? thnx for any help =) AI: You are given that $a(t)=\frac{d^2s}{dt^2}=-g$ where $a(t)$ is the acceleration and $s(t)$ is the height of the particle at time $t$ with $v(0)=s'(0)=v_0$ and $s(0)=s_0$. Integrating the equation $s''(t)=-g$ with respect to $t$ twice and using the initial velocity and displacement you get $s(t)=-\frac{1}{2}gt^2+v_0t+s_0$ as the equation of motion.
H: Vector space over local field Let $L/K$ be an extension of number fields, $\frak p$ a place of $K$, and $\frak P$ a place of $L$ above it. My question: can $L$ be considered as a vector space over $K_\frak p$? Thanks! AI: $L$ is an extension of $K$ (extension of number fields). $L_\mathfrak{P}$ is an extension of $K_\mathfrak{p}$ (extension of local fields). $\overline{L_\mathfrak{P}} := \mathcal{O}_L / \mathfrak{P}$ is an extension of $\overline{K_\mathfrak{p}} := \mathcal{O}_K / \mathfrak{p}$ (extension of finite fields). Finally, $L_\mathfrak{P}$ is an extension of $L$ and $K_\mathfrak{p}$ is an extension of $K$ (completions). This gives you just one "non-obvious" extension: $L_\mathfrak{P}$ is an extension of $K$. It's instructive to consider the case $L=K=\mathbb{Q}$. Then you have the fields $\mathbb{Q}$, $\mathbb{Q}_p$, and $\mathbb{F}_p$, and your question, as written, asks if $\mathbb{Q}$ is a vector space over $\mathbb{Q}_p$. There is certainly no way to make sense of this, as the former is countable and the latter is uncountable.
H: Using Rouche's for function constant on a circle Let $c\in\mathbb{R}$. A non-constant function $f(z)$ is holomorphic in $|z|<2$. Suppose $|f(z)|=c$ for all $|z|=1$. Show that $f(z)$ must have a root in $|z|<1$. Here there is an answer using the maximum principle. Since the question deals with showing the existing of a root inside a circle, I wonder if it can be solved using Rouche's theorem. AI: We can prove it using Rouché's theorem, but the way that I see smells artificial. Nevertheless: Since $f$ is non-constant, we have $c > 0$. Let $g(z) = \frac12 f(z)$. Let $h(z) = g(z) + g(0)$. Since $f$, and hence $g$ is non-constant, by the maximum principle (sorry, can't do it entirely without that), we have $\lvert g(0)\rvert < \frac{c}{2}$, whence $$\lvert h(z)\rvert \leqslant \lvert g(z)\rvert + \lvert g(0)\rvert = \frac{c}{2} + \lvert g(0)\rvert < c$$ on the unit circle. Thus, by Rouché's theorem, $f$ has as many zeros in the unit disk as $$f(z) - h(z) = f(z) - \frac12\left(f(z) + f(0)\right) = \frac12\left(f(z) - f(0)\right).$$ The latter function evidently has at least one zero in the unit disk.
H: Determine if the following series are convergent or divergent? How to determine if the following series are convergent or divergent? I'm supposed to use here the limit comparison test, but I don't know how to choose the second series. $$\sum_{k=1}^\infty \ln(1+ \sqrt{\frac 2k})$$ $$\sum_{k=1}^\infty\displaystyle \sqrt[k]{e}\sin(\frac{\pi}{k}).$$ AI: A related problem. Since $$ \lim_{k\to \infty }\frac{\ln(1+\sqrt{2/k})}{\sqrt{2/k}}=1, $$ and $\sum_k \sqrt{2/k}$ is a divergent $p$-series (with $p=1/2$), the first series diverges by the fact: Suppose $\sum_{n} a_n$ and $\sum_n b_n $ are series with positive terms, then if $\lim_{n\to \infty} \frac{a_n}{b_n}=c>0$, then either both series converge or diverge. The second series diverges for the same reason: taking $t=1/k$ below gives $e^{1/k}\sin(\pi/k)\sim \pi/k$, and the harmonic series $\sum_k \pi/k$ diverges. Note: We used the Taylor series $$ \ln(1+t)=t+O(t^2)\implies \ln(1+t)\sim t. $$ $$e^t\sin(\pi t)= \pi t+O(t^2)\implies e^t\sin(\pi t) \sim \pi t.$$
H: If $A$, $B$, and $C$ are sets, the only way that $A\cup C = B \cup C$ is if $A=B$ If $A$, $B$, and $C$ are three sets, then the only way that $A\cup C$ can equal $B\cup C$ is $A = B$. I believe this statement is false and here is why: Let $A=\{1\}$, $B=\{2\}$, and $C=\{1,2,3,4\}$. In this scenario $A\cup C=\{1,2,3,4\}$ and $B\cup C=\{1,2,3,4\}$ however, $A\ne B$. Making the statement false. Have I explained this correctly? AI: Your example is the proof that the statement is false.
H: Basis of eigenvectors of a linear transformation Let $\mathbb R_n[x]$ the vector space of polynomials with degree less or equal $n$ and we consider the linear transformation $f$ defined by $$\forall P\in \mathbb R_n[x]\quad f(P)=(x^2-1)P''+2xP'$$ I proved that $f$ has the spectrum $$\mathrm{sp}(f)=\{k(k+1),\ k=0,\ldots,n\}$$ I'm stuck in this question: Prove that there's a unique basis $(P_0,\ldots,P_n)$ of $\mathbb R_n[x]$ such that: $$\forall k=0,\ldots,n\quad P_k \ \text{is a monic polynomial with degree }\ k\ \text{which's an eigenvector of }\ f $$ Any help would be appreciated. AI: Use induction. Note that the operator is independent of $n$, so you can recycle your previous polynomials.
H: Joint distribution proof I am trying to study for an exam and I am kind of lost on how my professor came to a particular result on his practice exam. Let $W$ be an exponentially distributed random variable with $\lambda = 2$ Prove that $P(W > 5 | W > 2) = P( W > 3)$ I made it as far as re-writing the problem as $$\frac {P(W>5,W>2)}{P(W>2)}$$ However, he managed to cancel out the $P(W > 2)$ in the numerator and I'm lost as to why he can do that. The entire problem is here (problem 4,b) And the solutions are here AI: In the numerator you have the probability that $W$ is greater than 5 and greater than 2. This only happens when $W$ is greater than 5. Thus $$ \frac{P(W>5, W>2)}{P(W>2)}=\frac{P(W>5)}{P(W>2)} $$ What is this then? Well, $$ \frac{P(W>5)}{P(W>2)}=\frac{1-(1-e^{-\lambda 5})}{1-(1-e^{-\lambda 2})}=\frac{e^{-5\lambda}}{e^{-2\lambda}}=e^{-(5-2)\lambda}=e^{-3\lambda}=1-(1-e^{-3\lambda})=1-P(W<3)=P(W>3) $$ which is the answer you wanted.
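The cancellation is exactly the memorylessness of the exponential: with survival function $P(W>w)=e^{-\lambda w}$, the ratio $e^{-5\lambda}/e^{-2\lambda}=e^{-3\lambda}$ drops out. A quick sketch, exact and by simulation:

```python
import math
import random

lam = 2.0
def surv(w):
    return math.exp(-lam * w)  # P(W > w) for an Exp(lambda) variable

# Exact: P(W > 5 | W > 2) = P(W > 5) / P(W > 2) = e^{-3 lam} = P(W > 3)
assert abs(surv(5) / surv(2) - surv(3)) < 1e-15

# Monte Carlo sanity check of P(W > 3) = e^{-6} ~ 0.00248
random.seed(0)
n = 10 ** 6
hits = sum(random.expovariate(lam) > 3 for _ in range(n))
assert abs(hits / n - surv(3)) < 5e-4
```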
H: Find characteristic polynomial of $\,A^2$ if the characteristic polynomial of $\,A$ is $\,t^4 -t$ $A \in M_{4\times4}(\mathbb{R})$. The characteristic polynomial of $A$ is $P_A(t)=t^4-t$. I have to find the characteristic polynomial of $A^2$ and $A^4$. So I know that due to the Cayley–Hamilton theorem that $P_A(A) = A^4-A=0$ , therefore $A^4 = A$, Therefore $P_{A^4}(t)=P_A(t)=t^4-t$. But what do I do with $A^2$? I also know that the eigenvalues of A are $0,1$... how does this help? Thank you all. AI: For $\mathbb{C}$: The eigenvalues of $A$ are $0,1,e,e^2$ where $e$ is a primitive root out $1$: $e^3=1$. Hence the eigenvalues of $A^2$ and also of $A^4$ are $0,1,e^2, e^4=e$ i.e. the same. So the characteristic polynomials of $A^2$ and $A^4$ also are $t^4-t$. For $\mathbb{R}$: Сharacteristic polynomials of $A^2$ and $A^4$ do not change, i.e. $t^4-t$. Addendum: Theorem from the Gantmacher's book: If $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $A$ and $g(x)$ a polynomial, then $g(\lambda_1),\ldots,g(\lambda_n)$ are the eigenvalues of $g(A)$ (with the same multiplicities).
H: Derivative of integral with time varying domain Let $f:\mathbb{R}^p \rightarrow \mathbb{R}$ be a smooth function. Let $A(t) \subset \mathbb{R}^p$ be varying with time $t$. Is there a nice expression for $$\frac{d}{dt}\int_{A(t)}f(x) dx$$ ? AI: We have $A(t) = \bar{B}(tc,r)$. Let $\phi(\delta) = tc+\delta$. Then $\phi(\bar{B}(0,r)) = A(t)$. Note that $D\phi(\delta) = I$, so by the change of variables theorem, we have $g(t) = \int_{A(t)} f(x) dx = \int_{\phi(\bar{B}(0,r))} f(x) dx = \int_{\bar{B}(0,r)} f(\phi(y))\, |1|\,dy = \int_{\bar{B}(0,r)} f(tc+y) dy$. Now, assuming that $f$ is sufficiently smooth, you have $Dg(t) = \int_{\bar{B}(0,r)} Df(tc+y)\,c\, dy = (\int_{A(t)} Df(x) dx)c$.
H: Three consecutive integers with power of 5 mod 11 Let $(n - 1)$, $n$ and $(n + 1)$ be three consecutive integers, and $(n - 1)^5 \equiv n^5 \equiv (n + 1)^5 \equiv a \pmod{11}$, what are the possible values of $a$? I know the facts that $3^5 \equiv 4^5 \equiv 5^5 \equiv 1 \pmod{11}$ and $6^5 \equiv 7^5 \equiv 8^5 \equiv -1 \pmod{11}$ are two solutions, however, is there a way to actually solve this equation? AI: It suffices to check the numbers $-5,-4,\ldots,4,5$, since they form a complete residue system modulo $11$.
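A short computational check over one complete residue system modulo $11$ confirms both facts from the question (note $(n^5)^2 = n^{10} \equiv 1 \pmod{11}$ for $11 \nmid n$ by Fermat, so fifth powers can only be $0, \pm 1$):

```python
fifth = {n % 11: pow(n, 5, 11) for n in range(-5, 6)}

# fifth powers mod 11 take only the values 0, 1, 10 (i.e. 0, 1, -1)
assert set(fifth.values()) == {0, 1, 10}

# search for three consecutive residues with equal fifth powers
runs = [r for r in range(11)
        if pow(r - 1, 5, 11) == pow(r, 5, 11) == pow(r + 1, 5, 11)]
assert runs == [4, 7]
print([(r, pow(r, 5, 11)) for r in runs])  # -> [(4, 1), (7, 10)], i.e. a = 1 or a = -1
```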
H: find a degree and splitting field for $x^4-2$ over $\mathbb{Q}(i)$ let $K=\mathbb{Q}(i)$ and let $f=x^4-2$. Find the splitting field, its degree and the basis. My solution First I find roots of the polynomial $x_{1,2}=\pm\sqrt[4]{2},\hspace{2mm}x_{3,4}=\pm i \sqrt[4]{2}$ and I notice that the polynomial $f=x^4-2$ is irreducible over $\mathbb{Q}(i)$ since neither of the above roots are in $K=\mathbb{Q}(i)$ hence $x^4-2=min pol_{\mathbb{Q}(i)}\sqrt[4]{2}$ Can I deduce from the above that $[\mathbb{Q}(i,\sqrt[4]{2}):\mathbb{Q}(i)]=4$ What will be the $\mathbb{Q}(i)$ basis for $\mathbb{Q}(i,\sqrt[4]{2})$? In my book the answer is : -degree is 4 -basis:$\{1,\alpha,\alpha^2,\alpha^3\}$ for $\alpha=\sqrt[4]{2}$ Something here I really do not understand. Please explain what is wrong in my solution. AI: Your solution is correct. Notice that when you extend a field by adjoining $a$, you also adjoin all powers of $a$. But not all powers of $a$ are linearly independent, so just take the linearly independent ones as a basis. In your case, $a^k$ for $k \geq 4$ can be written as a linear combination of $1, a, a^2, a^3$. Actually, we expect this to happen since $[\mathbb{Q}(i,2^{1/4}):\mathbb{Q}(i)]=4$; thus it is a $4$-dimensional vector space over $\mathbb{Q}(i)$. Or equivalently: if the degree of the minimal polynomial of $a$ over $F$ is $n$, then $F(a)$ has the basis $\{1,a,a^2,\ldots,a^{n-1}\}$ over $F$.
H: Evaluating product $\prod_{n=2}^\infty\left(1-\frac{1}{n^2}\right)$ I'm reading about infinite products in complex analysis, where there is a theorem like The product $\prod_{n=1}^\infty\left(1+a_n\right)$ converges absolutely iff the series $\sum_{n=1}^\infty|a_n|$ converges. Then an exercise is to show that $\prod_{n=2}^\infty\left(1-\dfrac{1}{n^2}\right)=\dfrac{1}{2}$ The theorem above guarantees that the product converges, but what is the method to evaluate its value? AI: Hint: $$ 1-\frac{1}{n^2} = \frac{n^2-1}{n^2} = \frac{(n+1)(n-1)}{n^2}$$
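With the hint the product telescopes: $\prod_{n=2}^{N}\frac{(n-1)(n+1)}{n^2}=\frac{1}{N}\cdot\frac{N+1}{2}=\frac{N+1}{2N}\to\frac{1}{2}$. Exact arithmetic confirms the partial products:

```python
from fractions import Fraction

def partial(N):
    """prod_{n=2}^{N} (1 - 1/n^2), using the factoring from the hint."""
    p = Fraction(1)
    for n in range(2, N + 1):
        p *= Fraction((n + 1) * (n - 1), n * n)
    return p

for N in (2, 3, 10, 1000):
    assert partial(N) == Fraction(N + 1, 2 * N)  # the telescoped closed form
print(float(partial(1000)))  # -> 0.5005, approaching 1/2
```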
H: Application of differentiation, modeling bacteria https://www.dropbox.com/s/defhs0u02yuqtyw/differentiation%20bacteria.jpg I basically don't understand the first sentence of the question. A good explanation and a a complete solution would be appreciated Edit: Ok since people asked for my work here it is a) innitial population is when t=0 hence P=11 and I guess this would translate to 11 000 bacteria .Bacteria will reach 14 mil when 10+e^t-3t=14 Ok so I don't understand why 14 and not 14 000 it gives t=2.42 hours b) e^t-3 I don't know how to approach this part because I am confused from the previous one c) e^t the rate of change of the growth rate of the bacteria AI: Your thoughts on part a are correct. For when the population will reach 14 million, set $P=14,000$ since our units are in thousands. For part b, you differentiated correctly, and you do a very similar thing as in part a, considering that $\frac{dP}{dt}$ is the rate of growth. Your work for part c is also correct. For the second part of c, recall that a min/max of a function is found where its derivative is $0$, and you can justify that the extremum is a minimum using the second derivative test.
H: Find a generating function for $a_r = r^3$ What is the generating function for $a_r = r^3$? I computed an answer, just wanted to double check my answer. AI: Here is how you advance. Assume $$ F(x) = \sum_{r=0}^{\infty} a_r x^r \implies F(x)=\sum_{r=0}^{\infty} r^3 x^r $$ $$ \implies F(x)= (xD)(xD)(xD)\sum_{r=0}^{\infty} x^r = (xD)^3 \frac{1}{1-x}, $$ where $D=\frac{d}{dx}$. Can you finish now? Added Here is the final answer $$ F(x)={\frac {x \left( 1+4\,x+{x}^{2} \right) }{ \left( 1-x \right) ^{4}}}. $$
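The final answer can be verified coefficient by coefficient: $(1-x)^{-4}$ has coefficients $\binom{k+3}{3}$, so the coefficient of $x^r$ in $x(1+4x+x^2)/(1-x)^4$ should equal $r^3$:

```python
from math import comb

def coeff(r):
    """[x^r] of x(1 + 4x + x^2)/(1 - x)^4, using [x^k] (1-x)^{-4} = C(k+3, 3)."""
    def inv4(k):
        return comb(k + 3, 3) if k >= 0 else 0
    return inv4(r - 1) + 4 * inv4(r - 2) + inv4(r - 3)

assert all(coeff(r) == r ** 3 for r in range(100))
```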
H: Geodesics: a (for me) "mysterious" property related to affine parameters. Consider a Riemannian manifold $(M,g)$ with the Levi-Civita connection $\nabla$. If $D_t$ is the covariant derivative along curves descending from $\nabla$, a geodesic is a curve $\gamma: I\subseteq\mathbb R\longrightarrow M$ such that $D_t\gamma'=0$ where $\gamma'$ is the velocity vector field along $\gamma$. In coordinates (respect the coordinare frame $\frac{\partial}{\partial x^1},\ldots\frac{\partial}{\partial x^n}$) a geodesic is a curve that satisfies the following equation(s): $$\ddot \gamma^k(t)+ \dot\gamma^j(t)\dot\gamma^i(t)\Gamma^k_{ij}(\gamma(t))=0$$ It is evident that there aren't constraints for the parameter $t$, but on the book "Carroll - Spacetime and relativity" I read the following mysteriuous phrases: Notation: Here by the parametrization with the "proper time" $\tau$, he means the parametrization with the arclength parameter. The equation $3.44$ is the same that I've just written above and moreover the others cited equations deal with the variational approach to geodesics. So I don't understand why, according to Carroll, if a curve satisfies the above equation(s), then its parametrization should be affine. The author doesn't explain this point. AI: To elaborate on the comment by @PavelC, the geodesic equation $\ddot \gamma^k(t)+ \dot\gamma^j(t)\dot\gamma^i(t)\Gamma^k_{ij}(\gamma(t))=0$ actually implies that the curve is constant speed. This is what the author seems to be saying: the parameter is necessarily affine with respect to the unit speed parametrisation. The proof of this can be found in my course notes here, Lemma 8.3.5 on page 67. On an arbitrary surface $M$, a curve $\beta$ satisfying the geodesic equation is necessarily constant speed. 
It suffices to prove that the square of the speed has vanishing derivative: $$ \begin{aligned} \frac{d}{ds}\left(|\beta'|^2\right)&=2\langle\beta'',\beta'\rangle \\&= \langle L_{ij} {\alpha^i}' {\alpha^j}' n \,, {\alpha^k}' x_k \rangle \\&=L_{ij} {\alpha^i}' {\alpha^j}' {\alpha^k}' \langle n, x_k \rangle \\&= 0. \end{aligned} $$
H: Ideals in commutative noetherian rings with unique prime ideal Let $R$ be a commutative noetherian ring with $1$ having only one prime ideal $\mathfrak{P}$. It follows that $\mathfrak{P}^n = 0$ for some integer $n$. Can we say that every proper ideal in $R$ is a power of $\mathfrak{P}$? AI: Hint: No. Take a look at $F_2[x,y]/(x,y)^2$ where $F_2$ is the field of two elements. The answer is affirmative, however, if $\mathfrak{P}$ is principal. Add that hypothesis and try to prove that the other nonzero ideals are powers of it.
H: Countably infinite set of real numbers with a complement that is infinite but not countably infinite How can I show that if a set of real numbers is countably infinite, then its complement is infinite but not countably infinite? Thanks a lot in advance! AI: Let the set you're looking for be $$ \{a_1,a_2,a_3,\ldots\}. $$ Suppose the complement is countably infinite, so that it is $$ \{b_1,b_2,b_3,\ldots\}. $$ Then consider the sequence $$ \{c_1,c_2,c_3,c_4,c_5,c_6\ldots\}=\{a_1,b_1,a_2,b_2,a_3,b_3,\ldots\}. $$ That is countably infinite and contains all real numbers. But that's impossible. So the complement of a countably infinite set of reals relative to the set of all reals cannot be countably infinite. (Of course, this works only if you've proved the set of all reals is not countably infinite. If that's the point you're stuck on, it's time to ask how to do that. But maybe it's best to ask that question elsewhere than in this forum first and then come here if you run into difficulties.)
H: Probability, that when we send a $0$ down the network we will get back a $0$ We can send a $0$ or a $1$ over a network of $1,2,\ldots$ nodes. Unfortunately, at each node, with probability $p$ the message is passed on unchanged, and with probability $1-p$ the message is XOR'ed. Find a recurrence relation that determines the probability that, if we send a $0$ over a network of $n$ nodes, we will get a $0$ after the $n$'th node. Base cases. Obviously $$a_1=p$$ and $$a_2=p^2 + (1-p)^2$$ Because we can either succeed two times or fail two times and be ok. But what about the rest? $$a_3=p^3+{{3}\choose{2}}(1-p)^2p$$ $$a_4=p^4+ (1-p)^4+{{4}\choose{2}}(1-p)^2p^2$$ And I fail to see a recurrence relation between the subsequent $a_n$'s. AI: I assume when you say XOR, it is XOR with 1. Take the case that you send 0. And after node $n$, call the value $x_n$. Then $$\begin{align} \text{Prob}(x_n=0) &= \text{Prob}(x_{n-1} = 0)*p + \text{Prob}(x_{n-1} = 1)*(1-p) \\ &= \text{Prob}(x_{n-1} = 0)*p + (1-\text{Prob}(x_{n-1} = 0))*(1-p) \\ &= (2p-1)\text{Prob}(x_{n-1} =0) + 1 - p \end{align} $$ You can figure out the rest.
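Unrolling the recurrence (not done in the answer, but a standard step) gives the closed form $a_n=\frac{1+(2p-1)^n}{2}$, and both agree with brute-force enumeration over all $2^n$ flip patterns:

```python
from itertools import product

def a_rec(n, p):
    a = p                                # a_1
    for _ in range(n - 1):
        a = (2 * p - 1) * a + (1 - p)    # the recurrence from the answer
    return a

def a_enum(n, p):
    """Sum the probability of every flip pattern with an even number of flips."""
    total = 0.0
    for flips in product((0, 1), repeat=n):
        prob = 1.0
        for f in flips:
            prob *= (1 - p) if f else p
        if sum(flips) % 2 == 0:          # an even number of XORs returns 0 to 0
            total += prob
    return total

for n in (1, 2, 5, 10):
    for p in (0.2, 0.5, 0.9):
        assert abs(a_rec(n, p) - a_enum(n, p)) < 1e-12
        assert abs(a_rec(n, p) - (1 + (2 * p - 1) ** n) / 2) < 1e-12
```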
H: Product $\prod_{n=0}^\infty(1+z^{2^n})$ I want to prove that $\prod_{n=0}^\infty(1+z^{2^n})=(1-z)^{-1}$ for all $|z|<1$. By multiplying $1-z$ to both sides, the equation becomes $$(1-z)(1+z)(1+z^2)(1+z^4)\ldots=1$$ Multiplying the first pair on the left yields $$(1-z^2)(1+z^2)(1+z^4)\ldots=1$$ And then $$(1-z^4)(1+z^4)\ldots=1$$ And $|z|^{2^n}$ keeps getting smaller and smaller since $|z|<1$. But the product is infinite, so this doesn't seem very rigorous? How can I make it rigorous? AI: Any integer $j$ with $0\leq j\leq2^n-1$ has a unique binary representation of the form $$j=\sum_{k=0}^{n-1} b_k\>2^k,\qquad b_k\in\{0,1\}\ .$$ Therefore these $j$ are in bijective correspondence with the $2^n$ subsets of the set $\{1,2,4,8,\ldots,2^{n-1}\}$. When the product $$P_n:=\prod_{k=0}^{n-1}\left(1+z^{2^k}\right)=(1+z)(1+z^2)(1+z^4)\cdot\ldots\cdot(1+z^{2^{n-1}})$$ is expanded one obtains the corresponding term $z^j$ for each of these subsets. It follows that for $|z|<1$ one has $$P_n=\sum_{j=0}^{2^n-1} z^j={1-z^{2^n}\over 1-z}\to{1\over1-z}\qquad(n\to\infty)\ .$$
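The partial-product identity $P_n=\frac{1-z^{2^n}}{1-z}$ from the answer is easy to confirm numerically for complex $z$ in the unit disk:

```python
def P(n, z):
    prod = 1 + 0j
    for k in range(n):
        prod *= 1 + z ** (2 ** k)
    return prod

for z in (0.5 + 0j, 0.3 - 0.4j, -0.7 + 0.2j):
    for n in (1, 3, 6, 10):
        # partial products match (1 - z^(2^n)) / (1 - z)
        assert abs(P(n, z) - (1 - z ** (2 ** n)) / (1 - z)) < 1e-12
    # and the infinite product converges to 1/(1 - z)
    assert abs(P(30, z) - 1 / (1 - z)) < 1e-12
```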
H: System of differential equations (in X, Y): expression of Y. I want to find an expression for the function $Y(t)$ by the following system \begin{align} \frac{d X (t)}{dt} & =-\alpha Y \\[6pt] \frac{d Y (t)}{dt} & =\sigma \beta Y^2-(\beta+\gamma) Y - \sigma X Y+X \end{align} The equation must contain only $Y$ and not $X$. $\alpha, \beta, \sigma$ are real parameters. Does anyone have any suggestions for me? Thank you very much. AI: Hint: Let $X(t)=-\alpha\int_0^tY(s)ds$ and put this in the second equation, or solve $X$ in terms of $Y$ and its derivatives from the second equation and then take the derivative of $X$ and substitute this in the first one. Namely $X=\frac{Y'-\sigma \beta Y^2+(\beta+\gamma) Y}{1-\sigma Y}$ and use the quotient rule to find $X'$. Plugging this into the first equation you will have an equation involving only $Y$ and its derivatives.
H: How to check if the series $\sum_{n=1}^{\infty} n\cdot \sin(\frac{1}{n})$ is convergent or divergent? $$\sum_{n=1}^{\infty} n\cdot \sin(\frac{1}{n})$$ Which test I should use? Thank you so much for your help! AI: The sine function has slope $1$ where it crosses the axis at the origin. Therefore it lies above the line $y=\frac12 x$ if $x$ is positive but close to $0$. Therefore $$ \sin\frac1n > \frac12\cdot\frac1n. $$ (At least if $n$ is big enough to make $\frac1n$ close enough to $0$. But in this case $n=1$ is already big enough.) So if $$ \sum_{n=1}^\infty n\cdot\frac12\cdot\frac1n $$ diverges, then so does your sum.
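In fact $n\sin(1/n)\to 1$, so the terms do not even tend to $0$; the divergence is visible immediately:

```python
import math

terms = [n * math.sin(1 / n) for n in (1, 10, 100, 10_000)]
# sin x = x - x^3/6 + ... gives n*sin(1/n) = 1 - 1/(6 n^2) + ..., so terms -> 1
assert all(t > 0.5 for t in terms)   # the answer's bound sin(1/n) > (1/2)(1/n)
assert abs(terms[-1] - 1) < 1e-8     # terms do not tend to 0, so the series diverges
```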
H: Naming general objects in more than 3 dimensions In a paper I am writing, I need to talk about a general "object" formed by the points of a connected set in an $n$-dimensional euclidean space. I have found some suggestion here, but none fit my needs. The "object" I am concerned with do not have any specific shape restriction : I cannot speak of $n$-polytope, $n$-spheres, etc. I cannot either use $n$-curve or hypercurve as these objects I am talking about could be "thick". So I would really need some general terminology. 1) Do you know of terminology in use? What is important to me is to convey the idea that the object lies in a multidimensional space and could be "thick". I have googled "hypershape", "hyperobject", $n-object" and "n-shape" but these do not seem to be in use. 2) I am tempted to use either "hyperobject" or "$n$-object". Do you think this would be understood? AI: The comments constitute a good answer: We give names to things in mathematics in order to suggest certain intuitive ideas to the reader. What aspects of this set are important? [...] I would just call it an "n-dimensional region" or just a "connected region in $\mathbb{R}^N$ (Sammy Black) @ Sammy Black : the exact number of dimensions itself is not essential. What is essential is to convey the idea that the object could be "thick", and lies in a multidimensional space. Let me edit the question again to make this clearer. (OP) "Object" is a little too broad; depending on context it could refer to anything. I second Sammy's suggestion of just calling it a region in n-dimensional space. (Rahul Narain)
H: Galois Group of $\sqrt{2+\sqrt{2}}$ over $\mathbb{Q}$ So I want to show that $\mathbb{Q}(\sqrt{2+\sqrt{2}})$ is Galois over $\mathbb{Q}$ and determine its Galois group. My thoughts are as follows: Define $\alpha := \sqrt{2+\sqrt{2}}$. Then it is easily shown that $\alpha$ satisfies $\alpha^4-4\alpha^2+2=0$. Define $f(x) := x^4-4x^2+2$. Then $f$ is irreducible over $\mathbb{Q}$ by Eisenstein with $p=2$. So we have that $f$ is the irreducible polynomial for $\alpha$ over $\mathbb{Q}$. Further $|\mathbb{Q}(\alpha):\mathbb{Q}|=4$. For $\mathbb{Q}({\alpha})$ to be Galois, it must contain all roots of $f$. Define $K=\mathbb{Q}(\alpha)$ for convenience. Define $\alpha := \alpha_1$. Since $f$ has only even powers, we know that $-\alpha := \alpha_2$ is a root, and therefore contained in $K$ since $K$ is a field. We note that the other two roots are $\alpha_3=\sqrt{2-\sqrt{2}}$ and $\alpha_4=-\sqrt{2-\sqrt{2}}$. So in order to show $K$ is Galois, it must be shown that $\alpha_3$ and $\alpha_4$ lie in $K$. Now $\alpha_1^2=2+\sqrt2$ and so $\sqrt2 \in K$. Thus $-\sqrt2 \in K$ since $K$ is a field. Can somebody explain why $\alpha_3$ and $\alpha_4$ lie in $K$? Next we are to determine the Galois group of $K$. Assuming $K$ is Galois, since it has degree $4$ over $\mathbb{Q}$ (shown earlier), we know that its Galois group has size $4$. There are only two groups of size $4$, namely $V_4$ and $C_4$, the Klein four group and the cyclic group of order $4$. How do we determine which of these choice is in fact the Galois Group? AI: $\alpha^2-2=\sqrt 2\in K$ by closure of multiplication and addition. $$\frac{\sqrt 2}{\sqrt{2+\sqrt{2}}}=\frac{\sqrt 2 \cdot\sqrt{2-\sqrt 2}}{\sqrt{2+\sqrt{2}}\sqrt{2-\sqrt 2}}=\frac{\sqrt 2\cdot\sqrt{2-\sqrt 2}}{\sqrt{4-2}}=\sqrt{2-\sqrt 2}$$ Since $K$ is a field, it has multiplicative inverses and is closed under multiplication, so $\sqrt{2-\sqrt 2}\in K$. We can determine the nature of $\mathrm{Gal}(K/\mathbb{Q})$ by the order of each element. 
If $f$ is a field automorphism of $K$ and $f(\sqrt{2+\sqrt 2})=\sqrt{2-\sqrt 2}$, then $f(\sqrt 2)=f(\alpha^2-2)=f(\alpha)^2-2=-\sqrt 2$. Therefore $$f(f(\alpha))=f\left(\sqrt{2-\sqrt 2}\right)=f\left(\frac{\sqrt{2}}{\sqrt{2+\sqrt 2}}\right)=\frac{f(\sqrt 2)}{f(\sqrt{2+\sqrt 2})}=\frac{-\sqrt{2}}{\sqrt{2-\sqrt{2}}}=-\sqrt{2+\sqrt 2}$$ Therefore $\mathrm{ord}(f)> 2$ and must divide $4=|\mathrm{Gal}(F/\mathbb{Q})|$, so $\mathrm{ord}(f)=4$. It follows that the Galois group is cyclic and abelian.
H: Spanning Trees of the Complete Graph minus an edge I am studying Problem 43, Chapter 10 from A Walk Through Combinatorics by Miklos Bona, which reads... Let $A$ be the graph obtained from $K_{n}$ by deleting an edge. Find a formula for the number of spanning trees of $A$. So how I approached this problem was by creating the Laplacian of A. I set the edge to be deleted as the edge between the first and second vertices in the graph. After an obscene amount of potentially dubious matrix operations, I received a result of... $n^{n-3}*[2n^{3}-5n^{2}+3n \pm 1]$ Can anyone shed some light on this problem? I feel as I am approaching it the wrong way... AI: There's no need to consider the Laplacian. We can obtain this by a simple symmetry argument. Every edge of the complete graph is contained in a certain number of spanning trees. By symmetry, this number is the same for each edge, call it $k$. Let us now count the total number of edges in all spanning trees in two different ways. First, we know there are $n^{n-2}$ spanning trees, each with $n-1$ edges. Therefore there are a total of $(n-1)n^{n-2}$ edges contained in the trees. On the other hand, there are $\binom{n}{2} = \frac{n(n-1)}{2}$ edges in the complete graph, and each edge is contained in precisely $k$ trees. This means there are a total of $\binom{n}{2}k$ edges. This gives us $$(n-1)n^{n-2} = \binom{n}{2}k$$ which upon simplification gives $k=2n^{n-3}$. If we delete an edge, then we effectively remove the set of all spanning trees containing that edge. By assumption that number is $k$. Therefore there will remain $$n^{n-2} - k = n^{n-2} - 2n^{n-3} = n^{n-3}(n-2)$$ total spanning trees.
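The formula $n^{n-3}(n-2)$ can be cross-checked by brute force for small $n$: enumerate all $(n-1)$-edge subsets of $K_n$ minus an edge and count the acyclic ones, which are exactly the spanning trees (union-find is just one convenient way to test this):

```python
from itertools import combinations

def spanning_trees(n):
    """Brute-force count of spanning trees of K_n with the edge (0, 1) removed."""
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if (i, j) != (0, 1)]
    count = 0
    for subset in combinations(edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False   # adding this edge closes a cycle
                break
            parent[ru] = rv
        # n - 1 acyclic edges on n vertices form a spanning tree
        count += acyclic
    return count

for n in (3, 4, 5, 6):
    assert spanning_trees(n) == n ** (n - 3) * (n - 2)
```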
H: Counting six-letter strings over $\{a,b,c,d,e\}$ containing a single $a$ Consider all strings whose letters belong to the set: $A = \{ a, b, c, d, e\}$ How many strings of length $6$ are there that contain exactly one $a$? Attempt: Since we are only using $4$ of the $5$ letters for the rest of the string, there are $1 \cdot 4^5$ strings that contain exactly one $a$. Book answer: $6 \cdot 4^5$ What am I doing wrong? Where does the six come from? AI: For strings, the order matters. So, you can put the one "a" in any one of the six positions. There are $4^5$ ways to fill up the rest. Hence, you multiply $4^5$ by $6$.
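For a quick brute-force confirmation of the book's answer (my own sketch, not from the thread):

```python
from itertools import product

# enumerate all 5^6 = 15625 strings over {a, b, c, d, e} of length 6
count = sum(1 for s in product("abcde", repeat=6) if s.count("a") == 1)
assert count == 6 * 4 ** 5  # one of 6 positions for 'a', 4 choices elsewhere
```

The count is $6\cdot 4^5 = 6144$, confirming the factor of six from the position of the single $a$.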
H: Prove a summation inequality by induction: $\sum_{i=1}^n \frac{3}{4^i} < 1$ I was having trouble proving this by induction. $$\sum_{i=1}^n \frac{3}{4^i} < 1$$ for all $n \geq 2$. I went to see my professor and he said to try proving this inequality instead: $$\sum_{i=1}^n \frac{3}{4^i} < 1 - 1/4^n $$ Where did he get the $1-(1/4^n)$ from? How would I prove this? And is it still proving the same inequality? AI: The "improved" inequality is wrong as stated: it should be $\le$ (or even $=$) instead of $<$. You can hardly use induction with the original inequality: if you only have $s_n<1$, you cannot conclude that $s_{n+1}<1$, because you always have $s_{n+1}>s_n$. In other words, you need $s_n$ to be sufficiently smaller than $1$ (and need to show that $s_{n+1}$ is not just smaller, but sufficiently smaller than $1$). You might get the $1-1/4^n$ from looking at the first few sums ($\frac34$, $\frac{15}{16}$, $\frac{63}{64}$) and spotting the pattern. As it turns out, the stronger statement (or even the equality) is much easier to prove. Proof by induction is then straightforward. Since $1-1/4^n<1$ you also obtain the originally desired result.
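The professor's stronger statement can be checked with exact rational arithmetic (my own check, not part of the thread):

```python
from fractions import Fraction

# the partial sums are exactly 1 - 1/4^n, hence strictly below 1
s = Fraction(0)
for n in range(1, 25):
    s += Fraction(3, 4 ** n)
    assert s == 1 - Fraction(1, 4 ** n)
    assert s < 1
```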
H: Let $A$ be an orthogonal $n\times n$ matrix. Show that $\|A\vec x\|=\|A^{-1}\vec x\|$ for any vector $\vec x$ in $\mathbb R^n$ Let $A$ be an orthogonal $n\times n$ matrix. Show that $\|A\vec x\|=\|A^{-1}\vec x\|$ for any vector $\vec x$ in $\mathbb R^n$. I want to show that $\|A\vec x\|=\|A^{-1}\vec x\|=\|\vec x\|$. I tried to show that since $A^TA=I$, then using $A^T=A^{-1}$, $\|A^{-1}\vec x\|^2=(A^{-1}\vec x)\cdot(A^{-1}\vec x)=(A^{-1}\vec x)^T(A^{-1}\vec x)=\vec x^T(A^{-1})^TA^{-1}\vec x=\vec x^T(A^{T})^TA^{-1}\vec x=\vec x^TAA^{-1}\vec x$. I got stuck here since by definition $A^TA\neq AA^T$ (or is it)? Any hint is appreciated! AI: Hint: You need the fact that a matrix $Q$ is orthogonal if and only if its transpose is equal to its inverse: $$Q^\mathrm{T}=Q^{-1}, \,$$ which entails $$ Q^\mathrm{T} Q = Q Q^\mathrm{T} = I.$$ In particular, an orthogonal matrix does commute with its transpose, so your computation finishes: $\vec x^TAA^{-1}\vec x=\vec x^T\vec x=\|\vec x\|^2$.
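To see the claim numerically (my own illustration): a $2\times 2$ rotation matrix is orthogonal, and its transpose is visibly its inverse.

```python
import math

theta = 0.7
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]         # orthogonal: A^T A = I
A_inv = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]  # transpose = inverse

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def norm(v):
    return math.hypot(*v)

x = [3.0, -4.0]  # ||x|| = 5
assert abs(norm(matvec(A, x)) - norm(x)) < 1e-12
assert abs(norm(matvec(A_inv, x)) - norm(x)) < 1e-12
```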
H: Solution Verification: Given $|A\cup B|=45, |A|=30,$ and $|A\cap B|=7,$ find $|B|$ Given |A∪B|=45, |A|=30, and |A∩B|=7, find |B|. If I am not mistaken here is how I am reading the scenario: B must have 22 elements. The 7 that it shares with A, and then 15 of its own unique elements. 23 unique to A, 7 shared by both, and 15 unique to B. 23 + 7 + 15 = 45. Is this correct? AI: There's a cool identity you can use here: $$|A\cup B| = |A| + |B| - |A \cap B|$$ $$ \Longrightarrow 45 = 30 + |B| - 7$$ $$ \therefore |B| = 22$$ The identity makes sense if you think about it: we subtract the intersection at the end, so we don't double-count the elements that are common to both $B$ and $A$.
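A pair of concrete sets realizing the given sizes (the example sets are my own) confirms the count:

```python
A = set(range(30))        # |A| = 30
B = set(range(23, 45))    # 7 elements shared with A, 15 new ones
assert len(A & B) == 7
assert len(A | B) == 45
# inclusion-exclusion rearranged for |B|
assert len(B) == len(A | B) - len(A) + len(A & B) == 22
```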
H: dividing a unit in several different ways There is a king who wants to divide his kingdom among his infinity of daughters. Suppose he wants to divide the kingdom evenly. It seems that under such conditions, each of the daughters gets an infinitely small piece of the kingdom. Suppose on the other hand that the king plays favourites. He gives half of the kingdom to the first daughter, half of the rest to the second daughter, and so on. In this case each daughter gets a finite non-zero part of the kingdom. So in the second case each daughter is getting more, and yet we are dividing the same quantity into the same number of pieces. How is this possible? AI: It is just not possible to divide the kingdom evenly between infinitely many daughters. There does not exist a convergent series $\Sigma_{n=1}^{\infty} a_n$ whose terms are all equal and positive: the terms of a convergent series must tend to zero. The second case is possible because the series $\Sigma_{n=1}^{\infty} 2^{-n}$ is convergent.
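The second scheme is just the geometric series $\Sigma_{n=1}^{\infty} 2^{-n}$; a small exact-arithmetic sketch (my own) of how the shares behave:

```python
from fractions import Fraction

kingdom = Fraction(1)
shares = []
for _ in range(20):
    share = kingdom / 2          # half of whatever remains
    shares.append(share)
    kingdom -= share

# after n daughters, exactly 1/2^n of the kingdom is still undistributed
assert kingdom == Fraction(1, 2 ** 20)
assert sum(shares) == 1 - Fraction(1, 2 ** 20)
assert shares[4] == Fraction(1, 32)  # daughter 5 receives 2^(-5)
```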
H: $\mathbb Q[x]/(x^2+1)$ is not isomorphic to $\mathbb Q[x]/(x^2+2)$ I found an argument online that the two fields are not isomorphic but I can't make sense of the argument. You find it here. These are the things I'm confused about when it comes to the argument. It says that the field $\mathbb Q[x]/(x^2+1)$ contains a root of the polynomial $x^2+1$. I suppose this is the congruence class $[x]$? But this is not the same as the zero element of the field $\mathbb Q[x]/(x^2+1)$, right? It says that every isomorphism takes $0$ to $0$. I interpret this as the zero element in $\mathbb Q[x]/(x^2+1)$ (which I interpret to be $[0]$) being taken to the zero element in $\mathbb Q[x]/(x^2+2)$, denoted $[0]_2$. But in the argument it says that this means that if $\mathbb Q[x]/(x^2+1)$ has a root of the polynomial, then also $\mathbb Q[x]/(x^2+2)$ must have a root of this same polynomial. Why is that? Grateful for any help! AI: 1) Yes to both. Note that $[x]^2 + [1] = [x^2 + 1] = [0]$. 2) Not only must every isomorphism take $0$ to $0$, it must take every element $a$ of ${\mathbb Q}$ to itself (i.e., it must take $[a]$ to $[a]$). Even better, it must preserve polynomial expressions over ${\mathbb Q}$, in the sense that an element $a_0 + a_1 \beta + a_2 \beta^2 + \dots + a_k \beta^k$ (with the $a_i \in {\mathbb Q}$ and $\beta \in {\mathbb Q}[x]/(x^2+1)$) maps to $a_0 + a_1 \beta' + a_2 \beta'^2 + \dots + a_k \beta'^k$, where $\beta' \in {\mathbb Q}[x]/(x^2+2)$ is the image of $\beta$. In particular, if $1 + \beta^2 = 0$, then $1 + \beta'^2 = 0$. Now because there is indeed such an element $\beta \in {\mathbb Q}[x]/(x^2+1)$ with $1 + \beta^2 = 0$ (namely $[x]$), there should also be a $\beta' \in {\mathbb Q}[x]/(x^2+2)$ with $1 + \beta'^2 = 0$ (assuming that the fields are isomorphic). What is left is to show that such an element of ${\mathbb Q}[x]/(x^2 + 2)$ does not exist. For this, write $\beta' = [a_0 + a_1 x]$ with $a_0, a_1 \in {\mathbb Q}$. 
Work out $1 + \beta'^2 = [1] + [a_0 + a_1 x]^2$ using $[x]^2 = [-2]$ in the form $[b_0 + b_1 x]$ and see that this can never be $[0]$.
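Carrying out that final computation explicitly (my own working of the suggested step): with $\beta' = [a_0 + a_1 x]$ and $[x]^2 = [-2]$, $$1 + \beta'^2 = [1 + a_0^2 - 2a_1^2 + 2a_0a_1 x].$$ For this to equal $[0]$ we need both $2a_0a_1 = 0$ and $1 + a_0^2 - 2a_1^2 = 0$. If $a_1 = 0$, the second equation forces $a_0^2 = -1$, impossible for $a_0 \in \mathbb{Q}$; if $a_0 = 0$, it forces $a_1^2 = \tfrac{1}{2}$, impossible since $\sqrt{1/2} \notin \mathbb{Q}$. Hence $x^2+1$ has no root in $\mathbb Q[x]/(x^2+2)$, and the fields are not isomorphic.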
H: Proof by induction; simplify when adding the $(k+1)$th term. Understanding induction. I want to prove: $$(-\frac{1}{2})^0 + (-\frac{1}{2})^1 + \cdots + (-\frac{1}{2})^k + (-\frac{1}{2})^{k+1} = \frac{2^{k+1}+(-1)^k}{3\cdot2^k} + (-\frac{1}{2})^{k+1}$$ How do I simplify the last bit, $\frac{2^{k+1}+(-1)^k}{3\cdot2^k} + (-\frac{1}{2})^{k+1}$, so I don't have a case of $a+b$? Additionally, how is this considered a proof? I've looked up on Wikipedia re: induction, and the example they gave went something like "if the base case is true, assume $P(k)$ is true, and if $P(k+1)$ holds, then $P(k)$ is true". I don't make much sense of this. How do I know if $P(k+1)$ holds? On Wikipedia they simplified $P(k+1)$ and then directly said it holds without reasoning from what I can tell. Essentially Wikipedia stated the following: Check if $P(0)$ or $P(1)$ is true Let $k$ be an arbitrary number Assume $P(k)$ is true Simplify $P(k+1)$ $P(k+1)$ holds because we simplified it <-- this is where I'm tripping up. How do I show that $P(k+1)$ holds? AI: Okay, let's follow your steps. $P(0)\iff \left(-\frac{1}{2}\right)^0=\frac{2^1+(-1)^{0}}{3\times2^{0}}=\frac{3}{3}=1$, which is true. Now, assume $P(k)$ is true. Then $S_k:=\sum_{n=0}^k \left(-\frac{1}{2}\right)^n=\frac{2^{k+1}+(-1)^{k}}{3\cdot 2^k}$. $$\begin{align}S_{k+1}&=S_k+\left(-\frac{1}{2}\right)^{k+1}\\ &=\frac{2^{k+1}+(-1)^{k}}{3\cdot 2^k}+\left(-\frac{1}{2}\right)^{k+1}\\ &=\frac{2^{k+2}+2(-1)^{k}}{3\cdot 2^{k+1}}+\frac{3(-1)^{k+1}}{3\cdot 2^{k+1}}\\ &=\frac{2^{k+2}-2(-1)^{k+1}+3(-1)^{k+1}}{3\cdot 2^{k+1}}\\ &=\frac{2^{k+2}+(-1)^{k+1}}{3\cdot 2^{k+1}} \end{align}$$ Therefore, under the assumption $P(k)$, $P(k+1)$ is true. Since we know $P(0)$ is true, $P(n)$ must be true for all $n\in\mathbb{N}_0$.
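The closed form for the partial sums can also be spot-checked exactly (my own check, not part of the answer):

```python
from fractions import Fraction

# verify S_k = (2^(k+1) + (-1)^k) / (3 * 2^k) for the first few k
s = Fraction(0)
for k in range(0, 30):
    s += Fraction(-1, 2) ** k
    assert s == Fraction(2 ** (k + 1) + (-1) ** k, 3 * 2 ** k)
```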
H: Finding the radius of convergence of $\sum_{n=1}^\infty\left(\frac{n+1}{n}\right)^{n^2}(z-2)^n$ Find this power series' radius of convergence and the region where it converges: $$\sum_{n=1}^\infty\left(\frac{n+1}{n}\right)^{n^2}(z-2)^n$$ My attempt: $L=\limsup_{n\to\infty}|a_n|^{1/n}=\limsup_{n\to\infty}\left(\frac{n+1}{n}\right)^{n^2\cdot\frac{1}{n}}=\limsup_{n\to\infty}\left(\frac{n+1}{n}\right)^{n}=1$. $R=\frac{1}{L}$, therefore $R=1$ and it converges for $|z-2|<1$. Is it correct? AI: If the series is $$\sum_{n=1}^{\infty} \left (1+\frac{1}{n}\right)^{n^2} (z-2)^n$$ then note that the coefficient behaves as $e^n$ as $n\to\infty$; your limit is off, since $\left(1+\frac{1}{n}\right)^n\to e$, not $1$. Thus, the series converges when $e |z-2| \lt 1$, because $$\left [\limsup_{n\to\infty} \left (1+\frac{1}{n}\right)^{n^2} \right ]^{1/n} = e$$ Therefore, the series converges for $$|z-2|\lt \frac{1}{e}$$
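Numerically (my own check, not part of the answer), the $n$-th root of the coefficient visibly approaches $e$ rather than $1$:

```python
import math

# the n-th root of the coefficient ((n+1)/n)^(n^2) is (1 + 1/n)^n,
# which tends to e, not 1 -- hence the radius of convergence is 1/e
for n in (10, 1000, 1_000_000):
    root = (1 + 1 / n) ** n
    print(n, root)

n = 1_000_000
assert abs((1 + 1 / n) ** n - math.e) < 1e-4
```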
H: dividing an octave into $7$ parts instead of $12$ Usually an octave is divided into $12$ parts based on the harmonic series (basic zeta function). How can I calculate the frequency of a note if I divide the octave into $7$ parts? $N_1=A_4(440\,\mathrm{Hz})$ $N_8=N_1\cdot 2=A_5(880\,\mathrm{Hz})$ $N_2,\dots,N_7$? Thanks guys! AI: If you want equal musical intervals, then you need to multiply each frequency by a factor of $2^{1/7}$ to get the next higher frequency. Thus, your frequencies will be $440, 440\times 2^{1/7}, 440\times 2^{2/7}, \dots, 440\times 2^{7/7}=880$. Edit: The reasoning goes like this: $$N_1 = 440 \\ N_2 = 440r \\ N_3 = 440r^2 \\ \ldots \\ N_8 = 440r^7 = 880$$ So that $$r^7 = 2$$ I warn you that the harmonies will sound terrible!
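Spelled out in code (my own sketch of the answer's formula):

```python
# equal-tempered 7-note octave starting at A4 = 440 Hz
freqs = [440 * 2 ** (k / 7) for k in range(8)]

assert freqs[0] == 440
assert abs(freqs[7] - 880) < 1e-9
# consecutive notes are separated by the constant ratio 2^(1/7)
for lo, hi in zip(freqs, freqs[1:]):
    assert abs(hi / lo - 2 ** (1 / 7)) < 1e-12
```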
H: How to check if the series $\sum_{n=0}^{\infty} \left(\sqrt{n+1}-\sqrt{n}\right)$ is convergent or divergent? I tried a few tests, but I didn't succeed in determining whether the series is convergent or divergent... $$\sum_{n=0}^{\infty} \left(\sqrt{n+1}-\sqrt{n}\right)$$ Thank you! AI: Let $S_n$ be the sequence of partial sums: $$S_n = \sum_{k=0}^n \left(\sqrt{k+1} - \sqrt{k}\right)$$ It is easy to see that $S_0 = 1$ and by telescoping $$S_n = \sqrt{n+1}$$ Since convergence of a series is defined through convergence of the partial sums and since $S_n$ obviously diverges, the series diverges as well.
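The telescoping identity is easy to confirm numerically (my own check):

```python
import math

# partial sums telescope: S_{n-1} = sqrt(n), which grows without bound
s = 0.0
for k in range(1000):
    s += math.sqrt(k + 1) - math.sqrt(k)

assert abs(s - math.sqrt(1000)) < 1e-9
```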
H: Prove that integrals of a uniformly convergent sequence converge uniformly Show that if $f_n \to f$ uniformly on $[a,b]$ and $f_n$ is integrable for each $n$, then $\int_{a}^{x}f_n(t)dt\to \int_{a}^{x}f(t)dt$ uniformly in $x$ on $[a,b]$. I know how to prove that $\int_{a}^{b}f_n(x)dx\to \int_{a}^{b}f(x)dx$. I want to make sure: is it the same thing? AI: Nearly so. Did you understand the proof you have pasted above? $$ \left|\int_a^x (f_n(t) - f(t))\,dt\right| \leq \int_a^x |f_n(t) - f(t)| \,dt \leq \int_a^b |f_n(t) - f(t)| \,dt \leq \int_a^b \frac{\epsilon}{b-a} \,dt = \epsilon,$$ for all $n$ sufficiently large. The middle inequality holds because $|f_n(t) - f(t)|$ is nonnegative, so enlarging the interval of integration from $[a,x]$ to $[a,b]$ can only increase the integral; the last one uses uniform convergence, which bounds $|f_n(t)-f(t)|$ by $\epsilon/(b-a)$ on all of $[a,b]$ for $n$ large. The key point is that the final bound $\epsilon$ does not depend on $x$, which is exactly what uniform convergence in $x$ requires. (I've left out obvious details that are already present in what you've pasted above.)
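A concrete instance of the theorem (my own example, not from the thread): $f_n(t) = t + \sin(nt)/n \to t$ uniformly on $[0,1]$, and the antiderivatives differ by $(1-\cos(nx))/n^2$, which is at most $2/n^2$ for every $x$ at once.

```python
import math

def F_n(x, n):
    # closed-form antiderivative of f_n(t) = t + sin(n t)/n from 0 to x
    return x ** 2 / 2 + (1 - math.cos(n * x)) / n ** 2

def F(x):
    # antiderivative of the uniform limit f(t) = t
    return x ** 2 / 2

for n in (1, 10, 100):
    # the sup over x in [0, 1] of the gap is bounded by 2/n^2, uniformly in x
    sup = max(abs(F_n(x / 1000, n) - F(x / 1000)) for x in range(1001))
    assert sup <= 2 / n ** 2 + 1e-12
```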
H: Prove the Jordan lemma i.e. $\int e^{-R\sin{\theta}}< \pi/R$ In complex variables my instructor wrote on the board "Jordan's Lemma", and then, somewhat imprecisely, $$\int e^{-R\sin{\theta}}< \pi/R \;\;\;\; \text{ e.g. } \int \frac{s \sin{x}}{x^2 + 2x + 2}.$$ I have searched for a reference for this result without success. I have many of the major books on linear algebra. Can anyone provide me a reference, or else provide the idea behind this? AI: You probably are concerned with this integral: $$\int_0^{\pi} d\theta \, e^{-R \sin{\theta}}$$ By the symmetry of $\sin\theta$ about $\theta = \pi/2$, this is twice the integral over $[0,\pi/2]$, where we may use Jordan's inequality: $$\sin{\theta} \ge \frac{2}{\pi} \theta, \qquad 0 \le \theta \le \frac{\pi}{2}$$ Thus we have $$\int_0^{\pi} d\theta \, e^{-R \sin{\theta}} \le 2\int_0^{\pi/2} d\theta \, e^{-2 R \, \theta/\pi} = \frac{\pi}{R} \left (1-e^{-R}\right ) \lt \frac{\pi}{R}$$
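A numerical sanity check of the bound (my own, using a simple midpoint rule):

```python
import math

def jordan_integral(R, steps=20000):
    # midpoint-rule estimate of the integral of exp(-R sin t) for t in [0, pi]
    h = math.pi / steps
    return sum(math.exp(-R * math.sin((k + 0.5) * h)) for k in range(steps)) * h

for R in (1.0, 5.0, 50.0):
    assert jordan_integral(R) < math.pi / R
```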
H: How to calculate equivalence relations How can I calculate how many equivalence relations can be defined on a given set? For example: How many possible equivalence relations can be defined on $S = \{a,b,c,d\}$? AI: Okay, so equivalence relations effectively partition $S$ into subsets where each element in a given subset is related to each other element in that subset. This means you simply need to count all the ways you can partition four elements. The number of partitions of an $n$-element set is the Bell number $B_n$; here $B_4 = 15$, so there are $15$ equivalence relations on $S$.
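The correspondence can be verified by brute force (my own sketch): since reflexivity and symmetry can be built in, it suffices to enumerate subsets of the unordered pairs and check transitivity.

```python
from itertools import combinations, product

def count_equivalence_relations(n):
    # reflexive + symmetric is forced by construction; only transitivity
    # needs checking, so enumerate subsets of the C(n, 2) unordered pairs
    elems = list(range(n))
    unordered = list(combinations(elems, 2))
    count = 0
    for bits in product([0, 1], repeat=len(unordered)):
        R = {(i, i) for i in elems}
        for (i, j), b in zip(unordered, bits):
            if b:
                R |= {(i, j), (j, i)}
        if all((i, l) in R for (i, j) in R for (k, l) in R if j == k):
            count += 1
    return count

# matches the Bell numbers B_1..B_4 = 1, 2, 5, 15
assert [count_equivalence_relations(n) for n in range(1, 5)] == [1, 2, 5, 15]
```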
H: Is $L^2(\Omega)$ the only $L^p$ Hilbert space? I've started today studying Hilbert spaces, and all of the examples seen in class were about the space $L^2(\Omega)$, where $\Omega$ is a bounded domain in $\mathbb{R}^N$ $(N \geq 1)$. Online I didn't manage to find anything related to my doubt regarding other $p \neq 2$. So my attempt is to find a function $v : \ \Omega \to \mathbb{C}$ in $L^p(\Omega)$ ($p \neq 2$) for which $$ \langle v, v \rangle := \int_{\Omega} v(x)\overline{v(x)} dx \ \ \notin \mathbb{C} $$ I'm trying to use the fact that Hölder's inequality can't be used here ($p \neq p'$ if $p \neq 2$), and so the integral may diverge. The conclusion would be that the "usual" inner product over $L^2$ can't be extended to every $1 \leq p < \infty$. But I can't manage to find one. Is my attempt correct? Can someone provide me a counterexample? I apologize in advance for not being precise, but it's my first time with a Hilbert space :P AI: Each $L^p$ space for $p \ge 1$ is what is called a Banach space (i.e., a complete normed vector space). Every Hilbert space is a Banach space. There is a way to characterize all the Banach spaces that are Hilbert spaces. Essentially, a Banach space is a Hilbert space iff its norm satisfies the parallelogram law: $$2\|f\|^2 + 2\|g\|^2 = \|f-g\|^2 + \|f+g\|^2$$ This works for $L^2$. You should look for a violation of this for $p\neq 2$. For the counterexample you are looking for, here is one such function for $L^1[0,1]$. Take the function $v(x) = 1/\sqrt{x}$. That function is indeed integrable on the interval, but $$\int_{0}^1 \frac{1}{\sqrt{x}} \frac{1}{\sqrt{x}} dx = \int_{0}^1 \frac1x dx = \infty$$ You can fashion similar examples for other spaces.
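A finite-dimensional analogue of the same phenomenon (my own sketch): in $\ell^p$ on $\mathbb R^2$, with test vectors $f=(1,0)$ and $g=(0,1)$, the parallelogram law holds exactly when $p=2$ and fails otherwise.

```python
def lp_norm(v, p):
    return sum(abs(x) ** p for x in v) ** (1 / p)

def parallelogram_gap(p):
    # gap = (2||f||^2 + 2||g||^2) - (||f+g||^2 + ||f-g||^2) for f=(1,0), g=(0,1)
    f, g = (1.0, 0.0), (0.0, 1.0)
    lhs = 2 * lp_norm(f, p) ** 2 + 2 * lp_norm(g, p) ** 2
    s = (f[0] + g[0], f[1] + g[1])
    d = (f[0] - g[0], f[1] - g[1])
    rhs = lp_norm(s, p) ** 2 + lp_norm(d, p) ** 2
    return lhs - rhs

assert abs(parallelogram_gap(2)) < 1e-12   # the law holds for p = 2
assert parallelogram_gap(1) == -4.0        # 4 vs 8: fails for p = 1
assert abs(parallelogram_gap(3)) > 0.5     # fails for p = 3 as well
```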
H: Prove that a bounded sequence contains all of its accumulation points Homework question: Let {$x_n$} be a bounded sequence. Prove that the set of all subsequential limits of {$x_n$} is closed. Any help would be appreciated. (I edited this so that anyone who looks at this in the future will not get confused by my incorrect interpretation of the question. It now matches the second part of the solution, instead of both the second and the first). AI: The statement that if $(x_n)$ is bounded then $(x_n)$ contains all its accumulation points is false. Take for example $x_n=1/n$. This is bounded, but zero is an accumulation point which is not part of the sequence. If you ask to prove that the set of limit points is closed, this is true. If $y_n$ is a sequence of limit points converging to $y$, then for each $n$ there is a term $x_{k_n}$ of the sequence such that $|x_{k_n}-y_n|<1/n$ (each $y_n$ is approached arbitrarily closely by terms of the sequence, and we may choose the indices $k_n$ strictly increasing). As a consequence $$ |x_{k_n}-y|\leq |x_{k_n}-y_n|+|y_n-y| \to 0$$ so the subsequence $(x_{k_n})$ converges to $y$, which makes $y$ a limit point.
H: Solution Verification: Maximum number of edges, given 8 vertices Suppose a simple graph G has 8 vertices. What is the maximum number of edges that the graph G can have? The formula for this I believe is n(n-1) / 2 where n = number of vertices. 8(8-1) / 2 = 28. Therefore a simple graph with 8 vertices can have a maximum of 28 edges. Is this correct? AI: Yes, it is. The maximum number of edges is simply the number of pairs of distinct vertices; if there are $n$ vertices, this is $$\binom{n}2=\frac{n!}{2!(n-2)!}=\frac{n(n-1)}2\;.$$
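As a one-liner check (my own), enumerating the $\binom{8}{2}$ possible edges directly:

```python
from itertools import combinations
from math import comb

n = 8
edges = list(combinations(range(n), 2))  # every unordered pair of vertices
assert len(edges) == comb(n, 2) == n * (n - 1) // 2 == 28
```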