H: Invertibility of row and column operations I have a problem and a proposed plan for a solution. Please tell me if I'm on the right track. Problem: What happens if instead of $1$ row operation and then $1$ column operation, the reverse order is performed on a matrix? I'm thinking: There are $3$ types of row operations, and hence $3$ types of column operations. Also, there are square and non-square matrices. So by the multiplication principle, I need to perform calculations for $3\times3\times2 = 18$ cases of row $+$ column operations on matrices. Is this all right? AI: Yes, there are $3$ elementary column operations. Yes, there are square and non-square matrices. I'm not sure what you are asking for your last question though.
H: Integration By Parts with a definite integral I've got the following: \begin{align} \int_{0}^{1}\int_{0}^{y^{2}}\frac{y}{x^{2}+y^{2}}\ dx\ dy&=\int_{0}^{1}\left.\arctan{\left(\frac{x}{y}\right)}\right|_{x=0}^{x=y^{2}}\ dy\\ &=\int_{0}^{1}\arctan{(y)}\ dy\\ &=y\arctan{(y)}-\int_{0}^{1}\frac{y}{1+y^{2}}\ dy \end{align} I can figure out the integral, but my question is - what should be done about the values of y on the left? I've never done IBP before on a definite integral, so it never came up. This is supposed to be done without switching the order of integration - I had considered that route but that's not until my next assignment. AI: You shouldn't have $y$ left on the first term — it should be $$\begin{bmatrix}y\arctan y\end{bmatrix}^1_0$$
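For completeness, carrying the corrected evaluation through (a standard computation): $$\int_{0}^{1}\arctan(y)\,dy=\Bigl[y\arctan y\Bigr]_{0}^{1}-\int_{0}^{1}\frac{y}{1+y^{2}}\,dy=\frac{\pi}{4}-\Bigl[\tfrac12\ln(1+y^{2})\Bigr]_{0}^{1}=\frac{\pi}{4}-\frac{\ln 2}{2}.$$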
H: Integrating trig functions with $R(\frac {z+1/z} {2}, \frac {z - 1/z} {2i} )$ Someone told me that there is a method for integrating rational functions $R(\cos{\theta}, \sin { \theta})$ by doing contour integration of the complex function $$\frac {R \left( \frac {z + \frac1z} {2}, \frac {z - \frac1z} {2i} \right)} {iz}.$$ I've looked through a couple of Complex Variables textbooks and haven't found it. No results on Uniquation. Does anyone have a reference for this method? AI: This ...is a good place to start.
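For reference, the method in question is the standard substitution $z=e^{i\theta}$: as $\theta$ runs over $[0,2\pi]$, $z$ traverses the unit circle once, and $$\cos\theta=\frac{z+\frac1z}{2},\qquad \sin\theta=\frac{z-\frac1z}{2i},\qquad d\theta=\frac{dz}{iz},$$ so that $$\int_0^{2\pi}R(\cos\theta,\sin\theta)\,d\theta=\oint_{|z|=1}\frac{R\left(\frac{z+\frac1z}{2},\frac{z-\frac1z}{2i}\right)}{iz}\,dz,$$ which one then evaluates by residues at the poles inside the unit circle. It is usually covered in the section on evaluating definite integrals by the residue theorem (for instance in Ahlfors or in Brown and Churchill).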
H: Proof of theorem about continuity $\textbf{4.2}\,\,$ Theorem $\,\,$ Let $X,Y,E,f$, and $p$ be as in Definition $4.1$. Then $$\lim_{x\to p}f(x)=q\tag{4}$$ if and only if $$\lim_{n\to\infty}f(p_n)=q\tag{5}$$ for every sequence $\{p_n\}$ in $E$ such that $$p_n\ne p,\quad\lim_{n\to\infty}p_n=p.\tag{6}$$ *Proof*$\quad$ Suppose $\text{(4)}$ holds. Choose $\{p_n\}$ in $E$ satisfying $\text{(6)}$. Let $\varepsilon>0$ be given. Then there exists $\delta>0$ such that $d_Y(f(x),q)<\varepsilon$ if $x\in E$ and $0<d_X(x,p)<\delta$. Also, there exists $N$ such that $n>N$ implies $0<d_X(p_n,p)<\delta$. Thus, for $n>N$, we have $d_Y(f(p_n),q)<\varepsilon$, which shows that $\text{(5)}$ holds. $\quad\quad\quad\,\,$ Conversely, suppose $\text{(4)}$ is false. Then there exists some $\varepsilon>0$ such that for every $\delta>0$ there exists a point $x\in E$ (depending on $\delta$), for which $d_Y(f(x),q)\ge\varepsilon$ but $0<d_X(x,p)<\delta$. $\color{yellow}{\boldsymbol{\underline{\color{black}{\text{Taking $\delta_n=1/n\;(n=1,\,2,\,3,\,\ldots)$, we thus find a sequence in $E$}}}}}$$\quad$ $\color{yellow}{\boldsymbol{\underline{\color{black}{\text{satisfying (6) for which (5) is false.}}}}}$ I'm having a hard time understanding the last line of the proof (which I highlighted). Which sequence satisfies $\text{(6)}$ but makes $\text{(5)}$ false? Thank you! AI: From the previous sentence: for each $\delta$, there exists a point $x\in E$ s.t. $d_Y(f(x), q) \geq \varepsilon$ but $0 < d_X(x,p) < \delta$. Take $(\delta_n)_{n\in\mathbb{N}}$ as defined, which is a sequence decreasing to $0$. For each $n$, consider the $x_n$ "corresponding to" $\delta_n$: $$x_n\in E,\qquad d_Y(f(x_n), q) \geq \varepsilon, \qquad 0 < d_X(x_n,p) < \delta_n$$ Then, since $\delta_n\xrightarrow[n\to\infty]{}0$, we have $\sf{(6)}$: $x_n\xrightarrow[n\to\infty]{}p$ (since $d_X(x_n,p)\to 0$). However, we can't have $\sf{(5)}$, since $$\forall n,\ d_Y(f(x_n), q) \geq \varepsilon > 0$$ i.e. $f(x_n)$ stays away from $q$ by a distance at least $\varepsilon$, and therefore cannot converge to $q$.
H: what is difference between 0 and infinity norm? Suppose $f$ is a real function on $\Omega$; both $\|f\|_\infty$ and $\|f\|_0$ are defined as $\sup_{x\in \Omega} |f(x)|$ in many books. Then, am I missing something from their definitions? AI: In the context that these are functions on a compact topological space $X$, the $C^0(X)$ norm is the sup norm, which, in the context of Riesz and others, is a limit of $L^p$-norms, so it is called the $L^\infty$-norm. For non-compact $X$, the continuous functions usually need not have finite sups, etc., so there'd be a divergence of notation and concept. That is, obviously abbreviations and economy are desirable and useful, but the specific choices are heavily dependent on context: not only on when, but on "which demographic".
H: Proof for planar embeddings Prove that any planar embedding of a simple connected planar graph contains a vertex of degree at most $3$ or a face of degree at most $3$. Can someone help me with this please? Thank you! AI: Hint: If every face has degree $4$ or more, then we can use Euler's polyhedron formula to show that $e \leq 2v - 4$. What does that imply about the degrees in the graph?
H: How to show this line is tangent to $f$ at point $a$? Let $f:I\to\mathbb{R}^n$ be a differentiable function, with $f'(a)\neq 0$ for some $a$ in the interval $I\subset\mathbb{R}$. If there exists a line $L\subset\mathbb{R}^n$ and a sequence $(x_k)$ in $I$ such that $x_i\neq x_j$ when $i\neq j$, $\lim x_k=a$ and $f(x_k)\in L$ for all $k\in\mathbb{N}$, then $L$ is the tangent line to $f$ at point $a$. This is what I've tried: the tangent line to $f$ at point $a$ is the set $T=\{f(a)+tf'(a);\;\;t\in\mathbb{R}\}$. So, it's needed to show that $L=T$. Suppose that $L=\{u+tv;\;\;t\in\mathbb{R}\}$ for some $u,v\in\mathbb{R}^n$. Then for all $k\in\mathbb{N}$ there exists $t_k\in \mathbb{R}$ such that $f(x_k)=u+t_kv$. Moreover there exists $t_a\in\mathbb{R}$ such that $f(a)=u+t_av$. Thus $$f'(a)=\lim_{k\to \infty}\frac{f(x_k)-f(a)}{x_k-a}=\lim_{k\to \infty}\frac{(u+t_kv)-(u+t_av)}{x_k-a}=\lim_{k\to \infty}\left(0u+\frac{t_k-t_a}{x_k-a}v\right)$$ Since $f$ is differentiable, it's continuous. So, $\lim f(x_k)=f(a)$. Therefore, we know that $f'(a),f(a)\in L$ (because $L$ is closed). Could someone give me a hint to finish? AI: You have most of the pieces, you just have to arrange them correctly. To use your language, we need to show that $L=\{u+tv;\;\;t\in\mathbb{R}\}$ satisfies $f(a)\in L$ and $v$ is parallel to $f'(a)$. proof that $f(a)\in L$: Note that $\lim_{k\to\infty}x_k=a$. Since $f$ is differentiable, it is continuous, which is to say that $\lim_{k\to\infty}f(x_k)=f(a)$. Since $f(x_k)\in L$ for each $k$, we know that $f(a)$ is a limit point of $L$. Since $L$ is closed, $f(a)\in L$. Thus, there is some $t_a$ so that $L(t_a)=f(a)$. proof that $v$ is parallel to $f'(a)$: As you stated, $$ f'(a)=\lim_{k\to \infty}\frac{f(x_k)-f(a)}{x_k-a} $$ However, since each $f(x_k)\in L$, we can also say that $$ \begin{align} L'(t_a)&=\lim_{k\to\infty}\frac{f(x_k)-f(a)}{t_k-t_a}\\ &=\lim_{k\to\infty}\frac{f(x_k)-f(a)}{x_k-a}\cdot\frac{x_k-a}{t_k-t_a}\\ &=f'(a)\cdot \lim_{k\to\infty} \frac{x_k-a}{t_k-t_a} \end{align} $$ Since $L'(t)=v$, we deduce that $v$ is a scalar multiple of $f'(a)$, which means that the two vectors are parallel.
H: Why is it that $\{\vec{x}\}$ is always an orthogonal set? Why is it that $\{\vec{x}\}$ is always an orthogonal set, assuming $\vec{x}\in\mathbb{R}^n$ and $\vec{x}\neq 0$? AI: The set $\{\vec{x}\}$ is orthogonal because for any two distinct elements $\vec{v},\vec{w}\in\{\vec{x}\}$, we have $\langle \vec{v},\vec{w}\rangle=0$. (See the Wikipedia article on vacuous truth.)
H: On a bijection between symmetric subsets of a group Given a group $G$, we can consider the subset $H$ of $G$ defined by: $$ H = \{ xyz : x, y, z\in G \textrm{ and } x, y, z \textrm{ are pairwise distinct}\}$$ Let $a\in G$ be an arbitrary element. I am interested in understanding the map $f_a: H\to H$ defined by $$ f_a(xyz)=(xa)(ya)(za) $$ It is not hard to see that the image of $f_a$ lies in $H$ (because, for example, if $xa=ya$, then $x=y$), so indeed $f_a$ is well defined. Also, we can show that $f_a$ is surjective: Given any $xyz\in H$, we have $$ f_a(xa^{-1}ya^{-1}za^{-1})=xyz$$ So my question is: Is it true that $f_a$ is always a bijection? Here are some partial results: a) If $G$ is finite (in which case $H$ is also finite), then $f_a$ is clearly a bijection, as every surjection between finite sets is a bijection. b) If $G$ is abelian, then $f_a$ is a bijection. This is because $f_a$ can be shown to be an injection: If $f_a(xyz)=f_a(x'y'z')$, then $(xa)(ya)(za)=(x'a)(y'a)(z'a)$, and since $G$ is abelian $xyza^{3}=x'y'z'a^{3}$ which gives $xyz=x'y'z'$, as desired. So it remains to investigate the case when $G$ is an infinite non-abelian group. By the way, I called $H$ the "symmetric subset" in the title, but feel free to edit it if some other term is more appropriate. AI: The map $xyz\mapsto (xa)(ya)(za)$ is not generally well-defined. For example, consider the free group generated by $x$ and $a$: here $xx^{-1}e$ and $x^{-1}xe$ are supposedly sent to different elements.
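Concretely, writing out the two images in the free group on $\{x,a\}$: $$f_a(xx^{-1}e)=(xa)(x^{-1}a)(ea)=xax^{-1}a^2,\qquad f_a(x^{-1}xe)=(x^{-1}a)(xa)(ea)=x^{-1}axa^2,$$ and these are distinct reduced words, even though $xx^{-1}e=x^{-1}xe=e$ as elements of $H$ (with $x$, $x^{-1}$, $e$ pairwise distinct in the free group).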
H: Estimate a upper bound of an infinite series. Assume $a>0$ and $a_n \geq 0$. how to verify that $$\sum_{n=1}^{\infty}\frac{a_n}{(a+S_n)^{3/2}}\leq \int_0^{\infty}\frac{1}{(a+x)^{3/2}}\mathrm{d}x$$ where $S_n = a_1+a_2+\cdots+a_n$ Thanks very much AI: $$ \int_{S_{n-1}}^{S_{n}}\frac{dx}{ (a+x)^{\frac{3}{2}}} \geq \int_{S_{n-1}}^{S_n} \frac{dx}{(a+S_{n})^\frac{3}{2}} = \frac{S_n - S_{n-1}}{(a+S_n)^\frac{3}{2}} = \frac{a_n}{(a+S_n)^\frac{3}{2}}$$
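To finish, sum this over $n\ge 1$ (with $S_0=0$): the intervals $[S_{n-1},S_n]$ are consecutive, so $$\sum_{n=1}^{\infty}\frac{a_n}{(a+S_n)^{3/2}}\leq \sum_{n=1}^{\infty}\int_{S_{n-1}}^{S_n}\frac{dx}{(a+x)^{3/2}}\leq \int_0^{\infty}\frac{dx}{(a+x)^{3/2}}=\frac{2}{\sqrt a}.$$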
H: Rationalisation Problem Demonstrate by rationalizing the denominator that: $$ \frac{1}{\sqrt{a}+\sqrt{b}+\sqrt{c}} = \frac{(\sqrt{a}+\sqrt{b}-\sqrt{c})(a+b-c-2\sqrt{ab})}{a^2 + b^2 + c^2 - 2(ab+ac+bc)} $$ AI: $$ \frac{1}{\sqrt{a}+\sqrt{b}+\sqrt{c}} =\frac{\sqrt a+\sqrt b-\sqrt c}{(\sqrt{a}+\sqrt{b}+\sqrt{c})(\sqrt a+\sqrt b-\sqrt c)}=\frac{\sqrt a+\sqrt b-\sqrt c}{(\sqrt a+\sqrt b)^2-(\sqrt c)^2}=\frac{\sqrt a+\sqrt b-\sqrt c}{a+b-c+2\sqrt{ab}}$$ Multiply the denominator & the numerator by $a+b-c-2\sqrt{ab}$ to get $$\frac1{\sqrt a+\sqrt b+\sqrt c}=\frac{(\sqrt a+\sqrt b-\sqrt c)(a+b-c-2\sqrt{ab})}{(a+b-c+2\sqrt{ab})(a+b-c-2\sqrt{ab})}$$ $$=\frac{(\sqrt a+\sqrt b-\sqrt c)(a+b-c-2\sqrt{ab})}{(a+b-c)^2-(2\sqrt{ab})^2}$$ $$=\frac{(\sqrt a+\sqrt b-\sqrt c)(a+b-c-2\sqrt{ab})}{a^2+b^2+c^2+2ab-2bc-2ca-4ab}=\frac{(\sqrt a+\sqrt b-\sqrt c)(a+b-c-2\sqrt{ab})}{a^2+b^2+c^2-2(ab+bc+ca)},$$ which is the desired identity.
H: Strong inductive proof for this inequality using the Fibonacci sequence. Problem I need to determine for what natural numbers is $2n < F_n$, where $F_n$ is the $n^{th}$ Fibonacci number determined by $F_0 = 0$, $F_1 = 1$ and $F_n = F_{n-1}+F_{n-2}$. I then need to prove my findings through strong induction. What I found I found that the inequality is true for all $n \ge 8$. My attempt at proving by induction Basis: $2(8) = 16 < 21 = F_8$: TRUE Assume: $2(k) < F_k$ Show: $2(k) < F_k$ implies $2(k+1) < F_{k+1}$ $2(k+1) = 2k + 2 < F_k + F_{k-1} = F_{k+1}$ Thus $2(k+1) < F_{k+1}$ Logic: $2k < F_k$ by induction hypothesis $2 < F_{k-1}$ because $F_{k-1}$ is at least $13$ when $k \ge 8$ $F_{k+1}$ is $F_k + F_{k-1}$. Is my proof correct? Is this considered strong induction? AI: IIRC, strong induction is when the induction depends on more than just the preceding value. In this case, you use the hypothesis for $k$ but not for any earlier values. Instead, you use a much weaker result ($F_{k-1} > 2$) for the earlier value. So, I would not call this strong induction. If you use the hypothesis ($F_n > 2n$) for both $k$ and $k-1$, the induction works because $F_k > 2k$ and $F_{k-1} > 2(k-1)$ together imply $F_{k+1} = F_k+F_{k-1} > 2k + 2(k-1) = 4k-2 > 2(k+1) $ when $k \ge 3$. Note that the induction step works when $k \ge 3$ but the induction hypothesis is true only when $k \ge 8$. So the first case where you can do the induction is $k = 9$, because you use the truth for $k=8$ and $k=9$ to prove it for $k=10$. I would call this moderate induction, since it depends on the previous two cases being true.
H: Related rates problem I'm learning single variable calculus; I finished a section on related rates several weeks ago. I'm sure the novelty of related rates and simple optimization problems will wear off eventually, but right now I'm having a lot of fun solving these kinds of problems and creating my own. My question is about the following problem: Consider the function $f(x)={x}\sqrt{a-x}$, $a∈ \mathbb{R}^{+}$. If the value of $a$ is increasing at some rate $R$, how fast is the maximum value of $f$ increasing when $a=k?$ Am I right in thinking that to solve this, you would: (1) Determine the maximum value of $f$ in terms of $a$. Since $f'(x)= \frac{2a-3x}{2\sqrt{a-x}},$ the global maximum occurs when $x=\frac{2a}{3}.$ Thus the maximum functional value is $f(\frac{2a}{3})=\frac{2\sqrt{3}{a}^{\frac 32}}{9}$. (2) Treat this maximum value as a function, $F(a)=\frac{2\sqrt{3}{a}^{\frac 32}}{9}$, which gives the maximum value of $f$ for any given $a$. (3) Differentiate $F(a)$ using the chain rule, yielding $F'(a)=R\cdot \frac{\sqrt{3}{k}^{\frac 12}}{3}?$ Is this correct? BTW, I just want to say how much I appreciate those who are willing to share their expertise on this site. I found out about this site a couple of days ago. I asked my first question about how to prove that certain bounded functions must have at least one inflection point, and within minutes--minutes!--an emeritus professor showed my how this could be done. He even took the time to guide me through a little hurdle in his proof. Just incredible. This site rocks. AI: First determine the maximum value of $f(x)$ as a function of a: $$\begin{align} f'(x)=&-\frac{x}{2\sqrt{a-x}}+\sqrt{a-x}=\frac{-x+2a-2x}{2\sqrt{a-x}} \\ \end{align}$$ Equate the first derivative to zero: $$\begin{align} 0=&-3x+2a \implies x=\frac{2}{3}a \end{align}$$ Since the domain of $a$ is an open interval, the absolute maximum of $f(x)$ is when $x=\frac{2}{3}a$. $$m(a)=\frac{2}{3}a$$ What is $\frac{d}{da}f(m(a))$ when $a=k$?
H: What do we mean when we say the Expected Value E[X] is linear? I know that $E[2-X]$ for instance is equal to: $2 - E[X]$. And it makes perfect sense to me, because $f(x)=x$ is linear. However, $E[X^2]$ is equal to $\sum_{j=1}^n$ $x^2f(x)$, and $f(x) = x^2 $ is quadratic. What do we mean by linear here, because I suspect it carries a different meaning. AI: What is meant is that if $X_1,X_2,\dots,X_n$ are random variables, independent or not, and $a_1,a_2,\dots,a_n$ are constants, then $$E(a_1X_1+a_2X_2+\cdots+a_nX_n)=a_1E(X_1)+a_2E(X_2)+\cdots +a_nE(X_n).$$ So the expectation of a linear combination of random variables is easy to compute if we know the individual expectations. This is an extremely useful fact. Let us consider your example $2-X$. Let $X_1$ be the random variable $2$. Kind of a boring random variable, but technically a perfectly legitimate one. Let $X_2=X$. Then $$2-X=X_1+(-1)X_2.$$ Note that $E(X_1)=2$. So by the general linearity rule stated above, we have $$E(2-X)=E(X_1)+(-1)E(X_2)=2-E(X).$$
H: Can the factorial function be written as a sum? I know of the sum of the natural logarithms of the factors of n! , but would like to know if any others exist. AI: This one is pretty important: $$n! = \sum_{\sigma\in S_n} 1$$ Edit: As Arkamis explains, $S_n$ is the symmetric group on $n$ letters. Each $\sigma\in S_n$ is a permutation on the set $[1,2,\ldots,n]$. Since $S_n$ is a finite set, we may sum a function over it, and the sum of the constant function $f(\sigma)=1$ is just the size of the set, which is $|S_n| = n!$. Arguably, summing a constant function is cheating. Here's one way to raise the stakes. Let $B_n$ be the set of $n\times n$ integer matrices $A$ such that every sum of a subset of entries from $A$ is in $[0,n]$. Then: $$n!=\sum_{A\in B_n}|\det A|$$ This is the same identity in a more interesting disguise. Every $n\times n$ permutation matrix $A$ is a member of $B_n$, and $\det A = \pm 1$. On the other hand, if $A\in B_n$ is not a permutation matrix, then you can prove that $\det A = 0$.
H: Differentiation of $x$ to the power of $y$ with respect to $x$ As the title suggests, I need to differentiate $x$ to the power of $y$ with respect to $x$. Not sure how to start. Do I need to take natural log on both sides? That is: $\dfrac{d}{dx}x^y=?$ AI: We need $$\frac{d(x^y)}{dx}$$ One of the ways could be: Let $f(x)=x^y\implies \ln f(x)=y \cdot \ln x$ Using Chain Rule for the Left hand side, $$\frac{d(\ln f(x))}{dx}=\frac{d(\ln f(x))}{d f(x)}\cdot\frac{d f(x)}{dx}=\frac{f'(x)}{f(x)}$$ and Product Rule of Differentiation for the Right, $$\frac{d(y \cdot \ln x)}{dx}=y\cdot \frac{d(\ln x)}{dx}+\ln x\cdot \frac{dy}{dx}=\ln x\cdot \frac{dy}{dx}+\frac 1x\cdot y$$ $$\implies \frac{f'(x)}{f(x)}=\ln x\cdot \frac{dy}{dx}+\frac yx$$ $$\implies f'(x)=f(x)\left(\ln x\cdot \frac{dy}{dx}+\frac yx\right)=x^y\left(\ln x\cdot \frac{dy}{dx}+\frac yx\right)$$
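As a sanity check on the final formula, two special cases: if $y$ is a constant then $\frac{dy}{dx}=0$ and we recover $\frac{d(x^y)}{dx}=x^y\cdot\frac yx=yx^{y-1}$, while taking $y=x$ gives the familiar $\frac{d(x^x)}{dx}=x^x(\ln x+1)$.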
H: Why if $\rho(I_{\mathfrak{X}} - YA)<1$ then $YA$ is invertible on the $R(YA)$? I am reading an article where I am stuck at one point. Below is my problem. Given that $\mathfrak{X}$ and $\mathfrak{Y}$ are Banach spaces. $A:\mathfrak{X} \to \mathfrak{Y}$ and $Y:\mathfrak{Y}\to \mathfrak{X}$ are linear bounded operators. At one step it is written that if $\rho(I_{\mathfrak{X}} - YA)<1$ then $YA$ is invertible on the $R(YA)$, where $\rho$ stands for the spectral radius and $R(A)$ denotes the range space of the operator $A$. I am not able to understand why the operator $YA$ is invertible on $R(YA)$. Why not on its whole domain $\mathfrak{X}$? Could anyone help me clear my doubt? I would be very thankful. Thanks AI: If $T$ is a linear operator on a vector space, $I - T$ is not invertible iff $1$ is in the spectrum of $T$, and that implies the spectral radius of $T$ is at least $1$. Take $T = I_{\mathfrak X} - YA$. So if $\rho(I_{\mathfrak X}-YA) < 1$, then $YA = I_{\mathfrak X} - T$ is invertible on all of $\mathfrak X$; and when $YA$ is invertible, $R(YA) = \mathfrak X$, so "invertible on $R(YA)$" is the same statement.
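The mechanism behind this is the Neumann series: since $\rho(T)=\lim_{n\to\infty}\|T^n\|^{1/n}<1$, the series $\sum_n\|T^n\|$ converges by the root test, so $$(I_{\mathfrak X}-T)^{-1}=\sum_{n=0}^{\infty}T^n$$ converges in operator norm. With $T=I_{\mathfrak X}-YA$ this exhibits a two-sided bounded inverse of $YA$ on all of $\mathfrak X$.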
H: Computing the expected value $E[X(X+5)]$ I know $ E[XY] = \int \int x y f(x,y) dx dy $ where $f(x,y) = f(x)f(y)$ But I am not entirely sure how to compute $E[X(X+5)]$. Is it $\int f(x)(5 + \int f(x) dx) dx$ ? AI: $$\mathbb{E}[X(X+5)]= \mathbb{E}[X^2+5X] = \int_{-\infty}^{\infty} (x^2+5x)f_X(x)dx$$ Also note that $f(x,y)$ is equal to $f(x)f(y)$ when $x$ and $y$ are independent.
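Equivalently, by linearity of expectation, no integral is needed when the moments are known: $$\mathbb{E}[X(X+5)]=\mathbb{E}[X^2]+5\,\mathbb{E}[X]=\operatorname{Var}(X)+(\mathbb{E}[X])^2+5\,\mathbb{E}[X].$$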
H: Is there a name for this type of logical fallacy? Consider a statement of the form: $A$ implies $B$, where $A$ and $B$ are true, but $B$ is not implied by $A$. Example: As $3$ is odd, $3$ is prime. In this case, it is true that $3$ is odd, and that $3$ is prime, but the implication is false. If $9$ had been used instead of $3$, the first statement would be true, but the second wouldn't, in which case it is clear that the implication is false. Is there a name for this sort of logical fallacy? AI: I think that would just be a non sequitur ("it does not follow"), which doubles as a catch-all term for all invalid arguments. From Wiki: Non sequitur (Latin for "it does not follow"), in formal logic, is an argument in which its conclusion does not follow from its premises. In a non sequitur, the conclusion could be either true or false, but the argument is fallacious because there is a disconnection between the premise and the conclusion. All invalid arguments are special cases of non sequitur. In your case your premise and conclusion happen to be true, but B does not follow since the implication is broken.
H: Seeking clarity regarding normal subgroup If $A$ is a normal subgroup of $B$ then is it required for $A$ and $B$ to be groups under the binary operation multiplication? What if they are just groups under the binary operation addition, can there still exist a normal subgroup? Like, the set of rational numbers is a group under addition only and not multiplication. AI: When you say a group, it always refers to a unique "group structure" on a given set with a chosen binary operation. The same set may have other group structures borne of other binary operations. It is only when we talk about rings (or more so, fields) that we have to worry as to which binary operation, "addition" or "multiplication", we are talking about. But that is not the point here. When you are talking about a subgroup $N$ being normal in a group $G$, it just means that $gng^{-1} \in N$ for all $g \in G$ and all $n\in N$. The group operation there is the same as that which you invoked while you defined the group structure on your set. You can also look at normality as requiring every right coset to be a left coset (if you are aware of that term). Normality is a nice property as it facilitates the construction of quotient groups, which are important in characterizing groups. I hope this helps.
H: Check whether the three vectors $A(2,-1,2),B(1,2,-3),C(3,-4,7) $ are in the same plane I want to check if three vectors are in the same plane, the vectors being $$A(2,-1,2),B(1,2,-3),C(3,-4,7). $$ What I did so far is to create the vector $AB ( -1,3,-5)$ and build the plane equation with the point $A$ $$-1(x-2)+3(y+1)-5(z-2)=0$$ and inserted the point $C$ to check whether it satisfies the equation. Is this the right way to do that or did I do something wrong? Thanks! AI: It would be easier to compute a scalar triple product. Make three vectors $\hat{u}, \hat{v},\hat{w}$. The three vectors lie in the same plane iff $$ \hat{u} \cdot (\hat{v} \times\hat{w} ) = 0$$
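For the given vectors this test is a single $3\times 3$ determinant: $$\begin{vmatrix}2&-1&2\\1&2&-3\\3&-4&7\end{vmatrix}=2(14-12)+1(7+9)+2(-4-6)=4+16-20=0,$$ so $A$, $B$, $C$ do lie in a common plane through the origin.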
H: Proof of a regular parallelogram Given any figure with four vertices and four straight edges, prove that one can construct a perfect parallelogram by connecting the midpoints of such figure. This to me is a very fundamental and interesting geometry problem. How would I begin to prove this? AI: We do it for a convex quadrilateral, since the diagram will be nicer. Call the vertices, listed in counterclockwise order, $A,B,C,D$. Draw the diagonal $AC$. The line joining the midpoints of $AB$ and of $BC$ is parallel to $AC$. This is by a basic property of triangles: the line joining the midpoints of two sides is parallel to the third side. The line joining the midpoints of $AD$ and $CD$ is, for the same reason, parallel to $AC$. So the two lines are parallel to each other. Now draw the diagonal $BD$ and use the same argument. Another way: If you like to play with vectors, you can give an alternate proof. Think of $A$, $B$, $C$ and $D$ as vectors. The midpoint of $AB$ is $(A+B)/2$. The midpoint of $BC$ is $(B+C)/2$. The difference is $(A-C)/2$. Similarly, the midpoint of $AD$ is $(A+D)/2$. The midpoint of $CD$ is $(C+D)/2$. The difference is $(A-C)/2$, which gives the parallelism.
H: ZF construction of the Kleene plus Given a non-empty set $A$, a (non-empty) string of $A$ is a tuple $(a_1,a_2,...,a_n) \in A^n$, where $a_j \in A$, $\forall j \in \{ 1,2,...,n \}$, for some $n \in \mathbb{N}^*$. The Kleene plus of $A$, informally, is a set $\displaystyle A^{+} = \bigcup_{n=1}^{\infty} A^n$. It is the set of all tuples of elements of $A$. My question is: within the ZF axiomatics, how can I ensure that this set exists? How to build it? AI: The typical way of dealing with questions like this is via a version of the recursion theorem. (This link to Wikipedia discusses a particular case.) Applications of the recursion theorem typically use replacement in an essential way. Recall that one can state replacement as the schema asserting that if $\varphi(x,y)$ is a formula of set theory that is functional, meaning that for any $x$ there is exactly one $y$ such that $\varphi(x,y)$, then for any set $X$ there is a set $Y$ such that for any $x\in X$ there is a $y\in Y$ such that $\varphi(x,y)$. Informally: $\varphi$ defines a function, and the axiom asserts that the image of a set under a function is a set. For example, let $\varphi(x,y)$ be the statement that $y=\mathcal P(x)$ is the power set of $x$. Certainly, $\varphi$ is functional, thanks to the power set axiom (and extensionality). We use this to verify that $Z=\{\mathcal P^n(A)\mid n<\omega\}$ exists, where $\mathcal P^n(A)$ is the result of iterating the power set operation on $A$ precisely $n$ times. (By the way, $\omega$ exists using the axiom of infinity and comprehension. I will omit "routine" details such as this in the remainder.) In effect, and this is how most proofs by recursion work, we can for each $n<\omega$ verify that there is an $n$-approximation to $Z$. In this case, this means a function $f$ with domain $n$ such that for all $m<n$, $f(m)=\mathcal P^m(A)$. More formally, if $n>0$ then $f(0)=A$, and for any $m$, if $m+1<n$, then $f(m+1)=\mathcal P(f(m))$. The proof that $n$-approximations exist is a straightforward induction. Using induction again, we check that if $m<n$ then the restriction of any $n$-approximation to domain $m$ is an $m$-approximation, and that for any $n$ there is at most one $n$-approximation (and therefore there is precisely one). OK. Now let $\varphi(x,y)$ state that either ($x=n+1$ is a positive integer, and $y$ is $f(n)$, where $f$ is some (any) $x$-approximation), or else ($x=0$ or $x\notin\omega$, and $y=0$). By replacement, $Z$ exists (use replacement with $X=\omega\setminus\{0\}$, and comprehension). A very similar argument gives us that for any set $A$, the set $T=\{A^n\mid n<\omega\}$ exists, and now the union axiom gives us the existence of $A^+$. Note how recursion works: Given an iterative process, to ensure that the result of the $n$-th iteration exists, we actually exhibit a function that traces its whole history. In the example you are interested in, we only need functions with finite domain. In more elaborate instances, we may need functions with infinite (transfinite) domain. For example, this is how one can ensure that the levels $V_\alpha$ of the cumulative hierarchy, or the infinite cardinals $\aleph_\alpha$ ($\alpha\in\mathsf{ORD}$) exist.
H: Closed and exact. I tried this question, but I have no idea if I got it correctly. On $\mathbb{R}^2$, let $\omega = (\sin^4 \pi x + \sin^2 \pi(x + y))dx - \cos^2 \pi(x + y)dy$. Let $\eta$ be the unique $1$-form on the torus $T^2 = \mathbb{R}^2 / \mathbb{Z}^2$ such that $p^* \eta = \omega,$ where $p: \mathbb{R}^2 \to T^2$ is projection. The parametrized curve $\gamma: \mathbb{R} \to \mathbb{R}^2$ given by $\gamma(\theta) = (2\theta, -3\theta)$ is a line whose image $C \subset T^2$ is an oriented circle. Is $\eta$ closed? Exact? First I showed that $\omega$ is closed: $$d\omega = \frac{\partial}{\partial y}(\sin^4 \pi x + \sin^2 \pi(x + y))\, dy \wedge dx - \frac{\partial}{\partial x} \cos^2 \pi(x + y)\, dx \wedge dy = 0$$ And then I showed $\int_C \eta \neq 0$: $$\int_C \eta = \int_\gamma p^* \eta = \int_\gamma \omega = \int_\gamma (\sin^4 \pi x + \sin^2 \pi(x + y))dx - \cos^2 \pi(x + y)dy.$$ Carry out the line integral with $\gamma(\theta) = (2\theta, -3\theta)$, $\theta\in[0,1]$: $$\int_0^1 (\sin^4 2\pi\theta + \sin^2 \pi \theta)\, 2d\theta + \cos^2 \pi \theta\, 3d \theta.$$ The integrand is nonnegative and not identically zero, so $\int_C \eta \neq 0$, and it is positive. Assume $\eta$ is exact, then $\eta = d\alpha$ for some $\alpha$. By Stokes' theorem, its integral over $C$ would be: $$ \int_{C} \eta =\int_{C} d\alpha = \int_{\partial C} \alpha = 0 \ . $$ Since $\partial C = \emptyset$. This contradicts the fact that $\int_{C} \eta \neq 0$. $\eta$ is closed because $p$ is a local diffeomorphism, so $$p^*(d\eta) = d(p^*\eta) = d \omega = 0,$$ which forces $d\eta = 0$. Agustí Roig's answer checking if a 2-form is exact is very helpful for me to think of this question. There is a standard embedding $i: T^2 \to S^1 \times S^1 \subset \mathbb{R}^2 \times \mathbb{R}^2$ (determined by the formula $i \circ p(x, y) = (\cos 2 \pi x, \sin 2 \pi x, \cos 2 \pi y, \sin 2\pi y))$. Is there a closed form $\xi$ on $\mathbb{R}^2 \times \mathbb{R}^2$ for which $i^* \xi = \eta$? My attempt following Amitesh Datta's hints: Corollary. $\mathbf{H}^p(\mathbb{R}^k) = 0$ if $p > 0$. Hence, assume that there is a closed form $\xi$ on $\mathbb{R}^2 \times \mathbb{R}^2$; according to the corollary, $\mathbf{H}^1(\mathbb{R}^2 \times \mathbb{R}^2) = 0$. Hence, $\xi$ is exact. But then $i^* \xi = \eta$ would also be exact, a contradiction. Thank you very very much! AI: You might want a more elementary proof but here's one that comes to mind: Theorem Every closed $p$-form on $\mathbb{R}^n$ with $p \geq 1$ is exact. If you're familiar with algebraic topology, then you might have seen this theorem already. If not, then can you prove it on your own? If so, then you've got an answer to your question (a corollary of the Theorem). I hope this helps and I'm always happy to provide further hints if you encounter any difficulties in proving this Theorem!
H: Find the point of intersection of the straight line $\frac{X+1}{4}=\frac{Y-2}{-2}=\frac{Z+6}{7}$ and plane $3X+8Y-9Z=0$ Find the point of intersection of the straight line $$\frac{X+1}{4}=\frac{Y-2}{-2}=\frac{Z+6}{7}$$ and plane $3X+8Y-9Z=0$ the point of the line is $M(-1,2,-6)$ and direction vector of the line is $A(4,-2,7)$ I would like to get some advice how to do that. Thanks! AI: $$ \frac {X+1}4 = \frac {Y-2}{-2} = \frac {Z+6}7 = k \\ X = 4k - 1 \\ Y = -2k + 2 \\ Z = 7k - 6 $$ Substitute it to the equation of plane $$ 3X+8Y-9Z = 0 \\ 12k-3 - 16k + 16 - 63k + 54 = 0 \\ -67k + 67 = 0 \\ k = 1 $$ Since you know $k$, you can easily find $X,Y,Z$. $$ X = 4k-1 = 3 \\ Y = -2k+2 = 0 \\ Z = 7k-6 = 1 $$ So, point of intersection is $(3,0,1)$.
H: Prove that $\sqrt 5$ is irrational I have to prove that $\sqrt 5$ is irrational. Proceeding as in the proof for $\sqrt 2$, let us assume that $\sqrt 5$ is rational. This means for some integers $p$ and $q$ having no common factor other than $1$, $$\frac{p}{q} = \sqrt5$$ $$\Rightarrow \frac{p^2}{q^2} = 5$$ $$\Rightarrow p^2 = 5 q^2$$ This means that $5$ divides $p^2$. This means that $5$ divides $p$ (because every factor must appear twice for the square to exist). So we have, $p = 5 r$ for some integer $r$. Extending the argument to $q$, we discover that they have a common factor of $5$, which is a contradiction. Is this proof correct? AI: It is, but I think you need to be a little bit more careful when explaining why $5$ divides $p^2$ implies $5$ divides $p$. If $4$ divides $p^2$ does $4$ necessarily divide $p$?
H: Cyclic subgroup of a quotient group I encountered this question in a grad-level exam. I hope somebody could help me with this. We have to choose one option. Consider the group $\;G=\Bbb Q/\Bbb Z\;$ where $\Bbb Q$ and $\Bbb Z$ are the groups of rational numbers and integers respectively. Let $n$ be a positive integer. Then is there a cyclic subgroup of order $n$? (a) not necessarily; (b) yes, a unique one; (c) yes, but not necessarily a unique one; (d) never. I can see that $\Bbb Z$ is a normal subgroup of $\Bbb Q$. So, $G$ is a quotient group and it would have elements like $\Bbb Z$+$q$ where $q\in \Bbb Q\;$, that is, $q$ can be $\;1/-1/0.5/-0.5...\;$ etc., and the identity of $G$ and its subgroups would be $\Bbb Z+0\;$, that is $\Bbb Z$. Now, if I assume $S$ to be a subgroup of $G$ having just the identity element, then I guess it would be a cyclic subgroup of order $1$. Am I correct here? And will there be any other cyclic subgroup? I am not sure. I realize that this question has already been discussed. Here are the links: $\mathbb{Q}/\mathbb{Z}$ has a unique subgroup of order $n$ for any positive integer $n$? consider the group $G=\mathbb Q/\mathbb Z$. For $n>0$, is there a cyclic subgroup of order n $\mathbb{Q}/\mathbb{Z}$ has cyclic subgroup of every positive integer $n$? I didn't understand the concepts discussed there. Moreover, they are taking $Z$ as a complex set but in my question, it is the integer set. Also, since I am new, I couldn't post a comment there for clarification. So, opening a new question. I hope somebody could help. AI: For any $$n\in\Bbb N\;,\;\;\text{ord}\,\left(\frac1n\right)_{\Bbb Q/\Bbb Z}=n$$ So we already know there's a cyclic subgroup of order $\,n\,$ in $\,\Bbb Q/\,\Bbb Z\,$ . Now, if $\frac ab+\Bbb Z\in\Bbb Q/\Bbb Z$ (written in lowest terms) has $\;\text{ord}\,\left(\frac ab\right)_{\Bbb Q/\Bbb Z}=n$, then $$n\frac ab\in\Bbb Z\implies b\mid n\implies n=bk\;,\;k\in\Bbb Z$$ and thus in fact we have that $$\frac ab=\frac{ak}n\in\left\langle\;\frac1n+\Bbb Z\;\right\rangle\le\Bbb Q/\Bbb Z$$ and this gives us uniqueness
H: Difference between $R[c_1,c_2,\dots, c_n]$ and a finitely generated $R$-algebra. What is the difference between $R[c_1,c_2,\dots, c_n]$ ($c_1, c_2,\dots, c_n\notin R$), where $R$ is a ring, and a finitely generated $R$-algebra? Is the difference that if $c_1, c_2,\dots, c_n$ are the generators of the $R$-algebra, then their highest powers are bounded in the $R$-algebra, while they are not in $R[c_1,c_2,\dots, c_n]$? Thanks in advance. AI: When you write the expression $$R[c_1,\ldots,c_n],$$ what that signifies (to me at least) is a quotient of a polynomial ring $R[x_1,\ldots,x_n]$ by some ideal $I\subset R[x_1,\ldots,x_n]$ such that the composition of the maps $$R\longrightarrow R[x_1,\ldots,x_n]\longrightarrow R[x_1,\ldots,x_n]/I$$ is injective; the $c_i$'s denote the equivalence classes of the $x_i$'s in the quotient ring. Any such ring is certainly a finitely-generated $R$-algebra, but the converse is not true. For example, $\mathbb{Z}/p\mathbb{Z}$ is a finitely-generated $\mathbb{Z}$-algebra, but it cannot be expressed as $\mathbb{Z}[c_1,\ldots,c_n]$ for any $c_i$'s in the manner described above. If, however, you do not require the above map to be injective, then any finitely-generated $R$-algebra can be expressed as $R[c_1,\ldots,c_n]$, and any $R$-algebra $R[c_1,\ldots,c_n]$ is finitely-generated (both directions essentially by definition). By the way, you seem confused about what it means for an $R$-algebra to be finitely generated. The polynomial ring $R[x]$ is an $R$-algebra, the powers of $x$ occurring in the elements of $R[x]$ are not bounded, and $x$ is a generator of $R[x]$ as an $R$-algebra.
H: $\sum_{n=1}^{\infty}f(z^n)$ converges uniformly with $f$ holomorphic Let $f$ be a holomorphic function on the unit ball with $f(0)=0$. Prove that $\sum_{n=1}^{\infty}f(z^n)$ is uniformly locally convergent in the unit ball. My attempt: It suffices to prove that $\sum_{n=1}^{\infty}f(z^n)$ converges uniformly in any $\overline{B_0(r)}$ with $r<1$. $f'$ is also a holomorphic function, hence continuous on the compact set $\overline{B_0(r)}$ and therefore bounded there. Let's say $|f'|\leq M$. Then, for every $z\in \overline{B_0(r)}$, $|f(z^n)-f(0)|\leq M|z^n-0|$, $|f(z^n)|\leq M|z^n| = M|z|^n \leq Mr^n$. The series $\sum_{n=1}^{\infty}r^n$ converges and therefore by the M test our series converges uniformly. Is it correct? Thanks. AI: Let's do brute force! If $f(z)=\sum_{n\geq1}a_nz^n$, then as a formal series $$\sum_{k\geq1}f(z^k)=\sum_{n\geq1}\left(\sum_{d\mid n}a_d\right)z^n.$$ If $f$ converges in the unit disc, we know from the Cauchy-Hadamard formula that $\limsup_{n\to\infty}|a_n|^{1/n}\leq1$, and we have to show that then $$\limsup\left|\sum_{d|n}a_d\right|^{1/n}\leq1.$$ One should be able to prove that If $(x_n)_{n\geq1}$ is a sequence of non-negative real numbers such that $\limsup_{n\to\infty}x_n^{1/n}\leq1$, then $\limsup_{n\to\infty}(x_1+\cdots+x_n)^{1/n}\leq 1$. and using this our result follows.
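The quoted statement about $\limsup$ has a short direct proof: given $\varepsilon>0$, the hypothesis gives a constant $C_\varepsilon$ with $x_n\leq C_\varepsilon(1+\varepsilon)^n$ for all $n$, hence $$(x_1+\cdots+x_n)^{1/n}\leq \bigl(nC_\varepsilon(1+\varepsilon)^n\bigr)^{1/n}=(nC_\varepsilon)^{1/n}(1+\varepsilon)\xrightarrow[n\to\infty]{}1+\varepsilon,$$ and letting $\varepsilon\to0$ gives $\limsup_{n\to\infty}(x_1+\cdots+x_n)^{1/n}\leq1$.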
H: Approximations of fixed points of tangent. This question comes from an exam, years ago. Show that $f(x)=\tan x-x$, for every positive integer $n$, has exactly one root $x_n$ in the interval $(n\pi,n\pi+\pi/2)$. And show that $$x_n=n\pi+\frac{\pi}{2}-\frac{1}{n\pi}+\text{o}\left(\frac{1}{n}\right).$$ I can prove the claim about the existence of $x_n$, by intermidiate value-theorem, but am stuck by the second point. Since $\tan x$ is not a contraction, we cannot apply the contraction mapping theorem here. Also, using Taylor approximation, or the Lagrange form of the remainder, I arrived at $$\tan x=(x-n\pi)+\frac{f^{(3)}(\theta)}{6}x^3$$ for some $\theta$ between $x$ and $n\pi$. It seems that, however, I can only obtain information about this $\theta$ along this direction, not about $x_n$. Any hint or help is greatly appreciated. Thanks in advance. AI: First note that $x_n-n\pi\to\frac\pi2\cdot$ This is because $0<x_n-n\pi<\frac\pi2$ and $\tan(x_n-n\pi)=\tan(x_n)=x_n\to+\infty$. Now, put $z_n=x_n-n\pi-\frac\pi2\cdot$ Then $z_n\to 0$, so $z_n\sim \tan(z_n)$ as $n\to\infty$. But $\tan(z_n)=\tan(x_n-\frac\pi2)=-\tan(x_n)=-\frac1{x_n}$, so we get $z_n\sim -\frac1{x_n}\cdot$ Now, $x_n\sim n\pi$ since $n\pi<x_n<n\pi+\frac\pi2$; so $z_n\sim-\frac{1}{n\pi}$ and hence we may write $z_n=-\frac{1}{n\pi}+o(\frac1n).$ This is what you want by the definition of $z_n$.
H: Finite and infinite sets, cardinality question Suppose there are infinite sets $A$, $B$ and $C$ such that $$|A| = |B| = |C| = |\mathbb{N}|\\ |D| = |\mathbb{R}|$$ and a finite set $E$. Give an example for the following (using the sets above). In case it's not possible, show why. 1. $(A \setminus D = B) \wedge (A \cap D = C)$ 2. $\mathcal P(E) \setminus A = B $ 3. $|D| = |E|^{|A|}$ This is an exam-type exercise I couldn't answer; if there is a soul that can help, I'll appreciate it. AI: Here are some hints: 1. Start with a set $D'$ such that $D'\cap A=\varnothing$, find a suitable subset $C\subseteq A$, and take $D=D'\cup C$. For example $D'$ can be the irrational numbers, $A$ the rational numbers, and $C$ the natural numbers. 2. This is impossible: $B$ is infinite but $E$ is finite. The power set of a finite set is finite, and $\mathcal P(E)\setminus A$ is a subset of $\mathcal P(E)$. 3. Remember that $\Bbb R$ and $\mathcal P(\Bbb N)$ are equipotent.
H: Find the projection of the point on the plane I want to find the projection of the point $M(10,-12,12)$ on the plane $2x-3y+4z-17=0$. The normal of the plane is $N(2,-3,4)$. Do I need to use Gram–Schmidt process? If yes, is this the right formula? $$\frac{N\cdot M}{|N\cdot N|} \cdot N$$ What will the result be, vector or scalar? Thanks! AI: Set the projection point on the plane as $P=(x,y,z)$. You need three equations: Point $P$ on the plane. $$2x-3y+4z=17$$ $\vec{MP}\perp plane$ $$\vec{MP}\perp \vec{PQ_1}$$ $$\vec{MP}\perp \vec{PQ_2}$$ where $Q_1$ and $Q_2$ are two different points on the plane. Because $\vec{MP}// \vec{N}$, you can use $\vec{N}$ instead of $\vec{MP}$ above.
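For these particular data, the computation can be finished by parametrizing the normal line through $M$: write $P=M+t\vec N$ and determine $t$ from the plane equation, $$2(10+2t)-3(-12-3t)+4(12+4t)=17\;\Longrightarrow\;104+29t=17\;\Longrightarrow\;t=-3,$$ giving $P=(10,-12,12)-3(2,-3,4)=(4,-3,0)$, which indeed satisfies $2x-3y+4z=17$.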
H: Simple question concerning the properties of the fundamental group I need to prove that every element of the fundamental group has an inverse. First we define a map $\phi:I\to I$ homotopic to $\operatorname{Id}_I$. If $\phi$ is the constant zero function isn't it true that $f\circ \phi \simeq f$? So if we denote the inverse of $f$ by $f'$ isn't it true that $(f\ast f')\circ \phi \simeq f\ast f'$ and $(f\ast f')\circ \phi = e$, which means $e\simeq f\ast f'$? ($\ast$ is the product of paths and $e$ is the identity element) But the problem is that I didn't use any property of the inverse element, so something is wrong. How can I fix it? AI: Your argument is incorrect. $f\circ\phi$ is not in general homotopic to $f$- it's homotopic to the constant map taking the value $f(0)$. To prove that every element of the fundamental group has an inverse, think about some examples of elements of the fundamental group and their inverses. You should see a pattern jumping out that suggests a natural course for your proof.
H: Does "monotonic sequence" always mean "a sequence of real numbers" When we say a sequence is monotonic, does that imply the sequence is Real Number Sequence? And other propositions about monotonic, all real-valued? When I see some mathematical analysis books, sometimes they talk about some properties/facts like convergence of sequences in a metric space, for example, real field and complex field, sometimes the results are the same. However, when I see some propositions about "monotonic" are limited within real numbers. Maybe they write for simplicity of the theorem? For example: Definition: A sequence $s_n$ of real numbers is said to be monotonicaly increasing if $s_n\leq s_{n+1}$. Complex sequences may also have something monotonicity? Basic fact I know is that we cannot compare two complex numbers, maybe we should use partial relation? norm of complex number(this is a real number)? dictionary order? AI: To speak of monotonicity one needs to have a notion of order. As long as the set of objects you are considering your sequences to come from is ordered in a reasonable way, you can speak of monotonicity. It is common to consider partially ordered sets, or simply posets. A poset is a pair $(S,\le )$ where $S$ is a set (which can be any set at all) and $\le $ is a transitive, reflexive, and anti-symmetric relation on $S$. In the context of posets, so for sequences of elements from a poset, monotonicity makes perfect sense. The real numbers are ordered by the usual meaning of $x\le y$. However, the complex numbers are not ordered in any natural useful way, so we don't speak of monotone sequences of complex numbers. An example of a poset which is useful in the context of analysis is the poset of, for instance, all functions $f:\mathbb R \to \mathbb R$. This poset is ordered by $f\le g$ precisely when $f(x)\le g(x)$ for all $x\in \mathbb R$. Then you can speak of monotone sequences of functions.
H: $\xi$ is the least upper bound of $M$. $M$ is a set with an upper bound. Should the set $M$ be an ordered set? Or is it an ordered set by default, since it may be a subset of $R$? When we say a subset $M$ of $R$ (an ordered set), is $M$ also an ordered set? A cut $S=(\xi )$ is denoted as the set made up of the left part of $\xi$: (1) $\eta \in S \Leftrightarrow$ there exists $\zeta \in M$ such that $\eta \le \zeta$; we can define a cut $S=(\xi )$ in $R$, and $\xi$ is the least upper bound of $M$. I know $\eta \le \zeta$ means $\eta$ is not the largest number of $S$. What is the relation between $\xi$ and $\zeta$? Is there a typo? Can you explain (1) to me? AI: To clarify some points. If $M$ has an upper bound, the elements of $M$ and the elements of its wider context need to be ordered. An upper bound for $M$ need not be a member of $M$ - take $M$ to be the set $0\le x \le 1$; $2$ is greater than every element of this set, and is therefore an upper bound for it. The least upper bound for $M$ need not be a member of $M$ - take $M$ to be the set $0\le x \lt 1$ which has least upper bound $1 \notin M$. Note that I have chosen examples which work for rational numbers or real numbers. In the definition you cite $\zeta$ is defined to be a member of $M$. By contrast $\xi$ need not be a member of $M$ - to take account of the situation where the least upper bound is not an element of $M$ - as in my second example. The definition wants to collect together every number in our context (eg real or rational) for which we can definitely say $x \le \xi$ (the equality case might not occur, depending on context). However we don't initially know what the value of $\xi$ is or whether it is a member of $M$. So we gather together as set $S$ every number which is less than or equal to some element of $M$ - such numbers are necessarily $\le \xi$. In fact the definition is using that set of numbers $S$ to define the unknown number $\xi$.
H: Is there any way to prove it directly? I'm trying to prove the following result: In a first countable $T_1$ space $X$ for $E\subset X,~x\in X$ is an adherent point of $E\iff~\exists~(x_n)_n\in E$ such that $x_n\to x.$ When I'm considering the $E=\emptyset,$ I can't prove it directly without using $$\{\text{if}~~p\iff q~~\text{then}~~-p\iff -q\}$$ Is there any way to prove it directly? AI: You have the implication $\bigl(\exists x_n \in E\bigr)(x_n \to x) \Rightarrow x \in \overline{E}$ without any assumptions on the space, by the definition of convergence and closure. Conversely, let $x \in \overline{E}$, and $U_1 \supset U_2 \supset \ldots U_k \supset U_{k+1} \ldots$ a countable neighbourhood basis of $x$. Then choose $x_n \in U_n \cap E$ to get a sequence $(x_n)$ in $E$ converging to $x$.
H: Is there any good strategy for computing null space of a matrix with entries $\cos x$ and $\sin x$? For example, say $A= \left ( \begin{matrix} \cos x & -\sin x & 0 \\ \cos y \sin x & \cos x \cos y & -\sin y \\ \sin x \sin y & \sin y \cos x & \cos y \end{matrix} \right)$. How do I compute the null space of $A-I$? Since I don't know whether $\cos x$, the $(1,1)$ entry, is $0$ or not, I cannot simply perform elementary operations on this matrix. Is there any good strategy to find its null space? (Note that $A$ is just an example. I'm asking how to compute the null space of such matrices with entries $\cos x$, $\sin x$, $\cos y$, $\sin y$...) AI: Recall that the rank of a matrix is equal to the dimension of the space spanned by either its rows or its columns. So if you can prove the rows or columns are linearly independent, it must have full rank (i.e. trivial null space). In this case, the dot product of any two distinct rows/columns is 0, and none of the rows/columns can be identically zero, so you're done. In fact, the matrix is orthogonal, since all the rows/columns are in fact of length 1 as well. In some sense this isn't so surprising, because sines and cosines arise naturally in problems about rotations, which are represented by orthogonal matrices.
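To make the orthogonality claim concrete, a worked check of one pair of rows of the example matrix (the second and third): $$(\cos y\sin x)(\sin x\sin y)+(\cos x\cos y)(\sin y\cos x)+(-\sin y)(\cos y)=\sin y\cos y(\sin^2x+\cos^2x)-\sin y\cos y=0,$$ and for the length, $\|\text{row}_2\|^2=\cos^2y\sin^2x+\cos^2x\cos^2y+\sin^2y=\cos^2y+\sin^2y=1$.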
H: help deriving a closed formula for this "magic function" I'm having trouble coming up with a closed formula for $n$ from the sequence of numbers generated by this function: The following mystery function $M : N \times N \rightarrow N $ is defined by: $$ M(m,n) = \begin{cases} m & n < 2m +1 \\ M(m+1, n-2m-1) &n \ge 2m + 1 \end{cases} $$ If that looks confusing, here's an algorithm representing the logic: int M(int m, int n) { if (n < 2*m + 1) return m; else return M(m + 1, n - 2*m - 1); } Here's what I have to do: Evaluate $M(0,n)$ for $n \in \{0,...,10\}$. $n = 0$ : $M(0,0) = 0$ $n = 1$ : $M(0,1) = 1$ ... I evaluated each $n$ from $0$ to $10$ on paper, and the sequence I got is: $0,1,1,1,2,2,2,2,2,3,3$ I verified this output by running my algorithm in C. Provide a closed formula for $M(0,n)$. This is where I'm lost. I know what a closed formula is. It's a formula to find the value of $M(0,n)$ with $n$ in the formula... I ran my program again, but this time from $n = 0$ to $100$, and see a pattern of 0, three 1's, five 2's, seven 3's, nine 4's... and so on... But I don't see how to tie this to $n$. I think I've been up too late. Any help/insight is appreciated! AI: Take a look at what happens when you apply this procedure to $M(0,n)$. Then see whether you can identify a pattern. To simplify things, let's first assume that $n$ is large enough so that we are always in the case that $n \geq 2m+1$. That gives us the following: $$ M(0,n)=M(1,n-1)=M(2,n-4)=M(3,n-9)=M(4,n-16)=M(5,n-25) $$ Do these numbers $1,4,9,16,25,\dots$ satisfy a pattern? Looking at the above, for which values of $n$ do we get the answers $0,1,2,3,4,\dots$? For example, we stop at the stage $M(3,n-9)$ if: we did not stop at the previous stage, so $n-4 \geq 2(2)+1=5$ or $n \geq 9$; and we do stop at this stage, so $n-9< 2(3)+1=7$ or $n<16$. So we stop at this stage if $9 \leq n < 16$, and in this case we output $3$. This approach yields the following: $$ \begin{array}{c|c} M(0,n) & n \\ \hline 0 & 0\leq n <1 \\ 1 & 1 \leq n <4 \\ 2 & 4 \leq n <9\\ 3 & 9 \leq n< 16\\ 4 & 16 \leq n<25\\ \end{array} $$ Do you see a way to express this as a function of $n$? (Why do the numbers $1,4,9,16,25, \dots$ appear? Take a look at this question here.)
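For the record, here is a small C check program (hypothetical, not from the question): it compares $M(0,n)$ against the closed form the table suggests, $\lfloor\sqrt n\rfloor$, which is my conjecture from the pattern rather than anything stated in the answer.

    #include <math.h>
    #include <stdio.h>

    /* Recursive definition from the question, with the missing return added. */
    int M(int m, int n) {
        if (n < 2*m + 1) return m;
        return M(m + 1, n - 2*m - 1);
    }

    int main(void) {
        /* Compare M(0,n) against the conjectured closed form floor(sqrt(n)). */
        for (int n = 0; n <= 1000; n++) {
            int closed = (int)floor(sqrt((double)n));
            if (M(0, n) != closed)
                printf("mismatch at n=%d: M=%d, floor(sqrt(n))=%d\n", n, M(0, n), closed);
        }
        printf("all cases checked\n");
        return 0;
    }

(Compile with the math library, e.g. `-lm`.)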
H: Proving that the mother sequence converges to $x$ if any subsequence contains a subsequence which converges to $x$ Dear reader of this post, I am currently working on some problems about sequences and their subsequences. I proved a claim and because this proof involves some elementary concepts, I would like to ask three related questions. The claim is as follows: For any real number $x$ and $x_m \in \mathbb{R}^{\infty}$, show that $x_m \rightarrow x$ if every subsequence of $(x_m)$ has itself a subsequence that converges to $x$. My proof is as follows: Take any subsequence $x_{m_{k}}$. As stated, I know that $\exists$ a (sub)subsequence $x_{m_{k_{i}}} \rightarrow x $. Let $\bar{x}_{m_{k_{i}}}$ be the non-convergent part of the subsequence. Redefine $x_{m_{k}}$ as $\tilde{x}_{m_{k_{i}}} = \left\lbrace \bar{x}_{m_{k_{1}}},\bar{x}_{m_{k_{2}}},\ldots,x_{m_{k_{1}}},x_{m_{k_{2}}},\ldots \right\rbrace $. Redefine the mother sequence $x_m$ furthermore as $\tilde{x}_m = \left\lbrace \tilde{x}_{m_{1}},\tilde{x}_{m_{2 }},\ldots \right\rbrace$. Finally, take any $\epsilon >0 $. I know that for any $ \tilde{x}_{m_{k}}$ $\exists M \in \mathbb{N}$ for which $\forall i > M$ $| \tilde{x}_{m_{k_{i}}} - x | < \epsilon $. Thus, $x_m$ is convergent. After having stated my proof, I'd like to ask my three questions: I think the proof requires each subsequence to contain infinitely many elements. Is this correct, and does every subsequence indeed contain infinitely many elements in general? Do I really need to redefine the sequence $x_m$ and its subsequences $x_{m_{k}}$, and is this legitimate? If these two questions are answered positively, is my proof correct? Thank you very much for your support. I am looking forward to your replies. AI: Here is an alternative method to prove the desired claim. Theorem: If every subsequence of $(x_n)$ has a subsequence which converges to $x$, then $(x_n)$ converges to $x$. Proof: Suppose $x_n$ does not converge to $x$. Then there is $\varepsilon > 0$ such that $|x_n - x| \geq \varepsilon$ for infinitely many $n$. Therefore, there is a subsequence $(x_{n_k})$ with $|x_{n_k} - x| \geq \varepsilon$ for all $k \in \mathbb{N}$. This is a contradiction as $(x_{n_k})$ is a subsequence of $(x_n)$ which does not have a subsequence which converges to $x$.
H: is $I^2=I$ true? Suppose $I$ is an ideal of a ring with $1$. I think that $II=I^2=I$ but I am stuck showing it. I can easily show that $I^2\subseteq I$, but I don't know how to show that $I\subseteq I^2$. So is it actually true? If yes, how can I show it? Definition: $I^2=\{\sum_{k=1}^m x_k y_k: m \in \mathbb{Z}_{\geq 1}, x_k,y_k\in I\}$ AI: Counterexample: $$I=\langle\;x\;\rangle\le\Bbb Z[x]\;,\;\;\text{but}\;I^2=\langle\;x^2\;\rangle \ne\langle\;x\;\rangle$$
H: How to show $\lim_{x \to 1} \frac{x + x^2 + \dots + x^n - n}{x - 1} = \frac{n(n + 1)}{2}$? I am able to evaluate the limit $$\lim_{x \to 1} \frac{x + x^2 + \dots + x^n - n}{x - 1} = \frac{n(n + 1)}{2}$$ for a given $n$ using l'Hôpital's (Bernoulli's) rule. The problem is I don't quite like the solution, as it depends on such heavy weaponry. A limit this simple should easily be evaluable using some clever idea. Here is a list of what I tried: Substitute $y = x - 1$. This leads nowhere, I think. Find the Taylor polynomial. Makes no sense, it is a polynomial. Divide by major term. Dividing by $x$ got me nowhere. Find the value $f(x)$ at $x = 1$ directly. I cannot as the function is not defined at $x = 1$. Simplify the expression. I do not see how I could. Using l'Hôpital's (Bernoulli's) rule. Works, but I do not quite like it. If somebody sees a simple way, please do let me know. Added later: The approach proposed by Sami Ben Romdhane is universal as asmeurer pointed out. Examples of other limits that can be easily solved this way: $\lim_{x \to 0} \frac{\sqrt[m]{1 + ax} - \sqrt[n]{1 + bx}}{x}$ where $m, n \in \mathbb{N}$ and $a, b \in \mathbb{R}$ are given, or $\lim_{x \to 0} \frac{\arctan(1 + x) - \arctan(1 - x)}{x}$. It seems that all limits in the form $\lim_{x \to a} \frac{f(x)}{x - a}$ where $a \in \mathbb{R}$, $f(a) = 0$ and for which $\exists f'(a)$, can be evaluated this way, which is as fast as finding $f'$ and calculating $f'(a)$. This adds a very useful tool into my calculus toolbox: Some limits can be evaluated easily using derivatives if one looks for $f(a) = 0$, without l'Hôpital's rule. I have not seen this in widespread use; I propose we call this Sami's rule :). AI: Let $$f(x)=x+x^2+\cdots+x^n-n$$ then by the definition of the derivative we have $$\lim_{x \to 1} \frac{x + x^2 + \dots + x^n - n}{x - 1}= \lim_{x \to 1}\frac{f(x)-f(1)}{x - 1}=f'(1)\\[10pt] = \left[ \vphantom{\frac11} 1 + 2x + 3x^2 + \cdots + nx^{n-1} \right]_{x=1} = \frac{n(n + 1)}{2}$$
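The same device handles the other examples listed in the question; e.g. for $f(x)=\arctan(1+x)-\arctan(1-x)$ we have $f(0)=0$ and $$\lim_{x \to 0} \frac{\arctan(1 + x) - \arctan(1 - x)}{x}=f'(0)=\left[\frac{1}{1+(1+x)^2}+\frac{1}{1+(1-x)^2}\right]_{x=0}=\frac12+\frac12=1.$$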
H: The relation between these two kinds of $\text{Mod}$ What is the relation between these two kinds of $\text{Mod}$? First: $M$ is a subset of the integers $\Bbb Z$, and $M$ is a $\text{Mod}$ if for all $x,y\in \Bbb Z$ we have $$\begin{align*}u,v\in M \Longrightarrow x u+y v\in M.\tag{1}\end{align*}$$ Can this kind of Mod be extended? Here we are limited to $\Bbb Z$. Second: a module over a ring, denoted $A$-mod, where $A$ is a ring with identity, $a,b,\ldots$ are elements of $A$, and $M$ is a set containing elements $\xi,\eta,\ldots$, with $a\xi\in M$ for all $a$ and $\xi$, and $$\begin{align*}(a+b)\xi =a\xi+b\xi,\\a(\xi +\eta )=a\xi+a\eta,\\ a(b\xi)=(ab)\xi,\\1\xi =\xi .\tag{2}\end{align*}$$ Does (2)'s kind of Mod contain (1)'s kind? AI: From the wikipedia page you linked to: If R is any ring and I is any left ideal in R, then I is a left module over R. The example you gave with $\mathbb{Z}$ is a special case of this, since your definition of $M$ is equivalent to $M$ being an ideal in $\mathbb{Z}$.
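Concretely, every $M\subseteq\Bbb Z$ satisfying $(1)$ is an ideal of $\Bbb Z$, and every such ideal has the form $$M=m\Bbb Z=\{mk:k\in\Bbb Z\},\qquad m=\min\{x\in M:x>0\}$$ (or $M=\{0\}$), so $(1)$ describes exactly the $\Bbb Z$-submodules of $\Bbb Z$ in the sense of $(2)$.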
H: Similar matrices properties So I have a question which I can not solve. Assuming $A,B \in \mathbb{M_{n}(\mathbb{R})}$, $A$ similar to $B$, is it possible that $\det(A) = \det(B^{2})+1$? We know that there exists $P$ (invertible) such that $P^{-1}AP=B$ and therefore $B^{2} = P^{-1}A^{2}P$. This means that $$ \det(B^{2})=\det(P^{-1}A^{2}P)=\det(P^{-1})\det(A^{2})\det(P)=\det(A)^2. $$ So the question is actually is it possible to find $A \in \mathbb{M_{n}(\mathbb{R})}$ such that $$\det(A) = \det(A)\det(A)+1.$$ And I do not know how to continue from there. Any help would be appreciated. Thanks in advance !! AI: Notice that what you have is a quadratic equation: if $t:=\det(A)$, then $$ t^2-t+1=0\qquad\Rightarrow\qquad t=\frac{1\pm\sqrt{1-4}}{2}=\frac{1\pm i\sqrt{3}}{2}. $$ So, if such a matrix $A$ existed, it would need to have one of these as its determinant. Can a real matrix have non-real determinant?
H: How to write $K$ as sum of $N$ integers? How to write integer $K$ as sum of $N$ positive integers with minimum variance? Obviously when $N|K$ the solution is each of integers being $\frac KN$ and the variance would be zero. But how about when this not the case? I know that the answer is some of them being $\lfloor\frac KN \rfloor$ and $\lfloor \frac KN \rfloor +1$, but I'm looking for an analytic solution! AI: Do you know what "smoothing" is? Hint: Show that if $a \geq b+2$ then $(a-1)^2 + (b+1)^2 < a^2 + b^2$. This proves that when the variance is minimal, each pair of integers must differ by at most 1 (otherwise we can reduce the variance further). Hence, your claim follows.
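For the hint's inequality, just expand: $$(a-1)^2+(b+1)^2-(a^2+b^2)=-2a+2b+2=-2(a-b-1)\leq-2<0\quad\text{when } a\geq b+2.$$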
H: If $q>1$ is not an integer, can $q^n$ be made arbitrarily close to integers? This question arose when I heard about Mill's constant: the number $A$ such that $\lfloor A^{3^n} \rfloor$ is prime for all $n$. It made me wonder whether $A^{3^n}$ could be made arbitrarily close to composite numbers, or in other words: "how near can $A^{3^n}$ get to the 'next integer'?" I had a feeling, perhaps erroneous, that this might have to do with the rationality of $A$, but since this is an open question, I decided to consider the more general question in the title. Having very little knowledge in this area, the only two thoughts that occurred to me were: to write $q=k+\epsilon$, where $\epsilon<1$ and $k\in\mathbb{N}$ and consider how $k$ and $\epsilon$ effect each other in $(k+\epsilon)^n$. The appears to be difficult because of the symmetry in the binomial expansion. This is also where the rationality of $\epsilon$ might come into play. to see whether $q^{-n}$ might get arbitrarly close to the terms of the harmonic sequence since they both converge to the same thing, $0$. This one is probably a bit naive. AI: A particular class of algebraic integers is called Pisot numbers. These numbers have the property you describing. A Pisot number is an algebraic integer greater than $1$ which has all of its conjugates inside the unit circle. Or equivalently if $f$ is an irreducible polynomial with integer coefficients and all of its roots except one are inside the unit circle, then the root outside the unit circle is a Pisot number. A Pisot number is never an integer. Let $\theta$ be a pisot number and $c_1,\ldots,c_n$ its conjugates. From the theory of the symmetric polynomials it follows that $$ \theta^k+\sum_{i=1}^nc_i^k\in\mathbb Z, \; \forall k\in\mathbb N. $$ Since $|c_i|<1$ it follows that $c_i^k\xrightarrow[k\to\infty]{}0 $. Therefore for large $k$, $\theta^k$ is very close to an integer. The smallest Pisot number is $\theta_0=1.3247179572447460260$. In the following table I list the values of $\theta_0^k$ for $k=1,2,\ldots,50$. $$ \begin{array}{|c |c |c |c |} \hline k & \theta_0^k & k & \theta_0^k\\ \hline 1 & 1.324717957 & 26 & 1496.955904 \\ \hline 2 & 1.754877666 & 27 & 1983.044367 \\ \hline 3 & 2.324717956 & 28 & 2626.974482 \\ \hline 4 & 3.079595621 & 29 & 3480.000269 \\ \hline 5 & 4.079595620 & 30 & 4610.018846 \\ \hline 6 & 5.404313575 & 31 & 6106.974748 \\ \hline 7 & 7.159191238 & 32 & 8090.019112 \\ \hline 8 & 9.483909190 & 33 & 10716.99359 \\ \hline 9 & 12.56350481 & 34 & 14196.99385 \\ \hline 10 & 16.64310042 & 35 & 18807.01269 \\ \hline 11 & 22.04741399 & 36 & 24913.98743 \\ \hline 12 & 29.20660521 & 37 & 33004.00653 \\ \hline 13 & 38.69051439 & 38 & 43721.00011 \\ \hline 14 & 51.25401918 & 39 & 57917.99394 \\ \hline 15 & 67.89711957 & 40 & 76725.00660 \\ \hline 16 & 89.94453353 & 41 & 101638.9940 \\ \hline 17 & 119.1511387 & 42 & 134643.0005 \\ \hline 18 & 157.8416530 & 43 & 178364.0005 \\ \hline 19 & 209.0956721 & 44 & 236281.9944 \\ \hline 20 & 276.9927916 & 45 & 313007.0009 \\ \hline 21 & 366.9373250 & 46 & 414645.9947 \\ \hline 22 & 486.0884635 & 47 & 549288.9950 \\ \hline 23 & 643.9301163 & 48 & 727652.9952 \\ \hline 24 & 853.0257881 & 49 & 963934.9892 \\ \hline 25 & 1130.018579 & 50 & 1276941.990 \\ \hline \end{array} $$
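As a concrete instance of the integer power sums: the quoted smallest Pisot number is the real root of $x^3=x+1$, and by Newton's identities the sums $p_k=\theta_0^k+c_1^k+c_2^k$ satisfy $$p_0=3,\quad p_1=0,\quad p_2=2,\qquad p_k=p_{k-2}+p_{k-3},$$ which is the Perrin sequence $3,0,2,3,2,5,5,7,10,12,17,\ldots$; e.g. $p_{24}=853$, matching $\theta_0^{24}\approx 853.0258$ in the table.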
H: Why does $(a_n)$ bounded imply that $(b_n)$ is decreasing? Why does $(a_n)$ bounded imply that $(b_n)$ is decreasing? $$(a_n)=a_1,a_2,\dots\tag{1}$$ $$b_n=\sup (a_n,a_{n+1},\dots), c_n=\inf (a_n,a_{n+1},\dots)$$ If $\left(a_n\right)$ is bounded, then $\left(b_n\right)$ exists and $(b_n)$ is decreasing, $(c_n)$ is increasing. Why? AI: That $(a_n)$ is bounded ensures that each $b_n$ and $c_n$ is defined. To see that $(b_n)$ is decreasing: Fix an $n$. Any upper bound of $\{a_n, a_{n+1}, \cdots\}$ is also an upper bound of $\{ a_{n+1}, a_{n+2}, \cdots\}$. In particular, $b_n$ is an upper bound of $\{ a_{n+1}, a_{n+2}, \cdots\}$. As $b_{n+1}$ is the least upper bound of $\{ a_{n+1}, a_{n+2}, \cdots\}$, we have $b_{n+1}\le b_n$. A similar argument will establish that $(c_n)$ is increasing.
H: Convergence of Improper Integrals 2 Test the convergence of $$\int_0^{\pi/2}\frac{\sin x}{x^n}\,dx$$ I tried doing it by the comparison test, taking $\phi(x)=\dfrac{1}{x^n}$. Then $$\lim_{x\rightarrow 0}\frac{f(x)}{\phi(x)}=\lim_{x\rightarrow 0}\sin x=0$$ This implies that if $\int\phi(x)\,dx$ converges then $\int f(x)\,dx$ also converges. We can see that $\int_0^{\pi/2}\phi(x)\,dx$ converges for $n<1$. But the answer in the book is “$f(x)$ converges for $n<2$”. Have I missed something? AI: The only problem is at $0$, and since $$\frac{\sin x}{x^n}\sim_0 \frac{1}{x^{n-1}},$$ the integral is convergent if and only if $n-1<1\iff n<2$.
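To see the dichotomy numerically, one can watch the truncated integrals $\int_\epsilon^{\pi/2}$ as $\epsilon\to0$ (a sketch assuming the mpmath library is available):

from mpmath import mp, quad, sin, pi, mpf

mp.dps = 30
def tail(n, eps):
    # integral from eps to pi/2 of sin(x)/x^n
    return quad(lambda x: sin(x) / x**n, [eps, pi / 2])

for n in (1.9, 2.1):
    print(n, [tail(n, mpf(10) ** -k) for k in (2, 4, 6)])
# n = 1.9: the values settle down; n = 2.1: they keep growing as eps shrinks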
H: Denseness of the set $\{f: \int_0^1 x^\alpha f''(x) dx = \int_0^1 x^\beta f''(x) dx = 0 \}$ in $C[0,1]$ Let $\alpha, \beta \in (-1,1) \setminus \{ 0 \}$. Is it true that the set $$ \left\{f \in C^2[0,1]: \int_0^1 x^\alpha f''(x) dx = \int_0^1 x^\beta f''(x) dx = 0 \right\} $$ is dense in $C[0,1]$? I think it is not, but have no idea how to prove it. In case $\alpha, \beta \geq 1$ it is quite easy since we can use integration by parts twice and rewrite the condition without derivatives, but I do not know what to do if $\alpha, \beta < 1$. AI: Your set does indeed seem to be dense in $\mathcal C([0,1])$. Without loss of generality, assume that $\alpha\neq \beta$. We need the following fact: $\mathbf{Fact.}$ Given a compact set $K\subset\mathbb R^2$, there is a constant $C$ such that the following holds: for any $(p,q)\in K$ and $A>1$, one can find a function $\Phi\in \mathcal C^2([0,A])$ such that $\int_0^{A} \Phi''(t)t^\alpha dt=p$, $\int_0^{A}\Phi''(t)t^\beta dt=q$ and $\Vert\Phi\Vert_\infty\leq C$. Assuming this has been proved, let us show that your set (call it $\mathcal A$) is dense in $\mathcal C([0,1])$. By the Weierstrass approximation theorem, it is enough to approximate any $\mathcal C^2$ function; so let us fix $f\in\mathcal C^2([0,1])$ and $\varepsilon \in (0,1)$. Put $L_\alpha(f)=\int_0^1 f''(x)x^\alpha dx$ and $L_\beta(f)= \int_0^1 f''(x)x^\beta dx$. Choose $\gamma >0$ such that $\gamma(1-\alpha)>1$ and $\gamma(1-\beta)>1$. By the above fact applied with $K=\{ (p,q);\; \vert p\vert\leq \vert L_\alpha(f)\vert\;{\rm and}\; \vert q\vert\leq \vert L_\beta(f)\vert \}$, one can find a function $\Phi\in\mathcal C^2([0,{\varepsilon^{-\gamma}}])$ such that $\int_0^{{\varepsilon^{-\gamma}}} \Phi''(t)t^\alpha dt=\varepsilon^{\gamma(1-\alpha)-1}L_\alpha(f)$, $\int_0^{{\varepsilon^{-\gamma}}}\Phi''(t)t^\beta dt=\varepsilon^{\gamma(1-\beta)-1}L_\beta(f)$ and $\Vert\Phi\Vert_\infty\leq C$, where $C$ does not depend on $\varepsilon$. Now define $g$ on $[0,1]$ by $g(x)=f(x)-\varepsilon\, \Phi({\varepsilon^{-\gamma}}x)$. Then $g\in\mathcal C^2([0,1])$ and $\Vert g-f\Vert_\infty\leq C\varepsilon$. Moreover, \begin{eqnarray*}\int_0^1 g''(x)x^\alpha\, dx&=&L_\alpha(f)-\varepsilon^{1-2\gamma}\int_0^1\Phi''({\varepsilon^{-\gamma}}x)x^\alpha dx\\ &=&L_\alpha(f)- \varepsilon^{1-\gamma+\gamma\alpha}\int_0^{\varepsilon^{-\gamma}}\Phi''(t) t^\alpha\, dt\\&=&0\, , \end{eqnarray*} and likewise $\int_0^1 g''(x)x^\beta dx=0$. So $g\in\mathcal A$, and since $C$ does not depend on $\varepsilon$, this shows that $\mathcal A$ is dense in $\mathcal C([0,1])$. To prove the fact, we first note that given $p,q\in\mathbb R$, one can find a quadratic function $\psi(x)=ax^2+bx+c$ such that $\int_0^{1} \psi(x)x^\alpha dx=p$, $\int_0^{1}\psi(x)x^\beta dx=q$, $\psi(1)=0$ and $\vert a\vert+\vert b\vert+\vert c\vert\leq M (\vert p\vert+\vert q\vert)$, where $M$ is a constant depending only on $(\alpha,\beta)$. Indeed, this amounts to solving the linear system $$\begin{cases}\dfrac{a}{\alpha+3}+\dfrac{b}{\alpha +2}+\dfrac{c}{\alpha +1}=p\\[4pt] \dfrac{a}{\beta+3}+\dfrac{b}{\beta +2}+\dfrac{c}{\beta +1}=q\\[4pt] a+b+c=0 \end{cases}$$ whose matrix depends only on $(\alpha,\beta)$ and turns out to be invertible (I'm skipping some row manipulations here). 
It follows that for any $(p,q)\in\mathbb R^2$ and any $A>1$, one can find a function $\varphi\in\mathcal C([0,A])$ such that $\int_0^A \varphi(t) t^\alpha dt=p$, $\int_0^A\varphi(t) t^\beta dt=q$, $\varphi\equiv 0$ on $[1,A]$ and $\Vert\varphi\Vert_\infty\leq M(\vert p\vert+\vert q\vert)$ for some constant $M$ which does not depend on $(p,q)$: just define $\varphi$ to be equal to the above quadratic function $\psi$ on $[0,1]$ and $\varphi\equiv 0$ on $[1,A]$. Now, let $K$ be an arbitrary compact subset of $\mathbb R^2$ and let $A>1$. For any $(p,q)\in K$, define $\Phi:[0,A]\to \mathbb R$ by $\Phi(t)=\int_1^t\!\int_1^s \varphi (u)\, du\, ds$, where $\varphi$ is as above, so that $\Phi''=\varphi$. Then $\Phi\equiv 0$ on $[1,A]$, so $\Vert\Phi\Vert_\infty\leq C$ for some constant $C$ depending only on $K$; and $\Phi$ does the job.
H: Intuitive reason for why many complex integrals vanish when the path is "blown-up"? It is a standard trick for evaluating difficult integrals along the real line to consider a closed-contour and "blow-up" the complex part till it vanishes, leaving us with the residues picked up along the way. This is usually done by bounding the magnitude of the integral from above with something that tends to $0$. My question is, is there an intuitive reason for why we should expect this? My thought was that, if we project the complex plane onto a Riemann sphere, the path which appears to enlarge is actually shrunk to the point at $\infty$. I drew a (not very pretty) picture to illustrate what I mean: In blue is a semi-circular contour in the upper-half plane, centered at the origin, and in red is the stereographic projection of this contour onto a Riemann sphere. It is clear that as the radius of the blue semi-circle $\to\infty$, the red semi-circle that corresponds to the complex portion of the contour is shrunk to the north pole of the sphere. Is this a good way of thinking about it and is this a good enough reason for the integral along that arc to vanish (as long as it does not encounter poles on the way)? AI: I don't think this is a useful way to think about it. If $z$ is the standard coordinate on the complex plane, then we can define a coordinate $w = 1/z$ which includes $\infty$ on the Riemann sphere. Then we have $$ f(z)\, dz = -\frac{1}{w^2}f\left(\frac{1}{w}\right)dw ~. $$ So it really depends on $f$ (obviously!). The only 'heuristic explanation' I can think of for why the trick works so often is that if the integral converges on the real line, then the integrand must be dying away fairly quickly at $\pm\infty$. Therefore it is 'fairly likely' to also die away quickly over a whole half-circle of angles.
H: Check the convergence $\sum_{n=1}^{\infty}\frac{\sin^3(n^2+11)}{n^4}$ I'm trying to check the convergence of this series but I don't know how to start. $$\sum_{n=1}^{\infty}\frac{\sin^3(n^2+11)}{n^4}$$ I thought about using the comparison test: I know that when the terms contain a trigonometric function of some argument $\theta$, I should pick a comparison series $b_n$, and here I chose $b_n=\frac{1}{n^2+11}$. I would like to get some advice on how to do that. Thanks! AI: We know that $$\forall x\in \mathbb{R},\quad |\sin(x)| \le 1,$$ hence $$\forall n\in \mathbb{N},\quad \left|\frac{\sin^3(n^2+11)}{n^4} \right|\le \frac{1}{n^4}, $$ and we know that $\sum \frac{1}{n^4}$ converges, and absolute convergence implies convergence.
H: What do we know about the distribution of Mersenne primes? Mersenne primes are primes of the form $M_n = 2^n - 1$. I'm wondering how far apart successive Mersenne primes can be. For example, is $M_{n+1} \le O((M_n)^e)$? Or, is $M_{n+1}$ always less than some power of $M_n$? If not, how close together can successive Mesenne primes be in the worst case? AI: It is not even known whether there are infinitely many Mersenne primes! There are guesses only, based on probabilistic assumptions for which there is no proof. For a brief survey of some conjectural answers about the distribution of Mersenne primes, please see Wikipedia article on Mersenne Conjectures.
H: Differentiating terms involving evaluation operator and Wronski matrix We have the initial value problem $$\dot{\mathbf{y}} = f(\mathbf{y}), \mathbf{y}(0) = \mathbf{y_0},$$ with $f(\mathbf{y})$ continuously differentiable. There exists a $T > 0$ such that $\mathbf{y_0} = \mathbf{\Phi}^T \mathbf{y_0}$ and $\mathbf{y_0} \neq \mathbf{\Phi}^t \mathbf{y_0}$ for all $0 < t < T$. I already know that $\mathbf{\Phi}^{t+T} \mathbf{y_0} = \mathbf{\Phi}^t \mathbf{y_0}$. Now I want to show that the number $1$ is an eigenvalue of the Wronski matrix $W(T;0,\mathbf{y_0})$. In our lecture about numerical mathematics, we defined the Wronski matrix as follows: $$W(t;t_0,\mathbf{z}) := \left.\frac{\partial}{\partial \mathbf{y}} \mathbf{\Phi}^{t_0,t} \mathbf{y} \right|_{\mathbf{y} = \mathbf{z}} \in \mathbb{R}^{d,d}$$ The standard solution suggests to write down $\mathbf{\Phi}^t \mathbf{y_0} = \mathbf{\Phi}^{t+T} \mathbf{y_0} = \mathbf{\Phi}^T \mathbf{\Phi}^t \mathbf{y_0}$ which holds for all $t \in \mathbb{R}$. Then I am supposed to differentiate both sides with respect to $t$ and plugging in $t = 0$ should lead me to $$\mathbf{f}(\mathbf{y_0}) = W(T;0,\mathbf{y_0}) \mathbf{f}(\mathbf{y_0}).$$ I don't really understand how to differentiate the given equation, how to apply the product or chain rule there, and how that should give me the above equation. After all, they suggest to differentiate with respect to $t$, whereas the Wronski matrix is defined with a partial derivative with respect to $\mathbf{y}$? How do they obtain the Wronski matrix out of it? Could someone just write down every step taken to obtain the desired result? Thanks a lot in advance. AI: We have $$\frac{d}{dt} \Phi^t y_0 = f(\Phi^t y_0),$$ as $t \mapsto \Phi^t y_0$ is the solution of the given IVP. Now $t \mapsto \Phi^T(\Phi^t y_0)$ is the composition of $t \mapsto \Phi^t y_0$ with $z \mapsto \Phi^Tz$. The chain rule gives $$ \frac d{dt} \Phi^T\Phi^t y_0 = D\Phi^T(\Phi^t y_0)f(\Phi^t y_0). \tag 1$$ Now, by your definition, the derivative of $z\mapsto\Phi^Tz = \Phi^{0,T}z$ with respect to $z$ is represented by the Wronski matrix, that is, $D\Phi^T(z) = W(T; 0, z)$ for each $z$; this is where the $\partial/\partial \mathbf{y}$ enters. So (1) reads $$ \frac d{dt} \Phi^T\Phi^t y_0 = W(T; 0, \Phi^t y_0)f(\Phi^t y_0). $$ On the other hand, $\Phi^T\Phi^t y_0 = \Phi^{t+T} y_0 = \Phi^t y_0$ by periodicity, so the left-hand side is just $\frac d{dt}\Phi^t y_0 = f(\Phi^t y_0)$. So, for each $t$, $$ W(T; 0, \Phi^t y_0)f(\Phi^t y_0) = f(\Phi^t y_0), $$ and with $t = 0$ we have $$ W(T; 0, y_0)f(y_0) = f(y_0). $$ Since $f(y_0)\neq 0$ (otherwise the solution would be constant, contradicting $y_0\neq\Phi^t y_0$ for $0<t<T$), $f(y_0)$ is an eigenvector of $W(T;0,y_0)$ with eigenvalue $1$.
H: How many squares are there modulo a Mersenne prime? Mersenne primes are primes of the form $M_n = 2^n - 1$. I'm wondering how many distinct natural numbers result from squaring the naturals modulo $M_n$. As an example, $M_3 = 7$. If we take the naturals less than seven, we get: $$1^2 \equiv 1 \bmod 7$$ $$2^2 \equiv 4 \bmod 7$$ $$3^2 \equiv 2 \bmod 7$$ $$4^2 \equiv 2 \bmod 7$$ $$5^2 \equiv 4 \bmod 7$$ $$6^2 \equiv 1 \bmod 7$$ Thus, there are $3$ distinct results of squaring; namely $1$, $2$, and $4$. So I'm wondering, for a given Mersenne prime $M_n$, how many different squares can we get? AI: For any odd prime $p$, there are $\frac{p-1}{2}+1=\frac{p+1}{2}$ squares modulo $p$. To show this, note first that $0$ is a square modulo $p$. It is, modulo $p$, $0^2$, or, if you prefer, it is congruent to $p^2$. (But it is not called a *quadratic residue of $p$.) Now consider the squares of numbers in the interval $\left[1,\frac{p-1}{2}\right]$. These are all distinct modulo $p$. And since the numbers in the interval $\frac{p+1}{2}$ to $p-1$ are the negatives (modulo $p$) of numbers in the interval $\left[1,\frac{p-1}{2}\right]$, squaring them produces nothing new. You will observe this from the example you calculated. The squares of $1$, $2$, and $3$ were distinct modulo $7$, and after that you got nothing new. To show that squares of numbers in the interval $\left[1,\frac{p-1}{2}\right]$ are all distinct modulo $p$, let $x$ and $y$ be numbers in the interval, with $x\gt y$. Suppose $x^2\equiv y^2\pmod{p}$. Then $(x-y)(x+y)$ is divisible by $p$. Thus one of them is. That's impossible, since each of $x-y$ and $x+y$ lies between $1$ and $p-1$.
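A quick empirical check of the count (a Python sketch; note the set includes $0$, matching $(p+1)/2$):

def num_squares(p):
    # number of distinct values of x^2 mod p, including 0
    return len({x * x % p for x in range(p)})

for n in (2, 3, 5, 7, 13):        # Mersenne exponents
    p = 2 ** n - 1
    print(p, num_squares(p), (p + 1) // 2)   # the last two columns agree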
H: When are $3$ vectors associative in triple cross products? The question I am trying to show under what conditions $$\vec{A}\times(\vec{B}\times\vec{C}) = (\vec{A}\times\vec{B})\times\vec{C}.$$ I have found that the right-hand side of the above equation is equal to \begin{align} (\vec{A}\times\vec{B})\times\vec{C} &=-\vec{C}\times(\vec{A}\times\vec{B})\\ &= \vec{C}\times(\vec{B}\times\vec{A}). \end{align} This is similar to the left-hand side of the original equation. The conclusion I arrived at was that, in order for the equality to be true, either $\vec{A}, \vec{B}$ or $\vec{C}$ has to be zero, or $\vec{A}$ must be equal to $\vec{C}$. Is this correct? If not, or if I am missing anything, please let me know. AI: Your computation is a good start, but the conclusion is missing one case. By the BAC-CAB rule, $A\times (B\times C) = (A\cdot C)B-(A\cdot B)C$ and likewise $(A\times B)\times C = -(C\cdot B)A+(C\cdot A)B$. Assume the two are equal; since the dot product commutes, the $B$ terms cancel and we are left with $$ (A\cdot B) C = (C\cdot B) A. $$ If $A\cdot B \neq 0$, this gives $C = \frac{C\cdot B}{A\cdot B}\,A$, so $C$ is a scalar multiple of $A$. If $A\cdot B = 0$, the equation instead forces $(C\cdot B)A = 0$, i.e. $A=0$ or $C\cdot B=0$. So the equality holds exactly when one of the vectors is zero, when $C \propto A$, or when $B$ is orthogonal to both $A$ and $C$. The last case is easy to miss: for $A=(1,0,0)$, $C=(0,1,0)$, $B=(0,0,1)$, both sides vanish although $A\not\propto C$.
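The two cases are easy to verify numerically (a sketch with numpy; the vectors below are my own illustrative choices):

import numpy as np

def lhs(A, B, C): return np.cross(A, np.cross(B, C))
def rhs(A, B, C): return np.cross(np.cross(A, B), C)

A = np.array([1.0, 0, 0])
B = np.array([1.0, 2, 3])
print(np.allclose(lhs(A, B, 3 * A), rhs(A, B, 3 * A)))   # True: C is parallel to A

Bp = np.array([0.0, 0, 1])          # orthogonal to both A and C below
C = np.array([0.0, 1, 0])
print(lhs(A, Bp, C), rhs(A, Bp, C))  # both [0. 0. 0.] although C is not parallel to A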
H: Completing metric space In the completion of a metric space, a distance is defined on the set of equivalence classes of Cauchy sequences: $$ \begin{align} \tilde d:\tilde X\times \tilde X &\to \mathbb{R^+}\\ ([x_n],[y_n]) &\mapsto \lim_{n\to \infty}(d(x_n,y_n)) \end{align}$$ with $x_n,y_n$ Cauchy sequences in the metric space $(X,d)$. A detail troubles me. I can see that this is well-defined (w.r.t. various representatives of the equivalence classes), except for the fact that this limit need not exist? What if $d(x_n,y_n)$ were periodic, for instance? Is it clear that it can't be? AI: By definition, $$d(\bar x,\bar y)=\lim\limits_{n\to\infty}d(x_n,y_n)$$ Now, since $x_n,y_n$ are Cauchy, and $$|d(x_m,y_m)-d(x_n,y_n)|\leq d(x_n,x_m)+d(y_n,y_m),$$ $d_n:=d(x_n,y_n)$ is also Cauchy, but in $\Bbb R$, which is complete! Addendum: The inequality $$|d(x,y)-d(z,w)|\leq d(x,z)+d(y,w)$$ is known as the quadrilateral inequality.
H: Could you please explain how to expand $(1 - \frac1x)^{-n}$ into a sum of powers of $x$? Could you please explain how to expand $(1 - \frac1x)^{-n}$ into a sum of powers of $x$? Thank you in advance. AI: Consider the Taylor expansion of $(1-1/x)^{-n}$ about $1/x=0$: $$\left ( 1-\frac{1}{x}\right)^{-n} = 1+ (-n) \left ( -\frac{1}{x}\right)+\frac{1}{2!}(-n)(-n-1)\left ( -\frac{1}{x}\right)^2 + \frac{1}{3!} (-n)(-n-1)(-n-2)\left ( -\frac{1}{x}\right)^3+\cdots$$ Note also that you may define the negative binomial coefficients $\binom{-n}{k}$ to match the above coefficients to resemble a binomial expansion.
H: Parametrization of unit sphere in $\mathbb{R}^3$ I would like to show (I'm not yet sure if it's true, though), that any vector $v\in \mathbb{R}^3$ with $\|v\| = 1$ can be written as $\left(\cos(\beta)\sin(\alpha),\; \sin(\alpha)\sin(\beta), \; \cos^2\left(\frac{\alpha}{2}\right)-\sin^2\left(\frac{\alpha}{2}\right)\right)^T$. Hereby, $0 \leq \alpha \leq \pi$, and $0 \leq \beta < 2 \pi$. Any ideas whether it's true or how to show it? AI: Let $v=(v_1,v_2,v_3)$ have length $1$. Let $\alpha$ be the angle between the vector $v$ and the vector $(0,0,1)$. Let $\beta$ be the angle between $(v_1,v_2,0)$ and $(1,0,0)$. We find that $v_1=\sin(\alpha)\cos(\beta)$, $v_2=\sin(\alpha)\sin(\beta)$, $v_3=\cos(\alpha)=\cos^2\left(\frac{\alpha}{2}\right)-\sin^2\left(\frac{\alpha}{2}\right)$.
H: $n$ and $n^5$ have the same units digit? Studying GCD, I got a question that asks to show that $n$ and $n^5$ have the same units digit... What would be an idea for starting such a statement? Testing: $0$ and $0^5=0$; $1$ and $1^5=1$; $2$ and $2^5=32$. In my studies I have not yet met "mod", so please use other means, if possible of course. I demonstrated in a previous period that $$2|n^5-n$$ and $$5|n^5-n$$ by Fermat's Little Theorem. Only I do not understand what must happen for the units digits of the two numbers to be equal... What must occur? AI: Without using any modular arithmetic: $$n^5-n=n(n-1)(n+1)(n^2+1)=n(n-1)(n+1)(n^2-4+5)=n(n-1)(n+1)(n^2-4)+5n(n-1)(n+1)=$$ $$=(n-2)(n-1)n(n+1)(n+2)+5(n-1)n(n+1)$$ $(n-2)(n-1)n(n+1)(n+2)$ is the product of $5$ consecutive integers, thus divisible by both $2$ and $5$. $5(n-1)n(n+1)$ is a multiple of $5$ and is even, since $(n-1)n$ is even. Hence both terms, and so $n^5-n$ itself, are divisible by $2$ and by $5$, i.e. $10 \mid n^5-n$; and $10 \mid n^5-n$ is exactly the statement that $n^5$ and $n$ have the same units digit.
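The claim is also immediate to confirm numerically (a one-line Python check):

print(all(n % 10 == n ** 5 % 10 for n in range(10000)))  # True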
H: If $f<1$, $f(0)^2 + f'(0)^2=4$, exists $x_0$ s.t. $f''(x_0) + f(x_0)=0$ Suppose $f:\mathbb{R}\to\mathbb{R}$ is $C^2$, $f < 1$ for all $x$, and $f(0)^2 + f'(0)^2=4$. Show that $\exists x_0$ s.t. $f''(x_0) + f(x_0)=0$. So far, I have let $\phi(x) = f(x)^2 + f'(x)^2$. Then $$\phi'(x) = 2f(x)f'(x) + 2f'(x)f''(x) = 2f'(x)(f(x) + f''(x)).$$ So we need to show that there's a critical point of $\phi$ that is not a critical point of $f$. I believe this is supposed to be an exercise in the mean value theorem, but I don't know where to find another value of $\phi(x)$. Any ideas? (This is a problem from a teacher's set of notes, so of course there could be a typo. Could it be that $|f|<1$ is what he meant, for instance?) AI: $f\equiv -2$ is a counter-example. So there must be an additional restriction. I will post this as a wiki, in case anyone can solve it, given a reasonable restriction.
H: Approximate measures of sets with measures of borel subsets. Show that for each subset $A$ of $\mathbb{R}$ there is a Borel subset of $B$ of $\mathbb{R}$ that includes $A$ such that $ \lambda (B) = \lambda ^*(A)$ If A is Borel it is evident? So we need to approximate A with some Borel subset, which is just a "little" bit bigger? How is this done? I cannot see this when for example A is some strange uncountable set. Thanks. AI: Of course if $A$ is Borel you take $B=A$. I assume $\lambda^*$ denotes the Lebesgue outer measure, i.e. $$ \lambda^*(A) = \inf \left \{ \left. \sum_{n \ge 1} (b_n - a_n) \, \right| \, A \subseteq \bigcup_{n \ge 1} ]a_n,b_n[ \right \}. $$ If $\lambda^*(A) = \infty$, just choose $B = \mathbb R$. Otherwise, choose $a_n^m, b_n^m$ such that $\sum_{n \ge 1} (b_n^m - a_n^m) \le \lambda^*(A) + \frac 1m$ and $A \subseteq \bigcup_{n \ge 1} ]a_n^m,b_n^m[$. Let $\mathcal O_m = \bigcup_{n \ge 1} ]a_n^m,b_n^m[$. Then let $$ B = \bigcap_{m \ge 1} \mathcal O_m. $$ It is clear that $B$ is Borel. Since $A \subseteq \mathcal O_m$ for every $m$, $A \subseteq B$. By monotonicity of the outer measure, $\lambda^*(A) \le \lambda^*(B) = \lambda(B)$. But by monotonicity of the measure $\lambda$, we have $B \subseteq \mathcal O_m$ for every $m$, hence $$ \lambda(B) \le \lambda(\mathcal O_m) \le \lambda^*(A) + \frac 1m $$ which implies $\lambda(B) \le \lambda^*(A)$, hence $\lambda(B) = \lambda^*(A)$. Hope that helps,
H: Why is this limit $\frac{e^x}{x^{x-1}}$ coming out wrong? Attempting to answer this question, I thought to evaluate the limit by taking the logarithm and then using L'Hopital's rule: $$\begin{align} L&=\lim_{x\to\infty}\dfrac{e^x}{x^{x-1}}\\ \ln{L}&=\lim_{x\to\infty}\frac{x}{(x-1)\ln(x)} \\ \ln{L}&=\lim_{x\to\infty}\frac{1}{(x-1)\frac{1}{x}+\ln{x}} \\ \ln{L}&=\lim_{x\to\infty}\frac{1}{1-\frac{1}{x}+\ln{x}} \\ \ln{L}&=0 \\ e^{\ln{L}} &=e^0 \\ L&=1 \end{align}$$ But Wolfram Alpha says that the limit is $0$. Where am I going wrong? AI: The error is in the second line: the logarithm of a quotient is a difference of logarithms, not a quotient. $$\ln(e^x/x^{x-1}) =x-(x-1)\ln x\neq \frac{x}{(x-1)\ln x}$$ Since $x-(x-1)\ln x\to-\infty$ as $x\to\infty$, we get $\ln L=-\infty$ and hence $L=0$, as Wolfram Alpha says.
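A numeric sanity check (a sketch assuming mpmath is available) shows the ratio collapsing to $0$ while its logarithm $x-(x-1)\ln x$ goes to $-\infty$:

from mpmath import mp, exp, log

mp.dps = 30
for x in (10, 100, 1000):
    # the ratio itself, and its exact logarithm x - (x-1) ln x
    print(x, exp(x) / x ** (x - 1), x - (x - 1) * log(x))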
H: Do we have $x^TDAx\ge \min(\lambda_D)\min(\lambda_A)x^Tx$ if $A$ is PD and D is both diagonal and PD? Suppose matrix $A\in\mathbb{R}^{n\times n}$ is symmetric positive definite and $D\in\mathbb{R}^{n\times n}$ is both diagonal and positive definite, do we have the following result? $$x^TDAx\ge \min(\lambda_D)\min(\lambda_A)x^Tx,\quad \forall x\in\mathbb{R}^n$$ where $ \min(\lambda_D)$ and $\min(\lambda_A)$ are the minimum eigenvalues of $D$ and $A$, respectively. I know $DA$ is not symmetric any more, and I feel the above result is wrong. But I cannot find a counterexample. What's your opinion? Thanks. AI: No, the inequality does not always hold. Here is a counterexample: $$ D=\pmatrix{1&0\\ 0&3},\ A=\pmatrix{1&2\\ 2&5},\ DA=\pmatrix{1&2\\ 6&15},\ x=(4,-1)^T,\ x^TDAx=-1, $$ but $\min(\lambda_D)\min(\lambda_A)x^Tx$ is positive because both $D$ and $A$ are positive definite and $x$ is nonzero.
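The counterexample takes three lines to verify (a numpy sketch reproducing the matrices from the answer):

import numpy as np

D = np.diag([1.0, 3.0])
A = np.array([[1.0, 2.0], [2.0, 5.0]])
x = np.array([4.0, -1.0])
print(np.linalg.eigvalsh(A))   # [0.17..., 5.82...]: both positive, so A is PD
print(x @ D @ A @ x)           # -1.0 < 0, so the proposed bound fails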
H: Evaluate $\sum\limits_{(m,n) \in D,m < n} \frac{1}{n^2 m^2} $ where $\gcd(m,n)=1$ I have no clue on how to evaluate: $$\sum\limits_{(m,n) \in D,m < n} \frac{1}{n^2 m^2} \text{ where }D = \{ (m,n) \in (\mathbb{N}^*)^2 \mid \gcd(m,n) = 1\} $$ If someone is able to give me a hint... Thanks much in advance. AI: Call your sum $S$. By symmetry in $m$ and $n$, $S= \frac{1}2\left(\displaystyle\sum_{(m,n)=1}\frac{1}{m^2n^2}-1\right)$, the $-1$ accounting for the single coprime pair on the diagonal, $m=n=1$. Using $\sum_{d\mid(m,n)}\mu(d)=[\gcd(m,n)=1]$ and writing $m=dr$, $n=ds$, $$2S+1=\displaystyle\sum_{m,n}\displaystyle\sum_{d|(m,n)}\frac{\mu(d)}{m^2n^2}=\displaystyle\sum_{d=1}^\infty \frac{\mu(d)}{d^4}\displaystyle\sum_{r,s}\frac{1}{r^2s^2}=\frac{\zeta(2)^2}{\zeta(4)}=\frac{(\pi^2/6)^2}{\pi^4/90}=\frac{5}{2},$$ using $\sum_d \mu(d)/d^4=1/\zeta(4)$. Thus, $S=\frac{3}{4}$.
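The closed form is easy to corroborate numerically (a sketch; truncating at $n<2000$ leaves a tail of order $10^{-3}$, so expect roughly $0.749$):

from math import gcd

S = sum(1 / (m * m * n * n)
        for n in range(2, 2000) for m in range(1, n) if gcd(m, n) == 1)
print(S, 3 / 4)   # the partial sum creeps up toward 0.75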
H: Solving a straightforward linear ODE I'm trying to solve this linear ODE, but I seem to get it wrong for some reason. $$\dfrac{dy}{dt}-\dfrac{1}{2}y(t)=2\cos t.$$ I get $$y(t)=e^{t/2}(c+2 \sin t).$$ How can this be wrong? And yet it seems to be. AI: Hints: Solve for the homogenous, $y_h = c_1e^{t/2}$ (you got that right) Choose $y_p = a \cos t + b \sin t$, substitute back into ODE and solve for $a$ and $b$. What do you get? You should get $(a = -4/5 ~~\text{and}~~ b = 8/5)$ $y = y_h + y_p$
H: A definition of algebraic expression Definition of algebraic expression An algebraic expression is a collection of symbols; it may consist of one or more than one terms separated by either a $+$ or $-$ sign. If by symbol we only mean letters such as $a, b$ or $c$, then what about this algebraic expression which consists of only one term $1a$ or $a$? There is only one symbol i.e., $a$, not a collection! But if by symbols we mean numerical symbols such as $1, 2 , 3$, etc, and letters such as $a, b, c,$ etc, then $1a$ is a collection of symbols. I want to confirm what the word symbols is meant to be in the definition; numerical symbols or letters? And want to confirm if $a$ is an algebraic expression consisting of only 1 term. This is a definition of algebraic expression from the book Algebra for Beginners by S. Hall, link: http://www.forgottenbooks.org/books/Algebra_for_Beginners_1000009092. AI: The definition says that in an expression the terms are separated by a $+$ or $-$ sign. $1a=1 \times a$, so it is an algebraic expression. It is possible to have a collection of just one element- so $a$ is indeed an algebraic expression. Letters as well as numbers are both considered symbols.
H: Showing absolute convergence for series representation of $\frac{z}{exp(z) - 1}$ I would appreciate help (self-study) in showing that the related power series of $\frac{z}{exp(z) - 1}$ converges absolutely for $|z| < 2\pi$ without looking at the specific terms of the series. (I do know the series, other than the first two terms, consists of a series with coefficients that include Bernoulli numbers and that series can be shown to converge absolutely by comparing $\zeta$-functions and applying the results to the series. Stopple's "Primer of Analytic Number Theory" - Solutions page 358.) But I was wondering if you can make that determination using theorem(s) from complex analysis. I am able to show the function satisfies the Cauchy-Riemann equations and is thus analytic with a convergent power series. What I have thought of is applying a lemma from "Flanigan" page 203: If the series $\sum {a_k(z - z_0)^k}$ converges at the point $z_1$, then it converges absolutely for all points such that $|z - z_0| < |z_1 - z_0|$. Using $z_0 = 0$. I realize there will be a pole at $2\pi i$. But if I substitute $z = 2\pi$ in the function, there should be no problem? But I am not comfortable with my line of thinking because if I substitute $z = 3\pi$ and if there is "no problem" then, in that the theorem says all points, the series should converge for $z = 2\pi i$. I would appreciate help with the first question and correction regarding what seems like a misunderstanding of the lemma. Thanks very much. AI: The function is analytic except at points where $e^z=1$, i.e. where $z=2\pi i n$ for $n\in\mathbb{Z}$. You can check that $z=0$ is a removable singularity, so the Maclaurin series for $f$ will converge (absolutely and locally uniformly) on the largest disc on which $f$ is analytic, hence on $|z|<2\pi$.
H: The closure of $\mathbb{Q}$ Why is the closure of $\mathbb{Q}$ not $\mathbb{Q}$ itself? Since each ball around $q \in \mathbb{Q}$ contains a point in $\mathbb{Q}$ and a point in $\mathbb{R} \setminus \mathbb{Q}$. AI: Take the sequence $a_n$ where $$a_n=\sum_{k=0}^n \frac{1}{k!}$$ Each $a_n$ is in $\mathbb{Q}$, but the limit of $a_n$ is $e\in \mathbb{R}\setminus \mathbb{Q}$. So, $\mathbb{Q}$ is not closed, and hence $\mathbb{Q}\ne \overline{\mathbb{Q}}$.
H: Is there a slowest rate of divergence of a series? $$f(n)=\sum_{i=1}^n\frac{1}{i}$$ diverges slower than $$g(n)=\sum_{i=1}^n\frac{1}{\sqrt{i}}$$ , by which I mean $\lim_{n\rightarrow \infty}(g(n)-f(n))=\infty$. Similarly, $\ln(n)$ diverges as fast as $f(n)$, as $\lim_{n \rightarrow \infty}(f(n)-\ln(n))=\gamma$, so they 'diverge at the same speed'. I think there are an infinite number of 'speeds of divergence' (for example, $\sum_{i=1}^n\frac{1}{i^k}$ diverge at different rates for different $k<1$). However, is there a slowest speed of divergence? That is, does there exist a divergent series, $s(n)$, such that for any other divergent series $S(n)$, the limit $\lim_{n \rightarrow \infty}(S(n)-s(n))=\infty$ or $=k$? If so, are there an infinite number of these slowest series? AI: The proof in the paper ``Neither a worst convergent series nor a best divergent series exists" by J. Marshall Ash that I referenced above is so nice that I wanted to reproduce it below before it gets lost on the Internet. $\bf{Theorem: }$ Let $\sum_{n=1}^{\infty} c_n$ be any convergent series with positive terms. Then, there exists a convergent series $\sum_{n=1}^{\infty} C_n$ with much bigger terms in the sense that $\lim_{n\rightarrow\infty} C_n/c_n = \infty$. Similarly, for any divergent series $\sum_{n=1}^{\infty} D_n$ with positive terms, there exists a divergent series $\sum_{n=2}^{\infty} d_n$ with much smaller terms in the sense that $\lim_{n\rightarrow\infty} \frac{d_n}{D_n} = 0$. $\bf{Proof: }$ For each $n$, let $r_n = c_n + c_{n+1}+\cdots$ and $s_n = D_1 + \cdots + D_n$. Letting $C_n = \frac{c_n}{\sqrt{r_n}}$ and $d_n = \frac{D_n}{s_{n-1}}$, then $\lim_{n\rightarrow\infty} \frac{C_n}{c_n} = \lim_{n\rightarrow\infty} \frac{1}{\sqrt{r_n}}=\infty$ and $\lim_{n\rightarrow\infty} \frac{d_n}{D_n} = \lim_{n\rightarrow\infty} \frac{1}{s_{n-1}} = 0$, so it only remains to check $\sum C_n$ converges and that $\sum d_n$ diverges. To see that this is indeed the case, simply write $C_n = (1/\sqrt{r_n})(r_n-r_{n+1})$ and $d_n = 1/s_{n-1}(s_n-s_{n-1})$; observe that $\int_0^{r_1} 1/\sqrt{x}dx<\infty$ and $\int_{s_1}^{\infty} 1/xdx = \infty$; and note that the $n$th term of series $\sum C_n$ is the area of the gray rectangle in Figure 1a, while the $n$th term of series $\sum d_n$ is the area of the gray rectangle in Figure 1b.
H: Not both $2^n-1,2^n+1$ can be prime. I am trying to prove that not both of the integers $2^n-1,2^n+1$ can be prime for $n \not=2$. But I am not sure if my proof is correct or not: Suppose both $2^n-1,2^n+1$ are prime; then $(2^n-1)(2^n+1)=4^n-1$ has precisely two prime factors. Now $4^n-1=(4-1)(4^{n-1}+4^{n-2}+ \cdots +1)=3A$. So one of $2^n-1, 2^n+1$ must be $3,$ which implies $n=1$ or $n=2$ (rejected by assumption). Putting $n=1$, we have $2^n-1=1,$ which is not a prime. Hence the result follows. I would also like to know if there is an alternative proof; thank you so much. AI: Of the three consecutive integers $2^n-1,2^n,2^n+1$, one must be divisible by $3$, and it can't be $2^n$.
H: Prove that $\sqrt 2 + \sqrt 3$ is irrational I have proved in earlier exercises of this book that $\sqrt 2$ and $\sqrt 3$ are irrational. Then, the sum of two irrational numbers is an irrational number. Thus, $\sqrt 2 + \sqrt 3$ is irrational. My first question is, is this reasoning correct? Secondly, the book wants me to use the fact that if $n$ is an integer that is not a perfect square, then $\sqrt n$ is irrational. This means that $\sqrt 6$ is irrational. How are we to use this fact? Can we reason as follows: $\sqrt 6$ is irrational $\Rightarrow \sqrt{2 \cdot 3}$ is irrational. $\Rightarrow \sqrt 2 \cdot \sqrt 3$ is irrational $\Rightarrow \sqrt 2$ or $\sqrt 3$ or both are irrational. $\Rightarrow \sqrt 2 + \sqrt 3$ is irrational. Is this way of reasoning correct? AI: If $\sqrt{2} + \sqrt{3}$ is rational, then so is $(\sqrt{2} + \sqrt{3})^2 = 5 + 2 \sqrt{6}$. But this is absurd since $\sqrt{6}$ is irrational.
H: Properties of the Fourier transform of a certain function In my research I met the Fourier transform of the function $f(x)=(1+x^2)^{-1/2}$. I was not able to find its explicit formula. Is this a function known as a 'special function'? I would like to know if it is nonnegative, summable, etc. AI: If the Fourier transform is defined with the unitary convention $\;\displaystyle\hat f(k)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)e^{-ikx}\,dx,\;$ then the answer is $$\sqrt{\frac 2{\pi}}K_0(|k|),$$ where $K_0$ is the modified Bessel function of the second kind, as provided by Alpha or in the Wolfram functions 'transforms' of the modified Bessel $K$ function (with the kernel $e^{-2\pi ikx}$ and no prefactor one gets $2K_0(2\pi|k|)$ instead). This integral representation from the DLMF (and all the information there!) may be the most useful here: $$K_0(x)=\int_0^\infty \cos(x\,\sinh t)\,dt=\int_0^\infty \frac{\cos(x t)}{\sqrt{1+t^2}}\,dt$$ (remembering that $\;\displaystyle\operatorname{argsinh}'(x)=\frac 1{\sqrt{1+x^2}}\ $), since the second form exhibits the transform directly. In particular, $K_0$ is positive, decays like $\sqrt{\pi/(2x)}\,e^{-x}$ as $x\to\infty$, and has only a logarithmic singularity at $0$, so your transform is nonnegative and summable.
H: Solving the differential equation $\frac{dC}{dt} = -\alpha C$ I am trying to solve the following problem: After drinking a cup of coffee, the amount $C$ of caffeine in a person's body obeys the differential equation $$\frac{dC}{dt} = -\alpha C$$ where the constant $-\alpha$ has an approximate value of $0.14$ hours$^{{−1}}$. How many hours will it take a human body to metabolize half of the initial amount of caffeine? Round your answer to the nearest integer. Firstly, I am having a little trouble understanding the question. Is the question asking for $c(t)$? Next, I tried the following approach $$\begin{align} \frac{dC}{dt} &= -\alpha C\\ \int \frac{dC}{C} &= \int -\alpha dt \\ C &= \exp (-\alpha t + c)\\ C &= e^{-\alpha t} + e^c \\ \end{align}$$ That is where I am stuck. It is asking "half of the initial amount of caffeine" which means that $e^{c}$ is $e^\frac{1}{2}$ because $c$ is my initial amount. So then what do I do next? make $C = 0$? Please don't provide me with the full answer, just some hints. Thanks! AI: Hint: There was an algebra glitch, you should have $$C(t)=e^c e^{-\alpha t}.$$ Put $t=0$. We get $e^c=C(0)$, so $$C(t)=C(0)e^{-\alpha t}.$$ Continue.
H: Proof that if $n<k$, then $A^TA$ is not invertible Can I get a proof of the fact that if $n<k$ and $A$ is an $n\times k$ matrix, then $A^{T}A$ is not invertible? AI: Hint: $\operatorname{rank}(A) \leq n$; then $A^TA$ is a $k \times k$ matrix and $$\operatorname{rank}(A^TA) \leq \operatorname{rank}(A) \leq n <k \,.$$
H: How do I determine the curvature of an arc length parameterized curve in the $xy$-plane? I have a 2D curve in the $xy$-plane, which was arc length parameterized numerically, and fitted by cubic splines for both $x$ and $y$. If one of the segments of the cubic spline is: \begin{align} x&=a_1s^3 + a_2s^2 + a_3s + a_4 \\ y&=b_1s^3 + b_2s^2 + b_3s + b_4, \end{align} where $a_1$, $a_2$, $a_3$, $a_4$, $b_1$, $b_2$, $b_3$, and $b_4$ are constants and $s$ is the arc length parameter, how can I find the curvature $k(s)$ of the curve using the above equations? Update in the question: Please find the attached figure for the details of my problem. [figure not reproduced] A MATLAB code for the problem is also given below:
%% Original curve
Lt=15; % length of the parameter 't'
N=500; % number of points on the curve
h=Lt/N; % step size along t
t = 0:h:Lt;
% Function definition
x = log(2+t);
y = log(1+t);
%% Actual derivative, and curvature
slope=gradient(y)./gradient(x); % derivative
second_derivative=gradient(slope)./gradient(x); % second derivative
curv=second_derivative./(1+(slope).^2).^(3/2); % curvature at any point
%% Arc length parameterization
x_t = gradient(x);
y_t = gradient(y);
s = cumtrapz( sqrt(x_t.^2 + y_t.^2 ) ); % arc length
X = s.';
V = [ t.', x.', y.' ];
L = s(end); % total length of the arc
s0 = s(1);
Ni = length(s);
Xq = linspace(s0,L,Ni); % equally spaced arc length values
Vq = interp1(X,V,Xq);
xs = Vq(:,2); % arc length parameterized x
ys = Vq(:,3); % arc length parameterized y
%% Cubic spline interpolation
pp1 = csaps(X, xs);
Cx = pp1.coefs; % coefficients of the piecewise polynomials
pp2 = csaps(X, ys);
Cy = pp2.coefs;
%% Comparing the actual function with the fitted ones
fitted_x=[]; fitted_y=[]; fitted_slope=[]; fitted_curvature=[];
for i=1:N
    out = fitted_values(Cx,Cy,X,i);
    fitted_x = cat(1,fitted_x,out(1));
    fitted_y = cat(1,fitted_y,out(2));
    fitted_slope = cat(1,fitted_slope,out(3));
    fitted_curvature = cat(1,fitted_curvature,out(4));
end
figure; plot(x,y,'b'); % actual function
hold on; plot(fitted_x,fitted_y,'r-'); % fitted function
legend('Actual function','Fitted function');
figure; plot(x,slope,'b'); % actual slope
hold on; plot(fitted_x,fitted_slope,'r-'); % fitted slope
legend('Actual slope','Fitted slope');
figure; plot(x,curv,'b'); % actual curvature
hold on; plot(fitted_x,fitted_curvature,'r-'); % fitted curvature
legend('Actual curvature','Fitted curvature');

function fval = fitted_values(Cx,Cy,X,i)
s = X(i); % arc length parameter
if i==1
    s1 = s;
else
    s1 = X(i-1);
end
spline_segment_number = i-1; % which cubic spline segment to evaluate on
if i==1
    spline_segment_number = 1;
end
coeff_x = Cx(spline_segment_number,:);
coeff_y = Cy(spline_segment_number,:);
% Fitted function values
x_fitted = coeff_x(1)*(s-s1)^3 + coeff_x(2)*(s-s1)^2 + coeff_x(3)*(s-s1) + coeff_x(4);
y_fitted = coeff_y(1)*(s-s1)^3 + coeff_y(2)*(s-s1)^2 + coeff_y(3)*(s-s1) + coeff_y(4);
% Fitted first derivatives with respect to s
x_fitted_s = 3*coeff_x(1)*(s-s1)^2 + 2*coeff_x(2)*(s-s1) + coeff_x(3);
y_fitted_s = 3*coeff_y(1)*(s-s1)^2 + 2*coeff_y(2)*(s-s1) + coeff_y(3);
slope_fitted = y_fitted_s/x_fitted_s;
% Fitted second derivatives with respect to s
x_fitted_s_s = 6*coeff_x(1)*(s-s1) + 2*coeff_x(2);
y_fitted_s_s = 6*coeff_y(1)*(s-s1) + 2*coeff_y(2);
% curvature formula for a parametric curve
curvature_fitted = (x_fitted_s*y_fitted_s_s - y_fitted_s*x_fitted_s_s)/(x_fitted_s^2 + y_fitted_s^2)^(3/2);
fval = [x_fitted; y_fitted; slope_fitted; curvature_fitted];
end
AI: First we differentiate $(x,y)$ with respect to the arc length $s$, for input to the curvature formula. 
$$x(s)= a_1 s^3 + a_2 s^2 + a_3 s + a_4, \qquad x'(s)= 3 a_1 s^2 + 2 a_2 s + a_3, \qquad x''(s)=6 a_1 s + 2 a_2, $$ and similarly find $ y'(s),y''(s) $. Compute the curvature with the formula $$ k_g(s) =\dfrac{x'y''-y'x''}{(x'^2+ y'^2)^{3/2}} = \dfrac{d\phi}{ds},$$ noting that for a true arc length parameterization $x'^2+y'^2=1$, so the denominator equals $1$ up to fitting error. Here $\phi$ is the tangent (slope) angle, which can be recovered by integrating back numerically: $$ \phi= \int k_g\;ds.$$ Next, $(x,y)$ can themselves be recovered by integrating, using any CAS: $$x(s)= \int \cos \phi\; ds, \quad y(s)= \int \sin \phi \; ds. $$
H: When does this fact involving Lagrange's Theorem hold? Let $G$ be a finite group, with subgroups $H$ and $K$, and $H \subseteq K \subseteq G$. Then we have \begin{equation*} [G\,:\,H] = [G\,:\,K][K\,:\,H]. \end{equation*} Question: Do we require that $H\subseteq K$ be true for this property to hold, or is it true for all subgroups $H$ and $K$? Please give your reason. AI: Do we require that H⊆K be true for this property to hold, Yes because otherwise the notation $[K:H]$ is not meaningful. One cannot find the index if the cosets are not defined. is it true for all subgroups H and K ? Yes, it is, as long as $H\subseteq K$
H: How can you fold a rectangular piece of paper once to get an 8 sided polygon? If you start with a rectangular piece of paper, how can you fold it once to get an 8 sided polygon? I am sure the solution is straightforward, but after trying some possibilities, I was unable to come up with such a construction. Could somebody show the solution of how to do this? AI: This is how you do it. Hope this helps.
H: Am I going correctly? Let $f(x)$ be a polynomial in $x$ and let $a, b$ be two real numbers where $a \neq b$. Show that if $f(x)$ is divided by $(x-a)(x-b)$ then the remainder is $\frac{(x-a) f(b)-(x-b) f(a)}{b-a}$ MY APPROACH:- Let $Q(x)$ be the quotient so that: $(x-a)(x-b)Q(x)+{ Remainder }=f(x)$ L.H.S, $(x-a)(x-b) \cdot Q(x)+ \cfrac{(x-a)f(b)-(x-b) f(a)}{b-a}$ So if we take $x=a$ : $ \text { Remainder }=\left.f(x)\right|_{x=a} $ $0+\frac{0-(x-b) f(b)}{(b-a)}=f(a) \\ f(b)=\frac{-(b-a) f(a)}{(x-b)}=\frac{(a-b){f}(a)}{x-b}$ again if we take $x=b,$ then $0+\frac{(x-a) f(b)}{b-a}=f(b)$ Substituting, $f(b)$ value in the equation then $\frac{(x-a) f(b)}{b-a}=\frac{(a-b) f(a)}{x-b}$ AI: I think you have the right idea but have not set out your argument very clearly; it looks as if you assume your conclusion about half way through. A better layout might be: When dividing $f(x)$ by $(x-a)(x-b)$ you obtain, $$ f(x) = (x-a)(x-b) q(x) + r(x) $$ where $r(x)$ is a polynomial of degree one or less. Thus $r(x) = \alpha x + \beta$ for values $\alpha$ and $\beta$. Then substitute the values $x=a$ and $x=b$ in turn to derive, $$ f(a) = r(a) = \alpha a + \beta, \quad f(b) = r(b) = \alpha b + \beta. $$ You can now solve for $\alpha$ and $\beta$ giving the result you wanted, $$\alpha =\frac{f(b)-f(a)}{b-a} \text{ and } \beta = \frac{bf(a) - af(b)}{b-a}. $$
H: Is $\frac{\cos(\frac{1}{z})}{z^2}$ meromorphic or not? My professor used the Cauchy Residue Theorem to evaluate the path integral (along the positively-oriented unit circle about the origin with winding number 3) $$\int_{\gamma}\frac{\cos(\frac{1}{z})}{z^2}$$ His reasoning is that, when expanded into series, $$\operatorname{res}(f,0) = 0.$$ However, I don't see how the integrand is meromorphic, since I think it has an essential singularity at the origin. When I did the integral, I just used the fact that the integrand has a primitive on an annulus about the origin and thus the integral must be zero. Is my professor wrong to use the Cauchy Residue Theorem, on the grounds that it only works for meromorphic functions? Thanks in advance for any help! AI: You are both right! Your proof is sound and elegant. But your professor is also correct. The learning moment here is that the Cauchy residue theorem (and indeed the existence of residues themselves) does not require the function to be meromorphic. It works for any function that is holomorphic outside of isolated singularity points. This is a great moment to go back to the proof that a contour integral over a small circle going once around an isolated singularity $z_0$ equals $2\pi i$ times the residue of the integrand, that is, $2\pi i$ times the coefficient of $(z-z_0)^{-1}$ in the Laurent expansion of the integrand. The proof literally integrates the Laurent series term by term, and after the change of variables $z=z_0+\varepsilon e^{i\theta}$, the resulting series of integrals vanishes completely except for the residue term. In addition to reinforcing why the residue arises in the first place, revisiting this proof will confirm that it never required the Laurent series to be finite in the negative-exponent direction (which is equivalent to meromorphy).
H: How would you prove that polynomial functions are not exponential? Here is one proof that I know, but I am not totally sure if it is acceptable. Exponential functions stay exponential no matter how many times you differentiate them: e.g. $f(x)=e^x$ has first derivative $f'(x)= e^x$, second derivative $f''(x)= e^x$, third derivative $f'''(x)=e^x$, and so on. Now differentiate a polynomial function, say $f(x)= x^5$: first derivative $f'(x)= 5x^4$, second derivative $f''(x)= 20x^3$, third derivative $f'''(x)=60x^2$, fourth derivative $f^{(4)}(x)=120x$, fifth derivative $f^{(5)}(x)= 120$, sixth derivative $f^{(6)}(x)=0$. Like this, every polynomial eventually gets differentiated down to a constant and then to zero. This proves that polynomials are not exponential. Is my proof OK? I want more alternate proofs and a brief explanation about this one. AI: Your proof is correct. You can also say that $\lim_{x\to-\infty}e^x=0$, whereas you have $$\lim_{x\to-\infty}P(x)=\pm\infty$$ if $P$ is a non-constant polynomial function. And, clearly, the exponential function is not constant.
H: Show that $23a^2$ is not the sum of 3 squares. I know that Legendre's theorem states that a number is expressible as a sum of 3 squares iff. it's not of the form $4^x (8m+7)$, so I need to show that $23a^2$ is of this form, how could I go about doing this? AI: Note that $4^x(8m + 7)$ is a product of two terms: a power of $4$, and the remaining odd part. This motivates assuming $a$ to be of the form $2^xr$ where $r \ge 1$ is odd and $x \ge 0$. (Both $r$ and $x$ are integers.) Note that every integer can indeed be written in the above form (in a unique manner). Now, we get that $a^2 = 4^xr$. This is promising because we have gotten a $4^x$ term. This shows that $$23a^2 = 4^x(23r^2).$$ Now, we need to show that $23r^2$ is of the form $8m + 7$. Note that $23 = 8\cdot2 + 7$. So, if we can show that $r^2$ is of the form $8k + 1$, then we would be done. This can be done easily by exhaustion. Since $r$ is odd, there are only the following possibilities for $r$: $r$ is of one of the following forms: $8k + 1$ $8k + 3$ $8k + 5$ $8k + 7$ You can square each and verify that $r^2$ is always of the form $8k + 1$. Thus, $23a^2$ further simplifies as $$\begin{align}23a^2 &= 4^x(23r^2)\\ &=4^x(23(8k+1))\\ &=4^x(23\cdot8k + 16 + 7)\\ &=4^x((23k + 2)\cdot8 + 7)\\ &= 4^x(8m + 7),\end{align}$$ as desired.
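As a sanity check, here is a small brute-force Python sketch (searching over all triples of squares, zeros allowed, for modest values of $a$):

def is_sum_of_three_squares(n):
    squares = [i * i for i in range(int(n ** 0.5) + 1)]   # includes 0
    sq = set(squares)
    return any(n - a - b in sq for a in squares for b in squares if a + b <= n)

print(any(is_sum_of_three_squares(23 * a * a) for a in range(1, 40)))  # False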
H: How to determine the limits of $\cos(x/3) / \cos(x)$ over the domain $x \in [0, \pi/2]$? I'm trying to determine the range of $y = \frac{\cos(x / 3)}{\cos(x)}$ over the domain $x \in [0, \pi / 2]$. This is my attempt: $$ \lim_{x \rightarrow 0} \frac{\cos(x / 3)}{\cos(x)} = \frac{\cos(0 / 3)}{\cos(0)} = 1 $$ and $$ \lim_{x \rightarrow \pi / 2} \frac{\cos(x / 3)}{\cos(x)} = \lim_{x \rightarrow \pi / 2} \frac{\sin(x / 3)}{\sin(x)}{1 \over 3} = \frac{\sin(\pi / 6)}{\sin(\pi / 2)}{1 \over 3} = 1 / 6 $$ (where L'Hopital's rule is used in the second equation) So this gives $1/6 < y < 1$. However, the graph of $y$ shows that $y$ ranges from $1$ to $\infty$ over the domain $x \in [0, \pi/2]$. Where have I gone wrong in my attempt? AI: You can't use De L'Hopital theorem because the limit should be in the form $\frac{0}{0}$ or $\frac{\infty}{\infty}$. By direct evaluation we get $$ \lim_{x \to \tfrac{\pi}{2}^-} \frac{\cos{(x/3)}}{\cos x} =\frac{\cos{(\pi/6)}}{0^+}=+\infty $$
H: How can I represent the $-3 Re(z) - 6 Im(z) \geq -2$ and $Re(z) > 2$ on a complex number plane? How can I represent $-3 \Re(z) - 6 \Im(z) \geq -2$ and $\Re(z) > 2$ on the complex number plane? Is it just $-3 - 6 \cdot i \geq -2$ and $x >2$? AI: On the complex plane, a plot of $x$ and $y$ is made, where the complex number $z$ is written as $z=x+iy \ \ \ (x,y \in \mathbb{R})$. So $\Re(z)=x$ and $\Im(z)=y$. Thus the inequalities are: $$-3x-6y \geq -2$$ $$x > 2$$ These regions are easy to interpret in a Cartesian system. Here is a link to the Desmos page of the graph.
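If you prefer Python to Desmos, here is a hedged sketch (assuming numpy and matplotlib are available) that shades the set where both constraints hold:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 5, 400)
y = np.linspace(-3, 3, 400)
X, Y = np.meshgrid(x, y)
region = (-3 * X - 6 * Y >= -2) & (X > 2)   # both inequalities at once

plt.contourf(X, Y, region.astype(float), levels=[0.5, 1.5])
plt.xlabel("Re(z)")
plt.ylabel("Im(z)")
plt.show()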
H: How many options there are for $n$ people to shake hands exactly $r$ times? Find how many options there are for $n$ people to shake hands exactly $r$ times while: The same pair of people can't shake hands more than once Order of hand shakes does not matter So the solution I thought about is ordering all people, then first deciding who the first person shakes hand with which is $2^{n-1}$ options, then who the second person shakes hands with (all options except the first person who we already counted) and so on, so in total we get $2^{\sum_{i=1}^{n}(n-i)}$ options, so the solution is $\binom{2^{\sum_{i=1}^{n}(n-i)}}{r}$. I was wondering if there is a more elegant solution without summation. Also would be nice to confirm my solution is not wrong in some way. AI: You seem to be counting subsets and then choosing $r$ of the subsets. But the task is not to choose $r$ subsets but to choose $r$ pairs. The solution is actually quite straightforward. There are $\binom n2$ unordered pairs of people, and the $r$ of these pairs who shake hands can be chosen in $\binom{\binom n2}r$ ways.
H: For a given symmetric and positive definite matrix M, find matrix C which fulfills CC^T = M and C^TC = D D is the diagonal matrix containing the eigenvalues of M. Is it solvable and if so, how? Thanks in advance! AI: If $C$ is a square matrix, the equation is always solvable. Let $M=QDQ^T$ be an orthogonal diagonalisation. Then $C=QD^{1/2}$ is a solution, where $D^{1/2}$ denotes the entrywise square root of $D$.
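The construction is easy to verify numerically (a numpy sketch on a random symmetric PD matrix; the variable names are mine):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
M = B @ B.T + 4 * np.eye(4)        # a random symmetric positive definite matrix

w, Q = np.linalg.eigh(M)           # M = Q diag(w) Q^T, Q orthogonal
C = Q @ np.diag(np.sqrt(w))        # C = Q D^{1/2}

print(np.allclose(C @ C.T, M))            # True
print(np.allclose(C.T @ C, np.diag(w)))   # True: C^T C is the eigenvalue matrix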
H: Eliminating $y$ from the system $cx − sy = 2$ and $sx + cy = 1$, where $c=\cos\theta$, $s=\sin\theta$ We will write $c = \cos\theta$ and $s = \sin\theta$ for ease of notation. Eliminate $y$ from the simultaneous equations $$\begin{align} cx − sy = 2 \\ sx + cy = 1 \end{align}$$ How could you eliminate $y$ from these equations? I have no idea where to start. Thank you. Also, how does this prove that the system is solvable for all values of $\sin\theta$ and $\cos\theta$? AI: Multiply the first equation by $c$ to obtain $c^2x-scy=2c$, and the second equation by $s$ to get $s^2x+scy=s$. Now add the two equations term by term to obtain $(c^2+s^2)x=2c+s$. Using the trigonometric identity $\sin^2\theta+\cos^2\theta=1$, you see that $x=2c+s$. Then use one of the original equations, substitute your value for $x$, and see that, after simplification, $y=c-2s$. Since $c^2+s^2=1$ never vanishes, $x$ and $y$ are defined $\forall \theta$. Hope this helps.
H: Converse to a proposition on homogeneous polynomials I know that for a homogeneous polynomial $P$, if $P(x_1, ... , x_n) = 0$, then $P(ax_1, ..., ax_n) = 0$ for every $a$ in the field of $P$. Is the converse of this proposition true? That is, if $P(x_1, ... , x_n) =0$ implies $P(ax_1, ..., ax_n) = 0$ for every $a$ in the field of $P$, is $P$ homogeneous? AI: We are assuming that we are working over a field of characteristic $0$. Let $(x_1,x_2,\ldots,x_n)$ be a root of $P$. Then consider the one-variable polynomial $Q(t)=P(tx_1,tx_2,\ldots,tx_n)$. By hypothesis $Q(a)=0$ for every $a$ in the field, and since the field is infinite, this forces all the coefficients of $Q$ to be $0$. What are the coefficients of $Q$? Write $P(x_1,x_2,\ldots,x_n)$ as a sum of monomials and proceed. I think it will work. The coefficient of $t^d$ in $Q(t)$ is in fact a homogeneous polynomial of degree $d$ in the variables $X_1,X_2,\ldots,X_n$, evaluated at the root. Indeed, $P(X_1,X_2,\ldots,X_n)$ can be represented as $$\sum_{j=0}^{m}R_j(X_1,X_2,\ldots,X_n),$$ where $m$ is the degree of $P$, each $R_j$ is a homogeneous polynomial of degree $j$, and $R_m\neq0$; the conclusion above is that $R_j(x_1,x_2,\dots,x_n)=0$ for all $0\leq j\leq m$, i.e. every homogeneous component of $P$ vanishes at every root of $P$.
H: Integration by parts for evaluating a differential equation Hi, I am trying to evaluate an integral and I am hoping for some assistance: $$\int \frac{9x^2}{(x^6+9)} dx $$ I have rewritten the problem as shown below $$\int 9x^2(x^6+9)^{-1}dx$$ Therefore I attempted the question by using integration by parts $$\int udv = uv- \int vdu$$ $$u = (x^6+9)^{-1}$$ $$du = -6x^5(x^6+9)^{-2} dx$$ $$dv = 9x^2dx$$ $$v = 3x^3$$ Therefore substituting into the integration by parts formula we get $$\int 9x^2(x^6+9)^{-1} dx = 3x^3(x^6+1)^{-1} - \int -18x^8(x^6+9)^{-2} dx$$ $$\int 9x^2(x^6+9)^{-1} dx = 3x^3(x^6+1)^{-1} - \int 3x^3 \frac{-6x^5}{(x^6+9)^2}dx$$ Then I tried using integration by substitution here $$let\ u = x^6+9$$ $$\frac{du}{dx} = 6x^5$$ $$\frac{du}{6x^5} = dx$$ $$\int 9x^2(x^6+1)^{-1} dx = 3x^3(x^6+1)^{-1} - \int \frac{-3x^3}{u^2}du$$ This is where I have reached in terms of evaluating the problem. When making the substitution I have gotten a $3x^3$ within the integral; I am hoping someone can help evaluate this integral. Now I am working on Picard's Method of successive approximations $$y_1 = y_0 + \int f(x,y_0) dx$$ The ODE given was $$y' = \frac{x^2}{y^2+1}$$ $$x_0 = 0, y_0 =0$$ Therefore the first approximation is given by the following $$y_1 = y_0 + \int f(x,y_0)dx$$ Therefore $$y_1 = 0 + \int \frac{x^2}{0+1} dx$$ $$y_1 = \frac{x^3}{3}$$ Second approximation $$y_2 = y_0 + \int f(x,y_1)dx$$ Therefore we get the following $$y_2 = 0 + \int \frac{x^2}{\frac{x^6}{9}+1} dx$$ and this is why I am trying to evaluate the integral, but the solution which I am seeing is in the form $$y = \frac{x^3}{3}+\frac{-x^9}{81}$$ and I am trying to see how they arrived at this answer. AI: $$I=\int \frac{9x^2}{(x^6+9)} dx$$ $$I=\int \frac{3dx^3}{(x^3)^2+9} =\int \frac{dx^3/3}{(x^3/3)^2+1} $$ $$I=\int \frac{du}{u^2+1} $$ Where $u=\dfrac {x^3}3$. Then use the $\arctan $ function. $$I=\arctan u +C=\arctan \left(\dfrac {x^3}3 \right)+C $$ As for the book's answer: expand $\arctan u = u - \frac{u^3}{3}+\cdots$ to get $$\arctan\left(\frac{x^3}{3}\right) = \frac{x^3}{3}-\frac{1}{3}\left(\frac{x^3}{3}\right)^3+\cdots = \frac{x^3}{3}-\frac{x^9}{81}+\cdots$$ Equivalently, the book expands the integrand as a geometric series, $\frac{x^2}{x^6/9+1}=x^2\left(1-\frac{x^6}{9}+\cdots\right)$, integrates term by term, and truncates after two terms; either way one gets $y_2 = \frac{x^3}{3}-\frac{x^9}{81}+\cdots$
H: Prove that $\lim_{n \to \infty}\frac{\sin\left(\frac{1}{\sqrt{n+1}+\sqrt{n}}\right)}{\frac{1}{\sqrt{n}}} = L, \quad L \in R$ In a question the answers say that: $$ \lim_{n \to \infty}\frac{\sin\left(\frac{1}{\sqrt{n+1}+\sqrt{n}}\right)}{\frac{1}{\sqrt{n}}} = L, \quad L \in \mathbb{R} $$ How? AI: Multiply numerator and denominator by the argument of $\sin$ and use the limit $\lim_{x \to 0} \frac{\sin x }{x}=1$: $$\frac{\sin\left(\frac{1}{\sqrt{n+1}+\sqrt{n}}\right)}{\frac{1}{\sqrt{n}}} = \frac{\sin\left(\frac{1}{\sqrt{n+1}+\sqrt{n}}\right)}{\frac{1}{\sqrt{n+1}+\sqrt{n}}}\cdot\frac{\sqrt{n}}{\sqrt{n+1}+\sqrt{n}}.$$ The first factor tends to $1$ and the second tends to $\frac{1}{2}$, so $L=\frac12$.
H: Uniform convergence of $f_n(z)=nz^n$ in the set $|z|<\frac{1}{2}$ In an exercise I have to prove that $f_n(z)=nz^n$ converges uniformly for $|z|<\frac{1}{2}$. So I have to prove that: $$\forall \varepsilon>0, \exists N \in \mathbb{N}:|nz^n-f(z)|<\varepsilon\ \ \ \text{if } n\geq N$$ My question is, how can I find that $f(z)$? I've tried calculating $\lim_n nz^n$ but I got stuck. How can I evaluate this limit? AI: Since you're taking the limit with respect to $n$, treat $z$ as a constant. To make things easier while taking powers, consider the polar form: $$\lim_{n\rightarrow\infty}nz^n=\lim_{n\rightarrow\infty}n\underbrace{\left|z\right|^n}_{\leq\frac{1}{2^n}}e^{in\arg(z)}$$ So the magnitude of the terms goes to zero, and there's only one complex number with magnitude zero: $f\equiv 0$. Moreover, the bound $|nz^n|\le \frac{n}{2^n}$ holds simultaneously for every $|z|<\frac12$, and $\frac{n}{2^n}\to 0$, so $\sup_{|z|<1/2}|nz^n-0|\to 0$: the convergence is uniform.
H: Is this a characterization of the resolvent? I am trying to understand a statement that is in some notes that I am reading right now. It is the following. "Let $T$ be a bounded, self-adjoint operator, $\eta\in\mathbb{R}, \eta\neq 0$ and let $H$ be an Hilbert space. It can easily proved that $(T-i\eta)^{-1}$ is bounded from $H$ to itself. Hence it is the resolvent of $T$." I don't understand why the boundeness of $(T-i\eta)^{-1}$ implies that it is the resolvent of $T$. It is a sort of characterization? I searched something on the web, but I didn't find anything. Could anyone please help? Also some references will be well accepted. Thank you in advance! AI: If $T$ is bounded and self-adjoint, then $\sigma(T)\subseteq\mathbb{R}$ (see Spectrum of self-adjoint operator on Hilbert space real). However, the spectrum is defined as a subset of $\mathbb{C}.$ Taking complements, you get that $\mathbb{C}\setminus\mathbb{R}\subseteq \rho(T).$ That is, if $\eta\in\mathbb{R},$ then $i\eta\in\rho(T).$ Thus, the resolvent $R_\eta:=(T-i\eta)^{-1}$ is defined, and it is bounded by e.g. the bounded inverse theorem. If $(T-i\eta)^{-1}$ exists, then it means that $i\eta$ is in the resolvent set of $T$, and $(T-i\eta)^{-1}$ is called the resolvent of $T$. It is the definition of the resolvent.
H: Show that $(x_n)^{\infty}_{n=1}$ converges. Let $(X, d)$ be a complete metric space and $(x_n)^{\infty}_{n=1}$ a sequence such that $d(x_{n+1},x_n) \leq \alpha d(x_n, x_{n-1})$ for some $0<\alpha<1$ and for all $n\geq 2$. Show that $(x_n)^{\infty}_{n=1}$ converges. I've been dealing with this problem without success; any suggestions would be great! AI: Observe that $$d(x_{n+1},x_n) \leq \alpha d(x_n, x_{n-1})\le \alpha^2 d(x_{n-1}, x_{n-2})\le \cdots \le \alpha^{n-1} d(x_2,x_1).$$ Take $m>n$ and use the triangle inequality: $$d(x_m,x_n)\le d(x_m,x_{m-1})+d(x_{m-1},x_{m-2})+\cdots +d(x_{n+1},x_{n})\le\sum_{k=n-1}^{m-2}\alpha^k d(x_2,x_1).$$ Now, $\sum_{k\ge 0}\alpha^k$ converges, so $\sum_{k\ge 0}\alpha^k d(x_2,x_1)$ converges and its tails tend to $0$. Therefore, for every $\epsilon>0$ there exists $N \in \mathbb N$ such that for all $m>n>N$ we have $$\sum_{k=n-1}^{m-2}\alpha^k d(x_2,x_1)<\epsilon\implies d(x_m,x_n)<\epsilon.$$ Hence $(x_n)^{\infty}_{n=1}$ is a Cauchy sequence, and by completeness of $(X,d)$, $(x_n)^{\infty}_{n=1}$ converges.
H: How many ways can we choose a team of $16$ people with $1$ leader and $4$ deputies out of $75$ people? How many ways can we choose a team of 16 people with 1 leader and 4 deputies out of 75 people? I figured that we can select the 16 people out of the 75 simply with $75\choose 16$ but then should i continue multiplying with $16\choose 4$ and $12\choose 1$ Final : $75\choose 16$ * $16\choose 4$ * $12\choose 1$ AI: There is more than way to enumerate the possibilities. For example, your way is to first identify the $16$ team members, then within those, identify the deputies and the leader. This yields two expressions, $$\binom{75}{16} \binom{16}{4} \binom{12}{1},$$ or you could do $$\binom{75}{16}\binom{16}{1}\binom{15}{4}.$$ These are equal. Alternatively, you can reason as follows. Select the $1$ leader from the $75$ people; then select the $4$ deputies from the remaining $74$; then select the $11$ remaining members of the team to get $$\binom{75}{1}\binom{74}{4}\binom{70}{11}.$$ In fact, there are six different ways you can do this type of enumeration, depending on the order in which you select the regular team members, the deputies, and the leader. They are all equivalent. Why? Well, in the general, case, say with $n$ people, $t$ regular team members, $d$ deputies, and $r$ leaders, where $t+d+r \le n$ represents the total number of team members (regulars, deputies, and leaders), we have $$\binom{n}{t}\binom{n-t}{d}\binom{n-t-d}{r} = \frac{n!}{t! d! r! (n-(t+d+r))!}$$ and it becomes obvious that this expression is symmetric in $t, d, r$. The RHS expression is known as a multinomial coefficient and is sometimes written $$\binom{n}{t, d, r}.$$
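The equality of the two enumerations is trivial to confirm (a one-line Python check using the standard library):

from math import comb

print(comb(75, 16) * comb(16, 4) * comb(12, 1)
      == comb(75, 1) * comb(74, 4) * comb(70, 11))   # True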
H: Prove that $\sum (-1)^n \sin(\sqrt{n+1}-\sqrt{n})$ converges by Leibniz Prove that $\sum_{n=1}^{\infty}(-1)^n\sin\left(\sqrt{n+1}-\sqrt{n}\right)$ converges by Leibniz. The answer says that $\sin$ is continuous, monotonic around $0$ and the limit there is $0$, therefore the series conditionally converges by the Leibniz test. I don't understand, Leibniz says that we need an $a_n$ which is monotonic decreasing to $0$, How they took this sentence and say that just around $0$ it is ok and enough for the proof? AI: The point is you're looking at the magnitude $\sin\left(\sqrt{n+1}-\sqrt{n}\right)$ as $n\rightarrow\infty$. Take a look at what happens to $\sqrt{n+1}-\sqrt{n}$ - it monotonically decreases to $0$. That means for large enough $n$, the argument to $\sin$ is close to $0$. At this point, since $\sin$ is itself monotonic around $0$, applying a monotonic function to a monotonic sequence leaves it monotonic.
H: Geometry problem involving a cyclic quadrilateral and power of a point theorem? A convex cyclic quadrilateral $ABCD$ is inscribed in circle $O$. $AB,CD$ intersect at $E$, $AD,BC$ intersect at $F$. Diagonals $AC, BD$ intersect at $X$. $M$ is the midpoint of $EF$. $Y$ is the midpoint of $XM$. Circle $Y$ with diameter $XM$ intersects circle $O$ at $P,Q$. Prove that $PY$, $QY$ are tangent to circle $O$. It looks like a pretty interesting problem that could be solved by the power of a point theorem, because there are a lot of line segments we can use to compute. But I didn't get very far. AI: Below is a "full" solution to the problem. A hint, if you don't feel quite ready to see the solution yet, is to consider the Miquel point of the complete quadrilateral $ABCD$. Let points $A,B,C,D,E,F,M,P,Q,X,Y$ be defined as in the question. Define $\gamma$ to be the circumcircle of $ABCD$, and redefine $O$ to be its centre. Let the circle with centre $Y$ through $X$ be called $\omega$. Lemma 1 (Miquel Point of Cyclic Quadrilaterals): Let $Z$ be the intersection of lines $OX$ and $EF$. Then $Z$ is the Miquel point of the complete quadrilateral $ABCD$. In particular, $Z$ is the image of $X$ under inversion with respect to $\gamma$. Lemma 2 ($EFX$ is self-polar with respect to $\gamma$): $X$ and $Z$ lie on the normal line from $O$ to $EF$. These two facts turn out to be sufficient to explain the tangency in the question, as follows: By Lemma 2, $\angle MZX=\angle MZO = \pi/2$. Therefore, since $MX$ is a diameter in $\omega$, $Z$ lies on $\omega$ by the converse of Thales' theorem. But now Lemma 1 tells us that $X$ and $Z$ are inverse images under inversion in $\gamma$, implying that $|OX||OZ|=r^2$, where $r$ is the radius of $\gamma$. Hence the power of $O$ with respect to $\omega$ is $r^2$. Now suppose a tangent to $\omega$ through $O$ intersects $\omega$ at $T$. Then by power of a point $|OT|^2=r^2$, so $T\in\gamma$. But $T\in\omega$ by assumption, so $T=P$ or $T=Q$. Said another way: the tangents from $O$ to $\omega$ are exactly the lines $OP$ and $OQ$. Now you may realise that this is rather similar to what we want to prove, that is, that the tangents from $Y$ to $\gamma$ are the lines $YP$ and $YQ$. It turns out that these two statements are actually equivalent. You may try to prove it yourself. The configuration is called orthogonal circles. At any rate, this explains the problem posed in the original post. All of the concepts/lemmas that I have used are defined/proven thoroughly in chapters 8 to 10 of Evan Chen's Euclidean Geometry in Mathematical Olympiads. It is a good introduction to many of the more advanced techniques used in olympiad geometry, and I wholeheartedly recommend it if you are preparing for maths olympiads.
H: Sum of two cubes equal to prime square If $a,b\in \mathbb{N}$, find all primes $p$ such that $a^3+b^3=p^2$. My approach: $a^3+b^3=(a+b)(a^2-ab+b^2)=p^2$. Suppose $a+b=x$ and $a^2-ab+b^2=y$; then there are two cases: $(x,y)=(p^2,1)$ or $(x,y)=(p,p)$. Now I am struggling with case 2, where $(x,y)=(p,p)$.
AI: In the case $a+b=a^2-ab+b^2$, we have $a^2+b^2=ab+a+b$, i.e. $a^2+b^2=(a+1)b+a$. Without loss of generality, assume $a\leq b$ (the equation is symmetric in $a,b$). We cannot have $a=b$: then $a+b=2a$ is prime only for $a=1$, and $a=b=1$ gives $a^2-ab+b^2=1\neq 2$. If $b>a+1$, then $a^2\geq a$ and $b^2>(a+1)b$, so $a^2+b^2>(a+1)b+a$ and there is no solution. Hence any solution in this case must have $b=a+1$. Making that substitution gives $a^2+(a+1)^2=(a+1)a+a+(a+1)$, i.e. $2a^2+2a+1=a^2+3a+1$, so $a^2=a$, giving $a=0$ or $a=1$; but $a=0$ would make $a+b=1$, which is not prime. So the only solution is $a=1$, $b=2$, which gives $1^3+2^3=9=3^2$, i.e. $p=3$.
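A quick brute-force check (a sketch; the search bound $1000$ is arbitrary) confirms that no other small solutions exist:

```python
from math import isqrt

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

# Search for a^3 + b^3 = p^2 with p prime, over a small range with a <= b.
hits = []
for a in range(1, 1000):
    for b in range(a, 1000):
        s = a**3 + b**3
        r = isqrt(s)
        if r * r == s and is_prime(r):
            hits.append((a, b, r))
print(hits)  # [(1, 2, 3)]  ->  1^3 + 2^3 = 9 = 3^2
```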
H: Pullback Topology Let $f:X\to Y$ be a bijection and $Y$ a topological space. Let $T_X \triangleq \left\{ f^{-1}[U]:\, U \mbox{ open in } Y \right\}$. Is $T_X$ a topology on $X$, and if so, is $X$ with it homeomorphic to $Y$?
AI: It is indeed a topology. $X,\emptyset \in T_X$, since $f^{-1}(\emptyset)=\emptyset$ and $f^{-1}(Y)=X$. If $U_i\in T_X$ for $i\in I$, then $U_i=f^{-1}(V_i)$ with $V_i\subseteq Y$ open. Note that
$$\bigcup_i U_i=\bigcup_i f^{-1}(V_i) = f^{-1}\Big(\bigcup_i V_i\Big) =f^{-1}(V),$$
where $V=\bigcup_i V_i$ is open since $Y$ is a topological space, so the union is in $T_X$. If $U_1,U_2\in T_X$, then $U_1=f^{-1}(V_1)$ and $U_2=f^{-1}(V_2)$, and $f^{-1}(V_1) \cap f^{-1}(V_2)= f^{-1}(V_1\cap V_2)=f^{-1}(V)$, where $V=V_1\cap V_2$ is open (by the definition of a topology), so the finite intersection is in $T_X$.

Moreover, $f$ is a homeomorphism between $X$ and $Y$. It is a bijection by assumption. It is continuous, since $T_X$ is defined precisely so that the preimage of every open set of $Y$ is open in $X$. It is also an open map: every open set of $X$ has the form $f^{-1}(V)$ with $V\subseteq Y$ open, and $f(f^{-1}(V))=V$ because $f$ is surjective. So overall, it is a homeomorphism.
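A toy illustration on finite sets (a sketch; the particular sets, topology, and bijection here are made up for the example):

```python
# Pull back a topology on Y = {1, 2, 3} along a bijection f : X -> Y.
Y_topology = [set(), {1}, {1, 2}, {1, 2, 3}]   # a topology on Y
X = {'a', 'b', 'c'}
f = {'a': 1, 'b': 2, 'c': 3}                   # a bijection X -> Y

def preimage(U):
    # f^{-1}[U] = {x in X : f(x) in U}
    return {x for x in X if f[x] in U}

X_topology = [preimage(U) for U in Y_topology]
print(X_topology)  # [set(), {'a'}, {'a','b'}, {'a','b','c'}] (up to ordering)
```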
H: Splitting field $L$ of polynomial $f \in K[x]$ with degree $n$ satisfies $[L:K] \mid n!$ Suppose $f \in K[x]$ is a polynomial of degree $n$, with $f = (x-\alpha_1)\cdots(x-\alpha_n)$ over the algebraic closure. Let $L=K(\alpha_1,\ldots,\alpha_n)$ be the splitting field of $f$. Prove that $[L:K]$ divides $n!$. I was able to prove it for the case where $f$ is separable. In this case, $L/K$ is a Galois extension and therefore $[L:K] = |\operatorname{Gal}(L/K)|$, and since $\operatorname{Gal}(L/K)$ is isomorphic to a subgroup of $S_n$, the result follows. How do I prove it in the general case?
AI: Proceed by strong induction on $[L:K]$ (over all base fields at once). If $[L:K] = 1$ then the claim is trivial, so assume $[L:K] > 1$. We consider separately the cases where $f$ is irreducible and where it is not.

Suppose first that $f$ is irreducible. Let $\alpha\in L$ be a root of $f$. Then $f$ is the minimal polynomial of $\alpha$ over $K$, so $[K(\alpha):K] = n$. Moreover $L$ is the splitting field of $g(x) = f(x)/(x - \alpha)\in K(\alpha)[x]$ over $K(\alpha)$, and $[L : K(\alpha)] = [L:K]/n < [L :K]$, so by induction $[L : K(\alpha)] \mid (n-1)!$ (as $\deg g = n - 1$). By the tower law, $[L:K] = n\cdot[L:K(\alpha)]$, which divides $n\cdot(n-1)! = n!$.

Suppose now that $f$ is not irreducible. Since $[L:K] > 1$, $f$ has an irreducible factor $p$ of degree $k \geq 2$ (if every irreducible factor were linear, $f$ would split over $K$ and $L$ would equal $K$); write $f = pg$. If $L$ happens to be a splitting field of $p$ over $K$, the argument of the previous paragraph applied to $p$ gives $[L:K] \mid k!$, and $k! \mid n!$. Otherwise, let $M$ be the subfield of $L$ generated over $K$ by the roots of $p$, so that $K \subsetneq M \subsetneq L$ (the first inclusion is strict because $p$, being irreducible of degree $\geq 2$, has no root in $K$). Then $M$ is a splitting field of $p$ over $K$, and $L$ is a splitting field of $g$ over $M$. Both $[M:K]$ and $[L:M]$ are strictly smaller than $[L:K]$, so by induction $[M:K] \mid k!$ and $[L:M] \mid (\deg g)! = (n-k)!$. Hence
$$ [L:K] = [L:M][M:K] \mid k!\,(n-k)!, $$
and the result follows because $k!\,(n-k)! \mid n!$ (which is true because the binomial coefficient $\binom{n}{k}$ is an integer).
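As a sanity check on the statement (standard examples, not part of the proof): for $f = x^3-2$ over $\mathbb{Q}$ the splitting field is $\mathbb{Q}(\sqrt[3]{2},\zeta_3)$ and the bound is attained, while for $f = x^4+1$ the splitting field is the cyclotomic field $\mathbb{Q}(\zeta_8)$, of degree $\varphi(8)=4$, so the divisibility is far from tight:
$$[\mathbb{Q}(\sqrt[3]{2},\zeta_3):\mathbb{Q}] = 2\cdot 3 = 6 = 3!, \qquad [\mathbb{Q}(\zeta_8):\mathbb{Q}] = 4 \mid 24 = 4!.$$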
H: Examples of non-zero-measure meagre sets I know the Cantor set and the rational numbers in $\mathbb{R}$ are meagre, but they both have measure zero. Is there a meagre set of non-zero measure?
AI: You should read about fat Cantor sets (also called Smith–Volterra–Cantor sets): they are closed and nowhere dense — hence meagre — yet have positive Lebesgue measure.
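For a concrete instance: build a subset of $[0,1]$ as with the usual Cantor set, but at step $n$ remove from each of the $2^{n-1}$ remaining closed intervals an open middle interval of length $4^{-n}$. The total length removed is
$$\sum_{n=1}^{\infty} 2^{n-1}\cdot 4^{-n} = \frac{1}{2}\sum_{n=1}^{\infty} 2^{-n} = \frac{1}{2},$$
so the remaining set is compact, nowhere dense (it contains no interval, since the lengths of the surviving intervals tend to $0$), and has Lebesgue measure $\tfrac{1}{2}$.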
H: The module $\text{Hom}_C(E,F)$ of two finitely generated projective $C$-modules Let $C$ be a commutative ring and $E,F$ two finitely generated projective $C$-modules. Then $\text{Hom}_C(E,F)$ is a finitely generated projective $C$-module. First of all, since $C$ is commutative, the abelian group $\text{Hom}_C(E,F)$ is a $C$-module. As $E,F$ are finitely generated projective $C$-modules, there exist free $C$-modules $M,N$ with finite bases such that $E,F$ are isomorphic, respectively, to direct summands of $M,N$. So, let $R_1,R_2$ be complementary submodules of $M$ such that $E\simeq R_1$. On the other hand, let $L_1,L_2$ be complementary submodules of $N$ such that $F\simeq L_1$. Furthermore, $\text{Hom}_C\left(R_1\oplus R_2,L_1\oplus L_2\right)$ is isomorphic to $$\text{Hom}_C(R_1,L_1)\oplus\text{Hom}_C(R_1,L_2)\oplus\text{Hom}_C(R_2,L_1)\oplus\text{Hom}_C(R_2,L_2).$$ Also, $\text{Hom}_C(E,F)\simeq\text{Hom}_C(R_1,L_1)$. I am not sure how to deduce from the above that (i) $\text{Hom}_C(E,F)$ is finitely generated and (ii) $\text{Hom}_C(E,F)$ is projective. Any hints?
AI: Hint: by your direct sum decomposition, $\operatorname{Hom}_C(R_1, L_1)$ is a direct summand of $\operatorname{Hom}_C(M, N)$, which is isomorphic to $C^{m\cdot n}$ if $m$ is the rank of the free module $M$ and $n$ the rank of the free module $N$. Now recall that a direct summand of a finitely generated free module is exactly a finitely generated projective module: projectivity is the direct-summand-of-free condition, and a direct summand of a finitely generated module is finitely generated (being a quotient of it).
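In case it helps to see the middle isomorphism concretely: choosing bases identifies a $C$-linear map $C^m\to C^n$ with an $n\times m$ matrix over $C$, so
$$\operatorname{Hom}_C(C^m, C^n)\;\cong\;\operatorname{M}_{n\times m}(C)\;\cong\;C^{mn},$$
a free $C$-module of rank $mn$; the displayed four-fold decomposition then exhibits $\operatorname{Hom}_C(R_1,L_1)$ as a direct summand of it.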
H: Linear operator on $\ell^{\infty}$ is not surjective – well-definedness of the inverse operator Let $T: \ell^{\infty} \rightarrow \ell^{\infty}$ be given by $Tx = \left(\frac{1}{n} x_n\right)_{n\in \mathbb{N}}$. Claim: $T$ is not surjective. I have been trying to come up with $y \in \ell^{\infty}$ such that there is no $x \in \ell^{\infty}$ with $Tx = y$.
AI: Let $y$ be the constant sequence $1$. If there were $x\in\ell^\infty$ such that $T(x)=y$, then we would have $\frac{1}{n}x_n=1$ for all $n\in\mathbb{N}$ and hence $x_n=n$. But this means $x$ is not a bounded sequence, i.e. not in $\ell^\infty$ — a contradiction. (Note that $T$ is injective, so the inverse $y\mapsto(ny_n)$ is well defined on the range of $T$; the point is that this range is a proper subspace of $\ell^\infty$, and the inverse is unbounded on it.)
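A small numerical illustration of why no bounded preimage exists (a sketch; the truncation lengths are arbitrary):

```python
# The pointwise preimage of y = (1, 1, 1, ...) under T(x) = (x_n / n) is forced
# to be x_n = n, whose sup over the first N coordinates grows without bound.
for N in (10, 1000, 100000):
    x = [n for n in range(1, N + 1)]   # forced by x_n / n = 1
    print(N, max(x))                   # sup norm of the truncation = N
```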