H: Calculating closest and furthest possible diagonal intersections. Please refer to the image attached. It represents a $2D$ grid with the following properties: The grid origin is $(1,1)$ at the top-left. The x coordinates are $1, 2, 3, \ldots$ to infinity. The y coordinates are $1, 2, 3, \ldots$ to infinity. The highlighted cells with red dots represent all coordinates whose x and y values are powers of 2. Each power-of-2 cell has horizontal and vertical lines going through it. These are just visual aids and can be ignored for any calculations. Each power-of-2 cell has diagonal and anti-diagonal lines going through it. These are important because we want to intersect with them from a given $(x,y)$. There is an arbitrary given position $(x,y)$ which does not lie on a power-of-2 cell or any diagonals. In this example the values are $(969,551)$ but this could be any value. The question: Given a coordinate $(x,y)$, I want to calculate a coordinate $(ix,iy)$ that is an intersection point (towards the North only) between $(x,y)$ and $(px, py)$ where both $px$ and $py$ are powers of two. The attached image shows $(x,y)$ to be $(969,551)$ and various surrounding power-of-2 cells. The closest and furthest possible diagonal intersections are marked in the image. Open this image in a new tab to see the full view. What we already know: Given $(x,y)$, we can calculate the surrounding power-of-2 cells by flooring or ceiling the base-2 logarithm of both $x$ and $y$. As an example, $(100,100)$ can be represented as $(2^{6.64},2^{6.64})$. So the nearest power-of-2 cell to the top-left would be $(2^6,2^6)=(64,64)$. What I cannot figure out is which power-of-2 cell to consider when trying to find the closest or furthest intersections. EDIT: I am convinced that the answer lies in elementary geometry but cannot seem to get a grip on it. AI: The lines going downward to the right are of the form $x-y=2^m-2^n$ for some $m,n$.
The ones going up to the right are of the form $x+y=2^m+2^n$ for some $m,n$. Given an input $x,y$ you are asking for the maximum $y' \lt y$ such that either $x-y'=2^m-2^n$ or $x+y'=2^m+2^n$. To find the second: Let $z=x+y$, then set all the low order bits of $z$ to $0$ until you only have two $1$ bits left. This will be the value of $x+y'$. Some pseudocode that will get you there:

z  = x + y
m  = int(log2(z))
z' = z - 2^m
n  = int(log2(z'))
y'2 = 2^m + 2^n - x

For the first, you can round $x-y$ up to the next higher power of $2$ to get $m$, then find the largest $n$ such that $2^m-2^n \gt x-y$:

z  = x - y
m  = int(log2(z)) + 1
z' = 2^m - z
n  = int(log2(z'))
y'1 = x - 2^m + 2^n

and take the greater of the two $y'$ values. Is $x$ always greater than $y$?
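The two computations above can be translated into a short Python sketch (the function name and the brute-force check are my own; bit tricks replace the `log2` calls, and $n$ is taken as the largest exponent with $2^m-2^n > x-y$, which reproduces the worked example):

```python
def closest_north(x: int, y: int) -> int:
    """Largest y' < y at which the vertical ray going north from (x, y)
    meets a diagonal x - y' = 2^m - 2^n or an anti-diagonal
    x + y' = 2^m + 2^n. Assumes x > y and that (x, y) itself lies on
    no such line."""
    # Anti-diagonal: keep the top two set bits of x + y.
    z = x + y
    m = z.bit_length() - 1                 # int(log2(z))
    n = (z - (1 << m)).bit_length() - 1
    y_anti = (1 << m) + (1 << n) - x

    # Diagonal: round x - y up to the next power of 2 to get m,
    # then take the largest n with 2^m - 2^n > x - y.
    z = x - y
    m = z.bit_length()                     # exponent of the next power of 2
    n = ((1 << m) - z).bit_length() - 1
    y_diag = x - (1 << m) + (1 << n)

    return max(y_diag, y_anti)
```

For the example in the question, `closest_north(969, 551)` returns `521`, which lies on the diagonal through $(512, 64)$ since $969 - 521 = 448 = 2^9 - 2^6$.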
H: Munkres Order Topology example question (Section 14 Example 4) In section 14, Munkres introduces the order topology, and gives this example: The set $X = \{1,2\} \times \mathbb{Z}_{+}$ in the dictionary order is an example of an ordered set with a smallest element. Denoting $1 \times n$ by $a_{n}$ and $2 \times n$ by $b_{n}$, we can represent $X$ by $a_{1},a_{2}, \cdots ; b_{1}, b_{2}, \cdots$. The order topology on $X$ is $not$ the discrete topology. Most one-point sets are open, but there is an exception - the one-point set $\{ b_{1}\}$. Any open set containing $b_{1}$ must contain a basis element about $b_{1}$ (by definition), and any basis element containing $b_{1}$ contains points of the $a_{i}$ sequence. I understand that given a set $X$, the discrete topology on $X$ is the collection of $all$ subsets of $X$. And it looks like they're assuming that the topology is generated by a certain basis, and that the "definition" that Munkres is using to justify the first part of the last sentence is the one on page 78, i.e. a subset $U$ of $X$ is open in $X$ (i.e. in the topology) if for each $x \in U$, there is a basis element $B \in \mathcal{B}$ such that $x \in B$ and $B \subset U$, where $\mathcal{B}$ is the basis. The part that's confusing me is the bolded part in the previous paragraph. Why would this imply that $b_{1}$ is not open? Thank you for any help/clarification! Sincerely, Vien AI: As you mentioned, let $\mathcal{B}$ be a basis. $U$ is open if and only if for all $x$ in $U$ there exists a $B \in \mathcal{B}$ such that $x \in B \subset U$. Now for your example, let $U = \{b_1\}$, $x = b_1$. If $\{b_1\}$ were actually open, then there would exist a basis open set $B$ such that $b_1 \in B \subset \{b_1\}$. However all basis open sets containing $b_1$ contain some $a_i$. So it is not possible that $B \subset \{b_1\}$. The definition of being open according to the basis of the topology is not fulfilled, so $\{b_1\}$ cannot be open.
By the way, I interpreted the question as: how does the bold statement imply $\{b_1\}$ is not open? I assumed you know how to prove the bold statement.
H: $C^\infty$ dense in Sobolev spaces I'm reading a theorem which says that $C^\infty$ intersection with $W^{k,p}$ is dense in $W^{k,p}$. I don't understand why they take the intersection. Isn't $C^\infty$ a subspace of $W^{k,p}$? Thanks. AI: You don't say what space the functions are on, but here let's consider functions on $\mathbb{R}$. Let $f: \mathbb{R} \longrightarrow \mathbb{R}$ be given by $f \equiv 1$. Then $f \in C^\infty(\mathbb{R})$ but certainly $f \notin L^p(\mathbb{R})$ for any $p \ge 1$ since $$\left(\int_\mathbb{R} |f|^p \right)^{1/p} = \left(\int_\mathbb{R} 1\right)^{1/p} = \infty.$$ But if $f \in W^{k,p}(\mathbb{R})$, then in particular $f \in L^p(\mathbb{R})$. Therefore we see that it is not necessarily true that every smooth function lies in $W^{k,p}$. $W^{k,p}(\Omega)$ is not the completion of $C^\infty(\Omega)$ for general $\Omega$ since, as was shown above, $C^\infty(\Omega)$ isn't contained in $W^{k,p}(\Omega)$. Define a norm $$\|f\|_{k,p} = \left( \sum_{|\alpha| \leq k} \|D^{\alpha} f\|_p^p \right)^{1/p}$$ and write $$\widetilde{C}^k(\Omega) = \{f \in C^k(\Omega) : \|f\|_{k,p} < \infty\}.$$ Then $W^{k,p}(\Omega)$ is the completion of $(\widetilde{C}^k(\Omega), \|\cdot\|_{k,p})$ as long as $p \in [1, \infty)$. Look up the Meyers-Serrin theorem for more details.
H: Trigonometry- why we need to relate to circles I'm a trigonometry teaching assistant this semester and have a perhaps basic question about the motivation for using the circle in the study of trigonometry. I certainly understand Pythagorean Theorem and all that (I would hope so if I'm a teaching assistant!) but am looking for more clarification on why we need the circle, not so much that we can use it. I'll be more specific- to create an even angle incrementation, it seems unfeasible, for example, to use the line $y = -x+1$, divide it into even increments, and then draw segments to the origin to this lattice of points, because otherwise $\tan(\pi/3)$ would equal $(2/3)/(1/3)=2$. But why mathematically can't we do this? AI: Given your comments, one of the biggest problems with the construction you've offered is that it can't define the trig functions for angles of all real values, or even for every angle between 0 and $2\pi$, since rays from the origin hit your line at angles between $-\frac{\pi}{4}$ and $\frac{3\pi}{4}$. So we'd want to at least propose some other closed curve, say a smooth one since the trig functions are smooth, as the place where we define our functions. Perhaps the simplest reason that the circle is a natural place to define trigonometric functions is that angle is a measure of arc of a circle.
H: Continuity of powers in a Banach algebra. There are some theorems that say when, in a unital C*-algebra $A$, one can deduce that the functional calculus of a continuous function $f$ is continuous as a map from some subset of $A$ to $A$. In the proof of one of these theorems it is assumed that $A \mapsto A^n$ is continuous, and that $A \mapsto p(A, A^*)$ is continuous for any polynomial in $z, \bar z$. I actually suspect that $A \mapsto A^n$ is continuous in a more general setting where the latter statement wouldn't make sense, namely Banach algebras or even normed algebras. Please let me know why this is true, at this or any greater or lesser level of generality, as long as it contains the C*-algebra case. Thanks. Incidentally, I am aware that raising to powers can fail to be continuous for powers more than $1$. If the integer powers give rise to continuous maps as I've conjectured above, I'd appreciate an example that illuminates why real positive powers can be so different. AI: For $A\mapsto A^n$ to be continuous, it is enough for multiplication to be continuous. Since multiplication is continuous in any normed algebra (an immediate consequence of submultiplicativity of the norm), the map is continuous in any normed algebra. To show this, use induction on $n$.
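The estimate behind "multiplication is continuous" is one line; spelling it out (my addition, a standard computation not in the original answer):

```latex
\|xy - x_0 y_0\| = \|x(y - y_0) + (x - x_0)y_0\|
                \le \|x\|\,\|y - y_0\| + \|x - x_0\|\,\|y_0\|
                \longrightarrow 0
\quad \text{as } (x,y) \to (x_0,y_0).
```

Given this, $A \mapsto A^n = A \cdot A^{n-1}$ is continuous by induction on $n$, being a composition of continuous maps.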
H: Why is gradient noise better quality than value noise? I have been reading about the mathematics behind Perlin noise, a gradient noise function often used in computer graphics, from Ken Perlin's presentation and Matt Zucker's FAQ. I understand that each grid point, $X$, has a pseudo-random gradient associated with it, $g(X)$ - just a vector of unit length that appears random. When finding the noise value at a point $P$, for each grid point surrounding it, $Q$, the dot product $g(Q) \cdot (P-Q)$ is found. Then these dot products are interpolated to find the noise value at point $P$. The thing I don't understand, however, is why we use gradients. There is another type of noise function called value noise in which each grid point has a scalar value rather than a gradient. I've seen articles that say gradient noise produces higher quality noise but they don't explain why. I can't seem to visualise how this dot product makes the noise any better quality. What does "quality" even mean here? Why did Ken Perlin decide to use gradients? AI: This is just my guess. In short: Gradient noise leads in general to more visually appealing textures because it cuts low frequencies and emphasizes frequencies around and above the grid spacing. Let's compare a naive value noise procedure with a naive gradient one, for a grayscale image. Value noise: we paint the points in the grid with random values (white noise) and fill the surrounding pixels by linear interpolation. This will look ugly because (among other things) some of the random grid points will happen to have similar values, and then there will be large spots with nearly uniform color (low frequency). [*] Specifically, the pixel values in the neighborhood of a grid point will be all similar - and so we depend on the other grid points being distinct to have high frequencies... and this will be at most (with luck) of the order of the grid separation.
Gradient noise: we compute a random (uniform, white noise) gradient at each grid point, and compute the values by interpolating the dot products of the gradients with the offsets. Consider again what happens in the neighborhood of a grid point, specifically over a small circumference, disregarding the effect of other distant grid points. One sees that the computed image value (as a dot product) in this small neighborhood will sweep, smoothly but fully, through the white-black range. Then we can expect that the image values will never have uniform spots, i.e., we won't practically have frequencies below that of the grid spacing. [*] A similar problem arises in halftoning/dithering: it's visually unpleasant to use binary white noise because of its low frequency component; a nicer dithering algorithm, such as Floyd-Steinberg, produces instead high frequency ("blue noise").
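Here is a minimal 1D sketch of the two constructions (my own illustration; real Perlin noise uses a smooth fade curve instead of the plain linear blend below):

```python
import random

def value_noise(xs, grid_vals):
    """1D value noise: linearly interpolate random values at integer grid points."""
    out = []
    for x in xs:
        i = int(x)                            # left grid point
        t = x - i
        out.append((1 - t) * grid_vals[i] + t * grid_vals[i + 1])
    return out

def gradient_noise(xs, grid_grads):
    """1D gradient noise: interpolate the dot products g(Q) . (P - Q)."""
    out = []
    for x in xs:
        i = int(x)
        t = x - i
        d0 = grid_grads[i] * t                # contribution of the left grid point
        d1 = grid_grads[i + 1] * (t - 1)      # contribution of the right grid point
        out.append((1 - t) * d0 + t * d1)
    return out

random.seed(0)
vals = [random.uniform(0.0, 1.0) for _ in range(11)]
grads = [random.choice([-1.0, 1.0]) for _ in range(11)]
xs = [k / 100 for k in range(1000)]           # samples in [0, 10)
vn = value_noise(xs, vals)
gn = gradient_noise(xs, grads)
```

Note that the gradient variant is exactly zero at every grid point (the dot product vanishes there), so it is forced to oscillate at the grid frequency, while the value variant is free to stay nearly constant between grid points that happen to get similar values.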
H: meaning of $Hx \ /xH$ $H$ is not normal group, take $x \in G$ such that $Hx \ /xH$. That is, there is $h ∈ H$ such that $hx \not \in xH$. Then $hxH \not = H$, and, if the same definition were to work, supposedly $hH ∗ xH = (hx)H \not = xH$ But, on the other hand, since $hH = eH$, $hH ∗ xH = eH ∗ xH = (ex)H = xH$ That is, if H is not normal, this apparent definition is in fact not well-defined. ($H$ is a subgroup of $G$) What is $Hx \ /xH$? AI: I think it's just a misprint of $Hx \not\color{red}{=} xH.$ Rationale: I assume this is a proof by contradiction that $H$ is a normal subgroup. But the definition of a normal subgroup requires that $Hx = xH.$ So it makes sense to start the proof by contradiction by assuming that $Hx \neq xH$, and then showing a contradiction from there.
H: For what values of $x$ would $nx \equiv 0\pmod{(x-n)}$? Given the positive integer $n$, for which integer values of $x$ will the equation below be satisfied? $$nx \equiv 0\pmod{(x-n)}$$ Also, how many positive integers $x$ exist for a given $n$? AI: Since $x\equiv n$ modulo $x-n$ we have the logical equivalence (same modulus) $$nx\equiv 0 \iff n^2\equiv 0\iff (x-n)\mid n^2.$$ Thus there is a correspondence $x-n\leftrightarrow d$ between solutions $x$ and divisors $d\mid n^2$. We conclude that the solution set is given in terms of $n$ by $x\in\{n+d:d\mid n^2\}$ (negative divisors also included). It follows that the number of solutions is $2\,\sigma_0(n^2)$ (see divisor sigma). If we restrict our attention to positive moduli only then we ignore the negative divisors of $n^2$, and the solution count is instead $\sigma_0(n^2)$.
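A brute-force check of this correspondence for a small case (my own verification; $n=6$, so $n^2=36$ and $\sigma_0(36)=9$):

```python
def divisors(m):
    """Positive divisors of m."""
    return [d for d in range(1, m + 1) if m % d == 0]

def solutions(n, lo, hi):
    """Integers x in [lo, hi], x != n, with nx = 0 mod (x - n)."""
    return sorted(x for x in range(lo, hi + 1)
                  if x != n and (n * x) % (x - n) == 0)

n = 6
divs = divisors(n * n)                        # sigma_0(36) = 9 of them
predicted = sorted({n + d for d in divs} | {n - d for d in divs})
found = solutions(n, n - n * n, n + n * n)    # |x - n| <= n^2 for any solution
```

The search window suffices because $(x-n)\mid n^2$ forces $|x-n|\le n^2$.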
H: What type of object is a differential form? This is a naive question; apologies in advance. For a point $p \in M$ on a smooth manifold $M$, a differential form can be viewed as a map $$T_p M\times \cdots \times T_p M \to \mathbb{R} \;.$$ What puzzles me about this object is that it is not "differential." Yes, I know, the tangent space $T_p M$ is differential in that it is tangent. But if $M=\mathbb{R}^n$, then I lose the intuitive sense of tangency, and just end up with a map from $k$ vectors to $\mathbb{R}$ with certain properties (the map is multilinear, and alternating). I seek a way to view differential forms intuitively that somehow emphasizes their differential aspects. Help would be much appreciated---Thanks! AI: In what one might call naive calculus, for each coordinate $x_i$, the differential $dx_i$ denotes a small (infinitesimal, even) change in $x_i$, so a covector $\sum_i a_i dx_i$ is an infinitesimal change. On a manifold, coordinates are only local, not global, so we should also imagine that this covector sits at a particular point of $M$. If we want to have a covector varying smoothly at every point, this is a differential one-form. If $f$ is a function, then the total differential of $f$ is the quantity $$df = \sum_i \dfrac{\partial f}{\partial x_i} dx_i, $$ which records how $f$ is changing, at each point. A tangent vector at a point is a quantity $v = \sum_i a_i \dfrac{\partial}{\partial x_i}$; you should think of this as a vector pointing infinitesimally, based at whatever point we have in mind. You can measure the change of $f$ in the direction $v$ by pairing $df$ with $v$. Summary: tangent vectors are infinitesimal directions based at a point, while covectors are measures of infinitesimal change. You can see how much of the change is occurring in a particular direction by pairing the covector with the vector. Now higher degree differential forms are wedge products of $1$-forms. 
You can think of these as measuring infinitesimal pieces of oriented $p$-dimensional volumes. (Think about how the (oriented) volume of an oriented $p$-dimensional parallelepiped spanned by vectors $v_1,\ldots,v_p$ depends only on $v_1\wedge\cdots\wedge v_p$.)
H: Is the following a ring? Is $\{a+bi: a,b \in \mathbb{Z} \}$, with $i^2=-1$, a ring under the usual operations of addition and multiplication? It needs to be closed under addition and multiplication, which it seems to be. Addition needs to be commutative and associative, which is quite straightforward. It must have an identity element, which I am not sure about, and every element must have an additive inverse, which I am also unclear about. Multiplication must be associative, which it should be, and it must be distributive over addition, meaning that $a*(b+c)=a*b+a*c$, which I also think it is. AI: The multiplicative identity is $1+0i$, the additive identity is $0+0i.$ Associativity (and commutativity) of addition and multiplication follow from the same properties of complex numbers in general (which you should verify if you haven't done so before!), as does distributivity. You can easily check that it is multiplicatively closed by expanding $(a+bi)(c+di)$ for $a,b,c,d\in\mathbb Z.$ The additive inverse of $a+bi$ is $-a-bi.$
H: Differential of Det Map I want to prove that the differential of the det map at some matrix $A$ is given by $f(A)\operatorname{tr}(X)$. Let $f:GL_n(\mathbb{R})\rightarrow\mathbb{R}$ be the det map, i.e. $f(A)=\det A$. Claim: $f_{*}(AX)=f(A)\operatorname{tr}(X)$. Proof of claim: Take the curve $c(t)=Ae^{tX}$, so that $c(0)=A$, $c'(0)=AX$. Using this curve we calculate the differential $f_{A{*}}(AX)=\frac{d}{dt}|_{t=0}f(c(t))=c'(t)|_{t=0}f(c(t))|_{t=0}=AXf(A)=AX(\det A)$. Where am I going wrong? Thank you. AI: It seems your problem is that you forgot to take the derivative (gradient) of the determinant $f:\mathbb R^{n^2}\to\mathbb R$. I.e., you should have $$\frac{d}{dt}f(c(t))|_{t=0} = \left[\nabla f(c(t))c'(t)\right]|_{t=0}$$ But there is maybe a better way, and it is worked out here.
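The claimed identity $\frac{d}{dt}\det(Ae^{tX})\big|_{t=0}=\det(A)\operatorname{tr}(X)$ can be sanity-checked numerically; the following pure-Python $2\times 2$ check is my own illustration (matrix exponential via a truncated Taylor series, derivative via a central difference):

```python
def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(M, terms=20):
    """exp(M) for a 2x2 matrix via a truncated Taylor series."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, M)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2.0, 1.0], [0.5, 3.0]]            # arbitrary invertible matrix
X = [[0.3, -0.7], [1.1, 0.4]]           # arbitrary direction
trace_X = X[0][0] + X[1][1]

def f(t):
    tX = [[t * x for x in row] for row in X]
    return det(mat_mul(A, mat_exp(tX)))

h = 1e-6
fd = (f(h) - f(-h)) / (2 * h)           # numerical d/dt det(A e^{tX}) at t = 0
jacobi = det(A) * trace_X               # claimed value det(A) tr(X)
```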
H: Can't understand the solution of this combination It's a journey with 9 people using two boats. Boat A can have a maximum of 7 people. Boat B can have a maximum of 4 people. In how many ways can the journey be done? My solution: A B 7 2 6 3 5 4 So, number of ways= ${^7C_7}*{^4C_2} + {^7C_6}*{^4C_3} + {^7C_5}*{^4C_4}$ = 12+28+7=47 But in the book I see the solution= ${^9C_7} + {^9C_6} + {^9C_5} = {^9C_2} + {^9C_3} + {^9C_4}$ = 246 Why? Can anyone explain how? AI: When you say $7 \choose 7$ then you're assuming you're selecting $7$ out of $7,$ not $7$ out of $9.$ When you say $\binom{7}{7} \times \binom{4}{2},$ then you're assuming the order within the boat matters, which is not the case. I.e., $A=(p_1, p_2, \ldots, p_7),B=(\color{red}{p_8}, \color{blue}{p_9})$ is the same as $A=(p_1, p_2, \ldots, p_7),B=(\color{blue}{p_9}, \color{red}{p_8})$. So we don't count them twice. How many ways can you choose $7$ out of $9$ people to ride boat $A$? If the $9$ people are labelled $p_1, p_2 \dots, p_9,$ then there are $\binom{9}{7}$ ways to do so: $$ p_1, p_2, p_3, p_4, p_5, p_6, p_7 \\ p_1, p_2, p_3, p_4, p_5, p_6, p_8 \\ p_1, p_2, p_3, p_4, p_5, p_6, p_9 \\ p_2, p_3, p_4, p_5, p_6, p_7, p_8 \\ \dots $$ Note that every choice above completely defines the choice for $B.$ So the count for the first configuration $(A = 7, B = 2)$ is $9 \choose 7$. Repeat for the other two configurations $(A = 6,B = 3)$ and $(A = 5, B = 4)$ to get ${9 \choose 7} + {9 \choose 6}+ {9 \choose 5}$.
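The book's answer is easy to confirm by brute force over all $2^9$ boat assignments (my own check):

```python
from itertools import product

count = 0
for assignment in product("AB", repeat=9):   # each of the 9 people picks a boat
    a = assignment.count("A")
    if a <= 7 and 9 - a <= 4:                # capacities: A holds 7, B holds 4
        count += 1
```

Only $a \in \{5,6,7\}$ survives the capacity constraints, matching $\binom{9}{5}+\binom{9}{6}+\binom{9}{7}=246$.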
H: Proving there are infinitely many primes of the form $a2^k+1.$ Fix $k \in \mathbb{Z}_+$. Prove that we can find infinitely many primes of the form $a2^k +1,$ where $a$ is a positive integer. We can use the result that: If $p \ne 2$ is a prime, and if $p$ divides $s^{2^t}+1$, for $s > 1$ and $t \ge 1$, then $p \equiv 1 \pmod {2^{t+1}}$. I've been trying to get something inductively: For $k = 1$, there are infinitely many primes of the form $2a + 1$. Suppose there are infinitely many primes of the form $a2^k + 1$, and then show that there are infinitely many primes of the form $a2^{k+1} + 1$. If there are infinitely many primes of the form $a2^k + 1$ where $a$ is even, then we have that $a2^k + 1 = (2q)2^k + 1 = q2^{k+1} + 1$. Hence we are done. Therefore, suppose there are only infinitely many primes of the form $a2^k + 1$ where $a$ is odd - but I can't get a contradiction out of this. AI: Suppose a prime $p$ divides both $s^{2^t}+1$ and $s^{2^u}+1$, $t\gt u$. Then it divides their difference, $s^{2^t}-s^{2^u}=s^{2^u}(s^v-1)$, where $v=2^t-2^u=2^u(2^{t-u}-1)$. Now $p$ can't divide $s^{2^u}$, since it divides $s^{2^u}+1$, so it must divide $s^v-1$. But $s^{2^u}\equiv-1\pmod p$, and $2^{t-u}-1$ is odd, so $s^v\equiv-1\pmod p$, contradiction (unless $p=2$). Thus, the numbers $s^{2^t}+1$, $t=k-1,k,k+1,\dots$, are pairwise coprime (aside from factors of 2), so they have distinct prime factors. But each of those prime factors $p$ satisfies $p\equiv1\pmod{2^{t+1}}$, hence, $p\equiv1\pmod{2^k}$, hence, $p=a2^k+1$ for some $a$.
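The two facts used in the answer (pairwise gcds are powers of $2$, and every odd prime factor of $s^{2^t}+1$ is $\equiv 1 \pmod{2^{t+1}}$) can be spot-checked for small values; this script is my own addition:

```python
from math import gcd

def prime_factors(m):
    """Prime factors of m by trial division (fine for small m)."""
    fs, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

s = 3
nums = {t: s ** (2 ** t) + 1 for t in range(1, 5)}   # 10, 82, 6562, 43046722

# Pairwise coprime aside from factors of 2: every gcd is a power of 2.
gcds_are_powers_of_two = True
for t in nums:
    for u in nums:
        if t < u:
            g = gcd(nums[t], nums[u])
            gcds_are_powers_of_two &= (g & (g - 1) == 0)

# Every odd prime factor p of s^(2^t) + 1 satisfies p = 1 mod 2^(t+1).
factors_ok = all(p % 2 ** (t + 1) == 1
                 for t, m in nums.items()
                 for p in prime_factors(m) if p != 2)
```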
H: Question from Rudin: Jensen? I came across this question from Rudin's Real and Complex Analysis, 3rd Edition (p. 75, #25): "Suppose that $\mu$ is a positive measure on $X$ and $f:X\rightarrow (0,\infty)$ satisfies $\int_X f\, d\mu = 1$. Prove for every $E \subset X$ with $0 < \mu(E) < \infty$ that $\int_E(\log{f})\,d\mu \leq \mu(E)\log{\frac{1}{\mu(E)}}$. Also, when $0<p<1$, we have $\int_E{f^p}\,d\mu \leq \mu(E)^{1-p}$." My first thought was that since log is a concave function, we can use Jensen's inequality (in the opposite direction), but that is not giving me what I want. Any suggestions? Further Addendum: Jensen's inequality only works on a set of measure 1 (or by redefining an interval to get a measure 1 set) so this is clearly not the correct approach. AI: First note that since we are only interested in $f$ on $E$, the condition $\int_X f\,d\mu=1$ translates to $\int_E f\,d\mu\le1$. Let $\bar{f_E}=\int_E f\,d\mu/\mu(E)\le 1/\mu(E)$ denote the average value of $f$ on $E$. We then get $$ \int_E \ln f\,d\mu \le\int_E \ln\bar{f_E}\,d\mu =\mu(E)\ln\bar{f_E} \le\mu(E)\ln\frac{1}{\mu(E)}, $$ where the first inequality is Jensen's inequality for the concave logarithm applied to the probability measure $\mu/\mu(E)$ on $E$, so the measure-1 restriction in your addendum is not an obstacle. Replace the logarithm with the power $p$, and the second result follows in exactly the same manner.
H: Convergence of an ugly series I am wondering, for $n > 1$, if the following series converges and, if possible, what it equates to. $$\sum _{m=0}^{\infty }\dfrac {\log\left( \dfrac {\left( n+m-1\right) !} {\left( n-1\right) !}\right) -m} {\prod _{t=n}^{t=n+m-1}\log t}$$ I suspect it should sum to 1 as they are mutually exclusive probabilities, unless I made a mistake somewhere. I tried the ratio test and Wolfram Alpha but did not have much success. Any help would be much appreciated. Thanks in advance. AI: Well, it's certainly not going to always be $1$. Numerically, for $n=2$ I get approximately $1.52276115857537477668117216104$. In fact the partial sum up to $m=7$ is already greater than $1$. The numerator of the $m$'th term is $O(m \log m)$, while the denominator is $\Omega(n^m)$, which grows much faster as $m \to \infty$ when $n > 1$. So the series converges.
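The numerical claim is easy to reproduce (my own script; `lgamma` gives $\log$ of the factorial quotient without computing huge factorials, and the terms decay fast enough that 40 of them suffice for a few digits):

```python
from math import lgamma, log

def term(n, m):
    # numerator: log((n+m-1)!/(n-1)!) - m, natural log via log-gamma
    num = lgamma(n + m) - lgamma(n) - m
    den = 1.0
    for t in range(n, n + m):        # empty product (= 1) when m = 0
        den *= log(t)
    return num / den

partial_m7 = sum(term(2, m) for m in range(8))
total = sum(term(2, m) for m in range(41))
```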
H: Finite summation Possible Duplicate: why is $\sum\limits_{k=1}^{n} k^m$ a polynomial with degree $m+1$ in $n$ What is the proof without induction for: $(1)$ $\sum_{i=1}^n\ i^2= \frac{n(n+1)(2n+1)}{6}$ $(2)$ $\sum_{i=1}^n\ i^3=\frac{n^2(n+1)^2}{4}$ AI: One can give a proof that has a combinatorial flavour. We want to choose $3$ numbers from the numbers $0, 1,2,3,\dots, n$. This can be done in $\binom{n+1}{3}$ ways. We do the counting in another way. Perhaps the smallest chosen number is $0$. Then the other two can be chosen in $\binom{n}{2}$ ways. Perhaps the smallest chosen number is $1$. Then the other two can be chosen in $\binom{n-1}{2}$ ways. Continue. Finally, the smallest of the chosen numbers could be $n-2$. Then the other two can be chosen in $\binom{2}{2}$ ways. Reversing the order of summation we get $$\binom{2}{2}+\binom{3}{2}+\cdots+\binom{n}{2}=\binom{n+1}{3}.$$ Now use the fact that $\binom{k}{2}=\frac{k(k-1)}{2}$. We find that $$\frac{1}{2}\sum_{k=1}^n k^2-\frac{1}{2}\sum_{k=1}^n k =\frac{(n+1)(n)(n-1)}{3!}.$$ We know a closed form formula for $\sum_{k=1}^n k$, so from the above we can get a closed form for $\sum_{k=1}^n k^2$. A similar idea works for $\sum k^3$. We use the combinatorial fact that $$\sum_{k=3}^n \binom{k}{3}=\binom{n+1}{4},$$ which is proved by a counting argument similar to the one we used for the closed form of $\sum_{k=2}^n \binom{k}{2}$. Remarks: $1$. If I put my logic hat on, the answer to your question becomes entirely different. The result cannot be proved without induction. Indeed almost nothing about the natural numbers can be proved without using induction. In the Peano axioms, omit the induction axiom scheme. We end up with a system that is very very weak. Almost any argument that has a $\dots$, or an "and so on" in it involves induction. Indeed the very definition of $\sum_{k=1}^n k^2$ requires induction. $2$.
The proof of $\sum_{k=2}^n \binom{k}{2}=\binom{n+1}{3}$, and its generalizations, is very natural, and the resulting formula has a nice structure. The closed form expression for $\sum_{k=1}^n k^2$ is definitely less attractive. It turns out that one can give a slightly messy but purely combinatorial proof of the formula $$\sum_{k=1}^n 4k^2=\binom{2n+2}{3},$$ so the somewhat "unnatural" $\frac{n(n+1)(2n+1)}{6}$ turns out to "come from" the more structured $\frac{1}{4}\cdot \frac{2n(2n+1)(2n+2)}{6}$.
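Both hockey-stick identities and the resulting closed forms can be machine-checked for small $n$ (my own verification):

```python
from math import comb

checks = True
for n in range(1, 50):
    # hockey stick: sum_{k=2}^{n} C(k,2) = C(n+1,3)
    checks &= sum(comb(k, 2) for k in range(2, n + 1)) == comb(n + 1, 3)
    # ... and its cubic analogue: sum_{k=3}^{n} C(k,3) = C(n+1,4)
    checks &= sum(comb(k, 3) for k in range(3, n + 1)) == comb(n + 1, 4)
    # resulting closed forms
    checks &= sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    checks &= sum(4 * k * k for k in range(1, n + 1)) == comb(2 * n + 2, 3)
```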
H: Eigenvectors of matrices formed from self-outer products of non-orthogonal vectors Let $v_1$ and $v_2$, with $v_1\neq v_2$ and $\langle v_1,v_2\rangle\neq 0$, be two non zero vectors in an $N$ dimensional Hilbert space. Let $\mathbf{S}=v_1\otimes v_1 +v_2\otimes v_2$ be an $N\times N$ Hermitian matrix ($\otimes$ denoting outer product). $\mathbf{S}$ has at most 2 non-zero eigenvalues. Assuming there are 2 distinct eigenvalues and denoting the two associated eigenvectors as $e_1$ and $e_2$, can it be said with certainty that $\langle e_1, v_1\rangle\neq 0$ and $\langle e_1,v_2\rangle\neq 0$ (and similarly for $e_2$)? AI: By choosing a basis, one may assume wlog that $v_1 = (a,0,\ldots,0)^\top$ and $v_2 = (b,c,0,\ldots,0)^\top$ where $a = \|v_1\|$, $b = \langle v_1,v_2 \rangle/a$, $c = \sqrt{\|v_2\|^2 - b^2}$. Note that $a,b,c$ are all nonzero. Then $S$ has its top left $2 \times 2$ submatrix $$\pmatrix{a^2 + b^2 & bc \cr bc & c^2\cr}$$, the other entries being all $0$. We may as well ignore the rest of the matrix and assume $N=2$. The nonzero eigenvalues are the roots of the characteristic polynomial $\lambda^2 - (a^2 + b^2 + c^2) \lambda + a^2 c^2$. They are indeed distinct since $(a^2 + b^2 + c^2)^2 - 4 a^2 c^2 = (a^2 - c^2)^2 + 2 b^2 (a^2 + c^2) + b^4 > 0$. The eigenvector for eigenvalue $r$ is $x = (x_1,x_2)^\top$ where $(a^2 + b^2 - r) x_1 + b c x_2 = 0$. Now $\langle x, v_1 \rangle = a x_1 \ne 0$. By symmetry (we could have presented $v_1$ before $v_2$), $\langle x, v_2 \rangle \ne 0$ as well.
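A quick numerical illustration of the argument in the reduced $2\times 2$ coordinates (pure Python, real scalars; the values of $a,b,c$ are arbitrary choices of mine):

```python
from math import sqrt

# a = |v1|, b = <v1, v2>/a, c = sqrt(|v2|^2 - b^2); all nonzero by assumption
a, b, c = 2.0, 0.7, 1.3
v1 = (a, 0.0)
v2 = (b, c)

# S restricted to span{v1, v2}: [[a^2 + b^2, b c], [b c, c^2]]
s11, s12, s22 = a * a + b * b, b * c, c * c

# eigenvalues: roots of lambda^2 - (a^2 + b^2 + c^2) lambda + a^2 c^2
tr = s11 + s22
disc = sqrt(tr * tr - 4 * a * a * c * c)
eigvals = [(tr + disc) / 2, (tr - disc) / 2]

def eigvec(r):
    # from (a^2 + b^2 - r) x1 + b c x2 = 0
    return (s12, r - s11)

dots = [abs(eigvec(r)[0] * v[0] + eigvec(r)[1] * v[1])
        for r in eigvals for v in (v1, v2)]
```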
H: Finding all roots of polynomial system (numerically) I want to numerically find all the roots of a system of polynomials (n equations in n variables). Since I can compute the Jacobian for the system (analytically or otherwise), I can use the Newton-Raphson method to find a single root (e.g. as described in the Numerical Recipes book). How do I find the other roots (if they exist), either using a different algorithm or extending the Newton-Raphson method? If this is not always possible, what about finding all the roots in a bounded interval [a,b]? Other roots may exist, but I only need to find the ones that lie in the interval - however, it is possible for multiple roots to lie in the interval. Thanks! AI: A code implementing homotopy continuation, by the name of PHCpack, is available with both C and Maple interfaces. I would not suggest trying to implement something like this yourself, since there are many fine points which are easily overlooked, and you may end up getting fewer solutions than you expected. A nice example of how the method works and the related problems can be seen here.
H: Borel linear order cannot have uncountable increasing chain I am trying to make sense of what this theorem from C.I. Steinhorn, Borel Structures and Measure and Category Logics, says. Theorem 1.3.3. A Borel linear order cannot have an uncountable increasing or decreasing chain. I didn't find definitions in the paper, but I am quite sure that a pair $(A,\leq)$ is a Borel linear order iff $A$ is a Borel set in a standard Borel space $X$ and $\leq$ is Borel as a subset of $X^2$. I presume that an increasing chain is just a linear order with no greatest element and similarly for decreasing. Here is the problem. Intuitively, I would guess that the lexicographic order on Baire space is a Borel linear order. But then Baire space itself should be an uncountable increasing chain under this order. What am I misunderstanding? AI: The reference is to a result of Harrington and Shelah, who showed that a Borel linear order does not contain an $\omega_1$-chain. Since every uncountable wellordering contains an initial copy of $\omega_1$, it follows that a Borel linear order cannot contain an uncountable increasing wellordered chain. By reversing the linear order and applying the theorem again, we see that there are no uncountable decreasing wellordered chains either. I conclude from this that the author probably meant for "chain" to be "wellordered chain" (which also explains the otherwise unusual phrase "increasing or decreasing").
H: Calculating Laplace's law for bigrams Reading this PDF, I encountered a very simple simplification that I can't obtain. Basically, it asks what is the probability of the occurrence of a word $w_{n}$ given that we know another word $w_{n-1}$ already appeared before. Usually, this is calculated using the Maximum Likelihood Estimate which gives the probability: $$P(w_{n}|w_{n-1}) = \displaystyle \frac{C(w_{n-1}w_{n})}{C(w_{n-1})}$$ where $C(w_{n})$ is the frequency of the word $w_{n}$. However, using an alternative probability called Laplace's law or Expected Likelihood Estimation we have as probability of $w_{n-1}w_{n}$ $$P(w_{n-1}w_{n}) = \frac{C(w_{n-1}w_{n})+1}{N + B} \tag{*}$$ where $N$ is the number of tokens considered in our sample and $B$ is the number of types, which in this case would be $B = V$ ($V$ = vocabulary size) for unigrams and $B = V^{2}$ for bigrams. The PDF says that $P(w_{n}|w_{n-1}) = \displaystyle \frac{C(w_{n-1}w_{n})+1}{C(w_{n-1})+V}$, however I can't prove that. I expected to get that answer by using Bayes' rule like this: $$P(w_{n}|w_{n-1})=\displaystyle \frac{P(w_{n-1}w_{n})}{P(w_{n-1})}$$ and using (*), but I don't get anything similar. Some help would be appreciated. By the way, the part I'm referring to is contained in a slide titled "Laplace Add-One Smoothing". Regards AI: Laplace smoothing is a result of maximum a posteriori (MAP) estimation of the conditional probability $P(w_n|w_{n-1})$ under a Dirichlet prior. Specifically, we want to estimate the $|V|$-valued multinomial distribution $p_k = P(w_n=k|w_{n-1})$, where $V$ is the vocabulary. Let us impose a Dirichlet prior on this multinomial distribution. That is, let $(p_1,\ldots,p_{|V|})$ be drawn from $Dir(\alpha,\ldots,\alpha)$, where $\alpha \geq 0$ is the concentration parameter of the Dirichlet distribution.
The MAP estimate of $p_k$ can be found as follows: $\hat{p}_k = \arg \max_{p_k} \sum_{k_1=1}^{|V|} C(w_{n-1}, k_1)\log p_{k_1} + \log{\Gamma(\alpha|V|)} - |V|\log{\Gamma(\alpha)} + \sum_{k_1=1}^{|V|}(\alpha-1)\log p_{k_1}$ subject to $\sum_{k_1=1}^{|V|} p_{k_1} = 1$ ($\Gamma(x)$ is the Gamma function, and $C(w_{n-1}, k_1)$ denotes the bigram count). The first term in the objective is due to the multinomial likelihood function, while the remaining ones are due to the Dirichlet prior. We can now use Lagrange multipliers to solve the above constrained convex optimization problem. The solution is the Laplace-smoothed bigram probability estimate: $\hat{p}_k = \frac{C(w_{n-1}, k) + \alpha - 1}{C(w_{n-1}) + |V|(\alpha - 1)}$ Setting $\alpha = 2$ will result in the add-one smoothing formula.
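The resulting add-one estimator is a one-liner in practice; here is a self-contained sketch on a toy corpus of my own invention:

```python
from collections import Counter

tokens = "the cat sat on the mat the cat ran".split()
vocab = sorted(set(tokens))
V = len(vocab)

ctx = Counter(tokens[:-1])                 # counts of w_{n-1} as a bigram context
bigram = Counter(zip(tokens, tokens[1:]))  # counts of adjacent pairs

def p_add_one(w_next, w_prev):
    """P(w_n | w_{n-1}) with add-one smoothing: (C(w_{n-1} w_n) + 1) / (C(w_{n-1}) + V)."""
    return (bigram[(w_prev, w_next)] + 1) / (ctx[w_prev] + V)

# the smoothed conditional distribution sums to 1 over the vocabulary
total = sum(p_add_one(w, "the") for w in vocab)
```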
H: Integrating $\int \frac{dx}{(x+\sqrt{x^2+1})^{99}}$ I am bugged by this problem: how do I evaluate this? $$\int \frac{dx}{(x+\sqrt{x^2+1})^{99}}.$$ A closed form will be convenient and fine. Thanks (it does not seem particularly inspiring). AI: If you sub $u=x+\sqrt{x^2+1}$, that alone will make the integral a simple rational integral. We can solve for $x$ in terms of $u$: $$\begin{align}u - x & =\sqrt{x^2+1}\\ u^2 - 2ux + x^2& =x^2+1\\ - 2ux& =1-u^2\\ x & = \frac{u^2-1}{2u}\\ dx & = \frac{2u(2u)-2(u^2-1)}{4u^2} du\\ dx & = \frac{u^2+1}{2u^2} du \end{align}$$ So you have $$\begin{align}\int \frac{u^2+1}{2u^{101}}\,du&=\frac{1}{2}\int u^{-99}+u^{-101}\,du\\&=\frac{1}{2}\left(\frac{u^{-98}}{-98}-\frac{u^{-100}}{100}\right)+C\\&=-\frac{1}{196(x+\sqrt{x^2+1})^{98}}-\frac{1}{200(x+\sqrt{x^2+1})^{100}}+C\end{align}$$
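The antiderivative can be sanity-checked by differentiating it numerically and comparing with the integrand (my own check, central finite difference at an arbitrary point):

```python
from math import sqrt

def u(x):
    return x + sqrt(x * x + 1)

def F(x):
    """Claimed antiderivative (constant of integration omitted)."""
    return -1 / (196 * u(x) ** 98) - 1 / (200 * u(x) ** 100)

def integrand(x):
    return 1 / u(x) ** 99

x, h = 0.1, 1e-5
fd = (F(x + h) - F(x - h)) / (2 * h)   # numerical derivative of F
```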
H: About finiteness of trees I am reading a book of Michael Sipser "Introduction to the theory of computation", and there is a theorem, which he gives without a proof: "If every node of a tree has only finitely many children and every branch of the tree has finitely many nodes, the tree itself has only finitely many nodes." Could you, please, help with a proof of this theorem. AI: This is more often stated in the form "if a tree $T$ is infinite and every node has only finitely many children then there exists an infinite path in $T$" (see König's Lemma). The infinite path may be constructed as follows: Let $x_0$ denote the root of $T$. Now $x_0$ has only finitely many children but $T$ is infinite so there must be at least one child of $x_0$ such that the subtree under it is infinite. We call this child $x_1$. Now the subtree with root $x_1$ has also the property that every node has only finitely many children so we may continue in the same way as before and choose a child $x_2$ of $x_1$ such that the subtree with root $x_2$ is infinite etc. This yields an infinite path $x_0,x_1,x_2,\ldots$
H: Is there a good description of the series $\sum_{j=0}^{n}a^j b^{n-j}$? To what does the series $\sum_{j=0}^{n}a^j b^{n-j}$ converge? Does this series have a name? AI: I'm assuming you are asking for the convergence for $n\to\infty$. We have (assuming $a\ne b$) $$\sum_{j=0}^na^jb^{n-j} = b^n\sum_{j=0}^n\left(\frac{a}{b}\right)^j = b^n\frac{(a/b)^{n+1}-1}{a/b-1} = \frac{a^{n+1}-b^{n+1}}{a-b}$$ Therefore the sequence converges if both $|a|<1$ and $|b|<1$, with limit $0$. Moreover if one of $a$ or $b$ is $1$ and the absolute value of the other is $<1$, the sequence converges to $1/(1-b)$ (for $a=1$) or $1/(1-a)$ (for $b=1$). Otherwise, the sequence diverges. If $a=b$, then the sum is $(n+1)a^n$, which converges to $0$ iff $|a|<1$ and diverges otherwise.
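The closed form is easy to check numerically; the sketch below also covers the degenerate case $a=b$, where the sum collapses to $(n+1)a^n$:

```python
def closed_form(a, b, n):
    """(a^(n+1) - b^(n+1)) / (a - b) when a != b, else (n + 1) * a^n."""
    if a == b:
        return (n + 1) * a**n
    return (a**(n + 1) - b**(n + 1)) / (a - b)

def direct_sum(a, b, n):
    return sum(a**j * b**(n - j) for j in range(n + 1))

for a, b in [(0.5, 0.25), (2, 3), (0.7, 0.7)]:
    for n in [0, 1, 5, 10]:
        assert abs(direct_sum(a, b, n) - closed_form(a, b, n)) < 1e-9
print("closed form matches the direct sum")
```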
H: A module with non-zero dual that does not canonically embed in its bidual I'd like to find a ring $A$ and an $A$-module $E$ such that $E^*\neq\{0\}$ and the canonical mapping of $E$ into $E^{**}$ is not injective. As a hint (it's an exercise in Bourbaki's Algebra) I have "consider a module containing an element whose annihilator contains an element which is not a divisor of zero". I'm not sure how the hint is supposed to help: there are $\mathbf{Z}$-modules with the property from the hint ($\mathbf{Z}/n\mathbf{Z}$, $\mathbf{Q}/\mathbf{Z}$), but their dual is zero. Unfortunately, my repertoire of interesting modules is very limited. Can somebody help me along? Another hint or an example without proof would be fine. AI: Recall that forming duals is additive (i.e. is compatible with taking direct sums). So try combining an example of a module with a non-zero dual and a module with zero dual via a direct sum.
H: Probability - multinomial partitioning Question: Joe is in his hunting blind when he locates $20$ geese, $25$ ducks, $40$ eagles, $10$ cranes, and $5$ flamingos. Joe randomly selects six birds to target; what is the probability that at least one of each species is targeted? The correct answer is apparently $0.03985$ according to the textbook solutions. Work so far: $${20 \choose 1}{25 \choose 1}{40 \choose 1}{10 \choose 1}{5 \choose 1}{95 \choose 1} = 95,000,000$$ So there are $95,000,000$ total ways to select at least one of each species. $$95,000,000 \left/ {100 \choose 6}\right. = \frac{95,000,000}{1,192,052,400} = 0.0797$$ Notes: I'm obviously doing something wrong calculating the number of ways to select at least one of each species, as ${100 \choose 6}$ should be the total ways to randomly select 6 of the 100 birds. Obviously my result is the correct answer multiplied by 2, but I'm not sure why. Any help/hints are greatly appreciated. EDIT: Dividing by 2 should not normally be done in the solution to a problem like this; therefore, I obviously didn't do my calculation correctly. I'd like to see how the calculation would look if I had done it correctly. AI: You need two birds of some species plus one bird from each of the other species. The total number of favorable combinations is ${20 \choose 2}{25 \choose 1}{40 \choose 1}{10 \choose 1}{5 \choose 1} +{20 \choose 1}{25 \choose 2}{40 \choose 1}{10 \choose 1}{5 \choose 1} +{20 \choose 1}{25 \choose 1}{40 \choose 2}{10 \choose 1}{5 \choose 1} +{20 \choose 1}{25 \choose 1}{40 \choose 1}{10 \choose 2}{5 \choose 1} +{20 \choose 1}{25 \choose 1}{40 \choose 1}{10 \choose 1}{5 \choose 2} =47500000.$
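The count in the answer can be verified directly with a short script (`math.comb` computes the binomial coefficients):

```python
from math import comb

counts = [20, 25, 40, 10, 5]  # geese, ducks, eagles, cranes, flamingos

# Exactly one species contributes two birds; the other four contribute one.
favorable = 0
for i, c in enumerate(counts):
    ways = comb(c, 2)
    for j, d in enumerate(counts):
        if j != i:
            ways *= d  # comb(d, 1) == d
    favorable += ways

prob = favorable / comb(sum(counts), 6)
print(favorable, round(prob, 5))
```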
H: Gram-Schmidt Orthogonalization - does it distort? I am writing a 3D solar panel positioning programme and have a section of code where I use the Gram-Schmidt Orthogonalization process to go from 3D to 2D for easier calculations. (For reference, here is the process in more detail: 3D to 2D rotation matrix) My simple question is this: after applying the orthogonalization process, is one unit in 2D still the same length as a unit in 3D? Am I right to assume that there is no distortion as the plane is simply rotated by the Gram-Schmidt process? Many thanks for your time, Kelvin. AI: Based on the answers to your previous question, I suppose you implemented the mapping from 3D to 2D. Whether or not this mapping preserves distances depends on how you did the implementation. If your 3D-to-2D mapping is just a rotation, then as you suspected, it will preserve distances (and angles). If you really want to be sure, experiment by mapping some points. Take two points that are some known distance $d$ apart, apply your mapping to each of them to get two new points, and check that these new points are again a distance $d$ apart.
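The experiment suggested in the answer can be sketched as follows: build an orthonormal basis for the panel's plane with Gram-Schmidt, map in-plane 3D points to their 2D coordinates, and compare distances (the spanning vectors here are example values, not taken from the original programme):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Two non-parallel vectors spanning the plane (example values).
a = [1.0, 2.0, 0.5]
b = [0.0, 1.0, 1.0]

# Gram-Schmidt: orthonormal basis (e1, e2) of the plane.
e1 = normalize(a)
b_perp = [bi - dot(b, e1) * e1i for bi, e1i in zip(b, e1)]
e2 = normalize(b_perp)

def to_2d(p):
    """2D coordinates of an in-plane 3D point in the (e1, e2) basis."""
    return (dot(p, e1), dot(p, e2))

# Two in-plane points: the 3D and 2D distances must agree.
p = [ai + 2 * bi for ai, bi in zip(a, b)]
q = [3 * ai - bi for ai, bi in zip(a, b)]
d3 = math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
u, v = to_2d(p), to_2d(q)
d2 = math.hypot(u[0] - v[0], u[1] - v[1])
print(d3, d2)
```

Because the basis is orthonormal, the mapping is a rotation followed by dropping the (zero) normal coordinate, so no distortion occurs for points in the plane.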
H: Is there any way to determine the first $3$ digits of $2^m-2^n$ ($n\leq m\leq 10^{100}$) It's a problem in my ACM training. Since $n,m$ are really huge, I don't think the algo $2^n=10^{n\log2}$ will work. Also, it's not really wise to calculate the value of $2^n$, I think. So I'm stuck. Can anyone come up with a way? Thanks. e.g.: $m=16$, $n=12$, and the answer is $614$. ($2^{16}=65536$, $2^{12}=4096$, $65536-4096=61440$.) AI: I think $2^n=10^{n\lg2}$ does help. It gives you the first digits of $2^n$ and $2^m$ and their respective number of digits. Assuming you have n and m in scientific notation with sufficient precision. And what is better you can improve the precision of the respective result incrementally as needed using the algorithm used when multiplying numbers by hand. So a proposed algorithm works like this: calculate $2^n$ and $2^m$ using the formula above and the first $k$ digits of $\lg2$. calculate the same values again but now increase the last digit of your approximation of $\lg2$ by one, thus giving you upper and lower bound of the two terms. using those calculate upper and lower bound of the final result. if the first 3 digits of the two approximations are the same you are done otherwise increase the number of digits used and repeat. As hinted above you should be able to reuse results from the previous iteration. Alternatively you could use @Anurag's result + the formula above + the suggested estimation of errors, thus adding powers of 2 until sufficient precision is reached. You'll have to deal with an arbitrary (but fast becoming irrelevant) number of terms instead of 2 though.
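A sketch of the log-based approach in Python, using the identity $2^m-2^n=2^n(2^{m-n}-1)$; the `decimal` precision and the cutoff for switching to the asymptotic approximation are illustrative choices, not part of the original answer:

```python
from decimal import Decimal, getcontext

def leading_digits(m, n, k=3, prec=130):
    """First k digits of 2^m - 2^n (m > n >= 0), via logarithms.

    2^m - 2^n = 2^n * (2^(m-n) - 1), so its base-10 log is
    n*log10(2) + log10(2^(m-n) - 1).  prec must comfortably exceed
    the digit count of n*log10(2) (n can have ~100 digits here).
    """
    getcontext().prec = prec
    log2 = Decimal(2).ln() / Decimal(10).ln()
    if m - n <= 400:
        # Small gap: take the exact factor's logarithm.
        log_factor = Decimal(2 ** (m - n) - 1).ln() / Decimal(10).ln()
    else:
        # Huge gap: 2^(m-n) - 1 and 2^(m-n) share their leading digits.
        log_factor = (m - n) * log2
    frac = (n * log2 + log_factor) % 1   # fractional part of the log
    return int(Decimal(10) ** (frac + k - 1))

print(leading_digits(16, 12))  # the worked example from the question
```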
H: $|P(x)|$ differentiable at a root $x_0$ Let $p(x)$ be a polynomial and suppose that $x_0 \in \bf R$ is a real root i.e. $p(x_0) = 0$. When will $|p(x)|$ be differentiable at $x_0$? My Thoughts For polynomials such as $f(x) = x$, we run into trouble at roots of odd multiplicity i.e. $|x|$ is not differentiable at $x = 0$. So my thoughts here are that it would be sufficient for the root to be of even multiplicity, in which case $|p(x)|$ would behave identically to $p(x)$ around $x_0$. Is this correct? AI: Since $\,P(x_0)=0\,$ , we can write $\,P(x)=(x-x_0)^mg(x)\,\,,\,g(x_0)\neq 0\,$ , so $$|P(x_0)|':=\lim_{h\to 0}\frac{|P(x_0+h)|}{h}=\lim_{h\to 0}\frac{|h|^m\,|g(x_0+h)|}{h}$$ so if $\,m>1\,$ then $$|P(x_0)|'=\lim\frac{|h|^m\,|g(x_0+h)|}{h}=\lim_{h\to 0}\frac{\pm h^m\,|g(x_0+h)|}{h}=0$$ If $\,m=1\,$ then clearly, and as noted already in the comments above, the limit doesn't exist.
H: If $(f'_n)$ converges uniformly, does $(f_n)$ necessarily converge uniformly? I've been studying complex analysis problems, and got stuck on the following: Let $D \subseteq \mathbb{C}$ be a domain (open connected set) and $z_0 \in D$. Assume that $(f_n)$ is a sequence of analytic functions in $D$ such that $\lim_{n \to \infty} f_n(z_0) = w_0 \in \mathbb{C}$ and that the sequence of derivatives $(f'_n)$ converges uniformly on compact subsets of $D$ to a function $g$. Is it true that there exists an analytic function $f$ in $D$ such that $f_n \to f$ uniformly on compact subsets of $D$? Prove or give a counterexample. I don't even know if this is true. I thought I had a solution (see below) but I realized that I was using the Fundamental Theorem of Calculus on sets which were not necessarily simply connected, which is invalid. However, I have been unsuccessful in finding a counterexample either. Does anyone know a way to fix the hole in my reasoning, or a counterexample if the statement is false? Thanks. Suppose $K$ is a connected compact subset of $D$ which contains $z_0$ with Lebesgue number $\delta$. Then there exists an open cover of $K$ by some number $k$ of $\delta$-disks. For each $m$ there exists $M>0$ such that $|f'_n(z)-g(z)| < 1/m$ and $|f_n(z_0) - w_0| < 1/m$ for all $n > M$ and all $z \in K$. Now let $z \in K$, and let $\gamma$ be a path of minimal length from $z_0$ to $z$ inside the $\delta$-cover of $D$. Then for $n > M$, $$\left | \int_\gamma f'_n(t) - g(t) dt\right| \leq \ell (\gamma) /m \leq 2\delta k /m.
$$ Then \begin{align*} \left| f_n(z) - \left(w_0 + \int_\gamma g(t) dt\right ) \right | &= \left| \left(f_n(z)- f_n(z_0) - \int_\gamma g(t) dt \right) + (w_0- f_n(z_0))\right| \\ & \leq \left| f_n(z)- f_n(z_0) - \int_\gamma g(t) dt \right| + |w_0- f_n(z_0)| \\ &\leq (2\delta k+1)/m \\ & \to 0 \text{ as } m \to \infty \end{align*} Thus $f_n$ converges uniformly to $w_0 + \int_{z_0} ^ z g(t) dt $ on compact subsets of $D$ [every compact subset is contained in a connected compact subset which contains $z_0$]. AI: I haven't checked your argument in detail, but you say [...] but I realized that I was using the Fundamental Theorem of Calculus on sets which were not necessarily simply connected, which is invalid. [...] It is true that not every holomorphic function on $D$ need have a primitive. But in this case you already know that each of the $f_n'$ does have a primitive on $D$, namely $f_n$. So you will always have $$\int_\gamma f_n'(z) \, dz = f_n(\gamma(b)) - f_n(\gamma(a))$$ for a curve $\gamma: [a,b]\to D$, by the fundamental theorem of calculus. You should be able to use this to show $(f_n)$ is uniformly Cauchy when restricted to compact sets. For example you could show $$|f_n(z) - f_m(z)| \le |f_n(z_0) - f_m(z_0)| + \left|\int_\gamma f_n'(\zeta) - f_m'(\zeta)\, d\zeta \right|$$ where $\gamma$ is any curve in $D$ from $z_0$ to $z$, and go on from there.
H: Need refresher on z-transforms and difference equations I recently tried showing someone else how to solve a difference equation using z-transforms, but it's been a long time and what I was getting didn't look right. I was trying to solve the recurrence we all know and love $$x[n]=x[n-1]+x[n-2],x[0]=0,x[1]=1$$ using the one-sided z-transform $$X(z)=\sum\limits_{n=0}^\infty x[n]z^{-n}$$ So, taking the z-transform of both sides, I got $$X(z)=\sum\limits_{n=0}^\infty x[n-1]z^{-n}+\sum\limits_{n=0}^\infty x[n-2]z^{-n}=$$ $$z^{-1}\sum_{n=0}^\infty x[n-1]z^{-n+1}+z^{-2}\sum\limits_{n=0}^\infty x[n-2]z^{-n+2}$$ I substitute $i=n+1$ in the first sum, $i=n+2$ in the second $$X(z)=z^{-1}\sum\limits_{i=1}^\infty x[i]z^{-i}+z^{-2}\sum\limits_{i=2}^\infty x[i]z^{-i}=$$ $$z^{-1}(X(z)-x[0]z^0)+z^{-2}(X(z)-x[0]z^0-x[1]z^{-1})=$$ $$z^{-1}X(z)+z^{-2}X(z)-z^{-3}$$ $$X(z)(1-z^{-1}-z^{-2})=-z^{-3}$$ $$X(z)=-\frac{z^{-3}}{1-z^{-1}-z^{-2}}$$ Did I go wrong somewhere? If not, how do I proceed from here? Update: All right, now that the mistake has been pointed out, let's see if I can correct it and continue. So the last correct step was $$X(z)=z^{-1}\sum\limits_{n=0}^\infty x[n-1]z^{-n+1}+z^{-2}\sum\limits_{n=0}^\infty x[n-2]z^{-n+2}$$ Now the substitution here should be $i=n-1$ for the first sum and $i=n-2$ for the second yielding $$X(z)=z^{-1}\sum\limits_{i=-1}^\infty x[i]z^{-i}+z^{-2}\sum\limits_{i=-2}^\infty x[i]z^{-i}=$$ $$z^{-1}(x[-1]z+X(z))+z^{-2}(x[-2]z^2+x[-1]z+X(z))$$ So if I were to continue like this, I'd have to calculate $x[-2]$ and $x[-1]$. I'm guessing this is why it was suggested to rewrite it as $x[n+2]=x[n+1]+x[n]$. In any case, I'm satisfied I can handle it. AI: (I’ll write $x_n$ instead of $x[n]$.) 
Your mistake was in rewriting $$z^{-1}\sum_{n=0}^\infty x_{n-1}z^{-n+1}\quad\text{ and }\quad z^{-2}\sum\limits_{n=0}^\infty x_{n-2}z^{-n+2}$$ as $z^{-1}(X(z)-x_0z^0)$ and $z^{-2}(X(z)-x_0z^0-x_1z^{-1})$, respectively: $$z^{-1}(X(z)-x_0z^0)=z^{-1}\sum_{n\ge 1}x_nz^{-n}=z^{-1}\sum_{n\ge 0}x_{n+1}z^{-n-1}$$ and $$z^{-2}(X(z)-x_0z^0-x_1z^{-1})=z^{-2}\sum_{n\ge 2}x_nz^{-n}=z^{-2}\sum_{n\ge 0}x_{n+2}z^{-n-2}\;.$$ In fact $$z^{-1}\sum_{n=0}^\infty x_{n-1}z^{-n+1}=z^{-1}\sum_{n=-1}^\infty x_nz^{-n}=z^{-1}(X(z)+x_{-1}z)=z^{-1}X(z)+1$$ and $$z^{-2}\sum\limits_{n=0}^\infty x_{n-2}z^{-n+2}=z^{-2}\sum_{n=-2}^\infty x_nz^{-n}=z^{-2}\Big(X(z)+x_{-1}z+x_{-2}z^2\Big)=z^{-2}X(z)+\frac1z-1\;,$$ so $$z^{-1}\sum_{n=0}^\infty x_{n-1}z^{-n+1}+z^{-2}\sum\limits_{n=0}^\infty x_{n-2}z^{-n+2}=z^{-2}X(z)+z^{-1}X(z)+\frac1z\;.$$ Here $x_{-1}=1$ and $x_{-2}=-1$ are obtained by extrapolating the recurrence backwards. I find it easiest to deal with the initial conditions by assuming that $x_n=0$ for all $n<0$ and using Iverson brackets to incorporate them in the recurrence. In this case the resulting recurrence is $$x_n=x_{n-1}+x_{n-2}+[n=1]\;.\tag{1}$$ Now multiply $(1)$ through by $z^{-n}$ and sum over $n\ge 0$ to get $X(z)$: $$\begin{align*} X(z)&=\sum_{n\ge 0}x_{n-1}z^{-n}+\sum_{n\ge 0}x_{n-2}z^{-n}+\frac1z\\ &=\frac1zX(z)+\frac1{z^2}X(z)+\frac1z\;, \end{align*}$$ so $z^2X(z)=zX(z)+X(z)+z$, and $$X(z)=\frac{z}{z^2-z-1}=\frac{z}{(z-\varphi)(z-\widehat\varphi)}\;,\tag{2}$$ where $\varphi=\frac12\left(1+\sqrt5\right)$ and $\widehat\varphi=\frac12\left(1-\sqrt5\right)$.
Then $x_n=A(n)+B(n)$, where $$A(n)=\lim_{z\to\varphi}\frac{z^n}{z-\widehat\varphi}=\frac{\varphi^n}{\varphi-\widehat\varphi}=\frac{\varphi^n}{\sqrt5}$$ and $$B(n)=\lim_{z\to\widehat\varphi}\frac{z^n}{z-\varphi}=\frac{\widehat\varphi^n}{\widehat\varphi-\varphi}=-\frac{\widehat\varphi^n}{\sqrt5}\;,$$ i.e., $$x_n=\frac1{\sqrt5}\left(\varphi^n-\widehat\varphi^n\right)=\frac1{\sqrt5}\left(\left(\frac{1+\sqrt5}2\right)^n-\left(\frac{1-\sqrt5}2\right)^n\right)\;,$$ the familiar Binet formula. Section 5.2.3 of this PDF has a brief discussion of the z-transform method with a couple of worked examples. The main difference is that the author prefers to compute the generating function and then make the substitution $z\mapsto\frac1z$; this has the virtue of avoiding negative exponents in the summations.
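The Binet formula obtained above is easy to verify numerically against the recurrence itself:

```python
import math

sqrt5 = math.sqrt(5)
phi = (1 + sqrt5) / 2        # golden ratio
phi_hat = (1 - sqrt5) / 2

def binet(n):
    return (phi**n - phi_hat**n) / sqrt5

# The recurrence x[n] = x[n-1] + x[n-2] with x[0] = 0, x[1] = 1.
x = [0, 1]
for n in range(2, 30):
    x.append(x[-1] + x[-2])

assert all(round(binet(n)) == x[n] for n in range(30))
print(x[:10])
```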
H: $I^2$ does not retract onto comb space In Hatcher's book on Algebraic Topology in Chapter 0 (where the Homotopy Extension Property is discussed) the claim is being made that there is no retraction of $I^2$ onto $I \times \{0\} \cup A \times I$ where $A = \{0, 1, \frac 12 , \frac 13, \ldots \}$. However I couldn't come up with a proof. One can define a retraction if there is only a finite number of "teeth in the comb". It's not clear to me why continuity fails (presumably) at 0 if one tries to extend this retraction to the infinite case. I hope someone can help out. AI: If $f$ is a retraction, then it must map $(0,1)$ to $(0,1)$. Since it is continuous, there must be a ball $A$ centered on $(0,1)$ such that $f(A)$ is contained in the ball $B$ of radius $\frac 12$ centered on $(0,1)$. However $A$ must contain a point of the form $(\frac 1n,1)$ for some $n$ (which also maps to itself under $f$), and then the points $(\frac tn,1)$ for $0\le t\le 1$ all lie in $A$ too. Then $t\to f(\frac tn,1)$ is a path in the comb from $(0,1)$ to $(\frac 1n,1)$. Since there is no such path in the comb that doesn't wander outside $B$, $f$ cannot exist.
H: Show angle between $P-P_0$ and plane tangent to surface at $P_0$ approaches 0 Let $f:\mathbb{R}^2 \rightarrow \mathbb{R}$ be a function with continuous partial derivatives, and let $S$ be the surface $z=f(x,y)$ in $\mathbb{R}^3$. Let $P_{0}=(x_0,y_0,z_0 )$ be a point in $S$ and $P=(x,y,z)$ be some other point in $S$. We're asked to show that $a$, the angle between the plane tangent to $S$ at $(x_0,y_0,z_0)$ and the vector $P-P_0$, approaches $0$ as $P\rightarrow P_0$. I'd appreciate some help with proving this. The question is from a former exam in my multivariable calculus course. The angle thing throws me off here since I have no idea how to approach this. I tried proving this using dot products to derive the angle between the vectors, but calculating the limit was difficult. Thanks! AI: Hint only, since this looks like homework and you should try to do the calculations yourself: Instead of $P-P_0$, consider the corresponding normalized unit vector $Q= \frac{P-P_0}{|P-P_0|}$, so you do not have to bother about estimating its length. Instead of the tangent plane, consider the unit normal $n$ to the tangent plane and show that $\langle n,Q\rangle$ tends to zero as $P$ approaches $P_0$.
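A numerical illustration of the claim (not the requested proof): for an example surface and an approach path chosen for illustration, the angle between $P-P_0$ and the tangent plane shrinks roughly linearly as $P \to P_0$:

```python
import math

def f(x, y):
    """Example surface, chosen for illustration."""
    return x * x + x * y

x0, y0 = 1.0, 2.0
z0 = f(x0, y0)
fx, fy = 2 * x0 + y0, x0          # partial derivatives at (x0, y0)
n = (fx, fy, -1.0)                # normal to the tangent plane at P0
nn = math.sqrt(sum(c * c for c in n))

def angle_with_plane(t):
    """Angle between P - P0 and the tangent plane, where
    P = (x0 + t, y0 + t, f(x0 + t, y0 + t))."""
    v = (t, t, f(x0 + t, y0 + t) - z0)
    vn = math.sqrt(sum(c * c for c in v))
    s = abs(sum(a * b for a, b in zip(n, v))) / (nn * vn)
    return math.asin(s)           # complement of the angle with the normal

for t in [0.1, 0.01, 0.001]:
    print(t, angle_with_plane(t))
```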
H: Is there a non-contradictory non-trivial axiomatic system in which Gödel's theorem is undecidable? Gödel's Incompleteness Theorem states that in any non-contradictory non-trivial axiomatic system there are certain statements which cannot be proved or disproved in that system. For example, in ZFC, assuming it is non-contradictory, there are the famed Continuum Hypothesis and Whitehead's problem. My question is, is there any such system (but different from ZFC) in which Gödel's Incompleteness Theorem itself is an unprovable statement? AI: Robinson arithmetic is non-trivial enough that the incompleteness theorem applies to it, but as far as I know not strong enough to prove the incompleteness theorem itself.
H: Application of Radon Nikodym Theorem on Absolutely Continuous Measures I have the following problem: Show $\beta \ll \eta$ if and only if for every $\epsilon > 0 $ there exists a $\delta>0$ such that $\eta(E)<\delta$ implies $\beta(E)<\epsilon$. For the forward direction I had a proof, but it relied on the use of the false statement that "$h$ integrable implies that $h$ is bounded except on a set of measure zero". I had no problem with the backward direction. AI: Assume that $\beta=h\eta$ with $h\geqslant0$ integrable with respect to $\eta$, in particular $\beta$ is a finite measure. Let $\varepsilon\gt0$. There exists some finite $t_\varepsilon$ such that $\beta(B_\varepsilon)=\int_{B_\varepsilon} h\,\mathrm d\eta\leqslant\varepsilon$ where $B_\varepsilon=[h\geqslant t_\varepsilon]$ (such a finite $t_\varepsilon$ exists by dominated convergence, since $h$ is $\eta$-integrable). Note that, for every measurable $A$, $A\subset B_\varepsilon\cup(A\setminus B_\varepsilon)$, hence $\beta(A)\leqslant\beta(B_\varepsilon)+\beta(A\cap[h\leqslant t_\varepsilon])\leqslant\varepsilon+t_\varepsilon\eta(A)$. Let $\delta=\varepsilon/t_\varepsilon$. One sees that, for every measurable $A$, if $\eta(A)\leqslant\delta$, then $\beta(A)\leqslant2\varepsilon$, QED.
H: Showing that $a+b\sqrt{2}$ is associative over multiplication I need to show that $a+b\sqrt{2}$ is associative over multiplication. This is what I have so far. I may be taking a wrong route so just please let me know. $$(a+b\sqrt{2})*((c+d \sqrt{2})*(e+f \sqrt{2} )) = \\ (a+b\sqrt{2})*(ce+2df+(cf+de)\sqrt{2}) $$ and then I just opened up the brackets and got an ugly-looking thing, and I am not sure what to do next. $$(ace+2adf+2bcf+2bde)+(acf+ade+bce+2bdf)\sqrt{2}$$ How would I go from here? AI: Are you trying to show that ordinary multiplication is associative when applied to numbers of the form $a+b\sqrt2$ (with $a$ and $b$ rational and/or integral)? That is trivially true because $(pq)r=p(qr)$ holds for all real numbers $p$, $q$, and $r$ -- the case where they all lie in $\mathbb Z+\mathbb Z\sqrt2$ or $\mathbb Q+\mathbb Q\sqrt2$ is just a special case of this general truth.
H: Create a C++ program to evaluate the following series: $\sin x \approx x - \frac{x^3}{3! }+\frac{x^5}{5!}-\frac{x^7}{7!}\cdots\pm\frac{x^n}{n!}$ This is a problem that my friend has. He says he has to create a C++ program to evaluate the series (He says we must use functions): $$\displaystyle\large\sin x\approx x- \frac{x^3}{3! }+\frac{x^5}{5!}-\frac{x^7}{7!}\cdots\pm\frac{x^n}{n!}$$ The values of $x$ and $n$ have to be taken from the user. Only I'm not good with C++. Can anyone help? AI: As $T_n=\frac{x^{2n+1}}{(2n+1)!}$ and $T_0=x$, we have $T_{n-1}=\frac{x^{2n-1}}{(2n-1)!}$, so $T_n=T_{n-1}\frac{x^2}{(2n+1)2n}$. I have utilized this so that we don't need to calculate the powers of x and the factorial every time. I've assumed that the number of terms is not required, as by repeated division y will eventually be reduced to 0 due to the limited precision of the double datatype. Also, x is to be measured in radians (not degrees), as prescribed by the Taylor series. #include <cstdio> int main() { double x, sinx, y; printf("enter x:\n"); scanf("%lf", &x); sinx = y = x; int sign = -1; for (int idx = 1;; idx++) { y = y * (x / (2 * idx)) * (x / (2 * idx + 1)); if (y == 0) break; sinx = sinx + sign * y; sign = -sign; } printf("\n\nsin(%lf) = %lf", x, sinx); return 0; }
H: Derivation of formula for estimating error in bulk-volume In my textbook, a formula for estimating error in bulk-volume measurements is derived, but I don't quite follow one step in the derivation. The book writes the following: The bulk volume of a porous-rock core sample can be measured in two steps, by weighing the sample first in water, $m_1$ (assuming $100$% water saturation), and then in air, $m_2$. The bulk volume is calculated as follows: $$V_b = \frac{m_2 - m_1}{\rho_w}$$ By differentiating this equation, we obtain: $$dV_b = \frac{\partial V_b}{\partial m_2}dm_2 + \frac{\partial V_b}{\partial m_1}dm_1 + \frac{\partial V_b}{\partial \rho_w}d \rho_w$$, $$dV_b = \frac{m_2 - m_1}{\rho_w}\Bigg(\frac{dm_2}{m_2 - m_1} - \frac{dm_1}{m_2 - m_1} - \frac{d \rho_w}{\rho_w}\Bigg)$$ If the density measurement and the measurements of $m_1$ and $m_2$ are considered to be independent, the uncertainty inherent in the bulk-volume value can be written as: $$\Bigg(\frac{\Delta V_b}{V_b}\Bigg)^2 = 2\Bigg(\frac{\Delta m}{(m_2 - m_1)}\Bigg)^2 + \Bigg(\frac{\Delta \rho_w}{\rho_w}\Bigg)^2$$ where the error in the weighing of the two masses is considered to be identical, $\Delta m = \Delta m_1 = \Delta m_2$. OK, so it is this last step here that I don't quite follow. I assume that we are here using that $dV_b = \Delta V_b$.
Since we know that: $$dV_b = \frac{m_2 - m_1}{\rho_w}\Bigg(\frac{dm_2}{m_2 - m_1} - \frac{dm_1}{m_2 - m_1} - \frac{d \rho_w}{\rho_w}\Bigg)$$ I would then assume that we have: $$\frac{\Delta V_b}{V_b} = \frac{\frac{m_2 - m_1}{\rho_w}\Bigg(\frac{dm_2}{m_2 - m_1} - \frac{dm_1}{m_2 - m_1} - \frac{d \rho_w}{\rho_w}\Bigg)}{\frac{m_2 - m_1}{\rho_w}}$$ $$\frac{\Delta V_b}{V_b} = \Bigg(\frac{dm_2}{m_2 - m_1} - \frac{dm_1}{m_2 - m_1} - \frac{d \rho_w}{\rho_w}\Bigg)$$ $$\frac{\Delta V_b}{V_b} = \Bigg(\frac{\Delta m}{m_2 - m_1} - \frac{d\rho_w}{\rho_w}\Bigg)$$ So: $$\Bigg(\frac{\Delta V_b}{V_b}\Bigg)^2 = \Bigg(\frac{\Delta m}{m_2 - m_1} - \frac{d\rho_w}{\rho_w}\Bigg)^2$$ $$\Bigg(\frac{\Delta V_b}{V_b}\Bigg)^2 = \Bigg(\frac{\Delta m}{m_2 - m_1}\Bigg)^2 - 2 \frac{d\rho_w}{\rho_w}\Bigg(\frac{\Delta m}{m_2 - m_1}\Bigg) + \Bigg(\frac{d \rho_w}{\rho_w}\Bigg)^2$$ But this is obviously not the same. So if anyone can please explain to me that last step in the derivation of the formula, I would be very grateful! AI: The author does not really mean that, for example, the errors in measuring the two masses are the same. (But admittedly, that is exactly what the author writes!) What is meant is that we have a bound $\Delta m$ on the absolute value of the error in measuring a mass, and similarly a bound on the absolute value of the error in the density. The errors could be in opposite directions and add. We could have an underestimate of the mass in air, and an overestimate of the mass in water. The squaring is traditional. What's happening is that the author, without explicitly saying so, is making a calculation of variance.
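The step the book is implicitly taking — adding variances of independent errors rather than adding the differentials themselves — can be sanity-checked with a small Monte Carlo simulation, treating $\Delta m$ and $\Delta \rho_w$ as standard deviations of independent measurement errors (the nominal values below are invented for illustration):

```python
import random

random.seed(0)

# Invented nominal values and error levels, for illustration only.
m1, m2, rho_w = 150.0, 400.0, 1.0
dm, drho = 0.5, 0.002

V_b = (m2 - m1) / rho_w
N = 200_000

total = 0.0
for _ in range(N):
    V = (m2 + random.gauss(0, dm) - (m1 + random.gauss(0, dm))) \
        / (rho_w + random.gauss(0, drho))
    total += ((V - V_b) / V_b) ** 2

simulated = total / N  # mean squared relative error
predicted = 2 * (dm / (m2 - m1)) ** 2 + (drho / rho_w) ** 2
print(simulated, predicted)
```

The simulated mean of $((V-V_b)/V_b)^2$ matches the book's formula to within sampling noise; the cross terms that appear in the hand expansion average to zero because the errors are independent.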
H: book with lot of examples on abstract algebra and topology I have read a lot of blogs and forums to find out the best of the books on Abstract algebra and Topology but on going through the books I realized that they are full of proofs and all kind of theorems and corollaries. But the disappointing point is that there is either a very small number of examples or none at all to support that particular theorem or proof. My idea of an example is practically calculating using a set with contents rather than just saying "a set of integers", and the calculations should go to the end, for example calculating index in subgroups. Everything should be step by step. I went through "Topics in Algebra" by I.N.Herstein but it's full of rigor. I faced similar problems with topology. Please help me by suggesting books where I can find lots and lots of simple examples on these topics. Please don't suggest any advanced book but something that keeps me near the base with lots of permutations of ideas. Even if you have something to suggest other than these, you are most welcome. AI: For algebra, Dummit and Foote is as easy and gentle an introduction as you could ask for. It's not quite as easy for topology. I happened to write a blog post that collected various answers and topology book recommendations from MSE. For ease, I've copied over the content below. But to follow any hyperlinks, which I didn't copy over (Wordpress has annoying formatting), I'd recommend just looking at the post itself or googling the corresponding term. A MSE Collection: Topology Book References To be clear, this is a compilation of much (not all) of the discussion in the following questions (and their answers): best book for topology? by jgg, Can anybody recommend me a topology textbook?
by henryforever14, choosing a topology text by A B, Introductory book on Topology by someguy, Reference for general-topology by newbie, Learning Homology and Cohomology by Refik Marul, What is a good Algebraic topology reference text? by babgen, Learning Roadmap for Algebraic Topology by msnaber, What algebraic topology book to read after Hatcher’s? by weylishere, Best Algebraic Topology book/Alternative to Allen Hatcher free book? by simplicity, and Good book on homology by yaa09d. And I insert my own thoughts and resources, when applicable. Ultimately, this is a post aimed at people beginning to learn topology, perhaps going towards homology and cohomology (rather than towards a manifolds-type course, at least for now). One cannot mention references on topology without mentioning Munkres Topology. It is sort of ‘the book on topology,’ and thus I’ll compare all other recommendations to it. When I was blitz-preparing from my almost-no-topology-outside-of-what-I-learned-in-analysis undergraduate days to Brown’s (and the rest of the world) topology-is-everywhere days, I used Munkres. In fact, Munkres once said that he explicitly wrote his book to be an accessible topology book for undergrads. I thought it was alright, although it wasn’t until much later that I began to see why anything was done the way it was done. If the goal is to immediately go on to learn algebraic or differential topology, some think that the first three chapters of Munkres are sufficient. Munkres serves primarily to give an understanding of the mechanics of point set topology. Willard’s General Topology is a more advanced point-set topology book. The user Mathemagician1234 says in a comment that it is practically the Bible of point-set topology, and extremely comprehensive. It has a certain advantage over Munkres in that it’s published by Dover for cheap. In a comment on a different thread, Brian M.
Scott suggests that for a mathematician of sufficient maturity, Willard is superior to Munkres and Kelley’s General Topology. Sean Tilson wrote up an answer discussing Willard as well. In short, he thinks it is superior to Munkres, but that it is necessary to do the exercises to learn some key facts. In Munkres, this is not the case, but one needs to do exercises to develop any sort of intuition with Munkres. If Willard is much harder than Munkres, it would seem that Armstrong’s Basic Topology is comparable. It’s part of the Undergraduate Texts in Math series. I know many people who learned point-set topology from Armstrong rather than Munkres. I have personally only cursorily glanced through Armstrong, and I found it roughly comparable to Munkres. When I read the reviews on Amazon for Munkres and Armstrong, however, there is a strange discrepancy. Munkres far outscores Armstrong (whatever that really means). Perhaps this is due to the recurring theme in the top reviews of Armstrong, which say that Armstrong deviates from the logical (and some would say, intuition-less) flow of Munkres to instead ground the subject in a diverse set of subjects. On the other hand, this review says that Munkres is simply more extensive. This ultimately leads me to believe that it’s a matter of taste between the two (although Munkres is much longer). But the autodidact’s guide recommends this book as an undergraduate text. On a different note, there is a freely readable (but not printable) book by S. Morris called Topology without tears. This is a link to the pdf on his website. I should note that the version linked is not printable. There is a printable version, but you will need the author’s permission to print it. I think this book is very readable, and is perhaps the gentlest introduction text here (for better or worse).
While all the texts mentioned so far have sort of been aimed at developing the skill set necessary to learn algebraic/differential topology, there is a book Introduction to Topology and Modern Analysis by Simmons that is instead aimed at the topology most immediately relevant to analysis. It comes very highly recommended for those interested in that niche. If, on the other hand, you are gung-ho towards algebraic topology, then Massey’s A Basic Course in Algebraic Topology has sufficient introduction to the material that it might be used as a first topology text (vastly helped by a course in analysis). But the homology theory section is… lacking. Finally, John Stillwell wrote a book Classical Topology and Combinatorial Group Theory which has a very interesting presentation and an atypical selection of material. But it seems like a pretty interesting angle of approach. One might consider using Singer and Thorpe’s Lecture Notes on Elementary Topology and Geometry as a supplement to one of the above texts, as it is known to give a big picture. It actually introduces point-set, algebraic, and differential topology as well as some differential geometry. But it’s very short, very terse, and it has no exercises. So it is absolutely not a replacement for learning point-set topology. One might also use Viro’s Elementary Topology (mentioned below) or Counterexamples in Topology (mentioned below) as a supplement. Counterexamples might be particularly nice, because Munkres has a tendency to give really pathological counterexamples sometimes. It’s also good to know what books not to use as an intro text, but that might be great supplements or advanced texts. This includes: Alexandroff’s Elementary Concepts of Topology: It is simply too hard. Viro’s Elementary Topology, Textbook in Problems (this is a link to his website, where it’s freely available): although it would be a good supplement.
Seebach and Steen’s Counterexamples in Topology (a Dover book): it doesn’t serve to teach topology, but it’s full of exactly what it says – counterexamples. Dugundji’s Topology or John Kelley’s General Topology: these are graduate texts, and they remind me of the difference between, say, Atiyah-Macdonald (an intro-to-commutative-algebra text) and Matsumura or Eisenbud (more advanced texts, both more useful and much more difficult). In other words, these are good ‘next’ books on topology, but not introductions. For someone who already knows their stuff, though, these seem great. Engelking’s General Topology: this is the reference on topology, but not an introduction. This is to general topology what Lang is to Algebra. It is not aimed at differential or algebraic topology, but is instead designed to be a reference. If interested, make sure to get the 1989 second edition, which is expanded and updated. Bourbaki: this is another reference, not meant for an introduction to the subject. This answer on MathOverflow has comments to it that speak of some of the unfortunate pitfalls of Bourbaki as well. Spanier: It has a tremendous amount of material, but I’ve almost never heard or read any other redeeming aspect of the book. Chicago’s mathematics bibliography mentions that “… glad I have it, but most people regret ever opening it.” Almost any book with “Algebraic Topology” or “Differential Topology” in the title. Now, away from the intro-to-general topology and instead towards intro-to-algebraic topology: The classic choice here is now Allen Hatcher’s Algebraic Topology (this is a link to his webpage, where he has the book available for free download). Some might say that Peter May’s freely available book is the other ‘obvious’ choice. But for this, it’s easy to say that since they’re both freely available, you might just try them both until you find one that you prefer. Hatcher’s book has a large emphasis on geometric intuition, and it’s fairly readable.
I learned from this book, and my initial dislike (I had a hard time with some of the initial arguments) turned into a sort of respect. It’s a fine book. May’s book (called A Concise Course in Algebraic Topology) is, in May’s own admission, too tough for a first course on the subject. But it has a very nice bibliography for further reading. Tammo Tom Dieck has a new book, Algebraic Topology, which just might be loved (even by Hatcher himself). And Rotman’s book is generally well-regarded. Interestingly, the most common response at MSE when asked “What is the best alternative to Hatcher?” was along the lines of “No, use Hatcher. It’s lovely.” There are also some additional free sources of notes. There are complete lecture notes of K. Wurthmuller and Gregory Naber (available at their websites and elsewhere). I want to point out the autodidact’s guide and Chicago’s undergraduate mathematics bibliography again, because they’re good places to look in general for such things. I learned from Herstein for algebra and Munkres for topology, and they both require you to do exercises to grasp the material. And that's how learning almost anything else in math from a text will be, so that's okay.
H: Limit of Integral of square of function Let $f: [0,1)\rightarrow \mathbb{R}$ be continuous and nonnegative on $[0,1)$. Prove that if $\lim_{a\rightarrow 1^{-}}{\int_{0}^{a}{f(x)^{2}dx}}$ exists, then $\lim_{a\rightarrow 1^{-}}{\int_{0}^{a}{f(x)dx}}$ exists too. AI: The Cauchy-Schwarz inequality (taking the inner product with the function $1$) shows that if $g \in L^2[0,1)$, then $|\int_0^1 g(x) dx | \leq \sqrt{\int_0^1 |g(x)|^2 dx}$. Then consider the function $f_a(x) = f(x) 1_{[0,a]} (x)$, with $a \in [0,1)$. Since $f$ is non-negative, $\int_0^a f(x) dx \leq \sqrt{\int_0^a |f(x)|^2 dx} \leq \lim_{a\rightarrow 1^{-}} \sqrt{\int_0^a |f(x)|^2 dx}$. Since $a \mapsto \int_0^a f(x) dx$ is non-decreasing, it follows that the limit exists.
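A concrete illustration of the argument (the test function is my choice, not from the question): for $f(x)=(1-x)^{-1/4}$ both integrals have closed forms, $\int_0^a f = \frac43\bigl(1-(1-a)^{3/4}\bigr)$ and $\int_0^a f^2 = 2\bigl(1-(1-a)^{1/2}\bigr)$, so the Cauchy-Schwarz bound can be checked numerically as $a\to 1^-$:

```python
import math

def F1(a):
    # closed form of ∫_0^a (1-x)^(-1/4) dx
    return 4.0 / 3.0 * (1.0 - (1.0 - a) ** 0.75)

def F2(a):
    # closed form of ∫_0^a (1-x)^(-1/2) dx = ∫_0^a f(x)^2 dx
    return 2.0 * (1.0 - (1.0 - a) ** 0.5)

for a in [0.9, 0.99, 0.999999]:
    # Cauchy-Schwarz on an interval of length ≤ 1: ∫_0^a f ≤ sqrt(∫_0^a f^2)
    assert F1(a) <= math.sqrt(F2(a)) + 1e-12

# both limits exist as a → 1^-: 4/3 for ∫f and 2 for ∫f², and 4/3 ≤ sqrt(2)
print(F1(0.999999), F2(0.999999))
```

Here $\int_0^a f^2$ stays bounded (limit $2$), and indeed $\int_0^a f$ converges (limit $4/3 \le \sqrt 2$), matching the claim.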
H: Group of order $1225$ is abelian A problem asked me to show that a group, $G$, of order $1225= 5^{2}\times 7^{2}$ is abelian. I did this by using Sylow's Theorem to show that we have two normal subgroups: one of order $49$ and the other of order $25$. Since the intersection of these subgroups must be $1$, it follows that $G$ is a direct product of these two subgroups. But subgroups of order $p^{2}$ are abelian, where $p$ is a prime. Thus $G$ is abelian. The question I have upon completing this proof is that it seems to imply that $G$ is isomorphic to $\Bbb Z_{25} \times \Bbb Z_{49}$. But this isn't always the case, since $\Bbb Z_{35} \times\Bbb Z_{35}$ is also a group of order $1225$. Where is my argument failing? AI: A group of order $p^2$, where $p$ is a prime, is isomorphic to $\mathbb{Z}_{p^2}$ or $\mathbb{Z}_{p} \times \mathbb{Z}_{p}$. You seem to miss the latter case.
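One can confirm the non-uniqueness by brute force: in $\Bbb Z_{35}\times\Bbb Z_{35}$ every element has order dividing $35$, so that group of order $1225$ is abelian but not cyclic, while $\Bbb Z_{25}\times\Bbb Z_{49}$ is cyclic. A quick sketch (helper name is mine):

```python
from math import gcd

def order_in_product(a, b, m, n):
    # order of (a, b) in Z_m x Z_n is lcm(ord(a), ord(b)), with ord(a) = m / gcd(a, m)
    oa = m // gcd(a, m)
    ob = n // gcd(b, n)
    return oa * ob // gcd(oa, ob)

# Z_35 x Z_35: every element has order dividing 35, so the group is not cyclic
assert max(order_in_product(a, b, 35, 35)
           for a in range(35) for b in range(35)) == 35

# Z_25 x Z_49: the element (1, 1) has order lcm(25, 49) = 1225, so it is cyclic
assert order_in_product(1, 1, 25, 49) == 1225
```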
H: I want to find out the angle for the expression $a^3 + b^3 = c^3$. Just as in the Pythagorean theorem the angle is $90$ degrees for the expression $a^2 + b^2 = c^2$; however, I know that no integer solution is possible. AI: There is no single angle corresponding to the relationship $a^3+b^3=c^3$. Suppose that a triangle has sides of lengths $a,b$, and $c$ such that $a^3+b^3=c^3$. We know from the law of cosines that if $\theta$ is the angle opposite the side of length $c$, then $c^2=a^2+b^2-2ab\cos\theta$, so $$\cos\theta=\frac{a^2+b^2-c^2}{2ab}\;.$$ Now let’s look at just a few examples. If $a=b=1$, then $c=\sqrt[3]2$, and $$\cos\theta=\frac{2-2^{2/3}}2\approx0.20630\;.$$ If $a=1$ and $b=2$, then $c=\sqrt[3]9$, and $$\cos\theta=\frac{5-9^{2/3}}4\approx0.16831\;.$$ If $a=1$ and $b=3$, then $c=\sqrt[3]{28}$, and $$\cos\theta=\frac{10-28^{2/3}}6\approx0.12985\;.$$ As you can see, these values of $\cos\theta$ are all different, so the angles themselves are also different.
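The three numerical values above can be reproduced directly from the law-of-cosines formula (a minimal sketch; it just evaluates $\cos\theta$ with $c=(a^3+b^3)^{1/3}$):

```python
def cos_theta(a, b):
    # law of cosines: cos(theta) = (a^2 + b^2 - c^2) / (2ab), with c^3 = a^3 + b^3
    c = (a**3 + b**3) ** (1.0 / 3.0)
    return (a * a + b * b - c * c) / (2 * a * b)

assert abs(cos_theta(1, 1) - 0.20630) < 1e-4
assert abs(cos_theta(1, 2) - 0.16831) < 1e-4
assert abs(cos_theta(1, 3) - 0.12985) < 1e-4
```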
H: Problem with ratios of integrals I have the following integral from a paper I'm reading: $$f(z)=\frac{\displaystyle\int_0^{\pi/2}\,\tan \alpha\, J_0(z \sin\alpha)\, d\alpha}{\displaystyle \int_0^{\pi/2}\tan\alpha\,d\alpha}$$ where $J_0$ is the Bessel function of the first kind and zeroth order. The authors claim that although the integrals in the numerator and the denominator are infinite, their ratio is finite. They then proceed to give the value (no proof) as $$f(z)=J_0(z)$$ How can this be proven (or is this correct)? AI: Let $$ f(z, \epsilon) = \frac{ \int\limits_0^{\frac{\pi}{2}-\epsilon} \tan(\alpha) J_0\left(z \sin(\alpha) \right) \mathrm{d} \alpha}{ \int\limits_0^{\frac{\pi}{2}-\epsilon} \tan(\alpha) \mathrm{d} \alpha} $$ I guess the paper defines $f(z)$ as the right limit: $$ f(z) \stackrel{\text{def}}{=} \lim_{\epsilon \downarrow 0} f(z,\epsilon) $$ The limit can be evaluated by L'Hospital's rule: $$ \lim_{\epsilon \downarrow 0} \frac{ \int\limits_0^{\frac{\pi}{2}-\epsilon} \tan(\alpha) J_0\left(z \sin(\alpha) \right) \mathrm{d} \alpha}{ \int\limits_0^{\frac{\pi}{2}-\epsilon} \tan(\alpha) \mathrm{d} \alpha} = \lim_{\epsilon \downarrow 0} \frac{ \frac{\mathrm{d}}{\mathrm{d}\epsilon}\int\limits_0^{\frac{\pi}{2}-\epsilon} \tan(\alpha) J_0\left(z \sin(\alpha) \right) \mathrm{d} \alpha}{ \frac{\mathrm{d}}{\mathrm{d}\epsilon} \int\limits_0^{\frac{\pi}{2}-\epsilon} \tan(\alpha) \mathrm{d} \alpha} = \lim_{\epsilon \downarrow 0} \frac{-\tan\left(\frac{\pi}{2}-\epsilon\right) J_0\left(z \sin\left(\frac{\pi}{2}-\epsilon\right) \right)}{ -\tan\left(\frac{\pi}{2}-\epsilon\right) } = J_0(z) $$
H: Floor function properties: $[2x] = [x] + [ x + \frac12 ]$ and $[nx] = \sum_{k = 0}^{n - 1} [ x + \frac{k}{n} ] $ I'm doing some exercises on Apostol's calculus, on the floor function. Now, he doesn't give an explicit definition of $[x]$, so I'm going with this one: DEFINITION Given $x\in \Bbb R$, the integer part of $x$ is the unique $z\in \Bbb Z$ such that $$z\leq x < z+1$$ and we denote it by $[x]$. Now he asks to prove some basic things about it, such as: if $n\in \Bbb Z$, then $[x+n]=[x]+n$ So I proved it like this: Let $z=[x+n]$ and $z'=[x]$. Then we have that $$z\leq x+n<z+1$$ $$z'\leq x<z'+1$$ Then $$z'+n\leq x+n<z'+n+1$$ But since $z'$ is an integer, so is $z'+n$. Since $z$ is unique, it must be that $z'+n=z$. However, this doesn't seem to get me anywhere to prove that $$\left[ {2x} \right] = \left[ x \right] + \left[ {x + \frac{1}{2}} \right]$$ and in general that $$\left[ {nx} \right] = \sum\limits_{k = 0}^{n - 1} {\left[ {x + \frac{k}{n}} \right]} $$ Obviously one could do an informal proof thinking about "the carries", but that's not the idea, let alone how tedious it would be. Maybe there is some easier or clearer characterization of $[x]$ in terms of $x$ to work this out. Another property is $$[-x]=\begin{cases}-[x]\text{ ; if }x\in \Bbb Z \cr-[x]-1 \text{ ; otherwise}\end{cases}$$ I argue: if $x\in\Bbb Z$, it is clear $[x]=x$. Then $-[x]=-x$, and $-[x]\in \Bbb Z$ so $[-[x]]=-[x]=[-x]$. For the other, I guess one could say: $$n \leqslant x < n + 1 \Rightarrow - n - 1 < -x \leqslant -n$$ and since $x$ is not an integer, this should be the same as $$ - n - 1 \leqslant -x < -n$$ $$ - n - 1 \leqslant -x < (-n-1)+1$$ So $[-x]=-[x]-1$ AI: Let $n=\lfloor x\rfloor$, and let $\alpha=x-n$; clearly either $0\le\alpha<\frac12$, or $\frac12\le\alpha<1$.
Then $$\lfloor 2x\rfloor=\lfloor 2n+2\alpha\rfloor=2n+\lfloor 2\alpha\rfloor=\begin{cases} 2n,&\text{if }0\le\alpha<\frac12\\ 2n+1,&\text{if }\frac12\le\alpha<1\;, \end{cases}$$ and $$\left\lfloor x+\frac12\right\rfloor=\left\lfloor n+\alpha+\frac12\right\rfloor=n+\left\lfloor\alpha+\frac12\right\rfloor=\begin{cases} n,&\text{if }0\le\alpha<\frac12\\ n+1&\text{if }\frac12\le\alpha<1\;; \end{cases}$$ since $\lfloor x\rfloor=n$, the first result is immediate. The general case is handled similarly, except that there are $n$ cases; for $k=0,\dots,n-1$, case $k$ is $$\frac kn\le\alpha<\frac{k+1}n\;.$$
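As a quick numerical sanity check of the general identity $\lfloor nx\rfloor=\sum_{k=0}^{n-1}\lfloor x+\frac kn\rfloor$ (Hermite's identity), one can test it over exact rationals; a sketch using Python's `Fraction` to avoid floating-point rounding at integer boundaries:

```python
from fractions import Fraction
from math import floor

# check floor(n*x) == sum_{k=0}^{n-1} floor(x + k/n) over many rational x
for n in range(1, 8):
    for p in range(-50, 50):
        x = Fraction(p, 7)          # exact rationals, including negatives
        lhs = floor(n * x)
        rhs = sum(floor(x + Fraction(k, n)) for k in range(n))
        assert lhs == rhs
```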
H: Map $\{ z=x+iy : |x|\geq y \}$ with a branch cut on the negative imaginary axis from $[-i,0]$ to the unit disc I have a conformal map that I've been having problems with. Map the set $\{ z=x+iy : |x|\geq y \}$ with a branch cut on the negative imaginary axis from $[-i,0]$ to the unit disc. I've tried a few things but the branch cut keeps messing me up. AI: The interior of your domain consists of all points $z=x+iy$ which satisfy one of the three conditions: (i) $x>0$ and $y<x$, (ii) $x=0$ and $y<-1$, or (iii) $x<0$ and $y<-x$. First map $z$ to $w=iz$. This will rotate your picture counterclockwise by $90^\circ$. Then map $w$ to $\tau=w^{4/3}$. This will close the open sector, and your domain will be mapped to the complex plane minus a cut from $\tau=1$ along the real axis to $\tau=-\infty$. Then map $\tau$ to $\zeta=\sqrt{\tau-1}$, which will send your domain to the right half plane $\Re(\zeta)>0$. Finally map $\zeta$ to ${1-\zeta \over 1+\zeta}$, which will map your domain to the unit disc.
H: What is the integral of $\int\frac{(x-1)e^{1/x}dx}{x}$? I have been trying to solve this integral $\int\frac{(x-1)e^{1/x}dx}{x}$ I used WolframAlpha to solve it but it doesn't show the process. The solution is $xe^{1/x} + \text{constant}$ AI: We can integrate this by separating the integrand and integrating by parts: $$\begin{align} \int\frac{(x-1)e^{1/x}}{x}dx &=\int e^{1/x}dx-\int \frac{1}{x} e^{1/x}dx\\ &=\int e^{1/x}dx+\int x\left(\frac{d}{dx}e^{1/x}\right)dx\\ &= \int e^{1/x}dx + xe^{1/x}- \int e^{1/x}dx\\ &= xe^{1/x} + C \end{align}$$
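One way to gain confidence in the result without a CAS is to check numerically that the claimed antiderivative differentiates back to the integrand (a sketch; the sample points and tolerance are ad hoc):

```python
import math

def F(x):
    # claimed antiderivative: x * e^(1/x)
    return x * math.exp(1.0 / x)

def f(x):
    # integrand: (x - 1) e^(1/x) / x
    return (x - 1) * math.exp(1.0 / x) / x

h = 1e-6
for x in [0.5, 1.0, 2.0, 3.7]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # central difference ≈ F'(x)
    assert abs(deriv - f(x)) < 1e-5
```

Indeed $F'(x)=e^{1/x}+x\,e^{1/x}\cdot(-1/x^2)=e^{1/x}\frac{x-1}{x}$, which is exactly the integrand.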
H: Fourier transform of even/odd function How can I show that the Fourier transform of an even integrable function $f\colon \mathbb{R}\to\mathbb{R}$ is even real-valued function? And the Fourier transform of an odd integrable function $f\colon \mathbb{R}\to\mathbb{R}$ is odd and purely imaginary function? AI: Let $f: \mathbb{R} \to \mathbb{R}$ be an integrable function and let $\hat{f}$ denote its Fourier transform, i.e. $$ \hat{f}(\xi)=\int_\mathbb{R}e^{ix\xi}f(x)dx. $$ We have $$ \overline{\hat{f}(\xi)}=\hat{f}(-\xi)=\int_\mathbb{R}e^{-ix\xi}f(x)dx=\int_\mathbb{R}e^{iy\xi}f(-y)dy. $$ If $f$ is even then $$ \overline{\hat{f}(\xi)}=\hat{f}(-\xi)=\int_\mathbb{R}e^{iy\xi}f(y)dy=\hat{f}(\xi), $$ i.e. $\hat{f}$ is an even real-valued function. If $f$ is odd then $$ \overline{\hat{f}(\xi)}=\hat{f}(-\xi)=\int_\mathbb{R}-e^{iy\xi}f(y)dy=-\hat{f}(\xi), $$ i.e. $\hat{f}$ is an odd purely imaginary function.
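A discrete analogue of the same symmetry argument (my construction, not the continuous transform itself): the DFT of a real sequence that is even under index negation mod $N$ is real, and that of an odd sequence is purely imaginary:

```python
import cmath

def dft(a):
    # plain discrete Fourier transform with the e^{+2πikn/N} convention
    N = len(a)
    return [sum(a[n] * cmath.exp(2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# an "even" sequence: a[n] == a[(-n) % N]
even = [4, 1, 2, 3, 2, 1]
for X in dft(even):
    assert abs(X.imag) < 1e-9      # transform is real

# an "odd" sequence: a[n] == -a[(-n) % N]
odd = [0, 1, 2, 0, -2, -1]
for X in dft(odd):
    assert abs(X.real) < 1e-9      # transform is purely imaginary
```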
H: Is the subgroup of quadratic residues modulo $N = pq$ a cyclic group if $p$ and $q$ are primes? I saw this question When is the group of quadratic residues cyclic? and the answers to that. I have a similar question. Assume $N=pq$ where $p$ and $q$ are primes. We know that $\mathbb{Z}^*_N$ is not cyclic. So my question is what can we say about the subgroup of quadratic residues modulo $N$? Is it cyclic, or non cyclic? AI: The direct product of two cyclic groups is cyclic if and only if their orders are relatively prime. Your group is a direct product of cyclic groups of order $(p-1)/2$ and $(q-1)/2$ respectively (the quadratic residues modulo $p$ and modulo $q$), so it is cyclic if and only if $\gcd(p-1,q-1)=2$.
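The criterion is easy to check by brute force for small primes (a sketch; the helper names are mine). With $p=3,q=7$ we have $\gcd(p-1,q-1)=2$ and the subgroup of squares is cyclic; with $p=5,q=13$ we have $\gcd(4,12)=4$ and it is not:

```python
from math import gcd

def qr_group(p, q):
    # the set of squares of units modulo N = pq
    N = p * q
    units = [a for a in range(1, N) if gcd(a, N) == 1]
    return sorted({a * a % N for a in units})

def is_cyclic(elems, N):
    n = len(elems)
    for g in elems:                 # look for an element whose order is the group order
        x, order = g, 1
        while x != 1:
            x = x * g % N
            order += 1
        if order == n:
            return True
    return False

assert is_cyclic(qr_group(3, 7), 21)        # gcd(2, 6) = 2  -> cyclic
assert not is_cyclic(qr_group(5, 13), 65)   # gcd(4, 12) = 4 -> not cyclic
```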
H: If $\exp(itH) A \exp(-itH) = A$ for all $t$, do $A$ and $H$ commute? Let $H$ be a self-adjoint $n \times n$ matrix with complex entries. $H$ gives rise to a continuous 1-parameter group of unitaries $t \mapsto U_t = \exp(itH) : \mathbb{R} \to U(n)$. Let $A$ be some other $n \times n$ matrix. If $A$ commutes with $H$, then it commutes with $itH$ for all $t$ and hence with $U_t$ for all $t$ (by considering the series expansion). Thus we have $$ U_t A U_t^* = AU_tU_t^* = A \ \ \ \text{for all }t \ \ \ (*)$$ I suspect that the converse is true. That is, if $(*)$ holds, then $A$ must commute with $H$, but I'm not sure how to prove this and would appreciate some help. AI: Differentiate $AU_t=U_tA$ and evaluate at $t=0$. (Relevant: matrix calculus.)
H: Relationship of Fourier series and Hilbert spaces? I just read in a textbook that a Hilbert space can be defined or represented by an appropriate Fourier series. How might that be? Is it because a Fourier series is an infinite series that adequately "covers" a Hilbert space? Apart from this I (a mathematical novice) have a hard time seeing the connection between a Hilbert space, a vector construct, and a Fourier series (of trigonometric functions). AI: The space of periodic $L^2$ functions (say with period $2\pi$) forms a Hilbert space. (Here $L^2$ means that $\int_0^{2\pi} f(x)^2 dx$ exists.) The inner product of two functions is given by $\int_0^{2\pi} f(x)g(x) dx$. (Here and above I am thinking of real-valued functions; for complex valued functions the formulas are similar.) Now we consider two facts, one about $L^2$-functions, and one about Hilbert space Every $L^2$-function can be expanded as a Fourier series. Every Hilbert space admits an orthonormal basis, and each vector in the Hilbert space can be expanded as a series in terms of this orthonormal basis. It turns out that the first of these facts is a special case of the second: we can interpret the trigonometric functions as an orthonormal basis of the space of $L^2$-functions, and then the Fourier expansion of an arbitrary $L^2$-function is the same thing as its Hilbert space-theoretic expansion in terms of the orthonormal basis. Summary/big picture: To see how a "vector construct" like Hilbert space relates to Fourier series, you don't consider a single function in isolation, but instead consider the entire vector space of $L^2$-functions, which is in fact a Hilbert space.
H: Finding $F(x)$ so that $F'(x) = e^{-x^2}$ Question: Let $f(x) = e^{-x^2}$. Find a formula for a function $F$ so that $F'(x) = f(x)$. My Thoughts: We have just proven the fundamental theorem of calculus, part II of which states: Let $f$ be a continuous function on a finite interval $[a,b]$. Define $$F(x) := \int_a^xf(t)\, dt$$ Then $F \in C^1[a,b]$ and $F'(x) = f(x)$. Taking advantage of this example in the theorem, and knowing from previous experience that $f(x) = e^{-x^2}$ has no elementary antiderivatives, should I set $$F(x) := \int_{-\infty}^x e^{-x^2}\,dx$$ and consider this $F(x)$ a suitable answer? Edit: After the helpful comment and two answers, I see that $$ F(x) := \int_0^x e^{-t^2}\,dt$$ is an appropriate response to this question. Thank you. AI: This integral has no elementary antiderivative as you mention. However, we can set the lower limit to a finite constant, $a$ and get $$F(x, a)=G(x)-G(a)=\int_a^x \exp(-t^2)\, dt$$ where $$G(t)=\int \exp(-t^2)\, dt$$ so that $$\frac{d}{dx}F(x, a)=\frac{d}{dx} G(x)=f(x)$$ If you would like, this function can be written in terms of the non-elementary error function if $a=0$: $$F(x, 0)=\frac{\sqrt{\pi}}{2}\operatorname{erf}(x)$$
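Since Python's standard library exposes the error function as `math.erf`, the edited answer $F(x)=\int_0^x e^{-t^2}\,dt=\frac{\sqrt\pi}2\operatorname{erf}(x)$ can be verified numerically by checking $F'(x)=e^{-x^2}$ with a central difference (a sketch; sample points are arbitrary):

```python
import math

def F(x):
    # F(x) = ∫_0^x exp(-t^2) dt = (sqrt(pi)/2) * erf(x)
    return math.sqrt(math.pi) / 2.0 * math.erf(x)

h = 1e-6
for x in [-2.0, -0.3, 0.0, 1.0, 2.5]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # ≈ F'(x)
    assert abs(deriv - math.exp(-x * x)) < 1e-8
```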
H: Prove that Brownian Motion absorbed at the origin is Markov I'm trying to prove that Brownian motion absorbed at the origin is a Markov process with respect to the original filtration $\{\mathcal{F}_{t}\}$. To be more specific, let $(B_{t},\mathcal{F}_{t})_{t \geq 0}$ be a 1-d brownian motion such that $B_{0}=x,\ a.s. \mathbb{P}\ $ for some $x>0.\ $ Let $T_{0}=\inf\{ t \geq 0, \ B_{t}=0\}$ be the hitting time of $0$, and define the Brownian motion absorbed at origin to be the process $\{B_{t \land T_{0}}, \mathcal{F}_{t} \}$. I want to show that such a process is Markov w.r.t. $\mathcal{F_{t}}, \ $ i.e. $$\mathbb{P}[B_{t \land T_{0}} \in \Gamma | \mathcal{F}_{s}]=\mathbb{P}[B_{t \land T_{0}} \in \Gamma |B_{s \land T_{0}}]$$ for every pair of $0<s<t$. And further more it has transition density given by: $$\mathbb{P}[B_{(t+s) \land T_{0}} \in dy |B_{s \land T_{0}}=z]=\mathbb{P}^{z}[B_{t} \in dy, \ T_{0} >t], \ \forall y,z >0,$$ where $\mathbb{P}^{z}$ is the probability measure under which $B$ starts from $z$ almost surely. I tried to use the strong Markov property for Brownian motion to derive these equalities, but I'm having a hard time dealing with $T_{0}$. For example, if $0 \notin \Gamma$, then$\{B_{t \land T_{0}} \in \Gamma\}=\{B_{t} \in \Gamma, T_{0} >t\}$. But how to apply strong Markov property to compute $\mathbb{P}[B_{t} \in \Gamma, T_{0} >t\ | \mathcal{F}_{s} ]\ $? Any answer or comment is greatly appreciated, thanks! AI: For every $r$ let $\theta_r$ denote the time-shift by $r$. Then, $$ [B_t\in\Gamma,T_0\gt t]=[T_0\gt s,B_s+(B_t-B_s)\in\Gamma,T_0\circ\theta_s\gt t-s], $$ where $[T_0\gt s]$ and $B_s$ are $\mathcal F_s$-measurable and $B_t-B_s$ is independent of $\mathcal F_s$ and distributed like $B_{t-s}$. Hence, $$ \mathbb P^x(B_t\in\Gamma,T_0\gt t\mid\mathcal F_s)=\mathbf 1_{T_0\gt s}\cdot u_{t-s}(B_s), $$ where, for every $r$ and $y$, $$ u_r(y)=\mathbb P^y(B_r\in\Gamma, T_0\gt r). 
$$ Likewise, $$ [B_{t\wedge T_0}\in\Gamma]=[B_t\in\Gamma,T_0\gt t]\cup[0\in\Gamma,T_0\lt t], $$ hence $$ \mathbb P^x(B_{t\wedge T_0}\in\Gamma\mid\mathcal F_s)=\mathbf 1_{T_0\gt s}\cdot u_{t-s}(B_s)+\mathbf 1_{0\in\Gamma}\cdot(1-\mathbf 1_{T_0\gt s}\cdot v_{t-s}(B_s)). $$ where, for every $r$ and $y$, $$ v_r(y)=\mathbb P^y(T_0\gt r). $$
H: Implicit function theorem and PDE; do we get uniqueness? Please see this page: The implicit function theorem: A PDE example. In the implicit function theorem they quote, uniqueness is not mentioned. But the inverse function theorem (which is equivalent to the IMFT), we do get uniqueness. So am I right that the solution got by using the theorem is unique as well? AI: Their statement of the Implicit Function theorem does contain a uniqueness part: "$F(x,y)=0$ if and only if". However, this is a local statement, only for $(x,y)\in U\times V$. It does not exclude the possibility of having other solutions outside of small neighborhood $V$.
H: 10 most significant digits of the sum of 100 50-digit numbers This is about Project Euler #13. You are given 100 50-digit numbers and are asked to calculate the 10 most significant digits of the sum of the numbers. The solution threads stated that we are only required to sum the 11 most significant digits of each number to obtain the answer. Why is this? Here's a counterexample to that with fewer digits. Consider a small example, 2 numbers, and we have to find the 4 most significant digits of the sum. 123446 234556 If we consider only the 5 most significant digits, we get 12344+23455 = 35799 => first 4 = 3579 But if we take all 6, we get 123446+234556 = 358002 => first 4 = 3580 So why would summing only the 11 most significant digits of each number yield the correct answer? (I have tagged this with modular arithmetic as I suspect it's related to that, feel free to correct the tags if not) AI: It is not guaranteed that summing the 11 most significant digits will work, but it is very likely. When you sum the 100 sets of 11 digits you get a 13-digit number and throw away the lower two digits. If you round, each of the 100 numbers has an error of at most 0.5 and they should be equally distributed up and down. The standard deviation is about 5, which would have to impact the hundreds digit of your sum to make a problem. This will only happen $5\%$ of the time. If you keep the top 12 digits the error chance goes down to $0.5\%$. In the Project Euler setting, you can just do the addition this way, and if you are wrong try going up or down one, and be very likely to be right.
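A sketch of the truncation bound in Python (the random inputs and seed are mine; for the real problem Python's big integers make the exact sum trivial anyway). Keeping the 11 most significant digits means dropping the low $39$ digits of each number, so over $100$ numbers the total discarded amount is strictly less than $100\cdot 10^{39}$, while the full sum has $52$ digits:

```python
import random

random.seed(0)
nums = [random.randrange(10**49, 10**50) for _ in range(100)]  # 100 random 50-digit numbers

full = sum(nums)
# keep only the 11 most significant digits of each number (truncation, no rounding)
trunc = sum(n // 10**39 for n in nums) * 10**39

# each truncation discards < 10**39, so the total error is < 100 * 10**39 = 10**41
assert 0 <= full - trunc < 100 * 10**39

# the full sum lies in [10**51, 10**52), i.e. has 52 digits, so an error
# below 10**41 only rarely propagates into the 10 leading digits
print(str(full)[:10])
```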
H: Can $\mathbb CP^n$ be the boundary of a compact manifold? Can $\mathbb CP^n$ be the boundary of a compact manifold? For example, when $n=1$, $\mathbb CP^n=S^2$, therefore it is the boundary of $B^3$. AI: Short answer: For $n$ odd, yes. For $n$ even, no. The explanation for each case: $n$ odd: There is a fibration $$S^2 \hookrightarrow \mathbb{C}P^{2n+1} \longrightarrow \mathbb{H}P^n.$$ Then the associated disk bundle $$D^3 \hookrightarrow E \longrightarrow \mathbb{H}P^n$$ has as total space a compact manifold $E$ with boundary $\mathbb{C}P^{2n+1}$. $n$ even: By a result of Thom, a smooth manifold bounds a compact manifold if and only if all of its Stiefel-Whitney numbers are zero. For $\mathbb{C}P^{2n}$, we have that $$\langle w_{4n}(\mathbb{C}P^{2n}), [\mathbb{C}P^{2n}] \rangle = \chi(\mathbb{C}P^{2n}) \pmod 2 = 1 \neq 0,$$ so $\mathbb{C}P^{2n}$ has a nonzero Stiefel-Whitney number. Therefore $\mathbb{C}P^{2n}$ cannot bound any compact manifold.
H: A polynomial can be considered a number? According to Wikipedia, Euler's number is: $$e = 1 + \frac{1}{1} + \frac{1}{1\times 2} + \frac{1}{1\times 2\times 3} + \frac{1}{1\times 2\times 3\times 4}+\cdots$$ And I see its structure is quite similar to the structure of a polynomial: $$a_nt^n+a_{n-1}t^{n-1}+\cdots+a_1t+a_0$$ Can we consider polynomials as numbers? At least in some specific sense? AI: When you start to consider more general ideas than measuring geometric shapes and counting elements of sets, you start using more general sorts of objects to quantify those ideas. Or sometimes you consider interesting algebraic structures that are analogous to familiar ones, but with some other sort of thing fitting into the role where "numbers" fit into the familiar ones. I'm of the opinion that it's reasonable to call such things numbers. However, I don't think I would ever say that out loud (other than in an opinion piece), since I would expect listeners to be confused by my usage of the word, except in cases where the word "number" is traditionally used (e.g. we say "ordinal number" versus "well-order type"). Commonly in such situations there are other words available: for example, "scalar".
H: Galois extension and Tensor product The following theorem is proved in Bourbaki's algebra. They use the technique of Galois descent. I'd like to know the proof without using it if any. Theorem Let $K$ be a field. Let $\Omega/K$ be an extension. Let $N/K$ and $L/K$ be subextensions of $\Omega/K$. Suppose $N/K$ is a (not necessarily finite) Galois extension and $L \cap N = K$. Then the canonical homomorphism $\psi:L\otimes_K N \rightarrow LN$ is an isomorphism. AI: First assume $N/K$ is finite Galois. Then $LN/L$ (inside $\Omega$) is finite Galois and the natural map $\mathrm{Gal}(LN/L)\rightarrow\mathrm{Gal}(N/K)$ is injective with image $\mathrm{Gal}(N/N\cap L)=\mathrm{Gal}(N/K)$, i.e., it is an isomorphism. So, $[LN:L]=[N:K]=\dim_L(L\otimes_KN)$. The map $L\otimes_KN\rightarrow LN$ is surjective in any case because $LN/L$ is algebraic, and since both sides are $L$-vector spaces of the same (finite) dimension, the map is an isomorphism. In general, write $N=\bigcup_iN_i$ as a directed union of finite Galois extensions $N_i/K$. Then $L\otimes_KN=\varinjlim L\otimes_KN_i$, $LN=\bigcup_iLN_i$, and $L\otimes_KN\rightarrow LN$ can be identified with the direct limit of the isomorphisms $L\otimes_KN_i\cong LN_i$, and so is itself an isomorphism. To see that $LN=\bigcup_iLN_i$, first observe that the RHS is clearly contained in the LHS. Conversely, if $\alpha\in LN$, then $\alpha$ is a polynomial (with coefficients in $L$) in elements $\beta_1,\ldots,\beta_r\in N$. If $i$ is such that $\beta_j\in N_i$ for all $j$, then $\alpha\in LN_i$. It worries me a bit that the sacred text (Bourbaki) uses something like Galois descent to prove this...it leads me to believe I've made a mistake somewhere. If so, I apologize.
H: How was the Monster's existence originally suspected? I've read in many places that the Monster group was suspected to exist before it was actually proven to exist, and further that many of its properties were deduced contingent upon existence. For example, in ncatlab's article, The Monster group was predicted to exist by Bernd Fischer and Robert Griess in 1973, as a simple group containing the Fischer groups and some other sporadic simple groups as subquotients. Subsequent work by Fischer, Conway, Norton and Thompson estimated the order of $M$ and discovered other properties and subgroups, assuming that it existed. Or Wikipedia, The Monster was predicted by Bernd Fischer (unpublished) and Robert Griess (1976) in about 1973 as a simple group containing a double cover of Fischer's baby monster group as a centralizer of an involution. Within a few months the order of $M$ was found by Griess using the Thompson order formula [...] Or The Spirit of Moonshine, Its existence is a non-trivial fact: when the original moonshine conjectures were made, mathematicians suspected its existence, and had been able to work out its character table, but could not prove it actually existed. They did know that the dimensions of the smallest irreducible representations would be 1, 196883; and 21296876. It surprises me that this object could have been predicted before being rigorously discovered, due to it being often described as very complicated and highly nonobvious (or at least its construction). Take for instance the description in this AMS review of Moonshine Beyond the Monster: The proof of the moonshine conjectures depends on several coincidences. Even the existence of the monster seems to be a fluke in any of the known constructions: these all depend on long, strange calculations that just happen to work for no obvious reason, and would not have been done if the monster had not already been suspected to exist. 
It'd be cool to be acquainted with this part of the story in some more detail at an accessible level, if possible, though I realize it may necessarily involve heavy machinery or convoluted calculations. AI: I'm not an expert on this topic, and have only read summaries of the proofs and techniques. Nonetheless, let me give this a try. By the odd order theorem, any non-abelian simple group has a nontrivial involution. One of the key techniques in the classification of finite simple groups was to study the centralizers of involutions. That is, you try to find all finite simple groups such that the centralizer of an involution has a specific form. Once you prove enough results you end up in the following situation. You look at candidate centralizers of involutions which are made up entirely of the simple groups that you already know. You use this to try to find candidates for any new simple groups that you didn't know already. If your search turns up a new example, you repeat the process trying to find new candidate centralizers of involution involving the new simple group. Thus it should not be surprising that when people found the Baby Monster they started looking for candidates centralizers of involutions which you can make out of the Baby Monster. Once you find a good candidate that you can't rule out (e.g. they couldn't find any reason why the centralizer of an involution couldn't be the double cover of the Baby Monster), then you conjecture a new simple group. What's interesting is that when you get to the Monster you can try repeating this process to try to find yet another new simple group where the centralizer of an involution is built from the Monster. There's no reason a priori that this couldn't continue forever, with more and more sporadic groups. But it turns out that it stops at the Monster.
H: What is the meaning of the term "inductively P map"? In this page is the definition of an inductively open map. But in this pdf is the definition of a inductively P map, where P is a property of maps. But there is a difference in the definitions. In the former it's said that "the image of the subset is the same as the image of the whole space," but in the other it's said symbolically that the image of the subset is the entire codomain space. It's more credible that the terminology of the paper is the correct, but in some papers is used nonstandard terminology and I'm a little bit confused. What is the meaning of the term "inductively P map" is the sense of the paper standard or is the other page correct? and in any case what is the terminology of the other case? AI: There's no conflict between the two definitions, since the one in the paper refers to a map onto the codomain, so in this case the image of the domain is the entire codomain.
H: Using the sum of squares formula to solve more complex sums. I'm studying integration and trying to figure out how to use the sum of squares formula to solve more complicated sums. For example: knowing that $$\sum_{i=1}^n i^2 = \frac{n (n+1) (2 n+1)}{6}$$ how can we simplify $$\sum_{i=1}^n (i/n)^2$$ AI: The key here is what anon wrote in his comment. It's helpful to expand out the terms of the sum: $$ \begin{eqnarray} \sum_{i=1}^n (i/n)^2 & = & \sum_{i=1}^n \frac{i^2}{n^2} \\ &=& \frac{1^2}{n^2} + \frac{2^2}{n^2} + \dots + \frac{n^2}{n^2} \\ &=& \Big( \frac{{1^2} + {2^2} + \dots + {n^2}} {n^2} \Big) \\ & = & \frac{1}{n^2} \sum_{i=1}^n i^2 \\ & = & \frac{1}{n^2} \frac{n (n+1) (2 n+1)}{6} \end{eqnarray} $$
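The simplification $\sum_{i=1}^n (i/n)^2=\frac1{n^2}\cdot\frac{n(n+1)(2n+1)}6$ can be checked exactly with rational arithmetic (a quick sketch):

```python
from fractions import Fraction

def closed_form(n):
    # (1/n^2) * n(n+1)(2n+1)/6
    return Fraction(n * (n + 1) * (2 * n + 1), 6 * n * n)

for n in range(1, 50):
    direct = sum(Fraction(i, n) ** 2 for i in range(1, n + 1))
    assert direct == closed_form(n)
```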
H: Uniform integrability question I know I keep asking similar sorts of questions, but I want to understand exactly what is going on here. So, I will list a full definition and then the problem. (Basically, I want to prove something using Vitali's theorem) A sequence of functions $\{f_n\} \in L_1(\mu)$ is called uniformly integrable if for every $\epsilon >0$ there is a $\delta > 0$ such that for all $E$ with $\mu(E) < \delta$, then $|\int_E{f_nd\mu}| < \epsilon$ for all $n$. The problem is as follows : Suppose that $(X,M,\mu)$ is a measure space, with $\mu(X) < \infty$ and $f_n \in L_1(\mu)$ for all n. Also for some $p>1$ we have that $||f_n||_p < K$, where $K$ is a positive real number. Prove that $\{f_n\}$ is uniformly integrable. So the problem is asking me to show that an $L_p$-bounded sequence is uniformly integrable. Now, this only assumes the existence of SOME $p >1$, so I can't imagine that it's that important. Also, I thought it was strange why we need to assume that $f_n \in L_1$. Don't we get this by the boundedness condition and also the fact that the measure space is finite? In any case, I'm lost. Any ideas? AI: Suppose $\{f_n\}$ is not uniformly integrable, so we have some $\epsilon>0$ such that there exists a sequence of sets $\{E_k\}$ with $\mu(E_k)\to 0$ and for all $k$, there is some $n$ such that $\left|\int_{E_k} f_nd\mu\right|\geq\epsilon$. Then $$\epsilon\leq\int_{E_k}|f_n|d\mu\leq\left(\int_{E_k}|f_n|^{p}d\mu\right)^{1/p}\left(\int_{E_k}1^{q}d\mu\right)^{1/q}=\mu(E_k)^{1/q}\left(\int_{E_k}|f_n|^{p}d\mu\right)^{1/p}\leq K\mu(E_k)^{1/q}$$ thanks to Holder's inequality where $1/p+1/q=1$. Thus $K\geq \epsilon/\mu(E_k)^{1/q}\to \infty$, a contradiction.
H: Finding a coordinate system for the kernel and nullity of a linear map Let $T : C^{\infty}(\mathbb{R}) \rightarrow C^{\infty}(\mathbb{R})$ be a linear map, which is defined by $T = \frac{d^2}{d x^2} - 3\frac{d}{dx} + 2 \text{id}$, that is, a map which takes a function $f(x)$ and maps it to $$f''(x) - 3f'(x) + 2f(x)$$ Now my exercise asks me to find coordinate systems for the kernel of $T$ and for the nullity of $T$. The kernel is the vectors in $C^{\infty}(\mathbb{R})$ which maps to $0$, and thus, it is exactly the functions on the form $\alpha e^{2x} + \beta e^{x}$, since that is the general solution to the differential equation $f''(x) - 3f'(x) + 2f(x) = 0$. To get a coordinate system for the kernel, I take the linear map from $\mathbb{R}^2 \rightarrow \text{ker} T$ associated with the matrix $$ \begin{pmatrix} e^{2x} & e^{x}\\ \end{pmatrix} $$ This map is both linear and bijective, thus an isomorphism of vector spaces. My problem is with the second part. By my notes, the nullity of the map $T$ is the dimension of its kernel. In this case, the dimension of the kernel is 2. How do I define a coordinate system on the number 2? Have I misunderstood something? The exact wording of the exercise: Let $T : C^{\infty}(\mathbb{R}) \rightarrow C^{\infty}(\mathbb{R})$ be the linear map defined by $T = \frac{d^2}{d x^2} - 3\frac{d}{dx} + 2 \text{id}$. Find a co-ordinate system for $\text{ker} T$ and $\text{null} T$. AI: The intended meaning of the question is "Find a coordinate system for $\text{ker}\ T$, and find $\text{null}\ T$".
H: Need help to understand a proof using functional calculus. I am having trouble with understanding part of a proof, I wrote the entire proof out, and I marked the part which I do not understand. I am hoping that someone could explain it to me. I guess I don't understand functional calculus that well :( . Suppose that $a$ and $b$ are positive elements of a $C^*$-algebra $A$ such that $\| ac \| = \| bc \| $ for all $c\in A$. Then $a=b$. Proof: Without loss of generality suppose $a$ and $b$ have norm $1$. It suffices to show that $a^2=b^2$. Assume $a^2-b^2 \not = 0$. Let $\delta = \frac{1}{2} \sup \sigma (a^2-b^2)$. We can suppose $\delta > 0$ (by exchanging $a$ and $b$ if necessary). Let $f$ be a continuous real-valued function on $\sigma (a^2-b^2)$ such that $$ 0\leq f(\lambda ) \leq 1 \ (\lambda \in \sigma (a^2 - b^2)), $$ $$ f(\lambda )=0 \ (\lambda \leq \delta ), \text{ and } f(2\delta )=1. $$ The following part is what I don't understand Let $c= f(a^2 - b^2) \in A$ and by applying functional calculus, we can get $$ \| c(a^2-b^2)c \| = 2\delta . $$ I understand the rest of the proof Since $\lambda > \delta $ whenever $f(\lambda )\not = 0$, it follows that $c(a^2-b^2)c \geq \delta c^2$. Let $\rho$ be a state of $A$ such that $\rho (cb^2c) = \| cb^2 c \| $. Since $\| b \| ^2 = 1$, we get that $\rho (cb^2c)\leq \rho (c^2)$, and so $$ \rho (ca^2c)\geq \rho (cb^2c + \delta c^2) \geq (1+\delta )\rho (cb^2c). $$ Thus $\| ca^2c \| \geq (1+\delta )\| cb^2c \| $. This implies that $\| ca^2 c \| > \| cb^2 c \| $ if $cb^2c \not = 0$. If $cb^2c=0$ then we have $ca^2c \not = 0$ (by the equality $\| c(a^2-b^2)c \| = 2\delta $) and hence $\| ca^2 c \| > \| cb^2c \| $ for all $c$. This implies that $\| ac \| > \| bc \| $, a contradiction.
Now $g(2\delta) = 2 \delta f(2 \delta)^2 = 2 \delta$ and $2 \delta \in \sigma(a^2 - b^2)$, so $$\|c(a^2-b^2)c\| = \|g(a^2 - b^2)\| = \sup_{z \in \sigma(a^2 - b^2)} |g(z)| = 2 \delta,$$ where the last equality holds because $0 \le g(z) \le z \le 2\delta$ on the spectrum and the value $2\delta$ is attained at $z = 2\delta$.
H: What is the lower bound and upper bound on time for inserting n nodes into a binary search tree? So given an array of $n$ numbers, we can sort them using a binary search tree (BST) as a black box. In order to do that, we first build a BST out of the array, taking all the elements in order, and then do an in-order traversal. So the steps will be: Let the empty BST be $\phi$. Take each element of the array and insert it into the BST $\phi$ so that after insertion the BST property of the tree still holds. Do an in-order traversal of the BST. So step 1 is $O(1)$ and step 3 is $\Theta(n)$. I want to find out the lower and upper bounds on step 2. AI: The upper bound is $1+2+\dots+n = \frac{n(n+1)}{2}$ operations, as the upper bound for insertion into a tree of $m$ nodes is $m$. This bound is reached if your array is sorted already. The lower bound is $\log(1)+\log(2)+\dots+\log(n) = \log(n!)$, as the lower bound for insertion into a tree of $m$ nodes is $\log m$. By Stirling's approximation this is $\log (\sqrt{2 \pi n}(\frac{n}{e})^n) = \frac{1}{2}\log(2\pi n) + n(\log(n) - 1)$, which is $\Theta(n\log n)$.
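A quick numerical illustration of these two regimes (a Python sketch; the BST here is a plain unbalanced insert, and the cost counted is the number of comparisons, so the sorted worst case totals $0+1+\dots+(n-1)$):

```python
import random

def insert_cost(arr):
    """Insert the elements of arr one by one into an unbalanced BST,
    returning the total number of comparisons performed."""
    root = None
    total = 0
    for v in arr:
        if root is None:
            root = [v, None, None]   # [value, left, right]
            continue
        node = root
        while True:
            total += 1               # one comparison per node visited
            side = 1 if v < node[0] else 2
            if node[side] is None:
                node[side] = [v, None, None]
                break
            node = node[side]
    return total

random.seed(0)
n = 200
sorted_cost = insert_cost(list(range(n)))   # worst case: already sorted
assert sorted_cost == n * (n - 1) // 2      # 0 + 1 + ... + (n-1)

shuffled = list(range(n))
random.shuffle(shuffled)
rand_cost = insert_cost(shuffled)
# a random order should sit far below the quadratic worst case
assert rand_cost < sorted_cost
```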
H: showing that $X$ is measurable iff $X^{-1}(C')\subset\beta$ Suppose $X:\Omega\rightarrow\Omega'$, where $(\Omega,\beta)$ and $(\Omega',\beta')$ are two measurable spaces. Suppose $C'$ generates $\beta'$; how can I show that $X$ is measurable iff $X^{-1}(C')\subset\beta$? AI: Let $C'$ be a system of subsets of $\Omega'$ such that $\sigma(C')=\beta'$. Suppose that $X$ is $(\beta,\beta')$-measurable, i.e. $X^{-1}(A)\in\beta$ for all $A\in\beta'$. Then we obviously have that $X^{-1}(A)\in\beta$ for all $A\in C'$ since $C'\subseteq \beta'$. Now suppose that $X^{-1}(A)\in \beta$ for all $A\in C'$. If we define $$ \Lambda=\{A\subseteq \Omega' \mid X^{-1}(A)\in\beta\}, $$ then obviously $C'\subseteq \Lambda$. Now show that $\Lambda$ is a $\sigma$-field (by using the properties of pre-images), because then $\beta'=\sigma(C')\subseteq \Lambda$ and hence $X$ is $(\beta,\beta')$-measurable.
H: Match covered graph is 2-connected Seems to be an easy question, but I can't find the right direction. Let $G$ be a connected graph on at least 4 vertices, such that every edge in it participates in a perfect matching. Prove that $G$ is 2-connected. Any help will be appreciated. Thanks in advance. AI: Gerry Myerson has shown that $G$ must be $2$-edge-connected. Since you didn’t specify which you meant, I’ll show that $G$ must be $2$-vertex-connected. Suppose that $G$ is not $2$-vertex-connected. Then there is a vertex $v$ whose removal disconnects $G$ into components $C_1,\dots,C_n$ for some $n\ge 2$. Fix $k\in\{1,\dots,n\}$, let $e$ be an edge from $v$ to $C_k$, and consider a perfect matching $M$ that includes $e$. $M$ cannot include any other edge incident at $v$, so the other edges of $M$ must lie in the components $C_1,\dots,C_n$. Thus, each of these components except $C_k$ must have an even number of vertices, and $C_k$ must have an odd number of vertices. Now choose $i\in\{1,\dots,n\}\setminus\{k\}$; the same argument shows that $C_k$ must have an even number and $C_i$ an odd number of vertices. This contradiction shows that $G$ must be $2$-vertex-connected.
H: How to use Fermat's theorem in congruence problems Two days ago I asked about how to solve questions of the type: Find last digit of $27^{27^{26}}$. or Find the remainder when $27^{45}$ is divided by $7$, using congruences. I have now got this method completely, but I am still not able to solve these types of questions through Fermat's theorem. If somebody can solve one of the above problems step by step through Fermat's theorem or at least can give me some hint or provide anything that can be useful for me in solving these questions (if not a direct solution) then it would be of tremendous help for me. AI: I'll try my best to make it as clear as possible with another question: Find the last digit of $7^{20}$ (the next step is that you try to use this for your first problem): Fermat's theorem, in Euler's generalized form, says that with $a,n$ coprime $$a^{\varphi(n)} \equiv 1 \mod n$$ With $\varphi(10)=4$ you know $7^4 \equiv 1 \mod 10$. Furthermore you get $7^{20} \equiv \left(7^4 \right)^5 \equiv 1 \mod 10$ So the last digit is a $1$. Finding the remainder of $27^{45} \div 7$ is similar. You get $\varphi(7)=6$ and $27^6 \equiv 1 \mod 7$. $27^{45} = 27^{6 \cdot 7 + 3} = 27^{6 \cdot 7} \cdot 27^3 = \left(27^6\right)^7 \cdot 27^3 \equiv 1^7 \cdot 27^3 \equiv 27^3 \mod 7$ I'm pretty sure you are able to solve the last equation.
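These computations are easy to confirm numerically (a quick check using Python's built-in three-argument `pow` for modular exponentiation):

```python
# Last digit of 7^20: Euler's theorem with phi(10) = 4 gives 7^20 ≡ 1 (mod 10)
assert pow(7, 20, 10) == 1

# Remainder of 27^45 mod 7: phi(7) = 6 and 45 = 6*7 + 3, so 27^45 ≡ 27^3 (mod 7)
assert pow(27, 45, 7) == pow(27, 3, 7)

# and since 27 ≡ 6 ≡ -1 (mod 7), the remainder is (-1)^3 ≡ 6 (mod 7)
assert pow(27, 45, 7) == 6
```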
H: Why isn't math on the sine of angles the same as math on the angles in degrees? I noticed something just now. This is probably a stupid question, but I'm going to ask it anyway. Because when I discover that my understanding of a topic is fundamentally flawed, I get nervous. Basically I'm supposed to show that the angle $\alpha$ marked in red satisfies $\sin \alpha = \frac{3}{5}$. Note that this task is on the part of the test without calculator. My first thought was that the whole thing is 90 degrees. And the other two angles I can find easily. AB is 1 and BE is 0.5. And the length of AE is $\frac{\sqrt 5}{2}$. So I calculate the sine of the angle in the bottom triangle: $\sin = \frac{BE}{AE} = \frac{\frac{1}{2}}{\frac{\sqrt 5}{2}} = \frac{1}{\sqrt 5}$ I know that sine of 90 is 1, right. Now it all falls apart, the following is wrong. The angle on the top side of the red angle is equal to the one just calculated. So I did this. $1 - 2 \times \frac{1}{\sqrt 5}$ And expected to get $\frac{3}{5}$, which I didn't. The following is correct. $\arcsin(1) - 2 \times \arcsin(\frac{1}{\sqrt 5}) = 36.86$ $\arcsin(\frac{3}{5}) = 36.86$ Why won't the expression without arcsin give me $\frac{3}{5}$ ? Hope this makes sense, I'll be right here pressing F5 and updating if more info is needed. Thank you for input. AI: The $\sin$ of an angle is not the same as the angle, it's a function of the angle. You can add angles: $$\alpha = \alpha_1+\alpha_2$$ But not the $\sin$s: $$\sin(\alpha) \neq \sin(\alpha_1)+\sin(\alpha_2)$$ In fact, this is true for most functions, and this property is called "non-additivity".
H: Covariance matrix of a complex random variable If one considers a complex random variable as the joint random variable of the real and complex part, the covariance matrix of two complex random variables $Z_{1}=X_{1}+iY_{1}$ and $Z_{2}=X_{2}+iY_{2}$ becomes a $4\times 4$ matrix $$\begin{pmatrix}C^{(rr)}&C^{(ri)}\\C^{(ir)}&C^{(ii)}\end{pmatrix}$$ $$C^{(rr)}_{ij}=E((X_{i}-E(X_{i}))(X_{j}-E(X_{j})))$$ $$C^{(ri)}_{ij}=E((X_{i}-E(X_{i}))(Y_{j}-E(Y_{j})))$$ $$C^{(ir)}_{ij}=E((Y_{i}-E(Y_{i}))(X_{j}-E(X_{j})))$$ $$C^{(ii)}_{ij}=E((Y_{i}-E(Y_{i}))(Y_{j}-E(Y_{j})))$$ However the covariance matrix of two complex random variables is often defined as a $2\times 2$ matrix $$C_{ij}=E((Z_{i}-E(Z_{i}))(Z_{j}-E(Z_{j}))^{\ast})$$ How are these two concepts related? AI: $$C_{k\ell}=C_{k\ell}^{rr}+C_{k\ell}^{ii}+\mathrm i\cdot(C_{k\ell}^{ir}-C_{k\ell}^{ri})=C_{k\ell}^{rr}+C_{k\ell}^{ii}+\mathrm i\cdot(C_{k\ell}^{ir}-C_{\ell k}^{ir})$$
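The stated relation between the $2\times 2$ complex covariance and the blocks of the $4\times 4$ real covariance can be sanity-checked on random data (a NumPy sketch; empirical means here play the role of the expectations, and the identity holds exactly for empirical moments as well):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(2, n))   # real parts of Z1, Z2 (n samples each)
Y = rng.normal(size=(2, n))   # imaginary parts
Z = X + 1j * Y

def cov(u, v):
    """Empirical covariance E[(u - Eu)(v - Ev)] for real samples."""
    return np.mean((u - u.mean()) * (v - v.mean()))

# complex covariance, with conjugation on the second factor
C = np.array([[np.mean((Z[k] - Z[k].mean()) * np.conj(Z[l] - Z[l].mean()))
               for l in range(2)] for k in range(2)])

# the four 2x2 blocks of the real 4x4 covariance matrix
Crr = np.array([[cov(X[k], X[l]) for l in range(2)] for k in range(2)])
Cii = np.array([[cov(Y[k], Y[l]) for l in range(2)] for k in range(2)])
Cri = np.array([[cov(X[k], Y[l]) for l in range(2)] for k in range(2)])
Cir = np.array([[cov(Y[k], X[l]) for l in range(2)] for k in range(2)])

# C = C^rr + C^ii + i (C^ir - C^ri), as in the answer above
assert np.allclose(C, Crr + Cii + 1j * (Cir - Cri))
```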
H: Noetherian domains with finitely many primes For any domain $A$ let $A^\times$ be its group of units. Let $A$ be a noetherian domain with only finitely many prime ideals, and field of fractions $K$. Is the group $K^\times/A^\times$ finitely generated? AI: The answer is no in general. First, by an answer to the question Does every Noetherian domain have finitely many height 1 prime ideals?, we know that $A$ has dimension at most $1$. Let $m_1,\dots, m_n$ be the maximal ideals of $A$. It is easy to see that inside $K^\times$, we have $A^\times =\cap_i (A_{m_i})^\times$, so the canonical map $$ K^\times /A^\times \to \prod_i K^\times/(A_{m_i})^\times$$ is injective and we are reduced to the case where $A$ is local of dimension $\le 1$. If $A$ is integrally closed, then $K^\times /A^\times$ is generated by a uniformizing element of $A$. So the first conclusion is $K^\times /A^\times$ is finitely generated if $A$ is integrally closed. Now the question is what happens if $A$ is local, but not necessarily integrally closed. Let us consider an example. Let $F$ be a field and $c\in F$ an element which is not a square in $F$. Let $A$ be $F[x,y]/(y^2-cx^2)$ localized at the maximal ideal generated by $x,y$. It is an integral domain, noetherian, with only two prime ideals. Let $L=F[\sqrt{c}]$. It can be identified with a subring of $K=\mathrm{Frac}(A)$ by sending $\sqrt{c}$ to $y/x$. It is easy to see that $L\cap A=F$. So $$ L^\times/F^\times \hookrightarrow K^\times/A^\times.$$ We have $L^\times=F^\times+ \sqrt{c}F^\times$, and, set-theoretically, $L^\times/F^\times$ is in bijection with $F^\times$. If we choose an uncountable field $F$, then $K^\times/A^\times$ is uncountable, hence not finitely generated. Concretely, we can take $F=\mathbb C(t)$ and $c=t$. In this counterexample, if $B$ denotes the integral closure of $A$ in $K$, then $L\subset B$ and $B^\times/A^\times$ is not finitely generated.
H: Limit of a sequence of matrices I'm preparing for my exam in linear algebra and I'm stuck with a question. I've tried to find some help in my textbook (Linear Algebra and its applications, 4th Edition, By David C. Lay). I can't find anything about it (maybe because the question is written in Danish and I'm having trouble translating it right?). I'm asked to find the limiting value, $\lim_{n \to \infty}A^nx$ where $$ A = \begin{bmatrix} 0.25 & -0.75 & 0.75 \\[0.3em] -0.5 & 0 & 0.5 \\[0.3em] 0.25 & -0.25 & 0.75 \end{bmatrix} $$ $$ x = \begin{bmatrix} 2 \\[0.3em] 3 \\[0.3em] 3 \end{bmatrix} $$ How am I supposed to solve this? I'm not asking you to calculate the answer for me, but I'm asking for the right way to solve this kind of problem. AI: $A$ is clearly diagonalisable, as the eigenvalues of $A$ are $-1/2,1/2,1$ (three distinct eigenvalues). Therefore, $A = PDP^{-1}$, where $$D = \begin{bmatrix} -0.5 & 0 & 0 \\[0.3em] 0 & 0.5 & 0 \\[0.3em] 0 & 0 & 1 \end{bmatrix}$$ Now, $A^n = PD^nP^{-1}$, therefore, $\displaystyle \lim_{n \rightarrow \infty} A^n = \lim_{n \rightarrow \infty} PD^nP^{-1} = P X P^{-1}$, where, $$X = \begin{bmatrix} 0 & 0 & 0 \\[0.3em] 0 & 0 & 0 \\[0.3em] 0 & 0 & 1 \end{bmatrix}$$ Now, all that remains is to find $P,P^{-1}$. I'll leave this easy calculation, as the method is clear.
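The eigenvalue claim and the convergence can be checked numerically (a NumPy sketch): after many multiplications, $A^n x$ should settle on a vector in the eigenvalue-$1$ eigenspace, since the $\pm 1/2$ components die off.

```python
import numpy as np

A = np.array([[0.25, -0.75, 0.75],
              [-0.5,  0.0,  0.5 ],
              [0.25, -0.25, 0.75]])
x = np.array([2.0, 3.0, 3.0])

# the eigenvalues should be -1/2, 1/2, 1
eigvals = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(eigvals, [-0.5, 0.5, 1.0])

# iterate A^n x; the components along the ±1/2 eigenvectors decay to zero
v = x
for _ in range(100):
    v = A @ v

# the limit is a fixed point of A, i.e. it lies in the eigenvalue-1 eigenspace
assert np.allclose(A @ v, v)
```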
H: Complex valued function $ \cos\sqrt z$ I'm looking for the right argument why the function $ \cos\sqrt z$ is analytic on the whole complex plane. As far as I understand, a holomorphic branch of $\sqrt z$ can only be found on the cut plane (with the negative real axis removed), since the Argument function isn't continuous everywhere. Hence $ \cos\sqrt z$ is at least holomorphic on the same domain, but how to justify that it is actually holomorphic everywhere? AI: The two branches of $\sqrt{z}$ differ only by a sign, while the cosine function is even. Thus the ambiguity in the square root is undone by the application of the cosine. Another way to see it is to use the power series $$\cos w=\sum_{n=0}^\infty \frac{(-1)^n w^{2n}}{(2n)!},$$ insert $w=\sqrt{z}$, and to get $$\cos \sqrt{z}=\sum_{n=0}^\infty \frac{(-1)^n z^{n}}{(2n)!}.$$
H: find the range for the expression, $f(n)=\frac{n^2+2\sqrt{n}(n+4)+4^2}{n+4\sqrt{n}+4}$ Find the range for the expression, $f(n)=\frac{n^2+2\sqrt{n}(n+4)+4^2}{n+4\sqrt{n}+4}$ for $36 \le n \lt 72$ $f(n)=\frac{(\sqrt{n}+n+4)^{2}-9n}{(\sqrt{n}+2)^2}$, $\sqrt{36}=6$ $\sqrt{72}=6\sqrt{2}$ AI: I decided to post this as an answer since you did the hardest bit yourself after my comment. The function simplifies to $$f(n)=n-2\sqrt n+4.$$ We now want to find the intervals where $f$ is increasing or decreasing, respectively. We can do this either by differentiation: $$f'(n)=1-\frac{1}{\sqrt n}$$ So $f$ is increasing in $[1,\infty)$. Or by quadratic completion: $$f(n)=(\sqrt n-1)^2+3$$ Again we conclude that $f$ is increasing in $[1,\infty)$. In particular the range of $f$ for $36\leq n<72$ will be $f(36)=28\leq f(n)<76-12\sqrt2$. Addendum: If only natural numbers are allowed then you won't get anything more satisfactory than: the range is $\{n-2\sqrt n+4|36\leq n<72, n\in \mathbb N\}$.
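Both the simplification and the endpoints can be confirmed numerically (a quick Python check):

```python
import math

def f(n):
    """The original expression."""
    return (n**2 + 2*math.sqrt(n)*(n + 4) + 16) / (n + 4*math.sqrt(n) + 4)

def g(n):
    """The simplified form n - 2*sqrt(n) + 4."""
    return n - 2*math.sqrt(n) + 4

# the two expressions agree across the interval
for n in [36, 40.5, 50, 60, 71.99]:
    assert abs(f(n) - g(n)) < 1e-9

# f is increasing on [36, 72), so the range is [f(36), f(72)) = [28, 76 - 12*sqrt(2))
assert abs(f(36) - 28) < 1e-9
assert abs(g(72) - (76 - 12*math.sqrt(2))) < 1e-9
```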
H: What do I need to know to simulate many particles, waves, or fluids? I've never had a numerical analysis course so I don't know what I need to know. I'm just wondering what kind of books I should get to make me able to simulate these things. I'm wanting to simulate in 3 dimensions. I was thinking about getting a book on finite element analysis because I'm getting the hint that it might work the best, but do you have to know numerical analysis first and the Runge-Kutta method first? Will those things be covered in books about finite element methods? Are there other methods besides finite element methods that would work better? Are there any good books that would help me and do you have any advice on a battle plan to purchase books i.e. Should I just buy a book that covers finite element methods and numerical analysis together or buy separate books that cover those topics individually? Also I don't want to limit myself to just FEM or any one thing. I want to have a comprehensive knowledge of simulation methods, but not if a comprehensive book wouldn't teach you any one thing well. Advice? AI: The theory behind simulating fluids is called CFD - computational fluid dynamics. This is a wide field, with very high demands on computing power, and numerous methods available, depending on the exact nature of the problem at hand. One book to start with, here.
H: Matrix Manipulation : trick to sum elements of vector For example, I have a vector $A$ of size $m\times 1$. I want to sum the elements of $A$ by: 1) defining a vector $B$ of size $1\times m$; 2) taking $B \times A$, which is $1\times 1$ (note that $A \times B$ would be $m\times m$). This single entry will be the sum of the elements of $A$. But I don't know how to choose the vector $B$ for this trick. So my question is: does such a vector $B$ exist, and if not, is there a way, using only matrix manipulations (sum, multiplication, transpose...), to do this? (That is, without adding up the elements by hand.) Thanks :) AI: Let $A$ be a column-vector with $m$ elements, and let $B$ be a row-vector with $m$ elements, where each element of $B$ is 1. Then $$BA = \begin{pmatrix} b_1 & b_2 & \ldots & b_m\\ \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix} = \sum_{i=1}^{m} b_ia_i = \sum_{i=1}^{m} 1 \cdot a_i = \sum_{i=1}^{m} a_i$$ where $a_i$ is the $i$'th element of $A$. The matrix $AB$ has dimension $m \times m$.
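In NumPy, for instance, the trick looks like this (a short sketch):

```python
import numpy as np

A = np.array([[3.0], [1.0], [4.0], [1.0], [5.0]])   # m x 1 column vector
B = np.ones((1, A.shape[0]))                         # 1 x m row of ones

total = (B @ A)[0, 0]   # B @ A is a 1 x 1 matrix whose single entry is the sum
assert total == A.sum() == 14.0
```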
H: Why $(\mathbb{N} \rightarrow \mathbb{N}) \subseteq (\mathbb{N} \leadsto \mathbb{N})$? Written in natural language, the sets of all total functions from naturals to naturals is a subset of the sets of all partial functions of such. $$(\mathbb{N} \rightarrow \mathbb{N}) \subseteq (\mathbb{N} \leadsto \mathbb{N})$$ We see that $\mathbb{N} \leadsto \mathbb{N}$ has more mappings since $\mathsf{domain}(f) \in \mathcal{P}(\mathbb{N})$, and there is no bijection between $\mathbb{N}$ and $\mathcal{P}(\mathbb{N})$. Would someone give a formal proof? AI: Sean Eberhard has answered it. By definition, "$A \subseteq B"$ means that every element of $A$ is an element of $B$. Because every total function is a "partial function", the set of total functions is a subset of the set of partial functions.
H: Matrix representing operation, with respect to coordinate systems Let $T : \mathbb{R}[x]_{\leq 1} \rightarrow \mathbb{R}[x]_{\leq 2}$ be the linear map defined by the differential operator $T = (x-2) \frac{d}{d x} - \text{id}$. Now my exercise asks me to find the matrix representing $T$ with respect to the standard coordinate systems $$(1 \; x): \mathbb{R}^2 \rightarrow \mathbb{R}[x]_{\leq 1} \quad \text{ and } \quad (1 \; x \; x^2) : \mathbb{R}^3 \rightarrow \mathbb{R}[x]_{\leq 2}$$ I guess what I am mostly confused about, is the order in which I should compose the maps. Am I looking for a map from $\mathbb{R}^2$ to $\mathbb{R}^3$, or am I looking for a map from $\mathbb{R}[x]_{\leq 1}$ to $\mathbb{R}[x]_{\leq 2}$? Currently, the more I think about it, the more confused I get, so I am looking for a little clarity most of all. AI: For $i \ge 0$, let $p_i$ denote the polynomial $x^i$, with the convention $x^0=1$. Then for $0 \le i \le 1$ we have $$ T(p_0)=-p_0=-1\cdot p_0+0\cdot p_1+0\cdot p_2,\quad T(p_1)=x-2-x=-2\cdot p_0+0\cdot p_1+0\cdot p_2. $$ So the matrix of $T$ wrt the bases $\{p_0,p_1\}$ and $\{p_0,p_1,p_2\}$ is given by $$ \left[ \begin{array}{cc} -1&-2\cr 0&0\cr 0&0 \end{array} \right] $$
H: Safe use of generalized inverses Suppose I'm given a linear system $$Ax=b,$$ with unknown $x\in\mathbb{R}^n$, and some symmetric $A\in\mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^n$. Furthermore, it is known that $A$ is not a full-rank matrix, and that its rank is $n-1$; therefore, $A$ is not invertible. However, to compute the "solution" $x$, one may use $x=A^+b$, where $A^+$ is a generalized inverse of $A$, e.g., the Moore–Penrose inverse. What is the characteristic of such solution? More precisely, under which conditions will $x=A^+b$ give the exact solution to the system (supposing the exact solution exists)? Could one state that in the above case, with additional note that $b$ is orthogonal to null-space of $A$, the generalized inverse will yield the exact solution to the system? AI: Let $\tilde x = A^+b$. Then obviously $A\tilde x = AA^+b$. But since $AA^+$ is an orthogonal projector, and specifically $I-AA^+$ is the projector to the null space of the Hermitian transpose of $A$, $\tilde x$ is a solution iff $b$ is orthogonal to the null space of $AA^+$, that is, orthogonal to the null space of the Hermitian transpose of $A$.
H: Is a countable set $A$ dense in an uncountable set $B$ by analogy of $\mathbb{Q}$ being dense in $\mathbb{R}$? Let $A$ be any countable set and $B$ any uncountable set with the same cardinality as $\mathbb{R}$. Then we must have a one-to-one map $\phi:A\rightarrow\mathbb{Q}$. So, can we say by all this that $A$ is in fact dense in $B$, since it is well known that $\mathbb{Q}$ is dense in $\mathbb{R}$ ? AI: No. $\mathbb{Z}$ and $\mathbb{Q}$ have the same cardinality, but one is dense in $\mathbb{R}$ and the other is not. Added I also wanted to say that "denseness" of a subset is not a measure of how many points there are in the subset, it is a measure of how uniformly they are spread throughout the superset and how close they get to points of the superset. In the usual topology on $\mathbb{R}$, every point of $\mathbb{R}$ is approached by points of $\mathbb{Q}$, but you cannot approach all points of $\mathbb{R}$ with points from $\mathbb{Z}$. The gaps in $\mathbb{Z}$ are just too big. In the other answers you can read that "topology" determines when the subset is spread out enough that it "gets close" to every point of the superset.
H: What is the difference between the supremum (least upper bound) and the minimal upper bound I understand the difference between the supremum and the greatest element and between the supremum and the maximal elements. But I'm not sure about the difference between the supremum (least upper bound) and the minimal upper bound. They both seem fairly similar to me. Thanks EDIT: To address some of the comments, the reason I think there is a difference is because this site says so http://en.wikipedia.org/wiki/Supremum#Minimal_upper_bounds. My definition of minimal upper bound would have been the exact same as that for least upper bound. AI: This is a very dumb example, but suppose we augment the real line by adding two additional elements, $\infty$ and $\widetilde{\infty}$. We order this set by declaring $x < \infty$ and $x < \widetilde{\infty}$ for all real numbers $x$. (But we declare no order between these two new elements, so, in particular, the ordering is no longer total). Note that both $\infty$ and $\widetilde{\infty}$ are upper bounds of $\mathbb{R}$ (the real reals) in this augmented order. Since there is no upper bound of $\mathbb{R}$ strictly below either, they are actually minimal upper bounds. However neither $\infty$ nor $\widetilde{\infty}$ is a least upper bound, because neither $\infty < \widetilde{\infty}$ nor $\widetilde{\infty} < \infty$ holds. (A least upper bound must be strictly below any other upper bound.) This is the difference between the adjectives minimal and least. To be minimal with respect to some property an object must have that property, and no object strictly below it can share that same property. To be least with respect to some property an object must have that property, and it must be strictly below any other object having that property. In terms of partially ordered sets, every least object will be a minimal object, but the converse may not hold. The two concepts do coincide in linear (total) orders.
H: Using the (Complex) Identity Theorem with a limit point on the Boundary. Question: If we have a limit point for the set of zeros which is on the boundary of our domain $D$, does the (complex analysis) identity theorem still hold? Motivation: The identity theorem states that if two holomorphic functions agree at a limit point in $D$, then they are equivalent. I'm fairly sure that if the limit point is on the boundary, then one could use something like the Riemann Removable Singularity Theorem to "get rid of" all of the points converging to our limit point on the boundary. I just wanted to check. Example: Let $D = B(1,1)$, the ball of radius 1, centered at 1. Suppose that $f$ is holomorphic on $D$ and has zeros at $\frac{1}{n}$ for each $n\in {\mathbb N}$. Is it the case that $f\equiv 0$? AI: It is not enough for the limit point to be on the boundary. Consider the analytic function $\sin\left(\frac{1}{x}\right)$ on the domain $(0, 1)$. Then we can choose a sequence $x_n = \frac{1}{\pi n} \rightarrow 0$ such that $f(x_n) = 0\ \forall n$ but the function is clearly non-zero.
H: What does mantissa mean here? I was going through this SO post on Math.random() vs Random.nextInt(int) and encountered the following line: Random.nextDouble() uses Random.next() twice to generate a double that has approximately uniformly distributed bits in its mantissa, so it is uniformly distributed in the range 0 to 1-(2^-53). From the Wikipedia link on Significand: In American English, the original word for this seems to have been mantissa (Burks et al.), and as of 2005 this usage remains common in computing and among computer scientists. However, this use of mantissa is discouraged by the IEEE floating-point standard committee and by some professionals such as William Kahan and Donald Knuth,[citation needed] because it conflicts with the pre-existing use of mantissa for the fractional part of a logarithm (see also common logarithm). What does mantissa actually mean here ? How is the range coming out to be 0 to 1-(2^-53)? AI: If I understand you correctly, you are asking about the meaning of "mantissa" in the context of floating point numbers, right? In floating point arithmetic, a number is represented as $(-1)^s\cdot m\cdot 2^e$ with $s\in\{0,1\}$, $m\in[0.5,1)$ and $e\in \mathbb Z$. The number $m$ is called the mantissa, $s$ the sign bit, and $e$ the exponent. (Of course, for general reals, only an approximation of $m$ is stored (more specifically, $m$ can only be an integer multiple of some number $\epsilon$ called machine epsilon), and the exponent can only be in a specific range, because the memory where the floating point number is stored is finite. The difference between different precision floating point formats is exactly how many bits of the mantissa are stored, and which range for the exponent is supported. In particular, for IEEE double precision, the mantissa is 53 bits, that is, $\epsilon = 2^{-53}$). 
Note that when represented in binary, the mantissa always looks like $0.1n_2n_3n_4\dots$, therefore usually only the bits $n_2$, $n_3$, … are stored. Note that this implies that you have to use a special format to store $0$, this is done by reserving a specific exponent for that (for some floating point formats, including the IEEE formats, the same exponent is also used to denote other so-called denormalized numbers, that is, numbers where the implicit one-bit is not in the mantissa; the numbers are called "denormalized" because the mantissa is not in the proper range).
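Python exposes exactly this decomposition via `math.frexp`, which returns a mantissa in $[0.5, 1)$ and an integer exponent (a small illustration, including the 53-bit precision of IEEE doubles):

```python
import math

m, e = math.frexp(6.0)
assert (m, e) == (0.75, 3)        # 6.0 = 0.75 * 2**3, with m in [0.5, 1)
assert math.ldexp(m, e) == 6.0    # ldexp reassembles m * 2**e

# the 53-bit mantissa of IEEE doubles: 2**-53 falls below the last stored bit of 1.0
assert 1.0 + 2.0**-53 == 1.0
assert 1.0 + 2.0**-52 > 1.0
```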
H: Simple Function Design I want to design a function that satisfies the following: generally speaking, $f(x) = y$ $$f(5.51) = 1$$ $$f(95.31) = 200$$ How can I go about designing the function to satisfy these requirements? AI: If you want a function of the form $f(x)=a e^{bx}$ then simply solve for $a$ and $b$ using your data points. If you want a polynomial function of degree $n$ then you'll need $n+1$ data points. With two data points, you get a polynomial of degree 1 (at most).
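For the exponential form $f(x) = a e^{bx}$, the two data points determine $a$ and $b$ directly: dividing the two equations gives $y_2/y_1 = e^{b(x_2 - x_1)}$. A Python sketch:

```python
import math

x1, y1 = 5.51, 1.0
x2, y2 = 95.31, 200.0

# y = a * exp(b*x)  =>  y2/y1 = exp(b*(x2 - x1))
b = math.log(y2 / y1) / (x2 - x1)
a = y1 / math.exp(b * x1)

f = lambda x: a * math.exp(b * x)
assert abs(f(x1) - y1) < 1e-9
assert abs(f(x2) - y2) < 1e-6
```

A degree-1 polynomial (a straight line) through the same two points works the same way, with $b$ as the slope $(y_2 - y_1)/(x_2 - x_1)$.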
H: bound on the growth of a function I was playing around with some functional equations and I came across the following question.. Suppose we have a $C^2(\mathbb R^2, \mathbb R)$ function $f$ with first and second derivatives globally bounded.. Moreover we know that $f(x, 0)=f(0, y)=0$ for any $x, y\in \mathbb R$. Is it true then that there exists a constant $C> 0$ such that, for any $(x, y)\in\mathbb R^2$, we have $$|f(x, y)|\leq C|xy|$$? AI: Note $$f(x,y) = \int_0^x \int_0^y \frac{\partial^2 f}{\partial X\partial Y}\,dY\,dX.$$ Thus if $|\partial^2 f / \partial X \partial Y| \leq C$ then $|f(x,y)|\leq C|xy|$.
H: Why is the last digit of $n^5$ equal to the last digit of $n$? I was wondering why the last digit of $n^5$ is that of $n$? What's the proof and logic behind the statement? I have no idea where to start. Can someone please provide a simple proof or some general ideas about how I can figure out the proof myself? Thanks. AI: Alternatively, you could prove that $n^5-n$ is divisible by $10$ by induction. If $n=0$, it is obviously true. Assuming it is true for $n$, we need to show that: $$(n+1)^5-(n+1) = n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1 - (n+1) \\=n^5 - n + 10(n^3+n^2) +5n(n^3+1)$$ is divisible by 10. But $n^5-n$ is divisible by $10$ by induction, and $10(n^3+n^2)$ is obviously divisible by $10$, so all you need to show is that $5n(n^3+1)$ is divisible by $10$, which is the same as proving that $n(n^3+1)$ is divisible by $2$. The fundamental reason for this, as everybody has noted, is due to rules of modular arithmetic.
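A brute-force check of the claim (a Python sketch; since the last digit of $n^5$ depends only on $n \bmod 10$, checking $n = 0, \dots, 9$ would already suffice):

```python
for n in range(1000):
    assert n**5 % 10 == n % 10       # last digit of n^5 equals last digit of n
    assert (n**5 - n) % 10 == 0      # equivalently, 10 divides n^5 - n
```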
H: Inequality. $\frac{a}{3a-b+c}+\frac{b}{3b-c+a}+\frac{c}{3c-a+b} \geq 1$ Let $a,b,c$ be the side lengths of a triangle. Prove that $$\frac{a}{3a-b+c}+\frac{b}{3b-c+a}+\frac{c}{3c-a+b} \geq 1 . $$ I found this inequality in the chapter entitled Cauchy-Schwarz, but I cannot find a proof for this inequality. I used the triangle inequality and Cauchy-Schwarz but I only proved the case of equality; that is for $a=b=c$. AI: Here is an attempt. It is well known that $a,b,c$ are the sides of a triangle if and only if you can find numbers $x,y,z >0$ so that $a=x+y, b=x+z, c=y+z$. Your inequality becomes then $$\frac{x+y}{2x+4y}+\frac{x+z}{4x+2z}+\frac{y+z}{4z+2y} \geq 1 \,;\, \forall x,y,z >0 \,.$$ This inequality reduces after horrible computations to $$ x^2y+y^2z+z^2x \geq 3xyz $$ But this is a bad solution. Here is a better idea, cannot complete the solution though: The inequality is equivalent to $$\sum_{cyc}\frac{x+y}{x+2y} \geq 2$$ or $$\sum_{cyc}1-\frac{y}{x+2y} \geq 2$$ or $$1 \geq \sum_{cyc}\frac{y}{x+2y} \,.$$ Probably the easiest approach from here would be to denote $x+2y=m, y+2z=n, z+2x=p$ and solve for $x,y,z$. This suggests that probably it would have been best to denote $m=3a-b+c, n=3b-c+a, p=3c-a+b$ from the beginning.
H: How to make something that will cap on 20? A user on the chat asked how could he make something that would cap when it gets a specific value like 20. Then the behavior would be as follows: $f(...)=...$ $f(18)=18$ $f(19)=19$ $f(20)=20$ $f(21)=20$ $f(22)=20$ $f(...)=20$ He said he would like to perform it with a regular calculator. Is it possible to do this? AI: $ x \mapsto \min ( x , 20 ) $
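If the calculator at hand has no `min` key but does have absolute value, the same cap can be written as $\frac{x + 20 - |x - 20|}{2}$ (an alternative formula, not part of the answer above; a quick Python check of the identity):

```python
def cap(x):
    # (x + 20 - |x - 20|) / 2 equals min(x, 20):
    # for x <= 20 it is (x + 20 - (20 - x)) / 2 = x,
    # for x >  20 it is (x + 20 - (x - 20)) / 2 = 20.
    return (x + 20 - abs(x - 20)) / 2

for x in [17, 18, 19, 20, 21, 22, 100]:
    assert cap(x) == min(x, 20)
```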
H: Proving a function is not integrable Let $(X, \Sigma,\mu)$ be a $\sigma$-finite measure space, and let $f:X\to\mathbb R^+$ be a measurable function such that $$\mu(\{x\in X\,\colon\,f(x)>t\})>\frac{1}{1+t},\; \forall t>0.$$ Prove then that $f$ is not integrable. I've tried to derive a contradiction, however my original plan to use Chebyshev's inequality to get such an absurdity turned out to be useless. Can you help me? Thanks in advance, Guido AI: $$\int_X f \,d\mu = \int_0^\infty \mu(\{x: f(x)>t \})\,dt \ge \int_0^\infty \frac{1}{1+t}\,dt = \infty,$$ so $f$ is not integrable. (The first equality is the layer-cake formula for nonnegative measurable functions.)
H: what's the general form of 3D projective mapping? I know that the general form of a 2D projective transformation is the following rational linear mapping: $(u, v) \mapsto (x, y)$ $$ x = \frac{\mathit{a}\mathit{u}+\mathit{b}\mathit{v}+c}{\mathit{g}\mathit{u}+\mathit{h}\mathit{v}+i}\\ y = \frac{\mathit{d}\mathit{u}+\mathit{e}\mathit{v}+f}{\mathit{g}\mathit{u}+\mathit{h}\mathit{v}+i} $$ What is its 3D version? AI: The transformation from 3D to 2D has the same form, just with one extra term in each numerator and in the denominator: $$ x = \frac{au+bv+cw+d}{iu+jv+kw+l}\\ y = \frac{eu+fv+gw+h}{iu+jv+kw+l} $$ This is an 11 parameter projective mapping, since one of the 12 parameters can be set to 1. Some more info on this "camera model" can be found here.
H: How to put eggs in baskets A farmer has c chickens who have each laid e eggs, which she will put into b baskets. Each basket has a probability p(d) of being dropped, which breaks all the eggs in the basket. How should the farmer distribute the eggs into the baskets in such a way that she minimizes the number of chickens whose eggs all get broken? AI: We assume that the probability any particular basket gets dropped is $p$, and that droppings are independent. Let $S$ be the total number of sad chickens (a chicken is sad eggsactly if all her eggs get broken). The number $S$ of sad chickens is a random variable. We interpret minimizing $S$ as minimizing the expectation of $S$. For each chicken $C_i$, distribute her eggs so that as many baskets as possible have an egg from $C_i$. The number of baskets that have some of her eggs is then $m$, where $m=\min(e,b)$. Let $S_i=1$ if $C_i$ becomes sad, and $0$ otherwise. Then $S=\sum S_i$. We have $E(S_i)=p^m$, so by the linearity of expectation, $E(S)=cp^m$. Thus as long as we do the obvious spreading out of each chicken's eggs, the expected number of sad chickens does not depend on other details of the distribution process.
H: Help proving the primitive roots of unity are dense in the unit circle. I'm having difficulty understanding how to prove that the primitive roots of unity are in fact dense on the unit circle. I have the following so far: The unit circle can be written $D=\{x\in\mathbb{C}:|x|=1\}$. The set of primitive $m$-th roots of unity is $A_m=\{\zeta_k:\zeta_k^m=1,\zeta_k\text{ is primitive}\}$. Hence, the set of all primitive roots $A$ is given by the union of $A_m$ over $m=1,2,3,\ldots$. But I can't seem to get started on how to prove that $A$ is dense in $D$. AI: To show the primitive roots of unity are dense in the unit circle, we must show that if we pick any point on the unit circle and any $\epsilon>0,$ there is some primitive root of unity with distance less than $\epsilon$ from the chosen point. The easy way is to specialize $m$ to be prime - then every root of unity other than $1$ is primitive. Since the roots will be distributed evenly along the circle, if we pick a large enough prime we can make the distance between any two adjacent primitive roots less than $\epsilon.$ Then certainly if we pick any point on the unit circle, it will be within $\epsilon$ from a root.
H: Countability of irrationals Since the reals are uncountable, rationals countable and rationals + irrationals = reals, it follows that the irrational numbers are uncountable. I think I've found a "disproof" of this fact, and I can't see the error. Since $Q$ is countable, list $q_1, q_2, \dots$ such that $q_i < q_{i+1}$. I want to show that between $q_i$ and $q_{i+1}$ there is exactly one irrational number; this will give us a bijection and prove the irrationals countable. Since the irrationals are dense, it follows that there is at least one irrational number in $\left(q_i,q_{i+1}\right)$. Suppose there was more than one in this range, e.g. $x$ and $y$. Since $(x,y)$ is an open subset of $R$ and the rationals are dense, there must be some rational $q_c$ in this interval. But that means $q_i<q_c<q_{i+1}$, which contradicts our ordering. So there must be exactly one irrational in this range. QED. Where is the problem? The only thing I can think of is that the rationals can be put into a sequence, but cannot be put into an increasing sequence, which seems odd. AI: Your final statement is correct: the rationals can be listed, but not using the usual order. To convince yourself, just remember that between any two distinct rationals there are an infinite number of rationals (and irrationals).
H: Why is $\pi$ irrational if it is represented as $C/D$? $\pi$ can be represented as $C/D$, and $C/D$ is a fraction, and the definition of an irrational number is that it cannot be represented as a fraction. Then why is $\pi$ an irrational number? AI: A rational number is a number that can be expressed as $p/q$, where $p$ and $q$ are integers. The number $\pi$ cannot be expressed in this form; hence it is irrational. In other words, the definition of "fraction" does not include ratios like "circumference/diameter" in which the numerator and denominator are arbitrary numbers, not necessarily integers. In the case of "circumference/diameter" (which you denoted $\pi = C/D$), it will always be the case that if the diameter is an integer, the circumference ($C = \pi D$) is not an integer, and if the circumference is an integer, the diameter ($D = C/\pi$) is not an integer: precisely because $\pi$ is irrational. Note that a definition of "fraction" that allowed arbitrary real numbers as the numerator and denominator would be not very useful, as it would allow "fractions" like $\pi = \pi / 1$, or indeed, for any number $x$, the representation of $x$ as a "fraction" $x = x/1$.
H: Intersection of compact and discrete subsets I have difficulties with a rather trivial topological question: A is a discrete subset of $\mathbb{C}$ (complex numbers) and B a compact subset of $\mathbb{C}$. Why is $A \cap B$ finite? I can see that it's true if $A \cap B$ is compact, i.e. closed and bounded, but is it obvious that $A \cap B$ is closed? AI: There seems to be a contradiction between the answers of Andre Nicolas and of Arthur Fischer, yet both are correct. This depends on your definition of discrete. Andre's notion of discrete is this: A set $S$ in a topological space $X$ is discrete if it is discrete with respect to the subspace topology. This is the same as saying that every point in $S$ has a neighborhood (in $X$) that contains no other point from $S$. Arthur's definition is this: $S\subseteq X$ is discrete if every point in $X$ has a neighborhood that contains at most one element of $S$ (I assume all spaces to be Hausdorff). A discrete set according to Arthur's definition is automatically closed (as a point in the closure but not in the set would violate discreteness). So most likely, if you are supposed to show that the intersection of a discrete set with a compact set is finite, you are supposed to use Arthur's definition of discreteness. By the way, note that Arthur's argument does not say anything about $\mathbb C$. This works in every Hausdorff space and mainly uses the fact that compact discrete sets are finite. (And that subsets of discrete sets are again discrete and that discrete sets are closed.) Edit: Unfortunately Arthur Fischer's answer was deleted while I was typing mine. But it seems that my answer is still understandable.
H: We have to select a group of 7 out of a group of 9 men and 11 women; how many seven-member teams consist of at least one man? We have to select a group of 7 out of a group of 9 men and 11 women. Q: How many seven-member teams consist of at least one man? Now I know that the answer is ${20 \choose 7}-{11 \choose 7} = 77190$, but my first answer was ${9 \choose 1}{19\choose 6}=244188$, because we have to select one man and the rest can be either men or women. I know this is wrong but I don't know why. Any help will be appreciated. EDIT: Sorry, the question should be 9 men and 11 women (not 3). AI: Your answer of $\binom91\binom{19}6$ counts many of the allowable groups more than once. Suppose that the men are $M_1,\dots,M_9$, and the women are $W_1,\dots,W_{11}$. Consider, for instance, the group consisting of $M_1,M_2,M_3,M_4,W_1,W_2$, and $W_3$: it gets counted four times, once for each of the four men in it. You count it once when you count $M_1$ as the one man counted by the $\binom91$ factor; you count it once more when you count $M_2$ as that man; and you count it yet one more time for each of $M_3$ and $M_4$. In fact, every $7$-person group gets counted once for each man in it, and this results in massive overcounting. The easiest way to get the right answer is to start with the $\binom{20}7$ possible $7$-person groups and subtract the $\binom{11}7$ all-woman groups to get a final result of $\binom{20}7-\binom{11}7$ groups containing at least one man. Added: A much more long-winded way is to count separately the groups with one man, the groups with two men, and so on. When you choose a group with $m$ men and $7-m$ women, you can choose the $m$ men in $\binom9m$ different ways and the $7-m$ women in $\binom{11}{7-m}$ different ways. Thus, by the product rule you can form such a group in $\binom9m\binom{11}{7-m}$ ways.
The total number of groups containing at least one man is therefore $$\sum_{m=1}^7\binom9m\binom{11}{7-m}\;,$$ and it’s certainly possible to compute all $7$ terms and sum them. But it’s ever so much easier just to take the total number of possible $7$-person groups and subtract the easily calculated number that include no men.
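Both counts are easy to verify with a few lines of Python using the standard-library `math.comb`:

```python
from math import comb

# Complement count: all 7-person groups minus the all-woman groups.
total = comb(20, 7) - comb(11, 7)

# Case-by-case count: sum over the number m of men in the group.
by_cases = sum(comb(9, m) * comb(11, 7 - m) for m in range(1, 8))

print(total, by_cases)  # both methods agree
```

The agreement of the two counts is an instance of the Vandermonde identity $\sum_m \binom9m\binom{11}{7-m} = \binom{20}7$ with the $m=0$ term removed.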
H: How to combine ratios? If $a:b$ is $2:5$, $c:d$ is $5:2$, and $d:b$ is $3:2$, what is the ratio $a:c$? How would I go about solving this math problem? If the ratio of $a:b$ is $2:5$, the ratio of $c:d$ is $5:2$, and the ratio of $d:b$ is $3:2$, what is the ratio of $a:c$? I got $\frac{a}{c} = \frac{2}{5}$, but that is not a correct answer. AI: These ratios are just simple equations. For example, $a:b=2:5$ is $$a= \frac{2}{5}b$$ No need for confusing tricks here, just substitutions: $$ a = \frac{2}{5}b = \frac{2}{5}\cdot\frac{2}{3} d = \frac{2}{5}\cdot\frac{2}{3}\cdot\frac{2}{5} c = \frac{8}{75} c$$ So that $$ a:c = 8:75 $$
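The substitution chain can be replayed with exact rational arithmetic; the sketch below (variable choices are mine, and any starting value for $c$ works since ratios are scale-free) uses the standard-library `fractions` module:

```python
from fractions import Fraction

# Express each ratio as an equation and substitute through:
#   c : d = 5 : 2  =>  d = (2/5) c
#   d : b = 3 : 2  =>  b = (2/3) d
#   a : b = 2 : 5  =>  a = (2/5) b
c = Fraction(75)         # any nonzero starting value works
d = Fraction(2, 5) * c
b = Fraction(2, 3) * d
a = Fraction(2, 5) * b

print(a / c)             # the ratio a : c as a fraction
```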
H: When does $n$ divide $a^d+1$? $\newcommand{\ord}{\operatorname{ord}}$ For what values of $n$ will $n$ divide $a^d+1$ where $n$ and $d$ are positive integers? Apparently $n$ cannot divide $a^d+1$ if $\ord_n a$ is odd: $n\mid (a^d+1)\implies a^d\equiv -1\pmod n\implies a^{2d}\equiv 1\pmod n \implies\ord_na\mid 2d$ but $\ord_na\nmid d$. For example, let $a=10$; the factors $f$ of $(10^3-1)=999$ such that $\ord_f10=3$ are $27,37,111,333$ and $999$ itself. None of these should divide $10^d+1$ for any integer $d$. Please rectify me if there is any mistake. Is anybody aware of a better formula? AI: There are various useful bits of information that one may deduce about factors of integers of the form $\rm\:b^n\pm 1.\:$ A good place to learn about such is Wagstaff's splendid introduction to the Cunningham Project, whose goal is to factor numbers of the form $\rm\:b^n\pm 1.\:$ There you will find mentioned not only old results such as Legendre's (primitive divisors of $\rm\:b^n\pm 1\:$ are $\rm\,\equiv 1\pmod{2n}$), but also newer results, e.g. those exploiting cyclotomic factorizations; see below. Often number identities are more perceptively viewed as special cases of function or polynomial identities. For example, Aurifeuille, Le Lasseur and Lucas discovered so-called Aurifeuillian factorizations of cyclotomic polynomials $\rm\;\Phi_n(x) = C_n(x)^2 - n\ x\ D_n(x)^2\;$. These play a role in factoring numbers of the form $\rm\; b^n \pm 1\:$, cf. the Cunningham Project. Below are some simple examples of such factorizations. $$\begin{array}{rl} x^4 + 2^2 \quad=& (x^2 + 2x + 2)\;(x^2 - 2x + 2) \\\\ \frac{x^6 + 3^3}{x^2 + 3} \quad=& (x^2 + 3x + 3)\;(x^2 - 3x + 3) \\\\ \frac{x^{10} - 5^5}{x^2 - 5} \quad=& (x^4 + 5x^3 + 15x^2 + 25x + 25)\;(x^4 - 5x^3 + 15x^2 - 25x + 25) \\\\ \frac{x^{12} + 6^6}{x^4 + 36} \quad=& (x^4 + 6x^3 + 18x^2 + 36x + 36)\;(x^4 - 6x^3 + 18x^2 - 36x + 36) \\\\ \end{array}$$
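These identities are easy to sanity-check numerically. The Python sketch below (my own, not part of the Cunningham Project materials) evaluates the $x^4 + 2^2$ and $x^{10} - 5^5$ factorizations at many integer points; since two polynomials of degree $d$ that agree at more than $d$ points are identical, agreement on this range confirms the identities:

```python
def holds(x):
    """Check the x^4 + 2^2 and x^10 - 5^5 factorizations at x."""
    first = x**4 + 2**2 == (x**2 + 2*x + 2) * (x**2 - 2*x + 2)
    third = x**10 - 5**5 == ((x**2 - 5)
            * (x**4 + 5*x**3 + 15*x**2 + 25*x + 25)
            * (x**4 - 5*x**3 + 15*x**2 - 25*x + 25))
    return first and third

# 31 sample points far exceed the degrees involved (4 and 10).
ok = all(holds(x) for x in range(-15, 16))
print(ok)
```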
H: How do we prove that the Pythagorean theorem holds for a right-angled isosceles triangle with sides $a, b, a$? Possible Duplicate: What is the most elegant proof of the Pythagorean theorem? How do we prove that the Pythagorean theorem holds for a right-angled isosceles triangle with sides $a, b, a$? For a right-angled triangle with sides $a,b,c$, where $\angle C = 90^{\circ}$, we have $a^2+b^2=c^2$. AI: Make a paper square with sides $b$. Divide it into $4$ triangles by drawing the two diagonals. Cut along the diagonals. We get $4$ congruent isosceles right-angled triangles. Let the right-angled triangles we get have legs $a$. They each have hypotenuse $b$. We can put these triangles together in pairs along their hypotenuses to form two $a\times a$ squares. So we have cut a $b\times b$ square into four pieces and reassembled the pieces to make two $a\times a$ squares. Since area is conserved, it follows that $$a^2+a^2=b^2.$$ Remark: This is a simple dissection proof of a very special case of the Pythagorean Theorem. There are several general dissection proofs of the Theorem.
H: Solving a literal equation containing fractions. I know this might seem very simple, but I can't seem to isolate $x$. $$\frac{1}{x} = \frac{1}{a} + \frac{1}{b} $$ Please show me the steps to solving it. AI: You should combine $\frac1a$ and $\frac1b$ into a single fraction using a common denominator as usual: $$\begin{eqnarray} \frac1x& = &\frac1a + \frac1b \\ &=&\frac{b}{ab} + \frac{a}{ab} \\ &=& \frac{b+a}{ab} \end{eqnarray}$$ So we get: $$x = \frac{ab}{b+a}.$$ Okay?
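The resulting formula $x = ab/(a+b)$ is easy to check with exact rational arithmetic; a small sketch (the function name `solve_x` is mine) using the standard-library `fractions` module:

```python
from fractions import Fraction

def solve_x(a, b):
    """Solve 1/x = 1/a + 1/b for x: invert (b + a)/(ab)."""
    return (a * b) / (a + b)

a, b = Fraction(3), Fraction(6)
x = solve_x(a, b)
print(x)   # 1/3 + 1/6 = 1/2, so x = 2
```

This same formula gives, for example, the combined resistance of two resistors in parallel.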
H: A little help with algebra (homework) I was brushing up on my algebra a little, and I've seen that I've become rusty with some of the concepts. Hints rather than full answers would perhaps be better suited, as the goal is to figure it out myself. Show for any $a, b \in \mathbb{R}$ (without induction, just using algebra) that $a^3 - b^3 = (a - b)(a^2 + 2ab + b^2)$, and something similar but for $a^3 + b^3$. Factorize the following: $(3x + 1)^2 - (x+3)^2$ and other such expressions. And if someone could provide a concise and simple reference for the rules of expansion, factorizing and related topics, I would be most appreciative. AI: The simplest way to verify an identity like $a^3-b^3=(a-b)(a^2+2ab+b^2)$ is to multiply out the righthand side and verify that you do indeed get the lefthand side. If you try it in this case, however, you’ll fail: $$\begin{align*} (a-b)(a^2+2ab+b^2)&=a(a^2+2ab+b^2)-b(a^2+2ab+b^2)\\ &=\left(a^3+2a^2b+ab^2\right)-\left(a^2b+2ab^2+b^3\right)\\ &=a^3+2a^2b+ab^2-a^2b-2ab^2-b^3\\ &=a^3-b^3+\left(2a^2b-a^2b\right)+\left(ab^2-2ab^2\right)\\ &=a^3-b^3+a^2b-ab^2\;, \end{align*}$$ and $a^2b-ab^2=ab(a-b)$ certainly isn’t guaranteed to be zero. (It’s zero if and only if $a=0$, $b=0$, or $a=b$.) Thus, in general $a^3-b^3\ne(a-b)(a^2+2ab+b^2)$. The correct identity is $$a^3-b^3=(a-b)(a^2+ab+b^2)\;,$$ as you can check by multiplying out the righthand side: this time everything will cancel out except $a^3-b^3$. The trick to factorizing an expression like $(3x + 1)^2 - (x+3)^2$ is to recognize that it has the form $a^2-b^2$, where $a=3x+1$ and $b=x+3$, and to recall the basic factorization formula $$a^2-b^2=(a-b)(a+b)\;.$$ A few of the standard basic formulas can be found here, together with a link to a practice page. Here is the start of a set of three pages on the topic, with examples. Googling on factoring formulas, with or without quotes, will turn up many more such resources.
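A quick numeric sanity check (a sketch of my own, not part of the answer) confirms both the corrected cube identity and the difference-of-squares factorization, where $(3x+1)-(x+3)=2x-2$ and $(3x+1)+(x+3)=4x+4$:

```python
# Check a^3 - b^3 = (a - b)(a^2 + ab + b^2) over a grid of integers.
cube_ok = all(
    a**3 - b**3 == (a - b) * (a**2 + a*b + b**2)
    for a in range(-6, 7) for b in range(-6, 7)
)

# Check (3x+1)^2 - (x+3)^2 = (2x - 2)(4x + 4) at many points.
square_ok = all(
    (3*x + 1)**2 - (x + 3)**2 == (2*x - 2) * (4*x + 4)
    for x in range(-10, 11)
)

print(cube_ok, square_ok)
```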
H: Evaluating a simple definite integral I'm currently teaching myself calculus and am onto the Mean Value Theorem for Integration. I am finding the value of $f(c)$ on the function $f(x)=x^3-4x^2+3x+4$ on the interval $[1,4]$. So, with the equation $(b-a)\cdot f(c)=\int_1^4f(x)dx $, you get $(4-1)\cdot f(c)=\int_1^4(x^3-4x^2+3x+4)dx$ Now my book says that this equals $3f(c)=\frac{57}{4}$. I've been racking my brain and can't figure out how $\int_1^4(x^3-4x^2+3x+4)dx=\frac{57}{4}$ So how did the author evaluate that integral to get the answer? AI: We have the following string of equalities: $$\begin{align*}\int_1^4(x^3-4x^2+3x+4)dx &= \left.\frac{x^4}{4}-\frac{4x^3}{3}+\frac{3x^2}{2}+4x\right|_{x=1}^4\\ &= \left(\frac{4^4}{4}-\frac{4\cdot4^3}{3}+\frac{3\cdot4^2}{2}+4\cdot4\right)-\left(\frac{1^4}{4}-\frac{4\cdot1^3}{3}+\frac{3\cdot1^2}{2}+4\cdot1\right)\\ &= \left(64-\frac{256}{3}+24+16\right)-\left(\frac{1}{4}-\frac{4}{3}+\frac{3}{2}+4\right)\\ &= \left(\frac{56}{3}\right)-\left(\frac{53}{12}\right)\\ &=\frac{57}{4}.\end{align*}$$
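The evaluation can be reproduced with exact rational arithmetic; the sketch below (function name `F` is mine) uses the standard-library `fractions` module to compute the antiderivative at both endpoints, and then the mean value $f(c)$:

```python
from fractions import Fraction

def F(x):
    """An antiderivative of f(x) = x^3 - 4x^2 + 3x + 4."""
    x = Fraction(x)
    return x**4 / 4 - Fraction(4, 3) * x**3 + Fraction(3, 2) * x**2 + 4 * x

integral = F(4) - F(1)    # exact value of the definite integral
f_c = integral / (4 - 1)  # mean value f(c) on [1, 4]
print(integral, f_c)      # 57/4 19/4
```

Working in `Fraction` avoids floating-point rounding, so the result matches the book's $\frac{57}{4}$ exactly, and the Mean Value Theorem for Integration then gives $f(c) = \frac{19}{4}$.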