H: Puzzle in Percentages Okay, this is a real-time problem. The following is a picture of a customer satisfaction rating, which was displayed next to an item on an online shopping website. Satisfied customers click the vote-up button, and unsatisfied customers click the vote-down button. Every time a button is clicked, the rating changes accordingly. From the above image, we can infer that 94% of the people are satisfied with the product and the remaining 6% aren't satisfied. Now, is there any way to find out the number of people who participated in voting? Edit: It's clear that there could be infinitely many solutions, as is evident from Lord Soth's answer. So, I'll change the question a bit. What could be the minimum number of people who participated in the voting? AI: No, though you can infer some limits if you know how the program rounds. The smallest number of voters who can produce a percentage that rounds normally to $94$% is $16$: $\frac{15}{16}=0.9375$, while $\frac{14}{15}$ is only $0.9333\dots\;$. If the program truncates instead of rounding, there must have been at least $17$: $\frac{16}{17}\approx0.9412$. But what you can infer is at best pretty minimal and probably not useful. Added: In general the minimum number of voters is the smallest $n$ such that some fraction $\frac{m}n$ rounds to the decimal corresponding to the percentage of satisfied customers. If that percentage is given as an integer, and if the rounding is done normally (with rounding up at the halves), the requirement is that $$\frac1{100}\left(p-\frac12\right)\le\frac{m}n<\frac1{100}\left(p+\frac12\right)\;,$$ where $p$ is the percentage of satisfied customers. This can be rewritten as $$p-\frac12\le m\cdot\frac{100}n<p+\frac12$$ and thence as $$\frac{n}{100}\left(p-\frac12\right)\le m<\frac{n}{100}\left(p+\frac12\right)\;.$$ Thus, you want the smallest integer $n$ such that the interval $$\left[\frac{n}{100}\left(p-\frac12\right),\frac{n}{100}\left(p+\frac12\right)\right)$$ contains an integer.
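To make the last criterion concrete, here is a small search sketch (my own illustration, not part of the original answer; `min_voters` is a hypothetical helper). Exact rational arithmetic avoids floating-point trouble at the interval endpoints:

```python
from fractions import Fraction
from math import ceil

def min_voters(p, truncate=False):
    """Smallest n such that some m/n (0 <= m <= n) displays as p percent."""
    if truncate:
        lo, hi = Fraction(p, 100), Fraction(p + 1, 100)            # [p/100, (p+1)/100)
    else:
        lo, hi = Fraction(2 * p - 1, 200), Fraction(2 * p + 1, 200)
    n = 1
    while True:
        m = ceil(lo * n)               # smallest candidate numerator
        if m < hi * n and m <= n:      # does [lo*n, hi*n) contain an integer?
            return n, m
        n += 1

print(min_voters(94))        # (16, 15): 15/16 rounds to 94%
print(min_voters(94, True))  # (17, 16): 16/17 truncates to 94%
```

For $p=94$ this returns $(16,15)$ with normal rounding and $(17,16)$ with truncation, matching the values above.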
H: Trying to find more information about "Darboux's method/theorem" on coefficients of an analytic function My supervisor briefly showed me a statement of something she called "Darboux's theorem," but I am having trouble finding more information about it on the internet. Here is what I have written down (some details may be incorrect): Let the function $R$ be analytic in a neighborhood about $z=0$ of radius $\rho$. Suppose it has exactly one singularity at distance $\rho$ from $z=0$ (not sure about this sentence). If $R$ can be written in the form $$R(z) = \left(1-\frac z\rho\right)^{-s} G(z) + H(z)$$ where $s \notin \{0, -1, -2,\ldots\}$, where $G$ and $H$ are analytic in neighborhoods about $z=0$ of radius larger than $\rho$, and $G(\rho)\ne 0$, then the coefficient of $z^n$ in the power series expansion of $R(z)$ has the asymptotic description $$[z^n] R(z) = \rho^{-n} n^{s-1} \frac{G(\rho)}{\Gamma(s)} \left(1 + O\left(\frac 1n\right)\right)$$ My guess is that it's called "Darboux's method" in English, but still I only have these two links (here and here), which don't have a statement that is exactly the above, although they seem close. Can anyone comment or help me find what I'm looking for? Also any corrections to what I've written above would be appreciated. AI: You should take a look at the book Analytic Combinatorics; the section on analytic methods contains an extensive discussion of Darboux's method and other related methods. The book is available here: http://algo.inria.fr/flajolet/Publications/books.html In general these types of results are known as "Tauberian theorems".
H: $\varepsilon$-$\delta$ proof for inverse function I have been struggling to prove the following claim without appealing to any theorem other than the definition of continuity. If anyone could give a step-by-step argument for it, that would be great. Many thanks! Let $f$ be a strictly increasing function on $[a,b]$. If $f$ is continuous on $[a,b]$, then $f^{-1}$ is continuous on $[f(a),f(b)]$. AI: To show $f^{-1}(x)$ is continuous at $y_0\in [f(a),f(b)]$, we assume $y_0=f(x_0)$ for some $x_0\in [a,b]$ (note this $x_0$ is unique). By definition, for any $\epsilon>0$, we need to find a $\delta>0$ such that $$ |f^{-1}(y)-f^{-1}(y_0)|<\epsilon\quad \text{when } |y-y_0|<\delta $$ Define $\delta:=\min\{f(x_0+\epsilon)-f(x_0),f(x_0)-f(x_0-\epsilon)\}$. If $x_0+\epsilon>b$, use $f(b)$ instead; if $x_0-\epsilon<a$, use $f(a)$ instead. Moreover, if $y_0=f(a)$ or $y_0=f(b)$, then $\delta=f(a+\epsilon)-f(a)$ or $\delta=f(b)-f(b-\epsilon)$, respectively. Now I will leave it to you to check that this $\delta$ is the one we want. Note: It seems I didn't use the continuity of $f(x)$, but what we need is that $f(x)$ be surjective, or rather, that $f^{-1}(x)$ be defined everywhere on $[f(a),f(b)]$.
H: Combinatorial interpretation of this number? It is straightforward to show that if $m,n\in\mathbb{Z}$ and $m\geq n$, then $$m\mid \gcd(m, n)\binom{m}{n}.$$ I'm trying to find a combinatorial interpretation of this fact, but I can't seem to come up with one. My two proofs are formal, and do not give me any combinatorial insight. AI: $\mathbb Z/m$ acts (by translations) on the set of $n$-element subsets of $\mathbb Z/m$. The action is not free, but the stabilizer of any $n$-element subset is a subgroup $H\subset\mathbb Z/m$ whose orbits in $\mathbb Z/m$ all have size $|H|$; since the subset is a union of such orbits, $|H|$ divides both $n$ and $m$, hence divides $\gcd(m,n)$. Every orbit of the action on $n$-element subsets therefore has size a multiple of $\frac m{\gcd(m,n)}$. So $\frac m{\gcd(m,n)}\mid\binom mn$.
H: Why the matrix of $dG_0$ is $I_l$. I am reading the proof of the Local Immersion Theorem in Guillemin & Pollack's Differential Topology on Page 15. But I got lost at the following statement: Define a map $G: U \times \mathbb{R}^{l-k} \rightarrow \mathbb{R}^{l}$ by $$G(x,z) = g(x) + (0,z).$$ The matrix of $dG_0$ is $I_l$. Could someone explain why this is true? Thanks. AI: Earlier on that page, it is said that since $dg_0: \Bbb R^k \longrightarrow \Bbb R^l$ is injective, we may assume that $$dg_0 = \begin{pmatrix} I_k \\ 0_{(l-k) \times k} \end{pmatrix}$$ by performing a change of basis. Hence we have that the map $$\tilde{g}: U \times \Bbb R^{l-k} \longrightarrow \Bbb R^l,$$ $$\tilde{g}(x, z) = g(x)$$ (which ignores $z$) has differential $$d\tilde{g}_0 = \begin{pmatrix} I_k & 0_{k \times (l-k)} \\ 0_{(l-k) \times k} & 0_{(l-k) \times (l-k)} \end{pmatrix}.$$ The map $$h: U \times \Bbb R^{l-k} \longrightarrow \Bbb R^l,$$ $$h(x,z) = (0, \dots, 0, z)$$ clearly has differential $$dh_0 = \begin{pmatrix} 0_{k \times k} & 0_{k \times (l-k)} \\ 0_{(l-k) \times k} & I_{l-k} \end{pmatrix}.$$ Therefore the given map $$G: U \times \Bbb R^{l-k} \longrightarrow \Bbb R^l,$$ $$G(x,z) = \tilde{g}(x ,z) + h(x,z)$$ has differential $$dG_0 = d\tilde{g}_0 + dh_0 = \begin{pmatrix} I_{k} & 0_{k \times (l-k)} \\ 0_{(l-k) \times k} & I_{l-k} \end{pmatrix} = I_l.$$
H: Valid Alternative Proof to an Elementary Number Theory question in congruences? So, I've recently started teaching myself elementary number theory (since it does not require any specific mathematical background and it seems like a good way to keep my brain in shape until my freshman college year) using Elementary Number Theory by Underwood Dudley. I've run across the following question: Show that every prime (except $2$ or $3$) is congruent to $1$ or $3$ (mod $4$) Now I am aware of one method of proof, which is looking at all the residues of $p$ (mod $4$) and eliminating the impossible ones. But before that, I found another possible way of proving it. Since $p$ must be odd: $p \equiv 1$ (mod $2$). We can write this as: $2\mid(p-1)$. But since $p$ is odd we can also say: $2\mid(p-3)$. Since if $a\mid b$ and $c\mid d$ then $ac\mid bd$, it follows that: $4\mid(p-1)(p-3)$. The $3$ possibilities then are: $1.$ $4\mid(p-1)(p-3)$ $2.$ $4\mid(p-1)$ $3.$ $4\mid(p-3)$ Thus, by the definition of congruence, we have the 3 possibilities: $1.$ $p \equiv 1$ (mod $4$) $2.$ $p \equiv 3$ (mod $4$) $3.$ $4\mid(p-1)(p-3)$, i.e., $4\mid p^2-4p+3$, therefore $p^2-4p+3 = 4m$. Then $p^2+3 = 4m +4p$. Set $m+p=z$. Then from $p^2+3 = 4z$ it follows that $p^2 \equiv -3$ (mod $4$) (Is this correct?) Can anyone please tell me if this is a valid proof? Thank you in advance. EDIT: Taken the first possibility into account. I also realize that there are much simpler proofs but I was curious as to whether this approach also works. AI: I don't think there is anything wrong with your proof, but to put it more simply, the congruence $$ p \equiv 1 \pmod{2} $$ is equivalent to $$ \text{either}\quad p \equiv 1 \pmod{4}, \quad\text{or}\quad p \equiv 3 \pmod{4}. $$ This is simply because if $p \equiv 0, 2 \pmod{4}$, then $p \equiv 0 \pmod{2}$. And then, note that the prime $3$ does fit in: $3 \equiv 3 \pmod{4}$.
H: differential equation with parameter I've got a problem with the following example: Given the equation $\frac{dx}{dt} = -x + x^7$ with initial condition $x(0) = \lambda$, where $x=x(t, \lambda)$. Find $\frac{ \partial x(t, \lambda)}{\partial \lambda} \mid _{\lambda = 0}$. What I've got now is: We can write: $x(t, \lambda) = \int_{0}^{t} -x + x^7 ds + C(t,\lambda)$ but since $x(0) = \lambda$, we've got $\lambda = x(0, \lambda) = \int_{0}^{0} -x + x^7 ds + C(0,\lambda) = C(0, \lambda)$ and so $x(t, \lambda) = \lambda + \int_{0}^{t} -x + x^7 ds$. Now we can apply $\frac{ \partial }{\partial \lambda}$ to both sides getting: $\frac{ \partial x(t, \lambda)}{\partial \lambda} = 1 + \int_{0}^{t}{ \frac{ \partial}{\partial \lambda}(-x(s, \lambda)) + \frac{ \partial}{\partial \lambda}(x(s, \lambda)^7) }ds$. But now I have no idea how I can calculate the value of that integral. Does anyone have an idea how to move further with that solution? Or maybe that is the wrong path and there exists an easier way to solve the problem? Thanks in advance for all the help. AI: Let $y(t,\lambda) = \dfrac{\partial}{\partial \lambda} x(t,\lambda)$. Differentiating the differential equation $ \dfrac{\partial x}{\partial t} = - x + x^7$ with respect to $\lambda$, we get $\dfrac{\partial y}{\partial t} = - y + 7 x^6 y$, with $y(0,\lambda) = 1$. Solving this linear homogeneous initial value problem for $y$, we get $$y(t,\lambda) = \exp\left(\int_0^t (7 x(s,\lambda)^6 - 1)\ ds\right)$$ Now plug in $\lambda=0$: since $x \equiv 0$ solves the equation with initial condition $x(0)=0$, we have $x(s,0)=0$ for all $s$, and therefore $$\frac{\partial x(t,\lambda)}{\partial \lambda}\Big|_{\lambda=0} = y(t,0) = e^{-t}.$$
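A numeric cross-check of that answer (my own sketch, not part of the original): differentiate the solution with respect to $\lambda$ by a central finite difference and compare with $e^{-t}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def x_at(t, lam):
    """Solve x' = -x + x**7, x(0) = lam, and return x(t)."""
    sol = solve_ivp(lambda s, x: -x + x**7, (0, t), [lam],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

t, h = 2.0, 1e-6
finite_diff = (x_at(t, h) - x_at(t, -h)) / (2 * h)  # d x(t, lambda) / d lambda at 0
print(finite_diff, np.exp(-t))                      # both approximately 0.13534
```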
H: Proof of Abel-Ruffini's theorem From Galois Theory (Rotman): I wrote down the whole proof, but my question is only about the third paragraph. There exists a quintic polynomial $f(x) \in \Bbb{Q}[x]$ that is not solvable by radicals. Proof If $f(x)=x^5 -4x + 2$, then $f(x)$ is irreducible over $\Bbb{Q}$, by Eisenstein's criterion. Let $E/\Bbb{Q}$ be the splitting field of $f(x)$ contained in $\Bbb{C}$, and let $G=Gal(E/\Bbb{Q})$. If $\alpha$ is a root of $f(x)$, then $[\Bbb{Q}(\alpha): \Bbb{Q}]=5$, and so $$[E : \Bbb{Q}] = [ E : \Bbb{Q}(\alpha)][\Bbb{Q}(\alpha) : \Bbb{Q}] = 5[E : \Bbb{Q}(\alpha)].$$ By Theorem 56 $|G|=[E: \Bbb{Q}]$ is divisible by 5. We now use some calculus; $f(x)$ has exactly two critical points, namely, $\pm (4/5)^{1/4} \approx \pm 0.946$, and $f((4/5)^{1/4}) < 0$ and $f(-(4/5)^{1/4}) > 0$; it follows easily that $f(x)$ has exactly three real roots (they are, approximately, $-1.5185$, $0.5085$, and $1.2435$; the complex roots are $-0.1168 \pm 1.4385i$.) Regarding $G$ as a group of permutations on the 5 roots, we note that $G$ contains a 5-cycle (it contains an element of order 5, by Cauchy's theorem, and the only elements of order 5 in $S_5$ are 5-cycles). The restriction of complex conjugation, call it $\sigma$, is a transposition, for $\sigma$ interchanges the two complex roots while it fixes the three real roots. I was a bit confused here about how we know that $\sigma$, the restriction of complex conjugation to $E$, is a transposition. Could we not exchange a complex number with a real number? In other words, how do we know that complex numbers must be sent to other complex numbers? By Theorem G.39, $S_5$ is generated by any transposition and any 5-cycle, so that $$G = Gal(E/\Bbb{Q}) \cong S_5$$ is not a solvable group, by Theorem G.34, and Theorem 74 shows that $f(x)$ is not solvable by radicals. AI: Conjugation maps $a + i b$ to $a - i b$, where $a, b$ are real numbers. So the complex numbers fixed by conjugation, that is, those complex numbers for which $a + i b = a - i b$, are exactly those for which $b = 0$, that is, the real numbers. So the three real roots are fixed, and the two complex, non-real ones cannot be fixed, so they must be exchanged by conjugation. (Since the coefficients of $f(x)$ are real, the conjugate of a root of $f(x)$ must be a(nother) root of $f(x)$.)
H: Minpoly and Charpoly of block diagonal matrix I am currently struggling with an exercise where I have to treat a Block diagonal matrix (so it is a square matrix, where square block matrices are down the diagonal). Now I was wondering whether we can say something about the characteristic or minimal polynomial of the whole matrix, if we know the characteristic and minimal polynomial of the blocks? Anything you know could be helpful! AI: The characteristic polynomial of the block matrix $A$ is just the product of the characteristic polynomials of the blocks, just remember that the determinant of a block diagonal matrix is the product of the determinants of the blocks, and apply this to the block diagonal matrix $x I - A$. The minimal polynomial is slightly subtler, as it is the $\operatorname{lcm}$ of the minimal polynomials of the blocks. To prove this, notice first that the minimal polynomial $m(x)$ of $A$ vanishes when computed on each block, so the minimal polynomial $m_i(x)$ of the $i$-th block divides $m(x)$. So $\operatorname{lcm}(m_i(x))$ divides $m(x)$, but it's easy to see that $\operatorname{lcm}(m_i(x))$ annihilates each block, so $\operatorname{lcm}(m_i(x)) = m(x)$. Thanks to @Lipschitz for the correction in the comment below.
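As a quick numeric illustration of both statements (my own sketch, not part of the answer), take blocks with characteristic polynomials $x^2$ and $x$; the block diagonal matrix then has characteristic polynomial $x^3$ but minimal polynomial $x^2=\operatorname{lcm}(x^2,x)$:

```python
import numpy as np

B1 = np.array([[0., 1.],
               [0., 0.]])   # charpoly x^2, minpoly x^2
B2 = np.array([[0.]])       # charpoly x,   minpoly x

A = np.block([[B1, np.zeros((2, 1))],
              [np.zeros((1, 2)), B2]])

# Characteristic polynomial is the product x^2 * x = x^3:
print(np.poly(A))              # [1, 0, 0, 0] up to floating-point noise

# Minimal polynomial is lcm(x^2, x) = x^2, strictly smaller than x^3:
print(np.allclose(A @ A, 0))   # True:  x^2 annihilates A
print(np.allclose(A, 0))       # False: x alone does not
```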
H: Prime Numbers and Multiples? Other than the primes themselves, is every number a product of 2, 3, 5 and 7 (and of other prime numbers as well)? For example, if we need 8, it's the combination $2\cdot2\cdot2$, and 15 is $5\cdot3$, etc. AI: No: $$11\cdot 13 = 143$$ $$13 \cdot 17 = 221$$ $$\vdots$$ The most we can say is what the Fundamental Theorem of Arithmetic tells us: Every integer greater than $1$ is a prime number or a product of prime numbers. Edit: (Revised question) Yes, except $1$ is neither prime nor is it a product of prime numbers.
H: Do any $3$ linearly independent vectors span all of $\mathbb{R}^3$? I am given $3$ vectors that are linearly independent. I am trying to figure out if they span all of $\mathbb{R}^3$, to declare them a basis. AI: Yes, because $\mathbb R^3$ is $3$-dimensional (meaning precisely that any three linearly independent vectors span it). To see this, note that if we had $3$ linearly independent vectors which did not span $\mathbb R^3$, we could expand this to a collection of $4$ linearly independent vectors. Writing these in a matrix and performing row-reduction shows that this is impossible.
H: Proofs from the "Ugly Book" There is a famous saying in mathematics from Paul Erdős: "You don't have to believe in God, but you should believe in The Book." "The Book" is an imaginary book in which God has written down the best and most elegant proofs for mathematical theorems. If there is a book written by God, why not a book from the Devil? I mean, a book with the ugliest proofs, which are nevertheless the best ones we have as accepted proofs. I don't mean making a horrible proof on purpose, but sometimes an ugly proof is all you have. I wish you could share some theorem from the Ugly Book, some theorem proved by a really ugly proof (and yet the only one there is). I'm asking this not only for fun, but because I'm curious about how ugly proofs can be. AI: Beauty is quite subjective. One may prove something in 2 lines using the newly developed supersymmetric coffee spaces, and that may be cool to some. Suppose you have another proof of the same result that uses only some primitive set of axioms. This proof may potentially be 1000 pages long, but it will be more beautiful to some (for example me), as it is a demonstration of the fact that all that mind-boggling complexity is actually the result of addition, multiplication, etc., and some first-order logic.
H: Prove $\sum_{n=2}^{\infty}{\frac{1}{n\ln (n)}}$ diverges Prove that $\sum \limits_{n=2}^{\infty}{\dfrac{1}{n\ln (n)}}$ diverges. I know it's a well-known series, but all the proofs I've seen are based on the integral test and the Cauchy condensation test. I need to prove it using only the following tests: direct/limit comparison, root, d'Alembert, Leibniz, since they are all I have studied so far. Regards and thanks a lot. AI: Group the terms into blocks of size $2^n$. The sum over the $n$-th block will be bigger than $$ 2^{n} \frac{1}{2^{n+1} \log{2^{n+1}}} = \frac{1}{2(n+1)\log 2},$$ so the whole series dominates something like the harmonic-type series $\sum \frac{1}{2\ln(2)\,(n+1)}$, which diverges (hope you can use this fact).
H: When $\;\text{FALSE}\implies P(x),\;$ is $P(x)\;$ false? Say we know that $P(k) \implies P(k+3)$. Then if we know $P(1)$ is true, we know $P(4), P(7) \dots$ are also true. However if we know $P(1)$ is false, does that mean $P(4), P(7) \dots$ are also false? I'm a little confused about whether we can trust an implication which is of the form false $\implies P(x)$. AI: No, in your example, knowing $P(1)$ is false tells us absolutely nothing about the truth value of $P(n)$, $n> 1$. Even if we know in advance that the entire implication is true, so that it is true that $P(k) \implies P(k+3)$, and we also learn that $P(k)$ is false, then we still cannot conclude anything about the truth or falsity of $P(k + 3)$, even though the implication is true. We have defined implication (aka the material conditional), as it is used in logic and in math, such that $$\text{FALSE} \implies P(x)\quad \text{ is always true}.$$ Indeed, by definition: $$P \implies Q \quad \text{is FALSE if and only if}\;\; P \;\;\text{is true}\;\;{\bf and}\;\; Q \;\;\text{is false}.$$ Recall the truth-table for the implication $P\implies Q$, which gives all possible truth-value assignments to $P$ and $Q$ and the resultant truth value of $P\implies Q$: $$\begin{array}{c|c|c} P & Q & P\implies Q\\\hline T & T & T\\ T & F & F\\ F & T & T\\ F & F & T \end{array}$$
H: Proving $g(\omega)=\frac{1}{2\pi i}\int_{\gamma}\frac{zf'(z)}{f(z)-\omega}\, dz$ where $g$ is the inverse of $f$ I have the following exercise: Let $G$ be an open subset of $\mathbb{C}$ and let $f$ be a one to one function in $H(G)$ such that $f'(z)\neq0$ for all $z\in G$. For each $\omega\in f(G)$ let $g(\omega)$ denote the unique complex number $z\in G$ for which $f(z)=\omega$. Suppose that the closed disk $\overline{D(z_{0},r)}\subseteq G$ and that $\gamma:[0,2\pi]\to G$ is the curve given by $\gamma(t)=z_{0}+re^{it}$. Prove, with the help of the residue theorem, that, for every $\omega\in f(D(z_{0},r))$ $$g(\omega)=\frac{1}{2\pi i}\int_{\gamma}\frac{zf'(z)}{f(z)-\omega}\, dz$$ Your solution should also explain why this integral is well defined. I will divide my question into two parts: I have a solution for the case that $g(\omega)\neq0$, but there is an argument made in it that I can't totally justify: Consider the function $$ \frac{zf'(z)}{f(z)-\omega} $$ It has a singular point when $$ f(z)=\omega $$ Since $f$ is $1-1$ and $\omega$ is in the image of $f$, there exists a unique point $z_{1}$ s.t. $$ f(z_{1})=\omega $$ Since $f'(z)\neq0$ we have that $$ h(z):=f(z)-\omega $$ has a zero of order $1$. Since $z_{1}$ maps to $\omega$, we have $z_{1}f'(z_{1})\neq0$ and $f(z_{1})-\omega=0$, $(f-\omega)'=f'\neq0$, hence $$ Res\frac{zf'(z)}{f(z)-\omega} $$ at $z=z_{1}$ is $$ \frac{z_{1}f'(z_{1})}{(f(z)-\omega)'_{z_{1}}}=z_{1} $$ and by the residue theorem $$ \frac{1}{2\pi i}\int_{\gamma}\frac{zf'(z)}{f(z)-\omega}=\frac{1}{2\pi i}2\pi iz_{1}=z_{1}=g(\omega) $$ The part that I can't justify is why $h$ has a zero of order $1$. I think it's because $f$ is $1-1$, but I'm not sure how to use that. 2) I don't think it holds in the case $g(\omega)=0$, since then $$z_{1}f'(z_{1})=0\cdot f'(0)=0$$ and so I can't use the theorem I used. Can someone please help me with this case? Added later: I am referring to the following theorem, which can be found on page $253$ of the book Complex Variables and Applications by Brown and Churchill: Let two functions $p,q$ be holomorphic at a point $z_{0}$. If $p(z_{0})\neq0$, $q(z_{0})=0$ and $q'(z_{0})\neq0$ then $$Res_{z=z_{0}}\frac{p(z)}{q(z)}=\frac{p(z_{0})}{q'(z_{0})}$$ AI: About 1: notice that if a function has a zero of order $n>1$ at a point, then its derivative vanishes at that point. Here $h'(z)=f'(z)\neq 0$, which excludes that case, so the zero of $h$ at $z_1$ must have order $1$.
H: Better Proofs Than Rudin's For The Inverse And Implicit Function Theorems I am finding Rudin's proofs of these theorems very non-intuitive and difficult to recall. I can understand and follow both as I work through them, but if you were to ask me a week later to prove one or the other, I couldn't do it. For instance, the use of a contraction mapping in the inverse function theorem seems to require one to memorize, at the very least, a non-obvious (at least to me) function (viz. $\phi(\mathbf{x}) = \mathbf{x} + \mathbf{A}^{-1}(\mathbf{y}-\operatorname{f}(\mathbf{x}))$) and constant (viz. $\lambda^{-1} = 2 \Vert \mathbf{A}^{-1}\Vert$), where $\mathbf{A}$ is the differential of $\operatorname{f}$ at $\mathbf{a}$. The implicit function theorem proof, while not as bad, also requires one to construct a new function without ever hinting as to what the motivation is. I searched the previous questions on this site and haven't found this addressed, so I figured I'd ask. I did find this proof to have a much more intuitive approach to the inverse function theorem, but would like to see what proofs are preferred by others. AI: Suppose you want to find the inverse of the mapping $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ near a point $x_o$ where $F'(x_o)$ is invertible. The derivative (Jacobian matrix) provides an approximate form for the map $F(x) = F(x_o)+F'(x_o)(x-x_o)+\eta$. If you set $y = F(x)$ and ignore the error term $\eta$ then solving for $x$ gives us the first approximation to the inverse mapping. $$ x = x_o+[F'(x_o)]^{-1}(y-F(x_o)). $$ Then, you iterate. The technical details are merely to insure this iteration does indeed converge to the inverse mapping, but at the start, it's just using the derivative to linearize the problem. I don't know if this helps or not, but really the approach is almost brute force, to invert $F(x)=y$ what do you do? You solve for $x$. We can't do that abstractly for $F$ so instead we solve the next best thing, the linearization. Then the beauty of the contraction mapping technique completes the argument.
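Here is a small numeric sketch of that iteration (my own illustration, not Rudin's; the map $F$ and the helper `invert` are made up for the example). Near $0$ this $F$ satisfies $F(0)=0$ and $DF(0)=I$, so $\mathbf{A}^{-1}$ is just the identity:

```python
import numpy as np

# A made-up map with F(0) = 0 and DF(0) = I, so A^{-1} is the identity.
def F(v):
    x, y = v
    return np.array([x + y**2, y + x**3])

A_inv = np.eye(2)

def invert(target, steps=50):
    """Iterate phi(x) = x + A^{-1}(target - F(x)), the contraction from the proof."""
    x = np.zeros(2)
    for _ in range(steps):
        x = x + A_inv @ (target - F(x))
    return x

y = np.array([0.10, 0.05])
x = invert(y)
print(x, F(x))   # F(x) should reproduce y to high accuracy
```

Each pass refines the linearized guess; the contraction-mapping hypotheses are exactly what guarantee this loop converges for targets close enough to $F(0)$.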
H: An odd integer minus an even integer is odd. Prove or Disprove: An odd integer minus an even integer is odd. I am assuming you would define an odd integer and an even integer. Then you would use quantifiers to show the result is odd or even. I am unsure of how to show this... AI: An even number is an integer which is divisible by $2$. In other words, $n$ is even if and only if $n=2m$ for some integer $m$. An odd number is a number which is $1$ more (or less) than an even number. In other words, $n$ is odd if and only if $n=2m+1$ for some integer $m$. So suppose $n$ is odd and $n'$ is even. Write $n=2m+1$ and $n'=2m'$. What can you say about $n-n'$?
H: Is $\binom{52}{n}\cdot\binom{52-n}{n}$ maximised by $n=\frac{52}{3}$? If so why? I've been thinking a lot about cards, and recently about combinations of combinations of hands... if you drew $n$ cards from a deck of $52$, and then drew another $n$ cards, how many combinations are there? The number of combinations appears to peak at about $1\times10^{23}$, so I threw in my best guess at the peak number, 17. Then I realised that it would have to be the midpoint between the maxima of $\binom{52}{n}$ and $\binom{52-n}{n}$, which is going to be $\frac{26+13}{2}=17\frac13$ or $\frac{52}{3}$. Putting that in gives me $1.01446\times10^{23}$! But all of this is from intuition, and I have no reason to believe this holds in all cases, nor do I know of the general case. Is that number the maximising number? If so why, and is there a general case? My guess at a general solution would be something like: $$\prod_{i=0}^n\binom{m-i\cdot n}{n}\text{ is maximised by }\begin{cases}\frac{m}{T_n(i+1)+1} & \text{if } i=0\\ \frac{m}{T_n(i)+1} & \text{if } i>0\end{cases}$$ But I have no formal mathematics training (just university-level physics), so I'm not certain. Am I right? Why? AI: Let us define $\binom{n}{k}$ for non-integer $n$ and $k$ by: $$ \binom{n}{k} = \frac{\Gamma(n+1)}{\Gamma(n-k+1)\Gamma(k+1)} $$ where $\Gamma(n) = \int_0^\infty x^{n-1} e^{-x} dx$ is the analytic continuation of the factorial (with $\Gamma(n+1) = n!$ for integer $n$). We must do this because the factorial function is only defined for integer arguments; the Gamma function is one way to extend it, and it matches up with the factorial at the integers. Now that we have the Gamma function, our problem reduces to calculus. We have that: $$ \binom{52}{n} \binom{52-n}{n} = \frac{\Gamma(53)}{\Gamma(53-n)\Gamma(n+1)} \cdot \frac{\Gamma(53-n)}{\Gamma(53-2n)\Gamma(n+1)} = \frac{\Gamma(53)^2}{\Gamma(53-2n)\Gamma(n+1)^2}$$ The numerator is a large constant ($(52!)^2$), so let us worry about the denominator instead. The larger the denominator is, the smaller the fraction will be; to maximize the fraction, we should minimize the denominator. So: $$ 0 = \frac{d}{dn} \Gamma(53-2n)\Gamma(n+1)^2 $$ $$ 0 = -2\Gamma^\prime(53-2n)\Gamma(n+1)^2 + 2\Gamma^\prime(n+1)\Gamma(n+1)\Gamma(53-2n) $$ $$ \Gamma^\prime(53-2n)\Gamma(n+1) = \Gamma^\prime(n+1)\Gamma(53-2n) $$ Which looks extremely ugly. One obvious way they could be equal is if $n+1 = 53-2n$, because then, by simple substitution, the two sides are equal. Solving for $n$, we get $n = \frac{52}{3}$.
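A quick numeric check (my own sketch, not part of the answer) confirms both the integer maximizer $17$ and the continuous maximizer near $52/3$, using `lgamma` to avoid enormous intermediate values:

```python
from math import lgamma, comb

def log_count(n):
    # log of Gamma(53)^2 / (Gamma(53 - 2n) * Gamma(n + 1)^2)
    return 2 * lgamma(53) - lgamma(53 - 2 * n) - 2 * lgamma(n + 1)

# Over the integers, C(52, n) * C(52 - n, n) peaks at the integer nearest 52/3:
print(max(range(27), key=lambda n: comb(52, n) * comb(52 - n, n)))   # 17

# The continuous version peaks essentially at n = 52/3:
grid = [i / 1000 for i in range(1000, 26000)]
print(max(grid, key=log_count))   # about 17.333
```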
H: Problem 3.1.2 in Liu -- Omission in problem statement? Exercise 3.1.2 in Liu's Algebraic Geometry and Arithmetic Curves is as follows. Let $f:X\rightarrow Y$ be a morphism of schemes. For any scheme $T$, let $f(T):X(T)\rightarrow Y(T)$ denote the map defined by $f(T)(g)=f\circ g$. Show that $f(T)$ is bijective for every $T$ if and only if $f$ is an isomorphism (use the identity morphisms on $X$ and $Y$). I feel this is poorly stated. In order to have sections over a scheme $T$, don't $X$ and $Y$ need to be $T$-schemes? That is, don't we need structure morphisms $X\rightarrow T$ and $Y\rightarrow T$, and for $f$ to be compatible with these? Or am I misunderstanding something? What should the correct wording be? AI: Andreas Blass is correct. For arbitrary schemes $X$ and $T$, $X(T):=\mathrm{Hom}_{\mathrm{Sch}}(T,X)$, and since every scheme admits a unique morphism to $\mathrm{Spec}(\mathbf{Z})$, the category of schemes is the same as the category of $\mathbf{Z}$-schemes, and $\mathrm{Hom}_{\mathrm{Sch}}(T,X)=\mathrm{Hom}_{\mathrm{Sch}/\mathbf{Z}}(T,X)$. Liu's statement is true and correct as stated. If you want to consider an arbitrary base, the statement would be: a morphism $f:X\rightarrow Y$ of $S$-schemes ($S$ being the base) is an isomorphism if and only if the induced map $\mathrm{Hom}_{\mathrm{Sch}/S}(T,X)\rightarrow\mathrm{Hom}_{\mathrm{Sch}/S}(T,Y)$ is bijective for all $S$-schemes $T$. These $\mathrm{Hom}$ sets are also usually denoted $X(T)$ and $Y(T)$ if it is understood that you are working in the category of $S$-schemes.
H: Finding the dual of this primal LP. I am going over sample questions from a sample exam, and I got stuck on the following question. I need to determine the dual of this LP: $\min: c^Tx + d^Tu \\ \text{s.t.}: Ax + Du = b\\ x \ge 0$ $A$ is an $m$ by $n$ matrix, $D$ is an $m$ by $p$ matrix. So this is what is throwing me off when I try this question. I know that the dual of a regular LP like: $\min: c^Tx \\ \text{s.t.}: Ax = b \\ x \ge 0$ becomes $\max: b^Ty \\ \text{s.t.}: A^Ty \le c$ However, in this case, if I try applying the same type of strategy, I get stuck as follows: $\max: b^Ty \\ \text{s.t.}: A^Ty + D^Ty \; (\le, =) (c, d)$ The problem is that the dimensions of $A^T$ and $D^T$ don't agree anymore... so I tried adding an extra set of dual variables $z$... but it doesn't really get me anywhere, since $b$ is a vector in $R^m$ so any dual variables would have to be of that dimension as well. Any hints/suggestions? AI: You could try deriving it; I doubt you can use the direct formulae for this case. (Just out of curiosity, is it a 2-stage stochastic programming problem?) I present an explanation for a simpler case. \begin{align} \min &\quad c'x\\ Ax&=b\\ \end{align} In order to take the dual, you first take the Lagrangian. \begin{align} L(x;\lambda)&= c'x-\lambda '(Ax-b)\\ \nabla_xL(x;\lambda)&= c-A'\lambda=0\\ \end{align} Thus, we got the constraint for optimality: $A'\lambda=c$. Now, the objective function. \begin{align} L(x;\lambda)&= c'x-\lambda '(Ax-b)\\ L(x;\lambda)&= c'x-\lambda 'Ax+\lambda 'b\\ L(x;\lambda)&= (c-A'\lambda )'x+\lambda 'b\\ L(x;\lambda)&= 0'x+\lambda 'b\\ L(x;\lambda)&= \lambda 'b\\ \end{align} Now, when we deal with the Lagrangian, we have a $\max\min$ operator. Thus, the objective function is: \begin{align} \max_{\lambda} \min_x L(x;\lambda)&= \max_{\lambda} \min_x \lambda 'b\\ \max_{\lambda} \min_x L(x;\lambda)&= \max_{\lambda} \lambda 'b \end{align} That is all: we have our objective function $\max_{\lambda} \lambda 'b$ and the constraint $A'\lambda=c$. You can work out the algebra along the same lines for your specific case; I don't think the results will be as clean (maybe they are).
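Working through the same Lagrangian algebra for the original problem suggests (as a sketch, not the answerer's derivation) the dual $\max\, b^Ty$ subject to $A^Ty \le c$ and $D^Ty = d$: the sign-constrained $x$ gives an inequality, the free $u$ gives an equality, and there is a single dual vector $y\in\mathbb R^m$, so the dimensions do agree. Here is a small sanity check with scipy on made-up data; the instance is arbitrary, chosen only so both problems are bounded:

```python
import numpy as np
from scipy.optimize import linprog

# Arbitrary illustrative data (m = 2, n = 2, p = 1).
A = np.eye(2)
D = np.array([[1.0], [1.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])
d = np.array([1.0])

# Primal: min c'x + d'u  s.t.  Ax + Du = b,  x >= 0,  u free.
primal = linprog(np.concatenate([c, d]),
                 A_eq=np.hstack([A, D]), b_eq=b,
                 bounds=[(0, None), (0, None), (None, None)])

# Conjectured dual: max b'y  s.t.  A'y <= c,  D'y = d,  y free.
dual = linprog(-b, A_ub=A.T, b_ub=c, A_eq=D.T, b_eq=d,
               bounds=[(None, None), (None, None)])

print(primal.fun, -dual.fun)   # both 2.0, as strong duality predicts
```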
H: Self-adjoint operator on a finite-dimensional vector space Let $V$ be a finite-dimensional inner product space and let $x,y\in V$ be nonzero vectors. If there is a self-adjoint operator $A:V\rightarrow V$ such that $A(x)=y$ and $\langle A(v),v\rangle\geq0$ for all $v\in V$, then $\langle x,y\rangle>0$. I think we can conclude the following inequality $$\langle x,y \rangle=\langle x,A(x) \rangle=\langle A^*(x),x \rangle=\langle A(x),x\rangle\geq 0$$ but I'm not able to show that strict inequality holds (or, in other words, that equality is not possible). Can someone give me a hint? AI: Assume for a contradiction that $(x,y)=0$. Then take an orthonormal basis $(e_j)$ of $V$ starting with $$ e_1=\frac{x}{\|x\|}\qquad e_2=\frac{y}{\|y\|}. $$ The matrix $M$ of $A$ in this orthonormal basis must be symmetric positive semidefinite, and it looks like $$ M=\pmatrix{0&t&0\\ t&*&*\\ 0&*&*}\qquad t=\frac{\|y\|}{\|x\|}>0 $$ by blocks, where the first and the second row are the actual first and second rows of $M$, the remaining blocks being of the ad hoc size. Can you see a contradiction? Consider the restriction of the quadratic form $(Ax,x)$ to the span of $\{e_1,e_2\}$, whose matrix is the upper-left $2\times 2$ block of $M$. Since the restriction should be positive semidefinite as well, its eigenvalues must be nonnegative, hence so must its determinant; but that determinant is $-t^2<0$. Contradiction.
H: Change of limits of integration: $ \int_2^x \frac{\pi(x/u)}{\log u}du$ This is a change of variables that I do not see. $\pi(v)$ is the number of primes not exceeding $v.$ $$I_1 = \int_2^x \frac{\pi(x/u)}{\log u}du$$ becomes via $v = x/u$ $$I_2 = x\int_2^{x/2}\frac{\pi(v)}{\log x- \log v}\frac{dv}{v^2} $$ I see that if we let $v = x/u$ we have $du = -\frac{x}{v^2}\,dv$ so $$- x\int_2^? \frac{\pi(v)}{\log(x/v)} \frac{dv}{v^2} $$ I understand the expansion of $\log(x/v)$ but I don't see where the (-) sign went nor do I understand how the new upper limit is found. This occurs on p. 206 of Landau's Handbuch with very faded type. Thanks for assistance. AI: The lower limit on $I_2$ should be $1$, not 2: when $u=2$, $v=x/2$ and when $u=x$, $v=1$. Thus $$ \int_2^x\frac{\pi(x/u)}{\log(u)}du=-x\int_{x/2}^1\frac{\pi(v)}{\log(x/v)}\frac{dv}{v^2}=x\int_1^{x/2}\frac{\pi(v)}{\log(x/v)}\frac{dv}{v^2} $$ The negative sign goes away when we flip the limits of integration (integrating from 1 to $x/2$ instead of from $x/2$ to 1). (And since $\pi(v)=0$ for $v<2$, the integral from $1$ to $x/2$ equals the integral from $2$ to $x/2$, which reconciles this with the lower limit $2$ in Landau's formula.)
H: self studying advice on analysis I am trying to learn analysis on my own but there are times when I can't solve the problem or I get the solution wrong after looking it up, but I will only look up the problems online after I am completely done trying the problems out. The solutions I find to some of these problems use previous theorems which I couldn't even think of using on my own. For example I tried proving if $X$ and $Y$ are sequences such that $X$ converges to $x \neq 0$ and $XY$ converges then $Y$ also converges. I thought over this problem for a while until I gave up and when I looked up the solution they used an idea called the tail end of a sequence which I would never have thought of. The section I am working on currently is limits of sequences from the Bartle book (chapter 3). What I am wondering is whether I should keep progressing to each new section and leave behind the problems I did not get until I'm finished with the entire chapter? What is the best way that I can learn to tie these abstract theorems into my proofs? AI: Don't sweat it if you get a few problems wrong, especially if you worked on them for a while before looking up the solution. There are plenty of good problems in analysis to try, just keep trying new ones, seeing what you can do, then seeing how an expert solves it (i.e. looking at a solution). Eventually you will pick up the tricks and you'll be able to solve the problems on your own! You should move on as soon as you mostly understand the solutions to all the problems in a section, even if you didn't come up with those solutions on your own. Then, continually come back to sections you didn't understand before - you'll notice that old material will seem easier the second or third time around. I also highly recommend getting a second (or third) book to harvest problems and ideas from. Also, keep hanging around math.stackexchange and you'll probably pick up a few tricks here and there =)
H: Considering a sum of a monotonically increasing and decreasing sequence. The following is the problem that I am working on. Let $\{z_n\} = \{x_n\}+\{y_n\}$ be a sequence where $\{x_n\}$ is monotonically increasing, $\{y_n\}$ monotonically decreasing, and $\{z_n\}$ is bounded. Is $\{z_n\}$ convergent? What if $\{x_n\}$ and $\{y_n\}$ are also bounded? I can clearly see and prove that in the second case, $\{z_n\}$ must converge to the sum of the limits of the two sequences (because they exist). However, this is what I think about the case where only $\{z_n\}$ is bounded. Intuitively I want to say that it would be nice if $\{z_n\}$ converges, but that sounds too good. So I considered the following cases. If $\{x_n\}$ increases "faster" than $\{y_n\}$ decreases, $\{z_n\}$ will become a monotonically increasing sequence that is bounded, thus it will converge to the sup of $\{z_n\}$. If $\{y_n\}$ decreases "faster" then by a similar argument $\{z_n\}$ will converge to the inf. If their rates of increase and decrease are equal, then it's a constant sequence, and the limit exists trivially. But I am not 100% confident that there doesn't exist a "rate of increase and decrease that lies in between" so that $\{z_n\}$ eventually oscillates or something. Can someone help me out? AI: What about something like $ x_n = n + .5 \sin (n) $ and $ y_n = -n + .5 \sin(n) $? Both are monotone, since $|\sin(n+1)-\sin(n)| = |2\cos(n+\tfrac12)\sin(\tfrac12)| < 1$, yet their sum is $z_n = \sin(n)$, which is bounded but has no limit.
H: Why does a non-zero density function not imply infinitude of what it measures? Consider the following density function for the twin primes: Numbers $x-2$, $x-4$ are twin primes iff: $x \ne 2,4 \ mod \ 2 $ $x \ne 2,4 \ mod \ 3 $ $x \ne 2,4 \ mod \ 5 $ $x \ne 2,4 \ mod \ 7 $ ... $x \ne 2,4 \ mod \ (\text{max prime} < x-2) $ Intuitively, given this definition, the rough frequency of twin prime pairs around a number $N$ would be: $$(1 - \tfrac{1}{2})(1 - \tfrac{2}{3})(1 - \tfrac{2}{5})\cdots(1 - \tfrac{2}{p}) = (\tfrac{1}{2})(\tfrac{1}{3})(\tfrac{3}{5})\cdots(\tfrac{p-2}{p})$$ if $p$ is the largest prime less than $N$. Since we know there are infinitely many prime numbers, we know that for any positive integer $N$ this "density" will never be $0$, and therefore the density of twin primes with respect to the natural numbers will always be greater than $0$. Does this not imply that there must be infinitely many twin primes? What am I missing here? AI: There are several issues. You need to use the observed / incidence count, instead of the probabilistic count. For example, you cannot argue that "Since $3 < 10$ and there are 4 primes less than 10, hence $3$ is a prime with probability $\frac{4}{10}$." No, in this case, it is either a prime, or it is not a prime. That the incidence probability is non-zero for all $N$ merely tells you that at least 1 example exists. For example, consider the incidence probability that a number is $1$. This probability is $\frac{1}{N}$, which is non-zero, yet the number 1 only occurs a finite number of times. To begin to show that there are infinitely many such incidences, you need to show that one must appear within a certain interval. This requires you to know the incidence probability. For example, we know (from the prime number theorem) that there is a proportion $x(n)$ of primes that are less than $n!+n$. What is the probability that there are primes in the range $n!+1$ to $n!+n$? It is either 0 or 1, and is independent of $x$. This has nothing to do with the incidence probability.
H: Comparing exponents. Is there an easy way to compare $a^m$ and $b^n$ where $1 \le a, b, m, n \le 1000$ without raising the bases to their powers (since that can easily cause an overflow)? Thank you. AI: Take the logs to your favorite base and compare $m \log a$ and $n \log b$. This looks like it bears on Project Euler problem 29.
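A sketch of that comparison in code (my own illustration; `compare_powers` is a hypothetical helper). Floating-point logs settle almost every case; for near-ties one can fall back on exact integer arithmetic, which Python supports natively:

```python
from math import log

def compare_powers(a, m, b, n):
    """Return -1, 0, or 1 as a**m is <, ==, or > b**n, without forming big powers."""
    if a == 1 or b == 1:                          # log is 0: compare directly
        return (a**m > b**n) - (a**m < b**n)
    lhs, rhs = m * log(a), n * log(b)
    if abs(lhs - rhs) > 1e-9:
        return (lhs > rhs) - (lhs < rhs)
    # Too close to trust floating point: fall back on exact integer arithmetic
    # (safe in Python, whose integers never overflow).
    return (a**m > b**n) - (a**m < b**n)

print(compare_powers(2, 10, 32, 2))          # 0: 2**10 == 32**2 == 1024
print(compare_powers(999, 1000, 1000, 999))  # 1: 999**1000 > 1000**999
```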
H: Lebesgue Measure and Symmetric Difference I need to know if this is true because I want to use it in a proof. Let $\mu$ be the Lebesgue measure and $\Delta$ denote the symmetric difference. Is it true that $\mu(A\Delta B)-\mu(B\Delta D)-\mu(A\Delta C) \leq \mu(C\Delta D)$? I tried to apply the definition of $\Delta$ but got no result. AI: I assume $A,B,C,D$ are measurable. Equivalently, you want to show that $$\mu(A\Delta B)\leq \mu(B\Delta D)+\mu(A\Delta C)+\mu(C\Delta D).$$ Rewriting this using the definition of $\Delta$ gives us $$\mu((A\setminus B)\cup (B\setminus A))\leq \mu((B\setminus D)\cup (D\setminus B))+\mu((A\setminus C)\cup (C\setminus A))+\mu((C\setminus D)\cup (D \setminus C))$$ and so by monotonicity and subadditivity it suffices to show that $A\setminus B$ and $B\setminus A$ are contained in $$(B\setminus D)\cup (D\setminus B)\cup (A\setminus C)\cup (C\setminus A)\cup (C\setminus D)\cup (D \setminus C),$$ which follows from the facts that $$A\setminus B\subseteq (A\setminus C)\cup (C\setminus D)\cup (D\setminus B)$$ $$B\setminus A\subseteq (B\setminus D)\cup (D\setminus C)\cup (C\setminus A)$$
H: Frequency Change with Tape Speed I recorded my own voice in an old tape recorder. When I put the device in fast forward mode my voice turned a little squeaky. I wonder if the fundamental frequency of the voice has changed? Is that possible? AI: Yes, it is possible. If it is playing faster, that is, outputting at time $t/a$ (for some constant $a \gt 1$) the analog value that used to be at time $t$, then all the frequencies have been multiplied by $a$.
H: Maximizing score in number-guessing game. This is inspired by a puzzle (related to the two-envelopes problem) that I've seen in several places, including unbounded generalizations. The basic premise is that Alice chooses two real numbers from $[0,1]$ uniformly at random, and writes them down on slips of paper. Bob chooses one slip of paper at random, reads the number, and may either keep that slip or switch for the other. Bob's score is the number he ends up with, and our task is to maximize his expected value. The standard solution is for Bob to choose his own real number uniformly at random, and switch if the number he picks is greater than what's on the slip he holds. However, this is not the only solution. Bob has one piece of data, $x$, and the use of a random number generator. He can keep his initial slip with some probability $f(x)$ and swap with probability $1-f(x)$. Call this function $f(x)$ his strategy; the standard solution corresponds to strategy $f(x)=x$. We calculate his expected winnings as $$E=\iint_{[0,1]\times[0,1]} f(x)x+(1-f(x))y\, dx dy=\int_0^1f(x)(x-0.5)+0.5\,dx$$ Trying $f(x)=x$ gives $E=0.58\overline{3}$, while $f(x)=x^{1.4}$ gives $E\approx 0.585784$. UPDATE: In the comments Calvin Lin suggests $f(x)=\begin{cases}0&x<0.5\\1&x>0.5\end{cases}$, with $E=0.625$. What's the highest $E$ Bob can achieve, and what is the corresponding $f$? The maximum, if Bob knew what was on the other slip, is $2/3$, so the answer must be less than that. AI: Consider the following strategy: If the chosen slip is larger than $\frac{1}{2}$, keep it. Else take the second slip. Then, the expected value is $\int_{\frac{1}{2}}^1 x\, dx + (\frac{1}{2}) \int_0^1 y \, dy = 0.625.$ Claim: This is the best possible strategy. Suppose not; then one of the following must happen: Bob picks a number that is larger than $\frac{1}{2}$, and decides to pick the second slip. Bob picks a number that is smaller than $\frac{1}{2}$ and decides to stick with it. In either case, we can see that Bob increases his expected value by following the initial strategy instead. (In particular, this strategy is unique, up to a set of measure 0.)
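A quick Monte Carlo check of the three strategies mentioned above (my own sketch, not part of the answer):

```python
import random

def expected_score(keep_prob, trials=1_000_000):
    """Monte Carlo estimate of Bob's expected score under strategy keep_prob."""
    total = 0.0
    for _ in range(trials):
        x, y = random.random(), random.random()   # Bob sees x; y is the other slip
        total += x if random.random() < keep_prob(x) else y
    return total / trials

print(expected_score(lambda x: x))                         # about 0.5833
print(expected_score(lambda x: x**1.4))                    # about 0.5858
print(expected_score(lambda x: 1.0 if x > 0.5 else 0.0))   # about 0.625
```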
H: Trying to understand set theory? Subset vs Member? So I'm learning set theory and wanted to know if I'm understanding some of the basics correctly. I have a feeling 1 and 2 are correct, but I'm not sure about 3 and 4. Z = {1,3} W = {1,2,3,4} 1) Is Z ∈ W? No, Z is not a member or W, because Z is the set {1,3}, while the members of W are 1,2,3,4. 2) Is Z ⊆ W? Yes, because all elements in the set Z (1 and 3) are also present in the set W. 3)Is Z ∈ ℘(W)? Yes, because ℘(W) is a set of sets, Z is a member of ℘(W) because {1, 3} is one of those sets. 4)Is Z ⊆ ℘(W)? No, Z is not a subset of the powerset of W because the elements in set Z (1 and 3) are not present in the set ℘(W). AI: This is all correct, except for the statement in 3) that "Z is a power set of ℘(W)" which should read "Z is an element of ℘(W)". Note that ℘(W) is called the power set of W.
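These four checks can be mirrored directly in code (my own sketch, not part of the answer), modeling sets as Python frozensets:

```python
from itertools import combinations

Z = frozenset({1, 3})
W = frozenset({1, 2, 3, 4})

# The power set of W, as a set of frozensets:
P_W = {frozenset(c) for r in range(len(W) + 1) for c in combinations(W, r)}

print(Z in W)    # False: Z is not one of the members 1, 2, 3, 4
print(Z <= W)    # True:  every element of Z is an element of W
print(Z in P_W)  # True:  Z is one of the subsets of W
print(Z <= P_W)  # False: the elements 1 and 3 are not themselves subsets of W
```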
H: Let E be the splitting field of $f(x)=x^4-10x^2+1$ over $\Bbb{Q}$. find $Gal(E/\Bbb{Q})$. From Galois Theory (Rotman): Let E be the splitting field of $f(x)=x^4-10x^2+1$ over $\Bbb{Q}$. find $Gal(E/\Bbb{Q})$. The roots of $f(x)$ are $\sqrt{2}+\sqrt{3}$, $\sqrt{2}-\sqrt{3}$, $-\sqrt{2}+\sqrt{3}$, $-\sqrt{2}-\sqrt{3}$. We know that the splitting field of $f(x)=x^4-10x^2+1$ over $\Bbb{Q}$ is $\Bbb{Q}(\sqrt{2}, \sqrt{3})$. There are two theorems in the textbook: Theorem 1: If $f(x) \in F[x]$ has $n$ distinct roots in its splitting field $E$, then $Gal(E/F)$ is isomorphic to a subgroup of the symmetric group $S_n$, and so its order is a divisor of $n!$. Theorem 2: If $f(x) \in F[x]$ is a separable polynomial and if $E/F$ is its splitting field , then $$|Gal(E/F)|=[E:F].$$ From the first theorem, we know that the order of the Galois group divides 4!. There is an example in the textbook that shows that [E:F]=4, so the second theorem says that $|Gal(E/F)|=4$ as well. However, I find this a bit confusing. Since the Galois group is the set of all automorphisms that fix F...we need to look at the possible automorphisms that permute the four distinct roots of $f(x)$. Since there are four distinct roots, there must be 4! ways to permute them (while fixing all the elements of F), right? Doesn't that imply that we are supposed to have 4! elements in the Galois group? Why is the order 4 then? Thanks in advance. AI: Not every permutation of the roots is an automorphism of $E/\mathbb Q$. For example, any permutation $\sigma$ which sends $\sqrt2$ to $\sqrt3$ is not an automorphism, since $\sigma(\sqrt2^2)=\sigma(2)=2$ but $\sigma(\sqrt2)^2=3$. More generally any automorphism of $E/\mathbb Q$ must preserve the roots of polynomials with rational coefficients, in particular the roots of $x^2-2$ and $x^2-3$. So the only possible automorphisms are given by $$\sqrt2\mapsto \sqrt2,\ \sqrt3\mapsto \sqrt3$$ $$\sqrt2\mapsto -\sqrt2,\ \sqrt3\mapsto \sqrt3$$ $$\sqrt2\mapsto \sqrt2,\ \sqrt3\mapsto -\sqrt3$$ $$\sqrt2\mapsto -\sqrt2,\ \sqrt3\mapsto -\sqrt3$$ and since there are exactly $4$ automorphisms, you know that all of these are automorphisms.
H: Confused on permutation cycles I am a bit confused on how to interpret permutation cycle notation. I am going by the Wolfram definition. My initial interpretation of $(431)(2)$ was "4 moves to position 3, 3 to position 1, 1 to 4, 2 remains in place", which would leave me at $\{3, 2, 4, 1\}$, which apparently is incorrect, since the solution is $\{4, 2, 1, 3\}$. So it would appear that the correct way to read $(431)(2)$ is "element 4 is replaced by 3, 3 by 1, 1 by 4, 2 remains in place". I am also unclear on the definition of a cycle, which seems to mean a subset of elements which are permuted. Wolfram says, "any rotation of a given cycle specifies the same cycle", but I do not see why $(413)(2)$ cannot be considered a rotation of $(431)(2)$, just like $(314)(2), (143)(2), (2)(431), (2)(314), (2)(143)$ are. The elements look the same, only permuted. AI: Your initial interpretation of $(123)$ would seem to suggest that it sends $1$ to the middleman $2$ and then to the final destination of $3$. This would make the "middleman" $2$ completely superfluous; what's the point of having a midway station if it doesn't have any consequence at all? No, $(123)$ means that $1$ ends up at $2$. It does not go any further. And $2$ ends up at $3$. And $3$ ends up at $1$. That is, if $\sigma=(123)$, then $\sigma(1)=2$, $\sigma(2)=3$ and $\sigma(3)=1$. Why is this called a "cycle" you may ask? One thing to notice is that if you start in any particular place, say $1$, then upon repeated application we get $\sigma^0(1)=1$, $\sigma^1(1)=2$, $\sigma^2(1)=\sigma(\sigma(1))=\sigma(2)=3$, and then back to $\sigma^3(1)=\sigma(\sigma^2(1))=\sigma(3)=1$ back where we began. We do go on a "round trip" through the numbers seen in the cycle notation, in the order they're listed, upon repeated application of $\sigma$. An equivalent way of thinking about this is as follows: if we list out the numbers from the cycle notation on a circle in the order they appear, applying $\sigma$ to the numbers is the same as rotating. Now, the numbers appearing in the notation $(132)$ are the same as those appearing in $(123)$; however, they do not denote the same permutation. The first takes $1$ to $3$, whereas the second takes $1$ to $2$, so they cannot be the same. Yet, $(231)$ also looks different than $(123)$, and we can check that they represent exactly the same function (permutation) of the set $\{1,2,3\}$: each sends $1$ to $2$ and sends $2$ to $3$ and sends $3$ to $1$, so the notations $(123)$ and $(231)$ both specify the same exact permutation. Similarly, $(312)$ is another way of specifying this same permutation. In general, if you take the numbers that appear in a cycle's cycle notation and, well, cycle them as they appear, you will not be changing the permutation that is being denoted. However if you permute the numbers that appear in a cycle notation arbitrarily you will not, in general, end up with the same permutation you started with.
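To see the convention concretely, here is a small sketch (my own illustration; `cycle_to_map` is a hypothetical helper) that turns cycle notation into an explicit mapping:

```python
def cycle_to_map(*cycles):
    """Turn disjoint cycles such as (4 3 1)(2) into an explicit mapping."""
    mapping = {}
    for cycle in cycles:
        for i, element in enumerate(cycle):
            mapping[element] = cycle[(i + 1) % len(cycle)]
    return mapping

print(cycle_to_map((4, 3, 1), (2,)))   # {4: 3, 3: 1, 1: 4, 2: 2}

# Rotations of a cycle give the same mapping...
print(cycle_to_map((3, 1, 4), (2,)) == cycle_to_map((4, 3, 1), (2,)))   # True
# ...but other rearrangements of the entries generally do not:
print(cycle_to_map((4, 1, 3), (2,)) == cycle_to_map((4, 3, 1), (2,)))   # False
```

Listing the images of $1, 2, 3, 4$ in order under the first mapping gives $4, 2, 1, 3$, the stated solution.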
H: Rudin's Construction of Lebesgue Measure Self-studying Rudin's RCA, and I want to make sure I am understanding the intricacies of his construction of the Lebesgue measure on $\mathbb{R}^n$. The uniform continuity of $f$ shows that there is an integer $N$ and functions $g,h$ with support in $W$ such that: (i) $g,h$ are constant on each box in $\Omega_N$, (ii) $g \leq f \leq h$, (iii) $h-g < \epsilon$. What is the simplest way to construct such functions? My first thought was to consider $\operatorname{cl}(A)$ of a box $A \in \Omega_N$ and apply the Extreme Value Theorem to obtain a maximal and minimal value of $f(x)$. Setting $h,g$ as those maximum and minimum values for $x \in A$, respectively, then gives (i),(ii). The Uniform Continuity Lemma then gives (iii) via an appropriate choice of $N$. This however strikes me as a needlessly complicated construction; I am confident Rudin had something simpler in mind. Property 2.19c then shows that: $$\forall n > N, \quad \Lambda_N g = \Lambda_n g \leq \Lambda_n f \leq \Lambda_n h = \Lambda_N h$$ This seems to follow if we partition $P_n$ into the boxes $P_n \cap A$. But once again it seems that some simpler methodology would be applicable. Thus the upper and lower limits of $\{\Lambda_n f\}$ differ by at most $\epsilon \operatorname{vol}(W)$, and since $\epsilon$ was arbitrary we have proved the existence of $$\lim\limits_{n \to \infty} \Lambda_n f = \Lambda f$$ Using the formalism described above, no natural way to establish this limit suggests itself, but this should be remedied by superior ways of establishing the above claims. More specifically, we obtain that $$\lim\limits_{\epsilon \to 0} \;(\limsup\limits_{n \to \infty} \Lambda_n f - \liminf\limits_{n \to \infty} \Lambda_n f) = 0.$$ How precisely to we formally proceed from the above to equality of the inferior and superior limits, and therefore convergence? Any insight would be greatly appreciated. AI: For the first of the items you quoted, you don't need the Extreme Value Theorem. Use uniform continuity of $f$ to arrange for the boxes in $\Omega_N$ to be small enough so that $f$ does not vary by more than $\epsilon/10$ within any single box. Then, in each box, take the value of $f$ anywhere you like in the box, say at the center, and define $g$ (respectively $h$) throughout that box to be that value minus (respectively plus) $\epsilon/3$. (The $10$ was overkill, but I lack the energy, time, and desire to reduce it.)
H: The gradient of a function is an alternating one-tensor I'm currently reading Spivak's Calculus on Manifolds and I seem to have hit a snag in Chapter Four: Integration on Chains. Spivak develops tensors, vector fields, alternating tensors and differential forms. I'm okay with these ideas however he makes the claim that if $f:\mathbb{R}^n\rightarrow\mathbb{R}$, then $Df(p)\in\Lambda^1(\mathbb{R}^n)$. I can see that the gradient of a scalar function is clearly a 1-tensor over $\mathbb{R}^n$ since it maps $\mathbb{R}^n$ to $\mathbb{R}$ but I fail to see how it is alternating (and thus in $\Lambda^1(\mathbb{R}^n)$ rather than just $T(\mathbb{R}^n)$). AI: Any covector is also an alternating one tensor. You should think of an alternating tensor as any tensor $ T : (T\mathbb{R}^n)^p \rightarrow \mathbb{R} $ that is invariant with respect to permutation of the p arguments up to the sign of the permutation. Since there is just one way to permute 1 argument, every one tensor is alternating.
H: Why are the first few powers of $2^{10}$ a little more than those of 1000? See the complete list here: http://en.wikipedia.org/wiki/Power_of_two#Powers_of_1024. I'm wondering if there's a mathematical explanation for the relationship or if it's just coincidence. AI: Since $2^{10}=1024$: $$2^{10n}=(1000+24)^n=1000^n+24\cdot 1000^{n-1}n+...$$ Thus, as long as $24n$ remains a lot smaller than $1000$, then $2^{10n}$ will be near $1000^n$.
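A two-line check of this (my own sketch, not part of the answer): the ratio $2^{10n}/1000^n=1.024^n$ starts barely above $1$ and compounds slowly:

```python
from math import log

for n in range(1, 9):
    print(n, 2**(10 * n) / 1000**n)   # 1.024, 1.048576, 1.0737..., i.e. 1.024**n

# The "little more" compounds: 1.024**n first exceeds 2 near n = 30,
# so 2**300 is already more than double 1000**30.
print(log(2) / log(1.024))   # about 29.2
```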
H: How to prove $|x_{p}-y_{q}|>0$ Let $$x_{1}=\dfrac{1}{8},x_{n+1}=x_{n}+x^2_{n},y_{1}=\dfrac{1}{10},y_{n+1}=y_{n}+y^2_{n}$$ Show that for any $p,q\in N^{+}$ we have $$|x_{p}-y_{q}|>0$$ AI: Clearly all terms in both sequences are rational. We will show by induction that if $y_n = \frac{ a_n}{b_n}$ where $\gcd(a_n,b_n)=1$, then $5 \mid b_n$. The base case is obvious. Induction step: Let $b_k = 5 c_k$; then $y_{k+1} = \frac{a_k}{5c_k} + \frac{ a_k ^2 } { 25c_k^2} = \frac{ 5 a_k c_k + a_k^2 } { 25 c_k^2}$. Since $\gcd(a_k,b_k)=1$, we have $5 \nmid a_k$, so the numerator $a_k(5c_k+a_k)\equiv a_k^2\not\equiv 0 \pmod 5$; hence the factor $5$ in the denominator survives reduction to lowest terms. We will show by induction that if $x_n = \frac{d_n}{e_n}$ where $\gcd(d_n, e_n) = 1$, then $5 \not \mid e_n$. This is similar to the above, or almost immediately obvious. Hence, $x_p$ and $y_q$ cannot represent the same rational number.
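The invariant is easy to verify with exact rational arithmetic (my own sketch, not part of the answer); Python's Fraction reduces to lowest terms automatically:

```python
from fractions import Fraction

x = Fraction(1, 8)
y = Fraction(1, 10)
for n in range(1, 11):
    assert x.denominator % 5 != 0   # 5 never divides a denominator of x_n
    assert y.denominator % 5 == 0   # 5 always divides a denominator of y_n
    x, y = x + x**2, y + y**2
print("checked n = 1..10")
```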
H: Probability Question about Tennis Games! $2^{n}$ players enter a single elimination tennis tournament. You can assume that the players are of equal ability. Find the probability that two particular players meet each other in the tournament. I couldn't make a serious attempt on the question; hope you can excuse me this time. AI: Assume that the initial placement of players is random. Otherwise, your answer would depend on where these 2 players are placed. There are $2^n-1$ games played in the tournament, each involving a pair of players. There are ${2^n \choose 2}$ pairs of players, and by symmetry each pair is equally likely to play each other. Hence, the probability that 2 particular players meet is $\frac{ 2^n-1} { 2^n \choose 2} = \frac{1}{2^{n-1} }$.
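A simulation of random brackets (my own sketch, not part of the answer) agrees with $\frac{1}{2^{n-1}}$:

```python
import random

def meet_probability(n, trials=100_000):
    """Estimate the chance that players 0 and 1 meet in a random bracket of 2**n."""
    meetings = 0
    for _ in range(trials):
        field = list(range(2**n))
        random.shuffle(field)                    # random initial placement
        while len(field) > 1:
            survivors = []
            for a, b in zip(field[::2], field[1::2]):
                if {a, b} == {0, 1}:
                    meetings += 1
                survivors.append(a if random.random() < 0.5 else b)  # fair coin
            field = survivors
    return meetings / trials

print(meet_probability(3), 1 / 2**2)   # both near 0.25
```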
H: Question regarding Hartshorne Example II.(6.5.2) Let $k$ be a field, let $A=k[x,y,z]/\langle xy-z^2\rangle$ and let $X=\operatorname{Spec}A$. Let $Y:y=z=0$. I want to know the divisor of $y$. In Hartshorne's book, because $y=0 \Rightarrow z^2=0$ and $z$ generates the maximal ideal of the local ring at the generic point of $Y$, the divisor of $y$ is $2Y$. My question is: does $Y$ mean $V(\langle y,z\rangle)=\{P \in X : \langle y,z\rangle \subseteq P\}$ in $X$? What is the generic point of $Y$? Source: Algebraic Geometry, Robin Hartshorne AI: Yes: $Y=\operatorname{Spec} A/(y,z)$, and $A/(y,z) = k[x, y, z]/(y, z, xy-z^2)=k[x,y,z]/(y,z)$ since $xy-z^2$ is in the ideal $(y,z)$. The generic point of $Y$ is the prime ideal $(y, z)$. To compute the divisor of $y$ along $Y$, we look at the local ring $A_{(y,z)}$, in which $x$ has become invertible, so $y=z^2/x$ in this local ring; hence this local ring is $k[x,z]_{(z)}$, and then, as you said, $z$ is a generator of the maximal ideal, and $y$ is $z^2$ times an invertible element, so the coefficient of $\operatorname{div}(y)$ along $Y$ is $2$.
H: Without appealing to choice, can we prove that if $X$ is well-orderable, then so too is $2^X$? Without appealing to the axiom of choice, it can be shown that (Proposition:) if $X$ is well-orderable, then $2^X$ is totally-orderable. Question: can we show the stronger result that if $X$ is well-orderable, then so too is $2^X$? Proof of Proposition. Pick any well-ordering of $X$. Then the lexicographic order totally orders $2^X$. More explicitly: for any two $f,g \in 2^X$, define $f < g$ iff there exists $x \in X$ such that $f(x) \neq g(x)$, and if $x \in X$ is minimal such that $f(x) \neq g(x)$, then $f(x)=0$ and $g(x)=1$. It can be shown that $<$ totally-orders $2^X$. AI: No. The axiom "If $X$ can be well-ordered then $2^X$ can be well-ordered" implies the axiom of choice in $\sf ZF$. In $\sf ZFA$ or $\sf ZF-Reg$ this is no longer true, though. For more details, see the first part of Jech, The Axiom of Choice, Chapter 9. One very interesting observation about the fact that in $\sf ZFA$ this statement does not prove the axiom of choice is that if $\psi(X)$ is a statement in which all the quantifiers are either bounded in $y$ or bounded in $\mathcal P(y)$ (where $y$ is a variable, of course), and in $\sf ZF$ we have that $\forall X\psi(X)$ implies $\sf AC$, then in $\sf ZFA$ we have that $\forall X\psi(X)$ implies that the power set of a well-ordered set is well-orderable. (This is the first problem in the aforementioned chapter 9.)
H: Inequality in geodesic quadruple Let $(M,g)$ be a compact complete Riemannian manifold. Consider the geodesic quadruple $ABCD$ where $l:=d(A,B)=d(B,C)=d(C,D)=d(A,D) \geq \frac{1}{2}$ and $d(B,D),d(A,C) \geq 1$. Let $x \in CD$ and $y \in AB$. Is it true that then $d(x,y)\geq \frac{1}{4}$? I have tried a lot of things but none of them worked; mostly I tried to do it by the triangle inequality. I do not even know if it's true. Greetings Lena AI: No (unless you assume something like a lower curvature bound). Consider two very long segments (say, of length 100) in $\mathbb R^3$ that meet orthogonally at their midpoints. Let $M$ be a smoothened boundary of an $\varepsilon$-neighborhood of the union of these segments. It is a Riemannian manifold invariant under a 90 degrees rotation. Let $A$ be a point of $M$ near one of the segments' endpoints and $B$, $C$, $D$ be its consecutive images under this rotation. Due to rotation invariance, one has $d(A,B)=d(B,C)=d(C,D)=d(D,A)$. And all distances between $A,B,C,D$ are greater than 50. On the other hand, geodesic segments $AC$ and $BD$ go through the central part of the construction, namely an $O(\varepsilon)$ neighborhood of the center. Take $x$ and $y$ in that region and observe that $d(x,y)=O(\varepsilon)$.
H: A metric space is separable iff it is second countable How do I prove that a metric space is separable iff it is second countable? AI: HINT: One direction is pretty trivial; I’ll leave it to you, at least for now. The harder direction is to prove that a separable metric space is second countable. Suppose that $\langle X,d\rangle$ is a separable metric space. Then $X$ has a countable dense subset $D$. The set $\Bbb Q$ of rational numbers is countable, so $$\mathscr{B}=\{B(x,r):x\in D\text{ and }0<r\in\Bbb Q\}$$ is a countable family of open balls in $X$. (Here $B(x,r)=\{y\in X:d(x,y)<r\}$ is the open ball of radius $r$ centred at $x$.) Show that $\mathscr{B}$ is a base for the metric topology on $X$. In other words, show that if $U$ is a non-empty open set in $X$, and $x\in U$, then $x\in B\subseteq U$ for some $B\in\mathscr{B}$. You will find it helpful to note that there is some $\epsilon>0$ such that $B(x,\epsilon)\subseteq U$; thus, you need only show that there is some $B(y,r)\in\mathscr{B}$ such that $x\in B(y,r)\subseteq B(x,\epsilon)$. The triangle inequality will be helpful.
H: What are the first 3 digits of the product of the first 1000 Fibonacci numbers What are the first 3 digits of the product of the first 1000 Fibonacci numbers? Could anyone give me hints on how to start this problem? I haven't done a problem like this before and I am curious about how to approach such a problem. AI: Binet's formula gives us a useful approximation ($\tau=(1+\sqrt5)/2$; all logarithms below are base $10$) $$ F_n\approx\frac{\tau^n}{\sqrt5}. $$ For example with $n=8$ the r.h.s. is $21.00952$. We have $F_1F_2\cdots F_7=3120$, so $$ \begin{aligned} \log(\prod_{i=1}^{1000}F_i)&\approx\log(3120)+\sum_{i=8}^{1000}(i\log\tau-\log\sqrt5)\\ &=\log(3120)+500472\log\tau-\frac{993}2\log5\\ &\approx 104248.9178386, \end{aligned} $$ so using the fractional part of that gives $$ 10^{0.9178386}\approx8.27635. $$ Thus the answer is $827$ or something close to it. I skipped the estimation of the error. Note that the error in Binet's formula alternates and tends to zero, so it is not too difficult to estimate it.
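A brute-force check of the estimate above is easy with exact integer arithmetic; here is a sketch in Python (the indexing $F_1=F_2=1$ and reading "the first 1000 Fibonacci numbers" as $F_1,\dots,F_{1000}$ are assumptions):

```python
# Exact product of F_1, ..., F_1000 using Python's arbitrary-precision integers.
fibs = [1, 1]
while len(fibs) < 1000:
    fibs.append(fibs[-1] + fibs[-2])

product = 1
for f in fibs:
    product *= f

digits = str(product)
print(digits[:3])    # the first three digits of the exact product
print(len(digits))   # total digit count; the log estimate above predicts 104249
```

Comparing `len(digits)` with $\lfloor 104248.9178\rfloor+1$ also cross-checks the logarithmic computation.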
H: Upper Bound of Logarithm For $1\leq x < \infty$, we know $\ln x$ can be bounded as follows: $\ln x \leq \frac{x-1}{\sqrt{x}}$. Then what is the upper bound of $\ln x$ under the following condition? $2\leq x <\infty$ AI: Given $a>0$, if $a \le u < \infty$ then also $1 \le u/a < \infty$, and you can apply your inequality taking $x=u/a$ to get $$\ln(u/a)\le \frac{u/a-1}{\sqrt{u/a}}.$$ Then cleaning it up you have $$\ln(u)\le \ln(a)+\frac{u-a}{\sqrt{au}}.$$ This is a bit less appealing than the $a=1$ case wherein the logarithm doesn't appear in the bounding function, but actually it only appears in a constant.
H: recurrence relation: $x_{n+1} = x^2_n - 2x_n + 2$ $$x_0 = \frac32; \quad x_{n+1} = x^2_n - 2x_n + 2$$ $$\Rightarrow x = x^2 - 2x +2 \Rightarrow x^2 - 3x +2 = 0 \Rightarrow x \in \{1,2\} $$ How to determine which one is the limit, i.e. $\lim_{n\rightarrow \infty} x_n = 1$ or $\lim_{n\rightarrow \infty} x_n = 2$? AI: HINT: The function $f(x)=x^2-3x+2$ is negative between $1$ and $2$, so if $1<x_n<2$, then $x_{n+1}-x_n=f(x_n)<0$, and $x_{n+1}<x_n$. This should tell you which of the two roots is the only possible limit if $x_0=\frac32$. Note that you also have to show that the sequence actually does converge; showing that it's monotone and bounded would do this. The fact that $x_n^2-2x_n+2=(x_n-1)^2+1$ should help you here.
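Numerically the convergence to $1$ is very fast, since the recursion can be rewritten as $x_{n+1}-1=(x_n-1)^2$, squaring the distance to $1$ at each step. A quick check (Python sketch):

```python
x = 1.5
for n in range(7):
    print(n, x)
    x = x * x - 2 * x + 2   # equivalently x = (x - 1)**2 + 1
# prints 1.5, 1.25, 1.0625, 1.00390625, ... rapidly approaching the limit 1
```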
H: Is a semigroup $G$ with left identity and right inverses a group? Hungerford's Algebra poses the question: Is it true that a semigroup $G$ that has a left identity element and in which every element has a right inverse is a group? Now, if both the identity and the inverse are of the same side, this is simple. For, instead of the above, say every element has a left inverse. For $a \in G$ denote this left inverse by $a^{-1}$. Then $$(aa^{-1})(aa^{-1}) = a(a^{-1}a)a^{-1} = aa^{-1}$$ and we can use the fact that $$cc = c \Longrightarrow c = 1$$ to get that inverses are in fact two-sided: $$ aa^{-1} = 1$$ From which it follows that $$a = 1 \cdot a = (aa^{-1})a = a (a^{-1}a) = a \cdot 1$$ as desired. But in the scenario given we cannot use $cc = c \Longrightarrow c = 1$, and I can see no other way to prove this. At the same time, I cannot find a counter-example. Is there a simple resolution to this question? AI: Let $G$ have at least two elements, one of which I’ll call $e$. Define the binary operation $*$ on $G$ by $x*y=y$ for all $x,y\in G$; it’s easily checked that $*$ is associative. Clearly $e*x=x$ for all $x\in G$, so $e$ is a left identity. And $x*e=e$ for each $x\in G$, so $e$ is a right inverse for each element of $G$ (with respect to the left identity $e$). Clearly $G$ has no two-sided identity, so it isn’t a group. Of course this is a bit odd, since I can pick any element of $G$ to be the left identity, and it then becomes the right inverse of every element.
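The counterexample is small enough to verify exhaustively; here is a sketch in Python over a hypothetical three-element carrier set:

```python
G = ['e', 'a', 'b']
op = lambda x, y: y   # the operation x * y = y

# associativity of *
assert all(op(op(x, y), z) == op(x, op(y, z)) for x in G for y in G for z in G)
# 'e' is a left identity
assert all(op('e', x) == x for x in G)
# 'e' is a right inverse of every element with respect to 'e': x * e = e
assert all(op(x, 'e') == 'e' for x in G)
# yet no element is a two-sided identity, so G is not a group
assert not any(all(op(i, x) == x == op(x, i) for x in G) for i in G)
print("all checks pass")
```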
H: Equation with Parameter I want to solve a seemingly simple equation: $k+(k-2)*x = (2k+3)x-2x-3$ Wolfram Alpha says that $k = -3$ and $x = 1$, but I don't yet see how to arrive at this solution. Thanks AI: Expand the equation $$k + kx -2x = 2kx + 3x -2x -3$$ $$k-kx =3x-3$$ $$k(1-x)=-3(1-x)$$ $$k(1-x)+3(1-x)=0$$ $$(k+3)(1-x)=0$$ So either $x=1$ (and $k$ is arbitrary) or $k=-3$ (and $x$ is arbitrary).
H: Work and Time related problem A and B do a work together in 16 days. B and C can do the same work together in 20 days. How many days will it take for A, B and C to do the work together? AI: Let $A,B,C$ denote the fractions of the whole job that each worker completes per day, and take the total amount of work to be $1$ (think of $100\%$). Then you have $$16(A+B)=1\quad\text{and}\quad20(B+C)=1.$$ To determine exactly how many days A, B and C need working together, you need one more piece of information, for instance how many days A and C would take working together.
H: basis of space and subspace I have a question which is pretty basic about bases and vector spaces. Generally, if I have a basis K of a vector space V, why is it not a basis of a proper subspace W of V? The vectors in the basis K are linearly independent, and for every vector of W I can find a linear combination of the vectors in K that equals the vector in W. The only problem I can see is that when I take some linear combinations of the vectors in K, I get vectors that belong to V but do not belong to the space W; is this the problem here? When I get vectors beyond my space, does that mean it is not a proper basis, because it spans more vectors than my space? Does a basis have to span exactly the space and not beyond? Thank you very much. Sorry for my English; I don't know how to edit the question to be pleasant to read. AI: Yes, that's exactly right. Some set of vectors is a "basis" for V if those vectors are linearly independent and span V. Informally, "spanning" means that V is the smallest vector space that contains all of those vectors; "linearly independent" means that there are no redundant vectors (i.e. if you take one out, the new set of vectors spans a strictly smaller space). Of course, linearly independent vectors will stay linearly independent regardless of the ambient space you consider them in. But vectors that span a space W will not necessarily span a space V. The ambient space matters. (If they span W, then they cannot span V, unless V = W.) It might help you to try to work out which subspace of $\mathbb{R}^3$ the vectors $\begin{pmatrix}1\\0\\1\end{pmatrix}, \begin{pmatrix}2\\0\\3\end{pmatrix}$ span. Then show that they are linearly independent, to show that they are a basis of this subspace. Once you've worked that out, try to work out why they can't be a basis of anything smaller, bigger, or different.
H: puzzle/equation with proportionality We have three pumps filling a tank: The first one fills the whole tank in a particular time, the second one is twice as fast as the first one, and the third one is three times as fast. All the pumps together can fill the tank in two minutes. How long would each pump take on its own? AI: Let $\,T_i\;$ be the proportion of the tank filled in one minute by pump $\;i\;,\;i=1,2,3\;$ , then $$T_2=2T_1\;,\;\;T_3=3T_1\;\;\text{and}\;\; 2(T_1+T_2+T_3)=12T_1=1\implies$$ $$T_1=\frac1{12}\;,\;\;T_2=\frac16\;,\;\;T_3=\frac14\;,$$ so on its own pump 1 needs $12$ minutes, pump 2 needs $6$ minutes, and pump 3 needs $4$ minutes.
H: Let $p\colon X \to Y$ be a quotient map; then if each set $p^{-1}(\{y\})$ is connected, and if $Y$ is connected, then $X$ is connected. Let $p\colon X \to Y$ be a quotient map. Show that if each set $p^{-1}(\{y\})$ is connected, and if $Y$ is connected, then $X$ is connected. I am totally stuck on this problem. Can I get some help with how to tackle it? AI: HINT: Suppose that $X$ is not connected. Then $X=U\cup V$, where $U\ne\varnothing\ne V$, $U\cap V=\varnothing$, and $U$ and $V$ are both open (and hence also both closed). Show that if $p[U]\cap p[V]=\varnothing$, then $Y$ is not connected. If $p[U]\cap p[V]\ne\varnothing$, there is a $y\in p[U]\cap p[V]$; show that $p^{-1}[\{y\}]$ is not connected.
H: Measurability of projections Let $(X,\mathcal{A})$ and $(Y,\mathcal{B})$ be measurable spaces. Consider the product space $X\times Y$ with the product $\sigma$-algebra $\mathcal{A}\otimes \mathcal{B}$ and $\pi:X\times Y \rightarrow X$ the natural projection. Is it true that $Z \in \mathcal{A}\otimes \mathcal{B}$ implies $\pi(Z)\in \mathcal{A}$? (that is, the projection sends measurable sets to measurable sets) There is a topological analogue of this result («projections are open»), so I wonder if we could extend it to measure theory. AI: Lebesgue thought so in the context of the real line, but the young Souslin famously constructed a Borel (a $G_\delta$ is enough) subset of the plane with a non-Borel projection.
H: Tricky Trigonometry Problem I solved this problem by applying the law of sines to get the values for $\omega$ (74.61) and $Z$ (5.65) respectively. But now I'm told there is another combination of $\omega$ and $Z$ that will work. How could we find that? AI: Hints: $$\frac6{\sin\omega}=\frac4{\sin40^\circ}\implies\sin\omega=\frac32\sin40^\circ\implies\omega=\begin{cases}74.62^\circ\\{}\\105.38^\circ\end{cases}$$ Since $\,\sin x=\sin(180^\circ-x)\,$ ...
H: one-to-one correspondence between the divisors of $n$ and Could anyone help me to understand rigorously the statement from the book Elementary Number Theory by James J Tattersall, Page $51$: "From the definition of division and the fact that divisors pair up, it follows that, for any positive integer $n$, there is a one-to-one correspondence between the divisors of $n$ that are less than $\sqrt{n}$ and those which are greater than $\sqrt{n}$." Thank you! AI: Divisors of $n$ come in pairs: if $d$ is a divisor of $n$, then so is $\frac{n}d$. Of course if $n$ is a perfect square, and $d=\sqrt{n}$, then $\frac{n}d=\sqrt{n}=d$ as well, but in all other cases $d$ and $\frac{n}d$ are different divisors of $n$. If $d<\sqrt{n}$, then $d\sqrt{n}<n$, and $\sqrt{n}<\frac{n}d$. And if $d>\sqrt{n}$, then $d\sqrt{n}>n$, and $\sqrt{n}>\frac{n}d$. Thus, $d<\sqrt{n}$ if and only if $\frac{n}d>\sqrt{n}$. This means that if you pair each divisor $d$ with its mate $\frac{n}d$, exactly one member of the pair is less than $\sqrt{n}$, and the other is greater than $\sqrt{n}$. This pairing establishes a bijection between the divisors of $n$ that are less than $\sqrt{n}$ and those that are greater.
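The pairing is easy to see concretely; a small Python sketch (taking $n=555$ purely as an example):

```python
import math

n = 555
small = [d for d in range(1, math.isqrt(n) + 1) if n % d == 0]
print([(d, n // d) for d in small])
# [(1, 555), (3, 185), (5, 111), (15, 37)]: each pair has exactly one
# member below sqrt(555) ~ 23.6 and one above it
```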
H: Embeddings of 3 point metric spaces into ultra-metric spaces with distortion 2 I need to show that every 3 point metric space has an embedding into an ultra-metric space with distortion 2, and then to show such an example. How would I go about it? Thank you. Edit: Distortion is defined as follows: an embedding $f:(X,d_X)\rightarrow(Y,d_Y)$ has distortion $\alpha$ if there is a constant $c>0$ such that $\forall u,v\in X:d_X(u,v)\leq c \cdot d_Y(f(u),f(v))\leq \alpha \cdot d_X(u,v)$ AI: Let the three-point metric space be $\langle X,d_X\rangle$, where $X=\{x_0,x_1,x_2\}$. We can label the three points in such a way that $d_X(x_0,x_1)\le d_X(x_1,x_2)\le d_X(x_0,x_2)$, and the triangle inequality ensures that $d_X(x_0,x_2)\le d_X(x_0,x_1)+d_X(x_1,x_2)\le 2d_X(x_1,x_2)$. Let $\langle Y,d_Y\rangle$ be the ultrametric space, and let $Y=\{y_0,y_1,y_2\}$, where $y_k=f(x_k)$ for $k=0,1,2$. Let $d_Y(y_0,y_1)=d_X(x_0,x_1)$ and $d_Y(y_1,y_2)=d_X(x_1,x_2)$. Bearing in mind that $d_Y$ is an ultrametric, what must $d_Y(y_0,y_2)$ be? Check that the embedding $f:X\to Y:x_k\mapsto y_k$ has distortion $2$.
H: About functions and relations A purely theoretical question here. I have the relation $\pm\sqrt x$. As far as I understand it, that is not a function, as 1 input can map to 2 outputs. If I have a relation which is not a function, and I limit the inputs so that every input maps to only one output, can the relation be called a function, although it's not one for inputs outside of my restriction? AI: Yes: if you restrict the domain and range as you mentioned, you can call it a function. A function is a map from one set of points called the domain, to another set called the range, such that each input maps to exactly one output (note this is not the same thing as requiring that each output corresponds to only one input). The set from which you map does not have to be the entire real line; in fact it does not even have to be continuous, it could be a discrete set of points; it can be any set you like. It is good practice to specify the way the function maps the domain to the range. As an example, one way you could make $\sqrt{x}$ a function is if you specify that you are taking positive values of $x$ and that you are taking the positive square root. You can then write the way the map is constructed as follows: $$f:\mathbb R^+\rightarrow\mathbb R^+$$ which says you are building a function that maps from the positive reals to the positive reals. This arrangement is known as the principal square root function.
H: What are the set-theoretic foundations of classical calculus? I am wondering what set theoretical foundation is needed for the development of classical results of, say, calculus, such as are taught in first-year undergraduate courses. More concretely, I wonder if and where the full power of ZFC is needed, if that's the foundation, or what part of analysis can be developed in the theory of (well-pointed?) topoi. For example, it is known that there are topoi for which Brouwer's theorem holds, i.e. for which any function on the reals is continuous. The problem when trying to mimic the construction of noncontinuous functions like $f: {\mathbb R}\to{\mathbb R}$, $f(x):=0$ for $x\leq 0$ and $f(x) := 1$ for $x>0$ here is that in non well-pointed topoi it's not sufficient to tell what functions do on (ordinary) points. In contrast, if I am not mistaken, if we assume to work in a well-pointed topos, Brouwer's theorem fails, and functions as the one above can be constructed (please correct me if I made a mistake here). Which parts of calculus can and which cannot be developed within, say, ETCS? Or what about the role of the axiom of replacement (in ZFC or in addition to ETCS)? I'd be happy to get a list of "classical" topics where one can see that the choice and strength of foundation matters. AI: There is absolutely no part in the mathematics of freshman year (and probably most of the undergraduate mathematics, or even grad level and beyond) that requires the explicit use of replacement in its full power. You can witness that by the trivial observation that all the mathematics we meet on an average undergrad course happens well inside $V_{\omega+\omega}$, which is a model of $\sf Z$ (Zermelo's set theory, which includes separation but not replacement). If one wishes to formalize ordinals as the von Neumann ordinals, or cardinals as initial von Neumann ordinals, then one needs to have replacement (or at least "enough" of it), but generally this is unneeded. Note that even if you want to talk about transfinite recursion up to $\omega_1$ you can still do that in $V_{\omega+\omega}$ because we do have a well-ordered set of order type $\omega_1$ in this model. Using "less conventional" foundations, like constructive mathematics or intuitionistic logic, is possible. There are works (which I am less familiar with, and certainly unfamiliar with their details) which base mathematics on constructive methods, and as a result have the seemingly nicer properties that everything is continuous, and so on. However there are theorems which may fail in such contexts; generally you may want to begin with this thread.
H: Euclidean Distance on a Sphere I have that the Euclidean distance on the surface of a sphere in terms of the angle they subtend at the centre is $(\sqrt{2})R\sqrt{1-\cos(\theta_{12})}$ (Where $\theta_{12}$ is the angle that the two points subtend at the centre.) Why is this; what is the proof? Cheers, Alex AI: Consider the diagram: [figure: a chord of a circle of radius $r$, joining two points on the circle and subtending the angle $\theta$ at the centre] Using the identity $\cos(\theta)=1-2\sin^2(\theta/2)$, the distance is $$ 2r\sin(\theta/2)=r\sqrt{2-2\cos(\theta)} $$
H: Well ordered sets $(X,\le)$ is totally ordered. How do you prove that if every non-empty countable subset of $X$ is well ordered then $(X,\le)$ is well ordered? AI: Now you have the correct conditions, the proof goes like this: Let $Y\subseteq X$ be non-empty; we must show that $Y$ has a least element. Suppose it does not. Pick $y_1\in Y$. Since $y_1$ is not a least element of $Y$, by totality there is $y_2\in Y$ with $y_2<y_1$. Repeat this process inductively and you obtain an infinite strictly decreasing chain of distinct elements $$y_1>y_2>y_3>\ldots$$ This is a countable subset of $X$ and so it is well ordered; however, it clearly has no smallest element. A contradiction. Hence every non-empty subset of $X$ has a least element, i.e. $X$ is well ordered.
H: Fundamental volume of quotient group I came across this rule in my old notes, but I need an explanation of how it could possibly originate: The theorem says that for any lattice $L$ in $\mathbb{R}^n$, the order of the quotient group, $\lvert L/aL\rvert$, where $a$ belongs to $\mathbb{R}^n$, is given by $\text{vol}(aL)/\text{vol}(L)=|a|^n$ Any explanation for this rule? AI: Try considering the case of $L = \mathbb{Z}^n$ and see if it doesn't clear something up: If $L = \mathbb{Z}^n$, then $a L$ consists of the points of $\mathbb{Z}^n$ where every coordinate is divisible by $a$, so the quotient $L / a L$ has a total of $a^n$ representatives, namely $$(0, \ldots, 0), (0, \ldots, 1), \ldots, (a-1, \ldots, a-1)$$ In general, when you multiply $L$ by $a$, you're "thinning out the lattice" by a factor of $a$ in $n$ different dimensions, so $a L$ will be $1/a^n$ the size of $L$, and consequently $|L / a L| = a^n$. UPDATE: As HSN points out in the comments, apparently the source of confusion is that you have $a \in \mathbb{R}^n$ rather than $a \in \mathbb{Z}$ in the notes. I'm not an expert and I won't swear to it, but surely that's just a mistake -- see my comment below.
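Both counts in the answer are easy to verify directly for the scalar case $a\in\mathbb Z$ acting on $L=\mathbb Z^2$ (a Python sketch):

```python
import numpy as np

n, a = 2, 3
basis = np.eye(n)                      # rows form a basis of Z^2
vol_ratio = abs(np.linalg.det(a * basis)) / abs(np.linalg.det(basis))
print(vol_ratio)                       # a**n = 9.0

# representatives of Z^2 / aZ^2: reduce coordinates mod a
reps = {(x % a, y % a) for x in range(-20, 20) for y in range(-20, 20)}
print(len(reps))                       # also 9
```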
H: Fibonacci-like sequence Today I have to deal with something that resembles the Fibonacci sequence. Let's say I have a certain number $k$, which is the $n$-th number of a certain sequence. This sequence is created by the recursive formula that we know from Fibonacci, $a(n) = a(n-1) + a(n-2)$, where $n \ge 2$ and $a(0) \le a(1) \le \dots \le a(n)$. So let's say $a(n) = k$. Now I have to find $a(0)$ and $a(1)$, the initial numbers of this sequence; however, the sequence should be the longest possible, and in case there are many of them of the same length, $a(0)$ should be the smallest possible. An example: $k = 10$. I can simply say $a(0) = 0$, $a(1) = 10$, so that $a(n) = k$ is a part of this sequence, since $a(0) + a(1) = a(2) = 10$. But it's not the longest possible. For instance, choose $a(0) = 0$, $a(1) = 2$; now $a(2) = 2$, $a(3) = 4$, $a(4) = 6$, $a(5) = 10$. This is also a valid sequence, its length is $6$, and as far as I know it cannot be longer. Any idea how to do this for any $k$? It might be a math formula or some algorithm. AI: My guess is to pick $a(n-1)$ so that $a(n)/a(n-1)$ is close to the Fibonacci ratio (the golden ratio $\varphi\approx1.618$).
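Since no closed formula is given above, here is a brute-force algorithm (a Python sketch; the search bounds and the tie-breaking rule, longest first and then smallest $a(0)$, follow the question's statement):

```python
def longest_fib_like(k):
    """Find (a0, a1) with 0 <= a0 <= a1 whose Fibonacci-like sequence
    reaches k, maximising the length and then minimising a0."""
    best = None   # (key, a0, a1, sequence) with key = (length, -a0)
    for a1 in range(1, k + 1):
        for a0 in range(0, a1 + 1):
            seq = [a0, a1]
            while seq[-1] < k:
                seq.append(seq[-1] + seq[-2])
            if seq[-1] == k:
                key = (len(seq), -a0)
                if best is None or key > best[0]:
                    best = (key, a0, a1, seq)
    _, a0, a1, seq = best
    return a0, a1, seq

print(longest_fib_like(10))   # (0, 2, [0, 2, 2, 4, 6, 10]), matching the example
```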
H: Sums of powers below a prime Given a prime $p$ and a natural number $k$, such that $k$ is not divisible by $p - 1$, prove that $\sum_{i = 1}^{p - 1}i^k \equiv 0 \pmod p$. I split the proof into two cases: one where $k$ is odd and another where it is even. The case where $k$ is odd can be proven as follows: $$\sum_{i = 1}^{p - 1}i^k \equiv \sum_{i = 1}^{\frac{p - 1}{2}}i^k + \sum_{i = 1}^{\frac{p - 1}{2}}(-i)^k \pmod p$$ Since $k$ is odd, each $(-i)^k = -i^k$ and will cancel with the positive $i^k$, leaving zero. Therefore, $$\sum_{i = 1}^{p - 1}i^k \equiv 0 \pmod p \text{ if $k$ is odd.} $$ I approached the case where $k$ is even in a similar fashion. $$\sum_{i = 1}^{p - 1}i^k \equiv \sum_{i = 1}^{\frac{p - 1}{2}}i^k + \sum_{i = 1}^{\frac{p - 1}{2}}(-i)^k \equiv 2\left(\sum_{i = 1}^{\frac{p - 1}{2}}i^k\right) \pmod p$$ Since $p$ is an odd prime, we just need to prove that $\displaystyle\sum_{i = 1}^{\frac{p - 1}{2}}i^k \equiv 0 \pmod p$. I was stumped after this, so I considered what goes wrong when $k$ is divisible by $p - 1$. In that case I considered $p = 5, k = 8$: $$1^8 + 2^8 + 3^8 + 4^8 \equiv 1 + 1 + 1 + 1 \pmod p$$ This and a few other cases led to the conjecture that the multiplicative order of every number below $p$ divides $p - 1$. I tested it with $p = 7$. $2: 2, 4, {\color{red}1}$ $3: 3, 2, 6, 4, 5, {\color{red}1}$ $4: 4, 2, {\color{red}1}$ $5: 5, 4, 6, 2, 3, {\color{red}1}$ After this I was stuck. I couldn't see how this would help me solve the problem. And, more importantly, I couldn't prove my conjecture. AI: HINT: As $i,1\le i\le p-1$ forms a Reduced Residue System $\pmod p,$ and so does $g^r$ for $0\le r\le p-2$, where $g$ is a primitive root of $p$, $$\text{So, }\sum_{1\le i\le p-1}i^k\equiv\sum_{0\le r\le p-2}(g^r)^k\pmod p$$ $$\text{Now, }\sum_{0\le r\le p-2}(g^r)^k=\frac{(g^k)^{p-1}-1}{g^k-1}=\frac{(g^{p-1})^k-1}{g^k-1}$$ Using Fermat's Little Theorem, $g^{p-1}\equiv1\pmod p\implies (g^{p-1})^k\equiv1\pmod p\iff (g^{p-1})^k-1\equiv0\pmod p$ As $g$ is a primitive root of $p,\text{ord}_pg=p-1\implies g^k\not\equiv1\pmod p$ as $k$ is not divisible by $p-1$ $\implies (g^k-1,p)=1\implies \frac{(g^{p-1})^k-1}{g^k-1}\equiv0\pmod p$ $\implies \sum_{1\le i\le p-1}i^k\equiv\sum_{0\le r\le p-2}(g^r)^k\equiv \frac{(g^{p-1})^k-1}{g^k-1}\equiv0\pmod p$
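The statement, and the role of the divisibility condition, are easy to check numerically; a Python sketch for $p=13$:

```python
p = 13
for k in range(1, 3 * (p - 1) + 1):
    s = sum(pow(i, k, p) for i in range(1, p)) % p
    print(k, s)   # s == p - 1 exactly when (p - 1) | k, and s == 0 otherwise
```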
H: to find the number of distinct roots of a degree-three polynomial Given that $a,b,c$ are three distinct real numbers, the number of distinct real roots of the equation $p(x)=(x-a)^3+(x-b)^3+(x-c)^3=0$ is 1 2 3 depends on $a,b,c$ What I did: setting $p'(x)=0$ gives a degree-two polynomial, yet I thought it had three distinct roots, so $p'(x)\equiv 0$ and $p(x)=k$ for some constant. Where am I wrong? Is the problem wrong? Thank you for the help. AI: Hint: Look at $p'(x)$ (is it always positive?) and consider $p(x)$ as $x\to-\infty$ and as $x\to\infty$.
H: (From Lang $SL_2$) Orthonormal bases for $L^2 (X \times Y)$ Lang $SL_2$ p. 13: Let $\{\phi_i\}$, $\{\psi_i\}$ be orthonormal bases for $L^2(X)$ and $L^2(Y)$ respectively. Let $$\theta_{ij}(x,y) = \phi_i(x)\psi_j(y).$$ Then $\{\theta_{ij}\}$ is an orthonormal basis for $L^2(X \times Y)$. To see this, it is first clear that the $\theta_{ij}$ are of norm 1, and mutually orthogonal. Let $g \in \mathcal L^2(X \times Y)$ be perpendicular to all $\{\theta_{ij}\}$. Then $$\int_X\phi_i(x)dx\int_Y\psi_j(y)g(x,y)dy=0\quad(*)$$ for all $i,j$. Hence $$x \mapsto\int_Y\psi_j(y)g(x,y)dy$$ is zero except for $x$ in a null set $S$ in $X$. If $x \notin S$, then for almost all $y$, we have $g(x,y) = 0$. Questions: I assume that in ($*$) a double integral is meant, i.e. the integral over $Y$ should be inside the integral over $X$. 1) It is not clear to me how to get from ($*$) to the conclusion that the above map is zero for almost all $x$, and similarly from this to the fact that $g(x,y)$ is zero for almost all $y$. This would of course follow if the orthonormal basis were positive, which is not always the case, since sines and cosines form an orthonormal basis for $L^2[0,1]$. 2) On the next page, Lang considers an orthonormal basis for $L^2(X \times X)$. He ascertains that it is $\{\phi_i\otimes\bar\phi_j\}$. Why do we have to take the conjugate on the second element in $L^2(X \times X)$ but not in $L^2(X \times Y)$? 3) Does anyone know what the difference is between $L^2$ and $\mathcal L^2$? AI: 1) Equation (*) says $\left<\phi_i,h_j\right>=0$ for all $i,j$, where $h_j$ is the function $x \mapsto\int_Y\psi_j(y)g(x,y)dy$. Since the $\phi_i$ form an orthonormal basis, it follows that (for each $j$) $h_j=0$ in $L^2$, i.e. is zero almost everywhere. Similarly, $0=h_j(x)=\left<\psi_j,g(x,-)\right>$ implies $g(x,-)$ is zero a.e. 2) We don't have to, but it is convenient in the context of integral operators. Recall in general that for a Hilbert space $H$ we identify $H$ with the conjugate dual, so there is an isomorphism $H\otimes \overline{H}\to B_{fin}(H)$ between the algebraic tensor product and finite-rank operators. Completing in the appropriate norm gives the isomorphism $H\widehat{\otimes} \overline{H}\to B_{h.s.}(H)$ between the Hilbert space tensor product and Hilbert-Schmidt operators. Applied to $H=L^2(X)$, and composing with the canonical map $L^2(X)\otimes \overline{L^2(X)}\to L^2(X\times X)$ given by $f\otimes g\mapsto k_{f,g}:=[(x,y)\mapsto f(x)\overline{g(y)}]$ (which we just proved to be a unitary isomorphism), we get the isomorphism $L^2(X\times X)\to B_{h.s.}(L^2(X))$ given by $k_{f,g}\mapsto \left<-|g\right> f.$ This is the usual identification between kernels in $L^2$ and Hilbert-Schmidt operators. 3) I think $L^2=\mathcal{L}^2/\sim$ where $f\sim g$ iff $f=g$ almost everywhere. Thus Lang carefully distinguishes between functions and equivalence classes of functions.
H: Relation between inverse and inverse of adjoint in a unital C*-algebra Let $\mathcal{A}$ be a unital C*-algebra. Is the following statement true? $$A \text{ not invertible} \Leftrightarrow A^* \text{ not invertible}$$ Moreover, suppose $A\in \mathcal{A}$ has an inverse $A^{-1}\in \mathcal{A}$; what can I say about the inverse of the adjoint $A^*$? AI: It is enough to prove that $A$ is invertible iff $A^*$ is invertible. Proof of implication $\Longrightarrow$. Let $B$ be the inverse of $A$; then $AB=BA=1$. Apply the adjoint to get $B^*A^*=A^*B^*=1$, hence $B^*=(A^*)^{-1}$. Proof of implication $\Longleftarrow$. Let $B$ be the inverse of $A^*$; then $A^*B=BA^*=1$. Apply the adjoint to get $B^*A=AB^*=1$, hence $A^{-1}=B^*$. These facts show that $A$ is invertible iff $A^*$ is invertible, and what is more $$ (A^{-1})^*=(A^*)^{-1} $$
H: Are the closed ordinals (apart from $0$ and $1$) precisely the regular cardinals? Given a partially ordered set $P$, a collection of partially ordered sets, call it $\mathcal{Q}$, and an arbitrary function $f : P \rightarrow \mathcal{Q},$ we can form a new partially ordered set $\Sigma f$ as follows. The elements of $\Sigma f$ are the pairs $(p,q) \in P \times \bigcup \mathcal{Q}$ such that $f(p) \ni q$. Define $(p,q) \leq (p',q')$ iff either $p < p'$ in the sense of $P$, or $p=p',$ and letting $Q = f(p)=f(p'),$ we have that $q \leq q'$ in the sense of $Q$. Note in particular that if $P$ is totally ordered and the elements of $\mathcal{Q}$ are totally ordered, then for all functions $f : P \rightarrow \mathcal{Q}$ we have that $\Sigma f$ is totally ordered. Similarly if 'totally ordered' is replaced by 'well-ordered.' Question 1. Is there a standard name/notation for this construction? Where can I learn more? Now call a collection $\mathcal{Q}$ of posets closed iff for all $P \in \mathcal{Q}$ and all $f : P \rightarrow \mathcal{Q}$ we have that $\Sigma f$ is order-isomorphic to an element of $\mathcal{Q}.$ Note also that an ordinal is not only a poset, but also collection of posets. Thus any given ordinal may or may not be closed. And since cardinals can be viewed as particular kinds of ordinals, any given cardinal may or may not be closed. In fact, it is easy to see that every infinite cardinal is closed. (Thank you, Asaf, for your correction.) Furthermore, every regular cardinal is closed. Question 2. Are the closed ordinals (apart from $0$ and $1$) precisely the regular cardinals? AI: Your claim is wrong. It isn't true that cardinals are closed, it is true that regular cardinals are closed. In fact that gives an exact characterization of the closed ordinals. First it is trivial that finite ordinals larger than $1$ are not closed. Now suppose that $\alpha+1$ is a successor ordinal, then $2\in\alpha+1$ and $\alpha\in\alpha+1$ then the map $f(0)=\alpha, f(1)=2$ witnesses that $\alpha+1$ is not closed. We can generalize this to the case where $\delta$ is not a regular cardinal: If $\delta$ is a limit ordinal and $\lambda=\operatorname{cf}(\delta)<\delta$ then we have: $\lambda+1\in\delta$, we have $\{\delta_i\mid i<\lambda\}\subseteq\delta$ which is a cofinal sequence. The map $f(i)=\delta_i$ for $i<\lambda$ and $f(\lambda)=1$ gives us an order type of $\delta+1\notin\delta$. As for the name, I'm not familiar enough with order theory to answer that, but it seems that this is a generalized sum of orders, over $P$, I don't know for certain though.
H: Compare standard deviation without using the standard formula I know very well that the standard deviation is a measure of the spread of the data. If the data deviate more from the mean, they have a higher standard deviation. What if two data sets have the same mean and range and just a couple of values changed, e.g. Set1 : 10, 20, 50, 80, 90 Set2 : 10, 30, 50, 70, 90 Both have the same range and mean; how can I compare the spread of the data without using the formula? AI: Note that the standard deviation measures "squared differences" from the mean. Let's write down the squared differences for your data sets Set 1: $40^2, 30^2, 0^2, 30^2, 40^2$, Set 2: $40^2, 20^2, 0^2, 20^2, 40^2$. As the squared differences of the first data set are larger, it has the larger mean squared deviation, and hence the larger standard deviation. Or, without calculating: as the data in the first set are on average farther from the mean, it has the larger standard deviation.
H: Solving for X in a simple matrix equation system. I am trying to solve for X in this simple matrix equation system: $$\begin{bmatrix}7 & 7\\2 & 4\\\end{bmatrix} - X\begin{bmatrix}5 & -1\\6 & -4\\\end{bmatrix} = E $$ where $E$ is the identity matrix. If I multiply $X$ with $\begin{bmatrix}5 & -1\\6 & -4\\\end{bmatrix}$ I get the following system: $$\begin{bmatrix}5x_1 & -1x_2\\6x_3 & -4x_4\\\end{bmatrix}$$ By subtracting this from $\begin{bmatrix}7 & 7\\2 & 4\\\end{bmatrix}$ I get $\begin{bmatrix}7 - 5x_1 & 7 + 1x_2\\2 - 6x_3 & 4 + 4x_4\\\end{bmatrix} = \begin{bmatrix}1 & 0\\0 & 1\\\end{bmatrix}$ Which gives me: $7-5x_1 = 1$ $7+1x_2 = 0$ $2-6x_3 = 0$ $4+4x_4 = 1$ These are not the correct answers, can anyone help me out here? Thank you! AI: $$\begin{bmatrix}7 & 7\\2 & 4\\\end{bmatrix} - X\begin{bmatrix}5 & -1\\6 & -4\\\end{bmatrix} = I $$ where $I$ is the identity matrix. Hint: reconsider what multiplication by $X$ will look like: $X$ will be a $2\times 2$ matrix, if matrix multiplication and addition is to be defined for this equation. So if $X = \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}$, then $$X\begin{bmatrix}5 & -1\\6 & -4 \end{bmatrix} = \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}\begin{bmatrix} 5 & -1 \\ 6 & -4\end{bmatrix} = \begin{bmatrix}5x_1+6x_2&-x_1-4x_2\\5x_3+6x_4&-x_3-4x_4\end{bmatrix}$$
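Assuming the intended unknown is a $2\times2$ matrix $X$ multiplying $B=\begin{bmatrix}5&-1\\6&-4\end{bmatrix}$ on the left, one can also avoid the entrywise equations entirely: $X=(A-I)B^{-1}$. A numerical sketch:

```python
import numpy as np

A = np.array([[7.0, 7.0], [2.0, 4.0]])
B = np.array([[5.0, -1.0], [6.0, -4.0]])

X = (A - np.eye(2)) @ np.linalg.inv(B)
print(X)          # approximately [[33/7, -41/14], [13/7, -17/14]]
print(A - X @ B)  # should reproduce the identity matrix
```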
H: Show that there are infinitely many solutions of distinct natural numbers $m,n$ such that $n^3+m^2\mid m^3+n^2$ Show that there are infinitely many solutions of distinct natural numbers $m,n$ such that $n^3+m^2\mid m^3+n^2$. This question appeared in Round $2$ of the British Math Olympiad $2007-08$. I have been trying this problem for two days and have found no sequence of numbers satisfying it. AI: Choose $a\in\mathbb{N}$, and set $m=an$. Then the condition becomes $n^3+a^2n^2|a^3n^3+n^2$, which is equivalent to $$n+a^2|a^3n+1$$ We now set $n=a^5-a^2-1$ and see that $$(a^3-1)(n+a^2)=(a^3-1)(a^5-1)=a^8-a^5-a^3+1=a^3n+1$$ Update, pulling back the veil: I looked for $k$ such that $k(n+a^2)=a^3n+1$ and I could solve for $n$. After one or two false attempts I tried $k=a^3-1$.
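The construction above is easily verified by machine; a Python sketch checking the first few values of $a$ (per the derivation, the quotient should come out to $a^3-1$):

```python
for a in range(2, 10):
    n = a**5 - a**2 - 1
    m = a * n
    q, r = divmod(m**3 + n**2, n**3 + m**2)
    print(a, n, m, q)                       # q == a**3 - 1 every time
    assert r == 0 and q == a**3 - 1 and m != n
```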
H: If $f(-1) = 0$ and $f(2)=0$, and if $g(x)= 2x-1$, then find the value of $x$ for which $(f\circ g)(x) = 0$ I have found the following problem. If $x = -1$ and $x=2$ then $f(x) = 0$. If $g(x)= 2x-1$, then find the value of $x$ for which $f\circ g(x) = 0$. I have solved the above problem in the following way. $f(x) = (x+1)(x-2)$ $f\circ g(x) = f(g(x)) = (2x)(2x-3)$ So $2x(2x-3) = 0 \implies x= 0$ or $x= 3/2$ So if $x = 0$ or $x=3/2$ then $f\circ g(x) = 0$ Is the above procedure right? I have solved the above problem with a bit of confusion. That's why I am asking this question here. Please, remove my confusion with a proper explanation. AI: Yes: Assuming $f(x) = 0 $ if $x = -1,$ or $x = 2$, then one possible function for $f(x)$ will be $f(x) = (x+1)(x-2)$. And we are given $g(x) = 2x - 1$. If this is the case, it follows that $(f\circ g)(x) = f(g(x)) = (2x)(2x-3)$ And so: $2x(2x-3) = 0\implies x= 0$ or $x= 3/2$ But since $f(x)$ isn't necessarily a polynomial, or may be a multiple of our guess at $f(x)$, we would need to check that at these two values it holds that $g(x) = -1$ or $g(x) = 2$. Can you see why knowing precisely what $f(x)$ is isn't necessary? (In fact, we don't know for certain what $f(x)$ actually is.) Without knowing $f(x)$, but only knowing $f(-1) = 0$ and $f(2) = 0$, we can find the values at which $g(x) = -1,$ and when $g(x) = 2$, because then we know for sure that $$f(g(x)) =f(-1) = 0\quad\text{and that }\quad f(g(x)) = f(2) = 0$$ $$g(x) = 2x - 1 = -1 \iff 2x = 0 \iff \bf x = 0$$ $$g(x) = 2x - 1 = 2\iff 2x = 3 \iff \bf x =\dfrac 32$$
H: How to solve equations with logarithms, like this: $ ax + b\log(x) + c=0$ I encountered an equation of the type $$ ax + b\log(x) + c=0$$ Here $a$, $b$, and $c$ are constants. Does anyone know how to solve these types of equations? I guessed this way: $$\log(x)= -\frac{ax+c}{b}$$ $$x= 10^{-(ax+c)/b}$$ But I do not know how to solve this either. Please help! AI: Rewriting the equation gives $$ ax+b\log(x)+c=0\\ ax+b\log\left(\frac abx\right)+c-b\log\left(\frac ab\right)=0\\ \frac abx+\log\left(\frac abx\right)+\frac cb-\log\left(\frac ab\right)=0\\ \color{#C00000}{(a/b)x}\,e^{\color{#C00000}{(a/b)x}}=(a/b)\,e^{-c/b} $$ The Lambert-W function is the inverse of $xe^x$. Therefore, $$ \color{#C00000}{(a/b)x}=\mathrm{W}\left((a/b)e^{-c/b}\right) $$ and so $$ x=(b/a)\mathrm{W}\left((a/b)e^{-c/b}\right) $$
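SciPy exposes the Lambert-W function, so the closed form can be checked numerically. A sketch with hypothetical coefficients; note the answer reads $\log$ as the natural logarithm, so for a base-10 $\log$ you would first rescale $b$ by $\ln 10$:

```python
import numpy as np
from scipy.special import lambertw

a, b, c = 2.0, 3.0, -5.0                      # hypothetical constants
x = (b / a) * lambertw((a / b) * np.exp(-c / b)).real
print(x)
print(a * x + b * np.log(x) + c)              # residual; essentially zero
```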
H: How to show $0$ is a point of closure of weak topology, but not a limit of weakly covergent sequence in a a subset of $l^2$ (von Neumann)For each natural number $n$, let $e_n$ denote the sequence in $\mathcal {l}^2$ whose $n$th component is $1$ and other components vanish. Define$$E = \{e_n + n \cdot e_m : n,m \in \Bbb N \land m > n\}$$ This is an example on page 288, Real Analysis(4ed), Royden et al such that "$0$ is a point of closure with respect to weak topology, of $E$, but there is no sequence in $E$ that converge weakly to $0$". I've no idea how to show this. The first statement is equivalent to, for any $\epsilon > 0$, and $\{\psi_k\}_{k = 1}^{n} \subset {\mathcal{l}^2}^{\ast}$(${\mathcal{l}^2}^{\ast}$ is the topological dual of ${\mathcal{l}^2}$) $$\mathcal{N}_{\epsilon, \psi_1, \ldots,\psi_n}(0) = \{x \in \mathcal{l}^2 : |\psi_k(x)|< \epsilon \text{ for} 1 \leq k \leq n\}$$ $$E \cap \mathcal{N}_{\epsilon, \psi_1, \ldots,\psi_n}(0) \neq \varnothing$$ It seems to me it's true not only for $\{\psi_k\}_{k = 1}^{n} \subset {\mathcal{l}^2}^{\ast}$ but also for $\{\psi_k\}_{k = 1}^{n} \subset {\mathcal{l}^2}^{\#}$(the set of all linear functions on $\mathcal{l}^2$). Thus $\{\psi_k\}_{k = 1}^{n}$ can be represented as $\{(\cdot ,u_k)\}_{k = 1}^{n}$, $u_k \in \mathcal{l}^2 $. Let $a^k_i$ be the $i$th component of $u_k$. For any $\epsilon > 0$, we can find $n^k(\epsilon)$ such that for all $i \geq n^k(\epsilon) $, $a^k_i < \frac{\epsilon}{2}$ and $a^k_{i^2} < \frac{\epsilon}{2i}$. This may be shown by proof by contradiction, which seems to be messy. Let $p = \max\{n^k(\epsilon): 1 \leq k \leq n\}$. We have $e_{p}+p \cdot e_{p^2} \in E \cap \mathcal{N}_{\epsilon, \psi_1, \ldots,\psi_n}(0)$. I got stuck on how to show that there is no sequence in $E$ that converge weakly to $0$. AI: Weakly convergent sequences are weakly bounded. For a locally convex TVS, weakly bounded coincides with bounded, in this case norm-bounded. Since we're dealing with $l^2$, we have $\lVert e_n + n\cdot e_m\lVert = \sqrt{1+n^2} > n$ for $m > n$. Thus a weakly convergent sequence $(x_i = e_{n_i} + n_i\cdot e_{m_i})$ in $E$ must have $n_i$ bounded, say $n_i \leqslant k$. Hence there must be an $n_0 \leqslant k$ such that $n_i = n_0$ for infinitely many $i$. Then $\langle x_i \mid e_{n_0} \rangle = 1$ for infinitely many $i$, hence $x_i \not\to 0$ weakly. To see that $0$ is in the weak closure of $E$, consider $\mathcal{N}_{\varepsilon;\,\psi_1,\,\ldots,\,\psi_n}(0)$. Let $\eta_k = \sum_{i = 1}^n \lvert \psi_i(e_k)\rvert$. Since 1. $\bigl(l^2\bigr)^\ast$ is (anti-)isomorphic to $l^2$, 2. $\eta \in l^2$, and since 3. the coefficients of the $e_j$ are (real and) non-negative for all elements of $E$, $$\{v \in E \colon \langle \eta\mid v\rangle < \varepsilon\} \subset\mathcal{N}_{\varepsilon;\,\psi_1,\,\ldots,\,\psi_n}(0).$$ So it is sufficient to show that there are $n < m \in \mathbb{N}$ with $\eta_n + n\cdot \eta_m < \varepsilon$. But since $\eta \in l^2$, there is a $k \in \mathbb{N}$ with $\sum_{n = k}^\infty \eta_n^2 < \varepsilon^2/4$, in particular $\eta_k < \varepsilon/2$. Then $\langle\eta\mid e_k + k\cdot e_m\rangle = \eta_k + k\cdot\eta_m \to \eta_k < \varepsilon$, hence $e_k + k\cdot e_m \in \mathcal{N}_{\varepsilon;\,\psi_1,\,\ldots,\,\psi_n}(0)$ for all large enough $m$.
H: differently shaped open bases on $\mathbb{R}^2$ Let $J_1$ be the smallest topology on $\mathbb{R}^2$ containing the sets $(a,b)\times (c,d)$ for all $a,b,c,d\in\mathbb{R}$, let $J_2$ be the smallest topology on $\mathbb{R}^2$ containing the sets $\{(x,y):(x-a)^2+(y-b)^2<c\}$ for all $a,b\in\mathbb{R}$, $c>0$, and let $J_3$ be the smallest topology on $\mathbb{R}^2$ containing the sets $\{(x,y):|x-a|+|y-b|<c\}$ for all $a,b\in\mathbb{R}$, $c>0$. My question is: is it true that $J_1=J_2=J_3$? Or is there some containment relationship among them? In the exam hall I thought that I can push any open set of $J_1$ into some open set of $J_2,J_3$, and I can do the same thing for the other two, so they are equal? Thank you. AI: They will all be the same. You need to show that you can fit each basis element of $J_1$ inside a basis element of $J_2$ inside a (... think matryoshka dolls). If you draw a picture of the sets, you'll see why it's true intuitively, and then you just need to back it up with a little algebra.
H: How to find Perimeter and Area? I have this question: A rancher has 480 feet of fencing with which to enclose two adjacent rectangular corrals. What dimensions should be used so that the enclosed area will be a maximum? x=....ft y=....ft Below is my work: P=4x+2y A=2xy 480=4x+2y 240-2x=y A=2x(240-2x) 480x-4x^2 I cannot figure out what do I have to do next??? Thanks in advance. AI: Because there are two adjacent rectangular corrals, we need for perimeter to be such that $$P = 3x+2y = 480\iff 2y = 480 - 3x \iff y = 240 - \frac 32 x$$ Where $3x$ counts the three sides, one of which is shared by each rectangle. Then Area is $x y$...$y$ being the width across the large rectangle, and $x$ being the length of the large rectangle containing the adjacent rectangles. $$A = xy = x\left(240 - \frac 32 x\right) = 240 x - \frac 32 x^2$$ Now find $A'(x)$ and set that equal to $0$ and solve for maximum $x$. $$A'(x) = 240 - 3x = 0 \iff 3x = 240 \iff x = 80\;\text{feet}$$ $$y = 240 - \frac 32 x = 240 - 120 = 120\;\text{feet}$$ So the large rectangle will be 120 feet across top and bottom, and 80 feet along each side: Two outer sides, one inner (middle/dividing) side.
H: Classical probability, need my work checked. A kinder egg may contain a prize of type $A$, of type $B$, or be empty. The probability of a prize $A$ is $5$ times larger than of prize $B$. A box contains 12 kinder eggs, from which 7 are known to be empty. Three eggs are drawn at random. a.) Find the prob. that you find at least one prize $A$ b.) Given that you've found exactly one prize $A$, find the prob. that the other two were $B$'s. My attempt: Since an egg can either contain $A,B$ or nothing, we have $$p_a +p_b +p_0 = 1$$ Incorporating the condition $p_a =5p_b$, this leads to: $$p_0=1-\frac{6}{5}p_a$$ $$p_a=p_a$$ $$p_b = \frac{1}{5} p_a$$ Since we know that 7 eggs are empty, we can say (though I'm not sure about this bit) $$p_0 = \frac{N_0}{N} = \frac{7}{12}.$$ This gives us expressions for each of the probabilities, using the above system of equations. For part a.) we have $P(A_{\ge 1}) = 1-P(A_0)$, so all that remains is to find $P(A_0)$, where the notation is clear. This is binomial, since no A's is equivalent to all B's, all empty, or three ways of either 2 B's, one empty or 2 empty, one B: $$P(A_0) = (p_0 +p_b)^3.$$ For part b.) it's a bit unclear to me. For three drawn eggs, the probability of one A (the condition given) results from one of twelve outcomes, in groups of four, that is the total probability of getting exactly one A is: $$P(A_1) = 4(p_a p_0 ^2) + 4(2p_a p_b p_0) +4(p_a p_b ^2)$$ So now, the probability of getting two B's given one A is $$P(B_2|A_1)= \frac{P(B_2 \cap A_1)}{P(A_1)}$$ But the probability of two B's and one A is $$P(B_2 \cap A_1) = 3p_b ^2 p_a$$ So with this all that remains is to plug in the numbers. I would appreciate someone checking my work because I have the worst intuition when it comes to classical/combinatorial probability... AI: We do have $p_a=5p_b$ and $p_a+p_b+\frac{7}{12}=1$, and we can now calculate $p_b$ and $p_a$. So we can, as you did, take the basic probabilities as known. Your solution to (a) is correct; it is now just a matter of filling in the numbers. For (b), you are aware that we need to find $\Pr(A_1)$, the probability of exactly $1$ prize A. This one is tricky. There are $3$ ways this can happen. (i) We have $2$ empty eggs, and the remaining prize is an A; (ii) We have $1$ empty egg, and precisely one of the two remaining eggs has an A; (iii) We have $0$ empty eggs and precisely one of the three eggs has an A. To complete the calculation, you will also need to use the probability of (iii), as your solution indicated. We calculate the probability of (i). There are $\binom{12}{3}$ equally likely ways to choose $3$ eggs. There are $\binom{7}{2}\binom{5}{1}$ ways to choose two empty eggs and a non-empty one. Thus the probability we choose two empty and one non-empty is $\frac{\binom{7}{2}\binom{5}{1}}{\binom{12}{3}}$. Given that we got $2$ empty and $1$ non-empty, the probability our non-empty has an A is $\frac{p_a}{p_a+p_b}$. This ratio is $\frac{5}{6}$. So the probability of (i) is $$\frac{\binom{7}{2}\binom{5}{1}}{\binom{12}{3}}\cdot \frac{5}{6}.$$ Now we need the probabilities of (ii) and of (iii). The basic arguments are the same as for (i). We leave these to you. If difficulties arise, please leave a comment. Remark: The tricky thing about the calculations is that both the hypergeometric and the binomial are involved. We could bypass the binomial coefficients by imagining we are choosing eggs one at a time.
But we have to take account of the fact that, for example, choosing an empty egg on the first trial affects the probability of choosing an empty egg on the next trial.
H: How does one combine proportionality? This is something that often comes up in both Physics and Mathematics in my A Levels. Here is the crux of the problem. So, you have something like this: $A \propto B$, which means that $A = kB \tag{1}$ Fine, then you get something like: $A \propto L^2$, which means that $A = k'L^2 \tag{2}$ Okay, so from $(1)$ and $(2)$ they derive: $$A \propto BL^2$$ Now how does that work? How do we derive from the properties in $(1)$ and $(2)$ that $A \propto BL^2$? Thanks in advance. AI: Suppose a variable $A$ depends on two independent factors $B,C$. Then $A\propto B\implies A=kB$, but here $k$ is a constant w.r.t. $B$, not $C$; in fact, $k=f(C)\tag{1}$ Similarly, $A\propto C\implies A=k'C$, but here $k'$ is a constant w.r.t. $C$, not $B$; in fact, $k'=g(B)\tag{2}$ From $(1)$ and $(2)$, $f(C)B=g(B)C$, i.e. $\frac{f(C)}{C}=\frac{g(B)}{B}$. A function of $C$ alone can equal a function of $B$ alone only if both equal a constant $k''$, so $f(C)=k''C$. Putting this in $(1)$ gives $A=k''CB\implies A\propto BC\tag{Q.E.D.}$
H: Manifolds with finitely many ends In the article 'The structure of stable minimal hypersurfaces in $R^{n+1}$' (http://arxiv.org/pdf/dg-ga/9709001.pdf) of Cao-Shen-Zhu, Remark 2 on page 3 contains a statement that I don't understand (actually, it seems false to me). Let $M$ be a manifold and let $\{K_n\}$ be an exhaustion by compact sets: $$ \cup_n K_n = M $$ and $$ K_n \subset K_{n+1} $$ An end of $M$ is a collection of subsets (actually open subsets) $\{E_n\}$ such that $E_n$ is a connected component of $M-K_n$ and $$ E_{n+1} \subset E_n $$ It can be proved that the number of ends is independent of the choice of the exhaustion $\{K_n\}$. Now, in the article above it is stated that if a manifold has only finitely many ends $\{ E_{n}^{1} \}, \ldots \{ E_{n}^{k} \}$, there exists $n_0$ such that $$ E_{n}^{j} = E_{n_0}^{j} $$ for every $j = 1 \ldots k$ and $n \geq n_0$. This statement seems false to me. Thanks AI: You're right, that statement is false. They probably mean that the number of ends stabilizes at some point (as there might be just one component at first, then more, then more, etc.).
H: Derivative of $\sqrt[3]{x} e^x$ $$y=\sqrt[3]{x} e^x$$ I have no idea how to solve it. AI: Start with writing $\sqrt[3]x=x^{\frac 13}$. Then use the product rule.
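For reference, a quick symbolic check of the product-rule computation with SymPy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.cbrt(x) * sp.exp(x)
print(sp.simplify(sp.diff(y, x)))
# something equivalent to exp(x)*(3*x + 1)/(3*x**(2/3))
```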
H: Etymology of Tor and Ext Functors The names of the derived functors $\operatorname{Tor}$ and $\operatorname{Ext}$ seem quite cryptic to me. Does anyone know what these abbreviations stand for? I would be glad if someone could tell me where these names come from. AI: Ext stands for extension, as the group $\operatorname{Ext}^1(X,Y)$ parameterises extensions $Z$ fitting into a short exact sequence: $$0\to Y\to Z\to X\to 0$$ modulo the trivial extension $X\oplus Y$. According to Wikipedia, Tor is short for torsion, as if $r\in R$ is not a zero divisor and $B$ is an $R$-module, then $\operatorname{Tor}_1(R/(r),B)$ can be identified with the $r$-torsion part of $B$, i.e. $b\in B$ such that $rb=0$.
H: How many different sets of $9$ questions can the student select? A history examination is made up of $3$ sets of $5$ questions each, and a student must select $3$ questions from each set. How many different sets of $9$ questions can the student select? I am in a dilemma about how to use the combination formula. AI: From each of the three sets of $5$ questions, a student must select three questions. So, yes, we can compute three sets of combinations: $(1)$ From the first set of $5$ questions, there are $\binom 53$ ways a student may select three questions. $(2)$ Similarly, from the second set of $5$ questions, the student must select 3 questions to answer. Again, there are $\binom 53$ ways of doing so. $(3)$ And likewise for the third set. $(*)$ Then, as David Mitra suggested, we need the multiplication principle to get, overall, the number of ways a student can answer questions on the test, for a total of: $$\binom 53 \cdot \binom 53 \cdot \binom 53 \quad \text{ways of doing so}$$ Since $\binom 53 = 10$, this comes to $10^3 = 1000$ different sets of nine questions.
H: Inequality for finite harmonic sum and logarithm How do you prove the inequality $\left|\sum_{k=1}^n \frac1k - \log n \right| \leq 1$? AI: Hint: Note that $\sum_{k=2}^{n} \frac1k$ and $\sum_{k=1}^{n-1} \frac1k$ are, respectively, the lower and the upper Riemann sums (over subintervals of length $1$) of the integral $$\int_1^n \frac{1}{x}\, dx = \log n.$$ Hence $\sum_{k=1}^n \frac1k - 1 \le \log n \le \sum_{k=1}^n \frac1k$, which gives $0 \le \sum_{k=1}^n \frac1k - \log n \le 1$.
H: $A$ be a complex $3\times 3$ matrix such that $A^3=-I$ Let $A$ be a complex $3\times 3$ matrix such that $A^3=-I$; then we need to find out which of the following statements are correct. $A$ has three distinct eigenvalues; $A$ is diagonalizable over $\mathbb{C}$; $A$ is triangulizable over $\mathbb{C}$; $A$ is non singular. Well, from the minimal polynomial approach I have deduced that all are correct, as the minimal polynomial has to be $x^3+1$. Am I right? AI: Take $A=-I$. Then $A^3 = -I$, but $A$ does not have distinct eigenvalues. Hint: The minimal polynomial of a matrix $A$ may be defined as the polynomial of smallest degree that is satisfied by $A$ and has leading coefficient equal to 1. Is $x^3 +1$ really the polynomial of smallest degree? (For $A=-I$ the minimal polynomial is just $x+1$. In general the minimal polynomial divides $x^3+1=(x+1)(x^2-x+1)$, which has no repeated roots over $\mathbb{C}$, so $A$ is always diagonalizable, hence triangulizable; and $A\cdot(-A^2)=I$ shows $A$ is non-singular. Thus statements 2, 3 and 4 hold, while 1 can fail.)
H: Find $n$ such that $x^n \equiv 2 \pmod{13}$ has a solution I am stuck on the following problem: Consider the congruence $x^n\equiv 2\pmod{13}$. This congruence has a solution for $x$ if $n=5$ $n=6$ $n=7$ $n=8$ Can someone explain in detail? Thanks in advance. AI: Hint: $13$ is prime, so we know there is some primitive root $g$ modulo $13$. Thus, there is some $k$ such that $g^k\equiv 2\bmod 13$, and if there is a solution $x$ to the congruence $x^n\equiv 2\bmod 13$, then there is some $\ell$ such that $g^\ell\equiv x\bmod 13$. Thus, $$(g^\ell)^n\equiv g^{\ell n}\equiv g^k\bmod 13.$$ This is true iff $\ell n\equiv k\bmod 12$, by Fermat's little theorem and the definition of primitive root: $$g^{\ell n}\equiv g^k\bmod 13\iff g^{\ell n -k}\equiv 1\bmod 13\iff (\ell n -k)\text{ is a multiple of 12}$$
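With a modulus as small as $13$, the four options can also simply be tested by brute force (Python sketch):

```python
p = 13
for n in (5, 6, 7, 8):
    sols = [x for x in range(1, p) if pow(x, n, p) == 2]
    print(n, sols)   # nonempty exactly for n = 5 and n = 7
```

Here $2$ happens to be a primitive root modulo $13$, so $k=1$ in the notation above, and the congruence $\ell n\equiv 1\pmod{12}$ is solvable exactly when $\gcd(n,12)=1$, i.e. for $n=5$ and $n=7$.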
H: test for the convergence of the series Test the convergence of the summation $$\sum_{n=1}^\infty x_n$$ where $$x_{2n-1}=\frac{n}{n+1}\\ x_{2n}=-\frac{n}{n+1}$$ That is the series $$\frac 1 2-\frac 12+\frac 23-\frac 23 +-\cdots$$ What I did was let $S_n$ be the partial sums of the series. Then $$S_{2n}=0\quad\text{and}\quad S_{2n-1}=\frac{n}{n+1},$$ so the even-indexed partial sums tend to $0$ while the odd-indexed ones tend to $1$. Thus $\lim\limits_{ n\to \infty} S_n$ doesn't converge to a particular value; hence $\lim\limits_{ n\to \infty} S_n$ doesn't exist. Therefore the series diverges. AI: Your answer is correct, but I would say it differently. What you mean is that $$\lim_{n\to\infty}S_{2n}=0$$ $$\lim_{n\to\infty}S_{2n+1}=1$$ Note that if we "introduced parentheses", and set $$a_n=x_{2n-1}+x_{2n}$$ the series $$\sum a_n$$ would converge to $0$.
H: Calculate x,y line terminating point of section of a circle I have a Cartesian plane running from -41 to 41 on the x and y axes and a circle centered on 0,0 with a radius of 41 divided up into a number of sections of different areas. I know the percentage area of each section (ie: section 1 is 16.1% of the total area of the circle, section 2 is 13% of the total circle, etc -- think pie chart). I need to calculate the x and y coordinates of the circumference points for each of the dividing lines of the sections. I am trying to programmatically draw a 'pie chart' by dividing the circle into a number of points on the circumference from 0,0. Here is my calculated data: AI: The angle represented by each percentage is just that percentage multiplied by $2 \pi$. So if the first sector starts at $(41,0)$, horizontal to the left, and we go clockwise, the terminating point for that sector is $(41 \cos (0.161\cdot 2 \pi), -41 \sin (0.161\cdot 2 \pi)) \approx (21.75,-34.75)$, where the minus sign comes from going clockwise instead of counterclockwise. The total angle of sectors $1$ and $2$ is then $0.291$ of the circle, so the ending point will be $(41 \cos (0.291\cdot 2 \pi), -41 \sin (0.291\cdot 2 \pi))\approx(-10.45,-39.58)$
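Since the actual data table was not included above, here is a sketch in Python with hypothetical percentages, following the answer's convention of starting at $(41,0)$ and going clockwise:

```python
import math

percentages = [16.1, 13.0, 25.4, 45.5]   # hypothetical sector sizes; must sum to 100
r = 41.0

angle = 0.0
for pct in percentages:
    angle += pct / 100.0 * 2.0 * math.pi   # each percentage maps to pct% of 2*pi
    x = r * math.cos(angle)
    y = -r * math.sin(angle)               # minus sign: clockwise orientation
    print(f"sector boundary at ({x:7.2f}, {y:7.2f})")
```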
H: How do I solve $\; 3^{2x+1}-10\cdot 3^x+3=0 \quad?$ Solve the following equation for $x$: $ \quad3^{2x+1}-10\cdot 3^x+3=0 $ I am baffled by this equation. By graphing I have found the answers to be $x=1$ and $x=-1$, but I would like to know how to solve the equation itself. I have tried many different approaches, including rearranging to these various forms (in no particular order): $$3^{2x+1} - 3^{x+2} - 3^x + 3 = 0$$ $$ 3^{2x+1}\cdot(1-10\cdot 3^{-x-1}+3^{-2x}) = 0 $$ $$ -2x = 10\cdot 3^{-x-1}-3^0 $$ This equation has a term eliminated in it already. The last equation written is the closest I have gotten to finding the answer, but I don't know how to proceed any further. AI: Hint: Substitute $y = 3^x$; you will get the quadratic equation $3y^2-10y+3=0$. Its roots are $y=3$ and $y=\frac13$; since $y=3^x$, these give $x=1$ and $x=-1$, matching what you found by graphing.
H: Prime factors + number of Divisors I know that one way to find the number of divisors is to find the prime factors of that number, add one to all of the powers, and then multiply them together, so for example $$555 = 3^1 \cdot 5^1 \cdot 37^1$$ therefore the number of divisors $= (1+1)(1+1)(1+1) = 2 \cdot 2 \cdot 2 = 8$ What I do not know (and can't seem to find when I look) is WHY this relationship exists, or a formula which shows its proof. Has anyone seen such a formula? AI: If you write $n= \prod\limits_{i=1}^r {p_i}^{e_i}$ with distinct primes $p_i$, then the number of divisors of $n$ is the number of different products you can make of the form $\prod\limits_{i=1}^r {p_i}^{f_i}$ where $0 \leq f_i \leq e_i$. For each prime, there are $e_i+1$ choices for $f_i$, and so the total number of choices is $\prod\limits_{i=1}^r (e_i+1)$, as you suggest.
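A sketch in Python that checks the formula against a direct enumeration (using the $555$ from the question):

```python
from sympy import factorint

def tau(n):
    """Number of divisors via the prime factorisation: product of (e_i + 1)."""
    count = 1
    for exponent in factorint(n).values():
        count *= exponent + 1
    return count

n = 555
by_enumeration = sum(1 for d in range(1, n + 1) if n % d == 0)
print(tau(n), by_enumeration)   # 8 8
```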
H: How to find the range of $f(x) = {e^x \over x-1}$ I want to find the range of the following function: $$f(x) = {e^x \over x-1}$$ How do I find the range of the above function? I have tried a lot, but do not have any idea how to solve this. AI: There is a discontinuity at $x=1$. Analyze both sides separately. Let's call $f_1$ the function over the subset of the domain $(1,\infty)$, and $f_2$ the function over the subset of the domain $(-\infty,1)$. $$\frac{d}{dx}\frac{e^x}{x-1}=\frac{e^x(x-2)}{(x-1)^2}$$ There is a local minimum at $x=2$: $$f_1(2)=e^2$$ $$\lim_{x\to{\infty}}f_1(x)=\infty$$ $$\lim_{x\to{1}^+}f_1(x)=\infty$$ The range of $f_1(x)$ is therefore $[e^2,\infty)$. Do the same for $f_2(x)$: $$\lim_{x\to-\infty}f_2(x)=0$$ $$\lim_{x\to{1}^-}f_2(x)=-\infty$$ The range of $f_2(x)$ is therefore $(-\infty,0)$. The range of the full function is the union of these two intervals.
H: Evaluate series of $\sum_{n=1}^\infty \frac{1}{n^2+n}$ I am trying to evaluate this sum. I know that $\sum\limits_{n=1}^\infty \dfrac{1}{n^2+n}$ is called a telescopic series: $$\sum_{n=1}^\infty \frac{1}{n(n+1)}$$ and I can rewrite the summand as $$\frac{1}{k}-\frac{1}{k+1}$$ I would like to get some hint on how to evaluate it. Thanks! AI: Well, $$\sum_{n=1}^N\frac{1}{n(n+1)}=1-\frac{1}{N+1}$$ by telescopy, since, as you state, $$\frac{1}{n(n+1)}=\frac 1n-\frac 1{n+1}$$ ADD A telescopic series is one of the form $$\sum x_n$$ where $x_n=y_{n+1}-y_n$ for some sequence. It follows that $$\sum_{n=1}^N x_n=\sum_{n=1}^Ny_{n+1}-\sum_{n=1}^N y_n\\=y_{N+1}+\underbrace{\sum_{n=1}^{N-1}y_{n+1}-\sum_{n=2}^N y_n}_{=0 \text{ Why? }}-y_1\\=y_{N+1}-y_1$$ Thus $$\sum x_n=\lim_{N\to\infty} y_{N+1}-y_1$$ and the series converges if and only if $\lim\limits_{n\to\infty} y_n$ exists.
H: Prove that for any invertible $n\times n$ matrix A, and any $b\in\mathbb{R}^n$, there exists a unique solution to $Ax=b$ I think I've got the two ideas needed to solve this, but it feels like they're not tied together properly. I'm not sure if I'm allowed to do something like this: Let $A$ be an invertible $n\times n$ matrix, and $b$ be an n-dimensional vector. \begin{align} Ax=b&\Longrightarrow A^{-1}Ax=A^{-1}b\\ &\Longrightarrow x=A^{-1}b \end{align} Therefore, there exists at least one solution to the equation $Ax=b$. Additionally, for the equation $Ay=b$: \begin{align} Ay=b&\Longrightarrow A^{-1}Ay=A^{-1}b\\ &\Longrightarrow y=A^{-1}b\\ &\Longrightarrow y=x \end{align} Therefore, for any two unique combinations of $A$ and $b$, there is a unique $x$ for $Ax=b$. The problem I feel exists with this is that I'm doing two separate proofs and referencing one in the other, when I feel like I can only do that if they're combined into one single proof. Am I mistaken? AI: A handy way to deal with uniqueness proofs is to assume by contradiction that there exist distinct solutions. Assume that $x_1$ and $x_2$ are distinct solutions to $Ax=b$. Then, $Ax_1 = b$ and $Ax_2 = b$. Since $A$ is invertible, we have $x_1 = A^{-1}b$ and $x_2 = A^{-1}b$. Thus, because $A^{-1}b = A^{-1}b$, we have by transitivity $x_1 = x_2$, but we assumed they are distinct. Therefore, the solution must be unique. This is essentially what you're trying to do, but it is not two different proofs. Instead, we leverage the power of transitivity and reflexivity of equality to show that distinct solutions cannot exist.
H: Expected Value for a Sequential Poisson Random Variable The set up: A mouse nest contains $n$ female mice. In a particular year, the number of female offspring that each female mouse produces has the following pmf: $$ f(x) = \begin{cases} \frac{\alpha}{x!} & x = 0, 1, 2, \ldots\\[8pt] 0 & \text{else} \end{cases} $$ for some fixed $\alpha$. Assume the offspring are able to breed in the next year and that all mice behave independently. Let $Y_1$ be the total number of female offspring produced in one year from $n$ mice. Assume that no female mice die. Then after a single year, there are $Y_1 +n$ female mice which can each produce offspring. Let $Y_k$ denote the number of female mice produced in the $k^{th}$ year. The problem: Inductively show that $E[Y_k] = 2^{k-1}n$. My approach First, we note that $f(x)$ is a Poisson($\lambda$) distribution where $\alpha = 1/e$ and $\lambda = 1$. Since each mouse is independent, $Y_1$ is the sum of independent Poisson random variables. Hence $Y_1 \sim Pois(n)$. So the basis step of the induction proof is done, since $E[Y_1] = n$. For the induction step, I start by assuming $E[Y_k] = 2^{k-1}n$, then I want to show $E[Y_{k+1}] = 2^{k}n$. Maybe there's a better way to tackle this problem. I've chased it around and haven't been able to catch up with it. Any ideas? AI: Hint: $E[Y_{k+1}] = E[E[Y_{k+1}\mid Y_k,Y_{k-1},\ldots,Y_1]]$.
H: What does the inverse Fourier transform of a constant non-zero function look like? Worded another way, what does it look like to have all frequencies present at the same amplitude? AI: It's a Dirac delta (details depending on which convention you use).
H: Find the limit of the sequence $x_n=\frac{1}{n^2}+\frac{2}{n^2}+\cdots+\frac{n}{n^2}$. Please help me; I don't know whether I should treat this sequence as a series. Thank you. AI: Since $$1+2+\cdots+n=\frac{n(n+1)}{2},$$ we have that $$x_n=\frac{n(n+1)}{2n^2}=\frac{1}{2}\cdot\frac{n+1}{n}.$$ Can you solve it from here?
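A quick numerical check (a one-off Python sketch; the sample values of $n$ are arbitrary):

```python
# x_n = (1 + 2 + ... + n) / n^2 should approach 1/2.
for n in (10, 100, 10000):
    print(n, sum(range(1, n + 1)) / n**2)
```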
H: Evaluation of $\sum_{n=1}^{\infty}\frac{2n-1}{2^n}$ I'm trying to evaluate this series and would like some advice on how to do that. What do I need to find here to start with? $$\sum_{n=1}^{\infty}\frac{2n-1}{2^n}= \frac{1}{2} + \frac{3}{4}+\frac{5}{8}+\dots$$ AI: Note that $$\dfrac{2n-1}{2^n} = \dfrac{n}{2^{n-1}} - \dfrac1{2^n}$$ Now make use of the following. For $\vert x \vert < 1$, we have $$\sum_{n=0}^{\infty} x^n = \dfrac1{1-x}$$ Differentiating both sides, we get that $$\sum_{n=0}^{\infty} nx^{n-1} = \dfrac1{(1-x)^2}$$ Take $x=1/2$ to get what you want.
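A numerical check of where this leads (a minimal Python sketch; the cutoffs are arbitrary):

```python
# Partial sums of (2n-1)/2^n; with x = 1/2 the two pieces give 4 - 1 = 3.
for N in (10, 30, 60):
    print(N, sum((2 * n - 1) / 2**n for n in range(1, N + 1)))
```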
H: Why does Totally bounded need Complete in order to imply Compact? Why does "Totally bounded" need "Complete" in order to imply "Compact"? Shouldn't the definition of totally bounded imply the existence of a convergent subsequence of every sequence? AI: No, total boundedness of $\langle X,d\rangle$ implies that every sequence in $X$ has a Cauchy subsequence. Completeness of $\langle X,d\rangle$ implies that every Cauchy sequence in $X$ actually has a limit point in $X$ and therefore converges. The two together therefore imply that every sequence in $X$ has a convergent subsequence, i.e., that $X$ is sequentially compact. Finally, there is a theorem that a metric space is sequentially compact if and only if it’s compact, so total boundedness plus completeness imply compactness.
H: The general idea of proving openness. I never really got the hang of proofs involving openness; here's an example: Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a local diffeomorphism. Prove that the image of $f$ is an open interval. So, is the general principle to show the following? Step n: An arbitrary $y \in f(U)$, the image of an open neighborhood, lies in an open neighborhood. Step n-1: We find a neighborhood of $y$ open in an open neighborhood, which here we choose to be $\mathbb{R}$. Step n-2: By the definition of a function, $\forall y \in f(U), \exists x \in U: f(x) = y.$ So we choose an open neighborhood $U_x$ of $x$. Step n-3: ..... . AI: No, you show that $f(U)$ is open for all open subsets $U \subset \mathbb{R}$. Then, since $f$ is continuous, you know that $f(\mathbb{R})$ is a connected open subset of $\mathbb{R}$, hence an open interval. To show that $f$ is an open mapping, consider an open $U \subset \mathbb{R}$ and any $x \in U$. By assumption, there is an open neighbourhood $V_x \subset U$ of $x$ on which $f$ is a diffeomorphism. That means $f\bigl\lvert_{V_x}$ is open, hence $f(V_x)$ is open. Now, $U = \bigcup\limits_{x \in U} V_x$, and therefore $f(U) = \bigcup\limits_{x \in U} f(V_x)$ is open.
H: Solve $\frac{dx}{dt} = x^3 + x$ for $x$ This is a seemingly simple first-order separable differential equation that I'm getting stuck on. This is what I have so far: $$\frac{dx}{dt} = x^3+x$$ separates to $$\frac{dx}{x(1+x^2)} = dt$$ Now using partial fractions to integrate the left-hand side: $$\frac{1}{x(1+x^2)} = \frac{A}{x} + \frac{Bx+C}{1+x^2}$$ Solving for A, B, C: $1 = A(1+x^2) + (Bx+C)x$, and using coefficient matching, I get $A=1, B=-1, C=0$. So the integral yields: $$\int\frac{1}{x} - \frac{x}{1+x^2}dx = \int dt$$ This yields: $$\ln x -\frac{1}{2}\ln(1+x^2) =t + C$$ So I tried using log rules and such to solve for $x$. I think this is where my source of error is. My attempt: $$\ln x - \ln(1+x^2)^\frac{1}{2} = t+C$$ $$\ln\frac{x}{(1+x^2)^{\frac{1}{2}}} = t + C$$ $$\frac{x}{(1+x^2)^{\frac{1}{2}}}= Ce^t$$ But this seems wrong. I apologize in advance if it's a silly mistake that I didn't see. AI: Assume that $x\ge 0$. $$\frac{x}{(1+x^2)^{\frac{1}{2}}}= \frac{\sqrt{x^2}}{(1+x^2)^{\frac{1}{2}}}=\sqrt{\frac{x^2}{1+x^2}}$$ and you can probably take it from there. If $x<0$, then $x=-\sqrt{x^2}$ and proceed similarly.
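For comparison, one can let a computer algebra system do the separation (a Python/SymPy sketch; `dsolve` may return both sign branches, and its integration constant need not match the hand-derived $C$):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

# dx/dt = x^3 + x, solved symbolically; compare with the hand computation.
sol = sp.dsolve(sp.Eq(x(t).diff(t), x(t)**3 + x(t)))
print(sol)
```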
H: Why is $n+\omega=\omega$ for finite $n$? I am trying to understand the following claim: $n+\omega=\omega$, where $n$ is the order type of a finite set and $\omega$ is the order type of $\left\{ 1,2,\dots \right\}$ with the usual meaning of $<$. My question is: how is it possible that, for instance, $\left\{1,2,\dots ,n,1,2,\dots \right\}$ has order type $\omega$ whilst the order type of $\left\{1,2,\dots\right\}$ is also $\omega$? Many thanks for any help. AI: Here’s a quick sketch of the matchup: 1 2 3 4 ... n n+1 n+2 n+3 n+4 n+5 n+6 ... 1 2 3 4 ... n 1 2 3 4 5 6 ... The actual function mapping the top line to the bottom line is: $$f:\Bbb Z^+\to\Bbb Z^+:k\mapsto\begin{cases} k,&\text{if }k\le n\\ k-n,&\text{if }k>n\;. \end{cases}$$ You almost had it in your comment; you just didn’t make sure that you matched up the two sets completely.
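The matchup can be made concrete in a few lines (an illustrative Python sketch; an element of $n+\omega$ is encoded here as a pair (copy, value), compared lexicographically):

```python
n = 3  # any finite n works

# Order isomorphism from omega = {1, 2, ...} onto n + omega:
def f(k):
    return (0, k) if k <= n else (1, k - n)

pairs = [f(k) for k in range(1, 10)]
print(pairs)  # (0,1), (0,2), (0,3), then (1,1), (1,2), ...
print(all(pairs[i] < pairs[i + 1] for i in range(len(pairs) - 1)))  # True: order-preserving
```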
H: Understanding the definition of the representation ring In Fulton, Harris, "Representation Theory: A First Course" there's the following paragraph which I don't really understand: The representation ring $R(G)$ of a group $G$ is easy to define. First, as a group we just take $R(G)$ to be the free abelian group generated by all (isomorphism classes of) representations of $G$, and mod out the subgroup generated by elements of the form $V+W-(V\oplus W)$. Equivalently, given the statement of complete reducibility, we can just take all integral linear combinations $\sum a_i\cdot V_i$ of the irreducible representations $V_i$ of $G$; elements of $R(G)$ are correspondingly called virtual representations. The ring structure is then given simply by tensor product, defined on the generators of $R(G)$ and extended by linearity. OK, let's start with the "free abelian group generated by … representations of $G$". Now from what I understand, a free group is generated from a set of groups by formally multiplying elements of different groups together. Here, the multiplication is obviously replaced by addition. However, here's my first problem: While representations are of course groups under element multiplication, they are in general not abelian; also, it wouldn't work well with the addition notation; I think something different is meant. But then, the only notion of addition for representations I can see is the direct sum (point-wise adding the matrix representations will not give another representation, and is not even well-defined if the dimension of the representations is different). But the direct sum doesn't give rise to a group structure because there's no additive inverse. Also the fact that it is explicitly used alongside the "group addition" in the "mod out the subgroup" part indicates that it is not what is meant. So either I have an incorrect understanding of what the "free abelian group" means, or I'm missing a way to define sums of representations, and especially the additive inverse $-V$ of a representation. So can someone please enlighten me? AI: "Now from what I understand, a free group is generated from a set of groups by formally multiplying elements of different groups together. Here, the multiplication is obviously replaced by addition." This is your first problem: that's really not the definition of a free group, even approximately. (Maybe you're thinking of free products instead.) A free group and a free abelian group are both associated to a perfectly naked set $S$. The free abelian group on $S$ is easier to understand: you can look it up here. One can consider the free abelian group on $S$ as the set of all finite formal combinations $\sum_{s \in S} a_s [s]$ with $a_s \in \mathbb{Z}$ and $a_s = 0$ for all but finitely many $s$. The group operation is $\sum_{s \in S} a_s [s] + \sum_{s \in S} b_s [s] = \sum_{s \in S} (a_s + b_s) [s]$. (If $S$ is finite, this is just the direct product of $\# S$ copies of the infinite cyclic group $\mathbb{Z}$.) Since this definition is new to you, you should probably soak it in for a while before trying to understand the definition of the representation ring $R(G)$.
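The "finite formal combinations" picture is easy to mirror in code (an illustrative Python sketch; the generator names are arbitrary): an element of the free abelian group on $S$ is just a finitely supported integer-valued function on $S$, and additive inverses come for free.

```python
from collections import Counter

# Elements are finite formal sums  sum a_s [s],  stored as {s: a_s} dicts.
def add(u, v):
    w = Counter(u)
    for s, a in v.items():
        w[s] += a
    return {s: a for s, a in w.items() if a != 0}  # drop zero coefficients

def neg(u):
    return {s: -a for s, a in u.items()}

x = {'V': 1, 'W': 1}        # the formal sum [V] + [W]
print(add(x, neg(x)))       # {} -- every element has an additive inverse
```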
H: Basic question on constructing a homomorphism between two cyclic groups We want to find a non-trivial homomorphism between $Z_3$ and $Z_{24}$. If we have $f:Z_3 \rightarrow Z_{24}$ then all I can really say from the properties of homomorphisms is that $f(0) = 0$, since the identity in $Z_3$ must be mapped to the identity in $Z_{24}$. Just from knowing that, how can I go about constructing the function? Thanks for the help in advance. AI: Since $0 = f(1 + 1 + 1) = f(1) + f(1) + f(1)$, we have $3f(1) = 0$. So the problem reduces to solving the linear congruence $$3x \equiv 0 \bmod 24$$ Clearly, $x_0 = 0$ is a particular solution and $\gcd(3, 24) = 3$. Therefore all solutions are given by $x = 0 + (24/3)t = 8t$ where $t \in \mathbb{Z}$. Hence, the only homomorphisms are $f(1) = 0$, $f(1) = 8$ and $f(1) = 16$.
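A brute-force confirmation (a minimal Python sketch): $f$ is determined by $c = f(1)$, and $c$ works exactly when $3c \equiv 0 \pmod{24}$.

```python
# Candidates for f(1) in Z_24 that give a well-defined homomorphism from Z_3.
homs = [c for c in range(24) if (3 * c) % 24 == 0]
print(homs)  # [0, 8, 16], matching the three homomorphisms above
```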
H: Distinguishing between the different eigenvalues Consider the symmetric matrix $$A=\begin{pmatrix} 2 & t & \cos t-1 \\ t & 2 & 0 \\ \cos t-1 & 0 & 2 \end{pmatrix}. $$ The (real) eigenvalues of $A$ can be found easily (one eigenvalue is $2$, and the quadratic formula handles the remaining quadratic factor): they are $$\lambda_1,\lambda_2,\lambda_3=2,\frac{1}{2} \left(4-\sqrt{2} \sqrt{2 t^2-4 \cos (t)+\cos (2 t)+3}\right),\frac{1}{2} \left(4+\sqrt{2} \sqrt{2 t^2-4 \cos (t)+\cos (2 t)+3}\right) $$ Plotting them as functions of the real parameter $t$, I obtain this graph. What bothers me is that we could have defined the eigenvalues differently as $$\lambda_1,\lambda_2,\lambda_3=2,\frac{1}{2} \left(4-\text{sgn } t\sqrt{2} \sqrt{2 t^2-4 \cos (t)+\cos (2 t)+3}\right),\frac{1}{2} \left(4+ \text{sgn } t\sqrt{2} \sqrt{2 t^2-4 \cos (t)+\cos (2 t)+3}\right), $$ so that the graph would look like this: This way, the functions $\lambda_1(t),\lambda_2(t),\lambda_3(t)$ are of class $C^\infty$ (smooth). I have a few questions regarding this: Given a symmetric matrix which depends (smoothly) on a real parameter $t$, is it possible to define its eigenvalues $\lambda_1(t), \dots, \lambda_n(t)$ in such a way that they will be smooth functions of $t$? Is there a natural way of finding these "smooth branches"? (The quadratic formula has failed me.) A little more generally, given the equation $f(x,t)=0$, under what conditions is there a choice for all solutions $x(t)$ so that they are as regular as $f$ is with respect to $t$? Thank you AI: See a related thread on MO. To quote the answer of Denis Serre, If the symmetric matrix depends analytically upon one parameter, then you can follow analytically its eigenvalues and its eigenvectors. Notice that this requires sometimes that the eigenvalues cross. When this happens, the largest eigenvalue, as the maximum of smooth functions, is only Lipschitz. This should answer your first question. The answers by the others are also informative.
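Numerically, the crossing at $t=0$ is easy to exhibit (an illustrative Python sketch; note that `np.linalg.eigvalsh` always returns the eigenvalues sorted, i.e., it follows the non-smooth ordering, while the signed closed form follows the smooth branches):

```python
import numpy as np

# Compare eigvalsh with the signed branches 2 -+ sgn(t) * r(t), where
# r(t) = sqrt(t^2 + (cos t - 1)^2) equals the square-root term above.
for t in np.linspace(-2, 2, 9):
    A = np.array([[2, t, np.cos(t) - 1],
                  [t, 2, 0],
                  [np.cos(t) - 1, 0, 2]])
    r = np.sqrt(t**2 + (np.cos(t) - 1)**2)
    smooth = [2.0, 2 - np.sign(t) * r, 2 + np.sign(t) * r]
    print(round(t, 2), np.round(np.linalg.eigvalsh(A), 4), np.round(sorted(smooth), 4))
```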