H: Limits defined for negative factorials (i.e. $(-n)!,\space n\in\mathbb{N}$) I apologize if this is a stupid/obvious question, but last night I was wondering how we can compute limits for factorials of negative integers, for instance, how do we evaluate: $$\lim_{x\to-3}\frac{x!}{(2x)!}=-120$$ Neither $x!$ nor $(2x)!$ is defined for $x\in\mathbb{Z}^{-}$, and indeed, both are singularities according to the graph of $\Gamma(x+1)$. The book I am reading calculates this using a previously shown identity: $$F\left(\left.{1-c-2n,-2n \atop c}\right|-1\right)=(-1)^{n}\frac{(2n)!}{n!}\frac{(c-1)!}{(c+n-1)!},\space\forall n\in\mathbb{Z}^{*}$$ and then the more general Kummer's formula: $$F\left(\left.{a,b \atop 1+b-a}\right|-1\right)=\frac{(b/2)!}{b!}(b-a)^{\underline{b/2}}$$ It then shows that they would only produce consistent results if: $$(-1)^{n}\frac{(2n)!}{n!}=\lim_{b\to-2n}{\frac{(b/2)!}{b!}}=\lim_{x\to-n}{\frac{x!}{(2x)!}},\space n\in\mathbb{Z}^{*}$$ It then gives the example of $n=3$, proving that: $$\lim_{x\to-3}{\frac{x!}{(2x)!}}=-\frac{6!}{3!}=-120$$ However, using Wolfram|Alpha, I can see that there are other such limits defined (such as $\lim_{x\to-3}{\frac{x!}{(8x)!}}=-103408066955539906560000$). Without using the hypergeometric series, how could we evaluate limits such as these? Again, sorry if this is a stupid question; thanks in advance! AI: You want to compute $\displaystyle \lim_{x\to -n} \frac {\Pi(x)}{\Pi(mx)}$ when $x$ is near a negative integer. 
$\Pi$ is the 'natural' extension of the factorial: $\Pi(n)=n!$ and $\Pi(z)=\Gamma(z+1)$ (see Wikipedia). In this form Euler's reflection formula becomes simply (with $\operatorname{sinc}(z)=\frac{\sin(\pi z)}{\pi z}$): $$\Pi(-z)\Pi(z)=\frac 1{\operatorname{sinc}(z)}$$ $$ \lim_{x\to -n}\ \frac {\Pi(x)}{\Pi(mx)}=\lim_{x\to -n}\frac {\Pi(-mx)\operatorname{sinc}(-mx)}{\Pi(-x)\operatorname{sinc}(-x)}$$ $$ =\lim_{t\to n}\frac {\Pi(mt)\operatorname{sinc}(mt)}{\Pi(t)\operatorname{sinc}(t)}$$ It remains to prove that $\lim_{t\to n} \frac {\operatorname{sinc}(mt)}{\operatorname{sinc}(t)}=(-1)^{(m-1)n}$ (writing $t=n+\delta$, we get $\operatorname{sinc}(mt)\approx(-1)^{mn}\frac{\delta}{n}$ and $\operatorname{sinc}(t)\approx(-1)^{n}\frac{\delta}{n}$ for small $\delta$; l'Hôpital's rule works too) and to conclude! This gives $\lim_{x\to-n}\frac{\Pi(x)}{\Pi(mx)}=(-1)^{(m-1)n}\frac{(mn)!}{n!}$, which for $m=8$, $n=3$ recovers the value $-\frac{24!}{3!}$ found by Wolfram|Alpha.
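The limit can also be checked numerically: since $x!=\Pi(x)=\Gamma(x+1)$, one can evaluate the ratio slightly off the pole and compare with the closed form $(-1)^{(m-1)n}(mn)!/n!$. A minimal sketch (the helper names `limit_ratio` and `closed_form` are my own):

```python
import math

def limit_ratio(n, m, eps=1e-7):
    # Approximate lim_{x -> -n} x!/(mx)! by evaluating
    # Gamma(x+1)/Gamma(m*x+1) slightly off the pole at x = -n.
    x = -n + eps
    return math.gamma(x + 1) / math.gamma(m * x + 1)

def closed_form(n, m):
    # (-1)^((m-1)n) * (mn)! / n!, the value predicted by the sinc argument.
    return (-1) ** ((m - 1) * n) * math.factorial(m * n) // math.factorial(n)

print(limit_ratio(3, 2), closed_form(3, 2))  # both approximately -120
```

The same check with `n=3, m=8` reproduces the Wolfram|Alpha value quoted in the question.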
H: Smallest genus example of a non planar curve A curve is a smooth projective connected curve over an algebraically closed field. Every curve of genus 2 is planar. Also, every curve of genus 3 is planar. But what about curves of genus 4? What is the dimension of the subvariety defined by planar curves in the moduli space of genus $g$ curves? AI: You seem to be under some misconception here: no smooth curve of genus $2$ is planar, actually! Indeed, a plane curve has a degree $d$ and if it is smooth its genus is then $g=\frac {(d-1)(d-2)}{2}$. So actually most smooth curves are non-planar because most integers are not of the form $\frac {(d-1)(d-2)}{2}$. The smallest example is, as I said, $g=2$ but also smooth curves of genus $4,5,7,8,9,\ldots $ are all non-planar (this answers your question about $g=4$). Also: all smooth plane curves of degree $4$ have indeed genus $3$, but some curves of genus $3$ are not planar, namely the hyperelliptic ones.
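The genus formula $g=\frac{(d-1)(d-2)}{2}$ in the answer makes the gaps easy to list; a quick sketch (variable names are mine):

```python
from math import comb

# Genera realized by smooth plane curves of degree d: g = (d-1)(d-2)/2 = C(d-1, 2)
planar_genera = [comb(d - 1, 2) for d in range(1, 11)]
print(planar_genera)  # [0, 0, 1, 3, 6, 10, 15, 21, 28, 36]

# Small genera NOT of this form, hence with no smooth plane model
missing = [g for g in range(10) if g not in planar_genera]
print(missing)  # [2, 4, 5, 7, 8, 9]
```

This matches the answer's list: $g=2$ is the smallest non-planar genus, followed by $4,5,7,8,9,\ldots$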
H: A linearly independent, countable dense subset of $l^2(\mathbb{N})$ Possible Duplicate: Does there exist a linear independent and dense subset? I am looking for an example of a countable dense subset of the Hilbert space $l^2(\mathbb{N})$ consisting of linearly independent vectors. AI: Based on Davide's answer. Begin with the set $\{x_n\}$ of vectors with only finitely many nonzero coordinates, all rational. That is dense, but not linearly independent. Next choose a sequence (say $2^{-n}, n=1,2,\dots$) that goes to zero. Add $2^{-n}$ to coordinate $r_n$ of $x_n$, chosen so that $r_n > r_{n-1}$ and the $r_n$ coordinate of all $x_k, 1 \le k \le n$ is zero. This new sequence is still dense, but also linearly independent.
H: How to find the identity element in $(\mathbb Z_{40}, \odot)$ Let $R = \mathbb Z_{40}$ and let $\odot$ be defined on $R$ as follows: $$\begin{aligned} a \odot b = a + 25b-10 \end{aligned}$$ I need to check if this structure has an identity element, so: $$\begin{aligned} a \odot e = a \end{aligned}$$ $$\begin{aligned} a +25e-10 = a \end{aligned}$$ $$\begin{aligned} 25e-10=0 \end{aligned}$$ How can I check for which values $0 \leq e \leq 39$ that equation is true? Is there an algorithm I can use? Edit: first of all, let me apologize, because it turns out I am unfamiliar with a lot of the concepts you've put in your answers; honestly, I didn't quite understand everything, so I've taken out what I know I can handle (hopefully correctly) and I've dealt with it my way. So, almost all of you suggested I had to solve the equation: $$\begin{aligned} 25x \equiv_{40} 10 \end{aligned}$$ so, as shown in the class I attended, I used the Euclidean method: I checked that $\gcd(25,40) \mid 10$ and after a simplification I got $$\begin{aligned} 5x \equiv_{8} 2 \end{aligned}$$ as a few of you pointed out. The solutions of the previous equation come from the solutions to $5x \equiv_{8} 1 $, so I've determined $h,k \in \mathbb Z : 1 = 5h+8k$. After a few calculations I've found that $h = -3$ and $k = 2$, so the solutions to my original equation are all in $$\begin{aligned} \{2+8k : k \in \mathbb Z\} = 2+8\mathbb Z \end{aligned}$$ What I don't get is: how do I correctly get to the set of identity elements, which is $\{2, 10, 18, 26, 34\}$, given the solution to the equation I've found? Is it enough (and correct) to count in steps of $8$ starting from $k = 0$ while $2+8k \leq 39$? 
AI: Hint $\rm\ \ 25\, e = 10+40\,k\iff 5\,e=2+8\,k\iff mod\ 8\!:\ e\,\equiv\,\dfrac{2}5\,\equiv\, \dfrac{10}5\,\equiv\, 2$ Thus $\rm\: e = 2 + \color{#0A0}8\,j.\:$ To get $\rm\:e\,$ mod $40 = \color{#0A0}8\cdot\color{#C00}{5},\:$ write $\rm\:j = \color{#C00}5\,q + r,\,$ for $\rm\,0\le r < \color{#C00}5.\,$ Hence $$\rm e\, =\, 2+8\,(r+5\,q)\, =\, 2 + 8\,r + 40\,q\, =\, \{2,10,18,26,34\} + 40\,q$$ See any decent textbook on elementary number theory for general methods to solve such linear diophantine equations, e.g. via the extended Euclidean algorithm and the Bézout identity for gcd. This topic is also discussed in numerous prior questions here (search using said buzzwords).
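Since $\mathbb Z_{40}$ is finite, the candidate identities can also be found by brute force; note that $\odot$ is not commutative, so these are right identities (the question only imposes $a \odot e = a$). A quick sketch, with names of my own choosing:

```python
# Solve 25*e ≡ 10 (mod 40): the e with a + 25e - 10 ≡ a (mod 40) for all a.
identities = [e for e in range(40) if (25 * e - 10) % 40 == 0]
print(identities)  # [2, 10, 18, 26, 34]

# sanity check: each such e really satisfies a ⊙ e = a for every a in Z_40
assert all((a + 25 * e - 10) % 40 == a for a in range(40) for e in identities)
```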
H: linear algebra: linear transformation, eigenvectors and eigenvalues I would be very thankful if someone could help me with this question; I know how to do the first bit, but the last two questions confuse me a little. Thanks in advance. $M = \left( \begin{smallmatrix} 8&40&-30\\ 25&98&-75\\ 35&140&-107 \end{smallmatrix} \right)$ Let $v = [1, 2, 3]^T$. Show that $v$ is an eigenvector for the matrix $M$ and determine the associated eigenvalue, say $\lambda$. Determine the dimension of the eigenspace, $E_\lambda$, of $M$. It is known that 3 is also an eigenvalue of $M$. Quoting any general result you need, determine whether or not $M$ is diagonalisable. AI: $$Mv =\begin{pmatrix} 8&40&-30\\ 25&98&-75\\ 35&140&-107 \end{pmatrix}\begin{pmatrix} 1\\2\\3\end{pmatrix}=\begin{pmatrix} -2\\-4\\-6\end{pmatrix}=(-2)\begin{pmatrix}1\\2\\3\end{pmatrix},$$ so $v$ is an eigenvector with eigenvalue $\lambda=-2$. Also $$M-(-2)I=\begin{pmatrix} 10&40&-30\\ 25&100&-75\\ 35&140&-105 \end{pmatrix},$$ whose three rows are $10$, $25$ and $35$ times the single row $(1,4,-3)$, so $M+2I$ has rank $1$ and $\,\dim\ker(M+2I)=\dim E_{-2}=2\,$. Since $\operatorname{tr} M=-1=(-2)+(-2)+3$, the eigenvalue $\,-2\,$ has algebraic multiplicity $\,2\,$, equal to its geometric multiplicity; together with the simple eigenvalue $3$ this yields three linearly independent eigenvectors, so the matrix is diagonalisable.
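The eigenvalue and the rank of $M+2I$ are easy to cross-check numerically (a sketch of my own using numpy; rank $1$ means the eigenspace of $-2$ is $2$-dimensional, and rank $2$ for $M-3I$ confirms $3$ is a simple eigenvalue):

```python
import numpy as np

# The matrix and candidate eigenvector from the question
M = np.array([[8, 40, -30],
              [25, 98, -75],
              [35, 140, -107]], dtype=float)
v = np.array([1.0, 2.0, 3.0])

print(M @ v)  # equals -2*v, so lambda = -2
rank_m2 = np.linalg.matrix_rank(M + 2 * np.eye(3))  # dim E_{-2} = 3 - rank
rank_3 = np.linalg.matrix_rank(M - 3 * np.eye(3))   # dim E_{3}  = 3 - rank
print(rank_m2, rank_3)
```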
H: Is there a way to define the "size" of an infinite set that takes into account "intuitive" differences between sets? The usual way to define the "size" of an infinite set is through cardinality, so that e.g. the sets $\{1, 2, 3, 4, \ldots\}$ and $\{0, 1, 2, 3, 4, \ldots\}$ have the same cardinality. However, is this the only way to define a useful "size" of an infinite set? Could one conceivably define a "size" where the two example sets have different sizes? AI: The question is what you want to capture in the notion of cardinality, and what your setting is. For example, if you only care about subsets of the natural numbers you can say that $A$ has cardinality larger than $B$ if either $A$ is strictly larger than $B$ or the complement of $A$ is strictly smaller. In this sense, $|\{0,1,2,\ldots\}|$ is larger than $|\{1,2,3,4\ldots\}|$, as the former is everything while the latter has a non-empty complement (namely, $\{0\}$). This notion is not a linear order, which may seem a bit bad, but then again, without the axiom of choice cardinalities are not linearly ordered anyway. Another example is if you fix for every set $A$ a linear ordering $\leq_A$. Now we say that $|A|\leq|B|$ if and only if $(A,\leq_A)$ is embedded into $(B,\leq_B)$. This notion of size is also nice and has the benefit of $\mathbb N$ being strictly smaller than $\mathbb Z$ and both smaller than $\mathbb Q$ (as order types, of course). On the other hand, this notion of cardinality lacks the Cantor-Bernstein property, namely anti-symmetry: the order type of $\mathbb Q\cap[0,1]$ and that of $\mathbb Q\cap(0,1)$ are not equal, but either one embeds into the other. This is also a bit bad, but there is a natural ordering on cardinals which also lacks this property without the axiom of choice: $|A|\leq^\ast|B|$ if and only if $A=\varnothing$ or there exists a surjection $f\colon B\to A$. 
Without the axiom of choice it is consistent that there are two sets which can be mapped onto one another, but not bijectively. The above can be generalized to any form of structure. Simply fix for every set in the universe some sort of structure and consider the embedding as a natural order. However in many cases you do lose something in the sense that this is no longer acting as we would expect from cardinality. Or perhaps we are expecting the wrong things... Further reading: Cardinality != Density? Comparing the sizes of countable infinite sets
H: From Presheaf to Sheaf In Hartshorne's Algebraic Geometry it is written that "A sheaf is roughly speaking a presheaf whose sections (i.e. elements of $\mathcal{F}(U)$ for open subset $U$) are determined by local data". What does it mean? What is the local data? After this remark Robin Hartshorne gives a definition of a sheaf ("...a sheaf is a presheaf satisfying a certain extra condition...") which I understood but did not feel. Thanks a lot! AI: For a presheaf, given two sections $s,t\in\mathcal{F}(U)$, you can have that they agree in every single neighbourhood, yet be different. That is, if $U_i$ is an open cover of $U$ and $s|_{U_i}=t|_{U_i}$ for each $i$, then, if you have a presheaf, it is possible that $s\neq t$, even if this condition held for every single open cover of $U$. In a sheaf, the local data (being the sets in the cover) actually determines the section uniquely. The other half of being a sheaf says that your sections may be glued together. That is, suppose you have sections $s_i$ over open sets $U_i$, such that the restrictions of $s_i$ and $s_j$ to $U_i\cap U_j$ agree for all $i,j$. We would like to be able to glue these sections together, which we can do only if we have a sheaf. If you have a sheaf, then you are guaranteed that all of your sections $s_i$ are just the restrictions of some section of $\mathcal F(\bigcup_i U_i)$. If you think of sections as functions (which you should), then the sheaf axioms just say that you can glue compatible functions together, and any function is determined uniquely by what it does on the open sets of your space. A corollary of this is that when you have a sheaf, it is enough to define its behaviour on an open cover of your space, and this will uniquely determine your sheaf's value on any open subset. For a presheaf, giving its values on an open cover isn't enough to pin down your presheaf uniquely. 
The Wikipedia article on constant sheaves has an example of a presheaf on the set with two elements (the constant presheaf) and demonstrates exactly why both axioms fail in each case, and then gives a construction of the sheafification.
H: Formal expression of "$\tilde{z}$ is the nearest to $z$" $h = \operatorname{distance}(z, \tilde{z})$, where $\tilde{z}$ is the element that is nearest to $z$ (that is, $\operatorname{distance}(z, \tilde{z})$ is smaller than $\operatorname{distance}(z, w)$ for any other element $w$). Is it possible to express this formally, instead of saying "where $\tilde{z}$ is the element that is nearest to $z$..."? AI: There are two possible answers to your question. If you just want an expression for $\tilde{z}$, you can use $\operatorname{argmin}$, like this: $h = \operatorname{distance}(z,\operatorname{argmin}_{\tilde{z} \neq z} \operatorname{distance}(z,\tilde{z}))$. Formally, $\operatorname{argmin}_{x \in S} f(x)$ is defined as any value $x \in S$ such that $f(x)$ is minimal. Note that your expression is identical to $$\min \{ \operatorname{distance}(z,\tilde{z}) \mid \tilde{z} \ne z \}.$$ This is probably the best solution.
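In code the $\min$ formulation is direct; a tiny sketch with hypothetical names, using absolute difference as the distance:

```python
def nearest_distance(z, candidates):
    # h = min { distance(z, z~) : z~ != z }, here with distance(a, b) = |a - b|
    return min(abs(z - w) for w in candidates if w != z)

print(nearest_distance(5, [1, 7, 10]))  # 2, attained by the element 7
```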
H: Proving $a_{1}=a_{2}=\cdots=a_{n}$ when a relation is satisfied Suppose $k_1,k_2,k_3,\ldots,k_n$ are non-negative integers such that the sum $k_1+k_2+\cdots+k_n$ is an odd number. Let $a_1,a_2,\ldots,a_n$ be arbitrary numbers satisfying: $$\frac {|a_1-a_2|}{k_1}=\frac {|a_2-a_3|}{k_2}=\cdots=\frac {|a_{n-1}-a_n|}{k_{n-1}}=\frac {|a_n-a_1|}{k_n}$$ How can one prove that $a_1=a_2=\cdots=a_n$? AI: Let us assume that arbitrary numbers means arbitrary real numbers and that the conclusion is false. Then consecutive $a_i$ are distinct, hence there exist signs $s_i=\pm1$ and a nonzero real number $c$ such that $s_i\cdot k_i=c\cdot(a_i-a_{i+1})$ for every $1\leqslant i\leqslant n$, with the convention that $a_{n+1}=a_1$. Summing these relations, one gets $\sum\limits_{i=1}^ns_ik_i=0$. For every $i$, $k_i=-k_i\pmod{2}$, hence this implies that $\sum\limits_{i=1}^nk_i=0\pmod{2}$, which contradicts the assumption that $\sum\limits_{i=1}^nk_i$ is odd.
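The key step, that $\sum s_ik_i$ can never vanish when $\sum k_i$ is odd, can be checked exhaustively for a small example; a sketch (the concrete $k$ is my own choice):

```python
from itertools import product

k = [1, 2, 4]  # k_1 + k_2 + k_3 = 7, an odd number
signed_sums = [sum(s * ki for s, ki in zip(signs, k))
               for signs in product([1, -1], repeat=len(k))]
# every signed sum has the same parity as sum(k), hence is odd and nonzero
print(signed_sums)
```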
H: Truth Table for If P then Q Possible Duplicate: In classical logic, why is (p -> q) True if both p and q are False? The truth table for "If P then Q" is as follows:
P   Q   If P then Q
T   T   T
T   F   F
F   T   T
F   F   T
What I don't understand is, how can there be a truth table for this? As far as I understand, "If P then Q" means "if P is true, Q has to be true. In any other case, I don't know." So, from what I understand, the first two rows of the truth table state that "if P is true and Q is true, the outcome is correct (T), and if P is true and Q is false, the outcome is incorrect (F)". What about the last two rows? AI: Think of the truth table as describing when the statement "If $P$ then $Q$" is true. If $P$ is false, then the statement "If $P$ then $Q$" doesn't claim anything, so how could it be false? Since it doesn't claim anything, we make the convention that "If $P$ then $Q$" should be true. One could argue that if "If $P$ then $Q$" doesn't claim anything, then how could it be true either? Well, we accept a basic axiom of logic that tells us that every statement is either true or false, so we have to pick one. In mathematics, we find it more useful to take it to be true, but this is not necessary. Often in philosophy one takes the opposite convention. This may be confusing as far as notation goes, but it does not actually cause any problems.
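The whole table can be generated mechanically, since "if $P$ then $Q$" has the same truth table as "not $P$ or $Q$"; a small sketch:

```python
rows = []
for p in (True, False):
    for q in (True, False):
        implies = (not p) or q  # material implication "if p then q"
        rows.append((p, q, implies))
        print(p, q, implies)
```

The four printed rows reproduce the T T T / T F F / F T T / F F T pattern above.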
H: The Dynamics of Contrapositive Proofs The Wikipedia link for contrapositive proofs states that proving "if p then q" is the same as proving "if not q then not p". I don't completely follow why. Is there any way to understand what's happening without involving truth tables? If truth tables are inevitable, can anyone give me a basic explanation? I can't seem to follow how "if p then q" gets simplified to "not p or q". I understand that the truth tables of the two are the same. But that's that. AI: The sentence "if $p$ then $q$" asserts that in every single situation (model) in which $p$ is true, $q$ must be true. The only way this can be incorrect is if we have a situation in which $p$ is true but $q$ is false. We can show this cannot be the case by showing that whenever $q$ fails, $p$ must fail, that is, by showing that if "not $q$" then "not $p$." Conversely, suppose that in every situation in which $p$ is true, $q$ must be true. Then there cannot be a situation in which $q$ fails and $p$ is true. So if "not $q$" holds, then "not $p$" must hold. Thus the two assertions "if $p$ then $q$" and "if not $q$ then not $p$" hold in precisely the same cases. Remark: The answer above is much too abstract. To understand what's going on, it is useful to examine a number of concrete cases. Suppose that we want to show that if $x^2=2$ then $x$ is not an ordinary fraction (integer divided by integer). So $p$ is the assertion $x^2=2$, and $q$ is the assertion that $x$ is not a fraction. We prove the result by showing that if $x$ is a fraction, then $x^2$ cannot be equal to $2$, that is, by showing that "not $q$" implies "not $p$." That shows that if $x^2=2$, then $x$ could not possibly be a fraction, which is exactly what we wanted.
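The equivalence of $p \to q$ and $\neg q \to \neg p$ can be checked exhaustively over the four truth assignments; a minimal sketch:

```python
def implies(a, b):
    # material implication: "if a then b" is false only when a is true and b false
    return (not a) or b

# p -> q and (not q) -> (not p) agree on all four assignments
agree = all(implies(p, q) == implies(not q, not p)
            for p in (True, False) for q in (True, False))
print(agree)  # True
```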
H: If $p$ is a prime and $x,y \in \mathbb{Z}$, then $(x+y)^p \equiv x^p+y^p \pmod{p}$ I want to prove that if $p$ is a prime and $x,y \in \mathbb{Z}$, then $$(x+y)^p \equiv x^p+y^p \pmod{p}$$ So far I know that $$(x+y)^p = \sum_{k=0}^{p} \dbinom{p}k x^{p-k} y^k$$ Part of the above sum is supposed to cancel, I think, but I can't figure out how to make it cancel. AI: HINT: If $p$ is a prime, then $p$ divides $\dbinom{p}{k}$ for all $k \in \{1,2,\ldots,p-1\}$. The identity you have, i.e. $(x+y)^p \equiv x^p + y^p$, is referred to as the Freshman's dream. Note that $$\dbinom{p}{k} = \dfrac{p \times (p-1) \times (p-2) \times \cdots \times (p-k+2) \times (p-k+1)}{k \times (k-1) \times (k-2) \times \cdots \cdots \times 2 \times 1}$$ is an integer. Since $k \in \{1,2,\ldots,p-1\}$, and $p$ is a prime, none of $k, k-1, k-2, \ldots, 2$ divides $p$. Hence, we can factor $p$ out of the numerator to get that $$\dbinom{p}{k} = \dfrac{p \times (p-1) \times (p-2) \times \cdots \times (p-k+2) \times (p-k+1)}{k \times (k-1) \times (k-2) \times \cdots \cdots \times 2 \times 1} = p \times M$$ where $M \in \mathbb{Z}$.
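Both the hint and the resulting congruence are easy to test for a concrete prime; a sketch (the choice $p=7$ and the test ranges are mine):

```python
from math import comb

p = 7  # any prime works here

# the hint: p divides C(p, k) for 1 <= k <= p-1 ...
divisible = all(comb(p, k) % p == 0 for k in range(1, p))

# ... hence (x + y)^p ≡ x^p + y^p (mod p) for all integers x, y
holds = all(((x + y) ** p - (x ** p + y ** p)) % p == 0
            for x in range(-5, 6) for y in range(-5, 6))
print(divisible, holds)  # True True
```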
H: Fourier Transforms and the Laplacian I need your help with the following question: "Use Fourier transforms to prove that the domain of $H_0 := - \bar{\Delta} $ on $ L^2 (\mathbb{R}^3)$ consists entirely of continuous bounded functions. If $V$ is a non-negative potential which is not $L^2 $ when restricted to any non-empty bounded open subset of $\mathbb{R}^3$, prove that $Dom(H_0) \cap Dom(V) = \{0 \}$." Help with either of the two parts will be gratefully acknowledged! Thanks in advance! AI: $-\bar{\Delta}$ stands for the operator closure of $(-\Delta, C^{\infty}_o(\mathbb{R}^3))$, I suppose. If this is the case then you should: (1) apply a Fourier transform to diagonalize $-\Delta$, so that it becomes a multiplication operator in Fourier space; (2) ascertain that this multiplication operator is essentially self-adjoint and determine its domain of self-adjointness; (3) by means of an inverse Fourier transform, deduce from this that the domain of $-\bar{\Delta}$ is the Sobolev space $H^2(\mathbb{R}^3)$; (4) apply the Sobolev imbedding theorem to conclude that this space is imbedded into a space of bounded and continuous functions (Hölder continuous, actually). P.S.: I had not seen the second part of the question. For this you need to show that, if $\psi \in H^2(\mathbb{R}^3)$ is such that $V(x)\psi(x)\in L^2(\mathbb{R}^3)$, then $\psi\equiv 0$. Again, use the fact that $\psi$ is continuous and argue by contradiction: if $\psi \ne 0$ then there exists a bounded open subset $\Omega$ of $\mathbb{R}^3$ such that $\lvert \psi(x)\rvert \ge m>0$ for every $x \in\Omega$. From the fact that $V(x)\psi(x)\in L^2$ you then infer $V\in L^2(\Omega)$, a contradiction.
H: Pullbacks of categories Let $\mathfrak{Cat}$ be the 2-category of small categories, functors, and natural transformations. Consider the following diagram in $\mathfrak{Cat}$: $$\mathbb{D} \stackrel{F}{\longrightarrow} \mathbb{C} \stackrel{G}{\longleftarrow} \mathbb{E}$$ There are several notions of pullback one could investigate in $\mathfrak{Cat}$: The ordinary pullback in the underlying 1-category $\textbf{Cat}$: these exist and are unique, by ordinary abstract nonsense. Explicitly, $\mathbb{D} \mathbin{\stackrel{1}{\times}_\mathbb{C}} \mathbb{E}$ has objects pairs $(d, e)$ such that $F d = G e$ (evil!) and arrows are pairs $(k, l)$ such that $F k = G l$. This is evidently an evil notion: it is not stable under equivalence. For example, take $\mathbb{C} = \mathbb{1}$: then we get an ordinary product; but if $\mathbb{C}$ is the interval category $\mathbb{I}$, we have $\mathbb{1} \simeq \mathbb{I}$, yet if I choose $F$ and $G$ so that their images are disjoint, we have $\mathbb{D} \mathbin{\stackrel{1}{\times}_\mathbb{C}} \mathbb{E} = \emptyset$, and $\emptyset \not\simeq \mathbb{D} \times \mathbb{E}$ in general. 
The strict 2-pullback is a category $\mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E}$ and two functors $P : \mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E} \to \mathbb{D}$, $Q : \mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E} \to \mathbb{E}$ such that $F P = G Q$, with the following universal property (if I'm not mistaken): for all $K : \mathbb{T} \to \mathbb{D}$ and $L : \mathbb{T} \to \mathbb{E}$ such that $F K = G L$, there is a functor $H : \mathbb{T} \to \mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E}$ such that $P H = K$ and $Q H = L$, and $H$ is unique up to equality; if $K' : \mathbb{T} \to \mathbb{D}$ and $L' : \mathbb{T} \to \mathbb{E}$ are two further functors such that $F K' = G L'$ and $H' : \mathbb{T} \to \mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E}$ satisfies $P H' = K'$ and $Q H' = L'$ and there are natural transformations $\beta : K \Rightarrow K'$ and $\gamma : L \Rightarrow L'$, then there is a unique natural transformation $\alpha : H \Rightarrow H'$ such that $P \alpha = \beta$ and $Q \alpha = \gamma$. So $\mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E} = \mathbb{D} \mathbin{\stackrel{1}{\times}_\mathbb{C}} \mathbb{E}$ works, and in particular, strict 2-pullbacks are evil. 
The pseudo 2-pullback is a category $\mathbb{D} \times_\mathbb{C} \mathbb{E}$, three functors $P : \mathbb{D} \times_\mathbb{C} \mathbb{E} \to \mathbb{D}$, $Q : \mathbb{D} \times_\mathbb{C} \mathbb{E} \to \mathbb{E}$, $R : \mathbb{D} \times_\mathbb{C} \mathbb{E} \to \mathbb{C}$, and two natural isomorphisms $\phi : F P \Rightarrow R$, $\psi : G Q \Rightarrow R$, satisfying the following universal property: for all functors $K : \mathbb{T} \to \mathbb{D}$, $L : \mathbb{T} \to \mathbb{E}$, $M : \mathbb{T} \to \mathbb{C}$, and natural isomorphisms $\theta : F K \Rightarrow M$, $\chi : G L \Rightarrow M$, there is a unique functor $H : \mathbb{T} \to \mathbb{D} \times_\mathbb{C} \mathbb{E}$ and natural isomorphisms $\tau : K \Rightarrow P H$, $\sigma : L \Rightarrow Q H$, $\rho : M \Rightarrow R H$ such that $\phi H \bullet F \tau = \rho \bullet \theta$ and $\psi H \bullet G \sigma = \rho \bullet \chi$ (plus some coherence axioms I haven't understood); and some further universal property for natural transformations. By considering the cases $\mathbb{T} = \mathbb{1}$ and $\mathbb{T} = \mathbb{2}$, it seems that $\mathbb{D} \times_\mathbb{C} \mathbb{E}$ can be taken to be the following category: its objects are quintuples $(c, d, e, f, g)$ where $f : F d \to c$ and $g : G e \to c$ are isomorphisms, and its morphisms are triples $(k, l, m)$ where $k : d \to d'$, $l : e \to e'$, $m : c \to c'$ make the evident diagram in $\mathbb{C}$ commute. The functors $P, Q, R$ are the obvious projections, and the natural transformations $\phi$ and $\psi$ are also given by projections. Question. This seems to satisfy the required universal properties. Is my construction correct? Question. What are the properties of this construction? 
Is it stable under equivalences, in the sense that $\mathbb{D}' \times_{\mathbb{C}'} \mathbb{E}' \simeq \mathbb{D} \times_\mathbb{C} \mathbb{E}$ when there is an equivalence between $\mathbb{D}' \stackrel{F'}{\longrightarrow} \mathbb{C}' \stackrel{G'}{\longleftarrow} \mathbb{E}'$ and $\mathbb{D} \stackrel{F}{\longrightarrow} \mathbb{C} \stackrel{G}{\longleftarrow} \mathbb{E}$? Finally, there is the non-strict 2-pullback, which as I understand it has the same universal property as the pseudo 2-pullback but with "unique functor" replaced by "functor unique up to isomorphism". Question. Is this correct? General question. Where can I find a good explanation of strict 2-limits / pseudo 2-limits / bilimits and their relationships, with explicit constructions for concrete 2-categories such as $\mathfrak{Cat}$? So far I have only found definitions without examples. (Is there a textbook yet...?) AI: Zhen, I am not sure I understand your question. Every strict 2-limit is obviously also a 1-limit in the underlying 1-category, so these are not really different concepts (a 2-limit is a strengthened version of a limit; BTW, since Cat is 2-complete, every 1-limit in Cat is automatically a 2-limit). Your construction of a 2-pseudo pullback is fine. However, it is easy to verify that it is not stable under equivalence of categories ([Added] in the sense that: $C$ is a limit of $F$ and $D$ is equivalent to $C$ does not imply that $D$ is a limit of $F$). All of the mentioned limits are defined in terms of "strict" adjunctions (or, more accurately, in terms of strict universal properties), i.e. there is a natural isomorphism: $$\Delta(C) \rightarrow F \approx C \rightarrow \mathit{lim}(F)$$ To obtain a concept that is stable under equivalences, you have to replace this natural isomorphism by a natural equivalence of categories (plus perhaps some additional coherence equations). 
Of course, every strict 2-limit is also a "weak" 2-limit in the above sense (because every isomorphism is an equivalence), so again in a complete 2-category you will not get anything new. [Added] 3. Let $\mathbb{W}$ be a 2-category, and $X$ a 1-category. There are three types of 2-categorical cones in $\mathbb{W}$ of the shape of $X$: $\mathit{Cone}$ --- objects are strict functors $X \rightarrow \mathbb{W}$, 1-morphisms are strict natural transformations between functors, and 2-morphisms are modifications between natural transformations; $\mathit{PseudoCone}$ --- objects are pseudo functors $X \rightarrow \mathbb{W}$, 1-morphisms are pseudo natural transformations between functors, and 2-morphisms are modifications between natural transformations; $\mathit{LaxCone}$ --- objects are lax functors $X \rightarrow \mathbb{W}$, 1-morphisms are lax natural transformations between functors, and 2-morphisms are modifications between natural transformations. A limit of a strict functor $F \colon X \rightarrow \mathbb{W}$ is a 2-representation of: $$\mathit{Cone}(\Delta(-), F)$$ where $\Delta$ is the usual diagonal functor. A pseudolimit of $F$ is a representation of: $$\mathit{PseudoCone}(\Delta(-), F)$$ And a lax limit is a representation of: $$\mathit{LaxCone}(\Delta(-), F)$$ In each case if you take equivalent functors, then you get equivalent representations. However, in each case the notion of equivalent functors is different. Perhaps your problem is that you are using the equivalence from $\mathit{PseudoCone}$ in the context of $\mathit{Cone}$. [Added^2] I have missed one of your questions: Finally, there is the non-strict 2-pullback, which as I understand it has the same universal property as the pseudo 2-pullback but with "unique functor" replaced by "functor unique up to isomorphism". 
If by a non-strict pullback you mean a weak (pseudo)pullback in the above sense, then the universal property is much more subtle --- it does not suffice to say that there is a functor $f \colon X \rightarrow \mathit{Lim}(F)$ that is unique up to a 2-isomorphism (just like in the definition of a limit you do not say that there is an object which is unique up to 1-isomorphism); you have to say that for every cone $\alpha \colon \Delta(X) \rightarrow F$ there exists $f \colon X \rightarrow \mathit{Lim}(F)$ such that for any cone $\beta$ on $X$ with its $g \colon X \rightarrow \mathit{Lim}(F)$ and every family of 2-morphisms $\tau \colon \alpha \rightarrow \beta$ that is compatible with $F$, there exists a unique 2-morphism $f \rightarrow g$ such that everything commutes. However, if by a non-strict pullback you mean a lax pullback, then the construction is similar to your construction of a pseudopullback --- without the requirement that your $f$ and $g$ are isomorphisms. You have also asked: Where can I find a good explanation of strict 2-limits / pseudo 2-limits / bilimits and their relationships, with explicit constructions for concrete 2-categories such as $\mathfrak{Cat}$? So far I have only found definitions without examples. (Is there a textbook yet...?) I do not know of any good textbook, but can provide you with two examples. There is a simple general procedure to construct strict/pseudo/lax limits and colimits in $\mathbf{Cat}$. You shall notice that to give a monad is to give a lax functor $T \colon 1 = 1^{op} \rightarrow \mathbf{Cat}$. Then the lax colimit of $T$ is the Kleisli category for the monad $T$, and the lax limit of $T$ is the Eilenberg-Moore category for the monad $T$. This idea may be pushed a bit further: you may think of a lax functor $\Phi \colon \mathbb{C}^{op} \rightarrow \mathbf{Cat}$ as a kind of "multimonad". 
Then its multi-Kleisli resolution is given by the Grothendieck construction $\int \Phi$ (this construction gives a fibration precisely when $\Phi$ is a pseudofunctor). And similarly its multi-Eilenberg-Moore category is given by a suitable collection of (ordinary) algebras. In these constructions, if you require cartesian morphisms / algebras to be isomorphisms, then you get a pseudocolimit / pseudolimit of $\Phi$, and if you impose identities instead of isomorphisms you get a colimit / limit. What is more, the Grothendieck construction works also for the bicategory of distributors; and because there is a duality on the bicategory of distributors you may construct (lax/pseudo/strict) limits in this bicategory via the Grothendieck construction as well.
H: Generalized PNT in limit as numbers get large If $\pi_k(n)$ is the cardinality of numbers with $k$ prime factors (repetitions included) less than or equal to $n$, the generalized Prime Number Theorem (GPNT) is: $$\pi_k(n)\sim \frac{n}{\ln n} \frac{(\ln \ln n)^{k-1}}{(k-1)!}.$$ The qualitative appearance of the actual distribution of $\pi_k(n)$ for $k = 1,2,3,\ldots$ agrees very well with the GPNT, for numbers $n$ within reach of my laptop. But I noticed that as $n$ and $k$ get large, "most" of the numbers less than $n$ seem to have relatively few factors. Writing $n = 2^m$ and replacing $k$ by $x$, we can graph $$f(x) =\frac{2^m (\ln\ln 2^m)^{x-1}}{\ln 2^m (x-1)!}$$ from $x = 1$ to $m$ (since no number will have more than $m$ factors) and see that for relatively small fixed $m$, most of the area under the curve $f$ is contained in a steep bell-shaped curve on the far left of the image. I take this to suggest that as we consider very large sets, $S_m = \{ 1,2,3,\ldots,2^m\},$ almost all elements of these sets have a "very small" number of factors (including repetitions). Can this idea be (or has it been) quantified? The phrase "very small" is frustrating, and I think we might be able to say something more concrete about, for example, the concentration of the proportion of area as a function of $x$ and $m$...? Thanks for any suggestions. Edit: the answer Eric Naslund gave below is splendid and I won't neglect to accept it. In response to the answer, I wonder if there is any reason not to be able to get something like that answer from the expression $f(x)$? After all, $f(x)$ appears to be a Poisson-like curve with a mean near the average number of prime factors. If I let $m = 100$ and then $500$ (i.e., we're using $2^{100},2^{500}$), $f'(x) = 0$ at $x \approx 4.73, 6.34$, respectively, while $\ln\ln 2^m$ is respectively $4.23, 5.84$. If $f$ is a valid expression for the asymptotic behavior of $\pi_k(n)$, wouldn't we expect it to give us this additional information? Can we not prove it? 
AI: This is a great question. There has been a lot of work done regarding the distribution of the number-of-prime-factors function, the most famous result being the Erdős–Kac theorem, which states that the number of prime factors is in fact normally distributed. We can ask: what is the average number of prime factors for integers in the interval $[1,N]$? Let's define the function $\omega(n)=\sum_{p|n} 1$ to be the number of distinct prime factors of $n$. In 1917, Hardy and Ramanujan proved that almost all integers in the interval $[1,N]$ asymptotically have $\log \log N$ prime factors. Specifically, if $$\mathcal{E}_N=\{n\leq N: |\omega(n)-\log \log N|>(\log \log N)^{3/4}\},$$ then $|\mathcal{E}_N|=o(N)$. In 1934 Turán gave an alternative proof of this by showing that both the mean and variance of $\omega(n)$ are equal to $\log \log N$. Specifically, Turán proved that $$\frac{1}{N}\sum_{n\leq N}\omega (n) =\log \log N(1+o(1))$$ and that $$\frac{1}{N}\sum_{n\leq N}(\omega (n)-\log \log N)^2 =\log \log N(1+o(1)).$$ Now, here is where things become really interesting. Since we know the mean and variance, let's normalize and look at the function $$\frac{\omega(n)-\log \log n}{\sqrt{\log \log n}}.$$ We can ask: how is this distributed? In 1940, Erdős and Kac proved that $\omega(n)$ is normally distributed. That is, the number-of-prime-factors function behaves like the normal distribution with mean $\log \log N$ and variance $\log \log N$. Specifically, for any fixed real numbers $a,b$, we have that $$\frac{1}{N}\left|\left\{n\leq N:\ a\leq\frac{\omega(n)-\log \log n}{\sqrt{\log \log n}}\leq b\right\}\right|=\frac{1}{\sqrt{2\pi}}\int_a^b e^{-x^2/2}dx +o(1).$$
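A quick numerical illustration (my own sketch, not part of the answer above): a sieve can tabulate $\omega(n)$ for all $n\leq N$ and compare the empirical mean with $\log \log N$. The convergence is famously slow; at accessible $N$ the mean sits closer to $\log\log N + 0.26$ (the Mertens constant) than to $\log\log N$ itself.

```python
import math

# Sieve omega(n), the number of distinct prime factors, for all n <= N.
N = 10**5
omega = [0] * (N + 1)
for p in range(2, N + 1):
    if omega[p] == 0:  # p untouched by any smaller prime, hence prime
        for m in range(p, N + 1, p):
            omega[m] += 1

mean = sum(omega[1:]) / N
print(mean, math.log(math.log(N)))  # mean ~ 2.7 vs log log 10^5 ~ 2.44
```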
H: How to show that a map is an isometry I'm having a difficulty understanding how to go on proving a certain map is an isometry. It should be really basic and simple, but for some reason I can't understand how to do this.. The situation is this: I have 2 manifolds, $D,M$: $D$ is the Poincare disk $\{x\in\mathbb{R}^{n+1}:x_0=0,\sum_{i=1}^n{x_i^2}<1\}$, and $M$ is the positive half-space $\{x\in\mathbb{R}^n:x_n>0\}$. Each has its own metric: for $D$: $g_{ij}^{(1)} = \frac{4\delta_{ij}}{(1-\sum_\alpha u_\alpha^2)^2}$. for $M$: $g_{ij}^{(2)} = \frac{\delta_{ij}}{x_n^2}$. And I have a map, $f:D\to M$, given by $f(p) = 2\frac{p-p_0}{|p-p_0|^2}+p_0$ where $p_0=(0,\ldots,0,-1)\in\mathbb{R}^n$. By my understanding of the definition of isometry, I should take 2 vectors, $p_1,p_2\in D$, and show that $g^{(1)}(p_1,p_2)=g^{(2)}(f(p_1),f(p_2))$. From here I'm really confused.. I have $g_{ij}$ (not $g$), which is defined for $T_pD$, and we get $g$ from $g_{ij}$ (though how exactly I'm not sure). What I thought is that I need to find $df$, and show that for a basis $\partial_i$ of $T_pD$, $g_{ij}^{(1)}(\partial_i,\partial_j)=g_{ij}^{(2)}(df_p(\partial_i),df_p(\partial_j))$, but I'm really not sure on how to do this, and I'll be glad if you could give me a direction.. Thanks AI: Using your notation, we can get $g$ from $g_{ij}$ by integrating along paths. Given $\gamma:[0,1] \to M$ with $\gamma(0)=p_1$, $\gamma(1)=p_2$, its arclength can be computed as $$L(\gamma)=\int_0^1 \sqrt{g_{ij}(\gamma'(t),\gamma'(t))} \, dt \, .$$ Then the distance $g(p_1,p_2)$ is defined to be $\inf_{\gamma(0)=p_1,\gamma(1)=p_2} L(\gamma)$. So your last sentence is correct; if you can show that $df$ preserves the inner product on $TD$, that will imply that $f$ preserves arclengths of curves and thus also the metric-space structure on $D$.
In fact, "$df$ preserves the inner product" is generally taken as the definition of an isometry between Riemannian manifolds, since the actual metric-space structure can be difficult to work with directly (being an infimum over an infinite-dimensional space of curves). So you just need to be able to compute $df_p$ for all $p$; to do this, remember that, once you've chosen local coordinates for your source and target manifolds, $df$ becomes the Jacobian of $f$. The problem statement hands you a set of coordinates (and in fact gives you your two metrics in those coordinates), so...
H: Subgroup Test with Conditions Let $G$ be a group and let $A$ be a non-empty subset of $G$. Let $H$ be a set defined by $$H = \{ x \in G \mid \text{For all }a \in A,\text{ we have }xa \in A\text{ and }x^{-1}a \in A \}$$ Show that $H$ is a subgroup of $G$. AI: Note that $H$ contains $e$, since for all $a\in A$, $ea = a\in A$ and $e^{-1}a=a\in A$. Now assume that $x,y\in H$. That means that for each $a\in A$, we have $xa\in A$ and $x^{-1}a\in A$, and $ya\in A$ and $y^{-1}a\in A$. We want to show that $xy^{-1}\in H$. What do we need in order for $xy^{-1}$ to be in $H$? We need it to be the case that if $a\in A$, then both $xy^{-1}a$ and $(xy^{-1})^{-1}a\in A$. I'll do the first one: why is $(xy^{-1})a\in A$? Well, we know $y^{-1}a=a'\in A$ because $y\in H$. And since $y^{-1}a=a'\in A$, then $xa'\in A$, because $x\in H$. So $xy^{-1}a = xa'\in A$. I'll let you do the second one.
H: Does existence of anti-derivative imply integrability? If $f$ has an anti-derivative in $[a,b]$ does it imply that $f$ is Riemann integrable in $[a,b]$? AI: Take $f(x)=\begin{cases} x^2\sin (1/x^2), &x\ne 0, \\ 0, &x=0. \end{cases}\quad$ Then $g=f'$ exists everywhere but is unbounded over $[-1,1]$. $g$ thus has a primitive but is not Riemann integrable.
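To see the unboundedness concretely, here is a small numerical check (my own sketch, not part of the original answer): differentiating gives $g(x) = 2x\sin(1/x^2) - \frac{2}{x}\cos(1/x^2)$ for $x\ne 0$, and at the points $x_k = 1/\sqrt{2\pi k}$ the second term has magnitude $2\sqrt{2\pi k}$, which grows without bound as $x_k \to 0$.

```python
import math

def g(x):
    # derivative of x^2 * sin(1/x^2) for x != 0
    return 2*x*math.sin(1/x**2) - (2/x)*math.cos(1/x**2)

# At x_k = 1/sqrt(2*pi*k) we get cos(1/x^2) = cos(2*pi*k) = 1,
# so g(x_k) is approximately -2*sqrt(2*pi*k): unbounded as k grows.
for k in (1, 100, 10000):
    print(k, g(1/math.sqrt(2*math.pi*k)))
```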
H: Solving for unknown inside square root Sorry if this is a very primitive question, but I really not sure if I am right about this kind of situations. Imagine the following equation where $a$ , $b$ and $c$ are known numbers and $x$ is the unknown variable: $$a\sqrt{bx}=c$$ Is it ok in this case to do it like $$a^2bx=c^2$$ If not, how to solve such equation? AI: Yes, this is fine, provided that $a$ and $c$ have the same algebraic sign. When you solve the second equation, you get $$x=\frac{c^2}{a^2b}\;.$$ Now try substituting that into the original equation: $$a\sqrt{\frac{bc^2}{a^2b}}=a\sqrt{\frac{c^2}{a^2}}=a\left|\frac{c}a\right|\;.\tag{1}$$ If $a$ and $c$ have the same algebraic sign, $\left|\dfrac{c}a\right|=\dfrac{c}a$, and $(1)$ can be simplified to $a\left(\dfrac{c}a\right)=c$, as desired. If one of $a$ and $c$ is positive and the other negative, the original equation has no solution, since by convention $\sqrt{bx}$ denotes the non-negative square root of $bx$.
H: Fastest way to compare fractions Which is the fastest method to compare the below fractions with minimum calculation possible and finding which is greatest and which the smallest?? $$\frac{26}{686},\quad \frac{48}{874},\quad \frac{80}{892},\quad \frac{27}{865}$$ AI: The denominators of the last three fractions are within $27$ of one another; $3\cdot27=81$, so $27$ out of $865$ is less than $1$ part in $30$. The numerators of those fractions differ by much more than $1$ part in $30$, so we can safely rank those three fractions in the order $$\frac{27}{865}<\frac{48}{874}<\frac{80}{892}\;.$$ The only real question is where the first fraction fits, but that’s easy: $26$ and $27$ are very close, while $686$ is much less than $865$, so $$\frac{26}{686}>\frac{27}{865}\;.$$ On the other hand, it’s pretty obvious that $$\frac{26}{686}<\frac{80}{892}\;,$$ so the smallest and largest must be $\dfrac{27}{865}$ and $\dfrac{80}{892}$, respectively.
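If you just need the answer rather than a mental shortcut, exact rational arithmetic settles it immediately; a quick sketch of my own in Python:

```python
from fractions import Fraction

# Fraction compares exactly (by cross-multiplication), with no rounding.
fracs = [Fraction(26, 686), Fraction(48, 874), Fraction(80, 892), Fraction(27, 865)]
ranked = sorted(fracs)
print(ranked)  # note Fraction reduces, e.g. 80/892 displays as 20/223
```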
H: A consequence of Runge's theorem I'd like to have a reference for the proof of the following fact of complex analysis. I think it follows from Runge's theorem, but I don't know how to prove it. Fact. Let $U \subseteq V \subseteq \mathbb{C}$ be open sets such that no one connected component of $V \setminus U$ is compact. Then the restriction map $\rho_{VU} \colon \mathcal{O}(V) \to \mathcal{O}(U)$ has dense image. (If $U$ is an open subset of $\mathbb{C}$, the algebra $\mathcal{O}(U)$ of holomorphic functions on $U$ is endowed with the topology of uniform convergence over compact subsets, i.e. the compact-open topology.) Thanks to all! AI: Indeed, Runge's theorem is just the thing for this problem. Given a compact subset $K\subset V$ and a number $\epsilon>0$, we must show that for every $f\in \mathcal O(U)$ there exists $g\in\mathcal O(V)$ such that $\sup_K|f-g|<\epsilon$. We want $g$ to be a rational function with poles outside of $V$. According to Runge, we must choose a point in each connected component of $\overline{\mathbb C}\setminus K$ where the poles of $g$ may be located. Let us be careful here: we don't want $K$ to have extra holes in it. By enlarging $K$ we can make sure that each connected component of $\overline{\mathbb C}\setminus K$ contains a component of $\overline{\mathbb C}\setminus U$. One way to do it is to let $K$ consist of all points of $U$ whose spherical distance to $\overline{\mathbb C}\setminus U$ is at least $\epsilon$. It remains to prove that no connected component of $\overline{\mathbb C}\setminus K$ is contained in $V$. Suppose, to the contrary, that $\Omega$ is a connected component of $\overline{\mathbb C}\setminus K$ that is contained in $V$. Let $E$ be a connected component of $\overline{\mathbb C}\setminus U$ that is contained in $\Omega$. Being a connected component, $E$ is closed in $\overline{\mathbb C}\setminus U$. And since $\overline{\mathbb C}\setminus U$ is a compact set, so is $E$. 
Finally, $E$ is also a component of $V\setminus U$ because $E\subset V$. A contradiction.
H: Best books in the genre "______ for Mathematicians" I once heard someone (perhaps from someone famous -- anyone have a citation?) say that there ought to be a series of books called "__ for Mathematicians," each one of which would explain a different topic or discipline using the tools of mathematics. (The idea is that knowledge of higher mathematics helps to clarify the exposition or simplify issues which would otherwise be inaccessible to beginners.) Even though this book series doesn't actually exist, there are certainly some books which fall into this category. (I've listed two below.) What are the best books of this type? Some examples: Economics with Calculus by Lovell (an introductory economics textbook which assumes calculus knowledge) Mathematical Methods of Classic Mechanics by Arnold (adopts an axiomatic, mathematical approach to classical mechanics) As you can see, the above examples vary pretty widely in what type of mathematical sophistication they expect from the reader; for me the key fact is that they both significantly alter the normal presentation of material to make it more suitable for readers comfortable with mathematical reasoning. AI: Here are some suggestions: Quantum Field Theory for Mathematicians (Encyclopedia of Mathematics and its Applications) -- Robin Ticciati, Quantum Mechanics for Mathematicians (Graduate Studies in Mathematics) -- Leon A. Tahktajan, Lectures on Quantum Mechanics for Mathematics Students (Student Mathematical Library) - L.D. Faddeev and O.A. Yakubovskii, Physics for Mathematicians, Mechanics I -- Michael Spivak, Quantum Fields and Strings: A Course for Mathematicians -- Pierre Deligne, Economics for Mathematicians (London Mathematical Society Lecture Note Series) -- J.W.S. Cassels, A Course in Mathematical Logic for Mathematicians (Graduate Texts in Mathematics) -- Y. Manin, N. Koblitz and B. Zilber. 
As for suggestion number 4, I know that the author has planned to write a whole series of books of the form Physics for Mathematicians, [insert subject here]. He has also written a series of books on differential geometry, so I guess he will probably continue with his project with at least one other book. Furthermore, I would like to add a book that does not bear a title of the form [subject] for Mathematicians, but it seems to me that, with a little imagination, it can be interpreted as such. The book I am talking about is: Pattern Theory -- The Stochastic Analysis of Real-World Signals (Applying Mathematics) -- David Mumford and Agnès Desolneux. It is an interesting account of the applications of mathematics to the analysis of patterns. I think it could prove very useful for those studying artificial intelligence. I therefore think it could be interpreted as Artificial Intelligence for Mathematicians.
H: What is the preferred symbol to indicate the least positive number to start a sequence? I need a least positive number and I am considering $\delta$, $\epsilon$ and $\theta$. Which one would be best to start a sequence? Are there any others I should also consider? Edit: $a(0)\text{:=}\theta$ $a(n)\text{:=}\left \lceil \left(x=a(n-1)\right)+x^{\frac{1}{2}}\right\rceil$ The starting number can be anything $0<\theta<1$ AI: $\delta$ and $\epsilon$ have particular common uses which seem not the same as yours, so given those choices I would use $\theta$. But I'd have to know more about your use to say for sure.
H: Constructing $\sqrt{a}$ for a constructable $0\leq a\in\mathbb{R}$ - Compass and straightedge constructions Possible Duplicate: Compass-and-straightedge construction of the square root of a given line? I wish to understand how to construct $\sqrt{a}$ for a constructable $0\leq a\in\mathbb{R}$ , the book Abstract Algebra (by David Steven Dummit, Richard M. Foote) offers (in pg. 532) the following: construct the circle with diameter $1+a$ (looks like a straight line with the point $a$ somewhere on the line and $1+a$ at the right end of the line) and erect the perpendicular to the diameter from the point $a$ (the point with distance $a$ from the leftmost point on the line). Then $\sqrt{a}$ is the length of the perpendicular. My question is why the length of the perpendicular is $\sqrt{a}$ ? (I'm guessing that there's a theorem in geometry that I don't know about that might help...) Help is appreciated! AI: This is simple geometry (Pythagorean theorem): Take the circle with diameter $a+1$. then the point $a$ is at distance $\frac{a-1}{2}$ from the circle's center. the radius is $\frac{a+1}{2}$. So the perpendicular satisfies $(\frac{a-1}{2})^2+x^2=(\frac{a+1}{2})^2$, thus $x=\sqrt{a}$.
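A numerical sanity check of the construction (my own sketch): the foot of the perpendicular lies at signed distance $\frac{a-1}{2}$ from the center of a circle of radius $\frac{a+1}{2}$, so the Pythagorean theorem gives the perpendicular's length directly.

```python
import math

# Perpendicular length at the point splitting the diameter into segments a and 1.
for a in (0.25, 2.0, 7.0, 13.5):
    radius = (a + 1) / 2
    offset = (a - 1) / 2        # signed distance from the center to the foot point
    x = math.sqrt(radius**2 - offset**2)
    print(a, x, math.sqrt(a))   # the last two columns agree
```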
H: About Square Rooting I have read that "every positive number $a$ has two square roots, positive and negative". For that reason I have always (as far as I could remember) unconsciously done the following for such expressions $$ x^2 = 4 \implies x= \pm 2 $$ What I wanted to know was that, in order to cancel the square, aren't we taking square root on both sides? If we are, why don't we have something like this: $$ x^2 = 4 $$ $$ \sqrt{x^2} = \sqrt4 $$ $$ \pm x = \pm 2$$ Why do we always end up with this instead $$ x = ± 2\quad ?$$ AI: Consider all the possibilities of $\pm x = \pm 2$: $x = -2$ $x = 2$ $-x = -2$ which implies $x = 2$ $-x = 2$ which implies $x = -2$ so $\pm x = \pm 2$ is the same as $x = \pm 2$.
H: What is an operator in mathematics? Could someone please explain the mathematical difference between an operator (not in the programming sense) and a function? Is an operator a function? AI: Based on your comment it sounds like you're actually asking about operations, not operators. A binary operation on a set $S$ is a special kind of function; namely, it is a function $S \times S \to S$. That is, it takes as input two elements of $S$ and returns another element of $S$. We can denote such an operation by a symbol such as $a \star b$ and then demand various additional properties of this operation, such as associativity: $(a \star b) \star c = a \star (b \star c)$, commutativity: $a \star b = b \star a$ and so forth. On the other hand, an arbitrary function $f : A \to B$ between two sets only takes a single input and returns an output which is not necessarily of the same type, so one can't speak of associativity or commutativity for such a thing. One might call a function $f : A \to A$ a unary operation but one still can't speak of associativity or commutativity for such a thing.
H: normalizer of a p-Sylow on $S_p$ Let $P$ be a group of order p, on $S_p$ , How can I prove that the cardinality of normalizer of $P$ it's $p(p-1)$ ? If I compute that the number of conjugates of the group P, it's $ \frac{{n!}} {{p\left( {p - 1} \right)}} $ then I'm done, since equals to the index of the normalizer. But I don't know how. AI: It's probably easier to calculate the normalizer directly. Since all subgroups of order $p$ are conjugate in $S_p$ (they are all Sylow $p$-subgroups), we can take the subgroup $H$ generated by $(123\ldots p)$. If we conjugate that generator by $\sigma \in S_p$, we get $(\sigma(1) \sigma(2) \ldots \sigma(p))$. Now how many ways can we choose $\sigma$ so that cycle is still in $H$?
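For a concrete check, here is a brute-force sketch of mine (feasible since $|S_5| = 120$): take $p = 5$, build $H = \langle (0\,1\,2\,3\,4)\rangle$, and count the permutations $\sigma$ with $\sigma H \sigma^{-1} = H$; the count should be $p(p-1) = 20$.

```python
from itertools import permutations

p = 5
identity = tuple(range(p))
cycle = tuple((i + 1) % p for i in range(p))  # the p-cycle (0 1 2 ... p-1)

def compose(a, b):
    # (a after b): i -> a[b[i]]
    return tuple(a[b[i]] for i in range(p))

def inverse(a):
    inv = [0] * p
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

# H = all powers of the p-cycle
H, x = {identity}, cycle
while x != identity:
    H.add(x)
    x = compose(cycle, x)

normalizer = [s for s in permutations(range(p))
              if {compose(compose(s, h), inverse(s)) for h in H} == H]
print(len(normalizer))  # 20 == p*(p-1)
```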
H: Difference Identity problem I have a homework problem that I don't know what to do with. We were just introduced to sum and difference identities. We've always been provided values in degrees both in class and in homework until this problem. I checked the book to see if a similar problem had been worked out; it hadn't. Any help would be appreciated. Use the information below to find the exact value of sin (A-B): $\cos A = \dfrac1{3}, 0 < A < \dfrac{\pi}{2}, \sin B = -\dfrac1{2}, \dfrac{3\pi}{2} < B < 2\pi$ AI: Hint: You should have $\sin (A-B)= \sin A \cos B - \cos A \sin B$ and $\sin^2 \theta + \cos^2 \theta = 1$. To find $\sin A$, we use the second: $\sin^2 A + \cos^2 A=1$ $\sin^2 A = \frac 89$ $\sin A = \pm \frac {2 \sqrt 2}3$ The restriction on $A$ should let you resolve the $\pm$ sign. Now do the same to find $\cos B$ and you have all you need for the first equation.
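A numeric check of the final value (my own sketch; following the hint through, the exact answer works out to $\frac{\sqrt6}{3}+\frac16$):

```python
import math

A = math.acos(1/3)          # cos A = 1/3 puts A in (0, pi/2)
B = 2*math.pi - math.pi/6   # B = 11*pi/6: sin B = -1/2 with B in (3*pi/2, 2*pi)

sinA = math.sqrt(1 - (1/3)**2)   # positive, since A lies in quadrant I
cosB = math.sqrt(1 - (1/2)**2)   # positive, since B lies in quadrant IV

value = sinA*cosB - (1/3)*(-1/2)   # sin A cos B - cos A sin B
print(value, math.sin(A - B))      # both ~ 0.98316 = sqrt(6)/3 + 1/6
```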
H: Let $x_n$ be an unbounded sequence of non-zero real numbers Let $x_n$ be an unbounded sequence of non-zero real numbers. Then 1. it must have a convergent subsequence. 2. it can not have a convergent subsequence. 3. $\frac{1}{x_n}$ must have a convergent subsequence. 4. $\frac{1}{x_n}$ can not have a convergent subsequence. Well, 1. is not true as say $x_n=n\ \forall n$, 2. I am not sure, but I guess there may be an unbounded sequence which may have a convergent subsequence, 3. is true due to Bolzano-Weierstrass as it is a bounded sequence, 4. is false. Am I right? Please correct me if I am wrong anywhere. Thank you. AI: 1. False: take $x_n = n$. 2. False: consider $x_n = n$ if $n$ is even and $x_n = 1/n$ if $n$ is odd (the terms must be non-zero). The sequence is unbounded, yet the odd-indexed subsequence converges to $0$. 3. True: since $x_n$ is unbounded, you can choose a subsequence $y_n$ such that $|y_n| > n$. Then the sequence $\frac{1}{y_n}$ is bounded. By Bolzano–Weierstrass, it has a convergent subsequence. Clearly, this is a subsequence of the original $\{\frac{1}{x_n}\}$. 4. False: look at 3.
H: sequence of function $f_n(x)= \sin(n\pi x)$ $f_n(x):[0,1]\rightarrow \mathbb{R}$ defined by $$f_n(x)= \sin(n\pi x)$$ if $x\in [0,1/n]$, and $$f_n(x)=0$$ if $x\in (1/n,1]$ Then 1. It does not converge pointwise. 2. It converges pointwise but the limit is not continuous. 3. It converges pointwise but not uniformly. 4. It converges uniformly. AI: Here is a hint for your problem: Note that given an $n$ you will always be able to find a $c \in [0, \frac{1}{n}]$ such that $f_n(c) = 1$.
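A quick numerical illustration of the hint (my own sketch): for each fixed $x\in(0,1]$ the values $f_n(x)$ are eventually $0$, yet $\sup_x |f_n(x)| = 1$ for every $n$, attained at $c = \frac{1}{2n}$, so the convergence is pointwise but not uniform.

```python
import math

def f(n, x):
    return math.sin(n*math.pi*x) if x <= 1/n else 0.0

# Pointwise limit is 0: for fixed x > 0, f_n(x) = 0 once 1/n < x.
print([f(n, 0.3) for n in range(4, 8)])        # all 0.0

# But the sup never shrinks: f_n(1/(2n)) = sin(pi/2) = 1 for every n.
print([f(n, 1/(2*n)) for n in range(1, 6)])    # all 1.0
```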
H: Determining the existence of a solution to an additive equation During my research in combinatorial geometry, I have encountered the following elementary question which I am hoping to have some help on. Let $\zeta = 27.22236\ldots$. Does there exist a set of $\gamma_i$'s such that $\sum\limits_{i=1}^{8} \gamma_{i} = \zeta$, where $\gamma_i \in \{2\pi - \arccos(1/3), 2\pi - 2\arccos(1/3), 2\pi - 3\arccos(1/3), 2\pi - 4\arccos(1/3)\}$? There are other versions of the problem which I need to also solve for a case analysis involving spherical simplicial complexes on $\mathbb{S}^2$, with other values of $\zeta$ and a different number of $\gamma_i$'s, but if someone knows how to solve this particular question I think I will be able to generalize quite easily. Let me know if you have any questions, and I can add additional motivation to the question if necessary, but it involves simplicial complexes, sphere packings, and other details I would prefer not to bother mention. EDIT: A bit more information about the value for $\zeta$. Let $\Omega = 15\pi - 33\arccos(1/3)$, $\psi = 2\pi - 4\arccos(1/3)$, and $$\beta = \arccos\left(\frac{\cos(\pi/3)-\cos(\pi/3)\cos(z)}{\sin(\pi/3)\sin(z)}\right)$$ where $$z = \arccos\left(\frac{1 + 3\cos(\psi)}{4}\right)$$ Then, $$\zeta = \Omega - 2\psi - 2\beta + 9\pi - 2\arccos(1/3) = 27.22236\ldots$$ AI: All of your sums will be of the form, $16\pi-k\arccos(1/3)$ for some integer $k$ between 8 and 32, so compute $(16\pi-\zeta)/\arccos(1/3)$ and see if it looks like an integer.
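Carrying out the suggested computation numerically (my own sketch, using the formulas from the question's edit): compute $\zeta$, then test whether $(16\pi-\zeta)/\arccos(1/3)$ is an integer.

```python
import math

ac = math.acos(1/3)
psi = 2*math.pi - 4*ac
z = math.acos((1 + 3*math.cos(psi)) / 4)
beta = math.acos((math.cos(math.pi/3) - math.cos(math.pi/3)*math.cos(z))
                 / (math.sin(math.pi/3)*math.sin(z)))
Omega = 15*math.pi - 33*ac

zeta = Omega - 2*psi - 2*beta + 9*math.pi - 2*ac
print(zeta)                     # ~ 27.22236, matching the question

k = (16*math.pi - zeta) / ac
print(k)                        # ~ 18.72: not an integer, so no such choice of gammas
```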
H: Finding a point near other points Let $p_1, \ldots, p_k$ be $k$ points in $\mathbb{R}^n$ so that $$\max_{i,j}\|p_i - p_j\| = \epsilon$$ where we are employing the standard Euclidean norm. What is the smallest $r > 0$ so that there exists some $x \in \mathbb{R}^n$ with $\|x - p_i\| \leq r$ for all $1 \leq i \leq n$? And most importantly, How does $r$ change with $\epsilon$, $k$ and $n$? AI: Jung's Theorem says that if $K$ is a compact set in ${\bf R}^n$ and $d=\max_{p,q\in K}\|p-q\|_2$ then there is a closed ball with radius $$r\le d\sqrt{{n\over2(n+1)}}$$ that contains $K$. Equality obtains for (the vertices of) the regular $n$-simplex. As joriki suggests in the comments, the full answer is thus $$r=\epsilon\sqrt{m/(2(m+1))}{\rm\ with\ }m=\min(k-1,n)$$
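A concrete check of the equality case in the plane (my own sketch): for the vertices of an equilateral triangle with side $d=1$ in $\mathbb{R}^2$, the circumradius equals the Jung bound $d\sqrt{\frac{n}{2(n+1)}} = \frac{1}{\sqrt3}$ with $n=2$.

```python
import math

# Equilateral triangle with side 1: the extreme case of Jung's theorem for n = 2.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3)/2)]
center = (0.5, math.sqrt(3)/6)                 # circumcenter
r = max(math.dist(center, q) for q in pts)
bound = 1.0 * math.sqrt(2 / (2*3))             # d * sqrt(n/(2(n+1))), n = 2
print(r, bound)                                # both 1/sqrt(3) ~ 0.5774
```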
H: will there be pairwise disjoint open sets in $\mathbb{R}^2$ Suppose $S$ is a collection of pairwise disjoint open sets in $\mathbb{R}^2$. 1. $S$ can not be finite. 2. $S$ can not be countably infinite. 3. $S$ can not be uncountably infinite. 4. $S$ is empty. 1 is wrong: I can take any finite number of disjoint open sets by the Hausdorff property, right? 2 is also wrong: I can take points from $\mathbb{N}\times \mathbb{N}$ and separate them by those pairwise disjoint open sets, so here $S$ is countably infinite. 3 is also wrong: I will do the same thing by putting $\mathbb{Q}^c\times \mathbb{Q}^c$, so 4 is right. Are my arguments ok? AI: Any disjoint collection of nonempty open sets in $\mathbb{R}^n$ is countable. You argue as follows. Let $\mathcal{K}$ be such a collection. For each $k\in \mathcal{K}$, choose an element of $\mathbb{Q}^n$ lying in $k$. This is a 1-1 mapping from the elements of $\mathcal{K}$ into a subset of the countable set $\mathbb{Q}^n$. Therefore $\mathcal{K}$ is countable. In your notation, $S$ can be countable or empty. It cannot be uncountable.
H: Is $K\subset\mathbb{R}^2$ homeomorphic to an interval if $K$ is connected but $K\setminus\{x\}$ is not for any $x\in K$? Must it have empty interior? Given that $K$ is a connected subset of $\mathbb{R}^2$ such that $\forall x\in K, K\setminus\{x\}$ is not connected, then K must be homeomorphic to an interval of $\mathbb{R}$ K must have empty interior. Well, I feel that 1 is correct but I'm not able to make it formal, and I'm not sure about 2. Thank you for help. AI: Copying & expanding comments: 1 is false: take the union of two axes. You can find a topological characterization of intervals in Analytic Topology by Whyburn. 2 is correct: removing a point of interior does not disconnect a set, because a disk minus a point is still connected.
H: Subgroup of order $9$ of $S_6$ Consider the permutation group $S_6$ and let $H\subseteq S_6$ be a subgroup of $9$ elements. 1. It is abelian but not cyclic. 2. It is cyclic. 3. It is not abelian. 4. If $H$ is abelian then it is cyclic. Well, I know a general result that a group of order $p^2$ is abelian where $p$ is a prime number, hence $H$ is abelian. But I don't know whether $H$ is cyclic or not; is it? Thank you for the help. AI: Hint. If you write an element of $S_6$ as a product of disjoint cycles, the order of the element is the least common multiple of the lengths of the cycles. Is there a way to get an element of order $9$?
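The hint can be checked exhaustively (my own sketch): the possible element orders in $S_6$ are exactly the lcms of the partitions of $6$ (the cycle types), and $9$ never occurs, so a subgroup of order $9$ cannot be cyclic (it is then abelian but not cyclic).

```python
from math import lcm

def partitions(n, max_part=None):
    # generate the partitions of n (the cycle types of S_n)
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

orders = {lcm(*p) for p in partitions(6)}
print(sorted(orders))   # [1, 2, 3, 4, 5, 6] -- no element of order 9
```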
H: Is there an explicit way to determine $\mathrm{Mat}_n(R[X_1,\dots,X_m])\simeq\mathrm{Mat}_n(R)[X_1,\dots,X_m]$? For a commutative ring $R$, let $\mathrm{Mat}_n(R[X_1,\dots,X_m])$ denotes the matrix ring with entries from $R[X_1,\dots,X_m]$, and let $\mathrm{Mat}_n(R)[X_1,\dots,X_m]$ denotes the polynomial ring with coefficients in $\mathrm{Mat}_n(R)$. Is there an easy way to see that both structures are isomorphic as rings? Even experimenting with just one indeterminate at small cases of $n$, I'm having difficulty finding a suitable map to verify. What is the natural ring isomorphism here? Thanks. AI: There is an evident map $M_n(R)\to M_n(R[X_1,\dots,X_m])$, which is injective and a map of rings, so we can identify the elements of $M_n(R)$ with their images in $M_n(R[X_1,\dots,X_m])$. On the other hand, for each $i\in\{1,\dots,m\}$ let $\underline X_i$ be the element of $M_n(R[X_1,\dots,X_m])$ which is a diagonal matrix all of whose diagonal entries are $X_i$, so that $\underline X_i=X_i\cdot I_n$, with $I_n\in M_n(R[X_1,\dots,X_m])$ the identity matrix. An element $A$ of $M_n(R[X_1,\dots,X_m])$ can be written in exactly one way as a finite sum $$\sum_{i_1,\dots,i_m\geq0} a_{i_1,\dots,i_m}\underline X_1^{i_1}\cdots \underline X_m^{i_m}$$ with the $a_{i_1,\dots,i_m}$ elements of $M_n(R)$. That's where the map comes from. For all $i_1,\dots,i_m\geq0$ and all $i$, $j\in\{1,\dots,n\}$, the $(i,j)$th entry of the matrix $a_{i_1,\dots,i_m}$ is the coefficient of $X_1^{i_1}\cdots X_m^{i_m}$ in the $(i,j)$th entry of $A$. Alternatively, let us write $S=R[X_1,\dots,X_m]$. The ring $M_n(S)$ is the endomorphism ring of the free left $S$-module $S^n$ of rank $n$. One can check that there is a canonical isomorphism $$\hom_S(S^n,S^n)\to S\otimes_R\hom_R(R^n,R^n)$$ and, since $\hom_R(R^n,R^n)\cong M_n(R)$, this tells us that $$M_n(S)\cong S\otimes_R M_n(R)$$ We are thus left with showing that $S\otimes_R M_n(R)\cong M_n(R)[X_1,\dots,X_m]$. 
It is in fact true that for all $R$-algebras $\Lambda$ we have an isomorphism $$R[X_1,\dots,X_m]\otimes_R\Lambda\cong\Lambda[X_1,\dots,X_m],$$ and we want this when $\Lambda=M_n(R)$. Can you do this?
H: The Number of symmetric, PD, $8\times 8$ matrices The number of symmetric, positive definite, $8\times 8$ matrices having trace $=8$ and determinant $=1$ is: 1. $0$. 2. $1$. 3. $>1$ but finite. 4. $\infty$. I am not able to do this one. AI: If $A$ is pos. def. then its eigenvalues $\lambda_i$ are real and positive. Besides, we know (don't we?) that, for any matrix, $\sum \lambda_i = tr(A)$ and $\prod \lambda_i= |A|$. In our case, that means that we are restricted to $\sum \lambda_i =8$ and $\prod \lambda_i =1$... (can you go on from here? hint: arithmetic-geometric means and their properties)
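The AM-GM step can be illustrated numerically (my own sketch): any vector of positive eigenvalues with sum $8$ has product at most $1$, with equality only when all eigenvalues equal $1$ — which forces $A = I$, so the count is $1$.

```python
import math, random

random.seed(0)
max_det = 0.0
for _ in range(1000):
    v = [random.uniform(0.01, 3.0) for _ in range(8)]
    s = sum(v)
    v = [x * 8 / s for x in v]          # rescale so the trace (eigenvalue sum) is 8
    max_det = max(max_det, math.prod(v))  # determinant = product of eigenvalues
print(max_det)  # stays below 1, approaching it only as the v_i approach all-ones
```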
H: Show inequality generalization $\sum (x_i-1)(x_i-3)/(x_i^2+3)\ge 0$ Let $f(x)=\dfrac{(x-1)(x-3)}{x^2+3}$. It seems to be that: If $x_1,x_2,\ldots,x_n$ are positive real numbers with $\prod_{i=1}^n x_i=1$ then $\sum_{i=1}^n f(x_i)\ge 0$. For $n>2$ a simple algebraic approach gets messy. This would lead to a generalization of this inequality, but even the calculus solution offered there for $n=3$ went into cases. I thought about Jensen's inequality, but $f$ is not convex on $x>0$. Can someone prove or disprove the above claim? AI: Unfortunately, this is not true. Simple counterexample: My original counterexample had some ugly numbers in it, but fortunately, there is a counterexample with nicer numbers. However, the explanation below might still prove informative. Note that for $x>0$, $$ f(x)=\frac{(x-1)(x-3)}{x^2+3}=1-\frac{4x}{x^2+3}\lt1\tag{1} $$ Next, we compute $$ f(2)=-\frac17\tag{2} $$ Let $x_0=\frac{1}{256}$ and $x_k=2$ for $1\le k\le8$. The product of the $x_k$ is $\frac{1}{256}\cdot2^8=1$, yet by $(1)$ and $(2)$, the sum of the $f(x_k)$ is less than $1-\frac87\lt0$. Original counterexample: Let $x_0=e^{-3.85}$ and $x_k=e^{.55}$ for $1\le k\le 7$. We get $f(x_0)=0.971631300121646$ and $f(x_k)=-0.154700260422285$ for $1\le k\le 7$. Then, $$ \prod_{k=0}^7x_k=1 $$ yet $$ \sum_{k=0}^7f(x_k)=-0.111270522834348 $$ Explanation: Let me explain how I came up with this example. $\prod\limits_{k=0}^nx_k=1$ is equivalent to $\sum\limits_{k=0}^n\log(x_k)=0$. Therefore I considered $u_k=\log(x_k)$. Now we want $$ \sum_{k=0}^nu_k=0 $$ to mean that $$ \sum_{k=0}^n\frac{(e^{u_k}-1)(e^{u_k}-3)}{e^{2u_k}+3}\ge0 $$ I first looked at the graph of $\large\frac{(e^{u}-1)(e^{u}-3)}{e^{2u}+3}$. (graph omitted) If the graph were convex, the result would be true. Unfortunately, the graph was not convex, but I did note that $f(u)$ dipped below $0$ with a minimum of less than $-\frac17$ near $u=.55$, and that it was less than $1$ everywhere.
Thus, if I took $u=.55$ for $7$ points and $u=-3.85$ for the other, the sum of the $u_k$ would be $0$, yet the sum of the $f(e^{u_k})$ would be less than $0$.
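The nicer counterexample is easy to verify directly (my own sketch):

```python
import math

def f(x):
    return (x - 1)*(x - 3)/(x*x + 3)

xs = [1/256] + [2]*8          # product is (1/256) * 2^8 = 1 exactly
print(math.prod(xs))          # 1.0
print(sum(f(x) for x in xs))  # ~ -0.148 < 0, so the claimed inequality fails
```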
H: If $Q \in \mathbb{R}^{n \times n}$ is both upper triangular and orthogonal, then $\textbf{q}_j = \pm \textbf{e}_j, j = 1,\ldots, n$ I can get this far: If $n = 1$, then the only matrices that are both upper triangular and orthogonal are $[1]$ and $[-1]$, so $\textbf{q}_j = \pm\textbf{e}_j, j = 1$ is true. Then if we assume that the result holds for $n = k$ and suppose that $Q$ is a $k+1 \times k+1$ matrix that is upper triangular and orthogonal, then . . . ? It seems that to get any further, I have to use the fact that there is a $k \times k$ upper triangular, orthogonal submatrix within $Q$ that I can apply the induction hypothesis to. I can see why the $k \times k$ matrix in the upper-left or lower-right would be upper triangular, but I don't see why it would necessarily be orthogonal. It seems like it's the same as saying that every submatrix along the diagonal of a nonsingular matrix is nonsingular, but I don't think that's right. Could someone clarify this for me? AI: Hint: The inverse of an upper triangular matrix is upper triangular. The transpose of an upper triangular matrix is lower triangular. But if $A$ is orthogonal, then the transpose and the inverse are related...
H: Why solving $\dfrac{\partial u}{\partial x}=\dfrac{\partial^2u}{\partial y^2}$ like this is wrong? Try let $v=x+y$ , $w=x-y$ , Then $\dfrac{\partial u}{\partial x}=\dfrac{\partial u}{\partial w}\dfrac{\partial w}{\partial x}=\dfrac{\partial u}{\partial w}$ $\dfrac{\partial u}{\partial y}=\dfrac{\partial u}{\partial w}\dfrac{\partial w}{\partial y}=-\dfrac{\partial u}{\partial w}$ $\dfrac{\partial^2u}{\partial y^2}=\dfrac{\partial}{\partial y}\left(-\dfrac{\partial u}{\partial w}\right)=-\dfrac{\partial}{\partial v}\left(\dfrac{\partial u}{\partial w}\right)\dfrac{\partial v}{\partial y}=-\dfrac{\partial^2u}{\partial vw}$ $\therefore-\dfrac{\partial^2u}{\partial vw}=\dfrac{\partial u}{\partial w}$ Let $z=\dfrac{\partial u}{\partial w}$ , Then $\dfrac{\partial z}{\partial v}=\dfrac{\partial^2u}{\partial vw}$ $\therefore-\dfrac{\partial z}{\partial v}=z$ $\dfrac{dz}{z}=-~dv$ $\int\dfrac{dz}{z}=\int-~dv$ $\ln z=-v+c_1(w)$ $z=c_2(w)e^{-v}$ $\dfrac{\partial u}{\partial w}=c_2(w)e^{-v}$ $u=\int c_2(w)e^{-v}~dw$ $u=C_1(v)+C_2(w)e^{-v}$ $u=C_1(x+y)+C_2(x-y)e^{-x-y}$ I did it correctly in Finding an analytical solution to the wave equation using method of characteristics , and did it not known whether correct or not in Solving $yu_{xx}+(x+y)u_{xy}+xu_{yy}=0$ , but why in here is wrong? AI: Fleshing out @QiaochuYuan's comment a little: $$\begin{eqnarray*} \frac{\partial}{\partial x} &=& \frac{\partial v}{\partial x} \frac{\partial}{\partial v} + \frac{\partial w}{\partial x} \frac{\partial}{\partial w} \\ &=& \frac{\partial}{\partial v} + \frac{\partial}{\partial w} \\ \frac{\partial}{\partial y} &=& \frac{\partial v}{\partial y} \frac{\partial}{\partial v} + \frac{\partial w}{\partial y} \frac{\partial}{\partial w} \\ &=& \frac{\partial}{\partial v} - \frac{\partial}{\partial w}. 
\\ \end{eqnarray*}$$ The reason this is a nice change of variables for the wave equation is because $$\frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial y^2} = 4 \frac{\partial}{\partial v} \frac{\partial}{\partial w}.$$ No such nice thing happens with the heat equation. Addendum: Note that $$\begin{eqnarray*} \frac{\partial^2}{\partial x^2} &=& \left(\frac{\partial}{\partial v} + \frac{\partial}{\partial w}\right)^2 \\ &=& \frac{\partial^2}{\partial v^2} + 2 \frac{\partial}{\partial v} \frac{\partial}{\partial w} + \frac{\partial^2}{\partial w^2} \\ \frac{\partial^2}{\partial y^2} &=& \left(\frac{\partial}{\partial v} - \frac{\partial}{\partial w}\right)^2 \\ &=& \frac{\partial^2}{\partial v^2} - 2 \frac{\partial}{\partial v} \frac{\partial}{\partial w} + \frac{\partial^2}{\partial w^2}. \\ \end{eqnarray*}$$ The transformed equation is  $$\left(\frac{\partial}{\partial v} + \frac{\partial}{\partial w}\right)u(v,w) = \left(\frac{\partial^2}{\partial v^2}     - 2 \frac{\partial}{\partial v} \frac{\partial}{\partial w}     + \frac{\partial^2}{\partial w^2}\right) u(v,w).$$ There are no nice cancelations. This transformation just makes the PDE harder to solve.
H: Two different characterizations of "differentiable function" In a calculus class we were given the following definition of "differentiable function" (working with 2 variables): Definition: Let $A \subseteq \mathbb{R}^2$, and $f : A \to \mathbb{R}$. We say that $f$ is differentiable in $(x_0, y_0) \in A$ if the graph of $f$ admits a tangent plane at $(x_0, y_0, f(x_0, y_0))$. Then the teacher gave us the following equivalent characterization: Proposition: $f$ is differentiable in $(x_0, y_0)$ iff 1) $f$ admits partial derivatives in $(x_0, y_0)$ 2) the following holds: $$ \lim_{(x,y) \to (x_0, y_0)} \frac{f(x,y) - f(x_0,y_0) - A(x-x_0) - B(y-y_0)}{\sqrt{(x-x_0)^2 + (y-y_0)^2}} = 0 $$ where $$A = \frac{\partial f}{\partial x}(x_0,y_0)$$ $$B = \frac{\partial f}{\partial y}(x_0,y_0)$$ i.e. the partial derivatives evaluated in $(x_0,y_0)$. Unluckily, I was not able to find any reference about this. So here are my questions: Did I get it right? Are the two definitions equivalent? How does one prove that the limit is zero iff the function admits a tangent plane? Isn't (2) quite obvious? If a tangent plane at $P$ exists, its equation has to be $$f(x,y) = f(P) - \frac{\partial f}{\partial x}(P)(x-x_P) - \frac{\partial f}{\partial y}(P)(y-y_P)$$ and it follows that the numerator of the limit is zero. So what's the point of the denominator? Couldn't one use anything else for the denominator? Am I missing something? Is (2) noteworthy? EDIT: As noted below, my "it's obvious from eq. of tangent plane" approach is, in fact, wrong. AI: The definition is stated in terms of "the graph of $f$ admits a tangent plane". Did your teacher precede these words by defining precisely what it means for a graph to admit a tangent plane? If not, then this "definition" was meant as an intuitive description of the concept, not as a definition in the mathematical sense. In the proposition, part 2) is actually the key, and part 1) could even be omitted. 
Namely, one can state the definition as follows: A function $f$ is differentiable at $(x_0,y_0)$ iff there exist numbers $A$ and $B$ such that $$\lim_{(x,y) \to (x_0, y_0)} \frac{f(x,y) - f(x_0,y_0) - A(x-x_0) - B(y-y_0)}{\sqrt{(x-x_0)^2 + (y-y_0)^2}} = 0$$ Using this definition one can prove (and this is a reasonable exercise to do) that both partial derivatives exist and are equal to $A$ and $B$, respectively. One can also give a geometric interpretation of the limit being zero. The equation $g(x,y)=f(x_0,y_0) + A(x-x_0) + B(y-y_0)$ is an equation of a plane. The difference $f(x,y)-g(x,y)$ measures how much the graph of the function deviates from this plane in the vertical direction. The limit being zero means that the vertical deviation at $(x,y)$ is small compared to the horizontal distance between $(x_0,y_0)$ and $(x,y)$: one pictures this as a surface and a plane meeting "at zero angle", that is, being tangent to each other. (But if at this point you ask me to define what being tangent means, I'll only point you back at the limit: the graphs of $f$ and $g$ are tangent if $$\lim_{(x,y) \to (x_0, y_0)} \frac{f(x,y) - g(x,y)}{\sqrt{(x-x_0)^2 + (y-y_0)^2}} = 0$$ The limit is the key; geometry is optional but helpful.) In your post you wrote down the equation of the tangent plane incorrectly, and, which is a more serious mistake, denoted it by the same letter $f$ as the function itself. This is what led you to the erroneous conclusion that "numerator is zero". I avoid this confusion by using the letter $g$ for the function that represents the tangent plane: $z=g(x,y)$ is my notation for the tangent plane.
H: Weak categoricity in first order logic In a certain sense, only finite structures are definable up to isomorphism in first order logic. But if we rely on a metatheory containing a sufficient strong set theory (like required for second order logic), would it be possible to also define certain infinite structures up to isomorphism by requiring additional conditions like minimality (i.e. conditions independent of the syntax and axioms)? The set theory would be required here in order to give meaning to the additional conditions (i.e. minimal might mean here that no proper subset of the model satisfies the given first order theory). AI: If you allow ANY kind of set-theoretic notions, then of course you can describe any model up to isomorphism, even without using any model-theoretic notions, by simply defining the set. As for your question, there is actually a notion of minimal model, that is, a model which has no proper elementary substructure, but it is not in general unique up to isomorphism. For example, the theory of additive group of integers has $\mathfrak c$ nonisomorphic minimal models. On the other hand, some theories do have a unique minimal model, for example the theory of natural numbers with successor. 
There are also other model-theoretic notions which are strong enough to uniquely define a model of a given complete theory, such as:
- a primal model of any theory, which can intuitively be described as one which can be inductively constructed by blocks defined in terms of all the previous ones by a simple formula
- a prime model of a countable theory, which is a model such that it can be embedded as an elementary substructure into any other model of the theory (nb a primal model is always prime)
- a saturated model in a given cardinality, which can be seen as a very rich model, one which has elements realizing any noncontradictory set of properties you can define without using very many elements of the model (I'm not entirely sure if we need any stronger assumptions if the theory is uncountable, but for countable complete theories a saturated model in a given cardinality is unique)
Note, however, that not every theory has a minimal, prime, or primal model; for example, the theory of additive integers has no prime model, and the theory of countably infinitely many independent equivalence relations with two classes has no minimal model. Assuming the existence of a strongly inaccessible cardinal, every theory has a saturated model in a strongly inaccessible cardinality (I know it to be true for countable theories, but it should probably still hold for any theories whose languages are smaller than some strongly inaccessible cardinal), but without it I think there might be theories without any (or at least I haven't heard of a theorem that would provide them for arbitrary theories without large cardinal assumptions). As Benedict pointed out, there are also some theories which have a unique model in a given cardinality $\kappa$, which are called $\kappa$-categorical. If a theory is $\kappa$-categorical (and $\kappa\geq \lvert T\rvert$), then the sole model of cardinality $\kappa$ is necessarily saturated, but cardinality is of course a more set-theoretic description. 
Another, more algebraic notion which can sometimes be used to uniquely describe a model of some theories is the acl-dimension. Some theories, called strongly minimal theories (examples of which are vector spaces over a fixed field or algebraically closed fields of fixed characteristic), have a well-behaved notion of independence generalizing both linear and algebraic independence, and for those theories a model of the theory is uniquely (up to isomorphism) determined by its dimension. In yet another direction, you can use infinitary logic. Similarly to how regular first-order logic can uniquely define finite models with a single sentence, if you allow countably infinite conjunctions you can uniquely define countable models with a single sentence (that is Scott's isomorphism theorem). For stronger infinitary languages, it is also possible for larger cardinalities. However, infinitary logic is quite a bit messier than the usual first-order logic, as far as I know.
H: Does this ODE initial value problem produce a beat or a resonance phenomenon? $$x''+9x=\sin(3t),$$ $$x(0)=x'(0)=0.$$ This question was asked on a test. We are allowed to solve differential equations with a TI-89. My steps: Solve with the TI-89; the solution is $$x(t) = \frac{1}{18} (\sin(3 t)-3 t \cos(3 t)) .$$ Plot the solution, then look at the graph and decide whether it's a beat or resonance. Apparently, we are not allowed to solve the equation. How can I decide whether this IVP produces a beat or resonance? AI: You are forcing the system at the natural frequency of the system ($\omega_0 = \pm 3$), so it is unlikely to get a beat (amplitude modulation when the excitation frequency differs from the system modes). Since there is a $t$ term in the response, the amplitude is unbounded. This is presumably what you call resonance.
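For the record, one can verify numerically that the TI-89 solution does satisfy the ODE, and that its amplitude grows linearly — the hallmark of resonance. A quick sketch (my own, not part of the original answer):

```python
import math

def x(t):
    # the solution reported by the TI-89
    return (math.sin(3 * t) - 3 * t * math.cos(3 * t)) / 18.0

def residual(t, h=1e-4):
    # numerical x''(t) + 9 x(t) - sin(3t); should be ~0 if x solves the ODE
    x2 = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    return x2 + 9 * x(t) - math.sin(3 * t)

print(max(abs(residual(t)) for t in (0.5, 2.0, 7.3)))  # tiny (finite-difference error only)
# the t*cos(3t) term gives an envelope ~ t/6 that grows without bound:
print(abs(x(10 * math.pi / 3)), abs(x(100 * math.pi / 3)))
```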
H: If $a\ge 0$ and $b\ge 0$, then $\sigma(ab)\subset\mathbb{R}^+$. This is an exercise in Murphy's book: Let $A$ be a unital $C^*$-algebra and $a,b$ are positive elements in $A$. Then $\sigma(ab)\subset\mathbb{R}^+$. The problem would be trivial if the algebra is abelian. On the other hand I do not have a clue for the non-abelian case. I guess one needs to use the fact $\sigma(ab)\cup\{0\}=\sigma(ba)\cup\{0\}$ and then maybe use some algebraic manipulation. Anyway, I wonder whether someone has a hint on this. I guess I am missing a trick. Better though, maybe someone has some general insight on the many techniques concerning positive elements and approximate identities. For me they all seem very tricky and mysterious. For instance, how can they think of those strange functions when proving the existence of approximate identities and quasicentral approximate identities. Thanks! AI: A common trick is to write a positive element of a C*-algebra as a square of a positive element: $b=\sqrt b\sqrt b$. Thus $\sigma(ab)\cup\{0\}=\sigma(a\sqrt b\sqrt b)\cup\{0\}=\sigma(\sqrt b a\sqrt b)\cup \{0\}$. If you haven't already seen it, you can think about why if $a$ is positive, then $xax^*$ is positive for all $x$.
H: Can we solve $2a(x^2-y^2)/(x-y)=b$ for $a$ without multiplying $b$ by $x-y$? I would like to know if it's possible to pull $a$ out of the following equation without multiplying $b$ by $(x-y)$ $$ \frac{ 2a(x^2 - y^2)}{x - y} = b $$ It's part of a more complex problem I'm stuck on. Cheers AI: Yes indeed, you have the identity $$ x^2 - y^2 = (x-y)(x+y) $$ So, $$\dfrac{2a (x^2-y^2)}{x-y}=b \Leftrightarrow 2a(x+y)=b $$
H: Asymptotic behaviour of $\sum_{p\leq x} \frac{1}{p^2}$ As the title suggests, I want to find the asymptotic behaviour of this sum as $x\rightarrow \infty$. I tried summation by parts but didn't succeed. I also tried using the asymptotic behaviour of the sum $$\sum_{p\leq x} \frac{1}{p} \sim_{x \to \infty} \log \log x$$ i.e. squaring both sides gives me: $$\sum_{p\leq x} \frac{1}{p^2} + \sum_{\substack{q,p\leq x\\p\neq q}} \frac{1}{pq} \sim_{x \to \infty} \log^2(\log x)$$ But then, how do I estimate the second term in the LHS? Thanks in advance. AI: The prime zeta function $P(s)$, for $\text{Real}(s) > 1$, is defined as $$P(s) = \sum_{p \text{ prime}} \dfrac1{p^s}$$ The sum converges for $\text{Real}(s) > 1$, similar to the $\zeta$-function. Your sum is a partial sum of $P(2)$, and it converges to $P(2) \approx 0.4522474200410654985065\ldots$ as $x\to\infty$. There are no "nice" values for $P(s)$ where $s \in \mathbb{Z}^+ \backslash \{1\}$. A very crude argument why there are no "nice" values for $P(s)$ is due to the fact that the function $$g(n) = \dfrac{\mathbb{I}_{n \text{ is a prime}}}{n^s}$$ is not a "nice" arithmetic function in the usual sense, i.e., it is not even multiplicative, for instance.
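A direct numerical check of $P(2)$ (my own sketch, not part of the original answer): sieve the primes up to some bound $N$ and sum $1/p^2$. The tail beyond $N$ is of size roughly $1/(N\log N)$, so $N=10^6$ already gives six or seven correct decimals:

```python
def prime_zeta_2(limit):
    # partial sum of P(2): sum of 1/p^2 over primes p <= limit
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return sum(1.0 / (p * p) for p in range(2, limit + 1) if is_prime[p])

print(prime_zeta_2(10**6))  # 0.452247... (true value 0.45224742...)
```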
H: Find the area of the regular polygon described given the side-length I am being asked to calculate the area of an equilateral triangle with a side-length of 15.5 inches. The formula for calculating a regular polygon's area is $\frac{1}{2}Pa$, where $P$ is the perimeter of the polygon and $a$ is the apothem. I am completely lost. AI: Hint: If you cut the equilateral triangle into two by bisecting an angle, you make two right triangles. Can you identify the base and height? Added: Look at the right triangles and try to find $x$
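Once you have worked through the hint (spoiler below): for an equilateral triangle of side $s$, the apothem is $a=\frac{s}{2\sqrt3}$, so the generic $\frac12 Pa$ formula reduces to the familiar $\frac{\sqrt3}{4}s^2$. A quick numerical cross-check of the two, with the numbers from this question:

```python
import math

s = 15.5                                  # side length in inches
apothem = s / (2 * math.sqrt(3))          # distance from the center to a side
perimeter = 3 * s
area_from_apothem = 0.5 * perimeter * apothem
area_direct = (math.sqrt(3) / 4) * s**2   # standard equilateral-triangle formula
print(area_from_apothem, area_direct)     # both ~ 104.03 square inches
```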
H: Proving the suprema of $\{b^r\mid x\geq r\in\mathbb{Q}\}$ and $\{b^r\mid x\gt r\in\mathbb{Q}\}$ are equal if $b\gt 1$ Please help me with the proof that $$\sup\{b^r\in \mathbb{R}\mid x\geq r\in \mathbb{Q}\} = \sup\{b^r\in \mathbb{R}\mid x\gt r\in \mathbb{Q}\}$$ where $1<b\in \mathbb{R}$ and $x\in \mathbb{R}$. AI: I'm guessing you've already observed that $r\mapsto b^r$ is increasing (otherwise you can show this), and as you mentioned in a comment, there is nothing to show if $x$ is not in $\mathbb Q$. Assume that $x$ is rational, and note that $b^x=\sup\{b^r:r\leq x\}\geq \sup\{b^r:r<x\}$. To finish is to show that $b^x$ is the least upper bound of $\{b^r:r<x\}$, which means that no smaller number is an upper bound. Suppose that $0<y<b^x$. Let $n$ be a positive integer such that $b^{1/n}<\dfrac{b^x}{y}$ (showing that such $n$ exists is a good exercise, the point being that $\dfrac{b^x}{y}>1$). Then $b^{x-1/n}>y$, so $y$ is not an upper bound for $\{b^r:r<x\}$.
H: How many $3\times 3$ binary matrices $X$ are there with determinant $0$ and $X^2=X^T$? How many $3 \times 3$ binary matrices $X$ are there with determinant as $0$ that also satisfy $X^2 = X^T$? AI: There are $2^9=512$ binary $3\times 3$ matrices. Of these, $7\times 6\times 4 = 168$ are invertible, so there are $344$ singular ones. Here's a somewhat naive way of going about it; I suspect there must be a clever/elegant way of doing it, but I couldn't think of one and then started going down this path. All computations are over $\mathbb{F}_2$. If the first row of $X$ is all zeros, then so is the first row of $X^2$, so the first column of $X$ must be all zeros. The matrix is of the form $$\left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & a & b\\ 0 & c & d \end{array}\right).$$ Then $$X^2 = \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & a^2+bc & b(a+d)\\ 0 & c(a+d) & bc+d^2 \end{array}\right).$$ Hence $a^2+bc = a+bc = a$, so $bc=0$. Since $b(a+d)=c$ and $c(a+d)=b$, they are both equal to $0$. Similar computations hold if the second or third row are equal to $0$. So the matrices that have at least one row equal to $0$ and satisfy the property are diagonal, with arbitrary values in the diagonal, at least one equal to $0$ (we cannot have the identity, because it is nonsingular). That shows that there are exactly $7$ matrices in which a row is equal to $0$. If no row is equal to $0$, but the matrix is singular, then either the matrix has a repeated row, or the third row is the sum (modulo $2$) of the first two rows. Suppose first that the matrix has the first and second row repeated; then the first two rows of $X^2$ are the same. Since the $(1,1)$ and $(2,1)$ entries of $X^2=X^T$ are equal, that means that the $(1,1)$ and $(1,2)$ entries of $X$ are equal. Since the $(1,2)$ and $(2,2)$ entries of $X^2$ are equal, that means that the $(2,1)$ and $(2,2)$ entries of $X$ are equal. 
And since the $(1,3)$ and $(2,3)$ entries of $X^2$ are equal, then the $(3,1)$ and $(3,2)$ entries of $X$ are equal. So we have: $$X = \left(\begin{array}{ccc} a & a & b\\ a & a & b\\ c & c & d \end{array}\right).$$ Then $$X^2 = \left(\begin{array}{ccc} a+a+bc & a+a+bc & ab+ab+bd\\ a+a+bc & a+a+bc & ab+ab+bd\\ ac+ac+dc & ac+ac+dc & bc+bc+d \end{array}\right).$$ Therefore, $bc=a$, $bd=c$, $dc=b$. So $c = bd = dcd = cd$. Therefore, either $c=0$ or $d=1$. If $c=0$, then $a=b=0$, and we have a row of zeros, which we are assuming we do not have. So $c=d=1$. Then $a=b=1$, so we just have the matrix of all $1$s. Likewise, if the first and third rows are equal, then the first and third rows of $X^2$ are equal, so the $(1,1)$ and $(3,1)$ entries of $X^2$ are equal, hence the $(1,1)$ and $(1,3)$ entries of $X$ are equal; the $(2,1)$ and $(2,3)$ entries are equal; and the $(3,1)$ and $(3,3)$ entries are equal, so $X$ would be $$X = \left(\begin{array}{ccc} a & b & a\\ c & d & c\\ a & b & a \end{array}\right).$$ Similar computations follow, and likewise if the second and third row are equal. So there is only one such singular matrix. Finally, if no row is zero and there are no repeated rows, then the third row must be the sum of the first two; hence, the same is true in $X^2$. That means that the third column of $X$ must be the sum of the first two, so we have: $$X = \left(\begin{array}{ccc} a & b & a+b\\ r& s & r+s\\ a+r & b+s & a+b+r+s \end{array}\right).$$ Then the $(1,1)$ entry of $X^2$ is $a+rb+(a+r)(a+b) = a+rb+a+ab+ar+rb = a(b+r)$, and it must equal the $(1,1)$ entry of $X^T$, which is $a$; so either $a=0$ or $b+r=1$. The $(1,2)$ entry of $X^2$ is $ab+bs+(a+b)(b+s) = ab+bs + ab+as + b + bs = as+b$, and it must be equal to $r$; so $as+b=r$. If $a=0$, this gives $b=r$, and since the first row $(0,b,b)$ cannot be zero, $b=r=1$. The $(2,2)$ entry of $X^2$ is $rb+s^2+(r+s)(b+s)=s(b+r)=0$, and it must equal $s$, so $s=0$. This yields the matrix $$X = \left(\begin{array}{ccc} 0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0 \end{array}\right)$$ (the all-ones matrix minus the identity), which one checks directly to be singular and to satisfy $X^2=X^T$. If instead $a=1$, then $b+r=1$, and $as+b=r$ gives $s=r+b=1$. Plugging in we have $$X = \left(\begin{array}{ccc} 1 & b & r\\ r & 1 & b\\ b & r & 1 \end{array}\right)$$ with $b+r=1$. A quick check shows both choices work (namely, $b=0$, $r=1$; or $b=1$, $r=0$). This case therefore gives three possibilities in all. 
So in total, if I didn't mess up, we have: seven possibilities with at least one row equal to $0$; one possibility with repeated rows; and three more possibilities with two linearly independent rows and the third row a linear combination of the other two, for a total of $11$ such matrices.
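Because there are only $2^9=512$ candidates, the whole count is easy to double-check by brute force, with all arithmetic over $\mathbb{F}_2$ as in the computations above. A sketch of such a check (the eleven solutions are the seven singular diagonal matrices, the all-ones matrix $J$, the circulants $I+P$ and $I+P^2$ for $P$ the cyclic permutation matrix, and $J-I$):

```python
from itertools import product

def mat_mul(A, B):
    # 3x3 matrix product over F_2
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(3)) % 2 for j in range(3))
        for i in range(3)
    )

def det_mod2(M):
    d = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
         - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
         + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    return d % 2

invertible = 0
solutions = []
for bits in product((0, 1), repeat=9):
    M = (bits[0:3], bits[3:6], bits[6:9])
    if det_mod2(M):
        invertible += 1
    elif mat_mul(M, M) == tuple(zip(*M)):   # X^2 == X^T, X singular
        solutions.append(M)

print(invertible, len(solutions))
```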
H: Simplifying a fraction? $$\frac {n-2}{n} \cdot \frac {n-3}{n-1} \cdot \frac {n-4}{n-2} \cdots \frac{2}{4} \cdot \frac{1}{3} = \frac {1}{n(n-1)}$$ Why is this true? Notice the denominators and numerators cancel out, but since they "aren't in sync" the first two denominators and the last two numerators will not be cancelled out. Considering this, shouldn't the product be: $$\frac{2}{n(n-1)}$$ What am I misunderstanding? AI: Yes, it should be $$\dfrac2{n(n-1)}$$ To see this take $n = 4$, we then get that $$\dfrac24 \cdot \dfrac13 = \dfrac{2}{4(4-1)}$$
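The corrected closed form $\frac{2}{n(n-1)}$ is easy to confirm with exact rational arithmetic; here is a quick sketch:

```python
from fractions import Fraction

def telescoped(n):
    # product (n-2)/n * (n-3)/(n-1) * ... * 2/4 * 1/3
    prod = Fraction(1)
    for k in range(1, n - 1):  # k-th factor is (n-1-k)/(n+1-k)
        prod *= Fraction(n - 1 - k, n + 1 - k)
    return prod

for n in range(4, 20):
    assert telescoped(n) == Fraction(2, n * (n - 1))
print(telescoped(4))  # 1/6, i.e. 2/(4*3)
```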
H: Help find hard integrals that evaluate to $59$? My father and I, on birthday cards, give mathematical equations for each others new age. This year, my father will be turning $59$. I want to try and make a definite integral that equals $59$. So far I can only think of ones that are easy to evaluate. I was wondering if anyone had a definite integral (preferably with no elementary antiderivative) that is difficult to evaluate and equals $59$? Make it as hard as possible, feel free to add whatever you want to it! AI: You might try the following: $$ \frac{64}{\pi^3} \int_0^\infty \frac{ (\ln x)^2 (15-2x)}{(x^4+1)(x^2+1)}\ dx $$
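One can confirm numerically that this really does evaluate to $59$. A sketch of mine (not the answerer's): fold $[1,\infty)$ onto $(0,1]$ via $x\mapsto 1/x$, substitute $x=e^{-s}$ so that the $(\ln x)^2$ factor becomes $s^2 e^{-s}$, and apply composite Simpson's rule:

```python
import math

def g(u):
    # integrand on (0,1] after folding [1,inf) back via x -> 1/x
    return (15 - 2 * u + 15 * u**4 - 2 * u**3) / ((u**4 + 1) * (u**2 + 1))

def f(s):
    # substitution u = exp(-s): (ln u)^2 du -> s^2 exp(-s) ds
    return s * s * math.exp(-s) * g(math.exp(-s))

# composite Simpson's rule on [0, 60]; the integrand decays like s^2 e^{-s}
n, a, b = 20000, 0.0, 60.0
h = (b - a) / n
total = f(a) + f(b)
for i in range(1, n):
    total += (4 if i % 2 else 2) * f(a + i * h)
integral = h / 3 * total

print(64 / math.pi**3 * integral)  # ~ 59.0
```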
H: Improper integral about exp appeared in Titchmarsh's book on the zeta function May I ask how to do the following integration? $$\int_0^\infty \frac{e^{-(\pi n^{2}/x) -(\pi t^2 x)}}{\sqrt{x}} dx $$ where $t>0$, $n$ a positive integer. This came up on page 32 of Titchmarsh's book, The Theory of the Riemann Zeta-Function. Specifically for the sum involving $b_n$, I am wondering how to multiply by $e^{-\pi t^{2} x}$ and integrate over $(0,\infty)$ AI: $$\begin{eqnarray*} \int_0^\infty \frac{dx}{\sqrt{x}}\, \exp\left(-\frac{\pi n^2}{x} - \pi t^2 x\right) &=& 2\sqrt{\frac{n}{t}} \int_0^\infty dz\, \exp\left(-\pi n t(z^2 + z^{-2})\right) \hspace{5ex} (\textrm{let } x=z^2 n/t) \\ &=& \sqrt{\frac{n}{t}} \int_{-\infty}^\infty ds\, e^{s/2} \exp\left(-2 \pi n t \cosh s\right) \hspace{5ex} (\textrm{let } z=e^{s/2}) \\ &=& 2\sqrt{\frac{n}{t}} \int_0^\infty ds\, \cosh\left(\frac{s}{2}\right) \exp\left(-2 \pi n t \cosh s\right) \\ &=& 2\sqrt{\frac{n}{t}} K_{\frac{1}{2}} (2\pi n t) \hspace{5ex} (\textrm{modified Bessel function, 2nd kind}) \\ &=& \frac{e^{-2\pi n t}}{t} \end{eqnarray*}$$ Addendum: An approach not involving special functions. $$\begin{eqnarray*} \int_0^\infty \frac{dx}{\sqrt{x}}\, \exp\left(-\frac{\pi n^2}{x} - \pi t^2 x\right) &=& 2\sqrt{\frac{n}{t}} \int_0^\infty dz\, \exp\left(-\pi n t(z^2 + z^{-2})\right) \hspace{5ex} (\textrm{as before}) \\ &=& 2\sqrt{\frac{n}{t}} \int_0^\infty dz\, \exp\left(-\pi n t(z-z^{-1})^2 - 2\pi n t\right) \\ &=& \sqrt{\frac{n}{t}} e^{-2\pi n t} \int_{-\infty}^\infty du\, \left(1+\frac{u}{\sqrt{u^2+4}}\right) e^{-\pi n t u^2} \hspace{4ex} (z-z^{-1}=u) \\ &=& \frac{e^{-2\pi n t}}{t} \hspace{5ex} (\textrm{odd integral vanishes; Gaussian left over}) \end{eqnarray*}$$
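For what it's worth, the closed form $e^{-2\pi n t}/t$ can be sanity-checked numerically. A sketch of mine, using the substitution $x=e^s$ and composite Simpson's rule:

```python
import math

def closed_form(n, t):
    return math.exp(-2 * math.pi * n * t) / t

def numeric(n, t, a=-12.0, b=8.0, steps=4000):
    # substitute x = e^s: dx / sqrt(x) = e^{s/2} ds, then Simpson's rule
    def f(s):
        x = math.exp(s)
        return math.exp(-math.pi * n * n / x - math.pi * t * t * x + s / 2)
    h = (b - a) / steps
    total = f(a) + f(b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h / 3 * total

print(numeric(1, 1), closed_form(1, 1))      # both ~ 0.0018674
print(numeric(2, 0.5), closed_form(2, 0.5))  # both ~ 0.0037349
```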
H: Quadratic System of Equations I'm trying to define a quadratic that can pass through any 3 points. I've obviously done something wrong but can't figure out where. Any help would be appreciated. $$ ax_1^2 + bx_1 + c = y_1 $$ $$ ax_2^2 + bx_2 + c = y_2 $$ $$ ax_3^2 + bx_3 + c = y_3 $$ Solve for C using the first equation 1$$ ax_1^2 + bx_1 + c = y_1 $$ 2$$ ax_1^2 + bx_1 - y_1 = -c $$ 3$$ -ax_1^2 - bx_1 + y_1 = c $$ 4$$ c = -ax_1^2 - bx_1 + y_1 $$ Now substitute C and solve be B using the second equation 5$$ ax_2^2 + bx_2 + c = y_2 $$ 6$$ ax_2^2 + bx_2 - ax_1^2 - bx_1 + y_1 = y_2 $$ 7$$ a(x_2^2 - x_1^2) + b(x_2 - x_1) = y_2 - y_1 $$ 8$$ b(x_2 - x_1) = y_2 - y_1 - a(x_2^2 - x_1^2) $$ 9$$ b = \frac{y_2 - y_1 - a(x_2^2 - x_1^2)}{(x_2 - x_1)} $$ Ok, now substitute B and C and solve A using the third equation 10$$ ax_3^2 + bx_3 + c = y_3 $$ $$ ax_3^2 + bx_3 - ax_1^2 - bx_1 + y_1 = y_3 $$ 11$$ ax_3^2 + x_3\left(\frac{y_2 - y_1 - a(x_2^2 - x_1^2)}{(x_2 - x_1)}\right) - ax_1^2 - x_1\left(\frac{y_2 - y_1 - a(x_2^2 - x_1^2)}{(x_2 - x_1)}\right) + y_1 = y_3 $$ 12$$ ax_3^2 - ax_1^2 + \frac{x_3(y_2 - y_1) - x_3a(x_2^2 - x_1^2)}{x_3(x_2 - x_1)} + \frac{-x_1(y_2 - y_1) + x_1a(x_2^2 - x_1^2)}{-x_1(x_2 - x_1)} = y_3 - y_1 $$ 13$$ ax_3^2 - ax_1^2 + \frac{-x_1x_3(y_2 - y_1) + x_1x_3a(x_2^2 - x_1^2)}{-x_1x_3(x_2 - x_1)} + \frac{-x_1x_3(y_2 - y_1)+ x_1x_3a(x_2^2 - x_1^2)}{-x_1x_3(x_2 - x_1)} = y_3 - y_1 $$ 14$$ ax_3^2 - ax_1^2 + \frac{-x_1x_3(y_2 - y_1) + x_1x_3a(x_2^2 - x_1^2) - x_1x_3(y_2 - y_1) + x_1x_3a(x_2^2 - x_1^2)}{-x_1x_3(x_2 - x_1)} = y_3 - y_1 $$ 15$$ ax_3^2 - ax_1^2 + \frac{(y_2 - y_1) - a(x_2^2 - x_1^2) + (y_2 - y_1) - a(x_2^2 - x_1^2)}{(x_2 - x_1)} = y_3 - y_1 $$ 16$$ a(x_3^2 - x_1^2) + \frac{2(y_2 - y_1) - 2a(x_2^2 - x_1^2)}{(x_2 - x_1)} = y_3 - y_1 $$ 17$$ a(x_3^2 - x_1^2) + \frac{2(y_2 - y_1)}{(x_2 - x_1)} + \frac{ -2a(x_2^2 - x_1^2) }{(x_2 - x_1)} = y_3 - y_1 $$ 18$$ a(x_3^2 - x_1^2) + \frac{2(y_2 - y_1)}{(x_2 - x_1)} -2a(x_2 - x_1) = y_3 - y_1 $$ 19$$ a(x_3^2 - x_1^2) 
-2a(x_2 - x_1) = y_3 - y_1 - \frac{2(y_2 - y_1)}{(x_2 - x_1)} $$ 20$$ a((x_3^2 - x_1^2) -2(x_2 - x_1)) = y_3 - y_1 - \frac{2(y_2 - y_1)}{(x_2 - x_1)} $$ 21$$ a = \left(y_3 - y_1 - \frac{2(y_2 - y_1)}{(x_2 - x_1)} \right) / \left((x_3^2 - x_1^2) -2(x_2 - x_1)\right) $$ AI: A simpler approach: from $a x_1^2 + b x_1 + c = y_1$ and $a x_2^2 + b x_2 + c = y_2$ you get $a (x_1^2 - x_2^2) + b (x_1 - x_2) = y_1 - y_2$, so (noting that $x_1^2 - x_2^2 = (x_1 - x_2)(x_1 + x_2)$), $b = \dfrac{y_1 - y_2}{x_1 - x_2} - a (x_1 + x_2)$. Similarly $b = \dfrac{y_1 - y_3}{x_1 - x_3} - a (x_1 + x_3)$. Subtract these: $$0 = \dfrac{y_1 - y_2}{x_1 - x_2} - \dfrac{y_1 - y_3}{x_1 - x_3} - a(x_2 - x_3)$$ so $$a = \dfrac{\dfrac{y_1 - y_2}{x_1 - x_2} - \dfrac{y_1 - y_3}{x_1 - x_3}}{x_2 - x_3}$$
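Implementing the answer's expression for $a$, together with the back-substitutions for $b$ and $c$, and testing on points taken from a known quadratic is a quick way to confirm the formulas. A sketch:

```python
def quadratic_through(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # coefficients of y = a x^2 + b x + c through the three points
    a = ((y1 - y2) / (x1 - x2) - (y1 - y3) / (x1 - x3)) / (x2 - x3)
    b = (y1 - y2) / (x1 - x2) - a * (x1 + x2)
    c = y1 - a * x1**2 - b * x1
    return a, b, c

# points taken from y = 2x^2 - 3x + 1
pts = [(0, 1), (2, 3), (-1, 6)]
a, b, c = quadratic_through(*pts)
print(a, b, c)  # 2.0 -3.0 1.0
```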
H: Area of a regular octagon with a side-length of 10 km I am asked to calculate the area of a regular octagon given the side-length of 10 km. I saw some examples saying that I should start by splitting the octagon into eight isosceles triangles, and that the length of the base would be 10 km, since we're given that the sides of the octagon are all 10km. What I don't know is what to do next. AI: I think that a different approach is easier: try cutting it up as in the rough sketch below. Once you work out the lengths of the legs of the right triangles, you should be able to calculate the area pretty easily.
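Once the right-triangle legs are worked out, each of the eight isosceles triangles has base $s=10$ and height equal to the apothem $a=\frac{s/2}{\tan(\pi/8)}$; the total should match the closed form $2(1+\sqrt2)s^2$ for a regular octagon. A numerical cross-check of mine:

```python
import math

s = 10.0                                   # side length in km
apothem = (s / 2) / math.tan(math.pi / 8)  # height of each of the 8 triangles
area_triangles = 8 * (0.5 * s * apothem)   # same as (1/2) * perimeter * apothem
area_closed_form = 2 * (1 + math.sqrt(2)) * s**2
print(area_triangles, area_closed_form)    # both ~ 482.84 km^2
```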
H: Find an ideal in $K[x,y]$ that is maximal but not principal. Let $K$ be a field. Find an ideal of $K[x,y]$ that is maximal but not principal. Prove your claims. (Here we are working in a commutative ring with $1\neq 0$.) My idea: Choose $K=\mathbb{Q}.$ Then we claim that an ideal $I\subset K[x,y]$ which is maximal but not principal is $I=(x,y)$. First I will prove that $(x,y)$ is a prime ideal of $\mathbb{Q}[x,y]$. Let $a,b\in (x,y)$ be such that $p=ab\in (x,y)$. Any element in $(x,y)$ is going to be of the form: $p=Ax+By$ where $A,B\in \mathbb{Q}[x,y].$ If $a'b'=0$ where $a',b'$ are constant terms of $a$ and $b$ respectively then since $\mathbb{Q}[x,y]$ is an integral domain either $a'=0$ or $b'=0.$ Hence either $a$ or $b$ is of the form $Ax+By$ where $A,B\in \mathbb{Q}[x,y]$ which in turn means that either $a\in (x,y)$ or $b\in (x,y)$. Now since $(x,y)$ is a prime ideal, $\mathbb{Q}[x,y]/(x,y)$ is an integral domain. Now if we can show that $\mathbb{Q}[x,y]/(x,y)$ is a field then we are done. Not sure how to show this. If $1\in \mathbb{Q}[x,y]/(x,y)$ then we can conclude that it is a field right? So, observe that $1-f(x,y)\in \mathbb{Q}[x,y]$ and $f(x,y)\in (x,y)$ then $1-f(x,y)+f(x,y)=1\in \mathbb{Q}[x,y]/(x,y)$. $(x,y)$ is not principal in $\mathbb{Q}[x,y]$. Observe that $(x,y)=\{xp(x,y)+yq(x,y)| p(x,y),q(x,y)\in \mathbb{Q}[x,y]\}$. Assume by way of contradiction that $(x,y)=(a(x,y))$ for some $a(x,y)\in \mathbb{Q}[x,y].$ Since $x\in (a(x,y))$ there must be $p(x,y)$ such that $x=p(x,y) a(x,y).$ Since degree $x$= degree $p(x,y)$+degree $a(x,y)$ we conclude that $p(x,y)$ must be a constant polynomial. I don't know how to continue from here. Also, if there is a simple proof for this problem please share it with me. Thank you. AI: First, your solution should work for every field, not a particular one, so you should not "choose $K=\mathbb{Q}$". 
Second: it's easier to prove that $(x,y)$ is a maximal ideal by showing that $K[x,y]/(x,y)$ is a field (which is very easy). Third: to show that $(x,y)$ is not principal, note that if $(x,y)=(a)$, then $a$ divides $x$ and $a$ divides $y$. What are the elements that divide $x$, and what are the elements that divide $y$? The only elements that divide $x$ are nonzero constants and nonzero constant multiples of $x$; the only elements that divide $y$ are nonzero constants and nonzero constant multiples of $y$. So then $a$ would have to be a nonzero constant, but then $(a)=(1)$, which is impossible since $(x,y)\neq (1)$.
H: Proving equipotency between sets. Prove that the set $]2,5]$ is equipotent with the set $[3,4[$ According to my book, I have to find the linear function that passes through the points $(2,4)$ and $(5,3)$. How do you do that? This is a particular case that doesn't seem to be explained in my book. ... and why a linear function? AI: To answer the last question first: any bijection between the two sets would do, but in this case it's easiest by far to write down a linear bijection. Your two sets are $]2,5]$ and $[3,4[$. Note that while the first interval is open on the left and closed on the right, the second is exactly the opposite: open on the right and closed on the left. The easiest way to match them up is to turn one of them around. I'll turn round the second. That means that I want to pair $5$, the closed end of the first interval, with $3$, the closed end of the second interval, and $2$, the open end of the first interval, with $4$, the open end of the second interval.

2            x            5
)-------------------------|
4            y            3

As $x$ increases from $2$ to $5$, I want $y$ to decrease from $4$ to $3$. In other words, I'm looking for the equation of a straight line that passes through the points $(2,4)$ and $(5,3)$. That means that $y$ has to drop by $1$ while $x$ increases by $3$, so the line has a slope of $-\frac13$ and passes through $(2,4)$. The standard point-slope form of the equation for a straight line gives us $$y-4=-\frac13(x-2)=-\frac13x+\frac23\;,$$ so $$y=-\frac13x+\frac{14}3=\frac13(14-x)\;.$$ If you draw a graph of this straight line, you'll see that it maps the interval $]2,5]$ $1$-$1$ onto the interval $[3,4[$. In other words, it's a bijection between those intervals, and its existence shows that they are equipotent.
H: Multivariable Limits Can someone help me calculate the following limits? 1) $ \displaystyle\lim _ {x \to 0 , y \to 0 } \frac{\sin(xy)}{\sqrt{x^2+y^2}} $ (it should equal zero, but I can't figure out how to compute it ) . 2) $\displaystyle\lim_ {(x,y)\to (0,\frac{\pi}{2} )} (1-\cos(x+y) ) ^{\tan(x+y)} $ (it should equal $1/e$). 3) $ \displaystyle\lim_{(x,y) \to (0,0) } \frac{x^2 y }{x^2 + y^4 } $ (which should equal zero). 4) $\displaystyle \lim_{(x,y) \to (0,1) } (1+3x^2 y )^ \frac{1}{x^2 (1+y) } $ (which should equal $e^{3/2}$ ). Any help would be great ! Thanks in advance ! AI: Hints: For problem $1$, use the fact that $|\sin t|\le |t|$. Then to show that the limit of $\frac{xy}{\sqrt{x^2+y^2}}$ is $0$, switch to polar coordinates. For problem $3$, it is handy to divide top and bottom by $x^2$, taking care separately of the difficulty when $x=0$. For problem $2$, write $\tan(x+y)$ in terms of $\cos(x+y)$ and $\sin(x+y)$. Be careful about which side of $\pi/2$ the number $x+y$ is. For problem $4$, it is useful to adjust the exponent so that it is $\frac{1}{3x^2y}$, or $\frac{1}{x^2y}$. In $2$ and $4$, you may want to take the logarithm, and calculate the limit of that, though it is not necessary.
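The hints can be sanity-checked numerically by evaluating each expression along a path into the limit point (a one-path check is of course only a plausibility test, not a proof). A sketch:

```python
import math

# 1) along (x, y) = (t, t):  sin(t^2)/sqrt(2 t^2) -> 0
f1 = lambda t: math.sin(t * t) / math.sqrt(2 * t * t)

# 2) with u = x + y -> pi/2 (the expression depends on u only):
#    (1 - cos u)^tan(u) -> 1/e
f2 = lambda u: (1 - math.cos(u)) ** math.tan(u)

# 3) along (x, y) = (t, t):  t^3/(t^2 + t^4) -> 0
f3 = lambda t: t**3 / (t**2 + t**4)

# 4) along (x, y) = (t, 1):  (1 + 3 t^2)^(1/(2 t^2)) -> e^(3/2)
f4 = lambda t: (1 + 3 * t**2) ** (1 / (2 * t**2))

print(f1(1e-5), f3(1e-5))                  # both ~ 0
print(f2(math.pi / 2 - 1e-6), 1 / math.e)  # ~ 0.36788
print(f4(1e-4), math.exp(1.5))             # ~ 4.48169
```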
H: Solving non-homogeneous recurrences I know how to get the explicit formula for sequences defined by homogeneous recurrences, more or less. What I do is get the characteristic equation, get the solutions and then solve a system to obtain the values of the constants $A,B,C,\ldots$ to build the explicit formula. ... But what if the recurrence is non-homogeneous? Particularly, this question: Determine the explicit formula for the sequence defined by the recurrence $a_n=a_{n-1}+5$ with $a_1=3$, which, apparently, is non-homogeneous. AI: If you have that $a_n=a_{n-1}+5$, with $a_1=3$, then you have that $a_n=5(n-1)+3$, because $$a_n=a_{n-1}+5=(a_{n-2}+5)+5=a_{n-2}+10=(a_{n-3}+5)+10=a_{n-3}+15=\cdots=a_1+5(n-1).$$
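A two-line check of the closed form against the recurrence (sketch):

```python
def a_recursive(n):
    a = 3                # a_1 = 3
    for _ in range(n - 1):
        a += 5           # a_n = a_{n-1} + 5
    return a

for n in range(1, 50):
    assert a_recursive(n) == 5 * (n - 1) + 3
print(a_recursive(10))   # 48
```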
H: Orthogonal Trajectories I am asked to show that the given families of curves are orthogonal trajectories of each other. $$x^2+y^2=ax$$ $$x^2+y^2=by$$ I know that two functions are called orthogonal if at every point their tangents lines are perpendicular to each other. If I differentiate both of these functions, and the resulting expressions are reciprocals of one another, I have shown that they are orthogonal trajectories of each other. 1. $$x^2+y^2=ax$$ $$2x+2yy'=a'x+x'a$$ $$y'=\frac{a-x}{2y}$$ $$x^2+y^2=by$$ $$2x+2yy'=b'y+y'b$$ $$y'=\frac{b-2x}{y}$$ The results I get from differentiating these two functions don't seem to be reciprocals of each other. I am wondering if I have differentiated these two functions incorrectly, or if there is a point substitution that will show these two are reciprocals. AI: Suppose your two curves are defined by $x^2+y^2=ax$ and $x^2+y^2=by$ , with $a,b$ real constants. To compare the tangent line slopes at given points for each curve, we differentiate the first equation to find that $$ 2x + 2yy' = a~~\text{and so}~~y' = \frac{a-2x}{2y}~~, $$ and the second to find that $$ 2x + 2yy' = by' ~~\text{and so}~~ y' = \frac{2x}{b-2y}~~. $$ These answers are different from the ones you got -- note that what I did was group terms containing $y'$ on one side of the equation, and divide through by whatever factor accompanied it. We are in business so long as the product of these two quantities, for a given pair $(x,y)$ where the curves intersect, is $-1$. So, multiply one by the other and we get $$ \frac{2x(a-2x)}{2y(b-2y)} = \frac{x(a-2x)}{y(b-2y)} ~~. $$ I believe the problem you ran into is that it is not clear (in an algebraic sense anyways) that these factors should cancel in any way. But remember -- since we are looking at a point where the two curves we were given intersect, we may apply both of those equations. Where does $a-2x$ appear? 
Well, we have $x^2 + y^2 = ax$, so that $ax - x^2 = y^2$, $ax - 2x^2 = y^2 - x^2$, and $x(a-2x) = y^2-x^2$. Now bear with me; while this may not look simpler, observe that by the same token, $$ x^2 + y^2 = by ~~\text{means that}~~ by-2y^2 = x^2-y^2 ~~\text{and so}~~ y(b-2y) = x^2-y^2 ~~. $$ The factors on top and bottom are indeed the same aside from a sign switch, so the two slopes are negative reciprocals of each other. BTW: A helpful, and even pretty, exercise is to actually plot some of these orthogonal curves. In this case, you'll notice that the first family is circles with an $x$-offset of the center from the origin, while the second family is circles with a $y$-offset. Is it clear why these are orthogonal?
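As the closing exercise suggests, it is illuminating to check the slope product at actual intersection points. Subtracting the two circle equations gives $ax=by$, so $y=(a/b)x$, and substituting back yields $x=ab^2/(a^2+b^2)$; exact rational arithmetic then confirms $m_1 m_2=-1$ there. A sketch of mine:

```python
from fractions import Fraction

def intersection(a, b):
    # nonzero common point of x^2+y^2 = a x and x^2+y^2 = b y
    x = Fraction(a * b * b, a * a + b * b)
    y = Fraction(a, b) * x
    return x, y

for a, b in ((2, 3), (1, 5), (4, 7)):
    x, y = intersection(a, b)
    m1 = (a - 2 * x) / (2 * y)   # slope of x^2+y^2 = a x at (x, y)
    m2 = (2 * x) / (b - 2 * y)   # slope of x^2+y^2 = b y at (x, y)
    assert m1 * m2 == -1
print("tangent slopes multiply to -1 at each checked intersection")
```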
H: Convergence for expectation $X_n$ converges to $X$ in $L^1$, then $\lim_n\sup_H|EX_n1_H-EX1_H|=0$. I want to prove it, is the following proof right? $$\lim|EX_n1_H-EX1_H|=\lim|E(X_n-X)1_H|=|\lim E(X_n-X)1_H|\\=|E\lim(X_n-X)1_H|=0$$ It's true for all $H$. So, $\lim\sup_H|EX_n1_H-EX1_H|=0$ And I am also confused about how to get $|\lim E(X_n-X)1_H|=|E\lim(X_n-X)1_H|=0$ by the dominated convergence theorem. AI: For every $H$, $|E(X_n1_H)-E(X1_H)|\leqslant E(|X_n-X|\cdot1_H)\leqslant E(|X_n-X|)=\|X_n-X\|_1$ hence $\sup\limits_H|E(X_n1_H)-E(X1_H)|\leqslant\|X_n-X\|_p$ for every $p\geqslant1$. As a consequence, if $X_n\to X$ in $L^p$, then $\lim\limits_{n\to\infty}\sup\limits_H|E(X_n1_H)-E(X1_H)|=0$.
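On a finite sample space the "sup over all events $H$" can be enumerated exactly, so the key inequality $\sup_H|E(X_n1_H)-E(X1_H)|\leqslant\|X_n-X\|_1$ can be checked directly. A sketch (the uniform five-point space and the particular values are my own toy choices):

```python
import itertools

# Finite sample space {0,...,4} with uniform probabilities.
omega = range(5)
p = [0.2] * 5
X  = [1.0, -2.0, 0.5, 3.0, -1.0]
Xn = [1.1, -1.8, 0.4, 3.2, -1.3]   # an L^1-perturbation of X

def E(Y, H):
    """E[Y 1_H] on the finite space."""
    return sum(p[w] * Y[w] for w in H)

l1 = sum(p[w] * abs(Xn[w] - X[w]) for w in omega)   # ||X_n - X||_1

# sup over all 2^5 events H of |E[X_n 1_H] - E[X 1_H]|
sup_H = max(
    abs(E(Xn, H) - E(X, H))
    for r in range(len(p) + 1)
    for H in itertools.combinations(omega, r)
)
print(sup_H, l1)  # the sup never exceeds the L^1 distance
```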
H: Seminorm exercise Can you tell me if my answer is correct? It's another exercise suggested in my lecture notes. Exercise: Consider $C[-1,1]$ with the sup norm $\|\cdot\|_\infty$. Let $$ W = \{f \in C[-1,1] \mid \int_0^1 f d\mu = \int_{-1}^0 f d \mu = 0 \}$$ Show that $W$ is a closed subspace. Let $f(x) = x$ and calculate $\|f\|_{V/W} = \inf_{w \in W} \|f + w \|_\infty$ and show that the infimum is not achieved. My answer: To show that $W$ is closed we show that if $f$ is a limit point of $W$ then $f \in W$. So let $f$ be a limit point. Then there is a sequence $w_n \in W$ converging to $f$, i.e. for $\varepsilon > 0$ there is $w_n$ such that $\|f - w_n\|_\infty < \varepsilon$. Hence for $\varepsilon > 0$, $\int_{0}^1 f d \mu = \int_0^1 (f + w_n - w_n ) d \mu \leq \int_0^1 |f-w_n| + \int_0^1 w_n = \int_0^1 |f-w_n| \leq \|f-w_n\|_\infty \leq \varepsilon$. Let $\varepsilon \to 0$. Same argument for $[-1,0]$. Now we compute the norm: $$ \|x\|_{V/W} = \inf_{w \in W} \|x + w\|_\infty = \inf_{w \in W} \sup_{x \in [-1,1]} |x + w(x)|$$ $\|x + w\|_\infty$ is smallest for $w(x) = -x$. But $-x \notin W$. I'm not so sure about the second part. Is this what is meant by showing that the infimum is not achieved? "$\|x + w\|$ is smallest for $w(x) = -x$" seems a bit... wobbly. Thanks for your help. AI: Your idea for the first part is correct but the inequalities you write are odd. Try $$ \left|\,\int_0^1f\,\right|=\left|\int_0^1(f-w_n)\right|\leqslant\int_0^1|f-w_n|\leqslant\|f-w_n\|_\infty\to0, $$ and similarly for the interval $[-1,0]$. Regarding the second part, one would like to use the function $w_0:x\mapsto x-\frac12\mathrm{sgn}(x)$ to approximate $u:x\mapsto x$ but, despite the fact that $\int\limits_{-1}^0w_0=\int\limits_0^1w_0=0$, one cannot because $w_0$ is not continuous at zero. Hence $w_0$ is not in $W$ but the task is to show that $w_0$ indeed provides the infimum $\|u\|_{V/W}$. 
Note that $u(x)-w_0(x)=-\frac12\mathrm{sgn}(x)$ for every $x$ hence $\|u-w_0\|_\infty=\frac12$. Call $W_0\supset W$ the set of integrable functions $w$ such that $\int\limits_{-1}^0w=\int\limits_0^1w=0$. For every $w$ in $W_0$, $\int\limits_0^1(u-w)=\frac12$ hence there exists some $x\geqslant0$ such that $u(x)-w(x)\geqslant\frac12$. This proves that $\|u-w\|_\infty\geqslant\frac12$ for every $w$ in $W_0$, and in particular for every $w$ in $W$, hence $\|u\|_{V/W}\geqslant\frac12$. Furthermore, for any $w$ in $W$, the condition $\|u-w\|_\infty=\frac12$ implies that $u(x)-w(x)\leqslant\frac12$ for every $x$ in $[0,1]$. Since $\int\limits_0^1(u-w)=\frac12$ and $u-w$ is continuous, $u(x)-w(x)=\frac12$ for every $x$ in $[0,1]$. Likewise, $u(x)-w(x)=-\frac12$ for every $x$ in $[-1,0]$. These two conditions are incompatible at $x=0$ hence there is no function $w$ in $W$ such that $\|u-w\|_\infty=\frac12$. Finally, one can modify $w_0$ to get some function $w_\varepsilon$ in $W$ such that $\|u-w_\varepsilon\|_\infty\leqslant\|u-w_0\|_\infty+\varepsilon=\frac12+\varepsilon$ hence $\|u\|_{V/W}=\frac12$. For example, one can consider the unique $w_\varepsilon$ in $W$ which is affine on $x\leqslant-\varepsilon$ and on $x\geqslant\varepsilon$ and such that $w_\varepsilon(x)=-x/(2\varepsilon)$ on $|x|\leqslant\varepsilon$. Edit: For the last step, one could try to use the approximation of $w_0$ by its Fourier series, that is, to consider $w_n(x)=-\sum\limits_{k=1}^n\frac{\sin(2k\pi x)}{\pi k}$. Unfortunately, due to Gibbs phenomenon, this choice leads to $\|w_n-u\|_\infty$ converging to $\frac12+a$ where $a\approx0.089490$, instead of the desired limit $\frac12$.
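The Gibbs phenomenon mentioned in the Edit can be seen numerically: with $u(x)=x$ and the partial Fourier sums $w_N$, the quantity $\max|u-w_N|$ stalls near $\frac12+0.0895\approx0.589$ rather than approaching $\frac12$. A sketch ($N=50$ and the grid are my own choices; the maximum sits near the jump at $0$, at distance about $1/(2N)$, so a fine grid on $[0,0.1]$ captures it):

```python
import math

N = 50  # number of Fourier terms

def w(x):
    # Partial Fourier sum w_N(x) = -sum_{k<=N} sin(2*pi*k*x)/(pi*k)
    return -sum(math.sin(2 * math.pi * k * x) / (math.pi * k)
                for k in range(1, N + 1))

grid = [i * 0.1 / 5000 for i in range(1, 5001)]
m = max(abs(x - w(x)) for x in grid)
print(m)  # close to 0.5 + 0.0895, not 0.5
```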
H: What functions maintain inequality? In my calculus book it mentions that increasing functions maintain inequality relations and that's the reason you can apply $\exp$ and $\ln$ to two sides of an inequality to solve them. Is there some general classification for the types of functions that maintain inequality? For instance are they all 1 to 1 or have some other property in common? AI: In addition to m. k.'s answer, there is one very valuable criterion: If a function on an interval is continuous, it is strictly monotonic if and only if it is injective (this is a consequence of the Intermediate Value Theorem). Therefore, a continuous function $f: [a,b] \to \mathbb R$ will maintain strict inequalities if and only if $f(a) < f(b)$ and it is injective.
H: Orthogonal projection to closed, convex subset in a Hilbert space I don't understand one step in the proof of the following lemma (Projektionssatz): Let $X$ be a Hilbert space with scalar product $(\cdot)_X$ and let $A\subset X$ be convex and closed. Then there is a unique map $P:X\rightarrow A$ that satisfies: $\|x-P(x)\| = \inf_{y\in A} \|x- y\|$. This is equivalent to the following statement: (1) For all $a\in A$ and fixed $x\in X$, $\mbox{Re}\bigl( x-P(x), a-P(x) \bigr)_X \le 0$. I don't understand the following step in the proof that (1) implies the properties of $P$: Let $a\in A$. Then $\|x-P(x)\|^2 + 2\mbox{Re}\bigl( x-P(x), P(x)-a \bigr)_X + \|P(x)-a\|^2 \ge \|x-P(x)\|^2$. I don't understand the "$\ge$". How do we get rid of the term $\|P(x)-a\|$ on the left hand side? Thank you very much! AI: It is non-negative! In fact both extra terms on the left-hand side are: $\|P(x)-a\|^2\ge 0$ trivially, and $2\mbox{Re}\bigl( x-P(x), P(x)-a \bigr)_X \ge 0$ by (1), since $P(x)-a=-(a-P(x))$ flips the sign. Dropping non-negative terms can only decrease the left-hand side, hence the "$\ge$".
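Property (1) is easy to visualize in the concrete Hilbert space $\mathbb{R}^2$ with $A$ the closed unit disk, where the projection has a closed form. A sketch (the set $A$, the point $x$, and the grid of test points $a$ are all my own choices):

```python
import math

# Projection onto the closed unit disk: x itself if |x| <= 1, else x/|x|.
def project(x):
    r = math.hypot(x[0], x[1])
    return x if r <= 1 else (x[0] / r, x[1] / r)

x = (3.0, 4.0)            # a point outside A
Px = project(x)           # = (0.6, 0.8)

# Check Re(x - P(x), a - P(x)) <= 0 over a grid of points a in the disk.
worst = max(
    (x[0] - Px[0]) * (a0 - Px[0]) + (x[1] - Px[1]) * (a1 - Px[1])
    for a0 in [i / 10 - 1 for i in range(21)]
    for a1 in [j / 10 - 1 for j in range(21)]
    if a0 * a0 + a1 * a1 <= 1
)
print(worst)  # never positive (up to rounding)
```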
H: Two Lie algebras associated to $GL(n,\mathbb{C})$ I have elementary questions about Lie groups and their associated Lie algebras. Let $G=GL(n,\mathbb{C})$. Then associated to this Lie group is the Lie algebra $M_n(\mathbb{C})$ with the commutator relation $[x,y]=xy-yx$ or we can define its Lie algebra to be $M_n(\mathbb{C})$ with left-invariant vector fields. When would we want to use one over the other? Are they equivalent? Is the Lie algebra with left-invariant vector fields only defined over $\mathbb{R}$? Why aren't we working with right-invariant vector fields? $M_n(\mathbb{C})$ can be thought of as $n^2$-dimensional complex Lie algebra or $2n^2$-dimensional real Lie algebra with complex structure. What is an example of an even dimensional real Lie algebra that cannot have a complex structure? Does there exist such an example using a subset of matrices? Could there exist other Lie algebras (other than the ones mentioned above) for $GL(n,\mathbb{C})$? Thank you. AI: Let me answer each of your questions: (1) The first characterization of the Lie algebra of $G$ is convenient because it is concrete, i.e., it is very useful in practice. For example, one would use the first characterization of the Lie algebra of $G$ when one explicitly wishes to compute the differentials of Lie algebra homomorphisms. However, the second characterization of the Lie algebra of $G$ is convenient because it is abstract, i.e., it is very useful in theory. The two characterizations are equivalent. The proof of this equivalence is a standard result in Lie theory. The Lie algebra of $G$ is a complex Lie algebra (because, for example, $G$ is a complex Lie group). (2) We could certainly work with right-invariant vector fields and the theory would remain the same.
The reason for the preference of "left" in place of "right" is the same as the reason for the preference of composing functions from right to left rather than from left to right; however, this is more a matter of tradition than a matter of mathematics. (3) Let $\mathfrak{u}(n)$ denote the real vector space consisting of all $n\times n$-matrices with complex entries that are skew-hermitian. Exercise 1: Prove that $\mathfrak{u}(n)$ is a real Lie algebra but that it is not even a complex vector space and thus cannot be a complex Lie algebra. Exercise 2: Prove that $\mathfrak{u}(n)$ is the Lie algebra of $U(n)$. Exercise 3: Prove that the dimension of the real Lie algebra $\mathfrak{u}(n)$ is $n^2$ (the diagonal contributes $n$ purely imaginary entries, and each of the $\frac{n(n-1)}{2}$ off-diagonal pairs contributes one free complex entry). In particular, the real Lie algebra $\mathfrak{u}(n)$ is even-dimensional if and only if $n$ is even. (4) No. A Lie group $G$ has a unique Lie algebra by definition. Edit: Jim Conant (rightly) pointed out below in the comments that one can associate more than one Lie algebra to a Lie group. In addition to the "standard Lie algebra of a Lie group", one can associate the Lie algebra consisting of all smooth vector fields on the Lie group $G$. In fact, this construction is valid even when $G$ is not necessarily a Lie group but only a smooth manifold. Exercise 4: Prove that the Lie algebra consisting of all smooth vector fields on the smooth manifold $M$ is equivalent to the Lie algebra consisting of all derivations $C^{\infty}(M)\to C^{\infty}(M)$. Exercise 5: Prove that the Lie algebra of a Lie group $G$ is a subalgebra of the Lie algebra of $G$ consisting of all smooth vector fields on $G$. I hope this helps! I appreciate that some of my explanations are probably not as complete as is required for someone asking these questions. Please feel free to ask for further explanation if you wish.
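For concreteness, here is a small pure-Python script (my own construction, for $n=3$) that writes down an explicit real basis of $\mathfrak{u}(n)$, verifies the skew-Hermitian condition, checks that the bracket of two basis elements stays in $\mathfrak{u}(n)$ while $i\cdot A$ does not, and confirms that the count comes out to $n^2=9$ real parameters:

```python
# Real basis of u(n), the skew-Hermitian n x n complex matrices.
n = 3

def zeros():
    return [[0j] * n for _ in range(n)]

basis = []
for i in range(n):                      # purely imaginary diagonal entries
    M = zeros(); M[i][i] = 1j
    basis.append(M)
for i in range(n):                      # two real parameters per off-diagonal pair
    for j in range(i + 1, n):
        M = zeros(); M[i][j] = 1;  M[j][i] = -1    # real antisymmetric part
        basis.append(M)
        M = zeros(); M[i][j] = 1j; M[j][i] = 1j    # imaginary symmetric part
        basis.append(M)

def is_skew_hermitian(M):
    return all(M[j][i] == -M[i][j].conjugate() for i in range(n) for j in range(n))

def bracket(A, B):
    C = zeros()
    for i in range(n):
        for j in range(n):
            C[i][j] = sum(A[i][k] * B[k][j] - B[i][k] * A[k][j] for k in range(n))
    return C

A, B = basis[0], basis[n]               # two sample basis elements
iA = [[1j * z for z in row] for row in A]
print(len(basis))                       # n^2 = 9 real parameters
```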
H: finding all integer $n$ such that $ n\mid2^{n!}-1$ how to find all integer $n$ such that $ n\mid2^{n!}-1$ I find: Of course $2 \nmid n$. We prove that, if $2 \nmid n$ then $n \mid 2^{n!}-1$. $2 \nmid n \iff n = 2k+1 , k \ge 0$, we'll prove: $2^{(2k+1)!} \equiv 1\pmod{2k+1}$ Let $n = p_1^{a_1}\cdot p_2^{a_2} \cdot ... \cdot p_s^{a_s}$, we'll prove that: $2^{(2k+1)!} \equiv 1 \pmod{p_1^{a_1}}$ Let $t = ord_{p_1^{a_1}}2 \iff 2^t \equiv 1\pmod{p_1^{a_1}}$, so: $t \mid (2k+1)! \iff (2k+1)! = l\cdot t \iff l = \frac{(2k+1)!}{t} \in \mathbb{Z}_{+}$ And $2^t \equiv 1\pmod{p_1^{a_1}}/^l \Rightarrow 2^{(2k+1)!} \equiv 1\pmod{p_1^{a_1}}$ Analogously we show divisibility $2^{(2k+1)!}-1$ by $p_2^{a_2} , ... , p_s^{a_s}$ AI: For odd $n$ we have $\gcd(2,n)=1$, so Euler's theorem gives $2^{\phi(n)} \equiv 1 \pmod n$; moreover $\phi(n) \le n$, so $\phi(n)$ must be a factor of $n!$, that is $x\phi(n)=n!$ for some $x.$ $2^{\phi(n)x} \equiv 2^{n!} \equiv 1 \pmod n$ Hence, any odd $n$ will satisfy the given relation.
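An empirical check of the conclusion is cheap, since Python's three-argument `pow` computes $2^{n!} \bmod n$ without ever forming $2^{n!}$ (the range bound 16 is arbitrary):

```python
import math

def divides(n):
    # True iff n | 2^(n!) - 1, i.e. 2^(n!) = 1 (mod n)
    return pow(2, math.factorial(n), n) == 1

odd_all  = all(divides(n) for n in range(3, 16, 2))   # n = 1 holds trivially
even_any = any(divides(n) for n in range(2, 16, 2))   # 2^(n!) - 1 is odd
print(odd_all, even_any)  # True False
```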
H: Fourier transformation of sin, cos, sinh and cosh I am trying to solve the following exercise Use $\mathcal{F}(e^{xb}) = 2\pi \delta_{ib}$ to calculate the Fourier-Transformation of $\sin x$, $\cos x$, $\sinh x$ and $\cosh x$ Now I am a little bit confused, because the fourier transformation of $\sin x$ is simply $\sin x$, of $\cos x$ is $\cos x$, of $\sinh x$ it is $-i \sin(ix)$ and of $\cosh(x)$ it is $\cos(ix)$ simply by definition, so why should I use the relation $\mathcal{F}(e^{xb}) = 2\pi \delta_{ib}$? AI: Because there are formulae: $$\sin x=\frac{e^{ix}-e^{-ix}}{2i}\quad\cos x=\frac{e^{ix}+e^{-ix}}2$$ $$\sinh x=\frac{e^x-e^{-x}}2\quad\cosh x=\frac{e^x+e^{-x}}2$$ Use them and the Fourier transform of $e^{bx}$
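Carrying this through (a sketch: I follow the stated convention $\mathcal{F}(e^{xb}) = 2\pi \delta_{ib}$ literally, so $b=\pm i$ produces $2\pi\delta_{\mp 1}$ and $b=\pm 1$ produces $2\pi\delta_{\pm i}$; other Fourier conventions permute the subscripts):

```latex
\mathcal{F}(\cos x) = \pi\left(\delta_{1}+\delta_{-1}\right), \qquad
\mathcal{F}(\sin x) = i\pi\left(\delta_{1}-\delta_{-1}\right),
\mathcal{F}(\cosh x) = \pi\left(\delta_{i}+\delta_{-i}\right), \qquad
\mathcal{F}(\sinh x) = \pi\left(\delta_{i}-\delta_{-i}\right).
```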
H: To show $f$ is continuous Let $f:[0,1]\rightarrow \mathbb{R}$ be such that for every sequence $x_n\in [0,1]$, whenever both $x_n$ and $f(x_n)$ converge, we have $$\lim_{n\rightarrow\infty} f(x_n)=f(\lim_{n\rightarrow\infty}x_n),$$ we need to prove $f$ is continuous. Well, I take $x_n$ and $y_n$ in $[0,1]$ such that $|(x_n-y_n)|\rightarrow 0$, and the given condition holds. Now it is enough to show $|f(x_n)-f(y_n)|\rightarrow 0$ I construct a new sequence $$z_1=x_1$$ $$z_2=y_1$$ $$\dots$$ $$z_{2n-1}=x_n$$ and $$ z_{2n}=y_n$$ We see, that subsequence of $f(z_n)$ converges so it must be convergent to the same limit. Am I going in right path? please help. AI: I will prove a different claim because, as pointed out in the comments, the statement as given is contradicted by a counter-example. Let $f:[0,1]→\mathbb{R}$ be such that for every sequence $x_n∈[0,1]$, whenever $(x_n)$ converges, we have $ \lim\limits_{n→∞}f(x_n)=f \left(\lim\limits_{n→∞}x_n \right)$; then $f$ is continuous on $[0,1]$. I think the best way is to use proof by contradiction. Assume $f$ is not continuous at $c \in [0,1]$; then there exists $ \epsilon_{0} > 0 $ such that for all $ n \in \mathbb{N} $ there exists $ x_{n} \in (c-1/n,c+1/n) \cap [0,1] $ such that $|f(x_n)-f(c)| \geq \epsilon_{0}>0 $ Obviously $ ( x_n )$ converges to $c$ but $(f(x_n)) $ does not converge to $f(c)$ ( Note that all the terms of $(f(x_n))$ are a positive distance away from $f(c)$ ) which is a contradiction with the given property of the function. Since our choice of $c$ was arbitrary, we have that $f$ is continuous on $[0,1]$
H: What can we say about transport equation? If we have a transport equation, i.e. $$u_t + \vec{b} \cdot D_x u=0,$$ is it true that at some point the particular directional derivative of $u$ becomes $0$? I didn't understand why. As Evans says that this property can be exploited to get the solution, but I didn't get how. Any help will be appreciated. Thank you. AI: Here I assume you are talking about the transport equation on $\Omega \subset \mathbb{R}^n \times \mathbb{R}$, so that $$u: \Omega \longrightarrow \mathbb{R}.$$ The transport equation can be rewritten as $$(\vec{b},1) \cdot D_{(x,t)} u = 0,$$ which is the directional derivative of $u$ in the direction of $(\vec{b},1)$. So the transport equation tells us that $u$ is constant along lines parallel to $(\vec{b}, 1)$ in $\mathbb{R}^{n+1}$. As stated, however, the transport equation is not well-posed, as any constant function is a solution. One way to fix this is to introduce an initial condition: $$\begin{cases} u_t + \vec{b} \cdot D_x u = 0, \\ u(x,0) = g(x) \end{cases}$$ for some $g: \mathbb{R}^n \longrightarrow \mathbb{R}$. Now given $(x,t) \in \Omega$, let $(x_0,0)$ be the point in $\mathbb{R}^n \times \{0\}$ on the line through $(x,t)$ parallel to $(\vec{b},1)$. Since $u$ is constant on such lines, we have that $$u(x,t) = u(x_0,0) = g(x_0) = g(x - t\vec{b}).$$
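The resulting formula $u(x,t) = g(x - t\vec{b})$ can be sanity-checked with finite differences. A one-dimensional sketch ($g(s)=\sin s$, $b=2$, and the sample points are my own choices):

```python
import math

b = 2.0
def g(s): return math.sin(s)
def u(x, t): return g(x - t * b)   # the transport formula u(x,t) = g(x - t*b)

h = 1e-5
def residual(x, t):
    # Central-difference approximation of u_t + b*u_x; should be ~0.
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    return u_t + b * u_x

r = max(abs(residual(x, t)) for x in (0.0, 0.7, 1.3) for t in (0.0, 0.5, 2.0))
print(r)  # ~0, up to O(h^2) discretization error
```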
H: to show $f(t)=g(t)$ for some $t\in [0,1]$ Let $f,g:[0,1] \rightarrow \mathbb{R}$ be non-negative, continuous functions so that $$\sup_{x \in [0,1]} f(x)= \sup_{x \in [0,1]} g(x).$$ We need to show $f(t)=g(t)$ for some $t\in [0,1].$ Thank you for help. AI: If $f$ is nowhere equal to $g$, then by continuity $f-g$ has to be of uniform sign. Without loss of generality say $f-g>0$. Hence $\displaystyle\frac1{f-g}$ is continuous. Since $[0,1]$ is compact, $\displaystyle\frac1{f-g}\le M$ for some $M>0$, that is, $\displaystyle f-g\ge\frac1M$ uniformly on $[0,1]$. This shows that $$\sup f(x)\ge f(x)\ge g(x)+\frac1M\quad\forall x\in[0,1]$$ so that $\displaystyle\sup f(x)\ge\sup g(x)+\frac1M$, a contradiction to the hypothesis!
H: Security of a particular cryptosystem I recently came across this problem, and while I'm fairly certain the solution is not too 'conceptually-challenging', I've been stumped at finding the right trick/manipulation to make any solution work. Alice chooses two large primes $p,q$ and denotes $N=pq$; then she also chooses three random numbers $g, r_1,r_2\in\mathbb{Z}_N$ and computes $$g_1\equiv g^{r_1(p-1)}\mod N,\hspace{5mm}g_2\equiv g^{r_2(q-1)}\mod N.$$ The public key is the triple $(N,g_1,g_2)$ and her private key is the pair of primes $(p,q).$ Now Bob wants to send the message $m$ to Alice, where $m\in\mathbb{Z}_N$. He chooses two random numbers $s_1,s_2\in\mathbb{Z}_N$ and computes $$c_1\equiv mg_1^{s_1}\mod N,\hspace{5mm}c_2\equiv mg_2^{s_2}\mod N.$$ Bob sends the ciphertext $(c_1, c_2)$ to Alice. Then Alice uses the Chinese Remainder Theorem to solve the system of congruences $x\equiv c_1\mod p$ and $x\equiv c_2\mod q$ to obtain her solution $x\equiv m\mod N.$ Given only the public key $(N,g_1,g_2)$ and the ciphertext $(c_1,c_2)$, can one still decrypt the ciphertext and obtain $m$? Intuitively, I want to somehow use some manipulation on $g_1$ and $g_2$ to either find the primes $p,q$ and solve normally or find the message $m$ directly. But multiplying them, taking inverses, trying to apply the Chinese Remainder Theorem, etc. gets me nowhere. I appreciate any help! AI: By Fermat's little theorem, $g^{p-1}\equiv 1 \bmod p$ whenever $p \nmid g$, so $g_1 \equiv 1 \bmod p$; this tells you that $g_1 - 1$ is divisible by $p$. But $N$ is also divisible by $p$. So if you work out the greatest common divisor of $g_1 - 1$ and $N$ using elementary methods then you are sure to recover the value of $p$ (This is because $N=pq$ has only one proper divisor that is divisible by $p$, namely $p$ itself).
H: Can $a^2+b^2+2ac$ be a perfect square if $c\neq \pm b$? Can $a^2+b^2+2ac$ be a perfect square if $c\neq \pm b$? $a,b,c \in \mathbb{Z}$. I have tried some manipulations but still came up with nothing. Please help. Actual context of the question is: Let say I have an quadratic equation $x^2+2xf(y)+25$ that I have to make a perfect square somehow. So can I conclude that $f(y)=\pm5$ $($i.e $x^2+2xf(y)+25$ is perfect square only if $f(y)=\pm5)$, or are there other possibilities for $f(y)$? Note:$x$ and $y$ are not related in any other way. AI: A small manipulation changes the problem into a more familiar one. We are interested in the Diophantine equation $a^2+b^2+2ac=y^2$. Complete the square. So our equation is equivalent to $(a+c)^2+b^2-c^2=y^2$. Write $x$ for $a+c$. Our equation becomes $$x^2+b^2=y^2+c^2.\tag{$1$}$$ In order to get rid of trivial solutions, let us assume that we are looking for solutions of the original equation in positive integers. Then $x=a+c\gt c$. The condition $b\ne c$ means that we are in essence trying to express integers as a sum of two squares in two different ways. The smallest positive integer that is a sum of two distinct positive squares in two different ways is $65$, which is $8^2+1^2$ and also $7^2+4^2$. So we can take $x=a+c=8$, $b=1$, and $c=7$, giving the solution $a=1$, $b=1$, $c=7$ (indeed $1+1+14=16$). Or else we can take $c=4$, giving the solution $a=4$, $b=1$, $c=4$ (indeed $16+1+32=49$). Or else we can take $x=a+c=7$, which forces $b=4$; pairing it with $c=1$ from the other decomposition gives the solution $a=6$, $b=4$, $c=1$ (indeed $36+16+12=64$). The next integer which is the sum of two distinct positive squares in two different ways is $85$. We can use the decompositions $85=9^2+2^2=7^2+6^2$ to produce solutions of our original equation. General Theory: Suppose that we can express $m$ and $n$ as a sum of two squares, say $m=s^2+t^2$ and $n=u^2+v^2$. Then $$mn=(su\pm tv)^2+(sv\mp tu)^2.\tag{$2$}$$ Identity $(2)$ is a very important one, sometimes called the Brahmagupta Identity.
It is connected, among other things, with the multiplication of complex numbers, and the sum identities for sine and cosine. Identity $(2)$ can be used to produce infinitely many non-trivial solutions of Equation $(1)$, and therefore infinitely many solutions of our original equation. For example, any prime of the form $4k+1$ can be represented as a sum of two squares. By starting from two distinct primes $m$ and $n$ of this form, we can use Identity $(2)$ to get two essentially different representations of $mn$ as a sum of two squares, and hence solutions of our original equation.
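A brute-force search confirms the solution families found above and shows there are many more (the search bound 20 is arbitrary):

```python
import math

def is_square(k):
    return k >= 0 and math.isqrt(k) ** 2 == k

# All positive-integer solutions of a^2 + b^2 + 2ac = y^2 with c != b
# in the box 1 <= a, b, c <= 20.
solutions = sorted(
    (a, b, c)
    for a in range(1, 21) for b in range(1, 21) for c in range(1, 21)
    if c != b and is_square(a * a + b * b + 2 * a * c)
)
print(len(solutions), solutions[:5])
```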
H: $ \lim_{ k \rightarrow \infty } { \frac{ \lambda^k }{k}} = \infty$ when $1 < |\lambda| \in \mathbb{C} $. Can someone show why $ \lim_{ k \rightarrow \infty } { \frac{ \lambda^k }{k}} = \infty$ when $1 < |\lambda| \in \mathbb{C} $. AI: Just to make things a little easier to follow let $|\lambda|=1+x$ with $x >0$. Then, $$(1+x)^k \geq 1+ \binom{k}{1}x + \binom{k}{2}x^2> \frac{k(k-1)}{2}x^2 \,.$$ Thus $$\left| \frac{\lambda^k}{k} \right| \ge \frac{k(k-1)x^2}{2}\frac{1}{k}=\frac{x^2}{2}(k-1)$$ Your conclusion follows immediately from here. P.S. By exactly the same idea, or simply by Bernoulli, you can prove the following generalization: If $a_n$ is a complex sequence, so that $\lim_n \left| \frac{a_{n+1}}{a_n} \right| =x >1$ then $\lim_n a_n =\infty$.
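Numerically, the quadratic lower bound is already enough to see the divergence; a quick illustration (the value $|\lambda| = 1.1$ is an arbitrary choice):

```python
# With |lambda| = 1 + x, the answer's bound is |lambda^k / k| >= (x^2/2)(k-1).
x = 0.1
lam = 1.1
for k in (10, 100, 1000):
    bound = (x ** 2 / 2) * (k - 1)
    value = abs(lam ** k) / k
    print(k, bound, value)   # the value dominates the (diverging) bound
```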
H: NP-completeness and NP problems Suppose that someone found a polynomial algorithm for a NP-complete decision problem. Would this mean that we can modify the algorithm a bit and use it for solving the problems that are in NP, but not in NP-complete? Or would this just shows the availability of a polynomial algorithm for each NP problem indirectly? Edit: I know that when NP-complete problems have polynomial algorithms, all NP problems must have polynomial algorithms. The question I am asking is that whether we can use the discovered algorithm for NP-complete to all NP problems just by modifying the algorithm. Or would we just know that NP problems must have a polynomial algorithm indirectly? AI: A problem $X$ is "NP-complete" if for any problem $Y$ in NP, there is a polynomial-time reduction from $Y$ to $X$. So if there is a polynomial-time algorithm for some NP-complete decision problem $X$, then there is a related algorithm for any problem $Y$ in NP, namely, reduce the instance of $Y$ to an instance of $X$ and use the polynomial-time algorithm for $X$.
H: Possible combinations of items in a certain number of sets How many ways are there of arranging n elements into k sets given that all elements must be used in each arrangement? No set can be empty and order doesn't matter (i.e. {a, b, c} is the same as {c, b, a}). So for example, say n is 5 and k is three, there would be the following sets: {a b c d e} Set 1 Set 2 Set 3 ----- ----- ----- {abc} {d} {e} {ab} {cd} {e} {ab} {c} {de} {a} {bcd} {e} {a} {bc} {de} {a} {b} {cde} etc. The order of the sets does not matter either. So for example, the following are equivalent: ({ab}, {cd}) = ({cd}, {ab}) Another example: ({abc}, {d}, {e}) = ({d}, {e}, {abc}) I'm looking for some sort of formula to calculate this number. I tried to solve it by generating the sets manually and seeing could I come up with a formula. So when n is 3 and k is 2, the number of sets possible: ({ab}, {c}), ({ac}, {b}) and ({cb}, {a}) is just $$\binom{n}{k} = \binom{3}{2} = 3 $$ Increasing n to 4 (with k still as 2) I thought would give $$ \binom{n}{k} + \binom{n}{k-1}$$ possible combinations. But I know from just writing out all the possibilities that there are more than that. Any help would be hugely appreciated. Thanks. AI: The answer to your question is given by $S(n,k)$, the Stirling numbers of the second kind. There is no pleasant closed form. However, the Stirling numbers of the second kind satisfy the nice recurrence $$S(n+1,k)=kS(n,k)+S(n,k-1).$$
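Although there is no pleasant closed form, the recurrence translates directly into a short memoized function (a sketch using `functools.lru_cache`):

```python
from functools import lru_cache

# Stirling numbers of the second kind via the recurrence quoted above:
# S(n+1, k) = k*S(n, k) + S(n, k-1), with S(0, 0) = 1.
@lru_cache(maxsize=None)
def stirling2(n, k):
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(5, 3))  # 25 ways to split 5 elements into 3 nonempty sets
print(stirling2(3, 2))  # 3, matching the {a,b,c}-into-2-sets count above
```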
H: What does a "convention" mean in mathematics? We all know that $0!=1$, the degree of the zero polynomial equals $-\infty$, the interval$[a,a)=(a,a]=(a,a)=\emptyset$ ... and so on, are conventions in mathematics. So is a convention something that we can't prove with mathematical logic, or is it just intuitions, or something that mathematicians agree about? Are they the same as axioms? What does "convention" mean in mathematics? And is $i^2 = -1$ a convention? If not how can we prove existence of such number? AI: To answer the question in the title, I would say: 'convention' in mathematics means exactly the same as in ordinary English. As for your examples: $0!:=1$ and $[a,a):=\emptyset$ are definitions. It is a convention not to use a different definition, or to leave it undefined. Of course in this sense, every definition is a convention. I think that informally, one says a certain definition (such as the two above) is '(just) convention', to mean that they are 'extreme' or 'degenerate' cases, and leaving them undefined would still make the theory go through, but it is more convenient to define them anyway (for example to prevent having to exclude this extreme case in statements of theorems). For example, I think you could get by not defining $[a,a)$ or $[a,b]$ for $b<a$, but then in statements (and proofs) about general intervals $[a,b)$ you are forced to explicitly state and check whether $b>a$ which could be tiresome.
H: Boundaries on Probability of Independent Events Given an integer n, and an event e that happens with probability $\frac{1}{n}$, is the probability that e will happen in n trials bounded by any constant? For example, if I had an n-sided fair die and a target value t, can I say with certainty that regardless of the value of n, the odds of rolling a t in n rolls are no worse than $\frac{1}{x}$ for some constant x? AI: The probability of at least one occurrence in $n$ independent trials is $$1-\left(\frac{n-1}n\right)^n\gt\lim\limits_{k\to\infty}1-\left(\frac{k-1}k\right)^k=1-\frac1{\mathrm e}\approx0.632,$$ since $\left(1-\frac1n\right)^n$ increases to $\frac1{\mathrm e}$. So yes: regardless of $n$, the odds are never worse than $1-\frac1{\mathrm e}$.
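A quick numerical check of the bound (the range of $n$ is arbitrary):

```python
import math

# For every n >= 1, the probability of at least one success in n trials,
# 1 - ((n-1)/n)^n, stays strictly above the limit 1 - 1/e = 0.632...
limit = 1 - 1 / math.e
probs = [1 - ((n - 1) / n) ** n for n in range(1, 1001)]
print(min(probs), limit)  # the sequence decreases toward, but never reaches, the limit
```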
H: Is there an easy way to see associativity or non-associativity from an operation's table? Most properties of a single binary operation can be easily read of from the operation's table. For example, given $$\begin{array}{c|ccccc} \cdot & a & b & c & d & e\\\hline a & e & d & b & a & c\\ b & d & c & e & b & a\\ c & b & e & a & c & d\\ d & a & b & c & d & e\\ e & c & a & d & e & b \end{array}$$ it is easy to check that it is closed (no elements occur in the table which don't occur as row or column index), commutative (the table is symmetric), has a neutral element (the row and column of $d$ are copies of the index row/column), and has an inverse element for each element (there's a $d$ in each row and column). In other words, almost all important properties can immediately be seen. The only part missing is associativity. Therefore my question: Is there a simple way to see directly from the operation's table (i.e. without doing explicitly all the calculations) if an operation is associative? AI: Have you seen Light's associativity test? According to Wikipedia, "Direct verification of the associativity of a binary operation specified by a Cayley table is cumbersome and tedious. Light's associativity test greatly simplifies the task." If nothing else, the existence of Light's algorithm seems to rule out the possibility that anyone knows an easy way to do it just by looking at the original Cayley table. Note also that, in general, one cannot do better than the obvious method of just checking all $n^3$ identities of the form $(a\ast b)\ast c = a\ast (b\ast c)$. This is because it is possible that the operation could be completely associative except for one bad triple $\langle a,b,c\rangle$. So any method that purports to do better than this must only be able to do so in limited circumstances.
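For small tables the "cumbersome and tedious" direct check is still easy to automate; here is a minimal sketch of the naive $n^3$ verification (Light's test organizes the same work more cleverly; the mod-3 tables are my own examples, not the table from the question):

```python
# table[i][j] encodes the product i * j for elements labeled 0..n-1.
def is_associative(table):
    n = len(table)
    return all(
        table[table[a][b]][c] == table[a][table[b][c]]
        for a in range(n) for b in range(n) for c in range(n)
    )

add_mod3 = [[(i + j) % 3 for j in range(3)] for i in range(3)]   # a group
sub_mod3 = [[(i - j) % 3 for j in range(3)] for i in range(3)]   # not associative
print(is_associative(add_mod3), is_associative(sub_mod3))  # True False
```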
H: Weak*-convergence of regular measures Let $K$ be a compact Hausdorff space. Denote by $ca_r(K)$ the set of all countably additive, signed Borel measures which are regular and of bounded variation. Let $(\mu_n)_{n\in\mathbb{N}}\subset ca_r(K)$ be a bounded sequence satisfying $\mu_n\geq 0$ for all $n\in\mathbb{N}$. Can we conclude that $(\mu_n)$ (or a subsequence) converges in the weak*-topology to some $\mu\in ca_r(K)$ with $\mu\geq 0$? AI: We cannot. Let $K = \beta \mathbb{N}$ be the Stone-Cech compactification of $\mathbb{N}$, and let $\mu_n$ be a point mass at $n \in \mathbb{N} \subset K$. Suppose to the contrary $(\mu_n)$ has a weak-* convergent subsequence $\mu_{n_k}$. Define $f : \mathbb{N} \to \mathbb{R}$ by $f(n_k) = (-1)^k$, $f(n) = 0$ otherwise. Then $f$ has a continuous extension $\tilde{f} : K \to \mathbb{R}$. By weak-* convergence, the sequence $\left(\int \tilde{f} d\mu_{n_k}\right)$ should converge. But in fact $\int \tilde{f} d\mu_{n_k} = \tilde{f}(n_k) = (-1)^k$ which does not converge. If $C(K)$ is separable, then the weak-* topology on the closed unit ball $B$ of $C(K)^* = ca_r(K)$ is metrizable. In particular it is sequentially compact and so in that case every bounded sequence of regular measures has a weak-* convergent subsequence. As Andy Teich points out, it is sufficient for $K$ to be a compact metrizable space. Also, since there is a natural embedding of $K$ into $B$, if $B$ is metrizable then so is $K$. One might ask whether it is possible for $B$ to be sequentially compact without being metrizable. I don't know the answer but I suspect it is not possible, i.e. that metrizability of $B$ (and hence $K$) is necessary for sequential compactness. We do know (by Alaoglu's theorem) that closed balls in $C(K)^*$ are weak-* compact, so what we can conclude in general is that $\{\mu_n\}$ has at least one weak-* limit point. However, as the above example shows, this limit point need not be a subsequential limit.
H: How to prove the uniqueness of the solution of $ax+b=0$? I have no background in mathematical analysis or the like, but I am interested to know how to prove the uniqueness of the solution of $ax+b=0$? Perhaps your answers will help me to prove other uniqueness problems. AI: A standard way of showing that a certain object is unique is to assume that you have two objects that satisfy the desired properties, and deduce that they must be equal (when we say "two objects", we mean two "names", which may refer to the same object). In the case of the solutions to the equation $ax+b=0$, you have to distinguish two cases: if $a=0$, then the equation either has no solutions (if $b\neq 0$), or it has infinitely many solutions (if $b=0$). So uniqueness really only exists when $a\neq 0$. The uniqueness is based on the following fact about real numbers: For any real numbers $r$ and $s$, if $rs=0$, then $r=0$ or $s=0$. Once you have that: Claim. If $a\neq 0$, then there is at most one solution to $ax+b=0$. Proof. Suppose that both $x$ and $y$ are solutions. We aim to show that $x=y$. Since $x$ is a solution, $ax+b=0$. Since $y$ is a solution as well, $ay+b=0$. That means that $ax+b=ay+b$. Adding $-b$ to both sides we conclude that $ax=ay$. Adding $-ay$ to both sides, we obtain $ax-ay = 0$. Factoring out $a$, we have $a(x-y)=0$. Since the product is $0$, then $a=0$ or $x-y=0$. Since $a\neq 0$ by assumption, we conclude that $x-y=0$, so $x=y$. Thus, if $x$ and $y$ are both solutions, then $x=y$, so there is at most one solution. $\Box$ Note that this argument works in the context of the real numbers, or other kinds of "numbers" where $rs=0$ implies $r=0$ or $s=0$. There are other situations where this is not the case.
For example, if you work with "integers modulo 12" ("clock arithmetic", where $11+3 = 2$), then $2x+8 = 0$ has many different solutions $0\leq x\lt 12$: one solution is $x=2$ (since $2(2)+8 = 4+8=12=0$ in clock arithmetic), and another solution is $x=8$ (since $2(8)+8 = 16+8=24 = 0$ in clock arithmetic).
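Both situations in clock arithmetic are easy to brute-force (the second equation, $5x+7=0$, is my own example of an invertible coefficient):

```python
# All solutions of a*x + b = 0 modulo n, by exhaustive search.
def solve_mod(a, b, n):
    return [x for x in range(n) if (a * x + b) % n == 0]

many = solve_mod(2, 8, 12)   # 2 shares a factor with 12: several solutions
one  = solve_mod(5, 7, 12)   # gcd(5, 12) = 1, so 5 is invertible: one solution
print(many, one)  # [2, 8] [1]
```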
H: Local minimum example Help me please with this question. Let $\Delta u>0$ in a connected domain in $\mathbb{R}^{n}$. Is it possible that the function $u$ has a local minimum? Can you show an example? Thanks!! AI: Yes: take $f(x,y)=x^2 + y^2$ on all of $\mathbb{R}^2$. Then $\Delta f = 4 > 0$ everywhere, and $f$ has a local (in fact global) minimum at the origin.
H: Documentaries about mathematics and mathematicians Possible Duplicate: List of Interesting Math Videos/ Documentaries I have watched "Fermat's last theorem" a documentary about Andrew Wiles proof of the theorem, it was a great show. and i am asking if there is some other good documentaries about mathematics and mathematicians out there (not movies). AI: I would definitely recommend "N is a NUMBER". I enjoyed it immensely as a portrayal of the curious and fascinating life of Paul Erdős.
H: Relationship Between Basis For Vector Space And Basis For Dual Space There exist the famous theorem about a basis for dual space Let $\mathbb V$ be finite dimensional vector space over $F$ and $\mathcal{B} = \{\alpha_1, \ldots ,\alpha_n\}$ is basis for vector space $\mathbb V$ then $\mathcal{B^*} = \{f_1, \ldots ,f_n\}$ is basis for dual space $\mathbb V^*$ such that $f_i(\alpha_{j})=\delta_{ij}$ Is the converse of this Theorem true, to explain more: Let $\mathcal{B^*} = \{f_1, \ldots ,f_n\}$ be basis for dual space $\mathbb V^*$. Does there exist $\mathcal{B} = \{\alpha_1, \ldots ,\alpha_n\}$ such that $\mathcal{B}$ would be basis for vector space and $f_i(\alpha_{j})=\delta_{ij}$? AI: Yes, you can prove that via the bidual space $V^{**} = \mathrm{Hom}(V^*,K)$. There is a canonical homomorphism $\iota: V \to V^{**}$ defined by $\iota(v)(f) := f(v)$ for $f \in V^{*}$. It is easy to see that $\iota$ is injective. Since $\dim V^{**} = \dim V^* = \dim V$, it is actually an isomorphism. Therefore, for every $\varphi \in V^{**}$ there exists a $v \in V$ s.t. $\varphi(f) = f(v)$ for $f \in V^*$. Now, if $\{f_1,\ldots,f_n\}$ is a basis of $V^*$ then the elements $\{\varphi_1,\ldots,\varphi_n\}$ form a basis of $V^{**}$ where $\varphi_i(f_j) := \delta_{ij}$. By the above considerations, each $\varphi_i$ corresponds to an $\alpha_i \in V$ s.t. $f_i(\alpha_j) = \varphi_j(f_i) = \delta_{ij}$. Since $\iota: V\to V^{**}$ is an isomorphism and $\{\varphi_1,\ldots,\varphi_n\}$ is a basis of $V^{**}$ we know that $\{\alpha_1,\ldots,\alpha_n\}$ is a basis of $V$. Its dual basis is then $\{f_1,\ldots,f_n\}$ because $f_i(\alpha_j) = \delta_{ij}$.
H: About positive semidefiniteness of one matrix It is not clear to me how to prove that the matrix $(\min(i,j))_{i,j=1,\dots,n}$ is (or is not) positive semidefinite. There are some facts from Horn and Johnson's book Matrix Analysis: if $A \in M_n$ is positive semidefinite, then $a_{ii}a_{jj} \ge a_{ij}^2, i,j=1,2,\dots, n$ (source). It seems that this condition is not enough for the matrix to be positive semidefinite. Are there any techniques to check that all eigenvalues of the matrix are positive, or that the leading principal minors are non-negative? AI: To expand on my comment: we have the theorem "a matrix is symmetric positive definite if and only if it possesses a Cholesky decomposition". That is, any symmetric positive definite matrix $\mathbf A$ possesses a factorization $\mathbf A=\mathbf G\mathbf G^\top$, where the lower triangular matrix $\mathbf G$ is called a Cholesky triangle. Now, I claim that if $a_{ij}=\min(i,j)$, then $g_{ij}=[i\geq j]$, where $[p]$ is an Iverson bracket, equal to $1$ if $p$ is true, and $0$ if $p$ is false. As an example ($n=6$), $$\begin{pmatrix}1 & 1 & 1 & 1 & 1 & 1 \\1 & 2 & 2 & 2 & 2 & 2 \\1 & 2 & 3 & 3 & 3 & 3 \\1 & 2 & 3 & 4 & 4 & 4 \\1 & 2 & 3 & 4 & 5 & 5 \\1 & 2 & 3 & 4 & 5 & 6\end{pmatrix}=\begin{pmatrix}1 & 0 & 0 & 0 & 0 & 0 \\1 & 1 & 0 & 0 & 0 & 0 \\1 & 1 & 1 & 0 & 0 & 0 \\1 & 1 & 1 & 1 & 0 & 0 \\1 & 1 & 1 & 1 & 1 & 0 \\1 & 1 & 1 & 1 & 1 & 1\end{pmatrix}\cdot\begin{pmatrix}1 & 0 & 0 & 0 & 0 & 0 \\1 & 1 & 0 & 0 & 0 & 0 \\1 & 1 & 1 & 0 & 0 & 0 \\1 & 1 & 1 & 1 & 0 & 0 \\1 & 1 & 1 & 1 & 1 & 0 \\1 & 1 & 1 & 1 & 1 & 1\end{pmatrix}^\top$$ This is equivalent to proving that $$\min(i,j)=\sum_{k=1}^n [i\geq k][k\leq j]$$ or, since $[p][q]=[p\text{ and }q]$, $$\min(i,j)=\sum_{k=1}^n [i\geq k\text{ and }j\geq k]$$ and that the identity indeed holds is now a bit easier to see.
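A quick NumPy sanity check of the claimed factorization (a check, not a proof):

```python
import numpy as np

n = 6
# The matrix A with entries a_ij = min(i, j), built via the ufunc outer method.
A = np.minimum.outer(np.arange(1, n + 1), np.arange(1, n + 1)).astype(float)

# The claimed Cholesky triangle: the lower-triangular matrix of ones.
G = np.tril(np.ones((n, n)))

# G @ G.T reproduces A, so A = G G^T is positive semidefinite -- in fact
# positive definite, since G is invertible.
assert np.allclose(G @ G.T, A)
assert np.all(np.linalg.eigvalsh(A) > 0)
```

The same pattern works for any $n$; since $G$ is invertible, all eigenvalues of $A$ are strictly positive.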
H: Weak-* sequential compactness and separability Let $X$ be a Banach space, and let $B$ be the closed unit ball of $X^*$, equipped with the weak-* topology. Alaoglu's theorem says that $B$ is compact. If $X$ is separable, then $B$ is metrizable, and in particular it is sequentially compact. What about the converse? If $B$ is sequentially compact, must $X$ be separable? This question was inspired by this one. AI: No, not necessarily. Any reflexive Banach space has a weakly compact unit ball; so, by the Eberlein-Šmulian Theorem, any reflexive Banach space has a weak* sequentially compact unit ball. Also, and more generally, it follows from Rosenthal's $\ell_1$ Theorem (a Banach space $X$ does not contain $\ell_1$ isomorphically if and only if every bounded sequence in $X$ has a weakly Cauchy subsequence) that if $X^*$ does not contain $\ell_1$ isomorphically, then the unit ball of $X^*$ is weak* sequentially compact.
H: Typo in lecture notes? The following is an example in my lecture notes: "Let $X$ be a locally compact topological space (that is, a topological space in which every point has a compact neighborhood). Then $C_0(X)=\{f \in C_b(X)| \lim_{x \to \infty} f(x)=0 \}$ is a closed subspace of $C_b(X)$, the space of bounded continuous functions, and hence a Banach space. The notion of the limit of $f(x)$ as $x \to \infty $ used here is defined as follows: $\lim_{x\to \infty} f(x) = A$ if and only if for every $\varepsilon > 0$ there exists some compact set $K \subset X$ with $|f(x)−A| < \varepsilon$ for all $x \in X \setminus K$. If $X = \mathbb N$ (with the discrete topology), one often writes $c_0 = c_0(\mathbb N) \subset C_0(\mathbb N)$ for this subspace of $l^\infty(\mathbb N)$." Since I assume that $c_0 = C_0$ here, at the very end of the last sentence, should it say "...writes $c_0 = c_0(\mathbb N) \subset C_{\color{red}{b}}(\mathbb N)$ for this subspace of $l^\infty(\mathbb N)$."? Probably. AI: I guess $c_0$ is defined as the set of sequences which converge to $0$. In that case, to be coherent with the general definition, $c_0:=C_0(\Bbb N)$, and it is contained in both $C(\Bbb N)$ and $C_b(\Bbb N)$ (though of course the latter is more accurate). (Note that we don't really have to deal with continuity here, since the topology is discrete: $C(\Bbb N)$ is then just the set of all sequences of real or complex numbers.)
H: Questions about the definition of group actions A group $G$ is said to act on a set $X$ when there is a map $\phi : G\times X\rightarrow X$ such that the following conditions hold: $\phi(e,x)=x$ for all $x\in X$, where $e$ is the identity element of $G$, and $\phi (g,\phi(h,x))=\phi(gh,x)$ for all $g,h\in G$ and all $x\in X$. This is something I found in a tutorial for group theory. Now my question is: what are the elements in $X$? Is it all the permutations of all elements, or is it the elements themselves? And what is $S_{n}$ (not defined above, though it's so commonly used), which contains $n!$ elements? I'm terribly confused between $X$ and $S_{n}$. AI: $X$ is just a set; its elements can be anything: they can be numbers, points of the plane, elements of a group/ring/field, other sets, or pretty much anything that you consider valid elements of a set. For the purposes of the definition of group action, $X$ is not assumed to have any structure. So its elements could be $1$, $2$, $3$, and $4$ (so $X=\{1,2,3,4\}$), or they could be house, school, park, and dog (so $X=\{\text{house}, \text{school}, \text{park}, \text{dog}\}$), or anything. The group $S_n$ is the group of all permutations of the set $\{1,2,\ldots,n\}$, which is a group under composition. In fact, you can view $S_n$ and $\{1,2,\ldots,n\}$ as an example of an action: $X$ would be the set $\{1,2,\ldots,n\}$, $S_n$ would be the group $G$, and $\phi\colon G\times X\to X$ would be the function $$\phi\Bigl(\sigma,k\Bigr) = \sigma(k).$$ (Verify that this satisfies the definition of action that you give.) As you will later see, every action of $G$ on $X$ corresponds to a group homomorphism between $G$ and the group $S_X$ of all permutations of the set $X$ (which is not the same thing as $X$ itself, just like $S_n$ is not the same as $\{1,2,\ldots,n\}$). And if $X$ and $Y$ have the same number of elements, then $S_X$ is isomorphic to $S_Y$.
So if $G$ acts on a set with $n$ elements, then this action will correspond to a group homomorphism $G\to S_X$, and since $S_X$ is isomorphic to $S_n$, we get a homomorphism from $G$ to $S_n$. So the groups $S_n$ play a big role in the theory of group actions.
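If it helps, the action of $S_3$ on $X=\{1,2,3\}$ described above can be sketched in a few lines of Python (my own illustration), verifying both axioms from the question:

```python
from itertools import permutations

# A permutation of {1, 2, 3} is stored as a tuple p with p[k-1] = sigma(k).
X = (1, 2, 3)
G = list(permutations(X))          # the group S_3, with 3! = 6 elements

def act(p, k):
    """phi(sigma, k) = sigma(k)."""
    return p[k - 1]

def compose(p, q):
    """(p q)(k) = p(q(k)): apply q first, then p."""
    return tuple(p[q[k - 1] - 1] for k in X)

e = (1, 2, 3)                      # the identity permutation

# Axiom 1: phi(e, x) = x for all x.
assert all(act(e, x) == x for x in X)
# Axiom 2: phi(g, phi(h, x)) = phi(gh, x) for all g, h, x.
assert all(act(g, act(h, x)) == act(compose(g, h), x)
           for g in G for h in G for x in X)
```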
H: Too many topics taught in class? Frustrated student in need of advice and encouragement. Location: New York CUNY (as education systems might be different in other places) I started my life studying philosophy and psychology and then at 22 transitioned to computer science. It took me a long time to understand the importance of mathematics. I was placed in Calculus 1 and, while it was hard to remember math, I managed to pull a B+. However, there is a recurrent problem I have with the math classes I am taking, namely too many topics to really understand everything. In Calculus 2 and Probability and Statistics we covered so many topics that I felt that any attempt to understand the material would only hurt me on the test. Constant new topics without any chance to play around and really grasp the material. Moreover, textbooks rarely focus on the WHY of things, nor do professors have the time to explain, due to computationally emphasized department finals and curriculum requirements. I feel like every class has an artificially inflated amount of material, and it becomes especially obvious when teachers end up rushing through up to two topics a class at the end of the semester. For example, we went over the Poisson distribution and later the waiting-time Poisson distribution, and besides showing us how to do the problems, zero emphasis was put on why it works and where it came from. I am VERY frustrated with this frenzy of meaningless formulas. I lately feel that perhaps I am an idiot or something, but I just don't see any of the textbooks explain things properly. When they do attempt an explanation it is just a soup of symbols without any intuition. It is like they just copy-pasted proofs to make it seem rigorous. The emperor is naked. Is it my laziness or stupidity, or a known problem in the education curriculum in early undergraduate math classes? It is killing my recently gained joy for math and lowering my self-esteem.
Perhaps people can advise me on some math books that would fill the gaps and actually explain things. AI: It is a well-known phenomenon that certain beginning undergrad math classes can be crowded with topics, to the point of making it almost impossible to explain them all with the degree of detail and clarity that one would like (and I am speaking as someone who has been on the other side of it --- i.e. teaching these classes). There are various reasons for this; one is that certain topics must be covered (due to demands of various later courses, both within the math dept. and in other depts. for which these courses act as service courses), and there aren't enough separate course slots available to separate them out into different courses. Well-designed curricula try to minimize this phenomenon, but it's not easy; courses and curricula have momenta of their own, and are not as easy to change or redirect as you might think. In any case, given the situation you describe, it is probably not realistic to learn everything in your courses to the degree of precision and understanding you would like; like many other things in college and in life, there will have to be a compromise between the ideal and the realistic in your learning. What is possible, I would think, would be to learn some part of your curriculum more carefully and in greater depth. To this end, I would suggest that you choose one part of your course that you found the most intriguing, and that you would most like to learn, and ask a specific question about that part of your curriculum. (E.g. based on your complaint about the discussion of the Poisson distribution, maybe you would like to understand better the different probability distributions, where they come from and why we study them, and you could ask a question about "Resources for a beginner to learn and understand different probability distributions".)
Try this with one topic at a time, and try to balance your study between "keeping up with the current topic in class" and "learning topics of interest for personal development/understanding". As (or if) you move on to more advanced math classes, these two threads of your study will start to become more closely entwined, because the pace of introduction of new topics will slow, and you will get the chance to study each topic in more depth.
H: a question on symmetric group $S_5$ [NBHM_2006_PhD Screening Test_algebra] Given that $x=(1 2)(3 4 5)\in S_5$, so its order is $6$ and it is a product of two cycles, I want to know whether $x$ commutes with all elements of $S_5$, and whether it is conjugate to $(4 5)(2 3 1)$. Thank you for the help. AI: There is a rule for conjugating in $S_n$. If you have two permutations $\sigma$ and $\omega$, then to compute $\sigma\omega\sigma^{-1}$, you just permute the list of numbers in $\omega$ by $\sigma$. For instance $$ (234)(12)(34)(234)^{-1}=(13)(42) $$ because $(234)$ sends $2$ to $3$, $3$ to $4$ and so forth. You should be able to work out that those two elements you have are conjugates. Once you know that, you can answer the other question: if $x$ commutes with all elements of $S_5$, how big is its set of conjugates?
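A small pure-Python check of the relabeling rule (illustration only, with permutations stored as dicts $k \mapsto \sigma(k)$):

```python
def cycles_to_map(cycles, n=5):
    """Turn a cycle list like [(1, 2), (3, 4, 5)] into a dict k -> sigma(k)."""
    m = {k: k for k in range(1, n + 1)}
    for c in cycles:
        for i, k in enumerate(c):
            m[k] = c[(i + 1) % len(c)]
    return m

def conjugate(sigma, omega, n=5):
    """Return sigma o omega o sigma^{-1} as a dict."""
    inv = {v: k for k, v in sigma.items()}
    return {k: sigma[omega[inv[k]]] for k in range(1, n + 1)}

def relabel(sigma, cycles):
    """Apply sigma to each entry of omega's cycle notation."""
    return [tuple(sigma[k] for k in c) for c in cycles]

sigma = cycles_to_map([(2, 3, 4)])
omega_cycles = [(1, 2), (3, 4)]
omega = cycles_to_map(omega_cycles)

# The conjugate sigma omega sigma^{-1} equals omega with its entries
# relabeled by sigma -- here (234)(12)(34)(234)^{-1} = (13)(42).
assert conjugate(sigma, omega) == cycles_to_map(relabel(sigma, omega_cycles))
```

Here `relabel` is exactly the "permute the list of numbers in $\omega$ by $\sigma$" rule from the answer.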
H: Compute $\lim\limits_{n\to\infty} \prod\limits_2^n \left(1-\frac1{k^3}\right)$ I've just worked out the limit $\lim\limits_{n\to\infty} \prod\limits_{2}^{n} \left(1-\frac{1}{k^2}\right)$ that is simply solved, and the result is $\frac{1}{2}$. After that, I thought of calculating $\lim\limits_{n\to\infty} \prod\limits_{2}^{n} \left(1-\frac{1}{k^3}\right)$, but I don't know how to do it. According to W|A, the result is pretty nice, but I don't see how W|A gets that. (See here.) Is there any easy way to get the answer? AI: Since $$ 1-\frac1{k^3}=\frac{(k-1)(k+\frac12+\frac{\sqrt3}2i)(k+\frac12-\frac{\sqrt3}2i)}{k^3} $$ and $$ k+a=\frac{\Gamma(k+a+1)}{\Gamma(k+a)}, $$ every term in the product is a ratio of the Gamma functions. Also there is a formula $$ \Gamma \left(\frac{1}{2}-i y\right) \Gamma \left(\frac{1}{2}+i y\right)= \pi \text{sech}\pi y. $$ In particular for the end terms of the product $$\frac{1}{\Gamma \left(\frac{1}{2}+\frac{i \sqrt{3}}{2}\right) \Gamma \left(\frac{1}{2}-\frac{i \sqrt{3}}{2}\right)}=\frac{\cosh \frac{\sqrt{3} \pi }{2}}{\pi }. $$ Multiplying those ratios and canceling out the same terms leads to a formula for the partial product: $$ \prod _{k=2}^n \left(1-\frac{1}{k^3}\right)= \frac{\cosh \frac{\sqrt{3} \pi }{2} \Gamma \left(n-\frac{i \sqrt{3}}{2}+\frac{3}{2}\right) \Gamma \left(n+\frac{i \sqrt{3}}{2}+\frac{3}{2}\right)}{3 \pi n^3 \Gamma^2 (n)}. $$ Taking the limit $n\to\infty$ gives the desired result.
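The closed form, $\cosh\!\big(\tfrac{\sqrt3\,\pi}{2}\big)/(3\pi)\approx 0.8094$, can be sanity-checked numerically against a large partial product (a check, not a proof):

```python
import math

# Partial product prod_{k=2}^{100000} (1 - 1/k^3).
partial = 1.0
for k in range(2, 100001):
    partial *= 1.0 - 1.0 / k**3

# Limit obtained from the Gamma-function formula as n -> infinity.
closed_form = math.cosh(math.sqrt(3) * math.pi / 2) / (3 * math.pi)

# The neglected tail contributes only O(1/n^2), so agreement is very tight.
assert abs(partial - closed_form) < 1e-8
```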
H: Finding gradual values I'm writing some code for a pressure level sensor for propane tanks. The manual provides me with the following table with the caption: "Best accuracy will be obtained using the calibration data in the table below:" I assume: 0.000 - 0.318 is an E-stop, 0.319 - 0.590 is 10, etc. What I'd like to find out is how I can calculate the exact graduation from a specific voltage. For example, if I get 0.45 volts, what is the exact graduation? I'd much appreciate being told what type of problem this is, and suggestions for formulas. (I've cast the tagging net very wide. If someone could point me in the right direction to tag, that would be great.) AI: The keyword you're looking for is interpolation.
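For example, a minimal linear-interpolation sketch in Python; the calibration pairs below are made-up placeholders (only the first two endpoints are guessed from the question), so substitute the actual (voltage, graduation) breakpoints from the manual:

```python
import bisect

# Hypothetical calibration pairs (voltage, graduation) -- replace with the
# real values from the sensor manual's table.
table = [(0.318, 0), (0.590, 10), (0.845, 20), (1.100, 30)]

def graduation(volts):
    """Linearly interpolate the graduation for a measured voltage."""
    xs = [v for v, _ in table]
    i = bisect.bisect_right(xs, volts)
    if i == 0:
        return table[0][1]              # below the table: clamp (or E-stop)
    if i == len(table):
        return table[-1][1]             # above the table: clamp
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (volts - x0) / (x1 - x0)
```

With these hypothetical endpoints, `graduation(0.45)` comes out to about $0 + 10\cdot(0.45-0.318)/(0.590-0.318) \approx 4.85$.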
H: How to think of the field $F(\alpha)$ The way I learned it was: given a field extension $F \subset E$ and an element $\alpha \in E$, $$F(\alpha) := \{p(\alpha)/q(\alpha) : p(x), q(x) \in F[x] ,q(\alpha) \not = 0\}.$$ Is there an easier way to think about the field $F(\alpha)$? AI: The definition you've given yields the form of an arbitrary element of the extension. However, it is not the intuitive definition. The intuitive definition is simple: $F(\alpha)$ is the smallest subfield of $E$ containing $F$ and $\alpha$. In fact, your definition can also be somewhat simplified if $\alpha$ is algebraic over $F$. In this case $F(\alpha)=F[\alpha]$, that is, the smallest ring containing $F$ and $\alpha$, so $F(\alpha)=F[\alpha]=\lbrace p(\alpha)\vert p\in F[x]\rbrace$. Furthermore, algebraic elements have finite degree, so we only need to consider $p$ of degree smaller than the degree of $\alpha$ over $F$. If $\alpha$ is transcendental over $F$, then $F(\alpha)$ is actually isomorphic to the field of rational functions over $F$, $F(x)$, in which case it is pretty much what you've written, but of course there's no restriction on $q$ in this case.
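A numeric illustration of why no denominators are needed when $\alpha$ is algebraic (my own example): in $\Bbb Q(\sqrt2)$, the inverse of $1+\sqrt2$ is again a polynomial in $\sqrt2$ of degree less than $2$.

```python
import math

alpha = math.sqrt(2)   # algebraic over Q with minimal polynomial x^2 - 2

# In Q(sqrt(2)) the inverse of 1 + sqrt(2) needs no denominator:
# 1/(1 + sqrt(2)) = sqrt(2) - 1, a polynomial in alpha of degree < 2.
assert math.isclose(1 / (1 + alpha), alpha - 1)
```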
H: Name of this discrete stochastic process Suppose we have $n$ blocks of wood. At each step, we choose one of these blocks uniformly at random and paint it red (so at later steps, we may be re-painting an already-red block). Let $X_t$ denote the fraction of the blocks painted red at time $t$. In other words, take $X_0 = 0$ and let $X_{t+1} = \begin{cases} X_t & \text{ with probability } X_t \\ X_t + 1/n & \text{ with probability } 1 - X_t \end{cases}$ Question: What is the name of this process? AI: This should go in a comment, but it looks to me like a coupon-collector problem. http://en.wikipedia.org/wiki/Coupon_collector's_problem
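A quick Monte Carlo sketch (illustrative only) shows the coupon-collector behaviour: the expected time for $X_t$ to reach $1$ is $n H_n$, where $H_n$ is the $n$-th harmonic number.

```python
import random

def time_to_paint_all(n, rng):
    """Steps until every one of n blocks has been painted at least once."""
    painted, steps = set(), 0
    while len(painted) < n:
        painted.add(rng.randrange(n))   # paint a uniformly random block
        steps += 1
    return steps

rng = random.Random(0)
n, trials = 10, 2000
avg = sum(time_to_paint_all(n, rng) for _ in range(trials)) / trials

# Coupon-collector expectation: n * H_n = 10 * (1 + 1/2 + ... + 1/10) ~ 29.29
expected = n * sum(1.0 / k for k in range(1, n + 1))
assert abs(avg - expected) < 2.0
```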
H: Question about $L^p$ spaces Suppose $1<p<\infty$ and let $L^1$ and $L^p$ denote the usual Lebesgue spaces on $[0,1]$. Let $$A=\{f\in L^1:\|f\|_p\leq 1\}.$$ Show $A$ is closed in $L^1$. I took a sequence $\{f_n\}$ in $A$ and assumed it converges to $f$ in $L^1$. I am having trouble showing $\|f\|_p\leq 1$. AI: A subsequence $\{f_{n_k}\}$ converges to $f$ almost everywhere. Indeed, pick a subsequence $\{f_{n_k}\}$ such that $\lVert f-f_{n_k}\rVert_1\leq 2^{-k}$. Then, by Chebyshev's inequality, $$\lambda\{x\mid |f(x)-f_{n_k}(x)|\geq j^{-1}\}\leq j2^{-k},$$ so $$\lambda\bigcup_{k\geq k_0}\{x\mid |f(x)-f_{n_k}(x)|\geq j^{-1}\}\leq j2^{-k_0+1},$$ and the set on which $\{f_{n_k}\}$ doesn't converge to $f$ has null measure. By Fatou's lemma, $$\int |f|^p=\int\liminf_k|f_{n_k}|^p\leq \liminf_k\int |f_{n_k}|^p\leq 1.$$
H: Two Representations of $\log \zeta$ I was looking for representations of $\log \zeta$ and found these two: $ \displaystyle \log\zeta(s)=\color{red}{s}\sum_{n>0} \frac{P(ns)}{n\color{red}{s}}$ from here [$\color{red}{s}$ inserted by me], $ \displaystyle \log \zeta(s) = s \int_0^\infty \frac{\pi(x)}{x(x^s-1)}\,dx, $ from there. Identifying $x$ with $n$, does this somehow imply $$ \frac{P(ns)}{\color{black}{s}}=\frac{\pi(n)}{n^s-1}\; ? $$ (Disproven here.) If so, how to prove it? If not, how are 1. and 2. related? AI: Here's how they are related. Split $(0,\infty)$ into intervals strategically: $$\begin{array}{c l}\int_0^\infty\frac{\pi(x)}{x(x^s-1)}dx & =\sum_{n=1}^\infty\int_{p_n}^{p_{n+1}}\frac{n}{x}\left(\sum_{k=1}^\infty x^{-ks} \right)dx \\ & =\sum_{n=1}^\infty n\sum_{k=1}^\infty\frac{p_{n+1}^{-ks}-p_n^{-ks}}{-ks} \\ & =\sum_{k=1}^\infty\sum_{n=1}^\infty \frac{1}{skp_n^{ks}} \\ & = \sum_{k=1}^\infty\frac{P(ks)}{ks} \end{array}$$ Notice the telescoping in the middle (write out some terms in the second line to see). The last line can be seen as $\log \zeta(s)$ by invoking $\zeta$'s Euler product and taking series expansions of the $\log$s.
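The resulting identity $\log\zeta(s)=\sum_{k\ge1}P(ks)/k$ can be checked numerically at $s=2$, where $\log\zeta(2)=\log(\pi^2/6)$ (a truncated sanity check, not a proof):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

primes = primes_up_to(10**5)

def P(s):
    """Prime zeta function sum_p p^{-s}, truncated to primes below 10^5."""
    return sum(p ** -s for p in primes)

s = 2.0
lhs = math.log(math.pi**2 / 6)                 # log zeta(2)
rhs = sum(P(k * s) / k for k in range(1, 40))  # truncated series

assert abs(lhs - rhs) < 1e-4
```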
H: if $ax+by = d$, then $a'x+b'y=d$ where $x>0$ and $0 \leq b' \leq x$ I've been trying this for a little while now, if $ax+by = d, \ $ then $a'x+b'y=d$ where $x>0\ $ and $0 \leq b' \leq x\ $ and $a,b,a',b',x,y, \in \mathbb{Z}$ My first thought is: $$ \begin{align} ax+by &= d \\ ax &= d - by \\ \end{align} $$ Implies that (since $x \neq 0 $ ), $$\begin{align} x &| d - by \\ d & \equiv by \pmod{x} \end{align} $$ I'm not sure that helps. I also thought maybe from, $ax = d - by,\ $ I could say, $$ \begin{align} d - by &= qx +r, \ 0 \leq r \leq x \\ d &= by + qx + r \end{align} $$ But I can't see a way of pulling the information from $r$ which ideally I could somehow rewrite as $ry$ so that $b'= r$. Any hints? Thanks again. Again, not technically homework just a problem from a book. AI: The "tell" of the problem is the condition $0\leq b'\lt x$. That suggests dividing something by $x$ and letting $b'$ be the remainder. Indeed, divide $b$ by $x$. We can write $b=qx+b'$, with $0\leq b'\lt x$. Then $$d = ax+by = ax+(qx+b')y = ax +qyx + b'y = (a+qy)x + b'y.$$ Setting $a'=a+qy$, we are done.
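A tiny numeric sketch of the construction (variable names are my own):

```python
def normalize(a, b, x, y):
    """Given a*x + b*y = d with x > 0, return (a', b') with
    a'*x + b'*y = d and 0 <= b' < x."""
    # Divide b by x: b = q*x + b' with 0 <= b' < x.  Python's floor
    # division guarantees this even when b is negative (since x > 0).
    q, b_new = divmod(b, x)
    return a + q * y, b_new

a, b, x, y = 3, 7, 5, 2
d = a * x + b * y                    # d = 29
a2, b2 = normalize(a, b, x, y)
assert a2 * x + b2 * y == d and 0 <= b2 < x
```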
H: $A=\{x\in \mathbb{R}\mid b^x < y\}$ is nonempty Let $1<b\in \mathbb{R}$ and $y\in \mathbb{R}$. I have proved that $A=\{x\in \mathbb{R}\mid b^x < y\}$ is nonempty when $y > 1$. Please give me a hint on how to show that $A$ is nonempty when $0 < y\leq 1$. AI: Consider the sequences $x_n=-n$ and $b_n = b^{x_n}$. Ask what the limit of $b_n$ is and use facts about limits.
H: How to design a convolutional error correcting code I'm trying to understand how one would design a convolutional code, like the (2,1,2) code that is always used in examples (see here for an example: https://en.wikipedia.org/wiki/Convolutional_code#Convolutional_encoding). It is clear to me how to decode an arbitrary convolutional code using the Viterbi algorithm to get the maximum likelihood message, but I haven't been able to find any sources which explain how one might design a code which has good properties when decoded. For example, clearly if the impulse responses are not linearly independent, this is bad, but is avoiding that enough to make a good code? How would one design a code with a very long constraint length, or with a given code rate? I've tried searching Google exhaustively, but I can't find anything about designing codes, only about the properties of already designed codes. Can anyone give me a source? AI: I'm not an expert on convolutional codes, so I won't pontificate. For learning I found the tutorial chapter by McEliece in the Handbook of Coding Theory quite useful, but my taste is algebraic, so McEliece's choice of language fit me well. Hopefully a university library near you has it, because that link was not very useful. Another oft-cited textbook is Johannesson and Zigangirov. It is probably more useful to an engineer, as it is more extensive and goes into performance analysis, details on the algorithmics, and such. Anyway, my understanding is that there are very few general families of good convolutional codes (in sharp contrast to block codes), so intelligent guessing and computer search have dominated. You have undoubtedly mostly found tables of good codes. I'm fairly sure somebody has published something on search heuristics and such, so to come up with new and better designs you will need something better than that. Alas, I can't point any such papers to you. For high performance with long blocks you should use turbo codes or LDPC codes anyway.
To that end Johannesson & Zigangirov is also useful, because in the end they describe the soft-output Viterbi algorithm, which will come in handy. A pure convolutional code simply cannot achieve similar ML performance, at least not without outrageous trellis complexity, which is kind of pointless, because both turbo and LDPC codes have reasonably good decoding complexity. Yet there is a range of block lengths where traditional convolutional codes work better. If memory serves, cellular people use a convolutional code for block lengths between 40 and 200 payload bits or something like that. Shorter than that, a well-designed block code will have so much better Hamming distance; longer than that, use turbo codes; much longer than that, use LDPC codes. For an exposition of the theory of LDPC code design I refer you to Richardson & Urbanke. The theory is somewhat beyond me, because it uses stochastic math instead of algebra.
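For concreteness, here is a minimal Python sketch of a rate-1/2, constraint-length-3 encoder of the kind the question's (2,1,2) example refers to, using the generator polynomials $(7,5)$ in octal (my choice; this is the usual textbook pair, not something from the sources above):

```python
def conv_encode(bits, gens=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder with generators (7, 5) in octal.

    The 3-bit shift register holds the current input bit and the two
    previous ones; each generator polynomial selects which register taps
    feed its parity output.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111       # shift the new bit in
        for g in gens:
            out.append(bin(state & g).count("1") % 2)  # parity of the taps
    return out

# Each input bit produces two output bits (rate 1/2).
assert conv_encode([1, 0, 1, 1]) == [1, 1, 1, 0, 0, 0, 0, 1]
```

Designing a good code then amounts to choosing generator polynomials that maximize the free distance, which is exactly where the tables and computer searches mentioned above come in.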
H: Free PDF for MV Calculus I was looking for a free PDF from which I can review MV calculus. Specifically: MV Limits, Continuity, Differentiation. Differentiation of vector and scalar fields Surface/Multiple Integrals A succinct book would be great, (coherent) course notes and presentations would do as well. I ran google searches with filetype:pdf but I couldn't find one which fits all my requirements. AI: You might try Paul Dawkins’ on-line Calculus III notes, which can be downloaded in PDF format. I’ve not looked at them, but I’ve taught Calculus I and II from his notes for those courses and found them quite usable, though there are certainly books that are better.
H: Properties of a $3\times 3$ orthogonal matrix [NBHM_2006_PhD Screening Test 2006_Algebra] Let $A$ be a $3\times 3$ orthogonal matrix with real entries. Which of the following are true? 1. $\det A$ is a rational number. 2. $d(Ax,Ay)=d(x,y)$ for any two vectors $x,y\in \mathbb{R}^3$, where $d$ is the usual Euclidean distance. 3. All entries of $A$ are positive. 4. All eigenvalues of $A$ are real. The determinant of an orthogonal matrix is $\pm 1$, so 1 is true; the modulus of every eigenvalue of an orthogonal matrix is $1$, so 4 may not always be true. I am not getting anywhere with 2 and 3. Thank you. AI: Hint for 2 (work out the details as an exercise). First, we show that if $A$ is orthogonal then $\| Ax \|^2 = \|x\|^2$: write $\|Ax \|^2 = (Ax)^{T}(Ax)$, distribute the transpose, and recall that $A^{T}A = I$. Done. Now, write $$d(Ax, Ay)^2 = \| Ax - Ay \|^2 = (Ax-Ay)^{T}(Ax-Ay).$$ Simplify to get $$(Ax)^{T}(Ax)-2(Ax)^{T}(Ay)+(Ay)^{T}(Ay) = \| Ax \|^2 - 2 (Ax)^{T}(Ay) + \| Ay \|^2.$$ Similarly, $d(x, y)^2 = \| x \|^2 - 2 x^{T}y + \| y \|^2$. Now you can match the terms one to one, except for the middle terms $2 (Ax)^{T}(Ay)$ and $2 x^{T}y$. Simplify $(Ax)^{T}(Ay)$ further and recall that $A^{T} A = I$. Done.
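A quick numerical illustration with NumPy (a sanity check, not a proof): a random orthogonal matrix preserves Euclidean distance (statement 2), while a simple rotation refutes statements 3 and 4.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 3x3 orthogonal matrix from the QR decomposition of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.allclose(Q.T @ Q, np.eye(3))

# Statement 2: orthogonal matrices preserve Euclidean distance.
x, y = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(np.linalg.norm(Q @ x - Q @ y), np.linalg.norm(x - y))

# Counterexample to 3 and 4: a 90-degree rotation about the z-axis is
# orthogonal, has a negative entry, and has eigenvalues 1, i, -i.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
assert np.allclose(R.T @ R, np.eye(3))
ev = np.linalg.eigvals(R)
assert np.allclose(np.sort(ev.imag), [-1.0, 0.0, 1.0])
```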
H: Unique expression of a polynomial under quotient mapping? I have a weird feeling about something I'm reading. Suppose $f(x)=x^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$ is a polynomial over a field $F$. Let $y=x+(f(x))$ be the image of $x$ in the quotient $F[x]/(f(x))$. Then every element of $F[y]$ can be uniquely expressed in the form $$ b_0+b_1y+\cdots+b_{n-1}y^{n-1} $$ with $b_i\in F$. I see that $$y^n = x^n+(f(x)) = -(a_{n-1}x^{n-1}+\cdots+a_1x+a_0)+(f(x))\\ = \left(-a_{n-1}x^{n-1}+(f(x))\right)+\cdots+\left(-a_1x+(f(x))\right)+\left(-a_0+(f(x))\right), $$ so it looks like any power of $y$ greater than or equal to $n$ can be written in terms of lower powers of $y$ if I could pull out the coefficients. However, why would the coefficients still be in $F$? Wouldn't they be somewhere else? And does uniqueness of this expression follow simply because the expressions $b_0+b_1y+\cdots+b_{n-1}y^{n-1}$ are polynomials, or is there something more to it than that? Thanks. AI: The map $$\pi:F[x]\longrightarrow F[x]/(f(x))\,\,,\,\,\pi(g(x)):=g(x)+(f(x))$$ is a ring homomorphism under which the field $\,F\,$ is embedded in the field $$\overline F:=\{a+(f(x))\;:\;a\in F\}.$$ Thus we can, and actually do, identify the elements of both fields, and under this agreement we can say the coefficients you ask about are in $\,F\,$, i.e. in $\,\overline F\,$. About uniqueness: we can divide any $\,h(x)\in F[x]\,$ by $\,f(x)\,$ with remainder (since we're in a Euclidean domain we can always do this!): $$h(x)=q(x)f(x)+r(x)\,\,,\,\,r(x)=0\,\,\text{or}\,\,\deg r<\deg f\,\,\,\,(**)$$ Uniqueness follows from degree considerations: $$q(x)f(x)+r(x)=q'(x)f(x)+r'(x)\Longrightarrow (q(x)-q'(x))f(x)=r'(x)-r(x),$$ but if $\,q(x)-q'(x)\neq 0\,$ then $\,\deg\left[(q(x)-q'(x))f(x)\right]\geq \deg f(x)>\deg(r'(x)-r(x))\,$, which of course contradicts $\,(**)\,$ above. The only thing left to do is to observe that $$h(x)+(f(x))=r(x)+(f(x)).$$
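To make $(**)$ concrete, here is a small pure-Python sketch (my own illustration) of division with remainder for a monic $f$, applied to $h=x^5$ and $f=x^2-2$: the class of $x^5$ in $F[x]/(x^2-2)$ is represented uniquely by $4x$, a polynomial of degree $<\deg f$ with coefficients in $F$.

```python
def poly_divmod(h, f):
    """Divide h by monic f with remainder over a field.  Polynomials are
    coefficient lists in increasing degree: [c0, c1, ...] = c0 + c1*x + ..."""
    h = list(h)
    n = len(f) - 1                      # deg f
    q = [0] * max(len(h) - n, 1)
    for i in range(len(h) - 1, n - 1, -1):
        c = h[i]                        # leading coefficient to eliminate
        q[i - n] = c
        for j, fj in enumerate(f):
            h[i - n + j] -= c * fj      # subtract c * x^(i-n) * f(x)
    return q, h[:n]                     # quotient, remainder (deg r < deg f)

# h = x^5, f = x^2 - 2:  x^5 = (x^3 + 2x)(x^2 - 2) + 4x.
q, r = poly_divmod([0, 0, 0, 0, 0, 1], [-2, 0, 1])
assert q == [0, 2, 0, 1] and r == [0, 4]
```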