H: Functions that cannot be integrated as simple functions Possible Duplicate: How can you prove that a function has no closed form integral? Since I was a college student, I was told there were many functions that cannot be integrated as simple functions. (I'll give the definition of simple functions at the end of the article.) As a TA for calculus now, I've been asked to integrate various functions; certainly, most of them are integrable (in the sense of simple functions). However, how could I know that certain functions are not integrable, and not merely that I cannot integrate them? (There was one time that one integration on the question sheet daunted all the TAs I asked.) Does anyone know THEORETICAL REASONS why certain functions cannot be integrated as simple functions? Or could you refer to certain references containing such materials? Or, could you show me by example that certain "good" functions actually don't have "good" integrals; I think one famous example could be "$\frac{\sin x}{x}$". Simple functions: the functions which are the sums (differences), products (quotients), and compositions of the following functions (as well as the functions generated by these operations): $x, \sin x , \cos x, \log x, \sqrt[n]{x}, e^x$. AI: A search for "integration in finite terms" will get you many useful results. This paper by Rosenlicht is a very good place to start. Bibliographic details: Maxwell Rosenlicht, Integration in Finite Terms, American Mathematical Monthly 79 (1972) 963-972.
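One way to see the theory of integration in finite terms in action (my addition, not part of the answer) is SymPy's symbolic integrator, which implements Risch-style algorithms: asked for an antiderivative of $\sin x/x$, it can only answer with the non-elementary special function $\operatorname{Si}$, which is defined as exactly that integral.

```python
import sympy as sp

x = sp.symbols('x')

# sin(x)/x has no elementary antiderivative; SymPy answers with the
# special function Si(x), which is defined as precisely this integral.
print(sp.integrate(sp.sin(x) / x, x))              # Si(x)

# By contrast, x*sin(x) does have an elementary antiderivative.
F = sp.integrate(x * sp.sin(x), x)
print(F)
print(sp.simplify(sp.diff(F, x) - x * sp.sin(x)))  # 0, confirming F' = x*sin(x)
```

Of course, a computer algebra system failing to find an elementary antiderivative is evidence, not proof; the proof is Liouville/Rosenlicht theory.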
H: Polynomials with roots having the same modulus and linearly dependent arguments Is it possible for a polynomial with integer coefficients to have some of its roots: $$m_1e^{i\theta_1 \pi}, m_2e^{i\theta_2 \pi}, \ldots, m_ke^{i\theta_k \pi}$$ such that there exist nonzero integers $a_1, a_2, \ldots, a_k$ and an integer $c$ with: $m_j = m_{j'}$ for all $j, j'$, The $\theta_j$'s are distinct, $a_1\theta_1 + a_2\theta_2 + \cdots + a_k\theta_k = c$. Is it possible in the case that all the $\theta_j$'s are transcendental? I have some opposing feelings regarding this question, and went in different directions with no luck, so any help is welcome. AI: There are many examples. I'll start with the most trivial, and work my way up to the more interesting. The polynomial $(x-1)^2$ has roots $e^{0\pi i}$ and $e^{2\pi i}$, and all three bullet points are satisfied. The $\theta_j$ are distinct, but the roots aren't; from here on, assume $0\le\theta_j\lt2$ to avoid this kind of example. The polynomial $x^2-1$ has roots $e^{0\pi i}$ and $e^{1\pi i}$, and all three bullet points are satisfied. If a polynomial with integer coefficients has any nonreal roots at all, then they come in complex conjugate pairs, $re^{i\pi\theta}$ and $re^{i\pi(2-\theta)}$, and $(1)\theta+(1)(2-\theta)=2$, so all three bullet points are satisfied. Moreover, $\theta$ will generally be transcendental. If two of the roots are complex conjugates, then their moduli will be equal, their thetas will add to an integer, and the thetas can be transcendental. If we rule out complex conjugate pairs, any polynomial that has a root of unity as a root will do, since all roots of unity are of the form $e^{2\pi ip/q}$, so the corresponding $\theta$s are rational. Similarly if it has roots $c\zeta$, $c$ a fixed real, $\zeta$ a root of unity. EDIT: Here's another way to generate examples. 
Let $\alpha=re^{\pi i\theta}$ be any algebraic number, let $p/q$ be any rational, then $\beta=re^{\pi i(\theta+(p/q))}$ is also algebraic, and there is a polynomial that has $\alpha$ and $\beta$ among its roots, and those two roots meet all the bullet points. This generates so many examples, I am struggling to see how to make an interesting question out of it. Now let's look at the question more generally. Let $r_j=me^{\pi i\theta_j}$. $$a_1\theta_1+a_2\theta_2+\cdots+a_k\theta_k=c$$ implies $$r_1^{a_1}r_2^{a_2}\cdots r_k^{a_k}=\pm m^a$$ where $a=a_1+a_2+\dots+a_k$. I don't know if there's any approach to get at all the examples of this.
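The closing identity is easy to sanity-check numerically. Here is a small script (my addition, with made-up values of $m$, $\theta_j$, $a_j$) verifying that the product of the $r_j^{a_j}$ is $\pm m^a$ whenever the weighted sum of the $\theta_j$ is an integer.

```python
import cmath

# Hypothetical data: m = 2, thetas chosen so that 2*0.4 + 1*1.2 = 2.0,
# an (even) integer c, with a = 2 + 1 = 3.
m = 2.0
thetas = [0.4, 1.2]
a = [2, 1]

roots = [m * cmath.exp(cmath.pi * 1j * t) for t in thetas]
prod = 1
for r, e in zip(roots, a):
    prod *= r ** e

# Since c = 2 is even, the sign is +1 and the product should be m**3 = 8.
print(prod)
assert abs(prod - m ** sum(a)) < 1e-9
```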
H: How many elements in a ring can be invertible? If $R$ is a finite ring (with identity) but not a field, let $U(R)$ be its group of units. Is $\frac{|U(R)|}{|R|}$ bounded away from $1$ over all such rings? It's been a while since I cracked an algebra book (well, other than trying to solve this recently), so if someone can answer this, I'd prefer not to stray too far from first principles within reason. AI: $\mathbb{F}_p \times\mathbb{F}_q$ has $(p-1)(q-1)$ invertible elements out of $pq$, and $\frac{(p-1)(q-1)}{pq}\to 1$ as $p,q\to\infty$, so no. Since $\mathbb{F}_2^n$ has $1$ invertible element, the proportion is also not bounded away from $0$.
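A brute-force check of the ratio (my addition): for distinct primes $p,q$, the ring $\mathbb{F}_p\times\mathbb{F}_q$ is isomorphic to $\mathbb{Z}_{pq}$ by the Chinese Remainder Theorem, so we can count its units with a gcd test.

```python
from math import gcd

def unit_count(n):
    """Number of invertible elements of Z_n, counted by brute force."""
    return sum(1 for a in range(n) if gcd(a, n) == 1)

# For distinct primes p, q, F_p x F_q is isomorphic to Z_{pq} (CRT),
# so the unit group has (p-1)(q-1) elements out of pq.
for p, q in [(3, 5), (11, 13), (101, 103)]:
    n = p * q
    assert unit_count(n) == (p - 1) * (q - 1)
    print(p, q, unit_count(n) / n)   # the ratio creeps toward 1
```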
H: In an engineering/optimisation context, does the set $E$ have any special significance? I am reading a paper about optimisation and the description, while mostly being a very good description, makes reference to some variables being in some set $E$. For example, it states that parameter $x\in E^n$. However it does not mention the significance of $E$ or why it did not say $\mathbb{R}$ instead. I suppose the algorithm under discussion doesn't only apply to reals, but also complex numbers, etc. I am wondering though, whether the description merely means $E$ to mean, "some set," or if they mean something more specific. For example it could mean $E$ to stand for "enumerated," which I guess would mean they only mean to refer to "computer-representable" numbers, which I guess technically is a subset of $\mathbb{R}$. I am not familiar with any special significance of the letter $E$ in engineering as applied to sets, so I wanted to make sure by asking. Here is the description from the paper: https://i.stack.imgur.com/ULt6J.png Can anyone please give me their best interpretation of $E$ here? Thanks. AI: $E^n$ is often used to denote $n$-dimensional Euclidean space, which is the same as $\mathbb R^n$. The symbol $E$ is often used to emphasize that the author is endowing $\mathbb R^n$ with the Euclidean inner product, or dot product, to make it into a Hilbert space. I've also seen it used to emphasize that $\mathbb R^n$ is being given a manifold structure.
H: Subgroups of $\Bbb Z_5 \times \Bbb Z_5$ Find all subgroups of $\Bbb Z_5 \times \Bbb Z_5$. I can see that the non-trivial ones are of order $5$. But how do I find them exactly? Thanks for any help. AI: We list the subgroups of order $5$. There is the group generated by $(0,1)$. Then there are the groups generated by $(1,b)$, where $b$ is an element of $\mathbb{Z}_5$. That's all. We can if we wish give the addition table for each.
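The count can be confirmed by brute force (my addition): every nonzero element of $\Bbb Z_5 \times \Bbb Z_5$ generates a subgroup of order $5$, and collecting these as sets yields exactly the six subgroups listed in the answer: $\langle(0,1)\rangle$ and $\langle(1,b)\rangle$ for $b=0,1,2,3,4$.

```python
from itertools import product

p = 5
nonzero = [g for g in product(range(p), repeat=2) if g != (0, 0)]

def span(g):
    """Cyclic subgroup of Z_5 x Z_5 generated by g, as a frozenset."""
    a, b = g
    return frozenset(((k * a) % p, (k * b) % p) for k in range(p))

subgroups = {span(g) for g in nonzero}
assert all(len(H) == p for H in subgroups)
print(len(subgroups))   # 6 = p + 1 subgroups of order 5
```

Each order-$5$ subgroup has $4$ generators, and $24/4 = 6$, matching the count.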
H: System of equations to represent a matrix Suppose that there is an $n \times n$ matrix. Then there will be entries $A_{ij}$ where $i$ and $j$ represent row and column. Equations contain entries of the matrix, $A_{ij}$, and when the equations are solved, we will get each entry. (Add: By equations I mean things like ${A_{11}}^2 + {A_{12}}^3+\dots$ where $A_{ij}$ would be entries of a matrix.) Is there any way to represent the matrix using a system of only $kn$ equations where $k$ is constant for all combinations of integer entries? (Obviously, linear equations will not do this.) Any variation of the matrix is allowed if the variation respects the following: entries ($n \times n$ of them) in the same column and row must be in the same column and row. Edit: In linear algebra, for example, if there are $n \times n$ distinct equations, then you will be able to get the entries of a matrix. This is a slightly different question. AI: This isn't really an answer, just too long for a comment. I'll repeat myself here to make sure I understood you correctly. Let the variables $A_{ij}, 1 \le i,j \le n$ denote the $(i,j)$th entry of the $n\times n$ matrix $A.$ Given such an $A,$ you would like to encode it as a system of polynomials: $$P_i(A_{11}, A_{12}, \ldots, A_{nn}) = 0, \quad 1 \le i \le B, \tag{1}$$ where $B$ is the total number of polynomial equations. The system $(1)$ is constructed from the entries of $A,$ and the solution to $(1)$ uniquely determines $A.$ Your question is then about how small $B$ could be? In particular, could $B = kn$ for some constant $k$? Obviously, if $\text{deg}(P_i) = 1$ for all $i,$ i.e. a linear system, then we need $n^2$ such equations. If $\text{deg}(P_i) = d_i > 1,$ for some (or all) $i,$ then the situation doesn't get better. Consider the situation with only 2 variables. Two lines can uniquely determine a point, because the number of intersection points is at most 1. Now consider a higher degree curve: circles ($x^2 + y^2 - c = 0$). 
Two circles intersect in at most 2 points, which gives you 2 solutions, one of which is your original matrix $A,$ and the other is some other matrix similar (as far as conditions go) to your original matrix. In higher dimensions (i.e. more variables), polynomials become surfaces and intersections become curves. In general, the number of common solutions will be proportional to the product of the degrees, by Bézout's theorem. Alas, I'm not well versed in algebraic geometry, and other users will prove far more helpful. Again, this is really a long comment rather than an answer.
H: How popular and used were logarithm tables? I've heard that, for a time, logarithm tables "sold more than the Bible". Can someone produce some reliable documentation about how prevalent they were? Would a common shopkeeper have one? Would a common merchant ship have one? (They would be used to make multiplications and divisions faster.) I am interested in information regarding any time period (though, as my wording suggests, the question was brought to my attention in the context of the 15th century) AI: Ship captains used logarithm tables for the same reason that astronomers and surveyors did: navigation required accurate calculation of the positions of the stars and other heavenly bodies, a calculation that without the use of tables would be quite expensive (timewise) to carry out. Indeed, only once Kepler had seen the logarithmic tables of Napier was he inspired to compute his own logarithm tables and use them to formulate his famous Kepler's Equation. His subsequent use of the equation to tabulate the positions of various heavenly bodies (in the Rudolphine Tables) would not have been possible without access to the logarithm tables for easy calculation. Calculations of the sort performed by Kepler were essential to any profession requiring accurate knowledge of the stars. The use of logarithms in calculating ephemerides (tables charting the positions of heavenly bodies) was an invaluable tool even as late as the middle of the twentieth century (cf. the work of L.J. Comrie in encouraging their use), and indeed the slide rules upon which twentieth-century computers (the people, not the machines) relied so heavily were of course based on the same principle.
H: Normal Distribution Transformation Suppose we have a normal distribution like $ f(x) = \mathcal{N}(\mu = 30, \sigma^2=10) $ and we transform it to another function by multiplying it by $ g(x) = 2x^2 $; the result would be: $ f(x)g(x) = h(x) = \frac{2x^2}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}} $ $h(x)$ is not a normal distribution anymore, and it is not a p.d.f. either. The question is: how can we transform it into a p.d.f.? After converting to a p.d.f., the result might not be a normal distribution. How can we find the $\mu$ for the new distribution? How about $\sigma^2$? AI: The simplest way would be to multiply by 1 over the integral of the new function, so that it becomes a P.D.F. As to what kind of distribution this would yield we'd have to check, but you could find the mean by the usual means, such as integration.
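Here is a numeric sketch of that recipe (my addition, stdlib only, using a crude midpoint rule): normalize $h$ by its integral, then obtain the new mean and variance by integrating against the normalized density. For a check, the normal moments $E[x^2]=\mu^2+\sigma^2$ and $E[x^3]=\mu^3+3\mu\sigma^2$ give the normalizing constant $Z = 2(\mu^2+\sigma^2) = 1820$ and new mean $E[x^3]/E[x^2] \approx 30.66$.

```python
import math

mu, var = 30.0, 10.0
sigma = math.sqrt(var)

def h(x):
    # f(x) * g(x), with f the N(30, 10) density and g(x) = 2x^2
    return 2 * x * x * math.exp(-(x - mu) ** 2 / (2 * var)) / (sigma * math.sqrt(2 * math.pi))

def integrate(fn, a, b, n=100000):
    # midpoint rule; plenty for this smooth, rapidly decaying integrand
    dx = (b - a) / n
    return sum(fn(a + (i + 0.5) * dx) for i in range(n)) * dx

# The density is negligible outside [0, 60] (that is mu +/- 9.5 sigma).
Z = integrate(h, 0.0, 60.0)          # normalizing constant, ~1820
pdf = lambda x: h(x) / Z
new_mu = integrate(lambda x: x * pdf(x), 0.0, 60.0)                   # ~30.66, not 30
new_var = integrate(lambda x: (x - new_mu) ** 2 * pdf(x), 0.0, 60.0)  # ~9.79, not 10
print(Z, new_mu, new_var)
```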
H: A trigonometric inequality How to show that if $0<\theta<2\pi$ then $\left|\sum\limits_{n=1}^{p}{\sin{n\theta}}\right|\le\csc{\frac{\theta}{2}}$ for all integers $p$? AI: $$S_p = \sum_{n=1}^{p} \sin(n \theta) = \csc (\theta/2) \sin(p \theta/2) \sin((p+1) \theta/2)$$ To see why the above is true, multiply $\displaystyle \sum_{n=1}^{p} \sin(n \theta)$ by $\sin(\theta/2)$ and write each term as a difference of cosines, and telescoping will give you the answer. Now, since $|\sin(\alpha)| \leq 1$ for all $\alpha \in \mathbb{R}$, and $\csc(\theta/2)>0$ for $0<\theta<2\pi$, we hence get that $$|S_p| = \left|\sum_{n=1}^{p} \sin(n \theta)\right| \leq \csc (\theta/2)$$
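A quick numerical check (my addition) of both the telescoping closed form and the resulting bound:

```python
import math

def partial_sum(p, theta):
    return sum(math.sin(n * theta) for n in range(1, p + 1))

def closed_form(p, theta):
    # csc(theta/2) * sin(p*theta/2) * sin((p+1)*theta/2), from the answer
    return math.sin(p * theta / 2) * math.sin((p + 1) * theta / 2) / math.sin(theta / 2)

for p in (1, 2, 7, 50):
    for theta in (0.1, 1.0, 2.0, 3.0, 5.5):   # sample values in (0, 2*pi)
        s = partial_sum(p, theta)
        assert abs(s - closed_form(p, theta)) < 1e-9      # identity holds
        assert abs(s) <= 1 / math.sin(theta / 2) + 1e-12  # |S_p| <= csc(theta/2)
print("identity and bound verified")
```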
H: Issue with combining InEqualities I have the following two inequalities: $$\begin{align*} 8x &\gt 12y\\ 12y &\gt 15z \end{align*}$$ Now the book states that we need to line up the inequalities as such $$\begin{array}{rcccccl} 0 &<& 15z\\ && 15z &<& 12y\\ & & & & 12y &<& 8x\\ \end{array}$$ Hence we get $$ 0 < 15z < 12y < 8x $$ Now my question is how did the book get the following $$\begin{array}{rcccccl} 0 &<& 15z\\ && 15z &<& 12y\\ && && 12y &<& 8x \end{array}$$ AI: You know that $12y\lt 8x$, because that is the first inequality you have; it appears last in the large display. You also know that $15z\lt 12y$, because that is the second inequality you have; it appears in the middle of the large display. And, presumably, you know that $z$ is positive, so that $15z$ is also positive, $15z\gt 0$. So: $0$ is smaller than $15z$; and $15z$ is smaller than $12y$; and $12y$ is smaller than $8x$. That's the three inequalities that appear, only they are indented to make it clear how they fit together.
H: $C_0(X)$ is a closed subspace of $C_b(X)$ Can you tell me if my proof is correct? Thank you! Claim: $C_0(X)$ is a closed subspace of $C_b(X)$ Proof: We have to show that $C_0(X)$ contains all of its limit points. Let $f(x)$ be a limit point of it, then we have a sequence $f_n$ converging to it (in $\|\cdot\|_\infty$) hence $f_n$ is a Cauchy sequence (with respect to $\|\cdot\|_\infty$). Of course, $f$ is continuous since it's the uniform limit of a sequence of continuous functions. So we only have to show that $f(x)$ vanishes at $\pm \infty$ (apparently, $X \subset \mathbb R$? See here for a definition of $C_0(X)$. I'd have thought it's more general but then what does $x \to \infty$ mean in a general topological space?) Let $\varepsilon>0$. Let $N$ be such that $m,n \geq N$ implies $\|f_n - f_N\|_\infty \leq \varepsilon/2$. Then $$ |f(x)| \leq |f(x) - f_N(x)| + |f_N(x)| \leq \varepsilon$$ for $x \in X \setminus K_N$ where $|f_N(x)| \leq \varepsilon/2$ outside some $K_N$, $K_N$ compact. Also, for all $x$, $|f(x) - f_N(x)| = \lim_{m \to \infty} |f_m(x) - f_N(x)| \leq \varepsilon/2$ since $|f_m(x) - f_N(x)| \leq \|f_m - f_N\|_\infty \leq \varepsilon / 2$ for all $m > N$. AI: I don’t see any reason to get bogged down with the Cauchy property: given $\epsilon>0$ you know that there is an $n\in\Bbb N$ such that $\|f-f_n\|_\infty<\epsilon/2$, and you know that there is a compact $K_n\subseteq X$ such that $|f_n(x)|<\epsilon/2$ for $x\in X\setminus K_n$. It follows that $$|f(x)|\le|f(x)-f_n(x)|+|f_n(x)|<\epsilon$$ for $x\in X\setminus K_n$. Since $\epsilon$ was arbitrary, it further follows that $\lim_{x\to\infty}f(x)=0$. You can’t talk about the limit at infinity in an arbitrary topological space, but you can do so in a locally compact space. Indeed, the definition is in your earlier question: $\lim_{x\to\infty}f(x)=A$ if and only if for every $\epsilon>0$ there exists some compact set $K\subseteq X$ with $|f(x)-A|<\epsilon$ for all $x\in X\setminus K$. 
This works because if $X$ is Hausdorff, locally compact, and not compact, it has a one-point compactification, a compact Hausdorff space $X^*$ formed as follows. Let $\infty$ be a point not in $X$; then in $X^*=X\cup\{\infty\}$, open nbhds of $\infty$ are sets of the form $X^*\setminus K$ where $K$ is a compact subset of $X$, and the subspace topology on $X$ as a subspace of $X^*$ coincides with the original topology on $X$. As an example, if $X=(0,1)$ with the usual topology, $X^*$ is homeomorphic to $S^1$, the circle with the usual topology. If $X=\Bbb N$ with the discrete topology, $X^*$ is homeomorphic to the subspace $$\{0\}\cup\{2^{-n}:n\in\Bbb N\}$$ of $\Bbb R$. (Note that this compactification adds only one point $\infty$; it does not make sense to talk about $\pm\infty$ in this context.) The definition given above of $\lim_{x\to\infty}f(x)$ really makes sense only when $X$ is Hausdorff, locally compact, and not compact, and in that case it’s equivalent to saying that if we extend $f:X\to\Bbb R$ to a function $\hat f:X^*\to\Bbb R$ by setting $\hat f(\infty)=A$, then $\hat f$ is continuous at $\infty$.
H: What is so wrong with polynomial hierarchy collapsing Many computational complexity researchers believe that a finite-level collapse of the polynomial hierarchy is unlikely. Why do they believe this? AI: It's possible to arrive at this conclusion essentially directly from the opinion that P $\neq$ NP. Let's think of NP-completeness via SAT, the problem of satisfying a Boolean formula in $n$ variables. In the polynomial hierarchy PH, we construct complete problems similarly by allowing quantifier alternations, e.g. $\forall X_1 \exists X_2 ... \forall X_n \varphi(X_1,...,X_n)$. Here the $X_i$ might be sets of variables, not just individuals, in the Boolean formula $\varphi$. The problem of satisfying the kind of formula given is $\Pi_n^P$ complete, while if I'd started with $\exists$ it would have been $\Sigma_n^P$ complete. OK, now suppose PH collapses, so that, say, for all $n > n_0$: $\Pi^P_n, \Sigma^P_n \subseteq \Sigma^P_{n_0}$. Then TQBF$_{n_0},$ the complete problem described above for $\Sigma^P_{n_0}$, is actually complete for PH as a whole. Conversely, if PH has any complete problem, it must lie at some finite level, in which all higher levels must then be contained, so we see PH collapses if and only if it has a complete problem. This bothers people in the same way as the idea that NP $\subset$ P. In the latter case, we'd have to be able in general to find a needle in a haystack without having to look at each piece of hay, while decades of work seem to indicate that NP problems can't generally be approached any more efficiently than via brute force. For higher polynomial complexity classes, the parallel supposition is that we can't find out, for example, whether a $\Sigma^P_2$ formula $\exists X_1 \forall X_2 \varphi(X_1,X_2)$ is satisfiable any more efficiently than by iterating through all possible assignments to the variables $X_1$ and, on each step of that loop, checking whether we can satisfy $\varphi$ with each possible assignment to the $X_2$. 
So, if you're skeptical about the assumption that P $\neq$ NP, that's stronger than being skeptical that PH collapses to any particular finite level, but in essence this holds conversely, too: if I can drop off all but $k$ alternations of quantifiers on PH problems, I am going to have to take much more seriously the idea that I can drop off all the quantifiers, namely the idea that P = NP. The last issue worth mentioning regards PSPACE. The quantified Boolean formula problem TQBF with arbitrarily many quantifier alternations is PSPACE complete, which shows that at least PSPACE $\supseteq$ PH. If PH=PSPACE, then PH collapses, because then TQBF sits at some finite level of the hierarchy and is PH-complete. On the other hand, if PH collapses, it might suggest PH=PSPACE, because otherwise we'd be in the very strange situation where each fixed TQBF$_n$ was reducible to TQBF$_{n_0}$ for some $n_0$ while the unbounded TQBF was not.
H: "Proof" of an Algebraic property of OLS Estimators I'm having a bit of trouble proving $\sum (x_i - \bar{x})\hat{e_i} = 0$. What I know so far is that the total sum of $\hat{e_i}$'s is zero by property of OLS so when you distribute the $\hat{e_i}$ in, one term "cancels out" and you are left with $\sum x_i\hat{e_i}$ which is equivalent to $\sum x_i(y_i-b_1-b_2x_i)$ When I attempt to simplify more, I keep getting stuck. What I'm doing so far is: $\sum x_iy_i - b_1\sum x_i - b_2 \sum x_i^2$ = $\sum x_iy_i -(\bar{y}-b_2 \bar{x})\sum x_i - b_2 \sum x_i^2$ = $\sum x_iy_i - \bar{y}\sum x_i + b_2\bar{x}\sum x_i - b_2 \sum x_i^2$. And at this point I get stuck. Is there an easier trick? An OLS property I am forgetting? Thanks! AI: From your notation I assume that your true model is: $$ Y_i=\beta_1+\beta_2 X_i + \epsilon_i \qquad i=1,\ldots,n $$ where $\beta_1$ and $\beta_2$ are the true parameters of the model. Then, if you assume that the (true) error term is iid and has zero conditional expectation you get (i.e., $E[\epsilon_i|X]=0$): $$ E[Y_i|X_i]=\beta_1 + \beta_2 X_i + E[\epsilon_i|X_i]\\ =\beta_1+\beta_2 X_i $$ which is what you want to estimate using OLS. Let $b_1$ and $b_2$ be the OLS estimates of $\beta_1$ and $\beta_2$ respectively. Then, $$ b_2=\frac{\sum_{i=1}^{n} (Y_i-\bar{Y})(X_i-\bar{X})}{\sum_{i=1}^{n}(X_i-\bar{X})^2}\\ $$ Since, $$ \sum_{i=1}^{n} (Y_i-\bar{Y})(X_i-\bar{X})=\sum_{i=1}^{n} Y_i(X_i-\bar{X})-\bar{Y}\sum_{i=1}^{n}(X_i-\bar{X})\\ =\sum_{i=1}^{n} Y_i(X_i-\bar{X}) $$ and, $$ \sum_{i=1}^{n} (X_i-\bar{X})(X_i-\bar{X})=\sum_{i=1}^{n} X_i(X_i-\bar{X})-\bar{X}\sum_{i=1}^{n}(X_i-\bar{X})\\ =\sum_{i=1}^{n} X_i(X_i-\bar{X}) $$ because $\sum_i(X_i-\bar{X})=0$ you can rewrite $b_2$ as follows: $$ b_2=\frac{\sum_{i=1}^{n} Y_i(X_i-\bar{X})}{\sum_{i=1}^{n}X_i(X_i-\bar{X})}\\ $$ Let $e_i$ be the OLS residual, i.e., $e_i=Y_i-b_1-b_2X_i$, $i=1,\ldots,n$. 
Then, $$ \sum_i(X_i-\bar{X})e_i=\sum_i(X_i-\bar{X})(Y_i-b_1-b_2X_i)\\ =\sum_i Y_i(X_i-\bar{X})-b_1\sum_i(X_i-\bar{X})-b_2\sum_i X_i(X_i-\bar{X})\\ =\sum_i Y_i(X_i-\bar{X})-b_2\sum_i X_i(X_i-\bar{X})\\ =\sum_i Y_i(X_i-\bar{X})-\left(\frac{\sum_i Y_i(X_i-\bar{X})}{\sum_i X_i(X_i-\bar{X})}\right)\sum_i X_i(X_i-\bar{X})\\ =0 $$ where again I have used the fact that $\sum_i (X_i-\bar{X})=0$.
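The identity is easy to confirm numerically. Here is a small stdlib simulation (my addition, with made-up data) that fits the OLS line using the formulas above and checks that the residuals are orthogonal to $X_i-\bar X$.

```python
import random

random.seed(0)
n = 200
X = [random.uniform(0, 10) for _ in range(n)]
Y = [1.5 + 2.0 * x + random.gauss(0, 1.0) for x in X]   # hypothetical data

xbar, ybar = sum(X) / n, sum(Y) / n
b2 = sum(y * (x - xbar) for x, y in zip(X, Y)) / sum(x * (x - xbar) for x in X)
b1 = ybar - b2 * xbar
e = [y - (b1 + b2 * x) for x, y in zip(X, Y)]           # OLS residuals

print(sum(e))                                           # ~0: residuals sum to zero
print(sum((x - xbar) * ei for x, ei in zip(X, e)))      # ~0: the identity proved above
```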
H: Order of a product of subgroups. Prove that $o(HK) = \frac{o(H)o(K)}{o(H \cap K)}$. Let $H$, $K$ be subgroups of $G$. Prove that $o(HK) = \frac{o(H)o(K)}{o(H \cap K)}$. I need this theorem to prove something. AI: Here is a LaTeX-ed version of the proof posted in BBred's comment. I've tried to add details in one place of the proof. If the OP explains which part of the proof is the problem, perhaps that part can be explained in more detail. I've made this answer a CW - anyone, feel free to contribute. Certainly the set $HK$ has $|H||K|$ symbols. However, not all symbols need represent distinct group elements. That is, we may have $hk=h'k'$ although $h\ne h'$ and $k\ne k'$. We must determine the extent to which this happens. For every $t\in H\cap K$, $hk =(ht)(t^{-1} k)$, so each group element in $HK$ is represented by at least $|H\cap K|$ products in $HK$. But $hk = h'k'$ implies $t = h^{-1} h' = k(k')^{-1}\in H\cap K$ so that $h'=ht$ and $k' = t^{-1} k$. Thus each element in $HK$ is represented by exactly $|H\cap K|$ products. So, $$|HK|= \frac{|H||K|}{|H\cap K|}.$$ If we have $hk=h'k'$ and we multiply this by $h^{-1}$ from left and by ${k'}^{-1}$ from right, we get $$k{k'}^{-1}=h^{-1}h'.$$ Maybe it should be stressed that $t\in H$, since $t=h^{-1}h'$; and $t\in K$ since $t=k{k'}^{-1}$. (Which means $t\in H\cap K$.)
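A tiny concrete check of the counting formula (my addition), with $H=\langle(0\,1)\rangle$ and $K=\langle(0\,2)\rangle$ inside $S_3$, permutations stored as tuples. Note that $HK$ need not itself be a subgroup; the formula counts the set $HK$ regardless.

```python
def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations of {0,1,2} stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

e = (0, 1, 2)
H = {e, (1, 0, 2)}   # generated by the transposition (0 1)
K = {e, (2, 1, 0)}   # generated by the transposition (0 2)

HK = {compose(h, k) for h in H for k in K}
assert len(HK) == len(H) * len(K) // len(H & K)   # 4 == 2*2/1
print(sorted(HK))   # 4 elements, so HK is not a subgroup (4 does not divide 6)
```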
H: Quibble with terminology Proposition 5.15 on page 63 in Atiyah-Macdonald goes as follows: Let $A \subset B$ be integral domains, $A$ integrally closed, and let $x \in B$ be integral over an ideal $ \mathfrak a$ of $A$. Then $x$ is algebraic over the field of fractions $K$ of $A$ and if its minimal polynomial over $K$ is $t^n + a_1 t^{n-1} + \dots + a_n$ then $a_1, \dots , a_n$ lie in $r(\mathfrak a)$. According to Wikipedia, an algebraic element is defined as follows: "If $L$ is a field extension of $K$, then an element a of $L$ is called an algebraic element over $K$, or just algebraic over $K$, if there exists some non-zero polynomial $g(x)$ with coefficients in $K$ such that $g(a)=0$." 1)Here $L$ is a field. Is it ok to call an element algebraic even if $L=B$ is just an integral domain? "algebraic" is never defined in AM. 2)Also, in the proof following the theorem: what is a conjugate of $x$? AI: Question 1) If $K$ is a field and $B$ is a completely arbitrary $K$-algebra, it makes perfectly good sense to say that an element $b\in B$ is algebraic over $K$. This means that the ideal $I_b\subset K[T]$ consisting of the $P(T)\in K[T]$ such that $P(b)=0$ is non-zero. The monic generator $m_b(T)$ of $I_b$ is then called the minimal polynomial of $b$ . Beware however that, in contrast to the case where $B$ is a field, this minimal polynomial needn't be irreducible over $K$. It may happen that all elements of $B$ are algebraic over $K$: the algebra $B$ is then called (a bit funnily) an algebraic algebra. The best-known example of such an algebraic algebra is the algebra $M_n(K)$ of matrices over $K$. Non-zero nilpotent matrices then have as minimal polynomials a power $T^k\; (2\leq k\leq n)$, which is thus an example of a non irreducible minimal polynomial. Question 2) The conjugates of $b$ are the roots of $m_b(T)$ in some field extension of $K$ containing a splitting field of $m_b(T)$.
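The matrix example in the answer is easy to see concretely (my addition): a $3\times3$ matrix with a single $1$ above the diagonal is nonzero but squares to zero, so its minimal polynomial is $T^2$, which is not irreducible.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = [[0, 1, 0],
     [0, 0, 0],
     [0, 0, 0]]

N2 = matmul(N, N)
assert any(any(row) for row in N)        # N != 0, so the minimal polynomial is not T
assert not any(any(row) for row in N2)   # N^2 = 0, so the minimal polynomial divides T^2
print("minimal polynomial of N is T^2, reducible over any field")
```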
H: Is there a residually finite group that is not finitely presented? I am looking for a residually finite group which is not finitely presented. Does such a group exist? AI: Finally, I found a nice example: the lamplighter group $L_2= \mathbb{Z}_2 \wr \mathbb{Z}$. There is a natural morphism from $L_2$ to $\mathbb{Z}_2 \wr \mathbb{Z}_n$, and such a morphism, for $n$ big enough, sends a given non-trivial element to a non-trivial element in the finite group $\mathbb{Z}_2 \wr \mathbb{Z}_n$. Moreover, one can show that $F \wr \mathbb{Z}$ is residually finite iff $F$ is abelian.
H: Graph theory question involving the probabilistic method. I was trying to prove the following statement using probabilistic methods: Suppose $G$ is a graph on $n\geq 10$ vertices with the property that if we draw any new edge, then the number of copies of $K_{10}$ increases. Prove that $|E|\geq 8n-36$. I am interested in learning the subject but I am stuck on this problem. The idea that I have is that I should perhaps find some event $X$ such that $P(X)\geq (8n-36)/|E|$, but I don't know what the event should be. Hints on how to start this problem would be greatly appreciated. AI: Here's one possible approach: consider the complement graph of $G$ and consider the pairs $(A_e,B_e)$, where $A_e$ contains the vertices of edge $e$ in $\bar{G}$ and $B_e$ contains all vertices that do not lie in a $K_{10}$ that $e$ completes. Try to find an upper bound on the number of edges of $\bar{G}$, i.e. the number of such pairs. Theorem 1.3.3 in the text by Alon and Spencer may be helpful.
H: proper subgroups of finite p-groups are properly contained in the normalizer I am trying to prove the following. Let $G$ be a finite $p$-group and let $H$ be a proper subgroup. Then there exists a subgroup $H'$ such that $$ H\lneq H'\leq G $$ and $H\triangleleft H'$. Obviously, the natural choice for $H'$ would be the normalizer $N_G(H)$ of $H$ in $G$. However, one needs to prove then that $H\lneq N_G(H)$ in finite $p$-groups. I am aware of a proof of this fact by induction on the order of $G$. However, I was wondering if there was another proof which only used group actions? AI: Yes, you can do it with group actions. Of course we only need to worry about the case that $H$ is a non-identity proper subgroup. Let $H$ act on the right cosets of $H$ in $G$ by right translation. Since $H$ is proper, the number of such cosets is divisible by $p.$ Orbit sizes in this situation are powers of $p$, so the number of fixed points (orbits of size $1$) is congruent mod $p$ to the total number of cosets, hence divisible by $p.$ At least one coset is fixed by $H,$ namely the coset $H$ itself, so there are at least $p$ fixed points, and in particular another orbit of size $1$. Hence there is some $g \in G \backslash H $ such that $Hgh = Hg$ for all $h \in H.$ Then $gHg^{-1} \leq H,$ so that $gHg^{-1} = H$ as both these subgroups have the same order. Hence $g \in N_{G}(H) \backslash H$ and $N_{G}(H) > H.$ (Another standard proof not by induction is to use the upper central series).
H: How to prove: $S=\frac{4}{3}\sqrt{ m(m-m_a)(m-m_b)(m-m_c)}$ If $$m_a, m_b, m_c$$ are the medians of a triangle and $$m=\frac{m_a+ m_b+ m_c}{2}$$ then the area $S$ of the triangle is given by $$S=\frac{4}{3}\sqrt{ m(m-m_a)(m-m_b)(m-m_c)}$$ This looks very similar to Heron's formula. How to prove this formula? AI: $$S=\frac{4}{3}\sqrt{ m(m-m_a)(m-m_b)(m-m_c)}$$ $$ =\frac{4}{3}\sqrt{ \frac{(m_a+m_b+m_c)}{2}\frac{(-m_a+m_b+m_c)}{2}\frac{(m_a-m_b+m_c)}{2}\frac{(m_a+m_b-m_c)}{2}}$$ $$ =\frac{1}{3}\sqrt{(m_a+m_b+m_c)(-m_a+m_b+m_c)(m_a-m_b+m_c)(m_a+m_b-m_c)}$$ $$ =\frac{1}{3}\sqrt{[(m_a+m_b)^2-m_c^2] [m_c^2-(m_b-m_a)^2] } $$ $$ =\frac{1}{3}\sqrt{-[(m_a+m_b)(m_b-m_a)]^2+m_c^2(m_a+m_b)^2+m_c^2(m_b-m_a)^2-m_c^4}$$ $$ =\frac{1}{3}\sqrt{-[m_b^2-m_a^2]^2+2m_c^2(m_a^2+m_b^2)-m_c^4}$$ $$ =\frac{1}{3}\sqrt{2(m_a^2m_b^2+m_b^2m_c^2+m_c^2m_a^2)-(m_a^4+m_b^4+m_c^4)}$$ Now replacing $m_a$, $m_b$, $m_c$ respectively with $\frac{1}{2}\sqrt{2c^2+2b^2-a^2}$, $\frac{1}{2}\sqrt{2a^2+2c^2-b^2}$, $\frac{1}{2}\sqrt{2a^2+2b^2-c^2}$ you'll arrive at Heron's formula: $$S = \sqrt{s(s-a)(s-b)(s-c)}$$ EDIT Proof for $m_a = \frac{1}{2}\sqrt{2c^2+2b^2-a^2}$: Assumptions: $BC = a$, $AC = b$, $AD = DE = m_a$ and $AB = c$, where $D$ is the midpoint of $BC$ and $E$ is the reflection of $A$ through $D$. From the law of cosines, we have: $$a^2 = b^2 + c^2 - 2bc\cos A$$ Now, in triangle $ABE$ (since $ACD$ is congruent to $EBD$, we have $BE = AC = b$, and the angle at $B$ is $\pi - A$): $$4m_a^2 = AE^2 = b^2 + c^2 - 2bc\cos(\pi - A)$$ Adding the two equations, $$a^2 + 4m_a^2 = 2(b^2 + c^2) - 2bc[\cos A + \cos(\pi - A)] = 2(b^2+c^2)$$ or, $$4m_a^2 = 2b^2 + 2c^2 - a^2$$ Hence, $$m_a = \frac{1}{2}\sqrt{2b^2 + 2c^2 - a^2} $$
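Here is a quick numerical confirmation of the whole chain (my addition): compute the medians from hypothetical side lengths, and check that the median formula agrees with Heron's formula.

```python
import math

def heron(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def area_from_medians(ma, mb, mc):
    m = (ma + mb + mc) / 2
    return (4 / 3) * math.sqrt(m * (m - ma) * (m - mb) * (m - mc))

# hypothetical triangle sides, just for the check
a, b, c = 7.0, 8.0, 5.0
ma = 0.5 * math.sqrt(2 * b * b + 2 * c * c - a * a)
mb = 0.5 * math.sqrt(2 * a * a + 2 * c * c - b * b)
mc = 0.5 * math.sqrt(2 * a * a + 2 * b * b - c * c)

print(heron(a, b, c), area_from_medians(ma, mb, mc))   # the two areas agree
assert abs(heron(a, b, c) - area_from_medians(ma, mb, mc)) < 1e-9
```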
H: Compact operator on a Hilbert space. Possible Duplicate: A compact operator is completely continuous. I came across this question in a book; it is an exercise, and I can't prove it. Can someone please help me? If $X$ and $Y$ are Banach spaces, we have to prove that a compact linear operator is completely continuous. A mapping $T \colon X \to Y$ is called completely continuous if it maps a weakly convergent sequence in $X$ to a strongly convergent sequence in $Y$, i.e., $x_n\underset{n\to +\infty}\rightharpoonup x$ implies $\lVert Tx_n- Tx\rVert_Y\to 0$. AI: Suppose not. Then there is some sequence $(x_n)$ in $X$ with $x_n \rightharpoonup x$ but $Tx_n \not\to Tx$. That is, there is some $\epsilon > 0$ such that $\|Tx_{n_j} - Tx\| \ge \epsilon$ for some subsequence $(x_{n_j})$. As $(x_{n_j})$ is bounded and $T$ is compact, some subsequence converges in $Y$, say $Tx_{n_{j_k}} \to y$. As $T$ is norm-to-norm continuous, it is weak-to-weak continuous, hence $Tx_{n_{j_k}} \rightharpoonup Tx$. Weak limits are unique, so $Tx = y$, that is $Tx_{n_{j_k}} \to Tx$, contradicting the choice of $(x_{n_j})$. Hence $T$ is completely continuous.
H: Compact operators between Hilbert spaces I have the suspicion that the following statement is true, but I don't know how to prove it. Any suggestion? Thanks to all! Let $X$, $Y$ be Hilbert spaces and let $T \colon X \to Y$ be a linear continuous injective map. Suppose that for every $\epsilon > 0$ there exists a closed vector subspace $V_\epsilon \subseteq X$ of finite codimension such that $\Vert Tv \Vert_Y \leq \epsilon \Vert v \Vert_X$ for all $v \in V_\epsilon$. Then $T$ is compact. AI: Let $P_\epsilon$ be the orthogonal projection onto $V_\epsilon$ and $$ T_n := T(\mathbb I - P_{1/n}) $$ $T_n$ is a finite-rank operator, since its range is contained in the image under $T$ of the finite-dimensional subspace $V_{1/n}^\perp$, and $$ \lVert (T - T_n)v \rVert = \lVert T P_{1/n} v \rVert \leq \frac 1 n \lVert P_{1/n} v \rVert \leq \frac 1 n \lVert v \rVert $$ and so $$ \lVert T - T_n \rVert \leq \frac 1 n \to 0 $$ We can conclude $T$ is compact because it is the limit in the operator norm of a sequence of finite-rank operators.
H: Probability of a few possible outcomes with certain probability beating the original Let's say we have a contest of exactly 5 contestants who are all competing against the Original. The probabilities that each contestant will win over the Original are as follows: cont1 = 26.9% cont2 = 21.3% cont3 = 20.7% cont4 = 8.96% cont5 = 67.5% Please tell me: what is the formula to calculate the chance that any one of (cont1, cont2, cont3, cont4) beats the Original? (Please note I'm not taking cont5 into account.) AI: I'm assuming that there is independence between each of the contestants. Let $p_i$ denote the probability that the $i$'th contestant beats the Original. Then $$ P(\text{none of cont1-cont4 wins})=\prod_{i=1}^4 (1-p_i)\approx 0.4153 $$ and therefore $$ P(\text{any of cont1-cont4 wins})=1-P(\text{none of cont1-cont4 wins})\approx 0.5847. $$ It's just elementary probability theory. Let $A_i$ denote the event that the $i$'th contestant wins, i.e. $P(A_1)=0.269$, $P(A_2)=0.213$ and so on. Then the complement $A_i^c$ is the event that the $i$'th contestant does not win and $P(A_i^c)=1-P(A_i)$ for all $i=1,\ldots,5$ (elementary property of a probability measure). Now $$ P(\text{none of cont1-cont4 wins})=P\left(\;\bigcap_{i=1}^4 A_i^c\right)\stackrel{\rm indep.}{=} \prod_{i=1}^4 P(A_i^c)=\prod_{i=1}^4 \big(1-P(A_i)\big)\approx 0.4153, $$ where we have used the definition of independent events. At last we use that the complementary set of $\{\text{any of cont1-cont4 wins}\}$ is exactly the set $\{\text{none of cont1-cont4 wins}\}$.
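The computation above takes three lines to reproduce (my addition; it still assumes the four events are independent):

```python
# Probabilities that cont1..cont4 beat the Original, from the question.
p = [0.269, 0.213, 0.207, 0.0896]

none_win = 1.0
for pi in p:
    none_win *= 1 - pi                         # P(all four lose)

any_win = 1 - none_win                         # P(at least one of cont1-cont4 wins)
print(round(none_win, 4), round(any_win, 4))   # 0.4153 0.5847
```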
H: Express $C$ in terms of the sets $A_n$ [NBHM_2006_PhD Screening Test_Analysis] Let $f$ be a real valued function on $\mathbb{R}$ define $$w_j(x)=\sup\{|f(u)-f(v)|: u,v\in [x-1/j,x+1/j]\}$$ $j\in \mathbb{N}$ and $x\in\mathbb{R}$, Define next $$A_{j,n}=\{x\in\mathbb{R}:w_j(x)<1/n\}$$ $n=1,2,\dots$ and $$A_n=\bigcup_{j=1}^{\infty}A_{j,n}$$ Now let $$C=\{x\in\mathbb{R}: f \text{ is continuous at } x\}$$ How to express $C$ in terms of the $A_n$? Well, according to the definition, $f$ is continuous at $x$ iff $w_j(x)\to 0$ as $j\to\infty$ (I guess, I can do that by definition of continuity). Then I guess, $C=\bigcap A_n$. I am not sure. Thank you. AI: $\def\abs#1{\left|#1\right|}$ Hi, you are right about $C$. We have for $x \in \mathbb R$: \begin{align*} f \text{ is continuous at } x &\iff \forall n \exists j \forall y: \abs{x-y} \le \frac 1j \Rightarrow \abs{f(x) - f(y)} < \frac 1n\\ &\iff \forall n \exists j :\sup_{\abs{x-y} \le \frac 1j} \abs{f(x)-f(y)} < \frac 1n\\ &\iff \forall n \exists j: w_j(x) < \frac 1n\\ &\iff \forall n \exists j :x \in A_{j,n}\\ &\iff x \in \bigcap_n \bigcup_j A_{j,n} \end{align*}
H: Motivation for Topology study in Real Analysis I'm an engineering student trying to work out some Real Analysis to learn how to write proofs (Needed for my PhD thesis) and just to rekindle my Calculus fires. From what I see, Real Analysis is the study of the basics of Calculus and "constructing" Calculus ground up. Why is Topology needed in the endeavor? I can't grapple my Engineering brain around compact sets and finite subcovers and stuff. I just can't imagine what's happening without spending 2 hours and 4 coffees. The only "use" of this I've seen till now is for proving All Cauchy sequences are convergent in certain spaces. Can I skip topology? If yes, what is the bare minimum I need to do? AI: You should know the following theorems from calculus on $\Bbb{R}$: Extreme Value Theorem: A continuous function $f$ on a closed bounded interval attains its maximum and minimum value on that interval. A continuous function on a closed bounded interval is uniformly continuous Intermediate Value Theorem: Let $f$ be a continuous function on $\Bbb{R}$ such that there are real numbers $a,b$ for which $f(a) < 0$ and $f(b) > 0$. Then there exists $c \in (a,b)$ such that $f(c)= 0$. The first two theorems make use of a topological property known as compactness. The Heine - Borel Theorem is what tells you that in $\Bbb{R}$ with Euclidean metric, being closed and bounded is equivalent to being compact. Number 3 makes use of the topological property that $\Bbb{R}$ with the euclidean metric is connected. Number 2 is especially important to know why a continuous function on a closed bounded interval is Riemann Integrable; this should be enough motivation for you to study topology! The following I think is a cool example of where things get gnarly in topology. 
In a metric space (in fact in a topological space that is at least $T_2$) this cannot happen but: Put the trivial topology on $\Bbb{R}$, the family $A_n = (0,\frac{1}{n})$ is a countable collection of non-empty compact sets such that the intersection of any finite sub-collection is non-empty, but clearly $$\bigcap_{n=1}^\infty A_n = \emptyset .$$
H: Find the area enclosed by the curve $r=2+3\cos \theta$. The question is: Find the area enclosed by the curve $r=2+3\cos \theta$. Here are my steps: when $r=0$ we have $\cos \theta=-2/3$, i.e. $\theta=\arccos(-2/3)$, so the area enclosed by the curve is 2*(the area bounded by $\theta=\arccos(-2/3)$ and $\theta=0$). The answer in my book is $5\sqrt{5}+(17/2)*\arccos(-2/3)$. I have no idea why there is a $5\sqrt{5}$ since $\arccos(-2/3)=2.300523984$ on my calculator. AI: Area of curve, $$A=2\int_0^{\arccos(-2/3)}\frac{r^2}{2}d\phi$$ $$\implies A=2\int_0^{\arccos(-2/3)} \frac{(2+3\cos\phi)^2}{2}d\phi$$ $$=2\int_0^{\arccos(-2/3)} \frac{4+9\cos^2\phi+12\cos\phi }{2}d\phi$$ $$=\int_0^{\arccos(-2/3)} (4+9\cos^2\phi+12\cos\phi) d\phi$$ $$=\left[\frac{17}{2}\phi+\frac{9}{2}\sin\phi\cos\phi+12\sin\phi\right]_0^{\arccos(-2/3)}$$ (using $\sin(\arccos(-2/3))=\sqrt{5}/3$) $$=3\sqrt{5}+\frac{17}{2}\cos^{-1}(\frac{-2}{3}).$$
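A numerical cross-check of that integral is straightforward (stdlib-only Python, composite Simpson's rule); it agrees with the $3\sqrt{5}+\frac{17}{2}\arccos(-2/3)$ value rather than the book's $5\sqrt{5}$ coefficient:

```python
import math

# Numerically integrate 4 + 9cos^2(t) + 12cos(t) on [0, arccos(-2/3)]
# with composite Simpson's rule, and compare with the closed form.
a, b = 0.0, math.acos(-2 / 3)
n = 10_000  # even number of subintervals
h = (b - a) / n
f = lambda t: 4 + 9 * math.cos(t) ** 2 + 12 * math.cos(t)

s = f(a) + f(b)
for k in range(1, n):
    s += (4 if k % 2 else 2) * f(a + k * h)
area = s * h / 3

closed_form = 3 * math.sqrt(5) + 8.5 * math.acos(-2 / 3)
print(area, closed_form)  # the two agree to high precision
```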
H: Lebesgue measure on $\mathbb{R}/\mathbb{Z}$ I was reading a (brief) introduction about measure theory today and came across the following statement: (Lebesgue measure on $\mathbb{R}/\mathbb{Z}$): There is a unique probability measure $\mu$ on $\mathbb{R}/\mathbb{Z}$ such that $\mu((a,b))=b-a$ for all $0\le a<b\le 1$. My question is not about the proof (the notes are too brief to have the proofs) but about the $\sigma$-algebra in question. I assume that $(\mathbb{R}/\mathbb{Z},B,\mu)$ is a measure space where $B$ is the Borel $\sigma$-algebra, i.e. the smallest $\sigma$-algebra containing all the open balls. What is the metric in question? How come all elements of $B$ of the form $(a,b)$? Do we use the bijection between $\mathbb{R}/\mathbb{Z}$ and $[0,1)$ somewhere? I'll be grateful if someone can clarify my doubts. Thanks. AI: Yes, one uses the bijection you mention. To be precise, let $f:[0,1]\to S^1$ be the defining quotient map $t\mapsto e^{2\pi i t}$. Write $\lambda$ for Lebesgue measure on $[0,1]$. Then the property is $\mu(S)=\lambda(f^{-1}(S))$, or $\mu(f(a,b))=\lambda(a,b):=b-a$. This is usually called Haar measure (rather than Lebesgue measure) on $S^1$.
H: $\chi^2$ test and sampling variance Let $f(x)$ denote the pdf of a $\chi^2$-distribution with $n\in\mathbb{N}$ degrees of freedom given by $$f(x) = \frac{2^{-n/2}}{\Gamma(n/2)}\cdot x^{n/2-1}\cdot\mathrm e^{-x/2}\cdot\textbf{1}_{[0,\infty)}(x),$$ where $\textbf{1}_A(x)=\begin{cases}1,&x\in A,\\0,&\text{else.}\end{cases}$ Furthermore we define $\Gamma(1/2)=\sqrt{\pi},\;\Gamma(1)=1$ and $\Gamma(r+1)=r\cdot\Gamma(r)$. Assume we have two independent random variables $X_1,X_2\sim\mathcal{N}(\mu,\sigma^2)$ with unknown $\mu$ and unknown $\sigma$ and their sampling variance $S_X^2=\frac{1}{n-1}\sum\limits_{i=1}^n(X_i-\overline{X})^2$ with $\overline{X}=\frac{1}{n}\sum\limits_{i=1}^nX_i$. Show that $$\frac{1}{\sigma^2}S_X^2=\frac{(X_1-\overline{X})^2+(X_2-\overline{X})^2}{\sigma^2}$$ is $\chi^2$ distributed with one degree of freedom. To be honest, I have no idea at all how to start because this huge amount of information intimidates me. Can anyone explain me an appropriate ansatz to prove this? AI: First note that for $n=2$ we have $\overline{X}=(X_1+X_2)/2$, so $X_1-\overline{X}=(X_1-X_2)/2=-(X_2-\overline{X})$. Hence $$\frac{S_X^2}{\sigma^2}=\frac{(X_1-\overline{X})^2+(X_2-\overline{X})^2}{\sigma^2}=\frac{(X_1-X_2)^2}{2\sigma^2}=W^2,\qquad W:=\frac{X_1-X_2}{\sqrt{2}\,\sigma}.$$ Since $X_1,X_2$ are independent $\mathcal{N}(\mu,\sigma^2)$, the difference $X_1-X_2$ is $\mathcal{N}(0,2\sigma^2)$, so $W\sim\mathcal{N}(0,1)$. It remains to find the density of $W^2$. By symmetry, $\mathbb{P}(W^2\le t)=2\int_0^{\sqrt{t}}\psi(x)\,dx$ for $t>0$, where $\psi$ is the standard Gaussian density, and since $t\mapsto\sqrt{t}$ is differentiable on $(0,\infty)$, the chain rule gives the density $$2\psi(\sqrt{x})\cdot\frac{1}{2\sqrt{x}}=\frac{1}{\sqrt{2\pi}}\,x^{-1/2}\,\mathrm e^{-x/2},$$ which is exactly your $f$ with $n=1$ (recall $2^{-1/2}/\Gamma(1/2)=1/\sqrt{2\pi}$). Hence $S_X^2/\sigma^2\sim\chi_1^2$. More generally it is an exercise (use convolutions or change of variables) to show that the sum of independent $\Gamma_{\alpha_1,\beta}$- and $\Gamma_{\alpha_2,\beta}$-distributed random variables is $\Gamma_{\alpha_1+\alpha_2,\beta}$-distributed; this is one ingredient in showing $(n-1)S_X^2/\sigma^2\sim\chi_{n-1}^2$ for general $n$.
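A Monte Carlo sanity check (not a proof) of the claim: with $n=2$ draws, $S_X^2/\sigma^2$ should behave like a $\chi_1^2$ variable, whose mean is $1$ and variance is $2$. The values of $\mu$ and $\sigma$ below are arbitrary choices.

```python
import random

random.seed(0)
mu, sigma, trials = 3.0, 2.0, 200_000

vals = []
for _ in range(trials):
    x1 = random.gauss(mu, sigma)
    x2 = random.gauss(mu, sigma)
    xbar = (x1 + x2) / 2
    s2 = (x1 - xbar) ** 2 + (x2 - xbar) ** 2   # S_X^2 for n = 2
    vals.append(s2 / sigma ** 2)

mean = sum(vals) / trials
var = sum((v - mean) ** 2 for v in vals) / trials
print(mean, var)  # ≈ 1 and ≈ 2, matching the chi-square(1) moments
```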
H: complexity for $f(x)=n!$ and O($2^n$) Suppose an algorithm has running time O($n!$). We all know that $n!$ should be smaller than $2^{2^n}$, but bigger than $2^n$. So, will O($n!$) be in EXPTIME (EXP)? Will we be able to write O($n!$) as O($2^n$)? AI: $n!$ is not $O(2^n)$. The function $n\mapsto n!$ grows much faster than $n\mapsto 2^n$, so you cannot write O($n!$) as O($2^n$). However, $n!\le n^n=2^{n\log_2 n}$, and $n\log_2 n$ is polynomial in $n$, so an O($n!$) algorithm does run in time $2^{\mathrm{poly}(n)}$ and therefore lies in EXPTIME. It is possible for algorithms to be much worse than exponential time. A perfect example of a $O(n!)$ algorithm is the mean time for the "bozo sort" in which you randomly sort a list of sortables, check if it is in order, and repeat if necessary.
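The shuffle-and-check sort mentioned in the answer (often called bogosort; "bozo sort" usually refers to the swap-two-random-elements variant) is easy to sketch. A random permutation of $n$ distinct items is sorted with probability $1/n!$, so the expected number of shuffles is $n!$:

```python
import random

def bogosort(items):
    """Repeatedly shuffle until sorted; the expected number of shuffles
    is n! for n distinct items, so this is far worse than O(2^n)."""
    items = list(items)
    while any(a > b for a, b in zip(items, items[1:])):
        random.shuffle(items)
    return items

random.seed(1)
print(bogosort([3, 1, 2]))  # → [1, 2, 3]
```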
H: Fermat's theorem on sums of two squares There's Fermat's theorem on sums of two squares. As the prime numbers that are $1\bmod4$ can be written as the sum of two squares, will the squared numbers be unique? For example, $41=4^2+5^2$ and the squared numbers will be $4$ and $5$. AI: Yes, if you don't take into account the order of the two numbers or $\pm$ sign in front of the numbers.
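A brute-force check of this uniqueness for small primes $p\equiv 1\pmod 4$ (restricting to $0<a\le b$ to factor out order and sign):

```python
# Each prime p ≡ 1 (mod 4) below 100 has exactly one representation
# p = a^2 + b^2 with 0 < a <= b.
def two_square_reps(p):
    r = int(p ** 0.5) + 1
    return [(a, b)
            for a in range(1, r)
            for b in range(a, r)
            if a * a + b * b == p]

for p in [5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97]:
    reps = two_square_reps(p)
    print(p, reps)
    assert len(reps) == 1
```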
H: Epsilon delta proof of a sequence How can I prove the convergence of $${\{7^{-n^{-1/5}}\}_{n\geq 1}}$$ ? AI: Just do some computations and use the monotonicity of powers and the logarithm to find the $N$ you want: First note that $0 < 7^{-n^{-1/5}}\le 1$ for every $n$, so for $\epsilon \ge 1$ any $N$ works; now let $0 < \epsilon < 1$. We have \begin{align*} 1 - 7^{-n^{-1/5}} < \epsilon &\iff 7^{-n^{-1/5}} > 1 - \epsilon\\ &\iff -n^{-1/5} > \log_7(1-\epsilon)\\ &\iff n^{-1/5} < \log_7\frac 1{1 - \epsilon}\\ &\iff n^{1/5} > \frac 1{\log_7 (1-\epsilon)^{-1}}\\ &\iff n > \left(\frac 1{\log_7 (1-\epsilon)^{-1}}\right)^5 \end{align*} If we let $N := 1 + \left\lceil (\frac 1{\log_7 (1-\epsilon)^{-1}})^5\right\rceil$, then $\left|{7^{-n^{-1/5}} -1}\right| < \epsilon$ for $n \ge N$.
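A quick numeric check of the $N$ produced by this derivation, for a few values of $\epsilon$:

```python
import math

# For a given eps in (0, 1), n >= N should force |7**(-n**(-1/5)) - 1| < eps.
def N_for(eps):
    log7 = lambda t: math.log(t) / math.log(7)
    return 1 + math.ceil((1 / log7(1 / (1 - eps))) ** 5)

for eps in [0.5, 0.1, 0.01]:
    N = N_for(eps)
    assert 1 - 7 ** (-N ** (-1 / 5)) < eps          # holds at N
    assert 1 - 7 ** (-(N + 10) ** (-1 / 5)) < eps   # and stays true beyond N
    print(eps, N)
```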
H: which of the spaces are Locally Compact [NBHM_2006_PhD Screening Test_Topology] which of the spaces are Locally Compact $A=\{(x,y): x,y \text{ odd integers}\}$ $B=\{(x,y): x,y\text{ irrationals}\}$ $C=\{(x,y): 0\le x<1, 0<y\le 1\}$ $D=\{(x,y): x^2+103xy+7y^2>5\}$ A topological space $X$ is locally compact if every point has a neighborhood which is contained in a compact set. Well, I can prove that $\mathbb{Q}$ is not locally compact, so 1 and 2 are not locally compact, and 3 is clearly locally compact. I am not sure about 4. Thank you. AI: A subset of $\mathbf R^2$ is compact iff it is closed and bounded (by Heine-Borel theorem), so a subspace of $\mathbf R^2$ is locally compact iff a small enough closed ball around any given point is still closed as a subset of $\mathbf R^2$ (because compactness is absolute, and of course it is bounded). This should be enough to solve the problem by yourself. As for the answers, 1 is locally compact as martini said, 2 is indeed not locally compact (but it does not follow from the fact $\mathbf Q$ is not locally compact), 3 is locally compact, and 4. is locally compact. As an additional hint for 4.: notice that it is an open subset of $\mathbf R^2$.
H: Vector Autoregression Algebra, $M_t$, $L$ In the paper here http://www.ems.bbk.ac.uk/for_students/bsc_FinEcon/fin_economEMEC007U/VAR.pdf It shows VAR(p) model as $$ W_t = A_1W_{t-1} + A_2W_{t-2} + ... + A_pW_{t-p} + \epsilon_t $$ But then it makes a simplification and says the formula above equals to $$ (I - A_1L - A_2L^2 - ... - A_pL^p)W_t = \epsilon _t $$ How does the author make this switch? Are all $W_t$ vectors somehow combined to give $L$? But then why is taking 2nd, 3rd, etc powers come into play? Thanks, AI: A previous article in the same site-series, devoted to (scalar) AR-MA processes, explains that $L$ is the delay (lag) operator, $LW_t = W_{t-1}$: http://www.ems.bbk.ac.uk/for_students/bsc_FinEcon/fin_economEMEC007U/arma.pdf Applying it repeatedly gives $L^2W_t = W_{t-2}$, ..., $L^pW_t = W_{t-p}$, which is why the higher powers appear; writing each $A_iW_{t-i}$ as $A_iL^iW_t$ and moving those terms to the left-hand side yields the second equation. Further, when one applies the Z-transform (or, basically equivalent, Generating Functions), the $n-$delay operator maps to $z^{-n}$ (or $z^n$).
H: What does the notation "$\Omega \subset \mathbb{R}^n$ is $C^1$" mean? In my calculus 2 lecture notes, we have the following definition: A region $\Omega \subset \mathbb{R}^n$ is $C^1$ (or $C_{pw}^1$ or $C^k$ respectively), if for each point $x_0 \in \partial \Omega$ there exist coordinates $(x',x^n) \in \mathbb{R}^{n-1} \times \mathbb{R}$ around $x_0 = (0,x_0^n)$, a number $d>0$, an open cuboid $Q' \subset \mathbb{R}^{n-1}$ around $x_0' = 0$ and a function $\psi \in C^1(\overline{Q'})$ (or $\psi \in C_{pw}^1(\overline{Q'})$ or $\psi \in C^k(\overline{Q'})$ respectively), where $0 \leq \psi \leq 2d$ and $\psi(0) = d = x_0^n$ such that $$\Omega \cap (Q' \times [0,2d]) = \{(x',x^n) \in \mathbb{R}^n; x' \in Q', 0 \leq x^n < \psi(x')\} = \Omega_\psi.$$ As this notation is being used quite often later on and I have no idea at all what it tells me about the region $\Omega$, I ask for help. Can anyone simplify this definition or tell me how I have to imagine such a region? As examples for such regions, I am given: $B_1(0) \subset \mathbb{R}^2$ is $C^k$ for all $k \geq 0$. A $n$-cuboid $Q$ is $C_{pw}^1$. I tried to just "apply" the definition to the first example to show that the unit circle is $C^k$ for all $k$ but I am not quite sure about this. The definition tells me that for all $x_0 \in \partial \Omega$, I should be able to find such coordinates, $d$, $Q'$ and $\psi$, however it also states that $d = x_0^n > 0$. Let's pick $x_0 = (0,1)$, then $d=1$ and $Q' = (-\varepsilon, \varepsilon)$ for some $\varepsilon >0$. Now I want $$B_1(0) \cap ((-\varepsilon,\varepsilon) \times [0,2]) = \{(x',x^n) \in \mathbb{R}^2: x' \in Q', 0 \leq x^n < \psi(x')\}$$ for some $\psi \in C^k([-\varepsilon,\varepsilon])$ with $\psi(0)=1$. Can I just choose $\psi(x) = \sqrt{1-x^2}$? 
If my attempt to apply the definition to the unit circle went horribly wrong, please tell me because I really have no idea what this definition actually tells me and what properties I can conclude from a region being $C^k$. In case what I did was right, it seems to me that a region is $C^k$, iff it is "locally" simple with respect to the last coordinate (if this makes sense, we only had simple regions in $\mathbb{R}^2$) where the lower border is $0$. To sum up my questions: What does this notation mean? Is there a simpler definition, maybe a less abstract definition? How can I show that $B_1(0)$ is $C^k$ or is my attempt correct? Is there anything else I need to know about this notation? Thanks for any help in advance. AI: An equivalent way of saying the same thing: If $x_0$ is any point on the boundary of $\Omega$, then you can rotate $\Omega$ in such a way that the portion of the boundary of $\Omega$ in some ball centered at the rotated $x_0$ can be written as the graph of a $C^1$ function. So if $\Omega$ is the unit sphere and $x_0$ is on the boundary of $\Omega$, you can rotate your $x_0$ to make it $(0,0,...,1)$, and then near $(0,0,...,1)$ the boundary of $\Omega$ has equation $x_n = \sqrt{1 - x_1^2- x_2^2-...-x_{n-1}^2}$, which is a $C^1$ function. It turns out you never actually have to rotate the domain; even in the original coordinates there's always some collection of $n-1$ variables where the portion of the graph near $x_0$ can be written as a graph as a function of those $n-1$ variables; i.e. there's always some $i$ such that near $x_0$ it's the graph of some $x_i = \phi(x_1,...,x_{i-1},x_{i+1},...,x_n)$. That's why the definition you have there is equivalent. I'm not familiar with the $pw$ notation, but it probably just means the boundary can be written as the union of finitely many pieces such that the above condition holds except at the boundaries of the pieces.
H: what are the possible values for integral let $\gamma$ be a closed continuously differentiable path in the upper half plane not passing through $i$. What are the possible values of the integral $$\frac{1}{2\pi i}\int_{\gamma}\frac{2i}{z^2+1}dz$$ well the integral can be split by partial fractions as $$\frac{1}{2\pi i}\int_{\gamma}\frac{2i}{(z+i)(z-i)}dz=\frac{1}{2\pi i}\int_{\gamma}\frac{dz}{z-i}-\frac{1}{2\pi i}\int_{\gamma}\frac{dz}{z+i}$$ The second integral is $0$ by Cauchy's theorem, since $1/(z+i)$ is analytic in the (simply connected) upper half plane, while the first integral is just $n(\gamma,i)$, the winding number of $\gamma$ around $i$. What more can I say? Thank you. AI: The answer depends on two things: (1) if $\gamma$ is a Jordan curve (what is its winding number?); (2) does $i$ lie in the bounded region surrounded by $\gamma$? It seems to me that the point $-i$ is completely useless, since you assume that $\gamma$ is in the upper half-plane. The answer seems to be: the possible values of $$ \frac{1}{2\pi i}\int_\gamma \frac{dz}{z-i}. $$ I think there is nothing more to say, as you noticed.
H: If $V \times W$ with the product norm is complete, must $V$ and $W$ be complete? Let $V,W$ be two normed vector spaces (over a field $K$). Then their product $V \times W$ with the norm $\|(x,y)\| = \|x\|_V + \|y\|_W$ is a normed space. Using this norm it's easy to show that if $V,W$ are complete then so is $V \times W$. To see this, let the limit of the sequence $(x_n , y_n)$ be $(x,y) = (\lim x_n, \lim y_n)$. Then for $n$ large enough, both $\|x - x_n\|_V$ and $\|y - y_n\|_W$ are less than $\varepsilon / 2$ and hence $\|(x,y) - (x_n, y_n)\|< \varepsilon$. The other direction does (probably) not hold. Can someone show me an example of a space $V \times W$ that is complete but either $V$ or $W$ (or both) are not? AI: I believe that the spaces $V$ and $W$ must be complete whenever $V\times W$ is complete. Closed subspace of a complete normed space is complete. The space $V$ is isometrically isomorphic to the closed subspace $V\times\{0\}$ of $V\times W$.
H: maximize the function of three variable I am completely stuck on the problem: Let $f$ be a function of three variables having continuous partial derivatives. For each direction vector $h=(h_1,h_2,h_3)$ such that $h_1^2+h_2^2+h_3^2=1$, let $D_hf(x,y,z)$ be the directional derivative of $f$ along $h$ at $(x,y,z)$. For a point $(x_0,y_0,z_0)$ at which $\nabla f(x_0,y_0,z_0)$ is not zero, maximize $D_hf(x_0,y_0,z_0)$ as a function of $h$ AI: The maximum of the directional derivative at a point is reached in the direction of the gradient of the function at that point. $$ D_hf(x_0,y_0,z_0)=h\cdot\nabla f(x_0,y_0,z_0)=\|\nabla f(x_0,y_0,z_0)\|\cos(\langle\nabla f(x_0,y_0,z_0),h\rangle). $$ The maximum is reached when $\cos(\langle\nabla f(x_0,y_0,z_0),h\rangle)=1$, that is, when $h$ is in the direction of $\nabla f(x_0,y_0,z_0)$, so that $$ h=\frac{\nabla f(x_0,y_0,z_0)}{\|\nabla f(x_0,y_0,z_0)\|}. $$
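A numerical illustration of this fact (the function $f(x,y,z)=x^2+yz$ and the point $(1,2,3)$ are my own example choices): no random unit vector $h$ gives a directional derivative $h\cdot\nabla f$ larger than $\|\nabla f\|$.

```python
import math, random

# f(x,y,z) = x^2 + y*z, so ∇f = (2x, z, y); at (1,2,3) this is (2, 3, 2).
grad = (2.0, 3.0, 2.0)
gnorm = math.sqrt(sum(g * g for g in grad))  # D_h f along ∇f/|∇f| equals |∇f|

random.seed(0)
max_d = -math.inf
for _ in range(10_000):
    v = [random.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    d = sum((c / n) * g for c, g in zip(v, grad))  # D_h f = h · ∇f
    max_d = max(max_d, d)

print(max_d, gnorm)  # max_d approaches but never exceeds |∇f| = sqrt(17)
```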
H: Finding Product of Scattered Variables Hi I came across the following question where I need to find $$mk$$ from $$ (x-2) (x+k) = x^2 + mx - 10 $$ The answer is 15. Any suggestions on how I could do that? AI: By Vieta's formulas, the sum of the roots $2 + (-k)$ equals $-m$, and the product of the roots $-2k$ equals $-10$. Therefore $k=5$ and $m=3$, so $mk=15$.
H: Frechet derivative question I'm trying to show that a map $f$ between Banach spaces $X$ and $Y$ is Frechet differentiable at a point $u$. To do this, it is enough to calculate its Gateaux derivative at $u$ (call it $df(u)$) and show that $df(u)$ is continuous: so for every $\epsilon$, there exists a $\delta$ such that if $$\lVert h_1 - h_2 \rVert_X \leq \delta,$$ then $$\lVert df(u)h_1 - df(u)h_2 \rVert_Y < \epsilon.$$ Is that correct to show Frechet differentiability? Secondly, my Banach spaces $X$ and $Y$ are Hölder spaces. Specifically, let $X = C^{k, \alpha}(S)$, where $S$ is the closure of a set. Let $g$ be a smooth function in its arguments (so it's continuous). Since $S$ is closed and the norm is over a space of that set, can I say that $g$ is bounded above and hence $$\lVert gh_1 - gh_2 \rVert_X \leq C\rVert h_1 - h_2 \rVert_X$$ for some constant? Thanks AI: Re: 1st question. You may be confusing two kinds of continuity. The Gateaux derivative is a linear functional from $X$ to $Y$, so one can talk about the continuity of this linear functional (which is usually a part of the definition of the Gateaux derivative). But the fact that it is a continuous linear functional does not make it a Frechet derivative. The additional requirement in the definition of the Frechet derivative is that the functional $df(u)h$ approximates $f(u+h)-f(u)$ in a uniform way: specifically, $\|df(u)h-(f(u+h)-f(u))\|/\|h\|\to 0$ as $\|h\|\to 0$. It is true that a continuous Gateaux derivative is a Frechet derivative. But the continuity here is understood with respect to the point $u$ at which the derivative is taken: that it, $\|df(u_1)-df(u)\|\to 0$ when $u_1\to u$ in the norm. Re: 2nd question: $g$ being bounded above is not a sufficient reason for your conclusion. You should get your hands dirty with the definition of the Hölder norm, and use the smoothness of $g$ in the process.
H: Recursive digit-sum Let the recursive digit-sum(R.D.) be defined as: continue taking the sum of digits until it becomes <10. For example, the digit-sum of 5987 = 29, the digit-sum of 29 =11 So, R.D. of 5987 is 2. Prove that the value of R.D. recurs after each 9 numbers i.e., R.D. of any natural numbers of the form (9.a+b) where 0≤b<9 are same. AI: The R.D. of a positive integer $n$ is determined by $n\pmod 9$: since $10\equiv 1\pmod 9$, a number is congruent to its digit-sum mod $9$, so the R.D. equals $1+((n-1)\bmod 9)$, i.e. $n\bmod 9$ except that positive multiples of $9$ have R.D. $9$ rather than $0$. Either way it depends only on $n\bmod 9$, which repeats after every $9$th number, so the R.D. of $n$ and of $9k+n$ are the same.
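A small sketch of the recursive digit-sum and the period-9 claim:

```python
def rd(n):
    """Recursive digit-sum of a positive integer: keep summing the
    decimal digits until the result is a single digit."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

assert rd(5987) == 2                                   # 5987 → 29 → 11 → 2
# R.D. repeats with period 9 ...
assert all(rd(n) == rd(n + 9) for n in range(1, 2000))
# ... because rd(n) = 1 + (n - 1) % 9
assert all(rd(n) == 1 + (n - 1) % 9 for n in range(1, 2000))
```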
H: The completion of a noetherian local ring is a complete local ring We have defined the completion of a noetherian local ring $A$ to be $$\hat{A}=\left\{(a_1,a_2,\ldots)\in\prod_{i=1}^\infty A/\mathfrak{m}^i:a_j\equiv a_i\bmod{\mathfrak{m}^i} \,\,\forall j>i\right\}.$$ I have a slight problem trying to understand the proof that then $\hat{A}$ is a complete local ring with maximal ideal $\hat{\mathfrak{m}}=\{(a_1,a_2,\ldots)\in\hat{A}:a_1=0\}$. Proof. If $(a_1,a_2,\ldots)\in\hat{\mathfrak{m}}$, then $a_i\equiv 0\bmod{\mathfrak{m}}$ for all $i$, i.e., $a_i\in\mathfrak{m}$. Hence $$\hat{\mathfrak{m}}^i=\left\{(a_1,a_2,\ldots)\in\hat{A}:a_j=0\,\,\forall j\leq i\right\}.$$ So the canonical map $\hat{A}\to A/\mathfrak{m}^i$, $(a_1,a_2,\ldots)\mapsto a_i$, is surjective with kernel $\hat{\mathfrak{m}}^i$. Thus $\hat{A}/\hat{\mathfrak{m}}^i\cong A/\mathfrak{m}^i$. In particular, $\hat{\mathfrak{m}}$ is a maximal ideal. But why is it the only one in $\hat{A}$? If $(a_1,a_2,\ldots)\not\in\hat{\mathfrak{m}}$, we have $a_1\neq 0$, hence $a_1\not\in\mathfrak{m}$, hence it is a unit. By the defining property of the completion, $a_j$ is a unit for all $j$. So I could choose a candidate for the inverse of $(a_1,a_2,\ldots)$ by choosing inverse elements of the $a_j$. Why would this candidate be in $\hat{A}$ then, i.e. how can I choose it properly such that the congruences on the right hand side of the definition of the completion would be fulfilled? Regards! AI: Inverses are unique. [If $R$ is a ring in which $y, z$ are inverses for $x$ then $y = y(xz) = (yx)z = z$.] So if $b_2 \in A/\mathfrak m^2$ is the inverse for $a_2$ then its homomorphic image in $A/\mathfrak m$ must be the inverse of $a_1$, and so on. You could think of this in a different way: any element not in $\hat{\mathfrak m}$ can be written as $a + x$ for some $a \in A \setminus \mathfrak m$ and $x \in \hat{\mathfrak m}$, and hence as $a(1 - y)$ for some $y \in \hat{\mathfrak m}$. 
Then $a^{-1}(1 + y + y^2 + \cdots )$ is an inverse.
H: If $F$ is a formally real field then is $F(\alpha)$ formally real? Let us call a field $F$ $\textit{ formally real }$ if $-1$ is not expressible as a sum of squares in $F$. Now suppose $F$ is a formally real field and $f(x)\in F[x]$ be an irreducible polynomial of odd degree and $\alpha $ is a root of $f(x)$. Is it true that $F(\alpha)$ is also formally real ? AI: Yes. This is a classical result of Artin and Schreier, and the standard proof is by induction on $n=\deg f$ (the case $n=1$ is trivial). Suppose $F(\alpha)$ is not formally real, so $-1=\sum_i g_i(\alpha)^2$ with $g_i\in F[x]$ and $\deg g_i<n$. Then $f$ divides $1+\sum_i g_i^2$ in $F[x]$, say $1+\sum_i g_i^2=fh$. The left-hand side has even degree at most $2n-2$: its leading coefficient is a sum of nonzero squares in $F$, which cannot vanish because $F$ is formally real (if $\sum c_i^2=0$ with $c_1\neq 0$, then $-1=\sum_{i\ge 2}(c_i/c_1)^2$). Hence $h$ has odd degree at most $n-2$, so $h$ has an irreducible factor $h_0$ of odd degree. If $\beta$ is a root of $h_0$, then $-1=\sum_i g_i(\beta)^2$ shows $F(\beta)$ is not formally real, where $h_0$ is irreducible of odd degree smaller than $n$; this contradicts the induction hypothesis.
H: Calculate the normal unit vector for scalar function In theory, if I have a certain function I can get its normal unit vector by using the gradient of it. $$\hat{f} = \dfrac{\nabla f}{|| \nabla f ||}$$ Example (correction from answer): $$ z = 2 -x -y$$ $$ f(x,y,z)= z + x + y -2 $$ $$ \nabla f(x,y,z)= \hat{i} + \hat{j} + \hat{k}$$ $$ \dfrac{\nabla f}{|| \nabla f ||}= \dfrac{1}{\sqrt{3}} (\hat{i} + \hat{j} + \hat{k})$$ Is that correct? what about this example: $$ z = \sqrt{x^2+y^2} $$ $$ f(x,y,z) = \sqrt{x^2+y^2} - z $$ $$ \nabla f(x,y,z)= \dfrac{x}{\sqrt{x^2+y^2}} \hat{i} + \dfrac{y}{\sqrt{x^2+y^2}} \hat{j} - \hat{k}$$ $$ \dfrac{\nabla f}{|| \nabla f ||}= \dfrac{\dfrac{x}{\sqrt{x^2+y^2}} \hat{i} + \dfrac{y}{\sqrt{x^2+y^2}} \hat{j} - \hat{k}}{\sqrt{ \left(\dfrac{x}{\sqrt{x^2+y^2}}\right)^2 + \left(\dfrac{y}{\sqrt{x^2+y^2}}\right)^2 + (-1)^2 }}$$ $$ \dfrac{\nabla f}{|| \nabla f ||}= \dfrac{1}{\sqrt{2}}\left(\dfrac{x}{\sqrt{x^2+y^2}} \hat{i} + \dfrac{y}{\sqrt{x^2+y^2}} \hat{j} - \hat{k}\right)$$ AI: No, this is not correct. You must first write $f(x,y,z)=x+y+z-2$. Then, calculate $\nabla f=\vec i+\vec j+\vec k$ which gives $$\hat{f} = \dfrac{\nabla f}{|| \nabla f ||}=\frac{1}{\sqrt 3}(\vec i+\vec j+\vec k)$$ This is the required normal unit vector.
H: Is using the - symbol with the Associative Law of multiplication invalid? I was trying to prove that $-(x + y) = -x - y$ and as you can see in the image below, I took the liberty of using the $-$ symbol as a number and applying the associative law with it. Is it kosher in all rigorousness given the axioms professional mathematicians use? AI: $$(x+y)+(-x-y)=(y+x)+(-x-y)=y+(x-x)+(-y)=y+0+(-y)=0$$ So $(x+y)$ is the additive inverse of $(-x-y)$. Hence $-(x+y)=-x-y$
H: Possible cup product structures on a manifold I am studying for a qualifying exam, and I came across this problem: Let $M$ be a closed orientable connected 4-manifold with $H^1(M) = H^3(M) = 0$ and $H^2(M) \cong H^4(M) \cong \mathbb Z$. What are the possible cup product structures on $H^*(M)$? My thoughts: Just using properties of graded rings, we see that the product will be determined by $\alpha^2 \in H^4$, where $\alpha$ is a generator of $H^2$. If we fix $\beta$ a generator of $H^4$, then $\alpha^2 = n\beta$, so it seems like there is a distinct structure for each $n \in \mathbb Z$. My question is: is there anything special about the cohomology ring $M$ that restricts the possibilities further? Somehow my answer seems too easy. AI: I will denote by $LH^*$ the free part of the cohomology ring. (That means that $LH^r(M)$ is $H^r(M)/\mathrm{torsion part}$.) For a connected 4-manifold, the symmetric form $I : LH^2(M) \times LH^2(M) \to \mathbb Z$ defined by $\alpha \smile \beta = I(\alpha, \beta) [M]$ is nondegenerate (that's one of the consequences of Poincaré duality). By definition, this means that the map $LH^2(M) \to \mathrm{Hom}(LH^2(M),\mathbb Z)$ is a bijection (in more pedestrian terms, the matrix of $I$ has determinant $\pm 1$). So, in your case, $n$ is either $1$ or $-1$. Both cases happen: you get $n=1$ with $\mathbb P^2(\mathbb C)$ and $n=-1$ with the same manifold with the orientation reversed.
H: Relation between metrics Let $$\eqalign{ & d\left( {x,y} \right) = \mathop {\max }\limits_{1 \leqslant i \leqslant n} \left\{ {\left| {{x_i} - {y_i}} \right|} \right\} \cr & d'\left( {x,y} \right) = \sqrt {\sum\limits_{i = 1}^n {{{\left( {{x_i} - {y_i}} \right)}^2}} } \cr & d''\left( {x,y} \right) = \sum\limits_{i = 1}^n {\left| {{x_i} - {y_i}} \right|} \cr} $$ for any two points $x,y \in \Bbb R^n$. How to prove the following holds? $$\eqalign{ & d\left( {x,y} \right) \leqslant d'\left( {x,y} \right) \leqslant \sqrt n \cdot d\left( {x,y} \right) \cr & d\left( {x,y} \right) \leqslant d''\left( {x,y} \right) \leqslant n \cdot d\left( {x,y} \right) \cr} $$ I think I got the second one: It is trivial that $$\mathop {\max }\limits_{1 \leqslant i \leqslant n} \left\{ {\left| {{x_i} - {y_i}} \right|} \right\} \leqslant \sum\limits_{i = 1}^n {\left| {{x_i} - {y_i}} \right|} $$ Now let $k$ be the integer such that $$d\left( {x,y} \right) = \left| {{x_k} - {y_k}} \right|$$ Then for each $1 \leq i \leq n$ we have that $$\left| {{x_i} - {y_i}} \right|\leq \left| {{x_k} - {y_k}} \right|$$ So summing from $1$ to $n$ one gets: $$d''\left( {x,y} \right) \leqslant n \cdot d\left( {x,y} \right)$$ AI: For the second result, use: $$ \sum_{i=1}^n |x_i-y_i|\le \sum_{i=1}^n \max_i |x_i-y_i|=n\max_i|x_i-y_i| $$ and $$ \sum_{i=1}^n |x_i-y_i|\ge |x_j-y_j|, \ \text{for each}\ j $$ Similar inequalities will establish your first result: $$ \sum_{i=1}^n |x_i-y_i|^2\le \sum_{i=1}^n \max_i |x_i-y_i|^2=n(\max_i |x_i-y_i|)^2 $$ and $$ \sum_{i=1}^n |x_i-y_i|^2\ge |x_j-y_j|^2, \ \text{for each}\ j $$ Taking square roots in these last two inequalities gives the first chain.
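An empirical check of both chains of inequalities on random points of $\mathbb R^n$:

```python
import math, random

# Verify d <= d' <= sqrt(n)*d and d <= d'' <= n*d on random vectors.
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 6)
    x = [random.uniform(-10, 10) for _ in range(n)]
    y = [random.uniform(-10, 10) for _ in range(n)]
    diffs = [abs(a - b) for a, b in zip(x, y)]
    d = max(diffs)                                # sup metric
    d1 = math.sqrt(sum(t * t for t in diffs))     # Euclidean metric
    d2 = sum(diffs)                               # taxicab metric
    assert d <= d1 <= math.sqrt(n) * d + 1e-12
    assert d <= d2 <= n * d + 1e-12
```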
H: Committees that share exactly one member A club of $n$ members is organized into four committees following two rules: Each member belongs to exactly two committees, and each pair of committees has exactly 1 member in common. Find all possible values of $n$. AI: If each member must belong to two committees, he or she must be the member in common to those two committees. There are 6 pairs of committees, requiring 6 common memberships and therefore 6 members. Any more members joining would give his or her two committees two common members. Therefore, I would say only $n=6$.
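The argument above can be checked by construction: with 4 committees there are $\binom{4}{2}=6$ pairs, and assigning each of 6 members a distinct pair satisfies both rules.

```python
from itertools import combinations

# Give each of 6 members a distinct pair of the 4 committees.
committees = {c: set() for c in range(4)}
for member, pair in enumerate(combinations(range(4), 2)):
    for c in pair:
        committees[c].add(member)

# Each pair of committees shares exactly one member.
shared = [len(committees[a] & committees[b])
          for a, b in combinations(range(4), 2)]
print(shared)  # → [1, 1, 1, 1, 1, 1]
```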
H: Is there a sequence in $(0,1)$ such that the product of all its terms is $\frac{1}{2}$? Is there a sequence in $(0,1)$ such that the product of all its terms is $\frac{1}{2}$? AI: If you take any sequence $a_1,a_2,a_3,\ldots$ whose sum is $\log_b (1/2)$, then $b^{a_1}, b^{a_2}, b^{a_3},\ldots$ is a sequence whose product is $1/2$. Later note: Notice that $\frac 1 2 + \frac 1 4 + \frac 1 8 + \frac 1 {16} + \cdots = 1$. If you multiply every term by $\log_b \frac 1 2$ then you get a series whose sum is $\log_b \frac 1 2$. Still later note: If $b>1$, then $\log_b(1/2)<0$, and $b^{a_n}$ will be in $(0,1)$ if $a_n<0$.
H: How to calculate this conditional probability There's an equation in my script, which I do not understand. Let $(B_t)$ be a $d$-dimensional Brownian Motion and $\Gamma\in\mathcal{B}(\mathbb{R}^d)$, $t\ge s$ the equation is $$P(B_t\in\Gamma | B_s)=\frac{1}{\sqrt{(2\pi)^d(t-s)^d}}\int_\Gamma \exp{\left(-\frac{|z-B_s|^2}{2(t-s)}\right)}dz$$ A reference why this is true would be appreciated. I think there is a Theorem which tells us how to calculate such an expression. Just for completeness, the only result I know is: Let $\mathcal{G}\subset\mathcal{F}$ a $\sigma$-field, $X$ $\mathcal{G}$ measurable and $Y$ independent of $\mathcal{G}$. For every $F:S_1\times S_2\to[0,\infty]$ which is $\mathcal{S}_1\times\mathcal{S}_2$ measurable we have $$E[F(X,Y)|\mathcal{G}]=E[F(x,Y)]|_{x=X(\omega )}=g(X(\omega ))$$ where $g(x):=E[F(x,Y)]$. However I do not see how to apply this in this situation. AI: Apply exactly the result you quote, with $\mathcal{G}=\sigma(B_s)$, $X=B_s$, $Y=B_t-B_s$ and $F(x,y):=\chi_\Gamma(x+y)$. All the conditions are satisfied: $X$ is $\sigma(B_s)$-measurable, and by the defining properties of Brownian motion $Y$ is independent of $\sigma(B_s)$ with $Y\sim\mathcal{N}(0,(t-s)I_d)$. Since $F(X,Y)=\chi_\Gamma(B_t)$, the result gives $$P(B_t\in\Gamma\mid B_s)=E[F(X,Y)\mid\sigma(B_s)]=g(B_s),\qquad g(x)=E[\chi_\Gamma(x+Y)]=P(x+Y\in\Gamma),$$ and writing $P(x+Y\in\Gamma)$ as an integral of the Gaussian density of $Y$ gives exactly the right-hand side of your equation.
H: Which property should be applied here? I am having a problem with the following question: If $$N = 3^P { } $$ and $$M=P-1$$ Then in terms of M what is $$ \frac{3}{N}=?$$ Any suggestions on which properties to apply ? AI: Simply substitute and use exponents' laws: $$N=3^P\,,\,M=P-1\Longrightarrow \frac{3}{N}=\frac{3}{3^P}=\frac{1}{3^{P-1}}=\frac{1}{3^M}$$
H: Why is the fundamental group of a prime, reducible 3-manifold $\mathbb{Z}$? I've read in a paper that if $M$ is a prime, reducible $3$-manifold, then $\pi_{1}(M) \cong \mathbb{Z}$. Can anyone explain why this is true? Thanks in advance. AI: I'm probably assuming $M$ to be orientable. Reducible means that there is an essential sphere $S \subseteq M$. Two cases happen : either that sphere disconnects $M$, and $M$ is decomposable as a nontrivial connected sum (so it is not prime) or $M$ is homeomorphic to $S^2 \times S^1$. So, $S^2 \times S^1$ is the only prime reducible $3$-manifold.
H: A maximal ideal is always a prime ideal? A maximal ideal is always a prime ideal, and the quotient ring is always a field. In general, not all prime ideals are maximal. In $2\mathbb{Z}$, $4 \mathbb{Z} $ is a maximal ideal. Nevertheless it is not prime because $2 \cdot 2 \in 4\mathbb{Z}$ but $2 \notin 4\mathbb{Z}$. What is it that I misunderstand? AI: As Thomas points out, $2\mathbb Z$ is not a "ring", since it does not contain any identity element $1.$ It is true that every maximal ideal of a commutative ring with identity is prime.
H: Expectation of absolute value of a function Let $x$ be a real valued random variable taking values $a_1,\ldots, a_n$. Let $\Pr(x=a_i)=p_i$. Let $f$ be a real valued function defined on $a_1, \ldots, a_n$ It is known that $$ E(f(x))=\sum_{i=1}^nf(a_i)p_i. $$ Would the same formula be true for $E(|f(x)|)$, i.e. $$ E(|f(x)|)=\sum_{i=1}^n|f(a_i)|p_i? $$ Thank you. AI: Yes. Let $ g(x) := |f(x)| $ be defined on $ a_1, \dots, a_n $. The composition of two real valued functions is a real valued function, so the same formula applies to $g$: $$ E(|f(x)|)=E(g(x))=\sum_{i=1}^n g(a_i)p_i=\sum_{i=1}^n |f(a_i)|p_i. $$
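A tiny numerical sketch (the values, probabilities and $f$ below are arbitrary example choices); note that in general $E|f(x)|\neq |Ef(x)|$:

```python
# E|f(X)| for a discrete X is sum |f(a_i)| p_i, i.e. the same formula
# applied to g = |f|.
a = [-2, 0, 3]          # values of X
p = [0.5, 0.2, 0.3]     # P(X = a_i)
f = lambda t: t - 1

E_f = sum(f(ai) * pi for ai, pi in zip(a, p))
E_abs_f = sum(abs(f(ai)) * pi for ai, pi in zip(a, p))
print(E_f, E_abs_f)  # E f(X) = -1.1, but E|f(X)| = 2.3
```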
H: Bound on bounded functions Let $C[0,1]$ be the space of continuous and nondecreasing functions with the sup norm. Moreover, let $f:[0,1]\rightarrow \mathbb{R}$ be continuous and positive, i.e., $f(s)>0,\,s\in[0,1]$. Take any two elements $z,h$ in $C$. I would like to put an upper bound on the following expression: $$ \int_{0}^{1}f(z(x))h(x)dx $$ that is independent of $z$ and $h$, i.e., I would like to claim that there exists some $M>0$ such that for every $z,h\in C$, $$ \int_{0}^{1}f(z(x))h(x)dx\leq M $$ So far, I know that since $f$ is continuous, positive and defined on a compact set, $|f(s)|\leq \max_{s\in[0,1]} f(s)=M_{0}>0$, which leaves me with: $$ \int_{0}^{1}f(z(x))h(x)dx\leq M_{0}\left|\int_{0}^{1}h(s)ds\right| $$ But here is my question. I know that the last integral term is bounded, but this bound depends on the particular choice of $h$. How can I get rid (if possible) of this last restriction? That is, can I put a bound on the integral term that is independent of $h$? If not, what would I need to make it work? AI: You cannot put a bound on your integral that is independent of the choice of $z$ and $h$ (even if you get rid of the problem I mentioned in my comment) because of the following counter examples : Take $f(x) = 1$ everywhere, and take $h(x) = M$ everywhere. Then $$ \int_0^1 f(z(x)) h(x) dx = \int_0^1 M dx = M. $$ If you would've bounded your expression with no dependence on $z$ and $h$, your bound would need to be bigger than $M$ for any $M > 0$, which doesn't make sense. If you can allow a dependence on $h$ in the underlying problem (because I am assuming there is one), then perhaps there is something to work out. But if there is no underlying problem, then I believe there is no general bound to be expected at all. Hope that helps,
H: Element of a set? I have to say whether {2} is an element of the given sets. I'm reading {2} as if it were a subset, which would make the statement true for c, d and e only, correct? For f it isn't, because that is a nested subset, and a/b don't contain subsets. For each of the sets, determine whether {2} is an element of that set. a) {x ∈ R | x is an integer greater than 1} b) {x ∈ R | x is the square of an integer} c) {2,{2}} d) {{2},{{2}}} e) {{2},{2,{2}}} f ) {{{2}}} AI: First recall that $a$ is an element of $b$ if and only if $a\in b$. On the other hand, $a$ is a subset of $b$ if and only if for every $x$ such that $x\in a$, $x\in b$. My usual tip in this situation is to replace $\{2\}$ by $x$. Now consider the given sets: a and b are both sets of numbers, and $x$ is not an integer, nor a real number, so it is not in either a or b. On the other hand, c can be written as $\{2,x\}$, which means that $x$ is an element of this set. Similarly, d and e both have $x$ as an element. However, what about the set f? $\{\{\{2\}\}\}=\{\{x\}\}$, and $x$ is not an element of this set; it is rather an element of an element of this set.
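The element-versus-subset distinction can also be checked mechanically. A small Python sketch, using `frozenset` since ordinary `set`s are unhashable and cannot be nested:

```python
x = frozenset({2})               # stands for {2}

c = {2, x}                       # {2, {2}}
d = {x, frozenset({x})}          # {{2}, {{2}}}
e = {x, frozenset({2, x})}       # {{2}, {2, {2}}}
f = {frozenset({x})}             # {{{2}}}

assert x in c and x in d and x in e
assert x not in f                # {2} is only an element of an element of f
```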
H: Estimating the Gamma function to high precision efficiently? I know there are several approximations of the Gamma function that provide decent accuracy. I was wondering, how can I efficiently estimate specific values of the Gamma function, like $\Gamma (\frac{1}{3})$ or $\Gamma (\frac{1}{4})$, to a high degree of accuracy (unlike Stirling's approximation and other low-accuracy methods)? AI: Gourdon and Sebah's pages on "Numbers, constants and computation" are usually interesting for this kind of record (start by clicking on 'constants' at the left). They credit Shigeru Kondo and Steve Pagliarulo with having computed 10^10 digits of $\Gamma\left(\frac 14\right)$ and $\Gamma\left(\frac 13\right)$ in 2010 and 2009 (see here). Many methods for high precision are clearly exposed in the other pages of the first link (FFT, binary splitting and that kind of thing are simply explained, at least in the .ps or .pdf files). This paper of Alexander Yee concerning high-precision evaluation of $\pi$ also contains references to Kondo & Pagliarulo's record (he has a page of records too, but without $\Gamma$ I fear). In the special case $\Gamma\left(\frac k{24}\right)$ a quadratic algorithm is proposed in Weisstein's Gamma Function page (see around (99)). This method is the famous AGM method, and you may also find on page 6 of Sebah and Gourdon's postscript paper on Gamma (or here) that: $$\displaystyle \Gamma\left(\frac 14\right)^2=\frac{(2\pi)^{3/2}}{AGM(\sqrt{2},1)}$$ You may find the following derivation, using the complete elliptic integral of the first kind $\rm{K}$, in the Borweins' excellent book "Pi and the AGM": Hoping it helped anyway,
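The AGM identity quoted above is easy to verify even in ordinary double precision; here is a minimal Python sketch (the record-sized computations discussed above of course require arbitrary-precision arithmetic):

```python
from math import sqrt, pi, gamma

def agm(a, b, iterations=20):
    """Arithmetic-geometric mean; converges quadratically, so 20 steps suffice."""
    for _ in range(iterations):
        a, b = (a + b) / 2, sqrt(a * b)
    return (a + b) / 2

# Gamma(1/4)^2 = (2*pi)^(3/2) / AGM(sqrt(2), 1)
g14 = sqrt((2 * pi) ** 1.5 / agm(sqrt(2), 1))
assert abs(g14 - gamma(0.25)) < 1e-12  # agrees with the library Gamma
```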
H: Maximal color difference I have a picture consisting of a two-dimensional array of ordered triples (red, green, blue) of real numbers from 0 to 1. I'm looking for something like a norm on pictures which expresses the range of colors used. The idea is that a grayscale image should have norm 0 and an image with pure red, green, and blue should have norm 1. Here's the key part: A colorized image (black and red, for example, instead of black and white) should have norm 0 just like a grayscale image. So if all colors are linear combinations of two colors the norm is 0, and the extent to which a third basis is needed to represent the colors used (even if only for one pixel) is the norm. Any ideas on how to formalize this? My first instinct is to change bases to (hue, saturation, lightness) and look at the maximum difference of hues (mod 1). But then two points with colors $(\varepsilon, 0, 0)$ and $(0, \varepsilon/2, \varepsilon/2)$ would seem very distant where they actually represent colors which are very close (near-black). It seems natural at this point to transform the problem into one of geometry and look at the color bicone, but what is the most natural metric to use here? Euclidean? L1? Something else? Of course other approaches would be welcome. AI: Sounds like what you're looking for is how far the colours in the image, seen as points in the RGB cube, deviate from a one-dimensional manifold. In a grayscale image, all the colours lie on the line segment $(t,t,t)$ for $t \in [0,1]$. On a black-and-red image, they lie on $(t,0,0)$, and on a red-and-white image, on $(1,t,t)$. A sepia-toned image contains colours on a curve that passes through black, brown, cream, and white; whether you consider this one-dimensional depends on whether you're only looking for "flat" affine subspaces or general curved submanifolds of the RGB cube. Discovering curved manifolds in data is pretty hard, but the affine case is easy. 
If all the points lie close to a line, their covariance matrix will have only one large eigenvalue. The best-fitting line is the one that passes through the mean of the points and is parallel to the corresponding eigenvector. The sum of the remaining eigenvalues is the residual variance, which tells you how much the points deviate from this line. The square root of this quantity is something you might call the "residual standard deviation", and it behaves very much like the "norm" you're looking for. (See my previous answer for more on the relationship between the shape and dimensionality of a data set and the eigenvectors of its covariance matrix.)
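A sketch of this computation with NumPy (the grayscale and random test "images" below are synthetic, made up for illustration):

```python
import numpy as np

def residual_std(pixels):
    """Deviation of RGB points from their best-fitting line (PCA residual).

    pixels: array of shape (n, 3) with RGB values in [0, 1].
    """
    eigvals = np.linalg.eigvalsh(np.cov(pixels, rowvar=False))  # ascending
    # Variance not explained by the principal axis; clamp tiny negative noise
    return np.sqrt(max(eigvals[0] + eigvals[1], 0.0))

rng = np.random.default_rng(0)
t = rng.random(200)
gray = np.column_stack([t, t, t])   # all colours on the line (t, t, t)
rgb = rng.random((200, 3))          # colours scattered through the cube

assert residual_std(gray) < 1e-6    # essentially zero
assert residual_std(rgb) > 0.05     # clearly positive
```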
H: Can a convergent sum using only integers produce a complex result? We use this function to define the boundaries for the product in the denominator: $$f(n):=\frac{1}{8} \left(2 n (n+2)-(-1)^n+1\right)$$ We calculate the infinite sum: $$\sum _{n=1}^{\infty } \frac{1}{(f(n)+1)_{f(n+1)-f(n)}}$$ We get this complex number: $$0.61944 - 5.565802539025895\times 10^{-19}\, i$$ Is this a reasonable result? Note: The calculation takes a very long time in Mathematica, so if this is not reasonable, it's time to perform some debugging. AI: No, it is not reasonable to get a complex number when the terms of the sum are real: every partial sum is real, and a limit of real numbers is real (compare the imaginary parts of both sides). The tiny imaginary part, about $5.6\times 10^{-19}$, is floating-point noise from the numerical evaluation; the true value is real, so some debugging (or a higher working precision) is indeed in order.
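One way to debug is to recompute the sum independently with exact rational arithmetic, where no imaginary part can possibly appear. A Python sketch:

```python
from fractions import Fraction

def f(n):
    return (2 * n * (n + 2) - (-1) ** n + 1) // 8   # always an integer

def pochhammer(a, k):
    """Rising factorial (a)_k = a(a+1)...(a+k-1)."""
    out = Fraction(1)
    for i in range(k):
        out *= a + i
    return out

# Terms decay very fast, so a short partial sum already settles at 0.61944...
total = sum(Fraction(1) / pochhammer(f(n) + 1, f(n + 1) - f(n))
            for n in range(1, 40))
assert abs(float(total) - 0.61944) < 1e-4   # real, matching the real part above
```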
H: Determining an angle without trig ratios I am trying to solve the following problem: if O is the center of a circle with diameter 10 and the perimeter of triangle AOB is 16, then which is greater, x or 60? Now I know the triangle above is isosceles with two sides of length 5 (since the diameter is 10) and the third side of length 6, thus $2y+x=180$. However, I just can't figure out y or x. Are there any suggestions on how I could solve this problem without using trig ratios, or at least estimate whether x is greater than 60? AI: The largest side of a triangle is opposite the largest angle. So $\angle AOB$ at the centre is bigger than either of the other two angles of the triangle. It follows that $\angle AOB$ is bigger than $60^\circ$, and $x$ is less than $60^\circ$. If you wish to use more algebra, let $\angle AOB =w$. Then $w+x+x=180^\circ$. But $w\gt x$, so $x+x+x \lt 180^\circ$, and therefore $x \lt 60^\circ$.
H: Deriving an equation that satisfies many points Say I have a collection of points, for example the following: (1, 167), (2, 11), (3, 255), etc. Is it possible to construct an equation that satisfies all of them? I have a maximum of 32 points. AI: Given any $n$ points in the plane, none of which lie on the same vertical line as another, and with at least one lying off the $x$-axis (if they all lie on the $x$-axis, then the $0$ function works), there exists a unique polynomial of degree at most $n-1$ that passes through all of those points. Also, there are infinitely-many polynomials of any given higher degree passing through those points. In particular, say the points are $(x_1,y_1),...,(x_n,y_n)$ (where $x_i\neq x_j$ for $i\neq j$). For $1\leq i\leq n$, define the polynomial $$P_i(x):=\prod_{j\neq i}\frac{x-x_j}{x_i-x_j}.$$ These (and any of their non-0 scalar multiples) are clearly real polynomials of degree $n-1$, and it can be determined rather readily that for any $i,j\in\{1,...,n\}$, we have $$P_i(x_j)=\begin{cases}0 & i\neq j\\1 & i=j.\end{cases}$$ Consequently, we see that $$P(x):=\sum_{j=1}^ny_jP_j(x)$$ is a polynomial of degree at most $n-1$ such that $P(x_j)=y_j$ for all $1\leq j\leq n$. As it turns out, the above construction generalizes rather nicely to points in $\Bbb C^2$ (rather than $\Bbb R^2$), though we won't (necessarily) get a real polynomial in this way.
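This construction (Lagrange interpolation) is short to code up; a sketch using the three example points from the question:

```python
def lagrange(points):
    """Return a function evaluating the interpolating polynomial."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = float(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)   # the factor from P_i
            total += term
        return total
    return p

pts = [(1, 167), (2, 11), (3, 255)]
p = lagrange(pts)
assert all(abs(p(x) - y) < 1e-9 for x, y in pts)   # hits every given point
```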
H: How to count the possible ways? In a bag there are 10 indistinguishable balls. Four of them are white and $6$ are black. The balls are taken out one by one and put on the table as they are taken. In how many ways can we get at least $2$ consecutive white balls? When I got this problem, I immediately thought about the opposite event. Let $A$ be "get at least $2$ consecutive white balls". The opposite event will be "don't get any consecutive white balls". This means that each white ball has to be between black balls or at the ends of the sequence. There are $6$ black balls, so: _B_B_B_B_B_B_ There are $7$ positions that the $4$ white balls can occupy. My doubt is: is it correct that I "invented" $3$ more positions in the sequence of $10$ balls? The assumption was that if a position is not occupied by a white ball, it simply disappears. If this reasoning is correct, the final step is to find what is being asked. The total number of ways (without restrictions) is given by: $\frac{10!}{6! \cdot 4!}$ Then what is wanted is: $\frac{10!}{6! \cdot 4!}- {}^7C_{4}$ Thanks AI: You're right on track with this! Nicely done.
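The count can be confirmed by brute force over all $\binom{10}{4}=210$ placements of the white balls:

```python
from itertools import combinations
from math import comb

count = 0
for white in combinations(range(10), 4):      # positions of the 4 white balls
    if any(b - a == 1 for a, b in zip(white, white[1:])):
        count += 1                            # at least two whites adjacent

assert count == comb(10, 4) - comb(7, 4)      # 210 - 35 = 175
```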
H: $f$ strictly increasing does not imply $f'>0$ We know that if a function $f: [a,b] \to \mathbb{R}$ is continuous on $[a,b]$ and differentiable on $(a,b)$, and if $f'>0$ on $(a,b)$, then $f$ is strictly increasing on $[a,b]$. Is there any counterexample that shows the converse fails? I have been trying to come up with simple examples, but they all involve functions that are discontinuous or have derivative $f'=0$ somewhere, which does not agree with the hypothesis hmmm AI: Consider $f(x)=x^3$ on $[-1,1]$. It is strictly increasing, but has zero derivative at $0$.
H: For what value of x will it be less than 2 Is there a value of x for the following expression which will make it less than 2? The question is: which is greater, $$ \frac{3x+1}{x+1}$$ (given $$x\not=-1$$) or simply 2? According to the book there is not enough information to solve this problem, but I think the expression is greater than 2. Is there any value of x which makes it less than 2? AI: We know $x\neq -1$, so either $x+1>0$ or $x+1<0$. Let's consider these cases separately (since we're working with inequalities). If $x+1>0$, then $\frac{3x+1}{x+1}<2$ implies that $3x+1<2(x+1)=2x+2$, so $x<1$. In fact, any $-1<x<1$ will do the trick. If $x+1<0$, then $\frac{3x+1}{x+1}<2$ implies that $3x+1>2x+2$ (direction of inequality switched since we multiplied by a negative number), so $x>1$, but this is impossible, since $x+1<0$. Thus, $\frac{3x+1}{x+1}<2$ if and only if $-1<x<1$.
H: Two questions about continuity of function between topological spaces Let $X$ and $Y$ be topological spaces and suppose $f: X \to Y$ is continuous. If $f$ is continuous on $U \subset X$, will the restriction $f_U :U \to Y$ be continuous, if we consider $U$ to be a topological space of its own? My second question is given open sets $U, V \subset \mathbb{R^n}$ and continuous functions $f_1 : U \to \mathbb{R^n}$ and $f_2 : V \to \mathbb{R^n}$ Will the function $f_{U \cup V}: U \cup V \to \mathbb{R^n}$ defined in the obvious way be continuous? AI: 1) Yes, if $U$ has the subspace topology. The preimage of an open set $V$ in $Y$ under $f_U$ is just $f_U^{-1}(V)=f^{-1}(V) \cap U$, which is open by the definition of the subspace topology on $U$. This is essentially why the subspace topology is defined the way it is, so that restrictions of continuous maps are continuous. 2) For such a function to be defined, we need $f_1$ and $f_2$ to agree on $U \cap V$. In this case, the function is continuous. The preimage of an open set under $f_{U \cup V}$ is just the union of the preimages of that set under $f_1$ and $f_2$. There's nothing particular about $\mathbb{R}^n$ here. See the Pasting lemma for more general conditions.
H: Lipschitz continuity of a functional? Here is my problem. Let $C$ be the space of continuous and nondecreasing functions defined on $[0,1]$ and endowed with the sup norm. Take any $z\in C$ and consider the following: $$ U(x;z)=\int_{0}^{x}\left[1-\int_{s}^{1}F(z(\xi))f(\xi)d\xi\right]^{n-1}ds $$ where $F:[0,1]\rightarrow [0,1]$ is continuously differentiable and increasing; $F'=f$, $f(s)>0, s\in[0,1]$ and $f(s)=0,\,s\notin [0,1]$, $x\in[0,1]$, and $n>2$. Fix the value of $x$. I would like to show that $U$ is Lipschitz continuous with respect to $z$, i.e., I'd like to show that there is a constant $K>0$ such that: $$ ||U(x;z)-U(x;y)||\leq K||z-y|| \qquad z,y\in C $$ This is what I've done so far. For a fixed value of $x$ the above expression can be treated as a functional. I know that the Frechet derivative of this expression looks like the following: $$ \frac{dU(z)}{dz}=-(n-1)\int_{0}^{x}\left[1-\int_{s}^{1}F(z(\xi))f(\xi)d\xi\right]^{n-2}\left(\int_{s}^{1}f(z(\xi))f(\xi)h(\xi)d\xi\right)ds $$ where $h$ is the 'perturbation' function. Since $f$ is continuous and $f(s)>0$ if $s\in[0,1]$, it must be the case that: $$ \left|\frac{dU(z)}{dz}\right|\leq (n-1)M_0\int_{0}^{x}\left|\left[1-\int_{s}^{1}F(z(\xi))f(\xi)d\xi\right]^{n-2}\left(\int_{s}^{1}h(\xi)d\xi\right)\right|ds\\ \leq (n-1)M_0\int_{0}^{x}\left(\int_{s}^{1}|h(\xi)|d\xi\right)ds $$ where $M_0=\sup_{s\in[0,1]} |f(s)f(s)|$, and the second line follows because $f(z(s))f(s)\leq \sup_{s\in[0,1]}|f(z(s))f(s)|$ for any $z\in C$, and $\left[1-\int_{s}^{1}F(z(\xi))f(\xi)\right] \leq 1$ for any $z\in C$ and $s\in[0,1]$. I know that the perturbation $h$ is bounded because it belongs to $C$, but any such bound depends on the choice of the perturbation and hence, the bound of the whole expression will depend on $h$. Does this mean that I am not free to choose the overall bound that would allow me to show the existence of a Lipschitz constant $K$? Any help/suggestion/insight/reference is greatly appreciated!
AI: You could continue your estimates as follows: $$\left|\frac{dU(z)}{dz}\right|\leq (n-1)M_0\int_{0}^{x}\left(\int_{s}^{1}|h(\xi)|d\xi\right)ds \le (n-1)M_0\int_{0}^{x} (1-s)\|h\| ds \le (n-1)M_0 \|h\|$$ Hence, $U$ is Lipschitz with the Lipschitz constant at most $(n-1)M_0$.
H: Does (Infer $\phi$ from $\psi$) imply (Infer $\phi^L$ from $\psi^L$)? I am studying set theory on my own from Drake's famous book and I'm stuck on the (finitary) proof of the relative consistency of the Axiom of Choice. Is it true that if we were able to infer $\xi$ from $\zeta$, then we are able to infer $\xi^L$ from $\zeta^L$? Intuitively, of course, the rules of the predicate calculus should hold in $L$, since these are just the "minimum" of our human intuition, but, for example, one of these rules (from Shoenfield's book) is R) If $x$ is not free in $\psi$ then infer $\exists x \phi \rightarrow \psi$ from $\phi \rightarrow \psi$. which would become RL) If $x$ is not free in $\psi^L$ then infer $\exists x \in L \phi^L \rightarrow \psi^L$ from $\phi^L \rightarrow \psi^L$. So is R $\Rightarrow$ RL? It does not seem so obvious to me: maybe I am missing something. AI: In general when you relativize proofs to $\bf L$, you need to supplement each top-level formula $\phi$ in the proof with assumptions that its free variables are in $\bf L$. Otherwise you run into problems -- for example, $\phi\equiv\exists x.x=y$ is a theorem, but $\phi^L\equiv \exists x\in{\bf L}.x=y$ is not a theorem (in fact it is independent of ZFC). So you want RL to infer $$\tag B (y\in {\bf L} \land z\in {\bf L}\land \cdots) \to (\exists x\in {\bf L}.\phi^{\bf L})\to\psi^{\bf L} $$ from $$\tag A (x\in {\bf L}\land y\in {\bf L} \land z\in {\bf L}\land\cdots) \to \phi^{\bf L}\to\psi^{\bf L} $$ where $y, z\ldots$ are the free variables of $\phi$ and $\psi$ other than $x$. But that is actually easy enough. Here's how it works using the deduction theorem. Since we're aiming to prove (B), assume $y\in {\bf L}$, $z\in {\bf L}$ and so forth. Also temporarily assume $x\in{\bf L}$. Then from (A) we get $\phi^{\bf L}\to \psi^{\bf L}$, and we can now apply the ordinary R to get $(\exists x\phi^{\bf L})\to\psi ^{\bf L}$.
Discharging the assumption $x\in {\bf L}$, we get $$ x \in {\bf L} \to (\exists x\phi^{\bf L})\to\psi ^{\bf L} $$ where the only free $x$ is the one in $x\in {\bf L}$. We can instantiate this $x$ to $\varnothing$ to make the premise into $\varnothing \in{\bf L}$, which is easily provable. Then what we have is $$\tag{1} (\exists x.\phi^{\bf L})\to \psi^{\bf L}$$ It is also easy to prove $$\tag{2} (\exists x.x\in{\bf L}\land \phi^{\bf L})\to(\exists x.\phi^{\bf L})$$ independently of what ${\bf L}$ and $\phi^L$ are. Combining $(1)$ with $(2)$ gives $$(\exists x.x\in {\bf L}\land \phi^{\bf L})\to\psi^L$$ and we can then discharge our initial assumptions on $y, z, \ldots$ to get (B). In general you need an argument along these lines for every rule of inference and logical axiom in the style of logic you're working with. If you also have proofs of $\xi^{\bf L}$ for each set-theoretic axiom $\xi$, then you have enough tools to systematically convert a proof of an arbitrary sentence $\zeta$ into a proof of $\zeta^{\bf L}$.
H: How do I form this equation? If $A$ and $B$ are the root of the equation $3x^2-4x-9=0$, what is the equation whose roots are $(A+3)/(A-3)$ and $(B+3)/(B-3)$ AI: Recall that if $\alpha, \beta$ are roots of $ax^2 + bx + c$, then we have $\alpha + \beta = -\dfrac{b}a$ and $\alpha \beta = \dfrac{c}a$. Let $\alpha = \dfrac{A+3}{A-3}$ and $\beta = \dfrac{B+3}{B-3}$. Then we get that $$\alpha + \beta = \dfrac{A+3}{A-3} + \dfrac{B+3}{B-3} = \dfrac{AB + 3B - 3A - 9 + AB + 3A - 3B - 9}{AB - 3(A+B) + 9} = 2\dfrac{AB - 9}{AB - 3(A+B) + 9}$$ Similarly, $$\alpha \beta = \dfrac{AB + 3(A+B) + 9}{AB - 3(A+B) + 9}$$ But since $A$ and $B$ are roots of $3x^2 -4x - 9$, we have that $A+B = \dfrac43$ and $AB = -3$. Can you finish it off from here? Move the mouse over the gray area for the complete solution. $$\alpha + \beta = 2 \left(\dfrac{AB - 9}{AB - 3(A+B) + 9} \right) = 2 \times \dfrac{-3-9}{-3 - 3(4/3) + 9} = 2 \times \dfrac{-12}{2} = -12$$ $$\alpha \beta = \dfrac{-3 + 3 (4/3) + 9}{-3 - 3 (4/3) + 9} = \dfrac{10}2 = 5$$ Hence, the desired quadratic polynomial is $$y^2 + 12y + 5.$$
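A quick numerical check of the result, comparing the transformed roots with the coefficients of $y^2+12y+5$:

```python
from math import sqrt, isclose

d = sqrt(4 ** 2 - 4 * 3 * (-9))   # discriminant of 3x^2 - 4x - 9
A = (4 + d) / 6
B = (4 - d) / 6

alpha = (A + 3) / (A - 3)
beta = (B + 3) / (B - 3)

assert isclose(alpha + beta, -12)   # matches -b/a for y^2 + 12y + 5
assert isclose(alpha * beta, 5)     # matches  c/a
```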
H: what is a general algorithm to find a nonempty integer subset whose elements add up to 0? What is a general algorithm to decide whether a set of integers has a nonempty subset whose elements add up to 0? I would like to know the one with the fewest tries, and a proof of it. Example: {−2, −3, 15, 14, 7, −10} has a subset adding up to zero, since {−2, −3, −10, 15} adds up to zero. I would also like to know the level of it - undergraduate or graduate? AI: This is the well known subset sum problem, and there is an $O\left ((\sum |x_i|) ^2\right )$ pseudo-polynomial dynamic programming (based on a recurrence relation) algorithm. Wikipedia has a nice explanation of it here: http://en.wikipedia.org/wiki/Subset_sum_problem#Pseudo-polynomial_time_dynamic_programming_solution What do you mean when you ask for the algorithm with the "least tries"? If you mean runtime, this algorithm is the fastest known when the numbers are small in magnitude (the general problem is NP-complete, so no polynomial-time algorithm is known). Regarding the level, a student in an introductory computer science course should be able to devise a solution of this kind.
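A sketch of the dynamic programming idea in Python, tracking the set of sums reachable by nonempty subsets (this variant handles negative numbers, unlike the textbook nonnegative version):

```python
def has_zero_subset(nums):
    """True if some nonempty subset of nums sums to 0."""
    reachable = set()           # sums achievable by nonempty subsets so far
    for x in nums:
        reachable |= {s + x for s in reachable} | {x}
        if 0 in reachable:
            return True
    return False

assert has_zero_subset([-2, -3, 15, 14, 7, -10])   # e.g. {-2, -3, -10, 15}
assert not has_zero_subset([1, 2, 4])
```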
H: How to show a subset is part of another set? I'm not sure how to "show" these two answers. The small set created from the intersection $A\cap B\cap C$ is a subset of $A\cap B$ since $A\cap B\cap C$ is a smaller "portion" of the overall sets. The difference $(A-B)-C$ is contained in $A-C$ since part of $A$ was already removed with $B$. Let $A, B$, and $C$ be sets. Show that 1) $(A \cap B \cap C) \subseteq (A \cap B)$ 2) $(A − B) − C \subseteq A − C$ AI: To prove 1), show that if you choose an arbitrary element $x \in A \cap B \cap C$, then $x \in A \cap B$. So, let $x \in A \cap B \cap C$. This means that $x$ is contained in all three sets. In particular, $x \in A$ and $x \in B$. Hence, $x \in A \cap B$ Proceed in the same manner for 2). Let $x \in (A - B) - C$. Then $x \in A - B$, and $x \notin C$. Therefore, $x \in A$, but $x \notin C$, so $x \in A - C$.
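Element-chasing proofs like these can also be sanity-checked by brute force over all subsets of a small universe:

```python
from itertools import combinations

def powerset(universe):
    s = list(universe)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

subsets = powerset(range(4))         # all 16 subsets of {0, 1, 2, 3}
ok = all((A & B & C) <= (A & B) and ((A - B) - C) <= (A - C)
         for A in subsets for B in subsets for C in subsets)
assert ok                            # both inclusions hold in every case
```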
H: Convergence of $\sum_{n=0}^\infty(-1)^n\frac{4^{n-2}(x-2)}{(n-2)!}$ What theorem should I use to show that $$\sum_{n=0}^\infty(-1)^n\frac{4^{n-2}(x-2)}{(n-2)!}$$ is convergent no matter what value $x$ takes? AI: Note that while $(-2)!$ and $(-1)!$ are undefined (the Gamma function has poles at the nonpositive integers), the convention $1/(-2)! = 1/(-1)! = 0$ makes the $n=0$ and $n=1$ terms vanish. Effectively the sum starts at $n=2$. Let $\displaystyle a_n = (-1)^n\frac{4^{n-2}(x-2)}{(n-2)!}$. Then for $n\geq 2$, $$\left|\frac{a_{n+1}}{a_n}\right| = \frac{4}{n-1} \to 0,$$ so the sum $\sum_{n=2}^\infty |a_n|$ converges by the ratio test. Now use the fact that absolutely convergent series are convergent. The sum converges, to $(x-2)e^{-4}$, whatever the value of $x$.
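Numerically, the partial sums do settle at $(x-2)e^{-4}$ for any $x$ you try:

```python
from math import factorial, exp

def partial_sum(x, terms=60):
    # sum from n = 2, since the n = 0 and n = 1 terms vanish by convention
    return sum((-1) ** n * 4 ** (n - 2) * (x - 2) / factorial(n - 2)
               for n in range(2, terms))

for x in (-3.0, 0.0, 5.0):
    assert abs(partial_sum(x) - (x - 2) * exp(-4)) < 1e-9
```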
H: Venn diagram for $(\sim A) \cap (\sim B) \cap (\sim C)$ The way I read that it says everything that is not part of $A$,$B$ and $C$. So the answer is $U$ from my diagram? AI: Recall the De Morgan's law for sets. $$(\sim A) \cap (\sim B) \cap (\sim C) = \sim (A \cup B \cup C)$$ Now you should be able to conclude what you want.
H: Why are restrictions important? When simplifying expressions, why do we add on restrictions for the simplified form if the original form was undefined at a certain point? The simplified form is defined at those points, so why should it be restricted? An example of what I mean: $$\frac{x^2−1}{x−1}=x+1, x≠1$$ AI: In short, it's because a function is all about the rule and domain of definition. For example, let's consider the real-valued functions $f(x)=\frac{x^2}{x}$ and $g(x)=x$, with their maximal real domains of definition. In particular, then, $\mathrm{dom}(f)=\Bbb R\smallsetminus\{0\}$ and $\mathrm{dom}(g)=\Bbb R$. That means that $f$ and $g$ are not the same function, since their domains are not the same. Now, where both are defined, the rule is the same. However, this isn't enough for them to be the same function. Now, $f$ is a restriction of $g$ (specifically, to $\Bbb R\smallsetminus\{0\}$), so they are certainly related functions. Edit: In the context of your previous question (regarding limits and restrictions), it is worth noting that $f$ can be continuously extended to $g$, since for any $\varepsilon>0$ there exists $\delta>0$ (in particular, in this case, $\delta=\varepsilon$) such that for any $0<|x|<\delta$ we have $\left|f(x)-0\right|<\varepsilon$.
H: Show convergence using CLT Let $x_1,\ldots,x_n$ be i.i.d. Bernoulli random variables with parameter $1/2$. Let $S=\sum_{i=1}^nx_i$. Using the Central Limit Theorem, show that $$ \frac{|2S-n|}{\sqrt n} $$ is convergent to a standard normal random variable. Thank you. AI: The expectation of $S$ is $\frac{n}{2}$ and its variance is $\frac{n}{4}$ so $$\frac{S - \frac{n}{2}}{\sqrt{\frac{n}{4}}} = \frac{2S - n}{\sqrt{n}}$$ converges to a standard normal distribution by the Central Limit Theorem. The absolute value of a normal distribution with mean $0$ is a half-normal distribution so $$\frac{|2S - n|}{\sqrt{n}}$$ converges to a half-normal distribution with mean $\sqrt{\frac{2}{\pi}}$.
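A Monte Carlo sanity check of the half-normal limit: the sample mean of $|2S-n|/\sqrt n$ should approach $\sqrt{2/\pi}\approx 0.798$, the mean of the half-normal distribution, rather than $0$:

```python
import random
from math import sqrt, pi

random.seed(0)
n, trials = 500, 10000
vals = []
for _ in range(trials):
    s = sum(random.random() < 0.5 for _ in range(n))   # S ~ Binomial(n, 1/2)
    vals.append(abs(2 * s - n) / sqrt(n))

mean = sum(vals) / trials
assert abs(mean - sqrt(2 / pi)) < 0.03   # half-normal mean ~ 0.798
```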
H: Limits and restrictions? If we assume that the restrictions put on simplified forms of expressions to prevent evaluation at points undefined in the original unsimplified form are important why do we drop them when dealing with limits? For example, consider the following when trying to find the derivative of $f= x^2%$: $$\begin{align*}\lim_{h→0} \frac{f(x + h) - f(x)}{h} &=\lim_{h→0} \frac{(x+h)^2 - x^2}{h}\\ &=\lim_{h→0} \frac{x^2 + 2xh + h^2 - x^2}{h}\\ &=\lim_{h→0} \frac{h(2x + h)}{h} \end{align*}$$ All following simplified forms should have the restriction $h ≠ 0$ since the original form was undefined at this point. $$\lim_{h→0} {2x + h}, h ≠ 0$$ However to calculate the derivative, the h is valued at $0$ leading to the derivative: $$f'(x) = 2x$$ How can the equation be simplified by assuming the $h$ is $0$ when there is a restriction on it? Why is that when simplifying expressions we have to restrict the simplified forms to prevent evaluation at points undefined on the original expression, but this concept is completely ignored when dealing with limits? AI: In a sense, you are repeating the old criticism of Bishop Berkeley on infinitesimals, which were "sometimes not equal to $0$, and sometimes equal to $0$". What you need to remember is that the expression $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$$ represents the unique quantity (if it exists) that the expression $\dfrac{f(x+h)-f(x)}{h}$ approaches as $h$ approaches $0$, but without $h$ being equal to $0$. Whenever we take a limit, we are asking how the quantity is behaving as we approach $0$, but without actually being $0$. Because we are never actually at $0$, the simplification is valid, and so the computation turns on asking: what happens to the quantity $2x+h$ as $h$ approaches $0$? The answer is that, the closer $h$ gets to $0$, the closer that $2x+h$ gets to $2x$. We can make $2x+h$ as close to $2x$ as we want, provided that $h$ is close enough to $0$, without being equal to $0$. 
We are not actually evaluating at $0$ (well, we kind of are, see below, but not really); we are just finding out what happens to $2x+h$ as $h$ gets closer and closer and closer to $0$. So we are not "simplifying" the way we did before, we are now evaluating the limit, by determining what happens to $2x+h$ as $h$ approaches $0$. (Now, in a sense we are evaluating, for the following reason: the function $g(h) = 2x+h$, where $x$ is fixed, is continuous and defined everywhere. One of the properties of continuous functions (in fact, the defining property of being continuous) is that $g(t)$ is continuous at $t=a$ if and only if $g$ is defined at $a$, and $$\lim_{t\to a}g(t) = g(a).$$ That is, if and only if the value that the function approaches as the variable approaches $a$ is precisely the value of the function at $a$: there are no jumps, no breaks, and no holes in the graph at $t=a$. But we are not "simplifying" by "plugging in $a$", we are actually computing the limit, and finding that the limit "happens" to equal $g(a)$. This cannot be done with the original function $\dfrac{f(x+h)-f(x)}{h}$ because, as you note, it is not defined at $h=0$. But there is a result about limits which is very important: If $f(t)$ and $g(t)$ have the exact same value at every $t$, except perhaps at $t=a$, then $$\lim_{t\to a}f(t) = \lim_{t\to a}g(t).$$ the reason being that the limit does not actually care about the value at $a$, it only cares about the values near $a$. This is what we are using to do the first simplification: the functions of variable $h$ given by: $$\frac{(x+h)^2-x^2}{h}\qquad\text{and}\qquad 2x+h$$ are equal everywhere except at $h=0$. They are not equal at $h=0$ because the first one is not defined at $h=0$, but the second one is. So we know that $$\lim_{h\to 0}\frac{(x+h)^2 - x^2}{h} = \lim_{h\to 0}(2x+h).$$ And now we can focus on that second limit. 
This is a new limit of a new function; we know the answer will be the same as the limit we care about, but we are dealing with a new function now. This function, $g(h) = 2x+h$, is continuous at $h=0$, so we know that the limit will equal $g(0)=2x$. Since this new limit is equal to $2x$, and the old limit is equal to the new limit, the old limit is also equal to $2x$. We didn't both take $h\neq 0$ and $h=0$ anywhere. We always assumed $h\neq 0,$ and then in the final step used continuity to deduce that the value of the limit happens to be the same as the value of the function $g(h) = 2x+h$ at $h=0$. )
H: Diagonalizable linear algebraic group is isomorphic to $(\mathbb{C}^*)^r\times A$, for some finite abelian group $A$ I have three questions about algebraic groups. Let $D$ be a linear algebraic group. Then the following are equivalent: $D$ is diagonalizable. $\mathop{Hom}(D,\mathbb{C}^*)$ is finitely generated with an isomorphism of coordinate rings $\mathbb{C}[D]\cong \mathbb{C}[\mathop{Hom}(D,\mathbb{C}^*)]$. Every rational representation of $D$ is isomorphic to a direct sum of $1$-dimensional representations. $D$ is isomorphic to $(\mathbb{C}^*)^r\times A$, for some finite abelian group $A$. I'm fine with #1-3, but I had to pause after reading #4 because I am now wondering why and how this torsion part arises (it may be because I do not know any algebraic groups other than the ones that can (naturally) be embedded into $GL(n,\mathbb{C})$). Example: Take $D$ to be a diagonal group with character group $\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Then the coordinate ring of $D$ is $\mathbb{C}[x,x^{-1},y]/\langle y^2-1\rangle$ with $D\cong \mathbb{C}^*\times \mu_2$. $\mathbf{Question \;1}$: Is there a way to write $D\cong \mathbb{C}^*\times \mu_2$ in a single matrix form, i.e., embed $D$ into $GL(n,\mathbb{F})$? Or must it be separately embedded into a product of $GL_n$'s like $GL(1,\mathbb{C})\times GL(1,\mathbb{F}_2)$? Suppose our above $D\cong \mathbb{C}^*\times \mu_2$ acts on $\mathbb{C}^3$ by $$ (x,y).(a,b,c)=(xa,x^{-1}b,yc). $$ $\mathbf{Question\; 2}$: Then do we have $$ (x,y)^2.(a,b,c)=(x^2a,x^{-2}b,c) \mbox{ with } \mathbb{C}[a,b,c]^D=\mathbb{C}[ab,c^2]? $$ $\bf{Question \; 3}$: Can you give me an example of an algebraic group which cannot be embedded into a product of $GL_n$'s? Thank you. AI: Some comments on question 2. Your answer is correct. It is a question in classical invariant theory. $\mathbb{C}^*\times \mu_2 \cong GL(1)\times O(1)$ acts on $\mathbb{C}[\mathbb{C} \oplus \mathbb{C}^*]\otimes \mathbb{C}[\mathbb{C}]$, one group on each tensor factor respectively.
Classical invariant theory tells you that the $GL(n)$-invariants in $\mathbb{C}[\mathbb{C}^n\oplus (\mathbb{C}^n)^*]$ (the first factor) are generated by $\langle a,b\rangle$ (the natural pairing), and the $O(n)$-invariants in $\mathbb{C}[\mathbb{C}^n]$ (the second factor) are generated by $c^2$. But it is not obvious that there are no further relations between the quadratic invariants. Fortunately, your case, $n=1$, is in the stable range, so there are no further relations.
H: Combinations - need help clarifying answers I have the answers to the following two questions, but I'm stumped as to why the answers are calculated this way: Q.1) There are six comics: A, B, C, D, E, F; How many ways are there to select six comics? Answer: C(6+6-1, 6-1) Q.2) There are 20 balls. 6 red, 6 green, 8 purple. In how many ways can we select five balls if balls of the same color are considered identical? Answer: C(3+5-1, 5) What I don't understand is why we subtract 1 from 6 in the first question while leaving 5 just the way it is in the 2nd question. Any help would be appreciated. Thanks! AI: Notice that $C(6+6-1,6-1)=C(6+6-1,6)$, so it doesn't matter whether you subtract 1 from the 6 or not. In the second problem, $C(3+5-1,5)=C(3+5-1,3-1)$, so you are in fact subtracting 1 --- it's just that you're subtracting it from the 3 (the number of different kinds of item), not from the 5 (the number of items chosen). This is the same as the first problem, where you either subtract 1 from the number of different kinds of item, 6, or don't subtract it from the number of items chosen, which is also 6. But the best thing to do is to understand where these formulas are coming from.
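The reconciliation is just the symmetry $\binom{n}{k}=\binom{n}{n-k}$, which is easy to confirm for both questions:

```python
from math import comb

# Question 1: C(6+6-1, 6-1) = C(11, 5) = C(11, 6)
assert comb(11, 5) == comb(11, 6) == 462
# Question 2: C(3+5-1, 5) = C(7, 5) = C(7, 3-1) = C(7, 2)
assert comb(7, 5) == comb(7, 2) == 21
```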
H: Is there a good way to solve for the inverse of $(u^2-u+4)$? I'm having trouble calculating the inverse of a polynomial. Consider the polynomial $f(x)=x^3+3x-2$, which is irreducible over $\mathbb{Q}$, as it has no rational roots. So $\mathbb{Q}[x]/(f(x))\simeq \mathbb{Q}[u]$ is a field. How would I calculate $(u^2-u+4)^{-1}$ as a polynomial? Calling this $p(u)$, I tried solving for it by writing $$ p(x)(x^2-x+4)=q(x)f(x)+1 $$ but I don't know what $q(x)$ is. How can I write down $(u^2-u+4)^{-1}$ explicitly? Thanks. AI: Let's calculate the $\gcd$ of the two polynomials $x^3+3x-2$ and $x^2-x+4$ in $\mathbb{Q}[x]$. Using the Euclidean algorithm, $$x^3+3x-2=x(x^2-x+4)+(x^2-x-2)$$ $$x^2-x+4=(x^2-x-2)+6$$ Because $6$ is a unit in $\mathbb{Q}[x]$, we see that the polynomials are relatively prime. Now, building back up, we see that $$6=(x^2-x+4)-(x^2-x-2)$$ $$6=(x^2-x+4)-[(x^3+3x-2)-x(x^2-x+4)]$$ $$6=(-1)(x^3+3x-2)+(1+x)(x^2-x+4)$$ $$1=\left(-\tfrac{1}{6}\right)(x^3+3x-2)+\left(\tfrac{1+x}{6}\right)(x^2-x+4)$$ Plugging in $u$, which is a root of $x^3+3x-2$, we see that $$1=\left(\tfrac{1+u}{6}\right)(u^2-u+4)$$ so that $\tfrac{1+u}{6}$ is the multiplicative inverse of $u^2-u+4$ in $\mathbb{Q}(u)$.
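A numerical check of the computed inverse: find the real root $u$ of $x^3+3x-2$ by bisection and verify that $(u^2-u+4)\cdot\frac{1+u}{6}=1$:

```python
def p(x):
    return x ** 3 + 3 * x - 2   # the minimal polynomial

lo, hi = 0.0, 1.0               # p(0) = -2 < 0 < 2 = p(1)
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
u = (lo + hi) / 2               # the real root, roughly 0.596

assert abs((u ** 2 - u + 4) * (1 + u) / 6 - 1) < 1e-12
```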
H: Combinations - 2 sets of identical books In how many ways can 15 identical computer science books and 10 identical psychology books be distributed among five students? So I'm trying to figure this out: I know how to calculate 15 identical cs books: C(15+6-1, 6-1) and also 10 identical psych books: C(10+6-1, 6-1), but I do not know how to consider the combinations with both books. By the way, I've asked a few questions on here in the past hour; I just wanted to say that these aren't homework problems, but I'm doing these problems to study for a midterm tomorrow. I guess there isn't any way to prove that....but just wanted to put it out there. Thanks for your help! AI: You can consider it two distributions in succession: first you distribute the $15$ computer science books, and then you distribute the $10$ psychology books. Thus, the final answer is the product of the number of ways of making each of these distributions. However, your calculations for those numbers are a bit off. The number of ways of distributing $n$ identical objects to $k$ distinguishable bins is $$\binom{n+k-1}{k-1}=\binom{n+k-1}n\;.$$ Thus, the computer science books can be distributed in $$\binom{15+5-1}{5-1}=\binom{19}4$$ ways, and the psychology books in $$\binom{10+5-1}{5-1}=\binom{14}4$$ ways, and the final answer is $$\binom{19}4\binom{14}4\;.$$
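A one-line check of the corrected counts (Python, illustrative):

```python
from math import comb

cs = comb(15 + 5 - 1, 5 - 1)   # 15 identical CS books among 5 students
psy = comb(10 + 5 - 1, 5 - 1)  # 10 identical psychology books
print(cs, psy, cs * psy)       # 3876 1001 3879876
```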
H: Combinations - at least and at most There are 20 balls - 6 red, 6 green, 8 purple. We draw five balls and at least one is red, then replace them. We then draw five balls and at most one is green. In how many ways can this be done if the balls are considered distinct? My guess: $${4+3-1 \choose 3-1} \cdot {? \choose ?}$$ I don't know how to do the second part...at most one is green? Thanks for your help. AI: Event A: The number of ways to choose 5 balls is $\binom{20}{5}$. The number of ways to choose 5 balls with no red balls is $\binom{14}{5}$. Hence the number of ways to choose 5 balls including at least one red ball is $\binom{20}{5}-\binom{14}{5}$. Event B: The number of ways to choose 5 balls with no green balls is $\binom{14}{5}$. The number of ways to choose 5 balls with exactly one green ball is $6\binom{14}{4}$ (we multiply by $6$ because there are $6$ choices for the green ball). Since these two choice processes are mutually exclusive, the number of ways to choose 5 balls including at most one green ball is $\binom{14}{5}+6\binom{14}{4}$. Events A and B are independent, so the total number of ways of doing A and B is $\left(\binom{20}{5}-\binom{14}{5}\right)\times\left(\binom{14}{5}+6\binom{14}{4}\right)$.
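Since the numbers are small, both counts can be verified by brute force — here labelling the 20 distinct balls 0–19, with balls 0–5 red and 6–11 green (an arbitrary assignment made just for this check):

```python
from itertools import combinations
from math import comb

red, green = set(range(6)), set(range(6, 12))
subsets = list(combinations(range(20), 5))   # all 15504 five-ball draws

at_least_one_red = sum(1 for s in subsets if any(b in red for b in s))
at_most_one_green = sum(1 for s in subsets if sum(b in green for b in s) <= 1)

assert at_least_one_red == comb(20, 5) - comb(14, 5)       # 13502
assert at_most_one_green == comb(14, 5) + 6 * comb(14, 4)  # 8008
print(at_least_one_red, at_most_one_green)
```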
H: Am I finding the Inequality Wrong? If I am given the inequality $x+6>7>2x$, then can I do the following to find the range of $x$: since $x+6>7$, $x>1$, and since $7>2x$, $\frac{7}{2}>x$. This means $\frac{7}{2}>x$ and $x>1$, so $\frac{7}{2}>x>1$. Is this correct? According to my book, "The expression $x+6>7>2x$ alone is insufficient to find the range of $x$." Edit: Here is the exact question from the book: using the expression $x+6>7>2x$, which is greater, $x$ or $3$? AI: What you’ve done to solve the inequality is fine. However, it doesn’t suffice to answer the actual question. The inequality tells you only that $$1<x<\frac72\;;$$ that leaves open the possibility that $x=2$, say, in which case $x<3$, but it also leaves open the possibility that $x=\frac{13}4$, in which case $x>3$. Or $x$ could equal $3$. The point is that the interval $\left(1,\frac72\right)$ contains both numbers less than $3$ and numbers greater than $3$, so the fact that $x$ is in this interval doesn’t tell you how $x$ compares with $3$.
H: 2012-gon - subsets of vertices. Can we prove or disprove this? For a sufficiently large $n$, every set of at least $n$ points in the plane with no three collinear has a subset that forms the vertices of a convex $2012$-gon. Gerry mentions the Happy Ending theorem but I don't see how it relates. If someone could show me the steps in the proof or disproof, that would be nice. AI: This follows from the theorem of Erdős and Szekeres (sometimes known as "The Happy Ending Theorem"). The statement of the theorem, with some discussion, is here. There is also a link there to the original paper.
H: 1-dim subspace & sphere I'm reading a book about algebraic topology and recently came across this sentence: "The space of all one-dimensional subspaces is equal to the one-dimensional circle (that is, the circumference)." I don't understand this, and there isn't much further explanation about it. Can anyone explain to me why it is like this? THANK YOU!~ AI: The one-dimensional subspaces are just the straight lines through the origin in $\mathbb R^n$. Each line through the origin is identified by a pair of antipodal points on the sphere $S^{n-1}$, namely the points of intersection of the line with the sphere. This correspondence is one-to-one. In $\mathbb R^2$, lines through the origin correspond to pairs of antipodal points in $S^1$. So the one-dimensional subspaces have been identified with antipodal pairs of points in $S^1$, and the space of such pairs is quite easily seen to be homeomorphic to $S^1$ itself.
H: When is $\mathbb{F}_p[x]/(x^2-2)\simeq\mathbb{F}_p[x]/(x^2-3)$ for small primes? I've been considering the rings $R_1=\mathbb{F}_p[x]/(x^2-2)$ and $R_2=\mathbb{F}_p[x]/(x^2-3)$, where $\mathbb{F}_p=\mathbb{Z}/(p)$. I'm trying to figure out if they're isomorphic (as rings, I suppose) or not for primes $p=2,5,11$. I don't think they are for $p=11$, since $x^2-2$ is irreducible over $\mathbb{F}_{11}$, so $R_1$ is a field. But $x^2-3$ has $x=5,6$ as solutions in $\mathbb{F}_{11}$, so $R_2$ is not even a domain. For $p=5$, both polynomials are irreducible, so both rings are fields with $25$ elements. I know from my previous studies that any two finite fields of the same order are isomorphic, but I'm curious if there is a simpler way to show the isomorphism in this case, without resorting to that theorem. For $p=2$, neither ring is even a domain, as $x=0$ is a solution of $x^2-2$ and $x=1$ is a solution of $x^2-3$, but I'm not sure how to proceed after that. Thank you for any help. AI: The question is, as you observed, about whether the two polynomials are irreducible or not. If both of them factor, then the two rings are isomorphic, both isomorphic to a direct product of two copies of $\mathbb{F}_p$. This is seen as follows. The ring $\mathbb{F}_p[x]/\langle q(x)\rangle$ is isomorphic to $\mathbb{F}_p$ when $q(x)$ is linear. If $x^2-2$ (resp. $x^2-3$) factors, then the two factors $q_1(x)$ and $q_2(x)$ are both linear and coprime (assume $p>3$), so the claim follows from the Chinese remainder theorem: $$ \mathbb{F}_p[x]/\langle q_1(x)q_2(x)\rangle\simeq\mathbb{F}_p[x]/\langle q_1(x)\rangle\oplus\mathbb{F}_p[x]/\langle q_2(x)\rangle. $$ If both polynomials are irreducible, then both rings are isomorphic to the field $\mathbb{F}_{p^2}.$ If one polynomial factors but the other one does not, then $R_1$ and $R_2$ are not isomorphic, because one has zero divisors and the other has not.
The way to test factorizability in this case is by the theory of quadratic residues. $R_1$ is a field, iff $2$ is not a quadratic residue modulo $p$. Similarly $R_2$ is a field, iff $3$ is not a quadratic residue modulo $p$. As you hopefully remember, $2$ is a special case, and we simply use the result that $2$ is a quadratic residue, iff $p\equiv \pm 1\pmod{8}$. With $3$ we need to use the law of quadratic reciprocity once. The law states that (using the Legendre symbol) $$ \left(\frac3p\right)=(-1)^{\frac{p-1}2}\left(\frac{p}3\right). $$ The prime $p$ is a quadratic residue modulo $3$, iff $p\equiv1\pmod3$, so we can conclude that $3$ is a quadratic residue modulo $p$, iff $p\equiv (-1)^{\frac{p-1}2}\pmod3$. This translates to (unless I made a mistake) the result that $3$ is a quadratic residue modulo $p>3$, iff $p$ is congruent to either $\pm1\pmod{12}$. Therefore with $p=5$ we see that both $2$ and $3$ are quadratic non-residues modulo $5$ (it would be easier to check this by applying the definition), so both $R_1$ and $R_2$ are fields of $25$ elements and thus isomorphic. You asked about a simpler way of showing that the two fields of order $p^2$ are isomorphic. This also follows from the theory of quadratic residues. We get two fields, when both $2$ and $3$ are quadratic non-residues. But the ratio of two non-squares in a finite field is a square. This means that there exists an integer $m$, $0<m<p$ such that $$ 2\equiv 3m^2\pmod{p}. $$ Therefore, for the purposes of extending the field $\mathbb{F}_p$ $$ \sqrt{2}=\pm m\sqrt{3}. $$ Using this observation it is easy to see that in this case $$ \mathbb{F}_p[\sqrt2]=\mathbb{F}_p[\sqrt3]. $$ A final note. From the above we see that (for primes $p>3$) the isomorphism type of $R_1$ depends on the residue class of $p$ modulo $8$, and the isomorphism type of $R_2$ depends on the residue class of $p$ modulo $12$. 
Therefore the answer to the question whether $R_1\simeq R_2$ or not can be given in terms of residue classes of $p$ modulo $24$. I leave that to you, though :-)
H: Why is the probability that a prime p is a factor of a number n equal to 1/p I'm learning some number theory and I can't seem to understand why this is the case. AI: You ought to specify a distribution before you ask a question like this, because there is no uniform distribution on all of the natural numbers, so there is no particularly natural choice. However, what we can say is that if you take a large interval of natural numbers, and take a uniform distribution on them, the probability that a given number is a multiple of $m$ is roughly $1/m$. To see this, simply observe that it is equivalent to it being $0\mod m$, and there are $m$ things that you can be mod $m$, so any distribution in which they will be equally likely will give $1/m$ as the probability of any one.
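The "large interval" heuristic is easy to see empirically: among the integers $1,\dots,N$ the exact count of multiples of $m$ is $\lfloor N/m\rfloor$, so the observed ratio is within $1/N$ of $1/m$. A quick illustrative check:

```python
N = 10**5
for m in (2, 3, 5, 7, 11):
    count = sum(1 for n in range(1, N + 1) if n % m == 0)
    print(m, count / N, 1 / m)   # the two ratios agree to within 1/N
```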
H: Proof of Riemann Integral of an indicator function of an interval on Real The theorem is as follows: Let $a<b$ and let $c,d \in [a,b]$ with $c<d$. Then $1_{[c,d)}$ is Riemann integrable over $[a,b]$ and $$\int_{a}^{b} 1_{[c,d)} dx = d-c$$ I am using Schröder's Mathematical Analysis and I came across this proof, but I think the proof in the book is somewhat unclear, so I am looking for another source with a proof of this theorem. I tried Google, but the results all concern Lebesgue integration of the Dirichlet function, which is not what I want. AI: Divide $[a,b]$ into $n$ intervals, each of length $\frac{b-a}{n}$. Since each of $c$ and $d$ lies in one of these intervals, we know that $[c,d)$ contains at least $(d-c) \frac{n}{b-a} -2$ of these intervals, and that $[c,d)$ is contained entirely in at most $(d-c) \frac{n}{b-a}+2$ of these intervals. Thus we have that the lower ($L$) and upper ($U$) Riemann sums are bounded by $$\frac{b-a}{n} ((d-c) \frac{n}{b-a} -2) \leq L \leq U \leq \frac{b-a}{n} ((d-c) \frac{n}{b-a} +2).$$ Letting $n\to \infty$ yields the desired result.
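Numerically, the squeeze is visible already for modest $n$. A small Python sketch (left-endpoint Riemann sums, with illustrative values of $a,b,c,d$ chosen by me) converges to $d-c$:

```python
def riemann(a, b, c, d, n):
    # left-endpoint Riemann sum of the indicator of [c, d) over [a, b]
    h = (b - a) / n
    return sum(h for i in range(n) if c <= a + i * h < d)

a, b, c, d = 0.0, 10.0, 2.0, 7.5
for n in (10, 100, 10000):
    print(n, riemann(a, b, c, d, n))   # approaches d - c = 5.5
```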
H: generating $\sigma$-field of a set Let $X=(X_t)$ be a stochastic process and we define the raw filtration by $F=(\mathcal{F}_t)$, where $\mathcal{F}_t:=\sigma (X_s;s\le t)$ Now I want to prove that $\sigma (\mathcal{C})=\mathcal{F}_t$, where $\mathcal{C}:=\{\prod_{k=0}^nf_k(X_{t_k});t_n\le t,f_k:\mathbb{R}\to \mathbb{R} \mbox{ bounded and measurable}\}$ I was able to prove one inclusion: since $\mathcal{F}_t$ is generated by $X^{-1}_s(B)$ where $B\in \mathcal{B}(\mathbb{R})$, we take $n=0$, $f_0=\mathbf1_B$ and $t_0=t_n=s$. The other inclusion is bothering me, i.e. $\sigma (\mathcal{C})\subset \mathcal{F}_t$? AI: Let $t \ge 0$, $n \in \mathbb N$, $f_k\colon \mathbb R \to \mathbb R$ measurable and bounded, $t_k \le t$ for $k \le n$. Then, as multiplication is measurable, and the $X_{t_k}$ are $\mathcal F_t$-measurable, $\prod_k f_k \circ X_{t_k}$ is $\mathcal F_t$-measurable. As $\prod_k f_k \circ X_{t_k} \in \mathcal C$ was arbitrary, and $\sigma(\mathcal C)$ is the smallest $\sigma$-algebra which makes these functions measurable, we have $\sigma(\mathcal C)\subseteq \mathcal F_t$.
H: proof that there is a random variable for which a function has a zero value Given a function $h$: $$ h(x)=af(x)-b[1-F(x)], $$ where $a$ and $b$ are constants with $b>0$, $f$ is a (generalized) probability density function and $F$ is its CDF, I want to prove that there exists an $x$ such that $h(x)=0$. I was trying to use the extreme value theorem, but I found it difficult to determine the limit of $f(x)$ as $x$ approaches $\pm\infty$. Taking $\lim f(x)=0$ as $x\to\pm\infty$, the result I found is $h(\infty)=0$ and $h(-\infty)=0$; does the envelope theorem apply in this case? Please suggest some ways to prove the claim. AI: $h(x)=0\implies af(x)+bF(x)=b$ $$\implies f(x)=\frac{b(1-F(x))}{a}=\frac{b(1-\int_{-\infty}^xf(t)dt)}{a}$$ Differentiating both sides, $$f'(x)=\frac{-bf(x)}{a} $$ $$\implies f(x)=ce^{-bx/a}$$ for some constant $c\gt 0$. This looks like an exponential distribution $\implies$ the random variable with exponential density $f(x)=\frac{be^{-bx/a}}{a}$ satisfies your condition, i.e. $h(x)=0$ $\forall x\in \Bbb R$.
H: Is $\mathop{Hom}(O(1),\mathbb{C}^*)$ isomorphic to $\mathop{Hom}(\mathbb{C}^*,O(1))$? I have an elementary question on homomorphisms. Let $O(1)=\{A: A^t A=1 \}$ and let $\mathbb{C}^*= \{ z\in\mathbb{C}:z\not=0\}$. Then what is the character group $\mathop{Hom}(O(1),\mathbb{C}^*)$ isomorphic to, and what is $\mathop{Hom}(\mathbb{C}^*,O(1))$ isomorphic to? Is $\mathop{Hom}(O(1),\mathbb{C}^*)$ isomorphic to $\mathbb{Z}/2\mathbb{Z}$ since $f^2 = 1$ for any $f\in \mathop{Hom}(O(1),\mathbb{C}^*)$? At the moment, I am not sure about $\mathop{Hom}(\mathbb{C}^*,O(1))$. AI: Note that $O(1)$ is the group $\{\pm 1\}$. Any homomorphism into $\mathbb C^*$ sends $1$ to $1$ and $-1$ to some square root of $1$, that is to either $1$ or $-1$. Thus there are two maps in $Hom(O(1),\mathbb C^*)$, the constant map $1$ and the identity, so $Hom(O(1),\mathbb C^*)\cong \mathbb Z/2\mathbb Z\cong O(1)$. On the other hand, any homomorphism from $\mathbb C^*$ to $O(1)$ has kernel of index $1$ or $2$. No subgroup of $\mathbb C^*$ has index $2$; in fact, $\mathbb C^*$ has no proper subgroup of finite index. To see this, note that $\mathbb C^*$ can be factored into the circle group and the real numbers under addition (by mapping $re^{i\theta}$ to $(e^{i\theta},\log r)$). A finite index subgroup of $\mathbb R$ would give us a decomposition $\mathbb R=H\cup (a_1+H)\cup \cdots \cup (a_n+H)$. Similarly, restricted to the rational numbers it gives us the rationals as a union of disjoint cosets. We can restrict our attention to those $a_i$ that are rational. We have some subfield $K$ which contains none of the $a_i$, and the denominators in $K$ are coprime to all $a_i$, thus $K\subseteq H$ as it can contain no element of the other cosets. We can then realize $H$ as a vector subspace of $\mathbb R$ over $K$, which must correspond to a proper subset of some basis for $\mathbb R$.
Thus the quotient of $\mathbb R$ by $H$ as a vector space contains at least one copy of $K$, which is infinite, contradicting the fact that $H$ has finite index. Thus any finite index subgroup would have to correspond to a finite index subgroup of the circle group, but this is isomorphic to $\mathbb R/\mathbb Z$ and the same approach works here (I think). Thus the kernel has index $1$ so is all of $\mathbb C^*$, hence the only homomorphism from $\mathbb C^*$ to $\mathbb C^*$ is the trivial one, and so $Hom(\mathbb C^*,O(1))$ is the trivial group. As a corollary, the two groups are not isomorphic.
H: "Probability" of a large integer being prime Someone once told me (rather testily) that we cannot speak of the "probability that a number is prime" because the sequence is deterministic. I think I understood his point but would like to make sure. There is a theorem in Stopple's Primer of Analytic Number Theory (p. 97): The probability that a large integer $N$ is prime is about $\dfrac{1}{\log N}$. Of course, a large integer $N$ is either prime or it is not. Its status is completely determined by its predecessors. As long as we are careful to define the sample space, is there anything here that is controversial? Are there other probability-related objections to Stopple's theorem? Thanks for any insight. Edit: This was a pedagogical device, not a theorem, as the answers below (and Stopple) make clear. AI: I don't think the statement is meant to be taken literally, but one can interpret these statements as about a random integer between $1$ and $N$, or between $N$ and $2N$ if you want. Stopple is just trying to give you some insight into the prime number theorem: for instance, about $1$ out of every $100$ integers of size about $e^{100}$ is prime.
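This heuristic is easy to test empirically. The sketch below (naive trial division; the window near $N=10^6$ is an arbitrary illustrative choice) compares the fraction of primes in the window with $1/\log N$:

```python
import math

def is_prime(n):
    # naive trial division, fine for numbers of this size
    if n < 2:
        return False
    return all(n % i for i in range(2, math.isqrt(n) + 1))

N = 10**6
window = range(N, N + 5000)
frac = sum(is_prime(n) for n in window) / len(window)
print(frac, 1 / math.log(N))   # both are roughly 0.07
```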
H: Factoring and Exponent rules using "n" notation tripping me up Hello StackExchange world, While doing a proof, I encountered: $$ = S(n-1)+3^n-3^{n-1} $$ I am focused on the latter part, with the powers of $n$; apparently it reduces after factoring to: $$ 2\cdot 3^{n-1} $$ and I have no idea why. I tried thinking of $n$ as a concrete number, and $n-1$ as the one before that, and I still can't wrap my mind around it. I am hoping someone can enlighten me here. Thank you all for the time and effort. AI: $3^n-3^{n-1}=3\cdot 3^{n-1}-3^{n-1}=3^{n-1}(3-1)=2\cdot 3^{n-1}.$ Perhaps you read it as $3^{n}-3(n-1)$, which is not right.
H: Fermat's theorem on sums of two squares composite number Suppose that we have natural numbers $a$ and $b$, and form $c = a^2 + b^2$. Suppose $c$ is even. Will this $c$ have only one possible pair of $a$ and $b$? Edit: what happens if $c$ is an odd number? AI: Not necessarily. For example, note that $50=1^2+7^2=5^2+5^2$, and $130=3^2+11^2=7^2+9^2$. For an even number with more than two representations, try $650$. We can produce odd numbers with several representations as a sum of two squares by taking a product of several primes of the form $4k+1$. To get even numbers with multiple representations, take an odd number that has multiple representations, and multiply by a power of $2$. To help you produce your own examples, the following identity, often called the Brahmagupta Identity, is quite useful: $$(a^2+b^2)(x^2+y^2)=(ax\pm by)^2 +(ay\mp bx)^2.$$
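The examples in the answer can be reproduced by a short search; `two_square_reps` below (a helper name of my choosing) lists the unordered representations $n=a^2+b^2$ with $0<a\le b$:

```python
from math import isqrt

def two_square_reps(n):
    # all pairs (a, b) with 0 < a <= b and a^2 + b^2 == n
    reps = []
    for a in range(1, isqrt(n // 2) + 1):
        b2 = n - a * a
        b = isqrt(b2)
        if b * b == b2:
            reps.append((a, b))
    return reps

print(two_square_reps(50))    # [(1, 7), (5, 5)]
print(two_square_reps(130))   # [(3, 11), (7, 9)]
print(two_square_reps(650))   # [(5, 25), (11, 23), (17, 19)]
```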
H: Sword, pizza and watermelon Suppose that we have a sword and cut a pizza and a watermelon. What is the maximum number of pieces of pizza or watermelon obtained after 10 cuts? Is there a general formula? AI: In a plane, the maximum number of pieces after $n$ cuts is given by the recursion $$C(n)=C(n-1)+n$$ with $C(1)=2$, which evaluates to $C(n)=1+\frac{n(n+1)}{2}$. This same formula answers your problem for the pizza. For the watermelon we need to consider 3-D space; there the recursion for the maximum number of pieces after $n$ cuts is $D(n)=D(n-1)+C(n-1)$, which can be evaluated easily with $D(1)=2$. Explanation of the recurrence for $C(n)$: Suppose there are already $n-1$ cuts. When we add a new cut, to maximize the number of pieces, no three lines should be concurrent and no two lines should be parallel. When these conditions are satisfied, the new cut crosses each of the existing $n-1$ lines, so it passes through $n$ regions and divides each of them into $2$ parts, giving $n$ extra regions in total $\implies C(n)=C(n-1)+n$ (you can check this for small $n$). Similar reasoning works for the 3-D case (watermelon), but that case needs to be handled carefully.
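Both recursions are easy to tabulate. The sketch below also checks the closed form $C(n)=1+\frac{n(n+1)}2$, together with $D(n)=\frac{n^3+5n+6}6$ — a known closed form for the 3-D "cake numbers" that is not stated in the answer:

```python
C, D = [1], [1]                 # C(0) = D(0) = 1: no cuts, one piece
for n in range(1, 11):
    C.append(C[-1] + n)         # C(n) = C(n-1) + n
    D.append(D[-1] + C[n - 1])  # D(n) = D(n-1) + C(n-1)

assert all(C[n] == 1 + n * (n + 1) // 2 for n in range(11))
assert all(D[n] == (n**3 + 5 * n + 6) // 6 for n in range(11))
print(C[10], D[10])             # 56 pizza pieces, 176 watermelon pieces
```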
H: How to calculate the area of bizarre shapes I'm looking for an algorithm to calculate the area of various shapes (created out of basic shapes such as circles, rectangles, etc...). There are various possibilities such as the area of 2 circles, 1 triangular and 1 square (intersections possible). As you can see this gets quite complicated and requires a numeric solution. Of course, there are various special cases which can be calculated without any problems, but just as there are simple cases, there are also complex structures. The only thing which comes to my mind is to write an algorithm which fills these geometrical structures with small squares and sums them up, but maybe there are already some smarter solutions. Maybe I could also put the whole geometrical shape in a rectangle, and approximate the outlines of my shapes, and then calculate the area outside of my shape, and afterwards subtract the calculated area from the area of the rectangle. I'm quite confident that these methods would work; it's just a matter of efficiency. AI: If you don't mind using a probabilistic algorithm, have a look at Monte Carlo integration. It's easy to implement and fairly efficient. For area calculation, it works like this: Choose some large number N. Set n := 0, s := 0. Choose a random point p. Set n := n + 1. If p is inside the shape, set s := s + 1. If n < N, go to 2. The area is approximately s/n.
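A minimal Monte Carlo sketch in Python (the shapes, bounding box, and sample size are illustrative choices of mine): the hit fraction times the bounding-box area estimates the area of the union.

```python
import random

def inside(x, y):
    # example shape: union of the unit disc at the origin
    # and the square [0.5, 2] x [0.5, 2]
    return x * x + y * y <= 1.0 or (0.5 <= x <= 2.0 and 0.5 <= y <= 2.0)

random.seed(1)
N = 200_000
# bounding box [-1, 2] x [-1, 2], area 9
hits = sum(inside(random.uniform(-1, 2), random.uniform(-1, 2)) for _ in range(N))
estimate = 9 * hits / N
print(estimate)   # close to the exact union area of about 5.31
```

The standard error scales like $1/\sqrt N$, so each extra digit of accuracy costs about a hundredfold more samples — that is the efficiency trade-off mentioned in the question.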
H: Why $A_{5}$ has no subgroup of order 20? Why $A_{5}$ has no subgroup of order 20? Thanks! AI: First, yes: it is that simple (ah, how we loathe simple things, uh?) Second, of course you've used a very special property of $\,A_5\,$: it is a simple group! Otherwise how could you deduce $\,A_5\,$ is isomorphic to a subgroup of $\,S_3\,$?? After all, it could be that the kernel of the action you described is big enough as to make this possible, but that kernel is a normal subgroup of $\,A_5\,$, so... This exercise is more or less middle mathematics undergraduate level (unless, of course, somebody says it is high school's level...we've had some wise guys like these around lately), so you must give yourself more credit when you logically reach some result. Just be sure you can back every single claim you make.
H: $\alpha x^2+\beta y^2=\gamma$ solvable over $\mathbb Q$ iff $ax^2+by^2=z^2$ solvable over $\mathbb Z$ with coprime $x,y,z$? I want to understand an algorithm from [1] to solve $$\alpha x^2+\beta y^2=\gamma \text{ over } \mathbb{Q}$$ with $\alpha, \beta, \gamma\in\mathbb{Q}$. As far as I understood the process, the following happens: Multiply the equation by the least common multiple of the denominators of $\alpha, \beta, \gamma$ to obtain an equation $$\alpha x^2+\beta y^2=\gamma$$ where $\alpha, \beta, \gamma\in\mathbb Z$. We may furthermore assume that $\alpha$ and $\beta$ are square-free, since otherwise we can solve the equation $$\bar{\alpha}x'^2+\bar{\beta}y'^2=\gamma$$ where $\bar{\alpha}, \bar{\beta}$ are the square-free parts of $\alpha$ and $\beta$, with new variables $x'=\tilde{\alpha}x$ and $y'=\tilde{\beta}y$, where $\tilde{\alpha}$ and $\tilde{\beta}$ satisfy $\alpha=\bar{\alpha}\tilde{\alpha}^2$ and $\beta=\bar{\beta}\tilde{\beta}^2$. So without loss of generality, we are left with an equation $$\alpha x^2+\beta y^2=\gamma \text{ over } \mathbb{Q}$$ where $\alpha, \beta, \gamma\in\mathbb Z$ and $\alpha, \beta$ are square-free. But now the magic happens: This equation should be solvable if and only if an equation $$ax^2+by^2=z^2$$ is solvable over $\mathbb Z$ with coprime $x,y,z$. Unfortunately I don't see how the coefficients $\alpha$, $\beta$ and $\gamma$ should be related to the coefficients $a$ and $b$, and therefore am not able to understand the equivalence of solving these two equations. I think that there might be a very easy number-theoretic argument that I don't know, and of course it would be awesome if there is indeed an elementary argument for this. As a remark that might or might not help: A friend gave me the idea that $z$ could have something to do with the square part $\tilde{\gamma}$ of $\gamma$. The argument I'm referring to is on page 22 in [1], in the middle after "Suppose $\mathbb F=\mathbb Q$".
[1] On the complexity of cubic forms AI: The process is homogenization: we switch to integers by introducing a new variable. Suppose that we are interested in the solvability in rationals $(x,y)$ of $$\alpha x^2+\beta y^2=\gamma,\tag{$1$}$$ where without loss of generality $\alpha$, $\beta$, and $\gamma$ are integers. Assume $\gamma\ne 0$. There is a rational solution of $(1)$ iff there are integers $u$, $v$, $w$, with $w\ne 0$ such that $\alpha \left(\frac{u}{w}\right)^2+\beta\left(\frac{v}{w}\right)^2=\gamma$, or equivalently $$\alpha u^2+\beta v^2=\gamma w^2.\tag{$2$}$$ Equation $(2)$ has an integer solution with $w\ne 0$ iff the equation $(\gamma\alpha)u^2+(\gamma\beta)v^2=\gamma^2w^2$ has such a solution. Or equivalently, Equation $(2)$ has an integer solution with $w\ne 0$ iff the equation $(\gamma\alpha)u^2+(\gamma\beta)v^2=z^2$ has an integer solution with $z\ne 0$. (The $\gamma$ terms on the left force divisibility of $z$ by $\gamma$ since $\gamma$ has no square-part without loss of generality).
H: Approximating next prime number Suppose that there is a prime number. Now I want to approximate the next prime number. (It does not have to be exact.) What would be the time-efficient way to do this? Edit: what happens if we limit the case to the prime number of the form $4k+1$ where k is a natural number? Edit: it's fine to replace approximate prime number with finding any prime number that is bigger than the given prime number in a time-efficient way. AI: You could use the fact that the density of primes around $p$ is about $\frac 1{\ln p}$ to say the next is around $p + \ln p$, but the variance on this estimate is huge. For $4k+1$ primes, the density is about the same as $4k+3$ primes, so the estimate would be $p + 2 \ln p$. To find the next, just start from $p$ and check all numbers of the form $6k+1$ or $6k+5$ until you find one. You shouldn't have to check toooo many. (On average, $\frac {\ln p}3$) To find any larger prime, just search the Web for the largest known prime.
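A straightforward sketch of the "just search upward" approach (naive trial division; the starting primes are illustrative choices of mine):

```python
import math

def is_prime(n):
    # naive trial division, adequate for moderate n
    if n < 2:
        return False
    return all(n % i for i in range(2, math.isqrt(n) + 1))

def next_prime(p):
    q = p + 1
    while not is_prime(q):
        q += 1
    return q

for p in (89, 104729):
    q = next_prime(p)
    print(p, "->", q, "gap:", q - p, "ln p:", round(math.log(p), 1))
```

On average the gap is about $\ln p$, so only a handful of candidates need testing; stepping only through the $6k\pm1$ residues, as the answer suggests, cuts the work by a further factor of three.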
H: Does every Noetherian ring contain at least one maximal ideal? I want to prove that a Noetherian ring $R \neq \{0\}$ contains at least one maximal ideal. My idea is to consider $\langle 0 \rangle$ and $\langle 1 \rangle$: If there is no ideal $I$ with $\langle 0 \rangle \subsetneq I \subsetneq \langle 1 \rangle$ then $\langle 0 \rangle$ is maximal. Otherwise for each infinite chain $\langle 0 \rangle \subset I_1 \subset \cdots$ there exists $i \in \mathbb{N}$ such that for all $j>i$, $I_i = I_j$. Then $I_i$ is maximal. Is that correct? AI: You have the right idea but your proof isn't correct as stated. It isn't true that each stabilizing chain ends in a maximal ideal. For example, you could just find a Noetherian ring where $0$ isn't maximal and just repeat $0 \subset 0 \subset 0 \subset \dots$ What you should do is define this process: Start with $0$. If $0$ is maximal then define $I_i = 0$ for $i \geq 1$. Otherwise, find some proper ideal $I_1$ that strictly contains $0$ and put it in the list: $0 \subset I_1$. If $I_1$ is maximal let $I_i = I_1$ for $i \geq 2$. Otherwise find some proper ideal $I_2$ that strictly contains $I_1$ and put it in the list. Inductively we have $0 \subset I_1 \subset I_2 \subset \dots \subset I_n \subset \dots$ By the Noetherian property, this chain stabilizes. The ideal it stabilizes to is maximal, or else by construction we would have chosen an ideal properly containing it to succeed it in the chain. Also note that by a Zorn's lemma argument, every nonzero ring has a maximal ideal. The key here is that for Noetherian rings you don't need Zorn's lemma. EDIT: I was informed that this argument uses the axiom of dependent choice so I will rewrite it here to make this clear. The axiom of dependent choice states that for any nonempty set $X$ and any entire binary relation $T$ on $X$, there exists a sequence $(x_n)$ such that for all $n \geq 0$, $x_n T x_{n+1}$.
A binary relation $T$ on $X$ is entire if for all $x \in X$, there exists $y \in X$ such that $xTy$. Let $X$ be the set of all proper ideals of a nonzero noetherian ring $R$. Since $R \neq 0$, $X$ is nonempty. Consider the binary relation "<" of strict inclusion. If $R$ has no maximal ideals, then "<" is entire. So by the axiom of dependent choice, if $R$ has no maximal ideals, then we may choose a sequence $(x_n)$ such that $x_1 < x_2 < \dots < x_n < \dots$ This contradicts the Noetherian property. Hence $R$ has a maximal ideal.
H: Can two topological spaces surject onto each other but not be homeomorphic? Let $X$ and $Y$ be topological spaces and $f:X\rightarrow Y$ and $g:Y\rightarrow X$ be surjective continuous maps. Is it necessarily true that $X$ and $Y$ are homeomorphic? I feel like the answer to this question is no, but I haven't been able to come up with any counter example, so I decided to ask here. AI: The circle $S^1$ surjects onto the interval $I = [-1,1]$ by projection in (say) the $x$-coordinate, while the interval $I$ surjects onto the circle by wrapping around, say $f(x) = (\cos \pi x, \sin \pi x)$. Added: Why is the circle $S^1$ not homeomorphic to the interval $I = [-1,1]$ ? The usual proof looks at cut points, i.e. a point $x$ whose removal from a topological space $X$ results in a disconnected space $X\backslash\{x\}$. Since this is a purely topological property, two homeomorphic spaces will have an equal number of cut points. Note that $S^1$ has no cut points; removal of any single point from the circle leaves a connected open arc. However a closed interval $I$ has infinitely many cut points because removing any point except one of the two endpoints disconnects it into two disjoint subintervals. The same observation serves to show the spaces in Karolis Juodelė's answer are not homeomorphic: $[0,1]$ has cut points and $[0,1]^2$ does not. See Seth Baldwin's comment below for an alternative idea, something that will not disconnect the interval $I$ that does disconnect the circle!
H: Circulant vs normal What is the relationship between the definition for a matrix to be circulant and to be normal? Does one imply the other? Assume matrix $A$ is symmetric, then $A^T=A$ and clearly it is normal, but not circulant in general. However, if I assume that $A$ is circulant, looks like $A^TA=AA^T$, so is it normal? AI: (as I just found out on Wikipedia) Normal matrices are those matrices that are diagonalisable with respect to some othonormal basis for the standard (Hermitian) inner product of $\mathbb C^n$. And circulant matrices are those that are diagonalisable with respect to one particular basis, formed of vectors $\zeta^0,\zeta^1,\ldots,\zeta^{n-1}$ where $\zeta$ is an $n$-th root of unity (and runs through all such roots as one runs through the basis). This basis is othonormal for the standard inner product, so "circulant matrix" is a very special case of "normal matrix".
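The commutation $A^TA=AA^T$ for a real circulant matrix can be confirmed directly; the helper functions below are a self-contained illustration (the generating row is arbitrary):

```python
def circulant(row):
    # row i is the generating row cyclically shifted right by i
    n = len(row)
    return [[row[(j - i) % n] for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = circulant([1, 7, -2, 4])
print(matmul(transpose(A), A) == matmul(A, transpose(A)))  # True
```

This works because the transpose of a circulant matrix is again circulant (generated by the reversed row), and any two circulant matrices commute, since they are polynomials in the same cyclic-shift matrix.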
H: How to find the largest possible rectangle (by perimeter) on the following function? It's been a while since I last tackled high-school math, and a friend asked me this question which I can't remember how to approach. I have the following: $ y = -x^2 + 5x $ which produces an inverse parabola, intersecting the $x$ axis at $(0,0)$ and $(5,0)$ (I still remember that much). The question is: place a point $a$ on the parabola graph which will produce the largest-perimeter rectangle with the axes. Image The point is to find the $a$ where $P_{aboc}$ is the largest. I would appreciate it if anyone could point me in the right direction, even if not directly giving the solution! Thanks in advance! AI: As the point is on the parabola, write $\,A=(x,-x^2+5x)\,$ , so that the rectangle's perimeter is $$H(x)=2(x+(-x^2+5x))=-2x^2+12x$$ Proof 1: As $\,H\,$ is again an inverse parabola, its maximum value is obtained at its vertex $$P\left(\frac{-b}{2a}\,,\,\frac{-\Delta}{4a}\right)=(3,18)\,,\,\mathrm{with}\,\,\,a=-2\,,\,b=12\,,\,c=0\,\,,\,\Delta:=b^2-4ac,$$ so the point is $\,A=(3,6)\,$ Proof 2: Using differential calculus: $$H'(x)=-4x+12=0\Longrightarrow x=3\Longrightarrow A=(3,6)$$
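Both proofs can be sanity-checked numerically by scanning $x\in[0,5]$ on a fine grid:

```python
def H(x):
    # perimeter of the rectangle with corner A = (x, -x^2 + 5x)
    return 2 * (x + (-x**2 + 5 * x))

best = max((H(x / 1000), x / 1000) for x in range(5001))
print(best)   # (18.0, 3.0): maximum perimeter 18, attained at A = (3, 6)
```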
H: Predicate Calculus with Sets - Question about use of an axiom Greets again StackExchange, I am watching an online lecture, and I believe that my instructor has misused an axiom. Is my concern warranted? $$\begin{align*} \text{Given:}& {P \subseteq (Q \cap R)}\\ &{(Q \cup S) \subseteq T}\\ &{x \in (P \cup S) }\\ \text{Prove:}& {x \in T} \end{align*}$$ I will reference a membership function, P(x), so onward then to the proof. Translating my givens into Predicate Calculus: $$\begin{align*} \text{Given:}&\\ &1.\ {\forall x P(x) \rightarrow [Q(x)\wedge R(x)]}\\ &2.\ {\forall x [Q(x) \vee S(x)] \rightarrow T(x)}\\ &3.\ {\exists x [P(x) \vee S(x)] } \end{align*}$$ So now comes my problem. She claims in the following line that she just used straight simplification on the R.H.S. of line 1 to get Q(x) by itself, as shown: $$\qquad\quad 4.\ {\forall x [Q(x) \wedge R(x)] \rightarrow Q(x)} $$ Wouldn't she need to have $P(x)$ first, and then use modus ponens to get the right side of that implication first, then use simplification? Is my understanding of simplification incorrect? Can you just simplify any conjoined statement at will? Thank you. If you could also venture an answer to the rest of the proof I would appreciate it, because without this step that she goes over, I am at a loss on how to solve it. AI: "Wouldn't she need to have $P(x)$ first, and then use modus ponens to get the right side of that implication first, then use simplification?" That depends on the exact set of rules and logical axioms you have to work with, which you haven't shown. There are multiple equivalent ways of formalizing first-order logic, and the details differ greatly at this level. However, even if your formal definitions do require an argument along the lines you sketch, you shouldn't expect lecturers and textbooks to keep showing the arguments in such minute details for the entire course.
Doing so would drown out any actual content of the course by completely mechanical, tedious and uninteresting details. As soon as it can reasonably be expected of you to fill in the details of a formal proof (which it sounds like it can, because you seem to have a cogent idea of how the argument can be formalized in ways that would satisfy you better), you should be prepared to do so yourself.