H: How many positive integers $n$ satisfy $n = P(n) + S(n)$? Let $P(n)$ denote the product of the digits of $n$ and let $S(n)$ denote the sum of the digits of $n$. Then how many positive integers $n$ satisfy $$ n = P(n) + S(n)? $$ I think I solved it, but I need your input. I first assumed that $n$ is a two-digit number. Then $n=10a+b$ and according to the requirement $$ 10a + b = ab + a + b\\ \Rightarrow 9a = ab \\ \Rightarrow b = 9 $$ (since $a$ is not zero). Now we have $\{19,29,39,49,59,69,79,89,99\}$, a set of $9$ numbers that satisfy the requirement. However, if we assume a three-digit number $n=100a+10b+c$, then $$ 100a+10b+c = abc+a+b+c\\ \Rightarrow 99a+9b=abc\\ \Rightarrow 9(11a+b) = abc\\ $$ Either two of the digits are $1$ and $9$ or $3$ and $3$, and in any such case the left-hand side is larger than the right-hand side. If there cannot be a three-digit number, there cannot be higher cases. Is this good reasoning? AI: In general, if you have an $n$-digit number $A=\overline{a_1a_2...a_n},$ then $P(A)+S(A)\le 9^n+9n$, whereas $A\ge 10^{n-1}.$ So we must have $10^{n-1}\le 9^n+9n,$ which becomes impossible fairly quickly due to the growth of the left-hand side.
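The claim can be sanity-checked by brute force (a Python sketch; the search bound 10**5 is an arbitrary cutoff for illustration):

```python
# Brute-force search for n = P(n) + S(n); only the nine two-digit
# numbers ending in 9 turn up.
def P(n):
    prod = 1
    for d in str(n):
        prod *= int(d)
    return prod

def S(n):
    return sum(int(d) for d in str(n))

solutions = [n for n in range(1, 10**5) if n == P(n) + S(n)]
print(solutions)  # [19, 29, 39, 49, 59, 69, 79, 89, 99]
```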
H: Which mathematical tool or method should I use to compare two matrices most efficiently? I have two matrices (the first one is $m\times m$, the second one is $n\times n$, with $m>n$). They store data pertaining to human speech. The second matrix contains a data segment that acts like an acoustic "signature". I need to find where this data occurs in the first matrix. Which mathematical tool(s) or method(s) can I use to get it done? I know convolution can help me out, but is this the best (more importantly, fastest) way? AI: It sounds like cross-correlation is the method you are looking for. Note that an implementation using a DFT instead of the brute-force definition is much more efficient. A nice example for 2D matrices (in this case images) can be found here.
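As a sketch of the DFT idea (with synthetic numpy data standing in for the speech matrices; `big`, `sig`, and the planted location are hypothetical): cross-correlation is computed as convolution with the doubly flipped signature, and its peak recovers where the signature sits.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 8
big = rng.standard_normal((m, m))     # the m-by-m data matrix
r0, c0 = 20, 33                       # where the signature is taken from
sig = big[r0:r0 + n, c0:c0 + n]       # the n-by-n "signature"

# Cross-correlation == convolution with the doubly flipped kernel,
# done via the FFT instead of the brute-force sums.
size = (m + n - 1, m + n - 1)
corr = np.fft.irfft2(np.fft.rfft2(big, s=size) *
                     np.fft.rfft2(sig[::-1, ::-1], s=size), s=size)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(corr), corr.shape))
print(peak)  # (r0 + n - 1, c0 + n - 1) in full-correlation coordinates
```

For real speech data one would typically use a normalized cross-correlation so that loud regions do not dominate the peak.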
H: Meaning of symbol $\mathcal{P}$ in set theory article? I am teaching myself real analysis, and in this particular set of lecture notes, the introductory chapter on set theory when explaining that not all sets are countable, states as follows: If $S$ is a set, $\operatorname{card}(S) < \operatorname{card}(\mathcal{P}(S))$. Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article. AI: The set $\mathcal P(X)$ also denoted by $\frak P,\wp,\it P$ or as $2^X$ sometimes, is the set of all subsets of the set $X$. Formally: $$\{A\mid A\subseteq X\}$$ Note that this is never an empty set, because $X\in\mathcal P(X)$. Cantor's theorem, which you quote in your question, states that no matter what set $X$ is, the cardinality of its power set is strictly larger.
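For finite sets the inequality is easy to observe directly; a small sketch with Python's itertools:

```python
from itertools import chain, combinations

# P(X): all subsets of X, including the empty set and X itself.
def power_set(xs):
    xs = list(xs)
    return [set(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

X = {1, 2, 3}
PX = power_set(X)
print(len(X), len(PX))  # 3 8 -- card(X) < card(P(X)) = 2**card(X)
```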
H: Surface area element of an ellipsoid I would like to evaluate an integral numerically over the surface of an ellipsoid. Take an $N \times N$ grid over the parameter space $(u, v) \in [0, 2\pi) \times [0, \pi) $. A simple approximation of the integral is $$ \int_0^{2\pi}\int_0^\pi \mathbf{f}(u,v)\,dA \simeq \sum_{i,j=1}^N \mathbf{f}(u_i,v_j)\, \Delta A(u_i,v_j) $$ where $\Delta A(u_i, v_j)$ is the surface area of the portion of the ellipsoid spanned by the parameter region $(u_i, u_{i+1}) \times (v_j, v_{j+1})$. I had hoped to compute $\Delta A(u_i, v_j)$ analytically: \begin{equation} \Delta A(u_i, v_j) = \int_{u_i}^{u_{i+1}}\int_{v_j}^{v_{j+1}}dA. \end{equation} To derive $dA$ I used the first fundamental form, $dA = \sqrt{EG-F^2}du\,dv$. An example of its application to an ellipsoid can be found in this answer, though note that (1) $dA$ is merged with the integrand and (2) $u$ and $v$ are switched from here. Here is what I obtain: $$ dA = \sqrt{a^2b^2 \cos^2 v \sin^2 v + c^2 \sin^4v\ (b^2 \cos^2 u + a^2 \sin^2 u)}\ \ du\,dv, $$ where $a,b,c$ are the semi-major axes associated with $x,y,z$ respectively. I am almost certain that with this expression for $dA$, the integral for $\Delta A(u_i, v_j)$ above cannot be evaluated analytically. Do I need to resort to a numerical approximation of $\Delta A(u_i, v_j)$? If so, in light of the simplistic approximation of the original integral, will it be sufficient to use the area of the trapezoid formed by connecting the image of the four points $(u_i, v_j), (u_{i+1}, v_j), (u_{i}, v_{j+1}), (u_{i+1}, v_{j+1})$ in $\mathbb{R}^3$? AI: I may be oversimplifying this, but the geometry seems to be a non-issue. The formula for integrating over a parametrized surface $\vec r=\vec r(u,v)$ is $$\iint_D f(\vec r(u,v))|\vec r_u\times \vec r_v|\,du\,dv \tag1$$ where $D$ is the parameter domain (rectangle in your case). There is no conceptual difference between the factors $f(\vec r(u,v))$ and $|\vec r_u\times \vec r_v|$ in (1). 
It's just an integral over a rectangle of some function $g(u,v)= f(\vec r(u,v))|\vec r_u\times \vec r_v|$. To evaluate (1) numerically, you can sample the integrand $g$ along a grid and multiply by appropriate rectangular areas $\Delta u_i \Delta v_j$. As for "the trapezoid formed by connecting the image of the four points": the images of the four points will not lie in the same plane in general, much less form a trapezoid. The term $|\vec r_u\times \vec r_v|\,du\,dv$ implicitly uses an approximation of a piece of the surface by a parallelogram.
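A numpy sketch of this recipe, using the midpoint rule on the parameter rectangle; taking $f\equiv 1$ and $a=b=c=1$ turns the integral into the area of the unit sphere, $4\pi$, which gives an easy check:

```python
import numpy as np

a, b, c = 1.0, 1.0, 1.0
N = 400
du, dv = 2 * np.pi / N, np.pi / N
u = (np.arange(N) + 0.5) * du         # midpoints in u
v = (np.arange(N) + 0.5) * dv         # midpoints in v
U, V = np.meshgrid(u, v, indexing="ij")

# r(u, v) = (a cos u sin v, b sin u sin v, c cos v); partials computed by hand.
r_u = np.stack([-a * np.sin(U) * np.sin(V),
                 b * np.cos(U) * np.sin(V),
                 np.zeros_like(U)], axis=-1)
r_v = np.stack([a * np.cos(U) * np.cos(V),
                b * np.sin(U) * np.cos(V),
                -c * np.sin(V)], axis=-1)

g = np.linalg.norm(np.cross(r_u, r_v), axis=-1)   # f = 1, so g = |r_u x r_v|
area = g.sum() * du * dv
print(area)  # ~ 4*pi = 12.566...
```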
H: Determine whether the function is a linear transformation Determine whether the function is a linear transformation. Justify your answer. $T: P_2 \to P_2$, where $T(a_0+a_1x+a_2x^2) = a_0 + a_1(x+1) + a_2(x+1)^2$ My thought process on solving this: I know that in order to be a linear transformation, the following 2 conditions must be met: $T(u+v) = T(u) + T(v)$ $T(cu) = cT(u)$ I'm new to linear transformations, and I'm confused about how to set this up. For instance, how do I know what $u$ and $v$ are? AI: Let $u = a_0 + a_1x + a_2 x^2$ (some element of your space), $v = b_0 + b_1 x + b_2 x^2$ (some other element of your space) and $c$ be some scalar. Show that your two conditions hold for all possible $a_0, a_1, a_2, b_0, b_1, b_2, c$. That is, for $T$ to be linear, these two conditions should hold no matter what $u$ (a vector), $v$ (a vector) and $c$ (a scalar) are.
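The computation the answer suggests can also be done symbolically; a sympy sketch, with $u$ and $v$ represented by their coefficient triples:

```python
from sympy import symbols, expand

a0, a1, a2, b0, b1, b2, c, x = symbols("a0 a1 a2 b0 b1 b2 c x")

def T(p0, p1, p2):
    return p0 + p1 * (x + 1) + p2 * (x + 1) ** 2

# Both linearity conditions hold identically in the coefficients.
additive = expand(T(a0 + b0, a1 + b1, a2 + b2) - T(a0, a1, a2) - T(b0, b1, b2))
homogeneous = expand(T(c * a0, c * a1, c * a2) - c * T(a0, a1, a2))
print(additive, homogeneous)  # 0 0, so T is linear
```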
H: $A/ I \otimes_A A/J \cong A/(I+J)$ For commutative ring with unit $A$, ideals $I, J$ it holds $$A/ I \otimes_A A/J \cong A/(I+J).$$ A proof can be found here (Problem 10.4.16) for example. However, I'd be interested in a less explicit proof of this fact. Does anybody know a nice way to do this, for example with the universal property or isomorphism theorems? Thank you. So (after some minutes of thinking by myself) here would be my approach: $A/ I \otimes_A A/J \cong (A/ J) / ( I \cdot A/J) \cong (A/J) / ((I+J)/J) \cong A/(I+J)$ Everything okay? (using isomorphism theorem and $A/I \otimes M \cong M/IM$) AI: I'm assuming you mean this isomorphism as an isomorphism of $A$-algebras (I'm going to assume $A$ is commutative so this makes sense). Then $A/I$ has the following universal property: for any $A$-algebra $B$ such that the elements of $I$ map to zero under the structural map $A\rightarrow B$, there is a unique $A$-algebra map $A/I\rightarrow B$. Similarly for $A/J$. The tensor product $(A/I)\otimes_A(A/J)$ is the coproduct in the category of $A$-algebras. This means that an $A$-algebra map out of the tensor product to an $A$-algebra $B$ is the same as a pair $(\varphi,\psi)$ with $\varphi:A/I\rightarrow B$ and $\psi:A/J\rightarrow B$, $A$-algebra maps. Notice that for an $A$-algebra $B$, there is at most one $A$-algebra map $A/I\rightarrow B$, and likewise for $A/J$. So such a pair, if it exists, is unique, and it exists if and only if the structural map $A\rightarrow B$ kills $I$ and $J$, i.e., if and only if it kills $I+J$. This is the condition that characterizes the $A$-algebra $A/(I+J)$ (i.e. its universal property). So $A$-algebra maps $(A/I)\otimes_A(A/J)\rightarrow B$ and $A$-algebra maps $A/(I+J)\rightarrow B$ are ``the same" for all $A$-algebras $B$. Then there is a unique $A$-algebra isomorphism $(A/I)\otimes_A(A/J)\cong A/(I+J)$ compatible with the natural maps from $A/I$ and $A/J$ to source and target. 
The uniqueness implies that this isomorphism (and its inverse) are the natural maps one would write down if one wanted to prove these two $A$-algebras were isomorphic.
H: The completeness of the real numbers with respect to Cauchy sequences? My Question: I am not sure about the very last inequality in the proof below; namely, where did we get $\mid a_{n}-a_{N}\mid$ and $\mid a_{N}-b\mid$? I see that $\mid a_{n}-a_{N}\mid<\epsilon/2$ and that $\mid a_{N}-b\mid\leqslant\epsilon/2$, which validates the last inequality, but I just don't see how we chose those two pieces. You can probably skip 2/3 or more of what I have written below to answer my question; I just wrote everything so that it would be clear where I am coming from. Also, I don't see why $\mid a_{n}-b\mid\leqslant\mid a_{n}-a_{N}\mid+\mid a_{N}-b\mid$ is true. Thanks for your time. Notation: denote the sequence $a_1, a_2, \dots$ by $(a_n)$, $n\in \mathbb{N}$. Definition: The Cauchy condition states that if a sequence $(a_{n})$ is convergent then the following condition must be true as well: $$ (\forall\epsilon\in\mathbb{R}_{+})(\exists N\in\mathbb{N})(n,m\geqslant N\implies\mid a_{n}-a_{m}\mid<\epsilon)$$ Theorem: $\mathbb{R}$ is complete with respect to Cauchy sequences in the sense that if $(a_{n})$ is a sequence of real numbers obeying the Cauchy condition then $(a_{n})$ converges to a limit in $\mathbb{R}$. Proof. Let the set $A$ be the set of all the real numbers comprising the sequence $(a_{n})$, $$ A=\{x\in\mathbb{R}:(\exists n\in\mathbb{N})(a_n=x)\}$$ We first observe that the set $A$ is bounded, by taking $\epsilon=1$ in the Cauchy condition, which implies that there must exist an $N_{1}\in\mathbb{N}$ such that for all $n,m\geqslant N_{1}$, $\mid a_{n}-a_{m}\mid<1$. Now, clearly the finite set $A^*=\{a_{1},a_{2},...,a_{N_{1}},a_{N_{1}}-1,a_{N_{1}}+1\}$ is bounded, since all finite sets are bounded. Let us say that all the elements of $A^*$ belong to the interval $[-M,M]$. 
Since we require that for all $n,m\geqslant N_{1}$, $\mid a_{n}-a_{m}\mid<1$ must be true, it follows that for all $n\geqslant N_{1}$, the inequality $ \mid a_{n}-a_{N_{1}}\mid<1$ is also true. The latter inequality suggests that $A$ is also contained in the interval $[-M,M]$, hence $A$ is bounded. Next we consider the set $$S=\{s\in[-M,M]:\exists\mbox{ infinitely many }n\in\mathbb{N},\mbox{ for which }a_{n}\geqslant s\}$$ That is, $a_{n}\geqslant s$ occurs infinitely many times in the sequence $(a_{n})$. Clearly $-M\in S$ and $S$ is bounded above by $M$. According to the least upper bound property of $\mathbb{R}$ we know that $S$ must have a least upper bound, i.e., $\mbox{l.u.b.}(S)=b$ where $b\in\mathbb{R}$. Thus, we claim that the Cauchy sequence $(a_{n})$ converges to the limit $b$. Given $\epsilon>0$, we must show that there exists an $N$ such that for all $n\geqslant N$, $\mid a_{n}-b\mid<\epsilon$. The Cauchy condition provides us with an $N_{2}$ such that $$m,n\geqslant N_{2}\implies\mid a_{n}-a_{m}\mid<\frac{\epsilon}{2}$$ All of the elements of $S$ are less than or equal to $b$, so $b+\epsilon/2\notin S$. Thus $a_{n}\geqslant b+\epsilon/2$ can occur only finitely many times, and so there must exist an $N_{3}\geqslant N_{2}$ such that $$n\geqslant N_{3}\implies a_{n}\leqslant b+\frac{\epsilon}{2}$$ Since $\mbox{l.u.b.}(S)=b$, the number $b-\epsilon/2$ cannot be an upper bound for $S$; thus there exists an element $s\in S$ such that $s>b-\epsilon/2$, which implies that $a_{n}\geqslant s>b-\epsilon/2$ occurs infinitely often. In particular, there exists an $N\geqslant N_{3}$ such that $a_{N}>b-\epsilon/2$. Since $N\geqslant N_{3}$, we have $a_{N}\leqslant b+\epsilon/2$ and so we have that $$a_{N}\in(b-\epsilon/2,b+\epsilon/2]$$ Moreover, since $N\geqslant N_{3}\geqslant N_{2}$ $$\mid a_{n}-b\mid\leqslant\mid a_{n}-a_{N}\mid+\mid a_{N}-b\mid<\epsilon$$ which verifies convergence. 
AI: The fact that $$|a_n-b|\le|a_n-a_N|+|a_N-b|$$ is just the triangle inequality, $|u+v|\le|u|+|v|$, with $u=a_n-a_N$ and $v=a_N-b$. In the proof we start with an arbitrary $\epsilon>0$, and the goal is to show that there is some $N$ such that $|a_n-b|<\epsilon$ whenever $n\ge N$. We know that we can make $|a_n-a_m|$ as small as we like by taking $m$ and $n$ big enough, because the sequence is a Cauchy sequence. In particular, we can choose $M$ big enough so that $|a_n-a_m|<\frac{\epsilon}2$ whenever $m,n\ge M$. We could have used any positive number in place of $\frac{\epsilon}2$; the specific choice of $\frac{\epsilon}2$ is a matter of convenience, making the conclusion of the proof a little simpler than it might otherwise be. Suppose that instead of $\frac{\epsilon}2$ we had used some number $\delta_1>0$; then at this point we could assert that $|a_n-a_m|<\delta_1$ whenever $m,n\ge M$. The bulk of the middle part of the proof is an argument showing that there are infinitely many terms of the sequence in the interval $$\left(b-\frac{\epsilon}2,b+\frac{\epsilon}2\right]\;.$$ Here again we could have used any positive number in place of $\frac{\epsilon}2$, and the argument would have worked equally well. Suppose that we’d used some $\delta_2>0$ instead; then at this point we’d have shown that there are infinitely many terms of the sequence in the interval $(b-\delta_2,b+\delta_2]$. In particular, there is an $N\ge M$ such that $a_N\in(b-\delta_2,b+\delta_2]$. This means that if $n\ge N$, then certainly $n,N\ge M$, so $|a_n-a_N|<\delta_1$, and moreover $|a_N-b|\le\delta_2$. Thus, $$|a_n-b|\le|a_n-a_N|+|a_N-b|<\delta_1+\delta_2\;.$$ We wanted to make $|a_n-b|<\epsilon$, and $\delta_1$ and $\delta_2$ could be any positive numbers, so we might as well take $\delta_1=\delta_2=\frac{\epsilon}2$. We could have taken $\delta_1$ to be $\frac{\epsilon}3$ and $\delta_2$ to be $\frac{2\epsilon}3$ and arrived at the desired conclusion. 
For that matter, we could have taken $\delta_1=\delta_2=\frac13$ and arrived at the desired conclusion: if something is less than $\frac{\epsilon}3+\frac{\epsilon}3=\frac{2\epsilon}3$, then it’s certainly less than $\epsilon$. The basic idea underlying the argument is that one way to show that two things are close to each other is to show that they are both close to some third thing. Here we want to show that $a_n$ and $b$ are close, so we show that there is an $a_N$ that’s close to both of them. Everything else is the detailed work needed to make ‘close enough’ precise and to show that there really is that third object close to both of them.
H: Computing dimension over a field of rationals I am looking to find the dimension of the vector space $V$ over $\Bbb Q$, the field of rationals, where the vectors are real numbers of the form $p + q\sqrt 2$, where $p$ and $q$ are rationals. I'm thinking that the dimension is infinite, but having trouble with proving to myself that it is right (assuming I'm right in the first place). AI: It's 2. Because of the way you defined your vector space, $1+0\sqrt{2}$ and $0+1\sqrt{2}$ generate it. So since they are independent over $\Bbb Q$, they form a basis.
H: Are spinors, at least mathematically, representations of the universal cover of a lie group, that do not descend to the group? Following on this question about how to characterise Spinors mathematically: First, given a universal cover $\pi:G' \rightarrow G$ of a lie group $G$, is it correct to say we can always lift representations of $G$ to those of $G'$ essentially by pre-composition by $\pi$? (presumably, modulo questions about the exact smooth structure of the space $End (V)$ for a representation $V$ of $G$). Secondly, there are representations of $G'$ that do not descend to $G$. Is it fair to call these Spinor representations, since when $G$ is the connected component of the identity of $SO(p,q)$, its universal cover is the double cover $Spin(p,q)$? AI: Yes, yes, and yes. In more detail: issues about smooth vectors are much subordinate in any case, and prove trivial, so (smooth-vector?) repns of the "lower" group "lift" to the covering group. Not at all a problem. Yes, at least as evidenced by many examples, about half the finite-dimensional repns of spin groups do not descend to the corresponding orthogonal groups, for example. Some other classical groups happen to be simply-connected, in contrast. And, yes, to the last question, as a matter of usage or convention or tradition. The underlying point-of-interest is that it is not obvious that the universal covering of all these not-simply-connected Lie groups is just a two-fold-cover. Weyl found this.
H: $f:\mathbb{R} \to \mathbb{R}$ continuous, locally 1-1 implies $f$ globally 1-1 Suppose $f: \mathbb{R} \to \mathbb{R}$ is continuous and locally 1-1. I want to show it is globally 1-1 (without assuming the existence of $f'$). The intermediate value theorem implies that $f$ is locally strictly monotonic. Intuitively, I would like to show that if $f(a)=f(b)$, then somewhere between $a$ and $b$, $\,f$ must "switch directions", but I haven't had any traction with this strategy. Any ideas? AI: You could prove the similar statement that a continuous function $f: \mathbb{R} \to \mathbb{R}$ is injective if and only if it has no local extrema. (If you're stuck, it's discussed here.) Assuming that statement is true, then the result is easy. Indeed, if $f: \mathbb{R} \to \mathbb{R}$ is continuous but not injective, then $f$ has a local extremum at some $x_0$, and it follows (show this) that $f$ is not locally injective at $x_0$. Edit: I didn't notice the comments before posting this. The solution there is similar but more to the point.
H: Check the convergence of $ \sum_{n=1}^{\infty} \ln(x)^n$ I'm trying to find the domain of convergence of $$ \sum_{n=1}^{\infty} \ln(x)^n$$ Treating this as a power series in $\ln(x)$ with coefficients $a_n = 1$, I get radius of convergence $1$, so $$ -1<\ln(x)<1 \longrightarrow \frac{1}{e} < x < e$$ Now let's check the endpoints: at $x=e$ the series diverges, and my problem is with $\frac{1}{e}$. How do I check it? $$ \lim\limits_{n\to \infty} \left(\ln\left(\frac{1}{e}\right)\right)^n$$ How can I write this, and what does it give me? Thanks! AI: You may try to work with geometric series: $$|\log x|<1\iff \frac1e<x<e\;\implies\;\sum_{n=1}^\infty\log^nx\;\;\text{converges}$$ $$|\log x|\ge 1\iff 0<x\le\frac1e\;\;or\;\;x\ge e\;\implies\;\sum_{n=1}^\infty\log^nx\;\;\text{diverges}$$ The above is assuming you meant $\,\log(x)^n=(\log x)^n\;$ , as hinted in the last part of your question.
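A quick numeric check of the geometric-series answer (plain Python): inside $(1/e, e)$ the partial sums settle at $\frac{\ln x}{1-\ln x}$.

```python
import math

def partial_sum(x, N):
    r = math.log(x)
    return sum(r ** k for k in range(1, N + 1))

x = 2.0                       # 1/e < 2 < e, so |log x| < 1
r = math.log(x)
limit = r / (1 - r)           # geometric series starting at n = 1
print(abs(partial_sum(x, 200) - limit))  # essentially 0
```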
H: Topics of Group Theory Required to Understand Betti Numbers I am studying Group Theory. I made sure I have a problem at hand, as part of the motivation for my study. I have chosen the problem as being able to understand as well as compute Betti Numbers where the complex is just a simplicial complex. Now, I have started preparing a list of topics, specifically in Group Theory, that are required to understand Betti Numbers in Algebraic Topology. I have: i) Quotient Groups ii) Rank of a Group (Rank of a Quotient Group, Co-Homology Group) iii) Chain Group and Basis of Chain Group iv) Concept of Dual Basis v) Isomorphic Groups vi) Concept of Codimension vii) Direct Sums of Groups (Cohomology Groups) viii) At the intersection of Group Theory and AT, we have: Homology Groups and Cohomology Groups. ix) Matrix representation of boundary and co-boundary operators (transpose) Have I missed anything, or is there a topic you might want to stress on? I am not looking at extreme abstraction; instead, I am looking for a set of abstract group-theoretic concepts to be highlighted during my reading, in order to have a direction and not be bogged down within the vastness of both fields. Thanks. My background: I am a statistician (versed in both mathematical and applied statistics) and have been active on CrossValidated SE. This was another question I had asked over a year ago here: Rank of a cohomology group, Betti numbers. 
This is the story that brought me here to this reading process: "About a year back, I was working on a statistical ranking problem and stumbled on a classical work by Amartya Sen (Nobel laureate), followed by applied work: hodge decompositions on graphs ("http://arxiv.org/pdf/0811.1067v2.pdf"), where they focused on the effects of intransitivities and inconsistencies in rankings, and the usage of a discrete version of the Hodge/Helmholtz decomposition to detect these and account for them in least squares (http://arxiv.org/pdf/1011.1716.pdf) and ranking problems, which are very famous in statistics." Following this, given my fascination with the concepts of linear algebra and its wide uses in statistics, I started this journey of reading about algebraic topology out of nowhere as a basic student, given only my statistical training, and I was surprised that I really started enjoying this reading, which I have kept up every now and then, making notes on my own. 
(Further, to reiterate what is mentioned in some of the comments above: most of the topics you list are closer to linear algebra -- or its elder sibling, module theory -- than group theory per se. A suitably theoretical linear course would prepare you better for basic algebraic topology than any course in group theory I can think of.) So if you open any text on group theory I can think of, I fear you'll find that most of what you're reading about is manifestly irrelevant to your chosen goal, which could be discouraging. This makes me think your chosen goal may be a little idiosyncratic/off-center: most of the things one learns in a first group theory course are vital or useful to know later on in one's mathematical life, even if they don't come up in the study of co/homology of simplicial complexes. If you can understand algebraic topology, then for sure you can learn the basics of group theory in a less mercenary way but still in a reasonable amount of time: say, within a month or two if you're working on it several days a week. May I be so bold as to suggest a perturbation of your "problem"? I suggest that you seek to understand cohomology of groups. It sits very nicely at the border of algebra and topology, with easy view of other important areas of mathematics. (As a personal aside, last semester I taught an introductory graduate course on homological algebra. For the first three months I felt that I was mostly discussing shallow -- but still nontrivial, and at times intricate -- formalities with categories and diagrams. In the last month I started talking about cohomology of groups, and the whole thing came alive. In some ways I wish the course had been "cohomology of groups featuring homological algebra" rather than the other way around!) Kenneth Brown's GTM Cohomology of Groups is a very nice introduction, although it should be read with more basic texts on group theory and topology close at hand. 
*: Truly, we live in a golden age of mathematics: nowadays there are brilliant people hard at work on problems involving finite commutative groups, and even finite groups of prime order: the discrete logarithm problem, the Davenport constant, additive combinatorics...But it is a very rare group theory text which will tell you about any of these things. In fact it is sort of built into the spirit of group theory that finite commutative groups are uninteresting and (even, often especially) finite non-commutative groups are interesting.
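To tie this back to the original goal: for a finite simplicial complex, Betti numbers over $\mathbb{Q}$ come down to ranks of boundary matrices, $b_k = \dim C_k - \operatorname{rank}\partial_k - \operatorname{rank}\partial_{k+1}$. A numpy sketch for the boundary of a triangle (topologically a circle, so $b_0 = b_1 = 1$):

```python
import numpy as np

# Vertices {0, 1, 2}, oriented edges {01, 02, 12}; column j of d1 is the
# boundary of edge j, e.g. boundary(01) = v1 - v0.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

rank_d1 = np.linalg.matrix_rank(d1)   # = 2
b0 = 3 - 0 - rank_d1                  # dim C_0 = 3, rank of d_0 is 0
b1 = 3 - rank_d1 - 0                  # dim C_1 = 3, no 2-simplices
print(b0, b1)  # 1 1
```

This is exactly the "matrix representation of boundary operators" item on the list above, and it only needs linear algebra over $\mathbb{Q}$, not the torsion part of the structure theorem.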
H: Non-Locally Integrable fundamental solutions Given a Linear pde $L$, a distribution $u$ is said to be a fundamental solution if $Lu=\delta$ where $\delta$ is the Dirac delta distribution. A common example is the Newtonian potential which is the fundamental solution of the Laplacian. Are there any examples of differential operators of positive order with non-locally integrable fundamental solutions? AI: The differential operator $1$ has $\delta$ as a fundamental solution, which is known to be non regular (not induced by a locally integrable function). Also, the differential operator $x\partial/\partial x-1$ has $\delta$ as a fundamental solution, since $((x\partial/\partial x-1)\delta)(f)=\delta(-x\partial f/\partial x+f)=\delta(f)$. Are you looking for the constant coefficient case?
H: Problem with simple laplacian equation I would like to solve the following PDE: $$ \partial_x^2 u + \partial_y^2 u = -\frac{2 x^2 (x^2-y^2)}{\left(x^2+y^2\right)^2} $$ The right side comes from $ x^2 \partial_x^2 \log(x^2 +y^2) $. Switching to polar coordinates, the right side is deceptively simple: $$ -2 \cos(\theta)^2 \cos(2 \theta) $$ In polar coordinates, the laplacian is: $$ \partial_r^2 + \frac{1}{r}\partial_r + \frac{1}{r^2} \partial_\theta^2 $$ so it seems as though it should be fairly simple to find a solution. If I assume $ u(r, \theta) $ is like $ r^2 F(\theta) $, and try to solve for theta, I get something that is not periodic in $ \theta $. I am not sure if there is nonetheless a way to extract a meaningful solution. I'm at a bit of a loss for any other approaches. Any suggestions? Edit: For $ F(\theta) $, I used Mathematica to get: $$ F(\theta) = \frac{1}{24} (-3 - 3 \cos(2 \theta) + \cos(4 \theta) - 6 \theta \sin(2 \theta)) $$ Plus of course any homogeneous solution. It is the $ \theta \sin (2 \theta) $ term that makes me so sad. AI: Let's not forget that for many purposes, logarithm is a monomial of degree $0$. That is, we should consider $r^2\log r$ alongside $r^2$. Letting $$u(r,\theta)=r^2F(\theta)+r^2\log r\,G(\theta) \tag1$$ we arrive at the system $$F''+4F+4G=-2\cos^2 \theta \, \cos 2\theta,\qquad G''+4G=0 \tag2$$ Solving these, we get a bunch of constants. One of them allows us to get rid of $\theta\sin 2\theta$; specifically, $$F(\theta)=\frac{1}{24} \cos 4\theta-\frac18, \qquad G(\theta)=-\frac14\cos 2\theta \tag3$$ if I got the computations right. Remark: this example shows that a solution of $\Delta u=f$ with bounded $f$ does not have locally bounded second-order derivatives in general. (E.g., $u_{rr}$ is unbounded here.) This is in contrast to the fact that for $1<p<\infty$ having $\Delta u\in L^p$ implies $u\in W_{\rm loc}^{2,p}$. 
Put another way, we witness the failure of the Riesz transforms to preserve $L^\infty$ the way they preserve $L^p$ for $1<p<\infty$. But if $\Delta u$ was Hölder- or Dini-continuous, we would have $u\in C^2$.
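The ansatz can be verified symbolically; a sympy sketch (here $F$ includes the constant particular term $-\tfrac18$, which matches the constant part of $-2\cos^2\theta\cos 2\theta$ after power reduction):

```python
from sympy import symbols, log, cos, diff, simplify, Rational

r, th = symbols("r theta", positive=True)
F = cos(4 * th) / 24 - Rational(1, 8)
G = -cos(2 * th) / 4
u = r**2 * F + r**2 * log(r) * G

# Polar Laplacian applied to the ansatz, minus the right-hand side.
laplacian = diff(u, r, 2) + diff(u, r) / r + diff(u, th, 2) / r**2
residual = simplify(laplacian + 2 * cos(th)**2 * cos(2 * th))
print(residual)
```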
H: Extract and then insert back a sub-block of a picture using MATLAB I have a picture as an input to MATLAB $(256\times256)$: img=imread('cameraman.tif'); I want to embed a watermark using a spatial-domain method. For this purpose I choose a random sub-block of the image and make my calculations: %random selection of sub-block r = randi(256-7); c = randi(256-7); subblock = img(r:r+7, c:c+7); After I embed the watermark into the sub-block, I want to replace the old sub-block of the image with the newly generated sub-block that also contains the watermark. How can I insert the new sub-block values into the original picture? AI: img(r:r+7, c:c+7) = modified_subblock;
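For reference, the same slice-assignment idiom works in Python/numpy (note numpy slices exclude the endpoint and indices start at 0, so MATLAB's r:r+7 becomes r:r+8; the "+1 embedding" below is just a placeholder for the real watermarking step):

```python
import numpy as np

# Stand-in image and a hypothetical "watermarked" block.
img = np.zeros((256, 256), dtype=np.uint8)
rng = np.random.default_rng(1)
r, c = rng.integers(0, 256 - 8, size=2)

subblock = img[r:r + 8, c:c + 8].copy()   # extract the 8x8 sub-block
subblock += 1                             # placeholder for the embedding step
img[r:r + 8, c:c + 8] = subblock          # write the block back in place
print(img.sum())  # 64 -- only the chosen block was modified
```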
H: What are the two 'sides' of a decimal number called? Is there a fancy name for the "left side" and "right side" of a decimal number? (That is, the pre-decimal part and the post-decimal part.) AI: We really do use "integer part" and "fractional part" respectively: see Wikipedia, e.g., on decimal fractions. The integer part, or integral part of a decimal number is the part to the left of the decimal separator. (See also truncation.) The part from the decimal separator to the right is the fractional part.
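The same names show up in code; for example, Python's math.modf splits a number into exactly these two parts:

```python
import math

frac, integer = math.modf(12.75)  # returns (fractional part, integer part)
print(integer, frac)  # 12.0 0.75
```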
H: Find equation of the plane through the origin with basis <1,2,-1> and <2,3,4>. Find an equation of the plane through the origin with basis <1,2,-1> and <2,3,4>. Could I get some advice on how to work this problem? I know how to find the basis given some plane, but not the other way around. AI: Hint: Compute the cross product of the two vectors. This vector will be orthogonal to the plane and from there you get the equation.
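Carrying the hint out numerically (a numpy sketch):

```python
import numpy as np

v1 = np.array([1, 2, -1])
v2 = np.array([2, 3, 4])
normal = np.cross(v1, v2)      # orthogonal to both basis vectors
print(normal)  # [11 -6 -1], giving the plane 11x - 6y - z = 0
```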
H: Construct a compact set of real numbers whose limit points form a countable set. I searched and found that the set below is a compact set of real numbers whose limit points form a countable set. I know that a set of real numbers is compact if and only if it is bounded and closed. It's obvious it is bounded, since $\,d(1/4, q) < 1\,$ for all $\,q \in E$. However, I'm not sure how this is closed. Is there any simpler set that satisfies the above condition? Thank you! $$E = \left\{\frac 1{2^m}\left(1 - \frac 1n\right) \mid m,n \in \mathbb N\right\}.$$ AI: What about $$A=\left\{\frac1n+\frac1 m:m,n\in\Bbb N\right\}\cup\{0\}\text{ ? }$$ One can see that $A'$ is $$\left\{\frac 1 n:n\in\Bbb N\right\}\cup \{0\}$$ Thus let $E=A\cup A'=\bar A$, which is closed and bounded.
H: If $f:\mathbb{R}^n \to \mathbb{R}^n$ is continuous with convex image, and locally 1-1, must it be globally 1-1? For $f:\mathbb{R}\to \mathbb{R}$ which is continuous, being locally 1-1 implies being globally 1-1, see here. This is not true for a general mapping $f:\mathbb{R}^n\to \mathbb{R}^n$. My intuition as to the source of this incongruity is that while continuity preserves connectedness, it does not preserve convexity. This illustration (from Buck's Advanced Calculus) of a locally injective, not globally injective function from $\mathbb{R}^2$ to $\mathbb{R}^2$ somewhat supports my intuition...note the image is not convex. If it were convex, it would seem there would need to be some "critical point" (not assuming the existence of the derivative) where $f$ was not locally 1-1. Is there a theorem that if $f: S \to \mathbb{R}^n$ is continuous, for $S\subset \mathbb{R}^n$ convex, with convex image, locally 1-1, then it must be 1-1 on all of $S$? AI: In $\Bbb R^2$, it's easy to "paint" a counter-example. Just take a wide, thin paintbrush, dip it in paint, and use it to paint a very large disk. Keep your strokes very smooth, never twisting the brush in place and never making any sudden movements. This gives you a locally one-to-one, continuous function from a long, thin rectangular region (the paint trail) to the disk. The paint trail and the disk are both convex and compact, but the function is not one-to-one. Clarification: How can we be sure our strokes are smooth enough? Let one face of the paintbrush be considered the front, and the other the back. Move the brush in a sequence of arcs, choosing a pivot point along the line running from the left end of the paintbrush to the right end, but lying outside the brush. Move the paintbrush in a "forward" direction each time. It will be easiest to paint an annulus around the edge first, as one arc, and then fill in the rest.
H: Derangements basic practice question (practice question, not homework) I have a problem for which I have the answer but can't understand how the answer was derived. Q.1. In how many ways can the integers $1,2,3,\dots,10$ be arranged in a line so that no even integer is in its natural position? Answer: $10!-\binom{5}{1}9!+\binom{5}{2}8!-\dots-\binom{5}{5}5!$ I am looking for some intuition as to why this is, and why it isn't just a $D_5$ (derangement of 5 numbers, as we are dealing with 5 even numbers). AI: The number of derangements of five elements would tell you what order you were putting the even numbers in... but not even derangements are enough to satisfy these conditions. Consider, for instance, the ordering 10, 6, 8, 2, 4. This is a derangement of 2, 4, 6, 8, 10; however, where are we going to put the odd numbers? If you choose to put them in as 10, 1, 3, 5, 7, 6, 9, 8, 2, 4, then the result has an even integer (6) in natural position despite choosing the derangement! So, instead, the answer starts by counting the $10!$ ways to arrange the numbers with no restrictions, then uses inclusion-exclusion to subtract off the arrangements which violate the condition: the term $\binom{5}{k}(10-k)!$ counts the arrangements in which a chosen set of $k$ even integers sits in its natural positions, and the alternating signs correct for the overcounting.
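The inclusion-exclusion count can be cross-checked by brute force on a smaller instance of the same problem (a Python sketch using 1..6, where enumerating all permutations is cheap):

```python
from itertools import permutations
from math import comb, factorial

def by_formula(n_total, evens):
    k = len(evens)
    return sum((-1)**j * comb(k, j) * factorial(n_total - j) for j in range(k + 1))

def by_brute_force(n_total, evens):
    return sum(1 for p in permutations(range(1, n_total + 1))
               if all(p[e - 1] != e for e in evens))

# Agreement on the small case 1..6 ...
assert by_formula(6, [2, 4, 6]) == by_brute_force(6, [2, 4, 6]) == 426
# ... and the full answer for 1..10:
print(by_formula(10, [2, 4, 6, 8, 10]))  # 2170680
```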
H: Exponentials of a matrix I just was working with matrix exponentials for solving problems in control theory. Suppose $A$ is a square matrix. How can we interpret $A_1 = e^ {\textstyle-A\log(t) }$, where $\log$ is the natural logarithm? Is there a formula extending the scalar case of $e^ {\textstyle-a\log(t) }$, which gives $\dfrac{1}{t^a}$? How do we evaluate $A_1$? Here is how I proceeded along the lines of the scalar case: $$A_1 = \exp(-A\log t) = \exp(-\log {t^A}) \overset{\textstyle?}{=}\;( t^A)^{-1}$$ Is the final equality correct? AI: With the matrix exponential, more care is necessary. One approach is to multiply each entry of the matrix $A$ by $\ln t$, then diagonalize (or reduce to Jordan form) and take advantage of that structure. Try working these examples and see if you get them. Example 1: Matrix is in diagonal form $A = \begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}$. For a matrix of this type, we can take advantage of the fact that it is a diagonal matrix, and right away write out: $\displaystyle e^{-A \ln t} = e^{\begin{bmatrix}-1 & 0\\0 & -1 \end{bmatrix}*\ln t} = \begin{bmatrix} 1/t & 0 \\ 0 & 1/t \end{bmatrix}$ Example 2: Diagonalizable Matrix $A = \begin{bmatrix}-1 & -3\\-1 & -3\end{bmatrix}$, so $\displaystyle e^{-A \ln t} = e^{\begin{bmatrix}1 & 3\\1 & 3 \end{bmatrix}*\ln t} = \begin{bmatrix}-3 & 1 \\ 1 & 1 \end{bmatrix} \cdot e^{\begin{bmatrix}0 & 0 \\ 0 & 4 \ln t \end{bmatrix}}\cdot \begin{bmatrix} -1/4 & 1/4 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix}-3 & 1 \\ 1 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 \\ 0 & t^4 \end{bmatrix}\cdot \begin{bmatrix} -1/4 & 1/4 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix} \dfrac{t^4}{4}+ \dfrac{3}{4} & \dfrac{3 t^4}{4}-\dfrac{3}{4} \\ \dfrac{t^4}{4}-\dfrac{1}{4} & \dfrac{3 t^4}{4} + \dfrac{1}{4}\end{bmatrix}$ Example 3: Non-diagonalizable Matrix For this matrix, we will write it using the Jordan Normal Form as $A = P \cdot J \cdot P^{-1}$ and take advantage of the Jordan block.
$A = \begin{bmatrix}1 & 2\\0 & 1\end{bmatrix}$, so $\displaystyle e^{-A \ln t} = e^{\begin{bmatrix}-1 & -2\\0 &-1 \end{bmatrix}*\ln t} = \begin{bmatrix}1 & 0 \\ 0 & -\dfrac{1}{2} \end{bmatrix} \cdot e^{\begin{bmatrix}-\ln t & \ln t \\ 0 & - \ln t \end{bmatrix}}\cdot \begin{bmatrix} 1 & 0 \\ 0 & -2 \end{bmatrix} = \begin{bmatrix}1 & 0 \\ 0 & -\dfrac{1}{2} \end{bmatrix} \cdot \begin{bmatrix}\dfrac{1}{t} & \dfrac{\ln t}{t} \\ 0 & \dfrac{1}{t} \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 \\ 0 & -2 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{t} & -\dfrac{2 \ln t}{t} \\ 0 & \dfrac{1}{t} \end{bmatrix} $ Example 4 $A = \begin{bmatrix}1 & 0 & 2 \\ 0 & 1 & 2 \\ 2 & 1 & 0\end{bmatrix}$ Give it a go. The matrix is diagonalizable and you should get: $$e^{-A \ln t} = \frac{1}{15 t^3}\begin{bmatrix}4 t^5+5 t^2+6 & 2 t^5-5 t^2+3 & 6 (1-t^5) \\ ~~2 (2 t^5-5 t^2+3) & 2 t^5+10 t^2+3 & 6 (1-t^5) \\ ~~6 (1-t^5) & 3 (1-t^5) & 3 (3 t^5+2)\end{bmatrix}$$
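The worked examples can be sanity-checked numerically. Below is a minimal pure-Python sketch of mine (not from the answer) that evaluates $e^{-A\ln t}$ by its Taylor series and compares it with the closed form obtained in Example 3:

```python
import math

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(M, terms=60):
    # Taylor series exp(M) = I + M + M²/2! + ... ; fine for small ‖M‖
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, M)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

t = 2.5
A = [[1.0, 2.0], [0.0, 1.0]]                     # Example 3
M = [[-a * math.log(t) for a in row] for row in A]
E = expm(M)
closed = [[1 / t, -2 * math.log(t) / t], [0.0, 1 / t]]   # the answer's result
assert all(abs(E[i][j] - closed[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

The same series check works for Examples 1, 2, and 4 as well.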
H: What does the notation $\epsilon(f(x))s$ mean? I am very, very confused by the notation $\epsilon(f(x))s$. To my understanding, $s$ is mapped to $F(x,s)$, and $\epsilon$ is the distance function given a point $f(x)$. So what does $\epsilon(f(x))s$ mean, when we just put them together? Multiplication of two functions? Corollary. Let $f\colon X\to Y$ be a smooth map, $Y$ being boundaryless. Then there is an open ball $S$ in some Euclidean space and a smooth map $F\colon X\times S\to Y$ such that $F(x,0)=f(x)$, and for any fixed $x\in X$ the map $s\mapsto F(x,s)$ is a submersion $S\to Y$. In particular, both $F$ and $\partial F$ are submersions. Proof. Let $S$ be the unit ball in $\mathbf R^M$, the ambient Euclidean space of $Y$, and define $$F(x,s)=\pi[f(x)+\epsilon(f(x))s].$$ Since $\pi\colon Y^\epsilon\to Y$ restricts to the identity on $Y$, $F(x,0) = f(x)$. For fixed $x$, $$s\mapsto f(x)+\epsilon(f(x))s.$$ Thanks for helping out! AI: Since we are living in the ambient Euclidean space of $Y$, say $E=\mathbb R^M$, points may be added componentwise like vectors, and multiplied by scalars similarly. Then since $f(x)\in Y\subset E$, $s\in S\subset E$, and $\epsilon(f(x))\in \mathbb R$, it makes perfect sense to multiply the second by the third (a scalar), and then add the first. The result is some point in the ambient space $E$. This is then projected back into $Y$ using $\pi$.
H: Integral of the product of $x^k$ and the upper half circle of radius 2. As I was browsing through the introduction of a paper, I came across the following equality: $\displaystyle\frac1{2\pi}\int_{-2}^2x^k\sqrt{4-x^2}~dx=\begin{cases}\frac1{ k/2+1}\binom{k}{k/2}&\text{if $k$ is even;}\\0&\text{if $k$ is odd.}\end{cases}$ Given that the equality in the case where $k$ is even was not intuitively obvious to me (the case where $k$ is odd is trivial), I decided to try to compute it myself. However, after trying most of the integral tricks I remembered, I am at a loss. I would appreciate any partial solution/hint if anyone knows it. AI: Hint: I should think using the substitution $x=2\sin\theta$ would help.
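For a quick numerical confirmation (my own addition, not part of the original exchange): applying the suggested substitution $x=2\sin\theta$ turns the integrand into a smooth function of $\theta$, so a standard quadrature rule recovers the stated closed form for small $k$:

```python
import math
from math import comb

def moment(k, n=20_000):
    # (1/2π)∫_{-2}^{2} x^k √(4-x²) dx; with x = 2 sin θ the integrand
    # becomes 4·(2 sin θ)^k cos²θ on [-π/2, π/2], smooth at the endpoints
    a = -math.pi / 2
    h = math.pi / n
    total = 0.0
    for i in range(n + 1):                    # composite Simpson's rule
        theta = a + i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += w * 4 * (2 * math.sin(theta)) ** k * math.cos(theta) ** 2
    return (h / 3) * total / (2 * math.pi)

for k in range(7):
    closed = comb(k, k // 2) / (k // 2 + 1) if k % 2 == 0 else 0.0
    assert abs(moment(k) - closed) < 1e-6
```

The even-$k$ values 1, 1, 2, 5, ... are the Catalan numbers, which is why this identity (the moments of the semicircle distribution) shows up in paper introductions.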
H: Ellipse circumference calculation method? Actually I know how to calculate the circumference of an ellipse using two methods, and each of them gives me a different result. The first method uses the formula: $E_c=2\pi\sqrt{\dfrac{a^2+b^2}{2}}$ The second method determines the arc length of the first quarter of the ellipse using an elliptic integral, multiplied by 4 (look at the picture below): I want to know what is the best method to get the exact circumference of an ellipse? AI: The first one is an approximation, and it fails badly for $a \ll b $. Overall, I would recommend the exact formula $4a E(e) $, which can be calculated quickly using the Arithmetic-Geometric Mean Method.
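As a sketch of the AGM approach (the function and variable names are mine, not from the answer): the circumference $4aE(e)$ can be computed to near machine precision in a handful of iterations using the classical identities $K = \pi/(2\cdot\mathrm{AGM}(1, b/a))$ and $E = K\,(1 - \sum_n 2^{n-1} c_n^2)$, with $c_0 = e$ and $c_{n+1} = (x_n - y_n)/2$:

```python
import math

def ellipse_perimeter(a, b, tol=1e-15):
    """Circumference 4a·E(e) via the arithmetic-geometric mean (AGM)."""
    if a < b:
        a, b = b, a
    x, y = 1.0, b / a
    c = math.sqrt(1.0 - y * y)          # eccentricity e
    pow2, csum = 0.5, 0.5 * c * c       # running Σ 2^{n-1} c_n²
    while c > tol:
        x, y, c = (x + y) / 2, math.sqrt(x * y), (x - y) / 2
        pow2 *= 2
        csum += pow2 * c * c
    return 2 * math.pi * a * (1 - csum) / x

# circle: must reduce to 2πr
assert abs(ellipse_perimeter(3, 3) - 6 * math.pi) < 1e-12

# cross-check against direct arc-length quadrature for a = 2, b = 1
n = 100_000
h = (math.pi / 2) / n
quad = 4 * sum(h * math.sqrt(4 * math.sin(t) ** 2 + math.cos(t) ** 2)
               for t in (h * (i + 0.5) for i in range(n)))
assert abs(ellipse_perimeter(2, 1) - quad) < 1e-6
```

The AGM converges quadratically, so even highly eccentric ellipses need only a dozen or so iterations, versus the $10^5$ samples the brute-force quadrature uses above.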
H: Show that $f$ is continuous at 0. EDIT: Fixed the limit. This is a question from Spivak's Calculus, Ch.6, ex. 3. Suppose that $f$ is a function satisfying $$|f(x)|\leq |x| \quad \forall x$$ Show that $f$ is continuous at 0. (Notice that $f(0)$ must equal $0$.) I do not understand the solution at all. Could someone please explain it to me? Solution: $\lim_{h\to 0} f(h) = 0$, since $|h|\lt\delta$ implies that $|f(h) - f(0)| = |f(h)|\lt \delta$. Where does the $h$ come from? AI: You need to show that $\lim_{x\to 0} f(x)=f(0)$. Now $\lim_{x\to 0}|f(x)|\le \lim_{x\to 0} |x|=0$, hence $\lim_{x\to 0}|f(x)|=0$, hence $\lim_{x\to 0} f(x)=0=f(0)$. Or in other words, the definition of continuity at $x_0$ is that for all $\epsilon>0$, there exists a $\delta$ such that $|x-x_0|<\delta\implies |f(x)-f(x_0)|<\epsilon$. In this case $x_0=0$ and $f(x_0)=f(0)=0$, hence this just reduces to showing $|x|<\delta\implies |f(x)|<\epsilon$; so since $|f(x)|\le|x|$, just take $\delta=\epsilon$ (since $|x|<\epsilon\implies |f(x)|<\epsilon$).
H: Impossible identity? $ \tan{\frac{x}{2}}$ $$\text{Let}\;\;t = \tan\left(\frac{x}{2}\right). \;\;\text{Show that}\;\dfrac{dx}{dt} = \dfrac{2}{1 + t^2}$$ I am saying that this is false because that identity is equal to $2\sec^2 x$ and that can't be equal. Also, if I take the derivative of an integral I get the function, so if I take the integral of a derivative I get the function also; so the integral of that is $x + \sin x$, which evaluated at 0 is not equal to $\tan(x/2)$. AI: Note that you seem to have attempted finding $$\dfrac{dt}{dx} = \dfrac{d}{dx}\left( \tan \left(\frac x2\right)\right)= \frac 12 \sec^2\left(\frac x2\right)$$ However, the task at hand is to differentiate $x$, viewed as a function of $t$, with respect to $t$: i.e., $$\bf \dfrac{dx}{dt}\neq \dfrac{dt}{dx}$$ To obtain $\bf \dfrac{dx}{dt}$, we need to express $x$ as a function of $t$: $$\begin{align} t & = \tan\left(\frac x2\right) \\ \arctan t & = \arctan\left(\tan \left(\frac x2\right)\right) \\ \arctan t + n\pi & = \frac x2 \\ \\ 2\arctan t + 2n\pi & = x\end{align}$$ NOW we can find $\dfrac{dx}{dt}$: $$\dfrac{dx}{dt} = \frac{d}{dt}(2\arctan t + \underbrace{2n\pi}_{\text{constant}}) = \dfrac{2}{1 + t^2}$$
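A quick numerical check of the conclusion (my own, taking $n=0$ for the principal branch): differentiating $x(t)=2\arctan t$ by central differences reproduces $2/(1+t^2)$, and the round trip $\tan(x/2)$ recovers $t$:

```python
import math

def x_of_t(t):
    return 2 * math.atan(t)      # x = 2 arctan t + 2nπ with n = 0

h = 1e-6
for t in (-3.0, -0.5, 0.0, 1.0, 4.0):
    numeric = (x_of_t(t + h) - x_of_t(t - h)) / (2 * h)
    assert abs(numeric - 2 / (1 + t * t)) < 1e-8
    # round trip: tan(x/2) recovers t
    assert abs(math.tan(x_of_t(t) / 2) - t) < 1e-11
```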
H: Find a metric space X and a subset K of X which is closed and bounded but not compact. Find a metric space $X$ and a subset $K$ of $X$ which is closed and bounded but not compact. I can find a metric space $X$ like the one below. Let $X$ be an infinite set. For $p,q\in X$, define $d(p,q)=\begin{cases}1,&\text{if $p\ne q$}\\0,&\text{if $p=q$}\end{cases}$ Then, with the metric space above, I can find a subset $K$ of $X$, which is a ball whose centre is $x$ and radius is $1$. I know this is closed (since it has no limit points) and bounded. (I'm confused again... I think $X$ is not an infinite set. Isn't the above the metric space $X$?) Some help would be really appreciated! Thank you! AI: HINT: Let $\langle X,d\rangle$ be the metric space that you described in the problem. For any $x\in X$, the open ball $B(x,1)=\{y\in X:d(x,y)<1\}=\{x\}$ is compact: it’s just the singleton set $\{x\}$. However, the closed ball $\overline{B}(x,1)=\{y\in X:d(x,y)\le 1\}$ is very different: $\overline{B}(x,1)=X$ for each $x\in X$. (Why?) This shows that $X$ itself is a closed, bounded set in $X$. Consider $\{B(x,1):x\in X\}$. This is an open cover of $X$. Does it have a finite subcover? Is $X$ compact?
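The contrast between the open and closed unit balls in the discrete metric can be made concrete (a small illustration of mine; a finite sample stands in for the infinite set $X$):

```python
def d(p, q):                 # the discrete metric from the question
    return 0 if p == q else 1

X = list(range(100))         # finite sample standing in for an infinite set
x = 0
open_ball = [y for y in X if d(x, y) < 1]
closed_ball = [y for y in X if d(x, y) <= 1]

assert open_ball == [x]      # B(x,1) is just the singleton {x}
assert closed_ball == X      # the closed ball of radius 1 is all of X
```

The open cover by singleton balls $\{B(x,1)\}$ then has one set per point, which is why no finite subcover can exist when $X$ is infinite.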
H: Domain of composite functions In "Introduction to Set Theory" we have the following: Theorem Let $f$ and $g$ be functions. Then $g \circ f$ is a function. $g \circ f$ is defined at $x$ if and only if $f$ is defined at $x$ and $g$ is defined at $f(x)$, i.e., $$ \text{dom }(g \circ f)= \text{dom } f \cap f^{-1}[\text{dom }g].$$ This assertion is certainly correct, but isn't it redundant to cite $\text{dom } f$? That is, isn't $f^{-1}[\text{dom }g]$ a subset of $\text{dom }f$? AI: Yes, it is. By definition $$f^{-1}[\operatorname{dom}\,g]=\{x\in\operatorname{dom}f:f(x)\in\operatorname{dom}\,g\}\;.$$
H: Proving there exist an infinite number of real numbers satisfying an equality Prove there exist infinitely many real numbers $x$ such that $2x-x^2 \gt \frac{999999}{1000000}$. I'm not really sure of the thought process behind this, I know that $(0,1)$ is uncountable but I dont know how to apply that property to this situation. Any help would be appreciated, Thanks, Mrs. Clinton AI: Firstly we are guessing you have one too many $9$ on the top. We proceed with $\frac{999999}{1000000}$ Define $f(x)=2x-x^2$ then $f$ has its maximum at $1$ and its maximum is $1$. Now as $f$ is continuous we can find a neighborhood (which means an open interval) around $x=1$ so that if $y$ is in the neighborhood, then $f(1)-f(y)<\frac{1}{1000000}$. Thus $f(y)>f(1)-\frac{1}{1000000}=\frac{999999}{1000000}$. Hence every point in our open interval satisfies the inequality and as open intervals have infinite cardinality we are done.
H: If A and B are disjoint open sets, prove that they are separated I just proved this statement like the one below. Is this a valid, solid proof? Thank you! AI: Your idea looks fine. The proof that $\overline A\cap B=\varnothing$ can be direct, and by symmetry (as you say), we also get $A\cap\overline B=\varnothing$. Take $b\in B$. Since $B$ is open, there is some open neighborhood $N$ with $b\in N\subseteq B$. But then from $A\cap B=\varnothing$ we get $N\cap A=\varnothing$; so $b\notin\overline A$. (Yes, we can even take $B=N$!) ADD Recall that $x\in \overline A$ if and only if for each open set $O$ containing $x$, we have $A\cap O\neq \varnothing$. In fact, we can say more: Let $(X,\mathscr T)$ be a (topological, metric) space. Then the following characterizations of disconnectedness are equivalent: $(1)$ There exist two non-empty disjoint open sets $A,B$ with $A\cup B=X$. $(2)$ There exist two non-empty disjoint closed sets $A',B'$ with $A'\cup B'=X$. $(3)$ There exist two non-empty separated sets $A'',B''$ with $A''\cup B''=X$.
H: Proof that any interval $(a,b)$ with $a<b$ contains both rational and irrational numbers Background: We are assuming that the elements of $\mathbb{R}\setminus\mathbb{Q}$ are the irrational numbers. If $x$ is irrational and $r$ is rational then $y=x+r$ is irrational. Also, if $r\neq 0$ then $rx$ is irrational as well. Likewise, if a number is irrational then its reciprocal is irrational as well. Theorem: Every interval $(a,b)$, no matter how small, contains both rational and irrational numbers. Proof. First we can see that the interval $(0,1)$ contains both rational and irrational numbers; just select the numbers $1/2$ and $1/\sqrt{2}$. For the general interval $(a,b)$, think of $a$ and $b$ as cuts, that is $a=A\mid A^{'}$ and $b=B\mid B^{'}$, such that $a<b$. The fact that $a<b$ implies that $B\setminus A\neq\emptyset$; moreover, since $B$ has no greatest value, if we select a rational element $r\in B\setminus A$ then we can find another rational element $s\in B\setminus A$ such that $r<s$. Thus $a\leqslant r<s\leqslant b$. The transformation $$T:t\mapsto r+(s-r)t$$ sends the interval $(0,1)$ to the interval $(r,s)$. Since $r,s,$ and $s-r$ are all rational, the transformation $T$ sends rationals to rationals and irrationals to irrationals. That is, $(r,s)$ contains rationals and irrationals, and so does the larger interval $(a,b)$. $\square$ My Confusion: My first confusion is what does the transformation $T$ map from and to? If $t\in \mathbb R$ then for any value of $t$ we should have $r\leqslant r+(s-r)t \leqslant s$ right?
However, if $t<0$, $r<0$, and $s<0$ then the preceding inequality shouldn't hold (unless I am missing something); and if the preceding inequality doesn't hold then what good is the transformation $T$ for showing that there is an irrational in the interval $[r,s]$? AI: As the proof says, we actually define $T$ only on the open unit interval $(0,1)$, which it maps onto $(r,s)$. If we were to use the same formula to define $T$ on all of $\Bbb R$, we’d find that $T$ maps $\Bbb R$ bijectively onto $\Bbb R$. $T(0)=r$, $T(1)=s$. $T$ maps $(0,1)$ onto $(r,s)$ in order-preserving fashion. $T$ maps $(\leftarrow,0)$ onto $(\leftarrow,r)$ in order-preserving fashion: as $t$ moves leftward from $0$, $T(t)$ moves leftward from $r$. $T$ maps $(1,\to)$ onto $(s,\to)$ in order-preserving fashion: as $t$ moves rightward from $1$, $T(t)$ moves rightward from $s$. But we don’t actually care about the first, third, and fourth of these points.
H: Inverse properties of inequalities Take a simple inequality such as 1 >= 1/x. Just by looking at it we can see that x cannot be any number strictly between 0 and 1; the solution is x < 0 or x >= 1. Now if we multiply both sides of the inequality by x: (1)x >= (1/x)x x >= 1 Great, but what happened to x < 0? If we subtract 1/x from both sides: 1 - 1/x >= 0 (x - 1)/x >= 0 x < 0 satisfies the inequality. Great, but what happened to x >= 1? How can the last two inequalities have different solutions from the inequality which they were derived from? AI: Let's look at your inequality, namely $$1\geq\frac 1 x\tag 1$$ to be solved for every real value of $x$. We make an observation: if $x$ is negative, so is $x^{-1}$, so $(1)$ holds. But if $x$ is negative, $$1\geq \frac1x\not\implies x\geq 1$$ (multiplying both sides by the negative number $x$ flips the inequality). So every $x<0$ is a solution. Thus, let's assume $x>0$, since we have already taken care of $x<0$. Then it is true that $$1\geq \frac 1 x\iff x\geq 1$$ The solution set is then $(-\infty,0)\cup [1,\infty)$ The other manipulation was not wrong $$1\geq\frac 1 x\iff 1-\frac 1 x\geq 0\iff \frac{x-1}x\geq 0$$ And when is this last inequality true? Precisely when $x>0, x-1\geq 0$ or $x< 0,x-1\leq 0$. This gives you two solution sets $S'=[1,\infty),S''=(-\infty,0)$ that in turn give $S=S'\cup S''$ as the full solution set. In this case your mistake was in disregarding $x\geq 1$ as a solution.
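A spot check of the final solution set $(-\infty,0)\cup[1,\infty)$ (the sample points are my own):

```python
def satisfies(x):
    return 1 >= 1 / x

for x in (-5.0, -0.1, 1.0, 2.0, 100.0):   # points of (-∞,0) ∪ [1,∞)
    assert satisfies(x)
for x in (0.1, 0.5, 0.99):                # points of (0,1) violate it
    assert not satisfies(x)
```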
H: Relation Between Two Homomorphisms Let $f,g: \mathbb Z_5\to S_5$ be two non-trivial homomorphisms. Prove that for every $x\in\mathbb Z_5\,$ there exists $\,\sigma \in S_5\,$ such that $\,f(x)=\sigma g(x)\sigma^{-1}.$ I have found that $f,g$ are injective, but cannot proceed any further. Thanks for any help. AI: Consider (1) where the generator can be sent to, (2) what their cycle type must look like, and (3) show that all possible images for the generators are conjugate. Now, (4) why is this sufficient? Step One: Let $x$ be the generator for the cyclic group $C_5$ (written multiplicatively). So $x^5=e$. Observe that $f(x)^5=f(x^5)=f(e)=e$ (two different $e$s here, technically). And similarly for $g(x)$. Thus the image of $x$ under $f,g$ must satisfy $\sigma^5=e$. Furthermore we cannot have $\sigma=e$ because that would make the homomorphism trivial, as $f(x^n)=f(x)^n=e^n=e$ for all $x^n\in C_5$. Therefore we may conclude that $f(x)$ and $g(x)$ must be elements of order $5$. Step Two: If $\sigma$ has order $5$ in $S_5$, the cycles in its disjoint cycle decomposition must have lcm = $5$, which means each such cycle has order $5$, and there cannot be more than one such cycle because that would imply that there are $\ge10$ numbers being acted on by $S_5:={\rm Perm}(\{1,2,3,4,5\})$. Thus we conclude that $f(x)$ and $g(x)$ are $5$-cycles. Step Three: Know the formula $\sigma(a_1~a_2~\cdots~a_m)\sigma^{-1}=(\sigma(a_1)~\sigma(a_2)~\cdots~\sigma(a_m))$: thus, conjugating a cycle in a symmetric group by $\sigma$ amounts to the same thing as applying $\sigma$ to the numbers that appear in its cycle representation. Use this to conclude (in particular) any two $5$-cycles are conjugate. Step Four: Conclude $f(x^n)=f(x)^n=(\sigma g(x)\sigma^{-1})^n=\sigma g(x)^n\sigma^{-1}=\sigma g(x^n)\sigma^{-1}$ for all $x^n\in C_5$ for some $\sigma\in S_5$, thus any two nontrivial homomorphisms are conjugate.
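Step Three can even be verified exhaustively in $S_5$: any two 5-cycles are conjugate. The small search below (the tuple representation and sample cycles are my own) finds every conjugating $\sigma$:

```python
from itertools import permutations

def compose(s, t):                  # (s∘t)(i) = s[t[i]]; perms act on {0,…,4}
    return tuple(s[t[i]] for i in range(5))

def inverse(s):
    inv = [0] * 5
    for i, si in enumerate(s):
        inv[si] = i
    return tuple(inv)

c1 = (1, 2, 3, 4, 0)                # the 5-cycle (0 1 2 3 4)
c2 = (2, 0, 3, 4, 1)                # another 5-cycle: (0 2 3 4 1)
conjugators = [s for s in permutations(range(5))
               if compose(compose(s, c1), inverse(s)) == c2]
assert conjugators                  # some σ with σ c1 σ⁻¹ = c2 exists
```

There are exactly five such $\sigma$ — one coset of the centralizer $\langle c_1\rangle$, consistent with the class of 5-cycles having $120/5 = 24$ elements.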
H: Saturation of the Cauchy-Schwarz Inequality Consider a vector space ${\cal S}$ with inner product $(\cdot, \cdot)$. The Cauchy-Schwarz Inequality reads $$ (y_1, y_1) (y_2, y_2) \geq \left| (y_1, y_2) \right|^2~~\forall y_1, y_2 \in {\cal S} $$ This inequality is saturated when $y_1 = \lambda y_2$. In particular, this implies $$ \boxed{ (y_1, y_1) (y_2, y_2) \geq \left| \text{Im} (y_1, y_2) \right|^2 } $$ My question is the following. Given a fixed $y_1 \in {\cal S}$. Is it always possible to find $y_2 \in {\cal S}$ such that the boxed inequality is saturated? If not in general, under what conditions is this possible? PS - I am a physicist, and not that well-versed with math jargon. AI: You have $|y_1|^2|y_2|^2\geq|(y_1,y_2)|^2\geq|\text{Im}(y_1,y_2)|^2$. If the right-hand side is equal to the leftmost side then, in particular, you have 'saturation' of Cauchy's inequality. Then $y_2=\lambda y_1$, because the saturation of Cauchy's inequality happens if and only if the vectors are proportional. You then only need to have $|\lambda|^2|y_1|^4=|(y_1,\lambda y_1)|^2=|\text{Im}(\overline{\lambda}(y_1,y_1))|^2=|\text{Im}(\overline{\lambda})|y_1|^2|^2=|\text{Im}(\lambda)|^2|y_1|^4$. This holds precisely when $\lambda$ is purely imaginary.
H: General solution for the system of PDEs from the curl of a vector field equaling another In my vector calculus class, when we were introduced to the curl operator the professor gave us this example: Is it possible to find a vector field $\mathbf{G}$ such that $$\mathbf{F} = \nabla \times {\mathbf{G}}?$$ As a motivation for the identity that the divergence of a curl of a vector field is $0$. My question is: Given that the identity holds true, how do you solve for a general solution for the component functions of $\mathbf{G}$, given $\mathbf{F}$? The best I can do is find one or two solutions. For example, if $\mathbf{F}$ = $\langle-y,-z,-x\rangle$, a solution I find is $\mathbf{G}$ = $\langle xy,0,-\frac{1}{2}y^2+xz\rangle$. I had to go through a system of partial differential equations and made it work, but can the general solution be written explicitly? Thanks! AI: $\newcommand{\F}{\mathbf{F}}\newcommand{\p}{\mathbf{p}}$If we consider the unbounded $\mathbb{R}^3$ case, there is a path integral formula to construct the right inverse of the curl operator for a divergence-free vector field: $$ \mathcal{R}(\F) = -(\p - \p_0)\times \int^1_0 \F\Big(\p_0 + t(\p- \p_0)\Big)t\,dt.\tag{1} $$ A proof of this formula can be found here; for more discussion the author points to Spivak's book Calculus on Manifolds. In your case, letting $\p_0 = \langle 0,0,0\rangle $, it is $$ \mathcal{R}(\F) = \langle x,y,z\rangle \times \int^1_0 \langle y,z,x\rangle t^2 dt = -\frac{1}{3}\langle z^2-xy,x^2-yz,y^2-xz\rangle .$$ You can check that this is indeed the right inverse $$ \nabla \times \mathcal{R}(\F) = \F = -\langle y, z, x\rangle. $$ Notice the vector field above is different from yours, because essentially $\mathcal{R}(\F) + \nabla \phi$ is also an answer for any smooth $\phi$.
Further checking: the difference between the right inverse $\mathcal{R}(\F) $ above and your potential field $\langle xy,0,−y^2/2+xz\rangle$ is $$ \mathbf{A} = \langle \frac{2}{3}x y +\frac{1}{3}z^2, \frac{1}{3}(x^2-y z), -\frac{1}{6}y^2+\frac{2}{3}xz\rangle = \nabla \left(\frac{1}{3} x^2y + \frac{1}{3}xz^2 - \frac{1}{6}y^2 z\right), $$ indeed a gradient. If you wanna eliminate the huge kernel of curl operator, what you can do is (1) choosing a gauge (pinning down the divergence of $\mathbf{G}$), and/or (2) specifying boundary condition restricted on a simply-connected domain $\Omega$ of $\mathbb{R}^3$, by posing the following boundary value problem on some $\Omega$: $$\left\{ \begin{aligned} \nabla \times \mathbf{G} &= \F\quad \text{ in }\Omega, \\ \nabla \cdot \mathbf{G} &= g \quad \text{ in }\Omega, \\ \mathbf{G} \cdot \mathbf{n} &= 0 \quad \text{ on }\Gamma. \end{aligned} \right.$$ You can check the construction of yours $\langle xy,0,−y^2/2+xz \rangle $ is not divergence free, while formula (1) automatically gives you the divergence free potential field. For unbounded $\mathbb{R}^3$, choosing a gauge $g$ will get you the result you want.
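The identity $\nabla\times\mathcal R(\mathbf F)=\mathbf F$ for this example can be confirmed by finite differences (a numerical sketch of mine, not from the answer):

```python
def G(p):
    # the right inverse found above: -(1/3)⟨z²-xy, x²-yz, y²-xz⟩
    x, y, z = p
    return [-(z * z - x * y) / 3, -(x * x - y * z) / 3, -(y * y - x * z) / 3]

def partial(i, j, p, h=1e-5):
    # ∂G_i/∂x_j by central differences
    q1, q2 = list(p), list(p)
    q1[j] += h
    q2[j] -= h
    return (G(q1)[i] - G(q2)[i]) / (2 * h)

def curl(p):
    return [partial(2, 1, p) - partial(1, 2, p),
            partial(0, 2, p) - partial(2, 0, p),
            partial(1, 0, p) - partial(0, 1, p)]

for p in ((1.0, 2.0, 3.0), (-0.5, 0.7, 1.3)):
    expected = [-p[1], -p[2], -p[0]]          # F = ⟨-y, -z, -x⟩
    assert all(abs(c - e) < 1e-6 for c, e in zip(curl(p), expected))
```

Since the components are quadratic, central differences are exact here up to rounding; replacing `G` with the questioner's $\langle xy,0,-y^2/2+xz\rangle$ passes the same check, illustrating the gauge freedom $\mathcal R(\mathbf F)+\nabla\phi$.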
H: Help with simplifying with $a+bi$ format? Can someone please explain to me why $\frac{3}{2+4i}$ simplified into $a+bi$ format is $\frac{3}{10}-\frac{3i}{5}$? I can't find any explanations of what $a+bi$ format is. Thank you, and this is from precalculus. AI: Hint: Multiplying both the numerator and denominator by the conjugate of the denominator, which is $2-4i$. \begin{align} \frac3{2+4i}&=\frac{3\cdot(2-4i)}{(2+4i)(2-4i)}\\ \\&=\frac{3(2-4i)}{4-16(i)(i)}\\ \\&=\frac{3(2-4i)}{4-16(-1)}, \text{ since $i^2=-1$}\\ \\&=\frac{3(2-4i)}{20}\\ \\&=\frac6{20}-\frac{12i}{20}\\ \\&=\frac{3}{10}-\frac{3i}{5}\\ \end{align} Thus $a=\frac3{10}$ and $b=\frac35.$ Or you can use this division formula.
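Python's built-in complex type confirms the arithmetic (an aside of mine, not from the answer):

```python
z = 3 / (2 + 4j)
assert abs(z - (0.3 - 0.6j)) < 1e-12        # a + bi with a = 3/10, b = -3/5

# the conjugate trick, step by step: the denominator becomes 4 + 16 = 20
w = 3 * (2 - 4j) / ((2 + 4j) * (2 - 4j))
assert abs(w - (3 / 10 - 3j / 5)) < 1e-12
```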
H: Which of the following vectors are in ker(T)? Let T: R2→R2 be the linear operator given by the formula: T(x,y) = (2x-y, -8x+4y) Which of the following vectors are in ker(T)? *Note that ker(T) is the kernel of T. The way I think I should approach this problem is to plug in the given vectors and see if I get 0 as an answer. For example, for part a I would say: T(5, 10) = [(2(5)-10), -8(5)+4(10)] = (0,0). Thus, the vector of (5,10) would be in ker(T), right? I don't need answers to each of the following. I just need to be sure I'm approaching the problem correctly. Could someone confirm? a) (5,10) b) (3,2) c) (1,1) AI: Yes, that’s the most straightforward way to answer the question, and it works fine. You can also note that $-8x+4y=(-4)(2x-y)$, so any values of $x$ and $y$ that make $2x-y=0$ also make $-8x+4y=0$. Thus, $\ker T$ is precisely the set of vectors $\langle x,y\rangle$ such that $2x-y=0$, or $y=2x$. Geometrically it’s the line of slope $2$ through the origin. This is overkill for this specific problem, but if you had a longer list of vectors to test for membership in $\ker T$, it would save you some work.
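The plug-in approach for all three vectors, written out (code mine):

```python
def T(x, y):
    return (2 * x - y, -8 * x + 4 * y)

assert T(5, 10) == (0, 0)     # (a) lies in ker T — note 10 = 2·5
assert T(3, 2) == (4, -16)    # (b) does not
assert T(1, 1) == (1, -4)     # (c) does not
```

Consistent with the geometric description, the only vector on the line $y=2x$ is $(5,10)$.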
H: How to solve this simple integral with substitution and partial fraction decomposition This is the question: $$\int \frac{e^{4t}}{(e^{2t}-1)^3} dt$$ This is my solution: $$\ln |e^{2t} - 1| - \frac{2}{e^{2t} - 1} - \frac{1}{2(e^{2t}-1)^2} $$ Which I got by first substituting $u = e^{2t}$ so $du = 2e^{2t}dt$ so $$\frac{1}{2} \int \frac{u^2}{(u-1)^3} du $$ Then solving the system $$u^2 = A(u-1)^2 + B(u-1) + C$$ My solutions manual takes the same approach, but somehow has a $u$ instead of a $u^2$ everywhere, and eventually ends up with something quite different. Where am I going wrong here? AI: You forgot $du=2e^{2t}dt$, thus $dt=\dfrac{du}{2u}$. There you have the missing $u$.
H: Differentiability only at an isolated point Do functions exist which are differentiable at a point, but not in any neighborhood of this point? Is $e^{\frac{1}{W(x)-2}}$, where $W$ is the Weierstrass function, perhaps an example of such a function? AI: If we multiply the Weierstrass function by $x^2$, we get a function which is continuous everywhere, and differentiable at $0$ but nowhere else.
H: notations of generators and relations I need to understand the following about generators and relations notations: Is $\langle a,b \mid a^kb^l\rangle =\langle a,b\mid a^k=b^l\rangle =\langle a,b\mid a^k,b^l\rangle$? Is $ \langle a,b\mid a^k=b^l\rangle =\langle a,b\mid a^k=b^l=1\rangle$? (if not, why?) Which of them is $\mathbb Z_k \ast \mathbb Z_l$? AI: In a presentation of a group the relations are understood to be equal to the identity. So in your examples $\langle a, b| a^kb^l\rangle$ is generated by $a$ and $b$ and $a^kb^l=e\Rightarrow a^k=b^{-l}$. From this we can see that the three presentations in the top row are not the same. Note however that even though the presentations are not the same does not mean that the groups are not isomorphic. In particular, $\langle a, b| a^kb^l\rangle\neq\langle a, b| a^kb^{-l}\rangle=\langle a, b| a^k=b^l\rangle$, where the $\neq$ means that presentations are not the same, but the groups are isomorphic. With regards to your final question $\mathbb{Z}_k\ast\mathbb{Z}_l\simeq\langle a, b| a^k, b^l\rangle$
H: Slope of secant line vs slope at the mid-point I am looking for broad sufficient conditions that allow comparison of $\displaystyle {{f(b)-f(a)}\over {b-a}}$ and $f'\left({{a+b}\over 2}\right)$. As in when $\displaystyle {{f(b)-f(a)}\over {b-a}} > f'\left({{a+b}\over 2}\right)$. If there are generalizations to higher-order divided differences and derivatives that would be of interest too. AI: Let $g(x) = f'(x)$. $f(b)-f(a) = \int_a^b g(x)\;dx $, $m = \frac{a+b}{2}$, and $d = \frac{b-a}{2}$, so $\dfrac{f(b)-f(a)}{2d} = \dfrac{1}{b-a}\int_a^b g(x)\;dx $ and $\begin{align} D &=\dfrac{f(b)-f(a)}{b-a}-f'(m)\\ &= \dfrac{1}{2d}\int_a^b g(x)\;dx-g(m)\\ &= \dfrac{1}{2d}\int_a^b (g(x)-g(m))\;dx\\ \end{align} $ Let's see what this is for $g(x) = x^k$, the standard integrand. $\begin{align} D &= \dfrac{1}{2d}\int_a^b x^k\;dx-m^k\\ &= \dfrac{x^{k+1}}{2d(k+1)}\big|_a^b-m^k\\ &= \dfrac{b^{k+1}-a^{k+1}}{2d(k+1)}-m^k\\ &= \dfrac{b^{k+1}-a^{k+1}}{(b-a)(k+1)}-m^k\\ &= \dfrac{b^{k+1}-a^{k+1}}{(k+1)(b-a)}-\dfrac{(b+a)^k}{2^k}\\ \end{align} $ If $k=1$, $\begin{align} D &= \dfrac{b^{2}-a^{2}}{2(b-a)}-\dfrac{b+a}{2}\\ &= \dfrac{b+a}{2}-\dfrac{b+a}{2}\\ &=0\\ \end{align} $ If $k=2$, $\begin{align} D &= \dfrac{b^{3}-a^{3}}{(b-a)(3)}-\left(\dfrac{b+a}{2}\right)^2\\ &= \dfrac{b^2+ba+a^2}{3}-\dfrac{(b+a)^2}{4}\\ &= \dfrac{4(b^2+ba+a^2)-3(b^2+2ab+a^2)}{12}\\ &= \dfrac{b^2-2ba+a^2}{12}\\ &= \dfrac{(b-a)^2}{12}\\ &>0\\ \end{align} $ If $a = 0$ and $k > 1$, $\begin{align} D &=\dfrac{b^{k+1}}{b(k+1)}-\left(\frac{b}{2}\right)^k\\ &=\dfrac{b^{k}}{k+1}-\frac{b^k}{2^k}\\ &=b^{k}\left(\dfrac{1}{k+1}-\dfrac{1}{2^k}\right)\\ &> 0\\ \end{align} $ If $0 < a < b$ and $k > 1$, let $x = a/b$.
$\begin{align} S &=\dfrac{b^{k+1}-a^{k+1}}{(k+1)(b-a)}\\ &=\dfrac{b^{k+1}(1-x^{k+1})}{(k+1)b(1-x)}\\ &=\dfrac{b^{k}(1-x^{k+1})}{(k+1)(1-x)}\\ \end{align} $ and $\begin{align} m^k &= \dfrac{(b+a)^k}{2^k}\\ &= \dfrac{b^k(1+x)^k}{2^k}\\ \end{align} $ We would like to show that, for $k > 1$ and $0 < x < 1$, $\dfrac{1-x^{k+1}}{(k+1)(1-x)} >\dfrac{(1+x)^k}{2^k} $ or, if $0 < x < 1$, $2^k(1-x^{k+1})>(k+1)(1-x)(1+x)^k$, or $2^k\sum_{i=0}^k x^i > (k+1)(1+x)^k$. For $k=2$, this is $4(1+x+x^2) > 3(1+2x+x^2)$ or $1 - 2x + x^2 > 0$ which is true. Suppose this is true for $k$. Then, multiplying by $1+x$, $\begin{align} (k+1)(1+x)^{k+1} &< 2^k(1+x)\sum_{i=0}^k x^i \\ &= 2^k\left(\sum_{i=0}^k x^i+\sum_{i=0}^k x^{i+1}\right) \\ &= 2^k\left(\sum_{i=0}^k x^i+\sum_{i=1}^{k+1} x^{i}\right) \\ &= 2^k\left(1+x^{k+1}+2\sum_{i=1}^k x^i\right) \\ &= 2^k\left(-1-x^{k+1}+2\sum_{i=0}^{k+1} x^i\right) \\ &= -2^k(1+x^{k+1})+2^{k+1}\sum_{i=0}^{k+1} x^i \\ \end{align} $ If it is false for $k+1$ and true for $k$, then $\begin{align} (k+2)(1+x)^{k+1} &\ge 2^{k+1}\sum_{i=0}^{k+1} x^i \\ &>(k+1)(1+x)^{k+1}+2^k(1+x^{k+1})\\ \end{align} $ or $(1+x)^{k+1} >2^k(1+x^{k+1}) $. We will use Jensen's inequality to show that $(1+x)^{k+1} \le 2^{k}(1+x^{k+1}) $, so if the inequality is true for $k$, it is true for $k+1$. Jensen's inequality states that, if $h$ is convex and each $a_i > 0$, then $h\left(\dfrac{\sum a_i x_i}{\sum a_i}\right) \le \dfrac{\sum a_i h(x_i)}{\sum a_i} $ Set $h(x) = x^k$, $x_i = x^i$, and $a_i = \binom{k}{i}$. Then $\sum a_i = \sum \binom{k}{i} = 2^k$, $\sum a_i x_i = \sum \binom{k}{i} x^i = (1+x)^k$, so Jensen becomes $h\left(\dfrac{(1+x)^k}{2^k}\right) \le \dfrac{\sum a_i h(x_i)}{\sum a_i} $ or $\left(\dfrac{(1+x)^k}{2^k}\right)^k \le \dfrac{\sum \binom{k}{i} x^{ik}}{2^k} \le \dfrac{(1+x^k)^k}{2^k} $ or $\dfrac{(1+x)^k}{2^k} \le \dfrac{1+x^k}{2} $ or $(1+x)^k \le 2^{k-1}(1+x^k) $. Putting $k+1$ for $k$, this is $(1+x)^{k+1} \le 2^{k}(1+x^{k+1}) $.
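A numeric spot check of the conclusion (the example values are mine): for $f(x)=x^{k+1}/(k+1)$, so that $g=f'$ is $x^k$, the difference $D$ vanishes for $k=1$ and is positive for $k>1$ on $0\le a<b$:

```python
def D(k, a, b):
    f = lambda x: x ** (k + 1) / (k + 1)     # antiderivative of g(x) = x^k
    secant = (f(b) - f(a)) / (b - a)
    midpoint = ((a + b) / 2) ** k            # g evaluated at the midpoint
    return secant - midpoint

assert abs(D(1, 0.3, 2.0)) < 1e-12           # k = 1: exactly zero
assert D(2, 0.0, 1.0) > 0
assert abs(D(2, 0.0, 1.0) - 1 / 12) < 1e-12  # matches (b-a)²/12 = 1/12
assert D(5, 0.2, 3.0) > 0
```

In general $D=\frac1{2d}\int_a^b(g(x)-g(m))\,dx$ is positive for any strictly convex $g$, which is the broad sufficient condition the question asks for.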
H: Which of the following are in R(T), when R(T) is the range of t. Let T: P2→P3 be the linear transformation given by the formula: T(p(x)) = xp(x) Which of the following are in R(T)? a) x+x^2 b) 1+x c) 3-x^2 I just need some clarification on whether I'm doing this correctly. If I'm not doing this correctly, then please explain the process I need to take. For example, I think the problem is solved the following way: a) T(x+x^2) = x(x+x^2) = x^2+x^3. If I divide this by x, I get x+x^2 - so does this mean that this is in R(T)? b) T(1+x) = x(1+x) = x+x^2. If I divide this by x, I get 1+x, so I'm thinking this is in R(T). c) T(3-x^2) = x(3-x^2) = 3x-x^3, when divided by x is equal to 3-x^2, so I think this is in R(T). AI: In order to show that $x+x^2$ is in the range of $T$, you need to find a polynomial $p(x)$ in $P_2$ such that $T(p(x))=x+x^2$. Since $x+x^2=x(1+x)$, you can take $p(x)=1+x$: that’s certainly in $P_2$, and $T(1+x)=x(1+x)=x+x^2$. Note that $T(p(x))$ always has a factor of $x$, so every polynomial in the range of $T$ must be divisible by $x$. That should help you finish off the question in short order.
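With polynomials encoded as coefficient lists (a representation I'm choosing for illustration), the answer's observation becomes a one-line membership test — an element of $R(T)$ must have zero constant term:

```python
def T(p):
    # p = [c0, c1, c2] represents c0 + c1·x + c2·x² in P2; T(p) = x·p(x)
    return [0] + p

assert T([1, 1, 0]) == [0, 1, 1, 0]       # T(1+x) = x + x²

def in_range_of_T(q):
    # q = [c0, c1, c2, c3] in P3; R(T) = polynomials divisible by x
    return q[0] == 0

assert in_range_of_T([0, 1, 1, 0])        # (a) x + x² = T(1 + x)
assert not in_range_of_T([1, 1, 0, 0])    # (b) 1 + x: constant term 1
assert not in_range_of_T([3, 0, -1, 0])   # (c) 3 - x²: constant term 3
```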
H: Why is the property called continuity of measure? If $\left\{A_k \right\}_{k=1}^{\infty}$ is an ascending collection of measurable sets then $m(\bigcup_{k=1}^{\infty} A_k) = \lim_{k\to\infty} m(A_k)$. What does it have to do with continuity? AI: Intuitively, we can think of continuity as a property of a mapping where "limiting processes are preserved." For instance, with a continuous function $f:\Bbb{R}\rightarrow\Bbb{R}$, we have for any convergent sequence $(x_n)$ that $$ f(\lim_{n\rightarrow\infty}x_n)=\lim_{n\rightarrow\infty}f(x_n) $$ This "continuity of measure" property is similar, since for an ascending collection of sets we can think of $$ \bigcup_{n=1}^\infty A_n $$ as the "limit" $$ \lim_{n\rightarrow\infty}A_n $$ So "continuity of measure" is really saying $$ \mu(\lim_{n\rightarrow\infty}A_n)=\lim_{n\rightarrow\infty}\mu(A_n) $$ which is just like the continuity property for functions given above.
H: True/False questions (from a real analysis course) The source of these T/F problems is this. http://www.math.drexel.edu/~rboyer/courses/math505/true_false1.pdf (I am self-studying and found these problems by chance.) Could you tell me if my answers are correct and give me some hints on the problems that I couldn't solve, please? 1. No. There is also 0. 2. No. {n | n is a natural number} has no least upper bound. 3. Almost yes. For every element a in A, a <= b where b is a supremum of A. Then for all a, 1/a >= 1/b. Hence, as long as b is not 0, it has a greatest lower bound. 4. Yes. Q is equal to the collection of Ai, where each Ai is an interval in Q. We know Q is countable, and each interval Ai has countably many elements. Hence, there should be countably many intervals. 5. Yes. There are countably many rationals in R, and countably many rationals in each subset of R, so there should be a countable collection of intervals. 6. No. Since the rationals are dense in R, if there exist two irrational elements a and b in an open subset of R (if there exists only one element, it is not open), we can always find a rational number between a and b. 7. Yes. By Lindelöf's lemma. 8. Yes. For example, subset: {1/2}, open cover: (0,1). 9. No. Counterexample: subset: {(0, 1/n) | n goes to infinity}. Then its open cover should be the collection of (0, 1/n). It doesn't have finite subcovers. 10. ? 11. ? AI: $3.$ is false. Take $(-\infty,0)$ $5.$ is false. Each interval in $\Bbb R$ is uniquely determined by its endpoints. $9.$ is oddly phrased. Any arbitrary subset has the trivial finite cover $\{\Bbb R\}$. $10.$ is Heine-Borel. $11.$ is false. Take $[0,1]$, and cover it by the compacts $$\left\{\left[0,1-\frac1n \right]:n=2,3,\ldots\right\}\cup\{\{1\}\}$$
H: Find area of dark part Let us consider the following picture. We have the following information: we have a circular sector with central angle $90^\circ$, and in this sector there is an inscribed small circle which touches the arc of the sector and both radii; the radius of this small circle is equal to $\sqrt{2}$. We should find the area of the dark part. My approach is the following: let us connect radii of length $\sqrt{2}$ to the points where the small circle touches the radii of the sector; we will get a square with side $\sqrt{2}$. Clearly the area of the dark part is the area of the sector minus the area of the square, minus the area of the small sector (inside the small circle), and minus also the area of the small part below, which I think also represents a sector with central angle $90^\circ$. But my question is: what is the radius of the big sector? Is it $2\sqrt{2}$? Or does the radius of the small circle divide the given radius of the big sector into two parts? Also please give me a hint whether my approach is good or wrong. Thanks in advance. AI: Hints: Let $r = \sqrt{2}$ be the small radius. The big radius is equal to the diagonal of the small square of side $r$ plus $r$, that is $\sqrt{2}\cdot \sqrt{2} + \sqrt{2} = 2+ \sqrt{2}$. The small area near the origin is the area of the square minus one quarter of the area of the circle. The dark area is whatever is left, divided by two (the figure is symmetric). I hope this helps ;-) Edit: For a check: $\frac{1}{2}\Bigg(\frac{\pi R^2}{4}-\pi r^2-\Big(r^2-\frac{\pi r^2}{4}\Big)\Bigg)$, where $R = (\sqrt{2}+1)r$.
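A quick Monte Carlo sanity check of the closed-form value (a sketch; it assumes the configuration described above: quarter-disc of radius $R$ in the first quadrant, small circle of radius $r$ centered at $(r,r)$, the two dark pieces being the quarter-disc minus the small circle minus the corner square $[0,r]^2$):

```python
import math, random

r = math.sqrt(2)
R = (math.sqrt(2) + 1) * r          # big radius: 2 + sqrt(2)

# closed form from the check formula above; it simplifies to sqrt(2)*pi/2 - 1
dark = 0.5 * (math.pi * R**2 / 4 - math.pi * r**2 - (r**2 - math.pi * r**2 / 4))

# Monte Carlo over the bounding box [0, R]^2: the two dark pieces together
# are the quarter-disc minus the small circle minus the square [0, r]^2.
random.seed(1)
N = 200_000
hits = sum(
    1
    for x, y in ((random.uniform(0, R), random.uniform(0, R)) for _ in range(N))
    if x * x + y * y <= R * R                      # inside the sector
    and (x - r) ** 2 + (y - r) ** 2 > r * r        # outside the small circle
    and not (x <= r and y <= r)                    # outside the corner square
)
estimate = 0.5 * hits / N * R * R                  # one dark piece, by symmetry

print(dark, estimate)   # both near 1.22
```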
H: Equilibrium Points of ODE I am given the following two equations as models for male and female population $\dfrac {df} {dt}$ = $\dfrac {mf} {m+f}$$B_f-fD_f$ $\dfrac {dm} {dt}$ = $\dfrac {mf} {m+f}$$B_m-mD_m$ assume $D_m$=$D_f$=$D$ The equilibrium points of the previous two equations form a line in the $f$-$m$ plane and I am trying to find the equilibrium ratio of $m/f$. I have tried moving everything to one side for both of them and then integrating and solving for $m$ and $f$ respectively, but it seems to just get more complicated. AI: (For simplicity I will assume that $B_m\neq 0 \neq B_f$). An equilibrium point must satisfy $\dfrac {df} {dt}=0$ and $\dfrac {dm} {dt}=0$, i.e., $\dfrac {mf} {m+f}$$B_f$ = $fD$ $\dfrac {mf} {m+f}$$B_m$ = $mD$ Notice that this is trivially satisfied by $m=f=0$ meaning the origin is always an equilibrium point. Moreover, if $D=0$ then it is necessary that either $m=0$ or $f=0$. In other words, the $m$ and $f$ axes are the set of equilibrium points. Now the only case left to worry about is when $D\neq 0$. Since we already know that the origin is an equilibrium point, let us ignore it for now and we'll be able to divide the second equation by the first and yield, $\dfrac{m}{f}=\dfrac{B_m}{B_f}$ which is the only other possible family of equilibrium points. (Substituting this ratio back into either equation shows that these are genuine equilibria exactly when $D=\dfrac{B_fB_m}{B_f+B_m}$; that is the condition under which the whole line consists of equilibrium points, as the problem's premise assumes.) So in summary, If $D=0$, then the axes are the equilibrium points. If $D\neq 0$, then the line $f=\dfrac{B_f}{B_m}m$ is the set of equilibrium points.
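A quick numeric check of the ratio (illustrative parameter values of my own choosing; $D$ is set to $\frac{B_fB_m}{B_f+B_m}$ so that a whole line of equilibria exists, as the problem assumes):

```python
# Check: with D = Bf*Bm/(Bf+Bm), every point of the line m/f = Bm/Bf
# is an equilibrium of both equations (Bf, Bm are illustrative values).
Bf, Bm = 3.0, 5.0
D = Bf * Bm / (Bf + Bm)

def df_dt(f, m):
    return m * f / (m + f) * Bf - f * D

def dm_dt(f, m):
    return m * f / (m + f) * Bm - m * D

for f in (0.5, 1.0, 7.0, 42.0):
    m = (Bm / Bf) * f               # a point on the candidate line
    assert abs(df_dt(f, m)) < 1e-9
    assert abs(dm_dt(f, m)) < 1e-9
print("the whole line m/f = Bm/Bf consists of equilibria")
```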
H: Subspace over a line in a plane Consider a line $L$ in a plane. Describe the topology $L$ inherits as a subspace of the following. $\mathbb{R}_k \times \mathbb{R}_k$ where $\mathbb{R}_k$ is the $K$-topology on $\mathbb{R}$. $(\mathbb{R} \times \mathbb{R})_d$ with the dictionary order topology over $\mathbb{R} \times \mathbb{R}$. Thank you for your suggestion. AI: In both cases it depends on the line $L$. The first problem is much harder than the second, so I’d look at the second problem first. In preparation, note that the topology on $\Bbb R_k$ is the same as the usual topology on $\Bbb R$ except at $0$. If $x\in\Bbb R$ and $A\subseteq\Bbb R$, and if $x\ne 0$, then $x$ is a limit point of $A$ in the Euclidean topology if and only if $x$ is a limit point of $A$ in the $K$-topology. $0$ is different: $0$ is a limit point of $K$ in the Euclidean topology, but not in the $K$-topology. I’ll call $0$ a special point. If $L$ is parallel to the $y$-axis, show that $\langle x,y\rangle\mapsto y$ is a homeomorphism and conclude that $L$ is homeomorphic to $\Bbb R_k$. If $L$ is parallel to the $x$-axis, or if $L$ passes through the origin but is not vertical, show that $\langle x,y\rangle\mapsto x$ is a homeomorphism and conclude that $L$ is homeomorphic to $\Bbb R_k$. Finally, if $L$ is not parallel to either axis and does not pass through the origin, then $L$ has an $x$-intercept and a $y$-intercept, and they are distinct points. This is the hardest case: in its relative topology $L$ has two special points, the two intercepts, rather than just one, so it’s not homeomorphic to $\Bbb R_k$. Moreover, the $K$-like set that makes the $y$-intercept special is always on the right of the intercept, but the $K$-like set that makes the $x$-intercept special can be on either side of the intercept, depending on whether the slope of $L$ is positive or negative.
If $L$ is not parallel to the $y$-axis, you can use the fact that for any $p=\langle x,y\rangle\in\Bbb R^2$, the set $\{x\}\times(y-1,y+1)$ is an open nbhd of $p$ in the dictionary order topology on $\Bbb R^2$ to show that $L$ has a very simple topology. If $L$ is parallel to the $y$-axis, show that it’s homeomorphic to $\Bbb R$ with the usual Euclidean topology.
H: If $p(x) =x^4 - x^3 +px^2 -4x +q$, find $p$ and $q$ if $p(0)=3$ and $p(-1)=11$ If $p(x) =x^4 - x^3 +px^2 -4x +q$, find $p$ and $q$ if $p(0)=3$ and $p(-1)=11$ Can someone please teach me how to do this question? thanks AI: Put the values $x=0$ and $x=-1$ into $p(x)$. From $p(0)$ you get $q = 3$. Now put $x=-1$: $p(-1) = 1 + 1 + p + 4 + q = 6 + p + q = 11$, so $p + q = 5$ and hence $p = 2$.
H: Designing very simple function I don't have much mathematical background except for high school, and I'm struggling to design a very simple function. I need a function f(x, y) that, based on the absolute difference of x and y, would return a number between 0 and 4. If x and y are equal, it would return 4, let's say. If they differ by more than some specific number (like 8) it would return a value that is zero, or one that is very close to zero, and the same goes for any larger difference. It is also a requirement that the value the function returns does not change linearly; let's say if the difference is 0, it would be 4, and if it's 2-3 it would already return ~2. Then I need another function that does the same thing, but instead of being based on the absolute difference of x and y, it is based on their percentage difference. Like, if x is ~35% different from y it should return 0, and if they are equal it should return 4, again changing in a non-linear way. I'm probably terrible at describing this - I need it for a system that compares vehicles from a database. So this is one component of it: let's say VehicleA has year of manufacturing 2000, and VehicleB is 2004, and I want to assign a weight for how similar they are. My current off-the-cuff solution: (20 / (5 + ABS(2000 (this is `x`) - 2004 (this is `y`)) * 2)) The part with the percentage difference is for when I compare vehicle engine size and power. I tried but did not succeed in creating such a function.
I "designed" it by playing with examples; I know there must be a more scientific way, but I don't know where to look :( AI: Your function consists of "steps"; there are 3 usual methods to achieve this effect: the conditional or piecewise defined function $$f(x,y) = \begin{cases} 4 & \text{ for } |x-y| = 0 \\ 3 & \text{ for } 0 < |x-y| \leq 1 \\ 2 & \text{ for } 1 < |x-y| \leq 2 \\ 1 & \text{ for } 2 < |x-y| \leq 4 \\ 0 & \text{ otherwise} \end{cases}$$ the step function or characteristic function $$f(x,y) = \chi_{\{0\}}(|x-y|)+\chi_{[0,1]}(|x-y|)+\chi_{[0,2]}(|x-y|)+\chi_{[0,4]}(|x-y|)$$ the floor and ceiling functions $$f(x,y) = \left\lfloor\frac{8}{|x-y|+2}\right\rfloor$$ Play with it and tweak it to your needs. It is worth noting that all the above are, in fact, piecewise somewhere inside. For the other function, change $|x-y|$ to what you need, i.e. percentage, ratio, whatever. I hope this helps ;-)
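If a smooth (rather than stepped) profile is preferred, one common choice is a Gaussian-shaped score scaled to $[0,4]$. This is just one possible design, and the width parameters below ($3$ for the absolute version, $0.15$ for the percentage version) are illustrative guesses to be tuned, not values fixed by the question:

```python
import math

def similarity_abs(x, y, width=3.0):
    """Score in (0, 4]: 4 when x == y, decaying non-linearly with |x - y|."""
    return 4.0 * math.exp(-((x - y) / width) ** 2)

def similarity_pct(x, y, width=0.15):
    """Same idea, based on the relative difference |x - y| / max(|x|, |y|)."""
    denom = max(abs(x), abs(y))
    if denom == 0:
        return 4.0                       # both zero: identical
    return 4.0 * math.exp(-(abs(x - y) / denom / width) ** 2)

print(similarity_abs(2000, 2000))   # 4.0
print(similarity_abs(2000, 2002))   # roughly 2.6: near the "~2 at difference 2-3" wish
print(similarity_abs(2000, 2004))   # roughly 0.68: a 4-year gap already scores low
```

Shrinking `width` makes the drop-off steeper; the percentage version behaves the same way in relative terms.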
H: If $p(x)=ax^3 -2x^2 +bx+c$, find $a, b$ and $c$ if $p(0)=12$, $p(-1)=3$ and $p(2)=36$ If $p(x)=ax^3 -2x^2 +bx+c$, find $a, b$ and $c$ if $p(0)=12$, $p(-1)=3$ and $p(2)=36$ Can someone please teach me how to do this question thanks! AI: From $p(0)=12$ you get $c=12$, and $p(-1)=3$ gives $-a-2-b+12=3$, i.e. $a+b=7$. Now use $p(2)=36$: you'll have $8a - 8 + 2b + 12 = 36$. Simplify it, you'll get $4a+b=16$. Together with $a+b=7$, this gives $a=3$, $b=4$.
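Since the three conditions are linear in $a$, $b$, $c$, the values can be double-checked mechanically (a small sketch with exact rational arithmetic):

```python
from fractions import Fraction

# p(x) = a x^3 - 2 x^2 + b x + c with p(0) = 12, p(-1) = 3, p(2) = 36.
c = Fraction(12)               # from p(0) = c = 12
# p(-1) = -a - 2 - b + 12 = 3   ->   a + b = 7
# p(2)  = 8a - 8 + 2b + 12 = 36 ->  4a + b = 16
a = Fraction(16 - 7, 3)        # subtracting the equations: 3a = 9
b = 7 - a

def p(x):
    return a * x**3 - 2 * x**2 + b * x + c

print(a, b, c)                             # 3 4 12
assert (p(0), p(-1), p(2)) == (12, 3, 36)  # all three conditions hold
```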
H: A problem with the proof of a proposition I have a problem with the proof of Proposition 5.1. of the article of Ito (Noboru Itô. On finite groups with given conjugate types. I. Nagoya Math. J., 6:17–28, 1953.). I don't know what "e" and "e-1" are in the proof. I'd be really grateful if someone helps me. You can find the pdf file in this link. Link AI: It would appear that $e$ is a group. Whenever the author writes $e-1$ they actually have "some group":$e-1$, so they just mean the index of $e$ in the group, minus one. As to what $e$ is I am not sure, but I would guess it is related to $E$. I am not even sure whether $E$ is a group or an element, but most of the time it appears to be an element, and I would guess that $e$ is the group generated by $E$. Edit: As Derek Holt has commented below, $e$ is probably the trivial group, and hence $[G:e]$ is just the order of $G$.
H: If $A^2+2A+I_n=O_n$ then $A$ is invertible Let $A$ be an $n\times n$ matrix and $I_n, O_n$ the identity and null matrix respectively. How to prove that if $A^2+2A+I_n=O_n$ then $A$ is invertible? AI: Hint: Notice that $A(A+2I_n)=-I_n$. More generally, you can prove in the same way that if $P(A)=0$ for some polynomial $P$ satisfying $P(0) \neq 0$, then $A$ is invertible.
H: Proof: Invariant angle measure - same result for any circle drawn. Below I have quoted Wikipedia. I am particularly interested in the statement: The value of $\theta$ thus defined is independent of the size of the circle: if the length of the radius is changed then the arc length changes in the same proportion, so the ratio $\frac{s}{r}$ is unaltered. That is, for any circle drawn with a pair of compasses, centered at the vertex, the arc extending from the start ray to the end ray has length $s=r\theta$, satisfying the equation $\theta=\frac{s}{r}$. This statement is fundamental, since it states that no matter how we draw a circle to measure the angle, we always get the exact same answer $\theta$. It is not enough to prove this by saying: the radian is defined $\theta=\frac{s}{r}$. This does not show that for any circle drawn the ratio is as stated. Please check the image below: Quote Wikipedia: In order to measure an angle $\theta$, a circular arc centered at the vertex of the angle is drawn, e.g. with a pair of compasses. The length of the arc $s$ is then divided by the radius of the arc $r$, and possibly multiplied by a scaling constant k (which depends on the units of measurement that are chosen): $\theta = k \frac{s}{r}$. The value of $\theta$ thus defined is independent of the size of the circle: if the length of the radius is changed then the arc length changes in the same proportion, so the ratio $\frac{s}{r}$ is unaltered. AI: The proof relies on a careful definition of what length is. The notion of length is extremely subtle since it is sensitive to small local changes (two curves can be very close to each other but have radically different lengths). In a general manifold, the notion of length depends on a metric structure. The situation you are describing takes place in $\mathbb R^2$ viewed as a Euclidean space. Thus, the length of a curve $\gamma:[a,b]\to \mathbb R^2$ is given by $\int_a^b\sqrt{v}\,dt$, where $v(t)=\gamma_1'(t)^2+\gamma_2'(t)^2$. 
Now, a parametrization of a circle is given by $\gamma:[0,2\pi]\to \mathbb R^2$, with $\gamma(t)=(p_1+r\cdot \cos(t),p_2+r\cdot \sin(t))$, $r>0$. This traces out a circle of radius $r$ with center the point $(p_1,p_2)$. It now becomes a simple matter to use the formula above for any arc of such a circle and see that changing $r$ leads to a proportional change in the length of the arc.
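This invariance is easy to check numerically: approximating the arc-length integral for the circle parametrization above over a fixed angle interval, the ratio $s/r$ comes out the same for every radius (a sketch using a plain midpoint Riemann sum):

```python
import math

def arc_length(r, t0, t1, steps=10_000):
    """Arc length of t -> (r cos t, r sin t) on [t0, t1] via a midpoint Riemann sum."""
    h = (t1 - t0) / steps
    total = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * h
        dx, dy = -r * math.sin(t), r * math.cos(t)   # gamma'(t)
        total += math.hypot(dx, dy) * h              # sqrt(v(t)) dt
    return total

theta = 0.7                                          # a fixed angle interval
ratios = [arc_length(r, 0.0, theta) / r for r in (1.0, 2.5, 10.0)]
print(ratios)    # every ratio equals theta = 0.7, independent of r
```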
H: Confused with Eigenvalues and Eigenvectors and Vector transformations Hello fellow mathematicians, I am studying "Eigenvalues and Eigenvectors" at this point and I need to understand something here: I am actually performing automatic operations on finding them, but I don't really understand what they are and what they are used for. Those operations really look like transformations. What is the difference between transforming a matrix and finding its Eigenvectors and Eigenvalues anyway? Thank you AI: There seems to be some rather deep confusion here and I'm not sure where to start... so let's just begin: (1) Eigenvectors are not "produced" by a matrix. They are vectors that fulfill a very precise relation with respect to an operator/square matrix. (2) Eigenvalues don't "scale up" transformations/matrices, whether "transformed" or not (whatever this means in this context). Eigenvalues are scalars that satisfy a certain polynomial equation very closely related to a transformation/matrix. (3) Eigenvectors are not transformations. Read (1) above. Eigenvalues/eigenvectors are names coming from the German "eigen", meaning (its) "own" or "self", "inherent or proper", etc.
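The "very precise relation" in (1) is $Av=\lambda v$: the matrix merely rescales an eigenvector by the eigenvalue, without changing its direction. A tiny self-contained check on a $2\times 2$ example (the matrix and its eigenpairs are my own illustrative choices):

```python
# the defining relation: A v = lambda v -- A only rescales an eigenvector
A = [[2.0, 1.0],
     [1.0, 2.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# for this symmetric matrix the eigenpairs are known in closed form
for lam, v in [(3.0, [1.0, 1.0]), (1.0, [1.0, -1.0])]:
    assert matvec(A, v) == [lam * x for x in v]
print("A v = lambda v holds for both eigenpairs: A never changes the direction of v")
```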
H: why is the sigma algebra generated by the null sets and Brownian Motion right continuous? I mean, why does the generated filtration satisfy the definition of right continuity? AI: There is the following result, I quote from Karatzas/Shreve: For a $d$-dimensional strong Markov Process $X=\{X_t;\mathcal{F}_t^X;t\ge 0\}$ with initial distribution $\mu$, the augmented filtration $\{\mathcal{F}_t^\mu\}$ is right-continuous. This is Proposition 2.7.7. in Karatzas and Shreve. Since Brownian Motion has the strong Markov Property, you get your desired result. A proof of this can also be found in the book.
H: Bounded sequence with divergent Cesaro means Is there a bounded real-valued sequence with divergent Cesaro means (i.e. not Cesaro summable)? More specifically, is there a bounded sequence $\{w_k\}\in l^\infty$ such that $$\lim_{M\rightarrow\infty} \frac{\sum_{k=1}^M w_k}{M}$$ does not exist? I encountered this problem while studying the limit of average payoffs criterion for ranking payoff sequences in infinitely repeated games. AI: Consider $1,-1,-1,1,1,1,1,-1,-1,-1,-1,-1,-1,-1,-1,\cdots$ (one $1$, two $-1$, four $1$, eight $-1$, ...) Then $$\frac{1-2+2^2-2^3+\cdots+(-2)^n}{1+2+2^2+\cdots+2^n}=\frac{1-(-2)^{n+1}}{3(2^{n+1}-1)}$$ This sequence of ratios is divergent. So $(\sum_{k\le M}a_k)/M$ has a divergent subsequence, which implies that the Cesaro mean of $a_n$ does not exist.
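One can also watch the divergence numerically: building the block sequence and taking the running averages at the block boundaries, the averages alternate between values tending to $+1/3$ and values exactly equal to $-1/3$ (a sketch with exact rational arithmetic):

```python
from fractions import Fraction
from itertools import accumulate

# one +1, two -1, four +1, ...: the n-th block has 2^n copies of (-1)^n
seq = []
for n in range(15):
    seq.extend([(-1) ** n] * 2 ** n)

partial = list(accumulate(seq))
ends = list(accumulate(2 ** n for n in range(15)))   # 1, 3, 7, 15, ...
avgs = [Fraction(partial[e - 1], e) for e in ends]   # averages at block ends

# odd-numbered block ends give exactly -1/3; even-numbered ones tend to +1/3,
# so the running averages oscillate and the Cesaro mean does not exist
print([float(a) for a in avgs[-4:]])
```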
H: Finding the total differential for the matrix function $F(A) = A^T A - \mathbb{1}_{n \times n}$ I am having difficulties understanding how to find the total differential for the function $F: Mat_{n\times n}(\mathbb{R}) \rightarrow S_{n\times n}(\mathbb{R})$ where $S_{n\times n}(\mathbb{R})$ is the set of all symmetric matrices over $\mathbb{R}$ and $F(A) = A^T A - \mathbb{1}_{n \times n}$. I found that it should be given by $DF_{X}(\xi) = \xi^T X + X^T \xi$ (at the point $X$). I am pretty sure that this is pretty straightforward, but I am unable to find a way to this result. Any help is appreciated :) AI: Let $$f\colon Mat_{n\times n}(\mathbb{R})\rightarrow Mat_{n\times n}(\mathbb{R})\times Mat_{n\times n}(\mathbb{R}),\ A\mapsto (A,A)$$ and $$g\colon Mat_{n\times n}(\mathbb{R})\times Mat_{n\times n}(\mathbb{R})\rightarrow Mat_{n\times n}(\mathbb{R}),\ (A,B)\mapsto A^tB$$ then it's clear that $f$ is linear and $g$ is bilinear and $F=g\circ f+Constant$, so $$DF(A)H=Dg(f(A))Df(A)H=Dg(f(A))(H,H)=H^tA+A^tH$$
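A finite-difference sanity check of $DF(A)H = H^tA + A^tH$ on a $2\times 2$ example (plain-list matrix helpers, no external libraries; the particular matrices are arbitrary illustrative choices):

```python
def T(M):                     # transpose
    return [list(row) for row in zip(*M)]

def mul(X, Y):                # matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def scale(c, X):
    return [[c * x for x in row] for row in X]

I2 = [[1.0, 0.0], [0.0, 1.0]]

def F(A):                     # F(A) = A^T A - I
    return add(mul(T(A), A), scale(-1.0, I2))

A = [[1.0, 2.0], [0.5, -1.0]]
H = [[0.3, -0.7], [0.2, 0.1]]

t = 1e-6
numeric = scale(1.0 / t, add(F(add(A, scale(t, H))), scale(-1.0, F(A))))
exact = add(mul(T(H), A), mul(T(A), H))          # H^t A + A^t H

err = max(abs(numeric[i][j] - exact[i][j]) for i in range(2) for j in range(2))
print(err)   # O(t): the first-order term matches H^t A + A^t H
```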
H: Do maps between topological spaces somehow induce maps between Banach spaces? If $X,Y$ are topological spaces and $h:X\rightarrow Y$ is a continuous map, is there some sort of induced map \begin{align*} h':C_b(X)\rightarrow C_b(Y) \end{align*} (or in the other direction) where $C_b(X)$ is the Banach space of bounded, continuous, real-valued functions on $X$? If $h:X\rightarrow Y$ is a quotient map, is there an induced (quotient) map between the associated Banach spaces? AI: There is such an induced map. However, it is in the other direction. If $h:X\to Y$ is a continuous map between topological spaces, then for each $f\in C(Y)$, we have $f\circ h\in C(X)$ since compositions of continuous maps are themselves continuous. This induces a map \begin{equation} f\in C(Y)\xrightarrow{h'}f\circ h\in C(X). \end{equation} This is the starting point of so much wonderful mathematics, such as differential topology, differential geometry, spectral theory, and non-commutative geometry. Sorry to get emotional here, but I still remember the excitement and joy when I first read about the Gelfand-Naimark theorem in my junior year. Really, they just studied the induced map of this induced map. According to my shallow knowledge, this shows a link between the geometry of a set and the geometry of the function space over the set. When we know much about the set, we can use this link to study the functions. When we know more function theory, then this sheds light on properties of the underlying set.
H: How to show that $f(x,y)$ is continuous. How to show that $f(x,y)$ is continuous. $$f(x,y)=\frac{4y^3(x^2+y^2)-(x^4+y^4)2x\alpha}{(x^2+y^2)^{\alpha +1}}$$ for $\alpha <3/2$. Please show me Thanks :) AI: Notice that $|x|,|y|\leq (x^2+y^2)^{1/2}=||(x,y)||$ so we have \begin{align}|f(x,y)|&=\frac{|4y^3(x^2+y^2)-(x^4+y^4)2x\alpha|}{(x^2+y^2)^{\alpha +1}}\leq \frac{4|y|^3(x^2+y^2)+(x^4+y^4)2|x||\alpha|}{(x^2+y^2)^{\alpha +1}}\\ &\leq \frac{4||(x,y)||^3||(x,y)||^2+(||(x,y)||^4+||(x,y)||^4)2||(x,y)|||\alpha|}{||(x,y)||^{2\alpha +2}}\\ &= \frac{4(|\alpha|+1)||(x,y)||^5}{||(x,y)||^{2\alpha +2}}=4(|\alpha|+1)||(x,y)||^{3-2\alpha}\rightarrow 0\end{align} So if $f(0,0)=0$ then $f$ is continuous at $(0,0)$ and hence everywhere.
H: How to define a function that gives us the number of pentagons formed in between two or more hexagons? I have been trying to make a general formula/function that helps in calculating the number of pentagons that may be formed using 2 or more hexagons. Like it is shown in the picture below: In Fig. 1 there are 2 hexagons and 6 pentagons. Therefore, the ratio becomes $1:3$. However, this should never be the ratio as no pentagon can be formed with a single hexagon. Let $y$ be the number of pentagons formed and $x$ be the number of hexagons, we can say: $y = 3x$ when $x =2$. In Fig. 2 there are 3 hexagons and 12 pentagons. Therefore, the ratio becomes $1:4$ Now, in this case the equation becomes: $y = 4x$ when $x = 3$. If we bring in another hexagon, we shall have 6 more pentagons. So, in that case, the ratio will be $2:9$, thereby changing the equation too. What I mean to say is the ratio is changing every time we introduce a hexagon. I know that $x$ (number of hexagons) and $y$ (number of pentagons) $\in \mathbb{N}$, and $x \geq 2$, $y \geq 6$. Since the ratios are changing every time we introduce a hexagon, I can't seem to define a general formula for a function that gives us the number of pentagons if we have the number of hexagons. Maybe I am missing something. So the question: How to define a function that gives us the number of pentagons formed in between two or more hexagons? AI: It looks like you are drawing 6 pentagons for each hexagon, except the outermost. So if $x$ and $y$ are the number of hexagons and pentagons, respectively, then $y=6(x-1)$.
H: Proving existence of $T$-invariant subspace Let $T:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}$ be a linear transformation. I'm trying to prove that there exists a T-invariant subspace $W\subset \mathbb{R}^3$ so that $\dim W=2$. How can I prove it? Any advice? AI: Hint: Consider the minimal polynomial $\mu_T$ of $T$. If it has degree $1$ or $2$, what do you know about $T$? If it has degree $3$, factor it into $\mu_T = pq$. What do you know about $p(T)$? Kernels, eigenspaces and complements may help you. Edit: Also, I'm curious where this question comes from. Have you considered the example $$T = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}?$$ Note that this is already in Jordan normal form. Now choose $v := e_2$ as the second standard basis vector. The image $Tv$ lies in $W := \langle e_1, e_2 \rangle$ and $e_1$ is an eigenvector. So $W$ is $T$-invariant of dimension $2$. This works similarly in general. If the minimal polynomial splits (into linear factors), the Jordan normal form gives you the answer: Some generalized eigenspace, or a subspace thereof (or, in the diagonalisable case, a sum of eigenspaces). The only other thing that can happen is that the minimal polynomial contains an irreducible factor $f$ of degree $2$. In this case, consider $\operatorname{ker} f(T)$. Edit2: More details. When you have the Jordan form, you can read off several invariant subspaces. First of all, eigenspaces are obviously invariant. If you have more than one eigenspace or an eigenspace of dimension $> 1$, just choose the span of suitable eigenvectors as your $T$-invariant space. Now, if you have a Jordan block of higher dimension, you observe how $T$ acts on the corresponding basis vectors: in my example, $e_3$ is mapped into $\langle e_2, e_3 \rangle$, $e_2$ is mapped into $\langle e_1, e_2 \rangle$. Thus, $\langle e_1 \rangle$, $\langle e_1, e_2\rangle$ and $\langle e_1, e_2, e_3 \rangle$ are all $T$-invariant. 
So $\langle e_1, e_2 \rangle$ is the $2$-dimensional $T$-invariant space you're looking for. This covers the cases when the characteristic polynomial splits. If it doesn't (over $\mathbb{R}$), you need to consider the kernel $\operatorname{ker} f(T)$ where $f$ denotes the irreducible factor of degree $2$, as I wrote above. Choose, for example, $$T = \begin{pmatrix} -1 & -2 & -2 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ and see what happens. Edit3: Expanding on my (hopefully) last comment to this answer: The minimal polynomial of the example above is $(x-1)(x^2+1)$. Moreover, we have $$\operatorname{ker} ( T^2 + 1 ) = \langle e_1, e_2 \rangle \text{ and } \operatorname{ker} ( T - 1 ) = \langle e_2 - e_3 \rangle.$$ Choose any vector in the first space, say $v_1 := e_1$. Then choose $v_2 := Tv_1 = -e_1+e_2$. We find $Tv_2 = -e_1$. Thus, choosing the basis $B := (v_1,v_2,e_2-e_3)$, we obtain $$ {}^BT^B = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ and $\langle v_1, v_2 \rangle$ is clearly a $T$-invariant subspace of dimension $2$. Note also that clearly $K := \operatorname{ker}(p(T))$ is a $T$-invariant subspace for any polynomial $p$: If $x \in K$, then $p(T) Tx = Tp(T)x = 0$, so $Tx \in K$. We just needed to find a polynomial $p$ such that $\operatorname{ker}(p(T))$ has dimension $2$. (We do not always find such a polynomial. We do, however, always find suitable $T$-invariant subspaces of the kernel.)
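The concrete claims about this second example can be verified directly with exact integer arithmetic (a small sketch):

```python
# exact integer check of the claims about the second example matrix
T = [[-1, -2, -2],
     [ 1,  1,  0],
     [ 0,  0,  1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]

# e1 and e2 lie in ker(T^2 + I), i.e. T^2 v = -v there ...
for v in (e1, e2):
    assert matvec(T, matvec(T, v)) == [-x for x in v]

# ... and e2 - e3 spans ker(T - I): a fixed direction
w = [0, 1, -1]
assert matvec(T, w) == w

# the invariant plane: v1 = e1, v2 = T v1, and T v2 = -v1 stays inside it
v1 = e1
v2 = matvec(T, v1)
assert v2 == [-1, 1, 0]
assert matvec(T, v2) == [-1, 0, 0]      # = -v1
print("span(v1, v2) is a 2-dimensional T-invariant subspace")
```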
H: Problem on convergence of sequences Given that $\lim f_n=1>0$, show that there exists a positive integer $m$ such that $f_n\ge 0 \\ \forall n \ge m $ AI: Aman, simply employ the usual definition of convergence. If $f_n \to 1$, then for all $\epsilon > 0$, there exists $m \in \mathbb N$ such that for all $n \geq m$, $|f_n - 1| < \epsilon$. Choose $\epsilon = 1$: then $|f_n - 1| < 1$ is equivalent to $0 < f_n < 2$, so in particular $f_n \geq 0$ for all $n \geq m$.
H: Finding all ordered tuples Suppose $a+b+c+d+e=t$ and $a,b,c,d,e \geq r$ where all the given variables are positive integers. How do you calculate all ordered tuple of $a,b,c,d,e$ such that the above equation holds. The stars and bars formula can't be applied here, since we have a restriction namely $t$, for all the integers. You can use an explicit example with any positive integers to explain these type of problems. Thanks. AI: In a problem like this, just subtract the obligatory $r$ units from each of the variables $a,b,c,d,e$ to obtain new variables $a'=a-r$ etc.. Then one has five otherwise unconstrained non-negative integers $a',b',c',d',e'$ satisfying $$a'+b'+c'+d'+e'=t-5r.$$ Assuming the right hand side is non-negative, the number of solutions is $\binom{t-5r+4}4$ (among $t-5r+4$ units choose $4$ to be separations between the remaining $t-5r$ of them).
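For small parameters the count $\binom{t-5r+4}{4}$ can be confirmed by brute force (a sketch; $t=12$, $r=1$ is an arbitrary test case, not from the question):

```python
from itertools import product
from math import comb

def count_solutions(t, r, k=5):
    """Brute-force count of k-tuples of integers >= r that sum to t."""
    rng = range(r, t - (k - 1) * r + 1)     # each entry is at most t - (k-1)*r
    return sum(1 for tup in product(rng, repeat=k) if sum(tup) == t)

t, r = 12, 1
print(count_solutions(t, r), comb(t - 5 * r + 4, 4))   # 330 330
```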
H: Three Dimensional Vectors Question Let $\ell_1 , \ell_2 $ be two lines passing through $M_0= (1,1,0) $ that lie on the hyperboloid $x^2+y^2-3z^2 =2 $ . Calculate the cosine of the angle between the two lines. I have no idea about it... I guess it has something to do with the gradient of the function $F(x,y,z)=x^2+y^2-3z^2$ that must be perpendicular to our hyperboloid at any point... But how does this help me? Thanks ! AI: You are looking for two unit vectors $v_1,\, v_2$ so that the lines $\ell_i = M_0 + \mathbb{R}\cdot v_i$ are contained in the given hyperboloid. For the components $(a,\,b,\,c)$ of such a $v_i$, inserting into the equation of the hyperboloid yields the condition $$(1 + t\cdot a)^2 + (1 + t\cdot b)^2 - 3(c\cdot t)^2 = 2$$ for all $t \in \mathbb{R}$. Expanding the squares, you get a polynomial in $t$ that shall identically vanish; that means all coefficients must vanish, which imposes some conditions on the components of $v_i$ and leaves you with (up to multiplication with $-1$) two possibilities for the $v_i$. You get the cosine of the angle by computing the inner product $\langle v_1 \mid v_2\rangle$.
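Carrying this out: the vanishing of the $t$ and $t^2$ coefficients forces $a+b=0$ and $a^2+b^2=3c^2$, and normalizing gives two unit directions whose inner product is $1/2$. A numeric check of that worked-out solution (the sign choices below are one of the equivalent options):

```python
import math

# a + b = 0 and a^2 + b^2 = 3 c^2 together with |v| = 1 give a^2 = 3/8, c = 1/2
a = math.sqrt(3 / 8)
v1 = (a, -a,  0.5)
v2 = (a, -a, -0.5)

for v in (v1, v2):
    assert abs(v[0] ** 2 + v[1] ** 2 + v[2] ** 2 - 1) < 1e-12   # unit vector
    for t in (-2.0, 0.3, 5.0):      # the line M0 + t v stays on the surface
        x, y, z = 1 + t * v[0], 1 + t * v[1], t * v[2]
        assert abs(x * x + y * y - 3 * z * z - 2) < 1e-9

cos_angle = v1[0] * v2[0] + v1[1] * v2[1] + v1[2] * v2[2]
print(round(cos_angle, 10))    # 0.5
```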
H: Solving the inequality: $1\leq \cot^2(x)\leq 3$ I want to solve the following inequality: $$1\leq \cot^2(x)\leq 3.$$ but I'm unsure of how to handle the positive and negative square roots. If I take the square of the inequality, just focusing on the posive square roots, I get: $$1\leq \cot^2(x)\leq 3 \Leftrightarrow \mathrm{arccot}(1)\leq x \leq \mathrm{arccot}(3) \Leftrightarrow \frac{\pi}{4} \leq \cot(x)\leq \frac{\pi}{6}.$$ which is clearly wrong. So, what's the method to solve this? Thanks! Alexander AI: Remember: $a\leq x^2\leq b$ (where $a,b\geq 0$) is equivalent to $\sqrt{a}\leq\lvert x\rvert\leq \sqrt{b}$. So, in this case, you want to solve $$ 1\leq\lvert\cot(x)\rvert\leq\sqrt{3}, $$ or equivalently $$ -\sqrt{3}\leq \cot(x)\leq-1\qquad\text{or}\qquad1\leq\cot(x)\leq\sqrt{3}. $$
H: $F(x,y)$ is continuous. Prove that $$ f(x,y)=\begin{cases}\frac{x^3-xy^2}{x^2+y^2}&\text{if }(x,y)\ne(0,0)\\0&\text{if }(x,y)=(0,0)\end{cases}$$ is continuous on $\mathbb R^2$ and has first partial derivatives everywhere on $\mathbb R^2$, but is not differentiable at $(0,0)$. I want to show that the function is continuous on $\Bbb R^2$. For that, it is sufficient to show that the function is continuously differentiable on $\Bbb R^2$. Does there exist another choice to prove this? Edit: I added a second question. Use differentials to approximate $(597)(16.03)^{1/4}$. There, how do I find the values of $a$, $b$, $dx$ and $dy$? Please explain. Thanks :) AI: Since $|x|,|y|\leq \sqrt{x^2+y^2}=||(x,y)||$ then $$\frac{|x^3-xy^2|}{x^2+y^2}\leq \frac{|x|^3+|x|y^2}{x^2+y^2}\leq \frac{||(x,y)||^3+||(x,y)||||(x,y)||^2}{||(x,y)||^2}\leq2||(x,y)||\to0=f(0,0)$$ so $f$ is continuous at $(0,0)$ and hence everywhere.
H: Write $100$ as sum of $n$ numbers, such that each number is twice as big as its predecessor. I don't quite know where to start on this one. lets say we have a value 100. and we want to split it in two parts where one is twice as big as the other. That would be $v_1 = 66.666$ and $v_2= 33.333$ (sum $100$) If we want to split the value in 3 parts, where each part is twice as big as the others. That would be $v_1 = 57.15$ , $v_2 = 28.58$, $v_3 = 14.29 $(sum $100$) What do I need to do to get $66.66$ from $2$ and $57.15$ from $3$ and so on? AI: This has nothing to do with logarithms, but rather to do with simultaneous equations. In the first case, you want to find two values that sum to 100: $$x + y = 100.$$ You also have the second condition that one is double the other: $$x = 2y.$$ You can combine these equations to get $$ x + y = 2y + y = 3y = 100 \implies y = \frac{100}{3}, x = 2 \cdot \frac{100}{3}.$$ In your second case, you have three numbers: $$x + y + z = 100,$$ where each number is twice the previous, $$x = 2y, \\ y = 2z.$$ Again, you can solve these simultaneous equations: $$x = 2y = 2(2z) = 4z, \\ x + y + z = 4z + 2z + z = 7z = 100 \implies z = \frac{100}{7} = 14.\overline{285714}.$$
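The same elimination works for any number of parts: solving the chain $v_i = 2v_{i+1}$ gives $v_i = \text{total}\cdot 2^{\,n-i}/(2^n-1)$ for $i=1,\dots,n$ (the denominators $3$ and $7$ above are $2^2-1$ and $2^3-1$). A small sketch of that general formula:

```python
from fractions import Fraction

def doubling_parts(total, n):
    """Split `total` into n parts, each part twice the next one."""
    denom = 2 ** n - 1                  # 1 + 2 + 4 + ... + 2^(n-1)
    return [Fraction(total) * 2 ** (n - 1 - i) / denom for i in range(n)]

print([float(v) for v in doubling_parts(100, 2)])   # [66.66..., 33.33...]
print([float(v) for v in doubling_parts(100, 3)])   # [57.14..., 28.57..., 14.28...]
```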
H: Find domain of $ \sin ^ {-1} [\log_2(\frac{x}{2})]$ Problem: Find domain of $ \sin ^ {-1} [\log_2(\frac{x}{2})]$ Solution: $\log_2(\frac{x}{2})$ is defined for $\frac{x}{2} > 0$ $\log_2(\frac{x}{2})$ is defined for $x > 0$ Also domain of $\sin ^ {-1}x$ is $[-1,1]$ When $x=1$ ,then $\log_2(\frac{x}{2})$ becomes $-1$ When $x=4$ ,then $\log_2(\frac{x}{2})$ becomes $1$ So domain is $[1,4]$ AI: Hint: You can search the proper $x$ in which we can have $$\log_2(x/2)\in[-1,+1], ~~\text{and}~~~x/2>0$$ simultaneously. Note that the logarithm function is a one-one function here.
H: Is there an $f_n$ with two local maxima converging to an $f$ with only one local maximum? Is there a sequence $\{f_n\}$, each $f_n$ having two local maxima, that converges (pointwise/uniformly or otherwise) to an $f$ with only one local maximum? AI: Consider the function $$ f:x\mapsto\begin{cases}1-|x| & \text{if } x\in[-1,1]\\0&\text{if not}\end{cases} $$ For $n\in\mathbb{N}$, let $f_n:x\mapsto f(x)+\frac{1}{n}f(x-2)$. Each $f_n$ has exactly two strict local maxima, at $0$ and at $2$, and since $\sup_x|f_n(x)-f(x)|=\frac{1}{n}\to0$, the sequence converges uniformly towards $f$, which only has one.
H: Intersection of topologies Is my proof that the intersection of any family of topologies on a set $X$ is a topology on $X$ correct? Proof. We are required to show that the intersection satisfies the topology axioms. Let $\tau$ be an arbitrary intersection of topologies on $X$. $\emptyset$ and $X$ are in every topology so they are in $\tau$ Let $U=\bigcup_{i\in I} A_i$ be an arbitrary union of elements of the intersection $\tau$. $U$ is open in every topology (because $A_i$ is in $\tau$ for all $i\in I$) so it's open in $\tau$ Let $V=A_1\cap\dots\cap A_n$ be a finite intersection of elements of the intersection. $V$ is open in every topology (because $A_i$ is in $\tau$ for all $i\in[1,n]$) so it's open in $\tau$ Moreover, $\tau\subseteq\tau_j$ for every topology $\tau_j$ in the intersected family so it's coarser than all of them. AI: The proof seems just fine.${}{}$
H: Every element in a ring can be written as a product of non-units elements I'm trying to understand a little detail in this proof: I didn't understand why in a ring we can always write an element as a products of non-units elements. I need help. Thanks in advance AI: The question has been answered by the comments. In the sentence under question, the writer is dealing with the case where $a$ is reducible, so by definition of reducible, $a = bc$ for some nonunits $b$ and $c$. It is untrue that all ring elements can be expressed as products of nonunits. The product $a = bc$ is a unit if and only if $b$ and $c$ are units.
H: Prove that $1+\sum_{n\geq1}\sum^n_{k=1}\frac{n!}{k!}{n-1\choose k-1}x^k\frac{u^n}{n!}=\exp\frac{xu}{1-u}$ Let $k,n\in\mathbb{N}_{>0}$. How do I get started to prove that $$1+\left(\sum_{n\geq1}\sum^n_{k=1}\frac{n!}{k!}{n-1\choose k-1}x^k \frac{u^n}{n!}\right) = \exp\frac{xu}{1-u}$$ Hints and help greatly appreciated! AI: Use the compositional formula for exponential generating functions from Chapter 5 of Enumerative Combinatorics Volume 2 by R. Stanley. Take $g(u)=\frac{u}{1-u}=\sum_{m\ge1}m!\,\frac{u^m}{m!}$, the EGF that counts linear orders on an $m$-set; the compositional formula then says that the coefficient of $x^k\frac{u^n}{n!}$ in $e^{x\,g(u)}$ counts the partitions of an $n$-set into $k$ nonempty linearly ordered blocks, and there are exactly $\frac{n!}{k!}\binom{n-1}{k-1}$ of these (the Lah numbers).
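Independently of any combinatorial proof, the identity can be verified for small $n$ by expanding $\exp\!\big(xu/(1-u)\big)$ as a truncated power series in $u$ with polynomial coefficients in $x$ (a sketch in exact rational arithmetic; the inner coefficient $\frac{n!}{k!}\binom{n-1}{k-1}$ is the Lah number $L(n,k)$):

```python
from fractions import Fraction
from math import comb, factorial

N = 8   # truncation order in u

def poly_add(p, q):
    out = [Fraction(0)] * max(len(p), len(q))
    for i, c in enumerate(p): out[i] += c
    for i, c in enumerate(q): out[i] += c
    return out

def poly_scale(c, p):
    return [c * a for a in p]

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def series_mul(f, g):
    # f, g: lists of length N+1 of x-polynomials, indexed by the power of u
    out = [[Fraction(0)] for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1 - i):
            out[i + j] = poly_add(out[i + j], poly_mul(f[i], g[j]))
    return out

# g = x*u/(1-u) = x*(u + u^2 + ...), truncated at u^N
g = [[Fraction(0)]] + [[Fraction(0), Fraction(1)] for _ in range(N)]
# exp(g) = sum_{m} g^m / m!  (g has no constant term, so m <= N suffices)
expg = [[Fraction(1)]] + [[Fraction(0)] for _ in range(N)]
power = [[Fraction(1)]] + [[Fraction(0)] for _ in range(N)]
for m in range(1, N + 1):
    power = series_mul(power, g)
    expg = [poly_add(e, poly_scale(Fraction(1, factorial(m)), p))
            for e, p in zip(expg, power)]

# n! * [u^n] exp(xu/(1-u)) should have x^k coefficient n!/k! * C(n-1, k-1)
for n in range(1, N + 1):
    coeffs = poly_scale(Fraction(factorial(n)), expg[n])
    for k in range(1, n + 1):
        lah = Fraction(factorial(n), factorial(k)) * comb(n - 1, k - 1)
        assert coeffs[k] == lah
print("identity verified up to n =", N)
```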
H: What's the intuition with partitions of unity? I've been studying Spivak's Calculus on Manifolds and I'm really not getting what's behind partitions of unity. Spivak introduces the topic with the following theorem: Let $A\subset \Bbb R^n$ and let $\mathcal{O}$ be an open cover of $A$. Then there is a collection $\Phi$ of $C^\infty$ functions $\varphi$ defined in an open set containing $A$, with the following properties: For each $x \in A$ we have $0 \leq \varphi(x) \leq 1$. For each $x \in A$ there is an open set $V$ containing $x$ such that all but finitely many $\varphi \in \Phi$ are $0$ on $V$. For each $x \in A$ we have $\sum_{\varphi \in \Phi}\varphi(x)=1$ (by 2 for each $x$ their sum is finite in some open set containing $x$). For each $\varphi \in \Phi$ there is an open set $U$ in $\mathcal{O}$ such that $\varphi = 0$ outside of some closed set contained in $U$. The point is that I've heard that partitions of unity are able to transfer local results to global results, and this is of great importance, but I'm not really getting the intution behind this theorem. I mean, why a collection of functions with these four properties is able to do such job? When I see a theorem/definition, I try to really get the intution behind it: "why we should really think about doing things this way", because I think that this is a good way to understand what we are doing, but with partitions of unity I'm really not getting the idea. While Spivak uses this just for integration on Calculus on Manifolds for what I've seem, in his Differential Geometry books he starts to use it really more generally to get global results from local ones (obtained with charts). So, given the great importance of this topic, what's the real intuition behind this theorem and partitions of unity in general? 
AI: In a few words, the point of partitions of unity is to take functions (or differential forms or vector fields or tensor fields, in general) that are locally defined, bump them off so they're smoothly $0$ outside their domain of definition, and then add them all up to get something globally defined. For example, suppose you have a surface $S$ in $\mathbb R^3$ that you can locally write as $f=0$, but you don't know how to do so globally. You can cover $S$ with open sets $U_i\subset\mathbb R^3$ on which you have smooth functions $f_i\colon U_i\to\mathbb R$ with $S\cap U_i = \{x\in U_i: f_i(x)=0\}$. Take a partition of unity $\Phi = \{\phi_i\}$ subordinate to the cover $\{U_i\}$, so that each $\phi_i$ is supported in $U_i$. Then $f=\sum \phi_if_i$ will define a smooth function with $f=0$ on $S$. If you want $f$ to be zero only on $S$, you can take an additional open set $U_0 = \mathbb R^3 - S$, set $f_0 = 1$, and throw $\phi_0f_0$ into your sum.
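To see the "bump and add" mechanism in one variable, here is a minimal numerical sketch (my illustration, not from Spivak): a smooth partition of unity subordinate to the cover $U_1=(-1,0.5)$, $U_2=(-0.5,1)$ of $(-1,1)$, built from the classical bump function $e^{-1/t}$:

```python
import numpy as np

def bump(t):
    # smooth on R: identically 0 for t <= 0, positive for t > 0
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, np.exp(-1.0 / np.maximum(t, 1e-300)), 0.0)

def smooth_step(t):
    # smooth, equal to 0 for t <= 0 and to 1 for t >= 1
    return bump(t) / (bump(t) + bump(1.0 - t))

xs = np.linspace(-0.9, 0.9, 1001)
phi2 = smooth_step(xs + 0.5)   # vanishes for x <= -0.5, so supported in U2
phi1 = 1.0 - phi2              # vanishes for x >= 0.5, so supported in U1

assert np.allclose(phi1 + phi2, 1.0)   # they sum to 1 everywhere on (-1, 1)
assert phi2[xs <= -0.5].max() == 0.0   # phi2 is 0 outside U2
assert phi1[xs >= 0.5].max() == 0.0    # phi1 is 0 outside U1
```

Anything defined separately on $U_1$ and $U_2$ can now be glued as $\phi_1 f_1 + \phi_2 f_2$, exactly as in the surface example above.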
H: Is a finite group determined by the family of all its 2-generated subgroups? Last week I met my old coauthor, Oleg Verbitsky, who proposed the following question to me. I think that there should be an easy counterexample, but I am not a pure group theorist and I am usually interested in infinite groups, so I decided to forward the question here. So, here it is: For a group $G$ and a positive integer $k$ let $Sub_k(G)$ be the family of all subgroups of $G$ generated by its subsets of size at most $k$ and indexed by these subsets, that is $$Sub_k(G)=\{\langle K\rangle_K: K\subset G, |K|\le k\}.$$ Moreover, we shall call two indexed families $\{G_i:i\in I\}$ and $\{G'_{i'}:i'\in I'\}$ of groups isomorphic, if there exists a bijection $\delta:I\to I'$ such that for each $i\in I$ the groups $G_i$ and $G'_{\delta(i)}$ are isomorphic. Now suppose that we have two finite groups $G_1$ and $G_2$ of equal size $n$. If I remember Oleg's words right, the groups $G_1$ and $G_2$ are not necessarily isomorphic provided the families $Sub_1(G_1)$ and $Sub_1(G_2)$ are isomorphic, and there is a counterexample of two groups of small order. But the isomorphism of the families $Sub_1(G_1)$ and $Sub_1(G_2)$ implies the isomorphism of the groups $G_1$ and $G_2$, provided the groups are abelian. So Oleg asked me: are the groups $G_1$ and $G_2$ isomorphic provided the families $Sub_2(G_1)$ and $Sub_2(G_2)$ are isomorphic? He proposed this question to a group theorist, who expects that the answer is negative in general, but positive for abelian groups. Thanks. AI: No. The groups $$\begin{align} G_{44} &= \langle a,b,c : a^3 = b^9 = c^9 = 1, [b,a]=b^3 c^3, [c,a]=b^3, [c,b]=1 \rangle \\ G_{45} &= \langle a,b,c : a^3 = b^9 = c^9 = 1, [b,a]=c^6, [c,a]=b^3, [c,b]=1 \rangle \end{align}$$ have isomorphic 2-generated subgroups, but are not themselves isomorphic. $p$-groups are just a sea of continuous change, so it is not at all surprising to find examples there.
Conveniently, it is very easy to find minimal generating sets of $p$-groups, so the condition is more easily checked. These are the smallest 3-group examples. There are a great many 2-group examples of order 128, including a batch of 6 non-isomorphic groups with isomorphic 2-generated subgroups, but no 2-group examples of smaller order.
H: The number of irreducible representations I am reading the textbook "Representation Theory" by Fulton and Harris and I have a question. They proved the following theorem on page 16. With a Hermitian inner product on the space of class functions, the characters of the irreducible representations of a finite group $G$ are orthonormal. As a corollary of this theorem, they mention that Corollary: The number of irreducible representations of $G$ is less than or equal to the number of conjugacy classes. I don't know how to prove this corollary. Could you give me some advice, please? AI: You should note that the dimension of the space of class functions is equal to the number of conjugacy classes, and that orthonormal vectors in a Hermitian inner product space are linearly independent.
H: complexity of matrix multiplication For $n\times n$ matrices, it is known that calculating $\operatorname{tr}\{AB\}$ needs $n^2$ scalar multiplications. How many scalar multiplications are needed to calculate $\operatorname{tr}\{ABCD\}$? Note that $\operatorname{tr}$ means the trace of a matrix. AI: As you say, evaluating a trace is order $n^2$: you have $n$ diagonal terms, each of which takes $n$ multiplies and $n$ adds to evaluate. The final $n$ additions are dominated. To compute $\operatorname{tr}\{ABCD\}$ I don't see anything better than first finding $AB$ and $CD$, each of which is $n^3$ operations, or, if you are more clever, $n^{2.373}$. Then use your $n^2$ trace calculation, giving order $n^3$ or $n^{2.373}$. This will work for any number of matrices. There might be something more clever out there.
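The $n^2$ trace trick is that $\operatorname{tr}\{AB\}=\sum_{i,j}A_{ij}B_{ji}$, so the full product $AB$ never has to be formed; a numpy sketch of both this and the $\operatorname{tr}\{ABCD\}$ strategy described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))

# tr(AB) without forming AB: elementwise product of A with B^T, then sum (n^2 multiplies)
tr_AB = np.sum(A * B.T)
assert np.isclose(tr_AB, np.trace(A @ B))

# tr(ABCD): form AB and CD (two full products), then reuse the n^2 trace trick
tr_ABCD = np.sum((A @ B) * (C @ D).T)
assert np.isclose(tr_ABCD, np.trace(A @ B @ C @ D))
```

Only two of the three matrix products are ever computed, matching the operation count in the answer.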
H: continuously averaging a stream of numbers Ok, so I'd like a little help with the mathematical theory behind something I'm doing in a program. I've got a stream of numeric inputs, and I need to find the average of all the numbers in this stream of data. I'm going to output it once the stream ends. In my code, my program can remember two numbers, and is aware of the next number in the stream. Let's say I have x, y, and z. x is the current average of the stream y is the count of numbers that have been processed from the stream z is the next number in the stream. How could I do a fair-weighted average of a stream like this? If I just add z to x and then divide x by 2, then the most recent numbers in the stream heavily weight the average. Or is this even possible? AI: Your running sum is $xy$, so when a new number $z$ comes in the running sum becomes $xy+z$ and the count becomes $y+1$. The new average is $\frac {xy+z}{y+1}$. It is easier to store the running sum than the average, and update the sum and count each time a new number comes in. Added: three things you could consider: 1) Keep the running sum and count. When the running sum gets close to overflow, divide both by 2. Now start accumulating again. This will weight the new values twice as much as the old ones, but maybe that is OK. 2) If you know about what the average is, subtract a guess from all the data. You are storing the difference from your guess. If your guess is close, the running sum will stay near zero, or at least not overflow nearly as quickly. 3) Maybe overweighting the new data isn't so bad. If the average changes with time, maybe you want the recent data overweighted. One approach is to have the current average stored in $x$, then when $z$ comes in, make the new average $x=\frac {a*x+z}{a+1}$. The larger $a$ is, the slower the average will change. You are weighting the new data as $\frac 1a$ of the old average. This should be safe from overflow.
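The update rule $\frac{xy+z}{y+1}$ from the answer can be sketched directly in code (variable names follow the question's $x$, $y$, $z$):

```python
def streaming_mean(stream):
    x, y = 0.0, 0  # x: current average, y: count of numbers processed so far
    for z in stream:
        x = (x * y + z) / (y + 1)  # x*y recovers the running sum
        y += 1
    return x

print(streaming_mean([3, 1, 4, 1, 5, 9]))  # 3.8333...
```

This weights every element equally, unlike the "add and divide by 2" idea in the question, which overweights recent values.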
H: characterization of potential I have a field $F(x,y):= \left( \begin{array}{c} ay\\ 0\\ \end{array} \right)$ and I have to find out whether this field has a potential. Well, despite the terminology (which is not completely clear to me) I thought that the answer is affirmative iff there exists a $U$ such that $F=-\nabla U$. An easy way would be to check if $\dfrac{\partial F_x}{\partial y} = \dfrac{\partial F_y}{\partial x}$ holds (as suggested on https://physics.stackexchange.com/). But here is my approach (and here the question is purely mathematical): (1) $ay=\dfrac{\partial U}{\partial x} \Leftrightarrow \int \partial U= \int ay \partial x \Leftrightarrow U=axy + C(y)$, where $C$ is a function of $y$ only. (2) $0=\dfrac{\partial U}{\partial y}=ax+\dfrac{\partial C(y)}{\partial y}$ (subst. (1) in (2)) $\Leftrightarrow \partial C(y)=-ax \partial y$, but here we can clearly see that $C(y)$ depends on $x$, which is not possible, so there can be no potential $U$. Is this approach right? Is the mathematics behind it right? (That's one of the first such problems that I'm solving, please be patient.) AI: Let me try to understand you first, as the question is quite confusing; given a field $$\mathbf F(x,y):= \left(\begin{array}{c}ay\\0\\\end{array}\right)$$ first we need to check the condition for a conservative vector field: $$\dfrac{\partial F_1}{\partial y} - \dfrac{\partial F_2}{\partial x}\overset{?}{=}0$$ with $$\dfrac{\partial F_1}{\partial y}=a$$ $$\dfrac{\partial F_2}{\partial x}=0$$ Hence not conservative. This condition is a precondition for any further calculation that would make sense, and since it fails here, no potential exists. Now suppose instead, as you do, that there were a potential: $$\dfrac{\partial U}{\partial x} =ay \rightarrow U\sim \int dU= \int a\,y \,dx=a\,y\,x+C_1(y)$$ similarly: $$\dfrac{\partial U}{\partial y} =0 \rightarrow U \sim \int dU= C_2$$ And you have a contradiction, because the precondition is not fulfilled.
Summary: not every field that fulfills the abovementioned precondition has a potential (that can depend on the domain), but every field that has a potential is conservative and fulfills it. To your questions: yes, your approach and calculation are absolutely correct. To your comment, I see it like this: if $\mathbf F$ is a conservative field, then there exists some potential function $U$ so that $\nabla U= \mathbf F$.
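The failing condition can also be verified symbolically; a quick sympy sketch:

```python
import sympy as sp

x, y, a = sp.symbols('x y a')
F1, F2 = a * y, sp.Integer(0)           # the field (ay, 0)
curl = sp.diff(F2, x) - sp.diff(F1, y)  # must vanish for a potential to exist
assert curl == -a                       # nonzero whenever a != 0, so no potential
```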
H: basic integral inequality Consider $0 < r <R $. I have a function $u \in C_{0}^{\infty}(B(x_0,R))$ such that $u=1$ on $\overline{B(x_0,r)}$. Consider $y \in \partial B(0,1)$ (fixed). My book says: $$ 1 \leq \int_{r}^{R} \left|\frac{d}{ds}u(sy)\right| \ ds$$ I tried to begin with $\frac{d}{ds}u(sy) = \nabla u(sy) \cdot y$, and this is zero if $s \leq r$, but I don't know what happens if $s > r$... Can someone give me a hand to prove the inequality? Thanks in advance. AI: In the way you have stated the problem, it does not make sense, because $sy$ does not need to belong to $B(x_0,R)$. Probably what you really want to prove is $$\tag{1}1\leq \int_r^R \left|\frac{d}{ds}u(x_0+sy)\right|ds$$ To prove $(1)$, define $v(s)=u(x_0+sy)$ and note that $$\tag{2}v(R)-v(r)=\int_r^R \frac{d}{ds}v(s)\,ds $$ From $(2)$ we conclude that $$|v(R)-v(r)|\leq\int_r^R\left|\frac{d}{ds}v(s)\right|ds$$ But $|v(R)-v(r)|=1$, which implies that $$1\leq \int_r^R\left|\frac{d}{ds}u(x_0+sy)\right|ds$$
H: Prove that if $a^x = b^y = (ab)^{xy}$, then $x + y = 1$ The question is prove that if $a^x = b^y = (ab)^{xy}$, then $$x + y = 1$$ I've tried: $$a^x = (ab)^{xy}$$ $$\log_aa^x = \log_a(ab)^{xy}$$ $$x = xy \log_ab $$ $$y^{-1} = \log_ab$$ but then I get stuck and I'm not sure if this is the right path. What is an elegant solution please? AI: Method $1:$ $a^x=b^y=(ab)^{xy}=c$(say) Taking logarithm to the base $c,$ $$x\log_ca=y\log_cb=xy\left(\log_ca+\log_cb\right)=1$$ as $\log ab=\log a+\log b$ $\implies \log_ca=\frac1x,\log_cb=\frac1y$ (consider when $\log_ca,\log_cb$ will be finite & defined) Put values of $\log_ca,\log_cb$ in $$xy\left(\log_ca+\log_cb\right)=1$$ Method $2:$ Taking logarithm to the base $a,$ in $a^x=b^y=(ab)^{xy}$ $$x=y\log_ab=xy(\log_aab)=xy(\log_aa+\log_ab)=xy(1+\log_ab)$$ From $x=y\log_ab, \log_ab=\frac xy \ \ \ \ (1)$ From $x=xy(1+\log_ab)$ Case $1:$ If $x\ne0, 1=y(1+\log_ab)\iff \log_ab=\frac1y-1 \ \ \ \ (2)$ Equate the values $\log_ab$ from $(1),(2)$ Case $2:$ If $x=0,$ the problem reduces to $1=b^y=1$ Can you take it from here? Method $3:$ From, $a^x=(ab)^{xy}=a^{xy}b^{xy}\implies a^{x(1-y)}=b^{xy}\ \ \ \ (1)$ Similarly, $b^y=(ab)^{xy}=a^{xy}b^{xy}\implies b^{y(1-x)}=a^{xy}\ \ \ \ (2)$ As lcm of the powers of $b$ is lcm$(xy,y(1-x))=xy(1-x)$ From $(1), b^{x(1-x)y}=(b^{xy})^{1-x}=(a^{x(1-y)})^{1-x}=a^{x(1-x)(1-y)}$ From $(2), b^{x(1-x)y}=(b^{y(1-x)})^x=(a^{xy})^x=a^{x^2y}$ Comparing the values of $b^{x(1-x)y},$ we get $a^{x^2y}=a^{x(1-x)(1-y)}$ What can we say if $a^m=a^n?$
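Whichever method is used, the conclusion can be checked numerically. From $a^x=(ab)^{xy}$ one gets $y=\log_{ab}a$, and from $b^y=(ab)^{xy}$ one gets $x=\log_{ab}b$ (assuming $x,y\neq0$), so $x+y=\log_{ab}(ab)=1$; a quick check for the sample pair $a=2$, $b=8$:

```python
import math

a, b = 2.0, 8.0
x = math.log(b) / math.log(a * b)  # x = log_{ab}(b)
y = math.log(a) / math.log(a * b)  # y = log_{ab}(a)

# the three quantities in the hypothesis agree, and x + y = 1
assert math.isclose(a**x, b**y)
assert math.isclose(a**x, (a * b)**(x * y))
assert math.isclose(x + y, 1.0)
```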
H: Intersection of a hyperplane and a curve Let us fix a projective curve $X$ over a field $k$. By a curve, I mean a variety with all irreducible components of dimension 1. Let us suppose that there is a smooth rational point $x \in X$. My question is: is it possible to find a hyperplane such that the intersection of $X$ and this hyperplane is a point? If so, how? AI: If you are looking for some projective immersion of your curve such that this happens, then if your curve is irreducible just consider the divisor $(2g+1)p$, where $p$ is the point you were talking about and $g$ is the genus of the curve. This divisor is very ample, and so if we immerse the curve in projective space with this divisor, there exists a hyperplane such that the set-theoretic intersection of the hyperplane with the curve is just $p$.
H: Choosing Firearms (With Replacement) In a shooting gallery there are 4 types of firearms. A practice shooter can choose 1 firearm at each shooting (with replacement). In a given day the shooter practices 8 shootings. What is the probability that in a given day he uses each of the 4 types of firearms at least once? My work: I chose to see this question as: how many ways can I distribute 8 shots among 4 guns? Each gun has to receive at least 1 shot = P(each received one shot) + P(each received 2 shots), since there are 8 firings, so the probability is $4^{-4} + 4^{-8}$. Is this correct? Any help would be appreciated, thanks. AI: After the day is done, write down the sequence of gun choices we made. There are $4^8$ possible sequences, all equally likely. Now we count the number of good sequences, sequences that use all $4$ guns. For our probability, we will divide the number of good sequences by $4^8$. Let us instead count the number of bad sequences, sequences that do not use all the guns. There are $3^8$ sequences that do not use gun A, $3^8$ that do not use gun B, and so on. But the number $4\cdot 3^8$ counts twice, for example, the sequences that miss both A and B. There are $2^8$ such sequences, and $\binom{4}{2}$ ways to choose a pair of guns to be omitted. So from $4\cdot 3^8$ we subtract $\binom{4}{2}2^8$. However, we have now removed too many times the sequences that avoid three of the guns at once, that is, the four constant sequences. We need to add back $4$ to deal with that. Thus the number of bad sequences is $$\binom{4}{1}3^8 -\binom{4}{2}2^8 +\binom{4}{3}1^8.$$ Here we used the Principle of Inclusion/Exclusion. For general information about this kind of problem, please see the Wikipedia article on Stirling numbers of the second kind. Remark: Since the numbers are small, we can instead divide into cases. It is a process that has to be done carefully.
Think of the gun choices as a word of length $8$ from the alphabet A, B, C, D. One gun might be chosen $5$ times, and the others $1$ time each. The popular gun can be chosen in $\binom{4}{1}$ ways, and for each choice there are $\binom{8}{5}$ ways to choose when it is used. The remaining slots can be filled in $3!$ ways, for a total of $\binom{4}{1}\binom{8}{5}3!$ ways. One gun might be chosen $4$ times, another twice, and the other two once each. Similar reasoning as in the paragraph above gives a count of $\binom{4}{1}\binom{8}{4}\binom{3}{1}\binom{4}{2}2!$. One could continue; it is not too bad. However, the chance of leaving out some cases, or making a small miscomputation somewhere, is high.
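Both the inclusion-exclusion count and the case analysis can be confirmed by brute force, since there are only $4^8=65536$ sequences; a short Python check:

```python
from itertools import product
from math import comb

total = 4 ** 8
# inclusion-exclusion: sequences missing at least one gun
bad = comb(4, 1) * 3 ** 8 - comb(4, 2) * 2 ** 8 + comb(4, 3) * 1 ** 8
prob = (total - bad) / total

# brute force over all 4^8 sequences as a check
good = sum(1 for seq in product(range(4), repeat=8) if len(set(seq)) == 4)
assert good == total - bad

print(prob)  # 0.6229248046875
```

So the probability is $\frac{40824}{65536}\approx0.623$, far from the question's $4^{-4}+4^{-8}$.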
H: Strange expression for limit What do these limits $\psi(x+0), \psi(x-0)$ mean?. I did calculus but have never come across this. AI: These notations mean $\displaystyle\psi(x+0)=\lim_{t\to x^+}\psi(t)$ and $\displaystyle\psi(x-0)=\lim_{t\to x^-}\psi(t)$ and these notations $\psi(x+),\psi(x^+)$ are also used for the same meaning. See Dirichlet_conditions as an example of use.
H: Examples of logical propositions that are not functions I'm trying to understand the axiom of replacement in set theory. To my understanding (please tell me if I'm wrong), if there is a logical proposition $\varphi(x,y,w_{1},...,w_{n})$ and an arbitrary set $A$, then we can have the set $B=\{y:\exists x\in A(\varphi(x,y,w_{1},...,w_{n}))\}$. Now, HERE is my problem. The formulation of the axiom in terms of logic makes explicit the fact that $\exists!y\,\varphi(x,y,w_{1},...,w_{n})$. This means then that the logical proposition needs to be a function. So basically I'm asking for examples where logical propositions are not functions. (To make this clearer, there is something I'm missing because I think that all propositions are functions. For example the statement $\forall x(x<y)\wedge (y+z=10)$ is a function of the variables $y$ and $z$, etc.) AI: Consider $\varphi(x,y)= y\in x$. This is not a function because $x=\{\varnothing,\{\varnothing\}\}$ does not have a unique $y$ satisfying this formula with $x$. In fact, unless $A$ is a set of singletons, $\varphi(x,y)$ will not define a function on $A$. Here is an example of why we must require that $\varphi$ is a function (after fixing the parameters) on the set $A$. Consider $A=\{\varnothing\}$ and $\psi(x,y)$ stating that $x\subseteq y$, formally: $$\psi(x,y)=\forall z(z\in x\rightarrow z\in y)$$ Now the collection $\{y\mid\exists x\in A.\psi(x,y)\}=\{y\mid y=y\}$, every set is a superset of the empty set. So this would be a proper class, which we already know is not a set. The axiom of replacement, as Hagen says, is telling us that if we can "uniformly rename all the elements of $A$" then the result is a set.
H: Probability of $n$ times a $\frac1n$ event I never studied probability at school and this problem has been bothering me for a long time: Let's say I have a perfectly fair die. If I roll it, the odds of it landing on $6$ are $\frac{1}{6}$. If I roll two dice, the odds of at least one of them landing on 6 are $\frac{1}{6}\times 2 =\frac{1}{3}$. But what about if I roll six dice? What are the odds that one will land on $6$? Based on the previous reasoning, it should be: $$\frac{1}{6}\times 6 = 1$$ But that can't be true. It's actually possible that I roll six dice and none of them land on 6. What about if I roll $100$ dice? It's still possible that none of them land on 6. So what are the odds that at least one will? AI: As you have discovered, probability of independent events is not additive. If it were, then by rolling 10 dice, you'd have a probability of getting a 6 of $10\cdot \frac{1}{6} > 1$. Clearly this is false, since you could get $\{1,1,1,1,1,1,1,1,1,1\}$. There are two ways of looking at this: either you count the number of events where no 6s come up and subtract this from the total number of events, or you work the variables as separate events: $$P(\textrm{at least one 6}) = P(\textrm{first die is a 6}) + P(\textrm{first die is not a six AND second die is a six}) \\ = \frac{1}{6} + \frac{5}{6}\cdot \frac{1}{6} = \frac{6}{36}+\frac{5}{36} = \frac{11}{36}.$$ In general, the complement method gives $P(\text{at least one 6 in } n \text{ rolls}) = 1 - \left(\frac{5}{6}\right)^n$: about $0.665$ for $n=6$, and very close to (but still less than) $1$ for $n=100$.
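The complement rule is easy to evaluate exactly with rational arithmetic; a small sketch covering the cases in the question:

```python
from fractions import Fraction

def p_at_least_one_six(n):
    # complement rule: 1 - P(no six in n independent rolls)
    return 1 - Fraction(5, 6) ** n

assert p_at_least_one_six(2) == Fraction(11, 36)  # matches the two-dice answer
print(float(p_at_least_one_six(6)))    # 0.6651..., not 1
print(float(p_at_least_one_six(100)))  # very close to 1, but still less than 1
```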
H: Weibull Scale Parameter Meaning and Estimation Wikipedia: http://en.wikipedia.org/wiki/Weibull_distribution gives a nice description of what the shape parameter (they call it $k$) means in the Weibull distribution, but I can't find anywhere what the scale parameter (denoted as $\lambda$ on Wikipedia) means or how to estimate it. Obviously for estimation one could employ an MLE method, but I'm thinking more along the lines of real-world examples. For instance, if a bug has an average lifespan of a year and we know $\lambda$ (the scale parameter), can I just find $k$ by solving for it in the equation for the mean, or would that produce bias? Thanks for the help. AI: Actually there are many estimators of the scale parameter of the Weibull distribution. If you'd like, you can turn to "Continuous univariate distributions. Vol. 1" by N.L. Johnson, S. Kotz and N. Balakrishnan. But they are not as nice as one might think from looking at the pdf. For instance the moment estimator, based on the sample of size $n$: $$\hat{\lambda}_{MM}=\exp{\left(\frac{1}{n}\sum_{i=1}^n\log(X_i)+\gamma\frac{\sqrt{6}}{\pi}\sqrt{\frac{1}{n-1}\sum_{i=1}^n(\log(X_i)-\overline{\log(X)})^2}\right)},$$ where $\gamma$ is Euler's constant. The estimator is asymptotically unbiased, with variance: $$\mathrm{Var}(\hat{\lambda}_{MM})=1.2\frac{k^{-2}}{n}+k^{-2}O(n^{-\frac{3}{2}})$$ $\hat{\lambda}_{MM}$ has an asymptotic efficiency of $95$%. The maximum likelihood estimator of $\lambda$ (when $k$ is unknown) is given by the statistic: $$\hat{\lambda}_{ML}=\left(\frac{1}{n}\sum_{i=1}^n X_i^{\hat{k}_{ML}}\right)^{\frac{1}{\hat{k}_{ML}}}$$ And $\hat{k}_{ML}$ is the solution of the following equation: $$\hat{k}_{ML}=\left(\left(\sum_{i=1}^n X_i^{\hat{k}_{ML}}\log(X_i)\right)\left(\sum_{i=1}^n X_i^{\hat{k}_{ML}}\right)^{-1}\!\!\!\!-\frac{1}{n}\sum_{i=1}^n\log(X_i)\right)^{-1}$$ Other estimators are even more peculiar but much harder to understand ;).
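As a quick illustration (mine, not from the book), the moment-type estimator $\hat\lambda_{MM}$ above is a one-liner on the log-data, and it can be checked against simulated data with known parameters; a numpy sketch:

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def weibull_scale_mm(x):
    # moment-type estimator of the Weibull scale (formula above):
    # exp(mean of log-data + gamma*sqrt(6)/pi * sample sd of log-data)
    logs = np.log(x)
    return np.exp(logs.mean() + EULER_GAMMA * np.sqrt(6) / np.pi * logs.std(ddof=1))

rng = np.random.default_rng(1)
lam, k = 2.0, 1.5
sample = lam * rng.weibull(k, size=200_000)  # Weibull with scale lam, shape k

est = weibull_scale_mm(sample)
print(est)  # close to lam = 2.0
```

The estimator recovers $\lambda$ without ever estimating $k$, which is what makes it convenient.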
H: Good texts in complex numbers? I have asked some members in chat about good texts to study complex numbers, and they recommended, for example, "Visual Complex Analysis" by Needham and "Complex Analysis" by Stein. But I am looking for a text on complex numbers, not complex analysis (I don't even know what complex analysis is!). Which text do you recommend for studying complex numbers? I mean, a text that covers their properties and their relation to other subjects such as geometry, to give the reader a strong background for using complex numbers in other branches afterwards, for example in linear algebra or calculus (special functions, for example). Someone recommended a text named "Complex Numbers from A to ... Z" by Titu Andreescu & Dorin Andrica. Is it a good text? AI: Short answer: yes, "Complex Numbers from A to ... Z" by Titu Andreescu & Dorin Andrica is a good text, very fitting if you are interested in understanding complex numbers. In the meantime, you will likely appreciate Paul's Online Math Notes: see his Complex Numbers Primer. You can study it on-line, and/or download the Primer as a pdf document.
H: Representing $\mathbb{R}/\mathbb{Z}$ as a matrix group. It was told to me that $G = \mathbb{R}/\mathbb{Z}$ is a real matrix group. Can someone help me understand how to represent $G$ in $Gl_n(\mathbb{R})$ for some $n$? (Supposedly, $n = 1$? But that's confusing because then G is [0,1) with a modular addition group operation, which doesn't seem like a matrix group to me.) Any help would be much appreciated! Thanks! -Dan AI: $$\theta \to \begin{pmatrix} \cos(2 \pi \theta) & \sin(2 \pi \theta) \\ -\sin(2 \pi \theta) & \cos(2 \pi \theta) \end{pmatrix}$$
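A quick numeric check (my sketch, not part of the original answer) that this map is a group homomorphism and sends every integer to the identity, so it is well defined and injective on $\Bbb R/\Bbb Z$:

```python
import numpy as np

def R(theta):
    # the rotation matrix from the answer, for theta in R/Z
    c, s = np.cos(2 * np.pi * theta), np.sin(2 * np.pi * theta)
    return np.array([[c, s], [-s, c]])

a, b = 0.3, 0.55
assert np.allclose(R(a) @ R(b), R(a + b))  # homomorphism: R(a)R(b) = R(a+b)
assert np.allclose(R(1.0), np.eye(2))      # kernel contains Z, so R factors through R/Z
```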
H: Finite field extension over $\mathbb F_2$ I don't see why $[L:K]=4$, where $L = \mathbb{F}_2(x,y) = \operatorname{Quot}(\mathbb{F}_2[x,y])$ and $K = \mathbb{F}_2(x^2,y^2) = \operatorname{Quot}(\mathbb{F}_2[x^2,y^2])$. Let $p(X) = X^2-x^2 \in K[X]$. $p$ has leading coefficient $1$. It is $p(X) = (X-x)(X+x)$ where $\pm x \notin K$. Since $p$, with $\deg p \in \{2,3\}$, is reducible over $K$ iff $\exists a \in K: p(a) = 0$ -- and we already found two roots of $p$ which are not in $K$ -- it follows that $p(X)$ is irreducible over $K$ and therefore the minimal polynomial of $x$. It follows that $[K(x):K] = \deg p = 2$. Now we show that $K(x) \neq L$ since $y \notin K(x)$ but $y \in L$: How to do this? Now let $q(X) = X^2-y^2 \in K(x)[X]$, then $q$ again has leading coefficient $1$ and is irreducible (shown the same way as above?) and therefore the minimal polynomial of $y$. It follows that $[K(x,y):K]=[K(x,y):K(x)]\cdot[K(x):K]=\deg q \cdot \deg p=2 \cdot 2=4$. Now we only need that $K(x,y) = L$; then it follows that $[L:K]=4$. How to do this? AI: Hint: analyze the tower of fields $L=K(x,y)\supseteq K(x)\supseteq K$. The polynomial $p(X)=X^2-x^2$ is an irreducible polynomial over $K$. The same is true for $q(X)=X^2-y^2$. Convince yourself that extending by one then the other results in $L$, and that this extension is degree $2\cdot 2$. (This basically amounts to showing that $y$ isn't in $K(x)$.) $K$ for example is all polynomial fractions where the polynomials only have even powers of $x$ and $y$. $L$ is the same except you're allowed to have any powers of $x$ and $y$. How do you think about the elements of these fields? If you take any polynomial ring in however many variables, the field of fractions for that domain is just the formal "fractions of polynomials" with nonzero denominators. That's fairly easy to envision, no?
H: How does Liu intend his readers to compute $H^0(X,\Omega^1_{X/k})$ of this scheme? In the chapter on duality theory in Liu's Algebraic Geometry and Arithmetic Curves, there is the following exercise. I'm having trouble seeing how one can apply the duality theory developed in Liu to solve it. $4.2$. Let $k$ be a field, and let $F=x^n+y^n+z^n\in k[x,y,z]$, where $n\ge 1$ is prime to $\operatorname{char}(k)$. Determine $H^0(X,\Omega^1_{X/k})$ for $X=V_+(F)\subset \mathbb P^2_k$. The preceding chapter was very theoretical, and I'm having trouble understanding how to apply the theory to do actual computations. In particular, it contains a lot of duality-like lemmas and I'm not sure which one to actually apply. To be honest, I don't understand it very well, and I was hoping doing something concrete with it would help. I understand there are many different ways to compute cohomology groups. I would like to do this using the tools developed so far by Liu. In particular, he does not talk about Euler sequences like Hartshorne does. (It is possible they are in the book in disguise, but I do not see them. Please correct me if this is the case.) I would also appreciate any suggestions for elementary ways to use duality to compute this cohomology group. Thanks. (The cohomology theory Liu uses is Čech cohomology.) Edit. The answer below sketches a simple way to do this without any duality at all. My question is now this: What is this question doing in a chapter on duality? Does a duality argument make it even easier? Edit 2. I think I see what was intended now. One must use Corollary $4.14$ and proceed as in the following examples. The key observation here is that it suffices to compute the canonical sheaf $\omega_{X/k}$, since the canonical sheaf is isomorphic to $\Omega^1_{X/k}$ in this situation. Corollary $4.14$ then gives the tools necessary to compute $\omega_{X/k}$. Thinking in terms of duality was the wrong approach.
AI: $H^0$ is just global sections, so he is asking you to describe the module of globally defined differential forms on your curve $X$. Find an open covering of the curve by affine open sets, and find the coordinate rings of those sets. On each of them, find a presentation of the module of Kähler differentials. Finally, see which differentials on those open sets extend to the whole thing.
H: Help with difficult integral According to my textbook, $$\int \left( 2 \cot^2{x} - 3 \tan^2{x} \right)dx = -2 \cot{x} - 3 \tan{x} + C$$ I am unable to arrive at this answer. Is this correct? If so, please help me with the integral. AI: Here is a hint: Use $$ \cot^2(x) = \csc^2(x) - 1\\ \frac{d}{dx} (-\cot(x)) = \csc^2(x). $$ That is $$ \int \csc^2(x) = -\cot(x) + C. $$ Now find something similar to use for tangent.
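Applying the hint to both terms (using $\tan^2 x = \sec^2 x - 1$ for the second) produces the candidate antiderivative $-2\cot x - 3\tan x + x + C$; note the extra $+x$ compared with the textbook's answer. A sympy check of the candidate by differentiation:

```python
import sympy as sp

x = sp.symbols('x')
integrand = 2 * sp.cot(x) ** 2 - 3 * sp.tan(x) ** 2
candidate = -2 * sp.cot(x) - 3 * sp.tan(x) + x  # the +x comes from the -1 terms

# differentiating the candidate recovers the integrand exactly
assert sp.simplify(sp.diff(candidate, x) - integrand) == 0
```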
H: Proving $\|f(x)-f(a)\|\le M\| x-a\|$ Theorem (1) (M.V.T. for real-valued functions): Let $V \subseteq \Bbb R^n$ be open. Suppose $f: V \to \Bbb R$ is differentiable on $V$. If $x, a\in V$ then there is $c\in L(x;a)$ such that $f(x)-f(a)=\nabla f(c)\cdot (x-a)$. Theorem (2) (M.V.T. for vector-valued functions): Let $V \subseteq \Bbb R^n$ be open. Suppose $f: V \to \Bbb R^m$ is differentiable on $V$. If $x, a\in V$ and $L(x;a)\subseteq V$, then given any $u \in \Bbb R^m$ there is $c\in L(x;a)$ such that $u\cdot(f(x)-f(a))= u\cdot(Df(c)(x-a))$. The statement: Let $V \subseteq \Bbb R^n$ be open. Let $H$ be a compact subset of $V$ and suppose that $f: V \to \Bbb R^m$ is $C^1$ on $V$. If $E\subset H$ is convex, then there is a constant $M$ such that $\| f(x)-f(a)\|\le M\| x-a\|$ for all $x,a \in E$. I need to prove the statement by using theorem (1) -- not theorem (2). Please show me, thank you :) AI: Hint: apply theorem 1 to each component of $f=(f_1,\dots,f_m)$. Edit: elaborating. Clearly, each component $f_i:V\to\mathbb R$ is $\mathcal C^1$ on $V$. Thus, we can apply th. 1 to say that there exists $c_i\in L(a,x)$ (note that $c_i\in E$, thanks to convexity) such that $f_i(x)-f_i(a)=\nabla f_i (c_i)\cdot(x-a)$. This leads us to say that $$|f_i(x)-f_i(a)|\le \|\nabla f_i (c_i)\|\|x-a\|.$$ As $\nabla f_i$ is a continuous function on the compact set $H\supseteq E$, the supremum $$M_i=\sup_{c\in E}\|\nabla f_i(c)\|$$ is finite, and $$\forall x,a \in E\quad |f_i(x)-f_i(a)|\le M_i\|x-a\|.$$ Then again, let $$M=\sqrt m\,\max_{i=1,\dots,m} M_i,$$ so that $M_i^2\le \frac{M^2}{m}$ for every $i$, and $$\|f(x)-f(a)\|^2 = \sum_{i=1}^m |f_i(x)-f_i(a)|^2\le \frac{M^2}{m} \sum_{i=1}^m \| x - a \|^2 =M^2\| x - a \|^2.$$
H: Pi Estimation using Integers I ran across this problem in a high school math competition: "You must use the integers $1$ to $9$ and only addition, subtraction, multiplication, division, and exponentiation to approximate the number $\pi$ as accurately as possible. Each integer must be used at most one time. Parenthesis are able to be used." I tried writing a program in MATLAB to solve this problem but it was very inefficient (brute-force) and took too long. What is the correct answer, and how would I figure it out? EXAMPLE: $$\pi \approx \frac {(6+8+2^3)} 7 = \frac {22} 7 \approx 3.142857$$ AI: Here are three approximations which I came up with, just by sitting down and "playing" with the numbers (no brute-force algorithm needed): 1.) $\pi \approx 3+\dfrac {(5 \times 7)^{(\large8/9)^4}} {2^6+1} = 3.1415926539$ 2.) $\pi \approx \left (\dfrac{3^7} {5+8+9} -2\right)^{1/4} = 3.141592652$ 3.) $\pi \approx \dfrac{7^3+2 \times 6} {(8+5) \times 9 - 4} = \dfrac {355} {113} = 3.1415929$ I'm sure that there are other solutions out there, these are just the first that I could think of.
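The three approximations in the answer are easy to evaluate and compare against $\pi$; a quick Python check (each expression uses the digits $1$–$9$ at most once, as required):

```python
import math

a1 = 3 + (5 * 7) ** ((8 / 9) ** 4) / (2 ** 6 + 1)
a2 = (3 ** 7 / (5 + 8 + 9) - 2) ** (1 / 4)
a3 = (7 ** 3 + 2 * 6) / ((8 + 5) * 9 - 4)  # = 355/113

for approx in (a1, a2, a3):
    print(approx, abs(approx - math.pi))

# all three agree with pi to at least 5 decimal places
assert all(abs(v - math.pi) < 1e-5 for v in (a1, a2, a3))
```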
H: boolean algebra: simplify $ a* b *d + \tilde a *\tilde c*d + b* \tilde c* d$ Simplify the following function (algebraically): $$y = a*b*d + \tilde a *\tilde c*d + b *\tilde c *d$$ The solution is $$a*b*d + \tilde a * \tilde c * d,$$ which I checked via a Karnaugh map and also Wolfram. My "solution" so far: $b*d*(a + \tilde c) + (\tilde a * \tilde c * d)$. There should be a rule that eliminates the $\tilde c$ inside the $(a +\tilde c)$ term. Does anybody know how to do it? AI: Hint: write $$b*\tilde c*d = a*b*\tilde c*d + \tilde a*b*\tilde c*d$$ and then see if you can absorb $a*b*\tilde c*d$ into $a*b*d$ and $\tilde a*b*\tilde c*d$ into $\tilde a*\tilde c*d$ using $x+ x*y = x*(1+y) = x*1 = x$ for suitable choices of $x$ and $y$.
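(The dropped term $b\tilde c d$ is the consensus of the other two, which is why it is redundant.) An exhaustive truth-table check over all $2^4$ assignments confirms the simplification:

```python
from itertools import product

def original(a, b, c, d):
    return (a and b and d) or ((not a) and (not c) and d) or (b and (not c) and d)

def simplified(a, b, c, d):
    return (a and b and d) or ((not a) and (not c) and d)

assert all(bool(original(*v)) == bool(simplified(*v))
           for v in product([0, 1], repeat=4))
```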
H: Given an operator and its' representation in a non-orthogonal basis. Is it normal? Given T, an operator in $V = \mathbb {C^2}$ and a basis $B = \{ (1,1), (1,0) \}$. Is $T$ a normal operator if $[T]_B = \begin{pmatrix} 1 & i \\ 2 & \frac{1}{2} \end{pmatrix}$ and $[T^{*}]_B = \begin{pmatrix} 1 & 2 \\ -i & \frac{1}{2} \end{pmatrix}$. Now this basis is not an orthogonal basis, so can I still check if $[T]_B [T^{*}]_B = [T^{*}]_B [T]_B$? if not, then what do I do? AI: Let $\mathscr B = \{(1, 0), (0, 1)\}$, the standard basis in $\mathbb{C}^2$. Let $M$ be the matrix that represents the change of basis from $\mathscr B$ to $B$. We have: $$ [T]_{\mathscr B} = M^{-1} [T]_B M, \quad [T^*]_{\mathscr B} = M^{-1} [T^*]_B M $$ Thus: $$ [T]_{\mathscr B} [T^*]_{\mathscr B} = M^{-1} [T]_B M M^{-1} [T^*]_B M = M^{-1} [T]_B [T^*]_B M $$ Similarly: $$ [T^*]_{\mathscr B} [T]_{\mathscr B} = M^{-1} [T^*]_B [T]_B M $$ It follows that $[T]_B [T^*]_B = [T^*]_B [T]_B$ if and only if $[T]_{\mathscr B} [T^*]_{\mathscr B} = [T^*]_{\mathscr B} [T]_{\mathscr B}$.
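As a sanity check of the conclusion that commutation can be tested in any basis, one can simply multiply the two given matrices numerically (my computation, not part of the original answer); they do not commute, so this particular $T$ is not normal:

```python
import numpy as np

TB = np.array([[1, 1j], [2, 0.5]])      # [T]_B as given
TstarB = np.array([[1, 2], [-1j, 0.5]]) # [T*]_B as given

left, right = TB @ TstarB, TstarB @ TB
# e.g. the (1,1) entries already differ: 2 vs 5
assert not np.allclose(left, right)
```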
H: For what values of $r$ and $b$ is $X ∩ Y$ a smooth manifold? When it is a manifold, what is its dimension? Consider the hyperbolic paraboloid $X$ contained in $\mathbb{R}^3$, and the sphere $Y$ in $\mathbb{R}^3$ given by the equations $x^2 −y^2 =z$ and $x^2+y^2+(z−1)^2=r^2$. For what values of $r$ and $b$ is $X ∩ Y$ a smooth manifold? When it is a manifold, what is its dimension? I know it is a manifold when the tangent planes to the points of $X$ and $Y$ span $\mathbb{R}^3$. But the problem seems a bit messy and hard to do. AI: Hint: Consider the map $f:\mathbf R^3\to\mathbf R^2$ defined by $$ f(x,y,z) = (x^2-y^2-z, x^2+y^2+(z-1)^2-r^2).$$ Notice that $X\cap Y=f^{-1}\{(0,0)\}$, and determine for what values of $r$ the map $f$ is a submersion at all points of $X\cap Y$.
H: Maximize $(a-1)(b-1)(c-1)$ knowing that : $a+b+c=abc$. If : $a,b,c>0$, and : $a+b+c=abc$, then find the maximum of $(a-1)(b-1)(c-1)$. I noted that : $a+b+c\geq 3\sqrt{3}$, I believe that the maximum is at : $a=b=c=\sqrt{3}$. (Can you give hints). AI: Go for Lagrange Multipliers! Let $f(a,b,c)=(a-1)(b-1)(c-1)$, and $g(a,b,c)=abc-a-b-c$. Then you are trying to maximize $f(a,b,c)$ subject to $g(a,b,c)=0$. Now, we have $$ \nabla f(a,b,c)=\bigl\langle(b-1)(c-1),(a-1)(c-1),(a-1)(b-1)\bigr\rangle $$ and $$ \nabla g(a,b,c)=\langle bc-1, ac-1, ab-1\rangle. $$ So, you want to find all tuples $(a,b,c,\lambda)$ such that $$ \begin{align} \tag{1} (b-1)(c-1)&=\lambda(bc-1)\\ \tag{2} (a-1)(c-1)&=\lambda(ac-1)\\ \tag{3} (a-1)(b-1)&=\lambda(ab-1)\\ \tag{4} abc-a-b-c&=0 \end{align} $$ Go through and solve this system of equations. There are only 3 solutions!
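One of those solutions is the symmetric point $a=b=c=\sqrt3$ suggested in the question; it satisfies the Lagrange system with multiplier $\lambda=(\sqrt3-1)^2/2$, which can be verified exactly with sympy:

```python
import sympy as sp

a, b, c, lam = sp.symbols('a b c lam', positive=True)
f = (a - 1) * (b - 1) * (c - 1)
g = a * b * c - a - b - c

point = {a: sp.sqrt(3), b: sp.sqrt(3), c: sp.sqrt(3)}
lam_val = (sp.sqrt(3) - 1) ** 2 / 2  # candidate multiplier (b-1)(c-1)/(bc-1)

eqs = [sp.diff(f, v) - lam * sp.diff(g, v) for v in (a, b, c)]
assert sp.simplify(g.subs(point)) == 0  # the constraint holds
assert all(sp.simplify(e.subs(point).subs(lam, lam_val)) == 0 for e in eqs)
```

It remains to compare $f$ at this critical point with the other solutions of the system to confirm it is the maximum.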
H: The space of sequences of integers, and an analog of topological space for classes? Is the collection of integer sequences a set or a class? If it's not a set, then is there an analog of topological spaces for classes? Thank you! AI: Assuming that you are talking about sequences of elements of $\Bbb N$ indexed by $\Bbb N$, this collection is indeed a set, so no class-sized analogue of a topological space is needed. A sequence is merely a function from the index set to the codomain. This set is actually $\Bbb{N^N}$. In fact, if $I$ is any set and you want to talk about sequences indexed by $I$, then $\Bbb N^I$ is that set of sequences. Or more generally $A^I$ for a set $A$. To see that this is a set, note that a function $f\colon I\to A$ is a subset of $I\times A$. As both $I$ and $A$ are sets, so is $I\times A$. Therefore $A^I$ is a subset of $\mathcal P(I\times A)$, and hence a set itself.
H: commutator subgroup and semidirect product Suppose $G$ is a solvable group such that $G = N \rtimes H$. Then I can show that $G' = M \rtimes H'$, where $G'$ is the commutator subgroup of $G$ and $M = N \cap G'$, $H' = H \cap G'$. I can also show that $H'$ is indeed the commutator subgroup of $H$. So $H$ and $H'$ are related without this relation explicitly depending on $G'$. Is this also true for $M$ and $N$, i.e. is there a relation between $M$ and $N$ that does not hinge on $G'$? In the case of $N$ abelian of maximal order it might hold that $N \cong Z(G) \times M$, where $Z(G)$ is the center of $G$. At least it holds for the few groups I checked. I'm equally interested in the general and this special case. AI: Here is a fundamental example (if $G$ is finite solvable, then $G/\Phi(G)$ is of the form described here). Let $H$ be a group of invertible $n \times n$ matrices over the field $\mathbb{Z}/p\mathbb{Z}$. Let $N$ be the elementary abelian group of order $p^n$, $N \cong C_p^n$. Then $H$ acts on $N$ by matrix multiplication. Let $G = N \rtimes H$. Then $G' = [G,G] = [N,N][N,H][H,H] = [N,H] \rtimes H'$. We have $[N,H] = \langle n^{-1} n^h : n \in N, h \in H \rangle$ in multiplicative notation, but in matrix notation we just get $$[N,H] = \langle -n + n \cdot h : n \in N, h \in H \rangle = \langle n\cdot (h-1): n \in N, h \in H \rangle = \sum_{h \in H} \newcommand{\im}{\operatorname{im}}\im(h-1)$$ Coprime action: If $H$ has order coprime to $p$, then some fancy linear algebra shows that $\ker((h-1)^n) = \ker(h-1)$ and that $\im(h-1)$ is a direct complement of $\ker(h-1)$ in $N$.
In other words, by Fitting's lemma (applied to the semisimple operator $h-1$), we get $$N = \ker(h-1) \oplus \im(h-1) = C_N(h) \times [N,h]$$ Using some slightly fancier versions of these linear algebra ideas we even get $$N=\left( \bigcap_{h \in H} \ker(h-1) \right) \oplus \left(\sum_{h \in H} \im(h-1) \right) = C_N(H) \times [N,H]$$ Even if $N$ is not abelian, similar ideas give $N=C_N(H)[N,H]$, though the intersection may be non-identity. Defining characteristic: If $H$ has order a power of $p$, then one gets sort of the opposite behavior. The minimum polynomial of $h$ divides $x^{p^n}-1 = (x-1)^{p^n}$, so every eigenvalue of $h-1$ is $0$, and $h-1$ is nilpotent. Hence Fitting's lemma tells us that $N=\ker((h-1)^{p^n}) \oplus \im((h-1)^{p^n})$, but that is useless since $(h-1)^{p^n}=0$, so the kernel is all of $N$ and the image is $1$. If we try to apply this to $h-1$ directly without raising to the $p^n$th power, then things go very weird. Take $h=\begin{bmatrix}1&1\\0&1\end{bmatrix}$. Then $\im(h-1) = \{ (0,x) : x \in C_p \}$ but also $\ker(h-1) = \{ (0,x) : x \in C_p \}$. When $p=2$, this is the $D_8$ example. If one wants larger $N$, then one can take $H=\langle h_1,h_2\rangle$ with $$ h_1=\begin{bmatrix}1&0&1\\0&1&0\\0&0&1\end{bmatrix}, \qquad h_2=\begin{bmatrix}1&0&0\\0&1&1\\0&0&1\end{bmatrix} $$ Then $\im(h_i-1)=\{ (0,0,x) : x \in C_p \}$ but $\ker(h_1-1) = \{ (0,y,z) : y,z \in C_p \}$ and $\ker(h_2-1) = \{ (x,0,z) : x,z \in C_p \}$, so $$\bigcap_{h \in H} \ker(h-1) = \{ (0,0,x) : x \in C_p \}$$ and $$\sum_{h \in H} \im(h-1) = \{ (0,0,x) : x \in C_p \}$$ When $p=2$, this is the $D_8 \operatorname{\sf Y} D_8$ example. Notice how broken the decomposition is here. References: Kurzweil–Stellmacher, Theory of Finite Groups, Chapter 8, is where this really started to make sense for me.
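The defining-characteristic degeneracy is easy to confirm by brute force. The sketch below (my own, in Python with numpy, using the answer's row-vector convention $n \mapsto n \cdot h$ over $\mathbb{Z}/p\mathbb{Z}$) checks that for the unipotent $h$ above, $\ker(h-1)$ and $\operatorname{im}(h-1)$ are the same line rather than complementary subspaces:

```python
import itertools
import numpy as np

p = 2
h = np.array([[1, 1], [0, 1]])          # unipotent, so h - 1 is nilpotent
m = (h - np.eye(2, dtype=int)) % p      # the operator h - 1 over Z/pZ

# enumerate all of N = (Z/pZ)^2 as row vectors
vectors = [np.array(v) for v in itertools.product(range(p), repeat=2)]
ker = {tuple(int(c) for c in v) for v in vectors if not ((v @ m) % p).any()}
im  = {tuple(int(c) for c in (v @ m) % p) for v in vectors}

print(ker == im)  # True: no hope of N = ker(h-1) x im(h-1) here
```

Replacing $h$ by an element of order coprime to $p$ (say over $\mathbb{Z}/3\mathbb{Z}$ with $p'$-order) would instead exhibit the complementary decomposition from the coprime-action case.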
H: $20$ hats problem I've seen this tricky problem, where $20$ prisoners are told that the next day they will be lined up, and a red or black hat will be placed on each person's head. The prisoners will have to guess the hat color they are wearing; if they get it right, they go free. The person in the back can see every hat in front of him and guesses first, followed by the person in front of him, and so on. The prisoners have the night to think of the optimal method for escape. This method ends up allowing $19$ prisoners to always escape: the person in the back counts the number of red hats, and if it's even, says red; if it's odd, says black. This allows the people in front to notice when the parity changes and so determine their own hat colors, with each person in front keeping track as well. What I'm wondering is: what is the equivalent solution for $3$ or more hat colors, and how many people will go free? If possible, a general solution would be nice. AI: Suppose there are $n$ prisoners, with hats of $m$ different colors. Assign to each color a number from $0$ to $m-1$, agreed upon beforehand by all prisoners. Working mod $m$, each prisoner adds up the values of all the colors he sees when standing in line. Let $a_i$ be the value calculated by the $i$th prisoner. Begin by having the first prisoner say the color associated to $a_1$. Now, for $i\ge 2$, having heard the first $i-1$ announced colors, prisoner $i$ can correctly guess the color of his hat to be the color associated to the residue of $a_1-c_2-\cdots-c_{i-1}-a_i$ modulo $m$, where $c_j$ denotes the value of the color announced by prisoner $j$ (by induction, these announcements for $j\ge 2$ are the actual hat values). Why will this be a correct guess?
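The $m$-color strategy can be simulated directly to confirm that all but (possibly) the first prisoner escape. A sketch in Python (the function name and test harness are my own):

```python
import random

def play(hats, m):
    """Run the mod-m strategy; hats[0] is the prisoner at the back of the line."""
    guesses = []
    for i in range(len(hats)):
        seen = sum(hats[i + 1:]) % m            # a_i: total of the hats in front
        if i == 0:
            guesses.append(seen)                # first prisoner announces a_1
        else:
            # a_1 minus the heard hat values minus what he sees = his own hat
            guesses.append((guesses[0] - sum(guesses[1:i]) - seen) % m)
    return guesses

print(play([2, 0, 1, 2], 3))  # [0, 0, 1, 2]: prisoners 2-4 all guess correctly

random.seed(0)
for _ in range(200):
    hats = [random.randrange(5) for _ in range(20)]   # 20 prisoners, 5 colors
    assert play(hats, 5)[1:] == hats[1:]              # 19 always escape
```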