H: Question about continuous function in terms of limits of sequences
I am reading about continuous functions, on this site http://en.wikipedia.org/wiki/Continuous_function, specifically the section "Definition in terms of limits of sequences". My question is: let $c\in \mathbb{R}$ be an arbitrary element of the domain of $f$, where $f$ is defined on a closed and bounded set. Is there always a sequence $x_n$ which converges to $c$?
Thanks for your replies.
AI: There is always the constant sequence $x_n = c$ for all $n$. In the extreme case that the domain of the function is $\{c\}$, there isn't anything else.
In general there is a dichotomy to be made according to whether $c$ is an isolated point of the domain $X$ of $f$. (Let's assume that $X$ is a subset of the real numbers $\mathbb{R}$, which seems to be the context of the question. In fact, with only mild notational change, this answer works in the context of arbitrary metric spaces.) We say that a point $c \in X$ is isolated if there is a $\delta > 0$ such that if $y \in X$ and $|c-y| < \delta$, then $y= c$.
For instance, if $f(x) = \sqrt{x^2(x-1)}$, the natural domain is $\{0\} \cup [1,\infty)$ and $0$ is an isolated point.
Now, for $X \subset \mathbb{R}$ and $c \in X$, the following are equivalent:
(i) $c$ is an isolated point of $X$.
(ii) Every sequence $\{x_n\}$ in $X$ which converges to $c$ is eventually constant: that is, $x_n = c$ for all sufficiently large $n$.
(iii) Every function $f: X \rightarrow \mathbb{R}$ is continuous at $c$.
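To spell out why (i) implies (iii) under the sequential definition of continuity (a brief sketch of the observation above): let $\delta > 0$ witness that $c$ is isolated, and let $\{x_n\}$ be any sequence in $X$ converging to $c$. Then there is an $N$ such that $|x_n - c| < \delta$ for all $n \geq N$, which forces $x_n = c$ for all $n \geq N$. Hence $f(x_n) = f(c)$ eventually, so $f(x_n) \rightarrow f(c)$ for every function $f: X \rightarrow \mathbb{R}$.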
In other words, an isolated point is something of a trivial case. |
H: Graph $y=1/\sin x$
Graph $y=\dfrac{1}{\sin x}$
Now, I looked at the graph on google and got this
I thought that $y=\dfrac{1}{\sin x}$ would be $y=\sin^{-1}x$, but it's apparently not. So if anyone can shed some light on this: it's not just about finding a graph and copying it; I would like a better understanding of this. Also, I know the format for graphing trig functions is $y=a\sin k(x+c)+d$, but I don't understand how to fit the OP into this.
Edit:
So, the OP is $y=(\sin x)^{-1}$, the reciprocal of $\sin x$. Now, all I need help with is how to graph accordingly.
AI: This answer supposes the OP actually wants to plot $\frac{1}{\sin(x)}$, not $\arcsin(x)$.
HINT: what is the value of the sine function on $\dots, -2\pi, -\pi, 0,\pi, 2\pi, \dots$ and on $\dots, -3\pi/2,-\pi/2,\pi/2,3\pi/2,\dots$? What's its behavior in between these points (positive/negative, increasing/decreasing)?
This is a craftsman's hint to the question, which exploits the fact that we know exactly how $\sin(x)$ behaves.
If we are more clever, we can also exploit another fact about the sine function: it is periodic. You know that $\sin(x+2\pi)=\sin(x)$ for all $x\in \mathbb{R}$. This allows you to plot the sine function just on $[0,2\pi)$ and then "copy it" appropriately to get the graph on all of $\mathbb{R}$.
We can exploit this in this case too, since $\frac{1}{\sin(x+2\pi)}=\frac{1}{\sin(x)}$ whenever $\sin(x)$ doesn't vanish. This means that, to plot $\frac{1}{\sin(x)}$, you might just as well plot it on the points of $[0,2\pi]$ where it is defined, and then copy the graph appropriately. This might make it easier, and puts on paper what you surely observed the moment you looked at the graph: it is the same on $[-2\pi,0]$ and on $[0,2\pi]$.
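If it helps to see the graph for yourself, here is a minimal Matlab-style plotting sketch (the cutoff value $10$ is an arbitrary choice, just to mask the blow-ups near the multiples of $\pi$):
% plot 1/sin(x) over two periods, hiding the vertical asymptotes
x = linspace(-2*pi, 2*pi, 2000);
y = 1 ./ sin(x);
y(abs(y) > 10) = NaN;   % NaN values are simply not drawn
plot(x, y);
xlabel('x'); ylabel('1/sin(x)');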
There is another approach which is more mechanical and uses calculus:
HINT: Find out where the function is defined. At the points where it is not defined, find out the lateral limits. Now what calculus tool lets you find out if a function is increasing/decreasing? Compute it, and you will also find the local maxima/minima. |
H: Inscribing a rhombus within a convex quadrilateral
I was wondering if it is possible to inscribe a rhombus within any arbitrary convex quadrilateral using only compass and ruler? If possible, could you describe the method?
If not could you give an example of a convex quadrilateral which one can not inscribe a rhombus within?
AI: EDIT, Saturday July 21: I was wondering if an inscribed rhombus might be required to have edges parallel to the diagonals of the original convex quadrilateral. But this is not always the case. Take a square and, on each edge, mark a point at the same distance counterclockwise from the nearest vertex. This makes an inscribed square, and is one of the oldest proofs-without-words of the Pythagorean Theorem.
ORIGINAL: Given a convex quadrilateral ABCD, the main observation is that the new quadrilateral made up with vertices at the midpoints of the four sides is automatically a parallelogram. So we just need to adjust the relative lengths of the sides: we will make an inscribed rhombus with sides parallel to the midpoint-parallelogram. That is, we make a parallelogram RSTU such that its sides RS and RU have the same length, thus forcing it to be a rhombus. To emphasize, the sides of the rhombus RSTU are parallel to the diagonals AC and BD of the original quadrilateral. I think I will skip the proof that it is inscribed; there is a bunch of similar triangles involved. Many.
Take a quadrilateral such that AB and CD intersect in a point P.
Through D, draw a line $L$ parallel to the diagonal AC.
Using a compass, draw a point B' on $L$ such that DB = DB'.
Draw the segment BB' and let it intersect line CD at B''.
Draw a line $M$ through B'' that is parallel to AC and $L,$ and let $M$ intersect the other diagonal BD at point Q.
Draw the line through PQ and let it intersect edge AD at R. R is the first vertex of our rhombus.
The edges are parallel to the diagonals AC and BD. So, from R draw a line parallel to BD, let it intersect AB in point S. From R, draw another line but parallel to AC this time, let it intersect edge CD in point U. Finally, draw the appropriate parallel lines to the final point T on BC. The quadrilateral RSTU is an inscribed rhombus.
I see, there is a way to fiddle with this if we consider AB and CD parallel instead. Not to worry.
Anyway, give it a try. You may need to adjust the figure so that everything comes out on one page, lines that are supposed to intersect really do, and so on. I did what I describe with compass, straightedge, T-square (actually an L-square), and a nice parallel ruler. |
H: What are vector states?
I am reading some papers from the 70s on operator theory. I come across the term 'vector state' of a $C^*$-algebra quite often. It is a little bit confusing.
Wikipedia redirects to quantum state vector which I found irrelevant.
So could someone give me a definition of a 'vector state'?
In particular, I want to make sense of the following from one of Arveson's papers:
If $M_n$ is the algebra of $n\times n$ matrices, $\mathcal{A}$ is a $C^*$-algebra in $B(\mathcal{H})$, and the $\epsilon_j$ form a basis for $\mathbb{C}^n$, then every vector state $\omega$ of $M_n\otimes\mathcal{A}$ is of the form \begin{equation} \omega((A_{ij}))=\sum_{i,j}(V^*A_{ij}V\epsilon_j,\epsilon_i),\end{equation}where $V$ is an operator from $\mathbb{C}^n$ to $\mathcal{H}$.
This somehow resembles the positive linear functionals or states on $C^*$-algebras, but I am not quite sure. Is that the formula for states on tensors of algebras?
Thanks!
AI: If $\xi$ is a unit vector in a Hilbert space $H$, then the linear functional $w_\xi: B(H)\to \mathbb C$ given by $w_\xi(a)=\langle a\xi,\xi\rangle$ is positive and $w_\xi(1)=1$, so it is a state, and it is called a vector state.
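To connect this with the formula in the question: a unit vector $\xi$ in $\mathbb{C}^n\otimes\mathcal{H}$ can be written as $\xi=\sum_j \epsilon_j\otimes h_j$, and defining $V:\mathbb{C}^n\to\mathcal{H}$ by $V\epsilon_j=h_j$, the vector state $w_\xi$ on $M_n\otimes\mathcal{A}\subseteq B(\mathbb{C}^n\otimes\mathcal{H})$ becomes
$$w_\xi((A_{ij}))=\langle (A_{ij})\xi,\xi\rangle=\sum_{i,j}\langle A_{ij}V\epsilon_j, V\epsilon_i\rangle=\sum_{i,j}(V^*A_{ij}V\epsilon_j,\epsilon_i),$$
which is exactly the expression in Arveson's paper. |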
H: Proving a number with $3^n$ equal digits is divisible by $3^n$
Prove a number with $3^n$ equal digits is divisible by $3^n$.
My thoughts about the problem are: a number with $3^n$ equal digits $d$ is equal to $d\frac {10^{3^n} - 1} {9}$. We use Lifting The Exponent lemma, or plain induction.
AI: This follows from the fact that $x^3-1=(x-1)(x^2+x+1)$ and that $10\equiv1\pmod{3}$. That is,
$$
10^{3^n}-1=\left(10^{3^{n-1}}-1\right)\left(10^{2\cdot3^{n-1}}+10^{3^{n-1}}+1\right)\tag{1}
$$
and
$$
\begin{align}
\left(10^{2\cdot3^{n-1}}+10^{3^{n-1}}+1\right)
&\equiv1+1+1\\
&\equiv0\pmod{3}\tag{2}
\end{align}
$$
so that
$$
\left.3\,\middle|\,\left(10^{2\cdot3^{n-1}}+10^{3^{n-1}}+1\right)\right.\tag{3}
$$
We start out with
$$
\left.3^2\,\middle|\,\left(10^{3^0}-1\right)\right.\tag{4}
$$
Next, combining $(1)$ and $(3)$ yields that
$$
\left.3^{n+1}\,\middle|\,\left(10^{3^{n-1}}-1\right)\right.\Rightarrow\left.3^{n+2}\,\middle|\,\left(10^{3^n}-1\right)\right.\tag{5}
$$
Therefore, induction on $n$, using $(4)$ and $(5)$, says
$$
\left.3^n\,\middle|\,\frac{10^{3^n}-1}{9}\right.\tag{6}
$$
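As a quick sanity check for small $n$: for $n=1$, $\frac{10^3-1}{9}=111=3\cdot 37$ is divisible by $3$, and for $n=2$, $\frac{10^9-1}{9}=111111111=9\cdot 12345679$ is divisible by $3^2$. |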
H: Showing that the closure of the closure of a set is its closure
I have the feeling I'm missing something obvious, but here it goes...
I'm trying to prove that for a subset $A$ of a topological space $X$, $\overline{\overline{A}}=\overline{A}$. The inclusion $\overline{\overline{A}} \subseteq \overline{A}$ I can do, but I'm not seeing the other direction.
Say we let $x \in \overline{A}$. Then every open set $O$ containing $x$ contains a point of $A$. Now if $x \in \overline{\overline{A}}$, then every open set containing $x$ contains a point of $\overline{A}$ distinct from $x$. My problem is: couldn't $\{x,a\}$ potentially be an open set containing $x$ and containing a point of $A$, but containing no other point in $\overline{A}$?
(Also, does anyone know a trick to make \bar{\bar{A}} look right? The second bar is skewed to the left and doesn't look very good.)
AI: How do you define the closure of a set $A$? If it's not already your definition, it might be a useful thing to prove that the closure of a set is precisely the intersection of all closed sets containing it. |
H: Find a continuous solution of the initial-value problem
This question is from DE book by Braun(Pg no 10, Q no 17),
Find a continuous solution of the initial-value problem $y'+y= g(t), y(0)= 0$ where $g(t)=\begin{cases}2, &0 \leq t\leq 1, \\0, &t > 1\end{cases} $
Since the initial condition is given at $(0,0)$, we consider $g(t)=2$, and solving $y'+y=2$ with integrating factor $\mu(t)=e^t$ gives $y=2+Ce^{-t} \Rightarrow C=-2 \Rightarrow y=2(1-e^{-t})$. The answer given in the text is this:
$y(t) = \begin{cases} 2(1-e^{-t}), &0\leq t\leq 1\\ 2(e-1)e^{-t}, &t > 1 \end{cases} $
Even if we consider $g(t)=0$, we get $\ln|y|=-t+C$ and cannot proceed further since there is no initial condition there: $y(1)=?$ My question is how to solve for $y$ at $t>1$, and also: do we find the left and right limits at $t=1$ to prove that the answer is a continuous function?
AI: You already got the solution for $0 \le t \le 1$. What is $y(1)$ in that solution?
Now solve on $1 < t < \infty$ with that as initial condition.
Actually there's a technical quibble here. The solution you get is not differentiable at $t=1$: it has one-sided derivatives from left and right there, but they are different. So it's not really a
solution of the differential equation at $t=1$. It's what is called a weak solution.
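To carry the hint out explicitly: the solution on $0\le t\le 1$ gives $y(1)=2(1-e^{-1})$. On $t>1$ the equation is $y'+y=0$, with general solution $y=Ce^{-t}$; matching $Ce^{-1}=2(1-e^{-1})$ gives $C=2(e-1)$, hence $y(t)=2(e-1)e^{-t}$ for $t>1$, which is the second branch of the textbook answer. Continuity at $t=1$ holds by construction, since both branches equal $2(1-e^{-1})$ there. |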
H: Why are compact operators 'small'?
I have been hearing different people saying this in different contexts for quite some time but I still don't quite get it.
I know that compact operators map bounded sets to totally bounded ones, that the perturbation of a compact operator does not change the index, and that the Calkin algebra is an indispensable tool in the study of operators in the sense that 'essentially something' becomes a useful notion.
But I still wonder why they are 'small'. Now Connes says they are like 'infinitesimals' in commutative function theory, which makes me even more confused. So I guess I just post this question here and hopefully I can hear some good explanations of the reasoning behind this intuition.
Thanks!
AI: It may help to think of the special case of diagonal operators, that is, elements of $\ell_\infty $ acting on $\ell_2$ by multiplication. Here compact operators correspond to sequences which tend to 0, "are infinitesimally small". This is a commutative situation, in which everything reduces to multiplication of functions. So, general compact operators can be called noncommutative infinitesimals.
A shorter explanation, but with less content: every ideal in a ring can be thought of as a collection of infinitesimally small elements, because they are one step (quotient) away from being zero. |
H: Solving a fraction expression without a calculator using properties
I am preparing for the CAT and am not allowed to use a calculator to solve questions like these
If k is an integer and $\frac{35^2-1}{k}$ is also an integer then k could be any of the following except a)8 b)9 c)12 d)16 e)17 (Ans=D)
Are there any properties that I might use here to get the answer without using a calculator
AI: It cannot be $16$.
The reason is that if you divide $35$ by any of $9$, $12$, or $17$, the remainder is $\pm 1$. That means that when you divide $35^2$ by any of $9$, $12$, or $17$, the remainder is $(\pm 1)^2 = 1$. So if you then subtract $1$, you get an exact multiple.
Similarly, if you divide $35$ by $8$, the remainder is $3$; so when you divide $35^2$ by $8$, the remainder is the same as the remainder of $3^2=9$, which is $1$; again, you get a multiple when you subtract $1$.
But when you divide $35$ by $16$ the remainder is $3$; squaring, the remainder is $9$. Subtract $1$, the remainder is $8\neq 0$.
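Equivalently, one can simply factor: $35^2-1=(35-1)(35+1)=34\cdot 36=2^3\cdot 3^2\cdot 17$, which is visibly divisible by $8$, $9$, $12$, and $17$, but only by $2^3$ and not by $2^4=16$. |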
H: How to select an field of study?
As my question implies, I am a young, developing mathematician, with concerns about my future math career. In particular, I'd like to know how to select a future field of study. Through my courses and readings, I've started to entertain the possibility of studying Lie theory. I think this field has some nice, deep structure and is particularly well-suited to my minor interest in physics. Of course, other fields are also appealing, so I am having a hard time narrowing my choices.
My question is, then, how does one select a field of study? I'd love to hear some personal experiences that relate to this question.
Some of the considerations for choosing a field of study, which I have been asking myself, are
What appeals to you? What do you enjoy doing?
What do you want to know? What questions do you want to answer for yourself?
What does the world want to know? What fields have important, pending questions that should be answered?
To what fields is your chosen field connected? Do these connections play a large role in your chosen field? Are you interested in these possible connections?
Which fields are on the rise? Which fields are currently exciting and moving in a good direction?
What are you good at? What areas seem to be intuitive to you?
If someone could talk about some of these considerations in regard to their own experience, I would be particularly grateful. Furthermore, if there are any considerations that I am missing, please feel free to add to the list.
AI: Don't worry about what fields are "hot" or not (point 5). For one thing, that can change very quickly when you least expect it. New results or sudden applications may catapult a particular area to the fore, and answers to questions can doom a field to obscurity. It's going to take you several years from now until you are done, and there is no guarantee that what is "hot" or "on the rise" right now will still be hot or on the rise when you are done.
I would also advise giving preponderance to 1 and 2. If you become a mathematician, you will spend your time thinking about these things, and they had better be things that you want to spend time thinking about. If doing your research becomes a chore, you are doomed, no matter how good you are at it, or how hot the field is. You'll be looking for excuses not to do it.
Point 4 is not really something that can go into deciding what field to study, because you sort of need to know a bit more about the field to really be able to answer it.
Mainly, in my opinion, you'll want to do something that you personally find exciting, interesting, and fun, for whatever reason. Sure, it's nice for the people who are working in hot areas and can easily get grants, since it is always better to be rich and healthy than sick and poor, but remember: if things go well, you'll be doing this for 30-50 years. You want to enjoy it. |
H: Find all solutions, other than $2$ for $12x^3-23x^2-3x+2=0$
Find all solutions, other than $2$ for $12x^3-23x^2-3x+2=0$
I started off by taking out an $x$ and got $$x(12x^2-23x-3)+2=0$$
I do not know if this is the correct first step, if it is, then am I able to use the quadratic formula or complete the square to get the answers. Can anyone give me general hints. Please do not solve this for me in anyway. Just give me hints.
AI: HINT
The key point here is to note that $2$ is a solution. By the factor theorem, we know that the linear factor $(x - 2)$ divides our polynomial.
So perhaps you should divide out $(x-2)$. You'll be left with a quadratic, which we know how to solve very quickly. |
H: Homeomorphisms of X form a topological group
So I'm just learning about the compact-open topology and am trying to show that for a compact, Hausdorff space ,$X$, the group of homeomorphisms of $X$, $H(X)$, is a topological group with the compact open topology. This topology has a subbasis of sets $\{f\in H(X):f(C)\subseteq V\}=S(C,V)$ for compact $C$, open $V$. This is my first attempt at getting to know this topology, so I'd appreciate some help with the part of a proof I have so far, possibly a better way to go about proving this, and any other help understanding this topology.
First, let $c:H(X)\times H(X)\rightarrow H(X)$ be composition. For $f,g\in H(X)$, let $S(C,V)$ be a subbasis set with $f\circ g\in S(C,V)$. So $f(g(C))\subseteq V$, which means $f\in S(g(C),V)$ and $g\in S(C,f^{-1}(V))$. The product of these open sets works since if $h_1(C)\subseteq f^{-1}(V)$ and $h_2(g(C))\subseteq V$, then $h_2\circ h_1(C)\subseteq V$.
Then let $i:H(X)\rightarrow H(X)$ be inversion. Take a subbasis set $O=\{g:g(C)\subseteq U\}$ and consider $i^{-1}(O)=\{g^{-1}:g(C)\subseteq U\}$. If $h^{-1}\in i^{-1}(O)$, then one thing we have is that $h(C)\subset U$, but I'm not totally sure what to do with this. This part seems like it should be easier, but I am just not seeing it.
Thanks.
AI: Here you don't need much topology, it boils down to pure manipulation of sets and bijective functions, and the two following facts: $(*)$ closed subsets of compact spaces are compact, and $(**)$ compact subsets of Hausdorff spaces are closed. Just take complements and use the fact that $h$ is by definition a bijection, so
$$\begin{array}{rcl}
h\in S(C,U) & \Longleftrightarrow & h(C)\subset U\\
& \Longleftrightarrow & X\setminus U\subset \underbrace{X\setminus h(C)}_{=h(X\setminus C)}\\
& \Longleftrightarrow & h^{-1}(X\setminus U)\subset X\setminus C \\
& \Longleftrightarrow & h^{-1}\in S(X\setminus U,X\setminus C)
\end{array}$$
(which is a subbasic open neighborhood) i.e.
$$\mathrm{inv}^{-1}( S(C,U))=\mathrm{inv}(S(C,U))=S(X\setminus U,X\setminus C)$$
which proves the inversion map $\mathrm{inv}$ to be continuous. |
H: Orthogonal complement of a vector bundle
Let $E \rightarrow X$ be a vector bundle with an inner product. If $F$ is a sub-bundle, we can define an orthogonal complement bundle $F^\perp$ (see http://www.math.cornell.edu/~hatcher/VBKT/VB.pdf for the construction, and the source of the problem). I am trying to show that $F^\perp$ is independent of the choice of inner product, up to isomorphism. I have tried to do a local construction, defining the isomorphism in each locally trivial neighborhood, but I do not see a way to patch these together into a global isomorphism. Any suggestions?
AI: Hint: First show that for an inner product space $(V, \langle \cdot, \cdot \rangle)$ and a subspace $W \subseteq V$,
$$W^\perp \cong V/W.$$
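A minimal sketch of this first step: the map $W^\perp \to V/W$, $w \mapsto w+W$, is linear, injective (its kernel is $W^\perp\cap W=0$), and surjective (any $v\in V$ decomposes orthogonally as $v=v_W+v_{W^\perp}$ with $v_W \in W$, so $v+W$ is the image of $v_{W^\perp}$), hence an isomorphism.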
Then, using the quotient bundle construction of the previous exercise (exercise 2), show that for a Euclidean vector bundle $E \longrightarrow X$ with sub-bundle $F$,
$$F^\perp \cong E/F.$$
Since $E/F$ is independent of the bundle metric, this will show that $F^\perp$ does not depend on the bundle metric (up to isomorphism).
Let me know if you want me to provide the details of the argument. |
H: Solve for $x; 12x^3+8x^2-x-1=0$
Solve for $x$. $12x^3+8x^2-x-1=0$ all solutions are rational and between $\pm 1$
As mentioned in my previous answers, I'm guessing I have to use the Rational Root Theorem. But I've done my research and I do not understand what to plug in or anything about it at all. Can someone please dumb this theorem down so I can try to solve this equation. I also do not want anyone to solve this problem for me. Thanks!
AI: The Rational Root Theorem tells you that if the equation has any rational solutions (it need not have any), then when you write them as a reduced fraction $\frac{a}{b}$ (reduced means that $a$ and $b$ have no common factors), then $a$ must divide the constant term of the polynomial, and $b$ must divide the leading term.
Here, the constant term is $1$, so that means that $a$ must be either $\pm 1$. And the leading terms is $12$; the integers that divide $12$ are $\pm 1$, $\pm 2$, $\pm 3$, $\pm 4$, $\pm 6$, and $\pm 12$. That gives twelve possible things to try.
As soon as you find a root $r$, you should stop and factor out $x-r$ from the polynomial, thus reducing the problem to one with smaller degree.
For example, say you wanted to find the roots of
$$6x^3 -25x^2+ 10x - 1.$$
The rational root theorem says that any rational root $\frac{a}{b}$ must have $a$ dividing $1$ (so $a=1$ or $a=-1$), and $b$ dividing $6$ (so $b=\pm 1$, $b=\pm 2$, $b=\pm 3$, or $b=\pm 6$). It doesn't tell you all of them are roots, just that if there are any rational roots, then they must be among
$$\pm 1,\quad\pm\frac{1}{2},\quad\pm\frac{1}{3},\quad \pm\frac{1}{6}.$$
Now, you can just test them. $\pm\frac{1}{1}$ does not work (plugging in $1$ gives $-10 $, plugging in $-1$ gives $-42$). Plugging in $\frac{1}{2}$, $-\frac{1}{2}$, $\frac{1}{3}$, $-\frac{1}{3}$ doesn't work either. Then when you plug in $\frac{1}{6}$, we get
$$\frac{6}{6^3} - \frac{25}{6^2} + \frac{10}{6} - 1 = \frac{1}{36}-\frac{25}{36}+\frac{60}{36} - \frac{36}{36} = 0,$$
so $x=\frac{1}{6}$ is a root. We can then factor out $x-\frac{1}{6}$ from the original polynomial,
$$6x^3 -25x^2 + 10x -1 = \left(x - \frac{1}{6}\right)\left(6x^2-24x+6\right),$$
so we now just need to find the roots of the other factor, $6x^2-24x+6 = 6(x^2-4x+1)$. We can solve this using the quadratic formula, and the roots are
$$\frac{4+\sqrt{16-4}}{2}=2 + \sqrt{3},\qquad \text{and}\qquad \frac{4-\sqrt{16-4}}{2} = 2-\sqrt{3}.$$
So the rational root theorem gave us a finite collection of possible roots; we check them, and if we get lucky and find a root among them, we can use it to reduce the degree of the polynomial by $1$ (by factoring out $x-r$) and so exchange the original problem for a simpler one (instead of a polynomial of degree 3, we now have a polynomial of degree 2). |
H: Still another diophantine equation
Can any of you guys provide a hint for thew following exercise?
Exercise. There is no $3$-tuple $(x,y,z) \in \mathbb{Z}^{3}$ such that $x^{10}+y^{10} = z^{10}+23$.
Thanks a lot for your insightful replies.
AI: The first thing to do is to check for local obstructions, that is some prime modulus $p$ where the equation is impossible modulo $p$ (sometimes $p^2$, $p^3$, etc.).
In this particular case, a tempting modulus to try is $11$, since $x^{10}$ is always $0$ or $1$ mod $11$. Unfortunately, this equation is feasible mod $11$ as $$1^{10} + 1^{10} \equiv 1^{10} + 23 \pmod{11}.$$ Another modulus worth trying is $5^2$, since $x^{10}$ is always $0,\pm1$ mod $25$. Unfortunately, there is still a solution mod $25$.
Can you think of another modulus that places strong constraints on $x^{10}$? Hint: you will want the order of the multiplicative group $(\mathbb Z/p\mathbb Z)^\times$ to have a large common factor with $10$. |
H: Prove that for $n \in \mathbb{N}, \sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n $
I'm learning the basics of proof by induction and wanted to see if I took the right steps with the following proof:
Theorem: for $n \in \mathbb{N}, \sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n $
Base Case:
let $$ n = 1 $$ Therefore $$2\cdot1+1 = 1^{2}+2\cdot1, $$ which proves the base case is true.
Inductive Hypothesis:
Assume $$\sum_{k=1}^{n} (2k+1) = n^{2} + 2n $$
Then $$\sum_{k=1}^{n+1} (2k+1) = (n+1)^{2} + 2(n+1) $$
$$\iff (2(n+1) +1)+ \sum_{k=1}^{n} (2k+1) = (n+1)^{2} + 2(n+1) $$
Using inductive hypothesis on summation term:
$$\iff(2(n+1) +1)+ n^{2} + 2n = (n+1)^{2} + 2(n+1) $$
$$\iff 2(n+1) = 2(n+1) $$
Hence for $n \in \mathbb{N}, \sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n $ Q.E.D.
Does this prove the theorem? Or was my use of the inductive hypothesis circular logic?
AI: Your proof looks fine, but if you know that
$$1+2+...+n=\frac{n(n+1)}{2}$$
then you can evaluate
$$\sum_{k=1}^n(2k+1)=2\sum_{k=1}^n k+\sum_{k=1}^n1=\rlap{/}2\frac{n(n+1)}{\rlap{/}2}+n=n^2+2n$$ |
H: Solve $\, \mathrm dy/\, \mathrm dx = e^{x^2}$
I want to solve $$\frac{\, \mathrm dy}{\, \mathrm dx}=e^{x^{2}}.$$ I used the separation of variables method, but I am stuck at the integration of $\int e^{x^{2}}\, \mathrm dx$; I don't know what this integral is. Please help me out!
AI: $$
\frac{dy}{dx}=e^{x^{2}}
$$
has no elementary solution. The error function (also called the Gauss error function) is a special function (non-elementary) of sigmoid shape which occurs in probability, statistics and partial differential equations. It is defined as:
$$
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_{0}^x e^{-t^2} dt.
$$
See the link for reference and more information.
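For the record, the particular antiderivative here can be written with the imaginary error function $\operatorname{erfi}(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{t^2}\,dt$, giving
$$y(x)=\frac{\sqrt{\pi}}{2}\operatorname{erfi}(x)+C,$$
but $\operatorname{erfi}$, like $\operatorname{erf}$, is not an elementary function. |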
H: How to prove that $2^\omega=\mathfrak{c}$?
Let $\mathfrak{c}$ denote the continuum.
My textbook says that $2^\omega=\mathfrak{c}$. How can one prove this equality?
Thanks ahead:)
AI: Clearly $2^\omega$ has cardinality equal to $|P(N)|$. Think of a subset of the naturals as a sequence of zeros and ones. (A one indicates a particular element is in the subset; a zero indicates it isn't.) Between every pair of adjacent digits put a 3, and at the beginning put a decimal point. (E.g., $11001\dots$ becomes $0.131303031\dots$; this is done to overcome the problem of ambiguity in representations.)
Now every subset has been put in a one-to-one correspondence with a real number. So $|P(N)|\le |(0,1)|$. Conversely, every real number between $0$ and $1$ has a binary expansion, albeit sometimes redundantly; removing the point gives us a 0-1 sequence, i.e., a subset of the naturals, and hence that particular set of reals has cardinality $\le |P(N)|$. By the Schröder–Bernstein theorem we have $|P(N)|=|(0,1)|$. It is now trivial to conclude that $|P(N)|=|R|$. (Hint: think of the tan function.) As an added bonus, by Cantor's theorem ($|A|<|P(A)|$) we have also established that the reals are uncountable. |
H: Solve the following inequality $x^2+x+1\gt 0$
Solve the following inequality $x^2+x+1\gt 0$
I understand how to solve inequalities and what the graphs look like. Usually the first step is to set this as in equation and then find the zeros. But for this one when I used the quadratic formula my two answers were:
$$\dfrac{-1+i\sqrt{3}}{2}$$
$$\dfrac{-1-i\sqrt{3}}{2}$$
I do not know what to do after this, or if I messed up in any way. Or, if there is another way to solve this problem. Any hints help. Please do not solve this problem in any way for me. Thanks!
AI: Complete the square: $x^2+x+1=\left(x+\frac12\right)^2+\frac34$. For what values of $x$ is this positive?
Added: Here’s an explanation of completing the square. Suppose that you have a quadratic $x^2+ax+b$. You want to write this in the form $(x+c)^2+d$ for some constants $c$ and $d$. We know that $(x+c)^2+d=x^2+2cx+(c^2+d)$, and we want this to be identically equal to $x^2+ax+b$. That is, we want $x^2+2cx+(c^2+d)$ and $x^2+ax+b$ to be the same polynomial. Clearly this means that we must have $2c=a$ and $c^2+d=b$. In particular, we must have $c=\frac{a}2$. I could also solve for $d$, but in practice it’s easier to work it out each time than it is to use a formula.
Knowing now that $c=\frac{a}2$, I write $\left(x+\frac{a}2\right)^2$ as a first approximation to $x^2+ax+b$, and then I multiply it out to get $x^2+ax+\frac{a^2}4$. This approximation gives me the right $x^2$ and $x$ terms, but in general it gives me the wrong constant term, because $\frac{a^2}4$ is rarely equal to $b$. Therefore I have to adjust my approximation $\left(x+\frac{a}2\right)^2$. I do so by subtracting $\frac{a^2}4$ and adding $b$:
$$\left(x+\frac{a}2\right)^2-\frac{a^2}4+b=x^2+ax+b\;,$$
as desired. In the case of the quadratic $x^2+x+1$, $a=1$, so $c=\frac{a}2=\frac12$, and my first approximation was $\left(x+\frac12\right)^2$. This has a constant term of $\frac14$ instead of the desired $1$, so I knew that I had to add another $\frac34$.
The only remaining issue is what to do when the coefficient of $x^2$ isn’t $1$, i.e., when we’re dealing with $ax^2+bx+c$ with $a\ne 1$. The easiest approach is to factor out the $a$ to get
$$a\left(x^2+\frac{b}ax+\frac{c}a\right)\;.$$
Then complete the square on $x^2+\frac{b}ax+\frac{c}a$ to get
$$a\left(\left(x+\frac{b}{2a}\right)^2+\text{some constant}\right)\;.$$ |
H: Question about proof of chain conditions
Here is a proof from Atiyah-Macdonald:
For i) $\implies$ ii) could one not write "If $(x_n)$ is such that $x_m = x_{m+1} = \dots$ then obviously $x_m$ is a maximal element"?
I am asking because the book has been getting terser with proofs and has reached super-terse by now but "i) $\implies$ ii)" seems much longer than necessary. So I must be missing something.
Note: for the equivalence of these two statements we need choice (which is not mentioned in the book).
AI: It seems to me that the countability issue is something of a red herring.
$\Sigma$ is a partially ordered set, and $T$ is a nonempty subset of $\Sigma$. Even if $T$ is countable, it need not be an increasing chain, i.e., of the form $\{x_n\}_{n=1}^{\infty}$ with $x_1 \leq x_2 \leq \ldots \leq x_n \leq \ldots$. So if an increasing chain $\{x_n\}_{n=1}^{\infty}$ in $T$ stabilizes then the eventual value of the sequence is a maximal element of the subset $\{ x_n\}_{n=1}^{\infty}$ of $T$, which need not be a maximal element of $T$ itself. Thus the "longer" proof given is necessary.
Added: For my take on the result in question, see Lemma 10 of these notes on factorization. (This seems to be currently missing from my commutative algebra notes, which only mostly include the material in this shorter set of notes.) To my eye what I say is the same as what Atiyah-Macdonald say, but someone learning the material for the first time might (possibly) think otherwise. |
H: How to transform a differential equation to easier one using change of variables?
I am currently studying analysis of algorithms and have come across a paper about median-of-3 partitioning for quickselect. The authors solved the recurrence using generating functions and arrived at a partial differential equation (eq. 6 in the paper):
$$\frac{1}{6} \frac{\partial^2Q}{\partial z^2} - \left [ \frac{1}{(1-z)^2} + \frac{u^2}{(1-uz)^2} \right]Q = \frac{u}{(1-u)} \left [ \frac{1}{(1-z)^4} - \frac{u^3}{(1-uz)^4} \right].$$
Expressing $Q(z,u)$ as (eq 7 in paper)
$$Q(z,u) = \frac{1}{(1-z)^2(1-uz)^2}E(z,u)$$
and substituting $z = 1+t(1-u)/u$, i.e.,
$$G(t,u)=E\left(1+\frac{1-u}{u}t,u\right),$$
(eq. 8 in the paper), the original equation transforms to:
$$t(1-t)\frac{\partial^2 G}{\partial t^2} + (-4+8t)\frac{\partial G}{\partial t} - 8G = 6(1-u)\frac{u(1-t)^4 - t^4}{t(1-t)}$$
(eq. 9 in the paper). Now, I am not able to understand how the authors get this equation, because when I try the same substitutions I get the one below:
$$t^2\frac{\partial^2 G}{\partial t^2} - 8t\frac{\partial G}{\partial t} + 2G \left [ 10 + \frac{u^2}{(1-u)^2} \right] = 6u^2t^2.$$
Are both equations the same, or am I making a mistake somewhere?
AI: It looks like you are missing some terms.
I give a few partial results that should allow you to track them down.
$$\begin{eqnarray*}
\frac{d}{dz} &=& \frac{dt}{dz} \frac{d}{dt} \\
&=& \frac{1}{dz/dt} \frac{d}{dt} \\
&=& \frac{u}{1-u} \frac{d}{dt} \\
\frac{d}{dz} Q(z) &\rightarrow&
\frac{u}{1-u} \frac{d}{dt} \left(\frac{u^2}{(1-u)^4} \frac{1}{t^2(1-t)^2} G(t) \right) \\
&=& \frac{u^3}{(1-u)^5} \frac{1}{t^3(1-t)^3} \left[t(1-t)G'(t) - 2(1-2t)G(t)\right]
\hspace{5ex} \clubsuit \\
\frac{d^2}{dz^2} Q(z) &\rightarrow& \frac{u}{1-u} \frac{d}{dt} \left(\clubsuit\right)
\end{eqnarray*}$$ |
H: Product of Riemannian manifolds?
Given two Riemannian manifolds $(M,g^M)$ and $(N,g^N)$ is there a natural way to combine them to be a Riemannian manifold? Some kind of $(M \times N, g^{M \times N})$.
AI: Yes. Using the natural isomorphism $T(M \times N) \cong TM \times TN$, define the metric on $T(M \times N)$ as follows for each $(p,q) \in M \times N$.
$$g^{M\times N}_{(p,q)} \colon T_{(p,q)}(M \times N) \times T_{(p,q)}(M \times N) \to \mathbb{R},$$
$$((x_1,y_1),(x_2,y_2)) \mapsto g^M_p(x_1,x_2) + g^N_q(y_1,y_2).$$
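Concretely, in local coordinates coming from charts on $M$ and $N$, this product metric is represented by the block-diagonal matrix
$$g^{M\times N}=\begin{pmatrix} g^M & 0\\ 0 & g^N\end{pmatrix};$$
for example, the flat torus is $S^1\times S^1$ with the product of the standard circle metrics.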
Alternately, if you think of the metrics as vector bundle isomorphisms $g^M \colon TM \to T^* M$ and $g^N \colon TN \to T^* N$, then the metric on $M \times N$ is just the induced vector bundle isomorphism
$$g^M \oplus g^N \colon TM \oplus TN \to T^*M \oplus T^* N,$$
$$(x,y) \mapsto (g^M(x), g^N(y))$$
Here, $TM \oplus TN$ is just $TM \times TN$ as a set, and the base manifold of $TM \oplus TN$ is $M \times N$. |
H: Showing that a homogenous ideal is prime.
I'm trying to read a proof of the following proposition:
Let $S$ be a graded ring, $T \subseteq S$ a multiplicatively closed set. Then a homogeneous ideal maximal among the homogeneous ideals not meeting $T$ is prime.
In this proof, it says
"it suffices to show that if $a,b \in S$ are homogeneous and $ab \in \mathfrak{p}$, then either $a \in \mathfrak{p}$ or $b \in \mathfrak{p}$"
where $\mathfrak{p}$ is our maximal homogeneous ideal. I don't know how to prove this is indeed sufficient. If I try writing general $a$ and $b$ in terms of "coordinates":
$$a=a_0+\cdots+a_n$$ where $a_0 \in S_0,$
then I can see it working for small $n$, but it seems to get so complicated I wouldn't know how to write down a proof. Is there a better way to attack this problem?
AI: We wish to prove:
If $S$ is a $\mathbb{Z}$-graded ring and $\mathfrak{p}$ is a homogeneous ideal of $S$ satisfying $ab \in \mathfrak{p}$ implies $a$ or $b$ in $\mathfrak{p}$ for homogeneous $a$ and $b$, then $\mathfrak{p}$ is prime.
So take two general elements $a,b \in S$, and assume $ab \in \mathfrak{p}$ but neither $a$ nor $b$ is in $\mathfrak{p}$. Let $a = \sum a_d$ and $b = \sum b_d$ be their homogeneous decompositions. Since $a \not \in \mathfrak{p}$, some $a_d \not \in \mathfrak{p}$, and since all but finitely many $a_d$ are $0$, there exists a largest integer $d$ such that $a_d \not \in \mathfrak{p}$. Similarly, there exists a largest integer $e$ such that $b_e \not \in \mathfrak{p}$.
Since $ab \in \mathfrak{p}$ and $\mathfrak{p}$ is a homogeneous ideal, then all the components of $ab$ are in $\mathfrak{p}$. The $d+e$ component of $ab$ is $\sum a_i b_j$ where we sum over all pairs $(i,j)$ with $i+j = d+e$. But each such pair $(i,j)$, other than $(d,e)$, must have either $i>d$ or $j>e$, and hence (by the maximality of $d$ and $e$) we have $a_i b_j \in \mathfrak{p}$. Thus $a_d b_e \in \mathfrak{p}$ also, yet neither $a_d$ nor $b_e$ is in $\mathfrak{p}$, which contradicts the original assumption about $\mathfrak{p}$ for the homogeneous elements $a_d$ and $b_e$.
In short: If $a,b$ is a general counterexample for the primality of $\mathfrak{p}$, then $a_d, b_e$ is a homogeneous counterexample. |
H: What can be the possible value of $a+b+c$ in the following case?
What can be the possible value of $a+b+c$ in the following case?
$$a^{2}-bc=3$$
$$b^{2}-ca=4$$
$$c^{2}-ab=5$$
$0, 1, -1$ or $1/2$?
After doing $II-I$, $III-I$ and $III-II$, I got,
$$(a+b+c)(b-a)=1$$
$$(a+b+c)(c-a)=2$$
$$(a+b+c)(c-b)=1$$
I'm unable to solve further, please help.
AI: You have
$$\left\{
\begin{align*}
&(a+b+c)(b-a)=1\\
&(a+b+c)(c-a)=2\\
&(a+b+c)(c-b)=1\;.
\end{align*}\right.\tag{1}$$
These clearly imply that $a+b+c\ne 0$, so the first and third of these imply that $b-a=c-b$. In other words, $\langle a,b,c\rangle$ is an arithmetic progression. (The second equation of $(1)$ confirms this.) Set $d=b-a$; then $b=a+d$ and $c=a+2d$, so $a+b+c=3(a+d)$, and each of the equations in $(1)$ reduces to $3d(a+d)=1$.
Going back to the original equations, we see that
$$\begin{align*}
3&=a^2-bc=a^2-(a+d)(a+2d)\\
&=-3ad-2d^2=-3d(a+d)+d^2\\
&=d^2-1\;,
\end{align*}$$
or $d^2=4$. Thus, $d=\pm 2$, $1=3d(a+d)=\pm6(a+d)$, and $a+b+c=3(a+d)=\pm\frac12$.
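As a sanity check, take $d=2$, so that $a+d=\frac16$: then $a=-\frac{11}6$, $b=\frac16$, $c=\frac{13}6$, and indeed $a^2-bc=\frac{121-13}{36}=3$, $b^2-ca=\frac{1+143}{36}=4$, and $c^2-ab=\frac{169+11}{36}=5$, with $a+b+c=\frac12$. |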
H: The neighborhood of each points of a special topological space
Let $R$ be the real line and $R^* = R \cup \{x^*\}$, where $x^*$ is not in $R$. For any subset $A$ of $R^*$, we define $cl_{R^*}(A)$ as follows:
if $A$ is finite, then $cl_{R^*}(A)=A$; if $A$ is infinite, then $cl_{R^*}(A)=cl_R(A-\{x^*\}) \cup \{x^*\}$.
This generates a topology on $R^*$.
My question is this: what are the neighborhoods of $x^*$, and of a point $x$ of $R$?
AI: If $A$ is an infinite subset of $\Bbb R$, then $x^*\in\operatorname{cl}_{\Bbb R^*}A$, so $\Bbb R^*\setminus A$ does not contain an open nbhd of $x^*$. On the other hand, if $F$ is a finite subset of $\Bbb R$, then $\Bbb R^*\setminus F$ is an open nbhd of $x^*$. It follows that the open nbhds of $x^*$ are precisely the sets $\Bbb R^*\setminus F$ such that $F$ is a finite subset of $\Bbb R$: the topology at $x^*$ is cofinite.
Now suppose that $x\in\Bbb R$, and let $U$ be an open nbhd of $x$ in $\Bbb R^*$. If $x^*\in U$, then $U=\Bbb R^*\setminus F$ for some finite $F\subseteq\Bbb R\setminus\{x\}$, so assume that $x^*\notin U$. Then $\Bbb R^*\setminus U$ is a closed set containing $x^*$. Let $C=\Bbb R^*\setminus U$. Then
$$C=\operatorname{cl}_{\Bbb R^*}C=\operatorname{cl}_{\Bbb R}\big(C\setminus\{x^*\}\big)\cup\{x^*\}\;,$$
so $C\setminus\{x^*\}=\operatorname{cl}_{\Bbb R}\big(C\setminus\{x^*\}\big)$, and $C\setminus\{x^*\}$ is therefore closed in $\Bbb R$. But $C\setminus\{x^*\}=\Bbb R\setminus U$, so $U$ is open in $\Bbb R$. Thus, $U\subseteq\Bbb R^*$ is an open nbhd of $x\in\Bbb R$ iff either $U\subseteq\Bbb R$ is an open nbhd of $x$ in $\Bbb R$, or $U=\Bbb R^*\setminus F$ for some finite $F\subseteq\Bbb R\setminus\{x\}$.
(I normally try not to give a complete answer right away when the question is homework, but in this case (a) I didn’t notice the tag until after I’d posted the answer, and (b) I’m not at all sure that I could give a very helpful hint without doing most of the problem anyway.) |
H: For what value of k, $x^{2} + 2(k-1)x + k+5$ has at least one positive root?
For what value of k, $x^{2} + 2(k-1)x + k+5$ has at least one positive root?
Approach: Case I : Only $1$ positive root, this implies $0$ lies between the roots, so $$f(0)<0$$ and $$D > 0$$
Case II: Both roots positive. It implies $0$ lies behind both the roots. So, $$f(0)>0$$
$$D≥0$$
Also, abscissa of vertex $> 0 $
I did the calculation and found the intersection but it's not correct. Please help. Thanks.
AI: You only care about the larger of the two roots - the sign of the smaller root is irrelevant. So apply the quadratic formula to get the larger root only, which is
$\frac{-2(k-1)+\sqrt{4(k-1)^2-4(k+5)}}{2} = -k+1+\sqrt{k^2-3k-4}$. You need the part inside the square root to be $\geq 0$, so $k$ must be $\geq 4$ or $\leq -1$. Now, if $k\geq 4$, then to have $-k+1+\sqrt{k^2-3k-4}>0$, you require $k^2-3k-4> (k-1)^2$, which is a contradiction. Alternately, if $k\leq -1$, then $-k+1+\sqrt{k^2-3k-4}$ must be positive, as required.
So you get the required result whenever $k\leq -1$.
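As a check at the endpoint $k=-1$: the quadratic becomes $x^2-4x+4=(x-2)^2$, with the (double) root $x=2>0$, as expected. |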
H: A simple question about the open mapping theorem
$X, Y$: Banach spaces, $T : X \to Y$: a bounded linear operator, onto. I'm studying the open mapping theorem, but how can I prove the following?
If $B_Y (0, \epsilon_1 ) \subset \overline{T(B_X (0, \epsilon_2 ))}$ then $B_Y ( 0, 2 \epsilon_1 ) \subset \overline{T(B_X (0, 2 \epsilon_2 ))} $
Here $B_X (0, a) := \{ x \in X \;|\; \| x \|_X < a \}$ and the overline(bar) means its closure.
AI: Let $y\in B_Y(0,2\varepsilon_1)$. Then $\frac 12 y\in B_Y(0,\varepsilon_1)$, and we can find a sequence $\{x_n\}\subset B_X(0,\varepsilon_2)$ such that $\frac 12 y=\lim_{n\to +\infty}Tx_n$. This gives $y=\lim_{n\to +\infty}T(2x_n)$. Since $\{2x_n\}\subset B_X(0,2\varepsilon_2)$, we have $y\in \overline{T(B_X(0,2\varepsilon_2))}$.
Note that it also works if we replace $2$ by any positive constant. |
H: Showing that the last digit of $a$ and $a^{13}$ are the same
For $a \in \mathbb N$, show that the last digit of $a$ and $a^{13}$
are the same.
For example: $2^{13} = 8,192$
$7^{13} = 96,889,010,407$
AI: To rephrase the question, you want to show that $a^{13}\equiv a\pmod{10}$, or both $a^{13}\equiv a\pmod2$ and $a^{13}\equiv a\pmod5$. We can do this through the use of Fermat's Little Theorem.
Let's do the case for 5. If $a\equiv 0\pmod5$, then $a^{13}\equiv0^{13}\equiv0\equiv a\pmod5$. Otherwise, we have $a^4\equiv1\pmod5$. So we have $a^{13}=(a^4)^3a\equiv1^3\times a\equiv a\pmod5$.
The case for 2 is similar if you need to prove that an odd number to an integer power is odd and an even number to an integer power is even.
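If you want to confirm this empirically, here is a quick Matlab-style check (the powers involved stay well below $2^{53}$, so double-precision arithmetic is exact here):
% verify that a^13 and a have the same last digit for each digit a
for a = 0:9
    assert(mod(a^13, 10) == mod(a, 10));
end
Each assertion holds, matching the congruence argument above. |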
H: Find the limit of $f(x)$ involving a sum of logarithms.
I need to find
$\lim_{x\rightarrow0}f(x)$ for the following function:
$f:(0,+\infty)\to\mathbb{R}$,
$f(x)=[1+\ln(1+x)+\ln(1+2x)+\dots+\ln(1+nx)]^\frac{1}{x}$
I tried writing the logarithms as products:
$\lim_{x\rightarrow0}[1+\ln(1+x)(1+2x)\dots(1+nx)]^\frac{1}{x}$
and as a sum and nothing is getting me anywhere.
Also I know I have to use the formula:
$\lim_{x\rightarrow0}(1+x)^\frac{1}{x}=e$
Can someone please help me?
Thank you very much!
AI: Here is the solution: taking $\ln$ of both sides gives
$$\ln \left( f \left( x \right) \right) ={\frac {\ln \left( 1+\sum _{
k=1}^{n}\ln \left( 1+kx \right) \right) }{x}}
$$
Using the Taylor expansion $\ln \left( 1+t \right) = t+ O \left( t^2 \right)$ at the point $t=0$ with $t = {\sum _{k=1}^{n}\ln \left( 1+kx \right) } $ yields
$$ \ln \left( f \left( x \right) \right) ={\frac {\sum _{k=1}^{n}\ln
\left( 1+kx \right) }{x}} + \frac{O\left(\left( {\sum _{k=1}^{n}\ln
\left( 1+kx \right) } \right)^2\right)}{x} $$
Taking the limit as $x$ goes to $0$ on both sides of the above equation (the second term vanishes, since $\sum_{k=1}^{n}\ln(1+kx)=O(x)$) gives
$$ \lim_{x\to 0}\ln(f(x))=\ln\left(\lim_{x\to 0} f(x) \right) =\sum _{k=1}^{n}k =\frac{n(n+1)}{2}\,.$$
Exponentiating the last result, we get the answer
$$ {\rm e}^{\frac{n(n+1)}{2}} $$
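For instance, with $n=1$ this says $\lim_{x\to0}\bigl(1+\ln(1+x)\bigr)^{1/x}=e$, which also follows directly from $\ln(1+x)\sim x$ together with the standard limit quoted in the question. |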
H: What does this notation, $x>x_0(\epsilon)$, mean?
What does this notation, $x>x_0(\epsilon)$, mean?
I have seen this in several proofs and haven't quite figured it out.
AI: $x_0(\varepsilon)$ means that $x_0$ depend on $\varepsilon$. For example
$$\forall\varepsilon>0, \exists x_0(\varepsilon)>0, \forall x>x_0(\varepsilon), \frac 1{1+x^2}\leq \varepsilon.$$
It's particularly useful when the $x_0$ may depend on several variables; in this case it shows which variables $x_0$ depends on. An example is the difference between pointwise convergence of a sequence of functions $\{f_n\}$ defined on a set $S$:
$$\tag{PC}\forall \varepsilon,\forall x\in S,\exists n_0(\varepsilon,x),\, \forall n\geq n_0(\varepsilon,x),\quad |f_n(x)-f(x)|\leq\varepsilon,$$
and the uniform convergence on $S$:
$$\tag{UC}\forall \varepsilon,\exists n_0(\varepsilon),\, \forall x\in S,\forall n\geq n_0(\varepsilon),\quad |f_n(x)-f(x)|\leq\varepsilon.$$
In these two case, this can be written without marking the dependence, since it's implicit. |
H: Example of Artinian module that is not Noetherian
I've just learned the definitions of Artinian and Noetherian module and I'm now trying to think of examples. Can you tell me if the following example is correct:
An example of a $\mathbb Z$-module $M$ that is not Noetherian: Let $G_{1/2}$ be the additive subgroup of $\mathbb Q$ generated by $\frac12$. Then $G_{1/2} \subset G_{1/4} \subset G_{1/8} \subset \dotsb$ is a chain with no upper bound hence $M = G_{1/2}$ as a $\mathbb Z$-module is not Noetherian.
But $M$ is Artinian: $G_{1/2^n}$ are the only subgroups of $G_{1/2}$. So every decreasing chain of submodules $G_i$ is bounded from below by $G_{1/2^{\min i}}$.
Edit In Atiyah-MacDonald they give the following example:
Let $G$ be the subgroups of $\mathbb{Q}/\mathbb{Z}$ consisting of all elements whose order is a power of $p$, where $p$ is a fixed prime.
Then $G$ has exactly one subgroup $G_n$ of order $p^n$ for each $n \geq 0$, and $G_0 \subset G_1 \subset \dotsb \subset G_n \subset \dotsb$ (strict inclusions) so that $G$ does not satisfy the a.c.c.
On the other hand the only proper subgroups of $G$ are the $G_n$, so that $G$ does satisfy d.c.c.
Does one have to take the quotient $\mathbb{Q}/\mathbb{Z}$?
AI: Fix a prime $p$ and let $M_p={\Bbb Z}(\frac1p)/{\Bbb Z}$.
It is not difficult to see that the only submodules of $M_p$ are those generated by $\frac1{p^k}+{\Bbb Z}$ for $k\geq0$. From this it follows that $M_p$ is Artinian but not Noetherian. |
H: Notation for countable products of sets
Let $X$ be some set and $\Omega = \prod\limits_{i=0}^\infty X$, so that any element in $\Omega$ can be written as
$$
\omega= (\omega_0,\omega_1,\dots)
$$
where $\omega_i\in X$, $i \geq 0$. Given $A\subset X$ I wonder what is the right notation for the set
$$
A_0 = \{\omega\in \Omega: \omega_0\in A\}
$$
I think, the most formal is to write $A_0 = A\times \prod\limits_{i=1}^\infty X$, but I wonder if I can write $A_0 = A\times \Omega$?
I have doubts because if the second notation is formal, then it should mean that $\Omega = X\times \Omega$ - and I am not sure if the latter is correct.
AI: Neither notation is strictly formal from a strictly set-theoretical point of view. An infinite cartesian product is a set of functions from the index set into the union of the producted sets, while a finite cartesian product is usually seen as a set of tuples (which can be seen as ordered pairs of ordered pairs, etc., or just ordered pairs in the case of a product of two sets), so if you go down to set-theoretical formalism, they're distinct objects (that is, $A\times \prod_1^\infty X$ is a set of pairs consisting of an element of $A$ and a sequence of elements of $X$).
The most formal way to write it would be just the thing you've written using set-builder notation: $\lbrace \omega\in\Omega\vert \omega_0\in A\rbrace$.
That said, unless you're explicitly dealing with very low-level objects, looking too closely at set-theoretical formalism is neither useful nor enlightening; I think the other two are good enough, if you make it clear what you mean. You might just spell it out in words, or just write something to highlight the abuse like $A\times \Omega\subseteq \Omega$.
Also, a useful notation for the cartesian (countable) power is $X^{\mathbf N}$ or $X^{\aleph_0}$ (or even $X^\omega$, but that would be a little too confusing in this context). |
H: Proof of $M$ Noetherian if and only if all submodules are finitely generated
Is my proof of proposition 6.2 on page 75 correct? (it's different from what they do in the book)
Proposition 6.2.: $M$ is a Noetherian $A$ module $\iff$ every submodule of $M$ is finitely generated.
My proof:
$\implies$ Assume $M$ has a submodule $N$ that is not finitely generated. Say, $N = \langle \bigcup_{i=1}^\infty \{ n_i \} \rangle$. Then the following is an increasing chain that is not stationary: $\langle n_1 \rangle \subset \langle n_1, n_2 \rangle \subset \langle n_1, n_2, n_3 \rangle \subset \dots$
Hence $M$ is not Noetherian.
$\Longleftarrow$ Assume $M$ is not Noetherian. Then there is a non-stationary increasing chain of submodules $N_1 \subset N_2 \subset \dots$. Then $N = \bigcup_{k=1}^\infty N_k$ is a submodule. If $N$ was finitely generated, $N_k$ would be stationary. Hence $N$ is a submodule that is not finitely generated.
AI: The first part is not quite correct: you have assumed $N$ is countably generated.
The second part looks good, though you should make sure you know why the fact that $N$ is finitely generated implies that the sequence is stationary. |
H: Inverse problem from pdes
A linear inverse problem is given by:
$\ \mathbf{d}=\mathbf{A}\mathbf{m}+\mathbf{e}$
where d: observed data, A: theory operator, m: unknown model and e: error.
To minimize the effect of the noise; a Least Square Error (LSE) model estimate is commonly used:
$\ \mathbf{\tilde{m}}=(\mathbf{A^\top A})^{-1}\mathbf{A^\top d}$
As an example problem I am considering the model:
$\ T(x,z) = 0.5\sin(2\pi x) - z$
My observed data is the noisy partial derivatives of T:
$\ P(x,z)=T_x+e$
$\ Q(x,z)=T_z+e$
By introducing matrix finite difference operators I can formulate these two pdes as linear equations:
$\ \mathbf{P=D_x*T} $
$\ \mathbf{Q=D_z*T} $
Below is the LSE solution of these equations:
There is one big problem with this solution:
in the model the line where T=0 goes between z=-0.5 and z=0.5,
whereas in the estimate it goes between z=-1 and z=1.
Some "scaling" of the z coordinate seems to be taking place.
Why is this happening?
Other than this the solution is quite good except for the blocky pattern.
This pattern seems to be related to the sampling; the finer the sampling the finer the blocky pattern.
What is the explanation for this pattern?
In order to get a smoother solution I tested with a smoothing regularization constraint:
$\ a*\mathbf{L*T}=0$
where a is a scaling parameter and L is the finite difference Laplacian matrix.
I noticed a trade-off between smoothness and correctness depending on the scaling parameter. For large scaling parameters the sine wave pattern of the model turns into a square wave pattern:
How can I determine a suitable a?
Inverse problems are often troubled by instabilities which make it necessary to introduce a dampening regularization term.
This does not seem to be the case for this problem.
However since A is not square I could not find its condition number using cond() in Matlab.
Is there some way in Matlab to find if an LSE problem with a nonsquare matrix is unstable?
BTW Here is the Matlab code I used:
% number of samples in x and z direction:
n = 101;
% amount of noise added to partial derivatives:
e = 0.5;
x = linspace(-1,1,n);
z = linspace(-1,1,n);
dx = x(2) - x(1);
dz = z(2) - z(1);
[X,Z] = meshgrid(x,z);
% Hidden model:
subplot(1,3,1);
T = 0.5*sin(2*pi*X) - Z;
imagesc(x,z,T);
xlabel('x');
ylabel('z');
title('T = 0.5*sin(2*pi*x) - z');
colorbar;
% Observed noisy data:
P = pi*cos(2*pi*X) + e*randn(n,n); % Tx
Q = -ones(n,n) + e*randn(n,n); % Tz
subplot(1,3,2);
imagesc(x,z,P);
xlabel('x');
ylabel('z');
title('P = Tx + e');
colorbar;
subplot(1,3,3);
imagesc(x,z,Q);
xlabel('x');
ylabel('z');
title('Q = Tz + e');
colorbar;
% Central difference matrices:
E = sparse(2:n, 1:n-1, 1, n, n);
D = (E' - E) / 2;
I = speye(n);
Dx = kron(D,I)/dx;
Dz = kron(I,D)/dz;
A = []; b = [];
A = [A; Dx]; b = [b; P(:)];
A = [A; Dz]; b = [b; Q(:)];
% Least Squares Solution:
figure;
subplot(1,2,1);
imagesc(x,z,T);
xlabel('x');
ylabel('z');
title('T');
colorbar;
tlse = lsqr(A, b, 1e-3, 1000*100);
Tlse = reshape(tlse, [n n]);
subplot(1,2,2);
imagesc(x,z,Tlse);
xlabel('x');
ylabel('z');
title('Tlse');
colorbar;
figure;
% Least Squares Solution regularized by smoothing Laplace operator:
D2 = E + E' + sparse(1:n, 1:n, -2, n, n);
Dxx = kron(D2,I)/(dx^2);
Dzz = kron(I,D2)/(dz^2);
L = Dxx + Dzz;
ns = 3;
for si = 1 : ns;
subplot(1,ns,si);
% regularization:
a = (si - 1)*5e-4;
Ar = [A; a*L]; br = [b; zeros(n^2,1)];
tlse = lsqr(Ar, br, 1e-3, 1000*100);
Tlse = reshape(tlse, [n n]);
imagesc(x,z,Tlse);
str = sprintf('Tlse a=%g',a);
title(str);
end
EDIT:
My central difference matrix D had an edge problem. As an example for size n=5:
was:
D =
0 0.5000 0 0 0
-0.5000 0 0.5000 0 0
0 -0.5000 0 0.5000 0
0 0 -0.5000 0 0.5000
0 0 0 -0.5000 0
Rewrote this to
D =
-1.0000 1.0000 0 0 0
-0.5000 0 0.5000 0 0
0 -0.5000 0 0.5000 0
0 0 -0.5000 0 0.5000
0 0 0 -1.0000 1.0000
With more correct difference matrices I get much nicer results:
And this is without any regularization. Sorry for the sloppiness. I learned a lot tough :-). Thanks for your answers!
AI: Factor of 2
Can't be sure, but it looks like your central difference matrix has an extra factor of 1/2 that shouldn't be there. That would cause the estimate to be incorrectly scaled by (1/2*1/2)^-1*(1/2)=2.
Condition number
The relevant condition number is the condition number of $A^T A$, since that is the matrix that is getting inverted.
Choice of regularization parameter
Choosing the regularization parameter is a classic problem, for which there is a vast literature and many methods. A good book on the subject is "Regularization of Inverse Problems" by Engl, Hanke, and Neubauer (though it lacks recent developments). One particularly simple but effective method is the discrepancy principle.
Discrepancy principle: Choose the regularization parameter so that the least-squares residual is approximately the same size as the noise level. E.g., if the expected noise level is $\delta$, start with $k=0$ and solve the inverse problem with $\alpha=1/2^k$. Then check if $||Am-d||<\delta$. If it is, stop; if it isn't, set $k=k+1$ and repeat.
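In the notation of your script, a minimal sketch of that loop might look like this (assuming a noise-level estimate delta; with your added noise, delta = e*sqrt(2*n^2) is a reasonable estimate for the norm of the noise over the 2*n^2 data points):
% discrepancy principle: shrink the regularization weight until
% the residual is at the noise level
delta = e*sqrt(2*n^2);
for k = 0:30
    a = 1/2^k;
    Ar = [A; a*L]; br = [b; zeros(n^2,1)];
    t = lsqr(Ar, br, 1e-3, 1000*100);
    if norm(A*t - b) < delta
        break   % residual matches the noise level: stop here
    end
end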
What is the source of the blocky noise
For ill-posed inverse problems, the eigenfunctions of the forward operator are generally higher and higher frequency oscillatory functions, with smaller and smaller eigenvalues decaying to zero. When you try to invert, those high frequency modes get amplified.
If you have a look at your noise, you see it looks like someone added a high frequency sinusoid on top of the true function - there was a little tiny bit of noise in that high frequency sinusoidal component, and it got amplified hugely by $(A^TA)^{-1}$. There was also noise in the other components, but it didn't get amplified as much since their eigenvalues are larger (so 1/eigenvalue is smaller).
If you want a visual demonstration of what's happening, compute eig($A^TA$), and plot the last eigenfunction.
Mesh as regularization
A coarse mesh can act as regularization, if the nature of the mesh averages out the high frequency components. Meshes can actually be chosen for the purpose of regularization - doing so adaptively is actually an area of current research! |
H: Codimension of the complement of a quasi-affine open subset of a variety
It is a known fact from Algebraic Geometry that the complement of an affine open subset of a variety is of pure codimension one. Does the same hold for the complement of a quasi-affine open subset of a variety? I don't know much about codimension, so I hope that this question is not trivial.
AI: If $V$ is affine, and $Z$ is closed in $V$, then $V\setminus Z$ is quasi-affine (it is open in the affine variety $V$), and its complement in $V$ is $Z$, which has whatever codimension it has. (Anything up to the dimension of $V$.) So there is no real constraint on the codimension of the complement of a quasi-affine open. |
H: Finding the divisor of an unknown
I am trying to solve this problem
A number when divided by a divisor leaves a remainder of 24. When twice the original number is divided by the same divisor, the remainder is 11. What is the value of the divisor?
I have established two relations here but dont know how to proceed , any suggestions would be appreciated .
$x= dq + 24 $ where d=divisor
$2x= dk + 11 $ where d=divisor
AI: As you wrote, corrected:
$$\begin{align*}2x=&dk+11\\x=&dq+24\end{align*}$$
Subtracting the second equation from the first and comparing the result with the second equation:
$$x=d(k-q)-13=dq+24\Longrightarrow d(k-2q)=37\Longrightarrow d=37$$
as $\,37\,$ is prime and the remainder $24$ forces $d>24$, ruling out $d=1$.
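Check: with $d=37$, e.g. $x=24$ gives remainder $24$, and $2x=48=37+11$ leaves remainder $11$, as required. |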
H: Finding a number when its remainder is given.
I am trying to solve this problem
W is a positive integer that, when divided by 5, gives remainder 1, and when divided by 7, gives remainder 5. Find W.
I am referring back to an earlier post I made. Now I am attempting to solve it that way.
We know that
$$w\equiv1\pmod 5$$
$$w\equiv5\pmod 7$$
$w=7r+5$
$w=5r+5+2r$
Since $5r+5$ is divisible by 5
$w=5(r+1)+2r$ this shows the remainder is $2r$
Now $2r$ divided by 5 gives a remainder 1 , thus giving the equation
$2r = 5k + 1$ or $r=\frac{5k+1}{2}$
Putting r back in $w=7r+5$ we get
$2w = 35k + 12$
So I guess $w= \frac{35+12}{2} = 23.5$
This is wrong and the answer is suppose to be 26. Any suggestions what I might be doing wrong ? or anything that I might be missing ?
Edit:
The problem was in calculation
$w= \frac{35+17}{2} = 26$
AI: There is an error: $\rm\:w=7r\!+\!5\,\Rightarrow\,2w = 7(2r)\!+\!\color{#C00}{10} = 7(5k\!+\!1)\!+\!\color{#C00}{10} = 35k\!+\!\color{#C00}{17},\:$ not $\rm\:35k\!+\!\color{#0A0}{12}.$
Since $\rm\:35k\!+\!17 = 2w\:$ is even, $\rm\:k\:$ is odd, $\rm\:k = 2j\!+\!1,\,$ so $\rm\:w = (35(2j\!+\!1)\!+\!17)/2 = 35j+26.$
Remark $\ $ It is easier to do the division by $2$ before the substitution. Namely, we have $\rm\:2r = 5k\!+\!1\:$ so $\rm\:k\:$ is odd, $\rm\:k = 2j\!+\!1,\:$ thus $\rm\:r = (5k\!+\!1)/2 = (5(2j\!+\!1)\!+\!1)/2 = 5j\!+\!3.\:$ Therefore $\rm\:w = 7r\!+\!5 = 7(5j\!+\!3)\!+\!5 = 35j\!+\!26.$ Notice how the numbers remain smaller this way.
I emphasize again, it's much more intuitive if you learn about modular arithmetic (congruences). For many examples see my posts on Easy CRT (easy version of the Chinese Remainder Theorem) |
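For a quick sanity check of the final answer, here is a brute-force search (a sketch of mine, not part of the original argument):

    # Solutions of w ≡ 1 (mod 5), w ≡ 5 (mod 7):
    print([w for w in range(1, 150) if w % 5 == 1 and w % 7 == 5])   # [26, 61, 96, 131], i.e. w = 35j + 26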
H: A question about a proof of Noetherian modules and exact sequences
I proved part (i) of the following:
Proposition 6.3. Let $0 \to M' \xrightarrow{\alpha} M \xrightarrow{\beta} M'' \to 0$ be an exact sequence of $A$-modules. Then
i) $M$ is Noetherian $\iff$ $M'$ and $M''$ are Noetherian;
ii) $M$ is Artinian $\iff$ $M'$ and $M''$ are Artinian
Proof. We shall prove i); the proof of ii) is similar.
$\implies$: An ascending chain of submodules of $M'$ (or $M''$) gives rise to a chain in $M$, hence is stationary.
Can you tell me if my proof of $(i)\Longleftarrow$ is correct? Here goes:
Let $M^\prime$ and $M^{\prime \prime}$ be Noetherian. Let $L_n$ be an ascending chain of submodules in $M$. Then $\alpha^{-1}(L_n)$ is an ascending chain of submodules in $M^\prime$. Hence $\alpha^{-1}(L_n)$ is stationary, that is, $\alpha^{-1}(L_n) = \alpha^{-1}(L_{n+1})$ for $n$ large enough. Hence $L_n = L_{n+1}$ for $n$ large enough, since $\alpha$ is injective; hence $(L_n)$ is stationary.
The proof given in Atiyah-Macdonald is the following but I don't understand why they need both maps, $\alpha$ and $\beta$:
$\Longleftarrow$: Let $(L_n)_{n \ge 1}$ be an ascending chain of submodules of $M$; then $(\alpha^{-1}(L_n))$ is a chain in $M'$, and $(\beta(L_n))$ is a chain in $M''$. For large enough $n$ both these chains are stationary, and it follows that the chain $(L_n)$ is stationary. $\Box$
AI: If this proof worked then we would be able to conclude that a module is Noetherian if it has a Noetherian submodule. This is certainly not the case — as a silly example, the trivial submodule is always Noetherian. If we forget about $\beta$ then it is true that $(\alpha^{-1}(L_n))_n$ will stabilize, but it's not as if $\alpha(\alpha^{-1}(L_n)) = L_n$, since $\alpha$ need not be surjective.
To use Atiyah and MacDonald's proof, you want to show that if $N_1 \subset N_2$ are submodules of $M$ such that $\alpha^{-1}(N_1) = \alpha^{-1}(N_2)$ and $\beta(N_1) = \beta(N_2)$, then $N_1 = N_2$. For this, take $x \in N_2$. By assumption there is a $y \in N_1$ such that $\beta(y) = \beta(x)$. By exactness, then, there is a $z \in M'$ such that $\alpha(z) = x - y \in N_2$. Thus $z \in \alpha^{-1}(N_2) = \alpha^{-1}(N_1)$. Do you see how to conclude that $x \in N_1$?
[We do need that $N_1 \subset N_2$, by the way. Think about $M = \mathbb R^2$ with $M' = \text{the $x$-axis}$ and the family of lines $y = mx$ for $m \neq 0$.] |
H: The row- and column-sums of a nonnegative matrix with spectral radius less than $1$
Is it true that if a matrix has nonnegative elements and spectral radius less than $1$, then the sum of its elements on each row (and column) is less than $1$?
Edit: What if the matrix has positive elements?
AI: $A=\begin{bmatrix}0 & 1 \\ 0 & 0 \end{bmatrix}$. The spectral radius is $0$, but a row sum is $1$.
A counterexample with all entries strictly positive follows immediately from continuity of eigenvalues.
For an explicit example, try $A=\begin{bmatrix}\frac{1}{10} & 1 \\ \frac{1}{10} & \frac{1}{10} \end{bmatrix}$. It is straightforward to check that $\rho(A) = \frac{\sqrt{10}+1}{10}<1$.
For a symmetric example, take $A=\begin{bmatrix}\frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{20} \end{bmatrix}$. This has $\rho(A) = \frac{\sqrt{481}+11}{40}<1$. |
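If you want to verify these numerically, a small numpy check (my addition):

    import numpy as np

    for A in (np.array([[0.1, 1.0], [0.1, 0.1]]),
              np.array([[0.5, 0.5], [0.5, 0.05]])):
        rho = max(abs(np.linalg.eigvals(A)))
        print(rho, A.sum(axis=1))   # spectral radius < 1, yet a row sum is >= 1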
H: If $K/F$ is Galois and $E$ is a subextension then $E$ is generated by roots of a polynomial over $F$?
Let $K/F$ be a finite Galois field extension; then $K$ is the splitting
field of a separable polynomial $p$ over $F$, i.e. $K=F(a_{1},..a_{n})$
where $p=(x-a_{1})...(x-a_{n})$.
My question is: is it true that if $E$ is a subextension of $K/F$
then $E$ is also of the form $E=F(b_{i_1},..b_{i_t})$ where $g=(x-b_{1})...(x-b_{k})$
is in $F[x]$?
AI: Now that we have the question nailed down: Let $E/F$ be a finite extension. The first question is whether $E$ can be written as $F(b_1, \ldots, b_n)$. This is true. You could, for example, take $\{b_i\}$ to be a basis for $E$ as a vector space over $F$. Each $b_i$ has some minimal polynomial $p_i \in F[x]$ over $F$, and $p_1 \cdots p_n$ seems to do what you want.
Note that the above choice of $\{b_i\}$ was probably not very efficient with regard to $n$. The primitive element theorem says that if $E/F$ is separable then $E = F(\alpha)$. |
H: To show $a_{n}\log n\to 0$ as $n\to\infty$.
Suppose I have $a_{n}\downarrow 0$ and $\displaystyle \sum_{n=1}^{\infty}\Delta a_{n}\log n<+\infty$, where $\Delta a_{k}=a_{k}-a_{k+1}$ and $a_{n}\downarrow 0$ means $a_{n}$ is decreasing and convergent to $0$.
I want to show that $a_{n}\log n \to 0$ as $n\to\infty.$ Clearly, $a_{n}\log n\geq 0$.
I need to show that $\limsup_{n\to\infty} a_{n}\log n\leq 0$, but I don't know how to solve this!
AI: $$\sum_{n=n_0}^\infty\Delta a_n\log n\ge\sum_{n=n_0}^\infty\Delta a_n\log n_0=\log n_0\sum_{n=n_0}^\infty\Delta a_n=\left(a_{n_0}-\lim_{n\to\infty}a_n\right)\log n_0=a_{n_0}\log n_0,$$ using $\Delta a_n\ge 0$ (as $a_n$ is decreasing) and the telescoping sum. Since the series converges, its tails tend to $0$, so $0\le a_{n_0}\log n_0\to 0$ as $n_0\to\infty$.
H: Moment generating function for the uniform distribution
Attempting to calculate the moment generating function for the uniform distribution, I run into a non-convergent integral.
Building off the definition of the moment generating function
$
M(t) = E[ e^{tx}] = \left\{ \begin{array}{l l}
\sum\limits_x e^{tx} p(x) &\text{if $X$ is discrete with mass function $p( x)$}\\
\int\limits_{-\infty}^\infty e^{tx} f( x) dx &\text{if $X$ is continuous with density $f( x)$}
\end{array}\right.
$
and the definition of the Uniform Distribution
$
f( x) = \left\{ \begin{array}{l l}
\frac{ 1}{ b - a} & a < x < b\\
0 & otherwise
\end{array} \right.
$
I end up with a non-converging integral
$\begin{array}{l l}
M( t) &= \int\limits_{-\infty}^\infty e^{tx} f(x) dx\\
&= \int\limits_{-\infty}^\infty e^{tx} \frac{ 1}{ b - a} dx\\
&= \left. e^{tx} \frac{ 1}{ t(b - a)} \right|_{-\infty}^{\infty}\\
&= \infty
\end{array}$
I should find $M(t) = \frac{ e^{tb} - e^{ta}}{ t(b - a)}$, what am I missing here?
AI: The density is $\frac{1}{b-a}$ on $[a,b]$ and zero elsewhere. So integrate from $a$ to $b$. Or else integrate from $-\infty$ to $\infty$, but use the correct density function. From $-\infty$ to $a$, for example, you are integrating $(0)e^{tx}$. The same is true from $b$ to $\infty$. The only non-zero contribution comes from
$$\int_a^b\frac{1}{b-a}e^{tx}\,dx.$$ |
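Carrying out that integral gives the expected $M(t) = \frac{e^{tb}-e^{ta}}{t(b-a)}$ for $t\neq 0$ (with $M(0)=1$). A sympy check, as a sketch:

    import sympy as sp

    a, b, t, x = sp.symbols('a b t x', positive=True)
    M = sp.integrate(sp.exp(t * x) / (b - a), (x, a, b))
    print(sp.simplify(M))   # (exp(b*t) - exp(a*t)) / (t*(b - a))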
H: does the uniform continuity of $f$ imply uniform continuity of $f^2$ on $\mathbb{R}$?
My question is: if $f:\mathbb{R}\rightarrow\mathbb{R}$ is uniformly continuous, does it imply that $f^2$ is too? And, in general, what about even or odd powers of that function?
AI: No. For example, $f(x)=x$ is uniformly continuous, but its square $x^2$ is not, and neither, for that matter, is $x^n$ for any $n>1$.
H: right-running waves
Can you help me please? The problem is:
1. Solve the wave equation on a finite interval with a Dirichlet boundary condition at the right and a Neumann boundary condition at the left.
2. Choose the initial conditions for right-running waves.
3. Show the phase difference of wave reflection at the boundaries.
I solved the first question by assuming $u_{tt}=c^{2}u_{xx}$
for $0<x<l$ with the B.C. $u_x(0,t)=0$; $u(l,t)=0$
And the answer is:
$u(x,t)=\sum_{n=0}^{\infty }\left [ A_n \cos \frac{(n+\frac{1}{2})\pi ct}{l}+B_n \sin \frac{(n+\frac{1}{2})\pi ct}{l} \right ]\cos \frac{(n+\frac{1}{2})\pi x}{l} $
But I am stuck on parts 2 and 3 of the question.
What conditions should I choose in order to get right-running waves?
Thanks for your help!
AI: Left- and right-running waves take the form $u(x,t)=g(x+ct)$ and $u(x,t)=g(x-ct)$, respectively. Differentiating a right-running wave with respect to $t$ and $x$ yields
$$\frac{\partial u}{\partial t}=-cg'(x-ct)=-c\frac{\partial u}{\partial x}\;,$$
so to specify initial conditions for a right-running wave, you can specify $u(x,0)=f(x)$ with any $f$ and then require $u_t(x,0)=-cf'(x)$. |
H: Maximum area of a triangle in a square
For a given square, consider 3 points on the perimeter to form a triangle. How to prove that:
The maximum area of the triangle is half the square's.
The maximum area of the triangle occurs if and only if the chosen points are vertices of the square.
AI: First, recall that the area of a triangle is $\frac{1}{2}bh$ where $b$ denotes the length of the base and $h$ the height. Keeping $h$ fixed, we may increase $b$ to increase the area of the triangle. Similarly, keeping $b$ fixed, we may increase $h$ to increase the area of the triangle. This shows that the vertices must lie along the boundary of the square.
Now, we claim that one vertex lies at the corner of the square. Suppose not. Then, pick any edge of the triangle. Consider the vertex not along this edge. By moving this vertex away from the edge, we may increase the height of the triangle, by the remarks above. Since a corner of the square will be the furthest distance from our selected edge, moving the opposite vertex to the corner of the square maximizes the area.
Now, let $S$ be a square in the plane. Let $s$ denote the length of the edges of $S$. Without loss of generality, we may suppose that $S$ lies in the first quadrant with one vertex at the origin. Consider a triangle in $S$ that maximizes area. By the above remarks, we may suppose that one vertex of the triangle lies at the origin.
Let $b$ be the base of the triangle and $h$ be the height of the triangle. By the above remarks on the construction of the triangle, $b = s$. Similarly, $h = s$. So the area of the triangle is $\frac{1}{2}bh = \frac{1}{2}s^2 = \frac{1}{2}Area(\Box)$. (You may want to argue these points more carefully, but they can be argued by using the ideas of the first two paragraphs.)
Note that the second statement is not true: Let the vertices of the square be $(0,0), (1,0), (0,1)$, and $(1,1)$. Let the vertices of the triangle be $(0,0), (1,0)$ and $(\frac{1}{2},1)$. Then we still have $Area(\triangle) = \frac{1}{2}Area(\Box)$.
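A shoelace-formula check of that counterexample (my own sketch):

    # Triangle (0,0), (1,0), (1/2,1) inside the unit square has area 1/2.
    def area(p, q, r):
        return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

    print(area((0, 0), (1, 0), (0.5, 1)))   # 0.5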
H: Antiderivative of a function that is decreasing is concave.
I'm a little rusty on calculus and would like to know how one goes about showing this without having to rely on differentiating the antiderivative twice.
Also, is there an extension to this in the multivariate case (i.e., what condition is needed to ensure that the antiderivative is concave).
Thanks!!!
AI: Suppose $f$ is a continuous decreasing (or more generally, nonincreasing) function on $(a,b)$ and $F$ is an antiderivative of $f$ there. For $0 < t < 1$ and $a < x < y < b$, we need to prove that $F(tx + (1-t) y) \ge t F(x) + (1-t) F(y)$. Rearrange this as
$ t \left(F(m) - F(x)\right) \ge (1-t) \left( F(y) - F(m) \right)$, where $m = tx + (1-t) y$.
Now by the Mean Value Theorem, $F(m) - F(x) = (m-x) f(u)$ for some $u$ between $x$ and $m$, while $F(y) - F(m) = (y-m) f(v)$ for some $v$ between $m$ and $y$. Note that $u \le v$ so $f(u) \ge f(v)$, and $m - x = (1-t)(y-x)$ while $y-m = t (y-x)$. So indeed
$$t \left(F(m) - F(x) \right) = t(1-t) (y-x) f(u) \ge t(1-t)(y-x) f(v) = (1-t) \left(F(y) - F(m)\right)$$ |
H: Cardinality of set of all fixed points of a function
3.10 Let $f\in\mathcal{C}^1[-1,1]$ such that $|f(t)|\leq 1$ and $|f'(t)|\leq\frac{1}{2}$ for all $t\in[-1,1]$. Let $$A=\{t\in[-1,1]\colon f(t)=t\}.$$
Is $A$ nonempty? If the answer is 'yes', what is its cardinality?
Well, I was trying the following: suppose $\exists t_1 \ni f(t_1)=t_1$. Then if I apply the mean value theorem on $[t_1,x]$, I get $$f(x)-f(t_1)=f'(c)(x-t_1)$$ $$|f(x)-t_1|\le \frac{1}{2}|x-t_1|,$$ but then I cannot proceed. Where can I use the fact $|f(t)|\le 1$?
AI: First let $g(x)=f(x)-x$. Then $g(1) \leq 0$ and $g(-1) \geq 0$ (since $|f|\le 1$) imply, by the intermediate value theorem, that $g$ has a root.
Now, assume by contradiction that $f$ has two fixed points $t_1,t_2$, and apply the MVT on the interval between these two points. |
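Since $|f'|\le\frac12$ makes $f$ a contraction, there is in fact exactly one fixed point, and simple iteration finds it. An illustration with a hypothetical $f$ satisfying the hypotheses ($f(t)=0.4\cos t$ is my choice, not from the problem):

    import math

    # |f| <= 0.4 <= 1 and |f'| = 0.4|sin t| <= 1/2 on [-1, 1].
    t = 0.0
    for _ in range(60):
        t = 0.4 * math.cos(t)
    print(t, 0.4 * math.cos(t))   # both approximately 0.3726: the unique fixed point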
H: continuous map from $S^1\rightarrow \mathbb{R}^1$
Well, so far I know there exists no injective map $S^n\rightarrow \mathbb{R}^n$ (due to Borsuk-Ulam), so in the case of $3.8$ they are asking: are there distinct points on $S^1$ which map to the same point in $\mathbb{R}$? So by the Borsuk-Ulam theorem I can say "yes". If $f$ is constant then $A$ is uncountable, but I have no idea what happens if $f$ is non-constant.
For $3.9$, can I use the same argument?
I will be happy about responses. Thank you.
AI: 3.9 is basically answered in copper.hat's comment.
For 3.8, if $f$ is not constant, let $a$ and $b$ be any two values of $f$ and let $u$ and $v$ be any of their preimages. By continuity, any point in $[a,b]$ must have preimages in both of the segments into which $u$ and $v$ divide the circle, and there are uncountably many such points. |
H: Difference between population, sample and sample value.
I was going through a book and reached a point where the author is comparing a Population, Sample and Sample Values. I don't seem to understand the difference at all.
(Caps are Random Variables, small font are values/data points)
What is the role of Random Variables here?
What does the numbering in X imply? ($X_i \forall i\in \{1,2,3,\cdots,n\}$)
What does the vertical line from $X_1$ to $x_1$ suggest?
AI: The population refers to whatever it is you want to study (let's say the heights of men in Timbuktu). Since you can't examine all of them, you design an experiment that involves taking a random sample of $n$ men and measuring their heights. $X_1$ will be the height of the first man in the sample, $X_2$ the height of the second, etc. These are random variables: if you do the experiment several times, you'll get different results.
The theoretical analysis will mostly involve these random variables, e.g. you might construct some function of $X_1,\ldots,X_n$ and say something about its probability distribution.
Typically you might do this analysis before you actually do the experiment.
When you do the experiment, you get numbers (the sample values): $x_1$ is the measured value of $X_1$, $x_2$ the measured value of $X_2$, etc. |
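A toy simulation may make the distinction concrete (the population model below is invented purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    # Population model: heights ~ Normal(175, 7). One run of the experiment draws
    # realizations x_1, ..., x_n of the random variables X_1, ..., X_n.
    sample = rng.normal(175, 7, size=10)
    print(sample)          # the sample values x_i
    print(sample.mean())   # a realization of the (random) sample mean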
H: Fourier series on $\mathbb T$ and $S^1$
From my lecture notes: "The notation $\mathbb T$ will be used for the additive circle and $S^1$ for the multiplicative circle."
What I understand: As a topological group, $S^1$ has the subspace topology of $\mathbb R^2$ and multiplication is defined as $(e^{ia}, e^{ib}) \mapsto e^{i(a + b)}$.
My guess is that $\mathbb T$ as an additive group should then be something like $(a,b) \mapsto (a + b) \mod 1$. The problem with that is that the space would look like $[0,1)$ but that's not compact.
But I'm confused: "mod 1" seems to be the same as $\mathbb R / \mathbb Z$ which is $S^1$. But I can't add complex numbers on the unit circle and stay on the unit circle.
So: What's $\mathbb T$? How are elements in it added?
AI: The map $\mathbb R \to S^1$, $t \mapsto e^{2\pi it}$ descends to a map $\mathbb T \to S^1$ which is an algebraic and topological isomorphism. [Remember that $\mathbb T$ is an algebraic and topological quotient. What is the inverse of this map?] So you can indeed pass freely between the two, and in particular $\mathbb T$ is compact. If you want to see this without reference to $S^1$, note that the restriction of the quotient map $\mathbb R \to \mathbb R/\mathbb Z$ to the compact subspace $[0, 1]$ is still surjective. |
H: What exactly is a Haar measure
I've come across at least 3 definitions, for example:
Taken from here where $\Gamma$ is a topological group. Apparently, this definition doesn't require the Haar measure to be finite on compact sets.
Or from Wikipedia:
"... In this article, the $\sigma$-algebra generated by all compact subsets of $G$ is called the Borel algebra...."
Then $\mu$ defined on this sigma algebra is a Haar measure if it's outer and inner regular, finite on compact sets and translation invariant.
So I gather the important property of a Haar measure is that it's translation invariant.
Question 1: What I don't gather is, what do I get if I define it on the Borel sigma algebra as opposed to defining it on the sigma algebra generated by compact sets (as they do on Wikipedia)?
Question 2: Can I put additional assumptions on $G$ so that I can drop the requirement that $\mu$ has to be finite on compact sets?
Question 3: As you can guess from my questions I'm poking around in the dark trying to find out how to define a Haar measure suitably. Here suitably means, I want to use it to define an inner product so I can have Fourier series. Are there several ways to do this which lead to different spaces? By this I mean, if I define it on the Borel sigma algebra, can I do Fourier series for a different set of functions than when I have a measure on the sigma algebra generated by compact sets? Or what about dropping regularity? Or dropping finiteness on compact sets?
Thanks.
AI: Summarizing some comments, and continuing: the main point is that Haar measure is translation-invariant (and for non-abelian groups, in general, left-invariant and right-invariant are not identical, but the discrepancy is intelligible).
Unless you have intentions to do something exotic (say, on not-locally-compact, or not-Hausdorff "groups", which I can't recommend), you'll be happier later to have a regular measure, so, yes, the measure of a set is the inf of the measures of the opens containing it, and is the sup of the measures of the compacts contained in it, and, yes, the measure of a compact is finite. Probably you will also want completeness, especially when taking products, so subsets of measure-zero sets have measure zero.
Probably you'll want your groups to be countably-based, too, to avoid some measure-theoretic pathologies.
Then, for abelian topological groups (meaning locally compact, Hausdorff, probably countably-based), the basics of "Fourier series/transforms" work pretty well, as in Pontryagin and Weil. The non-abelian but compact case also turns out very well. |
H: Even numbers have more factors than odd numbers...
This was an exercise to show that, in a sense, the even numbers have more prime factors than the odds, but, if it's right, I still have a question.
As a heuristic calculation, we could take a large interval $(1, 2N)$ on which the average number of prime factors with repetitions is $\mu$ (for sufficiently large $N$, $\mu$ is about $\ln \ln 2N$; see the answer to this problem). WLOG $N$ is even; then the set $S_2 = \{N+2, N+4, N+6, ..., 2N \}$ corresponds to the sequence $S_1 = \{\frac{N}{2}+1, \frac{N}{2}+ 2, ..., N \}$. The average number of prime factors in $S_1$ is only slightly less than $\mu$ for large $N^{(1)}$, and so the average number of prime factors $\mu_2$ in $S_2$ is about $\mu+1$. Let $\mu_o$ be the average number of prime factors of the odd numbers in $[N,2N]$.
Since on $[N,2N]$ the average number of primes is also about $\mu$, we have that $$\mu =\frac{( \mu_2 + \mu_o)}{2} = \frac{(\mu + 1 + \mu_o )}{2},$$ and so the average number of primes for the odd numbers $^{(2)} $ is $$\mu_o = \mu - 1 = \mu_2 - 2 $$ so
$$\mu_2 - \mu_o \approx 2.$$
This argument has I think a somewhat complicated generalization. Using the same reasoning for multiples of $3, 5,...,p_k $ and so on, the net result would be an average number of primes $\mu_{p_k} $ for multiples of the set $P_k = \{ 2,3,5, ..., p_k \}$ with an increasing number of numbers that are multiples of more than one such prime, so that $\mu_{p_k} > \mu + 1$ and $\mu_{p_k}$ is an increasing function of N (or k ). So if we call $\mu_n$ the average number of primes for non-multiples of $P_k$, I expect that for large N the difference
$$\mu_{p_k} - \mu_n > 2.$$ My question is whether we reach a point beyond which $$ \mu_{p_k} = \mu + \beta$$ with $\beta(N)>1$ and $a,b$ constants of proportionality with $a > b$, because multiples of $P_k$ take up more than half the interval, so that $$\mu = a\mu_{p_k} + b\mu_n = a(\mu + \beta) + b\mu_n$$ and $$\mu(1-a) = \mu b = a\beta + b\mu_n$$ and finally $$\mu_n = \mu -\frac{a\beta}{b} < 2.$$
I am guessing so, but that the asymptotic relationships involved make it a somewhat weak assertion?
Hopefully the question is clear. Thanks.
(1) Because $\ln \ln 2N = \ln (\ln 2 + \ln N) \sim \ln \ln N.$
(2) On [1, N] for N = 2,500,000, $\mu_2 - \mu_o$ is about 1.9.
AI: If I understand the question correctly, it's mainly a matter of the order of limits.
There are two numbers going to infinity here, $N$ and $k$. If you keep $k$ fixed and let $N$ go to infinity, your argument goes through just as it did when you considered only $2$ as a factor, and as you say, asymptotically (for $N\to\infty$) the discrepancy between $\mu_{p_k}$ and $\mu_n$ will be greater than $2$; it increases with $k$ (if you let $k$ go to $\infty$ after $N$), and it does so without bound, as does the average number of prime factors itself.
However, that doesn't mean you can write $\beta(N)$, increase $k$ at finite $N$ and conclude from the growing discrepancy that the average number of prime factors for the remaining numbers will eventually fall below $2$; that would only happen if you increase $k$ to a point where $\log\log N$ begins to change significantly from the factors you're pulling out.
To calculate the discrepancy (for fixed $k$ and $N\to\infty)$, note that the reason we get a discrepancy of $2$ considering only the prime factor $2$, and not the $1$ that one might have expected, is that after pulling out a factor of $2$ from the even numbers they still have a chance of containing further factors of $2$, whereas the odd numbers don't. We can express this as
$$\mu=\mu_0+\frac12+\frac14+\frac18+\dotso=\mu_0+\frac1{2-1}=\mu_0+1\;,$$
since all numbers on average have $\mu_0$ prime factors other than $2$, half of them have one factor of two, a quarter have two, and so on. We can do the same thing with the other primes up to $p_k$, and the "probabilities" for different primes will asymptotically be independent, so we get
$$\mu=\mu_n+\sum_{m=1}^k\frac1{p_m-1}\;.$$
The sum diverges for $k\to\infty$, so you can make the discrepancy arbitrarily large; but you then have to go to very high $N$ to actually see it, since you're pulling out more and more factors. |
H: Compute the trigonometric integrals
How would you compute the following integrals?
$$ I_{n} = \int_0^\pi \frac{1-\cos nx}{1- \cos x} dx $$
$$ J_{n,m} = \int_0^\pi \frac{x^m(1-\cos nx)}{1- \cos x} dx $$
For instance, I noticed that the first integral is convergent for any value of $n$ since $\lim_{x\to0} \frac{1- \cos nx}{1 - \cos x}= n^2$. This fact extends to the second integral as well.
AI: Here's a way to find $I_n$ with the residue theorem: first, by symmetry, $I_n = \frac{1}{2} \int_0^{2\pi} \frac{1 - \cos(nx)}{1- \cos(x)}dx$. Consider $z = e^{ix}$, so that the integral will now be over the unit circle $C$, and use the fact that $$\cos(nx) = \frac{e^{inx} + e^{-inx}}{2}$$ as well as $\frac{dz}{dx} = i e^{ix}$. So $dx = \frac{1}{iz} dz$ and
$$\int_0^{2\pi} \frac{1 - \cos(nx)}{1 - \cos(x)} dx = \int_C \frac{1}{iz} \frac{1 - \frac{1}{2}(z^{n} + z^{-n})}{1 - \frac{1}{2}(z + z^{-1})}dz$$ $$= \frac{1}{i}\int_C \frac{1}{z^{n}} \left(\frac{z^n - 1}{z - 1}\right)^2 dz$$ $$= \frac{2\pi i}{i} \mathrm{Res}_{z = 0} \frac{1}{z^{n}} (1 + ... + z^{n-1})^2 = 2\pi n$$
and so $I_n = \pi n$. |
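A numerical sanity check of $I_n = \pi n$ (my sketch; the lower limit starts just above $0$ to dodge the removable $0/0$ in floating point):

    import numpy as np
    from scipy.integrate import quad

    for n in range(1, 5):
        val, _ = quad(lambda x, n=n: (1 - np.cos(n * x)) / (1 - np.cos(x)), 1e-4, np.pi)
        print(n, val, np.pi * n)   # val matches pi*n up to roughly n**2 * 1e-4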
H: Find center, radius and a tangent to $x^2+y^2+6x-4y+3=0$
For the circle $x^2+y^2+6x-4y+3=0$ find
a) The center and radius
b) The equation of the tangent line at the point $(-2,5)$
Now, I solved a) and got the equation
$$(x+3)^2+(y-2)^2=10$$ with center $=(-3,2)$ and radius $=\sqrt{10}$
Now, I've never learned about the tangent of a circle, but I think that it's a line that touches the outer end of a circle. But I'm not 100% on that. So if anyone can help me out with this, that would be very beneficial. And please do not solve this for me.
AI: The line tangent to the circle at $(-2,5)$ is the straight line through $(-2,5)$ that is perpendicular to the radius of the circle that runs from the centre of the circle at $(-3,2)$ to the point $(-2,5)$. Find the slope of that radius, use that to get the slope of the tangent line, and you’re on your way.
Edit: I just corrected the $x$-coordinate of the centre. Yours has the wrong sign: $x+3=x-(-3)$, so the correct $x$-coordinate is $-3$. |
H: elementary measure theory problem.
I am trying to show that if a set $E$ in $\left( 0,1\right)$ is such that, for every interval $\left( \alpha,\beta\right)\subseteq (0,1)$, $$\mu\left(E \cap \left( \alpha ,\beta \right) \right) \ge \delta \left( \beta -\alpha \right) $$ for some fixed $\delta > 0$, then $\mu\left(E\right)=1$.
What have I tried: I could not help but notice the case-by-case breakdown. $E$ might be completely contained in $\left( \alpha,\beta\right)$, in which case
$$\mu\left(E\right) = \mu\left(E \cap \left( \alpha ,\beta \right) \right) \ge \delta \left( \beta -\alpha \right)$$
Similarly $E$ might have 2 parts so to calculate it's measure we'll need $$\mu\left(E\right) = \mu\left(E \cap \left( \alpha ,\beta \right) \right) + \mu\left(E \backslash \left(E \cap \left( \alpha ,\beta \right) \right)\right)$$
I am assuming that the case $\left(E \cap \left( \alpha ,\beta \right) \right) = \emptyset$ cannot occur, as that would imply $\alpha = \beta$ given the other conditions.
I was hoping someone would be kind enough to give me a hint or a clue, so I could make progress.
AI: The original argument is flawed.
I am assuming that $E$ is a measurable set. By Andrew's comment, $0 < \delta \leq 1$.
Pick any $x \in (0,1)$.
$\frac{\mu(E \cap (x - \epsilon, x + \epsilon))}{2\epsilon}\geq \delta$
where $\epsilon \leq \min(|x|, |1 - x|)$. Taking the limit as $\epsilon \rightarrow 0$, this represents the density of $x$ in $E$. By the Lebesgue Density Theorem, for almost all $x$ in $E$, the density is $1$. This implies (since the complement is measurable) that for almost all points in $(0,1) - E$, the density in $E$ is $0$. Hence for every measurable set, for almost all points, the density is either $1$ or $0$. Since $\delta > 0$ in the above, which holds for all $x$, one has that for almost all points the density is $1$. So $E$ has measure $1$.
H: Calculating maximum of function
I want to determine the value of a constant $a > 0$ which causes the highest possible value of $f(x) = ax(1-x-a)$.
I have tried differentiating the function to find a relation between $x$ and $a$ when $f'(x) = 0$, and found $x = \frac{1-a}{2}$.
I then insert it into the original function: $f(a) = \frac{3a - 6a^2 - 5a^3}{8}$
I differentiated it to get $f'(a) = \frac{-15a^2 + 12a - 3}{8}$
I thought differentiating the function and setting it equal to $0$ would lead to finding a maximum, but I can't find it. I can't get beyond $-15a^2 + 12a = 3$.
Where am I going wrong?
AI: The first problem is that you substituted $x=\frac12(1-a)$ into $f(x)$ incorrectly; the second (and more important) problem is that you need to define a new function whose independent variable is $a$. Specifically, let $g(a)$ be the maximum attained by the function $f(x)=ax(1-x-a)$; you want to find the value of $a$ that maximizes $g(a)$. Substituting $x=\frac12(1-a)$ into $f(x)$, we find that
$$\begin{align*}
g(a)&=a\left(\frac{1-a}2\right)\left(1-\frac{1-a}2-a\right)\\
&=\frac{a}4(1-a)\big(2-(1-a)-2a\big)\\
&=\frac{a}4(1-a)(1-a)\\
&=\frac14\left(a-2a^2+a^3\right)\;.
\end{align*}$$
Now $g'(a)=\frac14\left(1-4a+3a^2\right)$. Setting this equal to $0$, we have $3a^2-4a+1=0$. To solve for $a$ you can either use the quadratic formula or notice that $3a^2-4a+1=(3a-1)(a-1)$; either way, you find that $g'(a)=0$ for $a=1$ and $a=\frac13$. By analyzing the sign of $g'(a)$ or by using the second derivative test you can check that $g(a)$ has a local maximum (of $\frac1{27}$) at $a=\frac13$ and a local minimum (of $0$) at $a=1$.
However, a quick check of the graph of $g(a)$ will show you that it increases without bound as $a\to\infty$, and this is also clear algebraically: as $a\to\infty$, $1-a\to-\infty$, so $\frac14a(1-a)^2\to\infty$. Thus, you can make the maximum of $f(x)$ as large as you want by choosing $a$ large enough. |
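A sympy check of the critical points of $g$ (my addition):

    import sympy as sp

    a = sp.symbols('a', positive=True)
    g = sp.Rational(1, 4) * a * (1 - a)**2
    crit = sp.solve(sp.diff(g, a), a)
    print(crit, [g.subs(a, c) for c in crit])   # critical points 1/3 and 1, values 1/27 and 0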
H: If $K,E$ are subfields of $\Omega/F$ then $KE/F$ is a finite Galois imply $K/K\cap E$ is Galois?
Let $\Omega/F$ be a field extension and $K,E$ be two subfields of
$\Omega/F$. Assume that $KE/F$ is finite Galois.
I have a theorem in my lecture notes that claims $\text{Gal}(KE/E)\cong \text{Gal}(K/K\cap E)$;
while it is easy to see that $KE/F$ being Galois implies $KE/E$ is
Galois, I cannot understand why $K/K\cap E$ is Galois (it is separable
since $KE/F$ is, but I don't see why it's normal).
Why is $K/K\cap E$ Galois?
AI: You need to assume that $K$ is Galois over $K\cap E$; if that is the case, then the isomorphism drops out of the Galois correspondence and the Isomorphism Theorems.
We can replace $F$ with $K\cap E$, so that we are in the following situation:
$KE/F$ is finite Galois;
$K\cap E = F$.
If $G=\mathrm{Gal}(KE/F)$, let $M$ be the subgroup corresponding to $K$ and $N$ the subgroup corresponding to $E$. Then $M\cap N=\{e\}$ (since $KE$ is the field "on top"), and $\langle M,N\rangle = G$.
Now, $K$ is Galois over $F$ if and only if $M$ is normal in $\langle M,N\rangle$.
In particular, if $K$ is Galois over $F$, then $M\triangleleft \langle M,N\rangle$, so $\langle M,N\rangle = MN$. Thus, $N\cap M$ is normal in $N$, and by the isomorphism theorems we have that
$$\frac{G}{M} =\frac{MN}{M} \cong \frac{N}{N\cap M} \cong N.$$
Now, $\frac{G}{M}\cong\mathrm{Gal}(K/F)$; and $N=\mathrm{Gal}(KE/E)$; so we get the isomorphism if $K$ is Galois.
For an example showing that the given conditions do not imply that $K$ is Galois over $K\cap E$, let $F=\mathbb{Q}$, $K=\mathbb{Q}[\sqrt[3]{2}]$, and $E=\mathbb{Q}(\zeta)$, where $\zeta$ is a primitive cubic root of unity; then $K\cap E=\mathbb{Q}$. Then $KE$ is the splitting field of $x^3-2$ over $\mathbb{Q}$, hence is Galois. Even though $KE$ is Galois over $E$, $K$ is not Galois over $\mathbb{Q}$, so you cannot have the claimed isomorphism. |
H: A notation question: $|\langle x,y\rangle|$
Could someone please explain the meaning of the shaded words?
From Proof from the Book, 4th edition page 96:
Let $q$ be a prime power, set $n=4q-2$, and let
$$Q = \{x \in \{+1,-1\}^n: x_1 = 1, \#\{i:x_i=-1\} \text{ is even}\}.$$
This $Q$ is a set of $2^{n-2}$ vectors in $\Bbb R^n$.
We will see that $\langle x,y \rangle \equiv 2 \pmod 4$ holds for all vectors $x,y \in Q$.
Remark: These four sentences came together, and I need to understand what $|\langle x,y \rangle|$ means in:
We will call $x,y$ nearly orthogonal if $|\langle x,y \rangle|=2$.
AI: $\langle x,y\rangle$ denotes the result of applying the inner product of $\mathbb{R}^n$ to the vectors $x$ and $y$ (which happen to be in $Q$); in this case, it is the usual "dot product". $|\langle x,y\rangle|$ denotes the absolute value of that operation.
E.g., $q=2$, $n=6$, $x=(1,-1,1,1,-1,1)$, $y = (1,1,-1,1,1,-1)$, then
$$\langle x,y\rangle = (1)(1) + (-1)(1) + (1)(-1) + (1)(1) + (-1)(1) + (1)(-1) = -2$$
so $|\langle x,y\rangle| = |-2| = 2$. |
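In code the whole computation is one dot product (sketch):

    import numpy as np

    x = np.array([1, -1, 1, 1, -1, 1])
    y = np.array([1, 1, -1, 1, 1, -1])
    print(np.dot(x, y), abs(np.dot(x, y)))   # -2 2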
H: Notation for function that returns exponent of primes in factorisation?
Consider the function $f(n, i)$ which returns the exponent of $p_i$ in the factorisation of $n$, where $p_i$ is the $i$-th prime.
Question: is there a standard label for $f$?
Context: In the first edition of my Gödel book, I (thoughtlessly!) used the notation $\mathit{exp}(n, i)$, with 'exp' for 'exponent'. But of course that notation invites the misreading '$n$ to the power $i$' (taking 'exp' for exponential). Ooops! I'd like to do better in the second edition. Suggestions?
AI: In a grad course many years ago, from a student of Kleene, the notation $(n)_i$ was used. I believe at that time the notation was quite standard, at least in the English-speaking parts of North America. For whatever it's worth, the notation is used here, in the long list of basic primitive recursive functions.
Seems fine for a logic course, since the notation is relatively short-term, while one is proving the basic facts about the indexing. |
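For concreteness, here is one way to compute the function in question (a sketch; the name nu and the 1-based prime indexing are my choices):

    from sympy import prime

    def nu(n, i):
        """Exponent of the i-th prime in the factorization of n, i.e. Kleene's (n)_i."""
        p, e = prime(i), 0
        while n % p == 0:
            n //= p
            e += 1
        return e

    print(nu(360, 1), nu(360, 2), nu(360, 3))   # 360 = 2^3 * 3^2 * 5  ->  3 2 1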
H: What does diameter mean in the sentence of Borsuk's conjecture?
What does diameter mean in the following sentence of Borsuk's conjecture?
Sentence: Can every set $S \subseteq \Bbb R^d$ of bounded diameter $\operatorname{diam}(S)>0$ be partitioned into at most $d+1$ sets of smaller diameter?
AI: If $m:\Bbb R^d\times\Bbb R^d\to\Bbb R$ is the relevant metric, then $$\mathrm{diam}(S):=\sup\{m(x,y):x,y\in S\}.$$ Intuitively, it is the least upper bound of the pairwise distances between points of $S$. Thus, $S$ is bounded if and only if it has a finite diameter (a good exercise).
For example, if $S$ is the unit ball (open or closed), one can see that $\mathrm{diam}(S)=2$. If $S$ is a singleton, then it has zero diameter. If $S$ is a finite, non-empty set, then the diameter is the maximum of the pairwise distances between its points. If $S$ is a right triangular plane region, then its diameter is the length of the hypotenuse. |
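For a finite set the supremum is just a maximum over pairs, so it is easy to compute (my sketch; math.dist needs Python 3.8+):

    from itertools import combinations
    import math

    def diam(points):
        """Diameter of a finite point set: the largest pairwise Euclidean distance."""
        return max(math.dist(p, q) for p, q in combinations(points, 2))

    print(diam([(0, 0), (3, 0), (3, 4)]))   # 5.0, the hypotenuse of this right triangle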
H: Intersection of Simply-Connected Sets
Let $U,V$ be two simply connected subsets of a topological space.
Prove or disprove:
$U \cap V$ is simply connected.
AI: Let $S^1$ be the circle in $\mathbb R^2$, $U=\{(x,y)\in S^1: x\geq 0\}$ and $V=\{(x,y)\in S^1: x\leq 0\}$. Then $U$ is the right half of a circle and $V$ is the left half, both of which are simply connected. What is $U\cap V$? |
H: Loopspace of Eilenberg Mac Lane space
Is the loop space of the Eilenberg-MacLane space $K(G,1)$ dependent only on the cardinality of $G$? For instance, is the loop space of $K(\mathbb{Z}_4, 1)$ homotopy equivalent to that of $K(\mathbb{Z}_2 \times \mathbb{Z}_2, 1)$?
AI: Yes. Note that for a based space $(X, x_0)$, we have a fibration
$$\Omega(X, x_0) \hookrightarrow P(X,x_0) \longrightarrow X,$$
where $\Omega(X,x_0)$ denotes the loop space of $X$ and $P(X,x_0)$ is the space of paths in $X$ based at $x_0$. Since $P(X,x_0)$ is contractible (shrink paths to the basepoint $x_0$), from the long exact sequence of the fibration we find that
$$\pi_k(X, x_0) \cong \pi_{k-1}(\Omega(X,x_0))$$
for each positive integer $k$. So in our case, we have that
$$\pi_k(K(G,1)) \cong \pi_{k-1}(\Omega K(G,1))$$
for each positive integer $k$. In particular,
$$\pi_0(\Omega K(G,1)) \cong G$$
and
$$\pi_k(\Omega K(G,1)) \cong 0, ~k \geq 1.$$
From the above, we see that $\Omega K(G,1)$ is a "$K(G,0)$." Although $K(G,n)$ is usually only defined for $n \geq 1$, we still get sensible spaces (homotopy equivalent to $G$ with the discrete topology) with similar properties in the case $n = 0$. In particular, we have
$$[X,K(G,0)] \cong \tilde{H}^0(X; G).$$
Since $\Omega K(G,1)$ is a $K(G,0)$,
$$[\Omega K(G,1), K(G,0)] \cong \tilde{H}^0(\Omega K(G,1); G)$$
and
$$[\Omega K(G,1), \Omega K(G,1)] \cong \tilde{H}^0(\Omega K(G,1); G),$$
and therefore we can pick a map $f: \Omega K(G,1) \longrightarrow K(G,0)$ corresponding to the identity map on $\Omega K(G,1)$ via the above isomorphisms, which will induce isomorphisms on homotopy groups.
There is a theorem of Milnor saying that if $X$ has the homotopy type of a CW complex, then $\Omega X$ has the homotopy type of a CW complex. Therefore we have a map $f: \Omega K(G,1) \longrightarrow K(G,0)$ inducing isomorphisms on homotopy groups between spaces with the homotopy type of CW complexes. Therefore by the Whitehead theorem $\Omega K(G,1)$ is homotopy equivalent to $K(G,0)$, which is a discrete space of cardinality $|G|$. |
H: What is $f(t)=X_{t+1}$, if $X_{t+1}=(1-p)(1-X_{t})+pX_{t}$ and $X_{0},p \in [0,1]$?
What is $f(t)=X_{t+1}$, if $X_{t+1}=(1-p)(1-X_{t})+pX_{t}$ and $X_{0},p \in [0,1]$?
And what are general methods for finding functions defined by such recurrence relations?
AI: I will assume that $t$ ranges over, say, the non-negative integers.
For your particular example, we have
$$\begin{align*}X_{t+2}&=(1-p)(1-X_{t+1})+pX_{t+1},\\
X_{t+1}&=(1-p)(1-X_t)+pX_t.\end{align*}$$
Subtract and simplify. We get
$$X_{t+2}-X_{t+1}=(2p-1)(X_{t+1}-X_t).$$
So if $Y_t=X_{t+1}-X_t$, we find that $Y_{t+1}=(2p-1)Y_t$, which is easily solved, and therefore $X_t$ is $X_0$ plus the sum of a geometric series.
Remark: There are a number of good general methods for solving linear recurrences with constant coefficients. The generating functions approach is quite useful. Or else in our case we can rewrite our recurrence as
$$X_{t+1}=(2p-1)X_t + 1-p.$$
First find a general solution of the homogeneous recurrence $X_{t+1}=(2p-1)X_t$. This is easy. Then find a particular solution of our non-homogeneous recurrence. Easy, by guessing that a constant might work. Then the general solution of our recurrence is the general solution of the homogeneous recurrence, plus our particular solution. |
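Either route yields the closed form $X_t=\frac12+(2p-1)^t\left(X_0-\frac12\right)$ (for $p\neq 1$), which one can check numerically (my sketch):

    p, X0 = 0.3, 0.9
    X = X0
    for t in range(6):
        closed = 0.5 + (2 * p - 1)**t * (X0 - 0.5)
        print(t, X, closed)                # the two columns agree
        X = (1 - p) * (1 - X) + p * X      # the recurrence itself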
H: Bounded ratio of functions
If $f(x)$ is a differentiable function on $\mathbb R$ and $f$ doesn't vanish on $\mathbb R$, does this imply that the function $\frac{f\,'(x)}{f(x)}$ is bounded on $\mathbb{R}$?
The function $1/f(x)$ will be bounded because $f$ doesn't vanish, and I guess that the derivative will reduce the growth of the function $f$, so that the ratio will be bounded. Is this explanation correct?
AI: Take $f(x)=e^{x^2}$ then
$$\frac{f'(x)}{f(x)}= 2x$$ which is not bounded on $\mathbb{R}$
Also, if $f(x)$ does not vanish, that does not mean that $\frac{1}{f(x)}$ is bounded;
for example, take $e^{-x}$.
H: Book suggestion for linear algebra "2"
I am almost finished with Gilbert Strang's book "An introduction to linear algebra" (plus the video lectures at MIT OCW). First and foremost, I would like to recommend this course to everyone. It has been incredibly illuminating.
I would like to continue studying linear algebra, with particular focus on different properties of matrices and the transition to more general linear spaces (I am a physicist so Hilbert spaces and etc. are of particular interest).
Does anyone have a good recommendation of books/resources/etc.?
AI: I have to suggest the somewhat underrated Matrix Analysis by Horn and Johnson (the first edition was used for my ALA class at NCF.) They take a wonderfully concrete approach to most topics encountered in a second linear algebra course (Schur Decomposition, Spectral Theorem for Normal Operators, Jordan Canonical Form, Singular Value Decomposition) while adding a lot of other nice things into the mix. The fourth chapter on Hermitian Matrices talks about the Rayleigh Ritz Theorem and variational characterization of eigenvalues, which I imagine come up a lot in serious study of classical mechanics. Chapter five discusses finite dimensional inner product / normed / pre-normed spaces in terms of algebraic, analytic, and geometric properties. They include a discussion of completeness and the $l^p$ norms, which I guess could be seen as a preview of Hilbert Space Theory. There are also nice sections on the Gersgorin circle theorem and numerically solving linear systems.
I think it's a wonderful choice for any student, but especially a non-mathematician. The proofs are rigorous and sometimes tedious but always understandable. Typically, things are proved in an algorithmic fashion rather than through diagram chasing or algebraic artifice (nary a mention of finitely generated modules over a principal ideal domain.) My only complaint is that there are a fair number of results assumed regarding matrix algebra and determinants which wouldn't typically appear in a linear algebra course - references for these are typically not too hard to find though. |
H: Is there an algorithm to compute this matrix at $\frac{2}{3}n^3 + O(n^2)$ flops?
If $A$ is an $n\times n$ nonsingular matrix, and $b$ , $c$ are column vectors of length $n$. Is there an algorithm I can use to compute the matrix $W = bc^\top A^{-1}$ in $\frac{2}{3}n^3 + O(n^2)$ flops?
AI: Well,
Decompose $\mathbf A$ with your favorite decomposition. (e.g. the LU decomposition, $\mathbf P\mathbf A=\mathbf L\mathbf U$).
Solve the system $\mathbf A^\top\mathbf y=\mathbf c$ with your decomposition.
Form the outer product $\mathbf b\mathbf y^\top$.
I leave the details, including the flop counting, to you.
It seems the hints I gave earlier weren't sufficiently clear, so here's another:
$$\mathbf A^{-\top}=\mathbf P\mathbf L^{-\top}\mathbf U^{-\top}$$
where $\mathbf A^{-\top}$ denotes the inverse of $\mathbf A^\top$.
You should now be able to figure how to use the LU decomposition of $\mathbf A$ to generate the column vector $\mathbf y$. |
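Putting the pieces together in scipy (a sketch of the route above, on random test data):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(1)
    n = 5
    A, b, c = rng.normal(size=(n, n)), rng.normal(size=n), rng.normal(size=n)

    lu, piv = lu_factor(A)                # ~ (2/3) n^3 flops
    y = lu_solve((lu, piv), c, trans=1)   # solves A^T y = c in O(n^2)
    W = np.outer(b, y)                    # W = b c^T A^{-1} = b (A^{-T} c)^T, O(n^2)

    print(np.allclose(W, np.outer(b, c) @ np.linalg.inv(A)))   # True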
H: show that if $a | c$ and $b | c$, then $ab | c$ when $a$ is coprime to $b$.
Given two numbers $a$ and $b$, where $a$ is co-prime to $b$,
Show that for any number $c$, if $a|c$ and $b | c$ then $ab| c$.
Is the reverse also true? In other words, if $ab |c$ and $a$ is co-prime to $b$, then do we have $a | c$ as well as $b|c$?
AI: The second property holds, but the assumption that $\gcd(a,b)=1$ is superfluous. In general, if $x|y$ and $y|z$, then $x|z$. Since $a|ab$ and $b|ab$, then $ab|c$ implies $a|c$ and $b|c$, without any conditions on $\gcd(a,b)$.
On the other hand, $\gcd(a,b)=1$ is required for the first.
There are a number of ways of doing the proof. One is to use the Bezout identity: for any integers $a$ and $b$, there exist integers $x$ and $y$ such that $\gcd(a,b)=ax+by$. If $\gcd(a,b)=1$, then we can write $1 = ax+by$. Multiplying through by $c$, we get $c=axc + byc$. Since $a|c$, we can write $c=ak$; and since $b|c$, we can write $c=b\ell$. So we have
$$c = axc+byc = ax(b\ell) + by(ak) = ab(x\ell + yk),$$
so $ab|c$.
Another is to use the following property:
Lemma. If $a|xy$ and $\gcd(a,x)=1$, then $a|y$.
This can be done using the Bezout identity, but here is a proof that avoids it and only uses the properties of the gcd (so it is valid in a larger class of rings than Bezout rings):
$$\begin{align*}
r|\gcd(a,xy) & \iff
r|a,xy\\
&\iff r|ay, xy,a\\
&\iff r|\gcd(ay,xy),a\\
&\iff r|y\gcd(a,x),a\\
&\iff r|y,a\\
&\iff r|\gcd(a,y)
\end{align*}$$
In particular, $a=\gcd(a,xy)$ divides $\gcd(a,y)$, which divides $y$, hence $a|y$.
With that Lemma on hand, we obtain that the result you want as follows: If $a|c$, then $c=ak$ for some $k$. Then $b|ak$, $\gcd(a,b)=1$. so $b|k$. Hence $k=b\ell$; thus, $c=ak=ab\ell$, so $ab|c$. |
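A brute-force spot check of the first statement (my addition):

    from math import gcd

    for a in range(1, 15):
        for b in range(1, 15):
            if gcd(a, b) == 1:
                assert all(c % (a * b) == 0
                           for c in range(1, 400) if c % a == 0 and c % b == 0)
    print("ok")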
H: If $f$ is a polynomial of degree $n$, then $f(x) \equiv 0\pmod p$ has at most $n$ solutions.
I know that I have to prove this by induction on $n$ when we let $f(x)=a_{n}x^n + a_{n-1}x^{n-1} +\cdots + a_0$. I have two books in front of me with the complete proof but I don't see how after they assume that the theorem holds for a polynomial of degree $n-1$ and move on to show it holds for a polynomial of degree $n$ they say,
Either $f(x) \equiv 0\pmod p$ has no solution or at least one solution. If it has no solution the theorem holds. (How so? And why is the theorem not also immediate in the second case?)
Then they say, suppose that $r$ is a solution. That is $f(r) \equiv 0\pmod p$, and $r$ is a least residue $\pmod p$. Then because $x-r$ is a factor of $x^t -r^t$ for $t=0,1,2\ldots,n$, we have
$$\begin{align*}
f(x)\equiv f(x)-f(r) &\equiv a_{n}(x^n -r^n) + a_{n-1}(x^{n-1} -r^{n-1}) +\cdots + a_1(x -r)\pmod{p}\\
&\equiv (x-r)g(x)\pmod p
\end{align*}$$
where $g$ is a polynomial of degree $n-1$. Suppose that $s$ is another solution of the congruence. Thus
$$f(s) \equiv (s-r)g(s)\equiv0 \pmod p$$
because $p$ is prime it follows that
$$s\equiv r\pmod p\text{ or }g(s) \equiv 0 \pmod p$$
From the induction assumption, the second congruence has at most $n-1$ solutions. Since the first congruence has just one solution, the proof is complete.
I would just appreciate it if someone could explain the reasoning behind this a little more. Thank you.
AI: Remember that you are trying to prove that the congruence $f(x)\equiv 0\pmod{p}$ has at most $n$ solutions. (Added. There must be an assumption that $a_n\not\equiv 0\pmod{p}$ missing, by the way)
The argument is essentially the same one we use to prove that a polynomial of degree $n$ has at most $n$ roots in any field: we do induction on the degree.
If $f$ is of degree $0$, a nonzero constant, then it has no solutions and we are fine.
Assume the result holds for any polynomial of degree less than $n$, and that $f$ has degree $n\gt 0$. If $f$ has no roots, then we are done: $0$ is less than $n$, so $f$ satisfies the conclusion. If $f$ has at least one root $r$, then we can use the Factor Theorem to write $f(x) = (x-r) g(x)$ for some polynomial $g$ of degree $n-1\lt n$. If $s$ is a root of $f$ different from $r$, then $0 = f(s) = (s-r)g(s)$; since $(s-r)\neq 0$, then $g(s)=0$. That is, all other roots of $f$ come from roots of $g$. By the induction hypothesis, $g$ has at most $n-1$ roots. So $f$ has at most $1+n-1 = n$ roots.
This proves that, whether $f$ has at least one root or no roots, it has at most $n$ roots, proving the result.
The argument here is the same. If $f(x)\equiv 0 \pmod{p}$ has no solutions, then you are already done: $0$ is less than $n$. So we can consider the case in which there is at least one solution. It's not that the conclusion will not hold in this latter case (in fact, we will prove it does), it's that the conclusion does not immediately follow: from "at least one solution" we cannot immediately conclude "at most $n$ solutions", whereas from "no solutions" we can immediately conclude "and so at most $n$ (since $0$ is less than $n$)".
Now, we use a lemma:
Lemma. Let $r$ be any integer. Then for any nonnegative integer $t$, $x-r$ divides $x^t - r^t$.
Proof. Simply note that
$$(x-r)(x^{t-1}+x^{t-2}r+\cdots +xr^{t-2}+r^{t-1}) = x^t-r^t.\quad\Box$$
Note well: The fact that $x-r$ divides $x^t-r^t$ does not depend on whether $r$ is a root of $f(x)$ or not: it's a simple algebraic fact. It's always true.
Now, if $f(x)\equiv 0\pmod{p}$ does have roots, then it has at least one; call it $r$, so that $f(r)\equiv 0\pmod{p}$. We have
$$\begin{align*}
f(x) &\equiv f(x) - 0\pmod{p}\\
&\equiv f(x) -f(r)\pmod{p}\\
&\equiv a_nx^n + \cdots + a_0 - (a_nr^n +\cdots + a_0)
\pmod{p}\\
&\equiv a_n(x^n - r^n) + a_{n-1}(x^{n-1}-r^{n-1}) + \cdots + a_1(x-r) + (a_0-a_0)\pmod{p}
\end{align*}$$
Now, by the Lemma, $(x-r)$ divides each of $x^n-r^n$, $x^{n-1}-r^{n-1},\ldots,x-r$. So $x-r$ divides
$$a_n(x^n-r^n)+a_{n-1}(x^{n-1}-r^{n-1}) + \cdots + a_1(x-r).$$
(This argument takes the place of the Factor Theorem in the case above).
The conclusion is that if $f(r)\equiv 0\pmod{p}$, then we can write
$$f(x) \equiv (x-r)g(x)\pmod{p}$$
for some polynomial $g(x)$ with integer coefficients. Now the argument proceeds as in the real case: if $f(s)\equiv 0\pmod{p}$, then $(s-r)g(s)\equiv 0 \pmod{p}$, so $g(s)\equiv 0\pmod{p}$. So any other root of $f$ gives a root of $g$, and since $g$ has at most $n-1$ roots modulo $p$, then $f$ has at most $1+(n-1)=n$ roots modulo $p$. |
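The primality of $p$ is essential here, as a quick experiment shows (sketch; roots is my helper):

    def roots(coeffs, m):
        """Roots of sum(coeffs[i] * x**i) modulo m."""
        return [x for x in range(m) if sum(c * x**i for i, c in enumerate(coeffs)) % m == 0]

    # x^2 - 1 has at most 2 roots mod the prime 7, but 4 roots mod the composite 8.
    print(roots([-1, 0, 1], 7), roots([-1, 0, 1], 8))   # [1, 6] [1, 3, 5, 7]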
H: Simple group of order $660$ is isomorphic to a subgroup of $A_{12}$
Prove that the simple group of order $660$ is isomorphic to a subgroup of the alternating group of degree $12$.
I have managed to show that it must be isomorphic to a subgroup of $S_{12}$ (through a group action on the set of Sylow $11$-subgroups). Any suggestions are appreciated!
AI: This is a comment made into an answer.
Using the Sylow theorems, if $G$ is a simple group of order $660=60\times 11$, the number of Sylow-$11$ subgroups $n_{11}$ has $n_{11}\equiv 1~\mathrm{mod}~11$, $n_{11}|60$ and $n_{11}\neq1$ by simplicity, so $n_{11}=12$. Again, by Sylow's theorem, $G$ acts transitively on the $12$-element-set of its $11$-Sylows by conjugation, and so we get a non trivial morphism $G\rightarrow S_{12}$. By simplicity of $G$, this is an embedding.
Now apply the signature morphism $\epsilon:S_{12}\rightarrow\lbrace-1,+1\rbrace$:
$$G\hookrightarrow S_{12}\rightarrow \lbrace-1,+1\rbrace$$
The kernel is a normal subgroup of $G$. Since $G$ is simple, it is either $G$ or $1$, and by cardinality reasons it has to be $G$. Thus the embedding $G\hookrightarrow S_{12}$ maps $G$ into $\mathrm{Ker}(\epsilon)=A_{12}$, and $G$ is isomorphic to a subgroup of $A_{12}$. |
H: In a certain year, January had exactly $4$ Mondays and $4$ Fridays. What was the day on $2^{\text{nd}}$ October the previous year?
In a certain year, the month of January had exactly $4$ Mondays and $4$ Fridays. What was the day on Gandhi Jayanti $(2^{\text{nd}}$ October $)$ the previous year?
I am not sure how to approach this one, any ideas?
AI: As January has 31 days ($4\times 7+3$), there are three weekdays that occur $5$ times (pigeonhole principle), namely the weekdays of Jan 1, 2, and 3, so they are contiguous. The only three contiguous days avoiding both Monday and Friday are Tuesday, Wednesday, and Thursday. So Jan 1 was a Tuesday. Then just count back: from October 2 of the previous year to January 1 is $29+30+31+1=91$ days, which is exactly $13$ weeks, so October 2 was also a Tuesday.
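One can confirm this against a real calendar year whose January began on a Tuesday, e.g. 2013 (my check):

    from datetime import date

    print(date(2013, 1, 1).strftime("%A"))    # Tuesday (January 2013 had 4 Mondays, 4 Fridays)
    print(date(2012, 10, 2).strftime("%A"))   # Tuesday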
H: Solving Congruences and CRT
I have never really directly dealt with congruences until I was introduced to the Chinese Remainder Theorem. Although there are tons of different versions of this theorem out there, currently I am more interested in solving congruences.
I had some questions regarding the steps used in solving the following linear congruence for $X_1$.
$99X_1\equiv1\pmod8$
Subtracting $96X_1$ ($96$ being a multiple of $8$):
$3X_1\equiv1\pmod 8$ (subtracting any multiple of $8$ times $X_1$ is like subtracting $0$, so the R.H.S. remains unchanged)
Adding 8 gives:
$3X_1\equiv(1+8) \pmod8$
$3X_1\equiv9\pmod8$ so $X_1=3$
Now, when $8$ was added, why was it only added to the R.H.S.? I thought any multiple of $8$, whether subtracted or added, is like subtracting or adding $0$?
AI: Precisely because adding $8$ modulo $8$ is the same as adding $0$, you can add $8$ to only one side and not change the congruence (just like you can add $0$ to only one side of $x=y$ to get $x=y+0$ and not change the equality).
Alternatively, since $0\equiv 8\pmod{8}$, you are adding the congruence $$3x_1\equiv 1\pmod{8}$$ to the congruence $$0\equiv 8\pmod{8}$$ to get $$3x_1+0 \equiv 1+8\pmod{8}.$$ This is like adding $x=a$ to $y=b$ in order to get $x+y=a+b$. |
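The congruence can also be checked directly in code (sketch; pow with exponent -1 computes modular inverses in Python 3.8+):

    print([x for x in range(8) if (99 * x) % 8 == 1])   # [3]
    print(pow(99, -1, 8))                               # 3, since 99 ≡ 3 and 3*3 ≡ 1 (mod 8)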
H: Showing that the universal enveloping algebra of some $\mathfrak{g}$ is isomorphic to $\mathbb{C}[x_i,\partial/\partial x_i]$
The universal enveloping algebra $U(\mathfrak{g})$ of a Lie algebra $\mathfrak{g}$ over $\mathbb{C}$ is defined to be
$$
\dfrac{\mathbb{C}\oplus\mathfrak{g}\oplus ( \mathfrak{g}\otimes \mathfrak{g})\oplus (\mathfrak{g}\otimes \mathfrak{g}\otimes \mathfrak{g})\oplus\ldots}{\langle a\otimes b-b\otimes a -[a,b]\rangle},
$$which means it is the associative tensor algebra generated by the vector space $\mathfrak{g}$, modulo all elements of the form $ a\otimes b-b\otimes a -[a,b]$, where $a,b\in \mathfrak{g}$.
Suppose $\mathfrak{g}=\mathbb{C}[e,p,q]/\langle [p,q]=e,[p,e]=0,[q,e]=0
\rangle$. Then we have a (ring) isomorphism
$$
U(\mathfrak{g})/\langle e-1\rangle \cong \mathbb{C}[x, \partial/\partial x]
$$ where we can define the map to be $U(\mathfrak{g})\stackrel{\phi}{\rightarrow}\mathbb{C}[x,\partial/\partial x]$
by sending
$$
p \mapsto \partial/\partial x
\mbox{ and }
q \mapsto x.
$$
One can check that
$$
\phi([p,q])=[\partial/\partial x,x]= 1,
$$ so we must send $e\mapsto 1$, which gives us the isomorphism $U(\mathfrak{g})/\langle 1-e\rangle \cong \mathbb{C}[x,\partial/\partial x]$.
Now what should $\mathfrak{g}$ be so that $U(\mathfrak{g})/I\cong \mathbb{C}[x_1,x_2,\ldots, x_n,\partial /\partial x_1,\partial /\partial x_2, \ldots, \partial /\partial x_n]$?
I could be wrong but I'm guessing that if we take $\mathfrak{g}$ to be the algebra $\mathbb{C}[p_i,q_i,e_i]/I$ where $I$ is generated by brackets of the form
$$
[p_i,q_j]= \left\{ \begin{aligned}
e_i &\mbox{ if } i=j\\
-q_j p_i &\mbox{ if } i\not= j \\
\end{aligned}
\right.
$$
$$
[p_i,e_j]= \left\{ \begin{aligned}
0 &\mbox{ if } i=j\\
- p_i &\mbox{ if } i\not= j \\
\end{aligned}
\right.
$$
$$
[q_i,e_j]= \left\{ \begin{aligned}
0 &\mbox{ if } i=j\\
q_i &\mbox{ if } i\not= j, \\
\end{aligned}
\right.
$$
then $U(\mathfrak{g})/\langle e_i-1\rangle\cong \mathbb{C}[x_1,x_2,\ldots, x_n,\partial /\partial x_1,\partial /\partial x_2, \ldots, \partial /\partial x_n]$ where we take the map to be
$$
p_i+\langle e_i-1\rangle \mapsto \partial/\partial x_i
$$
and
$$
q_i+\langle e_i-1\rangle \mapsto x_i.
$$
Also, since the order of multiplication matters, i.e.,
$$
(\overbrace{\partial/\partial x_1 + \partial/\partial x_2}^{\deg -1?})(\overbrace{x_1}^{\deg 1}) = 1+0=\overbrace{1}^{\deg 0}
$$ while
$$
x_1(\partial/\partial x_1 + \partial/\partial x_2) = x_1 \partial /\partial x_1 + x_1\partial/\partial x_2,
$$ how should one think about multiplication in the skew polynomial algebra?
Finally, I just noticed that $x_i\dfrac{\partial}{\partial x_i}$ form mutually orthogonal idempotent elements in $\mathbb{C}[x_i,\partial/\partial x_i]$. That is,
$$
\left(\sum_i x_i \dfrac{\partial}{\partial x_i}\right)^2 = \left(\sum_i x_i \dfrac{\partial}{\partial x_i}\right).$$ I am guessing that these are the only non-scalar idempotent elements in the algebra.
Do these idempotent elements have a geometric interpretation?
AI: $\mathfrak{g}$ should be a Heisenberg Lie algebra. A nice coordinate-invariant way to describe these is as $V \oplus \mathbb{C}e$ where $V$ is a symplectic vector space and the symplectic form gives the Lie bracket, which takes values in $\mathbb{C}e$ (which is central). This is precisely the Lie algebra generated by the operators $x_1, x_2, ... x_n, \frac{\partial}{\partial x_1}, ... \frac{\partial}{\partial x_n}$ acting on $\mathbb{C}[x_1, ... x_n]$.
I do not believe that the elements you wrote down are idempotent. |
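The canonical commutation relation behind the Heisenberg algebra can be checked symbolically (my sketch):

    import sympy as sp

    x = sp.symbols('x')
    f = sp.Function('f')(x)
    # [d/dx, x] f = (x f)' - x f' = f: the bracket [p, q] = e acts as the identity.
    print(sp.simplify(sp.diff(x * f, x) - x * sp.diff(f, x)))   # f(x)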
H: The minimum value of $(\frac{1}{x}-1)(\frac{1}{y}-1)(\frac{1}{z}-1)$ if $x+y+z=1$
$x, y, z$ are three distinct positive reals such that $x+y+z=1$. What is the minimum possible value of $(\frac{1}{x}-1) (\frac{1}{y}-1) (\frac{1}{z}-1)$?
The options are: $1,4,8$ or $16$
Approach: $$\begin{align*}
\left(\frac{1}{x} -1\right)\left(\frac{1}{y}-1\right)\left(\frac{1}{z}-1\right)&=\frac{(1-x)(1-y)(1-z)}{xyz}\\
&=\frac{1-(x+y+z)+(xy+yz+zx)-xyz}{xyz}\\
&=\frac{1-1+(xy+yz+zx)-xyz}{xyz}\\
&=\frac{xy+yz+zx}{xyz} - 1
\end{align*}$$
Now, by applying $AM \ge HM$, I got the least value of $(xy+yz+zx)/xyz$ as $9$, so I got the final answer $8$. Is it correct?
AI: If we put no constraint on $x$, $y$, and $z$ apart from $x$, $y$, $z$ positive and $x+y+z=1$, then indeed your calculation, and the one by Patrick Da Silva, show that the minimum value is $8$, attained at $x=y=z=\frac{1}{3}$.
However, the problem specifies that $x$, $y$ and $z$ are distinct real numbers. If we take that constraint into account, there is no minimum. We can get arbitrarily close to $8$ (but above $8$) by choosing $x$, $y$, and $z$ distinct and close to $\frac{1}{3}$, but we cannot attain $8$ with distinct $x$, $y$ and $z$. |
H: Solving for coefficients on a Laurent series
I am having an issue with the following complex analysis problem. I am supposed to
find the coefficients of $z^{-1}$, $z^{-2}$ and $z^{-3}$ in the Laurent series for
$\displaystyle \frac{1}{\sin z}$ around $z_0 = 0$ which is valid for $2\pi < |z| < 3\pi$.
One way I thought of doing it was to say
$$ \frac{1}{\sin z} = \frac{1}{(z - 2\pi)(z - 3\pi)}\frac{(z - 2\pi)(z - 3\pi)}{\sin z} $$
Let $H(z) = \displaystyle \frac{(z - 2\pi)(z - 3\pi)}{\sin z} $.
Then we have
$$ \frac{1}{\sin z} = H(z)\left[ \frac{A}{z - 2\pi} - \frac{B}{z - 3\pi} \right]
= H(z)\left[ \sum_{k = 0}^\infty \frac{A(2\pi)^k}{z^{k + 1}}
+ \sum_{k = 0}^\infty \frac{Bz^k}{(3\pi)^{k + 1}} \right]. $$
This seems to get me part of the way to where I need to go, but leaves me having
to find a series that works for $H(z)$. I was going to continue in this way, but I thought
that perhaps it was giving me an entire series, and the problem seems to suggest
solving for only 3 of the coefficients.
Another way to solve for the coefficients is
$\displaystyle a_k = \frac{1}{2\pi i} \int_\gamma \frac{f(w)}{(w - z_0)^{k + 1}} dw$, where $\gamma$ is a
circle, of say radius $R$, that is in the annulus.
I tried to do this just for $a_{-1}$ and I got the following
$$ \int_0^{2\pi} \frac{Rie^{it}}{\sin(Re^{it})} dt = \int_{u(0)}^{u(2\pi)} \frac{1}{\sin(u)} du $$
where $u = Re^{it}$, so $u(0) = R$ and $u(2\pi) = R$. Thus I'm integrating from
$R$ to $R$, so $a_{-1} = 0$.
However, it seems like something went wrong with the $u$ substitution. I am not even sure if I can
actually do a $u$ substitution like that, but I can't see how to solve the integral any other way.
I am not sure what direction to go with this problem, and it seems like I am making it a lot
harder than it is supposed to be. Can anyone give me some direction on it?
Should I be attempting to solve the integrals, or do something like I was at the beginning?
AI: Using integrals is the way to go here. Let $\gamma$ be a path in the specified annulus with winding number 1.
We have
$$2\pi i a_{-1} = \int_\gamma \frac{1}{\sin z}\ dz.$$
We can evaluate this with the residue theorem. Inside $\gamma$ we have poles at $0$, $\pm \pi$, and $\pm 2\pi$; the residue of $\frac{1}{\sin z}$ at $z=k\pi$ is $\frac{1}{\cos(k\pi)}=(-1)^k$, so the residues sum to $1-1-1+1+1=1$ and the coefficient is $a_{-1}=1$. Finding the rest of the coefficients is very similar.
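A numerical version of that contour integral, on the circle $|z|=\frac{5\pi}{2}$ inside the annulus (my sketch; the trapezoid rule over a circle is just an average):

    import numpy as np

    R, N = 2.5 * np.pi, 20000
    t = np.linspace(0, 2 * np.pi, N, endpoint=False)
    z = R * np.exp(1j * t)
    # a_{-1} = (1/(2*pi*i)) * integral of dz/sin(z) = average of z/sin(z) on the circle.
    print(np.mean(z / np.sin(z)))   # approximately 1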
H: simple integration question
I've tried but I cannot tell whether the following is true or not. Let $f:[0,1]\rightarrow \mathbb{R}$ be a nondecreasing and continuous function. Is it true that I can find a Lebesgue integrable function $h$ such that
$$
f(x)=f(0)+\int_{0}^{x}h(t)\,dt
$$
with $f'=h$ almost everywhere?
Any hint on how to proceed is really appreciated!
AI: The Cantor function (call it $f$) is a counterexample. It is monotone, continuous, non-constant and has a zero derivative a.e.
If the above integral formula held, then we would have $f'(x) = h(x)$ at every Lebesgue point of $h$, which is a.e. $x \in [0,1]$. Since $f'(x) = 0$ a.e., we have that $h$ is essentially zero. Since $f$ is not constant, we have a contradiction.
See Rudin's "Real & Complex Analysis" Theorem 7.11 and Section 7.16 for details.
H: Identifying $SL(2,\mathbb{C})/H$ with $\mathbb{C}^2\setminus \{ 0\}$
Let $G=SL(2,\mathbb{C})$ and let $H$ be the set of unipotent matrices
$$
\left\{ \left[ \begin{array}{cc}
1 & b \\
0 & 1 \\
\end{array}
\right] : b\in \mathbb{C}\right\}.
$$
I am trying to work out the details to show that
$G/H$ can be identified with $\mathbb{C}^2\setminus \{ 0\}$ via the transitive action of $G$ on $\mathbb{C}^2\setminus \{ 0\}$.
This action is then supposed to extend to a linear action on its projective completion $\mathbb{P}^2=\overline{G/H}$ where we take the point $[1:1:0]$ to represent the identity coset $H$.
Any help/suggestion is greatly appreciated. Thank you.
Added: note that $\dim SL(2,\mathbb{C})/H$ is clearly 2, but I am not certain of how $G/H$ and $\mathbb{C}^2\setminus \{ 0\}$ can be identified (i.e., construct an explicit map between the two).
Just continuing to think about the above question, for $SL(3,\mathbb{C})/H$ where
$$
H =\left\{
\left[ \begin{array}{ccc}
1 & a & b \\
0 & 1 & c \\
0 & 0 &1 \\
\end{array}
\right] : a,b,c\in\mathbb{C}
\right\},
$$ can we conclude that $SL(3,\mathbb{C})$ also acts transitively on some subset $S$ of $\mathbb{C}^5$, and identify $SL(3,\mathbb{C})/H$ with $S$? Would the projective closure $\overline{SL(3,\mathbb{C})/H}$ then equal $\mathbb{P}^5$?
AI: Define a map $SL_2(\mathbb{C}) \to \mathbb{C}^2 - 0$ by sending $[\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}] \to [\begin{smallmatrix} a \\ c \end{smallmatrix}]$.
You can check that $g$ and $g.h$ map to the same element for all $g \in SL_2(\mathbb{C})$ and $h \in H$.
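Here is a small symbolic sketch of that check (assuming SymPy; the symbol `t` stands for the parameter $b$ of $H$):

```python
# g and g*h have the same first column, so [[a, b], [c, d]] -> (a, c) descends to G/H.
from sympy import symbols, Matrix

a, b, c, d, t = symbols('a b c d t')
g = Matrix([[a, b], [c, d]])
h = Matrix([[1, t], [0, 1]])          # a generic element of H
print(g.col(0).T, (g * h).col(0).T)   # both print Matrix([[a, c]])
```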
H: Is periodic extension of Lipschitz function Lipschitz?
Let $f: [0,T] \rightarrow \mathbb{R}$, where $T>0$, be Lipschitz with constant $K$ and $f(0)=f(T)$.
Let us define $g(x)=f(x)$ for $x \in [0,T]$ and $g(x+T)=g(x)$ for $x \in \mathbb{R}$.
Does $g$ satisfy $$|g(x)-g(y)| \leq K |x-y|$$ for $x,y \in \mathbb{R}$?
It is clear that we may assume that $x<y$. When $|x-y| \geq T$ then there are $n,m\in \mathbb{Z}$ such that $x-mT, y-nT\in [0,T]$ and $|g(x)-g(y)|=|f(x-mT)-f(y-nT)| \leq K \cdot T \leq K|x-y|$.
When $|x-y|<T$ and, for some integer $n$, $x,y \in [nT, (n+1)T]$, we have $|g(x)-g(y)|=|f(x-nT)-f(y-nT)|\leq K |x-y|$.
There remains the case when $|x-y| <T$ and, for some integer $n$, $x\in [nT,(n+1)T]$, $y\in [(n+1)T, (n+2)T]$.
AI: Let $x\in [0,T]$, $y\in [T,2T]$. Then
$$
|g(y)-g(x)|\le |g(y)-g(T)|+|g(T)-g(x)|=|f(y-T)-f(0)|+|f(T)-f(x)|\le K(y-T)+K(T-x)=K(y-x)
$$
Of course the same argument works if $x\in [nT,(n+1)T]$, $y\in [(n+1)T,(n+2)T]$.
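For a rough numerical illustration, here is a sketch (the sample function $f(x)=|x-T/2|$ with $T=2$, $K=1$ is just one choice satisfying the hypotheses):

```python
import itertools, math

T, K = 2.0, 1.0
f = lambda x: abs(x - T / 2)                 # Lipschitz with constant K = 1, f(0) = f(T)
g = lambda x: f(x - T * math.floor(x / T))   # periodic extension of f

xs = [i * 0.05 for i in range(-200, 200)]
worst = max(abs(g(x) - g(y)) / (y - x)
            for x, y in itertools.combinations(xs, 2))
print(worst <= K + 1e-9)   # True: the same constant K works on all of R
```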
H: Unprovable statements in ZF
Possible Duplicate:
Advantage of accepting the axiom of choice
Advantage of accepting non-measurable sets
As you all know, the Banach–Tarski paradox is solely a consequence of the Axiom of Choice, and I think it is just absurd.
I'm trying to take ZF as my axiomatic model rather than ZFC. I wonder if there are some theorems, really important for understanding our number system, that can be proved in ZFC but not in ZF. (For instance, "every vector space has a basis" is equivalent to the axiom of choice, but I think we don't actually need this strong theorem, since we are always working in finite dimensions. Maybe not! Please tell me if there are branches of mathematics studying infinite-dimensional spaces.)
I wonder how many risks I am taking when I remove AC.
AI: The axiom of choice, while seemingly having counterintuitive results, is needed to ensure that infinite sets are well-behaved.
Of course if you would only like to work with finite sets then the axiom of choice is not needed, however some things which you think hold immediately would fail without the axiom of choice, and they may fail badly.
The union of countably many pairs may not be countable.
The real numbers may be a countable union of countable sets.
We may be able to partition a set into more parts than elements, in particular this set might be the real numbers.
In the real numbers continuity by sequences and by $\varepsilon$-$\delta$ are no longer equivalent.
Topology breaks down in acute ways, too horrid to begin to describe.
These facts are far, far more disturbing to me than Banach–Tarski, and they hold in several models of ZF without the axiom of choice.
What more? Let's see, what else can fail badly
There might be no free ultrafilters, on any set.
In turn some fields might not have an algebraic closure; others could have two non-isomorphic closures, for example the rationals.
There could be a tree in which every point has a successor, but which has no $\omega$-branch.
There could be a vector space which has two bases of different cardinality.
Functional analysis may stop working due to lack of Hahn-Banach, Krein-Milman, Banach-Alaoglu theorems.
If you wish to do some set theory, perhaps, it also becomes hard:
Cardinal arithmetics can fail for infinite sums and products.
In forcing the mixing lemma fails.
The partial ordering of cardinalities is not necessarily well-founded.
There may be no canonical representative for $|A|$, namely a function which returns a particular set of the cardinality of $A$ (like the $\aleph$ numbers).
This list can be made really quite infinite. Why? Because modern mathematics is very much about infinitary objects and for those to be well-behaved we really need the axiom of choice, or else a lot of bad things may occur.
It is also the case that most people are educated by choice-using people, so the basic intuitions about mathematics actually use the axiom of choice a lot more than you would think.
However there are still merits to working without the axiom of choice. For those, see my recent post:
Is trying to prove a theorem without Axiom of Choice useless?
To read even more:
Advantage of accepting the axiom of choice
Advantage of accepting non-measurable sets
Foundation for analysis without axiom of choice?
Axiom of choice and calculus
Number Theory in a Choice-less World
H: Irrational equation for a maximization problem
I have the following maximization problem
$ \max_h m_1 + 10 h^{1/4} + h + m_2 - 2 h^{1/4} + m_3 - h^{1/4} $
where $m_1, m_2, m_3$ are three fixed values. The FOC for a maximum is
$ \dfrac {10}{4} h^{-3/4}+1-\dfrac{1}{2}h^{-3/4}-\dfrac{1}{4}h^{-3/4}=0$
Rearranging
$\dfrac{7}{4} \cdot h^{-3/4} = -1$
Well, now I have no idea... how can I go on? How can I say which level of $h$ maximizes my problem? Of course, I don't want you to solve it; I just would like a hint! Thank you in advance!
AI: Are you saying you want help solving $$(7/4)h^{-3/4}=-1$$ If so, multiply by $4/7$, take reciprocals on both sides, raise both sides to the power 4, and take the cube root on both sides. (Note, though, that the left-hand side is positive for every $h>0$, so the equation has no positive real solution; raising to an even power discards the sign, so any candidate obtained this way should be checked against the original equation.)
H: Are homeomorphisms order-isomorphisms?
Is every homeomorphism between topological spaces an order isomorphism (for orders of inclusion $\subseteq$ of sets)?
AI: Every bijection $f \colon X\to Y$ induces an order-isomorphism between $(\mathcal P(X),\subseteq)$ and $(\mathcal P(Y),\subseteq)$.
This follows easily from the following two observations:
$A\subseteq B$ $\Rightarrow$ $f[A]\subseteq f[B]$ for any map $f$
$f^{-1}[f[A]]=A$, if $f$ is a bijection.
H: How the dimension of a subspace related to the differential operator
I am wondering about the link the title implies. The Spring 87 problem in Berkeley Problems in Mathematics is as follows:
Let $V$ be a finite dimensional linear subspace of $C^{\infty}(\mathbb{R})$. Assume that $V$ is closed under differentiation. Prove that there is a constant coefficient operator $$L=\sum^{n}_{k=0}a_{k}D^{k}$$ such that $V=\{f:Lf=0\}$.
I am wondering why the finite dimensional condition would imply such a strong result. Closure under differentiation alone does not suffice (the whole space is obviously closed under any differential operator), so I feel something deeper may be buried in this problem that I do not know. A hint or an illuminating example would be most welcome. I just do not know how to attack this problem.
AI: Let $\{f_1,\dots,f_N\}$ be a basis of $V$. Since $f'_k\in V$ for all $k$, we can write
$$f'_k=\sum_{l=1}^Na_{k,l}f_l,$$
and using matrices
$$\pmatrix{f'_1\\\vdots\\ f'_N}=A\pmatrix{f_1\\\vdots\\ f_N}.$$
Let $P$ be the minimal polynomial of $A$ over $\Bbb R$. We can check that the differential operator $L$ associated to it is such that $V\subset \{f\mid Lf=0\}$. The general theory of differential equations ensures the converse: if $Lf=0$, then $f$ is in the vector space generated by the maps $x^pe^{\lambda x}$, where $\lambda$ ranges over the eigenvalues of $A$, $m$ is the multiplicity of the root $\lambda$ in the minimal polynomial, and $0\leq p< m$. Each of these maps is in $V$.
Note that if $V=\ker L$ for $L$ of the form $\sum_{k=0}^n a_kD^k$, where the $a_k$ are constant, then $V$ is necessarily finite dimensional.
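To see the construction in a concrete case, here is a short sketch (assuming SymPy, and taking the sample space $V=\operatorname{span}\{e^x, xe^x\}$, which is closed under differentiation):

```python
from sympy import symbols, exp, diff, Matrix, simplify

x, t = symbols('x t')
f1, f2 = exp(x), x * exp(x)
# f1' = f1 and f2' = f1 + f2, so differentiation in the basis {f1, f2} is:
A = Matrix([[1, 0], [1, 1]])
print(A.charpoly(t).as_expr())   # t**2 - 2*t + 1 = (t - 1)**2, also the minimal polynomial here
# Hence L = D^2 - 2D + 1, and it annihilates the basis of V:
for f in (f1, f2):
    print(simplify(diff(f, x, 2) - 2 * diff(f, x) + f))   # 0 and 0
```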
H: A question on second order ODE
I want to ask for a hint in solving the following problem from Berkeley Problems in Mathematics:
Let $h>0$ be given. Consider the linear difference equation $$\frac{y((n+2)h)-2y((n+1)h)+y(nh)}{h^{2}}=-y(nh),n\in \mathbb{Z}^{*}$$
1) Find the general solution of the equation by trying suitable exponential substitutions.
2) Find the solution with $y(0)=0$ and $y(h)=h$. Denote it by $S_{h}(nh)$.
3) Let $x$ be fixed and $h=\frac{x}{n}$. Show that $$\lim_{n\rightarrow \infty}S_{\frac{x}{n}}\left(n\cdot\frac{x}{n}\right)=\sin x$$
I am having trouble even with the first step. I tried $y=e^{P(t)}Q(t)$ but I feel it is unlikely to generate the required relation. So I am stuck. I think I need a hint, as the book does not have a solution to this problem.
AI: The equation:
$$\frac{y((n+2)h)-2y((n+1)h)+y(nh)}{h^{2}}=-y(nh),n\in \mathbb{Z}^{*}\,,$$
is nothing but a difference equation with constant coefficients coming from a second order differential equation with constant coefficients. Now, you need to solve this difference equation.
Assume your solution has the form $y(nh) = r^n$ (by the way, that's the suitable exponential substitution), substitute back into your equation, and solve the resulting polynomial for $r$. Note that you can write the equation in the form
$$ y \left( n+2 \right) -2\,y \left( n+1 \right) + \left( 1+{h}^{2}
\right) y \left( n \right) =0 $$
keeping in mind that $ h=\frac{x}{n}$. Your general solution is going to have the form $y(n)=y(nh) = C_{1}r_{1}^n + C_{2}r_{2}^n$, which is going to be a function of $h$. Then you can proceed to 2), which is determining the constants $C_{1}$ and $C_{2}$ subject to the initial conditions you are given. I hope it is clear now.
Substituting $ y(n) = r^n $ in the difference equation gives
$$ r^{n+2} - 2 r^{n+1} + (1+h^2)r^n =0 \Rightarrow r^2 - 2 r +(1+h^2)=0 \,.$$
Solve the above polynomial and construct your general solution.
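As a numerical sanity check for part 3), one can simply iterate the recurrence (a Python sketch, with $x=1$ chosen arbitrarily):

```python
import math

x = 1.0
for n in (10, 100, 1000):
    h = x / n
    y_prev, y_cur = 0.0, h                 # y(0) = 0, y(h) = h
    for _ in range(n - 1):                 # advance to y(n*h) = S_h(x)
        y_prev, y_cur = y_cur, 2 * y_cur - (1 + h * h) * y_prev
    print(n, y_cur)                        # tends to sin(1) = 0.84147...
print(math.sin(x))
```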
H: Polynomial ring over $\mathbb{Z}_2$
As $f(x)$ is irreducible in $\mathbb{Z}_2[x]$, $R/(f)$ is an infinite field. Am I right?
AI: No, the quotient ring is a finite field of order 4. The reason is that, by the division algorithm, every element of the quotient ring is represented by a linear polynomial. There are 2 choices for the coefficient of $\bar{1}$ and 2 choices for the coefficient of $\bar{x}$. Hence there are 4 choices in total, and the quotient is a finite field with 4 elements.
Edit: Bill Dubuque suggested that I add why 1 and 2 are ruled out: The polynomial $f(x) = x^2 + x + 1$ has no roots over $\Bbb{Z}/2\Bbb{Z}$ and, being a quadratic, is thus irreducible over this field. It follows that the ideal generated by $f(x)$ is maximal, from which it follows (from the fact proved here) that $R/(f(x))$ is a field.
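For the record, here is a small sketch (assuming SymPy) that lists the four residues and their products modulo $f(x)=x^2+x+1$ over $\mathbb{Z}_2$:

```python
from sympy import symbols, Poly, GF

x = symbols('x')
f = Poly(x**2 + x + 1, x, domain=GF(2))
elems = [Poly(c1 * x + c0, x, domain=GF(2)) for c1 in (0, 1) for c0 in (0, 1)]
for p in elems:
    row = [(p * q).rem(f).as_expr() for q in elems]
    print(p.as_expr(), '->', row)   # a 4-element ring; every nonzero element is invertible
```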
H: Group of order $60$
[NBHM_2006_PhD screening test_Algebra]
Let $G$ be a group of order $60$, pick out the true statements:
a. $G$ is abelian
b. $G$ has a subgroup of order $30$.
c. $G$ has subgroups of order $2$, $3$, and $5$.
d. $G$ has subgroups of order $6$, $10$, and $15$.
My Attempt:
a is false because $A_5$ is a non-abelian group of order $60$.
For b, c, d I have no idea. If $G$ were abelian, then c would be correct by Cauchy's theorem.
AI: There is a non-abelian simple group of order $60$. Thus, $(a)$ is false.
We note that $(b)$ is false for essentially the same reason as $(a)$: in the non-abelian simple group of order $60$, a subgroup of order $30$ would have index $2$ and hence be normal, which is impossible.
$(c)$ is true. This is because, by Cauchy's theorem, there is an element of each of the orders specified. The subgroups those elements generate are, respectively, the required subgroups.
$(d)$ is false, but this is a bit tricky if one does not want to use the fact that the non-abelian simple group of order $60$ is the alternating group on $5$ symbols, $A_5$. It is a straightforward Sylow calculation to show that $A_5$ has no subgroup of order $15$.
Proof that $A_5$ does not have a subgroup of order $15$
Prove that a group of order $15$ is cyclic (Sylow's Theorems). Now prove that no permutation on $5$ symbols can have order $15$.
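One can also let the computer confirm the no-element-of-order-$15$ step (a sketch using SymPy's permutation groups):

```python
from sympy.combinatorics.named_groups import AlternatingGroup

A5 = AlternatingGroup(5)
print(sorted({g.order() for g in A5.elements}))   # [1, 2, 3, 5]
# There is no element of order 15; since every group of order 15 is cyclic,
# A_5 can have no subgroup of order 15.
```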
H: Factorise the determinant $\det\Bigl(\begin{smallmatrix} a^3+a^2 & a & 1 \\ b^3+b^2 & b & 1 \\ c^3+c^2 & c &1\end{smallmatrix}\Bigr)$
Factorise the determinant $\det\begin{pmatrix} a^3+a^2 & a & 1 \\ b^3+b^2 & b & 1 \\ c^3+c^2 & c &1\end{pmatrix}$.
My textbook only provides two simple examples.
I really have no idea how to do this type of question.
AI: $\det\begin{pmatrix} a^3+a^2 & a & 1 \\ b^3+b^2 & b & 1 \\ c^3+c^2 & c &1\end{pmatrix}$
$=\det\begin{pmatrix} a^3+a^2 & a & 1 \\ b^3+b^2-(a^3+a^2) & b-a & 1-1 \\ c^3+c^2-(a^3+a^2) & c-a &1-1\end{pmatrix} $ (applying $R_2'=R_2-R_1$ and $R_3'=R_3-R_1$)
$=(b-a)(c-a)
\det\begin{pmatrix} a^3+a^2 & a & 1 \\ b^2+a^2+ab+b+a & 1 & 0 \\ c^2+a^2+ca+c+a & 1 & 0\end{pmatrix}$
$=(b-a)(c-a)\cdot1\cdot
\det\begin{pmatrix} b^2+a^2+ab+b+a & 1 \\ c^2+a^2+ca+c+a & 1\end{pmatrix}$ (expanding along the third column)
$=(b-a)(c-a)(b-c)(a+b+c+1)$
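The factorisation is easy to confirm symbolically (a sketch assuming SymPy):

```python
from sympy import symbols, Matrix, factor

a, b, c = symbols('a b c')
M = Matrix([[a**3 + a**2, a, 1],
            [b**3 + b**2, b, 1],
            [c**3 + c**2, c, 1]])
print(factor(M.det()))   # equals (b - a)*(c - a)*(b - c)*(a + b + c + 1);
                         # SymPy may order/sign the linear factors differently
```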
H: Question about whether axiom of choice is needed in this proof
Do I need axiom of choice in this proof here?
I think not: at each step we choose one element from a set $N - \langle g_1, \dots, g_k \rangle $. So while there is indeed a countable number of sets involved from which we choose elements, I could also think of the process as follows:
Assume $N$ is generated uncountably. Let $C = \{g_1 , g_2, \dots \}$ be a countable subset of the generators of $N$. (So far we have not used choice, right?) We may write $C$ as a countable union of singleton sets $\bigcup_n \{g_n\}$. But now we can write down an explicit choice function: Let $c(\{x\}) = x$. Since the union is countable, we may consider the choice function $\tilde{c}: \mathbb N \to C, n \mapsto g_n$.
From here, we can finish the argument as follows: Then the following is an increasing chain of submodules: $\langle \tilde{c}(1) \rangle \subset \langle \tilde{c}(1), \tilde{c}(2) \rangle \subset \dots$, avoiding the axiom of choice.
Would someone tell me where my argument is flawed? Thanks.
AI: There is a delicate point about the definition of Noetherian which requires the axiom of choice.
Let $X$ be an amorphous set, namely an infinite set that cannot be partitioned into two disjoint infinite subsets. Such $X$ has the interesting property that if ${\cal A\subseteq P}(X)$ is a chain, then $\cal A$ is finite.
Now let $M=\bigoplus_X\mathbb Z$ as a module over $\mathbb Z$. Every chain of modules is finite, simply since it defines a chain of subsets of $X$ and every such chain is finite.
On the other hand, it is clear that $M$ is not finitely generated and therefore not Noetherian in the definition that "every submodule is finitely generated", simply take $M$ itself to be that submodule. We also have that the family of finitely generated submodules does not have a maximal element.
It should be remarked that the equivalence between "every submodule is finitely generated" and "every non-empty family of submodules has a maximal element" requires the axiom of choice (specifically it requires Dependent Choice, which amongst other things implies that every infinite set has a countably infinite subset).
More:
Can one avoid AC in the proof that in Noetherian rings there is a maximal element for each set?
Where is the Axiom of choice used?
H: Is every order isomorphism of sets induced by a bijection?
Let $f$ is an order isomorphism $\mathscr{P}A \rightarrow \mathscr{P}B$ (where $A$ and $B$ are some sets, the order is set-inclusion $\subseteq$).
Is it true that it always exist a bijection $F: A \rightarrow B$ such that $f(X) = F[X]$ for every $X\in \mathscr{P}A$?
AI: Since $f$ is an order isomorphism, show that $f(\{a\})$ is a singleton: singletons are exactly the atoms of $(\mathscr{P}A,\subseteq)$, i.e. the minimal elements above $\varnothing$, and an order isomorphism maps atoms to atoms. Now take $F(a)$ to be the unique $b$ such that $f(\{a\})=\{b\}$, and show that $F$ is a bijection.
H: length of the tangent
A circle is inscribed in a triangle $ABC$, where $AB=10$ cm, $BC=9$ cm and $AC=7$ cm. $X$, $Y$, $Z$ are the points of contact of the sides $AC$, $BC$ and $AB$ with the circle respectively. $BZ=?$
At first look, the question seemed simple to me, but I am not able to get to the answer. I found the inradius using $rs = \sqrt{s(s-a)(s-b)(s-c)}$. But that is not helping me to find the required length.
In the above formula, $r$ is the inradius, $s=(a+b+c)/2$ is the semiperimeter, and $a,b,c$ are the lengths of the sides of the triangle.
I would be glad to see the answer to this question. Any hints or answers are welcome. Thanks.
Sandy
AI: It's much easier than you're making it. XC = YC, XA = ZA and BZ = BY, because in each case you've got two tangents from the same point.
That gives you enough to solve for all of the lengths. In particular (with all lengths in centimetres), BZ = 10 - AZ = 10 - AX = 3 + XC, since AX + XC = 7. But BZ = BY = 9 - YC = 9 - XC, so 3 + XC = 9 - XC, which gives XC = 3 and BZ = 6.
H: How to prove the function $y = \sin x$ is not a closed function?
I came across a question:
Suppose that $f(x) = \sin x$ is a function from $\mathbb R$ to $[-1,1]$. How do I prove the function $f(x) = \sin x$ is not a closed function?
By "closed function", I mean a function such that the image of any closed set is closed.
AI: A closed function is a function such that the image of every closed set is closed.
It is relatively easy to see that, for any $\varepsilon>0$, every $\varepsilon$-discrete subset of the real line is closed. (A subset $A$ of a metric space $(X,d)$ is called $\varepsilon$-discrete if for any two distinct points $x,y\in A$ we have $d(x,y)\ge\varepsilon$. For subsets of the real line, this condition means $|x-y|\ge\varepsilon$.)
Can you find a sequence $(x_n)$ with the following properties?
$x_n\in(2n\pi,(2n+1)\pi)$ (which implies that $\{x_n; n\in\mathbb N\}$ is an $\varepsilon$-discrete subset for any $\varepsilon<\pi$)
$\lim\limits_{n\to\infty} \sin x_n =y$ but $y\notin\{\sin x_n; n\in\mathbb N\}$
If $(x_n)$ fulfills the above properties, then $A=\{x_n; n\in\mathbb N\}$ is a closed set, but the image of this set is not closed.
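One concrete choice that works (the hint deliberately leaves this as an exercise, so treat the following Python sketch as a spoiler): take $x_n=2n\pi+\arcsin(1/n)$.

```python
import math

xs = [2 * n * math.pi + math.asin(1 / n) for n in range(1, 8)]
print([round(math.sin(t), 6) for t in xs])   # 1.0, 0.5, 0.333333, ... -> 0
# The set {x_n} is closed (consecutive points are more than pi apart),
# but its image {1/n} accumulates at 0, which is never attained.
```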
H: Sufficient condition for convergence of a real sequence
Let $(x_n)$ be a sequence of real numbers.
Prove that if there exists $x$ such that every subsequence $(x_{n_k})$ of $(x_n)$ has a convergent (sub-)subsequence $(x_{n_{k_l}})$ to $x$, then the original sequence $(x_n)$ itself converges to $x$ .
Thanks for any help.
AI: First notice that your condition implies that your sequence is bounded.
Indeed, if $(x_n)$ is unbounded, we can find a subsequence $(x_{n_k})$ such that $|x_{n_k}|\ge k$. This subsequence does not have a convergent subsequence.
So we know that $(x_n)$ is bounded; now suppose, towards a contradiction, that it is not convergent.
This means that $$M=\limsup x_n > \liminf x_n =m.$$
(Both $M$ and $m$ are real numbers, since $(x_n)$ is bounded.)
We know (from the properties of limit superior and limit inferior) that there is a subsequence $(x_{n_k})$ which converges to $M$ and there is a subsequence $x_{n_l}$ which converges to $m$. (And every subsequence of any of these two subsequences has, of course, the same limit $M$ resp. $m$.)
We have found two subsequences with different limits, which contradicts your assumptions about the sequence $(x_n)$: every sub-subsequence of the first converges to $M$, forcing $M=x$, and likewise $m=x$, contradicting $M\neq m$. So $(x_n)$ converges; finally, its limit must be $x$, since the sequence is a subsequence of itself and therefore has a sub-subsequence converging to $x$.
H: Prove that this function is measurable
I cannot write a neat proof of this result, so I would like to see how to be precise in these kinds of arguments. Here is the problem:
Let $I=[0,1]$ and let $f\colon I\times\mathbb R\to \mathbb R$ be a function such that
i) $f(\cdot,x)$ is measurable for all $x\in\mathbb R$;
ii) $f(t,\cdot)$ is continuous for a.e. $t\in I$.
Prove that, for every continuous function $x\colon I\to\mathbb R$, the function
$$g_x(t):=f(t,x(t))$$
is measurable.
Thank you for your kindness.
AI: Fix $x\colon[0,1]\to \Bbb R$ a continuous function and define
$$g_n(t):=f\left(t,\frac{\lfloor nx(t)\rfloor}n\right),$$
where $\lfloor\cdot\rfloor$ denotes the floor function, that is, the map which assigns to a real number the largest integer smaller than or equal to that number.
We check that $g_n$ is measurable, using the assumption i), and writing $g_n$ as $$\lim_{k\to +\infty}\sum_{j=-k}^kf(t,j/n)\chi_{[j,j+1)}(nx(t)).$$
Indeed, the map $t\mapsto f(t,j/n)$ is measurable by i), and so is $t\mapsto f(t,j/n)\chi_{[j,j+1)}(nx(t))$. A sum of measurable functions is measurable, and pointwise limit of measurable functions still is measurable.
Then, using ii), show that $g_n(t)$ converges to $g_x(t)$ for almost every $t$.
H: Uniform boundedness principle statement
Consider the uniform boundedness principle:
UBP. Let $E$ and $F$ be two Banach spaces and let $(T_i)_{i \in I}$ be a family (not necessarily countable) of continuous linear operators from $E$ into $F$. Assume that $\sup_{i \in I} \|T_ix \| < \infty$ for all $x \in E$. Then $\sup_{i \in I} \|T_i\|_{\mathcal{L}(E,F)} < \infty$.
I don't understand the statement of the UBP. The assumption tells us that, for each fixed element $u$, we surely have $\|T_ku\|< \infty$ for every $k$ (in particular, for that fixed $u$, each $T$ is bounded at $u$). The conclusion tells us that the sup over the $i$'s of the set
$$
\biggl\{ \sup_{\|x\|\leq 1} \|Tx\| \biggr\}
$$
is finite. But isn't that clear from the assumption? I mean, if each $T$ is bounded, a fortiori the conclusion must hold... please explain to me where I am wrong.
AI: To illustrate the point which Davide explained in his answer, let us have a look at a concrete example.
Let us take $E=$ the set of all real sequences with finite support (i.e., only finitely many terms are non-zero). Let us use the norm $\|x\|=\sup_n |x_n|$. The space $E$ is a linear normed space, but it is not a Banach space. (So the assumptions of Banach-Steinhaus theorem are not fulfilled.)
Let us take $F=\mathbb R$ and $T_n(x)=\sum_{k=1}^n x_k$.
For every $x\in E$ we have $|T_n(x)| \le \sum_{k=1}^n |x_k|$, which is a finite number. So $\sup_n |T_n(x)|<+\infty$ for any fixed $x\in E$.
But if we take $x_n=(\underset{\text{$n$-times}}{\underbrace{1,\dots,1,}}0,0,\dots)$, then $T_n(x_n)=n$ and $\|x_n\|\le 1$. So we see that $T_n(x)$ is not bounded on the unit ball, i.e.
$$\sup_{\|x\|\le 1, n\in\mathbb{N}} T_n(x)=+\infty.$$
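In concrete terms (a toy sketch, with finitely supported sequences modelled as Python lists):

```python
def T(n, x):                # T_n(x) = x_1 + ... + x_n (missing entries count as 0)
    return sum(x[:n])

x = [1.0] * 10              # a fixed finitely supported sequence with sup-norm 1
print([T(n, x) for n in (1, 5, 10, 100)])          # bounded (by 10) for this fixed x
print([T(n, [1.0] * n) for n in (1, 5, 10, 100)])  # equals n: the norms ||T_n|| blow up
```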
H: Compute integral $\int_{-6}^6 \! \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x$
I want to solve $\int_{-6}^6 \! \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x$ but I get the wrong results:
$$
\int_{-6}^6 \! \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x =
\int_{-6}^6 \! \frac{16e^{4x} + 16e^{2x} + 4}{e^{2x}} \, \mathrm{d} x
$$
$$
= \left[ \frac{2(4e^{4x} + 8e^{2x} + 4x)}{e^{2x}} \right]_{-6}^6 =
\left[ \frac{8e^{4x} + 16e^{2x} + 8x}{e^{2x}} \right]_{-6}^6
$$
$$
= (\frac{8e^{24} + 16e^{12} + 48}{e^{12}}) - (\frac{8e^{-24} + 16e^{-12} - 48}{e^{-12}})
$$
$$
= e^{-12}(8e^{24} + 16e^{12} + 48) - e^{12}(8e^{-24} + 16e^{-12} - 48)
$$
$$
= 8e^{12} + 16 + 48e^{-12} - (8e^{-12} + 16 - 48e^{12})
$$
$$
= 8e^{12} + 16 + 48e^{-12} - 8e^{-12} - 16 + 48e^{12}
$$
$$
= 56e^{12} + 56e^{-12}
$$
Where am I going wrong?
AI: $$
\begin{align*}
\int_{-6}^6 \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x
&= \int_{-6}^6 \frac{16e^{4x} + 16e^{2x}+ 4}{e^{2x}} \, \mathrm{d} x\\
&= \int_{-6}^6 16e^{2x} + 16 + 4e^{-2x} \, \mathrm{d} x\\
&= \left[ 8e^{2x} + 16x - 2e^{-2x} \right]_{-6}^6\\
&= 8(e^{12}-e^{-12}) + 16\cdot 12 - 2(e^{-12}-e^{12})\\
&= 192 + 10 e^{12} - 10 e^{-12}
\end{align*}
$$
You can check both the indefinite and the definite integral at WolframAlpha.
I am not sure where the mistake in your solution is (since I do not understand what exactly you have done), but most probably you have used $\int \frac{f(x)}{g(x)} \, \mathrm{d} x = \frac{\int f(x) \, \mathrm{d} x}{\int g(x)\, \mathrm{d} x}$, as suggested by Gerry's comment. This formula is incorrect.
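For completeness, the value can be confirmed with a computer algebra system (a sketch assuming SymPy):

```python
from sympy import symbols, exp, integrate, simplify, E

x = symbols('x')
val = integrate((4 * exp(2 * x) + 2)**2 / exp(2 * x), (x, -6, 6))
print(simplify(val - (192 + 10 * E**12 - 10 * E**(-12))))   # 0
```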
H: Endomorphisms preserve Haar measure
I am having trouble following the argument on page 21 of P. Walters, Introduction to Ergodic Theory, of the following statement:
Any continuous endomorphism on a compact group preserves Haar measure.
Obviously, this is not true as stated, as the trivial homomorphism does not preserve Haar measure. Even restricting ourselves to nontrivial homomorphisms, I am still unable to follow the argument, which is as follows:
Let $A : G \rightarrow G$ be the endomorphism and let $m$ denote the Haar measure. Define a probability measure on Borel sets $\mu(E) = m(A^{-1}(E))$ . Now,
$$ \mu(Ax.E) = m(A^{-1}(Ax.E)) = m(x.A^{-1}E) = \mu(E) .$$
I agree with the first equality by definition, and the last equality because of $m$ being a Haar measure. But I am unable to fathom why the middle equality is true.
Moreover, even if the said equality is true, the rest of the proof goes as follows: We see that $\mu$ is rotation invariant, and we use the uniqueness of Haar measure to prove that $\mu = m$. In this, I am unable to see how the above equality ensures that $\mu$ is rotation-invariant.
Help would be appreciated.
AI: The moral is that you have to assume surjectivity of $A$.
For the middle equality: we have $A^{-1}(Ax.E)=x.A^{-1}E$ as sets:
$y\in A^{-1}(Ax.E)$ means $Ay\in Ax.E$ means $A(x^{-1}y)\in E$, while
$y\in x.A^{-1}E$ means $x^{-1}y\in A^{-1}E$.
For the rotation-invariance: we now see $\mu(Ax.E)=\mu(E)$ for all $x\in G$. In other words, $\mu(y.E)=\mu(E)$ for all $y\in \text{im} A\subset G$. By surjectivity we have $\text{im} A=G$. |
H: How to recover $Q(x)$ from the generating function $\sum Q(x)\,z^m/m!$?
How can one go back from the generating function $\sum_{m=0}^{\infty}Q(x)\frac{z^m}{m!}$ to $Q$?
In other words, find $Q$ from $\sum_{m=0}^{\infty}Q(x)\frac{z^m}{m!}$.
Note that the generating function depends on both $x$ and $z$.
Update
The reason I need this backward step is that the forward direction comes from dsolve applied to a differential equation, whose solution involves special functions such as the Kummer function; even converting to ratpoly does not succeed.
For example,
$$\sum_{m=0}^{\infty}Q(x)\frac{z^m}{m!} = \frac{2}{\pi\,(e^{x/z}+e^{-x/z})}.$$
How can one apply subs(z=0, diff(f, z$k)) when $z$ appears in a denominator?
How many times should we differentiate?
Please don't downvote; I really need this.
AI: Since $S=\sum\limits_{m=0}^{+\infty}Q\frac{z^m}{m!}$ is such that $S=Q\cdot\mathrm e^z$, one has $Q=S\cdot\mathrm e^{-z}=S\cdot\sum\limits_{m=0}^{+\infty}(-1)^m\frac{z^m}{m!}$.
H: How to prove that a matrix is positive definite?
Let $L$ be the Laplacian matrix of a strongly connected and balanced directed
graph. Define
$$
L^{s}=\frac{1}{2}\left( L+L^{T}\right) .$$
Let $D$ be a diagonal matrix with
$$
D=\begin{bmatrix}
d_{1} & & & \\
& d_{2} & & \\
& & \ddots & \\
& & & d_{n}%
\end{bmatrix},
$$
with $d_{i}\geq 0.$ There is at least one $d_{i}>0$. Clearly, this matrix is positive semi-definite. Is the matrix
$
L^{s}+D
$
positive definite or not?
AI: Take any nonzero $\mathbf{x}\in\mathbb{R}^n$ and let $\mathbf{x}=(x_1,x_2,\dotsc ,x_n)$.
Then
$\quad \mathbf{x} L^{s}\mathbf{x}^t=\frac{1}{2}\sum_{(i,j)} (x_i-x_j)^2 \geq 0$,
where the sum is taken over the directed edges of the graph (this identity uses that the graph is balanced). So $L^{s}$ is positive semi-definite, and since the graph is strongly connected, equality holds only when all the $x_i$ are equal; that is, the kernel of $L^{s}$ is spanned by the all-ones vector. Now $\mathbf{x}(L^{s}+D)\mathbf{x}^t=\mathbf{x}L^{s}\mathbf{x}^t+\sum_i d_i x_i^2$: if $\mathbf{x}$ is not constant the first term is strictly positive, and if $\mathbf{x}$ is a nonzero constant vector the second term is strictly positive because at least one $d_i>0$. Hence $L^{s}+D$ is positive definite.
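A quick numerical illustration (a sketch assuming NumPy; the directed $4$-cycle is just one convenient balanced, strongly connected example):

```python
import numpy as np

n = 4
A = np.roll(np.eye(n), 1, axis=1)   # adjacency matrix of the cycle 0 -> 1 -> 2 -> 3 -> 0
L = np.eye(n) - A                   # Laplacian: every out-degree is 1
Ls = (L + L.T) / 2
D = np.diag([1.0, 0.0, 0.0, 0.0])   # a single positive diagonal entry suffices
print(np.linalg.eigvalsh(Ls).round(6))       # smallest eigenvalue 0 (constant vectors)
print(np.linalg.eigvalsh(Ls + D).round(6))   # all eigenvalues strictly positive
```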
H: Example of Artinian ring
By definition a ring $R$ is Artinian if it is Artinian as an $R$-module. I think the following is an example of an Artinian ring: $\mathbb Q / \mathbb Z$.
Ideals in it are of the form $(\frac1n)$ (since $(\frac{1}{n_1}, \dots , \frac{1}{n_k}) = (\frac{1}{\text{lcm}_i(n_i)})$).
Since we have $(\frac1n) \subset (\frac1m)$ if and only if $n$ divides $m$, every decreasing chain stabilizes eventually since $n$ only has finitely many divisors.
Is this correct?
AI: You'd have to tell me how $\mathbb Q/\mathbb Z$ is a ring. The notation suggests that $\mathbb Z$ is an ideal of $\mathbb Q$ (which it isn't) and that we're forming the factor ring.
One way to make commutative Artinian rings is to fix a field $k$ and look at finite-dimensional $k$-algebras. Geometrically these correspond to finite sets of points in affine space. In the classical setting these will look like $\prod k$, but it's interesting to think about stuff like $\mathbb R[x]/(x^2 + 1) \simeq \mathbb C$ and $k[x]/(x^2)$.
H: Properties for a matrix being invariant under rotation?
Consider a 2D case.
Let $R$ be a rotation matrix with angle $\theta$
$$R = \begin{bmatrix}
\cos\theta & -\sin\theta\\
\sin\theta & \cos\theta
\end{bmatrix}.$$
Is it possible for a matrix $A$ to satisfy the following identity for any $\theta$
$$A = R A R^T?$$
AI: Perhaps some context will be valuable. Since $R^T = R^{-1}$ for any rotation matrix, it is equivalent to ask for matrices satisfying $AR = RA$. These are precisely the matrices commuting with any rotation matrix. There are several ways to say this. For example, we can talk about the centralizer of the rotation matrices.
Argument 1: Here another way to say "centralizer" is as follows. Thinking of the rotation matrices as describing a representation of the circle group $S^1 = \{ z \in \mathbb{C} : |z| = 1 \}$ on $\mathbb{R}^2$, we are also asking about the intertwiners of this representation.
Since $S^1$ is abelian, every element commutes with every other element, so any other rotation matrix has this property. The collection of linear combinations of such things is an algebra in $\text{End}(\mathbb{R}^2)$ isomorphic to the complex numbers.
Now, this representation $\mathbb{R}^2$ is irreducible. Schur's lemma then implies that the collection $\text{End}_{S^1}(\mathbb{R}^2)$ of intertwiners is a division algebra over $\mathbb{R}$. This division algebra contains the copy of $\mathbb{C}$ above and must also commute with every element in it, so it is in fact a division algebra over $\mathbb{C}$, and since it is finite-dimensional the only such division algebra is $\mathbb{C}$ itself.
Thus the matrices with this property are precisely the linear combinations of rotation matrices.
Argument 2: Over $\mathbb{C}$, the rotation matrices all share a common set of eigenvectors, namely $\left[ \begin{array}{c} 1 \\ -i \end{array} \right]$ and $\left[ \begin{array}{c} 1 \\ i \end{array} \right]$ with eigenvalues $e^{i \theta}, e^{-i \theta}$. Since these eigenvalues are different in general, any matrix commuting with all rotation matrices must share these eigenvectors, hence must be diagonal in the corresponding basis. But linear combinations of rotation matrices (in fact it suffices to take the identity and $90^{\circ}$ rotation) already span all such matrices (over $\mathbb{C}$, and moreover real linear combinations span the corresponding real matrices).
Argument 3: We use the fact that
$$\left[ \begin{array}{cc} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array} \right] = (\cos \theta) I + (\sin \theta) J$$
where $J = \left[ \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right]$. It follows that a matrix commutes with every rotation matrix if and only if it commutes with $J$. Explicitly writing out what this condition means, we get that these are precisely the matrices of the form
$$\left[ \begin{array}{cc} a & -b \\ b & a \end{array} \right] = a I + b J.$$
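A symbolic check of this conclusion (a sketch assuming SymPy):

```python
from sympy import symbols, sin, cos, Matrix, eye, simplify

a, b, th = symbols('a b theta')
R = Matrix([[cos(th), -sin(th)], [sin(th), cos(th)]])
J = Matrix([[0, -1], [1, 0]])
A = a * eye(2) + b * J
print(simplify(A * R - R * A))   # Matrix([[0, 0], [0, 0]]) for every theta
```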
Exercise: More generally, let $M$ be a matrix with distinct eigenvalues. Show that its centralizer $\{ A : AM = MA \}$ is spanned by $1, M, M^2, M^3, ...$ and find an example where $M$ does not have distinct eigenvalues and where the centralizer contains other elements.
H: Putnam Problem: Partitioning integers with generating functions
We were given the following A-1 problem from the 2003 Putnam Competition:
Let $n$ be a fixed positive integer. How many ways are there to write $n$ as a sum of positive integers, $$ n= a_1+a_2+ \cdots + a_k$$
With $k$ an arbitrary positive integer, and $a_1 \le a_2 \le \cdots \le a_k \le a_1+1$. For example, with $n=4$ there are 4 ways: 2, 2+2, 1+1+2, 1+1+1+1.
I managed to do this by induction, showing that there are always $n$ ways to partition an integer in such a way. In my combinatorics class however, we always solved integer partitioning problems with generating functions and I have been unable to construct one for this problem. I was wondering if the math.stackexchange community could help me out with this and at least give me a nudge in the right direction.
Thanks
AI: Recall exponential notation for partitions: $a^b$ signifies $b$ occurrences of $a$ in the partition. (Exponential notation can be useful for seeing the generating functions.) In exponential notation, every partition satisfying your constraints is of the form $m^k (m+1)^l$. In your $n = 4$ example, the partitions are $4^1 5^0$, $2^2 3^0$, $1^2 2^1$, and $1^4 2^0$. Notice that the smaller number, $a_1$, must have exponent at least $1$, while its successor can have exponent $0$.
For a fixed $m$, the contributions from partitions of the form $m^k (m+1)^l$ are given by the following generating function:
$$(x^m + x^{2m} + x^{3m} + \cdots)(1 + x^{m+1} + x^{2(m+1)} + \cdots)$$
This simplifies to:
$$\frac{x^m}{1-x^m} \frac{1}{1-x^{m+1}}$$
Such contributions come from any $m \geq 1$, and of course, the contributions are disjoint. Thus the full generating function is:
$$\sum_{m \geq 1} \frac{x^m}{1-x^m} \frac{1}{1-x^{m+1}}$$
This might already be too great a nudge, but the point is that you now obtain something you can manipulate. After some obvious $1 - x$ factorings and some telescoping, I get $\frac{x}{(1 - x)^2}$, a generating function for $n$, as desired. Let me know if you get similar results or not.
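If it helps, here is a truncated series comparison (a sketch assuming SymPy) confirming that the coefficients agree:

```python
from sympy import symbols, series

x = symbols('x')
gf = sum(x**m / ((1 - x**m) * (1 - x**(m + 1))) for m in range(1, 12))
lhs = series(gf, x, 0, 10).removeO()
rhs = series(x / (1 - x)**2, x, 0, 10).removeO()   # sum of n*x^n
print((lhs - rhs).expand())   # 0: the coefficient of x^n is n for all n < 10
```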
H: Poset of idempotents
Let $A$ be an associative algebra and consider the poset $I$ of idempotents in $A$, where as usual if $e,f \in A$ we say $e \leq f$ provided that $e = fef$. Clearly $0 \in I$ is minimal, but in general I can't say anything else about this poset.
This leads me to ask: are there any other general properties of $I$ without more assumptions on $A$? In particular, if I am given some poset with a minimal element, is it possible to construct an algebra with the prescribed poset of idempotents?
Edit: I should have said I'm working over a field (or at least a ring with no nontrivial idempotents), since otherwise idempotents in the base ring complicate things. Also: bonus points for producing examples where $A = \cup_{e \in I} eAe$.
AI: The poset of idempotents is equipped with a relative complement operation: $e \le f$ if and only if $ef = fe = e$, and then we compute that
$$(f - e)^2 = f - e$$
hence $f - e$ is also an idempotent. This operation, which I'll write as $e \Rightarrow f$, satisfies the following axioms:
$e \Rightarrow f$ is order-preserving in the second variable and order-reversing in the first,
$(0 \Rightarrow f) = f$,
$((e \Rightarrow f) \Rightarrow f) = e$,
$\inf(e, e \Rightarrow f) = 0$.
In particular, for fixed $f$ it defines an order-reversing isomorphism from the interval $[0, f]$ to itself. Consequently, any poset with a minimal element $0$ and an element $e$ such that the interval $[0, e]$ is not isomorphic to its opposite cannot admit a relative complement and so cannot be the poset of idempotents of a ring. A poset with this property of minimal size is the poset $\{ 0 \lt a \lt b, c \lt 1 \}$.
Another minimal counterexample is the total order $3 = \{ 0 \lt 1 \lt 2 \}$, which admits a unique map $e \Rightarrow f$ satisfying the first, second, and third axioms above, and this map does not satisfy the fourth axiom.
Another thing you can say about the idempotents in a rng is that they are equipped with an interesting relation, namely commutativity. If $e, f$ are commuting idempotents, then $ef$ gives their inf and $e + f - ef$ gives their sup. More generally, any commuting subposet of the idempotents is a lattice.
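A tiny matrix illustration of the relative complement (a sketch assuming SymPy, in $M_2(\mathbb{Q})$):

```python
from sympy import Matrix, eye

e = Matrix([[1, 0], [0, 0]])
f = eye(2)
assert e * f == e and f * e == e   # e <= f
g = f - e                          # the relative complement e => f
print(g * g == g, e * g)           # True, and e*g = 0, so inf(e, g) = 0
```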