H: Endomorphisms in a symmetric monoidal category
Let $\mathcal{C}$ be a symmetric monoidal category generated by one element $X$ such that $End(X)=G$ where $G$ is a finite group. Is it true that, for any object $A \in \mathcal{C}$, $End(A)$ is isomorphic to a wreath product $G \wr S_n$, $n \in \mathbb{N}$ ?
AI: No. Take the category whose objects are vector spaces of dimension $2^n$ over $\mathbb{F}_2$ for $n \in \mathbb{Z}_{\ge 0}$ and whose morphisms are all isomorphisms. The object $X = \mathbb{F}_2^2$ generates this category under tensor product (I assume this is what you meant) and $\text{End}(X) \cong \text{GL}_2(\mathbb{F}_2) \cong S_3$. It should be straightforward to verify that $\text{End}(X^{\otimes n}) \cong \text{GL}_{2^n}(\mathbb{F}_2)$ has order larger than $S_3 \wr S_n$ for sufficiently large $n$. |
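The order comparison at the end can be checked directly; a small sketch (the helper names `gl_order` and `wreath_order` are my own):

```python
from math import factorial

def gl_order(m: int) -> int:
    """Order of GL_m(F_2): the product of (2^m - 2^i) for i = 0..m-1."""
    order = 1
    for i in range(m):
        order *= (2**m - 2**i)
    return order

def wreath_order(n: int) -> int:
    """Order of the wreath product S_3 wr S_n, i.e. |S_3|^n * n! = 6^n * n!."""
    return 6**n * factorial(n)

# End(X^{tensor n}) = GL_{2^n}(F_2) already outgrows S_3 wr S_n at n = 2.
for n in range(2, 6):
    assert gl_order(2**n) > wreath_order(n)
```

(For $n=1$ the two orders agree: $|\mathrm{GL}_2(\mathbb F_2)| = 6 = |S_3|$; the gap opens from $n=2$ on.)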
H: Which of the following cannot be the length of triangle
I have a question regarding triangles which is puzzling me:
In triangle PQR , PR=7 and PQ=4.5 . Which of the following cannot represent the length of
QR ? a)2.0 , b)3 , c)3.5 , d)4.5 , e)5.0
Any suggestions ?
AI: The sum of the two smaller sides must be greater than the third side.
So $2$ cannot be the length of $QR$: $2+4.5=6.5<7$. |
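The same check can be run over all five options; a minimal sketch (`can_be_triangle` is a hypothetical helper name):

```python
def can_be_triangle(a: float, b: float, c: float) -> bool:
    """Triangle inequality: every side must be shorter than the sum of the other two."""
    return a + b > c and a + c > b and b + c > a

options = [2.0, 3.0, 3.5, 4.5, 5.0]
impossible = [qr for qr in options if not can_be_triangle(7.0, 4.5, qr)]
# Only option (a) fails: 2 + 4.5 = 6.5 < 7.
```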
H: Squared Series Fourier
Possible Duplicate:
Fourier 1st step?
How do I find the Fourier transform of a series of the following form:
$$y_k=\left[f(x) \right]^{2},$$
but I am not sure of the step by step for going about this computation.
What is the first step?
Thank you very much!
AI: Integration by parts. You're doing an integral of the form
$$ \int (f(x))^2 \exp(i\xi x) dx $$
Let $u = (f(x))^2$ and $dv = \exp(i \xi x)\,dx$. |
H: Fourier transform of $y_k=\left[k-\frac{m-1}{2}\right]^{2},$
I am trying to find the discrete Fourier transform of
$$y_k=\left[k-\frac{m-1}{2}\right]^{2},$$
but I am not sure of the step by step for going about this computation.
What is the first step?
Thank you very much!
AI: I assume you want a DFT of $(y_0,y_1,\cdots,y_{n-1})$ given $n$. First write out the FT and then expand:
$$\begin{array}{c l}\widehat{y}_r & =\sum_{k=0}^{n-1}\left(k-\frac{m-1}{2}\right)^2\exp\left(-2\pi i\frac{kr}{n}\right) \\ &
= \left(\sum_{k=0}^{n-1}k^2 \big(e^{-2\pi i r/n}\big)^k\right)-(m-1)\left(\sum_{k=0}^{n-1}k\big(e^{-2\pi i r/n}\big)^k\right)+\left(\frac{m-1}{2}\right)^2\left(\sum_{k=0}^{n-1}\big(e^{-2\pi i r/n}\big)^k\right).\end{array}$$
Now some tricks come in handy. First, the geometric sum formula:
$$\sum_{k=0}^{n-1} z^k=\frac{z^n-1}{z-1}.$$
Differentiating this and then multiplying by $z$ gives:
$$\sum_{k=0}^{n-1} kz^{k}=\frac{(n-1)z^{n+1}-nz^n+z}{(z-1)^2}. \tag{*}$$
Plugging in $z=e^{-2\pi i r/n}$ will give the middle term of our expression, and the geometric sum formula itself works on the last term (it's zero unless $r=0$, in which case it's a sum of $1$'s), but what about the first term? Differentiate $(*)$ and multiply by $z$ again to get another formula... |
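Formula $(*)$ can be sanity-checked numerically at the roots of unity that actually appear in the DFT (for $r \neq 0$, so the denominator does not vanish); a quick sketch:

```python
import cmath

# Verify  sum_{k=0}^{n-1} k z^k = ((n-1) z^{n+1} - n z^n + z) / (z-1)^2
# at z = exp(-2 pi i r / n) for every nonzero frequency r.
n = 12
for r in range(1, n):
    z = cmath.exp(-2j * cmath.pi * r / n)
    direct = sum(k * z**k for k in range(n))
    formula = ((n - 1) * z**(n + 1) - n * z**n + z) / (z - 1)**2
    assert abs(direct - formula) < 1e-9
```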
H: Lindenbaum algebra is a free algebra
The following is a continuation of this question.
I would like to prove that the Lindenbaum algebra is a free algebra. Hopefully I would like to hear hints on how to proceed in the 'right' direction.
Let $X$ be a set of propositional variables, $M$ the set of all boolean expressions over $X$ and $L = M/_{\sim}$ the partition of $M$ into logically equivalent sentences.
The claim is that $L$ is free over $X$ with respect to the map $e:X \to L$ defined as $e(x) = [x]$, where $[x]$ denotes the equivalence class of $x \in X.$
Let $B$ be any boolean algebra and $f:X\to B$ any function. We want to argue that there is precisely one homomorphism $\overline{f}:L\to B$ such that $\overline{f}\circ e = f.$ The only choice I see is to extend the map defined as
$$\overline{f}([a]) = f(x) \; \hbox{if} \; x \in X \; \hbox{and} \; [x] = [a]$$
to a homomorphism in the natural way (if $[a]$ is not the equivalence class of a propositional variable, then apply $\overline{f}$ recursively to the subterms of a compound element of $[a]$).
The following seems like the wrong way to do it, since one then has many technicalities to show:
1. $\overline{f}$ is well defined.
2. $\overline{f}$ is indeed the only possible homomorphism. Is it valid to use an inductive argument to show that if $\overline{g}$ is another such homomorphism then it has to be that $\overline{f} = \overline{g}$, since $\overline{f},\overline{g}$ agree on all elements that are equivalence classes of propositional variables and any other element in $L$ is a finite expression in these?
As said I believe I haven't defined $\overline{f}$ in a convenient way to allow me to prove the necessary conditions.
Is there a better way to approach this?
AI: What you’re doing is fine: you want to define $\bar f$ by structural recursion on formulas of $L$. The base of it is what you already have: $\bar f\big([x]\big)=f(x)$ for $x\in X$. Now if $\varphi,\psi\in L$ and $\bar f\big([\varphi]\big)$ and $\bar f\big([\psi]\big)$ have been defined, let
$$\begin{align*}
&\bar f\big([\varphi\lor\psi]\big)=\bar f\big([\varphi]\big)\lor\bar f\big([\psi]\big),\\
&\bar f\big([\varphi\land\psi]\big)=\bar f\big([\varphi]\big)\land\bar f\big([\psi]\big),\text{ and }\\
&\bar f\big([\lnot\varphi]\big)=\lnot\bar f\big([\varphi]\big)\;.
\end{align*}$$
You know that if $\varphi\equiv\varphi'$ and $\psi\equiv\psi'$, then $\varphi\lor\psi\equiv\varphi'\lor\psi'$, $\varphi\land\psi\equiv\varphi'\land\psi'$, and $\lnot\varphi\equiv\lnot\varphi'$, so $\bar f$ is well-defined. To show that $\bar f$ is unique, just do what you sketched in (2): show by structural induction (parallel to the structural recursion defining $\bar f$) that if $g:L\to B$ is a homomorphism such that $g\!\upharpoonright\! X=f$, then $g=\bar f$. |
H: Solving $\frac{dP}{dt} = k(M - P)$
I am supposed to solve for $P(t)$, i.e. to find an expression for $P(t)$, and then I am supposed to find the limit.
I can't find anything.
$$\frac{dP}{dt} = k(M - P)$$
$$\frac{dP}{M - P} = k \, dt$$
$$\int \frac{dP}{M - P} = \int k \, dt$$
$$ \ln \frac{1}{M - P} = xk + c$$
$$ \frac{1}{M - P} = e^{xk} + e^c$$
$$ \frac{1}{e^{xk} + e^c} = M - P$$
$$ -\frac{1}{e^{xk} + e^c} +M= P$$
This is wrong but I am not sure why.
AI: The separation of variables went well, and in general outline the calculation was along the right lines. However, there are some problems of detail.
An antiderivative of $\frac{1}{M-P}$ with respect to $P$ is $-\ln(|M-P|)$. In your work, the minus sign is missing.
It is always good to check by differentiating whether you have integrated right. The derivative of $\ln(M-P)$ with respect to $P$ is $-\frac{1}{M-P}$ (Chain Rule). Not quite the $\frac{1}{M-P}$ that is needed, but the fix is easy.
Later there is a typo, there is an $x$ where $t$ is intended. There is also a problem with the simplification of $e^{kt+c}$. Note that $e^{u+v}=e^u e^v$.
To do things right, we integrate and get
$$-\ln(|M-P|)=kt +c.$$
Either multiply both sides by $-1$, and take the exponential of both sides, or exponentiate directly. We do the first. So we have $\ln(|M-P|)=-kt -c$, and therefore $|M-P|=e^{-c}e^{-kt}$, so $M-P=\pm e^{-c}e^{-kt}$.
For simplicity, let $C=\pm e^{-c}$. We then get $P=M-Ce^{-kt}.$ To find the appropriate value of $C$, we need more information, such as an initial condition, the value of $P$ at a certain time $t$, often (but not necessarily) at $t=0$. In particular, if $P(0)=0$, it turns out that $C=M$.
The limit as $t\to\infty$ is easy to find even if we are not given an initial condition. I assume that the constant $k$ is positive. Then, as $t\to\infty$, we have $e^{-kt}\to 0$, so the limit of $P(t)$ is $M$. |
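A quick numerical check that $P(t)=M-Ce^{-kt}$ satisfies the equation and has the stated limit (the values of $k$, $M$, $C$ below are chosen arbitrarily, with $C=M$ matching $P(0)=0$):

```python
import math

k, M, C = 0.5, 100.0, 100.0   # C = M corresponds to the initial condition P(0) = 0

def P(t):
    return M - C * math.exp(-k * t)

# Check dP/dt = k(M - P) via central finite differences at several times.
h = 1e-6
for t in [0.0, 1.0, 5.0, 10.0]:
    dPdt = (P(t + h) - P(t - h)) / (2 * h)
    assert abs(dPdt - k * (M - P(t))) < 1e-4

# For k > 0, P(t) -> M as t -> infinity.
assert abs(P(50.0) - M) < 1e-8
```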
H: What does "+ complete" mean?
I'm reading notes about Liapunov stability, and in the book of Abraham, Marsden and Ratiu I found the next definition:
Let $m$ be a critical point of $X$. Then
$m$ is stable (or Liapunov stable) if for any neighborhood $U$ of $m$, there is a neighborhood $V$ of $m$ such that if $m'$ $\in$ $V$, then $m'$ is $+$ complete and $F_{t}(m') \in U$ for all $t \geq 0$ .
I want to know what "$+$ complete" means.
Thanks!
AI: It's defined a few pages earlier (2.1.13). It means that the integral curve starting at $m'$ at $t=0$ is defined for all $t>0$. |
H: Does a closed form sum for this fourier series exist?
Continuing from an earlier question of mine: Fourier-Series of a part-wise defined function?
I now got a fourier series which I believe is the correct one:
$$\frac{\pi(b-a)}{2} + \sum\limits_{n=1}^{\infty} \frac{(a-b)(1-(-1)^n)}{n^2\pi}\cos(nx) + \frac{(-1)^n(b-a)}{n}\sin(nx)$$
Now my next task confused me a little - "What is the sum of this series?". By definition, this should just be $f(x)$ ($\frac{ax+bx}{2}$ if $x$ is a whole multiple of $\pi$) right? So I figured I am probably supposed to find a closed form for this - although given the definition of the function, I find it hard to imagine that it even exists. Am I wrong? If so, what is the closed form of this series?
(This wasn't the correct fourier series after all - the right one is
$$\frac{\pi(b-a)}{2} + \sum\limits_{n=1}^{\infty} \frac{(a-b)(1-(-1)^n)}{n^2\pi}\cos(nx) + \frac{(-1)^{n+1}(a+b)}{n}\sin(nx)$$)
AI: Hint 1: prove the following sums for $x \in (-\pi,\pi)$ and conclude using the appropriate linear combination.
$$\tag{1} \sum_{n=1}^\infty \frac {\cos(nx)}{n^2}=\frac{(\pi-|x|)^2}4-\frac{\pi^2}{12}$$
$$\tag{2} \sum_{n=1}^\infty \frac {(-1)^n\cos(nx)}{n^2}=\frac{x^2}4-\frac{\pi^2}{12}$$
$$\tag{3} \sum_{n=1}^\infty \frac {(-1)^n\sin(nx)}n=-\frac x2$$
Hint2: To prove these identities start by computing the Fourier series of $f(x)=\frac x2$ : you should get minus the last identity (it is the classical 'Sawtooth wave').
Setting $x:=y+\pi$ you may get a fourth identity (here $\operatorname{sign}$ is the sign function, taking values $\pm 1$):
$$\tag{4} \sum_{n=1}^\infty \frac {\sin(nx)}n=\operatorname{sign}(x)\frac {\pi-|x|}2$$
The integral of $(4)$ will give you minus the first identity (the constant of integration is obtained by considering $x$ at $0$).
The integral of $(3)$ will give you minus the second identity.
Hint3: The result is rather simple (and was probably the starting point!) and it should be easier to separate the cases $x\in (-\pi,0)$ and $x\in (0,\pi)$. |
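Before proving identities $(1)$ and $(2)$, one can compare partial sums against the closed forms numerically; an illustrative sketch (the tail of $\sum 1/n^2$ past $N$ terms is about $1/N$, hence the loose tolerance):

```python
import math

def s1(x, N=100000):
    """Partial sum of identity (1): sum cos(nx)/n^2."""
    return sum(math.cos(n * x) / n**2 for n in range(1, N + 1))

def s2(x, N=100000):
    """Partial sum of identity (2): sum (-1)^n cos(nx)/n^2."""
    return sum((-1)**n * math.cos(n * x) / n**2 for n in range(1, N + 1))

for x in [-2.0, 0.5, 1.0, 3.0]:   # sample points in (-pi, pi)
    assert abs(s1(x) - ((math.pi - abs(x))**2 / 4 - math.pi**2 / 12)) < 1e-3
    assert abs(s2(x) - (x**2 / 4 - math.pi**2 / 12)) < 1e-3
```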
H: Existence of the Lebesgue integral using a variety of examples
Problem:
I am self-learning about Lebesgue integration, and am just starting to try and apply some examples of the existence of the integral.
For each of the following 5 examples, does the Lebesgue integral exist on $(0,\infty)$, and if it does, is it finite?
$f(x)=\sum_{k=0}^\infty e^{-2^kx}$
$f(x)=\sin(\frac1{x^2})$
$f(x)=x^{-1}(1-e^{-x})$
$f(x)=\frac{1-e^{-x}}{x}-\frac{1}{1+x}$
$f(x)=x^{-1}(e^{-x}-e^{-1/x})$
I have read a bit about what it means for the integral to exist in general, but I obviously do not fully understand the concept, as when I try to apply it I quickly get confused.
Any help, both in generally understanding the concept, and on any/all of the examples I found would be greatly appreciated.
AI: None of these problems is testing your conceptual understanding of Lebesgue integration. All of these functions are continuous, so the Lebesgue integral exists if and only if either $\int_0^\infty f_+(x)\,dx$ or $\int_0^\infty f_-(x)\,dx$ is finite, where $f_+$ is the positive part of $f$ (i.e. $f_+(x) = \max(f(x),0)$), and $f_-$ is the negative part of $f$.
Really these questions are designed to help you practice your calculus skills, particularly the analysis of improper Riemann integrals. In each case, what you want to do is analyze the behavior of the function near $0$ and $\infty$.
For example, in the second problem, the function is bounded as $x\to 0$, and as $x\to\infty$ it's roughly $1/x^2$, so the improper integral converges. (This can be made rigorous using the Limit Comparison Test for improper integrals and L'Hospital's Rule.)
In the third problem, the function is strictly positive, and is roughly $1/x$ as $x\to\infty$, so the improper integral will diverge to $\infty$. Thus, the Lebesgue integral is infinite.
In general, remember that $\displaystyle\int_0^1 \!\!\frac{1}{x^p}\,dx$ converges if and only if $p<1$, and $\displaystyle\int_1^\infty \!\!\frac{1}{x^p}\,dx$ converges if and only if $p>1$. The techniques for checking convergence of improper integrals are the same as those for infinite series (Comparison Test, Limit Comparison Test, etc.)
I would suggest that you play with each of these problems using these sorts of techniques. If you get stuck on any of them, you could post that problem individually as a question. |
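The limit-comparison reasoning for examples 2 and 3 can be illustrated numerically: $x^2\sin(1/x^2)\to 1$ and $x\cdot\frac{1-e^{-x}}{x}\to 1$ as $x\to\infty$. A minimal sketch:

```python
import math

def f2(x):                      # example 2
    return math.sin(1 / x**2)

def f3(x):                      # example 3
    return (1 - math.exp(-x)) / x

# Limit comparison at infinity: f2 ~ 1/x^2 (integrable tail),
# f3 ~ 1/x (non-integrable tail, so the integral diverges to +infinity).
for x in [1e3, 1e4, 1e5]:
    assert abs(f2(x) * x**2 - 1) < 1e-5
    assert abs(f3(x) * x - 1) < 1e-5
```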
H: Calculating angles when sides are known - Without Trignometric ratios.
The question is:
ABCD is a parallelogram and BFDE is a square . If AB is 20 and CF is 16 what is the perimeter of the parallelogram.
The question is fairly simple and I know how to solve it. However, how would I get the remaining angles of triangle FDC (highlighted) if I know that CD=20 and FD=12? I also know that angle F is 90, but the triangle doesn't seem to be a 30-60-90 or a 45-45-90. Also, I need to solve this without trigonometric ratios, since I won't be using a calculator.
AI: The triangles $ABE$, $CDF$ are congruent, thus $CD=20$. Apply now Pythagoras and get $FD^2=20^2-16^2=144\Longrightarrow FD=12=FB=ED$, so $AD=BC=16+12=28$, $AB=CD=20$, and now just add. |
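A quick arithmetic check under the configuration the answer assumes ($F$ on side $BC$, so $BC = CF + FB$; that placement is an assumption from the figure, not stated in the text):

```python
import math

cd, cf = 20, 16
fd = math.isqrt(cd**2 - cf**2)     # Pythagoras: FD = sqrt(400 - 256) = 12
bf = fd                            # FB is a side of the square BFDE
bc = cf + bf                       # BC = CF + FB = 16 + 12 = 28
perimeter = 2 * (cd + bc)          # 2 * (20 + 28) = 96
assert fd == 12 and perimeter == 96
```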
H: Alternative approach for result involving harmonic functions.
I encountered the following 2-part problem on a practice exam:
(a) Show that if $f:\Bbb C\to\Bbb C$ is entire and the real part of $f$ is always positive, then $f$ is constant. (b) Show that if $u:\Bbb R^2\to\Bbb R$ is a harmonic function with $u(x,y)>0$ for all $x,y\in\Bbb R$, then $u$ is constant.
Now, (a) was fairly simple. Putting $f=u+iv$, I took $g(z)=e^{-iz}$, so $|g(f(z))|=e^{-u(x,y)}<e^0=1$, so $g\circ f$ is a bounded entire function, so constant by Liouville's Theorem, from which it is readily seen that $f$ is also constant.
For (b), I wasn't certain what to do. According to the wikipedia article on harmonic conjugates, if the domain of a harmonic function is simply connected, then it admits a harmonic conjugate, and so (b) follows from (a), since the plane is of course simply connected. I had never seen this result before, so (obviously) didn't think to use it.
My question is this: Aside from proving that $u$ has a harmonic conjugate, I wonder if there are other ways that we can approach a proof of (b). My experience with harmonic analysis has been almost completely in the context of analytic functions. Any ideas?
AI: Liouville's theorem also holds in the real case. (One of) the proof(s) is based on another result, known as Harnack's inequality, which in a general form reads
$$\Delta u=0,\ u\geq 0\text{ in }\Omega \Rightarrow\underset{\Omega'}{\sup}\,u\leq C \underset{\Omega'}{\inf}\,u\quad\text{for every }\Omega'\subset\subset\Omega,$$
where $C$ depends only on the domain. For the case $\Omega=\mathcal{B}(\underline{0},R)$ and $u$ non-negative we can use Poisson's formula for the ball and write a more useful version of the Harnack's inequality, namely
$$\frac{R^{n-2}(R-|\underline{x}|)}{(R+|\underline{x}|)^{n-1}}u(\underline{0})\leq u(\underline{x})\leq \frac{R^{n-2}(R+|\underline{x}|)}{(R-|\underline{x}|)^{n-1}}u(\underline{0})\tag{1}$$
Now suppose $\Delta u=0,\ u\geq M,\ \forall \underline{x}\in\mathbb{R}^n$. Then $w:=u-M$ is non-negative and we can use Harnack's inequality in $\mathcal{B}(\underline{0},R)$, with $R$ arbitrary. If in $(1)$ we take the limit as $R$ goes to infinity, we obtain
$$w(\underline{0})\leq w(\underline{x})\leq w(\underline{0})$$
that is, $w$ is constant.
Alternatively, if you don't want to invoke Harnack's inequality, you can prove Liouville's theorem by applying the following result about harmonic functions:
If $u$ is harmonic in $\Omega$ and $\mathcal{B}(\underline{x},R)\subset\subset\Omega$ (meaning that the closure of $\mathcal{B}(\underline{x},R)$ is contained in $\Omega$), then
$$|u_{x_j}(\underline{x})|\leq \frac{n}{R}\underset{\partial \mathcal{B}(\underline{x},R)}{\max}|u|$$
This result can be actually generalized to derivatives of any order (the constant in front of the max will change depending on the order of differentiation).
Now, if $u$ is harmonic on $\mathbb{R}^n$, then the above result holds for every $R>0$ and every $\underline{x}$. Taking the limit as $R$ goes to infinity, you get the Liouville's theorem. |
H: Complex Analysis Book
I want a really good book on Complex Analysis, for a good understanding of theory. There are many complex variable books that are only a list of identities and integrals and I hate it. For example, I found Munkres to be a very good book for learning topology, and "Curso de Análise vol I" by Elon Lages Lima is the best Real Analysis book (and the best math book) that I have read with many examples, good theory and challenging exercises.
An intuitive and introductory approach is not very important if the book has good explanations and has correct proofs.
Added: If it is possible, tell me your experience with your recommended books and if you got a really good understanding of complex analysis with a deep reading.
AI: Conway, "Functions of One Complex Variable I" http://books.google.ca/books?id=9LtfZr1snG0C |
H: Continued fraction question
I have been given an continued fraction for a number x:
$$x = 1+\frac{1}{1+}\frac{1}{1+}\frac{1}{1+}\cdots$$
How can I show that $x = 1 + \frac{1}{x}$? I played around some with the first few convergents of this continued fraction, but I don't get close.
AI: Doesn't this immediately follow from the definition of the $n+\frac1{a+}\frac1{b+}\cdots$ notation you are using? Specifically, I thought that $\frac1{a+}Z\ldots$ was defined to be exactly the same as $\frac1{a+Z\ldots}$.
Then if $x=1+\frac1{1+}\frac1{1+}\cdots$ then $\frac1x = \frac1{1+}\frac1{1+}\cdots $ and $1+\frac1x = 1+\frac1{1+}\frac1{1+}\cdots = x$. |
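Numerically, the convergents approach the positive root of $x = 1 + \frac1x$, the golden ratio $\varphi = \frac{1+\sqrt5}{2}$; a short sketch (`convergent` is a hypothetical helper name):

```python
def convergent(depth: int) -> float:
    """Evaluate 1 + 1/(1 + 1/(1 + ...)) truncated at the given depth."""
    x = 1.0
    for _ in range(depth):
        x = 1.0 + 1.0 / x
    return x

phi = (1 + 5**0.5) / 2            # golden ratio, the positive root of x = 1 + 1/x
x = convergent(60)
assert abs(x - phi) < 1e-12       # convergents approach phi
assert abs(x - (1 + 1 / x)) < 1e-12   # and the limit satisfies x = 1 + 1/x
```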
H: Conditions stronger than differentiability or weaker than integrability
Let $f: [a,b] \to \mathbb R$. If $f$ is (Riemann-)integrable on $[a,b]$, then define $F: [a,b] \to \mathbb R$ by $$F(x) = \int_a^x f.$$
We have the following:
$$\begin{array}{ccccccc}
f \text{ differentiable} & \implies & f \text{ continuous} & \implies & f \text{ integrable} & \implies & ?_1 \\
\Big\Downarrow & & \Big\Downarrow & & \Big\Downarrow & & \Big\Downarrow \\
?_2 & \implies & F \text{ differentiable} & \implies & F \text{ continuous} & \implies & F \text{ integrable}
\end{array}$$
Of course, we can define $$\mathcal F(x) = \int_a^x F, \qquad \mathfrak F(x) = \int_a^x \mathcal F, \qquad \text{etc.}$$ for this chain of implications to grow indefinitely.
My questions are
($?_1$) Is there a property of $f$, strictly weaker than integrability, that guarantees that $F$ is integrable?
($?_2$) Similar.
Edit: Ignore $?_1$. That was just plain stupidity on my part. If $f$ isn't integrable, then $F$ isn't even defined. Oops.
AI: If $f$ is differentiable, then $F$ will be twice differentiable. I don't think you can get anything more than that. |
H: Integration of $\int\frac{1}{x^{4}+1}\mathrm dx$
I don't know how to integrate $\displaystyle \int\frac{1}{x^{4}+1}\mathrm dx$. Do I have to use trigonometric substitution?
Many duplicate posts link to this one as the target. (Those posts were merged into this one, which is the source of the many answers.)
AI: I think you can do it this way.
\begin{align*}
\int \frac{1}{x^4 +1} \, dx & = \frac{1}{2} \int\frac{2}{1+x^{4}} \, dx \\
&= \frac{1}{2} \int\frac{(1-x^{2}) + (1+x^{2})}{1+x^{4}} \, dx \\
&= \frac{1}{2} \int \frac{1-x^2}{1+x^{4}} \, dx + \frac{1}{2} \int \frac{1+x^{2}}{1+x^{4}} \, dx \\
&= -\frac{1}{2} \int \frac{1-\frac{1}{x^2}}{\left(x+\frac{1}{x}\right)^{2} - 2} \, dx + \text{(the same trick on the second integral, with $x-\tfrac{1}{x}$)}
\end{align*} |
H: Angles formed by intersection of two diagonals in parallelogram and square?
Initially I was of the opinion that if two diagonals in a parallelogram intersect, then the angle formed at the point of intersection is 90 degrees (I came to this conclusion by plugging in values). If this applied to a parallelogram, then I assumed it also applied to a square or a rectangle. In both of the figures below I assumed angles A, B, C will always be 90 degrees, at least that's what I thought. Am I wrong?
Now I just came across a question which is conflicting with this concept the question is:
ABCD is a rectangle whose diagonals AC and BD intersect at E (shown in red in the figure). Which of the statements is not necessarily true? The answer to this was: AE is perpendicular to BD. Could anyone let me know why this is not true? Did I have the wrong idea to start with?
AI: Draw (with a ruler) a long skinny rectangle, say base $6$, height $1$. Now draw the diagonals. You will see that the two diagonals definitely do not meet at right angles, not even close!
It turns out that a parallelogram has its diagonals meeting at right angles if and only if the parallelogram is a rhombus (all sides equal). Note that a square is a special case of a rhombus.
Proof: It is fairly easy to prove that the diagonals of a parallelogram (and therefore of the special parallelogram called a rectangle) bisect each other.
Let the two diagonals of a parallelogram have length $2p$ and $2q$ respectively. If the angle at which they meet is $90^\circ$, then by the Pythagorean Theorem each side of the parallelogram has length $\sqrt{p^2+q^2}$. So in particular all the sides of the parallelogram are equal, that is, we have a rhombus.
Conversely, by a congruent triangles argument, if we have a rhombus, its diagonals meet at right angles. |
H: Computing rank using $3$-Descent
For an elliptic curve $E$ over $\Bbb{Q}$, we know from the proof of the Mordell-Weil theorem that the weak Mordell-Weil group of $E$ is $E(\Bbb{Q})/2E(\Bbb{Q})$. It is well known that
$$
0 \rightarrow E(\Bbb{Q})/2E(\Bbb{Q}) \rightarrow S^{(2)}(E/\Bbb{Q}) \rightarrow Ш(E/\Bbb{Q})[2] \rightarrow 0
$$
is an exact sequence which gives us a procedure to compute the generators for $E(\Bbb{Q})/2E(\Bbb{Q})$.
(Relatively) recently I found out that there is another way to compute the rank of $E$ using $3$-descent. I was wondering, since the natural structure of the weak Mordell-Weil group is $E(\Bbb{Q})/2E(\Bbb{Q})$, what is the motivation behind using $3$-descent? Also does $3$-descent similarly produce the generators of $E(\Bbb{Q})/2E(\Bbb{Q})$ or does it simply tell us the structure of $E(\Bbb{Q})$ via the Mordell-Weil theorem by giving us only the rank of $E$? Finally does it help us get around the issue of $Ш(E/\Bbb{Q})$ containing an element that is infinitely $2$-divisible?
AI: An $n$-descent will compute the $n$-Selmer group, which sits in a s.e.s.
$$0 \to E(\mathbb Q)/n E(\mathbb Q) \to S^{(n)}(E/\mathbb Q) \to Ш(E/\mathbb Q)[n] \to 0.$$
If you do a $2$-descent, it will give an upper bound on the size of $E(\mathbb Q)/2E(\mathbb Q).$ If you do a $3$-descent, it will give you an upper bound on the size of $E(\mathbb Q)/3 E(\mathbb Q).$
The advantage of one over the other will depend on the structure as a Galois module of $E(\mathbb Q)[n]$, and the structure of the $n$-torsion in Sha.
If $E$ contains a rational $2$-torsion point, then the $2$-Selmer group may be easier to compute than the $n$-Selmer groups for other $n$.
As an example of a descent at a different choice of $n$, note that
the elliptic curve $X_0(11)$ contains a rational $5$-torsion point, and Mazur does a $5$-descent to prove that it has no other rational points besides the five points generated by this $5$-torsion point. (See this answer for more details on this.)
The elliptic curve $X_0(17)$ has a rational $3$-torsion point, and for this Mazur does a $3$-descent. |
H: How many times do these curves intersect?
When the curves $y=\log_{10}x$ and $y=x-1$ are drawn in the $xy$ plane, how many times do they intersect?
To find intersection points eq.1 = eq. 2
$$\begin{align*}
\log_{10}x &= x-1\\
10^{x - 1} &= x \tag{a}
\end{align*}$$
The answer would be the number of solutions equation (a) has. One of them, namely $x=1$, is easy to make out. You could check the degree of the equation, but that too is unclear (at least to me).
Any suggestions on how to solve this (apart from plotting and checking)?
AI: Solving the equation $10^{x-1} = x$ exactly is difficult.
But you don't have to solve it exactly in order to figure out how many times the two graphs meet.
First, note that $y=\log_{10}x$ is only defined on the positive real numbers. So we can restrict ourselves to $(0,\infty)$.
Then, consider the function $f(x) = x-1-\log_{10}x$. The derivative of the function is
$$f'(x) = 1 - \frac{1}{\ln(10)x}.$$
The derivative is positive if $x\gt \frac{1}{\ln(10)}$, and negative if $x\lt \frac{1}{\ln(10)}$. That means that the function $f(x)$ is decreasing on $(0,\frac{1}{\ln(10)})$, and is increasing on $(\frac{1}{\ln 10},\infty)$.
As $x\to 0^+$, we have $f(x)\to\infty$ (since $\log_{10}(x)\to-\infty$). At $x=\frac{1}{\ln(10)}$, we have $f(x)\approx -0.2035$; and as $x\to\infty$, $f(x)\to\infty$. So the function crosses the $x$-axis somewhere between $0$ and $\frac{1}{\ln(10)}\approx 0.4343$, and then again somewhere after $\frac{1}{\ln(10)}$ (well, at $x=1$, to be precise). And that's it.
So there are exactly two intersections. |
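A numerical cross-check: bisection locates the root to the left of the minimum at $1/\ln 10$, while $x=1$ is the exact root to its right. A sketch (`bisect` is a hypothetical helper, not a library routine):

```python
import math

def f(x):
    return x - 1 - math.log10(x)

def bisect(lo, hi, tol=1e-12):
    """Bisection on [lo, hi], assuming f changes sign on the interval."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root1 = bisect(0.01, 0.4)     # f(0.01) > 0 and f(0.4) < 0, so a sign change
root2 = 1.0                   # exact: f(1) = 1 - 1 - 0 = 0
assert abs(f(root1)) < 1e-9
assert f(root2) == 0.0
assert 0 < root1 < 1 / math.log(10) < root2   # one root on each side of the minimum
```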
H: Derive a Laurent series for the function $2z/(z+j)$
First of all, I apologize for the non-mathematical notation. I've only ever hung around Stack Overflow, and never learnt how to type mathematical notation. It would be great if someone could teach me. The good news is my question isn't too long.
I'm a beginner learning Taylor and Laurent series. I know how to find the Laurent series for simple functions such as $1/(z+5)$ by rearranging it into similar form as the geometric series $1/(1-z)$.
However, I came across this question,
Derive a Laurent series for $f(z)=\dfrac{2z}{z+j}$ about the centre $z=-j$.
I tried rearranging it to $f(z)=1/((1/z)(1+(j/z)))$ but it doesn't seem to give me the right answer.
Maybe I'm not understanding something, or maybe I'm just making a silly mathematic error. But any help would be much appreciated.
Thanks in advance
AI: Recall that the Laurent series for $f(z)$ centered around $-j$ is written as $$f(z) = \cdots + \dfrac{a_{-2}}{(z+j)^2} + \dfrac{a_{-1}}{(z+j)} + a_0 + a_1(z+j) + a_2 (z+j)^2 + \cdots$$
In our case, we are given $f(z) = \dfrac{2z}{z+j}$.
\begin{align}
f(z) &= \dfrac{2z}{z+j} = \dfrac{2z+2j - 2j}{z+j} & \text{(Adding and subtracting $2j$ in the numerator)}\\
& = \dfrac{2z+2j}{z+j} - \dfrac{2j}{z+j}\\
& = 2 \dfrac{z+j}{z+j} - \dfrac{2j}{z+j}\\
& = 2 - \dfrac{2j}{z+j}
\end{align}
The above is the Laurent series centered around $-j$ with $$a_n = \begin{cases} -2j & n=-1\\ 2 & n=0\\ 0 & n \in \mathbb{Z} \backslash\{0,-1\} \end{cases}$$ |
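Since the series terminates, the claim is easy to verify numerically: the two-term expansion reproduces $f$ exactly away from $z=-j$. A quick sketch (using Python's `1j` for $j$):

```python
j = 1j   # the question uses j for the imaginary unit (engineering convention)

def f(z):
    return 2 * z / (z + j)

def laurent(z):
    # Two-term Laurent series about z = -j: a_{-1} = -2j, a_0 = 2.
    return 2 - 2 * j / (z + j)

for z in [0.5 + 0.25j, -2j + 0.1, 3.0, 1 + 1j]:   # sample points with z != -j
    assert abs(f(z) - laurent(z)) < 1e-12
```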
H: Hyperplane in projective space and existence of point
This is problem $2.11$ page $133$ of Kenji Ueno's book.
Consider an irreducible hypersurface $V(F)$ where we assume that the homogeneous polynomial of degree $d$ satisfies the conditions $F(0,x_{1},..,x_{n}) \neq 0$ and $F(1,0,...0) \neq 0$.
Let $P=(1:0:...:0)$. For a point $Q \in V(F)$ let $R(Q)$ be the intersection of the line $PQ$ with the hyperplane $H_{\infty}: x_{0}=0$. Let $\phi_{P}: Q \in V(F) \rightarrow R(Q) \in H_{\infty}$.
Prove that for a point $R \in H_{\infty}$ there is a point $Q \in V(F)$ such that $R(Q)=R$.
I'm not sure how to write this. I tried looking at an example, say we are in $\mathbb{P}^{2}$ and $F(x,y,z)=xz+y^{2}$ then $\phi_{P}$ is the map that sends $(x:y:z)$ to $(0:y:z)$.
So suppose we have a point $(0:b:c) \in H_{\infty}$, we want to find a point $(u:v:w) \in V(xz+y^{2})$ such that $\phi_{P}(u:v:w)=(0:b:c)$. Now take $b=1,c=3$.
This implies there is some nonzero $\lambda$ such that $v=\lambda b$ and $w= \lambda c$. Since $b=1$ and $c=3$ then $v=\lambda$ and $w=3\lambda$ so we need to solve $uw+v^{2}=0$. Plugging these values yields:
$u(3\lambda)+\lambda^{2}=0$
So take $u=(-1/3)\lambda$. Thus setting $\lambda=-3$ gives $u=1$, $v=-3$, $w=-9$.
Then $(u:v:w) \in V(F)$ and the map sends this point to $(0:-3:-9)=(0:1:3)$ as desired.
How can we write this in a general way?
AI: As Matt points out, your example doesn't work because $P\in V(F)$.
So I suggest modifying it slightly by taking $F(x,y,z)=x^2+yz$, while $H$ is still the line $x=0$.
The morphism we are interested in is $\phi:V(F)\to H:(x:y:z)\mapsto (0:y:z)$.
Now, if you fix $R=(0:b:c)\in H$, the points $Q\in V(F)$ such that $\phi(Q)=R$ are those $(x:b:c)$ with $x^2+bc=0$.
If the base field $k$ is algebraically closed
the solutions are given by the formula $x=\pm \sqrt {-bc}$.
Supposing $char.k\neq 2$, we get :
a) Two distinct points $Q_i=(\pm \sqrt {-bc}:b:c)\in V(F) $ with $\phi(Q_i)=R$ in case $b,c\neq0$
b') A unique point $Q'=(0:0:1)\in V(F)$ with $\phi(Q')=R'=(0:0:1)$.
And in this case the line $\overline {PQ'}$ is tangent to $V(F)$ at $Q'=R'$.
b'') A unique point $Q''=(0:1:0)\in V(F)$ with $\phi(Q'')=R''=(0:1:0)$.
And in this case the line $\overline {PQ''}$ is tangent to $V(F)$ at $Q''=R''$.
A beautiful picture emerges from this: there are two tangents passing through $P$ to the conic $V(F)$ and the line joining the two contact points of these tangents with the conic is the line $x=0$ onto which you are projecting from $P$.
Do yourself a favor and draw that pleasant picture! |
H: Apply Cauchy-Riemann equations on $f(z)=z+|z|$?
I am trying to check if the function $f(z)=z+|z|$ is analytic by using the Cauchy-Riemann equation.
I made
$z = x +jy$
and therefore
$$f(z)= (x + jy) + \sqrt{x^2 + y^2}$$
put into $f(z) = u+ jv$ form:
$$f(z)= x + \sqrt{x^2 + y^2} + jy$$
where
$u = x + \sqrt{x^2 + y^2}$
and that
$v = y$
Now I need to apply the Cauchy-Riemann equation, but don't know how would I go about doing that.
Any help would be much appreciated.
AI: The Cauchy-Riemann equations are
\begin{align}
\dfrac{\partial u}{\partial x} & = \dfrac{\partial v}{\partial y}\\
\dfrac{\partial v}{\partial x} &= -\dfrac{\partial u}{\partial y}
\end{align}
In your case, $u(x,y) = x + \sqrt{x^2+y^2}$ and $v(x,y) = y$. Assuming $(x,y) \neq (0,0)$, the partial derivatives are
\begin{align}
\dfrac{\partial u}{\partial x} & = 1 + \dfrac{x}{\sqrt{x^2+y^2}}\\
\dfrac{\partial v}{\partial x} & = 0\\
\dfrac{\partial u}{\partial y} & = \dfrac{y}{\sqrt{x^2+y^2}}\\
\dfrac{\partial v}{\partial y} & = 1
\end{align}
Hence, from the Cauchy-Riemann equations, we get that
$$1 + \dfrac{x}{\sqrt{x^2+y^2}} = 1 \implies \dfrac{x}{\sqrt{x^2+y^2}} = 0$$
$$\dfrac{y}{\sqrt{x^2+y^2}} = 0$$
This has no solutions since $(x,y) \neq (0,0)$. Hence, the function is not differentiable on $\mathbb{C} \backslash \{(0,0)\}$. The only point we need to check whether it is differentiable is $(0,0)$. At this point, we can check for differentiability directly from the definition. You will find that it is also not differentiable at $(0,0)$. Hence, the function is nowhere analytic. |
H: Dirichlet's Test Remark in Apostol
Dirichlet's Test is theorem $10.17$ in Apostol's Calculus Vol. $1$.
The theorem itself says that if the partial sums of $\{a_n\}$ (which can be complex numbers, not just reals) form a bounded sequence and $\{b_n\}$ is a (monotone?) decreasing sequence converging to $0$, then $\sum a_n b_n$ converges.
The part of the proof I am stuck on says that, letting $A_n=\sum_{k=1}^{n} a_k$
"The series $\sum (b_k - b_{k+1})$ is a convergent telescoping series which dominates $\sum A_k(b_k - b_{k+1})$. This implies absolute convergence..."
How does this imply absolute convergence? Does it have to do with the fact that $\{b_n\}$ is decreasing? By decreasing, should I automatically think monotone?
AI: Note that the partial sums of $\{a_n\}$ are bounded means that $\lvert A_k \rvert \leq M$ for all $k$ and some $M > 0$. Hence, we have that
\begin{align}
\left \lvert \sum_{k \leq n} A_k(b_k - b_{k+1}) \right \rvert & \leq \sum_{k \leq n} \left(\left \lvert A_k(b_k - b_{k+1}) \right \rvert \right) & (\because \text{By triangle inequality})\\
&= \sum_{k \leq n} \left \lvert A_k \right \rvert \left \lvert (b_k - b_{k+1}) \right \rvert & \because \lvert z_1 z_2 \rvert = \lvert z_1 \rvert \lvert z_2 \rvert\\
& \leq \sum_{k \leq n} M \lvert(b_k - b_{k+1}) \rvert & (\because A_k \text{ is bounded by }M)\\
& = M \sum_{k \leq n} (b_k - b_{k+1}) & (\because \{b_n\}\text{ form a decreasing sequence})\\
& = M (b_1 - b_{n+1}) & (\because \text{By telescoping})\\
& \leq Mb_1 & (\because b_n \downarrow 0 \implies b_{n+1} \geq 0)
\end{align}
Hence, $\displaystyle \sum_{k \leq n} A_k(b_k - b_{k+1})$ converges absolutely. |
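The bound above can be illustrated numerically; the particular choices $a_k=(-1)^{k+1}$ (so $A_k\in\{0,1\}$ and $M=1$) and $b_k=1/k$ are my example, not Apostol's:

```python
import math

# b_k = 1/k decreases to 0 and a_k = (-1)^(k+1) has partial sums A_k in {0, 1},
# so M = 1; the partial sums of A_k (b_k - b_{k+1}) should stay within M * b_1.
N = 10_000
A = 0
total = 0.0
for k in range(1, N + 1):
    A += (-1) ** (k + 1)                 # A_k, always 0 or 1 here
    total += A * (1 / k - 1 / (k + 1))

assert abs(total) <= 1 * 1.0             # |sum| <= M * b_1 = 1
print(total)                             # approaches log(2) as N grows
```

Here the limit happens to be $\log 2$, since $\sum_{k\text{ odd}}\frac{1}{k(k+1)}=1-\frac12+\frac13-\cdots$, matching the Abel-summed value of $\sum a_kb_k$.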
H: A transform function from $(-\infty, \infty)$ to $(t_0, \infty)$?
I want to convert an integral from $(t_0, \infty)$ (or $(-\infty, t_0)$) range to $(-\infty, \infty)$ range by change of variable. What is the best transform function to do this - one that is simple, monotonic with $f(-\infty)=t_0$ and $f(\infty)=\infty$ (or $f(-\infty)=-\infty)$ and $f(\infty)=t_0$)?
I need this for an application of numerical quadrature with Gaussian weights.
Thanks.
AI: $ f ( x ) = t_0 + e^x\ $ is an example, or $g(x)=t_0 - e^{-x}$. |
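For instance, with $t=f(x)=t_0+e^x$ we have $dt=e^x\,dx$, so $\int_{t_0}^{\infty}g(t)\,dt=\int_{-\infty}^{\infty}g(t_0+e^x)e^x\,dx$. A quick numeric check on a truncated range, using a test integrand of my own choosing (not from the question) whose exact integral over $(t_0,\infty)$ is $1$:

```python
import math

# Substitution t = t0 + e^x maps (-oo, oo) onto (t0, oo), dt = e^x dx.
# Test integrand (my example): g(t) = exp(-(t - t0)), whose integral is 1.
t0 = 2.0

def g(t):
    return math.exp(-(t - t0))

def integrand(x):                     # g(f(x)) * f'(x) with f(x) = t0 + e^x
    return g(t0 + math.exp(x)) * math.exp(x)

a, b, N = -30.0, 5.0, 200_000         # truncation of the doubly infinite range
h = (b - a) / N
total = 0.5 * (integrand(a) + integrand(b))
for i in range(1, N):
    total += integrand(a + i * h)
total *= h                            # composite trapezoid rule
print(total)                          # close to 1
```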
H: calculate standard deviation from percentage of mean occuring.
I'm not a math person, although I find it quite interesting. I'm a programmer but I've got a math problem I'm trying to figure out.
Lets assume I'm trying to create a program that will predict at what mile marker a car will run out of gas. I can take the average gas mileage, size of the gas tank, road conditions etc and use those to predict how far I think it will go.
Most trials will be off by a varying amount depending on driving habits, wind, etc and I'm assuming the results will be normally distributed with my predicted value ideally being at the center as the mean. (as long as my program is calibrated correctly).
I can consider my program successful if the car stops at the mile marker I predict 20% of the time or more.
My first question is, how do I figure out the minimum standard deviation necessary to require that the car will stop at my predicted mile marker at least 20% of the time. I'm thinking If I can come up with a standard deviation, I can test sample data against this SD to see if my program is works.
My second question is harder, and probably doesn't seem practical, but this is only an example scenario. Lets say that I'm only interested in finding out what the last number of the mile marker is. So my program would be considered a success if it estimated the car to stop at the correct mile marker (e.g. mile marker 115), but I would also get credit for factors of 10 in either direction (e.g. 105, 135). So I need to know the standard deviation that would allow the total possibility that the car stops at any one of my mile markers to be at least 20%.
Or.. am I thinking about this incorrectly?
Thanks guys, I know the reasoning is vague, esp. the last part, but I have to keep my next killer app a secret!
-Jeff
AI: $$\Phi^{-1}(0.6) - \Phi^{-1}(0.4) \approx 0.50669$$ so if the distribution is indeed normal then you will not be able to achieve your 20% target of hitting a number rounded to a particular integer if the standard deviation is more than the reciprocal of this i.e. $2$ or above.
Multiply this by $21$ if your target interval is twenty-one times as wide. |
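The quantile computation can be reproduced with the Python standard library:

```python
from statistics import NormalDist

# Reproduce the quoted quantile gap for the standard normal distribution.
gap = NormalDist().inv_cdf(0.6) - NormalDist().inv_cdf(0.4)
print(gap)        # about 0.50669
print(1 / gap)    # about 1.97, the standard-deviation threshold quoted above
```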
H: Evaluate the series: $ \sum_{k=1}^{\infty}\frac{1}{k(k+1)^2k!}$
Evaluate the series:
$$ \sum_{k=1}^{\infty}\frac{1}{k(k+1)^2k!}$$
AI: Partial fraction decomposition gives
$$\frac{1}{k(k+1)^2}=\left(\frac{1}{k}-\frac{1}{k+1}\right)\frac{1}{k+1}=\frac{1}{k}-\frac{1}{k+1}-\frac{1}{(k+1)^2}$$
Hence this series is
$$\sum_{k=1}^\infty\left(\frac{1}{k}-\frac{1}{k+1}-\frac{1}{(k+1)^2}\right)\frac{1}{k!}$$
$$=\left(\sum_{k=1}^\infty\frac{1}{k \cdot k!}\right)-\left(\sum_{k=1}^\infty\frac{1}{(k+1)!}\right)-\left(\sum_{r=2}^\infty\frac{1}{r\cdot r!}\right).$$
Notice how in the third sum we set $r=k+1$ so $(k+1)^2k!=(k+1)\cdot(k+1)!=r\cdot r!$. The middle term is clearly $e-2$, and the difference between the outside series is $\frac{1}{1\cdot 1!}$, hence we obtain $3-e$. |
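A quick numeric confirmation of the closed form:

```python
import math

# Partial sums converge extremely fast because of the factorial in the denominator.
total = sum(1 / (k * (k + 1) ** 2 * math.factorial(k)) for k in range(1, 40))
print(total, 3 - math.e)   # the two values agree to machine precision
```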
H: Is $K=K∩G_\alpha N=G_\alpha$ an error in this context?
There is a problem in Problems in Group Theory by J.D.Dixon :
2.51 If a permutation group $G$ contains a minimal normal subgroup $N$ which is both transitive and abelian, then $G$ is primitive.
He answered it amazingly:
Let $G_\alpha $ be a stabilizer of $G$. It is sufficient to show that $G_\alpha $ is a maximal subgroup of $G$. Let us suppose that, on the contrary, $G_\alpha $ is not a maximal subgroup of $G$. Then, there is a proper subgroup $K$ of $G$ properly containing $G_\alpha $. Since $K=K∩G_\alpha N=G_\alpha$ some $x≠1$ in $N$ lies in $K$. Therefore $K∩N = M$ is a nontrivial...
Is it possible we have an errata in $K=K∩G_\alpha N=G_\alpha$ cause of wrong typing or printing? Clearly, there would be an inconsistence with his assumption, and if so, which ones of the following would be the correct one:
$K⊇K∩G_\alpha N⊇G_\alpha$, or $K=K∩G_\alpha N⊇G_\alpha$
I will be so pleased if someone points the right one. I couldn’t find an erratum for this great book.
AI: It is well known that in this case $G=G_\alpha N$, so, clearly, $K=K\cap G=K\cap G_\alpha N$. The second part of the equality is clearly wrong, or we would have had a contradiction alredy. What should be there (and what is really used to say that there exist a $x\neq 1$ such that $x\in N\cap K$) is $K\cap G_\alpha N\supsetneqq G_\alpha$. (or, simply, $K\cap G_\alpha N\neq G_\alpha$ since we already know that $G_\alpha\subset K$) |
H: Elementary Probability Questions
Toss a coin three times, so event space $\Omega=\{HHH,HHT,HTH,HTT,THH,THT,TTH,TTT\}$.
We win $\$1$ if we flip a Head and lose $\$1$ for a Tail. Let $\mathbb{P}(H) = p$ and $\mathbb{P}(T) = q$. The change in our wealth after flip $i$ is the r.v.
$$X_i = \cases{+1 \text{ if }H \\ -1 \text{ if } T}$$
Our wealth after turn $i$ is:
$$ S_i = S_0 + X_1 + \dots X_i $$
There are three questions (these are not homework problems, rather revision) and I have some questions about their solutions.
Find $\mathbb{P}(S_3|S_1)$
This is the probability of $S_3$ occuring given that $S_1$ occurs, and by definition:
$$ \mathbb{P}(S_3|S_1) = \frac{\mathbb{P}(S_1 \cap S_3)}{\mathbb{P}(S_1)} $$
I think the answer should be $\frac{1}{2}$ thinking of the $H/T$ outcome as paths on a binary tree. Can someone provide a more algebraic solution?
Find $\mathbb{E}(S_3|S_1)$
We interpret this as our expected wealth after $3$ flips given $S_1$. This is just: $S_1 + p^2(2) + 2pq(0) - q^2(2) = S_1$ iff $p=q$. I think this is okay.
Given that $F_0 = \{\phi,\Omega\}$ what are $F_1, F_2$ and $F_3$ where $F_i$ is the smallest event space that we can identify from complete knowledge of $F_j$ for $1\leq j\leq i$.
I am unfamiliar with filtrations, and have only a vague sense of what they actually are. So is $F_1$ is the event space we can identify from complete knowledge of the first coin flip? What does it mean to identify an event space from complete knowledge?
AI: $S_1$ and $S_3$ are random variables rather than events, so when reading $\Pr(S_3|S_1)$ in your first question you may want to say something like (assuming a fair coin, $p=q=\frac12$) $$\Pr(S_3=s+2|S_1=s) = \frac14$$ $$\Pr(S_3=s|S_1=s) = \frac12$$ $$\Pr(S_3=s-2|S_1=s) = \frac14.$$
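These conditional probabilities (stated here for a fair coin, $p=q=\frac12$) can be checked by enumerating the last two flips, which are all that matter once $S_1$ is given:

```python
from itertools import product

# Given S_1, only flips 2 and 3 matter; enumerate them for a fair coin.
counts = {}
for f2, f3 in product([+1, -1], repeat=2):
    d = f2 + f3                       # S_3 - S_1
    counts[d] = counts.get(d, 0) + 1
probs = {d: c / 4 for d, c in counts.items()}
print(probs)                          # {2: 0.25, 0: 0.5, -2: 0.25}
```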
H: $L^\infty $ bound in terms of given data.
Let $\Omega \subset R^n$ be bounded and open, $u\in C^2(\Omega)\cap C(\bar \Omega)$ be a solution of $-\Delta u=f$ in $\Omega$ , $u=0$ on $\partial \Omega$.
Prove that there exists a constant $C$, depending only on $n$ and $diam(\Omega )$ such that
$||u||_{L^\infty} \le C ||f||_{L^\infty}$ .
I have got some idea like comparing $u$ with parabolas and using maximum principle or so .
Some how i am not able do anything on this problem .
Any kind of help or solution is appreciated.
Thanks a lot.
AI: Your basic idea is correct.
Let $M = \|f\|_\infty$, and let $x_0\in\Omega$. The functions
$$ \tilde{u}^\pm(x) = u(x) \pm \frac{M}{2n} |x - x_0|^2 $$
are seen to have the property that
$$ -\triangle \tilde{u}^+ = -\triangle u - M = f - M \leq 0 $$
is subharmonic and
$$ -\triangle \tilde{u}^- = - \triangle u + M = f + M \geq 0 $$
is superharmonic.
So by the maximal/minimal principles for sub/super harmonic functions you have that
$$ u(x) \leq \tilde{u}^+(x) \leq \sup_{\partial\Omega} \tilde{u}^+ \leq \frac{M}{2n} \mathrm{diam}(\Omega)^2 $$
and
$$ u(x) \geq \tilde{u}^-(x) \geq \inf_{\partial\Omega}\tilde{u}^- \geq - \frac{M}{2n} \mathrm{diam}(\Omega)^2 $$
And so you get
$$ \|u\|_\infty \leq \frac{\mathrm{diam}(\Omega)^2}{2n} \|f\|_\infty $$
and the constant indeed depends only on the dimension and the diameter of the set $\Omega$. |
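A one-dimensional numeric illustration of the final bound (the discretization details below are my own choices, not part of the proof): for $n=1$, $\Omega=(0,1)$ and $f\equiv 1$, the exact solution is $u(x)=x(1-x)/2$ with $\|u\|_\infty=1/8$, comfortably below $\frac{\mathrm{diam}(\Omega)^2}{2n}\|f\|_\infty=\frac12$.

```python
# Solve -u'' = f on (0,1) with u(0) = u(1) = 0 by finite differences,
# using the Thomas algorithm for the tridiagonal system, then check
# ||u||_oo <= diam(Omega)^2 / (2n) * ||f||_oo = 1/2 * ||f||_oo.
N = 1000
h = 1.0 / N
f = [1.0] * (N - 1)                   # f = 1, so ||f||_oo = 1

b = [2.0 / h ** 2] * (N - 1)          # main diagonal of the FD matrix
a = c = -1.0 / h ** 2                 # constant sub/super diagonals
cp = [0.0] * (N - 1)
dp = [0.0] * (N - 1)
cp[0] = c / b[0]
dp[0] = f[0] / b[0]
for i in range(1, N - 1):             # forward sweep
    m = b[i] - a * cp[i - 1]
    cp[i] = c / m
    dp[i] = (f[i] - a * dp[i - 1]) / m
u = [0.0] * (N - 1)
u[-1] = dp[-1]
for i in range(N - 3, -1, -1):        # back substitution
    u[i] = dp[i] - cp[i] * u[i + 1]

umax = max(abs(v) for v in u)
print(umax)                           # about 1/8, well below 1/2
assert umax <= 0.5 * max(abs(v) for v in f)
```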
H: Reference request in number theory for an analyst.
I am a confirmed mathochist. My background is in analysis, and fairly traditional analysis at that; mainly harmonic functions, subharmonic functions and boundary behaviour of functions, but I have for many years had an interest in number theory (who hasn't?) without ever having the time to indulge this interest very much.
Having recently retired from teaching, I now do have the time, and would like to look more deeply into a branch of number theory in which my previous experience might still be useful, and in particular, I would be interested to find out more about the interplay between elliptic curves, complex multiplication, modular groups etc.
I am pretty confident in my background with respect to complex analysis, and I have a working knowledge of the basics of p-adic numbers, but my algebra background is much, much weaker: just what I can remember from courses many years ago in groups, rings, fields and Galois Theory, and absolutely no knowledge of the machinery of homology/cohomology, and very little of algebraic geometry (I once read the first 2-3 chapters of Fulton before getting bored and going back to analysis!)
Alas, I now no longer have easy access to a good academic library, so I would need to purchase any text(s) needed, unless any good ones happen to be available online.
My request would then be this:
What text(s) would you recommend for someone who wants to find out more about elliptic curves, complex multiplication and modular groups, bearing in mind that I am very unlikely to want to do any original research, and it is all "just for fun"?
Many thanks for your time!
AI: Have a look at Koblitz: Introduction to Elliptic Curves and Modular Forms, and also at Knapp: Elliptic Curves. I prefer the latter, since it is more in depth, but it is also more algebro-geometric. The former has a much stronger complex analysis slant, especially at the beginning, which might make the entry easier. So you could try reading Koblitz first, and then Knapp (there will be a lot of overlap, of course).
The most thorough text on elliptic curves is Silverman: Arithmetic of Elliptic Curves. But it is considerably more algebro-geometric than the above two, and it has very little material on modular forms. So I am not sure it is the right entry text for you.
None of these cover complex multiplication. For that, you could have a look at Silverman: Advanced Topics in the Arithmetic of Elliptic Curves. My feeling is that to appreciate the theory of complex multiplication, it would help to have seen class field theory beforehand. But Silverman does review the main results of class field theory, so perhaps you can dive straight in, after having worked through one of the basic texts above.
"Just for fun" is a great premise to start learning elliptic curves, since it really is great fun! |
H: Would nonmath students be able to understand this?
For a course, I am required to do a presentation. The topic could either be something mundane, like a career strategy report, or something more interesting, such as a controversial topic, or an exposition on something you find interesting. What I would like to do is to present math in a way that probably no one in the class, other than myself, has seen before. That is to say, math as a deeply conceptual subject that does not necessarily involve computation with literal numbers.
In order to illustrate what I mean by the above, I would present the following theorem: There are at least two kinds of infinite sets: Countable ones, and uncountable ones (of course I would define bijection and countable). I would present the diagonal argument, since it is elegant, ingenious, noncomputational, and short.
My question is whether or not the general public (nonmathematicians) would be able to understand the argument. Note, I would not be explicit about the axiom of choice, etc.
AI: My experience with explaining non-countable sets to non-mathematicians is rather weird. Unfortunately this is what my data suggest (I don't know if it is true, it just happened every time I tried): the idea of a set you cannot enumerate is too hard for some people, there is a certain threshold (as a function of ability in abstract thinking maybe?) below which the concept just slips away from their grasp. Usually you can tell very fast whether convincing them will be fruitful (maybe long and tedious, but doable), or if their mind rejects the thought as ridiculous and often unimportant (this frequently happened with practical individuals, deeply rooted on Earth as in "Dreams? Fantasies? What would I need that for?"). On the other hand, I haven't tried this on children, so hopefully they might behave differently.
I second Limitless' idea about showing how the rational numbers are countable: you could do it first and decide on proceeding while in class and seeing their reaction.
Also, there are other theorems that might rock the audience, just stating them
might be enough (it does depend on the audience, but I think it is worth trying). To give you some examples:
[meta] post on math.SE,
if you stir coffee, then there is a point in it which will return to its original position,
hairy ball theorem,
inscribed square problem,
voting paradox,
Goodstein's sequence and theorem (hard conceptually),
ham sandwich theorem,
fold and cut theorem.
Good luck! |
H: Prove a group generated by two involutions is dihedral
Prove a finite group generated by two involutions is dihedral
Is my following argument correct?
Let $G=\langle x,y\rangle$ be a group generated by involutions $x,y$. Let $n=\mathrm{ord}(xy)$ to get a presentation $G=\langle x,y\mid x^2=y^2=(xy)^n=1\rangle $ so G is dihedral of order $2n$ ?
Further note: I realise now my argument is not sufficient as it remains to show $G$ has no other relations.
I just found an idea from a reference which claims "...So $G$ must have a presentation of the form $G=\langle x,y\mid x^2=y^2=(xy)^m=1\rangle $, then one has to show $m=n$..." in which I do not understand why $G$ has exactly a presentation of such form (the presentation inovlves $m$)? That reference also showed $|\langle x,y\rangle |=2n$ which directly led to the conclusion: $m=n$
AI: If $G$ is finite and has generators $x,y$ of order 2, then the elements of $G$ are $x,xy,xyx,xyxy,xyxyx,\dots$ and $y,yx,yxy,yxyx,yxyxy,\dots$ and as soon as you know the first term in those lists to give you the identity element, you're done. It can't be an element like $xyxyx$, because if that's the identity then you multiply left and right by $x$ to find $yxy$ is the identity, and you multiply left and right by $y$ to find $x$ is the identity. So the defining relation must be $(xy)^m=1$ for some positive integer $m$ (note that $(yx)^m=1$ if and only if $(xy)^m=1$).
So your presentation is $$\langle x,y\mid x^2,y^2,(xy)^m\rangle$$ and you seem happy to accept that as dihedral. |
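The argument can be exercised on a concrete case; the particular involutions below ($x=(1\,2)$, $y=(2\,3)$ in $S_3$, written as tuples with $p[i]$ the image of $i$) are my example, not from the question:

```python
# Generate the closure of two involutions and compare |G| with 2 * ord(xy).
identity = (0, 1, 2)
x = (1, 0, 2)                         # the transposition (1 2), 0-indexed
y = (0, 2, 1)                         # the transposition (2 3), 0-indexed

def compose(p, q):                    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

G = {identity, x, y}
while True:                           # close up under composition
    new = {compose(p, q) for p in G for q in G} - G
    if not new:
        break
    G |= new

xy = compose(x, y)
n, p = 1, xy                          # compute ord(xy)
while p != identity:
    p = compose(p, xy)
    n += 1
print(len(G), 2 * n)                  # 6 6: the dihedral group of order 2n
```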
H: Constant Radon-Nikodym derivative
Let $(\Omega, F, \mu)$ be a complete measure space, with $\mu(\Omega)=1$ and $\mu$ taking only the values 0 or 1. Let $\nu$ be a positive measure, $\sigma$-finite and absolutely continuous with respect to $\mu$. Show that then $f=\frac{d\nu}{d\mu}$ is constant a.e., equal to $\nu(\Omega)$, which is a finite number.
Hint:prove that there exists a value $\alpha\in \mathbb{R}$ such that $\mu(f^{-1}(\alpha))=1$.
AI: Hint: define $F(\alpha):=\mu(f^{-1}((-\infty,\alpha]))$ (the Radon-Nikodym derivative exists since $\nu$ is assumed $\sigma$-finite). It's an increasing function which takes its values in $\{0,1\}$. Define $\alpha_0:=\inf\{t\mid F(t)=1\}$.
H: Proving no anti-automorphisms exist in this matrix ring.
I've been playing with a certain matrix subring, but there is one step I am having trouble with.
Let $u=\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0\end{pmatrix}$ in $M_3(\mathbb{Q})$ and let $x=\begin{pmatrix} u & 0 \\ 0 & u^2\end{pmatrix}$ and $y=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, where $0$ and $1$ are the zero and identity matrices in $M_3(\mathbb{Q})$, so view $x$ and $y$ in $M_6(\mathbb{Q})$.
Let $R$ be the subring generated by $\mathbb{Q}$, $x$ and $y$. Then why is that if $x'$ is nilpotent in $R$, and $y'$ is such that $y'^2=0$, then $y'x'^2=0$?
Background work I've done: I've found that $x$ and $y$ satisfy the relations
$$
x^3=0=y^2\qquad yx=x^2y
$$
and that $\{1,x,x^2,y,xy,x^2y\}$ is a basis for $R$ as a $\mathbb{Q}$-vector space. The above fact in question will prove that $R$ has no anti-automorphisms, since $x^2y\neq 0$, but if $\varphi$ is some anti-automorphism, then
$$
\varphi(x^2y)=\varphi(y)\varphi(x^2)=\varphi(y)\varphi(x)^2=0
$$
since $\varphi(y)^2=0$ and $\varphi(x)$ is nilpotent as the image of the nilpotent element $x$. But then $\varphi$ is not injective.
AI: Suppose $y'=aI+bx+cx^2+dy+exy+fx^2y, x'=a_1I+b_1x+c_1x^2+d_1y+e_1xy+f_1x^2y$.
Then the fact that $x',y'$ are nilpotents implies $a=a_1=0$ (otherwise the trace is not zero). Then by the relations you found it is easy to see $b=0$ (by expanding $0=y'^2$).
Now you can compute that $y'x'=(cd_1+db_1)x^2y$ (all other products vanish by the relations), and therefore $y'x'^2=(y'x')x'=0$, since $x^2y\cdot x'=0$ term by term.
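Both the relations and the conclusion can be checked with the explicit $6\times 6$ matrices using exact integer arithmetic; the nonzero coefficient values below are arbitrary samples of my own, with $a=a_1=b=0$ as derived above:

```python
# Exact integer check of x^3 = y^2 = 0, yx = x^2 y, x^2 y != 0, and y'x'^2 = 0.
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def lin(*terms):                      # integer linear combination of matrices
    n = len(terms[0][1])
    return [[sum(c * M[i][j] for c, M in terms) for j in range(n)]
            for i in range(n)]

def is_zero(A):
    return all(v == 0 for row in A for v in row)

u = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
u2 = mm(u, u)
x = [[0] * 6 for _ in range(6)]
y = [[0] * 6 for _ in range(6)]
for i in range(3):
    for j in range(3):
        x[i][j] = u[i][j]             # top-left block of x is u
        x[i + 3][j + 3] = u2[i][j]    # bottom-right block of x is u^2
    y[i][i + 3] = 1                   # top-right block of y is the identity

x2, xy = mm(x, x), mm(x, y)
x2y = mm(x2, y)
assert is_zero(mm(x2, x))             # x^3 = 0
assert is_zero(mm(y, y))              # y^2 = 0
assert mm(y, x) == x2y                # yx = x^2 y
assert not is_zero(x2y)               # x^2 y != 0

yp = lin((2, x2), (3, y), (5, xy), (7, x2y))            # sample y'
xp = lin((1, x), (4, x2), (6, y), (8, xy), (9, x2y))    # sample x'
assert is_zero(mm(mm(yp, xp), xp))    # y' x'^2 = 0
print("relations verified")
```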
H: What is the mistake in my reasoning?
I am working on the following problem trying to use strategy in this problem. I am trying to simplify the proof by working with $v_{i}=0,i\not=1$ case. But the result looks very different from what I expected, so I want to ask if there is something wrong in my computation. To be more precise, I do not understand why $(1-n)$ term appeared in my computation and my result may diverge.
Let $v_{1},...,v_{n}$ be functions of class $\mathbb{C}^{1}(\mathbb{R}^{n}-\{0\})$ such that $$v_{i}(xt)=t^{1-n}v_{i}(x)$$ for $0\not=x\in \mathbb{R}^{n}$, $t>0,i=1,...n$. Show that, for $\phi\in C^{\infty}_{c}(\mathbb{R}^{n})$ we have $$\sum^{n}_{i=1}\langle \partial_{i}v_i,\phi\rangle=PV \int \phi\sum^{n}_{i=1}\partial_{i}v_{i}dx+\phi(0)\int_{\mathbb{S}^{n-1}}\sum^{n}_{i=1}\theta_{i}v_{i}(\theta)d\omega(\theta)$$
This obviously follows if we can prove the one dimension claim by letting $v_{2}..v_{n}=0$:
$$\langle \partial_{1}v_{1},\phi\rangle=PV \int \phi[\partial_{1}v_{1}]dx+\phi(0)\int_{\mathbb{S}^{n-1}}\theta_{1}v_{1}(\theta)d\omega(\theta)$$
And this follows if and only if we have
$$\lim_{\epsilon\rightarrow 0}\int^{|x|\le \epsilon}_{|x|\ge 0}[\partial_{1}v_{1}]\phi dx=\phi(0)\int_{\mathbb{S}^{n-1}}\theta_{1}v_{1}(\theta)d\omega(\theta)$$
We know $x=r\theta$, thus we have $$\frac{\partial r}{\partial x_{1}}=\frac{x_{1}}{r}=\theta_{1}$$ Therefore(in the following we no longer distinguish $v_{1}$ and $v$)
\begin{align*}
\frac{\partial}{\partial x_{1}}v(r\theta)
&=\frac{\partial}{\partial x_{1}}[r^{1-n}v(\theta)]\\
&=r^{1-n}\frac{\partial}{\partial x_{1}}v(\theta)+[\frac{\partial}{\partial x_{1}}r^{1-n}]v(\theta)\\
&=r^{1-n}\frac{\partial}{\partial x_{1}}v(\theta)+(1-n)r^{-n}\frac{\partial r}{\partial x_{1}}v(\theta)\\
&=r^{1-n}\frac{\partial v(\theta)}{\partial x_{1}}+(1-n)r^{-n}\frac{x_{1}}{r}v(\theta)\\
&=r^{1-n}(\frac{\partial v(\theta)}{\partial x_{1}}+(1-n)\frac{\theta_{1}}{r} v(\theta))
\end{align*}
Substituting $x=r\theta$ we now have:
\begin{align*}
\lim_{\epsilon\rightarrow 0}\int^{|x|\le \epsilon}_{|x|\ge 0}[\partial_{x_{1}}v]\phi dx
&=\lim_{\epsilon\rightarrow 0}\int^{\epsilon}_{0}\int_{\theta\in \mathbb{S}^{n-1}}r^{1-n}\left(\frac{\partial v(\theta)}{\partial x_{1}}+(1-n)\frac{\theta_{1}}{r} v(\theta)\right)\phi(r\theta)\,r^{n-1}\,dr\,ds\\
&=\lim_{\epsilon\rightarrow 0}\int^{\epsilon}_{0}\int_{\theta\in \mathbb{S}^{n-1}}(\frac{\partial v(\theta)}{\partial x_{1}}+(1-n)\frac{\theta_{1}}{r} v(\theta))\phi(r\theta)drds\\
&=\lim_{\epsilon\rightarrow 0}\int^{\epsilon}_{0}dr\int_{\theta\in \mathbb{S}^{n-1}}\frac{\partial v(\theta)}{\partial x_{1}}ds\\
&+(1-n)\lim_{\epsilon\rightarrow 0}\int^{\epsilon}_{0}\int_{\theta\in \mathbb{S}^{n-1}}\frac{\phi(r,\theta)}{r} v(\theta)\theta_{1}drds\\
&=(1-n)\lim_{\epsilon\rightarrow 0}\int^{\epsilon}_{0}\int_{\theta\in \mathbb{S}^{n-1}}\frac{\phi(r,\theta)}{r} v(\theta)\theta_{1}drds
\end{align*}
Let $\phi(r,\theta)$ have the Taylor expansion around 0: $$\phi(r,\theta)=\phi(0,\theta)+r\psi(r,\theta)$$ with $\psi=\frac{\int^{r}_{0} \phi'(t,\theta)dt}{r}$. Then the above integral become
\begin{align*}
(1-n)\lim_{\epsilon\rightarrow 0}\int^{\epsilon}_{0}\int_{\theta\in \mathbb{S}^{n-1}}[\frac{\phi(0)}{r}+\psi(r,\theta)]v(\theta)\theta_{1}drds
\end{align*}
Note this integral diverges if $\phi(0)\not=0$:
$$(1-n)\lim_{\epsilon\rightarrow 0}\int^{\epsilon}_{0}\int_{\theta\in \mathbb{S}^{n-1}}[\psi(r,\theta)]v(\theta)\theta_{1}drds+(1-n)\lim_{\epsilon\rightarrow 0}\int^{\epsilon}_{0}\frac{\phi(0)}{r}\int_{\theta\in \mathbb{S}^{n-1}}v(\theta)\theta_{1}drds$$
AI: The main problem with your derivation is that your assertion
$$ \lim_{\epsilon \to 0} \int_{0 < |x| < \epsilon} \partial_1 v_1(x) \phi(x) \mathrm{d}x = \langle \partial_1 v_1,\phi\rangle - PV \int \partial_1 v_1 \phi \mathrm{d}x $$
is not true. In other words, while away from the origin the distribution $\partial_1 v_1$ can be identified with the function $\partial_1 v_1$, at the origin (which you omit in your formulation of the expression of the LHS) the two are not equal (mainly because the derivative of $v_1$ is expected to scale like $r^{-n}$ near the origin and is not locally integrable at the origin, so you cannot expect the distributional derivative $\partial_1 v_1$ to be represented by a function), which is why you end up with an extra blow-up.
The correct derivation uses the definition of a derivative of a distribution, that is
$$ -\langle \partial_1 v_1, \phi\rangle = \langle v_1,\partial_1\phi\rangle = \int v_1 \partial_1 \phi \mathrm{d}x = \int_{|x|<\epsilon} v_1\partial_1\phi\mathrm{d}x + \int_{|x| > \epsilon} v_1 \partial_1 \phi \mathrm{d}x \tag{1}$$
Consider first the interior integral, using homogeneity of $v_1$ we have, in polar coordinates (using that $v_1$ is a locally integrable function, so the change of variables is allowed and doesn't do anything weird at the origin)
$$ \left|\int_{|x| < \epsilon} v_1 \partial_1\phi \mathrm{d}x\right| = \left| \int_{\mathbb{S}^{n-1}}\int_0^\epsilon v_1(\theta) r^{1-n} (\partial_1\phi)(r\theta) r^{n-1}\mathrm{d}r\mathrm{d\theta}\right| \leq \left|\mathbb{S}^{n-1}\right| \cdot \sup_{\mathbb{S}^{n-1}}\left|v_1(\theta)\right| \cdot \sup_{\mathbb{R}^n} \left|\partial_1\phi(x)\right|\cdot \epsilon $$
which means that the first term is $O(\epsilon)$ and vanishes as $\epsilon \to 0$.
The second term we integrate by parts (or, in other words, reverse Leibniz rule)
$$ \int_{|x|\geq \epsilon} \sum_{i = 1}^nv_i \partial_i\phi\mathrm{d}x = \int_{|x|\geq \epsilon} \sum_{i = 1}^n \partial_i\left( v_i\phi\right)\mathrm{d}x - \int_{|x|\geq \epsilon} \sum_{i = 1}^n (\partial_i v_i ) \phi\mathrm{d}x $$
and treat the first term on the RHS by the divergence theorem
$$ \int_{|x| \geq \epsilon} v_1 \partial_1 \phi\mathrm{d}x = - \int_{|x|\geq \epsilon} \partial_1 v_1 \phi \mathrm{d}x - \int_{\mathbb{S}^{n-1}} v_1(\epsilon\theta) \theta_1 \phi(\epsilon\theta) \epsilon^{n-1}\mathrm{d}\theta \tag{2}$$
the $-\theta_1$ we pick up when we apply divergence theorem which requires a dot product with the "outward pointing normal" against the vector $(v_i)$. Since $v_2 \ldots v_n = 0$ the dot product simplifies to $-\theta_1 v_1$, using that we are outside the disc of radius $\epsilon$ so the "outward pointing normal" actually points toward the origin.
Now the first term in the RHS of (2) is the term used to define the PV integral, so combining (1) and (2) and using that the first term on the RHS of (1) is $O(\epsilon)$ we have that your claim would be proved if we can show that
$$\lim_{\epsilon\to 0} \int_{\mathbb{S}^{n-1}} v_1(\epsilon\theta) \theta_1 \phi(\epsilon\theta) \epsilon^{n-1}\mathrm{d}\theta = \phi(0) \int_{\mathbb{S}^{n-1}} v_1(\theta)\theta_1 \mathrm{d}\theta $$
But this follows immediately by continuity of $\phi$ at the origin and the homogeneity of $v_1$. |
H: Sum set estimates for cofinite integer sets
I am interested in the sum set operation on subsets of the integers $\mathbb Z$:
$$A + B = \{ x + y | x \in A, y \in B\}$$
One readily arrives at the following cardinality bounds:
$$|A| + |B| - 1 \leq |A + B| \leq | A |\cdot | B |$$ for $A, B$ non empty and finite
What happens if $B$ is co-finite, i.e complement of $B$ is finite?
Any cardinality bounds of the complement of the set sum known?
Bye
AI: If $B$ is cofinite, it’s infinite, and in that case $|A+B|=\omega$ provided that $A\ne\varnothing$. It can’t be anything else.
Added: If $B$ is cofinite, let $F=\Bbb Z\setminus B$; then $n\in\Bbb Z\setminus(A+B)$ iff $(n-A)\cap B=\varnothing$ iff $n-A\subseteq F$; this is clearly possible only if $|A|\le|F|$, and $|\Bbb Z\setminus(A+B)|$ is the number of distinct translates of $-A$ that are subsets of $F$. If $|A|=1$, this is evidently $|F|$, and if $|A|>|F|$, it’s $0$. In general it’s at most $|F|-|A|+1$, e.g., when $A$ and $F$ are intervals of consecutive integers.
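The interval case in the last sentence can be checked on a small window of $\Bbb Z$; the concrete sets $A$ and $F$ below are my example intervals:

```python
# With A and F integer intervals, n is missing from A + B exactly when
# n - A is contained in F, and there are |F| - |A| + 1 such translates.
A = {0, 1}                            # sample interval (my choice)
F = set(range(0, 5))                  # F = Z \ B, here {0, ..., 4}
missing = [n for n in range(-20, 30)
           if all(n - a in F for a in A)]
print(missing)                        # [1, 2, 3, 4]
assert len(missing) == len(F) - len(A) + 1
```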
H: How to show that $\mathrm{ord}_m a = \mathrm{ord}_m \overline{a}$?
Let $a \in Z$ and $m \in N$ such that $\gcd(a,m)=1$. How to show that $\mathrm{ord}_m a = \mathrm{ord}_m \overline{a}$, where $\overline{a}$ is the inverse of a modulo m?
Hint: Solution starts as follows:
$1 \equiv (a \overline{a})^{ord_m a} \equiv a^{ord_m a} \overline{a}^{ord_m a}\pmod m$...
Problem: I don't understand why they don't just start with $1 \equiv a \overline{a}\pmod m$...
AI: HINT: $a^k\bar a^k=(a\bar a)^k$
Added: (That was written before you edited the question.) For any integer $k$ you have $$a^k\bar a^k=(a\bar a)^k=1^k=1\;.\tag{1}$$ Take $k=\operatorname{ord}_m(a)$: $(1)$ becomes $$\bar a^{\operatorname{ord}_ma}=a^{\operatorname{ord}_ma}\bar a^{\operatorname{ord}_ma}=1\;;\tag{2}$$ take $k=\operatorname{ord}_m\bar a$ instead, and it becomes $$a^{\operatorname{ord}_m\bar a}=a^{\operatorname{ord}_m\bar a}\bar a^{\operatorname{ord}_m\bar a}=1\;.\tag{3}$$
I expect that you know that if $a^n=1$, then $\operatorname{ord}_ma\mid n$, i.e., $n$ is a multiple of $\operatorname{ord}_ma$. (If not, that’s the first thing that you need to prove.) If you combine that fact with $(2)$ and $(3)$, it’s not hard to prove that $\operatorname{ord}_ma=\operatorname{ord}_m\bar a$. |
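A quick computational check of the statement; the sample pairs $(a,m)$ below are my own choices:

```python
from math import gcd

# Check ord_m(a) = ord_m(a_bar) for a few coprime pairs.
def order(a, m):
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

for a, m in [(2, 13), (3, 10), (5, 7), (7, 22)]:
    assert gcd(a, m) == 1
    abar = pow(a, -1, m)              # modular inverse (Python 3.8+)
    assert order(a, m) == order(abar, m)
    print(a, m, order(a, m))
```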
H: Unique minimal normal subgroup $\implies$ faithful irreducible representation.
I'm tasked with proving that if I have a finite group G with a unique minimal normal subgroup, and a field F with characteristic not dividing the order of G, then there exists a faithful irreducible F-representation.
I can be fairly certain that the proof is to use Maschke's Theorem, since all the elements are there, but that normal subgroup also makes me think of Clifford's Theorem. My first thought was to take the regular FG-module and decompose with Maschke's, but I'm not sure what to do with that. I would be very grateful for any hints you can provide.
AI: Suppose not. Then all the irreducible reps contain the unique minimal normal subgroup in their kernel. What does this tell you about the regular representation? |
H: Coordinate ring of general linear group
Let $n$ be a positive integer and let $k$ be an algebraically closed field. What is the coordinate ring of $GL(n,k)$ (the group of all invertible $n \times n$ matrices with entries in $k$)? Here we identify this set as a subset of $k^{n^{2}}$.
Would it suffice to say that the coordinate ring is the localization of $k[x_{11},x_{12},..,x_{nn}]$ at the determinant function? Is there a way to "simplify" this?
AI: Maybe you already know this, but I think the most natural way to simplify it (if you think taking quotient is simpler than taking localization) is to view $GL(n,k)$ as a subvariety in $\mathbb{A}^{n^2+1}$: Now view $det$ as a polynomial $D(t_1,...,t_{n^2})$ of $n^2$ variables, then the coordinate ring is $k[t_1,...,t_{n^2},y]/(yD-1)$. |
H: equation of lines goes through origin
suppose that we are require to write equation of lines,whose go through origin and distance from point $F(-4,3)$ to this line is $1cm$
first of all ,distance from point $F(x_0,y_0)$ to line $A*x+B*y+C=0$ is
$d={+,-}(A*x_0+B*y_0+c)/(\sqrt{(A^2+B^2)})$
because we have origin,we have $(x_0,y_0)=(0,0)$
equation of line is $y=k*x$ or $k*x-y=0$ if we put point $F(4,-3)$ ,i have got $K_1=-2/15*(6+\sqrt{6})$ and $k_2=2/15*(\sqrt{6}-6)$
or
$y_1=-2/15*(6+\sqrt{6})*x$
and
$y_2=2/15*(\sqrt{6}-6)*x$
is that correct or did i make some mistake?
AI: Line on the plane through the origin $\,\Longrightarrow mx+y=0\,$ , and we want the distance from the given point to be $\,1\,$: $$\frac{|-4m+3|}{\sqrt{m^2+1}}=1\Longrightarrow 15m^2-24m+8=0\Longrightarrow m_{1,2}=\frac{12\pm 2\sqrt{6}}{15}$$not very nice solutions but nevertheless, solutions.
Since I can't fully understand your solution (please use LaTeX to write mathematics!) I can't be sure whether you got them right...but the signs look fishy there. |
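Both roots can be verified numerically against the distance condition:

```python
import math

# Check that both roots of 15m^2 - 24m + 8 = 0 give lines mx + y = 0
# at distance 1 from the point (-4, 3).
dists = []
for m in [(12 + 2 * math.sqrt(6)) / 15, (12 - 2 * math.sqrt(6)) / 15]:
    dists.append(abs(m * (-4) + 3) / math.sqrt(m * m + 1))
print(dists)   # both values equal 1 up to rounding
assert all(abs(dd - 1) < 1e-12 for dd in dists)
```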
H: Derivative of a composition
Let $f$ and $g$ two differentiable functions on $]a, b[$. Then $f \circ g$ is differentiable on the same interval and we have the expression :
$$(f \circ g)' = g' \cdot f' \circ g$$
How do you prove this ?
AI: If you are familiar with o-notation, here is an alternative proof: Note that when $f$ is differentiable at $x$, then $$f(x+h)=f(x)+hf'(x)+o(h)\qquad\text{as } h\to0,$$
and this can be used as the definition of $f'(x)$. Now
$$\begin{align*} (f\circ g)(x+h)
&=f\bigl(g(x+h)\bigr)=f\bigl(g(x)+hg'(x)+o(h)\bigr)\\
&=f\bigl(g(x)\bigr)+\bigl(hg'(x)+o(h)\bigr)f'\bigl(g(x)\bigr)+o\bigl(hg'(x)+o(h)\bigr)\\
&=f\bigl(g(x)\bigr)+hg'(x)f'\bigl(g(x)\bigr)+o(h),\end{align*}$$
and the proof is complete.
If this makes no sense to you at present, come back to it after you have learned about o-notation, and you will appreciate it much more. |
H: How can I determine these 3 variables?
How can I determine the values of $x,y,z$ below?
$a,b,c$ - are given variables
$x,y,z$ - must be found
$$\begin{align*}
ax&=S_1\\
by&=S_2\\
cz&=S_3\\
S_1-(y+z)&>0\\
S_2-(x+z)&>0\\
S_3-(x+y)&>0
\end{align*}$$
AI: First of all, if the $S_i$ are known, you are done. If the $S_i$ are just placeholders we might as well get rid of them and obtain$$
\begin{align*}
ax&>(y+z)\\
by&>(x+z)\\
cz&>(x+y)
\end{align*}
$$
This system might or might not have solutions. This pretty much depends on $a,b,c$. If for example $a=b=c=2$, then summing the three inequalities gives $2(x+y+z)>2(x+y+z)$, if $a=b=c=3$ any triple $x=y=z>0$ is a solution.
However if we have a solution $(x,y,z)$, then all positive scalar multiples of this solution are also solutions. Hence we can reduce the question to the three cases $z=0$, $z=1$ and $z=-1$. Thereby we have reduced the problem to a 2-dimensional one.
In each of these three cases one is left with three linear inequalities in $x$ and $y$. Each of them can be expressed geometrically as a line and solutions lie on one distinct side of these lines (depending on the inequality sign). Thus the lines cuts out the set of solutions. In other words, we can now choose $x$ arbitrarily and obtain either the empty set or a well defined interval for the solution of $y$.
Edit: Concerning your example $a=2$, $b=3$, $c=7$, I really advise you to try it yourself. My answer provides all tools for the general solution and plugging in some numbers is the best way to understand it. But to give you a start: if $(x,y,z)$ is a solution with $z>1$, then $(x/z,y/z)$ lies in the triangle here. You will get such a solution for each positive $z$. |
H: Subset of $\mathbb{I}\cap [0,1]$ (irrationals in [0,1]) that is closed in $\mathbb{R}$ and has measure $\epsilon \in (0,1)$
Measure theory guarantees that every Lebesgue measurable set $E$ of finite measure has, for every $\epsilon>0$, a closed subset $F$ such that $m(E \backslash F)<\epsilon$.
But today I saw in some text that for every $\epsilon \in (0,1)$ there exists a subset of $\mathbb{I}\cap [0,1]$ (irrationals in $[0,1]$) that is closed in $\mathbb{R}$ and has Lebesgue measure $\epsilon$. Note the set is required to be closed in $\mathbb{R}$, not in $\mathbb{I}\cap [0,1]$. Can someone give me an example, or a proof that such a thing does not exist?
AI: In fact, we can choose a set which has measure exactly $\varepsilon$.
For a fixed $\delta>0$, consider the set $S_{\delta}:=\bigcup_{n\in\mathbb N}(q_n-2^{-n}\delta,q_n+\delta 2^{-n})$, where $\{q_n,n\in\Bbb N\}$ is an enumeration of the rationals of $[0,1]$. Then $S_{\delta}$ is open and dense in $[0,1]$, since it contains all the rationals of this interval. The map $f\colon\delta\mapsto \lambda(S_{\delta}\cap (0,1))$ is Lipschitz continuous. Indeed, if $\delta_1\leq\delta_2$, we have
\begin{align*}f(\delta_2)-f(\delta_1)&=\lambda(S_{\delta_2}\setminus S_{\delta_1})\\
&\leq \lambda\left((0,1)\cap \bigcup_{n=0}^{+\infty}(q_n-2^{-n}\delta_2,q_n+\delta_2 2^{-n})\setminus (q_n-2^{-n}\delta_1,q_n+\delta_1 2^{-n})\right)\\
&\leq \sum_{n=0}^{+\infty}\lambda((q_n-2^{-n}\delta_2,q_n+\delta_2 2^{-n})\setminus (q_n-2^{-n}\delta_1,q_n+\delta_1 2^{-n}))\\
&=2(\delta_2-\delta_1)\sum_{n=0}^{+\infty}2^{-n}.
\end{align*}
Now we use the intermediate value theorem to pick $\delta$ such that $\lambda(S_{\delta}\cap (0,1))=1-\varepsilon$, and we consider the complement in $[0,1]$ of $S_{\delta}$. |
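As a sanity check (an addition, not part of the original answer), one can approximate $S_\delta$ by finitely many of its intervals and compute the measure of the union by merging overlaps. The sketch below uses a simple denominator-by-denominator enumeration of the rationals (any enumeration works) and confirms the bound $\lambda(S_\delta\cap(0,1))\le 2\delta\sum_{n\ge 0}2^{-n}=4\delta$ underlying the Lipschitz estimate.

```python
from fractions import Fraction

def rationals_01(N):
    """First N rationals in [0,1], enumerated denominator by denominator."""
    out, seen, q = [], set(), 1
    while len(out) < N:
        for p in range(q + 1):
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                out.append(float(r))
                if len(out) == N:
                    break
        q += 1
    return out

def measure(intervals):
    """Lebesgue measure of a finite union of intervals, by sorting and merging."""
    total, cur_lo, cur_hi = 0.0, None, None
    for lo, hi in sorted(intervals):
        if cur_hi is None or lo > cur_hi:
            if cur_hi is not None:
                total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:
        total += cur_hi - cur_lo
    return total

def S_measure(delta, N=2000):
    """Measure of the union of the first N intervals of S_delta, clipped to [0,1]."""
    qs = rationals_01(N)
    ivs = [(max(0.0, q - 2.0**-n * delta), min(1.0, q + 2.0**-n * delta))
           for n, q in enumerate(qs)]
    return measure(ivs)

# Open, dense, yet of small measure: at most 4*delta.
for delta in (0.01, 0.05, 0.1):
    assert 0 < S_measure(delta) <= 4 * delta + 1e-12
```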
H: How many bits needed to store a number
How many bits are needed to store the number $55^{2002}$?
My answer is $2002\;\log_2(55)$; is it correct?
AI: The number of bits required to represent an integer $n$ is $\lfloor\log_2 n\rfloor+1$, so $55^{2002}$ will require $\lfloor 2002\; \log_2 55\rfloor+1$ bits, which is $11,575$ bits.
Added: For example, the $4$-bit integers are $8$ through $15$, whose logs base $2$ are all in the interval $[3,4)$. We have $\lfloor\log_2 n\rfloor=k$ if and only if $k\le\log_2 n<k+1$ if and only if $2^k\le n<2^{k+1}$, and that’s exactly the range of integers requiring $k+1$ bits. |
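Since Python integers have arbitrary precision, the count can be verified exactly against the closed form (a check added here, not in the original answer):

```python
from math import floor, log2

n = 55 ** 2002                 # exact, arbitrary-precision integer
bits = n.bit_length()          # number of bits in the binary representation

assert bits == floor(2002 * log2(55)) + 1 == 11575
```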
H: Evaluating Integral with Residue Theorem
The integral in question is
$$\int_{_C} \frac{z}{z^2+1}\,dz,$$ where $C$ is the path $|z-1| = 3.$
The two poles of $f(z)$, where $f(z)=\frac{z}{z^2+1}$, are $-j$ and $j$.
$${\rm Res}_{z=z_0}f(z)=\lim_{z\rightarrow z_0}(z-z_0)f(z)$$
For the first pole:
$${\rm Res}_{z=j}f(z)= \lim_{z\rightarrow j}(z-j)\frac{z}{z^2+1} \\ = \lim_{z\rightarrow j}\frac{(z-j)z}{(z+j)(z-j)}\\
=\lim_{z\rightarrow j}\frac{z}{(z+j)} =\frac{j}{(j+j)}$$
${\rm Res}_{z=j}f(z)= \frac{1}{2}$.
For the second pole:
$${\rm Res}_{z=-j}f(z)= \lim_{z\rightarrow -j}(z+j)\frac{z}{z^2+1} \\ = \lim_{z\rightarrow -j}\frac{(z+j)z}{(z+j)(z-j)}\\ = \lim_{z\rightarrow -j}\frac{z}{(z-j)}\\ = \frac{j}{(-j-j)}$$
${\rm Res}_{z=-j}f(z)= \frac{-1}{2}$.
Sum:
$${\rm Res}_{z=j}f(z)+ {\rm Res}_{z=-j}f(z)= \frac{1}{2}-\frac{1}{2} = 0$$
Now I have always been under the impression that when integrating inside a path, the only time when the result is 0 is when there are no pole in or on the path.
Am I mistaken, or have I made an error in the calculation? Or should I not be trying to use the Residue Theorem altogether?
Any help would be much appreciated.
AI: For the second pole:
$${\rm Res}_{z = -i} f(z) = \lim_{z \to -i} \frac{(z+i) z}{(z+i)(z-i)} = \lim_{z \to -i} \frac{z}{z-i} = \frac{-i}{-2i} = \frac{1}{2}$$
so in fact two poles contribute the same to the final result which is $2\pi i \cdot \left( \frac{1}{2} + \frac{1}{2} \right) = 2\pi i$. |
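One can corroborate this numerically by integrating along the contour directly; the sketch below (a midpoint rule on the parametrization $z(t)=1+3e^{it}$, added here rather than part of the original answer) recovers $2\pi i$.

```python
import cmath

def f(z):
    return z / (z * z + 1)

N = 4000
total = 0j
for k in range(N):
    t = 2 * cmath.pi * (k + 0.5) / N          # midpoints of [0, 2*pi]
    z = 1 + 3 * cmath.exp(1j * t)             # the circle |z - 1| = 3
    dz = 3j * cmath.exp(1j * t) * (2 * cmath.pi / N)
    total += f(z) * dz

assert abs(total - 2j * cmath.pi) < 1e-8      # matches 2*pi*i from the residues
```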
H: manifold structure on on a finite dimensional real vector space
I am reading Warner's Differentiable Manifolds and I do not get one example, which is:
Let $V$ be a finite dimensional real vector space. Then $V$ has a natural manifold structure. If $\{e_i\}$ is a basis then the elements of the dual basis $\{r_i\}$ are the coordinate functions of a global coordinate system on $V$.
I don't understand how "the elements of the dual basis $\{r_i\}$ are the coordinate functions of a global coordinate system on $V$." Could anyone explain that to me? Then how does such a global coordinate system uniquely determine a differentiable structure on $V$? And why is this structure independent of the choice of basis?
First of all, for a manifold structure I need each point to have an open neighborhood $U$ homeomorphic to some open subset of $\mathbb{R}^n$. Am I getting such notions here?
AI: The space $\mathbb{R}^n$ has coordinate functions $x_j:\mathbb{R}^n\to\mathbb{R}$, projection onto the $j^{th}$ axis. If $(\phi,U)$ is a coordinate system on a manifold $M$, then we get coordinate functions on $U$ by composing $\phi$ with the $x_j$.
Warner is just saying that by choosing a basis on a real vector space $V$, you induce a bijective linear map (hence homeomorphism) $A$ between $V$ and $\mathbb{R}^n$, and that homeomorphism is a global coordinate system with coordinate functions $x_j\circ A = r_j$. The open neighborhood about each point is the entire space $V$.
To see that the structure is independent of choice of basis (up to diffeomorphism), try the construction with a different basis; can you see a diffeomorphism between the two structures? |
H: find equation of triangle sides in cartesian system
It is known that one vertex of the triangle is located at the point $A(2,-4)$, and the equations of the angle bisectors at the two other vertices are given:
1. $x+y-2=0$
2. $x-3y-6=0$
We have to find the equations of the sides of the triangle.
I have found the point where these two lines intersect and got $(3,-1)$, but now I don't know how to get the coordinates of the $B$ and $C$ vertices. I know that a bisector cuts an angle into two equal parts, and I also know the angle bisector theorem, but how can I use it to find the coordinates?
AI: If you reflect the point $A$ about the bisector of the angle at $B$ you get a point somewhere on $BC$. You get another point on $BC$ by reflecting about the other bisector. Now you know two points on $BC$... |
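Carrying this out for the given data (a worked computation added here, assuming the bisector $x+y-2=0$ belongs to vertex $B$ and $x-3y-6=0$ to vertex $C$; the labels are arbitrary):

```python
def reflect(px, py, a, b, c):
    """Reflect the point (px, py) across the line a*x + b*y + c = 0."""
    d = (a * px + b * py + c) / (a * a + b * b)
    return px - 2 * a * d, py - 2 * b * d

def intersect(a1, b1, c1, a2, b2, c2):
    """Intersection of a1*x + b1*y + c1 = 0 and a2*x + b2*y + c2 = 0."""
    det = a1 * b2 - a2 * b1
    return (b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det

A = (2, -4)
P1 = reflect(*A, 1, 1, -2)     # reflection across x + y - 2 = 0, lies on BC
P2 = reflect(*A, 1, -3, -6)    # reflection across x - 3y - 6 = 0, lies on BC
assert max(abs(P1[0] - 6), abs(P1[1] - 0)) < 1e-12
assert max(abs(P2[0] - 0.4), abs(P2[1] - 0.8)) < 1e-12

# Both reflections satisfy x + 7y - 6 = 0, so that is the line BC.
for P in (P1, P2):
    assert abs(P[0] + 7 * P[1] - 6) < 1e-12

# The vertices are where BC meets the two bisectors:
B = intersect(1, 1, -2, 1, 7, -6)      # B = (4/3, 2/3)
C = intersect(1, -3, -6, 1, 7, -6)     # C = (6, 0)
assert abs(B[0] - 4/3) < 1e-12 and abs(B[1] - 2/3) < 1e-12
assert abs(C[0] - 6) < 1e-12 and abs(C[1] - 0) < 1e-12

# Sides through A: AB is 7x + y - 10 = 0 and AC is x - y - 6 = 0.
assert abs(7 * B[0] + B[1] - 10) < 1e-9 and abs(7 * A[0] + A[1] - 10) < 1e-9
assert abs(C[0] - C[1] - 6) < 1e-9 and abs(A[0] - A[1] - 6) < 1e-9
```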
H: Weak and pointwise convergence in a $L^2$ space
Let $I$ be a measure space (typically an interval of $\Bbb R$ with the Lebesgue measure), and let $(f_n)_n$ be a sequence of functions in $L^2(I)$.
Assume that the sequence $(f_n)$ converges pointwise and weakly. How can one prove that the pointwise limit and the weak limit are the same?
AI: Here's a functional analytic approach:
A weakly convergent sequence in a Hilbert space $H$ is bounded, and by the Banach-Saks theorem has a subsequence whose Cesàro averages converge strongly in $H$ to the same limit.
Almost sure convergence is preserved by taking subsequences and Cesàro averages.
So, without loss of generality you may assume that your weakly convergent sequence is actually strongly convergent.
Both strong $L^2$ convergence and almost sure convergence imply convergence locally in measure, so you only need to show that such limits are unique, which is easy. |
H: $k$th power of ideal of of germs
We denote the set of germs at $m$ by $\bar{F_m}$. A germ $f$ has a well-defined value $f(m)$ at $m$, namely the value at $m$ of any representative of the germ. Let $F_m\subseteq \bar{F_m}$ be the set of germs which vanish at $m$. Then $F_m$ is an ideal of $\bar{F_m}$, and we let $F_m^k$ denote its $k$th power. Could anyone tell me what the elements of this ideal $F_m^k$ look like? It is said that they are all finite linear combinations of $k$-fold products of elements of $F_m$, but I don't get this. It is also said that these form a chain $\bar{F_m}\supsetneq F_m\supsetneq F_m^2\supsetneq\dots \supsetneq$
AI: Let $R$ be a commutative ring and $I\subseteq R$ an ideal of $R$. The $k$th power $I^k$ of that ideal is defined to be the set of all elements of $R$ that can be written as finite sums of elements of the form $a_1\cdot a_2\cdot\ldots\cdot a_k$ with $a_i\in I$. One can easily check that $I^k$ is itself an ideal of $R$. It is then also clear that for all $k\in\mathbb{N}$ one has $I^{k+1}\subseteq I^k$, while in general one does not have $I^{k+1}\neq I^k$. For the latter one needs more assumptions concerning the ring $R$ or the ideal $I$.
A simple example in the case you are considering is this one: on a smooth manifold there exist functions $f$ that are smooth in a neighborhood of $m$ and possess a simple zero at $m$. The germ defined by such a function then lies in $F_m$ but not in any $F_m^k$, $k>1$. Similarly for the germ defined by $f^2$, etc.
H: Is there a mathematical symbol for "For every element"?
Is there a mathematical symbol that means "for every element"? I want it to function similarly to an iteration, say a for loop.
AI: From your comments, it seems that you want to take the elements of your index set in a specific order, as in an iteration.
There is no single symbol for that; the general shorthand only works when the index set has order type $\omega$, that is, it is indexed by $\mathbb{N}$, in which case the standard way of signaling this would be "for $i=1,2,3,\ldots$". |
H: Estimating missing values in a dataset and averaging values
I have a statistical maths problem where I have a rather large dataset consisting of a timestamp (rows) and a quantity of, let's say, detections per day (columns for each location). Currently I have two challenges with the data:
Some of the data is missing, and I need to replace the missing values on some days or periods by a certain estimation based on earlier values. Is there a way to do this in Libreoffice/Matlab/Octave etc. tool, or what could be the best mathematical method to look into?
The second issue is that the data in a visualised form seems to follow a certain pattern, but the number of detections increases gradually over time. Is it possible to average the whole dataset so that instead of increasing over time, the dataset would be more stagnant but still show the same patterns periodically? This could be referred to as histogram equalization, as in image processing.
It would be great to have just possible statistical methods and/or tools to push to the right direction, so that I could do more research on the area and on the dataset on my own. :)
AI: There are many ways to impute data. But most do not account for the added uncertainty due to the imputation. Multiple Imputation is a method that imputes each missing value several times and accounts for the uncertainty due to the imputation method. It is available in SAS and there is a special purpose software package for doing this that is called Solas which is licensed by Statistical Solutions. I don't know what is available in Matlab etc. |
H: independence of random variables for finite subfamilies
Let $X$ be a random variable and $\{Y_j\}, j\in J$ a family of random variables, where $J$ is an index set, possibly uncountable. My question is: if $X$ is independent of every finite subfamily of $\{Y_j\}$, i.e. for every finite $I \subset J$ the family $\{Y_j;j\in I\}$ and $X$ are independent, can we conclude that $X$ is independent of the whole family $\{Y_j; j\in J\}$?
cheers
math
AI: Yes, as
\[ \mathcal C := \left\{\bigcap_{j \in I} Y_j^{-1}[B_j] \biggm| I \subseteq J\text{ finite}, B_j \subseteq \mathbb R\text{ Borel} \right\} \]
is a $\cap$-stable generator of $\sigma(Y_j, j \in J)$ and for $A \in \sigma(X)$ and $C \in \mathcal C$ we have independence of $C$ and $A$. |
H: How to solve this quadratic congruence equation
How to solve $x^2+x+1\equiv 0\pmod {11}$ ?
I know that in some equations like $ax\equiv b\pmod d$ if $(a,d)=1$ then the equation has one and only one solution $x\equiv ba^{\phi(d)-1}\pmod d$. Any help will be appreciated. ;)
AI: You use the quadratic formula!
No, really. But you need to interpret the terms correctly: rather than "dividing" by $2a$ (here, $2$) you need to multiply by a number $r$ such that $2r\equiv 1\pmod{11}$ (namely, $r=6$). And rather than trying to find a square root, here $\sqrt{b^2-4ac} = \sqrt{1-4} = \sqrt{-3}$, you want to find integers $y$ such that $y^2\equiv -3\pmod{11}$. It may be impossible to do so, but if you can find them, then plugging them into the quadratic formula will give you a solution; and if you cannot find them, then there are no solutions.
Now, as it happens, $-3$ is not a square modulo $11$; so there are no solutions to $x^2+x+1\equiv 0\pmod{11}$. (You can find out if $-3$ is a square by using quadratic reciprocity: we have that $-1$ is not a square modulo $11$, since $11\equiv 3\pmod{4}$. And since both $3$ and $11$ are congruent to $3$ modulo $4$, we have
$$\left(\frac{-3}{11}\right) = \left(\frac{-1}{11}\right)\left(\frac{3}{11}\right) = -\left(-\left(\frac{11}{3}\right)\right) = \left(\frac{11}{3}\right) = \left(\frac{2}{3}\right) = -1,$$
so $-3$ is not a square modulo $11$).
(You can also verify that this is the case by plugging in $x=1,2,\ldots,10$ and seeing that none of them satisfy the equation).
On the other hand, if your polynomial were, say $x^2+x-1$, then the quadratic formula would say that the roots are
$$\frac{-1+\sqrt{1+4}}{2}\qquad\text{and}\qquad \frac{-1-\sqrt{1+4}}{2}.$$
Now, $5$ is a square modulo $11$: $4^2 = 16\equiv 5\pmod{11}$. So we can take $4$ as one of the square roots, and taking "multiplication by $6$" as being the same as "dividing by $2$" (since $2\times 6\equiv 1\pmod{11}$), we would get that the two roots are
$$\begin{align*}
\frac{-1+\sqrt{5}}{2} &= \left(-1+4\right)(6) = 18\equiv 7\pmod{11}\\
\frac{-1-\sqrt{5}}{2} &= \left(-1-4\right)(6) = -30\equiv 3\pmod{11}
\end{align*}$$
and indeed, $(7)^2 + 7 - 1 = 55\equiv 0\pmod{11}$ and $3^2+3-1 = 11\equiv 0\pmod{11}$.
We can definitely use this method when $2a$ is relatively prime to the modulus; if the modulus is not a prime, though, nor an odd prime power, then there may be more than $2$ square roots for any given number (or none). But for odd prime moduli, it works like a charm. |
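A quick brute-force confirmation in Python (not part of the original answer), using `pow(2, -1, 11)` for the modular inverse that plays the role of "division by 2":

```python
p = 11
inv2 = pow(2, -1, p)          # multiplication by 6 is "division by 2" mod 11
assert inv2 == 6

def sqrts_mod(a, p):
    """All square roots of a modulo p, by exhaustive search."""
    return [y for y in range(p) if (y * y - a) % p == 0]

# x^2 + x + 1: the discriminant -3 has no square root mod 11, so no solutions.
assert sqrts_mod(-3 % p, p) == []
assert all((x * x + x + 1) % p != 0 for x in range(p))

# x^2 + x - 1: the discriminant 5 is a square (4^2 = 16 = 5 mod 11), so the
# quadratic formula x = (-1 +/- sqrt(5)) * inv2 gives both roots.
roots = sorted((-1 + y) * inv2 % p for y in sqrts_mod(5, p))
assert roots == [3, 7]
assert all((x * x + x - 1) % p == 0 for x in roots)
```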
H: Perimeter of Triangle in a Right triangle.
I am having difficulty solving this problem:
The perimeter of a right triangle is 18 inches. If the midpoints of the three sides are joined by line segments, they form another triangle. What is the perimeter of this new triangle? (Ans: 9 inches)
Any suggestions on how to solve it ?
AI: Each side of the new triangle is half the side of the old triangle, meaning that the perimeter is halved as well.
For a little more detail, draw it out. cutting at the midpoint makes this new triangle one-half of a rectangle. So clearly, since you cut at the midpoint, the legs of the new triangle are half the legs of the old, since opposite sides are of equal length in a rectangle. By similar triangles the hypotenuse is halved as well. So the whole perimeter is cut in half and so the answer is $9$.
EDIT: Picture! The rectangle I'm talking about is $AGFE$.
$AE$ is half of the side, so $GF$ is also half of that side. The same argument can be made for $FE$ and $GE$ easily. This argument doesn't need a right angle though: what matters is that you form a parallelogram (opposite sides are of the same length), which always happens. This is just a special case. |
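Here is a numerical illustration (added, not from the original answer): the 4.5-6-7.5 right triangle is a scaled 3-4-5 with perimeter exactly 18, and a random check confirms that the halving does not need the right angle.

```python
import random
from math import dist  # Python 3.8+

def perimeter(pts):
    return sum(dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# A 4.5-6-7.5 right triangle (a scaled 3-4-5) has perimeter exactly 18:
T = [(0, 0), (4.5, 0), (0, 6)]
M = [midpoint(T[i], T[(i + 1) % 3]) for i in range(3)]
assert abs(perimeter(T) - 18) < 1e-12
assert abs(perimeter(M) - 9) < 1e-12

# The halving holds for arbitrary triangles as well:
random.seed(0)
for _ in range(100):
    T2 = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(3)]
    M2 = [midpoint(T2[i], T2[(i + 1) % 3]) for i in range(3)]
    assert abs(perimeter(M2) - perimeter(T2) / 2) < 1e-9
```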
H: General quadratic form of two variables
I was referring to this lecture http://www.stanford.edu/class/ee364a/videos/video04.html. and he gave an example of a generalized quadratic equation
$$f(x,y) = x'Ax + 2x'By + y'Cy$$
The function is convex if the matrix
$$\begin{pmatrix} A & B \\ B' & C\end{pmatrix}$$
is positive semidefinite, and also the matrix $C$. I didn't get how this is derived, or how the function $f(x,y)$ is a generalized quadratic equation.
AI: Consider $z=\begin{pmatrix} x \\ y\end{pmatrix}$, then $f(x,y)=g(z)$ with $g(z)=z'Mz$ for $M=\begin{pmatrix} A & B \\ B' & C\end{pmatrix}$. Thus, $f$ is quadratic. And $f$ is convex if an only if $M$ is positive semidefinite (and this condition implies that $C$ is positive semidefinite as well hence there is no need to add this further condition). |
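To see concretely why $f(x,y)=z'Mz$ with $z=(x,y)$ stacked, here is a small numerical check in plain Python (the specific $2\times 2$ blocks below are an arbitrary example, not from the lecture):

```python
import random

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def quad(M, v):                       # the scalar v' M v
    return sum(vi * wi for vi, wi in zip(v, matvec(M, v)))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[2, 1], [1, 3]]                  # symmetric block A
C = [[4, 0], [0, 5]]                  # symmetric block C
B = [[1, 2], [0, 1]]                  # arbitrary off-diagonal block B
Bt = [[B[j][i] for j in range(2)] for i in range(2)]

# M = [[A, B], [B', C]] assembled row by row:
M = [A[0] + B[0], A[1] + B[1], Bt[0] + C[0], Bt[1] + C[1]]

random.seed(1)
for _ in range(50):
    x = [random.uniform(-1, 1) for _ in range(2)]
    y = [random.uniform(-1, 1) for _ in range(2)]
    z = x + y                         # z = (x, y) stacked into one vector
    f = quad(A, x) + 2 * dot(x, matvec(B, y)) + quad(C, y)
    assert abs(f - quad(M, z)) < 1e-9  # x'By and y'B'x are the same scalar
```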
H: What is the expected area of a polygon whose vertices lie on a circle?
I came across a nice problem that I would like to share.
Problem: What is the expected value of the area of an $n$-gon whose vertices lie on a circle of radius $r$? The vertices are uniformly distributed.
AI: For $n$ even and $m=(n-2)/2$, the expected area for a unit-radius circle is
$$\frac{n!}{2^n \pi^{n-1}} \sum_{k=1}^m (-1)^{k+m} \frac{(2\pi)^{2k}}{(2k)!} \;.$$
For $n$ odd, there is a similar expression.
For example, for $n=4$, this leads to
$$\frac{4!}{2^4 \pi^{3}} \frac{(2\pi)^2}{2!} = \frac{3}{\pi} \approx 0.955 \;.$$
For $n=32$, the sum is $3.035$, approaching $\pi$ as anticipated.
I found this at a Math Pages article entitled "Expected Area of Random Polygon In a Circle". |
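For an independent check (a simulation added here, not from the original answer), the $n=4$ value $3/\pi\approx 0.955$ is easy to confirm by Monte Carlo: for points on the unit circle the shoelace formula reduces to $\tfrac12\sum_i\sin(\theta_{i+1}-\theta_i)$ over the sorted angles, and since $\sin$ is $2\pi$-periodic the wrap-around edge needs no special casing.

```python
import math, random

def cyclic_area(angles):
    """Shoelace area of the polygon whose vertices are the given sorted
    angles on the unit circle."""
    n = len(angles)
    return 0.5 * sum(math.sin(angles[(i + 1) % n] - angles[i]) for i in range(n))

# Sanity check: the inscribed square has area 2.
assert abs(cyclic_area([0, math.pi / 2, math.pi, 3 * math.pi / 2]) - 2) < 1e-12

random.seed(42)
N = 200000
est = sum(cyclic_area(sorted(random.uniform(0, 2 * math.pi) for _ in range(4)))
          for _ in range(N)) / N
assert abs(est - 3 / math.pi) < 0.01   # expected area of a random quadrilateral
```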
H: How many transvections are in a maximal unipotent subgroup of a general linear group?
If $G$ is a general linear group $\operatorname{GL}(n, q)$ of characteristic $p$, and $U$ is a Sylow $p$-subgroup of $G$, then how many elements of $U$ are transvections?
The size of a centralizer of a transvection is relevant.
If $g = \left[\begin{smallmatrix}
1 & 1 & . & . & . & . \\
. & 1 & . & . & . & . \\
. & . & 1 & . & . & . \\
. & . & . & 1 & . & . \\
. & . & . & . & 1 & . \\
. & . & . & . & . & 1 \\
\end{smallmatrix}\right],$ then $C_{GL}(g) = \left\{\left[ \begin{smallmatrix}
a & * & * & * & * & * \\
. & a & . & . & . & . \\
. & * & * & * & * & * \\
. & * & * & * & * & * \\
. & * & * & * & * & * \\
. & * & * & * & * & * \\
\end{smallmatrix}\right]\right\},$ and so $C_U(g) = \left\{\left[ \begin{smallmatrix}
1 & * & * & * & * & * \\
. & 1 & . & . & . & . \\
. & . & 1 & * & * & * \\
. & . & . & 1 & * & * \\
. & . & . & . & 1 & * \\
. & . & . & . & . & 1 \\
\end{smallmatrix}\right]\right\}.$
Then $g^x = \left[ \begin{smallmatrix}
1 & 1 & * & * & * & * \\
. & 1 & . & . & . & . \\
. & . & 1 & . & . & . \\
. & . & . & 1 & . & . \\
. & . & . & . & 1 & . \\
. & . & . & . & . & 1 \\
\end{smallmatrix}\right]$ for all $x = \left[\begin{smallmatrix}
1 & . & . & . & . & . \\
. & 1 & * & * & * & * \\
. & . & 1 & . & . & . \\
. & . & . & 1 & . & . \\
. & . & . & . & 1 & . \\
. & . & . & . & . & 1 \\
\end{smallmatrix}\right].$
The $U$-conjugacy class of $g$ has $q^{n-2}$ elements. However, there are other transvections, and I'm not so sure how to count them all.
Checking small $(n,q)$ I get a possible formula:
$$ nq^{n-1} - \tfrac{q^n-1}{q-1}$$
In particular, I don't get that each transvection must look like a "row", and I don't get that every element is a transvection. It'd be nice to know what they "look" like as matrices.
Just in case I made a mistake in linear algebra: To check if an element g is a transvection, I check that $\operatorname{Rank}(g-1) = 1$ and $(g-1)^2 = 0$.
AI: Doesn't it just depend on the lowest non-zero row of $g-I$? Let's say this is row $r$ (starting counting from the bottom). Let's say that the lowest $k$ rows of $g-I$ are zero, but the next one up isn't. If I'm counting right, there are $q^{k}-1$ ways to complete that row with a non-zero vector, then, given that choice, $q^{n-k-1}$ ways to fill out the rows above that so that $g-I$ has rank $1.$ So the answer looks like $\sum_{k=1}^{n-1} (q^{n-1} - q^{n-k-1}).$ I think this agrees with what you wrote. |
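A brute-force check for small cases (code added here, not part of the original thread) confirms the conjectured formula $nq^{n-1}-\frac{q^n-1}{q-1}$ directly, using the question's criterion $\operatorname{Rank}(g-1)=1$ and $(g-1)^2=0$:

```python
from itertools import product

def is_multiple(r, s, q):
    """Is row r a scalar multiple of row s over F_q?"""
    return any(all((c * si - ri) % q == 0 for si, ri in zip(s, r))
               for c in range(q))

def count_transvections_in_U(n, q):
    """Count upper unitriangular g over F_q (q prime) with rank(g-1) = 1
    and (g-1)^2 = 0, by brute force over the strictly upper entries."""
    positions = [(i, j) for i in range(n) for j in range(i + 1, n)]
    count = 0
    for vals in product(range(q), repeat=len(positions)):
        N = [[0] * n for _ in range(n)]            # N = g - I, strictly upper
        for (i, j), v in zip(positions, vals):
            N[i][j] = v
        rows = [r for r in N if any(r)]
        if not rows:
            continue
        rank_one = all(is_multiple(r, rows[0], q) for r in rows)
        square_zero = all(sum(N[i][k] * N[k][j] for k in range(n)) % q == 0
                          for i in range(n) for j in range(n))
        if rank_one and square_zero:
            count += 1
    return count

def conjectured(n, q):
    return n * q**(n - 1) - (q**n - 1) // (q - 1)

for n, q in [(3, 2), (3, 3), (4, 2), (4, 3)]:
    assert count_transvections_in_U(n, q) == conjectured(n, q)
```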
H: Stability of equilibria of a differential equation (by Hale-Koçak)
Consider the differential equation
$$
x'=f(x)
$$
where
$$
f(x)=\begin{cases}
0 & x = 0 \\[12pt]
-x^3\sin\left( {\frac{1}{x}} \right) & x \ne 0
\end{cases}
$$
I have to study the equilibrium points. First, I've proved that $f(x) \in C^1(\mathbb{R})$. Then I've found that the equilibrium points are $x=0$ and $x_k = \frac{1}{k\pi}$, with $k \in \mathbb{Z} \setminus \{0\}$. The points $x_k$ can be easily classified (stable, asymptotically stable or unstable) because they are hyperbolic points ($f'(x_k) \ne 0$). What about $x=0$? I've read in Hale-Koçak that $x=0$ seems to be stable but not asymptotically stable.
I managed to prove that we cannot find a $\delta > 0$ s.t. $xf(x)<0$ for $0<\vert x\vert <\delta$: by a lemma in Hale-Koçak, this tells us the point is not asymptotically stable.
What about stability? I should prove that
$$
\exists \delta > 0, \, \vert x \vert < \delta \Rightarrow xf(x)\le 0
$$
I don't manage to prove this. How would you do?
Thanks for your help.
AI: I add to my comment above.
To prove the stability of $\hat x=0$ you do not need to prove (you simply can't) that
$\exists \delta > 0, \, \vert x \vert < \delta \Rightarrow xf(x)\le 0$
Actually, this example is given in the book because it shows that the implication $\hat x$ is stable $\Rightarrow$ $(x-\hat x)f(x)\leq 0$ is not true (whereas the converse is true).
To prove the stability of $\hat x=0$ you just need to use the definition. |
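A crude forward-Euler simulation illustrates the picture (added here; the step size and horizon are arbitrary choices): a solution started between the two adjacent equilibria $1/(4\pi)$ and $1/(3\pi)$ stays trapped between them, hence near $0$, but it settles onto $1/(3\pi)$ rather than onto $0$, matching "stable but not asymptotically stable".

```python
import math

def f(x):
    return 0.0 if x == 0 else -x**3 * math.sin(1.0 / x)

lo, hi = 1 / (4 * math.pi), 1 / (3 * math.pi)   # adjacent equilibria
x, dt = 0.1, 0.01                               # 0.1 lies strictly between them

trapped = True
for _ in range(200000):
    x += dt * f(x)
    if not (lo - 1e-9 <= x <= hi + 1e-9):
        trapped = False
        break

assert trapped                      # the solution stays near 0: stability
assert abs(x - hi) < 1e-3           # but it converges to 1/(3*pi), not to 0
```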
H: Show $f(x)=\int_E x^tg(t)d\mu(t)$ is continuous when $\mu$ is a general measure
Define the function $f:[0,1] \to \mathbb{R}$ by
$$
f(x)=\int_E x^tg(t)d\mu(t)
$$
where $E \subset \mathbb{R^+}$,
$\mu$ is a nonnegative measure on $\mathbb{R}$
and $g:\mathbb{R} \to \mathbb{R}$ is a $\mu$-integrable function, that is $\int |g|d\mu < \infty$.
Is $f$ a continuous function of $x$?
I would be tempted to use the relation between absolute continuity and the Lebesgue integral, but as the measure is not Lebesgue measure, it's of no use.
Is it possible to show that $f$ is continuous?
Does this need any extra assumptions?
AI: Use the dominated convergence theorem noting that $|x^t g(t)| \leq |g(t)|$, when $x\in [0,1]$. Then if $x_n\to \hat{x}$, you will have $x_n^t \to \hat{x}^t$, hence $f(x_n) \to f(\hat{x})$. Since this is true for all such sequences, $f$ is continuous. |
H: short exact sequences and direct product
Let
$$0\longrightarrow L^{(i)}\longrightarrow M^{(i)}\longrightarrow N^{(i)}\longrightarrow 0$$
be a short exact sequence of abelian groups for every index $i$. Clearly if I take finite direct products, then
$$0\longrightarrow \prod_iL^{(i)}\longrightarrow\prod_i M^{(i)}\longrightarrow \prod_iN^{(i)}\longrightarrow 0$$
is a short exact sequence. But what about infinite direct product? Is the exactness preserved?
AI: In a general (abelian) category, the product of epimorphisms (if it exists!) may not be an epimorphism – that is why we sometimes assume (AB4*) as an extra axiom. As it turns out, the category of $R$-modules (for any ring $R$) always satisfies the (AB4*) axiom.
Proposition. If the (AB4*) axiom is satisfied in an abelian category, then products of short exact sequences are short exact sequences.
Proof. Consider short exact sequences
$$0 \longrightarrow L^{(i)} \longrightarrow M^{(i)} \longrightarrow N^{(i)} \longrightarrow 0$$
By general abstract nonsense, one can show that the kernel of a product is the product of the kernels, so we have a left exact sequence
$$0 \longrightarrow \prod_i L^{(i)} \longrightarrow \prod_i M^{(i)} \longrightarrow \prod_i N^{(i)}$$
but by the (AB4*) assumption, the last homomorphism is an epimorphism, so we in fact have a short exact sequence. |
H: infinite series involving harmonic numbers and zeta
I ran across a fun looking series and am wondering how to tackle it.
$$\sum_{n=1}^{\infty}\frac{H_{n}}{n^{3}}=\frac{{\pi}^{4}}{72}.$$
One idea I had was to use the digamma and the fact that
$$\sum_{k=1}^{n}\frac{1}{k}=\int_{0}^{1}\frac{1-t^{n}}{1-t}dt=\psi(n+1)+\gamma.$$
Along with the identity $\psi(n+1)=\psi(n)+\frac{1}{n}$, I managed to get it into the form
$$\sum_{n=1}^{\infty}\frac{H_{n}}{n^{3}}=\gamma\zeta(3)+\zeta(4)+\sum_{n=1}^{\infty}\frac{\psi(n)}{n^{3}}.$$
This would mean that $$\sum_{n=1}^{\infty}\frac{\psi(n)}{n^{3}}=\frac{{\pi}^{4}}{360}-\gamma\zeta(3).$$ Which, according to Maple, it does. But, how to show it?. If possible.
I also started with $\frac{-\ln(1-x)}{x(1-x)}=\sum_{n=1}^{\infty}H_{n}x^{n-1}$.
Then divided by x and differentiated several times. This lead to some interesting, but albeit, tough integrals involving the dilog:
$$-\int\frac{\ln(1-x)}{x(1-x)}dx=Li_{2}(x)+\frac{\ln^{2}(1-x)}{2}=\sum_{n=1}^{\infty}\frac{H_{n}x^{n}}{n}.$$
Doing this again and again lead to some integrals that appeared to be going in the right direction.
$$\int_{0}^{1}\frac{Li_{3}(x)}{x}dx=\frac{{\pi}^{4}}{90}$$
$$-\int_{0}^{1}\frac{\ln^{2}(1-x)\ln(x)}{2x}dx=\frac{{\pi}^{4}}{360}$$
$$-\int_{0}^{1}\frac{\ln(1-x)Li_{2}(1-x)}{x}dx=\frac{{\pi}^{4}}{72}$$
But, what would be a good approach for this one? I would like to find out how to evaluate
$$\sum_{n=1}^{\infty}\frac{\psi(n)}{n^{3}}=\frac{{\pi}^{4}}{360}-\gamma\zeta(3)$$
if possible, but any methods would be appreciated and nice.
Thanks a bunch.
AI: $$\begin{align*}
\sum_{n=1}^{+\infty} \frac{H_{n}}{n^{3}} &=
\sum_{n=1}^{+\infty} \frac{1}{n^{3}} \sum_{m=1}^{+\infty} \left( \frac{1}{m} - \frac{1}{m+n}\right)
= \sum_{n=1}^{+\infty} \frac{1}{n^{3}} \sum_{m=1}^{+\infty} \frac{n}{m(m+n)}
= \sum_{n=1}^{+\infty} \sum_{m=1}^{+\infty} \frac{m}{m^2 n^2 (m+n)}\\
&= \frac{1}{2} \left(\sum_{n=1}^{+\infty} \sum_{m=1}^{+\infty} \frac{m}{m^2 n^2(m+n)} + \sum_{n=1}^{+\infty} \sum_{m=1}^{+\infty} \frac{n}{m^2 n^2(m+n)} \right)\\
&= \frac{1}{2} \sum_{n=1}^{+\infty} \sum_{m=1}^{+\infty} \frac{1}{m^2 n^2}
= \frac{1}{2} \zeta(2)^2
= \frac{1}{2} \left(\frac{\pi^{2}}{6}\right)^2
= \frac{\pi^{4}}{72}
= \frac{5}{4} \zeta(4)
\end{align*}$$
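The closed form is easy to corroborate numerically (a check added here, not in the original answer); partial sums with $2\cdot 10^5$ terms already agree with $\pi^4/72$ to many digits.

```python
import math

N = 200000
H, s = 0.0, 0.0
for n in range(1, N + 1):
    H += 1.0 / n                 # the harmonic number H_n, built incrementally
    s += H / n**3

assert abs(s - math.pi**4 / 72) < 1e-8
assert abs(math.pi**4 / 72 - 0.5 * (math.pi**2 / 6)**2) < 1e-15  # = zeta(2)^2 / 2
```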
H: Summing an exponential series
What is the appropriate way to simplify such an expression? I am unsure of how to apply the series I know to this situation:
$$\sum_{L=0}^{M}s^{L}L^{2}$$
Do I modify such a series as a power series, or is there a more efficient series to use here?
thank you very much!!
AI: Try to make the inner expression look like a derivative:
$$
\begin{align}
\sum_{L=0}^M\left(Ls^{L-1}\right)sL & =s\sum_{L=0}^M\left(\partial_ss^L\right)L\\
& =s\partial_s\sum_{L=0}^Ms^LL\\
& =s\partial_s\sum_{L=0}^M\left(Ls^{L-1}\right)s\\
& =s\partial_s\left(s\sum_{L=0}^M\left(Ls^{L-1}\right)\right)\\
& =s\partial_s\left(s\sum_{L=0}^M\partial_ss^L\right)\\
& =s\partial_s\left(s\partial_s\sum_{L=0}^Ms^L\right)\\
& =s\partial_s\left(s\partial_s\frac{s^{M+1}-1}{s-1}\right)\\
\end{align}$$
Now just take it from here, simplifying from the inside out. |
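On the coefficient level the operator $s\,\partial_s$ simply multiplies the coefficient of $s^L$ by $L$, which makes the identity easy to verify for a finite $M$ (a small check added here, not part of the original answer):

```python
def s_dds(coeffs):
    """Apply s*d/ds to a polynomial given by its coefficient list
    (coeffs[L] is the coefficient of s^L): c*s^L maps to L*c*s^L."""
    return [L * c for L, c in enumerate(coeffs)]

def poly_eval(coeffs, s):
    return sum(c * s**L for L, c in enumerate(coeffs))

M, s = 10, 0.7
geom = [1] * (M + 1)                        # coefficients of sum_{L=0}^{M} s^L
via_operator = poly_eval(s_dds(s_dds(geom)), s)   # (s d/ds)^2 applied twice
direct = sum(s**L * L**2 for L in range(M + 1))   # the target sum itself

assert abs(via_operator - direct) < 1e-12
```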
H: Question on sequences
Exercise 37 in Apostol $10.20$ asks to find all complex $z$ such that $$\sum_{n=1}^{\infty} \frac{(-1)^n}{z+n}$$ converges.
I suspect this requires the use of either Abel's Test or Dirichlet's Test. My attempt so far has been to set $\{b_n\}=\frac{1}{n}$, which is a decreasing sequence of real numbers that converges to $0$. Then, I set $\{a_n\}=(-1)^n\frac{n}{z+n}$.
As $n\to\infty$, $$\{a_n\} \to (-1)^ne^{i\arg(\frac{n}{z+n})}$$ since $$|\frac{n}{z+n}|=\frac{n}{|z+n|}\to 1$$
In order to prove that this converges for all complex $z\not=-1,-2,\dots$, I must show that $A_n=\sum_{k=1}^{n} a_n$ is a bounded sequence (not necessarily that it converges). This would satisfy the hypotheses for Dirichlet's test.
AI: $$
c_n:=\frac{1}{z+n}=\frac{x+n}{|z+n|^2}-i\frac{y}{|z+n|^2}=:a_n-ib_n \quad \forall \ z=x+iy \ne -n.
$$
For every $z \in \mathbb{C}\setminus(-\mathbb{N})$ the sequences $a_n, \ b_n$ decrease to $0$, therefore the series $\sum_{n=1}^\infty(-1)^nc_n$ converges for every $z \in \mathbb{C}\setminus(-\mathbb{N})$. |
H: $E[X\mid X>1]$ if $X \sim \exp(\lambda)$
I need to find $E[X\mid X>1]$ if $X \sim \exp(\lambda)$.
I first tried: $$f(x|x>1) = \frac{f(x)}{\int_{x=1}^{\infty}f(x) dx}.$$
AI: Hint: Use the memorylessness property of the exponential distribution. Given that you have waited $1$ hour, what is the distribution of your additional waiting time? So what is the expectation of your additional waiting time? Now don't forget to add the hour already spent waiting. |
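A simulation confirms $E[X\mid X>1]=1+1/\lambda$ (an addition, not part of the hint; $\lambda=2$ is chosen arbitrarily):

```python
import random

random.seed(7)
lam = 2.0
samples = [random.expovariate(lam) for _ in range(1_000_000)]
cond = [x for x in samples if x > 1]          # condition on the event {X > 1}

est = sum(cond) / len(cond)
assert abs(est - (1 + 1 / lam)) < 0.01        # memorylessness: 1 + E[X]
```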
H: How to arrive at Stokes's theorem from Green's theorem?
I would like to verify the identity
$$ \oint \vec F \cdot (\hat i dx + \hat j dy) + \oint \vec F \cdot (\hat i dx + \hat j dy) + \oint \vec F \cdot (\hat i dx + \hat j dy) = \oint \vec F \cdot (\hat i dx + \hat j dy + \hat k dz) $$
If it is incorrect, then what would be the correct identity?
Green's theorem is special case of Stokes's theorem. How do we arrive at Stokes's theorem using Green's theorem?
AI: Time and space does not permit a complete answer, but here is an outline of one way to do it.
First, note that Green's theorem in the plane (applied to $f\partial g/\partial u$ and $f\partial g/\partial v$) leads to
$$\newcommand{\pd}[2]{\frac{\partial#1}{\partial#2}}
\iint\limits_D\Big(\pd fu\pd gv-\pd fv\pd gu\Big)\,du\,dv
=\oint\limits_J f\,dg$$
where $D$ is a “sufficiently nice” region in the plane and $J$ is its boundary curve.
Next, assume a three-dimensional surface $S$ is parametrized by $\mathbf{r}\colon D\to\mathbb{R}^3$, and that $\mathbf{F}$ is a vector field. Now you can prove the identity $$ (\operatorname{curl}\mathbf{F})\cdot
\Big(\pd{\mathbf{r}}{u}\times\pd{\mathbf{r}}{v}\Big)
=\pd{\mathbf{F}}{u}\cdot\pd{\mathbf{r}}{v}
-\pd{\mathbf{F}}{v}\cdot\pd{\mathbf{r}}{u}$$
and discover that each component function of $\mathbf{F}$ in this equation gives rise to a term of the form of the integrand on the left in the first equation. I.e., you let $f$ be each of the components of $\mathbf{F}\circ\mathbf{r}$ in turn, with $g$ being the corresponding component of $\mathbf{r}$, and add the three resulting equations together. You now have Stokes's theorem as written out using the given parametrization of $S$.
“Some assembly required.” |
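As a numerical sanity check of the end result (an addition, not part of the derivation), take $\mathbf F=(-y,x,z^2)$, whose curl is $(0,0,2)$, with $S$ the upper unit hemisphere; both sides of Stokes's theorem should give $2\pi$.

```python
import math

# Hemisphere r(u,v) = (sin u cos v, sin u sin v, cos u); r_u x r_v = sin(u) r(u,v),
# so for curl F = (0, 0, 2) the integrand is 2*cos(u)*sin(u) (independent of v).
nu, nv = 400, 400
du, dv = (math.pi / 2) / nu, (2 * math.pi) / nv
surf = sum(2.0 * math.cos((i + 0.5) * du) * math.sin((i + 0.5) * du) * du * dv
           for i in range(nu) for j in range(nv))

# Boundary circle r(t) = (cos t, sin t, 0): F . r'(t) = sin^2 t + cos^2 t = 1.
nt = 4000
dt = 2 * math.pi / nt
line = sum((math.sin(t) ** 2 + math.cos(t) ** 2) * dt
           for t in ((k + 0.5) * dt for k in range(nt)))

assert abs(surf - 2 * math.pi) < 1e-4
assert abs(line - 2 * math.pi) < 1e-9
```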
H: Show that $|x|^{-\eta}$ is an "eigenfunction" for the Hardy Littlewood centered maximal operator
Let $\mu$ be the Hardy Littlewood centered maximal operator in $\mathbb{R}^n$
$$\mu (f)(x) = \sup_{r>0} \frac{1}{|B_r(x)|} \int_{B_r(x)} |f(y)|dy.$$
If $g(x)=|x|^{-\eta}$, with $\eta \in (0,n)$, how does one prove that $\mu(g)(x)=Cg(x)$ for some constant $C$?
AI: First observation: there is a close relation between $\mu(f)$ and $\mu(f\circ T)$ where $T$ is a linear operator that is either orthogonal or a multiple of identity. You should have $\mu(f\circ T)=\mu(f)\circ T$ in both cases.
Second observation to make: the function $g(x)=|x|^{-\eta}$ satisfies $g\circ T=g$ when $T$ is orthogonal, and $g\circ T=\lambda^{-\eta}g$ when $Tx=\lambda x$.
Then combine the 1st and 2nd observations to show that $\mu(g)$ has the same symmetry/scaling as $g$. |
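In one dimension this scaling can be seen numerically (a rough sketch added here, with $\eta=1/2$, not part of the original answer): using the exact antiderivative of $|y|^{-\eta}$ to compute ball averages and maximizing over a log-spaced grid of radii, the ratio $\mu(g)(2)/\mu(g)(1)$ reproduces $g(2)/g(1)=2^{-\eta}$.

```python
import math

eta = 0.5

def G(y):
    """Antiderivative of |y|**(-eta): sign(y) * |y|**(1-eta) / (1-eta)."""
    return math.copysign(abs(y) ** (1 - eta) / (1 - eta), y)

def avg(x, r):
    """Average of g over the ball (x - r, x + r), computed exactly."""
    return (G(x + r) - G(x - r)) / (2 * r)

def maximal(x, rs):
    return max(avg(x, r) for r in rs)

rs = [10 ** (k / 50) for k in range(-200, 301)]   # radii from 1e-4 to 1e6
m1, m2 = maximal(1.0, rs), maximal(2.0, rs)

# mu(g)(x) = C * g(x): the ratio reproduces g(2)/g(1) = 2**(-eta) ...
assert abs(m2 / m1 - 2 ** (-eta)) < 5e-3
# ... and C >= 1, since the averages tend to g(x) as r -> 0:
assert m1 > 1.0
```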
H: What is the difference between plus-minus and minus-plus?
Possible Duplicate:
What is the purpose of the $\mp$ symbol in mathematical usage?
Just as the title explains. I've seen my professor actually differentiating between those two. Do they not mean the same?
AI: If you write
$$
\cos(a \pm b) = \cos a \cos b \mp \sin a \sin b,
$$
then + on the left side corresponds to minus on the right side, and - on the left side corresponds to + on the right side.
Standing alone, they mean the same. |
H: number prime to $bc$ with system of congruences
Can you please help me to understand why all numbers $x$ prime to $bc$ are all the solutions of this system?
$$\begin{align*}
x&\equiv k\pmod{b}\\
x&\equiv t\pmod{c}
\end{align*}$$
Here $k$ is prime to $b$, and $t$ is prime to $c$.
AI: Suppose that $x\equiv k \pmod{b}$, where $k$ and $b$ are relatively prime. We show first that $x$ and $k$ are relatively prime.
For suppose to the contrary that $d \gt 1$ divides both $x$ and $b$. Since $x-k=qb$ for some integer $q$, we have $k=x-qb$. By assumption $d$ divides $x$ and $b$, so $d$ divides $x-qb$. It follows that $d$ divides $k$, contradicting the fact that $k$ and $b$ are relatively prime.
Similarly, $x$ and $c$ are relatively prime.
Finally, we show that $x$ and $bc$ are relatively prime. There are various ways to prove this. For example, if $x$ and $bc$ are not relatively prime, there is a prime number $p$ that divides both $x$ and $bc$. But since $p$ divides $bc$, it follows that $p$ divides $b$ or $p$ divides $c$ (or both). If for example $p$ divides $b$, that contradicts the fact that $x$ and $b$ are relatively prime.
Remark: There is a mild ambiguity in the statement of the problem. We have shown that any $x$ that is a common solution of the congruences must be relatively prime to $bc$. But in general a number relatively prime to $bc$ need not be a solution of the system of congruences. Generally most of them aren't. If $b$ and $c$ are themselves relatively prime, there will be only one $x$ in the interval $0\le x\lt bc$ which satisfies both congruences. |
H: Are test/bump functions always bounded?
A bump function is an infinitely differentiable function with compact support. I guess that such functions are always bounded, especially because the set where they are not zero is compact, and because they are continuous they should attain a maximum value on that set. Or am I wrong? I am wondering because nowhere in the literature I am using is it said that such functions are bounded, and I think this is an important property that should be mentioned if it holds. So maybe it's not the case?
AI: Hint: The image of a compact set under a continuous function is always compact. |
H: Counting - Colored Houses Question
Here is a question. I seem to have a hard time answering questions of this kind. I would appreciate it if you would not only help answer this, but carefully explain the process so I can understand it, and apply the same when I encounter questions of this kind.
Six houses in a row are each to be painted with one of the colors red, blue, green, and yellow. In how many different ways can the houses be painted so that no two adjacent houses can be the same color?
Thanks in advance!
AI: The general strategy is to ask yourself:
How many ways can I color the first house?
And having chosen a color for the first house, how many ways can I color the second house?
And having chosen colors for the first two houses, how many ways can I color the third house?
(etc.)
And then you multiply the numbers all together. This is called the counting principle or the rule of product.
Here is a simpler example: How many ways can I color a big vase and a little vase, if I have three colors of paint, and the vases may not be the same color?
I can color the big vase in any of three colors, and then I can color the little vase in any of the two remaining colors, and so the answer is that there are 3×2=6 ways to color the vases.
In general, the problems might get harder, and that is why we have the branch of mathematics known as combinatorics. |
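For the six-houses problem itself, the counting principle gives $4\cdot3^5=972$ colorings. A quick numerical sketch (the helper name is illustrative):

```python
# Counting principle: 4 color choices for the first house,
# then 3 choices (anything but the previous color) for each of the next 5.
def count_colorings(houses, colors):
    if houses == 0:
        return 1
    return colors * (colors - 1) ** (houses - 1)

print(count_colorings(6, 4))  # 972
```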
H: find the bases for the range of a linear operator and the null space
I need to find the bases for a linear operator.
Here is the question given to me:
Consider $V=\mathbb{C}_{1\times 2}$ as a vector space over the real numbers.
let the linear operator $\tau : V \rightarrow V $ be defined by $\tau (z_{1},z_{2})=(z_{1}-\overline{z_{1}}, z_{2}-\overline{z_{2}})$
Find bases for $\tau(V)$ and $NS(\tau)$ and determine whether $V=\tau(V) \bigoplus NS(\tau)$
AI: First: linear operators don't have bases. Vector spaces (and subspaces) have bases. You'll note that the question does not ask for a basis for the linear operator, it asks for a basis of the range of $\tau$, and for a basis of the nullspace of $\tau$; and it so happens that both of those are vector spaces, so we can talk about bases for them.
Now, $V$ is $4$-dimensional as a vector space over $\mathbb{R}$; a possible basis is $\{(1,0), (i,0), (0,1), (0,i)\}$.
You should also check and make sure that $\tau$ is a linear transformation if you haven't done so.
Let's consider first the nullspace of $\tau$ (it is simpler than the range). When is $(z_1,z_2)$ in the nullspace of $\tau$?
$(z_1,z_2)\in\mathbf{N}(\tau)$ if and only if $\tau(z_1,z_2)=(0,0)$, if and only if $z_1-\overline{z_1}=0$ and $z_2-\overline{z_2}=0$, if and only if $z_1=\overline{z_1}$ and $z_2=\overline{z_2}$, if and only if $z_1$ is real and $z_2$ is real. That is, the nullspace is the span of $(1,0)$ and $(0,1)$, and they give you a basis.
What is the range of $\tau$? Note that if $z = a+bi$ is a complex number, then $z-\overline{z} = (a+bi) - (a-bi) = 2bi$. So another way of writing $\tau$ is:
$$\tau(z_1,z_2) = \Bigl( 2i\mathrm{Im}(z_1), 2i\mathrm{Im}(z_2)\Bigr).$$
So you should notice that the image of $\tau$ is contained in the span of $(i,0)$ and $(0,i)$.
Is the image of $\tau$ exactly equal to $\mathrm{span}((i,0),(0,i))$?
Given the answer to the latter question, can you find the answer to your question of deciding whether $V = \tau(V)\oplus \mathbf{N}(\tau)$? |
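A numerical spot check of this decomposition (a sketch; the sample vector is arbitrary): the real parts of a vector lie in $\mathbf{N}(\tau)$, the imaginary parts lie in $\tau(V)$, and they sum back to the vector.

```python
# tau(z1, z2) = (z1 - conj(z1), z2 - conj(z2)) on C^2 viewed over R.
def tau(z1, z2):
    return (z1 - z1.conjugate(), z2 - z2.conjugate())

v = (3 + 2j, -1 + 5j)
null_part = (v[0].real, v[1].real)             # real entries lie in NS(tau)
range_part = (1j * v[0].imag, 1j * v[1].imag)  # imaginary entries lie in tau(V)

print(tau(*null_part))   # (0.0, 0.0): the real part is killed by tau
print(null_part[0] + range_part[0] == v[0],
      null_part[1] + range_part[1] == v[1])    # True True: v decomposes
```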
H: Restriction maps for structure sheaf of Spec A
For the space $X = \operatorname{Spec} A$, we define the structure sheaf $\mathcal{O}_X$ as follows. For an open subset $U \subseteq X$, we let $\mathcal{O}_X(U)$ be the projective limit of the family $\{ A_f : f \in A, D(f) \subseteq U \}$ indexed with the partial order $f \le g \iff D(f) \subseteq D(g)$. (Here $A_f$ denotes the localization of $A$ at $f$.) I am having trouble understanding how to define the restriction maps $\rho^U_V : \mathcal{O}_X(U) \to \mathcal{O}_X(V)$, for $V \subseteq U$. I understand it should be induced from $\rho^{D(g)}_{D(f)} : A_g \to A_f$ somehow ($D(f) \subseteq D(g)$), but I can’t quite figure out what it should be.
($D(f)$ are the principal open sets.)
AI: If $V \subset U$ inside of $\operatorname{Spec} A$ then the principal open subsets contained in $V$ form a subfamily of those contained in $U$. By the universal property of the inverse limit, one way to give a homomorphism $\mathscr O(U) \to \mathscr O(V)$ is to give a homomorphism $\mathscr O(U) \to A_f$ for each $f \in A$ such that $D(f) \subset V$, and in a compatible way. Since $D(f) \subset U$, the projection corresponding to $f$ that comes with the inverse limit defining $\mathscr O(U)$ will do this job nicely! |
H: Determining values where a function is not differentiable
Given
$$g(x) =
\begin{cases}
-1-2x & \text{if }x< -1,\\
x^2 & \text{if }-1\leq x\leq1,\\
x & \text{if }x>1,
\end{cases}
$$
determine at which values $g(x)$ is differentiable.
The approach I have taken with this question is to determine the values at which it is not differentiable, which will tell me all other values will be. I know that the function will not be differentiable where the limit at a given value does not exist. If I differentiate this function I get:
$$
g'(x) =
\begin{cases}
-2 & \text{if }x< -1,\\
2x & \text{if }-1\leq x\leq1,\\
0 & \text{if }x>1.
\end{cases}
$$
I am a little bit lost as to how to proceed with this question - if I can show that the left hand and right hand limits disagree, then I can determine where the function is not differentiable, and therefore where it is differentiable. Am I heading in the right direction here?
AI: First, note that $g'(x)=1$ for $x>1$.
On the interior of each of the intervals $(-\infty,-1)$, $[-1,1]$, and $(1,\infty)$ $g$ is differentiable since each component function is. The only question is what happens at the endpoints of these intervals.
At $x=-1$, the value of both component functions is $1$ and the derivative of both component functions is $-2$. This means we get
$$
\lim_{h\to0^-}\frac{g(-1+h)-g(-1)}{h}=-2\tag{1}
$$
Because $x^2=-1-2x$ at $x=-1$, we can use $g(x)=-1-2x$ for the computation of $(1)$.
Furthermore, we get
$$
\lim_{h\to0^+}\frac{g(-1+h)-g(-1)}{h}=-2\tag{2}
$$
using $g(x)=x^2$ for the computation of $(2)$.
Since the derivatives computed in $(1)$ and $(2)$ are the same, we get that $g(x)$ is differentiable at $x=-1$.
At $x=1$, the value of both component functions is $1$, however, the derivative of $x^2$ is $2$ and the derivative of $x$ is $1$. This means that
$$
\lim_{h\to0^-}\frac{g(1+h)-g(1)}{h}=2\tag{3}
$$
using $g(x)=x^2$ for the computation of $(3)$.
However, we get
$$
\lim_{h\to0^+}\frac{g(1+h)-g(1)}{h}=1\tag{4}
$$
Because $x=x^2$ at $x=1$, we can use $g(x)=x$ for the computation of $(4)$.
Since the derivatives computed in $(3)$ and $(4)$ are different, $\lim\limits_{h\to0}\frac{g(1+h)-g(1)}{h}$ does not exist, and therefore $g(x)$ is not differentiable at $x=1$. |
H: What is the value of this sum?
Possible Duplicate:
Value of $\sum\limits_n x^n$
I am interested in finding what this sum converges to:
$$\sum_{n=0}^{\infty}e^{-n}=1+\frac{1}{e}+\frac{1}{e^2}+\frac{1}{e^3}+\cdots$$
Does a closed form exist? If so, what is it?
AI: This is a classic geometric series. Letting
$$S=1+\frac{1}{e}+\frac{1}{e^2}+\frac{1}{e^3}+\cdots$$
we have
$$eS=e+1+\frac{1}{e}+\frac{1}{e^2}+\cdots$$
Taking the difference, we have
$$eS-S=S(e-1)=e+1+\frac{1}{e}+\frac{1}{e^2}+\cdots - \left(1+\frac{1}{e}+\frac{1}{e^2}+\frac{1}{e^3}+\cdots\right)$$
We see all the terms cancel each other out except for $e$. Thus
$$S(e-1)=e\implies S = \frac{e}{e-1}$$
Thus the sum is equal to $\frac{e}{e-1}$.
More generally, we have
$$\sum_{n=0}^{\infty}x^n=\frac{1}{1-x}$$
for $|x| < 1$ |
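A quick numerical check of the closed form (a sketch):

```python
import math

# Partial sums of sum_{n>=0} e^{-n} should approach e/(e-1) = 1.5819767...
partial = sum(math.e ** -n for n in range(50))
closed_form = math.e / (math.e - 1)
print(partial, closed_form)
```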
H: parametrization of surface element in surface integrals
I don't understand this
How $ dS = \sqrt{ \left ( \partial g \over \partial x\right )^2 + \left ( \partial g \over \partial y\right )^2 + 1 } \; dA \; \; $ ?? Is $ dA = dx\times dy$??
AI: The surface in question is given by $z=g(x,y)$.
The vector in the surface that comes as a small change, $\mathrm{d}x$, to $x$ is
$$
\left(1,0,\frac{\partial g}{\partial x}\right)\mathrm{d}x\tag{1}
$$
The vector in the surface that comes as a small change, $\mathrm{d}y$, to $y$ is
$$
\left(0,1,\frac{\partial g}{\partial y}\right)\mathrm{d}y\tag{2}
$$
If we take the cross product of $(1)$ and $(2)$, we get
$$
\left(1,0,\frac{\partial g}{\partial x}\right)\times\left(0,1,\frac{\partial g}{\partial y}\right)\,\mathrm{d}x\,\mathrm{d}y = \left(-\frac{\partial g}{\partial x},-\frac{\partial g}{\partial y},1\right)\,\mathrm{d}x\,\mathrm{d}y\tag{3}
$$
The area represented is the absolute value of $(3)$:
$$
\sqrt{\left(\frac{\partial g}{\partial x}\right)^2+\left(\frac{\partial g}{\partial y}\right)^2+1}\quad\mathrm{d}x\,\mathrm{d}y\tag{4}
$$ |
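The cross-product computation in $(3)$ and $(4)$ can be spot-checked numerically; here $g(x,y)=x^2+y$ is an arbitrary illustrative surface, evaluated at $x=0.5$:

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

gx, gy = 1.0, 1.0   # partial derivatives of g(x, y) = x**2 + y at x = 0.5
n = cross((1, 0, gx), (0, 1, gy))
area_factor = math.sqrt(sum(c * c for c in n))
print(n)            # (-1.0, -1.0, 1): matches (-g_x, -g_y, 1)
print(area_factor)  # sqrt(g_x^2 + g_y^2 + 1) = sqrt(3)
```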
H: something that looks sort of symmetrical but also not
Given the set $S_0$ of finite binary strings whose digit sum is congruent to 0 mod 2 and the set $S_1$ of finite binary strings whose digit sum is congruent to 1 mod 2,
what are the implications of the fact that $F: \{s_1 \in S_1 : s_1 \mbox{ends in 1} \} \to S_0$ that removes the trailing 1 from $s_1$ is onto $S_0$ but $“F^{-1}” : \{s_0 \in S_0 \} \to S_1$ that appends a 1 to the end of $s_0$ is not onto $S_1$?
AI: In a sense, this shows that you can add or subtract a single element from an infinite set and still have a bijection between the domain and range. This is not true for finite sets. If we allow $0 \in \mathbb N$, consider $f(x)=x+1$ on $\mathbb N$. $f$ is not onto, but $f^{-1}$ is. This is equivalent to your example, but may be less surprising. One's view of the implications can range from "trivial" to "the base of all the theory of infinite sets". |
H: Calculating Perpendicular and Base of Triangle. Suggestion
In this diagram AB and CD are both perpendicular to BE.If EC=5 and CD=4. What is ratio of AB to BE ?
How would I go about solving this triangle (without trigonometric ratios)? I could only get DE=3 using the Pythagorean theorem and was stuck after that. How would I calculate BD? Do I need BC? Suggestions?
Edit:
The answer is 4:3.
AI: By Thales' theorem we have $\frac{CD}{AB}=\frac{DE}{BE}$ and so $\frac{AB}{BE}=\frac{CD}{DE}$.
We know $CD=4$ and by the Pythagorean theorem in $CDE$ we get $DE=3$.
So $\frac{AB}{BE}=\frac{4}{3}$. |
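Numerically (a sketch of the two steps in the answer):

```python
import math

EC, CD = 5.0, 4.0
DE = math.sqrt(EC ** 2 - CD ** 2)  # Pythagorean theorem in triangle CDE
ratio = CD / DE                    # AB/BE = CD/DE by similar triangles
print(DE, ratio)                   # 3.0 1.333...
```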
H: When am I allowed to use ln(x) when integrating functions?
In Mathematics, we know the following is true:
$$\int \frac{1}{x} \space dx = \ln(x)$$
Not only that, this rule works for constants added to x:
$$\int \frac{1}{x + 1}\space dx = \ln(x + 1) + C$$
$$\int \frac{1}{x + 3}\space dx = \ln(x + 3) + C$$
$$\int \frac{1}{x - 37}\space dx = \ln(x - 37) + C$$
$$\int \frac{1}{x - 42}\space dx = \ln(x - 42) + C$$
So its pretty safe to say that $$\int \frac{1}{x + a}\space dx = \ln(x + a) + C$$ But the moment I introduce $x^a$ where $a$ is not equal to 1, the model collapses. The integral of $1/x^a$ is not equal to $\ln(x^a)$. The same goes for $\cos(x)$, and $\sin(x)$, and other trig functions.
So when are we allowed or not allowed to use the rule of $\ln(x)$ when integrating functions?
AI: Generally speaking, "using $\ln (x)$" as a rule or technique is unheard of. When one speaks of techniques, they usually include integration by substitution, integration by parts, trig substitutions, partial fractions, etc. With introductory calculus in mind, $\ln |x|$ is defined as $\int \frac{1}{x} \ dx.$ This can be extended to $\ln |u| = \int \frac{1}{u} \ du.$ Note that there are many more definitions for $\ln (x)$, but I felt this best related particularly to your examples.
For your first couple of examples, when choosing your $u$ to be the denominator, the $du$ is simply equal to $dx.$ This is what 'allows' the integrand to be evaluated to just $\ln |u|$ where $u$ is a linear expression.
In regards to $\int \frac{1}{x^2 + a} \ dx$, this can be handled using an inverse tangent and would be evaluated to
$$\dfrac{\operatorname{arctan}(\frac{x}{\sqrt{a}})}{\sqrt{a}} + C$$
For integrals of the form
$$\int \frac{1}{x^n + a} \ dx$$
where $n \ge 3$, you will have to revert to partial fractions. For more on partial fractions, see this. |
H: Algorithm to find transform random pairs into polar coordinates
I have some pairs of real numbers $(\rho_1,\alpha_1),\dots,(\rho_n, \alpha_n)$. I know that all my $\rho$'s are positive, but there are no constraints on my $\alpha$'s. I want to find a function $\phi$ such that $(\rho_1,\theta_1),\dots,(\rho_n,\theta_n)$ are valid polar coordinates, with $\theta_i = \phi(\rho_i,\alpha_i)$.
Is there a way to find such a $\phi$?
Thanks!
AI: If I understood this correctly, you want a bijection from $\mathbb{R^+}\times\mathbb{R^+}$ to $\mathbb{R^+}\times[0,2\pi)$. How about:
$$
\phi(\rho_i, \alpha_i) = \left(\rho_i, 2\pi\tanh(\alpha_i)\right)
$$
Here is a plot of $2\pi\tanh(x)$ on $[0,3]$: |
H: Using binomial theorem find general formula for the coefficients
Using the binomial theorem (http://en.wikipedia.org/wiki/Binomial_theorem), find the general formula for the coefficients of the expansion:
$$
\left(\sum_{i=0}^{\infty}\frac{t^{2i}}{n^i6^ii!}\left(1-\frac{t^2}{6}+\frac{t^4}{120}\right)\right)^n
$$
Thank you for your help.
AI: The inner sum is
$$
\sum_{i=0}^\infty\frac{t^{2i}}{n^i6^ii!}=e^{\frac{t^2}{6n}}\tag{1}
$$
When raised to the $n^{\text{th}}$ power, $(1)$ becomes $e^{t^2/6}$. So it remains to compute
$$
e^{t^2/6}\left(1-\frac{t^2}{6}+\frac{t^4}{120}\right)^n
=e^{t^2/6}\sum_{j=0}^n\binom{n}{j}\left(-\frac{t^2}{6}+\frac{t^4}{120}\right)^j\tag{2}
$$
If the desire is to compute $(2)$ up to $O(t^{2k})$, one need only sum the first $k$ terms; that is,
$$
e^{t^2/6}\left(1-\frac{t^2}{6}+\frac{t^4}{120}\right)^n=e^{t^2/6}\sum_{j=0}^{k-1}\binom{n}{j}\left(-\frac{t^2}{6}+\frac{t^4}{120}\right)^j+e^{t^2/6}O(t^{2k})\tag{3}
$$
H: On calculating $\sigma(n^2) \pmod 4$ if $n$ is odd
This will be my very first post in math.stackexchange, so please bear with me if I make any silly mistakes with my maths.
So, to proceed: I am trying to calculate $\sigma(n^2) \mod 4$, given that $n$ is odd.
If I let $n = \displaystyle\prod_{i=1}^{r}{{p_i}^{{\alpha}_i}}$, then by considering the cases $p_i \equiv 1 \pmod 4$ and $p_i \equiv 3 \pmod 4$ separately (and taking the exponents ${\alpha}_i$ into consideration as well), I am led to the final congruence relation:
$$\sigma(n^2) \equiv (-1)^c \pmod 4,$$
where $$c = \left|\left\{i|1 \le i \le r, p_i \equiv 1 \pmod4, 2 \nmid {\alpha}_i \right\}\right|.$$
My question now would be: Is this as far as we could go with this congruence? I mean, is there no further possible improvement to this congruence, as far as computing $\sigma(n^2) \pmod 4$ for odd $n$ is concerned?
Appreciate any of your replies/feedback on this.
Edit: In response to Marvis's inquiry as to what sort of improvement I am looking at - I am trying to determine whether $\sigma(n^2) \equiv 1 \pmod 4 \hspace{0.10in} XOR \hspace{0.10in} \sigma(n^2) \equiv 3 \pmod 4$, given that I also know that $4 \nmid \left(\sigma(n) - 2\right)$.
AI: If $p \equiv 1\pmod{4}$, then $\sigma (p^{2 \alpha}) = 1 + p + p^2 + \cdots + p^{2 \alpha} \equiv (2 \alpha + 1)\pmod{4} \equiv (-1)^{\alpha}\pmod{4}$, since each term is $1 \pmod{4}$.
If $p \equiv 3\pmod{4}$, then $\sigma (p^{2 \alpha}) = 1 + p + p^2 + \cdots + p^{2 \alpha} \equiv 1\pmod{4}$, since $$1 + p + p^2 + \cdots + p^{2 \alpha} = 1 + p(1+p) + p^3(1+p) + p^5(1+p) + \cdots + p^{2 \alpha-1}(1+p)$$ and $1+p \equiv 0 \pmod{4}$.
Hence, $$\sigma(n^2) \equiv \prod_{p_i\ \text{primes of the form $4k+1$}} (-1)^{\alpha_i} \pmod{4}$$
Hence, if $\displaystyle \alpha = \sum_{i} \alpha_i$ where $\alpha_i$ is the highest power of the prime $p_i$ of the form $4k+1$ dividing $n$, then $$\sigma(n^2) \equiv (-1)^{\alpha} \pmod{4}$$
What sort of improvement are you looking at? |
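The congruence above can be sanity-checked by brute force; this sketch uses a naive divisor sum and trial division (both suitable only for small $n$):

```python
# Brute-force check of sigma(n^2) ≡ (-1)^alpha (mod 4) for odd n, where
# alpha is the total exponent of primes p ≡ 1 (mod 4) in n.
def sigma(m):
    return sum(d for d in range(1, m + 1) if m % d == 0)

def alpha(n):
    a, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            if p % 4 == 1:
                a += 1
            n //= p
        p += 1
    if n > 1 and n % 4 == 1:   # leftover prime factor
        a += 1
    return a

for n in range(1, 200, 2):
    assert sigma(n * n) % 4 == pow(-1, alpha(n), 4)
print("verified for odd n < 200")
```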
H: Sanity check, is $\{(-9,-3),(2,-1),(7,7),(-1,-1)\}$ a function?
EDIT#2: Yes, I'm crazy! This IS a function. Thanks for beating the correct logic into me everyone!
I'm using a website provided by my algebra textbook that has questions and answers. It has the following question:
Determine whether the following relation represents a function:
$$\{(-9,-3),(2,-1),(7,7),(-1,-1)\}$$
I answered NO, it is not a function but the website says it is. Am I wrong? If so, what am I missing?
EDIT: I was given the following definition in class:
Function: A function is a rule which assigns to each X, called the domain, a unique y, called the range.
My instructor also said that if you plot the points you can tell if it is not a function if it fails the vertical line test. Here is the graph of the above points, and for example it would fail the vertical line test if I drew one on x = 1, right?
Thanks!
Jason
AI: All first coordinates are distinct. It's the graph of a function. |
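In code, the check is just whether any first coordinate repeats (an illustrative helper):

```python
# A relation (set of ordered pairs) is a function iff no x-value repeats.
def is_function(pairs):
    xs = [x for x, _ in pairs]
    return len(xs) == len(set(xs))

rel = {(-9, -3), (2, -1), (7, 7), (-1, -1)}
print(is_function(rel))               # True: all first coordinates distinct
print(is_function({(1, 2), (1, 3)}))  # False: 1 is sent to two different values
```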
H: Is it true that a 3rd order polynomial must have at least one real root?
While solving a problem a friend said - this polynomial is $3^{rd}$ order ($ax^3+bx^2+cx+d$), with $\{a,b,c,d\}$ real coefficients, so it must have a real root. I didn't want to sound stupid and I said sure.
I can't figure out if he's right. Is he right? Can someone help me with this?
Edit: Earlier I had asked if it must have a negative root. It's the real root that he said such polynomial must have.
AI: It is true that a cubic polynomial must have a real root. Since the lead coefficient is not $0$, we have that
$$
\lim_{x\to-\infty}ax^3+bx^2+cx+d=\left\{\begin{array}{}-\infty&\text{if }a>0\\+\infty&\text{if }a<0\end{array}\right.
$$
and
$$
\lim_{x\to+\infty}ax^3+bx^2+cx+d=\left\{\begin{array}{}+\infty&\text{if }a>0\\-\infty&\text{if }a<0\end{array}\right.
$$
Since a polynomial is continuous, by the Intermediate Value Theorem, if it takes a positive value and a negative value, it must take every value in between, in particular $0$. |
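The IVT argument also gives an algorithm: bracket a sign change and bisect. A sketch with illustrative coefficients $(a,b,c,d)=(1,0,-1,-2)$, i.e. $x^3-x-2$:

```python
# Once the cubic has opposite signs at two points, a real root lies between.
def cubic(x, a=1.0, b=0.0, c=-1.0, d=-2.0):
    return ((a * x + b) * x + c) * x + d

lo, hi = -10.0, 10.0          # cubic(lo) < 0 < cubic(hi) since a > 0
for _ in range(100):
    mid = (lo + hi) / 2
    if cubic(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)  # ~1.5213797, the real root of x^3 - x - 2
```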
H: $\varphi:M\to N$ continuous surjective and closed. Then $f$ continuous iff $f\circ\varphi$ continuous.
$\varphi\colon M\to N$ continuous surjective and closed. Then $f\colon N\to P$ continuous iff $f\circ\varphi\colon M\to P$ is continuous. (Topological spaces)
I think that this proposition is true like I noted in $\varphi\colon M\to N$ continuous and open. Then $f$ continuous iff $f\circ\varphi$ continuous. (My commentary in Added (2)) which is true if we put $\varphi$ open instead of closed (in this case the proof is straightforward).
I tried a proof to this harder fact but is so large. I was put this as an possible answer to this question.
AI: Here's a way of doing it:
Now, let $f\colon N\to P$, $\varphi\colon M\to N$, and assume that $\varphi$ is continuous, onto, and closed. We assume that $f\circ\varphi$ is continuous, and we want to show that $f$ is continuous. It suffices to show that the inverse image (under $f$) of a closed set is closed.
Let $C\subseteq P$ be closed. Since $f\circ \varphi$ is continuous, then $(f\circ\varphi)^{-1}(C)$ is closed. Therefore, $\varphi((f\circ\varphi)^{-1}(C))$ is a closed subset of $N$.
I claim that $\varphi((f\circ\varphi)^{-1}(C)) = f^{-1}(C)$. Indeed, note that $(f\circ\varphi)^{-1}(C) = \varphi^{-1}(f^{-1}(C))$, and that since $\varphi$ is onto, we have $\varphi(\varphi^{-1}(B)) = B$ for all $B\subseteq N$.
Thus, $f^{-1}(C) = \varphi((f\circ\varphi)^{-1}(C))$, which is closed, so $f$ is continuous (since inverse images of closed subsets are closed). $\Box$
H: Continuity of positive operators
How does one prove that a positive linear operator $T:C[0,1]\to \mathbb{R}$, in the sense that $T(f)\geq 0$ whenever $f\geq 0$, is bounded?
AI: Suppose $\|f\|_\infty \leq 1$. Then $-1\le f\le 1$, so
$-T(1) = T(-1)\le T(f) \le T(1)$, hence $|T(f)|\le T(1)$ and therefore $\|T\| \le T(1)$. In fact equality is achieved, since
$\|1\|_\infty = 1$.
H: Two sequences with convergent ratio
Let $ (b_{n})$ be a decreasing a sequence such that $0< b_{n}<1$ for all $n\geq 1$, and $b_{n}\to 0$ as $n\to \infty$. Is there any way to find another sequence $(a_{n})$ with $\frac{a_{n}}{b_{n}}$ converges to a nonzero constant, and $\frac{a_{n}}{b^{2}_{n}}$ is bounded.
AI: By definition, if $\frac{a_n}{b_n}\to L$ for some non-zero $L$, then for any $\epsilon>0$, there exists an $N$ such that for all $n>N$, we have $|\frac{a_n}{b_n}-L|<\epsilon$.
Let's take $\epsilon=|\frac{L}{2}|$, so that $|\frac{a_n}{b_n}-L|<|\frac{L}{2}|$ for all $n>N$. In particular, $|\frac{a_n}{b_n}|>|\frac{L}{2}|$ for all $n>N$. Then
$$\bigg|\frac{a_n}{b_n^2}\bigg|>\bigg|\frac{L}{2}\bigg|\cdot\frac{1}{b_n},$$
but $\frac{1}{b_n}\to\infty$ because $b_n\to 0$, so $\frac{a_n}{b_n^2}$ cannot be bounded. |
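A concrete illustration (the sequences are illustrative choices): with $b_n = 1/n$ and $a_n = 2/n + 1/n^2$, the first ratio tends to $2$ while the second equals $2n+1$:

```python
ratios1 = [(2 / n + 1 / n ** 2) / (1 / n) for n in (10, 100, 1000)]
ratios2 = [(2 / n + 1 / n ** 2) / (1 / n) ** 2 for n in (10, 100, 1000)]
print(ratios1)  # each is 2 + 1/n, approaching 2
print(ratios2)  # each is 2n + 1: unbounded
```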
H: Expressing $\widehat{MN}=\{x : x \mid mn\}$ as a product of $\widehat M$ and $\widehat N$.
Let $m,n$ be any two positive integers. Note $\widehat X$ the set of positive divisors of $x$.
$$\widehat X = \{ d : d \mid x\}$$
(do not confuse it with $\hat a = \{x : x \equiv a \mod m\}$)
Assume $(m,n)=1$. How could one prove that
$$\widehat {MN}=\{d:d\mid mn\}$$
is the product $\widehat M \cdot \widehat N$ in the sense that all divisors of $mn$ appear in the product
$$\left(\sum d_i+\sum d_i d_j+\cdots+ d_1 d_2 \cdots d_r \right) \left(\sum e_i+\sum e_i e_j+\cdots+ e_1 e_2 \cdots e_r \right)$$
(clearly $e_1 e_2 \cdots e_r=m$ and $ d_1 d_2 \cdots d_r=n$) where $d_i$ are the divisors of $n$ and $e_i$ those of $m$?
This is essential in the proof that if $f$ is multiplicative, then $$F(n)=\sum_{d \mid n}f(d)$$ also is.
AI: Let's see if I correctly understood what you asked:
$\,1\,.-\,\,$ Let $\,d\mid MN\Longrightarrow\,$ every prime divisor of $\,d\,$ divides either $\,M\,$ or $\,N\,$, but not both as $\,(M,N)=1\,$ , so putting$$d_M:=\{\,p\mid d\;\;;\;\;p\mid M\,\,,p\,\,\,\text{a prime}\}\,\,,\,d_N:=\{\,p\mid d\;\;;\;\;p\mid N\,\,,p\,\,\,\text{a prime}\}$$we get that $$d=a_1\cdot\ldots\cdot a_k\cdot b_1\cdot\ldots\cdot b_l\,\,,\,a_i\in d_M\,\,,\,b_i\in d_N\,$$ (primes repeated according to multiplicity), so $\,d\,$ is of the required form
$\,2\,.-\,$ On the other hand, if $\,\,d=a_1\cdot\ldots\cdot a_k\cdot b_1\cdot\ldots\cdot b_l\,\,,\,a_i\in d_M\,\,,\,b_i\in d_N\,$ , then clearly $\,d\mid MN\,$
H: Probably simple factoring problem
I came across this in a friend's 12th grade math homework and couldn't solve it. I want to factor the following trinomial:
$$3x^2 -8x + 1.$$
How to solve this is far from immediately clear to me, but it is surely very easy. How is it done?
AI: Hint: Use the quadratic formula. |
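Carrying out the hint numerically: the roots are $(4\pm\sqrt{13})/3$, so the factorization is $3\left(x-\frac{4+\sqrt{13}}{3}\right)\left(x-\frac{4-\sqrt{13}}{3}\right)$.

```python
import math

a, b, c = 3.0, -8.0, 1.0
disc = b * b - 4 * a * c                 # 64 - 12 = 52
r1 = (-b + math.sqrt(disc)) / (2 * a)    # (4 + sqrt(13)) / 3
r2 = (-b - math.sqrt(disc)) / (2 * a)    # (4 - sqrt(13)) / 3
print(r1, r2)

x = 0.7  # arbitrary test point: a*(x - r1)*(x - r2) reproduces the trinomial
print(a * (x - r1) * (x - r2), a * x * x + b * x + c)
```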
H: Is this sequence of abelian groups exact?
Let $A$ and $A'$ be abelian groups.
Let $f\colon A \rightarrow A'$ be a surjective homomorphism.
Let $B$ be a subgroup of $A$.
Let $B' = f(B)$.
Let $A_0 = Ker(f)$.
Let $B_0 = A_0 \cap B$.
Is the following sequence exact?
$0 \rightarrow A_0/B_0 \rightarrow A/B \rightarrow A'/B' \rightarrow 0$
AI: Since $f\colon A\to A'/B'$ is onto, and factors through $A/B$, then $A/B\to A'/B'$ is onto.
Since $A_0\to A$ is one-to-one, the kernel of $A_0\to A/B$ is $A_0\cap B=B_0$, so $A_0/B_0$ embeds into $A/B$ by mapping $a+B_0$ to $a+B$.
If $a\in A_0$, then $f(a+B) = f(a)+B'$, and since $f(a)=0$, then the composition is zero.
So the only issue is whether, if $a+B$ maps to zero, then $a+B$ is in the image of $A_0/B_0$. If $a+B$ maps to zero in $A'/B'$, then $f(a)\in B'$, so there exists $b\in B$ such that $f(b)=f(a)$. Thus, $a-b\in A_0$. Since $a+B = (a-b)+B$, and $(a-b)+B$ is the image of $a-b+B_0$, this follows. |
H: Set defined by $xy-zw=1$
This should be an easy question, but I found it ungooglable and not obvious to visualize...
What geometric object is defined by the equation $xy-zw=1$ in $\mathbb R^4$? And what is the homotopy type of the complement?
AI: It's $\text{SL}_2(\mathbb{R})$, of course! As a "geometric object" it may equivalently be realized as the unit sphere $x^2 + y^2 - z^2 - w^2 = 1$ in $\mathbb{R}^{2,2}$, which exhibits it (or maybe one of its connected components?) as a homogeneous space for the orthogonal group $\text{O}(2, 2)$. In particular it can be given the structure of a pseudo-Riemannian manifold.
The complement of the unit sphere in $\mathbb{R}^{2, 2}$ has two connected components
$$X = \{ (x, y, z, w) : x^2 + y^2 - z^2 - w^2 > 1 \}$$
$$Y = \{ (x, y, z, w) : x^2 + y^2 - z^2 - w^2 < 1 \}.$$
$X$ deformation retracts via the straight-line homotopy $(x, y, (1-t)z, (1-t)w)$ to $\{ (x, y) : x^2 + y^2 > 1 \}$, which is homotopy equivalent to $S^1$.
$Y$ deformation retracts via the straight-line homotopy $((1-t)x, (1-t)y, z, w)$ to $\{ (z, w) : z^2 + w^2 > -1 \}$, which is contractible. |
H: Finding the limit
I need to find the limit of this problem. I pretty much know you have to multiply by the conjugate but I get lost after I do that.
$$\lim\limits_{x\to 1} \frac{(1 / \sqrt{x}) - 1}{1-x}$$
AI: You don't have to multiply by a conjugate. Hint: $1-x=(1-\sqrt{x})(1+\sqrt{x})$. |
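Following the hint, $\frac{1/\sqrt{x}-1}{1-x}=\frac{1-\sqrt{x}}{\sqrt{x}\,(1-\sqrt{x})(1+\sqrt{x})}=\frac{1}{\sqrt{x}(1+\sqrt{x})}\to\frac12$. A numeric probe:

```python
import math

def f(x):
    return (1 / math.sqrt(x) - 1) / (1 - x)

for x in (0.9, 0.99, 0.999999):
    print(x, f(x))   # values approach 0.5 as x -> 1
```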
H: Number of prime divisors of the order of $E_8(q)$.
I am trying to compute the number of prime divisors of the order of $E_8(q)$. I am interested in the general solution, but in particular, my problem calls for $q=p^{15}$ (for prime $p$) and $q\equiv 0,1,$ or $ 4 \mod 5$, if this helps at all.
So, the order is $|E_8(q)|=q^{120}(q^{30}-1)(q^{24}-1)(q^{20}-1)(q^{18}-1)(q^{14}-1)(q^{12}-1)(q^{8}-1)(q^{2}-1)$ (ref: Wilson, The Finite Simple Groups). Is there any more efficient algorithm than the standard to factorize integers of this form? I am primarily interested in knowing the number of prime divisors, but the divisors themselves would also be very useful.
AI: So, among other things, you want to know the (number of) prime factors of $p^{450}-1$ for various primes $p$. Of course the polynomial $x^{450}-1$ factors into lots of smaller pieces, but there's still an irreducible part of degree 120. I can't imagine there would be a general formula for the number of prime factors of $f(p)$ for such a polynomial $p$, nor even much chance of computing them for $p$ with more than 2 digits.
You may find some useful information in the book, Brillhart, Lehmer, Selfridge, Tuckerman, Wagstaff, Factorizations of $b^n\pm1$.
Also, I'm not certain what algorithm you refer to when you write, "the standard." The standard algorithm for factoring that kind of number is probably the Special Number Field Sieve; is that what you had in mind? |
H: Prove that $\tan^{-1}\left(\frac{x+1}{1-x}\right)=\frac{\pi}{4}+\tan^{-1}(x)$
The question is:
Prove that $\tan^{-1}\left(\frac{x+1}{1-x}\right)=\frac{\pi}{4}+\tan^{-1}(x)$.
It's from A-level further mathematics.
AI: The identity should read $$\tan^{-1} \left(\dfrac{x+1}{1-x} \right) = \tan^{-1}(x) + \pi/4$$
Let $\tan^{-1}(x) = \theta$ i.e. $x = \tan(\theta)$. Then we get that $$\dfrac{x+1}{1-x} = \dfrac{\tan(\theta) + 1}{1-\tan(\theta)}$$
Recall that $\tan(A+B) = \dfrac{\tan(A) + \tan(B)}{1 - \tan(A) \tan(B)}$.
Taking $B = \pi/4$, we get that $\tan(A+ \pi/4) = \dfrac{\tan(A) + 1}{1 - \tan(A)}$.
Hence, we get that
$$\dfrac{x+1}{1-x} = \dfrac{\tan(\theta) + 1}{1-\tan(\theta)} = \tan(\theta + \pi/4)$$
Hence, $$\tan^{-1} \left(\dfrac{x+1}{1-x} \right) = \theta + \pi/4 = \tan^{-1}(x) + \pi/4$$
Proof of $\tan(A+B) = \dfrac{\tan(A) + \tan(B)}{1 - \tan(A) \tan(B)}$
$$\tan(A+B) = \dfrac{\sin(A+B)}{\cos(A+B)} = \dfrac{\sin(A) \cos(B) + \cos(A) \sin(B)}{\cos(A) \cos(B) - \sin(A) \sin(B)}$$
Assuming $\cos(A) \cos(B) \neq 0$, divide numerator and denominator by $\cos(A) \cos(B)$, to get that
$$\tan(A+B) = \dfrac{\sin(A) \cos(B) + \cos(A) \sin(B)}{\cos(A) \cos(B) - \sin(A) \sin(B)} = \dfrac{\tan(A) + \tan(B)}{1 - \tan(A) \tan(B)}$$ |
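A numerical spot check of the identity on its range of validity $x<1$ (principal branch):

```python
import math

# arctan((x+1)/(1-x)) should equal arctan(x) + pi/4 for x < 1.
for x in (-2.0, 0.0, 0.5, 0.9):
    lhs = math.atan((x + 1) / (1 - x))
    rhs = math.atan(x) + math.pi / 4
    print(x, lhs, rhs)   # lhs and rhs agree for each sampled x
```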
H: Find $P(b^2\ge4ac)$ given that $a,b,c\in\{-9,-8,\dots,8,9\},a\ne0$
I was doing some review on probability and came across the following exercise:
A quadratic equation $ax^2+bx+c=0$ is copied by a typist. However, the numbers
standing for a, b and c are blurred and she can only see that they are integers of one
digit. What is the probability that the equation she types has real roots?
Quadratic equations have real roots when the discriminant is greater than or equal to $0$. Therefore $$P(\text{real roots})=P(b^2-4ac\ge0)=P(b^2\ge4ac)$$
My original thoughts were to find $1-P\left(b^2<4ac\right)$ where I would break it into cases where $b$ ranges from $0$ to $9$. I would then find out in how many cases $4ac$ was larger than $b^2$ and divide it by $19^2*18$ (since $a\ne0$). When I found out it would be too tedious I thought of a possible geometric interpretation. Maybe the probability could be expressed as ratios between volumes or something similar. When that seemed to over-complicate the problem I figured I was probably approaching it incorrectly.
Any hints or nudges in the general direction would be greatly appreciated.
AI: In the case where $|4ac| > b^2$ the probability is 1/2 by changing the sign of $c$.
In the case where $|4ac| \leq b^2$ and $a \neq 0$, the probability is $1$.
In the case where $a=0$ one has to decide how to interpret the words "real roots". The simplest is to exclude equations of degree less than 2 (probability 1/19). If a single root of multiplicity one is allowed, or the equation $0=0$, then the answer will be different.
The main part of the problem is then to count the cases where $b^2 < 4|ac|$. If $|ac| > 20$ there is no constraint on $b$. There are a finite number of possibilities with $|ac| \leq 20$ and they can be accounted for by hand. |
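A brute-force count settles the numeric answer; this sketch takes the first interpretation above (degenerate $a=0$ equations excluded) and reads "one digit" as $-9,\dots,9$:

```python
from fractions import Fraction

total = good = 0
for a in range(-9, 10):
    if a == 0:
        continue
    for b in range(-9, 10):
        for c in range(-9, 10):
            total += 1
            if b * b >= 4 * a * c:   # real roots
                good += 1
p = Fraction(good, total)
print(good, total, float(p))
```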
H: A question about harmonic form of trigonometric functions.
The question is:
i) Find the maximum and minimum values.
ii) the smallest non-negative value of x for which this occurs.
12cos(a)-9sin(a)
I think it should be changed into the form of Rcos(a+x) and it should be 15cos(a+36.87), and I get the answer i)+15 / -15 ii)323.13 (360-36.87) / 143.13 (180-36.87).
But the answer given by the book is " i)15, -15 ii)306.87, 143.13 "
I'm really confused by that answer..Am I wrong?
BTW, I'm self studying A-level further pure mathematics, but the book(written by BRIAN and MARK GAULTER published by Oxford university press) I get seems not very helpful.
so I truly hope someone can recommend some books/websites for self learning.
AI: Using the formula $\sin(a+b)=\sin(a)\cos(b)+\cos(a)\sin(b)$, we get
$$
12\cos(a)-9\sin(a)=15\sin(a+\pi-\arctan(4/3))
$$
So the maximum and minimum are $+15$ and $-15$.
The smallest non-negative value for the maximum would be when $a+\pi-\arctan(4/3)=5\pi/2$; that is, $a=3\pi/2+\arctan(4/3)$.
The smallest non-negative value for the minimum would be when $a+\pi-\arctan(4/3)=3\pi/2$; that is, $a=\pi/2+\arctan(4/3)$.
Problem with Book Answer:
Converting to degrees, my answers are
maximum at $323.1301^\circ$ and minimum at $143.1301^\circ$.
It appears the first book answer is wrong. The answers should be $180^\circ$ apart. |
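The corrected answers can be confirmed numerically (degrees throughout):

```python
import math

def f(deg):
    a = math.radians(deg)
    return 12 * math.cos(a) - 9 * math.sin(a)

a_max = 270 + math.degrees(math.atan(4 / 3))   # 323.1301...
a_min = 90 + math.degrees(math.atan(4 / 3))    # 143.1301...
print(a_max, f(a_max))   # maximum value 15 attained here
print(a_min, f(a_min))   # minimum value -15 attained here
```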
H: What is the mutual information $I(X;X)$?
$X$ is a random variable with normal distribution, assume $Y=X$, what is the mutual information $I(X;Y)$?
I guess that $h(Y|X)=0$ since when $X$ is known, $Y$ is completely known, so $$I(X;Y)=h(Y)-h(Y|X)=h(Y)=\frac{1}{2}\log 2\pi e\sigma^2$$ nat.
But, I was told I was wrong! and a numerical computation also shows that the value of $$I(X;Y) \neq \frac{1}{2}\log 2\pi e\sigma^2$$ Where is my mistake? Please help me out of this problem, thanks a lot! (Please note that $X$ and $Y$ are both continuous).
AI: If $(X_1,X_2)$ is a Gaussian vector each variable having the same variance ($\sigma^2$) with covariance matrix $K$, then $$h(X_1,X_2)=\frac{1}{2}\log((2\pi e)^2|K|)=\frac{1}{2}\log((2\pi e)^2 \sigma^4(1-\rho^2))$$ where $\rho$ is the correlation coefficient.
Now $I(X_1:X_2)=2h(X_1)-h(X_1,X_2)=-\frac{1}{2}\log(1-\rho^2)$.
When $X_1=X_2$, $\rho=1$ and hence $I(X:X)=\infty$. |
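To make the divergence concrete, here is a small numerical sketch (not from the original answer) of $I(X_1;X_2) = -\tfrac12\log(1-\rho^2)$ as $\rho \to 1$:

```python
import math

# Mutual information (in nats) of jointly Gaussian variables with
# correlation rho; it blows up as rho -> 1, matching I(X;X) = infinity.
def gaussian_mi(rho):
    return -0.5 * math.log(1.0 - rho ** 2)

for rho in (0.9, 0.99, 0.9999, 0.999999):
    print(f"rho = {rho}: I = {gaussian_mi(rho):.3f} nats")
```

This also explains the numerical computation mentioned in the question: any finite-sample estimate of $I(X;Y)$ with $Y=X$ will keep growing rather than settle at $\frac{1}{2}\log 2\pi e\sigma^2$.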
H: Homomorphism from $\mathbb{Q}$ to an ordered field F
I know that there exists a unique injective function $\gamma : \mathbb Q →F$ for any ordered field F.
I don't understand why 'Prove $\gamma(r) = r\cdot 1_F$ for every $r\in \mathbb Q$' is an exercise. Don't we just see $\gamma(r)$ as an element of $\mathbb Q$, hence $\gamma(r)=r$? Am I missing something?
AI: If I am interpreting this question correctly:
$\gamma : \mathbb{Q} \rightarrow F$ is probably defined by mapping $0 \mapsto 0_F$ and $1 \mapsto 1_F$. This extends to an injective ring homomorphism into $F$.
Let $\times$ denote the multiplication on $F$; I will now define $\cdot$. The notation $r \cdot 1_F$ is defined as follows: if $r \in \mathbb{Z}$ and $r \geq 0$, then $r\cdot 1_F = 1_F + \dots + 1_F$ ($r$ times). If $r \in \mathbb{Z}$ and $r < 0$, then $r \cdot 1_F = (-1_F) + \dots + (-1_F)$ ($|r|$ times). In general, if $r = \frac{p}{q}$, then $r \cdot 1_F = (p \cdot 1_F)\times (q \cdot 1_F)^{-1}$; recall that $\times$ is the operation on $F$.
So the exercise is to show that $\gamma(r) = r \cdot 1_F$, where $\cdot$ is not the multiplication operation on $F$ but the operation just defined. Now you have an actual question to answer, but it is still pretty easy.
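As an aside, here is a small sketch of the construction $r \cdot 1_F = (p\cdot 1_F)\times(q\cdot 1_F)^{-1}$ (names mine, with $F=\mathbb{Q}$ via `Fraction` standing in for an arbitrary ordered field): only repeated addition of $\pm 1_F$ and the field operations of $F$ are used.

```python
from fractions import Fraction

def int_times_one(n, one, zero):
    """n . 1_F by repeated addition of 1_F (or of -1_F when n < 0)."""
    acc = zero
    step = one if n >= 0 else -one
    for _ in range(abs(n)):
        acc = acc + step
    return acc

def gamma(r, one, zero):
    """gamma(p/q) = (p . 1_F) x (q . 1_F)^{-1}, using F's multiplication."""
    p, q = r.numerator, r.denominator
    return int_times_one(p, one, zero) * int_times_one(q, one, zero) ** -1

print(gamma(Fraction(-7, 3), Fraction(1), Fraction(0)))  # -7/3
```

With $F=\mathbb{Q}$ the map is of course the identity, which is exactly what the exercise asks you to verify abstractly.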
H: Similar Matrices and Change of Basis
I'm trying to understand a little better change of basis matrices and how they relate to determining if two matrices are similar.
Given finite-dimensional vector spaces $V,W$ with $\textrm{dim} V=\text{dim} W$, a linear transformation $T:W\rightarrow V$, and ordered bases $V_B$ and $W_B$.
Now my book only covers the case where $W=V$, and it calls two matrices similar when there exists an invertible change of basis matrix $M$ such that:
$$[M]_{W_B}^{V_B}[T]_{W_B}[M]^{W_B}_{V_B}=[T]_{V_B}$$
Now how does this work when $W\neq V$? Are the columns of the change of basis matrix still just the basis vectors of $W$ expressed in coordinates with respect to $V_B$? Or does some type of mapping of $W_B$ to $V_B$ have to be done first? I'm asking because I had been viewing a change of basis matrix as a special case where the transformation is the identity transformation.
I hope this question makes sense.
AI: When $W = V$ you generally choose the same basis for $W$ as for $V$ when you change bases. (The point of doing this is, for example, so that you can sensibly use the corresponding matrix representation to take powers or exponentials of the corresponding linear transformation and to find eigenvalues and eigenvectors, etc.) When $W \neq V$ you are free (if you want) to change bases both in $V$ and in $W$, but I don't think people generally call the corresponding equivalence relation similarity. Similarity is very much a $W = V$ kind of phenomenon.
(Exercise: show that up to a change of basis in $V$ and in $W$, any linear transformation is uniquely determined by the dimension of its range.) |
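To illustrate the $W=V$ case numerically (my own example, not from the answer): with the new basis vectors as the columns of $M$, the matrix of $T$ in the new basis is $M^{-1}[T]M$, and similar matrices share trace, determinant, and eigenvalues.

```python
import numpy as np

T_old = np.array([[2.0, 1.0],
                  [0.0, 3.0]])        # [T] in the standard basis
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # new basis vectors as columns
T_new = np.linalg.inv(M) @ T_old @ M  # [T] in the new basis

# Similarity invariants are preserved:
print(np.trace(T_new), np.linalg.det(T_new))  # 5.0 and 6.0 (up to rounding)
print(np.sort(np.linalg.eigvals(T_new)))      # same eigenvalues as T_old
```

Here the second basis vector $(1,1)$ happens to be an eigenvector, so $T_{\text{new}}$ comes out diagonal.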
H: $M_m$ is naturally isomorphic to $(F_m/F_m^2)^{*}$
Let us denote $M_m$ be the set of tangent vectors to a manifold $M$ at point $m$ and is called tangent space to $M$ at point $m$ we denote $\bar{F_m}$ be the set of all germs at point $m$ and $F_m$ be the set of germs vanishes at $m$ In warner book there is a lemma:
$M_m$ is naturally isomorphic to $(F_m/F_m^2)^{*}$:
In proof he says if $v\in M_m$, then $v$ is a linear function on $F_m$ vanishing on $F_m^2$ because of the derivation property ,but I do not get why is that so?Could any one explain me a explicitly why?
derivation property says $v(f.g)=f(m)v(g)+g(m)v(f)$, but I do not connect this with the above line of my confusion. Thank you.
AI: I'll write an answer now. Regarding the question in the OP: if $\nu$ is a derivation on $\bar F_m$ and $f,g \in F_m$, then
\[ \nu(fg) = f(m)\nu(g) + g(m)\nu(f) = 0 \]
as $f(m) = g(m) = 0$. These elements generate $F_m^2$, so we have $\nu|_{F_m^2} = 0$ and so a well-defined element $\bar\nu\in (F_m/F_m^2)^*$ by $\bar \nu(f) = \nu(f)$.
Regarding the question in your comment: If $\ell \in (F_m/F_m^2)^*$, then $\ell\colon F_m/F_m^2 \to \mathbb R$ is linear. For $f \in \bar F_m$, we have $\hat f := f - f(m) \in F_m$. The coset $\{\hat f\}$ is by definition
\[ \{ \hat f\} = f - f(m) + F_m^2 = \{f-f(m)+h \mid h \in F_m^2\} \]
As $\ell$ is a linear map on the cosets we may define $\nu_\ell(f) = \ell(\{\hat f\})$. Let us show that $\nu_\ell$ has the derivation property (linearity follows directly from the linearity of $\ell$). So for $f,g \in \bar F_m$ we have
\begin{align*}
\hat f \hat g &= \bigl(f - f(m)\bigr)\bigl(g - g(m)\bigr)\\
&= \bigl(f- f(m)\bigr)g - \hat f g(m)\\
&= fg - f(m)\bigl( g - g(m)\bigr) - f(m)g(m) - \hat fg(m)\\
&= \widehat{fg} -f(m)\hat g - g(m)\hat f\\
\iff \widehat{fg} &= f(m)\hat g + g(m)\hat f + \hat f \hat g
\end{align*}
Now $\hat f \hat g \in F_m^2$, so taking cosets we have
\[ \{\widehat{fg}\} = f(m)\{\hat g\} + g(m)\{\hat f\} \]
which gives by definition of $\nu_\ell$:
\[ \nu_\ell(fg)= f(m)\nu_\ell(g) + g(m)\nu_\ell(f). \] |
H: Multivariable limit $xy\ln(xy)$
Does anybody know how to prove that in $D=\{(x,y)\in\mathbb{R}^2:x>0\wedge y>0\}$ the following is true:
$$
\lim\limits_{(x,y)\to(0,0)}x\cdot y\cdot\ln{(x\cdot y)}=0
$$
I have to find a $\delta$ so that if $\|(x,y)\|=\sqrt{x^2+y^2}<\delta$, that $|x\cdot y\cdot\ln{(x\cdot y)}|<\epsilon$ follows.
But I don't know what to do, because the $\ln$ goes to minus infinity.
Can anybody solve this? Thank you!
AI: The key is to treat $xy$ as one variable. Let $z = xy$. Hence, $$\lim_{(x,y) \to (0^+,0^+)} xy \ln(xy) = \underbrace{\lim_{z \to 0^+} z \ln(z) = -\lim_{t \to \infty} t e^{-t}}_{z = e^{-t}}$$
Note that $e^t \geq \dfrac{t^2}{2}$, for $t \geq 0$. $\left(\text{$\dfrac{t^2}2$ is a term in Taylor series of $e^t$ and all the other terms are non-negative for $t >0$} \right).$
Hence,
$$t e^{-t} = \dfrac{t}{e^t} \leq \dfrac{t}{t^2/2} = \dfrac2t$$
Hence, $$0 \leq \lim_{t \to \infty} t e^{-t} \leq \lim_{t \to \infty} \dfrac2t = 0$$ |
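A quick numerical check (not part of the proof): writing $z = xy$, the product $z\ln z$ shrinks to $0$ as $z \to 0^+$.

```python
import math

# z * ln(z) for z = x*y approaching 0 from the right: the magnitude
# decays to 0 even though ln(z) -> -infinity.
for z in (1e-2, 1e-4, 1e-8, 1e-16):
    print(f"z = {z:.0e}: z*ln(z) = {z * math.log(z):.3e}")
```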
H: Prove that $G$ abelian if $|G|= pq^2$.
Let $G$ be a group of order $pq^2$, where $p \neq q$ prime and $p$ does not divide $| Aut (G) |$. Show that $G$ is abelian.
AI: Firstly,we can consider a homomorphism $f:G\rightarrow Aut(G)$ such that:
$f(x)=t_x$,where $t_x:G\rightarrow G$ is defined by $t_x(g)=xgx^{-1}$.
Note that $ker f=Z(G)$,then by first isomorphism theorm, we get:
$G/Z(G)\cong f(G) \le Aut(G)$. So $|G/Z(G)|$ divides $|Aut(G)|$. Since $p$ does not divide $|Aut(G)|$, $p$ does not divide $|G/Z(G)|$.
Hence $p$ divides $|Z(G)|$. By Cauchy's Theorem, $Z(G)$ contains an element of order $p$.
The cyclic subgroup $P$ generated by that element has order $p$ and hence is a Sylow $p$-subgroup; since it lies in the center it is normal, and a normal Sylow subgroup is the unique one.
Let $Q$ be a Sylow $q$-subgroup of $G$. Since $p$ and $q^2$ are relatively prime, we have $P\cap Q=\{1\}$. Moreover, $PQ$ is a subgroup (as $P$ is normal) of order $|P||Q|/|P\cap Q|=pq^2$, so $PQ=G$.
Now we focus on $Q$. From the class equation, $Z(Q)$ is nontrivial. If $|Z(Q)|=q^2$, then $Q$ is abelian; otherwise $|Z(Q)|=q$.
In that case $Q/Z(Q)$ has order $q$ and hence is cyclic, so $Q$ is again abelian.
In both cases, $Q$ is abelian.
Since $G=PQ$, $P$ is contained in the center, and $Q$ is abelian, we conclude that $G$ is abelian.