H: Lebesgue Integration Question
(Just read the bolded statements if you want to get straight to the point)
This question comes as an extension to one posed in Stein and Shakarchi's Real Analysis, and it is related to the notion that the integral of a positive function is equal to the volume bounded by its graph.
The text proves that $\int_{\mathbb{R}^{d}}|f(x)|dx=m(A)$ where $A:=\{(x,\alpha)\in\mathbb{R}^{d}\times\mathbb{R} : 0\leq\alpha\leq|f(x)|\}$. Assuming both $A$ and $f$ are measurable in the appropriate contexts, the proof is a simple computation:
\begin{equation*}\int_{\mathbb{R}^{d}}|f(x)|dx=\int_{\mathbb{R}^{d}}m(A_{x})dx=\int_{\mathbb{R}^{d}}\int\limits_{0}^{\infty}\chi_{A}(x,\alpha)\,d\alpha\,dx=\int\limits_{0}^{\infty}\int_{\mathbb{R}^{d}}\chi_{A}(x,\alpha)\,dx\,d\alpha=m(A)\end{equation*} (we are of course applying Tonelli's Theorem).
Now, here is the question from Stein and Sakarchi: If $f$ is integrable on $\mathbb{R}^{d}$, then define for each $\alpha>0$ the set $E_{\alpha}:=\{x:|f(x)|>\alpha\}$, and prove \begin{equation*}\int_{\mathbb{R}^{d}}|f(x)|dx=\int\limits_{0}^{\infty}m(E_{\alpha})d\alpha.\end{equation*}
The proof is basically an immediate consequence of what was already proven above, except that we use the slices $A_{\alpha}$ instead of $A_{x}$. In other words, we have: \begin{equation*}\int_{\mathbb{R}^{d}}|f(x)|dx=\int_{\mathbb{R}^{d}}m(A_{x})dx=\int\limits_{0}^{\infty}m(A_{\alpha})d\alpha=m(A).\end{equation*} Since $A_{\alpha}=E_{\alpha}$, the problem is solved. A nice geometric meaning of integrating the separate slices is that integrating $m(A_{x})\,dx$ is akin to partitioning the domain, while integrating $m(A_{\alpha})\,d\alpha$ is akin to partitioning the range. Again, the rigorous justification is the Fubini/Tonelli theorem.
Now, here is the part where I am having difficulty. I want to prove the statement
\begin{equation*}\int_{\mathbb{R}^{d}}|f(x)|^{p}dx=\int_{0}^{\infty}p\alpha^{p-1}m(E_{\alpha})d\alpha\end{equation*}
where everything is as above, and $1<p<\infty$. The above case is $p=1$. Since we are not using $\{(x,\alpha): 0\leq\alpha\leq|f(x)|^{p}\}$, the above proof technique cannot be used exactly, and I'm not sure how to proceed.
It is interesting that \begin{equation*}|f(x)|^{p}=\int\limits_{0}^{|f(x)|}p\alpha^{p-1}d\alpha\end{equation*} so that we get something like \begin{equation*}\int_{\mathbb{R}^{d}}|f(x)|^{p}dx=\int_{\mathbb{R}^{d}}\int_{0}^{|f(x)|}p\alpha^{p-1}\,d\alpha\,dx.\end{equation*} However, I'm still not sure where to take this.
AI: This is a particular case of the Cavalieri principle.
Let $f : \Omega \to \Bbb R_+$ be a measurable function, and $\Phi : \Bbb R_+ \to \Bbb R_+$ an increasing continuously differentiable function with $\Phi(0) = 0$. If we define $\rho_f(\alpha) = m(\{ x \mid f(x)>\alpha \})$, then $$\int_\Omega \Phi(f(x))\mathrm d x = \int_0^\infty \Phi'(\alpha) \rho_f(\alpha)\mathrm d \alpha.$$
This is particularly interesting because the right-hand side integrand is continuous, so the integral can be understood in the Riemann sense of the term. If you already have a measure, this is a very good definition of the Lebesgue integral. By taking the usual vertical summation of the Riemann integral, you obtain via the Cavalieri principle the horizontal summation of the Lebesgue integral.
Proof
The key arguments are the following:
the Fubini–Tonelli theorem;
$\Phi(f(x)) = \int_0^{f(x)}\Phi'(\alpha)\mathrm d \alpha$;
$\mathbf 1_{[0,f(x))}(\alpha) = \mathbf 1_{(\alpha, +\infty]}(f(x))$.
You already got the first two but you missed the third.
Let's compute:
$$\begin{aligned}
\int_\Omega \Phi(f(x))\mathrm d x &= \int_\Omega\int_0^{f(x)} \Phi'(\alpha)\mathrm d \alpha \mathrm d x \\
&= \int_0^\infty\int_{\Omega} \mathbf 1_{[0,f(x))}(\alpha)\Phi'(\alpha) \mathrm d x \mathrm d \alpha && \text{by Fubini-Tonelli}\\
&= \int_0^\infty \Phi'(\alpha)\int_\Omega\mathbf 1_{(\alpha, +\infty]}(f(x)) \mathrm d x \mathrm d \alpha&& \text{by the third argument and linearity} \\
&= \int_0^\infty \Phi'(\alpha) m(\{ x \mid f(x)>\alpha \})\mathrm d \alpha,
\end{aligned}$$
which is what we wanted. |
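A quick numerical sanity check of the Cavalieri identity (a Python sketch; the choice $\Omega=(0,\infty)$, $f(x)=e^{-x}$, $\Phi(t)=t^2$ is illustrative only and assumes numpy/scipy are available):

import numpy as np
from scipy.integrate import quad

# Here rho_f(alpha) = m({x > 0 : exp(-x) > alpha}) = -log(alpha) for 0 < alpha < 1, else 0.
lhs, _ = quad(lambda x: np.exp(-x) ** 2, 0, np.inf)   # integral of Phi(f(x)) over Omega
rhs, _ = quad(lambda a: 2 * a * (-np.log(a)), 0, 1)   # integral of Phi'(alpha) * rho_f(alpha)
print(lhs, rhs)                                       # both approximately 0.5

Both quadratures return $1/2$, as the identity predicts.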
H: Identifying $k[x_1,x_2,y_1,y_2]^{\epsilon}$ with $k[x,y]\wedge k[x,y]$
Suppose the symmetric group $S_2$ of order 2 acts on $k^4=Spec \;k[x_1, x_2, y_1, y_2]$ by the following: for $\sigma\not=e$,
$$\sigma\circ(x_1, x_2, y_1, y_2)=(x_2,x_1,y_2,y_1).$$
That is, the nontrivial element in $S_2$ swaps the indeterminates $x_1$ and $x_2$, and $y_1$ and $y_2$, simultaneously.
Let $k[x_1, x_2,y_1, y_2]^{\epsilon}$ denote the vector subspace of $k[x_1, x_2,y_1, y_2]$ consisting of $S_2$-alternating polynomials, and let $k[x,y]\wedge k[x,y]$ denote the $2^{nd}$ exterior power of the vector space $k[x,y]$ of polynomials in $2$ variables.
Then how is $k[x_1, x_2,y_1, y_2]^{\epsilon}$ identified with $k[x,y]\wedge k[x,y]$?
$\mathbf{Examples}:$ Some examples of polynomials in $k[x_1, x_2,y_1, y_2]^{\epsilon}$ include the following:
$$g_1 = x_1-x_2,\; g_2= x_1-x_2+y_1-y_2, \;\mbox{ and } \;g_3 = (x_1 + x_2)(x_1-x_2)$$
since $g_i(x_2,x_1,y_2,y_1)=-g_i(x_1,x_2,y_1,y_2)$. This basically means when you swap the $x_i$'s and the $y_i$'s, the polynomial only changes by a sign.
At the moment, this identification is not obvious to me.
AI: Hints:
We have an isomorphism $A:=k[x_1,x_2,y_1,y_2]\cong k[x_1,y_1]\otimes_k k[x_2,y_2]=B$.
The action $\tau:A\to A:p\mapsto -\sigma p$ is a vector space homomorphism.
The action of $\tau$ on $B$ is generated by $p(x_1,y_1)\otimes q(x_2,y_2)\mapsto -q(x_1,y_1)\otimes p(x_2,y_2)$.
If $\varphi\in \mathrm{End}(V)$ is a linear map, $V/\mathrm{Im}(I-\varphi)\cong \mathrm{Ker}(I-\varphi)= V_1$, where $V_1$ is the eigenspace corresponding to the eigenvalue $1$ (i.e. the fixed points of the transformation).
(I'm going on the definition that $\bigwedge^2 V:= V\otimes V/I$, where $I$ is the subspace spanned by those elements of the form $v\otimes w+w\otimes v$ with $v,w\in V$.) |
H: Proving $(U \otimes V) \otimes W \cong U \otimes (V \otimes W)$ without the universal property
Let $F$ be a commutative field, and let $U$, $V$, and $W$ be finite dimensional vector spaces over $F$. How can one prove $(U \otimes V) \otimes W \cong U \otimes (V \otimes W)$ without using the universal property?
AI: Note that you only need to show they have equal dimensions.
Then, you can use the fact that $U \otimes V$ has basis $(u_i \otimes v_j)$ when $U$ and $V$ are vector spaces, where $(u_i)$ and $(v_j)$ are bases for $U$ and $V$. |
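Spelling out that dimension count with the basis fact just quoted:
$$\dim\big((U\otimes V)\otimes W\big)=(\dim U\cdot\dim V)\cdot\dim W=\dim U\cdot(\dim V\cdot\dim W)=\dim\big(U\otimes(V\otimes W)\big),$$
and an explicit isomorphism is obtained by matching bases, $(u_i\otimes v_j)\otimes w_k\mapsto u_i\otimes(v_j\otimes w_k)$.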
H: Confusion on continuous linear forms
In Friedlander's book "Introduction to the Theory of Distributions" he claims (on page 35):
"Now the equation $$|\langle u,\phi\rangle| \le C\sum_{|\alpha|\le N}\sup\{|\partial^{\alpha}\phi(x)|:x\in K\}$$ shows that $$\langle u,\phi\rangle=0$$ if the support of $\phi$ is disjoint from $K$, so $u$ has compact support when regarded as a member of $\mathcal{D}'(X)$."
I am confused with this claim. I think in fact as a distribution $u$'s support could well be some open subset of $K$. In other words the above argument only implies
supp $u\subset K$
As the support of $u$ is defined to be the complement of the union of open sets $V$ such that $\langle u,\phi\rangle=0$ for every test function $\phi$ supported in $V$. Of course one may argue that $u$'s support is closed, and a closed subset of a compact set must be compact; but is this the correct way to interpret the claim? I feel confused.
This technical problem does not sound serious, but I think without clarifying it we cannot assert that $\mathcal{E}'(X)$ can be regarded as the subspace of $\mathcal{D}'(X)$ of distributions with compact support. One direction, that any distribution with compact support can be extended to a continuous linear form, is clear to me by the author's theorem. But the other direction is not so clear to me.
AI: A support is by definition closed, however it is defined. The definition of the support of a distribution I am familiar with, is the complement of the union of all open sets $V$ so that the distribution takes the value zero for all test functions with support inside $V$. Thus the support is indeed closed by definition.
(A partition of unity argument shows that the above-mentioned union itself has the property in question, so instead of the union we could simply say the largest open set …) |
H: Parametric equation of a cone.
I have a cone with vertex $(a, b, c)$ and base circle with center $(x_0,y_0)$ and radius $R$. I can't see what the parametric representation of the three-dimensional region inside the cone is. Any suggestions, please?
AI: The parametric equation of the circle is:
$$
\gamma(u) = (x_0 + R\cos u, y_0 + R\sin u, 0)
$$
Each point on the cone lies on a line that passes through $p(a, b, c)$ and a point on the circle. Therefore, the direction vector of such a line is:
$$
\gamma(u) - p = (x_0 + R\cos u, y_0 + R\sin u, 0) - (a, b, c) = (x_0 - a + R\cos u, y_0 - b + R\sin u, -c)
$$
And since the line passes through $p(a, b, c)$, the parametric equation of the line is:
$$
p + v\left(\gamma(u) - p\right) = (a, b, c) + v \left((x_0 - a + R\cos u, y_0 - b + R\sin u, -c)\right)
$$
Simplify to get the parametric equation of the cone:
$$
\sigma(u, v) = \left(a(1-v) + v(x_0 + R\cos u), b(1-v) + v(y_0 + R\sin u), c(1 - v)\right)
$$
Here is a plot of the cone for $p(0, 0, 2)$, $(x_0, y_0) = (-1, -1)$ and $R = 1$, when $u$ scans $[0, 2\pi]$ and $v$ scans $[0, 1]$: |
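(The plot image itself does not survive in this transcript; the following matplotlib sketch, assuming numpy and matplotlib are available, regenerates it from the formula for $\sigma(u,v)$ above.)

import numpy as np
import matplotlib.pyplot as plt

# Parameters from the answer: vertex p(0, 0, 2), circle center (-1, -1), radius 1.
a, b, c = 0.0, 0.0, 2.0
x0, y0, R = -1.0, -1.0, 1.0
u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 60), np.linspace(0, 1, 20))
X = a * (1 - v) + v * (x0 + R * np.cos(u))
Y = b * (1 - v) + v * (y0 + R * np.sin(u))
Z = c * (1 - v)
ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(X, Y, Z)
plt.show()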
H: understanding equation $\|f(x)-f(a)-Df(a)(x-a)\|\leq\|x-a\|$
I fail to understand how the author of my vector calculus textbook arrived at the equation below, based on the following equation:
$$\lim_{x\to a}\frac{\|f(x)-[f(a)+Df(a)(x-a)]\|}{\|x-a\|}=0$$
Since $f$ is differentiable at $a$, we can make $\|f(x)-f(a)-Df(a)(x-a)\|$ as small as we wish by keeping $\|x-a\|$ appropriately small.
In particular,$$\|f(x)-f(a)-Df(a)(x-a)\|\leq\|x-a\|$$
if $\|x-a\|$ is appropriately small.
Note: I modified the above proof from my textbook to fit the context of this question. The '$D$' above is the differential symbol.
To be more precise with my question: I was thinking, straightforwardly, that in the first equation $0$ multiplied by $\|x-a\|$ should be $0$, so I'm not sure how the author arrives at $\leq\|x-a\|$ in the last equation.
AI: $$\lim_{x\to a}\frac{\|f(x)-[f(a)+Df(a)(x-a)]\|}{\|x-a\|}=0$$
So you can find $\varepsilon>0$ such that $\|x-a\|<\varepsilon\Rightarrow\frac{\|f(x)-[f(a)+Df(a)(x-a)]\|}{\|x-a\|}\le 1$.
So indeed $\|f(x)-f(a)-Df(a)(x-a)\|\leq\|x-a\|$ holds if $\|x-a\|$ is appropriately small. |
H: how to show a pair of sequences is Cauchy while proving the Least Upper Bound Property
http://en.wikipedia.org/wiki/Least-upper-bound_property#Proof_using_Cauchy_sequences
This proof of the Least Upper Bound Property defines sequences $A_1, A_2,... $ and $B_1, B_2,...$ recursively:
Let S be a nonempty set of the reals that is bounded above. Let $B_1$ be an upper bound of S, and let $A_1$ be an element of S that is not an upper bound.
1)Compute $(A_n +B_n)/2$.
2) If it is an upper bound of S, then take $A_{n+1}=A_n$ and $B_{n+1}=(A_n +B_n)/2$.
3) Otherwise, there is an $s\in S$ such that $(A_n +B_n)/2<s$. Take $A_{n+1}=s$ and $B_{n+1}=B_n$.
How does one (as the wiki page suggests) show that these sequences are Cauchy?
Does one have to provide a direct proof (which I was having a hard time with due to the recursive definition) or use the fact that $A_1 \leq A_2\leq ...\leq B_2 \leq B_1$ and $|A_n -B_n|$ tends to 0?
AI: Just use the facts that $A_1\le A_2\le A_3\le\ldots\le B_3\le B_2\le B_1$ and $|A_n-B_n|=B_n-A_n\to 0$ as $n\to\infty$: that $\langle A_n:n\in\Bbb Z^+\rangle$ and $\langle B_n:n\in\Bbb Z^+\rangle$ are Cauchy follows immediately from the following argument.
Fix $\epsilon>0$. There is an $n_0\in\Bbb Z^+$ such that $|A_n-B_n|<\epsilon$ whenever $n\ge n_0$. Thus, for any $m,n\ge n_0$ we have $$A_{n_0}\le A_m,A_n,B_m,B_n\le B_{n_0}$$ and hence $$|A_m-A_n|,|B_m-B_n|\le|A_{n_0}-B_{n_0}|<\epsilon\;,$$ as desired. |
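The construction is easy to simulate numerically. Here is a Python sketch for the example $S = \{x : x^2 < 2\}$ (my choice of $S$, not from the thread); the simulation takes $s$ to be the midpoint itself in step 3, which in this example always lies in $S$:

import math

def is_upper_bound(m):            # m bounds S = (-sqrt(2), sqrt(2)) from above iff m >= sqrt(2)
    return m >= 0 and m * m >= 2

A, B = 1.0, 2.0                   # A_1 in S and not an upper bound; B_1 an upper bound
for _ in range(60):
    mid = (A + B) / 2
    if is_upper_bound(mid):
        B = mid                   # step 2: B_{n+1} = (A_n + B_n)/2
    else:
        A = mid                   # step 3, with s = mid (mid lies in S here)
print(A, B, math.sqrt(2))         # both sequences have squeezed onto sqrt(2)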
H: Circular Permutation
Can someone please explain how to solve circular permutation sums. I just cannot seem to understand them.
eg.
$\text{(4)}$ The number of ways in which $6$ men and $5$ women can dine at a round table if no two women are to sit together is given by
$\text{(a)}$ $6!*5!$
$\text{(b)}$ $50$
$\text{(c)}$ $5!*4!$
$\text{(d)}$ $7!*5!$
AI: First seat the $6$ men. If they were seated in a row, there would be $6!$ possible permutations of them. However, they're actually seated around a circular table, which changes things. The six permutations $ABCDEF,BCDEFA,CDEFAB,DEFABC,EFABCD$, and $FABCDE$ are the same when the men are sitting in a circle: the only difference is where in the circle you started the list. Thus, each arrangement of the men around the table corresponds to $6$ permutations in a straight line, and there are therefore only $\frac{6!}6=5!$ possible arrangements of the men around the table. (This is a useful general fact: there are $(n-1)!$ circular permutations of $n$ things.)
Now we seat the $5$ women. There are $6$ positions available between adjacent men, so we have $6$ possible ways to seat the first woman. If we put another woman between the same two men, she'd be sitting next to the first woman; we don't want that, so we have to seat her in one of the other $5$ gaps between adjacent men. That leaves $4$ open gaps for the third woman, $3$ for the fourth woman, and $2$ for the last woman, so altogether we can seat the women in $6\cdot5\cdot4\cdot3\cdot2=6!$ different ways.
Combining the two results, we see that the $11$ people can be seated in $5!\cdot6!$ different orders around the table if no two women sit next to each other. |
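The same reasoning can be brute-forced on a smaller instance as a sanity check (a Python sketch; the case of $3$ men and $2$ women is my choice, where the formula gives $2!\cdot 3\cdot 2=12$):

from itertools import permutations

people = ["M1", "M2", "M3", "W1", "W2"]
count = 0
for rest in permutations(people[1:]):     # fix M1 in seat 0 to quotient out rotations
    seats = (people[0],) + rest
    n = len(seats)
    if all(not (seats[i][0] == "W" and seats[(i + 1) % n][0] == "W") for i in range(n)):
        count += 1
print(count)                              # prints 12 = 2! * (3 * 2)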
H: Implicit Function Theorem example in Baby Rudin
I am looking at Example 2.29 of Baby Rudin (page 227 of my edition), which illustrates the implicit function theorem. This is what the example is:
Take $n= 2$ and $m=3$ and consider $\mathbf{f} = (f_1,f_2)$ of $\Bbb{R}^5$ to $\Bbb{R}^2$ given by
$$\begin{eqnarray*} f_1(x_1,x_2,y_1,y_2,y_3) &=& 2e^{x_1} + x_2y_1 -4y_2 + 3 \\
f_2(x_1,x_2,y_1,y_2,y_3) &=& x_2\cos x_1 - 6x_1 + 2y_1 - y_3 \end{eqnarray*}.$$
If $\mathbf{a} = (0,1)$ and $\mathbf{b} = (3,2,7)$, then $\mathbf{f(a,b)} = 0$. With respect to the standard bases, the derivative of $f$ at the point $(0,1,3,2,7) $ is the matrix
$$[A] = \left[\begin{array}{ccccc} 2 & 3 & 1 & -4 & 0 \\ -6 & 1 & 2 & 0 & -1 \end{array}\right].$$
Hence if we observe the $2 \times 2$ block
$$\left[\begin{array}{cc} 2 & 3 \\ -6 & 1 \end{array}\right]$$
it is invertible, and so by the implicit function theorem there exists a $C^1$ mapping $\mathbf{g}$ defined on a neighbourhood of $(3,2,7)$ such that $\mathbf{g}(3,2,7 ) = (0,1)$ and $\mathbf{f}(\mathbf{g}(\mathbf{y}),\mathbf{y}) = 0$.
Now what I don't understand is from such a $\mathbf{g}$, how does this mean that I can solve the variables $x_1$ and $x_2$ for $y_1,y_2,y_3$ locally about $(3,2,7)$?
Also if I wanted to carry out this computation explicitly, how can I do it? We do not have a nice and shiny linear system to solve unlike problem 19 of the same chapter.
Thanks.
AI: As suggested by several comments, the implicit function theorem as well as the inverse function theorem (which is equivalent to the implicit function theorem) are (powerful) existence theorems which in fact provide little help when it comes to explicitly computing an inverse or implicit function. This is, however, usually not necessary in applications in analysis.
What is extremely important in the two theorems (apart from the existence claim) are the assertions of the uniqueness/invertibility of the solution in a whole neighbourhood (having topological consequences, e.g., the inverse functions theorem shows that $C^1$ maps with invertible derivative in one point are locally open) and regularity of the functions the existence of which is guaranteed (i.e. $f(x,y)\in C^1 (C^k)$ and the assumptions of the theorem are fulfilled $ \Rightarrow g\in C^1 (C^k)$ if $f(x, g(x))=0$).
As most people have difficulty grasping the implicit function theorem when they first see it, I think Rudin's intention was to illustrate the steps which are necessary when one wants to apply it. |
H: How to solve a differential equation containing an integral term?
I'm asking this question on behalf of another person who is not as familiar with computers as I am. He has the following problem to solve, and unfortunately Mathcad can't solve this type of equation.
He seeks help to find the function $u = u(x) = ?$ for
$$ \frac{d^2u(x)}{dx^2} =\left[U(x) + \alpha^2\right]u(x) $$
where
$$U(x) = \frac{2}{x}\left(\exp\left(-\frac{x}{\lambda}\right)-\int_0^x 4\pi\left(u(z)\right)^2\,dz\right)$$
The parameters $\lambda$ and $\alpha$ are real.
Is another tool (e.g. Mathematica) able to solve it? Do you have another suggestion?
AI: A short answer to the question in the title is:
Isolate the integral term and differentiate.
In the case at hand, one could write $\displaystyle U'(x)=-\frac1xU(x)+\frac2x\left(-\frac1\lambda\mathrm e^{-x/\lambda}-4\pi u(x)^2\right)$ and $\displaystyle U(x)=\frac{u''(x)}{u(x)}-\alpha^2$ to deduce the third-order differential equation with no integral term
$$
u'''(x)=\frac{u''(x)}{u(x)}u'(x)-\frac1x\left(\frac{u''(x)}{u(x)}-\alpha^2\right)u(x)+\frac2x\left(-\frac1\lambda\mathrm e^{-x/\lambda}-4\pi u^2(x)\right)u(x).
$$
Or, one could transform this into a system of two equations of lower degree, namely,
$$
xu''(x)=v(x)u(x),\qquad v'(x)=-8\pi u^2(x)+\alpha^2-\frac2\lambda\mathrm e^{-x/\lambda}.
$$ |
H: Hectic absolute values? (where $a=ix$ and $b=-ix$)
Where $a=ix$ and $b=-ix$ then what is:
$$|a+b|^2$$
$$|b-a|^2$$
And then is this equality true?
$$|a+b|^2=|a|^2+|b|^2$$
because it seems $a+b=0$!
AI: You have $a+b=0$ and $b-a=-2ix$ so $|a+b|=0$ and $|b-a|=2|x|$
Since $|a+b|^2=0$ and $|a|^2 + |b|^2 = 2|x|^2$, the equality does not hold in general.
The general formula for complex numbers is:
$$|a+b|^2=|a|^2 + |b|^2 + 2 \operatorname{Re}(\overline ab)$$ |
H: Evaluating: $\lim_{n\to\infty} \frac{\sqrt n}{\sqrt {2}^{n}}\int_{0}^{\frac{\pi}{2}} (\sin x+\cos x)^n dx $
I'm supposed to compute the following limit:
$$\lim_{n\to\infty} \frac{\sqrt n}{\sqrt {2}^{n}}\int_{0}^{\frac{\pi}{2}} (\sin x+\cos x)^n dx $$
I'm looking for a reasonable approach in this case, if possible. Thanks.
AI: Laplace's method yields
$$\frac{\sqrt n}{\sqrt {2}^{n}}\int_{0}^{\pi/2} (\sin x+\cos x)^n \mathrm dx=\int_{-\pi\sqrt{n}/4}^{\pi\sqrt{n}/4}\left(\cos(t/\sqrt{n})\right)^n\mathrm dt\to\int_{-\infty}^{+\infty}\mathrm e^{-t^2/2}\mathrm dt=\sqrt{2\pi}.
$$ |
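A numerical check of this limit (a sketch assuming numpy and scipy):

import numpy as np
from scipy.integrate import quad

for n in (10, 100, 1000):
    val, _ = quad(lambda x: (np.sin(x) + np.cos(x)) ** n, 0, np.pi / 2, points=[np.pi / 4])
    print(n, np.sqrt(n) / np.sqrt(2) ** n * val)   # tends to sqrt(2*pi)
print(np.sqrt(2 * np.pi))                          # 2.5066...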
H: Curious Laplacian inequality ($\Delta f>0 \Rightarrow f$ has no maxima). Additional premises just to confuse students?
In a homework question, we've been given the following things:
Let $B=\overline{B(0,1)} \subseteq \mathbb R^n$ be the closed unit ball, $f : B\to \mathbb R$ a $\mathcal C^2$ function and let some $a_i : B \to \mathbb R$ be continuous for $i=1,\ldots,n$.
For all $x\in B$, we have $$\Delta f(x)+\sum_{i=1}^n a_i(x)\frac{\partial f}{\partial x_i}(x) > 0$$ where $\Delta f$ is the Laplacian of $f$.
Then show that $f$ has no local maximum in $Int(B)$.
I'm kind of confused by the question. We're used to having questions where all the premises matter, but this doesn't seem to be the case here, or is it?
I mean, let $x_0$ be the maximum whose existence we want to contradict. Then $x_0$ has to be a critical point and the sum in the inequality vanishes anyway. But $\Delta f$ needs to be negative in maxima, right? Hence contradiction.
So, what's the point of the $a_i$, why make them continuous? Do we really need the constraint on the unit ball, or if not, what property of $B$ matters? Convexity, compactness?
Are all these premises just to confuse the students and we actually don't need them? Or am I severely overlooking something? If so, could you give a hint please on what to keep in mind for the proof?
Thank you.
AI: The $a_i$ are not additional assumptions, but additional constraints. The statement is, of course, true, if the $a_i$ are all identically zero. The point is that you may even have this additional term in the equation and the statement still holds true.
And the Laplacian is not necessarily negative, only nonpositive, in $x_0$ such that $f(x_0)$ is maximal.
And, finally, if the $a_i$ were not continuous you'd run into problems assuming $f\in C^2$ (usually one considers equations of the type $\Delta f + \sum a_i \frac{\partial f}{\partial x_i} = g(x)$). But you are right, to just prove the above statement the assumption can be weakened.
(Edit: note though, that it is not clear that the statement remains true if the $a_i$ have, say, a pole at $x_0$. If then, e.g., $\lim_{x\rightarrow x_0}a_i \frac{\partial f}{\partial x_i} < 0$, how would you reason in that case? For this reason I'd say it is, for an exercise, safe and reasonable to assume the $a_i$ are continuous.) |
H: Analyzing a recurrence relation given by a Toeplitz matrix
Let $p$ be an odd prime, and $M_p$ be the $p\times p$ Toeplitz matrix over $\mathbb{F}_2$ given by $a_0=a_1=1$ and $a_{-p+1}=1$, e.g. for $p=5$ we have
$$M_5=\left[\begin{array}{ccccc}
1 & & & & 1\\
1 & 1\\
& 1 & 1\\
& & 1 & 1\\
& & & 1 & 1
\end{array}\right]$$
Let $v_0=(1,0,\dots,0)\in\mathbb{F}_2^p$ and define the sequence $v_0,v_1,v_2,\dots$ by the recurrence relation
$$v_{n+1}=M_p v_n$$
This sequence must be periodic because of the pigeonhole principle.
I want to find the period. i.e., find a way to compute from $p$, without computing the whole sequence, $a$ and $b$ such that $v_{a+kb}=v_a$ for every $k\ge 0$. A formula giving $a,b$ from $p$ will be the best, but even non trivial estimates are good.
I want, if possible, a closed-form description of $v_{n}$ computable from $n$ and $p$, again without computing the whole sequence.
AI: Theorem: Let $M$ be an $n \times n$ matrix over a field $F$ and fix $1 \le i, j \le n$. Suppose that over $\bar{F}$ the characteristic polynomial of $M$ has roots $\lambda_1, ... \lambda_m$ with multiplicities $e_1, ... e_m$. Then we can write
$$(M^n)_{ij} = \sum_{k=1}^m p_k(n) \lambda_k^n$$
where $p_i$ is a polynomial of degree at most one less than the largest size of a Jordan block of $M$ with eigenvalue $\lambda_i$. This follows from the theory of Jordan normal form.
In this case $M$ is a circulant matrix so one can write down its eigenvalues fairly explicitly using the discrete Fourier transform (equivalently, using the representation theory of a finite cyclic group): they are given by
$$\lambda_k = 1 + \zeta_p^{-k}$$
where $\zeta_p$ is a primitive $p^{th}$ root of unity over $\mathbb{F}_2$. The polynomial $x^p - 1$ is separable over $\mathbb{F}_2$, so the $\lambda_k$ are distinct and the polynomials $p_k(n)$ are constant.
Now, $1 + \zeta_p^{-1}$ has conjugates
$$1 + \zeta_p^{-1}, 1 + \zeta_p^{-2}, 1 + \zeta_p^{-2^2}, 1 + \zeta_p^{-2^3}, ...$$
by repeatedly applying the Frobenius map. This sequence starts repeating precisely at the smallest $r$ such that $2^r \equiv 1 \bmod p$, so in other words $1 + \zeta_p^{-1}$ lives in $\mathbb{F}_{2^r}$ where $r$ is the order of $2 \bmod p$ and hence has multiplicative order dividing $\boxed{2^r - 1}$. Since $p | 2^r - 1$ by assumption, each $\lambda_k$ has this period also.
You can determine a closed form in one of various ways. My preferred general method is to write down the matrix generating function
$$(I - tM)^{-1} = \sum_{n \ge 0} M^n t^n$$
and isolate its entries using Cramer's rule. In this particular case it is fairly straightforward to write your initial vector as a linear combination of the eigenvectors of $M$. These eigenvectors are
$$u_{ij} = \frac{1}{p} \zeta_p^{ij}$$
where $u_k$ has eigenvalue $\lambda_k$ (representation theory of cyclic groups!), and we have
$$v_0 = \frac{1}{p} \sum_{k=0}^{p-1} u_k$$
hence
$$v_n = \frac{1}{p} \sum_{k=0}^{p-1} \zeta_p^{nk} (1 + \zeta_p^{-1})^n u_k.$$ |
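As a small sanity check on the predicted period (a pure-Python sketch over $\mathbb F_2$, my addition): for $p = 5$ the order of $2 \bmod 5$ is $r = 4$, so the period should divide $2^4 - 1 = 15$.

p = 5
# Row i of M_p has ones in columns i and i-1 (mod p), matching the displayed M_5.
M = [[1 if (j == i or j == (i - 1) % p) else 0 for j in range(p)] for i in range(p)]
v = tuple([1] + [0] * (p - 1))
seen, n = {}, 0
while v not in seen:
    seen[v] = n
    v = tuple(sum(M[i][j] * v[j] for j in range(p)) % 2 for i in range(p))
    n += 1
# The rows of M sum to the zero vector mod 2, so M is singular and a short preperiod can occur.
print("preperiod:", seen[v], "period:", n - seen[v], "(2^4 - 1 = 15)")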
H: Proving linearly independence of the functions $t^{i}e^{\lambda_{0}t}$
Following the question I asked here and is:
Let $P(\lambda)=(\lambda-\lambda_{0})^{r}$where $r$ is a positive
integer. Prove that the equation $P(\frac{d}{dt})x(t)=0$ has solutions
$t^{i}e^{\lambda_{0}t},i=0,1,\ldots,r-1$
I now wish to prove the solutions are linearly independent.
I have two questions regarding this:
I learned to prove such independence with the Wronskian, but I am having trouble calculating it (I calculated the derivatives of $e^{\lambda_{0}t},te^{\lambda_{0}t}$, but it gets too hard for higher powers of $t$ since I get longer and longer expressions). How can I calculate the Wronskian?
If I think of the vector space of smooth real-valued functions, then it seems that this set (if I take the power of $t$ to be as big as I want, but finite) is linearly independent. Did I deduce that correctly?
I would appreciate any help!
AI: To show that $t^i e^{\lambda_0t}$ are linearly independent, it suffices to show that $t^i$ are linearly independent. |
H: Finding n in Fibonacci closed loop form
The nth term of the Fibonacci series is given by
$F_{n}$=$\Big\lfloor\frac{\phi^{n}}{\sqrt{5}}+\frac{1}{2}\Big\rfloor$
How do you get the following expression for n from this?
$n=\Big\lfloor\log_{\phi}\Big(F\cdot\sqrt{5}+\frac{1}{2}\Big)\Big\rfloor$
AI: Actually you can't get $n=\Big\lfloor\log_{\phi}\Big(F\cdot\sqrt{5}+\frac{1}{2}\Big)\Big\rfloor$ only from $F_n=\Big\lfloor\frac{\phi^{n}}{\sqrt{5}}+\frac{1}{2}\Big\rfloor$. But these two identities can both be deduced from $F_n=\frac{\phi^{n}}{\sqrt{5}}-\frac{\psi^{n}}{\sqrt{5}}$. Here we have $|\psi|<1$, so we can add $1/2$ and floor to clear away the $\psi$ term, which makes the expression nicer in some sense.
From $F_n=\frac{\phi^{n}}{\sqrt{5}}-\frac{\psi^{n}}{\sqrt{5}}$, we get $\sqrt{5}F_n=\phi^{n}-\psi^{n}$. Let's assume $n\geq 2$; then $|\psi^{n}|\leq \psi^{2}<1/2$. (For $n=1,0$ you can check the identity directly; it may or may not hold.)
Thus $\sqrt{5}F_n+\frac{1}{2}\geq\phi^{n}$ and trivially $\sqrt{5}F_n+\frac{1}{2}\leq \phi^{n}+1\leq\phi^{n+1}$ since $\phi>1.6$ and $n\geq 2$. Thus $n=\Big\lfloor\log_{\phi}\Big(F\cdot\sqrt{5}+\frac{1}{2}\Big)\Big\rfloor$. |
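Both identities check out numerically (a quick Python sketch):

import math

phi = (1 + math.sqrt(5)) / 2
F = [0, 1]
for _ in range(20):
    F.append(F[-1] + F[-2])
for n in range(2, 15):
    assert F[n] == math.floor(phi ** n / math.sqrt(5) + 0.5)
    assert n == math.floor(math.log(F[n] * math.sqrt(5) + 0.5, phi))
print("both identities hold for n = 2..14")

(For $n = 1$ the index cannot be recovered this way, since $F_1 = F_2 = 1$.)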
H: Limit of $ \sum_{n=m+1}^{\infty} \frac{m}{n\sqrt{n^2-m^2}},m\rightarrow\infty$
I would like to show that $$ \sum_{n=m+1}^{\infty} \frac{m}{n\sqrt{n^2-m^2}}\rightarrow_{m\rightarrow \infty}\frac{\pi}{2}$$
Using integrals:
$$ m\int_{m+1}^{\infty} \frac{\mathrm dx}{x \sqrt{x^2-m^2}} \leq \sum_{n=m+1}^{\infty} \frac{m}{n\sqrt{n^2-m^2}} \leq m\int_{m+1}^{\infty} \frac{\mathrm dx}{x \sqrt{x^2-m^2}}+\frac{m}{(m+1)\sqrt{2m+1}}$$
$$ m\int_{m+1}^{\infty} \frac{\mathrm dx}{x \sqrt{x^2-m^2}} = \frac{\pi}{2}-\arctan \left(1+\frac{1}{m} \right)=\frac{\pi}{4}+o(1)$$
The result I get is:
$$ \sum_{n=m+1}^{\infty} \frac{m}{n\sqrt{n^2-m^2}}\rightarrow_{m\rightarrow \infty}\frac{\pi}{4}$$
Where did I go wrong?
AI: The initial inequality appears to be set up correctly so the problem may lie with your evaluation of that integral. The result comes much quicker via Riemann Sums:
$$\sum_{n=m+1}^{\infty} \frac{m}{n\sqrt{n^2-m^2}}= \frac{1}{m} \sum_{n=m+1}^{\infty} \frac{m}{n\sqrt{(n/m)^2-1}} \to \int^{\infty}_1 \frac{1}{x\sqrt{x^2-1} } dx = \sec^{-1} x \bigg|^{\infty}_1= \frac{\pi}{2}.$$ |
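A truncated numerical check (a Python sketch; the tail beyond $n = 1000m$ contributes only about $10^{-3}$):

import math

for m in (10, 100, 1000):
    s = sum(m / (n * math.sqrt(n * n - m * m)) for n in range(m + 1, 1000 * m))
    print(m, s)          # approaches pi/2 as m grows
print(math.pi / 2)       # 1.5707...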
H: Show the inequality $\sigma(n)\phi(n) \geq n^2(1-\frac{1}{p_1^2})(1-\frac{1}{p_2^2})\cdots(1-\frac{1}{p_r^2})$
If $n=p_1^{k_1}p_2^{k_2}\cdots p_r^{k_r}$then ,show the inequality :$$ \sigma(n) \phi(n) \geq n^2(1-\frac{1}{p_1^2})(1-\frac{1}{p_2^2})\cdots(1-\frac{1}{p_r^2})$$
I know the function $\sigma(n) \phi(n)$ is multiplicative hence I only have to show the inequality for $n=p^{k}$
So $$\sigma(n) \phi(n) = \frac{n(p^{k+1}-1)}{p}.$$
But after then how to get the desired inequality?
AI: Putting $n = p^k$,
$\sigma(n)\phi(n) = \frac{p^{k + 1} - 1}{p - 1} p^{k -1}(p - 1) = p^{2k} - p^{k - 1}$ and
$n^2 (1 - \frac{1}{p^2}) = p^{2k} - p^{2k - 2}$.
So we get the inequality and furthermore we see that equality happens iff k = 1. Thus for a general $n$ the equality happens iff $n$ is squarefree. |
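The inequality is easy to spot-check (a sketch assuming sympy is available):

from sympy import totient, divisor_sigma, primefactors

for n in range(2, 201):
    rhs = n * n
    for p in primefactors(n):
        rhs *= 1 - 1 / p**2
    assert divisor_sigma(n) * totient(n) >= rhs - 1e-9   # small slack for float error
print("inequality verified for n = 2..200")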
H: How to prove that $A_5$ has no subgroup of order 30?
I need to show that $A_5$ has no subgroup of order 30. Any ideas?
AI: Such a subgroup would have index $2$ and so immediately be normal and non-trivial. It suffices to show that $A_5$ is simple.
Alternatively, show that the class equation is $60=1+15+20+12+12$. One cannot obtain $30$ as a sum of any subcollection of the summands on the right. Since normal subgroups are composed of whole conjugacy classes, there is then no subgroup of $A_5$ of order $30$. |
H: Numerator vs. denominator vs. nominator
What is appropriate usage of "numerator", "denominator", and "nominator" to refer to parts of a fraction?
I'm posting this question and answer here because I had little luck finding a clear answer through Google. I realize that it's not really mathematics, but I think it's worth mentioning as an educational issue.
Ideally, the question at English.stackexchange would address this completely; however, they have closed it with the reason: "you can just look it up". I agree that this is possible, but I don't think that reveals the consensus. I wanted to allow for more opinions to be expressed. (Below, I argue that "nominator" should be discouraged.)
AI: The numerator is the top part of a fraction, the denominator is the bottom part, and nominator is not an appropriate term for any part of a fraction.
I have seen nominator used to mean both "numerator" and "denominator". According to a question on this at English.stackexchange, this use of "nominator" is exceedingly rare.
Rather than people having been taught that "nominator" was appropriate, I think that it is far more often the case that the use of "nominator" is an eggcorn that has arisen due to its resemblance to the other two words.
I think using "nominator" should be discouraged because it already has a wholly different meaning, and has no etymological connection to fractions to speak of. It also helps to confuse the meanings of the proper terms if it is mixed with them. |
H: Solution to a system of quadratics
I am learning about a Bell State, and am trying to show that they are entangled. I believe that the required proof is to show that the system
$$\alpha_0^2+\alpha_1^2=1$$
$$\beta_0^2+\beta_1^2=1$$
$$\alpha_0\beta_0=1/\sqrt{2}$$
$$\alpha_1\beta_1=1/\sqrt{2}$$
has no solutions. I have tried various ways of rewriting each variable in terms of others etc. without success. Any hints?
EDIT: I apologize, I should have made clear. In QM probabilities can be complex - what exactly this means intuitively is unclear to me, but algebraically it means that $\alpha_i,\beta_i$ can be complex.
AI: From (3) and (4), we have:
\begin{align*}
\beta_0^2 &= \frac{1}{2\alpha_0^2} \\
\beta_1^2 &= \frac{1}{2\alpha_1^2}
\end{align*}
Plug into (2):
\begin{align*}
\frac{1}{2\alpha_0^2} + \frac{1}{2\alpha_1^2} &= 1 \\
\frac{\alpha_1^2}{\alpha_0^2\alpha_1^2} + \frac{\alpha_0^2}{\alpha_0^2\alpha_1^2} &= 2 \\
\frac{\alpha_0^2 + \alpha_1^2}{\alpha_0^2\alpha_1^2} &= 2
\end{align*}
Use (1) in the numerator to get:
$$
\alpha_0^2 \alpha_1^2 = \frac{1}{2}
$$
Therefore:
$$
\alpha_1^2 = \frac{1}{2\alpha_0^2}
$$
Plug into (1) and multiply both sides by $\alpha_0^2$:
$$
\alpha_0^4 - \alpha_0^2 + \frac{1}{2} = 0
$$
This is a quadratic equation for $\alpha_0^2$ with no real solutions, as $\Delta = -1 \lt 0$. |
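Numerically (a sketch assuming numpy): with $t = \alpha_0^2$, the last equation reads $t^2 - t + \frac{1}{2} = 0$, and

import numpy as np

print(1 - 4 * 0.5)             # discriminant -1: no real t, hence no real alpha_0
print(np.roots([1, -1, 0.5]))  # the complex roots 0.5 +/- 0.5j

so $\alpha_0^2$ cannot be real, which is the desired contradiction for real amplitudes.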
H: Intuition behind Hölder space
What's the point of Hölder spaces (and parabolic Hölder spaces)? I can understand when solving some PDE say $$u_t = au_x + bu_{xx}$$ you may want the solution to lie in $C^{2,1}$ (indexed in space then time) but why do people sometimes want something like $C^{2+\alpha, 1+\alpha}$?
Also being in a Hölder space is stronger than being in the same continuous (non-Hölder) space so this is not very useful for solving PDEs either when you want as big a space as possible to show existence of solutions (I think).
AI: People don't necessarily want to do that; it is what they can prove, respectively work with. If $Lu = f$ with continuous $f$ and second order elliptic $L$, then in general $u$ is not in $C^2$. If, however, $f\in C^{0, \alpha}$, then it is often possible to show $u\in C^{2, \alpha}$. Similarly for parabolic equations. It just turns out that these spaces are particularly suited for PD operators, while the $C^k$ spaces are not.
And many will admit that this is not always convenient, since, for example, the Hölder spaces refuse to be separable.
Edit/added: (on a more technical note, it turns out that the Newton potential of a function $f$, defined by
$$N(f)(x):=C(n) \int \frac{f(y)}{|x-y|^{n-2}} dy$$
(with a constant $C(n)$ depending on the dimension) is not a $C^2$ function if $f$ is merely continuous. The Newton potential is the basic building block for the Green's function of the Laplacian. (This is only correct if $n>2$; for $n=2$ a $\log|x-y|$ term replaces the $1/|x-y|^{n-2}$ term.) $N(f)$ is, however, in $C^{2,\alpha}$ if $f\in C^{0,\alpha}$. Again, similar observations hold true in the parabolic case. A counterexample for the case $f\in C^0$ is given in exercise 4.9 in Gilbarg and Trudinger's 'Elliptic Partial Differential Equations of Second Order'.) |
H: Permutations With No Identity Elements
Possible Duplicate:
Number of permutations where n ≠ position n
There are $N!$ permutations of the set $\{1,2,\ldots,N\}$
How many of them have zero identity elements?
An identity element is an element that has a value equal to its position, i.e., when for some $i$, the $i$th element equals $i$.
For example, $(2,3,4,1)$ has no identity elements, whereas $(2,1,3,4)$ has two identity elements.
AI: The usual terminology would be "fixed points" or "1-cycles" instead of "identity elements".
A permutation with no fixed points is called a derangement. The Wikipedia article on derangements gives quite a bit of information about counting them. |
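For small $n$ one can count derangements by brute force and compare with the well-known formula $D(n) = \operatorname{round}(n!/e)$ (a Python sketch):

import math
from itertools import permutations

for n in range(1, 8):
    brute = sum(all(p[i] != i for i in range(n)) for p in permutations(range(n)))
    print(n, brute, round(math.factorial(n) / math.e))   # columns agree: 0, 1, 2, 9, 44, 265, 1854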
H: What is the probability of two people sitting side by side each catching a home run in the same game?
Yesterday, in Edmonton, at a baseball game, two of my friends caught a home run, in separate plays during the game, and I am wondering where to begin in analyzing what the probability of this happening was. They were sitting beside each other. This happened at Telus Field, which has a capacity of 10,000. The field dimensions:
Left: 340 ft (100 m)
Centre: 420 ft (130 m)
Right: 320 ft (98 m)
Unfortunately, I don't think these dimensions include the seating area (I found these dimensions on wikipedia). How can I approach this problem?
To constrain the problem a little, I try to define the possible landing areas of the ball as squares with sides roughly the diameter of the ball, which fill the area of the shape representing the total possible landing positions. This shape would extend out from the batter's position following the foul lines into the stands, the distance between the lines increasing as you travel further from the batter. I am not including the area outside of the foul lines, and the grid of squares representing the possible positions would extend into the far reaches of the stands. I am assuming that the baseball is 3 inches in diameter. Probabilistically, would I need to know which area of squares is landed on most frequently? And would I need to know their exact seating positions?
AI: This is very complicated. Obviously we know it is unlikely from observing baseball games. For example, I have gone to many games and rarely had a ball come close enough to my seat that I could reach for it. So even getting 1 ball in a game is rare, and this is almost like getting 2. But where a baseball will get hit is very complicated. It depends on whether the batter is a right- or left-handed hitter, the pitcher and the type of pitch, and other things. To answer this with a probability estimate would require modeling the process for individual pitches and then looking at all the pitches thrown in the game, probably more than 200 pitches. It is even rarer than what I am saying, because I am thinking of balls hit in fair or foul territory and you are restricting it to home runs, which only account for about 1/4th of the seats. |
H: Find an elegant proof of a set-theoretic equiality about relations
I am now attempting to prove the following theorem.
I am in half-underway of the proof and it seems I can do it by myself. But the proof I am now constructing is not elegant.
Could anyone provide a nice proof?
Let $n$ be an index set (it may be a finite ordinal, for example). Let $a$ be an $n$-ary relation. Let $f$ be an $n$-indexed family of binary relations. By definition, $\operatorname{StarComp}(a;f)$ is the $n$-ary relation such that
$$L\in\operatorname{StarComp}(a;f) \Leftrightarrow \exists y\in a\forall i\in n: y_i \mathrel{f_i} L_i.$$
Theorem $\operatorname{StarComp}(\operatorname{StarComp}(a;f);g) = \operatorname{StarComp}(a;g\circ f)$ where the composition of $n$-indexed families of relations $f$ and $g$ is taken componentwise: $g\circ f = \lambda i\in n:g_i\circ f_i$.
AI: Let $x R \left( f \right) y \Leftrightarrow \forall i \in n : x_i f_i y_i$.
$L \in \operatorname{GR}\operatorname{StarComp}(a; f) \Leftrightarrow \exists y \in \operatorname{GR}a : y \mathrel{R(f)} L$.
$L \in \operatorname{GR}\operatorname{StarComp}(\operatorname{StarComp}(a; f); g) \Leftrightarrow \exists p \in \operatorname{GR}\operatorname{StarComp}(a; f) : p \mathrel{R(g)} L \Leftrightarrow \exists p, y \in \operatorname{GR}a : (y \mathrel{R(f)} p \wedge p \mathrel{R(g)} L) \Leftrightarrow \exists y \in \operatorname{GR}a : y \mathrel{R(g \circ f)} L \Leftrightarrow L \in \operatorname{GR}\operatorname{StarComp}(a; g \circ f)$. |
H: Evaluating $\int_{0}^{\frac{\pi}{2}} e^{x+2}\sin(x) \,dx$
Could someone show me how to solve this integral?
$$\int_0^{\frac{\pi}{2}} e^{x+2}\sin(x) \,dx$$
I think that it's improper, but I'm not sure.
I tried to solve it by parts, but first I substituted
$$e^{x+2} = u$$
And this is what I obtained:
$$\int{u\sin(\log(u) - 2)\frac{1}{u}}\,du$$
And at this point, integrating by part seems easy, but...
Could you help me? Maybe avoid the method that WolframAlpha uses, because I don't know that method.
Thanks in advance
AI: We can compute the indefinite integral first, and then apply Barrow's rule. We start by factoring out $e^2$, since it's a constant factor.
$\displaystyle \int e^{x+2}\sin x \,dx = e^2 \int e^x \sin x \,dx $
To evaluate $\int e^x \sin x \,dx$, we perform integration by parts $\int u \,dv = uv - \int v \,du$, with $u=e^x$ and $dv = \sin x \,dx$ first, and then again with $u = e^x$, $dv = \cos x \,dx$.
$\begin{align*} \int e^x \sin x \,dx &= - e^x \cos x - \int e^x (-\cos x) \,dx = -e^x \cos x + \int e^x \cos x \,dx = \\ &= - e^x \cos x + \big( e^x \sin x - \int e^x \sin x \,dx \big) = - e^x \cos x + e^x \sin x - \int e^x \sin x \,dx \end{align*}$
Note that we get back the original integral, so we can solve for it. This is sometimes called a cyclic integral.
$\displaystyle 2 \int e^x \sin x \,dx = e^x (\sin x - \cos x)$, and then $\displaystyle \int e^x \sin x \,dx = \frac{1}{2} e^x(\sin x - \cos x)$.
Finally, we have, with Barrow's rule:
$\begin{align*} \int_0^{\pi/2} e^{x+2}\sin x \,dx &= e^2 \int_0^{\pi/2} e^x \sin x \,dx = e^2\big(\frac{1}{2} e^x(\sin x - \cos x)\big|_0^{\pi/2}\big) \\ &= e^2 \big( \frac{1}{2}e^{\pi/2}(\sin {\frac{\pi}{2}} - \cos {\frac{\pi}{2}}) - \frac{1}{2}e^0(\sin 0 - \cos 0)\big) \\ &= \frac{1}{2}e^2 \big(e^{\pi/2}(1 - 0) - (0 - 1)\big) = \frac{1}{2}e^2(1+e^{\pi/2}) \end{align*}$ |
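The computation can be confirmed symbolically (a sketch assuming sympy is available):

from sympy import symbols, exp, sin, integrate, pi, simplify, E

x = symbols("x")
val = integrate(exp(x + 2) * sin(x), (x, 0, pi / 2))
print(simplify(val - E**2 * (1 + exp(pi / 2)) / 2))   # prints 0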
H: How many keypresses does it take to unlock a 4-digit codelock?
Possible Duplicate:
Fastest way to try all passwords
There are $10^4$ different 4-digit codes. If each code takes 4 keypresses to try, then it would take $4*10^4$ keypresses to try all possible codes.
Now the specific codelock I have in mind is of the type that unlocks when the last 4 digits that were pressed are the code, e.g. if the code is 0000 one could enter 12351350000 and it would unlock. This implies that to test the codes 0000 and 0001, one needs only 5 keypresses.
How many keypresses does it take to try all the codes on such a codelock?
AI: $10^4 + 3$. See de Bruijn sequences, which exist for all bases and lengths. |
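Here is a Python sketch of the standard recursive (Lyndon-word) construction of a de Bruijn sequence, confirming that $10^4 + 3$ keypresses cover every 4-digit code:

def de_bruijn(k, n):
    # Classic construction of a de Bruijn sequence B(k, n).
    a = [0] * k * n
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1 : p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

s = de_bruijn(10, 4)
s += s[:3]                                 # wrap around: repeat the first 3 digits
codes = {tuple(s[i : i + 4]) for i in range(len(s) - 3)}
print(len(s), len(codes))                  # 10003 keypresses, 10000 distinct codes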
H: An attempt to prove Tutte's theorem
I'm studying Tutte's theorem. There is a proof in Graph Theory / Diestel. I took a very short glance at it before trying to prove it on my own. I am giving my proof attempt here with a specific question.
For a finite graph $G=(V,E)$ and a subset of vertices $S \subset V$, we let
$q(S)$ denote the number of connected components of odd size in the graph induced on $V \setminus S$.
Tutte's theorem then says that if $q(S) \leq |S|$ for any $S \subset V$ (I call this Tutte's condition), then there is a perfect matching in $G$.
(the converse is trivially true too).
I'm trying to prove it in the following way: Iteratively remove edges from $G$ without violating Tutte's condition. We get a graph satisfying Tutte's condition, for which a removal of any edge would violate Tutte's condition. I claim that all vertices are of degree $1$ in this new graph. Thus it is a $1$-factor in the original graph and we are done.
First, vertices of degree $0$ are impossible because they violate Tutte's condition with $S=\emptyset$.
Now, assume there is a vertex $v \in V$ such that $\deg(v) \geq 2$. Let $u$ and $w$ be 2 of its neighbours. Thus, removal of either edge $vu$ or $vw$ would result in violation of Tutte's condition. It is thus sufficient to prove the following lemma:
Lemma: For a graph $G=(V,E)$, if removal of edge $vu$ results in violation of Tutte's condition as exhibited by the set $S_1 \subset V$ and removal of edge $vw$ results in violation of Tutte's condition as exhibited by the set $S_2$, then $G$ itself violates Tutte's condition as exhibited by the set $S_1 \Delta S_2 = (S_1 \cup S_2) \setminus (S_1 \cap S_2)$.
Is this lemma true and how can it be proved?
AI: Symmetric difference doesn't work. Take the graph $V=\{v,a,b,c\}$, $E=\{va,vb,vc\}$. $G$ violates Tutte's condition but only for $S=\{v\}$, whereas $G\setminus va$ and $G\setminus vb$ both violate it for $\emptyset$.
So let's assume you modify the lemma to allow an arbitrary set for the violation of the condition:
Modified lemma: For a graph $G$ and edges $vu$ and $vw$, if removal of $e$ results in a violation of Tutte's condition for $e=vu$ and $e=vw$, then $G$ violates Tutte's condition.
Then it's not possible to find a counterexample to the general structure of your proof because, assuming Tutte's theorem, the modified lemma is guaranteed to be true for some set depending on $G$: if $G$ satisfies Tutte, $v$ is part of a perfect matching $M$ and so removal of any edge not in $M$ gives a subgraph satisfying Tutte. Therefore, the modified lemma is equivalent to Tutte's theorem. |
H: Showing a matrix is symmetric and its eigenvectors are in the columns of $(I - 2vv^T)$
Let $A \in \mathbb{R}^{N \times N}$ have the form $$A = (I - 2vv^T)D(I-2vv^T)$$ with $D = \text{diag} (\lambda_1, \lambda_2, ..., \lambda_N) \in \mathbb R^{N \times N}, v \in \mathbb{R}^N, v^Tv=1$
Show that $A$ is symmetric and that for $j = 1, ...,N$, the $j$th-column
from $I - 2vv^T$ is the respective eigenvector to $\lambda_j$.
Here are my ideas so far. I remember this form $(I - 2vv^T)D(I-2vv^T)$ from matrices that can be diagonalized. This means it's symmetric, right? If I rewrite $(I - 2vv^T)$ as $Q$, then maybe I could do something like $A = QDQ^{T}$, if $Q$ is symmetric. So I guess my first step should be showing that $(I - 2vv^T)=(I - 2vv^T)^T$.
So if I show $Q$ is symmetric, it is proven that $A$ is also symmetric. Am I going in the right direction? How can I show that its eigenvectors are the columns of $Q$? Thanks in advance!
Edit: Here's my proof that $Q$ is symmetric:
$$Q^T=(I-2vv^T)^T = I^T -2(vv^T)^T
=I -2(v^T)^Tv^T
=I - 2vv^T=Q$$
Edit2: Apparently $Q$ is also orthogonal:
$$QQ^T=(I - 2vv^T)(I - 2vv^T)=I-2vv^T-2vv^T+4vv^Tvv^T=I$$
AI: This means that $AQ = QDQ^TQ = QD$
By the definition of matrix multiplication, $A$ times the $j$-th column vector of $Q$ gives the $j$-th column vector of $QD$, which can be seen to be $\lambda_j$ times the $j$-th column vector of $Q$. |
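Both claims are easy to verify numerically (a sketch assuming numpy is available):

import numpy as np

rng = np.random.default_rng(0)
N = 5
v = rng.normal(size=(N, 1))
v /= np.linalg.norm(v)                  # enforce v^T v = 1
D = np.diag(rng.normal(size=N))
Q = np.eye(N) - 2 * v @ v.T
A = Q @ D @ Q
print(np.allclose(A, A.T))              # True: A is symmetric
print(np.allclose(A @ Q, Q @ D))        # True: column j of Q is an eigenvector for lambda_j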
H: Schwartz class estimation.
I have a function $f\in \mathcal{S}$ (i.e of Schwartz class), and I want to show there exist constants $C,k>0$ s.t
$$\|f\|_p \leq C(\sup_{x\in \mathbb{R}} |f(x)| + \sup_{x\in \mathbb{R}} |x^k f(x)|)$$
for every $ p \in [1,\infty]$.
For $p=1,\infty$ it's obvious from definition, I mean I can take f with compact support and this will prove for $p=1$, for $p=\infty$ it's trivial.
But for $ p \in (1,\infty)$ I find myself in a mess. I need to do integration by parts inductively but I don't seem to find the right approach; I guess I need to use the general Leibniz product rule here, but I don't see how to arrive at suitable constants.
Any help is appreciated.
AI: Hint:
$$
\|f\|_p^p=\int_{\mathbb R}\frac{|f(x)|^p(1+x^2)}{1+x^2}\,dx.
$$ |
H: Positive Semi-Definite matrices and subtraction
I have been wondering about this for some time, and I haven't been able to answer the question myself. I also haven't been able to find anything about it on the internet. So I will ask the question here:
Question: Assume that $A$ and $B$ both are positive semi-definite. When is $C = (A-B)$ positive semi-definite?
I know that I can figure it out for given matrices, but I am looking for a necessary and sufficient condition.
It is of importance when trying to find solutions to conic-inequality systems, where the cone is the cone generated by all positive semi-definite matrices. The question I'm actually interested in finding nice result for are:
Let $x \in \mathbb{R}^n$, and let $A_1,\ldots,A_n,B$ be positive semi-definite. When is
$(\sum^n_{i=1}x_iA_i) - B$
positive semi-definite?
I feel the answer to my first question should yield the answer to the latter. I am looking for something simpler than actually calculating the eigenvalues.
AI: There's a form of Sylvester's criterion for positive semi-definiteness, which unfortunately requires a lot more computations than the better known test for positive definiteness. Namely, all principal minors (not just the leading ones) must be nonnegative. Principal minors are obtained by deleting some of the rows and the same-numbered columns. Source
The book Matrix Analysis by Horn and Johnson is the best reference for positive (semi)definiteness that I know. |
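A direct (if exponential-time) implementation of that criterion, as a Python sketch assuming numpy; the example matrices are mine:

import numpy as np
from itertools import combinations

def is_psd(M, tol=1e-10):
    # All principal minors must be nonnegative (Sylvester's criterion for semidefiniteness).
    n = M.shape[0]
    for r in range(1, n + 1):
        for idx in combinations(range(n), r):
            if np.linalg.det(M[np.ix_(idx, idx)]) < -tol:
                return False
    return True

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
B = np.eye(2)
print(is_psd(A - B), is_psd(B - A))     # True, False: A - B = [[1,-1],[-1,1]] is PSD, B - A is not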
H: Question about conjugacy class of alternating group
This is problem 26 from Grove's "Algebra."
Suppose $K$ is a conjugacy class in $S_n$ of cycle type $(k_1,...,k_n)$, and that $K \subseteq A_n$. If $\sigma \in K$ write $L$ for the conjugacy class of $\sigma$ in $A_n$.
If either $k_{2m} > 0$ or $k_{2m+1} > 1$ for some $m$ show that $L = K$.
I can show $L \subseteq K$ but not $K \subseteq L$. I don't know how to use the "$k_{2m} > 0$ or $k_{2m+1} > 1$" hypotheses. If $k_{2m} > 0$ for some $m$ then $\sigma \in A_n$ must contain an even number of even-length cycles. Can I get a hint?
Thank you.
Edit: $k_m$ is the number of cycles of length m.
AI: Suppose you have $\sigma'\in K$; you want to show that $\sigma'\in L$.
Because $\sigma'\in K$ you have $\tau\sigma\tau^{-1}=\sigma'$ for some $\tau\in S_n$. Now if $\tau\in A_n$ then $\sigma'\in L$ immediately. The problem is if $\tau$ is odd and so not in $A_n$.
The idea is now that if we can find an odd $\rho$ such that $\rho\sigma\rho^{-1}=\sigma$, then we would have $\sigma'=\tau\rho\sigma\rho^{-1}\tau^{-1}$ with $\tau\rho\in A_n$, and we could conclude $\sigma'\in L$.
How do we find $\rho$? This is where the additional assumption on the cycle structure comes in. We know that either $\sigma$ has a cycle of even length, or $\sigma$ has two cycles of the same odd length. If the first is true, then (blah blah); otherwise the second is true and (blah blah). Can you take it from here? |
H: Defining a linear map via kernel and image.
Are linear maps defined in a 1-1 manner by setting their kernel and image? In other words, if I have a vector space and I define a set to be the kernel of my would-be linear map, and another set to be its image, would I get one well-defined linear map? (Given that my Ker and Im are okay, e.g. the dimensions are fine, etc.)
I have the following task: Let $T$ be a linear map on $\mathbb{R}^4$. Given that
the orthogonal complement of $\ker T$ is spanned by $\{(1, 2, 0, 4), (1, 0, 1, 0)\}$, and $T(1,0,1,1)=(1,2,1,1)$, give an example of such a $T$ ("you don't have to explicitly find $T(x_1,x_2,x_3,x_4)$").
Any ideas on how to do that (how can I characterize a linear map $T$ without finding an explicit formula for it)? I thought about finding its Ker and Im, but that doesn't work. Any thoughts?
AI: No. A linear map $f : V \to W$ is uniquely determined by the images of a basis of $V$. Consider given $\mathrm{Ker} \, f$ and $\mathrm{Im} \,f$. When you define $\mathrm{Ker} \, f$, you are specifying the images of some elements of a basis. (Suppose $V$ is finite dimensional, $\mathrm{Ker} \, f = \langle v_1, \cdots, v_k \rangle$, with $\{v_1, \cdots, v_k\}$ linearly independent, and extend this set of vectors to a basis $\{v_1, \cdots, v_n\}$). If $\mathrm{Ker} \, f \neq V$, i.e., if $f \neq 0$, you still have some elements of the basis that you have to map somewhere to determine the linear transformation: $v_{k+1}, \dots, v_n$.
Suppose you have one such linear map, with $f(v_{i}) = y_i$, for all $i \in {k+1, \dots, n}$, and $\mathrm{Im} \,f = \langle y_{k+1},...y_n \rangle$. Then you have a couple of ways in which you could get a different linear map while keeping $\mathrm{Ker} \, f$ and $\mathrm{Im} \,f$: for any $i \in {k+1, \dots, n}$, you could redefine $f(v_{i}) = k\cdot y_i$, with $k$ any scalar, or $f(v_{i}) = y_j$, with $j \in {k+1, \dots, n}$ such that $y_j \neq y_i$, or even define $f(v_{i})$ to be some linear combination of the vectors in the image. That is, it's very easy to change a linear map and keep the image and the kernel.
A very simple example: consider $f(x) = 2x$ and $g(x) = 3x$, two linear maps $\mathbb{R} \to \mathbb{R}$ with $\mathrm{Ker} \,f = \mathrm{Ker} \,g = {0}$ and $\mathrm{Im} \,f = \mathrm{Im} \,g = \mathbb{R}$.
To answer your second question:
You cannot uniquely determine such a map, but there are lots of them. Since you have $\mathrm{Ker} \,f$ (first you have to calculate the orthogonal complement of the given subspace), pick a basis of it, add the other element whose image you know ($(1,0,1,1)$) and add vectors in order to get a basis of $\mathbb{R}^4$. Then choose any images for the vectors on which you don't have any other conditions. The important thing is: a linear map is uniquely determined by the images of the elements of a basis of the domain, so you could give an answer in the following form, without computing an explicit equation for $T(x_1,x_2,x_3,x_4)$:
$\begin{cases}
T(v_1) = y_1\\
T(v_2) = y_2\\
T(v_3) = y_3\\
T(v_4) = y_4
\end{cases}$,
where $\{v_1, v_2, v_3, v_4\}$ is a basis of $\mathbb{R}^4$. |
H: What is the sum of this?
What is the sum of this
$$ \{n,n-1,...,3,2,1\}, ...... \{5,4,3,2,1\}, \{4,3,2,1\}, \{3,2,1\},
\{2,1\}, \{1\} $$
I am learning Data Structures and Algorithms now, and I want to calculate the time complexity of a nested loop. I suspect there is a term and a formula for this pattern.
// Time complexity of the nested loop below: the indices satisfy
// 1 <= i <= j <= k <= floor(sqrt(n - 1)), a tetrahedral number of triples.
public class Main {
    static int count = 0;

    public static void run(int n) {
        for (int i = 1; i * i < n; i++) {
            for (int j = i; j * j < n; j++) {
                for (int k = j; k * k < n; k++) {
                    System.out.println(++count);
                }
            }
        }
    }

    // Sums t*(t+1)/2 + (t-1)*t/2 + ... + 1, where t = floor(sqrt(n)).
    public static void cal(int n) {
        int total = 0;
        int temp = (int) Math.pow(n, 0.5);
        for (int i = temp; i > 0; i--) {
            total += i * (i + 1) / 2;
        }
        System.out.println(total);
    }

    public static void main(String[] args) {
        run(12345);
        cal(12345);
    }
}
AI: Each of the sums separately is $\sum_{j=1}^k j = \frac{1}{2}k^2 + \frac{1}{2}k$.
In addition, $\sum_{k=1}^n k^2 = \frac{1}{3}n^3 + \frac{1}{2}n^2 + \frac{1}{6}n$,
both of which can be proved by induction.
The total sum is then $$\sum_{k=1}^n \sum_{j=1}^k j = \sum_{k=1}^n (\frac{1}{2}k^2 + \frac{1}{2}k) = \frac{1}{2}\sum_{k=1}^n k^2 + \frac{1}{2} \sum_{k=1}^n k = \frac{1}{6} n^3 + \frac{1}{2} n^2 + \frac{1}{3} n = \frac{n(n+1)(n+2)}{6}$$ |
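The closed form is easy to check against the nested sum (a quick Python sketch):

for n in range(1, 20):
    nested = sum(sum(range(1, k + 1)) for k in range(1, n + 1))
    assert nested == n * (n + 1) * (n + 2) // 6
print("closed form n(n+1)(n+2)/6 verified for n = 1..19")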
H: Show a sequence is decreasing
I'm stuck trying to show that the following sequence is decreasing
$$a_{n} = \left(\frac{n+x}{n+2x}\right)^{n}$$ where $x>0$. I've tried treating $n$ as a real number and took derivatives but it didn't lead to anything promising. Any hints would be appreciated.
AI: Fix $x>0$. Our aim is to show that $a'(n)<0$. Note that
$$
a(n)=\left(1-\frac{x}{n+2x}\right)^n>0
$$
$$
(\log a(n))'=\frac{a'(n)}{a(n)}
$$
Hence it is enough to show that $(\log a(n))'<0$. Well, this is indeed true. Using the inequality $\log(1-t)<-t$ we obtain
$$
(\log a(n))'=
\frac{\left(n^2+3 n x+2 x^2\right)\log\left(1-\frac{x}{n+2 x}\right)+n x}{(n+x)
(n+2 x)}\leq
$$
$$
\frac{\left(n^2+3 n x+2 x^2\right)\left(\frac{-x}{n+2 x}\right)+n x}{(n+x)
(n+2 x)}=
\frac{-x^2}{(n+x)
(n+2 x)}<0
$$
And now we are done |
H: a set which is not measurable in relation to an outer measure
Given an outer measure $\eta: 2^X \to [0, \infty]$, we call a subset $E$ of $X$ $\eta$-measurable (measurable with respect to the outer measure) if for every subset $Q$ of $X$ the following holds:
$$\eta(Q) = \eta(Q \cap E) + \eta(Q \cap E^C)$$
So here's my question: do you know any examples of sets $E$ that are not $\eta$-measurable? Thanks!
AI: Whether or not $\eta$-nonmeasurable sets exist depends on $\eta$. For example, if $X$ is a finite set and $\eta(Q)$ is the number of elements in $Q$, then every set is $\eta$-measurable. For the Lebesgue measure, there are nonmeasurable sets such as Vitali set. |
H: Proving an Inequality about a function.
Assume that $f \in C^2 ([1,4])$ and that for any $ \epsilon_1 , \epsilon_2 \in (0,1) $ there exists $\lambda \in (1+ \epsilon_1 , 4 - \epsilon_2) $ such that $ | f'( \lambda ) | \leqslant | f(4 - \epsilon_2 ) | + | f ( 1 + \epsilon_1 ) | $ (in fact this follows from the mean value theorem). Anyway, if we assume this statement, how can I derive the following?
$$ \forall x \in [1,4], \;\;|f'(x)| \leqslant | f(4 - \epsilon_2 )| + | f( 1 + \epsilon_1 )| + \int_1^4 |f''(t)| dt $$
AI: We can write, by the fundamental theorem of calculus, that
\begin{align}
|f'(x)|&\leq |f'(x)-f'(\lambda)|+|f'(\lambda)|\\
&=\left|\int_{\lambda}^xf''(t)dt\right|+
|f'(\lambda)|\\\
&\leq \int_{\lambda}^x\left|f''(t)\right|dt+|f'(\lambda)|\\\
&\leq \int_1^4|f''(t)|dt+|f'(\lambda)|\\\
&\leq\int_1^4|f''(t)|dt+|f(4-\varepsilon_2)|+|f(1+\varepsilon_1)|.
\end{align} |
H: Does this explanation of derangements on Wikipedia make sense?
On the Wikipedia page on derangements, the following description is given about how to count derangements:
Suppose that there are $n$ persons numbered $1,2,\ldots,n$. Let there be $n$ hats also numbered $1,2,\ldots,n$. We have to find the number of ways in which no one gets the hat having same number as his/her number. Let us assume that first person takes the hat $i$. There are $n-1$ ways for the first person to choose the number $i$. Now there are 2 options:
A. Person $1$ does not take the hat $i$. This case is equivalent to solving the problem with $n − 1$ persons $n − 1$ hats: each of the remaining $n − 1$ people has precisely 1 forbidden choice from among the remaining $n − 1$ hats ($i$'s forbidden choice is hat $1$).
B. Person $1$ takes the hat $i$. Now the problem reduces to $n − 2$ persons and $n − 2$ hats.
Aren't the two bolded statements in contradiction? Isn't the "first person" and "person 1" the same person? Is this explanation misworded?
AI: There was a mistake in the article; the people were reversed. It should have read,
A. Person i does not take hat 1. ...
B. Person i takes hat 1. ...
I've fixed it on Wikipedia. |
H: Total order in the power set of the real line
Is it possible to define constructively a total order on the power set of the real line?
AI: You don't. It is consistent with ZF that there is no linear ordering of $\mathcal P(\mathbb R)$.
Andres Caicedo wrote a rather detailed answer to this on this MathOverflow thread. |
H: wedge product with the exterior derivative of the form $ \omega:= dz +x_1 \, dy_1+ x_2 \, dy_2 + \cdots + x_n \, dy_n $.
Write the coordinates on $ \mathbb {R} ^{2n+1}$ as $ \displaystyle{ (x_1 , y_1, x_2, y_2, \cdots ,x_n, y_n ,z)}$. Define the 1-form $ \displaystyle{ \omega:= dz +x_1 \, dy_1+ x_2 \, dy_2 + \cdots + x_n \, dy_n} $.
Compute $ \displaystyle{ \omega \wedge (d \omega \wedge d \omega \wedge \cdots \wedge d \omega )}$ where the wedge product is taken n times.
I first work out the simply cases $n=1,2,3$ and I guess that it must be
$ \displaystyle{ \omega \wedge (d \omega \wedge d \omega \wedge \cdots \wedge d \omega ) =n dz \wedge dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2 \wedge \cdots \wedge dx_n \wedge dy_n }$
but I have no proof for the general case.
AI: We have $d\omega=\sum_{j=1}^ndx_j\wedge dy_j$ hence
\begin{align}
(d\omega)^n&=\left(\sum_{j=1}^ndx_j\wedge dy_j\right)^n\\
&=\sum_{1\leq i_1,\ldots,i_n\leq n}\bigwedge_{j=1}^n(dx_{i_j}\wedge dy_{i_j})\\
&=\sum_{\sigma\in\mathfrak S_n}\bigwedge_{j=1}^n(dx_{\sigma(j)}\wedge dy_{\sigma(j)}),
\end{align}
where $\mathfrak S_n$ denotes the set of permutations of $\{1,\dots,n\}$; the restriction to permutations holds because in the second line the terms with $i_k=i_j$ for some $j\neq k$ vanish ($dx_k\wedge dx_k=0$).
We can see that when $\sigma$ is a transposition, $\bigwedge_{j=1}^n(dx_{\sigma(j)}\wedge dy_{\sigma(j)})=\bigwedge_{j=1}^n(dx_j\wedge dy_j)$ (each factor $dx_j\wedge dy_j$ is a $2$-form, so the factors commute with one another), and every permutation is a composition of transpositions, so
$$(d\omega)^n=n!\bigwedge_{j=1}^n(dx_j\wedge dy_j).$$
We conclude that
$$\omega\wedge (d\omega)^n=n!dz\wedge \bigwedge_{j=1}^n(dx_j\wedge dy_j)=n! \left(\bigwedge_{j=1}^ndx_j\wedge dy_j\right)\wedge dz.$$ |
H: Solution to simple recursive series
Possible Duplicate:
Proof of the formula $1+x+x^2+x^3+ \cdots +x^n =\frac{x^{n+1}-1}{x-1}$
Value of $\sum\limits_n x^n$
I'm working on some math programming and ran into the following recursive series.
$\displaystyle\sum\limits_{i=1}^n a_i$,
where $a_{i} = Ca_{i-1}$, $0 \leq C \leq 1$, and $a_0$ is given.
Because my constant values are always between 0 and 1, I know that the series converges, and I can't help thinking that there must be a closed-form solution.
My code looks something like
value = 1000000;
constant = 0.9999;
total = 0;
tolerance = 0.001;
while(value > tolerance)
{
value = constant * value;
total = total + value;
}
Problem is, as with the initial values provided in the snippet above, the algorithm takes a long time to complete when $C$ approaches $1$.
AI: $a_k$ can be written as:
$$a_k = a_0 \cdot \underbrace{C \cdot C \cdots C}_{k \text{ times}} = a_0 C^{k}$$
where $a_0 = 1000000$ and $C = 0.9999$ ($0 \lt C \lt 1$).
The sum is a geometric series. It has the following closed form:
$$
\sum_{k=0}^n a_k = a_0 \frac{1-C^{n+1}}{1-C}
$$
And as $n \to \infty$:
$$
\sum_{k=0}^\infty a_k = \frac{a_0}{1-C}
$$
On the other hand, if $C=1$, then $a_k=a_0$ and the sum becomes:
$$
\sum_{k=0}^n a_k = (n+1)a_0
$$
And this diverges as $n \to \infty$.
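To tie this back to the code in the question: the loop sums $a_0C, a_0C^2, \ldots$ until a term drops below the tolerance, and both the iteration count and the truncated sum have closed forms, so no loop is needed. A minimal sketch in R, reusing the names from the question's snippet:
value <- 1000000
constant <- 0.9999
tolerance <- 0.001
K <- ceiling(log(tolerance / value) / log(constant))   # iterations the loop would run (~207000 here)
total <- value * constant * (1 - constant^K) / (1 - constant)  # sum of value*constant^k, k = 1..K
value * constant / (1 - constant)                      # limit as tolerance -> 0 |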
H: Question on a transfinite construction in algebra
I am studying out of Matsumura's Commutative Ring Theory, and in the first section on modules he proves (following Kaplansky) that every projective module over a local ring is free.
My questions have more to do with an application of transfinite induction than the actual algebra.
In the proof of the above result, Matsumura uses a lemma, and it is the proof of the lemma that I have a question on. The proof can be seen at Google Books: link. I am referencing Lemma 1 on page 10.
In the lemma, we define a family of submodules $\{F_\alpha\}$ using transfinite induction. I can follow the proof, i.e., I understand the (very) basic mechanics of transfinite induction and how each $F_\alpha$ is constructed and why the desired result follows.
Here are my questions:
(1) For every ordinal $\alpha$, we define a submodule $F_\alpha$. The ordinals do not form a set. Is it then true that I cannot speak of the set of all submodules $F_\alpha$?
Matsumura writes about half way down the page that "if $F_\beta = F$ then the construction stops at $F_\beta$." Must this eventually happen? Must this construction terminate?
I am having a hard time wrapping my mind around the fact that we have modules (which are sets) and submodules (which are sets), and then all of a sudden we have a family of submodules which isn't (if I have all this right) a set.
(2) Why can I not define a sequence of submodules over $\omega$? Why do I need the full strength of transfinite induction? Why do I need all ordinals?
As is obvious, this transfinite stuff is quite new to me; I have never used these techniques before. I would appreciate any insight.
AI: The idea is to generalize the idea of induction because some things require more than just a countably infinite sequence of steps in order to construct them.
For example, the Borel sets are constructed by a transfinite induction of length $\omega_1$; that is to say, we do not exhaust this construction until we have exhausted all the countable ordinals.
Secondly, the ordinals are simply used to index the modules. $F_\alpha$ is merely a subset of $F$. We can talk about the set of subsets of $F$ which are submodules. This is like saying $A_n=\text{some set of real numbers}$ and then talking about $A_\omega=\bigcup A_n$. Does that imply that $A_n$ is a set of natural numbers? Not at all.
Lastly, these sorts of constructions are usually of the form $F_\alpha\subsetneq F_{\alpha+1}\subsetneq\ldots$ and so if the construction does not stop we can define the function $\alpha\mapsto F_{\alpha+1}\setminus F_\alpha$ which is injective (do you see why?) and its range is a subset of $\mathcal P(F)$. This means that $F$ is a set whose power "set" is actually a proper class, which contradicts the power set axiom (which says that the power set of a set is indeed a set). |
H: Flipping Cards Probability
You have a deck of cards, 26 red, 26 black. These are turned over,
and at any point you may stop and exclaim "The next card is red". If
the next card is red you win £10.
What's the optimal strategy? Prove this is the optimal strategy.
I feel like the optimal strategy is to call out "red" whenever you have flipped more black cards than red so far, but I do not know how to prove it, nor can I think of why it must be the absolute best strategy.
Thanks for any help.
AI: There is no optimal strategy (or rather, every strategy is optimal). Assuming your strategy involves exclaiming before the last card is drawn, your expected winnings are 5 pounds: for any stopping rule, the card revealed right after you exclaim has the same distribution as the bottom card of the deck, and the bottom card is red with probability $26/52 = 1/2$.
If you don't exclaim, your winnings are obviously zero.
Here's a reference:
"Games People Don't Play" (Peter Winkler) |
H: Maximizing the function $\binom{n+\epsilon}{k}\binom{2n-k}{n}$
I need to find an upper bound for
$$\binom{n+\epsilon}{k} \binom{2n-k}{n}$$ where $\epsilon>0$ and $k,n$ are positive integers with $0 \leq k \leq n$.
I think the maximum should be attained at $k=n/2$ or somewhere around there, but I'm not sure how to prove it.
AI: $$a_k:=\binom{n+\epsilon}{k}*\binom{2n-k}{n} =\frac{(n+\epsilon)!}{k!(n+\epsilon-k)!}\frac{(2n-k)!}{n!(n-k)!} $$
$$\frac{a_{k+1}}{a_k}=\frac{(n+1+\epsilon)!}{k!(n+1+\epsilon-k)!}\frac{(2n+2-k)!}{(n+1)!(n+1-k)!}\frac{k!(n+\epsilon-k)!}{(n+\epsilon)!}\frac{n!(n-k)!}{(2n-k)!} $$
$$\frac{a_{k+1}}{a_k}=\frac{(n+1+\epsilon)}{(n+1+\epsilon-k)}\frac{(2n+2-k)(2n+1-k)}{(n+1)(n+1-k)}$$
Now, solve
$$\frac{a_{k+1}}{a_k} \geq 1$$
This inequality is quadratic in $k$ and gives you the monotonicity of $a_k$, which yields the maximum.
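A quick numeric look at where the maximum actually sits, for sample values of $n$ and $\epsilon$ chosen arbitrarily (R's choose() accepts a real first argument, so $n+\epsilon$ poses no problem):
n <- 20; eps <- 0.3
k <- 0:n
a <- choose(n + eps, k) * choose(2 * n - k, n)
which.max(a) - 1   # 6 for these values, noticeably below n/2 |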
H: A map from the unit ball to itself.
Let $\Omega=B_1(0)$ and $u\in C(\overline\Omega, \mathbb{R}^n) \cap C^2(\Omega, \mathbb{R}^n)$ be a vector-valued map into the unit ball (i.e. $|u(x)|\le 1$ for all $x\in \Omega$), such that
$$|\Delta u(x)| \le |\nabla u(x)|^2$$ for all $x\in \Omega$.
How can I show that $v:= |u|^2$ is subharmonic and conclude that $\sup_\Omega|u| \le \sup_{\partial \Omega } |u|$?
Thank you for your help .
AI: Write $u=(u_1,\dots,u_n)$. For each $k=1,\dots, n$ calculate
$$\Delta (u_k^2)=\mathrm{div}\nabla (u_k^2) = \mathrm{div} (2u_k\nabla u_k)=2u_k\Delta u_k+2|\nabla u_k|^2$$
Hence, $\Delta |u|^2 = 2\sum_{k}u_k\Delta u_k+2|\nabla u|^2$. We like the second term, because it's nonnegative. What to do with the first? Use Cauchy-Schwarz, of course:
$\left|\sum_{k}u_k\Delta u_k\right|\le |u||\Delta u|\le |\Delta u|\le |\nabla u|^2$. Thus $\Delta |u|^2\ge 0$. |
H: a ring of fractions which has finitely many maximal ideals
Let $R$ be a commutative ring and $P_1,\ldots ,P_n$ be prime ideals of $R$. If $S=\bigcap_{i=1}^n (R\setminus P_i)$ then show that the ring of fractions $S^{-1}R$ has only finitely many maximal ideals.
The above result will also follow if we can show that the set $\lbrace P\in\operatorname{Spec}(R) : P\subseteq\bigcup_{i=1}^n P_i\rbrace$ has only finitely many maximal elements (where the partial order is inclusion).
AI: It is a standard result in the theory of localizations that there is a one-to-one, inclusion preserving correspondence between the prime ideals of $S^{-1}R$ and the prime ideals of $R$ that are disjoint from $S$.
Thus, the prime ideals of $S^{-1}R$ in the case where $S=R\setminus (\cup P_i)$ correspond precisely to the prime ideals of $R$ that are contained in $\cup P_i$. So we just need to determine which ideals of $R$ are maximal with respect to being contained in $\cup P_i$.
Suppose that $J$ is an ideal of $R$ that is contained in $\cup P_i$. I claim that in fact it is contained in one of the $P_j$.
We proceed by induction on $n$. If $n=1$, there is nothing to prove. If $n=2$, let $J\subseteq P_1\cup P_2$. If $J$ is contained in neither $P_1$ nor $P_2$, let $a\in J\setminus P_1$ (and hence, necessarily, $a\in P_2$) and $b\in J\setminus P_2$ (and hence, necessarily, $b\in P_1$). Then $a+b\in J\subseteq P_1\cup P_2$. But $a\in P_2$, $b\notin P_2$, so $a+b\notin P_2$; and $a\notin P_1$, $b\in P_1$, so $a+b\notin P_1$. Thus, $a+b\notin P_1\cup P_2$, a contradiction.
Assume the result holds for fewer than $n$ primes, $n\gt 2$. If $J$ is contained in the union of any proper subcollection of $P_i$, then we can apply induction. Thus, we may assume that $J$ is not contained in the union of any $n-1$ of the ideals. So for each $j$ we can pick $a_j$ such that
$$a_j \in J\setminus \bigcup_{\stackrel{i=1}{i\neq j}}^{n} P_i.$$
Note in particular that $a_j\in P_j$.
Now, consider $a_1\cdots a_{n-1}+a_n$. Since $a_i\in P_i$, then $$a_1\cdots a_{n-1}\in P_1\cap\cdots \cap P_{n-1}\subseteq P_1\cup\cdots\cup P_n.$$
But since $a_i\notin P_n$ for $i=1,\ldots,n-1$ and $P_n$ is prime, then $a_1\cdots a_{n-1}\notin P_n$. On the other hand, $a_n\in P_n\setminus(P_1\cup\cdots \cup P_{n-1})$.
But then $a_1\cdots a_{n-1}+a_n\notin P_n$, since $a_n\in P_n$ but $a_1\cdots a_{n-1}\notin P_n$; and for each $i\lt n$ we have $a_1\cdots a_{n-1}+a_n\notin P_i$, since $a_1\cdots a_{n-1}\in P_i$ but $a_n\notin P_i$. This is a contradiction, since $a_1\cdots a_{n-1}+a_n$ plainly lies in $J\subseteq P_1\cup\cdots\cup P_n$. This completes the proof of the claim.
Therefore, if $J$ is an ideal of $R$ that is maximal among those that are contained in $P_1\cup\cdots\cup P_n$, then it must be equal to one of the $P_i$ (since it is contained in at least one, and if the inclusion is proper, then that $P_i$ would contradict the maximality).
Therefore, the maximal ideals of $S^{-1}R$ are exactly the ideals that correspond to $P_1,\ldots,P_n$: if $M$ is a maximal ideal of $S^{-1}R$, then it is also a prime ideal, hence corresponds to a prime $Q$ ideal of $R$ that is contained in $P_1\cup\cdots\cup P_n$, and maximality of $M$ implies maximality of $Q$ in $R$ among ideals contained in $P_1\cup\cdots\cup P_n$, which implies that $Q$ is equal to one of the $P_i$, which means $M$ corresponds to one of the $P_i$. In particular, there are only finitely many such maximal ideals in $S^{-1}R$. |
H: Non-Decreasing Digits
A number is said to be made up of non-decreasing digits if every digit to the left of any digit is less than or equal to that digit.
For example, the four-digit number $1234$ is composed of digits that are non-decreasing. Some other four-digit numbers that are composed of non-decreasing digits are $0011$, $1111$, $1112$, $1122$, $2223$.
Notice that leading zeroes are allowed: $0000$, $0001$, $0002$ are all valid four-digit numbers with non-decreasing digits.
The question is
How many four-digit numbers are non-decreasing?
AI: Letting $i$ correspond to the first digit, $j$ to the second, $k$ to the third and $l$ to the fourth, the number of numbers is $$\sum_{l=0}^9 \sum_{k=0}^l \sum_{j=0}^k \sum_{i=0}^j 1$$ $$= \sum_{l=0}^9 \sum_{k=0}^l \sum_{j=0}^k \binom{j+1}{1}$$ $$= \sum_{l=0}^9 \sum_{k=0}^l \binom{k+2}{2}$$ $$= \sum_{l=0}^9 \binom{l+3}{3}$$ $$= \binom{9+4}{4} = 715$$
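A brute-force cross-check of the count in R, enumerating all $10^4$ digit strings:
g <- expand.grid(i = 0:9, j = 0:9, k = 0:9, l = 0:9)
sum(g$i <= g$j & g$j <= g$k & g$k <= g$l)   # 715 |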
H: Estimate total song ('coupon') number by number of repeats
If shuffle-playing a playlist 100 times resulted in [10 13 10 3 2 2] different songs being repeated [1 2 3 4 5 6] times, what is the estimate for the total number of songs? (Assuming shuffle play was completely random.)
Update: (R code)
k <- 50 # k number of songs on the disk indexed 1:k
n <- 100 # n number of random song selections
m <- 20 # m number of repeat experiments
colnum <- 10
mat <- matrix(data=NA,nrow=m,ncol=colnum)
df <- as.data.frame(mat)
for(i in 1:m){
played <- 1+floor(k*runif(n)) # actual song indices (1:k) selected
freq <- sapply(1:k,function(x){sum(played==x)})
# = number of times song with index x is being played
histo <- sapply(1:colnum,function(x){sum(freq==x)});
for(j in 1:colnum){
df[i,j] <- histo[j]
}
}
df
Resulting in: e.g. 20 distributions (V1=number of single plays, V2=number of double plays, etc):
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 15 13 11 1 2 2 0 0 0 0
2 15 12 10 1 4 0 1 0 0 0
3 12 14 7 6 3 0 0 0 0 0
4 17 16 6 4 2 0 1 0 0 0
5 17 10 12 5 0 0 1 0 0 0
6 13 15 11 6 0 0 0 0 0 0
7 10 14 9 3 2 1 1 0 0 0
8 12 17 5 6 3 0 0 0 0 0
9 9 19 8 3 1 2 0 0 0 0
10 13 9 11 6 1 0 1 0 0 0
11 16 9 12 5 2 0 0 0 0 0
12 15 9 11 6 2 0 0 0 0 0
13 19 9 7 4 4 1 0 0 0 0
14 17 11 4 7 3 1 0 0 0 0
15 11 20 8 1 3 1 0 0 0 0
16 14 12 10 5 0 2 0 0 0 0
17 9 12 8 7 3 0 0 0 0 0
18 10 15 9 4 2 0 1 0 0 0
19 14 11 12 7 0 0 0 0 0 0
20 16 14 11 3 1 1 0 0 0 0
Now I need to get from here to the Poisson modelling; my R is a bit rusty (?lmer). Any help would be appreciated...
Attempted Poisson modelling: disappointing fit?!
plot(1:colnum,df[1,1:colnum],ylim=c(0,30),
type="l",xlab="repeats",ylab="count")
for(i in 1:m){
clr <- rainbow(m)[i]
lines(1:colnum,df[i,1:colnum],type="l",col=clr)
points(1:colnum,df[i,1:colnum],col=clr)
}
df.lambda=data.frame(lambda=seq(1,5,0.1),ssq=c(NA));df.lambda
for(ii in 1:dim(df.lambda)[1]){
l <- df.lambda$lambda[ii]
ssq <- 0
for(i in 1:20){
for(j in 1:10){
ssq <- ssq + (df[i,j] - n*dpois(j,l))^2
}
}
print(ssq)
df.lambda$lambda[ii] <- l
df.lambda$ssq[ii] <- ssq
}
df.lambda
lambda.est <- df.lambda$lambda[which.min(df.lambda$ssq)]
lambda.est # 2.4
points(x <- 1:10, n*dpois(1:10,lambda.est),type="l",lwd=2)
100*dpois(1:10,3)
n/lambda.est
The estimated lambda stays around 2.3, with an $n$ estimate of around 43; the fitted curve seems very discrepant, and the fit seems to worsen with rising $n$!?
Doesn't this have to do with the fact that our repeats differ from the 'classical' Poisson setting: it's not just one event that repeats itself $x$ times, but the sum of repeats of different items (songs)?
AI: You can compute the likelihood. Let $M$ be the total (unknown) number of songs, $n_i$ the number of songs that appeared $i$ times, so $ n_1 + 2 n_2 + 3 n_3 + \cdots + 6 n_6 = N = 100$, and let $P=n_1 + n_2 +\cdots + n_6 = 40$.
Then, the likelihood is
$$\ell(M) = \left(\frac{1}{M}\right)^N {M \choose n_1} {M -n_1 \choose n_2} \cdots {M - P + n_6 \choose n_6} K = \left(\frac{1}{M}\right)^N \frac{M!}{n_1!\, n_2! \cdots n_6!\, (M-P)!} K$$
where $K$ counts the permutations of $N$ elements with $n_2, n_3, \ldots$ repeated elements, and does not depend on $M$. Taking the log-likelihood, ignoring terms that do not depend on $M$, and using the Stirling approximation, we get
$$L(M)= -N \log(M) + M ( \log(M) -1) -(M-P) (\log(M-P)-1)$$
The maximum of this log-likelihood occurs between 44 and 45.
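Since the question asked for R, here is a minimal sketch maximizing the likelihood numerically, using the exact log-likelihood (via lfactorial) instead of the Stirling approximation; the data are the counts from the question:
counts <- c(10, 13, 10, 3, 2, 2)     # songs played 1,...,6 times
N <- sum((1:6) * counts)             # 100 plays in total
P <- sum(counts)                     # 40 distinct songs heard
loglik <- function(M) -N * log(M) + lfactorial(M) - lfactorial(M - P)
M <- P:200
M[which.max(loglik(M))]              # 44, consistent with the continuous maximum above |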
H: Orthogonal fitted values
I have two regression models
$$Y=X\beta+\varepsilon,\quad \beta\in\mathbb{R}^k$$
$$Y=Z\alpha+u,\quad \alpha\in\mathbb{R}^m.$$ It is known that, using the OLS estimates $\hat{\beta},\hat{\alpha}$, the fitted values $\hat{Y}_x,\hat{Y}_z$ are orthogonal. I have to find the estimates and fitted values of $$Y=X\beta'+Z\alpha'+v.$$ I understand that the answer is $\hat\alpha'=\hat\alpha$, $\hat\beta'=\hat\beta$, $\hat{Y}_{x+z}=\hat{Y}_x+\hat{Y}_z$; it is easy to check that in the case $k=m=1$ the vectors $X$ and $Z$ are then orthogonal themselves and the result follows. But I face difficulties proving it in general: $\hat{Y}_x^T\hat{Y}_z=(X\hat\beta)^TZ\hat\alpha=Y^T\Pi_X\Pi_ZY=0$, where $\Pi_X=X(X^TX)^{-1}X^T$ is the projector. Taking $A=(X\ Z)$ and trying to show that
$$(A^TA)^{-1}A^TY=\begin{pmatrix}\hat{\beta} \\ \hat{\alpha} \end{pmatrix}=\hat{w}$$
I would need to prove $Z^TX\hat{\beta}=0$ and $X^TZ\hat{\alpha}=0$, but how? I was given an idea about linear independence among the columns of $X$ and $Z$, but I can see it only as a requirement for $\hat{w}$ to exist.
AI: You wrote: "it is known that using OLS estimates $\hat{\beta},\hat{\alpha}$ fitted values $\hat{Y}_x,\hat{Y}_z$ are orthogonal."
I'm going to hazard a guess at what you mean here: you're saying that the matrices $X$, $Z$ are such that regardless of the value of the vector $Y$, these two vectors of fitted values are orthogonal to each other.
If that's what you mean, then it would follow that every column of $X$ is orthogonal to every column of $Z$. Remember that the vector $\hat Y_x$ of fitted values is the orthogonal projection of $Y$ onto the column space of $X$. So every vector in the column space of $X$ is orthogonal to every vector in the column space of $Z$. It follows that $Z^T X=0$ and $X^T Z=0$, because every entry in the matrix product $Z^T X$ is the dot product of a row of $Z^T$ with a column of $X$, and thus is the dot product of a column of $Z$ with a column of $X$. |
H: Evaluation of a product of sines
Possible Duplicate:
Prove that $\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$
I am looking for a closed form for this product of sines:
\begin{equation}
\sin \left(\frac{\pi}{n}\right)\,\sin \left(\frac{2\pi}{n}\right)\dots\sin \left(\frac{(n-1)\pi}{n}\right),
\end{equation}
where $n$ is a fixed integer. I would like to see here a strategy that hopefully can be generalized to similar cases, not just the result (which probably can be easily found).
AI: Use the formula $\sin(x) = \frac{1}{2i}(e^{ix}-e^{-ix})$ to get
\begin{align*}
\prod_{k=1}^{n-1} \sin(k\pi/n) &= \left(\frac{1}{2i}\right)^{n-1}\prod_{k=1}^{n-1} \left(e^{k\pi i/n} - e^{-k\pi i/n}\right) \\
&= \left(\frac{1}{2i}\right)^{n-1}\left(\prod_{k=1}^{n-1} e^{k\pi i/n} \right) \prod_{k=1}^{n-1} \left(1-e^{-2k\pi i/n} \right).
\end{align*}
The first product simplifies to
$$e^{\sum_{k=1}^{n-1} k\pi i/n} = e^{(n-1)\pi i/2} = i^{n-1}$$
which cancels out with the $i^{n-1}$ in the denominator. The second product can be recognized as the polynomial $f(X) = \prod_{k=1}^{n-1} (X-e^{-2k\pi i/n})$ evaluated at $X = 1$. The roots of this polynomial are the non-trivial $n$-th roots of unity, so $f(X) = \frac{X^n-1}{X-1} = 1+X+X^2+\ldots+X^{n-1}$. Plugging in $1$ for $X$ yields
$$\prod_{k=1}^{n-1} \left(1-e^{-2k\pi i/n} \right) = f(1) = n.$$
Altogether, we have
$$\prod_{k=1}^{n-1} \sin(k\pi/n) = \frac{n}{2^{n-1}}.$$
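A quick numeric sanity check in R for a sample value of $n$:
n <- 7
prod(sin((1:(n - 1)) * pi / n))   # 0.109375
n / 2^(n - 1)                     # 7/64 = 0.109375 |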
H: Probabilistic regression on outliers
I have a given data set $D = \{ x_i, y_i \}_{i=1}^n$ for a regression problem. When I plot the data, it looks like there is an underlying parabola (2nd
order linear model) and some outliers.
I want to design an approach using a probabilistic model with a latent binary variable $\{ 0,1 \}$ indicating whether a data point is an outlier or not.
Currently I have no idea what I could do. What would the parameters be in this case, and how are they optimized? Is Expectation Maximization an idea?
AI: My recommendation is to use robust regression. It is simpler than the latent-variable approach and downweights the outliers.
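If it helps, a minimal sketch of that suggestion in R, assuming data vectors x and y (hypothetical names): MASS::rlm fits by iteratively reweighted least squares, so outlying points are downweighted automatically rather than modelled with an explicit latent indicator:
library(MASS)                            # provides rlm
fit <- rlm(y ~ poly(x, 2, raw = TRUE))   # robust fit of the 2nd-order model
plot(x, y)
ord <- order(x)
lines(x[ord], fitted(fit)[ord]) |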
H: Relationship: Rank of a matrix $\leftrightarrow$ # of eigenvalues
Can someone tell me why the number of nonzero eigenvalues (counted according to their algebraic multiplicities) of a matrix of the type $A^{*}A$, where $A$ is an arbitrary real- or complex-valued matrix, is equal to the rank of $A$? Here, in step 2, it seems to me that exactly this assertion is made, and I can't quite understand it.
I get that if
$$
\text{rank}\left(A^{*}A\right)=r
$$
then
$$
\dim\ker\left(A^{*}A\right)=n-r,
$$
if it happens that $A^{*}A$ is an $n\times n$ matrix. But how can I conclude from this that the multiplicity of the eigenvalue $0$ is $n-r$, i.e. that there are $n-r$ eigenvectors that are mapped to $0$? (Couldn't it be that only linear combinations of $n-r$ eigenvectors are mapped to $0$, since the kernel doesn't necessarily have to be spanned by the eigenvectors themselves, as far as I know?)
AI: The matrix $A^*A$ is self-adjoint, hence every eigenvalue is real and the matrix is diagonalizable. That means that the algebraic and geometric multiplicities of every eigenvalue agree. In particular, the nullity of $A^*A$ equals the algebraic multiplicity of the eigenvalue $0$; since the sum of the algebraic multiplicities of all eigenvalues must equal $n$ (since the characteristic polynomial of $A^*A$ must split, as all eigenvalues are real) it follows that the rank of $A^*A$ equals the sum of the algebraic multiplicities of the nonzero eigenvalues.
Now, all that remains is to show that the rank of $A^*A$ equals the rank of $A$. Since the nullspace of $A$ is contained in the nullspace of $A^*A$, we have that $\mathrm{nullity}(A)\leq\mathrm{nullity}(A^*A)$. On the other hand, if $\mathbf{v}\notin \mathrm{nullspace}(A)$, then
$$0\lt \langle A\mathbf{v},A\mathbf{v}\rangle = \langle A^*A\mathbf{v},\mathbf{v}\rangle$$
so $A^*A\mathbf{v}\neq\mathbf{0}$. Thus, $\mathrm{nullspace}(A^*A)=\mathrm{nullspace}(A)$, which gives $\mathrm{nullity}(A^*A)=\mathrm{nullity}(A)$.
Since $A$ and $A^*A$ have the same number of columns, it follows as well that $\mathrm{rank}(A^*A) = \mathrm{rank}(A)$. |
H: Is this the correct counter example?
I encountered the following problem in Berkeley problems in Mathematics:
(Sp84): Prove or supply a counterexample: If the function $f$ from $\mathbb{R}$ has both a left limit and a right limit at each point of $\mathbb{R}$, then the set of discontinuities of $f$ is, at most, countable.
I found the book claimed this is right. But I have the following counterexample:
Let $f=0$ on $\mathbb{R}$ except on the Cantor set, and let $f=1$ on the Cantor set. Then $f$ has both a left limit and a right limit at every point of $\mathbb{R}$, but the set of discontinuities is the Cantor set, whose cardinality is $c$.
This should make sense since the Cantor set is nowhere dense, so the left/right limit at every point should be $0$. I just do not know why this counterexample does not work; or maybe the book means all non-removable discontinuities?
AI: It’s true that your function is discontinuous at each point of the Cantor set, but there is no point of the Cantor set where it has both a left and a right limit. It has a left limit only at the right endpoints of the open intervals that were removed in the construction of the middle-thirds Cantor set, and it has a right limit only at the left endpoints of those intervals. These two sets are disjoint. Moreover, both are countable, so at most points of the Cantor set the function has neither a left nor a right limit. |
H: Inherited Morita similar rings
Let $R$ and $S$ be Morita similar rings.
Suppose the ring $R$ has the following property: every right ideal is injective. How do I prove that the ring $S$ has this property?
Suppose the ring $R$ has the following property: $R$ is finitely generated. How do I prove that the ring $S$ has this property?
AI: Every right ideal of $R$ is injective iff $R$ is semisimple iff all right $R$-modules are semisimple.
Since the Morita equivalence sends semisimple modules to semisimple modules, all of $S$'s right modules are semisimple as well, so $S$ is semisimple.
Your second question is a little strange... for Morita theory we usually require $R$ to have unity and so $R$ will always be cyclic... and $S$ too.
If you mean: "Finitely generated modules will correspond to each other through a Morita equivalence between $R$ and $S$." then let's try that.
Prove that $M$ is f.g. iff for every collection of submodules $\{M_i\mid i\in I\}$ of $M$, $\sum M_i=M$ implies $M$ is the sum of finitely many of the $M_i$. This provides a module-theoretic description of finite generation that you can see is preserved by Morita equivalence. |
H: A particular isomorphism between Hom and first Ext.
Let $R$ be a commutative ring and $I$ an ideal of $R$.
How do I prove that $\operatorname{Ext}^1_R(R/I,R/I)$ is isomorphic to $\operatorname{Hom}_R(I/I^2,R/I)$?
This question is an exercise from the course, but it has a chance of being false.
AI: There's really only one thing you can try when confronted with such a problem: find a suitable short exact sequence. Here's an obvious one:
$$0 \to I \to R \to R/I \to 0$$
Applying $\textrm{Ext}_R^\bullet$, we get
$$0 \to \textrm{Hom}_R(R/I, R/I) \to \textrm{Hom}_R(R, R/I) \to \textrm{Hom}_R(I, R/I) \to \textrm{Ext}_R^1(R/I, R/I) \to 0$$
since $R$ is a projective $R$-module. Now, observe that $\textrm{Hom}_R(R, R/I) \cong R/I$ by the universal property of $R$ as a free $R$-module, while $\textrm{Hom}_R(I, R/I) \cong \textrm{Hom}_R(I/I^2, R/I)$ by considering the action of any element in $I$. On the other hand, the map $\textrm{Hom}_R(R, R/I) \to \textrm{Hom}_R(I, R/I)$ must be the zero map. Hence,
$$\textrm{Hom}(I/I^2, R/I) \cong \textrm{Ext}_R^1(R/I, R/I)$$
as required. |
H: What is the correct way to solve $\sin(2x)=\sin(x)$
I've found two different ways to solve this trigonometric equation
$\begin{align*}
\sin(2x)=\sin(x) \Leftrightarrow \\\\ 2\sin(x)\cos(x)=\sin(x)\Leftrightarrow \\\\ 2\sin(x)\cos(x)-\sin(x)=0 \Leftrightarrow\\\\ \sin(x) \left[2\cos(x)-1 \right]=0 \Leftrightarrow \\\\ \sin(x)=0 \vee \cos(x)=\frac{1}{2} \Leftrightarrow\\\\ x=k\pi \vee x=\frac{\pi}{3}+2k\pi \vee x=\frac{5\pi}{3}+2k\pi \space, \space k \in \mathbb{Z}
\end{align*}$
The second way was:
$\begin{align*}
\sin(2x)=\sin(x)\Leftrightarrow \\\\ 2x=x+2k\pi \vee 2x=\pi-x+2k\pi\Leftrightarrow \\\\ x=2k\pi \vee3x=\pi +2k\pi\Leftrightarrow \\\\x=2k\pi \vee x=\frac{\pi}{3}+\frac{2k\pi}{3} \space ,\space k\in \mathbb{Z}
\end{align*}$
What is the correct one?
Thanks
AI: These answers are equivalent and both are correct. Placing angle $x$ on a unit circle, your first decomposition gives all angles at the far west and east sides, then all the angles $60$ degrees north of east, then all the angles $60$ degrees south of east.
Your second decomposition takes all angles at the far east side first. Then it takes all angles spaced one-third around the circle starting at 60 degrees north of east. You have the same solution set either way. |
H: properties of a special set in the proof that there is no Lebesgue measure for all subsets of $\mathbb{R}$
I am working through a proof that there can be no measure defined on all subsets of $\mathbb{R}$ such that
$\lambda([a,b]) = b - a$
$\lambda(A) = \lambda(A + \{c\})$
First a set $A \subset [0,1]$ is constructed with
$$
\forall x \in [0,1] ~ \exists ! y \in A ~:~ x - y \in \mathbb{Q}
$$
(the notation $\exists !$ means there exists exactly one such element). Then the following set is considered
$$
B := \bigcup_{r \in [-1,1] \cap \mathbb{Q}} (A + \{r\})
$$
and it is claimed that it has the property
$$
[0,1] \subset B \subset [-1,2]
$$
but I don't see why these inclusion relations hold.
AI: Let $x\in[0,1]$. There exists $y\in A\subseteq[0,1]$ such that $x-y\in\mathbb{Q}$. Moreover, since $x,y\in[0,1]$, $|x-y|\leq 1$, which shows that $[0,1]\subseteq B$.
Now take any $x\in A$. It follows that $x\in[0,1]$. Let $r$ be a rational number between $-1$ and $1$. So $x+r\geq x+(-1)=x-1\geq 0-1=-1$. Also $x+r\leq x+1\leq 1+1=2$. So $B\subseteq[-1,2]$. |
H: How to guess whether a language is regular or not
I have a few languages, and I am not told whether they are regular or not.
If I had to prove their irregularity, that would not be difficult.
How do I go about determining whether a language is regular or irregular, and then justifying my answer?
AI: What I do is the following:
If I suspect that it is a regular language (usually by checking whether you need to save some information or whether you have to be able count):
First I check whether I can easily come up with a DFA / NFA.
If that does not work, I check whether it is the
union/intersection/... of languages that are pretty well known to be
regular. (Remember that regular languages are closed under union etc.)
If I suspect that it is irregular (usually this happens when the language does require counting, or 'saving' data. Or I compare them to a list of well known irregular languages, and check whether they are alike):
First, I check if I can write $A \cup L = C$,
where $A$ is regular and $C$ is irregular; then $L$ can't be regular,
since if $L$ were regular, $C = A \cup L$ would be regular too. This
method works for any operation regular languages are closed under.
If that does not work, then I check if it is easy to use the pumping
lemma.
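For a concrete instance of the pumping-lemma route, on a standard textbook example: take $L=\{a^nb^n : n\ge 0\}$. If $L$ were regular with pumping length $p$, consider $w=a^pb^p\in L$. Any decomposition $w=xyz$ with $|xy|\le p$ and $|y|\ge 1$ forces $y$ to consist of $a$'s only, so pumping gives $xy^2z=a^{p+|y|}b^p\notin L$, a contradiction; hence $L$ is irregular. |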
H: Does a surjective ring homomorphism have to be surjective on the unit groups?
I know ring homomorphisms map units to units, which made me curious about the following. Suppose $f:R\to R'$ is a surjective ring homomorphism, mapping $1$ to $1'$. Is it necessarily surjective from $U(R)\to U(R')$?
I know if $f(u)$ is a unit in $R'$ with $f(w)$ its inverse, then $f(uw)=f(wu)=1'$, but I see no reason to conclude $uw=wu=1$ in $R$ without assuming $f^{-1}(1')=\{1\}$. But I can't find a counterexample, so I'm not sure whether it's true or not.
AI: Look at the quotient map $\mathbf Z \to \mathbf Z/5\mathbf Z$. The residue class of $2$ is a unit in the target, but the only units of $\mathbf Z$ are $\pm 1$. |
H: Gauss Multiplication?
$xy = 2^nx_Ly_L + 2^{n/2}(x_Ly_R + x_Ry_L)+ x_Ry_R$
where $n$ is the size of the number (in bits) and $x_{L/R}$ and $y_{L/R}$ are the right and left bit halves of $x$ and $y$..
Initially, you have 4 multiplication problems:
$x_Ly_L$
$x_Ly_R$
$x_Ry_L$
$x_Ry_R$
Gauss's multiplication "simplifies" this to 3 problems, reducing computation time by 25%.
$x_Ly_L$
$x_Ry_R$
$(x_L + x_R)(y_L + y_R)$
This is based on the fact that $(x_Ry_L + x_Ly_R) = (x_L + x_R)(y_L + y_R) - x_Ly_L - x_Ry_R$. But doesn't $(x_L + x_R)(y_L + y_R) = x_Ly_R + x_Ry_L + x_Ly_L + x_Ry_R$? Therefore doesn't the third multiplication still require $x_Ry_L$ and $x_Ly_R$? You could argue that instead of using the distributive property, you are adding the numbers in the brackets first, resulting in just one multiplication. However, that multiplication would have twice the size. Therefore you would technically be reducing the problems to three, but the third one would be twice the size of the other two, resulting in it essentially taking the same amount of time. Evaluating $4$ problems of size $n/2$ is equal to evaluating $2$ problems of size $n/2$ and one problem of size $n$, since $4(n/2) = 2(n/2) + n$. This leads to my question: how exactly does Gauss multiplication save time?
AI: You haven't said what $x_L$ and the rest stand for. I take it we are talking about the left and right halves of $x$ and $y$. Then $x_L+x_R$ is not twice the size, or rather the length, of the individual terms; it is at most one bit longer.
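To make the three-multiplication trick concrete, here is one level of the split in R, in base $10$ for readability (the base-$2^{n/2}$ version is identical in structure); the sample numbers are arbitrary:
x <- 1234; y <- 5678; B <- 100   # split x = xL*B + xR, y = yL*B + yR
xL <- x %/% B; xR <- x %% B
yL <- y %/% B; yR <- y %% B
p1 <- xL * yL                    # multiplication 1
p2 <- xR * yR                    # multiplication 2
p3 <- (xL + xR) * (yL + yR)      # multiplication 3: operands at most one digit longer
mid <- p3 - p1 - p2              # equals xL*yR + xR*yL with no extra multiplication
p1 * B^2 + mid * B + p2 == x * y # TRUE |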
H: Take set of values and change scale
I have a large array of variable-to-integer key pairs. The integer values range from -5 to 5. I'd like to scale that data to a range of 0 to 2. Logically, -5 would become 0, 0 would become 1, and 5 would become 2. How should I go about doing this on a large scale and with different input ranges?
AI: If $x$ ranges from $a$ to $b$, then $x-a$ ranges from $0$ to $b-a$, so $(x-a)/(b-a)$ ranges from $0$ to $1$, so $(d-c)(x-a)/(b-a)$ ranges from $0$ to $d-c$, so $${(d-c)(x-a)\over b-a}+c$$ ranges from $c$ to $d$.
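In code this is a single vectorized expression; a sketch in R with the ranges from the question (input $[a,b]=[-5,5]$, output $[c,d]=[0,2]$), where vals stands for a hypothetical vector of your integer values:
rescale <- function(x, a, b, c, d) (d - c) * (x - a) / (b - a) + c
vals <- c(-5, 0, 5)
rescale(vals, -5, 5, 0, 2)   # 0 1 2 |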
H: Formal definition of Big-O Notation?
Big O notation is formally defined as:
Let $f(n)$ and $g(n)$ be functions from positive integers to positive reals. We say $f = O(g)$ (which means that "$f$ grows no faster than $g$") if there is a constant $c>0$ such that $f(n) \leq c \cdot g(n)$.
Using this definition how is:
$n^2 + n$ simplified to $n^2$
$n + 20$ simplified to $n$
I don't see a $c$ constant that defines their relationship. How do the above examples work?
AI: Try to prove that if $n$ is a positive integer then $n^2+n\le2n^2$ and $n+20\le21n$.
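To spell the hint out: for every positive integer $n$ we have $n^2+n\le n^2+n^2=2n^2$, so $c=2$ works for $n^2+n=O(n^2)$; and $n+20\le n+20n=21n$, so $c=21$ works for $n+20=O(n)$. The lower-order terms are simply absorbed into the constant. |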
H: For a finite field of characteristic $p$, $p-1$ divides $|F|-1$?
Let $F$ a finite field of characteristic $p$. Show that $p-1$ divides $|F|-1$. (We shall see later that $|F|$ is a power of $p$.)
I am able to solve this by first showing $|F|$ is a power of $p$. If $q$ divides $|F|$ for another prime $q$, then by Cauchy's theorem some element $x$ has order $q$ in the additive group, so $qx=0$. But $qx=px=0$, so the order of $x$ divides $\gcd(p,q)=1$, a contradiction. Thus $|F|=p^n$ for some $n$, and $p-1$ divides $|F|-1$ since $p^n-1=(p-1)(p^{n-1}+\cdots+1)$.
The remark at the end of the exercise that we shall later see $|F|$ is a prime power suggests to me that there may be an alternative proof that doesn't resort to this fact. Is there some obvious observation I'm overlooking, because the proof I believe I have seems simple enough.
AI: A simpler way to prove that the order of $F$ is a power of $p$ is to note that $F$ is a vector space over its prime field $\mathbb{F}_p$. Thus, as an abelian group, it is isomorphic to a direct sum of copies of $\mathbb{Z}/p\mathbb{Z}$, hence has order $p^n$ for some $n\gt 0$.
To show the desired result without showing that $|F|$ is a power of $p$, note that the group of units of $F$ contains the group of units of $\mathbb{F}_p$, which has order $p-1$. By Lagrange's Theorem, $p-1$ must divide $|F|-1$. |
H: Automorphisms in unit disk
Let $\mathbb{D}=\{|z|<1,\ z\in\mathbb{C}\}$. Are there any automorphisms of $\mathbb{D}$ other than the Blaschke factors $\displaystyle B_{a}(z)=\frac{z-a}{1-\overline{a}z},\ a\in\mathbb{D}$?
I denote with $\overline{a}$ the complex conjugate of $a$.
Thank you for your time,
Chris
AI: The rotations $f(z) = ze^{i\theta}$, $\theta \in \mathbb{R}$, are also conformal automorphisms of $\mathbb{D}$.
In fact, if you denote by $\Psi_a$, for $a \in \mathbb{D}$, the Blaschke factor above, you can prove that all conformal automorphisms are of the form $f(z) = \Psi_a(z)e^{i\theta}$. |
H: Is the characteristic of simple rings necessarily prime when finite?
I'm familiar that fields and integral domains of finite characteristic have prime characteristic. What about the case when $R$ is a simple ring, so that the only ideals are $0$ and $R$?
I've been trying to use the usual tricks of assuming the characteristic is composite, but even though $(a)=R$ for $a\neq 0$, I'm having trouble since I can't assume that $a$ is left or right invertible since $R$ is not necessarily commutative. Is it still possible to conclude that simple rings of finite characteristic have necessarily prime characteristic?
AI: Yes. The result holds.
Note that for every $n\gt 0$, the set $nR=\{na\mid a\in R\}$ is an ideal; indeed, it is a subgroup, and if $r\in R$ and $na\in nR$, then $r(na) = n(ra)\in nR$ and $(na)r = n(ar)\in nR$.
Note that $RR$ is an ideal. Therefore, since we are assuming that $R$ is simple, either $RR=R$ or $RR=0$. Let's consider the former case first (which will hold, e.g., if $R$ has a unity).
Suppose that the characteristic is $n$ and $n=pq$; if $1\lt p$, then $qR\neq \{0\}$, and so by the simplicity of $R$ it follows that $qR=R$. Therefore, $pR = p(RR) = (pR)R = (pR)(qR) \subseteq (pq)R = nR = (0)$, which shows that $pR=0$, so $n|p$, showing that $p=n$. Thus, $n$ is prime: in any factorization $n=pq$, either $p=1$ or $p=n$.
If $R^2\neq R$, then by simplicity of $R$ we must have $R^2=0$. In particular, $ab=0$ for all $a,b\in R$, so $R$ is a ring with zero multiplication. Therefore, every subgroup of $R$ is an ideal, and so the underlying additive group of $R$ must be simple. Since the only simple abelian groups have prime order, it follows that the characteristic of $R$ is prime. (I do not consider the trivial ring to be "simple", since it does not have exactly two ideals). |
H: Antiderivative simply connected region
Why do analytic functions always have an antiderivative on a simply connected region?
Thank you for your time,
Chris
AI: Cauchy's Theorem tells us that $\displaystyle\int_{\gamma_1} f(z)\, dz = \displaystyle\int_{\gamma_2} f(z)\, dz$ whenever $\gamma_1, \gamma_2$ are homotopic, simple curves sharing the same endpoints in $f$'s domain of analyticity $\Omega$. We can use this fact to define the antiderivative of $f$ in $\Omega$ as follows: $F(\omega) = \displaystyle\int_{\gamma} f(z)\, dz$, where $\gamma$ is a contour connecting a fixed point $z_0\in \Omega$ to $\omega$. Analyticity and simple connectedness are required so that $F$ is well-defined (in general it is not). Using the parametrization $\rho(t) = \omega_0 + (\omega - \omega_0)t$, $0\le t \le 1$, you can show that the difference quotient satisfies $\frac{F(\omega) - F(\omega_0)}{\omega - \omega_0} = \displaystyle\int_0^1 f(\omega_0 + t(\omega - \omega_0))\, dt$. |
H: Coefficients in products and powers of large polynomials
Let $f\in \mathbb{Z}[x_1,\dots,x_n]$ be a polynomial. I want to show that a certain monomial $m$ shows up with non-zero coefficient in the $r^{th}$ power of $f$.
If you're lucky, you can do this as follows:
Regrade $\mathbb{Z}[x_1,\dots,x_n]$ such that $m$ appears with non-zero coefficient in the initial form $(\textrm{init}f)^r$. Then because $m$ has highest weight, no trailing terms of $f^{r}$ can cancel $m$. This means you can take a hard calculation with a long, complicated polynomial and turn it into an easy calculation with a short polynomial.
Sadly, $m$ may be interior in the Newton polytope of $f^r$, in which case the above trick has no hope of working.
I would be grateful for other techniques people use for showing that monomials show up with non-zero coefficent in powers (and products) of polynomials.
Thanks!
(In fact, I really care about the situation $f\in \mathbb{F}_p[x_1,\dots,x_n]$ and $r = p-1$ for some prime $p$.)
AI: The coefficient of $\prod_{i=1}^nx_i^{k_i}$ in $f$ is the same thing as $(\partial_{(\bar{k})} f)|_{\bar{x}=\bar{0}}$, where $\partial_{(\bar{k})}=\prod_{i=1}^n\frac{\partial_{x_i}^{k_i}}{k_i!}$. You can take advantage of this to simplify calculations. As a small example, consider $$\begin{align}f(x,y,z) &= 1+x +y -2xy + 3xz^2 & m&=x^2y^2z^2\end{align}$$ with $r = 5$.
$$\begin{align}
(\partial_{(2,2,2)} f^5)|_{(x,y,z)=\bar{0}} & =\frac{1}{8}(\partial_{x}^2\partial_{y}^2\partial_{z}^2 f^5)|_{(x,y,z)=\bar{0}}\\
& =\frac{1}{8}(\partial_{x}^2\partial_{y}^2\partial_{z} 5f^4f_z)|_{(x,y,z)=\bar{0}}\\
& =\frac{1}{8}\partial_{x}^2\partial_{y}^2 (20f^3f_z^2+5f^4f_{zz})|_{(x,y,z)=\bar{0}}\\
\end{align}
$$
Now that we will never again differentiate by $z$, it is acceptable to evaluate at $z=0$. Let $g(x,y)=f(x,y,0)=1+x+y-2xy$. Note that $f_z(x,y,0)=0$ and $f_{zz}(x,y,0)=6x$, simplifying what we have so far. We are left with:
$$\begin{align}
(\partial_{(2,2,2)} f^5)|_{(x,y,z)=\bar{0}} & =\frac{1}{8}\partial_{x}^2\partial_{y}^2 (30xg^4)|_{(x,y)=\bar{0}}\\
& =\frac{1}{8}\partial_{x}^2\partial_{y} (120xg^3g_y)|_{(x,y)=\bar{0}}\\
& =\frac{1}{8}\partial_{x}^2(360xg^2g_y^2+120xg^3g_{yy})|_{(x,y)=\bar{0}}\\
& =\frac{1}{8}\partial_{x}^2(360xg^2g_y^2)|_{(x,y)=\bar{0}}\\
\end{align}
$$
where we used the fact that $g_{yy}=0$. As we will never again differentiate with respect to $y$... let $h(x)=g(x,0)=1+x$.
$$\begin{align}
(\partial_{(2,2,2)} f^5)|_{(x,y,z)=\bar{0}} & =\frac{1}{8}\partial_{x}^2(360xh^2(1-2x)^2)|_{(x)=\bar{0}}\\
& =\frac{1}{8}\partial_{x}^2(360x(1+x)^2(1-2x)^2)|_{(x)=\bar{0}}\\
& =\frac{1}{8}\partial_{x}^2(360x(1+2x+x^2)(1-4x+4x^2))|_{(x)=\bar{0}}\\
& =\frac{1}{8}\partial_{x}^2(360x(1+2x)(1-4x))|_{(x)=\bar{0}}\\
& =\frac{1}{8}\partial_{x}^2(360x(1-2x-8x^2))|_{(x)=\bar{0}}\\
& =\frac{1}{8}\partial_{x}^2(-720x^2)|_{(x)=\bar{0}}\\
& = \frac{1}{8}(-720)(2)\\
& = -180
\end{align}
$$
where in this final block we have stayed conscious of the degree to which we are differentiating and that we will evaluate at $0$ .
I think this strategy (using derivatives and evaluating at $0$) will generally be more efficient than directly computing $f^r$. This should also be applicable to products of polynomials.
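One can also cross-check the final value numerically with a Kronecker substitution: setting $y=x^{11}$ and $z=x^{121}$ is injective on the monomials of $f^5$ (every $x$- and $y$-exponent stays below $11$), so $[x^2y^2z^2]f^5$ becomes the coefficient of $x^{2+2\cdot 11+2\cdot 121}=x^{266}$ in a univariate polynomial. A sketch in R:
polymul <- function(a, b) {   # multiply coefficient vectors (ascending powers)
  res <- numeric(length(a) + length(b) - 1)
  for (i in seq_along(a))
    res[i + seq_along(b) - 1] <- res[i + seq_along(b) - 1] + a[i] * b
  res
}
f <- numeric(244)             # degrees 0..243 after substitution
f[0 + 1] <- 1                 # 1
f[1 + 1] <- 1                 # x
f[11 + 1] <- 1                # y -> x^11
f[12 + 1] <- -2               # -2xy -> -2 x^12
f[243 + 1] <- 3               # 3xz^2 -> 3 x^(1+2*121)
p <- Reduce(polymul, rep(list(f), 5))   # f^5
p[266 + 1]                    # -180, matching the computation above |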
H: Harmonic Function which cannot be described as real part of a holomorphic function
We define $f:\mathbb{C}\rightarrow\mathbb{C},\ f(z)=\log|z|$. $f$ is harmonic. Why can't we describe $f$ as a real part of a holomorphic (analytic) function?
Thank you very much for your time,
Chris
AI: Any harmonic function on a simply connected region in $\mathbb{R}^2$ (seen as $\mathbb{C}$) is indeed the real part of a holomorphic function on the same region. The function $f(z) = \log |z|$ is not defined at zero, and so the domain you're referring to is actually $\mathbb{C}\backslash\{0\}$. This is the reason why it doesn't globally represent the real part of a holomorphic function.
The complex logarithm is the typical example of many related notions in complex analysis, notably "multi-valued" functions, functions defined on Riemann surfaces (multi-sheeted surfaces), and functions that fail to be analytically continued properly. The harmonic function $\log |z|$ is indeed the real part of the complex logarithm, but its imaginary part is not well defined because of its reliance on the complex argument (which is only defined mod $2\pi$). Any introductory text on complex analysis will discuss all of this in depth. |
H: Homotopy type of 8-holed torus
I would like to determine the homotopy type of a torus with 8 punctures.
(I have come across this problem studying deformations of discontinuous groups of Heisenberg groups...)
Other than trying really hard to visualize, are there any other methods for finding homotopy types of punctured surfaces or the like?
AI: A torus with one puncture is homotopy equivalent to the wedge of two circles, which is a $K(F_2, 1)$. To see this, think of the torus as a square with sides identified. Puncturing the square allows you to retract onto the $1$-skeleton.
It follows that a torus with $n+1$ punctures is homotopy equivalent to the wedge of $n+2$ circles, which is a $K(F_{n+2}, 1)$. To see this, start with the wedge of two circles and thicken some part of it back up to $2$ dimensions, then puncture it. |
H: Recalling result of tensor product of polynomial rings
Let $k$ be a field (alg closed if you want). Now let $I_{i}$ be an ideal of $k[x_{i}]$ for every $i \in \{1,2,\ldots,n\}$. Is it always true that:
$$k[x_1,x_2,\ldots,x_n]/ \langle I_1,I_2,\ldots,I_n \rangle \cong k[x_1]/I_1 \otimes_k k[x_2]/I_2 \otimes_k \cdots \otimes_k k[x_n]/I_n$$
AI: Seems like it?
Compose the natural inclusion $k[x_i] \rightarrow k[x_1, ..., x_n]$ with the quotient map; the kernel is just $I_i$. So the universal property of tensor products induces a map from the RHS to the LHS.
The map in the other direction takes a monomial to the corresponding tensor product, and extends linearly. It is simple to check that these are inverses. |
H: Offset Alternating Series
I have the following alternating series that I would like to determine whether it is absolutely convergent, conditionally convergent, or divergent:
$$ \sum\limits^{\infty}_{n=1} \frac{1+2(-1)^n}{n} $$
I have applied some tests and I find it reasonable to conclude that it is divergent.
As a sum of two series:
$$ \sum\limits^{\infty}_{n=1} \frac{1}{n} + \sum\limits^{\infty}_{n=1} \frac{2(-1)^n}{n} $$
I believe a convergent series, when added to a divergent series, results in a divergent series. If this isn't a fact, then I would still be left saying that the approach is inconclusive.
Using the Alternating Series Test, with:
$$ a_n = \frac{1+2(-1)^n}{n} $$
although this isn't of 'proper form' $ \sum\limits^{\infty}_{n=1} (-1)^n a_n $ the limit of $a_n $ does approach zero as $ n \rightarrow \infty $. As for monotonically decreasing, the limit of the ratio of absolute terms is divergent for $ n $ even and inconclusive for $ n $ odd, which has me concluding divergent by The Ratio Test as well as not monotonically decreasing, where:
$$ \lim\limits_{n \rightarrow \infty} \left\lvert \frac{2(-1)^n + 1}{2(-1)^n - 1} \frac{n}{n+1} \right\rvert $$
Am I on the right track here? Am I making any really improper assumptions? Was there a better way to go about with the proof?
Thanks!
AI: To use the alternating series test, you need to rewrite the series first; you cannot use it directly with the $a_n$ you are given.
Define $b_n$ as follows: when $n$ is even, $b_n = \frac{3}{n}$; when $n$ is odd, $b_n = \frac{1}{n}$. Then $(-1)^nb_n = a_n$, so you can apply the alternating test to your original series by writing the series as $\sum (-1)^n b_n$.
Unfortunately, the alternating series test does not apply: remember that you need not only that $\lim\limits_{n\to\infty} b_n=0$, but also that $b_{n+1}\lt b_n$ for all sufficiently large $n$. However, for $n$ odd, we have $b_n\lt b_{n+1}$, since $\frac{1}{n}\lt \frac{3}{n+1}$ if and only if $n+1\lt 3n$, which holds for all $n$. So the alternating series test cannot be used.
The Ratio Test is likewise inconclusive: the limit does not exist. To see this, note that for $n$ odd, we have
$$\left|\frac{a_{n+1}}{a_n}\right| = \frac{\quad\frac{3}{n+1}\quad}{\frac{1}{n}} = \frac{3n}{n+1}$$
which converges to $3$ as $n\to\infty$; whereas for $n$ even, we have
$$\left|\frac{a_{n+1}}{a_n}\right| = \frac{\quad\frac{1}{n+1}\quad}{\frac{3}{n}} = \frac{n}{3(n+1)}$$
which converges to $\frac{1}{3}$. Since the sequence of odd terms and the sequence of even terms of $|\frac{a_{n+1}}{a_n}|$ converge to different things,
$$\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|$$
does not exist. So the Ratio Test is inconclusive.
As to your first method, it's fine: indeed, the sum of a divergent series and a convergent series must diverge. To see this, note that if $\sum r_n$ and $\sum(r_n+s_n)$ both converge, then so does $\sum ((r_n+s_n)-r_n) = \sum s_n$, since the difference of convergent series converges. Thus, if any two of $\sum r_n$, $\sum s_n$, and $\sum (r_n+s_n)$ converge, then so does the third, and in particular if any one of the series diverges, then at least two must diverge. |
H: Confusion regarding convex and affine set
I am a bit confused regarding convex and affine set. When they mention set, does it mean the set consisting of all the points belonging to the line or shape respectively?
AI: Yes. A line or a shape is typically defined as a set of points, namely the set of points that you think of as "belonging to" or "lying on" the shape. |
H: Understanding the Analytic Continuation of the Gamma Function
So my book proves the convergence of $\Gamma(z) = \int_0^{\infty}t^{z-1}e^{-t}dt$ in the right half plane $Re(z) > 0$, and then goes on to prove the initial recurrence relation $\Gamma(z+1)=z\Gamma(z)$ by applying integration by parts to $\Gamma(z+1)$:
$$\int_0^{\infty}t^{z}e^{-t}dt = -t^ze^{-t}|_0^{\infty} + z\int_0^{\infty}t^{z-1}e^{-t}dt$$
The book explicitly states this equality to be true only in the right half plane, since otherwise $-t^ze^{-t}|_0^{\infty} = \infty$ instead of equaling zero. With this initial recurrence relation we are supposedly able to analytically continue the Gamma function to $Re(z) > -1$ (not including the origin) by writing the relation in the form:
$$\Gamma(z) = \frac{\Gamma(z+1)}{z}$$
What I don't understand is this relation is still only true in the right half plane, since otherwise $-t^ze^{-t}|_0^{\infty}\neq 0$. I don't see what reason we have to believe that, for instance, $\Gamma(-\frac{1}{2}) = \frac{\Gamma(\frac{1}{2})}{-\frac{1}{2}}$.
Furthermore $\int_0^{\infty}t^{z-1}e^{-t}dt$ is clearly not convergent in the left half plane, so I can't even imagine why it would be plausible to think that a recurrence relation directly based on it could possibly lead to a genuine analytic continuation of its domain.
AI: As you have said, $\displaystyle \int_0^{\infty}t^{z-1}e^{-t}dt$ does not even make sense on the left half plane.
However, if we define $\Gamma(z) = \dfrac{\Gamma(z+1)}{z}$ on the left half plane, (and this is not equal to the integral above on the left half plane), this is consistent with the fact that $\Gamma(z) = \displaystyle \int_0^{\infty}t^{z-1}e^{-t}dt$ on the right half plane and also provides us a possible extension on the left half plane except at negative integers.
Further, this has to be the only analytic continuation since $\Gamma(z) = \dfrac{\Gamma(z+1)}{z}$ is analytic everywhere except at negative integers and matches with the integral on the right half plane and hence by uniqueness of analytic continuation this is the only possible extension.
EDIT
To see that the extension is in fact analytic except at the negative integers, you proceed strip by strip. On the right half plane, you have $\Gamma(z)$ defined by the integral $\displaystyle \int_0^{\infty} t^{z-1} \exp(-t) dt$. You can verify that the integral satisfies the functional equation and is in fact analytic.
We will now extend the function to the strip corresponding to $(-1,0)$ i.e. the strip $\text{Re}(z) \in (-1,0)$ based on the functional equation.
Define $\Gamma(z) = \dfrac{\Gamma(z+1)}{z}$ on the strip $\text{Re}(z) \in (-1,0)$.
First note that $\Gamma(z+1)$ is analytic on the strip $\text{Re}(z) \in (-1,0)$ since $z+1 \in$ right half plane. Also, $\dfrac1z$ is analytic on the strip $\text{Re}(z) \in (-1,0)$. Hence, $\Gamma(z) = \dfrac{\Gamma(z+1)}{z}$ is analytic on the strip $\text{Re}(z) \in (-1,0)$. Hence, now we have extended the definition of the $\Gamma$ function to the region $\text{Re}(z) > -1$.
Now repeat the same argument for the strip $\text{Re}(z) \in (-2,-1)$, since we now have that the $\Gamma$-function is analytic on the region $\text{Re}(z) > -1$. And so on... |
H: Analog of Compact Operators
This is kind of vague question, but I'll try to make it more precise.
$T$ is a compact operator on a Hilbert Space, $H$, if $\overline{T(D)}$ is compact in $H$, where of course, $D$ is the closed unit ball in $H$. So we use the underlying Hilbert space in order to define these operators. Now $B(H)$ is a $C^*$-algebra. Can we abstractly define/characterize compact operators only in terms of $C^*$-notions, without appealing to the underlying Hilbert space? Stated otherwise, is it possible to define, say "compact" elements in a $C^*$-algebra $A$, which agree with the usual notion if we take $A=B(H)$?
AI: I am not familiar with such a concept, but you could call a projection $p$ in a $C^*$-algebra $A$ finite if $pAp$ is finite-dimensional; and then consider the closed ideal generated by the finite projections.
I don't know if this is useful, however. |
H: Commutative Algebra without the axiom of choice
It is well known that in a commutative ring with unit, every proper ideal is contained in a maximal ideal. The proof uses the axiom of choice. This fact, and others that are proved using essentially the same argument, anchor a large part of commutative algebra.
Suppose now that we disallow the use of the axiom of choice. My feeling is that this fact should still hold except for very pathological rings. I would find it odd if commutative algebra was entirely dependent on the axiom of choice. I also recall hearing that there was a workaround argument that did not use the axiom of choice for sufficiently nice rings, so this question is not entirely speculative.
So, assuming we work without the axiom of choice, for which rings can we prove that every proper ideal is contained in a maximal ideal? How is this done? And what characterizes the rings where we can't?
AI: It should not be surprising at all that the axiom of choice is so involved in most parts of modern mathematics. The reason is simple, too: finitely generated things are by nature very well-behaved, however as time progressed we began exploring things which are not finitely generated (e.g. measure spaces, function rings, etc.) and the axiom of choice is a great tool to control these things.
True, if you are only interested in, say, rings of cardinality less than $2^{2^{\aleph_0}}$, then you probably don't need the entire axiom of choice; small fragments would suffice for everything to be well-behaved. However, why limit yourself by cardinality when the argument would hold for all similar structures? So we just assume the axiom of choice and run with it.
What sort of things could break? Here are a few quick examples:
The existence of maximal ideals in unital rings;
Not every vector space has a basis, and a similar argument would show that not every projective module over $\mathbb Z$ is free;
For abelian groups injectivity is no longer equivalent to divisibility;
If things break really badly then there are not enough projective objects in $\mathbf{Ab}$, and not enough injectives either;
Not every set is an underlying set of a group.
If you get even further to the place where topology begins to take an important part of the work (e.g. topological groups/vector spaces/etc.) then the axiom of choice becomes an even more important tool.
However not all is lost. If your ring can be well-ordered then you can still apply most of the standard arguments to it. If its power set can also be well-ordered then you have even more.
What sort of common sets are guaranteed to be well-ordered in ZF? Well... countable sets. That's it. If you assume more, then more. However there is no guarantee of that happening.
So to answer your later question: I am not aware of the existence of such a class; furthermore, I do not believe there is a "nice" description of the rings you can work with in ZF. If anything, I would expect most things to fail at size continuum or less. |
H: Measure on Baire space
Inspired by the first parenthetical sentence of Joel's answer to this question, I have the following question: is there any useful notion of measurability in the Baire space $\omega^\omega$?
Some initial thoughts:
Any countably additive, translation-invariant measure has to give measure 0 to any set of the form $$\lbrace f: \sigma\prec f\rbrace.$$ The same holds if we ask for a finitely additive translation-invariant measure which gives finite measure to the whole space.
There are several natural surjections $\omega^\omega\rightarrow 2^\omega$, and the latter space has a nice measure theory; so we could, for any such surjection $s$, define a "measure" given by $m_s(X)=m(s[X])$. However, it's unclear to me that any of these would be particularly interesting.
Even short of a decent "measure" on $\omega^\omega$ (and indeed, I believe none exists), we might still be able to talk about subsets of $\omega^\omega$ being "measurable" or "non-measurable" in terms of what sorts of pathologies they admit. For example, call $X\subseteq\omega^\omega$ definably measurable if $L(\mathbb{R}, X)\models $ "Every set is measurable." This is, however, a terrible notion of measurability: there are measure zero (and hence measurable) sets $X\subseteq 2^\omega$ such that $L(\mathbb{R}, X)$ contains a non-measurable set, so this is too restrictive.
Thoughts?
AI: First note that there is an extremely natural $\sigma$-algebra for this: the Borel $\sigma$-algebra generated by the open sets and in this case the clopen cones. We can later complete it for the Lebesgue $\sigma$-algebra.
Now note that if $X$ and $Y$ are uncountable Polish spaces then their Borel sets are isomorphic. Thus we can easily endow $\omega^\omega$ with a Borel measure (and a Lebesgue measure by completion).
The point of a measure is to allow us compare which sets are big and which sets are small. In this aspect the actual measure means nothing, only the $\sigma$-algebra is important. |
H: Example of the use of differentiation rules
I am asked to differentiate the following:
$$g(t) =\sqrt t (t-1)$$
The first thing I do is apply the power rule and the constant rule to the terms inside the parentheses:
$$ g'(t) = \sqrt t(t^0-0) $$
$$g'(t)= \sqrt t(1-0) $$
$$ g'(t)= \sqrt t$$
I am wondering if this is a correct application of the rules of differentiation. Also, I was initially tempted to apply the power rule to the $\sqrt t$ coefficient; why is it that the coefficient remains undifferentiated?
AI: When you set out to differentiate a function, look at the last operation that you would perform in computing a value of the function. In the case of $\sqrt t(t-1)$, that operation is multiplying $\sqrt t$ and $t-1$. That means that this function is (at the top level) a product and must be differentiated using the product rule:
$$g'(t)=\left[\sqrt t(t-1)\right]'=\left[\sqrt t\right][t-1]'+\left[\sqrt t\right]'(t-1)\;.$$
Now $[t-1]'=1-0=1$, exactly as you computed it in the question, but to differentiate $\sqrt t$, you’ll want to convert it to a power, $t^{1/2}$, and use the power rule:
$$\left[\sqrt t\right]'=\left[t^{1/2}\right]'=\frac12t^{-1/2}\;.$$
Now put the pieces together:
$$\begin{align*}
g'(t)&=\sqrt t\cdot1+\frac12t^{-1/2}(t-1)\\
&=t^{1/2}+\frac12\left(t^{1/2}-t^{-1/2}\right)\\
&=\frac32t^{1/2}-\frac12t^{-1/2}\\
&=\frac12\left(3t^{1/2}-t^{-1/2}\right)\;.
\end{align*}$$
Some instructors might prefer that you rewrite that as $$\frac12\left(3t^{1/2}-t^{-1/2}\right)=\frac12\left(3\sqrt t-\frac1{\sqrt t}\right)=\frac{3t-1}{2\sqrt t}\;.$$
Second Addition: I should have noticed that I was actually doing this the hard way. The easy way is to convert $\sqrt t$ to $t^{1/2}$ immediately and multiply out to get $g(t)=t^{1/2}(t-1)=t^{3/2}-t^{1/2}$. Now you can apply the power and sum rules to get $g'(t)=\frac32t^{1/2}-\frac12t^{-1/2}$ directly and simplify from there.
Added: If by chance your function was intended to be $g(t)=\sqrt{t(t-1)}$, the same principle applies, but the details are different. Now the last thing that you’d do in computing a value of $g$ is take the square root, so at the top level this is a power: $$g(t)=\Big(t(t-1)\Big)^{1/2}\;.$$
Thus, you’ll need to apply the power rule, combined, of course, with the chain rule:
$$g'(t)=\frac12\Big(t(t-1)\Big)^{-1/2}\Big[t(t-1)\Big]'\;.$$
You could use the power rule to do that last differentiation, but it’s much easier to multiply out and differentiate $t^2-t$ instead:
$$g'(t)=\frac12\Big(t(t-1)\Big)^{-1/2}(2t-1)=\frac{2t-1}{2\sqrt{t^2-t}}\;.$$ |
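A quick symbolic sanity check of both readings (using Python's sympy, assuming it is available; not a substitute for doing the differentiation by hand):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# First reading: g(t) = sqrt(t) * (t - 1)
g1 = sp.sqrt(t) * (t - 1)
claimed1 = sp.Rational(3, 2) * sp.sqrt(t) - 1 / (2 * sp.sqrt(t))
print(sp.simplify(sp.diff(g1, t) - claimed1))   # prints 0

# Second reading: g(t) = sqrt(t * (t - 1))
g2 = sp.sqrt(t * (t - 1))
claimed2 = (2 * t - 1) / (2 * sp.sqrt(t * (t - 1)))
print(sp.simplify(sp.diff(g2, t) - claimed2))   # prints 0
```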
H: What is the best approach to this problem?
I am stuck on Problem 1.3.18 in Berkeley Problems in Mathematics. Without looking at the solutions section, I want to ask for a hint.
The problem is as follows:
Let $\{b_{i}\}$ be positive real numbers with $$\lim_{n\rightarrow \infty}b_{n}=\infty$$ and $$\lim_{n\rightarrow \infty}\frac{b_{n}}{b_{n+1}}=1$$ Assume also $$b_{1}< b_{2}<b_{3}...$$ Show that the set of quotients $$\left(\frac{b_{m}}{b_{n}}\right)_{1\le n<m}$$ is dense in $(1,\infty)$.
My thought is to translate the problem into one about $a_{i}>1$ with $\lim a_{i}=1$, and to prove that $B_{m,n}=\prod^{m}_{n}a_{i}$ is dense in $(1,\infty)$. But this does not make the problem any easier: for example, given $1+\delta$ and $\epsilon$, how can I show there must be some $m,n$ such that $B_{m,n}$ is in the $\epsilon$-neighborhood of $1+\delta$? As the $a_{i}$ approach 1, the partial products could well skip over $1+\delta$, jumping straight from some value $m$ to $m\left(1+\frac{1}{100}\delta\right)$, for example, and $m\left(1+\frac{1}{100}\delta\right)$, $m$, and $\left(1+\frac{1}{100}\delta\right)$ may all be totally outside of $1+\delta$'s $\epsilon$-neighborhood. Similarly, the $a_{i}$'s after $\left(1+\frac{\delta}{100}\right)$ may shrink so quickly that both $\prod^{\infty}_{L} a_{i}$ and $m\prod^{\infty}_{L} a_{i}$ are outside of the $\epsilon$-neighborhood as well (one is too small, the other too large). In short, I do not know how to prove constructively that the $B_{m,n}$'s must fall into every open subset of $(1,\infty)$.
AI: I find it easier to think additively. Let $a_n=\ln b_{n+1}-\ln b_n$ for $n\in\Bbb Z^+$. Then your hypotheses amount to saying that $\sum_{n\ge 1}a_n$ is a divergent series of positive terms that tend to $0$, and your task is to show that $$\left\{\sum_{k=m}^na_k:1\le m<n\right\}$$ is dense in the positive reals. Here are a couple of hints:
For every $m\in\Bbb Z^+$ the series $\sum\limits_{k\ge m}a_k$ diverges, so you can make $\sum\limits_{k=m}^na_k$ as big as you like.
On the other hand, when $m$ is large, the partial sums of $\sum\limits_{k\ge m}a_k$ grow slowly. |
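The hints can be seen in action numerically. Here is a minimal Python sketch (the choice $a_k=1/k$ is just one example of a divergent series with positive terms tending to $0$): starting from a large $m$, the tail sums step past any target $t$ with overshoot at most $a_n$.

```python
def closest_tail_sum(t, m):
    """Smallest n >= m with sum_{k=m}^n 1/k >= t, together with that sum."""
    s, n = 0.0, m - 1
    while s < t:
        n += 1
        s += 1.0 / n          # a_n = 1/n: divergent series, terms -> 0
    return s, n               # overshoot is at most 1/n

for t in (0.5, 3.14159):
    for m in (10, 1000, 100000):
        s, n = closest_tail_sum(t, m)
        print(f"t={t}, m={m}: n={n}, sum={s:.6f}, overshoot={s - t:.2e}")
```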
H: $H$ is a subgroup of index 5. $ a \in Z(G) , ord(a)=3$. Show that $ a \in H $
$H$ is a subgroup of $G$ with $[G:H]=5$,
$ a \in Z(G) , ord(a)=3$
Show that $ a \in H $
What I did:
if $ a \notin H $ then:
$ G/H = \{ H, aH, a^2H, gH, agH \} $ for some $ g \in G-H $
But I don't know how to proceed...
AI: As in Cihan's answer, let $A = \langle a\rangle$. Since $a \in Z(G)$, $AH$ is a subgroup of $G$ containing $H$. As $[G : H]$ is prime, $AH = H$ or $AH = G$. In the latter case, $G = H\cup aH \cup a^2H$, a union of at most 3 left cosets of $H$, contradicting $[G:H] = 5$. So $AH = H$, which implies $a \in H$. |
H: Evaluate $\int_0^\pi xf(\sin x)dx$
Let $f(\sin x)$ be a given function of $\sin x$.
How would I show that $\int_0^\pi xf(\sin x)dx=\frac{1}{2}\pi\int_0^\pi f(\sin x)dx$?
AI: If you make the substitution $w = \pi-x$, so that $dw = -dx$, you get
\begin{align}
\int_0^\pi xf(\sin x)dx &= -\int_\pi^0 (\pi-w)f(\sin(\pi-w))dw \\
&= \int_0^\pi (\pi-x)f(\sin(x))dx \\
&= \pi\int_0^\pi f(\sin(x))dx - \int_0^\pi xf(\sin(x))dx
\end{align}
which gives the result you want. |
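A quick numeric sanity check of this identity (Python with scipy; the three test functions $f$ below are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

for f in (lambda s: s, lambda s: s**2, np.exp):
    lhs, _ = quad(lambda x: x * f(np.sin(x)), 0, np.pi)
    rhs, _ = quad(lambda x: f(np.sin(x)), 0, np.pi)
    print(lhs, np.pi / 2 * rhs)   # the two columns agree for each f
```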
H: An inequality involving integrations.
Assume $ f \in C^2 ( \mathbb R) \cap L^2 ( \mathbb R)$ and $f'' \in L^2 ( \mathbb R)$. Assume moreover that $(b-a)^2 \int_{a}^b | f''|^2 \leqslant (b-a)^{-2} \int_a^b |f|^2 $. I want to prove that there exists $ b_2 \geqslant b$ such that $$ (b_2 -a)^2 \int_a^{b_2} |f''|^2 = (b_2 - a)^{-2} \int_{a}^{b_2} |f|^2 .$$
AI: Define $G(x)=(x-a)^4\int_a^x|f''|^2-\int_a^x|f|^2$. This is a continuous map and $G(b)\leq 0$. If we can show that there is $c\geq b$ such that $G(c)\geq 0$, we are done, by the intermediate value theorem. If for all $c\geq b$ we have
$$(c-a)^4\int_a^c|f''|^2<\int_a^c|f|^2,$$
then, because $f\in L^2$,
$$\int_a^c|f''|^2\leq\frac 1{(c-a)^4}\int_a^{+\infty}|f|^2$$
for every $c\geq b$; letting $c\to+\infty$ gives $\int_a^{+\infty}|f''|^2=0$, hence $f''(x)=0$ for $x\geq a$. So $f$ has the form $Ax+B$ there, but since $f$ is square integrable, $f=0$, and in this case $G$ is identically $0$.
H: Generating function of Lah numbers
Let $L(n,k)\!\in\!\mathbb{N}_0$ be the Lah numbers. We know that they satisfy
$$L(n,k)=L(n\!-\!1,k\!-\!1)+(n\!+\!k\!-\!1)L(n\!-\!1,k)$$
for all $n,k\!\in\!\mathbb{Z}$. How can I prove
$$\sum_nL(n,k)\frac{x^n}{n!}=\frac{1}{k!}\Big(\frac{x}{1-x}\Big)^k$$
without using the explicit formula $L(n,k)\!=\!\frac{n!}{k!}\binom{n-1}{k-1}$?
Attempt 1: $\text{LHS}=\sum_nL(n\!-\!1,k\!-\!1)\frac{x^n}{n!}+\sum_n(n\!+\!k\!-\!1)L(n\!-\!1,k)\frac{x^n}{n!}\overset{i.h.}{=}?$
Attempt 2: $\text{RHS}\overset{i.h.}{=}$ $\frac{1}{k}\frac{x}{1-x}\sum_nL(n,k\!-\!1)\frac{x^n}{n!}=$ $\frac{1}{k}\frac{x}{1-x}\sum_nL(n\!-\!1,k\!-\!1)\frac{x^{n-1}}{(n-1)!}=$
$\frac{1}{k(1-x)}\sum_nn\big(L(n,k)-(n\!+\!k\!-\!1)L(n\!-\!1,k)\big)\frac{x^n}{n!}=?$
AI: We have
\begin{align}
f_k(x)&:=\sum_{n\in\Bbb Z}L(n,k)\frac{x^n}{n!}\\
&=\sum_{n\in \Bbb Z}L(n-1,k-1)\frac{x^n}{n!}+\sum_{n\in \Bbb Z}(n+k-1)L(n-1,k)\frac{x^n}{n!}\\
&=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+\sum_{j\in \Bbb Z}(j+1+k-1)L(j,k)\frac{x^{j+1}}{(j+1)!}\\
&=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{j!}+(k-1)\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{(j+1)!}\\
&=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+xf_k(x)+(k-1)\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{(j+1)!}
\end{align}
hence
$$(1-x)f_k(x)=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+(k-1)\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{(j+1)!}.$$
Now we take the derivatives to get
$$-f_k(x)+(1-x)f'_k(x)=f_{k-1}(x)+(k-1)f_k(x)$$
hence
$$(1-x)f'_k(x)-kf_k(x)=f_{k-1}(x).$$
Multiplying by $(1-x)^{k-1}$ and using the formula for $f_{k-1}$ we get
$$(1-x)^kf'_k(x)-k(1-x)^{k-1}f_k(x)=\frac{x^{k-1}}{(k-1)!}$$
so
$$((1-x)^kf_k(x))'=\frac{x^{k-1}}{(k-1)!}.$$
Integrating, we get the wanted result up to an additional term (namely $C(1-x)^{-k}$), but it vanishes: evaluating at $x=0$ and using the initial values of the Lah numbers forces $C=0$.
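As a sanity check (a Python/sympy sketch, not a substitute for the argument above), the numbers generated from the recurrence alone do match the Taylor coefficients of the claimed generating function:

```python
import sympy as sp
from functools import lru_cache

x = sp.symbols('x')
N = 8   # compare Taylor coefficients up to degree N-1

@lru_cache(maxsize=None)
def L(n, k):
    """Lah numbers generated from the recurrence alone."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0 or k > n:
        return 0
    return L(n - 1, k - 1) + (n + k - 1) * L(n - 1, k)

for k in range(1, 5):
    series = sp.series((x / (1 - x))**k / sp.factorial(k), x, 0, N).removeO()
    for n in range(1, N):
        assert series.coeff(x, n) * sp.factorial(n) == L(n, k), (n, k)
print("recurrence matches the generating function up to degree", N - 1)
```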
H: How to change integral bounds?
In the following integral I want to change the bounds from $(0, 2)$ to $(-1, 1)$:
$\displaystyle{\int_{0}^{2}(1+x)^3 dx}$
How do I change them? I know that a change of variables is needed, but I don't know how to use it to change the bounds. Indeed, my question is how to find the formula for the new variable (for example $t$).
AI: You want to shift the interval of integration down by $1$, so use the change of variables $t=x-1$. So when $x=0$, $t=-1$, and when $x=2$, $t=1$. Thus when integrating with respect to $t$, you would integrate over the interval $(-1,1)$. Be careful to rewrite your integrand in terms of $t$ before solving. |
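A quick symbolic confirmation (Python with sympy): rewriting the integrand via $x=t+1$ and integrating over $(-1,1)$ gives the same value.

```python
import sympy as sp

x, t = sp.symbols('x t')
original = sp.integrate((1 + x)**3, (x, 0, 2))
shifted = sp.integrate((2 + t)**3, (t, -1, 1))   # since 1 + x = 2 + t
print(original, shifted)   # both print 20
```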
H: Locally closed irreducible subset of an affine scheme.
I'm self-studying some Algebraic Geometry and I have the following question.
Let us take $X=\operatorname{Spec}A$, where $A$ is a commutative ring. I am trying to show that every locally closed irreducible subset of $X$ contains a unique generic point.
This is what I've been thinking so far:
Let $Y$ be our irreducible, locally closed subset. We know that $q=I(Y) = \bigcap_{p \in Y} p$ is a prime ideal; I believe we would be done if we could show that $q \in Y$. Let us write $Y = V(\mathfrak{a}) \cap U$, for $U$ open. If $U$ happened to be a principal open set, of the form $D(f)$, then $Y = V(\mathfrak{a}) \cap U$ could be written as $\operatorname{Spec}B$ where $B = A_f / \mathfrak{a}A_f$. In this case, to say that $\operatorname{Spec}B$ is irreducible would simply mean that it has a unique minimal prime, which of course would have to lie in $\operatorname{Spec}B$. I feel uneasy about this argument, and I'm not totally sure if it is correct. However, if it works in this special case, how could it be used to show the general case?
I guess we could argue from our special case if we wrote $Y = V(\mathfrak{a}) \cap (\bigcup_i D(f_i)) = \bigcup_i (V(\mathfrak{a}) \cap D(f_i))$, but I'm not sure.
So, I would be very grateful for any help here, and for answers on how to tackle this.
AI: Write $Y=\bar Y\cap U$ where "bar" means closure in the topological space $X$ and $U\subset X$ is some open set.
Existence of generic point
Since $Y$ is irreducible, so is $\bar Y$.
Therefore $\bar Y$ has a generic point $\eta$ and $\bar{ \lbrace \eta\rbrace }=\bar Y$.
Of course $\eta\in Y$, for otherwise we would have $\bar{ \lbrace \eta\rbrace }=\bar Y\subset X\setminus U$ and thus $Y=\bar Y\cap U=\emptyset$, an absurd statement.
Thus $\eta$ is a generic point for $Y$ since its closure in $Y$ is $\bar Y\cap Y=Y$.
Uniqueness of generic point
If $\eta'$ were another generic point of $Y$, its closure in $X$ would also be $\bar Y$.
But an irreducible closed subset of a scheme has only one generic point, hence $ \eta'=\eta$ .
Edit: Warning!
If $Y \subset X=Spec(\mathbb Z)$ is the complement of the generic point $\eta=(0)\in Spec(\mathbb Z)$, then $Y$ is an irreducible subspace of $X$. However $Y$ has no generic point!
The explanation of this apparent contradiction to the above is that $Y$ is not locally closed in $X$. |
H: Can $G≅H$ and $G≇H$ in two different views?
Can $G≅H$ and $G≇H$ in two different views?
We have two isomorphic groups $G$ and $H$, so $G≅H$ as groups, and suppose that they act on the same finite set, say $\Omega$. Can we see $G≇H$ as permutation groups? Honestly, I am interested in this point in the following link. It starts with:
Notice that different permutation groups may well be isomorphic as ....
See here
AI: The definition of "isomorphic as permutation groups" given in the wikipedia page you refer to is equivalent to the images of $G$ and $H$ in their actions on $\Omega$ being conjugate in the symmetric group on $\Omega$.
You should be able to think of examples of subgroups $G$ and $H$ of $S_n$ for some $n$ such that $G$ and $H$ are isomorphic as groups, but not conjugate in $S_n$. Try groups of order 2 in $S_4$, for example. |
H: Identifying a map by looking at the pair of topologies that makes it continuous.
Let $\omega_X$ be the set of all topologies on $X$. Given $f:X\rightarrow X$, define $R_f \subset \omega_X \times \omega_X $ as those pairs of topologies on $X$ which make $f$ continuous. For example $\left(\text{Discrete Topology},-\right)$ or $\left(-,\text{Indiscrete Topology}\right)$ are always in $R_f$. But when can $f$ be uniquely determined by its $R_f$? Here is one such case: $$ \forall x \in X: f(x)=x \iff R_f= \left\{ \left(T_\alpha,T_\beta\right)\in \omega_X \times \omega_X | T_\beta \subset T_\alpha \right\}$$
Can some one give me more elaborative examples of this please?
AI: $R_f$ uniquely determines $f$ for any non-constant function $f:X \to Y$ (if, and only if, $f$ is constant, $R_f$ is $\omega_X\times\omega_Y$).
Proof: Fix $y\in Y$. Define the topology $T_y=\{\emptyset,Y,\{y\}\}$.
Then $(T,T_y)\in R_f$ iff:
$f^{-1}(\emptyset)=\emptyset\in T$ (always true)
$f^{-1}(Y)=X\in T$ (always true)
$f^{-1}(\{y\})\in T$
Take the intersection of all such $T$: it is the set $T_0=\{\emptyset,X,f^{-1}(\{y\})\}$. Because $f$ is not constant, $f^{-1}(\{y\})\ne X$ and so $f^{-1}(\{y\})$ is the largest element of $T_0\setminus\{X\}$.
Since $f$ is uniquely determined when $f^{-1}(\{y\})$ is given for all $y$, this proves the result.
(Note: we only needed to use topologies with a finite number of open sets.) |
H: Probability question regarding range
I am stuck with the following question. It is as follows,
       | Employed | Unemployed | Total
Male   |   460    |     40     |  500
Female |   140    |    260     |  400
Total  |   600    |    300     |  900
If the person is selected and the selected person is male, what is the probability that the selected person is employed?
The given data are counts, not probabilities, but the solution uses the counts as if they were probabilities. I am confused, since probabilities should lie between 0 and 1. Please help out. Thanks
AI: When all outcomes are equally likely, the probability of an event is the number of outcomes in the event divided by the total number of possible outcomes. In your case, you have $500$ males, out of which $460$ are employed. So the probability will be $\frac{460}{500}=\frac{23}{25}=0.92$
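The same computation can be done either with raw counts or with counts normalized by the grand total; the common factor of $900$ simply cancels (a small Python illustration):

```python
employed_male, total_male, total = 460, 500, 900

p_employed_and_male = employed_male / total   # 460/900, a genuine probability
p_male = total_male / total                   # 500/900
print(p_employed_and_male / p_male)           # 0.92, the same as 460/500
```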
H: Estimating maximum value of random variable
Suppose I have some random variable $X$ which only takes on values over some finite region of the real line, and I want to estimate the maximum value of this random variable. Obviously one crude method is to take many measurements, let's say $X_1$, $X_2$, $\ldots, X_n$ (which we'll say are all iid) and to use
$$X_{max} = \text{max}(X_1, \ldots X_n)$$
as my guess, and as long as $n$ is large enough this should be good enough. However, $X_{max}$ is always less than the actual maximum, and I'm wondering if there's any way to modify $X_{max}$ so it gives a guess (still with some uncertainty) which is centred around the actual maximum value, rather than always a little less than it.
Thanks
AI: When the random variables are uniform on $(0,M)$ for an unknown $M\gt0$, one can check that the maximum $X_n^*$ of an i.i.d. sample of size $n$ is such that $\mathrm P(X_n^*\leqslant x)=(x/M)^n$ for every $x$ in $(0,M)$ hence $\mathrm E(X_n^*)=\frac{n}{n+1}M$. This means that an unbiased estimate of $M$ is $\widehat M_n=\frac{n+1}nX_n^*$.
For other distributions, the result is different. For example, starting from the distribution with density $ax^{a-1}/M^a$ on $(0,M)$ for some known $a\gt0$, one gets $\mathrm P(X_n^*\leqslant x)=(x/M)^{an}$ for every $x$ in $(0,M)$ hence $\mathrm E(X_n^*)=\frac{an}{an+1}M$ and an unbiased estimate of $M$ is $\widehat M_n=\frac{an+1}{an}X_n^*$.
Likewise, if the density is $a(M-x)^{a-1}/M^a$ on $(0,M)$ for some known $a\gt0$, one gets $\mathrm E(X_n^*)=M(1-c_n)$ with $c_n\sim\frac{\Gamma(1+1/a)}{n^{1/a}}$ and an unbiased estimate of $M$ is $\widehat M_n=\frac1{1-c_n}X_n^*$.
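A Monte Carlo check of the uniform case (Python with numpy; the sample size $n=20$ and $M=5$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, reps = 5.0, 20, 200_000

samples = rng.uniform(0, M, size=(reps, n))
x_max = samples.max(axis=1)
print("raw maximum:      ", x_max.mean())                  # ~ n/(n+1)*M = 4.762
print("(n+1)/n * maximum:", ((n + 1) / n * x_max).mean())  # ~ M = 5.0
```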
H: How to show that $\mathrm{SL}(2,\mathbb Z) = \langle A, B\rangle$?
Show, that if $\mathbf{A}= \left( \begin{array}{cc} 1&1\\ 0&1 \end{array} \right)$, $\mathbf{B}= \left( \begin{array}{cc} 0&1\\ -1&0 \end{array} \right)$ and $\mathrm{SL}(2, \mathbb{Z}) := \{ \mathbf{C}\in\mathrm{M}(2\times 2;\mathbb{Z})\, |\, \det(\mathbf{C}) = 1\}$ then $\langle\mathbf{A}, \mathbf{B}\rangle = \mathrm{SL}(2, \mathbb{Z})$.
I found this exercise in a textbook for linear algebra in a chapter about the determinant, so it should be solved rather elementarily and without any deeper understanding of group theory ($\mathrm{SL}(2, \mathbb{Z})$ is defined only for the exercise). Showing $\langle\mathbf{A}, \mathbf{B}\rangle \subseteq \mathrm{SL}(2, \mathbb{Z})$ was easy but I got stuck with the opposite direction. Any help would be appreciated.
AI: The intuitive content is that the claimed statement roughly paraphrases into "every invertible integer matrix is a product of integer elementary row operations", with appropriate tweaks, since we want to stay within determinant $1$ rather than $\pm 1$.
The proof of the claimed statement follows the same idea as the proof that every invertible (real) matrix is a product of elementary matrices: show that elementary operations can row reduce any invertible matrix to the identity.
The powers of $\mathbf{A}$ are precisely the elementary matrices that describe adding multiples of the second row to the top row.
The powers of $\mathbf{BAB}^{3}$ are the elementary matrices that describe adding multiples of the top row to the second row.
Using these elementary row operations, any 2x2 integer matrix can be row reduced (think "Euclidean algorithm") to one of the following forms:
$$ \left( \begin{matrix} x & y \\ 0 & z \end{matrix} \right)
\qquad \left( \begin{matrix} x & y \\ 0 & 0 \end{matrix} \right)
\qquad \left( \begin{matrix} 0 & x \\ 0 & 0 \end{matrix} \right)
\qquad \left( \begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix} \right)
$$
with $0 \leq y < |z|$ in the first case, and $|x| > 0$ in all. If the original matrix was in $SL(2,\mathbf{Z})$, then only the first form is possible, and we must have $x=z=1$ or $x=z=-1$, and thus $y=0$. The reduction therefore ends at $\pm I$, and since $-I=\mathbf{B}^2$, both cases lie in $\langle\mathbf{A},\mathbf{B}\rangle$.
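A quick integer-arithmetic check of these building blocks (Python with numpy; this verifies the claims about the elementary matrices, not the full row reduction):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])
B = np.array([[0, 1], [-1, 0]])

print(np.linalg.matrix_power(A, 3))          # [[1 3],[0 1]]: adds 3*(row 2) to row 1
print(B @ A @ np.linalg.matrix_power(B, 3))  # [[1 0],[-1 1]]: adds -(row 1) to row 2
print(np.linalg.matrix_power(B, 2))          # [[-1 0],[0 -1]] = -I
```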
H: Construction of $[a,b]$-fold Cartesian product over space of all real-valued functions
I am currently reading "Applied Analysis", which can be found here,
and on page 85 I don't understand Example 4.16. There it is said:
Suppose that $X$ is the space of all real-valued functions on the interval $[a,b]$. We may identify a function $f: [a,b] \to \mathbb{R}$ with a point $\prod_{x\in [a,b]} f(x)$ in $\mathbb{R}^{[a,b]}$, so $X = \mathbb{R}^{[a,b]}$ is the $[a,b]$-fold Cartesian product of $\mathbb{R}$.
In this construction the expression $\prod_{x\in [a,b]} f(x)$ should almost always yield $\infty$, because it is a product over an uncountable set, and so this identification is not one-to-one. Or am I reading $\prod_{x\in [a,b]} f(x)$ wrong?
AI: You’re reading $\prod_{x\in[a,b]}f(x)$ incorrectly. (This is perfectly understandable, because the notation is very poor.) It does not mean a product of the real numbers $f(x)$ for $x\in[a,b]$.
Let $I$ be an index set, and for each $i\in I$ let $Y_i$ be a set. Then $$X=\prod_{i\in I}Y_i\;,$$ the Cartesian product of the sets $Y_i$, is defined to be the set of all functions $$x:I\to\bigcup_{i\in I}Y_i$$ such that $x(i)\in Y_i$ for each $i\in I$. Each of these functions is a single point of the product set $X$. I generally write $x=\langle x(i):i\in I\rangle$ to describe such a point.
In your setting $I=[a,b]$, and $Y_i=\Bbb R$ for each $i\in[a,b]$. Thus, in this case $x=\langle x(i):i\in I\rangle\in X$ is actually a function from $I=[a,b]$ to $\Bbb R$, since each of the sets $Y_i$ is just $\Bbb R$. Change my $x$ to $f$ and substitute the specific index set for $I$, and you have a point $f=\langle f(x):x\in[a,b]\rangle\in X$. The author of that book has ill-advisedly chosen to write this as if it were a product of numbers, which is ridiculous, but his $\prod_{x\in[a,b]}f(x)$ should be understood exactly as my $f=\langle f(x):x\in[a,b]\rangle$.
He could have been a little less wrong by writing $\prod_{x\in[a,b]}\{f(x)\}$, a Cartesian product of singleton sets; that at least could not be interpreted as a product of numbers, but it also wouldn’t be a point in $X$: rather, it would be a singleton subset of $X$, a set with only one point in it. |
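For a finite index set the definition can be made completely concrete. Here is a small Python illustration (the index set and factors are toy choices) in which each point of the product is literally a function, represented as a dict:

```python
from itertools import product

I = ['a', 'b', 'c']                            # index set
Y = {'a': {0, 1}, 'b': {0, 1}, 'c': {0, 1}}    # one factor per index

# each point of the product assigns to every index an element of its factor
points = [dict(zip(I, vals)) for vals in product(*(sorted(Y[i]) for i in I))]
print(len(points))    # 8 = 2^3 functions from I to {0, 1}
print(points[3])      # a single point, e.g. {'a': 0, 'b': 1, 'c': 1}
```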
H: Analytic functions with poles
If $f$ is meromorphic on $\mathbb{C}$ and $a \in \mathbb{C}$, why is $g=\displaystyle \frac{1}{f-a}$ holomorphic (analytic), even if there are poles?
Thank you very much for your time,
Chris
AI: The function $g(z)$ doesn't have to be holomorphic in $\mathbb{C}$, because if there are $z\in\mathbb{C}$ such that $f(z)=a$, then $g$ will have poles, and hence won't be holomorphic in $\mathbb{C}$; but it will always be holomorphic in $\{z:f(z)\neq a\}$, and clearly meromorphic in $\mathbb{C}$. If, for some reason, you know that $f(z)\neq a$ for all $z$, then $g(z)$ will be holomorphic in $\mathbb{C}$.
H: differential form is exact on $ U \cup V$
Let $ U ,V \subset \mathbb R ^n$ be two simply connected open sets such that $ \displaystyle{ U \cap V}$ is connected.
If $\omega$ is a closed 1-form which is exact in $U$ and in $V$, prove that $\omega$ is exact in $ \displaystyle{ U \cup V}$.
AI: We use the fact that a closed differential form on a simply connected open set is exact (the Poincaré lemma). Since $U$ and $V$ are simply connected open sets, we can find functions $f_1$ and $f_2$ such that $df_1=\omega$ on $U$ and $df_2=\omega$ on $V$.
On $U\cap V$, we have $d(f_1-f_2)=df_1-df_2=\omega-\omega=0$, and since $U\cap V$ is connected, $f_1-f_2$ is constant on $U\cap V$. Therefore, we can define $F$ by $F(x)=f_1(x)-f_1(x_0)$ for $x\in U$ and $F(x)=f_2(x)-f_2(x_0)$ for $x\in V$, where $x_0\in U\cap V$ (if $U$ and $V$ are disjoint, the result is clear); the two definitions agree on $U\cap V$ precisely because $f_1-f_2$ is constant there, and $dF=\omega$ on $U\cup V$.
H: Find the period of the function $y=[2x]-3*[4x]$
Suppose that we have the function
$y=[2x]-3*[4x]$
Here $[\,\ast\,]$ denotes the minimum distance to an integer.
We are required to find the period of this function. First of all, I am confused: what does "minimum distance to an integer" mean? Could you explain it? I know the general notion of a period, namely $f(x+T)=f(x)$, but here I could not find a way to continue with this task.
Thanks a lot
AI: It’s just the distance to the nearest integer.
Let $x$ be a real number. If $x$ is an integer, the smallest distance from $x$ to any integer is the distance from $x$ to $x$, which is $0$. Otherwise, $x$ lies between two consecutive integers, say $n<x<n+1$. Clearly $x$ is closer to $n$ and $n+1$ than it is to any other integer, so it’s just a matter of deciding which is closer. The distance from $x$ to $n$ is $x-n$; the distance from $x$ to $n+1$ is $n+1-x$. If $$x-n<n+1-x\;,$$ then $x$ is closer to $n$, and the minimum distance from $x$ to any integer is $x-n$. If $$n+1-x<x-n\;,$$ then $x$ is closer to $n+1$, and the minimum distance from $x$ to any integer is $n+1-x$. If $$n+1-x=x-n\;,$$ then both distances are $1/2$: $x$ is equidistant from $n$ and $n+1$, and its distance from the nearest integers is $1/2$.
As an example, as $x$ goes from $1$ to $2$, its minimum distance from an integer starts at $0$, increases to $1/2$ when $x=3/2$, and then decreases back to $0$. This cycle repeats as $x$ increases from $2$ to $3$. Thus, the nearest integer function itself has a period of $1$.
Added: This is very similar to finding the period of $\sin 2x+3\sin 4x$, for instance. Notice that the factor of $3$ in the second term doesn’t affect the period at all: it just changes the $y$-value. Thus, you’re really concerned with the periods of $[2x]$ and $[4x]$ and how they interact.
As $x$ increases from $1$ to $2$, say, $2x$ increases from $2$ to $4$. $[2x]$ starts out at $0$ when $2x=2$, rises to $1/2$ when $2x=5/2$ and falls back to $0$ at $2x=3$, then rises again to $1/2$ when $2x=7/2$ and finally falls back to $0$ when $2x=4$. In other words, the function goes through two full cycles. It finished the first cycle at $2x=3$, i.e., when $x=3/2$. The cycle started at $x=1$, so what is the period of $[2x]$?
You can analyze $[4x]$ with the same kind of reasoning. In both cases it helps to make a rough graph of the function: it’s all straight line segments, so no artistic ability is needed!
Finally, ask yourself whether the periods mesh. Does one of $[2x]$ and $[4x]$ go through a complete cycle or several complete cycles while the other is going through just one? |
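Once the graphs are understood, a quick numeric check (Python with numpy; an illustration rather than a proof) confirms that the combined function repeats with period $1/2$ but not with period $1/4$:

```python
import numpy as np

def d(y):
    """Distance from y to the nearest integer."""
    return np.abs(y - np.round(y))

def f(x):
    return d(2 * x) - 3 * d(4 * x)

x = np.linspace(0, 3, 10_001)
print(np.max(np.abs(f(x) - f(x + 0.5))))    # ~ 0: shifting by 1/2 changes nothing
print(np.max(np.abs(f(x) - f(x + 0.25))))   # > 0: 1/4 is not a period
```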
H: a question on meromorphic function
Given that $f:\mathbb{C}\rightarrow \mathbb{C}$ is meromorphic, analytic at $0$, and satisfies $f(1/n)=\frac{n}{2+n}$, I want to know whether I can say that $f(z)=\frac{1}{2+z}$. I have considered $g(z)=f(z)-\frac{n}{2+n}$; the zero set of $g$ is $\{\frac{1}{n}\}$, which has the limit point $0\in\mathbb{C}$, so can I say $f(z)=\frac{1}{2+z}$ by the Identity Theorem? If not, where is the problem? I know that $f$ is analytic except at finitely many points, where it has poles.
AI: Yes, that would be correct reasoning if your $f$ coincided with your sequence at the points you considered (it does not, but you fooled me for a minute ;-). If $\{w:f(w)=g(w)\}$ contains an accumulation point in $\Omega$, where $f, g:\Omega \rightarrow \mathbb{C}$ are holomorphic, then $f=g$, provided $\Omega\subset \mathbb{C}$ is a connected domain. This quite obviously applies here if you take $\Omega$ to be $\mathbb{C}$ minus the possibly existing poles.
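To make the mismatch visible (a quick sympy check; note that $\frac{1}{1+2z}$ is the candidate that does interpolate the data, and the identity-theorem argument then applies to it verbatim):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
for label, f in (("1/(2+z)", lambda z: 1 / (2 + z)),
                 ("1/(1+2z)", lambda z: 1 / (1 + 2 * z))):
    print(label, sp.simplify(f(1 / n) - n / (n + 2)))   # zero only for the second
```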
H: show if function is even or odd
Suppose that we have the function
$$f(x)=\frac{2^x+1}{2^x-1}$$
The question is whether this function is even or odd. I know the definitions of even and odd functions, namely
even means $f(-x)=f(x)$ and odd means $f(-x)=-f(x)$. When I put the $-$ sign into the function, I found that it is neither even nor odd, because $2^{-x}\ne-1 \times 2^x$; but my book says that it is even, so am I wrong? Please help me clarify whether the book is correct or I am. Thanks
AI: Let's see what $f(-x)$ looks like:
$$
f(-x) = \frac{2^{-x} + 1}{2^{-x} - 1}
$$
Since $f(x)$ contains $2^x$ and not $2^{-x}$, let's multiply the numerator and denominator by $2^x$:
$$
f(-x) = \frac{2^x(2^{-x} + 1)}{2^x(2^{-x} - 1)} = \frac{1 + 2^x}{1-2^x} = - \frac{2^{x} + 1}{2^{x} - 1} = -f(x)
$$
This shows that the function is odd. |
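A one-line symbolic confirmation (Python with sympy):

```python
import sympy as sp

x = sp.symbols('x')
f = (2**x + 1) / (2**x - 1)
print(sp.simplify(f.subs(x, -x) + f))   # prints 0, i.e. f(-x) = -f(x)
```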
H: Free boolean algebra
Consider the following definition:
Let $X$ be a set and $e : X \to A$ a mapping to a boolean algebra
$A.$ We say that $A$ is free over $X$ (with respect to $e$) if for
every mapping $f:X \to B$ to a boolean algebra $B$ there is
precisely one homomorphism $\overline{f}:A \to B$ such that
$\overline{f} \circ e = f.$
I would eventually like to prove some statements involving free boolean algebras, but I am stuck trying to understand this definition.
I understand it formally but don't see its intuitive meaning. Something simply doesn't click.
How should one think about this definition?
For example, how should one try to prove that if $A$ is free over $X$ with respect to $e:X \to A$ then $e$ is injective? Is it correct to simply say that if $e$ is not injective then neither is $\overline{f}\circ e$, but since $f$ is arbitrary it can just as well be an injective function?
What about the fact that any free algebra over a set of cardinality $n$ has precisely $2^{2^{n}}$ elements? Is there a direct way (without referring to the Lindenbaum algebra) to prove this claim directly from the definition?
(PS. I am no category theorist the arrow notation just scares me :)
AI: This construction is very common all over abstract algebra -- not just for boolean algebras, but also for groups, modules, monoids, and so forth. The technical definition may look forbidding at first sight if one is not used to universal properties, but what it boils down to is very simple. Essentially the free boolean algebra $A$ has three main properties:
$A$ contains a copy of $X$, in the sense that each element of $X$ has a unique partner in $A$.
$A$ contains no elements that it doesn't have to. More precisely, every element of $A$ can be made by evaluating some finite expression which may contain elements of $X$ in addition to the Boolean algebra operations.
$A$ distinguishes between elements as much as possible. Specifically, two different expressions can only evaluate to the same element if it can be proved from the boolean-algebra axioms that they must be the same.
Essentially the free algebra is the set of finite expressions with constants from $X$, modulo those equalities forced by the axioms.
One advantage of the category-theoretic definition (for those of us who don't think that simply being category theoretic is in itself an advantage) is that it allows us to sidestep the issue of precisely what "provable from the axioms" means. It is nice not to depend on proof theory in the technical development.
To see that these properties are implied by the definition:
Let $x$ and $y$ be any two different elements of $X$; we want to show that $e(x)\ne e(y)$. Choose $f:X\to\{\mathit{true},\mathit{false}\}$ such that $f(x)=\mathit{true}$ and $f(y)=\mathit{false}$. Then by the definition, there is a homomorphism $\bar f$ such that $\bar f(e(x))=\mathit{true}$ and $\bar f(e(y))=\mathit{false}$. But that is only possible if $e(x)$ and $e(y)$ are different elements.
Let $A$ be some free algebra, and let $B$ be the set of elements in it that do arise as the values of finite expressions. Then $B$ is itself a boolean algebra, and $e(x)$ is in $B$ for every $x$. Then the free-algebra property of $A$ gives us $g:A\to B$ such that $g(e(x))=e(x)$ for all $x$. Since $B$ is a subalgebra of $A$, $g$ is also a homomorphism $A\to A$. Using the definition of free algebra again, we learn that there is exactly one homomorphism $g: A\to A$ such that $g(e(x))=e(x)$. But the identity map on $A$ also has this property, so $g$ must be the identity! But that is only possible if $B$ is all of $A$.
Let $A$ be an algebra with a map $e_A:X\to A$, and let $B$ be the "set of all expressions modulo consequences of the axioms" I describe above. If there were two expressions that evaluated to the same element in $A$ without having to, they would evaluate to different elements of $B$. But then there can be no homomorphism $A\to B$ that agrees on $e_A$, so $A$ is not a free algebra with respect to $e_A$. |
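The counting claim can be made concrete for small $n$ (a Python sketch for $n=2$, representing elements as truth tables and closing the generators under NOT and AND, which together generate all boolean operations):

```python
from itertools import product

n = 2
rows = list(product([0, 1], repeat=n))                 # the 2^n assignments
gens = {tuple(r[i] for r in rows) for i in range(n)}   # the n projections

algebra = set(gens)
while True:
    new = {tuple(1 - v for v in t) for t in algebra}   # close under NOT
    new |= {tuple(a & b for a, b in zip(s, t))         # close under AND
            for s in algebra for t in algebra}
    if new <= algebra:
        break
    algebra |= new

print(len(algebra), 2 ** (2 ** n))    # 16 16: every truth table is reached
```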
H: If $\sum\limits_{k=1}^{\infty}a_k=S $, then $ a_4+a_3+a_2+a_1+a_8+a_7+a_6+a_5+\dots=?$
if we know that $\sum\limits_{k=1}^{\infty}a_k=S$, what can we say about the convergence of $$a_4+a_3+a_2+a_1+a_8+a_7+a_6+a_5+a_{12}+a_{11}+a_{10}+a_{9}+\dots$$ ?
If it does converge, what is the sum (in terms of $S$)?
As for the first question: it clearly converges, since the number of terms in each group is bounded (by 4) and $(a_n)_{n=1}^\infty$ tends to zero as $n\to\infty$.
The second question is where I'm struggling. We don't know that $\sum\limits_{k=1}^{\infty}a_k$ converges absolutely, so I don't know what we can say about its sum.
Thanks for your help.
AI: I think Cameron is right. In particular, the difference $|S_n-T_n|$ is bounded by a sum of finitely many terms $|a_k|$ with $|k-n|\le 3$, and therefore has to approach zero.
The problem with series that are not absolutely convergent is that you can't make arbitrary rearrangements of the terms. However, in general, the rearrangements that cause the sum of a convergent series to change cannot be "bounded", meaning that there is no uniform upper bound on the number of shifts applied to each term of the original series (i.e., you can't say "every term is moved by at most $M$ terms").
Indeed, if every term of the sequence is shifted by at most $M$ terms, you can prove that the difference between the partial sums of the new and original series is bounded by
$$|T_n-S_n|\leq\sum_{k=n-M}^{n+M}|a_k|$$
which clearly converges to zero if the original series is convergent. |
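This is easy to see numerically. Here is a Python illustration with the alternating harmonic series: in the block-of-four reversal every term moves by at most 3 places, and the sum is unchanged.

```python
import math

def a(k):
    return (-1) ** (k + 1) / k      # 1 - 1/2 + 1/3 - ... = ln 2

N = 10**6                           # a multiple of 4
s = sum(a(k) for k in range(1, N + 1))

t = 0.0
for m in range(0, N, 4):            # blocks (4,3,2,1), (8,7,6,5), ...
    for k in (m + 4, m + 3, m + 2, m + 1):
        t += a(k)

print(s, t, math.log(2))            # all three agree to many digits
```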
H: A question Riemann's mapping theorem
As in Riemann's mapping theorem, we have a conformal mapping $f:\Omega\rightarrow\mathbb{D}$ (so $f$ is bijective and holomorphic), where $\mathbb{D}=\{z \in\mathbb{C} : |z|<1\}$ is the open unit disk.
Why does it follow from Liouville's theorem that $\Omega$ is not the whole of $\mathbb{C}$, i.e., that $\Omega \neq \mathbb{C}$?
Liouville's theorem states that if $f$ is holomorphic on $\mathbb{C}$ and bounded, then $f$ is constant.
I really don't see how this determines $\Omega \neq \mathbb{C}$.
It would be so nice if someone could walk me through this.
Thank you for your time,
Chris
AI: If $\Omega=\Bbb C$, then since $f$ takes its values in the unit disk, it is a bounded entire function, hence constant by Liouville's theorem. So it can't be bijective.
Note that Liouville's theorem doesn't work if the open set is not $\Bbb C$. For example, when $\Omega$ is missing a ball $B(z_0,r)$, the map $f(z):=\frac 1{z-z_0}$ is bounded and holomorphic on $\Omega$, but not constant.
H: Commutative law in conditionally convergent series.
Whilst reading the answers to this question, one of the answers states:
"The problem of series that are not absolutely convergent is that you can't make arbitrary rearrangement of the terms."
However, my understanding was that due to the commutative law, we could write the following (cf. Concrete Mathematics):
$$\sum_{k\in\mathbb{K}}{a_{k}}=\sum_{p(k)\in\mathbb{K}}{a_{p(k)}},$$
for any permutation $p:\mathbb{Z}\to\mathbb{Z}$, i.e., any function under which every integer is mapped to an integer without duplication, and for all sequences $a_{k}$.
Is this true even for conditionally convergent series? If so, is my confusion based on the author of that answer's definition of "arbitrary rearrangement" (I'm assuming that a permutation is a form of "arbitrary rearrangement")?
Thanks in advance!
AI: Absolutely convergent and unconditionally convergent are identical. They will always converge to the same sum under arbitrary rearrangements. However, series that fail to be absolutely/unconditionally convergent can be rearranged to converge to any given sum, or to diverge. |
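A small numeric illustration of the last point (a Python sketch): the greedy scheme below rearranges the alternating harmonic series, whose usual sum is $\ln 2\approx 0.693$, so that its partial sums converge to $2$ instead.

```python
pos = (1.0 / k for k in range(1, 10**12, 2))     # 1, 1/3, 1/5, ...
neg = (-1.0 / k for k in range(2, 10**12, 2))    # -1/2, -1/4, ...

target, s = 2.0, 0.0
for _ in range(1_000_000):
    # take a positive term while below the target, a negative one otherwise
    s += next(pos) if s < target else next(neg)
print(s)   # ~ 2.0: same terms, different order, different sum
```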
H: Jordan's lemma and estimation lemma
I need a clarification on the utility of Jordan's lemma. I think I have understood the theorem and its implications. It basically implies that if you have a function like $g(z) =f(z)e^{iz}$ it suffices for $f(z)$ to tend to zero at infinity in order to have a negligible integral of $g(z)$ along a sufficiently big semicircle in the upper half plane.
However, it confuses me that Wikipedia provides, as an example, the evaluation of this integral:
$$\int_{-\infty}^\infty \frac{\cos x}{1+x^2}\,dx$$
Then it uses the fact that $f(z)=\frac{e^{iz}}{1+z^2}$ satisfies the hypotheses of Jordan's lemma. But it seems to me that this particular function satisfies the Estimation lemma on the semicircle in the upper half-plane, a stronger condition, so Jordan's lemma is not needed.
AI: That is not a great example, for as you mention the Estimation Lemma suffices there. A better example would be for the evaluation of $$\int^{\infty}_{-\infty} \frac{\sin x}{x} dx.$$
For that problem the usual estimate leads to the circular arc contributing $\mathcal{O}(1)$ so we truly need Jordan's lemma. |
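A numeric aside (Python with scipy; np.sinc(x/pi) is sin(x)/x with the removable singularity at 0 filled in): truncating the integral at a finite range already lands near $\pi$, with the slowly decaying tails accounting for the small discrepancy.

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: np.sinc(x / np.pi), -200.0, 200.0, limit=500)
print(val, np.pi)   # close to pi; the truncated tails explain the small gap
```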