H: Intertwiner of symmetric group representations (Basic)
I am preparing for an exam and there is an exercise which I have to solve but I got stuck. The exercise states:
Let $V=\mathbb{C}^3$ be the permutation representation of the symmetric group $S_3$. Show that any map of representations (intertwiner) $I: V \rightarrow V$ has the form:
$
\begin{pmatrix}
\lambda & \mu & \mu \\
\mu & \lambda & \mu \\
\mu & \mu & \lambda
\end{pmatrix}
$
w.r.t. the standard basis of $\mathbb{C}^3$, for some $\lambda, \mu \in \mathbb{C}$
What I did:
We have a representation $\rho: G \rightarrow GL_n(V)$ and another representation $\sigma: G \rightarrow GL_n(V)$ and an intertwiner $I: \rho \rightarrow \sigma$ which is defined by the linear map $I: V \rightarrow V$ which has to satisfy:
$[\sigma(g)](I(v))=I([\rho(g)] (v))$ for all $v \in V$.
So our linear map is defined by the condition:
$[\sigma(\pi)] \begin{pmatrix} a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}
\ \vec{v} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}
([\rho(\pi)] (\vec{v})) \\
$
My first intention was to simply plug in the standard basis for $v$ and see what conditions it generates for the matrix elements. This works just fine for two concrete representations, but I'm not sure how to show this for every representation, i.e. for every map. I'm not an expert in representation theory since I'm a physics undergraduate and only know the basics. Maybe the whole idea is wrong, but I definitely need a hint here. Thanks in advance.
AI: Fix the standard ordered basis $\{e_1,e_2,e_3\}$. A matrix $A$ intertwines with the standard permutation representation iff it is invariant under conjugation by permutation matrices. Conjugating $A$ by a permutation matrix is equivalent to rewriting it according to a different ordered basis with the same basis vectors. For example, the permutation $1\leftrightarrow 2$ yields
$$\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}\longleftrightarrow \begin{pmatrix}e & d & f \\ b & a & c \\ h & g & i\end{pmatrix}$$
(the $1$st/$2$nd rows and columns were switched). Stare at a $3\times3$ array long enough to convince yourself that any two off-diagonal entries can be swapped, and any two diagonal entries can be swapped, but no diagonal entry may be swapped with an off-diagonal entry. (Prove this rigorously with transpositions if need be.) Thus the off-diagonal entries are all equal, and the diagonal entries are all equal.
|
H: How many real roots for $ax^2 + 12x + c = 0$?
If $a$ and $c$ are integers and $2 < a < 8$ and $-1 < c$, how many equations of the form $$ax^2+12x+c=0$$ have real roots?
AI: As @DonAntonio hinted, a quadratic equation has real roots iff
$$b^2-4ac\ge0$$
$$144-4ac\ge0\implies ac\le36$$
Since $2<a<8$ and $c>-1$, $a$ lies in the set $\{3,4,5,6,7\}$ and $c$ in $\{0,1,2,3,\cdots\}$.
from $$ac\le36$$
First we check the combinations giving equality, i.e. the factorizations of $36$ in the given range:
$36=3\times 12,4\times 9,6\times 6$
So the possible combinations of $a$ and $c$ are:
for $a=3,$ $c$ can be $\{0,1,2,3,4,5,6,7,8,9,10,11,12\}$
for $a=4,$ $c$ can be $\{0,1,2,3,4,5,6,7,8,9\}$
for $a=5,$ $c$ can be $\{0,1,2,3,4,5,6,7\}$ (here only the strict relation "<" can hold, since $5$ does not divide $36$)
for $a=6,$ $c$ can be $\{0,1,2,3,4,5,6\}$
for $a=7,$ $c$ can be $\{0,1,2,3,4,5\}$ (same case as $a=5$)
So the number of possible equations is $13+10+8+7+6=44$.
Thus $44$ equations with $a$ and $c$ in the given range have real roots.
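A quick brute-force check (a minimal Python sketch, just enumerating the allowed integer pairs) confirms the count:
count = sum(1 for a in range(3, 8)      # a in {3,4,5,6,7}
            for c in range(0, 37)       # c >= 0; c <= 36 suffices since ac <= 36
            if 144 - 4*a*c >= 0)        # discriminant b^2 - 4ac with b = 12
print(count)                            # prints 44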
|
H: True or false: if $f'(c)<0$, then $f$ is concave down at $x=c$?
How can I determine the following statement is true or false?
If $f'(c)<0$, then $f$ is concave down at $x=c$ ?
AI: Recall that the sign of $f'(c)$ determines only whether a function is increasing ($f'(c) > 0$), decreasing ($f'(c) < 0$), or neither (e.g., when $f'(c) = 0$) at the point $x = c$. So the first derivative evaluated at a point $c$ tells us about the slope of the line tangent to the graph of the function at the point where $x = c$. It tells us nothing about the concavity of the function.
The second derivative, on the other hand, gives us information about the concavity of a function at any given point.
The statement you are given is asserting that based on the value of $f'(c)$ alone, you can determine the concavity of a function. And this is not true, as Zev's example shows: He has included the graph of a quadratic function $f$ that is concave-up, and such that the first derivative $f'(x)$ evaluated at some $c$ gives $f'(c) < 0$.
|
H: Number of solutions in a finite field
let $F$ be a field of $p^b$ elements, $p$ prime and $b \in \mathbb{N}$.
Suppose I have $(a_1, a_2, \ldots, a_s) \in F^s$ and an equation
$$
0 = a_1 x_1 + \dots + a_s x_s.
$$
I was wondering if anybody could help me figure out what the number of solution $(x_1, x_2, \dots, x_s) \in F^s$ to this equation would be? (or estimate if exact number not available) Thanks!
AI: The set of solutions is in fact a subspace of $F^s$ (as is easy to check or known from linear algebra). As a vector space of dimension $n$ has $(p^b)^n$ elements it suffices to compute the dimension of the space of solutions. There are essentially two cases:
$a_i=0$ for all $i$. In this case all elements of $F^s$ are solutions, i.e. the space of solutions is $s$-dimensional and hence has $p^{bs}$ elements.
There is an $a_i\neq 0$. Then the formula can be rewritten as $$x_i=-a_i^{-1}\sum_{j\neq i}a_jx_j$$
Thus the space of solutions is $(s-1)$-dimensional and hence has $p^{b(s-1)}$ elements.
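A minimal Python sketch for the prime-field case $b=1$ (so $F=\mathbb{Z}/p\mathbb{Z}$; the coefficients below are a hypothetical example, not from the question):
from itertools import product

p = 5
a = (2, 0, 3)                             # example (a_1, ..., a_s), not all zero
s = len(a)
count = sum(1 for x in product(range(p), repeat=s)
            if sum(ai * xi for ai, xi in zip(a, x)) % p == 0)
print(count, p ** (s - 1))                # both print 25, matching p^(b(s-1))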
|
H: Prove that in a tree, a path is a hamilton path iff it is an euler path
Prove that in a tree graph $T$, a path is a hamilton path iff it is an euler path.
so I said this:
==>: Let $<x_1,x_2,...,x_n>$ be a hamilton path and let us suppose by contradiction that it is not an euler path. Then there is an edge that starts at a vertex $x_i$ that we visit twice or more, as in $<x_1,x_2,...,x_i,...,x_k,x_i,...,x_n>$, and then we have a cycle $<x_i,...,x_k,x_i>$, in contradiction to $T$ being a tree.
<==: Let $<x_1,x_2,...,x_k>$ be an euler path in the tree. An euler path passes through all the edges. A tree is connected, and the degree of any vertex is $\geq 1$, so this euler path has to visit every vertex $x_i$ in the graph in order to get into it and exit out of it, thus it is a hamilton path.
Is this proof good enough?
AI: Your proof of $\rightarrow$ assumes that failure to be an Euler path implies that some edge is visited more than once. This is incorrect, as some edge might not be visited at all.
Your proof of $\leftarrow$ assumes that a path that visits every vertex must be a Hamilton path. This is incorrect, as you also need to prove that it visits every vertex only once.
As requested, here's a solution to the problem:
A tree must have at least two vertices of degree 1; and if it has exactly two then it is a path graph. For a proof see here.
If the tree has a Hamiltonian path, any vertex of degree 1 must be either a start or end vertex; hence there are at most two such vertices. Thus there are two, the graph is a path, and hence has an Eulerian path.
If the tree has an Eulerian path, there can be at most two vertices of odd degree. Hence there are two, the graph is a path, and hence has a Hamiltonian path.
|
H: If I know the Conjugacy classes of a group, do I know the group?
I know that a group has conjugacy classes of size 1, 3, 6, 6, 8 and I know that this matches the conjugacy classes of the group $S_4$. But could there be a different group with the same conjugacy classes?
AI: I think that in general this question is difficult. Sometimes the answer is negative (see Zev Chonoles' answer). In your specific example case we can conclude that the group has to be $S_4$.
The elements in the conjugacy class of $8$ elements have centralizers of order $24/8=3$. Therefore they must be of order $3$, and generators of their own centralizers. Thus we can deduce that the group, call it $G$, has four Sylow $3$-subgroups. The group acts on the set of these four groups. This gives us a homomorphism $f:G\to S_4$.
The conjugation action of $G$ on the four Sylow $3$-subgroups is transitive by Sylow's theorem. We can further deduce that the action is doubly transitive. This is because each Sylow 3-subgroup of $G$ permutes the other three. Thus the group $f(G)$ is a doubly transitive subgroup of $S_4$. The only doubly transitive subgroups of $S_4$ are $S_4$ and $A_4$. In the former case we are done, because $f$ is then an isomorphism. So the claim follows, if we can prove that it is impossible for
$\operatorname{Im} f$ to be isomorphic to $A_4$. Assume contrariwise that this would be the case. Then we could deduce that $\ker f$ is of order two. Because $\ker f$ is a normal subgroup of $G$ this implies that both elements of $\ker f$ would be in a conjugacy class by itself. This was manifestly not the case.
|
H: If $\phi_i$s are linearly dependent, $\det [\phi_i(v_j)] = 0$ - is the proof legit?
Given $v_1, \ldots, v_k \in V$ and $\phi_1, \ldots, \phi_k \in V^*$. If $\phi_1, \ldots, \phi_k \in V^*$ are linearly dependent, prove that $\det[\phi_i(v_j)] = 0.$ Here $k$ is the dimension of $V$, but I need to show this also works for a subspace with dimension $p$.
Assume $v_r$ is a linear combination of the others, as without loss of generality we consider in terms of basis. Denoting the matrix whose $i$th row and $j$th column is $\phi_i(v_j)$ to be $[\phi_i(v_j)]$. Then
$$[\phi_i(v_j)]= \pmatrix{ \begin{array}{cccccc}
\delta_{11}&\delta_{12} & \ldots & \lambda_1 \delta_{11} + \cdots + \hat \lambda_r \delta_{1r} + \cdots + \lambda_p \delta_{1p} & \cdots & \delta_{1p}\\
\delta_{21}&\delta_{22} & \ldots & \lambda_1 \delta_{21} + \cdots + \hat \lambda_r \delta_{2r} + \cdots + \lambda_p \delta_{2p} &\cdots & \delta_{2p}\\
\vdots &\vdots & \ddots &\vdots & \ddots & \vdots\\
\delta_{r1}&\delta_{r2} & \ldots & \lambda_1 \delta_{r1} + \cdots + \hat \lambda_r \delta_{rr} + \cdots + \lambda_p \delta_{rp} &\cdots & \delta_{rp}\\
\vdots &\vdots & \ddots &\vdots & \ddots & \vdots\\
\delta_{p1}&\delta_{p2} & \ldots & \lambda_1 \delta_{p1} + \cdots + \hat \lambda_r \delta_{pr} + \cdots + \lambda_p \delta_{pp}&\cdots & \delta_{pp}\\
\end{array} }
=\left( \begin{array}{cccccc}
1&0 & \ldots & \lambda_1 & \cdots & 0\\
0&1 & \ldots & \lambda_2 & \cdots & 0\\
\vdots &\vdots & \ddots & \vdots &\ddots & \vdots\\
0&0 & \ldots & 0 & \cdots & 0\\
\vdots &\vdots & \ddots & \vdots &\ddots & \vdots\\
0&0 & \ldots & \lambda_p & \cdots & 1\\
\end{array} \right). $$
Hence, the determinant of $\phi_i(v_j)$ is zero.
AI: After the comments clearing up stuff and the editing of the question we can now try to answer: since $\,\{\phi_1,\ldots,\phi_k\}\,$ is linearly dependent there exists a first element which is a linear combination of the preceding ones, say
$$\phi_r=\sum_{n=1}^{r-1}a_n\phi_n\;,\;\;a_n\in\Bbb F$$
From here we get that for every $\;i\;,\;\;1\le i\le k\;$ , we have
$$\phi_r(v_i)=\sum_{n=1}^{r-1}a_n\phi_n(v_i)$$
Can you see then how the $\;r$-th row is a linear combination of the first $\,r-1\,$ rows in the matrix $\,\left(\phi_i(v_j)\right)\;$? This, of course, means the determinant is zero.
I think you had the right idea but you messed things up with all those deltas and lambdas (=my $\,a_i'$s...)
|
H: questions in channel capacity
Q)
Suppose we have a set of $t$ coins, all but two of which have uniform weight $0$, and the two counterfeit coins have different weights $>0$. If one can only use a spring scale, what is the best solution to the problem of finding the counterfeit coins? Find an algorithm by using a binary or ternary BCH code for a non-adaptive search, where you cannot use previous results to design the next step; we are forced to design all steps before we start weighing.
Can anyone help me solve this problem?
Thank you
AI: Here is a way that works for $t=16$. Divide the coins into four groups, $A,B,C,D$ and label the coins $A1, A2, A3, \dots D4$. Weigh each group against each other (six weighings). At least two groups have all true coins and those will be the only groups that balance. If the two false coins are in the same group, the other three will balance.
In the case the false ones are not together, say $A$ and $B$ are heavy, $C$ and $D$ balance. Then we know the $C$ and $D$ coins are true. Now stack all the $1$ coins together, all the $2$ coins together, the $3$'s, and the $4$'s. Again weigh each stack against each other. Say the $1$'s and $2$'s are heavy. Then the heavy ones are either $A1, B2$ or $A2, B1$. If $A$ vs $B$ had the same result as $1$ vs $2$, it is $A1, B2$; otherwise it is $A2, B1$.
If the two false ones are in the same group, say it is $A$, then when you weigh the next sets you will find the two false coins, and when you weigh the two sets with false coins you will find the relative weights.
Can you extend this to other numbers of coins? Powers of $4$ are pretty easy, others take more work.
|
H: Encryption using modular addition and a key
Problem i'm facing says:
The value representing each row is encrypted using modular addition
with a modulus of 32 and a key of 27.
I sort of figured out what modular addition is for myself an hour ago but the key thing confuses me.
What does this
key of 27
mean? And how do I find out the original value in the very end?
Text of the task
The image in Figure 1 is to be encoded and encrypted using the following steps.
1. Each of the five horizontal rows of the picture is encoded as a 5-bit binary number. White squares are encoded as 0s and black squares as 1s. The top row is therefore encoded as 00010.
2. Starting from the left, the first digit is multiplied by 16, the second by 8, the third by 4, the fourth by 2, and the fifth by 1. For the first row the result is (0 × 16) + (0 × 8) + (0 × 4) + (1 × 2) + (0 × 1) = 0 + 0 + 0 + 2 + 0 = 2.
3. The value representing each row is encrypted using modular addition with a modulus of 32 and a key of 27.
4. The result is converted to a 5-bit binary number.
5. The binary number corresponding to one row of the original picture is then used to fill a corresponding row in a blank grid. 1s are displayed as black squares and 0s are displayed as white squares.
Choose from the options below the one pattern that represents the correctly encoded and encrypted image.
Thanks in advance xx
AI: Consider alphabet of 32 letters:
' ' = 0;
'a' = 1;
'b' = 2;
'c' = 3;
. . .
'v' = 22;
'w' = 23;
'x' = 24;
'y' = 25;
'z' = 26;
and other symbols - up to 31:
'+' = 27;
'-' = 28;
'(' = 29;
')' = 30;
'=' = 31.
Now, to encrypt the text, add the value $27$ to each symbol value:
Suppose the plaintext is "a+b=c".
Digital interpretation: (1,27,2,31,3).
Encryption:
$1+27 \equiv 28 \mod 32$;
$27+27 \equiv 22 \mod 32$;
$2+27 \equiv 29 \mod 32$;
$31+27 \equiv 26 \mod 32$;
$3+27 \equiv 30 \mod 32$;
So, the cipher text is $(28,22,29,26,30)$. Or, in letter format: "-v(z)".
Edit.
If you need to encrypt black/white images like
$\Box\Box\Box\blacksquare\Box \;,\; \Box\blacksquare\Box\blacksquare\blacksquare \;,\;$ ...
then same way:
1-st symbol: $00010_2 = 2_{10}$; $\quad$ $2+27\equiv 29 \mod 32$; $\quad$ encrypted : $29_{10}=11101_2$;
2-nd symbol: $01011_2 = 11_{10}$; $\quad$ $11+27\equiv 6 \mod 32$; $\quad$ encrypted : $6_{10}=00110_2$;
So, encrypted image will look like this:
$\blacksquare\blacksquare\blacksquare\Box\blacksquare \;,\; \Box\Box\blacksquare\blacksquare\Box \;,\;$...
Finally, to decrypt, it is necessary to subtract $27$, or (the same) to add $5$ (because $32-27=5$).
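A minimal Python sketch of the whole scheme (the variable names are mine; the row value $2$ is from the worked example above):
KEY, MOD = 27, 32

def encrypt(value):                  # modular addition with the key
    return (value + KEY) % MOD

def decrypt(value):                  # adding MOD - KEY = 5 undoes the encryption
    return (value + MOD - KEY) % MOD

row = 0b00010                        # top row of the image, value 2
c = encrypt(row)
print(format(c, '05b'))              # 11101, i.e. 29
print(decrypt(c) == row)             # True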
|
H: Is it true that a complex function has a global antiderivative if and only if it integrates to zero over every closed curve?
I am somehow thinking that these properties must be equivalent, unfortunately I do not know a theorem that says it:
$f$ has a global antiderivative iff the line integral $ \int_{\gamma}f$ over every closed curve in the domain of f is zero? Is this correct?
AI: Let $\Omega$ be the domain of $f$ (an open set). If $f$ has a global antiderivative, an application of the fundamental theorem of Calculus along with the definition of path integrals shows that $\int_\gamma f(z) \,dz = 0$ for every closed curve $\gamma$ in $\Omega$.
Now assume that $\int_\gamma f(z) \,dz = 0$ for every closed curve $\gamma$ in $\Omega$. Assume, without loss of generality, that $\Omega$ is connected. (Otherwise we can treat each component of $\Omega$ individually.) Fix $z_0 \in \Omega$ and define
$$
F(z) = \int_{\gamma(z)} f(\zeta) \,d\zeta
$$
where $\gamma(z)$ is a path from $z_0$ to $z$. This function is well-defined regardless of the path we choose because of the initial hypothesis. We claim that $F' = f$.
Fix $z \in \Omega$. There is a neighborhood $D(z, r) \subset \Omega$ for some $r > 0$. Let $w \in D'(z, r)$. We have:
$$
\frac{F(w) - F(z)}{w - z} = \frac{1}{w-z}\int_{[z, w]} f(\zeta) \,d\zeta
$$
Hence:
$$
\left|\frac{F(w) - F(z)}{w - z} - f(z)\right| \le \frac{1}{|w-z|}\int_{[z, w]} \left|f(\zeta) - f(z)\right| \,|d\zeta|
$$
Fix $\epsilon > 0$. By the continuity of $f$, we can make $r$ small enough so that $\forall \zeta \in [z, w] : \left|f(\zeta) - f(z)\right| < \epsilon$. Thus:
$$
\left|\frac{F(w) - F(z)}{w - z} - f(z)\right| \le \epsilon
$$
Since $z$ and $\epsilon$ are arbitrary, it follows that $F' = f$.
|
H: Proof that operator is an isometry
A linear operator $L$ between complex spaces with inner product $U$ and $V$ is an isometry, only if $\left < Lu_i, Lu_j \right > = \left < u_i, u_j \right >$ for all $u_j, u_i$ from a basis of $U$ (not necessary orthonormal).
I would like to see a proof of this statement.
AI: EDIT: The ONLY IF direction, proved without directly referring to the polarization identity, is as follows.
First we know that
$$
\|Lu_m\|^2_V = \langle L u_m ,L u_m\rangle = \|u_m\|^2_U = \langle u_m , u_m\rangle\tag{1}
$$
because of isometry. Now there is:
$$
\|L(u_m-u_n)\|^2_V = \langle L(u_m-u_n) , L(u_m-u_n) \rangle = \langle u_m-u_n , u_m-u_n \rangle = \|u_m-u_n\|_U^2.\tag{2}
$$
Expanding and using (1) lead to
$$
\langle L u_m, Lu_n \rangle + \langle L u_n, Lu_m\rangle = \langle u_m , u_n \rangle + \langle u_n , u_m \rangle\tag{3}
$$
Change (2) a little bit:
$$
\|L(u_m-iu_n)\|^2_V = \langle L(u_m-iu_n) , L(u_m-iu_n) \rangle = \langle u_m-iu_n , u_m-iu_n \rangle = \|u_m-iu_n\|_U^2.
$$
Similarly we have:
$$
\langle L u_m,i Lu_n \rangle + \langle i L u_n, Lu_m\rangle = \langle u_m , i u_n \rangle + \langle i u_n , u_m \rangle,
$$
and this is
$$
-i \langle L u_m, Lu_n \rangle + i\langle L u_n, Lu_m\rangle = -i\langle u_m , u_n \rangle + i\langle u_n , u_m \rangle,
$$
dividing through by $-i$ we have:
$$
\langle L u_m, Lu_n \rangle -\langle L u_n, Lu_m\rangle = \langle u_m , u_n \rangle -\langle u_n , u_m \rangle.\tag{4}
$$
(3)+(4) gives the result you want.
Following is the if direction proof : (deleted)
EDIT 2: Thanks to Jonas Meyer, who pointed out here that the if direction presumes the space has a Schauder basis, i.e., is at least separable. Since the OP only said it is an inner product space, we can't do that.
|
H: Transformation of $\mathbb R^2$
Let $T:\mathbb R^2\to \mathbb R^2$ be s.t. for all $x,y\in\mathbb R^2$, $\mid\mid T(x)-T(y)\mid\mid = \mid\mid x-y\mid\mid.$ Then, 2 questions:
If $T(0)=0$, does it follow that $T$ must be linear?
Show that $T$ is a translation, then a rotation, and then a reflection through the $x$ axis (in which any of those three could be the "empty" one).
I'm rather stumped by this problem.... for 1 think yes, proven perhaps by taking $x=0$? But I haven't actually been able to completely figure it out.
And I have no idea how to proceed with 2... perhaps looking at the matrices that generate such movements?
AI: Let $e_1=[1,0]^t$ and $e_2=[0,1]^t.$ Let $M$ be the $2\times 2$ matrix whose first column is $T(e_1)$ and whose second column is $T(e_2)$. Show that if $T(0)=0,$ then $T(v)=Mv$ for all $v\in\Bbb R^2.$
Now, note that the matrix $M$ above must be invertible (why?), and in particular, has determinant $\pm1$ (why?). Show that: a $2\times2$ real matrix with determinant $1$ represents a (possibly "empty") rotation; a $2\times2$ real matrix with determinant $-1$ represents a reflection about a line through the origin, which can be obtained instead by a (possibly "empty") rotation followed by a reflection about the $x$-axis (why?).
For your second part, put $b=T(0)$ and let $S(v)=T(v)-b$ for all $v$. What can you say about $S$ in light of the first exercise? Use this to show that there is a vector $c$ and a matrix $M$ with determinant $\pm 1$ such that $$T(v)=M(v+c).$$
Edit: Let me expand on the first two paragraphs (since that's where the proof gets sticky).
Now, we're assuming that $$T(0)=0\tag{1}$$ and that for all $v,w\in\Bbb R^2$ we have $$\lVert T(v)-T(w)\rVert =\lVert v-w\rVert.\tag{2}$$ Putting $w=0$ in $(2)$, we have by $(1)$ that $$\lVert T(v)\rVert=\lVert v\rVert\tag{3}$$ for all $v\in\Bbb R^2$.
Letting $x\cdot y$ indicate the dot product of $x$ and $y$, it is easily proved that $$\lVert v\rVert^2=v\cdot v$$ for all $v\in\Bbb R^2$ (assuming that our norm is the Euclidean norm), so for all $x,y\in\Bbb R^2,$ we have by dot product properties that $$\begin{align}\lVert x-y\rVert^2 &= (x-y)\cdot(x-y)\\ &= x\cdot x-x\cdot y-y\cdot x+y\cdot y\\ &= x\cdot x-2(x\cdot y)+y\cdot y\\ &= \lVert x\rVert^2-2(x\cdot y)+\lVert y\rVert^2.\end{align}\tag{4}$$ Applying $(2),$ $(3)$, and $(4)$, we find that for all $v,w\in\Bbb R^2$, we have $$\begin{align}\lVert v\rVert^2-2\bigl(T(v)\cdot T(w)\bigr)+\lVert w\rVert^2 &= \lVert T(v)\rVert^2-2\bigl(T(v)\cdot T(w)\bigr)+\lVert T(w)\rVert^2\\ &= \lVert T(v)-T(w)\rVert^2\\ &= \lVert v-w\rVert^2\\ &= \lVert v\rVert^2-2(v\cdot w)+\lVert w\rVert^2,\end{align}$$ so in particular $$T(v)\cdot T(w)=v\cdot w.\tag{5}$$
Now, apply $(3),$ $(4),$ $(5),$ and dot product properties, so we see that for any $x,y,z\in\Bbb R^2$ we have $$\begin{align}\lVert z-x-y\rVert^2 &= \lVert z-x\rVert^2-2\bigl((z-x)\cdot y\bigr)+\lVert y\rVert^2\\ &= \lVert z-x\rVert^2-2(z\cdot y)+2(x\cdot y)+\lVert y\rVert^2\\ &= \lVert z\rVert^2-2(z\cdot x)+\lVert x\rVert^2-2(z\cdot y)+2(x\cdot y)+\lVert y\rVert^2\\ &= \lVert T(z)\rVert^2-2\bigl(T(z)\cdot T(x)\bigr)+\lVert T(x)\rVert^2-2(z\cdot y)+2(x\cdot y)+\lVert y\rVert^2\\ &= \lVert T(z)-T(x)\rVert^2-2(z\cdot y)+2(x\cdot y)+\lVert y\rVert^2\\ &= \lVert T(z)-T(x)\rVert^2-2\bigl(T(z)\cdot T(y)\bigr)+2\bigl(T(x)\cdot T(y)\bigr)+\lVert T(y)\rVert^2\\ &= \lVert T(z)-T(x)\rVert^2-2\Bigl(\bigl(T(z)-T(x)\bigr)\cdot T(y)\Bigr)+\lVert T(y)\rVert^2\\ &= \lVert T(z)-T(x)-T(y)\rVert^2.\end{align}\tag{6}$$ In particular, let $x,y\in\Bbb R^2,$ and put $z=x+y,$ so by $(6),$ we find that $$0=\lVert T(x+y)-T(x)-T(y)\rVert^2,$$ so $$0=\lVert T(x+y)-T(x)-T(y)\rVert$$ by nonnegativity, and so $$T(x+y)=T(x)+T(y)\tag{$\star$}$$ for all $x,y\in\Bbb R^2.$ You should be able to prove from here that $T$ is linear, so in particular is given by $$T(v)=Mv\tag{$\heartsuit$}$$ (where $M$ is given in the first paragraph).
Since $T$ is one-to-one (why?), then $M$ is invertible. Moreover, using the fact that $x\cdot y=x^ty$ for all $x,y\in\Bbb R^2,$ we have that $$\begin{align}x\cdot(M^tM-I)y &= x^t(M^tM-I)y\\ &= x^t(M^tMy-y)\\ &= x^tM^tMy-x^ty\\ &= (Mx)^tMy-x^ty\\ &= (Mx)\cdot(My)-x\cdot y,\end{align}$$ so by $(\heartsuit)$ and $(5)$ we have $$x\cdot(M^tM-I)y=0$$ for all $x,y\in\Bbb R^2.$ In particular, then, for any $y\in\Bbb R^2$, we have that $$\lVert(M^tM-I)y\rVert^2=(M^tM-I)y\cdot(M^tM-I)y=0,$$ whence $$(M^tM-I)y=0$$ for all $y\in\Bbb R^2,$ and so $M^tM=I.$ From this, we conclude that $\det(M)=\pm 1$. (Why?)
Can you take it from there?
|
H: How do I know when to use nCr button and when to use nPr button on my calculator in which situation?
How do I know when to use nCr button and when to use nPr button on my calculator?
nCr = combinations, I believe
nPr = permutations
Is there a general rule I can use to figure out which one to use and when according to the given question?
AI: In permutations, order counts.
So, if I wanted to know the number of ways to arrange a set of four unique books out of a total supply of 20 books, I'd use $20$ nPr $4$. (Order counts in arranging books.)
In combinations, order doesn't count.
So, if I wanted to know the number of ways to make a team of $4$ people out of a total group of $20$ people, I'd use $20$ nCr $4$. This is because order doesn't matter in choosing a team.
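For reference, the standard formulas behind the two buttons, applied to the numbers above:
$$nPr=\frac{n!}{(n-r)!},\qquad nCr=\frac{n!}{r!\,(n-r)!},$$
$$20\text{ nPr }4=\frac{20!}{16!}=20\cdot19\cdot18\cdot17=116280,\qquad 20\text{ nCr }4=\frac{116280}{4!}=4845.$$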
|
H: Why $f$ is injective? (infinitude of primes)
In this very short paper by Dustin J. Mixon, I would like understand why the author says
$f$ is injective by the fundamental theorem of arithmetic.
In my opinion, the Fundamental Theorem of Arithmetic (FTA) is necessary to define $f$, but it isn't necessary to prove that $f$ is injective. For example, if FTA were not true but you were able to ensure the uniqueness of the $k_i$ in any other way, then $f$ would be injective because $(k_1,\ldots,k_N)=(m_1,\dots,m_N)\Rightarrow k_i=m_i$.
In other words, I think the uniqueness of the $k_i$ (and therefore FTA) is necessary to define $f$, but not to prove that $f$ is injective.
What can you talk about this?
Thanks.
AI: The fundamental theorem of arithmetic isn't necessary to define $f$, nor is it necessary to prove the injectivity of $f$.
The injectivity is immediate because
$$ g \colon \lbrace 0,\, \ldots,\, K\rbrace^N \to \mathbb{N}; \; g(e_1,\, \ldots,\, e_N) = \prod\limits_{i = 1}^N {p_i}^{e_i}$$
is a left inverse of $f$.
To define $f$, all you need is that each number has some representation as a product of primes. If the FTA didn't hold, such a representation would in general not be unique, but for a finite set of numbers and primes, a representation could be arbitrarily chosen even without thinking about the axiom of choice.
|
H: Mystery shapes - making shapes from vague descriptions
I am suppose to mentally create something that has base $x^2 + y^2 = 1$ and the cross sections perpendicular to the x axis are triangles whose height and base are equal.
What is going on? I tried to graph it but it was too hard, is this like a turtle? Sphere on back, pointy spikes on the bottom? What does "triangles" mean? How many? 2?
AI: Hint: you have a "cone" in 3D: and I'm assuming you can take the x-y plane as where the cone sits: its base is a circle of radius $1$. Think of an upside-down ice-cream cone with no ice-cream, whose height is equal to the diameter of the circle. Can you fill in more of the details? Try to envision how "steep" the cone must be from the circumference of the base of the cone, to its tip.
Added: The cross section will be a triangle whose height equals its base: if we call the base $2y$, then half the base is $y$, and since height equals base (given), the height is $2y$. The cross-sectional area (area of the triangle) will then be $$A = \frac 12\;\text{base}\, \times\, \text{height} = (y)(2y) = 2y^2.\tag{Area}$$ Now, we know that $x^2 + y^2 = 1$, so $y^2 = 1 - x^2 \implies 2y^2 = 2(1 - x^2)$. Substituting this into our equation for area, to get an equation $A(x)$ in terms of $x$, gives us $$A(x) = 2y^2 = 2(1-x^2) = 2 - 2x^2\tag{A(x)}$$ Now, we want to sum the area of all cross sections (triangles) between $x = -1\;\text{and}\; x = 1$, to determine the volume $V$ of the solid:
$$V = \int_{-1}^1 A(x)\, dx = \int_{-1}^1 (2-2x^2)\, dx$$
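Evaluating the integral is then routine:
$$V=\int_{-1}^1(2-2x^2)\,dx=\left[2x-\frac{2x^3}{3}\right]_{-1}^{1}=\frac43-\left(-\frac43\right)=\frac83.$$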
|
H: More three-term arithmetic progression questions
This question was inspired by a recent question about whether
$\frac1{2}$, $\frac1{3}$ and $\frac1{15}$
can be (possibly non-consecutive) terms
in an arithmetic progression.
My question(s): Which of the following sets of
three values can be (possibly non-consecutive) terms
in an arithmetic progression,
which can not, and which are you sure of the answer
but have no hope of a proof?
$1, \sqrt{2}, 2$
$1, \sqrt{2}, \sqrt{3}$
$\sqrt{2}, \sqrt{3}, \sqrt{5}$
$1, e, e^2$
$1, \sqrt{2}, e$
$1, e, \pi$
$\gamma, e, \pi$
$\pi, \pi^2, \pi^3$
$\pi, \pi^2, \pi^4$
$\pi, \pi^2, \pi^5$
AI: Generically, suppose that $(a, b, c)$ are three terms of an arithmetic progression with common difference $d$. Then $b-a$, $c-b$, and $c-a$ are all integer multiples of $d$ (and in fact, if any two of these quantities are, then by the appropriate addition or subtraction the third is as well); call them $k_1d$, $k_2d$ and $k_3d$, with $k_1, k_2, k_3\in\mathbb{Z}$ (and $k_3=k_2+k_1$). Then we have $\frac{k_2}{k_1} = \frac{c-b}{b-a}\in\mathbb{Q}$; contrariwise, if $\frac{c-b}{b-a}=\frac{m}{n}\in\mathbb{Q}$, then by choosing $d=\frac{b-a}n=\frac{c-b}m$, $b=a+nd$ and $c=b+md=a+(m+n)d$ are clearly members of an arithmetic progression with common difference $d$.
Applying this to several of your cases:
1) $1,\ \sqrt{2},\ 2$: $\displaystyle\frac{c-b}{b-a} = \frac{2-\sqrt{2}}{\sqrt{2}-1} = \frac{2-\sqrt{2}}{\sqrt{2}-1}\cdot\frac{\sqrt{2}+1}{\sqrt{2}+1} = 2\sqrt{2}+2-\sqrt{2}\cdot\sqrt{2}-\sqrt{2} = \sqrt{2}$. Since this ratio is not rational, then they can't possibly be part of an a.p.
4) $1,\ e,\ e^2$: $\frac{c-b}{b-a} = \frac{e^2-e}{e-1}=e$. Again, since the ratio isn't rational, they can't possibly be part of an a.p.
10) $\pi,\ \pi^2,\ \pi^5$: Here the ratio is $\frac{\pi^5-\pi^2}{\pi^2-\pi} = \pi^3+\pi^2+\pi$. If this were a rational number $r=\frac{p}{q}$, then $\pi$ would be a solution of the equation $q\pi^3+q\pi^2+q\pi-p=0$; in other words, $\pi$ would be algebraic. But since this is known not to be so, then once again the terms cannot possibly be part of an a.p.
Can you see how to handle the rest in similar fashion?
|
H: How should I interpolate between values in a logarithmic series?
What's the best way to interpolate between 2 values of a logrithmic series?
More specifically, I have a process where we encode values as $b = \text{floor}(\log(x, k))$. We discard the original values, and use b as a bucket for a bunch of statistics.
However, later I want to "reconstitute" an approximation of the original value(s) for things like graphing.
I could say the approximation of $x$ is $a = b ^ k$. However, this puts $x$ at the lowest value in the bucket.
I could say $a = (b + 0.5) ^ k$. But I think that would put the approximation closer to the high value than to the low value, and I'm not sure that's right either.
Any ideas? Am I over-thinking this one?
(Also, did I tag this correctly? Let me know, or else just edit to fix it.)
AI: This would heavily depend on your distribution, and what you want the approximation for.
Some questions:
Is your data uniformly distributed over $[b^k, (b+1)^k]$? It likely isn't, and so you might want to bias your approximation in some direction to reflect this.
What do you mean by approximation? Do you want to find the average value of $x$ given that $b^k \leq x < (b+1)^k$? Or do you want the most likely value of $x$ given that it is in this range?
|
H: Volume of liquid needed to fill sphere to height $h$
Find the volume of liquid needed to fill a sphere of radius $R$ to height $h$.
The picture shows $h$ up to maybe a quarter, I am not sure it seems pretty ambiguous.
No clue what to do here. I just know that the formula I am suppose to memorize is
$$V = \int \pi(R^2 - x^2)\,dx$$
AI: In this answer, I'm not really going to explain how to fit the formula; rather, I'm giving a process that will work for nearly any type of Calc II "application" problem.
The first step in any word problem is to draw a picture (even if you're given one--it still helps for you to draw your own.):
Now, we can turn the drawing into an integral. If we were to use discrete "slices" of the sphere to compute the volume, we'd have the following: (Let $r(z)$ be the radius of a slice at a given height $z$, and $\Delta z$ be the height of a given slice.)
$$\sum \left(\pi\cdot\big(r(z)\big)^2\cdot\Delta z\right)$$
This is just summing up the volume of a bunch of really short cylinders. As $\Delta z \to 0$, we have an integral:
$$\int_0^h\pi\cdot\big(r(z)\big)^2\;\mathrm dz$$
Now, what is $r(z)$? Well, let's look at a circle of radius $R$, and see what its radius is as a function of $z$:
So, we see that $y = r(z) = \sqrt{R^2 - (z-R)^2}$. Let's plug this into our integral we have above:
$$\int_0^h\pi\left(\sqrt{R^2 - (z-R)^2}\right)^2\;\mathrm dz$$
$$\int_0^h\pi(R^2 - (z-R)^2)\;\mathrm dz$$
The above integral gives the volume of the water in a sphere, filled to height $h$.
Note: my answer differs from the other answers because I positioned my sphere with the bottom at the origin, rather than centered at the origin. In hindsight, the other way is simpler, but I didn't want to have to redo all my graphics. :) To show they're the same, we perform the subsitution:
$$u = z-R\implies \mathrm du = \mathrm dz$$
This implies:
$$\int_{-R}^{h-R}\pi(R^2 - u^2)\;\mathrm du$$
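For reference, either integral evaluates to the standard spherical-cap volume:
$$V=\int_0^h\pi\left(R^2-(z-R)^2\right)dz=\pi\int_0^h\left(2Rz-z^2\right)dz=\pi\left(Rh^2-\frac{h^3}{3}\right)=\frac{\pi h^2(3R-h)}{3}.$$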
|
H: Hilbert's Hotel, cardinality, and equality
One of the absurdities illustrated by Hilbert's Hotel, some say, is infinity + infinity = infinity. This is absurd in the sense that "After the infinitely many new guests check in, the number of guests will remain the same as before." I think what is meant by "the number of guests is the same as before" is that the infinities in question have the same cardinality - the same number of guests before and after.
Question: Is it true that if two infinite sets have the same cardinality, they are equal? Or is it possible that two sets have the same cardinality, yet have different numbers of members?
If it is true that infinity + infinity = some greater infinity, in what sense is the claim true that "After the infinitely many new guests check in, the number of guests will remain the same as before."
Note: The context of this question is a discussion on an objection to Hilbert's Hotel on Philosofize
AI: The term "infinity" means nothing in modern mathematics. It is just a term which represents "not finite".
In the context of the Hilbert hotel, and it seems that you discuss this in the context of infinite cardinals, rather than ordinals or some other notion of infinity, infinities are cardinals. So when you say "if two infinities have the same cardinality they are equal" you really just say "if two infinities are equal then they are equal".
On the other hand, if you mean infinities in the sense of ordinal numbers, which are another form of infinitary numbers, then addition is not idempotent. $\alpha+\alpha\neq\alpha$. But ordinals carry more structure than cardinals, and the addition of ordinals is different than the addition of cardinals because of that extra structure.
Glancing over the comments in the "objection" that you have linked, it seems to me that a particularly important point is amiss:
The notion of "number" simply means some "measurable quota" in some aspect. Cardinality of sets is a way to measure size of sets in a particular way. So cardinal numbers are numbers. It is true that those are not real numbers, but this is why we don't use the term "infinities" so wildly, and we tend to be very clear in what sort of context it is applied.
If one thinks of numbers as real numbers, and infinity as the infinity we know from calculus (a length which is longer than any finite length), then of course there is merit to this apparent objection. But this is not what Hilbert was trying to show when he opened his grand hotel to the unsuspecting public. No, he was talking about cardinal numbers, which are a whole other system of numbers.
|
H: Rearrange formula in term of r
I do not know how to arrange following equation in term of r:
$$I = P\left(1 + {r \over 100}\right)^n $$
I know that first step is dividing both parts of equation by P:
$$ {I \over P} = \left(1 + {r \over 100}\right)^n$$
But here I got stuck. I do not know how to extract $r$ out of the power. Taking a square root does not help because it would be the root of the whole $\left(1 + {r \over 100}\right)^n$.
AI: Ok, you're on the right track so far. I believe that next you would take the $n$th root of both sides. Afterward, multiply everything by $100$, and finally subtract $100$ from the right hand side.
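Written out, those steps give:
$$\sqrt[n]{\frac IP}=1+\frac r{100}\;\implies\;100\sqrt[n]{\frac IP}=100+r\;\implies\;r=100\left(\sqrt[n]{\frac IP}-1\right).$$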
|
H: How to solve $dy/dx = \frac{(3y^2+2x^2)}{ (xy)}$
I got this problem in today's exam and I couldn't quite figure this out. The equation is $xy \, dy = (3y^2+2x^2) \, dx$, $M_y = 6y$ and $N_x = y$, they aren't equal so this equation is nowhere near exact, it doesn't look like I can do separable either? What to do?
AI: Hint: Multiplying both sides by $2y$ yields:
$$
2y\dfrac{dy}{dx} = \dfrac{6y^2+4x^2}{x} = \left(\dfrac{6}{x}\right)y^2 + 4x
$$
Now let $v=y^2$. Then $\dfrac{dv}{dx}=2y\dfrac{dy}{dx}$. Substitution yields:
$$
\dfrac{dv}{dx} = \left(\dfrac{6}{x}\right)v + 4x \iff \dfrac{dv}{dx} - \left(\dfrac{6}{x}\right)v = 4x
$$
which is a linear ODE that can be solved using the method of integrating factors.
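For completeness, the integrating-factor computation finishes as follows:
$$\mu(x)=e^{\int-\frac6x\,dx}=x^{-6}\;\implies\;\left(x^{-6}v\right)'=4x^{-5}\;\implies\;x^{-6}v=-x^{-4}+C\;\implies\;y^2=v=Cx^6-x^2.$$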
|
H: Congruence classes: Find the inverse
I have the following problem:
If $ [3640]$ is invertible in $\mathbb {Z}_{7297}$ then determine its inverse.
Okay. The first thing I thought was:
$$3640x\equiv 1 \pmod{7297}$$
But isn't there any easier way?
Any hint, much appreciated.
AI: We use the Euclidean Algorithm. Note that
$$7297=(2)(3640)+17.\tag{A}$$
$$3640=(214)(17)+2.\tag{B}$$
$$17=(8)(2)+1.\tag{C}$$
Now go backwards. We have from (C) that
$$1=17-(8)(2).\tag{1}$$
But from (B) we have $2=3640-(214)(17)$.
Substituting for the $2$ in Equation (1), we get
$$1=17-8[3640 -(214)(17)]=(-8)(3640)+(1713)(17). \tag{2}$$
Now from (A) we have $17=7297-(2)(3640)$. Substitute for the $17$
in Equation (2). We get
$$1=(-8)(3640)+(1713)[7297-(2)(3640)].$$
Thus $1$ is equal to $(-3434)(3640)$ plus a multiple of $7297$.
So one answer is $-3434$. If you want a positive answer, add $7297$. We get $3863$.
Admittedly, a little unpleasant, but fully mechanical. There are nicer ways to implement this Extended Euclidean Algorithm, but for smallish numbers like these, back substitution is not too bad.
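Here is a minimal Python sketch of the same computation (the function name is mine, not from a standard library):
def modinv(a, m):
    # iterative extended Euclidean algorithm: find x with a*x = 1 (mod m)
    old_r, r = a, m
    old_x, x = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
    assert old_r == 1, "a is not invertible mod m"
    return old_x % m

print(modinv(3640, 7297))            # 3863
print(3640 * 3863 % 7297)            # 1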
|
H: Meaning of "strong" and "weak" (formulas?) in propositional logic
I was doing some review of propositional logic from Enderton's book. In one section(pg. 26 of the 2nd edition), he explains the idea that given wffs $\sigma_1, \sigma_2, \cdots, \sigma_k$ and $\tau$, one can use truth tables to check whether or not $\{ \sigma_1, \cdots, \sigma_k \} \models \tau$. Then he gives some examples of how one might in an educated way avoid checking all combinations of assignments of the propositional variables used in the sigmas and in $\tau$. This part looks o.k to me.
But then Enderton says "The stronger the antecedent(the expression on the left side), the weaker the conditional." He then gives the examples
\begin{align}
(P \wedge Q) &\models P \\
(P \rightarrow R) &\models ((P \wedge Q) \rightarrow R) \\
(((P \wedge Q) \rightarrow R) \rightarrow S) &\models ((P \rightarrow R) \rightarrow S)
\end{align}
What does he mean by this? Which antecedent is stronger, the first or the third? Maybe a question to start out with is: what do "strong" and "weak" even mean here? Thanks for any help/clarification!
Sincerely,
Vien
AI: "Strength" here means "provability strength". It's somewhat loose terminology, but the idea is the following. Suppose $A,B,C$ are all sentences, and $A \vDash B$ (and let's assume $B \not\vDash A$). In this case, we would say "$A$ is stronger than $B$", i.e. "$A$ can at least prove everything that $B$ can, and more."
Question 1: What do you think the relative strength of $\neg A$ and $\neg B$ would be? Hopefully, it's not too hard to see that, when you add negations, you flip the relative strengths; so now $\neg B \vDash \neg A$, i.e. "$\neg B$ is stronger than $\neg A$."
Question 2: What do you think the relative strength of $A \vee C$ and $B \vee C$ is, where $C$ is just some new sentence? Well, this is a bit tricky, since $C$ could be a tautology, but all else equal, if $A$ is stronger than $B$, then $A \vee C$ is at least as strong as $B \vee C$.
Combine these two together: If $A$ is stronger than $B$, then $\neg B$ is stronger than $\neg A$. But then $\neg B \vee C$ is stronger than $\neg A \vee C$, i.e. $B \rightarrow C$ is stronger than $A \rightarrow C$, given the definition of "$\rightarrow$". Slotting in the appropriate sentence for $A,B,C$ above, hopefully Enderton's remarks are a little more clear.
That's the formal answer to your question. Perhaps, on an intuitive level, consider the following: Let $A$ be "It's raining" and let $B$ be "It's raining really hard". In this case, $B$ is strictly stronger than $A$: $A$ follows from $B$ but not vice versa. But now consider the conditionals, "If it's raining, I will bring an umbrella" and "If it's raining really hard, I will bring an umbrella." Intuitively, the latter seems to be claiming something weaker. That is, the second conditional is consistent with it raining and me not bringing an umbrella (perhaps it's only lightly raining); but the first conditional is not (even if it's lightly raining, I'll bring an umbrella).
|
H: Targets of Fighter plane
If fighter plane travels along the path
$$r(t)=(t-t^3,12-t^2,3-t),$$
how can we show that the pilot cannot hit any target on the x-axis? Any pointers and hints would be appreciated. I do not want full answer but a start because I am stuck in terms of how to approach this problem.
AI: You are probably supposed to assume that the pilot can only shoot along the instantaneous direction of travel. The question is then whether the tangent ray from the trajectory ever intersects the $x$ axis. We have $r(t)=(t-t^3,12-t^2,3-t)$, so $r'(t)=(1-3t^2,-2t,-1)$. Then we need to argue that $(t-t^3,12-t^2,3-t)+u(1-3t^2,-2t,-1)$ never hits $y=z=0$ for any value of $t,u$ with $u \ge 0$
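One way to finish the computation: setting the $z$-component to zero gives $u=3-t$, and substituting into the $y$-component,
$$12-t^2-2tu = 12-t^2-2t(3-t) = t^2-6t+12 = (t-3)^2+3 > 0,$$
so the $y$-component never vanishes and the tangent ray never meets the $x$-axis.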
|
H: Can a probability density function be used directly as probability function?
This might be something basic but it confuses me greatly.
I am reading a literature, where they use the probability density function of a Gaussian distribution, that is
$$f(x)=\frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} }$$
directly as a probability function - that means,
$$p(x\mid\sigma^2,\mu)=\frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} }\;.$$
However, from what I read elsewhere, probability density function cannot be used like that, because it can be bigger than 1.
So I am confused.
AI: Unless the writer is making a mistake, the symbol $p$ is being used where it is more common to use $f$, possibly with subscript, as in $f_X$.
Some people use $p$, in the discrete case, for the probability mass function, and, in the continuous case, for the probability density function. In the continuous case, $p(x)$ is not a probability.
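For a concrete illustration that a density value is not a probability: with $\mu=0$ and $\sigma=0.1$,
$$f(0)=\frac{1}{0.1\sqrt{2\pi}}\approx 3.99>1,$$
which is perfectly fine for a density (probabilities are obtained by integrating $f$ over an interval) but impossible for a probability.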
|
H: Finding the point of intersection of L and the xy-plane.
$$P = (-1, 1, 1)$$
$$Q = (1,2,3)$$
$$R = (2,1,0)$$
My textbook provides a solution to this, but I'm unclear of how it came to it. The book says;
In the xy-plane we have $0 = z = 1 + 2λ$ so $λ = \frac{-1}{2}$ and so the intersection of L with the xy-plane is $(-2, \frac{1}{2}, 0)$
Can someone explain where z = 1+2λ comes from, as well as how the final answer is arrived at?
EDIT: I believe that the line is $L = (-1, 1, 1) + \lambda(2,1,2)$
AI: If $L$ is $(-1,1,1)+\lambda (2,1,2)$ and we want the intersection with the $xy$ plane, we need to find the point on $L$ where $z=0$, which is the $xy$ plane. The general point on the line is then $(-1+2 \lambda, 1+ \lambda, 1+2 \lambda)$. The third coordinate is $z$ so we set that to zero, $1+2 \lambda = 0$. Solving, $\lambda =-\frac 12$ so we plug that into the general point we get $(-1+2(-\frac 12),1+(-\frac 12),1+2(-\frac 12))=(-2,\frac 12,0)$
|
H: Do I understand metric tensor correctly?
So I've been studying vectors and tensors, and I'm trying to understand metric tensors.
As I understand them, besides a vast array of explanations, they provide an invariant distance between vectors regardless of whether their basis has changed.
So if you had a set of vectors in, say, a Cartesian coordinate space, and used a metric tensor to describe the distances between the vectors, even if the primed basis describes them in spherical coordinates, are the distances the same? This isn't quite intuitive to me yet.
This equation might clarify where I understand the definition of a metric tensor:
$ds^2= g_{11} dx^2 + g_{22} dy^2$
AI: Suppose $U\subseteq{\bf R}^2$ is some open region and $f:U\to{\bf R}^3$ defines some smooth surface in space. Now let $\gamma:I\to U$ be a path in $U$ that corresponds to a path $f\circ\gamma:I\to{\cal S}=f(U)\subset {\bf R}^3$ on the surface which we denote by $\cal S$ (here $I$ is an open interval in $\bf R$). How do we find the length of this curve?
The length is computed as
$${\rm length}=\int_I\underbrace{\left\|\frac{d}{dt}(f\circ\gamma)\right\|}_{\large\rm ``speed"}dt=\int_I\left\|J_f(\gamma)\gamma'(t)\right\|dt $$
where $J_f$ is the Jacobian, or matrix of partials, of $f$, evaluated along $\gamma$. Note that an expression of the form $\| Jv\|$ may be rewritten as $\sqrt{Jv\cdot Jv}=\sqrt{v^T(J^TJ)v}$. Note that every matrix $A$ determines a bilinear form as $Q_A(u,v)= u^TAv$.
In general, then, suppose we have a manifold $\cal M$ (the device intended to axiomatize curved space), which is essentially a collection of points with some topology; there is no a priori notion of distance between points. To create such a notion of distance, form a collection of bilinear forms $g_p$, one associated to every point $p$ in space, so that any path $\gamma:I\to\cal M$ can be "measured" via
$$\int_I \sqrt{g_\gamma(\gamma',\gamma')}dt.$$
(I am sort of fudging up usual notational conventions for the sake of clarity.) Generally, given charts aka coordinate patches, we represent $g$ as a matrix, and so put two indices under it.
This is the motivation of the metric tensor in differential geometry. Vectors are usually understood to be tangent vectors; there is an abstract object called a "vector space" whose elements are by fiat called "vectors," and to each point in space we attach a vector space called a "tangent space." The derivative $\gamma'$ of a nice path $\gamma:I\to\cal M$ doesn't reside in just one space, it moves along the tangent spaces, which means that $\gamma'(t)$ is always in the tangent space of the point $\gamma(t)\in\cal M$, and furthermore the bilinear form $g_p$ at the point $p\in\cal M$ is always defined on $p$'s tangent space.
There are different, purely algebraic ways of understanding vectors and tensors in the context of abstract algebra (you'll see $\otimes$ symbols everywhere and no Einstein summation notation, for example): these purely algebraic ways are ultimately married to the differential-geometric ways of understanding tensors at the more advanced levels of geometry.
The distance between two points will not change if you change your coordinates. Also the metric tensor itself, as a function, will not change. But the matrix which represents the metric tensor depends on what coordinate system is being imposed on all of the tangent spaces, so the matrix representation of the metric tensor will in fact change with changes of coordinates.
|
H: If $v_i$s are linearly dependent, $\det [\phi_i{v_j}] = 0$ - is the proof legit?
Given $v_1, \ldots, v_k \in V$ and $\phi_1, \ldots, \phi_k \in V^*$. If $v_1, \ldots, v_k \in V$ are linearly dependent, prove that $\det[\phi_i(v_j)] = 0.$ Here $k$ is the dimension of $V$, but I need to show this also works for a subspace with dimension $p$.
Assume $v_r$ is a linear combination of the others, as without loss of generality we consider in terms of basis. Denoting the matrix whose $i$th row and $j$th column is $\phi_i(v_j)$ to be $[\phi_i(v_j)]$. Then
$$[\phi_i(v_j)]= \pmatrix{ \begin{array}{cccccc}
\delta_{11}&\delta_{12} & \ldots & \lambda_1 \delta_{11} + \cdots + \hat \lambda_r \delta_{1r} + \cdots + \lambda_p \delta_{1p} & \cdots & \delta_{1p}\\
\delta_{21}&\delta_{22} & \ldots & \lambda_1 \delta_{21} + \cdots + \hat \lambda_r \delta_{2r} + \cdots + \lambda_p \delta_{2p} &\cdots & \delta_{2p}\\
\vdots &\vdots & \ddots &\vdots & \ddots & \vdots\\
\delta_{r1}&\delta_{r2} & \ldots & \lambda_1 \delta_{r1} + \cdots + \hat \lambda_r \delta_{rr} + \cdots + \lambda_p \delta_{rp} &\cdots & \delta_{rp}\\
\vdots &\vdots & \ddots &\vdots & \ddots & \vdots\\
\delta_{p1}&\delta_{p2} & \ldots & \lambda_1 \delta_{p1} + \cdots + \hat \lambda_r \delta_{pr} + \cdots + \lambda_p \delta_{pp}&\cdots & \delta_{pp}\\
\end{array} }
=\left( \begin{array}{cccccc}
1&0 & \ldots & \lambda_1 & \cdots & 0\\
0&1 & \ldots & \lambda_2 & \cdots & 0\\
\vdots &\vdots & \ddots & \vdots &\ddots & \vdots\\
0&0 & \ldots & 0 & \cdots & 0\\
\vdots &\vdots & \ddots & \vdots &\ddots & \vdots\\
0&0 & \ldots & \lambda_p & \cdots & 1\\
\end{array} \right). $$
Hence, the determinant of $\phi_i(v_j)$ is zero.
This question has a cousin here: If $\phi_i$s are linearly dependent, $\det [\phi_i(v_j)] = 0$ - is the proof legit?. Though my original attempt is wrong, I think the answers are indeed great.
AI: I'm not sure what the $\lambda$s are above.
If the $v_j$ are linearly dependent, then there is some $\alpha \neq 0$ with $\sum_j \alpha_j v_j = 0$. Without loss of generality, we may assume $\alpha_1 \neq 0$ and write $v_1 = - \sum_{j \neq 1} \frac{\alpha_j}{\alpha_1} v_j$.
Let $M$ be the matrix with entries $[M]_{ij} = \phi_i(v_j)$.
Then we have $\phi_i(v_1) = - \sum_{j \neq 1} \frac{\alpha_j}{\alpha_1} \phi_i(v_j)$, so we see that the first column of $M$ is a linear combination of the other columns, hence $\det M = 0$.
Explicitly, using the above, we can write $M = \begin{bmatrix} 0 & -\frac{\alpha_2}{\alpha_1} & -\frac{\alpha_3}{\alpha_1} & \cdots & -\frac{\alpha_n}{\alpha_1} \\
0 & 1& 0 & \cdots & 0 \\
0 & 0& 1 & \cdots & 0 \\
\vdots & & & \ddots& \vdots\\
0 & 0 & \cdots & 0 & 1
\end{bmatrix} M$, from which we see that $\det M = 0$.
|
H: When precisely can we replace quotient objects with subobjects in the definition of simple objects?
In a category with zero, a simple object is one that has only two quotients - itself and zero.
Firstly - a point of confusion. The definition above says that quotient object requires a congruence, what is the one we should use here? I don't see a natural one, but I'm probably missing something obvious.
b. When the category is abelian, quotients can be replaced by subobjects. Is there a more precise condition?
AI: The notion of congruence being used here is linked to from the pages you linked to; it's this one.
The reason that we can replace quotients with subobjects for abelian categories is that we can quotient an object by a nontrivial subobject and expect a nontrivial quotient in return. This isn't true in general; for example, the quotient of a group $G$ by a subgroup $H$ (by which I mean the cokernel of the inclusion $H \to G$) may be trivial even if $H$ is not all of $G$ (since its normal closure may be all of $G$). I am not aware of a useful criterion substantially more general than abelian here; $\text{Grp}$ is one of the nicest not-quite-abelian categories, so the situation doesn't look hopeful.
|
H: Trigonometry in Triangles Without Right-Angles
Could you please help by showing me how I can find the unknown sides for the triangles below?
AI: Hint: since you do not have a right triangle, and do not know whether the second triangle is a right triangle, assume it isn't:
For the first, use the law of sines: the unknown angle is $180 - 60 - 50 = 70^\circ$
$$\frac a{\sin A} = \frac b{\sin B} = \frac c{\sin C}\tag{Law of Sines}$$
For the second triangle:
Use the law of cosines using the information you are given, to solve for the unknown, given the right choice of angles.
$$c^2 = a^2 + b^2 - 2ab\cos \gamma\tag{Law of Cosines}$$
Images from the linked wikipedia articles (respectively).
|
H: Constructing a linear map
Construct a linear map α : $\mathbb{R^4} → \mathbb{R^4}$ whose kernel is spanned by $(1, 0, 0, 1)$ and $(0, 1, 1, 0)$
I'm seeking guidance, how I could construct the required map? The thing is that I am not yet introduced to matrices. The chapter is on vector spaces, and short subchapter basically gives theorem on $\dim(V) = \dim ker(α) + \dim α(V)$. So I need to find a rather simple way how I could do it.
Also, I would appreciate if anybody could expand on how to make sure the transformation's kernel (or image for that matter) contains the required vector(s), again, without using matrices. Thanks!
AI: Here are some hints & suggestions:
(Note that the solution to the question is not unique. There are many $\alpha$ that satisfy the required conditions.)
Hint #1: Just by observation, we see that the vectors $(1,0,0,-1)^T, (0,1,-1,0)^T$ are orthogonal to the above two (which are orthogonal to each other).
The point is that the vectors $v_1=(1,0,0,1)^T, v_2=(0,1,1,0)^T, v_3=(1,0,0,-1)^T$, $v_4=(0,1,-1,0)^T$ are mutually orthogonal, and are a basis for the space $\mathbb{R}^4$.
Hint #2: To define the linear operator $\alpha$, it is sufficient to define its behaviour on a basis.
So, in order to define $\alpha$, you just need to specify the values of $\alpha(v_k)$, for $k=1,...,4$. In your case, the value must be zero for two specific vectors (which two?) and non-zero for the two other.
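For instance, one concrete choice (among many) following these hints is to send $v_1,v_2$ to zero and fix $v_3,v_4$; written out without matrices, this is the map
$$\alpha(x_1,x_2,x_3,x_4)=\tfrac12\left(x_1-x_4,\;x_2-x_3,\;x_3-x_2,\;x_4-x_1\right),$$
which is linear, and $\alpha(x)=0$ exactly when $x_1=x_4$ and $x_2=x_3$, i.e. exactly on the span of $(1,0,0,1)$ and $(0,1,1,0)$.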
|
H: Simple closed geodesic around two hyperbolic cusps.
Consider a connected hyperbolic $2$-manifold $M$ with cusps. Consider a simple closed geodesic in $M$, which dissects $M$ into two components. Assume that one of the components contains exactly two cusps. Can you prove me, that this component is conformal to the hyperbolic disk with two points removed?
AI: This isn't true. After you cut, a component with two cusps may have any genus at all.
I shall give a specific counterexample. Let $P$ be a regular ideal hyperbolic $14$-gon, and let $S$ be the hyperbolic surface obtained by gluing together pairs of sides of $P$, as shown in the following figure:
It is easy to check that $S$ is a genus-two surface with four cusps, indicated by the red, blue, yellow, and purple dots. The dashed blue line is a closed geodesic, and cutting along this geodesic separates the surface into two components, each of which is isomorphic to a torus with two cusps and a disk removed.
|
H: Is convergence in $I^I$ topology equivalent to point-wise convergence?
The $I^I$ topology is the uncountable Cartesian product (Tychonoff) of the closed unit interval $[0,1]$. We can imagine it as a space of all the functions from $[0,1]$ to itself.
I was told that a sequence $\alpha_n$ converges to $\alpha$ in this space if and only if the function $\alpha_n(x)$ converges to $\alpha(x)$ pointwise. But I'm a little bit skeptical about that.
For example, let
$$\alpha_n(x)=\frac{x(1-x)^n}{\frac{1}{n+1}\left(1-\frac{1}{n+1}\right)^n},$$
the famous pointwise but not uniformly convergent function sequence that converges to $\alpha(x)=0$.
Now, look at the open set of all the functions from $[0,1]$ to $[0,0.5)$, it is clearly an open neighborhood of $\alpha(x)=0$, but it is disjoint from the sequence of $\alpha_n$, which means that the sequence is not convergent. Am I right about that?
AI: No: the set of all functions from $[0,1]$ to $\left[0,\frac12\right)$ is not open in the topology of pointwise convergence or the product topology (which is the same thing). Every open nbhd of a function in that set includes functions that take the value $0$ somewhere.
|
H: Which compact (orientable) surfaces are parallelizable?
Which compact (necessarily orientable) smooth $2$-manifolds are parallelizable?
I'm aware that the sphere $\mathbb{S}^2$ is not parallelizable, whereas the torus $\mathbb{T}^2 = \mathbb{S}^1 \times \mathbb{S}^1$ is. This leaves the case of connected sums of tori $\Sigma_g = \mathbb{T}^2 \# \cdots \# \mathbb{T}^2$.
Note: This is not a homework question. I'm asking purely out of curiosity, especially because I have no idea how one would approach this problem (aside from the techniques mentioned in this problem, which are slightly above my level).
AI: If $S$ is a parallelizable surface, then it has a flat Riemannian metric -- just choose a smooth global frame and declare it to be orthonormal. If in addition $S$ is compact, Gauss-Bonnet implies that it has Euler characteristic zero. The only compact orientable surface with $\chi(S)=0$ is the torus.
|
H: Divisibility by Quadratics $b^2+ba+1\mid a^2+ab+1\Rightarrow\ a=b$
The natural numbers $a$ and $b$ are such $a^2+ab+1$ is divisible by $b^2+ba+1$. Prove that $a = b$.
I tried to algebraically manipulate it as follows:
$(b^2 + ba + 1)k = a^2 + ab + 1$
$[b(a + b) + 1]k = a(a + b) + 1$
$kb(a + b) + k = a(a + b) + 1$
$k - 1 = (a - kb)(a + b)$
I'm stuck here. What should I do next? A case-by-case analysis of possible congruencies would be too tedious and inelegant.
AI: If $a^2+ab+1$ is divisible by $b^2+ba+1$, then so is $(a^2+ab+1)-(b^2+ba+1)=a^2-b^2$.
Note that $a+b$ and $b^2+ba+1$ are relatively prime. So $b^2+ab+1$ divides $a-b$. Now you should be able to finish, using considerations of size.
|
H: Questions about the local ring of a point on a variety.
Let $Y$ be a variety. Let $\mathcal{O}_{P, Y} = \mathcal{O}_{P}$ be the ring of germs of regular functions on $Y$ near $P$. That is, an element of $\mathcal{O}_P$ is pair $\langle U, f \rangle$ where $U$ is an open subset of $Y$ containing $P$ and $f$ is a regular function on $U$.
How to show that
(1) $\mathcal{O}_P$ is a local ring (has only one maximal ideal $\mathfrak{m}$) and
(2) $\mathcal{O}_{P}/\mathfrak{m} \cong k$?
These questions come from page 16 of the book Algebraic Geometry by Hartshorne.
Take $\mathfrak{m}=\{ f \in \mathcal{O}_P \mid f(P) = 0 \}$. Then $\mathfrak{m} \subseteq \mathcal{O}_P$. It is clear that $\mathfrak{m}$ is an ideal of $\mathcal{O}_P$. Suppose that $f(P) \neq 0$ and $f \in \mathcal{O}_P$. Then $1/f$ is regular in some neighborhood of $P$ and hence $1/f \in \mathcal{O}_P$. How could we show that $1/f \in \mathfrak{m}$? I think that if $1/f \in \mathfrak{m}$, then $1=f \cdot 1/f \in \mathfrak{m}$ and hence $\mathfrak{m} = \mathcal{O}_P$. Is this true? Thank you very much.
Edit: I made a mistake. I should consider the ideal $\mathfrak{n}$ such that $\mathfrak{m} \subsetneq \mathfrak{n} \subseteq \mathcal{O}_P$ and show that $\mathfrak{n} = \mathcal{O}_P$.
AI: Define a map $\varphi : \mathcal{O}_P \to k$ that sends a regular function $f$ at $P$ to $f(P) \in k$. This is just the evaluation homomorphism and clearly $\mathfrak{m} = \ker \varphi$. Now this map is also surjective because for any element $a \in k$ I can just define the constant function on $Y$ that sends every $y \in Y$ to $a$. Such a function is clearly regular at $P$. Thus it follows that
$$\overline{\varphi} : \mathcal{O}_P/\mathfrak{m} \stackrel{\cong}{\longrightarrow} k$$
is an isomorphism. In particular $\mathcal{O}_P/\mathfrak{m}$ is a field, so $\mathfrak{m}$ is a maximal ideal. For (1), note (as you observed) that any $f \in \mathcal{O}_P \setminus \mathfrak{m}$ satisfies $f(P) \neq 0$, so $1/f$ is regular near $P$ and $f$ is a unit in $\mathcal{O}_P$. A ring in which every element outside $\mathfrak{m}$ is a unit cannot have any other maximal ideal (a proper ideal contains no units), so $\mathcal{O}_P$ is local with maximal ideal $\mathfrak{m}$.
|
H: Nested Interval Theorem; free of set theory
I am reading this proof on Spivak for the Nested Interval Theorem. I pretty much did exactly what he did except the last step.
For each $m$ and $n$ we have $a_n \leq b_m$, because $a_n \leq a_{n+m} \leq b_{n+m} \leq b_m$. It follows from problem 12 that $\sup \{a_n : n \in \mathbb{N} \} \leq \inf \{b_n : n \in \mathbb{N} \}$. Let $x$ be any number between these two numbers. Then $a_n \leq x \leq b_n$ for all $n$, so $x$ is in every $I_n$
Now could someone please explain to me how is it that he can simply "let $x$ be any number between these two numbers"? My original goal was to show that the sup and inf are indeed the same thing and therefore we may choose this number to be our $x$.
AI: What is there to explain? Let $u=\sup\{a_n:n\in\Bbb N\}$ and $v=\inf\{b_n:n\in\Bbb N\}$. From a previous result you know that $u\le v$. Therefore the interval $[u,v]$ is non-empty, and we can choose an arbitrary $x\in[u,v]$, i.e., an $x\in\Bbb R$ such that $u\le x\le v$. Since $a_n\le u\le x\le v\le b_n$ for each $n\in\Bbb N$, it is certainly true that $x\in[a_n,b_n]$ for each $n\in\Bbb N$ and hence that
$$x\in\bigcap_{n\in\Bbb N}[a_n,b_n]\;,$$
which is all that is being asserted at that point. Now it may be that you have other information that will enable you to show that in fact $u=v$, so that $[u,v]=\{u\}=\{v\}$, and therefore $x=u=v$, but that’s a separate issue requiring a separate proof. At this point we know only that the intervals have non-empty intersection, not that the intersection is a singleton.
|
H: What is the exact, rigorous, full statement of Divergence (Gauss') Theorem in $\mathbb{R}^3$ (without being too complicated)?
The wolfram page http://mathworld.wolfram.com/DivergenceTheorem.html states the formula
$$
\int_{V} \nabla \cdot \mathbf{F} \, dV = \int_{\partial V} \mathbf{F} \cdot d\mathbf{S}
$$
but it does not speak much of what kind of conditions should be imposed on $\mathbf{F}, V$ and so on.
I think it is enough for $\mathbf{F}$ to be continuously differentiable over $V$ (is it?). But what should be on $V$?
Q1) Is it enough for $V$ to have $\partial V$ as a parametrized (smooth) surface (even piecewise)?
Q2) (It may be a topological one.) But my textbook says a parametrized surface is the image of a continuously differentiable mapping $\mathbf{r} : \mathcal{R} \to \mathbb{R}^3$ where $\mathcal{R}$ is a region (i.e., open, bounded, its boundary having Jordan content 0) in $\mathbb{R}^2$. Then can a sphere have a parametrization?
Q3) What should be the exact imposition on $\mathbf{F}$ including how to specify its domain?
(I hope you'd not talk about manifolds and forms and other complex definitions...)
AI: Yes, the Divergence theorem holds for a "piecewise smooth" domain, namely a Lipschitz domain (for example, a regular polyhedron).
The unit $2$-sphere's most common parametrization:
$$
\mathbf{\Phi}: [0,2\pi]\times [0,\pi] \to \mathbb{R}^3\\
(u,v)\mapsto(\sin v \cos u, \sin v \sin u, \cos v)
$$
has two artificial singularities at the two poles $(0,0,\pm 1)$ (in $xyz$-coordinates), corresponding to $v=0$ and $v=\pi$.
Normally we require $\mathbf{F}$ to be smooth on $V$ and continuous up to the boundary. However if we are allowed to go to the realm of weak derivatives, the weak divergence of certain $\mathbf{F}$ has the exact form of Divergence theorem as well. A sufficient requirement for this is that $\nabla \cdot \mathbf{F} \in L^1(V)$, $\mathbf{F}\cdot \mathbf{n}\in L^1(\partial V)$, and $\mathbf{F}\cdot \mathbf{n}$ is continuous on any surface lying within $V$.
|
H: When will be a point $p$ not be a limit point of $A$
I am going through the proof of the Question:
Let $A$ be a subset of a (point set) topological space $(X,T)$. When will a point $p$ not be a limit point of $A$.
Proof: $p$ is not a limit point of $A$ if there exists an open set $G \in T$ such that
$p \in G$ and $(G \setminus \{p\}) \cap A = \varnothing$,
or equivalently, $p \in G$ and either $G \cap A = \varnothing$ or $G \cap A = \{p\}$.
I did not understand why $G \cap \{p\}'$ is avoided here, since $(G \setminus \{p\}) \cap A = G \cap \{p\}' \cap A = \varnothing$, which will be true if $G \cap \{p\}' = \varnothing$ (where $\{p\}' = X \setminus \{p\}$).
AI: Henno has explained the result itself, but it occurs to me that you may be asking why it’s written
$\qquad\qquad\qquad\qquad\qquad\qquad p\in G$ and $(G\setminus\{p\})\cap A=\varnothing$
instead of in the equivalent form
$\qquad\qquad\qquad\qquad\qquad\qquad p\in G$ and $G\cap\{p\}'\cap A=\varnothing$,
since, as you point out, $(G\setminus\{p\})\cap A=G\cap\{p\}'\cap A$.
It could be written either way; it really makes no difference as far as the meaning is concerned. It could, in fact, just as well be written $G\cap A\subseteq\{p\}$, since
$\qquad\quad G\cap A\subseteq\{p\}$ if and only if $(G\setminus\{p\})\cap A=\varnothing$ if and only if $G\cap\{p\}'\cap A=\varnothing$.
These are three equivalent ways of expressing the same fact; which one you choose is a matter of taste. I usually use the third. Many people prefer to avoid the second, because the notion of complement is a bit tricky: it requires specifying the universe in which the complement is to be taken. Here that’s the whole space, so we have to understand that $\{p\}'$ is really just an abbreviation for $X\setminus\{p\}$. And once we write that out explicitly, making the expression $G\cap(X\setminus\{p\})\cap A$, we might as well simplify it to $(G\setminus\{p\})\cap A$ anyway.
|
H: Numbers in a circle: how many sets of consecutive numbers have positive sum?
One hundred integers are written around a circle, and it is known that their sum is $1$. We will call a subset of several successive numbers a "chain". Find the number of chains whose members have a positive sum.
I had no idea how to approach the problem, so I considered some small sets.
Consider a circle with four elements $2, -1, 1, -1$ (arranged clockwise) at positions $0, 1, 2, 3$ respectively. Clearly, this has four chains: $[0, 1], [0, 1, 2]$ (moving clockwise) and $[0, 3], [0, 3, 2]$ (moving anti-clockwise).
Now consider another circle with four elements $2, -2, -1, 2$ (arranged clockwise) at positions $0, 1, 2, 3$ respectively. This has three chains $[3, 0, 1], [3, 0]$ (moving clockwise) and $[0, 3, 2]$ (moving anticlockwise).
I don't understand this. Won't the answer be ambiguous?
AI: Note: The question depends on your definition of a chain. If you take it to not include singletons and the full circle (as suggested by your count), then your answer will be dependent on the values you chose.
Hint: If a chain of integers has a positive sum, then the opposite set of integers has a non-positive sum.
However, the opposite set of integers may not necessarily form a chain. When does it not form a chain? This will explain why you have a discrepancy in your answer.
Hint: There are $n(n-2)$ chains in total, because the full circle is not a chain.
If we include singletons and the full circle as chains, then the 1st hint still holds (when does the opposite set not form a chain?). There are $n(n-1) + 1$ chains (where the $+1$ counts the full circle just once), and hence the number of positive chains is $\frac{n(n-1)}{2} + 1$, which agrees with Brian's calculations.
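If it helps to see the claim in action, here is a quick brute-force sanity check in Python (the setup is my own; it counts singletons and the full circle, as in the last paragraph):

    import random

    def positive_chains(a):
        # count the arcs of consecutive entries (singletons included,
        # the full circle counted once) whose sum is positive
        n = len(a)
        count = 1 if sum(a) > 0 else 0          # the full circle
        for start in range(n):                   # proper arcs
            s = 0
            for length in range(1, n):
                s += a[(start + length - 1) % n]
                if s > 0:
                    count += 1
        return count

    n = 12
    a = [random.randint(-5, 5) for _ in range(n - 1)]
    a.append(1 - sum(a))                         # force the total sum to be 1
    print(positive_chains(a), n * (n - 1) // 2 + 1)   # the two numbers agree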
|
H: How do you learn from MITOpenCourseWare 18.06 Linear Algebra Course?
I recently finished my 9th grade and I'm beginning to want to skip ahead and learn Linear Algebra throughout the summer, mainly because I want to know it to program my own games in OpenGL with C++. I already made couple of games before, but now I need to know Linear Algebra to learn OpenGL with ease.
My math skills is quite limited, as it's only up to grade 9 level, which is Algebra 1. I don't know functions and what not, passed linear equations, but I do know a bit of trigonometry. I was hoping someone could guide me to the right path to learning linear algebra?
It'd be nice if I can skip passed through some stuff, but if not, I'll go ahead and learn the stuff required to deal with vectors and matrices.
Basically, what I'm asking is, is that based on what I've just told you (only knowing math up to linear equations), can you guide and lead me to learning linear algebra without any hassles on the way there? So maybe have a list of prerequisites I should probably check out and learn, and if possible, suggest me some good resources I can learn from. I keep googling and the answers bounce back and forth, either you need to know calculus, functions, etc, or you don't. I'd like this to be cleared up, thanks.
EDIT: Sorry for the duplication, but now I don't know how to navigate/learn from the MITOpenCourseWare of 18.06 Gilbert Strang's Linear Algebra course. I looked at Gilbert Strang, Linear Algebra, the course 18.06 at MIT, and it may just be me being stupid, but how do I know when to start reading the book, etc, from the lectures? I mean, I already watched the first lecture, and obviously, there's a book he has that you have to read, but I don't know when to read it. I see the Readings and Calender, but they don't really make any sense.
Thanks.
AI: Well! Khan Academy is a good start, but it will quickly bore you, I suppose...
A local university is your best bet.
An online course via Stanford Online or some other format is also smart (Educere is the cheapest).
Or you can just get your own books online and grind it out yourself.
You only need Algebra I to do linear algebra, but it requires a lot of creativity and determination, as you will likely find a lot of annoying notation and jargon thrown at you... Be patient!
Also, I recommend studying the AoPS book collection as a good break from the usual collegiate grind.
|
H: Can we call domain as inverse image of a function?
I was going through the definition of the inverse image of a function http://www.northeastern.edu/suciu/U565/MATH4565-sp10-handout1.pdf, and I was wondering whether the inverse image of a function is just the domain of the function itself. Please give me some examples.
AI: "Inverse image of a function" doesn't make much sense( unless it is taken as inverse image of the co-domain ); you need to specify some subset of co-domain and then we can talk about "inverse image of set $B\subset$ co-domain w.r.t. function $f$".
Consider a function $f:A\to B$. Let $C\subset A$ and $D\subset B$ such that $f(C)=D$ i.e. $D=$ the set of elements of $B$ to which elements of $C$ are mapped.
Then inverse image of $D$ is precisely the subset of $A$ which is mapped to $D$ which is none other than $C$
Here $f(C)=D$ and $f^{-1}(D)=C$ which might not be the whole $A$, in fact, $f^{-1}(D)=A\iff f(A)=D$
|
H: Proof Error? A line-segment of a circle is a metric.
In O'searcoid, Metric Spaces, he provides the following example of a metric space:
Suppose C is a circle and, for each $a,b ∈ C$, define $d(a,b)$ to be the distance along the line segment from $a$ to $b$. Then $d$ is a metric on $C$.
I have decided to confirm that his example is, indeed, a metric (here might be a good place to admit that I am weak in trigonometry).
According to wikipedia, the length of a line segment (or chord) can be written as:
$$d(a,b) = 2 r \sin{ \dfrac{ \theta }{2} }$$
where $r$ denotes the radius of $C$ and $\theta$ the angle subtended at the centre of $C$ by the points $a$ and $b$.
From here, it can clearly be seen that if $a = b$ then $d = 0$ since $\theta = 0$. Further, since $\theta \ge 0$, it follows that $d \ge 0$. Finally, $d(a,b) = d(b,a)$ since the angle is determined by the relation of these points with the centre of the circle. Thus, all that remains is the triangle inequality.
Take three points in $C$, $a,b,c$. The triangle inequality states that:
$$ d ( a,b ) \le d( a,c ) + d( c,b ) $$
Thus the following must satisfy:
$$ 2 r \sin{ \dfrac{ \theta_{ab} }{2} } \le 2 r \sin{ \dfrac{ \theta_{ac} }{2} } + 2 r \sin{ \dfrac{ \theta_{cb} }{2} }$$
From here, it may be deduced that $\theta_{ab} = \theta_{ac} + \theta_{cb} $ since they essentially just "split" the angle into two parts.
Removing like terms from the triangle inequality yields:
$$ \sin{ \dfrac{ \theta_{ab} }{2} } \le \sin{ \dfrac{ \theta_{ac} }{2} } + \sin{ \dfrac{ \theta_{cb} }{2} }$$
A wolfram calculation demonstrates that an alternative form for an equation in the form $ \sin{ \frac{x}{2} } + \sin{ \frac{y}{2} }$ is $2 \sin{(\frac{x}{4}+\frac{y}{4})} \cos{(\frac{x}{4}-\frac{y}{4})}$. A further wolfram calculation shows that an equation in the form $ \sin{ \frac{x}{2} }$ may be written as $2 \sin{ \frac{x}{4} } \cos{ \frac{x}{4} }$ I can rewrite the triangle inequality as:
$$ \sin{ \dfrac{ \theta_{ab} }{4} } \cos{ \dfrac{ \theta_{ab} } {4} } \le \sin{( \frac{ \theta_{bc} }{4} + \frac{ \theta_{cb}}{4} )} \cos{( \frac{ \theta_{bc} }{4} - \frac{ \theta_{cb}}{4} )} $$
And using the fact that $\theta_{ab} = \theta_{ac} + \theta_{cb} $, this can be written:
$$ \sin{ \dfrac{ \theta_{ab} }{4} } \cos{ \dfrac{ \theta_{ab} } {4} } \le \sin{( \frac{ \theta_{ab} - \theta_{cb} }{4} + \frac{ \theta_{ab} - \theta_{bc}}{4} )} \cos{( \frac{ \theta_{ab} - \theta_{cb} }{4} - \frac{ \theta_{ac} - \theta_{bc}}{4} )} $$
And this simplifies to:
$$ \sin{ \dfrac{ \theta_{ab} }{4} } \cos{ \dfrac{ \theta_{ab} } {4} } \le \sin{ \dfrac{ \theta_{ab} }{4} } \cos{ \dfrac{ - \theta_{ab} } {4} } $$
And since $\cos(-x) = \cos{x}$, we end up with:
$$ \sin{ \dfrac{ \theta_{ab} }{4} } \cos{ \dfrac{ \theta_{ab} } {4} } \le \sin{ \dfrac{ \theta_{ab} }{4} } \cos{ \dfrac{ \theta_{ab} } {4} } $$
From what I can see, this also demonstrates that:
$$ \sin{ \frac{x}{2} } = \sin{ \frac{y}{2} } + \sin{ \frac{z}{2} } $$
where $x = y + z$ (just the original triangle inequality).
So based upon this I decided to use some test values. However, I am finding that the triangle inequality (equality) is not holding for the calculations I put in. Thus, there must be an error in the proof somewhere.
Thus my question is simply: What am I doing wrong?
AI: The first thing that you’re doing wrong is making the whole business way too complicated! These distances are identical to the ordinary straight-line Euclidean distances between the points. If you accept that the usual metric in the plane is a metric, then this $d$ must be a metric as well: it’s the same metric, just restricted to the circle $C$.
Alternatively, you can think of it in an even more elementary way. The points $a,b$, and $c$ are the vertices of a triangle $\triangle abc$. (The triangle can be degenerate if two or all three points are the same, but you can deal with those cases separately if you wish.) The distances given by the function $d$ are just the ordinary lengths of the sides of that triangle, so of course they satisfy the triangle inequality.
That said, let’s take a look at how you might verify your (correct) inequality
$$2r\sin\frac{\theta_{ab}}2\le 2r\sin\frac{\theta_{ac}}2+2r\sin\frac{\theta_{cb}}2\;.$$
First a small terminological correction: when you replace it by
$$\sin\frac{\theta_{ab}}2\le\sin\frac{\theta_{ac}}2+\sin\frac{\theta_{cb}}2\;,$$
you’re not ‘[r]emoving like terms’: you’re cancelling (or dividing out) a common factor (of $2r$). The easiest way to proceed is to write $\theta_{ab}=\theta_{ac}+\theta_{cb}$ and use the sum formula for the sine:
$$\sin\frac{\theta_{ab}}2=\sin\left(\frac{\theta_{ac}}2+\frac{\theta_{cb}}2\right)=\sin\frac{\theta_{ac}}2\cos\frac{\theta_{cb}}2+\sin\frac{\theta_{cb}}2\cos\frac{\theta_{ac}}2\;.$$
Now all cosines are between $-1$ and $1$, so
$$\cos\frac{\theta_{cb}}2\le 1\quad\text{and}\quad\cos\frac{\theta_{ac}}2\le 1\;,$$
and it immediately follows that
$$\sin\frac{\theta_{ac}}2\cos\frac{\theta_{cb}}2+\sin\frac{\theta_{cb}}2\cos\frac{\theta_{ac}}2\le\sin\frac{\theta_{ac}}2\cdot 1+\sin\frac{\theta_{cb}}2\cdot 1=\sin\frac{\theta_{ac}}2+\sin\frac{\theta_{cb}}2\;,$$
which is exactly what you want.
|
H: On changing from '<' to '$\le$' when taking limits (with norm $|\bullet|_p$)
I'm reading Gouvêa's book on $p$-adic numbers, and there's one problem that I don't think I really get. Here's a proposition, and the problem attached to it. It's on pages 57–58 of the book.
Proposition 3.2.12
The image of $\mathbb{Q}$ under the inclusion $\mathbb{Q} \rightarrowtail \mathbb{Q}_p$ is a dense subset of $\mathbb{Q}_p$.
Proof
Take any element $\lambda \in \mathbb{Q}_p$, a Cauchy presentation $\{x_n \}$ of it, and any $\epsilon > 0$. We'll prove that there's a constant sequence that belongs to $B(\lambda, \epsilon)$. We now choose an $\epsilon'$ which is slightly less than $\epsilon$.
Since $\{ x_n \}$ is a Cauchy sequence, then there exists $N>0$, such that for all $n, m \ge N$, we'll have: $|x_m - x_n| < \epsilon'$. We claim that the constant sequence $\{ x_N \}$ will indeed satisfy the requirement.
For any $n \ge N$, we'll have $|x_n - x_N| < \epsilon'$.
Taking the limit as $n \to \infty$ yields $\lim |x_n - x_N| \color{red}{\le} \epsilon' < \epsilon$.
Which means that $\{ x_N \} \in B (\lambda , \epsilon)$.
Problem 87.
Why does < become $\le$ in the limit? Do we really need the business of decreasing $\epsilon$ slightly to $\epsilon'$?
Ok, honestly, I think that we don't (but the answer states that we do).
While it's true that I can find examples when the sign does change in taking limits, say: $x_n = p^n$, then of course $|x_n|_p = |p^n|_p = \dfrac{1}{p^n}>0, \forall n$, but $\lim |x_n|_p = 0$. I think I can do this because 0 is in fact, a limit point of the set $\{p^z | z\in \mathbb{Z} \}$.
However, I don't think there's any example of a Cauchy sequence, such that $|x_n|_p < \epsilon, \forall n$, but when we take the limit, it'll end up with $\lim |x_n|_p = \epsilon$.
Are there such sequences? Am I missing something here? Or is the book wrong?
Thank you guys very much,
And have a good day, :*
AI: You are correct and the book is wrong ... if the exercise was talking specifically about ${\bf Q}_p$.
In the reals, the reason we must replace $<$ with $\le$ in the limit is that the image of the archimedean absolute value is $[0,\infty)$, which has right-accumulation points (points that are accumulated to from the left, by strictly smaller values; a term I just made up on the spot). With any $\epsilon>0$ it is possible to obtain a sequence $(d_n)_{n=1}^\infty$ such that $\lim|d_n|=\epsilon$ but $|d_n|<\epsilon$ for all $n$.
In ${\bf Q}_p$ the image of the $p$-adic absolute value is $\{0\}\cup\{p^n:n\in{\bf Z}\}$; there are no right-accumulation points in this set so if $\lim\limits_{n\to\infty}|d_n|_p=\epsilon$ then either $\epsilon=0$ or $|d_n|_p$ is eventually the constant $\epsilon$; in the former case the strict inequality $|d_n|<\epsilon$ is impossible, so that case can be ruled out, and the other case implies that $|d_n|=\epsilon$ eventually, contradicting strict inequality.
In any extension (of valued fields) of ${\bf Q}_p$ with infinite ramification (by which I mean the image of the valuation / absolute value has nontrivial accumulation points), most prominently ${\bf C}_p$ (the $p$-adic metric completion of the algebraic closure of ${\bf Q}_p$), it actually is possible to obtain sequences $d_n$ such that $|d_n|_p\to\epsilon$ but $|d_n|_p<\epsilon$ for all $n$. For example $d_n=p^{1/n}$ and $\epsilon=1$.
Note that the question is just asking about the implication $|a_n|_p<c\implies \lim\limits_{n\to\infty}|a_n|_p<c$ in general for sequences; not specifically the sequences it gives for its purposes involving Cauchy sequences.
|
H: Rigorous way to show image of a set under rotation
Take $R:\mathbb{C}\rightarrow\mathbb{C}$ where
$$R(z)=ze^{i\frac{\pi}{4}}.$$
Find $R(A)$ where
$$A=\{ re^{i\theta}:r\in [0,2], 0\leq\theta\leq\pi\}.$$
I've run into a sort of problem (it seems) with the way that this set is defined. I haven't done a lot of complex analysis. It looks to me like, say,
$$e^{i\pi} \in A \text{ but } e^{i3\pi} \notin A$$
as $3\pi \notin [0,\pi]$ but $e^{i3\pi}=e^{i\pi}$ so $e^{i\pi} \in A$ and $e^{i\pi} \notin A$.
To work through this problem mathematically (I made up the problem as an exercise), I defined an equivalence relation by saying $a \sim b$, for $a,b \in \mathbb{R}$, if $a-b=2\pi n$ for some $n \in \mathbb{Z}$. Then I defined the set as
$$A'=\{ re^{i\theta}:r\in [0,2], \theta \sim \alpha, 0\leq \alpha \leq \pi \}.$$
Is there a nice way to do this problem with some kind of mathematical rigor? Nicer than this at least! I know the answer would just be
$$R(A')=\{ re^{i\theta}:r\in [0,2], \theta \sim \alpha, \frac{\pi}{4}\leq \alpha \leq \frac{5\pi}{4} \}$$
unless there is another thing I am overlooking.
AI: Edit: When we say that $A=\{re^{i\theta}:r\in[0,2],0\le\theta\le\pi\},$ that is a bit misleading, and indeed, it looks like the set wouldn't even be well-defined! Ultimately, we are treating $A$, itself, as a function image. In particular, define $f:\Bbb R^2\to\Bbb C$ by $f(r,\theta)=re^{i\theta}.$ Then letting $B=[0,2]\times[0,\pi],$ all we're saying is that $A=f(B).$ It's certainly true that $f(1,3\pi)\in A,$ even though $\langle 1,3\pi\rangle\notin B,$ but that's nothing to worry about.
If we were being more careful, we would say $$A=\left\{z\in\Bbb C:\exists r\in[0,2],\theta\in[0,\pi]\text{ with }z=re^{i\theta}\right\}.$$
Take any $z\in A$. This means that $z=re^{i\theta}$ where $0\le r\le2$ and $0\le \theta\le\pi.$ Then $$R(z)=ze^{i\frac{\pi}4}=re^{i\theta}e^{\frac{i\pi}4}=re^{i\left(\theta+\frac{\pi}4\right)},$$ and so $|R(z)|=r\in [0,2]$ and the principal argument of $R(z)$ is in the interval $[\frac\pi4,\frac{5\pi}4].$ The reverse inclusion holds as well, so that $R(A)$ is precisely the set $R(A')$ (as you defined it). In our "more careful" notation (doing away with the equivalence relation), we have $$R(A)=\left\{z\in\Bbb C:\exists r\in[0,2],\theta\in\left[\frac\pi4,\frac{5\pi}4\right]\text{ with }z=re^{i\theta}\right\}.$$
|
H: Probability of getting 'k' heads with 'n' coins
This is an interview question.( http://www.geeksforgeeks.org/directi-interview-set-1/)
Given $n$ biased coins, with each coin giving heads with probability $P_i$, find the probability that on tossing the $n$ coins you will obtain exactly $k$ heads. You have to write the formula for this (i.e. the expression that would give $P (n, k)$).
I can write a recurrence program for this, but how to write the general expression ?
AI: Consider the function
$[ (1-P_1) + P_1x] \times [(1-P_2) + P_2 x ] \ldots [(1-P_n) + P_n x ]$
Then, the coefficient of $x^k$ corresponds to the probability that there are exactly $k$ heads.
The coefficient of $x^k$ in this polynomial is $\displaystyle\sum_{S:\,|S|=k}\;\prod_{i\in S} P_i \prod_{j \notin S} (1-P_j)$, the sum running over all $k$-subsets $S$ of $\{1,\dots,n\}$.
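In code, the product can be expanded one factor at a time, which is the usual dynamic-programming way to obtain all the coefficients at once. A minimal Python sketch (the function name and interface are my own choices):

    def prob_k_heads(p, k):
        # P(exactly k heads) for independent coins with head
        # probabilities p[0..n-1], via the generating-function product
        coeff = [1.0]                      # the polynomial, initially 1
        for pi in p:
            nxt = [0.0] * (len(coeff) + 1)
            for j, c in enumerate(coeff):  # multiply by (1 - pi) + pi*x
                nxt[j] += c * (1 - pi)
                nxt[j + 1] += c * pi
            coeff = nxt
        return coeff[k]

    print(prob_k_heads([0.5, 0.5, 0.5], 2))  # 0.375 = 3/8 for fair coins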
|
H: When does $\sum_{n=3}^\infty n^{-1} (\log \log n)^{-r}$ converge
For $r>0$, when does $\sum_{n=3}^\infty n^{-1} (\log \log n)^{-r}$ converge? My guess is $r>1$ (treating $\log \log$ as $\log$). Please give me some hints!
AI: Never. You probably know that $\sum \frac{1}{n\log n}$ diverges. If you have not done it yet, it can be done using the Integral Test.
We need only consider positive $r$ (why?).
For positive $r$, note that in the long run $(\log\log n)^r \lt \log n$ (why?). So by Comparison with $\sum \frac{1}{n\log n}$ our series diverges.
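For reference, the Integral Test computation and the comparison can be spelled out as follows (my own write-up):
$$\int_3^\infty\frac{dx}{x\log x}=\Big[\log\log x\Big]_3^\infty=\infty\;,$$
so $\sum\frac1{n\log n}$ diverges; and for any fixed $r>0$ we eventually have $(\log\log n)^r<\log n$ (taking logarithms, this says $r\log\log\log n<\log\log n$, which holds for large $n$), hence
$$\frac{1}{n(\log\log n)^r}>\frac{1}{n\log n}$$
for all sufficiently large $n$.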
|
H: Showing path connected matrices of a group $G$ is a normal subgroup
Let $G$ be a subgroup of $GL_n(\Bbb{R})$. Define $$H = \biggl\{ A \in G \ \biggl| \ \exists \ \varphi:[0,1] \to G \ \text{continuous such that} \ \varphi(0)=A , \ \varphi(1)=I\biggr\}$$ Show that $H$ is a normal subgroup of $G$
In this post, the answers are either too advanced (manifolds) or are considered overkill. I was trying to follow the user jug's hints, but I can't understand how to make use of $\phi_A(x) = AxA^{-1}$. I believe I have the same confusion as the OP -- I want to show that for all $A \in G$ and for all $M \in H$, $AMA^{-1} \in H$ -- but I'm struggling very much connecting the idea of a normal subgroup with a continuous function (or path).
Could someone please expand on the hints provided in the linked post?
AI: Let $H$ be the path-connected component of $I$ in $G$.
First, let us notice that $H$ is not empty.
Suppose that $A$ and $B$ are in $H$, so that there are maps $\sigma:I\to G$ and $\tau:I\to G$ such that $\sigma(0)=I$, $\sigma(1)=A$, $\tau(0)=I$ and $\tau(1)=B$. Consider the map $$\lambda:t\in I\longmapsto \sigma(t)\tau(t)^{-1}\in G.$$ You can show that
the map $\lambda$ is continuous, and that
$\lambda(0)=I$ and $\lambda(1)=AB^{-1}$.
It follows from this that $AB^{-1}$ is also in $H$.
Suppose $A$ is in the path-connected component $H$ of $I$, so that there is a path $\sigma:I\to G$ such that $\sigma(0)=I$ and $\sigma(1)=A$.
Let now $B\in G$ and consider the map $$\tau:t\in[0,1]\longmapsto B\sigma(t)B^{-1}\in G.$$ You should check that
it is continuous,
its value at $0$ is $I$, and its value at $1$ is $BAB^{-1}$.
It follows from all this that $BAB^{-1}$ is also an element of $H$.
Now conclude that $H$ is a normal subgroup.
|
H: orthogonal basis for the complement
Consider $\mathbb R^3$ with the standard inner product. Let $W$ be the subspace of $\mathbb R^3$ spanned by $(1,0, -1)$.
Which of the following is a basis for the orthogonal complement of $W$?
$\{ ( 1, 0, 1), ( 0, 1, 0)\}$
$\{(1,2,1),(0,1,1)\}$
$\{(2,1,2),(4,2,4)\}$
$\{(2,-1,2),(1,3,1),(-1,-1,-1)\}$
Only the first set is orthogonal, so it should be the correct option, but what is the correct method to solve this problem?
AI: As $\dim W=1$, you know $\dim W^\perp = 3-1=2$, so $4$ is wrong.
The vectors must be linearly independent, so $3$ is wrong. Each of the vectors must be orthogonal to each element of $W$ (it suffices to check against a basis of $W$, here only against $(1,0,-1)$), so $2$ is wrong and $1$ is correct: indeed $(1,0,1)\cdot(1,0,-1)=0$ and $(0,1,0)\cdot(1,0,-1)=0$. (It does not matter whether the two vectors in $1$ are orthogonal to each other.)
|
H: $(X, T_1), (Y, T_2)$ be topological spaces such that every function from $X$ to $Y$ is $T_1-T_2$ continuous
Let $(X, T_1), (Y, T_2)$ be topological spaces such that every function from
$X$ to $Y$ is $T_1-T_2$ continuous. Prove that either $T_1$ is the discrete topology or $T_2$ is the indiscrete topology.
How can I do this problem?
AI: HINT: Suppose that $T_1$ is not the discrete topology, and $T_2$ is not the indiscrete topology. This means that there is some point $x\in X$ such that $\{x\}$ is not an open set, and there is some $U\in T_2$ such that $\varnothing\ne U\ne Y$. Consequently, we can choose a point $y_0\in U$ and a point $y_1\in Y\setminus U$. Use $x,y_0$, and $y_1$ to define a function $f:X\to Y$ that is not continuous at $x$.
|
H: Why is square root of 2 less than cube root of 3
$\sqrt{2}=1.41421\ldots$
$\sqrt[3]{3}=1.44224\ldots$
$\sqrt[4]{4}=1.41421\ldots$
$\sqrt[5]{5}=1.37972\ldots$
$\sqrt[6]{6}=1.34800\ldots$
$\sqrt[7]{7}=1.32046\ldots$
$\vdots$
Thereafter $\sqrt[n]{n}$ will keep decreasing and converge towards $1$.
Q.2: Why does $\sqrt{2}$ equal $\sqrt[4]{4}$?
Intuitively $\sqrt{2}$ is the diagonal of a $1\times 1$ square, and $\sqrt{3}$ is the diagonal of a $1\times 1$ parallelogram; explain geometrically?
AI: Let us find extreme values of $f(x)=x^{\frac1x}$ where $x$ is real positive
So, $\ln f(x)=\frac{\ln x}x$
$$\frac{d \ln f(x)}{dx}=\frac{1-\ln x}{x^2}$$
So, for the extreme values of $\ln f(x),1-\ln x=0\iff x=e$
$$\frac{d^2 \ln f(x)}{dx^2}=\frac{-\frac1x\cdot x^2-(1-\ln x)(2x)}{x^4}$$ which is $<0$ at $x=e$
and $$\frac{d \ln f(x)}{dx}>0\text{ or }<0\text{ according as }1-\ln x>0\iff x<e\quad\text{ or }\quad 1-\ln x<0\iff x>e $$
So, $x^{\frac1x}$ is increasing when $x\in(0,e)$ and decreasing when $x\in(e,\infty)$, and we know $2<e<3$.
As $e$ lies between $2,3$ the above derivation can not help us to compare $x^{\frac1x}$ at $x=2,3$ or any two values between which $e$ lies
So we can solve it as follows:
$\sqrt2 <\sqrt[3]3 $ i.e., $2^{\frac12}<3^{\frac13}\iff (2^{\frac12})^6<(3^{\frac13})^6\iff 2^3<3^2$ which is true
We have taken the $6$th power since $\operatorname{lcm}(2,3)=6$. (As for Q.2: $\sqrt[4]{4}=(2^2)^{\frac14}=2^{\frac12}=\sqrt{2}$.)
|
H: Help to compute the following coefficient in Fourier series $\int_{(2n-1)\pi}^{(2n+1)\pi}\left|x-2n\pi\right|\cos(k x)\mathrm dx$
$$\int_{(2n-1)\pi}^{(2n+1)\pi}\left|x-2n\pi\right|\cos(k x)\mathrm dx$$
where $k\geq 0$, $k\in\mathbb{N} $ and $n\in\mathbb{R} $.
it is the $a_k$ coefficient in a Fourier series.
AI: \begin{eqnarray}
a_k&=&\int_{(2n-1)\pi}^{(2n+1)\pi}|x-2n\pi|\cos(kx)\,dx=\int_{-\pi}^{\pi}|x|\cos(kx+2kn\pi)\,dx\\
&=&\int_{-\pi}^\pi|x|\left[\cos(2kn\pi)\cos(kx)-\sin(2kn\pi)\sin(kx)\right]\,dx\\
&=&\int_{-\pi}^\pi|x|\cos(2kn\pi)\cos(kx)\,dx=2\cos(2kn\pi)\int_0^\pi x\cos(kx)\,dx\\
&=&\frac{2\cos(2kn\pi)x\sin(kx)}{k}\Big|_0^\pi-\frac{2\cos(2kn\pi)}{k}\int_0^\pi\sin(kx)\,dx\\
&=&\frac{2\cos(2kn\pi)}{k^2}\cos(kx)\Big|_0^\pi=2\frac{(-1)^k-1}{k^2}\cos(2kn\pi).
\end{eqnarray}
NB: When I first answered the question I did not notice the condition $n \in \mathbb{R}$ and just assumed that $n \in \mathbb{Z}$.
|
H: Fourier cosine and sine transforms of 1
What is the Fourier sine and cosine transform of $f(x)=1$? I have seen some sources refer to the transform of $f=1$ involving the Dirac Delta function, but this goes against the integral definition for the Fourier sine transform, for example, since
$$\int_0^\infty f(x)\sin(x t)d x,$$
diverges when $f=1$ doesn't it?
AI: You can extend the Fourier transform to distributions like the Dirac Delta function. Taking the Fourier transform of $\delta(t)$ gives
$$\int_{-\infty}^{\infty}\delta(t)e^{-i\omega t}\;dt=1$$
because
$$\int_{-\infty}^{\infty}\delta(t)f(t)\;dt=f(0)$$
If you apply the (inverse) Fourier transform to 1 you get
$$\mathcal{F}^{-1}1=\text{p.v.}\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i\omega t}\;d\omega=\frac{1}{\pi}\int_{0}^{\infty}\cos\omega t\;d\omega$$
Of course this is a divergent integral, but if it is used in a convolution integral it does have a meaning and it is useful to define
$$\delta(t)=\text{p.v.}\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i\omega t}\;d\omega=\frac{1}{\pi}\int_{0}^{\infty}\cos\omega t\;d\omega$$
|
H: Qualifying problem for real analysis: limit involving definite integral
The following problem appeared in the January 2013 qualifying exam at Purdue University, which is publicly available here.
Problem 3. Let $\{a_k\}$ be sequence of positive numbers such that
$a_n\to\infty$ as $n\to\infty$. Prove that the following limit exists $$
\lim_{k\to\infty}\int_{0}^{\infty} \frac{e^{-x}\cos(x)}{a_kx^2 +
\frac{1}{a_k}} dx $$ and find it.
I have hardly come across limits of sequences that involve definite integrals (in my undergraduate education so far), so this problem just seems insurmountable at first glance. I would appreciate any hints.
One of the things that comes to mind is to use limit comparison test. For example, we can evaluate integrals such as
$$\int_{0}^{\infty} e^{-x}\cos(x)\,dx=\frac{1}{2}$$
But for that we would have to bound the integrand somehow. One tempting thing is to interchange the integral and the limit, which would tell us that integrand is zero in the limit, but I highly doubt this is allowed here.
Looking forward to hearing your thoughts.
P.S. I am not sure how to make the title informative for this post. Feel free to edit as you see fit.
AI: Substitute $y = a_k \cdot x$ in the integral. You obtain
$$\int\limits_0^\infty \frac{e^{-y/a_k}\cos (y/a_k)}{y^2 + 1}\, dy$$
And now you can apply the dominated convergence theorem, the integrand converges to $\frac{1}{1+y^2}$ pointwise and is dominated by the limit, hence the integrals tend towards
$$\int\limits_0^\infty \frac{dy}{1+y^2} = \frac{\pi}{2}.$$
If the dominated convergence theorem is not available since one is dealing with Riemann integrals, one can also obtain the result from splitting the substituted integral at strategic points.
Fix $z > 1$ arbitrarily. For all $k$ such that $a_k > z^3$, we can split the integral at $z$, write $I(k,z) = \int_0^z \frac{e^{-y/a_k}\cos (y/a_k)}{1+y^2}\,dy$ and $II(k,z) = \int_z^\infty \frac{e^{-y/a_k}\cos (y/a_k)}{1+y^2}\,dy$, and can estimate
$$\lvert II(k,z)\rvert \leqslant \int\limits_z^\infty \bigg\lvert \frac{e^{-y/a_k}\cos (y/a_k)}{1+y^2}\bigg\rvert\,dy \leqslant e^{-z/a_k}\int\limits_z^\infty \frac{dy}{1+y^2} < \frac{\pi}{2} - \arctan z$$
for the second part, and
$$\lvert\arctan z - I(k,z)\rvert = \Biggl\lvert\int\limits_0^z \frac{1 - e^{-y/a_k}\cos (y/a_k)}{1+y^2}\,dy \Biggr\rvert \leqslant \int\limits_0^z \frac{\lvert 1 - e^{-y/a_k}\cos (y/a_k)\rvert}{1+y^2}\,dy$$
for the first part.
Now, we assumed that $a_k > z^3$, so $0 \leqslant y/a_k \leqslant z/a_k < 1/z^2$ for $I(k,z)$, so $\cos (y/a_k) \geqslant 1 - \frac{1}{2z^4}$ and $e^{-y/a_k} \geqslant e^{-1/z^2} > 1 - \frac{1}{z^2}$. We can hence estimate the numerator
$$\lvert 1 - e^{-y/a_k}\cos (y/a_k) \rvert \leqslant 1 - \biggl(1 - \frac{1}{z^2}\biggr)\biggl(1 - \frac{1}{2z^4}\biggr) = \frac{1}{z^2} + \frac{1}{2z^4} - \frac{1}{2z^6} < \frac{2}{z^2}.$$
Thus we obtain $\lvert\arctan z - I(k,z)\rvert \leqslant z\cdot\frac{2}{z^2} = \frac{2}{z}$.
Altogether
$$\biggl\lvert\frac{\pi}{2} - \bigl(I(k,z) + II(k,z)\bigr) \biggr\rvert \leqslant \biggl(\frac{\pi}{2} - \arctan z\biggr) + \lvert \arctan z - I(k,z)\rvert + \lvert II(k,z)\rvert \leqslant \frac{2}{z} + \pi - 2\arctan z.$$
The last quantity obviously tends towards $0$ for $z \to \infty$.
|
H: Convergence in distribution of the log-Gamma distribution
Suppose $X$ has density $f(x)=\exp(kx-e^x)/\Gamma(k)$, $x\in\mathbb{R}$, for some parameter $k>0$. Then the moment-generating function of $X$ has the form
$$
M_X(\theta)=\frac{\Gamma(\theta+k)}{\Gamma(k)}.
$$
I want to show that
$$
\lim_{k\to\infty}M_{X^*}(\theta)=\exp(\theta^2/2),
$$
where $X^*=\sqrt{k}(X-\log k)$ using Stirling's formula:
$$
\log \Gamma(k)=-k+\left(k-\tfrac12\right)\log k+\log\sqrt{2\pi}+O(k^{-1}).
$$
Now
$$
M_{X^*}(\theta)={\rm E}[\exp(\theta\sqrt{k}(X-\log k))]=\exp(-\theta\sqrt{k}\log k)M_X(\theta\sqrt{k})
$$
and hence
$$
\log M_{X^*}(\theta)=-\theta\sqrt{k}\log k+\log \Gamma(\theta\sqrt{k}+k)-\log \Gamma(k).
$$
Using Stirling's formula, I obtain
$$
\begin{align}
\log M_{X^*}(\theta)=&-\theta\sqrt{k}\log k-(\theta\sqrt{k}+k)+(\theta\sqrt{k}+k-\tfrac12)\log(\theta\sqrt{k}+k)+\log\sqrt{2\pi}+O(k^{-1})\\
&+k-\left(k-\tfrac12\right)\log k-\log\sqrt{2\pi}-O(k^{-1})\\
\end{align}
$$
but nothing really seems to cancel out. How do I proceed from here? Thanks.
AI: Use the decomposition $\log(\theta\sqrt{k}+k)=\log(k)+\log(1+\theta/\sqrt{k})$ and the expansion $\log(1+\theta/\sqrt{k})=\theta/\sqrt{k}-\theta^2/(2k)+o(1/k)$ then watch the simplifications happening.
As a way of confirmation, one can note that the density $f_k$ of $X^*$ is such that $f_k(x)=c_k\mathrm e^{-u_k(x)}$ with
$$
c_k=\frac{\mathrm e^{-k}k^{k-1/2}}{\Gamma(k)},\qquad u_k(x)=k(\mathrm e^{x/\sqrt{k}}-1)-\sqrt{k}x,
$$
and that $c_k\to\frac1{\sqrt{2\pi}}$ and $u_k(x)\to\frac12x^2$ when $k\to\infty$.
|
H: accumulation point of a subset and accumulation point of an indexed family of subsets of space $X$.
Let $\xi=\{V_n:n\in\omega\}$ be a sequence of open subsets of space $X$. For every $n\in\omega$, choose $x_n\in V_n$.
If $p\in X$ is an accumulation point of $\{x_n:n\in\omega\}$, then is it true that $p$ is an accumulation point of the family $\xi$, that is, that $\xi$ is not locally finite at the point $p$?
AI: HINT: Let $U$ be an open neighborhood of $p$. let $M_U=\{n\in\omega:x_n\in U\}$.
Is $M_U$ finite, or infinite?
If $n\in M_U$, what can you say about $U\cap V_n$?
Added: If you don’t assume that $X$ is $T_1$, $\xi$ need not be locally finite at $p$. Let $X=\omega$, and let $\tau=\{U\subseteq\omega:0\notin U\text{ or }\{0,1\}\subseteq U\}$; $\tau$ is a $T_0$ topology on $X$. For $n\in\omega\setminus\{0\}$ let $U_n=\{n\}\in\tau$, and let $\xi=\{U_n:n\in\omega\setminus\{0\}\}$. Clearly we must pick $x_n=n$ for each $n\in\omega\setminus\{0\}$, and $p=0$ is an accumulation point of $\{x_n:n\in\omega\setminus\{0\}\}=\omega\setminus\{0\}$: every open nbhd of $p$ contains $1=x_1$. But $\{0,1\}$ is an open nbhd of $p$ that meets only one member of $\xi$.
|
H: Determine a basis for the Lie-Algebra $\text{sp}(\text{2n},\mathbb{C})$
Consider the Lie Group $\text{Sp}(2n,\mathbb{C})=\{g\in\text{Mat}_{2n}\mid\ J=g^TJg\}$ where $J=\begin{pmatrix}
0 & 1_n \\
-1_n & 0
\end{pmatrix}
$.
The corresponding Lie Algebra is $\text{sp}(\text{2n},\mathbb{C})=\{g\in\text{Mat}_{2n}\mid\ g^TJ+Jg=0\}$.
How do I determine a basis for the Lie-Algebra $\text{sp}(\text{2n},\mathbb{C})$?
Thanks in advance!
AI: First of all, write down the general matrix $S=\begin{pmatrix}
A & B \\
C & D
\end{pmatrix}$ then, look at the equation $S^TJ+JS=0$, ie
$$\begin{pmatrix}
A & B \\
C & D
\end{pmatrix}\begin{pmatrix}
0 & 1_n \\
-1_n & 0
\end{pmatrix}+\begin{pmatrix}
0 & 1_n \\
-1_n & 0
\end{pmatrix}\begin{pmatrix}
A & B \\
C & D
\end{pmatrix}=0$$
Solving it blockwise would then give you $S=\begin{pmatrix}
A & B \\
C & -A^T
\end{pmatrix}$, where $B$ and $C$ are symmetric, ie $B=B^T,C=C^T$. So your basis is a free choice on $A$, symmetric $B$ and $C$.
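To make the basis explicit, write $E_{ij}$ for the $n\times n$ matrix with a $1$ in position $(i,j)$ and $0$ elsewhere. One standard choice is
$$\begin{pmatrix}E_{ij}&0\\0&-E_{ji}\end{pmatrix}\ (1\le i,j\le n),\qquad\begin{pmatrix}0&E_{ij}+E_{ji}\\0&0\end{pmatrix},\ \begin{pmatrix}0&0\\E_{ij}+E_{ji}&0\end{pmatrix}\ (1\le i\le j\le n),$$
which gives $\dim_{\mathbb{C}}\text{sp}(2n,\mathbb{C})=n^2+2\cdot\tfrac{n(n+1)}{2}=n(2n+1)$.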
By the way, a great book on Lie Algebras is Erdmann-Wildon's Lie Algebras.
|
H: Markov property of a random process (a solution of piece-wise deterministic equations)
Consider a piece-wise deterministic (Markov!) process
\begin{eqnarray}
\dot{x}(t) & = & A_{\theta(t,x(t))}x(t)\\
x(0) & = & x_0 \in \mathbb{R}^n \notag
\end{eqnarray}
where $\theta(t,x(t))\in S ={1,2,\cdots,N}$ is continuous time Markov chain(!) whose intensity is $\lambda_{ij}$ when $x(t)\in C_1$ and $\mu_{ij}$ when $x(t)\in C_2$. Here $C_1,C_2 \subseteq \mathbb{R}^n$, $C_{1}\cup C_{2}=\mathbb{R}^{^{n}}$ and $C_{1}\cap C_{2}=\phi$,. It is some kind of piece-wise deterministic process, where $x(t)$ is random because of the randomness of $\theta(t,x(t))$. The transition rate of $\theta(t,x(t))$ is $\lambda_{ij}$ or $\mu_{ij}$ depending on the set to which $x(t)$ belongs to. Let $x(0)\in C_1$ and define the first exit times (which can be proved as stopping times) $\tau_1,\tau_2,\cdots$ as
$$
\tau_{1}=\mathrm{inf}\{t\ge 0 :\,x(t)\notin C_{1}\},
$$
$$
\tau_{2}=\mathrm{inf}\{t \ge \tau_{1}:\,x(t)\notin C_{2}\}
$$
and so on. If $x(t)$ is an $\mathcal{F}_t$-adapted process, is $\theta(t,x(t))$ also an $\mathcal{F}_t$-adapted process? Within each of the stochastic intervals $[0,\tau_1)$, $[\tau_1,\tau_2)$ and so on, the process $\theta(t,x(t))$ intuitively seems to be a Markov process. Is there a way to prove this?
AI: Ok, if I understood your equations correctly, you have a switching linear system $\dot x = A_\theta x$ where $\theta\in[1;N]$ is a continuous Markov chain whose intensity matrix depends on whether $x\in C_1$ or $x\in C_2$. Under some regularity conditions on $C_1$ and $C_2$ this process is indeed Markovian and falls into the class of PDPs: piecewise-deterministic Markov processes.
The easiest way to show that $(x,\theta)$ is Markov is to embed it within the general framework of PDPs. According to the linked article, M. Davis was the first to use this term - at least, he has a very nice book on the topic. Anyways, all necessary definitions and conditions for Markovianity can be found in his apparently first paper on PDP.
Now, w.r.t. $\theta$: $\theta_t := \theta(t,x(t))$ is not a Markov process by itself as its distribution clearly depends on the value of $x(t)$.
|
H: Convex analysis problem
I have the following problem.
Let $f:[a,b]\to \mathbb{R}$ be continuous and convex. I have to prove that there exists $c\in (a,b)$ such that $$\frac{f(a)-f(b)}{b-a}\in \partial f(c)$$
Firstly, I'm in doubt about $\frac{f(a)-f(b)}{b-a}$ (I'm not sure this is correct; maybe it should be $\frac{f(b)-f(a)}{b-a}$).
Secondly, I try to prove this problem by using the following proposition $$s\in \partial f(x_0) \Leftrightarrow \forall x\in \mathbb{R}, f(x)\ge f(x_0)+s(x-x_0)$$
That means for this problem, I need to find $c\in (a,b)$ such that
$$f(x)\ge f(c)+\frac{f(b)-f(a)}{b-a}(x-c)$$
or
$$f(x)\ge f(c)+\frac{f(a)-f(b)}{b-a}(x-c)$$
And then, I try to apply the following inequality to this one but cannot.
$$f(x)\le \frac{b-x}{b-a}f(a)+\frac{x-a}{b-a}f(b)$$
So anybody can help me?
AI: What if we mimic the standard proof for the mean value theorem?
Let $g:[a,b] \to \mathbb R$ such that
\begin{equation}
g(t) = f(t) - M(t - a)
\end{equation}
where $M$ is chosen so that $g(b) = f(a)$.
Note that $g$ is convex and continuous and $g(a) = g(b)$.
Hence $g$ has a minimizer $c \in (a,b)$.
It follows that
\begin{equation}
0 \in \partial g(c) = \partial f(c) - M
\end{equation}
or in other words
\begin{equation}
M \in \partial f(c).
\end{equation}
Now $g(b) = f(a)$ gives us
\begin{align}
&f(b) = f(a) + M(b-a) \\
\implies &\frac{f(b) - f(a)}{b-a} = M \in \partial f(c)
\end{align}
which is what we wanted. (Note that this also settles your first doubt: the correct slope is $\frac{f(b)-f(a)}{b-a}$.)
|
H: Radius of in-circle as a function of the center
I am trying to find the radius of an in-circle in a random triangle as a function of the center of the circle. Let $(x,y)\in\mathbb{R}^2$ be the center of a circle and $r$ the radius; then I need an expression of the form $r(x,y)$.
The circle does not have to touch all three sides of the triangle, but it has to touch at least one.
I find it difficult to get started, so all help will be appreciated.
AI: The $r$ you want is the minimum of the distances to the 3 lines defining the triangle.
The distance of a point $p := (x,y)$ to a line $L := a x + b y + c = 0$ is given by:
$$d(p,L) = \frac{\left| a x + by + c\right|}{\sqrt{a^2+b^2}}$$
If your line $L$ is defined by two points $(x_1,y_1)$, $(x_2,y_2)$ on it, then points
$(\tilde{x},\tilde{y})$ on $L$ satsify:
$$\det\begin{bmatrix}\tilde{x} & \tilde{y} & 1\\x_1 & y_1 & 1\\x_2 & y_2 & 1\end{bmatrix} = 0
\quad\implies\quad
d(p,L) = \frac{\left|\det\begin{bmatrix}x & y & 1\\x_1 & y_1 & 1\\x_2 & y_2 & 1\end{bmatrix}\right|}{\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}}
$$
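Since this is for a program, here is a minimal Python sketch of the formula (the function names and the example triangle are my own choices):

    import math

    def dist_point_line(px, py, x1, y1, x2, y2):
        # distance from (px, py) to the line through (x1, y1), (x2, y2),
        # via the determinant formula above
        det = px * (y1 - y2) - py * (x1 - x2) + (x1 * y2 - x2 * y1)
        return abs(det) / math.hypot(x2 - x1, y2 - y1)

    def r(px, py, tri):
        # radius of the largest circle centred at (px, py) inside the
        # triangle: the minimum of the distances to the three sides
        return min(dist_point_line(px, py, *tri[i], *tri[(i + 1) % 3])
                   for i in range(3))

    print(r(0.25, 0.25, [(0, 0), (1, 0), (0, 1)]))  # prints 0.25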
|
H: How many triangles in picture
How many triangles in this picture:
I know that I can just count the triangles to solve this specific problem. I would be interested to know if there is a systematic approach to doing this that can be generalized to larger diagrams of the same type.
AI: They can be counted quite easily by systematic brute force.
All of the triangles are isosceles right triangles; I’ll call the vertex opposite the hypotenuse the peak of the triangle. There are two kinds of triangles:
triangles whose hypotenuse lies along one side of the square;
triangles whose legs both lie along sides of the square and whose peaks are at the corners of the square.
The triangles of the second type are easy to count: each corner is the peak of $4$ triangles, so there are $4\cdot4=16$ such triangles.
The triangles of the first type are almost as easy to count. I'll count those whose hypotenuses lie along the bottom edge of the square and then multiply that by $4$. Such a triangle must have a hypotenuse of length $1,2,3$, or $4$. There are $4$ with hypotenuse of length $1$, $3$ with hypotenuse of length $2$, $2$ with hypotenuse of length $3$, and one with hypotenuse of length $4$, for a total of $10$ triangles whose hypotenuses lie along the base of the square. Multiply by $4$ to account for all $4$ sides, and you get $40$ triangles of the first type and $40+16=56$ triangles altogether.
Added: This approach generalizes quite nicely to larger squares: the corresponding diagram with a square of side $n$ will have $4n$ triangles of the second type and $$4\sum_{k=1}^nk=4\cdot\frac12n(n+1)=2n(n+1)$$
of the first type, for a total of $2n^2+6n=2n(n+3)$ triangles.
|
H: Proofs that: $\text{Sp}(2n,\mathbb{C})$ is Lie Group and $\text{sp}(2n,\mathbb{C})$ is Lie Algebra
Consider following Lie Group:
$$
\text{Sp}(2n,\mathbb{C})=\{g\in\text{Mat}_{2n}(\mathbb{C})\mid J=g^TJg\}\quad
\text{where}\quad J=\begin{pmatrix}
0 & 1_n \\
-1_n & 0
\end{pmatrix}
$$
And the corresponding Lie Algebra:
$$
\text{sp}(2n,\mathbb{C})=\{g\in\text{Mat}_{2n}(\mathbb{C})\mid g^TJ+Jg=0\}
$$
Are there any basic proofs that $\text{Sp}(2n,\mathbb{C})$ is a Lie Group and that $\text{sp}(2n,\mathbb{C})$ is the corresponding Lie Algebra without using submersions (seen here: Why is $Sp(2m)$ as regular set of $f(A)=A^tJA-J$, and, hence a Lie group.)? Can I prove the first statement by showing that $\text{Sp}(2n,\mathbb{C})$ is a subgroup of $\text{GL}(2n,\mathbb{C})$?
Thanks!
AI: On the Lie algebra part. By definition, $sp(2n,\mathbb C)$ is the real vector space of complex matrices
$$sp(2n,\mathbb C)=\{ A\in Mat_{2n}(\mathbb C): \exp(tA)\in Sp(2n,\mathbb C), \forall t\in \mathbb R \}.$$
$\exp(tA)$ denotes the exponential of the matrix $tA$. By definition of
$Sp(2n,\mathbb C), $ we have that
$$J=\exp(tA)^{t}J\exp(tA)=\exp(tA^{t})J\exp(tA), (*)$$
where the second equality can be proved using the definition of the exponential of a matrix.
Using ($\exp(0)=1$)
$$\frac{d\exp(tA)}{dt}|_{t=0}:=\lim_{t\rightarrow 0}\frac{\exp(tA)-1}{t}=
\lim_{t\rightarrow 0}\frac{(1+tA+\frac{1}{2!}t^2A^2+\dots)-1}{t}=A$$
we arrive at
$$0=\frac{d}{dt}\Big(J-\exp(tA)^{t}J\exp(tA)\Big)\Big|_{t=0}=-A^{t}J-JA\qquad(\text{using } (*)),$$
i.e. $A^{t}J+JA=0$, as $J$ is independent of $t$.
|
H: Find coordinates of intersection between two circles, where one circle is centered on the other
I'm writing a program where an object needs to move from point A to point B.
A and B are points on the same circle. Point B corresponds to the intersection between the circle and another circle centered on A.
The coordinates of A, and O are known. A can be anywhere on the circle.
The radii of both circles are decided by the user.
From this, I need to calculate the coordinates of B.
The important part is that the distance between A and B is exactly as specified.
Here is a graphical representation:
Update:
I removed my incorrect implementation of the formula to avoid confusing future readers. The solution by Daniel Fischer is correct.
AI: Let $r$ be the radius of the circle with centre $A$, and $R$ the radius of the circle with centre $O$. Let $\alpha$ be the angle $BOA$.
Then $r = 2R\sin (\alpha/2)$, or $\alpha = 2\arcsin \frac{r}{2R}$, and you can obtain the coordinates of $B$ by rotating $A$ around $O$,
$$\begin{pmatrix}B_x\\B_y \end{pmatrix} = \begin{pmatrix}\cos \alpha & -\sin \alpha\\ \sin \alpha & \cos \alpha \end{pmatrix} \cdot \begin{pmatrix}A_x - O_x\\A_y - O_y \end{pmatrix} + \begin{pmatrix}O_x \\O_y \end{pmatrix}.$$
(I have rotated clockwise, according to the picture.)
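Since this is for a program, a small Python sketch of the formula (the names are my own; the rotation direction may need flipping depending on your coordinate conventions):

    import math

    def point_b(ax, ay, ox, oy, r):
        # rotate A about O by alpha = 2*asin(r / (2R)), where R = |OA|,
        # so that the chord |AB| equals r (requires r <= 2R)
        dx, dy = ax - ox, ay - oy
        R = math.hypot(dx, dy)
        alpha = 2.0 * math.asin(r / (2.0 * R))
        c, s = math.cos(alpha), math.sin(alpha)
        return (c * dx - s * dy + ox, s * dx + c * dy + oy)

    bx, by = point_b(5.0, 0.0, 0.0, 0.0, 1.0)
    print(math.hypot(bx - 5.0, by - 0.0))  # ~1.0, as required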
|
H: Prove a Trigonometric Series
Question:
$$\cot^{2}\frac{\pi }{2m+1}+\cot^{2}\frac{2\pi }{2m+1}+\cdots+\cot^{2}\frac{m\pi }{2m+1}=\frac{m(2m-1)}{3}$$
$m$ is a positive integer.
Attempt:
I started by showing that
$$\sin(2m+1)\theta =\binom{2m+1}{1}\cos^{2m}\theta \sin\theta -\binom{2m+1}{3}\cos^{2m-2}\theta \sin^{3}\theta +\cdots+(-1)^{m}\sin^{2m+1}\theta$$
By expanding $\left(\text{cis }\theta\right)^{2m+1}$ and then equating the imaginary part.
AI: Divide this equality by $\sin^{2m+1}\theta$ to get
$$
\frac{\sin(2m+1)\theta}{\sin^{2m+1}\theta}=\sum\limits_{k=0}^m (-1)^k{2m+1 \choose 2k+1}\cot^{2(m-k)} \theta\tag{1}
$$
Denote
$$
P_m(x)=\sum\limits_{k=0}^m (-1)^k{2m+1 \choose 2k+1} x^{m-k}
$$
This is a polynomial of degree $m$. From $(1)$ we see that $P_m$ has the $m$ roots $x_k=\cot^2\frac{\pi k}{2m+1}$, $k=1,\dots,m$. By Vieta's formulas their sum equals minus the ratio of the coefficient of $x^{m-1}$ to that of $x^m$, namely
$$
\frac{{2m+1 \choose 3}}{{2m+1 \choose 1}}=\frac{m(2m-1)}{3}
$$
So we get
$$
\sum\limits_{k=1}^m\cot^2\frac{\pi k}{2m+1}=\frac{(2m-1)m}{3}
$$
|
H: Manipulating Algebraic Expression
$a + b + c = 7$ and $\dfrac{1}{a+b} + \dfrac{1}{b+c} + \dfrac{1}{c+a} = \dfrac{7}{10}$. Find the value of $\dfrac{a}{b+c} + \dfrac{b}{c+a} + \dfrac{c}{a+b}$.
I algebraically manipulated the second equation to get:
$\dfrac{(b+c)(c+a) + (a+b)(c+a) + (a+b)(b+c)}{(a+b)(b+c)(c+a)} = \dfrac{7}{10}$
$\dfrac{bc+ab+c^2+ac+a^2+bc+ba+ab+ac+b^2+bc}{(a+b)(b+c)(c+a)} = \dfrac{7}{10}$
$\dfrac{(a+b+c)^2}{(a+b)(b+c)(c+a)} = \dfrac{7}{10}$
$\dfrac{7^2}{(a+b)(b+c)(c+a)} = \dfrac{7}{10}$
$(a+b)(b+c)(c+a) = 70$
I'm stuck after this.
AI: HINT:
$$\frac a{b+c}+\frac b{c+a}+\frac c{a+b}$$
$$=\frac a{b+c}+1-1+\frac b{c+a}+1-1+\frac c{a+b}+1-1$$
$$=-3+(a+b+c)\left(\frac 1{b+c}+\frac 1{c+a}+\frac 1{a+b}\right)$$
Using summation notation, $$\sum_{a,b,c} \frac a{b+c}=-3+\sum_{a,b,c} \left(\frac a{b+c}+1\right)=-3+(a+b+c)\sum_{a,b,c}\frac1{b+c}$$
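For completeness, substituting the given values into this identity yields
$$\frac a{b+c}+\frac b{c+a}+\frac c{a+b}=-3+7\cdot\frac{7}{10}=\frac{19}{10}.$$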
|
H: Improper integral - show convergence/divergence
I ran into this question:
show convergence/divergence of:
$$\int_{0}^{\infty}x^3e^{-x^2}\,dx.$$
I tried for a long time and I'm kind'a lost.
Thanks in advance,
yaron.
AI: We have
\begin{equation*}
\int_{0}^{\infty
}x^{3}e^{-x^{2}}dx=\int_{0}^{1}x^{3}e^{-x^{2}}dx+\int_{1}^{\infty
}x^{3}e^{-x^{2}}dx.
\end{equation*}
The first integral has no singularities. The second one is convergent as can be seen by applying the limit test and using the fact that $\int_{1}^{\infty }\frac{dx}{x^{2}}$ is convergent:
$$\lim_{x\rightarrow \infty }\frac{x^{3}e^{-x^{2}}}{x^{-2}}=0.$$
Consequently the given integral is convergent.
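In fact the integral can be computed exactly: substituting $u=x^{2}$ (so that $x\,dx=\tfrac12\,du$),
$$\int_{0}^{\infty}x^{3}e^{-x^{2}}dx=\frac{1}{2}\int_{0}^{\infty}ue^{-u}\,du=\frac{1}{2}.$$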
|
H: What is the nature of the homomorphism of the semidirect product of two groups $H$ and $K$?
Let $H$ and $K$ be groups and let $H\rtimes_{\phi} K$ be the semidirect product of those groups via the homomorphism $\phi:K\rightarrow Aut(H)$.
Now I have a main question about this function $\phi$.
Does $\phi$ map every element $k\in K$ to the same element $\sigma$ of $Aut(H)$?
I mean: is it true that $\forall k_1 ,k_2 \in K$, $\phi (k_1)=\phi (k_2) =\sigma$?
In the examples of D&F, I have noticed that in the first example $\phi$ maps every element of $K$ to the automorphism of inversion (the automorphism which maps the element $x$ to $x^{-1}$), and the same thing happens in other examples. So, is this true everywhere?
If this is true, then there is a problem: it doesn't give us a homomorphism. If $\phi$ maps all the elements to the inversion automorphism, this $\phi$ is not a homomorphism,
as $\phi (x_1 x_2) = \tau$ but $\phi (x_1) \phi (x_2)= I$, where $\tau$ is the automorphism of inversion and $I$ is the identity automorphism.
AI: No, $\phi$ is just an arbitrary homomorphism into $\operatorname{Aut}(H)$ (sometimes called an action of $K$ on $H$); different elements of $K$ may well map to different automorphisms. Sometimes, however, this is restrictive. For example, if $H\cong\mathbb{Z}$ then $\operatorname{Aut}(H)\cong C_2$ (I write $C_n$ for the cyclic group of order $n$) and so every homomorphism from $C_5$, say, to $C_2$ is the trivial map. So $C_5\rtimes H\cong C_5\times H$.
For an example of elements acting with different automorphisms, consider $K=C_2\times C_2 \times C_2=\langle a, b, c\rangle$ and $H=C_3\times C_3=\langle x, y\rangle$ and take the action to be $x^a=x$, $y^a=y$, $x^b=x^{-1}$, $y^b=y^{-1}$ and $x^c=y$, $y^c=x$. This action is associated to a semidirect product, but clearly $a$, $b$ and $c$ act on $H$ in different ways and so are being mapped to different automorphisms of $H$ by $\phi$.
|
H: Adding simultaneous rates
If I have 3 rates like units/second produced by 3 separate devices and I want to get the total rate, is it ok to add the 3 rates up?
This seems basic and I'd say it's correct but I thought I'd ask.
Thanks
AI: Yes, you are correct (under normal conditions).
That is, if you're given a problem like:
Worker $A$ makes $5 \frac{\text{widgets}}{\text{hour}}$, worker $B$ makes $2 \frac{\text{widgets}}{\text{hour}}$, and worker $C$ makes $3 \frac{\text{widgets}}{\text{hour}}$. What is the total rate?
The total rate is the sum of the other rates; in this case $10 \frac{\text{widgets}}{\text{hour}}$.
However, if the rates were dependent on each other (e.g. only one worker can use a certain tool at a time), then the problem becomes a bit more difficult, and no "one-size-fits-all" approach exists.
|
H: Let $G$ be the division graph on $\{1,2,3,4,5,6,7,8,9,10,12,14,16\}$. Does it have a hamilton/euler path?
Given a graph G whose vertices are $V = \{1,2,3,4,5,6,7,8,9,10,12,14,16\}$, and there is an edge between two vertices $w$ and $j$ iff $w\neq j$ and $w$ divides $j$ or $j$ divides $w$.
(I) Does $G$ have a Hamilton path?
(II) Does $G$ have an Euler path?
Any help would be appreciated. For (I) I started looking for the path but couldn't find it; I also tried to check whether every pair of non-adjacent vertices has degree sum $\geq 13$ (Ore's condition), but that doesn't hold either.
AI: Euler path is easy. Here are three vertices with odd degree:
4 -> 1, 2, 8, 12, 16 (5 neighbors)
10 -> 1, 2, 5 (3 neighbors)
12 -> 1, 2, 3, 4, 6 (5 neighbors)
A graph can have an Euler path only if at most two vertices have an odd degree. Hence, the graph does not contain an Euler path.
Trial-and-error gives this Hamilton-path: $$9 \rightarrow 3 \rightarrow 6 \rightarrow 12 \rightarrow 4 \rightarrow 8 \rightarrow 16 \rightarrow 2 \rightarrow 10 \rightarrow 5 \rightarrow 1 \rightarrow 7 \rightarrow 14$$
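A quick brute-force check of both claims in Python (my own setup; note it also turns up $14$ as a fourth odd-degree vertex, consistent with the handshake lemma):

    V = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16]
    adj = {v: [w for w in V if w != v and (v % w == 0 or w % v == 0)]
           for v in V}

    # an Euler path needs at most two vertices of odd degree
    print([v for v in V if len(adj[v]) % 2 == 1])  # [4, 10, 12, 14]

    # verify the proposed Hamilton path
    path = [9, 3, 6, 12, 4, 8, 16, 2, 10, 5, 1, 7, 14]
    assert sorted(path) == V
    assert all(b in adj[a] for a, b in zip(path, path[1:]))
    print("Hamilton path verified")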
|
H: A proposed proof by induction of $1+2+\ldots+n=\frac{n(n+1)}{2}$
Prove: $\displaystyle 1+2+\ldots+n=\frac{n(n+1)}{2}$.
Proof
When $n=1$, $1=\displaystyle \frac{1(1+1)}{2}$, so equality holds.
Suppose when $n=k$, we have $1+2+\dots+k=\frac{k(k+1)}{2}$
When $n = k + 1$:
\begin{align}
1+2+\ldots+k+(k+1) &=\frac{k(k+1)}{2}+k+1 =\frac{k(k+1)+2k+2}{2}\\
&=\frac{k^2+3k+2}{2}\\
\text{[step]}&=\displaystyle\frac{(k+1)(k+2)}{2}=\displaystyle\frac{(k+1)((k+1)+1)}{2}
\end{align}
equality holds.
So by induction, the original equality holds $\forall n\in \mathbb{N}$.
Question 1: Are there any problems with the write-up?
Question 2: Why does the equality in [step] hold, i.e., why does $k^2+3k+2=(k+1)(k+2)$ hold?
AI: Nice work! If you want to take a bit more time, you can note that $$\frac{k^2+3k+2}2=\frac{k^2+2k+k+2}2=\frac{k(k+2)+1(k+2)}2=\frac{(k+1)(k+2)}2.$$
|
H: Vacuous Domain Mixing For all and There Exists.
Maybe this is a stupid question, but suppose $D = \varnothing$; then we know that $\forall\, x \in D \,P(x)$ is true vacuously and $\exists y\in D\, P(y)$ is false. What if you mix the two, as in $\forall x \in D \,\,\exists \,y \in D \,Q(x, y)$? Is this statement true, false, or neither?
AI: $\bigl(\forall x \in D\bigr) \bigl(\exists y \in D\bigr) Q(x,y)$ has the form $\bigl(\forall x \in D\bigr) P(x)$, where $P(x)$ expands to $\bigl(\exists y \in D\bigr) Q(x,y)$. So it is vacuously true.
If you switch the order of the quantifiers, it becomes vacuously false, since you get a proposition of the form $\bigl(\exists y \in D\bigr) R(y)$ (with $R(y) = \bigl(\forall x \in D\bigr) Q(x,y)$).
When you have an empty domain $D$, only the outermost quantifier matters: any outermost $\bigl(\forall x \in D\bigr)$ makes the statement vacuously true, and any outermost $\bigl(\exists y \in D\bigr)$ makes it vacuously false.
|
H: Find the volume inside both $x^2+y^2+z^2=4$ and $x^2+y^2=1$.
What is the volume inside both $x^2+y^2+z^2=4$ and $x^2+y^2=1$?
The chapter I am working on is called Change of Variables in Multiple Integrals, for my Vector Calculus class.
I understand that we will be taking the double integral of these two shapes to find the volume between both of them, but I am very lost as to how to begin. Should I convert to polar coordinates or should I replace one equation into the other, IE $x^2+y^2=u$ and I could place that into the first equation, then replace it with 1 since $x^2+y^2=u=1$. Or should I set them equal to each other and try to find the bounds like that?
I'd appreciate any input, I am just very confused as to how to approach the problem.
Thank you!
AI: This solid is the intersection of a sphere of radius $2$ with a cylinder of radius $1$. If you look down on the solid from the positive $z$ direction you will see the disk $x^2 + y^2 \le 1$ in the $xy$ plane. Since the solid is bounded on the top and bottom by the sphere you can set up the integral as
$$
V = \iint_D [\text{top} - \text{bottom}] \, \mathrm dA
$$
where $D$ is the unit disk $\{(x,y) : x^2 + y^2 \le 1\}$. Thus your volume integral is
$$
V = \iint_D 2 \sqrt{4 - x^2 - y^2} \, \mathrm dx \mathrm dy.
$$
I would use polar coordinates to evaluate.
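In case it helps to finish it off, the polar-coordinate evaluation (with $x=r\cos\theta$, $y=r\sin\theta$, $dx\,dy=r\,dr\,d\theta$) goes
$$V=\int_0^{2\pi}\!\!\int_0^1 2\sqrt{4-r^2}\;r\,dr\,d\theta=2\pi\left[-\frac{2}{3}(4-r^2)^{3/2}\right]_0^1=\frac{4\pi}{3}\left(8-3\sqrt{3}\right).$$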
|
H: prove that $f$ is continuous on $A$ if and only if $f^{-1}(V)$is open in $\Bbb R^n$ for every open subset $V$ of $\Bbb R^m$
Suppose that $A$ is open in $\Bbb R^n$ and $f$ is a function from $A$ to $\Bbb R^m$. Prove that $f$ is continuous on $A$ if and only if $f^{-1}(V)$is open in $\Bbb R^n$ for every open subset $V$ of $\Bbb R^m$.
How can I show this theorem? Please show me explicitly. I found two answers, but one of them is very complicated and the other is not detailed enough to earn full marks in an exam. So please help me. Thank you.
AI: The open subsets of $A$ with the subspace topology are exactly the sets $A \cap U$, where $U$ is open in $\mathbb{R}^{n}$. If $f^{-1}(V)$ is open in $\mathbb{R}^{n}$, then $f^{-1}(V) \cap A$ is open in $A$. But $f$ is defined from $A$ to $\mathbb{R}^{m}$, so the preimage of any set is contained in $A$, hence $f^{-1}(V) \cap A = f^{-1}(V)$. Thus $f^{-1}(V)$ is open in $A$ for any open $V \subseteq \mathbb{R}^{m}$, which is the definition of $f$ being continuous.
For the other direction suppose that $f$ is continuous on $A$. Then $f^{-1}(V)$ is open in $A$ for each open $V \subseteq \mathbb{R}^{m}$. But $A$ is open in $\mathbb{R}^{n}$, and $f^{-1}(V) = f^{-1}(V) \cap A$, so $f^{-1}(V)$ is open in $\mathbb{R}^{n}$ by the characterisation of open sets in $A$.
|
H: Existence of a continuous selector for a continuous optimization problem
Suppose that you have a continuous function
$$ S \colon \mathbb{R}\times [0, 1]\to [0, \infty).$$
Define an auxiliary function
$$S^\star(x)=\max_{t\in[0,1]}S(x, t).$$
Does there exist a continuous function $t(x)$ such that
$$S^\star(x)=S(x, t(x))?$$
AI: Not necessarily. Consider a zig-zag function
$$h \colon \mathbb{R} \to \mathbb{R};\qquad h(x) = \operatorname{dist}(x,\mathbb{Z})$$
and
$$S(x,t) = h(x-t)$$
The $t$ that maximises $S$ for a given $x$ slides along with $x$ and jumps from one end of the interval to the other at $x = k + \frac{1}{2}$, $k \in \mathbb{Z}$.
For $0 \leqslant x < \frac{1}{2}$, we uniquely have $S^*(x) = S(x, x+\frac{1}{2})$, and for $\frac{1}{2} < x \leqslant 1$ we uniquely have $S^*(x) = S(x, x - \frac{1}{2})$, so any selector $t(x)$ must jump from $1$ to $0$ as $x$ crosses $\frac{1}{2}$ and cannot be continuous.
|
H: Difficulty in understanding integrals of complex numbers
I understand what integration of real numbers is. I know how the definition of it is made.
I have trouble in understanding how it works for complex numbers.
I am referring to the notes here: http://people.math.gatech.edu/~cain/winter99/ch4.pdf.
I understand up to the point where the Riemann sum (page 2) is calculated, because it is almost the same as for the reals. Thereafter, I do not understand a thing.
Please elucidate what it is.
AI: As far as contours go, a rough (and probably poor) analogy: suppose you want to integrate the height of a hill. You would lay out a tape measure and measure the height at each point as you walk up the hill, except you can walk up the hill in lots of different ways. You can walk up in a straight line from east to west, or maybe a curve from south to northwest, etc. Maybe you only want to walk AROUND the top of the hill. You can think of the complex function that you're integrating as the hill (except it has a complex height instead of a real height), and the line or curve you're integrating along as the path in the complex plane.
You'll get different measurements depending on where the start and end points are, and on how well-behaved the function is. If the function is very well-behaved (analytic on a domain without holes), there's a nifty property: it doesn't matter what the curve is, the integral will be the same as long as the start and end points are the same. You could start at point $a=1$, head all the way out to point $c=10^{1000}$, then come back to point $b=2$, and as long as your path stays inside the region where the function is well-behaved, you will get the same answer as if your curve went straight from $a$ to $b$.
From there, you can imagine that for these well-behaved functions, the integral over a closed path from $a$ back to $a$ would be the same as the integral from $a$ to $a$ along the trivial path, namely $0$, and you would be right.
I'm glossing over a lot here, but this is a very high-level view of how you can visualize complex integration.
|
H: Matrix to power $2012$
How to calculate $A^{2012}$?
$A = \left[\begin{array}{ccc}3&-1&-2\\2&0&-2\\2&-1&-1\end{array}\right]$
How can one calculate this? There must be a trick or something, because there was only 1 point for solving this.
AI: Observe that $A^2 = A$ (check by multiplying $A$ by itself), i.e. $A$ is idempotent.
Hence $A^{2012} = A^{2011} = \cdots = A^2 = A$.
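If you want to convince yourself numerically (an illustrative numpy check):

```python
import numpy as np

A = np.array([[3, -1, -2],
              [2,  0, -2],
              [2, -1, -1]])

# A is idempotent: A @ A == A, hence A^k == A for every k >= 1.
print(np.array_equal(A @ A, A))                             # True
print(np.array_equal(np.linalg.matrix_power(A, 2012), A))   # True
```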
|
H: What are discrete and fast Fourier transform intuitively?
I have done both of these in my math courses, but without understanding what they actually are intuitively. I would be very much grateful if you could give me an intuitive explanation of them.
AI: So first things first: the FFT simply refers to the algorithm by which one may compute the DFT. So, if you understand the DFT, you understand the FFT as far as intuition goes (I think).
Now with the DFT, our goal is to write a sequence of $N$ points $x_0,x_1,...,x_{N-1}$ as a sum of complex exponentials. That is, we say that $X_n$ is the DFT of $x_n$ exactly when
$$
x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \cdot e^{(2 \pi i\, k\, n) / N}
$$
You might recognize this as the inverse DFT of $X_n$. The key is that for each $k$, each $e^{(2 \pi i\, k\, n) / N}$ can be thought of as a complex vector with $N$ entries. We could write
$$
v_k=\left(\frac 1N,\frac1N e^{(2 \pi i\, k) / N},\frac1N e^{(4 \pi i\, k) / N},\dots,\frac1N e^{(2 \pi i\, k\, (N-1)) / N}\right)
$$
There are $N$ vectors of this form (taking $k$ from $0$ to $N-1$), and we use them because they form an orthogonal basis of $\mathbb C ^N$: $\langle v_k,v_j \rangle$ equals $\frac{1}{N}$ if $k=j$ and $0$ if $k\neq j$ (so the rescaled vectors $\sqrt{N}\,v_k$ are orthonormal). Because these vectors are orthogonal, we can change basis (i.e. find the DFT) by using the dot product rather than by solving a system of $N$ equations.
That is, if $x=(x_0,x_1,\dots,x_{N-1})$ is the vector of complex entries of our time domain sequence, then the $k^{th}$ entry of the DFT of $x_0,x_1,\dots,x_{N-1}$ is simply given by the orthogonal projection coefficient
$$
X_k = \frac{\langle x,v_k \rangle}{\langle v_k,v_k \rangle} = N\,\langle x,v_k \rangle = \sum_{n=0}^{N-1} x_n \, e^{-(2 \pi i\, k\, n)/N}
$$
and the IDFT is computed by finding
$$
x = \sum_{k=0}^{N-1} X_k v_k
$$
That is certainly my intuition for the computational process, and I find that helps. What this doesn't really help with is why we'd want to deal with complex exponentials in the first place, but if you've seen DFTs already I suppose you have some idea.
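If it helps to see this change of basis in code, here is a small illustrative numpy sketch; it uses the un-normalized vectors $w_k = N\,v_k$, so that $\langle w_k, w_j\rangle = N\delta_{kj}$:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
x = rng.normal(size=N) + 1j * rng.normal(size=N)

n = np.arange(N)
W = np.exp(2j * np.pi * np.outer(n, n) / N)   # row k of W is the vector w_k

X = W.conj() @ x                # X_k = <x, w_k> = sum_n x_n e^{-2 pi i k n / N}
print(np.allclose(X, np.fft.fft(x)))   # True: matches the FFT
print(np.allclose(x, (W @ X) / N))     # True: the inverse DFT reconstructs x
```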
|
H: Range changes for functions of stochastic variables.
I have the stochastic variables $X\sim U(-1,1)$ and $Y=2X^2+1$. I need to find the cdf of $Y$ ($F_Y(y)$). I have reasoned like this:
$$
F_y(y) = P(Y<y) = P(2X^2+1<y) = P(-\sqrt{\frac{y-1}2} < X < \sqrt{\frac{y-1}2})
$$
$$
= \int_{-\sqrt{\frac{y-1}2}}^{\sqrt{\frac{y-1}2}}f_X(x)dx = \frac12\left[x\right]_{-\sqrt{\frac{y-1}2}}^{\sqrt{\frac{y-1}2}} = \frac12\left(\sqrt{\frac{y-1}2}+\sqrt{\frac{y-1}2}\right) = \sqrt{\frac{y-1}2}
$$
Giving us $f_Y(y) = \frac1{2\sqrt2\sqrt{y-1}}$, which is what my book tells me it should be. However, my book tells me this is for the range $1<y<3$, with the cdf equal to $0$ and $1$ on either side, as is usual for cdfs.
However, I have no idea of how one finds out what this range should be. My book has some examples where the ranges do change, but never explains the reasoning behind these things. Any information on where I can learn this would be highly appreciated.
AI: If $X\sim U(-1,1)$, then $f_X(x)=\frac{1}{2}$ for $x\in[-1,1]$ and $0$ otherwise.
Some care has to be taken when computing
$$\int_{-\sqrt{\frac{y-1}{2}}}^{\sqrt{\frac{y-1}{2}}}f_X(x)dx. $$
We have the following cases:
$\sqrt{\frac{y-1}{2}}\leq 1 \ \cap\ y-1\geq 0 \Leftrightarrow 1\leq y\leq 3$
In this case the integral gives the result you wrote, since $f_X(x)=\frac12$ on the whole integration range. The inequality $y-1\geq 0$ is necessary due to the square root $\sqrt{y-1}$ appearing in your computations; indeed $Y=2X^2+1\geq 1$ always, so $F_Y(y)=0$ for $y<1$.
$\sqrt{\frac{y-1}{2}}> 1 \Leftrightarrow y> 3$
In this second case the integration limits extend beyond $[-1,1]$, where $f_X(x)=0$, so the integral equals $\int_{-1}^{1}\frac{1}{2}\,dx=1$, i.e. $F_Y(y)=1$.
Hence $F_Y$ increases from $0$ to $1$ exactly on $1\leq y\leq 3$, and the density $f_Y(y)=F_Y'(y)$ is nonzero precisely on the range $1<y<3$.
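A quick Monte Carlo check of the resulting cdf (illustrative Python with simulated draws):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=1_000_000)
Y = 2 * X**2 + 1                       # Y always lies in [1, 3]

for y in (1.5, 2.0, 2.5, 3.0):
    empirical = np.mean(Y <= y)
    exact = np.sqrt((y - 1) / 2)       # F_Y(y) for 1 <= y <= 3
    print(y, round(empirical, 4), round(exact, 4))
```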
|
H: Evaluating the $\int \frac{x+2}{x^2-4x+8}$ - a doubt
I have to find the antiderivative of $f(x) = \dfrac{x+2}{x^2-4x+8}$
I rewrote it to the form $$ \dfrac{x-2}{x^2 -4x + 8} + \dfrac{1}{\frac{1}{4} (x-2)^2 +1}$$
The next step supposedly is $$F(x) = \dfrac{1}{2}\ln|x^2-4x+8| + 2 \arctan\left(\dfrac{1}{2}(x-2)\right) + c$$
But now there is a problem which I've always had: I don't know why the $\dfrac{1}{2}$ in front of the $\ln(x)$ is there, same for the $2$ and the $\dfrac{1}{2}$ for the $\arctan(x)$. For me simplifying the function to standard integrals is easy, however finding the right factors and such is hard.
AI: When we use substitution, we need to determine $u$ as a function of $x$, and so, as a sort of "reverse" chain rule, we need also two substitute $du$ as a function of $dx$:
$$\int \dfrac{x-2}{x^2 -4x + 8}\,dx$$
Here, $u = x^2 - 4x + 8 \implies \,du = (2x - 4)\,dx = 2(x-2)\,dx \iff \frac 12\, du = (x - 2) \,dx$
This gives our substituted integrand:
$$\begin{align} \int \dfrac{x-2}{x^2 -4x + 8}\,dx & = \int \frac{\frac 12 \,du}{u} \\ \\ & = \frac 12 \int \frac {du}{u} \\ \\ & = \frac 12 \ln|u| + C =\frac 12 \ln|x^2 - 4x + 8| + C\tag{1}\end{align}$$
For the second term:
$$\int\dfrac{1}{\frac{1}{4} (x-2)^2 +1}\,dx = \int \dfrac{1}{\left(\frac 12(x-2)\right)^2 + 1} \,dx$$
We let $$u = \frac 12(x - 2) \implies du = \frac 12 dx \implies 2 du = dx$$
Substituting we have $$\begin{align} \int\dfrac{1}{\left(\frac 12(x-2)\right)^2 + 1} \,dx & = \int \frac 1{u^2 + 1}\cdot(2\,du) \\ \\ & = 2 \int \dfrac{1}{u^2 + 1} \,du \\ \\& = 2\arctan\left(u\right) + C \\ \\ & = 2 \arctan\left(\frac 12(x - 2)\right) + C \tag{2} \end{align}$$
Putting $(1)$ and $(2)$ together:
$$F(x) = \dfrac{1}{2}\ln|x^2-4x+8| + 2 \arctan\left(\dfrac{1}{2}(x-2)\right) + c$$
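You can verify the antiderivative symbolically, e.g. with sympy (differentiating should return the original integrand):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.Rational(1, 2) * sp.log(x**2 - 4*x + 8) + 2 * sp.atan((x - 2) / 2)

# Should print 0: F'(x) equals (x + 2)/(x^2 - 4x + 8).
print(sp.simplify(sp.diff(F, x) - (x + 2) / (x**2 - 4*x + 8)))
```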
|
H: Question about linear dependence and independence by using Wronskian
Here is the theorem I use:
Two solutions $\phi_1$, $\phi_2$ of $L(y)=y''+a_1y'+a_2y=0$, where $a_1$ and $a_2$ are constants, are linearly independent on an interval $I$ if, and only if, the Wronskain $W(\phi_1,\phi_2)\ne0$ for all $x\in I.$
Then, consider the functions: $$\phi_1(x)=x,\phi_2(x)=|x|, x\in(-\infty,\infty). $$ Are they linearly dependent or independent?
My answer is that they are linearly dependent, since when $x\ge0$, $\phi_1(x)=x=\phi_2(x)$, and when I plug them and the corresponding derivatives into the Wronskian, $W=0$ for all $x\ge0$; checking $x\lt0$, the Wronskian also equals $0$. Thus $\phi_1$ and $\phi_2$ here are linearly dependent. But the answer in the back of the book says they are linearly independent. Where did I go wrong?
A similar example is $$\phi_1(x)=x^2, \phi_2(x)=x|x|, x\in(-\infty,\infty).$$ The answer is also linear independence, but I think they are linearly dependent.
Thanks for your answer.
AI: Note: I wonder what DEQ these would both be solutions of; $|x|$ is not even differentiable at $0$, so the quoted theorem does not apply here. Regardless, let's proceed.
We know that if $f(x)$ and $g(x)$ are linearly dependent on $I$ then $W(f,g)(x) = 0$ for all $x$ in the interval $I$.
Is this telling you anything about the linear dependence of the functions themselves? It does not imply that if $W(f,g)(x) = 0$ then $f(x)$ and $g(x)$ are linearly dependent. It is possible for two linearly independent functions to have a zero Wronskian!
We would analyze
$$ W(f,g) = \det\begin{bmatrix}f & g\\f' & g'\end{bmatrix} = fg' - gf'$$
$x < 0 \rightarrow |x| = -x \rightarrow \text{Wronskian} = 0$ since $fg' - gf' = x(-1) - (-x)(1) = 0$
$x = 0 \rightarrow |x| = 0 \rightarrow \text{Wronskian} = 0$ since $fg' - gf' = 0-0 = 0$
$x > 0 \rightarrow |x| = x \rightarrow \text{Wronskian} = 0$ since $fg' - gf' = x(1) - (x)(1) = 0$
Since the Wronskian is zero, no conclusion can be drawn about linear independence!
For linear independence, we want to go back to the basic definitions again. We have:
$|x| = x$ if $x \ge 0$ and $|x| = -x$ if $x \lt 0$. Thus, our equations to check for linear independence of these functions become:
$$c_1 x + c_2 x = 0~~~~ \text{for}~ x \ge 0 \\ c_1x - c_2 x = 0~~~~\text{for}~ x \lt 0$$
The only solution to this system is $c_1 = c_2 = 0 \rightarrow$ linear independence. Note that the single point $x = 0$ does not matter.
You can also see the same argument for your second example Calculate the Wronskian of $f(t)=t|t|$ and $g(t)=t^2$ on the following intervals: $(0,+\infty)$, $(-\infty, 0)$ and $0$?
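For what it's worth, here is a small numeric illustration that the Wronskian of $f(x)=x$ and $g(x)=|x|$ vanishes identically even though the functions are linearly independent:

```python
import numpy as np

t = np.linspace(-2, 2, 401)
f, g = t, np.abs(t)
fp, gp = np.ones_like(t), np.sign(t)   # g'(t) = sign(t) away from t = 0

W = f * gp - g * fp                    # Wronskian f*g' - g*f'
print(np.max(np.abs(W)))               # 0.0: identically zero, yet f, g are independent
```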
|
H: Let G(V,E) undirected Graph with n vertices, where every vertex has degree less than $\sqrt{n-1}$. Prove that the diameter of G is at least 3.
Let G(V,E) undirected Graph with n vertices, where every vertex has degree less than $\sqrt{n-1}$. Prove that the diameter of G is at least 3.
Well, I've thought about proving it by contradiction, supposing the diameter of $G$ is at most 2.
Then for all $u,v\in V$ we have $d(u,v)\le2$. But I got stuck.
Then I tried the pigeonhole principle, and still nothing.
Any guidelines please?
AI: The diameter of a graph is the longest shortest path between 2 vertices. So we want to show that there exist 2 vertices that are distance at least 3 apart.
Hint: Consider any vertex $A$. It is connected to at most $\sqrt{n-1}$ vertices.
Hint: Each of these vertices is connected to at most $\sqrt{n-1} - 1$ vertices that are not $A$.
This describes all the vertices that are distance at most 2 from $A$. How many vertices are there? If we can show that there are less than $n$ vertices, then there must be a vertex that is not distance at most 2 from $A$.
|
H: linear operator $A$ and $B$ commute
This is an exercise problem from my lecturer:
Prove that if the linear operators $A$ and $B$ commute (i.e., if $AB=BA$), then every eigenspace of the operator $A$ is an invariant subspace of the operator $B$.
AI: Hint: Let $A,B\colon X \to X$. A subspace $U \subseteq X$ is $B$-invariant or an invariant subspace of $B$ if $Bu \in U$ for all $u \in U$, that is $B(U) \subseteq U$. Now let $U$ be an eigenspace of $A$, that is $Au = \lambda u$ for all $u\in U$. Let $u \in U$, if $\lambda \ne 0$, then $Bu = \frac 1\lambda BAu = \cdots$ (use $AB = BA$) if $\lambda =0 $ then $Au = 0$, hence $ABu = BAu = \cdots$.
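A concrete numeric illustration (a hypothetical example, not the proof: here $B$ is taken to be a polynomial in $A$, which certainly commutes with $A$):

```python
import numpy as np

A = np.diag([2.0, 2.0, 5.0])         # the eigenspace for lambda = 2 is span(e1, e2)
B = 3 * A @ A + A + np.eye(3)        # a polynomial in A, so AB = BA

u = np.array([0.3, -1.7, 0.0])       # u lies in the lambda = 2 eigenspace
Bu = B @ u

# Bu stays inside the eigenspace: A(Bu) = 2 * Bu.
print(np.allclose(A @ Bu, 2 * Bu))   # True
```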
|
H: Evaluation/Estimation of a Gaussian integral
Is there a closed form expression for the following definite integral:
$$ F(u) = \frac{1}{2}\int_{-u}^u e^{-\frac{\alpha^2}{x^2}-\beta^2 x^2}\,dx
= e^{-2\alpha\beta} \int_0^u e^{-\left(\frac{\alpha}{x}-\beta x\right)^2}\,dx? $$
Graphing the function makes it clear this quantity is somehow at least close to the standard Gaussian, and I've seen computations that indicate that the value at infinity corresponds to the standard Gaussian one (that is, $\lim_{u\rightarrow\infty} F(u) = \sqrt{\pi}/\beta$) although I've never been able to compute this directly myself. Since $F$ seems comparable to Gaussian, I expect that the answer to my question is negative, at least for finite $u$. But I would still be very interested in (1) the trick for evaluating at $u=\infty$, and (2) estimates on $F$, in particular, lower bounds.
AI: Consider $I=\mathrm e^{2\alpha\beta}F(\infty)$ and the change of variable $\beta z=\alpha x^{-1}$ (assuming $\alpha,\beta>0$), then
$$
I=\int_0^\infty\exp\left(-\left(\alpha x^{-1}-\beta x\right)^2\right)\mathrm dx=\int_0^\infty\exp\left(-\left(\alpha z^{-1}-\beta z\right)^2\right)(\alpha/\beta)z^{-2}\mathrm dz,
$$
that is $\beta I=\alpha J$ with
$$
J=\int_0^\infty\exp\left(-\left(\alpha x^{-1}-\beta x\right)^2\right)x^{-2}\mathrm dx.
$$
Now, $\mathrm d\left(\alpha x^{-1}-\beta x\right)=-\left(\alpha x^{-2}+\beta\right)\mathrm dx$ hence
$$
\beta I+\alpha J=\int_0^\infty\exp\left(-\left(\alpha x^{-1}-\beta x\right)^2\right)\left(\alpha x^{-2}+\beta\right)\mathrm dx=\int_{-\infty}^\infty\mathrm e^{-t^2}\mathrm dt=\sqrt\pi,
$$
hence $\beta I=\alpha J=\frac12\sqrt\pi$ and
$$
F(\infty)=\frac{\sqrt\pi}{2\beta\,\mathrm e^{2\alpha\beta}}.
$$
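A numeric sanity check of this closed form (illustrative, with arbitrarily chosen positive $\alpha,\beta$):

```python
import numpy as np
from scipy.integrate import quad

alpha, beta = 0.7, 1.3   # arbitrary positive parameters

integrand = lambda x: np.exp(-alpha**2 / x**2 - beta**2 * x**2) if x > 0 else 0.0
F_inf, _ = quad(integrand, 0, np.inf)

closed_form = np.sqrt(np.pi) / (2 * beta) * np.exp(-2 * alpha * beta)
print(F_inf, closed_form)   # the two values agree to quadrature accuracy
```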
|
H: How do prove that there is a vertex with degree less than 6 in a disconnected planar graph
As we all know, in every planar graph, connected or disconnected, there is at least one vertex $v$ with $\deg(v) \leq 5$. (In fact, one can even prove that there are at least two such vertices.)
But, as I looked for a proof to this simple fact, I saw that all proofs rely on the fact that in every planar graph $e \leq 3v - 6$. But isn't this formula applicable only to connected graphs?
AI: If $G = (V,E)$ is a non-connected planar graph, add edges to make $G' = (V, E \uplus E')$ connected and planar; this is always possible (use induction on the number of connected components). Then
$$ |E| \le |E \uplus E'| \le 3|V| - 6, $$ where the last inequality is the standard bound for connected planar graphs (valid for $|V| \ge 3$).
|
H: Differential forms: need to understand the 1-form $dx^i$
In [1] on page 118 the authors introduce differential k-forms $\omega$ on $U \subset \mathbb{R}^n$ by \begin{equation} \omega : U \subset \mathbb{R}^n \to\Lambda^k\mathbb{R}^n \end{equation} where $\Lambda^k$ is the space of k-vectors and $(\mathbb{R}^n)^\ast$ is the dual of $\mathbb{R}^n$.
Now they define the 1-form $dx^i$ by \begin{equation} \tag{1} dx^i : x \in \mathbb{R}^n \mapsto e^i \in \Lambda^1 \mathbb{R}^n \simeq (\mathbb{R}^n)^\ast \end{equation} where $e^1,\ldots,e^n$ denotes the dual basis of $\mathbb{R}^n$.
They conclude \begin{equation} dx^i(x) = x^i \tag{2}. \end{equation}
It seems to me that by $x^i$ they don't mean the $i$-th component of $x$, but the coordinate-map \begin{equation} x^i : v \in \mathbb{R^n} \mapsto v_i \in \mathbb{R} \end{equation} for this is the only way that (1) and (2) don't contradict each other.
Question 1: correct so far?
Now for any smooth function $f : U \to \mathbb{R}$ they define \begin{equation} df : x \in U \mapsto \sum_{i=1}^n f_{x^i}(x) dx^i \in \Lambda^1 \mathbb{R}^n \simeq (\mathbb{R}^n)^\ast \end{equation} where I suppose $f_{x^i}(x)$ is the $i$-th partial derivative of $f$ (the notation is never clarified).
Now $df(x)$ is an element of $(\mathbb{R}^n)^\ast$ and we can apply it to some $v \in \mathbb{R}^n$. This yields \begin{equation} df(x)(v) = \left(\sum_{i=1}^n f_{x^i}(x)dx^i \right) (v) = \sum_{i=1}^n f_{x^i}(x)dx^i(v) = \sum_{i=1}^n f_{x^i}(x) e^i = Df(x). \end{equation}
Question 2: isn't $df(x)(v)$ supposed to be $D_v f(x)$ (the directional derivative?) Where is my mistake?
Any help is much appreciated!
[1] Giaquinta, M., G. Modica and J. Souček: Cartesian currents
in the calculus of variations. I, Nr. 37 in A Series of Modern Surveys in
Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer, Berlin, 1998,
ISBN 3-540-64009-6. Cartesian currents.
AI: Yeah, this seems to be very sloppy. They should've said $dx^i = e^i$ (this is used in your final equation), and it should be a directional derivative. When they simplify $dx^i(v)$, they should get $e^i(v)$, and instead they leave off the $v$.
Yes, $f_{x^i}$ is meant to denote a partial derivative with respect to $x^i$.
|
H: Application of the Chinese Remainder Theorem
Three brothers A, B and C live together and they all love eating pizza. A has the habit of eating a pizza every 5 days, B every 7 days and C every 11 days. A and C both eat pizzas together on 3 January 2012 and B has a pizza the next day. When will they all three eat pizza together?
I want to solve this exercise using the Chinese Remainder Theorem. I also have the standard solution but I think I don't exactly understand the Chinese Remainder Theorem so far. In our Algebra lecture, we stated it as follows:
Let $a_1, \dots, a_n$ be pairwise relatively prime elements of a principal ideal domain $R$ (this is indeed the correct English term for "Hauptidealring"). Then, the homomorphism
$$R / (a_1 \cdots a_n) \to R/(a_1) \times \dots \times R/(a_n), \;\;[x] \mapsto ([x], \dots, [x])$$
is an isomorphism.
The standard solution starts as follows: Let $x \in \mathbb{Z}_{\geq 0}$ be the amount of days that has passed since January 3rd. We are looking for the smallest non-negative number $x$ which solves the following system:
$$x \equiv 0 \; \text{mod} \; 5$$
$$x \equiv 0\; \text{mod} \; 11$$
$$x \equiv 1 \; \text{mod} \; 7$$
The numbers 5, 7 and 11 are pairwise coprime, hence the Chinese Remainder Theorem tells us that there is a unique solution $x$ mod 385 for this system. How does the Chinese Remainder Theorem tell us something about remainder systems like these? In our case, $R = \mathbb{Z}$, and therefore it gives us an explicit isomorphism $\mathbb{Z}/(5 \cdot 7 \cdot 11) = \mathbb{Z}/(385) \to \mathbb{Z}/(5) \times \mathbb{Z}/(7) \times \mathbb{Z}/(11)$. How can I use this isomorphism to get a unique solution mod 385?
I just don't really understand the Chinese Remainder Theorem and thought looking at applications and standard solutions might help me out but it didn't so far and I'd really appreciate any help.
Thanks in advance.
AI: The CRT basically says says that the natural projection from $\Bbb Z$ onto that product is surjective.
Being surjective, $z\mapsto(0+5\Bbb Z,0+11\Bbb Z,1+7\Bbb Z)$ for some $z\in \Bbb Z$. By an isomorphism theorem, you turn this into an isomorphism from $\Bbb Z/(385)$ to the product. It is still surjective, and since it is injective, this says the answer $z$ is unique (mod $385$).
One naive way to solve this would be to look at multiples of $55$ and see which one is congruent to $1$ mod $7$.
There is a simple algorithm to compute more complex cases. Let's suppose you want a simultaneous solution to get $(a,b,c)$ instead of $(0,0,1)$. Since 77 is coprime to 5, you can find an inverse mod 5, which turns out to be 3. Then $77\cdot3\cdot a\cong a\pmod{5}$, but it is zero mod 7 and mod 11.
Since 35 is coprime with 11, we can find an inverse mod 11, which turns out to be 6. Then $35\cdot 6\cdot b\cong b\pmod{11}$, but it is zero mod 5 and mod 7.
Finally, 55 has an inverse mod 7, namely 6. Thus $6\cdot 55\cdot c\cong c\pmod{7}$, and zero mod 5 and mod 11.
Summing all of these, we get that $77\cdot 3\cdot a+35\cdot 6\cdot b+55\cdot 6\cdot c=z$ is a solution to the simultaneous modular equations. If you need it to lie below 385 you can just reduce it mod 385, or you can add or subtract multiples of 385 to land in any desired range.
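In code, the same construction looks like this (a minimal sketch; pow(Ni, -1, m) computes the modular inverse and needs Python 3.8+):

```python
# Minimal CRT solver following the construction above (moduli pairwise coprime).
def crt(residues, moduli):
    N = 1
    for m in moduli:
        N *= m
    x = 0
    for r, m in zip(residues, moduli):
        Ni = N // m
        x += r * Ni * pow(Ni, -1, m)   # inverse of Ni modulo m
    return x % N

# x = 0 (mod 5), x = 0 (mod 11), x = 1 (mod 7)
print(crt([0, 0, 1], [5, 11, 7]))      # 330: all three eat pizza 330 days later
```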
|
H: About the Spectrum of operators
I'm studying operator theory, and a doubt came to me. We know the difference between the point spectrum, the continuous spectrum and the residual spectrum, and we have that $\lambda \in \sigma(T)$ iff $T-\lambda I$ is not invertible. But doesn't the fact that $T-\lambda I$ is not invertible imply that $\ker(T-\lambda I)\neq 0$? Why doesn't this imply that $\lambda$ is in the point spectrum? Moreover, to show that some value $\lambda$ belongs to the spectrum, the books usually find some sequence of points $x_{n}$ with $\|x_n\|=1$ such that the norm of $(T-\lambda I)x_{n}$ goes to zero. But why does this imply that the operator $T-\lambda I$ is not invertible?
Thank You!
AI: For an infinite-dimensional Banach space $E$, it is no longer true that for an operator $S:E\to E$,
$$\text{$S$ is not invertible}\implies\ker(S)\text{ is non-trivial}.$$
For instance, $S$ can be injective without being surjective (think of the unilateral shift on $\ell^2$), so $\ker(S)=0$ while $S$ still fails to be invertible; such $\lambda$ land in the continuous or residual spectrum rather than the point spectrum.
As for the sequences: if $T-\lambda I$ had a bounded inverse $S$, and $\|x_n\|=1$ with $\|(T-\lambda I)x_n\|\to0$, then $1=\|x_n\|=\|S(T-\lambda I)x_n\|\le\|S\|\,\|(T-\lambda I)x_n\|\to 0$, a contradiction. So such a sequence shows that $T-\lambda I$ is not invertible.
It may help to read the Wikipedia article on the decomposition of the spectrum.
|
H: How to teach a High school student that complex numbers cannot be totally ordered?
I once again need your precious knowledge! I am not sure what the best pedagogical way is to teach a high school student why the complex numbers cannot be totally ordered. When I was in high school we were simply told that we cannot order complex numbers. When we asked why, the answer was that there is no total order on $\mathbb C$. But we weren't taught a thing about order in general. Is there a crystal clear way to do this in high school? I am thanking you all in advance!
AI: Without going into exactly what an ordered field is, you can tell your students that if you want to order $\mathbb{C}$, two things that should be true of the order are:
The order preserves the ordering of $\mathbb{R}$.
For $a,b,c$ with $b<c$, then $ab<ac$ if $a>0$ and $ab>ac$ if $a<0$.
Then ask: Where is $i$ in the ordering compared to $0$?
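(For your own preparation, the contradiction can be spelled out as follows: if $i>0$, apply rule 2 with $a=i>0$ and $b=0<c=i$ to get $0<i\cdot i=-1$, contradicting rule 1; if $i<0$, apply rule 2 with $a=i<0$ and $b=i<c=0$ to get $i\cdot i>0$, i.e. $-1>0$, again a contradiction. Since $i\neq0$, no order satisfying both rules can exist.)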
|
H: Standard models being non-standard?
If there is a ''set'' W in V which is a standard model of ZF, and the ordinal κ is the set of ordinals which occur in W, then Lκ is the L of W. If there is a set which is a standard model of ZF, then the smallest such set is such an Lκ. This set is called the minimal model of ZFC. Using the downward Löwenheim–Skolem theorem, one can show that the minimal model (if it exists) is a countable set.
Of course, any consistent theory must have a model, so even within the minimal model of set theory there are sets which are models of ZF (assuming ZF is consistent). However, those set models are non-standard. In particular, they do not use the normal element relation and they are not well founded. (http://en.wikipedia.org/wiki/Constructible_universe)
I don't get it. OK, so in this case, one assumes that there exists some standard model that is not a class, but a set. Then one constructs a constructible universe inside such set which is also a set. Then one uses Lowenheim-Skolem to show that the smallest such model is countable.
My question is: it seems that the model is externally non-standard (viewed from a bigger model), so why call it standard in the first place? Or am I confused, and is the model really standard? And the later part of the quote seems to say that the model is not standard, which confuses me a lot.
And I do not get the second paragraph: why does the fact that any consistent theory has a model relate to the minimal model containing models of ZF? I am not sure how Gödel's completeness theorem is being used here.
AI: If you have a standard model $M$, and $N$ is a countable submodel (not even elementary, just a substructure which is a model of $\sf ZFC$) then $N$ is well-founded, because its $\in$ is a subset of the real $\in$ of the universe of sets (which is well-founded by the axiom of foundation).
We can therefore collapse $N$ to a countable transitive model, which is then a standard model as wanted.
Within the minimal model, however, we can't do this sort of trick anymore. There is no set which is a standard model that we can begin with. However the standard integers cannot encode a proof of $\lnot\text{Con}(\sf ZFC)$, so the minimal standard model also satisfies $\text{Con}(\sf ZFC)$ and therefore has a model. But this model cannot be standard, it has to be ill-founded.
|
H: If a group $G$ has the trivial center then $|Aut(G)|\geq |G|$
If a group $G$ has the trivial center then $|Aut(G)|\geq |G|$. Any suggestion?
AI: Hint: For $g\in G$ consider the conjugation $\varphi_g\colon x\mapsto g^{-1}\cdot x\cdot g$, and show that $g\mapsto\varphi_g$ is a homomorphism $G\to\operatorname{Aut}(G)$ whose kernel is the center, hence injective when the center is trivial.
|
H: Find the integral of $\overline{z}$
Question:
Find $\int_{c}\overline{z}\,dz$, where the contour is a parabola and the parameter interval is from 0 to 1.
My Attempt:
$z = x + iy \Rightarrow \overline{z} = x - iy$
$f(z) = x - iy$
Since the contour is a parabola, $\gamma(t) = t + it^2$ and $y = x^2$.
$Re(\gamma) = t$, $Im(\gamma) = t^2$
$\gamma\prime(t) = 1 + 2ti$
$f(\gamma(t)) = (t - t^2)$
So, $\int_{c}\overline z = \int_{0}^{1}(t-it^2)(1+2t)$
$= \int_{0}^{1}(t - 2t^3) + i(2t^2 - t^2)$
$= (\frac{t^2 - t^4}{2}) + i(\frac{t^3}{3})$ this must be evaluated from 0 to 1
$= \frac{1}{3}i$
Is my answer right?
AI: If $f(x,y)=x-iy$ and $\gamma(t):=x(t)+iy(t)=t+it^2$ with $t\in [0,1]$ then
$$f(\gamma(t)):=f(x(t),y(t))=x(t)-iy(t)=t-it^2$$
As $\gamma'(t)=1+2it$, we arrive at
$$\int_{\gamma} f:=\int_{0}^{1}f(\gamma(t))\gamma'(t)dt=\int_{0}^{1}(t-it^2)(1+2it)dt=
\int_{0}^{1}(t+2t^3)dt+i\int_{0}^{1}t^2dt=
\\ \frac{1}{2}+\frac{1}{2}+\frac{1}{3}i=
1+\frac{1}{3}i.$$
So your answer $\frac{1}{3}i$ is not quite right: when expanding $(t-it^2)(1+2it)$, the term $-2i^2t^3$ contributes $+2t^3$ to the real part (not $-2t^3$), so the real part integrates to $1$ rather than $0$.
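If you want to double-check numerically, here is an illustrative scipy sketch:

```python
import numpy as np
from scipy.integrate import quad

gamma  = lambda t: t + 1j * t**2          # the parabola y = x^2
dgamma = lambda t: 1 + 2j * t

f = lambda t: np.conj(gamma(t)) * dgamma(t)

re, _ = quad(lambda t: f(t).real, 0, 1)
im, _ = quad(lambda t: f(t).imag, 0, 1)
print(re, im)   # approximately 1.0 and 0.3333, i.e. the integral is 1 + i/3
```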
|
H: Length of spanning and independent lists
There is this theorem in my notes that says that in a finite-dimensional vector space any linearly independent list of vectors is shorter than or equal in length to every spanning list. I understand the proof but it doesn't appear to use the assumption that the space is finite-dimensional.
Could someone show me an example of an infinite dimensional space in which there exists a linearly independent list of vectors that is strictly longer than some spanning list?
AI: Could someone show me an example of an infinite dimensional space in which there exists a linearly independent list of vectors that is strictly longer than some spanning list?
No: it's also true in infinite-dimensional spaces, if we replace "length" with "cardinality" and "list" with "set".
But the proof is more complicated in the infinite dimensional case (and may require the axiom of choice/Zorn's lemma, I don't remember).
If the proof in the lecture didn't seem to use the assumption of finite dimensionality, it used it implicitly: in that setting a "list" is finite by definition, and the usual exchange argument relies on the spanning list having finitely many entries.
|
H: Help with a step in rearranging this problem
I'm working through the proof of this theorem:
If $x$ is any real number other than $1$, then $$\sum_{j = 0}^{n -1} x^j = 1 + x + x^2 + \cdots + x^{n-1} = \frac{x^n-1}{x-1}$$
But I'm struggling with an intermediate step, so I hope you can help. Please don't take any large jumps in rearranging, and I would also appreciate it if you would leave the last steps to the completed theorem for me to continue working on.
So here's where I'm stuck:
$$\frac{(x^k) - 1}{ x - 1} + x^k = \frac{x^k - 1 + x^{k+1} - x^k}{x - 1}$$
I just can't seem to get LHS to equal the RHS
AI: Put each term over a common denominator, just like you would if you were adding two rational numbers. Since $\frac{x-1}{x-1}=1$ for any $x\neq 1$, we have
$$x^k \;=\;x^k\cdot 1\;=\;x^k\cdot\frac{x-1}{x-1}\;=\; \frac{x^k\cdot (x-1)}{x-1}\;=\;\frac{x^{k+1}-x^k}{x-1}$$
and therefore
$$\frac{x^k-1}{x-1}+x^k=\frac{x^k-1}{x-1}+\frac{x^{k+1}-x^k}{x-1}=\frac{(x^k-1)+(x^{k+1}-x^k)}{x-1}.$$
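If you like, you can confirm the identity symbolically (illustrative sympy check):

```python
import sympy as sp

x = sp.symbols('x')
k = sp.symbols('k', integer=True, positive=True)

lhs = (x**k - 1) / (x - 1) + x**k
rhs = (x**(k + 1) - 1) / (x - 1)
print(sp.simplify(lhs - rhs))   # 0
```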
|
H: Finding Eigenvalues and Eigenvectors weird equations
I have matrix:
$A = \left[\begin{array}{ccc}3&-1&-2\\2&0&-2\\2&-1&-1\end{array}\right]$
And I have to find the eigenvalues and corresponding eigenvectors.
I get $det(A-xI_3) = -x(x-1)^2$, so eigenvalues are $x_1 = 0$ and $x_2 = 1$
Now I want to find the corresponding eigenvectors. I get:
$A - 0I_3 = \left[\begin{array}{ccc}3&-1&-2\\2&0&-2\\2&-1&-1\end{array}\right] $
and
$A - I_3= \left[\begin{array}{ccc}2&-1&-2\\2&-1&-2\\2&-1&-2\end{array}\right]$
For the first system I have the solution $y = x, z = x$, but I don't know what the eigenvectors are, and for the second I don't even know how to solve $(A-I_3)[x,y,z]^T = 0$. Could you help me?
AI: Since you have the eigenvalues, you can find a basis of eigenvectors associated to each eigenvalue, which means solving (with Gaussian elimination) this system (here for the eigenvalue $0$):
$X=
\begin{pmatrix}x\\y\\z\end{pmatrix} \in E_0 \Leftrightarrow
\begin{cases}
3x-y-2z&=0 \\
2x-2z&=0 \\
2x-y-z&=0
\end{cases}\Leftrightarrow AX=0\cdot X$
$\begin{cases}
3x-y-2z&=0 \\
2x-2z&=0 \\
2x-y-z&=0
\end{cases}
\Leftrightarrow
\begin{cases}
2x-y-z&=0 \\
-y+z&=0 \\
0&=0
\end{cases}
\Leftrightarrow
\begin{pmatrix}x\\y\\z\end{pmatrix}\in \{\begin{pmatrix}s\\s\\s\end{pmatrix},s\in\mathbb{K}\}$
Finally :
$E_0 = \operatorname{span}\left(\begin{pmatrix}1\\1\\1\end{pmatrix}\right)$
The case of the eigenvalue 1 starts exactly the same way :
$X=
\begin{pmatrix}x\\y\\z\end{pmatrix} \in E_1 \Leftrightarrow
\begin{cases}
3x-y-2z&=x \\
2x-2z&=y \\
2x-y-z&=z
\end{cases}\Leftrightarrow AX=1\cdot X$, and you find the eigenvectors below ;-).
So, you have to find a basis of the plane $P: 2x-y-2z=0$.
One way is to parameter the problem :
$\begin{pmatrix}x\\y\\z\end{pmatrix}\in P \Leftrightarrow \begin{pmatrix}x\\y\\z\end{pmatrix}\in \lbrace\begin{pmatrix}\frac{1}{2}s+t\\s\\t\end{pmatrix},(s,t)\in\mathbb{K}\rbrace$
So you find immediately that $E_1 = \operatorname{span}\left(\begin{pmatrix}\frac{1}{2}\\1\\0\end{pmatrix},\begin{pmatrix}1\\0\\1\end{pmatrix}\right)$
This is the classic method of resolution.
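You can double-check the eigenpairs numerically (an illustrative numpy sketch):

```python
import numpy as np

A = np.array([[3., -1., -2.],
              [2.,  0., -2.],
              [2., -1., -1.]])

v0 = np.array([1.0, 1.0, 1.0])   # claimed eigenvector for eigenvalue 0
v1 = np.array([0.5, 1.0, 0.0])   # claimed eigenvector for eigenvalue 1
v2 = np.array([1.0, 0.0, 1.0])   # claimed eigenvector for eigenvalue 1

print(np.allclose(A @ v0, 0 * v0))   # True
print(np.allclose(A @ v1, 1 * v1))   # True
print(np.allclose(A @ v2, 1 * v2))   # True
```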
|
H: Notation in a Dirichlet Character
What is the meaning of the sign in the notation for these Dirichlet characters?
In the context of specific cases building up to a general proof of the Theorem Primes in Progressions there are several depictions of characters:
In the case of $\pmod 3$, there is the symbol $\chi_{- 3}(n)$. Whereas in the case of $\pmod 8$, there are $\chi_{- 8}(n)$ and $\chi_{8}(n)$. In the latter instances, the values of $+ 1$ and $- 1$ attributed to $n \equiv 3$ and to $n \equiv 7$ are interchanged.
Thanks
AI: $\chi_{-8}(n)$, $\chi_8(n)$ have another notation:
$$\left(\frac{-8} {n}\right), \left(\frac{8} {n}\right)$$
These are Jacobi symbols, and notice that they are equivalent to
$$\left(\frac{-2} {n}\right), \left(\frac{2} {n}\right),$$
and the well-known formulas
$$\left(\frac {-1} n\right)=(-1)^{\frac{n-1}{2}},\left(\frac {2} n\right)=(-1)^{\frac{n^2-1}{8}}.$$
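A small Python table of the values on odd $n$ modulo $8$, computed directly from these formulas (illustrative):

```python
# chi_8(n) = (2/n) and chi_{-8}(n) = (-2/n) = (-1/n)(2/n) on odd n.
for n in (1, 3, 5, 7):
    chi8  = (-1) ** ((n * n - 1) // 8)        # (2/n)
    chim8 = (-1) ** ((n - 1) // 2) * chi8     # (-1/n) * (2/n)
    print(n, chi8, chim8)
# n = 3: chi_8 = -1, chi_{-8} = +1;  n = 7: chi_8 = +1, chi_{-8} = -1 (interchanged)
```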
|
H: Use maximum modulus theorem to control the number of zeros of analytic functions.
Let $f$ be an analytic function on $\bar B(0,R)$ with $|f(z)|\leq M$ for $|z| \leq R$, and suppose $|f(0)|=a>0$. Then the number of zeros of $f$ in $B(0,\frac{1}{3}R)$ is at most $\log_2(\frac{M}{a}) = \frac{\log(M/a)}{\log 2}$.
This result seems too strong to me, as it ensures that the number of zeros can be controlled by an upper bound on the modulus together with a positive lower bound on the modulus at $0$. I assume this is a special property of analytic functions on complex space, as there is no similar result in the real case.
However I've no clue how to prove this one, while the hint suggests considering the following function: $g(z)=f(z)\prod_{k=1}^n(1-\frac{z}{z_k})^{-1}$, where $\{z_{k}\}$ are the zeros of $f$ in $B(0,\frac{1}{3}R)$.
AI: This problem is related to Jensen's formula; in fact solving this problem brings you halfway to the proof of Jensen's formula, in a sense.
Following the hint, you should compute an upper bound for $\max_{|z|\le R} |g(z)|$ (what's the largest it can be, given the locations of the $z_k$?). I obtain
$$
\max_{|z|\le R} |g(z)| \le M 2^{-N},
$$
where $N$ is the number of zeros of $f$ in $B(0,R/3)$.
The maximum modulus principle tells you that $|g(0)| \le \max_{|z|\le R} |g(z)|$, and so in particular $|g(0)| = a$ must be smaller than the upper bound. The resulting inequality should solve your problem for you.
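To see the bound in action, here is an illustrative numeric check with a polynomial whose zeros are (hypothetically) chosen inside $|z|<R/3$:

```python
import numpy as np

R = 1.0
zeros = [0.1, -0.2 + 0.1j, 0.25j]               # three zeros inside |z| < R/3
f = lambda z: np.prod([z - z0 for z0 in zeros], axis=0)

theta = np.linspace(0.0, 2 * np.pi, 10_000)
M = np.max(np.abs(f(R * np.exp(1j * theta))))   # max of |f| on |z| = R
a = np.abs(f(0.0))

print(len(zeros), np.log(M / a) / np.log(2))    # N = 3 is at most log2(M/a)
```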
|
H: How to combine these 2 notations to be as simple as possible?
Consider the following question:
Find the antiderivative of $f(x) = 6x \, (x^2+1)^5$.
I have been using 2 notations and I would like to combine them.
Notice that $[x^2 + 1]' = 2x$, so $6x \, dx = 3 d(x^2+1)$
Take $(x^2 + 1) = u$, then $6x$ is $3\,u'$.
I would like to combine these, but I'm wondering if it would still be correct:
Take $[x^2 +1] = u$, then $6x = 3 \, du$. This would save me a lot of time on tests, and also would make it easier, however with calculus I am never sure if my notation is correct.
Is this correct, and if not, what would be the optimal way to combine these methods as concisely as possible?
By the way, this would be how I would do the problem:
Take $[x^2 +1] = u$, then $6x = 3 \, du$.
$6x(x^2+1)^5 = 3du \cdot u^5 = 3u^5 du = d \dfrac{1}{2}u^6 = d \dfrac{1}{2}(x^2+1)^6$.
$F(x) = \dfrac{1}{2} (x^2+1)^6 + c$.
AI: Yes, that will work, as long as you keep the $dx$ in view: $$u = x^2 + 1 \;\implies\; du = 2x\,dx,\;\text{so}\; 6x\,dx = 3\, du$$ (note it is $6x\,dx$, not $6x$ by itself, that equals $3\,du$)
and the resulting integral will be $$\int 6x(x^2 + 1)^5 \,dx = \int u^5 \,(3 du) = 3\int u^5\,du$$
Now we just integrate with respect to $u$:
$$3\int u^5\,du = 3\left[\frac{u^6}{6}\right] + C$$
Back substituting gives us $$\frac 12(u^6) + C = \frac 12(x^2 + 1)^6 + C$$
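As always, you can check the result by differentiating, e.g. with sympy:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.Rational(1, 2) * (x**2 + 1)**6

# Should print 0: F'(x) equals 6x(x^2 + 1)^5.
print(sp.expand(sp.diff(F, x) - 6 * x * (x**2 + 1)**5))
```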
|
H: What's the ratio of triangles made by diagonals of a trapezoid/trapezium?
In the above image, what will be the ratio of areas of triangle $A$ and $B$?
From Googling, I've found that:
$\operatorname{Ar}(A) = \dfrac{a^2h}{2(a+b)}$
and
$\operatorname{Ar}(B) = \dfrac{b^2h}{2(a+b)}$
but how do I get these formulas from the classic formula of $\dfrac{\rm base \times height}2$?!
Basically, how were the formulas in this image figured out?
(Image: http://www.geometryexpressions.com/explorations/04-Examples/02-Example_Book/02-Quadrilaterals/Example%20036-Areas_Of_Triangles_In_A_Trapezoid_files/image004.gif)
AI: Well, what is the height of the triangle $\Delta a$? It is similar to the triangle $\Delta b$ (why?), and so the heights $h_a$ of $\Delta a$ and $h_b$ of $\Delta b$ are related in the following way:
First, we have $h = h_a + h_b$. This is obvious if you draw them on your figure. Secondly, from similarity, we have that
$$
\frac{h_b}{h_a} = \frac{b}{a}.
$$
By manipulating the last equation, and with a little help from the first to rid ourselves of $h_b$, we get
$$
\frac{h-h_a}{h_a} = \frac{b}{a} \\\\
\frac{h}{h_a} - 1 = \frac{b}{a} \\\\
\frac{h}{h_a} = 1 + \frac{b}{a} \\\\
h_a = \frac{h}{1 + \frac{b}{a}} = \frac{ha}{a + b}
$$
This means that the area of $\Delta a$ is equal to
$$
\frac{1}{2}\cdot a \cdot h_a = \frac{1}{2}\cdot a \cdot \frac{ha}{a + b} = \frac{a^2h}{2(a+b)}
$$
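If you want to double-check with coordinates, here is an illustrative sympy computation (the horizontal placement of the top side is just a convenient choice; it does not affect the areas):

```python
import sympy as sp

a, b, h = sp.symbols('a b h', positive=True)
t, u = sp.symbols('t u')

# Bottom side from (0,0) to (b,0); top side from (0,h) to (a,h).
# Diagonal 1: (0,0) -> (a,h) at parameter t; diagonal 2: (b,0) -> (0,h) at parameter u.
sol = sp.solve([t*a - (b - u*b), t*h - u*h], [t, u], dict=True)[0]
y_star = sp.simplify(sol[t] * h)               # height of the diagonal intersection
print(y_star)                                  # b*h/(a + b)

area_B = sp.simplify(sp.Rational(1, 2) * b * y_star)
area_A = sp.simplify(sp.Rational(1, 2) * a * (h - y_star))
print(area_A, area_B)   # a**2*h/(2*(a + b)) and b**2*h/(2*(a + b))
```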
|
H: Proof involving Continuous Map and Disjoint Sets
I'm a bit confused about the proof my analysis book gives of the following theorem: Consider metric spaces $(S,d), (S^*, d^*)$ and let $f: S \to S^*$ be continuous. If $E$ is a connected subset of $S$, then $f(E)$ is a connected subset of $S^*$.
Proof: Assume $f(E)$ is not connected in $S^*$. Then there exist disjoint open sets $V_1, V_2$ in $S^*$ such that $f(E) \subseteq V_1 \cup V_2$, $f(E) \cap V_1 \neq \emptyset$ and $f(E) \cap V_2 \neq \emptyset$.
Let $U_1 = f^{-1}(V_1)$ and $U_2 = f^{-1}(V_2)$.
I know that $U_1, U_2$ are both open sets in $S$ since $f$ is continuous. But why are they disjoint?
My book also claims that $E \subseteq U_1 \cup U_2, E \cap U_1 \neq \emptyset$ and $E \cap U_2 \neq \emptyset$.
Since I know $f(E) \subseteq V_1 \cup V_2$, applying $f^{-1}$ to both sides gives the desired result. But in order to justify $E \cap U_1 \neq \emptyset$ and $E \cap U_2 \neq \emptyset$ that would mean $f^{-1}(\emptyset) = \emptyset$. Is this correct and why is it?
AI: If $U_1$ and $U_2$ had a common element $x$, then $f(x) \in V_1$ and $f(x) \in V_2$, which is absurd since $V_1$ and $V_2$ are disjoint.
For the other claims: if $y \in f(E) \cap V_1$, write $y = f(x)$ with $x \in E$; then $x \in E \cap U_1$, so $E \cap U_1 \neq \emptyset$, and similarly for $U_2$. And yes, $f^{-1}(\emptyset) = \emptyset$ always, since no point can map into the empty set. Finally, $E \subseteq f^{-1}(f(E)) \subseteq f^{-1}(V_1 \cup V_2) = U_1 \cup U_2$; one takes preimages here rather than literally "applying $f^{-1}$ to both sides", since $f$ need not be invertible.
|
H: Vector Space isomorphisms of $\mathbb{Q}(z)$ preserving the Galois group (where $z$ is a primitive third root of unity)
Take the field extension $\mathbb{Q}(z)$ where $z$ is a primitive third root of unity and consider the set $A$ of vector-space automorphisms of $\mathbb{Q}(z)$ so that for $T \in A$ the map $\phi \mapsto T\phi T^{-1}$ is an isomorphism of the Galois group. It is easy to show that $T \phi T^{-1}$ is an automorphism of the additive group of $\mathbb{Q}(z)$ since $T$ and $\phi$ are both linear. However, showing that $T \phi T^{-1}(ab)=T \phi T^{-1}(a) T \phi T^{-1}(b)$ for each automorphism $\phi$ and $a, b \in \mathbb{Q}(z)$ puts a condition on $T$.
Specifically, I can show using other methods that any $T$ fixes the degree of the minimal polynomial of elements of $\mathbb{Q}$ and so $T$ must be of the form $1 \mapsto c, z \mapsto a+bz$ where $b \neq 0$ and $a, b, c \in \mathbb{Q}$. Then the multiplicative condition on $T \phi T^{-1}$ gives the condition $a+c+ac=b$. Thus, it appears that maps of the form $1 \mapsto c, z \mapsto a+(a+c+ac)z$ should work. Unfortunately, my computations show that such maps do not form a group and yet, from my definition of these maps above, it seems like they should. What am I doing wrong here?
AI: Complex conjugation $\phi:z\mapsto z^2=-1-z$ is the only non-trivial automorphism of $\mathbb{Q}(z)$, so we must have $T\phi T^{-1}=\phi$, i.e. $T$ must commute with $\phi$.
As a linear transformation of a vector space over the rationals, $\phi$ is semisimple, and it has eigenvalues $+1$ (resp. $-1$), with the real (resp. imaginary) line being the eigenspaces. A basic fact about commuting linear transformations is that each must preserve the other's eigenspaces. In other words, we must have $T(1)=c\in \mathbb{Q}, c\neq0$ (as observed by the OP, also) as well as $T(\sqrt{-3})=a\sqrt{-3}$ for some $a\in\mathbb{Q}, a\neq0$.
It turns out that these conditions are also sufficient. This is perhaps easiest to see by writing the linear matrices of these linear transformations w.r.t. the eigenbasis $\{1,\sqrt{-3}\}$. We have
$$
M(\phi)=\left(\begin{array}{cc}1&0\\0&-1\end{array}\right),\quad
\text{and}\quad
M(T)=\left(\begin{array}{cc}c&0\\0&a\end{array}\right).$$
It is clear that these matrices commute irrespective of the choice of $a,c\in\mathbb{Q}^*$. Earlier we showed that these are the only possibilities, so the question is solved in this case.
But do observe that this thinking doesn't immediately generalize to cubic Galois extensions. One difference comes from the possibility that conjugation by $T$ does not need to be the trivial automorphism of the Galois group. Another difference comes from the fact that the smaller field no longer contains the eigenvalues of the matrices of the elements of the Galois group. A different approach will be needed.
|