H: Classifying groups of order 21.
We have Sylow subgroups of orders $7$ and $3$. Let $n_3$ and $n_7$ denote the number of Sylow $3$-subgroups and Sylow $7$-subgroups, respectively.
$n_7 \equiv 1 \mod 7$ and $n_7 | 3 \implies n_7 = 1$
$n_3 \equiv 1 \mod 3$ and $n_3 | 7 \implies n_3 = 1, 7$
Let $P_3 \cong \Bbb{Z}_3 $ and $P_7 \cong \Bbb{Z}_7$. Since $P_7$ is always normal in G, we know that $G \cong \Bbb{Z}_7 \rtimes_{\alpha} \Bbb{Z}_3$.
Case 1:
Let $n_3 = 1$. Then we know that $P_3$ is also normal, and so $G \cong \Bbb{Z}_7 \times \Bbb{Z}_3$.
Case 2:
Let $n_3=7$. We have $\alpha : \Bbb{Z}_3 \rightarrow \Bbb{Z}_7^{\times} \cong \Bbb{Z}_6$
So the generator $a$ of $\Bbb{Z}_3$ needs to be sent to an element of order 3 in $\Bbb{Z}_6$, which means that it needs to be sent to either $2$ or $4$.
Let $\alpha'(a) = 2$ and $\alpha(a)=4$.
There are two theorems from J.S. Milne's notes that say:
Theorem 1 If there exists an $\alpha \in Aut(N)$ such that $$\theta'(q) = \alpha \circ \theta(q) \circ \alpha^{-1} \forall q \in Q$$ then the map $$(n,q) \rightarrow (\alpha(n), q): N \rtimes_{\theta} Q \rightarrow N \rtimes_{\theta'} Q$$ is an isomorphism.
Theorem 2 If $\theta = \theta' \circ \alpha$ with $\alpha \in Aut(Q)$, then the map $$(n,q) \rightarrow (n, \alpha(q)) : N \rtimes_{\theta} Q \approx N \rtimes_{\theta'} Q$$
We know that there are two automorphisms of $\Bbb{Z}_3$, the identity and $f: n \rightarrow 2n$. Since $\alpha(a) = \alpha' \circ f (a)$, we know that $\Bbb{Z}_7 \rtimes_{\alpha} \Bbb{Z}_3 \cong \Bbb{Z}_7 \rtimes_{\alpha'} \Bbb{Z}_3$.
So I was able to show that they are isomorphic by using theorem 2. But I was wondering if I could use theorem 1 instead. Here is the problem that I ran into when I attempted to use the first one: We know that $Aut(\Bbb{Z}_7) \cong \Bbb{Z}_6$, and it's easier to see which elements have order three in $\Bbb{Z}_6$ than in $Aut(\Bbb{Z}_7)$. But after finding the elements 2 and 4, how can I use theorem 1? In other words, we need automorphisms in order to use theorem 1, but I only have elements of $\Bbb{Z}_6$ which are obviously not automorphisms.
Thank you in advance
AI: In your example Theorem 1 cannot be used, because here $Aut(N)$ is abelian. Observe that $\theta(q)$ and $\alpha$ are both in $Aut(N)$, so from its commutativity it follows that
$$
\alpha\circ \theta(q)\circ \alpha^{-1}=\theta(q)
$$
for all choices of $\alpha$.
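For readers who like to check such things numerically, here is a tiny sketch (assuming Python 3; not part of the original exchange) confirming that $2$ and $4$ are exactly the elements of order $3$ in $\Bbb{Z}_7^{\times}\cong\Bbb{Z}_6$:

```python
# Orders of elements of the multiplicative group mod 7.
for g in range(1, 7):
    order = next(k for k in range(1, 7) if pow(g, k, 7) == 1)
    print(g, order)
# Only g = 2 and g = 4 have order 3, matching the two choices for alpha(a).
```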
|
H: Absolute value and roots
I've been trying to solve this problem and I always get 1, but the answer is $1 - 2x$.
If $x<\frac12$ then what is $\left|x-\sqrt{(x-1)^2}\right|$ ?
AI: Note that $\sqrt {t^2}=|t|$ for all real numbers $t$.
By the condition $x<\frac12$, it is clear that $x-1$ is negative, hence $\sqrt{(x-1)^2}=|x-1|=1-x$ and the final result is $|x-(1-x)|=|2x-1|=1-2x$ (because $2x-1<0$).
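A quick numerical spot check of this (a Python sketch on a few sample values of $x<\frac12$; not part of the original answer):

```python
import math

for x in [-2.0, 0.0, 0.3, 0.49]:
    print(x, abs(x - math.sqrt((x - 1) ** 2)), 1 - 2 * x)
# The last two columns agree for every x < 1/2.
```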
|
H: Why do we want to use the Grassmannian?
I wonder what is special about the Grassmannian. Why do we want to use this space?
On wikipedia, it says:
"By giving a collection of subspaces of some vector space a topological structure, it is possible to talk about a continuous choice of subspace or open and closed collections of subspaces; by giving them the structure of a differential manifold one can talk about smooth choices of subspace.
"
As a student with little knowledge of differential geometry, I find it hard to catch the meaning of this.
Could anyone give me some more intuition to understand it?
Thanks.
AI: Here are two examples:
First, suppose you have a region in $\mathbb R^n$ and you wish to study the $k$-dimensional area of its projections onto all possible $k$-dimensional subspaces of $\mathbb R^n$. This defines a function on the Grassmannian $G(k,n)$ of $k$-planes in $\mathbb R^n$. You might even want to integrate that function and get the average shadow of the region.
Second, perhaps most fundamental, suppose $M\subset\mathbb R^n$ is a smooth submanifold. We can define the Gauss map $M\to G(k,n)$, which assigns to each point $p$ of $M$ its tangent plane $T_pM$. Properties of this mapping are fundamental in differential geometry, and deep theorems can be proved by studying the universal setting of the "tautological" vector bundle $\xi\to G(k,n)$.
|
H: Unique solution to non linear system of equations with boolean coefficients
Say we have a system of $m$ equations of the form:
$$a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n = p_1$$
$$...$$
$$a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n = p_m$$
Where the $p_i,x_j \in \mathbb{R}$, but the $a_{ij}$'s are boolean values, i.e $a_{ij}\in\{0,1\}$.
So, if I want to solve for $\{a_{ij}\}$ and $\vec{x}$ for a given $\vec{p}$, a total of $(m+1)n$ variables,
What are the conditions on $m$ ensuring the existence of at most one solution?
Does it change anything if $p_i,x_j \in \mathbb{Z}$ instead of $\mathbb{R}$?
AI: Here is a generic construction for this problem. Set $a_{ij}=0$ for $i \neq j$, let $a_{ii} = 1$, and take $x_i = p_i$. That is a solution.
Note that this is an integer solution as well, as long as $\vec{p}$ is an integer vector.
I am surprised how you set up the problem though - typically, don't we have $a_{ij}$ fixed?
EDIT Even more than that: uniqueness fails in many ways. Whenever every $p_i$ can be written as a sum of some subset of the entries of $\vec{x}$, we can choose the boolean $a_{ij}$ in row $i$ to select exactly that subset, so the same $\vec{p}$ is produced by many different pairs $(\{a_{ij}\},\vec{x})$.
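A minimal sanity check of the diagonal construction (a sketch assuming Python with numpy; the vector $\vec p$ below is arbitrary test data):

```python
import numpy as np

p = np.array([3.7, -1.2, 0.5])   # arbitrary right-hand side
a = np.eye(len(p))               # boolean coefficients: a_ii = 1, a_ij = 0 otherwise
x = p.copy()                     # x_i = p_i
print(np.allclose(a @ x, p))     # True: the system is satisfied
```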
|
H: Is it possible to divide a circle into $7$ equal "pizza slices" (using geometrical methods)?
Or is it possible to divide a circle into $n$ equal "pizza slices" (I don't know what to call these parts, but I think you'll know what I mean), where $n$ has no common divisor with $360$? Or are the $360$ degrees just "arbitrarily" chosen, in such a way that it's also possible to make a system with $7$ "degrees" in a circle?
The main question is actually whether it's possible with a ruler and a pair of compasses to divide a circle into any number of slices, and whether there's a condition on a number (e.g. having a common divisor with $360$, as I suggested) so that it's possible to slice the circle into that number of pieces.
AI: The following result answers your question. We can divide a circle into $N$ parts by straightedge and compass if and only if
$$N=2^kp_1p_2\dots p_t,$$
where the $p_i$ are distinct Fermat primes. (We can have $k=0$, or $t=0$.)
A Fermat prime is a prime of the form $2^{2^j}+1$. There are only $5$ known Fermat primes: $3$, $5$, $17$, $257$, and $65537$.
Since $7$ is not a Fermat prime, we cannot by straightedge and compass do the division you seek.
Remark: The result was first published by Wantzel. Some people give Gauss credit for the result. Gauss certainly was the first to prove that the circle can be divided into $17$ equal parts by straightedge and compass. He almost certainly knew that any $360^\circ/N$ angle, where $N$ is of the shape described above, is constructible. There is no evidence that he knew that nothing else is.
Put $N=9$. Then $N$ is not of the shape described above, since $3$ occurs twice in the factorization. This shows that the $20^\circ$ angle is not constructible. Since $60^\circ$ is certainly constructible, that shows we cannot trisect the general angle by straightedge and compass.
Many books have proofs of the Wantzel result, for example Allan Clark's Elements of Abstract Algebra.
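If you want to see which small $N$ pass the Gauss–Wantzel criterion, here is a short sketch (assuming Python 3; the cutoff $100$ is arbitrary):

```python
fermat_primes = [3, 5, 17, 257, 65537]   # the five known Fermat primes

def constructible(n):
    for p in fermat_primes:
        if n % p == 0:
            n //= p
            if n % p == 0:               # a Fermat prime may occur only once
                return False
    while n % 2 == 0:
        n //= 2
    return n == 1

print([n for n in range(3, 101) if constructible(n)])
# 7 and 9 are absent, in line with the impossibility results above.
```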
|
H: Preimage of invariant subspace
Suppose we have a linear map $A \colon V \to V$ on a finite-dimensional vector space, and $W \leq V$ is an $A$-invariant subspace. Then we obviously have $\operatorname{Ker} A + W \subseteq A^{-1}(W)$.
Is it then necessary $\operatorname{Ker} A + W = A^{-1}(W)$ ?
I can prove it in the case where $A$ is a projector. How does one prove it in general? Or is there a counterexample?
AI: No. Let $V:=\Bbb R^2$ and $A$ be given by the matrix $\pmatrix{0&0\\1&0}$, i.e. it takes $e_1\mapsto e_2$ and $e_2\mapsto 0$ for the standard basis $(e_1,e_2)$.
Now let $W:={\rm span}(e_2)$. It is invariant, and actually $W=\ker A$. However, $A^{-1}(W)$ also contains $e_1$ (so it must be all $V$).
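The counterexample is easy to see numerically as well; a small sketch (assuming Python with numpy):

```python
import numpy as np

A = np.array([[0.0, 0.0],
              [1.0, 0.0]])             # e1 -> e2, e2 -> 0
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(A @ e2)   # [0. 0.]  -> e2 spans ker A = W, so ker A + W = W
print(A @ e1)   # [0. 1.]  -> A e1 lies in W, so e1 is in A^{-1}(W) but not in ker A + W
```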
|
H: Projective Resolution of $\mathbb{Q}$ and $\mathbb{Q}/\mathbb{Z}$
The question I have is on how to find a Projective Resolution of $\mathbb{Q}$ over $\mathbb{Z}$.
I have found that $\mathbb{Q}$ is flat, but it is not projective. I mention this as I wonder if there is some sort of trick I can use knowing that it is flat.
I am ultimately wanting to figure out $\mathrm{Ext}_{\mathbb{Z}}^n(\mathbb{Q},B)$, respectively $\mathrm{Ext}_{\mathbb{Z}}^n(\mathbb{Q/Z},B)$ for some arbitrary $\mathbb{Z}$-module $B$.
Now I know how to calculate the rest of the steps i.e. $\mathrm{Hom}_{\mathbb{Z}}(_-,B)(P)$ where $P$ is the projective resolution of $\mathbb{Q}$ or $\mathbb{Q/\mathbb{Z}}$, but could use any tricks for getting the resolutions.
Thanks in advance,
Brian
AI: The fact is that $\mathbb Z$ is a hereditary ring, that is, submodules of projectives are still projectives (or, equivalently, quotients of injectives are still injective). Thus for finding a projective resolution of $\mathbb Q$ you can proceed as follows:
(1) take a surjection $f:\mathbb Z^{(\mathbb Q)}\rightarrow \mathbb Q$;
(2) the kernel of $f$ is projective as it is a submodule of a direct sum of projectives;
(3) $0\to\ker(f)\to \mathbb Z^{(\mathbb Q)}\to \mathbb Q\to 0$ is a projective resolution.
For computing $\mathrm{Ext}$, you can notice by the above argument that higher $\mathrm{Ext}$ groups are always trivial over hereditary rings...
It is also easy to find an injective resolution for $\mathbb Z$ (and you can use that to compute the $\mathrm{Ext}$-groups):
$$0\to \mathbb Z\to \mathbb Q\to \mathbb Q/\mathbb Z\to 0$$
|
H: Is it true player II must have a winning strategy, if the winning set is a closed but not open set?
Suppose, in a Gale-Stewart game, player I and player II choose from $\omega$ in an alternating fashion. If the outcome is in the winning set $W$, then player I wins. Otherwise player II wins. If $W$ is a closed but not open set in $\omega^{\omega}$, is it true that player II must have a winning strategy?
Some thoughts: Firstly, we only have to consider the case when $W$ is not countable, and we know $|W| = \mathfrak{c}$. The problem is reduced to how to characterize closed but not open sets with the cardinality equals the continuum. I don't know how to do this. I can only come up with some examples, say $A^{\omega}$, provided $A \subset \omega$ and $|A| \geq 2$, set of all permutations, and set of all injections from $\omega$ to $\omega$. It seems to me it holds in these examples. Is it true in general?
AI: Consider the set $W$ of those sequences $x$ in $\omega^\omega$ in which $x_{2n}=0$ for all $n$; the even-numbered components $x_{2n}$ are those chosen by player I. So Player I has a trivial winning strategy: Just play 0 at every move. But $W$ is closed and not open.
|
H: Why does one counterexample disprove a conjecture?
Can't a conjecture be correct about most solutions except maybe a family of solutions? For example, a few centuries ago it was widely believed that $2^{2^n}+1$ is a prime number for any $n$ . For $n=0$ we get $3$ , for $n=1$ we get $5$ , for $n=2$ we get $17$ , for $n=3$ we get $257$ , but for $n=4$ it was too difficult to find if this was a prime, until Euler was able to find a factor of it. It seems like this conjecture stopped after that. But what if this conjecture isn't true only when $n$ satisfies a certain equation, or when $n$ is a power of $2$ $\ge$ $4$ , or something like that? Did anybody bother to check? I am not asking about this conjecture specifically, but as to why we consider one counterexample as proof that a conjecture is totally wrong.
P.S. Andre Nicolas pointed out that Euler found a factor when $n=5$, not $4$ .
AI: This is because, in general, a conjecture is typically worded "Such-and-such is true for all values of [some variable]." So, a single counter-example disproves the "for all" part of a conjecture.
However, if someone refined the conjecture to "Such-and-such is true for all values of [some variable] except those of the form [something]." Then, this revised conjecture must be examined again and then can be shown true or false (or undecidable--I think).
For many problems, finding one counter-example makes the conjecture not interesting anymore; for others, it is worthwhile to check the revised conjecture. It just depends on the problem.
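For the Fermat-number example raised in the question, Euler's counterexample is easy to reproduce (a one-line check, assuming Python 3):

```python
n = 5
F = 2 ** (2 ** n) + 1
print(F % 641 == 0, F // 641)   # True 6700417 -- Euler's factor 641 of 2^(2^5) + 1
```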
|
H: Decomposition an operator in terms of symmetric and anti-symmetric components
In linear algebra, we can write any operator as the sum of a symmetric and skew-symmetric parts:
$$A=A^{\mathrm{sym}}+A^{\mathrm{skew}}$$
where
$$A^{\mathrm{skew}}=\frac{1}{2}(A-A^T)$$
and
$$A^{\mathrm{sym}}=\frac{1}{2}(A+A^T)$$.
Can the same be done with any general (continuous) operator?
AI: The answer is yes. This is a standard trick in Operator Theory. Provided that the operator $A$ is bounded (i.e. continuous) it has a bounded adjoint $A^*$. This would be the conjugate transpose of a matrix in finite dimensions.
We can decompose $A$ into a sum of a self adjoint operator and an anti-self adjoint operator by:
$$A = \frac{A + A^*}{2} + \frac{A - A^*}{2}.$$
This also holds for functions of a real variable in a similar way. For instance we can write any function $f: \mathbb{R} \to \mathbb{R}$ as a sum of a even and odd function by:
$$f(x) = \frac{f(x) + f(-x)}{2} + \frac{f(x) - f(-x)}{2}$$
In many instances we try to draw analogies between operators and real/complex numbers. This often happens through the spectral theorem.
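A concrete matrix illustration of the decomposition (a sketch assuming Python with numpy; the matrix $A$ is arbitrary test data):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [4.0, 3.0, -1.0],
              [5.0, 0.0, 2.0]])
sym = (A + A.T) / 2
skew = (A - A.T) / 2
print(np.allclose(A, sym + skew))   # True
print(np.allclose(sym, sym.T))      # True: symmetric part
print(np.allclose(skew, -skew.T))   # True: skew-symmetric part
```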
|
H: What is meant by "constant" in the optional stopping theorem?
One of the three condition of the optional stopping theorem is that "There exists a constant $c$ such that $|X_{t\wedge \tau}| \leq c$ a.s. for all $t\in \mathbb N_0$".
In the article of Wikipedia on the optional stopping theorem, the second item in the linked section gives an example of an application of the theorem:
Suppose a random walk starting at $a \geq 0$ that goes up or down by one
with equal probability on each step. Suppose further that the walk
stops if it reaches 0 or $m \geq a$; the time at which this first occurs is
a stopping time. If it is known that the expected time at which the
walk ends is finite (say, from Markov chain theory), the optional
stopping theorem predicts that the expected stop position is equal to
the initial position $a$. Solving $a = pm + (1 - p)\cdot 0$ for the probability
$p$ that the walk reaches $m$ before $0$ gives $p = \frac am$.
Since once the random walk is defined the number $m$ is a fixed integer, could also the condition (c) there be applied? That is, it is true that there exist a "constant" that bounds the absolute value of the martingale?
Another example (third item of the linked section), gives me even more doubt:
Care must be taken, however, to ensure that one of the conditions of
the theorem holds. For example, suppose the last example ["a random walk $X$ that starts at 0 and stops if it reaches $-m$ or $+m$"] had instead
used a 'one-sided' stopping time, so that stopping only occurred at
$+m$, not at $-m$. The value of $X$ at this stopping time would therefore be $m$. Therefore, the expectation value $E[X_{\tau}]$ must also be $m$, seemingly in
violation of the theorem which would give $E[X_{\tau}] = 0$. The failure of
the optional stopping theorem shows that all three conditions fail.
It seems to me that in the latter example the real problem is that the "one-sided" stopping time is not indeed a stopping time: it can be infinite. On the other hand, if it were a stopping time, it does not seem to me that "all three conditions fail" because $|X_{\tau}|\leq m$ for a fixed $m$.
What am I missing?
AI: For the first: yes, you could also use part (c) there, because the stopped process is always bounded by $m$ in absolute value.
For the second: stopping times are allowed to be infinite! However, the thing you are missing here is due to a shortfall in Wikipedia's explanation of their example: when they refer to the one-sided stopping time, they mean that the ONLY stopping condition is reaching $m$... so, for instance, you could go to $-10,000,000$, then work your way back. So, in this case, the stopped process isn't bounded - it is only bounded above.
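The two-sided example is easy to simulate; a rough sketch (assuming Python 3; the values $a=3$, $m=10$ are arbitrary choices):

```python
import random

a, m, trials = 3, 10, 100_000
hits_m = 0
for _ in range(trials):
    pos = a
    while 0 < pos < m:
        pos += random.choice((-1, 1))
    hits_m += (pos == m)
print(hits_m / trials, a / m)   # both approximately 0.3, as optional stopping predicts
```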
|
H: Inequality for compact operator between Banach spaces
I've been pondering about the following Lemma for a while now, but can't think of a proof. In fact, I can't even think of a way to prove it.
Let $E$, $F$ and $G$ be Banach spaces, $T \in \mathcal{K}(E,F)$ (i.e. a compact operator) and $S \in \mathcal{L}(F,G)$ injective. Then for each $\epsilon \gt 0$ there is a $c_\epsilon \gt 0$ such that for all $x \in E$ we have $$\|Tx\| \le \epsilon \|x\| + c_\epsilon \|STx\|.$$
I have no idea what properties to use here. How can I see that this holds?
AI: Suppose it weren't so. Then there'd be an $\varepsilon > 0$ such that for every $n \in \mathbb{N}$ there is an $x_n \in E$ with
$$\lVert T x_n\rVert > \varepsilon \lVert x_n\rVert + n\cdot \lVert ST x_n\rVert.$$
$x_n$ cannot be $0$, hence we may without loss of generality assume that $\lVert x_n\rVert = 1$.
$T$ is compact, hence $T x_n$ has a convergent subsequence, say $T x_{n_k} \to y \in F$. Then $\lVert y\rVert \geqslant \varepsilon$ since $\lVert T x_{n_k}\rVert > \varepsilon$. In particular, $y \neq 0$, hence also $S y \neq 0$. But then
$$\lVert S y\rVert = \lim_{k \to \infty} \lVert ST x_{n_k}\rVert \leqslant \limsup_{k\to\infty} \frac{1}{n_k}\lVert T x_{n_k}\rVert = \lim_{k \to \infty} \frac{1}{n_k} \lVert y\rVert = 0,$$
which contradicts the injectivity of $S$.
It might be good to point out that if $S$ has closed range (in particular if it is surjective), you have the estimate $\lVert Tx \rVert \leqslant \lVert S^{-1}\rVert \cdot \lVert STx\rVert$ regardless of whether $T$ is compact.
And that in general, when the range of $S$ is not (necessarily) closed, you need both parts of the right hand side in the estimate for $\varepsilon < \lVert T\rVert$. As an example, consider $E = F = G = \ell^2$ and $S = T$ the operator $(x_k) \mapsto \left(\frac1k x_k\right)$.
|
H: What justifies assuming that a level surface contains a differentiable curve?
My textbook's proof that the Lagrange multiplier method is valid begins:
Let $X(t)$ be a differentiable curve on the surface $S$ passing through $P$
Where $S$ is the level surface defining the constraint, and $P$ is an extremum of the function that we're seeking to optimize. But how do we know that such a curve exists?
$S$ is specifically defined as the set of points in the (open) domain of the continuously differentiable function $g$ with $g(X) = 0$ but $\operatorname{grad}g(X)\ne0$. The function $f$ that we're seeking to optimize is assumed to be continuously differentiable and defined on the same open domain as $g$, and $P$ is an extremum of $f$ on $S$.
AI: By the Implicit Function Theorem, near $P$ you can represent your level surface as a graph, say $z=\phi(x,y)$, where $\phi$ is continuously differentiable. If $P=\phi(a,b)$, take any line through $(a,b)$ and you get a nice curve.
|
H: If 5 coins are flipped what is the probability of getting only one head?
How would I do this question? I know if the question said: at least one head then I would do:
${5\choose0}=1$
$2^5 - 1 = 32-1 = 31$
AI: We assume that the coin is fair and is flipped fairly.
There are $2^5$ equally likely strings of length $5$ made up of the letters H and/or T.
There are precisely $5$ strings that have exactly $1$ H and $4$ T.
So the required probability is $\dfrac{5}{2^5}$.
Remark: Suppose that a coin has probability $p$ of landing heads, and $1-p$ of landing tails. If the coin is tossed independently $n$ times, then the probability of exactly $k$ heads is $\binom{n}{k}p^k(1-p)^{n-k}$.
In our case, $n=5$, $p=1/2$, and $k=1$.
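A quick check of the answer, both exactly and by simulation (a sketch assuming Python 3):

```python
from math import comb
import random

exact = comb(5, 1) / 2 ** 5
trials = 100_000
hits = sum(1 for _ in range(trials)
           if sum(random.randint(0, 1) for _ in range(5)) == 1)
print(exact, hits / trials)   # both approximately 5/32 = 0.15625
```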
|
H: Non-vanishing vector field on $\mathbb{R}P^{2n+1}$
I'm trying to cook up a non-vanishing vector field on $\mathbb{R}P^{2n+1}$. I know that $S^{2n+1}$ admits one, namely $(x_1,\dots,x_{2n+2})\mapsto (-x_2,x_1,\dots,-x_{2n+2},x_{2n+1})$. Moreover, I know that $S^{2n+1}$ is a smooth double cover of $\mathbb{R}P^{2n+1}$ via the map $x\mapsto \{x,-x\}$. Since this vector field is odd, $X(p)=-X(-p)$, I was hoping there might be a way to cook up a vector field on $\mathbb{R}P^{2n+1}$. So, this motivates the two following questions:
Specifically, how may one explicitly construct a non-vanishing vector field on $\mathbb{R}P^{2n+1}$ (using the route above or not).
Say $\tilde M$ and $M$ are smooth manifolds, and $p:\tilde{M}\to M$ is a smooth covering map. If $X(p)$ is a smooth vector field on $\tilde{M}$, under what conditions is there a natural way to cook up a vector field on $M$? (I don't mean natural in the rigorous sense).
Thanks!
AI: More generally. Suppose that a group $G$ acts properly discontinuously on a manifold $M$ and that you have a vector field $X$ on $M$ which is invariant under $G$, so that for all $g\in G$ and all $p\in M$ we have $$d_pg(X_p)=X_{gp}.$$ Then the quotient $M/G$ is a manifold, the canonical projection $\pi:M\to M/G$ is smooth and locally a diffeo, and there is a vector field $Y$ on $M/G$ such that $d_p\pi(X_p)=Y_{\pi(p)}$ for all $p\in M$.
In particular, if the field $X$ happens to be everywhere non-zero, the field $Y$ has the same property.
|
H: Understanding a proof about the Brocard problem
I was reading the paper by Berndt and Galway, "The Brocard–Ramanujan Diophantine equation $n!+1=m^2$", and I got stuck at the part where it says:
(1)$$n!+1=m^2$$
(2)$$\left ( \frac{n!+1}{p} \right)=1 \text{ or }0$$
Let us say that we have a “solution” if (2) holds for each of the first 40 primes
$p$ after $10^9$. Computations were performed modulo $p$. Except for the known cases
$n = 4, 5, 7$, we found no further “solutions” of (2). It follows that (1) also has
no further solutions up to $10^9$.
Why do they know that they have a "solution" if it holds for each of the first $40$ primes after $10^9?$ It is not detailed anywhere and I would like to know it, any help on understanding this is greatly appreciated.
AI: By the symbol $\left( \dfrac{a}{p} \right)$, they mean the Legendre symbol, which is $1$ if $a$ is a square $\mod p$, $-1$ if $a$ is not a square $\mod p$, and $0$ if $p$ divides $a$ (we don't call $0$ a square).
Any square number will always be a square mod any prime $p$, unless that prime happens to divide that square (in which case, the Legendre symbol evaluates to $0$). So saying that $\left( \dfrac{n! + 1}{p} \right) = 1 \text{ or } 0$ for many primes is saying that $n! + 1$ behaves essentially like a square. They chose the number $40$ out of a hat (or perhaps $39$ was insufficient, which I highly doubt). I suppose they think that being a square mod 40 large primes is very unlikely, unless you are actually a square. I do not know if there is an established expectation of the likelihood that a not-actual-square is a quadratic residue mod many consecutive primes - that seems like an interesting question to me.
But this doesn't matter in this case. Their 'solutions' might pick up false positives (i.e. not-squares that look like squares) but won't pick up any false negatives. And they don't find any 'solutions' except the known ones. Note that they only have to check a finite number of cases, since they're checking the value of the Legendre symbol up to approximately $10^9$ 40 times. Since they found no 'solution,' there cannot be an actual solution either.
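Here is a small sketch of the test being described (assuming Python 3 with sympy; Euler's criterion is used in place of a Legendre-symbol routine):

```python
from math import factorial
from sympy import nextprime

# The first 40 primes after 10^9, as in the paper.
primes, p = [], 10 ** 9
for _ in range(40):
    p = nextprime(p)
    primes.append(p)

def looks_like_square(n):
    a = factorial(n) + 1
    # Euler's criterion: a^((p-1)/2) mod p is 1 for nonzero squares, 0 if p divides a.
    return all(pow(a % p, (p - 1) // 2, p) in (0, 1) for p in primes)

print([n for n in range(2, 30) if looks_like_square(n)])   # [4, 5, 7]
```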
|
H: To which (logical) language does $\{p\} \Rightarrow q$ belong?
According to my book, the essential difference between a logical implication $\{p\} \Rightarrow q$ and the statement $p \to q$ is that $p \to q$ is part of the propositional language, and $\{p\} \Rightarrow q$ is not. My question is, to which language does the logical implication $\{p\} \Rightarrow q$ belong?
AI: The way that this book appears to be using the notation, if $A$ is a set of formulas and $\phi$ is a formula, the relation $A \Rightarrow \phi$ means that in the appropriate truth table, every row that makes all formulas in $A$ true also makes $\phi$ true.
So, for example, $p \land q \Rightarrow q$ holds because every row that makes $p \land q$ true also makes $q$ true. This is different than $p \land q \to q$, which is not true or false on its own, but only true or false for each row in the truth table. The logical implication arrow $\Rightarrow$ has the effect of quantifying over all the rows. So $p \land q \to q$ has "4 separate truth values", one for each row of a 2 variable truth table, while $p \land q \Rightarrow q$ is simply true.
Similarly, $\{p, p \to q\} \Rightarrow q$ holds, but this is not even of the right syntactic form to be true or false in each row.
The language of formulas is called the object language; the language of $\Rightarrow$ is called the metalanguage, which is another language used to study the object language. The need to keep these distinct is not obvious at first, but the distinction turns out to be vital for many aspects of logic.
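The "one truth value per row" point can be made concrete with a tiny enumeration (a sketch, assuming Python 3):

```python
from itertools import product

rows = list(product([False, True], repeat=2))    # all assignments to (p, q)
print([(not (p and q)) or q for p, q in rows])   # p /\ q -> q: one value per row
print(all(q for p, q in rows if p and q))        # {p /\ q} => q: a single metalevel truth
```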
|
H: Proving the limit superior exists
Suppose $(x_n)$ is a bounded sequence. Let $y_n = \sup \{x_n, x_{n + 1}, x_{n+2}, \dots \}$. Prove that $(y_n)$ converges.
The book said that
Clearly $(y_n)$ is decreasing and bounded below by a lower bound of $(x_n)$
Can someone explain to me why is it clearly decreasing? I had to check case by case.
I found that if $(x_n)$ was decreasing, this was true. If $(x_n)$ was increasing, the $y_n$ stays the same for all $n$. So it isn't decreasing.
Take $x_{n+1} = \sqrt{1 + x_n}$ for example; $y_n = \frac{1 + \sqrt{5}}{2}$. I am wondering what happens for sequences that are neither increasing nor decreasing (is there even such a thing?).
AI: Suppose that $A\subseteq B$, both being bounded non-empty subsets of real numbers. Can you show that $$\sup A\leq \sup B\text{ ? }$$
Now let $A_n=\{x_n,x_{n+1},\ldots\}$ and $A_{n+1}=\{x_{n+1},x_{n+2},\ldots\}$. We then have $A_{n+1}\subseteq A_n$. Use the above.
|
H: Mathematical induction solution I don't understand
$$T(k) = 2T(\frac{k}{2})+k^2$$
$$T(k)\leq 2(c(\frac{k}{2})^2\log(\frac{k}{2}))+k^2$$
$$T(k)\leq \frac{ck^2\log\frac{k}{2}} { 2} + k^2$$
$$T(k)\leq \frac{ck^2\log k}{2} - \frac{ck^2}{2} + k^2$$
$$T(k)\leq ck^2\log k$$
There's something I don't understand with the last two lines. I don't know how the $$- \frac{ck^2}{2}$$
got there. Is it just basic math I'm missing?
AI: $$\log\frac{k}{2} = \log k - \log 2$$ and so: $$\frac{ck^2\log\frac{k}{2}}{2} = \frac{ck^2(\log k - \log 2)}{2} = \frac{ck^2\log k}{2} - \frac{ck^2\log 2}{2}$$
If $\log$ is here the base-2 logarithm, then $\frac{ck^2\log 2}{2} = \frac{ck^2}{2}$, and you're done. If you're using some other base, then your book appears to have an error in it.
|
H: Understanding a Gram-Schmidt example
Here's the thing: my textbook has an example of using the Gram Schmidt process with an integral. It is stated thus:
Let $V = P(R)$ with the inner product $\langle f(x), g(x) \rangle = \int^{1}_{-1}f(t)g(t)dt$.
Consider the subspace $P_2(R)$ with the standard ordered basis $\beta$. We use the Gram-Schmidt process to replace $\beta$ by an orthogonal basis $\{v_1, v_2, v_3\}$ for $P_2(R)$ and then use this orthogonal basis to obtain an orthonormal basis for $P_2(R)$.
OK, so far so good. But here's where I have trouble following. The text says:
take $v_1 = 1$. (This is presumably the first part of the basis).
Then it says: $\lVert v_1\rVert^2= \int^1_{-1}1^2\,dt = 2$ and $\langle x,v_1 \rangle = \int^1_{-1}t \cdot 1\, dt = 0$
and that's where I am a little lost. If the standard ordered basis they are talking about here is {$1, x, x^2$} then it makes a bit more sense but I am just trying to follow the steps. Is that what is happening here? Something else?
AI: You're correct. The standard basis for $P_2(R)$ is $\beta = \{1,x,x^2\}$, as you said, and the author is starting with $\beta$ and following the Gram-Schmidt algorithm to generate an orthogonal basis.
In reply to your comment:
Let me answer your questions about the inner product first. The author has introduced an important type of inner product here. Since the elements of $P_2(R)$ are integrable functions, we can define the inner product of two vectors $f(x),g(x) \in P_2(R)$ as
$$
\langle f(x),g(x) \rangle = \int_{-1}^1 f(t)g(t) \,dt.
$$
So for example, the inner product of $v = 1-3x + 2x^2$ and $w = 5 + 8x$ is
\begin{align}
\langle 1-3x + 2x^2, 5 + 8x \rangle &= \int_{-1}^1 (1 - 3t + 2t^2)(5 + 8t) \,dt \\
&= \int_{-1}^1 5-7t-14 t^2+16 t^3 \,dt = \frac{2}{3}
\end{align}
I'll leave it up to you to verify that this is actually an inner product (worth checking, if you haven't already).
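If you want to double-check that example by machine, here is a one-liner with sympy (an aside, not part of the textbook):

```python
from sympy import symbols, integrate

t = symbols('t')
print(integrate((1 - 3*t + 2*t**2) * (5 + 8*t), (t, -1, 1)))   # 2/3
```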
(Also, in case you were wondering: the choice of bounds $\pm 1$ doesn't really matter here; if you choose any $a$ and $b$ with $-\infty < a < b < \infty$ you get a new inner product $\langle f(x), g(x)\rangle_{a,b} = \int_a^b f(t)g(t) dt$ on $P_2(R)$!)
So to answer one of your questions, yes, you just substitute $1$ for both $f(x)$ and $g(x)$ when computing $\|v_1\|$ ($=\|1\|$):
$$
\|v_1\| = \langle v_1,v_1\rangle^{1/2} = \langle1,1\rangle^{1/2}
= \left(\int_{-1}^1 1 \cdot 1 \,dt\right)^{1/2}.
$$
Summary:
Following the Gram-Schmidt process to generate an orthogonal basis $\{v_1,v_2,v_3\}$ from the ordered basis $\beta = \{1,x,x^2\}$, we obtain the following vectors:
\begin{align}
v_1 &= 1 \text{ (since $1$ = the first vector in $\beta$)} \\
v_2 &= x - \text{proj}_{v_1}(x) = x - \text{proj}_{1}(x) \\
&= x - \frac{\langle x, 1 \rangle}{\langle 1, 1 \rangle} \cdot 1 = x - \frac{0}{2} \cdot 1 = x \\
v_3 &= x^2 - \text{proj}_{v_1}(x^2) - \text{proj}_{v_2}(x^2) \\
&= x^2 - \frac{
\langle x^2, 1 \rangle
}{
\langle 1, 1 \rangle
} \cdot 1
- \frac{
\langle x^2, x \rangle
}{
\langle x, x \rangle
} \cdot x \\
&= x^2 - \frac{
\int_{-1}^1 t^2 \cdot 1 \,dt
}{
\int_{-1}^1 1 \cdot 1 \,dt
} \cdot 1
- \frac{
\int_{-1}^1 t^2 \cdot t \,dt
}{
\int_{-1}^1 t \cdot t \,dt
} \cdot x
= x^2 - \frac{2/3}{2} \cdot 1 - \frac{0}{2/3} \cdot x = x^2 - \frac{1}{3}.
\end{align}
In other words, our new orthogonal basis is
$$
B = \{v_1,v_2,v_3\}.
$$
If you haven't before, it's worth thinking through why this gives you an orthogonal basis (and validating your conceptual understanding by confirming that $\langle v_i, v_j \rangle = 0$ for $i \neq j$).
We've gotten through the labor-intensive part of Gram-Schmidt.
Since $B$ is an orthogonal basis, we can arrive at an orthonormal basis $\{u_1, u_2, u_3\}$ by simply dividing each of the vectors $v_i$ by its norm $\|v_i\| = \sqrt{\langle v_i, v_i \rangle}$.
This is easily done after having calculated $B$, since you can reduce calculating $\|v_i\|$ into taking the square root of a sum of mostly known inner product values.
The resulting orthonormal basis $\{u_1, u_2, u_3\}$ is
\begin{align*}
u_1 &= v_1/\|v_1\| = 1/\|1\| = 1/\sqrt{2} \\
u_2 &= v_2/\|v_2\| = x/\|x\| = x/\left(\int_{-1}^1 t^2 \,dt\right)^{1/2} = \frac{1}{\sqrt{2/3}}x = \sqrt{\frac{3}{2}}x \\
u_3 &= v_3/\|v_3\| = \frac{x^2 - 1/3}{\left(\int_{-1}^1 (t^2 - 1/3)^2 \,dt\right)^{1/2}} = \frac{3\sqrt{5}}{2\sqrt{2}} \left(x^2 - \frac{1}{3}\right) = \frac{3\sqrt{5}}{2\sqrt{2}} x^2 - \frac{\sqrt{5}}{2\sqrt{2}}
\end{align*}
Edit: As @lojle and @RayBern both noticed, I accidentally omitted a projection term when calculating $v_3$. I've corrected the error and revised the surrounding commentary accordingly.
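As a final cross-check of the whole computation, here is a short sympy sketch (assuming Python 3) verifying that $u_1,u_2,u_3$ really are orthonormal for this inner product:

```python
from sympy import symbols, integrate, sqrt, simplify, Rational

x = symbols('x')
u = [1 / sqrt(2),
     sqrt(Rational(3, 2)) * x,
     3 * sqrt(5) / (2 * sqrt(2)) * (x**2 - Rational(1, 3))]

def ip(f, g):
    return integrate(f * g, (x, -1, 1))

print([[simplify(ip(f, g)) for g in u] for f in u])   # the 3 x 3 identity matrix
```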
|
H: $\lim_{x \to\infty}\left(\frac{\pi}{2}-\tan^{-1}x\right)^{\Large\frac{1}{x}}$ Why aren't these two limits equal when they should be?
I was trying to solve this problem, which says to find the limit:
$$\lim_{x\to\infty}\left(\frac{\pi}{2}-\tan^{-1}x\right)^{\Large\frac{1}{x}}$$
Since the OP said the answer was one, and I knew that $\displaystyle\lim_{x\to\infty} x^{\large \frac 1x}=1$ ,
I wanted to show that
$$\lim_{x\to\infty}\left(\frac{\pi}{2}-\tan^{-1}x\right)=\lim_{x \to\infty}x\;\;\;\text{or}\;\;\;\lim_{x\to\infty} \large \frac {\left(\frac{\pi}{2}-\tan^{-1}x\right)}{x}=1$$ .
Using this assumption I arrived at $$\lim_{x\to\infty}\cot x =\lim_{x\to\infty}x,$$ which is nonsense because the limit on the LHS does not exist. When I later checked what this limit was on Wolframalpha, I found that $\displaystyle\lim_{x\to\infty} \large \frac {\left(\frac{\pi}{2}-\tan^{-1}x\right)}{x}=0$ . So how come that since this limit is not $1$ , $\displaystyle\lim_{x\to\infty}\left(\frac{\pi}{2}-\tan^{-1}x\right)^{\Large\frac{1}{x}}$ is $1$? Thank you in advance.
AI: There are a lot of issues in what you write: mainly, if you know that
$\lim_{x\to\infty} f(x) = \lim_{x\to\infty} g(x)$, you cannot conclude from it that $\lim_{x\to\infty} f(x)/g(x) = 1$.
For example, $\lim_{n\to\infty} \frac{n^2}{n} \neq 1$, and $\lim_{n\to\infty} \frac{n}{n^2} \neq 1$.
So let's go through your question step-by-step:
Firstly, because $\lim_{x \rightarrow \infty}\left(\frac{\pi}{2}-\tan^{-1}x\right)^{\Large\frac{1}{x}} = 1$, and $\lim_{x \rightarrow \infty} x^{\large \frac 1x}=1$, you want to show that $\lim_{x \rightarrow \infty}\left(\frac{\pi}{2}-\tan^{-1}x\right)=\lim_{x \to \infty}x$. This is already wrong: the left-hand-side limit is $0$, while the right-hand-side limit is $\infty$. (What would be correct is wanting to show that $\displaystyle \lim_{x\to\infty} \frac{\left(\frac{\pi}{2}-\tan^{-1}x\right)^{1/x}}{x^{1/x}} = 1$, but you cannot get rid of the $\frac1x$ in the exponent like that.)
Next, even to show that $\lim_{x \rightarrow \infty}\left(\frac{\pi}{2}-\tan^{-1}x\right)=\lim_{x \to \infty}x$, you want to show that $\lim_{x \rightarrow \infty} {\large \frac {\left(\frac{\pi}{2}-\tan^{-1}x\right)}{x}}=1$: wrong idea again, because $\lim_{x\to\infty} f(x) = \lim_{x\to\infty} g(x)$ may be true without it being true that $\lim_{x\to\infty} f(x)/g(x) = 1$.
From this you arrived (again I'm not exactly sure how) at the conclusion that $\lim_{x \to \infty} \cot x =\lim_{x \to \infty} x$, which you yourself realized is nonsense.
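A numerical look at the original limit may also help (a sketch, assuming Python 3): the base goes to $0$, the exponent $1/x$ goes to $0$, and the power goes to $1$.

```python
import math

for x in [10.0, 100.0, 1000.0, 10000.0]:
    base = math.pi / 2 - math.atan(x)
    print(x, base, base ** (1 / x))
# The middle column tends to 0 while the last column tends to 1.
```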
|
H: What does it exactly mean for a subspace to be dense?
My understanding of rationals being dense in real numbers:
I know that when we say the rationals are dense in the reals, it is because between any two real numbers we can find a rational number. In other words, we can approximate irrational numbers using rationals. I think a more precise statement would be that any open ball around an irrational number will contain a rational number.
If what I said is correct, I am trying to think about what it means for $C[a,b]$ (the continuous complex-valued functions on $[a,b]$) to be a dense subspace of $L^2[a,b]$. From what I said above, I want to say that all functions in $L^2[a,b]$ can be approximated by functions from $C[a,b]$. Is the intuition correct here, and what would the precise definition be in this case?
AI: Given a metric space $(X,\rho)$, we say that a subset $A$ of $X$ is dense in $X$ if, for each $\epsilon >0$ and $x\in X$ $$\tag 1 B(x,\epsilon)\cap A\neq \varnothing$$
This can be put more succinctly as ${\rm cl}\; A=X$. That is, the closure of $A$ is $X$. The assertion in $(1)$ is saying that $A$ is "everywhere": no matter what point $x$ we pick in $X$, and no matter how small a ball we choose, there will always be an element $a$ of $A$ in $B(x,\epsilon)$. And this means $\rho(x,a)<\epsilon$. So, the answer is "yes": we can approximate any $x\in X$ by elements of $A$ to within any prescribed degree of accuracy.
As an example, polynomials are dense in $C[a,b]$ with the $\sup$ metric. Given $f\in C[a,b]$, and $\epsilon >0$ we can find a polynomial $P$ such that $$\lVert f-P\rVert_\infty=\sup_{x\in [a,b]}|f(x)-P(x)|<\epsilon$$
ADD As Brian has commented, in the case of metric spaces we can translate the above to the following:
Let $(X,\rho)$ be a metric space, $A$ a subset of $X$. Then $A$ is dense in $X$ if and only if for each $x\in X$ there exists a sequence of points $\langle x_n\rangle$ in $A$ such that $x_n\to x$.
The proof is not complicated, so I invite you to give it.
The above happens to be true in metric spaces, but is not true in general, that is, for any topological space $(X,\mathscr T)$.
|
H: three dimensional cross product
Why don't there exist two three-dimensional vectors $x$ and $y$ such that $x\cdot y$ does not equal $x\times y$?
They do not exist, right? Please help; I'm kind of lost on this.
AI: The dot product is a scalar quantity. The cross product is a vector quantity. To compare a scalar and a vector is like comparing apples and oranges--it really doesn't make sense to do so.
For example:
$$(1, 0, 0) \times (0, 0, 1) = (0, -1, 0)\\
(1, 0, 0) \cdot (0, 0, 1) = 0$$
You can't really compare $(0, -1, 0)$ and $0$--one is a vector, the other is "just a number."
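The same example in code (a sketch, assuming Python with numpy):

```python
import numpy as np

x = np.array([1, 0, 0])
y = np.array([0, 0, 1])
print(np.cross(x, y))   # [ 0 -1  0]  -- a vector
print(np.dot(x, y))     # 0           -- a scalar
```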
|
H: Expected value of a stochastic harmonic series
It doesn't seem straightforward to put this into mathematical notation, but I'll do my best to explain the setup. Consider a harmonic series of the following type. For the sake of argument, say we have a set of 5 elements which are the reciprocals of the first 5 natural numbers:
$\left\{1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \frac{1}{5}\right\}$
Start the series index at k = 1, and define the first element of the series to be 1. Knock 1 out of the set, so we now have:
$\left\{\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \frac{1}{5}\right\}$
For the second summand in the series, index k = 2, make a uniform random sample from the above set. Say we come up with $\frac{1}{4}$. Our series is now:
$1 + \frac{1}{4}$
Knock $\frac{1}{2}$ out of the set, as its denominator corresponds to the current index k = 2. We now have the set:
$\left\{\frac{1}{3}, \frac{1}{4}, \frac{1}{5}\right\}$
For the third summand in the series, again choose randomly from the above set, and knock out the value associated with the current index afterwards, that is, $\frac{1}{3}$.
For any given number k, i.e. size of both the series and starting size of the set, there should be an expected value associated with the stochastic series; using linearity of expectation it is straightforward to calculate for k = 5 as
$1 + \frac{\frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5}}{4} + \frac{\frac{1}{3} + \frac{1}{4} + \frac{1}{5}}{3} + \frac{ \frac{1}{4} + \frac{1}{5}}{2} + \frac{1}{5}.$
I'm interested in what happens in the limit as k goes to infinity in this process. I've done some experiments numerically, and it seems that the expected value for large k converges to a number, $\approx 1.64$. Can it be proved that the expected value converges in the limit $k \to \infty$, and can the exact value be found?
AI: Apart from the $1$ which doesn't follow the rule, in the expected value
you seem to have terms
$$\dfrac{1}{k+1-j} \sum_{i=j}^k \dfrac{1}{i}$$
for $j$ from $2$ to $k$. I don't know if there's a closed form for the sum of that, but it appears to have generating function
$$ G(t) = \left(\frac{\pi^2}{6} + \mathrm{dilog}(t) - \ln(t)\ln(1-t)\right) \frac{t}{1-t}
-\frac{\ln(1-t)^2}{2}$$
As for asymptotics, since $\displaystyle\sum_{i=j}^k 1/i \approx \int_{j}^{k+1} ds/s = \ln(k+1)-\ln(j)$,
and $$\displaystyle\int_{2}^k \dfrac{\ln(k+1)-\ln(s)}{k+1-s} \ ds = \mathrm{dilog}\left(\frac{2}{k+1}\right) - \mathrm{dilog}\left(\frac{k}{k+1}\right) \to \frac{\pi^2}{6}\ \text{ as } k \to \infty$$
I think the limit is $\pi^2/6$.
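A quick numerical experiment supports that guess (a sketch, assuming Python 3): the terms described above -- everything except the leading $1$ -- are computed exactly for a few values of $k$ and do creep up toward $\pi^2/6\approx 1.6449$.

```python
def tail_expectation(k):
    H = [0.0]
    for i in range(1, k + 1):
        H.append(H[-1] + 1.0 / i)
    return sum((H[k] - H[j - 1]) / (k + 1 - j) for j in range(2, k + 1))

for k in [10, 100, 1000, 10000]:
    print(k, tail_expectation(k))
```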
|
H: How to find all $f$ with $f(x+f(y))+f(y+f(z))+f(z+f(x))=0$
Let $f:\mathbb{R}\longrightarrow \mathbb{R}$ be continuous and
such that $$f(x+f(y))+f(y+f(z))+f(z+f(x))=0.$$
Find all such $f$.
AI: There are two solutions: either $f(x) = c-x$ for some constant $c$, or $f(x) = 0$ for all $x$.
We claim that $f(x+f(y)) = f(x) - f(y)$.
Put $x = y = z = 0$, we get $f(f(0)) = 0$.
Put $x = y = f(0)$, we get $f(f(0) + f(z)) = -f(z)$.
Put $x = f(0)$, we get $-f(y) + f(y+f(z)) + f(z) = 0$, hence the claim.
Note that the image of $f$ is an additive subgroup of $\mathbb{R}$. There are two cases - either this subgroup is dense at 0 (hence in $\mathbb{R}$), or is not dense at 0 (hence discrete in $\mathbb{R}$)
For the first case, let $g(x) = x + f(x)$. Note that
$$f(x+f(y)) = f(x) - f(y) \Rightarrow x + f(y) + f(x+f(y)) = x + f(x) \Rightarrow g(x+f(y)) = g(x)$$
for any $y$. Density of image of $f$ and continuity of $f$ then implies that $g$ is a constant, say $c$, then $f(x) = g(x) - x = c-x$.
For the second case, $f$ is a continuous map on a connected set. If the image is discrete, it must be a constant. It's easy to check that the only possible constant is 0.
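A quick symbolic check that $f(x)=c-x$ does satisfy the equation (a sketch, assuming Python with sympy; the constant solution $f=0$ is immediate):

```python
from sympy import symbols, simplify

x, y, z, c = symbols('x y z c')
f = lambda t: c - t
print(simplify(f(x + f(y)) + f(y + f(z)) + f(z + f(x))))   # 0
```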
|
H: Conditions for Fubini's theorem
To preface this post, I have to admit that I have extremely little measure theory knowledge and I get lost when trying to read about Fubini's theorem for this reason. In the theorem statement for Fubini's theorem, it says that
$$\int_{A\times B}|f(x,y)|d(x,y) < \infty.$$
I get that it is saying that $|f|$ - when integrated over the product measure - is finite but how does one go about checking that this is the case? Is it sufficient to show the following:
$$\int_{A}\left(\int_{B}|f(x,y)|dy\right)dx < \infty$$
I found a resource (PDF warning) that talks about a corollary to Fubini's theorem that seems to suggest that this condition is sufficient (see Corollary 6.2.1 and the remark following it) however they restrict themselves to $\sigma$-finite measure spaces but the theorem statement on Wikipedia allows for general complete measure spaces. Is my assertion correct or am I way off the mark?
AI: Wikipedia actually has an alternate theorem statement that answers the question. Besides $\sigma$-finiteness, both iterated integrals of the absolute value of the function have to be finite.
Now $\sigma$-finiteness is implicitly required in Fubini's theorem to some degree. The assumption $$\int_{A\times B}|f(x,y)|d(x,y) < \infty$$ implies that $F_n=\{(x,y):|f(x,y)|>1/n\}$ has finite measure, so the product measure restricted to $\bigcup_n F_n=\{(x,y):f(x,y)\neq 0\}$ is $\sigma$-finite.
|
H: When can you treat a limit like an equation?
Lately, I've been very confused about the weird properties of limits. For example, I was very surprised to find out that $\lim_{n \to \infty} (3^n+4^n)^{\large \frac 1n}=4$ , because if you treat this as an equation, you can raise both sides to the $n$ power, subtract, and reach the wrong conclusion that $\lim_{n \to \infty} 3^n=0$ . I've asked this question before over here, and the answer was that $\infty-\infty$ is not well defined. I also found out here that you cannot raise both sides of a limit to a power unless the limit is strictly less than $1$ . However, there are also many examples where limits are treated as equations. For example, taking the logarithm of each side is standard procedure. Substitutions such as using $\lim_{x \to 0} \frac {\sin x}{x}=1$ work (although other substitutions sometimes don't work). So when can a limit be treated as an equation? Can you take for example the sine or tangent of each side like you can take the log? My guess is that you can treat it as an equation $at$ $least$ whenever $nothing$ is approaching $0$ or $\infty$ , but I'm not sure. Thanks.
P.S. Please keep the answers at a Calculus 1 level, and I have not learned the epsilon-delta definition of a limit.
AI: The $n$ in the limit has no meaning outside of the limit. Therefore you cannot raise both sides to the $n$th power and then "bring in" the $n$ from the outside into the limit (to be combined with the $1/n$ exponent, as you seem to be doing). In an expression such as $$\lim_{n \to \infty} (3^n+4^n)^{1/n}$$ $n$ is a "dummy" variable. It simply tells you which variable in the inner expression we are taking the limit with respect to. The value of the limit is not a function of $n$, and therefore "raising both sides to the $n$th power" is not a meaningful operation.
It is, however, okay to square both sides (or cube, or raise to any fixed power) of the equation, or take the log, sin, tan, or any other function. An equation involving a limit is still an equation and you can always do any operation to both sides of an equation. However, you still have to be careful. Suppose you have an equation like $$\lim_{x \to 0} f(x) = L.$$ Now you take the log of both sides. You get $$\log \lim_{x \to 0} f(x) = \log L.$$ Notice I have not yet brought the $\log$ "into" the limit, which is usually the next step you want to take. In order for that step to be valid, you need to know that $\log$ is continuous at the values of $f(x)$ near $x=0$, and then you would get $$ \lim_{x \to 0} \log f(x) = \log L.$$
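For the specific limit mentioned in the question, a few numerical values make the point (a sketch, assuming Python 3):

```python
for n in [1, 5, 10, 50, 100, 200]:
    print(n, (3.0 ** n + 4.0 ** n) ** (1.0 / n))
# The values decrease toward 4, even though 3^n certainly does not tend to 0.
```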
|
H: Solving a system of equations, why aren't the solutions preserved?
I have the equations $$6x^2+8xy+4y^2=3$$
and $$2x^2+5xy+3y^2=2$$
This question can be found here, and the answer written by "response" went like this:
Multiply the second by 8 to get: $16x^2+40xy+24y^2=16$
Multiply the first by 5 to get: $30x^2+40xy+20y^2=15$
Subtract the two to get: $14x^2-4y^2=-1$
Later, the guy said to disregard his solution because the solutions to the first two equations do not satisfy the third equation. Why does this happen? Thanks.
AI: It doesn't happen! And in fact, you've misquoted the answer by user 'response' in that post. It says:
my suggestion does not work because a solution to the third eqn need not satisfy the original two eqns.
Necessarily, any solution to the first two equations also satisfies the third.
What they actually meant is this: there can certainly be solutions to the third equation which do not satisfy the first two. This is because there has been some loss of information - you've canceled out terms!
For instance, in this particular case, $(x,y)=(0,\frac{1}{2})$ solves the third equation, but not the first two.
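A short sympy check of both points (a sketch, assuming Python 3): the third equation is a combination of the first two, and $(0,\frac12)$ satisfies it without satisfying them.

```python
from sympy import symbols, expand, Rational

x, y = symbols('x y')
eq1 = 6*x**2 + 8*x*y + 4*y**2 - 3
eq2 = 2*x**2 + 5*x*y + 3*y**2 - 2
derived = 14*x**2 - 4*y**2 + 1

print(expand(5*eq1 - 8*eq2 - derived))           # 0, so derived = 5*eq1 - 8*eq2
print(derived.subs({x: 0, y: Rational(1, 2)}))   # 0:  (0, 1/2) solves the third equation...
print(eq1.subs({x: 0, y: Rational(1, 2)}))       # -2: ...but not the first one
```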
|
H: Proof that a Hilbert space is not locally compact.
The proof in my textbook is: Construct $B(x,\epsilon)$. If $x=\langle x_{1},x_{2},\dots\rangle$, define a sequence $x_{n}=\langle x_{1},x_{2},\dots,x_{n}+\epsilon,\dots\rangle$. Clearly, $d(x_{i},x_{j})=\sqrt{2}\epsilon$ when $i\neq j$. Since the sequence $\langle x_{n}\rangle$ can have no convergent subsequence, $B(x,\epsilon)$ is not compact.
I have not come across the connection between the convergence of sequences and compactness yet. I have tried to construct an explanation. Please do comment on whether the argument is sound.
Since $d(x_{i},x_{j})=\sqrt{2}\epsilon$ for $i\neq j$ we construct balls $B(x_{i},\frac{\sqrt{2}\epsilon}{2})$ around each $x_{i}\in\langle x_{n}\rangle$. Clearly these sets are disjoint. For each $y\in X\setminus \bigcup_{i}B(x_{i},\frac{\sqrt{2}\epsilon}{2})$, $$d(y,x_{i})\geq \frac{\sqrt{2}\epsilon}{2}$$ Hence $ X\setminus \bigcup_{i}B(x_{i},\frac{\sqrt{2}\epsilon}{2})$ is an open set. Let us call it $U$.
We have a cover $ \{U\}\bigcup \{\cup_{i}B(x_{i},\frac{\sqrt{2}\epsilon}{2})\}$ of $X$, which will not have a finite subcover. This is true for every $\epsilon\in\Bbb{R}$. Hence, we can't possibly construct a compact set around $x$.
Thanks for your time.
EDIT: Also, I feel that if a space is locally compact, not every sequence in the space has to be convergent. Could someone comment on this too? Thanks again!
AI: A metric space is compact iff it is sequentially compact, meaning that every sequence has a convergent subsequence. The argument shows that within any closed nbhd of $x$ there is a sequence with no convergent subsequence, so $x$ has no compact nbhd and is therefore not locally compact.
Your argument is essentially the usual argument that if a metric space is not sequentially compact, then it cannot be compact.
Added: You’re quite right that a locally compact space can have non-convergent sequences: $\Bbb R$ with the usual topology is locally compact, and $\langle n:n\in\Bbb N\rangle$ is a non-convergent sequence. But if $X$ is locally compact, then every point of $X$ has a compact nbhd, and if in addition $X$ is metrizable, then every sequence that has infinitely many terms in a compact set will have a convergent subsequence.
|
H: How can we prove that this triangle is an equilateral triangle?
This is a problem which was sent to me by a friend, but I couldn't solve it; in particular, I don't have any ideas for it.
I hope you can help with hints or anything else.
Here is the problem in the image.
AI: This is known as Morley's trisector theorem.
|
H: Isomorphisms on categories
This time I have a conceptual question. Reading Categories for the Working Mathematician, I noticed the author gives two different notions between isomorphisms:
1) An isomorphism $T:B\rightarrow C$ of categories is a functor $T$ from $B$ to $C$, which is bijection both on objects and arrows.
2) If we take two bifunctors, we can sometimes find a bijective natural transformation between them; for example, $\text{Cat}(A\times B,C)\cong \text{Cat}(A,C^B)$ is natural in $A,B$ and $C$.
Does the second notion implies that $\text{Cat}(A\times B,C)$ and $ \text{Cat}(A,C^B)$ are isomorphic regarded as categories? What is the meaning behind the naturality of the bijection, the extra information?
As always, thanks in advance for your answers.
AI: $Cat(A,B)$ is the set of functors from the category $A$ to the category $B$. In general, $\mathcal{C}(A,B)$ denotes the hom-set between objects $A,B$ in the category $\mathcal{C}$. Since there is a "category of categories", something like $Cat(A \times B, C) \cong Cat(A, C^B)$ is saying that there is a bijection between functors, sending $F:A\times B \rightarrow C$ to the functor $G: A \rightarrow C^B$. (This is exactly like saying $$Maps(A \times B, C) \cong Maps(A, C^B)$$ in $Sets$, except the functions are now functors!) What is $G$? Well, $C^B$ is a functor category. The objects are functors, and so given an object $a$ of $A$, $G(a)$ had better be a functor from $B$ to $C$. What functor do we take? The only option is $F(a,-)$. But $G$ also needs to do something to morphisms. So a morphism $k:a \rightarrow a'$ needs to go to a natural transformation $F(a,-) \rightarrow F(a',-)$. But this comes built in to the definition of $F$, since it's natural in both slots. We can also go back. Given such a $G$, we can build an $F$, and do it in a way that the two constructions are mutually inverse.
Now, you can then view $Cat(A,B)$ as a category (a functor category), with objects the morphisms and morphisms as natural transformations between them. The above construction doesn't "come with" a way to send morphisms (natural transformations) to morphisms. But given a natural transformation between bifunctors $S \Rightarrow T:A \times B \rightarrow C$, I would believe that one could construct a natural transformation between the adjoints, which I'll denote $\hat{S}$ and $\hat{T}$. If $\eta$ is the natural transformation from $S$ to $T$, then define $\hat{\eta}$ at $a$ to be... what? Well, it needs to go from $\hat{S}(a)$ to $\hat{T}(a)$. These are objects in the functor category $Cat(B,C)$, so $\hat{S}(a)$ is a functor from $B$ to $C$, as is $\hat{T}(a)$. So $\eta_a$ should be a natural transformation! What should the natural transformation be? Well, I would use $\eta(a,-)$.
I don't have any reason to think that this $\hat{\cdot}$ operation won't induce a bijection on natural transformations, and so those two functor categories are indeed isomorphic, which you could write as $$C^{A\times B} = (C^B)^A.$$ But you can probably imagine that writing out the details is a little tedious! The trick is to at all times carefully keep track of what type of objects you're working with, what the morphisms should be, etc.
Note that naturality of the bijection $Cat(A\times B,C) \cong Cat(A,C^B)$ didn't enter with this argument, since we never had to think about any categories other than $A,B,$ and $C$. So what does that mean? Go back to the thing that took a functor $F:A \times B \rightarrow C$ and spat out a functor $G:A \rightarrow C^B$. Suppose you had a functor $H:A' \rightarrow A$. You can use this to build a functor $H\times 1:A' \times B \rightarrow A \times B$. Suppose you wanted to know what happens when you applied all this stuff to $F(H\times 1):A' \times B \rightarrow C$. You have to go through all that computation over again!
But that's where naturality saves the day. Naturality says that if you've computed one thing, you can compute pre- and post- compositions by just... composing! So the result from applying all this to $F(H \times 1)$ is just $GH: A' \rightarrow A \rightarrow C^B$. As you read the book, you'll notice that naturality is a very powerful condition, since it imposes so many relations on the transformation!
|
H: Logical systems that are complete but not sound
I was wondering, are there any commonly used logics (with both notions of deduction and of semantics) that are complete but not sound? I'm looking for an example that has actually proven useful to logicians. Thank you for your wisdom!
Sincerely,
VIen
AI: If a logical system is not sound, this means that there is a mis-match between the deductive system and the semantics (the models used when defining soundness). A typical example would be that, if we use Kripke structures as our semantics (so the appropriate logic would be intuitionistic), then classical logic is unsound. As far as I can see, such a phenomenon is not "useful" but just a mistake. (Well, I suppose it could be useful for showing that someone made a mistake.)
|
H: Finding an orthonormal basis using Gram Schmidt process
OK, here's a question with polynomials. We want to find an orthonormal basis using Gram-Schmidt.
Assume that we are in a vector space $V$, namely $R^2[X]$, whose elements are $f = \lambda_0+\lambda_1X+\lambda_2X^2$. Since $X$ is a polynomial it gets treated differently than if it were just a variable, correct? Because if we are defining the inner product as $\langle f,g \rangle = \int^1_0 f \cdot g \ dx$ and the basis $B$ is $B= \{3, 1+X, X^2\}$, the first vector is simply $v_1=3$. The next one I am not sure about, because of the integral, and because it's a polynomial. I am unsure of how to apply Gram-Schmidt here. When I apply the formula -- or try to -- I get some weird results. My first thought was to do this:
$$v_2= b_2 - \frac{\langle b_2, v_1 \rangle}{ \lVert v_1 \rVert^2} v_1$$ and get $$v_2= 1+X - \frac{\langle 1+X, 3 \rangle}{ \lVert 3 \rVert^2} 3$$ but something tells me that's wrong. I know I am supposed to work the integral in there but I am not sure how, because X is a polynomial and not a variable.
AI: Hint: The norm associated with an inner product is defined by
$$
\|f\|^2=\langle f,f\rangle.
$$
So, for instance,
$$
\|1+x\|^2=\langle 1+x,1+x\rangle=\int_0^1(1+x)(1+x)\,dx=\frac{7}{3},
$$
and so $\|1+x\|=\sqrt{\frac{7}{3}}$.
So, to perform Gram-Schmidt: say we are given $f_1(x)=3$, $f_2(x)=1+x$, and $f_3(x)=x^2$, and we want to come up with an orthonormal basis $\{e_1,e_2,e_3\}$.
We start by taking $v_1=f_1=3$; then $e_1=\frac{v_1}{\|v_1\|}$. But
$$
\|v_1\|^2=\langle v_1,v_1\rangle=\int_0^1 v_1\cdot v_1\,dx=\int_0^19\,dx=9,
$$
and so $\| v_1\|=3$; hence $e_1=\frac{3}{3}=1$.
Now, we take $v_2:=f_2-\text{proj}_{e_1}(f_2)$, where $\text{proj}_a(b)$ is the projection of $b$ on to $a$. We have
$$
\text{proj}_{1}(1+x)=\frac{\langle 1,1+x\rangle}{\|e_1\|^2}\cdot 1=\frac{\int_0^1 1\cdot(1+x)\,dx}{1^2}\cdot1=\frac{3}{2},
$$
so that $v_2=(1+x)-\frac{3}{2}=-\frac{1}{2}+x$. If you check, you will find that $v_2$ is orthogonal to $e_1$, as it should be! So, we make it a unit vector by dividing by its norm. Here,
$$
\left\|-\frac{1}{2}+x\right\|^2=\left\langle-\frac{1}{2}+x,-\frac{1}{2}+x\right\rangle=\int_0^1\left(-\frac{1}{2}+x\right)^2\,dx=\frac{1}{12},
$$
so that $\|-\frac{1}{2}+x\|=\frac{1}{2\sqrt{3}}$, and we define
$$
e_2=\frac{v_2}{\|v_2\|}=-\sqrt{3}+2\sqrt{3}x.
$$
See if you can get the last one from there.
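If you want to check the remaining step by machine, here is a sympy sketch (assuming Python 3) that projects $X^2$ onto $e_1$ and $e_2$, normalizes, and verifies orthonormality on $[0,1]$:

```python
from sympy import symbols, integrate, sqrt, simplify

x = symbols('x')

def ip(f, g):
    return integrate(f * g, (x, 0, 1))

e1 = 1
e2 = -sqrt(3) + 2*sqrt(3)*x
v3 = x**2 - ip(x**2, e1)*e1 - ip(x**2, e2)*e2
e3 = simplify(v3 / sqrt(ip(v3, v3)))
print(e3)                                            # sqrt(5)*(6*x**2 - 6*x + 1), up to rearrangement
print([simplify(ip(e3, e)) for e in (e1, e2, e3)])   # [0, 0, 1]
```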
|
H: A generalization of Waring's problem
Let $f(x)$ be a polynomial with integer coefficients such that $$\lim_{x\to +\infty}f(x)=+\infty.$$
Is it true that there always exist two integers $K$ and $R$ (depend on $f(x)$), such that every positive integer $n$ can be expressed as $$n=r+\sum_{i=1}^kf(x_i),$$
where $|r|\leq R,k\leq K,x_i(i\leq k)\in \mathbb N.$
This is a generalization of Waring's problem. If $f(x)=x^2$ then $K=4,R=0.$
AI: Not sure anyone knows. Hilbert's proof of finiteness of $g(k)$ in Waring's problem depended on the particular shape of the polynomials, namely $f(x) = x^k.$ See CONJECTURE. If you have access to a university mathematics library, look up Additive Number Theory and then Additive Bases on MathSciNet and see what happens. Meanwhile, the book you should be reading is The Hardy-Littlewood Method by Robert C. Vaughan. Your trick of adding or subtracting bounded $r$ does not change things all that much.
Meanwhile, you should have little trouble settling the case of $$ f(x) = a x^2 + b x + c $$ for positive integer $a,$ by multiplying through by $4a,$ completing the square, and using the Four Square Theorem you quote. After that try $$ f(x) = a x^3 + b x^2 + c x + d $$ and see if you get anywhere at all, and decide what restrictions you might want on $a,b,c,d$ to make an approachable problem.
Continuing on the theme of nobody knowing in this generality, see SCHNIRELMANN. Although your sets are strongly related to the sets for the Waring problem, they still have density zero.
Note that there is a probabilistic argument that suggests your scheme ought to work. Your polynomial starts $a x^k + b x^{k-1} + \cdots.$ As a result, the count of nonnegative $x$ values with $f(x) \leq N$ for large positive $N$ is asymptotically $(N/a)^{1/k}$, which for this heuristic we may as well treat as $N^{1/k}$ (the constant $a^{-1/k}$ is harmless). As a result, the sum of $k+1$ different copies gives you an excess of lattice points. So, if you can rule out $p$-adic restrictions, you can sort of expect that not many numbers will be missed, and these can be covered by your finite $r.$ Just remember that the $p$-adic restrictions are by far the major component in $g(4)$ being as large as it is. Furthermore, there are other types of obstruction. For integer $x,y,z$ and (positive) prime $p \equiv 1 \pmod 4,$ we have $$ x^2 + y^2 + z^9 \neq 216 p^3. $$ Note that the exponent 9 is not a typo.
|
H: All distinct subgroups of $\mathbb{Z}_4 \times \mathbb{Z}_4$ isomorphic to $\mathbb{Z}_4$
This question is from a past exam.
Find all distinct subgroups of $\mathbb{Z}_4 \times \mathbb{Z}_4$ isomorphic to $\mathbb{Z}_4$
Attempt/Thoughts?
Since $\mathbb{Z}_4$ is cyclic we are looking for cyclic subgroups of the given group. Then can I use the fundamental theorem of f.g.ab. groups to solve this problem?. I have a hard time imagining how the elements in $\mathbb{Z}_4\times \mathbb{Z}_4$ look like. Is there a standard way to proceed in this type of problem?.
Can somebody help?. I usually don't ask for detailed answers in here. But in this
case I would really appreciate it. Thanks.
AI: The elements of the product are all pairs $(a,b)$ with $a$ and $b$ in $\mathbb{Z}_4$. Go patiently over all the candidates for generator of the subgroup isomorphic to $\mathbb{Z}_4$.
Start with $(0,1)$. It generates a subgroup of order $4$, consisting of $(0,1)$, $(0,2)$, $(0,3)$, $(0,0)$.
Note that $(0,2)$ is no good, while $(0,3)$ generates a group isomorphic to $\mathbb{Z}_4$ that we have already.
Now continue with $(1,1)$: good. Now try $(1,2)$: good, we get in turn $(2,0)$, $(3,2)$, $(0,0)$.
Continue. Analysis will quickly speed up. Be careful not to list the same subgroup twice. And take advantage of symmetry: the analysis of the group generated by $(0,a)$ is automatic once you have dealt with the group generated by $(a,0)$.
Remark: There is nothing wrong with computing. One becomes intimately acquainted with the structures that way. After a while we find shortcuts.
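In the spirit of that remark, here is a small brute-force sketch (hypothetical helper code, not part of the argument above) that lists the distinct cyclic subgroups of order $4$ by generating the subgroup of each element:

import java.util.*;

public class CyclicSubgroupsZ4xZ4 {
    public static void main(String[] args) {
        Set<Set<List<Integer>>> subgroups = new HashSet<>();
        for (int a = 0; a < 4; a++) {
            for (int b = 0; b < 4; b++) {
                Set<List<Integer>> h = new HashSet<>();   // subgroup generated by (a, b)
                int x = 0, y = 0;
                do {
                    x = (x + a) % 4;
                    y = (y + b) % 4;
                    h.add(Arrays.asList(x, y));
                } while (x != 0 || y != 0);
                if (h.size() == 4) {                      // keep only the copies of Z_4
                    subgroups.add(h);
                }
            }
        }
        System.out.println(subgroups.size() + " distinct cyclic subgroups of order 4:");
        for (Set<List<Integer>> h : subgroups) {
            System.out.println(h);
        }
    }
}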
|
H: Why is $V_{4}$ the semi direct product of $Z_{2}$× $Z_{2}$
I'm trying to understand what is a semi direct product , so by the definition semi-direct product of G , I'd need two groups , $N$ and $H$ , where :
$H∩N$ = {e}
$H \cdot N$ = $G$
If $H=N=Z_{2}$ , then : $Z_{2}∩Z_{2}≠{e}$ .
Which contradicts $H∩N$ = {e} . So what am I missing here ?
Why does $V_{4}$ is indeed the semi direct product of $Z_{2}$× $Z_{2}$ ?
Thanks
AI: Actually, here, the idea could be stated more concisely this way:
Show that there exist subgroups $H,K\leq V_4$ such that $H,K\cong \mathbb{Z}_2$ and $V_4$ is the semi-direct product of $H$ and $K$.
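Concretely (with labels chosen just to make the statement tangible): write $V_4=\{e,a,b,ab\}$ with $a^2=b^2=e$ and $ab=ba$, and take $H=\{e,a\}$, $K=\{e,b\}$. These are two different subgroups, each isomorphic to $\mathbb{Z}_2$, and
$$
H\cap K=\{e\},\qquad HK=\{e,a,b,ab\}=V_4,
$$
so the conditions in the definition refer to two distinct subgroups of $V_4$ isomorphic to $\mathbb{Z}_2$, not to one copy of $\mathbb{Z}_2$ intersected with itself. (Since $V_4$ is abelian, the product here is in fact direct.)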
|
H: For any set $A\subseteq\mathbb{R}^n$, we have $ \overline{A^{\circ}} = \overline{\overline{A^{\circ}}^{\,\circ}}$
I have to prove that for any set $A\subseteq\mathbb{R}^n$,
$$ \large\overline{A^{\circ}} = \overline{\overline{A^{\circ}}^{\,\circ}} $$
This is what I got so far: for any set $A$ I'm using these definitions:
Interior:
$$A^{\circ}=\{x \mid \exists\, r > 0\text{ such that }B_r(x) \subseteq A\}$$
Closure:
$$\overline{A}=\{x \in \mathbb{R}^n\mid \exists\, (X_n) \subseteq A \text{ with } X_n \rightarrow x\}$$
Now what I don't get is, I think, the right part of the equality. I have the interior of $A$, that is, all the points that have a ball included in $A$; using the definition of closure I know that I can pick a sequence that converges to them (using $r$ and decreasing it with $\frac{r}{n}$, $n\to\infty$, for example). But then I don't know how to take the interior of that. What I'm getting at is that the closure of the interior is the interior, and then the right part of the equation is trivial, as it is the same (the interior of the interior is the interior, and its closure is the interior).
I think I'm missing something..
AI: Interestingly, you can solve this problem by appealing to only a few abstract properties of interior and closure. See this article for more information. Here are three conditions satisfied by the closure operation:
$\mathrm{cl}(S) \supset S$ for all $S \subset \mathbb{R}^n$ (sets get bigger when you close them)
$\mathrm{cl} (\mathrm{cl} (S)) = \mathrm{cl} (S)$ for all $S \subset \mathbb{R}^n$ (the closure of a closed set is itself)
If $S \subset T \subset \mathbb{R}^n$, then $\mathrm{cl}( S) \subset \mathrm{cl}( T)$ (closure preserves containment)
The situation for interiors is nearly the same:
$\mathrm{int}(S) \subset S$ for all $S \subset \mathbb{R}^n$ (sets get smaller when you take their interiors)
$\mathrm{int} (\mathrm{int} (S)) = \mathrm{int} (S)$ for all $S \subset \mathbb{R}^n$ (the interior of an open set is itself)
If $S \subset T \subset \mathbb{R}^n$, then $\mathrm{int}( S) \subset \mathrm{int}( T)$ (taking interiors preserves containment)
Now, let $S \subset \mathbb{R}^n$ be given. We have $\mathrm{cl} (\mathrm{int}(S)) \supset \mathrm{int}(S)$. Now take the interior of both sides to get $\mathrm{int}(\mathrm{cl}(\mathrm{int}(S))) \supset \mathrm{int}(\mathrm{int}(S)) = \mathrm{int}(S)$. Taking the closure on both sides leads to
$$ \mathrm{cl}(\mathrm{int}(\mathrm{cl}(\mathrm{int}(S)))) \supset \mathrm{cl} (\mathrm{int}(S))$$
which is one half of what we set out to prove. I encourage you to try to deduce the reverse inclusion using similar methods beginning from $\mathrm{int}(\mathrm{cl}(\mathrm{int}(S))) \subset \mathrm{cl}(\mathrm{int}(S))$.
|
H: Determinant with polynomials
Question
In $\Bbb R _{n-2}[X]$, let $f_1(x),\dots, f_n(x) \in \Bbb R _{n-2}[X]$ and let $a_1,\dots, a_n \in \Bbb R$
$$A=\begin{bmatrix} f_1(a_1) &f_1(a_2) & \dots & f_1(a_n) \\ \vdots & \vdots & & \vdots \\f_n(a_1) &f_n(a_2) & \dots & f_n(a_n)\end{bmatrix}$$
What is $\det(A)$?
Thought
We claim it's $0$ because $n$ vectors in an $n-1$ dimensional vector space are linearly dependent. And therefore the matrix is singular. Is this a true direction? (Never seen such a question before).
AI: You are correct.
You have a linear dependence among the $f_i$ as elements in $\mathbb R_{n-2}[X]$, that is real numbers $c_i$ not all zero such that $\sum c_if_i$ is the zero polynomial. Then also $\sum c_if_i(a_j)=0$ for all $j$, i.e. the row vectors are linearly dependent as elements of $\mathbb R^n$, hence $\det A=0$.
|
H: Strong convergence of operators
I'm working through the functional analysis book by Milman, Eidelman, and Tsolomitis, and I have a question. The book states a lemma that I'm a bit confused about:
A sequence of operators $T_n\in L(X, Y)$ (here, $X$ and $Y$ are Banach Spaces) converges strongly to an operator $T\in L(X,Y)$ if and only if
(i) the sequence $\{T_n x\}$ converges for any $x$ in a dense subset $M\subset X$;
(ii) there exists $C>0$ such that $\| T_n\|\leq C$.
My question is whether or not $M$ has to be a linear subspace. I think it does. The reason I think so is that the proof of the theorem says that we first define an operator $T_0$ by $$ T_0 x:= \lim_{n\to \infty} T_n x. $$ By the assumption that $T_n x$ converges on $M$, we have that the domain of an operator is $M$. Hence, $M$ must be a linear space. The proof then goes on to define an extension of $T_0$ by $$ Ty:= \lim_{n\to \infty} T_0 x_n, $$ where $x_n\to y$. Here we are using the density of $M$ and the boundedness of $T_0$ which is inherited from $\{T_n\}$. Even here though, to prove uniqueness of this limit under any sequence converging to $y$ we need the fact that $M$ is a subspace. The reason is that, if $\{z_n\}$ is any other sequence in $M$ converging to $y$, we have to consider the expression $$ \|T_0 x_n -T_0 z_n\|=\|T_0(x_n-z_n)\|\leq \|T_0\|\cdot \|x_n-z_n\|\to 0$$ since $\{x_n\}$ and $\{z_n\}$ both converge to $y$. We must have that the vector $x_n-z_n$ is in the domain of $T_0$ (i.e. $M$).
Anyways, I'm pretty sure that our dense subset $M$ must be a dense subspace for this to work. Can someone please tell me if this is correct. I'm 98% sure this is a typo in the statement of the lemma.
AI: Of course, $L(X,Y)$ means bounded linear operators.
That's a standard $\frac{\epsilon}{3}$ argument which does not require the linearity of $M$. Just that it be dense.
1- Assume (i) and (ii) are fulfilled.
Now take any $x\in X$. Let us prove that the sequence $\{T_nx\}$ is Cauchy. So let us take $\epsilon>0$. By density of $M$, there exists $y\in M$ such that $\|x-y\|\leq \frac{\epsilon}{3C}$. Then
$$
\|T_nx-T_mx\|\leq \|T_nx-T_ny\|+ \|T_ny-T_my\|+\|T_my-T_mx\|
$$
$$
\leq \|T_n\|\|x-y\|+\|T_ny-T_my\|+\|T_m\|\|x-y\|
$$
$$
\leq C\frac{\epsilon}{3C}+\|T_ny-T_my\|+C\frac{\epsilon}{3C}
$$
$$
=\frac{2\epsilon}{3}+\|T_ny-T_my\|.
$$
Since $\{T_ny\}$ converges, we can find $N$ such that $\|T_ny-T_my\|\leq \frac{\epsilon}{3}$ for all $n,m\geq N$. Whence $\|T_nx-T_mx\|\leq \epsilon$.
Since $Y$ is complete, it follows that $\{T_nx\}$ converges to some $Tx$ for every $x\in X$. Since the pointwise limit of a sequence of linear operators is linear, $T$ is linear. And since $\|T_nx \|\leq C\|x\|$ for every $x$, we get $\|Tx\|\leq C\|x\|$ as well at the limit. Whence $T$ is bounded. This direction requires that $Y$ be complete, but not $X$.
2- Conversely, if $\{T_n\}$ converges strongly, then it must be uniformly bounded by the uniform boundedness principle (and its limit must be bounded). So (i) and (ii) are satisfied by $M=X$. This direction requires that $X$ be complete. But not $Y$.
|
H: Proving that not defined value is equal to something
My younger brother (9th Grader) got the following maths problem-
Given: $$2^a = 3^b = 6^c$$
Prove:
$$c=\frac{a * b}{a+b}$$
From my elementary knowledge of mathematics it seems like $a=b=c=0$. Also, $(ab)/(a+b)$ is not defined, and not defined can be equal to 0. Which makes me think if the question makes any sense. They could have also asked if $(ab)/(a+b) = 182$, i.e. some random number.
My question is if the output of
Not Defined == A Number
is true or false?
Does this question really makes sense?
Unfortunately the teacher is pretty arrogant and doesn't want to give an answer to this question!
AI: Suppose that $2^a=3^b=6^c$, where $a\ne 0\ne b$. Take logs base $2$:
$$a=b\lg 3=c\lg 6=c(1+\lg 3)\;.$$
Then
$$\frac{ab}{a+b}=\frac{b^2\lg 3}{b+b\lg 3}=\frac{b\lg 3}{1+\lg 3}=\frac{a}{a/c}=c\;.$$
Of course the only solution with integral $a,b$, and $c$ is $a=b=c=0$, but there are certainly non-integral solutions.
|
H: Proving that exists only one basis which is dual to a given basis
Question
Let $V$ be a finite dimensional vector space over $\Bbb F$ and $V^*$ it's dual space. Let $f_1 ... f_n$ be a basis for $V^*$. Prove that $\exists ! e_1 ... e_n$ - basis for $V$ s.t. $f_1 ... f_n $ is its dual basis.
Thought:
Someone showed me a hint with an inverse of the matrix of functionals... I don't really understand how this proves the question...
AI: I am assuming that the question is about $f_i=e_i^*$.
Existence:
The map $ev:V\to V^{**}$ is an isomorphism (is injective and both spaces have the same dimension). Let $g_1,\cdots,g_n$ be the dual basis of $\{f_i\}$. This is easy to construct since you can define your morphisms $g_i$ in a basis. Now, put $e_i=ev^{-1}(g_i)$. The set $\{e_1,\cdots,e_n\}$ is a basis of $V$ since $ev$ is an isomorphism. Let us show that its dual basis is $\{f_1,\cdots,f_n\}$:
$$
f_j(e_i)=ev(e_i)(f_j)=g_i(f_j)=\delta_{ij}.
$$
Uniqueness:
Let $e'_i$ be another predual basis and write $e'_i=\sum a_{ij}e_j$. Applying $f_k$ in both sides of the equation we get
$$
\delta_{ik}=f_k(e'_i)=\sum a_{ij}f_k(e_j)=\sum a_{ij}\delta_{jk}=a_{ik},
$$
so that $e'_i=e_i$.
|
H: Euler angles to rotation matrix. Rotation direction
So we have a 2D rotation matrix for counterclockwise (positive) angle "$a$":
$\begin{pmatrix}
\cos(a) & -\sin(a) \\
\sin(a) & \cos(a)
\end{pmatrix}$.
For clockwise (negative) angle:
$\begin{pmatrix}
\cos(a) & \sin(a) \\
-\sin(a) & \cos(a)
\end{pmatrix}$.
When converting euler angles to 3d rotation matrix we extend rotation matrices of yaw ($z$), pitch ($y$) and roll ($x$) angles and multiply them. The same convention is used for all angles: counterclockwise direction, right-handed system.
$R_x =
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos(x) & -\sin(x) \\
0 & \sin(x) & \cos(x)
\end{pmatrix}$,
$R_y =
\begin{pmatrix}
\cos(y) & 0 & \sin(y) \\
0 & 1 & 0 \\
-\sin(y) & 0 & \cos(y)
\end{pmatrix}$,
$R_z =
\begin{pmatrix}
\cos(z) & -\sin(z) & 0 \\
\sin(z) & \cos(z) & 0 \\
0 & 0 & 1
\end{pmatrix}$.
Why is the direction of rotation for the pitch angle (sign of sin elements) different from yaw and roll angles? Thanks.
AI: Matrix rows or columns are traditionally listed under $(x,y,z)$ order.
Cyclically change the pairs under consideration i.e $(x,y)\to(y,z)\to(z,x)$. The pairs $(x,y)$ and $(y,z)$ show up in the same order in the matrix but the $(z,x)$ shows up in reverse in the matrix. That is the cause of apparent discrepancy but really there is no discrepancy.
For example write
$x'=x\cos \alpha - y \sin \alpha$
$y'=x\sin \alpha + y \cos \alpha$
now change $(x,y)\to(y,z)\to(z,x)$ and $\alpha\to \beta \to \gamma$ and write the three matrices to see how $(z,x)$ part gets flipped.
Edit:
If you want them to look alike then give up the matrix notation and instead write
$y'=y\cos \beta - z \sin \beta$
$z'=y\sin \beta + z \cos \beta$
And
$z'=z\cos \gamma - x \sin \gamma$
$x'=z\sin \gamma + x \cos \gamma$
In each instance if you try to write $\left[ \matrix{ x' \cr y' \cr z'}\right]$ in terms of $\left[ \matrix{ x \cr y \cr z}\right]$ you will see that the mystery goes away.
|
H: Find the 3rd degree of polynomial having trigonometry as a root
I am struggling to solve the following problem: find a 3rd degree polynomial having all coefficients as integers, which satisfies
$f(\cos \frac{\pi}{7})=0$.
I know that I should use Euler's equation, but I don't know how to apply it to this problem.
In detail, I know I should use the fact that $e^{(\frac{\pi i}{7})}$ is the root for $\frac{(x^7+1)}{x+1}$ = $x^6-x^5+x^4-x^3+x^2-x+1 = 0 $, but it is really hard for me to find how to use that fact to solve the problem.
Thank you.
AI: MORE DETAILS
You want a relation involving only $\;c:=\cos\left(\frac{\pi}7\right)\,$ the real part of $\,x=e^{i\pi/7}\,$ such that :
$$x^6-x^5+x^4-x^3+x^2-x+1 = 0$$
Divide by $x^3$ to get :
\begin{align}
\tag{1}&\bigl(x^3+x^{-3}\bigr)-\bigl(x^2+x^{-2}\bigr)+\bigl(x^1+x^{-1}\bigr)=1\\
\end{align}
But
\begin{align}
\tag{2}(2\;c)=\left(x+\frac 1x\right)^1&=\bigl(x^1+x^{-1}\bigr)\\
\tag{3}(2\;c)^2=\left(x+\frac 1x\right)^2&=\bigl(x^2+x^{-2}\bigr)+2\\
\tag{4}(2\;c)^3=\left(x+\frac 1x\right)^3&=\bigl(x^3+x^{-3}\bigr)+3\,\bigl(x^1+x^{-1}\bigr)\\
\end{align}
so that everything may be written in function of $\,c\,$ only :
from $(2)$ and $(4)$ deduce $\,\bigl(x^3+x^{-3}\bigr)=(2\;c)^3-3\,(2\;c)\,$,
from $(3)$ deduce $\,\bigl(x^2+x^{-2}\bigr)$
from $(2)\ \cdots$
Conclude !
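(If you want something to check your final answer against: carrying the substitutions $(2)$--$(4)$ into $(1)$ and simplifying should give
$$
8c^3-4c^2-4c+1=0,
$$
i.e. $\cos\frac{\pi}{7}$ is a root of $8x^3-4x^2-4x+1$, which has integer coefficients as required.)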
|
H: Computing determinant of X-matrix using permutations and blocks
Calculate the following determinant $$\det\begin{bmatrix} a_1 & 0 & ... &0&b_1 \\ 0&a_2 & ... &b_2&0 \\ &&... \\ 0 & b_{2n-1} & ... &a_{2n-1}&0\\ b_{2n} & 0 & ... &0&a_{2n}\end{bmatrix}$$
We thought about using the idea of a determinant of blocks. Using that way we got $a_1a_2\cdots a_{2n} - b_1b_2\cdots b_{2n}$. Another thought was using the definition using permutations, but we are having trouble understanding why the rest of the permutations (except the identity permutation and the permutation that returns the last to the first etc) eventually become 0. We would like to verify correctness/get an explanation on how to solve this one using permutations and blocks.
AI: Laplace expansion should give you the simplest way to solve the problem, but if you insist on using permutations and block matrices, you may first move the last column forward to the second column, and then move the last row upward to the second row. So, the required determinant is equal to the determinant of
$$
\pmatrix{a_1&b_1\\ b_{2n}&a_{2n}\\ &&a_2&\cdots&b_2\\ &&&\cdots\\ &&b_{2n-1}&\cdots&a_{2n-1}}
$$
and you may proceed recursively.
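Carrying that recursion all the way down, each step peels off a $2\times2$ block pairing row $k$ with row $2n+1-k$, so (unless I have slipped on the indexing) the determinant comes out as
$$
\prod_{k=1}^{n}\bigl(a_k\,a_{2n+1-k}-b_k\,b_{2n+1-k}\bigr),
$$
which in general is not the same as $a_1\cdots a_{2n}-b_1\cdots b_{2n}$.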
|
H: How do I show that $\int_0^\infty \frac{\sin(ax) \sin(bx)}{x^{2}} \, \mathrm dx = \pi \min(a,b)/2$
Recently I found a claim saying that
$$
\int_0^\infty \left( \frac{\sin ax}{x}\right)\left( \frac{\sin bx}{x}\right) \mathrm{d}x= \pi \min(a,b)/2
$$
from what I can see this seems to be true. I already know that
$\int_{0}^\infty \operatorname{sinc}(xy)\,\mathrm{d}y = \pi/2$, and so independent of $y$. My suspicion is that this is closely related to the integral above.
Can someone give me some suggestions for evaluating the integral above? Also are there any generalizations for the integral? Eg
$$
\int_{0}^{\infty} \left( \prod_{k=1}^N \frac{\sin (a_k \cdot x)}{x} \right) \,\mathrm{d}x
$$
where $a_1, \cdots, a_N$ are arbitrary positive constants.
It seems related to the Borwein Integral, but there are some subtle differences.
AI: One way is to to this by residues. Another way to integrate once by parts to get
$$I=\int_0^{\infty}\frac{b\sin ax\cos bx+a\cos ax \sin bx}{x}dx,$$
then to use the formula $2\sin \alpha\cos\beta=\sin(\alpha+\beta)+\sin(\alpha-\beta)$ and the mentioned integral (note that your formula needs to be corrected on the left and on the right)
$$\displaystyle \int_0^{\infty}\frac{\sin xy}{x}\,dx=\frac{\pi}{2}\mathrm{sgn}(y).$$
This gives
\begin{align}I&=\frac{\pi}{4}\Bigl[b\,\mathrm{sign}(a+b)+b\,\mathrm{sign}(a-b)+a\,\mathrm{sign}(a+b)+a\,\mathrm{sign}(b-a)\Bigr]=\\
&=\frac{\pi}{4}\left(|a+b|-|a-b|\right),
\end{align}
For $a,b>0$ the last expression is obviously equal to $\pi\min\{a,b\}/2$.
|
H: Must certain rings be isomorphic to $\mathbb{Z}[\sqrt{a}]$ for some $a$
Consider the group $(\mathbb{Z}\times\mathbb{Z},+)$, where $(a,b)+(c,d)=(a+c,b+d)$. Let $\times$ be any binary operation on $\mathbb{Z}\times\mathbb{Z}$ such that $(\mathbb{Z}\times\mathbb{Z},+,\times)$ is a ring. Must there exist a non-square integer "$a$" such that
$$(\mathbb{Z}\times\mathbb{Z},+,\times)\cong\mathbb{Z}[\sqrt{a}]?$$
Thank you.
Edit: Chris Eagle noted that setting $x\times y=0$ for all $x,y\in\mathbb{Z}\times\mathbb{Z}$ would provide a counterexample. I would like to see other counterexamples though.
AI: Probably the most natural counterexample is the following:
If the operation $\times$ is defined such that the resulting ring is simply product of two copies of the usual ring $(\mathbb{Z},+,\times)$ (that is, if we set $(a,b)\times(c,d)=(ac,bd)$), then, again, no isomorphism exists, since the resulting ring $\mathbb{Z}\times \mathbb{Z}$ is not an integral domain and $\mathbb{Z}[\sqrt{a}]$ is.
|
H: For what $a>0$ does this integral converge? $\int_{1}^{\infty} {\frac{x^a(\sin x+2)}{x^{2a}\ln(a)}}$
For what $a>0$ does this integral converge?
$$\int_{1}^{\infty} {\dfrac{x^a(\sin x+2)}{x^{2a}\ln(a)}}$$
So I said:
$\left|\dfrac{x^a(\sin x+2)}{x^{2a}\ln(a)} \right| \leq \left| \dfrac{3x^a}{x^{2a}\ln(a)} \right| = \left|\dfrac{3}{x^a \ln(a)} \right|$, and we know that
$$
\int_{1}^{\infty} {\frac{3}{x^a \ln(a)}} = \frac{3}{\ln(a)} \int_{1}^{\infty} {\frac{1}{x^a}}
$$
is convergent for $a > 1$ and thus our original integral.
But my professor said that this convergence test is only for non-negative integrals, but if not like this, then how do we do it?
AI: We can ignore $\ln(a)$ in the denominator: for $a>0$ with $a\neq 1$ it is a nonzero constant, so it doesn't affect convergence (for $a=1$ the integrand is not defined at all, since $\ln(1)=0$). Then in our domain,
$$
\frac 1 {x^a} \leq \frac{x^a(\sin x+2)}{x^{2a}} \leq \frac 3 {x^a}
$$
This proves, as you concluded, that the integral is convergent for $a>1$ and divergent for $0<a<1$.
Regarding the absolute value technique, it works for non-negative functions. But also for non-positive functions. If $f(x)\leq 0$ then $\int f(x) = -\int -f(x)$ where $-f(x)\geq 0$. What I've written is essentially the same thing, since as you can see, the function is bounded from below by a positive function, so you can put absolute value signs everywhere if you like.
|
H: If $A$ is an $n \times n$ matrix and $ A^2 = 0$, then $\text{rank}(A)\le n/2$.
Was revising for a Linear Algebra when I came across this question.
"Given a Matrix $A$ over $\mathbb{R}^{n \times n}$, and $A^2 = 0$,
show that rank($A$) $\leq n/2$"
My attempt:
I was thinking of the theorem from Cayley-Hamilton but the matrix is a $n \times n$ matrix hence that's probably invalid.
Is there perhaps a significance of $A^2 = 0$? apart from a special case of the nilpotent matrix?
AI: Hint: Let $k = \dim\ker A$. What can you say about $\dim\ker A^2$? Now use $\dim\ker A^2 + \mathop{\rm rank} A^2 = n$.
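A slightly more direct route, in case the hint is hard to unwind: $A^2=0$ says $A(Ax)=0$ for every $x$, i.e. $\operatorname{im}A\subseteq\ker A$, so
$$
\operatorname{rank}A\le\dim\ker A=n-\operatorname{rank}A\quad\Longrightarrow\quad\operatorname{rank}A\le\frac n2.
$$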
|
H: system of differential linear equations $y'=\begin{pmatrix}1 & 1\\0 & 1\end{pmatrix}y$
find the solution to the problem $y'=\begin{pmatrix}1 & 1\\0 & 1\end{pmatrix}y, y(0)=\begin{pmatrix}4\\0\end{pmatrix}$
I know i have to find the eigenvalues and eigenvectors of the matrix $A=\begin{pmatrix}1 & 1\\0 & 1\end{pmatrix}$
there's only one eigenvalue which is $1$ and only "one" eigenvector and we can choose $\begin{pmatrix} 1\\0\end{pmatrix}$
but now I dont know what to do.
what comes next?
I should say that I know how to find the exponencial of a matrix and also I want to know how to solve this kind of problem in general, so techniques to solve this particular one that wouldnt work on a more general problem dont help me much.
AI: Note: this method is not a general one, but is probably easier for the particular example provided.
In this case, you do not need to diagonalize the matrix. Indeed, if we write $y=(y_1,y_2)$, then the system is simply:
$$
\left\{\begin{aligned}
y_1'&=y_1+y_2\\
y_2'&=y_2
\end{aligned} \right.
$$
with initial conditions $y_1(0)=4$, and $y_2(0)=0$. Notice that the second equation is easy to solve:
$$
y_2(x)=C\exp(x).
$$
Since $y_2(x)=0$, we get $y_2=0$, thus the first equation reduces to $y_1'=y_1$. Finally,
$$
y(x)=\begin{pmatrix}4\exp(x) \\ 0\end{pmatrix}.
$$
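Added, since the question explicitly asks for a method that generalizes: the matrix-exponential route gives the same answer here. Writing $A=I+N$ with $N=\begin{pmatrix}0&1\\0&0\end{pmatrix}$, we have $N^2=0$ and $IN=NI$, so
$$
e^{Ax}=e^{x}e^{Nx}=e^{x}(I+Nx)=e^{x}\begin{pmatrix}1&x\\0&1\end{pmatrix},
\qquad
y(x)=e^{Ax}y(0)=e^{x}\begin{pmatrix}1&x\\0&1\end{pmatrix}\begin{pmatrix}4\\0\end{pmatrix}=\begin{pmatrix}4e^{x}\\0\end{pmatrix},
$$
in agreement with the computation above; for a general matrix with a repeated eigenvalue one does the same thing after splitting off the nilpotent part (Jordan form).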
|
H: Am I understanding this integration right?
This is the snippet of a problem from this PDF here. What I don't understand is why they retain the $\sin$ part for evaluation after integration when all that it is going to evaluate to is 0.
If I am not wrong, $\sin(n\pi) = 0$.
All that should be left after integrating by parts is:
$$
\left[\frac{-2(x-1)\cos\frac{n\pi x}{2}}{n\pi}\right]_1^2
$$
Please correct me if I am wrong
AI: The term you want to eliminate is of the form $$f(x)|^2_1$$ (with $f(x)=\frac{4}{n^2\pi^2}\sin\frac{n\pi x}{2}$) which stands for $$f(2)-f(1)$$ It is true that $f(2)$ involves $\sin(n\pi)$ and is zero, but $f(1)$ involves $\sin(n\pi/2)$ and is not always zero.
|
H: Smallest n digit number that can divide a n digit number
Is there any simple way to find the smallest n digit number that can divide a given n digit number?
For Example:
Let's take a two digit number xx. I want to find the smallest two digit number (yy) that can divide the number xx.
And if its three digit xxx then i need to find smallest three digit yyy.
AI: If $N$ is an $n$ digit number and $d$ is another $n$ digit number dividing $N$, then $1\le N/d\le 9$. So you divide successively by $9$, $8$, ..., $1$ until an exact division occurs. In your example $99/9=11$, so $11$ is the solution. If $N$ is prime (and $n\ge2$), then it is not divisible by $2,3,\dots,9$, so that the smallest $n$ digit number dividing $N$ is $N$ itself.
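A direct translation into code (a hypothetical helper, not part of the answer above) -- note the one extra check that the quotient still has $n$ digits, which covers cases like $100/5=20$ where the first exact division does not yield an $n$ digit divisor:

public class SmallestSameLengthDivisor {
    static long smallest(long n) {
        int digits = String.valueOf(n).length();
        for (long q = 9; q >= 1; q--) {
            // the largest valid quotient q = n/d corresponds to the smallest divisor d
            if (n % q == 0 && String.valueOf(n / q).length() == digits) {
                return n / q;
            }
        }
        return n;   // not reached: q = 1 always satisfies both conditions
    }

    public static void main(String[] args) {
        System.out.println(smallest(99));    // 11, as in the example above
        System.out.println(smallest(100));   // 100: no smaller 3-digit divisor exists
    }
}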
|
H: $Var(X-Y)$ strange outcome
THe time $Y$ between the arrival of two jobs has the probabilitiy density
$$
f(y) = \frac{1}{2}e^{-\dfrac{y}{2}}
$$
The duration, $Z \sim Unif[1,3]$ and is independent of Y
Assume that it is difficult to determine the probability densities of $Y$ and $Z$ and that one only has information about the expectations and the variances of $Y$ and $Z$. Give an upper bound for $P(Z>Y+3)$ that is strictly smaller than 1.
now the solution states:
$$
P(Z\geq Y+3) \leq P(|Z-Y|\geq 3)
$$
since we only have information on the expected values and variances of $Y$ and $Z$, we use the Chebyshev inequality for $U=Z-Y$. it holds that $E(Z-Y)=0$ and that $var(Z-Y) = 13/3$
now my problem is with the $\frac{13}{3}$, since
$$
Var(Z-Y) = Var(Z)+Var(Y)
$$
Since X and Y are independent, filling that in since $Y \sim EXP(\frac{1}{2})$
$$
\begin{align}
Var(Z-Y)&= Var(Z) + Var(Y)\\
&= \dfrac{(3-1)^2}{12} + \frac{1^2}{2^2}
\\ &= \frac{4}{12} + \frac{3}{12}
\\ &= \frac{7}{12}
\end{align}
$$
Is the answer given in the answers wrong? Or am I doing something wrong?
AI: I've spotted the mistake, it should have been
$$
\dfrac{4}{12} + \dfrac{48}{12} = \dfrac{52}{12} = \dfrac{13}{3}
$$
because if $X\sim exp(\frac{1}{\theta})$ then $var(X) = \theta^2$, not $(\frac{1}{\theta})^2$
|
H: Fixed point and non-fixed point function
For constructing another proof I need two functions explicitly and therefore
I was wondering whether there exists a function that has nowhere a fixed point and a function that (maybe depending on the closed interval $[a,b] \subset \mathbb{R}$ where it is defined) has always somewhere a fixed point for each interval $[a,b]$?
AI: Sure -- $f(x)=x-1$ and $g(x)=x$. I think the latter is the only such function with no restrictions on the interval; for the former, any function which does not intersect $y=x$ works.
|
H: Integral of $\frac{2}{(x^2+2x+5)^{1/2}}$
Please, I am stuck here for hours. Can somebody help me find the following integral $$\frac{2}{(x^2+2x+5)^{1/2}}$$ Only thing I can do is turn this into $$\frac{2}{(x^2+2x+1+4)^{1/2}} = \frac{2}{((x+1)^2+2^2)^{1/2}}$$ Don't know if that helps. Thank you!
AI: Hint
$$\sqrt{x^2+2x+5}=\sqrt{(x+1)^2+4}=2\sqrt{\left(\frac{x+1}{2}\right)^2+1}$$
and use
$$\int\frac{dx}{\sqrt{1+x^2}}=\sinh^{-1}x+C$$
with the change variable $$t=\frac{x+1}{2}$$
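Carrying the hint through (worth checking by differentiating): with $t=\frac{x+1}{2}$, so $dx=2\,dt$,
$$
\int\frac{2\,dx}{\sqrt{x^2+2x+5}}=\int\frac{4\,dt}{2\sqrt{t^2+1}}=2\sinh^{-1}\frac{x+1}{2}+C=2\ln\left(x+1+\sqrt{x^2+2x+5}\right)+C'.
$$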
|
H: Consider a set equipped with two idempotent functions that commute.
Consider a set $X$ equipped and two functions $f,g : X \rightarrow X$. Assume $f$ and $g$ commute with each other. Finally, call $x \in X$ a fixed point of $f$ iff $f(x)=x.$
Then we can show that if $x$ is a fixed point of $f$, then so too is $g(x)$.
Proof. Suppose $f(x)=x$. Then $g(f(x))=g(x)$. So $f(g(x))=g(x)$. So $g(x)$ is a fixed point of $f$.
This is probably a silly question, but if we assume that $f$ and $g$ are idempotent, does the converse necessarily hold?
AI: No. Consider the two element set $\{ 0,1\}$. Suppose that $f$ and $g$ are both equal to the function taking everything to $0$. Then $g(1)$ is a fixed point of $f$, since $f(g(1))=f(0)=0=g(1)$. However, $1$ is not a fixed point, since $f(1) = 0 \neq 1.$
|
H: Question on Cauchy filters
Let $(X,\mathcal{V})$ is a uniform space and $\xi$ is a Cauchy filter on $(X,\mathcal{V})$.
$o(\xi)$ is the family of all open subsets of $X$ containing at least one element of $\xi$.
what does "containing at least one element of $\xi$" mean? That is, for every element $V$ of $o(\xi)$ there exists at least one $A\in\xi$ such that $A\subset V$; or something else?
Is $o(\xi)$ a Cauchy filter on $(X,\mathcal{V})$?
AI: $U\in o(\xi)$ iff $U\subseteq X$ is open in the topology induced by $\mathcal V$ and there exists some $A\in \xi $ such that $A\subseteq U$. In general, $o(\xi)$ will not be a filter since there is no reason for it to have the property that if $U\subseteq V$ and $U\in o(\xi)$, then $V\in o(\xi)$ (as $V$ may easily fail to be open).
|
H: How to check if three coordinates form a line
Assume I have three coordinates from a world map in Longitude + Latitude. Is there a way to determine these three coordinates form a straight line?
What if I was using a system with bounds that defines the 2 corners (northeast - southwest) in Long/Lat?
The long & lat are expressed in decimal degrees.
AI: If a point has longitude $\alpha$ (from $-180^\circ$ to $+180^°$) and latitude $\beta$ (from $-90^°$ to $+90^°$), the corresponding 3D point is
$$ (x,y,z)=(\cos\alpha\cos\beta,\sin\alpha\cos\beta,\sin\beta).$$
If you have three such points, the determinant
$$ \left|\begin{matrix}x_1&x_2&x_3\\y_1&y_2&y_3\\z_1&z_2&z_3\end{matrix}\right|$$
is zero if and only if the plane through the three points passes through the Earth center.
In the presence of rounding errors you will never have exactly zero, so the question is when to accept an almost zero value as good ...
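If it helps to see the determinant test as code, here is a sketch (hypothetical helper names, not from the answer above) that converts degree coordinates to unit vectors and checks the determinant against a tolerance:

public class GreatCircleTest {
    static double[] toUnitVector(double lonDeg, double latDeg) {
        double lon = Math.toRadians(lonDeg), lat = Math.toRadians(latDeg);
        return new double[] { Math.cos(lon) * Math.cos(lat),
                              Math.sin(lon) * Math.cos(lat),
                              Math.sin(lat) };
    }

    // determinant of the 3x3 matrix whose columns are p1, p2, p3
    static double det(double[] p1, double[] p2, double[] p3) {
        return p1[0] * (p2[1] * p3[2] - p2[2] * p3[1])
             - p2[0] * (p1[1] * p3[2] - p1[2] * p3[1])
             + p3[0] * (p1[1] * p2[2] - p1[2] * p2[1]);
    }

    static boolean onOneGreatCircle(double[][] lonLat, double tolerance) {
        double[] a = toUnitVector(lonLat[0][0], lonLat[0][1]);
        double[] b = toUnitVector(lonLat[1][0], lonLat[1][1]);
        double[] c = toUnitVector(lonLat[2][0], lonLat[2][1]);
        return Math.abs(det(a, b, c)) < tolerance;   // "zero" up to rounding error
    }

    public static void main(String[] args) {
        double[][] pts = { {0, 0}, {30, 0}, {60, 0} };   // three points on the equator
        System.out.println(onOneGreatCircle(pts, 1e-9)); // true
    }
}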
A simplified alternative: If all your points are relatively close to one another (e.g. less than 100km apart and not too close to the poles), it is a not-too-bad approximation to work with the angles themselves (after all viewing the Earth as a sphere is also just an approximation) as $x$ and $y$ coordinates and check collinearity, that is
$$ \left|\begin{matrix}\alpha_1&\alpha_2&\alpha_3\\\beta_1&\beta_2&\beta_3\\1&1&1\end{matrix}\right|\approx 0.$$
|
H: I didn't understand the proof of the chain rule
From a lecture note about analysis:
For differentiable functions $f$ and $g$
$$
(g\circ f)'(x_0)=g'(f(x_0))f'(x_0).
$$
Proof: Consider the limit $$\lim_{x \to x_0}{\frac{g(f(x))-g(f(x_0))}{x-x_0}}.$$ Note that $$\lim_{x \to x_0}{\frac{g(f(x))-g(f(x_0))}{x-x_0}}=\lim_{x \to x_0}{\frac{g(f(x))-g(f(x_0))}{f(x)-f(x_0)}\cdot\frac{f(x)-f(x_0)}{x-x_0}}$$; the second term converges to $f'(x_0)$, while the first term (since $x-x_0\to 0$ implies $f(x)-f(x_0)\to 0$ by the continuity of $f$) converges to $$\lim_{x \to x_0}{\frac{g(f(x))-g(f(x_0))}{f(x)-f(x_0)}}=g'(f(x_0)).$$
Question Why is the bold sentence necessary? Didn't we already assume that $g$ is differentiable at $f(x_0)$? Why didn't he write "..., while the first term converges to $g'(f(x_0))$ by assumption."?
AI: Knowing that $\lim_{x\to x_0}f(x)=f(x_0)$ is what is required to say that $$\lim_{x\to x_0}\frac{g(f(x))-g(f(x_0))}{f(x)-f(x_0)}=\lim_{u\to f(x_0)}\frac{g(u)-g(f(x_0))}{u-f(x_0)},$$ which is $g'(f(x_0))$ by differentiability of $g$ at $f(x_0)$.
Unfortunately, this proof has a substantial hole in it: the claim $$\frac{g(f(x))-g(f(x_0))}{x-x_0}=\frac{g(f(x))-g(f(x_0))}{f(x)-f(x_0)}\cdot\frac{f(x)-f(x_0)}{x-x_0}$$ makes sense only when $f(x)-f(x_0)\ne 0.$ If $f$ is a constant function (at least locally, near $x_0$), for example, then this manipulation makes no sense at all (though in that case, we've other recourse). This is why in this proof of the chain rule, alternate means are taken to bring the $g'(f(x_0))$ analog term in, without risking division by $0$. Does that answer your commented question, or are there particular steps in that proof that you don't understand?
|
H: Modification of the triangle inequality
We know from the triangle inequality that $X+Y \geq Z$,
My question is under what conditions of $a,b,c$ (acute, obtuse or right angle) that $Z >X $ and $Z \geq Y $
AI: The sine rule tells you that
$$\frac{Z}{\sin c} = \frac{X}{\sin a}$$
So $Z > X$ iff $\sin c > \sin a$. Since $0 < a$, $0 < c$ and $a + c < \pi$, $\sin c > \sin a$ iff $c > a$.
Similarly $Z \geq Y$ iff $c \geq b$
|
H: Evaluation of $\lim_{N\to\infty}\sum_{k=1}^N \frac{1}{k+\alpha N}$
The series
$$S(\alpha,N)=\sum_{k=1}^N \frac{1}{k+\alpha N}$$
can be evaluated as:
$$S(\alpha,N)=\Psi(N+1+\alpha N)-\Psi(1+\alpha N)$$
The limit $$L(\alpha)=\lim_{N \to \infty}S(\alpha,N)$$
can be evaluated for different values of the parameter $\alpha$. For example: $L(1)=\ln(2)$, $L(2)=\ln\frac{3}{2}$ and so on. Is there a formula for $L(\alpha)$ with $\alpha\in\mathbb N$?
Thanks.
AI: Since
$$\psi(x)=\ln x-\frac{1}{2x}+O(x^{-2})$$
as $x\rightarrow\infty$, your limit is equal to $\displaystyle \ln\frac{1+\alpha}{\alpha}$.
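In slightly more detail, using only $\psi(x)=\ln x+O(1/x)$:
$$
L(\alpha)=\lim_{N\to\infty}\Bigl[\ln\bigl((1+\alpha)N+1\bigr)-\ln\bigl(\alpha N+1\bigr)+O\!\left(\tfrac1N\right)\Bigr]=\lim_{N\to\infty}\ln\frac{(1+\alpha)N+1}{\alpha N+1}=\ln\frac{1+\alpha}{\alpha},
$$
which reproduces $L(1)=\ln 2$ and $L(2)=\ln\frac32$ from the question.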
|
H: Is the centralizer of a normal subgroup equal to the centralizer of the intersection of that normal subgroup with the commutator subgroup?
Let $G$ be a group, $N$ a normal subgroup of $G$, $C_G(X)$ the centralizer of $X$ in $G$ and $G'$ the commutator subgroup of $G$. Is it true that $C_G(N) = C_G(N \cap G')$?
If true, how do I prove it? If not, I'd like to know a counterexample or even better a characterization of the groups for which equality holds.
AI: No. Take a prime $p$, a non-abelian group $G$ of order $p^3$, and $N$ a subgroup of $G$ of order $p^2$. We have that $G'$ has order $p$, and $G'\le N$.
Then $C_{G}(N) = N$, while $C_{G}(N \cap G') = C_{G}(G') = G$.
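If a concrete instance helps: take $G=D_4=\langle r,s\mid r^4=s^2=1,\ srs=r^{-1}\rangle$ (so $p=2$) and $N=\langle r\rangle$. Then $G'=Z(G)=\langle r^2\rangle\le N$, and
$$
C_G(N)=\langle r\rangle=N,\qquad C_G(N\cap G')=C_G(\langle r^2\rangle)=G,
$$
so the two centralizers really do differ.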
|
H: In how many ways can 5 identical balls be placed in a (3x3) grid such that each row contains atleast one ball?
I tried two methods here. First, the usual one of checking different arrangements with a 3,1,1 distribution and the other with a 2,2,1 distribution and calculating their respective combinations. Works out to a total of 108. Again a long method. So wanted a shorter one.
Then I tried the "Stars & Bars" approach. I first gave each row a ball, so I am left with 2 balls and 6 spaces to fill. But somehow I am unable to arrive at the right answer. Could someone help point out the flaw in reasoning? Thanks.
AI: It may be easier to count the number of ways to place the balls into only two rows, then subtract this from the total number of ways to place 5 balls on the grid in any arrangement. In order to use only two rows, one row must have 3 balls, and the other 2 -- there are $3! = 6$ ways to choose which row has 3 balls and which has 2. On the row with two balls, there are three distinct configurations (choose which space to leave empty). Therefore, the number of ways to place 5 balls in 2 rows is:
$$
3! \cdot 3 = 18.
$$
Finally, count the number of possible arrangements of the 5 balls on the $3 \times 3$ grid:
$$
\binom{9}{5} = 126.
$$
Thus, the number of ways the 5 balls can be arranged with at least 1 ball on each row is:
$$
126 - 18 = 108.
$$
|
H: Isolated Singularities
Consider the following functions and determine which kind of singularities they have in $z_0$. If it is a removable singularity, then calculate the limit; if it is a pole, then give the order of the pole and the main part.
$$
f(z)=\frac{1}{1-e^{z}}, z_0=0~~~~~~~~~~~~~~~~~~~~g(z)=\frac{1}{z-\sin z}, z_0=0
$$
Concerning $f(z)$, I wrote
$$
f(z)=\frac{1}{\sum\limits_{i=1}^{\infty}\frac{z^i}{i!}}
$$
and then considered
$$
\lim\limits_{z\to 0}\frac{z^k}{\sum\limits_{i=1}^{\infty}\frac{z^i}{i!}}=0~\forall~k\in\mathbb{N}_0
$$
So the smallest $k\in\mathbb{N}_0$ for which the limit exists is $k=0$ and so here $z_0$ is a removable singularity. Is that right? And what is meant with calculating the limit now?
AI: I'm cringing at your use of $i$ as an index while dealing with complex numbers, but I digress.
The limit $$\lim_{z\to 0}f(z)=\lim_{z\to 0}\cfrac{-1}{\sum\limits_{n=1}^\infty\frac{z^n}{n!}}$$ fails to exist, since the denominator vanishes. (Note also the negative sign that you omitted.) However, for $z\ne 0,$ we can write $$zf(z)=\cfrac{-1}{\sum\limits_{n=1}^\infty\frac{z^{n-1}}{n!}}=\cfrac{-1}{1+\sum\limits_{m=1}^\infty\frac{z^m}{(m+1)!}},$$ whence $\lim\limits_{z\to 0}zf(z)=-1.$ Hence, $f$ has a pole of order $1$ at $z=0$. Can you take it from there?
|
H: A proof in circles
The question is this:
Show that the equation of a straight line meeting the circle $ x^2 + y^2 = a^2 $ in two points at equal distances $d$ from a point $(m, n)$ on its circumference is
$ mx + ny - a^2 + \dfrac {d^2} {2} = 0$
I am confused. Hints please!
EDIT:
I tried to relate the perpendicular distance between the line and $(m, n)$ but then I seem to be getting nowhere.
AI: Let $p, q$ be the two intersections of the line with the circle:
$$x^2 + y^2 - a^2 = 0\tag{*1}$$
Since $p$ and $q$ are at distance $d$ from the point $(m,n)$, $p$, $q$ lies on another
circle:
$$(x - m)^2 + (y - n)^2 - d^2 = 0\tag{*2}$$
$(*1) - (*2)$ implies $p, q$ also satisfy:
$$\begin{align}&2mx - m^2 + 2ny - n^2 - a^2 + d^2 = 0\\
\iff & 2mx + 2ny - 2 a^2 + d^2 = 0\qquad\text{(using }m^2+n^2=a^2\text{, since }(m,n)\text{ lies on the circle)}\\
\iff & mx + ny - a^2 + \frac{d^2}{2} = 0
\end{align}$$
This is the equation of a line and since two points determine a line, this is the equation we want.
|
H: Convergence of dual product of weakly and strongly convergent sequences
Say $V$ is a reflexive Banach space, so that we can identify the dual of the dual $V'$ with $V$, i.e. $(V')'=V$.
Consider two sequences $\{v_k\} \subset V$ and $\{f_k\} \subset V'$ that converge to $v\in V$ and $f\in V'$ in the strong and in the weak sense, i.e.
$v_k \to v$ in $V$ and
$f_k \rightharpoonup f$ in V'
as $k \to \infty$. How to show, that
then $\langle v_k, f_k \rangle \to \langle v, f \rangle$ in $\mathbb R$,
where $\langle \cdot,\cdot \rangle$ denotes the dual pairing on $V\times V'$?
I see that the separate sequences $\langle v_k, f \rangle$ and $\langle v, f_k \rangle$ both converge (the latter because of reflexivity of $V$) [see comments below] but I haven't come to terms with the mutual convergence.
AI: Write
$$|\langle f_k,v_k\rangle-\langle f,v\rangle|\leqslant \lVert f_k\rVert\cdot\lVert v_k-v\rVert+|\langle f_k,v\rangle-\langle f,v\rangle|,$$
and notice that the sequence $(\lVert f_k\rVert,k\geqslant 1)$ is bounded (any weakly convergent sequence is norm-bounded, by the uniform boundedness principle).
|
H: A group of order $8$ has a subgroup of order $4$
Let $G$ be a group of order $8$. Prove that there is a subgroup of order $4$.
I know that if $G$ is cyclic then there is such a subgroup (if $G=\langle a\rangle$ then the order of $\langle a^2\rangle$ is $4$). But how do I prove this when $G$ is not cyclic? Also, I know that $G$ has an element of order $2$, because the order of $G$ is even. I suspect that assuming all elements of $G$ are of order $2$ somehow leads to a contradiction but am unable to show it. Is this correct or is there a different approach that I'm missing? thanks
AI: You already noted that if $G$ has an element of order $8$ or $4$ then we are done.
Thus we can assume all elements have order $2$ (except the identity element). Then $G$ is abelian (this is a standard exercise, and I am certain it has been asked on MSE several times).
Let $a$ and $b$ be distinct elements of order $2$. Now it is straightforward to check that $\{e,a,b,ab\}$ is a subgroup of order $4$ (where $e$ is the identity element of $G$).
|
H: Convergence in probability of the sum of scheme of series
Could you please help with this one. It looks like smth simple but I can't figure it out.
Let $\{x_{in}\}, \ i=1,\dots, n, \ n=1,\dots,\infty$ be the scheme of series of random variables. For each $n$ random variables $\{x_{in}\}, \ i=1,\dots,n,$ are i.i.d. Moreover for each $i$ we have $x_{in}\stackrel{P} \rightarrow 0, \ n\to \infty.$
Is it true that
$$
\frac{1}{n}\sum_{i=1}^{n}x_{in}\stackrel{P}\rightarrow 0, \quad n\to\infty \ ?
$$
Notice that $x_{in}$ may be random variables that don't have expectations and variances.
Thanks in advance!
AI: Take $x_{i,n}$ such that for each $n$, $\mathbb P\{x_{i,n}=n\}=c_n$ and $\mathbb P\{x_{i,n}=0\}=1-c_n$. Then pick the sequence $(c_n,n\geqslant 1)\subset [0,1]$ such that:
$\lim_{n\to\infty}c_n=0$, and
$\lim_{n\to\infty}(1-c_n)^n\neq 1$ (for instance $c_n=\tfrac1n$ works, since then $(1-c_n)^n\to e^{-1}$).
|
H: Potential function & constant
$\mathbf{Question:}$
Given a pair of functions $Φ:\Bbb R^2 \to \Bbb R $ and $Ψ: \Bbb R^2 \to \Bbb R$, it is often useful to know that there exists some continuously differentiable function $f:\Bbb R^2 \to \Bbb R$ such that $\partial f/\partial x(x,y)=Φ(x,y)$ and $\partial f/\partial y(x,y)=Ψ(x,y)$ for all $(x,y)\in \Bbb R^2$
Such a function $f$ is called a potential function for the pair of functions $(Φ,Ψ)$
$\mathbf{part \ a)}$ show that if a potential function exists then this potential is uniquely determined up to an additive constant - that is, the difference of any two potentials is constant.
I guess I need to use the definition of continuous differentiability. But how?
Please show me a way. Thank you:)
AI: Assume that $f$ and $g$ are two potential functions for the vector field $(\Phi,\Psi)$. Then the gradient of $f-g$ is identically zero. What do you know about functions with vanishing gradient (on pathwise connected sets)?
If you don't know any such suitable theorem, try taking an arbitrary parametrized curve $\gamma(t)$, and look at $h(t) = f(\gamma(t)) - g(\gamma(t))$.
|
H: Process of finding a solution for a problem
I have a question that doesn't need an answer, just an explanation.
Prove that if $A$ is nilpotent, then $I+A$ is invertible.
The solution is:
Given that $A^k=0$ for some integer $k$, you can say that
$I - A^k = I$
you can factor this as
$(I + A)(I - A + A^2 - A^3 +\cdots+(-A)^{k-1}).$
My question is how should I come up with the statement that $I-A^k=I$? Where do I get the insight to figure out that $I-A^k=I$?
Thanks.
AI: What you ask is just the definition of nilpotent matrix (I think that you mean by $A$ a matrix but this definition is true in any ring):
Definition: A matrix $A\in\mathcal{M}_n(\mathbb F)$ is said nilpotent if there's some $k\in\mathbb N$ such that $A^k=0$.
Added The question is to prove that $I+A$ is invertible that's means there's some matrix $B$ such that $(I+A)B=I$ which is equivalent to $B=(I+A)^{-1}$.
The idea is to use a result from analysis: we know that
$$\frac{1}{1+x}=\sum_{k=0}^\infty (-1)^k x^k,\quad |x|<1$$
and if we substitute $x$ by $A$ (formally), then since $A$ is nilpotent the infinite sum is in fact a finite sum, and we find the desired result.
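To see directly why the finite sum works (this is just the telescoping behind the factorization quoted in the question): with $B=\sum_{j=0}^{k-1}(-1)^jA^j$,
$$
(I+A)B=\sum_{j=0}^{k-1}(-1)^jA^j+\sum_{j=0}^{k-1}(-1)^jA^{j+1}=I-(-1)^kA^k=I
$$
since $A^k=0$, and likewise $B(I+A)=I$, so $B=(I+A)^{-1}$.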
|
H: e-notation scientific notation
Hey all I asked this over at StackOverflow =) and I got a good answer but I still have no idea what's going on. I want to know how the expression got to the answer and maybe the math behind how it was reasoned.. the steps if you would be so kind. The likelihood is I don't understand the equation.
I'm not a big maths person please be gentle :p
For all those playing along at home I'm reading Absolute Java 5th
Edition by Walter Savitch (Chapter 2 Page 66).
The Expression
double d = 12345.123456789;
System.out.printf("START%12.5e END %n", d);
The Answer
START 1.23451e+04END
I understand the basic principles of this printf method's arguments
for example I know the '%' represents the start of the parameters.
'12' is the number of spacing, '.5' is the times the decimal point
will move.... I see the decimal point has moved 4 places towards the
left... can someone explain the principles of e-notation. Also how
this expression came to this answer =).
As far as e-notation goes, it's meant to be scientific notation;
So like 5.89e-4 would mean 0.000589 (move the decimal place if minus
left if not move the decimal place right).
AI: Used like this, eN (much more often EN) is just a symbol for $10^N$. It is used because scientific notation is convenient for large and small numbers and it avoids the need for superscripts. Superscripts used to be much more difficult to produce than they are today, and even today they are not easy in (some) word processors.
12.5e says allocate 12 spaces to the total output (including signs, e, etc.) and put 5 digits behind the decimal point. In your example there is room for a leading + sign (omitted) and eleven more characters. If the format were 12.3e you would get 1.235e+04 instead: the mantissa always keeps exactly one digit before the decimal point, and the precision only controls how many digits follow it. You need at least seven more total characters than decimals for the leading sign, the decimal point, e, the exponent sign, and two digits of exponent.
In your example, the decimal point moves 4 places to the left so that exactly one digit is left in front of it, and the exponent after e is 4 to record that shift.
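A quick way to see the width/precision interplay is to run a few variants (hypothetical demo code, not from the book; the outputs in the comments are what Java's Formatter should produce, but do run it to confirm):

public class ENotationDemo {
    public static void main(String[] args) {
        double d = 12345.123456789;
        System.out.printf("[%12.5e]%n", d);  // [ 1.23451e+04]  11 characters padded to width 12
        System.out.printf("[%.3e]%n", d);    // [1.235e+04]     always one digit before the point
        System.out.printf("[%20.5e]%n", d);  // same digits as the first line, more left padding
    }
}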
|
H: When can a non-autonomous system NOT be re-written as an autonomous system?
Consider Duffing's equation
$\ddot x + \delta \dot x + \alpha x + \beta x^3 = \gamma \cos{\omega t},$
where $\delta, \alpha, \beta, \gamma$ and $\omega$ are real parameters, $t$ represents time and $\dot x := dx/dt$.
Since there is an explicit dependence on time, this is classified as a non-autonomous system; however (following Guckenheimer and Holmes) the system can be rewritten as an autonomous system
$\dot u = v$,
$\dot v = \gamma \cos{\omega \theta} - \delta v - \alpha u - \beta u^3$,
$\dot \theta = 1$,
with $(u,v,\theta) \in \mathbb{R}^2 \times S^1$. My questions:
Are there examples of systems where the above procedure doesn't work?
If so, what are the implications?
Please suggest edits if the question is too broad - I'm still a novice in this area!
Best regards, \T
AI: Given a non-autonomous system $x'=f(x,t)$, you can introduce a new vector function $u(t)=(x(t),t)$ which satisfies the autonomous system $u'=g(u)$ with $g(u)=(f(u),1)$. So the answer is yes, you can always turn a system into an autonomous one.
The implication is that the dimension of the system goes up. And while autonomous systems are often easier to understand by analysis of their equilibria, we do not get a free lunch here: the new system $u'=g(u)$ has no equilibria ($g$ never vanishes).
|
H: Dirichlet integral
I want to prove
$$\displaystyle\int_0^{\infty} \frac{\sin x}x \,\mathrm{d}x = \frac \pi 2$$
and
$$\displaystyle\int_0^{\infty} \frac{|\sin x|}x \,\mathrm{d}x \to \infty$$
I found a proof on Wikipedia, but I can't understand it. I haven't learned differential equations, Laplace transforms, or even inverse trigonometric functions.
So please explain it as simply as possible.
AI: About the second integral: Set $x_n = 2\pi n + \pi / 2$. Since $\sin(x_n) = 1$ and
$\sin$ is continuous in the vicinity of $x_n$, there exists $\epsilon, \delta > 0$ so that $\sin(x) \ge 1 - \epsilon$ for $|x-x_n| \le \delta$. Thus we have:
$$\int_0^{+\infty} \frac{|\sin x|}{x} dx \ge 2\delta\sum_{n = 0}^{+\infty} \frac{1 - \epsilon}{x_n} = \frac{2\delta(1-\epsilon)}{2\pi}\sum_{n=0}^{+\infty} \frac{1}{n + 1/4} \rightarrow \infty $$
|
H: Probability of guessing two out of five answers in multi-choice test
I am doing some exercises for a probability class I am taking, but the answer to one confuses me quite a bit. The problem is to calculate the probability of getting two or fewer correct answers on a multi-choice test with three alternatives per question and six questions, if you guess all answers (this might sound similar to another of my questions, but this differs a bit.).
Now, with these circumstances, as only one alternative per question is correct, $P(\text{correct guess}) = {1 \over 3}$. This gives that $P(\text{incorrect guess}) = 1-P(\text{correct guess}) = {2 \over 3}$. So far so good. Then, to calculate the chance to get two or fewer correct answers I chose to calculate
$$\sum_{x=4}^{6}p_\mathbb{X}(x),$$
where $p_\mathbb{X}(x)$ is the probability to get $x$ incorrect guesses. This may not be the shortest path, but I figured it would work. So, I get the following equations:
\begin{gather}
p_\mathbb{X}(x) = {6 \choose x}{({2\over3})^x}{({1 \over 3})^{6-x}} \\
P(\text{two or fewer correct guesses}) = \sum_{x=4}^{6}p_\mathbb{X}(x) \\
\sum_{x=4}^{6}p_\mathbb{X}(x) = {{6 \choose 4}({2\over3})^4({1 \over 3})^{2}+{6 \choose 5}({2\over3})^5({1 \over 3})+({2\over3})^6} \approx 0.68.
\end{gather}
This did, to me, seem fairly reasonable. Then I looked at the answer to the exercise and found this:
$$\sum_{x=0}^{2}p_\mathbb{X}(x) = {6 \choose 0}({1\over3})^6+{6\choose1}({1\over3})^5({2\over3})+{6\choose2}({1\over3})^4({2\over3})^2 \approx 0.100.$$
Now, I can see why their approach would be more straight forward, just adding the probability to get zero, one and two correct guesses. Fine. But it confuses me that the first iteration, ${6\choose0}({1\over3})^6$ calculates the probability of correct guesses in all questions. The answer I found in the book, it seems to me, calculates the probability of getting two faulty guesses or fewer. Now, I wonder if I am correct? Does the book indeed provide the probability of getting two or fewer faulty guesses rather the two or fewer correct guesses, and if it does provide the sought value, what have I done wrong?
AI: You are correct that the book is calculating the chance of two or less being faulty. The second term (once you correct the $\frac 12$ to $\frac 23$) calculates the chance of five guesses correct.
|
H: Can $\sum_{x \in [0,1]} e^x$ be represented as an integral?
In $$\sum_{x \in [0,1]} e^x,$$ $e^x$ is summed over all values in the interval $[0,1]$.
Am I right to say that $$\sum_{x \in [0,1]} e^x = \int^{x=1}_{x=0} e^x \, \mathrm dx?$$
AI: No, they're not equal; the sum of uncountably many positive numbers is necessarily infinite, whereas the integral
$$\int_0^1\exp(x)\,dx=e-1$$
is finite.
|
H: Does $f(n)\sim g(n)$ imply $\lim_{k\to\infty} \frac{1}{k} \sum_n f(n)/g(n) = 1$?
Is it true that
$$\lim_{k\to\infty}\frac{1}{k}\sum_{n=1}^k \frac{f(n)}{g(n)} = 1 \leftrightarrow f(n)\sim g(n).$$
My thought: $f(n)\sim g(n) \to \frac{1}{k}\sum \frac{f(n)}{g(n)} = 1$ since we can find an arbitrarily small $\epsilon$ with $\left|\frac{f(n)}{g(n)}-1\right|< \epsilon$ so for all but finite $n$ the ratio $f/g$ is arbitrarily close to $1$.
But I think $\lim_{k \to\infty} \frac{1}{k}\sum_{n=1}^k \frac{f(n)}{g(n)} =1 \nrightarrow f\sim g$
since we could have for example
$\frac{f(n)}{g(n)} = 1,0,1,1,0,1,1,1,0,1,1,1,1,0,\dots$
Is this correct?
Thanks for any assistance.
AI: The arrow $\leftarrow$ is a consequence of Cesàro summation. Whenever $\lim_{n\to\infty}a_n$ exists, the limit $\lim_{n\to\infty}\frac{a_1+\ldots+a_n}{n}$ also exists and has the same value, but not the other way round
|
H: How to simplify a sum with factorial?
I have the following sum:
$$\sum_{n=1}^k \frac{k!}{n!(k-n)!}, \quad k=9$$
wolfram alpha
It got simplified to $2^k-1$. How can I do it with math formulas?
Thank you!
AI: Use the binomial theorem, which states:
$$
\sum_{n=0}^k a^n b^{k-n} \frac{k!}{n!(k-n)!} = (a+b)^k
$$
Use $a=b=1$, that is where the $2^k$ comes from.
The -1 is because the theorem includes the term when $n=0$, whereas in your question the summation only starts at $n=1$. So you must subtract the term when $n=0$, which is $1$, hence $-1$.
So $2k-1$.
|
H: Why is $\frac{x}{\| x \|}$ a unit vector?
Let $x$ be a vector in $\mathbb{R}^n$. Why is $\frac{x}{\| x \|}$ a unit vector, for $x\neq 0$?
If I try to simplify it, I get the following: $\frac{x}{\sqrt{x \cdot x}}$, and I'm not sure how to proceed.
So I'm looking for a proof or explanation as to why it is a unit vector.
AI: Hint: Show that for any $\lambda\in\mathbb{R}$ and any $x\in\mathbb{R}^n$, we have
$$\|\lambda x\|=|\lambda|\cdot\|x\|.$$
What does this become when $x\neq 0$ and $\lambda=\frac{1}{\|x\|}$?
|
H: If $G$ is a finite group of order $n$, why is it isomorphic to its centralizer in $S_n$?
If $G$ is a finite group of order $n$, why is it isomorphic to its centralizer in $S_n$? Here, we embed $G$ in $S_n$ via the left regular representation.
From thinking a bit about the classification of covering spaces in algebraic topology, I suspect that this must be true, and I expect that I can probably prove it using topology if I wanted to.
Is there a representation/group-theoretic way of proving this?
Also, is the converse true? Ie, if $G$ is a subgroup of $S_n$ that is isomorphic to its own centralizer, then it must be transitive and have order $n$?
AI: I learned about the centralizer of the regular representation in M. Hall's Theory of Groups, Theorem 6.3.1 on page 86. In particular a very simple group theoretic proof is given.
Representation theory indicates that this should be true, since a permutation group is basically a module, and the endomorphism ring (also known as the centralizer) of a regular module is the opposite ring. Since a group is isomorphic to its opposite group via $g \mapsto g^{-1}$, it is no surprise that the centralizer of the left regular permutation representation is the right regular permutation representation.
The converse fails: a self-centralizing group need not move all of the points (for instance the cyclic group generated by an $n$ cycle in $S_{n+1}$ is self-centralizing as long as $n>1$). Even if a self-centralizing group moves all the points, it need not be transitive (for instance, the non-standard Klein 4 group, $\{ (), (12), (34), (12)(34) \}$).
However, if one assumes that the group is transitive, then an exercise in Wielandt's excellent textbook Permutation Groups on page 9 shows that a transitive group must be regular if it has the same order as its centralizer. The exercise has one show that the centralizer of a regular group is semiregular, and its order is equal to the number of fixed points in the original group's point stabilizer. For the orders to be equal, the point stabilizer must be the identity subgroup, that is, the group must be regular.
|
H: $\mathbb{R}P^2$ and its lines
I have been solving some past exam questions and I came across the following question. Let $r$ and $s$ two distinct lines in $\mathbb{R}P^2$, and let $X$ the space obtained contracting $r \cup s$ to a point in $\mathbb{R}P^2$. Say if $X$ is a topological manifold.
I cannot imagine what $X$ is, and I think this is my problem.
Help would greatly be appreciated.
AI: Contracting one line to a point gives a sphere. (Why?) Contracting two lines then is equivalent to contracting one loop on a sphere to a point. What do you think you'll get?
|
H: Expected number of objects of type A I will get if I take $K$ objects.
Suppose I have:
$x$ objects of type A.
$y$ objects of type B.
Now I take $K$ objects randomly, taking each object with the same probability. What is the expected number of objects I will get of type A?
I can solve it using dynamic programming but it's too costly for big values.
AI: I take it we have a total of $x+y$ objects, and we take a sample of $K$ objects, without replacement. Then the number of Type A objects we get has hypergeometric distribution. If you like, you probably find everything you need about the hypergeometric distribution by going through parts of this Wikipedia article.
In particular, you will find that the expected number of Type A objects is $K\dfrac{x}{x+y}$.
Remark: The proof is quick, if one has some background in random variables. We can imagine that the objects are removed and examined one at a time. Let random variable $W_i$ be equal to $1$ if the $i$-th object chosen is of Type A, ane let $W_i=0$ otherwise. Then the number of Type A objects selected is equal to $W_1+W_2+\cdots +W_K$. By the linearity of expectation, we have
$$E(W_1+W_2+\cdots +W_K)=E(W_1)+E(W_2)+\cdots +E(W_K).$$
Each $W_i$ is $1$ with probability $\frac{x}{x+y}$, so has expectation $\frac{x}{x+y}$. It follows that the sum has expectation $K\frac{x}{x+y}$.
|
H: Proving $\text{rank}(A+B) \leq \text{rank}(A) + \text{rank}(B) - \text{rank}(AB)$ when $A,B$ commute
Let $A,B$ be $n \times n$ matrices and $AB=BA$. Show that $$\text{rank}(A+B) \leq \text{rank}(A) + \text{rank}(B) - \text{rank}(AB)$$
Attempt at the solution:
I first proved that $$R(A+B) \subseteq R(A) + R(B)$$ where R(A) is Range(A).
Then I used the dimension formula that states that $$\dim(U+V) = \dim(U) + \dim(V) - \dim(U \cap V)$$ so that I get $$r(A+B) \leq r(A) +r(B) - \dim(R(A)\cap R(B))$$
The last part would be to show that $$\dim(R(A)\cap R(B)) = \dim(R(AB))$$ using the fact that $AB=BA$.
I need help with this last part. Also please critique me if the above solution is acceptable or if there is something wrong with it.
Thank you!!
AI: You do not need $\dim(R(A)\cap R(B)) = \dim(R(AB))$. All you need is $\dim(R(A)\cap R(B)) \ge \dim(R(AB))$, which is easy to prove using $AB=BA$.
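To spell out the step that is left as "easy": $R(AB)\subseteq R(A)$ always holds, and because $AB=BA$ we also have $R(AB)=R(BA)\subseteq R(B)$; hence
$$
R(AB)\subseteq R(A)\cap R(B)\quad\Longrightarrow\quad\operatorname{rank}(AB)\le\dim\bigl(R(A)\cap R(B)\bigr),
$$
which plugged into your inequality gives the claim.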
|
H: Find $\lim_{z\to 0}|\sin(1/z)/\sin(z)|, z\in \mathbb{C}?$
Find $$\lim_{z\to 0} \ \left|\frac{\sin(1/z)}{\sin(z)} \right|, z\in \mathbb{C}$$
I'm pretty sure that this limit doesn't exist, but I am not able to prove it.
AI: Hint: Consider the sequence
$$
z_n=\frac2{(2n+1)\pi}
$$
and the sequence
$$
z_n=\frac1{n\pi}
$$
|
H: The order of Cesaro convergent nonnegative sequence
Let $\{a_i\}_{i=1}^{\infty}$ be a real sequence s.t. $a_i \geq0$ and $(1/N)\sum_{i=1}^{N}a_i$ is convergent.
I wish to show that $\frac{\max_{1\leq i \leq N} a_i}{N} \to 0$ as $N\to \infty$.
AI: Let $a=\lim_{N\to\infty}\frac1N\sum_{i=1}^N a_i$.
Note that
$$ \frac1N\sum_{i=1}^N a_i = \frac{a_N}N+\frac {N-1}{N}\left(\frac1{N-1}\sum_{i=1}^{N-1}a_i\right)$$
Let $\epsilon>0$.
For $N$ sufficiently big, the left hand side and the second summand on the right differ from $a$ by at most $\frac \epsilon2$, hence $\frac{a_N}{N}<\epsilon$ for almost all $N$, say for all $N>N_0$.
Then for such $N$ $$\frac{\max_{1\le i\le N} a_i}{N} \le \max\left\{\frac{\max_{1\le i\le N_0}a_i}{N},\epsilon\right\}.$$
Thus if additionally $N>\frac{\max_{1\le i\le N_0}a_i}{\epsilon}$, we have
$$ 0\le\frac{\max_{1\le i\le N} a_i}{N}\le \epsilon.$$
|
H: Check the continuity of the next function $f(x)=\sum_{n=1}^{\infty}(x+\frac{1}{n^2})^n$
Check the continuity of the next function $f(x)=\sum_{n=1}^{\infty}(x+\frac{1}{n^2})^n$.
I've started by doing Cauchy test to check when the sum converges: $f(x)=\lim_{n\to\infty}\sqrt[n]{(x+\frac{1}{n^2})^n}=x$.
Now we know that the sum should converge whenever $x<1$.
So, Now I need to check uniform convergence. But I'm getting stuck on: $|f_n(x)-f(x)|=|(x+\frac{1}{n^2})^n-x|$.
What can I do?
AI: As you stated by using the Cauchy test the series is convergent for $x\in(-1,1)$.
Now let $0<a<1$ then we have
$$\left|x+\frac{1}{n^2}\right|^n\leq\left(a+\frac{1}{n^2}\right)^n=v_n\quad\forall x\in[-a,a]$$
and since
$$v_n=\exp\left(n\log\left(a+\frac{1}{n^2}\right)\right)\sim_\infty a^n$$
then the series $\displaystyle\sum v_n$ is convergent by comparison with a convergent geometric series. We deduce that the given series is normally convergent on every interval $[-a,a]\subset (-1,1)$, and therefore the function $f$ is continuous on $(-1,1)$.
|
H: $\frac{\partial Ψ}{\partial x}(x,y)=\frac{\partial Φ}{\partial y}(x,y)$ for all $(x,y)\in \Bbb R^2$
Given a pair of functions $Φ:\Bbb R^2 \to \Bbb R $ and $Ψ: \Bbb R^2 \to \Bbb R$, it is often useful to know that there exists some continuously differentiable function $f:\Bbb R^2 \to \Bbb R$ such that $\partial f/\partial x(x,y)=Φ(x,y)$ and $\partial f/\partial y(x,y)=Ψ(x,y)$ for all $(x,y)\in \Bbb R^2$.
Such a function $f$ is called a potential function for the pair of functions $(Φ,Ψ)$
Show that if there is a potential function for the pair of continuously differentiable functions $Φ:\Bbb R^2 \to \Bbb R $ and $Ψ: \Bbb R^2 \to \Bbb R$, then
$\frac{\partial Ψ}{\partial x}(x,y)=\frac{\partial Φ}{\partial y}(x,y)$ for all $(x,y)\in \Bbb R^2$
Sorry for not writing any ideas; I don't want to write wrong things. Thanks for helping!
AI: Hint:
$$
\frac{\partial \Psi}{\partial x}=\frac{\partial}{\partial x}\frac{\partial f}{\partial y}
$$
What is $\partial_y\Phi$?
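To fill in the step the hint points to: $\frac{\partial\Psi}{\partial x}=\frac{\partial}{\partial x}\frac{\partial f}{\partial y}$ and $\frac{\partial\Phi}{\partial y}=\frac{\partial}{\partial y}\frac{\partial f}{\partial x}$; since $\Phi$ and $\Psi$ are continuously differentiable, both mixed partials are continuous, so they agree by Clairaut's (Schwarz's) theorem, and hence $\partial\Psi/\partial x=\partial\Phi/\partial y$ everywhere.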
|
H: is $\sum_{n=1}^{\infty} {(-1)^n\sin(\frac{x}{n})}$ convergent?
is $\sum_{n=1}^{\infty} {(-1)^n\sin(\frac{x}{n})}$ convergent for $x \in [-a,a] \forall a>0$ ?
Perhaps I could use the Leibniz test? But then how would I prove that it is convergent?
AI: By the Taylor series we have
$$(-1)^n\sin\left(\frac{x}{n}\right)=_\infty\frac{x(-1)^n}{n}+O\left(\frac{1}{n^2}\right)=u_n+v_n$$
and the series $\displaystyle\sum_n u_n$ is convergent by the Leibniz test and the series $\displaystyle\sum_n v_n$ is absolutely convergent so we can conclude.
|
H: coordinates of icosahedron vertices with variable radius
I was looking on the wikipedia page about icosahedrons and it says that for edge length $a$ the radius of the circumscribed sphere around the icosahedron is given by $r = a \times \sin(\frac{2\pi}{5})$. It then says that the vertices of an icosahedron of edge length 2 are given by:
$(0, ±1, ±\phi)$
$(±1, ±\phi, 0)$
$(±\phi, 0, ±1)$ where $\phi$ is the golden ratio.
How can I modify these coordinates to find the vertices of an icosahedron where the circumscribed sphere has variable radius?
Thanks
AI: Converting comment to answer, as requested.
Fact. Multiplying the coordinates by a number "scales" the figure's edge lengths by the absolute value of that number.
So, let's take this in stages:
Multiply each of the given coordinates by $1/2$ to get the vertices of an icosahedron of edge length $1$.
Then, multiply each resulting coordinate by $a$ to get the vertices of an icosahedron of edge length $a$.
From there, you can replace each $a$ with $r/\sin(2\pi/5)$ if you must have the coordinates explicitly in terms of $r$.
In one step, that's just:
Multiply each original coordinate by $\frac{r}{2\sin(2\pi/5)}$.
You may or may not care to use the fact that $\sin\frac{2\pi}{5} = \frac{1}{2}\sqrt{\phi\;\sqrt{5}}$ to try and symbolically "simplify" the resulting coordinates. There's a bit (but not really much) that could be done; the whole reason Wikipedia gives the coordinates for edge-length $2$ is that that's the size with the prettiest coordinates. Since this is apparently for a computer program, the symbolic prettiness of the coordinates probably doesn't matter much.
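Since this is for a program, here is a small Python sketch of exactly that scaling (illustrative only; the variable and function names are mine, not from the question or the Wikipedia page):

```python
from math import pi, sin, sqrt

phi = (1 + sqrt(5)) / 2  # golden ratio

# Vertices of the edge-length-2 icosahedron quoted in the question.
base_vertices = [(0, s1, s2 * phi) for s1 in (1, -1) for s2 in (1, -1)]
base_vertices += [(s1, s2 * phi, 0) for s1 in (1, -1) for s2 in (1, -1)]
base_vertices += [(s1 * phi, 0, s2) for s1 in (1, -1) for s2 in (1, -1)]

def icosahedron_vertices(r):
    """Vertices of an icosahedron whose circumscribed sphere has radius r."""
    # The edge-length-2 solid has circumradius 2*sin(2*pi/5).
    scale = r / (2 * sin(2 * pi / 5))
    return [(scale * x, scale * y, scale * z) for (x, y, z) in base_vertices]
```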
|
H: finding the limit to $(3^n+5^n)^{\frac{1}{n}}$
I was wondering if one can find the limit to the sequence $\{a_n\}$, where: $$\large a_n=(3^n+5^n)^{\large\frac{1}{n}}$$
Without the use of a calculator.
AI: Hint: $5^n\le 3^n+5^n\le 2\cdot 5^n$.
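Spelling the hint out: taking $n$-th roots gives $5\le(3^n+5^n)^{1/n}\le 2^{1/n}\cdot 5$, and since $2^{1/n}\to1$, the squeeze theorem yields $a_n\to5$.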
|
H: Why does $\lim_{n \to \infty} \sqrt[n]{(-1)^n \cdot n^2 + 1} = 1$?
As the title suggests, I want to know as to why the following function converges to 1 for $n \to \infty$:
$$
\lim_{n \to \infty} \sqrt[n]{(-1)^n \cdot n^2 + 1} = 1
$$
For even $n$'s only $n^2+1$ has to be shown, which I did in the following way:
$$\sqrt[n]{n^2} \le \sqrt[n]{n^2 + 1} \le \sqrt[n]{n^3}$$
Assuming we have already proven that $\lim_{n \to \infty}\sqrt[n]{n^k} = 1$, both bounds tend to $1$, so by the squeeze theorem
$$\lim_{n \to \infty} \sqrt[n]{n^2+1} = 1.$$
For odd $n$'s I can't find the solution. I tried going the same route as for even $n$'s:
$$\sqrt[n]{-n^2} \le \sqrt[n]{-n^2 + 1} \le \sqrt[n]{-n^3}$$
And it seems that it comes down to
$$\lim_{n \to \infty} \sqrt[n]{-n^k}$$
I checked the limit using both Wolfram Alpha and a CAS and it converges to 1. Why is that?
AI: If $n=2p+1$ we have
$$\sqrt[n]{(-1)^n \cdot n^2 + 1}=-\exp\left(\frac{1}{2p+1}\log((2p+1)^2-1)\right)\longrightarrow_{p\to\infty}-e^0=-1$$
The case $n=2p$ is similar and gives the limit $1$, so the given limit does not exist.
|
H: Is B(H) a Hilbert space?
If $H$ is a Hilbert space, is $B(H)$ under the operator norm a Hilbert space?
If not, does there exist any norm on $B(H)$ that makes it a Hilbert space?
AI: To call something a Hilbert space means that it is equipped with a complete inner product, not just a norm. A complete normed vector space is called a Banach space, and indeed $B(H)$ with the operator norm is a Banach space. Indeed, it has some additional structure (multiplication given by composition, and an involution given by taking adjoints) which makes it a C*-algebra.
Of course, you can still ask whether there exists an inner product on $B(H)$ which makes it a Hilbert space, and this depends on $H$:
If $H$ is finite-dimensional, then choosing a basis identifies it with $\mathbb{C}^n$ for $n=\dim H$, and $B(H)$ is identified with the space of $n\times n$ matrices, which is a Hilbert space.
If $H$ is infinite-dimensional, then $B(H)$ is not a Hilbert space: see this other question for some reasons why.
|
H: If we abandon the axiom of regularity, can the cumulative hierarchy just become a definition?
In ZFC, the axiom of regularity is used to prove that every set is an element of some stage of the cumulative hierarchy. The index of the least such stage is, by definition, the rank of that set.
Now suppose we're working in ZFC minus Regularity. Can this idea just become a definition? Like, what happens if we define that a set is ranked iff it's an element of some stage of the cumulative hierarchy? I imagine that we'd be able to prove the axiom of regularity relativized to ranked sets.
Suppose this idea works. Then, we'd like a nice characterization of the property of being ranked that can be used before the ordinals are even in place, because, for example, if we define $\mathrm{S}(x) := x \cup \{x\},$ well it would be nice to be able to prove that if $x$ is ranked, then $\mathrm{S}(x)$ is distinct from $x$, and that $\mathrm{S}(x)$ is also ranked. So I'm curious as to whether there's a nice characterization of the property of being ranked that does not rely on the ordinals. e.g. does 'no infinite descending membership chains' do the trick?
AI: The usual formulation of the axiom of regularity actually contains the essence of a definition of "ranked" that works in the absence of that axiom. A set $x$ is ranked iff every set $u$ that has $x$ as a member also has a minimal member (i.e., a member $y\in u$ such that $y\cap u=\varnothing$). It is not hard to show that this notion of "ranked" is equivalent to being in some stage of the cumulative hierarchy.
Note, though, that when you define the cumulative hierarchy, you must first define the notion of "ordinal" in a way that ensures that the ordinals are well-ordered. "Transitive set of transitive sets" won't do, though it becomes OK if you add that ordinals should be ranked in the sense that I defined above.
With the definition of "ranked" that I suggested above, it is fairly easy to prove, in ZFC without regularity, that all the ZFC axioms (including regularity) are true when restricted to ranked sets. I believe this is the simplest way to prove the relative consistency of the axiom of regularity.
|
H: Local property of ring of regular function
If $X$ is a projective variety (irreducible algebraic subset of a projective space), $U$ is an open subset of $X$, and $x\in U\subset X$, is it true that $\mathcal{O}_{X,x}=\mathcal{O}_{U,x}$?
I think it is true but I do not know how to prove it. My idea is the following: $\mathcal{O}_{X,x}=\lim_{x\in V}\mathcal{O}(V)$, so we can take the intersection with the given $U$ above, and then taking the limit we get the result $\mathcal{O}_{U,x}$. But I am not sure about my argument.
Please help me to correct my idea and prove/disprove that proposition.
AI: An element in $\mathcal{O}_{X,x}$ is an equivalence class of pairs $\langle U,s \rangle$ with $s$ a section of $\mathcal{O}_X(U)$ and $U$ open about $x$. The equivalence relation that we have is that two pairs $\langle U,s\rangle = \langle V,t\rangle$ if $s$ is "eventually equal" to $t$. That is we can find an open neighbourhood $W \subseteq U \cap V$ for which $s|_W = t|_W$. From this it is clear that $\mathcal{O}_{X,x} = \mathcal{O}_{U,x}$ because any pair on the left $\langle V,s \rangle $ is equal to a pair $ \langle U \cap V,s|_{U \cap V} \rangle$ which is an element of $\mathcal{O}_{U,x}$. Note $U \cap V \neq \emptyset$ because $x$ is in both sets. The other containment is also clear because anything open in $U$ about $x$ is also open in $X$ about $x$, since being open is a transitive property.
|
H: Source to learn Galois Theory
What kind of recommendations do you have for a very good source to learn Galois Theory? Is there any Atiyah-MacDonald-type book on Galois theory? What is your opinion on the chapters from Lang, and Dummit and Foote?
AI: I really enjoyed learning Galois theory from Martin Isaacs' Algebra: A Graduate Course. Isaacs' textbook is a textbook on group theory, ring theory, and field theory (in other words, algebra!) so it's not just on Galois theory. However, you'll have a very complete knowledge of Galois theory if you read the latter half of the textbook where it is discussed. The textbook also has the distinct advantage of good, challenging exercises. The emphasis of the exercises in this textbook is on theory more than on specific computations and examples (although these are discussed as well; Isaacs generally feels, I suspect, that a student reading his textbook is already quite comfortable with specific examples and computations so should be able to do them independently). If you'd like to see worked computations and examples in detail, then perhaps it is a good idea to supplement Isaacs' textbook with textbooks like the one by Dummit and Foote on the same topic.
|
H: Continuous group representation
Suppose you have a topological group $G$, a normed $k$-vector space $V$, and a group homomorphism $\rho:G\longrightarrow GL(V)$. How do you define the topology on $GL(V)$ to make this map continuous?
Many thanks
AI: In principle this is standard: what we really want is that $G\times V\rightarrow V$ by $g\times v\rightarrow \rho(g)(v)$ is (jointly) continuous. This correctly suggests taking the "strong operator topology" on (continuous) operators $V\rightarrow V$, namely, the seminorms $p_v(T)=|Tv|$ for $v\in V$, Banach, say.
With the uniform operator norm topology on endomorphisms of $V$, rarely will $G\rightarrow GL(V)$ be continuous. This is not a pathology, but a natural occurrence: already with $G$ being the circle acting on $V=L^2(G)$, the map $G\rightarrow GL(V)$ is not continuous when $GL(V)$ is given the uniform norm operator topology.
Edit: the topology is given by the collection of all such seminorms. The resulting topology on continuous operators is not Banach, etc. That is, there's no "open ball of radius $\epsilon$ centered at $T$". A basis for the open neighborhoods of a given $T$ is indexed by finite lists of vectors $v_1,\ldots,v_n$ and small $\epsilon>0$: the corresponding basic neighborhood consists of those $S$ with $|(T-S)v_i|<\epsilon$ for $i=1,\ldots,n$.
|
H: Having trouble finding solution to a DE
$$y^{(4)} - \lambda y = 0, y'(0)=0, y'''(0)=0, y(\pi)=0, y''(\pi)=0$$
let $\lambda = a^4$
my solution: $$r^4-a^4=0 \implies r=\pm a \\ y = c_1e^{ax}+c_2e^{-ax}$$
but my book says the solution is $y=c_1\cos(ax)+c_2\sin(ax)+c_3\cosh(ax)+c_4\sinh(ax)$.
What am I doing wrong?
AI: I take it that $\lambda$ is positive. Besides the solutions $r=\pm a$, we have the solutions $r=\pm ia$.
You have essentially taken care of the $\cosh ax$ and $\sinh ax$ part in another way. But the complex roots have not been taken care of. They yield solutions $e^{iax}$ and $e^{-iax}$, which can be written in terms of sines and cosines.
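Spelling it out: the characteristic equation factors as $r^4-a^4=(r^2-a^2)(r^2+a^2)=0$, with roots $r=\pm a$ and $r=\pm ia$, so the general solution is
$$y=c_1e^{ax}+c_2e^{-ax}+c_3\cos(ax)+c_4\sin(ax),$$
which agrees with your book's form because $\cosh(ax)$ and $\sinh(ax)$ are linear combinations of $e^{ax}$ and $e^{-ax}$.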
|
H: Question on connected spaces and continuous functions
Let $f:X \rightarrow Y$ be continuous, where the metric space $(X,d)$ is connected. On $Y$ we use the discrete metric. I want to show that $f$ must be constant on $X$. My approach:
We may assume that $|X| \geq 2$ (otherwise the claim is trivial). Let $x_0 \in X$. Assume $\exists y \in X: f(y) \neq f(x_0)$. Define
$$
V:= \{ x \in X \mid f(x) = f(x_0) \}
$$ and
$$
W := \{x \in X \mid f(x) \neq f(x_0)\}
$$ We have $V \cup W = X$, $V \cap W = \emptyset$ and $V,W \neq \emptyset$ by the assumption. I want to show that both $V,W$ are open to get a contradiction with the fact that $X$ is connected.
By the continuity-property we know
$$
\forall x \in X \exists \delta_x > 0 \forall y \in X : d(x,y) < \delta_x \rightarrow f(x) = f(y)
$$ Let $x \in V$. Then $B(x,\delta_x) \subseteq V$ thus $V$ is open. Let $x \in W$. Then $f(x) \neq f(x_0)$. Further we have for all $y \in B(x,\delta_x)$ that $f(y) = f(x) $ and thus $f(y) \neq f(x_0)$ s.t. $B(x,\delta_x) \subseteq W$.
Is there an easier way to conclude this ?
AI: You are making it too complicated, I think.
Suppose that $f$ is not constant. Then there exist $x_1,x_2\in X$ with $f(x_1)\ne f(x_2)$. Let $y_1 := f(x_1)$ and $y_2 := f(x_2)$. As $Y$ is discrete, the sets $\{y_1\}$ and $Y\setminus\{y_1\}$ are open (every subset of $Y$ is open!) and nonempty. Then
$$
X=f^{-1}(\{y_1\})\cup f^{-1}(Y\setminus\{y_1\})
$$
is a union of disjoint open sets (open, because $f$ is continuous), a contradiction.
|
H: Is there a transformation that makes $\frac{1}{t}(e^{t^2}-1)e^\frac{1}{t}(1-\frac{1}{t})$ equal to $ t^{-3}(e^{t^2}-e^t)+t^{-2}-t^{-1}$?
Is there a transformation that makes $\frac{1}{t}(e^{t^2}-1)e^{\frac{1}{t}}(1-\frac{1}{t})$ equal to $t^{-3}(e^{t^2}-e^t)+t^{-2}-t^{-1}$? The reason I ask is that for the integral $$\int_1^t{\int_0^t{\frac{e^{(tx)/y}}{y^{3}}dx}dy}$$, I get the former while Apostol gets the latter (Apostol Calculus Vol 2, 1st Edition, Section 2.9, #6). Their graphs look different, so I am guessing not. Still I feel like I solved this problem, but I'd like to know how Apostol got his answer in the form he presented.
First the inner integral:
$$y^{-3}e^{1/y}\int_0^t{e^{tx}dx}$$
$$y^{-3}e^{1/y}\,\frac{1}{t}\Big[e^{tx}\Big]_{x=0}^{x=t}$$
$$y^{-3}e^{1/y}\,\frac{1}{t}\left(e^{t^2}-1\right)$$
Now for the outer integral:
$$\frac{1}{t}\left(e^{t^2}-1\right)\int_1^t y^{-3}e^{1/y}\,dy$$
$u$-substitution: $u=1/y$, $du=-\frac{1}{y^2}\,dy$:
$$-\int u\,e^{u}\,du$$
Integration by parts with $f=u$, $dg=e^u\,du$, so $df=du$, $g=e^u$:
$$-\left(u\,e^{u}-\int e^{u}\,du\right)=e^{u}-u\,e^{u}$$
Plug $u$ back in:
$$\left.e^{1/y}\left(1-\frac{1}{y}\right)\right|_1^t$$
Combine it all for the final result:
$$\frac{1}{t}\left(e^{t^2}-1\right)e^{1/t}\left(1-\frac{1}{t}\right)$$
AI: $$\begin{aligned}
\int_1^t \frac{1}{y^3}\left(\int_0^t e^{tx/y}\,dx\right)dy
&= \int_1^t \frac{1}{y^3}\left.\left(\frac{y}{t}\,e^{tx/y}\right)\right|_{x=0}^{x=t}dy \\
&= \int_1^t \frac{1}{y^3}\cdot\frac{y}{t}\left(e^{t^2/y}-1\right)dy \\
&= \frac{1}{t}\int_1^t \frac{e^{t^2/y}-1}{y^2}\,dy
\end{aligned}$$
Can you finish it? Note that $(y^{-1})'=-y^{-2}$
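Filling in the last step (using the noted antiderivative, since $\frac{d}{dy}\bigl(-\frac{e^{t^2/y}}{t^2}\bigr)=\frac{e^{t^2/y}}{y^2}$):
$$\frac{1}{t}\int_1^t \frac{e^{t^2/y}-1}{y^2}\,dy=\frac{1}{t}\left[-\frac{e^{t^2/y}}{t^2}+\frac{1}{y}\right]_1^t=\frac{1}{t}\left(\frac{e^{t^2}-e^{t}}{t^2}+\frac{1}{t}-1\right)=t^{-3}\left(e^{t^2}-e^{t}\right)+t^{-2}-t^{-1},$$
which is exactly Apostol's form. The discrepancy with your computation comes from the inner integral, where the integrand was treated as $e^{1/y}e^{tx}$; note that $e^{tx/y}\neq e^{1/y}e^{tx}$.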
|
H: How to pick a random node from a tree?
How can I pick a random node from a tree, given the following constraints?
We are given the root of the tree, and at every node we are given its children nodes. But we do not know what its children nodes are unless we visit the node.
We do not know how many nodes are in the tree or any given subtree.
We do not know the depth of the tree.
I am trying to write a program to crawl a directory of people's names and choose a random person. The probability of choosing a given person should be 1 over the total number of people, even though I don't know the total number of people.
Thanks!
Edit: the directory has about 10 million names at most, and it's probably 2 million or less.
Edit 2: I'd like to pick 10000 names total, but if that's too computationally intensive I could do 1000 or 100.
AI: A general easy idea to pick $k$ elements from a set of unknown size is to assign a random number from $[0,1]$ to every element, and then pick the top $k$ elements, (with regard to the assigned random numbers). It is worth noting that you want to keep only the current $k$ top entries and their assigned numbers. When a new element is found, assign it a random number (thus get $k+1$ entries) and discard the smallest (along with its assigned random number), hence obtain new $k$ top entries.
For example, to pick 2 random elements from $[A,B,C,D,E]$,
start with first $k$ elements and sort them
$[B \to 0.77, A \to 0.75]$,
add $C \to 0.40$
$[B \to 0.77, A \to 0.75, C \to 0.40]$
discard the smallest
$[B \to 0.77, A \to 0.75]$,
add $D \to 0.04$
$[B \to 0.77, A \to 0.75, D \to 0.04]$
discard the smallest
$[B \to 0.77, A \to 0.75]$,
add $E \to 0.95$
$[E \to 0.95, B \to 0.77, A \to 0.75]$
discard the smallest
$[E \to 0.95, B \to 0.77]$,
nothing more to add, algorithm terminates.
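In case a concrete implementation helps, here is a minimal Python sketch of this idea applied to a tree crawl (the node interface — a `children(node)` callback — and all names are assumptions, not part of the original question):

```python
import heapq
import itertools
import random

def sample_k_nodes(root, k, children):
    """Pick k nodes uniformly at random from a tree of unknown size.

    Every visited node gets an independent Uniform(0,1) key; a min-heap
    keeps the k largest keys seen so far, so each node ends up in the
    sample with equal probability.
    """
    tiebreak = itertools.count()   # avoids comparing nodes when keys tie
    heap = []                      # entries (key, tiebreak, node); heap[0] has the smallest key
    stack = [root]
    while stack:
        node = stack.pop()
        entry = (random.random(), next(tiebreak), node)
        if len(heap) < k:
            heapq.heappush(heap, entry)
        elif entry[0] > heap[0][0]:
            heapq.heapreplace(heap, entry)   # drop the current smallest key
        stack.extend(children(node))         # visit the children later
    return [node for _, _, node in heap]
```

The whole tree is still visited once, but only $k$ entries (e.g. $k=10000$ names) and their keys are ever held in memory.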
I hope this helps ;-)
|
H: Calculating transformation from origin to point
I have an icosahedron of radius $x$ with 12 vertices at known coordinates. If I have a point at $(0,0,x)$ where $x > 0$ and a vertex of this icosahedron at $(a,b,c)$ how can I find the rotation matrix between the two points?
Thanks
AI: Choose two vectors $u$ and $v$ that together with $w=(a,b,c)^\mathrm{T}/x$ form an orthonormal basis. Then let $$R = \begin{bmatrix}u^\mathrm{T}\\v^\mathrm{T}\\w^\mathrm{T}\end{bmatrix}.$$ This gives a "rotation matrix" (orthogonal matrix) that maps $(a,b,c)^\mathrm{T}$ to $(0,0,x)^\mathrm{T}$ as you wanted. This matrix is not unique and varies with the orthonormal basis you choose.
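A small numerical sketch of that construction (illustrative only; it assumes NumPy, a nonzero $(a,b,c)$, and builds $u$ and $v$ by orthonormalizing standard basis vectors against $w$):

```python
import numpy as np

def rotation_to_pole(a, b, c):
    """Orthogonal matrix R with R @ [a, b, c] = [0, 0, x], where x = |(a, b, c)|."""
    w = np.array([a, b, c], dtype=float)
    w /= np.linalg.norm(w)
    # Gram-Schmidt: orthonormalize standard basis vectors against w.
    basis = [w]
    for e in np.eye(3):
        v = e - sum(np.dot(e, q) * q for q in basis)
        if np.linalg.norm(v) > 1e-12:
            basis.append(v / np.linalg.norm(v))
        if len(basis) == 3:
            break
    u, v = basis[1], basis[2]
    return np.vstack([u, v, w])   # rows u^T, v^T, w^T as in the answer
```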
|
H: Why is the Noetherian hypothesis necessary for this fact about associated primes?
In a Noetherian ring, the set of associated prime ideals of an ideal
is the set of primes which can be written as $(I:z)$.
I'm new to associated primes, and I was wondering why the Noetherian hypothesis here is necessary. In Matsumura's book Commutative Ring Theory, the associated primes of a module $M$ are defined as primes that occur as $\operatorname{ann}(m)$ for some $m\in M$. In this case, a prime $P$ would have to annihilate some element in $A/I$, which means $P$ is the set of elements $p$ such that $px\subset I$ for some $x\in A/I$, which is exactly the definition of $(I:x)$.
What am I missing? Why must we assume $A$ is Noetherian for this to hold?
(The quote comes from @wxu's answer to this question. The question itself has a Noetherian hypothesis. I am trying to figure out why it is necessary.)
AI: Swanson's notes on primary decomposition gives an explanation for this. In particular, go to Remark 3.11 at the bottom of page 9.
For people who don't want to click through the link, here is the gist: Sure, you can define associated primes like that without the Noetherian condition. But there are many other reasonable definitions as well. It happens that for Noetherian rings all these definitions agree so it is more convenient to assume that extra condition.
|