H: Is this recursion relation proof correct?
Recurrence relation:$$a_0 = 1$$
$$a_{n+1} = 2a_n$$
I'm trying to prove that for any n ∈ N, $a_n = 2^n$. I want to use induction.
What I have is, assume that $a_n = 2^n$ is true for $P(n)$.
Then $P(n+1)$ would be:
$$a_{n+1} = 2^{n+1}$$
$$a_{n+1}=2\cdot(2^n)$$
Because $a_n = 2^n$, then we can substitute, so $a_{n+1} = 2a_n$.
AI: The idea of the proof is certainly correct. There are some issues, such as the incorrect but unnecessary assertion that all even numbers are powers of $2$.
We rewrite the proof you gave, making small modifications. After a while, you will not be expected to use the notation $P(n)$ explicitly.
Let $P(n)$ be the assertion that $a_n=2^n$. We prove by induction that $P(n)$ is true for every non-negative integer $n$.
Certainly $P(0)$ is true, since $a_0=1=2^0$.
Suppose that for a particular integer $k$, the assertion $P(k)$ is true. We show that $P(k+1)$ is true.
Since $P(k)$ is true, we have $a_k=2^k$. But then
$$a_{k+1}=2a_k=2\cdot 2^k =2^{k+1},$$
so $P(k+1)$ is true. This completes the induction step and the proof.
|
H: A function that brings back the prime number just before it?
Is there a function that brings the prime number just before it?
I.e. $P(18)=17$, $P(6)=5$, $P(28)=23$.
I know how weird that sounds.
AI: Standard notations are $p_n$ for the $n^{th}$ prime and $\pi(n)$ for the number of primes less than or equal to $n$. Combining these we have $p_{\pi(n-1)}$ as an expression for the largest prime less than $n$.
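For illustration, here is a minimal sketch (not part of the original answer) of a function behaving like the $P$ in the question, using plain trial division; the names `prev_prime` and `is_prime` are just illustrative:
```python
def prev_prime(n):
    """Largest prime strictly less than n (assumes n > 2)."""
    def is_prime(m):
        if m < 2:
            return False
        return all(m % d for d in range(2, int(m ** 0.5) + 1))
    k = n - 1
    while not is_prime(k):
        k -= 1
    return k

print(prev_prime(18), prev_prime(6), prev_prime(28))   # 17 5 23
```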
|
H: The Product Rule of Square Roots with Negative Numbers
In the statement $\forall a, b \geq0, \sqrt{a} \cdot \sqrt{b} = \sqrt{ab}$, why is it necessary to restrict $a$ and $b$ to being $\geq 0$? It seems that one should be able to say, for example, $(-3)^{1/2} \cdot (-3)^{1/2} = (-3 \cdot -3)^{1/2} = 9^{1/2} = 3$, so where is the flaw in this statement, since it just seems to be using laws of exponents.
Edit: I don't think people are understanding what I'm asking. To reemphasize, my question is: why do we say $\forall a, b \geq 0, \sqrt{a} \cdot \sqrt{b} = \sqrt{ab}$, instead of just $\forall a, b \in \mathbb{R}, \sqrt{a} \cdot \sqrt{b} = \sqrt{ab}$
AI: Let us freely use these supposed laws of exponents.
You used one to show that $(-3)^{1/2}(-3)^{1/2}=9^{1/2}$. Presumably this is $3$.
But another familiar law of exponents is $a^xa^y=a^{x+y}$. Use this with $a=-3$, $x=y=1/2$. We get $(-3)^{1/2}(-3)^{1/2}=(-3)^1=-3$.
Remark: Ordinary laws can give inconsistent results, giving lawyers opportunities to get rich. Mathematical laws have to meet a higher standard.
Response to edit: Restricting attention to real $a$ and $b$ will not work, since there is no way to assign a real number to the expression $(-a)^{1/2}$ without violating some laws of the algebra of real numbers. So to give meaning to the expression, we have to introduce complex numbers. But then as shown above, and in other answers, we can by application of the "laws" reach the highly undesirable conclusion that $3=-3$.
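To see the clash of the two "laws" numerically, here is a small sketch (my own illustration, not part of the answer) using Python's principal complex square root:
```python
import cmath, math

a = cmath.sqrt(-3)              # principal square root of -3, roughly 1.732j
print(a * a)                    # about (-3+0j): consistent with a^x * a^y = a^(x+y)
print(math.sqrt((-3) * (-3)))   # 3.0: what the "product rule" sqrt(ab) would give
```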
|
H: Sum of one, two, and three squares
If a square $n^2$ can be written as the sum of two nonzero squares as well as the sum of three nonzero squares, then can we conclude that it can be written as the sum of any number of nonzero squares up to $n^2 - 14$ nonzero squares?
Example: $13^2 = 12^2 + 5^2 = 12^2 + 4^2 + 3^2$. But also $13^2 = 11^2 + 4^2 + 4^2 + 4^2$ and $13^2 = 12^2 + 4^2 + 2^2 + 2^2 + 1^2$ etc up to $13^2 = 3^2 + 2^2 + 2^2 + 1^2 + .. + 1^2$.
AI: It is true.
All positive integers that are not the sum of four nonzero squares are known, see FOUR and LAGRANGE. These are the eight odd numbers $1,3,5,9,11,17,29,41$ and the infinite families $2 \cdot 4^t, \; \; 6 \cdot 4^t, \; \; 14 \cdot 4^t.$ Out of the list, the squares are $1,9$; of course $1$ is not the sum of two or three nonzero squares, while $9=4+4+1$ is not the sum of two nonzero squares. This gives a quick and dirty way to show that being the sum of 1, 2, and 3 nonzero squares implies being the sum of 4 nonzero squares. A more elegant method, by Thomas Andrews, is in a comment above.
All numbers, square or not, larger than 33 are the sum of five nonzero squares, see FIVE. All numbers, square or not, larger than 19 are the sum of six nonzero squares, see SIX.
Now, we have some $n^2 \geq 36$, and we are told that $n^2$ is also the sum of two nonzero squares and of three nonzero squares. Thomas Andrews shows above how $n^2$ is then the sum of four nonzero squares. So far, we have 1, 2, 3, 4, 5, 6 covered.
Now, take any integer $M$ with $ 20 \leq M < n^2.$ We are told that $M$ is the sum of six nonzero squares. Then add in $n^2 - M$ copies of 1. The result is a summation of $n^2$ as $n^2 - M + 6$ nonzero squares. Here $$ 6 < n^2 - M + 6 \leq n^2 - 14. $$
The numbers up to 100,000 that are nonzero squares, and are also the sum of two nonzero squares and of three nonzero squares are
169 = 13^2
225 = 3^2 * 5^2
289 = 17^2
625 = 5^4
676 = 2^2 * 13^2
841 = 29^2
900 = 2^2 * 3^2 * 5^2
1156 = 2^2 * 17^2
1225 = 5^2 * 7^2
1369 = 37^2
1521 = 3^2 * 13^2
1681 = 41^2
2025 = 3^4 * 5^2
2500 = 2^2 * 5^4
2601 = 3^2 * 17^2
2704 = 2^4 * 13^2
2809 = 53^2
3025 = 5^2 * 11^2
3364 = 2^2 * 29^2
3600 = 2^4 * 3^2 * 5^2
3721 = 61^2
4225 = 5^2 * 13^2
4624 = 2^4 * 17^2
4900 = 2^2 * 5^2 * 7^2
5329 = 73^2
5476 = 2^2 * 37^2
5625 = 3^2 * 5^4
6084 = 2^2 * 3^2 * 13^2
6724 = 2^2 * 41^2
7225 = 5^2 * 17^2
7569 = 3^2 * 29^2
7921 = 89^2
8100 = 2^2 * 3^4 * 5^2
8281 = 7^2 * 13^2
9025 = 5^2 * 19^2
9409 = 97^2
10000 = 2^4 * 5^4
10201 = 101^2
10404 = 2^2 * 3^2 * 17^2
10816 = 2^6 * 13^2
11025 = 3^2 * 5^2 * 7^2
11236 = 2^2 * 53^2
11881 = 109^2
12100 = 2^2 * 5^2 * 11^2
12321 = 3^2 * 37^2
12769 = 113^2
13225 = 5^2 * 23^2
13456 = 2^4 * 29^2
13689 = 3^4 * 13^2
14161 = 7^2 * 17^2
14400 = 2^6 * 3^2 * 5^2
14884 = 2^2 * 61^2
15129 = 3^2 * 41^2
15625 = 5^6
16900 = 2^2 * 5^2 * 13^2
18225 = 3^6 * 5^2
18496 = 2^6 * 17^2
18769 = 137^2
19600 = 2^4 * 5^2 * 7^2
20449 = 11^2 * 13^2
21025 = 5^2 * 29^2
21316 = 2^2 * 73^2
21904 = 2^4 * 37^2
22201 = 149^2
22500 = 2^2 * 3^2 * 5^4
23409 = 3^4 * 17^2
24025 = 5^2 * 31^2
24336 = 2^4 * 3^2 * 13^2
24649 = 157^2
25281 = 3^2 * 53^2
26896 = 2^4 * 41^2
27225 = 3^2 * 5^2 * 11^2
28561 = 13^4
28900 = 2^2 * 5^2 * 17^2
29929 = 173^2
30276 = 2^2 * 3^2 * 29^2
30625 = 5^4 * 7^2
31684 = 2^2 * 89^2
32400 = 2^4 * 3^4 * 5^2
32761 = 181^2
33124 = 2^2 * 7^2 * 13^2
33489 = 3^2 * 61^2
34225 = 5^2 * 37^2
34969 = 11^2 * 17^2
36100 = 2^2 * 5^2 * 19^2
37249 = 193^2
37636 = 2^2 * 97^2
38025 = 3^2 * 5^2 * 13^2
38809 = 197^2
40000 = 2^6 * 5^4
40804 = 2^2 * 101^2
41209 = 7^2 * 29^2
41616 = 2^4 * 3^2 * 17^2
42025 = 5^2 * 41^2
43264 = 2^8 * 13^2
44100 = 2^2 * 3^2 * 5^2 * 7^2
44944 = 2^4 * 53^2
46225 = 5^2 * 43^2
47524 = 2^2 * 109^2
47961 = 3^2 * 73^2
48400 = 2^4 * 5^2 * 11^2
48841 = 13^2 * 17^2
49284 = 2^2 * 3^2 * 37^2
50625 = 3^4 * 5^4
51076 = 2^2 * 113^2
52441 = 229^2
52900 = 2^2 * 5^2 * 23^2
53824 = 2^6 * 29^2
54289 = 233^2
54756 = 2^2 * 3^4 * 13^2
55225 = 5^2 * 47^2
56644 = 2^2 * 7^2 * 17^2
57600 = 2^8 * 3^2 * 5^2
58081 = 241^2
59536 = 2^4 * 61^2
60025 = 5^2 * 7^4
60516 = 2^2 * 3^2 * 41^2
61009 = 13^2 * 19^2
62500 = 2^2 * 5^6
65025 = 3^2 * 5^2 * 17^2
66049 = 257^2
67081 = 7^2 * 37^2
67600 = 2^4 * 5^2 * 13^2
68121 = 3^4 * 29^2
70225 = 5^2 * 53^2
71289 = 3^2 * 89^2
72361 = 269^2
72900 = 2^2 * 3^6 * 5^2
73984 = 2^8 * 17^2
74529 = 3^2 * 7^2 * 13^2
75076 = 2^2 * 137^2
75625 = 5^4 * 11^2
76729 = 277^2
78400 = 2^6 * 5^2 * 7^2
78961 = 281^2
81225 = 3^2 * 5^2 * 19^2
81796 = 2^2 * 11^2 * 13^2
82369 = 7^2 * 41^2
83521 = 17^4
84100 = 2^2 * 5^2 * 29^2
84681 = 3^2 * 97^2
85264 = 2^4 * 73^2
85849 = 293^2
87025 = 5^2 * 59^2
87616 = 2^6 * 37^2
88804 = 2^2 * 149^2
89401 = 13^2 * 23^2
90000 = 2^4 * 3^2 * 5^4
91809 = 3^2 * 101^2
93025 = 5^2 * 61^2
93636 = 2^2 * 3^4 * 17^2
96100 = 2^2 * 5^2 * 31^2
97344 = 2^6 * 3^2 * 13^2
97969 = 313^2
98596 = 2^2 * 157^2
99225 = 3^4 * 5^2 * 7^2
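For reference, here is a brute-force sketch of how such a list can be generated (my own hedged reconstruction, not the program actually used above; the bound is lowered to 10,000 to keep it fast):
```python
from functools import lru_cache
from math import isqrt

@lru_cache(maxsize=None)
def is_sum_of(m, k):
    """True if m is a sum of exactly k nonzero squares."""
    if k == 0:
        return m == 0
    if m < k:                     # each square contributes at least 1
        return False
    return any(is_sum_of(m - s * s, k - 1) for s in range(1, isqrt(m) + 1))

LIMIT = 10_000
hits = [n * n for n in range(1, isqrt(LIMIT) + 1)
        if is_sum_of(n * n, 2) and is_sum_of(n * n, 3)]
print(hits)                       # starts 169, 225, 289, 625, 676, 841, 900, ...
```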
|
H: Proving $u_n(x)\le v_n(x), \sum\limits_{n=1}^{\infty}v_n(x)\to f$ uniformly $\Longrightarrow\sum\limits_{n=1}^{\infty}u_n(x)\to g$ uniformly
Prove that if $u_n(x)\le v_n(x)$ and $\sum\limits_{n=1}^{\infty}v_n(x)$ converges uniformly, then $\sum\limits_{n=1}^{\infty}u_n(x)$ also converges uniformly.
I thought of solving it by using the Weierstrass test. Since $\sum v_n(x)$ converges uniformly it can be bounded by $\sum M_n$, which converges, and then $\sum u_n(x)\le\sum v_n(x)\le \sum M_n$, which means $\sum u_n(x)$ converges uniformly. Is this correct?
AI: This is not correct. Not every uniformly convergent series is normally convergent (existence of such $M_n$ independent of $x$). For instance, consider $f_n(x)=\frac{1}{n}$ if $x=n$ and $f_n(x)=0$ elsewhere. This converges uniformly, but not normally (there does not exist a dominating convergent series $\sum M_n$), on $\mathbb{R}$. So Weierstrass M-test implies uniform convergence, but there are uniformly convergent series to which the Weierstrass M-test can not be applied.
I assume that $0\leq u_n(x)\leq v_n(x)$, otherwise, the result you mention is clearly false (take $u_n(x)=-1\leq 0=v_n(x)$).
To prove this result, it suffices to use the Cauchy criterion. A real-valued series of functions is uniformly convergent if and only if it is uniformly Cauchy, i.e.
$$
\forall \epsilon>0\quad\exists N\quad \forall n\geq N, k\geq 1\quad \Big|\sum_{j=1}^k v_{n+j}(x) \Big|\leq \epsilon.
$$
Now just use that $0\leq u_n(x)\leq v_n(x)$ to prove that the uniform convergence of $\sum v_n$ implies that of $\sum u_n$.
|
H: Quotient with Non-Normal Subgroup
This has been brought up here but I'd like to bring up a few more questions.
Taking the quotient of group $G$ with subgroup $H$ is well-defined iff $H$ is normal in $G$. Well, what happens when $H$ is not normal? The left- and right cosets of $H$ in $G$ don't coincide. I have this situation in a paper I'm reading; quotients are taken with not necessarily normal subgroups. Despite that, this never seems to be an issue with the authors; no "note that this quotient is not well-defined" or special theorems for "non-normal quotients".
Here's an example: Let $H \subset L \subset S_n$ where $H=Stab_L(F)^*$ and $H' := Stab_{S_n}(F)$. Then $[S_n : L][L:H] = [S_n : H'][H':H]$.
These are indices of left cosets. Does the above equality make sense even if the associated quotient groups are not well-defined?
(*I don't think it's relevant here what $F$ actually is, but see my other questions and you can probably guess.)
AI: The index of a subgroup is always well-defined. The number of left cosets of a subgroup is always the same as the number of right cosets. Normality only comes in when you want the cosets to acquire a natural group structure.
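A small concrete check (my own illustration, not part of the answer) with the non-normal subgroup $H=\{e,(1\,2)\}$ of $S_3$: the left and right cosets give different partitions, but there are three of each, so the index is unambiguous.
```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations stored as tuples of images."""
    return tuple(p[q[i]] for i in range(len(p)))

S3 = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]                      # identity and the transposition (0 1)

left  = {frozenset(compose(g, h) for h in H) for g in S3}   # cosets gH
right = {frozenset(compose(h, g) for h in H) for g in S3}   # cosets Hg
print(len(left), len(right), left == right)     # 3 3 False
```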
|
H: Confused with proof that all Cauchy sequences of real numbers converge.
First the textbook proves that all Cauchy sequences are bounded, and so have a convergent subsequence, $\{a_{n_{k}}\}$ that converges to a limit, say $L$. Now we use this to prove that all Cauchy sequences are convergent.
So an $N_1$ exists such that $$\left|a_{n_{k}}-L\right|<\frac{\epsilon}{2}$$ for all $ k > N_1$,
and an $N_2$ exists such that $$\left|a_m-a_n\right|<\frac{\epsilon}{2}$$ for all $n,m > N_2 $.
Pick any $k > N_1$ such that $n_k > N_2$. Then for every $n > N_2$, $$\left| a_n - L \right | \leq \left| a_n - a_{n_{k}} \right| + \left| a_{n_{k}} - L \right| < \epsilon/2 + \epsilon/2 = \epsilon$$.
So $$\lim_{n \rightarrow \infty}a_n = L$$
I'm fine with this proof until the last part - I'm confused as to why we can pick an arbitrary $k$ like we do. Should the limit not depend only on $n$? Now it appears like it also depends on $k$, and if we pick a $k < N_1$, the inequality isn't true. Can someone clarify this for me please?
AI: At that point in the proof you’re trying to show that if $n>N_2$, then $|a_n-L|<\epsilon$. If you can find a real number $x$ such that $|a_n-x|<\frac{\epsilon}2$ and $|x-L|<\frac{\epsilon}2$, the triangle inequality will give you the desired result, so the proof boils down to finding such an $x$.
What things do we know are close to $L$? Terms $a_{n_k}$ of the subsequence, provided that $k$ is sufficiently large.
What things do we know are close to $a_n$? Terms $a_m$ of the original sequence, provided that $m$ and $n$ are sufficiently large.
We take care of (1) first: there is an $N_1$ such that $|a_{n_k}-L|<\frac{\epsilon}2$ whenever $k>N_1$. This means that we can take our $x$ to be any $a_{n_k}$ with $k>N_1$, and we’ll have $|x-L|<\frac{\epsilon}2$.
Then we take care of (2): there is an $N_2$ such that $|a_n-a_m|<\frac{\epsilon}2$ whenever $m,n>N_2$. This means that since we’ve already specified that $n>N_2$, we can take our $x$ to be any $a_m$ with $m>N_2$, and we’ll have $|a_n-x|<\frac{\epsilon}2$.
Can we combine the two requirements? Is there an $a_{n_k}$ with $k>N_1$ that is also an $a_m$ with $m>N_2$?
Equivalently, is there an $a_{n_k}$ with $k>N_1$ such that $n_k>N_2$? Sure: the sequence $\langle n_k:k\in\Bbb N\rangle$ is unbounded, so its tail $\langle n_k:k>N_1\rangle$ is also unbounded and contains a term $n_k>N_2$. Thus, we can set $x=a_{n_k}$ and satisfy both requirements, so that we have
$$|a_n-L|\le|a_n-x|+|x-L|<\frac{\epsilon}2+\frac{\epsilon}2=\epsilon$$
as desired. Note that this calculation doesn’t actually depend on a specific value of $k$: we could set $x=a_{n_\ell}$ for any $\ell\ge k$, and the calculation would be the same. As I said at the beginning, we’re really just trying to find one number $x$ that we can use to ‘tie’ $a_n$ to $L$ to within $\epsilon$; it turns out that there are lots of them, and it doesn’t matter which one we use.
|
H: Express $[\cos(x) + \sqrt3 \sin(x)]$ in the form $[r\cos(x-a)]$
Express $[\cos(x) + \sqrt3\sin(x)]$ in the form $[r\cos(x-a)]$, where $r>0$ and $0\leq a\leq 360^\circ$; hence solve the equation $[\cos(x) + \sqrt3\sin(x)= \sqrt2]$.
This is as far as I have gotten. I don't know whether the question is wrong or I just can't get it.
$[\cos(x) + \sqrt3\sin(x)]$
$r=\sqrt{a^2+b^2}$
$r=\pm 2$
$r>0$
$r=2$
$\tan(a)=\frac {b}{a} =\sqrt3\\a=60^\circ$
therefore $[\cos(x) + \sqrt3\sin(x)]= 2\cos (x-60)$
given that $[\cos(x) + \sqrt3\sin(x)]= 2\cos (x-60)$
$2\cos (x-60)=\sqrt 2$
$\cos (x-60)=(\sqrt2)/2$
$-60 \leq x-60 \leq 300$
I don't know where to go from here... help please
AI: $\cos x+\sqrt3\sin x=2\cos(x-60^\circ)=\sqrt2$
So, $\cos(x-60^\circ)=\frac1{\sqrt2}=\cos45^\circ$
$\implies x-60^\circ=n360^\circ\pm 45^\circ $ where $n$ is any integer
Taking $'+'$ sign, $x-60^\circ=n360^\circ+45^\circ\implies x=n360^\circ+105^\circ$
If $0\le x\le 360^\circ, 0\le n360^\circ+105^\circ\le 360^\circ\implies n=0$
Taking $'-'$ sign, $x-60^\circ=n360^\circ-45^\circ\implies x=n360^\circ+15^\circ$
If $0\le x\le 360^\circ, 0\le n360^\circ+15^\circ\le 360^\circ\implies n=0$
So, $x=15^\circ,105^\circ$
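A quick numerical sanity check of the two solutions (my own addition, not part of the answer):
```python
from math import cos, sin, sqrt, radians

for x in (15, 105):
    lhs = cos(radians(x)) + sqrt(3) * sin(radians(x))
    print(x, round(lhs, 6), round(sqrt(2), 6))   # both lines give 1.414214
```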
|
H: Do isomorphic structures always satisfy the same second-order sentences?
I know that if two mathematical structures are isomorphic, then they satisfy the same first-order sentences. The converse is false.
This is probably a completely obvious question, but is it true that whenever two mathematical structures are isomorphic, they satisfy the same second-order sentences? And if so, does the converse hold?
AI: The first statement is true. That is
Theorem. If two mathematical structures are isomorphic, they satisfy the same second-order sentences.
Proof. Let $\mathscr{A}, \mathscr{B}$ be two isomorphic models, with $f$ being the isomorphism between them. By induction on second order formula $\phi$, we can show that for all $x_1, \ldots, x_m \in A$ and all $X_1, \ldots, X_n \subseteq A$,
$$\mathscr{A} \models \phi(x_1, \ldots, x_m, X_1, \ldots, X_n) \Leftrightarrow \mathscr{B} \models \phi(f(x_1), \ldots, f(x_m), f``X_1, \ldots, f``X_n).$$
The proof is identical to the first order case.
First prove that for all terms $t(v_1, \ldots, v_m, V_1, \ldots, V_n)$, we have that for all $x_1, \ldots, x_m \in A$ and all $X_1, \ldots, X_n \subseteq A$,
$$f(t^\mathscr{A}(x_1, \ldots, x_m, X_1, \ldots, X_n)) = t^\mathscr{B}(f(x_1), \ldots, f(x_m), f``X_1, \ldots, f``X_n),$$
when $t$ is a first order term, and
$$f``t^\mathscr{A}(x_1, \ldots, x_m, X_1, \ldots, X_n) = t^\mathscr{B}(f(x_1), \ldots, f(x_m), f``X_1, \ldots, f``X_n),$$
when $t$ is a second order term. This is easy. For example for $t = V$, where $V$ is a second order variable, we have that for all $X \subseteq A$, $$f``t^\mathscr{A}(X) = f``X = t^\mathscr{B}(f``X).$$
Now the proof of the induction for the formulas. The only different step from the first order case in the induction proof is the quantifier one. We have that
\begin{align*}
\mathscr{A} &\models \exists X \phi(X, x_1, \ldots, x_m, X_1, \ldots, X_n) \\
&\Leftrightarrow \text{ there exists } X \subseteq A \text{ s.t. } \mathscr{A} \models \phi(X, x_1, \ldots, x_m, X_1, \ldots, X_n) \\
&\Leftrightarrow \text{ there exists } f``X \subseteq B \text{ s.t. } \mathscr{B} \models \phi(f``X, f(x_1), \ldots, f(x_m), f``X_1, \ldots, f``X_n) \\
&\Leftrightarrow \mathscr{B} \models \exists X \phi(X, f(x_1), \ldots, f(x_m), f``X_1, \ldots, f``X_n).
\end{align*}
$\dashv$
Your second question is very interesting. The converse is false in general. I didn't know the answer, but I found an answer by Joel David Hamkins here https://mathoverflow.net/a/95761/35760. So
Theorem. Every consistent first order theory T with an infinite model has a second-order completion that is not categorical.
Which means that there exist models that are equivalent for second order formulas, but are not isomorphic. I had no idea this could happen.
|
H: Find k such that f is density function
I have the following function: $f_X(x, \theta) = \left\{
\begin{array}{lr}
k/x^3 & : x \leq \theta \\
0 & : x > \theta
\end{array}
\right.$ and $\theta >0$.
I should find $k$ such that $f$ is a density function.
What I know: $\displaystyle\int_{-\infty}^{+\infty}f_X(x,\theta)\,dx = 1$, so $\displaystyle\int_{-\infty}^{\theta}kx^{-3}\,dx = 1$. The antiderivative is $-\displaystyle\frac{k}{2x^2}$, so it must be that $-\displaystyle\frac{k}{2\theta^2} = 1$ and then $k = - 2\theta^2$.
Here's the issue: shouldn't $f_X$ be greater than or equal to $0$?
I could consider separately what happens when $x<0$ and $0<x<\theta$ and find $k$ for both, but this implies I should solve $\displaystyle\int_{-\infty}^{0}kx^{-3}\,dx$, and that integral doesn't converge.
What's wrong with what I'm trying to do?
AI: You made a mistake. Please see this: $$\displaystyle f_\Theta(\theta)=\int_{-\infty}^{+\infty}f_X(x,\theta)dx= \frac{-k}{2\theta^2} $$
And also:
$$\int_{-\infty}^{+\infty}f_\Theta(\theta)\,d\theta=1 \quad \Rightarrow\quad \frac k2\left[\frac1{\theta}\right]_0^\infty=1,$$ which is not possible. So I think there is no constant $k$ making this a density function.
|
H: Prove that the series $\sum_{n=1}^\infty \left[f(n)-\int_n^{n+1}\!f(x)\,\text{d}x\right]$ converges
Let $f$ be a non-negative decreasing function on $[1,+\infty)$. Prove that the series
$$\sum_{n=1}^\infty \left[f(n)-\int_n^{n+1}\!f(x)\,\text{d}x\right]$$
converges.
AI: According to MathWorld, this is called MacLaurin-Cauchy Theorem.
A proof can be found, for example, in Burkill's book A First Course in Mathematical Analysis.
It is related to (generalized) Euler-Mascheroni constant. See also the proof in Wikipedia article on Integral test for convergence. The picture from this article (which I copied below) can help your intuition, when you try to prove this result.
To find more useful material, you can search for MacLaurin-Cauchy theorem or MacLaurin-Cauchy test, now that you know the name of the result.
|
H: $\neg P \implies \neg T$ and $P \implies \neg T$. Where do I go next?
I can't find any logic equivalence or inference rules on this. Personally, I feel that $\neg P \implies \neg T$ and $P \implies \neg T$ would mean that it follows that $\neg T$ is true regardless, and I should be able to use that fact as such in my next step. Is this proper reasoning, though?
AI: Note that $P\rightarrow T$ is equivalent to $\lnot P\lor T$. Therefore we have equivalence between $(P\rightarrow\lnot T)\land(\lnot P\rightarrow\lnot T)$ and $(\lnot P\lor\lnot T)\land(P\lor\lnot T)$.
By distributivity this can be written as $(\lnot P\land P)\lor\lnot T$, which is equivalent to $\lnot T$.
Therefore $(P\rightarrow\lnot T)\land(\lnot P\rightarrow\lnot T)$ is logically equivalent to $\lnot T$. If the former is assumed true, then so is the latter; conversely, if $\lnot T$ holds, then so does the former.
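A brute-force truth-table check of this equivalence (my own sketch, not part of the answer):
```python
from itertools import product

for P, T in product([False, True], repeat=2):
    former = ((not P) or (not T)) and (P or (not T))   # (P -> ~T) and (~P -> ~T)
    print(P, T, former == (not T))                     # prints True in every row
```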
|
H: what are first and second order logics?
The only knowledge I have on logic is due to a book I read a couple of years ago called Introduction to logic: and to the methodology of deductive sciences by Alfred Tarski. And in it he talks about free variables and sentences and quantifiers. But he never says what order of logic he is talking about or anything.
My question is this:
What is the difference between the two order logics and how can I tell if a statement is in one or the other.
AI: In first-order logic we are allowed to put a quantifier only on first-order variables. This means that when we interpret the language and ask whether or not a sentence is true or false, then we can only quantify over elements of the universe. In second-order logic, however, we are allowed to quantify over relations, or subsets of the universe as well.
For example, the statement "Every bounded set of real numbers has a least upper bound" is a second-order statement. We quantified over all the sets of real numbers, and we say "If $A$ is a bounded set, then $\sup A$ exists".
On the other hand, "$\sqrt2$ exists" is a first-order statement (in the language containing $+,\cdot,0,1$, that is) because we can characterize $\sqrt2$ as an element $x$ such that $x\cdot x=1+1$.
Second-order logic, then, is a much stronger way to write things. It allows us to express a lot more in comparison to first-order logic.
So why are we mostly interested in first-order logic? First of all, there is a lot of research into other strong logics (logics which allow us to express more than first-order itself, but usually less than second-order logic). Secondly, first-order logic has the completeness theorem and the compactness theorem, as well as the Löwenheim-Skolem theorems. These mean that we have a great model theory with first-order logic. On the other hand, we don't have those for stronger logics (either we don't have one of them, or we don't have both), which means that proof theory and model theory are going to be harder to work with.
Moreover second-order logic has several variants. For example we may allow only to quantify over subsets of the universe, not general relations; or we may interpret the second-order logic in a way that the subsets we are interested in were all definable by a first-order formula (Henkin semantics). These are weaker than just full-on quantification and so on. The Henkin semantics version of second-order logic is equivalent to first-order logic in a very good sense.
I feel that I'm already confusing you enough. So let's stop here.
|
H: A question about bounds, least and minimal elements, and partial vs strict ordered sets
It's not very clear to me if the concepts of bounds, least elements and minimal elements (also, greatest elements and maximal elements, etc. ) apply only to partial orders or if the definition applies to general ordered sets.
If I try to make an analogy between sets in general and the real numbers I would say that those definitions are explicitly made for partial orders, though I'm not sure and I don't know whether there are some sets with strict order to which the definitions apply.
For example, there is a theorem that says that "if $b$ is the least element of $B$ in some order on $A$ and $B\subseteq A$, then $b$ is the infimum of $B$". In this case I can think of the set $A=\{1,2,3\}$ with the usual strict order $<$ on $\mathbb{R}$ and $B=\{2,3\}$. Then $\inf(B)=1$ and $\min(B)=2$. But if I take the usual partial order $\leq$ then $\min(B)=\inf(B)=2$.
Here I'm using the next definition: Let $R$ be an ordering on $A$, and let $B\subseteq A$. Then,
1.- $a\in A$ is a lower bound of $B$ iff $\forall x\in B(a R x)$.
2.- $a\in B$ is a greatest element of $B$ iff $\forall x\in B(x R a)$.
3.- $a\in A$ is called an infimum of $B$ iff it is the greatest element of the set of all lower bounds of $B$.
Note: For the example I'm assuming as proved that the greatest element an the infimum are unique.
Edit: Essentially what I'd like to know is to what kind of orders the definitions of bounds, least and greatest elements, minimal and maximal elements, etc. apply.
AI: OK, these definitions are all for partial orders, but since total (linear) orders are partial orders as well, they work just as well for them.
Some things I will point out. First, bounds are exactly what they say they are: they are bounds of the set in question (notice the plural). For example,
take $A = \{1, 2, 3\}$ as a subset of $\mathbb{N}$ with the order $<$; then $4$, $5$, and $1{,}000$ are all upper bounds, but the only lower bound is $0$. But if instead we are thinking of $\leq$, then $3$ is also an upper bound and $1$ is also a lower bound.
Now, let $(A, \leq)$ be a partial ordered set and let $S \subset A$.
A minimal element of $S$ is an element $m$ such that there is no $n \in S$ such that $n \neq m $ and $n \leq m$. Equivalently, for all $n \in S$ $n \leq m \implies n = m$. A maximal element of $S$ is an element $m$ such that there is no $n \in S$ such that $n \neq m$ and $m \leq n$. Equivalently, for all $n \in S$ we have $m \leq n \implies m = n$ (see here).
A least element of $S$ is an element $l$ such that for all $s \in S$ we have $l \leq s$. A greatest element of $S$ is an element $g$ such that for all $s \in S$ we have $s \leq g$.
I agree these two definitions (minimal/least and maximal/greatest) are a bit confusing as they are not equivalent, so let me give some examples.
So now take, for example, $A = \mathbb{N}$, $S = \{2, 3, 4, 5, 6, 10, 12\}$, and let $\leq$ be the divisibility relation, i.e. $a \leq b \text{ iff } a \mid b$ (this is a genuine partial order). Then $2, 3, 5$ are all minimal elements and $10, 12$ are maximal elements. (This is why it is useful to look at the Hasse diagram, because you can pick out the minimal/maximal elements fairly quickly). But notice: $S$ has no least or greatest elements. But now consider $S = \{3, 5, 15\}$. Then again $S$ has $3, 5$ as minimal elements (and no least element), but $15$ is a maximal element and a greatest element.
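To double-check the divisibility example above, here is a short sketch (my own addition) that computes the minimal, maximal, least and greatest elements directly from the definitions:
```python
S = [2, 3, 4, 5, 6, 10, 12]
leq = lambda a, b: b % a == 0      # a <= b  means  a divides b

minimal  = [m for m in S if not any(n != m and leq(n, m) for n in S)]
maximal  = [m for m in S if not any(n != m and leq(m, n) for n in S)]
least    = [l for l in S if all(leq(l, s) for s in S)]
greatest = [g for g in S if all(leq(s, g) for s in S)]
print(minimal, maximal, least, greatest)   # [2, 3, 5] [10, 12] [] []
```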
Now, for infimum and supremum, you are looking at the greatest element of the lower bounds and the least element of the upper bounds. So in your example, the set of lower bounds is $\{1, 2\}$ and so the greatest is $2$ (notice that the definition of greatest includes equality; see here).
I can add more if you want and sorry if that explanation is a bit erratic. Let me know what else you would like to know and I can expand this answer. Hope it helps a little!
|
H: Calculating $1819^{13} \pmod{2537}$ using Fermat's little theorem
Can anyone make me understand how to calculate $1819^{13} \pmod{2537}$ using Fermat's little theorem? Here $p=2537$ and $p-1=2537-1=2536$.
I am unable to understand how to express $1819^{13}$ in terms of $1819^{2536}$.
AI: If you check the description of RSA on Wikipedia you will find that for the modulus $n = 2537 = 43\cdot 59$ we first calculate $\phi(n) = 42\cdot 58 = 2436$. Now from the problem you have mentioned it seems that $e = 13$ is the encryption key and $m = 1819$ is the message.
The modular exponentiation is simple by squaring and we can write
$\displaystyle \begin{aligned}1819^{13}\pmod{2537} &= 1819^{8 + 4 + 1}\pmod{2537}\\
&= 1819^{8}\cdot 1819^{4}\cdot 1819\pmod{2537}\\
&= 1819^{2^{3}}\cdot 1819^{2^{2}}\cdot 1819\pmod{2537}\\
&= 513^{2^{2}}\cdot 513^{2}\cdot 1819\pmod{2537}\\
&= 1858^{2}\cdot 1858\cdot 1819\pmod{2537}\\
&= 1844 \cdot 1858 \cdot 1819 \pmod{2537}\\
&= 2081 \pmod{2537}\end{aligned}$
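The same repeated-squaring computation, checked in a few lines (my own sketch; Python's three-argument pow does the modular exponentiation directly):
```python
m, n = 1819, 2537
s1 = m * m % n               # 1819^2 mod 2537 = 513
s2 = s1 * s1 % n             # 1819^4 mod 2537 = 1858
s3 = s2 * s2 % n             # 1819^8 mod 2537 = 1844
print(s1, s2, s3, s3 * s2 * m % n)   # 513 1858 1844 2081
print(pow(1819, 13, 2537))           # 2081, the same result via built-in pow
```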
You also need to find the decryption key $d$ such that $ed = 1 \pmod{\phi(n)}$. This requires you to find HCF of $e = 13$ and $\phi(n) = 2436$ which is clearly $1$, but we need to express this HCF in form $13x + 2436y = 1$ and then $x$ will be your decryption key $d$. For this we proceed as below.
$\displaystyle \begin{aligned}1 &= 3 - 2\\
&= 3 - (5 - 3)\\
&= 2 \cdot 3 - 1\cdot 5\\
&= 2\cdot(13 - 2\cdot 5) - 1\cdot 5\\
&= 2\cdot 13 - 5\cdot 5\\
&= 2\cdot 13 - 5\cdot(2436 - 187 \cdot 13)\\
&= 937\cdot 13 - 5\cdot 2436\end{aligned}$
so that $d = 937$. It will take time (again by modular exponentiation) to verify that from the ciphertext $c = 2081$ we can get $m = 1819$ by calculating $c^{d}\pmod{2537} = 2081^{937}\pmod{2537}$
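A short sketch (my own addition) of the extended Euclidean step and the final decryption check; the helper name ext_gcd is just illustrative:
```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, _ = ext_gcd(13, 2436)
d = x % 2436
print(g, d)                    # 1 937, matching the hand computation above
print(pow(2081, d, 2537))      # 1819: decrypting the ciphertext recovers the message
```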
|
H: Can we conclude from $V=\ker(T) \oplus\operatorname{im}(T)$ the invariance of both subspaces?
Can we conclude for an endomorphism $T \in \operatorname{End}(V)$, where $V$ is a finite dimensional vector space, from $V=\ker(T) \oplus \operatorname{im}(T)$ that the nullspace and image are invariant subspaces?
I somehow "feel" that this should be true, but I am not sure though!
AI: If $T$ is any linear map you always have $T(\ker T) = \{0\} \subset \ker T$ and $T(\operatorname{Im}(T))\subset T(V) = \operatorname{Im}(T)$.
|
H: For what kind of a subset its sums equal $\mathbb{R}^4$
For short, suppose $a,b$ are real numbers. Let $A=\{(\cos(at), \cos(bt), \sin(at), \sin(bt))\mid t\in \mathbb{R}\}$.
Let $B=\sum A=\{\sum_{i=1}^n x_i\mid x_i\in A, n \geq 1\}$.
For what values of $a,b$ does $B$ equal $\mathbb{R}^4$?
In general, what conditions can we impose to a subset $A$ of $\mathbb{R}^n$,
such that the sums of $A$ is the whole space?
Any references, suggestions are appreciated.
Thanks!
AI: First notice that it is enough to prove that $B$ contains a neighbourhood of the origin. To prove that you consider the map:
$$
\psi(t_1,\dots,t_n) = \sum_{i=1}^n \phi(t_i)
$$
where $\phi(t)$ is the curve defining $A$.
First of all you want $A$ to contain the origin, so you have to solve $\psi(\bar t) = 0$ and see if you find conditions on your parameters $a$ and $b$.
Then you have some $\bar t_0$ such that $\psi(\bar t_0)=0$. So the image of the map contains a neighbourhood of $0$ if the differential $D\psi(\bar t_0)$ has rank equal to $4$.
|
H: Find the volume of the body bounded by $z = x^2 + y^2, z= 1-x^2-y^2$.
Again, I am new to volume of bodies and I am struggling with it.
Find the volume of the body bounded by $z = x^2 + y^2, z= 1-x^2-y^2$.
Now from a previous question, I know that I can do it by $\iint_{D} {z_2(x,y) - z_1(x,y)dxdy}$.
In this case, here's what I did:
$V = \iint_{D} {[2x^2 + 2y^2 -1] dxdy}$. But I struggled with what $D$ is in this case. Is it two circles centered at $(0,0)$ with radii $\sqrt{z}$ and $\sqrt{1-z}$ or am I missing something here?! These questions are driving me crazy.
AI: You can use cylindrical coordinates for it again. Note that the presence of $x^2+y^2$ is usually a sign to use these coordinates rather than Cartesian ones. First of all, find the region on which the whole volume stands. What is that? For this question, intersect the two surfaces. $$x^2+y^2=z=1-x^2-y^2\longrightarrow x^2+y^2=1/2$$ This means that the desired region is the disk in the $xy$-plane bounded by the circle $x^2+y^2=1/2$.
Now, let's do some polar conversions. $$x^2+y^2=1/2\longrightarrow r^2=1/2,~~ \theta\in[0,2\pi]$$ or $$r\in [0,\sqrt{2}/2],~~\theta\in[0,2\pi]$$ So we have: $$V=\int_{\theta=0}^{2\pi}\int_{r=0}^{\sqrt{2}/2}\int_{z_1}^{z_2}r\, dz\, dr\, d\theta$$ Now guess what $z_1$ and $z_2$ might be. If you don't know triple integrals, we can compute $V$ as follows instead:
$$V=\int_{\theta=0}^{2\pi}\int_{r=0}^{\sqrt{2}/2}(z_2-z_1)~r dr d\theta$$
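For reference, a quick symbolic check (my own addition, assuming SymPy is available) with $z_1=r^2$ and $z_2=1-r^2$ filled in; it gives the volume $\pi/4$:
```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
z_top, z_bot = 1 - r**2, r**2          # the two surfaces written in polar form
V = sp.integrate((z_top - z_bot) * r, (r, 0, sp.sqrt(2)/2), (theta, 0, 2*sp.pi))
print(V)                               # pi/4
```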
|
H: Prove automorphism is trivial
I would like to prove the following:
Let $L\subset L'$, where $L'$ is a quadratic extension of $L$, and $\rho\in\text{Aut}(L'/L)$, the automorphism group of $L'$ which fixes $L$. Also, let $\mathfrak{p}$ be a prime ideal of $L$ which ramifies in $L'$. Then $\rho$ acts trivially on the residue class field.
This is what I've tried:
Let $\mathfrak{p'}^2=\mathfrak{p}R_{L'}$, then an element in $L'/\mathfrak{p'}$ can be written as $a+\mathfrak{p'}$ where $a\in L'$. Then, $a$ can be written as $x+y\lambda,\;x,y\in L,\;\lambda$ the basis element extension of $L$ to $L'$. So $\rho$ sends $\lambda$ to its conjugate. Then we have
\begin{align*}
\rho(x+y\lambda+\mathfrak{p'})&=x+y\overline{\lambda}+\rho(\mathfrak{p'})\\
&=x+y\lambda+\rho(\mathfrak{p'})+y\overline{\lambda}-y\lambda
\end{align*}
So I need to show that $\rho(\mathfrak{p'})+y\overline{\lambda}-y\lambda\in\mathfrak{p'}$, but I'm not too sure how.
Thanks for the help in advance.
AI: We have $\mathfrak p'^2=\mathfrak p$, so $\rho(\mathfrak p')^2=\mathfrak p$, hence $\rho(\mathfrak p')=\mathfrak p'$. So it remains to check that $y(\lambda-\overline\lambda)\in\mathfrak p' \quad\forall y\in L$.
Now $L'/\mathfrak p'\equiv L/\mathfrak p$, because the inertia degree is $1$. So $\rho\mid_{L'/\mathfrak p'}=\iota$, where $\iota$ is the identity on the residue field. And $\rho\mid_{L'/\mathfrak p'}$ is the induced map on the residue field.
That is, $\rho(\lambda)$ and $\lambda$ lie in the same coset of $L'$ modulo $\mathfrak p'$, hence our claim.
Hope this helps.
Edit
Sorry. I used the conclusion to show your statement. But our conclusion is easy to prove, and you need not make it so complex in fact. I am referring to the fact that the inertia degree being $1$ implies the triviality of $\rho$ acting on the residue field. This is what I meant, rather than that $\rho$ restricted is the inclusion. Sorry again for the misunderstanding.
Suppose $L'/\mathfrak p'\equiv (L/\mathfrak p)^f$, then $L'/\mathfrak p'\oplus \mathfrak p'/\mathfrak p'^2\equiv L'/\mathfrak p$ is of dimension $2f$ over $L/\mathfrak p$, but that dimension is also $[L'/\mathfrak p:L/\mathfrak p]=2$. Hence $f=1$, and the extension is trivial, thereby proving your claim.
|
H: Factor Equations
Please check my answer in factoring this equations:
Question 1. Factor $(x+1)^4+(x+3)^4-272$.
Solution: $$\begin{eqnarray}&=&(x+1)^4+(x+3)^4-272\\&=&(x+1)^4+(x+3)^4-272+16-16\\
&=&(x+1)^4+(x+3)^4-256-16\\
&=&\left[(x+1)^4-16\right]+\left[(x+3)^4-256\right]\\
&=&\left[(x+1)^2+4\right]\left[(x+1)^2-4\right]+\left[(x+3)^2+16\right]\left[(x+3)^2-16\right]\\
&=&\left[(x+1)^2+4\right]\left[(x+1)^2-4\right]+\left[(x+3)^2+16\right]\left[(x+3)-4\right]\left[(x+3)+4\right]\end{eqnarray}.$$
Question 2. Factor $x^4+(x+y)^4+y^4$
Solution: $$\begin{eqnarray}&=&(x^4+y^4)+(x+y)^4\\
&=&(x^4+y^4)+(x+y)^4+2x^2y^2-2x^2y^2\\
&=&(x^4+2x^2y^2+y^4)+(x+y)^4-2x^2y^2\\
&=&(x^2+y^2)^2+(x+y)^4-2x^2y^2
\end{eqnarray}$$
I am stuck on question number 2; I don't know what comes next after that line.
AI: \begin{equation}
\begin{split}
\ & x^4+y^4+(x+y)^4\\
\ =& (x^2+y^2)^2-2x^2y^2+(x^2+y^2+2xy)^2\\
\ =& (x^2+y^2)^2-2x^2y^2+(x^2+y^2)^2+4xy(x^2+y^2)+4x^2y^2\\
\ =& 2((x^2+y^2)^2+x^2y^2+2xy(x^2+y^2))\\
\ =& 2(x^2+y^2+xy)^2
\end{split}
\end{equation}
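A quick computer-algebra confirmation of this factorization (my own addition, assuming SymPy is available):
```python
import sympy as sp

x, y = sp.symbols('x y')
print(sp.factor(x**4 + y**4 + (x + y)**4))   # 2*(x**2 + x*y + y**2)**2
```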
|
H: What do countable transitive models of ZFC look like?
According to Cantor's Attic (link):
Not all transitive models of ZFC have the $V_\kappa$ form, for if there is any transitive model of ZFC, then by the Löwenheim-Skolem theorem there is a countable such model, and these never have the form $V_\kappa$.
Question: what do countable transitive models of ZFC look like?
It is an interesting fact that every transitive model of second-order ZFC equals $V_\kappa$ for some $\kappa$. See Asaf's answer here.
AI: That's a tough question to answer. If $M$ is a countable transitive model of $\sf ZFC$, and $M$ is a model of $V=L$ then $M=L_\beta$ for some countable $\beta$. But other than similar cases like that, it's very hard to say exactly what it looks like.
To illustrate the point, if we have some very large cardinals in the universe then we can take an elementary submodel of some $V_\kappa$ which contains a lot of large cardinal assumptions. The countable model will think that a lot of countable ordinals are very large cardinals, which makes the model quite large and complicated, but when considering a countable model of the same theory it's difficult to explain what it looks like.
Also over countable models we can prove that generic sets exist, therefore we can force over them and generate new countable transitive models which are very different. So we can force and add anything that can be added by forcing, or class forcing.
All in all we can say these things:
If $M$ is a countable transitive model of $\sf ZFC$ then ${\sf Ord}^M=\beta$ for some countable ordinal $\beta$, and $L_\beta$ is a countable transitive model of $\sf ZFC+\it V=L$.
Every model of $\sf ZFC$, and even more so when the model is transitive, is the limit of its own von Neumann hierarchy.
|
H: Complex Differential Equation: $f'(z)=bf(z) \iff f(z)=ae^{bz}$
Let $f\colon G\to\mathbb{C}$ be holomorphic on the domain $G\subseteq\mathbb{C}$ and $b\in\mathbb{C}$. Show that the two following statements are equivalent:
1) $f(z)=ae^{bz}$ on $G$ with a constant $a\in\mathbb{C}$ 2) $f'(z)=bf(z)$ on $G$
1) $\Rightarrow$ 2):
My idea is to use
$$
z=x+iy \qquad a=w+iq \qquad b=m+in
$$
what gives
$$
f(z)=(w+iq)\cdot\exp((mx-ny)+(my+nx)\cdot i)
$$
Maybe now one can use $f'(z)=f_x(z)$? Is that a good idea, or is it useless to calculate this partial derivative?
2) $\Rightarrow$ 1):
no idea yet
AI: $1 \Rightarrow 2$: If $f(z) = a e^{bz}$, then $f'(z) = bae^{bz} = bf(z)$.
$2 \Rightarrow 1$: Suppose $f'(z) = bf(z)$ and consider $g(z) = e^{-bz}f(z)$. We have:
$$
g'(z) = -be^{-bz}f(z) + e^{-bz}f'(z) = 0
$$
Hence $g$ is constant. We have $g(0) = f(0)$, so let $a = f(0)$. It follows that $g(z) = e^{-bz}f(z) = a$. Thus, $f(z) = ae^{bz}$.
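For what it's worth, a computer-algebra solver reproduces the same general solution (my own sketch, assuming SymPy is available):
```python
import sympy as sp

z, b = sp.symbols('z b')
f = sp.Function('f')
print(sp.dsolve(sp.Eq(f(z).diff(z), b * f(z)), f(z)))   # Eq(f(z), C1*exp(b*z))
```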
|
H: Which of these sets is a subspace of F?
Let $F = \mathbb{R}^\mathbb{N}$. I need to check which of these sets are subspaces of $F$:
$F_1 := \{ x \in F:\ \text{$x$ is bounded}\}$,
$F_2 := \{ x \in F:\ \text{$x$ is convergent}\}$,
$F_3 := \{ x \in F:\ \text{$x$ is a zero sequence}\}$,
$F_4 := \{ x \in F:\ \text{$x_n = x_m$ for $n \ge m$}\}$,
$F_5 := \{ x \in F:\ \text{$x$ has exactly one limit point}\}$,
$F_6 := \{ x \in F:\ \text{$x$ is unbounded}\}$.
How do I show that $F_1,\dots,F_6$ are subspaces of $F$?
I know the definition of a subspaces, but I'm afraid I don't know how to apply it on, for example, $F_1$.
AI: To show that $F_k$ is a subspace of $\mathbb R^\mathbb N$ you should verify that $F_k$ is a non empty set and any linear combination of two elements of $F_k$ remains in $F_k$.
Let's show an example:
Clearly $F_1$ is a non empty set since the zero sequence is bounded.
Let $(x_n)$ and $(y_n)$ be two bounded sequences, so there are $M,N$ such that
$$|x_n|\leq M\quad\text{and}\quad |y_n|\leq N\quad\forall n\in\mathbb N$$
and let $a,b\in \mathbb R$ so
$$|ax_n+by_n|\leq |a||x_n|+|b||y_n|\leq |a|M+|b|N\quad\forall n\in\mathbb N$$
so the sequence $(ax_n+by_n)$ is bounded and then $F_1$ is a subspace of $\mathbb R^\mathbb N$.
|
H: Concept Of Double Integration
Can someone explain how double integration amounts to calculating volume, just as single integration calculates area?
AI: It's basically this: When you do single variable integration, you are fitting rectangles under a curve and letting the width of the rectangles get smaller and smaller. This will give you an approximation of the area under the curve.
When you do double integration over a region, you fit boxes under a surface. Just as in the single variable case, as you let the area of the base of these boxes get smaller and smaller, the volume of all the boxes will get closer and closer to the volume under the surface.
This website seems to have some pictures of this: http://www.vias.org/calculus/12_multiple_integrals_01_07.html
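A tiny numerical illustration of the box-summing idea (my own sketch; the surface $f(x,y)=xy$ over the unit square is just an arbitrary example):
```python
# approximate the volume under z = f(x, y) over [0,1] x [0,1] by summing box volumes
def f(x, y):
    return x * y

N = 400                       # boxes per side; more boxes give a better approximation
h = 1.0 / N
vol = sum(f((i + 0.5) * h, (j + 0.5) * h) * h * h
          for i in range(N) for j in range(N))
print(vol)                    # about 0.25, the exact value of the double integral
```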
|
H: Non-trivial Topology
I can't understand the differences between a non-trivial topology and a trivial one.
What's the meaning of a "non-trivial" topology?
Is there a link with connection's properties?
For example, could we say that a moebius strip has a "non-trivial" topology while an ordinary strip has a trivial one?
AI: A topology $T$ on a set $X$ is the set of open sets (more precisely: the sets we will afterwards call open sets when working with this topology $T$; openness is no absolute and inherent property of a subset of $X$), that is a set of subsets of $X$ such that
$X\in T$
If $I$ is an index set and $U_i\in T$ for each $i\in I$, then $\bigcup_{i\in I}U_i \in T$
If $U,V\in T$ then $U\cap V\in T$.
Hence for any nonempty set $X$ there is one topology, where checking conditions 1., 2., 3. is trivial, i.e. does not require "any" computational effort: $T=\{\emptyset,X\}$. That's why this topology is called the trivial topology on $X$ (also: the indiscrete topology).
As a matter of fact, verifying conditions 1., 2., 3. is also trivial if one chooses $T$ to be the powerset of $X$, however that topology runs by the name of discrete topology. This is so even though the verification of the above conditions is "even more trivial" in this case: If all sets are in $T$ anyway, there is nothing to check. Rule of thumb: If ever you have two choices for naming trivial objects, the "smaller" one wins.
Compare this to other cases where objects are called trivial, such as the trivial group having only a single element $e$ with composition $e\cdot e=e$ (often occurring in phrases such as "the kernel of ... is trivial")
|
H: Does every countable subset of the set of all countable limit ordinals have the least upper bound in it?
I'm sorry if the question is that kind of trivial, I just feel uncertain about these ordinals all the time. Is the answer to the following question "yes":
Denote by $A$ the set of all countable limit ordinals. Does every countable subset of $A$ have its least upper bound in $A$?
AI: Assuming the axiom of choice, the countable union of countable sets is countable. Therefore the countable limit of countable ordinals is also countable. And trivially, the limit of limit ordinals is a limit ordinal.
So the answer is yes. The set of countable limit ordinals is closed under countable sequences.
However if we don't assume the axiom of choice, then it is consistent that $\omega_1$ is the countable limit of countable ordinals, in which case the answer would be negative.
|
H: Fundamental theorem of Morse theory for $\Omega(S^n )$
Using the Fundamental theorem of Morse Theory we can prove that $\Omega(S^n)$ is homotopically equivalent to a CW complex with one cell each in dimensions $0,n-1,2(n-1), \cdots$ and so on. But how can I attach these cells? For example $\Omega(S^2) \simeq e^0 \cup e^1 \cup e^2 \cup \cdots $. But I don't think that $\Omega(S^2)$ is homotopically equivalent to $\mathbb{R}P^\infty$...
AI: The answer below is repeated from the comment thread for my answer at: Path space of $S^n$
Of course, $\Omega(\mathbb{S}^2)$ isn't homotopy equivalent to $\mathbb{RP}^{\infty}$. In fact, they don't even have isomorphic homotopy groups. The reason is that the universal cover of $\mathbb{RP}^{\infty}$ is the contractible infinite dimensional sphere $\mathbb{S}^{\infty}$. It's a (easy to prove) theorem that for $n\geq 2$, $\pi_n$ of the universal cover of a (path-connected) space agrees with $\pi_n$ of the space. Therefore, $\pi_n(\mathbb{RP}^{\infty})=0$ for $n\geq 2$. Finally, $\pi_1(\mathbb{RP}^{\infty})=\mathbb{Z}/2$ because the universal cover has fiber of cardinality two.
In general it's a hard problem to explicitly compute the attaching maps of cells in a CW complex. The homotopy type of $\Omega(\mathbb{S}^2)$ is equally as complicated as the homotopy type of $\mathbb{S}^2$ because $\pi_n(\Omega(\mathbb{S}^2))\cong \pi_{n+1}(\mathbb{S}^2)$ (this is easy to prove using either the pathspace fibration or the loopspace/suspension adjunction).
If you're interested in computing homology, then the situation is nice for $n>2$. In this case, $\Omega(\mathbb{S}^2)$ has exactly one cell in each nonnegative integer multiple of $n-1>1$. Therefore, in cellular homology, the differentials are all zero and $H_i(\Omega(\mathbb{S}^n))=\mathbb{Z}$ if $i$ is a nonnegative integer multiple of $n-1$ and zero otherwise.
The situation is more subtle in the case $n=2$. In the cellular complex for $\Omega(\mathbb{S}^2)$, there is a $\mathbb{Z}$ in each degree but the differentials are (a priori) unknown. So, an alternative approach is needed unless one is given explicit information about the homotopy types of the attaching maps.
The approach that immediately comes to mind is the Serre spectral sequence applied to the pathspace fibration of $\mathbb{S}^2$. All the differentials on the $E_2$ page must be isomorphisms (except the ones that originate from the zero group and enter a nonzero group and vice-versa) because the $E_2$ page equals the $E_{\infty}$ page (and the $E_{\infty}$ page just has a $\mathbb{Z}$ in the $(0,0)$ lattice point because the pathspace of $\mathbb{S}^2$ is contractible). Since the differentials on the $E_2$ page go two units to the left and one unit up, it follows that the homology groups of $\Omega(\mathbb{S}^2)$ are all isomorphic and and the common isomorphism class (given by, e.g., the zeroth homology group!) is $\mathbb{Z}$. Therefore, $H_i(\Omega(\mathbb{S}^2))\cong \mathbb{Z}$ for all nonnegative integers $i$. The same result holds for cohomology by the universal coefficient theorem. However, the homotopy groups of $\Omega(\mathbb{S}^2)$ are unknown because they're unknown for $\mathbb{S}^2$!
I hope this helps!
|
H: A set, which appropriately scaled is expressible as sums of elements of a compact set is pre-compact
Assume $X$ is a Banach space and $K\subseteq X$ is compact. Let $C\subseteq X$ be such that $(\forall x\in C)(\exists x_1,x_2\in K)(2x=x_1+x_2)$
Does it follow that $C$ is pre-compact? In particular I am trying to prove this result.
AI: Yes, it does follow. Addition and scalar multiplication are continuous, thus $C$ is a subset of the image of the compact set $K \times K$ under the continuous map $(x,y) \mapsto \frac12(x+y)$.
A subset of a compact set is precompact.
|
H: Help for solving this sequence
I couldn't solve the following sequence and couldn't even see any pattern:
AI: Check the OEIS. In particular, we have: $prime(n) -2n$ as seen here.
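Assuming the (missing) sequence in the question really is $\mathrm{prime}(n)-2n$, here is a small sketch (my own addition) that generates its first terms:
```python
from sympy import prime

print([prime(n) - 2 * n for n in range(1, 16)])
# [0, -1, -1, -1, 1, 1, 3, 3, 5, 9, 9, 13, 15, 15, 17]
```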
|
H: Factor Equation
Help me with this,
Question: factor $x^3y-x^3z+y^3z-xy^3+xz^3-yz^3$.
Solution:
$$\begin{eqnarray}&=&x^3y-x^3z+y^3z-xy^3+xz^3-yz^3\\
&=&x\left(z^3-y^3\right)+y\left(x^3-z^3\right)+z\left(y^3-x^3\right)\\
&=&x\left[(z-y)\left(z^2+zy+y^2\right)\right]+y\left[(x-z)\left(x^2+xz+z^2\right)\right]+z\left[(y-x)\left(y^2+xy+x^2\right)\right]\end{eqnarray}$$
This expression is quite simple at first glance, but I got stuck again at that line. I appreciate any help.
AI: $x^3y-x^3z+y^3z-xy^3+xz^3-yz^3$
$=x^3(y-z)+yz(y^2-z^2)-x(y^3-z^3)$
$=x^3(y-z)+yz(y+z)(y-z)-x(y-z)(y^2+yz+z^2)$
$=(y-z)\{x^3+yz(y+z)-x(y^2+yz+z^2)\}$
Now, $x^3+yz(y+z)-x(y^2+yz+z^2)$
$=x^3+y^2z+yz^2-xy^2-xyz-z^2x$
$=x(x^2-y^2)-yz(x-y)-z^2(x-y)$
$=(x-y)\{x(x+y)-yz-z^2\}$
Now, $x(x+y)-yz-z^2$
$=x^2+xy-yz-z^2=(x+z)(x-z)+y(x-z)=-(z-x)(x+y+z)$
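Putting the pieces together, the full factorization is $(y-z)(x-y)(x-z)(x+y+z)$. A quick computer-algebra check (my own addition, assuming SymPy is available):
```python
import sympy as sp

x, y, z = sp.symbols('x y z')
e = x**3*y - x**3*z + y**3*z - x*y**3 + x*z**3 - y*z**3
print(sp.factor(e))   # a product of (x - y), (y - z), (x - z), (x + y + z), up to sign placement
```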
|
H: The Analog of the Cube in The Fourth Dimension
I was just wondering how a "cube" would look in 4-D. I know that in 1-D it is a line, in 2-D it is a square, in 3-D it is a cube.
Is it possible to envision it? If it is, how would the axes be defined? (i.e: 3-D as the x,y, and z axes)
P.S.: Not sure what tag this would go under. Geometry maybe?
AI: Generally speaking, I imagine the fourth-dimension as the third-dimension with a grey-scale. Given a point $(x,y,z,w)$, first find $(x,y,z)$ in the third-dimension. Next, think about $w$ as looking whitish if it's very small (i.e., approaches white as $w \rightarrow -\infty$) and black if it's very large (i.e., approaches black as $w \rightarrow \infty$).
For example, the point $(0,0,0,0)$ would be at the origin in three-space and be a middling grey. Comparatively, the point $(0,0,0,37912739217)$ would be at the origin in three-space and be a much darker shade of grey.
I will leave it to you to think about what the four-dimensional unit hypercube would look like with this grey-scale model underlying it.
|
H: Closed set in $l^1$ space
Let $$ X := \left \{ (a_n) : \sum_{n=0}^\infty |a_n| < \infty \right\}$$ with the metric $d(a_n,b_n) := \sum_n |a_n-b_n|$. Let $\delta_j^{(n)} := 1$ if $n = j$ and $0$ otherwise. Denote $\delta^{(n)}:=(\delta_j^{(n)})_{j=0}^\infty$ and $E := \{ \delta^{(n)} : n \in \mathbb N\}$.
I want to show that $E$ is closed and bounded but not compact. Boundedness is trivial but I get stuck at closedness. If $(x_j)_{j=0}^\infty$ is the limit of a sequence $(\delta^{(n_k)})_{k=0}^\infty$ in $E$ I get
$$
\forall \epsilon > 0 \exists N \in \mathbb N \forall k \geq N: \sum_{j=0}^\infty |\delta_j^{n_k} - x_j| = |1-x_{n_k}| + \sum_{j \neq n_k} |x_j| < \epsilon.
$$
AI: Your strategy for proving closedness (take an arbitrary limit point then show this point belongs to the set) is the right one under most circumstances. This problem, however, is somewhat special.
For one thing, the set $E$ actually has no limit point. To see this, simply note that for $x\neq y\in E$, $\|x-y\|=2$, so no sequence of distinct points in $E$ is Cauchy, let alone convergent. In particular, $E$ has no limit points, and a set with no limit points is closed.
Having shown this, consider the sequence $(\delta^{(n)})_{n\in\mathbb N}$ itself: since no subsequence is Cauchy, it has no convergent subsequence, so $E$ is not sequentially compact, which implies non-compactness in a metric space.
|
H: Real valued analytic function defined on a connected set is constant
Let $G$ be a connected set and $f : G \rightarrow \mathbb{C}$ a real valued analytic function. Prove that $f$ is constant.
My idea to prove the result is to prove a subset $A \neq \varnothing$ of the connected set $G$ is both open and closed. So $G=A$
Take $f(w) = a$
$$A = \{z\colon z \in G, f(z) = a\}$$
Now I want to show that $A$ is infinite. How to do it?
After that it is easy to prove $A=G$.
AI: This isn't an answer to your question, but rather an answer to your problem.
Let $f$ be an holomorphic function on a connected set $G$.
There exist functions $u,v$ such that $f=u+iv$.
Now using Daniel Fischer's hint, since $f$ is holomorphic, $u_x=v_y$ and $u_y=-v_x$. By hypothesis $\text{im}(f)\subseteq \Bbb R$, therefore $v=\textbf 0$ and it follows that $u_x=\textbf 0= v_x$.
Finally use the $f'=u_x+iv_x$ and the fact that $G$ is connected to conclude. (This is used in the last step).
|
H: parallel and normal projections
I have a vector $v$ given by $(v_x, v_y, v_z)$ which makes an angle $\theta$ with the $x$-axis. The projection of $v$ onto $x$ is given by the dot product
$$v\cdot x = \cos\theta\sqrt{v_x^2+v_y^2+v_z^2}$$
Say I want to find the projection of $v$ onto the $yz$-plane ($v$ has an angle $\pi/2 - \theta$ relative this plane). Is this simply given by $v\cdot (y+z)$?
EDIT: I know that $v\cdot x = v_x$, so I can find an expression for $\cos \theta$. Is the projection onto the $yz$-plane simply determined by $\sin \theta$, which I can find by $\cos^2 \theta + \sin^2 \theta = 1$?
AI: Your notation could be a little confusing. If you have the projection $\mathbf{v} \cdot \mathbf{\hat y}$ onto the $y$-axis, and the projection $\mathbf{v \cdot \hat z}$ onto the $z$-axis, then the projection of $\mathbf{v}$ onto the $yz$ plane will be
$$
\mathbf{v}_{\text{proj}} = (\mathbf{v \cdot \hat y})\, \mathbf{\hat y} + (\mathbf{v \cdot \hat z})\, \mathbf{\hat z}.
$$
As you've written it, $v \cdot (y+z)$ appears to be a scalar, which is not what you want (I have given you a 3-vector, which you can reduce to a 2-vector by dropping the 0-term associated with the $x$ component).
This is a little general in case you want to project onto other planes. If you want the projection of a vector onto the $zy$ plane, just drop the $x$-component of the vector, so $\mathbf{v}_\text{proj} = (0, v_y, v_z)$.
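A tiny numeric illustration of this projection (my own addition; the vector is arbitrary):
```python
import numpy as np

v = np.array([3.0, 4.0, 12.0])                        # an example (v_x, v_y, v_z)
y_hat, z_hat = np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])
v_proj = v.dot(y_hat) * y_hat + v.dot(z_hat) * z_hat
print(v_proj)                                          # [ 0.  4. 12.], the x-component is dropped
```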
|
H: Is there any field of characteristic two that is not $\mathbb{Z}/2\mathbb{Z}$
Is there any field of characteristic two that is not $\mathbb{Z}/2\mathbb{Z}$?
That is, if a field is of characteristic 2, then does this field have to be $\{0,1\}$?
AI: It is not hard to see that $x^2+x+1$ does not have a root in $\Bbb{Z/2Z}$. Therefore we can extend the field with this root. Also every field has an algebraic closure, and this shows that $\Bbb{Z/2Z}$ is not algebraically closed, so its closure is strictly larger (and in fact infinite).
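A concrete way to see the resulting four-element field (my own sketch, not part of the answer): represent elements as $a+b\omega$ with $a,b\in\{0,1\}$ and $\omega^2=\omega+1$, and tabulate the multiplication.
```python
# arithmetic in GF(4) = (Z/2Z)[w] / (w^2 + w + 1); an element a + b*w is stored as (a, b)
def mul(p, q):
    (a, b), (c, d) = p, q
    # (a + b w)(c + d w) = ac + (ad + bc) w + bd w^2, and w^2 = w + 1
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

elems = [(0, 0), (1, 0), (0, 1), (1, 1)]    # 0, 1, w, w + 1
for p in elems:
    print([mul(p, q) for q in elems])       # e.g. w * w = (1, 1), i.e. w^2 = w + 1
```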
|
H: proof of equivalence of continuity and continuity in terms of limits of sequences
I am currently working on the axiom of choice and was looking for easy applications. A common example is the proof of the equivalence of continuity with continuity in terms of limits of sequences. Usually the proof uses the axiom of choice.
I found this site (http://www.apronus.com/math/cauchyheine.htm), where it is shown that one does not need the axiom of choice, if f is continuous
in a neighbourhood of x. It is claimed though, that we do need the axiom of choice in the general case.
Now I was wondering how one can show that the axiom of choice is necessary for this proof, and whether the proposition is equivalent to some version of the axiom of choice, like countable choice, or the statement that there is a choice function for the real numbers?
thank you for your replies
AI: There are two ways to see that the axiom of choice is needed for proving a certain statement:
Construct a model in which the axiom of choice fails, and the statement fails.
Assume that the statement is true, and prove a weak form of choice.
In the case of continuity of real valued functions it is not hard to construct a counterexample using standard models of $\lnot\sf AC$. It is slightly (but not much) harder to prove that the statement "every $f\colon\Bbb{R\to R}$ is sequentially continuous at $x$ if and only if it is continuous at $x$" implies the axiom of choice for countable sets of real numbers.
The general proof that "a function between metric spaces is continuous if and only if it is sequentially continuous" implies the axiom of countable choice is not too difficult either (at least to read; I'm sure that coming up with it was difficult).
For more details you can, and should, check out Herrlich's wonderful book The Axiom of Choice where he covers these questions in details.
|
H: Is a subset of $\mathbb{R}^{n}$ that is homeomorphic to $\mathbb{R}^{n}$ necessarily open?
Let $A$ be a subset of $\mathbb{R}^{n}$, such that $A$ is homeomorphic to $\mathbb{R}^{n}$.
Is $A$ open in $\mathbb{R}^{n}$?
AI: Yes. This is a special case of the theorem of invariance of domain.
|
H: differentiability: a question involving interchange of limits
Let $f:[a,b]\to\mathbb{R}$ continuous and $C^1(\,]a,b])$.
Suppose
$$ f'(x)\xrightarrow[x\to a+]{}{}l $$
1) If $l\in\mathbb{R}$ I manage to prove that $\exists\,f'(a)=l$ (I used uniform continuity of $f'$), hence $f\in C^1([a,b])$.
2) Now if $l=\infty$ I would say that $f'(a)=\infty$, i.e. $\frac{f(a+h)-f(a)}{h}\xrightarrow[h\to0+]{}\infty$. Is it true, or are further hypotheses (e.g. concavity of $f$) needed? How can I prove it?
AI: Use the mean value theorem; you don't even need continuity of the derivative, just existence of a (one-sided) limit. For simplicity assume $(a,b) = (0,1)$; then
$$ \frac{f(h) - f(0)}{h} = f^{'}(\epsilon_h)$$
where $\epsilon_h \in (0,h)$
Take $\lim_{h \rightarrow 0^+}$. The RHS has a limit by assumption, because when $h \rightarrow 0^+$ we also have $\epsilon_h \rightarrow 0^+$; hence the LHS also has a limit, and the two limits are equal.
|
H: Approximating sum by Gaussian integral - how big is the error?
I have the following infinite sum:
$$S=\sum_{n=1}^{\infty}e^{-an^2}$$
Where $a$ is a positive constant. Is there a simple way to estimate the error when approximating $S$ by:
$$S \approx \int_0^ \infty e^{-ax^2}dx .$$
Does this depend at all on the value of $a$?
AI: A very simple estimate (which is what you were asking for) would be the following:
$f(x) = e^{-a x^2}$ is strictly decreasing for $x \ge 0$, therefore
$$
f(n+1) < \int_n^{n+1} f(x)\,dx < f(n)
$$
for all $n \in \mathbb N_0$. Summation gives
$$
\sum_{n=1}^\infty f(n) < \int_0^\infty f(x)\,dx < f(0) + \sum_{n=1}^\infty f(n)
$$
So the difference between sum and integral is at most $f(0) = 1$.
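A quick numerical check of this bound for one value of $a$ (my own addition; here $\int_0^\infty e^{-ax^2}dx=\tfrac12\sqrt{\pi/a}$):
```python
from math import exp, pi, sqrt

a = 0.5
S = sum(exp(-a * n * n) for n in range(1, 200))   # the tail beyond n = 200 is negligible
I = 0.5 * sqrt(pi / a)                            # the Gaussian integral
print(S, I, abs(I - S))                           # the gap is about 0.5, below the bound f(0) = 1
```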
|
H: Asymptotic formula for complex gamma function at $+\infty+i \times y$
I am currently looking for the behaviour of the complex gamma function at real infinity:
$\lim_{x \to \infty}\Gamma\left(x+i\times y\right)$
and more particularly for asymptotic formulas for the following functions:
$f_1\left(y\right)=\text{Re}\left(\Gamma\left(+\infty+i\times y\right)\right)$
$f_2\left(y\right)=\text{Im}\left(\Gamma\left(+\infty+i\times y\right)\right)$
As a graphical reminder, the behaviour of the complex gamma function for positive real parts is:
And when we cut slices for increasing $x$ values we obtain:
which look like damped cosines/sines.
So, are there asymptotic formulas for these functions at infinity?
$f_1\left(y\right)=\text{Re}\left(\Gamma\left(+\infty+i\times y\right)\right)$
$f_2\left(y\right)=\text{Im}\left(\Gamma\left(+\infty+i\times y\right)\right)$
AI: This is just Stirling's approximation:
$$\Gamma(z)=\sqrt{2\pi}\exp\left\{z\ln z-z-\frac{1}{2}\ln z\right\}\Bigl[1+o(1)\Bigr].$$
In particular, as $x\rightarrow+\infty$, one has
\begin{align}
\mathrm{Re}\,\Gamma(x+iy)&\sim \sqrt{\frac{2\pi}{x}}\,\left(\frac{x}{e}\right)^x\cos\left(y\ln x\right)\Bigl[1+o(1)\Bigr],\\
\mathrm{Im}\,\Gamma(x+iy)&\sim \sqrt{\frac{2\pi}{x}}\,\left(\frac{x}{e}\right)^x\sin\left(y\ln x\right)\Bigl[1+o(1)\Bigr].
\end{align}
The observed oscillations are produced by the cosine/sine of $y\ln x$.
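A quick numerical check of this approximation (a sketch; it assumes SciPy is available, and the particular values of $x$ and $y$ are arbitrary choices of mine):

```python
import cmath, math
from scipy.special import gamma  # scipy's gamma accepts complex arguments

def stirling(z):
    """sqrt(2*pi) * exp(z*ln z - z - (1/2)*ln z), the approximation quoted above."""
    return math.sqrt(2 * math.pi) * cmath.exp(z * cmath.log(z) - z - 0.5 * cmath.log(z))

y = 2.0
for x in (10.0, 20.0, 30.0):
    z = complex(x, y)
    exact, approx = complex(gamma(z)), stirling(z)
    print(f"x = {x:4.0f},  relative error = {abs(exact - approx) / abs(exact):.2e}")
```

The relative error shrinks as $x$ grows, and the oscillation in the real and imaginary parts comes from the factor $e^{iy\ln x}$ hidden in $\exp(z\ln z)$.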
|
H: Can't see how this function is differentiable Spivak's Calculus on Manifolds Exercise 2-4
The problem is as follows:
Let $g$ be a continuous real-valued function on the unit circle $\{x \in \mathbb{R}^2 : \lvert x \rvert = 1\}$ such that $g(0,1) = g(1,0) = 0$ and $g(-x) = -g(x)$. Define $f: \mathbb{R}^2 \to \mathbb{R}$ by
$$f(x) =
\begin{cases}
\lvert x \rvert \cdot g\left(\frac{x}{\lvert x \rvert}\right) & : x \neq 0\\
0 & : x = 0
\end{cases}$$
The question is: If $x \in \mathbb{R}^2$ and $h: \mathbb{R} \to \mathbb{R}$ is defined by $h(t) = f(tx)$, show that $h$ is differentiable.
I'm not sure which definition of differentiation I should use–the usual single variable one or the general one defined in the chapter? I went with the single variable definition since $h$ is a single variable function and I get:
$$\lim_{k \to 0} \frac{h(t+k) -h(t)}{k} = \lim_{k \to 0} \frac{f((t+k)x) - f(tx)}{k} = \lim_{k \to 0} \frac{\lvert tx-kx \rvert \cdot g\left( \frac{tx-kx}{\lvert tx-kx \rvert} \right) - \lvert tx \rvert \cdot g\left( \frac{tx}{\lvert tx \rvert} \right)}{k}$$
I don't know what to do after this. We don't know if g is differentiable. If I use the other limit definition, I run into the same problem. If the information given for $g$ is supposed to somehow imply that $g$ is differentiable, I don't see it. Any hints?
AI: You can show that $h(t)=t|x| g(\frac{x}{|x|})$ when $x\neq0$ and $h(t)=0$ when $t=0$. Now notice that $h(t)$ is linear in '$t$'. Hence differentiable. You can even use the definition of differentiablity to show this if you want.
|
H: Cubic with turning point near zero
I want a bunch of cubics which have a turning point near the $x$-axis, both above and below the $x$-axis.
That way, the graph might not easily show whether there is a zero there, and Newton's method might give the answer.
I want a bunch so that each student gets a different one.
I know a cubic has a double zero if it shares a zero with its own derivative. If the cubic is $f(x)=x^3+ax^2+bx+c$, I think it has a double zero if $4b^3-18abc-a^2b^2+4a^3c+27c^2=0$. How can I find values of $a$, $b$, $c$ where that function of $a$, $b$ and $c$ is near zero?
AI: Just fix plenty of (say integers to make your life easier) values for $a$ and $b$ and you have a quadratic in $c$. I think that's the easiest way to generate quick triples of $a,b,c$ with your desired values.
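For instance, a small script along these lines (a sketch; the coefficient ranges, the nudge $\pm\varepsilon$, and the rounding are arbitrary choices of mine):

```python
import random

def near_double_root_cubics(n, eps=0.05):
    """Generate n cubics x^3 + a x^2 + b x + c for which the double-root condition
    4b^3 - 18abc - a^2 b^2 + 4a^3 c + 27c^2 = 0 is *almost* satisfied, so the
    turning point sits just above or just below the x-axis."""
    out = []
    while len(out) < n:
        a, b = random.randint(-5, 5), random.randint(-5, 5)
        if a == 0 and b == 0:
            continue
        # view the condition as a quadratic in c:
        #   27 c^2 + (4a^3 - 18ab) c + (4b^3 - a^2 b^2) = 0
        A, B, C = 27, 4 * a**3 - 18 * a * b, 4 * b**3 - a**2 * b**2
        disc = B * B - 4 * A * C
        if disc < 0:
            continue
        c = (-B + disc**0.5) / (2 * A) + random.choice((-1, 1)) * eps  # nudge off the double root
        out.append((a, b, round(c, 3)))
    return out

for a, b, c in near_double_root_cubics(5):
    print(f"x^3 + ({a})x^2 + ({b})x + ({c})")
```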
Hope that helps,
|
H: How to identify the homogeneous coordinates on $\mathbb{P}V$ with the elements of $V^*$?
Let $V=K^{n+1}$ be a vector space of dimension $n+1$ and $\mathbb{P}V$ the projective space associated to $V$. How to identify the homogeneous coordinates on $\mathbb{P}V$ with the elements of $V^*$? Thank you very much. I think that the homogeneous coordinates on $\mathbb{P}V$ are $Z_0, \ldots, Z_n$, where $Z_0, \ldots, Z_n$ are standard coordinates on $K^{n+1}$.
AI: As you wrote, $Z_i$ is a standard coordinate on $K^{n+1}$, in other words a linear map $Z_i: K^{n+1}\to K$, in other words an element of $V^*$ where $V=K^{n+1}$.
|
H: How to find closed form by induction
How can I find the closed form of
a) 1+3+5+...+(2n+1)
b) 1^2 + 2^2 + ... + n^2
using induction?
I'm new to this site, and I've thought about using the series 1 + 2 + 3 +...+ n = n(n+1)/2 to help me out. But isn't that technically using prior knowledge and hence invalid? Am I on the right track? Thanks.
AI: $a):$ Clearly it is an Arithmetic Series
So, the sum up to $n$th term is $=\frac n2\{2\cdot1+(n-1)2\}=n^2$
Let $P(n):\sum_{1\le r\le n}(2r-1)=n^2$
$P(1):\sum_{1\le r\le 1}(2r-1)=1$ which is $=1^2$ so $P(n)$ is true for $n=1$
Suppose $P(n)$ is true for $n=m$.
So, $\sum_{1\le r\le m}(2r-1)=m^2$
$P(m+1): \sum_{1\le r\le m+1}(2r-1)=\sum_{1\le r\le m}(2r-1)+2m+1=m^2+2m+1=(m+1)^2$
So, $P(m+1)$ will be true if $P(m)$ is true.
But we have already shown $P(n)$ is true for $n=1$
So, by induction we can prove that $P(n)$ is true for all positive integer $n$
$b):$
HINT:
We know, $(r+1)^3-r^3=3r^2+3r+1$
Putting $r=1,2,\cdots,n-1,n$ and adding we get $(n+1)^3-1=3\sum_{1\le r\le n}r^2+3\sum_{1\le r\le n}r+\sum_{1\le r\le n}1$
Now, you know $\sum_{1\le r\le n}r=\frac{n(n+1)}2$ and $\sum_{1\le r\le n}1=n$
On simplification $\sum_{1\le r\le n}r^2=\frac{n(n+1)(2n+1)}6$
Can you use similar induction approach here?
|
H: Prove that $\frac{100!}{50!\cdot2^{50}} \in \Bbb{Z}$
I'm trying to prove that :
$$\frac{100!}{50!\cdot2^{50}}$$
is an integer .
For the moment I did the following :
$$\frac{100!}{50!\cdot2^{50}} = \frac{51 \cdot 52 \cdots 99 \cdot 100}{2^{50}}$$
But it still doesn't quite work out .
Hints anyone ?
Thanks
AI: $$ \frac{(2n)!}{n! 2^{n}} = \frac{\prod\limits_{k=1}^{2n} k}{\prod\limits_{k=1}^{n} (2k)} = \prod_{k=1}^{n} (2k-1) \in \Bbb{Z}. $$
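A quick sanity check of both the divisibility and the product formula (a sketch using only the Python standard library):

```python
from math import factorial, prod

q, r = divmod(factorial(100), factorial(50) * 2**50)
print(r == 0)                       # True: the quotient is an integer
print(q == prod(range(1, 100, 2)))  # True: it equals 1*3*5*...*99
```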
|
H: diagonalize quadratic form
I have this quadratic form
$Q= x^2 + 4y^2 + 9z^2 + 4xy + 6xz+ 12yz$
And they ask me:
for which values of $x,y$ and $z$ is $Q=0$?
and I have to diagonalize also the quadratic form.
I calculated the eigenvalues: $k_{1}=0=k_{2}, k_{3}=14$,
and the eigenvector $v_{1}=(-2,1,0), v_{2}=(1,2,3), v_{3}=(3,6,-5)$
I don't know if this is usefull in order to diagonalize or to see when is $Q=0$
AI: You can also do that without ever seeing a matrix, by repeated square completions: that's called Lagrange reduction method. I am not saying that's the best way to answer such a question in general, although it is quite efficient in low dimensions. And here it can be done really fast: there is only one step.
$$
x^2 + 4y^2 + 9z^2 + 4xy + 6xz+ 12yz
$$
$$
=\underbrace{x^2+4x\left(y+\frac{3}{2}z\right)}_{\mbox{square to be completed}}+4y^2 +9z^2+12yz
$$
$$
=\left(x+2\left(y+\frac{3}{2}z\right)\right)^2-\left(2\left(y+\frac{3}{2}z\right)\right)^2+4y^2 +9z^2+12yz
$$
$$
=(x+2y+3z)^2.
$$
Conclusion: the quadratic form $Q$ is positive semidefinite with signature $(1,0)$. And it is zero on the hyperplane (=two-dimensional space)
$$
\{(x,y,z)\in\mathbb{R}^3\,;\,x+2y+3z=0\}.
$$
|
H: approximating $\frac{S^2}{\sigma^2}$
Let $Y_1,\ldots,Y_n$ be independent random variables from a normal distribution with expected value $\mu$ and variance $\sigma^2$ and let $S^2 = \dfrac{1}{n-1} \sum^n_{i=1} (Y_i-\bar{Y})^2$ be the sample variance. Use the central limit theorem to show that the distribution of $\dfrac{S^2}{\sigma^2}$ can be approximated by a normal distribution with expectation 1 and variance $\frac{2}{n-1}$. For a large value of $n$, suggest a distribution that can be used to approximate the distribution of $S^2$.
Now I have tried this exercise, but the explanation that our teacher gave us was just nonexistent. My first try was writing out $\frac{S^2}{\sigma^2}$, but that yielded something unwieldy; my second try was using $\frac{\frac{1}{(n-1)}S^2}{\sigma^2} \sim \chi^2(\frac{1}{n-1})$ (out of despair), but that doesn't really work because $\mu$ would be $\frac{1}{n-1}$; on the bright side, the $\sigma^2$ would be $\frac{2}{n-1}$ (yes, I know this isn't even possible, as it should be $n-1$ and not $\frac{1}{n-1}$).
Any help whatsoever? The answer was just: look at the slides (which of course I did).
AI: To answer my own question, and for others that may stumble upon this:
we have $Z =\frac{Y_i -\bar{Y}}{\sigma}$ and $Z^2 \sim \chi^2(1) = V$
$$
\dfrac{\sum^{n-1}_i Z^2}{n-1} = \bar{V}
$$
now we can do some nice tricks:
$$
\begin{align}
E[\bar{V}] &= \frac{1}{n-1}E\left[\sum^{n-1}_{i=1}V_i\right] = \frac{n-1}{n-1}E[V]=1 \\
\operatorname{Var}(\bar{V})&=\frac{n-1}{(n-1)^2}\operatorname{Var}(V) = \frac{2}{n-1}
\end{align}
$$
I don't know if I am allowed to answer my own question. I'm glad I finally understand it.
|
H: Revolution of a solid - mandatory disk method
I know I am doing something wrong. Anyways
$x = 2$
$x = 3$
$y = 16 - x^4$
$y = 0$
about the y axis
So about the y axis means I need everything in terms of y. Easy enough, that is just one term.
$$y = 16 - x^4$$
$$x = (y - 16)^\frac{1}{4}$$
Then I intgrate with respect to y.
$$\pi \int_2^3 (y - 16)^\frac{1}{4} dy$$
$$\pi * \frac{4}{5}(y-16)^\frac{5}{4}$$
I know that is correct but I can't calculate that without imaginary numbers. Where did I go wrong?
AI: I'm afraid your answer is not correct.
You have a sign error at the start: $y=16-x^4$ becomes $x=(16-y)^{1/4}$, not $x=(y-16)^{1/4}$. That explains why you came up against having to take roots of negative numbers.
Also, in your integral, $y$ needs to go from $16-3^4=-65$ to $0$, not from $2$ to $3$. Think about it backwards: you need to integrate over the region where $x=(16-y)^{1/4}$ goes from $2$ to $3$. If you integrate over $2\leq y\leq 3$, then what you are really doing is integrating over the region where $x$ goes from $(14)^{1/4}$ to $(13)^{1/4}$.
|
H: Determining whether these two groups are isomorphic
Consider the following group of matrices with multiplication:
$$I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\ A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \ B = \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}, \\ C = \begin{pmatrix} -1 & -1 \\ 0 & 1 \end{pmatrix}, \ D = \begin{pmatrix} -1 & -1 \\ 1 & 0 \end{pmatrix}, \ K = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}$$
I have to determine if this group is isomorphic to $S_3$. The two groups look pretty similar: they both divide into two halves: one is cyclic, and the other consists of elements that are their own inverse. I have been trying for a while to match up their group tables, but there's always something that doesn't fit.
Is there an isomorphism between these groups, and if there isn't, how can I prove it?
AI: @DanielFischer has given a very clear and satisfactory answer in his comment.
For an explicit one, consider that $A^2 = I$, $B^3 = I$, $A B A B = I$, so $A, B$ satisfy the standard relations that define $S_{3}$.
Note also
$$
B^2 = D, A B = C, A B^2 = K,
$$
so $\langle A, B \rangle = \{I, A, B, C, D, K \}$.
This also allows you to write down explicitly an isomorphism as $A \mapsto (12), B \mapsto (123)$.
|
H: Taking the derivative of an Integral
I would like to express the derivative of an integral in as elegant a form as possible. However, I am struggling at the moment. I would like to find the derivative $f'(y)$ of the function
$f(y) = \int_{h(y)}^{g(y)}u(x,y)\,\mathrm{d}x$
in terms of only the functions $g$, $h$ and $u$ which can be assumed to be sufficiently well behaved.
AI: Setting
$$
U(x,y)=\int_{a}^xu(t,y)\,dt
$$
for some $a$ such that $(x,y) \mapsto U(x,y)$ is well-defined,
we have
$$
f(y)=\int_a^{g(y)}u(t,y)\,dt-\int_a^{h(y)}u(t,y)\,dt=U(g(y),y)-U(h(y),y).
$$
It follows that
\begin{eqnarray}
f'(y)&=&g'(y)\partial_1U(g(y),y)+\partial_2U(g(y),y)-h'(y)\partial_1U(h(y),y)-\partial_2U(h(y),y)\\
&=&g'(y)u(g(y),y)-h'(y)u(h(y),y)+\partial_2U(g(y),y)-\partial_2U(h(y),y)\\
&=&g'(y)u(g(y),y)-h'(y)u(h(y),y)+\int_a^{g(y)}\frac{\partial u}{\partial y}(t,y)\,dt-\int_a^{h(y)}\frac{\partial u}{\partial y}(t,y)\,dt\\
&=&g'(y)u(g(y),y)-h'(y)u(h(y),y)+\int_{h(y)}^{g(y)}\frac{\partial u}{\partial y}(t,y)\,dt.
\end{eqnarray}
|
H: Going from the Poisson distribution to the Gaussian.
In this lecture, at about the $37$ minute mark, the professor explains how the binomial distribution, under certain circumstances, transforms into the Poisson distribution, then how as the mean value of the Poisson distr. increases, the devation from the mean behaves like a Gaussian. I'm having trouble with calculating this.
The pmf of the Poisson distr. is
$$p(n)=\frac{e^{-\lambda}\lambda^n}{n!}$$
If we define $x=\lambda-n$ to be the deviation from the mean and substitute into the pmf, we get
$$p(x)=\frac{e^{-\lambda}\lambda^{\lambda-x}}{(\lambda -x)!}$$
Using the Stirling formula, this becomes approximately
$$p(x) \approx \frac{e^{-\lambda}\lambda^{\lambda-x}}{(\lambda -x)^{\lambda -x} e^{-(\lambda -x)} \sqrt{2\pi (\lambda -x)} } = \frac{e^{-x}}{\sqrt{2\pi}} \frac{\lambda^{\lambda-x}}{(\lambda -x)^{\lambda -x}} (\lambda -x)^{-\frac{1}{2}}$$
$$=\frac{e^{-x}}{\sqrt{2\pi}} (\lambda -x)^{-\frac{1}{2}} (1-\frac{1}{\lambda /x})^{x-\lambda} = \frac{e^{-x}}{\sqrt{2\pi}} (\lambda -x)^{-\frac{1}{2}} ((1-\frac{1}{\lambda /x})^{\lambda /x})^{\frac{x^2-\lambda x}{\lambda}}$$
Now the rightmost term tends to $$e^{-\frac{x^2}{\lambda}}e^x$$
as $\lambda \to \infty$, which cancels the $e^{-x}$ in front. So this would be looking fairly Gaussian if not for the uncomfortable factor of $(\lambda -x)^{-\frac{1}{2}}$ which tends to zero, killing the whole thing!
Where am I making a mistake? Clearly, in the Stirling approximation, the square root factor isn't too relevant (order-wise), and if it's omitted then the problem goes away (or does it? the result isn't normalised!)... but surely this can be fixed?
AI: It is OK; just notice that the analysis is done with $x$ fixed, so for large $\lambda$ we have
$$ (\lambda - x ) \sim \lambda $$
And remember that the variance of a Poisson random variable is $\lambda$. So the resulting density is (almost) normal with mean zero and variance $\lambda$ which makes sense.
A more rigorous analysis can be made using the weak convergence machinery. If we consider
$$Y_\lambda = \frac{X - \lambda}{\sqrt{\lambda}}$$
then its characteristic function is
$$\phi_{Y_\lambda}(t) = \exp(-\sqrt{\lambda}it + \lambda(e^{\frac{it}{\sqrt{\lambda}}}-1))$$
For large $\lambda$
$$e^{\frac{it}{\sqrt{\lambda}}}-1 = \frac{it}{\sqrt{\lambda}} - \frac{t^2}{2\lambda} + o(\lambda^{-1})$$
And so in the limit
$$\lim_{\lambda \to \infty} \phi_{Y_\lambda}(t) = \exp(-t^2/2)$$
which is the characteristic function of a normal$(0,1)$. So for large $\lambda$, $Y_\lambda$ is close to a normal distribution or equivalently $X$ is normally distributed with mean $\lambda$ and variance $\lambda$
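To see this convergence concretely, here is a small numerical sketch (the value of $\lambda$ and the grid of $k$ values are arbitrary choices of mine); it compares the Poisson pmf near its mean with the normal density of mean $\lambda$ and variance $\lambda$:

```python
import math

lam = 400.0  # a "large" lambda

def poisson_pmf(k, lam):
    # computed on the log scale to avoid overflowing lam**k and k!
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

for k in (360, 380, 400, 420, 440):
    print(k, f"{poisson_pmf(k, lam):.6f}", f"{normal_pdf(k, lam, lam):.6f}")
```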
|
H: Trigonometric equations with more than one function
This is a general question about how to solve trigonometric equations which involve different functions. I have been multiplying and dividing the functions but have not been able to attain an expression with just one function. I'm encountering questions such as:
$$3\tan A - 2\cot A = 5 $$
and
$$6\sin A\cos A + 3\sin A = 2\cos A + 1$$
What is the method or process that I would use to change all the terms to like functions?
AI: Second equation can be rewritten as
$$6\sin{\alpha}\cos {\alpha} + 3\sin{\alpha} = 2\cos{\alpha} + 1,\\
3\sin{\alpha}(2\cos{\alpha} + 1)=2\cos{\alpha} + 1, \\
(3\sin{\alpha}-1)(2\cos{\alpha} + 1)=0.$$
Then
$$\sin{\alpha}=\dfrac{1}{3}$$
or $$\cos{\alpha} =-\dfrac{1}{2}. $$
|
H: Succeed with one multi-choice test out of five if answers are guessed
I'm taking a class in probability, and I have some issues understanding some concepts. Thing is, I am trying to calculate the probability that you can succeed with at least one multi-choice test out of five, if all the answers are guessed. In each test there are 18 questions, and 9 must be answered correctly for the test to be passed. Each question has 4 choices, so the chance of getting a question right is $1 \over 4$.
I figured that since 9 of the 18 questions had to be correctly answered for the test to succeed, the probability of succeeding with one test, P(success), would be
$P(success) = {1 \over {4^9}}$
for each test. The probability of failing one test, P(success*), would then be
$P(success*) = 1 - P(success) = 1 - {1 \over {4^9}}$
Thus, the probability to fail every test would be
$P(success*)^5 = (1- {1 \over {4^9}})^5$
Then, finally, the probability to succeed with at least one of the five tests, P(at least one success), would be
$P(at\ least\ one\ success) = 1-P(success*)^5 = 1-(1-{1 \over {4^9}})^5 \approx 0.000019$
This, however, is not even close to being correct.
I also tried using the number of combinations of 9 successes and 9 fails for P(success):
$P(success) = {18 \choose 9}({1 \over4})^9(1-{1 \over 4})^{18-9}$
Which then gives
$P(success*) = 1-P(success) = 1-({18 \choose 9}({1 \over4})^9(1-{1 \over 4})^{18-9})$
and thusly
$P(at\ least\ one\ success) = 1 - P(success*)^5 = 1 - (1-({18 \choose 9}({1 \over4})^9(1-{1 \over 4})^{18-9}))^5 \approx 0.0677$
This is closer, but still not correct. The answer provided by the book is just
$P(succeed\ with\ one\ test) = 1-P(fail\ every\ test) = 1 - 0.98065^5 \approx 0.094$
Since I know that $1- P(success*)^5$ is correct, I understand that it is $P(success*)$ that is wrong, but I don't know how and why it's wrong. So I would really appreciate any help with understanding this.
AI: Your first go at it, $P(success)=(1/4)^9$, is not correct. This would give the probability that you answered, e.g., the first $9$ questions of a particular test correctly. It does not take into account how you fared on the other $9$ questions.
If you wanted the probability that you answered the first $9$ correctly and the others incorrectly, it would be given by $(1/4)^9(3/4)^9$.
But there are
many ways in which you could answer exactly $9$ questions correctly. In fact, there are $18 \choose 9$ ways in which this can occur. So,
your second go at it,
$P(success) = {18 \choose 9}({1 \over4})^9(1-{1 \over 4})^{18-9}$, is closer. This correctly gives the probability that you answered exactly $9$ questions correctly.
But, this is still not correct. You pass the test if you answered at least $9$ questions correctly. So you need to find the probability that you answered exactly $9$, or exactly $10$, or ...., or exactly $18$ correctly. Then add those up.
The probability that you failed a particular test is the probability that you answered at most $8$ questions correctly. Calculating this would be a bit more direct:
$$
P(\text{failed a particular test})=\sum_{i=0}^8 P(\text{answered exactly }i\text{ questions correctly})=\sum_{i=0}^8 {18\choose i} (1/4)^i (3/4)^{18-i}.
$$
The above is just the cumulative distribution function of a binomial variable with $18$ trials and success probability $1/4$, evaluated at $8$. There are online calculators, such as this, that will compute it. From the link, we have $P(\text{failed a particular test})\approx .98065222$.
So, the probability that you passed at least one test is $1-P(\text{ failed all five})\approx 1-(.98065)^5$.
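If you want to reproduce these numbers yourself, a short sketch using only the Python standard library:

```python
from math import comb

p_fail_one = sum(comb(18, i) * 0.25**i * 0.75**(18 - i) for i in range(9))  # i = 0..8 correct answers
print(p_fail_one)         # ~0.98065
print(1 - p_fail_one**5)  # ~0.094: pass at least one of the five tests
```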
|
H: Show that two solid tori glued along their boundary tori by $\left(x,y\right)\mapsto\left(y,x\right)$ give a space homeomorphic to $\mathbb{S}^{3}$
The boundary tori $\mathbb{S}^{1}\times\mathbb{S}^{1}$ of two copies of the solid torus $\mathbb{S}^{1}\times D^{2}$ are identified by the map $\left(x,y\right)\mapsto\left(y,x\right)$. Show that the resulting quotient space is homeomorphic to $\mathbb{S}^3$.
AI: Hint: Consider the map from the disjoint union of two full tori to $\mathbb{C}^2$ given by
$$\begin{gather}
\Phi \colon \mathbb{S}^1\times D^2 \times \{1,\,2\} \to \mathbb{C}^2\\
\Phi (z,\,w,\,1) = (z,\,w)\\
\Phi (z,\,w,\,2) = (w,\,z)
\end{gather}$$
|
H: Arithmetic operations in ternary number system
In a ternary number system, how are the $4$ arithmetic operations defined?
AI: Strictly speaking, one should call it the ternary numeral system. What is different from base-10 is not the numbers, but the numerals.
Addition, subtraction, multiplication, and division are not defined within a numeral system; they are defined independently of numeral systems.
One could ask, however, how to do arithmetic within a base-3 numeral system. There is an addition table:
$$
\begin{array}{c|ccc}
 & 0 & 1 & 2 \\
\hline
0 & 0 & 1 & 2 \\
1 & 1 & 2 & 10 \\
2 & 2 & 10 & 11
\end{array}
$$
and there is a multiplication table:
$$
\begin{array}{c|cc}
& 1 & 2 \\
\hline 1 & 1 & 2 \\
2 & 2 & 11
\end{array}
$$
Arithmetic is done the same way as in base 10, but using these tables rather than the ones you learned at your mother's knee.
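If you want to experiment, here is a tiny script (a sketch; the helper `to_base3` is an ad-hoc converter I wrote for illustration, while `int(s, 3)` is the built-in base-3 parser):

```python
def to_base3(n):
    """Convert a non-negative integer to its base-3 numeral (as a string)."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, d = divmod(n, 3)
        digits.append(str(d))
    return "".join(reversed(digits))

a, b = "12", "22"            # 12_3 = 5, 22_3 = 8
x, y = int(a, 3), int(b, 3)  # int(s, 3) parses a base-3 numeral
print(to_base3(x + y))       # 111  (5 + 8 = 13)
print(to_base3(x * y))       # 1111 (5 * 8 = 40)
```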
There was a time, roughly from some point in the '60s until perhaps some time after 1990 or so, when the math courses required of those who were to become elementary school teachers taught numeral systems in various different bases, presumably because it was thought that this aided in understanding the theory behind the operations done in base 10. I think that was largely a mistake, but it could have been useful if they'd taken it a step further and asked students to do a bunch of arithmetic problems in base 8 or base 12, for this reason: It makes it look just as unfamiliar to them as it looks to elementary school pupils who are learning it for the first time. In other words, it shows them what arithmetic looks like to their pupils.
|
H: Why is the discrete metric said to be so important
Can anyone enlighten me as to why the discrete metric is considered to be important in mathematics? The only real use I can see of it is that it shows the existence of a metric on any non-empty set.
I wonder if there is something I'm missing: maybe it is used as a technique to prove certain types of theorems, or as a construction for other quantities (maybe characteristic functions), or in some particular application.
Thanks,
Matt
AI: I agree with the comments that it is always good to have that metric/topology in mind when first coming up with examples. On the other hand, a lot of spaces (think e.g. $\mathbb{N}$, $\mathbb{Z}$) are discrete in nature.
As far as using it to prove results, here's the first thing that came to my mind, which I remembered feeling was almost cheaty the first time I saw it:
If $X$ is a connected topological space and $f : X \to Y$ is a locally constant map from $X$ to any set $Y$, then $f$ is constant.
Proof. If we endow $Y$ with the discrete metric/topology, then $f$ is automatically continuous. It follows that $f(X)$ is connected, but connected components of $Y$ are points, so $f$ is constant.
|
H: Will this Trigonometric give the following answer?
If $n$ is an integer, can $$\cos[(2n-1)\pi/2]-\cos[(2n-1)\pi/4]$$
be equal to $\cos[(2n-1)\pi/4]$?
I have tried the formula for $\cos A-\cos B$, but that would give a result in terms of sines.
AI: Hint: $$\cos \left((2n-1)\frac{\pi}{2}\right)=\cos \left(\pi n-\frac{\pi}{2}\right)=0.$$
|
H: Why $\sum_{k=1}^n \frac{1}{2k+1}$ is not an integer?
Let $S=\sum_{k=1}^n \frac{1}{2k+1}$. How can we prove, with elementary reasoning, that $S$ is not an integer?
Can somebody help?
AI: Hint: Recall the (elementary) proof that $\sum_{i=1}^n \frac{1}{i}$ is not an integer (for $n>1$):
Let $2^k$ be the largest power of 2 that is smaller than or equal to $n$. Consider making a (smallest) common denominator of the form $2^k L$, where $L$ is odd. The numerator of every term will be even except for the term of $\frac{1}{2^k}$, which contributes an odd term $L$. Hence, the numerator is odd and the denominator is even. This cannot be an integer.
Hint: For the finite sum of odd reciprocals, show that when we make a (smallest) common denominator, the numerator is not a multiple of $3$ while the denominator is.
Solution: Let $3^k$ be the largest power of 3 that is smaller than or equal to $2n+1$ (the largest denominator). Consider making a common denominator of the form $3^k L$, where $L$ is not a multiple of 3. The numerator of every term will be a multiple of 3, except for terms of the form $\frac{ 1}{ a3^k}$ for some integer $a$.
Use the fact that $a=2$ does not appear, since the denominators are odd. Also, by the maximality of $3^k$, no $a\ge 3$ can appear (that would force $3^{k+1}\le a\,3^k\le 2n+1$). Hence, there is only 1 term which contributes a non-multiple of 3. Thus, the numerator is not a multiple of 3.
|
H: Will the feasible region always be convex in linear programming?
In linear programming we find a feasible region; is this region always convex? If a concave region were found and the objective is minimization, I think a solution would still exist.
Thanks in advance.
Someone deleted the answer to my previous post; although I wasn't sure about the answer to the first question, no one is replying in that post, and as I am in a hurry I re-posted it to get an answer. Thanks.
AI: Because the constraints in a linear program are linear, they will always produce a convex body. With finitely many constraints, it will in fact be a convex polytope. If you want a feasible region to be concave (or any other shape for that matter), you'll have to look to nonlinear constraint functions.
|
H: Method of moments and maximum likelihood
I have the following function: $f_X(x, \theta) = \left\{
\begin{array}{lr}
\theta/3 & : x = -1 \\
\theta/3 & : x = 0 \\
1-2\theta/3 &: x= 1
\end{array}
\right.$
What is the method of moments of $\theta$?
Here's my attempt:
1º Method of moments - Solve $E(X) = \overline{X_n}$.
$E(X) =\displaystyle\sum_{i=-1}^{1}xp_X(x) = -1(\theta/3)+0(\theta/3) + 1(1-2\theta/3) = 1 - \theta$. Then $1-\theta = \overline{X_n}$ and $\hat{\theta} = 1 - \overline{X}$
2º Maximum likelihood:
$L(\theta) = p(-1)p(0)p(1) = \displaystyle\frac{\theta^2}{3^2}\bigg(1-\displaystyle\frac{2\theta}{3}\bigg)$ and $\displaystyle\frac{dL}{d\theta} = \displaystyle\frac{d}{d\theta}\bigg(\displaystyle\frac{\theta^2}{3^2} - 2\displaystyle\frac{\theta^3}{3^3}\bigg) = \displaystyle\frac{2}{9}\theta \bigg(1-\theta \bigg)$.
Then $\hat{\theta} = 1$ because $L''(1)<0$, and it can't be $\hat{\theta} = 0$ because $L''(0)>0$.
Did it go well?
AI: The method of moments is allright. To apply the maximum likelihood method, note that the likelihood is $L(\theta)=\left(\frac\theta3\right)^{n_{-1}}\left(\frac\theta3\right)^{n_{0}}\left(1-2\frac\theta3\right)^{n_{1}}$, where $n_i$ denotes the number of times the sample $(x_k)_{1\leqslant k\leqslant n}$ contains the value $i$. Thus, $L(\theta)=\left(\frac\theta3\right)^{n-n_{1}}\left(1-2\frac\theta3\right)^{n_{1}}$, which implies that $\log L(\theta)=$ $________$, hence $\frac{\mathrm d}{\mathrm d\theta}\log L(\theta)=$ $________$, which is zero when $\theta=$ $____$, and finally $\hat\theta=$ $____$.
|
H: Proving by induction: $2^n > n^3 $ for any natural number $n > 9$
I need to prove that $$ 2^n > n^3\quad \forall n\in \mathbb N, \;n>9.$$
Now that is actually very easy if we prove it for real numbers using calculus. But I need a proof that uses mathematical induction.
I tried the problem for a long time, but got stuck at one step - I have to prove that:
$$ k^3 > 3k^2 + 3k + 1 $$
Hints???
AI: For your "subproof":
Try proof by induction (another induction!) for $k \geq 7$
$$k^3 > 3k^2 + 3k + 1$$
And you may find it useful to note that $k\leq k^2, 1\leq k^2$
$$3k^2 + 3k + 1 \leq 3(k^2) + 3(k^2) + 1(k^2) = 7k^2 \leq k^3 \quad\text{when}??$$
|
H: Finding the coordinates of the point on the graph of $f(x) = (x+1)(x+2)$
I'm trying to find the coordinates of the point on the graph of $f(x) = (x+1)(x+2)$ at which the tangent is parallel to the line with the equation $3x - y - 1 = 0$.
How do I find the coordinates?
What I've tried is:
Since it is parallel, I know the slope is $3$.
So I found the derivative of $f(x) = (x+1)(x+2)$ and set it equal to $3$.
So $2x + 3 = 3$, which gives $2x = 0$.
So, I got the $x$ coordinate as $0$, and when I substitute $0$ into the original equation I get $2$.
So are the coordinates $(0,2)$ right? Or am I doing something wrong?
AI: Nice work! You are correct, your reasoning, and your approach to the problem are "spot on."
(Thanks for showing your work. It helps us check things over. More important than the "final answer" are the reasoning and the methods used to get it.)
|
H: Algorithm for finding the longest path in a undirected weighted tree (positive weights)
I'm trying to find, in an undirected weighted tree with only positive weights, the longest path (the diameter of the tree, I'm told?). I know the most common algorithm is one where you pick a random node $x$, use DFS from that node to find the longest path, which ends with node $y$, and once again from $y$ using DFS find the longest path, which ends in $z$. $(y,z)$ should then be the longest path.
However, I was wondering if this other algorithm I thought of would work (and with the same complexity of $O(|V|)$):
Find the longest edge in graph $e$
$e$ joins 2 nodes $a$ and $b$. Considering both $a$ and $b$ to be root of their respective subtrees, run DFS on both $a$ and $b$ and find the longest path $A$ and $B$ which end with the node $a_{end}$ and $b_{end}$ respectively.
$(a_{end}, b_{end})$ is the longest path.
My reasoning was that the longest path of the graph has to contain the edge with the heaviest weight. Is that a correct assumption? Also, the algorithm is still of complexity $O(|V|)$ since step 1 is $O(|V|)$, step 2 for DFS on $a$ is $O(|V|)$ and DFS on $b$ is $O(|V|)$. That probably isn't so efficient compared to the standard algorithm but it should adhere to the said requirements I guess?
AI: Your assumption is false, this is a counterexample:
Check that the longest path (all black edges) has weight $6$, while any path containing the red edge has weight at most $5$.
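In case the figure does not display, here is a concrete tree of the same flavour together with a brute-force check (a sketch; the vertex labels and weights are my own choice, they just reproduce the numbers $6$ and $5$ quoted above): take a path $v_1,v_2,\dots,v_7$ whose six edges all have weight $1$, and attach one extra leaf $u$ to the middle vertex $v_4$ by an edge of weight $2$. That edge is the unique heaviest edge, yet every path through it has weight at most $3+2=5$, while the path $v_1\cdots v_7$ has weight $6$.

```python
from itertools import combinations

# path v1-...-v7 with unit-weight edges, plus the heaviest edge (v4, u) of weight 2
edges = {(i, i + 1): 1 for i in range(1, 7)}
edges[(4, 8)] = 2  # vertex 8 plays the role of the extra leaf u

adj = {v: {} for v in range(1, 9)}
for (a, b), w in edges.items():
    adj[a][b] = w
    adj[b][a] = w

def path_weight(s, t):
    # in a tree there is a unique simple path between s and t; find it by DFS
    def dfs(v, seen, acc):
        if v == t:
            return acc
        for nb, w in adj[v].items():
            if nb not in seen:
                r = dfs(nb, seen | {nb}, acc + w)
                if r is not None:
                    return r
        return None
    return dfs(s, {s}, 0)

diameter = max(path_weight(s, t) for s, t in combinations(range(1, 9), 2))
through_heaviest = max(path_weight(s, 8) for s in range(1, 8))  # paths using (4,8) end at leaf 8
print(diameter, through_heaviest)  # 6 5 -> the heaviest edge lies on no longest path
```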
|
H: Selecting a committee of $5$ out of $20$ with conditions
A company has $20$ employees, $12$ males and $8$ females. How many ways are there to form a committee of $5$ employees that contain at least one male and at least one female?
This is what I got: $12\times19\times18\times17\times16-12\times11\times10\times9\times8-8\times7\times6\times5\times4$
Is this correct?
AI: The easiest way is to count all the committees, without regard to the restrictions on sex. Can you do that? Then subtract the all-female and all-male committees. Can you count those?
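If you want to verify your count once you have tried it yourself, here is a short sketch:

```python
from math import comb

total      = comb(20, 5)  # all committees of 5 from 20 employees
all_male   = comb(12, 5)  # committees with no females
all_female = comb(8, 5)   # committees with no males
print(total - all_male - all_female)  # 15504 - 792 - 56 = 14656
```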
|
H: Prove by induction: $\forall n\in\mathbb{Z}_{\geq1}:3\ |\ (6n^2-12n+3)$
I'm not sure how to start this induction problem.
I was told that we start doing induction by using a base case $n=1$. Then we set $n=k$ to prove $n=k+1$. But how do I prove that $6(k+1)^2-12(k+1)+3$ is divisible by $3$ if $6k^2-12k+3$ is?
I'm confused, can someone give me some help? Thanks.
AI: Hint: If you have to use induction, you have the correct idea. You assume $3\mid6k^2-12k+3$ and want to prove $3\mid6(k+1)^2-12(k+1)+3$. You make use of your assumption by finding it in the new expression. So expand out the $(k+1)$'s and group the terms together to use the assumption. Now you look at what is left and show that $3$ divides it, too.
|
H: Combinatorics question about Taking Days Off
A cashier wants to work five days a week, but he wants to have at least one of Saturday and Sunday off. In how many ways can he choose the days he will work?
So, in this case, what should I count first? How do I start? I know how to solve this if the cashier doesn't want a weekend off, but what do I do if the cashier does?
Thanks!
AI: You can find the answer by subtracting the number of ways that don't work from the total possible number of ways.
First, figure out the total number of ways to choose five days to work out of the seven days of the week, which is ${7\choose5}=21.$
Then, count the ones that would include working on both Saturday and Sunday, which is ${5\choose3}=10.$
$\therefore$ There's $21-10=\boxed{11}$ ways.
|
H: Volume of a solid of revolution: $y = x^3$, $y = x^{1/3}$, $x \geq 0$ rotated about $y$-axis
I am trying to find the volume:
Rotate about $y$.
$$y = x^3,\quad y = x^{1/3},\quad x \geq 0$$
Simple enough.
$x = y^3 \implies x = y^{\frac{1}{3}}$
$$\pi \cdot \int_0^1 y^{(1/3)^2} - y^{3^2}dy$$
$$\pi\cdot \left(\frac{3}{5} - \frac{1}{7}\right)$$
Of course that is wrong, why?
AI: Edit: Since you're using the disk method, your solution is correct, given the problem as stated. But be careful with notation: note the difference between the squaring of the functions as shown below. You evaluated as it should be evaluated, so the result is correct.
$$\pi \cdot \int_0^1 \left(y^{1/3}\right)^2 - \left(y^{3}\right)^2\,dy = \pi \int_0^1 y^{2/3} - y^6\,dy$$
Which, integrating and evaluating, gives you
$$\pi\cdot \left(\frac{3}{5} - \frac{1}{7}\right) = \frac{16\pi}{35}$$
|
H: discretize a function using $z$-transform
I would like to discretize the following continuous function using $z$-transform:
$$G(s)=\frac{s+1}{s^2+s+1}$$
The process I am using is to take the inverse Laplace transform of $\frac{G(s)}{s}$ and then take the z-transform of it. Finally I multiply the result by $1-z^{-1}$ to simulate a zero-order hold.
The results I am obtaining are the following ones:
$$\mathcal{L}^{-1} \left\{\frac{G(s)}{s}\right\}= F(t)=e^{-0.5t}\left[\cos{\frac{\sqrt{3}t}{2}}+\frac{1}{{\sqrt{3}}} \sin{ \frac{\sqrt{3}t}{2}}\right]$$
$$Z\{F(t)\} = \frac{1-e^{-0.5t}z^{-1}\cos{\frac{\sqrt{3}T}{2}}}{1-2e^{-0.5t}z^{-1}\cos{\frac{\sqrt{3}T}{2}}+e^{-T}z^{-2}} + \frac{1-e^{-0.5t}z^{-1}\sin{\frac{\sqrt{3}T}{2}}}{1-2e^{-0.5t}z^{-1}\cos{\frac{\sqrt{3}T}{2}}+e^{-T}z^{-2}} $$
Defining $\alpha=e^{-0.5T}\cos \frac{\sqrt{3}T}{2}$, $\beta=\frac{e^{-0.5T}}{\sqrt{3}}\sin \frac{\sqrt{3}T}{2}$, and writing it as a function of $z$ instead of $z^{-1}$, we have:
$$Z\{F(t)\} =\frac{ z^2+z(\alpha+\beta)}{z^2-2 \alpha z + e^{-T}}$$
Multiplying it by $ 1-z^{-1} $ we finally get:
$$\frac{z^2+z(1-\alpha- \beta)+ \alpha - \beta}{z^2-2 \alpha z + e^{-T}}$$
I already redone the calculation a number of times and I couldn't find any errors. However I know it is wrong because it is not matching the correct response to an impulse when I plot it in MATLAB.
I hope that it was clear enough. Sorry if I made any theoretical mistakes trying to explain the process, I am just a beginner at this subject.
EDIT: After taking the correct Laplace Transform presented below, the answer takes the form of:
$$\frac{z(\beta - \alpha + 1)-\alpha - \beta + e^{-T}}{z^2-2 \alpha z + e^{-T}}$$
If anyone is interested, here are the plots I got from MATLAB corroborating the analysis:
The line is the step response of the continuous function, the dotted one is obtained via the c2d function in MATLAB and it is my benchmark. The points are the ones I get via my analysis showed at the top. As you can see everything fits perfectly. Thank you again for your support.
AI: Here is the inverse Laplace transform
$$F(t) = 1+ \frac{{{\rm e}^{-\frac{t}{2}}}}{3}\, \left( \sqrt {3}\sin \left( \frac{\sqrt {3}t}{2} \right) -3\,\cos
\left( \frac{\sqrt {3}t}{2} \right) \right) .$$
Added: The $z$-transform of $F(t)$ is given by
$$ {\frac {-z \left( -3\,z{{\rm e}^{1/2}}-3\,{{\rm e}^{1/2}} \right) \cos
\left(\frac{\sqrt {3}}{2} \right) -z \left( \sqrt {3}{{\rm e}^{1/2}}z-
\sqrt {3}{{\rm e}^{1/2}} \right) \sin\left( \frac{\sqrt {3}}{2} \right) -z
\left( 3\,{{\rm e}^{1}}z+3 \right) }{6\,{{\rm e}^{1/2}}\cos \left(
\frac{\sqrt {3}}{2} \right) {z}^{2}-3\,{{\rm e}^{1}}{z}^{3}-6\,z{{\rm e}^{1/2
}}\cos\left(\frac{\sqrt {3}}{2} \right) +3\,{z}^{2}{{\rm e}^{1}}-3\,z+3}}$$
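For anyone who wants to re-check the final expression numerically without MATLAB, here is a sketch in Python (this assumes SciPy is available; `scipy.signal.cont2discrete` with `method='zoh'` performs the zero-order-hold discretization discussed above, and the sample time $T$ below is an arbitrary choice of mine). The hand-derived arrays correspond to the corrected expression in the question's EDIT.

```python
import numpy as np
from scipy.signal import cont2discrete

T = 0.5  # an arbitrary sample time for the check
numd, dend, _ = cont2discrete(([1, 1], [1, 1, 1]), T, method='zoh')  # G(s) = (s+1)/(s^2+s+1)

alpha = np.exp(-0.5 * T) * np.cos(np.sqrt(3) * T / 2)
beta = np.exp(-0.5 * T) * np.sin(np.sqrt(3) * T / 2) / np.sqrt(3)
num_hand = [1 - alpha + beta, np.exp(-T) - alpha - beta]  # z(beta - alpha + 1) - alpha - beta + e^{-T}
den_hand = [1, -2 * alpha, np.exp(-T)]                    # z^2 - 2 alpha z + e^{-T}

print(np.ravel(numd), num_hand)  # should agree up to a leading coefficient that is numerically ~0
print(dend, den_hand)
```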
|
H: Need help with non-recursive definition
So I'm trying to find a non-recursive definition for $b_n$. I'm given $$b_0=1$$ $$b_{n+1}=2b_n-1$$
Does this mean I'm trying to find a number for $b_n$ that fits that algorithm?
Update:
Proof by induction. Let P(n) be that for any $n$, $b_n=1$.
As our base case, we prove P(0), that $b_0=1$, which is obvious from the problem.
For our inductive step, assume for some n ∈ N that P(n) holds. We prove that P(n+1) holds.
$$b_{n+1}=2b_n-1$$
$$b_n=(b_{n+1}+1)/2$$
I'm not sure what to do next here.
AI: Hint:
$$\begin{align*}
b_0 & = &1\\
b_1 & = 2b_0 -1 = (2\cdot 1)-1=&1\\
b_2 & = 2b_1 -1 = (2\cdot 1)-1=&1\\
&\,\vdots& \vdots\,
\end{align*}$$
|
H: mathematical maturity
So, I finished my undergrad with a degree in applied mathematics, but when reading some graduate level texts and/or papers, I often find myself struggling. I eventually get there, but I often feel like I lack the intuition necessary to be able to come up with concepts on my own. I feel like I'm just missing some pivotal step in the journey to mathematical maturity.
Does anyone have any books/references/advice?
PS:
If this means anything at all, as I was not a mathematics major, I did not take analysis or abstract algebra.
AI: I thought I was hopeless at applied maths when I graduated from my undergrad. But after 3 years of graduate research, I still won't say I am good at it, but at least I can actually read papers in reasonable time now.
If you keep doing it, you will get better as you acquire more practice and experience. E.g. 1 year ago I was completely hopeless at cooking, but after stubbornly trying to cook over and over again, I can actually produce some tasty dishes. Thinking back, I wonder why I was ever stuck in these two activities. Same pattern happens for all my other hobbies.
In general, I have three broad pieces of advice:
1) Consistency
Keep doing it and do it often. Being good at something is about doing it a lot over a long period of time.
2) Repetition
Keep repeating the same thing. E.g. I used to be really bad at baking scones. Every time I failed, I would try to figure out why I failed and experiment with fixes. After 7 batches of sad looking squashed scones, I fixed all my mistakes and am now able to consistently bake nice looking scones!
3) Don't Dwell on Specific Details
In my humble opinion, the biggest thing that has been holding me back from all my activities (mathematics or otherwise) was my inability to move ahead. I tend to get stuck trying to solve specific problems.
I find that it is much better to move on and make as much progress as possible, then come back to the stuck part later to try it again. If I am unable to resolve it, I would move on again and then come back later. E.g. if you get stuck at a part of the paper, it might help to move on and read the rest of the paper. Or even put this paper aside and read another one.
A mathematics-specific piece of advice: it helps a great deal if you have supervision and/or feedback from a professor.
|
H: Find the derivative of $y = f(x^2 - 2x + 7)$ where $f'(10) = 2$
Determine the derivative if $y = f(x^2 - 2x + 7)$ and $f'(10) = 2$
OK, so honestly, I don't know how to solve this, or even where to start. All I know is that we are given that $f'(10) = 2$. But what is $f(x^2 - 2x +7)$ supposed to mean?
AI: Suppose $y=f(z)$, where $f$ can be anything. Then we put $z=x^2-2x+7$ and get a new function $g(x)=f(x^2-2x+7)$. Some examples:
$$f(z)=z; f(x^2-2x+7)=x^2-2x+7=g(x)$$
$$f(z)=z^2+3; f(x^2-2x+7)=(x^2-2x+7)^2+3=g(x)$$
$$f(z)=\sin (z); f(x^2-2x+7)=\sin (x^2-2x+7)=g(x)$$
Now we want to see what we know about the derivative of $f(x^2-2x+7)$ when it could take any one of a myriad of forms. We use the chain rule - the derivative of $p(q(x))$ is $q'(x)p'(q(x))$ with $p=f$ and $q(x)=x^2-2x+7$ to obtain the derivative $$(2x-2)f'(x^2-2x+7)$$
Now we only know $f'(10)$, so we can only deal with $$x^2-2x+7=10$$ which reduces to $$(x-3)(x+1)=0$$
So $x=3$, or $x=-1$
With $x=3$ we get $4\times 2 =8$.
With $x=-1$ we get $-4 \times 2 = -8$.
Because the form of $f$ is unspecified, we can't say more than this; we can't even say whether the function is differentiable at other points.
|
H: If three cards are selected at random without replacement. What is the probability that all three are Kings?
This is a two-part question:
$1$: If three cards are selected at random without replacement. What is the probability that all three are Kings? In a deck of $52$ cards.
$2$: Can you please explain to me in layman's terms what the difference is between with and without replacement.
Thanks guys!
AI: Without Replacement: You shuffle the deck thoroughly, take out three cards. For this particular problem, the question is "What is the probability these cards are all Kings."
With Replacement: Shuffle the deck, pick out one card, record what you got. Then put it back in the deck, shuffle, pick out one card, record what you got. Then put it back in the deck, pick out one card, record what you got. One might then ask for the probability that all three recorded cards were Kings. In the with replacement situation, it is possible, for example, to get the $\spadesuit$ King, or the $\diamondsuit$ Jack more than once.
For solving the "without replacement" problem, here are a couple of ways. There are $\binom{52}{3}$ equally likely ways to choose $3$ cards. There are $\binom{4}{3}$ ways to choose $3$ Kings. So our probability is $\binom{4}{3}/\binom{52}{3}$.
Or else imagine taking out the cards one at a time. The probability the first card taken out was a King is $\frac{4}{52}$. Given that the first card taken out was a King, the probability the second one was is $\frac{3}{51}$, since there are $51$ cards left of which $3$ are Kings. So the probability the first two cards were Kings is $\frac{4}{52}\cdot\frac{3}{51}$. Given that the first two were Kings, the probability the third is is $\frac{2}{50}$. So the desired probability is $\frac{4}{52}\cdot\frac{3}{51}\cdot \frac{2}{50}$.
Remark: We could solve the same three Kings problem under the "with replacement" condition. (You were not asked to do that,) The second approach we took above yields the answer $\left(\frac{4}{52}\right)^3$. Since we are replacing the card each time and shuffling, the probability of what the "next" card is is not changed by the knowledge that the first card was a King.
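All three numbers can be checked with exact rational arithmetic (a sketch using only the Python standard library):

```python
from fractions import Fraction
from math import comb

without_replacement = Fraction(comb(4, 3), comb(52, 3))
step_by_step        = Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50)
with_replacement    = Fraction(4, 52) ** 3

print(without_replacement, step_by_step, without_replacement == step_by_step)  # 1/5525, 1/5525, True
print(with_replacement)                                                        # 1/2197
```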
|
H: What is the difference between exponential symbol $a^x$ and $e^x$ in mathematics symbols?
I want to know the difference between the exponential expressions $a^x$ and $e^x$ in mathematical notation; please give me some examples of both of them.
I asked this question because the derivative rules table below contains both $a^x$ and $e^x$, and I don't know when I should use one of them and when I should use the other.
Derivative rules table:
[Derivative rules table source]
AI: The letter $e$ denotes one specific number, Euler's number $e\approx 2.71828$, whereas the letter $a$ can be any positive real number.
Because $\ln(e)=1$ (essentially by definition), the rule
$$\frac{d}{dx}(e^x)=e^x$$
is consistent with, and indeed a special case of, the rule
$$\frac{d}{dx}(a^x)=a^x\ln(a)$$
There is no "choosing when to use one or use the other".
|
H: Bounds on a mapping from unit disc to left half plane
I've recently started studying for qualifying exams and have been having trouble with the following question: Let $f$ be a nonconstant analytic function on $\mathbb{D}$ such that $f (\mathbb{D}) \subset \{z \in \mathbb{C}: Re(z)<0\}$ with $f(0)=-1$. Prove that for any $z \in \mathbb{D}$
$$
\frac{1-|z|}{1+|z|} \leq |f(z)| \leq \frac{1+|z|}{1-|z|}
$$
I have been able to show that I only need to prove one side of the inequality since 1/f(z) also satisfies the hypothesis, but I haven't been able to figure out anything meaningful beyond that. Any help or hints would be greatly appreciated.
AI: Hint: Compose with a fractional linear transformation taking the left half plane to the unit disk.
|
H: Splitting the electricity bill
This is a similar question, but not quite - click
Problem:
3 people stay at a flat and they need to divide the electricity bill fairly. They took one meter reading when they moved in and 1 meter reading $k$ weeks later. For those $k$ weeks, person $A$ has been in the flat for $x$ weeks, person $B$ has been there for $y$ weeks, and person $C$ has been there for $z$ weeks, $0\leq x,y,z
\leq k$.
Assumptions:
Every person used the same amount when they were in the flat. Also the people stay an integer amount of weeks.
Question:
How to divide the bill in a fair way?
AI: Person A should pay the fraction $x/(x+y+z)$ of the bill, and similarly for persons B,C with $y,z$ replacing the numerator $x$ of person A's fraction. It has nothing to do with $k$ because presumably they have to split the bill no matter how many weeks they were there.
(This means e.g. if $(x,y,z)=(1,2,3)$ the proportions are $1/6,2/6,3/6$ respectively, even if they were there 6 weeks, or 7 weeks, or any integer number of weeks more. If the largest of $x,y,z$ is less than $k$ I guess the flat was empty during that time, but fees would still accumulate as they usually do, and at the end they would still have to split the total fairly.)
|
H: $\pi_0$ in the long exact sequence of a fibration and quaternionic projective space
I am doing a past paper for an introductory course in algebraic topology. The question is
Calculate the homology of the quaternionic projective space. What can you say about its homotopy groups?
I figured that we have a CW decomposition with one cell in every $4k$-th dimension. All boundary maps are 0, so that
$$H^{4k}(\mathbb HP^n,\mathbb Z)=\mathbb Z,\quad 0\le k\le n\\H^i(\mathbb HP^n,\mathbb Z)=0\quad\text{otherwise}$$
In particular, if the fundamental group is Abelian, it must be trivial.
For the homotopy groups, there is a fibration
$$\mathrm{Sp}(1)\to S^{4n+3}\to\mathbb HP^n$$
so that $\mathbb HP^n$ has higher homotopy groups isomorphic to those of $S^{4n+3}$.
But the long exact sequence of the fibration ends in
$$\cdots\to\pi_1S^{4n+3}\to\pi_1\mathbb HP^n\to\pi_0\mathrm{Sp}(1)\to\pi_0S^{4n+3}\to\pi_0\mathbb HP^n\to 0$$
But $\mathrm{Sp}(1)$ has two path-components, so this seems to suggest that $\pi_1\mathbb HP^n=\mathbb Z_2$, which contradicts the fact that $H_1(\mathbb HP^n,\mathbb Z)=0$. Where have I gone wrong?
AI: Sp(1) doesn't have two path components. It's just the unit quaternions, which are $S^3$.
|
H: I know what I need to do but dont know how to apply: the question related to The first order approximation theorem
$\mathbf{Question:}$
Prove that
$\displaystyle \lim_{(x,y)\to (0,0)} \dfrac{\sin(2x+2y)-2x-2y}{\sqrt{x^{2}+y^{2}}}=0$
$\mathbf{My\ ideas:}$
I will use the First Order Approximation Theorem.
But how can I apply this?
Please show me clearly and understandably, because this is my first example related to the theorem. I want to learn. Thank you :)
AI: Hint: the gradient $\nabla f(0,0)$ of $f(x,y)=\sin(2x+2y)$ at $(0,0)$ is the vector
$$\nabla f(0,0)=(2,2), $$
as $\frac{\partial f}{\partial x}=\frac{\partial f}{\partial y}=2\cos(2x+2y)$, which equals $2$ at the origin.
$f$ is differentiable at $(0,0)$ (which implies $f$ is continuous at $(0,0)$) if and only if
$$\lim_{(x,y)\rightarrow (0,0)} \frac{f(x,y)-f(0,0)-\langle \nabla f(0,0),(x,y)\rangle}{\sqrt{x^2+y^2}}=0, $$
i.e.
$$\lim_{(x,y)\rightarrow (0,0)} \frac{f(x,y)-f(0,0)-\langle(2,2),(x,y)\rangle}{\sqrt{x^2+y^2}}=0, $$
or
$$\lim_{(x,y)\rightarrow (0,0)} \frac{\sin(2x+2y)-2x-2y}{\sqrt{x^2+y^2}}=0, $$
which is the limit you consider. I used the fact that
$$f(0,0)=0,$$
$$\langle \nabla f(0,0),(x,y)\rangle=2x+2y,$$
$$\|(x,y)\|=\sqrt{x^2+y^2}. $$
To compute such a limit, consider it along the $x$-axis, for all points $(x,0)$ going to $(0,0)$, or the $y$-axis (all points $(0,y)$ going to $(0,0)$). Can you compute it in these 2 special cases?
|
H: Why is matrix multiplication defined a certain way?
Why is it that when multiplying a (1x3) by (3x1) matrix, you get a (1x1) matrix, but when multiplying a (3x1) matrix by a (1x3) matrix, you get a (3x3) matrix? Why is matrix multiplication defined this way?
Why can't a (1x3) by (3x1) yield a (3x3), or a (3x1) by (1x3) yield a (1x1)? I really would like to get to the root of this problem or 'axiomatization'. Thanks.
AI: The idea is that a matrix represents a linear map of finite-dimensional vector spaces. A (3x1) matrix "is" a linear map $\Bbb{R} \to \Bbb{R}^3$, and so on...
Multiplying matrices amounts to composing these functions. The rules of matrix multiplication you ask about are the classical rules of function composition (if $f:E \to F$ and $g:F\to G$ then $g\circ f : E \to G$).
Long story short, you need to study the relationship between matrices and linear maps.
|
H: Evaluate $\cos 18^\circ$ without using the calculator
I only know $30^\circ$, $45^\circ$, $60^\circ$, $90^\circ$, $180^\circ$, $270^\circ$, and $360^\circ$ as standard angles but how can I prove that
$$\cos 18^\circ=\frac{1}{4}\sqrt{10+2\sqrt{5}}$$
AI: Consider the isosceles triangle pictured below:
By considering similar triangles, one has ${x\over 1}={1\over x-1}$. From this, it follows that $x={1+\sqrt 5\over 2}$. Drop a perpendicular from the top vertex to the base below. One sees that $\sin(18^\circ)={1\over 1+\sqrt 5}={\sqrt 5-1\over 4}$. Now use the Pythagorean Identity to find $\cos(18^\circ)$.
|
H: Changing from x to y $x = y(4-y)$
$$x = y(4-y)$$
I am guessing I need some pretty advanced math to solve this for y. I am trying to use the shell method and I have to use opposite terms of the rotation axis so I am rotating around y so I need x variables.
I have a whole sheet of paper trying to solve this, is there any easy way?
AI: \begin{align}
y^2 - 4y & = -x. \\[8pt]
y^2 - 4y + 4 & = -x + 4 \qquad = 4-x. \\[8pt]
(y-2)^2 & = 4-x. \\[8pt]
y-2 & = \pm\sqrt{4-x} \\[8pt]
y & = 2\pm\sqrt{4-x}.
\end{align}
This leaves a question: How did we know that $4$ is what had to be added to both sides to get a square? Google the term "completing the square". Then bring any questions back here.
|
H: Why is the Jacobi symbol $(D/m) = (D/n)$ for certain $m,n,D$?
If $m \equiv n \pmod D$, $m,n >0$ and odd, and $D \equiv 0,1 \pmod 4$, then $(D/m) = (D/n)$.
I am sure that one can show this using quadratic reciprocity and the supplements. Any ideas?
AI: If $D\equiv 1\pmod 4$, then
$$ \left(\frac Dn\right)=(-1)^{\frac{D-1}2\frac{n-1}2}\left(\frac nD\right)=\left(\frac nD\right),$$
so the claim follws from $\left(\frac nD\right)=\left(\frac mD\right)$ (because $n\equiv m\pmod D$).
If $D=2^kE$ with $E$ odd $>1$, $k\ge 2$, then
$$\begin{align} \left(\frac Dn\right)&=\left(\frac 2n\right)^k(-1)^{\frac{E-1}2\frac{n-1}2}\left(\frac nE\right)\\&=\left(\frac 2n\right)^k\left((-1)^{\frac{n-1}2}\right)^{\frac{E-1}2}\left(\frac nE\right).\end{align}$$
We can replace $n$ with $m$ in all places in the last line: From $n\equiv m\pmod D$ we have $n\equiv m\pmod E$, hence $\left(\frac nE\right)=\left(\frac mE\right)$. From $4|D$ we have $n\equiv m\pmod 4$ and hence $\frac{n-1}{2}\equiv \frac{m-1}2\pmod 2$. If $k=2$, then $\left(\frac 2n\right)^k$ and $\left(\frac 2m\right)^k$ both equal $(\pm1)^2=1$; and if $k\ge 3$ then $n\equiv m\pmod 8$, hence $\left(\frac 2n\right)=\left(\frac 2m\right)$. Altogether this shows $\left(\frac Dn\right)=\left(\frac Dm\right)$.
This also solves the case $D=2^k$, i.e. $E=1$.
|
H: Is there a simple way to state continuity for $I$-adic topology?
Let $R$ be a commutative ring with the $I$-adic topology defined by an ideal $I$, and let $S$ be a commutative ring with the $J$-adic topology for an ideal $J$. How would you translate saying that a homomorphism $f:R\to S$ is continuous? I am guessing it might be enough to say that for all $n>0$, there is an $m>0$ such that $f^{-1}(J^n)$ contains $I^m$. Is this right?
AI: A homomorphism is continuous iff it is continuous at $0$, and your condition is precisely the usual way to state the fact that a function is continuous at a point in terms of bases of neighborhoods at that point and at its image.
|
H: How to calculate $\cos(6^\circ)$?
Do you know any method to calculate $\cos(6^\circ)$ ?
I tried lots of trigonometric equations, but not found any suitable one for this problem.
AI: I'm going to use the value of $\cos 18°=\frac{1}{4}\sqrt{10+2\sqrt{5}}$ obtained in this question.
$\sin^2 18°=1-\left(\frac{1}{4}\sqrt{10+2\sqrt{5}}\right)^2=1-\frac{10+2\sqrt{5}}{16}=\frac{6-2\sqrt{5}}{16}$ so $\sin 18°=\frac{1}{4}\sqrt{6-2\sqrt{5}}$
$\sin 36°=2\cos 18°\sin 18°=\frac{1}{4}\sqrt{10-2\sqrt{5}}$
$\cos 36°=\sqrt{1-\sin^2 36°}=\frac{1}{4}(1+\sqrt{5})$
$\cos 6°=\cos(36°-30°)=\cos 36°\cos 30°+\sin 36°\sin 30°=\frac{1}{4}\sqrt{7+\sqrt{5}+\sqrt{30+6\sqrt{5}}}$
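A quick numerical sanity check of the final nested radical (a sketch):

```python
import math

formula = 0.25 * math.sqrt(7 + math.sqrt(5) + math.sqrt(30 + 6 * math.sqrt(5)))
print(formula, math.cos(math.radians(6)))  # both ~0.99452
```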
|
H: Related rates, where do I start?
A revolving searchlight, which is $100$ m from the nearest point on a straight highway, casts a horizontal beam along a highway. The beam leaves the spotlight at an angle of $\frac{π}{16}$ rad and revolves at a rate of $\frac{π}{6}$ rad/s. Let $w$ be the width of the beam as it sweeps along the highway and $θ$ be the angle that the center of the beam makes with the perpendicular to the highway. What is the rate of change of $w$ when $θ = \frac{π}{3}?$
Neglect the height of the lighthouse.
100% directly from the textbook. Where do I begin?
AI: Here’s a bit to get you started. As Ross said, you start with a diagram:
Here $S$ is the spotlight, the heavy horizontal line is the highway, the line $\overline{SC}$ marks the centre of the spotlight’s beam, and the lines $\overline{SA}$ and $\overline{SD}$ mark the edges of the beam. You’re told that the angle $\angle BSD=\frac{\pi}{16}$, so $\angle BSC=\angle CSD=\frac{\pi}{32}$. The angle $\theta$ is the angle that the centre of the beam makes with the perpendicular $\overline{SA}$ from $S$ to the highway, so $\theta=\angle ASC$. Finally, $w$ is the width of the beam measured along the highway, so $w=|BD|$.
You know how fast $\theta$ is changing, since you’re given the rate of rotation of the spotlight. To solve the problem, therefore, you need to express $w$ in terms of $\theta$. This can be done with a little trigonometry involving tangents and the following data:
$$\begin{align*}
&\angle ASB=\theta-\frac{\pi}{32}\\
&\angle ASC=\theta\\
&\angle ASD=\theta+\frac{\pi}{32}\\
&|SA|=100
\end{align*}$$
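If you want to see where this heads numerically, here is a sketch that finishes the computation under the setup above; the differentiation step is my own addition, so treat it as one possible way to finish rather than the intended textbook route. From the data listed, $w=|AD|-|AB|=100\big[\tan(\theta+\tfrac{\pi}{32})-\tan(\theta-\tfrac{\pi}{32})\big]$, and $\frac{dw}{dt}=\frac{dw}{d\theta}\cdot\frac{d\theta}{dt}$ with $\frac{d\theta}{dt}=\frac{\pi}{6}$.

```python
import math

half = math.pi / 32       # half of the beam's opening angle
dtheta_dt = math.pi / 6   # rad/s
theta = math.pi / 3

# w(theta) = 100*(tan(theta + half) - tan(theta - half)); differentiate with respect to theta
dw_dtheta = 100 * (1 / math.cos(theta + half) ** 2 - 1 / math.cos(theta - half) ** 2)
print(dw_dtheta * dtheta_dt, "m/s")
```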
|
H: Is determinant uniformly continuous?
The determinant map $\det$ sending an $n\times n$ real matrix to its determiant is continuous since it's a polynomial in the coefficients. Is it also uniformly continuous?
AI: The determinant function on the set of $n\times n$ matrices is a non-zero polynomial which is homogeneous of degree $n$.
Since the restriction of a uniformly continuous function to a subspace is still uniformly continuous, it is enough to show that the restriction of $\det$ to the line spanned by any matrix with non-zero determinant is not uniformly continuous.
That is easy, because that restriction is (up to more or less obvious identifications) the function $t\in\mathbb R\mapsto t^n\in\mathbb R$.
A different way to say this is as follows: let $A\in M_n(\mathbb R)$ be any matrix with determinant equal to $1$. Then the map $f:t\in\mathbb R\mapsto tA\in M_n(\mathbb R)$ is linear, so uniformly continuous. If $\det:M_n(\mathbb R)\to\mathbb R$ were uniformly continuous, then the composition $\det\circ f:\mathbb R\to\mathbb R$ would also be uniformly continuous, and it isn't unless $n=1$, for $\det f(t)=t^n$ for all $t\in\mathbb R$.
|
H: What is the integral of $\int e^x\,\sin x\,\,dx$?
I'm trying to solve the integral of $\left(\int e^x\,\sin x\,\,dx\right)$ (My solution):
$\int e^x\sin\left(x\right)\,\,dx=$
$\int \sin\left(x\right) \,e^x\,\,dx=$
$\left(\sin(x)\,\int e^x\right)-\left(\int\sin^{'}(x)\,\left(\int e^x\right)\right)$
$\left(\sin(x)\,e^x\right)-\left(\int\cos(x)\,e^x\right)$
$\left(\sin(x)\,e^x\right)-\left(\cos(x)\,e^x-\left(\int-\sin\left(x\right)\,e^x\right)\right)$
$\left(\sin(x)\,e^x\right)-\left(\cos(x)\,e^x-\left(-\sin\left(x\right)\,e^x-\int-\cos\left(x\right)\,e^x\right)\right)$
I don't know how to continue because the solution is going to be very complicated.
AI: Notice that you got
$$\begin{align}\int e^x\sin (x) \, dx=&\left(\sin(x)\,e^x\right)-\left(\cos(x)\,e^x-\left(\int-\sin\left(x\right)\,e^x\,dx\right)\right)\\=&\sin(x)\,e^x-\left(\cos(x)\,e^x+\int\sin\left(x\right)e^x\,dx\right)\\=&e^x\sin (x)-e^x\cos(x)-\int e^x\sin(x)\, dx\end{align}$$
This implies $\displaystyle \int e^x\sin (x) \, dx+\int e^x\sin (x) \, dx=e^x(\sin(x)-\cos(x))$.
Conclude.
|
H: Proof of the continuous function having tangent plane has directional derivatives
Suppose that the continuous function $f: \Bbb R^2 \to \Bbb R$ has a tangent plane at the point $(x_0, y_0, f(x_0, y_0))$
Prove that the function $f$ has directional derivatives in all directions at the point $(x_0, y_0)$.
I guess that I need to use the definitions of the tangent plane and directional derivatives. Hopefully this is right!
But to be honest, I am not good at proving theorems. Even if somebody gives a hint, I cannot use it to prove this properly. Thus, please show me and teach me this proof step by step. Thank you so much for helping :)
AI: The tangent plane at the point $u=(x_0,y_0,f(x_0,y_0))$ of the graph $z=f(x,y)$, viewed as a level set of $g(x,y,z):=f(x,y)-z$, is just
$$\nabla g(x_0,y_0,z_0)\cdot(x-x_0\,,\,y-y_0\,,\,z-f(x_0,y_0))=0$$
This means the gradient of the function $f$ exists at $(x_0,y_0)$, and from another question you asked this means the directional derivative of $f$ exists in any direction at that point.
|
H: For All Unique Combinations of 60 A's and 20 B's Number of Combinations that have BB
Here is my question. I have 60 A's and 20 B's and need to find out the number of unique combinations of those where B shows up consecutively at least once.
For example (6 A's and 2 B's):
AAAAAABB = 1
AAAAABAB = 0
AAAABBAA = 1
AI: Let's count the ways that $B$ doesn't occur $2$ or more times in a row.
Line up the $A$'s. They determine $61$ "gaps" (I am counting in the endgaps).
We can choose $20$ of these gaps to put a $B$ into in $\dbinom{61}{20}$ ways.
Subtract this from the total number of words, which is $\dbinom{80}{20}$.
Added: We want to count the number of good words, where a word is good if it has (somewhere or other) two or more consecutive $B$'s. It is much easier to count the total number of words, which is $\binom{80}{20}$, and subtract the number of bad words, words that nowhere have $2$ or more $B$'s in a row. So we concentrate on counting the bad words. For the sake of illustration, I will assume that there are $12$ $A$'s and $5$ $B$'s. This is because I will be drawing a sort of picture, and don't want to type $60$ $A$'s.
How do we make a bad word, that is, a word of length $17$ with $12$ $A$'s and $5$ $B$'s in which no two $B$'s are adjacent? Imagine lining up the $12$ $A$'s like this.
$$ A \quad A \quad A \quad A \quad A \quad A \quad A \quad A \quad A \quad A \quad A \quad A
$$
Where can the $5$ $B$'s go? No two $B$'s can be next to each other, so any $B$ must be placed either in a gap between $2$ consecutive $A$'s or at the left end or at the right end. There are $11$ gaps between consecutive $A$'s, and two end places, which I called endgaps. So there are $11+2=13$ places where the $B$'s can go. To make a bad word, we must choose $5$ of these $13$ places to put a $B$ into. This choosing can be done in $\binom{13}{5}$ ways.
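Not part of the original answer, but here is a quick way to sanity-check the gap argument: brute-force a small case, compare it with the formula, and then evaluate the formula for the actual numbers. A sketch in Python (the helper names are mine):

```python
from itertools import combinations
from math import comb

def count_with_double_b(num_a, num_b):
    """Count words with num_a A's and num_b B's containing 'BB' somewhere,
    via total words minus words with no two adjacent B's (gap argument)."""
    total = comb(num_a + num_b, num_b)
    no_adjacent = comb(num_a + 1, num_b)   # choose gaps around the A's
    return total - no_adjacent

def brute_force(num_a, num_b):
    """Directly enumerate the positions of the B's and test for adjacency."""
    n = num_a + num_b
    count = 0
    for pos in combinations(range(n), num_b):
        if any(q - p == 1 for p, q in zip(pos, pos[1:])):
            count += 1
    return count

# Small sanity check (6 A's, 2 B's), then the actual question (60 A's, 20 B's).
assert count_with_double_b(6, 2) == brute_force(6, 2)
print(count_with_double_b(60, 20))   # C(80,20) - C(61,20)
```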
|
H: Why is $E_{\lambda}$ the kernel of the linear map $\alpha-\lambda I$
The book starts the chapter on Eigenvalues and Eigenvectors, and goes that this statement is obvious. Here $E_{\lambda}$ stands for the set of vectors $v$ such that $α(v) = λv$, for any scalar $\lambda$.
Could somebody provide some intuition why is that obvious?
Thanks!
AI: $\alpha(v) = \lambda v$ if and only if $\alpha(v) - \lambda v = 0$ if and only if $(\alpha - \lambda I)(v) = 0$.
|
H: Infinite-dimensional extensions of $\mathbb Q$
I need help to solve the following exercise:
Let $X$ be an indeterminate over $\mathbb Q$ (so a transcendental number) and consider the field extensions $\mathbb Q\subseteq \mathbb Q(X^3)\subseteq\mathbb Q(X^2)\subseteq\mathbb Q(X)$. Prove that $$\mathrm{Fix}(\mathrm{Gal}(\mathbb Q(X)/\mathbb Q(X^2)) )= \mathbb Q(X^2)$$ and that $$\mathrm{Fix}(\mathrm{Gal}(\mathbb Q(X)/\mathbb Q(X^3)) )\supsetneq\ \mathbb Q(X^3).$$
I hope that the notations $\mathrm{Gal}({}\cdot{}/{}\cdot{})$ and $\mathrm{Fix}({}\cdot{})$ are quite standard and so understandable.
My considerations: The subgroups $H\subseteq G=\mathrm{Gal}(\mathbb Q(X)/\mathbb Q)$ that satisfy the condition $\mathrm{Gal}(\mathbb Q(X)/\mathrm{Fix}(H))=H$ are only the finite subgroups of $G$. So one way to solve the problem could be showing that $\mathrm{Gal}(\mathbb Q(X)/\mathbb Q(X^2))$ is a finite group, but $\mathrm{Gal}(\mathbb Q(X)/\mathbb Q(X^3))$ is an infinite group.
Thanks in advance
AI: I'd rather write $\;t\;$ all through instead of $\,x\,$ , which can be misleading (for me, in particular) for the variable/unknown/indeterminate used for functions and/or polynomials.
Now, we can write
$$\Bbb Q(t)=\Bbb Q(t^2)[t]$$
since $\,t\,$ is algebraic over $\,\Bbb Q(t^2)\,$ as it is a root of the quadratic $\,f(X):=X^2-t^2\in\Bbb Q(t^2)[X]\,$ (remember that if $\,F/k\,$ is a field extension and $\,w\in F\;$, then $\,w\,$ is algebraic over $\,k\,$ iff $\,k(w)=k[w]\,$).
Since clearly both $\,t\,,\,-t\in\Bbb Q(t)\,$ , the extension is normal (and algebraic and separable) and thus Galois, and we're done with the first task
OTOH, we also have that $\,\Bbb Q(t)=\Bbb Q(t^3)[t]\,$ for the same reason as above, with $\,g(X)=X^3-t^3\in\Bbb Q(t^3)[X]\;$ , yet this time we get that
$$X^3-t^3=(X-t)(X^2+tX+t^2)$$
and the above quadratic's discriminant is
$$\Delta=t^2-4t^2=-3t^2\;,\;\;\text{and}\;\;\sqrt{-3t^2}\notin\Bbb Q(t^3)\;\text{(why?)}$$
Thus, the extension is this case is not normal and thus not Galois, and in fact $\;Gal(\Bbb Q(t)/\Bbb Q(t^3))=1\;$ , and this group's fixed field is...
|
H: number of different addends which sums to 41
I have following equation:
$$\sum_{i=1}^{21} m_i = 41$$
where $m_i$ are non-negative integers.
How many different solutions are there. Note, that $41 + 0 + ... + 0$ is a different solution than $0 + 41 + 0 + ... + 0.
My only idea was to solve this recursively: let $P_a(s)$ be the number of possible combinations, where $a$ is the number of addends and $s$ is the sum (in my case I want to calculate $P_{21}(41)$).
There is obviously this relation:
$$P_1(x) = 1\\P_a(x) = \sum_{i=0}^{x}P_{a-1}(x-i)$$
But there I stuck. Any ideas, how to approach this?
AI: It's ${41+21-1} \choose {41}$.
The idea is stars and bars.
You have 41 real objects (stars) and 20 dummy objects (bars) to mark the borders between addends. There is a bijection between the arrangements of these 61 objects and the ways of writing 41 as an ordered sum of 21 non-negative addends.
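A sketch (not from the original answer) that checks the stars-and-bars count against the recursion from the question, with the sum taken over the value of the last addend, and then evaluates the answer for $21$ addends summing to $41$:

```python
from math import comb

def compositions(num_addends, total):
    """Number of ways to write `total` as an ordered sum of `num_addends`
    non-negative integers, by the stars-and-bars formula."""
    return comb(total + num_addends - 1, total)

def compositions_recursive(num_addends, total):
    """Same count via the recursion P_1(s) = 1, P_a(s) = sum_i P_{a-1}(s - i).
    Exponential, so only useful for small sanity checks."""
    if num_addends == 1:
        return 1
    return sum(compositions_recursive(num_addends - 1, total - i)
               for i in range(total + 1))

# Sanity check on small inputs, then the question's numbers.
assert compositions(4, 6) == compositions_recursive(4, 6)
print(compositions(21, 41))   # C(61, 41) = C(61, 20)
```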
|
H: Discrete Mathematics: $x\leq y+\epsilon \implies x\leq y$
Let $x$ and $y$ be real numbers. Prove that if $x\leq y + \epsilon$ for every positive real number $\epsilon$, then $x\leq y$.
I would like a hint as to how to prove this. Thank you. Pictorial proof would be nice too.
So this is how I word-smithed the accepted answer:
We will proceed by proving the contrapositive. We want to show that
$$(\forall\epsilon>0:\ x\leq y+\epsilon) \implies x\leq y,$$
whose contrapositive is
$$x > y \implies (\exists\epsilon>0:\ x>y+\epsilon),$$
or, substituting $x-y = \omega$, simply
$$\omega > 0 \implies (\exists\epsilon>0:\ \omega>\epsilon).$$
But if $\omega >0$, then $\epsilon = \frac{\omega}{2} > 0$ satisfies $\omega > \epsilon$, which proves the contrapositive. Therefore, we can assert that
$$(\forall\epsilon>0:\ \omega\leq\epsilon) \implies \omega \leq 0;$$
quod erat demonstrandum.
Here is my pictorial representation:
AI: The claim is equivalent to showing that if $\omega\leq \epsilon$ for each $\epsilon >0$, then $\omega\leq 0$.
But, if $\omega>0$, then $\epsilon=\frac \omega 2 >0$ and $\omega \leq \frac\omega 2$ does not hold. Having proven the contrapositive, we can assert that $$(\forall\epsilon >0\;:\;\omega\leq\epsilon )\implies \omega \leq 0$$
Now let $\omega =x-y$.
Pictorially: if for any $\color{green}{\epsilon >0}$ we choose, $\omega$ is to the left of $\color{green}{\epsilon}$, then it must be the case that $\omega$ is to the left of the green bar, that is, on the red side $\color{red}{\omega <0}$ (strictly negative numbers), or at the breaking point, that is $\color{orange}{\omega=0}$.
ADD The contrapositive of the assertion is $$\omega >0\implies (\exists \epsilon >0:\omega\not\leq \epsilon)$$ or, which is the same, $$\omega >0\implies (\exists \epsilon >0:\omega> \epsilon)$$
We proved the contrapositive with $\epsilon =\omega /2$.
|
H: proving inequality $0 < x^4+2x^2-2x+1$ for $x>0$
How can I elegantly prove the inequality $0 < x^4+2x^2-2x+1$ for $x>0$. I have plotted this function in a Sage (an open source and free CAS) and I can see that there is a local min between $0$ and $1$ that lies above the x-axis.
Therefore,I could show that the function is decreasing from $0$ to the local min and then show it is increasing from the local min to infinity and then evaluate the function at the local min and show that it is greater than zero, and hence greater than $0$ for all $x>0$.
How can I prove this more simply?
AI: A very quick way to show that an expression is non-negative, is to write it as a sum of squares (of real valued expressions). In this case, you can split it into
$$ x^4 + x^2 + (x-1)^2$$
which also shows strict positivity, since not all squares can be simultaneously $0$ here.
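If you already have a CAS open (the question mentions Sage), the decomposition takes one line to verify; a SymPy version, for instance:

```python
import sympy as sp

x = sp.symbols('x')
# The sum of squares expands back to the original quartic.
assert sp.expand(x**4 + x**2 + (x - 1)**2) == x**4 + 2*x**2 - 2*x + 1
```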
|
H: Using Calculus to find total and maximum revenue and profit
I'm grappling with understanding how to use calculus to find rates of profit, revenue, and cost. I have the following problem:
$x = \text{ quantity }$, $12 < x < 48$
Total Cost: $C(x) = \dfrac 92x^2 -17x + 2700$
Price per item: $p(x) = -\dfrac{x^2}3 +\dfrac{23}2x + 78 + \dfrac{20000}x$
To find the total revenue $R(x)$, I believe that I have to multiply the quantity x by the price/item. Thus,
$$\begin{align} R(x) &= x\cdot p(x)\\
&= x\left( -\dfrac{x^2}3 +\dfrac{23}2x + 78 + \dfrac{20000}x\right) \\
&= -\dfrac{x^3}3 +\dfrac{23}2x^2 + 78x + 20000\end{align}$$
To find total profit $P(x)$, I believe that I have to subtract cost from revenue. Thus,
$$\begin{align} P(x) &= R(x) - C(x) \\
&= -\frac{x^3}3 +\frac{23}2x^2 + 78x + 20000 - \left(\frac 92x^2 -17x + 2700\right) \\
&= -\frac{x^3}3 +\frac{17}2 x^2 + 61x + 17300\end{align}$$
To find $x$ for maximum revenue, I believe I need to find the derivative of $R(x)$, so that
$$\begin{align} R'(x) = -x^2 + 23x + 78 &= 0\\
-(x-26)(x+3) &= 0\\
x = 26 \text{ or }x &= -3\end{align}$$
Since $x = -3$ is not in the domain, this means that the quantity of $x$ that will generate the maximum revenue is $26$.
To find $x$ for maximum profit, I need to first find the derivative of $P(x)$, set it equal to zero, and solve for $x$. Thus,
$$P'(x) = -x^2 + 17x + 61 = 0$$
And here's where I run into trouble. I'm not sure if I'm doing my math correctly or if this is just not an easy polynomial to factor. If I can't factor the polynomial to find x, I'm not sure how to proceed.
Essentially, I'm hoping someone can tell me if my math and logic are correct as I take the derivatives and find the maxima; if I'm not doing it correctly, how so; and what I might be doing wrong when it comes to finding the total profit. My apologies if there is a better way to format exponents. I checked the formatting help page and didn't see any help specific to that. If you have a suggestion for that as well, I'm happy to go in and edit the post to make it more readable. Thanks in advance.
AI: You have $P'(x)=-x^2+17x+61=ax^2+bx+c$. First, look at the discriminant of the polynomial, to see if it has any (real) roots at all: $$\Delta=b^2-4ac=17^2-4\cdot (-1)\cdot 61=533>0$$
To find the roots, we use the quadratic formula $$x=\frac{-b\pm\sqrt{\Delta}}{2a},$$ which gives $$x=\frac{-17\pm\sqrt{533}}{-2}=\frac{17\mp\sqrt{533}}2$$
And now you'll have to see which root you must keep.
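If you would rather check the arithmetic numerically, the same discriminant-and-quadratic-formula computation, taking $P'(x)$ as written in the question, is a few lines of Python (a sketch; the helper name is mine):

```python
import math

def real_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 via the discriminant."""
    disc = b**2 - 4*a*c
    if disc < 0:
        return []
    return sorted([(-b - math.sqrt(disc)) / (2*a), (-b + math.sqrt(disc)) / (2*a)])

# P'(x) = -x**2 + 17*x + 61; keep only roots in the domain 12 < x < 48.
roots = real_roots(-1, 17, 61)
print(roots)
print([r for r in roots if 12 < r < 48])
```

As a side benefit, re-deriving $R(x)$ and $P(x)$ symbolically (for instance with SymPy) from the given $C(x)$ and $p(x)$ is an easy way to double-check the hand algebra above.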
|
H: Work to pump water out of a tank with radius $10$
Water density is 1000, tank is a half sphere with radius $10$.
I use $9.8$ for gravity, $1000$ for density, and $2\pi$ to get the volume; putting it all together should give the work:
$$ 9.8(1000)(2\pi) \int r^2 dy$$
To find the radius I just use $(10 - y)$
This is wrong, but why?
AI: The mass of an infinitely thin layer of water at depth $y$ is $1000\pi r^2dy$, so the force needed to lift it is $9.8\cdot1000\pi r^2dy$, and the work done in lifting it is $9.8\cdot1000\pi r^2ydy$. Your integral should be
$$9800\pi\int_0^{10}r^2ydy\;.$$
Moreover, $r$ is not $10-y$: $r^2+y^2=10^2$, so $r^2=100-y^2$.
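For reference, evaluating that integral (assuming, as the answer does, that $y$ is the depth below the rim, so each layer is lifted a distance $y$):
$$9800\pi\int_0^{10}(100-y^2)\,y\,dy=9800\pi\left[50y^2-\frac{y^4}{4}\right]_0^{10}=9800\pi\cdot 2500=24\,500\,000\pi\approx 7.7\times 10^7\ \text{J}.$$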
|
H: Show there exists an N such that $n\ge N$ implies $\int|f^+-\phi_n|\,d\mu<\epsilon/2$
Let $f\in L(X,\mathcal{X},\mu)$. This makes $\int f^+\,d\mu<+\infty$. Now, there exists a monotone increasing sequence of simple measurable functions $\phi_n$ that converge to $f^+$. By the monotone convergence theorem, we also have:
$$\lim_{n\to\infty}\int \phi_n\,d\mu=\int f^+\,d\mu$$
How would one provide a precise symbolic argument that for any $\epsilon>0$, there exists an $N$ such that
$$\int \left|\,\,f^+-\phi_n\right|\,d\mu<\frac{\epsilon}{2}$$
for all $n\ge N$.
AI: It is not that hard ; since $|f^+ - \varphi_n| = f^+ - \varphi_n$, it suffices to find $N$ such that $\forall n \ge N$,
$$
\int f^+ d\mu - \varepsilon/2 < \int \varphi_n d\mu \le \int f^+ d\mu.
$$
Use the fact that $\int \varphi_n \,d\mu \nearrow \int f^+ \, d\mu$.
Hope that helps,
|
H: Minimum value of $f(x) = x^3 + 9x^2 + 5$ on $[0,3]$
For the function $f(x) = x^3 + 9x^2 + 5$ on the interval $[0,3]$, determine the minimum value.
I don't know how to do this. I think we have to find the derivative and set that equal to $0$, but that wasn't giving me the right answer. Any ideas?
AI: I assume that we want to find the minimum value of the function.
Take the derivative of the function, so
$$f'(x) = 3x^2 + 18x$$
If $f'(x) = 0$, then
$$0 = 3x(x + 6)$$
$$x = 0 \text{ or } x = -6$$
Since $x = -6$ is not within the interval, neglect that.
Then, by checking the values of the endpoints, we have $f(0) = 5$ and $f(3) = 113$.
Thus, the minimum value is 5.
Here is a picture of the graph of the function on $[0, 3]$.
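Not needed for the argument, but a quick numerical sanity check of the values on the interval (a sketch in Python/NumPy):

```python
import numpy as np

xs = np.linspace(0, 3, 10_001)
values = xs**3 + 9 * xs**2 + 5
print(values.min(), xs[values.argmin()])   # minimum ~5.0, attained at x = 0
```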
|
H: Prove that $a^3+b^3+c^3 \geq a^2b+b^2c+c^2a$
Let $a,b,c$ be positive real numbers. Prove that $a^3+b^3+c^3\geq a^2b+b^2c+c^2a$.
My (strange) proof:
$$
\begin{align*}
a^3+b^3+c^3 &\geq a^2b+b^2c+c^2a\\
\sum\limits_{a,b,c} a^3 &\geq \sum\limits_{a,b,c} a^2b\\
\sum\limits_{a,b,c} a^2 &\geq \sum\limits_{a,b,c} ab\\
a^2+b^2+c^2 &\geq ab+bc+ca\\
2a^2+2b^2+2c^2-2ab-2bc-2ca &\geq 0\\
\left( a-b \right)^2 + \left( b-c \right)^2 + \left( c-a \right)^2 &\geq 0
\end{align*}
$$
Which is obviously true.
However, this is not a valid proof, is it? Because I could just as well have divided by $a^2$ rather than $a$:
$$
\begin{align*}
\sum\limits_{a,b,c} a^3 &\geq \sum\limits_{a,b,c} a^2b\\
\sum\limits_{a,b,c} a &\geq \sum\limits_{a,b,c} b\\
a+b+c &\geq a+b+c
\end{align*}
$$
Which is true, but it would imply that equality always holds, which is obviously false. So why can't I just divide in a cycling sum?
Edit: Please don't help me with the original inequality, I'll figure it out.
AI: Since the inequality is cyclic, we may assume WLOG that $a=\min\{a,b,c\}$. Then this identity is all you need:
$$a^3+b^3+c^3=a^2b+b^2c+c^2a+\underset{\geq 0}{\underbrace{(c^2-a^2)(b-a)}}+\underset{\geq 0}{\underbrace{(c^2-b^2)(c-b)}}\geq a^2b+b^2c+c^2a$$
(the first underbraced term is $\geq 0$ because $b,c\geq a>0$, and the second because $(c^2-b^2)(c-b)=(b+c)(c-b)^2\geq 0$).
|
H: Using complete induction, prove that if $a_1=2$, $a_2=4$, and $a_{n+2}=5a_{n+1}-6a_n$, then $a_n=2^n$
Could anyone please explain to me how to do this problem by using the principle of complete induction? Thanks. :)
Let $a_1=2$, $a_2=4$, and $a_{n+2}=5a_{n+1}-6a_n$ for all $n\geq 1$. Prove that $a_n=2^n$ for all natural numbers $n$.
AI: For induction we wish to show the following two things:

1. If $a_{n+2} = 2^{n+2}$ and $a_{n+1} = 2^{n+1}$ for some particular positive integer $n$, then $a_{n+3} = 2^{n+3}$ must be true for that same value of $n$.
2. $a_1 = 2^1$ and $a_2 = 2^2$.

If 1 and 2 are both true, then we can start from 2 and apply 1 repeatedly to establish that $a_n = 2^n$ for all $n \geq 1$.
Proving point number 2 is trivial. It is already given to us. We now must prove point number 1.
Assume that
$$a_{n+2} = 2^{n+2}$$
$$a_{n+1} = 2^{n+1}$$
For some arbitrary positive integer n.
Notice that using the original definition:
$$a_{n+3} = 5\cdot a_{n+2} - 6\cdot a_{n+1} = 5\cdot 2^{n+2} - 6\cdot 2^{n+1}$$
Now we will simplify this expression:
$$5\cdot 2^{n+2} - 6\cdot 2^{n+1} = 2\cdot(5\cdot 2^{n+1}) - (5\cdot 2^{n+1}) - 2^{n+1}$$
We now factor and reorganize:
$$2\cdot(5\cdot 2^{n+1}) - (5\cdot 2^{n+1}) - 2^{n+1} = (5\cdot 2^{n+1})(2 - 1) - 2^{n+1} = (5\cdot 2^{n+1}) - 2^{n+1}$$
And now simplify:
$$(5\cdot 2^{n+1}) - 2^{n+1} = 4\cdot 2^{n+1} = 2\cdot 2^{n+2} = 2^{n+3}$$
Thus we have shown that if
$$a_{n+2} = 2^{n+2}$$
$$a_{n+1} = 2^{n+1}$$
and
$$a_{n+3} = 5\cdot a_{n+2} - 6\cdot a_{n+1}$$
Then:
$$a_{n+3} = 2^{n+3}$$
Thus we are now done with the induction.
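The induction settles it for every $n$, but spot-checking a recurrence numerically is cheap; a sketch in Python:

```python
def a_sequence(n_terms):
    """First n_terms (n_terms >= 2) of a_1 = 2, a_2 = 4, a_{n+2} = 5 a_{n+1} - 6 a_n."""
    terms = [2, 4]
    while len(terms) < n_terms:
        terms.append(5 * terms[-1] - 6 * terms[-2])
    return terms

# Compare against the closed form a_n = 2**n proved above.
assert a_sequence(30) == [2**n for n in range(1, 31)]
```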
|
H: Change of basis and identity
Let $\beta = \{b_1,\dots, b_n \}$ be a base for $V$. Explain why the $\beta$ coordinate vectors of $b_1,\dots, b_n$ are the columns $e_1, \dots, e_n$ of the $n$ by $n$ identity.
The solution simply says $b_1 = 1b_1 + 0b_2 + \dots0b_n$.
Here is what I don't understand. If I take $b_1 = (1,2)$ and $b_2 =(7,5)$ and $v = (2,6) \in V$, then the coordinates are $(32/9, -2/9)$ no?
AI: The key here is to understand what they mean by the "$\beta$ coordinate vectors". Normally, we would say that a vector in the form $\vec v = (v_1,v_2,\dots, v_n)$ is equal to $v_1e_1 + v_2e_2 + \dots + v_ne_n$.
However, if $v$ is a "$\beta$ coordinate vector", we have $\vec v = v_1b_1 + v_2b_2 + \dots + v_nb_n$.
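Concretely, with the numbers from the question: the $\beta$ coordinates of $v=(2,6)$ are indeed $(32/9,\,-2/9)$, but the statement is about the $\beta$ coordinates of $b_1$ and $b_2$ themselves, and those come out to $e_1$ and $e_2$. A quick numerical check (not part of the original answer):

```python
import numpy as np

B = np.column_stack([(1, 2), (7, 5)])          # b1, b2 from the question, as columns
print(np.linalg.solve(B, np.array([2, 6])))    # beta-coordinates of v: [32/9, -2/9]
print(np.linalg.solve(B, np.array([1, 2])))    # beta-coordinates of b1: [1, 0]
print(np.linalg.solve(B, np.array([7, 5])))    # beta-coordinates of b2: [0, 1]
```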
|