H: clarification on accumulation points Let $A=\{1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4},\ldots\}$. Then the point $0$ is an accumulation point of $A$. The limit point $0$ does not belong to $A$. Also, $A$ does not contain any other limit points. Why doesn't $0$ belong to $A$, is it because there are an infinite amount of open sets $G$ with $0 \in G$? (An accumulation point will never belong to its associated set?) What is the difference between open set and neighborhood, isn't every neighborhood an open set? How can you tell that there are no more accumulation points in $A$? Many thanks. AI: An accumulation point of a set can belong to the set. Change your example to the set $B$, where $B$ is $A$ together with $0$. Then $0$ is an accumulation point of $B$ and is in $B$. But an accumulation point of a set need not belong to the set. Your $A$ gives one such example. That there are no other accumulation points of $A$ can be shown as follows. Suppose that $a$ is an accumulation point of $A$. Then for any $\epsilon>0$, there are infinitely many elements of $A$ that are within $\epsilon$ of $a$. Suppose first that $a$ is negative. Let $\epsilon=|a|/2$. Then there are no elements of $A$ within $\epsilon$ of $a$, since every element of $A$ is positive, so $a$ cannot be an accumulation point of $A$. If $a$ is positive, pick $\epsilon=a/2$. Then every number which is within $\epsilon$ of $a$ is $\gt a/2$, and there are only finitely many $n$ such that $\frac{1}{n}\gt a/2$, so again $a$ cannot be an accumulation point of $A$. Hence $0$ is the only accumulation point of $A$.
H: Conditional expectation (mixed with an iterated expectation) $E[E(X\mid Y)\mid Y]=E(X\mid Y)$ Conditional expectation: I want to prove $E[E(X\mid Y)|Y]=E(X\mid Y)$ I attempted the following. Is it correct? $$\begin{align*}E[E(X\mid Y)|Y=y]&=\int_{-\infty}^\infty E(X\mid Y=y)f_{X\mid Y}(x\mid y)~dx\\ &=E(X\mid Y=y)\int_{-\infty}^\infty f_{X\mid Y}(x\mid y)~dx \\&=E(X\mid Y=y)\end{align*}$$ $$\text{Hence, } E[E(X\mid Y)|Y]=E(X\mid Y)$$ AI: You haven't defined your symbols for much of this; so, I don't want to say that I'm quite comfortable with what you've written there. Plus - assuming that $f_{X\vert Y}$ is a density function - you've made an inherent assumption that your variable is continuously distributed. However, let me present you with a hint for a much more general proof. Remember that $\mathbb{E}[X\mid Y]$ is itself a random variable - and in particular, is a random variable that is measurable with respect to the $\sigma$-algebra generated by $Y$. You can prove that if $Z$ is a random variable that is measurable with respect to the $\sigma$-algebra $\mathcal{F}$, then $\mathbb{E}[Z\mid \mathcal{F}]=Z$. (To make logical sense of this: remember, we can think of conditional expectation as "If we know only the information encoded in $\mathcal{F}$, what can we say about the value of $Z$?" If $Z$ is measurable with respect to $\mathcal{F}$, then of course we know exactly the value of $Z$ given that information.)
H: If $X$ is a well-ordered set, how to prove that $\mathcal{P}(X)$ can be linearly ordered? I'm having trouble solving the following exercise, proposed in T. Jech, 'Set theory', Exercise 5.4. If $X$ is a well-ordered set, then $\mathcal{P}(X)$ can be linearly ordered. [Let $X<Y$ if the least element of $X\triangle Y$ belongs to $X$.] I can prove that $<$ satisfies totality (i.e. $X<Y$ or $X=Y$ or $Y<X$) and $X\not< X$, but I can't prove transitivity of $<$. Thanks for any help. AI: Suppose that $A<B$ and $B<C$; let $a=\min(A\mathbin{\triangle}B)\in A$ and $b=\min(B\mathbin{\triangle}C)\in B$. Then $a\notin B$, so $a\ne b$. Suppose that $a<b$. If $x\in X$ and $x<a$, then $x\in A$ iff $x\in B$ iff $x\in C$, so $x\notin A\mathbin{\triangle}C$. Since $a\notin B$ and $a<b$, $a\notin C$, and therefore $a=\min(A\mathbin{\triangle}C)$, i.e., $A<C$. Now suppose that $b<a$. If $x\in X$ and $x<b$, then $x\in A$ iff $x\in B$ iff $x\in C$, so $x\notin A\mathbin{\triangle}C$. And $b\in A\setminus C$ (why?), so $b=\min(A\mathbin{\triangle}C)\in A$, and again $A<C$.
H: Applying Stirling's formula in testing for convergence of a sum I am trying to figure out which $\beta \in \mathbb{R}$ make the series $\sum_{n=1}^{\infty}\frac{(\beta n)^n}{n!}$ converge. I have tried two tests: ratio test, and approximation by Stirling's formula. I must be making a mistake with at least one of them, because they're giving me different answers. If I try the ratio test, I get $$\frac{a_{n+1}}{a_n}= \frac{(\beta n)^{n+1}/(n+1)!}{(\beta n)^n/n!}= \frac{\beta n}{n+1}\to \beta,$$which suggests that it converges at least for $\beta \in (-1,1)$. But Stirling's formula is giving me that the general term is $$\frac{(\beta n)^n}{n!} \sim \frac{(\beta n)^n}{(n/e)^n\sqrt{2\pi n}}= \frac{\beta^n e^n}{\sqrt{2\pi n}},$$ which suggests that the general term does not even go to zero unless $\beta e<1$! This contradicts what I got previously. What am I doing wrong? Thanks for your help! AI: Your ratio test is wrong: $a_{n+1}$ is $\frac{(\beta (n+1))^{n+1}}{(n+1)!}$, not $\frac{(\beta n)^{n+1}}{(n+1)!}$. Try again. $$\frac{a_{n+1}}{a_n}=\frac{\dfrac{\beta^{n+1}(n+1)^{n+1}}{(n+1)!}}{\dfrac{\beta^n n^n}{n!}} = \frac{1}{n+1}\cdot\frac{\beta^{n+1}}{\beta^n}\cdot\frac{(n+1)^{n+1}}{n^n} = \beta\left(1+\frac{1}{n}\right)^n \to \beta e$$
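Not part of the original exchange: a small numerical sketch (in Java, with an arbitrarily chosen sample value of $\beta$) that tracks $\log a_n = n\log(\beta n)-\log n!$ and confirms that the successive ratios approach $\beta e$ rather than $\beta$.

```java
// Numerical check: for a sample beta, the ratio a_n / a_{n-1} tends to beta*e.
public class RatioCheck {
    public static void main(String[] args) {
        double beta = 0.2;              // sample value, chosen arbitrarily
        double logFactorial = 0.0;      // log(n!) accumulated incrementally
        double prevLogTerm = Double.NaN;
        for (int n = 1; n <= 50; n++) {
            logFactorial += Math.log(n);
            double logTerm = n * Math.log(beta * n) - logFactorial;  // log a_n
            if (n > 1 && n % 10 == 0) {
                double ratio = Math.exp(logTerm - prevLogTerm);      // a_n / a_{n-1}
                System.out.printf("n=%2d  ratio=%.6f  beta*e=%.6f%n",
                                  n, ratio, beta * Math.E);
            }
            prevLogTerm = logTerm;
        }
    }
}
```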
H: A certain ice cream store has 31 flavors of ice cream available. In how many ways can we... A certain ice cream store has 31 flavors of ice cream available. In how many ways can we order a dozen ice cream cones if chocolate, one of the 31 flavors, may be ordered no more than 6 times? $\dbinom{31 + 12 - 1}{12}$ would be the total number of cases if not for the restriction of chocolate no more than 6 times. To fix this we could subtract the invalid cases: Exactly 7 chocolate cones ordered: $\dbinom{30 + 5 - 1}{5}$ -> 30 because chocolate is not an option anymore, 5 because we still have to fill 5 cones to have the dozen. Exactly 8 chocolate cones ordered: $\dbinom{30 + 4 - 1}{4}$ . . . Exactly 12 chocolate cones ordered: $\dbinom{30 + 0 - 1}{0}$ Solution: $\dbinom{31 + 12 - 1}{12} - ( \dbinom{30 + 5 - 1}{5} + \dbinom{30 + 4 - 1}{4} + \dbinom{30 + 3 - 1}{3} + \dbinom{30 + 2 - 1}{2} + \dbinom{30 + 1 - 1}{1} + \dbinom{30 + 0 - 1}{0} )$ Is the reasoning correct? If so, can we avoid the invalid cases sum? This could complicate; suppose 155 flavors and a restriction of no more than 30 times. Thanks. AI: You can get rid of the cases. As you say, there are $\binom{12+31-1}{31-1}=\binom{42}{30}=\binom{42}{12}$ ways without the restriction. To count the unacceptable ways, note that each ‘bad’ way has at least seven chocolate cones, so that you’re really just counting the number of ways to order ($7$ chocolate cones and) another $5$ cones with no restrictions. That’s just the number of ways to order $5$ cones without restriction, which is $\binom{5+31-1}{31-1}=\binom{35}{30}=\binom{35}5$, so the total number of acceptable orders is $$\binom{42}{12}-\binom{35}5=11,058,116,888-324,632=11,057,792,256\;.$$ As a check, note that by a standard binomial identity we have $$\sum_{k=0}^5\binom{k+29}k=\sum_{k=0}^5\binom{k+29}{29}=\binom{5+29+1}{29+1}=\binom{35}{30}=\binom{35}5\;,$$ showing that this answer is the same as yours.
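Not from the original post: a quick Java cross-check of the two counts, using a hypothetical `choose` helper; it confirms that $\binom{42}{12}-\binom{35}{5}$ and the case-by-case subtraction from the question both give $11{,}057{,}792{,}256$.

```java
import java.math.BigInteger;

public class ConeCount {
    // Exact binomial coefficient C(n, k); each intermediate division is exact.
    static BigInteger choose(int n, int k) {
        BigInteger result = BigInteger.ONE;
        for (int i = 1; i <= k; i++) {
            result = result.multiply(BigInteger.valueOf(n - k + i))
                           .divide(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        // Direct count: all orders minus "at least 7 chocolate" orders.
        BigInteger direct = choose(42, 12).subtract(choose(35, 5));

        // Case-by-case count from the question: subtract exactly 7..12 chocolate cones.
        BigInteger byCases = choose(42, 12);
        for (int chocolate = 7; chocolate <= 12; chocolate++) {
            int remaining = 12 - chocolate;   // cones still to fill from the 30 other flavors
            byCases = byCases.subtract(choose(30 + remaining - 1, remaining));
        }

        System.out.println(direct);    // 11057792256
        System.out.println(byCases);   // 11057792256
    }
}
```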
H: Formal notation when using the axiom of specification The axiom of specification states formally that for every property $\varphi$ holds $\forall X\exists Y\forall x(x\in Y\longleftrightarrow x\in X\wedge\varphi(x))$. Since from the axiom of extensionality such a set $Y$ is unique we define $Y:=\{x\in X:\varphi(x)\}$. Now, extending the definition I'd like to know the formal definitions in the next cases: 1.- When people write $X=\{x:\varphi(x)\}$. 2.- Given any arbitrary object, let's say a function $f$, then the set $X=\{f(x):x\in Dom(f)\}$, or for example $X=\{\int_{a}^{b}: 0\leq > a\leq b\leq 1\}$, etc. Edit: Maybe I didn't explain myself correctly but what I'm looking for is some general definition to the cases given above, if there are some. I'm going to try giving my interpretation: 1.- I guess that this notations correspond to the case $\exists Y \forall x(\varphi(x)\longrightarrow x\in Y)$. If this holds then we can have $\{x:\varphi(x)\}:=\{x\in Y:\varphi(x)\}$. 2.- I guess this case corresponds to some simplification like for example $\{f(x):x\in Dom(f)\}:=\{y:\exists x(x\in Dom(f)\wedge y=f(x))\}$ in the first case. In the second case it can be $\{\int_{a}^{b}: 0\leq a\leq b\leq 1\}:=\{y:\exists a\exists b(y=\int_{a}^{b}\wedge $ $ 0 \leq a\leq b\leq 1)\}$. Note: In the first case I think we have to prove that $Y$ is unique. In the second case I don't even know how to make the general statement to include all the cases that are similar to these examples I have specified. AI: There are two possibilities for $X=\{x:\varphi(x)\}$. This may be defining a proper class; in $\mathsf{ZF(C)}$ it’s an informal definition, but there are axiomatizations of set theory that allow proper classes. Alternatively, the context may show that $X$ is really $\{x\in y:\varphi(x)\}$ for some $y$, and the notation $\{x:\varphi(x)\}$ is just a bit of sloppiness. Added: In the latter case you don’t have to prove uniqueness of $y$; you just have to prove that there is a set $y$ that contains all $x$ with the property $\varphi$. In practice this will usually be fairly obvious. $X=\{f(x):x\in\operatorname{dom}f\}$ is readable shorthand for a perfectly straightforward instance of comprehension (specification). The function $f$ is a set of ordered pairs $\langle x,y\rangle=\{\{x\},\{x,y\}\}$. The sets $\{x\}$ and $\{x,y\}$ are then elements of $\bigcup f$, and $x$ and $y$ themselves are elements of $\bigcup\bigcup f$. Thus, $\operatorname{dom}f\subseteq\bigcup\bigcup f$. In fact, $$\operatorname{dom}f=\left\{x\in\bigcup\bigcup f:\exists y\in\bigcup\bigcup f\Big(\{\{x\},\{x,y\}\}\in f\Big)\right\}\;.$$ using this abbreviation, we can then write $$X=\left\{y\in\bigcup\bigcup f:\exists x\in\operatorname{dom}f\Big(\{\{x\},\{x,y\}\}\in f\Big)\right\}\;.$$ However, we don’t need $\operatorname{dom}f$ to describe $X$: $X$ is simply the range of $f$, so we can describe it directly as $$X=\left\{y\in\bigcup\bigcup f:\exists x\in\bigcup\bigcup f\Big(\{\{x\},\{x,y\}\}\in f\Big)\right\}\;.$$ Added: It’s not clear just what class of definitions you have in mind here. If it’s as broad as the two examples suggest, I don’t think that you can make a general statement that both covers all such examples and is specific enough to be useful.
H: Finding the orthogonal complement of a particular set Let $\ell^2$ denote the vector space of all square summable sequences with the inner product defined as $\langle x,y\rangle = \sum\limits_{i=1}^{\infty} x_i \bar y_i$, and $\ell_0$ denote the space of sequences that have finitely many non-zero terms. Given $y = (1,\frac12,\frac13,\frac14,\dots)\in\ell^2$, define $A = \{x \in\ell_0\mid x \perp y\}$. How to find $A^\perp$? AI: Since $\ell_0 \subset \ell^2$, we have $$ A=\bigcup_{n \in \mathbb{N}}A_n, $$ where $$ A_n=\left\{(x_1,\ldots,x_n,0,\ldots) :\ \sum_{k=1}^n\frac{x_k}{k}=0\right\}. $$ Obviously $A_1=0$ and $A_n=\text{span}\mathscr{B}_{n-1}\cong \mathbb{R}^{n-1}$ for every $n \ge 2$, with $$ \mathscr{B}_{n-1}=\left\{e_1-2e_2,\ldots,e_1-ne_n\right\} , $$ where $\{e_i\}_i$ denotes the standard basis of $\ell^2$. It follows that \begin{eqnarray} A^\perp&=&\{z \in \ell^2:\ z\perp A\}=\{z \in \ell^2: \ z\perp \mathscr{B}_{n-1} \forall n \ge 2\}\\ &=&\{z \in \ell^2: z_1-nz_n=0 \quad \forall n \ge 2\}=\mathbb{R}y. \end{eqnarray}
H: Infinite Simple Group as Union of Solvable Groups Let $A_n$ denote the alternating group on first $n$ natural numbers, and $A_{\mathbb{N}}$ be the union $\cup_{n\geq 1} \,\, A_n$. (In other words, $A_{\mathbb{N}}$ is the set of all bijections from $\mathbb{N}$ to $\mathbb{N}$ which move only finitely many points, and they are even permutations on these finitely many points.) Question: Is there an infinite ascending chain $1\leq H_1\leq H_2\leq \cdots$ of finite solvable subgroups of $A_{\mathbb{N}}$ such that $\cup_{n\geq 1} H_n=A_{\mathbb{N}}$? AI: No, there isn't. If there were such a chain, then for a large enough $n$ we would have $A_5 \leq H_n$: the finitely many generators of the copy of $A_5$ inside $A_{\mathbb{N}}$ each lie in some $H_k$, and since the chain is ascending they all lie in a single $H_n$. Since $H_n$ is solvable, it would also mean that $A_5$ is solvable (solvability is preserved when taking subgroups). But $A_5$ is simple non-abelian, so this is a contradiction. To put the same in other words: $A_{\mathbb{N}}$ is not locally solvable, and any union of an ascending chain of solvable subgroups must be locally solvable.
H: Confusion regarding the proof to "every Lindelöf metric space is second countable". The proof given in my book for "every Lindelöf metric space is second countable" is: Let there exist an open covering $\{B(x,\epsilon)\}, \forall x\in X$, where $X$ is a Lindelöf metric space. There consequently exists a countable subcover $\{B(x_{n},\epsilon)\}$. Hence every point $y\in X$ is a part of some $B(x_{i},\epsilon)$. Let $B(y,d)$ be an arbitrary open set containing $y$, where $d\in\Bbb{R}$. If $d(x_{i},y)+\epsilon\leq d$ along with the condition $\epsilon\geq d(x_{i},y)$ then we have $B(x_{i},\epsilon)\subseteq B(y,d)$ and $y\in B(x_{i},\epsilon)$. Hence, the set of all such $\{B(x_{i},\epsilon)\}$ will form a basis, of which there is a countable number. Hence, the space is second countable. My doubt is this: For every arbitrary $B(y,d)$ of a point $y\in X$, you will have a separate countable cover $\{B(x_{n},\epsilon)\}$, where $\epsilon$ satisfies $d(x_{i},y)+\epsilon\leq d$ $\epsilon\geq d(x_{i},y)$ Note that the value of $\epsilon$ will remain the same for all $\{B(x_{n},\epsilon)\}$. To form a basis, you will have to take the collection of all such countable subcovers $\{B(x_{n},\epsilon)\}$. For the particular point $y$, there might be an uncountable number of different points $x_{i}$ such that $B(x_{i},\epsilon)\subseteq B(y,d)$, where $d$ varies over $\Bbb{R}$. Why can't the collection of all such sets be uncountable, making the space not second countable? AI: Part of the argument is simply wrong: the base constructed is not countable, because it uses all $\epsilon>0$ instead of a countable set of $\epsilon>0$ containing arbitrarily small members, like $\{2^{-n}:n\in\Bbb N\}$ or $\left\{\frac1n:n\in\Bbb Z^+\right\}$. The intended argument could be made much more clearly. Let me simply do a more intelligible version, rather than comment specifically on this one. For each $n\in\Bbb N$, $\{B(x,2^{-n}:x\in X\}$ is an open cover of the Lindelöf space $X$, so it has a countable subcover $\mathscr{B}_n$. Let $\mathscr{B}=\bigcup_{n\in\Bbb N}\mathscr{B}_n$; $\mathscr{B}$ is a countable family of open subsets of $X$, and I claim that it’s a base for the topology of $X$. To see this, let $U$ be any non-empty open set in $X$, and fix $x\in U$; we must show that there is some $B\in\mathscr{B}$ such that $x\in B\subseteq U$. Since $x\in U$, and $U$ is open, we know that there is an $\epsilon>0$ such that $B(x,\epsilon)\subseteq U$. Choose $n\in\Bbb N$ large enough so that $2^{-n}<\frac{\epsilon}2$. Now $\mathscr{B}_n$ covers $X$, so there is a $B(y,2^{-n})\in\mathscr{B}_n$ such that $x\in B(y,2^{-n})$; I claim that $B(y,2^{-n})\subseteq U$. Suppose that $z\in B(y,2^{-n})$; then $d(z,y)<2^{-n}<\frac{\epsilon}2$. We also know that $d(x,y)<2^{-n}<\frac{\epsilon}2$, so by the triangle inequality we have $d(x,z)<\frac{\epsilon}2+\frac{\epsilon}2=\epsilon$, $z\in B(x,\epsilon)\subseteq U$, and hence $B(y,2^{-n})\subseteq U$, as claimed. It follows that $\mathscr{B}$ is indeed a base for the topology on $X$. Added: Getting back to your specific questions about the original form of the argument, I think that what you’ve not realized is that for each $\epsilon>0$ there is only one family $\{B(x_{\epsilon,n},\epsilon:n\in\Bbb N\}$ of $\epsilon$ balls being considered; there is not a separate one for each $B(y,d)$. (The double subscript on $x_{\epsilon,n}$ is necessary, because for each $\epsilon>0$ there is potentially a different $x_n$.) 
We use the Lindelöf property to find these countable covers of $X$ at the beginning; we don’t find new ones for each $B(y,d)$. Part of the problem, I think, is that the argument given does not adequately explain why we can be certain that there is a $B(x_{\epsilon,n},\epsilon)$ such that $d(x_{\epsilon,n},y)+\epsilon<d$ and $\epsilon\ge d(x_{\epsilon,n},y)$.
H: How can I show that $\sqrt{1+\sqrt{2+\sqrt{3+\sqrt\ldots}}}$ exists? I would like to investigate the convergence of $$\sqrt{1+\sqrt{2+\sqrt{3+\sqrt{4+\sqrt\ldots}}}}$$ Or more precisely, let $$\begin{align} a_1 & = \sqrt 1\\ a_2 & = \sqrt{1+\sqrt2}\\ a_3 & = \sqrt{1+\sqrt{2+\sqrt 3}}\\ a_4 & = \sqrt{1+\sqrt{2+\sqrt{3+\sqrt 4}}}\\ &\vdots \end{align}$$ Easy computer calculations suggest that this sequence converges rapidly to the value 1.75793275661800453265, so I handed this number to the all-seeing Google, which produced: OEIS A072449 "Nested Radical Constant" from MathWorld Henceforth let us write $\sqrt{r_1 + \sqrt{r_2 + \sqrt{\cdots + \sqrt{r_n}}}}$ as $[r_1, r_2, \ldots r_n]$ for short, in the manner of continued fractions. Obviously we have $$a_n= [1,2,\ldots n] \le \underbrace{[n, n,\ldots, n]}_n$$ but as the right-hand side grows without bound (It's $O(\sqrt n)$) this is unhelpful. I thought maybe to do something like: $$a_{n^2}\le [1, \underbrace{4, 4, 4}_3, \underbrace{9, 9, 9, 9, 9}_5, \ldots, \underbrace{n^2,n^2,\ldots,n^2}_{2n-1}] $$ but I haven't been able to make it work. I would like a proof that the limit $$\lim_{n\to\infty} a_n$$ exists. The methods I know are not getting me anywhere. I originally planned to ask "and what the limit is", but OEIS says "No closed-form expression is known for this constant". The references it cites are unavailable to me at present. AI: For any $n\ge4$, we have $\sqrt{2n} \le n-1$. Therefore \begin{align*} a_n &\le \sqrt{1+\sqrt{2+\sqrt{\ldots+\sqrt{(n-2)+\sqrt{(n-1) + \sqrt{2n}}}}}}\\ &\le \sqrt{1+\sqrt{2+\sqrt{\ldots+\sqrt{(n-2)+\sqrt{2(n-1)}}}}}\\ &\le\ldots\\ &\le \sqrt{1+\sqrt{2+\sqrt{3+\sqrt{2(4)}}}}. \end{align*} Hence $\{a_n\}$ is a monotonic increasing sequence that is bounded above.
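A numerical aside (not part of the proof): evaluating $a_n$ from the innermost radical outwards, for instance in Java, shows the rapid convergence to $1.7579327566\ldots$ mentioned in the question.

```java
public class NestedRadical {
    // a_n = sqrt(1 + sqrt(2 + ... + sqrt(n))), evaluated from the inside out.
    static double a(int n) {
        double value = 0.0;
        for (int k = n; k >= 1; k--) {
            value = Math.sqrt(k + value);
        }
        return value;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 2, 5, 10, 20, 40}) {
            System.out.printf("a_%d = %.15f%n", n, a(n));  // settles near 1.757932756618...
        }
    }
}
```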
H: How do we know that $|i!| = \sqrt{\pi \operatorname{csch} \pi}$? (Source: Wolfram Alpha) Or, to write it out in full, $$|i!| = \sqrt{\frac{2\pi e^\pi}{e^{2\pi} - 1}}$$ How is this identity derived? Also, knowing this, could we find the exact values for the real and imaginary parts of $i!$? AI: Recall the functional equation (Euler's reflection formula) $$\Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin(\pi z)}.$$ Set $z = 1+i$; then, we get $$\Gamma(1+i)\Gamma(-i) = \frac{\pi}{\sin(\pi (1+i))} = -\frac{\pi}{\sin(\pi i)} = \frac{i\pi}{\sinh \pi},$$ so that $|\Gamma(1+i)\Gamma(-i)| = \pi \, \mathrm{csch}(\pi)$. On the other hand, $\Gamma(1-i) = -i\,\Gamma(-i)$ and $\Gamma(1-i) = \overline{\Gamma(1+i)}$, so $|\Gamma(-i)| = |\Gamma(1-i)| = |\Gamma(1+i)|$; therefore $$|\Gamma(1+i)| = \sqrt{\pi \, \mathrm{csch}(\pi)}.$$
H: Integrating Factorials I feel like I'm doing something wrong here: $$\frac{d^n}{dx^n}(x^n)=n!$$ $$ 5!=\frac{d^5}{dx^5}(x^5)$$ $$ \int{5! dx}=\int{\frac{d^5}{dx^5}(x^5)}dx=x\frac{d^4}{dx^4}(x^4)=x*4!$$ Please explain what I'm doing wrong. AI: $$ \int{5! dx}=\int{\frac{d^5}{dx^5}(x^5)}dx=\frac{d^4}{dx^4}(x^5)=x*5!$$ $$ \int 120 dx = 120*x$$ *Thanks Cocopuffs
H: How can I prove that an iterated transformation describes all odd integers? This is a question where I want to find "a best" way (or even different ways) to prove my assumption - just to widen my understanding of similar problems and how to approach them. It's a question of proof-strategy. (This is also related to my studies of the Collatz-problem) Remark: this problem was less difficult than I thought it were, see my own answer. Regarding my question for a proof-strategy it is a nice example for how a tabular representation can obfuscate the problem and mislead the mind away from a relatively simple solution. Consider the transformation on odd positive numbers $$ x_{k+1} = \left\{ \begin{array} {} { 3x_k-1 \over 2} &\qquad \text{ if } x_k \equiv 1 \pmod 4 \\ { 3x_k+1 \over 2} &\qquad \text{ if } x_k \equiv -1 \pmod 4 \end{array} \right. $$ such that for instance the trajectory beginning at $5$ continues like $ 5 \to 7 \to 11 \to 17 \to \ldots $ Because the numbers of the form $ x \equiv 3 \pmod 6$ have no preimage I take them as "roots" and order all trajectories in the following two-way infinite array of odd natural numbers $ \ge 3$ : $$ \small \begin{array} {r|rrrr} 3 & 5 & 7 & 11 & 17 & 25 & 37 & 55 & \cdots \\ 9 & 13 & 19 & 29 & 43 & 65 & 97 & 145 & \cdots \\ 15 & 23 & 35 & 53 & 79 & 119 & 179 & 269 & \cdots \\ 21 & 31 & 47 & 71 & 107 & 161 & 241 & 361 & \cdots \\ 27 & 41 & 61 & 91 & 137 & 205 & 307 & 461 & \cdots \\ 33 & 49 & 73 & 109 & 163 & 245 & 367 & 551 & \cdots \\ 39 & 59 & 89 & 133 & 199 & 299 & 449 & 673 & \cdots \\ 45 & 67 & 101 & 151 & 227 & 341 & 511 & 767 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{array} $$ The number $1$ forms a cycle $ 1 \to 1 $ and is not in this table. It looks quite obvious, that I've got all positive odd numbers in this table, but now my question: Q: How can I begin and proceed with a proof, that this table contains /that this transformation rule describes all positive odd integers $ \ge 3$ ? Remark: perhaps my question is not optimally formulated, I'd even like getting help for this I tried, whether it is useful to reformulate the transformation in such a way: $$T: x_k = 4j + r \to x_{k+1}=6j +r \qquad \qquad \text{ for } j \ge 1 , r \in \{-1,1\} $$ then look at the inverse and ask, whether any number of the form $ x=6j \pm 1$ under the inverse transform has a trajectory, which ends at a number of the form $3+6i $. But I have no idea how to arrive at a so-to-say "completeness"-statement. [update] after the comment of André Nicholas - ansatz transferred into a new answer AI: Hint 1: What would be the smallest odd integer missing from the list? Hint 2: You already made the observation that an odd integer $\not\equiv 3\pmod6$ has a preimage (that is smaller). Hint 3: The empty set is the only set of positive odd integers that does not have a smallest element. Fermat's infinite descent was one of the first proofs by induction.
H: An Attempted Proof of Cantor's Theorem OK, I have read two different proofs of the following theorem both of which I can't quite wrap my mind around. So, I tried to write a proof that makes sense to me, and hopefully to others with the same difficulty. Please let me know if this is an accurate proof an how I might fix it. Sorry, about the length I have been thinking about this proof all day! Thanks for your time. Cantor's Theorem: $|A|< |\mathscr{P}(A)|$. Or, in particular there exists no surjection from $A$ onto $\mathscr{P}(A)$. First, we show that $|A|\leq |\mathscr{P}(A)|$ by showing that there exsists a injective function from $A$ to $\mathscr{P}(A)$. Let, $f:A\to \mathscr{P}(A)$ be defined by the function $a\mapsto \{a\} $ that is that every element $a \in A$ is mapped to the singleton set in $\mathscr{P}(A)$ containing the element $a$. Thus we have found an injection from $A$ to $\mathscr{P}(A)$ and therefore $|A|\leq |\mathscr{P}(A)|$. Now, in order to show that $|A|<|\mathscr{P}(A)|$ we must show that there does not exist a surjective function from $A$ to $\mathscr{P}(A)$. Let, us suppose that there does exist a surjective function $f:A\to \mathscr{P}(A)$ then it is implied (by the nature of this mapping) that $(\forall a\in A)(f(a) \subseteq A)$. Now, let us define a set $B:=\{x\in A : x\notin f(x)\}$. We want to show two things: first we want to show that $B\subseteq A$, and secondly we want that $B \not\subseteq\mathscr{P}(A)$. If the set $B=\emptyset$ then $B\subseteq A$ since the empty set is a subset of all sets. If $B\not =\emptyset$ then $B\subseteq A$ by the definition of $B$. Consequently, there must be an element $x'\in A$ such that $f(x')=B$ or otherwise $B\not\subseteq \mathscr{P}(A)$. In other words there must exist an an element $x'$ in $A$ such that the image of $x'$ under $f$, $f(x')$ , belongs to $B$ in order for $B \subseteq \mathscr{P}(A)$, or otherwise the opposite is true. Since we have already determined that $B \subseteq A$ then it must be true that $x'\in B$ or that $x \notin B$. If $x'\in B$ then $x' \notin f(x')$ which means that $x' \notin B$ since $f(x')=B$. Likewise, if $x' \notin B$ then $x' \in f(x) = B$. So, either way we arrive at a contradiction, and therefore it must be true that $B \not\subseteq \mathscr{P}(A)$. Hence, $f$ is not surjetive. In conclusion, we showed that $|A|\leq |\mathscr{P}(A)|$, and since the function $f$ was arbitrary mapping it must be true that there does not exist a surjective mapping from $A$ to $\mathscr{P}(A)$. And therefore we have that $|A|< |\mathscr{P}(A)|$ is always true. I am not sure, if there are any flaws in my logic please let me know. I would like to edit the post so that a detailed proof will be available for other who have struggled with this proof. AI: The argument is basically just a slightly more detailed than usual version of the standard proof, but you’ve made one consistent error throughout, writing $\nsubseteq\wp(A)$ when you actually mean $\notin\operatorname{ran}f$. We want to show two things: first we want to show that $B\subseteq A$, and secondly we want that $B\nsubseteq\wp(A)$. No, we want to show that $B\subseteq A$ and that $B\notin\operatorname{ran}f$, thereby showing that the function $f$ is not a surjection. There’s no need to split the proof that $B\subseteq A$ into two cases: you defined $B$ to be $\{x\in A:x\notin f(x)\}$, which automatically makes $B$ a subset of $A$. Consequently, there must be an element $x'\in A$ such that $f(x')=B$ or otherwise $B\nsubseteq\wp(A)$. 
The last bit should be $B\notin\operatorname{ran}f$. We know that $B\subseteq A$ and hence that $B\in\wp(A)$, and we’re assuming that $f$ is a surjection, so that everything in $\wp(A)$ is in the range of $f$. And since $B$ is not in general a collection of subsets of $A$, it cannot in general be a subset of $\wp(A)$; it’s an element of $\wp(A)$. (There are special circumstances in which $B$ might turn out, more or less by accident, to be a subset of $\wp(A)$.) In other words there must exist an an element $x'$ in $A$ such that the image of $x'$ under $f$, $f(x')$, belongs to $B$ in order for $B\subseteq\wp(A)$, or otherwise the opposite is true. Again, we know that $B\subseteq A$, i.e., that $B\in\wp(A)$; $B$ is certainly not in general a subset of $\wp(A)$. What you should be saying here is: In other words, there must be an element $x'$ of $A$ such that the image of $x'$ under $f$, $f(x')$, is $B$ in order for $B\in\operatorname{ran}f$ to be true; otherwise, the opposite is true, and $f$ is not a surjection. The next sentence has an unnecessary bit whose presence might actually confuse some, if they thought that it actually was relevant: Since we have already determined that $B\subseteq A$ then it must be true that $x'\in B$ or that $x\notin B$. The hypothesis is unnecessary: no matter what $x'$ and $B$ are, exactly one of the statements $x'\in B$ and $x'\notin B$ is true. So, either way we arrive at a contradiction, and therefore it must be true that $B\nsubseteq\wp(A)$. This is the same mistake that you’ve made in several other places: it should read ‘and therefore it must be true that $B\notin\operatorname{ran}f$’.
H: Finding a power series representation of the function $f(x)=\frac{2}{3-x}$ I feel like I'm on the right track, but I don't know if I need to do something else to finish it off... \begin{align*} f(3x)&=\frac{2}{3-3x}\\ &=\frac{2}{3(1-x)}\\ &=\frac{2}{3}\cdot\frac{1}{1-x}\\ &=\frac{2}{3}\cdot\sum_{n=0}^{\infty}x^{n} \end{align*} I feel like - OK, so I substituted $x$ with $3x$ so that I could make the fraction look like how I needed it to - but do I need to do something else to reverse the substitution? Or do I have to do this completely differently? AI: Your method looks great, but you aren't quite finished! The only thing I would change in the beginning is to let $x=3u$ rather than letting $x=3x$. It's a notational thing, but letting $x=3x$ is not a true substitution. Using my method, at the end you would have $$ f(3u)=\frac{2}{3}\cdot\sum_{n=0}^\infty u^n. $$ To put the function back in terms of $x$, we can make use of our substitution by plugging back in that $3u=x$ and that $u=\frac{x}{3}$ resulting in $$ f(x)=\frac{2}{3}\cdot\sum_{n=0}^\infty \left(\frac{x}{3}\right)^n. $$
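As a quick sanity check (my own addition, not part of the answer), the partial sums of $\frac{2}{3}\sum_{n\ge 0}(x/3)^n$ can be compared against $\frac{2}{3-x}$ for any sample value with $|x|<3$:

```java
public class SeriesCheck {
    public static void main(String[] args) {
        double x = 1.7;                        // sample point inside the radius of convergence
        double partial = 0.0, term = 2.0 / 3.0;
        for (int n = 0; n <= 60; n++) {
            partial += term;                   // add (2/3)*(x/3)^n
            term *= x / 3.0;
        }
        System.out.println(partial);           // ~1.5384615384615383
        System.out.println(2.0 / (3.0 - x));   //  1.5384615384615385
    }
}
```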
H: Proving $\int\limits _0^\infty\frac {\sin^{2n+1}x}{x}dx=\frac 1 {4^n}\binom{2n}{n}\int_0^\infty \frac {\sin{x}}{x}dx$ The question is Prove that for any $n\in\mathbb N$, $\int\limits _0^\infty\frac {\sin^{2n+1}x}{x}dx=\frac 1 {4^n}\binom{2n}{n}\int_0^\infty \frac {\sin{x}}{x}dx$ I don't have any ideas how to solve it and I'll be glad to get a hint. AI: See Proving that $\int_{0}^{\infty}\frac{\sin^{2n+1}(x)}{x} \ dx=\frac{\pi \binom{2n}{n}}{2^{2n+1}}$ by induction. We have $$ \frac{\pi\binom{2n}{n}}{2^{2n+1}}=\frac{1}{4^n}\binom{2n}{n}\int_0^\infty\frac{\mathrm{sin}x}{x}dx $$ since $$ \int_0^\infty\frac{\mathrm{sin}x}{x}dx=\frac{\pi}{2}. $$
H: Proving convergence of a series I need to show that the series of general term $$\tanh \frac{1}{n}+ \ln \frac{n^2-n}{n^2+1}$$ converges. I was thinking to use an equivalence as $n \rightarrow \infty$ We know that $ \tanh \frac{1}{n}= \frac{1}{n} - \frac{1}{6n^3}+ o(\frac{1}{n^4})\sim \frac{1}{n}$ and $\ln\frac{n^2-n}{n^2+1}=\ln \frac{n^2(1-\frac{1}{n})}{n^2(1+\frac{1}{n^2})}= \ln\frac{1-\frac{1}{n}}{1+\frac{1}{n^2}}= \ln (1-\frac{1}{n})- \ln(1+\frac{1}{n^2})= -\frac{1}{n}-\frac{1}{n^2} \sim -\frac{1}{n} $ which when we add both gives me zero (therefore my equivalence is incorrect). Can someone explain to me where is my mistake (I should be getting and equivalence egal to $\frac{-3}{2n^2}$ Thank you in advance AI: You should keep the error term in your computations. In particular in the taylor approximation of $\ln$ you should keep the term $\frac 1 {n^2}$. $$ \tanh \frac 1 n + \ln\frac{n^2-n}{n^2+1}= \left(\frac 1 n + o(\frac 1 {n^2})\right) + \left(-\frac 1 n -\frac 3{2n^2} + o(\frac 1 {n^2})\right) = -\frac{3}{2n^2} + o(\frac 1 {n^2}). $$ In general a series of the form $$ \sum_n \frac{c}{n^\alpha} + o(\frac{1}{n^\alpha}) $$ converges if $\alpha>1$ and diverges if $\alpha\le 1$.
H: Proving that the limit $ \lim\limits_{n\rightarrow \infty} (n!)^{\frac{1}{n}}$ diverges to infinity I came across this limit in some context: $$ \lim\limits_{n\rightarrow \infty} (n!)^{\frac{1}{n}}$$ I could only say that $n! > n$ implies the limit is greater than or equal to $1$. However, the result seems to be infinity. I do not know how to arrive at this result. Any ideas? Based on the answer and the comment below, I wonder if it is possible to prove this using elementary Calculus? AI: You can do it like this. Fix an arbitrary $n_0$ and see what happens when $n > n_0$. Clearly, $$ n! = n_0! \cdot (n_0+1) (n_0+2) \ldots (n). $$ It follows that $$ n! \geq n_0! \cdot n_0^{n - n_0}, $$ and $$ \sqrt[n]{n!} \geq \sqrt[n]{n_0! \cdot n_0^{n-n_0}} = \sqrt[n]{\frac{n_0!}{n_0^{n_0}}} \cdot n_0. $$ Now let's see what happens when $n \to \infty$. The right hand side converges to $n_0$. So, when $n$ is large enough, we can say, for example, that $\sqrt[n]{n!} \geq \frac{1}{2}n_0$. But, since $n_0$ is arbitrary, we see that for any positive $C$ we will have $\sqrt[n]{n!} \geq C$ for all large enough $n$. It means that $\sqrt[n]{n!}$ tends to infinity.
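A numerical illustration (not needed for the proof): computing $(n!)^{1/n}=\exp\bigl(\tfrac1n\sum_{k\le n}\ln k\bigr)$ avoids overflow and makes the divergence visible.

```java
public class NthRootFactorial {
    public static void main(String[] args) {
        double logFactorial = 0.0;             // running value of log(n!)
        for (int n = 1; n <= 1_000_000; n++) {
            logFactorial += Math.log(n);
            if (n == 10 || n == 1000 || n == 1_000_000) {
                System.out.printf("n=%7d  (n!)^(1/n) = %.3f%n",
                                  n, Math.exp(logFactorial / n));
            }
        }
        // Output grows roughly like n/e, so (n!)^(1/n) tends to infinity.
    }
}
```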
H: Pattern recognition next shape Is there any logic for finding the next shape in the blank? I think it's an hard one. AI: As the comments pointed out, these types of problems can be really ambiguous. Here's my guess. Note that out of the four options, the main thing that is different is the number of white shapes. So we need to come up with a rule that tells us how many white shapes should be in the last column. Scanning along the first row, there seems to be the following rule: If there is a white shape in the second column that is in a different position compared to a white shape in the first column, then both white shapes survive to be in the last column. Scanning along the second row, there seems to be the following rule: If there is a white shape in the second column that is in the same position compared to a white shape in the first column, then only one of the white shapes survive to be in the last column. Scanning along the last row, we see that the triangle on the left is in the same position in both columns, implying that only one of them will be in the last column. Furthermore, we see that the triangle on the right is in a different position in both columns, implying that both of them will be in the last column. Thus, my guess is that the answer is the leftmost option labelled $(4)$, since it has $1+2=3$ white triangles.
H: Can modulo be used in consecutive multiplications or divisions? I used to participate in programming competitions and at times I see that the solution should be the remainder when divided with some big prime number (usually that would be 1000000007). In one of the problems, I need to divide a very big number by another very big number and find the modulo of the division. Say one of the big numbers is factorial of 10000. So, the actual problem is how can I find the solution to $$((A*B*C...)/(a*b*c*...))mod 1000000007$$ where the numerator and denominator are so huge. I tried this by hand and it holds good. $$(2 * 3 * 4 * 5) mod 7 = (((((2 mod 7 * 3) mod 7) * 4) mod 7) * 5) mod 7$$ I would like to know, whether it will be True always. If so I will find the numerator and denominator values with this method. Now, the second part, is division. $$(A / B) mod C = ((A mod C)/(BmodC))modC$$ I thought that the above equation would be true. But it doesnt work for, $$(((2*3)mod5)/(1*2)mod5)mod5\ne((2*3)/(1*2))mod5$$ So, how can I find value for the expression which I mentioned at the beginning? AI: The way to divide by a number $a$ $(\bmod p)$ is to multiply by its inverse. To find its inverse, you can use the extended Euclidean algorithm. In your example, we are dividing by $2$ modulo $5$. We calculate $$5 = 2 \cdot 2 + 1$$ and have found the gcd of $2$ and $5$ after one step of the algorithm. Now write $1 = 5 - 2 \cdot 2$ and reduce modulo $5$, to get $$1 \equiv (-2) \cdot 2 \equiv 3 \cdot 2 \, (\bmod 5),$$ so $2^{-1} \equiv 3 (\bmod 5).$ In particular, you have $$6 / 2 \equiv 6 \cdot 3 \equiv 1 \cdot 3 \equiv 3 (\bmod 5).$$ For a more difficult example, consider $123456/192$ modulo $17$. We first reduce $192 \equiv 5 (\bmod 17)$. Now we calculate the inverse of $5$ mod $17$: $$17 = 3 \cdot 5 + 2,$$ $$5 = 2\cdot 2 + 1,$$ and so $$1 = 5 - 2 \cdot 2 = 5 - 2(17 - 3 \cdot 5) =7 \cdot 5 - 2 \cdot 17.$$ This gives $7 \cdot 5 \equiv 1 (\bmod 17)$, so $5^{-1} = 7$. Now $123456/192 \equiv 123456 \cdot 5^{-1} \equiv 123456 \cdot 7 \equiv 2 \cdot 7 \equiv 14$ (modulo $17$). In fact, $123456/192 = 643 = 37 \cdot 17 + 14$, so it checks out.
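Since the question comes from programming contests, here is a minimal Java sketch of the same technique (identifiers are my own, not from the question): the extended Euclidean algorithm yields $b^{-1} \bmod p$, so $(A/B) \bmod p$ is computed as $(A \bmod p)\cdot(B \bmod p)^{-1} \bmod p$. This assumes $\gcd(B,p)=1$, which holds whenever $p$ is prime and $p$ does not divide $B$.

```java
public class ModDivision {
    // Returns x with (a * x) % m == 1, assuming gcd(a, m) == 1 (extended Euclid).
    static long modInverse(long a, long m) {
        long oldR = a % m, r = m;
        long oldS = 1, s = 0;
        while (r != 0) {
            long q = oldR / r;
            long t = oldR - q * r; oldR = r; r = t;   // gcd steps
            t = oldS - q * s;      oldS = s; s = t;   // Bezout coefficient for a
        }
        return ((oldS % m) + m) % m;                  // oldR == gcd == 1 here
    }

    public static void main(String[] args) {
        long p = 17;
        // The answer's worked example: 123456 / 192 (mod 17).
        long result = (123456 % p) * modInverse(192 % p, p) % p;
        System.out.println(result);                   // 14
    }
}
```

When $p$ is prime, one can alternatively use Fermat's little theorem, $b^{-1} \equiv b^{p-2} \pmod p$, computed with fast exponentiation.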
H: Find coefficient of $x^{100}$ in the power series expansion of $\frac{1}{(1-x)(1-2x)(1-3x)}$ I'm trying to find to coefficient of $x^{100}$ of $$\sum_{n=0}^{∞}a_n x^{n}\ =\frac{1}{(1-x)(1-2x)(1-3x)}.$$ I used the sum: $$\frac{1}{1-x}\ = 1+x+x^2+\ldots.$$ So : $$\frac{1}{(1-x)(1-2x)(1-3x)}= \left(1+x+x^2+\ldots\right)\left(1+2x+(2x)^2+\ldots\right)\left(1+3x+(3x)^2+\ldots\right). $$ But multiplying out the right hand side to extract the coefficient $x^{100}$ is tedious. Any idea how I can obtain the coefficient in a simpler way ? Regards AI: Use partial fractions: $$\frac{1}{(1-x)(1-2x)(1-3x)} = \frac{1}{2} \cdot \frac{1}{1-x} - 4 \cdot \frac{1}{1-2x} + \frac{9}{2} \cdot \frac{1}{1-3x}.$$
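Not part of the original answer: reading the coefficient off the partial fractions gives $[x^n]=\tfrac12-4\cdot2^n+\tfrac92\cdot3^n$, and a short Java check (my own) compares this closed form against the brute-force convolution of the three geometric series.

```java
import java.math.BigInteger;

public class Coefficient {
    public static void main(String[] args) {
        int n = 100;
        BigInteger TWO = BigInteger.valueOf(2), THREE = BigInteger.valueOf(3);

        // Closed form from the partial fractions: (1 - 2^(n+3) + 9*3^n) / 2.
        BigInteger closed = BigInteger.ONE
                .subtract(TWO.pow(n + 3))
                .add(BigInteger.valueOf(9).multiply(THREE.pow(n)))
                .divide(TWO);

        // Brute force: sum of 2^j * 3^k over all i + j + k = n with i, j, k >= 0.
        BigInteger brute = BigInteger.ZERO;
        for (int j = 0; j <= n; j++) {
            for (int k = 0; j + k <= n; k++) {
                brute = brute.add(TWO.pow(j).multiply(THREE.pow(k)));
            }
        }

        System.out.println(closed);                 // the coefficient of x^100
        System.out.println(closed.equals(brute));   // true
    }
}
```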
H: What is the group $\langle U, * \rangle$ where $U$ is the set of roots of unity and * is normal multiplication? I'm having trouble understanding my textbook in this regard. Everything else seems to make sense, such as the group $\langle \mathbb{Z}_{1000}, + \rangle$ to list an example. In the text, it is stated that $\langle U, * \rangle$ contains two elements that are their own inverse (namely, $-1$ and $1$), but what does that mean in this scope? I am having a tough time picturing it. Sure, $1*1 = 1$ and $-1 * -1 = 1$, but is that what they're getting at? Doesn't $U$ have some relation to the unit circle, that I should be factoring in? I understand $\langle U_4, * \rangle$ having four solutions to $z^4 = 1$, but where does the $U$ versus $U_4$ come in similarly? I'm just having a tough time comprehending $U$ and would love for some insight into it. AI: Guide: The main crux of the answer to your question begins with the bold words "The main point is outlined ...". Nevertheless, I recommend carefully reading the material up to this point, in particular Exercises 1 and 2 below, because if you understand the solutions to those Exercises, then I think you will have an answer to your question. Let $(U,\ast)$ denote the multiplicative group consisting of all roots of unity in $(\mathbb{C}^{\times},\cdot)$ (the multiplicative group of non-zero complex numbers). In words, $U$ consists of all (necessarily non-zero) complex numbers $z$ such that $z^n=1$ for some natural number $n\geq 1$. Exercise 1: Prove that $(U,\ast)$ is a subgroup of $(\mathbb{C}^{\times},\cdot)$. (Note that I have used $\ast$ and $\cdot$ to indicate the multiplications on $U$ and $\mathbb{C}^{\times}$, respectively. The reason is that $\ast:U\times U\to U$ and $\cdot:\mathbb{C}^{\times}\times \mathbb{C}^{\times}\to \mathbb{C}^{\times}$ are different functions because they have different domains. Essentially, one component of this exercise asks you to verify that the function $\ast$ is the restriction of the function $\cdot$; but this should be obvious.) Of course, we also have groups $(U_n,\ast)$ for all natural numbers $n\geq 1$. (I denote the multiplications on all of these groups by the same symbol; technically, this is an abuse of notation as I explained above but this should not cause any confusion.) In words, $U_n$ consists of all (necessarily non-zero) complex numbers $z$ such that $z^n=1$. Exercise 2: Prove that $(U_n,\ast)$ is a subgroup of $(U,\ast)$ for all natural numbers $n\geq 1$. By transitivity of the relation "is a subgroup of", it follows from Exercise 1 that $(U_n,\ast)$ is a subgroup of $(\mathbb{C}^{\times},\cdot)$. The main point is outlined in the form of the following Exercise: Exercise 3: Prove that as sets $U=\bigcup_{n=1}^{\infty} U_n$ but that this is not an ascending union, i.e., $U_n\not\subseteq U_{n+1}$ for all $n\geq 1$. For which $n,m\geq 1$ is it true that $U_n\subseteq U_m$? In fact, we have more information than is strictly contained in Exercise 3; the $U_n$'s are all groups and their union is also a group. What does this mean/imply? The main thing relevant to your question is that every element $z\in U$ is an element of $U_n$ for some $n$ and, in this case, the element $z^{-1}\in U$ will also be an element of $U_n$. (If this is not obvious, prove it! It's formally a consequence of Exercise 2; that $U_n$ is a subgroup of $U$.) Similarly, if $z\in U$, then the inverse of $z$ in $\mathbb{C}^{\times}$ is also an element of $U$. 
(Again, because formally $U$ is a subgroup of $\mathbb{C}^{\times}$!) Exercise 4: Let $S$ and $T$ be the sets of elements in $\mathbb{C}^{\times}$ and $U$, respectively, consisting of those elements equal to their own inverses. Prove that $S\cap U=T$. Exercise 5: What is $S$? Does that answer your question in conjunction with Exercise 4? The role of the group $(\mathbb{C}^{\times},\cdot)$ in this situation can also be played by the circle group $(\mathbb{S}^{1},\cdot)$ consisting of all complex numbers $z$ with $\left|z\right|=1$. $U$ is a subgroup of the circle group and the circle group is, in turn, a subgroup of $\mathbb{C}^{\times}$. I hope this helps!
H: Describing a Galois group. From Robert Ash (Basic Abstract Algebra) Suppose that $E=F(\alpha)$ is a finite Galois extension of $F$, where $\alpha$ is a root of the irreducible polynomial $f \in F[x]$. Assume that the roots of $f$ are $\alpha_1 = \alpha, \alpha_2, ... , \alpha_n$. Describe, as best you can from the given information, the Galois group of $E/F$. Answer: $G = \{\sigma_1, ... ,\sigma_n\}$ where $\sigma_i$ is the unique $F$-automorphism of $E$ that takes $\alpha$ to $\alpha_i$. There are two things that confuse me about this answer: 1) What if some of the $\alpha_i$'s are in F? Then how would we have an $F$-automorphism that takes $\alpha$ to $\alpha_i$? Because we know that any $F$-automorhpism must fix all elements of $F$, right? 2) Don't we need $E$ to be a normal extension? Because how else would we know that the other roots of $f$ are also contained in $E$? Thank you in advance AI: 1) If any of the $\alpha_i$ were in $F$, then $f$ has a factor $X-\alpha_i$. Since $f$ is irreducible, we would have $f = x-\alpha_i$ and so $\alpha_i$ is the only root. Still, you have the identity that takes $\alpha_i$ to itself, and this is the entire Galois group. 2) By definition, a Galois extension is normal and separable. If this wasn't your definition then you must have shown it to be equivalent earlier.
H: Finding angle of intersecting lines $\triangle ABD$ and $\triangle ACE$ are equilateral triangles. Can it be proved that $\triangle ADC$ and $\triangle ABE$ are congruent? Or, if given that they are congruent, what is the value of $\angle BOD$? A similar problem, with squares replacing the equilateral triangles, has also been added: there, $\angle BOE=?$ AI: Hint: Observe that you can rotate $\triangle ADC$ around point $A$ to get $\triangle ABE$. The rest follows from properties of rotation. The very same technique can be applied to the problem with squares. I hope this helps ;-)
H: What went wrong? Calculate mass given the density function Calculate the mass: $$D = \{1 \leq x^2 + y^2 \leq 4 , y \leq 0\},\quad p(x,y) = y^2.$$ So I said: $M = \iint_{D} {y^2 dxdy} = [\text{polar coordinates}] = \int_{\pi}^{2\pi}d\theta {\int_{1}^{2} {r^3\sin^2\theta dr}}$. But when I calculated that I got the answer $0$ which is wrong, it should be $\frac{15\pi}{8}$. Can someone please tell me what I did wrong? AI: You have the integral $$M=\int_\pi^{2\pi} d\theta\int_1^2 r^3 \sin^2\theta dr $$ This seems fine. One thing is sure: this integral is not zero. Indeed, you can write it as a product of two integrals $$M=\left(\int_\pi^{2\pi} \sin^2\theta d\theta\right) \left(\int_1^2 r^3 dr\right)$$ and both of those integrals give strictly positive numbers. I would advise you to compute these two integrals separately and check that you get strictly positive numbers.
H: Path space of $S^n$ Suppose that $p,q$ are two non conjugate points on $S^n$ ($p \ne q,-p$). Then there are infinite geodesics $\gamma_0, \gamma_1, \cdots$ from $p$ to $q$. Let $\gamma_0$ denote the short great circle arc from $p$ to $q$; let $\gamma_1$ denote the long great circle arc $p(-q)(-p)q$ and so on. The subscript $k$ denotes the number of times that $p$ or $-p$ occurs in the interior of $\gamma_k$. Why the index $\lambda(\gamma_k)= k(n-1)$? AI: Let $\gamma:[0,1]\to M$ be a geodesic on a Riemannian manifold $M$. The index $\gamma$ is, by definition (and depending on your definition!), the number of points on $\gamma$ conjugate to $\gamma(0)$ counted with multiplicity. If we're on the sphere $\mathbb{S}^n$, then it's a computation that the only points conjugate to $p\in \mathbb{S}^n$ are $p$ and $-p$ (the point of $\mathbb{S}^n$ antipodal to $p$). Moreover, the multiplicity of the conjugate point $-p$ is $n-1$ if $n$ is the dimension of $\mathbb{S}^n$. In English (but advanced English!), there are $n-1$ linearly independent Jacobi fields along a geodesic defining a great circle arc from $p$ to $-p$ whose values at the endpoints are zero. If you understand this, then all you need to do is to figure out how many times $p$ and $-p$ occur on the geodesic $\gamma_k$ (not including the first time corresponding to $\gamma_k(0)$ if $\gamma_k$ is viewed a function from $[0,1]$ to $M$). And this number is: $0$ for $\gamma_0$ (because the short great circle arc from $p$ to $q$ doesn't contain $-p$ (because it's short!)) $n-1$ for $\gamma_1$ (because $-p$ occurs once on the geodesic $\gamma_1$ and with multiplicity $n-1$ as explained above) $\cdots$ $k(n-1)$ for $\gamma_k$ (because the number of times $p$ or $-p$ occurs on $\gamma_k$, not including the first time $p$ occurs at the initial point, is exactly $k$; I encourage you to visualise this for at least $k=2$) I hope this helps!
H: The sum of square roots of non-perfect squares is never integer My question looks quite obvious, but I'm looking for a strict proof for this: Why can't the sum of two square roots of non-perfect squares be an integer? For example: $\sqrt8+\sqrt{15}$ isn't an integer. Well, I know this looks obvious, but I can't prove it... AI: Assume that $a, b \in \mathbb N $ are not perfect squares and $ \sqrt a + \sqrt b = n \in \mathbb N$. Then $$ a = (n - \sqrt b)^2 = n^2 - 2 n \sqrt b + b $$ which means that $\sqrt b$ is a rational number. This contradicts the fact that the square root of an integer is either an integer or a irrational number.
H: $\mathbb{R}^k$ and $\mathbb{R}^k$ are trivially diffeomorphic. Is this claim correct? If so, is it because identity is the diffeomorphism? AI: Yes, the identity map is a diffeomorphism. The reason is that it is smooth and its inverse (the identity map again!) is smooth. Finally, it's bijective for obvious reasons. The following exercises provide more interesting examples of diffeomorphisms from $\mathbb{R}^{k}$ to $\mathbb{R}^k$. Exercise 1: Let $T:\mathbb{R}^{k}\to \mathbb{R}^{k}$ be the linear operator defined by the invertible $k\times k$ matrix $A$ by the rule $T(x)=Ax$ for all $x\in \mathbb{R}^{k}$. Prove that: (a) $T$ is bijective (b) $T$ is smooth and its derivative at all points of $\mathbb{R}^{k}$ is $A$ (c) $T^{-1}$ is smooth and its derivative at all points of $\mathbb{R}^{k}$ is $A^{-1}$ Therefore, $T:\mathbb{R}^{k}\to \mathbb{R}^{k}$ is a diffeomorphism! Exercise 2: Prove or give a counterexample: every diffeomorphism from $\mathbb{R}^{k}$ to itself is of the form given by Exercise 1. Exercise 3: Prove that if $m\neq n$, then $\mathbb{R}^{m}$ is not diffeomorphic to $\mathbb{R}^{n}$. (Hint: use the implicit function theorem.) I hope this helps!
H: Prove that if $f_n\to f$ uniformly and $\int_0^\infty|f_n|\le M$, then $\int_0^\infty|f|<\infty$ Again while preparing for calculus I found another interesting question: Prove or give a counterexample: if $f_n\to f$ uniformly on $[0,\infty)$ and $\forall n\in\mathbb {N}\ \int_0^\infty|f_n|dx\le M$, then $\int_0^\infty|f(x)|dx<\infty$ It seems incorrect. I'm looking for a sequence of functions that tends to a constant $c$ uniformly while its integrals are bounded. If the convergence was in $[2,b] (b\in\mathbb R)$, I could have taken $f_n(x)=\frac {x^n}{x^n+1}$ which indeed is a counterexample. How can I find a counterexample? AI: Fix $A>0$. Then $$\int_0^A|f(x)|\mathrm dx=\lim_{n\to +\infty}\int_0^A|f_n(x)|\mathrm dx\leqslant \liminf_{n\to +\infty}\int_0^{+\infty}|f_n(x)|\mathrm dx\leqslant M.$$ As $A$ was arbitrary, $\int_0^{+\infty}|f(x)|\mathrm dx\leqslant M$. It actually works when we only have pointwise convergence, and it is called Fatou's lemma, but it involves more advanced tools.
H: Proof that Scheme T implies reflexivity In my modal logic book it's written that, for each frame $F(S,R)$ the accessibility relation $R$ is reflexive IF AND ONLY IF the scheme T:$\square A \implies A$ is valid in $F$. Even if I can easily prove that reflexivity $\implies$ T, I can't prove that T $\implies$ reflexivity. As a counterexample I show a model $M(S,R,V)$ where $S=\{\alpha,\beta\}$, $R=\{(\alpha,\beta),(\beta,\alpha)\}$ and $V(A)=\{\alpha,\beta\}$. In this model T is true for both $\alpha$ and $\beta$, but R is not reflexive. What am I doing wrong? what is the right way to demonstrate T $\implies$ reflexivity? AI: In your model $T$ is true, but in your frame it is not valid. A scheme is valid if it true in the frame for every possible valuation. The standard proof is via the contrapositive. Assume that $F(S,R)$ is not reflexive, i.e. there is some node $s\in S$ such that $\lnot sRs$. Take the valuation $V(p)=S\setminus\{s\}$. Then $(S,R,V),s\Vdash\square p$ but of course $(S,R,V),s\nVdash p$. Therefore $(S,R,V)\nvDash\square p\to p$ and thus $(S,R)\nvDash\square p\to p$.
H: Differentiate the following w.r.t. $\tan^{-1} \left(\frac{2x}{1-x^2}\right)$ Differentiate : $$ \tan^{-1} \left(\frac {\sqrt {1+x^2}-1}x\right) \quad w.r.t.\quad \tan^{-1} \left(\frac{2x}{1-x^2}\right) $$ AI: Here $\tan^{-1}$ denotes the arctangent. Write $u=\tan^{-1}\left(\frac{\sqrt{1+x^2}-1}{x}\right)$ and $v=\tan^{-1}\left(\frac{2x}{1-x^2}\right)$. Put $x=\tan\theta$ and simplify: $$u=\tan^{-1}\left(\frac{\sqrt{1+\tan^2\theta}-1}{\tan\theta}\right)=\tan^{-1}\left(\frac{\sec\theta-1}{\tan\theta}\right)=\tan^{-1}\left(\frac{1-\cos\theta}{\sin\theta}\right)=\tan^{-1}\left(\frac{2\sin^2(\theta/2)}{2\sin(\theta/2)\cos(\theta/2)}\right)=\tan^{-1}\left(\tan\frac{\theta}{2}\right)=\frac{\theta}{2}.$$ But $\theta=\tan^{-1}x$, so $u=\frac{1}{2}\tan^{-1}x$. Similarly, $v=\tan^{-1}\left(\frac{2x}{1-x^2}\right)=2\tan^{-1}x$. Finally, $$\frac{du}{dv}=\frac{du/dx}{dv/dx}=\frac{\frac{1}{2}\cdot\frac{1}{1+x^2}}{\frac{2}{1+x^2}}=\frac{1}{4}.$$
H: Convergence of $\int\limits_{-\infty}^{\infty}\frac{\sin{x^2}}{x}dx$ I had to prove that the integral $\int\limits_{-\infty}^{\infty}\frac{\sin{(x^2)}}{x}dx$ converges. I thought splitting it to $$\int_{-\infty}^{-1}\frac{\sin{x^2}}{x}dx+\int_{-1}^{0}\frac{\sin{x^2}}{x}dx+\int_0^1\frac{\sin{x^2}}{x}dx+\int_1^\infty\frac{\sin{x^2}}{x}dx$$ and then to activative on the first and last dirichle test but I'm sure that $\int_a^bsinx^2$ is bounded... Furthermore what can be done with the integrals in the middle. Mupad claims the integral converges to 0. How can I apply convergence tests to these integrals? AI: I should say something more: we can determine the value of the integral (from $0$ to $\infty$)! It's easy, however, if we apply the substitution $x=\sqrt u$ where $u\ge 0$. Since $u=x^2$ is (strictly) monotone, the substitution is justified. $$\int_0^\infty\frac{\sin x^2}xdx=\frac12\int_0^\infty\frac{\sin u}udu=\frac\pi4$$ The last one is due to Dirichlet. EDIT: We should note that there's something schematic: $$\frac{f(x^m)}xdx=\alpha\frac{f(u)}udu$$ where $u=x^m$ and $\alpha$ is a constant.
H: How to prove that a circle passing through the center of the circle of inversion invert to a line? link to the referenced picture: http://www.flickr.com/photos/90803347@N03/9220374271/ In order to prove the Arbelos Theorem, as in the picture above, one need to prove that the semicircle $C$ invert to line $l$, as well as $D$ invert to $m$, with respect to the circle of inversion that is centered at point $P$ and orthogonal to the circle $K_n$. How to prove this? AI: Inversion takes circles through the center of inversion to lines. Here (the full circles belonging to) $C$ and $D$ have no point in common except the center of inversion, hence their images have no finite point in common either, i.e. the lines they become are parallel. Also, $K_n$ is invariant under the inversion as it is orthogonal to the inversion circle. Since $C$ and $D$ are tangent, their images are tangent to (the image of) $K_n$. And because $C,D$ are orthogonal to the line $PQ$, their images are orthognonal to the image of $PQ$, which is again the line $PQ$ (because it passes through the center of inversion $P$).
H: Determine if the following sequence converges in the quadratic mean "For integer $n$ let $f_n(x) = \dfrac{1}{\sqrt{1 + nx^2}}$ say if the sequence $f_n$ converges in quadratic mean." This is what I have concluded so far: $$\lim\limits_{n\rightarrow \infty}f_{n}(0) = 1$$ For $x \neq 0$ $$\lim\limits_{n\rightarrow \infty}f_{n}(x) = 0$$ For convergence in quadratic mean we must show that $$ \lim\limits_{n\to\infty}||f_{n}-f||_{2}=0$$ where $f =\lim\limits_{n\to\infty}f_n$ and $||\cdot||_{2}$ is the $L^{2}$ norm. For $x=0$ it is obvious. What about when $x\neq 0$? AI: The sequence $(f_n)$ is pointwise convergent to the function $f$ defined on the interval $[0,+\infty)$ by $$f(x)=\begin{cases}0&\text{if}\ x>0\\ 1&\text{if}\ x=0\end{cases}$$ which is equal to the zero function almost everywhere. Since we have $$|f_n(x)|^2=\frac{1}{1+nx^2}\leq \frac{1}{1+x^2}=g(x)\quad\forall n\geq 1$$ and the function $g$ is integrable on the interval $[0,+\infty)$, the dominated convergence theorem shows that the sequence $(f_n)$ converges to $f$ in quadratic mean.
H: Why difference quotient of convex functions increases in both variables Let $f: \mathbb R \rightarrow \mathbb R$ be a convex function and $$ g(x,y)=\frac{f(x)-f(y)}{x-y} \textrm{ for } x\neq y. $$ I wish to prove that $g$ is increasing function in both variables. Thanks AI: Fix $y_2 > y_1 > x$ and define $$ \phi(y) = \frac{f(y) - f(x)}{y - x} $$ Then noticing that there is some $0 < t < 1$ such that $ y_1 = (1-t)x + ty_2$ by convexity we have that $$ \phi(y_1) = \frac{f(y_1) - f(x)}{y_1 - x} = \frac{f((1-t)x + ty_2) - f(x)}{(1-t)x + ty_2- x} \leq \frac{(1-t)f(x) + tf(y_2) - f(x)}{(1-t)x + ty_2 - x} = \frac{t(f(y_2)-f(x))}{t(y_2 - x)} = \phi(y_2)$$ You can get all the other cases in a similar fashion.
H: Mandelbrot set incorrect picture I'm writing an algorithm to generate the Mandelbrot set in Java. However, the final picture is incorrect. It looks like this I was wondering if the algorithm was incorrect. public void Mandlebrot() { float reMax=2; float imMax=2; float reMin=-2; float imMin=-2; float xDelta = (reMax - reMin)/test.width; float yDelta = (imMax - imMin)/test.height; int N=20000; float complex = imMin; for(int y=0; y<test.height; y++) { float real = reMin; for(int x=0; x<test.width; x++) { int count = 0; float complexC = 0.4f; float realC = 0.3f; while(count<N && complexC*complexC+realC*realC<=4.0f) { complexC = realC*realC-complexC*complexC + complex; realC = 2*complexC*realC + real; count++; } if(complexC*complexC+realC*realC<=4.0f) { setPixel(x,y) = 1000; } else { setPixel(x,y) = 1+ 1000*count/N; } real+=xDelta; } complex+=yDelta; } } AI: In the loop while(count<N && complexC*complexC+realC*realC<=4.0f) { complexC = realC*realC-complexC*complexC + complex; realC = 2*complexC*realC + real; count++; } You use the updated complexC to compute the new realC, but you ought to use the old one: float oldC = complexC; complexC = realC*realC-complexC*complexC + complex; realC = 2*oldC*realC + real; count++; Besides, you seem to have flipped the real and imaginary parts. I think that only rotates the picture, though.
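A hedged sketch (my own, using the question's variable names) of the inner loop with both issues from the answer corrected: the update-order bug and the swapped real/imaginary updates. Note also that the standard Mandelbrot iteration starts $z$ at $0$ rather than at $0.3+0.4i$.

```java
// Corrected inner-loop sketch; realC + complexC*i plays the role of z,
// and real + complex*i is the pixel's value of c.
int count = 0;
float realC = 0.0f;      // standard Mandelbrot iteration starts with z = 0
float complexC = 0.0f;
while (count < N && complexC*complexC + realC*realC <= 4.0f) {
    // z -> z^2 + c, computing BOTH parts from the old value of z
    float newRealC    = realC*realC - complexC*complexC + real;  // Re(z^2 + c)
    float newComplexC = 2*realC*complexC + complex;              // Im(z^2 + c)
    realC = newRealC;
    complexC = newComplexC;
    count++;
}
```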
H: If $f$ has a pole at $z_0$, then $1/f$ has a removable singularity I tried a few examples and I think that the following in complex analysis holds: If a function $f$ has a pole at $z_0$, then $1/f$ has a removable singularity at this point. Is this correct? AI: If $f$ has a pole at $z_0$ then there is an open $U$ about $z_0$ small enough that$$f(z)=\frac{g(z)}{(z-z_0)^k}$$where $k$ is the order of the pole and $g(z)$ is analytic and non-vanishing on $U$ So on $U\setminus\{z_0\}$,$$\frac1f=\frac{(z-z_0)^k}{g(z)}$$ which is analytic on $U$ since we chose $U$ small enough that $g$ doesn't vanish there. Since $\lim_{z\rightarrow z_0}(z-z_0)\frac1{f(z)}=0$, the singularity is removable.
H: Prove question $(A\setminus B) \cup (B\setminus C) = A\setminus C$ , $ (A\setminus B)\setminus C= A\setminus(B\cup C)$ I want to prove the following statements but for do it I need some hint. \begin{align} \tag{1} (A\setminus B) \cup (B\setminus C) &= A\setminus C\\ \tag{2} (A\setminus B)\setminus C&= A\setminus(B\cup C) \end{align} Thanks! AI: For the first one, suppose that $(A \setminus B) \cup (B \setminus C)$ is not empty. Take any $x \in (A \setminus B) \cup (B \setminus C)$. Then either $x \in A \setminus B$ or $x \in B \setminus C$. Note that in this particular case, both cannot be true (why?). If $x \in A \setminus B$, then $x \in A$ and $x \not \in B$. If $x \in B \setminus C$, then $x \in B$ and $x \not \in C$. This does not imply that $x \in A \setminus C$. If $x \in A \setminus B$, one of the possibilities above, then this does not give us any information about whether $x \in C$. For example, suppose $A = \{1,2,3\},\ B = \{1,2\}$, and $C = \{3\}$. Then $3 \in A \setminus B$ and so $3 \in (A \setminus B) \cup (B \setminus C)$, but $A \setminus C = \{1,2\}$ and so $3 \not \in A \setminus C$.
H: Series Question $\sum_{n=1}^{\infty}\frac{4^n}{7^{n+1}}$ I`m trying to check if the following series are convergent. $$1)\sum_{n=1}^{\infty}\frac{4^n}{7^{n+1}}$$ $$2)\sum_{n=1}^{\infty}(-1)^n\frac{5^n}{4^{n+2}}$$ what I did so far for the first one is: $$\sum_{n=1}^{\infty}\frac{4^n}{7^{n+1}} = q=\frac{4}{7}<1 \rightarrow \frac{\frac{4}{7}}{1-{\frac{4}{7}}}$$ for the next one is the same thing, I just need to check the positive series, if I will understand the first one I think the seond will be ok. so I would like to get some advice, Thanks! AI: Your solution for the first one is only ever so slightly off. Here is a hint: $$\sum_{n=1}^{\infty}\frac{4^n}{7^{n+1}} = \dfrac{1}{7}\sum_{n=1}^{\infty}\biggr(\frac{4}{7}\biggr)^n$$ For the second one, notice that $$\sum_{n=1}^{\infty}(-1)^n\frac{5^n}{4^{n+2}} = \dfrac{1}{16}\sum_{n=1}^{\infty}(-1)^n\biggr(\frac{5}{4}\biggr)^n$$ What do you notice about the term in the geometric series?
H: Homotopy Theory and extensions/liftings. I found the statement: suppose that in the extension problem we have a map f': A -> E homotopic to f, and f' extends. Then it does not follow that f extends. Similarly, if the map g in the lifting problem is homotopic to a map that lifts, it does not follow that g itself lifts. The reader can easily supply counterexamples in both cases. I don't see what a counterexample would be. What I do understand that if, in the extension case, if there is another map i:A -> X then we may be able to find some h:X -> E that extends f'. I can't seem to picture how if we have another map H(1) = f:A -> E (as opposed to H(0) = f') where f and f' are homotopic to each other, that one wouldn't have an extension of f also as by the definition(?) of a homotopy there is a continuous mapping between f and f'. But furthermore since they are both mapping to and from the same spaces A and E, why they would be different. Any thoughts? Thanks, Brian AI: The key concept here is the homotopy extension property; let me quote this from the Wikipedia article of the same name: Let $X\,\!$ be a topological space, and let $A \subset X$. We say that the pair $(X,A)\,\!$ has the homotopy extension property if, given a homotopy $f_t\colon A \rightarrow Y$ and a map $\tilde{f}_0\colon X \rightarrow Y$ such that $\tilde{f}_0 |_A = f_0$, there exists an ''extension'' of $\tilde{f}_0$ to the homotopy $\tilde{f}_t\colon X \rightarrow Y$ such that $\tilde{f}_t|_A = f_t$. Let's assume that the pair $(X,A)$ in your question satisfies the homotopy extension property. Let's say that $f:A\to Y$ and $f':A\to Y$ are homotopic via a homotopy $f_t:A\to Y$ and that $f'$ extends to a map $\tilde{f'}:X\to Y$. The homotopy extension property states precisely in this context that there exists an extension $\tilde{f_t}:X\to Y$ of the homotopy $f_t:A\to Y$ such that $\tilde{f_0}=f'$. Of course, in this case $\tilde{f_1}:X\to Y$ is an extension of $f:A\to Y$. Therefore, if $f':A\to Y$ extends, then so does $f:A\to Y$. Theorem If $(X,A)$ is a CW pair, then $(X,A)$ satisfies the homotopy extension property. So, any counterexample to your claim cannot be a CW pair. The entire discussion above also has an analogue which applies to the corresponding question for liftings. In this case, the relevant concept is the homotopy lifting property; e.g., see http://en.wikipedia.org/wiki/Homotopy_lifting_property . Exercise 1: Show that there is no counterexample that you seek if the pair $(X,A)$ satisfies the homotopy lifting property. Definition 1 A map $h:E\to B$ is a fibration if it satisfies the homotopy lifting property with respect to every pair $(X,A)$. It is a Serre fibration if it satisfies the homotopy lifting property with respect to every CW pair $(X,A)$. Definition 2 A map $i:A\to X$ is a cofibration if it satisfies the homotopy extension property with respect to all spaces $Y$. In other words, you're looking for maps which aren't fibrations and for maps which aren't cofibrations. I hope this helps!
H: find symmetric line of given two line I have one question. Suppose that we have two lines given by equations $$y=2x+3$$ $$y=-2x+11$$ I want to find all equations of lines which these two given lines have same distances from them in plane.As I know symmetric means that the distance between it and the two given lines must be equal. So suppose that the requested line is $$y=kx+b$$ after calculating distances I found the following equation $2k-7b=0$ what does it mean? May you help me? AI: When we say something in a plane is symmetric about a line, we mean it's reflection over that line is unchanged. So this is misleading. How do you define the distance between two lines? The angle? If so, you are looking for an angular bisector of the two lines. But there are two, the vertical line and horizontal line passing through the intersection point of those two lines. The coordinates of this intersection point are calculated as follows: $$2x+3-y=0, -2x+11-y=0$$ $$\implies 0=(2x+3-y)-(-2x+11-y)=4x-8$$ $$\implies x=2, \implies 2x+3-y=7-y=0 \implies y=7.$$ Thus, the vertical line is given by the graph of x=2, the horizontal by the graph of y=7. In general, you would take this intersection point and find the lines passing through it whose angle at the intersection is one of the two which are halfway between those of the two lines you started from. Since the slopes $m, s$ of your starting lines are the tangent of these two angles, one of your angles is given by the average of the arctangents of those two slopes, $\frac{arctan(m)+arctan(s)}{2}$, and the other is that angle plus $\frac{\pi}{2}$. Hence the slopes of your solution lines are $tan(\frac{arctan(m)+arctan(s)}{2}), tan(\frac{arctan(m)+arctan(s)+\pi}{2})$.
H: For fixed $z_i$s inside the unit disc, can we always choose $a_i$s such that $\left|\sum_{i=1}^n a_iz_i\right|<\sqrt3$? Let $z_1,z_2,\ldots,z_n$ be complex number such that $|z_i|<1$ for all $i=1,2,\ldots,n$. Show that we can choose $a_i \in\{-1,1\}$, $i=1,2,\ldots,n$ such that $$\left|\sum_{i=1}^n a_iz_i\right|<\sqrt3.$$ AI: I was not able to think it through properly, but here's a sketch: Use induction as suggested by Berci, but with a little twist. The main idea is that for two numbers $z_i$ and $z_j$ such that $|z_i| < 1$ and $|z_j| < 1$ we can obtain $|z_i\pm z_j| < 1$ as long as some angle (out of four) between them (the difference of arguments) is smaller or equal than $\frac{\pi}{3}$. However, as long as we have 3 or more numbers, we will be able to find such a pair. Quick illustration of the lemma: $z_i$ is somewhere on the blue line, red cross is the $z_j$ and the violet is their sum. The point is that as long as the red cross belongs to darker green, the violet line will stay in light green region. $\hspace{70pt}$ I don't know if I will find enough time to work out all the details, so should this idea suit you, feel free to use it. Cheers!
H: $K$ is a basis for $W$, and $L$ is a basis for $U$. Is $K\cup L$ a basis for $U + W$? Question: $V$ is a vector space over field $F$ , $U,W$ are subspaces of $V$. Is the next statement true or false? "$K$ is a basis for $W$, and $L$ is a basis for $U$. Therefore $K\cup L$ is a basis for $U + W$". What I did: (I "proved" it's true. The book says it's false) $K \cup L$ is an union of two linear independent groups, and is therefore linear independent. We mark: $$K = \{ v_1, ... , v_n, w_1, ... , w_k \} \ ; \ L=\{ v_1, ... , v_n, u_1, ... , u_l \} \ ; \\ K\cup L = \{ v_1, ... , v_n, w_1, ... , w_k,u_1,...,u_l \}$$ Therefore, for every $u\in U, w\in W$: $$w = a_1v_1 + ... + a_nv_n + a'_1w_1 +...+ a'_kw_k\\ u=a_1v_1 + ... + a_nv_n + a'_1u_1 +...+ a'_lu_l\\ w + u= a_1v_1 + ... + a_nv_n + a'_1w_1 +...+ a'_kw_k + b_1v_1 + ... + b_nv_n + b'_1u_1 +...+ b'_lu_l = (a_1+b_1)v_1 + ... + (a_n+b_n)v_n + a'_1w_1 +...+ a'_kw_k+ b'_1u_1 +...+ b'_lu_l$$ Note that: $$\text{span}\{K \cup L\}= c_1v_1 +...+c_nv_n +c'_1w_1 +...+c'_kw_k +c''_1u_1+...+c''_1u_l$$ Therefore for $c_h=(a_h+b_h)\ , \ c'_i=a_i \ , \ c''_j=b_j$: $$\text{span}\{K \cup L\} = w + u$$ Therefore $K\cup L$ is a basis for $U + W$ AI: For another counterexample, assume that $U,V\leq \mathbb R^4$ are given by $$U=\langle (1,1,0,-1),(1,2,3,0),(2,3,3,-1)\rangle,~V=\langle (1,2,2,-2),(2,3,2,-3),(1,3,4,-3)\rangle $$ Here the third listed generator is redundant in each case, since $(2,3,3,-1)=(1,1,0,-1)+(1,2,3,0)$ and $(1,3,4,-3)=3(1,2,2,-2)-(2,3,2,-3)$, so $\dim U=\dim V=2$. Take $K$ to be the first two vectors listed for $U$ and $L$ the first two listed for $V$; each pair is linearly independent, so $K$ is a basis of $U$ and $L$ is a basis of $V$. A short row reduction shows that $\dim(U+V)=3$, while $K\cup L$ has $4$ elements. Four vectors in a $3$-dimensional space are linearly dependent, so $K\cup L$ cannot be a basis of $U+V$.
H: The Effect of a Transpose on a Matrix Inequality In the solution to an exercise I came across the following: $y^TA_N \geq c_N^T \rightarrow A_N^Ty \geq c_N$. Now I was wondering, is it in general true that an inequality remains valid when 'taking transposes on both sides'? If so, what is the proof for this? AI: Generally one says $A\ge B$, for matrices, if each entry of $A$ is $\ge$ the corresponding entry of $B$. Since transposing merely rearranges the entries, this holds if and only if each entry of $A^T$ is $\ge$ the corresponding entry of $B^T$, i.e. $A^T\ge B^T$.
H: Rearranging power series expansion to get parameter on denominator How can we rearrange $$T=\dfrac{k V+g}{gk}\bigg(kT-\dfrac{1}{2}k^{2}T^{2}+\dfrac{1}{6}k^{3}T^{3}\bigg),$$ to get $$T=\dfrac{2V/g}{1+k V/g}+\dfrac{1}{3}k T^{2}$$ ? AI: So, $$kT-\frac12k^2T^2+\frac16k^3T^3=\frac{gTk}{kV+g}=\frac{Tk}{\frac{kV}g+1}$$ $$\implies 1-\frac12kT+\frac16k^2T^2=\frac1{\frac{kV}g+1}\qquad(\text{dividing both sides by }kT,\text{ assuming }kT\ne0)$$ $$\implies 1-\frac1{\frac{kV}g+1}=\frac12kT-\frac16k^2T^2 $$ $$\implies \frac{\frac{kV}g}{\frac{kV}g+1}=\frac12kT-\frac16k^2T^2 $$ Multiplying both sides by $\frac2k$, $$ \frac{\frac{2V}g}{\frac{kV}g+1}= T-\frac13kT^2 $$ Now move the $-\frac13kT^2$ term to the other side to obtain the required form.
H: Cardinality of the quotient field of power series ring Let $k$ be a field which is countable and let $x$ be an indeterminate over $k$. I am having a hard time proving $$\operatorname{card} k((x)) = \operatorname{card}\mathbb R.$$ Thank you. AI: When your field is $\mathbb{Z}_{2}$: define a bijection between $k[[x]]$ and the set of real numbers between $0$ and $1$ whose decimal digits are only $0$ and $1$; this set has cardinality $c=|\mathbb R|$. For other fields you have to be a little more careful. If, for example, your field is $\mathbb{C}$, you can easily see that $$k[[x]]\cong\prod_{i\in\mathbb{N}\cup\{0\}}k,$$ so its cardinality is again $c^{\aleph_0}=c=|\mathbb{R}|$. Now consider a field $k$ whose cardinality is at most $c$ (in particular your countable $k$). Since you can define a one-to-one function from $\mathbb{Z}_2[[x]]$ to $k[[x]]$ and a one-to-one function from $k[[x]]$ to $\mathbb{C}[[x]]$, you have $c=|\mathbb{Z}_2[[x]]|\leq|k[[x]]|\leq|\mathbb{C}[[x]]|=c$, and therefore $|k[[x]]|=c$. Finally, note that $k[[x]]\subset k((x))$, and that $k((x))$ can be viewed as a subset of $k[[x]]\times k[[x]]$ (split a Laurent series into its principal part and its power series part), so $c\leq|k((x))|\leq c^2=c$, and hence $|k((x))|=c$.
H: Find the volume of the body bounded by $x^2 + y^2 + z^2 = 4$ and $x^2 + y^2 = 1$ Find the volume of the body bounded by $x^2 + y^2 + z^2 = 4$ and $x^2 + y^2 = 1$. This is the last subject in our syllabus and I am afraid that my professor had not any time to teach it before the end of the semester, and left us wondering in the land of calculating body volumes in $3D$. I know this can be done by double integral, but I would be happy if someone showed me direction into solving such questions. AI: The volume can be found as $$V=\int\int\int dV = \int_{-1}^{1}\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}\int_{-\sqrt{4-x^2-y^2}}^{\sqrt{4-x^2-y^2}}dzdydx .$$ Or just, you can use the double integral as $$ V = \int\int (z_2-z_1)dA, $$ where $z_1=z_1(x,y)$ and $z_2=z_2(x,y)$ and $dA=dydx$. In your case $$ z_1= -\sqrt{4-x^2-y^2},\quad z_2= \sqrt{4-x^2-y^2} .$$ Here is a plot, so you can see what's going on
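For completeness, here is a sketch of how the double integral evaluates in polar coordinates (assuming, as above, that the body is the part of the ball $x^2+y^2+z^2\le4$ lying inside the cylinder $x^2+y^2\le1$): $$V=\int_0^{2\pi}\!\!\int_0^1 2\sqrt{4-r^2}\;r\,dr\,d\theta=2\pi\left[-\frac23\left(4-r^2\right)^{3/2}\right]_0^1=\frac{4\pi}{3}\left(8-3\sqrt3\right).$$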
H: Is this a linear transformation? Let $T$ be a transformation from $P_2$ to $P_2$ (where $P_2$ is the space of all polynomials with degree less than or equal to $2$) $$T(f(t)) = f''(t)f(t)$$ I'm tempted to say that this is not a linear transformation because $$T(f(t) + g(t)) = (f''(t) + g''(t))(f(t) + g(t))$$ Which does not equal $$T(f(t)) + T(g(t))$$ But I'm not sure if I did that correctly... AI: You are correct, but to complete the job, you should provide a counterexample--that is, specific $f,g\in P_2$ such that $T(f+g)\ne T(f)+T(g)$. Alternatively, find a scalar $\alpha$ and an $f\in P_2$ such that $T(\alpha f)\ne\alpha T(f)$.
H: How to find the limits $\lim\limits_{h\rightarrow 0} \frac{e^{-h}}{-h}$ and $\lim\limits_{h\rightarrow 0} \frac{|\cos h-1|}{h}$? How can I work out the limits of these functions: $$\lim_{h\rightarrow 0} \frac{e^{-h}}{-h}$$ $$\lim_{h\rightarrow 0} \frac{|\cos h-1|}{h}$$ For the second one I think that the limit doesn't exist. AI: HINT: $(1):\lim_{h\to0}e^{-h}=1$ $(2):$ $$\cos h=1-2\sin^2\frac h2\implies \cos h-1=-2\sin^2\frac h2$$ $$\implies \frac{\cos h-1}h=-\left(\frac{\sin \frac h2}{\frac h2}\right)^2 \frac h2$$
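To carry the hints to a conclusion: for the first limit, $e^{-h}\to1$ while $-h\to0$ and changes sign, so $\frac{e^{-h}}{-h}\to-\infty$ as $h\to0^+$ and $\to+\infty$ as $h\to0^-$; the two-sided limit therefore does not exist. For the second one, $|\cos h-1|=1-\cos h=2\sin^2\frac h2$, so $$\frac{|\cos h-1|}{h}=\left(\frac{\sin\frac h2}{\frac h2}\right)^2\cdot\frac h2\xrightarrow[h\to0]{}1\cdot0=0,$$ i.e. the second limit exists and equals $0$, contrary to the guess in the question.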
H: How to find maxima and minima of a function involving a factorial i need to find the value of y when the bell curve for the following function reaches its maximum , i can solve the problem easily on a MAS software but i needed to know a more mathematical approach . here's the simplified equation $$\frac{3^{-y}}{(25-y)! y!}$$ and here's the original equation which i need to solve $$\frac{\left(\frac{3}{4}\right)^{25} 25!}{3^y (25-y)! y!}$$ in a nutshell i need to find the value of y when the above function reaches its maxima AI: HINT: If $T_y=\frac{\left(\frac34\right)^{25}25!}{3^y(25-y)! y!}$ Clearly, $0\le y\le 25$ $$\frac{T_{y+1}}{T_y}=\frac{3^y(25-y)!y!}{3^{y+1}(25-y-1)!(y+1)!}=\frac{25-y}{3(y+1)}$$ Check when $\frac{T_{y+1}}{T_y}>\text{ or } = \text{ or } <1$
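Carrying the hint through: $\frac{T_{y+1}}{T_y}>1$ exactly when $25-y>3(y+1)$, i.e. when $y<5.5$, so $T_0<T_1<\cdots<T_6$ and $T_6>T_7>\cdots>T_{25}$; the maximum is therefore attained at $y=6$.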
H: Checking Differentiability for given function Find if the function $x\mapsto |\sin (x)-1|$ is differentiable at $x=\pi /2$ . I get stuck at $$\lim_{h \rightarrow 0}{ \left|{\cos h \over h}\right|}$$ AI: HINT: With Graham Hesketh's solution, the differentiability of $(1-\sin x)$ is established here As $$\frac{d f(x)}{dx}=\lim_{h\to0}\frac{f(x+h)-f(x)}h$$ $$\implies \frac{d f(x)}{dx}_{(\text{at }x=a)}=\lim_{h\to0}\frac{f(a+h)-f(a)}h$$ Here $f(x)=1-\sin x, a=\frac\pi2$ So, $$\frac{d(1-\sin x)}{dx}_{(\text{at }x=\frac\pi2)}=\lim_{h\to0}\frac{1-\sin (\frac\pi2+h)-(1-\sin\frac\pi2) }h=\lim_{h\to0}\frac{1-\cos h}h$$ which is proved here
H: Minimum and maximum of $ \sin^2(\sin x) + \cos^2(\cos x) $ I want to find the maximum and minimum value of this expression: $$ \sin^2(\sin x) + \cos^2(\cos x) $$ AI: Following George's hint. Because $-1\le \sin x \le 1$ and $\sin t$ is strictly increasing on $-1\le t\le 1$, we see that $\sin (\sin x)$ (and hence $\sin^2(\sin x)$) is maximized when $\sin x=1$, e.g. at $x=\pi/2$. On the other hand, $\cos t$ is even and decreasing on $0\le t\le 1$, so $\cos (\cos x)$ (and hence $\cos^2(\cos x)$, since $\cos(\cos x)>0$) is maximized when $\cos x=0$, e.g. at $x=\pi/2$. Hence the combined function is maximal at $\pi/2$, where it equals $1+\sin^21\approx 1.7$. For the other direction, $x=0$ gives $\sin^2(\sin x)=0$, clearly minimal. Because $-1\le \cos x\le 1$, and $\cos t$ is increasing on $[-1,0]$ and decreasing on $[0,1]$, the value $\cos(\cos x)$ is minimized when $\cos x=\pm 1$. Hence in particular $x=0$ minimizes $\cos(\cos x)$ and thus $\cos^2(\cos x)$. Combining, the minimum value is $0+\cos^21\approx 0.29$.
H: Questions regarding Holder's and Minkowski's inequality I've some questions regarding Holder's and Minkowski's inequality as given in my text: Does the author consider the case $q=\infty$ in the equality case of lemma 1.1.36? Shouldn't the author mention $C_1,C_2$ are not all zero in the equality case of lemma 1.1.37? I think telling the 'iff' condition for equality as the linear independence of $a,b$ would be better. AI: The author's statement cannot make sense as written for $q=\infty$ in the first lemma, since raising to an infinite power is meaningless. However, for positive values it is not hard to see that equality can then only hold if the $b_i$ are constant. The answer to the second question is, of course, 'yes'. In fact, if you take the equality statement, take the $q$th root of both sides, and pass to the limit, you get that the $b_i$ must be constant.
H: what is the explicit form of this iterativ formular I am not sure, if there is an explicit form, but if there is, how do I get it? This is the formula: $$c_n=\frac{1-n \cdot c_{n-1}}{\lambda}$$ where $\lambda \in \mathbb{R}$ and $n \in \mathbb{N}$ I already tried some forms for c via trail and error, but I couldn't find a solution ... AI: The expression for $c_n$ is $$c_n=\frac{(-1)^n n!}{\lambda^{n+1}}\left\{S_{n}(-\lambda)+\lambda c_0-1\right\}$$ where \begin{equation}S_n(x)=\sum_{k=0}^n\frac{x^k}{k!} \end{equation} for all $x \in \mathbb{R}$. I prove it inductively. For $n=1$ the proposed expression begets $$-\frac{1}{\lambda^2}(1-\lambda+\lambda c_0-1)=\frac{1-c_0}{\lambda}=c_1$$ So the statement is true for $n=1$. Let it be true for $n=k$. Now we have from the recursion relation \begin{equation} \begin{split} c_{k+1}=& \frac{1}{\lambda}-\frac{k+1}{\lambda}c_k \\ \ =& \frac{1}{\lambda}-\frac{(-1)^k (k+1)!}{\lambda^{k+2}}S_{k}(-\lambda)-\frac{(-1)^k (k+1)!}{\lambda^{k+2}}(\lambda c_0-1)\\ \ =& \frac{(k+1)!}{\lambda^{k+2}}\left\{\frac{\lambda^{k+1}}{(k+1)!}+(-1)^{k+1}S_k(-\lambda)+(-1)^{k+1}(\lambda c_0-1)\right\}\\ \ =&\frac{(-1)^{k+1}(k+1)!}{\lambda^{k+2}}\left(S_{k+1}(-\lambda)+\lambda c_0-1\right) \end{split} \end{equation} Hence the hypothesis is proved.
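As a quick sanity check of the closed form at $n=2$: the recursion gives $c_2=\frac{1-2c_1}{\lambda}=\frac1\lambda-\frac2{\lambda^2}+\frac{2c_0}{\lambda^2}$, while the formula gives $\frac{2}{\lambda^{3}}\left(1-\lambda+\frac{\lambda^2}{2}+\lambda c_0-1\right)=\frac1\lambda-\frac2{\lambda^2}+\frac{2c_0}{\lambda^2}$, in agreement.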
H: Calculator for real and complex parts of a polynomial. Does anyone know of a nice calculator for calculating the real and complex parts of a complex polynomial. Say, for example, I want to write the polynomial $p(z)=z^3+2z^2+1$ as a function $f(x,y)=(f_1(x,y),f_2(x,y))$, but I do not want to calculate $f_1$ and $f_2$ by hand. AI: You can do this with Wolfram Alpha. Example.
H: I'm looking for several ways to prove that $\int_{0}^{\infty }\sin(x)x^mdx=\cos(\frac{\pi m}{2})\Gamma (m+1)$ I'm looking for several ways to prove that $$\int_{0}^{\infty }\sin(x)x^mdx=\cos\left(\frac{\pi m}{2}\right)\Gamma (m+1)$$ for $-2< Re(m)< 0$ AI: Here is a fake proof: Let $x = \sqrt{t}$. Then $$ \int_{0}^{\infty} x^{m} \sin x \, dx = \frac{1}{2} \int_{0}^{\infty} t^{m/2} \frac{\sin \sqrt{t}}{\sqrt{t}} \, dt. $$ Since $$ \frac{\sin \sqrt{t}}{\sqrt{t}} = \sum_{n=0}^{\infty} \frac{\phi(n)}{n!} (-t)^{n} \quad \text{for} \quad \phi(n) = \frac{n!}{(2n+1)!} = \frac{\Gamma(n+1)}{\Gamma(2n+2)}, $$ Ramanujan's master theorem gives \begin{align*} \int_{0}^{\infty} t^{m/2} \frac{\sin \sqrt{t}}{\sqrt{t}} \, dt &= \Gamma\left(\frac{m}{2}+1\right)\phi\left(-\frac{m}{2}-1\right) \\ &= \frac{\Gamma\left(\frac{m}{2}+1\right)\Gamma\left(-\frac{m}{2}\right)}{\Gamma\left(-m\right)} = \frac{\Gamma\left(\frac{m}{2}+1\right)\Gamma\left(-\frac{m}{2}\right)}{\Gamma(m+1)\Gamma\left(-m\right)}\Gamma(m+1)\\ &= \frac{\sin \pi m}{\sin \left( \frac{\pi m}{2} \right)} \Gamma(m+1) = 2\cos\left(\frac{\pi m}{2} \right) \Gamma(m+1). \end{align*} Therefore we have $$ \int_{0}^{\infty} x^{m} \sin x \, dx =\cos\left(\frac{\pi m}{2} \right) \Gamma(m+1). $$ And here is a rigorous proof: By integration by parts, for $0 < a < b < \infty$, \begin{align*} \int_{a}^{b} x^{m} \sin x \, dx &= \left[ x^{m} (1 - \cos x) \right]_{a}^{b} - \int_{a}^{b} m x^{m-1} (1 - \cos x) \, dx. \end{align*} Since $ \Re m \in (-2, 0)$, $x^{m} (1 - \cos x)$ tends to $0$ either as $x \to 0^{-}$ or $x \to \infty$. This proves $$ \lim_{\substack{a &\to& 0 \\ b &\to& \infty}} \left[ x^{m} (1 - \cos x) \right]_{a}^{b} = 0. $$ Also, by noting that $0 \leq 1 - \cos x \leq 1 \wedge x^2 $, we have $$ \left| m x^{m-1} (1 - \cos x) \right| \leq |m| x^{\Re m-1} (1 \wedge x^{2}). $$ Since the RHS is integrable, the same is true for the LHS $m x^{m-1} (1 - \cos x)$. Thus the integral $$ \int_{0}^{\infty} x^{m} \sin x \, dx $$ exists as (at least) in improper sense, and we have \begin{align*} \int_{0}^{\infty} x^{m} \sin x \, dx &= - \int_{0}^{\infty} m x^{m-1} (1 - \cos x) \, dx \\ &= - \frac{m}{\Gamma(1-m)} \int_{0}^{\infty} \frac{\Gamma(1-m)}{x^{1-m}} (1 - \cos x) \, dx \\ &= - \frac{m}{\Gamma(1-m)} \int_{0}^{\infty} \left( \int_{0}^{\infty} t^{-m}e^{-xt} \, dt \right) (1 - \cos x) \, dx. \end{align*} But since \begin{align*} \int_{0}^{\infty} \int_{0}^{\infty} \left| t^{-m}e^{-xt} (1 - \cos x) \right| \, dtdx &\leq \int_{0}^{\infty} \int_{0}^{\infty} t^{-\Re m}e^{-xt} (1 - \cos x) \, dtdx \\ &= \int_{0}^{\infty} \frac{\Gamma(1-\Re m)}{x^{1-\Re m}} (1 - \cos x) \, dtdx < \infty, \end{align*} Fubini's theorem shows that \begin{align*} \int_{0}^{\infty} x^{m} \sin x \, dx &= - \frac{m}{\Gamma(1-m)} \int_{0}^{\infty} \int_{0}^{\infty} t^{-m}e^{-xt} (1 - \cos x) \, dxdt \\ &= - \frac{m}{\Gamma(1-m)} \int_{0}^{\infty} \frac{t^{-m-1}}{t^2 + 1} \, dt. \end{align*} Applying the beta function identity and the Euler reflection formula for the Gamma function, it follows that \begin{align*} \int_{0}^{\infty} x^{m} \sin x \, dx &= \frac{\pi m}{2\Gamma(1-m)} \csc \left( \frac{\pi m}{2} \right) = \cos \left( \frac{\pi m}{2} \right) \Gamma(m+1) \end{align*} as desired.
H: Does there exist a group of even order in which every element is a square? Does there exist a group of even order in which every element is a square? I know in any group of odd order every element is a square. I am not sure about the case of even order. Any suggestion? AI: No. A simple counting argument shows this: a group of even order contains an element $x$ of order $2$, so $x^2=e=e^2$ while $x\ne e$, and hence the squaring map is not injective. A map from a finite set to itself that is not injective cannot be surjective, so not every element can be a square.
H: Immersion of Quaternions Does there exist an immersion of the Quaternion Group in the Symmetric Groups $S_6$ and $S_7$? If it does exist, can you give me an explicit description of that immersion? AI: The Sylow 2-subgroups of $S_6$ (or $S_7$) are isomorphic to $D_8 \times C_2$ (here $D_8$ is the dihedral group of order 8). Since $Q$ (the quaternion group) is a 2-group, if there is an embedding of $Q$ in $S_6$, then $Q$ would need to be a subgroup of $D_8 \times C_2$. Can you find such a subgroup?
H: Use weak induction to prove the following statement is true for every positive integer $n$: $2+6+18+\dots+2\cdot 3^{n-1}=3^n-1$ Use weak induction to prove the following statement is true for every positive integer $n$: $$2+6+18+\dots+2\cdot 3^{n-1}=3^n-1$$ Base Step: Prove it is true for $n$. Inductive Hypothesis: It will be true for $n+1$ What I need to show: That it will be true for n and n+1 Proof Proper:..... To get this started, how do I prove that it is true for $n$? What $n$ do I choose? The fact that there is a "..." in the equation scares me. How do I know how to quantify $n$ and $n+1$? AI: The base case is typically the least value of $n$ for which you are proving an assertion applies. Since we are aiming to prove $$P(n):\quad 2+6+18 + \cdots + 2\cdot 3^{n-1}=3^n-1 \tag{P(n)}$$ holds for all positive integers $n$, we need to prove that $P(n)$ holds for $n \geq 1$, and so we need first to confirm that $P(n)$ holds for the base case $n = 1$. Base Case: $P(1)\; (n = 1): {\bf 2\cdot 3^{1-1}} = 2\cdot 3^0 = 2 = {\bf 3^1 - 1}\tag{base case: True}$ Inductive Hypothesis (IH) Now, we assume that $P(k)$ is true for some arbitrary positive integer $n = k$. That is, we assume that it is true that $P(k):\quad 2 + 6 + 18 + \cdots + 2\cdot 3^{k-1}=3^k-1\tag{IH}$ Inductive Step Now, we will use our inductive hypothesis (IH) to prove that $P(k) \implies P(k+1)$. So our aim is to show, using the inductive hypothesis, that $2 + 6 + 18 + \cdots + 2\cdot 3^{k} =3^{k + 1}-1$: $$\begin{align} P(k + 1):\quad \underbrace{2 + 6 + 18 + \cdots + 2\cdot 3^{k - 1}}_{= \,\large 3^k - 1,\;by \;\text{IH}} + 2\cdot 3^{k} & = (3^k - 1) + 2\cdot 3^k \\ \\ & = 3\cdot 3^k - 1 \\ \\ & = 3^{k + 1} - 1\end{align}$$ So we have shown, in this third step, that $P(k) \implies P(k+1)$. Hence, by induction on $n$, we can conclude that $P(n)$ is true for all integers $n \geq 1$
H: How to find the element $x\in \mathbb Z_n$ such that $f(x)=1$, where $f\in \mathrm{Aut}(\mathbb Z_n)$? Let $0<m<n$ and $(m,n)=1$. Consider the map $f\in \mathrm{Aut}(\mathbb Z_n)$ such that $f(1)=m$. Which element $x\in \mathbb Z_n$ has the property that $f(x)=1$? AI: Hint 1 Let us denote the class of $x \in \Bbb{Z}$ in $\Bbb{Z}_{n}$ by $[x]$. So you have $f([1]) = [m]$, so that $f([x]) = f([1] x) = [m] x = [m x]$. You are looking for $x$ such that $f([x]) = [mx] = [1]$. Hint 2 This means $mx \equiv 1 \pmod{n}$, so that there is $y \in \Bbb{Z}$ such that $m x = 1 + n y$ or $mx - n y = 1$. Hint 3 Now recall Euclid's algorithm and Bézout's Lemma.
H: Question related to first order partial derivatives If The funtion $f: \Bbb R^2 \to \Bbb R$ has directional derivatives in all directions at each point in $\Bbb R^2$ then the function $f$ has first order partail derivatives at each point in $\Bbb R^2$ How can I prove this? Please do not downvote because of not saying any idea or my tried. I cannot produce any idea to prove this. Please help me thank you AI: Hints: Use the definitions. What does it mean for $f$ to have directional derivatives in all direction at each point in $\Bbb R^2$? It means that for all vectors $(k,l)\in \Bbb R^2$ and for all points $(x,y)\in \Bbb R^2$ the following limit exists: $$\lim \limits_{h\to 0}\left(\frac{f((x,y)+h(k,l))-f(x,y)}{h}\right) \tag 1$$ Now set $(k,l)=(1,0)$ and $(k,l)=(0,1)$ in $(1)$ and check the definition of partial derivative. Partial answer (pun): $\begin{align} \displaystyle \lim \limits_{h\to 0}\left(\frac{f((x,y)+h(1,0))-f(x,y)}{h}\right)&=\lim \limits_{h\to 0}\left(\frac{f(x+h,y)-f(x,y)}{h}\right)=\frac{\partial f}{\partial x}(x,y) \end{align}$ The last equality only holds because the first limit exists and because it is equal to the second limit in the chain of equalities above.
H: I need a better explanation of $(\epsilon,\delta)$-definition of limit I am reading the $\epsilon$-$\delta$ definition of a limit here on Wikipedia. It says that $f(x)$ can be made as close as desired to $L$ by making the independent variable $x$ close enough, but not equal, to the value $c$. So this means that $f(x)$ defines $y$ or the output of the function. So when I say $f(x)$ close as desired to $L$, I actually mean the result of the calculation that has taken place and produced a $y$ close to $L$ which sits on the $y$-axis? How close is "close enough to $c$" depends on how close one wants to make $f(x)$ to $L$. So $c$ is actually the $x$'s that I am putting into my $f$ function. So one is picking $c$'s that are $x$'s and entering them into the function, and he actually is picking those $c$'s (sorry, $x$'s) to make his result closer to $L$, which is the limit of an approaching value of $y$? It also of course depends on which function $f$ is, and on which number $c$ is. Therefore let the positive number $\epsilon$ be how close one wishes to make $f(x)$ to $L$; OK, so now one picks a letter $\epsilon$ which means error, and that letter is the value of "how much one needs to be close to $L$". So it is actually the $y$ value, or the result of the function again, that needs to be close of the limit which is the $y$-coordinate again? strictly one wants the distance to be less than $\epsilon$. Further, if the positive number $\delta$ is how close one will make $x$ to $c$, Er, this means $\delta=x$, or the value that will be entered into $f$? and if the distance from $x$ to $c$ is less than $\delta$ (but not zero), then the distance from $f(x)$ to $L$ will be less than $\epsilon$. Therefore $\delta$ depends on $\epsilon$. The limit statement means that no matter how small $\epsilon$ is made, $\delta$ can be made small enough. So essentially the $\epsilon$-$\delta$ definition of the limit is the corresponding $y$, $x$ definition of the function that we use to limit it around a value? Are my conclusions wrong? I am sorry but it seams like an "Amazing Three Cup Shuffle Magic Trick" to me on how my teacher is trying to explain this to me. I always get lost to what letters mean $\epsilon$, $\delta$, $c$, $y$, and $x$, when the function has $x$ and $y$ only. AI: It does take some effort to grasp the $\varepsilon$-$\delta$ definition. You shouldn't insist on getting it all on first reading; you have to work on it somewhat patiently. It has been said that patience is a virtue; maybe this is a good example of why that would be said. The answer to your first numbered question is "yes". The number $c$ is on the $x$-axis, where inputs to the function live, but one must be able to use numbers arbitrarily close to $c$ as inputs, but not necessarily $c$ itelf. In the most important cases, when $c$ itself is the input, one gets $0/0$ as the output, i.e. one gets no output. And it is precisely because $f(c)$ is undefined that one must consider $\lim\limits_{x\to c}f(x)$ instead. It is the $y$-coordinate that is to be close to $L$, the distance being less than $\varepsilon$. $\delta$ is not equal to $x$. Rather, one must make the distance between $x$ and $c$ less than $\delta$ (but the distance should not be $0$) in order to make the distance between $y$ and $L$ less than $\varepsilon$. The answer to #5 is no: the function by which $\delta$ depends on $\varepsilon$ is not the function by which $y$ depends on $x$. But it does depend on which function that is.
H: What's the difference between $ \mathbb{Z}/4\mathbb{Z}$ and $ 4\mathbb{Z} $? Can someone please explain the difference between $ \mathbb{Z}/4\mathbb{Z} $ and $ 4\mathbb{Z} $? From my understanding (please correct where I'm wrong): the group $4\mathbb{Z}$ has only four elements, $\{0,1,2,3\}$, and the group $\mathbb{Z}/4\mathbb{Z}$ also has the same four elements. So, are they really so different? AI: The group $4\Bbb Z$ is a subgroup of $\Bbb Z$, and $\Bbb{Z/4Z}$ is the quotient group we get when we divide $\Bbb Z$ by $4\Bbb Z$. The former is infinite and contains all the multiples of $4$ in $\Bbb Z$, the latter is finite and has four elements.
H: The rank of a bunch of vectors in $\mathbb{R}^4$ Please help me solve this: In $\mathbb{R}^4$ how can I calculate the rank of the following vectors: $$a=(3,2,1,0), b=(2,3,4,5), c=(0,1,2,3), d=(1,2,1,2), e=(0,-1,2,1).$$ I know that since $\#\{a,b,c,d,e\}=5$ it's a linearly dependent set in $\mathbb{R}^4$ because $\dim\mathbb{R}^4=4$, but how can I find the right vector to get rid of, and so on? And how can I find the rank using the determinant method? I need some hints. AI: The determinant of a square matrix will tell you if its columns (and/or rows), and the vectors they represent, are linearly independent (determinant not equal to zero). Here, we want to determine the rank of a $4\times 5$ matrix, the determinant of which we can't compute. To find the rank, which at most will be $4$, you need to create a matrix using your vectors for its columns, and put the matrix into row echelon form. That amounts to performing the sort of row-reduction used in the answer to this earlier question of yours. Then you can "read off" the rank of the matrix by counting the number of non-zero rows in the resultant (row-echelon) matrix. We start with constructing the matrix whose columns consist of the entries of your vectors: $$\begin{pmatrix} 1 & 2 & 0 & 0 & 3 \\ 2 & 3 & -1 & 1 & 2 \\ 1 & 4 & 2 & 2 & 1 \\ 2 & 5 & 1& 3 & 0 \\ \end{pmatrix}$$ Now, I'll combine three elementary row operations to begin the reduction: $-2R_1 + R_2 \to R_2$ $-1R_1 + R_3 \to R_3$ $-2R_1 + R_4 \to R_4$ $$\begin{pmatrix} 1 & 2 & 0 & 0 & 3 \\ 2 & 3 & -1 & 1 & 2 \\ 1 & 4 & 2 & 2 & 1 \\ 2 & 5 & 1& 3 & 0 \\ \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 2 & 0 & 0 & 3 \\ 0 & -1 & -1 & 1 & -4 \\ 0 & 2 & 2 & 2 & -2 \\ 0 & 1 & 1& 3 & -6 \\ \end{pmatrix}$$ $2R_2 + R_1 \to R_1$ $2R_2 + R_3 \to R_3,$ $R_2 + R_4 \to R_4$ $$\begin{pmatrix} 1 & 2 & 0 & 0 & 3 \\ 0 & -1 & -1 & 1 & -4 \\ 0 & 2 & 2 & 2 & -2 \\ 0 & 1 & 1& 3 & -6 \\ \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 0 & -2 & 2 & -5 \\ 0 & -1 & -1 & 1 & -4 \\ 0 & 0 & 0 & 4 & -10 \\ 0 & 0 & 0 & 4 & -10 \\ \end{pmatrix}$$ Subtracting Row 3 from Row 4 gives us $$\begin{pmatrix} 1 & 0 & -2 & 2 & -5 \\ 0 & -1 & -1 & 1 & -4 \\ 0 & 0 & 0 & 4 & -10 \\ 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix}$$ which we see has a row whose entries are all zero. The matrix is now in row-echelon form (though not yet reduced row echelon form $\dagger$), having exactly three non-zero rows. Hence the rank of the matrix (and of the set of column vectors) is equal to $3$. $(\dagger)$ We could reduce further to obtain reduced row echelon form: $-1\times R_2$ $\frac 14 \times R_3$ $$\begin{pmatrix} 1 & 0 & -2 & 2 & -5 \\ 0 & -1 & -1 & 1 & -4 \\ 0 & 0 & 0 & 4 & -10 \\ 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 0 & -2 & 2 & -5 \\ 0 & 1 & 1 & -1 & 4 \\ 0 & 0 & 0 & 1 & -5/2 \\ 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix}$$ $-2R_3 + R_1 \to R_1$ $R_3 + R_2 \to R_2$ $$\begin{pmatrix} 1 & 0 & -2 & 2 & -5 \\ 0 & 1 & 1 & -1 & 4 \\ 0 & 0 & 0 & 1 & -5/2 \\ 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 0 & -2 & 0 & 0 \\ 0 & 1 & 1 & 0 & 3/2 \\ 0 & 0 & 0 & 1 & -5/2 \\ 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix}$$ The basis for the column space spanned by your vectors is given by $$b = \langle 2, 3, 4, 5\rangle^T, c = \langle 0, 1, 2, 3\rangle^T, d = \langle 1, 2, 1, 2\rangle^T$$ And you can test to confirm that any $4\times 4$ matrix with any $4$ of your vectors as columns will have determinant zero.
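As a complement, the reduced form also lets one read off the dependency relations among the original vectors explicitly: the non-pivot columns satisfy $e=b-2d$ and $a=\tfrac32\,b-\tfrac52\,c$, which is another way of seeing that the pivot vectors $b,c,d$ already span everything and that the rank is $3$.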
H: Show that $\mathbb{R}^n\setminus \{0\}$ is simply connected for $n\geq 3$ Show that $\mathbb{R}^n\setminus \{0\}$ is simply connected for $n\geq 3$. To my knowledge I have to show two things: $\mathbb{R}^n\setminus \{0\}$ is path connected for $n\geq 3$. Every closed curve in $\mathbb{R}^n\setminus \{0\}, n\geq 3$ is null-homotop. The problem is that I do not know exactly how to show that. AI: The path connectedness is easy. Given $x,\, y \in \mathbb{R}^n\setminus\{0\}$, the straight line segment $t \mapsto (1-t)\cdot x + t\cdot y$ connects $x$ and $y$ in $\mathbb{R}^n\setminus\{0\}$ unless $y$ is a negative multiple of $x$. If $y = c\cdot x$ for a $c < 0$, then you can compose a path of two straight line segments, one going a little away from $x$ to $x + \varepsilon\cdot e_i$ for a small enough $\varepsilon > 0$ and $i$ such that $x$ is not a multiple of $e_i$. It could also be easily seen from the homeomorphism $$F\colon S^{n-1} \times (0,\,\infty) \to \mathbb{R}^n\setminus\{0\};\quad (\xi,\,r) \mapsto r\cdot \xi.$$ (A proof that $F$ is indeed a homeomorphism may be required, or that may be considered obvious, depending on what was treated previously.) Given a closed path $\gamma \colon t \mapsto (\xi(t),\, r(t))$ in $S^{n-1}\times (0,\,\infty)$, the map $H(s,t) = (\xi(t),\, r(t)^{1-s})$ provides a homotopy to a closed path in $S^{n-1}$. If you already know that $S^d$ is simply connected for $d \geqslant 2$, you are done now. Otherwise, to prove that fact, consider a closed path $\gamma\colon [0,\,1] \to S^{n-1}$. If $\gamma([0,\,1]) \neq S^{n-1}$, assume without loss of generality that the north pole $N = (0,\,\ldots,\,0,\,1)$ is not on the trace of $\gamma$. Stereographic projection from the north pole gives a homeomorphism $S^{n-1}\setminus \{N\} \to R^{n-1}$ (proof may be required), and that shows that $\gamma$ is null-homotopic in $S^{n-1}\setminus \{N\}$, hence a fortiori in $S^{n-1}$. If $\gamma$ is surjective, you can partition it into parts that each omit one of two opposing caps of the sphere. Consider for example the two caps $T = \{x \in S^{n-1}\colon x_n \geqslant \frac23\}$, $B = \{x \in S^{n-1}\colon x_n \leqslant -\frac23\}$ and the belt $E = \{x\in S^{n-1}\colon \lvert x_n\rvert \leqslant \frac13\}$. By uniform continuity of $\gamma$, there is a $\varepsilon > 0$ such that $\gamma(t) \in B \land \lvert s-t\rvert < \varepsilon \Rightarrow \gamma(s) \notin E \cup T$, and similar for all other combinations of $B,\,E,\,T$. Without loss of generality, suppose that $\gamma$ starts (and ends) in the south pole. Set $t_0 = 0$. While $t_k < 1$, find the next partition point $t_{k+1}$ in the following way: If $\gamma(t_k) \in B$, let $t_{k+1} = \min\bigl(\{s > t_k \colon \gamma(s) \in E\} \cup \{1\}\bigr)$. If $\gamma(t_k) \in E$, let $t_{k+1} = \min \{s > t_k \colon \gamma(s) \in T\cup B\}$. If $\gamma(t_k) \in T$, let $t_{k+1} = \min \{s > t_k \colon \gamma(s) \in E\}$. By the uniform continuity mentioned above, $t_m = 1$ for an $m \leqslant \frac{1}{\varepsilon}$. Let $\gamma_k$ be the restriction of $\gamma$ to $[t_k,\,t_{k+1}]$. If $\gamma(t_{k}) \in T$, then the composition $\gamma_{k-1}\gamma_k$ is a path connecting two points $a,\, b \in E$ in $S^{n-1}\setminus B$. The latter is homeomorphic to an open ball, so $\gamma_{k-1}\gamma_k$ is homotopic to a path connecting $a$ and $b$ in $E$. 
Replacing all the $\gamma_{k-1}\gamma_k$ with homotopic paths in $E$, you obtain a path homotopic to $\gamma$ whose trace omits $T$; by the above, that path is null-homotopic in $S^{n-1}$, and therefore so is $\gamma$.
H: Maximum Likelihood Principle; Local vs. Global Maxima In the statement for estimating parameters through the Maximum Likelihood Principle (MLE), there is no mention of whether to choose a local maximum or a global maximum. (In my very limited reading so far) From the examples given in various textbooks/lecture notes, it seems that we should choose the global maximum of the likelihood function for inference. Is this correct? The reason I am asking is because I am dealing with some data whose likelihood seems to have several maxima. The parameter space is three dimensional, so I have no intuition about the situation. In this case how do I estimate the parameters properly - do I just look for the maximum in a small part of the parameter space? (The bounds could be established through guesses based on the data, for example.) AI: Many, but not all, of the likelihood functions we usually encounter have a strictly concave logarithm (i.e., they're log-concave). Consequently, they have at most one stationary point, and that stationary point is the global maximum. That said, there certainly are cases where the likelihood has multiple local maxima. You always look for the global maximum in MLE. Keep in mind, however, that MLE is not necessarily a good estimator for all problems, and there are common and interesting cases where MLE may produce an estimate with large error.
H: Integral dependence of an algebraic element Let $A$ be a UFD, $K$ its field of fractions, and $L$ an extension of $K$. Then, let $\alpha \in L$ and let $f_\alpha \in K[x]$ be its minimal polynomial over $K$. Is it true that $\alpha$ is integral over $A$ if and only if $f_\alpha$ has coefficients in $A$? AI: Yes. By definition, $\alpha$ is integral over $A$ if (and only if) there is a monic polynomial $p \in A[X]$ with $p(\alpha) = 0$. Now, if $f_\alpha$ has coefficients in $A$, you have your monic polynomial in $A[X]$ of which $\alpha$ is a zero, hence $\alpha$ is integral over $A$. Conversely, if $\alpha$ is integral over $A$, there is a monic polynomial $p \in A[X]$ with $p(\alpha) = 0$. By the definition of a minimal polynomial, that means that $f_\alpha \mid p$ in $K[X]$. By one of the many lemmas of Gauss, if you have $p = f\cdot g$ where $p \in A[X]$ and $f,\,g \in K[X]$ are monic, you actually have $f,\, g \in A[X]$.
H: Residue/Laurent series of $\frac{z}{1+\sin(z)}$ at $z=-\pi/2$ For some reason, I just can't quite figure out how to easily calculate the Laurent series for the following function: $$ f(z)=\frac{z}{1+\sin(z)},\quad z_0=-\frac{\pi}{2} $$ I don't really need the whole series, just the residue. The function has a zero of order 2 at $z=-\pi/2$, which would lead to the nasty calculation: $$ \text{Res}[f,z_0]=\lim_{z\rightarrow-\pi/2}\frac{d}{dz}(z+\pi/2)^2f(z) $$ The derivative is nasty and we'd have to apply L'H$\hat{\text{o}}$pital's rule 4 times to get the denominator to not vanish (more nastiness). So Laurent series it is! But for some reason my worn out qual-studying brain can't figure out how to do it. A hint would be lovely! AI: Write $z = (-\pi/2) + w$. Then $\sin z = \sin (w-\pi/2) = -\cos w$. Now, you can easily get the beginning of the Taylor expansion of $1 + \sin z$ around $-\pi/2$: $$1 + \sin z = 1 - \bigl( 1 - \frac{w^2}{2} + \frac{w^4}{4!} - O(w^6)\bigr) = \frac{w^2}{2}\bigl(1 - \frac{w^2}{12} + O(w^4)\bigr)$$ and therefore $$\begin{align} \frac{z}{1+\sin z} &= \frac{w-\pi/2}{\frac{w^2}{2}\bigl(1 - \frac{w^2}{12} + O(w^4)\bigr)}\\ &= \frac{2w-\pi}{w^2}\bigl(1 + \frac{w^2}{12} + O(w^4)\bigr)\\ &= -\frac{\pi}{w^2} + \frac{2}{w} - \frac{\pi}{12} + \frac{w}{6} + O(w^2). \end{align}$$ Now replace $w$ with $z - (-\pi/2)$.
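In particular, the residue asked for is just the coefficient of $1/w=1/(z+\pi/2)$ in this expansion: $$\operatorname{Res}\left[\frac{z}{1+\sin z},\,-\frac\pi2\right]=2.$$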
H: Wolfram double solution to $\int{x \cdot \sin^2(x) dx}$ I calculated this integral : $$\int{x \cdot \sin^2(x) dx}$$ By parts, knowing that $\int{\sin^2(x) dx} = \frac{1}{2} \cdot x - \frac{1}{4} \cdot \sin(2x) +c$. So I can consider $\sin^2(x)$ a derivative of $\frac{1}{2} \cdot x - \frac{1}{4} \cdot \sin(2x)$, and I get this result: $$\frac{1}{4} \cdot x^2 - \frac{x}{4} \cdot \sin(2x) + \frac{1}{4} \cdot \sin^2(x) +c$$ I get the confirm on wolfram if I try to compute the derivative of $\frac{1}{4} \cdot x^2 - \frac{x}{4} \cdot \sin(2x) + \frac{1}{4} \cdot \sin^2(x) +c$, but here if I try to compute the integral of $\int{x \cdot \sin^2(x) dx}$ I get this result: $$\frac{x^2}{4} -\frac{1}{4} \cdot x \cdot \sin(2x) -\frac{1}{8} \cdot \cos(2x) $$ But $\frac{1}{8} \cdot \cos(2x)$ isn't equal to $\frac{1}{4} \cdot \sin^2(x)$, which is the correct result? AI: We have that $\sin^2 x = \frac{1-\cos (2x)}{2}$. Therefore, $\frac14 \sin^2 x = \frac{1-\cos (2x)}{8} = \frac18 - \frac18 \cos (2x)$. The extra $\frac18$ is taken care of by the constant of integration. In other words, if two functions differ by a constant, they have the same derivative. This is something that happens often when integrating trigonometric functions because of the various identities.
H: Trigonometric manipulation of complex number, how does this step occur? I was reading the section about DeMoivre, and my book showed how to derive his formulas. The next part is supposed to be about finding roots of complex and real numbers. Roughly, it says: "Let $z$ be a complex number. Suppose we wish to find the $n^{th}$ root of $z$ . Then there exists a complex number $w$ such that $w^n=z$ . Let's write $z$ in trigonometric form: $z=r( \cos \theta +i \sin \theta)$ One $n^{th}$ root of $z$ is $w=r^{ \frac 1n} (\cos \frac \theta n +i \sin \frac \theta n)$" This is where I have the problem. It looks like they wrote $w^n=r( \cos \theta +i \sin \theta)$ $\to$ $w=\sqrt [n] {r( \cos \theta +i \sin \theta)}$ , but how did they get from $w=\sqrt [n] {( \cos \theta +i \sin \theta)}$ to $(\cos \frac \theta n +i \sin \frac \theta n)$? It looks like they used DeMoivre's formula which states that $z^n=r^n(\cos n\theta +i \sin n\theta)$ , but in the box with the theorem my book makes it clear: If $z=r( \cos \theta +i \sin \theta)$ , then for any $integer$ $n$ $z^n=r^n(\cos n\theta +i \sin n\theta)$ . What's going on here? $\mathbf {P.S.}$ If you want to know what book this is it's an otherwise very good but old precalculus book that I have: "Precalculus Mathematics for Calculus 4th" by James Stewart, Lothar Redlin, and Saleem Watson. AI: Your are correct when you say that they didn't use DeMoivre's formula because it has only been proven in your book for integers (though it actually does hold for all real powers which can be seen using Euler's formula). The important thing to notice is that while the formula puts a restriction on $n$ it doesn't put on on $\theta$ which can be any real number. So when they write, $$ \sqrt[n]{ \cos \theta + i \sin \theta} = \cos (\theta/n )+ i \sin (\theta/n) $$ They are really saying "Applying DeMoivre's formula to the right hand side of this expression yields the expression in the radical." The argument is as follows, suppose that $w=\cos (\theta/n )+ i \sin (\theta/n)$, then taking the $n$'th power of $w$ gives, $$w^n = \left(\cos (\theta/n )+ i \sin (\theta/n) \right)^n = \cos (n(\theta/n) )+ i \sin (n(\theta/n)) = \cos (\theta )+ i \sin (\theta)$$ From this we conclude that $w$ is a $n$'th root of $z=\cos (\theta )+ i \sin (\theta)$.
H: A question about Banach algebras: showing that $\operatorname{Sp}a \subset D_o \cup D_1$ This problem may be easy for someone who has studied Banach algebras; please give me a hint. Let $e=0$ or $1$, and $a$ be an arbitrary element in a Banach algebra $A$. Let $D_o$ and $D_1$ be the disks in the complex plane of the same radius $\|a\|$ centred at $0$ and $1$, respectively. Then $\operatorname{Sp}a \subset D_o \cup D_1$. AI: This is Theorem 3.2.3 in "Fundamentals of the Theory of Operator Algebras" by Kadison and Ringrose. Their proof is instructive, and I would suggest looking it up there.
H: What does it mean by a matrix being bounded? Does it mean each entry is bounded? Also, given that the domain is the orthogonal group, I am aware that the range of $F$ is the identity, which is closed. But how can I show the domain is? By the way, it comes to my mind: does this function have an inverse: $F(A) = AA^T$? AI: 1) I assume that you have a sequence of matrices in mind. If so, you can use the same concept of boundedness that you have in, say, $\mathbb{R}^n$ or $\mathbb{C}^n$ by supplying the extra data of a norm on the space of matrices. Since the space of matrices is finite-dimensional, all choices of norm are equivalent, so indeed you could take as the definition that each entry is bounded. 2) Hint: The preimage of a closed set under a continuous map is closed. 3) No, since it is not injective (take two different orthogonal matrices and notice that they are mapped to the same point).
H: Is Dirichlet function Riemann integrable? "Dirichlet function" is meant to be the characteristic function of rational numbers on $[a,b]\subset\mathbb{R}$. On one hand, a function on $[a,b]$ is Riemann integrable if and only if it is bounded and continuous almost everywhere, which the Dirichlet function satisfies. On the other hand, the upper integral of Dirichlet function is $b-a$, while the lower integral is $0$. They don't match, so that the function is not Riemann integrable. I feel confused about which explanation I should choose... AI: The Dirichlet function $f$ isn't continuous anywhere. For every irrational number $x$, there is a sequence of rational numbers $\{r_n\}$ that converges to it. We have: $$ \lim_{n\to\infty} f(r_n) = 1 \ne 0 = f(x) $$ Thus, $f$ isn't continuous at irrational numbers. Rational numbers can be handled similarly.
H: $p \mid x^2 +n\cdot y^2$ and $\gcd(x,y)=1 \Longleftrightarrow (-n/p) = 1$ Let $n$ be a nonzero integer, let $p$ be an odd prime not dividing $n$. then $ p \mid x^2 + n\cdot y^2$ and $x,y$ co-prime $ \Longleftrightarrow(-n/p) = 1 $ How can I prove this? By $(-n/p)$ I mean the Legendre symbol. For $\implies$ I have already tried this: $ p \mid x^2 +n\cdot y^2$, so $x^2 + n\cdot y^2 = 0$ mod $p$. Then $x^2 = -n\cdot y^2\mod p$... So with a little help from my friends this part is done. Now how to show the other implication? Greets Egon AI: Hint: We have $x^2\equiv -ny^2\pmod{p}$. Multiply both sides by $z^2$, where $z$ is the multiplicative inverse of $y$. Detail: We need to be careful about the statement of the theorem. So we break up the statement and proof into two parts. When we do, we will discover that the result is stated somewhat too informally. (i) Suppose that $(-n/p)=1$. Then there exist relatively prime integers $x$ and $y$ such that $p$ divides $x^2+ny^2$. Proof of (i): Since $(-n/p)=1$, by part of the definition of quadratic residue, $n$ is not divisible by $p$. Also, there exists an integer $x$ such that $x^2\equiv -n\pmod{p}$. Thus $x^2+n$ is divisible by $p$, and therefore $x^2+ny^2$ is divisible by $p$, with $y=1$. Note that $x$ and $y$ are relatively prime. (ii) Suppose there exist relatively prime integers $x$ and $y$ such that $x^2+ny^2$ is divisible by $p$ and $\gcd(x,y)=1$. This is not enough to show that $(-n/p)=1$. For example, let $n=3$, $x=3$, and $y=1$ (so that the odd prime $p=3$ divides $x^2+ny^2=12$). Thus we must assume in addition that $n$ is not divisible by $p$. We prove the desired result, with the modification that we add in the condition that $n$ is not divisible by $p$. Proof of (ii): Note that $y$ cannot be divisible by $p$. For if it is, then from $p$ divides $x^2+ny^2$ we conclude that $p$ divides $x^2$. Then $p$ divides $x$, contradicting the fact that $x$ and $y$ are relatively prime. Since $y$ is not divisible by $p$, it has a multiplicative inverse modulo $p$. That is, there is a $z$ such that $zy\equiv 1\pmod{p}$. Then $x^2z^2+ny^2z^2\equiv 0\pmod{p}$. Thus $(xz)^2\equiv -n\pmod{p}$, and the result follows. Remark: The theorem should really be stated like this. Let $p$ be an odd prime, and suppose that (the integer) $n$ is not divisible by $p$. Then $(-n/p)=1$ if and only if there exist relatively prime integers $x$ and $y$ such that $x^2+ny^2$ is divisible by $p$.
H: Cube roots don't sum up to integer My question looks quite obvious, but I'm looking for a strict proof for this. (At least, I assume it's true what I claim.) Why can't the sum of two cube roots of positive non-perfect cubes be an integer? For example: $\sqrt[3]{100}+\sqrt[3]{4}$ isn't an integer. Well, I know this looks obvious, but I can't prove it... For given numbers it will be easy to show, by finding under- and upper bounds for the roots (or say take a calculator and check it...). Any work done so far: Suppose $\sqrt[3]m+\sqrt[3]n=x$, where $x$ is integer. This can be rewritten as $m+n+3x\sqrt[3]{mn}=x^3$ (by rising everything to the power of $3$ and then substituting $\sqrt[3]m+\sqrt[3]n=x$ again) so $\sqrt[3]{mn}$ is rational, which implies $mn$ is a perfect cube (this is shown in a way similar to the well-known proof that $\sqrt2$ is irrational.). Now I don't know how to continue. One way is setting $n=\frac{a^3}m$, which gives $m^2+a^3+3amx=mx^3$ but I'm not sure whether this is helpful. Maybe the solution has to be found similar to the way one would do it with a calculator: finding some bounds and squeezing the sum of these roots between two well-chosen integers. But this is no more then a wild idea. AI: Suppose $a+b=c$, so that $a+b-c=0$, with $a^3, b^3, c$ all rational. Then we have $-3abc=a^3+b^3-c^3$ by virtue of the identity $$a^3+b^3+c^3-3abc=(a+b+c)(a^2+b^2+c^2-ab-ac-bc)$$ (take $c$ with a negative sign) Hence $a+b$ and $ab$ are both rational, so $a$ and $b$ satisfy a quadratic equation with rational coefficients. There are lots of ways of completing the proof from here.
H: How do I show that this Limit of 2 variables is zero? How do I show that: $$\lim_{(x,y)\to(0,0)}xy\frac{x^2-y^2}{x^2+y^2}=0?$$ I'm stumped... AI: $$ 0 \le (x\mp y)^2 = x^2+y^2 \mp 2xy\implies|xy| \le \frac{x^2+y^2}{2}, $$ and therefore $$ \left|xy\,\frac{x^2-y^2}{x^2+y^2}\right|\le\frac{|x^2-y^2|}{2}\le\frac{x^2+y^2}{2}\longrightarrow 0 \quad\text{as }(x,y)\to(0,0). $$
H: How can I geometrically (or geographically) group items together? I'm a programmer, and I'm working on a project that takes a bunch of photos and separates them into groups by their gps coordinates. I have no experience in things like geometric group theory so I'm not even sure if that's the field that would help me with this project, but regardless, I just want to figure out how mathematically (and then programmatically) I can decide when a photo should be in the same group as others. Obviously the easy way to do it would be to say that if a photo was taken within a certain distance of another photo it should be in the same group. However, realistically, some groups will span a greater geographical area (e.g., photos taken on a boating trip around a lake would all be in one group, but photos in the small area of my house would be in a different group than those taken in the house down the street--even though the geographical span of the lake would surround those of my house and that down the street. Along with the geographical grouping, I plan to group my pictures through time as well as a way to narrow the groups further (e.g., photos taken at the corner restaurant today in a different group from those taken next week in the same restaurant. I guess the trick for me that I'm having a hard time coming up with is how to decide how big of a span those groups should be. If i'm looking at a map with a bunch of points, or a timeline with a bunch of points, it's easy to draw lines to group things off. But how to mathematically/programmatically do so? I'm sure it has something to do with how many items there are in a geographical span (e.g., 100 items spread out along a km length of street should be together, but 2 items at either end of the same street with nothing in between should be in two different groups) but I'm still at a loss of where to go from here. Thanks for your help! AI: What you want to do is called clustering and in a general setting it's a hard thing to do. The subject is well studied, although your particular application might have some properties that would allow you to do something better, I recommend you to apply first some known methods and just then tweak it or make some hybrid approach. It is worth to mention that the more (reasonable) signals/information you get (for example you could use some kind of pattern recognition to join pictures done at sea even if those are distributed through time), the better your partition will be. Finally, be aware that there is no true way to cluster objects, and three main reasons are: You do not know how fine the clustering should be, would you like to join all the pictures at sea together, or maybe split the voyage into smaller sections? You do not know which dimension you would like to bring up first, should it be voyage, or maybe the pictures of the cat that had traveled all along should be grouped separately? Probably users would like one picture to belong in many clusters, this might be easier or harder, depends on situation (but it is worth to check if you would like to have that kind of functionality in the future). In your case it's a bit easier since you already stated that you want to use geographical data, but don't be fooled, you will have a lot of "fun" with just location/timing balancing. I hope I didn't scared you and wish you good luck! ;-)
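Since the question is ultimately about doing this programmatically, here is a minimal sketch of one of the simplest of the known methods alluded to above: sort the photos by time and start a new group whenever the gap to the previous photo exceeds a time threshold or a distance threshold. This is only an illustration, not the specific method the answer prescribes; the Photo record and the two threshold parameters are made-up names for the sketch.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical photo data: latitude/longitude in degrees, timestamp in seconds.
    record Photo(double lat, double lon, long time) {}

    class PhotoGrouper {

        // Approximate great-circle distance in metres (haversine formula).
        static double distanceMetres(Photo a, Photo b) {
            double r = 6_371_000.0;
            double dLat = Math.toRadians(b.lat() - a.lat());
            double dLon = Math.toRadians(b.lon() - a.lon());
            double h = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(a.lat())) * Math.cos(Math.toRadians(b.lat()))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * r * Math.asin(Math.sqrt(h));
        }

        // Sort by time; start a new group when either gap to the previous photo is too large.
        static List<List<Photo>> group(List<Photo> photos, long maxGapSeconds, double maxGapMetres) {
            List<Photo> sorted = new ArrayList<>(photos);
            sorted.sort(Comparator.comparingLong(Photo::time));
            List<List<Photo>> groups = new ArrayList<>();
            List<Photo> current = new ArrayList<>();
            for (Photo p : sorted) {
                if (!current.isEmpty()) {
                    Photo prev = current.get(current.size() - 1);
                    if (p.time() - prev.time() > maxGapSeconds
                            || distanceMetres(prev, p) > maxGapMetres) {
                        groups.add(current);
                        current = new ArrayList<>();
                    }
                }
                current.add(p);
            }
            if (!current.isEmpty()) groups.add(current);
            return groups;
        }
    }

Tuning the two thresholds (or replacing them with a density-based rule) is exactly the location/timing balancing act the answer warns about.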
H: Regular open sets and semi-regularization. In a Hausdorff space $(X,\tau)$, we can generate a coarser topology, say $\tau'$, by taking its base to be the family of regular open sets in $(X,\tau)$. (Semi-regularization of $(X,\tau)$) Given that it's already proven, how to we proceed to prove that the "regular open sets in $(X,\tau)$ are same as the regular open sets in $(X,\tau')$"? NOTE: A set $S$ is called 'regular open' if $S = \mathrm{Int}(\overline{S})$, where $\overline{S}$ denotes the closure of the set. (Question 1.7.8 - General Topology by Engelking) AI: The key is that $\varrho \colon \mathfrak{P}(X) \to \mathfrak{P}(X);\; \varrho(M) = \overset{\circ}{\overline{M}}$ is idempotent (in every topology). To see that: $$\varrho(M) \in \tau \Rightarrow \varrho(M) \text{ is an open subset of } \overline{\varrho(M)} \Rightarrow \varrho(M) \subset \varrho(\varrho(M))$$ and $$\varrho(M) \subset \overline{M} \Rightarrow \overline{\varrho(M)} \subset \overline{M} \Rightarrow \varrho(\varrho(M)) \subset \varrho(M).$$ Now, let us index the regularisation operation $\varrho$ by the topology with respect to which it is done. Let $\mathcal{R}_T = \{M \subset X \colon \varrho_T(M) = M\}$. So we want to show $\mathcal{R}_\tau = \mathcal{R}_{\tau'}$. Let first $S \in \mathcal{R}_{\tau'}$. Since $S$ is $\tau'$-open, and $\tau' \subset \tau$, it is also $\tau$-open. Hence $$S \subset \varrho_\tau(S) \subset \varrho_{\tau'}(\varrho_\tau(S)) \subset \varrho_{\tau'}(\overline{S}) \subset \varrho_{\tau'}(\operatorname{cl}_{\tau'}(S)) = \varrho_{\tau'}(S) = S.$$ Here monotonicity of $\varrho_T$ and $\varrho_T(\operatorname{cl}_T(M)) = \varrho_(M)$ as well as $U \subset \varrho_T(U)$ for $U\in T$ (where $T$ is an arbitrary topology) have been used, these properties are obvious or easily verifiable. Thus we have shown $\mathcal{R}_{\tau'} \subset \mathcal{R}_\tau$. Now let $S \in \mathcal{R}_\tau$. By definition of $\tau'$, that means $S$ is $\tau'$-open, hence $S \subset \varrho_{\tau'}(S)$. For the reverse inclusion, we first show that $\operatorname{cl}_{\tau'}(S) = \operatorname{cl}_\tau(S)$ (for $\tau$-open $S$, hence in particular for $S \in \mathcal{R}_\tau$, but not in general, of course!). Since $\tau' \subset \tau$, the $\supset$ inclusion is clear. Now let $x \notin \operatorname{cl}_\tau(S)$. By definition, that means there is a $U \in \mathcal{V}_x$ such that $S \cap U = \varnothing$. $S$ is open, hence also $S\cap \overline{U} = \varnothing$, and, since $\varrho_\tau(U) \subset \overline{U}$, a fortiori $S \cap \varrho_\tau(U) = \varnothing$. But $\varrho_\tau(U)$ is $\tau'$-open, hence $x \notin \operatorname{cl}_{\tau'}(S)$, so $\complement \operatorname{cl}_\tau(S) \subset \complement \operatorname{cl}_{\tau'}(S)$, and therefore $\operatorname{cl}_{\tau'}(S) \subset \operatorname{cl}_{\tau}(S)$. And then we have $$S \subset \varrho_{\tau'}(S) = \operatorname{int}_{\tau'}(\operatorname{cl}_{\tau'}(S)) = \operatorname{int}_{\tau'}(\operatorname{cl}_{\tau}(S)) \subset \varrho_\tau(S) = S$$ for $S \in \mathcal{R}_\tau$, hence $S \in \mathcal{R}_{\tau'}$, i.e. $\mathcal{R}_\tau \subset \mathcal{R}_{\tau'}$.
H: Evaluating: $\int^{n}_{1}[\ln(x) - \ln(\lfloor x \rfloor)] dx $ I am attempting to evaluate the integral: $$\int^{n}_{1}\ln(x) - \ln(\lfloor x \rfloor) dx $$ To a form: $$f(x) + O(g(x))$$ where $g(x) \rightarrow 0$ as $x \rightarrow \infty $ How do I compute that f(x) or atleast some type of series representation for it? AI: We have: $$ \int_1^n \ln(x) \,dx = \Big.\left(x\ln(x) - x\right)\Big|_1^n = n \ln(n) - n + 1 $$ And: \begin{align} \int_1^n \ln(\lfloor x \rfloor) \,dx &= \sum_{i=1}^{n-1} \int_i^{i+1} \ln(\lfloor x \rfloor) \,dx \\ &= \sum_{i=1}^{n-1} \int_i^{i+1} \ln(i) \,dx \\ &= \sum_{i=1}^{n-1} \ln(i) \\ &= \ln\left(\prod_{i=1}^{n-1}i\right) \\ &= \ln\left((n-1)!\right) \end{align} Thus, your integral is equal to: $$ n \ln(n) - n + 1 - \ln\left((n-1)!\right) $$
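If one specifically wants the $f(n)+O(g(n))$ shape asked for (with $g\to0$), one can expand $\ln\left((n-1)!\right)$ using Stirling's formula $\ln n!=n\ln n-n+\frac12\ln(2\pi n)+O(1/n)$, which turns the exact answer into $$n \ln(n) - n + 1 - \ln\left((n-1)!\right)=1+\frac12\ln n-\frac12\ln(2\pi)+O\!\left(\frac1n\right).$$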
H: Computing the value of logarithmic series: $Q(s,n) = \ln(1)^s + \ln(2)^s + \ln(3)^s + \cdots+ \ln(n)^s $ Given a series of the type: $$Q(s,n) = \ln(1)^s + \ln(2)^s + \ln(3)^s + \cdots+ \ln(n)^s $$ How does one evaluate it? Something I noticed was: $$Q(1,n) = \ln(1) + \ln(2) + \ln(3)+ \cdots+\ln(n) = \ln(1\cdot 2\cdot 3 \cdots n) = \ln(n!) $$ I also noticed that: $$\int^{n}_{1}\ln(x)^s\, dx\quad\sim\quad\sum^{n}_{i = 1}\ln(i)^s$$ But I am really interested in an exact formula, or at least one whose difference from the actual value tends to zero, as opposed to one whose ratio to the actual value merely tends to $1$. AI: Here is a short excerpt of the discussion to which I've linked in my first comment. For $s=1$ (which is somehow nearly trivial) we can define the function $$ t_1(x)=-\zeta '(0)-\ln(\Gamma(\exp(x)))$$ which gives for instance $$ t_1(\ln(2)) - t_1(\ln(4)) = \ln(2)+\ln(3) $$ and in general $$ t_1(\ln(a)) - t_1(\ln(b)) = \sum _{k=a}^{b-1} \ln(k) $$ The key is that the artificial-looking version of $t_1(x)$ gives the infinite (regularized) series $$ t_1(\ln(x)) = \sum _{k=x}^\infty \ln(k) = \ln(x) + \ln(x+1) + \ldots $$ The coefficients of the power series of $t_1(x)$ can easily be given, for instance using Pari/GP: t_1(x) + O(x^8) %1321 = 0.91893853 + 0.57721566*x - 0.53385920*x^2 - 0.32557879*x^3 - 0.12527414*x^4 - 0.033725651*x^5 - 0.0068593536*x^6 - 0.0011726081*x^7 + O(x^8) where the coefficients can be described exactly by compositions of Stirling numbers of the second kind and $\zeta()$-values at positive integer arguments, and where moreover $\zeta(1)$ is replaced by the Euler $\gamma$ (which, btw, indicates that we have somehow the Ramanujan-like zeta-renormalization at work here). The first answer is then $$ Q(1,n) = t_1(\ln(1)) - t_1(\ln(n+1)) $$ For $s=2$ $$ t_2(\ln(x)) = \sum_{k=x}^{\infty} \ln(k)^2 $$ such that analogously $$ Q(2,n) = t_2(\ln(1)) - t_2(\ln(n+1)) = \sum_{k=1}^n \ln(k)^2 $$ I don't have an exact representation for the power series in terms of zetas and Euler-gamma; here is an approximation, where the constant term is $\zeta''(0)$ (the generation scheme allows arbitrary precision depending on the possible size of involved matrices): t_2(x) = -2.006356455908585 - 0.1456316909673534*x + 0.6345699670487060*x^2 - 0.3868588771980126*x^3 - 0.2407113770463571*x^4 - 0.09916202534448954*x^5 - 0.02847303775799426*x^6 - 0.005923792714748150*x^7 - 0.0009884022636657563*x^8 - 0.0001620035246035620*x^9 - 0.00002414672567100699*x^10 - 0.000001216451660450317*x^11 + 0.0000001409130267444575*x^12 - 0.0000001437552825860954*x^13 - 0.00000003587528042872192*x^14 + 0.00000001359539422026695*x^15 + O(x^16) and $$Q(2,n) = - \sum_{k=1}^\infty c_k \cdot \ln(n+1)^k $$ where $c_k$ are the coefficients of the power series and the index $k$ begins at $1$ such that the constant term is skipped. The numbers and the generation scheme (even for the higher $s$) can be taken from the discussion to which I've linked in my first comment.
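None of the power-series machinery above is needed to check the two elementary observations made in the question; a small Python sketch (function names and step counts are arbitrary) confirms that $Q(1,n)=\ln(n!)$ exactly and that the integral $\int_1^n\ln(x)^s\,dx$ tracks the sum:

```python
import math

def Q(s, n):
    return sum(math.log(k) ** s for k in range(1, n + 1))

def integral(s, n, steps=200_000):
    # midpoint rule for the integral of ln(x)^s over [1, n]
    h = (n - 1) / steps
    return sum(math.log(1 + (k + 0.5) * h) ** s for k in range(steps)) * h

n = 1000
print(Q(1, n), math.lgamma(n + 1))       # Q(1, n) equals ln(n!) up to rounding
for s in (1, 2, 3):
    print(s, Q(s, n), integral(s, n))    # same order of growth, as noted in the question
```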
H: Why don't we define division by zero as an arbitrary constant such as $j$? Why don't we define $\frac 10$ as $j$, $\frac 20$ as $2j$, and so on? I know that by following the rules of math this eventually leads to $1=2$, but we could make an exception and say that $j$ is the only number such that $0\cdot j \not= 0$, and put other restrictions in place as necessary so that we don't get contradictions. We do this for $i$, so why can't we do it here? For example, $i^2$ is defined as $-1$, but you could also say $i^2=\sqrt {-1} \cdot \sqrt {-1}=\sqrt {(-1)(-1)}=\sqrt{1}=1$, and we make an exception for this. AI: You'd have to make an exception for $j$ pretty much everywhere, and at that point, you might as well not include it. When you include $i=\sqrt{-1}$, you give up some properties like $\sqrt{ab} = \sqrt{a}\sqrt{b}$, but most of the existing laws continue to hold, and more to the point, the complex numbers have useful additional properties, such as being algebraically closed. The exceptions are few compared to what continues to work.
H: How to prove that $n\log n = O(n^2)$? How to prove that $n\log n = O(n^2)$? I think I have the idea but need some help following through. I start by proving that if $f(n) = n \log n$ then $f(n)$ is a member of $O(n\log n)$. To show this I take a large value $k$ and show that $i\geq k$ and $f(i) \leq c_1\cdot i\log(i)$. Next I need to show that if $f(n)$ is a member of $O(n \log n)$ then $f(n)$ is a member of $O(n^2)$ by taking a large value $k$ and showing that $i\geq k$ and $f(i) \leq c_2\cdot i\log i$ which turns out to be $f(i)=i^2\log i$ which is a member of $O(n^2)$. Is that right? Could someone formalize this for me? AI: From your notation, it looks like we are assuming that $n\in\mathbb{N}$. In fact, I'll be more general and just say $n\in(0,\infty)$. Then $\log n<n$, since $n < 1+n < e^n$ by its Taylor series. Thus, $n\log n<n^2$ for all $n\in(0,\infty)$. As a consequence, $n\log n = O(n^2)$.
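A quick numerical illustration (not a proof, just a sanity check of the inequality $\log n < n$ and the resulting domination) — a throwaway Python snippet:

```python
import math

for n in (10, 10**3, 10**6, 10**9):
    ratio = (n * math.log(n)) / n**2     # equals log(n)/n
    print(n, ratio)                      # tends to 0, so n*log(n) = O(n^2) comfortably
```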
H: Line integral, parabola I'm brushing up on some multivariable calc and I'm stuck on the following problem: Calculate: $$E = \int_\gamma \frac{-y \cdot dx+x \cdot dy}{x^2+y^2}$$ for $\gamma$ which is the parabola $y=2x^2-1$ from $(1,1)$ to $(\frac{1}{2}, -\frac{1}{2})$. I've done the following: Let $$x(t)=t \implies x'(t)=1$$ $$y(t)=2t^2-1 \implies y'(t)=4t$$ for $\frac{1}{2} \le t \le 1$. Thus $E$ becomes (unless I've done some error): $$ E=-\int_{\frac{1}{2}}^1 \frac{2t^2 + 1}{t^2+(2t^2-1)^2} dt$$ but I'm having trouble solving this. Would appreciate some help. AI: An idea: on the half-plane $x>0$, which contains the whole curve (here $x$ ranges over $[\frac12,1]$), the integrand has a potential function (a primitive function): $$F(x,y):=\arctan\frac yx$$ So your integral is simply $$E = F\left(\frac12\,,\,-\frac12\right)-F(1,1) = -\frac\pi4-\frac\pi4 = -\frac\pi2.$$ (Be careful with the superficially similar potential $-\arctan\frac xy$: it is undefined on the $x$-axis, and the parabola crosses $y=0$ at $x=\frac{1}{\sqrt2}$, so it cannot be used along this path.)
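A numerical cross-check of the two routes — direct integration along the parametrization from the question versus the potential-function evaluation — takes only a few lines of Python (the step count is an arbitrary choice; `atan2(y, x)` agrees with $\arctan(y/x)$ on the half-plane $x>0$):

```python
import math

def integrand(t):
    # curve gamma(t) = (t, 2t^2 - 1); dx = dt, dy = 4t dt
    x, y = t, 2 * t**2 - 1
    return (-y + x * 4 * t) / (x**2 + y**2)

steps = 200_000
a, b = 1.0, 0.5                      # t runs from 1 to 1/2, matching the orientation of the path
h = (b - a) / steps
direct = sum(integrand(a + (k + 0.5) * h) for k in range(steps)) * h

F = lambda x, y: math.atan2(y, x)    # equals arctan(y/x) for x > 0
via_potential = F(0.5, -0.5) - F(1.0, 1.0)

print(direct, via_potential, -math.pi / 2)   # all three agree: about -1.5708
```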
H: Checking my proof related to directional derivatives Could somebody check my answer? Please point out and explain any mistakes. Thank you for helping :) Question: Suppose that the function $f:\Bbb R^n \to \Bbb R$ is continuously differentiable. Let $x$ be a point in $\Bbb R^n$, let $p$ be a nonzero point in $\Bbb R^n$, and let $\alpha$ be a nonzero real number. Show that $\dfrac{\partial f}{\partial(\alpha p)}(x) = \alpha \dfrac{\partial f}{\partial p}(x)$ Solution: $\dfrac{\partial f}{\partial(\alpha p)}(x)$ $\displaystyle = \lim_{t \to 0}\left(\frac {f(x+\alpha tp)-f(x)}{t}\right)$ by the definition of the directional derivative of $f$ $\displaystyle =\lim_{t\to 0}\left(\sum_{i=1}^{n} \alpha p_i\dfrac{\partial f}{\partial x_i}(x)\right) $ by the Directional Derivative Theorem $\displaystyle =\alpha \sum_{i=1}^{n}p_i \frac{\partial f}{\partial x_i}(x)$ taking the limit $\displaystyle =\alpha \dfrac{\partial f}{\partial p}(x)$ by the same theorem. AI: Your proof is incorrect: if you want to use the Directional Derivative Theorem, you should not first write out the definition of the directional derivative as a limit — once you have done so, there is no longer a derivative to which the theorem can be applied. The proof is much simpler and also holds for functions which are not differentiable but only have a derivative in the direction considered. (If $\alpha=0$ the result would be trivial.) For $\alpha\neq 0$, just make the change of variables $s=\alpha t$ in the limit defining the directional derivative: $$\frac{\partial f}{\partial (\alpha p)}(x) = \lim_{t \to 0}\frac {f(x+t\alpha p)-f(x)}{t} = \lim_{s \to 0}\frac{f(x+sp)-f(x)}{s/\alpha} = \alpha \frac{\partial f}{\partial p}(x) $$
H: Show that $f(x)=x^5-3$ is solvable by radicals over $\mathbb{Q}$. I was reading about solvability of quintics by radicals, but unfortunately there were not many examples and I am afraid that I do not understand the whole concept. How do I show $x^5-3$ is solvable by radicals over $\mathbb{Q}$? AI: The splitting field $L$ of $x^5-3$ over $\mathbb{Q}$ is $\mathbb{Q}(\sqrt[5]{3},\zeta_5)$, where $\zeta_5$ is a primitive fifth root of unity; it has degree $20$ over $\mathbb{Q}$, and the Galois group is $$\text{Gal}(L/\mathbb{Q}) \cong \langle (1,2)(3,4),(2,3,4,5)\rangle,$$ a group of order $20$. This group is solvable (it has a normal subgroup of order $5$ with cyclic quotient of order $4$), and solvability of the Galois group is exactly Galois' criterion for $f$ to be solvable by radicals. Addition: You can find the roots by using de Moivre's theorem. They are $\sqrt[5]{3}(\cos(\frac{2\pi n}{5})+\operatorname{i}\sin(\frac{2\pi n}{5})).$ Well-known formulae tell us that $\sin(\frac{\pi}{5}) = \frac{1}{4}(\sqrt{10-2\sqrt{5}})$ and $\cos(\frac{\pi}{5}) = \frac{1}{4}(1+\sqrt{5}).$ Using the formulae $\sin(\alpha \pm \beta) = \sin\alpha\cos\beta \pm \sin\beta\cos\alpha$ and $\cos(\alpha\pm\beta) = \cos\alpha\cos\beta \mp \sin\alpha\sin\beta$ will get you closed-form expressions for all of the roots.
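For what it's worth, the numerical content of the addition is easy to verify with Python's standard library (this only checks the quoted formulas, not the Galois-theoretic statement):

```python
import cmath, math

# the five roots from de Moivre's theorem
roots = [3 ** (1 / 5) * cmath.exp(2j * math.pi * n / 5) for n in range(5)]
print(max(abs(r ** 5 - 3) for r in roots))                   # ~1e-15: each expression really is a root

# the quoted closed forms for sin(pi/5) and cos(pi/5)
print(math.sin(math.pi / 5), 0.25 * math.sqrt(10 - 2 * math.sqrt(5)))
print(math.cos(math.pi / 5), 0.25 * (1 + math.sqrt(5)))
```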
H: Estimation of variance. Given $X_1, X_2,\dots$ independent random variables with the same distribution (and finite variance $\sigma^2$), if we define $S^2_n = \displaystyle\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X_n)^2$, show that $S_n^2$ converges almost surely to $\sigma^2$. It seems that I have to prove the following: $P\{\lim_{n\to\infty} S_n^2 = \sigma^2 \} = 1$, but how could I do it? AI: First recall this bit of algebra: $$ \frac{1}{n-1}\sum_{i=1}^n \left(X_i-\overline X_n\right)^2 = \left(\frac{1}{n-1} \sum_{i=1}^n(X_i-\mu)^2\right) - \frac{n}{n-1}\left(\overline X_n-\mu\right)^2, $$ where $\mu=\mathbb E X_i$. In the first of the two terms, the $n$ summands are now independent. Now see if you can prove something about the almost sure limits of the two terms separately.
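The statement itself is easy to see empirically before proving it; here is a small Monte Carlo sketch in Python (the distribution, seed, and sample sizes are arbitrary choices), following one fixed sample path as $n$ grows:

```python
import random

random.seed(0)
mu, sigma = 1.0, 2.0
xs = []
for n in (10, 100, 10_000, 1_000_000):
    while len(xs) < n:
        xs.append(random.gauss(mu, sigma))         # extend the same sample path
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    print(n, s2)                                    # approaches sigma**2 = 4
```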
H: change of bases - matrix representation of linear maps I am trying to solve a problem and got stuck. I suppose I made a stupid mistake somewhere; could somebody explain where? Let $B = \{v_1, v_2, v_3\}$ be a basis of a vector space $V$, and let $B' = \{v_1, v_1 + v_2, v_1 + v_2 + v_3\}$. Now define a linear map $α : V → V$ by $α(v_1) = v_2 + v_3$, $α(v_2) = v_3 + v_1$ and $α(v_3) = v_1 + v_2$. Suppose that $A$ is the matrix representation of the linear transformation in the basis $B$ and $A'$ in $B'$. Find $A'$. Now, I know that $A'=P^{-1}AP$, where $P$ is the matrix of the identity map $I:V_{B'} → V_B$. Now I find that $$A= \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1\\ 1 & 1 & 0 \end{pmatrix},~ P=\begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}, ~ P^{-1}=\begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix} $$ In that case $$A'=\begin{pmatrix} 2 & 2& 1 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ However, if I try to compute $A'$ directly, I get this: $$\begin{array} .\alpha(v_1) = v_2 + v_3 \\\alpha(v_1+v_2)=v_1+v_2+2v_3 \\ \alpha(v_1+v_2+v_3)=2v_1+2v_2+2v_3\end{array}$$ Thus $$A'=\begin{pmatrix} -1 & 0& 1 \\ 0 & -1 & 2 \\ 0 & 0 & 2 \end{pmatrix}$$ I suppose my 'direct' approach is wrong. What's the correct way then? AI: Your basis change matrix is $$P=\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$$
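A quick numpy check makes the correction concrete: with this $P$, the matrix $P^{-1}AP$ agrees with the one obtained by writing the $B'$-coordinates of $\alpha(v_1)$, $\alpha(v_1+v_2)$, $\alpha(v_1+v_2+v_3)$ as columns (the "direct" approach, once the coordinate vectors are placed in columns rather than rows); both give $A'=\begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 1 & 2 & 2 \end{pmatrix}$. A minimal sketch:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])            # alpha in the basis B
P = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]])            # columns = B'-vectors written in B-coordinates

print(np.linalg.inv(P) @ A @ P)      # A' in the basis B'

# direct approach: image of each B'-vector, re-expressed in B'-coordinates (one column of A' each)
for col in P.T:
    print(np.linalg.solve(P, A @ col))
```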
H: Question related to partial differentiablity and directional derivative $\mathbf {Question:}$ Define a function $f:\Bbb R^2 \to \Bbb R$ by $f(x,y)=$ $(x/|y|)\sqrt {x^2+y^2}$ if $y\not = 0$ $f(x,y)=0$ if $y=0$ $\mathbf{a)}$ prove that the function $f$ is not continuous at the point $(0,0)$ $\mathbf{answer-a:}$ I need to find a sequence $\{u_k\}_{k\in \Bbb N}$ converges to $(0,0)$ such that $\{f(u_k)\}$ does not converge to $f(u)$ But I could not find such a sequence. Please can someone give me a hint about the sequence? $\mathbf{b)}$ The function f has directional derivatives in all directions at the point $(0,0)$ $\mathbf{answer-b:}$ I need to prove that $\frac{\partial f}{\partial p} (0,0)=\,lim_{t\to 0}\frac{f((0,0)+tp)-f(0,0)}{t}$ exists. Is this right? If this is true, please show me how to prove its existence? $\mathbf{c)}$ prove that if $c$ is any number, then there is a vector $p$ of norm $1$ such that $\frac{\partial f}{\partial p} (0,0)=c$ $\mathbf{answer-c:}$ I could not produce any idea to solve the part. Please can someone show and explain me my questions step by step? I am just starting real analysis and on my own. So I am confused so much :( thank you for helping :) AI: Hint a) Take $(u_k)=(\frac{1}{k},\frac{1}{k^3})$. b) If $p=(p_1,p_2)$ such that $p_2\neq 0$ $$\frac{\partial f}{\partial p} (0,0)=\lim_{t\to 0}\frac{f((0,0)+tp)-f(0,0)}{t}=\lim_{t\to 0}\frac{f((tp_1,tp_2))}{t}=\lim_{t\to 0}\frac{tp_1\sqrt{(tp_1)^2+(tp_2)^2}}{t|tp_2|}=\frac{p_1\sqrt{(p_1)^2+(p_2)^2}}{|p_2|}$$ and if $p=(p_1,0)$ then it's clear that $\frac{\partial f}{\partial p} (0,0)=0$ c) If $c=0$ take $p=(1,0)$ and if $c\neq 0$ then $$\frac{\partial f}{\partial p} (0,0)=\frac{p_1}{|p_2|}=c$$ and then $p_1=c|p_2|$ and since $p_1^2+p_2^2=c^2p_2^2+p_2^2=1$ then $$p=\left(\frac{c}{\sqrt{1+c^2}},\frac{1}{\sqrt{1+c^2}}\right)$$
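A short numerical check of both phenomena — the sequence in the hint blowing up, and the directional-derivative formula — can be done with a throwaway Python sketch (the particular direction and step size are arbitrary choices):

```python
import math

def f(x, y):
    return (x / abs(y)) * math.sqrt(x * x + y * y) if y != 0 else 0.0

for k in (10, 100, 1000):
    print(k, f(1 / k, 1 / k**3))      # grows without bound, so f is not continuous at (0, 0)

# directional derivative at (0,0) along p = (p1, p2), p2 != 0: difference quotient vs. formula
p1, p2, t = 0.6, 0.8, 1e-8
print(f(t * p1, t * p2) / t, p1 * math.sqrt(p1**2 + p2**2) / abs(p2))
```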
H: is $f(x)$ in big-$O$ of $g(x)$ assuming the following? Assuming that: $f(n)=O(g(n))$ and $f(n)$ and $g(n)$ are nondecreasing and always bigger than 1. Is the following necessarily true? $$f(n)\log_2(f(n)^c)=O(g(n)\log_2(g(n)))$$ And also, could you explain why? Thanks. AI: If you can prove that $\log f(n) = O(\log g(n))$ you are done. Let $C_0$ be such that $f(n)< C_0 \cdot g(n)$ for all $n$. Then $\log f(n) < \log C_0+ \log g(n)$. Let $D=\log g(1)$. Then $$\log C_0 = \frac{\log C_0}{D} \log g(1) \leq \frac{\log C_0}{D} \log g(n)$$ This means that $\log f(n) < \left(\frac{\log C_0}{D} +1\right)\log g(n)$ (We need $g(n)$ non-decreasing for that last inequality. We really don't need that strong a condition. I believe all you need is $\liminf_{n\to\infty}g(n)>1$.) So $$f(n)\log f(n)< C_0 g(n) \left(\frac{\log C_0}{D}+1\right)\log g(n)= Fg(n)\log g(n)$$ where $F=C_0\left(\frac{\log C_0}{D}+1\right)$. Note, we can always pick $C_0>1$, so we don't need to worry about the sign of $\log C_0$. Finally, the constant $c$ is irrelevant, since $f(n)\log f(n)^c=cf(n)\log f(n)<cF\cdot g(n)\log g(n)$. Note that you do need $\liminf g(n) >1$. For example, if $g(n)=1+\frac{1}{n}$ and $f(n)=2$, then $f(n)=O(g(n))$, but $f(n)\log_2 f(n)=2$ and $g(n)\log_2 g(n)=O(\frac{1}{n})$
H: Algebraic proof of De Morgan's law with three sets Given: $A' \cup B' \cup C' = (A \cap B \cap C)'$ Problem: Show how the identity above can be proved using two steps of De Morgan's Law along with some other basic set rules (i.e. an algebraic proof). I wasn't aware that De Morgan's Law had multiple steps. I thought De Morgan's Law was just De Morgan's Law. Perhaps it means, use two steps used in a direct proof for proving De Morgan's Law? If that is the case, I find myself a bit lost in determining which two steps to use. Here is my direct proof of the above. I'm very new at this, and expect there might be an error or three below. Any corrections for the direct proof below are welcome in addition to this question. Let $x\in A' \cup B' \cup C'$ (assumption) $\Rightarrow$ $x\in A'$ $\lor$ $x\in B'$ $\lor$ $x\in C'$ (by definition of union) $\Rightarrow$ $x\notin A$ $\lor$ $x\notin B$ $\lor$ $x\notin C$ (by definition of complement) $\Rightarrow$ $x\notin (A \cap B)$ $\lor$ $x\notin C$ (by definition of intersection) $\Rightarrow$ $x\notin (A \cap B) \cap C$ (by definition of intersection) $\Rightarrow$ $x\notin (A \cap B \cap C)$ (by associative law) $\Rightarrow$ $x\in (A \cap B \cap C)'$ (by definition of complement) Let $x\in (A \cap B \cap C)'$ (assumption) $\Rightarrow$ $x\notin (A \cap B \cap C)$ (by definition of complement) $\Rightarrow$ $x\notin (A \cap B)$ $\lor$ $x\notin C$ (by definition of intersection) $\Rightarrow$ $x\notin A$ $\lor$ $x\notin B$ $\lor$ $x\notin C$ (by definition of intersection) $\Rightarrow$ $x\in A'$ $\lor$ $x\in B'$ $\lor$ $x\in C'$ (by definition of complement) $\Rightarrow$ $x\in A' \cup B' \cup C'$ (by definition of union) So I guess it comes down to, if the above is an accurate direct proof, what is the algebraic proof that uses "two steps" and "some other basic set rules"? AI: An algebraic proof is not very hard using basic properties of $\cap,\cup,\bullet'$ and the two-set De Morgan law. $$A'\cup B'\cup C'=(A'\cup B')\cup C'=(A\cap B)'\cup C'=((A\cap B)\cap C)'=(A\cap B\cap C)'.$$ We use the fact that $X''=X$, and that $\cap,\cup$ are associative.
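(Not part of the proof, but a brute-force check over small concrete sets can be reassuring; the universe and the three sets below are arbitrary choices.)

```python
U = set(range(10))
A, B, C = {1, 2, 3}, {2, 3, 5, 7}, {3, 4, 5}

lhs = (U - A) | (U - B) | (U - C)    # A' ∪ B' ∪ C'
rhs = U - (A & B & C)                # (A ∩ B ∩ C)'
print(lhs == rhs)                    # True
```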
H: The preimage of a closed set under a continuous function is closed. My proof is very different from my reference, hence I am wondering whether I got this right. Apparently, $F$ is continuous, and the identity matrix is closed. Now we want to show that the preimage of a closed set under a continuous function is closed. Let $D$ be a closed set, and consider a sequence $x_n \to x_0$ in which $x_n \in f^{-1}(D)$; we will show that $x_0 \in f^{-1}(D)$. Since $f$ is continuous, we have a convergent sequence $$\lim_{n\to \infty} f(x_n) = f(x_0) = y.$$ Each $f(x_n)$ lies in $D$, and $D$ is closed, so the limit $y = f(x_0)$ also lies in $D$; hence $x_0 \in f^{-1}(D)$. So the preimage is closed, since it contains all of its limit points. Thank you. AI: Yes, it looks right. Alternatively, given a continuous map $f: X \to Y$, if $D \subseteq Y$ is closed, then $X \setminus f^{-1}(D) = f^{-1}(Y \setminus D)$ is open, so $f^{-1}(D)$ is closed.
H: Proving that a Particular Set Is a Vector Space Let $V$ be the set of all differentiable real-valued functions defined on $\mathbb R$. Show that $V$ is a vector space under addition and scalar multiplication, defined by $$(f+g)(t) = f(t) + g(t),\quad (cf)(t) = c[ f(t)],$$ where $f, g \in V$, $c \in F$. Since addition defined as such is one-to-one, and since $f(t) + g(t)$ is differentiable, $f+g$ is unique and belongs to $V$. By a similar argument, $cf$ is unique and belongs to $V$. My question has to do with the eight properties of the field $F$. They are so obvious that I have to question if I did it right. For example, $$f+g = f(t) + g(t) = g(t) + f(t) = g+f.$$ Now, how do I justify that $f(t) + g(t) = g(t) + f(t)$? I do not think that continuity is sufficient, because the proof that $$\lim_{t \to c} f(t) + g(t) = \lim_{t \to c} g(t) + f(t)$$ already assumes that $f(t) + g(t) = g(t) + f(t)$. AI: This concept really threw me for a loop when I took linear algebra and afterwards, but what you're using in that step is the fact that the real numbers themselves are commutative, since $f(t)$ and $g(t)$ are real numbers. You can just quote the fact that addition of real numbers is commutative. You couldn't do it before you evaluated, since $f$ is not a number until you evaluate it. Now, on another note, you can't say that $f+g=f(t)+g(t)$, but you can say that $(f+g)(t)=f(t)+g(t)$.
H: $(\cdot,\cdot)$ in Banach spaces? I have been doing some research on fixed point theorems, and they have brought me around to papers from the 1960s in functional analysis in Banach spaces. I think that today it is common practice to use either $\langle\cdot,\cdot\rangle$ or $(\cdot,\cdot)$ to mean an inner product in a Hilbert space. However, in these papers, they use the latter notation to mean something different: here, the first argument is a bounded conjugate linear functional on $V$, the second argument is in $V$, and $(f,x)=f(x)$. Is there some reason that these two seemingly disparate concepts have the same notation? I do not know a lot of the language in linear algebra, so I am a bit out of my league in terms of looking things up. AI: If $V$ is a Banach space and $V^*$ is its dual, the duality pairing $(f,v)=f(v)$ is a bilinear operator. Moreover, if $V$ is a Hilbert space, you can identify each $f\in V^*$ with some $v_f$ in $V$ (the Riesz representation theorem). In that case it happens that $(f,v) =\langle v_f,v\rangle$.
H: Calculate sum of squares of first $n$ odd numbers Is there an analytical expression for the summation $$1^2+3^2+5^2+\cdots+(2n-1)^2,$$ and how do you derive it? AI: $$ 1^2+3^2+5^2+\cdots+(2n-1)^2 = \sum_{i=1}^{n}(2i-1)^2 = \sum_{i=1}^{n}(4i^2)- \sum_{i=1}^{n}(4i) + \sum_{1}^{n}1=\dots.$$ Now, you need the identities $$ \sum_{i=1}^{n}1 = n, $$ $$ \sum_{i=1}^{n}i = \frac{n(n+1)}{2}, $$ $$\sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6} = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6}.$$
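Carrying the algebra through gives the closed form $\frac{n(2n-1)(2n+1)}{3}=\frac{4n^3-n}{3}$; a one-liner check in Python:

```python
for n in range(1, 10):
    brute = sum((2 * i - 1) ** 2 for i in range(1, n + 1))
    closed = n * (2 * n - 1) * (2 * n + 1) // 3
    print(n, brute, closed, brute == closed)
```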
H: studying compact $\partial$-$n$-manifolds via closed $n$-manifolds? What would be counterexamples to the following statement: It is not true that any $n$-manifold with boundary is an $n$-manifold with finitely many embedded disjoint open disks removed, since that would mean that its boundary is a disjoint union of spheres. E.g. the solid torus $B^2\times S^1$ has boundary the torus $S^1\times S^1$, which is not a disjoint union of spheres. Fuzzy question: If one had a classification of manifolds (up to diffeomorphism/homeomorphism/h-equivalence), could one automatically obtain a classification of manifolds with boundary? I know that to each manifold $M$ with boundary, there is associated a double manifold $DM$, which is the disjoint union of two copies of $M$, glued along their boundary. But if we knew precisely what $DM$ was (in the classification scheme), could we classify $M$ itself? Is any manifold a double of some other manifold, i.e. can every manifold be split into equal halves? AI: Simpler, $\mathbb RP^2$, the real projective plane, is not the double of a manifold with boundary. If a manifold $M$ has boundary, notice that $\partial (M \times [0,1])$ is the double of $M$. So doubles have the property that they are boundaries of higher-dimensional manifolds. In dimension 2, a manifold is a double if and only if it is null-cobordant, which is if and only if it has even Euler characteristic. You could construct some much sharper obstructions to a manifold being a double. Doubles have an effective action of $\mathbb Z_2$ with co-dimension $1$ fixed point set, the action switching the orientation of the normal bundle. With a little more work you could refine this to an if and only if statement for when a manifold is a double.
H: Is this a characterization of well-orders? While grading some papers and thinking about a question related to well-orders (in particular, pointing a mistake in a solution), I came to think of a reasonable characterization for well-orders. I can immediately see it's true for countable orders, but not for uncountable orders. Definition. Let $\cal L$ be a first-order language, and $\cal M$ an $\cal L$-structure. We say that $\cal M$ is rigid if $\text{Aut}(\cal M)=\{\rm id\}$. Conjecture. Let $\cal L=\{<\}$. Then $\mathcal M=\langle M,<\rangle$ is rigid if and only if it is a well-order or its reverse order is a well-order. One direction is trivial. Well-orders are rigid (and therefore their reversed orders are rigid as well). In the other direction, if $\cal M$ is countable, then it is easy. Suppose it's not a well-order. If $\cal M$ contains a convex copy of $\Bbb Z$ then fix everything else and shift that copy by $1$. Otherwise $\cal M$ contains a convex copy of $\Bbb Q$ and then we can shift that copy by $1$ (or multiply it by $\frac12$, whatever floats your boat). The problem is that for uncountable order types the plot thickens and they might be $\kappa$-dense, so neither of the arguments would work (and there is no additive structure - that I know of - that we can exploit like in the countable case). Question: Does the conjecture hold, or is there some intricate counterexample? AI: The conjecture does not hold. For a simple counterexample, consider $\omega+\omega^*$. In fact, there are rather dramatic counterexamples: In this answer, Brian M. Scott refers to B. M. Scott. A characterization of well-orders, Fundamenta Mathematicæ, 111 (1), 71-76. MR0607921 (82i:06001), where he proves that for every infinite $\kappa$ with $2^{<\kappa}=\kappa$ there is a rigid dense linear order of size $2^\kappa$. Moreover, this order does not even admit order-preserving injections into itself other than the identity. For $\kappa=\omega$, this is a result of Dushnik and Miller, who even built such a set as a dense subset of the reals. This is Theorem 9.1 of Joseph G. Rosenstein. Linear orderings, Pure and Applied Mathematics, 98. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York-London, 1982. MR0662564 (84m:06001). For an additional example, in his answer, Brian also mentions the set $$ X=\{(n,\alpha)\mid n\in\omega\,\&\,\alpha\in\omega_n\},$$ ordered by $(m,\alpha)\le(n,\beta)$ iff $m>n$ (as natural numbers), or else $m=n$ and $\alpha\le \beta$ (as ordinals). This order is rigid, and neither a well-order, nor a reverse well-order. In his paper, Brian also characterizes well-orders: Say that a linear order $(X,<)$ is cushioned iff every order preserving injection $f:X\to X$ satisfies $x\le f(x)$ for all $x$. Theorem. A linear order $(X,<)$ is a well-order iff it is cushioned and scattered. There is a related notion of rigidity, where we require $(X,<)$ to be rigid iff it admits no isomorphism with a proper initial segment of itself. This notion has been studied by John L. Hickman, see J. L. Hickman. Rigidity in order-types, J. Austral. Math. Soc. Ser. A, 24 (2), (1977), 203–215. MR0480217 (58 #397).
H: Laplace equation with weird boundary condition So, guys, here's my problem. I have this differential equation $$ U''_{xx}+U''_{yy}=0 $$ with these boundary conditions: $$ U'_{y}(x,0)=0 $$ $$ U'_{y}(x,\pi)=0 $$ $$ U(0,y)=0 $$ $$ U(\pi,y)=1+\cos(2y) $$ Now, I obtain this solution for the first three conditions: $$ \sum_{n=1}^{\infty} K_n \cdot \sinh(nx) \cdot \cos(ny) $$ And it has to verify the last condition, so $$ U(\pi,y)=\sum_{n=1}^{\infty} K_n \cdot \sinh(n\pi) \cdot \cos(ny)=1+\cos(2y) $$ So, $ K_n \cdot \sinh(n\pi) $ has to be the coefficient of the Fourier series of $ 1+\cos(2y) $. My problem is, how do I calculate $ K_n $? I tried to obtain the constant by using the following formula $$ K_n \cdot \sinh(n\pi)=\frac{1}{\pi} \int_{-\pi}^{\pi} (1+ \cos(2y)) \cdot \cos(ny) \ dy $$ But, according to Wolfram Alpha and my own results, it's equal to $0$. Thank you very much, guys! AI: You have this mostly right. As @Brian Rushton says, the finite FS in the BC means that you only need contend with the $n=0$ and $n=2$ terms. The tricky part is getting the $n=0$ term right. In that case, you may be tempted to set that term to zero identically, but that's not right. The fact that $U(\pi,0)=2$ means that a different approach is warranted. In this case, if you imagine that $$U(x,y) = \sum_{n=0}^{\infty} A_n \sinh{n x} \cos{n y}$$ then set $A_n = B_n/(n \pi)$. Then $\lim_{n \to 0} A_n \sinh{n x} = B_0 (x/\pi)$. The BC at $x=\pi$ demands that $B_0=1$. It is then straightforward to write that $$U(x,y) = \frac{x}{\pi} + \frac{\sinh{2 x}}{\sinh{2 \pi}} \cos{2 y}$$ You may verify that this $U$ satisfies the BCs and the differential equation.
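That final verification is easy to do symbolically; here is a short sympy sketch checking the PDE and all four boundary conditions (every printed value should simplify to $0$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
U = x / sp.pi + sp.sinh(2 * x) / sp.sinh(2 * sp.pi) * sp.cos(2 * y)

print(sp.simplify(sp.diff(U, x, 2) + sp.diff(U, y, 2)))      # Laplace equation
print(sp.simplify(sp.diff(U, y).subs(y, 0)))                  # U_y(x, 0) = 0
print(sp.simplify(sp.diff(U, y).subs(y, sp.pi)))              # U_y(x, pi) = 0
print(sp.simplify(U.subs(x, 0)))                              # U(0, y) = 0
print(sp.simplify(U.subs(x, sp.pi) - (1 + sp.cos(2 * y))))    # U(pi, y) = 1 + cos(2y)
```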
H: Prove that if $\langle x,z\rangle = 0$ for all $z,$ then $x=0.$ I just wanted to check if my reasoning in this proof was correct. The question is as follows. Let $\beta$ be a basis for a finite dimensional inner product space $V.$ a) Prove that if $\langle x,z \rangle = 0$ for all $z \in \beta,$ then $x=0.$ OK, I reasoned it this way: from the definition of inner products, we have that $\langle x,0 \rangle = \langle 0,x \rangle = 0.$ Further, since $z$ is a basis element and linearly independent, then $x_1z_1+x_2z_2+\dots+x_nz_n = 0$ only if either all the $z_i$s are zero or all the $x_i$s are zero. Therefore, $x=0$ since in the initial construction, we defined $z$ as a basis for $V,$ and it doesn't have to be zero. b) Prove that if $\langle x, z \rangle = \langle y,z \rangle$ for all $z \in \beta,$ then $x=y.$ Again, we can go to the fact that $z$ is a basis and linearly independent. We have that $x_1z_1+x_2z_2+\dots+x_nz_n = y_1z_1+y_2z_2+\dots+y_nz_n,$ so if $z$ is linearly independent, then it can only be zero if all the $z$s are zero or all the $x$s and $y$s are zero. In which case, $x$ would be equal to $y$ if $z$ isn't zero. For other cases, we just divide the left side of the equation by the $z$s on the right, and we get $x_1+x_2+\dots+x_n = y_1+y_2+\dots+y_n$ Is this correct? Did I miss something important? I know this is a simple exercise, but I want to be sure. AI: Your proof assumes that the vector space is actually $\mathbb{R}^n$, and that the inner product is actually the dot product. This is an invalid assumption. For example, suppose that $V$ is the set of polynomials of degree at most $2$. A basis is $\beta=\{1,t,t^2\}$. We may define an inner product via: $$\langle f,g\rangle=\int_0^1f(t)g(t)dt$$ Now, your dependent/independent calculation falls flat. To solve this problem, you really need to work with inner products. Because $\beta$ is a basis, we may express $x=\sum_{\beta_i\in \beta} a_i \beta_i$. Then take $\langle x,x\rangle=\langle x,\sum_{\beta_i\in \beta} a_i\beta_i\rangle=\sum_{\beta_i\in \beta} \bar{a}_i\langle x,\beta_i\rangle,$ wherein complex conjugation follows from antilinearity of the second argument. This leads to the conclusion $x=0$. A similar calculation works for the second problem.
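The polynomial example above can be made completely concrete with sympy: the Gram matrix of $\{1,t,t^2\}$ under $\langle f,g\rangle=\int_0^1 fg\,dt$ is far from the identity (so the coordinates of $x$ are not simply the values $\langle x,\beta_i\rangle$), yet $\langle x,\beta_i\rangle=0$ for all $i$ still forces $x=0$. A sketch with real coefficients, so conjugation plays no role here:

```python
import sympy as sp

t = sp.symbols('t')
basis = [sp.Integer(1), t, t**2]

def ip(f, g):
    return sp.integrate(f * g, (t, 0, 1))

# Gram matrix of the basis: the Hilbert-style matrix, clearly not the identity
G = sp.Matrix(3, 3, lambda i, j: ip(basis[i], basis[j]))
print(G)

# Nonetheless, <x, beta_i> = 0 for all i forces x = 0, because G is invertible
a0, a1, a2 = sp.symbols('a0 a1 a2')
x = a0 + a1 * t + a2 * t**2
print(sp.solve([ip(x, b) for b in basis], [a0, a1, a2]))   # {a0: 0, a1: 0, a2: 0}
```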
H: How to show $G(p)$ is a subgroup of $G$ an abelian group If $G$ is an abelian group and $p$ is a prime then $$G(p):=\{g \in G: \mathrm{ord}(g)=p^n, n \geq 0\} \leq G.$$ I think I may be making this problem too difficult. I know $G(p)\neq \varnothing$ as $\mathrm{ord}(e)=p^0$ so $e \in G(p)$. If $a,b \in G(p)$ then say $\mathrm{ord}(a)=p^n$ and $\mathrm{ord}(b)=p^m$. Then, if $M=\max\{n,m\}$ then $\mathrm{ord}(ab)=p^M$, is my claim. It is easy to show that $(ab)^{p^M}=e$. I suppose what I am trying to show, if it is possible, if this is a way to go about this at least is $$a^xb^x=e \Rightarrow a^x=b^x=e \qquad\text{ for $x \in \mathbb{N}$} \qquad \text{(1)}$$ as if I can do this then if I suppose I have some $j<M$ with $(ab)^j=e$ then I will have a contradiction since $j$ will be smaller than whichever of the $m,n$ is the max, contradicting that particular $m,n$ being the smallest positive integer with $a^n=e$, for example. Any tips on how to prove $(1)$ if it is true? lol Thank you very much! AI: I think your claim is false, as $a^x$ might be the inverse of $b^x$. In fact, you are making this too complicated: just show that, if $(ab)^n=e$, then $\text{ord}(ab)\mid n$. As $(ab)^{p^M}=e$, it tells us that $\text{ord}(ab)$ is a $p$-power as well. Hence $G(p)$ is multiplicatively closed. Since, if $\text{ord}(a)=p^n$, then $\text{ord}(a^{-1})=p^n$ as well, we conclude that $G(p)$ is indeed a subgroup of $G$.
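A brute-force check in a small concrete group can make the statement tangible; for instance, in $\mathbb{Z}/12\mathbb{Z}$ (written additively) with $p=2$ one gets $G(2)=\{0,3,6,9\}$, which is indeed a subgroup. A quick Python sketch (the modulus and prime are arbitrary choices):

```python
from math import gcd

n, p = 12, 2                                # the group Z/n under addition, and a prime p

def order(g):
    return n // gcd(g, n)                   # additive order of g in Z/n

def is_p_power(m):
    while m % p == 0:
        m //= p
    return m == 1

Gp = {g for g in range(n) if is_p_power(order(g))}
print(sorted(Gp))                           # [0, 3, 6, 9] for n=12, p=2
print(all((a + b) % n in Gp and (-a) % n in Gp for a in Gp for b in Gp))   # closed: True
```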
H: Field extension $k(a)$ Well, first I write some definitions: Let $K|k$ be a field extension. Then $k(a)$ denotes $$ k(a)=\bigcap\{F:k\subseteq F \subseteq K \,\ a\in F \} $$ and is the smallest field of the extension $K|k$ such that $a$ belongs to it. With this, and given $a$ in $K$, I set $$ \begin{array}{cccc} \phi:& k[x]&\to & K\\ & p(x)& \to & \phi(p(x))=p(a) \end{array} $$ This function is a ring homomorphism, such that $\phi(\alpha)=\alpha, \ \forall \alpha \in k$ and $\phi(x)=a$. What I don't get is the following: $$\text{Im}(\phi)\subseteq k(a)$$ I'd appreciate any hint to prove this. AI: Hint Every field $F$ with $k \subseteq F \subseteq K$ and $a \in F$ is closed under addition and multiplication and contains the coefficients of each $p \in k[x]$ (they lie in $k$), so it contains $p(a)$ for every such $p$. Hence $\mathrm{Im}(\phi)=\{p(a)\ |\ p\in k[x]\} \subseteq F$ for every such $F$, and taking the intersection over all of them gives $\mathrm{Im}(\phi) \subseteq k(a)$. (Note that $\mathrm{Im}(\phi)$ itself is a subring of $K$ containing $k$ and $a$; it is a field — and then equal to $k(a)$ — exactly when $a$ is algebraic over $k$, but that is not needed for the inclusion you ask about.)