H: Prove that a perfect square is either a multiple of $4$ or of the form $4q+1$. Prove that a perfect square is either a multiple of $4$ or of the form $4q+1$ for some $q\in \mathbb{Z}$. Any ideas on how to start? Do I use a proof by contraposition? Also what's the definition of a perfect square? AI: Hint: A perfect square is an integer $k$ such that $k = n^2$ for some integer $n$. It follows that $k$ must be non-negative and that $n$ can be chosen to be non-negative. As for approaching the problem, first break it up into two cases: $k$ even and $k$ odd.
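A quick numerical illustration of the claim (a Python sanity check, not a substitute for the proof):

```python
# every perfect square is 0 or 1 mod 4: (2k)^2 = 4k^2 and
# (2k+1)^2 = 4(k^2 + k) + 1
square_residues = {(n * n) % 4 for n in range(1, 1000)}
print(square_residues)  # {0, 1}
```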
H: Summation of reciprocal of Product of Factorials. How can this summation be evaluated: $$\sum {1\over {a_1!a_2!\cdots a_m!}}$$ where $$a_1+a_2+\cdots+a_m=n.$$ Also $a_i \neq n$ and $m<n$. AI: $\sum\frac{n!}{\prod_ia_i!}=\sum\frac{n!}{\prod_ia_i!}\prod_ix_i^{a_i}$ with $x_i=1$; hence, by the multinomial theorem, $\sum\frac{n!}{\prod_ia_i!}=(\sum_i x_i)^{a_1+\ldots+a_m}=m^n$. Finally, the result is $m^n/n!$.
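For a small sanity check of the multinomial identity the answer uses, one can sum over all nonnegative compositions directly (the side condition $a_i \neq n$ from the question is ignored here, as in the answer; the function name is ad hoc):

```python
from itertools import product
from math import factorial

def reciprocal_factorial_sum(n, m):
    # sum of 1/(a_1! ... a_m!) over all a_i >= 0 with a_1 + ... + a_m = n
    total = 0.0
    for a in product(range(n + 1), repeat=m):
        if sum(a) == n:
            denom = 1
            for ai in a:
                denom *= factorial(ai)
            total += 1.0 / denom
    return total

print(abs(reciprocal_factorial_sum(5, 3) - 3 ** 5 / factorial(5)) < 1e-12)  # True
print(abs(reciprocal_factorial_sum(4, 2) - 2 ** 4 / factorial(4)) < 1e-12)  # True
```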
H: Show that 2S = S for all infinite sets I am a little ashamed to ask such a simple question here, but how can I prove that for any infinite set, 2S (two copies of the same set) has the same cardinality as S? I can do this for the naturals and reals but do not know how to extend this to higher cardinalities. AI: In order to prove this is true for all infinite sets you have to use the axiom of choice in one way or another (or at least a fragment of it). And indeed we cannot prove that in naive set theory. If you wish to prove that for "known" infinite sets, e.g. $\Bbb R$, then this can be done without the axiom of choice indeed. Using the axiom of choice, $S$ can be put in bijection with an ordinal $\delta$ - which we will only require to be a limit ordinal. Now we can easily define the map from $2\times\delta$ into $\delta$ by the following injection, for $\beta$ a limit ordinal or zero, $n\in\omega$ and $i\in\{0,1\}$ we define $$(i,\beta+n)\mapsto\beta+2n+i.$$ This is an injection, since given $(i,\beta+n)$ and $(i',\beta'+n')$ if $i\neq i'$ then clearly the results differ, if $i=i'$ and $n\neq n'$ the result must again differ, and similarly for $\beta\neq\beta'$. So two distinct pairs are mapped to two different ordinals. The above map is in fact a bijection (given $\gamma<\delta$ we can "decode" the pair that was mapped to it), but if one finds the proof of surjectivity any less than immediate, then we can define an injection from $\delta$ into $2\times\delta$ simply by $\alpha\mapsto(0,\alpha)$. Using the Cantor-Bernstein theorem we have that $|\delta|=2|\delta|$, and since $\delta$ and $S$ have the same cardinality we finish.
H: Is the sum of factorials of first $n$ natural numbers ever a perfect cube? If $S_n = 1! + 2! + 3! + \dots + n!$, is there any term in $S_n$ which is a perfect cube or out of $S_1$, $S_2$, $S_3$, $\dots S_n$ is there any term which is a perfect cube, where $n$ is any natural number. AI: All factorials above $8!$ have a factor of $27$ and $S_8 \equiv 9 \pmod {27}$ As there is no solution to $k^3 \equiv 9 \pmod {27}$, we cannot have $n \ge 9$. Then just checking $n$ up through $8$, only $S_1=1$ is a perfect cube.
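The modular argument and the small cases can be checked mechanically; the sketch below (plain Python, with a float-based cube test that is adequate at this size) reproduces both:

```python
from math import factorial

def is_perfect_cube(x):
    # float cube root plus neighbor check; fine at this magnitude
    r = round(x ** (1 / 3))
    return any((r + d) ** 3 == x for d in (-1, 0, 1))

S, cube_indices = 0, []
for n in range(1, 9):
    S += factorial(n)
    if is_perfect_cube(S):
        cube_indices.append(n)
print(cube_indices)  # [1]
print(S % 27, all(pow(k, 3, 27) != 9 for k in range(27)))  # 9 True
```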
H: Determine if the given integral is convergent $$\int_0^{\pi/2}{\log x\over x^a}\,\mathrm dx,\quad a<1$$ I tried solving using the $\mu$-test. So if I consider $\mu=1$ then I look at $\lim\limits_{x\rightarrow 0} {x\log x\over x^a}$. Solving further, I get a limit $0$, so I concluded the integral must be divergent. But according to the book it is convergent. AI: Hint: For any $\varepsilon>0$ we have $x^{\varepsilon}\log x \to 0$ as $x \to +0$. Let $\varepsilon := \dfrac{1-a}{2}$, write $\dfrac{\log x}{x^a} = \dfrac{x^\varepsilon \log x}{x^{a+\varepsilon}}$ and compare with $\int_0^{\pi/2} \dfrac{dx}{x^{a+\varepsilon}}$.
H: Minimisation of a distance sum I have a list $L$ of $N$ numbers, and I want to choose $k$ numbers $\{x_1,x_2, \ldots,x_k\} \subseteq L$ in such a way that the value $S$ over those $k$ numbers is minimum. $$ S = \sum_{0< i < j \le k} \left| x_i-x_j\right| $$ Suppose $N=4$, the list is $\{10,20,30,100\}$ and $k = 3$. Then we can choose $\{10,20,30\}$, $\{10,20,100\}$, $\{20,30,100\}$ or $\{10,30,100\}$. I would prefer the first one because in that case the value of $S$ is $|10-20| + |20-30| + |30-10| = 40$, which is minimum among all selections. Can we claim that if we sort the initial list then we will have to choose $k$ consecutive elements of the list? AI: Yes. Assume that $m$ and $p$ are consecutive elements of your chosen set and $n$ is in $L$, $m \lt n \lt p$, and not in the chosen set. I claim that you can replace at least one of $m,p$ with $n$ and reduce the sum. Continuing this, you can slide all the unused numbers out the end of the chosen list, reducing the sum along the way. Let your chosen list have $r$ numbers less than $m$ and $s$ numbers greater than $p$. Then if we replace $m$ with $n$ we change the sum by $r(n-m)-(s+1)(n-m)$, because we move $n-m$ units further away from $r$ numbers and the same distance nearer to $s+1$ of them (including $p$). Similarly, replacing $p$ with $n$ changes the sum by $-(r+1)(p-n)+s(p-n)$. The first change is $(r-s-1)(n-m)$ and the second is $(s-r-1)(p-n)$; since the coefficients $r-s-1$ and $s-r-1$ sum to $-2$, at least one of them is negative, so at least one of the two replacements strictly reduces the sum. Whichever one you removed now becomes a hole, and you can continue to move the hole in the same direction until it is no longer between two of your selected numbers.
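A brute-force check of the claim on small instances (illustrative only; the helper names are ad hoc):

```python
from itertools import combinations

def pair_distance_sum(xs):
    return sum(abs(a - b) for a, b in combinations(xs, 2))

def optimum_is_consecutive(L, k):
    # compare the best over all k-subsets with the best over sorted windows
    L = sorted(L)
    best_any = min(pair_distance_sum(c) for c in combinations(L, k))
    best_window = min(pair_distance_sum(L[i:i + k])
                      for i in range(len(L) - k + 1))
    return best_any == best_window

print(optimum_is_consecutive([10, 20, 30, 100], 3))        # True
print(optimum_is_consecutive([1, 2, 50, 51, 52, 100], 3))  # True
```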
H: Integer values of $a$ for which the expression $(x+a)(2013+x)+1$ is a Perfect Square Calculation of all integer values of $a$ for which the expression $(x+a)(2013+x)+1$ is a perfect square. $\underline{\bf{My\;Try}}:$ Given $(x+a)\cdot (x+2013)+1 = y^2$. So $x^2+ax+2013x+2013a+1 = y^2\Rightarrow x^2+(a+2013)x+(2013a+1-y^2) = 0$. Now I do not understand how to proceed after that. Help required, thanks. AI: Hint: $(x+a)(2013+x)+1=(x+b)^2$.
H: Inequality, what is wrong with $\log(-1) = - \log(-1)$? Can anyone tell me what is wrong with the following line of argument: $$ \log(-1) = \log(-1) - \log(1) = - \bigg( \log(1) - \log(-1) \bigg) = - \log \Big( \frac{1}{-1} \Big) = - \log(-1) $$ Considering the complex logarithm the left-hand-side evaluates to $ i \pi $ and the right-hand-side evaluates to $- i \pi$. AI: It's because the logarithm, if you extend it to the complex numbers, is only properly defined on a Riemann surface. What does that mean? First of all, notice that you're getting $i\pi$ on the left hand side, and $-i\pi$ on the right hand side. It is interesting to notice that: $$ e^{i\pi}=-1=e^{-i\pi} $$ Now the logarithm can be naively defined as the 'inverse of the exponential'. So why do we say $\log(-1)=i\pi$ and not $\log(-1)=-i\pi$? Well, first notice that $e^{2i\pi}=1$. So $e^a=e^{a+2ni\pi}$ for any integer $n$. This means that the logarithm is only defined up to addition of $2ni\pi$. There are two ways round this: Just choose a value for the logarithm at each point. Remember that the logarithm of a complex number $re^{i\theta}$ is given by $\log_e(r)+i\theta$, where $\theta$ is the argument. It is common to specify that, for example, $-\pi<\theta\le\pi$. This is called a branch cut, because we cut along the negative real axis, and end up with one branch of the complex logarithm - another branch is given by $\pi<\theta\le3\pi$. For the first branch $\theta$ for $-1$ is $\pi$, so we say that $\textrm{Log}(-1)=i\pi$ (the capital L on $\textrm{Log}$ refers to the principal value of the logarithm - what we get if we restrict ourselves to a particular branch). Notice that this logarithm coincides with the real logarithm if and only if the argument for real numbers is chosen to be $0$ and not $2n\pi$ for some integer $n\ne 0$. The branch cut means that the logarithm is not continuous on the negative real axis (indeed, it is usually not defined along the cut at all). 
Instead of studying the complex logarithm over the complex numbers, we introduce a Riemann surface - a one-dimensional complex manifold that 'joins up' the branch cuts so that the logarithm is now continuous everywhere. This picture shows you sort of what $\log$ looks like on a Riemann surface: Unfortunately, in neither of those cases can you assume that usual relationships like $\log(ab)=\log(a)+\log(b)$ still hold. For example, if we make a branch cut along the negative real axis, we could try to work out $\log((-1+i)\times(-1+i))=\log(-2i)$ by writing it as $2\log(-1+i)$. The argument $\theta$ for $-1+i$ is $3\pi/4$, but doubling that angle 'crosses' the branch cut, and we end up with $-2i$, which has an argument not of $3\pi/2$ but of $-\pi/2$. Of course, we could place the cut somewhere else and we'd be fine. In your example, though, you will end up crossing the branch cut wherever you put it. Why is that? Your example is basically equivalent to saying that: $$ \log(-1)=\log\left(\frac{1}{-1}\right)=-\log(-1) $$ Now $-1$ can be thought of as going round the unit circle by an angle of $\pi$. The reciprocal of a complex number always has the negative of the argument of the original number, so $\dfrac{1}{-1}$ can be thought of as going round the unit circle by an angle of $-\pi$; i.e., you reach the same point travelling in the opposite direction. Between them, the two paths cover the entire unit circle. Since you've got to place a branch cut somewhere on the circle, you're going to end up crossing it at some point, so you've got to get an answer that makes no sense. Update: I can't remember where I first read this, but it's definitely true. It's easy to look at something like this and conclude that it's a slightly annoying area of mathematics that you have to deal with even though you'd rather it worked a different way. 
In fact, the opposite is true: this seemingly annoying property of the complex logarithm in fact opens up many beautiful areas of mathematics, including the study of Riemann surfaces.
H: Projection and direct sum I want to show that for every projection $A^2=A$ we have that there exists a subspace $U_1 \subset ker(A)$ and $U_2$ such that $A|_{U_2} = id$ such that $V = U_1 \oplus U_2$. Does anybody here have a hint how to show this? AI: Hint: (1) For $x \in V$ we have $x = (x-Ax) + Ax$. (2) What can you say about $A(x-Ax)$ and $A(Ax)$?
H: Number of ways $n$ distinct objects can be put into two identical boxes The number of ways in which $n$ distinct objects can be put into two identical boxes, so that neither box remains empty. My Try: If the question were about the number of ways in which $n$ distinct objects can be put into two distinct boxes so that no box remains empty, then I could solve it easily; this can be done in $2^n-2$ ways. But I do not understand how to solve the original question. AI: The only change needed if the two boxes are identical is to divide the number of ways by two, since swapping the boxes produces two different cases when the boxes (as well as the objects) are distinct. So there are $(2^n-2)/2 = 2^{n-1} -1$ ways to fill the two identical boxes, subject to having neither empty.
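The count can be verified by direct enumeration for small $n$ (a quick sketch; `identical_box_count` is an ad-hoc helper name):

```python
from itertools import product

def identical_box_count(n):
    # assign each of n distinct objects to box 0 or box 1, keep assignments
    # with neither box empty, and identify each assignment with its
    # complement since the boxes are indistinguishable
    seen = set()
    for assign in product((0, 1), repeat=n):
        if 0 < sum(assign) < n:
            comp = tuple(1 - a for a in assign)
            seen.add(min(assign, comp))
    return len(seen)

for n in range(2, 8):
    print(n, identical_box_count(n), 2 ** (n - 1) - 1)
```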
H: An 'obvious' property of algebraic integers? I am looking at the book A Brief Guide to Algebraic Number Theory by H. P. F. Swinnerton-Dyer. I like the section on page 1 'the ring of integers' as it gives a motivation for choosing which elements we would like to regard as integers and how we get the definition in terms of monic polynomials. He lists the 'obvious' properties which one would want the integers ${\frak{o}}_k$ of an algebraic number field $k$ to have. Property number 3 is: ${\bf{3.}} \ {\frak{o}}_{k} \otimes_{\mathbb{Z}} \mathbb{Q}= k $. I have not come across this tensor product notation before, but I have a feeling this statement is related to the requirement that the field $k$ should be the field of fractions of ${\frak{o}}_k$. Is this the case, and if so how can the statement 3 be 'translated' into this requirement? Is it really as obvious as he claims? Why do you think he has chosen to state it in this way? A link to the book. AI: Essentially, this means that every element of $k$ can be written as $\frac{\alpha}{n}$ where $\alpha\in\mathcal{O}_k$ and $0\neq n\in\mathbb Z$. This is stronger than the field of fractions property you've given.
H: Show that ON-sequence is a base I have a Hilbert space $H$, a base $(e_n)_{n=1}^\infty$ and an ON-sequence $(f_n)_{n=1}^\infty$. Given $$ \sum_{n=1}^\infty ||e_n - f_n||^2 < 1 $$ show that $(f_n)_{n=1}^\infty$ is a base. My work: It is straightforward to rewrite the sum $$ \sum_{n=1}^\infty (1 - \langle e_n, f_n\rangle) = \sum_{n=1}^\infty \langle e_n, e_n - f_n\rangle < \frac12 $$ and it also holds for each $n$ that $$ \langle e_n, f_n\rangle > \frac12 $$ I have then tried to argue by contradiction, assuming there is a non-zero vector $a$ that is orthogonal to every $f_n$: Assume there is a vector $a$ s.t. $||a|| = 1$ and $\langle a, f_n\rangle = 0$ for all $n$. $$ \langle a, f_n \rangle = \langle \sum_{m=1}^\infty\langle a, e_m\rangle e_m, \sum_{k=1}^\infty\langle f_n, e_k \rangle e_k \rangle = \sum_{m=1}^\infty \langle a, e_m\rangle \langle f_n, e_m \rangle = ... $$ and I want to somehow show that this is $> 0$ for a contradiction, based on the given inequality. Am I on the right track? I'm just treading water right now and end up rewriting all the expressions in different ways without progress. Hints are appreciated (note the homework tag) AI: Hint: $$\left|\langle a,e_n \rangle\right| = \left|\langle a, e_n - f_n \rangle \right| \le \|a\| \|e_n - f_n\|$$ Now use the given inequality...
H: Does this sequence of inverse-binomial numbers have a name? I was inspired by considering the following: $$\left(\sum_{i=1}^n i\right)^2=\sum_{i=1}^n i^3$$ Are there exact formulas for the sums of the powers of the integers? For example, we have: $$\sum_{i=1}^n i={n(n+1)\over 2}$$ $$\sum_{i=1}^n i^2={n(n+1)(2n+1)\over 6}$$ Do we have more of these? Then I began to consider them using binomial notation: $$\sum_{i=1}^n i={n(n+1)\over 2}={n+1\choose 2}$$ $$\sum_{i=1}^n i^2={n(n+1)(2n+1)\over 6}={n+1\choose 3}+{n\choose 3}$$ Upon further searching, I managed to come up with the following: $$\sum_{i=1}^n i^3={n+2\choose 4}+4{n+1\choose 4}+{n\choose 4}={n+1\choose 2}^2$$ $$\sum_{i=1}^n i^4={n+3\choose 5}+11{n+2\choose 5}+11{n+1\choose 5}+{n\choose 5}$$ $$\sum_{i=1}^n i^5={n+4\choose 6}+26{n+3\choose 6}+66{n+2\choose 6}+26{n+1\choose 6}+{n\choose 6}$$ I know that these could also be written in terms of decreasing choice (i.e., $a{n\choose 5}+b{n\choose 4}+c{n\choose 3}+\dots$), but I wonder about the particular coefficients displayed here. Note that these coefficients are the same ones that would be used if the formula were to generate the sequence of "powers of n" instead of the sum over them due to the identity ${n+1\choose k+1}-{n\choose k+1}={n\choose k}$. I have found a recursive formula for them given exponent $k$: $$a_k(i)=i^k-\sum_{j=1}^{i-1}a_k(j){k+i-j\choose k}$$ The first few sequences are: 1 = 1! 1 1 = 2! 1 4 1 = 3! 1 11 11 1 = 4! 1 26 66 26 1 = 5! So my questions are: Does the sequence of coefficients for a given $k$ have a name? I see that for each $k$ the sum over the sequence is $k!$. Note that the Bernoulli numbers are not the same as these since these are constructed individually for each $k$, and for all $i\gt k, a_k(i)=0$. These sequences are symmetrical and appear to have properties very similar to that of Pascal's Triangle. Is there a simple rule to generate them along the same lines? 
AI: Using $\def\falling#1{^{\underline#1}}x\falling k=k!\binom xk=x(x-1)\ldots(x-k+1)$ (also written as $(x)_k$, but I find it unpleasant to read), one has $$ \def\stirs{\genfrac\{\}0{}}x^n=\sum_{k=0}^n \stirs nk x\falling k, \qquad\text{and hence}\quad x^n=\sum_{k=0}^n k!\stirs nk \binom xk $$ where $\stirs nk$ denote the Stirling numbers of the second kind. The inverse transformation uses the (signed) Stirling numbers of the first kind $\def\stirf{\genfrac[]0{}}(-1)^{n-k}\stirf nk$: $$ x\falling n=\sum_{k=0}^n (-1)^{n-k}\stirf nk x^k \qquad\text{and hence}\quad \binom xn=\sum_{k=0}^n \frac{(-1)^{n-k}}{n!}\stirf nk x^k. $$ These numbers have many interesting combinatorial properties, but no simple closed formula. Your question seems to pursue another course: the numbers $a_k(i)$ appear to be the Eulerian numbers $\genfrac<>0{}ki$.
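Both the recursion from the question and the identification with the Eulerian numbers are easy to test numerically; the sketch below uses the standard Eulerian recurrence $A(n,m)=(m+1)A(n-1,m)+(n-m)A(n-1,m-1)$ (function names are ad hoc):

```python
from math import comb, factorial

def a(k, i):
    # the recursion proposed in the question
    return i ** k - sum(a(k, j) * comb(k + i - j, k) for j in range(1, i))

def eulerian(n, m):
    # standard recurrence A(n, m) = (m+1) A(n-1, m) + (n-m) A(n-1, m-1)
    if m < 0 or m >= n:
        return 0
    if n == 1:
        return 1
    return (m + 1) * eulerian(n - 1, m) + (n - m) * eulerian(n - 1, m - 1)

for k in range(1, 7):
    row = [a(k, i) for i in range(1, k + 1)]
    assert row == [eulerian(k, i - 1) for i in range(1, k + 1)]
    assert sum(row) == factorial(k)   # row sums are k!
print("rows match the Eulerian numbers for k = 1..6")
```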
H: How prove this $a+b\le 1+\sqrt{2}$ let $0<c\le b\le 1\le a$, and such that $a^2+b^2+c^2=3$, show that $a+b\le 1+\sqrt{2}$ My try: let $ c^2=3-(a^2+b^2)\le b$ AI: As Sun stated, given a valid tuple $(a,b,c)$, replace it with $(A, b, 0)$ where $ A^2 = a^2 + c^2$. Observe that $a + b \leq A + b$, hence it remains to show that $ A + b \leq 1 + \sqrt{2}$. Squaring this, and using $A^2+b^2=3$, we need to show that $ A^2 + 2Ab + b^2 \leq 3 + 2 \sqrt{2}$, or that $Ab \leq \sqrt{2}$. But $A^2 + 2b^2 = 3 + b^2 \leq 4$ since $b \leq 1$, so by AM-GM $ 2 \sqrt{2}\, Ab \leq A^2 + 2b^2 \leq 4$, hence we do have $Ab \leq \sqrt{2}$.
H: What to do when $x$ in $\Gamma(x)$ is a negative integer? I have the following likelihood calculation: \begin{align}\mathcal{L}(s|\alpha) = \sum_{i=1}^O\Biggl\{\ln\frac{ \Gamma(\alpha_0 )}{ \Gamma( B )}- \sum_{k=1}^K \ln \Gamma(\hat{\mathcal{S}}_k^i+1) + \ln \biggl[ \sum_{k=1}^K \Bigl( \ln \Gamma(\hat{\mathcal{S}}_k^i + \alpha_k) - \ln \Gamma(\alpha_k) \Bigr) \biggr] \Biggr\}\end{align} There are several $\Gamma$ functions in there. $\Gamma$ is not defined for negative integers. When one of the $x$ values in $\Gamma(x)$ is a negative integer, what should I do? Can I add a small number to $x$ (e.g. $x = x+10^{-131}$ if $x$ is a negative integer)? Edit: https://math.stackexchange.com/a/263755/96592 could be the answer? So if $x$ is negative, $\Gamma(x) = \frac{\Gamma(x+\epsilon)}{\Gamma(\epsilon)}$ with $\epsilon$ close to $0$? Not sure if I understand that correctly. AI: $\Gamma(x)$ has poles at the non-positive integers, so you won't be able to solve the problem by adding a small value to $x$, since the smaller the value you use, the larger (or more negative, depending on whether $x$ is odd or even) $\Gamma(x+\epsilon)$ will be. In other words, for even $x \in \mathbb{Z}_{\leq 0}$, $\lim_{\epsilon \rightarrow 0^+}\Gamma(x+\epsilon)=\infty$ and $\lim_{\epsilon \rightarrow 0^-}\Gamma(x+\epsilon)=-\infty$, and vice versa for odd $x \in \mathbb{Z}_{\leq 0}$.
H: sketch graph by given data Suppose that we are given the following data; clearly in this case it is talking about a linear form, right? I mean the $y=kx+b$ form. In this case we can simply choose any two points, for example $(1990,11)$ and $(1992,26)$, calculate the slope and finally find $b$. Is that all that is required, or should we do some more advanced interpolation, like the least squares method or a more advanced one? Thanks in advance. AI: I don't know exactly what the scope of this problem is, but I think that a simple linear interpolation between two data points is probably sufficient for this problem. However! this function overall might not be linear - try doing part (a) and see if you agree. If you really wanted to get a close estimate, you could try and find a curve of best fit of the right type, which is exponential (notice that as $t$ increases by $2$, $N$ roughly doubles), and then use the result to estimate answers for (b). But again, I think a simple linear interpolation for (b) is probably fine.
H: Needing help picturing the group idea 2 Needing help picturing the group idea. A further extension relating to the question. Give an example where $XY$ is not a subgroup of G, where $X$ and $Y$ are subgroups of $G$. Show that $XY$ is a subgroup if $XY=YX$. Hence show in particular that $XY$ is a subgroup if either $X$ or $Y$ is a normal subgroup of $G$. AI: Consider $S_3$ and let $X=\{e,(1\;2)\}$ and $Y=\{e,(1\;3)\}$. Then $XY=\{e,(1\;2),(1\;3),(1\;3\;2)\}$, which is not a subgroup of $S_3$. Other questions are available in any standard book.
H: Order of subgroups and number of elements of order $3$ in a group of order $9$ Let $G$ be a group of order $9$. 1) State the possible orders of subgroups and elements in $G$. 2) Find the number of elements of $G$ of order $3$ in the cases where (a) $G$ is non-cyclic, (b) $G$ is cyclic. I have a problem with (b). Here is what I did so far; is it right? We know that for $G$ a group of finite order, $\forall H \leq G$, $\vert H \vert $ divides $\vert G\vert$. $G$ has order $9 =3\cdot3=1\cdot9$, so the possible orders of subgroups and elements in $G$ are $1$, $3$ and $9$. a) If $G$ is not cyclic then there can't be an element $a\in G$ such that $\langle a\rangle=G$, i.e. there is no $a\in G$ with $\vert a \vert = \vert G \vert = 9$, so every element other than the identity must have order $3$, i.e. there are $8$ elements of order $3$. b) If $G$ is cyclic then I guess there are none (probably because the only order possible is then $9$, but I don't see how to prove it properly). Thanks AI: Hints: (1) For $\;G\;$ cyclic, $\;|G|=n\;$: for any divisor $\;d\;$ of $\;n\;$, there's exactly one subgroup of $\;G\;$ of order $\;d\;$ which, of course, is also cyclic. (2) A cyclic group of order $\;n\;$ always has $\;\phi(n)\;$ elements of order $\;n\;$ (and thus also generators of the whole group).
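The two counts suggested by the hints can be checked concretely, using $\Bbb Z_9$ for the cyclic case and $\Bbb Z_3\times\Bbb Z_3$ for the non-cyclic one (every group of order $9$ is abelian, so up to isomorphism these are the only two):

```python
from math import gcd

# cyclic case, modeled as Z_9 (additive): the order of k is 9 // gcd(9, k)
order3_cyclic = [k for k in range(1, 9) if 9 // gcd(9, k) == 3]
# non-cyclic case, modeled as Z_3 x Z_3: 3*(a, b) = (0, 0), so every
# non-identity element has order exactly 3
order3_noncyclic = [(a, b) for a in range(3) for b in range(3)
                    if (a, b) != (0, 0)]
print(len(order3_cyclic), len(order3_noncyclic))  # 2 8
```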
H: Limits problem: Factoring a cube root of x? Disclaimer: I am an adult learning Calculus. This is not a student posting his homework assignment. I think this is a great forum! $$\lim_{x\to8}{\frac{\sqrt[3] x-2}{x-8}}$$ How do I factor the top to cancel the $x-8$ denominator? AI: Actually you need to factor the denominator: $x-8 = (\sqrt[3]{x})^3-2^3 = (\sqrt[3]{x}-2)\left((\sqrt[3]{x})^2+2\sqrt[3]{x}+2^2\right)$.
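Numerically, the difference quotient indeed approaches $1/\big((\sqrt[3]{8})^2 + 2\sqrt[3]{8} + 4\big) = 1/12$ after the cancellation (a sanity check, not a proof):

```python
def quotient(x):
    # the original difference quotient; after factoring x - 8 as
    # (x^(1/3) - 2)((x^(1/3))^2 + 2 x^(1/3) + 4), the limit at x = 8
    # is 1 / (4 + 4 + 4) = 1/12
    return (x ** (1 / 3) - 2) / (x - 8)

for h in (1e-3, 1e-5, 1e-7):
    print(quotient(8 + h))  # approaches 1/12 = 0.08333...
```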
H: Prove $\int_0^1|f(t)-g(t)|dt \le (\int_0^1|f(t)-g(t)|^2dt)^{1/2} \le \sup_{t\in[0,1]}|f(t)-g(t)|$ Let $C[0,1]$ be the set of all continuous real-valued functions on $[0,1]$. Consider these 3 metrics on $C$. $p(f,g)=\sup_{t\in[0,1]}|f(t)-g(t)|$ $d(f,g)=(\int_0^1|f(t)-g(t)|^2dt)^{1/2}$ $t(f,g)=\int_0^1|f(t)-g(t)|dt$ Prove that for every $f,g\in C$ the following holds: $t(f,g)\le d(f,g)\le p(f,g)$ I understand that $t(f,g)\le p(f,g)$ since $t(f,g)=\int_0^1|f(t)-g(t)|dt \le \int_0^1\sup_{t\in[0,1]}|f(t)-g(t)|dt =\sup_{t\in[0,1]}|f(t)-g(t)|=p(f,g)$. But I can't get the others. I think using Schwarz's inequality might be useful $(\int_0^1 w(t)v(t)dt)^2 \le (\int_0^1w^2(t)dt)(\int_0^1v^2(t)dt)$ AI: The same reasoning you used applies to show $d(f,g)\leq p(f,g)$. So that leaves you with $t(f,g)\leq d(f,g)$. This last one is a particular case of the Cauchy-Schwarz inequality. You start with the number inequality $$ |ab|\leq\frac{|a|^2}2+\frac{|b|^2}2. $$ Applied to an integral you get $$ \int_0^1|h(t)k(t)|\,dt\leq\frac12\,\int_0^1|h(t)|^2\,dt+\frac12\,\int_0^1|k(t)|^2\,dt. $$ In particular, you get $$ \int_0^1\left|\frac{h(t)}{\left(\int_0^1|h(t)|^2\,dt\right)^{1/2}}\,\frac{k(t)}{\left(\int_0^1|k(t)|^2\,dt\right)^{1/2}}\right|\,dt\leq1, $$ or $$ \int_0^1|h(t)k(t)|\,dt\leq\left(\int_0^1|h(t)|^2\,dt\right)^{1/2}\,\left(\int_0^1|k(t)|^2\,dt\right)^{1/2}. $$ Now you take $h(t)=f(t)-g(t)$, $k(t)=1$, to get $t(f,g)\leq d(f,g)$.
H: Limits problem with trig? Factoring $\cos (A+B)$? Disclaimer: I am an adult learning Calculus. This is not a student posting his homework assignment. I think this is a great forum! $$\lim_{h\to0} \frac{\cos(\frac{\pi}{3}+h)-\frac{1}{2}}{h}$$ Do I use the angle addition formula to do this? I did that, and have no idea where to go from there. $$\lim_{h\to0}\frac{\cos\frac{\pi}{3}\cos(h)-\sin\frac{\pi}{3}\sin(h)-\frac{1}{2}}{h}$$ What now? AI: Let $\phi(x) = \cos(\frac{\pi}{3}+x)$. Note that $\phi(0) = \frac{1}{2}$, and $\phi$ is differentiable, with $\phi'(x) = -\sin(\frac{\pi}{3}+x)$. So, the limit is $\lim_{h \to 0} \frac{\phi(h)-\phi(0)}{h} = \phi'(0) = -\sin \frac{\pi}{3} = -\frac{\sqrt{3}}{2}$.
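A numeric sanity check that the quotient tends to $-\sin(\pi/3) = -\sqrt3/2$:

```python
from math import cos, pi, sqrt

def quotient(h):
    # difference quotient for phi(x) = cos(pi/3 + x) at x = 0
    return (cos(pi / 3 + h) - 0.5) / h

for h in (1e-3, 1e-5, 1e-7):
    print(quotient(h))  # approaches -sqrt(3)/2 = -0.86602...
```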
H: algebra, equivalence relation regarding associates If f(x) ~ g(x) if and only if f and g are associates, prove this is an equivalence relation have tried to prove this both ways, struggling AI: Well you need to show 3 things : Reflexivity : Take $u=1$ Symmetry : If $u$ works in one direction, then $u^{-1}$ works in the other. Transitivity : If $f(x) = ug(x)$ and $g(x) = vh(x)$, then $f(x) = (uv)h(x)$, and $uv$ is a unit if $u$ and $v$ are.
H: Two forms of the derivative $ f'(x) = \lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}$ Why can I write the derivative formula $$ f'(x) = \lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}$$ in this form: $$ f'(x)=\frac{f(x+h)-f(x)}{h}+O(h)$$ I know that it's easy but unfortunately I forgot. AI: The formula $f'(x) = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}$ is equivalent to $\lim_{h \to 0} \frac{f(x+h)-f(x)-f'(x)h}{h} = 0$. This in turn is equivalent to the function $g(h) = f(x+h)-f(x)-f'(x)h$ satisfying $\lim_{h \to 0} \frac{g(h)}{h} = 0$. Such a function is referred to as 'little o' or $o(h)$, and we say $g$ is 'little o' of $h$, or simply just write $o(h)$. That is, we abuse notation and write $f(x+h)-f(x)-f'(x)h = o(h)$, which gives rise to $f(x+h) = f(x)+f'(x)h + o(h)$. Note: This is $o(h)$, not $O(h)$. The function $f(x) = |x|$ is not differentiable at $x=0$, but we can write $f(h)=f(0)+0\cdot h + O(h)$, but you cannot replace the $O(h)$ by $o(h)$, if you see what I mean.
H: Is this an Equivalence Relation and why? If $I$ is a set of positive integers and the relation $\def\R{\mathrel R}\R$ is defined over the set $I$ by $x\R y$ iff $x^y = y^x$, is this an equivalence relation and why? AI: Reflexivity and symmetry are trivial, so let's test transitivity: Assume $x^y=y^x$ and $y^z=z^y$. We have to show that $x^z=z^x$. We have $$ y^{xz}=(y^x)^z=(x^y)^z=x^{yz}$$ and $$ y^{zx}=(y^z)^x=(z^y)^x=z^{yx}$$ hence $$ (x^z)^y=x^{yz}=z^{yx}=(z^x)^y.$$ If $y>1$, we can take $y$th roots and obtain $x^z=z^x$ as was to be shown. And if $y=1$, then immediately $x=x^y=y^x=1=y^z=z^y=z$ and therefore also $x^z=z^x$.
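As a side note, the relation has very few nontrivial instances on the integers; a small exact-arithmetic search (illustration only) finds just the classic $2^4 = 4^2$:

```python
# search for x^y = y^x with x < y among small positive integers;
# exact integer arithmetic, so no rounding issues
pairs = [(x, y) for x in range(1, 40) for y in range(x + 1, 40)
         if x ** y == y ** x]
print(pairs)  # [(2, 4)]
```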
H: How to solve $q= \frac{\ln{n}}{\ln{b} + \ln{q}+\ln\ln{n}}$ I have come across this equation recently. All the variables are positive and real too. $$q= \frac{\ln{n}}{\ln{b} + \ln{q}+\ln\ln{n}}.$$ Under what conditions can this be solved for $q$? AI: Let $q = e^x$, so that $\ln q = x.$ Also, put $A = \ln{n},$ $B = \ln{b},$ and $C = \ln{\ln{n}}.$ Then your equation takes the form $$e^x \; = \; \frac{A}{B + x + C}$$ Multiplying both sides by $B + x + C$ gives $$ e^x \left(B + x + C \right) \; = \; A$$ Now make the variable change $u = B + x + C,$ so that $x = u - B - C.$ Then we get $$e^{u - B - C} \cdot u \; = \; A$$ Multiplying both sides by $e^{B+C}$ gives $$e^u \cdot u \; = \; Ae^{B+C}$$ Now we can express the solution in terms of the Lambert function: $$ u = W\left(Ae^{B+C}\right)$$ Hence, recalling that $u = B + x + C$ and $q = e^x$, we get $$ B + x + C \; = \; W\left(Ae^{B+C}\right)$$ $$ x \; = \; W\left(Ae^{B+C}\right) \; - \; B - C$$ $$ e^x \;\; = \;\; \exp\left[W\left(Ae^{B+C}\right) - \; B - C \right]$$ $$ q \;\; = \;\; \exp\left[W\left(Ae^{B+C}\right) \; - \; B - C \right]$$ $$ q \;\; = \;\; \exp\left[W\left(\left(\ln{n}\right)e^{\ln{b} + \ln{\ln{n}}}\right) \; - \; \ln{b} - \ln{\ln{n}} \right]$$ (a few minutes later) Using $e^{\ln{b} + \ln{\ln{n}}} \; = \; e^{\ln{b}} \cdot e^{\ln{\ln{n}}} \; = \; b\ln{n},$ we get $$ q \;\; = \;\; \exp\left[W\left(b\left(\ln{n}\right)^2\right) \; - \; \ln{b} - \ln{\ln{n}} \right]$$
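The closed form can be sanity-checked numerically. The sketch below implements the principal branch of $W$ with a simple Newton iteration (the `lambert_w` helper and the sample values $n=100$, $b=2$ are assumptions of this sketch, not part of the answer) and verifies the fixed-point equation:

```python
from math import exp, log

def lambert_w(t, tol=1e-14):
    # principal branch of W via Newton iteration on w*e^w = t (assumes t > 0)
    w = log(t + 1.0)  # rough starting guess
    for _ in range(100):
        e = exp(w)
        step = (w * e - t) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

n, b = 100.0, 2.0
A, B, C = log(n), log(b), log(log(n))
q = exp(lambert_w(A * exp(B + C)) - B - C)
residual = abs(q - log(n) / (log(b) + log(q) + log(log(n))))
print(residual < 1e-9)  # True
```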
H: How to factor and find zeros of $2x^6+5x^4-x^2$ In the equation $y=2x^6+5x^4-x^2$, how can I factor it in order to find the zeros or x-intercepts? This is what I've gotten to, but I can't see what to do next: $x^2(x^2(2x^2+5)-1)$ AI: $$y=2x^6+5x^4-x^2 = x^2(2x^4 + 5x^2 - 1)$$ For the factor $\;2x^4 + 5x^2 -1,\;$ put $t = x^2$: $$2x^4 + 5x^2 - 1 = 2t^2 + 5t - 1$$ Suggestion: Use the quadratic formula to find the roots (also zeros) $t_1, t_2$, then back substitute, knowing $t = x^2 \implies x = \pm \sqrt t$. So for each nonnegative zero $t_i$, there are two corresponding real zeros, $x_j, x_k$, such that $x_{j, k} = \pm \sqrt{t_i}$.
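Following the suggestion numerically (variable names are illustrative; only the positive root of the quadratic yields real zeros):

```python
from math import sqrt

# y = x^2 (2x^4 + 5x^2 - 1); with t = x^2 the quartic factor becomes
# 2t^2 + 5t - 1, handled by the quadratic formula
a, b, c = 2.0, 5.0, -1.0
disc = sqrt(b * b - 4 * a * c)
t1 = (-b + disc) / (2 * a)   # positive root: gives real zeros +-sqrt(t1)
t2 = (-b - disc) / (2 * a)   # negative root: no real zeros (t = x^2 >= 0)
real_zeros = [0.0, sqrt(t1), -sqrt(t1)]
residuals = [abs(2 * x ** 6 + 5 * x ** 4 - x ** 2) for x in real_zeros]
print(t1 > 0 > t2, max(residuals) < 1e-12)  # True True
```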
H: Morphisms of chain complexes I have a small question: Why is the following true? "If we have a continuous mapping between two topological spaces $f:X\rightarrow Y$, we can associate to it a morphism of chain complexes $f_*\colon C_\bullet (X)\rightarrow C_\bullet(Y)$" thank you. AI: I suppose you are considering singular complexes for topological spaces: i.e. the complexes $(C_n(X),\delta_n)$ where $C_n(X)$ is the free $\mathbb Z$-module having as basis the set $\mathbf {Top}(\Delta^n,X)=\{\sigma \mid \sigma \colon \Delta^n \to X\}$ of continuous mappings from the standard $n$-simplex into the space. If that's the case there's a very easy way to produce from a continuous mapping $f \colon X \to Y$ a chain map between the complexes $C_\bullet(X)$ and $C_\bullet(Y)$: you define the maps $f_n \colon C_n(X) \to C_n(Y)$ as the unique linear maps such that for every $n \in \mathbb N$ and for every $\sigma \colon \Delta^n \to X$ (which is an element of the basis of $C_n(X)$) you have $f_n(\sigma)=f \circ \sigma$. Of course you should verify that these homomorphisms $f_n$ form a chain map, i.e. commute with the $\delta_n$. But you can easily prove this by simply verifying the relation holds for the elements of the basis of the $C_n(X)$'s.
H: Simplifying pre-calc I'm doing my homework and I'm stuck on this problem: $$ \frac{\frac 1 {\sqrt{a}} + \frac {\sqrt{a}} 2}{\frac 1 {\sqrt{a}} - \frac {\sqrt{a}} 2} $$ I tried playing around with the powers, but failed pretty hard. AI: Multiply numerator and denominator by $2\sqrt{a}$ and you're done: $$ \frac{\frac 1 {\sqrt{a}} + \frac {\sqrt{a}} 2}{\frac 1 {\sqrt{a}} - \frac {\sqrt{a}} 2} = \frac{2+a}{2-a}. $$
H: Automorphism of $Q_8$ Could anyone help me prove that $Aut(Q_8)\cong S_4$? Someone told me that there's an isomorphism between the group of rigid motions of the cube and $Aut(Q_8)$; any ideas? Thank you! AI: In $Q_8$ the central elements $1,-1$ are clearly fixed by all automorphisms. The remaining three pairs $(i,-i)$, $(j,-j)$, $(k,-k)$ must also stay together (they could be permuted among each other, and within a pair the elements could be switched). It is natural to associate these pairs with the three pairs of opposite faces of a cube. Associating $i,j,k$ to faces such that the vectors from the center to them form a positively oriented reference frame, try to show that all positive reference frames correspond to oriented triples $(x,y,z)$ with $xy=z$. It is clear that automorphisms of $Q_8$ map such triples into each other; if you show that they do so (simply) transitively, then you are done.
H: cell-by-cell constraints within a positive-definite matrix? One simple constraint in a positive definite matrix relates each off-diagonal cell to the corresponding on-diagonal cells: $$ |m_{ij}| \lt \sqrt{m_{ii}m_{jj}} $$ While this may be a necessary condition, I guess it's not sufficient? If we take a positive definite matrix and wish to replace one cell, what are the upper and lower bounds on the new value in that cell? Is it a complex function of every other cell, or just of the two diagonal entries, as above? Context: Ultimately, I want a really simple way to estimate a covariance matrix in the context of missing data. Missing data can cause the matrix of estimated pairwise covariances to fail to be positive (semi-)definite. I want a simple way to quickly modify the matrix to make it positive definite. My current plan is to tweak it, one cell at a time, enforcing any constraints you guys give me. I would first tweak the cells that suffered from the most 'missingness' - those values are based on the fewest observations and therefore I'm happy to replace them. AI: It certainly is not sufficient unless the matrix $M$ is $2\times 2$ (or $1\times 1$ in the trivial case). It is only necessary, and follows from the fact that a principal submatrix $$ \begin{bmatrix} m_{ii} & m_{ij} \\ m_{ji} & m_{jj} \end{bmatrix} $$ of an SPD matrix $M$ is SPD as well (and this $2\times2$ block is SPD iff $|m_{ij}|<\sqrt{m_{ii}m_{jj}}$). A note on the context: the modified matrix actually can only happen to be indefinite (not negative definite) if the original matrix was positive definite and you changed only off-diagonal entries (since the diagonals remain positive). The simplest ways I can think of to "tweak" the off-diagonal values without making the matrix indefinite are to use either the Gershgorin theorem or the perturbation theorem for the eigenvalues. 
The first approach could be to allow (symmetric) changes of the off-diagonal entries such that the modified rows and columns (say $i$ and $j$) remain diagonally dominant (symmetric strictly diagonally dominant matrices with positive diagonal are known to be positive definite). There are SPD matrices which are not diagonally dominant though. In the second approach, if you know the minimal eigenvalue of the original matrix $\lambda_{\min}(M)$, the minimal eigenvalue of the changed matrix won't be smaller than $\lambda_{\min}(M)-\|E\|$, where $E$ is the matrix of "changes" and $\|\cdot\|$ is the spectral or Frobenius norm. So the constraint could be to keep $\|E\|<\lambda_{\min}(M)$. However, this can be quite restrictive. This is BTW known as the matrix completion problem. You could probably find more interesting information in the references on the linked page. Hope this helps.
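A minimal pure-Python sketch of the first (Gershgorin/diagonal-dominance) approach; the sample matrix, the `tweak` helper and the threshold values are made up for illustration:

```python
def diagonally_dominant(M):
    # strict row diagonal dominance; for a symmetric matrix with positive
    # diagonal this forces positive definiteness (Gershgorin discs)
    n = len(M)
    return all(M[i][i] > sum(abs(M[i][j]) for j in range(n) if j != i)
               for i in range(n))

def tweak(M, i, j, v):
    # symmetric replacement of entry (i, j)
    M2 = [row[:] for row in M]
    M2[i][j] = M2[j][i] = v
    return M2

M = [[4.0, 1.0, 0.5],
     [1.0, 3.0, 0.8],
     [0.5, 0.8, 5.0]]
ok_before = diagonally_dominant(M)
ok_small = diagonally_dominant(tweak(M, 0, 1, 2.0))  # accepted tweak
ok_large = diagonally_dominant(tweak(M, 0, 1, 2.5))  # row 1 loses dominance
print(ok_before, ok_small, ok_large)  # True True False
```

Note that rejection here is conservative: losing diagonal dominance does not by itself make the matrix indefinite, so this check may refuse tweaks that would in fact have been safe.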
H: Homology of wedge sum is the direct sum of homologies I want to prove that $H_n(\bigvee_\alpha X_\alpha)\approx\bigoplus_\alpha H_n(X_\alpha)$ for good pairs (Hatcher defines a good pair as a pair $(X,A)$ such that $A\subset X$ and there is a neighborhood of $A$ that deformation retracts onto $A$). What I tried: Since $(X_\alpha, x_\alpha)$'s are good pairs, $(\bigsqcup X_\alpha, \{x_\alpha:\alpha\in I\})$ is a good pair, so a theorem (a long exact sequence argument) gives us an isomorphism $$q_*:H_n(\bigsqcup X_\alpha, \{x_\alpha:\alpha\in I\})\to H_n(\bigsqcup X_\alpha/\{x_\alpha:\alpha\in I\},\{x_\alpha:\alpha\in I\}/\{x_\alpha:\alpha\in I\})=H_n(\bigvee X_\alpha, \text{some point})$$ induced by the quotient map $$q:\left(\bigsqcup X_\alpha, \{x_\alpha:\alpha\in I\}\right)\to \left(\bigsqcup X_\alpha/\{x_\alpha:\alpha\in I\},\{x_\alpha:\alpha\in I\}/\{x_\alpha:\alpha\in I\}\right)=(\bigvee X_\alpha, \text{some point})$$ Questions: Now, do these mean that we have an isomorphism $\phi:H_n(\bigsqcup X_\alpha)=\bigoplus H_n(X_\alpha) \to H_n(\bigvee X_\alpha)$? In general if we have an isomorphism $\theta:H_n(X,Y)\to H_n(A,B)$, then do we also have an isomorphism $\theta':H_n(X)\to H_n(A)$? AI: Your proof is almost complete; let me suggest some additional hints to complete it: there's an isomorphism $$\tilde H_n(X) \cong H_n(X,x_0)$$ between the reduced homology of a space $X$ and the homology of the pair $(X,x_0)$ where $x_0 \in X$; in the category of pairs of topological spaces there's an isomorphism $$H_n\left(\bigsqcup_\alpha (A_\alpha,B_\alpha)\right) \cong \bigoplus_\alpha H_n(A_\alpha, B_\alpha)$$ for a family of pairs $(A_\alpha,B_\alpha)_\alpha$. Combining these results with what you used should bring you to the solution of the problem. Hope this helps.
H: Hausdorff dimension of support of harmonic measure in complex plane I know that harmonic measure $\omega$ in the complex plane $\mathbb{C}$ is absolutely continuous with respect to the Hausdorff measure $\mathcal{H_{h_k}}$ $(\omega << \mathcal{H_{h_k}})$, where $$ h_k(t) = t e^{k\sqrt{\log\frac{1}{t}\log\log\log \frac{1}{t}}} $$ with some constant $k$. Why does it follow from this that the Hausdorff dimension of the support $supp\,(\omega)$ is at least $1$? This is a result of Makarov from 1985 and I have tried to find an explanation of this claim about Hausdorff dimension but haven't come up with anything. Maybe this is a simple thing but I just don't understand it... AI: How do different $\mathcal H_h$ measures compare? The larger $h$ is near zero, the larger is the measure. So, $\mathcal H_h\ll \mathcal H_g$ whenever $h(t)=O(g(t))$ as $t\to 0$. For every $d\in (0,1)$, $h_k(t)=O(t^d)$ as $t \to 0$. Just take the logarithms of both sides to see this. Therefore, $\mathcal H_{h_k} \ll \mathcal H^d$ for every $d\in (0,1)$. If $E$ is a set of Hausdorff dimension $\dim E<1$, then pick $d$ such that $\dim E<d<1$ and note that $\mathcal H^d(E)=0$. By the above, the harmonic measure of $E$ is zero. Conclusion: any set of Hausdorff dimension less than $1$ has zero harmonic measure.
H: Bound on surface gradient in terms of gradient Let $S \subset \mathbb{R}^n$ be a hypersurface and define the surface gradient of a function $u:S \to \mathbb{R}$ by $$\nabla_S u = \nabla u - (\nabla u \cdot N)N$$ where $N$ is the normal vector. Is it possible to obtain a bound of the form $$|\nabla u |_{L^2(S)} \leq C|\nabla_S u|_{L^2(S)}$$ where $C$ doesn't depend on $u$? Assume whatever smoothness of $N$ is needed. AI: This is not true, by the following counterexample. We note first of all that $\nabla_{S}u = \nabla u - (\nabla u \cdot N)N$ is simply the tangential component of $\nabla u$ on the surface, assuming that $N$ is a unit vector. To verify that this is the case we can check that $(\nabla_{S} u, N) = 0.$ Now suppose that $\nabla u$ points in the direction of the unit normal. Then there is no tangential component of $\nabla u$ on the surface $S$, i.e. $\nabla_{S} u = 0$. Then assuming $\nabla u \neq 0$ we have that $| \nabla u|_{L^{2}(S)} > |\nabla_{S} u|_{L^{2}(S)} = 0.$
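The counterexample can be made concrete: take $u(x)=|x|^2$ on the unit sphere in $\mathbb{R}^3$, so that $\nabla u = 2x$ is purely normal there. A quick numeric sketch of this construction (pure Python; the sampled points are arbitrary):

```python
import math, random

random.seed(1)

def surface_gradient(grad, N):
    """Tangential part grad - (grad . N) N, for a unit normal N."""
    d = sum(g * n for g, n in zip(grad, N))
    return [g - d * n for g, n in zip(grad, N)]

# u(x) = |x|^2 has grad u = 2x; on the unit sphere the outward normal is N = x.
for _ in range(100):
    x = [random.gauss(0, 1) for _ in range(3)]
    r = math.sqrt(sum(c * c for c in x))
    p = [c / r for c in x]            # a point on the unit sphere
    grad = [2 * c for c in p]
    tang = surface_gradient(grad, p)
    assert max(abs(t) for t in tang) < 1e-12   # surface gradient vanishes
    assert sum(g * g for g in grad) > 0        # but grad u itself does not
```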
H: A question about Pearson correlation coefficient Suppose that we have two vectors $x=(x_1,\ldots,x_n),y=(y_1,\ldots,y_n)$ is the following correct about their Pearson correlation coefficient? $\operatorname{corr}(x,y)=\operatorname{corr}(x+a,y+b)$ where $a$ and $b$ are non-zero real numbers. AI: Yes. This is because if $\mu_x$ is the mean of $x$, then $\mu_x^*=\mu_x+a$ is the mean of $x+a$. In the calculation of the correlation you subtract the mean. So if you define $x^*=x+a$ and $y^*=y+b$, then it can easily be seen: $$ \begin{align} \frac{E[(x-\mu_x)(y-\mu_y)]}{\sigma_x\sigma_y}=\frac{E[((x+a)-(\mu_x+a))((y+b)-(\mu_y+b))]}{\sigma_x\sigma_y}=\frac{E[(x^*-\mu^*_x)(y^*-\mu^*_y)]}{\sigma_x\sigma_y} \end{align} $$
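A quick empirical check of this shift-invariance (plain Python; the data and the shifts are arbitrary examples):

```python
import math, random

random.seed(0)

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((u - mx) * (v - my) for u, v in zip(x, y))
    sx = math.sqrt(sum((u - mx) ** 2 for u in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return cov / (sx * sy)

x = [random.random() for _ in range(50)]
y = [2 * u + random.random() for u in x]

a, b = 3.7, -1.2   # arbitrary nonzero shifts
r1 = pearson(x, y)
r2 = pearson([v + a for v in x], [v + b for v in y])
assert abs(r1 - r2) < 1e-10   # shifting both series leaves corr unchanged
```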
H: Problem similar to the birthday problem A biased coin is tossed $n$ times (each toss is independent) with probability $h$ for heads. I need the smallest $n$ that lets the probability of at least one head be at least $0.9$. I found p (no heads)=n(1-h), then p (at least one head)=1-n(1-h), then I found $n$ in terms of $h$. I'm not sure it is right (seems too simple); can anyone give hints if it's wrong (please no answers) AI: Hint: The probability of tail is $1-h$. The probability the first two tosses are tail is $(1-h)^2$. The probability the first three tosses are tail is $(1-h)^3$.
H: Ideals in $F[X]$ are of the form $(f(x))$ where $f$ can be chosen to be monic. How? I am reading a statement whereby it says that In $F[X]$, where $F$ is a field, any ideal is of the form $(f(x))$ where $f$ can be chosen to be monic. I don't get this part of the statement '$f$ can be chosen to be monic'. If we have some non-monic polynomial that generates an ideal, how can we have a monic version that generates the same ideal? AI: Notice that $(f(x))=f(x)F[x]=\lambda f(x)F[x]$ for any nonzero $\lambda\in F$, so take $\lambda$ to be the inverse of the leading coefficient of $f(x)$.
H: Finding the limit $\lim_{x\to-\infty} (2x)/(2x-1)^2$. Studying for a midterm: Let $f(x)=\frac{2x}{(2x-1)^2}$ Then $\lim_{x\to-\infty} f(x)$ is: Now keep in mind I'm shaky on how to do infinity limits. I have $f(x)=\frac{2x}{(2x-1)^2}$ Remove x by dividing by the highest common denominator: $=\frac{2+\frac1x}{(2-\frac1x)^2}$ $\frac1x$=$0$ so: $=\frac{2+0}{(2-0)^2}$ $=\frac24$ $$ \lim_{x\to-\infty} f(x)\frac{2x}{(2x-1)^2}=\frac12$$ Although for some reason I don't think this is right. Since I feel like I'm finding the limit for a positive infinity function. I can't find help through other sources, so I would appreciate some help. AI: $$\frac{2x}{(2x-1)^2}=\frac{\frac2x}{\left(2-\frac1x\right)^2}\ \text{ and not }\ \frac{2+\frac1x}{\left(2-\frac1x\right)^2}$$ Alternatively putting $\frac1x=h$ as $x\to-\infty, h\to0^-$ So, $$\lim_{x\to-\infty}\frac{2x}{(2x-1)^2}=\lim_{h\to0^-}\frac{2h}{(2-h)^2}$$ Can you take it from here?
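Following the hint through, the limit comes out to $0$ (the numerator carries one factor of $x$ against two in the denominator). A quick numeric check:

```python
def f(x):
    return 2 * x / (2 * x - 1) ** 2

# as x -> -infinity, the values approach 0 from below (2x < 0, square > 0)
for x in (-1e2, -1e4, -1e6):
    assert f(x) < 0
assert abs(f(-1e6)) < 1e-5
```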
H: Computing the limit. Studying for a midterm. Compute the following limit: $$\lim_{x\to 4} \frac{x+4}{x^2+3x-4}$$ Factor the denominator: $$\lim_{x\to 4} \frac{x+4}{(x+4)(x-1)}$$ The $(x+4)s$ cancel out: $$\lim_{x\to 4} \frac1{(x-1)}$$ $$\lim_{x\to 4} \frac1{(4-1)}$$ $$\lim_{x\to 4}=\frac13$$ Just wondering if somebody could verify if my answer is correct, as there is no answer key. AI: Your calculation is essentially correct, and the answer is indeed $1/3$; the only slip is notational: once you substitute $x=4$, the limit symbol should be dropped, so the conclusion reads $\lim_{x\to 4} \frac1{x-1}=\frac1{4-1}=\frac13$. (In fact, since the denominator is nonzero at $x=4$, you could also substitute directly without cancelling anything.)
H: If $f$ is a function of moderate decrease then $\delta \int f(\delta x) dx = \int f(x) dx$ A function of moderate decrease is a map from $\mathbb{R}$ into $\mathbb{C}$ such that there exists $A \in \mathbb{R}$ such that $\forall x\in \mathbb{R}, \ |f(x)| \lt \frac{A}{1 + |x|^{1+\epsilon}}$. Let $\epsilon$ be fixed, then such functions form a vector space over $\mathbb{C}$ called $\mathcal{M}_{\epsilon}(\mathbb{R})$. Improper integrals are well-defined for such functions. I want to prove the property called "Scaling under dilations": For any $\delta \gt 0$, $$ \delta \int_{-\infty}^{\infty} f(\delta x) dx = \int_{-\infty}^{\infty}f(x) dx $$ My problem is showing that $f(\delta x)$ is in $\mathcal{M}_{\epsilon}(\mathbb{R})$. Or do we have to? I.e. could we just show that the improper integral on the left is valid? AI: You don't have to show that $x \mapsto f(\delta x)$ belongs to $\mathcal{M}_\varepsilon(\mathbb{R})$, but it's not hard. If $\lvert \delta\rvert \geqslant 1$, then $$\lvert f(\delta x)\rvert \leqslant \frac{A}{1 + \lvert \delta x\rvert^{1+\varepsilon}} \leqslant \frac{A}{1+\lvert x\rvert^{1+\varepsilon}},$$ and if $0 < \lvert \delta\rvert < 1$, then $$\lvert f(\delta x)\rvert \leqslant \frac{A}{1 + \lvert \delta x\rvert^{1+\varepsilon}} \leqslant \frac{A}{\lvert\delta\rvert^{1+\varepsilon}+\lvert \delta x\rvert^{1+\varepsilon}} = \frac{A\cdot \lvert \delta\rvert^{-(1+\varepsilon)}}{1 + \lvert x\rvert^{1+\varepsilon}},$$ so in either case, $x \mapsto f(\delta x)$ belongs to $\mathcal{M}_\varepsilon(\mathbb{R})$.
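A numeric illustration of the dilation identity (not a proof): take $f(x)=1/(1+x^2)$, which is of moderate decrease with $\epsilon=1$ and integrates to $\pi$, and compare $\delta\int f(\delta x)\,dx$ for a few $\delta$ using Simpson's rule on a large truncated interval (the agreement is only up to the truncated tails).

```python
import math

def f(x):
    return 1.0 / (1.0 + x * x)   # moderate decrease with eps = 1; integral is pi

def simpson(g, lo, hi, n):       # composite Simpson's rule, n even
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    s += 4 * sum(g(lo + i * h) for i in range(1, n, 2))
    s += 2 * sum(g(lo + i * h) for i in range(2, n, 2))
    return s * h / 3

L, n = 1000.0, 200000
base = simpson(f, -L, L, n)
for delta in (0.5, 2.0):
    scaled = delta * simpson(lambda x: f(delta * x), -L, L, n)
    assert abs(scaled - base) < 0.01
assert abs(base - math.pi) < 0.01
```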
H: Prove the identity $$\cos \frac{x}{2} \cdot \cos \frac{x}{4} \cdot \cos \frac{x}{8} = \frac{\sin x}{ 8\sin \frac{x}{8}}$$ Conjecture a generalization of this result and prove its correctness by induction. PS: I have tried using identities, but I keep running in a loop. I wanted to use identities first to have an idea of what I have to do to generalize. Any ideas will be gladly appreciated! AI: We have $$\cos \frac{x}{2} \cdot \cos \frac{x}{4} \cdot \cos \frac{x}{8}\times 8 \sin \frac{x}{8}=4\cos \frac{x}{2} \cdot \cos \frac{x}{4} \cdot\sin\frac{x}{4}=2\cos \frac{x}{2} \cdot\sin \frac{x}{2} =\sin x$$ The generalization is $$\prod_{k=1}^n\cos \frac{x}{2^k} =\frac{\sin x}{2^n\sin \frac{x}{2^n} }$$
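A quick numeric check of the general identity for several $x$ and $n$ (with $n=3$ giving the original statement):

```python
import math

def lhs(x, n):
    """Product of cos(x / 2^k) for k = 1..n."""
    p = 1.0
    for k in range(1, n + 1):
        p *= math.cos(x / 2 ** k)
    return p

def rhs(x, n):
    return math.sin(x) / (2 ** n * math.sin(x / 2 ** n))

for x in (0.3, 1.0, 2.5):
    for n in (1, 2, 3, 8):
        assert abs(lhs(x, n) - rhs(x, n)) < 1e-12
```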
H: Good upper bound for $(1-x)^r$ Bernoulli's inequality gives a lower bound on numbers of the form $(1-x)^r$: $$(1-x)^r\geq 1-rx$$ for integer $r\geq 0$ and real number $0<x<1$. Is there a corresponding upper bound for $(1-x)^r$? In particular, when $r$ gets large, $(1-x)^r$ becomes very small. I suspect there should be a good bound for it, but I don't know of any. AI: Let $a:=\log\left(\frac{1}{1-x}\right)$. Then $$(1-x)^r=\frac{1}{e^{ra}}=\frac{1}{1+ra+\frac{a^2r^2}{2!}+\frac{a^3r^3}{3!}+\ldots}$$ Now truncate the infinite series in the denominator to your heart's content. For example, $$(1-x)^r\leq \frac{1}{1+ra}.$$
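A small numeric check of these truncation bounds (keeping two and four terms of the series in the denominator; since all series terms are positive, any truncation gives a valid upper bound):

```python
import math

def upper_bound(x, r, terms=2):
    """1 / (truncated series for e^{ra}), a = log(1/(1-x)); terms=2 gives 1/(1+ra)."""
    a = math.log(1.0 / (1.0 - x))
    denom = sum((r * a) ** k / math.factorial(k) for k in range(terms))
    return 1.0 / denom

for x in (0.1, 0.5, 0.9):
    for r in (1, 5, 50):
        exact = (1.0 - x) ** r
        assert exact <= upper_bound(x, r)           # the 1/(1+ra) bound
        assert exact <= upper_bound(x, r, terms=4)  # a tighter truncation
```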
H: How can a subset be disjoint? I have the proof: Suppose A,B, and C are sets. Prove that C⊆A∆B iff C⊆A∪B and A∩B∩C=∅. If I suppose that C ⊆A∆B, how can the three sets be disjoint? AI: Here's the direction that you seem to be having trouble with. Let $x\in C$ and suppose that $C\subset A\triangle B$, then $$x\in A\triangle B=(A\cup B)\setminus (A\cap B)\subset A\cup B$$ and so $x\in A\cup B$. Now suppose that $x$ is in $(A\cap B)\cap C$, then $x\in A\cap B$. But this means that $x\notin (A\cup B)\setminus (A\cap B)$ which contradicts $x\in C\subset A\triangle B$ and so there can be no elements in both $C$ and $A\cap B$. Put another way, $$(A\cap B)\cap C=A\cap B\cap C=\emptyset.$$
H: Maximize $W(x) - (\ln(x) - \ln{\ln{x}})$ How can you maximize $f(x) = W(x) - (\ln(x) - \ln{\ln{x}})$ for $x\geq 2$? Numerically the answer seems to be at around $x \approx 41$ where you get $f(41) \approx 0.31$. Mathematica suggests the maximum is at $x= e^{1+e}$. $W$ is the Lambert-W function. AI: Using the fact that $$ W'(x) = \frac{W(x)}{x(1+W(x))} $$ we calculate $$ f'(x) = \frac{W(x)+1-\ln x}{x\ln x(1+W(x))}, $$ so to optimize $f$ we want to solve the equation $$ W(x)+1 = \ln x. $$ Exponentiating both sides yields $$ e e^{W(x)} = x, $$ and after multiplying both sides by $W(x)$ this becomes $$ e W(x) e^{W(x)} = x W(x). $$ Since $W(x)e^{W(x)} = x$ this is equivalent to $$ ex = xW(x) $$ or just $$ W(x) = e. $$ Thus $$ x = ee^e = e^{1+e}. $$ Plugging this back into the function we get $$ f(e^{1+e}) = \ln(1+e) - 1. $$
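To confirm the critical point numerically without special-function libraries, one can compute $W$ by Newton's method on $we^w=x$ (a standard approach; the starting guess $\log(1+x)$ is just a convenient choice for $x>0$):

```python
import math

def lambert_w(x, tol=1e-14):
    """Principal branch of the Lambert W function (w * e^w = x) for x > 0."""
    w = math.log(1.0 + x)              # starting guess for Newton's method
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (1.0 + w))
        w -= step
        if abs(step) < tol:
            break
    return w

def f(x):
    return lambert_w(x) - (math.log(x) - math.log(math.log(x)))

x_star = math.exp(1.0 + math.e)                     # the claimed maximizer
assert abs(lambert_w(x_star) - math.e) < 1e-10      # W(x*) = e
assert abs(f(x_star) - (math.log(1.0 + math.e) - 1.0)) < 1e-10
assert f(x_star) > f(x_star - 5) and f(x_star) > f(x_star + 5)
```

The value $x_\star = e^{1+e} \approx 41.19$ matches the numerical observation $x \approx 41$ in the question.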
H: Why is the matrix norm $||A||_1$ maximum absolute column sum of the matrix? By definition, we have $$ \|V\|_p := \sqrt[p]{\displaystyle \sum_{i=1}^{n}|v_i|^p} \qquad \text{and} \qquad \|A\|_p := \sup_{x\not=0}\frac{||Ax||_p}{||x||_p} $$ and if $A$ is finite, we change sup to max. However I don't really get how we get to the definition of $||A||_1$ as the maximum absolute column sum of the matrix as stated in Wikipedia For example, assume $A=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{bmatrix}$. Then $$ ||A||_1 = \max_{x\not=0} \frac{\left\|\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{bmatrix}\cdot \begin{bmatrix} x_{1} \\ x_{2}\end{bmatrix}\right\| }{\left\|\begin{bmatrix} x_{1} \\ x_{2}\end{bmatrix}\right\|} = \max_{x\not=0} \frac{|a_{11}x_1+a_{12}x_2|+|a_{21}x_1+a_{22}x_2|}{|x_1|+|x_2|} $$ That's what I have gotten so far, but I don't really see how this is related to the max of the column sum. Can anyone help me explain this? AI: Let's denote the columns of $A$ by $A_1,\, \dotsc,\, A_n$. Then for every $x \in \mathbb{R}^n$, we have $$\begin{align} \lVert Ax \rVert_1 &= \left\lVert\sum_{\nu=1}^n x_\nu\cdot A_\nu \right\rVert_1\\ &\leqslant \sum_{\nu=1}^n \lVert x_\nu\cdot A_\nu\rVert_1\\ &= \sum_{\nu=1}^n \lvert x_\nu\rvert\cdot\lVert A_\nu\rVert_1\\ &\leqslant \max \left\{\lVert A_\nu\rVert_1 : 1 \leqslant \nu \leqslant n\right\} \left(\sum_{\nu=1}^n \lvert x_\nu\rvert\right)\\ &= \max \left\{\lVert A_\nu\rVert_1 : 1 \leqslant \nu \leqslant n\right\}\cdot \lVert x\rVert_1. \end{align}$$ That shows that $$\lVert A\rVert_1 \leqslant \max \left\{\lVert A_\nu\rVert_1 : 1 \leqslant \nu \leqslant n\right\},$$ and choosing $x = e_m$, where $m$ is the index where the absolute column sum has its maximum shows the converse inequality, hence equality.
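A small numeric illustration of both halves of the argument, the upper bound for arbitrary $x$ and its attainment at $e_m$ (the matrix below is an arbitrary example):

```python
import random

random.seed(2)

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def norm1(v):
    return sum(abs(t) for t in v)

A = [[1.0, -4.0, 2.0],
     [3.0,  0.5, -1.0],
     [-2.0, 2.5, 0.25]]

col_sums = [sum(abs(row[j]) for row in A) for j in range(3)]
m = max(range(3), key=lambda j: col_sums[j])   # index of the largest column sum

# the bound ||Ax||_1 <= (max column sum) * ||x||_1 from the chain above ...
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(3)]
    assert norm1(matvec(A, x)) <= max(col_sums) * norm1(x) + 1e-12

# ... is attained at the standard basis vector e_m
e_m = [1.0 if j == m else 0.0 for j in range(3)]
assert abs(norm1(matvec(A, e_m)) - max(col_sums)) < 1e-12
```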
H: Doubt in proof of special case of implicit function theorem I've been studying the implicit and inverse functions theorems and I've started with one special case of the implicit function theorem. The book I'm reading states the theorem as follows: Let $f : U\to\mathbb{R}$ be a function of class $C^k$ with $k\geq 1$ defined on some open set $U\subset\mathbb{R}^n\times\mathbb{R}$. If $p=(x_0,y_0)\in U$ is such that $f(p)=c$ and $D_{n+1}f(p)\neq0$, then there exists a ball $B(x_0;\delta)\subset \mathbb{R}^n$ and one interval $J=(y_0-\epsilon,y_0+\epsilon)$ such that $f^{-1}(c)\cap (B\times J)$ is the graph of a function $\xi : B(x_0;\delta)\to J$ of class $C^k$ and for all $x\in B(x_0;\delta)$ we have $D_i\xi(x)=-D_if(x,\xi(x))/D_{n+1}f(x,\xi(x))$. Now, the proof starts as follows: Consider that $D_{n+1}f(x_0,y_0)>0$, then since $D_{n+1}f$ is continuous, there exists $\delta>0$ and $\epsilon>0$ such that if $B$ is the open ball with center $x_0$ and radius $\delta$ and if $J=(y_0-\epsilon,y_0+\epsilon)$ then $B\times \bar{J}\subset U$ and $D_{n+1}f(x,y)>0$ for all $(x,y)\in B\times \bar{J}$. My doubt is exactly there. First: why can we assure that $B\times\bar{J}\subset U$ based on continuity? I know that if some continuous function is positive at some point, then there's a neighbourhood of the point where it is still positive, but why can we take the neighbourhood like that? That closure confused me a little. The only thing I could think of was the following: if we endow $\mathbb{R}^n\times\mathbb{R}$ with the product topology and endow $\mathbb{R}^n,\mathbb{R}$ with the metric topology, then a set of $\mathbb{R}^n\times\mathbb{R}$ is open if and only if every point of the set lies in a product $U_1\times U_2$ of open sets $U_1\subset\mathbb{R}^n,U_2\subset\mathbb{R}$ contained in the set. Now, since $D_{n+1}f$ is continuous and $D_{n+1}f(p)>0$, then there's a neighbourhood $N(p)\subset U$ such that for every $q\in N(p)$ we have $D_{n+1}f(q)>0$.
Since it is a neighbourhood of $p$, there must be inside of it an open set containing $p$, and we can take a basic one, by the definition of the open sets in the product topology. But why can we assure that the product of the ball with the closure of the interval is still in $N(p)$? Thanks very much in advance! AI: If $B\times J \subset U$ but $B\times \overline J$ isn't contained in $U$, then just replace $J$ by $J' = (y_0 - \epsilon/2,y_0 + \epsilon/2)$.
H: A question about an epsilon-delta proof Currently, I am stuck on a question: Let $ g : [ 0 , \infty ) \mapsto \mathbb{R} $ be defined by $g(x)= \left\{ \begin{array}{ll} x^2 & \mbox{if } 0 \leq x \leq 1 \\ 3x & \mbox{if } x > 1 \end{array} \right.$ Prove: For $\epsilon =1$, $\forall \delta >0$, $\exists x \in \mathbb{R}$ such that $|x-1|<\delta$ and $|g(x)-g(1)|>\epsilon$. I know how to do a normal epsilon-delta proof with continuity. But now $\epsilon$ is fixed and I do not know how to handle this problem. My work so far: Proof: Let $g$, $\epsilon$, and $x$ be as above: $|g(x)-g(1)|=|x^2-1|=|(x+1)(x-1)|=|x+1||x-1|=(|x|+1)|x-1|$ I have no clue how to get rid of the $|x|$, normally I would use that $|x|<|a|+\delta$, but since we do not have an $a$ here, I am pretty sure we cannot use that. Any hints would be appreciated, preferably no full answers as I have to do it myself one day. Thanks in advance! AI: Note that you're busy showing that $g$ is discontinuous at $x = 1$. For $0 \leq x < 1$, $| g(x) - g(1) | = 1 - x^2$ and for $x > 1$, $| g(x) - g(1) | = 3x - 1$. Given some $\delta > 0$, you have to either find an $x \in (1 - \delta, 1)$ such that $1 - x^2 > 1$ or an $x \in (1, 1 + \delta)$ such that $3x - 1 > 1$. Now the first one is not going to work, which was to be expected, because $g$ is continuous on $[0,1]$, so we better try the other one. So let's try $x = 1 + \delta/3$, which satisfies $x \in (1, 1 + \delta)$ and also $3x - 1 = 2 + \delta > 1$.
H: Differentiation of $x!$, where $x\in \mathbb{N}\cup\{0\}$ Calculation of $\displaystyle \frac{d}{dx}(x!)$, where $x\in \mathbb{N}\cup\{0\}$. My try: We know that $x! = (x)\cdot (x-1)\cdot(x-2)\cdots(3)\cdot(2)\cdot(1)$. Now taking $\ln$ on both sides: $\ln(x!) = \ln (x)+\ln(x-1)+\ln(x-2)+\cdots+\ln(3)+\ln(2)+\ln(1)$ $\displaystyle \frac{d}{dx}(\ln (x!)) = \frac{d}{dx}\left\{\ln (x)+\ln(x-1)+\ln(x-2)+\cdots+\ln(3)+\ln(2)+\ln(1)\right\}$ Now I do not understand how to proceed from here; help required, thanks. AI: In general, consider the function $$ f(x)= \ln(\Gamma(x+1)) \implies f'(x) = \frac{d}{dx}\ln(\Gamma(x+1)) = \frac{\Gamma'(x+1)}{\Gamma(x+1)}=\psi(x+1), $$ which is the digamma function.
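Numerically, the digamma value can be checked against a finite difference of $\ln\Gamma(x+1)$ (via `math.lgamma` from the standard library), together with the standard identity $\psi(n+1)=H_n-\gamma$ for integer $n$, where $H_n$ is the $n$-th harmonic number and $\gamma$ is the Euler-Mascheroni constant:

```python
import math

def dlog_factorial(x, h=1e-6):
    """Central finite difference of ln Gamma(x + 1), approximating psi(x + 1)."""
    return (math.lgamma(x + 1 + h) - math.lgamma(x + 1 - h)) / (2 * h)

gamma = 0.5772156649015329   # Euler-Mascheroni constant
for n in (1, 5, 10):
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    assert abs(dlog_factorial(n) - (harmonic - gamma)) < 1e-5
```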
H: Given $f(x)$ its inverse function, domain and range $f(x) = \frac{{2x + 3}}{{x - 1}},\left[ {x \in {R},x > 1} \right]$ I've got the inverse function to be: ${f^{ - 1}}(x) = \frac{{x + 3}}{{x - 2}}$ How would I go about working out the range and domain of this function? The range is possibly $f^{-1}(x)>1$ if im not mistaken, but how do I figure out the domain? AI: One quick note, if you write $f^{-1}$ as a function of $x$ then the variable on the right-hand side should be $x$. So either $f^{-1}(y)=\frac{y+3}{y-2}$, which is perfectly fine to write, or $f^{-1}(x)=\frac{x+3}{x-2}$, which is also perfectly fine to write. To find the domain, you need to find the set of $x$ values such that there exists a value $y=f(x)$. Now, it seems that you're given a domain: you've written "$\left[x\in R,x>1\right]$". But if you weren't given that, you would want to find the largest possible such set of $x$ values. There's really only one real number $z$ such that we can't compute $f(z)$ for this $f$. Can you see what it is? The range isn't always easy to find. The range is the set of $y$ values such that $y=f(x)$ for some $x$ in the domain. Read that sentence over and over and over again until you understand it. In this case the range is easy to find, since you've computed an inverse function. Then, the range of $f$ is just the domain of $f^{-1}$. Make sense?
H: What is the name of this delta operator In Euler-Lagrange Equation: $${\delta \over \delta y}F \equiv {\partial F \over \partial y}- {d \over dx} ({\partial F \over \partial {y'}})$$ What is the name of operator $\delta$ here? AI: It's called the functional or variational derivative with respect to $y$.
H: Is my alternative proof that the intersection of nested compact sets is nonempty valid? Let $\{A_n\}_{n=1}^\infty$ be a collection of nested closed sets in a compact space $X$. Since $A_n$ is closed, it is compact, and consequently limit point compact. Let $\varepsilon > 0$ and define a sequence $(x_n)$ where $$ x_n \in A_n \text{ but } x_n \notin A_{n+1} \text{ for all } n \in \mathbb{N} $$ Because each $A_n$ is limit point compact, and contains all but finitely many $x_n$, it contains the limit point of $x_n$. Since every $A_n$ contains the limit point of the sequence, the countable intersection must contain the limit point of $x_n$, hence is nonempty. AI: The basic idea is good, but there are some problems: You say ‘Let $\epsilon>0$’, but then you never use $\epsilon$. You want to choose $x_n\in A_n\setminus A_{n+1}$, but this may not be possible: you know that $A_n\supseteq A_{n+1}$, but you don’t know that $A_n\supsetneqq A_{n+1}$. It’s even possible that $A_n=A_1$ for all $n\in\Bbb Z^+$. Even assuming that $\langle x_n:n\in\Bbb Z^+\rangle$ can be defined, it need not converge; limit point compactness merely tells you that it has a convergent subsequence. All of these can be fixed. Just drop ‘Let $\epsilon>0$’; it serves no purpose, and nothing like it is needed. Simply choose $x_n\in A_n$ at each stage. Then for all $k\ge n$ you have $x_k\in A_k\subseteq A_n$, so all but finitely many terms of $\langle x_n:n\in\Bbb Z^+\rangle$ are in $A_n$. Let $\langle x_{n_k}:k\in\Bbb Z^+\rangle$ be a convergent subsequence of $\langle x_n:n\in\Bbb Z^+\rangle$ with limit $x$. For each $m\in\Bbb Z^+$ there is an $\ell\in\Bbb Z^+$ such that $n_\ell\ge m$; clearly $x_{n_k}\in A_m$ for all $k\ge\ell$, so all but finitely many terms of $\langle x_{n_k}:k\in\Bbb Z^+\rangle$ are in $A_m$, and therefore $x\in A_m$. Thus, $x\in\bigcap_{m\ge 1}A_m$, and $\bigcap_{m\ge 1}A_m\ne\varnothing$.
H: Prove that the real vector space consisting of all continuous, real-valued functions on the interval $[0,1]$ is infinite-dimensional. Prove that the real vector space consisting of all continuous, real-valued functions on the interval $[0,1]$ is infinite-dimensional. Clearly it's infinite dimensional, because if you consider say $P (\mathbb{F})$ on $[0,1]$, then there are an infinite amount of continuous real-valued functions on the interval, but how do I prove this? AI: Recall that if $V$ is a finite-dimensional vector space, then each subspace of $V$ is also finite-dimensional. So if $V$ contains an infinite-dimensional subspace, then it is infinite-dimensional. As you point out, $C[0,1]$ (the space of continuous real-valued functions on $[0,1]$) has the space $P(\mathbb{R})$ of real polynomials as a subspace. If you can show that $P(\mathbb{R})$ is not finite-dimensional, then you're done.
H: If $f(x)$ is positive and decreasing, can $xf(x)$ have more than one maxima? Assume that $f(x)$ is positive and decreasing on $[0,1]$ with $f(0)=1$ and $f(1)=0$. We see that $xf(x)$ is 0 at 0 and 1 and is positive in between so it must have a maxima. Is it possible that $xf(x)$ has more than one maxima on $[0,1]$? This question came from trying to think of simple examples to test some ideas I'm working on and a few minutes of mucking around with polynomial interpolation etc showed I'm not very good at coming up with counterexamples. So if you have any comments on that let me know. AI: Yes There are probably simpler examples, but $$f(x)= \frac{\sin (6 \pi x) }{24} + 1 - x$$ seems to be strictly decreasing on $[0,1]$ but $xf(x)$ appears to have two maxima in $[0,1]$.
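A grid-based check of this example (verifying that $f$ is strictly decreasing while counting strict interior local maxima of $xf(x)$ on a fine grid):

```python
import math

def f(x):
    return math.sin(6 * math.pi * x) / 24 + 1 - x

n = 100000
xs = [i / n for i in range(n + 1)]
fs = [f(x) for x in xs]
gs = [x * y for x, y in zip(xs, fs)]

# f is strictly decreasing (its derivative is (pi/4) cos(6 pi x) - 1 < 0)
assert all(a > b for a, b in zip(fs, fs[1:]))

# ... yet x*f(x) has (at least) two strict interior local maxima
maxima = sum(1 for i in range(1, n)
             if gs[i] > gs[i - 1] and gs[i] > gs[i + 1])
assert maxima >= 2
```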
H: A question from Eisenbud, Commutative Algebra On page 35, the proof of corollary 1.8: If k is an algebraically closed field and A is a k-algebra, then A = A(X) for some algebraic set X iff A is reduced and finitely generated as a k-algebra. In the proof, it says: "... Conversely, if A is a finitely generated k-algebra, then after choosing generators we may write A=k[x1,...,xn]/I for some ideal I. ..." Can someone explain to me explicitly why this assertion is valid? How to choose the generators and ideal I so that A=k[x1,...,xn]/I? AI: I mean, this is almost by definition. If $A(X)$ is finitely generated as a $k$-algebra, then $A(X)=k[\alpha_1,\ldots,\alpha_n]$ for some $\alpha_i\in A(X)$. Now, since $k[t_1,\ldots,t_n]$ is the "free commutative $k$-algebra on $n$ generators" you know that there exists a unique $k$-algebra homomorphism $k[t_1,\ldots,t_n]\to k[\alpha_1,\ldots,\alpha_n]=A(X)$ such that $t_i\mapsto \alpha_i$. Note that this map is obviously surjective, and so if $I$ is its kernel, then $$k[t_1,\ldots,t_n]/I\cong k[\alpha_1,\ldots,\alpha_n]=A(X)$$ So, $A(X)=A(V(I))$. Of course, you need to note in this last step that since $A(X)$ is reduced, that $I$ is necessarily radical.
H: How to show some sets belong to a $\sigma$-field $\mathcal{F}$ Let $A$ and $B$ belong to some $\sigma$-field $\mathcal{F}$. How would I show that $A\cap B,$ $ A\setminus B $ and $A \Delta B:=(A\setminus B)\cup(B\setminus A)$ belong to $\mathcal{F}$ as well and how would I find $P(A \Delta B)$ in terms of $P(A)$, $P(B)$ and $P(A \cap B)$ ? So I have figured out the first part now using De Morgan's laws, but how do I figure out the second part of finding $P(A \Delta B)$ in terms of $P(A)$ etc.? What does 'in terms of $P(A)$' mean? Any help would be much appreciated as I am revising for an exam and these types of questions seem to come up a lot! Many thanks AI: By definition, being closed under countable union and complement implies the closedness under countable intersection (why?). Also note that $A \setminus B=A \cap B^c$ which is in $\mathcal{F}$ since $B^c \in \mathcal{F}.$ Similarly $B \setminus A \in \mathcal{F},$ so does the union $(A \setminus B) \cup (B \setminus A).$ For the second question, note that $P(A \cup B)=P(A)+P(B)-P(A \cap B)$ and replace $A$ by $A \setminus B$ and $B$ by $B \setminus A$ to get $P(A \Delta B)=P(A \setminus B)+P(B \setminus A)$ since $(A \setminus B) \cap (B \setminus A)=\emptyset.$
H: $E[X^4]$ for binomial random variable For a binomial random variable $X$ with parameters $n,p$, the expectations $E[X]$ and $E[X^2]$ are given be $np$ and $n(n-1)p^2+np$, respectively. What about $E[X^4]$? Is there a table where I can look it up? Calculating it using the definition of expectation looks like a lot of work. Or is there a good way to calculate it? AI: Well, you can create a table if you know the moment generating function of $X$ i.e. $$M_X(t)=E[e^{tX}]$$ because $\frac{d^n}{dt^n}M_X(t)|_{t=0}=E[X^n].$ Hint: Show that $M_X(t)=(e^tp+(1-p))^n$ for binomial $X$ with parameters $n,p.$
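One concrete route that avoids differentiating the MGF four times: expand $x^4$ in falling factorials, $x^4 = x + 7x(x-1) + 6x(x-1)(x-2) + x(x-1)(x-2)(x-3)$ (the coefficients $1,7,6,1$ are Stirling numbers of the second kind), and use the factorial moments $E[X(X-1)\cdots(X-k+1)] = n(n-1)\cdots(n-k+1)p^k$ of the binomial. The sketch below compares this closed form against the defining sum, exactly, using rational arithmetic:

```python
from fractions import Fraction
from math import comb

def moment4_direct(n, p):
    """E[X^4] straight from the definition of expectation."""
    return sum(Fraction(k) ** 4 * comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n + 1))

def moment4_formula(n, p):
    """E[X^4] via x^4 = x + 7x(x-1) + 6x(x-1)(x-2) + x(x-1)(x-2)(x-3)."""
    def falling(m, k):
        out = 1
        for i in range(k):
            out *= m - i
        return out
    return (falling(n, 1) * p
            + 7 * falling(n, 2) * p ** 2
            + 6 * falling(n, 3) * p ** 3
            + falling(n, 4) * p ** 4)

p = Fraction(3, 10)
for n in (1, 2, 5, 12):
    assert moment4_direct(n, p) == moment4_formula(n, p)
```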
H: What is the measure of $x \in [0.1]$ whose binary representations have percentage of ones that converge within a given range? For binary representation of $x \in [0,1]$, i.e. $x = \sum_n a_n 2^{-n}$, (where all $a_n$ are binary, and using all trailing ones is chosen instead of rounding up), let $X(b,c) \subset [0,1]$ be the set of all numbers $x$ which satisfy $\lim_{n \to \infty} \sum_{j=1}^n a_j/n = \alpha$ where $b \leq \alpha \leq c$. What is the measure of $X(b,c)$ if $b=c$? What if $b < c$? AI: By the law of large numbers the set $X(b,c)$ has Lebesgue measure $1$ when $b\leqslant\frac12\leqslant c$ and Lebesgue measure $0$ otherwise. The idea is that if $x$ is uniformly distributed on $(0,1)$ then $(a_n)$ is a sequence of independent Bernoulli random variables such that $P[a_n=0]=P[a_n=1]=\frac12$. Hence $\frac1n\sum\limits_{k=1}^na_k\to E[a_1]$ almost surely when $n\to\infty$. Since $E[a_1]=\frac12\cdot0+\frac12\cdot1=\frac12$, the result above follows.
H: How do I find where a curve hits the $xy$ plane with vectors? I need to find $r'(t)$ and $||r'(t)||$ of $r(t)=<t,t,t^2>$, and tell where the curve hits the $xy$ plane (if it does). Also, I need to say something about how the curve looks like (do I just plot some points and figure out how it looks like?). I understand how to find $r'(t)$ and $||r'(t)||$, but how do I tell where the curve hits the $xy$ plane, and how do I describe the curve? Here are my solutions for $r'(t)$ and $||r'(t)||$: $r(t)=<t,t,t^2>$ $r'(t) = <1,1,2t>$ $||r'(t)||=\sqrt{2+4t^2}$ AI: Your answers are correct, "hitting the xy-plane" means to find $$\{r(t) | t\in\mathbb R, r(t)_3 = 0\} = \{r(t) | t\in\mathbb R, t^2 = 0\} = \{r(0)\} = \{\langle 0,0,0 \rangle\}$$ "Describing the curve" is, for example that the projection on the xz- and yz-plane of $r$ (i.e. $\pi_{xz} r(t) = \langle r(t)_1, 0, r(t)_3 \rangle$ and $\pi_{yz}$ analogously) is the graph of a parabola ($f(t) = t^2$). It's a parabola in a certain plane (with normal vector $\langle 1, -1, 0\rangle$)... etc.
H: Additive norm $||a+b||=||a||+||b||$ I've read somewhere that there exist spaces where $||a+b||=||a||+||b||$ is true iff $a = \lambda b, \ \ \lambda>0$. Could you tell me what spaces have that property and what spaces don't? $||\lambda b + b|| = |\lambda +1| \cdot||b||$ by homogeneity and I don't know what conditions should $X$ (such that $x,y \in X$) satisfy. Could you help me? AI: I think you're looking for strictly convex spaces.
H: Tennis balls counting problem On a Friday morning, the pro shop of a tennis club has 14 identical cans of tennis balls. If they are all sold by Sunday night and we are interested only in how many were sold in each day, in how many different ways could the tennis balls have been sold on Friday, Saturday and Sunday? What I think about the problem: Let $a$ denote the number of cans sold on Friday. Let $b$ denote the number of cans sold on Saturday. Let $c$ denote the number of cans sold on Sunday. We seek the number of ordered triples of the form $(a,b,c)$ on the condition that the triple satisfies the following: 1) $a,b,c$ are non-negative integers 2) $a+b+c=14$ Thanks in advance AI: This is equivalent to the stars and bars problem: For natural numbers $n$ and $k,$ the number of distinct $n$-tuples (in our case, how many triples, with $n = 3$) of non-negative integers whose sum is $k$ (in this case, k = 14) is given by the binomial coefficient $$\binom{n + k - 1}{k} = \binom{3 + 14 - 1}{14} = \binom{16}{14} = \binom{16}{2}$$
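A brute-force confirmation of the stars-and-bars count (enumerating $a$ and $b$; $c = 14 - a - b$ is then forced):

```python
from math import comb

# count non-negative pairs (a, b) with a + b <= 14; c is then determined
count = sum(1 for a in range(15) for b in range(15) if a + b <= 14)
assert count == comb(16, 2) == 120
```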
H: Lifting elements of $SO(3)$ to $SU(2)$. Let $A$ be an element of the orthogonal group $SO(3)$ such that the order of $A$ is $>2$. We have that $SU(2)$ is a $2$-fold cover of $SO(3)$: $$ \mathbb{Z}_2 \to SU(2) \to SO(3) .$$ So how can I build a lift of $A$ to an element $\tilde{A}$ in $SU(2)$? AI: Let $\mathbb{H}$ be the quaternions; any $q \in \mathbb{H}$ can be represented by $q=a+bi+cj+dk$, which can also be written as $q=z_1+z_2j$ where $z_1=a+bi,z_2=c+di$ are complex numbers. Let $\mathbb{H}_1$ be the group of quaternions with norm $1.$ There is a natural isomorphism $$\mathbb{H}_1 \longrightarrow SU(2)$$ $$z_1+z_2j \mapsto \begin{pmatrix} z_1 & z_2 \\ -\overline{z}_2 & \overline{z}_1\\ \end{pmatrix}$$ Then, we can identify $\mathbb{R}^3$ with the pure quaternions $bi+cj+dk$ and define the action of $\mathbb{H}_1 \cong \text{SU}(2)$ on $\mathbb{R}^3$ by $$q \cdot q_0 \mapsto q \cdot q_0 \cdot q^{-1}, \;\; q \in \mathbb{H}_1, q_0 \in \mathbb{R}^3$$ Because the quaternion norm is multiplicative and coincides with the Euclidean norm on $\mathbb{R}^3,$ this action preserves lengths, so it defines a group homomorphism $p: \mathbb{H}_1 \to \text{O}(3).$ Moreover, write $q \in \mathbb{H}_1$ as $q=\cos \theta+ \sin \theta q_1,$ where $q_1$ is a pure quaternion of norm $1.$ It is immediate that the action of $q$ on $\mathbb{R}^3$ is the rotation about the axis $q_1$ through the angle $2\theta.$ In particular the image of $p$ lies in $SO(3)$, the homomorphism $p: \mathbb{H}_1 \to SO(3)$ is surjective, and, as you said, the kernel of $p$ is the center of $SU(2)$, which is $\{\pm 1\}.$ So to lift $A$, write it as the rotation about a unit axis $q_1$ through an angle $\varphi$; then $\tilde{A} = \pm\left(\cos \frac{\varphi}{2}+\sin \frac{\varphi}{2}\, q_1\right)$ are its two preimages in $SU(2)$.
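A computational sketch of this action (pure Python; the choice of axis and angle is arbitrary): build the $3\times3$ matrix of $v\mapsto qvq^{-1}$ column by column, and check that $q$ and $-q$ give the same rotation, with rotation angle $2\theta$ for $q=\cos\theta+\sin\theta\, q_1$.

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

def rotation_matrix(q):
    """Matrix of v -> q v q^{-1} on pure quaternions, for a unit quaternion q."""
    cols = []
    for v in ((0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)):
        w = qmul(qmul(q, v), qconj(q))
        cols.append(w[1:])               # the result is again a pure quaternion
    return [[cols[j][i] for j in range(3)] for i in range(3)]

theta = 0.7
q = (math.cos(theta), math.sin(theta), 0.0, 0.0)   # q = cos(t) + sin(t) i

R = rotation_matrix(q)
Rm = rotation_matrix(tuple(-t for t in q))

# q and -q project to the same rotation: by angle 2*theta about the i-axis
assert all(abs(R[i][j] - Rm[i][j]) < 1e-12 for i in range(3) for j in range(3))
assert abs(R[1][1] - math.cos(2 * theta)) < 1e-12
assert abs(R[2][1] - math.sin(2 * theta)) < 1e-12
```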
H: Solution of PDE with directional derivative There is a partial differential equation containing directional derivative in the left-hand side: $$ \vec{s} \cdot \nabla f = a f + b \\ $$ where $f, a, b$ are functions of $(x,y,z)$, and $\vec{s}$ is a unit direction vector. How to solve this type of equation? Standard reference books did not really help me; actually I got stuck with the method of characteristics, but I'm not sure if it is appropriate here. AI: When I wrote up this answer, I neglected to notice that the OP specified $(x, y, z)$ coordinates, presumably in $\Bbb R^3$, so I spoke as if things take place in $\Bbb R^n$; but I think things are pretty much OK, nevertheless. Basically, the method of characteristics, at least as I understand it, should be fine here; the characteristics are precisely the integral curves of the vector field $\vec s$, so the equation $\vec s \cdot \nabla f = af + b, \tag{1}$ tells us that, along each trajectory of the ordinary differential equation $\dot {\vec x}(t) = \vec s(x(t)), \tag{2}$ we must have $\frac{df}{dt} = af(t) + b. \tag{3}$ A solution to (3) is easily had; writing $u(t) = af(t) + b, \tag{4}$ we see that $\frac{du}{dt} = a\frac{df}{dt} = a(af + b) = au, \tag{5}$ a solution of which is readily seen to be $u(t) = u(t_0)e^{a(t - t_0)}, \tag{6}$ which may be re-written in terms of $f(t)$: $f(t) = \frac{1}{a}((af(t_0) + b)e^{a(t - t_0)} - b). \tag{7}$ (7) gives the solution along each integral curve of $\vec s$; to complete the picture, we need to realize that $f(t_0)$ must be a function specified on some surface $S$ to which $\vec s$ is in general transverse, i.e. not tangent. If we think of $S$ as a space of parameters for $f(t_0)$, it becomes $f(p, t_0)$ where $p \in S$; then (7) becomes $f(p, t) = \frac{1}{a}((af(p, t_0) + b)e^{a(t - t_0)} - b). \tag{8}$ What we are really doing here is specifying $f(p, t)$ in a local "coordinate system" determined by $S$ and the parameter $t$ along the integral curves of $\vec s$, taking $t = t_0$ on $S$. Sufficiently near a point $p_0 \in S$, points in the ambient space can be represented as $(p, t)$. Of course, this "coordinate system" may be difficult to express in terms of standard coordinates in, say, $\Bbb R^n$, assuming that is the ambient manifold. But such is the price of the method of characteristics. A more rigorous view of the $(p, t)$ representation might require using hypersurface coordinates on $S$ and expressing $p$ in terms of them, but a thorough discussion along those lines would take much longer than I have at present. EDIT: $a$, $b$ time-dependent in (1) and (3): as pointed out by our OP ziliboba in this comment, $a$ and $b$ may depend on the parameter $t$ which is the independent variable in the differential equation (2) which defines the integral curves of $\vec s$. I had, when the above was written, assumed that $a$ and $b$ are constant, which leads to the particularly simple form of the solution (7), (8). In the event that $a$ and/or $b$ depend on $t$, as they generally will (the case $a$, $b$ constant being the exception rather than the rule), the solution of (1), (3) may be written in terms of integrals of the coefficients $a$, $b$, though these may be difficult to evaluate in terms of elementary functions, unlike the case when $a$, $b$ are constant.
For $\frac{df(p, t)}{dt} = a(p, t)f(p, t) + b(p, t), \tag{9}$ the general solution is $f(p, t) = \exp\left(\int_{t_0}^t a(p, s) ds\right) \left(f(p, t_0) + \int_{t_0}^t \exp\left(-\int_{t_0}^s a(p, r) dr\right)b(p, s) ds\right), \tag{10}$ where in (9), (10) we have written $a(p, t)$, $b(p, t)$, $f(p, t)$ in order to stress that these functions of $t$ also depend on the point $p$ at which the corresponding integral curve of $\vec s$ crosses the initial surface $S$; $t = t_0$ at such points, and this defines the initial value $f(p, t_0)$ occurring in (10). Formula (10) as a solution to equation (9) is of course well-known from the classical literature on ordinary differential equations, and occurs in many standard texts on the subject. It is not difficult to derive (10) from (9), but repeating the requisite sequence of steps here would only add undue length to this already long-enough post. The derivation may in fact be found in my answer to this question; see especially the material in the vicinity of equations (5)-(13) of that posting. In closing I would like to stress that the real difficulty in using the method of characteristics to solve (1) will often lie in finding the integral curves of $\vec s$ from (2) which give rise to the $(p, t)$ coordinates in which the solution is written. In general this will be a nonlinear problem and finding the curves $x(p, t)$ will be quite a challenge; but if we have solved that problem, the solution to (9) is, as we have seen here, straightforward. END of EDIT: $a$, $b$ time-dependent in (1) and (3): Hope this helps! Cheers, and as always, Fiat Lux!!!
H: Develop second-order method for approximating f'(x) I am stuck on the following question: Develop a second-order method for approximating $f'(x)$ that uses the data $f(x-h), f(x)$, and $f(x+3h)$ only. Do you have any hints or tips? Thanks in advance. AI: Expanding on Mhenni Benghorbal's link: $f(x+h) =f(x)+hf'(x)+h^2f''(x)/2+h^3f'''(x)/6+... $, and $f(x-h) =f(x)-hf'(x)+h^2f''(x)/2-h^3f'''(x)/6+... $, so $f(x+3h) =f(x)+3hf'(x)+9h^2f''(x)/2+27h^3f'''(x)/6+... $. To combine $f(x), f(x-h)$, and $f(x+3h)$ to get $f'(x)$, let $g(x) =af(x)+bf(x-h)+cf(x+3h) $. Then $g(x) =(a+b+c)f(x) +(-b+3c)hf'(x) +(b+9c)h^2f''(x)/2 +(-b+27c)h^3f'''(x)/6 +... $. To make this a second order approximation, we need $a+b+c=0$, $-b+3c = 1$, and $b+9c=0$. This will make $g(x) =hf'(x)+O(h^3)$, so $\frac{g(x)}{h} = f'(x) + O(h^2) $. The solution to these is $c = 1/12$, $b = -3/4$, and $a = 2/3$. Since $-b+27c =3/4+27/12 =9/12+27/12 =36/12 =3 $, this makes $\begin{align} \dfrac{\frac{2}{3}f(x)-\frac{3}{4}f(x-h)+\frac1{12}f(x+3h)}{h} &=\dfrac{g(x)}{h}\\ &=f'(x)+h^2f'''(x)\cdot 3/6+O(h^3)\\ &=f'(x)+h^2f'''(x)/2+O(h^3)\\ \end{align} $.
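As a quick numerical sanity check (my own addition, not part of the original answer), one can evaluate the derived formula on a smooth test function and watch the error behave like $h^2$: halving $h$ should cut the error by a factor of about four.

```python
import math

def approx_deriv(f, x, h):
    # ( (2/3) f(x) - (3/4) f(x-h) + (1/12) f(x+3h) ) / h
    return ((2/3)*f(x) - (3/4)*f(x - h) + (1/12)*f(x + 3*h)) / h

x, exact = 1.0, math.cos(1.0)          # d/dx sin(x) = cos(x)
err1 = abs(approx_deriv(math.sin, x, 1e-3) - exact)
err2 = abs(approx_deriv(math.sin, x, 5e-4) - exact)
print(err1 / err2)  # close to 4: halving h quarters the error (second order)
```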
H: Show that $\overline{(\textbf{y},\textbf{x})}_A = (\textbf{x},\textbf{y})_A$ I want to show that $\overline{(\textbf{y},\textbf{x})}_A = (\textbf{x},\textbf{y})_A$ where $(\textbf{x},\textbf{y})_A =\textbf{x}^T A \textbf{y}$, $\textbf{x},\textbf{y} \in \mathbb R^p$ and $A$ is a real symmetric $p \times p$ matrix. Here is my approach $$ \overline{(\textbf{y},\textbf{x})}_A = \overline{\textbf{y}^T A \textbf{x}} = \overline{\textbf{x}^T} \overline{A} \overline{\textbf{y}} $$ Since $\textbf{x},\textbf{y},$ and $A$ are all real the complex conjugate can be dropped. This yields $$ \overline{\textbf{x}^T} \overline{A} \overline{\textbf{y}} = \textbf{x}^T A \textbf{y} = (\textbf{x},\textbf{y})_A $$ Hence $$ \overline{(\textbf{y},\textbf{x})}_A = (\textbf{x},\textbf{y})_A $$ I am not sure about this though $$ \overline{\textbf{y}^T A \textbf{x}} = \overline{\textbf{x}^T} \overline{A} \overline{\textbf{y}} $$ I recall reading this property before but I can't figure out what it is now. Is the above equality true? AI: One can use the identity $(AB)^T = B^TA^T$ twice, together with the fact that a $1\times 1$ matrix equals its own transpose, to get the following (see the third identity at http://en.wikipedia.org/wiki/Transpose ): $$ \overline{(\textbf{y},\textbf{x})}_A = \overline{\textbf{y}^T A \textbf{x}} = {\textbf{x}^T} {A}^T {\textbf{y}} .$$ Therefore $$ \overline{(\textbf{y},\textbf{x})}_A = {(\textbf{x},\textbf{y})}_{A^T} .$$ So your identity holds if $A$ is symmetric.
H: Find all the integral solutions to $2x+3y=200$ What's the best way of going about this? $$2x+3y=200.$$ AI: For this case, if $2x+3y=200$, an obvious solution is $x=100, y=0$. From this base solution, all other integer solutions are $x=100-3n$, $y=2n$ for integer $n$. If the solutions are to be non-negative, then $n \ge 0$ and $100-3n \ge 0$, so $0 \le n \le \lfloor(100/3)\rfloor =33$.
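A brute-force cross-check of this parametrization (my own addition, not part of the original answer) for the non-negative solutions:

```python
# All non-negative solutions come from x = 100 - 3n, y = 2n with 0 <= n <= 33.
parametrized = [(100 - 3*n, 2*n) for n in range(34)]

# Cross-check against direct enumeration over the full range:
brute = {(x, y) for x in range(101) for y in range(67) if 2*x + 3*y == 200}

print(len(parametrized))           # 34 non-negative solutions
print(set(parametrized) == brute)  # True
```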
H: Find all couples $(x,y)$ satisfying $\frac{x+iy}{x-iy}=(x-iy)$ I am having trouble solving this exercise; I hope someone can help me. Find all couples $(x,y)$ satisfying $\frac{x+iy}{x-iy}=(x-iy)$ AI: One looks for the complex numbers $z=x+\mathrm iy$, $z\ne0$, such that $z=\bar z^2$. In particular, $|z|=1$. Plugging $z=\mathrm e^{\mathrm it}$ in $z=\bar z^2$ yields $\mathrm e^{3\mathrm it}=1$ hence $t$ is a multiple of $\frac23\pi$. The arguments $t=0$, $t=\frac23\pi$ and $t=\frac43\pi$ yield $z=1$ and $z=-\frac12\pm\mathrm i\frac{\sqrt3}2$ respectively, thus the solutions are $(x,y)=(1,0)$ and $(x,y)=(-\frac12,\pm\frac{\sqrt3}2)$.
H: Relations in Propositional Logic It is my understanding that relations are best described with predicate logic. I have a homework question that asks me to convert English sentences into propositional logic. The following list of sentences is similar to the homework, but does not reflect the actual assignment. a) Humans have two legs. b) If Humans had four legs, they would be related to Mutants. Writing something like $P \land L$, where $P = \text{Human}$ and $L = \text{has two legs}$ does not make much sense to me. Is it even possible to show a relation like this with propositional logic? Typically, I would assume that one would represent the first sentence by stating $P = \text{Humans have two legs}$ As for the second sentence, it makes more sense to let $P = \text{Humans had four legs}$ and $Q = \text{Humans are related to Mutants}$. Then you could state $P \implies Q$. So my question is this: Is there a way to break these sentences down even further using propositional logic? AI: There are many ways to translate these into propositional logic. P="Humans have two legs" is one possibility. P(n)="Humans have $n$ legs" is another one; so P(2) is for 2 legs and P(4) is for 4 legs.
H: Inverting a function I'm having some problems calculating the inverse of this function: $f(u,v)=(u+v,v-u^2)$, its domain is $D=\{(u,v)$ in $\Bbb R^2 : u>0\}$ Thanks in advance. AI: If you consider the inverse function $f^{-1}(a,b)$, you have to solve $$\left\{\begin{array}{l} u+v = a\\ v-u^2 = b \end{array}\right.$$ $$\begin{align} (u +v) - (v-u^2) =& a-b\\ u^2 + u - a + b =& 0\\ u =& \frac{-1+\sqrt{1+4(a-b)}}2\\ v =& a-\frac{-1+\sqrt{1+4(a-b)}}2\\ f^{-1}(a,b) =& \left(\frac{-1+\sqrt{1+4(a-b)}}2,a-\frac{-1+\sqrt{1+4(a-b)}}2\right) \end{align}$$ (The positive square root is taken because the domain requires $u>0$.)
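A round-trip check of the derived inverse (my own addition, not part of the original answer), on a point with $u>0$:

```python
import math

def f(u, v):
    return (u + v, v - u*u)

def f_inv(a, b):
    # Quadratic formula; the positive root is forced by the domain u > 0.
    u = (-1 + math.sqrt(1 + 4*(a - b))) / 2
    return (u, a - u)

a, b = f(2.0, 5.0)     # (7.0, 1.0)
u2, v2 = f_inv(a, b)
print(u2, v2)          # recovers (2.0, 5.0)
```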
H: Simple notation question Let A = {2, 3, 4, 6, 7, 9} and define a relation R on A as follows: For all x, y ∈ A, x R y ⇔ 3 | (x − y). Then 2 R 2 because 2 − 2 = 0, and 3 | 0. What does the 3 | 0 notation mean here? AI: $\;3\mid 0\; $ = three divides zero, which means there exists an integer $\;k\;$ s.t. $$0=3\cdot k$$ Can you see what integer $\;k\;$ fulfills the above?
H: Why is the Brownian motion a multivariate normal distribution? I have seen in class that for some reasons I forgot, the Brownian Motion has a multivariate normal distribution, but I am unable to prove it easily. Could someone tell me why it's true? From what I understand, I have to take a finite linear combination of values of Brownian motion at different times, and check that it's normally distributed. Could someone help me on this? Thanks. edit: the definition I start from is the one from wikipedia: http://en.wikipedia.org/wiki/Brownian_motion#Mathematics points 1 to 4 what I'm trying to prove is that Y = a1*B1 + … + ak*Bk is normally distributed, where Bi are values of the Brownian motion at time Ti AI: You can choose $\lambda_1,\dots,\lambda_k$ so that $$Y = a_1B_1 + \dots + a_kB_k = \lambda_1B_1 + \lambda_2(B_2-B_1) + \dots + \lambda_k(B_k-B_{k-1}).$$ But from the definition of Brownian motion, you know that $B_1, B_2-B_1, \dots, B_k-B_{k-1}$ are normally distributed and independent, so a linear combination of them is again normally distributed.
H: How to solve the equation $x^2=a\bmod p^2$ What is the standard approach to solve $x^2=a\bmod p^2$ or, more generally, $x^n = a\bmod p^n$? AI: The usual method for solving polynomial equations modulo $p^n$ is to solve it mod $p$, then use some method to extend a solution from mod $p$ to mod $p^2$, then to mod $p^3$, and so forth. This can be done easily in an ad-hoc fashion: if you know that $f(a) = 0 \bmod p$, then you can make a new equation $f(a+px) = 0 \bmod p^2$ and solve it for $x$. If $f$ is a polynomial, we usually have $$ f(a+px) = f(a) + px f'(a) \pmod{p^2}$$ so, as you can see, it's just solving a linear equation in this typical case. But you don't have to memorize differential approximation: just plug $a+px$ into $f$ and simplify it. This will result in something correct even when the above formula isn't true. Sometimes, you have to solve an equation modulo $p^2$ (or worse) before you start getting unique extensions, and there can be other subtleties. But these problems manifest themselves clearly when you try to use the ad-hoc method. (e.g. $f'(a)$ will be zero modulo $p$) A more systematic way to carry out this method is to use Hensel's lemma. This is essentially equivalent to use Newton's method for finding the roots of an equation, and is closely related to the $p$-adic numbers.
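A small sketch of the ad-hoc lifting step for $x^2 \equiv a \pmod{p^2}$ (my own illustration; the helper name is made up, and it assumes an odd prime $p$ with $\gcd(a,p)=1$, so that $f'(r)=2r$ is invertible mod $p$):

```python
def sqrt_mod_p2(a, p):
    """Solve x^2 = a (mod p^2) for an odd prime p with gcd(a, p) = 1,
    by first solving mod p and then lifting one step, as in the answer."""
    # Step 1: brute-force a root mod p (fine for small p).
    r = next((x for x in range(p) if (x*x - a) % p == 0), None)
    if r is None:
        return None  # a is not a quadratic residue mod p
    # Step 2: write x = r + p*t; then f(r) + 2*r*p*t = 0 (mod p^2),
    # i.e. t = -(f(r)/p) * (2r)^(-1) (mod p).
    t = (-((r*r - a) // p) * pow(2*r, -1, p)) % p
    return (r + p*t) % (p*p)

x = sqrt_mod_p2(2, 7)   # 2 is a QR mod 7, since 3^2 = 9 = 2 (mod 7)
print(x, (x*x) % 49)    # x^2 = 2 (mod 49)
```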
H: Why does substitution work in integrals Let's say I have this integral: $$\int_0^\infty e^{-t} \, dt$$ And I make the substitution: $$t = nu$$ Then why can I say that: $$dt = n\,du$$ and then put this into my integral like this: $$\int_0^\infty e^{-nu}n\,du$$ What's happening in the background that allows this to be done? I'm asking this because I don't feel comfortable treating differential operators as fractions and I don't know why this can be done. AI: It's just a way of looking at the chain rule. The chain rule is differentiation by substitution. One can write $$\frac{d}{dx} f(g(x)) = f'(g(x)) g'(x),$$ or one can look at $$ \frac{d}{dx} f(g(x)) $$ and then do this substitution: $$ u = g(x),\qquad \frac{du}{dx} = g'(x). $$ Then one writes $$ \frac{d}{dx} f(g(x)) = \frac{d}{dx} f(u) = \frac{df(u)}{dx} = \frac{df(u)}{du}\cdot\frac{du}{dx} = f'(u)\cdot g'(x) = f'(g(x))\, g'(x). $$ In the same way, when one sees $$ \int f'(g(x)) g'(x) \,dx, $$ one does the substitution $$ u=g(x),\qquad \frac{du}{dx} = g'(x),\qquad du = g'(x)\,dx. $$ Then one has $$ \int f'(g(x)) g'(x) \,dx = \int f'(u)\,du = f(u)+C = f(g(x))+C. $$ So integration by substitution is the chain rule in reverse, just as integration by parts is the product rule in reverse.
H: Simplifying a trigonometric function Can anyone show me the steps to get from: $$\dfrac{\cos (x)}{1+\sin(x)}+\frac{1+\sin(x)}{\cos(x)}$$ To: $$2\sec(x)$$ AI: First do the obvious thing and combine the fractions over a common denominator: $$\begin{align*} \frac{\cos x}{1+\sin x}+\frac{1+\sin x}{\cos x}&=\frac{\cos^2x+(1+\sin x)^2}{\cos x(1+\sin x)}\\ &=\frac{\cos^2x+1+2\sin x+\sin^2x}{\cos x(1+\sin x)}\\ &=\frac{2+2\sin x}{\cos x(1+\sin x)}\;; \end{align*}$$ from here you should be able to finish it.
H: Calculating individual wheel velocities from a desired angle in a differential wheeled robot I am working on a simulation of a two-wheeled robot, and at present am driving it by setting each individual wheel's velocity. The robot is similar to an ePuck: What I would like to do is set an initial (and constant) overall speed for the robot, and simply command it to turn by a specified angle while moving. At this point, I'd like to keep the model simple and not worry as much about acceleration. Essentially, what I would like to do is command it to turn by 90 degrees, as shown in the next picture. On that basis, with a constant speed on the forward motion of the body itself, what I'd like to know is how to calculate the appropriate velocity of each wheel separately. I'd looked into some existing models that go into differential equations here and here, however I wasn't able to understand if it were possible to find the values that I'm interested in. AI: Let $W$ be the distance between the points of contact of the two wheels. If you want to turn the (really neat) vehicle through an angle of $\theta \text{ }$radians, then the outer wheel must travel a distance $W\cdot \theta \text{ }$farther than the inner wheel. If the radius of each wheel is $R$, then the outer wheel must rotate on its axle through an "extra" angle (relative to the inner wheel) given by, in radians:$$ \theta_{extra}=\frac{W\cdot \theta}{R}$$If you want to work in angular velocity and time, assume that you can add an angular velocity $\omega_{add}$, in radians per second to the outer wheel for T seconds. Then to turn through an angle $\theta$ $$T=\frac{W\cdot \theta}{R\cdot \omega_{add}}$$ If you can "add" positive or negative angular velocity, you could change the speed of just one wheel, say the left one; add speed to turn right, and reduce speed to turn left.
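As a sketch of the resulting wheel commands (my own addition; the helper name is made up and the wheel separation and radius below are assumed ePuck-like values, not specified in the question), here are the two wheel angular velocities for a body speed $v$ and a commanded turn of $\theta$ radians over $T$ seconds:

```python
import math

def wheel_speeds(v, theta, T, W, R):
    """Angular velocities (rad/s) of the (left, right) wheels so the robot
    moves forward at speed v while turning through theta radians in time T.
    W = distance between wheels, R = wheel radius."""
    omega_body = theta / T               # turn rate of the body
    v_left  = v - omega_body * W / 2.0
    v_right = v + omega_body * W / 2.0   # outer wheel when theta > 0
    return v_left / R, v_right / R

W, R, T = 0.053, 0.0205, 2.0             # assumed geometry (meters, seconds)
left, right = wheel_speeds(v=0.1, theta=math.pi/2, T=T, W=W, R=R)

# The extra distance covered by the outer wheel equals W * theta, as derived:
print((right - left) * R * T, W * math.pi/2)  # the two values agree
```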
H: How does my professor go from this exponential equation to a logarithmic one? How does the "therefore" portion work? How does that exponential equation come to equal n(lgn + 1)? AI: In the first line, $n=2^k$, so from the next to last we substitute in. Given this definition, $k=\lg n, k2^k=n \lg n$
H: Is it true that if $n$ is even then $\sum_{k=1}^{n}(n \bmod k)<\frac{8}{45}n^2$? Let $f(n,k)$ be the least non-negative integer such that $n\equiv f(n,k) \bmod k.$ $f(10,k)(k=1,2,\cdots,10)=0, 0, 1, 2, 0, 4, 3, 2, 1, 0.$ Hence $$\sum_{k=1}^{10}f(10,k)=1+2+4+3+2+1=13.$$ Question: Is it true that if $n$ is even then$$\sum_{k=1}^{n}f(n,k)<\frac{8}{45}n^2\tag?$$ This is true for $n<10^5,$ but not true for many odd integers, such as $11,23,29,35,47,53,59,\cdots$ Edit: Does $$\lim_{n\to \infty}\frac{1}{n^2}\sum_{k=1}^{n}f(n,k)$$ exist? AI: Apply $n\bmod k = n - k \big\lfloor \frac{n}{k} \big\rfloor$ to $$g(n) = \sum_{k=1}^n (n\bmod k)$$ to get $$g(n) = n^2 - \sum_{k=1}^n k \bigg\lfloor \frac{n}{k} \bigg\rfloor.$$ Introduce $$q(n) = \sum_{k=1}^n k \bigg\lfloor \frac{n}{k} \bigg\rfloor$$ and observe that $$q(n+1)-q(n) = (n+1) \bigg\lfloor \frac{n+1}{n+1} \bigg\rfloor + \sum_{k=1}^n k \left(\bigg\lfloor \frac{n+1}{k} \bigg\rfloor - \bigg\lfloor \frac{n}{k} \bigg\rfloor\right) \\= n+1 + \sum_{d|n+1\atop d <n+1} d = \sigma(n+1).$$ Therefore $$q(n) = \sum_{k=1}^n \sigma(k).$$ Now recall that $$\sum_{n\ge 1}\frac{\sigma(n)}{n^s} = \zeta(s)\zeta(s-1) \quad\text{and}\quad \mathrm{Res}\left(\zeta(s)\zeta(s-1); s=2\right) = \frac{\pi^2}{6}.$$ Hence by the Wiener-Ikehara theorem $$ \sum_{k=1}^n \sigma(k) \sim \frac{\pi^2}{6} \frac{n^2}{2} = \frac{\pi^2}{12} n^2.$$ It follows that $$ g(n) \sim \left(1-\frac{\pi^2}{12} \right) n^2$$ and the conjectured limit exists. This approximation is quite good, e.g.
we have $g(2000) = 708989$ and the approximation gives $710132.$ Even better we may use Mellin-Perron summation and include the pole at one which has residue $-1/2$, $$\mathrm{Res}\left(\zeta(s)\zeta(s-1); s=1\right) = -\frac{1}{2}$$ plus a correction term to get $$g(n) \sim \left(1-\frac{\pi^2}{12} \right) n^2 + \frac{1}{2} n - \frac{1}{2}\sigma(n).$$ This last approximation is excellent, it gives $708714$ for $n=2000$ and for $n=8000$ with exact value $g(8000)=11356914$ it gives $11356203.$ For $n=16000$, we have $g(16000) = 45437799$ and the approximation gives $45436549.$ Observe that $$\frac{8}{45} \approx 0.1777777778 \quad\text{and}\quad 1-\frac{\pi^2}{12} \approx 0.1775329664$$ so the conjectured coefficient was very close to the asymptotic one.
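The identities used above are easy to check directly (my own verification code, not part of the original answer):

```python
import math

def g(n):                      # sum of n mod k for k = 1..n
    return sum(n % k for k in range(1, n + 1))

def sigma(n):                  # sum-of-divisors function
    return sum(d for d in range(1, n + 1) if n % d == 0)

n = 200
# g(n) = n^2 - sum of k*floor(n/k):
assert g(n) == n*n - sum(k * (n // k) for k in range(1, n + 1))
# sum of k*floor(n/k) = sum of sigma(m) for m <= n:
assert sum(k * (n // k) for k in range(1, n + 1)) == sum(sigma(m) for m in range(1, n + 1))

# Asymptotically g(n)/n^2 approaches 1 - pi^2/12:
print(g(2000) / 2000**2, 1 - math.pi**2 / 12)
```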
H: All values of $z$ s.t. $e^z= 1+\sqrt{3}i$ I'm trying to find all values of $z$ such that $e^z= 1+\sqrt{3}i$ and am getting stuck. I know $$e^z=e^{x+iy}$$ from this I've done $$e^{z}=e^x\cos(y)+e^x\sin(y)i=1+\sqrt{3}i$$ giving that $$e^x\cos(y)=1$$ $$e^x\sin(y)i = \sqrt{3}i$$ but can't find a value for $y$ in the second equality. AI: Hint: Note that $\lvert1+i\sqrt{3}\rvert=2$. And that $\lvert e^{x+iy}\rvert=e^x$. This tells you immediately what $e^x$ must be, and should simplify your calculations for $y$.
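Carrying the hint to its end (the explicit values below are my own completion, not stated in the answer): $e^x = \lvert 1+i\sqrt3\rvert = 2$ gives $x=\ln 2$, and then $\cos y = \tfrac12$, $\sin y = \tfrac{\sqrt3}{2}$ force $y = \tfrac{\pi}{3}+2k\pi$. A quick numeric check:

```python
import cmath, math

target = 1 + 1j * math.sqrt(3)
for k in range(-2, 3):
    z = math.log(2) + 1j * (math.pi/3 + 2*math.pi*k)
    print(k, abs(cmath.exp(z) - target))   # ~0 for every integer k
```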
H: Quick Algebraic Trig question The question is: [equation shown in an image] I need to solve the equation for all values of $x$ with $0 < x < 360$. Now I got it to here (a quadratic in $\cos x$) and got a bit stuck: [expression in an image] $= 0$ AI: Quadratic equation: $$\cos(x) = \frac{-1 \pm \sqrt{17}}{2}$$ Can you continue?
H: Proof regarding GCD I'm trying to prove that if $a,b$ are relatively prime then $a+b$ and $a^2+ab+b^2$ are also relatively prime. That is, we have to prove that $\text{gcd}(a,b)=1\Rightarrow \text{gcd}(a+b,a^2+ab+b^2)=1$. Should I try Bezout? Any hint on how to proceed? AI: Hint: Notice that $p\mid a+b$ and $p\mid a^2 + ab + b^2$ together imply $p$ divides $$ a^2 = a^2+ab+b^2 - b(a+b) $$ and $$ b^2 = a^2+ab+b^2 - a(a+b). $$
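A randomized check of the claim (my own addition, not part of the original answer):

```python
import random
from math import gcd

random.seed(1)
for _ in range(10_000):
    a, b = random.randint(1, 10**6), random.randint(1, 10**6)
    if gcd(a, b) == 1:
        assert gcd(a + b, a*a + a*b + b*b) == 1
print("no counterexample among 10,000 random pairs")
```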
H: Simplify $\sum_{k=0}^n \frac{1}{k!(n-k!)}.$ Is there a way to simplify the expression $$\sum_{k=0}^n \frac{1}{k!(n-k)!}?$$ This came up when I was trying to determine $\mathbb{P}(X+Y =r)$ given a joint mass probability $$m_{X,Y}(j,k) = \frac{c(j+k)a^{j+k}}{j!k!},$$ where $j$ and $k$ are non-negative integers and $a,c>0$ are constants. AI: Hint: Multiply the sum by $n!$, and write $$\frac{n!}{k!(n-k)!}=\binom{n}{k}.$$ Now try applying the binomial theorem.
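Numerically (my own check, not part of the original answer), the sum indeed equals $2^n/n!$:

```python
from math import factorial

def lhs(n):
    return sum(1 / (factorial(k) * factorial(n - k)) for k in range(n + 1))

for n in (1, 2, 5, 10):
    print(n, lhs(n), 2**n / factorial(n))   # the two columns agree
```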
H: Prove $T(x)=0$ (Linear Algebra) Prove that for any five linear transformations $T_1,\dots,T_5$ mapping $\mathbb R^2$ into $\mathbb R^2$, there exist scalars $r_1, r_2, r_3, r_4, r_5$, not all zero, such that $T=r_1T_1+r_2T_2+r_3T_3+r_4T_4+r_5T_5$ satisfies $T(x)=0$ for all $x$ in $\mathbb R^2$. What I did $$T : \mathbb R^2 \rightarrow \mathbb R^2$$ Let $$r\in\mathbb R^2 \qquad T[x,y]=[r_1x_1+...+r_5x_5, r_1y_1+...+r_5y_5]=[0,0]$$ I think that I should use independence to prove this, but I'm not sure how to continue. AI: Hint: The space of linear transformations from $\mathbb{R}^2$ to $\mathbb{R}^2$ (remember these are also matrices of size $2 \times 2$) has dimension $2 \times 2 = 4$. So if you have $5$ of them, what do you know immediately? Added: Intuitively, the space of $2 \times 2$ matrices has dimension $4$ because a $2 \times 2$ matrix consists of $4$ coordinates which are added separately under the operation of adding $2 \times 2$ matrices. If you've encountered the concept of isomorphism, you can notice that the map $$\begin{pmatrix}a&b\\c&d\end{pmatrix} \to (a,b,c,d)$$ is an isomorphism onto $\mathbb{R}^4$ to prove this formally. In case you have not encountered this concept, note that $$\left\{\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\}$$ forms a basis, since for any given $2 \times 2$ matrix we have $$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = a\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}+ b\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} + d\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$ and the family is clearly linearly independent.
H: Linear algebra - solving equation systems System 1: $-x-2y = -2$, $x+2y=2$. System 2: $x+3y=6$, $-x-3y=6$. What is the solution to systems 1 and 2 separately? It has no solution Unique solution Infinitely many solutions I am seriously confused: in system 1, if I add both equations then $0 = 0$ — does this mean it can have infinitely many solutions? And in system 2, if I add both equations I get $0=12$ — what does this mean? Can anyone help please? AI: If you add the equations in the first system then you'll end up with $0=0$; that means we have an infinite number of solutions, i.e. every pair $(x,y)$ with $x+2y=2$ works. If you do the same with the second system you'll end up with $0=12$. That's a contradiction and means that that system doesn't have a solution.
H: Does "gerrymandering" matter? In the United States, redistricting, or, as it is unfavorably called, "gerrymandering," is often a sore point of discussion. Background: In the United States House of Representatives, each State is allotted a number of representatives based on the State's population. Within the state, each of its representatives is assigned a geographical "district." The point of contention comes with the fact that a state's governor may periodically get to redraw the districting lines. My question is: mathematically, probabilistically, can this work? My intuition tells me that if one line is moved over to improve the chances of one side winning one district, won't that just hurt the chances of winning the other? AI: Imagine a state with 12 million inhabitants, and 6 districts. The voters are equally split with 6 million Democrats and 6 million Republicans. Note that each district contains two million voters, and it seems fair that there should be an even split in representation. However, the districts are gerrymandered so that district one contains only 2 million Democrats and no Republicans. The other 4 million Democrats and 6 million Republicans are evenly distributed through the other districts. Now there are five districts, each 60% Republican and 40% Democrat - as a result, we expect one Democratic district and 5 Republican districts. So gerrymandering significantly changed the result. This is an extreme case, but if we were to even set it up so that one district contains 1.5 million Democrats with the remaining Democrats evenly distributed, we'd find that the remaining five districts have moderate Republican advantages, even though the first district is overwhelmingly Democratic.
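The arithmetic of the packed scenario can be tabulated directly (my own illustration of the numbers in the answer):

```python
# District 1 packs 2M Democrats; the remaining 4M D / 6M R are spread
# evenly over districts 2-6, giving each a 40/60 split.
districts = [(2_000_000, 0)] + [(800_000, 1_200_000)] * 5   # (D, R) per district

assert sum(d for d, _ in districts) == 6_000_000
assert sum(r for _, r in districts) == 6_000_000

dem_seats = sum(1 for d, r in districts if d > r)
print(dem_seats, len(districts) - dem_seats)   # 1 Democratic seat, 5 Republican
```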
H: How would I know if I'm good in logic? I've always been interested in logic, but unfortunately my school contains no logicians. What are some good logic puzzles/books and how would I know if logic is right for me? Also, what can I do with logic? Is it strictly academic? Can I do math with it? AI: There are several questions here, but maybe I can answer the ones in the last paragraph. Different occupations use logic in different amounts. Probably mathematical logicians use it the most, other mathematicians, somewhat less, and some other professionals such as lawyers use it even less, but still more than the average person. Here I am talking about informal logical reasoning, which is a mode of thinking that helps with many things in life. Mathematical logic and formal logic, by contrast, are academic subjects (although of course they do have some relation with the layman's notion of logic.) Being a mathematical logician is quite different from just practicing logical thinking. Mathematical logic is a big field of mathematics that includes set theory, model theory, recursion theory, and proof theory. Formal logic underlies all of these fields, but once you learn it, you can proceed to reason informally almost as much of the time as you can in other areas of math. In short, logic isn't only about logical reasoning, just as math isn't only about numerical calculation. Probably the only way to know if you like mathematical logic is to be introduced to some of its subject matter (perhaps someone else will answer with some suggested reading for you.)
H: Tangent plane to the surface $\cos(x)\sin(y)e^z = 0$? The surface is as in the title, $$\cos(x) \sin(y) e^z = 0$$ I'm looking for the tangent plane at the point $(\frac{\pi}{2},1,0)$ I know the equation of a tangent plane for $z = f(x,y)$ is $$z-z_0 = f_x(x_0,y_0)(x-x_0) + f_y(x_0,y_0)(y-y_0)$$ But in the surface given, I cannot isolate for $z$. What should I do? AI: Let $f : U\subset \mathbb{R}^3\to \mathbb{R}$ be given by $f(x,y,z)=\cos x\sin y e^z$; then we have that your surface is indeed a level set $M = f^{-1}(0)$. Then it's easy: remember that the gradient of a function is orthogonal to the level sets. Using this we have that $$\nabla f(x,y,z)=(-\sin x\sin ye^z, \cos x\cos y e^z, \cos x\sin y e^{z})$$ So that at $(\pi/2,1,0)$ we have $\nabla f(\pi/2,1,0)=(-\sin 1,0,0)$, so that the normal is a multiple of the vector $e_1$, and hence since the magnitude of the normal vector doesn't matter, we can pick the normal vector to be $e_1$. Of course then, the tangent plane is the plane $x=\pi/2$, i.e. the plane through the given point parallel to the $yz$ plane.
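One can confirm the gradient numerically (my own check, not part of the original answer) with central differences:

```python
import math

def f(x, y, z):
    return math.cos(x) * math.sin(y) * math.exp(z)

def grad(x, y, z, h=1e-6):
    # Central-difference approximation of the gradient.
    return ((f(x+h, y, z) - f(x-h, y, z)) / (2*h),
            (f(x, y+h, z) - f(x, y-h, z)) / (2*h),
            (f(x, y, z+h) - f(x, y, z-h)) / (2*h))

gx, gy, gz = grad(math.pi/2, 1.0, 0.0)
print(gx, gy, gz)   # close to (-sin 1, 0, 0)
```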
H: Probability: balanced die is rolled repeatedly until the same number appears Suppose that a balanced die is rolled repeatedly until the same number appears on the two successive rolls, and let $X$ denote the number of rolls that are required. Determine the value of $Pr(X=x)$, for $x=2,3,\dots$. I guess the answer is $\big(\frac{5}{6}\big)^{x-2}\big(\frac{1}{6}\big)^2$. Am I right? AI: The case $X = x$ occurs precisely when: $\bullet$ The $x-2$ rolls $2,3,\ldots,x-1$ are different from the previous roll. $\bullet$ The $x$-th roll is the same as the previous roll. So the probability of this happening is $\left(\frac{5}{6}\right)^{x-2} \cdot \frac{1}{6}$
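A quick simulation (my own addition, not part of the original answer) agrees with this formula:

```python
import random
from collections import Counter

def rolls_until_repeat():
    prev = random.randint(1, 6)
    count = 1
    while True:
        cur = random.randint(1, 6)
        count += 1
        if cur == prev:
            return count
        prev = cur

random.seed(0)
trials = 100_000
freq = Counter(rolls_until_repeat() for _ in range(trials))

for x in range(2, 6):
    print(x, freq[x] / trials, (5/6) ** (x - 2) * (1/6))  # empirical vs theory
```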
H: How would you "count" $\omega^\omega$ $\omega^\omega$ can be seen as the limit of $\omega^n$ which are all countable sets, and is thus countable. For the latter sets, there is an "easy" way to list the elements out, but how would you do it for $\omega^\omega$? That is, what would be a bijection from the positive integers to $\omega^\omega$? AI: Define $f:\omega^\omega\to\Bbb{N}^+$ such that $$f(\omega^{n_0}\cdot a_0+\cdots+\omega^{n_k}\cdot a_k):=p_{n_0}^{a_0}p_{n_1}^{a_1}\cdots p_{n_k}^{a_k}.$$ This function is well-defined because of the uniqueness of the Cantor normal form of ordinals, and it is a 1-1 function because of the fundamental theorem of arithmetic. Also, you can prove that this function is onto.
H: Metric on a Class of Functions Mapping to a Metric Space Let $X$ be a nonempty set and $(Y,\rho)$ a nonempty metric space. Let $Y^X$ denote the set of mappings from $X$ to $Y$. Define $\pi:Y^X\times Y^X\to[0,\infty)$ as follows: for any $f,g\in Y^X$, let $$\pi(f,g)\equiv\min\left\{1,\sup_{x\in X}\rho(f(x),g(x))\right\}.$$ I need to trick with the “min” in order to make sure $\pi$ is finite even when the supremum is not. Question: is $(Y^X,\pi)$ a metric space? I'm almost sure it is, but I just need some external verification to be fully convinced. Thank you in advance for sharing your thoughts. AI: Sure, it's true. All of the metric space axioms except possibly the triangle inequality are very easy to check, so let's focus on the latter. Let $f, g, h \in Y^X$. Of course it is automatic by definition that $0 \leq \pi(f, h) \leq 1$, so if either $\pi(f, g) = 1$ or $\pi(g, h) = 1$, it is immediate that $\pi(f, h) \leq \pi(f, g) + \pi(g, h)$. If not, then we have $\pi(f, g) = \sup_{x \in X} \rho(f(x), g(x))$ and $\pi(g, h) = \sup_{x \in X} \rho(g(x), h(x))$. Then for any $z \in X$ we have $$\rho(f(z), h(z)) \leq \rho(f(z), g(z)) + \rho(g(z), h(z)) \leq \pi(f, g) + \pi(g, h)$$ whence $\pi(f, h) \leq \sup_{z \in X} \rho(f(z), h(z)) \leq \pi(f, g) + \pi(g, h)$.
H: Find the vector, not with determinants, but by using properties of cross products $(i + j)\times(i − j)$ I know how to use the right hand rule for the cross product, but how do you find the exact vector without using determinants? AI: The cross product has to be orthogonal to both $i+j$ and $i-j$. Let $v$ be this cross product. Then $\langle i+j,v\rangle=\langle i,v\rangle + \langle j,v\rangle = 0$ and $\langle i-j,v\rangle=\langle i,v\rangle - \langle j,v\rangle = 0$. What can you conclude? Now that we know $i$ and $j$ are unit vectors, we can do a little better. If we write $v=ai+bj+ck$, $$ \langle i,v\rangle=a,\langle j,v\rangle = b. $$ The last fact you'll need is that $|v|=|i+j|\cdot|i-j|\cdot|\sin\theta|=2|\sin\theta|$, since $|i+j|=|i-j|=\sqrt2$. Can you figure out what the angle between $i+j$ and $i-j$ is?
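Alternatively (my own addition), bilinearity of the cross product gives the answer directly: $(i+j)\times(i-j) = i\times i - i\times j + j\times i - j\times j = -2(i\times j) = -2k$. A component-wise check:

```python
def cross(u, v):
    # Standard component formula for the 3D cross product.
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

u = (1, 1, 0)    # i + j
v = (1, -1, 0)   # i - j
print(cross(u, v))  # (0, 0, -2), i.e. -2k
```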
H: Why is this function not locally Lipschitz? I was reading an exercise, and supposedly this function: $$\chi \colon \Bbb R\times\Bbb R\to\Bbb R, \quad \chi (t,x)=3x^{2/3}$$ is not locally Lipschitz (in the second variable). In the notes this isn't proved, so I assumed that it was easy to see, but I've been trying to prove it and I can't get very far. What I've done is suppose that it is locally Lipschitz, in particular we can take $x_0=0$ and $t_0=0$, then there exist $\delta_0>0$ and $C>0$ such that, if $|t-t_0|=|t|\le\delta_0$ and $x_1,x_2\in \overline B(x_0,\delta_0)=\overline B(0,\delta_0)$, then $\|\chi(t,x_1)-\chi(t,x_2)\|\leqslant C\|x_1-x_2\|$. Also, because we are taking $x_1$ and $x_2$ very close to the origin (and because of the way the function behaves) we have that $||\chi(t,x_1)-\chi(t,x_2)||\geqslant ||x_1-x_2||$, I don't know if this is actually helpful, but I got two characteristics of the constant $C$, which I'm not quite sure if they make any sense or if they are worth something. Because we have this: $\|\chi(t,x_1)-\chi(t,x_2)\|\geqslant \|x_1-x_2\|$, then this happens: $C\|\chi(t,x_1)-\chi(t,x_2)\|\leqslant C\|x_1-x_2\|$ only if $C<0$, which isn't true since $C$ is a Lipschitz constant. We know that $\|\chi(t,x_1)-\chi(t,x_2)\|\leqslant C\|x_1-x_2\|$ and $\|\chi(t,x_1)-\chi(t,x_2)||\geqslant \|x_1-x_2\|$, then $1\leqslant\frac{\|\chi(t,x_1)-\chi(t,x_2)\|}{\|x_1-x_2\|}\leqslant C$ hence $1\leqslant C$. What am I doing wrong? How can I prove that $\chi$ is in fact, not locally Lipschitz? AI: The function $x \mapsto \chi(t,x)$ is not Lipschitz at $x=0$. (Note: Being locally Lipschitz is a stronger condition.) You can see that the derivative becomes unbounded near $x=0$: Suppose $x > 0$. Then $\chi(t,x)-\chi(t,0) = 3 x^{\frac{2}{3}}= 3 \frac{1}{\sqrt[3]{x}}x$, and so $|\chi(t,x)-\chi(t,0)| = 3 \frac{1}{\sqrt[3]{x}}|x-0|$. Hence for any $L>0$, if we choose $0< x \le (\frac{3}{L})^3$, then $|\chi(t,x)-\chi(t,0)| \ge L |x-0|$.
Consequently, $x \mapsto \chi(t,x)$ is not Lipschitz at $x=0$.
H: Maximum distance between two unit norm vectors I have 2 random vectors. I want to limit the Euclidean distance between those two vectors to a certain number (say 2) by normalizing them. I think that if I normalize them such that they have a unit (L2) norm then any two vectors arbitrarily selected of any dimensionality will have the distance between them at most equal to 2. Is this correct? If not, is there any way to achieve this? Remember, vectors can take any real values and can have any number of dimensions. Also, how would I do it if I want to limit the cosine distance of any two vectors to a certain value? AI: If $\|x\|=\|y\| = 1$, the triangle inequality gives $\|x-y\| \le \|x\| + \|y\| = 2$. This is true for any norm. Since the two vectors have unit norm, we can define the angle between them with $\cos \theta = \langle x, y \rangle$. If you have some constraint on $\theta$, you can translate that into a check on $\langle x, y \rangle$. Then keep generating random vectors until they satisfy the criterion. The criterion needs to have a positive probability of being true.
H: How to check that probability adds up to $1$ When I asked this, I got one comment saying "A good sanity check is to see if the probabilities over all $x$ add up to $1$." The right answer to the question I linked is $\left(\frac{5}{6}\right)^{x-2} \cdot \frac{1}{6}$ and I want to know how it adds up as $x$ goes from $2$ to $\infty$. Please help. AI: Let us define the sum of the series as $S$, which is $$S=\frac{1}{6}\left[1+\left(\frac{5}{6}\right)+\left(\frac{5}{6}\right)^2+\dots\right].$$ Then multiply both sides by $\frac{5}{6}$: $$\frac{5}{6}S= \frac{1}{6}\left[\frac{5}{6}+\left(\frac{5}{6}\right)^2+\left(\frac{5}{6}\right)^3+\dots\right]$$ Then take the difference of the above two series: $$S-\frac{5}{6}S=\frac{1}{6}.$$ This gives $S=1$.
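The same check can be done numerically (a sketch): the partial sums of $\left(\frac{5}{6}\right)^{x-2}\cdot\frac{1}{6}$ approach $1$ as more terms are added.

```python
def partial_sum(n):
    # sum of (5/6)**(x-2) * (1/6) over x = 2, ..., n
    return sum((5 / 6) ** (x - 2) * (1 / 6) for x in range(2, n + 1))

# partial_sum(n) -> 1 as n grows, confirming the probabilities sum to 1
```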
H: Is $\complement(A\setminus B)=(\complement A) \setminus (\complement B)$ true or false? The problem I have is to calculate this term $(\complement A) \setminus (\complement B)$ when I (for example) let $A=\left \{ a,b,c,d \right \}$ and $B=\left \{ b,c,e,g\right \}$. How do I calculate it? I've tried, but never got the right way to do it from this place \begin{equation*} (\complement A)\setminus (\complement B)=\left \{x\in U | x\notin A \right \}\setminus\left \{x\in U | x\notin B \right \} \end{equation*} I know that $A\setminus B=\left \{ a,d \right \}$, then $\complement(A\setminus B)=\left \{x\in U | x\notin \left \{ a,d \right \} \right \}$. AI: I’ll get you started. I’m guessing that your $U$ is $\{a,b,c,d,e,f,g\}$; if not, make the appropriate modifications. Start at the easy end: $$\complement A=\{x\in U:x\notin A\}=\{e,f,g\}\;,$$ since $e,f$, and $g$ are the members of $U$ that are not in $A$. Similarly, $$\complement(A\setminus B)=\big\{x\in U:x\notin\{a,d\}\big\}=\{b,c,e,f,g\}\;.$$ Can you finish it by calculating $\complement B$ and $\big(\complement A\big)\setminus\big(\complement B\big)$?
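The two sides can also be compared mechanically with Python sets, taking the universe $U=\{a,\dots,g\}$ assumed in the answer (a sketch):

```python
U = set("abcdefg")  # the assumed universe
A = set("abcd")
B = set("bceg")

def comp(S):
    return U - S  # complement within U

lhs = comp(A - B)        # complement of A \ B        -> {b, c, e, f, g}
rhs = comp(A) - comp(B)  # (complement A) \ (complement B) -> {e, g}
# the two differ, so the proposed identity is false
```

In fact $(\complement A)\setminus(\complement B)=\complement A\cap B=B\setminus A$ holds in general, which the code confirms for this example.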
H: Prove connectivity of graph with vertices of degree $\geq \lfloor \frac n2 \rfloor$ Claim: A graph with vertices of degree at least $\lfloor \frac n2 \rfloor$ where $n = $ number of vertices and $n \geq 3$ is connected. I tried to prove this by contradiction, but I didn't know what to make of the $\lfloor \frac n2 \rfloor$ part. AI: Assume for a contradiction that the graph is not connected. This means that the vertices can be partitioned into two nonempty sets $X$ and $Y$ so that there are no edges between $X$ and $Y$; namely, choose a vertex $x_0$ and let $X$ be the set of all vertices connected to $x_0$ by a path, and let $Y$ be the rest of the vertices. Now suppose $X$ is the smaller of the two sets. How big can $X$ be? $X$ contains at most half the vertices in the graph, so $|X|\le\frac n2$. Since $|X|$ is an integer, it follows that $|X|\le\lfloor\frac n2\rfloor$. How big can the degree of a vertex in $X$ be? If $x\in X$ then $x$ is joined only to vertices in $X\setminus\{x\}$, so $x$ has degree $\le\lfloor\frac n2\rfloor-1$, contradicting the assumption that all vertices have degree $\ge\lfloor\frac n2\rfloor$. This argument does what you asked for, but N.S.'s answer is better because it proves more: your graph is not only connected, it has diameter at most $2$.
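The claim can also be verified exhaustively for small $n$ (a brute-force sketch over all edge subsets; function names are mine):

```python
from itertools import combinations

def is_connected(n, edges):
    # depth-first search from vertex 0
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def claim_holds(n):
    # every graph on n vertices with min degree >= floor(n/2) is connected
    all_edges = list(combinations(range(n), 2))
    for bits in range(2 ** len(all_edges)):
        edges = [e for i, e in enumerate(all_edges) if bits >> i & 1]
        deg = [0] * n
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        if min(deg) >= n // 2 and not is_connected(n, edges):
            return False
    return True
```

Exhaustive enumeration is only feasible for small $n$ (there are $2^{\binom n2}$ graphs), but it is a useful sanity check on the proof.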
H: Compact subset of a Banach space of infinite dimension Let $X$ be a Banach space of infinite dimension. And let $K\subset X$ be a compact subset of $X$. Can we conclude something about the interior of $K$? Is it true that it's empty? I don't know how to attack this problem. I have not even examples of compacts on the infinite dimensional case. AI: The interior of a compact set in an infinite dimensional space is necessarily empty. If not, then there would be an open set with compact closure. You could translate that set to get a neighbourhood of any other point with compact closure. Then the space is locally compact. However, Any locally compact topological vector space must be finite dimensional. This should be proved in pretty much any reasonable book on Functional Analysis.
H: The diameter of the open interval $(a,b)$ Suppose that $a,b \in \Bbb R$ and $a<b$. Now $\operatorname{diam}(a,b) = b-a$. I am slightly confused at this point because, by definition, the diameter of a subset $A$ of a metric space $X$ is $\sup\{d(a,b)\mid a,b\in A\}$. But in the above case $a,b$ do not belong to $(a,b)$, so why is the diameter of $(a,b)$ calculated using elements that don't belong to the set? AI: Because if $a,b\in\Bbb R$ and $a<b$, then $$\operatorname{diam}(a,b)=\sup\{|x-y|:x,y\in(a,b)\}=b-a\;.$$ Added: Remember, the supremum of a set need not belong to the set: $\sup[0,1)=1$ for instance, even though $1\notin[0,1)$.
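A numeric sketch of the supremum being approached but never attained: take the sample points $a+h$ and $b-h$, both inside $(a,b)$, and let $h\to 0^+$ (the interval $(2,5)$ below is just an example).

```python
a, b = 2.0, 5.0  # an example open interval (a, b), so diam = b - a = 3

def widest_pair(h):
    # distance between the points a + h and b - h, both inside (a, b)
    return (b - h) - (a + h)

# widest_pair(h) = (b - a) - 2*h < b - a for h > 0, but -> b - a as h -> 0+
```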
H: Reducing a product-of-sums expression f = ($x_1$ + $x_3$ + $x_4$) * ($x_1$ + $\overline x_2$ + $x_3$) * ($x_1$ + $\overline x_2$ + $\overline x_3$ + $x_4$) I've been working on this problem for a while but I cannot for the life of me figure out how to simplify the function without distributing everything. The following is what the answer shows. f = ($x_1$ + $x_3$ + $x_4$) * ($x_1$ + $\overline x_2$ + $x_3$) * ($x_1$ + $\overline x_2$ + $\overline x_3$ + $x_4$) f = ($x_1$ + $x_3$ + $x_4$) * ($x_1$ + $\overline x_2$ + $x_3$) * ($x_1$ + $\overline x_2$ + $x_3$ + $x_4$) * ($x_1$ + $\overline x_2$ + $\overline x_3$ + $x_4$) ... I just don't understand how they made the jump from step 1 to step 2. Everything after step 2 makes sense to me. If anyone could explain where the third term of line 2 came from I would greatly appreciate it. AI: The change from step 1 to 2 is the term $(x_1+\bar x_2+x_3+x_4)$, which is precisely an $\bar x_2$ more than the first term $(x_1+x_3+x_4)$. Then it's sufficient to show that $$a*(a+b)*c=a*c,$$ which is just the absorption law $a*(a+b)=a$: the added term is implied by the first one, so inserting it doesn't change $f$.
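The absorption step can be verified over all $2^4$ truth assignments (a sketch; function names are mine):

```python
from itertools import product

def f_before(x1, x2, x3, x4):
    # (x1 + x3 + x4)(x1 + ~x2 + x3)(x1 + ~x2 + ~x3 + x4)
    return (x1 or x3 or x4) and (x1 or not x2 or x3) and \
           (x1 or not x2 or not x3 or x4)

def f_after(x1, x2, x3, x4):
    # same, with the extra absorbed term (x1 + ~x2 + x3 + x4)
    return f_before(x1, x2, x3, x4) and (x1 or not x2 or x3 or x4)

same = all(f_before(*v) == f_after(*v)
           for v in product([False, True], repeat=4))
```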
H: Generalized Heron's formula for n-dimensional "n-angle" instead of "triangle" Is there a generalized version of Heron's formula for calculating the equivalent of a "volume" of an n-dimensional "n-angle" based on the lengths of its sides? I've seen the equivalent formula for a tetrahedron, but I'd like to keep extending the shape by adding an extra point that connects to all existing points in the next dimension. Does that make sense? AI: Yes. See the Cayley-Menger determinant.
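As a sketch of how that determinant is used (helper names are mine; the convention is the standard one): for $n$ points with squared pairwise distances $d_{ij}^2$, the squared $(n-1)$-volume of the simplex they span is $\frac{(-1)^n}{2^{n-1}\,((n-1)!)^2}$ times the Cayley-Menger determinant. For three points this reduces to Heron's formula.

```python
from fractions import Fraction
from math import factorial

def det(m):
    # cofactor expansion along the first row (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def simplex_volume_sq(d2):
    # d2[i][k] = squared distance between points i and k (n points)
    n = len(d2)
    j = n - 1  # dimension of the simplex spanned by the points
    cm = [[0] + [1] * n] + \
         [[1] + [d2[i][k] for k in range(n)] for i in range(n)]
    return Fraction((-1) ** (j + 1), 2 ** j * factorial(j) ** 2) * det(cm)

# 3-4-5 right triangle: Heron's formula gives area 6, so squared area 36
tri = [[0, 9, 16], [9, 0, 25], [16, 25, 0]]
```

The same function handles a tetrahedron (squared distances in a $4\times 4$ table) and, in principle, any higher-dimensional simplex.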
H: How to solve $2^x = 36$ I need to solve $\log$ of $36$ in base $2$. The logarithm result $= x$: $$ \log_2 36 = x. $$ How do I determine the value of $x$ in $$ 2^x=36\,? $$ I don't know how to do it, even though this number is a perfect square. AI: With a calculator, you can simply calculate: $$x= \log_2 36 = \log 36 / \log 2$$ Without a calculator, you know that $x$ must be a little over $5$, since $2^5=32$. Now: $$2^x = 2^{x-5}2^5 = 36 \to 2^{x-5}=36/32=1+1/8$$ Using the fact that for small $x$, $\log_b (1+x)\approx x/ \ln b$: $$(x-5)=\log_2(1+1/8)\approx \frac{1}{8\ln 2}$$ $$x\approx 5+\frac{1}{8\ln 2}\approx5+\frac{1}{5.6}$$
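A quick check of both the exact value and the hand estimate (a sketch): $\log_2 36\approx 5.1699$, and the estimate $5+\frac{1}{8\ln 2}$ is within about $0.011$ of it.

```python
import math

x = math.log(36, 2)                 # exact: log2(36), about 5.1699
approx = 5 + 1 / (8 * math.log(2))  # the hand estimate 5 + 1/(8 ln 2)
# 2**x recovers 36, and the estimate agrees to about two decimal places
```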
H: How to calculate $\,(a-b)\bmod n\,$ and $ {-}b \bmod n$ Consider the following expression: (a - b) mod N Which of the following is equivalent to the above expression? 1) ((a mod N) + (-b mod N)) mod N 2) ((a mod N) - (b mod N)) mod N Also, how is (-b mod N) calculated, i.e., how is the mod of a negative number calculated? Thanks. AI: It's calculated exactly like the mod of a positive number. In arithmetic modulo $c$, we seek to express any $x$ as $x = qc+r$, where $r$ must be an integer with $0 \le r < c$. Why don't we test it out with an example? Take $-100 \bmod 8 = 4$. This is because $8 \cdot (-13) = -104$, leaving remainder $4$. So now let's take $(37-54) \bmod 5$. It's equal to $-17 \bmod 5 = 3$. Substituting in: method $1$ gives $(2+1) \bmod 5 = 3$, and method $2$ gives $(2-4) \bmod 5 = -2 \bmod 5 = 3$ as well. So the two expressions are in fact equivalent, as long as the outer mod is taken and "mod" always returns the non-negative remainder. The practical pitfall is that in many programming languages the % operator can return a negative result for negative operands, in which case an extra $+N$ may be needed at the end.
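In Python, `%` already returns the non-negative remainder for a positive modulus, so all three forms can be checked directly (a sketch):

```python
a, b, N = 37, 54, 5

direct  = (a - b) % N               # (37 - 54) mod 5 = -17 mod 5 = 3
method1 = ((a % N) + (-b % N)) % N  # (2 + 1) mod 5 = 3
method2 = ((a % N) - (b % N)) % N   # (2 - 4) mod 5 = -2 mod 5 = 3

negative_example = -100 % 8         # 4, since -100 = 8 * (-13) + 4
```

In C or Java, by contrast, `%` truncates toward zero, so `-2 % 5` evaluates to `-2` there; that is where the final `+ N` correction becomes necessary.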
H: Pick 9 balls from piles of different balls How many ways are there to pick nine balls from large piles of (identical) red, white, and blue balls plus one pink ball, one lavender ball, and one tan ball? What is correct answer? Is it ${11\choose9} + {{10\choose8} * 3}$? AI: ${{3}\choose{0}}{{11}\choose{9}}+{{3}\choose{1}}{{10}\choose{8}}+{{3}\choose{2}}{{9}\choose{7}}+{{3}\choose{3}}{{8}\choose{6}}$, for the ways with $0$, $1$, $2$, and $3$ balls, respectively, taken from among the pink, lavender, and tan balls. The idea behind ${{11}\choose{9}}$ is that to choose $9$ balls from the red, white, and blue piles, permute $9$ identical slots and $2$ identical separators; the separators split the slots into three groups, which are filled with red, white, and blue balls respectively.
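The closed form can be cross-checked by brute-force enumeration (a sketch): both should give $55+135+108+28=326$.

```python
from math import comb

# choose k of the three distinct balls, then 9 - k from the three colors
closed_form = sum(comb(3, k) * comb(11 - k, 9 - k) for k in range(4))

# enumerate r + w + b + p + l + t = 9 with p, l, t in {0, 1} and r, w, b >= 0
brute = sum(1
            for p in (0, 1) for l in (0, 1) for t in (0, 1)
            for r in range(10) for w in range(10)
            if 9 - p - l - t - r - w >= 0)
```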