H: Littlewood's 1914 proof relating to Skewes' number
From Littlewood's 1914 theorem (paraphrase):
I propose to show there are arbitrarily large values of x for which successively
$\psi(x) - x < - K\sqrt{x}\log\log\log x \tag{A}$
$ \psi(x)- x > K \sqrt{x}\log\log\log x \tag{B}$
[...2 pages later]
It suffices then, to establish (A), (B), to show that a supposition such that
$\psi(x) -1 < \delta\sqrt{x}\log\log\log x \text{ for all } \delta >0 \text{ and } x > x_0(\delta),\tag{2}$
contradicts our [auxiliary lemma which seems to establish A, B].
My threshold question is whether (2) shouldn't read:
$|\psi(x) - x | < \delta \sqrt{x} \log\log\log x$?
Even if we amend (2) to
$\psi(x) - x < \delta \sqrt{x}\log\log\log x$
for a contradiction we could still have B but not A.
On the other hand assuming typos is a last resort when parsing a proof. It is good to see the overall strategy of the proof before working through it and (2) makes no sense to me. There is a brief account of the proof in the Wiki page (bottom) on Skewes' number but I think it casts no light on this question.
If someone happens to be familiar with the argument of the proof and can see how (2) as written (or otherwise) fits into it I would appreciate any help. Thank you. Unfortunately I do not see a link to the proof online and am using a borrowed hard copy.
AI: It does look like "$\psi(x) -1$" should be "$\psi(x) -x$". Since $\psi(x) \approx x$, the quantity $\psi(x) - 1$ is still $\approx x$, which dwarfs $\delta\sqrt{x}\log\log\log x$ for large $x$, so (2) as written could never hold. Do you have a link to Littlewood's paper?
|
H: Prove $\sqrt{k}$ is not a rational number.
Suppose $k>1$ is an integer, and k is not a square number, then $\sqrt{k}$ is not a rational number.
Proof:
Let $\sqrt{k}=\frac{p}{q}$, and $(p,q)=1$. So $q^2|p^2$, $p\neq 1$, $k$ is not an integer. When $q=p=1$, and $k>1$.
And $q=1$, then $\sqrt{k}=p$, and $k$ is a square number.
I thought it would be easy... Where is it wrong?
How should I do it?
AI: In short: I think you started correctly, but made some odd jumps. I'm going to offer some critique of the proof--please don't take anything here personally. :)
A proof is more than just a string of symbols--rather, it must be a clearly (yet concisely) written work that allows the reader a peek inside your mind when you proved the proposition.
So, I would suggest:
Whenever you start a proof by contradiction, make sure to denote it.
Always "introduce" your variables. Are you assuming $k$ is an integer? A complex number? What about $p$ and $q$?
Don't use notation unless it really helps. Sometimes it's easier to say (and nearly always easier to read) "$p$ and $q$ coprime," rather than "$(p, q) = 1$".
Show your algebra, or at least mention that you're doing some. Don't make me think about why $\sqrt{k} = \frac{p}{q}$ implies $q^2|p^2$--show me from the definition of divides.
So, a start to the proof could be:
Proof:
Assume the proposition is false. That is, there exists $k \in \Bbb{Z}^+$ with $k>1$ such that $k$ is not a perfect square and $\sqrt{k}$ is rational.
As $\sqrt{k}$ is rational, there exist coprime integers $p$ and $q$ ($q\ne0$) such that:
$$\sqrt{k} = \frac{p}{q}$$
Rearranging the above, we find that:
$$p = q\sqrt{k}$$
It follows:
$$p^2 = q^2k$$
...
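A quick brute-force sanity check of the statement (a sketch in Python, not a proof; the search bounds are arbitrary):

    from fractions import Fraction
    from math import isqrt

    # For each non-square k <= 30, no fraction p/q with 1 <= p, q <= 100
    # squares to k; perfect squares are skipped.
    for k in range(2, 31):
        if isqrt(k) ** 2 == k:
            continue
        hits = [(p, q) for q in range(1, 101) for p in range(1, 101)
                if Fraction(p, q) ** 2 == k]
        assert not hits, (k, hits)
    print("no rational square roots found")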
|
H: Coin Flip: "Exactly" and "At Most"
A coin is flipped $10$ times. How many outcomes have exactly three heads? How many outcomes have at most three heads?
AI: Imagine writing down a sequence of length $10$, made up of the letters H and/or T, to indicate what happened on your tosses.
Exactly $3$ heads happened precisely if the sequence has exactly $3$ H (and therefore $7$ T).
There are $\binom{10}{3}$ ways to choose where the $3$ H will go, so there are $\binom{10}{3}$ such sequences.
For at most $3$ heads, do the same sort of thing for $0$ H, $1$ H, $2$ H, and $3$ H, and add up.
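A quick numeric check of both counts (a sketch using Python's standard library):

    from math import comb

    exactly_three = comb(10, 3)                         # choose where the 3 H go
    at_most_three = sum(comb(10, k) for k in range(4))  # 0, 1, 2 or 3 heads
    print(exactly_three)  # 120
    print(at_most_three)  # 1 + 10 + 45 + 120 = 176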
|
H: At least one member of a pythagorean triple is even
I am required to prove that if $a$, $b$, and $c$ are integers such that $a^2 + b^2 = c^2$, then at least one of $a$ and $b$ is even. A hint has been provided to use contradiction.
I reasoned as follows, but drew a blank in no time:
Let us instead assume that both $a$ and $b$ are odd. This means that $a^2$ and $b^2$ are odd, which means that $c^2$ is even. Thus, $c$ is even. But this is not a contradiction -- it's the sum of two odd numbers, after all.
AI: Suppose that $a$ and $b$ are both odd, and proceed as you did to conclude that $c$ is even. As $a$ is odd, we may write $a = 2k + 1$ for some $k$; similarly, $b = 2m + 1$. As $c$ is even, write $c = 2n$. This leads to
$$(2k + 1)^2 + (2m + 1)^2 = (2n)^2$$
or upon expanding and regrouping,
$$4(k^2 + k + m^2 + m) + 2 = 4n^2$$
Now the right side is divisible by 4, as is the first term on the left - but $2$ is not divisible by $4$. Do you now see how to derive a contradiction?
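If it helps to convince yourself of the claim before finishing the proof, here is a brute-force check (a sketch; the bounds are arbitrary):

    # No Pythagorean triple with both legs odd in this range.
    odd_odd = [(a, b, c)
               for a in range(1, 100) for b in range(1, 100)
               for c in range(1, 200)
               if a % 2 == 1 and b % 2 == 1 and a*a + b*b == c*c]
    print(odd_odd)  # expected: []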
|
H: some integral and series whose value is $1$.
Give me some integral and series whose value is $1$.
Where can I find a large number of these kinds of examples.
I have two examples here, but I cannot think up more...
This is a geometric series, which we learned in high school and before.
$$\begin{align*}\sum _{i=1}^{\infty } 2^{-i}=1\end{align*}$$
This is from the Central Limit Theorem.
$$\begin{align*}\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{\infty } e^{\frac{-x^2}{2}} \, dx=1\end{align*}$$
I'd like some more historical /meaningful/natural/interesting examples.
Something like some series whose value is $\pi /4$ or $\pi$,
$$\begin{align*}\frac{1}{1}-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}-\text{...}=\frac{\pi }{4}\end{align*}$$
Historically, we sometimes first obtained special cases with special values, and then continued developing theories.
Here my question is about sums equal to $1$; the series summing to $\pi/4$ is only an example to show the historical meaning.
AI: Take your favorite convergent series $$\sum a_n=A\neq 0$$ Define $$a_n^\prime=\frac{a_n}A$$
The same for the integral. This procedure is similar to what we usually know as normalization.
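For instance (a numeric sketch of the normalization trick, using $\sum 1/n^2=\pi^2/6$):

    import math

    # Dividing each term of sum 1/n^2 = pi^2/6 by A = pi^2/6
    # gives a series whose value is 1.
    A = math.pi ** 2 / 6
    print(sum((1 / n**2) / A for n in range(1, 10**6)))  # ~0.999999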
|
H: Every bounded function has an inflection point?
Hello from a first time user! I'm working through a problem set that's mostly about using the first and second derivatives to sketch curves, and a question occurred to me:
Let $f(x)$ be a function that is twice differentiable everywhere and whose domain is $ \Bbb R$. If $f(x)$ is bounded, then must $f$ have at least one inflection point?
The answer is clearly yes, but I can't think of how to prove it. (I thought of using the converse of the MVT, but Google revealed that the converse is untrue.) In the case where $f''(x)$ is always positive, for example, the three possibilities for $f(x)$ are nicely illustrated by considering the three branches of $g(x)$ (disregarding the vertical asymptotes), $g(x)=\left |\frac {1}{(x-a)(x-b)}\right |$, where $a < b$.
$(-\infty, a)$: $g'(x)$ is bounded below, unbounded above, and $g(x)$ is unbounded above
$(a, b)$ : $g'(x)$ is unbounded below and above, and $g(x)$ is unbounded above
$(b, \infty)$: $g'(x)$ is unbounded below, bounded above, and $g(x)$ is unbounded above.
In all three cases, $g''(x) > 0$ implies that $g(x)$ is unbounded.
Is all this right? How can we prove that the answer to the above question about $f(x)$ is true?
AI: Suppose without loss of generality that $f''(x)>0$ for all $x\in\Bbb R$. If there is some $a\in\Bbb R$ such that $f\,'(a)>0$, you have $f\,'(x)\ge f\,'(a)>0$ for all $x\ge a$, and $\lim_{x\to\infty}f(x)=\infty$. If $f\,'(x)\le 0$ for all $x\in\Bbb R$, then in fact $f\,'(x)<0$ for all $x\in\Bbb R$. Let $g(x)=f(-x)$, and observe that $\lim_{x\to\infty}g(x)=\infty$ by the first part, so $\lim_{x\to-\infty}f(x)=\infty$.
|
H: What is the motivation for differential forms?
I am that point in my mathematical career where I am learning differential forms. I am reading from M.Spivak's Calculus on Manifolds. So far I have gone over the tensor and wedge products and their properties, defined forms, learned of their pullbacks and the properties of these pullbacks, and defined the differential operator while learning some of its properties. I am currently reading about exact/closed forms in the build up to a certain "Poincare Lemma".
While the theory all seems to be fitting together (albeit with a bit of effort), there has been a nagging question. What is the motivation here? It has been my experience that many mathematical constructions (that I have encountered at least) are done with the goal of better understanding something. I feel like this thing is missing from my understanding of differential forms. Any insight will be appreciated.
AI: The most obvious uses of differential forms are related to integration. They are the language in which we express Stokes' theorem, for instance: whenever you have a compact, orientable manifold $M^n$ with boundary, the integral of an $(n-1)$-form $\omega$ over $\partial M$ equals the integral of $\mathrm{d}\omega$ over $M$ (in particular, the integral of an exact form over a closed manifold is always zero, as is the integral of a closed form over the boundary).
That is not all, of course. For example, closed/exact forms you mentioned give the de Rham cohomology, an important topological invariant. There's more, but for that you'll have to dig in a bit deeper.
|
H: Largest square written as $p^2+pq+q^2$ where $p, q$ are primes?
I got this problem from the website Brilliant, but I have doubts about the solution presented there:
$(p+q)^2-k^2=pq$
$(p+q+k)(p+q-k)=pq$
Now either $(p+q+k)=p$ and $(p+q-k)=q$ (which doesn't work), or $(p+q+k)=pq$ and $(p+q-k)=1$. Solving the second system of equations, we get the only solution of $(3, 5)$. I really don't think this statement is true. What if for example $(p+q-k)=(pq)^{1/4}$ and $(p+q+k)=(pq)^{3/4}$? If my doubts are correct, what is the real way to solve this question? Thanks.
AI: Since $p$ and $q$ are primes, $(pq)^{1/4}$ is not an integer. The fundamental theorem of arithmetic (or unique prime factorization theorem) immediately implies that the only factorizations of $pq$ in positive integers are $p\cdot q$ and $1\cdot(pq)$, so the argument is correct.
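A quick computational search consistent with this (a sketch assuming sympy for the prime range; the bound $500$ is arbitrary):

    from math import isqrt
    from sympy import primerange

    # Search for primes p, q with p^2 + pq + q^2 a perfect square.
    hits = set()
    primes = list(primerange(2, 500))
    for p in primes:
        for q in primes:
            s = p*p + p*q + q*q
            r = isqrt(s)
            if r * r == s:
                hits.add(tuple(sorted((p, q))))
    print(hits)  # {(3, 5)}, since 9 + 15 + 25 = 49 = 7^2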
|
H: Integral extensions of rings, when one of the rings is a field
The following is from page 61 of Introduction to Commutative Algebra by Atiyah & Macdonald:
Proposition 5.7. Let $A\subseteq B$ be integral domains, with $B$ integral over $A$. Then $B$ is a field if and only if $A$ is a field.
I am curious what happens if we drop the hypothesis of "integral domains". It will probably be false (since the hypothesis plays a role in the proof). Thus my question is:
What would be example of $A\subseteq B$ commutative rings such that
$B$ is integral over $A$ and
a) $A$ is a field, but $B$ is not a field.
b) $B$ is a field, but $A$ is not a field.
Let me say a word about a). I think the situation a) is actually impossible. Because it seems to me that in the proof of this direction, only $A$ being integral domain is used. Thus, if $A$ is a field (in particular integral domain), then $B$ must also be a field. Edit: As Ted shows, I was wrong. The situation (a) is possible.
Thanks for your time. :)
AI: If $A \subset B$ and $B$ is a field, then $A$ must at least be an integral domain (subrings of fields are integral domains). So (b) is impossible.
(a) is possible, though. For example, if $A$ is a field then $B = A[x]/(x^2)$ is a ring which is a finitely generated $A$-module and therefore integral over $A$. But $B$ is not a field.
|
H: Is there a proof that no lower bound exists for the totient function?
I read here that there is no lower bound for the totient function. Is there a proof of that?
AI: If you read carefully, the article says not that there is no lower bound, but that there is no linear lower bound — that is, there's no $c$ (and $n_0$) such that for all $n\gt n_0$, $\phi(n) \geq cn$.
There is a relatively straightforward proof of this: consider primorials, that is, numbers of the form $n=2\cdot 3\cdot 5\cdot\ldots\cdot p_k$. Then for these $n$, $\displaystyle\frac{\phi(n)}{n} = \prod_{i=1}^k\left(1-\frac1{p_i}\right)$. This product can be shown to 'diverge' to zero as $i$ goes to $\infty$, essentially because the sum $\sum_i\frac{1}{p_i}$ diverges: $\ln\left(\prod(1-\frac1{p_i})\right) = \sum\ln(1-\frac1{p_i}) = \sum\left(-\frac1{p_i}+O(\frac1{p_i^2})\right)$, and since the latter sum goes to $-\infty$, then the logarithm must go to $-\infty$ likewise, and thus the product goes to $0$.
The product's divergence to zero thus means that for any $\epsilon$ we can find some $n_0$ (a sufficiently large primorial) such that $\frac{\phi(n_0)}{n_0}\lt\epsilon$, or in other words such that $\phi(n_0)\lt\epsilon\cdot n_0$.
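Here is the primorial computation in code (a sketch assuming sympy's primerange):

    from sympy import primerange

    # phi(n)/n for primorials n = 2*3*5*...*p: the ratio prod(1 - 1/p)
    # keeps shrinking, so phi(n) >= c*n fails for every fixed c > 0.
    n, ratio = 1, 1.0
    for p in primerange(2, 60):
        n *= p
        ratio *= 1 - 1 / p
        print(p, n, ratio)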
|
H: Combination problem question.
I am working on a combination problem and I need to check if I'm doing this right.
There is a deck that consists of 20 cards. There are four different colors: 2 Green, 6 Yellow, 8 Black and 4 Red cards. The cards of each color are numbered, so they are distinguishable. If one draws 4 cards from this deck without replacement, how many different hands of 4 cards can one make, given that exactly two cards are black?
This is my claim.
Two cards are black, so there are 8*7 ways of having them.
The rest have to be non-black, so there are 12*11 ways of having them.
Since we don't care about the order of how we have the four cards, we divide the permutations by the number of order, namely, 4!.
So I think the ans is $\frac{8*7*12*11}{4!}$.
However, a book that contains this problem says that instead of dividing the bottom by 4!, it is divided by 4.
I think it is saying that there are 2! ways of ordering the black cards and 2! ways of ordering the non-black cards, but I don't think their claim is right, because it ignores the cases where Black-NonBlack-Black-NonBlack is treated as a different permutation.
Can someone confirm which argument is correct ?
AI: We are not ordering the cards, we are counting the number of pairs of black cards, and the number of pairs of non-black cards.
I would say that there are $\binom{8}{2}$ "tiny hands" of $2$ black cards, and $\binom{12}{2}$ tiny hands of $2$ non-black, for a total of $\binom{8}{2}\binom{12}{2}$ full hands of $4$.
Or else imagine picking the $2$ black cards in order from the black bunch. This can be done in $(8)(7)$ ways. But this double-counts each tiny hand of two black cards, so the actual number is $\frac{8\cdot 7}{2}$. A similar calculation can be made with the non-black.
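A brute-force enumeration agreeing with this count (a quick sketch):

    from itertools import combinations
    from math import comb

    # 8 distinguishable black cards and 12 distinguishable non-black cards.
    deck = [('B', i) for i in range(8)] + [('N', i) for i in range(12)]
    hands = sum(1 for hand in combinations(deck, 4)
                if sum(1 for colour, _ in hand if colour == 'B') == 2)
    print(hands, comb(8, 2) * comb(12, 2))  # both 1848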
|
H: $p\nmid 2n-1,$ then $\sum_{k=1}^{p-1}\frac{1}{k^{2n-1}}\equiv 0 \pmod{p^3} \Leftrightarrow \sum_{k=1}^{p-1}\frac{1}{k^{2n}}\equiv 0 \pmod{p^2} $
Is it true that if $p$ is a prime and $p\nmid 2n-1,$ then
$$\sum_{k=1}^{p-1}\frac{1}{k^{2n-1}}\equiv 0 \pmod{p^3}
\hspace{12pt}\Leftrightarrow \hspace{12pt}
\sum_{k=1}^{p-1}\frac{1}{k^{2n}}\equiv 0 \pmod{p^2} $$
I found some examples: $(2n-1,p)=(3,37),(7,67),(7,877),(9,5),(13,7),(13,59),(13,607)$.
AI: We show something stronger, namely that
$$
2\sum_{k=1}^{p-1}\frac{1}{k^{2n-1}}+(2n-1)\cdot p\cdot\sum_{k=1}^{p-1}\frac{1}{k^{2n}}\equiv 0\mod{p^3}.
$$
Let $n\geq 1$ be a fixed integer. If we compute the difference
$$
\frac{1}{(1+x)^{2n-1}}-\left(1-(2n-1)x+n(2n-1)x^2\right),
$$
we see that it is of the form
$$
\frac{f(x)}{(1+x)^{2n-1}},
$$
where $f(x)\in\mathbb{Z}[x]$ is some polynomial which is divisible by $x^3$. It follows that for any $x\in p\mathbb{Z}_{(p)}$,
$$
(\star)\,\,\,\,\,\,\,\,\,\,\frac{1}{(1+x)^{2n-1}}\equiv 1-(2n-1)x+n(2n-1)x^2\mod{p^3}.
$$
Now, we have
$$
\begin{eqnarray*}
\sum_{k=1}^{p-1}\frac{1}{k^{2n-1}}&=&\sum_{k=1}^{p-1}\frac{1}{(p-k)^{2n-1}}\\
&=&-\sum_{k=1}^{p-1}\frac{1}{k^{2n-1}}\frac{1}{\left(1-\frac{p}{k}\right)^{2n-1}}\\
&\equiv&-\sum_{k=1}^{p-1}\frac{1}{k^{2n-1}}-(2n-1)p\sum_{k=1}^{p-1}\frac{1}{k^{2n}}-n(2n-1)p^2\sum_{k=1}^{p-1}\frac{1}{k^{2n+1}}\mod{p^3},
\end{eqnarray*}
$$
where in the third line we have used $(\star)$ with $x=-\frac{p}{k}$.
For $p\geq 2n+3$,
$$
\sum_{k=1}^{p-1}\frac{1}{k^{2n+1}}\equiv 0\mod{p}
$$
(perhaps you are familiar with this fact?), so in this case we have
$$
2\sum_{k=1}^{p-1}\frac{1}{k^{2n-1}}+(2n-1)\cdot p\cdot\sum_{k=1}^{p-1}\frac{1}{k^{2n}}\equiv 0\mod{p^3},
$$
implying the fact you want.
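A numeric check of the stronger congruence for a few admissible pairs (a sketch; it requires Python 3.8+ for pow with negative exponents, and takes pairs with $p\ge 2n+3$):

    # 2*S(2n-1) + (2n-1)*p*S(2n) should vanish mod p^3,
    # where S(m) = sum_{k=1}^{p-1} k^(-m) computed modulo p^3.
    def S(m, p, mod):
        return sum(pow(k, -m, mod) for k in range(1, p)) % mod

    for p, n in [(37, 2), (67, 4), (59, 7)]:
        mod = p ** 3
        lhs = (2 * S(2*n - 1, p, mod) + (2*n - 1) * p * S(2*n, p, mod)) % mod
        print(p, n, lhs)  # expected: 0 in each case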
|
H: Interesting related rates question
A circle C in the xy-plane is described as follows: A point P on the circumference of C traces out the graph of $f(x) = \sqrt{x}$; the center of C is the y-intercept of the tangent line of $f(x)$ at P. If the center of C moves upwards along the y-axis at the rate of $\frac14$ centimeters per second, how fast is the area of C increasing when the center of the C is (0, 1)?
AI: The equation for the tangent line at $(x_0,\sqrt{x_0})$ is
$$y=\frac{1}{2\sqrt{x_0}}x+\frac{\sqrt{x_0}}{2}$$
The distance between $(0,\frac{\sqrt{x_0}}{2})$ and $(x_0,\sqrt{x_0})$ is the radius of the circle, and this is $\sqrt{x_0^2+\frac{x_0}{4}}$. This gives the area of the circle:
$$A(x_0)=\pi\left( x_0^2+\frac{x_0}{4}\right)$$
To put this in terms of the coordinate of the center of the circle, use $y=\frac{\sqrt{x_0}}{2}$:
$$A(y)=\pi\left(16y^4+y^2\right)$$
Differentiating this expression with respect to $t$ and plugging in $y=1$, $\frac{dy}{dt}=\frac14$ gives:
$$\frac{dA}{dt}=\pi\left(64y^3+2y\right)\frac{dy}{dt}=\pi\left(16+\frac{1}{2}\right)=\frac{33\pi}{2}$$
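The arithmetic can be verified symbolically (a sketch assuming sympy):

    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')(t)
    A = sp.pi * (16 * y**4 + y**2)
    # Substitute dy/dt = 1/4 and y = 1 into dA/dt.
    dA = sp.diff(A, t).subs(y.diff(t), sp.Rational(1, 4)).subs(y, 1)
    print(dA)  # 33*pi/2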
|
H: Factoring $a^2+b^2+c^2$?
Is it possible to factor $a^2+b^2+c^2$ ? If we make this into only two factors, I know it has to look like this:
$(a+b+c+\cdots)(a+b+c+\cdots)$. But I don't know how to get rid of the $2(ab+bc+ac)$ term, and I have no idea what else should go in the parentheses. How can you figure out how to factor this? Thanks.
AI: No. I will consider the case of factorizing over the complex numbers, or any sub-field (e.g. the real numbers). Most of my discussion will not really depend on the choice of field.
Any factorization (into polynomials) must be into polynomials of degree $1$, i.e.
$$ a^2 + b^2 + c^2 \overset?= (\alpha_1 a + \beta_1 b + \gamma_1 c + \delta_1)(\alpha_2 a + \beta_2 b + \gamma_2 c + \delta_2) $$
where $\alpha_1,\dots,\delta_2$ are complex numbers. If you don't mind doing some work, you can expand this out. You might as well rescale $\alpha_1 \leadsto \alpha_1' = 1$, $\beta_1 \leadsto \beta_1' = \beta_1 / \alpha_1$, $\gamma_1 \leadsto \gamma_1' = \gamma_1 / \alpha_1$, $\delta_1 \leadsto \delta_1' = \delta_1 / \alpha_1$, $\alpha_2 \leadsto \alpha_2' = \alpha_1\alpha_2$, ..., $\delta_2 \leadsto \delta_2' = \delta_2\alpha_2$. This lets you assume that $\alpha_1 = 1$, and you immediately conclude $\alpha_2 = 1$. Now looking at the coefficients of $ab$, you see that $\beta_1 = -\beta_2$, and on the other hand the coefficient of $b^2$ gives $\beta_1\beta_2 = 1$; from these equations you can conclude that $\beta_1 = \pm i$, and $\beta_2$ is its negation. A similar argument looking at the coefficients of $ac$ and $c^2$ gives $\gamma_1 = \pm i$, and $\gamma_2$ is its negation. But this leads to a contradiction when looking at the coefficient of $bc$.
Let me mention an important trick when looking for factorizations of highly symmetric things. It is a (nontrivial) theorem that factorization of polynomials (in any number of variables) is unique in the following sense: if you have two factorizations of the same polynomial, such that each factor itself has no factors, then the two factorizations agree up to reordering the list and multiplying terms on the list by nonzero (complex, say) numbers.
So, let's assume we have a factorization, say the one above. We can try to generate other factorizations. Well, the left hand side is real, in the sense of being invariant under complex conjugation. Thus the complex conjugate of the right hand side is also a factorization. It follows that we have either one of two possibilities:
Possibility 1: The lists are not reordered. Therefore, there is a nonzero complex number $\eta$ such that $\bar\alpha_1 = \eta \alpha_1$, $\bar \beta_1 = \eta\beta_1$, $\bar \gamma_1 = \eta\gamma_1$, $\bar \delta_1 = \eta\delta_1$, $\bar\alpha_2 = \eta^{-1} \alpha_2$, $\bar \beta_2 = \eta^{-1}\beta_2$, $\bar \gamma_2 = \eta^{-1}\gamma_2$, $\bar \delta_2 = \eta^{-1}\delta_2$. It follows that $\bar\eta = \eta^{-1}$. Upon multiplying the first factor by $\sqrt{\eta}$ and the second by $\sqrt{\eta}^{-1}$, we can assume without loss of generality that $\eta = 1$. But then both factors are real, and this is impossible: the LHS has no real zeros except for $a = b = c = 0$, whereas each factor on the right has a two-dimensional space worth of zeros.
Possibility 2: Under complex conjugation, the two terms switch places. Again, we can multiply by a constant, and thereby assume that $\alpha_2 = \bar \alpha_1$, $\beta_2 = \bar \beta_1$, etc.
What other symmetries are there? Well, the left-hand side is invariant under each of $a \mapsto -a$, $b \mapsto -b$, and $c\mapsto -c$. Up to some casework, this gives linear relations between the coefficients. The left-hand side is also invariant under switching $a$ with $b$, say.
Using all of these symmetries dramatically reduces the possible space of factorizations, as any factorization must respect all symmetries up to some scalar factors.
Here's one final way to think about factorization problems. Let's fix a non-zero value for $c$, and consider the space of solutions $(a,b)$ to $0 = a^2 + b^2 + c^2$ ($a$ and $b$ may be complex numbers). If $c^2$ were negative and we were looking for real solutions, this would be a circle.
Your ability to visualize in $\mathbb C^2$ may not be great, but I assure you that the space of solutions is still something like a circle. In particular, it is not a linear subspace of $\mathbb C^2$.
On the other hand, the space of solutions $(a,b)$ to any equation of the form $0 = p(a,b)q(a,b)$ where $p,q$ are degree-$1$ is a union of two (usually intersecting) linear subspaces in $\mathbb C^2$.
This type of argument also shows that $a^2 + b^2 + c^2$ has no factorizations over the complex numbers.
Finally, a commenter suggested looking at the quaternions. The quaternions have lots of behaviors that are different from commutative fields like $\mathbb C$. Indeed, over the quaternions, we have
$$ (ai + bj + ck)(-ai - bj - ck) = a^2 + b^2 + c^2 $$
provided $a,b,c$ are real (and hence commute with everything). If $a,b,c$ are allowed also to be quaternionic, then I don't see any convenient factorization, but I don't have a proof that it is impossible.
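As a quick symbolic sanity check (a sketch assuming sympy; passing extension=I asks for factorization over $\mathbb{Q}(i)$):

    import sympy as sp

    a, b, c = sp.symbols('a b c')
    expr = a**2 + b**2 + c**2
    print(sp.factor(expr))                  # stays a**2 + b**2 + c**2
    print(sp.factor(expr, extension=sp.I))  # still does not split
    # For contrast, a sum of two squares does factor over Q(i):
    print(sp.factor(a**2 + b**2, extension=sp.I))  # (a - I*b)*(a + I*b)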
|
H: correct understanding mathematical question
Suppose that we have the following question. This question is not about a mathematical confusion but about a language problem, so please help me clarify the English terms used in mathematics. The question is this:
Simon arrived at work at $8:15$ A.M. and left work at $10:30$ P.M. If Simon gets paid by the hour at a rate of $10$ dollar and time and ½ for any hours worked over $8$ in a day. How much did Simon get paid?
From the beginning I could not understand what the main trick in this question was, but I have only one issue: what exactly does "time and $½$" mean? As I understand it, during the first $8$ hours he is paid $10$ dollars per hour, but after $8$? Clearly from $4{:}15$ till $10{:}30$ we have $6$ hours and $15$ minutes, so what is the meaning of "time and $1/2$"? Please help me, because I am preparing for the GRE exam and would like to clarify every English trick.
AI: Time and a half means that each hour after $8$ hours counts as if it was $1\frac{1}{2}$ "regular" hours. In other words, for overtime (time in excess of $8$ hours) he/she gets paid $15$ dollars an hour.
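A worked computation under this reading: the shift from $8{:}15$ A.M. to $10{:}30$ P.M. lasts $14.25$ hours, so the pay is
$$\underbrace{8\times 10}_{\text{regular}}+\underbrace{6.25\times 15}_{\text{overtime}}=80+93.75=173.75\text{ dollars}.$$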
|
H: proof about the sum of lim sup
I have questions about the solution below.
I couldn't understand the red lines. What is $x_{n_{N_1}}$?
I'm not sure how it led to the contradiction.
Thank you!
Exercise 3: For any two real sequences $(a_n)$ and $(b_n)$, prove that
$$\limsup_{n\to\infty} (a_n + b_n) \le \limsup_{n\to\infty} a_n + \limsup_{n\to\infty} b_n,$$
provided the sum on the right is not of the form $\infty-\infty$.
Proof. Assume that $\limsup_{n\to\infty}a_n=L<\infty$ (the proof with $\limsup_{n\to\infty}b_n$ finite works similarly). Now, we break into three cases depending on whether $\limsup_{n\to\infty} b_n$ is finite, $\infty$, or $-\infty$.
If it's $-\infty$, we claim that both $\limsup_{n\to\infty}(a_n+b_n)$ and $\limsup_{n\to\infty}a_n + \limsup_{n\to\infty} b_n$ are $-\infty$. Note that it's clear that
$$\limsup_{n\to\infty} a_n + \limsup_{n\to\infty} b_n = L-\infty = -\infty,$$
so we'll just show that
$$\limsup_{n\to\infty}(a_n+b_n) = -\infty.$$
To see this, let $r$ be any real number. We must show $a_n+b_n<r$ for all sufficiently large $n$, for then it follows that any subsequence of $(a_n+b_n)$ eventually gets below $r$, and hence $\limsup_{n\to\infty}(a_n+b_n)\le r$. Doing this for all $r\in\Bbb R$ then shows that $\limsup_{n\to\infty}(a_n+b_n) = -\infty$.
To see that $a_n+b_n<r$ for sufficiently large $n$, notice first that since $\limsup_{n\to\infty}b_n=-\infty$, there must be an $N_1$ such that for all $n\ge N_1$, $b_n<r-|L|$. This is because otherwise, for each $N_1\in \Bbb N$, we could find an $x_{n_{N_1}}$ with $x_{n_{N_1}}\ge r$. Then the subsequence $(x_{n_{N_1}})$ would show that $\limsup_{n\to\infty}b_n \ge r$, a contradiction.
There must also be an $N_2$ such that for any $n \ge N_2$, $a_n < L$, since otherwise we'd find $L$ wasn't the limit superior of $(a_n)$. If we let $N = \max \{ N_1, N_2 \}$, then for any $n > N$, we have $a_n + b_n < L+ r-|L|\le r$. Thus, the claim is established and it follows that $\limsup_{n\to\infty}(a_n+b_n) = -\infty$.
Note: the original text can be found here: https://i.stack.imgur.com/sdkiM.png
AI: Well, that's just a shoddy proof. Don't trust every PDF you find on the Internet!
The author writes $x_{n_{N_1}}$ but they meant $b_{n_{N_1}}$. I don't know where the letter "x" came from. And do we need the triple subscripts?
Every instance of $\star < r$ or $\star < L$ should be $\star \leq r$ or $\star \leq L$. That's a huge mistake!
There's no need to take the absolute value of $L$. It just complicates matters.
It's wasteful to say "subsequence" and "eventually" in the same sentence.
So many double negations!
It's $\limsup a_n$, not $limsup a_n$.
"We claim [X]. … Thus, the claim is established and it follows that [X]." Oh, X follows from itself? How helpful!
Edit: In fact, replacing a few "$<$" with "$\leq$" still won't fix the proof. The first sentence of the last paragraph is wrong either way. It's entirely possible that $\limsup a_n = L$ but $a_n > L$ for all $n$. Just consider $a_n = 1/n$ and $L = 0$.
|
H: Isn't $(0)$ a prime ideal in a field?
I have read in multiple places that a field $K$ has a Krull dimension of $0$. How is this true? Isn't $(0)\subset K$ a prime ideal in $K$? Obviously $K$ is an integral domain.
Thanks in advance!
AI: The Krull dimension is defined to be the length of the longest chain of prime ideals. In other words, a ring $R$ is said to have Krull dimension $n$ if the longest possible chain of prime ideals
$$ \mathfrak{p}_{0}\subsetneq\mathfrak{p}_1\subsetneq\cdots\subsetneq\mathfrak{p}_n$$
has length $n$.
Since $(0)$ is the only prime ideal in a field (in fact the only proper ideal in a field), it follows that the Krull dimension of a field is $0$ (since there is no way to have $(0)\subsetneq\mathfrak{p}$ for another prime ideal $\mathfrak{p}$).
Added. In case it wasn't made clear, $K$ itself is not a prime ideal of $K$, because by definition prime ideals must be proper ideals.
|
H: Analytic function and connected region.
We have the result.
Let $G$ be an open connected set in $\mathbb{C}$, and let $f : G \rightarrow \mathbb{C}$ be an analytic function. Then the following statements are equivalent.
1. $f(z) = 0$ for all $z \in G$.
2. There is a point $a \in G$ s.t. $f^{(n)}(a) = 0$ for each $n \geq 0$.
3. $\{z \in G : f(z) = 0\}$ has a limit point in $G$
Can we say $\mathbb{R}$ is open connected set in $\mathbb{C}$? Why or why not?
If we do not consider $G$ as an open connected region, the result will lose its validity. Please give me a counterexample of a function for which this may happen.
Thank you for your kind suggestions.
AI: $\mathbb{R}$ is connected in $\mathbb{C}$. However, $\mathbb{R}$ is not open as a subset of $\mathbb{C}$. In general, any set in a metric space is open whenever it is a union of discs $D_r(a)=\{x:\,d(x,a)<r\}$.
Now, if $a\in\mathbb{R}$ there is no $r$ so that the complex disc $D_r(a)$ is a subset of $\mathbb{R}$.
That a function is analytic is a local property with respect to the topology of $\mathbb{C}$. This means that a function $f$ defined on a set $\Omega$ is analytic provided that for any $z\in\Omega$ there exists a disc $D_r(z)\subset\Omega$ such that $f$ is analytic on $D_r(z)$.
In particular, it is natural to assume the domain, $\Omega$, in the definition of analytic function is open.
|
H: On Decompositions of Finite Group
Any finite non-cyclic abelian group $G$ can be written as product $HK$ of two proper subgroups. Here $HK=\{ hk\colon h\in H, k\in K\}$. A step further, if $G$ is a finite group such that the commutator subgroup $[G,G]$ is proper subgroup of $G$, then $G$ has a decomposition $HK$ for some proper subgroups $H,K$, since
if $G/[G,G]$ is non-cyclic then we can pull back the decomposition for $G/[G,G]$;
if $G/[G,G]$ is cyclic, then we have the decomposition $G=HK$ where $H=[G,G]$, and $K=\langle x\rangle$ is a subgroup such that $G/[G,G]=\langle x[G,G]\rangle$.
The question I would like to ask is the natural one:
Q. Does every finite group admit a decomposition $G=HK$ where $H,K$ are proper subgroups?
By initial observations, it is sufficient to visit the question for groups $G$ such that $[G,G]=G$ (such groups are called perfect groups).
AI: If it were really true that all finite groups (other than cyclic $p$-groups) possessed a factorization, then it would suffice to prove this for non-abelian simple groups: if $G/N = (H/N) (K/N)$ then $G=HK$. A quick check reveals that if $G$ is a finite group with a top composition factor of order less than $1000$, then $G$ can be factored as $G=HK$. However, $G=\operatorname{PSL}(2,13)$ has no such factorization: the only possibilities for $H,K$ by order considerations are a Borel subgroup and a non-split torus. However, their intersection is always of size 2, which is too large, $|HK| = |G|/2$.
There are large families where there are never factorizations: finite simple groups of exceptional Lie types $E_6$, $E_7$, $E_8$ or twisted types ${}^2G_2$, ${}^3D_4$, ${}^2F_4$, and ${}^2E_6$ are not factorizable. This is shown in Hering–Liebeck–Saxl (1987).
Hering, Christoph; Liebeck, Martin W.; Saxl, Jan.
“The factorizations of the finite exceptional groups of Lie type.”
J. Algebra 106 (1987), no. 2, 517–527.
MR880974
DOI:10.1016/0021-8693(87)90013-5
|
H: Find correlation coefficient of $f(x,y)=2$ for $0<x\le y<1$
Find the correlation coefficient for the random variables $X$ and $Y$ having joint density $f(x,y)=2$ for $0 < x \leq y<1$.
Seems like a simple problem but I'm stuck.
Since $\mbox{Corr}(X,Y) = \frac{\mbox{Cov}(X,Y)}{\sqrt{\mbox{Var}(X)}\sqrt{\mbox{Var}(Y)}}$, I figure I start with $\mbox{Var}(X)$ and $\mbox{Var}(Y)$.
$E(X) = \int_0^1 \int_0^y 2xdxdy = \int_0^1y^2dy = \frac{1}{3}$
$E(X^2) = \int_0^1 \int_0^y 2x^2dxdy = \int_0^1\frac{2}{3}y^3dy = \frac{1}{6}$
$\mbox{Var}(X) = E(X^2) - E(X)^2 = \frac{1}{6} - \frac{1}{9} = \frac{1}{18}$
$E(Y) = \int_0^1 \int_x^1 2ydydx = \int_0^1 1-x^2 dx = \frac{2}{3}$
$E(Y^2) = \int_0^1 \int_x^1 2y^2dydx = \frac{2}{3}\int_0^1 1-x^3dx = \frac{1}{2}$
$\mbox{Var}(Y) = E(Y^2) - E(Y)^2 = \frac{1}{2} - \frac{4}{9} = \frac{1}{18}$
$\frac{1}{\sqrt{\mbox{Var}(X)}\sqrt{\mbox{Var}(Y)}} = \frac{1}{\frac{1}{18}} = 18$
It seems alright up to this point.
$\mbox{Cov}(X,Y) = E(XY) - E(X)E(Y)$
$E(XY) = \int_0^1 \int_0^y 2xy dxdy = \int_0^1 y^3 dy = \frac{1}{4}$
$\mbox{Cov}(X,Y) = \frac{1}{4} - (\frac{1}{18})^2$
And this is where I'm stuck.
$\mbox{Corr}(X,Y) = (\frac{1}{4} - (\frac{1}{18})^2) \times 18 = \frac{18}{4} - \frac{1}{18} > 1$
Wolfram says my calculation is correct, so this must be a concept problem.
The $0 < x \leq y<1$ is really throwing me off but Devore doesn't really provide a clear explanation.
Thanks for your hints.
AI: Your integrals look right. Note that $E(X)E(Y)=\frac13\cdot\frac23=\frac{2}{9}$, so the covariance calculation is not right.
The covariance is $\frac{1}{4}-\frac{2}{9}=\frac{1}{36}$, and hence the correlation is $18\cdot\frac{1}{36}=\frac12$.
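Everything can be double-checked symbolically (a sketch assuming sympy):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = 2  # joint density on 0 < x <= y < 1

    def E(g):
        return sp.integrate(sp.integrate(g * f, (x, 0, y)), (y, 0, 1))

    EX, EY = E(x), E(y)
    VarX, VarY = E(x**2) - EX**2, E(y**2) - EY**2
    corr = (E(x*y) - EX*EY) / sp.sqrt(VarX * VarY)
    print(EX, EY, VarX, VarY, corr)  # 1/3 2/3 1/18 1/18 1/2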
|
H: Continuous extension of a function
Can anybody help me with this problem?
Justify whether the following statement is true or false:
Every continuous function on $\Bbb Q\cap [0,1]$ can be extended to a continuous function on $[0,1]$.
Any help will be appreciated.
AI: See what happens if you take the function
$$f:\Bbb Q\cap[0,1]\to[0,1]:x\mapsto\begin{cases}
1,&\text{if }x>\frac{\sqrt2}2\\\\
0,&\text{if }x<\frac{\sqrt2}2\;.
\end{cases}$$
|
H: How to use binomial theorem
How can I use the binomial theorem to get from (1) to (2)?
$$\begin{align*}\left(1+\frac{1}{n}\right)^n\tag{1}\end{align*}$$
$$\begin{align*}t_n=1+1+\frac{1}{2!}\left(1-\frac{1}{n}\right)+\frac{1}{3!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)+\cdots+\frac{1}{n!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{n-1}{n}\right).\tag{2}\end{align*}$$
I can get the following (for $n=5$) by using $C_n^k$ or $\left(\begin{array}{c} n \\ k \end{array}\right)$
$$\begin{align*}1+\frac{1}{n^5}+\frac{5}{n^4}+\frac{10}{n^3}+\frac{10}{n^2}+\frac{5}{n}\end{align*}$$
AI: Hint:
$$\binom{n}{i}\left(\frac{1}{n}\right)^{i}=\frac{n(n-1)\cdots(n-i+1)}{i!}\cdot\left(\frac{1}{n}\right)^{i}=\frac{1}{i!}\cdot\frac{n}{n}\cdot\frac{n-1}{n}\cdots\frac{n-i+1}{n}\\=\frac{1}{i!}\cdot1\cdot\left(1-\frac{1}{n}\right)\cdot\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{i-1}{n}\right)$$
|
H: Derive a formula to solve a specific task
I have a specific problem.
I have 8 different variables a, b, c, d, e, f, g, h.
Each of these variables has a score from 1 to 5, where 1 is bad and 5 is good, so a max total of 40 and a min of 8.
Of these variables, I can influence the scores of 6: a, b, c, d, e, f.
I cannot influence the scores of 2: g, h.
I need to derive a formula which has the following approximate properties:
To represent a positive collective rating where every score is good, I want a score of ~100%
To represent a rating where the influenceable variables are bad, I want a score of ~0%
To represent the case where the influenceable variables can be improved but the variables out of my control are good, I want ~50%
To represent the case where the influenceable variables are good but the fixed variables are bad, I want ~100%
Or with less text:
influenceable = good & non-influencable = good => 100%
influenceable = good & non-influencable = bad => 80%
influenceable = bad & non-influencable = good => 40%
influenceable = bad & non-influencable = bad => 0%
Does anyone have an approach to this problem?
AI: Very roughly, the following is a reasonable start.
$\frac{35}{36}(\frac{a + b + c + d + e + f - 6}{30} + \frac{g + h - 2}{35})$.
At (5,5,5,5,5,5,5,5), this gives 1 (100%).
At (5,5,5,5,5,5,1,1), it gives .78 (roughly).
At (1,1,1,1,1,1,5,5), it gives .22 (roughly).
At (1,1,1,1,1,1,1,1), it gives 0.
It's hard to come up with a simple formula that gives ~80% when the first six are good and the last two bad, and ~40% for the reverse, since that would mean that making the first six good should bring the value up by .8 and making the last two good should bring it up by .4, giving a total of 1.2 for making them all good, unless another term is added to make the increase less.
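A direct implementation of the suggested formula (a sketch; the weights are the rough values above):

    def score(a, b, c, d, e, f, g, h):
        # a..f are influenceable, g and h are not; all scores in 1..5.
        influenceable = (a + b + c + d + e + f - 6) / 30
        fixed = (g + h - 2) / 35
        return 35 / 36 * (influenceable + fixed)

    print(score(5, 5, 5, 5, 5, 5, 5, 5))  # 1.0
    print(score(5, 5, 5, 5, 5, 5, 1, 1))  # ~0.78
    print(score(1, 1, 1, 1, 1, 1, 5, 5))  # ~0.22
    print(score(1, 1, 1, 1, 1, 1, 1, 1))  # 0.0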
|
H: Uncountable basis and separability
We know that a Hilbert space is separable if and only if it has a countable orthonormal basis.
What I want to ask is
If a Hilbert space has an uncountable orthonormal basis, does it mean that it is not separable? Or equivalently, does it imply that the Hilbert space does not have a countable basis?
I know that if a vector space has an infinite number of linearly independent vectors then it cannot have a finite (Hamel) basis. But here we do not deal with a Hamel basis but with a complete orthonormal set, so I cannot apply the usual techniques.
Any ideas?
AI: Here's a brute-force approach that doesn't mention other bases:
The open balls of radius $\frac{1}{2\sqrt2}$ around the orthonormal basis vectors are disjoint. A countable set can't intersect them all if there are uncountably many, so it isn't dense, and the space isn't separable.
|
H: Rational Solution of a System of Linear Equations
I am having a little trouble with this problem -
Let $A$ be an $m\times n$ matrix and $v$ be an $m\times 1$ matrix, both of which have only rational entries. It is known that the equation $Ax=v$ has a solution in $\mathbb R^n$. Does this imply that the given equation will have a solution with rational entries?
I think that the result is true, but I am unable to prove it. Thanks for any help.
AI: I would answer in the following steps:
From the fact that you know a solution exists, you have $v$ in the range of $A$, and $x$ can be assumed in the orthogonal complement of its null space.
Let $B$ be the matrix that describes the mapping induced by $A$ from the orthogonal complement of its null space to its range. $B$ is a square, non-singular matrix.
Cramer's rule can be applied to $B$, and it only involves addition, multiplication and division.
Thus, your solution will be rational.
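An illustration with exact arithmetic (a sketch assuming sympy; the matrix is a made-up consistent example):

    import sympy as sp

    # A rank-2 consistent system with rational entries; the exact solver
    # uses only field operations, so the solution is rational.
    A = sp.Matrix([[1, 2], [2, 4], [1, 0]])
    v = sp.Matrix([3, 6, 1])
    sol, params = A.gauss_jordan_solve(v)
    print(sol)  # Matrix([[1], [1]])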
|
H: Metric of the flat torus
I am studying the flat torus $T^n=\mathbb{R}^n/\mathbb{Z}^n$. I am interested in the metric and the connection used. Unfortunately, in the books I am reading those things aren't defined. Does anyone knows this definition or a reference where I can find it? Thanks in advance for the help.
AI: The standard metric on $\mathbb{R}^n$, namely $g = \sum_{i=1}^ndx^i\otimes dx^i$, is invariant under translations, i.e. for $f: \mathbb{R}^n \to \mathbb{R}^n$ of the form $f(x) = x + a$ where $a = (a^1, \dots, a^n) \in \mathbb{R}^n$, we have
\begin{align*}
f^*g &= f^*\left(\sum_{i=1}^ndx^i\otimes dx^i\right)\\
&= \sum_{i=1}^n f^*dx^i\otimes f^*dx^i\\
&= \sum_{i=1}^nd(x^i\circ f)\otimes d(x^i\circ f)\\
&= \sum_{i=1}^nd(x^i + a^i)\otimes d(x^i+a^i)\\
&= \sum_{i=1}^ndx^i\otimes dx^i\\
&= g.
\end{align*}
In particular, $g$ is invariant under translations by elements of the lattice $\mathbb{Z}^n$.
As $\pi : \mathbb{R}^n \to T^n$ is a smooth covering map, it is a local diffeomorphism. So given $q \in T^n$ and $p \in \pi^{-1}(q)$, $(\pi_*)_p : T_p\mathbb{R}^n \to T_qT^n$ is an isomorphism. Now for $v, w \in T_qT^n$, define
$$\hat{g}_q(v, w) := g_p((\pi_*)_p^{-1}(v), (\pi_*)_p^{-1}(w)).$$
To see this definition is well-defined (i.e. independent of the choice of $p$), note that if $p' \in \pi^{-1}(q)$, there is $a \in \mathbb{Z}^n$ such that $p' = p + a$. Letting $f : \mathbb{R}^n \to \mathbb{R}^n$ denote translation by $a$, we have $f(p) = p'$ and $\pi\circ f = \pi$, so $(\pi_*)_p = ((\pi\circ f)_*)_p = (\pi_*)_{f(p)}\circ(f_*)_p = (\pi_*)_{p'}\circ(f_*)_p$. Therefore,
\begin{align*}
g_{p'}((\pi_*)_{p'}^{-1}(v), (\pi_*)_{p'}^{-1}(w)) &= g_{f(p)}((f_*)_p\circ(f_*)_p^{-1}\circ(\pi_*)_{p'}^{-1}(v), (f_*)_p\circ(f_*)_p^{-1}\circ(\pi_*)_{p'}^{-1}(w))\\
&= g_{f(p)}((f_*)_p\circ((\pi_*)_{p'}\circ(f_*)_p)^{-1}(v), (f_*)_p\circ((\pi_*)_{p'}\circ(f_*)_p)^{-1}(w))\\
&= g_{f(p)}((f_*)_p\circ(\pi_*)_p^{-1}(v), (f_*)_p\circ(\pi_*)_p^{-1}(w))\\
&= (f^*g)_p((\pi_*)_p^{-1}(v), (\pi_*)_p^{-1}(w))\\
&= g_p((\pi_*)_p^{-1}(v), (\pi_*)_p^{-1}(w))\\
&= \hat{g}_q(v, w).
\end{align*}
So $\hat{g}$ is a well-defined Riemannian metric on $T^n$. By construction, $\pi^*\hat{g} = g$; that is, $\pi$ is a local isometry.
The exact same argument can be used to show that if $\pi : M \to N$ is a smooth covering map, and $g$ is a Riemannian metric on $M$ which is invariant under the deck transformations of $\pi$, then it descends to a Riemannian metric $\hat{g}$ on $N$, and $\pi^*\hat{g} = g$; that is, $\pi$ is a local isometry.
|
H: relationship between circumference and revolution
I would like to clarify two things with this problem: first, what is the relationship between circumference and revolutions, and also between revolutions and the distance traveled by a round object. Let us consider the following problem:
A tire on a car rotates at $500$ RPM (revolutions per minute) when the car is traveling at $50 $km/hr (kilometers per hour). What is the circumference of the tire, in meters?
As I know, the circumference represents the total distance around the outside. So for our case, first of all let us convert from hours to minutes: $50$ km is $50000$ meters, so per minute it is equal to $50000/60$ meters. But now, where does the revolution fact help us? From this link
http://www.algebra.com/algebra/homework/Trigonometry-basics/Trigonometry-basics.faq.question.110618.html
it seems that the distance in this case is equal to the circumference multiplied by the number of revolutions, so in our case it would be $50000/60$ divided by the number of revolutions. But how can I justify that the distance in this case is exactly $50000/60$ and not some other term? Do you see, I am confused about the relationship between revolutions, circumference and distance.
AI: For every $1 \text{ revolution}$, the tire will travel a distance equal to its circumference. Hence, we should look for a quantity whose units are $\text{metres}/\text{revolution}$. Using conversion factors, we obtain:
$$
\left(\dfrac{50 \text{ km}}{1 \text{ hr}}\right)
\left(\dfrac{1000 \text{ m}}{1 \text{ km}}\right)
\left(\dfrac{1\text{ hr}}{60 \text{ min}}\right)
\left(\dfrac{1\text{ min}}{500 \text{ revolutions}}\right)
=
\dfrac{5 \text{ m}}{3 \text{ revolutions}}
=
\dfrac{1.\overline{6} \text{ metres}}{1\text{ revolution}}
$$
So the circumference is:
$$
(1\text{ revolution})
\left(\dfrac{1.\overline{6} \text{ metres}}{1\text{ revolution}}\right)
=
1.\overline{6} \text{ metres}
$$
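The same conversion in a few lines of code (a quick sketch):

    # 50 km/h at 500 rev/min.
    metres_per_min = 50 * 1000 / 60        # ~833.33
    circumference = metres_per_min / 500   # metres per revolution
    print(circumference)                   # 1.666... metres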
|
H: Set theory problem, determining the min and max number of elements of $(B \cup A) \bigtriangleup (C \cap A)$
Given that $$ |A| = 5 \\ |B| = 6 \\ |C| = 7 \\ |\Omega| = 10 \\ A \subseteq (B \cup C) $$
Determine the min and max numbers of elements from $$(B \cup A) \bigtriangleup (C \cap A)$$
I tried to solve this by first simplifying the above expression:
$$[(B \cup A) \cap (C \cap A)^{c}] \cup [(B \cup A)^{c} \cap (C \cap A)] \\ [(B \cup A) \cap (C \cap A)^{c}] \cup [(B^{c} \cap A^{c}) \cap (C \cap A)] \\ [(B \cup A) \cap (C \cap A)^{c}] \cup \emptyset \\ (B \cup A) \cap (C \cap A)^{c} $$
Then I drew the Venn diagrams that would represent the max and min cases for $(B \cup A) \cap (C \cap A)^{c}$.
And according to this:
the minimum number of elements is 1
the maximum number of elements is 4
Question: Would this be correct? If so, is there any other faster way that can let me solve the exercise without drawing venn diagrams and trying to configure the elements for both cases?
Thanks so much in advance.
AI: Use the fact that $X\bigtriangleup Y=(X\cup Y)\setminus(X\cap Y)$, with $X=B\cup A$ and $Y=C\cap A$:
$$\begin{align*}
(B\cup A)\cup(C\cap A)&=(B\cup A\cup C)\cap(B\cup A\cup A)&&\text{distributivity}\\
&=(B\cup C)\cap(B\cup A)&&A\subseteq B\cup C\\
&=B\cup(C\cap A)&&\text{distributivity}
\end{align*}$$
and
$$\begin{align*}
(B\cup A)\cap(C\cap A)&=(B\cap C\cap A)\cup(A\cap C\cap A)&&\text{distributivity}\\
&=(B\cap C\cap A)\cup(C\cap A)\\
&=C\cap A\;,
\end{align*}$$
so
$$\begin{align*}
(B \cup A) \bigtriangleup (C \cap A)&=\Big(B\cup(C\cap A)\Big)\setminus(C\cap A)\\
&=B\setminus(C\cap A)\;.
\end{align*}$$
Corrected:
To make this large, you want to make $A\cap B\cap C$ as small as possible, and to make it small, you want to make $A\cap B\cap C$ as large as possible. If you put the $4$ elements of $\Omega\setminus B$ into both $A$ and $C$, you can arrange the rest so as to make $A\cap B\cap C=\varnothing$, and you certainly can’t make it any smaller! At the other extreme, if $A\subseteq B\cap C$, then $A\cap B\cap C=A$ and has cardinality $5$. Thus, the range of cardinalities for the symmetric difference is from $6-5=1$ to $6-0=6$.
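The two extreme configurations can be checked concretely (a sketch; the particular sets are one valid choice among many):

    def sym_diff_size(A, B, C):
        assert A <= (B | C)                # the hypothesis A ⊆ B ∪ C
        return len((B | A) ^ (C & A))      # |(B ∪ A) △ (C ∩ A)|

    # Max: A ∩ B ∩ C = ∅ (the 4 elements outside B go into both A and C).
    A, B, C = {0, 6, 7, 8, 9}, set(range(6)), {1, 2, 3, 6, 7, 8, 9}
    print(sym_diff_size(A, B, C))  # 6

    # Min: A ⊆ B ∩ C, so A ∩ B ∩ C = A.
    A, B, C = set(range(5)), set(range(6)), {0, 1, 2, 3, 4, 6, 7}
    print(sym_diff_size(A, B, C))  # 1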
|
H: reference for operator algebra
I am taking a course on operator algebra this semester. My instructor has suggested the reference "Kadison and Ringrose." Are there any other good/standard references for this subject that I can look up?
AI: The book by Kadison and Ringrose does not contain a number of modern topics (irrational rotation algebras, Cuntz algebras, K-theory etc.). I have used the following books for my lectures:
G.Murphy "$C^*$-algebras and operator theory"
and the
K.Davidson "$C^*$-algebras by example."
A nice introduction to K-theory of $C^*$-algebras with prerequisites on $C^*$-algebras is
N.E.Wegge-Olsen "K-theory and $C^*$-algebras. A friendly approach."
On the other hand I find the book by Kadison and Ringrose much easier to read.
The classical monograph by Dixmier can be used as encyclopedia of basic $C^*$-algebra theory.
|
H: Solving logarithmic equation
I'm having trouble solving this equation. I know there is a solution as my graphics calculator can solve it, but I want to see the steps on how to get the answer.
The mathematical equation is:
$$\log_{10}n = 0.07n$$
AI: Not all equations can be solved algebraically. This one has no elementary closed-form solution; you have to fall back on a numerical method (which is essentially what your calculator does), or express the solutions via the Lambert $W$ function.
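For instance (a sketch assuming scipy; the brackets come from checking the signs of $f$ at their endpoints):

    import math
    from scipy.optimize import brentq

    f = lambda n: math.log10(n) - 0.07 * n

    # f changes sign on (1, 2) and on (10, 20), so there are two roots.
    print(brentq(f, 1, 2))    # ~1.22
    print(brentq(f, 10, 20))  # ~17.9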
|
H: A question on divisible groups
Let $p$ be a prime and $H=\prod_{n=1}^{\infty}\mathbb Z(p^{n})$ ($\mathbb Z(p^{n})$ is the finite cyclic group of order $p^{n}$). Is $H/t(H)$ divisible ($t(H)$ denotes the maximal torsion subgroup of $H$)?
AI: Does $(1,1,1,\cdots)$ have a $p$th root modulo torsion?
If so, we have $(1,1,1,\cdots)+(a_1p^{e_1},a_2p^{e_2},a_3p^{e_3},\cdots)=p(b_1,b_2,b_3,\cdots)$ for various choices of integers $a_i,e_i,b_i$ (chosen so $a_i$ are all prime to $p$). The torsion condition implies $p\mid p^{e_{\large i}}$ eventually, in which case $1+a_ip^{e_i}=pb_i$ immediately yields a contradiction upon reducing mod $p$.
However it is clearly $q$-divisible for every prime $q\ne p$. In particular it is a ${\bf Z}_{(p)}$-module, where ${\bf Z}_{(p)}$ denotes the localization of $\bf Z$ at $(p)$. This is perhaps easiest seen by expanding the interpretation of the original group to be a product ring, and considering the maps ${\bf Z}_{(p)}\to {\bf Z}_{(p)}/p^n{\bf Z}_{(p)}\cong {\bf Z}/p^n{\bf Z}$.
|
H: Is there formula for this squared geometric (?) progression?
Is there a non-recursive formula for the following sequence:
$$a_1=\frac12,$$
$$a_n=\frac12a_{n-1}^2+\frac12$$
If there is, how do you suggest I can determine it?
AI: Making the change of variables
$$a_n=1-2x_n$$
the recursion relation transforms into
$$x_{n+1}=x_n(1-x_n).$$
This is a particular case of logistic map (with $r=1$). If it were solvable, I think it would be mentioned here along with $r=2,4$.
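The substitution is easy to check numerically (a quick sketch):

    # With x_n = (1 - a_n)/2, the iterations a -> a^2/2 + 1/2 and
    # x -> x(1 - x) stay in sync: a_n = 1 - 2*x_n at every step.
    a, x = 0.5, 0.25
    for _ in range(5):
        a = 0.5 * a * a + 0.5
        x = x * (1 - x)
        print(a, 1 - 2 * x)  # the two columns agree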
|
H: $E+F=E\oplus F \Leftarrow \bigcap$ of their bases $=∅$
I'm trying to understand this theorem:
I will translate it literally from my lecture notes:
Given $n\ge 1$ subspaces $E_1,E_2,...,E_n$ of a vector space $V$, and considering the subspace $F=E_1+E_2+...+E_n$: if $B_1,B_2,...,B_n$ are bases of $E_1,E_2,...,E_n$ and $B_i \cap B_j=∅$ for every $i,j$ with $i\neq j$, then $F$ is the direct sum of the subspaces $E_1,E_2,...,E_n$, and they are independent.
The question is:
How can this theorem be proved?
AI: It would indeed be a useful fact in exercises, if it were true. But it isn't.
Consider $E_1=\langle(1,0,0),(0,1,0)\rangle$ and $E_2=\langle(1,1,0),(-1,1,0)\rangle$ as subspaces of $\mathbb{R}^3$. It's easy to show that
$B_1=\{(1,0,0),(0,1,0)\}$ and $B_2=\{(1,1,0),(-1,1,0)\}$ are bases of $E_1$ and $E_2$ respectively. Also $B_1\cap B_2=\emptyset$.
However the sum of $E_1$ and $E_2$ is not direct, for the simple reason that $E_1=E_2$.
The simplest counterexample is, of course, $B_1=\{1\}$ and $B_2=\{2\}$, in $\mathbb{R}$.
|
H: Cumulative Normal Distribution.
Let $X_1,\ldots,X_n$ be a random sample from $f(X;\theta)=\phi_{\theta,25}$, that is, $X_1,\ldots,X_n$ are normally distributed with mean $\theta$ and variance $25$.
I am not understanding how
$$\sup_{\theta\leq17}\left[\phi\left(\frac{17+\frac{5}{\sqrt n}-\theta}{\frac{5}{\sqrt n}}\right)\right]=\phi(1)$$
?
AI: $\phi$ is unimodal, decreasing on the positive reals, and attains its maximum at $0$. Here, this implies that since $\theta \leq 17$, the argument is positive and decreasing in $\theta$: the overall function is thus increasing with respect to $\theta$, and therefore maximal at the upper boundary $\theta = 17$, where the argument equals $1$.
|
H: Sequence problem, find root
The equation $x^3-5x+1=0$ has a root in $(0,1)$. Using a proper sequence for which $$|a(n+1)-a(n)|\le c|a(n)-a(n-1)|$$ with $0<c<1$, find the root with an approximation of $10^{-4}$.
AI: You can attempt this: define a sequence $x_n$ by
$$
x_{n+1} = \frac 15\left(1 + x_n^3\right).
$$
We know that if $x_1 \in (0, 1)$, then $x_n \in (0, 1)$ for all $n$. Next, we show that it satisfies the said condition:
\begin{align*}
x_{n+1} - x_n & = \frac 15(1 + x_n^3 - 5x_n)\\
& = \frac 15\left(1 + x_n^3 - (1 + x_{n-1}^3)\right)\\
& = \frac 15\left(x_n^3 - x_{n-1}^3\right) \\
& = \frac 15\left(x_n - x_{n-1}\right)\left(x_n^2 + x_nx_{n-1} + x_{n-1}^2\right) \\
\therefore
\left|x_{n+1} - x_n\right|
& = \frac 15\left|\left(x_n - x_{n-1}\right)\left(x_n^2 + x_nx_{n-1} + x_{n-1}^2\right)\right| \\
& \le \frac 15\left|x_n - x_{n-1}\right| \cdot 3 \\
& \le \frac 35\left|x_n - x_{n-1}\right|.
\end{align*}
From this, we see that the sequence is Cauchy, hence convergent. Let $x$ be the limit of the sequence. Take the limit $n \to \infty$ in the first equation (which defines $x_{n+1}$ from $x_n$) to get
$$
x = \frac 15(1 + x^3).
$$
This means $x$ is a root of $f(x) = x^3 - 5x + 1$.
To compute $x$ numerically within a given error tolerance $\epsilon$, we need to find $x_N$ such that $|x_N - x| \le \epsilon$. However, we do not know $x$, so instead, we can use the following criterion to find $N$: for all $n > N$, $|x_{n} - x_N| < \epsilon$.
This criterion is sufficient because if the inequality holds for all $n > N$,
it will follow that $\lim_{n \to \infty} |x_n - x_N| = |x - x_N| \le \epsilon$.
One convenient way to guarantee $|x_n - x_N| < \epsilon$ for all $n > N$ is via the triangle inequality:
$$
|x_n - x_N| \le \sum_{k=N}^{n-1} \left|x_{k+1} - x_k\right|
\le \sum_{k=N}^\infty \left|x_{k+1} - x_k\right|
\le \frac{\left(\frac 35\right)^N}{1 - \frac 35} = \frac 52\left(\frac 35\right)^N.
$$
So, if we can find $N$ such that $\frac 52\left(\frac 35\right)^N < \epsilon$, we will get $|x_N - x| \le \epsilon$.
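Putting the scheme into code (a sketch; the stopping rule uses the tail bound $|x-x_N|\le |x_{N+1}-x_N|/(1-c)$ with $c=3/5$):

    # Fixed-point iteration x_{n+1} = (1 + x_n^3) / 5.
    c, eps, x = 0.6, 1e-4, 0.5
    while True:
        x_new = (1 + x**3) / 5
        if abs(x_new - x) < (1 - c) * eps:
            break
        x = x_new
    print(x_new)                     # ~0.20164
    print(x_new**3 - 5*x_new + 1)    # ~0, so this is close to the root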
|
H: Simplify this expression that came from integration
I was doing a calculation and arrived at the term $\left[P_{l-1}(\cos(\theta)) -P_{l+1}(\cos(\theta))\right]_{0}^{\pi}$ (so this is the result of an integration). Does anybody know how to simplify this expression? (Testing suggested to me that this one is either $0$ or $2$, but I did not get there.) $P_n$ is the $n$-th Legendre polynomial.
AI: Two things:
The Legendre polynomial $P_{l}(x)$ is either an even (for even $l$) or an odd (for odd $l$) function of $x$,
Its value at $x=1$ is $P_l(1)=1$ (this is a part of its definition).
I think you can get the answer from there.
|
H: Calculate angle in triangle having 2 points and two lines
I have 2 points $B$ and $P$ and need to calculate the angle $\alpha$ (maybe I will also need points $C$ and $E$).
How can I do this? I know that I can calculate point $D$: it's $(\frac{1}{2}(x_P-x_B), \frac{1}{2}(y_P-y_B))$. Then I can calculate the line that is perpendicular to $PB$ and goes through $D$, and then its intersections with the lines $x=x_B$ and $y=y_B$; if I have point $E$, I can calculate the angle from triangle $PDE$, but I need an exact formula to do this.
AI: First, assume $B=(0,0)$ (by translation). $A$ seems irrelevant to me, but if we have $C$, by symmetry the angle
$$90^\circ - \angle EBD = \angle CEB = \alpha$$
$$\angle EBD = \arccos(y_D/|\overline{BD}|)= \arccos(y_P/|\overline{PB}|)$$
and finally we have
$$\alpha = \arcsin\left(\frac{x_P}{\sqrt{x_P^2+y_P^2}}\right)$$
Translating back, we get
$$\alpha = \arcsin\left(\frac{x_P-x_B}{\sqrt{(x_P-x_B)^2 + (y_P-y_B)^2}}\right)$$
|
H: Is this integral correct?
I used substitution and got that:
$$\int_0^\pi \sin x \cdot P_n(\cos x ) \, dx=0$$
where $P_n$ is the $n$-th Legendre polynomial.
AI: Recall that Legendre polynomials are defined as orthogonal polynomials on $[-1,1]$ with weight function $w(x)=1$. In other words, we have by definition
$$(P_m,P_n)=\int_{-1}^1P_m(x)P_n(x)dx\sim \delta_{mn}.$$
But, since $P_0(x)=1$, your integral is equal to $(P_0,P_n)$ and therefore it vanishes whenever $n>0$. For $n=0$, however, it is equal to $2$.
|
H: Pre-images of closed sets are open
Let $X$ and $Y$ be two topological spaces and let $f$ be a map such that $f^{-1}(A)$ is open in $X$ for any closed $A$. Note that if $X\stackrel{f}{\longrightarrow}Y\stackrel{g}\longrightarrow Z$ are two such maps, then $g\circ f$ is continuous. Perhaps it is a trivial task, but is there an example of such a surjective map from $\Bbb R$ to $\Bbb R$?
AI: Let $f\colon \mathbb R\to\mathbb R$ have the given property.
Since points are closed, we get pairwise disjoint open sets $f^{-1}(x)$. If $x$ is in the image of $f$, then $f^{-1}(x)$ contains some open interval and hence a rational number. We conclude that $f^{-1}(x)\ne\emptyset$ only for countably many $x$, i.e. $f$ is not surjective.
|
H: If points $( k+3 , 2-k)$, $ (k, 1-k)$ and $(3, 4+k)$ are collinear, find the value of $k$
If the points $( k+3 , 2-k)$, $(k, 1-k)$ and $(3, 4+k)$ are collinear,
find the value of $k$.
Can someone please hint at how to start this question? I've so far tried finding the gradients but no luck. Thanks!
AI: Hints:
Three points $P_1,P_2,P_3$ in the real plane are collinear iff the slope between $P_2,P_1$ equals the slope between $P_3,P_2$, so in our case it must be
$$\frac{1-k-2+k}{k-k-3}=\frac{4+k-1+k}{3-k}\iff \frac13=\frac{2k+3}{3-k}$$
Well, now solve for $\,k\,$ ...
|
H: Tackle this series
I am looking for the exact value or a smart approximation (if you have a good idea) of the following series:
$$\sum_{n=0}^\infty \frac{1}{2n+1} (P_{n+1}(0)-P_{n-1}(0))$$ where $P_n$ is the n-th Legendre polynomial and $P_{-1}(0)=-1$
AI: Let us use the relation
$$\left(\frac{P_{n+1}(x)-P_{n-1}(x)}{2n+1}\right)'=P_n(x)$$
and consider the series
$$S(x)=\sum_{n=1}^{\infty}\frac{P_{n+1}(x)-P_{n-1}(x)}{2n+1}.$$
(Note that since $\forall n\geq 0$ one has $P_n(1)=1$, we have $S(1)=0$). Now, using the generating function $\displaystyle \frac{1}{\sqrt{1-2xt+t^2}}=\sum_{n=0}^{\infty}P_n(x)t^n$ of Legendre polynomials, we find
$$S'(x)=\sum_{n=1}^{\infty}P_n(x)=\frac{1}{\sqrt{2-2x}}-1.$$
Therefore
$$S(0)=-\int_0^1\left(\frac{1}{\sqrt{2-2x}}-1\right)dx=1-\sqrt{2}.$$
But the sum we want to compute is nothing but
$$\sum=\left(P_1(0)-P_{-1}(0)\right)+S(0)=2-\sqrt{2}.$$
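A numeric confirmation (a sketch; it uses $P_{2m}(0)=(-1)^m a_m$ with $a_0=1$ and $a_{m+1}=a_m\frac{2m+1}{2m+2}$, $P_{\text{odd}}(0)=0$, and the convention $P_{-1}(0)=-1$):

    vals = {-1: -1.0}
    a, sign = 1.0, 1
    for m in range(10001):
        vals[2 * m] = sign * a
        vals[2 * m + 1] = 0.0
        a *= (2 * m + 1) / (2 * m + 2)
        sign = -sign

    s = sum((vals[n + 1] - vals[n - 1]) / (2 * n + 1) for n in range(20000))
    print(s, 2 - 2 ** 0.5)  # both ~0.585786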
|
H: Rotating a Matrix by an angle
So I have a matrix like so
\begin{pmatrix} x_0 & x_1 & x_2 & x_3 \\ y_0 & y_1 & y_2 & y_3 \end{pmatrix}
And I need to rotate the matrix by an angle, say $45$ degrees.
I read that the rotation matrix is
\begin{pmatrix} \cos(45^\circ) & \sin(45^\circ) & \\ -\sin(45^\circ) & \cos(45^\circ) \\ \end{pmatrix}
Now my question is how do I apply that to my matrix? I mean, in the rotation matrix there are 2 elements for $x$ and 2 for $y$, and I don't know to which elements in my matrix I should apply which $x$ or $y$ elements from the rotation matrix.
AI: Rotation of a point $(x,y)$ in the plane is given by two mappings $u = f(x,y)$ and $v=g(x,y)$:
here $$
u =
\begin{bmatrix} \cos(45^\circ) & \sin(45^\circ) \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=\frac1{\sqrt2}(x+y)
$$
and similarly
$$ v =
\begin{bmatrix} -\sin(45^\circ) & \cos(45^\circ) \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
= \frac1{\sqrt2}(y-x)
$$
So when you want to rotate many points, first you store them in a matrix like :
$$
X =\begin{pmatrix} x_0 & x_1 & x_2 & x_3 \\ y_0 & y_1 & y_2 & y_3 \end{pmatrix}
$$
then multiply the matrix of points on the left by the rotation matrix ($RX$).
This can be viewed as rotating several points at once.
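In code (a sketch assuming numpy; the sample points are arbitrary):

    import numpy as np

    theta = np.deg2rad(45)
    R = np.array([[np.cos(theta),  np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])

    # Columns are the points (x_i, y_i); one product rotates them all.
    X = np.array([[1.0, 0.0, -1.0, 2.0],
                  [0.0, 1.0,  1.0, 2.0]])
    print(R @ X)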
|
H: Question on monotonicity and differentiability
Let $f:[0,1]\rightarrow \mathbb{R}$ be continuous. Assume $f$ is differentiable almost everywhere and $f(0)>f(1)$.
Does this imply that there exists an $x\in(0,1)$ such that $f$ is differentiable at $x$ and $f'(x)<0$?
My gut feeling is yes but I do not see a way to prove it. Any thoughts (proof/counterexample)?
Thanks!
AI: The Cantor function satisfies your hypotheses (up to sign) but the derivative vanishes whenever the function is differentiable.
|
H: Pointwise convergence domain of function series
I'm stuck at finding pointwise convergence domain of the following function series
$$\sum_{n=1}^\infty \frac{\sqrt[3]{(n+1)}-\sqrt[3]{n}}{n^x+1}$$
I tried to use d'Alembert and Weierstrass tests, but it seems to me they don't work here.
AI: We have
$$(n+1)^{1/3}-n^{1/3}=n^{1/3}\left(\left(1+\frac{1}{n}\right)^{1/3}-1\right)\sim_\infty\frac{1}{3}n^{-2/3}$$
so there's two cases:
If $x>0$ then
$$\frac{\sqrt[3]{(n+1)}-\sqrt[3]{n}}{n^x+1}\sim_\infty \frac{1}{3}\frac{1}{n^{x+2/3}}$$
and hence the given series is convergent if $x+2/3>1\iff x>\frac{1}{3}$
If $x\leq 0$ then
$$\frac{\sqrt[3]{(n+1)}-\sqrt[3]{n}}{n^x+1}\sim_\infty \frac{C}{n^{2/3}}$$
where $C=1/2$ if $x=0$ and $C=1$ if $x<0$ and the series is divergent so we conclude that we have the convergence of the series if and only if $x>1/3$.
|
H: Proving statements like $(a\Rightarrow b) \Rightarrow (p \Rightarrow q)$.
Is there a way to simplify this sort of statement? For example, $$a \Rightarrow (b\Rightarrow c)$$ is equivalent to $$(a \wedge b) \Rightarrow c.$$ I'm looking for something similar for $$(a\Rightarrow b) \Rightarrow (p \Rightarrow q),$$ if it's even possible.
AI: Hint: $$ (a \Rightarrow b) \Leftrightarrow ((\neg a) \lor b)$$
Edit:
One way to continue from $(a \wedge \neg b ) \vee \neg p \vee q$ would be:
$$ (a \land \neg b ) \lor \neg p \lor q $$
$$ \lnot(\lnot((a \land \neg b ) \lor \neg p)) \lor q $$
$$ \lnot(\lnot(a \land \neg b ) \land p) \lor q $$
$$ \lnot((\lnot a \lor b ) \land p) \lor q $$
$$ \lnot((a \Rightarrow b) \land p) \lor q $$
$$ ((a \Rightarrow b) \land p) \Rightarrow q $$
|
H: Probability from chi square distribution
How do I find a probability for a chi square distribution?
I have a continous random variable from which I've got the chi square with the formula:
$$\sum \frac{(o-e)^2}{e}$$ where $o$ is the observed value and $e$ the expected value (the mean).
With that value plus the degrees of freedom (the size of my sample minus 1), I would like to get a result as a percentage. Which steps should I take? Find the $z$-score and the standard normal distribution and then try to get the CDF?
Probably off-topic (for Mathematics @ StackExchange):
Is there any R function to calculate it? Like, given a chi square + degrees of freedom, it gives me a percentage?
AI: Since you're probably performing a chi-squared test, the test statistic
$$
X^2=\sum_{i=1}^n \frac{(O_i-E_i)^2}{E_i}
$$
follows a $\chi^2(p)$ distribution with $p=n-1$ degrees of freedom. Since large values are critical, the corresponding $p$-value is given by
$$
P(\chi^2(p)\geq X^2)=1-F_{\chi^2(p)}(X^2),
$$
i.e. the probability of a $\chi^2(p)$-variable being larger than what we have observed. Here $F_{\chi^2(p)}$ is the distribution function of a $\chi^2(p)$ distribution.
This is easily calculated in R with the command 1-pchisq(x,p), where x denotes the test-statistic $X^2$ and p is the number of degrees of freedom.
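For readers outside R, the same p-value can be computed in Python (a sketch using SciPy; the numeric values of x and p below are made-up examples):

from scipy.stats import chi2

x, p = 7.8, 3          # test statistic X^2 and degrees of freedom (examples)
print(chi2.sf(x, p))   # sf = 1 - cdf, i.e. P(chi2(p) >= x); about 0.050 here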
|
H: Differentiate $\sin \sqrt{x^2+1} $with respect to $x$?
Differentiate $$ \sin \sqrt{x^2+1} $$ with respect to $x$?
Can someone please help me with this question? I'm very lost.
AI: $$ \dfrac {d}{dx}\sin \sqrt{x^2+1} $$
since $\dfrac {d}{dx}\sin x=\cos x$, $\dfrac {d}{dx} x^n=n\cdot x^{n-1}$, and $\dfrac{d}{dx}C=0$ where $C$ is a constant,
we have, by the chain rule, $$ \dfrac {d}{dx}\sin \sqrt{x^2+1}=\cos \sqrt{x^2+1}\cdot\dfrac {d}{dx}\sqrt{x^2+1}$$
Continue the differentiation in this way (here, differentiate $\sqrt{x^2+1}$).
Just try it; it is very simple from here.
|
H: A combination lock that flashes red, green, or orange
Say you have a combination lock that takes in a key code in the form of 10 integers from 0 to 9 (in the right order). If you have 0-3 of the integers right, a red light will flash. If you get 4-6 of the integers right, an orange light will flash. And if you get 7-9 of the integers right, a green light will flash and if you get all integers right the lock will open.
It costs \$1 per try. What amount of money needs to be in the safe before you will try to open it? (What is the expected value of opening this safe?) What is the maximum it will cost us to open this safe?
AI: Simple algorithm for the upper bound of $10042$:
Start with $0^{10}$ and go through $0^69^4$; there will be at least one orange flash, say at combination $0^6abcd$. Now try $0^51(a+1)bcd$ through $0^59(a+1)bcd$ to get another orange flash, and continue in a similar fashion until you find all the digits. The worst-case cost is $10000+6\cdot9$ and the expected cost is roughly half of that. There are some special cases, e.g. when some digits are zeros, but these are easy to distinguish: you will get more than one orange flash, in fact at least 10 orange flashes, and that suffices to check which digits are zeros.
I hope this helps $\ddot\smile$
|
H: Well defined, continuous and singular
Can you explain what it means for a function to be well defined, continuous, and singular? I know a function is continuous when the right- and left-hand limits both equal the same value.
Am I right when I say option 5 is false? See attached picture.
AI: well defined:
The definition does not depend on the choice of representative. E.g.
$$f: \mathbb{R}/2\pi\mathbb{Z} \to [0, 2\pi), \qquad [x] \mapsto x \ ({\rm Mod}\ 2\pi)$$
is well-defined, while
$$f: \mathbb{R}/2\pi\mathbb{Z} \to [0, 2\pi), \qquad [x] \mapsto x$$
is not, since in $\mathbb{R}/2\pi\mathbb{Z}$ we have $[0] = [2\pi] = [4\pi]$, but $f([0]) = 0 \neq 2\pi = f(\underbrace{[2\pi]}_{=[0]})$.
Another requirement is that for $f: D\to V$ we must have $f(D) \subset V$, i.e. no values outside the codomain are assigned.
$$ \sin: \mathbb{R} \to D$$
is only well-defined for $D \supset [-1,1]$.
Continuous:
There are multiple definitions, the most elementary definition is:
$$ f \text{ is continuous} :\Leftrightarrow f^{-1}(A) \text{ is open for every } A \text{ open} $$
This only requires a Topology.
Singularity:
A point in the domain of definition at which the function assignment is not well-defined:
$$ f: \mathbb{R} \to \mathbb{R}, \qquad x\mapsto \frac{1}{x} $$
is singular at $x=0$.
Applied to your function: $f$ is clearly well-defined and continuous, so compute the derivative:
$$f'(x) = xe^x + e^x $$
as a sum and product of well-defined and continuous functions, $f'$ is also well-defined and continuous. None of the statements are false.
|
H: Differentiating $\tan\left(\frac{1}{ x^2 +1}\right)$
Differentiate: $\displaystyle \tan \left(\frac{1}{x^2 +1}\right)$
Do I use the quotient rule for this question? If so how do I start it of?
AI: We use the chain rule to evaluate $$ \dfrac{d}{dx}\left(\tan \frac{1}{x^2 +1}\right)$$
Since we have a function which is a composition of functions: $\tan(f(x))$, where $f(x) = \dfrac 1{1+x^2}$, this screams out chain-rule!
Now, recall that $$\dfrac{d}{dx}(\tan x) = \sec^2 x$$
and to evaluate $f'(x) = \dfrac d{dx}\left(\dfrac 1{1 + x^2}\right)$, we can use either the quotient rule, or the chain rule. Using the latter, we have $$\dfrac{d}{dx}\left(\dfrac 1{1 + x^2}\right)= \dfrac{d}{dx}(1 + x^2)^{-1} = -(1 + x^2)^{-2}\cdot \dfrac d{dx}(1+ x^2) = -\dfrac{2x}{(1+ x^2)^2}$$
Now, we put the "chain" together: $$\dfrac d{dx}\left(\tan \left(\frac{1}{ x^2 +1}\right)\right) = \dfrac{d}{dx}\Big(\tan(f(x)\Big)\cdot \Big(f'(x)\Big) = \sec^2 \left(\dfrac 1{1 + x^2}\right)\cdot \left(-\dfrac{2x}{(1+ x^2)^2}\right)$$
|
H: Interpreting the ; in a series
This question is linked to this question.
So, suppose I set $n=5$. Given the following formula:
$$\frac{1}{n}, \dots , \frac{n-1}{n} $$
Am I suppose to get:
$$
\frac{1}{5}, \frac{2}{5}, \frac{3}{5}, \frac{4}{5} \hspace{8.2cm}(1)
$$
Or
$$
\frac{1}{5}, \frac{1}{4}, \frac{1}{3}, \frac{2}{5}, \frac{1}{2},
\frac{3}{5}, \frac{2}{3}, \frac{3}{4}, \frac{4}{5} \hspace{5cm} (2)
$$
?
In other words, what purpose to the ";" in:
$$ \frac{1}{2}; \frac{1}{3}, \frac{2}{3}; \frac{1}{4}, \frac{3}{4}; \frac{1}{5}, \frac{2}{5}, \frac{3}{5}, \frac{4}{5}; \dots ; \frac{1}{n}, \dots , \frac{n-1}{n}.$$
serve?
Also, the formula does not mention anything about skipping numbers that are not in lowest common terms. Is skipping this assumed given the definition of $f$? For example, in (2), there is no $\frac{2}{4}$ because it is equal to $\frac{1}{2}$ which is already listed earlier.
Thank you in advance for any help provided.
AI: The commas separate values for a single given value of $n$. The semi-colons separate for different values of $n$:
$$\underbrace{\frac12}_{n=2};\underbrace{\frac{1}{3}, \frac{2}{3}}_{n=3};\underbrace{\frac14,\frac{2}{4},\frac{3}{4}}_{n=4};\ldots$$
The intention was that this form a single set of unique rational numbers; the $\frac{2}{4}$ can be omitted. The semi-colons distinguish by values of $n$ only for ease of visualization.
|
H: Loss of direction in Gauß' theorem?
I was wondering about the following:
If I have a function $\phi:\mathbb{R}^3\rightarrow \mathbb{R}$ and I want to calculate the mean value of $E=-\nabla \phi$ over a sphere, then $E$ is of course a vector, but the mean value $ E=-\frac{1}{V_\mathrm{sphere}}\int_V \nabla \phi=-\frac{1}{V_\mathrm{sphere}} \int_{S(\text{sphere})} \phi$ is no longer a vector. So how do I also get directional information about the mean value of $E$? What I calculated was: $\frac{1}{V_\mathrm{sphere}} \int_{S(\text{sphere})} \phi=\frac{1}{V_\mathrm{sphere}} \int_0^{2\pi} \int_0^{\pi} \phi r^2 \sin(\theta) d\theta d\phi$
This is the way it was done here: Article on electrodynamics on page 3
Please note that all definitions I gave above also apply to the notation in this article.
AI: The correct formula you want is
$$\int_V \vec{\nabla\phi}\,dV = \int_{\partial V} \phi \vec n\,dS\,,$$
where $\vec n$ is the unit outward normal to $\partial V$.
|
H: Conic section - hyperbolic path
I got that equation of path is conic section $u=\frac{1}{3c}(1+2\cos\theta)$ where $c$ is constant and one vertex of hyperbola is $(-c,0)$ and $u=r^{-1}$. So, $r=\frac{3c}{1+2\cos\theta}$. Since $e=2>1$ is eccentricity, the path is hyperbolic.
How can I find equations of asymptotes and latus rectum?
I know that $e=2$ and $a=c$.
Also, $e=\frac{\sqrt{a^2+b^2}}{a}$. From this I get $b=c\sqrt3$.
Asymptote that I need is $y=-\frac{b}{a}x=\sqrt3 x$ and latus rectum is $(-c,\frac{b^2}{a})=(-c,3c)$.
Is this good?
AI: For the asymptotes, $u=0$, which means that $\cos{\theta}=-1/2$, or $\theta = 2 \pi/3$ or $4 \pi/3$, which means that the slope of the asymptotes is $\tan{\theta}=\pm \sqrt{3}$.
For the latus rectum, the length is $2 b^2/a = 6 c$.
|
H: Tough integrals with Legendre polynomial
Does anybody here know how to integrate $\int_0^\pi P_n(\cos(x))\sin(x)\cos(x) dx$,
$\int_0^\pi P_n(\cos(x))\sin^2(x) dx$, where $P_n$ is the n-th Legendre polynomial?
They are actually extremely hard to do, as far as I see, but I pretty much need them.
AI: Rewrite the integral as
$$-\int_0^{\pi} d(\cos{x}) \, P_n(\cos{x}) \cos{x} = \int_{-1}^1 dy \, y \, P_n(y)$$
By orthogonality, the integral on the right is zero unless $n=1$. Therefore,
$$\int_0^{\pi} dx \, P_n(\cos{x})\, \cos{x}\, \sin{x} = \begin{cases}\frac{2}{3} & n=1 \\ 0 & n \ne 1 \end{cases}$$
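A quick numerical sanity check of this orthogonality computation (a sketch using SciPy's quad and eval_legendre):

import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

for n in range(5):
    val, _ = quad(lambda x: eval_legendre(n, np.cos(x)) * np.sin(x) * np.cos(x),
                  0, np.pi)
    print(n, round(val, 12))  # 2/3 at n = 1, (numerically) zero otherwise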
|
H: Matrix determinant using Laplace method
I have the following matrix of order four for which I have calculated the determinant using Laplace's method.
$$
\begin{bmatrix}
2 & 1 & 3 & 1 \\
4 & 3 & 1 & 4 \\
-1 & 5 & -2 & 1 \\
1 & 3 & -2 & -1 \\
\end{bmatrix}
$$
Finding the determinant gives me $-726$. Now if I check the result at Wolfram Alpha, it says the result is $-180$ (Because there are no zeros in the matrix, expand with respect to row one) so it uses only the first row to calculate the determinant of the matrix.
My question is: Why it uses only the first row to find the determinant?
AI: The Laplace expansion can be performed with respect to any row or column. Let's see the expansion with respect to the first row:
\begin{align}
\det\begin{bmatrix}
2 & 1 & 3 & 1 \\
4 & 3 & 1 & 4 \\
-1 & 5 & -2 & 1 \\
1 & 3 & -2 & -1
\end{bmatrix}={}&
(-1)^{1+1}\cdot 2 \cdot
\det\begin{bmatrix}
3 & 1 & 4 \\
5 & -2 & 1 \\
3 & -2 & -1
\end{bmatrix}+{}\\
&(-1)^{1+2}\cdot 1 \cdot
\det\begin{bmatrix}
4 & 1 & 4 \\
-1 & -2 & 1 \\
1 & -2 & -1
\end{bmatrix}+{}\\
&(-1)^{1+3}\cdot 3\cdot
\det\begin{bmatrix}
4 & 3 & 4 \\
-1 & 5 & 1 \\
1 & 3 & -1
\end{bmatrix}+{}\\
&(-1)^{1+4}\cdot1\cdot
\det\begin{bmatrix}
4 & 3 & 1 \\
-1 & 5 & -2 \\
1 & 3 & -2
\end{bmatrix}
\end{align}
Now you can go on by computing the determinants of the four $3\times3$ matrices with the same (or another) method. The final result is indeed $-180$.
When one row or column has many zeroes it's convenient to use that one, but any row or column can be used.
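The value is easy to confirm with any linear algebra package, e.g. a one-line check in NumPy (a sketch, not part of the original computation):

import numpy as np

A = np.array([[ 2, 1,  3,  1],
              [ 4, 3,  1,  4],
              [-1, 5, -2,  1],
              [ 1, 3, -2, -1]])
print(round(np.linalg.det(A)))  # -180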
|
H: Sum $\sum_{n=0}^\infty \frac{\tan(a/2^n)}{2^n},$
$$\sum_{n=0}^\infty \frac{\tan(a/2^n)}{2^n},$$
where $a$ isn't a multiple of $\pi$. I've been going through several telescoping questions, and it seems I have hit a brick wall with this one; any help will be appreciated.
AI: Since
$$\tan\left(\frac{a}{2^n}\right)\sim_\infty\frac{a}{2^n}$$
then
$$ \frac{\tan(a/2^n)}{2^n}\sim_\infty\frac{a}{4^n}$$
so the given series is convergent.
Added (if you are asking for the sum): we have
$$\tan t= \frac{1}{\tan t}-\frac{2}{\tan (2t)}$$
hence we find
$$\frac{\tan(a/2^n)}{2^n}= \frac{1}{2^n\tan (a/2^n)}-\frac{1}{2^{n-1}\tan (a/2^{n-1})}=u_n-u_{n-1}$$
where
$$u_n=\frac{1}{2^n\tan (a/2^n)}$$
so by telescoping we have
$$\sum_{n=0}^\infty \frac{\tan(a/2^n)}{2^n}=\lim_{n\to\infty}u_n-u_{-1}=\frac{1}{a}- \frac{2}{\tan (2a)}$$
since $2^n\tan (a/2^n)\to a$ as $n\to\infty$, so $\lim_{n\to\infty}u_n=\frac1a$, while $u_{-1}=\frac{2}{\tan(2a)}$. (Using the identity $\frac{2}{\tan 2a}=\frac{1}{\tan a}-\tan a$, the sum can also be written as $\frac1a-\frac{1}{\tan a}+\tan a$.)
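A quick numerical check of the telescoped value (a sketch; $a=1$ is an arbitrary test value, and 60 terms are far more than enough):

import math

a = 1.0
s = sum(math.tan(a / 2**n) / 2**n for n in range(60))
print(s, 1/a - 2/math.tan(2*a))  # both approximately 1.915315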
|
H: Can $\{(1,2,0),(2,0,3)\}$ span $U=\{(r,s,0) \mid r,s, \in \mathbb{R}\}?$
Is it possible that $\{(1,2,0),(2,0,3)\}$ can span the subspace $U=\{(r,s,0) \mid r,s, \in \mathbb{R}\}?$
Using the definition of span, I have gotten this far:
$a(1,2,0)+b(2,0,3)=(r,s,0)$ (where $a,b,r,s \in \mathbb{R}$)
This gives the following set of equations:
$a+2b=r, 2a=s, 3b=0$
I am not sure how to proceed; a hint would be appreciated. Thanks in advance.
AI: From the obtained equations, $b=0$ and $a=r=s/2$; so if $r\ne s/2$ you cannot choose such an $a$, and the set does not span $U$.
|
H: Convolution doubt
Can someone explain why the general formula of the convolution is this one:
$$(f*g)(t)=\int_{-\infty}^\infty f(t-\tau)g(\tau) \, d\tau$$
But when both $f(\tau)$ and $g(\tau)$ are equal to zero for negative values of $\tau$, the convolution turns into:
$$(f*g)(t)=\int_0^t f(t-\tau)g(\tau) \, d\tau$$
I always thought that it would be more logical that it would become:
$$(f*g)(t)=\int_0^\infty f(t-\tau)g(\tau) \, d\tau$$
AI: Just notice that for $\tau<0$ you have $g(\tau)=0$ while for $\tau>t$ you have $f(t-\tau)=0$. In both cases $f(t-\tau)g(\tau) = 0$.
Hence
$$
\int_{-\infty}^{\infty} f(t-\tau)g(\tau)\, d\tau
= \int_0^t f(t-\tau)g(\tau)\, d\tau.
$$
|
H: $\epsilon$-$\delta$ proof that $\lim_{x \to 1} \sqrt{x} = 1$
I'm trying to teach myself how to do $\epsilon$-$\delta$ proofs and would like to know if I solved this proof correctly. The answer given (Spivak, but in the solutions book) was very different.
Exercise: Prove $\lim_{x \to 1} \sqrt{x} = 1$ using $\epsilon$-$\delta$.
My Proof:
We have that $0 < |x-1| < \delta $.
Also, $|x - 1| = \bigl|(\sqrt{x}-1)(\sqrt{x}+1)\bigr| = |\sqrt{x}-1||\sqrt{x}+1| < \delta$.
$\therefore |\sqrt{x}-1|< \frac{\delta}{|\sqrt{x}+1|}$
Now we let $\delta = 1$. Then
\begin{array}{l}
-1<x-1<1 \\
\therefore 0 < x < 2 \\
\therefore 1 < \sqrt{x} + 1<\sqrt{2} + 1 \\
\therefore \frac{1}{\sqrt{x} + 1}<1.
\end{array}
We had that $$|\sqrt{x}-1|< \frac{\delta}{|\sqrt{x}+1|} \therefore |\sqrt{x}-1|<\delta$$
By letting $\delta=\min(1, \epsilon)$, we get that $|\sqrt{x}-1|<\epsilon$ if $0 < |x-1| < \delta $.
Thus, $\lim_{x \to 1} \sqrt{x} = 1$.
Is my proof correct? Is there a better way to do it (still using $\epsilon-\delta$)?
AI: The proof is correct but can be simplified. You don't need the part "Now let $\delta=1$...". In fact it is always true that
$$
\frac{1}{\sqrt x + 1} \le 1
$$
since $\sqrt x \ge 0$.
Also, a matter of style: in the first line you don't have $0 < |x-1|<\delta$; rather, you suppose it (this is because $\delta$ has not been given yet, but still has to be found). Similarly, when you write "let $\delta = 1$" you should write "if $\delta \le 1$ ...".
|
H: Prove $\exists$ neighborhood of $I \in Gl(n,\mathbb{C})$ containing no nontrivial subgroup.
Prove that there exists a neighborhood of the identity $I \in Gl(n,\mathbb{C})$ that contains no subgroup other than $\left\{ I \right\}$.
Thanks!
AI: We can prove there exists a neighborhood $V$ of $I$ in $\mathrm{GL}_{n}(\mathbb{C})$ such that the only subgroup of $\mathrm{GL}_{n}(\mathbb{C})$ in $V$ is $\lbrace I \rbrace$.
Consider the exponential map $\exp \, : \, \mathcal{M}_{n}(\mathbb{C}) \, \rightarrow \, \mathrm{GL}_{n}(\mathbb{C})$. For every $M \in \mathcal{M}_{n}(\mathbb{C})$, we have:
$$ \exp(M) = I + M + O(\Vert M\Vert^{2}) $$
and $\exp(0)=I$. This proves that $\mathrm{D}_{0}\exp = \mathrm{Id}$, where $\mathrm{D}_{0}\exp$ denotes the differential of $\exp$ at $0$. Then, there exists a neighborhood $U$ of $0$ in $\mathcal{M}_{n}(\mathbb{C})$ and a neighborhood $V$ of $I$ in $\mathrm{GL}_{n}(\mathbb{C})$ such that $\exp$ is a diffeomorphism from $U$ to $V$.
We may assume that $U$ is an open ball centered at $0$ (shrinking $U$ and $V$ if necessary). Set $W = \frac{1}{2}U$ and $W' = \exp(W)$. Since $W'$ is open, it is a neighborhood of $I$. Let $M \in W'$ with $M \neq I$. We can write $M = \exp(A)$ where $A \in W$, $A \neq 0$. Then there exists an integer $k \in \mathbb{N}$ such that $kA \in U \setminus W$. So we have $M^{k} = \exp(kA) \in V \setminus W'$, using that $\exp$ is injective on $U$.
In particular $M^{k} \notin W'$, so no subgroup contained in $W'$ can contain an element other than $I$. This ends the proof.
|
H: Contrapositive: $\forall\; n > 1, n:$ composite $\implies\exists\; p$ (prime) s.t. $p \leq \sqrt n$ and $p\mid n$
Usually, I find it a cakewalk to write the contrapositive, but the following statement is quite complex for the task:
For all integers $n > 1$, if $n$ is not prime, then there exists a prime number $p$ such that $p \leq \sqrt n$ and $n$ is divisible by $p$.
Is it "There exists no prime number $p$ such that $p \leq \sqrt n$ and $p \, | \, n$ given that $(n > 1)$ for a prime integer $n$"? The weird thing here is that the first universal statement (for all integers $n$) was not converted into a conditional, which makes me uncomfortable.
I'd appreciate some guidance.
AI: You have the statement:
For all integers $n > 1$, if $n$ is not prime, then there exists a prime number $p$ such that $p \leq \sqrt n$ and $n$ is divisible by $p$.
This is a statement of the form $$\text{Let }\;n\in \mathbb Z, n > 1: \quad \forall n\left [\lnot P(n) \implies \exists p (Q(n, p) \land R(n, p))\right]$$
It's contrapositive is:
For all integers $n\gt 1$, if there does not exist a prime number $p$ such that $p \leq \sqrt n$ and $n$ is divisible by $p$, then $n$ is prime.
Which is a statement of the form $$\text{Let}\;n\in \mathbb Z, n > 1:\quad \forall n \left[\lnot \exists p(Q(n,p) \land R(n,p)) \implies P(n)\right]$$
|
H: What is the convergence speed of logistic sequence?
I am looking at the sequence $x_{n+1}=r\, x_n(1-x_n)$ where $r=1$.
Let's choose $x_1=1/2$ so as to make the sequence convergent to 0.
My question is: precisely how quickly does this sequence approach zero?
From my numerical experiments $\lim_{n\to\infty}n\,x_n=1$ seems likely. (That is, an $\mathrm O(1/n)$ convergence rate.) Do you have a more precise result or some mathematical proof?
AI: Let $y_n=\frac1{x_n}$.
The recursion for these is
$$y_{n+1}=y_n\cdot \frac1{1-x_n}=y_n\cdot (1+x_n+x_n^2+\ldots) = y_n+1+x_n+x_n^2+\ldots$$
As $x_n\to 0$ (assuming $0<x_0<1$) we see that $y_{n+1}-y_n\approx 1$, so there exists some $c$ such that $y_n\ge n+c$ and hence $x_n\le \frac1{n+c}$ for $n$ big enough.
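An empirical check of the claimed rate (a sketch; $x_1=\tfrac12$ as in the question):

x, N = 0.5, 10**6
for _ in range(N - 1):   # x holds x_1 = 1/2; after the loop it holds x_N
    x *= 1 - x
print(N * x)             # close to 1, consistent with n * x_n -> 1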
|
H: Binomial Sum: An In-Depth Analysis into the Relatedness of Two Equivalences
How is it that
$$n(1+x)^{n-1}=\sum_{k=1}^n C(n,k)kx^{k-1}?$$
How can this be used to show that
$$n2^{n-1}=\sum_{k=1}^nkC(n,k)?$$
AI: Hint: We have by the Binomial Theorem:
$$(1+x)^n=\sum_{k=0}^n \binom{n}{k}x^k.$$
Differentiate both sides with respect to $x$. For the second part, set $x=1$ in the result.
Remark: The function $(1+x)^n$ is the generating function of the binomial coefficients. One nice thing about the generating functions approach to combinatorial problems is that one can use tools of analysis, in particular differentiation and integration, to obtain combinatorial results. At a more advanced level, one can use estimates based on the generating function to obtain bounds on the size of certain combinatorial objects.
|
H: Weak maximum principle for the p-Laplacian
For the equation $\Delta_p u = 0 $ in $U$ ($U$ open and bounded), does a weak maximum principle hold? (The maximum and minimum occur on $\partial U$)? If yes, someone can indicate a book with the theorem?
Thanks in advance ( my english is horrible, sorry ... )
AI: Yes. See Theorem 2.15 in Notes on the p-Laplace equation by Peter Lindqvist. It asserts more: the Comparison Principle holds for the p-Laplacian; the Maximum Principle amounts to comparison with a constant function.
If you wanted to prove it from scratch, you would argue that a $p$-harmonic function is the unique minimizer of $p$-energy for its boundary values. Since the truncation by $\sup_{\partial U} u$ does not increase the energy and does not change the boundary values, it has to keep the function the same.
|
H: Question about coordinate change
Say $f$ is a function $f: \mathbb R^2 \to \mathbb R$. Can someone show me an example of such an $f$ with the property that $(\partial / \partial x)^2 f(x,y) = 0$ and $(\partial / \partial y)^2 f(x,y) = 0$ in one example of coordinates and $(\partial / \partial x)^2 f(x,y) \neq 0$ and $(\partial / \partial y)^2 f(x,y) \neq 0$ in another? Is it possible?
AI: Take $f(x,y)=xy$. Then both pure second partial derivatives are 0, but if we change coordinates to $x=u+v$, $y=u-v$, then $f=(u+v)(u-v)=u^2-v^2$, which has non-zero pure second partials.
|
H: Why does DP solve a problem in polynomial time whereas brute force is exponential?
I am just learning DP, so maybe this is a noob doubt. I've read (while trying to understand the difference between DP and the greedy approach, and I am still not fully clear) that DP goes through all possible solutions to a problem and chooses an optimal one. That's what brute force does, right? Then why such a big difference in the running time? From whatever reads and re-reads I've given to various sources, I understand that it probably has something to do with this thing called the Principle of Optimality (I've already asked a question on it, but it was too broad, so it's closed down). But it'd be nice if someone could provide a more intuitive understanding of all this.
AI: Dynamic programming is sort of an "elegant" brute force. While you are right in that most dynamic programming solutions involve producing all "smaller" solutions, the key difference is that these "smaller" solutions are tractable. Typically you will characterize a dynamic program with a recurrence relation, and if it is a good one, then you won't need all previous subproblems, just the most recent batch of them. Hence, you can start building up your solutions from the base case. Your struggles are natural: dynamic programming hurts sometimes. In my opinion, it is a strong testament to our limited intelligence. The solution is "try everything", but we are at least smart enough to do so in an efficient manner.
Here is an example of a dynamic programming solution from an old homework assignment from my algorithms class:
Problem: Say you're at an airport and you find yourself in a really long hall with two sets of $n$ walkways in parallel, where each walkway takes some time to traverse. Furthermore, switching sides takes a constant amount of time, say $k$. You want to find the fastest way to get across the hallway.
More formally, given two lists of length $n$ of walkway times for the right and left sides, and a time cost $k$ for switching between sides, a schedule is a sequence of length $n$ of the form {$L$, $R$}$^n$ , where an $L$ in position $i$ represents taking the left walkway for that segment, while an $R$ represents taking the right walkway. The cost of a schedule is the sum of the times for the walkways taken, plus $k$ times the number of switches between walkways. Give an efficient algorithm to calculate the fastest possible travel time to the gate.
Solution: Let $T_{R} = \{r_1, ..., r_n\}$ and $T_{L} = \{l_1, ..., l_n\}$ be the lists of walkway times for the right and left sides. Define $OPT(j)$ to be the fastest possible travel time to the $j^{th}$ pair of walkways. The purpose of our algorithm is to compute $OPT(n)$. The recursion uses the fact that the optimal schedule will either finish at the left or the right hand side. So, we will define two auxiliary functions $OPT_R(j)$ and $OPT_L(j)$, where
\begin{align*}
OPT_R(j) = \left \{
\begin{array}{lr}
r_1 & j = 1\\
\text{min}\{OPT_R(j-1) + r_j, OPT_L(j-1) + k + r_j\} & 1 < j \le n
\end{array}
\right.
\\OPT_L(j) = \left \{
\begin{array}{lr}
l_1 & j = 1\\
\text{min}\{OPT_L(j-1) + l_j, OPT_R(j-1) + k + l_j\} & 1 < j \le n
\end{array}
\right.
\end{align*}
If it is not already clear, $OPT_R(j)$ and $OPT_L(j)$ are functions whose outputs are the fastest times out of all the schedules that arrive at the $j^{th}$ right and left walkways, respectively. Keeping in mind our main objective, we also define
\begin{align*}
OPT(j) = \text{min}\{OPT_R(j), OPT_L(j)\}
\end{align*}
To maximize efficiency, we should start at $j=1$ and work our way up to $j=n$. Each recursive call is only computationally dependent on the preceding pair of values, so we can calculate the fastest possible travel time with constant space (just the previous pair) and linear time ($n$ iterations, and one comparison per $OPT_L$ and one per $OPT_R$, excluding the base cases).
Notice that there are $2^n$ possible routes we could take (think binary strings), but thanks to our recursion, we can solve the problem in linear time and constant space. Hope this helps!
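For concreteness, the recurrence translates into a few lines of code (a sketch in Python; the sample inputs are made up):

def fastest_time(times_left, times_right, k):
    # opt_l / opt_r: fastest time to the end of segment j on the left / right
    opt_l, opt_r = times_left[0], times_right[0]
    for l_j, r_j in zip(times_left[1:], times_right[1:]):
        opt_l, opt_r = (min(opt_l, opt_r + k) + l_j,
                        min(opt_r, opt_l + k) + r_j)
    return min(opt_l, opt_r)

print(fastest_time([5, 1, 1], [1, 5, 5], k=1))  # 4: right, switch, left, left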
|
H: Regarding Limit/continuity/convergence
let $$f_n(x)=\begin{cases} 1-nx&\text{when }x\in[0,1/n]\\0&\text{when }x\in [1/n,1]\end{cases}$$
Which of the following is correct?
$\lim_{ n\to\infty} f_n(x)$ defines a continuous function on $[0,1]$
$\{f_n\}$ converges uniformly on $[0,1]$
$\lim_{n\to\infty} f_n(x)=0$ for all $x\in [0,1]$
$\lim_{n\to\infty} f_n(x)$ exists for all $x\in[0,1]$
AI: Let us denote $\displaystyle f=\lim_{n\to\infty} f_n$.
For $x=0$ we have $f_n(0)=1,\quad\forall n>0$ so $f(0)=1$.
For $x>0$ there's $N\in\mathbb N$ such that $\frac{1}{n}\leq x,\quad \forall n\geq N$ so $f_n(x)=0\quad \forall n\geq N$ and then $f(x)=0$ so we conclude:
$$f(x)=\left\{\begin{array}\\
1&\text{if}\ x=0\\
0&\text{if}\ 0<x\leq1
\end{array}\right.$$
Now can you answer the questions?
It's clear that $f$ isn't continuous at $0$, so options 1. and 2. are false. Since $f(0)=1$, option 3. isn't true either. And since $\lim_{n\to\infty}f_n(x)$ exists for all $x\in[0,1]$, option 4. is true.
|
H: Fitting a sinusoidal function to three known points separated by $30$ degrees
I have three data points measured at $-30$, $0$, and $30$ degrees, respectively. I would like to fit these points to a sinusoidal function of the form:
$$f(\theta)≈A\sin(\theta + B) + C$$
Is this possible? If so, what would be the best approach?
I saw a similar post here, however this assumed that the measurements were $90$ degrees apart which helps to simplify things greatly.
Thanks,
AI: Yes, of course it is possible. Proceed in the following way:
Let the three values at $-30,0,30$ be $y_1,y_2,y_3$ respectively. Then you get three equations in $A,B,C$ \begin{equation}
\begin{split}
A\sin (-30+B)+C=&y_1\\
A\sin (B)+C=&y_2\\
A\sin (30+B)+C=&y_3
\end{split}
\end{equation}
Subtracting the first equation from the last and using $\sin(30+B)-\sin(-30+B)=2\cos B\sin 30=\cos B$, you get $$A\cos B=y_3-y_1$$
Adding the first and last equations and using $\sin(30+B)+\sin(-30+B)=2\sin B\cos 30=\sqrt3\sin B$, you get $$\sqrt3\,A\sin B+2C=y_1+y_3$$ The second equation gives $A\sin B=y_2-C$, so $$\sqrt3\,(y_2-C)+2C=y_1+y_3\ \Rightarrow\ C=\frac{y_1+y_3-\sqrt3\,y_2}{2-\sqrt3}$$ and then $A\sin B=y_2-C$. Now you can use $\displaystyle \sin^2B+\cos^2 B=1$ to get $$A=\sqrt{(A\sin B)^2+(A\cos B)^2}$$ Once you get $A$, then you can get $B$ from $$B=\tan^{-1} \left(\frac{A\sin B}{A \cos B}\right)$$ restricting $B$ to $[-\pi/2,\pi/2]$ (and adding $\pi$ when $A\cos B<0$).
Note: The same approach (three equations in $A\sin B$, $A\cos B$, $C$) works for arbitrary sample angles; they need not be $30^\circ$ apart, though the intermediate formulas change.
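A numerical check of the recipe above (a sketch; the ground-truth parameters are arbitrary, and the angles are in radians, $\pm\pi/6=\pm30^\circ$):

import numpy as np

A0, B0, C0 = 2.0, 0.3, 1.0                      # made-up ground truth
y1, y2, y3 = (A0 * np.sin(t + B0) + C0 for t in (-np.pi/6, 0.0, np.pi/6))

C = (y1 + y3 - np.sqrt(3) * y2) / (2 - np.sqrt(3))
A_sin, A_cos = y2 - C, y3 - y1
A = np.hypot(A_sin, A_cos)
B = np.arctan2(A_sin, A_cos)
print(A, B, C)                                  # recovers 2.0, 0.3, 1.0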
|
H: Chance that first 6 characters of a SHA-1 hash matching another SHA-1 hash?
Just what the question says -- what is the chance that the first six characters of a SHA-1 hash will match the first six characters of any given SHA-1 hash?
AI: SHA-1 produces a 160-bit value, not characters.
However, such hashes are often conveyed as a string of forty hex digits.
The probability of six identical hex digits (assuming, with suitable justification, a uniform distribution) is $\frac1{16^6}$
|
H: You roll a die until the sum of all your rolls is greater than 13. What number are you most likely to land on, on the last roll?
So I was thinking of doing this recursively: $f(x,i)$ is equal to the probability of rolling greater than $x$ and landing on $i$ on the last roll. $f(0,i) = 1/6$ for $i = \{1,2,..,6\}$. $f(1,i) = 1/6 + 1/6f(0,i)$ for $i = \{2,...,6\}$ and $f(1,1) = 1/6f(0,1)$. Finally, we list out this recursion until we get $f(13,i)$ and see for what value of $i$ is $f$ the largest.
Is there a better way to approach this or an easy way to simplify this method?
AI: We are not asked the exact probabilities $f(13,i)$, but it is somewhat obvious that for $n\gg 0$ we have $f(n,i)\sim i$ (and hence $f(n,i)\approx \frac i{21}$). So even without calculation it is reasonable to assume that $f(13,6)>f(13,i)$ for all $i\ne 6$.
And indeed, any sequence of rolls that ends with $i$ exceeding $13$ can be mapped to a sequence of rolls that ends in a $6$ exceeding $13$ (having the same rolls before, with the final $i$ replaced by $6$). Therefore $f(13,6)\ge f(13,i)$ for all $i$. Now note that any sequence of rolls summing to exactly $13$ whose last roll is some $i<6$ can be turned into a sequence exceeding $13$ and ending in $6$ by replacing that final $i$ with $6$; such a sequence is not hit by the map above, so in fact $f(13,6)>f(13,i)$. (In this last step we used that $13\ge i$ for all $i<6$, so that the existence of a sequence summing to exactly $13$ with $i$ as its last roll is guaranteed.)
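The exact probabilities are also easy to get with a small dynamic program (a sketch; p[s] is the probability that the running total ever equals s, which is well defined for s <= 13 since we only stop above 13):

p = [1.0] + [0.0] * 13
for s in range(1, 14):
    p[s] = sum(p[s - d] for d in range(1, 7) if s - d >= 0) / 6

# the final roll is i exactly when we sit at some s in {14-i, ..., 13} and roll i
last = {i: sum(p[s] for s in range(14 - i, 14)) / 6 for i in range(1, 7)}
print(last)  # strictly increasing in i, so 6 is the most likely last roll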
|
H: Is there an accepted symbol for irrational numbers?
$\mathbb Q$ is used to represent rational numbers. $\mathbb R$ is used to represent reals.
Is there a symbol or convention that represents irrationals.
Possibly $\mathbb R - \mathbb Q$?
AI: Customarily, the set of irrational numbers is expressed as the set of all real numbers "minus" the set of rational numbers, which can be denoted by either of the following, which are equivalent:
$\mathbb R \setminus \mathbb Q$, where the backward slash denotes "set minus".
$\mathbb R - \mathbb Q,\;$ where we read the set of reals, "minus" the set of rationals.
Occasionally you'll see some authors use an alternative notation: e.g., $$\mathbb P = \{x\mid x \in \mathbb R \land x \notin \mathbb Q\} $$ or $$\mathbb I = \{x \mid x\in \mathbb R \land x \notin \mathbb Q\}$$
But if and when an alternative letter like $\mathbb P$ or $\mathbb I$ is used, it should be preceded by a clear statement as to the fact that it is being used to denote the set of irrational numbers.
|
H: Positive integer multiples of an irrational mod 1 are dense
I'm not sure how to solve this one.
Thank you!
$2.$ For any $\alpha\in \mathbb R$ we define $$\lfloor \alpha \rfloor = \max_{n\in\mathbb Z}\{\,n\mid n\leq \alpha\,\}$$ and $$\alpha\bmod 1 = \alpha - \lfloor \alpha \rfloor$$ Let $\alpha$ be irrational.
(a) Given $n\in\mathbb N$ show that $\{\,k\alpha\bmod1\mid k\in\mathbb N\,\}\cap\left[0,\frac{1}{n}\right]\neq\emptyset$
(b) Prove that $\{\,n\alpha\bmod 1\mid n\in\mathbb N\,\}$ is dense in $[0,1]$.
AI: What I show below doesn't strictly follow the items, but it still shows the numbers are dense.
STEP 1 Pick any $n\in\Bbb N$. Consider the $n+1$ distinct numbers $$x_k=k\alpha-\lfloor k\alpha\rfloor=\{k\alpha\}\; ;\;k=0,1,2,\ldots,n$$
These are $n+1$ numbers that fit into $n$ places, namely
$$\left[0,\frac 1 n\right),\ldots,\left[1-\frac{1}n,1\right)$$
By the Dirichlet's principle there must exist at least a pair of them which fall in the same interval of length $\dfrac 1n$.
STEP 2 We obtained $$\frac{k}{n} \leqslant \left\{ {{k_1}\alpha } \right\} < \left\{ {{k_2}\alpha } \right\} < \frac{{k + 1}}{n}$$
for some $k=0,\ldots,n-1$. We then know that $$0 < \left\{ {{k_2}\alpha } \right\} - \left\{ {{k_1}\alpha } \right\} \leqslant \frac{1}{n}$$
But note that $$\begin{align}
\left\{ {{k_2}\alpha } \right\} - \left\{ {{k_1}\alpha } \right\} &= \left\{ {\left\{ {{k_2}\alpha } \right\} - \left\{ {{k_1}\alpha } \right\}} \right\} \cr
&= \left\{ {{k_2}\alpha - {k_1}\alpha - \left( {\left\lfloor {{k_2}\alpha } \right\rfloor - \left\lfloor {{k_1}\alpha } \right\rfloor } \right)} \right\} \cr
&= \left\{ {{k_2}\alpha - {k_1}\alpha - {\text{integer}}} \right\} \cr
&= \left\{ {\left( {{k_2} - {k_1}} \right)\alpha } \right\} \end{align} $$
Now, you may as well try and prove the following:
Let $G$ be an additive subgroup of $\Bbb R$. Let $G^+$ denote the positive elements of $G$. Then
$(1)$ If $\inf G^+=\alpha >0$, $G=\alpha\Bbb Z$
$(2)$ If $\inf G^+=0$, $G$ is dense in $\Bbb R$.
Hint
For $(1)$. Show that $\alpha\in G$.
If not, pick $\epsilon =\alpha/2$ in the definition of infimum, and $g,g'\in G$ such that $\alpha \leqslant g < g' <\alpha + \frac{\alpha }{2}$. Look at $g'-g$.
Then $g'-g\in G$ and $g'-g\leq \alpha/2<\alpha$ which is impossible.
Now pick $g>0$. Look at $g-\alpha \left\lfloor {\dfrac{g}{\alpha }} \right\rfloor$. Use the definition of integer part to show it must be zero.
Thus $g = \alpha \left\lfloor {\dfrac{g}{\alpha }} \right\rfloor \in \alpha {\Bbb Z}$. Since opposites are in $G$ too, $(1)$ is proven.
For $(2)$, pick any $x\in \Bbb R$. We know we can find $y>0$ in $G$ with $0<y<\epsilon$. Let $n=\left\lfloor {\dfrac{x}{y}} \right\rfloor $. What can you deduce from $$n \leqslant \frac{x}{y} < n + 1\text{ ? }$$
We have $ny\in G$ by additivity, and $$yn \leqslant x < yn + y \Rightarrow 0 \leqslant x - yn < y < \varepsilon $$
|
H: How to calculate the height of a circular segment based on the area.
Given an area of a circular segment, how can one find the height of the circular segment?
In the image below, assume the area of the green segment is known. How can one find the value of h?
I have also seen this problem described as the Quarter Tank Problem.
Is there a way to solve this problem without recursive approximation?
AI: The area of the green portion will be $\displaystyle A=\frac{1}{2}\theta R^2-\frac{1}{2}R^2\sin \theta$ Also, you have $$d=R \cos \left(\frac{\theta}{2}\right)\\
h=R-d=R\left(1-\cos \left(\frac{\theta}{2}\right)\right)$$ So given $A,R$ you have to solve the transcendental equation $$ A=\frac{1}{2}\theta R^2-\frac{1}{2}R^2\sin \theta$$ to get $\theta$. Then you can compute $h$.
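Numerically this is a one-dimensional root-finding problem; here is a sketch using SciPy's brentq (it assumes $0<A<\pi R^2$, i.e. at most the full disk):

import numpy as np
from scipy.optimize import brentq

def segment_height(A, R):
    # theta - sin(theta) is increasing, so the root in (0, 2*pi) is unique
    theta = brentq(lambda t: 0.5 * R**2 * (t - np.sin(t)) - A, 0.0, 2 * np.pi)
    return R * (1 - np.cos(theta / 2))

print(segment_height(A=1.0, R=1.0))  # about 0.710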
|
H: Indefinite integral of $\log(\sin(x))$
I'm computing the indefinite integral of $\log(\sin(x))$; this is my solution using integration by substitution:
$$
\begin{align}
&\int\log(\sin(x))dx\\
= &\int\log(y)\frac{1}{\cos(x)}dy \\
= &\frac{1}{\cos(x)}\int\log(y)dy \\
= &\frac{1}{\cos(x)}(y\log(y)-y) \\
= &\tan(x)\log(\sin(x))-\tan(x)
\end{align}
$$
Because I did the substitution $y=\sin(x), dy=\cos(x)dx\rightarrow dx=\frac{dy}{\cos(x)}$.
Wolfram online gives a different result; where is my error?
AI: $\cos(x)$ is not a constant, because $x$ depends on $y$, so you can't pull $\cos(x)$ out of the integral.
|
H: Ideals in ring extensions
Let $R$ be a ring, commutative with $1$, subring of a ring $R'$. Let $\mathfrak{p}$ be an ideal of $R$. Let's denote by $\mathfrak{p}R'$ the extended ideal, i.e. the ideal generated by $\mathfrak{p}$ in $R'$.
EDIT: Assume that $R'$ is a free $R$-module of finite rank $n$ and let $x_1,\ldots,x_n$ be a basis of $R'$ over $R$. Consider the residue classes $\overline{x_1},\ldots,\overline{x_n}$ modulo $\mathfrak{p}R'$. I want to show that $\{\overline{x_i}\}$ is a basis for $R'/\mathfrak{p}R'$ over $R/\mathfrak{p}$.
AI: This is a special case of the following general fact: if $R$ is a ring and $M$ is a free $R$-module with basis $\{m_i:i\in I\}$, then for any ring map $R\rightarrow S$, $M_S=S\otimes_RM$ is a free $S$-module with basis $\{1\otimes m_i:i\in I\}$. This amounts to the fact that tensor commutes with direct sums. In the situation asked about, $S=R/\mathfrak{p}$ and $M=R^\prime$, in which case $R/\mathfrak{p}\otimes_RR^\prime=R^\prime/\mathfrak{p}R^\prime$ as $R/\mathfrak{p}$-modules with $1\otimes x_i$ mapping to $x_i+\mathfrak{p}R^\prime$.
In the special case of interest this can be shown directly. It is clear that the $x_i+\mathfrak{p}R^\prime$ generate $R^\prime/\mathfrak{p}R^\prime$ over $R/\mathfrak{p}$. Suppose $\sum_i (r_i+\mathfrak{p})(x_i+\mathfrak{p}R^\prime)=0$, i.e., that $\sum_i r_i x_i\in\mathfrak{p}R^\prime$. Then $\sum_i r_i x_i=\sum_i s_ix_i$ with $s_i\in\mathfrak{p}$, so by $R$-linear independence, $r_i=s_i\in\mathfrak{p}$ for all $i$.
EDIT (to address the original, pre-edit question): The fact that every element of $\mathfrak{p}R^\prime$ has the form $\sum_i s_ix_i$ with $s_i\in\mathfrak{p}$ only uses that $R^\prime$ is generated as an $R$-module by the $x_i$. An element of $\mathfrak{p}R^\prime$ is of the form $\sum_j s_j r_j^\prime$ with $s_j\in\mathfrak{p}$ and $r_j^\prime\in R^\prime$ (this is a matter of definition). Now write $r_j^\prime=\sum_i r_{ij}x_i$ for $r_{ij}\in R$ (here we use that the $x_i$ generate $R^\prime$), and then multiply everything out, using that $s_jr_{ij}\in\mathfrak{p}$ for all $i,j$ because $\mathfrak{p}$ is an ideal of $R$.
|
H: A simple circle problem
There is a big circle of radius 20 cm and a smaller circle of radius 5 cm, 100 cm away from it. Now imagine these two to be two tires connected by a chain. When the bigger one completes one rotation, how many rotations will the small one complete?
Any idea how to solve this?
AI: When the chain moves by some amount $s$ (measured somewhere between the two wheels) then
$$r_{\rm big}\cdot \phi_{\rm big}=s=r_{\rm small}\cdot \phi_{\rm small}\ ,$$
where $\phi_{\rm big}$ and $r_{\rm big}$ are the turning angle and the radius of the big wheel; and similarly for the small wheel. It follows that
$$\phi_{\rm small}={r_{\rm big}\over r_{\rm small}}\cdot \phi_{\rm big}=4\phi_{\rm big}\ .$$
Therefore the small wheel will complete four full turns when the big wheel completes one.
|
H: How to write formula for bracketed function
I am a Java programmer with little theoretical math experience. I've written a plugin for a program, part of which runs a "bracketed" formula (I don't know what else to call it). I have been asked to release the formula to an audience that may understand it better in math than in Java code. So I need to know how to write this formula in math notations.
Here's how the formula works. Suppose I have a certain number of Objects, and I want to derive a number of "Points" from them. I call the following rules "brackets".
For the first three objects, I want to count 1 point each.
For the next 10 objects, I want to count .5 points each.
For the next 30 objects, I want to count .1 points each.
So the results will be like this:
1 Object yields 1 Point. (1 x 1)
3 Objects yield 3 Points. (3 x 1)
4 Objects yield 3.5 Points. (3 x 1 + 1 x .5)
5 Objects yield 4 Points. (3 x 1 + 2 x .5)
13 Objects yield 8 Points (3 x 1 + 10 x .5)
16 Objects yield 8.3 Points (3 x 1 + 10 x .5 + 3 x .1)
43+ Objects yield 11 Points (3 x 1 + 10 x .5 + 30 x .1)
This is an example of how the brackets are set. The actual input in the program is customizable, and you can have more or less than three brackets.
How would I write this concept as a math equation or formula?
AI: A rather direct translation:
Let
$$r_i = \begin{cases}
1, &i\le 3 \\
\tfrac 1 2, & 4\le i\le 13 \\
\tfrac 1 {10}, & 14 \le i \le 43 \\
0, & 44\le i.
\end{cases}$$
Let
$$p_n = \sum_{i=1}^n r_i.$$
|
H: How to prove $f'(a)=0$?
Let $f:I\to\mathbb{R}^n$ be a differentiable function, where $I\subset\mathbb{R}$ is a interval.
For each $c\in \mathbb{R}^n$, define $X_c=\{x\in I;\;\;f(x)=c\}$.
The problem asks to show that if there exists $c\in \mathbb{R}^n$ such that $a\in I\cap \left (X_c\right)'$, then $f'(a)=0$.
I don't know how to start, so I would like hints. Can someone help me?
Thanks.
AI: Hint: $a \in X_c'$ means that there is a sequence of points $(a_k)$ in $X_c \setminus \{a\}$ with $\lim_{k \to \infty} a_k = a$. Now look at $$\lim_{k \to \infty} \frac{f(a_k)-f(a)}{a_k-a}$$
|
H: Solutions to $Ax=x$, where $x=(1,1,1,......1)$, for $A\in GL(n,\mathbb{C})$
I know that the set of solutions $A\in GL(n,\mathbb{C})$ satisfies $\det(A-I)=0 $.
I was wondering if there was an easy way to determine what subgroup, call it $H$, this is. How many conjugacy classes would this group have (up to conjugation by elements in $H$)? Thanks.
AI: As Ittay Weiss remarks in the comments, the condition $Ax = x$ does not imply that $\det A = \pm 1$. Your condition is equivalent to the condition that $(A-1)$ has $x$ in its kernel, and also that each row in your matrix sums to $1$.
Let $e_i$ denote the standard basis vector $(0,\dots,1,0,\dots)$ with a $1$ in the $i$th spot. Then $\{x,e_2,e_3,\dots\}$ is a basis for $\mathbb C^n$. Let $A'$ be the matrix for $A$ in this new basis. Then
$$ A' = \begin{pmatrix}
1 & * & * & \dots \\
0 & * & * & \dots \\
\vdots & * & \ddots & \\
0 & * & \dots & *
\end{pmatrix} $$
or, in block form, $A' = \bigl( \begin{smallmatrix} 1 & b \\ 0 & a \end{smallmatrix}\bigr)$ where $b\in \mathbb C^{n-1}$ and $a\in \mathrm{GL}(n-1,\mathbb C)$ are arbitrary.
Matrix multiplication for block matrices of this form is:
$$ \begin{pmatrix} 1 & b \\ 0 & a \end{pmatrix} \begin{pmatrix} 1 & b' \\ 0 & a' \end{pmatrix} = \begin{pmatrix} 1 & ba' + b' \\ 0 & aa' \end{pmatrix} $$
Thus the group $H$ fixing $x$ is precisely the semidirect product $\mathrm{GL}(n-1,\mathbb C) \ltimes \mathbb C^{n-1}$. (In the standard basis, the embedding $\mathrm{GL}(n-1,\mathbb C) \ltimes \mathbb C^{n-1} \hookrightarrow \mathrm{GL}(n,\mathbb C)$ for your particular $x$ requires conjugating by the change-of-basis matrix $\left(\begin{smallmatrix} 1 & 1 & \dots \\ 0 & 1 & 0 \\
0 & 0 & \ddots \end{smallmatrix}\right)$, or maybe the transpose of that.)
Conjugation by matrices of the form $\bigl( \begin{smallmatrix} 1 & 0 \\ 0 & a' \end{smallmatrix}\bigr)$ can put any element of $H$ into the form $\bigl( \begin{smallmatrix} 1 & b \\ 0 & j \end{smallmatrix}\bigr)$ where $j$ is a Jordan block matrix. Conjugating then by $\bigl( \begin{smallmatrix} 1 & b' \\ 0 & 1 \end{smallmatrix}\bigr)$ gets you to:
$$ \begin{pmatrix} 1 & b + (j-1)b' \\ 0 & j \end{pmatrix} $$
In particular, if $j-1$ is invertible, or more generally if $b$ is in the image of $j-1$, then we can take $b' = -(j-1)^{-1}b$, and thereby conjugate it away. The clean way to say this is that up to conjugation, we care only about the class of $b$ in the cokernel of $j-1$.
Since $j$ is in Jordan form, it's easy to read off the image and cokernel of $j-1$: on each block with eigenvalue not equal to $1$, $j-1$ is invertible; for each block with eigenvalue $1$, the image is vectors starting with $0$, and so the cokernel for that block is $1$-dimensional.
Thus we see that the classification of conjugacy classes in $H$ has the following form:
The Jordan decomposition of $\mathrm{GL}(n-1,\mathbb C)$, i.e. a partition of $n-1$ along with $\lambda \in \mathbb C^\times$ attached to each block, up to permutation.
For each block with $\lambda = 1$, a number $b \in \mathbb C$.
There is a little bit left to do to finish the description of the conjugation classes of $H$, but I won't do it. First, you should convince yourself that it's enough to consider conjugating first by matrices of form $\bigl(\begin{smallmatrix} 1 & 0 \\ 0 & a'\end{smallmatrix} \bigr)$ and then of form $\bigl(\begin{smallmatrix} 1 & b' \\ 0 & 1\end{smallmatrix} \bigr)$ as I did, so that I didn't miss some example when two claimed conjugation classes are the same. Second, you may want to think about degenerations, i.e. how are the conjugation classes glued together. Handling this second question really requires more advanced technology (algebraic varieties, stacks, ...), and wasn't in your original question.
|
H: Sum of a geometric series $\sum_0^\infty \frac{1}{2^{1+2n}}$
$$\sum_0^\infty \frac{1}{2^{1+2n}}$$
So maybe I have written the sequence incorrectly, but how do I apply the $\frac{1}{1 - r}$ formula for summing a geometric sequence to this? When I do it I get something over one, which is wrong because this is supposed to model a percentage of something.
AI: Hint:
write
$$
\sum_{n=0}^\infty \frac{1}{2^{1+2n}} = \frac12 \sum_{n=0}^\infty \frac{1}{2^{2n}} = \frac12 \sum_{n=0}^\infty \frac{1}{4^{n}}
$$
and use the closed-form formula for geometric series.
|
H: Centre of mass moves with constant velocity
The centre of mass of the Newton $n$-body problem is given by $$S=\frac{1}{M} \sum m_ix_i$$ with $M=\sum m_i$.
Show that it moves with constant velocity and hence has no acceleration.
I don't understand as if I differentiate, I'll surely just get $$S'=\frac{1}{M} \sum m_ix'_i$$ which is not constant...is it?
AI: This system is closed, that is, $\sum\overrightarrow{F}=\vec{0}=\sum m_i\ddot{x}_i=M\ddot{S}$, which implies that $\dot{S}$ is constant (and hence $\ddot S=0$).
|
H: For what variables these equalities are satisfied?
Assume:
$$
P \subseteq \{1,2,\dots,N\},\quad |P| = K, \qquad x \in \mathbb{R}_+^K , \qquad w = e^{-j\frac{2\pi}N}
$$
Consider:
$$
h_{P,X}(l) = \sum_{i=1}^K \sum_{j=1}^K x_ix_jw^{(p_i-p_j)l}
$$
Now suppose we want to find $(P,X)$'s that satisfy:
$$
h_{P,X}(1) = h_{P,X}(2)= \cdots = h_{P,X}(N-1)
$$
Do you have any suggestion ?
If for simplicity we manually set $x_1 = x_2 = \cdots = x_K = \text{const}$, the problem could be solved in this way:
$$
f_P(l) = \sum_{i=1}^K \sum_{j=1}^K w^{(p_i-p_j)l} = \sum_{d=0}^{N-1} a_d w^{ld}
$$
where $d = p_i-p_j \bmod N$ and $a_d$ is the number of occurrences of $d$. As you know, $a_0 = K$.
The last expression tells us that $f_P(l)$ is the DFT of the signal $a[d]=a_d$, $d=0,1,\cdots,N-1$.
Now suppose we want to find $P$'s that satisfy:
$$
f_P(1) = f_P(2)= \cdots = f_P(N-1)
$$
Also it is easy to show that: $f_P(0)=\sum_{d=0}^{N-1} a_d=K^2$.
This means that we have the DFT of $a_d$, so calculating $a_d$ is easy, and having $a_d$, we can find $P$.
AI: I don't have a complete solution right now, but I have an observation.
Note that $$\large h_{P,X}(l)=\sum_{i=1}^K\sum_{j=1}^Kx_ix_j w^{(p_i-p_j)l}=|f_{P,X}(l)|^2$$ where $$\large f_{P,X}(l)=\sum_{i=1}^Kx_i w^{p_il}$$ So $\large h_{P,X}(l)=c\ \forall\, 1\le l\le N-1 \Rightarrow |f_{P,X}(l)|=\sqrt{c}\ \forall\, 1\le l\le N-1,\ c>0.$
|
H: Show that $f(x) = x^p -x -1 \in \Bbb{F}_p[x]$ is irreducible over $\Bbb{F}_p$ for every $p$.
Let $p$ be a prime.
a) Show that $f$ has no roots in $\Bbb{F}_p$.
Let $\Bbb{F}_p^*$ be the multiplicative group of $\Bbb{F}_p$. Then, by Lagrange's theorem, for all nonzero $\alpha \in \Bbb{F}_p$, $\alpha^{p-1} = 1 \implies \alpha^p=\alpha \implies \alpha^p-\alpha=0$. Of course $0^p=0$, so this is true for all elements of $\Bbb{F}_p$ and not just the nonzero ones. But then $\alpha^p - \alpha - 1 = -1$ for all $\alpha \in \Bbb{F}_p$ and so $f$ must have no roots in $\Bbb{F}_p$. I could have also done this using the Frobenius automorphism, right?
b) Let $\alpha$ be a root of $f$ (in some algebraic closure of $\Bbb{F}_p$). Show that $\alpha + s$ is also a root for all $s \in \Bbb{F}_p$.
Let $\alpha^p - \alpha -1 =0$. Let $E$ be an algebraic closure of $\Bbb{F}_p$. Since $E$ has characteristic $p$, $(\alpha + s)^p = \alpha^p + s^p$. So we have,
$$(\alpha + s)^p - (\alpha + s) -1 = \alpha^p + s^p - \alpha -s -1 = s^p - s = 0.$$
c) Conclude that $f$ is irreducible over $\Bbb{F}_p$, for every $p$.
By b) and the fact that $\Bbb{F}_p$ has $p$ distinct elements, we know that the roots of $f$ are $\alpha, \alpha+1, ... , \alpha + p-1$. So if $K$ is a splitting field, we have $$x^p - x -1 = (x-\alpha)(x-(\alpha+1))...(x-(\alpha+p-1)).$$
Now let's assume that $f$ is reducible over $\Bbb{F}_p$. Then $f=gh$ for some $g$ and $h$ with degrees less than that of $f$. So $g$ and $h$ must be of the form $(x-(\alpha+s_1))...(x-(\alpha+s_k))$ where k is less than n. Let's say that g has degree 2, because the other cases are similar.
So $g =(x-(\alpha + s_i))(x-(\alpha + s_j))$ and the constant term for $g$ is,
$$(\alpha + s_i)(\alpha + s_j) = \alpha^2 + s_is_j\alpha + s_is_j.$$
Since $\Bbb{F}_p$ is a field, if $\alpha s_is_j$ is an element of $\Bbb{F}_p$, then so is $\left(\frac{1}{s_is_j}\right)(\alpha s_is_j) = \alpha$, a contradiction.
We can show by induction that if we multiply $(x-(\alpha+s_1))...(x-(\alpha+s_k)$, we get a term that looks like $s_1s_2...s_k\alpha$. So this is also true for $k>2$.
Do you think that my answer is correct?
Thank you in advance
AI: I like your idea very much, but I didn't understand how you managed to ignore the $\alpha^2$ term from your constant term.
I have written up an answer to this question with a similar idea, where I look at the next to highest degree term of $g(x)$ (=the term of degree $k-1$). In your degree two example, this would be the linear term. Its coefficient is
$2\alpha+(s_i+s_j).$ As $s_i,s_j$ are in the prime field, and $2$ is invertible, we can, as in your argument, conclude that $\alpha\in\mathbb{F}_p$, which is a contradiction.
By studying the lowest degree term, you get a lot of clutter from powers of $\alpha$. The degree $k-1$ term is IMHO easier to manage.
|
H: Clarification regarding a group theory proof
In a group we have $abc = cba$. If $c \neq 1$, is the group abelian? See the following link.
(I am new to this site but it is my understanding that you cannot PM authors, correct? Which is too bad because it means I have to open this thread)
In regards to Math Gem's answer, from the comments I take it he has omitted some of the obvious statements; but I wonder, if one was to give every required statement, what would have to be included?
In the proof he requires $c = ba$ which surely is only a small (equal to the order of the group times the number of $c \neq 1$) section of a group. It must hold for all $a,b, c \neq 1$ so the total amount of combinations you would have to verify (say A is the group) would be $\mathrm{T} = \#A \cdot \#A \cdot (\#A-1)$. Say $\#A = x$. Since $c = ba$ is only valid for $x$ distinct couples $(a,b)$ (because of the "latin square property", $c = ba$ "misses" $\mathrm{T}-(x-1)\cdot\#A$ combinations, right?
Taking $Z_4,+$ and $2$ for example, only $2 = 2 + 0$, $2 = 0 + 2$, $2 = 1 + 3$ and $2 = 3 + 1$ so e.g. (4,4,4) and (1,2,4) are invalid for the proof given, yet we do have to take them into account. From the comments, if $ab = 1$ then clearly $ab = ba$ (because $ab = 1 \implies a = b^{-1} \implies 1 = ba$) but then there's still e.g. (4,4,4) and (1,2,4). So the trivial (4,4,4) would have to be mentioned, as well when either $a$ or $b$ is equal to $1$. From inspection of $Z_4,+$ I see that we now have all possible combinations, but should it not be proven? I.e. how do you know there do not exist $a,b, c$ such that $ba \neq c$ and $a,b \neq 1$ and yet $ab \neq ba$?
For $Z_4,+$ you can sort of say: the order is 4, so you have to check $4\cdot4\cdot3$ combinations. The proof holds for $c = ba$ and since there 3 elements in $Z_4,+$ that are not equal to $0$, you substract $4\cdot3$. Then you remove (1,b,c) and (a,1,c) so you substract $4\cdot3$ twice. Then for $ab = 1$ you substract $4\cdot3$ again and you are left with $0$. So yeah, it's true for $Z_4$, but it's not exactly a real proof. Maybe there is some peculiar group out there for which there are more elements left?
I guess you would just generalize it to some order $x$ but then I don't get $0$ like I did above.
So my question is how do you know you have all possible combinations $a,b, c, \neq 1$ when basically your proof consists of checking different cases.
(I am a 1st year undergraduate math student, should it matter)
AI: He doesn't require $c=ba$; rather (and this is justified), he uses a special case of the given condition.
What is needed to show is
$$\tag1 \forall x\forall y\colon xy=yx$$
and we are given
$$\tag2\forall a\ne1\forall b\ne1\forall c\ne1\colon abc=cba$$
We do not have to check these $(n-1)^3$ equalities (2); we are given that they hold. If we consider some of the instances superfluous, we are free to simply ignore them.
On the contrary, we have to show the $n^2$ equalities (1).
Notably, the case $x=1$ in (1) holds for any group by the properties of the neutral element. Hence we may assume that $x\ne 1$.
Likewise, (1) trivially holds when $y=1$, hence we may assume that $y\ne 1$.
And finally each element commutes with its inverse, hence the case $y=x^{-1}$ is also automatically true in (1).
So we can assume $x\ne1$, $y\ne 1$, $yx\ne 1$. Thus allows us to specialize (2) to (letting $a=x, b=y, c=yx$)
$$xyyx=yxyx$$
which implies (by cancelling from the right) $$xy=yx$$ as was to be shown.
|
H: An ideal in the ring of infinitely differentiable functions
Well, I was just doing an elementary exercise, but am a tad bit skeptical about how I've gone about it. It goes as follows:
Let $R$ be the ring of infinitely differentiable functions defined on, say, the open interval $-1 < t <1$.
Let $J_n$ be the set of functions $f \in R$ such that $D^k f (0) = 0$ for all $ 0 \le k \le n$.
Here, $D$ is the differential operator. So, $J_n$ is the set of functions whose derivatives up to order $n$ vanish at $0$.
Show that $J_n$ is an ideal in $R$.
Now, what I did was:
Let $g \in R, f \in J_n$
Consider $D^k (g(f(t))) = D^{k-1} \big((Dg)(f(t))\cdot(Df)(t)\big)$, from the chain rule.
Now, as $Df(t) = 0$, we have $D^k (g(f(x))) = 0$ and $gf \in J_n$. Following which we can conclude that $J_n$ is an ideal.
Now, my question is: was I justified in taking composition rather than multiplication to be the product rule in the ring of functions? And, also, what about the case, $k=0$?
How would I deal with this, instead?
AI: I guess it should be clear that the addition in $R$ is that of pointwise addition of functions. Is composition of functions a suitable choice for multiplication?
This would require, among other things, that $h\circ(f+g)=h\circ f+h\circ g$, i.e. $h(f(x)+g(x))=h(f(x))+h(g(x))$ for all $f,g,h,x$. You can show by simple counterexamples that this does not hold in general (it holds for linear functions). Rather, the multiplication in $R$ is understood to be pointwise as well, i.e. $(gf)(x)=g(x)\cdot f(x)$.
|
H: How to split the rent if two roommates live there from the beginning and a third one joins in the middle of the month?
I've been thinking over it and I can't figure it out.
Consider the rent of the house is $X$. Now there are two roommates from the beginning and a third one joins in the middle of the month. Now how is the rent split?
Solution 1:
half month = $X/2$
1 share in first half $= X/2/2=X/4$
1 share in second half $= X/2/3=X/6$
Total from each person $= (X/4+X/6) + (X/4+X/6) + X/6$ which is
$5X/12 +5X/12 + X/6$
Solution 2:
Person A and Person B stayed = 30 days.
Person C stayed = 15 days.
Total days $= 30+ 30 + 15 = 75$
Therefore, each share would be $= 2X/5 + 2X/5 + X/5$
Both solutions seem to be correct, yet yield different shares. Can someone tell me the difference between the two and which one would be correct?
AI: The first solution assumes that the paid rent should be proportional to both the time spent and the area consumed. The second solution assumes that only the time should be considered. I suppose the first one is more realistic, as the utility one person draws from the house can be assumed to be bigger if more personal space is available (whether this is the case to full extent is a different question; for example, there is less social interaction possible in an empty house). For some aspects the second method seems to be more appropriate, for example if the rent is not fixed per month (for the house as a whole) but has consumption-dependent parts (e.g. water and electricity: three people may be assumed to consume more, for simplicity in a proportional way, though that is also not absolutely true).
If we assume that the contract with the landlord fixes a rent that is the same every month independent of the number of inhabitants, then only method one is valid; just imagine the landlord would drop by every day to cash in $X/30$.
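For concreteness, both rules side by side (a sketch; $X=1200$ is a made-up monthly rent):

X = 1200
solution_1 = (5*X/12, 5*X/12, X/6)   # proportional to time and space
solution_2 = (2*X/5, 2*X/5, X/5)     # proportional to person-days only
print(solution_1, sum(solution_1))   # (500.0, 500.0, 200.0) 1200.0
print(solution_2, sum(solution_2))   # (480.0, 480.0, 240.0) 1200.0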
|
H: Calculating the area of a triangle
Consider the circle of radius $1$ with center at $x=1$, $y=1$. Let $p$ be the point on the circle closest to the origin. Suppose that $p$ is the centroid of a triangle with vertices at $(0,1)$, $(2,0)$ and some point $(x,y)$.
My question is: is there any way to calculate the area of the triangle without calculating $(x,y)$, or at least can we give a good approximation for the area?
Thank you
AI: Yes, there is: first, the circle is $\;(x-1)^2+(y-1)^2=1\;$ and thus the point on it closest to the origin is on the line $\,y=x\;$.
Either by substitution or using directly Lagrange's multipliers, get that
$$P=\left(1-\frac1{\sqrt2}\;,\;1-\frac1{\sqrt2}\right)$$
Since a triangle's centroid is just the triangle's three median's intersection point, the area of the triangle is the length of any side times its distance to the centroid ($\;P\;$) times $\,1.5\;$, so:
(i) Find the equation of the line joining $\,(0,1)\;,\;(2,0)\;$ and the length of this segment
(ii) Calculate the distance from $\;P\;$ to the line above using the formula in (i)
(iii) Multiply the length in (i) by the distance in (ii) and all this times $\,1.5\,$ ...
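A quick numeric check of this recipe (a sketch of mine; the cross-check recovers the third vertex from the centroid and applies the shoelace formula):

```python
import math

px = py = 1 - 1 / math.sqrt(2)   # the point P on the circle, lying on y = x

# (i) side joining (0,1) and (2,0): length sqrt(5), line x + 2y - 2 = 0
side = math.hypot(2 - 0, 0 - 1)
# (ii) distance from P to that line
dist = abs(px + 2 * py - 2) / math.sqrt(5)
# (iii) area = 1.5 * side length * distance
print(1.5 * side * dist)                      # ~1.682

# cross-check: the centroid is the average of the vertices, so the third
# vertex is (3*px - 2, 3*py - 1); then use the shoelace formula
x3, y3 = 3 * px - 2, 3 * py - 1
print(0.5 * abs(0 * (0 - y3) + 2 * (y3 - 1) + x3 * (1 - 0)))  # same ~1.682
```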
|
H: Finding an $n$ so the sequence $\left\{\frac{1}{n}\right\}_{n = 1}^\infty$ satisfies $|a_n| < 10^{-4}$
How to find $n$ so that $\left\{\frac{1}{n}\right\}_{n = 1}^\infty$ satisfies
$$|a_n| < 10^{-4}$$
I can't find this formula in my book anywhere. It seems like it would be very time consuming to just plug in numbers because I have way more to do than just this one. How do I do this before next Monday?
AI: What about any $n$ such that $n>10^4$? Then $$\frac 1n <10^{-4}$$
|
H: Reference Request for The Study of Abelian Groups
So I finished Lang's Algebra and after reading this partial Structure Theorem for abelian torsion groups that are not finitely generated , I've gotten interested in abelian groups, in particular infinite abelian groups and structure theorems. Can anyone recommend a book that highlights these topics?
Thanks!
AI: Though I haven't read it myself, Kaplansky's book Infinite Abelian Groups seems like it is just right for you.
|
H: Factor groups of matrices
Let $G=GL(2,\mathbb R)\oplus GL(2,\mathbb R)$ and let $H=\{(A,B)\in G\mid \det(A)=\det(B)\}$. Prove that $G/H \simeq (\mathbb R^*,\times)$.
I'm guessing I should use: "Let $G$ be a group and let $H$ be a normal subgroup of $G$. The set $G/H = \{aH\mid a \in G\}$ is a group under the operation $(aH)(bH)=abH$."
AI: Do you know the first isomorphism theorem? Apply that to the following group homomorphism:
$$\varphi:G\to \mathbb R^*\\(A,B)\mapsto\frac{\det A}{\det B}$$
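Spelling out the standard checks: $\varphi$ is a homomorphism, it is surjective, and its kernel is exactly $H$, since
$$\varphi\big((A,B)(A',B')\big)=\frac{\det(AA')}{\det(BB')}=\frac{\det A\,\det A'}{\det B\,\det B'}=\varphi(A,B)\,\varphi(A',B'),$$
$$\varphi\left(\begin{pmatrix}r&0\\0&1\end{pmatrix},I\right)=r\quad\text{for every }r\in\mathbb R^*,\qquad\ker\varphi=\{(A,B)\in G:\det A=\det B\}=H.$$
The first isomorphism theorem then gives $G/H\simeq\mathbb R^*$.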
|
H: SPECGRAM return value
I was studying this code:
fm = 8000;
dt = 1/fm; % dt=0.000125
t = [1:dt:5];
y = sin(2*pi*200*t);
tw = 0.05;
ws = 2 .^ round( log2( tw*fm ) ); % ws=512
o = ws/2; % o=256
w = hanning(ws);
[ X, f, tj ] = specgram( y, ws, fm, w, o );
What X represents is an array of spectra, one per "tw" window on the signal. When I call:
plot(f,abs(X));
Matlab gives me the plot of one spectrum. Is that spectrum the summation of all the spectra of the signal?
Thanks in advance!
AI: All of your spectra are there. They're just overlapped. Try zooming in. Or type whos and look at the size of X. If you want a spectral image plot, you need to call the function with no outputs:
specgram(y, ws, fm, w, o);
which generates a spectrogram image in a figure window.
Alternatively, you can manually plot the spectrum image using surf:
[X, f, tj] = specgram(y, ws, fm, w, o);
surf(tj,f,10*log10(abs(X)),'EdgeColor','none');
axis xy;
axis tight;
colormap(jet(256)); % 256 colors instead of default 64 (why?!) reduces blotchiness
view(0,90);
grid off;
xlabel('Time');
ylabel('Frequency (Hz)');
Also, FYI, the help for specgram in Matlab R2012b states:
specgram has been replaced by SPECTROGRAM. specgram still works but
may be removed in the future. Use SPECTROGRAM instead. Type help
SPECTROGRAM for details.
You'll need to change the order of your inputs to use spectrogram, I think, assuming you have it in your version. Try:
[X, f, tj] = spectrogram(y, w, o, ws, fm);
|
H: $\lim_{(x,y)\to (0,0)} \frac{x^m y^n}{x^2 + y^2}$ exists iff $m+ n > 2$
I would like to prove that, given $m,n \in \mathbb{Z}^+$, $$\lim_{(x,y)\to (0,0)}\frac{x^ny^m}{x^2 + y^2} \text{ exists} \iff m+n>2.$$
(My gut tells me this should hold for $m,n \in \mathbb{R}^{>0}$ as well.) The ($\Rightarrow$) direction is pretty easy to show by contrapositive, using familiar limit tricks. The ($\Leftarrow$) direction is giving me more trouble.
So far my strategy has been to use the arithmetic mean:
$$\left| \frac {x^n y^m} {x^2 + y^2} \right| \leq \frac {x ^ {2n} + y ^ {2m}} {2 ( x^2 + y^2 ) } \leq \frac {x^ {2 (n-1)}} {2} + \frac { y^ {2 (m-1)} } {2},$$
but that's not helping if $m=1$, say.
Any ideas?
Apologies if this is a repeat...I couldn't find this on the site.
AI: If $m+n>2$, you can divide into two cases, by observing that you can't have $m<2$ and $n<2$.
First case: $m\ge2$.
$$
\lim_{(x,y)\to (0,0)}\frac{x^ny^m}{x^2 + y^2}
=
\lim_{(x,y)\to (0,0)}\frac{y^2}{x^2 + y^2}x^ny^{m-2}
$$
where $n\ge1$ or $m-2\ge1$. The fraction is bounded, while the other factor tends to zero.
Similarly for $n\ge2$.
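Explicitly, in the first case the squeeze is
$$\left|\frac{x^ny^m}{x^2+y^2}\right|=\frac{y^2}{x^2+y^2}\,|x|^n\,|y|^{m-2}\le |x|^n\,|y|^{m-2}\xrightarrow[(x,y)\to(0,0)]{}0,$$
since $\frac{y^2}{x^2+y^2}\le 1$ and the exponents $n$ and $m-2$ are not both zero.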
|
H: Infinite series convergence value problem
Is it possible to find the value of the convergent series $\sum\limits_{x=0}^{\infty}(x+3) \cdot a^x$, where $a$ is a constant with $|a| < 1$?
I thought about expanding:
$\sum\limits_{x=0}^{\infty}(x+3) \cdot a^x =\sum\limits_{x=0}^{\infty}x \cdot a^x + 3 \sum\limits_{x=0}^{\infty} a^x$
and the second term is just $\cfrac{3}{1-a}$, but I don't know what to do with the first term. Is there a general method to solve these types of problems? I know there are many convergence tests, but I am not sure about methods to actually find the value a series converges to. Thanks in advance.
AI: $$\sum_{x=0}^{\infty} a^x = \dfrac1{1-a}$$
$$\sum_{x=0}^{\infty} xa^x = \sum_{x=0}^{\infty} a (xa^{x-1}) = \sum_{x=0}^{\infty} a\dfrac{d}{da}\left(a^x\right) = a\dfrac{d}{da}\left(\sum_{x=0}^{\infty} a^x\right) = a \dfrac{d}{da}\left(\dfrac1{1-a}\right) = \dfrac{a}{(1-a)^2}$$
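As a quick numeric sanity check (a small sketch using sympy; the closed form is assembled from the two sums above):

```python
import sympy as sp

a = sp.symbols('a')

# closed form from the two sums above: 3/(1-a) + a/(1-a)^2
closed = 3 / (1 - a) + a / (1 - a)**2

# compare against a truncated partial sum at a = 1/2
partial = sum((k + 3) * 0.5**k for k in range(200))
print(partial, closed.subs(a, sp.Rational(1, 2)))  # ~8.0 and exactly 8
```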
|
H: Determining if a Linear Transformation is Surjective
I am aware that to check if a linear transformation is injective, then we must simply check if the kernel of that linear transformation is the zero subspace or not. If the kernel is the zero subspace, then the linear transformation is indeed injective.
Is there a similar way to check for surjectivity?
AI: Let $T : V \rightarrow W$ be a linear map and $T^* : W^* \rightarrow V^*$ the map induced on the dual space. Then $T$ is surjective if and only if $T^*$ is injective if and only if the kernel of $T^*$ is the zero subspace.
I think that's about as similar as you can get.
|
H: injection $\mathbb{N}\times\mathbb{N}\to\mathbb{N}$
Today a friend of mine told me a nice fact, but we couldn't prove it. The fact is that there is an injection $\mathbb{N}\times\mathbb{N}\to\mathbb{N}$ defined by the fomula $(m,n)\mapsto (m+n)^{\max\{m,n\}}$, where $\mathbb{N}$ denotes the natural numbers.
How to prove that this map is injective? It should be elementary. We might be overlooking something trivial.
Thanks!
Edit: As was pointed out, it is not an injection, for easy reasons. Thanks a lot! I was just overcomplicating things. But what if we restrict the map to the set of pairs $(m,n)$ such that $m>n$?
AI: It is not an injection since $m+n=n+m$ and $\max(m,n)=\max(n,m)$.
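Concretely, $f(1,2)=(1+2)^{\max\{1,2\}}=3^2=9=(2+1)^{\max\{2,1\}}=f(2,1)$.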
|
H: What is this Weierstrass' proof of uniqueness of $\mathbb{R}$ and $\mathbb{C}$ algebras?
I'm reading Derbyshire's Unknown Quantity.
It's an interesting exercise to enumerate and classify all possible algebras. Your results will depend on what you are willing to allow. The narrowest case is that of commutative, associative, finite-dimensional algebras over (that is, having their scalars taken from) the field of real numbers $\mathbb{R}$ and with no divisors of zero. There are just two such algebras: $\mathbb{R}$ and $\mathbb{C}$, a thing proved by Weierstrass in 1864.
What is this proof? I've googled Weierstrass algebra proof but found mainly the Stone-Weierstrass Theorem, which I'm not sure if this is the proof.
AI: I think the author meant Frobenius's classification of real division algebras (see for example Why are the only division algebras over the real numbers the real numbers, the complex numbers, and the quaternions?), which is a stronger result: if you additionally require commutativity, you lose the quaternions and are left with $\mathbb{R}$ and $\mathbb{C}$.
|
H: Clarifying a proof of $\limsup (a_n+b_n) \le \limsup a_n + \limsup b_n$
Could you help me understand the solution below?
"otherwise we clearly have the equality" -> why? It's not clear to me. :(
"The inequality is trivially satisfied" -> why? even if the right side is +infinity, what if the left side is also +infinity?
"{bn} is bounded" -> I know there is a upper limit but I'm not sure if bn has a lower limit.
"one sees that lim sup(an+bn) = minus inifity" -> how is this so?
"Let y in R be such that ank + bnk -> y for some sequence" -> How do we suppose such y exists? what if there is no subsequence that converges?
6.I'm not sure the equality between two different limits is made.
Thank you so much in advance!
AI: If the right hand side is $\infty + \infty$, then it is actually infinite; and infinity bounds any real number or $\pm \infty$ from above, so the inequality holds. It is not true, though, that you will obtain equality in this case: just think of $a_n = (-1)^n n$ and $b_n = (-1)^{n+1} n$. Then $a_n + b_n = 0$ for every $n$, even though $\limsup_{n \to \infty} a_n = \limsup_{n \to \infty} b_n = +\infty$. If the right hand side is $-\infty - \infty$, then both $a_n$ and $b_n$ tend to $-\infty$, hence $\limsup_{n \to \infty} (a_n+b_n) = -\infty$ and the inequality also holds.
If only one of the limit superiors has the value $+\infty$, we can suppose without loss of generality that it is the one of $a_n$. So assume $\limsup_{n \to \infty} a_n = +\infty$ and $\limsup_{n \to \infty} b_n = b \in [-\infty,+\infty[$. We can exclude the case $b = -\infty$, since it was already treated above. So assume $b \in ]-\infty, \infty[$. Then since
$$
\limsup_{n \to \infty} a_n + \limsup_{n \to \infty} b_n = \infty + b = \infty,
$$
infinity always being an upper bound, ''the inequality is trivially satisfied'' (that is what they mean by trivial).
Now consider the case $\limsup_{n \to \infty} a_n = -\infty$ (which forces $a_n \to -\infty$) and $\limsup_{n \to \infty} b_n = b \in ]-\infty,\infty[$. By the definition of the limit superior,
$$
\limsup_{n \to \infty} b_n = \lim_{n \to \infty} \sup_{k \ge n} b_k = b,
$$
hence for every $\varepsilon > 0$, there exists $N$ such that for all $n \ge N$, $\sup_{k \ge n} b_k < b + \varepsilon$. Let us take $\varepsilon = 1$ (I'm just fixing some number here, $1$ doesn't matter much) so that for some particular $n_0$,
$$
\sup_{k \ge n_0} b_k \le b+1, \quad \Longrightarrow \quad b_k \le \max \{b_0,b_1,\dots,b_{n_0-1}, b+1 \} \overset{def}= C.
$$
and the sequence $\{b_n\}$ is bounded above by this weirdo bound $C$.
Therefore, since $a_n \to -\infty$, we have
$$
\limsup_{n \to \infty} (a_n + b_n) \le \limsup_{n \to \infty} a_n + C = -\infty + C = -\infty
$$
and $-\infty$ is always bounded above by every real number and $\pm \infty$, so the inequality holds.
Recall that by Bolzano-Weierstrass's theorem, any bounded sequence has a convergent subsequence. If $\limsup_{n \to \infty} (a_n + b_n) = y \in ]-\infty,\infty[$, to show existence of a convergent subsequence, it thus suffices to show the existence of a bounded subsequence, from which we will extract a convergent sub-subsequence using Bolzano. Using the definition
$$
\limsup_{n \to \infty} (a_n+b_n) = \lim_{n \to \infty} \sup_{k \ge n} (a_k + b_k) = y,
$$
since the limit is $y$, we can find $n_1$ such that $\sup_{k \ge n_1} (a_k + b_k) \ge y-1$, hence by definition of the supremum there is also $k_1 > n_1$ such that $a_{k_1} + b_{k_1} \ge y-2$. We can also find $n_2 > k_1$ such that $\sup_{k \ge n_2} (a_k + b_k) \ge y-1/2$, hence we can find $k_2 > n_2 > k_1$ such that $a_{k_2} + b_{k_2} \ge (y-1/2)-1/2 = y-1$. Continuing in this fashion we can construct a subsequence $a_{k_j} + b_{k_j}$ such that $a_{k_j} + b_{k_j} \ge y-2^{2-j}$, and since
$$
y - 2^{2-j} \le a_{k_j} + b_{k_j} \le \sup_{\ell \ge k_j} (a_{\ell} + b_{\ell}) \to y,
$$
by a sandwich argument, the sequence in the middle converges to $y$, which proves existence.
Feel free to ask about any part of my explanation which is still vague to you. I tried to give more details, there are plenty of ways to do this, I just explained the one that came to mind.
Hope that helps,
|
H: Find force required for a launch between two points
Let me start by saying this is within a game environment, so gravity isn't 9.81m/s^2, and the unit of measure for distance will be "blocks".
I'm attempting to find the amount of force needed in order to launch a player into the air, from point A to point B. For my example, I assumed that it was in a straight line of 60 blocks away, and attempted to work from there. Knowing that velocity is the derivative of distance, and acceleration the derivative of velocity, I mustered together those variables, along with others.
In the game:
Gravity pulls at 13 blocks/second
Acceleration to the ground is at 22.48 blocks/second^2
A block is a distance of 1 meter.
I have a general mockup of attempting to start the problem, but to be honest I'm completely stumped on where to take it. I need to find the initial velocity to launch at, as well as what angle to do it at.
General mockup: https://i.stack.imgur.com/Vl6x4.jpg
AI: The force isn't constant, so that approach won't get you far. Acceleration to the ground, however is constant. What you need to calculate is just the initial velocity that will keep you in the air long enough to propel you to your target. This is just a classic projectile motion problem.
If $g$ is the acceleration due to gravity -- in whatever units -- you have that your acceleration downward is $g$. Then, your height (assuming you start at the ground $h=0$) as a function of time is $h=v_y t -\frac{1}{2}g t^2$ where $v_y$ is your velocity vertically. Next, you have your distance forward, $x$, given by $x = v_x t - \frac{1}{2} a t^2$ where $v_x$ is your velocity horizontally and $a$ is your horizontal acceleration. (I'm not sure from the problem statement if that's what was meant by 'gravity pulls' or not.) So set $x$ to 60, $a$ to whatever $a$ should be (0?) and solve for $t$. Then, plug this back into the equation for $h$ and solve for $v_y$. Since this is a game, you can presumably play with values of $t$ and $v_x$ to get the results you want.
If you know the angle of the jump, $v_x = v_{total} \cos(\theta)$ and $v_y = v_{total} \sin(\theta)$ where $\theta$ is the angle from the horizontal.
If you solve these two equations you'll have your answer for your initial and final velocities. If, however, you need more than that, meaning the actual force that has to be exerted while in contact with the ground, you probably want to consider the impulse. This would be the change in momentum. (mass)(change in velocity). Dividing this by the time in contact with the ground gives you an average force that would change your momentum accordingly. Also, for force $F=\frac{d p}{dt} \sim \frac{\Delta p}{\Delta t}$ where $p$ is your momentum.
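For concreteness, here is a small Python sketch of the first part (assuming zero horizontal acceleration and taking the 22.48 blocks/s² figure as the constant downward acceleration; the function name is made up). It uses the range formula $R = v^2\sin(2\theta)/g$, which follows from the two equations above:

```python
import math

g = 22.48   # downward acceleration, blocks/s^2 (from the question)
R = 60.0    # horizontal distance to the target, in blocks

def launch_velocity(distance, angle_deg, gravity=g):
    """Initial speed and components needed to land `distance` blocks away
    at the given launch angle (flat ground, no horizontal acceleration)."""
    theta = math.radians(angle_deg)
    v = math.sqrt(distance * gravity / math.sin(2 * theta))
    return v, v * math.cos(theta), v * math.sin(theta)

v, vx, vy = launch_velocity(R, 45)  # 45 degrees maximizes range for a given speed
t = 2 * vy / g                      # time of flight: h(t) = 0 again at t = 2*v_y/g
print(f"speed={v:.2f} blocks/s, vx={vx:.2f}, vy={vy:.2f}, airborne {t:.2f}s")
```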
|
H: When does an analytic function grow faster than a polynomial?
Suppose $f$ is an analytic function with power series expansion $f(z)=\sum_{n=0}^{\infty} a_nz^n$, and $p = \sum_{n=0}^{d}b_nz^n$ is a polynomial. If $f$ is a polynomial of degree larger than $d$, then $|f|$ grows faster than $|p|$, but the situation is not so clear when the expansion of $f$ has infinitely many nonzero coefficients. I would expect the growth of the function $f$ then to be faster than that of $p$, as with the function $e^z = \sum_{n=0}^{\infty}\frac{z^n}{n!}$. However the function $\frac{1}{1-z} = \sum_{n=0}^{\infty}z^n$ also has infinitely many nonzero coefficients and grows slower than any polynomial (as $|z|\to\infty$). I realize this is related to the failure of the power series to converge outside a disk of radius $1$. Also, $\log(z)$ grows slower than any polynomial, but any power series representation cannot converge on an infinite radius (The function itself cannot be well-defined everywhere in the complex plane simultaneously).
Under what conditions can we say that a power series with infinitely many nonzero coefficients represents a function that grows faster than any polynomial? Is this true for any power series with infinite radius of convergence? Are there such power series which grow at the rate $z^\alpha$, for any $\alpha\in(0,\infty)$?
I have in mind the case where $f$ is complex-analytic, but I would also be interested to hear about the case where $f$ is real-analytic, if the cases differ.
AI: Suppose a non-polynomial function $f$ has a power series
$$
f(z) = \sum_{n=0}^{\infty} a_n z^n
$$
which converges on all of $\mathbb{C}$. Then for each integer $n \geq 0$ and all $r > 0$ we have
$$
\begin{align}
|a_n| &= \left|\frac{f^{(n)}(0)}{n!}\right| \\
&= \left| \frac{1}{2\pi} \int_{|z| = r} \frac{f(z)}{z^{n+1}}\,dz \right| \\
&\leq \frac{M(r)}{r^n}
\end{align}
$$
by Cauchy's integral formula, where $M(r) = \max\limits_{|z| = r} |f(z)|$. Indeed, if $M(r) \le C r^d$ for all large $r$, then letting $r \to \infty$ in the estimate above gives $a_n = 0$ for every $n > d$, so $f$ would be a polynomial. Since there are infinitely many nonzero coefficients $a_n$, we may conclude that $M(r)$ grows faster than any polynomial as $r \to \infty$.
|
H: an injection into $\mathbb{N}$
Is it true that the map $f\colon \{(m,n)\in\mathbb N^2:m\le n\}\to\mathbb N$ defined by $(m,n)\mapsto (m+n)^{\max\{m,n\}}$ is an injection? If it is, how can one prove it? I have asked a similar question, but it turned out to be very easy. My original struggle in the previous question was that I assumed $m\leq n$ instead of checking easy reasons for the map not to be injective.
Thanks!
AI: Suppose $f(m,n)=f(m',n')$, i.e. $(m+n)^n=(m'+n')^{n'}$. If $n=n'$ we get $m'=m$, so assume without loss of generality that $n<n'$. If $m+n\le m'+n'$, then $$(m+n)^{\max(m,n)}=(m+n)^n< (m+n)^{n'}\le (m'+n')^{n'}=(m'+n')^{\max(m',n')}$$
so $f(m,n)\neq f(m',n')$. Hence we may conclude $m+n>m'+n'$.
Write $a=m+n, b=m'+n'$, with $a^{n}=b^{n'}$, an integer assumed bigger than 1. For any prime $p$ dividing both $a,b$ we have $n\nu_p(a)=n'\nu_p(b)$, so $\nu_p(a)>\nu_p(b)$. Hence $b|a$, and since they are unequal $a\ge 2b$ or $$m+n\ge 2(m'+n')$$
On the other hand, since $m\le n<n'$ and $m'\ge 0$, we have $$m+n\le 2n<2n'\le 2(m'+n'),$$ contradicting the inequality above. Hence $f(m,n)=f(m',n')$ forces $(m,n)=(m',n')$, and $f$ is injective.
Edit: $\nu_p(x)$ denotes the valuation, i.e. the maximum number of $p$'s that divide integer $x$.
|