H: Why in differential geometry tensors are usually defined as multilinear maps?
In multilinear algebra books tensors are usually defined through the universal property. Given a family of $k$ vector spaces $V_1,\dots,V_k$ over the same field $F$ we want to construct a space $S$ and a map $T:V_1\times\cdots\times V_k\to S$ such that for every vector space $W$ and multilinear $f:V_1\times\cdots\times V_k\to W$ we have a unique linear $g:S\to W$ such that $f = g\circ T$.
We then put $S=F(V_1\times\cdots\times V_k)/S_0$ where $S_0$ is generated by all elements of $F(V_1\times\cdots\times V_k)$ with the form
$$(v_1,\dots,v_i'+v_i'',\dots,v_k)-(v_1,\dots,v_i',\dots,v_k)-(v_1,\dots,v_i'',\dots,v_k)$$
$$(v_1,\dots,kv_i,\dots,v_k)-k(v_1,\dots,v_i,\dots,v_k).$$
Then we write $S=V_1\otimes\cdots \otimes V_k$ and denote $T(v_1,\dots,v_k)=v_1\otimes \cdots \otimes v_k$. This is a good definition, motivated by an algebraic problem.
In differential geometry books, however, tensors on a vector space $V$ are defined as multilinear functions from $V^k$ to $\Bbb R$. The motivation obviously is that we are building objects that, given $k$ directions, produce a number, linearly in each direction. They usually denote by $T_k(V)$ the space of such tensors.
Now, the space $T_k(V)$ is isomorphic to $V^{\otimes k}$, so my doubt is not that. My doubt is: is there some benefit of working in differential geometry with the spaces $T_k(V)$ rather than $V^{\otimes k}$? From the geometrical point of view, is there some difference between working with $T_k(V)$ and $V^{\otimes k}$?
There's one similar question here but I didn't find an answer there.
Thanks very much in advance!
AI: The space $T^k(V)$ is not canonically isomorphic to $V^{\otimes k}$ but to its dual $(V^{\otimes k})^*$.
The distinction is very important if you pass from vector spaces to vector bundles on a complex analytic manifold: the tangent bundle, for example, is not isomorphic to the cotangent bundle, and this is only the case $k=1$!
Also, I think that many authors introduce tensors through multilinear maps because they fear that genuine tensor spaces are more complicated than spaces consisting of multilinear maps, so they take advantage of the canonical isomorphism $V=V^{**}$ to replace tensors by multilinear maps. |
H: Understanding Quillen's Theorem A
Let me restate the theorem:
Let $F\colon\mathcal{C}\to\mathcal{D}$ be a functor. If $F\downarrow x$ is contractible for every $x\in\operatorname{Ob}(\mathcal{D})$, then $F$ is a homotopy equivalence.
Now let $\mathcal{C}$ be the category with two objects $x$, $y$, and two non-identity morphisms $f_1, f_2\in\mathcal{C}(x,y)$, $\mathcal{D}$ be the category with two objects $\tilde x$, $\tilde y$ and one non-identity morphism $\tilde f\in\mathcal{D}(\tilde x,\tilde y)$, and $F\colon\mathcal{C}\to\mathcal{D}$ be the obvious functor. Then
$F\downarrow \tilde x$ is the category with one object $(x,\operatorname{id}_{\tilde x})$ and no non-identity morphisms. In particular, $\left|N(F\downarrow \tilde x)\right|\cong *$, so $F\downarrow \tilde x$ is contractible.
$F\downarrow \tilde y$ is the category with two objects $(x,\tilde f), (y, \operatorname{id}_{\tilde y})\in\operatorname{Ob}(F\downarrow \tilde y)$ and one non-identity morphism $\tilde f\colon (x,\tilde f)\to (y, \operatorname{id}_{\tilde y})$. In particular, $\left|N(F\downarrow \tilde y)\right|\cong [0,1]$, so $F\downarrow \tilde y$ is contractible.
But $\left|N(\mathcal{C})\right|\cong S^1$ and $\left|N(\mathcal{D})\right|\cong [0,1]$, so $F$ can't be a homotopy equivalence.
I'm quite sure I didn't disprove Quillen's Theorem A using one of the easiest examples one could think of, but I just can't find my mistake...
AI: I think you just confused what morphisms in $F\downarrow x$ are: given two objects $(a,\phi)$ and $(b,\psi)$ (that is, two objects $a,b\in\mathcal C$ and two morphisms $\phi:Fa\to x$ and $\psi:Fb\to x$ in $\mathcal{D}$), a morphism between them is a morphism $\gamma:a\to b$ such that $\psi\circ F\gamma=\phi$. In this case, there are two morphisms $(x,\tilde{f})\to(y,\mathrm{id_{\tilde{y}}})$: $f_1$ and $f_2$. Both $\gamma=f_1$ and $\gamma=f_2$ satisfy the equation $\mathrm{id_{\tilde{y}}}\circ F\gamma=\tilde f$. |
H: Law of iterated expectation problem
When $E(u \mid x)=0$, $E(u)=E(E(u \mid x))=0$.
Then why is $E(ux)=0$ by the law of iterated expectations?
The book says it is because it is of the form $E(h(x)E(u\mid x))$, but I still can't understand.
AI: You can make the calculations
$$
E(UX) = E( E(UX\mid X))
= E( X E(U\mid X))
= E( X (0)) = E(0) = 0,
$$
since $E(U\mid X)=0$, yielding the desired result. The reason you can move $X$ outside the conditional expectation is a fact most naturally stated for a more abstract notion of conditional expectations with respect to $\sigma$-algebras, but you can also just take it as a case of the general rule
$$
E(h(X)U\mid X) = h(X) E(U\mid X),
$$
where $h$ is some mapping. |
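As a quick numerical illustration of the rule above (my own sketch; the particular choice of $X$, $Z$ and $h(X)=1+X^2$ is just for the example), one can simulate a $U$ with $E(U\mid X)=0$ and check both expectations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
X = rng.normal(size=n)          # any distribution for X works here
Z = rng.normal(size=n)          # independent of X, with E[Z] = 0
U = Z * (1 + X**2)              # then E[U | X] = (1 + X^2) * E[Z] = 0

print(np.mean(U))               # close to 0: E[U] = E[E[U|X]] = 0
print(np.mean(U * X))           # close to 0: E[UX] = E[X E[U|X]] = 0
```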
H: Infinite compact subset of $\mathbb{Q}$
Can I find an infinite set in $(\mathbb{Q},\mathcal{T}_e|_\mathbb{Q})$ which is compact?
AI: $$\{0\}\cup\left\{\frac1n:n\in\Bbb Z^+\right\}$$ |
H: Representing complex numbers as matrices, show that $A(z)+A(z')=A(z+z')$
I am doing a task where in which I am representing complex numbers as matrices, so $z=x+iy \in \Bbb C$ is represented by:
$A(z)=\begin{bmatrix}
x & -y \\
y & x
\end{bmatrix}$
Now I have to show for all $z, z' \in\Bbb C$ that $A(z)+A(z')=A(z+z')$
How would I go about doing this?
AI: Hint: If $z=x+iy,$ then $x=\frac12(z+\overline z)$ and $y=\frac1{2i}(z-\overline z)$. This shows you that $$A(z)=\begin{bmatrix}\frac12(z+\overline z) & -\frac1{2i}(z-\overline z)\\\frac1{2i}(z-\overline z) & \frac12(z+\overline z)\end{bmatrix}$$ for any $z\in\Bbb C.$ Recalling the fact that $\overline{z+w}=\overline z+\overline w$ for any $z,w\in\Bbb C,$ can you take it from there?
Alternately, you can simply put $z=x+iy$ and $z'=x'+iy'$ where $x,y,x',y'\in\Bbb R,$ and proceed with the calculation directly, once you've put $z+z'$ into the form $a+ib,$ where $a,b\in\Bbb R.$ |
H: Homework - countable infinity
I'm trying to solve 2 problems, but I'm having some issues and would appreciate help.
Here are the questions and what I thought could be done:
1) A is the set of all series of numbers, where in an even place there is an even number, and in an odd place there is an odd number (for example, the series {8,3,2,5,0,1,28,...} is a valid member of A).
Is A of countable cardinality? If it is, find a one-to-one correspondence with the natural numbers.
My solution: I thought of sending a series, to a natural number. like, the example we got? I would send that series to the number 83250128 but I don't think thats a good solution...
2) B is the set of all series that only contain values that are prime numbers.
Is B of countable cardinality? If it is, find a one-to-one correspondence with the natural numbers.
AI: For $B$:
Write each irrational $q \in [0,1]$ in binary, now identify that irrational with the sequence $\langle a_n, n < \infty \rangle$ where $a_n = 3$ if the $n^{th}$ digit of $q$ is $0$, or else $a_n = 5$ if the $n^{th}$ digit of $q$ is $1$. This defines an injection from the irrationals in $[0,1]$ to $B$, hence $B$ is uncountable.
For $A$, think about Daniel Rust's comment. |
H: How can I do this integral?
How can I compute this integral?
$$\int\frac{x}{\sqrt{1+x^2}-x^2-1}dx$$
I tried to rationalise the denominator and got:
$$\int\frac{x(-\sqrt{1+x^2}-x^2-1)}{x^4+x^2}dx=\int\frac{-\sqrt{1+x^2}-x^2-1}{x^3+x}dx$$
But even still I find separating the integrals does not help me much in determining how to compute them, apart from:
$$\int\frac{-x^2}{x^3+x}dx$$
For the others I see no clear subsitution or trick, so any help would be great
AI: Using the substitution $ x= \tan t $ in the original integral yields
$$ \int \frac{\tan t \sec t}{1-\sec t}dt .$$
Can you finish it now?
Note: we used the identity
$$\sec^2 t= 1 + \tan^2 t. $$ |
H: Reference for Fukaya Categories and Homological Mirror Symmetry
What references are there for learning Fukaya categories (specifically, good references for self-study)?
In addition, any references with an eye toward homological mirror symmetry would be greatly appreciated.
AI: I personally recommend Paul Seidel's Fukaya Categories and Picard-Lefschetz Theory
book. It is not easy (imho) but contains an introduction to $A_\infty$-categories, it explains the Fukaya categories in both a preliminary and a complete version and provides an example of Fukaya cat. of a Lefschetz fibration.
Another good reference is the paper
http://arxiv.org/pdf/math/0011041.pdf
Section 4 is about $A_\infty$-structures, while section 5 contains an introduction to Fukaya categories. |
H: Norm of operator
Let $H := \ell^2(\mathbb N)$ and for $f \in \ell^\infty(\mathbb N)$ define $T_f:H \to H : g \mapsto f\cdot g$. This is well defined. I want to show that $\| T_f \| = \| f \|_\infty$. It is easy to show that $\|T_f\| \leq \|f\|_\infty$. To show equality I must find a function $g:\mathbb N \to \mathbb C$ s.t. $\|g\|_2 \leq 1$ and $\|f\cdot g\|_2 = \|f\|_\infty$. That is
$$
\sup_N \sum_{n=1}^N |f(n)g(n)|^2 = \left( \sup_n |f(n)| \right)^2
$$ and
$$
\sup_N \sum_{n=1}^N |g(n)|^2 \leq 1
$$ Since this the sums are all over positive elements we could replace the $\sup$ by $\lim$.
Am I proceeding right ?
AI: In general you cannot find a $g$ for which the norm of the operator is attained. To prove the desired result it is enough, for each $\varepsilon>0$, to find $g\in \ell_2(\mathbb{N})$ with $\Vert g\Vert_2\leq 1$ such that $\Vert f\cdot g\Vert_2\geq \Vert f\Vert_\infty-\varepsilon$. Now I suggest considering
$g=\Vert\chi_{\{k\}}\Vert_2^{-1}\chi_{\{k\}}$ for any $k\in\mathbb{N}$ such that $|f(k)|\geq\Vert f\Vert_\infty-\varepsilon$. |
H: How to determine the sum of the series $\,\sum_{n=1}^{\infty}\frac{n+1}{2^n}$
I am stuck on the following problem:
I have to determine the sum of the series $$\sum_{n=1}^{\infty}\frac{n+1}{2^n}$$
My Attempt: $$\sum_{n=0}^{\infty}\frac{n+1}{2^n}=\sum_{n=0}^{\infty}\frac{1}{2^n}+\sum_{n=0}^{\infty}\frac{n}{2^n}=\frac{1}{1-\frac12}+\sum_{n=0}^{\infty}\frac{n}{2^n}=2+\sum_{n=0}^{\infty}\frac{n}{2^n}$$.
So,I am stuck on determining the value of $\,\,\sum_{n=0}^{\infty}\frac{n}{2^n}$.
Can someone point me in the right direction? Thanks in advance for your time.
AI: $$\sum_{n=1}^\infty \frac{n}{2^n} = \sum_{n=1}^\infty \sum_{k=1}^n \frac{1}{2^n}$$
$$ = \sum_{n=1}^\infty \sum_{k=1}^\infty \frac{1_{(k \leq n)}}{2^n}$$
$$= \sum_{k=1}^\infty \sum_{n=k}^\infty \frac{1}{2^n}$$
$$ = \sum_{k=1}^\infty \frac{\frac{1}{2^k}}{1-.5}$$
$$ = \sum_{k=1}^\infty \frac{1}{2^{k-1}}$$
$$= 2$$ |
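A quick numerical check of the computation above (my addition, not part of the original answer): the partial sums of $\sum_{n\ge1} n/2^n$ approach $2$, and hence the series in the question, $\sum_{n\ge1}(n+1)/2^n$, approaches $2+1=3$.

```python
print(sum(n / 2**n for n in range(1, 200)))        # ~2.0
print(sum((n + 1) / 2**n for n in range(1, 200)))  # ~3.0
```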
H: Show that 2 surfaces are tangent in a given point
Show that the surfaces $ \Large\frac{x^2}{a^2} + \Large\frac{y^2}{b^2} = \Large\frac{z^2}{c^2}$ and $ x^2 + y^2+ \left(z - \Large\frac{b^2 + c^2}{c} \right)^2 = \Large\frac{b^2}{c^2} \small(b^2 + c^2)$ are tangent at the point $(0, ±b,c)$
To show that 2 surfaces are tangent, is it necessary and suficient to show that both points are in both surfaces and the tangent plane at those points is the same?
Because if we just show that both points are in both surfaces, the surfaces could just intercept each other. And if we just show the tangent plane is the same at 2 points of the surfaces, those points do not need to be the same.
Thanks!
AI: The respective gradients of the surfaces are locally perpendicular to them:
$$
\nabla f_1 = 2\left(\frac{x}{a^2}, \frac{y}{b^2}, -\frac{z}{c^2} \right) \\
\nabla f_2 = 2\left(x, y, z - \frac{b^2+c^2}{c}\right)
$$
At any point of common tangency, the gradients are proportional. Therefore $\nabla f_1 = \lambda \nabla f_2$. This is seen to be true if you substitute $\left(0, \pm b, c\right)$ into the expressions for the gradients; there $\nabla f_2 = b^2\,\nabla f_1$, so $\lambda = 1/b^2$. QED. |
H: Where is the error in this integration example?
I've been practicing calculus recently and found these example exercises.
Please look at problem $19$ which says:
$$\text{Integrate }\int\frac{x^3}{(x^2+5)^2} dx.$$
This is the suggested answer which says that the integral is equal to
$$\frac{x^2}{2(x^2+5)}-\frac{\ln(x^2+5)}{2}.$$
To make sure I am correct I always check with Wolfram Alpha. To my surprise, the result I got from Wolfram Alpha was different:
$$\frac{5}{(5 + x^2)} + \frac{\ln(5 + x^2)}{2}.$$
Could someone please help me find the cause of the difference in results? These are by no means the same equations as there is no way to convert one to the other.
AI: The answer in that link is
$$ - \frac{x^2}{2(x^2+5)}+ \frac 1 2 \ln(x^2+5)$$
which is $-$ times your answer. If you look at the first term, it can be rewritten as
$$- \frac{x^2}{2(x^2+5)} = - \frac{1}{2 }\left( 1 - \frac{5}{x^2 + 5} \right) = - \frac 1 2 + \frac{5}{2(x^2 + 5)}$$
which differs only by a constant term from the answer in W|A. You can absorb it into the constant of integration since it is an indefinite integral. |
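As a sanity check (my addition, assuming sympy is available), differentiating the linked antiderivative recovers the original integrand:

```python
import sympy as sp

x = sp.symbols('x')
antiderivative = -x**2 / (2 * (x**2 + 5)) + sp.log(x**2 + 5) / 2
integrand = x**3 / (x**2 + 5)**2

# Should print 0: the two expressions differ by at most a constant of integration.
print(sp.simplify(sp.diff(antiderivative, x) - integrand))
```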
H: growth and decay finding t formula?
The amount of fish in the main pool increases every month by 5%. Today, we have 6 tonnes of fish in the pool.
In how many years will there be 6.6 tonnes of fish in the pool?
So I can solve it like this:
mt = 6.6
m0 = 6
q = 1 + 5/100 = 1.05
t = ?
6.6 = 6 * 1.05 ^ t
6.6 / 6 = 1.1
1.1 = 1.05^1 (Nope).
1.1 = 1.05^2 = 1.1025 (Most close)
So the answer to the question is:
2 years, t = 2
My question is, is there a formula to do that instead of checking every single time?
AI: You can use the logarithm to solve that equation for $t$:
$$6.6 = 6(1.05)^t \implies 1.1=1.05^t$$
Now, taking the logarithm of both sides gives you
$$\log 1.1=\log 1.05^t \implies t = \frac{\log 1.1}{\log 1.05} \approx 1.953$$
The last step makes use of the fact that $\log b^a = a \log b$.
Addition
The steps in more detail:
First, simplify the equation by dividing both sides by $6$:
$$6.6 = 6 \times 1.05^t \implies 1.1=1.05^t$$
Then take the logarithm of both sides:
$$\log 1.1=\log 1.05^t$$
Now use the fact that the exponent can be moved outside the log expression, i.e. the rule $\log b^a = a \log b$:
$$\log 1.1=t \log 1.05$$
Now divide both sides by $\log 1.05$ to get
$$t = \frac{\log 1.1}{\log 1.05}$$ |
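In code this is a one-liner (a trivial sketch, my addition):

```python
import math

t = math.log(1.1) / math.log(1.05)
print(t)  # ~1.9527, matching the value above
```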
H: A problem on indefinite integration
$$\int\frac{x^4-2}{x^2\sqrt{x^4+x^2+2}}dx$$
I tried some substitutions, but none succeeded in simplifying the expression. Please help.
AI: Disclaimer: I'm cheating. I got the final answer from WA and reverse-engineered the steps. If people have any intuition for how to get these steps without cheating, please update this answer.
$$\begin{align}
\int \frac{x^4 - 2}{x^2 \sqrt{x^4+x^2+2}} dx
= & \int \frac{2x^4 + x^2 - (x^4 + x^2 + 2)}{x^2 \sqrt{x^4+x^2+2}} dx\\
= & \int \frac{\frac{x}{2}(x^4+x^2+2)' - (x^4+x^2+2)}{x^2 \sqrt{x^4+x^2+2}} dx\\
= & \int \left[
\frac{1}{x} \frac{d}{dx}\left(\sqrt{x^4+x^2+2}\right)
+ \frac{d}{dx}\left(\frac{1}{x}\right) \sqrt{x^4+x^2+2} \right]
dx\\
= & \frac{\sqrt{x^4+x^2+2}}{x} + \text{constant.}
\end{align}$$ |
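One can also verify the result mechanically (my addition, assuming sympy is available): differentiating the proposed antiderivative gives back the integrand.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
candidate = sp.sqrt(x**4 + x**2 + 2) / x
integrand = (x**4 - 2) / (x**2 * sp.sqrt(x**4 + x**2 + 2))

print(sp.simplify(sp.diff(candidate, x) - integrand))  # expected: 0
```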
H: Orthogonal Complement with Bilinear Map over Different Spaces
Let $\tau:W\times V \rightarrow K $ be a bilinear map where $V$ and $W$ are vector spaces over a field $K$.
Then, let $U$ be a subspace of $V$ and define $S(U) = \{w \in W\ |\ \tau(w,u)=0\ \forall u \in U\}$
My question is this:
Prove that $U \subset S(S(U))$
Now, I have already managed to prove that $S(U)$ is a subspace of $W$.
It seems to me that $S(S(U)) = \{w \in W\ |\ \tau(w,x)=0\ \forall x \in S(U)\}$ but this seems to get me in trouble because we're dealing with a set containing elements of $W$ not $U$.
If someone could persuade me that the following is true, then I think I can finish the proof from there:
$S(S(U)) = \{u \in U\ |\ \tau(u,x)=0\ \forall x \in S(U)\}$
Thanks for any help in pointing me in the right direction!
AI: It looks like the author skipped the "and analogously for $E \subset W$" part and that led to confusion.
Let's modify the notation a little: for $U \subset V$, define
$$S_W(U) := \left\lbrace w\in W : \bigl(\forall u\in U\bigr)\bigl( \tau(w,u) = 0\bigr) \right\rbrace,$$
and for $Z \subset W$, define analogously
$$S_V(Z) := \left\lbrace v\in V : \bigl(\forall z\in Z\bigr)\bigl( \tau(z,v) = 0\bigr) \right\rbrace.$$
Then your task is to show that for all $U\subset V$ you have
$$U \subset S_V(S_W(U)).$$ |
H: Can a complex quadratic polynomial have real roots?
Where $a\in \textbf{Z}[i] $ and $a \not\in \textbf{Z}$, suppose for the quadratic formula, $ b^2-4ac = 0 \Rightarrow b^2 = 4 ac \Rightarrow c= \frac{b^2}{4a} $ and $ b=a $, so that $\displaystyle \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}=\frac{-b}{2a}=\frac{-b}{2b}=-\frac{1}{2}$. More generally, suppose $d \in \textbf{Z}$ and $b=da$. Then the quadratic formula will have a root of $ \displaystyle -\frac{db}{2b}=- \frac{d}{2}$, thus any real number could be a root.
I was going to ask this question, but then answered it in trying to show that I had made an attempt...did I do so correctly? Could a complex quadratic polynomial have more than 1 real root?
AI: If $f$ is a quadratic polynomial with more than 1, i.e. 2 real roots, it can be written as $\alpha(x-a)(x-b)$, where $a,b$ are the roots and $\alpha$ can be `anything'. If you allow $\alpha$ to be complex, then yes, a complex quadratic polynomial may have 2 real roots. However, if you require the polynomial to be monic (having leading coefficient 1), $\alpha$ has to be 1 and this cannot be the case anymore.
I do have to admit that I don't quite follow what you're doing, though. |
H: Determine if a point is contained in the circle in 3d space
I have a problem where I need to determine if a point is contained in the area of a circle in 3d space.
For my circle, I have the radius (R), the position of the center (C) and a normal vector to the circle (N). With that, I can easily draw my circle.
Now, I will have a point generated and I want to know if this point is located on the area of the circle.
Currently, I'm thinking of obtaining the parametric equations of my circle which are:
\begin{align*}
x &= r\cos\phi\cos\theta \\
y &= r\sin\phi \\
z &= r\cos\phi\sin\theta
\end{align*}
Then, I could easily verify if the point verifies these equations.
However, I don't know how to convert the data I have (Radius, Center position and normal vector) into these 3 parametric equations and I need help with that !
AI: Here's my opinion:
1. check if $|CP|\leq R$
2. check if $\vec{CP}\cdot N = 0$
More specifically, if $C = (x_{0},y_{0},z_{0})$, $N=(a,b,c)$ and $P=(x,y,z)$,
P is on disc C when
1. $(x-x_{0})^{2}+(y-y_{0})^{2}+(z-z_{0})^{2}\leq R^{2}$
2. $(x-x_{0})\,a+(y-y_{0})\,b+(z-z_{0})\,c=0$
the second condition ensures that P is on the same plane as the disc. |
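A small implementation of the two conditions (my own sketch; the function name and tolerance handling are mine, not from the answer):

```python
import numpy as np

def point_on_disc(P, C, N, R, tol=1e-9):
    """Check whether point P lies on the filled disc with centre C, normal N and radius R."""
    P, C, N = map(np.asarray, (P, C, N))
    CP = P - C
    in_plane = abs(np.dot(CP, N)) <= tol * np.linalg.norm(N)   # condition 2
    within_radius = np.dot(CP, CP) <= R**2 + tol               # condition 1, squared
    return bool(in_plane and within_radius)

# Example: disc of radius 2 centred at the origin in the xy-plane.
print(point_on_disc((1, 1, 0), (0, 0, 0), (0, 0, 1), 2))  # True
print(point_on_disc((1, 1, 1), (0, 0, 0), (0, 0, 1), 2))  # False: off the plane
print(point_on_disc((3, 0, 0), (0, 0, 0), (0, 0, 1), 2))  # False: outside the radius
```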
H: Suppose $g(0) > 0$ and $g(1) = 1$. Prove that if $g'(1)>1$, then there exists $t\in (0,1)$ with $g(t) = t$
Let $g:[0,1]\to [0,1]$ be continuously differentiable (including one-sided derivatives at 0 and 1). Suppose $g(0) > 0$ and $g(1) = 1$. Prove that if $g'(1)>1$, then there exists $t\in (0,1)$ with $g(t) = t$.
Okay, this looks like a Mean Value Theorem problem, but I'm not sure how to get to proving it.
Actually, is this an Intermediate Value problem? The conclusion definitely looks like it comes from Mean Value, but the hypothesis reminds me of Intermediate Vale since the range of the function is compact. Since the range of $g$ is closed and bounded, it has a max and min, say $g(a)$ and $g(b)$ respectively. In this case, $g(a)$, the max, must be $g(1) = 1$, correct?
Now, I am not sure how $g'(1)>1$ is supposed to imply the conclusion.
AI: $f(t)=g(t)-t$
Notice that $f(0)>0$. You have to prove that there is $t_0$ such that $f(t_0)<0$, and then just apply the intermediate value theorem. If $f$ increases near $1$ and $f(1)=0$ then... |
H: How to solve $7200a+720b+72c=1000x+340+y<10000$?
What is the easiest way to solve $7200a+720b+72c=1000x+340+y<10000$ where all variables are one digit natural numbers?
Trial and error method seems to be tedious.
AI: It’s immediately clear that $a$ must be $1$, which implies that $x$ must be at least $7$.
$7200a+720b+72c$ is clearly divisible by $9$. The digits of $1000x+340+y$ are $x,3,4$, and $y$, so $x+3+4+y$ must be a multiple of $9$, so either $x=7$ and $y=4$, $x=8$ and $y=3$, or $x=9$ and $y=2$. Then $1000x+340+y$ is $7344$, $8343$, or $9342$. Only the first of these is divisible by $72$: it’s $72\cdot102$.
It follows that $100a+10b+c=102$; if $a,b$, and $c$ must be single digits, this implies that $a=1$, $b=0$, and $c=2$. Thus, it’s impossible to meet the conditions: if $a,b,c,x$, and $y$ are all single digits, one of them must be $0$. |
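A brute-force check of this conclusion (my addition): searching over all single digits $0$-$9$ finds only the solution with $b=0$, so with nonzero digits there is indeed no solution.

```python
solutions = [(a, b, c, x, y)
             for a in range(10) for b in range(10) for c in range(10)
             for x in range(10) for y in range(10)
             if 7200*a + 720*b + 72*c == 1000*x + 340 + y < 10000]
print(solutions)                              # [(1, 0, 2, 7, 4)]
print([s for s in solutions if 0 not in s])   # []
```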
H: A Question Regarding Diagonalization
For this question I am only considering binary sequences of countably infinite length.
Consider an arbitrary set $S$ of such sequences, $S$ of order type omega, $\omega$. By diagonalization one can construct a binary sequence $s_*$ not contained in $S$. Add $s_*$ to $S$ to form a new set, call it $S'$, of countable order type $\omega+1$. Now it would seem that one can diagonalize out of $S'$ and construct another binary sequence, call it $s_*'$ ($s_*'$ of order type $\omega$) contained in neither $S$ nor $S'$ and continue this process until the order type of the set formed by the operations of diagonalization and adding the diagonal (which is always of order type $\omega$) to the set is at least $\omega_1$.
Can this process be continued past $\omega_1$? If not, why not? If yes, then how?
AI: First of all, when you say "order type" it implies that there is a natural order on the sequences. That doesn't necessarily happen. Moreover the resulting sequence from the diagonal argument need not be larger in the order that you chose.
But suppose that you begin with a countable set, then using diagonalization you add more and more elements. Can you go beyond $\omega_1$? Well, not necessarily. If the continuum hypothesis holds, or if we are in a context of $\sf ZF$ and $\aleph_1\nleq2^{\aleph_0}$, then it might be the case that your process has exhausted itself somehow.
Either by showing that this inductive method is insufficient to define a set of size $\aleph_1$, i.e. we have to make uncountably many choices (in the context of $\sf ZF$ it might be the case that going through all the $\omega_1$ steps requires a scale to $\omega_1$, that is a sequence of enumerations for all the countable ordinals, which may not exist without the axiom of choice); or that we simply lucked out and managed to cover all the real numbers, in the case that $2^{\aleph_0}=\aleph_1$, in which case you can't continue anymore.
If you could have proved from $\sf ZF(C)$ that diagonalization carries to ordinals of size $\aleph_1$, then you would have proved $\lnot\sf CH$ and so the inconsistency of $\sf ZFC$. However if we assume more, e.g. $\sf MA+\lnot CH$, then we can switch to other techniques to produce newer sequences. |
H: Induction proof with Fibonacci numbers
Prove by induction that for Fibonacci numbers from some index $i > 10$
$1.5^i ≤ f_i ≤ 2^i$
Notice! Because Fibonacci number is a sum of 2 previous Fibonacci numbers, in the induction hypothesis we must assume that the expression holds for k+1 (and in that case also for k) and on the basis of this prove that it also holds for k+2.
This where I've got so far:
Base case: $i = 11$
$f_{11} = 89 $
$1.5^{11} ≤ 89 ≤ 2^{11} $ OK!
Induction hypothesis:
$1.5^{k+1} ≤ f_{k+1} ≤ 2^{k+1}$
Induction step:
$1.5^{k+2} ≤ f_{k+2} ≤ 2^{k+2}$
Now I have no idea how to continue from here. Could someone help?
AI: When dealing with induction results about Fibonacci numbers, we will typically need two base cases and two induction hypotheses, as your problem hinted.
You forgot to check your second base case: $1.5^{12}\le 144\le 2^{12}$
Now, for your induction step, you must assume that $1.5^k\le f_k\le 2^k$ and that $1.5^{k+1}\le f_{k+1}\le 2^{k+1}.$ We can immediately see, then, that $$f_{k+2}=f_k+f_{k+1}\le 2^k+f_{k+1}\le 2^k+2^{k+1}= 2^k(1+2)\le 2^k\cdot 4=2^{k+2}$$ As for the other inequality, we similarly see that $$f_{k+2}=f_k+f_{k+1}\ge 1.5^k+1.5^{k+1}=1.5^k(1+1.5)=1.5^k\cdot 2.5\ge1.5^k\cdot 2.25=1.5^{k+2}$$ |
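A quick numerical spot-check of the claim (my addition), for the first few indices past the base cases:

```python
fib = [0, 1]
for _ in range(40):
    fib.append(fib[-1] + fib[-2])   # fib[i] is the i-th Fibonacci number (fib[1] = fib[2] = 1)

print(fib[11], fib[12])                                       # 89 144, the two base cases
print(all(1.5**i <= fib[i] <= 2**i for i in range(11, 31)))   # True
```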
H: Can we compute $ \mathbf{Pr}[x_{1} < X < x_{2}] $ if we know the cumulative distribution function $ F $?
Assume that we have a cumulative distribution function $ F $. How can we calculate the quantity $ \mathbf{Pr}[x_{1} < X < x_{2}] $?
I know the answer for $ \mathbf{Pr}[x_{1} < X \leq x_{2}] $, but I am not sure about $ \mathbf{Pr}[x_{1} < X < x_{2}] $.
AI: Whether $X$ is discrete or continuos, in any case
$$P(x_1<X<x_2)=F(x_2)-F(x_1)-P(X=x_2).$$
You may want to draw a picture. |
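For completeness (my addition): the point mass at $x_2$ can itself be read off from the CDF,
$$P(X=x_2)=F(x_2)-\lim_{t\uparrow x_2}F(t),$$
so the whole probability is computable from $F$ alone.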
H: Compute $ \lim\limits_{n\to\infty}\sqrt[n]{\log\left|1+\left(\frac{1}{n\cdot\log n}\right)^k\right|}$.
Compute $$ \lim\limits_{n\to\infty}\left(\sqrt[n]{\log\left|1+\left(\dfrac{1}{n\cdot\log\left(n\right)}\right)^k\right|}\right). $$
What I have: $$ \forall\ x\geq 0\ :\ x- \frac{x^2}{2}\leq \log(1+x)\leq x. $$
Apply to get that the limit equals $1$ for any real number $k$.
Is this correct? Are there any other proofs?
AI: Yes, it works; here's another proof using a slightly more sophisticated tool (unnecessary in this case, but sometimes more useful).
By Stolz-Cesaro if $ (x_n) $ is a positive sequence and
$$ \lim_n \dfrac{x_{n+1}}{x_n} = l $$
then
$$ \lim_n \sqrt[n]{x_n} = l.$$
Taking as $(x_n)$ the sequence you defined, an easy calculation shows that $$ \dfrac{x_{n+1}}{x_n} \rightarrow 1,$$
and the claim follows. |
H: Factoring out an exponential?
I have the following expression
$$\frac{2^{k+1}(k+1)!}{(k+1)^{k+1}}\cdot\frac{k^k}{2^k k!}$$
I get
$$\frac{2(k+1)(k^k)}{(k+1)^{k+1}}$$
But how do I factor out the ${(k+1)}^{k+1}$
AI: It might help if you notice that $(k+1)^{k+1}=(k+1)^k(k+1)$. |
H: Modular arithmetic and one-to-one functions
Let $S = \{0, 1, 2, 3, · · · , 99\}$ . For each of the following functions $f : S \rightarrow S$ , determine
whether it is one-to-one and onto, by computing its values for all $k ∈ S$:
Function 1: $$f(k) = (131k+27)\pmod{100}$$
Well at this point, I've computed the following:
$$
\begin{matrix}
f(0) = 27 \bmod 100 = 100i+27 \\
f(1) = 158 \bmod 100 = 100i+158 \\
\vdots \\f(99) = 12996 \bmod \ 100 = 100i+12996
\end{matrix}
$$
However, I'm not sure where to go beyond this point and don't see any logical steps.
AI: To make your task simpler, you can note that $$\begin{align}f(k) &= 131k+27\pmod{100}\\ &= 100k+31k+27\pmod{100}\\ &= 31k+27\pmod{100}.\end{align}$$ I have no idea why you'd be asked to explicitly calculate $100$ modular arithmetic values, but this should make it simpler. The first several, for example, are: $$f(0)=27\pmod{100}\\f(1)=58\pmod{100}\\f(2)=89\pmod{100}.$$ Those are simple enough. Next, we have $$\begin{align}f(3) &= 120\pmod{100}\\ &= 20\pmod{100}.\end{align}$$ We continue in this fashion, finding the first several values are $$27,58,89,20,51,82,13,44,75,6,37,68,99,30,...$$ and so on until the last value of $96.$ (Are you seeing the pattern?)
Still, this is far from the most efficient way to do this problem. Let's suppose that $f(k)=f(m)$ for some $k,m\in S,$ so that $$31k+27=31m+27\pmod{100}\\31k=31m\pmod{100}\\31(k-m)=0\pmod{100}$$ But that means that $31(k-m)=100j$ for some integer $j$, so $31$ divides $100j$. Since $31$ is prime and does not divide $100,$ then $31$ must divide $j,$ meaning that $j=31i$ for some integer $i,$ and so $$31(k-m)=100j\\31(k-m)=3100i\\k-m=100i\\k-m=0\pmod{100}\\k=m\pmod{100}.$$ Thus, $f:S\to S$ is a one-to-one function. A one-to-one function from a finite set into itself is automatically onto, and so we're done. |
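A direct computational check of the conclusion (my addition):

```python
S = range(100)
values = [(131*k + 27) % 100 for k in S]
print(values[:4])                  # [27, 58, 89, 20], matching the values above
print(sorted(values) == list(S))   # True: f is one-to-one and onto S
```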
H: Demonstration that $\forall\; n>3,\;\;n^2<n!$
How do I prove by mathematical induction that$$\forall\; n>3,\;\;n^2<n!$$
I tried $n=4$: then $4^2<4!$, which is true because $16<24$.
Hypothesis: $n^2<n!$
Thesis: $(n+1)^2<(n+1)!$
Show: $$(n+1)^2=n^2+2n+1<n!+2n+1$$ and????
AI: $$(n+1)!=(n+1)\cdot n!>(n+1)\cdot n^2>(n+1)^2$$
Here $n^2>n+1$ since $\forall n>3, \qquad n^2-n-1=(n-\frac{1}{2})^2-\frac{5}{4}>0$ |
H: Historical reason to define a vector dot product the way it is
The dot product of two vectors is defined this way:
$$\begin{pmatrix} a_1\\ a_2 \\ \end{pmatrix}\cdot \begin{pmatrix} b_1\\ b_2\\ \end{pmatrix} = a_1\cdot b_1 + a_2\cdot b_2$$
I know it works exactly like it should, in the Work formula, in physics, like:
$$W = F.D$$
For one-dimensional motion, only. If we need to extend this to two dimensions, we could calculate the horizontal work and the vectical work, then sum the two, like:
$$W_x = F_x\cdot Dx$$
$$W_y = F_y\cdot Dy$$
Then:
$$W = W_x + W_y = F_x\cdot D_x + F_y\cdot D_y $$
If we define the dot product to work for the work formula even for a vector notation, like:
$$W = F\cdot D \tag{both F and D are vectors in this case}$$
Then we must define the vector product as being the way it is.
PS: I don't like explanations such as 'the dot product tells how much the vector is walking in a certain way', because then the multiplication $a_1 \cdot b_1$ doesn't make sense; it could be $a_1+b_1$ and then I would still have an idea of 'how much the vector is changing'.
AI: If you want to calculate the angle that two vectors with the same origin make, you can apply the law of cosines to the triangle formed by the two vectors, with their difference as the third side. When you work that out and solve for $\cos(\text{angle})$, the numerator is exactly that dot product. The denominator is always positive, so the numerator also determines whether or not the angle is obtuse. I think that is why they call it the dot product by definition. I believe that's how I was introduced to the dot product the first time. The fact that a zero dot product results in orthogonal vectors is then a mere consequence. |
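Spelling out that computation (my own sketch, not part of the original answer): for vectors $\vec a,\vec b$ making an angle $\theta$, expanding $|\vec a-\vec b|^2$ componentwise and comparing it with the law of cosines gives
$$|\vec a-\vec b|^2=|\vec a|^2+|\vec b|^2-2(a_1b_1+a_2b_2) \qquad\text{and}\qquad |\vec a-\vec b|^2=|\vec a|^2+|\vec b|^2-2|\vec a||\vec b|\cos\theta,$$
so $a_1b_1+a_2b_2=|\vec a||\vec b|\cos\theta$, which is why the componentwise definition carries geometric meaning.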
H: Is there a rational number describing the ratio of a volume, as a string, to a surface area?
If you were to take an arbitrary 3-dimensional shape with finite surface area, then look at the volume of that shape, turn the volume into a long cylindrical string bunched up ideally inside the shape with the space between the string be the limit approaching zero, so there is a string inside a shape that fits exactly in the volume. What if you pull that string out of the shape and wrap it around the object, or more generally, what is the ratio of the string's surface area (unbundled)to the surface area of the original shape.
So is this ratio the same number for the volume and surface area of any shape?
AI: Note that the units don't work: the volume of a shape is length$^3$ while the surface area is length$^2$. Let us apply that to a sphere of radius $R$. The volume is $\frac 43 \pi R^3$. If we fill that with a string of radius $r$, cross sectional area $\pi r^2$, the length is $L=\frac {\frac 43 \pi R^3}{\pi r^2}=\frac {4R^3}{3 r^2}$. The area it will cover is then $2rL=\frac {8R^3}{3r}$ (notice this is not the lateral surface area of the string, $2\pi r L$, but the footprint of width $2r$ times the length), so it will cover the $4 \pi R^2$ surface of the sphere $\frac {2R}{3\pi r}$ times. As $r \to 0$, this goes to $\infty$. This will be general: shrinking the radius of the string will cover the outer shape more and more times. |
H: The integral of a function that is 0 a.e is 0
I am working on this problem:
Let $f = 0$ for all $x \in [a,b] $ except for $x$ in a set of Lebesgue measure zero. Then $\int_a^b f \,dx = 0$ if the integral exists.
Here are my ideas: Split $f$ into its positive part and negative part so that $f = f^+ + f^- = f^+ - (-f^-)$. Then both $f^+$ and $-f^-\geq 0$. Then since $L(P,f) \leq \sigma(P,f) \leq U(P,f) $ for any partition $P$ of $[a,b]$ and any Riemann sum $\sigma$, then I just need to show that for any $P$ there is a Riemann sum over $P$ equal to $0$. So partition $[a,b]$ as $P: a = x_0 < ... < x_n = b$. Then it suffices to show that each subinterval $[x_{i-1},x_i]$ is not contained in the set of measure zero.
And here is where I am stuck. If I assume that $[x_{i-1},x_i]$ is part of that set, then I know each subinterval must also have measure zero, but I don't know how to use this to achieve a contradiction.
AI: Since you are dealing with the Riemann integral, I am assuming that $f$ is bounded on $[a,b]$ (otherwise the Riemann integral would not exist).
I am using the notation: If $I$ is an interval, then $m I = \sup I - \inf I$ (that is, the length).
We need the following result:
If a compact set $C$ has measure zero, then it has content zero. That is, for any $\epsilon>0$, there is a finite collection of intervals $I_k$ with $\sum_k mI_k < \epsilon$ such that $C \subset \cup_k I_k$.
Let $|f|$ be bounded by $B$.
Let $\epsilon>0$. Then let $I_k$ be a finite cover by intervals of the (measure-zero) set where $f\neq 0$, such that $\sum m I_k < \frac{1}{B}\epsilon$. Without loss of generality, we may take the intervals to be closed (since $m I_k = m \overline{I}_k$) and non overlapping (if they overlap, just replace them by their union, and since $m (I_k \cup I_j) \le m I_k + m I_j$, the conditions are still satisfied).
Since the $I_k$ are non-overlapping closed intervals, we can assume they are in
increasing order (that is, if $ x \in I_k$, $y \in I_{k+1}$, then $ x< y$).
Let $I_k = [a_k,b_k]$. Then form the partition $\pi=(a,a_1,b_1,...,a_n,b_n,b)$.
Then $L(f,\pi) = \sum_k \inf f(I_k) m I_k$ (since $f$ is zero on the complement of $\cup I_k$), and so we have $L(f,\pi) \ge \sum_k (-B) m I_k > -\epsilon$.
Similarly, we have $U(f,\pi) = \sum_k \sup f(I_k) m I_k \le\sum_k B m I_k < \epsilon$.
Since $\epsilon>0$ was arbitrary, we have $\sup_{\pi \in {\cal P}} L(f,\pi) = \inf_{\pi \in {\cal P}} U(f,\pi) = 0$, and so $\int_a^b f = 0$. |
H: Differential equation higher order
Can somebody help me with working out $y'''-4y'=t+\cos t+2e^{-2t}$. I want to solve $y'''-4y'=t$, $y'''-4y'=\cos t$ and $y'''-4y'=2e^{-2t}$ apart. The homogeneous equation I already solved: $y(t)=c_1+c_2e^{2t}+c_3e^{-2t}$. For the particular solutions I learnt a method called judicious guessing with methods for the cases: 1. $c\ne0$, 2. $c=0$ and $b\ne0$, 3. $c=b=0$. But I don't understand what $a,b,c$ here are (normally: $ay''+by'+cy=0$). Can somebody help me with finding the particular solutions?
AI: We have a $t$ on the right-hand side, but we also have a constant in the complementary solution, so choose:
$$y_{1p} = t(a + bt)$$
We have $e^{-2t}$ in both the homogeneous and particular, so choose:
$$y_{2p} = c t e^{-2t}$$
We have $\cos t$ in the particular, so choose:
$$y_{3p} = u \cos t + v \sin t$$
Of course, you can just write this as:
$$y_p(t) = t(a + bt)+ cte^{-2t} + u \cos t + v \sin t$$
Take derivatives, substitute and solve for constants.
Spoiler - Do Not Peek
$$y(t) = c_1 + c_2 e^{2t} + c_3 e^{-2t} - \dfrac{1}{8}t^2 + \dfrac{1}{4}te^{-2t} -\dfrac{1}{5}\sin t$$ |
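A mechanical verification of the particular part of that solution (my addition, assuming sympy is available; the $c_i$ terms solve the homogeneous equation and are omitted):

```python
import sympy as sp

t = sp.symbols('t')
y = -t**2/8 + t*sp.exp(-2*t)/4 - sp.sin(t)/5      # particular part of the spoiler
lhs = sp.diff(y, t, 3) - 4*sp.diff(y, t)
rhs = t + sp.cos(t) + 2*sp.exp(-2*t)

print(sp.simplify(lhs - rhs))   # expected: 0
```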
H: Show that ${\bf x} \cdot A^t {\bf y} = {\bf y} \cdot A{\bf x}$
Let $A \in \mathcal M_n (R)$ and ${\bf x}, {\bf y} \in R^n$.
How can I show that:
$${\bf x} \cdot A^t {\bf y} = {\bf y} \cdot A{\bf x} \, ?$$
Thanks for any help.
AI: Hint: note that for real vectors $u,v:u\cdot v= u^T v$
Second hint: we can write
$$
x \cdot A^T y = x^T (A^T y) = (x^T A^T) y = (Ax)^T y
$$ |
H: Solving the Expected Value
I have a word problem,
A game is played where a fair coin is tossed until the first tail occurs. The probability that $x$ tosses will be needed is $ f(x) = 0.5^x;x=1,2,3,4,5$. You win $2^x$ dollars if $x$ tosses are needed for $x=1,2,3,4,5$, but lose \$256 if $x > 5$. Determine the expected winnings.
I started the question off like so,
let Z be the number of wins (1 <= x <= 5)
let Y be the number of losses (x > 5)
$E(Z) = \displaystyle\sum\limits_{x=1}^5 x * 0.5^x$
I am wondering my expected value function is setup correctly.
AI: Let the winnings given $x$ tosses be $w(x)$
$$w(x)=\begin{cases}
2^x & x\leq 5\\
-256 & x>5
\end{cases}
$$
Let $W$ be the winnings pr game. This can now be calculated as
$$E(W)=\sum_{n=1}^{\infty} w(x)0.5^x=-3$$
Does this answer your question? |
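The same number obtained directly (my addition): the winning cases each contribute $2^x\cdot 0.5^x=1$, and $P(X>5)=0.5^5$ since the first five tosses must all be heads.

```python
p_tail = 0.5**5
expected = sum(2**x * 0.5**x for x in range(1, 6)) - 256 * p_tail
print(expected)   # -3.0
```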
H: weak convergence in $L^2$ / $C$ ==> pointwise convergence
I have a sequence of function $B_n \in C([0,1],R)$ and a $B \in C([0,1],R)$, such that:
$\int_{[0,1]} B_n(x) f(x) dx \rightarrow \int_{[0,1]} B (x) f(x) dx$ for all $f \in C([0,1],R)$ which are bounded.
Does this imply point wise convergence of $B_n(x) \rightarrow B(x)$ for all $x$?
My starting idea:
This convergence implies weak convergence in $L^2([0,1])$, right?
Do I have here a result for point wise convergence?
AI: Your condition doesn't imply weak convergence in $L^2$ either. Let $B_n$ have a "spike" of area 1 in the interval $(2^{-(2n+2)},2^{-(2n+1)})$, a negative spike of area 1 in $(2^{-(2n+1)}, 2^{-2n})$, and vanish elsewhere. Then for continuous $f$, $\int B_n f$ looks approximately like $f(2^{-(2n+2)})-f(2^{-(2n+1)})$ which goes to 0 as $n \to \infty$. But we could pick $f \in L^2$ which oscillates near 0 in such a way that $\int B_n f$ does not converge; something like $\sin(2^{-1/x})$.
The moral is that it isn't sufficient to check weak convergence on a dense set. |
H: $\sum\frac{k^{k/2}}{k!}$ converge or diverge?
Does the following series converge or diverge?
$\sum\frac{k^{k/2}}{k!}$
I did the ratio test $\frac{a_{n+1}}{a_n}$
I did
$\frac{(k+1)^{(k+1)/2}}{(k+1)!}$ $\frac{k!}{k^{k/2}}$
Then I simplified
$\frac{(k+1)(k+1)^{k/2}}{(k+1)(k^{k/2})}$
I then got
$k\rightarrow\infty$
$(\frac{k+1}{k})^{k/2}$
$\sqrt{e}$>1 So would the serie diverge.
AI: $$\frac{(k+1)^{(k+1)/2}}{(k+1)!}\cdot \frac{k!}{k^{k/2}}= \frac{(k+1)(k+1)^{(k-1)/2}}{(k+1)k^{k/2}} = \frac{(k+1)^{(k-1)/2}}{k^{k/2}} $$
Alternatively, we have $$\frac{(k+1)^{(k+1)/2}}{(k+1)!}\cdot \frac{k!}{k^{k/2}} = \frac{\sqrt{k+1}\cdot (k+1)^{k/2}}{(k+1)k^{k/2}}= \dfrac 1{\sqrt{k+1}}\cdot \left(\frac{k+1}{k}\right)^{k/2}$$ |
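For the record (this concluding step is my addition, not part of the original answer): from the second form,
$$\frac{a_{k+1}}{a_k}=\dfrac 1{\sqrt{k+1}}\cdot \left(\frac{k+1}{k}\right)^{k/2}\;\longrightarrow\; 0\cdot\sqrt{e}=0<1,$$
so the ratio test gives convergence; the $\sqrt e$ in the question came from dropping the $\frac{1}{\sqrt{k+1}}$ factor.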
H: Find permutation index of multiple lists where corresponding list indices match
I have several date time values:
Mon 17h10
Tue 20h30
Wed 21h45
that maps to the following lists [Mon Tue Wed] [17 20 21] [10 30 45], a list for days, hours and minutes respectively.
Having all permutations of these lists in order, I am looking for a formula which calculates the index of the permutations that maps the original time value.
For example, all permutations of the date time values are:
MO 17 10 (index 0 maps to Mon 17h10)
MO 17 30
MO 17 45
MO 20 10
MO 20 30
MO 20 45
MO 21 10
MO 21 30
MO 21 45
TU 17 10
TU 17 30
TU 17 45
TU 20 10
TU 20 30 (index 13 maps to Tue 20h30)
TU 20 45
TU 21 10
TU 21 30
TU 21 45
WE 17 10
WE 17 30
WE 17 45
WE 20 10
WE 20 30
WE 20 45
WE 21 10
WE 21 30
WE 21 45 (index 26 maps to Wed 21h45)
And at (zero based) indices 0, 13 and 26 we find the original time values.
AI: In general, order all three lists as $(Day_0, Day_1, ..., Day_{n_1-1})$, $(Hour_0, Hour_1, ..., Hour_{n_2-1})$, $(Minute_0, Minute_1, ... Minute_{n_3-1})$.
Your lists have $Day_0=Monday$, $Hour_0=17$, $Minute_0=10$ and $n_1=n_2=n_3=3$.
$(Day_i,Hour_j,Minute_k)$ is associated with index $in_2n_3+jn_3+k$.
And index $l$ is associated with $Day_i$, $Hour_j$, and $Minute_k$ where $i={\lfloor l/(n_2n_3) \rfloor},j=\lfloor (l-n_2n_3i)/n_3 \rfloor,k=l-n_2n_3i-n_3j$. |
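A small implementation of these two formulas (my own sketch; the function names are mine), using the mixed-radix encoding the answer describes:

```python
def tuple_to_index(indices, sizes):
    """(i, j, k, ...) -> flat index, with the last list varying fastest."""
    idx = 0
    for i, n in zip(indices, sizes):
        idx = idx * n + i
    return idx

def index_to_tuple(idx, sizes):
    """Inverse of tuple_to_index."""
    out = []
    for n in reversed(sizes):
        out.append(idx % n)
        idx //= n
    return tuple(reversed(out))

days, hours, minutes = ["Mon", "Tue", "Wed"], [17, 20, 21], [10, 30, 45]
sizes = [len(days), len(hours), len(minutes)]

print(tuple_to_index((1, 1, 1), sizes))   # 13 -> Tue 20h30
print(index_to_tuple(26, sizes))          # (2, 2, 2) -> Wed 21h45
```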
H: Proving functions a formal proof
Let $A$ and $B$ be arbitary sets. Let $S_1$ and $S_2$ be arbitrary subsets of A, and let $T_1$ and $T_2$
be arbitrary subsets of $B$. For each of the following state whether it is True or False. If
True then give a proof. If False then give a counterexample: $f(n) = n^2$
$f(S_1 ∩ S_2) = f(S_1) ∩ f(S_2)$
$f^{-1}(T_1 ∩ T_2) = f^{-1}(T_1) ∩ f^{-1}(T_2)$
Well I know that the first is true but I'm not sure how to give a formal proof. As for the second, I'm not entirely sure how to proceed. Would $f^{-1}$ be $\sqrt n$?
AI: The first is not true, and the squaring function gives a way to disprove it. Take $S_1$ to be the set of negative integers and $S_2$ to be the set of positive integers. What is $f(S_1\cap S_2)$? What is $f(S_1)\cap f(S_2)$?
For the second, we are not dealing with the inverse of a function, but rather with a preimage. Namely, if $f:A\to B$ and $T\subseteq B,$ then $$f^{-1}(T):=\{a\in A\mid f(a)\in T\}.$$ Observe that $b\in T_1\cap T_2$ if and only if $b\in T_1$ and $b\in T_2$. Can you use the definition of a preimage, and take it from there? |
H: Function Proof that deals with Set Theory
How to prove that this does not hold if $H$ is not injective or how to show that this equation is true just for injective functions?
$$H(X\cap Y)=H(X)\cap H(Y)$$
AI: If $H$ is not injective, that means that there are two points $x,y$ such that $H(x) = H(y)$.
Then let $X=\{x\}, Y=\{y\}$. $X \cap Y = \emptyset$, so $H(X \cap Y) = \emptyset$,
but $H(X) =\{H(x)\}$, $H(Y) = \{H(y)\}= \{H(x)\}$, and so $H(X) \cap H(Y) = \{H(x)\} \ne \emptyset$. |
H: Simple recurrence relation - 1D
I know this is a very simple recurrence relation, but how would you go on solving it?
$$x(n+1)=\frac{x(n)}{1+x(n)}$$
AI: Hint: Let $y(n)=\frac{1}{x(n)}$. The recurrence for $y(n)$ is very pleasantly simple. |
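Carrying the hint through (my own completion, for reference): with $y(n)=1/x(n)$,
$$y(n+1)=\frac{1+x(n)}{x(n)}=y(n)+1\;\Longrightarrow\;y(n)=y(0)+n\;\Longrightarrow\;x(n)=\frac{x(0)}{1+n\,x(0)},$$
valid as long as $x(0)\neq 0$ and no denominator vanishes along the way.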
H: Number of ways of expressing $n$ as a sum of positive integers
a) Let $s_n$ denote the number of ways of expressing $n$ as a sum of positive integers. Thus $s_1=1$, $s_2=2$, and $s_3=4$ (the four ways are $3$, $2+1$, $1+2$, and $1+1+1$). Prove that $s_n=s_{n-1}+s_{n-2}+\cdots+s_1+1$. Hence calculate $s_{10}$. Find a formula for $s_n$ in terms of $n$.
b) Suppose that we decide not to distinguish between '$1+2$' and '$2+1$'; Let $\sigma_n$ denote the number of ways of expressing $n$ as a sum of positive integers when the order of the terms does not matter. Thus $\sigma_3=3$. Calculate $\sigma_n$ ($1\leqslant n\leqslant 6$). Find $\sigma_{10}$. Express the $n$th term $\sigma_n$ in terms of earlier terms. Then try to find a formula for $\sigma_n$ in terms of $n$.
I need help on part (b). I've done part (a), which was easy enough ($2^{n-1}$).
I'm struggling to find a formula for $\sigma_n$ though.
This is an exercise from "The Mathematical Olympiad Handbook - An introduction to problem solving".
AI: As André pointed out, there is no nice closed form, but there is a beautiful recurrence given by Euler:
$$p(k)=\sigma_k=\sum_{d=1}^{\infty}(-1)^{d+1}\left(p\left(k-\frac{d(3d-1)}{2}\right)+ p\left(k-\frac{d(3d+1)}{2}\right)\right)$$ |
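A direct implementation of Euler's recurrence (my addition; by convention $p(0)=1$ and $p(k)=0$ for $k<0$), which in particular gives $\sigma_{10}=42$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(k):
    if k < 0:
        return 0
    if k == 0:
        return 1
    total, d = 0, 1
    while d * (3*d - 1) // 2 <= k:
        g1, g2 = d * (3*d - 1) // 2, d * (3*d + 1) // 2
        total += (-1)**(d + 1) * (p(k - g1) + p(k - g2))
        d += 1
    return total

print([p(n) for n in range(1, 11)])   # [1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```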
H: Polynomial fullfilling certain derivative properties
I have to solve the following task:
For an $n \in \mathbb N$, find a polynomial $f(x)$ such that $f^{(k)}(1) = 0$ for all $k < n$ and $f^{(n)}(1)=1$.
I have tried out a couple of variations - without success. Is there a systematic way to solve this problem?
Thanks
AI: It is easier to look for a polynomial centered at $0$ first, then shift the domain.
If $p(x) = \sum_{k=0}^n p_k x^k$, then $p^{(j)}(0) = j!p_j$.
Using the numbers you provided, this gives $p(x) = \frac{1}{n!} x^n$.
Now shift the domain, to get $f(x) = p(x-1) = \frac{1}{n!} (x-1)^n$. |
H: Find the limit: $\lim \limits_{x \to 1} \left( {\frac{x}{{x - 1}} - \frac{1}{{\ln x}}} \right)$
Find:
$$\lim\limits_{x \to 1} \left( {\frac{x}{{x - 1}} - \frac{1}{{\ln x}}} \right)
$$
Without using L'Hospital or Taylor approximations
Thanks in advance
AI: Note that $$\log(x)=\int_1^x \frac{dt}{t}$$
and that for $1 < t < x$ $$2-t < \frac{1}{t} < \frac{\frac{1}{x}-1}{x-1}(t-1) +1.$$
Integration in $t$ from $1$ to $x$ results in $$0 < \frac{(3-x)(x-1)}{2} < \log(x) < \frac{(1+\frac{1}{x})(x-1)}{2}$$ for all $x\in(1,3)$. Therefore on the same interval
$$ \frac{2-x}{3-x}<\frac{x}{x-1}-\frac{1}{\log(x)}< \frac{x}{x+1} $$
and $$\lim_{x\downarrow 1}\left(\frac{x}{x-1}-\frac{1}{\log(x)}\right)=\frac{1}{2}.$$
For $x\in(0,1)$ all inequalities change direction and therefore also
$$\lim_{x\uparrow 1}\left(\frac{x}{x-1}-\frac{1}{\log(x)}\right)=\frac{1}{2}.$$ |
H: Prove or disprove a set $F$ is closed.
This is an example in my book that talks about $F$ being precompact;
Let $F$ be the subset of $C([0,1])$ that consists of functions $f$ of the form
$$f(x) = \sum_{n=1}^{\infty}a_n\sin(n\pi x) \hspace{.5cm} \text{ with } \hspace{.5 cm} \sum_{n=1}^{\infty} n|a_n|\leq 1.$$
I know $F$ is bounded and equicontinuous so it is precompact. I am interested in knowing whether $F$ is also closed or not, so I could use the Arzelà-Ascoli theorem to say that $F$ is either compact or not compact.
AI: If $(f_k,k\geqslant 1)$ is a sequence in $F$ which converges uniformly to $f\in C[0,1]$, then $((a_{n,k})_n,k\geqslant 1)$ is Cauchy in $\ell^2$ (using Parseval's equality), so for each $n$, $a_{n,k}\to b_n$ as $k\to \infty$.
Now we have to show that $f\in F$. We have $\sum_{n= 1}^Nn|b_n|=\lim_{k\to \infty}\sum_{n=1}^Nn|a_{n,k}|\leqslant 1$ hence $\sum_n n|b_n|\leqslant 1$ so it's enough to show that $f(x)=\sum_nb_n\sin(n\pi x)$. To this aim we can compute the Fourier series (we have normal convergence). |
H: compute the following integral using Cauchy Integral Formula
Prove that $\int_{0}^{\pi}e^{k\cos t}\cos (k\sin t)\,dt=\pi$ using the Cauchy Integral Formula. But I don't know how. I want to rewrite the integral as a line integral first.
AI: First of all, note that one may exploit symmetry in the integral and rewrite as
$$\frac12 \int_0^{2 \pi} dt \, e^{k \cos{t}} \, \cos{(k \sin{t})}$$
Now sub $z=e^{i t}$, $dt = -i dz/z$, and note that
$$e^{k \cos{t}} \, \cos{(k \sin{t})} = \Re{\left [ e^{k z}\right]}$$
(To see this, expand $e^{k e^{i t}}$ into real and imaginary parts.) The integral is therefore equal to
$$\Re{\left [-\frac{i}{2} \oint_{|z|=1} dz\, \frac{e^{k z}}{z}\right]}$$
By Cauchy's theorem/residue theorem, the integral is equal to $i 2 \pi$ times the residue at the pole $z=0$, which is $1$. The result to be shown follows. |
H: Order of $g^i$ in cyclic group of order $n$
I am very new to groups and still have problems with proving some basic facts. I stumbled on such theorem:
Let $g$ be a generator of cyclic group of order $n$. Then $g^i$ has order $\frac{n}{\gcd(n,i)}$
And I'm frustrated with no ideas how to prove it. Clearly $g$ has order $n$, so $g^n=1$. We are looking for the smallest $d$, such that $(g^i)^d=1$. So $id \ | \ n$ or $n \ | \ id$ ? I suppose the first case, but don't see exactly why. Am I on the right direction? Can anybody help what to do next?
AI: $(g^i)^d=g^{id}$. Noting $\text{ord}\ {g}=n$, if $g^{id}=e$, $n|id$. Note that $id$ is then, by definition, $\text{lcm} (n,i)$. Moreover, $\text{lcm}(n,i)\gcd(n,i)=n\cdot i$.
I'll let you finish the proof. |
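To spell out the remaining step (my addition): the smallest such $d$ satisfies $id=\operatorname{lcm}(n,i)$, so
$$d=\frac{\operatorname{lcm}(n,i)}{i}=\frac{ni/\gcd(n,i)}{i}=\frac{n}{\gcd(n,i)},$$
which is exactly the claimed order of $g^i$.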
H: Equation of a plane containing a point and perpendicular to a line
Find an equation of the plane containing the point $(1, 1, -1)$ and perpendicular to the line through the points $(2, 0, 1)$ and $(-1, 1, 0)$
This is what I have:
I first find the vector between the points: $\vec{n}^{\ } = (2,0,1) - (-1, 1, 0) = (3, -1, 1)$
Next I find the formula for the plane that is perpendicular to the vector by taking the dot product of the vector and the given point + another point of the plane
$(3, -1, 1) \cdot (1 + x, 1 + y, z -1 )\\ 3 + 3x -1 - y -1 -z\\ 3x - y - z +1 = 0$
I am not at all sure if I followed the correct steps
AI: The point-normal form of a plane is $\vec n \cdot(p - p_0) = 0$. If you think about the meaning of this, you will find that for any point $p$ on the plane, if you form a vector from that point and a point known to be on the plane, $p_0$, that vector will be orthogonal to your normal vector $\vec n$. |
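Worked out with the numbers from the question (my addition; $\vec n=(3,-1,1)$ is the direction vector computed above and $p_0=(1,1,-1)$ the given point):
$$3(x-1)-(y-1)+(z+1)=0\quad\Longleftrightarrow\quad 3x-y+z-1=0.$$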
H: Evaluating $\lim\limits_{n \to \infty} \sum_{i=1}^{n}\sqrt{\frac{i}{n^{3}}}$ using Riemann Sums and FTC
Factoring out the $\frac {1}{n}$ out of the sigma, we get:
$$\lim_{n \to \infty} \sum_{i=1}^{n}\sqrt{\frac{i}{n}}\cdot\frac{1}{n}$$
which looks awfully similar to
$$\lim_{n \to \infty} \sum_{i=1}^{n}\sqrt{x^{*}_i}\cdot\Delta{x}$$
$f(x) = \sqrt{x}$
$\Delta{x} = \frac{b - a}{n} = \frac{1}{n}$
So, I simply evaluate
$$\int^{b}_{a} \sqrt{x}\cdot dx$$
with $b = 1$ and $a = 0$ and get the answer $\frac{2}{3}$. But... why? Why is $b = 1$ and why is $a = 0$?
AI: Expanding the Riemann Sum:
$$
\begin{align}
\sum_{i=1}^n\sqrt{\frac in}\frac1n
&=\sum_{i=1}^nf\left(\frac in\right)\frac1n\\
&=\sum_{i=1}^nf\left(x_i\right)(x_i-x_{i-1})\\
&\to\int_0^1f(x)\,\mathrm{d}x\\
&=\int_0^1\sqrt{x}\,\mathrm{d}x\\[6pt]
&=\frac{2}{3}
\end{align}
$$
Since $x_i=\frac in$, $x_i$ ranges from $0$ to $1$. |
H: Question on Exact sequence
Let $$0 \rightarrow A \rightarrow B \rightarrow C\rightarrow 0$$ be an exact sequence, with $f:A \rightarrow B$, and $g: B \rightarrow C$. Let $Q$, $P$ be two submodules of $B$.
I want to determine whether the following fact is true:
$g(Q)=g(P)$ and $f^{-1}(Q)=f^{-1}(P)$, implies $P=Q$.
I am not getting a counterexample. I suppose that it is not true. Please help. Thanks in advance.
AI: Let
$$A:=\mathbb{R}, B:=\mathbb{R}^2, C:=\mathbb{R},$$
$$P:=\{(x,0)\;|\; x\in \mathbb{R}\}, Q:=\{(0,x) \; |\;x\in \mathbb{R}\}$$
and define $f,g $ by $f(x)=(x,x), \; g(x,y)=x-y.$
Then the sequence is exact (in $\mathrm{Mod-}\mathbb{R}$), $f^{-1}(P)=f^{-1}(Q)=\{0\}$ and $g(P)=g(Q)=C$, however, $P\neq Q$. |
H: ZFC and irrational numbers
I understand how integers and rationals are expressed/derived in ZFC. But what about the irrational numbers? Can they also be expressed?
If not, are there other axiomatic set theories able to express them?
As for Dedekind cuts, from my understanding (maybe wrong), any irrational in question must have a 1-1 explicit function to a natural number, in order for the method to work (As an example the square root of 2 has such a function).
It boils down to this: is there an isomorphism between the set of irrationals (as "just" numbers) and a set of pure sets?
AI: Dedekind cuts are the most common way. The idea is that, given the rationals, you take the reals to be all the downward closed sets of the rationals (including all the ones with no greatest element). For a concrete example, $\sqrt{2}$ is $\{x\in\mathbb{Q}: x\le 0 \text{ or } x^2<2\}$.
The version I'm chiefly familiar with is Quine's from "Set Theory and its Logic". He defines a rational $a/b$ as $\{x+(x+y)^2:x\cdot b < a\cdot y\}\subset\mathbb{N}$. Then given a collection of such rationals $A$, their upper bound is $\bigcup A$. Most of these upper bounds fail to be other rationals themselves. In this treatment, rationals are a subset of the reals as they are intuitively, and "$\leq$" reduces to $\subseteq$. |
H: Proving reflexivity and transitivity
I want to show that if $R$ is reflexive and transitive then $R^{-1}$ is also.
Transitivity:
$$(a,b)\in R^{-1} \wedge (b,c)\in R^{-1} \Rightarrow (b,a)\in R \wedge (c,b)\in R \Rightarrow
(c,a)\in R \Rightarrow (a,c) \in R^{-1}$$
Reflexivity:
$$(a,a)\in R^{-1} \Rightarrow (a,a) \in R$$
Reflexivity seems little bit odd, is it correct or should I add something?
AI: Your proof of reflexivity is perfect.
For transitivity, not quite. We assume that $(a,b),(b,c)\in R^{-1},$ but we don't want to assume that $(a,c)\in R^{-1}.$ We want to conclude that. As a hint for how to do that, since $(a,b),(b,c)\in R^{-1},$ what are some elements that you know must be in $R$?
Edit: So long as you justify each step, it looks good. |
H: What shape is traced out by this animation?
Found this animation circulating online, and was wondering what shape the rod's end traces out. It seems to be an ellipse, but can that be proved somehow?
AI: Call the two pivots moving on a cross $a$ and $b$. $a$ moves on $\{(0,t)\mid t\in[0,1]\}$, and $b$ on $\{(s,0)\mid s\in[0,1]\}$. Their position must furthermore satisfy
$$s^2+t^2=1$$
The drawing point is positioned at
$$a(t) + (1+l)(b(s)-a(t)) =$$
$$= ((1+l)\sqrt{1-t^2},t+(1+l)t)$$
By parametrizing $t = \cos(\phi)$ we obtain the position
$$((1+l)\sin(\phi),(2+l)\cos(\phi))$$
which is indeed an ellipse when we let $\phi$ run from $0$ to $2\pi$.
Sorry if my solution is a bit messy, but I worked it up while writing it. |
H: What does this notation mean? $f(x|\theta)=\frac{3x^2}{\theta^3}I_{(0,\theta)(x)}$
Particularly I want to know what the meaning of $I_{(0,\theta)(x)}$ is here.
AI: It's called the indicator function. It means:
$$
I_{(0, \theta)}(x)=\begin{cases}
1, \quad \text{if } 0<x<\theta\\
0, \quad \text{otherwise}
\end{cases}
$$
I don't think it's very common that the $(x)$ part is in the subscript too (as in your question).
Using the indicator function the following are for instance equivalent:
$$
f(x|\theta)=\frac{3x^2}{\theta^3}, \quad 0<x<\theta
$$
and
$$
f(x|\theta)=\frac{3x^2}{\theta^3}I_{(0, \theta)}(x), \quad x\in\mathbb{R}.
$$ |
H: Calculating $\int_{- \infty}^{\infty} \frac{\sin x \,dx}{x+i} $
I'm having trouble calculating the integral
$$\int_{- \infty}^\infty \frac{\sin x}{x+i}\,dx $$
using residue calculus. I've previously encountered expressions of the form
$$\int_{- \infty}^\infty f(x) \sin x \,dx $$
where you would consider $f(z)e^{iz}$ on an appropriate contour (half circle), do away with the part of the contour that wasn't on the real axis by letting the radius go to infinity, then recover the imaginary part of the answer to get back the sine. However here, I can't replace the sine with $e^{iz}$ in my complex function because $$\operatorname{Im} \frac{e^{ix}}{x+i} \neq \frac{\sin x}{x+i}\cdots$$
How to remedy this? I'm not sure if substituting $\sin x = \frac{1}{2i}(e^{ix}-e^{-ix})$ and solving two integrals is how this problem is meant to be solved, although I'm 99% sure it would work
AI: You're on the right track. Rewrite $\sin{x}=(e^{i x}-e^{-i x})/(2 i)$, but evaluate separately. For $e^{i x}$, consider
$$\frac{1}{2 i}\oint_{C_+} dz \frac{e^{i z}}{z+i}$$
where $C_{+}$ is the semicircle of radius $R$ in the upper half plane. The integral is then zero because the pole at $z=-i$ is outside the contour. Thus, by Jordan's lemma, we have
$$\int_{-\infty}^{\infty} dx \frac{e^{i x}}{x+i}=0$$
For the other piece, consider
$$\frac{1}{2 i}\oint_{C_-} dz \frac{e^{-i z}}{z+i}$$
where $C_{-}$ is the semicircle of radius $R$ in the lower half plane. By Jordan's lemma and the residue theorem,
$$-\int_{-\infty}^{\infty} dx \frac{e^{-i x}}{x+i}= i 2 \pi e^{-i (-i)} = \frac{i 2 \pi}{e}$$
Note the minus sign before the integral because we require the contour to be traversed in a positive sense (counterclockwise). Therefore
$$\int_{-\infty}^{\infty} dx \frac{\sin{x}}{x+i} = \frac{1}{2 i} \frac{i 2 \pi}{e} = \frac{\pi}{e}$$ |
H: Proving that the Intersection of any Collection of Compact Sets is Compact
I was trying to do this problem this way:
Let $\mathcal{B}=\{B_i\}$ be a collection of compact sets. By Heine-Borel, each of the $B_i$'s are closed and bounded. We already know that the intersection of a collection of closed sets is once again closed, so $\cap \mathcal{B}$ is closed.
If I can prove $\cap\mathcal{B}$ is bounded as well, I can use Heine-Borel to say it is compact. So I was not sure how to show it was bounded, but a friend showed me his way of doing it. He did it the following way:
Let $b,a\in\mathbb{R}$ be an upper and lower bound for $B_1$, respectively. Since $\cap\mathcal{B}\subset B_1, a$ is a lower bound for $\cap \mathcal{B}$ and $b$ is an upper bound for $\cap\mathcal{B}$. Therefore, $\cap\mathcal{B}$ is bounded and then by Heine-Borel, it is compact.
My question is on the paragraph right above. I don't understand why what he did means that $a$ and $b$ are lower and upper bounds for $\cap\mathcal{B}.$ Can anyone explain why this is? Thanks.
AI: $I=\cap B_i$ is closed because an arbitrary intersection of closed sets is closed. As you mentioned, any compact subset of $\mathbb{R}^n$ is closed and bounded.
Now $I$ is a closed subset of a compact set (any $B_i$ will suffice). Hence it is also compact (Why? Proof below). To answer your question about the boundedness of $I$, it is a subset of a bounded set (any $B_{i}$ suffices) hence it is bounded.
We need to prove that a closed subset of a compact space is compact.
Let $X$ be any hausdorff space. ($\mathbb{R}^n$ certainly is.)
Let $A\subset C$ be closed with $C$ compact. Take an open covering $M=\{A_{\alpha}\}$ of $A$. Since $A$ is closed, $U=X-A$ is open. Hence $N= M\cup \{U\}$ is an open covering of $C$. Since $C$ is compact, $N$ has a finite subcover for $C$. This obviously covers $A$ as well, thus $A$ has a finite subcover. If the finite subcover of $C$ involves $U$, we can always eliminate it.
I assumed $X$ is hausdorff because it can be shown that any compact subset of a hausdorff space is closed. |
H: Combinatorics homework
Here's a question from my homework. First 2 questions I solved (but would appreciate any input you can give on my solution) and the last question I'm just completely stumped. It's quite complicated.
On the shelf there are 5 math books, 3 science fiction books and 2 thrillers (all of the books are different)
1) In how many different combinations can you organize the books on the shelf, without any limitation? **My answer: $10!$ since all the books are different, it's just organizing 10 books.
2) In how many different combinations can you organize the books so that books of the same kind are next to each other? **My answer: $3!*5!*3!*2!$ - first imagine all the math books are just 1 block, all the sci fi books are 1 block, and that the 2 thrillers are 1 block. I have $3!$ ways to organize these blocks on the shelf. The $5!*3!*2!$ is due to the order of the books inside the block.
3) How many different combinations are there to organize the books such that the sci fi books are together, and there is at least 1 book inbetween the thrillers?
**My answer: I don't know. It's too complex. I thought maybe simplying it by saying at least 1 book in between = combinations with 1 book in between + combinations with 2 books etc but even that doesn't make it any simpler...
Help? :)
AI: Well, first of all, let's see how many ways we can put other books around the thrillers, ignoring distinctions between books of the same type. To begin with, we are going to have two thrillers, with space between and to either side:
$$\text{_T_T_}$$
Now, we must put another book in between them. There are two cases to consider, here.
Case 1: There are no sci-fi books between the thrillers.
After placing the sci-fi-books, our arrangement is one of the following:
$$\text{_SSS_T_T_}$$
$$\text{_T_T_SSS_}$$
In either case, we must place a math book in between the thrillers, so our possible arrangements are:
$$\text{_SSS_T_M_T_}$$
$$\text{_T_M_T_SSS_}$$
Now we have $4$ more math books to place. Note that if we put a math book next to the math book we've already placed, then it doesn't really matter which side we put it on (as far as distinguishing between these arrangements), so these arrangements can be thought of as:
$$\text{_SSS_T_MT_}$$
$$\text{_T_MT_SSS_}$$
Any or all of the $4$ remaining math books can be placed in any of the remaining $4$ slots. Since we are still ignoring distinctions between books of the same type, distributing $4$ identical $\text{M}$s among $4$ slots can be done in $\binom{4+4-1}{4}=\binom{7}{4}=35$ ways (stars and bars), so there are $2\cdot 35=70$ arrangements of $3$ $\text{S}$s, $5$ $\text{M}$s, and $2$ $\text{T}$s in which the $\text{T}$s are not consecutive, the $\text{S}$s are consecutive, and the $\text{S}$s are not between the $\text{T}$s.
Case 2: There are sci-fi books between the thrillers.
Since we must place the sci-fi books in a block, then our arrangement is:
$$\text{_T_SSS_T_}$$
Any or all of the $5$ math books can be placed in the remaining $4$ slots; distributing $5$ identical $\text{M}$s among $4$ slots can be done in $\binom{5+4-1}{5}=\binom{8}{5}=56$ ways, so there are $56$ arrangements of $3$ $\text{S}$s, $5$ $\text{M}$s, and $2$ $\text{T}$s in which the $\text{T}$s are not consecutive, the $\text{S}$s are consecutive, and the $\text{S}$s are between the $\text{T}$s.
Between the two cases, we have $70+56=126$ arrangements of $3$ $\text{S}$s, $5$ $\text{M}$s, and $2$ $\text{T}$s in which the $\text{T}$s are not consecutive, but the $\text{S}$s are consecutive. Now, all that remains is to order the books in each category. There are $5!3!2!$ ways to do this, and so we have $126\cdot 5!3!2!=181{,}440$ arrangements of the given type.
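For readers who want to double-check the count, here is a brute-force Python sketch (the labels M1–M5, S1–S3, T1, T2 are arbitrary):

```python
from itertools import permutations

# Brute-force check: count orderings of the 10 distinct books in which the
# three sci-fi books are consecutive and the two thrillers are not adjacent.
books = ['M1', 'M2', 'M3', 'M4', 'M5', 'S1', 'S2', 'S3', 'T1', 'T2']
count = 0
for arrangement in permutations(books):
    s_pos = sorted(i for i, b in enumerate(arrangement) if b[0] == 'S')
    t_pos = sorted(i for i, b in enumerate(arrangement) if b[0] == 'T')
    sci_fi_together = s_pos[2] - s_pos[0] == 2      # the three S's occupy consecutive slots
    thrillers_apart = t_pos[1] - t_pos[0] > 1       # at least one book between the T's
    if sci_fi_together and thrillers_apart:
        count += 1
print(count)  # 181440
```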
H: Sum of self power
Is there a formula to calculate the sum of a number to the power of this same number, like:
$$1^1 + 2^2 + 3^3 + 4^4 + 5^5 + ... + n^n$$?
or
$$x^x + (x+1)^{(x+1)} + (x+2)^{(x+2)} + ... + (x+n)^{(x+n)}$$
AI: No, there is no closed formula. But we do know that the sum is of order $n^n$: the last term dominates, and all the other terms together can safely be ignored asymptotically.
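To see this numerically, a quick Python sketch of the partial sums and their ratio to the last term:

```python
def self_power_sum(n):
    # 1^1 + 2^2 + ... + n^n
    return sum(k**k for k in range(1, n + 1))

for n in (5, 10, 20, 50):
    print(n, self_power_sum(n), self_power_sum(n) / n**n)  # the ratio tends to 1
```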
H: How to prove if n>2 is a prime number, then n is odd?
I understand why this is true, but I have no idea on how I'd go about proving it by writing a detailed structured proof.
Since a prime number is a integer, I started off like this:
Assume n in Z
Assume n > 2 and is a prime
Then...?
I also know that for a number to be even, it has to be divisble by 2. And if n is divisible by 2, then it can't be a prime number (exception is 2 = 2 x 1).
But how would I write this out formally?
Thank you!
AI: Assume $n>2$ is even. As $n$ is even we can write $n=2k$ with some integer $k$.
From $n>2$ we obtain $k=\frac n2>1$, hence $n=2k$ is a nontrivial factorization, hence $n$ is not prime.
H: How would I solve this?
I don't understand this question. Or more precisely how they derived the answer for the examples given. Can someone explain? Thanks.
E.g. All integers can be represented using the base $B=-10$ using the digits $0, 1, 2, \dots, 9$ and without using a negative sign in front of the number.
For example, $-1467 = (2673)_{-10}$ and $10 = (190)_{-10}$.
a) What decimal number does $(56)_{-10}$ represent?
b) Determine the base $B=-10$ representation of the decimal number $-209$.
AI: If you think of regular decimal notation, $456_{10}=4\cdot10^2+5\cdot10^1+6\cdot 10^0$. In any other base $b$, you do the same thing: $456_{b}=4\cdot b^2+5\cdot b^1+6\cdot b^0$. There is no requirement that $b$ be positive, so you can plug in $b=-10$. Then check that $-1467_{10}=2\cdot (-10)^3+6 \cdot (-10)^2+7 \cdot (-10)^1+3\cdot (-10)^0$ The conversion is done the same way you convert bases as well. It's fun to play with. |
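If it helps to experiment, here is a small Python sketch of the usual conversion algorithm (repeated division, forcing a non-negative remainder at each step); the function name and sample values are just for illustration:

```python
def to_negabase(n, base=-10):
    """Return the digit string of the integer n written in the (negative) base."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, base)
        if r < 0:          # force the digit into the range 0..9
            r -= base      # i.e. r += 10
            n += 1
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_negabase(-1467))   # '2673', matching the example above
print(to_negabase(10))      # '190'
print(5 * (-10) + 6)        # part (a): the value of (56) in base -10
print(to_negabase(-209))    # part (b)
```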
H: Equivalent definitions of atom in a Boolean Algebra
I want to show that the following conditions are equivalent for a nonzero element $a$ in a Boolean algebra $\mathcal{B}$:
1) for all $x\in\mathcal{B},a\leq x$ or $a\leq x'$
2) for all $x,y\in\mathcal{B},a\leq x\sqcup y\Rightarrow a\leq x$ or $a\leq y$
3) $a$ is minimal among nonzero elements of $\mathcal{B}$
I can't show any of the implications. Could you give me some hint?
AI: For (1)$\implies$(2), suppose that $a\not\le x$ and $a\not\le y,$ and use (1) to show that we must have $a\not\le x\sqcup y,$ thus proving the contrapositive of (2), so proving (2).
For (2)$\implies$(1), take any $x\in\mathcal B$ and observe that $x\sqcup x'=1,$ so....
For (1)$\implies$(3), suppose $x\in\mathcal B$ such that $x<a.$ It follows that $a\le x'$ by hypothesis, so since $a'<x'$ (why?), then $1=a\sqcup a'\le x'$ (why?), and so....
For (3)$\implies$(1), take any $x\in\mathcal B$ and note that $a\sqcap x\le a$ and $a\sqcap x'\le a.$ Note further that $a\sqcap x$ and $a\sqcap x'$ cannot both be $0$ (why?), so without loss of generality, $0<a\sqcap x\le a,$ so by (3) we have $a\sqcap x=a,$ and so.... |
H: I reach a half-dead end when trying to find the minimum possible value of an equation
I have the equation $x^2 + 4xy + 5y^2 - 4x - 6y +7$ and I'm supposed to transform it to look like this:
$[x + 2(y - 1)]^2 + (y + 1)^2 + 2$
First I transformed it into:
$x^2 + 4x(y - 1) + 5y^2 - 6y + 7$
and then completed the square for $x$:
$[x + 2(y - 1)]^2 - 4(y - 1)^2 + 5y^2 - 6y + 7$
which sort of looks like the first part of the answer, however when I complete the square for the second part:
$[x + 2(y - 1)]^2 - 4(y - 1)^2 + 5(y - 6/5)^2 - 36/25 + 7$
which seems wrong and I can't get it into the state of the answer from here.I have this $- 4(y - 1)^2$ that is ruining it for me.What else can I do?
AI: Try expanding the problem part first, collecting like terms, then complete the square:
\begin{align*}
&[x + 2(y - 1)]^2 - 4(y - 1)^2 + 5y^2 - 6y + 7 \\
&= [x + 2(y - 1)]^2 - 4(y^2 - 2y + 1) + 5y^2 - 6y + 7 \\
&= [x + 2(y - 1)]^2 - 4y^2 + 8y - 4 + 5y^2 - 6y + 7 \\
&= [x + 2(y - 1)]^2 + y^2 + 2y + 3 \\
&= [x + 2(y - 1)]^2 + (y + 1)^2 + 2 \\
\end{align*} |
H: How to identify coefficients with the binomialcoefficient
I tried to identify the coefficients $\gamma_{n\nu\mu}$ in
$$(a+b+c)^n = \sum_{\nu=0}^{n} \sum_{\mu = \nu}^{n} \gamma_{n\nu\mu} ~a^\nu b^{\mu- \nu}c^{n- \mu}.$$
I used the Binomial Theorem, but didn't succeed, can you help me?
Binomial Theorem:
$$(a+b)^n = \sum_{k=0}^{n} \binom{n}{k} a^{n-k}b^k$$
AI: $$
\begin{align}
(a+b+c)^n &= \sum_k\binom nka^k(b+c)^{n-k}
\\ &= \sum_k\binom nka^k\sum_l\binom klb^lc^{n-k-l}
\\ &= \sum_k\sum_l\binom nk\binom kla^kb^lc^{n-k-l}
\\ &= \sum_{k+l+m=n}\binom nk\binom kla^kb^lc^m
\\ &= \sum_{k+l+m=n}\binom n{k,l,m}a^kb^lc^m
\qquad\text{where $\binom n{k,l,m}\overset{\rm def}=\binom nk\binom kl=\frac{n!}{k!l!m!}.
$}
\end{align}
$$
Such products of binomial coefficients are called multinomial coefficients. The coefficent $\binom n{k,l,m}$ counts permutations of the letters of the word aaa...bb...ccc... of length $n$ with $k$ letters 'a', with $l$ letters 'b', and with $m$ letters 'c'; for instance $\binom7{2,1,4}$ counts permutations of aabcccc. Such permutations (with repetitions) are the products you get when working out $(a+b+c)^n$, leaving monomials as they are obtained from applying the distributive law, without regrouping the same variables together. |
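As a concrete check, here is a small sympy sketch using the example word aabcccc, i.e. $n=7$, $k=2$, $l=1$, $m=4$:

```python
from sympy import symbols, expand, factorial

a, b, c = symbols('a b c')
n, k, l, m = 7, 2, 1, 4
poly = expand((a + b + c)**n)
coefficient = poly.coeff(a, k).coeff(b, l).coeff(c, m)   # coefficient of a^2*b*c^4
multinomial = factorial(n) / (factorial(k) * factorial(l) * factorial(m))
print(coefficient, multinomial)   # both equal 105
```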
H: Integration: product containing square root.
How can I integrate the following expression?
I have tried using u-substitution, but I am having problems with integrating the entire expression. So far, I have the following:
Any help on this is highly appreciated!
AI: You have it. $u=1+e^{-kt}$ and $du=-ke^{-kt}dt$, so $e^{-kt}\,dt = -\frac{du}{k}$, where $k$ is just a constant. Rewrite this and you have $\int \sqrt{u}\frac{du}{-k}$, which I bet you know how to integrate!
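The original expression appears only as an image; judging from the substitution above, the integrand is (up to a constant) $e^{-kt}\sqrt{1+e^{-kt}}$. Under that assumption, a quick sympy check of the resulting antiderivative $-\frac{2}{3k}(1+e^{-kt})^{3/2}$:

```python
from sympy import symbols, exp, sqrt, Rational, diff, simplify

t, k = symbols('t k', positive=True)
integrand = exp(-k*t) * sqrt(1 + exp(-k*t))               # assumed from the substitution above
antiderivative = -2 * (1 + exp(-k*t))**Rational(3, 2) / (3*k)
print(simplify(diff(antiderivative, t) - integrand))       # 0
```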
H: Prove that if the set B is in the finite union of the sets Ai, if p is limitpoint of B then p is limitpoint for at least one Ai.
I am not that familiar with writing proofs and would like some feedback checking if the steps in my proof are valid.
Problem statement,
Let $B_n = \bigcup _{i=1} ^n A_i, \quad Ai \in X $ and X is a metric space.
If p is a limit point of the set $B_n$ then p is a limit point of at least one of the sets $A_i$.
Proof:
Let $p$ be a limit point of the set $B_n$; then every open neighborhood around $p$ contains points $p_{k} \in B_n$ distinct from $p$. Since all these points $p_k$ are in the finite union of the sets $A_i$, the point $p$ must also be a limit point of at least one of the sets $A_i$, which proves the statement.
Does reasoning look ok?
AI: Let $p$ be a limit point of $B_n$. Assume $p$ is not a limit point of all $A_i$. Then there exist neighbourhoods $\{U_i\}$ of $p$ such that $(U_i\cap A_i)-\{p\}=\emptyset$ for all $1\leq i\leq n$. Set $$U=\bigcap U_i$$
This is a neighbourhood of $p$ such that $(U\cap A_i)-\{p\} =\emptyset$ for all $1\leq i\leq n$ (Why?).
But this means that $p$ is not a limit point of $B_n$ since it is the union of $A_i$'s
A comment: I think the subscript in $B_n$ is unnecessary but that is a personal preference. |
H: Proving the limit of a function of a sequence is equal to the function of the limit of that sequence
Suppose $f$ is a continuous function at $x = c$ in $[a,b]$. Prove that for any sequence ${x_n}$ in $[a,b]$ converging to $c$, the sequence $\{f(x_n)\}$ converges to $f(c)$. That is, $$ \lim_{n\to\infty}f(x_n)= f\left(\lim_{n\to\infty}x_n\right)$$
This proof seems simple but there are a few things that I need to know first. If $\{x_n\}$ converges to $c$, is it sufficient to substitute $c$ in for $\lim_{n\to\infty}x_n$? Also needing some guidance on the structure of this proof. Thanks!
AI: Just write down the definitions:
$x_n$ converges to $c$ if and only if $\forall \delta > 0$ there exists $N = N(\delta)$ such that $\forall n \ge N$ we have $|x_n - c| < \delta$
$f$ is continuous if and only if $\forall \eta > 0$ there exists $\gamma = \gamma(\eta)$ such that if $|x - y| < \gamma$ then $|f(x) - f(y)| < \eta$.
Now we want to prove the following claim
$\forall \epsilon > 0$ there exists $M = M(\epsilon)$ such that if $n \ge M$ then we have $|f(x_n) - f(c)| < \epsilon$.
Hint: if $x_n \to c$ then you can make $|x_n - c|$ small enough to use the continuity of $f$ (say, for example, smaller than $\gamma(\epsilon)$). |
H: What are some good introductory books on complex analysis?
I am looking for self study books or general interest (above the layman level) books on complex analysis.
AI: A few of my favourites:
Stewart and Tall Complex Analysis - does not demand massive pre-requisites.
Needham Visual Complex Analysis - fantastic for getting a visual feel for what is going on with complex functions and the complex plane.
Palka An Introduction to Complex Function Theory - a very thorough treatment for a first course.
Flanigan Complex Variables - very nice introduction via harmonic functions, and has the advantage of being a cheap Dover edition.
I am also a great fan of some of the older treatments by Ahlfors, Nevanlinna and Paatero, as well as Rudin, but could not honestly recommend any of these for an introductory course. |
H: A continuous, bijective, function such that two different metrics have the same open sets.
Let $ f: \Re \rightarrow \Re$ be a continuous and strictly increasing function such that $f(0)=0$. Suppose $d_1$ and $d_2$ are two different metrics on the nonempty set $X$ and that $d_1(x,y)=f(d_2(x,y))$. Show that $(X,d_1)$ and $(X,d_2)$ have the same open sets.
$\textbf{My Attempt:}$
Let $r_1$ and $r_2$ be the collections of open subsets in the $d_1$ and $d_2$ metrics. We want to show that $r_1=r_2$. So let $U \in r_1$. I have seen, from an answer to a similar question on this website, that I must show that for any $x \in U$ there exists $\epsilon >0$ such that the open ball $B_{d_2} (x,\epsilon) \subseteq U$. Also, since $x \in U$ there exists $\delta > 0$ such that $B_{d_1} (x,\delta) \subseteq U$. I am quite lost as to what to do from here on using that $f$ is a continuous bijection. We have not yet covered homeomorphisms in class so I will not be able to use any such theorems.
Also, sorry for not directly linking to the similar question I mention. I still need to look up how to do this.
AI: Note that since $f$ is increasing and continuous with $f(0)=0,$ then the range of $f$ is some open interval containing $0$ (possibly a ray, or all of the real line). (Why?) Moreover, letting $I$ be the range of $f,$ we have that $f$ is invertible and that $f^{-1}:I\to\mathfrak R$ is increasing and continuous with $f^{-1}(0)=0.$ (Why?)
To prove your claim, then, it suffices to prove that for every $\epsilon>0$ and every $x\in X,$ there exist $\delta,\delta'$ such that $B_{d_1}(x,\epsilon)=B_{d_2}(x,\delta)$ and $B_{d_2}(x,\epsilon)=B_{d_1}(x,\delta').$ (Why?)
As a hint for how to show this, you should use the definition $$B_d(x,\epsilon)=\{y\in X:d(x,y)<\epsilon\}$$ together with what you know about $f$ and $f^{-1}$. |
H: How would I prove $|x + y| \le |x| + |y|$?
How would I write a detailed structured proof for:
for all real numbers $x$ and $y$, $|x + y| \le |x| + |y|$
I'm planning on breaking it up into four cases, where both $x,y < 0$, $x \ge 0$ and $y<0$, $x<0$ and $y \ge0$, and $x,y \ge 0$. But I'm not sure how I'd go about writing it formally.
Thanks!
AI: You are absolutely on the right track. I'll model one case for you, and you can try the other cases on your own.
Case 1: $x,y\geq 0$. Then $x+y\geq 0$, so $|x+y|=x+y$. Similarly, $|x|=x$ because $x\geq 0$, and $|y|=y$ because $y\geq 0$. Thus $|x+y|=x+y=|x|+|y|$. |
H: Homework basic abstract algebra
My question is as follows:
$R$ is a ring such that for all $x \in R, x^2=x$
$p$ is a prime ideal of $R$.
Show that $R/p$ (R modulu p) has exactly 2 elements.
What I did:
$x^2=x$
$x^2-x=0$
$x(x-1)=0$
So the equivalence class of $x(x-1)$ is equal to the equivalence class of $0$ in $R/p$ which implies that $x(x-1)$ is an element of $p$
Because $p$ is a prime ideal, we know that this means that either $x \in p$ or $x-1 \in p$...But not both. So why 2 elements? I'd say $R/p$ has exactly one element. But I'm wrong. I could use an explanation.
AI: How many solutions can $x^2=x$ have in the integral domain $R/p$? |
H: Quotient of maximal ideals by its power giving simple modules
So I understand that $R/m$ where $m$ is a maximal ideal would give simple module. My question is, would $m/m^2$ also give a simple module?
My progress thus far: My first approach was to realize that inside $R$, and only prime ideal containing $m^2$ is $m$ itself (by thinnking of the radical of $m^2$). To show that $m/m^2$ is simple means there are no ideals (besides $m^2$ and $m$) which contain $m^2$. Then I reached a dead-end.
My second approach is as follows: $m/m^2$ can be understood as a $R/m$-module, and $R/m$ is a field, so $m/m^2$ is a vector space. So $m/m^2$ being simple is equivalent to saying that its dimension is 1.
Is this true for a general commutative ring $R$? How about for polynomial rings such as the polynomial ring over the integers $Z[x]$? If the answers are negative, what can be said about the structure of $m/m^2$ in the general case and in $Z[x]$?
AI: It's not true in general. A counterexample is $k[x^2,x^3]$ with $m=(x^2,x^3)$. $(x^3)+m^2$ is a proper submodule of $m/m^2$. Indeed, following your second approach, $m/m^2=\{ax^2+bx^3+m^2\}$, which is dimension $2$ over $k[x^2,x^3]/m\cong k$.
If you know some differential or algebraic geometry, you can think of the above counterexample as saying that the cotangent bundle to the curve $y^2=x^3$ in $k^2$ is not $1$-dimensional at $(0,0)$, because the curve is not smooth at that point. In ring theoretic terms, the problem here is that $k[x^2,x^3]$ is not integrally closed.
In fact, one has the following theorem, useful in number theory and algebraic geometry, which you can use to cook up plenty more counterexamples: Let $R$ be a Noetherian domain of dimension $1$. Then $R$ is integrally closed if and only if for each maximal ideal $m$, $m/m^2$ is 1-dimensional over $R/m$. (A variant of this statement is proved for local rings in Atiyah-MacDonald, from which you can easily recover the statement above.) |
H: Check if a system admits solutions of period 2
I have the following problem.
Let $r \geq 0$ be a parameter in the discrete time system $x(k + 1) = r − rx(k)$.
Verify whether there exists $r \geq 0$ such that this system admits solutions of period 2.
I do not quite understand how I should solve this problem. I already determined that the fixed point of the system is $x^* = \frac{r}{r + 1}$ and that $0 \leq r < 1$ is required for the system to converge.
AI: You should write $x(k+2)=r-rx(k+1)$, substitute in your expression for $x(k+1)$ and see if there are solutions with $x(k+2)=x(k)$ |
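Carrying the hint through (a sketch of the algebra):
$$x(k+2)=r-r\,x(k+1)=r-r\bigl(r-r\,x(k)\bigr)=r-r^2+r^2\,x(k).$$
Requiring $x(k+2)=x(k)$ gives $(1-r^2)\,x(k)=r(1-r)$. For $r\neq 1$ this forces $x(k)=\frac{r}{1+r}$, which is just the fixed point (period 1), so there is no genuine period-2 solution; for $r=1$ the map is $x(k+1)=1-x(k)$, and every initial value $x(0)\neq\frac12$ yields a solution of period 2.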
H: If $f(a) = g(a)$ and $f'(x) < g'(x)$ for all $x \in (a,b)$, then $f(b) < g(b)$
Assume that $f$ and $g$ are continuous on $[a, b]$ and differentiable on $(a, b)$.
Prove that if $f(a) = g(a)$ and $f'(x) < g'(x)$ for all $x \in (a,b)$, then $f(b) < g(b)$.
I understand that if $f$ and $g$ start at the same point, and $g$ increases at a faster rate than $f$, it will have a bigger end value. I'm just not sure how to prove it.
Consider $h(x)= g(x) - f (x)$ (this is the space between the two functions)
$$\begin{align*}
h(a) &= g(a) - f(a)\\
h(a) &= 0 &(\text{they are in the same spot in the beginning})
\end{align*}$$
$$\begin{align*}
h'(x) &= \frac{g'(x) - f'(x)}{g(x) - f(x)}\\
h'(x) &> 0 &(\text{since } g'(x)> f'(x))
\end{align*}$$
so $g(x) - f(x) > 0 \implies g(x) > f(x)$?
Is this correct?
AI: Letting $h = g - f$ is a good first step. Now apply the Mean Value Theorem to $h$ to note that for some $x \in [a, b]$, $$h'(x) = \frac{h(b)-h(a)}{b-a},$$ or equivalently $$g'(x)-f'(x)=\frac{g(b) - f(b) - g(a) + f(a)}{b-a}.$$
Because we've assumed that $g'(x) > f'(x)$, we know the $g'(x) - f'(x)$ is positive, and so therefore so is the right hand side of this equation. So we now get
$$0 < \frac{g(b)-f(b)-g(a)+f(a)}{b-a} \implies0<g(b)-f(b)-g(a)+f(a).$$
Since we know that $g(a) = f(a)$, we can now say $$0<g(b)-f(b) - f(a)+f(a) \implies 0 <g(b)-f(b)\implies f(b) < g(b). \square$$ |
H: Meeting point for 5 people with least distance travelled (interview question)
I had an interview today and I'm completely stumped on what they asked me. Essentially: if you are given 5 people on a 2D grid, and you need to meet at a point with the least amount of distance travelled, how would you calculate it?
What I had told the interviewer after some questions is to use the pythagorean theorem for every point and then add these together. I know I completely failed in answering...
Anyway, what is the proper way to do this? And is there a name for this type of problem so I can read more into it?
Thanks!
AI: Because you're on a grid, you can separate each person's motion into motion along the $x$-axis and motion along the $y$-axis; the amount of motion required is just the sum of these two components. (This is the "Manhattan distance" or $L_1$-distance.) So if you can simultaneously minimize the total $x$-distance that must be traveled (which depends only on the $x$-coordinate of the meeting place) and the total $y$-distance that must be traveled (which depends only on the $y$-coordinate of the meeting place), then you've found the optimal spot. As pointed out in the other answer, in one dimension the optimal meeting place is always the median of the locations (or anywhere between the two middle locations, if the number of people is even). So the correct answer here is $(\bar{x}, \bar{y})$, where $\bar{x}=\text{median}(\{x_i\})$ and $\bar{y}=\text{median}(\{y_i\})$. |
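A minimal Python sketch of this median rule, with made-up coordinates for the five people:

```python
from statistics import median

people = [(0, 0), (2, 5), (3, 1), (7, 2), (4, 4)]       # hypothetical positions on the grid
xs, ys = zip(*people)
meeting = (median(xs), median(ys))                       # componentwise median
total = sum(abs(x - meeting[0]) + abs(y - meeting[1]) for x, y in people)
print(meeting, total)                                    # (3, 2), total Manhattan distance 17
```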
H: Probabilities for $1$-in-$n$ events over $n$ trials
I know there are lots of related questions on here, but I can't seem to find what I'm looking for.
Given some event with, say, a $1$ in $1{,}000{,}000$ probability (e.g., $7$ being chosen randomly as a number between $1$ and $1{,}000{,}000$), I'd like to get a rough idea of the probability of seeing that event happen at least once in $1{,}000{,}000$ trials. (The temptation is somehow to say that you get roughly even odds.)
I understand that the probability of seeing a $1$-in-$n$ event occur at least once in $n$ trials is simply $1 - \frac{(n-1)^n}{n^n}$, but when $n$ is large, computing such values is difficult (for me at least).
Hence I'd like an idea as to whether this converges as $n$ approaches infinity.
Just from playing around with some values (up to $n=144$), it seems that the value converges towards $\sim 0.633$.
So my questions relate to understanding this more.
Does this value indeed converge?
If so, does the resulting value have any significance?
Conversely, for an event with a 1-in-$n$ chance, is there a way to characterise (in the general case) how many trials would be needed to see such an event at least once with $p\approx 0.5$?
AI: We have
$$\lim_{n\to\infty} \left(\frac{n-1}{n}\right)^n=e^{-1}.$$
The number $e$, the base of natural logarithms, is of great importance.
When $n$ is largish (and it can be much smaller than $10^6$), the number $\left(\frac{n-1}{n}\right)^n$ is very close to $e^{-1}$.
For the last question, the probability of at least one event in $k$ trials is
$$1-\left(1-\frac{1}{n}\right)^k.$$
We want this to be $\frac{1}{2}$, so we want to solve the equation
$$\left(1-\frac{1}{n}\right)^k=\frac{1}{2}.$$
To solve, take the logarithm of both sides. |
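Numerically, a short Python check of both quantities discussed above:

```python
from math import exp, log

# Probability of at least one occurrence of a 1-in-n event in n trials:
for n in (10, 1000, 10**6):
    print(n, 1 - (1 - 1/n)**n)      # approaches 1 - 1/e ≈ 0.6321

# Trials k needed for a 1-in-n event to occur at least once with probability 1/2:
n = 10**6
k = log(1/2) / log(1 - 1/n)         # roughly n * ln 2 ≈ 693147
print(k)
```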
H: Solving $4^{667} ≡ x \pmod{13}$ without Eulers totient theorem or CRT
Does anyone know any efficient ways to solve this without Euler's Totient Theorem or Chinese remainder theorem?
AI: As $5*13=65$ we have
$$4^3 \equiv -1 \pmod{13}$$
Even without this observation, you can calculate $4,4^2, 4^3, \dots \pmod{13}$, and since there are at most 12 possibilities you know that the powers must repeat after at most 12 steps. Find the repeating pattern.
Note: As you calculate, you can make each number in the sequence $4,4^2, 4^3, ... \pmod{13}$ between $0$ and $12$ which makes the computation simpler. |
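Indeed, $667=3\cdot 222+1$, so $4^{667}\equiv(-1)^{222}\cdot 4\equiv 4\pmod{13}$; a one-line Python check (plus the repeating pattern):

```python
print(pow(4, 667, 13))                           # 4
print([pow(4, e, 13) for e in range(1, 13)])     # 4, 3, 12, 9, 10, 1, 4, 3, ... repeats every 6
```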
H: Is there an easy formula for this sequence?
It is the sequence which represents the maximum number of cycles in an undirected graph with n nodes, n>=3. These graphs have all nodes connected to every other node.
How would I count the number of cycles in such a complete graph?
Also, is there a formula for this sequence giving the number of cycles with $n$ nodes? Bonus points if you can find the name of this sequence if one exists. Thanks!
AI: By the definition we have for the number $Q_n$ of cycles in a labelled copy of $K_n$ that
$$Q_n = \sum_{k=3}^n {n\choose k}\frac{k!}{2k}.$$
This gives the sequence
$$1, 7, 37, 197, 1172, 8018, 62814, 556014, 5488059, 59740609,\ldots$$
which is number A002807 at the OEIS. |
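A direct Python evaluation of this sum (a small sketch):

```python
from math import comb, factorial

def cycles_in_complete_graph(n):
    # Number of (undirected) cycles of length at least 3 in K_n
    return sum(comb(n, k) * factorial(k) // (2 * k) for k in range(3, n + 1))

print([cycles_in_complete_graph(n) for n in range(3, 10)])   # [1, 7, 37, 197, 1172, 8018, 62814]
```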
H: Proof: $2^{n-1}(a^n+b^n)>(a+b)^n$
If $n \in \mathbb{N}$ with $n \geq 2$ and $a,b \in \mathbb{R}$ with $a+b >0$ and $a \neq b$, then $$2^{n-1}(a^n+b^n)>(a+b)^n.$$
I tried to do it with induction. The induction basis was no problem but I got stuck in the induction step: $n \to n+1$
$2^n(a^{n+1}+b^{n+1})>(a+b)^{n+1} $
$ \Leftrightarrow 2^n(a\cdot a^n + b\cdot b^n)>(a+b)(a+b)^n$
$\Leftrightarrow a(2a)^n+ b(2b)^n>(a+b)(a+b)^n$
dont know what to do now :/
AI: Note that $a\not=b$ implies $(a-b)(a^n-b^n)\gt0$. Using this and the inductive hypothesis, we get
$$\begin{align}
(a+b)^{n+1}&=(a+b)^n(a+b)\cr
&\lt2^{n-1}(a^n+b^n)(a+b)\cr
&=2^{n-1}(a^{n+1}+ab^n+a^nb+b^{n+1})\cr
&=2^n(a^{n+1}+b^{n+1})-2^{n-1}(a^{n+1}-ab^n-a^nb+b^{n+1})\cr
&=2^n(a^{n+1}+b^{n+1})-2^{n-1}(a-b)(a^n-b^n)\cr
&\lt2^n(a^{n+1}+b^{n+1})
\end{align}$$ |
H: How to put 9 pigs into 4 pens so that there are an odd number of pigs in each pen?
So I'm tutoring at the library and an elementary or pre K student shows me a sheet with one problem on it:
Put 9 pigs into 4 pens so that there are an odd number of pigs in each pen.
I tried to solve it and failed! Does anybody know how to solve this? This question seems ridiculously difficult and impossible IMO.
AI: It appears to be a trick question.
Make 3 pens, put 3 pigs in each pen. Then put a 4th pen around all 3 of the other ones, and you have 9 pigs in that pen.
Update
@MJD found a source for this problem with solution, see Boys' Life magazine from 1916. |
H: Calculating probabilities of events of different time periods
The average probability of an event occurring is 3 times in a year. What is the probability of:
1) an event occurring in any specific month; and
2) 10 events occurring in any specific month?
AI: We use a Poisson model: the number $X$ of events per year has Poisson distribution with parameter $\lambda=3$.
Then it turns out that if $Y$ is the number of events in time interval $t$, then $Y$ has Poisson distribution with parameter $\lambda t$.
Taking a month as $\frac{1}{12}$ of a year, we find that the number of events in a specific month is has Poisson distribution with parameter $\frac{3}{12}$.
Now to the questions. What does "an event occurring in a specific month" mean? Exactly $1$ event? At least one event? There is lack of clarity.
If it is exactly $1$, then by the usual formula for probabilities governed by the Poisson, the answer is $e^{-3/12}\frac{3/12}{1!}$.
If it is at least $1$, the probability that $Y=0$ is $e^{-3/12}$, so the probability it is $\gt 0$ is $1-e^{-3/12}$.
For the probability of $10$ events in a month, we want $\Pr(Y=10)$. This is $e^{-3/12}\frac{(3/12)^{10}}{10!}$, extremely tiny.
Remark: There are a lot of unreasonable assumptions built into our model. I eat roast turkey on average once a year, roughly as little as I can get away with. The Poisson model would give a very inaccurate result for the probability I eat roast turkey in July. |
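Numerically, a small Python sketch of the three probabilities above (with $\lambda t = 3/12$):

```python
from math import exp, factorial

lam = 3 / 12                                   # expected number of events per month

def poisson_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

print(poisson_pmf(1, lam))                     # exactly one event   ≈ 0.195
print(1 - poisson_pmf(0, lam))                 # at least one event  ≈ 0.221
print(poisson_pmf(10, lam))                    # exactly ten events  ≈ 2e-13
```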
H: equality of two objects depending on conditions.
I want to state that two objects $t_i, t_j$ are equal if some conditions hold. Can this be done by writing: $t_i = t_j \rightarrow$ some conditions?
AI: An example of such a condition is the axiom of extensionality found in most set theories. It states that if $x$ and $y$ have exactly the same members, then $x=y$. Symbolically, $\forall x \forall y(\forall z(z\in x\leftrightarrow z\in y)\to x=y)$. Note why you don't need the conditional in the other direction: if $x=y$ then by the rules of first order logic anything you say of $x$ must be true of $y$ and vice versa.
EDIT: It occurred to me I could make this more informative by considering the case where the antecedent is $x=y$ with different things predicated of $x$ and $y$. Say we have two predicates $P(-)$ and $Q(-)$ and a formula of the form $x=y\to(P(x)\wedge Q(y))$. An obvious instance of this is $x=x\to(P(x)\wedge Q(x))$, so that the formula reduces to $\forall x(P(x)\wedge Q(x))$. |
H: How to compute the fundamental group of a necklace of $\mathbb{S}^1$' s?
I was trying to compute $\pi_1 (X)$ where $X =$ "necklace of $n$ $\mathbb{S}^1$'s". At first, I tried using Van Kampen theorem however I could not find open sets $U$ and $V$ such that $U \cap V$ is path connected. I tried using the covering space $E =\bigvee^\omega \mathbb{S}^1$ via the projection as in the case of the real line covering the circumference, however I do not know how to compute $p_*( \pi_1 (E, e))$. I would appreciate an answer where one computes it using both methods (Van Kampen, maybe with an arbitrary colimit, and covering spaces).
Thanks in advance.
AI: As explained in the comments, the op means by $X$, that the right side of the first circle is glued at a point to the left side of the second circle, and the right of the second to the left of the third and so on to $n$, and then finally the right of the $n$th is glued to the left of the first.
First, stretch out the point at which the first and $n$th circles are glued together to be an interval with ends on either circle. Next, if we shrink the southern semicircles of each circle to a point, we see that this drags the ends of the stretched interval to the wedge point of $n$ circles and so adds another wedged circle. This operation is a homotopy equivalence and so $X$ is homotopy equivalent to $\bigvee_{i=1}^{n+1} S^1$ and so $$\pi_1(X)\cong\pi_1(\bigvee_{i=1}^{n+1} S^1)\cong \ast_{i=1}^{n+1} \pi_1(S^1)\cong\ast_{i=1}^{n+1}\mathbb{Z}$$ ie the $(n+1)$-fold free product of $\mathbb{Z}$ with itself. |
H: Why is the sequence $ a_n = \left(1+\frac{1}{n}\right)^n $ Cauchy?
I was looking at the post:
Cauchy Sequence that Does Not Converge
And the top answer was this sequence:
$ a_n = \left(1+\frac{1}{n}\right)^n$. I understand that this sequence converges to $e$, which is not a rational number, and that $a_n$ is a sequence of rationals, but I don't see why that proves that $a_n$ it is Cauchy. I wonder if someone could give another explanation of why $a_n$ is Cauchy using the following definition:
A sequence $p_n$ is Cauchy if $\forall \epsilon >0, \exists M \in R$ such that $\forall i, j \in N:$
$i,j > M \implies \mid p_i - p_j \mid < \epsilon$
EDIT: I originally asked the opposite of what I meant to asked. My apologies...
AI: In $\Bbb Q$ this sequence does not converge. In $\Bbb R$ it does. The sequence is Cauchy, regardless. |
H: Group theory: Let $H=\{0,\pm 3, \pm 6, \pm9,\ldots\}$ Find all the left cosets of $H$ in $\Bbb Z$.
I am having trouble understanding the following homework question,
Let $H=\{0,\pm 3, \pm6, \pm9,\ldots\}$ Find all the left cosets of $H$ in $\Bbb Z$.
I know the answer is $H$, $1+H$, and $2+H$ but I am having difficulty understanding why.
Thank you!
AI: Consider the three cosets $\color{Blue}{3\Bbb Z}$, $\color{Green}{1+3\Bbb Z}$, $\color{Red}{2+3\Bbb Z}$ as they sit inside $\color{Black}{\Bbb Z}$ itself:
*(image: the integers on a number line, colored blue, green, and red according to these three cosets)*
Do you see any numbers that were skipped? Notice that the pattern repeats. How can we justify this though, formally and algebraically? We need to be able to take any coset and show that it is actually equal to one of these three. It suffices to show that every integer is either $0$, $1$ or $2$ plus a multiple of $3$. This leads us to consider representations of integers of the form $n=3q+r$ where the residue $r$ is $0\le r<3$. Know any theorems or identities or whatnot that deal with this? |
H: help with nand circuit
I tried to make a circuit from this expression but it's not working right.
Here's the expression and circuit:
Expression
$$\overline{\overline{P_1.S_1}.\overline{P_2.\left(\overline{\overline{P_1}.\overline{\overline{S_1}.\overline{S_2}}}\right)}}$$
AI: Everything looks right to me, perhaps the problem is in one of the connections, or the way you are using the software. |
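One way to hunt for the problem is to tabulate the expression exactly as written and compare against what the circuit outputs; a quick Python sketch (the variable ordering $P_1,P_2,S_1,S_2$ is just a choice):

```python
from itertools import product

def NOT(a): return 1 - a
def AND(a, b): return a & b

def f(p1, p2, s1, s2):
    # The expression above, evaluated from the inside out.
    inner = NOT(AND(NOT(p1), NOT(AND(NOT(s1), NOT(s2)))))
    return NOT(AND(NOT(AND(p1, s1)), NOT(AND(p2, inner))))

for p1, p2, s1, s2 in product((0, 1), repeat=4):
    print(p1, p2, s1, s2, '->', f(p1, p2, s1, s2))
```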
H: Expected value of a number of events
If I have a set of random events say
${\{A_{1}, A_{2}, ... A_{n}\}}$ and a number ${N=\sum_{1}^{n} I(A_{i})}$ where ${I(A_{i})}$ is the indicator function (basically ${N}$ is the number of events that happen after a random experiment).
Can anybody please tell me how can I find the expected value of ${N}$?
We assume that we know the probability of each event.
It would also be nice to know how is ${N}$ or its expected value called (if they have a particular name) and the actual value of the expected value (if it has a general formula).
AI: For $i=1$ to $n$, define random variable $X_i$ by $X_i=1$ if $A_i$ occurs, and $X_i=0$ if it doesn't.
Then $N=\sum_{i=1}^n X_i$. By the linearity of expectation, we have
$$E(N)=\sum_{i=1}^n E(X_i).$$
If $p_i$ is the probability $A_i$ occurs, then $E(X_i)=p_i$. |
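A small simulation sketch with hypothetical probabilities (the events are sampled independently here only for convenience; linearity of expectation itself does not require independence):

```python
import random

p = [0.2, 0.5, 0.9]                # hypothetical probabilities of the events
trials = 100_000
average_N = sum(sum(random.random() < pi for pi in p) for _ in range(trials)) / trials
print(average_N, sum(p))           # both close to 1.6
```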
H: Sum of absolute values and the absolute value of the sum of these values?
I'm working on a proof and I need some help with this:
I determined that for some situations ($x$ or $y$ are negative but not both):
$|x| + |y| > x + y$
How can I conclude using that statement the fact that:
$|x| + |y| > |x + y|$
AI: You can try considering $|a+b|^2$, then using the property that $x^2 \geq 0$ for all x, you can obtain the desired inequality. Also your conclusion should be $|x|+|y| \geq |x+y|$. Notice that the inequality is not strict. (For example, if $x=y=0$ then certainly
$|0|+|0| > |0+0|$ is false.) Another way is to use the fact that $-|a| \leq a \leq |a|$ for all numbers a (doing this for both $a$ and $b$, and then manipulating the inequalities, we can achieve what you want), however, this second route assumes that you at least are a little bit familiar with the rules of inequalities, particularly the rules regarding inequalities with absolute values.
As an example of how we would apply the squaring technique, we can do the following:
$$ 0 \leq |a+b|^2 = (a+b)(a+b) = a^2 +2ab +b^2.$$
Now since $a^2 \geq 0$ is always true we can say that
$$a^2 +2ab +b^2 = |a|^2 +2ab +|b|^2.$$ Now we want to try to "force" an inequality. That is, we will replace $2ab$ with $2|a||b|$. If $a$ and $b$ are both greater that $0$ then nothing would change, however, if they are not both greater than $0$ we would get the following:
$$
|a|^2 +2ab +|b|^2 \leq |a|^2 +2|a||b| +|b|^2
$$ notice that we have broken the chain of equal signs and forced an inequality. (There are still some more steps to do for you. Hint: What is $(|a|+|b|)^2$?) |
H: Find integer solutions to linear congruence
Find all integer solutions of $2n \equiv 12 \bmod 19$
So I have re-arranged to: $2x-19y=12$ and by the extended Euclidean Algorithm, I get $$x=1 \ $$
$$y=-9$$
However, this is how far I was able to get to and not sure what follow past this point? What exactly are we looking for?
AI: I would say there are infinitely many. Another way to think of $2n \equiv 12 \pmod{19}$ is that $2n - 12 = 19k$ for some $k \in \mathbb{Z}$. So $n = \frac{19k + 12}{2}$. Thus any even $k$ will produce an integer solution.
EDIT: It might be better to think in terms of equivalence classes for the sake of negative $k$ values. So maybe $[n] = \bigl[\frac{19k + 12}{2}\bigr]$ in $\mathbb{Z}_{19}$. For example: If $k = -2$, then $[n]_{19} = [-13]_{19} = [6]_{19}$. |
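Concretely, in Python (3.8+ supports modular inverses via pow):

```python
inv2 = pow(2, -1, 19)                  # 10, since 2*10 = 20 ≡ 1 (mod 19)
print((12 * inv2) % 19)                # 6: the solutions are exactly n ≡ 6 (mod 19)
print([n for n in range(-20, 60) if (2 * n - 12) % 19 == 0])   # [-13, 6, 25, 44]
```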
H: What are some physical, geometric, or otherwise useful interpretations of divergent series?
I don't understand what ideas such as Abel, Cesàro summation or other types of sum 'regularization' help us describe. What is the practical application to discussing the 'sum' of sequences that are not convergent in the usual sense?
What is the motivation to assigning a value to an otherwise divergent sequence, and further why is it a good idea to call whatever comes out a 'summation'?
AI: Series that are classically divergent, and even Cesàro divergent, play an important role in physics.
The canonical example of this is the Casimir effect. When calculating the force of the effect, you are confronted with a divergent series. This series diverges even when you attempt to Cesáro sum it. However, if you use a technique known as zeta function regularization, you can recover a finite and physically meaningful quantity. |
H: What is the relationship between the spectrum of a matrix and its image under a polynomial function?
Clearly if $\lambda$ is an eigenvalue of $A$ then $p(\lambda)$ is an eigenvalue of $p(A)$ where $p$ is a polynomial. And there are cases where $A$ may have eigenvalues other than these.
Is there a general rule for finding all eigenvalues of $p(A)$? Does it depend on the field from which the elements of $A$ are taken? Are their any partial solutions to this problem?
AI: If the field is $\mathbb{C}$ then $a$ is an eigenvalue of $p(A)$ if and only if $a = p(\lambda)$, with $\lambda$ an eigenvalue of $A$.
If the field is $\mathbb{R}$ then the previous result is false and a counterexample is given by the rotation of ninety degrees, namely take $A$ to be the matrix associated to the application $T(x,y) = (-y,x)$ and $p(x) = x^2$. This is a counterexample since (as you can check) $T$ has no eigenvalue, while $T^2 = -I$ has $-1$ as an eigenvalue.
The proof of the first result (as you can imagine) relies heavily on the fundamental theorem of algebra. Once you realize this the proof is straightforward.
Sketch of the proof:
[$\Longrightarrow$] write $p(z) - a = c\prod(z - \lambda_i)$. This gives you $$p(T) - aI = c(T - \lambda_1I)\dots(T - \lambda_mI).$$
If $a$ is an eigenvalue of $p(T)$, then $p(T) - aI$ is not injective, and hence one of the $(T - \lambda_iI)$ is not injective.
[$\Longleftarrow$]If $a = p(\lambda)$, where $\lambda$ is an eigenvalue of $T$ with eigenvector $v$, then $T^kv = \lambda^kv$. From this, a one second thought gives $p(T)v = p(\lambda)v$, and hence the thesis.
I am sure that if you are interested in the proof it would be easy to fill this sketch with all the details needed to make it a rigorous proof. :D |
H: How to solve $(2x+2, 6x) = x+1$
I'm looking at an old discrete mathematics test preparing for my test tomorrow and one question says solve for $x$ and gives the above. I'm thinking that this is the $\gcd(2x+2, 6x)$ but have never seen this type of question before.
AI: If $\gcd(2x+2,6x)=x+1$ it means that $\def\divides{\mathrel{|}}x+1\divides2x+2$ and $x+1\divides6x$ and there is no greater divisor.
It is trivial that for any $x$ then $x+1\divides2x+2$.
$\gcd(x,x+1)=1$, as $x$ and $x+1$ are coprime, so if $x+1\divides kx$ then necessarily $x+1\divides k$. Therefore if $x+1\divides6x$ then $x+1\divides6$.
$6$ has only four natural divisors: $1,2,3,6$, so the possible values of $x$ are $x=0,1,2,5$. Replacing these values in the equation we have:
\begin{align}
x&=0:&\gcd(2,0)&=1&&\text{which is false} \\
x&=1:&\gcd(4,6)&=2&&\text{which is true} \\
x&=2:&\gcd(6,12)&=3&&\text{which is false} \\
x&=5:&\gcd(12,30)&=6&&\text{which is true}
\end{align} |
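A quick brute-force confirmation in Python over a small range:

```python
from math import gcd

print([x for x in range(20) if gcd(2 * x + 2, 6 * x) == x + 1])   # [1, 5]
```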
H: Positive and negative integer that is congruent to 0 (mod 5) and incongruent to 0 (mod 6)
I'm kind of confused by this because I thought 0 mod 5 = 0, and 0 mod 6 = 0 as well. So what's an integer that is congruent to one but not the other?
AI: "$m$ is congruent to $n$ modulo $r$", typically written
$$m\equiv n \pmod r,$$
means simply that $m-n$ is divisible by $r$. That is,
$$r\mid (m-n).$$
One way to understand this is that you can get from $m$ to $n$ by adding or subtracting $r$ repeatedly.
This usage is different from the one in computer programming, where $\bmod$ is considered a binary operator giving the remainder when one number is divided by another. |
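In Python terms, the programming operator returns a remainder, while the congruence is a statement about divisibility of a difference (a small sketch):

```python
m, n, r = 17, 2, 5
print(m % r)                  # 2  -- the binary "mod" operator of programming
print((m - n) % r == 0)       # True -- 17 ≡ 2 (mod 5), because 5 divides 17 - 2
```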
H: on exactness of the functors $M \mapsto \hat{M}$ and $M \mapsto \hat{A}\otimes_{A}M$
if $A$is a Noetherian ring, $M$ a finitely generated module,$I$ is an ideal of $A$, and $\hat{A}$ is the $I-adic$ completion of $A$, then we know $\hat{A}\otimes_{A}M\cong\hat{M}$.
Also on Atiyah&Macdonald, there is a remark on Page 109 that the functor $M \mapsto \hat{M}$ is not exact without assuming $M$ finitely generated. but the functor $M \mapsto \hat{A}\otimes_{A}M$ is always exact.
How to prove this assertion?
And what is an example of the breakdown of exactness of $M \mapsto \hat{M}$ when $M$ is not finitely generated?
(This is not a homework problem)
AI: One example is the sequence of abelian groups
$$0 \to \mathbf Z \to \mathbf Q \to \mathbf Q/\mathbf Z \to 0.$$
(Remark that neither $\mathbf Q$ nor $\mathbf Q/\mathbf Z$ is finitely-generated as a $\mathbf Z$-module.)
If we complete this at $p$, we get the sequence
$$0 \to \mathbf Z_p \to 0 \to 0 \to 0$$
which is obviously not exact. However, if we had tensored with the p adic integers $\mathbf Z_p$ instead, we would have gotten the exact sequence
$$0 \to \mathbf Z_p \to \mathbf Q_p \to \mathbf Q_p/\mathbf Z_p \to 0.$$ |
H: Combinations: Poker hands, full houses
Reading through my Probability book brushing up on some stuff:
What is the probability that a poker hand is a full house? (A full house is defined as a hand with three cards of one denomination and two cards of another denomination; ex. three Queens and two 4's.)
The solution (this is an example) is stated as:
The number of different poker hands is $52\choose5$. To count the number of full houses, let us call a hand of type (Q,4) if it has three queens and two 4's, with similar representations for other types of full houses. Observe that (Q,4) and (4,Q) are different full houses, and types such as (Q,Q) and (K,K) do not exist. Hence there are $13 \times 12$ different types of full houses. Since for every particular type, say (4,Q), there are $4\choose3$ ways to select three 4's and $4\choose2$ ways to select two Q's, the desired probability is
$$ \frac{13 \cdot 12 \cdot \binom{4}{3} \cdot \binom{4}{2}}{\binom{52}{5}}$$
I don't understand how the $13 \times 12$ comes about, and that's where I need some clarification.
Thank you in advance.
AI: There are 13 ways of picking the first denomination and for every one of those there are 12 ways of picking the second (since you can't pick one denomination twice).
There's a rule that says if there's $n_1$ ways of picking the first element of an ordered pair and $n_2$ ways of picking the second, the number of possible pairs is $n_1n_2$.
More generally, when you have $n$ objects and you want to pick an ordered combination of size $k$, $P_{k,n}=\frac{n!}{(n-k)!}$. This is called a permutation of size $k$ of the objects.
In this case, $n=13$ and $k=2$, so $P_{2,13}=\frac{13!}{11!}=\frac{13 \cdot 12 \cdot 11!}{11!}=13 \cdot 12$. |
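Evaluating the probability numerically (a short Python check):

```python
from math import comb

full_houses = 13 * 12 * comb(4, 3) * comb(4, 2)
print(full_houses)                        # 3744
print(full_houses / comb(52, 5))          # ≈ 0.00144
```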
H: Why this is false?!
$\Gamma \models (\alpha \vee \gamma)$ iff $(\Gamma \models \alpha$ or $\Gamma \models \gamma)$
[$\Longrightarrow$]
If $\Gamma \models (\alpha \vee \gamma)$ then ($\Gamma \models \alpha $ or $\Gamma \models \gamma$)
Suppose $\Gamma \models (\alpha \vee \gamma)$
By def. of logic consequence: $\forall\mathfrak{I}, \mathfrak{I}\models\Gamma \rightarrow \mathfrak{I}\models(\alpha \vee \gamma)$
By def. of relation of satisfaction of "$\vee$": $\forall\mathfrak{I}, \mathfrak{I}\models \Gamma \rightarrow \mathfrak{I} \models \alpha$ or $\mathfrak{I} \models \gamma$
By def. of logic consequence: $\Gamma \models \alpha$ or $\Gamma\models\gamma$
[$\Longleftarrow$]
Not true! Why?
AI: Consider $\Gamma=\varnothing$, $\alpha=\beta$ and $\gamma=\lnot\beta$. (where $\Gamma\nvDash \beta$ and $\Gamma\nvDash\lnot\beta$.)
More precisely, Take $\beta=\forall x P(x)$ where $P$ is unary predicate symbol. You can check that $\beta$ is true in the structure $\mathfrak{A}$ defined as $\mathfrak{A}=\{0\}$, $P^\mathfrak{A}=\{(0)\}$. But it is false in the structure $\mathfrak{B}$ defined as $\mathfrak{B}=\{0\}$, $P^\mathfrak{B}=\varnothing$. So $\nvDash \beta$ and $\nvDash \lnot\beta$ but $\vDash \beta\lor\lnot\beta$ because $\beta\lor\lnot\beta$ is tautology. |
H: Help showing that this ideal is principal.
This is not for homework, and I would really like a hint please. The question asks
If $P = \{ 2a + (1 + \sqrt{-5})b : a, b \in \mathbb{Z}[\sqrt{-5}] \}$ is an ideal in $\mathbb{Z}[\sqrt{-5}]$, show that $P^2$ is the principal ideal $(2)$.
I have shown that $P^2 \subseteq (2)$. To show that $(2) \subseteq P^2$, I started by choosing some $2(m + n \sqrt{-5}) \in (2)$. Now, if $m$ and $n$ are both even or both odd, then I can conclude that $2(m + n \sqrt{-5}) \in P^2$ after some work. However, I cannot handle the cases of $m$ even and $n$ odd, or $m$ odd and $n$ even. Could someone lend me a hint please?
AI: Hint As $(2)$ is principal, it suffices to prove that $2 \in P^2$. This means you only need to worry about $m=1, n=0$.
You can use
$$(1 + \sqrt{-5})(1-\sqrt{-5})-2\cdot 2=2$$ |
H: Proof of the limit of a sequence of functions.
The questions is this.
Let $f_n(x)=\frac{1}{n}sin(nx).$ Each $f_n$ is a differentiable function. Show that
(a) $\lim f_n(x)=0,\forall x \in R$
(b) but $\lim f'_n(x)$ need not exist [at $x=\pi$ for instance].
Proof of (a)
Let $\epsilon > 0$ and $N = \frac{1}{\epsilon}.$ Since $|sin(nx)| \leq |1|,$ $\forall n > N \Rightarrow \left|\frac{1}{n}sin(nx)\right| \leq \frac{1}{n} < \frac{1}{N} = \epsilon.$
Can anyone explain how to prove (b)??
AI: $f'_n(x) = cos(nx)$ and cosine is an oscillating function. Let $x = \pi$. Then,
$$ f'n(\pi) = cos(n\pi) $$
but this sequence oscillates from 1 to -1 hence it cannot converge. To prove it, you can show it's not a Cauchy sequence.
Consider
$$| f_m(\pi) - f_n(\pi) | = |(-1)^m - (-1)^n|$$
For the sequence to be Cauchy there would have to be an N such that for all $n,m> N$ we can make this difference as small as we want. Instead, let $\epsilon =1$. Pick any $N$. Then there will always exist an $m>N$ and $n>N$ such that m is odd and $n$ is even. But then
$$|(-1)^m - (-1)^n| = |-1 - 1| = 2 > 1 = \epsilon$$ so the sequence is not Cauchy. |
H: The distance between two disjoint compact subsets $A,B$ of a metric space $X$ is positive
Please tell me whether my argument for the following result is true:
The distance between two disjoint compact subsets $A,B$ of a metric space $X$ is positive:
$d:X\times X\to \mathbb R$ is continuous$\implies d|_{A\times B}$ is continuous on $A\times B$
$A,B$ are compact $\implies A\times B$ is compact$\implies d|_{A\times B}$ assumes its minimum on $A\times B\implies\exists~a\in A,b\in B$ such that $$\inf_{x\in A,~y\in B} d|_{A\times B}(x,y)=d(a,b)>0.$$
AI: There is no need to deal with restrictions of $d$. All that is necessary is to note that $A \times B $ is compact.
Let $m = \min_{a\in A, b\in B} d(a,b)$. If $m = 0$, then $m=d(a,b)$ for some $a \in A, b \in B$, and since $d(a,b) = 0$, we have $A \cap B \ne \emptyset$, a contradiction. |
H: How to find a matrix $X$ such that $X+X^2+X^3 = \begin{bmatrix} 1&2005\\ 2006&1 \end{bmatrix}$?
Find a matrix $X \in M_{2}(\mathbb Z)$ such that $$X+X^2+X^3=\begin{bmatrix} 1&2005\\ 2006&1\end{bmatrix}$$
My try:
Let
$$X=\begin{bmatrix}
a&b\\
c&d
\end{bmatrix}$$
where $a,b,c,d\in Z$
then
$$X^2=\begin{bmatrix}
a^2+bc&ab+bd\\
ac+cd&bc+d^2
\end{bmatrix}$$
then $$X^3=\begin{bmatrix}
a^3+abc+abc+bcd&a^2b+b^2c+abd+bd^2\\
a^2c+acd+bc^2+cd^2&abc+bcd+bcd+d^3
\end{bmatrix}$$
so
$$X+X^2+X^3=\begin{bmatrix}
a^3+2abc+bcd+a^2+bc+a&a^2b+b^2c+abd+bd^2+ab+bd+b\\
a^2c+acd+bc^2+cd^2+ac+cd+c&abc+2bcd+d^3+bc+d^2+d
\end{bmatrix}$$
then we have
$$\begin{cases}
a^3+2abc+bcd+a^2+bc+a=1\\
a^2b+b^2c+abd+bd^2+ab+bd+b=2005\\
a^2c+acd+bc^2+cd^2+ac+cd+c=2006\\
abc+2bcd+d^3+bc+d^2+d=1
\end{cases}$$
Note $2005=5\cdot 401$ is a product of two primes, so
$$b(a^2+bc+ad+d^2+a+d+1)=2005$$
we must have $b=\pm 1$, $b=\pm 5$, $b=\pm 401$, or $b=\pm 2005$,
and note $2006=2\times 1003$
then $c=\pm 2$ ,or $c=\pm 1003$or $c=\pm 1$,or $c=\pm 2006$
so what follows is very ugly, and I can't finish it. Maybe this problem has a nicer method. Thank you.
AI: There's no such $X$, even with rational entries.
If there were, then it would have an eigenvalue that's either
rational or a quadratic irrationality.
But if $\lambda$ is an eigenvalue of $X$ then
$\lambda + \lambda^2 + \lambda^3$ is an eigenvalue of
$\left[\begin{array}{cc}1&2005\cr2006&1\end{array}\right]$.
But those eigenvalues are the roots $x = 1 \pm \sqrt{2005\cdot 2006}$ of
$(x-1)^2 = 2005 \cdot 2006$, and the polynomial
$(\lambda^3+\lambda^2+\lambda-1)^2 - 2005 \cdot 2006$
turns out to be irreducible, so none of its roots can be
the eigenvalue of a $2 \times 2$ matrix with rational entries, QED. |
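The irreducibility claim can be checked with a computer algebra system; for example, a sympy sketch (if the claim holds, the factorization returns a single degree-6 factor):

```python
from sympy import symbols, factor_list

x = symbols('x')
p = (x**3 + x**2 + x - 1)**2 - 2005 * 2006
print(factor_list(p))   # expect one irreducible factor of degree 6 over the rationals
```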
H: Find the Maclaurin series of the function $f(x) = 7 x^2 \sin 2 x$
Find the Maclaurin series of the function
$f(x) = 7 x^2 \sin 2 x$
$(f(x) = \sum_{n=0}^{\infty} c_n x^n) $
That is what is given on the question, we have to fill in $5$ blanks $c_3$ to $c_7$
The homework is past due, so I have the answers $(14, 0, -9.333, 0, 1.86667)$ but I'm not sure how to go about doing this... Can I use the expansion that we know for $\sin x$?
($x- x^3/3! + x^5/5!... $etc). If so... how exactly?
Side question: If it was just $\sin 2x$ I would replace all the $x$ in the expansion with a $2x$ instead, correct?
I'm desperate for help, I have a test tomorrow and i'm freaking out a bit...
Thanks.
AI: Sometimes you can find the Maclaurin series of a combination of "nice" functions by simple manipulations. Note that $\sin t$ has Maclaurin series
$$t-\frac{t^3}{3!}+\frac{t^5}{5!}-\frac{t^7}{7!}+\cdots.$$
For the Maclaurin series of $\sin(2x)$, replace $t$ everywhere by $2x$, and simplify a bit. We get
$$2x -\frac{2^3}{3!}x^3+\frac{2^5}{5!}x^5 -\frac{2^7}{7!}x^7+\cdots.$$
To find the series for $(7x^2)\sin(2x)$, multiply term by term by $7x^2$. We get
$$(7)(2)x^3 -(7)\cdot\frac{2^3}{3!}x^5+(7)\cdot\frac{2^5}{5!}x^7 -(7)\cdot\frac{2^7}{7!}x^9+\cdots.$$
Thus $c_0=c_1=c_2=0$. We have $c_3=(2)(7)$, and $c_4=0$. We have $c_5=-(7)\cdot\frac{2^3}{3!}$. and so on. |
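You can confirm these coefficients with sympy (a quick sketch):

```python
from sympy import symbols, sin, series

x = symbols('x')
print(series(7 * x**2 * sin(2 * x), x, 0, 10))
# 14*x**3 - 28*x**5/3 + 28*x**7/15 - 8*x**9/45 + O(x**10)
# so c_3 = 14, c_5 = -28/3 ≈ -9.333, c_7 = 28/15 ≈ 1.8667, and c_4 = c_6 = 0.
```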
H: How to make a piecewise function differentiable?
I have the following question:
Suppose $$f(x) = \left\{\begin{array}{cc}x^2 & \text{if }x\leq 2 \\ mx+b& \text{if }x>2\end{array}\right.$$
If $f$ is differentiable everywhere, then what are the values of $m$ and $b$?
How exactly would I be able to get the values to be differentiable? I know that the point at 2 has to exist and that it has to be continuous and connect to the other function to work. How exactly do I get the exact values for "m" and "b" though?
AI: $f$ has to satisfy
$$
\lim_{x\to 2^+}f(x)=f(2),\qquad \lim_{x\to 2^- }\frac{f(x)-f(2)}{x-2}=\lim_{x\to 2^+ }\frac{f(x)-f(2)}{x-2},
$$
i.e.
$$
2m+b=4,\quad m=4 \Rightarrow m=4,b=-4.
$$ |
H: Proof by induction of Sylow's theorem
Prove directly that if $p$ is a prime and $p^{\alpha}\ | \ o(G)$, then $G$ has a subgroup of order $p^{\alpha}.$
How can I prove this, by induction on the order of the group $G,$ without using the existence of a $p$-Sylow subgroup?
AI: CLAIM Let $k\geqslant 0$, $p$ a prime and $p^k\mid |G|$. Then $G$ contains a subgroup of order $p^k$.
Proof. If $|G|=1$, there is nothing to prove. So suppose $|G|>1$ and the theorem proven for every group of order less than that of $G$. It is a theorem of Cauchy that if $G$ is a finite (abelian) group and $p\mid |G|$ then $G$ contains an element of order $p$. Using this, consider the class equation $$|G|=|C|+\sum [G:C(x_i)]$$
If $p\not\mid |C|$ then $p\not\mid [G:C(x_i)]$ for some $i$. This means that $p^k\mid |C(x_i)|$ and $|C(x_i)|<|G|$ so we're done. If $p\mid |C|$, $C$ is abelian, so we have an element $g$ of order $p$, and the order of $G/\langle g\rangle$ (we can take the quotient since $g$ is central) is divisible by $p^{k-1}$. By the inductive hypothesis we have a subgroup of order $p^{k-1}$, of the form $H/\langle g\rangle$ where $\langle g\rangle \subseteq H\leq G$. But $$|H|=[H:\langle g\rangle]||\langle g\rangle |=p^k$$ and the theorem is proven.
In particular, if $G=p^nk$ with $(p,k)=1$, there exists a subgroup of order $p^n$, i.e. a Sylow $p$-subgroup.
To prove Cauchy's theorem, we can do something very similar. If $G$ is abelian, take an element $g\in G$. If this element has order divisible by $p$, say $=pr'$ then $g^{r'}$ is an element of order $p$. Else, consider $G/\langle g\rangle$ (we can quotient since $G$ is abelian). This has order still divisible by $p$ but smaller than that of $G$. We obtain an element $g'\langle g\rangle$ of order $p$. Let $f$ be the order of $g'$, then $(g'\langle g\rangle) ^f=g'^f\langle g\rangle=\langle g\rangle $ so the order $p$ of $g'\langle g\rangle $ divides $f$. But then $g'$ has order divisible by $p$ and the previous case finishes things off. The general case then follows in the same manner as above, with the class equation. |
H: Find a matrix $X$ given $X^4$
Find the matrix $X$ such that
$$X^4=\begin{bmatrix}
3&0&0\\
0&3&1\\
0&0&0
\end{bmatrix}$$
I can't work this problem. My thought: if $\lambda$ is an eigenvalue of $X$, then $\lambda^4$ is an eigenvalue of
$$\begin{bmatrix}
3&0&0\\
0&3&1\\
0&0&0
\end{bmatrix}$$
Is that right? Thank you for your help.
AI: Hint: Write the Jordan Normal Form., then take:
$$X = S.J^{1/4}.S^{-1}$$
So, we have:
$$\begin{bmatrix} 3&0&0\\ 0&3&1\\ 0&0&0 \end{bmatrix}$$
We can write this in Jordan Normal Form as:
$$S.J.S^{-1} = \begin{bmatrix} 0&0&1\\ -1&1&0\\ 3&0&0 \end{bmatrix}.\begin{bmatrix} 0&0&0\\ 0&3&0\\ 0&0&3 \end{bmatrix}.\begin{bmatrix} 0&0&\dfrac{1}{3}\\ 0&1&\dfrac{1}{3}\\ 1&0&0 \end{bmatrix}$$
Now, just take the fourth roots of the diagonal entries of $J$ and then multiply out with the other two matrices.
Spoiler - Do Not Peek (do the work and then look)
$X = S.J^{1/4}.S^{-1} = \begin{bmatrix} 0&0&1\\ -1&1&0\\ 3&0&0 \end{bmatrix}.\begin{bmatrix} 0&0&0\\ 0&3^{1/4}&0\\ 0&0&3^{1/4} \end{bmatrix}.\begin{bmatrix} 0&0&\dfrac{1}{3}\\ 0&1&\dfrac{1}{3}\\ 1&0&0 \end{bmatrix}=\begin{bmatrix} 3^{1/4}&0&0\\ 0&3^{1/4}&\dfrac{1}{3^{3/4}}\\ 0&0&0 \end{bmatrix}$ |
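A quick numerical verification of the spoiler (a numpy sketch):

```python
import numpy as np

a = 3 ** 0.25
X = np.array([[a, 0, 0],
              [0, a, 3 ** -0.75],
              [0, 0, 0]])
print(np.linalg.matrix_power(X, 4))   # ≈ [[3, 0, 0], [0, 3, 1], [0, 0, 0]]
```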