H: If $f\left(\frac{1-x}{1+x}\right)=x$ then find $f(x)$
If $f\left(\frac{1-x}{1+x}\right)=x$ then find $f(x)$
My attempt :-
Put $x= \tan^2(\theta)$
Then $f(\cos(2\theta))=\tan^2(\theta)$
After this ....
AI: You need to know how $x$ varies in terms of $\frac{1-x}{1+x}.$
Thus, you need to find the inverse of $g(x) = \frac{1-x}{1+x}.$
Writing $$y = \frac{1-x}{1+x},$$
you can solve for $x$ in terms of $y$. You get $x = \frac{1-y}{1+y}$, which implies that $$f(y) = x = \frac{1-y}{1+y}$$
and using the symbol "$x$," you get
$$f(x) = \frac{1-x}{1+x}$$
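A quick symbolic check of this answer (a minimal sketch in Python using sympy; the helper name f is mine):

    # Verify that f(t) = (1-t)/(1+t) satisfies f((1-x)/(1+x)) = x.
    from sympy import symbols, simplify

    x = symbols('x')
    f = lambda t: (1 - t) / (1 + t)          # the candidate answer
    print(simplify(f((1 - x) / (1 + x))))    # prints x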
|
H: Finding a nonexact closed $1$-form on a surface embedded in $\Bbb R^3$
Consider the subset $S=\{(x,y,z):x^2-y^2-z^2+1=0\}$ of $\Bbb R^3$. Defining a function $f:\Bbb R^3\to \Bbb R$ by $f(x,y,z)=x^2-y^2-z^2+1$, it is easily seen that $0$ is a regular value of $f$, so it follows that $S=f^{-1}(0)$ is an embedded submanifold of $\Bbb R^3$ of codimension $1$. I am trying to find a closed $1$-form on $S$ which is not exact.
It is easy to find a closed $2$-form on $S$ which is not exact: since $S$ is an embedded submanifold of codimension $1$ of the orientable manifold $\Bbb R^3$, it follows that $S$ is also orientable, so we can take an orientation form of $S$. But how can I find a non-exact closed $1$-form on $S$? Using wolframalpha.com, I saw what $S$ looks like; it clearly seems that $S$ is homotopy equivalent to the circle $S^1$, so $H^1(S)\neq 0$. Thus $S$ must indeed have a non-exact closed $1$-form, but I have no idea how to find it. Any hints?
AI: You can use the fact that $S$ contains a circle, since $S\cap\{x=0\}=\{(0,y,z):y^{2}+z^{2}=1\}$.
On $\mathbb{R}^{3}\setminus\{y=z=0\}$, we have the following "winding form" that is closed but not exact, given by
$$
\omega=\frac{ydz-zdy}{y^{2}+z^{2}}.
$$
Since $S\subset\mathbb{R}^{3}\setminus\{y=z=0\}$, we can pull back this one-form to $S$. The pullback is still closed, but not exact: if it were exact, then the integral around the loop $S\cap\{x=0\}=\{(0,y,z):y^{2}+z^{2}=1\}$ would be zero by Stokes' theorem. This is not the case, since the integral is $2\pi$.
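As a numerical sanity check, one can parametrize the loop by $y=\cos t$, $z=\sin t$ and integrate the pullback (a sketch in Python; the parametrization is the obvious one):

    # Integrate omega = (y dz - z dy)/(y^2 + z^2) over S ∩ {x=0};
    # along the loop, dy = -sin(t) dt and dz = cos(t) dt.
    import numpy as np

    t = np.linspace(0, 2 * np.pi, 100001)
    y, z = np.cos(t), np.sin(t)
    integrand = (y * np.cos(t) - z * (-np.sin(t))) / (y**2 + z**2)
    print(np.trapz(integrand, t), 2 * np.pi)   # both ~6.28319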
|
H: Definition of "barycenter"
I have the following definition given:
(From "Introduction to algebraic topology" by Joseph J. Rotman)
Is the definition really meant like this?
Or is $\frac{1}{m+1}(p_0+p_1+\dotso +p_m)$ meant?
For me it should read $1/(m+1)$, but the author (or so it seems) consistently writes "fractions" like this: $n/n+1$, which is a little off.
So, to be clear: for me $1/1+1=2$, and not $1/1+1=\frac{1}{2}$ as the author (apparently) intends.
AI: You are correct: the coefficient should be $\frac{1}{m+1}$; this particular bit of typesetting is appalling. It may be something that was done "in production" by someone trying to save vertical space in a text, who said "Hey, I can replace $\frac{A}{B}$ with $A/B$," and then applied this rule willy-nilly. Even so, the author should have objected when he saw the galley proofs. Sigh.
|
H: Method to generate counterexample: An irreducible that is not prime.
I am a newcomer to ring theory. I like studying by myself with as little help from the textbook as possible: I prove the theorems on my own and always look for counterexamples when the converse of a statement does not hold. Now, while studying irreducibility in integral domains, I came up with the result that primes are irreducible but irreducibles may not be prime.
Every student of ring theory probably knows the standard counterexample: $\mathbb Z[\sqrt{-5}]$ and the element $2$. It is easy to verify that it is irreducible but not prime.
But my question is something different. If one does not know such examples beforehand, then it is not easy to construct one. How would one know that $\mathbb Z[\sqrt {-5}]$ would work? And supposing one guesses to work with such a ring, how would one know beforehand what kind of element to select so that it satisfies our requirement? There is no way other than searching randomly; if you are lucky you will find one. I have seen books just citing counterexamples, but none of them explains the thought process behind the counterexample, i.e. how to generate a family of such counterexamples or how to reach such an example without a random try.
So, does anyone have a thought process in mind so that I can find such counterexamples on my own? I am not looking for a formula for such rings; I am looking for the thought behind the counterexample.
AI: "How will one know that $\Bbb Z[\sqrt{-5}]$ would work?"
The idea is to look at integral domains which are not PIDs.
It is natural to consider first the rings of integers in quadratic number fields $\Bbb Q(\sqrt{m})$ for square-free integers $m$. These rings are factorial if and only if they are PIDs.
In particular, for $m<0$ it is well known exactly which of these rings are PIDs, namely when
$$m= -1, -2, -3, -7, -11, -19, -43, -67, -163;$$
see:
For which values of $d<0$ is the subring of quadratic integers of $\mathbb Q[\sqrt{d}]$ a PID?
So for counterexamples we would look at square-free $m<0$ with $m\equiv 2,3 \bmod 4$ different from the above list.
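To make the counterexample concrete with a machine check: in $\Bbb Z[\sqrt{-5}]$ the norm $N(a+b\sqrt{-5})=a^2+5b^2$ is multiplicative, and the classical argument for the element $2$ can be verified by brute force (a small Python sketch of my own, not part of the answer above):

    # 2 is irreducible in Z[sqrt(-5)]: a factorization into two non-units
    # would need an element of norm 2, but a^2 + 5b^2 = 2 has no solutions.
    sols = [(a, b) for a in range(-2, 3) for b in range(-2, 3)
            if a * a + 5 * b * b == 2]
    print(sols)   # [] -> no element of norm 2, so 2 is irreducible

    # 2 is not prime: 2 divides (1+sqrt(-5))(1-sqrt(-5)) = 6, yet
    # N(2) = 4 does not divide N(1 ± sqrt(-5)) = 6, so 2 divides neither factor.
    print((1 + 5) % 4)   # 2 != 0, confirming the norm obstruction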
|
H: If $\lim_{x\to\pm \infty}f(x,c)=0$ $\forall c\in [a,b]$ then $\lim_{||(x,y)||\to\infty}f(x,y)=0$?
If $f:\mathbb{R}\times [a,b]\to \mathbb R$ is a continuous function with $$\lim_{x\to\pm \infty}f(x,c)=0$$ for all $c\in[a,b]$, does that imply that $$\lim_{||(x,y)||\to\infty}f(x,y)=0$$ (when taking values $(x,y)\in \mathbb{R}\times [a,b]$)?
This intuitively seems true to me, any hint would be appreciated.
AI: Here's a counterexample following the idea in my comment: let's say $[a,b] = [0,1]$ for simplicity's sake. Then,
For any $x > 0$, define $f$ along the short vertical $y \in [0,1]$ as a piecewise linear path,
$$f(x,y) = \left\{\begin{array}{ll}
ye^x & 0 \leq y < e^{-x} \\
2 - ye^x & e^{-x} \leq y < 2e^{-x}\\
0 & 2e^{-x} \leq y \leq 1
\end{array}\right.$$
This draws a line from a height of $0$ to a height of $1$ and back down to $0$, completing a small triangle with a peak at $e^{-x}$. As $x \to \infty$, the triangle thins and its peak moves towards $0$.
Therefore, across any fixed horizontal $y = c$, the triangle will eventually thin and squeeze under this threshold, at which point the function becomes constantly $0$.
However, it is clear that the function does not shrink uniformly to $0$; along the path $y = e^{-x}$ the function rides the peak of this triangle and is therefore constantly $1$.
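One can evaluate this $f$ numerically to see both behaviours (a minimal sketch; the Python function f is mine):

    # Along y = exp(-x) the value stays 1; along a fixed horizontal y = c
    # it is eventually 0.
    import numpy as np

    def f(x, y):
        if y < np.exp(-x):
            return y * np.exp(x)
        elif y < 2 * np.exp(-x):
            return 2 - y * np.exp(x)
        return 0.0

    print([f(x, np.exp(-x)) for x in (1, 5, 10)])   # [1.0, 1.0, 1.0]
    print([f(x, 0.25) for x in (1, 5, 10)])         # [0.679..., 0.0, 0.0]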
|
H: Why has this variable been used in Spivak's proof for Intermediate Value Theorem?
I am currently on chapter 8 of Spivak's Calculus and I'm struggling to understand the reasoning behind some of the proof of "Theorem 7-1" (IVT). Why has the variable $x_0$ been introduced (what is the utility of it)?
Could you not just say that $x_1$ is a number such that $\alpha < x_1 < \alpha + \delta$ and state the contradiction?
AI: The problem with your suggestion is that you need to prove that $x_1\in A.$ All you know a priori is that $f$ is negative on $(\alpha - \delta, x_1).$ The role of $x_0$ in the proof is to bridge the gap between $a$ and $x_1,$ by the equation $[a, x_1]=[a,x_0]\cup[x_0,x_1].$
|
H: Is there $f\in L^2$ with $\lim_{x\to\infty} f(x)^2\log(x)>0$?
I'm asking myself whether there is a function $f\in L^2(\mathbb{R}_+\to\mathbb{R})$ so that $\lim_{x\to\infty} f(x)^2\log(x)>0$. I think there is no such function. Here is my proof:
Let $f\in L^2$ with $\lim_{x\to\infty} f(x)^2\log(x)>0$. Then
$$\infty>\int_{1}^{\infty}f(x)^2\mathrm{d}x=\sum_{n=1}^{\infty}\int_{n}^{n+1}f(x)^2\frac{\log x}{\log x}\mathrm{d}x\geq \sum_{n=1}^{\infty}\frac{1}{\log (n+1)}\int_{n}^{n+1}f(x)^2\log (x)\mathrm{d}x.$$
Since $\sum_{n=1}^{\infty}\frac{1}{\log (n+1)}=\infty$ and $\lim_{x\to\infty} f(x)^2\log(x)>0$, the right hand side of the equation above is $\infty$, which is a contradiction.
Is this proof correct?
AI: Your proof works because
$$
\int_{n}^{n+1}f(x)^2\log (x) \, dx > c > 0
$$
for sufficiently large $n$.
Instead of going via infinite series you can also argue that $f(x)^2 \log(x) > a > 0$ for $x \ge x_0$ implies
$$
\int_{0}^{x_1} f^2(x) \, dx > \int_{x_0}^{x_1} \frac{a}{\log(x)} \, dx
> \int_{x_0}^{x_1} \frac{a}{x} \, dx = a (\log(x_1) - \log(x_0))
$$
for $x_1 > x_0$, and therefore
$$
\lim_{x_1 \to \infty}\int_{0}^{x_1} f^2(x) \, dx = \infty \, .
$$
|
H: Find all group homomorphisms $A_n \rightarrow \mathbb{C}^*$
Find all group homomorphisms $A_n \rightarrow \mathbb{C}^*$ for all integers $n \geq 2$
What I have up until now:
Define $f: A_n \rightarrow \mathbb{C}^*$
Then by the first isomorphism theorem, we have that :
$A_n/ \ker(f) \cong f[A_n] \subseteq \mathbb{C}^*$
Thus, as $\mathbb{C}^* $ is abelian, so is $A_n/ \ker(f)$
Hence $[A_n,A_n] \subseteq \ker(f)$
Then by using the fundamental theorem on homomorphisms we can easily find all $f$ if we find all homomorphisms $g: A_n/[A_n,A_n] \rightarrow \mathbb{C}^*$
First, we begin with $n \geq 5 $
Then we have $[A_n,A_n] =A_n$. Which means that $A_n/[A_n,A_n] = A_n/A_n \cong (\mathbb{Z}/1\mathbb{Z})$
So the $g: A_n/[A_n, A_n] \rightarrow \mathbb{C}^*$ can only be the trivial homomorphism
This means that for $n \geq 5 $ all homomorphisms $f: A_n \rightarrow \mathbb{C}^*$ are the trivial homomorphism.
Now for $n=2$ we have $[A_2 , A_2]=\{(1)\}$. So $A_2/[A_2,A_2] = A_2 = \{ (1) \}$
So once again $g: A_n/[A_n, A_n] \rightarrow \mathbb{C}^*$ can only be the trivial homomorphism.
So, all homomorphisms $f: A_2 \rightarrow \mathbb{C}^*$ are the trivial homomorphism.
For $n=3$ we have $[A_3 , A_3]=\{(1)\}$. So $A_3/[A_3,A_3] = A_3 = \{ (1),(1 \ 2 \ 3),(1 \ 3 \ 2) \}$
And then I am not quite sure how to proceed; I also do not know how I could easily do this for $A_4$.
Is what I have up until now correct? And how should I proceed further?
AI: What you have so far is correct.
Notice that $A_3 \cong C_3$, so a homomorphism $A_3 \to \mathbb C^*$ is the same as a solution to $x^3 = 1$ in $\mathbb C^*$ (by choosing such an $x$, you choose the image of a generator, which determines the whole morphism).
For $A_4$, you can show $[A_4, A_4] = V_4 = \{(1), (12)(34), (13)(24), (14)(23) \}$. Since $\# A_4 = \frac{4!}{2} = 12$, we have $\# A_4/[A_4, A_4] = 12/4 = 3$, thus $A_4/[A_4, A_4] \cong C_3$. This means we are in the same situation as before, with $A_3$.
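If you want to double-check the group theory by machine, sympy can compute the derived subgroup (a sketch, assuming sympy's combinatorics module):

    # Verify [A4, A4] = V4 and that the abelianization of A4 has order 3.
    from sympy.combinatorics.named_groups import AlternatingGroup

    A4 = AlternatingGroup(4)
    D = A4.derived_subgroup()
    print(D.order())                 # 4, the Klein four-group V4
    print(A4.order() // D.order())   # 3, so A4/[A4, A4] is C3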
|
H: Combinatorics question where someone has to be at least one seat away from anyone else?
In a doctor’s waiting room, there are 14 seats in a row. Eight people are waiting to be seen.
There is someone with a very bad cough who must sit at least one seat away from anyone else. If all arrangements are equally likely, what is the probability that this happens?
My logic is this: Number the 8 people and let person 8 have the cough. We need to find the number of arrangements in which person 8 is next to someone, and then subtract this from the total to get the number of arrangements where he sits at least one seat away.
So treat person 8 and person 7 as one 'object'. There are $13!\cdot 2!/6!$ distinct arrangements of this (if you treat the empty seats as identical objects). We can repeat this principle, pairing up person 8 with person 6, then 5, etc., so we multiply the previous expression by 7 to get 121080960 permutations where the coughing person is next to someone. However, this is way too much.
How would you do this problem?
AI: I would try to place the coughing person in each of the 14 possible seats and let the others choose their places arbitrarily, leaving the gaps around that person, so that the total number of favourable arrangements is:
$$
2\binom{12}{7}7!+12\binom{11}{7}7!,
$$
where the terms correspond to coughing person sitting either on one of two end seats or on any of 12 other seats, respectively.
To find the corresponding probability, the above number should be divided by the total number of possible arrangements:
$$
\binom{14}{8}\,8!.
$$
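For reference, here is the exact value of this expression together with a Monte Carlo check (a sketch; the simulation takes the first sampled seat as the cougher's):

    from math import comb, factorial
    import random

    fav = 2 * comb(12, 7) * factorial(7) + 12 * comb(11, 7) * factorial(7)
    tot = comb(14, 8) * factorial(8)
    print(fav / tot)   # 0.23077..., exactly 3/13

    trials, hits = 200000, 0
    for _ in range(trials):
        seats = random.sample(range(14), 8)   # distinct seats of the 8 people
        cough, rest = seats[0], seats[1:]
        if all(abs(cough - s) > 1 for s in rest):
            hits += 1
    print(hits / trials)   # close to 0.2308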
|
H: Are elements of a group also elements of the quotient group?
I think the general answer to this question is no. What I struggle about the notation of a question in Thomas Hungerford's Abstract Algebra An Introduction Textbook. The question is the following;
Find the order of $\frac{8}{9}$ in the additive group $\mathbb{Q}/\mathbb{Z}$. But
$$\mathbb{Q}/\mathbb{Z} = \{\mathbb{Z}+g:g\in\mathbb{Q}\}$$
So we have that $\frac{8}{9}\notin \mathbb{Q}/\mathbb{Z}$
but $\mathbb{Z}+\frac{8}{9}\in \mathbb{Q}/\mathbb{Z}$.
Moreover, how can a quotient group be additive? I know the result is the same, namely $9$, but I am a little bit confused about the notation and I could not figure out the possible mistake I am making.
AI: Yes, the quotient group is a different group with different elements. The author just decided to use a simpler notation than $\mathbb{Z}+\frac{8}{9}$; this happens a lot in mathematics. By the term "additive" he means that the operation in $\mathbb{Q}$ is addition. Hence the operation of the quotient group is $(\mathbb{Z}+a)+(\mathbb{Z}+b)=\mathbb{Z}+(a+b)$. Actually, in this type of group I prefer to write the elements of $\mathbb{Q}/\mathbb{Z}$ as $a+\mathbb{Z}$ instead of $\mathbb{Z}+a$.
|
H: Does homogeneous scaling minimize this integral quantity among all surjective maps?
Let $\lambda>1$ be a parameter. Let $\psi:[0,1] \to [0,\lambda]$ be a smooth surjective function, satisfying $\psi(0)=0$.
Question: Is it true that $$ E(\psi):=\int_0^1 \big((\psi'(r)-1)^2+(\frac{\psi(r)}{r}-1)^2\big) rdr \ge (1-\lambda)^2 \, \, \,?$$
I prove below that this is true if $\psi(1)=\lambda$. I am asking about what happens without this assumption.
Note that for $\psi(r)=\lambda r$ equality clearly holds.
Proof:
We shall use $a^2 + b^2 \ge 2ab$ with $a=\psi'(r)-1, b=\frac{\psi(r)}{r}-1$:
$$
\begin{split}
& E(\psi) \ge \int_0^1 2(\psi'(r)-1)(\frac{\psi(r)}{r}-1) rdr = \\
& \int_0^1\frac{d}{dr} \big((\psi(r)-r)^2\big) dr=(\psi(1)-1)^2.
\end{split}
$$
Since we assumed $\psi(1)=\lambda$, we are done.
Replacing $\int_0^1 $ by $\int_0^a$ in the argument above, we get $ E(\psi) \ge (\psi(a)-a)^2$, thus $ E(\psi) \ge\max_{a \in [0,1]} (\psi(a)-a)^2$.
Comment:
The statement holds for $0 <\lambda \le 1$. Indeed, in that case $1-\psi(1) \ge 1-\lambda \ge 0$, so $E(\psi) \ge (1-\psi(1))^2 \ge (1-\lambda)^2.$
Thus, the interesting part is for $\lambda > 1$, as assumed in the question.
AI: The answer is positive.
As proved in the question, $E(\psi) \ge (\psi(a)-a)^2$ for any $a\in [0,1]$. In particular, take $a$ satisfying $\psi(a)=\lambda$. Then, since $a \le 1 < \lambda$, we have
$\lambda-a \ge \lambda-1>0$, so
$$
E(\psi) \ge (\lambda-a)^2 \ge (\lambda-1)^2.
$$
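A quick symbolic check of the equality case $\psi(r)=\lambda r$ noted in the question (sketch using sympy):

    # E(lambda*r) should equal (lambda - 1)^2.
    from sympy import symbols, integrate, simplify

    r, lam = symbols('r lambda', positive=True)
    psi = lam * r
    E = integrate(((psi.diff(r) - 1)**2 + (psi / r - 1)**2) * r, (r, 0, 1))
    print(simplify(E - (lam - 1)**2))   # 0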
|
H: Prove that $||A||_2 = \max_{||x||_2=1, ||y||_2 = 1} |y^TAx|$
Prove that $||A||_2 = \max_{||x||_2=1, ||y||_2 = 1} |y^TAx|$.
Definitions:
$$ ||A||_2=\max_{||x||_2=1}||Ax||_2=\sqrt{\rho(A^T A)}.$$
My attempt:
$$||A||_2 = \max_{||x||_2=1}||Ax||_2 = \max_{||x||_2=1} \left( \sum_i \left(\sum_j a_{ij}x_j\right)^2\right)^{1/2},$$ and $\sum_j a_{ij} x_j$ can be seen as the inner product of the $i$th row of $A$ and $x$. The $i$th row of $A$ is $e_i^TA$, but from here on I have trouble writing the steps down. I want to find that $||A||_2 \le \max_{||x||_2 = 1} |e_i^T A x| $, as this would imply that $ ||A||_2\le \max_{||x||_2=1, ||y||_2 = 1} |y^TAx|$ (since the 2-norm of $e_i$ is 1, and the maximum is considered on a bigger set).
For the other inequality I tried a similar technique: $$ \max_{||x||_2=1, ||y||_2=1} |y^T A x| \ge \max_{||x||_2=1} |e_i^T A x|.$$ But I can't really find a lower bound.
How can I prove the equality?
Thanks.
AI: For any unit vectors $x$ and $y$ we have
$$ |y^T Ax| \le \|y\|_2 \|Ax\|_2 \le \|A\|_2, $$
using the Cauchy-Schwarz inequality and the definition of the operator norm.
Notice that if $\rho(A^T A) = 0$ there is nothing to prove. Assume $\sigma = \sqrt{\rho(A^T A)} > 0$ from now on.
Let $x$ be a unit eigenvector of $A^T A$ for the eigenvalue $\sigma^2$, that is
$$ A^T A x = \sigma^2 x. $$
Further, let $y = \frac1\sigma A x$. Then, we have $\|y\|_2 = 1$ and $y^T A x = \sigma$.
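Numerically, the two quantities can be compared via the SVD (a sketch in Python; numpy's matrix 2-norm is the largest singular value):

    # max over unit x, y of |y^T A x| versus ||A||_2 = sqrt(rho(A^T A)).
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 4))
    U, s, Vt = np.linalg.svd(A)
    x, y = Vt[0], U[:, 0]                 # top right/left singular vectors
    print(abs(y @ A @ x))                 # equals s[0]
    print(np.linalg.norm(A, 2))           # also s[0]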
|
H: An equation involving the integral of an inverse function.
The question is as follows.
$f$ is differentiable on $[a,b]$ and $f'$ is continuous, and for all $x\in[a,b]$, $f'(x)\neq 0$.
Then, show that $$\int_a^b f(x)\,dx+ \int_{f(a)}^{f(b)} f^{-1}(x) \,dx = bf(b)-af(a)$$
I take the $$\int_{f(a)}^{f(b)} f^{-1}(x) \,dx = \int_a^b f^{-1}(f(x))f'(x) \,dx$$
I am stuck here. How can I proceed?
AI: From where you left off (note $f^{-1}(f(x)) = x$),
$$\int_a^b f(x)\, dx + \int_a^b xf'(x)\, dx = \int_a^b (xf(x))' \, dx = \left[ xf(x)\right]_a^b = bf(b)-af(a) $$
|
H: Minimizing by Linear Programming
I have a linear programming problem whose equations I can't write down. The problem is:
An office furniture company has two plants that produce lumber used in the manufacturing of a line of desks and computer tables that the company sells. In one week, plant $A$ can produce the lumber required to manufacture 200 desks and 120 computer tables, while in one week, plant $B$ can produce the lumber to manufacture 350 desks and 240 computer tables.
Suppose it costs \$5,150 per week to operate plant $A$ and \$8,200 per week to operate plant $B$.
If the company needs enough lumber to manufacture at least 2,000 desks and 1,320 computer tables, how many weeks should each plant operate in order to meet these requirements at a minimum cost? What is the minimum cost?
Thank you in advance.
AI: How to set up the problem:
What is the objective? - Minimise the cost
Total cost = Cost of operating plant A + Cost of operating plant B
Now, we need to introduce variables to quantify this. Let $w_A$ be the number of weeks to run plant A, and similarly $w_B$
Now, total cost $C$ is given as
$$C = 5150w_A + 8200w_B$$
What are the constraints?
You need to produce a certain amount of lumber to satisfy the demand. How can we use our variables above to express that? We need enough for at least $2000$ desks and $1320$ computer tables.
Hence, the number of desks we can produce must satisfy $200w_A + 350w_B \geq 2000$
Similarly, for the tables we have $120w_A + 240w_B \geq 1320$
Put it all together, you get
$$\min 5150w_A + 8200w_B
\\ \text{s.t} \\
200w_A + 350w_B \geq 2000 \\ 120w_A + 240w_B \geq 1320 \\w_A, w_B \geq 0$$
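For what it's worth, here is how one might solve this LP numerically (a sketch using scipy; note the $\ge$ constraints are negated to fit linprog's $\le$ convention):

    from scipy.optimize import linprog

    c = [5150, 8200]                        # weekly operating costs
    A_ub = [[-200, -350], [-120, -240]]     # desk and table capacities, negated
    b_ub = [-2000, -1320]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, res.fun)

With fractional weeks allowed the optimum runs only plant B (about $40/7 \approx 5.71$ weeks, cost about \$46,857); if whole weeks are required, $w_A=3$, $w_B=4$ with cost \$48,250 appears to be the cheapest integer point.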
|
H: Prove that the balanced hull of a compact subset $K$ of a Hausdorff TVS $E$ is compact
Let $E$ be a Hausdorff topological vector space (over $\mathbb{C}$) and $K \subset E$ compact. I want to prove that if $M$ is the balanced hull of $K$, that is, $M$ is the smallest balanced set containing $K$, then $M$ is compact.
But I have no idea how to proceed.
Remembering that: a subset $A$ of a vector space $E$ is said to be balanced if for every $x\in A $ and every $\lambda \in \mathbb{C}$, $|\lambda|\leq 1$, we have $\lambda x \in A$.
AI: Hint: Write $D$ for the set of complex numbers $z$ with $\lvert z\rvert \leq 1$. Then $M=DK$.
|
H: Invertibility of an element in a Banach algebra (Gelfand's formula)
In Folland's A Course in Abstract Harmonic Analysis, Theorem 1.8 states that for a unital Banach algebra (with unit $e$), the spectral radius of an element $x$ is given by $\lim_{n \to \infty} \|x^n\|^{1/n}$. In the proof, Folland writes:
We have $\lambda^n e - x^n = (\lambda e -x) \sum_0^{n-1} \lambda^j x^{n-1-j}$, from which it follows that if $\lambda^n e - x^n$ is invertible then so is $\lambda e - x$.
Why is this? It seems we need to show that $\sum_0^{n-1} \lambda^j x^{n-1-j}$ is invertible when $\lambda^n e - x^n$ is, but why would that be?
AI: Note the following:
If $a,b$ are such that $ab$ and $ba$ are invertible, then $a$ and $b$ must both already be invertible.
It is possible to directly give the inverses:
$$a^{-1}=b(ab)^{-1}, \qquad b^{-1} = a(ba)^{-1}$$
To see that these are actually inverses you can additionally define:
$$\tilde a^{-1}= (ba)^{-1} b, \qquad \tilde b^{-1} = (ab)^{-1}a$$
Note that
$$aa^{-1}=1 = \tilde a^{-1}a, \qquad bb^{-1}=1 =\tilde b^{-1}b$$
ie $a^{-1}, b^{-1}$ are right inverses to $a,b$ and $\tilde a^{-1},\tilde b^{-1}$ are left inverses to $a,b$. By the standard computation:
$$a^{-1}= (\tilde a^{-1}a)a^{-1}=\tilde a^{-1}(aa^{-1})=\tilde a^{-1}$$
you find that if something has both a left and a right inverse then they are equal and the element is invertible. In your case $a=\lambda e - x$ and $b=\sum_0^{n-1} \lambda^j x^{n-1-j}$ commute, so $ab=ba=\lambda^n e-x^n$; hence invertibility of $\lambda^n e - x^n$ gives invertibility of $\lambda e - x$.
|
H: How do you solve $2^x-x=3$?
Maybe it's a simple question, but I can't figure it out. How do you solve $2^x-x=3$? Using logarithms? I could write $\log_2(x+3)=x$, but then what?
Thank you!
AI: Here is a step-by-step Lambert W solution (see Robert's answer).
Recall the definition: $ae^a = b\Longleftrightarrow a = W(b)$.
So we try to get our equation into the form $ae^a = b$.
$$
2^x-x=3
\\
2^x=x+3
\\
2^{x+3}=8(x+3)
\\
e^{(x+3)\log 2}=8(x+3)
\\
\frac{-\log 2}{8}=-(x+3)\log 2\; e^{-(x+3)\log 2}
\\
W\left(\frac{-\log 2}{8}\right)=-(x+3)\log 2
\\
\frac{1}{\log 2}W\left(\frac{-\log 2}{8}\right)=-(x+3)
\\
x = -3-\frac{1}{\log 2}W\left(\frac{-\log 2}{8}\right)
$$
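Both real branches of $W$ give a real solution of $2^x-x=3$ (a sketch using scipy; lambertw(z, k) selects the branch):

    import numpy as np
    from scipy.special import lambertw

    arg = -np.log(2) / 8
    for k in (0, -1):                       # principal and lower real branch
        x = (-3 - lambertw(arg, k) / np.log(2)).real
        print(x, 2**x - x)                  # x ~ -2.862 and x ~ 2.445; both give 3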
|
H: Integration of the function $\int \frac{x^2}{(x^2+R^2)^{3/2}}\,dx$.
The function is $\int \frac{x^2}{(x^2+R^2)^{3/2}}\,dx$.
I substituted $x=R\tan\theta$ and got $\int\cos\theta\tan^2\theta\,d\theta$, and here I am stuck.
AI: Use $\tan^2\theta=\sec^2\theta-1$ to write that as$$\int(\sec\theta-\cos\theta)\mathrm{d}\theta=\ln|\sec\theta+\tan\theta|-\sin\theta+C.$$
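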
|
H: Does a prime ideal contain an irreducible element?
The context. Let $R$ be an integral domain. It is known that a domain $R$ is a UFD if and only if any nonzero prime ideal contains a prime element.
It is also known that $R$ is a UFD if and only if any non zero element has a decomposition as a product of a unit and irreducible elements (which is automatic if $R$ is Noetherian, for example) and any irreducible element is prime.
Thus, if $R$ is a Noetherian domain which is NOT a UFD, we know that there exists a nonzero prime ideal which does NOT contain a prime element.
The natural question coming to my mind is now:
Question 1. Let $R$ be a Noetherian integral domain which is not a field. Does every nonzero prime ideal of $R$ contain an irreducible element?
Question 2. If the answer to Q1 is NO, can we find sufficient conditions under which the answer to Q1 becomes YES?
The answer is YES for $A[X]$ where $A$ is a PID (a full description of the prime ideals is known, and they all contain an irreducible element).
I think I have proved it is also true for $\mathbb{Z}[\sqrt{-d}]$, $d>0$ squarefree such that $d\not\equiv -1 \bmod 4$ (I have not checked the details), but I have no clue how to prove it in general or how to find a counterexample (if there is one).
Edit. In fact, Q1 is trivial: any nonzero non-unit $a\in \mathfrak{p}$ (a prime ideal) may be written as a product of irreducible elements, and since $\mathfrak{p}$ is prime, one of these irreducible elements belongs to $\mathfrak{p}$.
So the real question is:
Real question. Let $R$ be an integral domain which has irreducible elements. Does every nonzero prime ideal contain an irreducible element?
If there are counterexamples, they are non-Noetherian.
AI: Let $a$ be a nonzero element of your prime ideal $P$. If it's not irreducible, it
has a proper factorisation $a=a_1b_1$ where, one may assume that $a_1\in P$
and $b_1$ is not a unit. Again if $a_1$ is not irreducible, then
$a_1=a_2b_2$ where $a_2\in P$ and $b_2$ is not a unit.
If we keep going, we get a strictly increasing chain of principal ideals
$(a_1)\subset (a_2)\subset\cdots$ contradicting the Noetherian condition.
|
H: Condition making a Compact Operator have to be a finite rank operator
I have tried the following exercise:
Let $X$ be an infinite-dimensional Banach space, $T$ a compact operator on $X$, and suppose there is a closed subspace $M$ such that $X= \operatorname{Im} T\oplus M$; then $T$ is a finite rank operator.
My first approach was this: since $M$ is closed, $X/M$ is a Banach space, and we know that $X/M \cong \operatorname{Im} T$; but this is an algebraic isomorphism, and I am not sure we get a homeomorphism of spaces, which we would need to conclude that $\operatorname{Im} T$ is a Banach space. If $\operatorname{Im} T$ were a Banach space, then $T$ would be an open map from $X$ onto $\operatorname{Im} T$, so $T(B_X(0,1))$ would be open in $\operatorname{Im} T$, and for this set to be relatively compact, $\operatorname{Im} T$ would have to be finite-dimensional. So my question is: is $\operatorname{Im} T$ in fact a Banach space, or do we only get an algebraic isomorphism? Thanks in advance.
AI: The composition of a compact operator and a continuous operator is again compact.
Now consider $\pi \circ T$, where $\pi \colon X \to X/M$ is the canonical projection. By the above, it is compact, and by the assumption $X = \operatorname{Im} T \oplus M$ it is surjective. Also, $X/M$ is a Banach space, whence $\pi \circ T$ is open. Thus $X/M$ is locally compact, hence finite-dimensional.
|
H: Divisibility in a Euclidean Domain
Let $R$ be a Euclidean Domain. I am working on showing that
$$ \text{If } a \, | \, bc \text{ with } a,b \neq 0 \text{ then } \frac{a}{(a,b)} \, \bigg| \, c. $$
Note that the first part of this problem is to show that if $(a,b) = 1$ and $a \, | \, bc$ then $a \, | \,c$. I had no problem with that, and believe it guides me towards the more generalized desired result. Letting $x = \frac{a}{(a,b)}$, I can show that $x \, | \, bc$. So if I can show that
$$ (x,b) = 1, $$
then I'm done.
Edit: I realized my idea need not be true. Let $a = 8$ and $b=14$; then $(a,b) = 2$, but $(4,14) \neq 1$. Okay, back to the drawing board.
I am having a difficult time showing this final result. Any help is much appreciated!
AI: Let $\mathrm{gcd}(a,b)=m$, i.e. $a=a_1m$ and $b=b_1m$, where $\mathrm{gcd}(a_1,b_1)=1$.
Then $\frac{bc}{a}=\frac{b_1c}{a_1}$ is an element of $R$ because $a\mid bc$; hence $a_1\mid b_1c$, and since $\mathrm{gcd}(a_1,b_1)=1$, the first part of the problem gives $a_1|c$. Let $\frac{c}{a_1}=x$.
Then
\begin{equation}
\frac{c}{\frac{a}{\mathrm{gcd}(a,b)}}=\frac{c}{\frac{a}{m}}=\frac{c}{a_1}=x.
\end{equation}
|
H: Closure and interior of set of functions.
Given is the set: $C([0,1]) = \{f: [0,1] \rightarrow \mathbb{R} \mid f \text{ is continuous} \}$ with the following metric: $d_\infty(f,g) = \sup\{ |f(x) - g(x)| \, | \, x \in \, [0,1] \}$.
Find the closure and interior of the following set: $N = \{ f: [0,1] \rightarrow \mathbb{R}\, | \, \exists x \in [0,1]: f(x) = 0 \}$.
My solution: $ \overline{N} = N$ and $\mathring{N} = N $.
Reason: The inclusion $ \mathring{N} \subseteq N$ is evident. Take now $f \in N$. Then there exists an $x_0 \in [0,1]$ such that $f(x_0) =0$. Take now $\epsilon > 0$. We will show now that $B(f,\epsilon) \subseteq N$. Take $g \in B(f,\epsilon)$. We now have the following inequality: $|g(x_0)| = |f(x_0) - g(x_0)| \leq d_\infty(f,g) < \epsilon $. This shows that $g(x_0) = 0$ and that $g \in N$, because $\epsilon > 0$ was arbitrary.
For $\overline{N} \subseteq N$ I had an analogue reasoning.
My question is; am I right?
Thanks in advance!
AI: The set $N$ is not an open set (and therefore $N\ne\mathring N$). For instance, the null function $\eta$ belongs to $N$. However, given $\varepsilon>0$, the constant function $\frac\varepsilon2$ belongs to $B_\varepsilon(\eta)$, but not to $N$. So, $B_\varepsilon(\eta)\not\subseteq N$.
In fact, $\mathring N$ consists of those functions $f\in C([0,1])$ for which there are numbers $x,y\in[0,1]$ such that $f(x)<0<f(y)$.
But $N$ is closed (and therefore $N=\overline N$) because if $f\in N^\complement$, then $f$ has no zeros. Since $f$ is continuous and $[0,1]$ is compact, $\inf|f|>0$. Let $r=\inf|f|$. Then no function from $B_r(f)$ has a zero. In other words, $B_r(f)\subset N^\complement$.
|
H: Group element that normalizes finite subgroup that is generated by a subset of $G$
Let $N$ be a finite subgroup of a group $G$, and assume $N = \langle S \rangle$ for some subset $S$ of $G$. Prove that an element $g \in G$ normalizes $N$ if and only if $gSg^{-1} \subset N$.
My question is about the forward direction. That is, we assume that $g$ normalizes $N$, i.e. $gNg^{-1} = N$, and we want to prove that if $s \in S$, then $gsg^{-1} \in N$. But I cannot see how to do this. In fact, if we let $N = \{e\}$ and $S$ be any nontrivial subset of $G$, then every $g$ normalizes $N$, but $gsg^{-1} \not \in N$ if $s \neq e$.
AI: If $S$ generates $N$, then each element of $N$ is of the form $s_1\ldots s_n$, where each $s_i$ is either an element of $S$, the inverse of an element of $S$, or the identity element of $G$. In particular, $S$ is contained in $N$. Hence if an element $g$ of $G$ normalizes $N$, then in particular $gSg^{-1}\subseteq gNg^{-1}=N$.
|
H: Cycle Notation for a Permutation Group
Can anyone thoroughly explain how you would arrive at this answer? I'm very confused about how to do this problem.
AI: Let's do $\sigma_2$ for example. Since $\sigma_2(a)=2a \bmod{7}$, it maps $1\bmod{7}$ to $2 \bmod{7}$, $2 \bmod{7}$ to $4\bmod{7}$, and $4\bmod{7}$ to $8\bmod{7}$, which is $1 \bmod{7}$. Hence we have our first cycle $(1 2 4)$. $3$ is the next number not used, so we start with that. Then $\sigma_2$ maps $3 \bmod{7}$ to $6 \bmod{7}$ and $6 \bmod{7}$ to $12 \bmod{7}$. But $12 \bmod{7}$ is $5 \bmod{7}$, and that finally gets mapped to $10 \bmod{7}$, which is $3 \bmod{7}$ again. Hence we get $(3 6 5)$.
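The same bookkeeping can be automated (a small Python sketch; the helper cycles is mine):

    # Cycle decomposition of sigma_2 : a -> 2a (mod 7) on {1,...,6}.
    def cycles(f, elements):
        seen, out = set(), []
        for start in elements:
            if start in seen:
                continue
            cyc, a = [], start
            while a not in seen:
                seen.add(a)
                cyc.append(a)
                a = f(a)
            out.append(tuple(cyc))
        return out

    print(cycles(lambda a: 2 * a % 7, range(1, 7)))   # [(1, 2, 4), (3, 6, 5)]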
|
H: Quadratic Function from the Taiwan IMO TST 2005
Lately, I came across the Team Selection Test for the IMO 2005 Taiwan Team. One of the questions is stated as follows:
Set $f(x) = Ax^2+Bx+C$ and $g(x)=ax^2+bx+c$, with $A \times a \neq 0$ and $A, a, B, b, C, c \in \mathbb{R}$, satisfying:
$|f(x)| \ge |g(x)| \quad \forall x \in \mathbb{R}$
Prove that $|B^2-4AC| \ge |b^2-4ac|$
My teacher told me this is not simple, yet I came up with the following solution that settles the question in a few lines:
Solution
Because the absolute values of $f(x)$ and $g(x)$ are always nonnegative, we only need to consider the absolute values of $ \Delta_g = b^2-4ac$ and $ \Delta_f = B^2-4AC$.
Now since $|f(x)| \ge |g(x)|$, the smallest value of $|f(x)|$ is at least the smallest value of $|g(x)|$, which means: $ |\frac{B^2-4AC}{4A}| \ge |\frac{b^2-4ac}{4a}|$.
Apparently, $|A| \ge |a|$; otherwise, for $x$ large enough, $|g(x)|>|f(x)|$, a contradiction.
Hence $|B^2-4AC| \ge |b^2-4ac|$.
Q.E.D $\square$
AI: Your solution is incorrect.
The extremal value of $ g(x)$ is $ -\frac{ b^2 - 4ac } { 4a}$.
Wrong claim that you made: The smallest value of $ |g(x)|$ need not be $| \frac{ b^2 - 4ac } { 4a} |$.
E.g. Consider $ g(x) = ( x - 1 ) ( x + 1)$. Clearly the smallest value of $ |g(x) | $ is 0.
Whereas $| \frac{ b^2 - 4ac } { 4a} |$ is the absolute value of the extrema of $g(x)$, so this is equal to $ | - 1 | = 1$.
You need $ \Delta_g \leq 0 $ in order to conclude that "the smallest value of $ |g(x)|$ is $| \frac{ b^2 - 4ac } { 4a} |$", which your proof requires.
Note that the remaining case $ \Delta_g \geq 0 $ is pretty simple to deal with (and can be done in a similar manner).
|
H: How to find the areas of geometrical figures?
Find the area of the region that lies inside the circle $r = 1$
and outside the cardioid $r=1-\cos\alpha$.
We know that area cannot be negative (at least in basic calculus).
I wonder where I made a mistake; I show my equations below.
$$ 1=1-\cos\alpha $$
$$\alpha = \pi/2,3\pi/2$$
\begin{equation}
\frac{1}{2}\int_{3\pi/2}^{\pi/2}f(x)^2d\alpha
\end{equation}
\begin{equation}
\frac{1}{2}\left(\int_{\pi/2}^{3\pi/2}1^2\,d\alpha-\int_{\pi/2}^{3\pi/2}(1-\cos\alpha)^2\,d\alpha\right)
\end{equation}
$$ \int_{3\pi/2}^{\pi/2}1^2d\alpha = $$
$$ \frac{3\pi}{2} -\frac{\pi}{2}=\pi $$
$$ \int_{\pi/2}^{3\pi/2}(1-\cos\alpha)^2d\alpha = $$
$$ 9\pi/2 - 3\pi/2 + 2 - (-2) + 0 - 0 $$
$$ 2\frac{1}{2}(\pi - 3\pi - 4)$$
$$ Answer: (-2\pi-4) $$
It is symmetric, so no multiplication by 2 is required.
AI: If you look at the picture, the region you are asked to calculate is half of the circle minus twice the area of the cardioid from $0$ to $\pi/2$, not what you did! Your arithmetic itself is fine: you subtracted the big left part of the cardioid from half the circle, so no wonder it is negative.
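For reference, here is a computation of the intended region as I read it, the angles $-\pi/2\le\alpha\le\pi/2$ where the circle is the outer curve (a sympy sketch):

    # Area inside r = 1 and outside r = 1 - cos(alpha).
    from sympy import symbols, cos, pi, integrate, Rational, simplify

    a = symbols('alpha')
    A = Rational(1, 2) * integrate(1 - (1 - cos(a))**2, (a, -pi/2, pi/2))
    print(simplify(A))   # 2 - pi/4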
|
H: Quasilinear PDE $\left\{\left(x+y\right)\frac{\partial }{\partial x}+\frac{\partial }{\partial z}\:\right\}u\left(x,y,z\right)=0$
$\left\{\left(x+y\right)\frac{\partial }{\partial x}+\frac{\partial }{\partial z}\:\right\}u\left(x,y,z\right)=0$
This means that
$\left(x+y\right)\frac{\partial u}{\partial x}+\frac{\partial u}{\partial z}\:=0$
$\left(x+y\right)\frac{\partial u}{\partial x}+0\cdot \frac{\partial u}{\partial y}+1\cdot \frac{\partial u}{\partial z}\:=0$
This means that
$\frac{dx}{\left(x+y\right)}=\frac{dy}{0}=\frac{dz}{1}$
Step 1
$\frac{dx}{\left(x+y\right)}=\frac{dy}{0}$
$dy=0$
When integrating both sides we get $y=C_1$
Step 2
$\frac{dx}{\left(x+y\right)}=\frac{dz}{1}$
Because $y$ is a constant we can integrate both sides; we get
$ln\left|x+y\right|=ln\left|C_2\cdot z\right|$
$x+y=C_2\cdot z$
$\frac{x+y}{z}=C_2$
This means our solution is
$u=\Phi \left(y,\:\frac{x+y}{z}\right)$
Right? Also, is there a website where I can check it?
AI: Well you can try and verify your answer relatively easily. Let $\Phi = \Phi(u,v)$:
$$\partial_x u = \frac{1}{z}\partial_v\Phi(y,\frac{x+y}{z})$$
$$\partial_z u = -\frac{x+y}{z^2}\partial_v\Phi(y,\frac{x+y}{z})$$
But in general
$$(x+y)\partial_x u + \partial_z u = \bigg[\frac{x+y}{z} - \frac{x+y}{z^2}\bigg]\partial_v\Phi(y,\frac{x+y}{z}) \neq 0$$
So your solution is not correct. Where you went wrong is integrating
$$\frac{dx}{x+y} = dz \implies \ln(x+y) = z + C_2$$
so
$$\frac{x+y}{e^z} = \tilde{C}_2$$
The general solution is then
$$u(x,y,z) = \Phi(y, \frac{x+y}{e^z})$$
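A quick symbolic check that this general solution works for an arbitrary smooth $\Phi$ (sketch using sympy):

    # Verify (x+y) u_x + u_z = 0 for u = Phi(y, (x+y)*exp(-z)).
    from sympy import symbols, Function, exp, diff, simplify

    x, y, z = symbols('x y z')
    Phi = Function('Phi')
    u = Phi(y, (x + y) * exp(-z))
    print(simplify((x + y) * diff(u, x) + diff(u, z)))   # 0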
|
H: Permutation Question with two independent conditions
In how many ways can $8$ boys $(B_1,B_2,...,B_8)$ and $5$ girls be arranged linearly such that $B_1$ and $B_2$ are NOT together and exactly four girls are together?
I can solve when either of the conditions is imposed, the former would give $13!-2!\cdot12!$ and the latter would be ${5 \choose 4} \cdot 4! \cdot 10! - 5! \cdot 9!$
However, I'm not getting how both these conditions together are meant to be evaluated. I thought it wouldn't follow from the principle of inclusion-exclusion, as these are two independent conditions.
Answer given is $5! \cdot 8! \cdot 58$
AI: Without inclusion exclusion:
Select the group of four girls and permute them in $\binom{5}{4}4!=5!$ ways.
There are two options for the boys: either $B_1$ and $B_2$ are together or not. If the former is true, then you have to put one of the two groups of girls exactly in between them. So there are $2$ ways to choose which of the two groups of girls goes in the middle and $8$ ways to put the other group in any remaining slot between the boys. You have to multiply this by the number of arrangements with the two boys together, $7!\cdot 2!$. So this gives $2\cdot 8 \cdot 7!\cdot 2=4\cdot 8!$
Now the boys have to be not together, in $8!-7!\cdot 2!$ ways, and we can put the girls in any positions we like; there are $9$ slots, so $\binom{9}{2}\cdot 2$ ways, by choosing the slots and permuting the two groups of girls. Putting this process together:
$$5!\cdot (4\cdot 8!+\binom{9}{2}\cdot 2\cdot (8!-7!2!))=5!\cdot 8!\cdot (4+\binom{9}{2}\cdot 2-9\cdot 2)=5!\cdot 8!\cdot (76-18)$$
|
H: An ideal containing a unit
In a ring $R$ with an ideal $I$, we have the result that if $I$ contains a unit then the ideal is equal to the whole ring. So the moment an ideal contains a unit it stops being a proper ideal; does this mean there does not exist a proper ideal containing a unit?
I am not able to express the question with 100% accuracy; still, can someone help?
AI: When talking about ideals we have two 'trivial' cases: first, the zero ideal $(0)$, and second, the whole ring $R$ viewed as ideal. Both play a special role (e.g. a ring is a field iff its only ideals are these two) but are somewhat not what you usually think of as ideals. But, there is no restriction placed on an ideal $I$ being proper per se, that is being a proper subset and not just the whole.
In fact, the result you stated holds in a slightly stronger form:
Claim. Let $R$ be a ring. $a$ is a unit in $R$ if and only if $(a)=R$.
Proof. If $a$ is a unit, there is some $b\in R$ such that $ba=1$, thus $1\in(a)$. But then for all $r\in R$ we can write $r=r\cdot 1=r(ba)=(rb)a\in(a)$ and therefore $(a)=R$. Conversely, assume $(a)=R$. Then in particular $1\in(a)$ and there is some $b\in R$ such that $ba=1$. That is nothing else than $a$ being a unit and we are done. $\square$
So, yes, there is no proper ideal containing a unit as every such ideal is already the whole ring. That is why when talking about, let's say, maximal ideals we explicitly define them as being proper. Otherwise we would have a trivial maximal ideal for every unit and that would screw up some things.
|
H: What is the name of this theory?
What is the name of the theory which states that even if the chances of something happening are very low (like 1%), the event, if it were to happen, is so disproportionate in character that even a 1% chance is still to be considered?
Like taking a gamble to which the chances are:
99% winning 100k $
1% an atomic bomb goes off in the city.
Where the outcome of the least favored scenario is so significant that it should be considered as much as (or more than?) the other scenario.
I know it may seem trivial, but it makes decision making based solely on probability irrelevant.
AI: This is related to the concept of utility, which is the subjective measure of the "value" of a probabilistic outcome. For small sums of money, for example, utility is well-correlated with nominal value, meaning that \$2 has twice the "value" of \$1. For large sums of money, that's not so true anymore, as most people would not consider \$20M to have double the "value" of \$10M - going from \$0 to \$10M will have a far greater subjective impact on your life than going from \$10M to \$20M.
By assigning utility to non-numeric outcomes, we can get an idea of whether a gamble is "worth it". An atomic bomb going off has a very high negative utility, which is orders of magnitude higher than the positive utility of \$100k. So, even with only a 1% chance of the bomb going off, the wager has a negative utility, meaning it's better off to not take the bet.
This does not "make our rules of what is called rational decision making based on probability irrelevant" as you suggest - rather, it recognizes that not all outcomes have the same value. Decisions should not be made solely on the likelihood of an outcome, but also how much that outcome is worth. On a roulette table, betting on black will win 18/38 times, while betting a single number will win only 1/38 times. But since the payout for a single number bet is much larger, the two bets are roughly equally rational - the single number bet isn't more irrational just because it's less likely to hit (both are equally irrational, since they both have negative expected utility).
|
H: Prove that there are at least $2005$ pairs $(x, y)$ of non-negative integers $x$ and $y$ that satisfy $x^2+y^2 = N$ for some positive integer $N$.
Prove that there exists a positive integer $N$ such that there are at least $2005$ ordered pairs $(x,y),$ of non-negative integers $x$ and $y,$ satisfying $x^2 + y^2 = N.$
I think $\sqrt{N}$ would be the lcm of the hypotenuses of the first $2005$ Pythagorean triples. Since there are infinitely many Pythagorean triples, there would be $2005$ pairs $(x, y)$. But I don't know if this completely proves what the question asks for.
AI: Consider the line $\ell$ through $(0,1)$ and $(p/q,0)$, where $p/q\in\mathbb{Q}$ is a rational number greater than $1$ written in lowest terms. $\ell$ intersects the unit circle at ${\left(\frac{2pq}{p^2+q^2},\frac{p^2-q^2}{p^2+q^2}\right)}$, which corresponds to the Pythagorean triple $(2pq,p^2-q^2,p^2+q^2)$. In particular, if $q=1$ and $p$ is even, these form the points ${\left(\frac{2p}{p^2+1},\frac{p^2-1}{p^2+1}\right)}$.
So, you can create as many rational points on the unit circle as you want, say $2005$ of them. Multiply each point by the common denominator of all the points and you're done.
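To see the idea numerically: products of several distinct primes $\equiv 1 \pmod 4$ already have many representations (a brute-force Python sketch; the choice of $N$ is my own illustration):

    # Count ordered pairs (x, y) of non-negative integers with x^2 + y^2 = N.
    from math import isqrt

    N = 5 * 13 * 17 * 29          # four distinct primes, all 1 mod 4
    reps = [(x, y) for x in range(isqrt(N) + 1)
            for y in [isqrt(N - x * x)] if x * x + y * y == N]
    print(len(reps))              # 16 ordered pairs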
|
H: If $f(x)=x^2+x+2$ and $g(x)=x^2-x+2$, how to prove that there is no function $ h:\mathbb R \to \mathbb R $ such that $h(f)+h(g) = g(f)$?
I tried to substitute $f(x)$ and $g(x)$ in their places but didn't find a relation.
The functions being bijective or surjective etc. has nothing to do with our case, I believe.
$g(f(x))=x^4+2x^3+4x^2+3x+4$
Here is a graph
AI: Suppose that such a function existed.
Then, we would have, for all $x \in \Bbb R$, the identity:
$$h(f(x)) + h(g(x)) = g(f(x)).$$
Letting $x = 1$, we get
$$h(4) + h(2) = 14.$$
Letting $x = -1$, we get
$$h(2) + h(4) = 4.$$
The above two equations clearly lead to a contradiction.
|
H: How to find limit of $\lim_{n\to \infty} \frac{n \sin\frac{x}{n}}{x(x^2+1)}$ without L'Hospital's rule?
How to find limit of $$\lim_{n\to \infty} \frac{n \sin\frac{x}{n}}{x(x^2+1)}$$ without L'Hospital's rule?
I thought rewriting $\sin(\frac{x}{n})$ using Taylor expansion would work but it didn't help. I solved it using L'Hospital's rule: $$\lim_{n\to \infty} \frac{n \sin\frac{x}{n}}{x(x^2+1)} = \lim_{n\to \infty} \frac{ \sin\frac{x}{n}}{(x^2+1)\frac{x}{n}}=\lim_{n\to \infty} \frac{ \cos\frac{x}{n}}{x^2+1}=\frac{1}{x^2+1},$$
but I would like to know how to solve it without the rule. If it is not possible without L'Hospital's rule is there any other way to solve the limit with the rule apart from what I've written?
AI: For $\theta$ near $0$, we have
$$
\cos(\theta)\leq \sin(\theta)/\theta \leq 1
$$So with $\theta=x/n$, as $n$ gets large $n\sin(x/n)$ approaches $x$, whence you recover the original result.
|
H: $Show: a\in \mathbb{Z} \Rightarrow 6\mid a^3 -a$
$Show: a\in \mathbb{Z} \Rightarrow 6\mid a^3 -a$
My attempt:
NTS: $6\mid a^3 -a$, so $(2\mid a^3 -a) \land (3\mid a^3 -a)$
Assume that $a\in \mathbb{Z}$, therefore I have 2 cases:
a is even $\Rightarrow a=2k, k\in \mathbb{Z}$
a is odd $\Rightarrow a=2k+1, k\in \mathbb{Z}$
Therefore in case 1: $a^3-a=(2k)^3-2k=8k^3-2k$
and in case 2: $a^3-a=(2k+1)^3-(2k+1)=8k^3+12k^2+4k$
And it led me to a blind alley: I can only factor $2$ out of the expressions, but there's no way to factor out a $3$ for every $k\in \mathbb{Z}$.
Is there a way to show it using a direct proof and the assumption?
I also tried the factorization $a^3-a=(a-1)(a)(a+1)$, but it isn't telling me much either.
AI: $a-1,a,a+1$ are three consecutive integers, so at least one of them must be even and one of them must be a multiple of $3$.
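A quick empirical check (one line of Python):

    print(all((a**3 - a) % 6 == 0 for a in range(-1000, 1000)))   # True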
|
H: Number of loops in permutations
From the famous prisoners problem: there are $n$ labelled boxes, and inside each box one of the numbers $1,\dots,n$. These are randomly mixed, so that you don't know which number is inside each box. How can I compute the distribution of the loop lengths given $n$? Here a loop is the path that emerges by following, from a certain box, the number inside it, and so on until you reach the first box again.
This is a link to the puzzle and to a solution using a simulation:
https://youtu.be/a1DUUnhk3uE
AI: Let's consider the single question:
What is the probability the longest loop is length $k$, where $k \gt \frac n2$?
This condition means if there is any loop of length $k$ then any other loop cannot be longer than $n-k < k$.
In total there are $n!$ possible patterns.
It should not be difficult to see that of these, there are $(n-1)!$ patterns where the only loop is of length $n$, making the probability of this event $\frac1n$ as mentioned in the video.
It is not much harder to see that there are ${n \choose k} (k-1)! (n-k)! =\frac{n!}{k}$ patterns where the longest loop is of length $k$: you choose the $k$ involved, put them in a loop and then do anything possible with the rest. So the probability of this happening is $\frac1k$
This means that the probability that the longest loop exceeds $\frac n2$ is related to Harmonic numbers: $$\sum\limits_{k=\lfloor n/2\rfloor +1}^n \frac1k = H(n) - H(\lfloor n/2\rfloor)$$ and for large $n$ this is going to have a limit of $\log_e(2)\approx 0.6931472$; the probability the longest loop is not more than $\frac n2$ then has a limit of $1-\log_e(2)\approx 0.3068528$ close to the $0.31$ mentioned in the video
It is possible to find the probability that the longest loop is $k$ when $1 \le k\le \frac n2$, but I suspect perhaps not in a closed form. In that case you either want the first loop to be length $k$ and the other loops to be shorter than $k$ or the first loop to be of length less than or equal to $k$ and the longest other loop to be exactly $k$, which gives the recurrence $$P_n(k)=\frac{1}{n}\left(\sum\limits_{i=1}^{k-1} P_{n-k}(i) + \sum\limits_{j=1}^{k} P_{n-j}(k) \right)$$ while when $\frac n2 < k \le n$ we have $P_n(k) =\frac1k$, from earlier.
Clearly $P_n(1)=\frac1{n!}$ and calculating the probabilities up to $n=10$ gives the following table:
k: 1 2 3 4 5 6 7 8 9 10
n:
1 1
2 1/2 1/2
3 1/6 1/2 1/3
4 1/24 3/8 1/3 1/4
5 1/120 5/24 1/3 1/4 1/5
6 1/720 5/48 5/18 1/4 1/5 1/6
7 1/5040 11/240 7/36 1/4 1/5 1/6 1/7
8 1/40320 109/5760 23/180 7/32 1/5 1/6 1/7 1/8
9 1/362880 97/13440 127/1620 27/160 1/5 1/6 1/7 1/8 1/9
10 1/3628800 211/80640 1013/22680 61/480 9/50 1/6 1/7 1/8 1/9 1/10
where, for example, $P_{10}(4)= \frac{1}{10}\left(\frac{1}{720}+ \frac{5}{48}+\frac{5}{18}+\frac14 + \frac14 +\frac{7}{32}+\frac{27}{160}\right)= \frac{61}{480}$
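The recurrence is easy to implement with exact arithmetic; the following Python sketch (the function name P is mine) reproduces the table above:

    # P(n, k) = probability that the longest loop has length exactly k.
    from fractions import Fraction
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def P(n, k):
        if k < 1 or k > n:
            return Fraction(0)
        if 2 * k > n:
            return Fraction(1, k)
        return (sum(P(n - k, i) for i in range(1, k))
                + sum(P(n - j, k) for j in range(1, k + 1))) / n

    print(P(10, 4))                               # 61/480
    print(sum(P(10, k) for k in range(1, 11)))    # 1, as a sanity check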
|
H: Bijection between real and natural numbers.
I know, I know, this question has been asked several times, but I feel mine is a little bit different.
Imagine a correspondence between $[0,1]$ and natural numbers in the following sense:
$$ 0.12 \longleftrightarrow 21 $$
$$ 0.443 \longleftrightarrow 344 $$
$$0.12345 \longleftrightarrow 54321 $$
Now, before you argue that I'm in fact leaving out rationals such as $\frac{1}{3}$, let me explain a bit more.
What this correspondence does is: given a real number $ r \in [0,1] $, which can be expressed as (or can it not?) $ r= x_1 \cdot 10^{-1} + x_2 \cdot 10^{-2} + x_3 \cdot 10^{-3} +\dots$ (for example: $\frac{1}{3} = 0.33333...=0.3+0.03+0.003+\dots$), I now define $n_r$ as $n_r=x_1 \cdot 10^0 + x_2 \cdot 10^1 + x_3 \cdot 10^2 + \dots$ in a recursive way. So I will have $ n_{\frac{1}{3}} = 3 + 30+300+\dots $ as the limit of that recursion.
What's the problem with my reasoning? If it is about that last series, isn't the nature of $\mathbb{N}$ precisely to be defined in an inductive way? So I suppose $ n_r$ defined like that is still a natural number.
P.S.: just an undergrad student who doesn't like infinity.
AI: Incidentally, I don't think the OP has made the best case for non-duplicateness that they could, so let me do that here. Ignoring expansion issues, the point is the difference between their suggested map $$0.a_1a_2a_3...\mapsto \sum a_i\cdot 10^{i-1}$$ and the more common idea $$0.a_1a_2...a_n\mapsto \sum a_i\cdot 10^{n-i}.$$
Their suggestion makes much more sense - it really does look like a "limiting description" of something, the idea being that e.g. "$...333$" makes a lot more sense than "$333...$." This in turn makes their "$\mathbb{N}$ should be closed under limits of recursions" idea much more relevant, in my opinion, than it would otherwise be. I think this additional coherence is actually rather valuable, and makes this question meaningfully different - at least, from the potential duplicates I've been able to find.
J.W. Tanner's answer has gotten it exactly right: the expression "$3+30+300+...$" looks like a description of a natural number, but it isn't. There's an interesting subtlety here, though:
What exactly is $\mathbb{N}$?
This is one of those things which becomes less obvious the more we think about it, so it's worth analyzing a bit. The naive response is that natural numbers are finite - that's sort of the whole point - so we obviously can't have a natural number with an infinite number of digits. While this actually is perfectly right, it's also somewhat unsatisfying and may reasonably feel circular at first.
(Incidentally, this is why in my opinion it's much better to first present the diagonal argument for powersets: that for every set $X$ there is no surjection from $X$ to $\mathcal{P}(X)$. There's no need to define anything subtle here.)
All of what follows should really be a comment on J.W. Tanner's answer, but ... it's slightly too long.
So let's look at your query on this exact point:
isn't the nature of $\mathbb{N}$ precisely to be defined in an inductive way?
In the interests of brevity I'm not going to be totally formal in what follows, but I promise that no serious errors have been made.
"Defined in an inductive way" is a somewhat slippery thing to say, and it's created a crucial confusion in this case. In natural language we think of induction as a way of building more and more things, but that's not really the right picture. The rule
"$1$ is a natural number, and if $n$ is a natural number then $n+1$ is also a natural number"
isn't really about induction; it's really a "closure property," and that's a much simpler sort of thing. For example, it's also true that $1$ is a real number and if $n$ is a real number then $n+1$ is also a real number, but we wouldn't say that that amounts to the real numbers satisfying any kind of induction.
Rather, induction comes in when we say that the only way to build natural numbers is by applying the above rules. Specifically, consider the following very-clearly-limitative claim:
$(*)$ "$\mathbb{N}$ is the smallest set containing $1$ and closed under $n\mapsto n+1$."
The principle $(*)$ probably looks mysterious at first, but it's actually equivalent to the usually-phrased principle of induction.
Induction implies $(*)$: Suppose $X$ contains $1$ and is closed under $n\mapsto n+1$; we want to show $\mathbb{N}\subseteq X$. Well, consider $X\cap\mathbb{N}$. This set contains $1$ and is closed under $n\mapsto n+1$ (since both $X$ and $\mathbb{N}$ have these properties), so by applying induction in $\mathbb{N}$ we have $X\cap\mathbb{N}=\mathbb{N}$.
$(*)$ implies induction: Suppose $X\subseteq\mathbb{N}$ contains $1$ and is closed under $n\mapsto n+1$. Then by $(*)$ we have $\mathbb{N}\subseteq X$, so $X=\mathbb{N}$.
So when we say "$\mathbb{N}$ is built inductively," what we really mean is that $\mathbb{N}$ is as small as it could possibly be while satisfying some basic properties (namely, that $1\in\mathbb{N}$ and $\mathbb{N}$ is closed under $n\mapsto n+1$). Put another way:
Nothing is a natural number unless it absolutely has to be.
"OK," you might say, "but that's not what I think of natural numbers as! What if we replace $\mathbb{N}$ with some number system $\hat{\mathbb{N}}$ which does allow such infinite expressions?"
The good news is:
We can totally do this! The jargon here is "non-Archimedean discrete ordered semiring" or "nonstandard model of arithmetic" or something similar, but without diving into that let's just point out that we can totally whip up a perfectly well-behaved algebraic structure here.
The thing we get might indeed have the same cardinality as - or even be strictly bigger than - $\mathbb{R}$!
However, the drawback is that this is really mixing things up. We shouldn't compare the new $\hat{\mathbb{N}}$ to the old $\mathbb{R}$; e.g. what's the "$...3333$"th digit of $\pi$? Instead, we should compare $\hat{\mathbb{N}}$ to some $\hat{\mathbb{R}}$ which is the analogue of $\mathbb{R}$ for $\hat{\mathbb{N}}$. And once we whip up such a thing ... we'll see again that there is no surjection from $\hat{\mathbb{N}}$ to $\hat{\mathbb{R}}$.
Ultimately this takes us back to my parenthetical comment at the start of this answer: I think it's pedagogically better, most of the time, to present the fully general fact that every set is strictly smaller than its powerset before focusing on a particular example like $\mathbb{N}$ vs. $\mathbb{R}$, if for no other reason than that the general fact (once understood) makes the "goalpost-moving" issue above ("if we go from $\mathbb{N}$ to $\hat{\mathbb{N}}$ we should also go from $\mathbb{R}$ to $\hat{\mathbb{R}}$") basically unsurprising.
|
H: Determine the power series $1/P(x)$ for a power series $P(x)$
We have a power series $P(x)=\sum_{n=0}^\infty(-1)^n2^n(n+1)x^n$ and now have to determine the power series of $\frac{1}{P(x)}$. I am at a total loss here; maybe one of you can help.
AI: \begin{align}
P(x) &= 1\cdot1 - 2\cdot2x + 4\cdot3x^2 - 8\cdot4x^3 + \cdots\\
\implies 2xP(x) &= \phantom{1 . 1} +2\cdot1x - 4\cdot2x^2 + 8\cdot3x^3 + \cdots
\end{align}
Adding the above equations gives us
$$(1+2x)P(x) = 1 - 2x + 4x^2 - 8x^3 + \cdots$$
The RHS above is an infinite geometric series with common ratio $-2x$ and thus, we get
$$(1+2x)P(x) = \dfrac{1}{1+2x}.$$
Rearranging gives us $$\dfrac{1}{P(x)} = (1+2x)^2,$$
the power series of which, I leave to the daring reader to calculate.
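A quick sympy check that $(1+2x)^2 P(x)=1$ up to the truncation order (sketch; P10 is a degree-9 truncation of the series):

    from sympy import symbols, expand, O

    x = symbols('x')
    P10 = sum((-1)**n * 2**n * (n + 1) * x**n for n in range(10))
    print(expand((1 + 2*x)**2 * P10) + O(x**10))   # 1 + O(x**10)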
|
H: Subfields of a finite field
I'm trying to work out all the subfields of a finite field with $3^7$ elements. I've said that every subfield of that field has $3^n$ elements, where $n$ is a positive divisor of $7$. So I get the subfields as the finite fields with $3^1$ and $3^7$ elements. I just wanted to check if that was right.
AI: Yes, the possible sizes are $3^1$ and $3^7.$
If $m$ is the size of a subfield, then the original field must be a finite dimensional vector space over that subfield, and thus, if $d$ is the vector space dimension, the original field must have $3^7=m^d$ elements.
This means by unique factorization that $m=3^n$ for some $n$ and $nd=7.$ This means your subfields can have $3^1$ or $3^7$ elements.
Then you need to show those subfields exist, which is easy.
|
H: Every Radon measure defines a distribution
I have read that given a Radon measure $\mu$ in $\mathbb{R}^d$, the operator $T_{\mu}(\phi)=\int_{\mathbb{R}^d} \phi d \mu$, $\phi \in C_c^\infty(\mathbb{R}^d),$ defines a distribution.
However, when I try to prove this result I find the following difficulties related to integration with respect to a Radon measure:
If $\phi\in C_c^\infty(\mathbb{R}^d)$, do we have $\int_{\mathbb{R}^d} \phi d \mu<\infty$?
If $\phi\in C_c^\infty(\mathbb{R}^d)$, can we affirm that $\int_{\mathbb{R}^d} \phi d \mu\leq \max{|\phi|}\int_{\mathbb{R}^d} d \mu\ $.
In general, I would like to know which properties cease to be true when we change Lebesgue measure to an arbitrary Radon measure.
AI: Hint: by definition, a Radon measure is finite on compact sets, and the subscript $c$ in $C_c(\mathbb R^d)$ means supported on a compact set.
|
H: Evaluate $\int \frac{1}{\sqrt{1-\sin^4{x}}}dx$
$$f(x) = \int \frac{1}{\sqrt{1-\sin^4{x}}}dx$$
I tried this by breaking the denominator as $\sqrt{(\cos^2x)(1+\sin^2x)}$ and then trying to make the integral in forms of $\sec x$ and $\tan x$. But I couldn't succeed.
Can somebody please help me out?
AI: $$I=\int \frac{\sec{x}}{\sqrt{1+\sin^2{x}}} \; dx$$
Multiply the top and bottom by $\sec{x}$:
$$I=\int \frac{\sec^2{x}}{\sqrt{\tan^2{x}+\sec^2{x}}} \; dx$$
$$I=\int \frac{\sec^2{x}}{\sqrt{2\tan^2{x}+1}} \; dx$$
Let $u=\tan{x}$:
$$I=\int \frac{du}{\sqrt{2u^2+1}}$$
Let $t=u\sqrt{2}$:
$$I=\frac{\sqrt{2}}{2} \int \frac{dt}{\sqrt{t^2+1}}$$
Let $t=\tan{w}$:
$$I=\frac{\sqrt{2}}{2} \int \sec{w} \; dw$$
$$I=\frac{\sqrt{2}}{2} \ln {\big | \sec{w}+\tan{w} \big |}+C$$
$$I=\frac{\sqrt{2}}{2} \ln {\big | \sqrt{1+2\tan^2{x}}+\sqrt{2}\tan{x} \big |}+C$$
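A numerical check of the final antiderivative on $(0,\pi/2)$ (sketch using sympy and numpy):

    from sympy import symbols, sqrt, log, tan, sin, diff, lambdify
    import numpy as np

    x = symbols('x')
    F = sqrt(2)/2 * log(sqrt(1 + 2*tan(x)**2) + sqrt(2)*tan(x))
    dF = lambdify(x, diff(F, x))
    f = lambdify(x, 1/sqrt(1 - sin(x)**4))
    pts = np.array([0.1, 0.5, 1.0, 1.3])
    print(np.allclose(dF(pts), f(pts)))   # True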
|
H: Statement regarding Goldbach's Conjecture?
Question
I think using elementary (but twisted) means I can prove an interesting statement and was curious how a number theorist would prove the same.
Suppose we want to find $2$ primes which satisfy:
$$p_1 + p_2 = 2 m$$
and $m$ is not a prime. Then I can show:
$$ m = n_1 p_1 + \lambda_1$$
And
$$ m = n_2 p_2 + \lambda_2$$
where the $n_i$ are the quotients and the $\lambda_i$ the (negative) remainders (see the examples below).
Then:
$$ |(\lambda_1 + \lambda_2)| = \lambda p_1$$
where $\lambda$ is an integer (and $p_1$ is the smaller prime). The converse is true as well.
Example $1$
Consider $18$ which is the sum of $13$ and $5$
$$ 13 + 5 = 2 \times 9 $$
Now, this can be expressed with negative remainders as:
$$ 9 = 13 \times 1 - 4 $$
$$ 9 = 5 \times 2 - 1$$
Verifying :
$$ |-1 -4| = 5$$
Example $2$
Consider $24$ which is the sum of $13$ and $11$
$$ 13 + 11 = 2 \times 12 $$
Now, this can be expressed with negative remainders as:
$$ 12 = 13 \times 1 - 1 $$
$$ 12 = 11 \times 2 - 10$$
Verifying :
$$ |-1 -10| = 11$$
AI: I'll make the $\lambda$s positive for simplicity.
Since $p_1<p_2$ we have $$p_2>{p_1+p_2\over 2}=m$$ and so $n_2=1$. So $$p_1n_1+p_2n_2=p_1n_1+p_2=(p_1+p_2)+p_1(n_1-1)=2m+p_1(n_1-1).$$
Why is this relevant? Well, we also have by definition that $$2m=p_1n_1-\lambda_1+p_2n_2-\lambda_2$$ and so $$\lambda_1+\lambda_2=p_1n_1+p_2n_2-2m.$$
Combining these equalities we get $$\lambda_1+\lambda_2=2m+p_1(n_1-1)-2m=p_1(n_1-1).$$ So $p_1\vert \lambda_1+\lambda_2$ as desired (and in fact we've found the relevant multiple).
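The identity $\lambda_1+\lambda_2=p_1(n_1-1)$ can be confirmed empirically for every Goldbach pair (a Python sketch; it checks all even numbers up to 200, whether or not $m$ is prime):

    from sympy import isprime

    for m in range(3, 101):
        for p1 in range(2, m):
            p2 = 2 * m - p1
            if p1 < p2 and isprime(p1) and isprime(p2):
                n1 = -(-m // p1)            # ceiling of m / p1
                lam1 = n1 * p1 - m          # "negative remainder", made positive
                lam2 = p2 - m               # n2 = 1 since p2 > m
                assert (lam1 + lam2) % p1 == 0
    print("verified")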
|
H: What's the number of subgroups of order 17 in $Z_{17}\times Z_{17}$?
It seems like there should be a method to solve this kind of problem, but I can't figure it out.
AI: Any group of order 17 is cyclic. So, in order to find subgroup of order 17 in $\mathbb{Z}_{17} \times \mathbb{Z}_{17}$ we need to look at the elements of order 17 first and consider the subgroup generated by them.
It is like you are counting sheep by counting their legs but then to get the genuine number of sheep you have to divide the number obtained previously by 4.
Here, the number of elements plays the role of the number of legs.
Any element of order $17$ in $\mathbb{Z}_{17} \times \mathbb{Z}_{17}$ is of the form $(a,0)$, $(0,b)$ or $(a,b)$, where $a, b$ are elements of order $17$ in $\mathbb{Z}_{17}$.
For each such $a$ and $b$ we have $16$ choices, because in $\mathbb{Z}_{17}$ the order of an element is either $1$ or $17$, and only the identity has order $1$.
There are $16$ elements of the form $(a,0)$, and they all generate the same subgroup $\{(a,0):a\in\mathbb{Z}_{17}\}$, so this case contributes exactly one subgroup.
Likewise, the $16$ elements of the form $(0,b)$ all generate the single subgroup $\{(0,b):b\in\mathbb{Z}_{17}\}$.
There are $16^2$ elements of the form $(a,b)$ with $a,b\neq 0$, and each subgroup they generate contains exactly $16$ of them, so this case contributes $16^2/16=16$ distinct subgroups.
Hence, there are only 18 subgroups of order 17 in $\mathbb{Z}_{17} \times \mathbb{Z}_{17}.$
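More generally (a remark beyond what the counting above needs): for any prime $p$, every non-identity element of $\mathbb{Z}_p\times\mathbb{Z}_p$ has order $p$, each subgroup of order $p$ contains $p-1$ such elements, and two distinct subgroups of order $p$ intersect trivially, so the number of subgroups of order $p$ is
$$\frac{p^2-1}{p-1}=p+1,$$
which for $p=17$ gives the $18$ found above.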
|
H: Reforming a difference equation
I am trying to solve a difference equation, but once I get a certain formula, I do not know how to reform it. There is a solution, but it does not explain how to reform the equation.
The image of the problem and the solution
I get to this point, $(\alpha-\beta)Y_{t+1} = -\beta Y_{t}$, but how do I proceed in order to get the formula from the result?
AI: This is just a matter of algebraic manipulation:
$$-\frac\beta{\alpha-\beta}=\frac\beta{\beta-\alpha}=\frac{\beta-\alpha+\alpha}{\beta-\alpha}=\frac{\beta-\alpha}{\beta-\alpha}+\frac\alpha{\beta-\alpha}=1+\frac\alpha{\beta-\alpha}$$
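From here, assuming $\alpha\neq\beta$ (a sketch of the remaining step, since the posted solution is only available as an image): the recursion becomes
$$Y_{t+1}=-\frac{\beta}{\alpha-\beta}\,Y_{t}=\left(1+\frac{\alpha}{\beta-\alpha}\right)Y_{t}=\frac{\beta}{\beta-\alpha}\,Y_{t},$$
and iterating gives $Y_t=\left(\frac{\beta}{\beta-\alpha}\right)^{t}Y_0$.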
|
H: $f(x)=\exp(-x^{-1})$ infinitely differentiable, induction?
$f:\mathbb{R}\rightarrow{\mathbb{R}}; f(x)=\exp(-x^{-1})$ if $x>0$ and $f(x)=0$ if $x\leq 0$.
Show that $f$ can be differentiated on $\mathbb{R}_{>0}$ as often as you want, and that for every $n \in \mathbb{N}$ there exist polynomials $p_n,q_n$ such that $f^{(n)}(x)=\frac{p_n(x)}{q_n(x)}f(x)$.
$f'(x)=\frac{e^{-\frac{1}{x}}}{x^2}$
$f''(x)=\frac{e^{-\frac{1}{x}}-2e^{-\frac{1}{x}}x}{x^4}$
$f'''(x)=\frac{6e^{-\frac{1}{x}}x^2-6e^{-\frac{1}{x}}x+e^{-\frac{1}{x}}}{x^6}$
Induction sounds very good to me but I don't really know how.
AI: First show the formula holds for $n=1$.
If $f^{(n)}(x)=\frac{p_n(x)}{q_n(x)}f(x)$ then
$f^{(n+1)}(x)=\frac{p_n(x)}{q_n(x)}f'(x) + {p'_n(x) q_n(x)-p_n(x)q'_n(x) \over q_n^2(x)}f(x)$, and you are given that
$f'(x) = \frac{p_1(x)}{q_1(x)}f(x)$, so
$f^{(n+1)}(x)= (\frac{p_1(x)p_n(x)}{q_1(x) q_n(x)} + {p'_n(x) q_n(x)-p_n(x)q'_n(x) \over q_n^2(x)} ) f(x)$.
Now figure out a suitable $p_{n+1}, q_{n+1}$ to show the formula holds.
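One possible explicit choice (clearing denominators over the common denominator $q_1 q_n^2$):
$$p_{n+1}=p_1 p_n q_n + q_1\left(p_n' q_n - p_n q_n'\right),\qquad q_{n+1}=q_1 q_n^2,$$
which are again polynomials. With $p_1(x)=1$ and $q_1(x)=x^2$ this reproduces, after cancelling common factors of $x$, the derivatives computed above.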
|
H: Entire relation, projective object and choice object, and the axiom of choice
I was reading on the axiom of choice and I came across these few statements in nLab:
Projective object: $P$ is projective if for any morphism $f: P \rightarrow B$ and any epimorphism $q: A \rightarrow B$, $f$ factors through $q$ by some morphism $P \rightarrow A$.
The axiom of choice can be phrased as "all objects of the category of sets are projective".
Entire relation: A binary relation from a set $X$ to a set $Y$ is called entire if every element of $X$ is related to at least one element of $Y$.
The axiom of choice says precisely that every entire relation contains a function.
A set $A$ is projective iff every entire relation from $A$ to set $B$, for any $B$, contains a function $A \rightarrow B$.
A set $B$ is choice iff every entire relation from a set $A$ to $B$, for any $A$, contains a function $A \rightarrow B$.
Statement 1, 2: https://ncatlab.org/nlab/show/projective+object
Statement 3, 4: https://ncatlab.org/nlab/show/entire+relation
Statement 5, 6: https://ncatlab.org/nlab/show/choice+object
My question is, how are the statements related. That is, how is (1) and (5) related, how is (2) and (4) related, and how does (6) sit in this whole picture (is there any significance of this statement)?
AI: Perhaps let's start with the relations between surjections, entire relations, and indexed families of nonempty sets.
It turns out these three notions are equivalent.
Suppose $f:A\to B$ is a surjection. Define a relation $R:B\to A$ by $bRa\iff f(a)=b$. Since $f$ is surjective, this relation is entire. On the other hand, for $b\in B$ define $A_b = f^{-1}(\{b\})$; since $f$ is a surjection, each $A_b$ is nonempty, so we have a family of (disjoint) nonempty sets indexed by $B$.
Now suppose we have an entire relation $R:B\to A$. Define $A_b = \{a\in A: bRa\}$, which gives a family of nonempty sets indexed by $B$, since $R$ is entire. Finally, define $$A'=\bigsqcup_{b\in B} A_b,$$
and $f:A'\to B$ by $f(a,b)=b$.
Lastly, suppose we start with a family of nonempty sets indexed by $B$, $A_b$. Then again, we define $A'=\bigsqcup_{b\in B} A_b$, and $f:A'\to B$ by $f(a,b)=b$, which is surjective, since all the $A_b$ are nonempty. On the other hand, we can define an entire relation $R:B\to A'$ by $b R (a,b)$. (Or we could take $A=\bigcup_{b\in B} A_b$ and $R:B\to A$ by $bRa \iff a\in A_b$.)
Choice
One version of the axiom of choice says that if $A_b$ is a family of nonempty sets indexed by $B$, then there is a function
$$g:B\to A'= \bigsqcup_{b\in B} A_b$$
such that $fg=1_B$, where $f:A'\to B$ is the surjective function constructed above.
$g$ is called a choice function.
Now the relationship between the statements of choice in the question is the following:
The following are equivalent
1. Choice (as stated just now)
2. Every surjective function has a right inverse.
3. Every set is projective
4. Every entire relation contains a function
Proof
(1) $\implies$ (2): Given a surjective function $f:A\to B$, and applying choice to the family of sets $A_b=f^{-1}(b)$, we get a function $g:B\to A$ such that $fg =1_B$.
(2) $\implies$ (3): Suppose $f:A\to B$ is surjective, and $h:X\to B$ is any map of sets.
To show that all sets are projective, it suffices to show that we can always lift $h$ to a map $\tilde{h}:X\to A$. However, if $g:B\to A$ is a right inverse of $f$, then we can take $\tilde{h}= gh$, since then $f\tilde{h}=fgh=h$.
(3) $\implies$ (1): Suppose $A_b$ is a family of nonempty sets. Then $f : A'\to B$ is surjective, and $B$ is projective, so we can lift $1_B$ along $f$ to a map $g:B\to A'$ such that $fg=1_B$, which is the statement of choice.
(4) $\implies$ (2): If $f:A\to B$ is surjective, and $R:B\to A$ is the entire relation constructed above, and $g:B\to A$ is a function contained in $R$, then by definition,
$bRg(b)$, which means that $fg(b)=b$, so $g$ is a right inverse to $f$.
(1) $\implies$ (4): If $R : B\to A$ is an entire relation, then we defined a family of nonempty subsets $A_b=\{a\in A: bRa\}$. Letting $\tilde{g}:B\to A'$ be a choice function for this family, we have $\tilde{g}(b) = (a,b)$ for some $a$ with $bRa$, and we define $g:B\to A$ by $g(b)=a$, which gives a function contained in $R$.
$\blacksquare$
The relations of the statements in your question
(1) is the definition of projective, which is used in statement (5).
I just showed (2) and (4) are equivalent to choice.
(6) is equivalent to saying that any family of nonempty sets indexed by $B$ has a choice function, so it's choice for sets indexed by that set.
|
H: Vector Projection explanation
Please can someone explain why this is? I think I understand projection when it is a comparison of two vectors but below has 3.
Im revising and I am so slow.
The projection of the vector $$\begin{pmatrix} 3\\ -2\\ 1\end{pmatrix}$$ onto the plane spanned by the vectors
$$\begin{pmatrix}0\\ -1\\ 1\end{pmatrix}$$
and
$$\begin{pmatrix}0\\ 1\\ 1\end{pmatrix}$$
is
$$\begin{pmatrix}0\\ -2\\ 1\end{pmatrix}$$
AI: There are two standard approaches here.
Option 1: We can use an orthonormal basis and add together the separate projections. Note that the vectors $v_1 = (0,-1,1)$ and $v_2 = (0,1,1)$ are already orthogonal; to make them "normal" (length 1), we need only divide each vector by its length. So, we get the orthonormal basis $u_1 = \frac1{\sqrt{2}}(0,-1,1),u_2 = \frac 1{\sqrt{2}}(0,1,1)$. The projection of the vector $v = (3,-2,1)$ is
$$
(v \cdot u_1)u_1 + (v \cdot u_2) u_2 = \\
\frac 3{\sqrt{2}} u_1 + \frac {-1}{\sqrt{2}}u_2 = \\
(0,-\frac 32,\frac 32) + (0,-\frac 12 ,-\frac 12) =\\
(0,-2,1).
$$
Option 2: If $A$ is the matrix with columns $v_1,v_2$, then the desired projection can be calculated as $[A(A^TA)^{-1}A^T]v$.
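For what it's worth, here is a quick numerical check of both options in R (a sketch; `A` and `v` are just the data of this problem):
A <- cbind(c(0,-1,1), c(0,1,1))        # basis vectors as columns
v <- c(3,-2,1)
P <- A %*% solve(t(A) %*% A) %*% t(A)  # projection matrix from Option 2
P %*% v                                # gives (0,-2,1)
u1 <- c(0,-1,1)/sqrt(2); u2 <- c(0,1,1)/sqrt(2)
sum(v*u1)*u1 + sum(v*u2)*u2            # Option 1 gives the same vector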
|
H: The anti definite integral
Is there such a possibility as defining an anti (definite integral)?
For instance:
$$
S(a,b)=F(b)-F(a)=\int_a^b f(x)dx
$$
The question is what operation $D$ on $S$ produces $f(x)$:
$$
D[S(a,b)]=f(x)
$$
For the simple case of $f(x)=x$, it is pretty obvious that, say $D[b^2-a^2]=x$ because
$$
\int_a^bxdx=b^2-a^2 \implies D[b^2-a^2]=x+C
$$
where $C$ is a constant.
So at the minimum, absent a general procedure, one can map out manually a few cases.
But, how far can one take this; is there a special class of functions for which the anti (definite-integral) can be "well-defined"?
AI: Note that $S(a,b)$ is a number as is $b^2 - a^2$. (All definite integrals that exist are numbers.) This will be a significant problem. Here are two definite integrals on the same interval with the same value (hence, identical $S(a,b)$). \begin{align*}
0 &= \int_{-1}^1 0 \,\mathrm{d}x \\
0 &= \int_{-1}^1 x \,\mathrm{d}x
\end{align*}
So, is $D[0]$ supposed to be $0$ or $x$?
There is also a definitional problem. From "$D[S(a,b)]$", we should recognize that we get a different $D$ for every choice of interval, but that choice of interval is not present in our notation for $D$. This is fixable, write "$D_{a,b}$". Then we have
$$ D_{-1,1}[0] = 0 $$
and
$$ D_{-1,1}[0] = x \text{.} $$
So there is still an unresolvable ambiguity. (If that ambiguity were magically resolved, we could then approach the interesting question of what to do with $D_{a,b}[S(c,d)]$ where the only given relations among $a$,$b$,$c$, and $d$ are $a \leq b$ and $c \leq d$. No idea where that would go since we haven't magically resolved the ambiguity.)
There's a sneaky trick hiding in your example. The integral on $[a,b]$ of $x$ is the number $b^2 - a^2$. But rather than work with a definite integral, we were instead to consider the accumulation function
$$ f(x) = \int_a^x t \,\mathrm{d}t $$
where $a$ is a constant and $x$ is a variable. This isn't a number; it's the function $(1/2)(x^2 - a^2)$. (You forgot the one-half in your version.) It is feasible to get the integrand back from this accumulation function, just differentiate with respect to $x$. Notice that the choice of $a$ only vertically shifts the graph, so corresponds to adding a constant vertical offset, which is discarded under differentiation. So the choice of $a$ does not alter the result of differentiation.
The idea that you can differentiate an accumulation function to retrieve its integrand is (the first half of) the fundamental theorem of calculus.
|
H: deduction proof
Prove $p \wedge \neg p \vdash q$ for any propositional variables $p$ and $q$ without using disjunctive syllogism or excluded middle or $\neg$-elimination.
I can prove this easily using $\neg$-elimination: assume $p\wedge \neg p$ and $\neg q$. Then by $\wedge$-elimination, we have $p$ and again by $\wedge$-elimination, we have $\neg p.$ But then by $\neg$-elimination, we have $q$. However I'm not sure how to do it without using $\neg$-elimination. Will Peirce's law (i.e. $((A \to B)\to A) \to A$)) be useful?
Clarification: $\neg$-elimination is defined as follows:
Let $\sum, A, B$ be formulas. Then if $\sum, \neg A \vdash B$ and $\sum, \neg A \vdash \neg B,$ then $\sum \vdash A$. Informally, it resembles the "proof by contradiction" method.
AI: By the rule of conjunctive simplification, we have $p$ and $\neg p$. Now, let $q$ be some proposition. From $\neg p$, we can use disjunctive addition to derive $\neg p \lor q$. By conditional exchange, this is equivalent to $p \to q$. Finally, by modus ponens with $p$, we have $q$.
|
H: Estimation of the standard deviation for Power Law distribution
I've understood everything in the picture (source) below except for this equality:
$$\hat{\sigma} = \frac{\hat{\alpha} - 1}{\sqrt n}$$
Can someone please explain where does it come from?
AI: The likelihood is a Pareto distribution with shape $\alpha-1$, i.e. of the form
$$L(\alpha\mid x)=\frac{(\alpha-1)k^{\alpha-1}}{x^\alpha}\mathbf1_{x>k>0}\quad,\,\alpha>1$$
(I am denoting $x_{\min}$ as $k$ here).
The MLE of $\alpha$ obtained on the basis of the sample $(X_1,\ldots,X_n)$ is $$\hat\alpha=1+\frac{n}{T}\,,$$
where $$T=\sum_{i=1}^n \ln\left(\frac{X_i}{k}\right)$$
Note that $\ln (X_i/k)$ is exponential with mean $1/(\alpha-1)$, so that
$$E(T)=\frac n{\alpha-1}$$
and $$\operatorname{Var}(T)=\frac n{(\alpha-1)^2}$$
Now variance of $\hat\alpha$ is
$$\operatorname{Var}(\hat\alpha)=n^2\operatorname{Var}\left(\frac1T\right)$$
By a first order Taylor expansion we have for large $n$,
$$\operatorname{Var}\left(\frac1T\right)\approx \frac{\operatorname{Var}(T)}{(E(T))^4}=\frac{(\alpha-1)^2}{n^3}$$
Hence,
$$\operatorname{Var}(\hat\alpha)\approx \frac{(\alpha-1)^2}{n}$$
So a large sample estimate of the standard error is $$\widehat{\text{S.E.}(\hat\alpha)}=\frac{\hat\alpha-1}{\sqrt n}$$
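A quick simulation check of this large-sample formula in R (a sketch, with made-up illustrative values $\alpha=2.5$, $k=1$, $n=1000$; the sample is drawn by inverse-CDF sampling from the Pareto density above):
set.seed(1)
n <- 1000; alpha <- 2.5; k <- 1
est <- replicate(10^4, {
  x <- k * runif(n)^(-1/(alpha - 1))  # Pareto sample, shape alpha-1, scale k
  1 + n/sum(log(x/k))                 # the MLE of alpha
})
sd(est)              # aprx 0.047
(alpha - 1)/sqrt(n)  # theoretical value 0.0474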
|
H: Determine every polynomial with real coefficients such that $P(P(x))=[P(x)]^k$
I have doubts wrt my solution for the following problem. Determine every polynomial with real coefficients such that $P(P(x))=[P(x)]^k$
My guess is that $P(x)=x^k$. I started by saying: let $r_n \in \mathbb{C}$ be such that $P(r_n)=n$. Then $P(P(r_n))=P(r_n)^k$ implies $P(n)=n^k$. Since $P(n)=n^k$ for all $n\in \mathbb{Z}$, we have $P(x)\equiv x^k$.
My main point of concern is that I went from the Reals to Complex field to obtain $r_n$, how legitimate is this approach?
AI: This is a similar approach, without needing complex numbers, only that the range of $P$ is infinite.
If $P(x)$ is not constant, then $P(y)-y^k$ has infinitely many roots, namely all $y$ in the range of $P.$
So $P(x)-x^k$ must be the zero polynomial.
Then you also have two or three constant polynomial options for $P,$ when $k$ is even or odd, respectively.
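To spell out the constant case: $P\equiv c$ satisfies the equation iff
$$c=c^{k}\iff c\left(c^{k-1}-1\right)=0,$$
whose real solutions are $c\in\{0,1\}$ when $k$ is even and $c\in\{0,1,-1\}$ when $k$ is odd.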
|
H: You are trying to guess a three-letter password that uses only the letters A, E, I, O, U, and Y.
Letters can be used more than once. Find the probability that you guess the correct password.
AI: There are $6$ possible letters, meaning that there are
$$
6 \times 6 \times 6=216
$$
possible combinations. If you randomly guess a password, then the chance of you correctly guessing is $\frac{1}{216}$.
|
H: Non-isomorphic graphs with 2 vertices and 3 edges
Are there any non-isomorphic graphs with 2 vertices and 3 edges? From my understanding of what non-isomorphic means, I don't think there are any, but I'm not sure.
AI: With only two vertices you must allow multigraphs, since a simple graph on $2$ vertices has at most $1$ edge. If loops are allowed, compare three parallel edges between the two vertices with a single edge plus a loop at each vertex: both have $2$ vertices and $3$ edges, but they are not isomorphic (one has loops, the other does not). If loops are not allowed, the only option is three parallel edges, so in that setting all such graphs are isomorphic.
|
H: Prove that $\lim_{x \to 0^{+}}f(x) = \infty$ iff $\lim_{x \to \infty}f(\frac{1}{x}) = \infty$ Explanation of solution and concept.
Prove that $$\lim_{x \to 0^{+}}f(x) = \infty \\ \text{iff} \\ \lim_{x \to \infty}f\bigg(\frac{1}{x}\bigg) = \infty$$
I'm confused by the reasoning behind a solution that I found for this question from Spivak. This solution proves the "if" part (i.e. the $\Leftarrow$ direction).
This is the solution presented:
Here is the start of my problems. If we are going to prove this direction we are assuming $$\lim_{x \to \infty}f\bigg(\frac{1}{x}\bigg) = \infty$$
is true. But the definition of this object is: For all $N > 0$, there exists a $M > 0$ s.t for all $x$ if $x > M$ then $f(\frac{1}{x}) > N$
So how is it that $f(1/x) < N$ for $x > M$ is satisfied? That is not the definition of the above object. Then the following manipulation that "If $0 < x < \frac{1}{M}$, then $x > M$"....that's not true either, it would be $M < \frac{1}{x}$ if we manipulated it.
As can be seen I'm having difficulty reconciling how to fit one definition into the other. Some clarification on what I'm not understanding about this would be helpful.
AI: Do it step by step, goal: fitting the definitions.
First of all, what you want to prove is $\lim_{x \to 0^+} f(x) = \infty$. Then by the definition, you need to show that
\begin{equation}
\forall M > 0, \exists N > 0 \quad s.t. \quad 0 < x \leq 1/N \implies f(x) \geq M
\end{equation}
On the other hand, what you have is
$$
\forall M > 0, \exists N > 0 \quad s.t. \quad y \geq N \implies f(1/y) \geq M
$$
Here, $y \geq N \iff 1/y \leq 1/N$. Then let $x = 1/y$, you get
\begin{align}
\forall M > 0, \exists N > 0 \quad s.t. \quad 0 < x \leq 1/N \implies f(x) \geq M
\end{align}
You get what you want.
|
H: Writing complex number $c$ in form $e^{iz}$
Let $c\in \mathbb C$. Then we can write $c = r\cdot e^{i\varphi},$ where $\varphi\in [0,2\pi)$ and $r\in [0, \infty)$.
My professor stated, however, that if $z \in \mathbb C$, then $c = e^{i\cdot z}$ holds as well, i.e., there exists a representation for $c$ in the "pure" form $c = e^{iz}$ with $z\in \mathbb C$, even though the modulus of $c$, i.e. $|c|$, might be different from $1$. Could sb please prove this?
Thanks!
AI: Only if $r \ne 0$.
Then $re^{i\varphi} = e^{\ln r}e^{i\varphi} = e^{\ln r + i\varphi} = e^{i\frac {\ln r+ i\varphi}i}$ for $z = \frac {\ln r+ i\varphi}i = \varphi - i\ln r$, if you want.
This assumes we are allowing $e^z= e^{Re(z) + iIm(z)} = e^{Re(z)}e^{iIm(z)}=e^{Re(z)}(\cos (Im(z)) + i\sin (Im(z)))$ to be true by fiat definition.
Which we are.
|
H: Solve the following equation in integers $x,y:$ $x^2+6xy+8y^2+3x+6y=2.$
Question: Solve the following equation in integers $x,y:$ $$x^2+6xy+8y^2+3x+6y=2.$$
Solution: For some $x,y\in\mathbb{Z}$ $$x^2+6xy+8y^2+3x+6y=2\\\iff x^2+2xy+4xy+8y^2+3x+6y=2\\\iff x(x+2y)+4y(x+2y)+3(x+2y)=2\\\iff(x+4y+3)(x+2y)=2.$$
Now if $(x+4y+3)(x+2y)=2$, then either $$\begin{cases} x+4y+3=1\\ x+2y=2\end{cases}\text{ or }\begin{cases} x+4y+3=2\\ x+2y=1\end{cases}\text{ or }\begin{cases} x+4y+3=-1\\ x+2y=-2\end{cases}\text{ or }\begin{cases} x+4y+3=-2\\ x+2y=-1\end{cases}.$$
We have $$\begin{cases} x+4y+3=1\\ x+2y=2\end{cases}\iff (x,y)=(6,-2), \\\begin{cases} x+4y+3=2\\ x+2y=1\end{cases}\iff (x,y)=(3,-1), \\\begin{cases} x+4y+3=-1\\ x+2y=-2\end{cases}\iff (x,y)=(0,-1),\\\begin{cases} x+4y+3=-2\\ x+2y=-1\end{cases}\iff (x,y)=(3,-2).$$
Now since, all the four pairs $(6,-2),(3,-1),(0,-1),(3,-2)$ satisfies the integer equation $(x+4y+3)(x+2y)=2$, thus we can conclude that $(x+4y+3)(x+2y)=2\iff (x,y)=(6,-2),(3,-1),(0,-1),(3,-2).$
Hence, we can conclude that the integer equation $x^2+6xy+8y^2+3x+6y=2$ is satisfied if and only if $(x,y)=(6,-2),(3,-1),(0,-1),(3,-2)$, and we are done.
Is the solution correct and rigorous enough? And, I am always confused while solving equations regarding the usage of the if and only if arguments, which I feel is very necessary in order to have a complete and rigorous solution, but I rarely find it's usage in any book while solving equations of any kind. So, is it necessary? Also, is there a better solution than this?
AI: Another solution is to let $k=2y$ and get
$$x^2+3xk+2k^2+3x+3k=2$$
$$\iff (x^2+2xk+k^2)+(xk+k^2)+3(x+k)=2\iff (x+k)^2+(x+k)(k+3)=2$$
So now we can let $m=x+k$ and get
$$m(m+k+3)=2 \implies (m,k) \in \{(1,-2),(-1,-4),(2,-4),(-2,-2)\}$$
Noting that $k=\frac{2}{m}-m-3$ in our calculations. This gives (after dividing the solutions of $k$ by $2$ and putting $x=m-k$)
$$(x,y) \in \{(3,-1),(3,-2),(6,-2),(0,-1)\}$$
This is much simpler.
Or else, I guess we can stop at the point where
$$(x+4y+3)(x+2y)=2$$
and let $a$ be an integer such that $a=x+2y$, this simplifies a lot:
$$a(a+2y+3)=2$$
And we note that $a \in \{1,-1,2,-2\}$ and $y=\frac{1}{a}-\frac{a}{2}-\frac{3}{2}$
$$\implies (a,y) \in \{(1,-1),(-1,-2),(2,-2),(-2,-1)\}$$
$$x=a-2y\implies (x,y) \in \{(3,-1),(3,-2),(6,-2),(0,-1)\}$$
That makes stuff a lot easier.
|
H: Proving $ \sum _{k=0} ^m \binom nk \binom{n-k}{m-k} = 2^m \binom {n}{m}$.
Give an algebraic and a combinatorial proof for the following identity:
$$ \sum _{k=0} ^m \binom nk \binom{n-k}{m-k} = 2^m \binom {n}{m}.$$
For the combinatorial argument, use the analogy of $n$ party guests, where $m$ of them describe themselves as either vegetarian or vegan (but not both).
After proving the identity using algebraic transformations, I'm unable to find a combinatorial argument for it. For the right hand side, if we multiply $\binom nm $ by $2^n$, we get the Pascal-triangle but with each row multiplied by $2^n$, but here we're multiplying by $2^m$. What does this mean? How does the analogy with the party guests work? Any help would be much appreciated.
AI: It’s not the best analogy, since it requires us to assume that we can actually choose which guests are vegan and which are vegetarian, but I’ll go ahead and use it.
We can first choose $m$ guests to be the vegetarians and vegans; this can be done in $\binom{n}m$ ways. Once they are chosen, we can pick some subset of them to be the vegans; this can be done in $2^m$ ways, since there are $2^m$ subsets of an $m$-element set. Thus, the righthand side does indeed count the ways to choose $m$ of the guests and split them into vegans and vegetarians.
Alternatively, we could first choose $k$ guests to be vegan, where $0\le k\le m$, and then we could choose $m-k$ of the remaining $n-k$ guests to be vegetarians. Thus, there are $\binom{n}k\binom{n-k}{m-k}$ ways to choose $k$ vegans and $m-k$ vegetarians. If we sum over all values of $k$ from $0$ through $m$, this gives us every possible way of choosing $m$ guests and splitting them into vegans and vegetarians, so the lefthand side counts the same thing as the righthand side.
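For completeness, the algebraic counterpart of this double count is the subset-of-a-subset identity: for $0\le k\le m\le n$,
$$\binom{n}{k}\binom{n-k}{m-k}=\frac{n!}{k!\,(m-k)!\,(n-m)!}=\binom{n}{m}\binom{m}{k},$$
so summing over $k$ gives
$$\sum_{k=0}^{m}\binom{n}{k}\binom{n-k}{m-k}=\binom{n}{m}\sum_{k=0}^{m}\binom{m}{k}=2^{m}\binom{n}{m}.$$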
|
H: "Natural" equivalence of categories?
Let $\mathbf{C}$ be a category and $F,G:\mathbf{C}\to\mathbf{Cat}$ be category-valued functors on $\mathbf{C}$. Suppose there is a family of equivalences of categories
$$(\Phi_C:FC\simeq GC)_{C\in\mathbf{C}}\tag{1}$$
such that for all $f:C\to C'$ in $\mathbf{C}$, there is a natural isomorphism
$$\Phi_{C'}\circ Ff\cong Gf\circ\Phi_C\tag{2}$$
that is, the following diagram commutes up to natural isomorphism:
$\require{AMScd}$
\begin{CD}
FC @>{\Phi_C}>> GC\\
@V Ff V V\cong @VV Gf V\\
FC' @>>{\Phi_{C'}}> GC'
\end{CD}
Is there a standard name for such a $\Phi$ (or for something similar to such a $\Phi$)? I've looked around but haven't been able to find it.
Note it is just a generalization of a natural isomorphism for category-valued functors, which allows equivalence instead of isomorphism in (1) and natural isomorphism instead of equality in (2). It captures the intuitive notion of an equivalence of categories which is "natural" in that it respects functors between the categories.
As an example, consider the functor
$$\mathbf{Sets}^{(-)}:\mathbf{Sets}^{\mathrm{op}}\to\mathbf{Cat}\tag{3}$$
which maps a set $I$ to the functor category $\mathbf{Sets}^I$ of $I$-indexed families of sets and maps a function $f:J\to I$ to the "reindexing functor" $\mathbf{Sets}^f:\mathbf{Sets}^I\to\mathbf{Sets}^J$, and the functor
$$(-)^*:\mathbf{Sets}^{\mathrm{op}}\to\mathbf{Cat}\tag{4}$$
which maps the set $I$ to the slice category $\mathbf{Sets}/I$ and maps the function $f:J\to I$ to the pullback functor $f^*:\mathbf{Sets}/I\to\mathbf{Sets}/J$. The functors (3) and (4) are related by the above notion, which shows that the equivalence
$$\mathbf{Sets}^I\simeq\mathbf{Sets}/I$$
is "natural" in the set $I$.
Any pointers are appreciated.
AI: If your commutativity natural isomorphisms are coherent with the composition and identities, then nlab calls this a pseudonatural equivalence, which you can find towards the bottom of the linked page.
Since this would otherwise be essentially a link only answer, let me add a couple comments. First of all, the natural setting for this is 2-category theory and 2-functors, so we should regard $\mathbf{C}$ as a 2-category which only has identity 2-morphisms, and then our functors become (strict) 2-functors, though if you wanted, you could now generalize to lax/oplax 2-functors.
Next, I'd like to add a point on coherence, and why we might expect it/want it.
Suppose we have $f:c\to c'$, $g:c'\to c''$, then we get
$$
\require{AMScd}
\begin{CD}
Fc @>Ff>> Fc' @>Fg>> Fc'' \\
@V\Phi_c VV \cong_{f} @V\Phi_{c'}VV \cong_g @VV\Phi_{c''}V
\\
Gc @>Gf>> Gc' @>Gg>> Gc'' \\
\end{CD}
$$
We would expect that when we paste $\cong_f$ and $\cong_g$ together like this that we get
back $\cong_{gf}$, the natural isomorphism making the outer square commute:
$$
\require{AMScd}
\begin{CD}
Fc @>F(gf)>> Fc'' \\
@V\Phi_c VV \cong_{gf} @VV\Phi_{c''}V
\\
Gc @>G(gf)>> Gc''. \\
\end{CD}
$$
Otherwise, if the commutativity natural isomorphisms are arbitrary, we can't make much use of the concept, since we can't relate them with the category structure.
|
H: How to find limit of a sequence of functions $f_n(x)=\frac{x^n e^x} {n+1}$?
How to find limit of a sequence of functions $f_n(x)=\frac{x^n e^x} {n+1}$? $$\lim_{n\to \infty} \frac{x^n e^x}{n+1}$$
I have no idea how to evaluate this limit. I thought maybe I should rewrite $e^x$ using $\sum_{n=0}^{\infty} \frac{x^n}{n!}$ but I am not sure whether I can do that or whether it would even help. If it is possible to evaluate the limit without L'Hospital's rule that would be the prefered way but I actually cannot see how L'Hospital would help with this problem.
AI: We get the same result if we compute
$$e^x\lim_{n\to+\infty}\frac{x^n}{n}$$
If $|x|\le 1$, it gives zero.
If $ x>1$, we write it as
$$\lim_{n\to+\infty}e^{n(\ln(x)-\frac{\ln(n)}{n})}=+\infty$$
If $x<-1$, the limit doesn't exist,
since for even indices we find $+\infty$, and for odd ones it gives $-\infty$.
|
H: Proof of dominated convergence theorem
I was going through the proof of the Dominated Convergence Theorem.
Now suppose that $(f_n)$ is a sequence of measurable functions such that $\lvert f_n\rvert \le g$ for all $n$, where $g$ is integrable on $\Bbb{R}$,
and that $f = \lim_{n}f_n$ almost everywhere.
We can show that $(g+f_n)$ is a sequence of non-negative measurable functions.
Then by Fatou's lemma, we have that $\int\liminf_n(g+f_n)\,dx \le \liminf_n\int(g+f_n)\,dx$.
Now from here, we can obtain that
$\int(g+f)\,dx \le \int g\,dx+\liminf_n\int f_n\,dx$. How?
Please explain this last step. I know that since both are integrable, the integral can be separated, but how is the liminf separated on the right-hand side?
AI: Inside the liminf, $\int g$ is just a number. So what it says is that
$$
\liminf_n(c+f_n)=c+\liminf _nf_n,
$$
where $c=\int g$.
|
H: Are infinite subsets of the rationals definable?
This is really two questions in one. Consider the structure $(\mathbb{Q},<)$. We adjoin to it a subset $S$ of $\mathbb{Q}$. Is there a first-order formula $F$ in the expanded language such that $F$ is true precisely when $S$ is an infinite subset of $\mathbb{Q}$? If it is not definable in that language alone, is it definable in the expanded language $(+,-,*,0,1,<)$?
AI: Hagen von Eitzen has answered the second question (and note that the answer is exactly the same idea as David C. Ullrich's answer to your previous question).
The answer to the first question is no. Fix an irrational number $\alpha\in \mathbb{R}$. Let $(a_i)_{i\in \mathbb{N}}$ be a strictly increasing sequence of rational numbers approaching $\alpha$ from below, and let $(b_i)_{i\in \mathbb{N}}$ be a strictly decreasing sequence of rational numbers approaching $\alpha$ from above. Let $S = \{a_i\mid i\in \mathbb{N}\}\cup \{b_i\mid i\in \mathbb{N}\}$. The motivation for picking $S$ this way is that it is a discrete linear order with endpoints and with no limit point in $\mathbb{Q}$, just like every finite subset of $\mathbb{Q}$.
I claim that $S$ is indistinguishable from a finite set inside $(\mathbb{Q};<)$. More precisely, for any formula $\varphi$, there exists an $N$ such that $(\mathbb{Q};<,S)\models \varphi$ if and only if $(\mathbb{Q};<,T)\models \varphi$, where $T$ is any subset of $\mathbb{Q}$ of size $N$. This can be proven using the Ehrenfeucht-Fraïssé game.
Here's another proof that trades out the Ehrenfeucht-Fraïssé game for Łoś's theorem and countable categoricity of $(\mathbb{Q},<)$.
Suppose for contradiction that $\varphi$ is a sentence in the language $L' = \{<,S\}$ such that $(\mathbb{Q};<,S)\models\varphi$ if and only if $S$ is infinite.
For each natural number $n$, let $Q_n = (\mathbb{Q};<,T_n)$, where $T_n$ is a subset of $\mathbb{Q}$ of size $n$. So $Q_n\models \lnot \varphi$. Let $U$ be a non-principal ultrafilter on $\mathbb{N}$, and let $Q^\star$ be the ultraproduct $\prod_{n\in \mathbb{N}} Q_n / U$. In $Q^\star$, the interpretation of the relation symbol $S$ is infinite, but $Q^\star\models \lnot \varphi$, by Łoś's theorem.
Now let $(Q;<,S)$ be a countable elementary substructure of $Q^\star$, so again $S$ is infinite and $Q\models \lnot \varphi$. But the reduct $(Q;<)$ to the order language is a countable dense linear order without endpoints, so it is isomorphic to $(\mathbb{Q};<)$. Let $f\colon Q\to \mathbb{Q}$ be the isomorphism, and let $S' = f(S) \subseteq \mathbb{Q}$, the image of $S$ under the isomorphism. Then $(Q;<,S)\cong(\mathbb{Q};<,S')$, so $(\mathbb{Q};<,S')\models \lnot \varphi$, contradicting the fact that $S'$ is infinite.
|
H: Tensor product of two direct factors is a direct factor of the tensor product
Let $A$ be a ring, $E$ a right $A$-module, $F$ a left $A$-module, $M$
a submodule of $E$ and $N$ a submodule of $F$. Suppose that $M$ is a
direct factor of $E$ and $N$ is a direct factor of $F$. Then the
canonical homomorphism $M\otimes_A N\rightarrow E\otimes_A F$ is
injective and the image of $M\otimes_A N$ under this homomorphism is a
direct factor of the $\mathbf{Z}$-module $E\otimes_A F$.
Let $M'$,$N'$ be submodules of $E, F$, respectively, such that $E$ is a direct sum of $M,M'$ and $F$ is a direct sum of $N,N'$. Let $\phi:M\oplus M'\rightarrow E$ and $\psi:N\oplus N'\rightarrow F$ be the associated $A$-linear isomorphisms.
Let $i:M\rightarrow E$ and $j:N\rightarrow F$ be the canonical injections. On the other hand, let $p:M\oplus M'\rightarrow M$ and $q:N\oplus N'\rightarrow N$ be the canonical surjections. Then $(p\circ\phi^{-1})\otimes(q\circ\psi^{-1})$ is a retraction of $i\otimes j$; thus, $i\otimes j$ is injective.
Furthermore, the mapping
$$g:(M\oplus M')\otimes_{A}(N\oplus N')\rightarrow(M\otimes_{A}N)\oplus(M\otimes_{A}N')\oplus(M'\otimes_{A}N)\oplus(M'\otimes_{A}N')$$
such that $g((m,m')\otimes(n,n'))=(m\otimes n,m\otimes n',m'\otimes n, m'\otimes n')$, for $(m,m')\in M\oplus M'$ and $(n,n')\in N\oplus N'$, is a $\mathbf{Z}$-module isomorphism.
I now have to show that there exists a sub-$\mathbf{Z}$-module $X$ of $E\otimes_A F$ such that $E\otimes_A F\simeq\text{Im}(i\otimes j)\oplus X$ via the canonical mapping. I know that the mapping
$$\phi^{-1}\otimes\psi^{-1}:E\otimes_A F\rightarrow(M\oplus M')\otimes_{A}(N\oplus N')$$
is a $\mathbf{Z}$-module isomorphism. This means that
$$E\otimes_A F\simeq(M\otimes_{A}N)\oplus(M\otimes_{A}N')\oplus(M'\otimes_{A}N)\oplus(M'\otimes_{A}N').$$
However, I am not sure how to proceed at this point. Any suggestions?
Edit:
The sequence of $\mathbf{Z}$-linear mappings
$$0\xrightarrow{}M\otimes_AN\xrightarrow{i\otimes j} E\otimes_A F\xrightarrow{}(E\otimes_A F)/\text{Im}(i\otimes j)\xrightarrow{}0$$ is exact. Since $(p\circ\phi^{-1})\otimes(q\circ\psi^{-1})$ is a $\mathbf{Z}$-linear retraction of $i\otimes j$, it follows that $\text{Im}(i\otimes j)$ is a direct factor of $E\otimes_A F$. Is this enough?
AI: To say that $M$ is a direct summand of $E$ is equivalent to the existence of a homomorphism $p:E\to M$ such that $p\circ i=1_M$, that is, $M\hookrightarrow E\xrightarrow p M$ is the identity on $M$.
Similarly, $q\circ j=1_N$ that's $N\hookrightarrow F\xrightarrow q N$ is the identity on $N$.
Since $(p\otimes q)\circ(i\otimes j)=(p\circ i)\otimes(q\circ j)$, the composition
$$M\otimes N\to E\otimes F\xrightarrow{p\otimes q}M\otimes N$$
is the identity on $M\otimes N$, hence $M\otimes N\to E\otimes F$ is injective and $M\otimes N$ is a direct summand of $E\otimes F$.
|
H: Recalculate Normal Vector without rotation matrix
I have 3 points in 3 dimensions (P0, P1, P2) and a normalised vector N1, that lies on the plane constructed by those points and is perpendicular to the line P0-P1.
I want to find the normal vector N2 perpendicular to the line P0-P2, lying on the same plane and facing the same direction as N1.
I think i know how to construct N2 by calculating the angle at P0, creating a rotation matrix and transforming N1 accordingly.
But is it possible to construct N2 using simple operations like dot or cross product, without calculating and using the angle?
sketch
AI: Your vector $n_2$ can be expressed as
$$n_2=\alpha (P_1-P_0)+\beta(P_2-P_0).$$
Solving
$$
n_2\cdot(P_2-P_0) = 0,\\
n_2\cdot n_2 = 1,
$$
with condition $n_2\cdot n_1 \ge 0$, will give you $\alpha,\beta$.
Alternatively, you can construct $((P_2-P_0)\times (P_1-P_0))\times(P_2-P_0)$, normalize it and invert if $n_1\cdot n_2<0$.
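A small R sketch of the cross-product construction (the points and $n_1$ here are made-up illustrative data, not from the question):
cross <- function(a, b) c(a[2]*b[3]-a[3]*b[2], a[3]*b[1]-a[1]*b[3], a[1]*b[2]-a[2]*b[1])
P0 <- c(0,0,0); P1 <- c(1,0,0); P2 <- c(1,1,0)  # example points in the z=0 plane
n1 <- c(0,1,0)                                  # in-plane, perpendicular to P0-P1
n2 <- cross(cross(P2 - P0, P1 - P0), P2 - P0)
n2 <- n2/sqrt(sum(n2^2))                        # normalize
if (sum(n1*n2) < 0) n2 <- -n2                   # flip so it faces the same way as n1
n2                                              # (-1,1,0)/sqrt(2)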
|
H: Are morphisms in a slice category $c/C$ inherited from the original category $C$?
Going through Emily Riehl's Category Theory in Context and something keeps tripping me up.
The notation for slice categories, which reminds me of factor group notation in group theory, indicates to me that the morphisms in $c/C$ are in fact morphisms inherited from $C$. As a more specific example/question:
If G is a fixed group in Group and we have homomorphisms $f : G \xrightarrow{} H$ and $g : G \xrightarrow{} K$ in Group, and then we have G/Group where morphisms are $h : H \xrightarrow{} K$, is $h$ also a group homomorphism or is it a differently structured morphism?
I don't think it's explicitly said in the book whether the morphisms are inherited from the original category.
AI: A morphism in this category should be a morphism $h: H \to K$ in the original category fitting into a commutative triangle with the given morphisms $G \to H$ and $G \to K$.
(i.e., yes, we inherit the morphisms, but we don't inherit all of them!)
|
H: Expected number of matching pairs from a random list
Question
A random list of length $n$ (even number) consists of $n_1$ stars $\star$ and $n_2$ squares $\square$. Suppose we randomly put these shapes into $n/2$ pairs, denote:
$X_1$ to be the number of matching pairs of $\star$
$X_2$ to be the number of matching pairs of $\square$
What is the expected number of $X_1$ and $X_2$?
Example
For example, if $n=10$, $n_1=7, n_2=3$, suppose we have a random pattern of 5 pairs
\begin{align*}
\star ~\square \\
\star ~\star \\
\square ~\square \\
\star ~\star \\
\star ~\star
\end{align*}
Because there are 3 matching pairs for $\star$ and 1 matching pair for $\square$, we have $X_1=3, X_2=1$. If we repeat this process many times, we should have different patterns of pairs, and produce different values of $X_1$ and $X_2$. I'm wondering how to calculate their expected values $E[X_1]$ and $E[X_2]$, which should be functions on $n_1$ and $n_2$.
The problem comes from my research project. Thank you in advance.
AI: The chance that the first pair is both stars is $\frac {n_1(n_1-1)}{n(n-1)}.$ By symmetry, that is the chance any given pair is both stars if we ignore the distribution of the rest of the pairs. By the linearity of expectation, the expected number of pairs of stars is $\frac n2\cdot\frac {n_1(n_1-1)}{n(n-1)}$. Change the subscript to get the expected number of pairs of squares.
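A Monte Carlo check in R using the numbers from the example ($n=10$, $n_1=7$, $n_2=3$), simulating a random pairing by shuffling the list and pairing consecutive entries:
set.seed(1)
n1 <- 7; n2 <- 3; n <- n1 + n2
x1 <- replicate(10^5, {
  shapes <- sample(c(rep("star", n1), rep("square", n2)))  # random order
  pairs <- matrix(shapes, ncol = 2, byrow = TRUE)          # n/2 pairs
  sum(pairs[,1] == "star" & pairs[,2] == "star")           # matching star pairs
})
mean(x1)                         # aprx 2.333
(n/2) * n1*(n1-1)/(n*(n-1))      # exact: 7/3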
|
H: $\{x\in\mathbb{R}:m(E\cap(x-k,x+k))\geq k, \forall k>0\}$ is Lebesgue measurable
Consider a Lebesgue measurable set $E\subset\mathbb{R}$. Prove that the set $\{x\in\mathbb{R}:m(E\cap(x-k,x+k))\geq k, \forall k>0\}$ is Lebesgue measurable.
I am just a bit confused on where to begin. It looks like I can just apply the open set definition of measurability. That is, there exists an open set $O$ with $E\subset O$ and $m(O-E)\leq\epsilon$. But this would show that $E$ is measurable - and we already know that $E$ is. But wouldn't the set in question just be an open interval in $\mathbb{R}$, which we know is measurable? I feel like I am missing something quite simple....
AI: Fix $k$. Consider the function $$f_k(x)=m(E\cap(x-k,x+k)).$$ This function is continuous: using that $m(A)-m(B)=m(A\setminus B)-m(B\setminus A)$ for measurable $A,B$ and assuming $x<y$,
$$
|f_k(y)-f_k(x)|=\left|m(E\cap[x+k, y+k))-m(E\cap(x-k, y-k])\right|\leq2|y-x|.
$$
So $f_k$ is continuous, and hence each set $f_k^{-1}[k,\infty)$ is closed. Therefore
$$
\{x\in\mathbb{R}:m(E\cap(x-k,x+k))\geq k, \forall k>0\}=\bigcap_{k>0}f_k^{-1}[k,\infty)
$$
is closed, being an intersection of closed sets (this takes care of the intersection running over uncountably many $k>0$, where measurability of the individual sets alone would not suffice), and in particular it is Lebesgue measurable.
|
H: Finitely generated projective resolution
Let $K$ be a field, $A$ be a finite dimensional $K$-algebra and $M$ be a finitely generated $A$-module. Is it true that $M$ admits a projective resolution by finitely generated projective $A$-modules?
AI: As $A$ is a finite algebra over $K$ it is noetherian. As $M$ is finitely generated there is a surjection $A^{\oplus n} \longrightarrow M$. $A^{\oplus n}$ is noetherian as $A$ is. Let $N$ be the kernel of this map. By noetherianness, it is finitely generated, so there is a surjection $A^{\oplus m} \longrightarrow N$ and hence an exact sequence $A^{\oplus m} \longrightarrow A^{\oplus n} \longrightarrow M \longrightarrow 0$. Repeat this process to get a projective (in fact free) resolution by finitely generated modules. Note that we didn't need the full strength of the assumption that $A$ is finite over $K$ - only that it was noetherian.
|
H: Prove that a function $u: u= \ln\|x\|{_{2}}$ has $\Delta u = 0$.
I had a similar case some time ago and following the advices there I tried to solve this one too.
I tried to find its first partial derivative and I got:
$\frac{\partial}{\partial x_{i}}=\frac{1}{2\cdot \|x\|_{2}^{1/2}}$
Now i have to find the second derivative of this and I got $\frac{1+4\cdot (\sum x_{i}^{2})^{5}}{8\cdot(\sum x_{i}^{2})^{5/4} }$.
And now I am stuck,I have no idea if my calculations are okay or how can I continue to solve this.
I would be incredibly thankful for some help.
Annalisa
AI: You have, for $x\in\mathbb{R}^d$,
$$
u(x) = \ln \lVert x\rVert_2 = \ln \sqrt{\sum_{i=1}^d x_i^2} = \frac{1}{2}\ln \sum_{i=1}^d x_i^2
$$
from which, for $x\in\mathbb{R}^d$ and $1\leq i\leq d$,
$$
\frac{\partial}{\partial x_i}u(x) = \frac{1}{2}\frac{\partial}{\partial x_i}\ln \sum_{j=1}^d x_j^2= \frac{1}{2}\cdot\frac{2x_i}{\sum_{j=1}^d x_j^2}
= \frac{x_i}{\lVert x\rVert_2^2}\,.
$$
From there,
$$
\frac{\partial^2}{\partial x_i^2}u(x) = \frac{\partial}{\partial x_i}\frac{x_i}{\lVert x\rVert_2^2} = \frac{1\cdot\lVert x\rVert_2^2-x_i\cdot 2x_i}{\lVert x\rVert_2^4}= \frac{\lVert x\rVert_2^2-2x_i^2}{\lVert x\rVert_2^4}
$$
so that
$$
\Delta u(x) =\sum_{i=1}^d\frac{\partial^2}{\partial x_i^2}u(x) = \frac{d\lVert x\rVert_2^2-2\lVert x\rVert_2^2}{\lVert x\rVert_2^4}= \frac{d - 2}{\lVert x\rVert_2^2}
$$
This is only $0$ if $d=2$: in the plane, $u(x)=\ln\lVert x\rVert_2$ is the classical example of a function harmonic on $\mathbb{R}^2\setminus\{0\}$, while for $d\neq 2$ the Laplacian never vanishes there.
|
H: Is there a simpler way to solve the differential equation $y''+2xy'+(x^2-1)y=0$
A student asked me to solve this differential equation
$$y''+2 x y'+(x^2-1)y=0$$
Is there a method simpler than power series?
AI: We want to express this as the second derivative of some function of $x$ and $y$. Via the ansatz introduction of $e^{x^2/2}$ (motivated because the coefficients of the derivatives of $y$ are decreasing linearly in degree, and then just guessing), we note that $$\frac{d^2}{d x^2}\left(e^{x^2/2} y\right) = e^{x^2/2} \left(y'' + 2x y' + (x^2 + 1) y\right).$$ Hence if $y$ solves the given equation, then $w = e^{x^2/2}y$ satisfies $$w'' = e^{x^2/2}\left(\underbrace{y''+2xy'+(x^2-1)y}_{=0}+2y\right)=2w.$$ Clearly the solutions to $w''=2w$ are $w = c_1 e^{\sqrt{2}x} + c_2 e^{-\sqrt{2}x}$, so the solutions of the original differential equation are $$y = e^{-x^2/2}\left(c_1 e^{\sqrt{2}x} + c_2 e^{-\sqrt{2}x}\right).$$ (A quick check on the sign: with $e^{-x^2/2}$ in place of $e^{x^2/2}$ one gets the identity $\frac{d^2}{dx^2}\left(e^{-x^2/2}y\right)=e^{-x^2/2}\left(y''-2xy'+(x^2-1)y\right)$, which instead solves the companion equation $y''-2xy'+(x^2-1)y=0$, with solutions $y=e^{x^2/2}(c_1x+c_2)$.)
|
H: Solve $X^3 = A$ in $M_2(\mathbb{R})$ where the matrix $A$ is given.
Consider the matrix:
$$A =
\begin{pmatrix}
3 & -2 \\
6 & -4 \\
\end{pmatrix}$$
I have to solve the equation:
$$X^3 = A$$
where $X \in M_2(\mathbb{R})$.
First, I tried using the notation:
$$X =
\begin{pmatrix}
a & b \\
c & d \\
\end{pmatrix}$$
where $a, b, c, d \in \mathbb{R}$. I raised $X$ to the third power and then equated it with $A$ hoping to get something nice. Surprise, surprise, I didn't.
Then I noticed that the determinant of $A$ is $0$ and since $X^3 = A$, that means that the determinant of $X$ is also $0$. So we have the relation:
$$ad = bc$$
in the matrix $X$. But I don't see how I could use this further or even if I should at all.
So how should I approach this exercise?
AI: Hint
$A$ has two eigenvalues $\lambda_1=0$ ($\because \det(A)=0$) and $\lambda_2=-1$ ($\because \text{tr}(A)=-4+3=-1 =\text{sum of eigenvalues}$).
So $A$ is diagonalizable, i.e. we can write $A=P\begin{bmatrix}0&0\\0&-1\end{bmatrix}P^{-1}=PDP^{-1}$.
Note that $\sqrt[3]{D}=D$ itself. So we can have $X=PDP^{-1}=A$.
Another way
Observe that the characteristic polynomial of $A$ is $\lambda^2+\lambda=0$. Thus $A^2+A=0$, i.e. $A^2=-A$, which means $A^3=-A^2=A$.
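Indeed, one can check directly that
$$A^2=\begin{pmatrix}3&-2\\6&-4\end{pmatrix}\begin{pmatrix}3&-2\\6&-4\end{pmatrix}=\begin{pmatrix}-3&2\\-6&4\end{pmatrix}=-A,$$
so $A^3=-A^2=A$, and $X=A$ really does satisfy $X^3=A$.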
|
H: $\frac{1}{4} (a^2+ 3 b^2)$ is of the form $(c^2+ 3 d^2)$
If $ 2 \mid (a^2+ 3 b^2)$ and $(a,b)=1$ then $4\mid (a^2+ 3 b^2)$. How can I show $\frac{1}{4} (a^2+ 3 b^2)$ is also of the form $(c^2+ 3 d^2)$?
Here, clearly $ a$ and $b$ are both odd.
Let $a=2m+1$ and $ b=2n+1$
$\implies\frac{1}{4} (a^2+ 3 b^2)= m^2 + m+1 +3n^2 +3n$.
I am stuck here. Can anyone please help how to approach from here. Any help would be appreciated. Thanks in advance.
AI: \begin{eqnarray*}
\frac{(c+3d)^2+3(c-d)^2}{4} =c^2+3d^2.
\end{eqnarray*}
Edit:
If $ a \equiv b \pmod{4}$ then
\begin{eqnarray*}
\frac{a^2+3b^2}{4} = \left( \frac{a+3b}{4} \right)^2 +3 \left( \frac{a-b}{4} \right)^2.
\end{eqnarray*}
If $ a \equiv -b \pmod{4}$ then
\begin{eqnarray*}
\frac{a^2+3b^2}{4} = \left( \frac{a-3b}{4} \right)^2 +3 \left( \frac{a+b}{4} \right)^2.
\end{eqnarray*}
and the values in the brackets on the RHS will be whole numbers.
|
H: finding convergence of integral having exponential and cosine terms
Finding whether the series $$\int^{\infty}_{1}e^{x}\cos (x)\cdot x^{-\frac{1}{2}}dx$$ converges or diverges.
What i try::
$$I=\int^{\infty}_{1}e^{x}\cos(x)\cdot x^{-\frac{1}{2}}dx\leq \int^{\infty}_{1}e^{x}\cdot x^{-\frac{1}{2}}dx$$
Now put $x=t^2$ and $dx=2tdt$
$$I<2\int^{\infty}_{1}e^{t^2}dt<\infty$$
From series expansion.
But this does not show anything.
Hiw do i solve it. Help me please. Thanks.
AI: The point is that $e^x \cos x$ oscillates with unboundedly large amplitude. On each interval $[2k\pi-\pi/3,\,2k\pi+\pi/3]$ we have $\cos x\ge \frac12$, so
$$\int_{2k\pi-\pi/3}^{2k\pi+\pi/3}\frac{e^{x}\cos x}{\sqrt{x}}\,dx\;\ge\;\frac{2\pi}{3}\cdot\frac{e^{2k\pi-\pi/3}}{2\sqrt{2k\pi+\pi/3}}\;\longrightarrow\;\infty \quad (k\to\infty).$$
Hence the partial integrals $\int_1^X$ cannot satisfy the Cauchy criterion, and the integral diverges. (Note that, unlike for series, an integrand that fails to tend to $0$ does not by itself force divergence of an improper integral, which is why a quantitative estimate like this is needed.)
|
H: Question about generator of $K[x]$ using simple extension.
Let $K \subset \mathbb{C} $ be a field, and $K[x]$ the polynomial ring, $\alpha$ algebraic over $K$ and $L=K(\alpha)$.
I am reading this book and it states that, since any element of $L$ is of the form $\frac{f(\alpha)}{g(\alpha)}$ with $f,g \in K[x]$ and $g(\alpha) \neq 0$.
Considering $p=irr_K(\alpha)$ (the minimal polynomial of $\alpha$ over $K$), since $g(\alpha) \neq 0$ we have $p \nmid g$. Here comes my problem: they then state that $\langle p,g\rangle = K[x]$.
Why is this? I think this must come from a type of Bezout identity for polynomials, or something along that area. But I am unsure.
AI: Your suspicion is correct. Bezout's identity holds in any Euclidean domain. Since $p$ is irreducible, its only divisors are units and associates of $p$ itself. Since $p$ does not divide $g$, their gcd must be $1$, so there exist $u,v\in K[x]$ with $up+vg=1$; hence $\langle p,g\rangle=K[x]$.
|
H: Question Cauchy Riemann equations
Let $f(z) =zRe(z)$. Determine all points $z_0$ for which the complex derivative $f'(z_0)$ existst.
I wrote $f(z)$ as $f(z)=f(x+iy)=(x+iy)\operatorname{Re}(x+iy)=x^2+(xy)i=:u(x,y)+v(x,y)i$.
So we get the partials $u_x=2x$, $v_y = x$, $u_y=0$, $v_x=y$.
Now the CR equations, $u_x=v_y$ and $u_y=-v_x$ , only hold in the point $z=0$.
So $f(z)$ is only differentiable at the point $z=0$, and so $f'$ exists only at the point $0$.
Apparently this reasoning is wrong, but I don't know what is wrong about it.
AI: What you have done shows that $f$ is not differentiable at any point other than $0$. The validity of C-R equations at $0$ does not guarantee existence of $f'(0)$. So you have to check existence of $f'(0)$ from definition: $f'(0)=\lim_{z \to 0} \frac {z Re(z) -0} z=\lim_{z \to 0} Re (z)=0$. Hence $f'(0)=0$.
Note: The validity of C-R equation at all points of an open set implies differentiability but validity of C-R equation at a point does not imply differentiability at that point.
|
H: Injectivity and Surjectivity of two different functions
a) If there is a function f: A-->B where there are two distinct elements a, b that are in A such that f(a) ≠ f(b), does this make f injective?
I think the answer is true because if a and b don't equal each other, then f(a) ≠ f(b).
b) If there is a function g: A-->B where for every b in B, there are two distinct elements a1, a2 in A such that f(a1) = b and f(a2) = b. Is f surjective?
I also think this is true, but am not quite sure.
AI: In a) you are only told that $f(a) \neq f(b)$ for some particular points $a$ and $b$. This does not imply that $f$ is injective. For example let $f(x)=x^{2}$ from $\mathbb R$ into itself. Take $a=1$ and $b=2$. Then $f(a) \neq f(b)$. But $f$ is not injective because $f(-1)=f(1)$.
For b) the answer is YES: for every $b \in B$ there is a point $a_1 \in A$ at which the value $b$ is attained, so $f$ is surjective.
|
H: Question about Spivak's proof of how to use u-substitution when the derivative of the inner function does not appear in the integral
Spivak (3rd edition) proposes solving the integral $$\int \frac{1+e^x}{1-e^x} dx$$ by letting $u=e^x$, $x=\ln(u)$, and $dx=\frac{1}{u}du$. This results in the integral $$\int \frac{1+u}{1-u}\frac{1}{u}du\\=\int \frac{2}{1-u}+\frac{1}{u}du=-2\ln(1-u)+\ln(u)=-2\ln(1-e^x)+x$$
From this example, Spivak argues that a similar method will work on any integral of the form $\int f(g(x))dx$ whenever $g(x)$ is invertible in the appropriate interval. Because this method is not a simple application of the substitution theorem, Spivak provides the following justification for his claim.
Consider continuous $f$ and $g$ where $g$ is invertible on the appropriate interval. Applying the above method to the arbitrary case, we let $u=g(x)$, $x=g^{−1}(u)$, and $dx=(g^{−1})′(u)du$. Thus, we need to show that $$∫f(g(x))dx=∫f(u)(g^{−1})′(u)du$$ To prove this equality Spivak uses a more typical substitution $u=g(x)$, $du=g′(x)dx$ and applies it by noting that $$∫f(g(x))dx=∫f(g(x))g′(x)\frac{1}{g′(x)}dx$$ Presumably using the substitution theorem, which roughly states that $∫f(g(x))g'(x)dx=∫f(u)du$, Spivak asserts that $$∫f(g(x))g′(x)\frac{1}{g′(x)}dx=∫f(u)\frac{1}{g′(g^{−1}(u))}du$$ Then, because $(g^{-1})'(u)=\frac{1}{g'(g^{-1}(u))}$ Spivak concludes $$∫f(u)\frac{1}{g′(g^{−1}(u))}du=∫f(u)(g^{−1})′(u)du$$
I lose track of the argument when Spivak argues that $$∫f(g(x))g′(x)\frac{1}{g′(x)}dx=∫f(u)\frac{1}{g′(g^{−1}(u))}du$$ In the original example, it was clear to me how we could apply the substitution theorem to make this equality true because $\frac{1}{g'(x)}$ was in fact a function of $g(x)$ as $g'(x)=g(x)$. But this is not necessarily true in all cases, or so it seems. How do we know that $f(g(x))\frac{1}{g'x}$ can be written in the form $h(g(x))$ for some continuous function $h$? To sum up, my main questions is, how do we use the substitution theorem to justify the equality $$∫f(g(x))g′(x)\frac{1}{g′(x)}dx=∫f(u)\frac{1}{g′(g^{−1}(u))}du$$
AI: Note that
\begin{align}
\int f(g(x))\cdot g'(x) \dfrac{1}{g'(x)}\ dx
&= \int \underbrace{f(g(x))\cdot\dfrac{1}{(g' \circ g^{-1})(g(x))}}_{h(g(x))}\ \underbrace{g'(x)\ dx}_{du} \\
\end{align}
where I defined $h(t):= f(t) \cdot \dfrac{1}{(g' \circ g^{-1})(t)}$. So, now of course, you can apply the substitution rule in the first form Spivak presented, simply by putting $u = g(x)$ and $du = g'(x)\ dx$.
Anyway, let me just add a few comments about how (a few years back), after reading Spivak's chapter, I really convinced myself that the substitution rule is actually true as a consequence of the chain rule, rather than some symbolic manipulation (I mean of course I knew it was true, but not how the computations aligned with the formalism).
I would recommend reading this previous answer of mine, where I pretty much (re)explain what I understood from reading this section in Spivak. Anyway, the gist is the following. Given any continuous $f$, when we write down the symbol $\int f(x)\, dx$, what we really mean is a differentiable function (or if you wish an equivalence class of differentiable functions), $F$, such that $F' = f$. So, I shall refer to this primitive function $F$ (or rather its equivalence class) by the symbol $\text{prim}(f)$. Then, the common subsitution rule
\begin{align}
\int f(g(x)) \cdot g'(x) \, dx &= \int f(u)\, du \quad \text{where $u = g(x)$}
\end{align}
can be written (I'd say more correctly from a technical perspective) as
\begin{align}
\text{prim}((f \circ g) \cdot g') &= \text{prim}(f) \circ g.
\end{align}
(by the way $\text{prim}(f) \circ g$ of course means $[\text{prim}(f)] \circ g$).
All this equation is saying is that if you differentiate both sides, you get the same function, namely $(f \circ g) \cdot g'$. On the LHS, that is trivially true, by definition of $\text{prim}(\cdot)$, while on the RHS, it is because of the chain rule. Now, if $g$ is invertible, we can "solve" this equation to get
\begin{align}
\text{prim}(f) &= \text{prim}((f \circ g) \cdot g') \circ g^{-1}
\end{align}
This equation is true for EVERY continuous $f$, and every continuously differentiable $g$, which is invertible. Just for the sake of avoiding confusion later on, I'll write this as
\begin{align}
\text{prim}(\phi) &= \text{prim}((\phi \circ \psi) \cdot \psi') \circ \psi^{-1}
\end{align}
Now, the equation you're looking for is obtained by plugging in $\phi = f \circ g$ and $\psi = g^{-1}$. Then, this immediately reduces to
\begin{align}
\text{prim}(f \circ g) &= \text{prim}\left(f \cdot (g^{-1})'\right) \circ g \\
&= \text{prim}\left(f \cdot \dfrac{1}{g' \circ g^{-1}} \right) \circ g,
\end{align}
where in the last line, I used the inverse function theorem for the formula of derivative of inverses. Of course, if you write this out in the classical notation, it says that if we put $u = g(x)$, then
\begin{align}
\int f(g(x))\, dx &= \int f(u) \cdot (g^{-1})'(u)\, du \\
&= \int f(u) \cdot \dfrac{1}{g'(g^{-1}(u))}\, du
\end{align}
In the previous answer of mine, I explain in slightly more detail, how to translate back and forth between the two notations.
Also, one final remark which I feel compelled to add: no one ever uses this $\text{prim}(\cdot)$ notation, and for good reason, because in actual hands-on computations, it doesn't serve us too well, so of course, you should get comfortable with all the tricks of integration being applied in the classical notation as well. The only thing which this notation offers is a temporary way of writing things, in order to clarify for oneself what exactly is going on when applying the substitution rule (as I'm sure several people have atleast once thought about why it's true considering we're taught derivatives are not fractions, but yet in this one circumstance, we treat it as a fraction).
|
H: Suppose two estimators are unbiased, what is the intuition behind the preference of the estimator with the less variance?
Suppose there are two unbiased estimators that we can use to estimate a parameter $\theta$, why do we often prefer the one with less asymptotic variance?
The question is rather simple and perhaps obvious but I cannot seem to convince myself totally. One thing I thought about is that, say $p(X)$ is the estimator with less variance, then for different sets of data, $p(x)$ will stay relatively 'stable' in comparison to the other estimator. So if we use it to construct a confidence interval then the interval length will be relatively short and so it gives a better idea of where the parameter $\theta$ would lie?
Are there better explanations to this? Many thanks in advance!
AI: Here is a practical example. Two unbiased estimators for the mean $\mu$ of a normal
population are the sample mean $A$ and the sample median $H.$ (See here for unbiasedness of the sample median of normal data.)
That is, $E(A) = E(H) = \mu.$
However, for any one particular sample size $n \ge 2$ one has $Var(A) < Var(H),$
so the sample mean is the preferable estimator.
In particular, if
we are trying to estimate $\mu$ with $n = 10$ observations from a normal population with $\sigma=1,$ then it is easy to see that
$Var(A_{10}) = 0.1.$ By simulation (and other methods) one can find that $Var(H_{10}) \approx 0.138.$
Therefore, if we were to insist on using the median rather than the mean
we would have to use more than ten observations to get the same degree
of precision of estimation we could get from the mean.
set.seed(2020)
h = replicate(10^6, median(rnorm(10)))
mean(h); var(h)
[1] 0.000159509 # aprx E(H) = 0
[1] 0.1384345 # aprx Var(H) > 0.1
Here is a histogram of sample medians of a million samples of
size $n=10.$ The solid red curve shows the density function
of the normal distribution of means of samples of size $n=10,$ which is
$\mathsf{Norm}(\mu = 0, \sigma = 1/\sqrt{10}).$
[There is also a Central Limit Theorem for sample medians
that ensures the histogram is very nearly normal--but with a larger variance.]
hist(h, prob=T, br=50, col="skyblue2",
main="n=10: Histogram of Sample Medians")
curve(dnorm(x, 0, 1/sqrt(10)), add=T, col="red", lwd=2)
|
H: Orbit under group of automorphisms is finite.
Let $a,b\in\mathbb{C}$, and $\sigma$ be an automorphism of $\mathbb{C}$ such that $b=\sigma(a)$. My question is: Why if the set
$$\{\sigma'(a)\mid \sigma' \text{ is an automorphism of }\mathbb{C}\}$$
has at most $n$ elements then $b$ is an algebraic number of degree at most $n$, i.e. $[\mathbb{Q}(b):\mathbb{Q}]\leq n$?. I know that every automorphism permutes the roots of a polynomial, but my question is like ''if
$$\{\sigma(a)\mid \sigma \text{ is an automorphism of }\mathbb{C}\}=\{\alpha_{1},\ldots,\alpha_{m}\}$$
then $\alpha_{1}$, $\ldots$, $\alpha_{m}$ are roots of a polynomial over $\mathbb{Q}$ with degree at most $n$?'' and I don't know if this is true.
Obviously $b$ is a root of $(x-\alpha_{1})\ldots(x-\alpha_{m})$ but I don't know if this polynomial has rational coefficients.
Any help will be appreciated.
AI: Suppose that $a$ is transcendental over $\mathbf{Q}$. Choose a transcendence basis $S$ for $\mathbf{C}$ over $\mathbf{Q}$ containing $a$. The extension $\mathbf{C}/\mathbf{Q}(S)$ is then algebraic, and because $\mathbf{C}$ is algebraically closed, $\mathbf{C}$ is necessarily an algebraic closure of $\mathbf{Q}(S)$. For each $b\in S-\{a\}$, let $\varphi_b:S\to S$ be the bijection that is the identity on $S-\{a,b\}$ and interchanges $a$ and $b$. Then $\varphi_b$ extends to an automorphism $\overline{\sigma}_b$ of $\mathbf{Q}(S)$. Finally, since $\mathbf{C}$ is an algebraic closure of $\mathbf{Q}(S)$, $\overline{\sigma}_b$ extends to an automorphism $\sigma_b$ of $\mathbf{C}$. The set $S$ is infinite (if $S$ is finite, $\mathbf{Q}(S)$ is countable, which implies that $\mathbf{C}$ itself is countable because $\mathbf{C}$ is algebraic over $\mathbf{Q}(S)$).
Thus $S-\{a\}=\{\sigma_b(a):b\in S-\{a\}\}$ is an infinite subset of $\{\sigma(a):\sigma\in\mathrm{Aut}(\mathbf{C})\}$, and the latter set is infinite as well.
So, given $a\in\mathbf{C}$, if $\{\sigma(a):a\in\mathrm{Aut}(\mathbf{C})\}$ is finite, then $a$ is algebraic over $\mathbf{Q}$. If $a^\prime$ is a Galois conjugate of $a$ over $\mathbf{Q}$, i.e., another root of the minimal polynomial $f$ of $a$ over $\mathbf{Q}$, then there is an automorphism of $\overline{\mathbf{Q}}$ sending $a$ to $a^\prime$, and any such automorphism extends to an automorphism of $\mathbf{C}$ (use a transcendence basis for $\mathbf{C}$ over $\overline{\mathbf{Q}}$). This means that the set $\{\sigma(a):a\in\mathrm{Aut}(\mathbf{C})\}$ contains all the roots of $f$, and conversely, since any automorphism of $\mathbf{C}$ fixes $\mathbf{Q}$, $\sigma(a)$ is a root of $f$ for every $\sigma\in\mathrm{Aut}(\mathbf{C})$. Thus $\{\sigma(a):\sigma\in\mathrm{Aut}(\mathbf{C})\}$ is exactly the set of roots of $f$ in $\mathbf{C}$. There are $\deg(f)$ such roots, which means that, for any element $b$ of the set in question, we have
$$[\mathbf{Q}(b):\mathbf{Q}]=\deg(f)=|\{\sigma(a):\sigma\in\mathrm{Aut}(\mathbf{C})\}|\text{.}$$
|
H: Trig Question with angles of elevation
A UFO is flying above two people standing on the ground at points A and B.
A and B are 300m apart.
The angle of elevation of UFO from A is 30 degrees
The angle of elevation of the UFO from B is 23 degrees
Find height of UFO above ground.
I got something like 480m but apparently that's wrong, I'm really not sure how I've gone wrong, if anyone could help that would be great!
AI: Let $h$ be the height of the UFO. If you consider the triangle $AUB$ you can find the $h$ as below:
$x=\frac{h}{\tan(23)}$ and $y=\frac{h}{\tan(30)}$.
$x+y=300 \rightarrow \frac{h}{\tan(30)}+\frac{h}{\tan(23)}=300$, so $h=\dfrac{300}{\cot 30^\circ+\cot 23^\circ}\approx 73.4$ m.
(Your $480$ m is what you get if the UFO is assumed not to be between $A$ and $B$, i.e. from $\frac{h}{\tan(23)}-\frac{h}{\tan(30)}=300$.)
|
H: How to find $d\phi(I)$ where $\phi(A)=AA^T$
I'm trying to show that $so(3) = \{A\in M(3; \mathbb{R}): A = -A^T \}$ is the Lie algebra of $SO(3)$. For this I am using the following fact: "The tangent space at the identity to a Lie subgroup of $GL(n,\mathbb{R})$, endowed with the matrix commutator, is isomorphic to its Lie algebra". I defined $\phi:GL(3, \mathbb{R}) \to Sym$ ($Sym$ is the subset of symmetric matrices) by $\phi(A)=AA^T$. I want to check that the kernel of $d\phi(I): gl(3,\mathbb{R}) \to Sym$ is the subspace of skew-symmetric matrices in $gl(3,\mathbb{R})$. I'm having trouble finding $d \phi (I)$; how do I proceed?
AI: The derivative at a matrix $A$ applied to a matrix $H$ is, by definition
$$
d\phi_A(H)=\lim_{t\rightarrow 0}\frac{(A+tH)(A+tH)^T-AA^T}{t} = HA^T+AH^T
$$
(the $t^2HH^T$ term vanishes in the limit). In particular, putting $A=I$ gives $d\phi_I(H)=H+H^T$, so
$$
\ker d\phi_I=\{H:H+H^T=0\}
$$
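As a quick numeric illustration (not part of the proof; it assumes numpy and uses arbitrary example matrices), one can compare the finite-difference quotient with $H+H^T$, and check that a skew-symmetric matrix lands in the kernel:

import numpy as np

def phi(A):
    return A @ A.T

I = np.eye(3)
H = np.array([[0.0, 1.0, -2.0], [3.0, 0.5, 0.0], [1.0, -1.0, 2.0]])
t = 1e-7
numeric = (phi(I + t*H) - phi(I)) / t          # finite-difference quotient at I
print(np.max(np.abs(numeric - (H + H.T))))     # tiny: matches H + H^T
K = np.array([[0.0, 1.0, -2.0], [-1.0, 0.0, 3.0], [2.0, -3.0, 0.0]])  # skew-symmetric
print(np.max(np.abs((phi(I + t*K) - phi(I)) / t)))  # tiny: K is in the kernel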
|
H: How to determine the integer $k$ (the Lambert W branch) appearing in the solutions of an equation?
Actually this is my first time self-studying the Lambert W function, and I am interested in it, so forgive me if this question sounds stupid.
I can derive manually, with algebra, that the solution of
$$\text{$x^x=2$ is $x=e^{W(\ln(2))}$ }$$
But that was before I knew about the branches such as $W_0(x)$ and $W_{-1}(x)$, and obviously from the definition (which I got from Wikipedia), $ye^y=x$ holds if and only if
$$\text{$y=W_k(x)$ for some integer $k$}$$
But how do we know and determine the $k$? In my first case, it turns out $x=e^{W_0(\ln(2))}, \quad k=0$. Without a computer we never know whether $k\ne 0$ also gives a solution. Even if we determine $k$ from the graph of the equation by considering intersection points, that's fine — but again, to know the graph we need a computer to draw it.
Another example is $9x=e^{3x}$. This equation has 2 real solutions, namely
$$\text{$-\frac{W_0\left(-\frac 1 3\right)}{3}$ and $-\frac{W_{-1}\left(-\frac 1 3\right)}{3}$}$$
And what about $x^x=2$, which has 3 real solutions? It seems that only 2 of them can be expressed via the Lambert W function (not sure).
So, what's the idea for determining the $k$? Thanks in advance!
AI: There are only two branches of Lambert W that give you real values on real numbers. The principal branch is real on $[-1/e, \infty)$ with values $\ge -1$. The $-1$ branch is real on $[-1/e, 0)$ with values $ \le -1$.
Here is a graph, with the principal branch in red and the $-1$ branch in green.
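As a numeric check (assuming SciPy is available): rewriting $9x=e^{3x}$ as $(-3x)e^{-3x}=-\frac13$ gives $x=-W_k(-1/3)/3$, and evaluating both real branches recovers the two real solutions:

import numpy as np
from scipy.special import lambertw

for k in (0, -1):
    x = (-lambertw(-1/3, k=k) / 3).real        # branch k of Lambert W
    print(k, x, np.isclose(9 * x, np.exp(3 * x)))  # True for both branches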
|
H: Identity for sum of binomial coefficients derived from convolution of negative binomial random variables
Let $X\sim NB(r,p)$ and $Y\sim NB(s,p)$ be negative binomials (I use the variant where we count all trials until we reach $r$ successes). Further, they are independent and I am interested in the distribution of $X+Y$. Sure enough, the result is a negative binomial again, with parameter $r+s$.
The task however was to prove that using only the convolution formula. Doing that I derived the following
$$P(X+Y=n)=p^{r+s}(1-p)^{n-r-s}\sum_{j=0}^{n-r-s}\binom{n-j-s-1}{r-1}\binom{j+s-1}{s-1}$$
Since I know the result, I know that this holds:
$$\sum_{j=0}^{n-r-s}\binom{n-j-s-1}{r-1}\binom{j+s-1}{s-1}=\binom{n-1}{r+s-1}$$
Does anyone recognize this identity? I would love it if there was a combinatorial proof for it.
EDIT: Thanks to a comment I adapted the combinatorial argument as follows:
We take $m$ objects in a row and select $x+y+1$ of them; the $(x+1)$-th selected object splits the row into a left part and a right part, so the selection consists of the splitter, $x$ objects to its left, and $y$ objects to its right.
Another way of counting is to condition on the position $j$ of the splitting object, choosing $x$ from the $j-1$ objects on its left and $y$ from the $m-j$ objects on its right. Obviously $j$ can run only from $x+1$ to $m-y$. Therefore
$$\binom{m}{x+y+1}=\sum_{j=x+1}^{m-y}\binom{j-1}{x}\binom{m-j}{y}
=\sum_{k=0}^{m-x-y-1}\binom{k+x}{x}\binom{m-k-x-1}{y}$$
Choosing $m=n-1$, $x=s-1$, $y=r-1$ gives the desired result.
AI: Naturally this bears some similarity to Vandermonde's identity, and can be proven using a slightly generalised version that permits negative integers in the top of the binomial coefficient (Chu-Vandermonde on that Wikipedia page). This is perhaps unsurprising given that we're dealing with negative binomials.
Instead, here is a combinatorial proof. Define $a = n-s-1$, $b = s-1$, $x = r-1$, $y = s-1$. Then this is equivalent to the assertion that $$\sum_{j=0}^{a} \binom{a-j}{x} \binom{b+j}{y} = \binom{a+b+1}{x+y+1}.$$ (In the original sum the upper limit is $a-x$ rather than $a$, but the extra terms with $j > a-x$ all resolve to $0$, since $\binom{a-j}{x}=0$ there.) We will prove this for all $y \geq b$ (and all constants non-negative).
Now, consider a row of $a+b$ distinct items. The process being described on the LHS is that we split the row into two smaller rows, the rightmost of which has size at least $b$. Then, we select $x$ elements from the left side, and $y$ from the right side. This is clearly equivalent to a process where, given a row of $a+b+1$ items, we select $x+y+1$ of them; the $x+1$-th element from the left of the row is analogous to our "splitting point" from before, and this divides the selection into a left selection of size $x$ and a right selection of size $y$ on a row of size $a+b$ that has been split in two parts. Since $y \geq b$, then this selection trivially also satisfies the constraint that the rightmost row has size at least $b$. Thus a bijection is established and the proof is complete.
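A brute-force check of the identity for small parameters (a sanity check rather than a proof; standard library only):

from math import comb

for n in range(2, 15):
    for r in range(1, n):
        for s in range(1, n - r + 1):
            lhs = sum(comb(n - j - s - 1, r - 1) * comb(j + s - 1, s - 1)
                      for j in range(n - r - s + 1))
            assert lhs == comb(n - 1, r + s - 1), (n, r, s)
print("identity verified for all n < 15")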
|
H: Is $\lim \limits_{n\rightarrow \infty} n^{1-\ln((1+\frac{1}{n})^n)} = 1 $
could you help me understand if this statement is correct?
$$ \lim \limits_{n\rightarrow \infty} n^{1-\ln((1+\frac{1}{n})^n)} = 1 $$
Its easy to see that $ \lim \limits_{n\rightarrow \infty} 1-\ln((1+\frac{1}{n})^n) = 0 $, but since this expression is the exponent of $n$, I don't know if you can conclude that
$$ \lim \limits_{n\rightarrow \infty} n^{1-\ln((1+\frac{1}{n})^n)} = \lim \limits_{n\rightarrow \infty} n^0 =1 $$
AI: The esteemed Kavi Rama Murthy has explained why your attempted proof is not correct.
To show the result, it suffices to show that the logarithm of that expression tends to zero, i.e.
$$\left(1 - n \ln (1 + 1/n)\right) \ln n \to 0.$$
Note that by Taylor's theorem, $|\ln(1+1/n) - \frac{1}{n}| \le \frac{C}{n^2}$ for some constant $C>0$.
Thus,
$$|1 - n \ln(1+1/n)| (\ln n)
\le \frac{C}{n} \ln n \to 0.$$
Response to comment: Using the mean value form of the remainder, the remainder for the first-order Taylor polynomial is $\ln(1+x) - x = - \frac{1}{(1+\xi_x)^2} \frac{x^2}{2}$
where $\xi_x$ is a quantity dependent on $x$ that is between $x$ and $0$. When $x$ is between $0$ and $1$, we have $|\frac{1}{(1+\xi_x)^2}| \le 1$.
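A small numeric illustration of the (slow) convergence, using log1p to avoid cancellation in $\ln(1+1/n)$ for large $n$:

import math

for n in (10**2, 10**4, 10**6, 10**8):
    exponent = 1 - n * math.log1p(1/n)   # behaves like 1/(2n)
    print(n, n ** exponent)              # tends to 1, roughly like e^(ln(n)/(2n))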
|
H: Proving that $2+\sqrt{2}$ is irreducible in $\mathbb{Z}[\sqrt{2}]$.
I'm asked to show that $x=2+\sqrt{2}$ is irreducible in $\mathbb{Z}[\sqrt{2}]$ by using the norm map $$N:\mathbb{Z}[\sqrt{2}]\rightarrow \mathbb{Z}^+:a+\sqrt{2}b\mapsto |a^2-2b^2|$$
Now, if $x=yz$, then $2=N(x)=N(y)N(z)$ forcing wlog $N(y)=1$. I'm now stuck trying to show that $y$ must be a unit and would appreciate any help.
AI: Use the definition of the norm. If $y=c+d\sqrt{2}$ then $N(y)=|(c+d\sqrt{2})(c-d\sqrt{2})|=1$, i.e. $(c+d\sqrt{2})(c-d\sqrt{2})=\pm 1$. Hence $y^{-1}=\pm(c-d\sqrt{2})\in\mathbb{Z}[\sqrt{2}]$, so $y$ is a unit.
|
H: evaluation of volume of solid obtained by rotating the curve $y=x^2,y=x+2$ about $x$ axis
Evaluate the volume of the solid obtained by rotating the region enclosed by the curves $y=x^2$ and $y=x+2$ about the $x$ axis.
What I tried:
Solving $y=x^2$ and $y=x+2$, we get $x=-1,\ x=2$.
So volume of solid obtained by rotating about $x$ axis is
Here the inner radius is $r_{1}=x^2$ and the outer radius is $r_{2}=x+2$.
Volume$(V)=$ Outer -Inner $$\int^{2}_{-1}\pi\bigg(x+2-x^2\bigg)dx$$
Can anyone please tell me if my solution is right? If not, how do I solve it? Help me please.
AI: The volume of a solid obtained by rotating the region enclosed by the $Ox$ axis, the lines $x=a$, $x=b$, $b>a$, and the curve $y=f(x)$ is equal to
$$V=\pi\int\limits_{a}^{b} f^2(x)\,dx.$$
So the volume to be found is
$$
V=V_{outer}-V_{inner}= \pi\int\limits_{-1}^{2} (x+2)^2\,dx-\pi\int\limits_{-1}^{2} x^4\,dx=\pi\left[\frac{(x+2)^3}{3}-\frac{x^5}{5}\right]_{-1}^{2}=\frac{72\pi}{5}.
$$
Note that your integrand is missing the squares: the washer method integrates $\pi\left(r_2^2-r_1^2\right)$, not $\pi\left(r_2-r_1\right)$.
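A quick symbolic cross-check of this computation (assuming sympy is available):

from sympy import symbols, integrate, pi

x = symbols('x')
V = pi * integrate((x + 2)**2 - x**4, (x, -1, 2))  # washer integrand (outer^2 - inner^2)
print(V)   # 72*pi/5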
|
H: Why does the Boolean equation A.B' + B = A + B hold?
I have a pretty good amount of knowledge of Boolean algebra. However, I struggled with the equality $(1)$ more than I should have.
$$x'z' + z = x' + z\tag{1}$$
How is it that this holds, algebraically? I can assure you that I've tried it enough. I just cannot get it right now.
Thanks.
AI: For any Boolean value $y$, $\color{blue}{1+y = 1}$.
So, $\color{blue}{x'z'+z} = x'z'+z\cdot1 = x'z'+z(1+x') = x'(z+z')+z = \color{blue}{x'+z}$
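An exhaustive truth-table check of the identity (standard library only):

from itertools import product

for x, z in product((0, 1), repeat=2):
    lhs = ((1 - x) & (1 - z)) | z   # x'z' + z
    rhs = (1 - x) | z               # x' + z
    assert lhs == rhs
print("x'z' + z equals x' + z for every Boolean assignment")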
|
H: Please help me to find what did I wrong in the question(Probability and distribution)
$Q)$ There is a dice-rolling game with the following rules. (Here the die is a fair die with sides $1$~$6$.)
Game rules
If we get the number $6$ in one trial, we gain $5$ points.
If we get any other number in one trial, we lose $1$ point.
If we play this game $720$ times independently, find the $a$ $s.t.$ $P(Z \geq a)$. [Continuity correction is needed.]
(Here $P(Z \geq a)$ is the normal approximation of the probability that the total score is at least $60$.)
Here is my solution.
Let $X_i$ be the score from the $i$-th trial ($1 \leq i \leq 720$).
Then $E(X_i) = -\frac{5}{6} + \frac{5}{6} = 0$ and $V(X_i) = 5$.
For $X =X_1 + X_2 +\cdots+X_{720}$ (i.e. $X$ is the total score over the $720$ trials), $E(X) = 0$ and $V(X) = 60^2$.
Hence $X$~$N(0,(60)^2)$
Therefore considering $P(X \geq 59.5)$ by continuity correction of the $P(X \geq 60)$
Conclusion is $P(Z \geq a) = P(Z \geq {59.5-0 \over 60})$, $a = {119 \over 120}$
I already knew the answer is $a=0.95$ not the above. I can't find which point I was wrong. Any help would be appreciated. Thanks.
AI: Your mean and variance are indeed correct:
$\mathsf E(X)=720\mathsf E(X_1)=720(5\cdot(-1)\cdot\tfrac 16+5\cdot\tfrac 16)=0$
$\mathsf{Var}(X)=720\mathsf{Var}(X_1)=720(5\cdot(-1)^2\cdot\tfrac 16+5^2\cdot\tfrac 16)=60^2$
$\therefore X\simeq\mathcal N(0,60^2)$
Then choose an appropriate correction value $\delta$.
$\mathsf P(X\geqslant 60)\approx \mathsf P(Z\geqslant\tfrac{60-\delta-0}{60})$
An answer of $a=0.95$ would require $\delta$ to be $3$. Why might this be so?
Since each trial changes the score by either $-1$ or $5$, consecutive possible totals differ by $5-(-1)=6$ (indeed $X=6k-720$, where $k$ is the number of sixes), so the appropriate half-step correction is $\delta=\dfrac{5-(-1)}{2}=3$.
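A numeric cross-check (assuming SciPy; since $X=6k-720$, we have $X \geq 60 \iff k \geq 130$):

from scipy.stats import binom, norm

exact = binom.sf(129, 720, 1/6)    # P(k >= 130) = P(X >= 60), k = number of sixes
approx = norm.sf((60 - 3) / 60)    # P(Z >= 0.95), half-step correction delta = 3
naive = norm.sf((60 - 0.5) / 60)   # the delta = 1/2 correction from the question
print(exact, approx, naive)        # the delta = 3 value matches the exact tail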
|
H: Use Clairaut's Theorem to find an effecient way to solve $f_{yyzzx}$.
I just asked a similar question, but is there any other way the $f_{yyzzx}$ could be rearranged to get a number other than $0$ in the end?
Here $f(x,y,z)=x^2\sin(4y)+z^3(6x-y)+y^4$.
I have $f_y=4x^2\cos(4y)-z^3+4y^3$, then $f_{yy}=-16x^2\sin(4y)+12y^2$
AI: Let $f(x,y,z) = x^2 \sin(4y) + z^3(6x-y)+y^4$. Since the coordinate functions $x$, $y$, and $z$ are continuous and sums, differences, products, cosines, and sines of continuous functions are continuous, $f$ is continuous and all of its (repeated) partial derivatives are continuous. Clairaut's theorem therefore allows us to reorder the specified partial derivatives freely without changing the value of the result.
Only the middle term contains a "$z$", so $f_z$ contains only one term, $3z^2(6x-y)$. Differentiating that by $x$ or $y$ leaves a constant times $z^2$
\begin{align*}
\frac{\partial}{\partial x} 3z^2(6x-y) &= 18z^2 \text{ and } \\
\frac{\partial}{\partial y} 3z^2(6x-y) &= -3z^2 \text{.}
\end{align*}
From either of these, partially differentiating with respect to $y$ again leaves $0$.
So either $f_{zxy} = 0$ or $f_{zyy} = 0$; further partial differentiation keeps the result zero, and Clairaut's theorem tells us the reordered derivatives have the same value, so the requested partial derivative of $f$ is zero.
A different way of thinking about it. With $f$ given and after observing that we may reorder the derivatives as we like, we examine the terms of $f$. The first term has no "$z$" in it and neither does the third. If $z$ appears in the list of variables with respect to which we will differentiate, these terms will vanish before we finish. "$z$" does appear on the list, so these terms do not contribute to the result. The middle term is linear in $x$ and $y$, so if $x$ appears twice or $y$ appears twice in the list of variables with respect to which we will differentiate, that term will vanish before we finish. "$y$" does appear twice on the list, so this term also does not contribute to the result. In this way, we have shown that every term is sent to zero before we finish taking the specified partial derivative.
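A quick symbolic confirmation with sympy (an illustration of the argument, not a replacement for it):

from sympy import symbols, sin, diff

x, y, z = symbols('x y z')
f = x**2*sin(4*y) + z**3*(6*x - y) + y**4
print(diff(f, y, y, z, z, x))   # 0
print(diff(f, z, x, y, y, z))   # 0 -- reordered, same (zero) result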
|
H: Coordinates of circumcentre of a triangle in terms of triangle point coordinates
For a triangle $ABC$, its circumcentre is the intersection of the perpendicular bisectors of its three sides. Now for a particular triangle I can draw the bisectors and find the circumcentre. However, I need a general formula for the position $(x,y)$ of the circumcentre in terms of the coordinates of the vertices $A(x_a,y_a)$, $B(x_b,y_b)$ and $C(x_c,y_c)$. Is there such a formula?
AI: The circumcentre is the point $(x,y)$ equidistant from $A$, $B$, and $C$.
If $A = (x_a, y_a)$, $B = (x_b, y_b)$, $C = (x_c, y_c)$, then
$(x-x_a)^2 + (y-y_a)^2 = (x-x_b)^2 + (y-y_b)^2 = (x-x_c)^2 + (y-y_c)^2$
$x^2 + y^2 - 2xx_a - 2yy_a +x_a^2 + y_a^2 = x^2 + y^2 - 2xx_b - 2yy_b +x_b^2 + y_b^2 = x^2 + y^2 - 2xx_c - 2yy_c +x_c^2 + y_c^2$
We can subtract $x^2 + y^2$ from all 3 parts; introducing a new unknown $D$ for the common value of the three resulting expressions and rearranging gives
$2xx_a + 2yy_a + D = x_a^2+y_a^2\\
2xx_b + 2yy_b + D = x_b^2+y_b^2\\
2xx_c + 2yy_c + D= x_c^2+y_c^2$
And this is a system of linear equations with 3 unknowns.
I suppose we can apply Cramer's rule for a single formula.
$x = \frac {(x_a^2 + y_a^2)(y_b-y_c) + (x_b^2 + y_b^2)(y_c-y_a) + (x_c^2 + y_c^2)(y_a-y_b)}{2(x_a(y_b - y_c)+ x_b(y_c - y_a) + x_c(y_a-y_b))}$
$y = \frac {(x_a^2 + y_a^2)(x_b-x_c) + (x_b^2 + y_b^2)(x_c-x_a) + (x_c^2 + y_c^2)(x_a-x_b)}{2(y_a(x_b - x_c)+ y_b(x_c - x_a) + y_c(x_a-x_b))}$
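A numeric sanity check of these formulas (numpy assumed; the code writes both quotients over the common denominator $d$, which is equivalent to the two formulas above after flipping the signs of the $y$ numerator and denominator):

import numpy as np

def circumcentre(A, B, C):
    (xa, ya), (xb, yb), (xc, yc) = A, B, C
    d = 2 * (xa*(yb - yc) + xb*(yc - ya) + xc*(ya - yb))   # nonzero for a non-degenerate triangle
    x = ((xa**2 + ya**2)*(yb - yc) + (xb**2 + yb**2)*(yc - ya)
         + (xc**2 + yc**2)*(ya - yb)) / d
    y = ((xa**2 + ya**2)*(xc - xb) + (xb**2 + yb**2)*(xa - xc)
         + (xc**2 + yc**2)*(xb - xa)) / d
    return np.array([x, y])

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
O = circumcentre(A, B, C)
print([float(np.linalg.norm(O - np.array(P))) for P in (A, B, C)])   # three equal radii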
|
H: Find all integers $x$ of the form $x\equiv _5 3$, $x\equiv _8 6$.
I'm asked to find all integers $x$ of the form $x\equiv _5 3$, $x\equiv _8 6$. It turns out that the set of all such integers is $$\{ x\in \mathbb{Z} : x=38+40t, \ \text{for some} \ \ t\in \mathbb{Z}\}$$
yet I haven't been able to get closer to such result. I would appreciate any help.
AI: On one hand, $38$ is a solution to both congruences since $38=3+5\cdot 7$ and $38=6+8\cdot 4$. So, given that both $5$ and $8$ divide $40$, any number of the form $38+40t$, for $t$ an integer, will also be a simultaneous solution to the congruences because:
$$38+40t\equiv_5 38\equiv_5 3$$
And similarly
$$38+40t\equiv_8 38\equiv_8 6$$
On the other hand, suppose $x$ is any number that simultaneously solves both congruences.
Then:
$$x-38\equiv_5 3-3\equiv_5 0$$
And
$$x-38\equiv_8 6-6\equiv_8 0$$
This means $5|(x-38)$ and $8|(x-38)$. Since $5$ and $8$ are coprime, this means their product $5\cdot 8=40$, also divides $x-38$, i.e. $x-38$ is of the form $40t$ for some integer $t$, i.e. $x$ is exactly of the form $38+40t$.
In general, if you have found at least one solution, $c$, to $n$ congruences taken $\textrm{mod}$ some numbers $a_1,a_2,\dots, a_n$, then the general solution to these $n$ simultaneous congruence equations is of the form $c+\textrm{l.c.m.}(a_1,a_2,\dots, a_n)t$ where $t$ is allowed to be any integer (this is a fairly easy result that you can get just generalizing the above proof). The tricky bit is finding the initial $c$, if it even exists. In a special case, the Chinese remainder theorem gives you an idea of how to find such a $c$.
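A brute-force sanity check (standard library only; Python's % returns non-negative residues, so negative $x$ are covered too):

sols = [x for x in range(-100, 200) if x % 5 == 3 and x % 8 == 6]
print(sols)                                   # -82, -42, -2, 38, 78, 118, 158, 198
print(all((x - 38) % 40 == 0 for x in sols))  # True: all of the form 38 + 40t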
|
H: Finding LU factorization and one number is off
I'm trying to find the LU factorization of this $2\times 2$ matrix, but when I check my work, the bottom-right number is $31$ instead of $1$.
A is the matrix at the top.
What am I doing incorrectly?
AI: Consider $A=\begin{pmatrix}
2 & 8 \\
4 & 1 \\
\end{pmatrix}.$
Apply the elementary row operation $R_2 \to R_2 - 2R_1$ (the multiplier is $\tfrac{4}{2}=2$). This gives
$U= \begin{pmatrix}
2 & 8 \\
0 & - 15\\
\end{pmatrix},$
and recording the multiplier $2$ below the diagonal,
$L=\begin{pmatrix}
1 & 0 \\
2 & 1\\ \end{pmatrix}.$
Then
$$LU=
\begin{pmatrix}
1& 0 \\
2& 1\\
\end{pmatrix} \cdot \begin{pmatrix}
2 & 8 \\
0 & - 15\\ \end{pmatrix} =A$$
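A quick numeric verification (numpy assumed). Incidentally, the reported $31$ is consistent with a sign slip: taking $+15$ instead of $-15$ in $U$ gives $2\cdot 8+15=31$ in the bottom-right entry.

import numpy as np

A = np.array([[2.0, 8.0], [4.0, 1.0]])
L = np.array([[1.0, 0.0], [2.0, 1.0]])
U = np.array([[2.0, 8.0], [0.0, -15.0]])
print(L @ U)                  # [[2. 8.] [4. 1.]]
print(np.allclose(L @ U, A))  # True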
|
H: Which is greater $\frac{13}{32}$ or $\ln \left(\frac{3}{2}\right)$
Which is greater $\frac{13}{32}$ or $\ln \left(\frac{3}{2}\right)$
My try:
we have $$\frac{13}{32}=\frac{2^2+3^2}{2^5}=\frac{1}{8}\left(1+(1.5)^2\right)$$
Let $x=1.5$
Now consider the function $$f(x)=\frac{1+x^2}{8}-\ln x$$
$$f'(x)=\frac{x}{4}-\frac{1}{x}$$ so $f$ is decreasing on $(0,2)$.
any help here?
AI: The difference is so small that I see no other way than to do the computation. Note $$e^x = \sum_{k=0}^\infty \frac{x^k}{k!}$$ implies $$e^{13/32} > 1 + \frac{13}{32} + \frac{(13/32)^2}{2!} + \frac{(13/32)^3}{3!} + \frac{(13/32)^4}{4!} = \frac{12591963}{8388608} > \frac{3}{2}.$$ Hence $e^{13/32} > \frac{3}{2}$, i.e. $\frac{13}{32} > \ln\left(\frac{3}{2}\right)$.
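The two comparisons can be confirmed in exact rational arithmetic (standard library only, no floating point):

from fractions import Fraction
from math import factorial

x = Fraction(13, 32)
partial = sum(x**k / factorial(k) for k in range(5))  # first five Taylor terms of e^x
print(partial)                    # 12591963/8388608
print(partial > Fraction(3, 2))   # True, so e^(13/32) > 3/2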
|
H: modular forms with complex multiplication
I would like the definition of a modular form with complex multiplication and if possible a reference.
Thank you !
AI: A newform $f=\sum_{n=1}^\infty a(n)q^n$ of level $N$ and weight $k$ has complex multiplication (CM) if there is an imaginary quadratic field $K$ such that $a(p)=0$ for every prime $p$ that is inert in $K$. The field $K$ is then unique (if the weight $k\geq 2$), and one says that $f$ has CM by $K$. A standard reference is K. Ribet, ''Galois representations attached to eigenforms with nebentypus,'' in Modular Functions of One Variable V, Lecture Notes in Mathematics 601, Springer, 1977.
|
H: Given a chart $(U,\phi)$ find a chart $(V,\psi)$ such that $(U,\phi)$ and $(V,\psi)$ are $C^\infty$-compatible and $\psi(V)=\mathbb{R}^n$?
On page 4 of the book "Differential Topology" (written by Amiya Mukherjee) the following is written:
[...] observe that the charts $(U,\phi)$ and $(U,\alpha\circ \phi )$,
where $\alpha:\mathbb{R}^n\to \mathbb{R}^n$ is a diffeomorphism,
are always compatible. In particular, taking $\alpha$ to be the
translation which sends $\phi(p)$ to $0$, we can always suppose that
every point $p\in M$ admits a coordinate chart $(U,\phi)$ such that
$\phi(p)=0$. We may also suppose that $\phi(U)$ is a convex set, or
the whole of $\mathbb{R}^n$.
In that book the word "diffeomorphism" means "$C^\infty$-diffeomorphism" and two charts $(U,\phi)$, $(V,\psi)$ are said to be compatible if $\psi \circ \phi ^{-1}:\phi(U\cap V)\to\psi(U\cap V)$ is a $C^\infty$-diffeomorphism.
My question is about the end of the above quote: "We may also suppose that $\phi(U)$ is a convex set, or the whole of $\mathbb{R}^n$".
Question: Given a chart $(U,\phi)$ how can I prove that exists a chart $(V,\psi)$ such that $(U,\phi)$ and $(V,\psi)$ are $C^\infty$-compatible and $\psi(V)=\mathbb{R}^n$?
I tried to use the questions below to answer my question but I couldn't.
Equivalent Definitions of a Topological Manifold: Are Open Sets in $R^n$ homeomorphic to $R^n$?
Diffeomorphism: Unit Ball vs. Euclidean Space
AI: The point is that we only need to find a subset of $U$ whose image under the chart map $\phi$ is an open ball $B_r(0)$ in $\mathbb{R}^n$; after this, we can blow the ball up to all of $\mathbb{R}^n$ by a diffeomorphism. Suppose we have a chart $(U,\phi)$ with a point $p \in U$ having $\phi(p)=0 \in \phi(U)\subset \mathbb{R}^n$.
Let $V=\phi^{-1}(B_r(0))$ for some $r>0$ small enough that $B_r(0)\subset\phi(U)$, and let $\psi :=\phi|_V$. Then $(V,\psi)$ is $C^{\infty}$-compatible with $(U,\phi)$, since it is just a restriction of the larger chart.
Now choose your favorite diffeomorphism $\alpha : B_r(0) \to \mathbb{R}^n$ — for instance $\alpha(x)=\dfrac{x}{\sqrt{r^2-\|x\|^2}}$, with smooth inverse $y\mapsto\dfrac{ry}{\sqrt{1+\|y\|^2}}$. This gives a new chart $(V,\alpha \circ \psi)$ with $(\alpha \circ \psi)(V) = \mathbb{R}^n$, which is $C^{\infty}$-compatible with $(V,\psi)$, as you can verify yourself.
Therefore $(V,\alpha \circ \psi)$ is $C^{\infty}$-compatible with $(U,\phi)$ and $(\alpha\circ\psi)(V)= \mathbb{R}^n$.
|
H: Show $\sum_{k=0}^n\frac{c_k}{(n-k)!}=1$ where $\sum_{n=0}^{\infty}c_nx^n=\frac{e^{-x}}{1-x}$
Show $$\sum_{k=0}^n\frac{c_k}{(n-k)!}=1$$ for each $n\geq 0$, where $$\sum_{n=0}^{\infty}c_nx^n=\frac{e^{-x}}{1-x}.$$
Since $$e^{-x}=\sum_{n=0}^{\infty}\frac{(-x)^n}{n!}$$ and $$\frac{1}{1-x}=\sum_{n=0}^{\infty}x^n$$ on $[-1,1)$, so we have, by multiplying directly, $$e^{-x}\cdot\frac{1}{1-x}=\left(\sum_{n=0}^{\infty}\frac{(-x)^n}{n!}\right)\left(\sum_{n=0}^{\infty}x^n\right)=\sum_{n=0}^{\infty}\left(\sum_{k=0}^{n}\frac{(-1)^k}{k!}\right)x^n$$ hence $$c_k=\sum_{i=0}^{k}\frac{(-1)^i}{i!}.$$
Now plugging it into the formula gives us $$\sum_{k=0}^n\frac{c_k}{(n-k)!}=\sum_{k=0}^n\frac{1}{(n-k)!}\sum_{i=0}^{k}\frac{(-1)^i}{i!},$$
but I got no useful result playing with induction, index manipulation, and spreading out all the terms, etc.
So I set $f(x)=\frac{e^{-x}}{1-x}$ so that $$c_k = \frac{f^{(k)}(0)}{k!}$$ and $$\sum_{k=0}^n\frac{c_k}{(n-k)!}=\sum_{k=0}^n\frac{f^{(k)}(0)}{k!(n-k)!}=\sum_{k=0}^n{n\choose k}\frac{f^{(k)}(0)}{n!}$$ which seems promising, because binomials usually take us to something like $(1+x)^n$ and this easily goes to $1$ if we set $x=0$ or something.
But we don't have power terms here, so I got stuck. I feel this is the right approach, but I'm not sure. Any idea or help would be appreciated.
AI: It follows from
$$\sum_{n=0}^{\infty}c_nx^n=\frac{e^{-x}}{1-x}$$ that
$$e^x \sum_{n=0}^{\infty}c_nx^n=\frac{1}{1-x}=\sum_{n=0}^\infty x^n.$$
Now compare the coefficients of $x^n$ on both sides: by the Cauchy product, the coefficient on the left is $\sum_{k=0}^n\frac{c_k}{(n-k)!}$, while on the right it is $1$, which is exactly the claim.
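A quick numeric confirmation, using the formula $c_k=\sum_{i=0}^{k}\frac{(-1)^i}{i!}$ derived in the question (standard library only):

from fractions import Fraction
from math import factorial

def c(k):
    # c_k = sum_{i=0}^{k} (-1)^i / i!, the coefficients of e^{-x}/(1-x)
    return sum(Fraction((-1)**i, factorial(i)) for i in range(k + 1))

for n in range(10):
    print(n, sum(c(k) / factorial(n - k) for k in range(n + 1)))  # always 1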
|
H: Proving That $\sum^{n}_{k=0} \bigl(\frac{4}{5}\bigr)^k < 5$
Using induction, prove that $$\sum_{k=0}^n \biggl(\frac 4 5 \biggr)^k = 1+\frac{4}{5}+\bigg(\frac{4}{5}\bigg)^2+\bigg(\frac{4}{5}\bigg)^3+\cdots +\bigg(\frac{4}{5}\bigg)^n<5$$ for all natural numbers $n.$
What I have tried is as follows.
Consider the statement $$P(n):1+\frac{4}{5}+\bigg(\frac{4}{5}\bigg)^2+\cdots +\bigg(\frac{4}{5}\bigg)^n<5.$$
For $n=1$, we have that $\displaystyle P(1):1+\frac{4}{5}<5$ is true.
For $n=k$, we assume that $$\displaystyle P(k):1+\frac{4}{5}+\bigg(\frac{4}{5}\bigg)^2+\cdots +\bigg(\frac{4}{5}\bigg)^k<5.$$
$\displaystyle P(k+1):1+\frac{4}{5}+\bigg(\frac{4}{5}\bigg)^2+\cdots +\bigg(\frac{4}{5}\bigg)^k+\bigg(\frac{4}{5}\bigg)^{k+1}<5+\bigg(\frac{4}{5}\bigg)^{k+1}$
How can I prove that the sum on the left is $< 5?$ Help me please. Thanks.
AI: Assume
$$
1+\frac{4}{5}+\left(\frac{4}{5}\right)^2+\cdots +\left(\frac{4}{5}\right)^k < 5
$$
for some positive integer $k$.$\;$Then
\begin{align*}
&
\frac{4}{5}
{\,\cdot}
\left(
1+\frac{4}{5}+\left(\frac{4}{5}\right)^2+\cdots +\left(\frac{4}{5}\right)^k
\right) < \frac{4}{5}{\,\cdot\,}5
\\[4pt]
\implies\;&
\frac{4}{5}+\left(\frac{4}{5}\right)^2+\left(\frac{4}{5}\right)^3+\cdots +\left(\frac{4}{5}\right)^{k+1} < 4
\\[4pt]
\implies\;&
1+\frac{4}{5}+\left(\frac{4}{5}\right)^2+\cdots +\left(\frac{4}{5}\right)^{k+1} < 5
\end{align*}
which completes the induction.
|
H: Convergence of Glaisher-Kinkelin Constant Limit Definitions
The Glaisher-Kinkelin constant $A$ is given by the limits
$$\begin{align}
A&=\lim_{n\rightarrow\infty}\frac{H(n)}{n^{n^2/2+n/2+1/12}e^{-n^2/4}}\\
&=\lim_{n\rightarrow\infty}\frac{(2\pi)^{n/2}n^{n^2-1/12}e^{-3n^2/4+1/12}}{G(n+1)}
\end{align}$$
where $H(z)$ is the hyperfactorial function and $G(z)$ is the Barnes G-Function. How would you prove these limits converge?
AI: You should have
$$ A = \lim_{n \rightarrow \infty} \frac{(2\pi)^{n/2} n^{n^2\mathbf{\underline{/2}}-1/12} \mathrm{e}^{-3n^2/4+1/12}}{G(n+1)} \text{.} $$
I would write your first version with $K(n+1)$ in the numerator, where $K$ is the K-function. That the two limits converge together or diverge together follows from the identity
$$ K(n) = \frac{(\Gamma(n))^{n-1}}{G(n)} \text{.} $$
(Use Stirling's approximation to replace the $\Gamma$ function with the powers of $n$ and $\mathrm{e}$ appearing in the fractions. Note that the denominator in your first limit grows rapidly enough that the error term in Stirling's approximation cannot be large enough to alter the limit.) So if you can show either converges you show both converge.
Then show the version with the Barnes $G$-function is positive and monotonically decreasing, so the limit exists. (Most of the numerator cancels with the definition of the Barnes $G$-function. Positivity is easy. Monotonicity comes from analysis of the derivative with respect to $n$, which is
$$ \frac{\mathrm{e}^{-3 n^2/4 + 1/12}\, n^{n^2/2-13/12} \left(3\cdot 2^{n/2+1}\, n\, \pi^{n/2} \left(2 n \log (n)-2 n \psi^{(0)}(n+1)+1\right)-(2 \pi)^{n/2}\right)}{12\, G(n+1)} \text{,} $$
where $\psi^{(0)}$ is the polygamma function of order $0$, also known as the digamma function. To get monotonically decreasing, we only have to show that the expression in the large parentheses is (eventually) negative. Show that the derivative of the expression in parentheses has one zero, near $n = 2$, and compare the signs of this derivative to show that the critical point is a maximum. Then observe that the parenthesized expression is negative, $< -1/8$, at that maximum and you are done.)
|
H: Prove there are infinitely many positive integers which cannot be represented as a sum of four non-zero squares.
Prove there are infinitely many positive integers which cannot be represented as a sum of four non-zero squares. Every positive integer can be written as the sum of four squares. But not all necessarily non-zero. Any hints on this?
AI: Assume there are $4$ such non-zero squares that add up to $2^{2n+1}$ for any $n \ge 1$, i.e., you have
$$2^{2n+1} = a^2 + b^2 + c^2 + d^2 \tag{1}\label{eq1A}$$
However, note all perfect squares are congruent to $0$, $1$ or $4$ modulo $8$. Since you just asked for a hint, the rest of the answer is in the spoiler below.
Any positive integer of the form $2^{k}$ where $k \ge 3$, such as where $k = 2n + 1$ for $n \ge 1$, is congruent to $0$ modulo $8$ and can only be the sum of $4$ squares if they are all even (since all $4$ odd gives a congruence of $4$ modulo $8$, $3$ odd gives $3$ or $7$, $2$ odd gives $2$ or $6$, and just $1$ odd gives $1$ or $5$). Thus, you have $a = 2a_1$, $b = 2b_1$, $c = 2c_1$ and $d = 2d_1$. Substituting this into \eqref{eq1A} and dividing both sides by $4$ gives
$$2^{2(n-1) + 1} = 2^{2n - 1} = a_1^2 + b_1^2 + c_1^2 + d_1^2 \tag{2}\label{eq2A}$$
This is an equation of the same form, so as long as the power of $2$ is $\ge 3$, you can repeat the procedure. Repeating this $n$ times gives
$$2^{2(n-n) + 1} = 2^{1} = a_n^2 + b_n^2 + c_n^2 + d_n^2 \tag{3}\label{eq3A}$$
This is not possible since the RHS is at least $4$ but the LHS is just $2$. This means at least one (actually, $2$) of the squares in \eqref{eq1A} must have been $0$. Since \eqref{eq1A} only required that $n \ge 1$, and \eqref{eq3A} shows it works for $n = 0$ also, you have an infinite number of positive integers of the form $2^{2n+1}$ which cannot be represented as the sum of $4$ non-zero squares.
Note you can also use induction to prove $2^{2n+1} \; \forall \; n \ge 0$ cannot be represented by a sum of $4$ non-zero squares by using \eqref{eq3A} as the base case, and then using the modulo $8$ congruences to show you can reduce the $n = k + 1$ case to the $n = k$ case in the inductive step.
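A brute-force confirmation for the first few cases (standard library only):

from itertools import combinations_with_replacement
from math import isqrt

def is_sum_of_four_nonzero_squares(m):
    squares = [k*k for k in range(1, isqrt(m) + 1)]
    return any(sum(c) == m for c in combinations_with_replacement(squares, 4))

for n in range(5):
    m = 2**(2*n + 1)
    print(m, is_sum_of_four_nonzero_squares(m))   # False for 2, 8, 32, 128, 512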
|
H: How to distinguish between nonlinear and linear
I have two questions.
First: how do I distinguish between nonlinear and linear?
The difference between linear and nonlinear, as I understand it, is whether the result is proportional to the variables
(that is, whether we have a linear equation).
Second: is it nonlinear to use an index as a variable in a MIP model?
For example:
$$sp_i \cdot y_{i,f} \le c_{f,t}\quad\forall i,f\quad\text{and}\quad t=S_i,\cdots,C_i,$$
where $S_i, C_i$ are variables and $t$ is an index.
If so, can you tell me why?
Thanks for reading.
AI: A linear form in the variables $x_i$ is $\sum a_i x_i$ where $a_i$ are arbitrary constants. If you have products or powers of $x_i$ it is non-linear.
Some non-linear functions, like $|a_i x_i + b_i|$, for example, you can express in linear form using tricks.
Index in a MIP does not necessarily make it nonlinear. For example, the sum
$$
\sum_{k=1}^5 kx_k = x_1 + 2x_2 + \ldots + 5x_5
$$
is a valid linear combination of the variables, and so is the constraint
$$
\sum_{k=1}^5 kx_k \le \sum_{i=3}^6 iy_i + \sum_{i=1}^{10}i^2,
$$
since $\sum_{i=1}^{10}i^2$ is constant in the variables $x_i,y_i$ (but non-linear in $i$, which does not matter for the problem since we are only concerned with linearity in variables).
But $\sum x_k^k$ would be nonlinear when $k \not \in \{0,1\}$.
|
H: Can any Hilbert space be expressed as countable union of unit balls?
I was going through functional analysis text by J.Conway, and have encountered with next claim (2.4.6) :
Let $T\in \mathcal{B}_0(\mathcal{H},\mathcal{K})$ for two Hilbert spaces $\mathcal{H},\mathcal{K}$. Since $\text{cl}[T(\text{ball } \mathcal{H})]$ is compact, it is separable. Therefore $\text{cl}(\text{ran} T)$ is separable subspace of $\mathcal{K}$. Here, ball $\mathcal{H}$ is closed unit ball in $\mathcal{H}$.
However, I am not convinced since this presumes that any Hilbert space can be expressed as a countable union of unit balls, which is not very trivial for me.
Is this claim well-known fact?
If so, how do you prove it?
Thank you in advance.
AI: It is not being claimed that $\mathcal{H}$ is a countable union of unit balls (that is not true if $\mathcal{H}$ is not separable!). Rather, what is being used here is that $\mathcal{H}$ is the union of the balls of radius $n$ around $0$. Since $T$ is linear, the images of those balls will just be scaled versions of the image of the unit ball, and therefore they will also be separable. As $\operatorname{ran} T$ is then a countable union of separable sets, it is separable, and hence so is $\operatorname{cl}(\operatorname{ran} T)$.
|
H: A Question About The Tower Of Hanoi
I've been reading through an inductive proof on why the minimum number of moves in a Tower of Hanoi with $n$ disks is $2^n -1$. The proof is based on the fact that the minimum number of moves for $k+1$ disks is $2T(k) + 1$: $T(k+1) =2T(k)+1$.
I understand that this is because you need to move the top $k$ disks to the center post, which can be done in a minimum of $T(k)$ moves. Then, you need to move the bottom disk to the final post, which can be done in $1$ move. Finally, you need to move the top $k$ disks to the final post, which can be done in a minimum of $T(k)$ moves.
But what I don't understand is why this method of moving disks is the quickest: why isn't there a method of moving disks that is quicker than this, that requires less moves? I haven't been able to devise a method that is quicker than the above, but that doesn't show that the method above is the quickest either!
So my question is, why is this method of moving disks the quickest? How can it be proved?
Thanks in advance.
AI: Here is a reply to the OP's comment:
For $1$ disk, the quickest way is by moving the disk to the rightmost pole, which takes $1$ move.
For $2$ disks, we have one disk on top, which we already figured out how we can move the quickest. We first move the disk on top, then move the disk on the bottom to the final position, then move the disk on top to the final position.
For $3$ disks, consider the top $2$ disks as one object, where we know the quickest way to move the $2$ disks. Then we have to move the bottom disk and the top two disks, which we can treat as two separate objects, and we proceed in the same way as we did with $2$ disks.
In general, given $n$ disks, the top $n-1$ disks are one object which we cannot move any faster. Then by adding another disk on the bottom, we can extend the quickest way of moving to $n$ disks. In other words, we can successively reduce a problem involving $n$ disks to a problem that only includes $2$ objects.
What justifies all of this is that we know the 'fastest method' you mentioned works for $n = 1$. Induction proves that given the base case $n=1$ holds, the next case holds, and repeating the inductive step establishes the claim for every $n$.
To see that no strategy can do better, consider an arbitrary solution for $n$ disks. At some point the largest disk moves for the first time; just before that move, the other $n-1$ disks must all be stacked on the single spare peg, which takes at least $T(n-1)$ moves. After the largest disk moves for the last time, those $n-1$ disks must be moved on top of it, which takes at least $T(n-1)$ further moves. Together with at least one move of the largest disk, every solution therefore uses at least $2T(n-1)+1$ moves, so the recursive method is indeed the quickest.
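Here is the recursive strategy made concrete (a small Python sketch); the length of the returned move list matches the claimed minimum $2^n-1$:

def hanoi(n, src="A", aux="B", dst="C", moves=None):
    # returns the list of moves used by the recursive strategy
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)   # park the top n-1 disks on the spare peg
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)   # restack the n-1 disks on top of it
    return moves

for n in range(1, 8):
    print(n, len(hanoi(n)), 2**n - 1)        # the counts agree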
|