H: Probability of the score of two rolls
I have a probability problem and this is the request:
What is the probability that the score of the first roll is higher than the score of the second roll? The rolls are fair and independent (one roll at a time, I think).
I have solved this problem with this method: $P(X>Y)=P(X-Y>0)= \frac{15}{36}$ with the help of a grid with all the possible $X-Y$ scores.
Is this a good way of resolution? If not, help me with a good method.
Thank you
AI: There is no problem with your solution; enumeration is a terrific technique and sometimes it's all there is. If you want a method that can handle a large number of cases better:
The probability of a tie is $\frac 16$. If we aren't in a tie then $P(X>Y)=P(Y>X)$ by symmetry. Thus $$P(X>Y)=\frac 12\times \left( 1-\frac 16\right)=\frac 5{12}$$ just as you got. Note that you could use this method to easily handle the case of an $N$ sided die.
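Both the enumeration and the symmetry formula are easy to verify by brute force; here is a small Python sketch (the die size is a parameter, so it also covers the $N$-sided case):
from fractions import Fraction

def p_first_beats_second(n_sides):
    # Enumerate all ordered pairs of rolls and count those where
    # the first roll is strictly higher than the second.
    wins = sum(1 for x in range(1, n_sides + 1)
                 for y in range(1, n_sides + 1) if x > y)
    return Fraction(wins, n_sides ** 2)

for n in (6, 20):  # enumeration agrees with (1 - 1/N)/2
    assert p_first_beats_second(n) == Fraction(1, 2) * (1 - Fraction(1, n))
print(p_first_beats_second(6))  # 5/12, i.e. 15/36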
|
H: Is it a convergent series?
If $f(x)$ is a continuous function, does
$$\lim\limits_{n\to\infty}\frac 1n\times \left[f\left(\frac{1}{n}\right) + f\left(\frac{2}{n}\right)+\cdots+f\left(\frac{n}{n}\right)\right]$$
converge?
AI: You should know that if $f$ is integrable on $[a,b]$, then
$$\lim_{n\to+\infty}\frac{b-a}{n}\sum_{k=1}^nf\left(a+k\frac{b-a}{n}\right)=\int_a^bf(x)\,dx.$$
In your case, $f$ is continuous on $[0,1]$, hence integrable, and
$$\lim_{n\to+\infty}\frac 1n\sum_{k=1}^nf\left(\frac kn\right)=\lim_{n\to+\infty}\frac{1-0}{n}\sum_{k=1}^nf\left(0+k\frac{1-0}{n}\right)=\int_0^1f(x)\,dx.$$
So, the sum is convergent.
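As a numerical sanity check, here is a small Python sketch of these Riemann sums for the sample choice $f(x)=x^2$, whose integral over $[0,1]$ is $1/3$ (the function and the values of $n$ are just illustrative):
def riemann_sum(f, n):
    # Right-endpoint Riemann sum (1/n) * sum_{k=1}^n f(k/n) on [0, 1].
    return sum(f(k / n) for k in range(1, n + 1)) / n

for n in (10, 100, 10000):
    print(n, riemann_sum(lambda x: x * x, n))
# For n = 10000 this prints 0.33338..., close to the integral 1/3.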
|
H: Find any values of $k$ for which $f$ is continuous
Sketch this function for $k = 1$. Is it continuous? Find any values of $k$ for which $f$ is continuous.
$$f(x)=
\begin{cases}
kx+3, & \text{$x≤1$} \\
(kx)^2-5, & \text{$x>1$}
\end{cases}$$
I would imagine that for the left side I would get $4$ (plugging $x=1$ into $kx+3$ with $k=1$) and for the right side I would get $-4$ (plugging $x=1$ into $(kx)^2-5$: $1$ squared minus $5$ is $-4$). I would guess these limits don't match, as one is positive and the other is negative. As they don't match, the two-sided limit doesn't exist, so this function is discontinuous at $x=1$, and the discontinuity is a jump discontinuity, as the $4$ and $-4$ don't match.
Am I on the right path here? If not, where did I go wrong?
Much appreciated in advance! :)
UPDATE - Thank you all for the replies! Have a lovely week. :)
AI: You are totally right :). It seems that you have understood the concept very well, but in the future, when you are dealing with piecewise functions, instead of just substituting the values of $x$, you should calculate left-hand limits and right-hand limits. Though for this question there wasn't any need to do that.
Let me give you a simple example:$$f(x)=\begin{cases} \lfloor x \rfloor & x \leq 0\\x & x>0\end{cases}$$ Now if you just substitute $x=0$ you may think that it is continuous, but it is not: the left-hand limit is $-1$ while $f(0)=0$. The condition for continuity is left-hand limit $=f(a)=$ right-hand limit.
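For the other part of the exercise (finding all values of $k$ for which $f$ is continuous), equate the two one-sided limits at $x=1$, i.e. $k+3=k^2-5$; a small sympy sketch of that step (the symbol name is ours):
import sympy as sp

k = sp.symbols('k', real=True)
# Continuity at x = 1 requires k*1 + 3 = (k*1)**2 - 5:
print(sp.solve(sp.Eq(k + 3, k**2 - 5), k))
# [1/2 - sqrt(33)/2, 1/2 + sqrt(33)/2]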
|
H: Prove you can weigh any number between 1 and $\frac{3^{n+1} -1}{2}$ using $n+1$ weights - Discrete
You have $n+1$ weights, with each weighing $1,3,9, \dots, 3^n$ (one of each)
Prove that you can weigh with a traditional scale ( The one with two bowls) each integer weight between $$1 ~~~~~\text{And }~~~~~\frac{3^{n+1} -1}{2}$$
My go:
This was very confusing to me because I did not understand well how you can possibly weigh, let's say $4$ (Maybe it's just putting $1+3$ ?)
I tried proving using induction:
If $n=0$ then we have $1$ weight, weighing $1$, and so we can weigh each integer between $1$ and $\frac{3^1 -1}{2} = 1$, which is obvious.
Assume that with $k+1$ weights we can weigh each integer between $1$ and $\frac{3^{k+1} -1}{2}$.
Now we prove it for $n = k+1$, that is, for $1$ to $\frac{3^{k+2}-1}{2}$, using the fact that we can weigh everything up to $\frac{3^{k+1} -1}{2}$.
However I am stuck from here, I am clueless on how to use the fact we now have a weight of $1,3,9,\dots,3^k,3^{k+1}$
This seems like a known question, but I could not find anything on the web!
Thank you!
AI: You need to prove each such integer is of the form $\sum_{j=0}^na_j3^j$ with $a_j\in\{-1,\,0,\,1\}$. Equivalently, adding $\sum_j3^j=\frac{3^{n+1}-1}{2}$ to the integer, we wish to prove every integer from $\frac{3^{n+1}+1}{2}$ to $3^{n+1}-1$ is of the form $\sum_{j=0}^na_j3^j$ with $a_j\in\{0,\,1,\,2\}$. But that's trivial; just write it in base $3$.
For an inductive variant, note the case $n=0$ is trivial, and to go from $n=k$ to $n=k+1$ write each integer from $0$ to $\frac{3^{k+2}-1}{2}$ as $3m+r$ with $-1\le r\le1,\,0\le m\le\frac{3^{k+1}-1}{2}$. By the inductive hypothesis, $m$ is of the form $\sum_{j=0}^ka_j3^j$, so $3m+r$ is of the same form but with the upper limit changed to $k+1$.
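Here is a small Python sketch of the base-$3$ trick from the first paragraph (the function name and the digit convention, with $-1$ meaning the weight goes on the same pan as the object, are ours):
def balanced_ternary(n, num_digits):
    # Digits a_j in {-1, 0, 1} with n = sum a_j * 3**j: add
    # (3**num_digits - 1)//2, write the result in ordinary base 3,
    # then subtract 1 from every digit.
    shifted = n + (3 ** num_digits - 1) // 2
    digits = []
    for _ in range(num_digits):
        digits.append(shifted % 3 - 1)
        shifted //= 3
    return digits  # least-significant digit first

print(balanced_ternary(4, 3))  # [1, 1, 0], i.e. 4 = 1 + 3
assert all(sum(d * 3**j for j, d in enumerate(balanced_ternary(n, 3))) == n
           for n in range(1, 14))  # every n from 1 to (3**3 - 1)/2 = 13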
|
H: Showing that $\|(x,y)\|_0=\sqrt{\|x\|^2+\|y\|^2}$ is norm if $\|\cdot\|$ is a norm.
The 3 properties are really easy to show but I cannot show that $\|(x,y)\|_0=\sqrt{\|x\|^2+\|y\|^2}$ satisfies the triangle inequality if $\|\cdot\|$ satisfies it. I tried to use the Cauchy-Schwarz inequality, etc.
My work:
Let $(x,y),(x',y')\in X\times Y$, $X,Y$ are both normed spaces.
$$\|(x,y)+(x',y')\|_0=\|(x+x',y+y')\|_0=\sqrt{\|x+x'\|^2+\|y+y'\|^2}\le\sqrt{(\|x\|+\|x'\|)^2+(\|y\|+\|y'\|)^2}$$
And I have shown that $\max\{\|x\|+\|x'\|,\|y\|+\|y'\|\}\le \max\{\|x\|,\|y\|\}+\max\{\|x'\|,\|y'\|\}$
But here I am stuck.
AI: Let $f: \mathbb{R}^n \to \mathbb{R}$ be a norm having the property that for any $v, w \in \mathbb{R}^n$ where the absolute value of each entry of $v$ is at least the absolute value of the corresponding entry of $w$, we have $f(v) \geq f(w)$. Let us refer to this property as monotonicity. Let us also define $g: V = U_1 \times U_2 \times \cdots \times U_n \to \mathbb{R}^n$ as the function that returns a vector of $n$ elements where the $i$th element of the output is the norm of the $i$th input. Here all the $U_i$ are just arbitrary normed spaces. Then $f \circ g$ is a norm on $V$, as it satisfies
Non-Negativity: The output of $f$ is always non-negative, so the output of $f \circ g$ is also non-negative.
Positivity: Note that $f(u) = 0 \iff u = 0 \in \mathbb{R}^n$ and furthermore, $g(v) = 0$ is the case if and only if the $i$th element of $v$ was the zero element of $U_i$, for all $1 \leq i \leq n$. Hence, $$f(g(v)) = 0 \iff g(v) = 0 \iff v = 0 \in V$$
Scalability: Note that $f(g(av)) = f(|a| \cdot g(v))$, since $g$ takes the norm of each component individually and each $U_i$-norm is absolutely homogeneous. We can then take $|a|$ out of $f$ because $f$ too is a norm.
Triangle Inequality: Note that $g(u + v) \leq g(u) + g(v)$ entrywise. But the entries of $g(u + v)$ and of $g(u) + g(v)$ are all non-negative real numbers, and hence the absolute value of every entry of $g(u + v)$ is at most the absolute value of the respective entry of $g(u) + g(v)$. Since $f$ is a monotonic norm by assumption, it follows that
$$f(g(u + v)) \leq f(g(u) + g(v)) \leq f(g(u)) + f(g(v))$$
as $f$ itself satisfies the triangle inequality since it is a norm.
So for your question, we can take $V = X \times Y$, take $f$ to be a function from $\mathbb{R}^2 \to \mathbb{R}$ with
$$f(x, y) = \sqrt{x^2 + y^2}$$
and $g$ to be the function from $V \to \mathbb{R}^2$ defined by
$$g(x, y) = (\|x\|, \|y\|)$$
and we will have our result.
Remark: As Daniel Fischer pointed out, not all norms on $\mathbb{R}^n$ are monotonic! See the comments for an example. However, note it is indeed the case that $f$ in your example is monotonic (it is just the $l_2$ norm).
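For what it's worth, a numerical spot-check of the triangle inequality for this construction, taking each $U_i$-norm to be the Euclidean norm on $\mathbb{R}^4$ and $\mathbb{R}^3$ (an illustrative sketch only):
import numpy as np

rng = np.random.default_rng(0)

def product_norm(x, y):
    # ||(x, y)||_0 = sqrt(||x||^2 + ||y||^2) = f(g(x, y)) with f the l2 norm.
    return np.hypot(np.linalg.norm(x), np.linalg.norm(y))

for _ in range(1000):
    x, xp = rng.normal(size=4), rng.normal(size=4)
    y, yp = rng.normal(size=3), rng.normal(size=3)
    assert (product_norm(x + xp, y + yp)
            <= product_norm(x, y) + product_norm(xp, yp) + 1e-12)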
|
H: Totally bounded set in a metric space $\implies$ bounded
I apologize if the question may be trivial, but it is a fact that my textbook does not even mention and I, studying as a self-taught, do not have so many certainties.
I believe the totally boundedness $ \implies $ boundedness implication is true in any metric space. I think I managed to prove it this way:
$A$ totally bounded $\implies$ there is a finite $\epsilon$-grid for every $\epsilon > 0$. Choose one: $E = \{g_1, g_2, ..., g_n\} \subset A \implies$ the distance between any element $a \in A$ and a $g_i \in E$ is $d(a, g_i) \leq d(a, g_k) + d(g_k, g_i) < \epsilon + \max_{j = 1, ..., n} d(g_k, g_j) \implies A$ is bounded because it is entirely contained in the ball $B(g_i, \epsilon + \max_{j = 1, ..., n} d(g_k, g_j))$.
Did I do something wrong?
AI: Your proof looks fine to me. You should perhaps say that $a$ lies within $\varepsilon$ distance of $g_k$ so that your estimate holds.
|
H: Matrix of a representation from character theory
We have learnt in class that a representation (of a finite group $G$) is completely determined by its characters on the conjugacy classes. First of all, I know that the characters can be used in a lot of ways:
(1) By the orthogonality relation for characters, we can directly see if a representation is irreducible just by looking at the characters.
(2) If we have an arbitrary representation and the complete character table of all the irr.reps, we can directly derive the decomposition of the given rep.
But I have the following question: is it possible to derive the matrices of a given representation in a given basis just by looking at the characters? I mean, if a representation is "completely determined by its characters", then it should be possible.
AI: I do not understand your question very well: how is the representation "given"?
I will start by saying that there is no easy method to recover a representation from its character: in fact, for a lot of groups the character table can be computed using orthogonality relations and other character properties, but it is much harder to identify the representations corresponding to some rows.
I'll assume we are in characteristic zero. What do you mean by "a given representation"? If it is given explicitly then your question does not make sense, so I don't think it means that. If its character is given, then you can use the inner product to compute its decomposition as a linear combination of irreducible characters:
$$ \psi = \sum_{\chi \in {\rm Irr}(G)} \langle \psi, \chi \rangle \chi$$
and then, if you know the matrices corresponding to each irreducible character (which you often don't) you can easily compute the matrix corresponding to $\psi$, as the character decomposition corresponds to the decomposition into a direct sum of simple representations.
|
H: What are the steps to factor $x^2 - 1$ into $(x+1)(x-1)$?
Does $(x+1)(x-1) = x^2+1x-1x-1$? If so where are the $+1x$ and the $-1x$ when it is being factored from $x^2-1$ into $(x+1)(x-1)$?
What exactly are we dividing $x^2-1$ by to get $(x+1)(x-1)$ and how did you know what to divide it by?
AI: Method 0:
$+1x - 1x = 0$
So $(x+1)(x-1) = x^2 + 1x - 1x -1 =$
$x^2 +(1x-1x) -1 = $
$x^2 + 0 -1 =$
$x^2 -1 $.
Method 1:
$x^2 - 1 = $
$x^2 + x - x - 1 = $
$(x^2 +x) - (x+1) = $
$(x\cdot x + x\cdot 1) + (-1)\cdot(x+1) =$
$x\cdot(x + 1) + (-1)\cdot (x+1) =$
$\color{blue}x\cdot\color{red}{(x + 1)} + \color{blue}{(-1)}\color{red}{(x + 1)} = $
$(\color{blue}x + \color{blue}{(-1)})\color{red}{(x+1)} = $
$(\color{blue}x-\color{blue}1)\color{red}{(x+1)}$
Method 2:
$(x^2 - 1)\div (x+1) = ?????$
We need to find a first term $a$ so that when we multiply $a$ times $x+1$ and get $ax+a$, we have $ax = x^2$. What can $a$ be? Well, $ax=x^2$ and so (if we assume $x$ isn't always $0$) $a= x$.
So $x(x+1) = x^2 + x$.
But $x^2 -1 \ne x^2 + x$ we must find a "remainder".
$(x^2 - 1)- (x^2 + x) = (x^2-x^2) + (-1-x) = -x - 1$.
So $x^2 - 1= (x^2 + x) - x-1 = x(x+1) - x-1$.
And $(x^2 - 1)\div (x+1) = x + \frac {-x-1}{x+1}$
Now we must divide $x+1$ into $-x-1$.
We must find a term $b$ so that when we multiply $b$ by $x+1$ and get $bx + b$, the first term satisfies $bx = -x$. What can that $b$ be? Clearly it is $b=-1$.
So $-1(x+1) = -x-1$.
And $-x-1$ DOES equal $-1(x+1)$ so we have no remainder.
So $(x^2 -1) = x(x+1) + (-1)(x+1)$.
And $(x^2 - 1)\div (x+1) = x + (-1) = x-1$
So $(x^2 - 1) \div (x+1) = x-1$, so that means $(x^2-1)= (x-1)(x+1)$.
Method 3:
$(x^2 -1) = (x+a) (x+b)$
$= x^2 +ax +bx + ab =$
$x^2 + (a+b)x + ab$
And that is supposed to be $x^2 -1 = x^2 + 0*x + (-1)$.
So we need $a,b$ so that $a*b = -1$ and $a+b = 0$.
Okay so $ab = -1$ so $a = -\frac 1b$
And $-\frac 1b + b = 0$ so
$b = \frac 1b$ so
$b^2 = 1$.
So $b = \pm 1$.
And $a +b = a\pm 1 = 0$ so $a =\mp 1$.
So one of them is $1$ and the other is $-1$.
So $x^2 -1 = (x+a)(x+b) = (x+1)(x-1)$.
Method 4:
If $ax^2 + bx + c = 0$ has two solutions $x= m$ and $x = n$,
then $ax^2 + bx + c = a(x-m)(x-n)$.
So what are the two solutions to $x^2 -1 = 0$
They are:
$x^2 - 1 =0$ so
$x^2 = 1$ so
$x = \pm 1$
So $m = 1$ and $n=-1$ are the two solutions and
so $x^2 -1 = (x- 1)(x-(-1)) = (x-1)(x+1)$.
........
But in ALL of these methods, $+1x -1x =0$ and the two terms "cancel out".
You can cancel them to $0$ or pop them out of nowhere from $0$.
The thing is, you just have to think of it.
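All of these methods can be checked mechanically; a tiny sympy sketch (Method 0 is expand, Method 2 is polynomial division):
import sympy as sp

x = sp.symbols('x')
print(sp.factor(x**2 - 1))          # (x - 1)*(x + 1)
print(sp.expand((x + 1)*(x - 1)))   # x**2 - 1: the +x and -x cancel
print(sp.div(x**2 - 1, x + 1))      # quotient x - 1, remainder 0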
|
H: Random variables and geometric series
Q: Consider the following random variable $Y$. It takes only values of the form $\frac 1{2^k}$ for positive integers $k$ and $P\left(Y= \frac 1{2^k}\right) = \frac 1{2^k}$ for each $k$. Find the expected value of this random variable.
I did find out that $a = \frac12$ in the geometric series but I am unable to find the proper result
AI: General expression for the expected value of a discrete random variable $X$: let $a_n$ be a value and let $p_n=P(X=a_n)$. Then $E(X)=\sum a_np_n$. In your case $E(Y)=\sum_{k=1}^\infty \frac 1{2^k}\cdot\frac 1{2^k}=\sum_{k=1}^\infty \frac{1}{4^{k}}=\frac{1/4}{1-1/4}=\frac{1}{3}$.
|
H: Is there any difference in result between quadratic programming VS linear programming?
Assume that we want to find an $x$ satisfying this system of inequalities:
$$ Ax \leq b$$
So we can either use Quadratic Programming:
$$J_{max}: x^TQx + c^Tx$$
$$Ax \leq b$$
$$x \geq 0$$
Or Linear Programming:
$$J_{max}: c^Tx$$
$$Ax \leq b$$
$$x \geq 0$$
Where $Q = A^TA$ and $c = A^Tb$
So what's the difference when I try to find the best $x$ value?
They both solve the problem. But they are different values.
Is QP more optimal?
Practical GNU Octave example:
Linear programming vs. quadratic programming.
Download the lmpc.m file and run it. Then add this code line
u = qp([], alp, -clp, [], [], [], [], [], alp, blp);
and run the function again to compare the two results.
I'm seeking the best choice of selecting QP VS LP if I want to find $U$ from this equation:
$$r = PHI*x + GAMMA*U$$
As you can see, for LP, the objective function is:
c = (GAMMA'*GAMMA)'*(r - PHI*x)
And the constraints are:
A = GAMMA'*GAMMA
b = GAMMA'*(r - PHI*x)
For QP, the objective function is:
H = GAMMA'*GAMMA
c = -((GAMMA'*GAMMA)'*(r - PHI*x)) % Must have negative sign, else unstable!
And the constratins are:
A = GAMMA'*GAMMA
b = GAMMA'*(r - PHI*x)
If we now compare the outputs between LP and QP, what can we say about that? LP gives a smoother result, but the output is like an impulse, while QP gives a more oscillating result.
AI: I am interpreting this as wanting to find a feasible solution to $Ax \le b, x \ge 0$. (If the non-negativity constraints are not needed, you might like to remove them.)
In that case, you just have to solve
$$\min 0 $$
subject to $$Ax \le b$$
$$x \ge 0$$
Note that every linear programming problem is actually a quadratic programming problem with $Q=0$.
You can also define your own objective value like what you did in the question.
When you say a solution is more optimal, we have to state in what sense do we mean a solution is better than another solution.
Notice that $\min f(x)$ is equivalent to $-\max (-f(x))$.
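As an illustrative sketch of that feasibility formulation in Python (the data A and b below are made up, not taken from your model; scipy's linprog is just one solver choice):
import numpy as np
from scipy.optimize import linprog

# min 0  subject to  A x <= b,  x >= 0, as in the answer above.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
res = linprog(c=np.zeros(2), A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print(res.status, res.x)  # status 0 means a feasible point was found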
|
H: On the existence of a holomorphic function
I have just encountered this exercise which has me stumped:
We are asked to prove that if $ f $ is holomorphic on the unit disk $ D $ and if $ f(z) \neq 0 $ on $ D $, that there is a holomorphic function $ g(z) $ on $ D $ such that $ f(z) = e^{g(z)} $. We are also asked to show that if $ D $ is replaced by an arbitrary connected open set, the result may not hold.
In fact, I have no idea where to start. Should I take a complex logarithm and show that it works, or is there a straightforward solution here? I see that $ f(z) \neq 0 $ makes a lot of sense, as an exponential can never be zero, but other than that I cannot show such a function exists and I also don't see why the unit disk is special as a domain. Any help would be appreciated.
AI: Let $h$ be a primitive of $\frac{f'}f$; it exists because $f'/f$ is holomorphic on $D$ ($f$ never vanishes there) and every holomorphic function on a disk has a primitive. Then $e^h/f$ is constant, since$$\left(\frac{e^h}f\right)'=\frac{fh'e^h-f'e^h}{f^2}=0.$$Take $w\in\Bbb C$ such that $e^w=f(0)$ and take $g=h-h(0)+w$. Then $\frac{e^g}f$ is constant and, since both $e^g$ and $f$ map $0$ to $f(0)$, they are identical.
In general, there is no such function. Take $f\colon\Bbb C^*\longrightarrow\Bbb C$ defined by $f(z)=z$. If there was some analytic function $g$ such that $e^g=f$, we would have $g'e^g=f'=1$. In other words, $g'(z)=z^{-1}$. But $z^{-1}$ has no primitive, since$$\oint_{|z|=1}\frac{\mathrm dz}z=2\pi i\ne0.$$
|
H: A sufficient condition for a Lebesgue point
Let $f\in L^1(\Bbb R^n)$ and let $x\in \Bbb R^n$. $x$ is said to be a Lebesgue point of $f$ if $\lim_{r\to 0} \frac{1}{m(B(x,r))} \int_{B(x,r)} |f(y)-f(x)|~dm(y)=0$ where $m$ is Lebesgue measure on $\Bbb R^n$. Clearly the condition $\lim_{r\to 0} \frac{1}{m(B(x,r))}\int_{B(x,r)}f(y)~dm(y)=f(x)$ is a weaker condition. Is there a counterexample of $f$ such that the latter holds but the former does not?
AI: Take any hyperplane $\pi$ passing through $x$. Then define $f(y) = 0$ for $y \notin B(x, 1)$, otherwise put $f(y) = f(x)+1$ on one side of $\pi$ and $f(y) = f(x)-1$ on the other. It is easy to see that this $f$ works.
|
H: What is the dimension and base of the following vectors' sum and intersection?
I have 2 vector subspaces of $\mathbb{R}^3$, namely $U = \operatorname{Span}(\begin{pmatrix} 2 \\ 5 \\ 9\end{pmatrix}, \begin{pmatrix} 0 \\ -1 \\ -3\end{pmatrix})$ and $W = \operatorname{Span}(\begin{pmatrix} -3 \\ 1 \\ 6\end{pmatrix}, \begin{pmatrix} 5 \\ 3 \\ 0\end{pmatrix})$. Now I want to find the dimension and base of $U + W$ and of $U \cap W$. This is what I have done so far:
Since the spanning vectors of $U$ and of $W$ are linearly independent, they have dimension $\dim(U) = \dim(W) = 2$. I can see that $U+W = \operatorname{Span}(\begin{pmatrix} -1 \\ 6 \\ 15\end{pmatrix}, \begin{pmatrix} 5 \\ 2 \\ -3 \end{pmatrix})$ and it is also linearly independent, so $\dim(U+W) = 2$.
This means that $\dim(U\cap W) = \dim(U) + \dim(W) - \dim(U+W) = 2 + 2 - 2 = 2$.
Now, going further, I want to find the base of $U\cap W$, so I construct the following matrix and reduce it:
$$\begin{pmatrix} 2 & 0 & 3 & -5 \\ 5 & -1 & -1 & -3 \\ 9 & -3 & -6 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \end{pmatrix}$$
So, this means that the two vector spaces are intersecting exactly when $$\lambda \begin{pmatrix} 2 \\ 5 \\ 9\end{pmatrix} + \lambda \begin{pmatrix} 0 \\ -1 \\ -3\end{pmatrix} = \lambda \begin{pmatrix} -3 \\ 1 \\ 6\end{pmatrix} + \lambda \begin{pmatrix} 5 \\ 3 \\ 0 \end{pmatrix}$$
So, a base of $U\cap W$ would be $\begin{pmatrix} 2 \\ 4 \\ 6\end{pmatrix}$
Now, $U \cap W $ has dimension 1. But earlier I found that it should be 2. Where is the error? Do I make some error when I calculate $U + W$? Can someone point me in the right direction?
AI: You only formed the sum of the chosen basis elements of $U$ and $W$ instead of all vectors therein.
Your calculation for the intersection is correct, hence $\dim(U+W)=3$, i.e. $U+W=\Bbb R^3$, that is, every vector can be expressed as a sum $u+w$ with $u\in U$ and $w\in W$.
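For a mechanical check, the intersection can be read off from the nullspace of the matrix whose columns are the basis vectors of $U$ and the negated basis vectors of $W$ (a sympy sketch):
import sympy as sp

# Columns: u1, u2, -w1, -w2, as in the matrix you reduced.
M = sp.Matrix([[2, 0, 3, -5],
               [5, -1, -1, -3],
               [9, -3, -6, 0]])
coeffs = M.nullspace()[0]               # (1, 1, 1, 1): a = b = c = d
u1, u2 = sp.Matrix([2, 5, 9]), sp.Matrix([0, -1, -3])
print((coeffs[0]*u1 + coeffs[1]*u2).T)  # [2, 4, 6]: a base of the intersection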
|
H: Number theory question involving primes
Prove that, if $a, b$ are prime numbers with $a > b$, each having at least two digits,
then $a^4 - b^4$ is divisible by $240$. Also prove that $240$ is the gcd of all the numbers
which arise in this way.
Looking at the prime factorisation $240=2^4\cdot 3\cdot 5$, I know I need to prove that the given difference is divisible by each of these.
How do I proceed from here? I have no idea.
Thanks.
AI: $240 = 2^4 \cdot 3 \cdot 5$. Any prime $> 5$ is coprime to $2, 3, 5$. The fourth powers of odd numbers mod $2^4$ are all $1$, the fourth powers of $1$ and $2$ mod $3$ are $1$, and the fourth powers of $1,2,3,4$ mod $5$ are all $1$. So the fourth power of any number coprime to $240$ mod $240$ is $1$.
The first three two-digit primes are $11, 13, 17$.
What is the gcd of $13^4-11^4$ and $17^4-11^4$?
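Both claims are quick to confirm computationally (a Python sketch; the prime bound of 1000 is arbitrary):
from math import gcd
from sympy import primerange

# Every prime p > 5 satisfies p**4 % 240 == 1, so 240 | a**4 - b**4.
assert all(pow(p, 4, 240) == 1 for p in primerange(7, 1000))
print(gcd(13**4 - 11**4, 17**4 - 11**4))  # 240, answering the hint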
|
H: Proof of $\sup|f(x)|=\left \| f \right \|_\infty $
Let $(\mathbb{R}^n,L_n,\lambda _n)$ be a measure space and let $f:\mathbb{R}^n\rightarrow \mathbb{R}$ be continuous and bounded; then $\sup\lvert f(x)\rvert = \lVert f \rVert_\infty$.
Proof: we have $\lvert f(x)\rvert \leq \lVert f \rVert_\infty $ everywhere because the set where $f(x)> \lVert f \rVert_\infty $ is open (by continuity of $f$) and negligible and so it is empty. Therefore $\sup\lvert f\rvert \leq \lVert f \rVert_\infty $. Conversely, $\sup\lvert f\rvert$ is an essential bound for $|f|$, so $\lVert f \rVert_\infty \leq \sup\lvert f\vert$. $\square$
Only one thing I didn't understand, which is why the set where $f(x)> \lVert f \rVert_\infty $ is negligible??
($\lVert f \rVert_\infty = \inf\{ c : \lvert f(x)\rvert \leq c\; \text{almost everywhere}\}$)
AI: It follows from your definition of $\|f\|_\infty$ that$$\|f\|_\infty=\inf\{x\in\Bbb R\mid f^{-1}([x,\infty))\text{ is negligible}\}.$$Therefore the set $\{x\in\Bbb R\mid f(x)>\| f\|_\infty\}$ is negligible, since it is equal to$$\bigcup_{n\in\Bbb N}f^{-1}\left(\left[\|f\|_\infty+\frac1n,\infty\right)\right)$$and each $f^{-1}\left(\left[\|f\|_\infty+\frac1n,\infty\right)\right)$ is negligible.
|
H: Is $f(x, y) = x - y$ injective and surjective?
Consider $f(x,y) = x-y$ for $f:\mathbb{Z} \times \mathbb{Z}^+ \to \mathbb{Z}$. Is this injective and surjective?
I think it's not injective because there are infinitely many values of $x$ and $y$ which can have the same difference, such as (50, 10), (49, 9) and so on.
But I think it's surjective because every integer in the codomain is mapped to at least once.
Is this correct?
AI: You suggested the following proof of surjectivity:
Let $r = x - y$. Since $x$ and $y$ are both integers, $r$ is also an integer, hence it exists in the codomain and is mapped to.
This actually isn't a proof of surjectivity. If I give you an $r$, like $6$, you need to find $x \in \mathbb{Z}$ and $y > 0$ such that $x-y = r$ (in this case, $7$ and $1$ would do).
|
H: Why is $\frac{\int_{64}^{65}1.04^xdx}{\int_{24.5}^{25.5}1.04^xdx}=1.04^{39.5}\;?$
I am not sure whether these two things are exactly equal or only approximately equal. Wolfram says the difference is zero. I also would like to know why they are equal (or approximately)
$$\frac{\int_{64}^{65}1.04^xdx}{\int_{24.5}^{25.5}1.04^xdx}=1.04^{39.5}$$
Edit: I believe they are only approximately equal. The LHS is $\frac{1.04^{65}-1.04^{64}}{1.04^{25.5}-1.04^{24.5}}$. But why is that approximately equal to $1.04^{39.5}$, and why does Wolfram give the difference as $0$ instead of the actual difference?
AI: $$\frac{1.04^{65}-1.04^{64}}{1.04^{25.5}-1.04^{24.5}}$$
is exactly equal to $1.04^{39.5}$. To give you a start, the numerator is $1.04^{64}(1.04 - 1)$; now try and calculate the denominator.
|
H: Projective dimension of locally free sheaf on a Cartier divisor
Let $X$ be a projective scheme over $\mathbb{C}$, let $D\hookrightarrow X$ be an effective Cartier divisor. Is it true for any sheaf $\mathcal{F}$ which is locally free on $D$ (i.e. a pushforward of a locally free sheaf on $D$) that there exists a locally free resolution of length one of $\mathcal{F}$?
I understand this is true for $X$ smooth, but I am not sure how well this generalizes to a non-smooth case.
AI: This is correct.
Fix an ample bundle $L$ on $X$. Then, $\mathcal{F}\otimes L^n$ is globally generated for large $n$ and thus we have a surjection $H^0(\mathcal{F}\otimes L^n)\otimes L^{-n}\to\mathcal{F}$. Let $E$ be the kernel. It suffices to show that $E$ is locally free. But locally, $\mathcal{F}=\mathcal{O}_D^r$. The rest follows from the fact that the kernel of $\mathcal{O}_X\to\mathcal{O}_D$ is just $\mathcal{O}_X(-D)$, which is a line bundle, by your assumption that $D$ is a Cartier divisor.
|
H: Calculating number of elements in the union of three sets.
The three sets $A, B,$ and $C$ each have $2018$ elements. The intersection of any two of the sets has $201$ elements. The intersection of all three sets has $20$ elements. How many elements are there in the union of the three sets?
If it is saying there are 20 elements in the intersection of the three sets, isn't it saying there are 20 numbers that are not repeated throughout the sets? So wouldn't the union be the same? Is there another way to solve?
AI: Hint:
Apply the Inclusion-Exclusion Principle:
$ |A\cup B\cup C|=|A|+|B|+|C|-|A\cap B|-|A\cap C|-|B\cap C|+|A\cap B\cap C|$.
|
H: Fast Exponentiation question: I am not even sure how to read this question, let alone attempt to solve it.
The Question is
$$N=12=2^{2}+2^{3},\qquad M^{2}\equiv 51 \pmod{59}.$$
What is $M^{12} \pmod{59}$?
The solution is in the book says its 7. I am not sure what I am supposed to do with N, in relation to the question. Thank you for any insight.
AI: The point of $N$ is to show that if $M^2 \equiv 51 \pmod {59}$, then $$M^{12} = M^{2^2 + 2^3} = M^{2^2} M^{2^3} = (M^2)^2 ((M^2)^2)^2 \equiv (51)^2 ((51)^2)^2 \pmod {59}.$$
Of course, it is easier to write $$M^2 \equiv -8 \pmod {59}$$ so that the squaring operation on $M^2$ is a little less computational.
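A quick check of this computation in Python (no knowledge of $M$ itself is needed, only of $M^2 \bmod 59$):
M2 = 51                # M^2 mod 59, equivalently -8 mod 59
M4 = M2 * M2 % 59      # (M^2)^2      -> 5
M8 = M4 * M4 % 59      # ((M^2)^2)^2  -> 25
print(M4 * M8 % 59)    # M^12 = M^4 * M^8 = 7
print(pow(-8, 6, 59))  # the -8 shortcut gives the same 7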
|
H: Notation question: dual space basis
I have an exercise that I am trying to decipher, but as I have never seen this notation before I do not know how to read it. The problem states:
The vectors $x_1=(1,1,1),x_2=(1,1,-1)$ and $x_3=(1,-1,-1)$ form a basis of $\mathbb{C}^3$. If $\{y_1,y_2,y_3\}$ is the dual basis, and if
$x=(0,1,0)$, find $[x,y_1],[x,y_2],[x,y_3]$.
Can someone help me understand what this last part $[x,y_i]$ means?
AI: Your exercise is from Halmos: Finite-Dimensional Vector Spaces.
If $V$ is a vector space, $y \in V'$ (the dual space of $V$) and $x \in V$, the notation $[x, y]$ stands for the scalar $y(x)$, which one obtains when one inserts $x$ into $y$. It is explained in the text why this notation is used.
|
H: how is the sum of 2 normally distributed random variables different to 2 times a normally distributed variable
Say we have $X\sim N(10, 100)$. It seems to hold that $X+X\sim N(20, 200)$; however, if we multiply $X$ by a constant we have to multiply the variance by the square of the constant. Take for example $2$: then we have $2X\sim N(20, 100 \cdot 2^2)$ and thus $2X\sim N(20, 400)$.
Don't $X+X$ and $2X$ denote exactly the same thing? I feel like there's a difference between the two that I do not understand.
Thanks
AI: Here is a simpler example. Suppose $A$ and $B$ are each independent uniform random variables between $0$ and $1$. We could have $A = 0.25$ and $B = 0.8$, for example, as an outcome. So their sum is $$A+B = 1.05.$$ But now $2A = 0.5$. The variable $2A$ is uniform on $[0,2]$, because the outcome of $A$ is uniform on $[0,1]$ and you are just scaling up the result by $2$. But $A+B$ is not uniform on $[0,2]$, although its support is on this interval. Intuitively, this is because in order for $A+B$ to be "close to" $2$, both $A$ and $B$ have to be close to $1$. But there are many more ways for $A+B$ to be "close to" $1$, because either $A$ can be large and $B$ can be small, or vice versa.
To see this explicitly, we can compare this to a discrete distribution--rolling two fair dice numbered from $1$ to $6$. How many ways are there to get a sum of $12$? There is only one way: $(6,6)$. But how many ways are there to get a sum of $7$? There are six ways: $$(1,6), (2,5), (3,4), (4,3), (5,2), (6,1).$$
So now that we understand that the distribution of $A+B$ is not the same as $2A$, it is not too difficult to see that their variances will also be different.
To bring our conversation back to the normal distribution, we can see that you have a misapprehension here. $X+X$ is not an appropriate way to describe a random variable that represents the sum of two independent but identically distributed (IID) normal random variables. In other words, if by $X+X$ you mean to say, "draw two realizations from a normal distribution with mean $\mu$ and variance $\sigma^2$," then this is not the correct notation. Instead, you should write $$X_1 + X_2,$$ where $$X_i \sim \operatorname{Normal}(\mu,\sigma^2), \quad i = 1, 2, \ldots.$$ Then the sum of these IID normal random variables is also normal: $$X_1 + X_2 \sim \operatorname{Normal}(2\mu, 2\sigma^2).$$ But the random variable $2X_1$ does not represent drawing two normal random variables. It means drawing one random variable and multiplying it by $2$. And as we explained with our examples above, this is a different distribution than the sum of independent normal random variables; while it is normal, it has a different variance.
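A simulation makes the difference visible immediately (a numpy sketch; note that numpy's normal takes the standard deviation $10$, not the variance $100$):
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(10, 10, size=1_000_000)  # N(10, 100)
x2 = rng.normal(10, 10, size=1_000_000)  # an independent copy

print(np.var(x1 + x2))  # about 200: the sum of two IID draws
print(np.var(2 * x1))   # about 400: one draw scaled by 2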
|
H: Is $\mathbb{Q}(\sqrt{pq})$ a subfield of $\mathbb{R}$?
Is $\mathbb{Q}(\sqrt{pq})$ a subfield of $\mathbb{R}$?
I have a feeling that the answer is no but I can't prove it. It clearly contains $0$ and $1$, and is closed under addition and multiplication. There is an inverse to addition and the inverse to multiplication for $a+b\sqrt{pq}$ is given by $\frac{a-\sqrt{pq}b}{a^2-pqb^2}$
Please help
AI: Okay. Let's do this, slowly and tediously..
$\mathbb Q(\sqrt{pq}) = \{a+b\sqrt{pq}|a,b \in \mathbb Q\}$.
1) $\mathbb Q(\sqrt{pq}) \subset \mathbb R$.
well.... duh.... assume $\sqrt{pq} \in \mathbb R$; then for any $a,b \in \mathbb Q$ we have $a + b\sqrt{pq} \in \mathbb R$, because $\mathbb R$ is a field and closed under addition and multiplication.
2) $\mathbb Q(\sqrt{pq})$ is field.
Okay. It inherits multiplication and addition from $\mathbb R$, so multiplication and addition are associative and commutative, and multiplication distributes over addition.
And there is a unique additive identity $0$ and a unique multiplicative identity $1$, and every term has a unique additive inverse and every non-zero term has a unique multiplicative inverse....... IN $\mathbb R$
So all we have to do is show:
a) $0\in \mathbb Q(\sqrt{pq})$
b) $1 \in \mathbb Q(\sqrt{pq})$
c) For every $a+b\sqrt{pq} \in \mathbb Q(\sqrt{pq})$ the $-(a+b\sqrt{pq})\in \mathbb Q(\sqrt{pq})$
d) For every $a + b\sqrt{pq} \in \mathbb Q(\sqrt{pq})$ with $a+b\sqrt{pq}\ne 0$, we have $\frac 1{a+ b\sqrt{pq}} \in \mathbb Q(\sqrt{pq})$.
a)b)c) are trivial $0 = 0 + 0*\sqrt{pq}$, $1 = 1+ 0*\sqrt{pq}$. $-(a+b\sqrt{pq})= -a + (-b)\sqrt{pq}$ and $-a,-b\in \mathbb Q$.
And d) is not trivial but very easy.
We can assume $a - b\sqrt{pq} \ne 0$. (If $a= b\sqrt{pq}$ and $\sqrt{pq}$ is irrational, then $b=0$ and $a = 0$, so $a+b\sqrt{pq} =0$.)
$\frac {1}{a+b\sqrt{pq}} = \frac {1}{a+b\sqrt{pq}}\frac {a-b\sqrt{pq}}{a-b\sqrt{pq}}=\frac {a -b\sqrt{pq}}{a^2 -b^2pq}=\frac {a}{a^2-b^2pq}- \frac {b}{a^2-b^2pq}\sqrt{pq}$
and $\frac {a}{a^2-b^2pq} \in \mathbb Q$ and $-\frac {b}{a^2-b^2pq}\in \mathbb Q$.
..... Oh.... I suppose I should have asked. Are $p,q$ integers? If $p,q$ are real numbers where $pq$ is not a rational number... well, we can address that in an addendum...
Okay.... addendum.
If $pq$ is not rational then no, $\mathbb Q(\sqrt{pq})$ need not be a field. For example, if $pq$ is transcendental then $\sqrt{pq}$ is too, and now for rational $a,b$ with $b\ne 0$, the equation $(a+b\sqrt{pq})(x+y\sqrt{pq})= 1$ will have no rational solutions $x,y$.
|
H: Maximum Flow of a network $G$
We are supposed to find the maximum flow through this network $G$. We have that $val(f) \leq c(C)$ for every cut in the network, where $c(C)$ is the capacity of the cut $C$. So I understand I am supposed to find minimum cut. What confuses me is the definition that we have for cuts: a cut $C$ is each subset of $E(G)$ (edges in $G$), so that every path from $q$ (source) to $s$ (sink) contains at least one edge of $C$.
And I'm not supposed to use any specific algorithm, just the facts that I have listed here. That's what makes it difficult for me, since all definitions of a cut on the internet are different from this one and therefore so is the approach to the problem.
AI: Aravind's definition of a cut is much more intuitive. Let's label your vertices from left to right, top to bottom starting from $1$. Note that $q = 5$ and $s = 8$. So for example, the vertex in the first row, second column is $2$. By inspection, if we choose $S = \{1, 2, 3, 5, 6, 7, 9, 10\}$, then the capacity of the cut is $5$. Let us call this cut $C^*$.
To see why $C^*$ is the minimum cut, note that the only edges with capacity less than $5$ are the edges $(3,4)$ and $(7, 11)$ (which we've used thus far in our solution) and $(6, q)$ and $(s, 7)$, both of which cannot cross a $q - s$ cut, since any edge to $q$ or from $s$ cannot cross a $q - s$ cut. (Why?) So the only possible candidates for a cut of smaller capacity would be cuts having only one of $(3, 4)$ or $(7, 11)$ flowing out of the $q$ side of the cut. To have a cut having only one of these edges flowing out of the $q$ side, we would have to move one of $4$ or $11$ to the $q$ side of $C^*$ or move one of $3$ or $7$ to the $s$ side of $C^*$. A quick check shows that none of these options reduces the minimum cut.
|
H: Finite numbers that cannot be represented with $4m+7k$ using complete induction
I need to prove that there exist only finitely many numbers that cannot be represented as $n=4m+7k~ |~ m,k \in \mathbb{N}$.
Starting:
We say that $A = \{n \in \mathbb{N} | n=4m+7k \geq 18 ~~~ m,k \in \mathbb{N} \} \cup \{1,2, \dots 17\}$
We say that:
$18,19,20,21 \in A$ (by hand)
And assume each number between $18 \leq m \leq n-1 | n\geq 22$ are in A.
And so:
$n-4 \geq 18$ because $n\geq22$
We know that $n-4 \in A$
So: $n-4 = 4m +7k \rightarrow n = 4m+4+7k \rightarrow n=4(m+1)+7k$
And we proved that $n \in A$
And thus $A = \mathbb{N}$, an infinite set, and $\{n \in \mathbb{N} \mid n=4m+7k \geq 18,~ m,k \in \mathbb{N} \}$ is infinite.
So the set of numbers that cannot be represented is contained in $\{1,\dots,17\}$, a finite set. QED.
I am not sure about this proof because it seems "specific" about $n-4$, and what about $n-3$?
I would appreciate your help!
AI: This is the coin problem for the coprime pair $4$ and $7$ (often called the Chicken McNugget theorem), but here is a cool way that I like to teach younger children of realizing this fact. Suppose we have a number $n = 4m_1 + 7k_1$. Note that if $k_1 \geq 1$, then we can immediately conclude that $n + 1$ is expressible as $4m + 7k$, as we can "exchange" one of the sevens for two fours, thereby constructing $n + 1$. That is, $n + 1 = 4(m_1 + 2) + 7(k_1 - 1)$.
With these facts, we now notice that if $n = 4m + 7k$ with $k \geq 3$, then we can construct $n + 1$, $n + 2$, and $n + 3$ just by trading in our sevens one at a time for two fours. Of course, $n + 4$ is easily constructable as it is just $4 (m + 1) + 7k$, and again we can begin the exchanging process to construct the next three natural numbers (since again $k \geq 3$ and hence $n + 4$ has $3$ sevens to "exchange"). This implies that every number after $7 \cdot 3 = 21$ is constructable, so indeed there are only finitely many natural numbers that are not of form $4m + 7k$.
Note that this little heuristic works for any two relatively prime numbers $p$ and $q$. That is, there are finitely many numbers not expressible as $mp + kq$ for such $p$ and $q$.
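A brute-force check for $p=4$, $q=7$ recovers your bound, since every number from $18$ on turns out to be representable (a Python sketch):
def representable(limit, p=4, q=7):
    # Numbers up to `limit` of the form m*p + k*q with m, k >= 0.
    return {m * p + k * q
            for m in range(limit // p + 1)
            for k in range(limit // q + 1)
            if m * p + k * q <= limit}

print(sorted(set(range(1, 100)) - representable(100)))
# [1, 2, 3, 5, 6, 9, 10, 13, 17]: finitely many, all at most 17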
|
H: Uniform Convergence of $\frac{n}{x+n}$
I was trying an exercise on uniform convergence of sequence of real-valued functions.
I got stuck in a problem in which I am supposed to prove that sequence defined by $f_n(x)=\frac{(n)}{(x+n)}$ is uniformly convergent on $[0,k]$.
I have found out its point-wise limit to be $1$.
Now I know that for this to be uniformly convergent,for any given $\epsilon>0$,we should get a natural number $m$ such that $|f_n(x)-1|<\epsilon$ for all $n\geq m$ and for all $x$ in $[0,k]$, which on further calculations give us that $n>x(\frac{1}{\epsilon}-1)$. Now I am not able to find a natural number $m$ such that this holds for all $n\geq m$.
Please help.
AI: Note that$$\left|\frac n{x+n}-1\right|=\frac{|x|}{|x+n|}\leqslant\frac kn,$$since $x\leqslant k$ and $|x+n|=x+n\geqslant n$. So, given $\varepsilon>0$, take $N\in\Bbb N$ such that $\frac kN<\varepsilon$. Then, if $n\geqslant N$, $\frac kn<\varepsilon$ and so$$(\forall x\in[0,k])(\forall n\in\Bbb N):n\geqslant N\implies\left|\frac n{x+n}-1\right|<\varepsilon.$$
|
H: How to expand $(\partial_\mu A^\mu)^2$
How would I expand the following:
$$(\partial_\mu A^\mu)^2 \tag{1}$$
My understanding of it makes me think it would be as simple as:
$$(\partial_\mu A^\mu)(\partial_\mu A^\mu)\tag{2}$$
but I recall in my lectures seeing something like:
$$\tag{3}(\partial_\mu A^\mu)^2 = (\partial_\mu A^\mu)(\partial_\nu A^\nu) $$
Which one of these is correct? If none of them is, how would I proceed in expanding it?
AI: It should be written as (3), because we only want to use a given label for one pair of contracted indices. A much simpler example is $(\sum_ia_i)^2=\sum_ia_i\cdot\sum_ja_j=\sum_{ij}a_ia_j$. (Or, if you want an example with contraction, $(\sum_ia_ib_i)^2=\sum_{ij}a_ib_ia_jb_j$.)
|
H: Proof regarding inverse images and unions of sets
I'm currently working through Analysis by Tao and I just did an exercise but I'm not sure if it's correct. The question is
Let $f: X \to Y$ be a function from one set X to another set $Y$,
and let $U, V$ be subsets of $Y.$ Show that $f^{−1}(U \cup V ) = f^{−1}(U) \cup f^{−1}(V)$
I have at the moment:
Proof: Let $x\in f^{−1}(U \cup V )$ therefore we have $f(x) \in U \cup V$. Therefore $f(x) \in U$ or $f(x) \in V$. We use cases. If $f(x) \in U$ then $x \in f^{-1}(U)$ and therefore we have $x \in f^{−1}(U) \subset f^{−1}(U) \cup f^{−1}(V )$. If $f(x)\in V,$ then $x\in f^{−1}(V)$. Thus, $x\in f^{−1}(V)\subset f^{−1}(U) \cup f^{−1}(V)$. In either case, we have that $x\in f^{−1}(U) ∪ f^{−1}(V).$
For the second subset inclusion, let $x \in f^{-1}(U) \cup f^{-1}(V)$. Thus, $x \in f^{-1}(U)$ or $x \in f^{-1}(V),$ giving us two cases. In the first case, $x \in f^{-1}(U)$ and thus $f(x) \in U.$ So, $f(x) \in U \subset U \cup V$ and thus $f(x) \in U \cup V.$ This gives us that $x \in f^{-1}(U \cup V).$ In the second case, $x \in f^{-1}(V)$ and thus $f(x) \in V.$ So,
$f(x) \in V \subset U \cup V$ and thus $f(x) \in U \cup V.$ This gives that $x \in f^{-1}(U \cup V).$ In either case, $x \in f^{-1}(U \cup V)$ and so we have the subset inclusion $f^{-1}(U) \cup f^{-1}(V) \subset f^{-1}(U \cup V).$ QED
AI: Your proof is correct but you can make it shorter in the following way:
Let $x \in f^{-1}(U \cup V).$ Then $f(x) \in U \cup V.$ That is $$f(x) \in U \text{ or } f(x) \in V.$$ Therefore $$x \in f^{-1}(U) \text{ or } x\in f^{-1}(V).$$ In other words $$x\in f^{-1}(U) \cup f^{-1}(V).$$ So we may conclude $f^{-1}(U \cup V) \subseteq f^{-1}(U) \cup f^{-1}(V).$
Similarly, for the converse, you can omit splitting into cases.
|
H: Show: Linear mapping or none
Could someone explain me if the following is a linear mapping or none:
$$\text{Of }f:\mathbb{R^3}\to\mathbb{R^3} \text{ is known: }$$
$$f(\begin{pmatrix}1\\2\\3 \end{pmatrix})= \begin{pmatrix} 5\\3\\1 \end{pmatrix}$$
$$f(\begin{pmatrix}3\\2\\1 \end{pmatrix})= \begin{pmatrix} 1\\3\\5 \end{pmatrix}$$
$$f(\begin{pmatrix}-3\\2\\7 \end{pmatrix})= \begin{pmatrix} 9\\3\\-7 \end{pmatrix}$$
Thanks for helping me!
AI: It's not linear, because $3\pmatrix{1\\2\\3}-2\pmatrix{3\\2\\1}=\pmatrix{-3\\2\\7}$ but $3\pmatrix{5\\3\\1}-2\pmatrix{1\\3\\5}=\pmatrix{13\\3\\-7}\ne\pmatrix{9\\3\\-7}.$
|
H: Let F be a field of characteristic p and let $\alpha \in F$ be an element for which $f(\alpha)=0$. Prove that $f(\alpha^p)=0$
Let p be any prime and let $f(x) \in \Bbb{F}_p[x]$ be any polynomial with coefficients in $\Bbb{F}_p$. Let F be a field of characteristic p and let $\alpha \in F$ be an element for which $f(\alpha)=0$. Prove that $f(\alpha^p)=0$ as well.
Fermat's little theorem: for any prime p and any $a \in \Bbb{Z}$,that $a^p \equiv a$(mod p)
$f(x)=c_o+c_1x+...+c_{n-1}x^{n-1}+x^n$. How am I supposed to argue with $(c_o+c_1x+...+c_{n-1}x^{n-1}+x^n)^p$ in $\Bbb{F}_p[x]$?
AI: It is essentially the Frobenius homomorphism (written $\Phi$ here to avoid a clash with the name of the field $F$):
$\Phi : F \to F \qquad a \mapsto a^p$
Since $F$ has characteristic $p$, $\Phi$ is a ring homomorphism: the binomial coefficients $\binom{p}{k}$ with $0 < k < p$ all vanish mod $p$, so $(x+y)^p = x^p + y^p$. What you need now is this additivity of $\Phi$, namely
$ \Phi(c_0+c_1\alpha+ \cdots + c_{n-1} \alpha^{n-1} + \alpha^{n} ) =
(c_0+c_1\alpha+ \cdots + c_{n-1} \alpha^{n-1} + \alpha^{n} )^p = \\=c_0^p + c_1^p (\alpha^p)^1 + \cdots + c_{n-1}^p (\alpha^p)^{n-1} + (\alpha^p)^n$
As you said, $a^p \equiv a \bmod p$. In particular $c_i^p = c_i$ in $\mathbb{F}_p$ for every $i = 0, \dots, n-1$.
Finally, by the fact that $\Phi$ is a homomorphism and so $0 = \Phi(0)$:
$0=\Phi(f(\alpha)) = \Phi(c_0+c_1\alpha+ \cdots + c_{n-1} \alpha^{n-1} + \alpha^{n} ) = \\=c_0 + c_1 (\alpha^p)^1 + \cdots + c_{n-1} (\alpha^p)^{n-1} + (\alpha^p)^n =f(\alpha^p)$
i.e.
$0=f(\alpha^p)$
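A tiny concrete check, taking $p=3$, $f(t)=t^2+1$ and its root $\alpha=i$ in the field $F=\mathbb{F}_9=\mathbb{F}_3[i]$ (an illustrative sketch; modelling $\mathbb{F}_9$ as pairs $a+bi$ is our choice, valid since $t^2+1$ is irreducible mod $3$):
def mul(u, v, p=3):
    # Multiply a + b*i by c + d*i with i**2 = -1, coefficients mod p.
    a, b = u
    c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(u, n):
    out = (1, 0)
    for _ in range(n):
        out = mul(out, u)
    return out

def f(u):  # f(t) = t**2 + 1 over F_3
    a, b = mul(u, u)
    return ((a + 1) % 3, b)

alpha = (0, 1)             # alpha = i
print(f(alpha))            # (0, 0): f(alpha) = 0
print(f(power(alpha, 3)))  # (0, 0): f(alpha**3) = 0 as well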
|
H: The strict topology is metrizable on bounded subsets
Let $A$ be a $\sigma$-unital C*-Algebra and $(x_{\lambda})_{\Lambda}$ be a norm bounded net in $\mathcal{M}(A)$ (the multiplier algebra of $A$). I proved that for a strictly positive element $h$ if $(x_{\lambda}h)_{\Lambda}$ and $(hx_{\lambda})_{\Lambda}$ are norm convergent in $A$ then $(x_{\lambda})_{\Lambda}$ is strictly convergent in $\mathcal{M}(A)$.
So here is my question: How can I show that the strict topology is metrizable on bounded subsets when $A$ is $\sigma$-unital.
AI: Let $h$ be a strictly positive element. Define:
$$d(x,y) = {\|h(x-y)\|+\|(x-y)h\|}.$$
You have checked that if $x_\alpha$ is a bounded net and $d(x_\alpha,x)\to0$, then $x_\alpha\to x$ in the strict topology. Further, if $x_\alpha\to x$ in the strict topology, you have that $\|a(x_\alpha-x)\|\to0$ and $\|(x_\alpha-x)a\|\to0$ for all $a\in A$, in particular for $a=h$. Hence the notion of convergence given by this metric agrees on bounded sets with the convergence of the strict topology.
|
H: Is this series convergent? $1 + 1/2 + 1/2 + 1/4 + 1/4 + 1/4 + 1/4 + ...$
Is this series convergent? How to prove?
$$1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + ... + \frac{1}{8} \ (8 \times 1/8) + \frac{1}{16} + ...$$
Grouping equal terms, it's equal to $1 + 1 + 1 + 1 + \cdots$; when I take the ratio of the $(n+1)^{th}$ term to the $n^{th}$ term, it can be $\frac{1}{2}$ or $1$. I don't think this is a convergent series, but what is it?
Thanks!
AI: Observe that the $(2^n - 1)$th partial sum is given by $$s_{2^n - 1} = 1 + 2 \cdot \frac 1 2 + 4 \cdot \frac 1 4 + \cdots + 2^{n - 1} \cdot \frac 1 {2^{n - 1}} = \underbrace{1 + 1 + 1 + \cdots + 1}_{n \text{ summands}} = n.$$ Consequently, we have that $\lim_{n \to \infty} s_{2^n - 1} = \lim_{n \to \infty} n = \infty,$ and the series diverges.
|
H: Square roots modulo $pq$ where $p$, $q$ are distinct primes - confusion with quote
I have a doubt about the following quote from a book:
The Chinese Remainder Theorem implies that, if $p$ and $q$ are
distinct primes, then $s$ is a square modulo $pq$ if and only if $s$
is a square modulo $p$ and $s$ is a square modulo $q$. In particular,
if $s \equiv x^2 \equiv (x')^2 \pmod p$ where $x \neq x'$, and
likewise $s \equiv y^2 \equiv (y')^2 \pmod q$, then $s$ has exactly
four square roots modulo $pq$, namely,
$$s \equiv (xy)^2 \equiv (x'y)^2 \equiv (xy')^2 \equiv (x'y')^2 \pmod{pq}$$
As an attempt to understand the quote, suppose I want to find the square roots of $11$ modulo $133$. So, I have $s = 11$, and, since $133 = 7 \times 19$, I have $p = 7$ and $q = 19$. I first need to separately find the roots of $11$ modulo $7$ and of $11$ modulo $19$:
To find the roots modulo $7$, solve $x^2 \equiv 11 \pmod 7$. Since $11 \equiv 2^2 \equiv 5^2 \pmod{7}$, the roots are $x = 2$ and $x' = 5$.
To find the roots modulo $19$, solve $x^2 \equiv 11 \pmod{19}$. Since $11 \equiv 7^2 \equiv 12^2 \pmod{19}$, the roots are $y = 7$ and $y' = 12$.
The quote is saying that $xy$, $x'y$, $xy'$ and $x'y'$ are roots of $s$ modulo $pq$. However, this seems to be untrue in my example:
$xy = 2 \times 7 = 14$, but $11 \not\equiv 14^2 \pmod{133} $.
$xy' = 2 \times 12 = 24$, but $11 \not\equiv 24^2 \pmod{133} $.
$x'y = 5 \times 7 = 35$, but $11 \not\equiv 35^2 \pmod{133} $
$x'y' = 5 \times 12 = 60$, but $11 \not\equiv 60^2 \pmod{133} $
Am I missing something?
AI: The Chinese remainder theorem must be applied correctly.
In your example, it says that there is exactly one $a \pmod{133}$ such that $a \equiv 2 \pmod{7}$ and $a \equiv 7 \pmod{19}$. See, for example, here.
Since we have $1 = 3 \cdot 19 - 8 \cdot 7$, we put $a = 2 \cdot 3 \cdot 19 - 7 \cdot 8 \cdot 7 = -278$ and find that, indeed, $a^2 \equiv 11 \pmod{133}$. An explanation of this calculation can be found, for example, here.
Similar calculations for the other combinations of roots of $11$ mod $7$ and $19$ produce the following four distinct roots of $11$ mod $133$.
$$a \equiv 121 \pmod{133}$$
$$b \equiv 107 \pmod{133}$$
$$ c \equiv 26 \pmod{133}$$
$$ d \equiv 12 \pmod{133}$$
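Here is a Python sketch of the whole computation, combining the residues with the CRT rather than by multiplying them (it reproduces the four roots above):
from sympy.ntheory.modular import crt

p, q, s = 7, 19, 11
roots_p = [x for x in range(p) if x * x % p == s % p]  # [2, 5]
roots_q = [y for y in range(q) if y * y % q == s % q]  # [7, 12]

for x in roots_p:
    for y in roots_q:
        a = crt([p, q], [x, y])[0]   # the unique a mod 133
        assert a * a % (p * q) == s % (p * q)
        print(a)                     # 121, 107, 26, 12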
|
H: How to solve equation with multiple trigonometric functions?
Solve for $x$:
$\arccos( \cos(x) y + z) = \arcsin( \sin(x) a+b)$.
AI: Let $\cos x = \alpha$, so $\sin x = \sqrt{1 - \alpha^2}$. Then take the $\cos (\cdot)$ of both sides and note that $\cos \left( \arcsin[q] \right) = \sqrt{1 - q^2}$. Then you get a quartic equation in $\alpha$.
|
H: If $m<n$, show that there is a $1$-$1$ mapping $F:S_m\rightarrow S_n$ such that $F(fg)=F(f)F(g)$
question: If $m<n$, show that there is a $1$-$1$ mapping $F:S_m\rightarrow S_n$ such that $F(fg)=F(f)F(g)$ for all $f,g\in S_m$. Where $S_n$ stands for symmetric group of degree $n$
my approach:
First I was thinking how to construct the $F$. Because if I did this properly then the only work is left to show this is $1$-$1$ map.
So there are more elements in $S_n$ compare to $S_m$. I need to define $F$ in such a way that after covering all elements of $S_m$ I have to define it for other elements of $S_n$. Now here is my question arrive How to choose those elements and how to define $F$ when $S_m$ is covered?
AI: You have the wrong approach to this question: you cannot guarantee with your construction that $F$ is a homomorphism. Here is the correct answer.
If $\sigma:\{1,...,m \} \rightarrow \{1,...,m \}$ is a bijection we define $F(\sigma):\{1,...,n\} \rightarrow \{1,...,n\}$ as follows
$$F(\sigma)(x) = x \ \text{ if } \ x > m$$ and
$$F(\sigma)(x) = \sigma(x) \ \text{ if } \ x \leq m$$
Intuitively, $F(\sigma)$ just fixes the elements which are not in $\sigma$'s domain.
Try and prove that $F$ is a homomorphism.
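A brute-force check of the homomorphism property for small $m$ and $n$ (a Python sketch; permutations are written $0$-indexed as tuples, with f[i] the image of i):
from itertools import permutations

def embed(sigma, n):
    # F(sigma): extend a permutation of {0,...,m-1} to {0,...,n-1}
    # by fixing every point >= m.
    return tuple(sigma) + tuple(range(len(sigma), n))

def compose(f, g):
    return tuple(f[g[i]] for i in range(len(f)))

m, n = 3, 5
for f in permutations(range(m)):
    for g in permutations(range(m)):
        assert embed(compose(f, g), n) == compose(embed(f, n), embed(g, n))
print("F(fg) = F(f)F(g) holds on all of S_3, embedded in S_5")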
|
H: The cyclic subgroups of $p^2$ order non-cyclic group are normal
I’m having a hard time on proving that every cyclic subgroup of $p^2$ order group is a normal subgroup, where $p$ is a prime number. I’m not going to use the truth that $p^2$ order group are abelian, since this is what I want to show through the proof.
I tried to show that $gag^{-1}$ belongs to $\langle a\rangle$, where $g$ is in $G$ but not in $\langle a\rangle$, any hints on how to check this? For $p=2$, I can check this by contradiction, but for higher $p$, I have no idea, maybe I should use induction?
AI: Let the group be $G$ of order $p^2$.
You see any subgroup of $G$ can have order $1,p$ or $p^2$.
If the order of the cyclic subgroup is $1$ or $p^2$, then we are done.
Now we look at a cyclic subgroup $H=\langle a\rangle$ of order $p$.
We are to show that $H$ is a normal subgroup of $G$. If not then $\exists g\in G$ such that $gag^{-1}\not\in H$
Observe that the order of $gag^{-1}$ is $p$; also observe that $ \langle gag^{-1} \rangle \cap H =\{e\}$.
Then the cosets of $ \langle gag^{-1} \rangle $ in $G$ are $ \langle gag^{-1} \rangle,a\langle gag^{-1} \rangle,a^2 \langle gag^{-1} \rangle,\cdots,$ and $ a^{p-1}\langle gag^{-1} \rangle $.
Now $g^{-1}\in a^{i}\langle gag^{-1} \rangle$ for some $1\le i \le p-1$.
$\therefore g^{-1}= a^{i}(gag^{-1})^j$ for some $1\le j\le p-1 $. Then $$g^{-1}= a^iga^jg^{-1}\implies e= a^iga^j\implies g=a^{-i-j}\in H \implies gag^{-1}\in H
$$.
So a contradiction. Hence we are done.
|
H: Uniform convergence of $x^n$ using the definition
I have been trying to prove the uniform convergence of sequence of functions defined by $f_n(x)=x^n$ on $[0,k]$ where $k<1$ by the epsilon definition of uniform convergence.
I have found the point-wise limit of the same..i.e. $f(x)=0$
Now let $\epsilon>0$ be given.
Then if the given sequence is uniformly convergent then we must have that there exists a natural number $m$ such that $|x^n-0|<\epsilon$ for all $n\geq m$ and for all $x$ in $[0,k]$, which on further calculations give that $$n>\frac{\log(1/\epsilon)}{\log(1/x)}$$
Now what natural number $m$ should I choose such that for all $n\geq m$ the above holds?
Please help!
AI: we begin by observing that
$$\forall x\in[0,k]\;\; |x^n|\le k^n$$
given $\epsilon>0$.
we just need to find $ m\ge 0$ such that
$$n\ge m \implies k^n<\epsilon$$
or
$$n\ge m \;\implies n\ln(k)<\ln(\epsilon)$$
which gives
$$n\ge m \;\implies n>\frac{\ln(\epsilon)}{\ln(k)}$$
we know that
$$\lfloor \frac{\ln(\epsilon)}{\ln(k)}\rfloor +1>\frac{\ln(\epsilon)}{\ln(k)}$$
so, we can take for example
$$m=\lfloor \frac{\ln(\epsilon)}{\ln(k)}\rfloor +3$$
|
H: Exercise using Sard's theorem
Let $M$ be a compact $n$-dimensional differentiable manifold and $f:M\to\mathbb{R}^{n+1}$ differentiable with $0\notin f(M)$. Show that there is a straight line through the origin in $\mathbb{R}^{n+1}$ that intersects with $f(M)$ only finitely many times.
This was given as an exercise for the consequences of Sard's theorem. I can't think of how I should apply the theorem however. A direct consequence would be that the set of the critical points of $f$ has the measure zero. But neither does this go into the direction of the assertion I shall but prove, nor does a set of measure zero has to be finite. For example every line in $\mathbb{R}^2$ has measure zero but infinitely many points. Could someone help me out?
AI: As $0\notin f(M)$, you may postcompose $f$ with the projection $\pi:\Bbb R^{n+1}\to\Bbb {RP}^n$ and still have a differentiable map. By Sard, the critical values $(\pi\circ f)(X)$, where $X$ denotes the set of critical points, have zero measure; the consequence we draw from this is only the fact that the image of the critical points is not all of $\Bbb {RP}^n$.
Now $M$ is compact and the set of critical points $X$ is always closed, so the critical values $\pi(f(X))$ are a closed set in $\Bbb{RP}^n$ that is not all of $\Bbb{RP}^n$. They are thus contained in an open set $U_1\subset \Bbb{RP}^n$ that is not all of $\Bbb{RP}^n$.
For every point in $M-X$ the differential of $\pi\circ f$ must be invertible, hence $(\pi\circ f)$ is a local diffeomorphism on $M-X$. So for $p\in M -X$ consider open neighbourhoods on which $\pi\circ f$ restricts to a diffeomorphism. All these open neighbourhoods together with $(\pi\circ f)^{-1}(U_1)$ must cover $M$, hence there is a finite subcover. In particular by construction any point of $\Bbb {RP}^n$ will have at most one pre-image in each open set of this finite cover (as the map $\pi\circ f$ is a diffeomorphism on these sets), thus at most finitely many pre-images.
So the points in $\Bbb {RP}^n-U_1$ must have finite pre-image, meaning the lines they correspond to only intersect $f(M)$ finitely many times.
|
H: Parabola transformation
Find the real affine change of coordinates that maps the parabola in the $xy$-plane to the parabola in the $uv$-plane
$$4x^2 + 4xy + y^2 - y + 1 = 0$$
$$4u^2 + v = 0$$
My attempt:
Since there is an $xy$ term, we know that there is a rotation. Thus suppose there is an $x'y'$ coordinate system before the rotation. We know that $x = x' \cos \theta - y' \sin \theta$ and $y = x' \sin \theta + y' \cos \theta$. If we make this substitution and rewrite the expression as $A'x'^2 + B'x'y'+ C'y'^2 + D'x' + E'y' + F'$, we want $B' = 0$. If we solve, we find that $\tan(2 \theta) = \frac{B}{A- C}$. In this case $\tan(2 \theta) = \frac{4}{3}$. Now when we solve this equation, we have $|\sin \theta| = \frac{1}{\sqrt{5}}$ and $|\cos \theta| = \frac{2}{\sqrt{5}}$.
In order to finish this solution, I need help with the following questions
How do I determine sign for the angles? Is there a way to do this other than graphing?
Once I substitute the angles and find an equation in terms of $x'$ and $y'$ which yields a parabola that is not rotated, I can see how to write $x'$ and $y'$ in terms of $u$ and $v$. Then how how exactly do I account for the rotation when I write $x$ and $y$ in terms of $u$ and $v$? I'm having a difficult time geometrically understanding what has to be done to the parabola coordinates.
AI: To obtain affine change of coordinates simply observe that
$4x^2+4xy+y^2-y+1=(2x+y)^2+(1-y)=4\left(x+\frac{y}{2}\right)^2+(1-y)$. So take \begin{align*}u&=x+\frac{y}{2}\\v&=1-y\end{align*}
This gives us
$$\begin{bmatrix}u\\v\end{bmatrix}=\begin{bmatrix}1&\frac{1}{2}\\0&-1\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}+\begin{bmatrix}0\\1\end{bmatrix}.$$
From here we can also get
$$\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}1&\frac{1}{2}\\0&-1\end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix}+\begin{bmatrix}\frac{-1}{2}\\1\end{bmatrix}.$$
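The substitution is easy to verify symbolically (a small sympy sketch):
import sympy as sp

x, y = sp.symbols('x y')
u, v = x + y / 2, 1 - y
original = 4*x**2 + 4*x*y + y**2 - y + 1
print(sp.simplify(original - (4*u**2 + v)))  # 0: the two quadratics agree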
|
H: Showing that $\text{Hom}(M,\Gamma(X,\mathcal{F}))\simeq \text{Hom}(\widetilde{M},\mathcal{F})$ (exercise II.5.3 from Hartshorne)
This is exercise II.5.3 from Hartshore:
Let $X=\text{Spec}(A)$ be an affine scheme. Show that the functors $\widetilde{\,\,\,\,}$ and $\Gamma$ are adjoint, in the following sense: for any $A$-module $M$, and for any sheaf of $\mathcal{O}_X$-modules $\mathcal{F}$, there is a natural isomorphism:
$$\text{Hom}_A(M,\Gamma(X,\mathcal{F}))\simeq\text{Hom}_{\mathcal{O}_X}(\widetilde{M},\mathcal{F})$$
Since $\widetilde{M}(X)=M$, $\Gamma(X,\mathcal{F})=\mathcal{F}(X)$ and $\mathcal{O}_X(X)=A$, then given a morphism $\psi:\widetilde{M}\to\mathcal{F}$, we have that $\psi_X:M\to\mathcal{F}(X)$ is an $A$-module morphism, therefore in $\text{Hom}_A(M,\Gamma(X,\mathcal{F}))$.
Conversely, if $\phi:M\to\mathcal{F}(X)$, I think the natural idea is to define $\phi_f:M_f\to\mathcal{F}(X)_f$ for each $f\in A$, which can be glued to obtain a morphism $\psi:\widetilde{M}\to\widetilde{\mathcal{F}(X)}$.
But is it true that $\widetilde{\mathcal{F}(X)}=\mathcal{F}$? I tried to show that $\mathcal{F}(X_f)=\mathcal{F}(X)_f$, but I don't know how to do it only knowing that $\mathcal{F}$ is a $\mathcal{O}_X$-module.
Any suggestions?
AI: It's not true in general that $\widetilde{\mathcal F(X)}=\mathcal F$; there exist $\mathcal O_X$-modules which are not of the form $\widetilde M$ for any $M$.
However, what we do know is that there is a restriction map $\mathcal F(X)\to\mathcal F(X_f)$ for any $f\in A$. But $\mathcal F(X_f)$ is a module over $\mathcal O_X(X_f)=A_f$, so multiplication by $f$ is an automorphism of $\mathcal F(X_f)$, so by the universal property of localization of modules there is an induced ($A_f$-module) morphism $\mathcal F(X)_f\to\mathcal F(X_f)$ (making the corresponding triangle commute).
Combining with the map $M_f\to \mathcal F(X)_f$ you found, we get a map $\widetilde M(X_f)=M_f\to\mathcal F(X_f)$, and these you can glue to get a sheaf morphism $\widetilde M\to\mathcal F$.
|
H: The Monty Hall Three Door Puzzle
I was going through the Monty Hall Three Door Puzzle in "Discrete Mathematics and Its Applications" by Kenneth Rosen (5th Edition). While reading the excerpt from the book (given below) I could not quite convince myself about the solution, as there was only verbal reasoning and not rigorous mathematics involved. So I thought of two methods:
(i) Using classical probability
(ii) Using conditional probability
But I do not know whether I am correct, I may be at fault so, it shall be appreciable if my fault is pointed out.
The Monty Hall Three-Door Puzzle: Suppose you are a game show contestant. You have a chance to win a large prize. You are asked to select one of three doors to open; the large prize is behind one of the three doors and the other two doors are losers. Once you select a door, the game show host, who knows what is behind each door, does the following. First, whether or not you selected the winning door, he opens one of the other two doors that he knows is a losing door (selecting at random if both are losing doors). Then he asks you whether you would like to switch doors. Which strategy should you use? Should you change doors or keep your original selection, or does it not matter?
Solution: The probability you select the correct door (before the host opens a door and asks you whether you want to change) is $\frac{1}{3}$, because the three doors are equally likely to be the correct door. The probability this is the correct door does not change once the game show host opens one of the other doors, because he will always open a door that the prize is not behind.
The probability that you selected incorrectly is the probability the prize is behind one of the two doors you did not select. Consequently, the probability that you selected incorrectly is $\frac{2}{3}$. If you selected incorrectly, when the game show host opens a door to show you that the prize is not behind it, the prize is behind the other door. You will always win if your initial choice was incorrect and you change doors. So, by changing doors, the probability you win is $\frac{2}{3}$. In other words, you should always change doors when given the chance to do so by the game show host. This doubles the probability that you will win.
Now after the first read I had the question in my mind: "Well the total probability of winning is $1$ as it is $\frac{1}{3}$ in the first case and $\frac{2}{3}$ in the second case, so is it a sure event?"
Of course not; this is so because in the first case and in the second case our strategies are different, which means the experiment is different and so is the sample space. These are not either/or cases of a single experiment; rather, they are different experiments.
This being said, let us jump into the classical probability approach: the sample space (possible outcomes) and then the favorable outcomes.
Case $1$: The situation where we stick to our initial decision.
Suppose (without the loss of generality) we have a large prize behind door $A$ and false behind doors $B$ and $C$ (the three doors being named so). Now our possible outcomes are:
1) we choose $A$ and stick to decision and win.
2) we choose $B$ and stick to decision and lose.
3) we choose $C$ and stick to decision and lose.
Now we have to find the probability of winning. The outcome favorable for winning is option 1) (namely, we choose $A$, stick to our decision, and win). So there are a total of $3$ outcomes and $1$ of them is favorable for the event of winning.
By classical probability we have,
P(winning, sticking to choice) = $\frac{1}{3}$
Case $2$: The situation where we always switch from our initial decision.
Suppose (without the loss of generality) we have a large prize behind door $A$ and false behind doors $B$ and $C$ (the three doors being named so). Now our possible outcomes are:
1) we choose $A$ (Monty opens either of $B$ or $C$ randomly) and switch doors (to $C$ (or $B$) if $B$ (or $C$) is opened by Monty (respectively)) and lose.
2) we choose $B$ (Monty opens $C$; he can't open $A$ as it has the prize) and switch doors (to $A$) and win.
3) we choose $C$ (Monty opens $B$; he can't open $A$ as it has the prize) and switch doors (to $A$) and win.
Now we have to find the probability of winning. The outcomes favorable for winning are options 2) and 3). So there are a total of $3$ outcomes and $2$ of them are favorable for the event of winning.
By classical probability we have,
P(winning, switching choice) = $\frac{2}{3}$
Now moving on the conditional probability method:
Case $1$: The situation where we stick to our initial decision.
Let $E_1$ be the event of choosing the door containing the prize.
So $P(E_1)= \frac{1}{3}$ as one door out of 3 has the prize.
Let $E_2$ be the event of opening a door containing false by Monty.
Given that $E_1$ has occurred the probability that $E_2$ occurs is:
$P(E_2 \mid E_1)= \frac{2}{2} = 1$; there are 2 doors (possible outcomes) containing false and both are favorable outcomes.
Let $E_3$ be the event of answering the question "Do you want to change the door?" by me.
$P(E_3 \mid E_1 \cap E_2)= \frac{1}{1} = 1$, since our strategy is fixed we have only one possible outcome, i.e. we answer $NO$; a sure event.
Required probability of winning
= $P(E_1)\times P(E_2 \mid E_1)\times P(E_3 \mid E_1 \cap E_2)$
= $\frac{1}{3}$
Case $2$: The situation where we switch initial decision.
Let $F_1$ be the event of choosing the door not containing the prize.
So $P(F_1)= \frac{2}{3}$, as two doors out of 3 do not have the prize.
Let $F_2$ be the event of opening a door containing false by Monty.
Given that $F_1$ has occurred the probability that $F_2$ occurs is:
$P(F_2 \mid F_1)= \frac{1}{1} = 1$; given our incorrect pick, there is only 1 remaining door containing false, and Monty must open it.
Let $F_3$ be the event of answering the question "Do you want to change the door?" by me.
$P(F_3 \mid F_1 \cap F_2)= \frac{1}{1} = 1$, since our strategy is fixed we have only one possible outcome, i.e. we answer $YES$; a sure event.
Required probability of winning
= $P(F_1)\times P(F_2 \mid F_1)\times P(F_3 \mid F_1 \cap F_2)$
= $\frac{2}{3}$
AI: Your reasoning is correct in both cases. What you call a "conditional probability" tracks closer to looking at each event in the game tree separately and multiplying through, the "classical probability" actually uses symmetry.
The "total probability of winning" being $1$ in your first question actually has an interpretation: notice that you must choose exactly one of the two strategies, and regardless of the door Monty opens (which never contains the big prize), exactly one of the strategies must win. Therefore as mutually exclusive and exhaustive events, their probabilities sum to $1$.
|
H: Continuity of piecewise function using topology
I would like to check if what I'm doing is correct or if I'm missing something:
Given the two sets $$X = [0,1] \cup (2,3], \quad Y = [0,2],$$ both equipped with the standard topology, consider the function $f:X \to Y$ defined by $$f(x) = \begin{cases} x & \text{if $x \in [0,1]$} \\ x-1 & \text{if $x \in (2,3]$} \end{cases}.$$
Show that $f$ is bijective from $X$ to $Y$ and continuous, but that $f^{-1}$ is not continuous.
To show that $f$ is continuous, I take the open interval $V = (0,2)$ and find that $\operatorname{Preim}_f(V) = (0,1) \cup (2,3)$, which is open since it's the union of two open sets.
To see that $f$ is bijective, I find the map $$g(x) = \begin{cases} x & \text{if $x \in [0,1]$} \\ x+1 & \text{if $x \in (1,2]$} \end{cases},$$ which is the inverse of $f(x)$. Since I found an inverse, then $f$ is bijective.
Now, what open subset can I consider in order to show that $g = f^{-1}$ is not continuous?
Are the first 2 parts complete, or am I missing anything?
Thanks in advance!
AI: Your first part is incomplete. All you have shown is that there exists one open subset of the codomain such that its preimage is open.
However, you have to show that that is the case for all open subsets of $[0, 2]$.
(The pasting/glueing lemma may be helpful here.)
To show that the inverse is not continuous, consider the open set $\left(\dfrac12, 1\right] \subset X$.
(Why is this open?)
|
H: Lp space Example
How are the spaces $L_{\infty}(E)$ and $L_{p}(E)$ related when $|E| = \infty$?
$$f \in L_\infty, \text{ but } f\notin L_1 \quad f = \frac{1}{x} \quad E = [1, \infty)$$
What can we say about the reverse inclusion? I think there is a suitable counterexample.
AI: Neither containment holds for Lebesgue measure if $|E| = \infty$. Your example shows that $L^{\infty}(E) \not\subset L^1(E)$. For a counterexample in the reverse direction, let $E = (0,\infty)$ and define
$$f(x) = \begin{cases} \frac{1}{\sqrt{x}} & \text{ if }0 < x < 1 \\ 0 & \text{ otherwise} \end{cases}$$
Then $f \in L^1(E)$ but $f \not\in L^{\infty}(E)$.
|
H: $X$ a subset of a vector space $V$, $Tv=Sv$ for $v \in X$; then $T=S$?
Given two linear transformations:$$S,T: V \to W$$
such that $X\subseteq V$, and it is true that $S(v)=T(v), \forall v\in X$, and the exercise requests to prove that $S=T$. I have started by writing the vectors in $X$ as $\{v_{1},\ldots, v_m\}$ and then relating each one of them to a vector in $W$, but I don't think this is the right way to approach the exercise. I'm not so sure what to do next, and if you were able to give me any hints to continue this exercise it would be very helpful.
AI: It is not necessarily true.
Let $T=\begin{bmatrix}1&0\\0&1\end{bmatrix}$ and $S=\begin{bmatrix}0&1\\1&0\end{bmatrix}$ and $X=\{(x,y) \, | \, y=x\}$. Then both $S$ and $T$ agree on $X$ but they are not equal.
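If it helps to see this concretely, here is a quick numerical check of the counterexample (a small NumPy sketch; the sample points on $X$ are arbitrary choices of mine):

    import numpy as np

    T = np.eye(2)                         # the identity map
    S = np.array([[0, 1], [1, 0]])        # the swap map
    for t in (-2.0, 0.5, 3.0):            # sample points on X = {(x, x)}
        v = np.array([t, t])
        assert np.allclose(S @ v, T @ v)  # S and T agree on X
    print(np.allclose(S, T))              # False: S != T as maps on R^2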
|
H: Proving that if $a$ is an element from a group and $|a|=n$, then $C(a)=C(a^k)$, when $k$ is relatively prime to n.
$C$ in question title denotes centralizer of an element $a$ in group (say, $G$) and is defined as follows:
$$C(b)=\{x\in G:bx=xb\}$$
In order to prove the result in title, I proceeded as follows :
Let $x \in C(a)$, hence
$$\begin{align}
ax=xa &\implies a =xax^{-1}\\
&\implies a^k=xa(x^{-1}x)a(x^{-1}x)\cdots a x^{-1}=xa^kx^{-1}\\
&\implies a^kx=xa^k\\
&\implies x \in C(a^k)\\
&\implies C(a) \subseteq C(a^k)
\end{align}$$
For the converse part, let $y\in C(a^k)$ so that $a^ky=ya^k \tag{1}$
Since $n$ and $k$ are relatively prime, $\langle a\rangle=\langle a^k\rangle$, hence $a\in\langle a^k\rangle$, so there is an $r$ such that $a=a^{kr}\implies a^{kr-1}=e$, the identity. Then $n$ divides $kr-1$, hence there exists some $s$ such that $kr-ns =1$.
Hence, $ay=a^{kr-ns}y=a^{kr}ey=a^{kr-k}(a^ky)=a^{kr-k}(ya^k)$ [By (1)].
From here I don't know how to show that $ya=a^{kr-k}(ya^k)$. If it is shown, then it can be deduced that $C(a^k)\subseteq C(a)$. And thus the result will be proved.
Please help. Thanks in advance.
AI: If $x$ commutes with $y$, then it commutes with $y^n$ for every integer $n$: it commutes with $y^1$; if it commutes with $y^r$, then
$$xy^{r+1} = x(y^ry) = (xy^r)y = (y^rx)y = y^r(xy) = y^r(yx) = y^{r+1}x.$$
Thus, it commutes with every positive power of $y$; it clearly commutes with $y^0$. And if $xy^n = y^nx$, then multiplying by $y^{-n}$ on the left and $y^{-n}$ on the right, we get $y^{-n}x=xy^{-n}$, so $x$ commutes with $y^{-n}$.
Now, you know that $\langle a\rangle = \langle a^k\rangle$. So $a\in \langle a^k\rangle$. Use that to show that if $x$ commutes with $a^k$ then it commutes with $a$.
|
H: Tu's An Introduction to Manifolds - Section 26.2 Cohomology of a circle, tabular form.
I'm trying to understand how to use the Mayer-Vietoris sequence to compute Cohomologies. There's a small chapter in Tu's Introduction to Manifolds explaining the basics, with some basic theory.
More specifically, section 26.2 has an example about the circle; later on there are other examples.
There's the following table, which I don't quite understand how to read; I also don't understand how it's filled:
The questions are:
How am I supposed to read the table?
How is it filled exactly? I do understand the author covers $S^1$ with two overlapping arcs, which allows him to build a short exact sequence (I think this should correspond to a row of the table?). I think he's also using the zig-zag lemma, but I'm not sure how. The exact entries are a bit of a mystery to me, especially all those zeros.
Is the Mayer-Vietoris sequence supposed to simplify the computation of cohomologies? With reference to this specific example, what appears to happen is that the author still uses some of the previous results to fill some of the entries. However, by the use of Problem 26.2 he's able to compute the dimension of $H^1(S^1)$ right away, without passing through the calculations done in Example 24.4. But it doesn't seem to me that it simplified things that much.
Can you clarify?
AI: Assuming you're familiar with the Mayer-Vietoris sequence, you know it goes $0 \to 1 \to 2$ in grading, so you read each row left to right, and you start from the bottom and go to the top.
Each entry in the table is filled with the cohomology group of the degree from the row, and the space from the column. The whole point of MV is to use simpler spaces for which you know the cohomology. In this case, these are two arcs that intersect in two points. Since arcs are contractible, their cohomologies are $\mathbb{R}$ in degree $0$ and $0$ otherwise. Usually you use a short exact sequence of (co)chain complexes, specifically $$0 \to C^n (X \cup Y) \xrightarrow{i} C^n(X \coprod Y) \xrightarrow{j} C^n (X\cap Y) \to 0$$ and apply the zig-zag lemma to get Mayer-Vietoris in the first place, so you don't actually have to go through this. The actual Mayer-Vietoris sequence has all the terms in cohomology already written out, so all you have to do is plug and play.
Usually it's a pretty useful tool. Here the space is $S^1$ so it's already pretty simple. But suppose I asked you to calculate the de Rham cohomology of $\mathbb{R}^2 - \{p,q\}$ i.e. a plane punctured twice? MV could help you (though there are other ways for sure). Just checked the book and on page $302$ he calculates the cohomology groups of a torus using MV, that could be illuminating. After further skimming, a lot of chapter $28$ can help illustrate the power of MV.
|
H: Determine $p$ for which the hyperharmonic series $\sum_{k=1}^{\infty} \frac{1}{k^p}$ is convergent
I'm asked to determine $p \in \mathbb{R} $ for which the hyperharmonic series:
$$\sum_{k=1}^{\infty} \frac{1}{k^p}$$ is convergent.
I started using the Ratio Test, which gives me:
\begin{align}
L &= \lim_{k \to \infty} \left| \frac{\frac{1}{(k + 1)^p}}{\frac{1}{k^p}} \right| \\
&= \lim_{k \to \infty} \left| \frac{k^p}{(k + 1)^p} \right| \\
&= \lim_{k \to \infty} \left| \left(\frac{k}{k + 1}\right)^p \right|
\end{align}
But then I'm quite stuck. Anyone have any idea how I can continue?
AI: Use integral test.
$ \sum_{k=1}^{\infty} \frac{1}{k^p}$ is convergent if and only if the improper integral $\int_{1}^{\infty} \frac{1}{x^p}dx$ converges.
Now, when you integrate the latter, you'll see that it converges if and only if $p>1$. (Note that the Ratio Test is inconclusive here: the limit you computed equals $1$ for every $p$.)
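Numerics cannot prove convergence, but a quick look at partial sums illustrates the dichotomy the integral test gives (a rough Python sketch; the cutoffs $10^4$ and $10^6$ are arbitrary):

    def partial_sum(p, N):
        return sum(1 / k**p for k in range(1, N + 1))

    for p in (0.5, 1.0, 2.0):
        print(p, partial_sum(p, 10**4), partial_sum(p, 10**6))
    # p = 0.5 and p = 1.0 keep growing with N; p = 2.0 stabilizes near pi^2/6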
|
H: How does one find if $x^3 + 3x^2 - 8x + 12 = 0 \bmod 5k$ has any root for $k$ an integer?
Actually I want to show that there is no solution to the cubic equation for any value of $k$. I have tried some values of $k$, and I want to see if it can be shown to be true for all $k$.
AI: You could show $n^3+3n^2-8n+12\not\equiv0\pmod 5$ for $n\in\{0,1,2,3,4\}$.
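A two-line check, if you want to confirm the five residues (a Python sketch):

    print([(n**3 + 3*n**2 - 8*n + 12) % 5 for n in range(5)])
    # [2, 3, 1, 2, 2] -- never 0, so the cubic has no root mod 5, hence none mod 5k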
|
H: Show $(A\cup B) \setminus A = B \setminus (A\cap B) $
Show $(A\cup B) \setminus A = B \setminus (A\cap B) $:
My reasoning:
LHS: $$x\in((A\cup B)\setminus A)$$
$$\equiv x\in(A\cup B) \land x\notin A$$
$$\equiv(x\in A \land x\notin A) \lor(x\in B \land x\notin A)$$
$$\equiv x\in B \land x\notin A$$
$$\equiv x\in(B\setminus A)$$
And I know the final result is equivalent to $B\setminus(A\cap B)$ because I can show that:
$$x \in (B\setminus(A\cap B))$$
$$\equiv x\in B \land x\notin(A\cap B)$$
$$\equiv x\in B \land \lnot(x\in A \land x\in B)$$
$$\equiv x\in B\land(x\notin A\lor x\notin B)$$
$$\equiv(x\in B \land x\notin A)\lor (x\in B\land x\notin B)$$
$$\equiv(x\in B \land x\notin A)$$
$$\equiv x \in (B \setminus A)$$
But is there a better way to show the sets are equivalent? And btw I'm not sure if I actually proved anything in the second part by starting from what I should have proved from: $x\in (B\setminus(A\cap B))$ and getting to $x \in (B \setminus A)$
AI: $(A \cup B) \setminus A = (A \cup B) \cap A^c = B \cap A^c$.
$B \setminus (A \cap B) = B \cap (A \cap B)^c = B \cap (A^c \cup B^c) = B \cap A^c$.
Here is an approach using indicator functions:
The indicator function $1_C$ denotes the function $1_C(x) = \begin{cases} 0,& x \notin C \\ 1,& x \in C \end{cases}$.
Note that
\begin{eqnarray}
1_{(A \cup B) \setminus A} &=& \max(0, 1_{A \cup B} - 1_A) =
\max(0, \max(1_A,1_B)-1_A) \\
&=& \max(0,\max(0, 1_B-1_A)) = \max(0, 1_B-1_A) \\
&=& 1_{B\setminus A}
\end{eqnarray}
and
\begin{eqnarray}
1_{B \setminus (A \cap B)} &=& \max(0, 1_B-1_{A \cap B})= \max(0, 1_B-\min(1_A,1_B))\\
&=& \max(0, 1_B+\max(-1_A,-1_B)) = \max(0,\max(1_B-1_A,0)) \\
&=& \max(0, 1_B-1_A) = 1_{B\setminus A}
\end{eqnarray}
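As a final sanity check, here is a brute-force verification over all pairs of subsets of a small universe (a Python sketch; the universe size $4$ is an arbitrary choice):

    from itertools import combinations

    U = range(4)
    subsets = [set(c) for r in range(5) for c in combinations(U, r)]
    assert all((A | B) - A == B - (A & B) for A in subsets for B in subsets)
    print("identity verified on all", len(subsets)**2, "pairs")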
|
H: prove non-continuity using open sets (topology)
In topology, continuity is defined as:
A function $f:X\rightarrow Y$ is continuous if the inverse image of an open set in $Y$ is an open set in $X$.
I have a problem to use it to check the non-continuous function. For example, in J.Munkres' book Topology (2nd Edition) (Pg.109), there is an example $$f(x)=\begin{cases}
x-2, & x<0\\
x+2, & x\geq0
\end{cases}$$
The domain of this function (i.e. the $X$ in the definition) is $\mathbb{R}$, i.e. $x\in(-\infty,\infty)$; the codomain (i.e. the $Y$ in the definition) is $(-\infty,-2)\cup[2,\infty)$. To prove this function is discontinuous at $x=0$, the book chooses the open set $(1,3)$, whose inverse image is $[0,1)$, which is not an open set.
The problem is:
the open set $(1,3)$ is not the subset of the codomain $(-\infty,-2)\cup[2,\infty)$. How can we choose it?
In my point of view, within the codomain $(-\infty,-2)\cup[2,\infty)$, all allowed open sets are either within the subset $(-\infty,-2)$ or within the subset $[2,\infty)$. Within these two parts, the function is always continuous. How can we prove the discontinuity?
AI: The codomain of your function is $\Bbb R$, not $(-\infty,-2)\cup[2,\infty)$. So, there is no problem, since $(1,3)\subset\Bbb R$.
If you want to see $f$ as a map from $\Bbb R$ into $Y=(-\infty,-2)\cup[2,\infty)$, then, instead of $(1,3)$, take its intersection with $Y$, which is $[2,3)$. It is an open subset of $Y$, but $f^{-1}\bigl([2,3)\bigr)$ is not an open subset of $\Bbb R$.
|
H: How To Determine If $\sum_{n=1}^{\infty}\frac{(-1)^n}{n}\left(\sum_{k=0}^{n-1}\binom{2k}{k}\binom{k}{n-k}\right)$ Converges or Diverges?
$$\sum_{n=1}^{\infty}\frac{(-1)^n}{n}\left(\sum_{k=0}^{n-1}\binom{2k}{k}\binom{k}{n-k}\right)$$
Question : How do i determine if the above Series Converges to Diverges?
I have no idea where to begin since i do not have much experience with Sums of this type.
Thank you kindly for your help and time.
AI: Here's a hint:
The inside series is a sum of binomial coefficients, so it "looks like it should be big." To this end, we can try to show that it's greater than, say, $n$; if this is true, then the terms of the series cannot tend to $0$.
By looking at some terms of the inside series, can you see why this should be the case?
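To see the growth concretely, here is a short Python sketch computing the first few inner sums (the helper name `inner` is mine):

    from math import comb

    def inner(n):
        return sum(comb(2*k, k) * comb(k, n - k) for k in range(n))

    print([inner(n) for n in range(1, 8)])
    # [0, 2, 12, 66, 340, 1700, 8344] -- so the terms a_n / n cannot tend to 0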
|
H: Prove the equation combinatorially [full answer provided] - I need explanation for the answer
For every $n, m, r \in \mathbb{N}$ with $m \leq n$,
$P(n,r) = \sum_{k=0}^{r}\binom{r}{k}P(n-m,k)P(m,r-k)$
Prove this combinatorially.
Answer:
The class has $m$ boys and $n - m$ girls. In how many ways can $r$ students be selected for different roles?
On the one hand, the answer is $P(n, r)$, because the order of choice is important and there are no repetitions.
On the other hand, all options divide into $r + 1$ types: choosing $k$ girls and $r - k$ boys,
where $0\leq k \leq r$.
Now, we choose the $k$ roles for the girls in $\binom{r}{k}$ ways.
And then the roles for the boys are unequivocally determined (the boys will occupy the $r - k$ roles remaining).
Now, for the $k$ roles assigned to girls, there are $P(n - m, k)$ ways to choose girls, where the assignment to roles matters. And for the $r - k$ roles assigned to boys, there are $P(m, r-k)$ ways to choose boys, where the assignment to roles matters.
According to the multiplication rule, for $k$ girls there are $\binom{r}{k}P(n-m,k)P(m,r-k)$ ways.
According to the addition rule, the total is $\sum_{k=0}^{r}\binom{r}{k}P(n-m,k)P(m,r-k)$.
My question:
Why all the possibilities fall into r+1 types?
And can someone elaborate the answer in an easy way, or numeric example or different story
AI: You can classify the selections according to the number of girls chosen. If we’re choosing $r$ students altogether, there can be any number of girls from $0$ through $r$; that’s one category for each number of girls from $1$ through $r$, plus one extra for no girls at all, for a total of $r+1$ different categories.
Suppose, for instance, that there are $4$ boys, $5$ girls, and we are selecting $3$ students for $3$ different roles. We can choose a group of $3$ students that contains $0$ girls, or $1$ girl, or $2$ girls, or $3$ girls: those are $3+1=4$ different types of group.
Then we count the number of groups of each type. Imagine picking the $3$ students in sequence, so that there is a first student (who gets the first role), a second student (who gets the second role), and a third student (who gets the third role). If for now we look only at the sex of each student, we see that just $\binom30=1$ sequence is possible with $0$ girls: BBB. There are $\binom31=3$ possible sequences with $1$ girl: BBG, BGB, GBB. There are $\binom32=3$ possible sequences with $2$ girls: BGG, GBG, GGB. And there is $\binom33=1$ possible sequence with $3$ girls: GGG.
When we actually choose specific girls and boys, we have to take into account the individual identities of the students. For instance, there are $4$ boys, so there are $4\cdot3\cdot2=P(4,3)$ ways to choose $3$ of them in a specific order. What about a sequence like BGB? We’re choosing a single girl, which can be done in $5=P(5,1)$ ways, and we’re choosing $2$ boys in a specific order, which can be done in $4\cdot 3=P(4,2)$ ways, so the whole BGB sequence can be chosen in $P(5,1)P(4,2)$ ways. The same goes for the BBG sequence and the GBB sequence, so there are altogether $3P(5,1)P(4,2)$ ways to select a group with $1$ girl.
If we do this for each of the four possible numbers of girls chosen, we find that there are
$$\begin{align*}
&\binom30P(5,0)P(4,3)\text{ groups with }0\text{ girls,}\\
&\binom31P(5,1)P(4,2)\text{ groups with }1\text{ girl,}\\
&\binom32P(5,2)P(4,1)\text{ groups with }2\text{ girls, and}\\
&\binom33P(5,3)P(4,0)\text{ groups with }3\text{ girls.}
\end{align*}$$
In compact form, each line is
$$\binom3kP(5,k)P(4,3-k)\text{ groups with }k\text{ girls}\;,$$
and the total is therefore
$$\sum_{k=0}^3\binom3kP(5,k)P(4,3-k)\;.$$
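We can also check this instance of the identity numerically (a Python sketch using `math.perm` and `math.comb`, with $n=9$, $m=4$, $r=3$ as in the story above):

    from math import comb, perm

    n, m, r = 9, 4, 3   # 5 girls, 4 boys, 3 distinct roles
    lhs = perm(n, r)
    rhs = sum(comb(r, k) * perm(n - m, k) * perm(m, r - k) for k in range(r + 1))
    print(lhs, rhs)     # 504 504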
|
H: Finding a volume of a region defined by |x-y+z|+|y-z+x|+|z-x+y|=1
Find the volume of the region defined by $|x-y+z|+|y-z+x|+|z-x+y|\leq 1$.
I'm having trouble approaching this problem. Could someone maybe give me a hint or a solution, it would be so helpful.
Thanks in advance and sorry for my bad english.
AI: Let's find the volume of the related region
$$|x|+|y|+|z| \leq 1$$
By the symmetry of the region we can reduce it to an integral in the first octant only
$$\iiint_EdV = 8 \iiint_{E\:\cap\:\text{First Octant}} dV$$
Then setting up and doing the integral is not that hard
$$ = 8\int_0^1 \int_0^{1-x} \int_0^{1-x-y} dz\:dy\:dx = 8\int_0^1 \frac{1}{2} - x + \frac{1}{2}x^2\:dx = \frac{4}{3}$$
Now how does this help us with this problem? We can use the substitution
$$\begin{cases}u = x-y+z \\ v = x+y-z \\ w = -x+y+z \\ \end{cases} \implies J^{-1} = \begin{vmatrix} 1 & -1 & 1 \\ 1 & 1 & -1 \\ -1 & 1 & 1 \\ \end{vmatrix} = 4$$
Thus with this change of variables we get that
$$\iiint_{|x-y+z|+|y-z+x|+|z-x+y|\leq 1} dV = \iiint_{|u|+|v|+|w|\leq 1}\frac{1}{4} \:dV' = \frac{1}{3}$$
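As an independent check, a Monte Carlo estimate (a rough Python sketch; the region lies inside the cube $[-1,1]^3$, whose volume is $8$):

    import random

    N, hits = 10**6, 0
    for _ in range(N):
        x, y, z = (random.uniform(-1, 1) for _ in range(3))
        if abs(x - y + z) + abs(y - z + x) + abs(z - x + y) <= 1:
            hits += 1
    print(8 * hits / N)   # approximately 1/3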
|
H: When given $X = −\frac{\ln(1 − U)}{\lambda}$, why is $X \sim \mathrm{Exp}(\lambda)$ and not $\mathrm{Exp}(-\lambda)?$
When given $X = \frac{−\ln(1 − U)}{\lambda}$, why is the distribution of $X \sim \mathrm{Exp}(\lambda)$ and not $\mathrm{Exp}(-\lambda)?$ I solved for $X$ to get:
$-X=\cfrac{\ln(1-U)}{\lambda}$
$-\lambda X=\ln(1-U)$
Considering $1-U$ is equivalent to $U$, then $-\lambda X=\ln(U)$
$\exp(-\lambda X)=U$
I'm not exactly sure of why I can make the leap, but I've trained it into my brain that the above is the equivalent of $X \sim \mathrm{Exp}(-\lambda)$
So, why is the answer considered $X \sim \mathrm{Exp}(\lambda)?$
AI: Using the fact that $\lambda>0$, the exponential function is strictly increasing and that $U$ is uniform one gets
\begin{aligned}
P[X\leq x] &=P[-\ln(1-U)/\lambda \leq x]=P[\ln\big((1-U)^{-1}\big)\leq\lambda x]\\
&=P[\frac{1}{1-U}\leq e^{\lambda x}]\\
& = P[1-U\geq e^{-\lambda x}]= P[U\leq 1-e^{-\lambda x}]\\
&=1-e^{-\lambda x}
\end{aligned}
From this, one sees that $X$ has the distribution corresponding to the exponential distribution with parameter $\lambda$.
One can also use the observation that $1-U$ and $U$ are equal in law.
Then
\begin{aligned}
P[-\ln(1-U)/\lambda\leq x]&=P[-\ln(U)/\lambda\leq x]=P[U^{-1}\leq e^{\lambda x}]\\
&=P[U\geq e^{-\lambda x}] = 1-P[U\leq e^{-\lambda x}]\\
&=1-e^{-\lambda x}
\end{aligned}
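This computation is exactly why $X=-\ln(1-U)/\lambda$ is used for inverse-transform sampling; a quick empirical check (a Python sketch, with $\lambda=2$ chosen arbitrarily):

    import random
    from math import log

    lam = 2.0
    samples = [-log(1 - random.random()) / lam for _ in range(10**6)]
    print(sum(samples) / len(samples))   # approximately 1/lam = 0.5, the Exp(lam) mean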
|
H: Verifying the $N_k$ in Waring's Theorem (Probability)
I know that there are proofs on Waring's Theorem on StackExchange, but I plan on tackling it without looking at the proof. The statement follows from Grimmet and Stirzaker's Probability and Random Processes. This might be pretty silly, but I've actually had a pretty hard time trying to understand what is meant by $N_k$. Assuming that $n=3$ and $k=2$, would it mean that $N_k = \{A_1 \cap A_2, A_1 \cap A_3, A_2 \cap A_3 \}$? And that we're looking for $\mathbb{P}(N_k) = (A_1 \cap A_2) \cup (A_1 \cap A_3) \cup (A_2 \cap A_3)$? Thanks!
AI: No, in your example
$$N_2=\color{red}{\big((A_1\cap A_2)\setminus A_3\big)}\cup\color{blue}{\big((A_1\cap A_3)\setminus A_2\big)}\cup\big((A_2\cap A_3)\setminus A_1\big)\;.$$
The event that $A_1$ and $A_2$ occur but $A_3$ does not is red; the event that $A_1$ and $A_3$ occur but $A_2$ does not is blue; and the event that $A_2$ and $A_3$ occur but $A_1$ does not is black.
In order for exactly $2$ of the $3$ events to occur, not only must some pair, like $A_1$ and $A_3$, occur, but the remaining one must fail to occur. Your
$$(A_1\cap A_2)\cup(A_1\cap A_3)\cup(A_2\cap A_3)$$
includes the event that all three occur, and indeed includes it three times, once in each of the parenthesized terms.
|
H: What sample size is needed to ensure a majority?
The results of a sample of voters showed that $55\%$ voted for a given candidate. It was determined that at a confidence level of $0.95$ that candidate would be the winner (i.e. would receive the majority of the votes). What sample size is needed to ensure the accuracy of that statement?
Given the mean, standard deviation, and sample size, I can find a confidence interval using $Z_{1-\alpha/2}$ and without the standard deviation with $t_\alpha(n-1)$.
This question, however, doesn't seem to give the necessary information to use either approach.
Perhaps the possible $5\%$ difference between $55\%$ and "majority" should be used in the calculation, but I don't know how to approach making the correct formulas.
AI: You are on the right track but not quite right. Unfortunately this question is a bit philosophically ambiguous.
If you decide to accept the claim that this candidate won with 95% confidence when the 95% confidence interval for $p$ lies entirely to the right of $0.5$, then you would want to have $0.55-1.96 \sqrt{0.55 \cdot 0.45/n}>0.5$. Here basically the idea is that for true/false data, we get the population standard deviation for free when we assume we know what the mean is.
So with some rounding you want $1.96 \cdot 0.497/\sqrt{n}<0.05$, so $n>\left ( \frac{0.05}{1.96 \cdot 0.497} \right )^{-2} \approx 380$. My rounding is crude so don't take all those numbers as being exactly right.
I claim that the above way of doing things has two philosophical flaws. One is that using confidence intervals is not really the right thing to do here in the first place, since the concept underlying confidence intervals is two-sided hypothesis tests, which aren't appropriate here (since you care about the claim $p>0.5$). The other is that you are estimating the standard deviation from the sample proportion, which is rather "optimistic". This can be seen by considering the extreme case of a sample with all votes for the same candidate. In this case the procedure above will conclude that this candidate won with any sample size and any level of significance.
I would argue that a way to work around these flaws is to use a null hypothesis of $p=0.5$ and run a one-sided test for rejecting this hypothesis when $\hat{p}=0.55$. Thus you reject the null hypothesis if $P(\hat{p} \geq 0.55)<0.05$. Now your test statistic is $\frac{0.55-0.5}{\sqrt{0.5^2/n}}=\frac{0.05}{0.5} \sqrt{n}=0.1 \sqrt{n}$ and you reject the null hypothesis if $P(Z \geq 0.1\sqrt{n})<0.05$, i.e. if $0.1\sqrt{n}>1.65$ so if $n \geq 273$. (Note that this is using the normal approximation but you could do it over again with an explicit binomial calculation, now that you know to look for the cutoff around $n=270$ or so.)
This number is made smaller than the previous number by the fact that we did a one-sided test whereas the confidence interval is effectively doing a two-sided test, but it is made larger by the fact that our estimate for the standard deviation is bigger since we computed it under the assumption $p=0.5$ instead of $p=0.55$.
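For what it's worth, here is a small Python sketch reproducing both sample sizes with exact normal quantiles (so the numbers differ slightly from the hand-rounded ones above; the 273 in the text comes from rounding $z$ to $1.65$):

    from math import sqrt
    from statistics import NormalDist

    z_two = NormalDist().inv_cdf(0.975)   # about 1.96
    z_one = NormalDist().inv_cdf(0.95)    # about 1.645

    n_ci = (z_two * sqrt(0.55 * 0.45) / 0.05) ** 2    # CI entirely above 0.5
    n_test = (z_one * 0.5 / 0.05) ** 2                # one-sided test of p = 0.5
    print(round(n_ci), round(n_test))     # about 380 and about 271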
|
H: Given an open cover of $X$, $f:I \rightarrow X$, $\exists 0\leq s_1 ..\leq s_n = 1$ such that $f([s_i, s_{i+1}]) \subset A_\alpha$, $A_\alpha$ unique
The problem:
You are given a continuous function $f: [0,1] \rightarrow X$.
We choose an open cover of $X$, $A_{\alpha}$.
Then we want to show that there exist $0= s_1 \leq \dots \leq s_n=1$ such that $f([s_i, s_{i+1}]) \subseteq A_k$ for a unique $A_k$, for all $i$.
This was from Hatcher.
My Attempt:
First note that the collection $\{f^{-1}(A_\alpha)\}$ is an open cover of $[0,1]$, so we may choose some finite subcover of $[0,1]$, say $f^{-1}(A_1), \ldots ,f^{-1}(A_n)$.
From Lebesgue's Number Lemma (https://en.wikipedia.org/wiki/Lebesgue%27s_number_lemma), choose a Lebesgue number $\delta > 0$ for this cover.
Note if $t,s \in [0,1]$, $t<s$ and $\left|t-s\right| < \delta$, by the lemma, $[t,s] \subseteq f^{-1}(A_i)$ for some $A_i$ in the cover. Hence, $f([t,s]) \subseteq A_i$.
So partition $[0,1]$ into closed intervals of length less than $\delta$. Then the image of each interval is contained in some $A_i$.
But my problem is that each closed interval needs to be mapped to a single or unique $A_i$. But from my proof, I cannot see the uniqueness part.
So if anyone could help with the uniqueness part, I would be grateful.
Thanks for your help!
AI: I think you don't have to worry about the uniqueness part, and that the author just wasn't very clear. In fact, I think it's false; consider this example.
Let $f:I \to \mathbb{R}^2$ be a loop at the base point $(0,0)$ and let $A_t$ be the family of half-planes $A_t = \{x < t\}\subseteq \mathbb{R}^2$ for $t > 0$. Then how can a section of the path belong to just one element only of the family?
|
H: In a fraternal twin pregnancy what is the probability that both children inherit a particular chromosome?
Take a given chromosome that is only present in one of the parents of a child. Assume that the probability of one child inheriting this chromosome is $\frac{1}{2}$. At the beginning of the pregnancy, before testing, the probability that both children inherit this chromosome is clearly $\frac{1}{4}$.
Partway through the pregnancy a test is done to determine if either twin has inherited the chromosome. The test can only determine the presence of the chromosome in the bloodstream and so can only determine if at least one twin has the chromosome. It comes back positive, showing that at least one of the children has inherited the chromosome.
Given this test result, what is the probability that both twins have inherited the chromosome?
AI: Hint:
You have suggested $\frac14$ of such pregnancies have both children inheriting the chromosome.
What proportion of such pregnancies have neither inheriting the chromosome?
What proportion of such pregnancies have at least one inheriting the chromosome?
What proportion of pregnancies with at least one inheriting the chromosome have both children inheriting the chromosome?
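Once you have worked through the hints, you can confirm your final answer with a short simulation (a Python sketch; each twin independently inherits with probability $1/2$):

    import random

    both = at_least_one = 0
    for _ in range(10**6):
        t1, t2 = random.random() < 0.5, random.random() < 0.5
        if t1 or t2:
            at_least_one += 1
            both += t1 and t2
    print(both / at_least_one)   # the conditional probability asked for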
|
H: $X_1,X_2, \ldots$ be i.i.d. Show that $\mathbb{E}|X_1| < \infty $ iff $ \frac{X_n}{n} \to 0$ a.s
Suppose $X_1,X_2, \ldots$ be i.i.d. Show that $\mathbb{E}|X_1| < \infty \Leftrightarrow \frac{X_n}{n} \to 0$ a.s
I tried using Markov but I don't know anything about $\mathbb{E}X$. I was also thinking of Borel–Cantelli, to show $\sum P\left[\left|\frac{X_n}{n}\right|>\varepsilon\right]< \infty$ for each $\varepsilon>0$ and then invoke the first Borel–Cantelli lemma, but I am confused about how to even get to the part where the sum is finite.
AI: This is easily proved using Borel-Cantelli Lemma and the following well known fact:
For a non-negative random variable $Y$ we have $EY <\infty$ iff $\sum P(Y>n) <\infty$.
Now let $\epsilon >0$. Taking $Y=\frac {|X_1|} \epsilon$ we see that $E|X_1|<\infty$ iff $\frac{E|X_1|}{\epsilon}<\infty$ iff $\sum P\left(\frac {|X_1|} {\epsilon} >n\right) <\infty$ iff $\sum P\left(\frac {|X_n|} n >\epsilon\right) <\infty$ iff $P\left( \frac {|X_n|} n >\epsilon\hspace{0.2cm} \text{ i.o.}\right)=0$. Can you finish the proof?
|
H: Finding average from uniformly distributed values
Let's say, I am sending network packets at the rate of 450 packets per second.
The size of these packets is uniformly distributed between 100 to 500 bytes.
I want to know what's the average packet size / per second?
I tried to solve it:
Since it's uniformly distributed, the average of the size is (100 + 500)/2 = 300.
So, the answer is simply 300 bytes?
AI: Let's add a bit of formalism here. Let $X$ (a random variable) be the size of the packet. We know that $X \sim \mathrm{Unif}(100,500)$. Thus the expected size of a packet is $$\mathrm{E}(X)=\int_{100}^{500} {\frac{1}{500-100}x}\mathrm{d}x=300.$$ Then, since packets are sent at a rate of $450$ packets per second, the average amount of data sent per second is $$300 \text{ bytes per packet} \times 450 \text{ packets per second} = 135000 \text{ bytes per second}.$$
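A quick simulation confirms both numbers (a Python sketch; "one second" here just means drawing 450 packet sizes):

    import random

    sizes = [random.uniform(100, 500) for _ in range(450)]   # one second of traffic
    print(sum(sizes) / len(sizes))   # approximately 300 bytes per packet
    print(sum(sizes))                # approximately 135000 bytes in that second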
|
H: Proof on "No rectangles" on a grid
I was solving a problem of the UVA judge called "No rectangles". The problem is about picking points from an $n\times n$ grid such that $k$ points are chosen from each row and column but no $4$ of the points form a rectangle with sides parallel to the grid. They claim the following in the statement of the problem
It can easily be shown that for any given value of $k$, $k^2 − k + 1$ is a
lower bound on the value of $n$, and it can be shown further that $n$ need
never be larger than this.
Proving this statement is not necessary to solve the problem. However, I've been thinking on how to prove it and I do not know how I could do it. Any hints/clues?
AI: Here is a proof of the lower bound. Let us consider the chosen points in each row as a subset of $S=\{1,\dots,n\}$, so we are selecting $n$ subsets $A_1,\dots,A_n$ of $S$, each of size $k$, so that each element of $S$ is in $k$ of the $A_i$'s. Avoiding rectangles means that $|A_i\cap A_j|\leq 1$ if $i\neq j$.
Now consider the $k$ different $A_i$ such that $1\in A_i$. All of the elements of these sets must be distinct except for $1$, so each one has $k-1$ elements that are not in any of the others. This gives $k(k-1)+1=k^2-k+1$ different elements of $S$ (the $k-1$ unique elements of the $k$ sets, plus the element $1$). Thus $n\geq k^2-k+1$.
The claim that $n=k^2-k+1$ is possible in general is incorrect, however. Indeed, observe that when $n=k^2-k+1$, the argument above shows that for every $j\in S\setminus\{1\}$, there is exactly one $A_i$ which contains both $1$ and $j$. Similarly, every two-element subset of $S$ is contained in exactly one $A_i$. That is, in terms of the original grid formulation, for every pair of columns there is exactly one row with a point in both columns. Dually, for every pair of rows there must be exactly one column with a point in both rows. This means that such a configuration is exactly the incidence relation of a finite projective plane, with the rows representing points and the columns representing lines. (Thanks to antkam for suggesting this connection in a comment!)
Given a finite projective plane with $n=k^2-k+1$ points, the number $k-1$ is known as the order of the projective plane. Not every number can be the order of a projective plane; for instance, it is known that there is no projective plane of order $6$, so the statement you quoted is wrong for $k=7$. It is known that projective planes exist for any prime power order and it is conjectured (but not known!) that these are the only possible orders.
Here is how you can construct an example when $k-1$ is a prime power. The key fact we use is that then there exists a field $F$ with $k-1$ elements. Let us then think of our $(k^2-k+1)$-element set $S$ as $$\{0\}\cup((F\cup\{\infty\})\times F).$$ Our first $k$ sets $A_1,\dots,A_k$ are just $\{0\}\cup\{x\}\times F$ for each $x\in F\cup\{\infty\}$. The remaining sets will then all be subsets of $(F\cup\{\infty\})\times F$ which contain one element from each $\{x\}\times F$, so they are just (graphs of) functions $F\cup\{\infty\}\to F$. So, we need to pick $n-k=k^2-2k+1=(k-1)^2$ functions $F\cup\{\infty\}\to F$ such that every possible ordered pair is in exactly $k-1$ of the functions and no two functions have more than one point in common. To do this, for each $a,b\in F$ define $f_{a,b}(x)=ax+b$ for $x\in F$ and $f_{a,b}(\infty)=a$. These functions $f_{a,b}$ have the desired properties because $F$ is a field (so that any two points of $F^2$ with different first coordinates can be interpolated by a unique linear function; similarly given any point in $F^2$ and a choice of slope there is a unique linear function through that line with that slope).
(This construction is known as the projective plane over $F$ and can be described in many other ways. One particularly elegant way is to say that the rows of the grid correspond to 1-dimensional subspaces of the vector space $F^3$, the columns correspond to 2-dimensional subspaces of $F^3$, and you draw a point whenever the 1-dimensional subspace corresponding to a row is contained in the 2-dimensional subspace corresponding to a column.)
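To make the construction concrete, here is a Python sketch building the smallest case $k=3$ (so $F=\mathrm{GF}(2)$ and $n=7$, the Fano plane) and checking the defining properties; the encoding of $S$ as $\{0\}\cup((F\cup\{\infty\})\times F)$ follows the recipe above:

    from itertools import product, combinations

    F = [0, 1]                        # the field GF(2); k - 1 = 2, k = 3, n = 7
    INF = 'inf'

    # the first k sets: {0} together with a "column" {x} x F
    lines = [frozenset([0] + [(x, y) for y in F]) for x in F + [INF]]
    # the remaining (k-1)^2 sets: graphs of f_{a,b}(x) = ax + b, with f(inf) = a
    for a, b in product(F, F):
        lines.append(frozenset([(x, (a * x + b) % 2) for x in F] + [(INF, a)]))

    assert len(lines) == 7 and all(len(L) == 3 for L in lines)
    assert all(len(L1 & L2) == 1 for L1, L2 in combinations(lines, 2))
    print("7 sets of size 3; any two meet in exactly one element")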
|
H: Why isn't $f(x)=0$ ever mentioned as a solution to $f'(x)=f(x)$?
I know that $f(x)=e^x$ is the accepted and useful solution to $f'(x)=f(x)$, but why isn't $f(x)=0$ ever mentioned as a solution as well? Is it simply because it's not useful?
AI: Since $f'(x) = f(x)$ is a linear differential equation, if $f(x)$ is a solution then so is $kf(x)$ for $k \in \mathbb{R}$. So the solution is of the form $f(x) = Ae^x$ for $A \in \mathbb{R}$. Here $A$ is determined by some given initial condition.
The choice of $A =0$ gives the solution $f(x) = 0$.
|
H: How to find all forms of the fraction that would be in between two other fractions?
I've been going through lots of my math textbooks, and I'm able to solve a lot of them using some specific method or formula. But there's one problem I've come across quite a few times that I just couldn't figure out how to do. One example is...
Suppose that $\frac {4}{2001} < \frac {a}{a+b} < \frac {5}{2001}$. Compute the number of different possible integer values that $\frac {b}{a}$ can take on.
Does anyone know a reliable method to do these kind of these problems, or even better, a formula? My best attempts have been to manually list out all of the possibilities, and it would really help if someone could explain a good way to do these kind of problems.
AI: Hint: the inequality you are trying to satisfy is the same as
$$\frac{2001}{4} > \frac{a+b}{a} > \frac{2001}{5}.$$
|
H: Does $a=ea$ and $ae=eae$ for some $a\in R$, imply that $a=e$ for any idempotent $e$ in $R$?
Let $R$ be a ring with unity and $e^2=e\in R$. If $a=ea$ and $ae=eae$ for some $a\in R$, then prove that $a=e$.
Solution: Suppose that $a=ea$ and $ae=eae$, then $a^2=eaea=e(eae)a=eeaea=eaea=eaa=ea^2$. That is, $a^2=ea^2$. If $e\neq 1,0$, then $a=e$ applies. If $a=1,0$, here it is not making sense.
Question: Is the argument above valid?
AI: Note that if $a=ea$, then automatically $ae=eae$, so the second hypothesis adds nothing.
As it stands it’s definitely false. Let $R$ be the ring of $2\times 2$ real matrices, and take $e=\pmatrix{1&0\\0&0}$ and $a=\pmatrix{1&1\\0&0}$. Then $e^2=e$ and $a=ea$, but $e\ne a$.
|
H: Let $x,y>1$ be coprime integers and $g>0$ a real number such that $g^x,g^y$ are both integers. Is it true that $g\in\mathbb N$?
Let:
$x, y\ $ be coprime integers greater than $1$
$g \in \mathbb{R}^+$
$g^x, \ g^y \in \mathbb{N}$
Proposition: $g \in \mathbb{N}$
I have not managed to prove it. Via the fundamental theorem of arithmetic, I have proved (at least I believe so) that if $g$ is rational, it is an integer. However, I am not sure about the case of irrational $g$; I believe such irrational $g$ can exist. Can anyone (dis-)prove this?
My intuition behind it is this (I'm only using $x$ but the same applies to $y$): for $g^x$ to be integer, $g$ must be the $x^\text{th}$ root of $r_x = g^x$, i.e. $g = \sqrt[x]{r_x}$. We can construct the divisor lattice of $r_x$ and its "divisor cuboid": a generalized Hasse diagram of divisors of $r_x$, in which the divisors are equally spaced, and which is continuous, viz. there are real numbers between the integer divisors. Now take the vector spanning the distance from the origin (the number $1$) to $r_x$ in the divisor cuboid, and scale down this vector by factor $x$ to obtain $\sqrt[x]{r_x} = g$. To allow for a $y$ and $r_y$ in all possible cases of $r_x$, we must extend the divisor cuboid to all the primes. Then there are uncountably many points in the divisor cuboid in which the value equals $g$, since we can express $g$ as a real power of any single prime (or a product thereof). So it may be possible to find several such $n \in \mathbb{N}$, which are mutually prime and for which $g^n \in \mathbb{N}$. But I am not even fully convinced of this myself; let alone consider this a proof that could convince someone else.
AI: If $x$ and $y$ are coprime then there exist integers $a$ and $b$ such that $ax+by=1$. So if $g^x$ and $g^y$ are both natural numbers, we see that $g^{ax+by} = g$ is a rational number. So $g$ is rational. Say $g=p/q$ where $p$ and $q$ are relatively prime.
Now we have $g^x = p^x/q^x = n$ where $n$ is a natural number. Therefore $q$ divides $p^x$. But since $p$ and $q$ are relatively prime, $q$ is also relatively prime to $p^x$, so the only possibility is that $q=1$. This means that $g$ is a natural number.
|
H: $\operatorname{Cov}[\vec{X}\cdot({\bf{v}} \operatorname{Cov}[\vec{X},Y]), \vec{X}] = {\bf{v}\bf{v}}^{-1}\operatorname{Cov}[Y, \vec{X}]$?
In Shalizi's Advanced Data Analysis from an Elementary Point of View p.44, he writes that for a variable $Y$ with a $p$-dimensional vector of predictors $\vec{X}$, and $\bf{v}$ the covariance matrix of $\vec{X}$, that
$$\operatorname{Cov}[\vec{X}\cdot({\bf{v}} \operatorname{Cov}[\vec{X},Y]), \vec{X}] = {\bf{v}\bf{v}}^{-1}\operatorname{Cov}[Y, \vec{X}].$$
I am comfortable manipulating the covariance when it is a functions of one dimensional random variables, but I don't even know where to begin to get the RHS from the LHS. I am especially confused by the meaning of $\vec{X}\cdot({\bf{v}} \operatorname{Cov}[\vec{X},Y])$. If $\vec{X}$ is $p\times 1$ and presumably $({\bf{v}} \operatorname{Cov}[\vec{X},Y])$ is $p\times p$, then the product doesn't make sense to me. Even if this is a typo and it should be written as $\vec{X}^{T}$, I'm not sure I would know what to do.
My only gut instinct is that there is some law of iterated expectations that can be unraveled from the nested covariance in the first argument.
AI: In full-matrix notation, note that $a \cdot b$ is $a^T b$, or $b^T a$. Also, for ease of typing, let me write your $\mathbf{v}$ as $V$ and suppress the arrow above $\vec{X}$. Moreover, denote $\text{Cov}(X, Y)$ by $a \in \mathbb{R}^{p \times 1}$, hence $\text{Cov}(Y, X) = a^T \in \mathbb{R}^{1 \times p}$. It follows that
\begin{align}
\text{Cov}(X \cdot (Va), X) = \text{Cov}((Va)^TX, X) = (Va)^T\text{Cov}(X, X) = a^TV^TV = \text{Cov}(Y, X)V^TV.
\end{align}
$V^{-1}$ should not be there anyway.
|
H: Proving non-differentiability of $f:\mathbb{R}^2 \to \mathbb{R}$
Question: Given $f:\mathbb{R}^2 \to \mathbb{R}$ defined by $f(x, y) =
\begin{cases}
x, & \text{if $y=x^2$} \\
0, & \text{otherwise}
\end{cases}$, show $f$ is not differentiable at $(0, 0)$.
Attempt: I know a few things about $f$: it is continuous at $(0, 0)$ and has continuous directional derivatives (but am yet to prove these).
To prove non-differentiability, I need to show that there does not exist a linear mapping $A$ from $\mathbb{R}^2$ to $\mathbb{R}$ (which can be represented by the $2 \times 1$ matrix $\begin{bmatrix}
a \\
b \\
\end{bmatrix}$) so that $\lim_{h \to 0, h \in \mathbb{R}^2} \frac{\Vert f(x+h) - f(x) - Ah \Vert}{\Vert h \Vert} = 0$ where $x=(0, 0)$.
To do this, I considered the LHS of the equation and intend to show that it does not limit to $0$. Letting $h=(h_1, h_2)$ gives $$\lim_{(h_1, h_2) \to 0} \frac{\Vert f((0, 0)+(h_1, h_2)) - f(0,0) - A(h_1, h_2) \Vert}{\Vert (h_1, h_2) \Vert}=\lim_{(h_1, h_2) \to 0} \frac{\Vert f(h_1, h_2) - A(h_1, h_2) \Vert}{\sqrt{h_1^2+h_2^2}}$$ however I am unsure of how to further evaluate this since we do not know $f(h_1, h_2)$ and I am unsure of what $A(h_1, h_2)$ evaluates to.
Any help would be greatly appreciated.
AI: Firstly, the matrix $A$ will be a $1 \times 2$ matrix, so $A=\begin{bmatrix}a&b\end{bmatrix}$. So $A\begin{bmatrix}h_1\\h_2\end{bmatrix}=ah_1+bh_2$.
Suppose $(h_1,h_2) \to (0,0)$ but $h_2 \neq h_1^2$. Then $f(h_1,h_2)=0$. This means
$$\lim_{(h_1, h_2) \to 0} \frac{\Vert f(h_1, h_2) - A(h_1, h_2) \Vert}{\sqrt{h_1^2+h_2^2}}=\lim_{(h_1, h_2) \to 0} \frac{|ah_1+bh_2|}{\sqrt{h_1^2+h_2^2}}.$$
Further suppose we were approaching $(0,0)$ along the $x-$axis, i.e. $h_2=0$ and $h_1 \to 0$. Then the above limit
$$\lim_{(h_1, h_2) \to 0} \frac{|ah_1+bh_2|}{\sqrt{h_1^2+h_2^2}}=|a|.$$
Likewise if we were approaching $(0,0)$ along the $y-$axis, i.e. $h_1=0$ and $h_2 \to 0$. Then the above limit
$$\lim_{(h_1, h_2) \to 0} \frac{|ah_1+bh_2|}{\sqrt{h_1^2+h_2^2}}=|b|.$$
For differentiability, we want the limits to be $0$. So $|a|=|b|=0$
But we could approach $(0,0)$ along the path $y=x^2$ as well, i.e. $h_2=h_1^2$ and $(h_1,h_2) \to (0,0)$. In which case $f(h_1,h_2)=h_1$. Then,
$$\lim_{(h_1, h_2) \to 0} \frac{\Vert f(h_1, h_2) - A(h_1, h_2) \Vert}{\sqrt{h_1^2+h_2^2}}=\lim_{(h_1, h_2) \to 0} \frac{|(a-1)+bh_1|}{\sqrt{1+h_1^2}}=|a-1|.$$
For differentiability we want $|a-1|=0$ as well. But then $a=1$, a contradiction.
|
H: Uniqueness of the Frechet Derivative: the role of $x \in int_X(T)$
I'm currently trying to learn some functional analysis as a way to improve my ability to read economic theory papers. I've come across what I thought was a simple proof but on reflection I don't think I'm grasping it. I'm not a mathematician so I apologise if this question is rather trivial! My problem lies in the proof of the uniqueness of the Frechet derivative.
Here is the definition that I'm using (From Efe OK's book Real Analysis with Economic Applications).
Definition
Let $X$ and $Y$ be two normed linear spaces and $T$ a subset of $X$. For any $x \in int_X(T)$, a map $\Phi : T \rightarrow Y$ is said to be Frechet differentiable at $x$ if there is a continuous linear operator $D_{\Phi,x}\in \mathcal{B}(X,Y)$ such that
\begin{equation}
\lim_{\omega \rightarrow x} \frac{\Phi(\omega)-\Phi(x)-D_{\Phi,x}(\omega-x)}{\left\lVert \omega-x \right\rVert} = \mathbf{0}
\end{equation}
The linear operator $D_{\Phi,x}$ is called the Frechet derivative of $\Phi$ at $x$.
The proof proceeds by taking any two $K,L \in \mathcal{B}(X,Y)$ that satisfy the definition of the Frechet derivative, with $D_{\Phi,x} = K$ and $D_{\Phi,x} = L$. We must then have
\begin{equation}
\lim_{\omega \rightarrow x} \frac{(K-L)(\omega-x)}{\left\lVert \omega-x \right\rVert} = \mathbf{0}
\end{equation}
The next step is where I'm confused. Since $int_X(T)$ is open, this is equivalent to saying that
\begin{equation}
\lim_{v \rightarrow \mathbf{0}} \frac{(K-L)(v)}{\left\lVert v \right\rVert} = \mathbf{0}
\end{equation}
The rest of the proof is reasonably straightforward. The author provides a warning in the footnotes that if $x \notin int_X(T)$ the final two displayed equations are not equivalent, and the Frechet derivative in this case is not unique.
It seems intuitively reasonable that the final two expressions are equivalent but I'm not sure how to show it. My initial thought is that, with $x$ in the boundary, it limits the directions from which one can converge to it.
AI: It is easier to see this using an $\epsilon$-$\delta$ argument.
$\Phi$ is differentiable at $x$ iff there exists some continuous linear $L$ such that for all $\epsilon>0$ there is some $\delta>0$ such that if $\|x-y\| < \delta$ then
$\|\Phi(y)-\Phi(x) - L(y-x) \| \le \epsilon \|y-x\|$.
Suppose $K,L$ satisfy the equation, then
$\|(K-L)(x-y)\| \le \|\Phi(y)-\Phi(x) - K(y-x) \| + \|\Phi(y)-\Phi(x) - L(y-x) \|$. Now choose $\epsilon>0$ and get some $\delta_L,\delta_K >0$
such that the above holds. Then if $\|x-y\| < \min(\delta_L,\delta_K)$ we have
$\|(K-L)(x-y)\| \le 2 \epsilon \|y-x\|$.
Since $x$ is in the interior, there is some ball $B(x,\eta) \subset T$, and so
for any $h \in B(0,1)$ we have
$\|(K-L) \eta h\| \le 2\epsilon \eta \|h\|$, or $\|(K-L) h\| \le 2\epsilon \|h\|$, since $(K-L)$ is linear.
In particular, $\|K-L\| \le 2\epsilon$. Since $\epsilon>0$ was arbitrary
we have the desired result $K=L$.
|
H: Help proving limits don't exist.
I am not asking about any specific limits. For some reason, rigorously proving that a limit does not exist using an epsilon-delta proof gives me a lot of trouble. Typically my book does this by negating the definition of a limit. Deciding on what to use as the $\epsilon$ is generally tricky for me. Is there a general strategy for this? Could you explain to me your thought process when you prove that one does not exist? Also, these are single-variable calculus type limits.
AI: A limit fails to exist when the function does not approach a single value near the point of interest.
Most commonly, a limit DNE when
$$\lim_{x\to a^-}f(x)\neq \lim_{x\to a^+}f(x).$$
This typically occurs in piecewise or step functions.
A common misunderstanding is that limits DNE when there is a point discontinuity in rational functions. On the contrary, the limit exists perfectly well at the point of discontinuity.
Another important way that limits fail to exist is that they go off to infinity.
For example,
$$\lim_{x\to1}\frac{1}{(x-1)^2}=\infty$$
One general strategy is to try solving $|f(x) − L| <\epsilon$ for $x$. Once you know what values of x
will work, you choose $δ$ so that the interval $(a − \delta, a + \delta)$ sits inside the set of solutions.
$l$ is not the limit if there exists an $\epsilon >0$ such that no choice of $\delta >0$ ensures $|f(x)-l|<\epsilon$ whenever $0<|x-x_0|<\delta$.
|
H: Denseness of a sequence in 2-Torus
I want to show that if $\alpha$ and $\beta$ are rationally independent irrational numbers, i.e. $m\alpha + n\beta \not\in\mathbb{Z}$ for all $(m,n)\in\mathbb{Z}^2\setminus\{(0,0)\}$, then the sequence
$\{ (n\alpha$ (mod 1) , $n\beta$ (mod 1) $\}_{n\in\mathbb{Z}}$ is dense in 2-Torus.
I managed to argue that $\{n\alpha$ (mod 1) $\}_{n\in\mathbb{Z}}$ and $\{n\beta$ (mod 1) $\}_{n\in\mathbb{Z}}$ are dense in 1-Torus. However, I do not see how to use rational independence to arrive at denseness of the above sequence in two torus.
AI: The set $\{(k\alpha,k\beta) \mod 1: k \in \mathbb Z\}$ is a subgroup of $\mathbb T^2$, and its closure is a closed subgroup. If that is not all of $\mathbb T^2$, there must be a nontrivial member of the dual group $\mathbb Z^2$ that annihilates it, i.e. a pair of integers $(m,n)$, not both $0$, such that $m\alpha + n\beta \in \mathbb Z$, which would say $\alpha$ and $\beta$ are rationally dependent.
|
H: On the dimension of a vector space
Let $U, V, W$ be finite dimensional $\mathbb KG$-modules. Assume that the sequence of homomorphisms
$0 \to U \to V\to W\to 0$ is left exact but not right exact. Then, $\dim V \leq \dim U + \dim W.$
How to prove this?
AI: This is a question about vector spaces, and the action of $G$ is irrelevant.
The sequence $0\to U\to V\to W$ is exact. This means that the image of the
map $U\to V$ (call it $f$) is the kernel of the map $V\to W$ (call it $g$)
and that $f$ is injective. By the rank-nullity formula, $\dim \ker g=\dim V-\dim g(V)$.
As $f$ is injective, $\dim\ker g=\dim f(U)=\dim U$. Then
$$\dim V=\dim U+\dim g(V)\le\dim U+\dim W$$
since $g(V)\subseteq W$.
|
H: Why are hyperbolic functions defined by area?
I have successfully derived the hyperbolic functions in terms of exponentials from the graphical definition:
For area $u/2$ bounded by the unit hyperbola ($x^2 - y^2 = 1$), a ray from the origin to a point $(a,b)$ on the hyperbola, and the $x$-axis, we define $\cosh u = a$ and $\sinh u = b$.
With some calculus and algebra, this definition can yield:
$$\sinh u = \frac{e^u-e^{-u}}{2}$$
$$\cosh u = \frac{e^u+e^{-u}}{2}$$
However, I am not very content with my derivation. I understand that the graphical definition is supposed to compare with the definition of trigonometric functions from the unit circle, as $\sin u$ and $\cos u$ can similarly be defined by bound area. My question is: why? Why not define hyperbolic functions from angle? From arc length? Why area specifically? Answers from other questions from this website and others states that "it is the only definition that carries over". If anyone can elaborate on this, it will be greatly appreciated, thanks!
Edit: It has been brought to my attention that you can define hyperbolic functions from arc length. Despite this, the definition in terms of area is still the most popular and most used. I remain hopeful that an intuitive reason exists.
AI: The unit circle is the locus of points with unit distance from the origin in the Euclidean metric: that is, the metric corresponding to the standard dot product $(x_1,y_1)\cdot (x_2,y_2)=x_1x_2+y_1y_2$. You can define the standard trig functions as parameterizations of the unit circle in the Euclidean metric, using either arc length or area; both definitions will be equivalent.
Similarly, the unit hyperbola is the locus of points with unit distance from the origin in the Minkowski metric: that is, the metric corresponding to the nonstandard dot product $(x_1,y_1) \cdot_M (x_2,y_2)=-x_1x_2+y_1y_2$. Again, you can define the hyperbolic functions as parameterizations of the unit hyperbola in the Minkowski metric, using either arc length or area; both definitions will be equivalent.
So why does it appear that you can define the hyperbolic functions using area, but not using arc length? Because you're studying the hyperbola in the Euclidean metric! The Euclidean arc length of a curve $\displaystyle \int_C \sqrt{dx^2+dy^2}$ is usually different from the Minkowski arc length $\displaystyle \int_C \sqrt{-dx^2+dy^2}$ of that curve, so switching metrics leads to a different arc length parameterization for the hyperbola.
However, both the Euclidean and Minkowski definitions of area turn out to be equivalent. To see this, note that we can just show it for parallelograms, and then integrate to get the result for arbitrary shapes.
The signed area of the parallelogram spanned by the vectors $v_1=\left<x_1,y_1\right>$ and $v_2=\left<x_2,y_2\right>$ can be computed via the dot product as follows. First, let $v_1^\perp=\left<-y_1,x_1\right>$ be one of the two vectors perpendicular to $v_1$ (that is, with $v_1^\perp \cdot v_1=0$), and with the same magnitude as $v_1$. Then the signed area of the parallelogram spanned by $v_1$ and $v_2$ is $x_1y_2-y_1x_2$, which can be conveniently written in the form $v_1^\perp \cdot v_2$.
Working in the Minkowski metric, if $v=\left<x,y\right>$, then the vector perpendicular to $v$ with the same magnitude is $v^\perp=\left<y,x\right>$. So if $v_1=\left<x_1,y_1\right>$ and $v_2=\left<x_2,y_2\right>$, then $v_1^\perp \cdot_M
v_2=-y_1x_2+x_1y_2=x_1y_2-y_1x_2$, which is coordinate-wise identical to the expression for the area in the Euclidean metric.
TL;DR: if we define the circle or hyperbola in the most natural metric for that particular curve, we can get the trig/hyperbolic functions using either arc length or area.
But most of the time, we stick to the Euclidean metric when doing coordinate geometry. The area definition of hyperbolic functions turns out to be the same in either metric, but the arc length definition doesn't, so it ends up looking like we need to use the area definition and not the arc length definition.
|
H: How to prove this identity on exponential generating function of harmonic numbers
I came across the following problem. Let $N![z^N]A(z)$ denote the coefficient of an exponential generating function (EGF) $A(z)$. The EGF is similar to an ordinary generating function (OGF) $A'(z)$, except that instead of the series $A'(z)=\sum_{N\geq 0}a_Nz^N$ for an OGF, we have $A(z)=\sum_{N\geq 0}a_Nz^N/N!$ for an EGF $A(z)$.
For instance, for the EGF $A(z)=e^z$, we have $N![z^N]A(z)=N![z^N]e^z=1$, i.e. the EGF coefficients of $e^z$ are $1$ for all $N \in \mathbb{N}$, i.e. the coefficients of the set $\{z^0/0!,z^1/1!,z^2/2!,\ldots,z^N/N!\}$ are all $1$ for any $N$ given $e^z$. Similarly, for the EGF $A(z)=1/(1-z)$, we have $N![z^N](1/(1-z))=N!$.
Now, given the following EGF $A(z)$:
$$
A(z)=e^z\int^z_0\frac{1-e^{-t}}{t}dt
$$
We are supposed to get $N![z^N]A(z)=H_N$, where $H_N$ is the $N$th harmonic number, i.e.
$$
N![z^N]e^z\int^z_0\frac{1-e^{-t}}{t}dt = H_N
$$
I could not think of a way to prove the above statement. The problem gave a hint that proving this statement involves forming a differential equation for the EGF $H(z)=\sum_{N \geq 0}H_Nz^N/N!$...
Any help?
AI: I think "forming a differential equation" is an overkill. We can simply do
$$A(z)=\int_0^z\frac{e^z-e^{z-t}}{t}\,dt\underset{t=z(1-x)}{\phantom{\big[}=\phantom{\big]}}\int_0^1\frac{e^z-e^{zx}}{1-x}\,dx=\sum_{n=1}^{\infty}\frac{z^n}{n!}\int_0^1\frac{1-x^n}{1-x}\,dx=\sum_{n=1}^{\infty}H_n\frac{z^n}{n!}$$ (the last equality, if unknown to you, follows from $(1-x^n)/(1-x)=1+\ldots+x^{n-1}$).
A side note: if we directly multiply the series for $e^z$ and the integral, we get $$A(z)=\left(\sum_{n=0}^{\infty}\frac{z^n}{n!}\right)\left(\sum_{n=1}^{\infty}\frac{(-1)^{n-1}z^n}{n\cdot n!}\right)=\sum_{n=1}^{\infty}\frac{z^n}{n!}\sum_{k=1}^{n}\binom{n}{k}\frac{(-1)^{k-1}}{k},$$ i.e. another proof of the "frequent" $G_1=H_1$ case of this question of mine.
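One can also confirm the claim by brute force on the first few coefficients (a SymPy sketch; the truncation order $6$ is an arbitrary choice):

    import sympy as sp

    z, t = sp.symbols('z t')
    # expand the integrand as a polynomial in t, integrate term by term, multiply by e^z
    integrand = sp.series((1 - sp.exp(-t)) / t, t, 0, 6).removeO()
    A = sp.expand(sp.series(sp.exp(z), z, 0, 6).removeO() * sp.integrate(integrand, (t, 0, z)))
    print([sp.factorial(N) * A.coeff(z, N) for N in range(1, 6)])
    # [1, 3/2, 11/6, 25/12, 137/60], i.e. H_1, ..., H_5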
|
H: Formula (not an algorithm, i.e. defining a function) for the nearest number of the form n(n+1), where n is an integer
What would be the formula which defines a function that returns the nearest number of the form n(n+1), where n is an integer?
AI: Given a positive real number $x$, you are really just solving
$$
x= n(n+1)
$$
which has solution
$$
n=\dfrac{\sqrt{4x+1}-1}{2}
$$
Then the function you want (almost) is
$$
n(x)= \dfrac{\sqrt{4x+1}-1}{2}
$$
Of course, this need not be an exact integer, but it tells you roughly where the answer lives. Rounding to the nearest integer and composing with $g(n)= n(n+1)$ almost always gives the integer of the form $n(n+1)$ closest to $x$; near the midpoints between consecutive values of $n(n+1)$, however, the rounded $n$ can land on the wrong side, so it is safest to compare the two neighbouring integer candidates, as in the sketch below.
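A minimal Python sketch (the helper name `nearest_pronic` is mine; assumes $x \ge 0$):

```python
import math

def nearest_pronic(x):
    """Return the number of the form n(n+1) nearest to x (assumes x >= 0)."""
    n_real = (math.sqrt(4 * x + 1) - 1) / 2
    # Compare both neighbouring integer candidates instead of just rounding.
    candidates = (math.floor(n_real), math.ceil(n_real))
    return min((n * (n + 1) for n in candidates), key=lambda p: abs(p - x))

print(nearest_pronic(10))   # 12 = 3*4
print(nearest_pronic(8.9))  # 6 = 2*3, although rounding n_real would give 12
```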
|
H: Deriving the time-dependent solution of the Schrödinger equation
I have the Schrödinger equation:
$$\dfrac{-\hbar^2}{2m} \nabla^2 \Psi + V \Psi = i \hbar \dfrac{\partial{\Psi}}{\partial{t}},$$
where $m$ is the particle's mass, $V$ is the potential energy operator, and $(-\hbar^2/2m) \nabla^2$ is the kinetic energy operator ($p^2/2m$).
The state function can be expressed as the product of space-dependent and time-dependent factors, $\Psi = \psi(r)\, w(t)$. If we substitute these into the above equation and divide by $\psi w$, we obtain a function on the left that depends only on $r$ and a function on the right that depends only on $t$:
$$\dfrac{-\hbar^2}{2m} \dfrac{\nabla^2 \psi}{\psi} + V = \dfrac{i \hbar}{w} \dfrac{\partial{w}}{\partial{t}}.$$
Therefore, to be valid for all $r$ and $t$, each side must equal a constant, $E$:
$$\dfrac{-\hbar^2}{2m} \dfrac{\nabla^2 \psi}{\psi} + V = \dfrac{i \hbar}{w} \dfrac{\partial{w}}{\partial{t}} = E.$$
From this, we immediately have
$$w(t) = Ce^{-i(E/\hbar)t},$$
from which we can identify that $E = \hbar \omega$, where $\omega$ is the radian frequency of oscillation.
It's been a while since I've done PDEs, so I'm not sure how we "immediately have" $w(t) = Ce^{-i(E/\hbar)t}$. The form of $\dfrac{-\hbar^2}{2m} \dfrac{\nabla^2 \psi}{\psi} + V = \dfrac{i \hbar}{w} \dfrac{\partial{w}}{\partial{t}} = E$ reminds me of a separation of variables situation, but I'm not entirely sure.
Furthermore, I'm not sure how we identify that $E = \hbar \omega$ from $w(t) = Ce^{-i(E/\hbar)t}$.
I would greatly appreciate it if people would please take the time to explain this.
AI: Note that we have performed a separation of variables and have two separate differential equations
$$\frac{i \hbar}{w} \frac{\partial w}{\partial t} = E \\
\frac{-\hbar^2}{2m \psi} \nabla^2 \psi + V = E$$
The first equation is just a first-order ODE of the form $w'(t) = A w$, whose solution is a scalar multiple of $e^{At}$. Here $A = E/(i \hbar) = -i E / \hbar$, which is why the solution to the differential equation in time is as you have written. Note that the second equation above is referred to as the time-independent Schrödinger equation.
Intuitively, one can note that the solution to the differential equation in time is periodic with some frequency $\omega = E/ \hbar$. So the wave equation of the particle has frequency $\omega$, and rewriting the previous equation gives the relation $E = \hbar \omega$.
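If it helps to see this mechanically, here is a small sympy check of the time equation (a sketch I've added, not part of the original derivation):

```python
# Solve i*hbar*w'(t) = E*w(t) symbolically.
from sympy import symbols, Function, I, dsolve, Eq

t = symbols('t', real=True)
E, hbar = symbols('E hbar', positive=True)
w = Function('w')

print(dsolve(Eq(I * hbar * w(t).diff(t), E * w(t)), w(t)))
# Eq(w(t), C1*exp(-I*E*t/hbar))
```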
|
H: Simple function is measurable
I want to show that a simple function is measurable. I know that a simple function is a function whose range set is a finite set. Let $f$ be a simple function defined on a measurable set $E$ such that it's range set is $\{a_1,a_2,\ldots , a_n\}$. Then help me show that $f$ is measurable.
AI: Simple functions are precisely linear combinations of indicator functions of measurable sets.
If $E$ is a measurable space and $f\colon E\to\mathbb{C}$ is a simple function, there exists $a_1,...,a_n\in\mathbb{C}$ and measurable sets $A_1,...,A_n\subseteq E$ such that $A_i\cap A_j=\emptyset$ if $i\neq j$, $E=A_1\cup\cdots\cup A_n$ and $f=\sum_{k=1}^n a_k {\bf 1}_{A_k}$.
To see that $f$ is a measurable function, just notice that
$$f^{-1}(V) = \bigcup\{A_j : a_j\in V\},$$
is measurable (being empty or finite union of measurable sets) for every $V\subseteq\mathbb{C}$.
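For a concrete instance of how this works (my own illustrative example): take $E=\mathbb{R}$ and $f = 2\cdot{\bf 1}_{[0,1)} + 5\cdot{\bf 1}_{[1,2]}$, so $A_1=[0,1)$, $A_2=[1,2]$, $A_3=\mathbb{R}\setminus[0,2]$ with values $2,5,0$. Then, for example,
$$f^{-1}(\{5\})=[1,2],\qquad f^{-1}\big((1,3)\big)=[0,1),\qquad f^{-1}(\{0\})=\mathbb{R}\setminus[0,2],$$
and every preimage is a (possibly empty) finite union of the measurable sets $A_j$.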
|
H: About $n! > (\frac{n}{e})^n$ in "Introduction to Algorithms 3rd Edition" by CLRS.
I am reading "Introduction to Algorithms 3rd Edition" by CLRS.
In Apendix C, the authors use the following inequality:
$n! > (\frac{n}{e})^n$.
My proof of this fact is the following:
base case:
$1! = 1 > \frac{1}{e} = (\frac{1}{e})^1$.
induction step:
Assume that $k! > (\frac{k}{e})^k$.
Then, $(k+1)! = (k+1) k! \geq (k+1)(\frac{k}{e})^k$.
By the way, $((1+\frac{1}{k})^k)_{k \in \{1,2, \cdots\}}$ is a monotonic increasing sequence which converges to $e$.
So, $e > (1+\frac{1}{k})^k$.
So, $1 > \frac{(1+\frac{1}{k})^k}{e}$.
So, $k^k > \frac{(k+1)^k}{e}$.
So, $\frac{(k+1)k^k}{e^k} > \frac{(k+1)^{k+1}}{e^{k+1}}$.
So, $(k+1)! \geq (k+1)(\frac{k}{e})^k > \frac{(k+1)^{k+1}}{e^{k+1}}$.
On the other hand, obviously, $n^n \geq n!$.
So, $(\frac{n}{1})^n \geq n! > (\frac{n}{e})^n$.
$1 \in\{c | c > 0, n! \in O((\frac{n}{c})^n)\}$.
$e \notin\{c | c > 0, n! \in O((\frac{n}{c})^n)\}$.
My question is here:
What is the value $\sup \{c | c > 0, n! \in O((\frac{n}{c})^n)\}$?
AI: From Stirling’s approximation we have $n!\sim\sqrt{2\pi n}\left(\frac{n}e\right)^n$, so consider
$$\lim_{n\to\infty}\frac{n!}{\left(\frac{n}c\right)^n}=\lim_{n\to\infty}\frac{\sqrt{2\pi n}\left(\frac{n}e\right)^n}{\left(\frac{n}c\right)^n}=\lim_{n\to\infty}\sqrt{2\pi n}\left(\frac{c}e\right)^n\;;$$
this is clearly infinite if $c\ge e$ and $0$ if $0\le c<e$. Thus, $$\sup\left\{c>0:n!\in O\left(\left(\frac{n}c\right)^n\right)\right\}=e\;$$
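A numeric illustration (not a proof) of this dichotomy in Python, working with logarithms to avoid overflow:

```python
# log of n!/(n/c)^n: decays for c < e, grows slowly at c = e,
# and blows up for c > e.
import math

for c in (2.0, math.e, 3.0):
    for n in (10, 100, 1000):
        log_ratio = math.lgamma(n + 1) - n * (math.log(n) - math.log(c))
        print(f"c={c:.3f}, n={n}: log ratio = {log_ratio:.2f}")
```

Note that at $c=e$ the ratio itself is $\sim\sqrt{2\pi n}\to\infty$, which is consistent with $e$ being the supremum of the set but not an element of it.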
|
H: Looking to optimise my Runescape grind (probability)
I know there's gaming stackexchange for gaming questions, but I believe this is purely maths related.
I'll try to avoid using game jargon and keep it simple. I'm collecting keys in game, each key taking a fair bit of time to obtain. They open a chest, which generates one random reward from a predetermined table. On the loot table, there are five different armor pieces, each with a chance of 1/1000 to obtain. The catch is, because of the game's inventory limitations, I can't open the chest every time I get a key and just get the five armor pieces that way - I have to do big openings of multiple keys at once. If I don't get a full set, but, for example, 4 out of 5 pieces, it would limit the speed of obtaining future keys.
Which is why I would like to work out an optimal number of keys to save up so that I get the full set in one big opening. Is there a set method or a formula for similar problems?
If the above explanation is too convoluted, I can try to further simplify it if needed.
AI: Let $c_a(k)$ be the chance you have collected $a$ armour pieces after opening $k$ keys and $p(a)$ the chance of getting a new armour piece given that you have found $a$ armour pieces already. Then we find:
$$c_{0}(k) = (1 - p(0))c_0(k-1)$$
$$c_{a}(k) = p(a-1)c_{a-1}(k-1) + (1 - p(a))c_a(k-1)$$
$$c_{a}(0) = \begin{cases}1&a = 0\\0& a> 0\end{cases}$$
$$p(a) = \begin{cases}\frac{5-a}{1000}&0 \leq a \leq 5\\0&\text{otherwise}\end{cases}$$
Programming this recurrence in Python we find the following chances of collecting all 5 armour pieces:
\begin{array}{cc}
\text{Keys}&\text{Chance}\\
\hline
2045&50\%\\
2883&75\%\\
3869&90\%\\
4583&95\%\\
6208&99\%
\end{array}
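A minimal Python version of the recurrence (my own reconstruction, not the answerer's original script):

```python
def chance_all_pieces(keys, pieces=5, rate=1 / 1000):
    # c[a] = probability of holding a distinct armour pieces so far
    c = [1.0] + [0.0] * pieces
    for _ in range(keys):
        new = c[:]
        for a in range(pieces):
            p = (pieces - a) * rate  # chance the next chest drops a new piece
            new[a] -= p * c[a]
            new[a + 1] += p * c[a]
        c = new
    return c[pieces]

for k in (2045, 2883, 3869, 4583, 6208):
    print(k, round(chance_all_pieces(k), 4))
```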
(The original answer also included a plot of this probability against the number of keys.)
|
H: Defining the set of all algebraic numbers: Help with index for the union
I'm doing problem 2.2 from Rudin, which asks to show that the set of all algebraic numbers is countable. I was looking at the solution provided here (image also included), and understand the general idea of the proof, but I'm not sure why the index starts at $N=2$ instead of $N=1$.
Explicitly, if we are defining $A_N$ as the set of roots of the polynomials in $P_N$, where $P_N = \{(a_{n}x^{n} +a_{n-1}x^{n-1} + ... + a_{1}x + a_{0}): \mid n \mid + \mid a_0 \mid + \mid a_1 \mid + ... + \mid a_n \mid = N\}$,
why is the set of algebraic numbers $\bigcup\limits_{N=2}^{\infty} A_{N}$ and not $ \bigcup\limits_{N=1}^{\infty} A_{N}$?
AI: The polynomials in $P_1$ are the polynomials
$$P(x) = a_0 + a_1 x + \ldots + a_n x^n$$
satisfying
$$n + |a_0| + |a_1| + \ldots + |a_n| = 1.$$
(Note that $n \ge 0$, so I omitted the absolute value brackets). Instantly, this tells me that $n \le 1$, since $n$ plus a bunch of non-negative integers is equal to $1$. So, either we have $n = 0$, which gives us a constant polynomial, or $n = 1$, which gives us a polynomial of the form
$$P(x) = a_0 + a_1x.$$
In the former case, we have $0 + |a_0| = 1$, which gives us two cases: $P(x) = -1$, $P(x) = 1$. These have no roots whatsoever! So, we don't need to include these cases.
In the latter case, we have $1 + |a_0| + |a_1| = 1 \implies a_0 = a_1 = 0$. This gives us the zero polynomial, which is purposely excluded (if we counted the roots of the zero polynomial, then every number would be algebraic).
So, the only polynomial in $P_1$ with any roots is the excluded zero polynomial; $P_1$ contributes no algebraic numbers, which is why the union can start at $N=2$.
|
H: Show the map $g:\mathbb{S}^1\to\mathbb{S}^1$ defined by $g(\cos(\theta), \sin(\theta)) = (\cos(a\theta), \sin(a\theta))$ is open where $a\in\Bbb N$.
For a function $g: \mathbb{S}^1 \to \mathbb{S}^1$ defined by $g(\cos(\theta), \sin(\theta)) = (\cos(a\theta), \sin(a\theta))$ where $a$ is an integer, how do I show $g$ is an open map? I know that the idea is to show that for all open sets $U$ of $\mathbb{S}^1, g(U)$ is open. Any help would be appreciated!
AI: HINT: For $\theta_0<\theta_1$ let $I(\theta_0,\theta_1)=\{\langle\cos\theta,\sin\theta\rangle:\theta_0<\theta<\theta_1\}$; the sets $I(\theta_0,\theta_1)$ are a base for the topology of $\Bbb S^1$, and it suffices to show that each $g[I(\theta_0,\theta_1)]$ is open in $\Bbb S^1$. What is $g[I(\theta_0,\theta_1)]$?
|
H: Average distance from a square's perimeter to its center
What is the average distance from any point on a unit square's perimeter to its center?
The distance from a square's corner to its center is $\dfrac{\sqrt{2}}{2}$ and from a point in the middle of a square's side length is $\dfrac{1}{2}$. (The original question included a figure illustrating these two distances.)
So, what would the average distance be, accounting for all the points along a square's perimeter?
Also if possible, a general formula for finding the average distance from center to edge of any $n$-sided regular polygon would be super awesome.
AI: Consider a square $ABCD$ of side $a$ centered at the origin $O$, with sides parallel to the coordinate axes (the original answer illustrated this with a figure).
By symmetry, the square ABCD is divided into $8$ congruent right triangles.
Consider any arbitrary point $P$ on square in first quadrant at an angle $x$ with +ve X-axis. The distance of point P from the center O is $\dfrac{a}{2}\sec x$. Taking average of distances of all the points on the perimeter of square (using symmetry of quadrants) as follows
$$D_\text{avg}=\frac{8\int_0^{\pi/4}\frac{a}{2}\sec x\ dx}{2\pi}$$
$$=\frac{2a}{\pi}\int_0^{\pi/4}\sec x\ dx$$
$$=\frac{2a}{\pi}\left[\ln\left|\tan\left(\frac{x}{2}+\frac{\pi}{4}\right)\right|\right]_0^{\pi/4}$$
$$=\frac{2a}{\pi}\ln(\sqrt2+1)$$
Therefore, the average distance from the center of all the points on the perimeter of a unit square ($a=1$) will be $$\frac{2}{\pi}\ln(\sqrt2+1)\approx 0.561099852 \ \mathrm{unit}$$
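One caveat worth flagging (my note, not the original answerer's): the computation above averages with respect to the central angle $x$. If instead you sample points uniformly along the perimeter (arguably the more literal reading of "any point on the perimeter"), the average appears to come out slightly larger, $\frac{1}{4}\left(\sqrt{2}+\ln(1+\sqrt{2})\right)\approx 0.5739$. A quick Monte Carlo sketch showing both conventions:

```python
# Average distance to the centre of the unit square under two samplings.
import math, random

N = 10**6
# Uniform over arc length: by symmetry, sample the side x = 1/2.
arc = sum(math.hypot(0.5, random.uniform(-0.5, 0.5)) for _ in range(N)) / N
# Uniform over the central angle in (-pi/4, pi/4): distance is sec(x)/2.
ang = sum(0.5 / math.cos(random.uniform(-math.pi / 4, math.pi / 4))
          for _ in range(N)) / N

print(f"arc-length average ~ {arc:.4f}")   # ~ 0.5739
print(f"angle average      ~ {ang:.4f}")   # ~ 0.5611
```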
|
H: Let $V$ be a finite dimensional vector space and $W$ is proper subspace of $V$. Then show that Span of $(V/W)= V$.
Let $V$ be a finite dimensional vector space and $W$ is proper subspace of $V$.
Then show that Span of $(V/W)= V$.
I am trying to show that $V/W$ contains a basis of $V$, but how to proceed? Any hint?
AI: I suppose by $V/W$ you mean $V\setminus W$.
Let $v \in V$. If $v\notin W$ then $v\in \operatorname{span}(V\setminus W)$. So let $v \in W$. Take any $z$ which is not in $W$ (such a $z$ exists because $W$ is a proper subspace). Then $v+z$ and $v-z$ do not belong to $W$ (if, say, $v+z\in W$, then $z=(v+z)-v\in W$, a contradiction) and $v= \frac {v+z} 2+\frac {v-z} 2$.
Hence for $v\in W$, $v$ is linear combination of elements of $V\setminus W$. Thus it is proved.
|
H: Define a linear transformation $T$, so that the null space is $z$-axis, and the range is the plane $x+y+z=0$
As stated in the title, it is requested to define a linear transformation $T:\Bbb R^3 \to \Bbb R^3$ such that the null space of $T$ is the $z$-axis, and the range of $T$ is the plane: $x+y+z=0$
I don't really know how to begin with the solution of the exercise, I think that I should try to get a matrix using the standard base, but after that, I don't have any concrete ideas.
AI: You have to find a basis of that plane: $x+y+z=0$ then $x=-y-z$ so you can pick $v_1=(1,-1,0), v_2=(1,0,-1)$. The $z$-axis is the vector $e_3=(0,0,1)$.
If you define your map in a basis you are done. In particular take the standard basis, then
$f(e_1)=v_1, f(e_2)=v_2, f(e_3)=(0,0,0)$
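Concretely, writing $v_1$, $v_2$ and $0$ as the columns of a matrix gives one explicit choice of $T$ in the standard basis:
$$[T]=\begin{pmatrix}1&1&0\\-1&0&0\\0&-1&0\end{pmatrix},$$
whose null space is $\operatorname{span}(e_3)$ (the $z$-axis) and whose column space is $\operatorname{span}(v_1,v_2)$, the plane $x+y+z=0$.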
|
H: In the laplace transform of 1, why is the s in the denominator?
I've been using the Laplace transform for a little while now for some electrical engineering differential equations,
what I have never quite understood is why the $s$ is in the denominator when you do the Laplace transform of 1, i.e. $\mathcal{L}\{1\}=\int_0^\infty{e^{-st}dt} = [-\frac{e^{-st}}{s}]_0^{\infty}$. How did the $s$ get into the denominator? Sorry if I am missing something obvious here. I thought it should be $[-s\cdot e^{-st}]_0^\infty$ using the chain rule.
AI: The derivative of $e^{-st}$ with respect to $t$ is $-se^{-st}$; the chain rule applies when differentiating. But what you want here is the antiderivative with respect to $t$ (with $s$ held constant), and this is $\frac{e^{-st}}{-s}$. You can check it by differentiating: $\frac{d}{dt}\left(\frac{e^{-st}}{-s}\right)=e^{-st}$.
|
H: Evaluate $\int_{-1}^{1} [ \frac{2}{3} x^3 + \frac{2}{3}(2-x^2)^{3/2}] dx $
Evaluate $$\int_{-1}^{1} \left[ \frac{2}{3} x^3 + \frac{2}{3}(2-x^2)^{3/2}\right] dx $$
My attempt :$$ \frac{2}{3} \left[\frac{x^4}{4}\right]_{x=-1}^{x=1} + \frac{2}{3} \left[\frac{(2-x^2)^\frac{-1}{2}}{-1/2}\right]_{x=-1}^{x=1}=0$$
Is its True ?
AI: sadly, no.
the first part was completely true, although you could have used the property that $x^3$ is an odd function.
$$\frac{2}{3}\int^1_{-1}(2-x^2)^{3/2}dx$$
is a trigonometric integral.
your method ignored the internal $2-x^2$ and did an invalid substitution.
a better substitution is $x = \sqrt{2}\sin(\alpha)$, $dx=\sqrt{2}\cos(\alpha)\,d\alpha$. we have to change the bounds too: $\arcsin(\frac{1}{\sqrt{2}})=\frac{\pi}{4},\arcsin(-\frac{1}{\sqrt{2}})=-\frac{\pi}{4}$
$$\frac{2}{3}\int^{\frac{\pi}{4}}_{-\frac{\pi}{4}}4(1-\sin(\alpha)^2)^{3/2}\cos(\alpha)d\alpha$$
using everyone's favorite trigonometric identity $\cos(\alpha)^2+\sin(\alpha)^2=1$
$$\frac{8}{3}\int^{\frac{\pi}{4}}_{-\frac{\pi}{4}}(\cos(\alpha)^2)^{3/2}\cos(\alpha)d\alpha$$
$$\frac{8}{3}\int^{\frac{\pi}{4}}_{-\frac{\pi}{4}}(\cos(\alpha))^4d\alpha$$
using another trigonometric identity $\cos(\alpha)^2=\frac{\cos(2\alpha)+1}{2}$
$$\frac{8}{3}\int^{\frac{\pi}{4}}_{-\frac{\pi}{4}}\frac{(\cos(2\alpha)+1)^2}{4}d\alpha$$
$$\frac{2}{3}\int^{\frac{\pi}{4}}_{-\frac{\pi}{4}}[\cos(2\alpha)^2+2\cos(2\alpha)+1]d\alpha$$
using it again:
$$\frac{2}{3}\int^{\frac{\pi}{4}}_{-\frac{\pi}{4}}\left[\frac{\cos(4\alpha)+1}{2}+2\cos(2\alpha)+1\right]d\alpha$$
splitting the integral:
$$\frac{1}{3}\int^{\frac{\pi}{4}}_{-\frac{\pi}{4}}\cos(4\alpha)d\alpha+\frac{4}{3}\int^{\frac{\pi}{4}}_{-\frac{\pi}{4}}\cos(2\alpha)d\alpha+\int^{\frac{\pi}{4}}_{-\frac{\pi}{4}}1d\alpha$$
all of these integrals are very simple substitutions:
$$\frac{1}{12}[\sin(4\alpha)]^{\frac{\pi}{4}}_{-\frac{\pi}{4}}+\frac{2}{3}[\sin(2\alpha)]^{\frac{\pi}{4}}_{-\frac{\pi}{4}}+[\alpha]^{\frac{\pi}{4}}_{-\frac{\pi}{4}}$$
$$0 + \frac{4}{3}+\frac{\pi}{2}$$
or simply
$$\frac{4}{3}+\frac{\pi}{2}$$
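If you want to double-check the value, a short sympy computation (my addition) agrees:

```python
# Symbolic check of the original definite integral.
from sympy import symbols, integrate, Rational, simplify

x = symbols('x')
f = Rational(2, 3) * x**3 + Rational(2, 3) * (2 - x**2) ** Rational(3, 2)
print(simplify(integrate(f, (x, -1, 1))))  # 4/3 + pi/2
```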
|
H: Why does the infinite union exist in set theory?
I want to be clear that I am not asking about the axiom of union. I understand that for an infinite set $A$, $\bigcup A$ exists. My question is specifically about the more widely used (as far as I have seen) version of the notation:
$\bigcup_{b\in B} S_b$.
The technical issue that seems to appear to me is that applying the axiom of union requires first the construction of a set $A$ such that $x\in A \iff \exists b\in B(x= S_b)$.
Clearly in the case that the indexing set $B$ is finite, iterated application of the axiom of pairing with axiom of union can give the required set, but when $B$ is infinite I cannot see any way to justify $A$'s existence, and thus existence of the union in general.
AI: An indexed family $S_b, b \in B$ is a set $I$ already: namely consisting of ordered pairs (pairing axiom) $(b,x)$ where $b \in B$, and which is "functional":
$$\forall z \in I: \exists x,y: (z=(x,y) \land x \in B) \text{ and } \forall b \in B: \exists x: (b,x) \in I \text{ and } \forall b \in B: \forall x,x': ((b,x) \in I \land (b,x') \in I) \to x=x'$$
So the second part of the pair $(b,x) \in I$ is the set we denote $S_b$, but the total collection (assignment) $I$ must already be a set in the universe.
And it's straightforward to define the union $\bigcup_{b \in B} S_b$ from $I$: define the range of $I$ first, and apply the union axiom to that.
If however $S_b$ is defined by some predicate, we can apply the replacement axiom instance for that predicate to get the range as a set in our universe as well.
|
H: Show that $\frac{d}{dx}(\frac{\tan(x)}{1+\sec(x)})=\frac{1}{1+\cos(x)}$
Show that:
$\frac{d}{dx}(\frac{\tan(x)}{1+\sec(x)})=\frac{1}{1+\cos(x)}$
Applying the quotient rule I get
$\frac{\sec^2(x)(1+\sec(x))-\sec(x)\tan^2(x)}{(1+\sec(x))^2}$
From here I go into many directions but not towards the RHS. Guidance is much appreciated
AI: We have \begin{align*}\frac{\sec^{2}x(1+\sec x)-\sec x \tan^{2}x}{(1+\sec x)^2} &= \frac{\sec x(\sec x + \sec^2 x - \tan^2x)}{(1+\sec x)^2} \qquad \text{(factoring the numerator)} \\ &= \frac{\sec x(\sec x+1)}{(1+\sec x)^2} \qquad \text{(by the identity $\tan^2 x + 1 \equiv \sec^2 x$)} \\ &= \frac{\sec x}{1+\sec x} \\ &= \frac{1}{\cos x + 1} \qquad \text{(multiplying the numerator and denominator by $\cos x$)}\end{align*} as required.
|
H: An integral involving trigonometric and exponential function
Prove that $$ \int_{0}^{\infty}x^{2019}\sin(\sqrt{3}x)e^{-3x}\mathrm{d}x=\dfrac{2019!\sqrt{3}}{2^{2021}\cdot 3^{1010}} $$
Hence generalize the integral for any other value than $2019$. I know it can be done by considering the integral $\displaystyle \int_{0}^{\infty} \sin(ax)e^{-bx}\mathrm{d}x=\dfrac{a}{a^2+b^2}$ and differentiating it again and again with respect to $b$, but that only works for small values; the calculation quickly becomes very tedious.
AI: Hint:
We integrate by parts $$\int_0^\infty x^ne^{wx}dx$$ where $w:=a+ib$ and $a<0$.
$$I_n:=\int_0^\infty x^ne^{wx}dx=\frac1w\left.x^ne^{wx}\right|_0^{\infty}-\frac nw\int_0^\infty x^{n-1}e^{wx}dx=-\frac nwI_{n-1},$$ where the boundary term vanishes because $\operatorname{Re} w=a<0$.
Then by induction,
$$I_n=(-1)^n\frac{n!}{w^n}I_0=(-1)^{n+1}\frac{n!}{w^{n+1}},$$ since $I_0=-1/w$.
From this, taking the imaginary part,
$$\int_0^\infty x^n\sin(bx)e^{ax}dx=-\frac{n!\sin\left((n+1)\arctan\dfrac ba\right)}{(a^2+b^2)^{(n+1)/2}}\qquad(a<0).$$
Here, $n=2019,a=-3,b=\sqrt3$.
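A numerical spot-check of this closed form for small $n$ (my addition; note the assumption $a<0$):

```python
# Compare the closed form against numerical quadrature.
import math
from scipy.integrate import quad

def closed_form(n, a, b):
    return -math.factorial(n) * math.sin((n + 1) * math.atan(b / a)) \
        / (a * a + b * b) ** ((n + 1) / 2)

a, b = -3.0, math.sqrt(3.0)
for n in (1, 2, 5):
    num, _ = quad(lambda x: x**n * math.sin(b * x) * math.exp(a * x), 0, 50)
    print(n, num, closed_form(n, a, b))  # the two columns agree
```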
|
H: Radius of convergence for binomial series (2)
I'm having trouble calculating the radius of convergence for for the following binomial series.
More in detail, I'm having trouble finding $c_k$ and $c_{k+1}$ for the following series:
$$ \sum_{k=0}^{\infty} \binom{3k}{k}x^{2k+1}$$
I'm not sure what to do with the $2k+1$ exponent here.
AI: You have, for each $\require{cancel}k\in\Bbb Z_+$ and each $x\ne0$,\begin{align}\left|\frac{\binom{3(k+1)}{k+1}x^{2(k+1)+1}}{\binom{3k}kx^{2k+1}}\right|&=\frac{\frac{\cancel{(k+2)}\cancel{(k+3)}\ldots(3k+3)}{(2k+2)!}}{\frac{(k+1)\cancel{(k+2)}\cancel{(k+3)}\ldots(3k)}{(2k)!}}|x|^2\\&=\frac{(3k+1)(3k+2)(3k+3)}{(k+1)(2k+1)(2k+2)}|x|^2\\&=3\frac{(3k+1)(3k+2)}{(2k+1)(2k+2)}|x|^2\\&\to\frac{27}4|x|^2\end{align}and therefore the radius of convergence is $\frac2{\sqrt{27}}$.
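A quick numeric sanity check of the limiting ratio (my addition):

```python
# The ratio of consecutive binomial coefficients tends to 27/4 = 6.75.
from math import comb

for k in (10, 100, 1000):
    print(k, comb(3 * (k + 1), k + 1) / comb(3 * k, k))
```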
|
H: Index of subgroups in a finite solvable group, with trivial Frattini subgroup (Exercise 3B.12 from Finite Group Theory, by M. Isaacs)
Let G be a finite solvable group, and assume that $\Phi(G) = 1$ where $\Phi(G)$ denotes the Frattini subgroup of G. Let M be a maximal subgroup of G, and suppose that $H \subseteq M$. Show that $G$ has a subgroup with index equal to $|M:H|$.
This is question 3B.12 from Finite Group Theory, by M. Isaacs.
Here is my approach so far. I am completely stuck and would welcome any hints or ideas.
Suppose otherwise. Among all of the counterexamples choose $G$ of minimum order. Since $G$ is a counterexample it must be the case that $|G| > 1$. Since $G$ is a counterexample there is a maximal subgroup $M$ and a subgroup $H \subset M$ such that no subgroup of $G$ has index equal to $|M:H|$. So it must be the case that $H$ is properly contained within $M$.
This is where I get stuck. I want to use a minimal normal subgroup $N$ of $G$ which exists. But my argument devolves into a series of cases about whether or not $N$ intersects $H$ and/or $M$ non-trivially.
I do know that $G$ must have a non-normal maximal subgroup, since if they all were normal then it would be nilpotent and since G is finite this implies supersolvable, then $G$ would have a subgroup for any divisor of its order. Since $\Phi(G)=1$ is the intersection of all the maximal subgroups of $G$ I suspect this should help but I'm not sure where to go from here.
AI: $G=\mathrm{Alt}(4)$, $M=C_2^2$, $H=C_2$ is a counterexample: here $|M:H|=2$, a subgroup of index $2$ would have order $6$, and $\mathrm{Alt}(4)$ doesn't have a subgroup of order $6$. This seems like a mistake in the book.
|
H: Section to Skew-Symmetrization Map
Let $A$ be an $n\times n$ skew-symmetric matrix. Define the map
$\mathbb{R}^{n^2}\to \mathrm{Skew}_n$ by
$$
B\mapsto B^{\top} - B.
$$
Does this map have a continuous right inverse?
AI: The following addresses the question as first stated, which read, if I recall correctly:
"Let $A$ be an $n\times n$ matrix skew-symmetric matrix. Does there necessarily exist a unique $n\times n$ matrix $B$ such that
$A = B^T - B." \tag 0$
To which I responded:
Pick any symmetric matrix
$C = C^T; \tag 1$
let
$B = -\dfrac{1}{2}(A + C); \tag 2$
then
$B^T = -\dfrac{1}{2}( A^T + C^T) = -\dfrac{1}{2}(-A + C), \tag 3$
since
$A^T = -A, \tag{3.5}$
and
$B^T - B = -\dfrac{1}{2}(-A + C) - (-\dfrac{1}{2}(A + C)) = A. \tag 4$
These calculations indicate the existence of an infinite number of matrices $B$ such that (4) holds; thus no such $B$ is unique.
The preceding result may be used to address the present question of the existence of a continuous right inverse to the skew-symmetrization map, which we denote by
$\Sigma(B) = B^T - B. \tag 5$
According to what has been presented in the above, every skew-symmetric matrix is of the form $\Sigma(B)$ for some $B$. Thus a right inverse $\theta$ to $\Sigma$ will assign to every skew-symmetric $A$ a matrix $\theta(A)$ such that
$\Sigma ( \theta(A)) = A. \tag 6$
Picking any symmetric matrix $C$ as in (1), we set
$\theta(A) = -\dfrac{1}{2}(A + C) \tag 7$
as in (2); it is easy to see that such a $\theta$ is continuous, since both matrix addition and division by $2$ are continuous operations. Now (6) is simply the result of our preceding calculations, and $\theta$ is a right inverse of $\Sigma$.
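A tiny numpy check of the right-inverse property (my sketch, taking $C=0$ for simplicity):

```python
# Verify Sigma(theta(A)) == A for a random skew-symmetric A.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M.T - M                 # random skew-symmetric matrix

theta_A = -0.5 * A          # theta(A) = -(A + C)/2 with C = 0
print(np.allclose(theta_A.T - theta_A, A))  # True
```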
|
H: Homework Help: Probability of 5 element subset having one prime and a single digit
Here's the question:
Determine the probability that a randomly chosen 5-element subset of numbers from 1 to 20 contains at least one single digit number and at least one prime number.
Hi. Currently stuck on this homework problem. I also want to clarify that the prime number can also be a single-digit number, so (3, 10, 11, 12, 13) should be a valid combination.
AI: First, we find the number of unrestricted 5-element subsets to be $\binom{20}{5}=\frac{20!}{5!(20-5)!}=\frac{20\cdot19\cdot18\cdot17\cdot16}{5\cdot4\cdot3\cdot2\cdot1}=15504$.
We need the number of subsets that do not contain a prime number. There are $8$ prime numbers in the set, so that leaves $12$ that are not prime. There are $\binom{12}{5}=792$ subsets that do not contain a prime number.
We need the number of subsets that do not contain a single-digit number. There are $9$ single-digit numbers in the set, so that leaves $11$ to choose from. There are $\binom{11}{5}=462$ subsets that do not contain a single-digit number.
We need the number of subsets that contain neither prime numbers nor single-digit numbers. There are $7$ numbers to choose from, so there are $\binom{7}{5}=21$ such subsets.
Now that we have these numbers, we can use inclusion/exclusion. We will start with the total number of subsets $(15504)$, exclude those that do not contain a prime number $(792)$ and those that do not contain a single-digit number $(462)$, then include those that do not contain either $(21)$ because they were excluded twice. This gives us our total of $$15504-792-462+21=14271$$ qualifying subsets.
The probability that one of these is randomly chosen is:
$$\frac{14271}{15504}\approx 92.05\%$$
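Since $n=20$ is small, an exhaustive check in Python (my addition) confirms the count:

```python
# Brute-force count of qualifying 5-element subsets of {1, ..., 20}.
from itertools import combinations

primes = {2, 3, 5, 7, 11, 13, 17, 19}
good = sum(1 for s in combinations(range(1, 21), 5)
           if any(x in primes for x in s) and any(x < 10 for x in s))
print(good, good / 15504)  # 14271, ~0.9205
```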
|
H: Can this function be defined in a way to make it continuous at $x=0$?
We have $$f=\frac{x}{\vert x-1 \vert - \vert x +1 \vert}$$
If we want to "define" this function to be continuous at $x=0$, it's limit at $0$ must equal $f(0)$. So we should find this limit and assign it to be equal to $f(0)$, then the function is continuous at $0$. Since we are looking at the function when $x\to 0$, $x\neq 0$. Lets divide both sides by $x$.
$$f=\frac{x}{\vert x-1 \vert - \vert x +1 \vert}=\frac{1}{\frac{\vert x-1 \vert}{x}-\frac{\vert x+1\vert}{x}}=\frac{1}{\vert 1-\frac{1}{x}\vert - \vert 1+ \frac{1}{x}\vert }$$
We can use $\lim \phi(x)^{-1}=\frac{1}{\lim \phi(x)}$ here ( the limit $\neq$ 0, by hypothesis ). The inverse of the limit of $\phi(x)=\vert 1 - \frac{1}{x} \vert-\vert 1+\frac{1}{x}\vert$, when $x\to 0$. If $x<1$, we have that $$\frac{1}{x}>1\implies0>1-\frac{1}{x}\implies \Bigg\vert 1-\frac{1}{x}\Bigg\vert=-\Big(1-\frac{1}{x}\Big)$$
Now if $x>0$, we have that $$\Bigg\vert 1 - \frac{1}{x} \Bigg\vert-\Bigg\vert 1+\frac{1}{x}\Bigg\vert=-2$$
and if $x<0$, then $$\Bigg\vert 1 - \frac{1}{x} \Bigg\vert-\Bigg\vert 1+\frac{1}{x}\Bigg\vert=1-\frac{1}{x}-1-\frac{1}{x}=\frac{(-2)}{x}$$
The limit of $f$ when $x\to 0$ appears to be $\frac{-1}{2}$. Could anyone tell me what errors I made in the limit finding process?
AI: You get the correct result but the derivation is incorrect (or perhaps just unclear to me). You are performing the correct calculation for the $x>0$ case, but the $x<0$ should be more precise.
The second line should be:
\begin{equation}
\frac{1}{\frac{\left|x-1 \right|}{x} - \frac{\left|x+1 \right|}{x} } = \frac{sgn(x)}{\frac{\left|x-1 \right|}{|x|} - \frac{\left|x+1 \right|}{|x|} } = \frac{sgn(x)}{\left| 1 -\frac{1}{x} \right| - \left| 1 +\frac{1}{x} \right|},
\end{equation}
where $sgn(x)$ is the sign function, which is $1$, when $x>0$, and $-1$, when $x<0$.
When $x<0$ and close to zero $\left(1 + \frac{1}{x}\right)<0$. Hence
\begin{equation}
\left|1+ \frac{1}{x}\right| =- \left(1+ \frac{1}{x} \right).
\end{equation}
Thus in the case of $\lim_{x\rightarrow0^-} f(x)$ we have
\begin{equation}
\lim_{x\rightarrow0^-} \frac{sgn(x)}{\left|1 - \frac{1}{x} \right| - \left| 1 + \frac{1}{x} \right|}=\lim_{x\rightarrow0^-} \frac{-1}{\left( 1 - \frac{1}{x} \right) + \left( 1 + \frac{1}{x} \right)} = -\frac{1}{2}.
\end{equation}
So the limits on both sides match, hence the function is continuous. You have missed two minus signs. One from the $sgn(x)$ at first, and another one when computing $\left| 1+ \frac{1}{x} \right|$ so your result ends up correct but only because you got lucky in getting an even number of minus signs wrong.
As a final comment, I will say that in order to see that $\left(1+ \frac{1}{x} \right)<0$ for small negative $x$, we can observe that $\frac{1}{x} \rightarrow - \infty$ as $x\rightarrow 0^-$.
|
H: Reference for very basic books in Functional analysis
I'm confused about which books I have to read for Functional analysis for the beginner level.
I need references for very basic books in Functional analysis and that book must contain given Topics below
$1.$ Normed linear spaces,
$2.$ Banach spaces,
$3.$ Hilbert spaces,
$4.$ Compact operators.
$5.$ Properties of $ C[0;1]$ and $L^p[0;1]$
$6.$ Continuous linear maps (linear operators).
$7.$ Hahn-Banach Theorem, Open mapping theorem,
$8.$ Closed graph theorem and the uniform boundedness principle.
AI: My top two recommendations would be Functional Analysis: A First Course by M. T. Nair (very beginner friendly) and Introductory Functional Analysis with Applications by E. Kreyszig.
I also recommend the notes by V. S. Sunder for more of a spectral theoretic focus. You can also check out Functional Analysis by S. Kesavan.
|
H: Proof verification: Fourier Inversion theorem
I want to prove Fourier Inversion theorem:
$$\int_{\mathbb{R}^n}\widehat{f}(\xi)e^{2\pi ix\cdot\xi}d\xi=f(x)$$
almost everywhere, where $f,\widehat{f}\in L^1(\mathbb{R}^n)$.
We can get the equation
$$\int_{\mathbb{R}^n}\widehat{f}(\xi)e^{2\pi ix\cdot\xi}e^{-\pi|\varepsilon \xi|^2}d\xi=\int_{\mathbb{R}^n}f(\xi)\varepsilon^{-n}e^{-\pi\varepsilon^{-2}|\xi-x|^2}d\xi$$
for any $\varepsilon>0$. For the left side of the equation, we apply the Lebesgue dominated convergence theorem.
(Lebesgue dominated convergence theorem)$~~$Let $\{h_k\}$ be a sequence of measurable functions on a measurable set $E$. Suppose that the sequence converges pointwise to a function $h$ and is dominated by some integrable function $g$ in the sense that $$|h_k(x)|\le g(x)$$for all numbers $k\in\mathbb{N}_+$ and all points $x\in E$. Then $h$ is integrable and $$\int_E h(x)~dm=\lim_{k\to\infty}\int_E h_k(x)~dm.$$
In our case, let $$h(\xi):=\widehat{f}(\xi)e^{2\pi ix\cdot\xi}~~~\mbox{ and }~~~g(\xi):= |\widehat{f}(\xi)e^{2\pi ix\cdot\xi}|= |\widehat{f}(\xi) |$$
and we construct a sequence of measurable functions $\{h_k\}$ by $h_k(\xi):= \widehat{f}(\xi)e^{2\pi ix\cdot\xi}e^{-\pi|k^{-1}x|^2}$. Then clearly $$|h_k(\xi)|\le g(\xi)$$for all numbers $k\in\mathbb{N}_+$ and all points $\xi\in {\mathbb{R}^n}$. Since $g$ is also integrable, we have that
$$\lim_{\varepsilon\to 0^+} \int_{\mathbb{R}^n}\widehat{f}(\xi)e^{2\pi ix\cdot\xi}e^{-\pi|\varepsilon x|^2}d\xi= \lim_{k\to \infty} \int_{\mathbb{R}^n} \widehat{f}(\xi)e^{2\pi ix\cdot\xi}e^{-\pi|k^{-1}x|^2}d\xi= \left(\lim_{k\to \infty} e^{-\pi|k^{-1}x|^2}\right)\cdot\int_{\mathbb{R}^n} \widehat{f}(\xi)e^{2\pi ix\cdot\xi}d\xi= \int_{\mathbb{R}^n} \widehat{f}(\xi)e^{2\pi ix\cdot\xi}d\xi.$$
My question is: Is my reasoning right? I’m not sure about it. For example, the construction of $h_n(\xi)$ seems a little wired to me, but I think I must do it if I want to apply Lebesgue dominated convergence theorem. There was no sequence $\{h_n\}$ in our case originally, which is required in the dominated convergence theorem. Any help is appreciated, thanks!
AI: Your construction does not make very much sense to me. Here is how you can proceed:
If I understand your post you want to show
$$\lim_{\varepsilon\to 0^+} \int_{\mathbb{R}^n}\widehat{f}(\xi)e^{2\pi ix\cdot\xi}e^{-\pi|\varepsilon \xi|^2}d\xi= \int_{\mathbb{R}^n} \widehat{f}(\xi)e^{2\pi ix\cdot\xi}d\xi$$
Recall that $\lim_{x \to a} g(x) = L$ if and only if for every sequence $(x_n)_n$ in $\operatorname{dom}(g)\setminus \{a\}$ with $x_n \to a$, we have $g(x_n) \to L$.
We use this now.
So, let $0 < \epsilon_k \to 0$ (written with index $k$ to avoid a clash with the dimension $n$). We must show that
$$\lim_{k \to \infty} \int_{\mathbb{R}^n}\widehat{f}(\xi)e^{2\pi ix\cdot\xi}e^{-\pi|\varepsilon_k \xi|^2}d\xi= \int_{\mathbb{R}^n} \widehat{f}(\xi)e^{2\pi ix\cdot\xi}d\xi$$
For this, we can apply dominated convergence theorem. Indeed, first note that
$$\lim_{k \to \infty} \widehat{f}(\xi)e^{2\pi ix\cdot\xi}e^{-\pi|\varepsilon_k \xi|^2}= \widehat{f}(\xi)e^{2\pi ix\cdot\xi}
$$
for all $\xi$.
Next, note that $$|\widehat{f}(\xi)e^{2\pi ix\cdot\xi}e^{-\pi|\varepsilon_k \xi|^2}| \leq |\hat{f}(\xi)|$$
for all $k$.
By your assumption, $\hat{f} \in L^1(\mathbb{R}^n)$ so we have found an integrable dominating function and the dominated convergence allows us to conclude.
|
H: Showing that a relation is neither an equivalence relation nor a partial order
Say we have a relation $R$ on $\mathbb{Z} \times \mathbb{Z}$ such that $(a, b) R (c, d)$ if $a^2 + b^2 \leq c^2 + d^2$
So to prove that $R$ is not an equivalence relation we need to show that $R$
Is not one of reflexive, symmetric or transitive
And to prove that $R$ is not a partial order we need to show that $R$
Is not one of reflexive, anti-symmetric or transitive
I'm practicing relation type questions, however, my current experience has mainly been with two variables (one on each side of the relation,) so, I'm struggling a bit with this question.
My attempt so far is as follows
$R$ is reflexive as $(a, b) R (a, b)$ for every pair, because $a^2 + b^2 \leq a^2 + b^2$
$R$ is not symmetric as $a^2 + b^2 \leq c^2 + d^2$ does not imply that $c^2 + d^2 \leq a^2 + b^2$
So as $R$ is not symmetric it cannot be an equivalence relation.
At this point, I'm a bit stuck. I'm not sure how to test if $R$ is transitive or anti-symmetric.
AI: To show $R$ is not symmetric you should provide a counterexample. For instance $(1,1)R(2,2)$ but $(2,2)\not R(1,1).$
$R$ is not anti-symmetric as $(1,2)R(2,1)$ and $(2,1)R(1,2)$ but $(1,2)\neq (2,1).$
|
H: Closed operator with non-closed range
I am trying to see whether, if I have $X$ a Banach space and $S:X\rightarrow X$ a closed operator, its image must be closed in $X$.
We know, since $S$ is closed and defined on all of $X$, that by the closed-graph theorem $S$ is bounded, but still I can't quite see why $S(X)$ would be closed, and I can't find a counterexample.
Any help is appreciated.
AI: Let $X=\{f\in C[0,1]: f(0)=0\}$ with the sup norm. Define $T: X \to X$ by $Tf(x)=\int_0^{x} f(t)dt$. Then $T$ is continuous and hence it is a closed operator. Its range is dense (by Weierstrass approximation). If the range is closed then it would be equal to $X$. However every function in the range is continuously differentiable so the range cannot be $X$.
|
H: Let $f\in Hol(\mathbb{C}\backslash\{a_1\ldots a_N\})$ where ${a_1\ldots a_N\,\infty}$ are the poles of $f$ show that $f$ is rational function
Let $f\in Hol(\mathbb{C}\backslash\{a_1, \dots, a_N\})$ where ${a_1, \ldots ,a_N,\infty}$ are the poles of $f$. Show that $f$ is a rational function.
I've tried to define $g(z):=\frac{f(z)}{\prod_{k=1}^{N}(z-a_k)}$ and show that $g$ has the form $\frac{1}{P(z)}$ where $P(z)$ is a polynomial.
AI: Let $p_1, \dots, p_N$ be the principal parts of $f$ at $a_1, \dots, a_N$, i.e. the sums of the negative-power terms of the Laurent series expansions of $f$ at $a_i, i = 1, \dots, N$. Note that the $p_i$ are rational functions. Then $g = f - p_1 - \dots - p_N$ has only removable singularities at the $a_i$, so it extends to an entire function, i.e. holomorphic on all of $\mathbb{C}$. Hence we can expand it as a Taylor series
$$
g(z) = \sum_{k = 0}^{\infty} a_k z^k
$$
Saying that $f$, and hence also $g$ has a pole (and not an essential singularity) at infinity means that
$$
g\left(\frac1{z}\right) = \sum_{k=-\infty}^0 a_{-k} z^k
$$
has a pole at $z = 0$, i.e. the series starts at some finite $-K$. But this means that the sum in the Taylor expansion for $g$ is finite, so $g$ is a rational function (in fact a polynomial). Hence $f = g + p_1 + \dots + p_N$ is a rational function.
|
H: If a function is Lipschitz, and differentiable, is its gradient also Lipschitz?
If $f(x)$ is Lipschitz, i.e.
$$||f(x) - f(y)|| \le L||x-y||$$ is its gradient also Lipschitz?
$$||\nabla f(x) - \nabla f(y)|| \le K||x - y|| $$
And does $L = K$ ?
AI: By considering a function which depends only on the first coordinate you can reduce this to a question on the real line.
Let $h$ be an integrable function on $\mathbb R$ which is not (essentially) bounded; for example $h(t)=|t|^{-1/2}$ for $0<|t|\le 1$ and $h(t)=0$ otherwise. Let $g(x)=\int_0^{x} h(t)dt$ and $f(x)=\int_0^{x} g(t)dt$. Since $f$ has a bounded derivative (namely $g$, with $|g|\le\|h\|_1$) we see that $f$ is Lipschitz. However $f'=g$ is not Lipschitz: if it were Lipschitz then $h$ would be bounded, since $g'=h$ almost everywhere.
|
H: Normal subgroup and singleton orbits
Here is the exercise 20.11 from Groups and Symmetry from Armstrong :
Let $H$ be a subgroup of $G$ and write $X$ for the set of left cosets of $H$ in $G$. We have the action : $$ g(xH) = gxH$$
Show that $H$ is a normal subgroup iff every orbit of the induced action of $H$ on $X$ contains just one point.
$\Rightarrow$: if $H$ is normal then $h(xH)=hxH=hHx=Hx=xH$, so it is clear that the $H$-orbit of $xH$ is the singleton $\lbrace xH \rbrace$.
For the other way I don't know how to do.
AI: You have already done one implication. The other one is as follows:
For any $h \in H$ and $x \in G$, you have $h(xH)=xH$, so $x^{-1}hx\in H$, i.e. $h \in xHx^{-1}$. So $H \subseteq xHx^{-1}$ for every $x$; replacing $x$ by $x^{-1}$ gives the reverse inclusion, so $H = xHx^{-1}$ for any $x$, which means that $H$ is normal in $G$.
|