H: Let $H$ be a complex Hilbert space, $T, T_n \in B(H)$. Prove that $T_n x \to Tx$.
I need help for the next exercise:
Let $H$ be a complex Hilbert space, let $T, T_n \in B(H)$ for $n \geq 1$, and suppose that $\|T_{n} x\| \to \|T x\|$ and $\langle T_n x,x \rangle \to \langle Tx,x \rangle$ for all $x \in H$. Prove that $T_n x \to Tx$ for all $x \in H$.
I tried to do the following:
$\| T_n x - Tx \|^2 = \langle T_n x - Tx, T_n x - Tx \rangle = \langle T_n x, T_n x \rangle - \langle T_n x, Tx \rangle - \langle Tx, T_n x \rangle + \langle Tx, Tx \rangle = \|T_n x\|^2 + \| Tx \|^2 - (\langle T_n x, Tx \rangle + \langle Tx, T_n x \rangle)$
I think that $\langle T_n x, Tx \rangle + \langle Tx, T_n x \rangle \to \langle Tx,Tx \rangle + \langle Tx, Tx \rangle$
but I really couldn't prove this using the fact that $\langle T_n x,x \rangle \to \langle Tx,x \rangle$.
I would appreciate it if someone could help me. Good day to everyone.
AI: Over a real Hilbert space you cannot prove it, since it is false there; the complex case is genuinely different, because there the polarization identity recovers $\langle T_n x,y\rangle$ from the quadratic forms $\langle T_n z,z\rangle$. Let $H$ be $\Bbb R^2$, with its standard inner product. Take $T_n(a,b)=(-1)^n(b,-a)$, and $T(a,b)=(-b,a)$. Then:
each $T_n$ and $T$ is an isometry, so $\|T_nx\|=\|x\|=\|Tx\|$ for every $x$, and in particular $\|T_{n} x\| \to \|T x\|$;
for each $n\in\Bbb N$ and each $x\in\Bbb R^2$, $\langle T_nx,x\rangle=\langle Tx,x\rangle=0$, and therefore $\lim_{n\to\infty}\langle T_nx,x\rangle=\langle Tx,x\rangle$.
But you don't have $\lim_{n\to\infty}T_n=T$.
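Indeed, writing $R(a,b)=(b,-a)$, we have $T_n=(-1)^nR$, so for any $x\neq0$ the sequence $(T_nx)$ alternates between the two distinct points $Rx$ and $-Rx$:
$$\|T_{n+1}x-T_nx\|=\|(-1)^{n+1}Rx-(-1)^nRx\|=2\|Rx\|=2\|x\|\not\to0,$$
so $(T_nx)$ is not even a Cauchy sequence.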
|
H: Is there a generic criterion for a function to be the gradient of another function?
I am interested to know whether there is a generic criterion in which a function $f:\mathbb{R}^n \to \mathbb{R}^n$ satisfies $f(x) = \nabla d(x)$ for some function $d:\mathbb{R}^n \to \mathbb{R}$.
Many thanks!
AI: Provided your function is defined on all of $\mathbb{R}^n$, you want to know if $f$ is exact. Since the cohomology of $\mathbb{R}^n$ is trivial, this happens exactly when $f$ is closed, that is, whenever $\mathrm{d} \! f = 0$. Writing this condition down properly requires your function to be treated as a differential form, which is typically not much to ask for.
Broadly though, the condition you're looking for is (for all $i$ and $j$)
$$\frac{\partial f_i}{\partial x_j} = \frac{\partial f_j}{\partial x_i}.$$
You can think of this as being an "obvious" necessary condition, since if $f_i = \frac{\partial d}{\partial x_i}$, as it would be if $f = \nabla d$, then this condition is asking that $\frac{\partial^2 d}{\partial x_i \partial x_j} = \frac{\partial^2 d}{\partial x_j \partial x_i}$, which is true provided $d$ is smooth enough.
In $\mathbb{R}^3$, this is the familiar theorem that $f$ has a potential exactly when $\text{curl} f = 0$ (cf. this answer).
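For a concrete check of the criterion: with $f(x,y)=(y+2x,\;x)$ we have
$$\frac{\partial f_1}{\partial y}=1=\frac{\partial f_2}{\partial x},$$
and indeed $f=\nabla d$ for $d(x,y)=xy+x^2$; by contrast, $f(x,y)=(-y,x)$ has $\partial f_1/\partial y=-1\neq1=\partial f_2/\partial x$, so it cannot have a potential.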
If your function is not defined everywhere, then this condition might fail. That is, you can have a function where $\mathrm{d} \! f = 0$ but it doesn't have a potential! In this case, the geometry of $U \subset \mathbb{R}^n$ where $f$ is defined becomes important, and indeed the cohomology tells you when the theorem is still true, and gives you more precise information about how the theorem fails when it does.
As mentioned in the below comment, this also relies on each of the $f_i$ being differentiable. But in the absence of such a condition, I don't know of any results.
I hope this helps ^_^
|
H: Existence of Hausdorff space comprised of disjoint dense subsets
Problem $13H$ of Willard states -
For any set $X$, there is a Hausdorff space $Y$ which is the union of a collection $\{Y_x:x\in X\}$ of disjoint subsets dense in $Y$.
I have no idea how to do this. Any help is appreciated!
AI: Here’s a sketch of one way; I’ve left a few details to be filled in or checked.
Let $Z$ be a Hausdorff space that is the union of two disjoint dense subsets, $D_0$ and $D_1$. For each $x\in X$ let $Z_x$ be a copy of $Z$, and let $Y=\prod_{x\in X}Z_x$. For each $A\subseteq X$ let $\chi_A$ be the indicator function of $A$, and let
$$D_A=\prod_{x\in X}D_{\chi_A(x)}\;.$$
Then $\{D_A:A\subseteq X\}$ is a family of $2^{|X|}$ pairwise disjoint dense subsets of $Y$ whose union is $Y$, which is more than we need: just combine them into $|X|$ pairwise disjoint groups and take the union of each group, each such union still being dense. For instance, use
$$Y_0=\bigcup_{x\in X}D_{\{x\}}$$
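(For a concrete starting space $Z$, take $Z=\Bbb R$ with $D_0=\Bbb Q$ and $D_1=\Bbb R\setminus\Bbb Q$: the rationals and the irrationals are disjoint dense subsets of the Hausdorff space $\Bbb R$ whose union is all of $\Bbb R$.)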
|
H: Integrate $\frac{\theta \sin \theta}{1+\cos^2 \theta}$ with respect to $\theta$
Integrate :
$$\int_0^\pi \frac{\theta \sin \theta}{1+\cos^2 \theta} d\theta$$
I tried to do a substitution by letting : $u=\cos \theta \implies du=-\sin\theta\ d\theta$
But I have a problem with that $\theta$; I don't know how to deal with this variable. I tried some simplifications, but it gets complicated. Here's what I've done:
\begin{align}
\frac{\theta \sin \theta}{1+\cos^2 \theta}&=\frac{\theta \sin\theta}{1+\frac{1+\cos 2\theta}{2}}\\
&=\frac{2\theta \sin \theta}{3+\cos 2\theta}\\
&=\frac{\theta 2\sin \theta \cos\theta}{\cos\theta(3+\cos 2\theta)}\\
&=\frac{\theta \sin 2\theta}{\cos\theta(3+\cos 2\theta)}
\end{align}
Any hints ? Thanks in advance !
AI: Here's a trick which I use always with integrals involving trigonometric functions :
$$\int_\alpha^\beta \varphi (\xi) d\xi=\int_\alpha^\beta \varphi (\alpha +\beta-\xi) d\xi$$
The proof is trivial and left for you as an exercise, lol!
Anyway, applying this technique to this integral :
Let $$I=\int_0^\pi \frac{x\sin x}{1+\cos^2x}dx$$
We'll have after applying this formula :
\begin{align}
I&=\int_0^\pi \frac{(\pi-x)\sin (\pi-x)}{1+\cos^2(\pi-x)} dx\\
2I&=\int_0^\pi \frac{x\sin x}{1+\cos^2x} + \frac{(\pi-x)\sin (\pi-x)}{1+\cos^2(\pi-x)}dx\\
I&=\frac{1}2\int_0^\pi \frac{x \sin x+\pi \sin x-x\sin x}{1+\cos^2x}dx\\
&=\frac{1}2\int_0^\pi \frac{\pi \sin x}{1+\cos^2x}dx\\
&=\frac{\pi}2\int_0^\pi \frac{ \sin x}{1+\cos^2x}dx
\end{align}
Now using the substitution you did earlier
$$ u=\cos x \implies du=-\sin x\,dx$$
So ;
\begin{align}
I&=\frac{\pi}2\int_1^{-1} \frac{-du}{1+u^2}\\
&=\frac{\pi}2\int_{-1}^{1} \frac{du}{1+u^2}\\
&=\frac{\pi}2 \arctan u\bigg\vert_{-1}^1\\
&=\frac{\pi}2 \bigg(\frac{\pi}4 +\frac{\pi}4\bigg)\\
&=\frac{\pi^2}{4}
\end{align}
Hence, as @PeterForeman said, your integral is: $\displaystyle \frac{\pi^2}{4}$
By the way, if you want the proof of the formula, all that you have to do is :
$$\xi=\alpha +\beta-u \Leftrightarrow d\xi=-du$$
Therefore;
$$\int_\beta^\alpha \varphi (\alpha+\beta-u) (-du)=\int_\alpha^\beta \varphi (\alpha +\beta-\xi) d\xi$$
|
H: Proof of the First Isomorphism Theorem
Here's what I'm trying to prove;
Let $T: V \to W$ be a vector space homomorphism. Let $N = \ker(T)$. Then, the quotient vector space $V/N$ is isomorphic to $T(V)$.
Proof Attempt:
Define the following map:
$$\forall x \in V: f(x+N) = T(x)$$
I claim that this is bijective and linear. I will prove linearity first.
Proof of additivity
Let $u,v \in V/N$. Then, $u = x +N$ and $v = y + N$ for some $x,y \in V$. So:
$$f(u+v) = f((x+y)+N) = T(x+y) = T(x) + T(y) = f(u) + f(v)$$
That proves additivity.
Proof of homogeneity
Let $\alpha \in \mathbb{F}$ and $u \in V/N$. Then, $u = x+N$ for some $x \in V$. So:
$$f(\alpha u) = f(\alpha x + N) = T(\alpha x) = \alpha T(x) = \alpha f(u)$$
That proves homogeneity. Hence, $f$ is linear.
Now, we will prove bijectivity.
Proof of Injectivity
Let $u,v \in V/N$ such that:
$$f(u) = f(v)$$
Now, $u = x + N$ and $v = y + N$ for some $x,y \in V$. So, the equality above implies that:
$$T(x) = T(y)$$
$$\implies T(x-y) = 0$$
$$\implies x-y \in N$$
$$\iff x + N = y+N$$
That just means that $u = v$. Hence, $f$ is injective.
Proof of Surjectivity
Let $w \in T(V)$. Then, there is an $x \in V$ such that $T(x) = w$. Define $u = x + N$. Clearly:
$$f(u) = f(x+N) = T(x) = w$$
So, a generic element from the codomain has a preimage under $f$. This means that $f$ is surjective.
Since $f$ is linear and bijective, it follows that it is an isomorphism between $V/N$ and $T(V)$. Hence, $V/N$ and $T(V)$ are isomorphic. That proves the desired result.
Does the proof above work? If it doesn't, why? What can I do to fix it?
AI: Your proof is almost correct. But you forgot one very important step. When we define a function on equivalence classes which does something with the representatives we have to prove it is well defined, i.e does not depend on the choice of the representatives.
So in your example we have to show that if $x+N=y+N$ then $f(x+N)=f(y+N)$, otherwise it will not be a well defined function. This is easy though. Suppose $x+N=y+N$. This means $x-y\in N$, and by the definition of $N$ we get $T(x-y)=0$. Since $T$ is linear this implies $T(x)=T(y)$, so indeed $f(x+N)=f(y+N)$.
|
H: Let $H$ be a complex Hilbert space, $T, T_n \in B(H)$. Prove that $T_n x \to Tx$.
I need help for the next exercise:
Let $H$ be a complex Hilbert space, let $T, T_n \in B(H)$ for $n \geq 1$, and suppose that $\|T_{n} x\| \to \|T x\|$ and $\langle T_n x,x \rangle \to \langle Tx,x \rangle$ for all $x \in H$. Prove that $T_n x \to Tx$ for all $x \in H$.
I tried to do the following:
$\| T_n x - Tx \|^2 = \langle T_n x - Tx, T_n x - Tx \rangle = \langle T_n x, T_n x \rangle - \langle T_n x, Tx \rangle - \langle Tx, T_n x \rangle + \langle Tx, Tx \rangle = \|T_n x\|^2 + \| Tx \|^2 - (\langle T_n x, Tx \rangle + \langle Tx, T_n x \rangle)$
I think that $\langle T_n x, Tx \rangle + \langle Tx, T_n x \rangle \to \langle Tx,Tx \rangle + \langle Tx, Tx \rangle$
but I really couldn't prove this using the fact that $\langle T_n x,x \rangle \to \langle Tx,x \rangle$.
I would appreciate it if someone could help me. Good day to everyone.
AI: Note that by using the polarization identity it follows that $\langle T_nx,y\rangle\to \langle Tx,y\rangle$ for all $x,y\in H$.
Now fix a constant $x\in H$. By letting $y=Tx$ we have $\langle T_nx, Tx\rangle\to \langle Tx,Tx\rangle$ and $\langle Tx, T_nx\rangle=\overline{\langle T_nx, Tx\rangle}\to \overline{\langle Tx,Tx\rangle}=\langle Tx,Tx\rangle$. Combine this with the attempt you made, and you get the result.
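For completeness, the polarization identity being used here (with the convention that the inner product is linear in the first slot) reads, for any bounded operator $S$,
$$\langle Sx,y\rangle=\frac14\sum_{k=0}^{3}i^k\langle S(x+i^ky),\,x+i^ky\rangle,$$
so applying the hypothesis with $z=x+i^ky$ for $k=0,1,2,3$ yields $\langle T_nx,y\rangle\to\langle Tx,y\rangle$.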
|
H: Using Wilson's Theorem to find solution to $n^2 \equiv -1 \pmod{p}$
I am currently studying number theory and its basic topics, and I came across this problem.
Wilson's Theorem states that if $p$ is prime, then $(p-1)!\equiv -1 \pmod{p}$.
If $p \equiv 1 \pmod{4}$ is prime, then use Wilson's Theorem to find a number $n$ so that $n^2 \equiv -1 \pmod{p}$.
I tried to substitute $p$ with $4k+1$, but I didn't get very far. I don't have any other ideas for tackling this problem. Any, hopefully elementary, solutions?
AI: By Wilson's theorem,
$(p-1)!=(p-1)\times(p-2)\times\cdots\times\dfrac{p+1}2\times\dfrac{p-1}2\times\cdots\times2\times1\equiv-1\pmod p.$
This is
$(-1)(-2)\cdots\times(-\dfrac{p-1}2)\times\dfrac{p-1}2\times\cdots\times2\times1\equiv-1\pmod p,$
and, since it is essentially given that $\dfrac{p-1}2$ is even,
we can factor out $(-1)^{\text{even}}=1$ to get $\left(1\times2\times\cdots\times\dfrac{p-1}2\right)^2\equiv-1\pmod p.$
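Explicitly, then, $n=\left(\frac{p-1}{2}\right)!$ works. A quick sanity check with $p=13$: $n=6!=720\equiv5\pmod{13}$, and $5^2=25\equiv-1\pmod{13}$.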
|
H: How to rewrite $R^TAR = V$ to find $A$
How can one rewrite the following equation to solve for $A$? Note that $V$ is the eigenvalue matrix, $R$ is the orthogonal eigenvector matrix, and $A$ is a symmetric matrix.
AI: I assume that we have $R^TAR = V$, and $R$ is an orthogonal matrix, so $RR^T=R^TR=I$. It follows that
$$
R^TAR = V \implies\\
R(R^TAR) = RV \implies\\
AR = RV \implies\\
(AR)R^T = (RV)R^T \implies\\
A = RVR^T.
$$
|
H: Prove $\frac{tf(t)}{\int_0^t f(s)ds}\ge\frac{tg(t)}{\int_0^t g(s)ds}$ given $f(0)=g(0)=0, f'(0)\ge g'(0)\ge 0, f''(s)\ge g''(s)> 0$ for $s\in[0,t]$.
I thought about a statement that is intuitively right, but I don't know how to prove it. Could anyone help me with this? Say two second order differentiable functions $f$ and $g$ defined on $[0,t]$, satisfying
$f(0) = g(0) = 0$, $f'(0) \geq g'(0)\geq 0$ and $f''(s) \geq g''(s)> 0$ for $s\in[0,t]$. Then how to show that $$\frac{tf(t)}{\int_0^t f(s)ds}\geq \frac{tg(t)}{\int_0^t g(s)ds}$$
Intuitively, they are both positive, monotonically increasing functions, and $f$ is steeper than $g$. So the ratio of the rectangle area $tf(t)$ to the area under the curve, which is the integral $\int_0^t f(s)ds$, should be larger for $f$ than for $g$.
Is this intuition right? If this is right, how to show it? Many thanks!
AI: Let $f = g + h$. So your assumptions are $g(0)=h(0) = 0$, $g'(0) \ge 0$, $h'(0) \ge 0$, $g''(s) > 0$ and $h''(s) \ge 0$ for $s > 0$, and your desired
conclusion is equivalent to
$$ h(t) \int_0^t g(s)\; ds \ge g(t) \int_0^t h(s)\; ds \tag{1}$$
Actually let's assume $h''(s) > 0$, not just $\ge$.
Take any $g$ and $h$ satisfying the assumptions and with the two sides of (1) not equal. Then if these are not a counterexample, just interchange $g$ and $h$ and you get a counterexample.
For example, with $g(s) = s^2$ and $h(s) = s^3$, (1) is $t^6/3 \ge t^6/4$
which is true,
but with $g(s)=s^3$ and $h(s)=s^2$, (1) is $t^6/4 \ge t^6/3$ which is false.
This corresponds to $f(s) = s^3+s^2$.
|
H: If $A$ and $B$ are invertible matrices, then $A^2B^{-1}$ is invertible.
Let $A$ be an $n \times n$ matrix and $B$ be an $n \times n$ matrix. I know:
$$
\begin{align*}
AA^{-1}=A^{-1}A=I_n \tag{1}\\
BB^{-1}=B^{-1}B=I_n \tag{2}
\end{align*}
$$
Starting from $B^{-1}B=I_n$, I have the following series of equalities:
$$
\begin{align*}
A^2B^{-1}B=A^2I_n \tag{Left multiplying by $A^2$}\\
A^2BB^{-1}=A^2(BB^{-1}) \tag{Replacing with (2)}\\
\vdots
\end{align*}
$$
Is this proof heading in the right direction ?
Do I need to assume:
$A$ and $B$ are square matrices, and
They have the same number of rows and columns ?
AI: Perhaps you are overcomplicating things. It suffices to find a multiplicative inverse, and $BA^{-2}$ will do, as you can easily check.
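Explicitly:
$$(A^2B^{-1})(BA^{-2})=A^2(B^{-1}B)A^{-2}=A^2A^{-2}=I_n,$$
and likewise $(BA^{-2})(A^2B^{-1})=I_n$, so $A^2B^{-1}$ is invertible with inverse $BA^{-2}$.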
To answer your other questions, the matrices must be square because else the notion of invertibility does not even make sense.
If you know about determinants, you can just note that $A\in M_{n\times n}(k)$ is invertible if and only if $\det(A)\ne 0$. As a consequence, $\det(A)\ne 0$ and $\det(B)\ne 0$. So,
$$ \det(A^2B^{-1})=\frac{\det(A)^2}{\det B}\ne 0$$
and you are done.
|
H: Is "each player always defects" a Nash equilibrium in Iterated Prisoners Dilemma
For the iterated prisoners dilemma with random ending time, is it the case that "both players defects each round" is a Nash equilibrium?
I understood a Nash equilibrium as a set of strategies for which it holds that no player benefits from switching strategy if the other player(s) keep their strategy. This indeed seems to be the case here since when everyone is defecting, there is no benefit to cooperating.
Did I understand it correctly?
(I am aware that it is not necessarily the best strategy in practice but I am specifically asking if it is a Nash equilibrium)
AI: Yes, both players defecting in every round regardless of the history of the game is a Nash equilibrium. Moreover, it is also a subgame perfect equilibrium, although there are also other subgame perfect equilibria of the in(de)finitely repeated prisoner's dilemma game (see folk theorem for detail).
|
H: Proof of the Second Isomorphism Theorem
Here's what I'm trying to prove:
Let $V$ be a vector space. Let $M$ and $N$ be linear subspaces of $V$. Then, it is the case that $M/(M \cap N)$ is isomorphic to $(M+N)/N$.
Proof Attempt:
Define the relation $Q: M/(M \cap N) \to (M+N)/N$ as follows:
$$\forall x \in M: Q(x+M \cap N) = x+N$$
This is totally defined so I will first show that it is well-defined. Suppose that $x+M \cap N = y + M \cap N$ for $x,y \in M$. Then, $x-y \in M \cap N$. So, $x-y \in M$ and $x-y \in N$. So:
$$x+N = y+N,$$
and hence $Q(x+M \cap N) = Q(y+M \cap N)$.
So, the function is well-defined.
We need to prove that this is linear and bijective. We will prove linearity first.
Proof of Additivity
Let $u,v \in M/(M \cap N)$. Then, $u = x + M \cap N$ and $v = y + M \cap N$ for some $x,y \in M$. So:
$$Q(u+v) = Q((x+y)+M \cap N) = (x+y) + N = (x+N) + (y+N) = Q(u) + Q(v)$$
That proves additivity.
Proof of Homogeneity
Let $\alpha \in \mathbb{F}$ and $u \in M/(M \cap N)$. Then, $u = x+M \cap N$ for some $x \in M$. So:
$$Q(\alpha u) = Q(\alpha x + M \cap N) = \alpha x + N = \alpha (x+N) = \alpha Q(u)$$
That proves homogeneity. Hence, $Q$ is linear.
Now, we will need to prove bijectivity.
Proof of Injectivity
Let $u,v \in M/(M \cap N)$ such that:
$$Q(u) = Q(v)$$
Since $u = x + M \cap N$ and $v = y + M \cap N$ for some $x,y \in M$, we have:
$$x+N = y +N$$
$$\implies x-y \in N$$
So, $x-y \in M \cap N$ and that implies that:
$$x + M \cap N = y + M \cap N$$
Hence, $u = v$. That proves injectivity.
Proof of surjectivity
Let $x+N \in (M+N)/N$, where $x \in M+N$. Then:
$$\exists x' \in M: \exists n \in N: x = x'+n$$
Since $x-x' \in N$, it follows that:
$$x+N = x'+N$$
It is also the case that $Q(x'+M \cap N) = x'+N$. So, define $x'+M \cap N$ as the preimage of $x+N$ and we are done. This proves surjectivity.
Since $Q$ is linear and bijective, it follows that it is an isomorphism between $M/(M \cap N)$ and $(M+N)/N$. That proves the desired result.
Does the proof above work? If it doesn't, why? How can I fix it?
AI: The proof looks absolutely correct. There is also another way to prove it, probably a bit shorter. You can define $\pi: M\to (M+N)/N$ by $\pi(x)=x+N$. This is clearly a linear map. It is also surjective, because for each $m\in M,n\in N$ we have $(m+n)+N=m+N=\pi(m)$, so the image of $\pi$ is all $(M+N)/N$.
Finally, we can compute the kernel of $\pi$. For $x\in M$ we have $\pi(x)=N$ if and only if $x+N=N$, if and only if $x\in N$, if and only if $x\in M\cap N$. Hence $Ker(\pi)=M\cap N$. Then it follows from the first isomorphism theorem that $M/(M\cap N)\cong (M+N)/N$.
|
H: If $zw$ is real, must $w$ be a multiple of the conjugate of $z$?
I'm talking about this:
$$c(a+bi)(a-bi) = c(a^2+b^2)$$ where $c$ is real.
Is it true that $a+bi$ can only be multiplied with $c(a-bi)$ in order to get a real number or are there other complex numbers that can multiply to get a real number? If this is the only way, how do you prove it?
AI: Well, obviously $0z$ will be real for all $z$. For $w\ne 0$, recall that $\frac1{w}=\frac{\overline w}{\lvert w\rvert^2}$, and therefore $zw=\alpha\in\Bbb R$ implies $z=\frac{\alpha}{\lvert w\rvert^2}\overline w$, which is a real multiple of $\overline w$.
|
H: Find a counterexample showing that if the Banach space $X$ is not reflexive, then the operator doesn't have to be compact
Let $X, Y$ be Banach spaces and $A\in \mathcal{L}(X,Y)$. There is a proposition that affirms that if $X$ is reflexive, and for any $x_n\rightarrow x$ weakly in $X$ we have that $Ax_n\rightarrow Ax$ in norm as $n\rightarrow \infty$, then $A$ is compact.
I have proved it using the Eberlein–Šmulian theorem, so I think the proof is fairly straightforward that way.
But I can't find any counterexample showing that $A$ doesn't have to be compact if $X$ is not reflexive.
AI: Consider $X=Y=\ell^1(\mathbb N)$, and let $A=I$, the identity operator. In $\ell^1$, weak convergence implies strong convergence (this is the Schur property; see here). Thus for any $(x_n)$ converging weakly to $x$ we know $x_n\to x$ in norm, and so $Ix_n\to Ix$ clearly. However, the identity operator on a Banach space is compact if and only if the space is finite dimensional.
|
H: Factor $3+2\sqrt{3}i$ as a product of irreducible elements in the ring $\mathbb{Z}\left [ \sqrt{-3} \right ]$
The question I am having trouble with is:
Factor $3+2\sqrt{3}i$ as a product of irreducible elements in the ring $\mathbb{Z}\left [ \sqrt{-3} \right ]$
I don't really understand how to go about this besides guessing. Is there any better way to do this?
AI: You want to express $\alpha=a+b\sqrt{3}i=\beta \gamma$, where $\beta, \gamma \in \Bbb{Z[\sqrt{-3}]} $ and they are not units. Using the idea of norms $N(\alpha)=\alpha \bar{\alpha}=a^2+3b^2 \in \Bbb{Z}_{\geq 0}$, we can show that the norm is multiplicative, i.e. $N(\beta \, \gamma)=N(\beta) \, N(\gamma)$. We get the following:
\begin{align*}
\alpha&=\beta \gamma\\
N(\alpha)&=N(\beta \, \gamma)\\
N(\alpha)&=N(\beta) \, N(\gamma).
\end{align*}
In this particular case, where $\alpha=3+2\sqrt{-3}$, we have $N(\alpha)=9+3(4)=21$. So we want $\beta, \gamma$ such that
$$N(\beta) N(\gamma)=21.$$
To make sure that we have a non-trivial factorization, we want to avoid $\beta$ and $\gamma$ being units. It can be shown easily that $\beta$ is a unit $\iff$ $N(\beta)=1$. In this ring, it only happens when $\beta =\pm 1$ (check this!!).
Coming back, we want those $\beta, \gamma$ such that $N(\beta)=7$ and $N(\gamma)=3$, or vice versa. Now ask yourself: can we have $\beta=c+d\sqrt{-3}$ such that $c^2+3d^2=7$? This gives us $c=\pm 2$ and $d=\pm 1$, so $\beta=\pm 2 \pm \sqrt{-3}$. Similarly you can get $\gamma=\pm \sqrt{-3}$. Now you can verify that $\alpha=\beta \, \gamma$.
But one questions remains, are these $\beta$ and $\gamma$ irreducible? Can (say) $\beta$ be factored non-trivially?
Hint: Look at the norm of $\beta$, is there something special about it?
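For one consistent choice of signs: taking $\beta=2-\sqrt{-3}$ and $\gamma=\sqrt{-3}$ gives
$$\beta\gamma=(2-\sqrt{-3})\sqrt{-3}=2\sqrt{-3}-(\sqrt{-3})^2=3+2\sqrt{-3}=\alpha.$$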
|
H: Integrating $\int_0^1 \frac{\ln(1+x)\ln^3 x}{1+x}\,dx$ with restricted techniques
How does one calculate these four following integrals?
$$ \int_0^1\frac{\ln(1\pm\varepsilon x)\ln(x)^3}{1\pm \varepsilon x}\,dx,\;\forall\varepsilon\in\{-1,1\}.
$$
CONTEXT:
Our teacher asks us to calculate these four integrals
using only changes of variables, integrations by parts and the following known result: $$\int_0^1 \frac{\ln^n(x)}{1-x} \; dx=(-1)^n n!\zeta(n+1),\quad \int_0^1 \frac{\ln^n(x)}{1+x} \; dx=(-1/2)^n (-1 + 2^n) \Gamma(1 + n) \zeta (1 + n)$$
without using complex analysis, series, differentiation under the integral sign, double integrals or special functions.
To calculate $U =\int_0^1 \frac{\ln(1+x)\ln^3 x}{1-x}\,dx$:
By IBP
$ U =\left[\left(\int_0^x
\frac{\ln^3t}{1-t}\,dt\right)\ln(1+x)\right]_0^1-\int_0^1
\frac{1}{1+x}\left(\int_0^x\frac{\ln^3t}{1-t}\,dt\right)\,dx\\
=-6\zeta(4)\ln 2+\int_0^1\int_0^1\left(\frac{\ln^3(tx)}{(1+t)(1+x)}-\frac{\ln^3(tx)}{(1+t)(1-tx)}\right)\,dt\,dx\\
=-6\zeta(4)\ln 2+6\left(\int_0^1\frac{\ln^2
t}{1+t}\,dt\right)\left(\int_0^1\frac{\ln
x}{1+x}\,dx\right)+2\left(\int_0^1\frac{\ln^3
t}{1+t}\,dt\right)\left(\int_0^1\frac{1}{1+x}\,dx\right)-\int_0^1 \frac{1}{t(1+t)}\left(\int_0^t
\frac{\ln^3 u}{1-u}\,du\right)\,dt\\$
$=-\frac{33}{2}\zeta(4)\ln
2-\frac{9}{2}\zeta(2)\zeta(3)-\int_0^1
\frac{1}{t(1+t)}\left(\int_0^t \frac{\ln^3
u}{1-u}\,du\right)\,dt\\
\overset{\text{IBP}}=-\frac{33}{2}\zeta(4)\ln
2-\frac{9}{2}\zeta(2)\zeta(3)-\left[\ln\left(\frac
{t}{1+t}\right)\left(\int_0^t \frac{\ln^3
u}{1-u}\,du\right)\right]_0^1+\int_0^1
\frac{\ln\left(\frac{t}{1+t}\right)\ln^3
t}{1-t}\,dt\\
=-\frac{45}{2}\zeta(4)\ln
2-\frac{9}{2}\zeta(2)\zeta(3)+\int_0^1
\frac{\ln\left(\frac{t}{1+t}\right)\ln^3
t}{1-t}\,dt\\
=-\frac{45}{2}\zeta(4)\ln
2-\frac{9}{2}\zeta(2)\zeta(3)+24\zeta(5)-U\\
U =\boxed{-\frac{45}{4}\zeta(4)\ln
2-\frac{9}{4}\zeta(2)\zeta(3)+12\zeta(5)}$
Precisely, I can't see how to calculate $V=\int_0^1 \frac{\ln(1+x)\ln^3 x}{1+x}\,dx$.
Edit: I am also interested in how to calculate the two other integrals.
AI: $$\mathcal{I}=\int_0^1\frac{\ln^3x\ln(1+x)}{x(1+x)}dx=\int_0^\infty\frac{\ln^3x\ln(1+x)}{x(1+x)}dx-\underbrace{\int_1^\infty\frac{\ln^3x\ln(1+x)}{x(1+x)}dx}_{x\mapsto 1/x}$$
$$\mathcal{I}=\int_0^\infty\frac{\ln^3x\ln(1+x)}{x(1+x)}dx+\color{blue}{\int_0^1\frac{\ln^3x\ln(1+x)}{1+x}dx}-\int_0^1\frac{\ln^4x}{1+x}dx$$
By adding $\ \mathcal{I}=\int_0^1\frac{\ln^3x\ln(1+x)}{x(1+x)}dx=\int_0^1\frac{\ln^3x\ln(1+x)}{x}dx-\color{blue}{\int_0^1\frac{\ln^3x\ln(1+x)}{1+x}dx}\ $ to both sides, the blue integral nicely cancels out and we get
$$2\mathcal{I}=\int_0^\infty\frac{\ln^3x\ln(1+x)}{x(1+x)}dx-\int_0^1\frac{\ln^4x}{1+x}dx+\underbrace{\int_0^1\frac{\ln^3x\ln(1+x)}{x}dx}_{IBP}$$
$$2\mathcal{I}=\underbrace{\int_0^\infty\frac{\ln^3x\ln(1+x)}{x(1+x)}dx}_{\text{Beta function:}\ 6\zeta(2)\zeta(3)+6\zeta(5)}-\frac54\underbrace{\int_0^1\frac{\ln^4x}{1+x}dx}_{\frac{45}2\zeta(5)}$$
or
$$\mathcal{I}=3\zeta(2)\zeta(3)-\frac{177}{16}\zeta(5)\tag1$$
But
$$\mathcal{I}=\int_0^1\frac{\ln^3x\ln(1+x)}{x}dx-\int_0^1\frac{\ln^3x\ln(1+x)}{1+x}dx$$
$$=-\frac{45}{8}\zeta(5)-\int_0^1\frac{\ln^3x\ln(1+x)}{1+x}dx\tag2$$
Subtracting (1) and (2) yields
$$\int_0^1\frac{\ln^3x\ln(1+x)}{1+x}dx=\frac{87}{16}
\zeta(5)-3\zeta(2)\zeta(3)$$
The integral $\int_0^\infty\frac{\ln^3x\ln(1+x)}{x(1+x)}dx$ can be calculated without using beta function:
With $\frac1{1+x}=y$ we have
$$\int_0^\infty\frac{\ln^3x\ln(1+x)}{x(1+x)}\ dx=\int_0^1\frac{\ln^3\left(\frac{x}{1-x}\right)\ln x}{1-x}\ dx$$
$$=\int_0^1\frac{\ln^4x}{1-x}-3\int_0^1\frac{\ln^3x\ln(1-x)}{1-x}+3\underbrace{\int_0^1\frac{\ln^2x\ln^2(1-x)}{1-x}}_{IBP}-\underbrace{\int_0^1\frac{\ln x\ln^3(1-x)}{1-x}\ dx}_{IBP}$$
$$=\int_0^1\frac{\ln^4x}{1-x}-3\int_0^1\frac{\ln^3x\ln(1-x)}{1-x}+2\underbrace{\int_0^1\frac{\ln^3(1-x)\ln x}{x}}_{\large 1-x\to x}-\frac14\underbrace{\int_0^1\frac{\ln^4(1-x)}{x}\ dx}_{\large 1-x\to x}$$
$$=\frac34\int_0^1\frac{\ln^4x}{1-x}\ dx-\int_0^1\frac{\ln^3x\ln(1-x)}{1-x}\ dx$$
$$=\frac34(4!\zeta(5))+\sum_{n=1}^\infty H_n\int_0^1 x^n \ln^3x\ dx$$
$$=18\zeta(5)-6\sum_{n=1}^\infty\frac{H_n}{(n+1)^4}$$
$$=18\zeta(5)-6\sum_{n=1}^\infty\frac{H_n}{n^4}+6\zeta(5)$$
$$=18\zeta(5)-6[3\zeta(5)-\zeta(2)\zeta(3)]+6\zeta(5)$$
$$=6\zeta(2)\zeta(3)+6\zeta(5)$$
Your integral can be related to harmonic series:
$$\int_0^1\frac{\ln^3x\ln(1+x)}{1+x}dx=-\sum_{n=1}^\infty (-1)^nH_n\int_0^1 x^n \ln^3xdx$$
$$=6\sum_{n=1}^\infty\frac{(-1)^nH_n}{(n+1)^4}=-6\sum_{n=1}^\infty\frac{(-1)^nH_{n-1}}{n^4}$$
$$=-6\sum_{n=1}^\infty\frac{(-1)^nH_{n}}{n^4}+6\sum_{n=1}^\infty\frac{(-1)^n}{n^5}$$
$$=-6\sum_{n=1}^\infty\frac{(-1)^nH_{n}}{n^4}-\frac{45}{8}\zeta(5)$$
|
H: The tropical integers
Let
\begin{align}
\oplus_\mathbb{N} &= + \\
0_\mathbb{N} &= 0 \\
\odot_\mathbb{N} &= \cdot \\
1_\mathbb{N} &= 1
\end{align}
Then $(\mathbb{N}, \oplus_\mathbb{N}, 0_\mathbb{N}, \odot_\mathbb{N}, 1_\mathbb{N})$ is the ordinary rig of natural numbers. Let $\mathbb{Z} = \mathbb{N}^2 / \sim$ where
\begin{align}
(a_1, a_2) \sim (b_1, b_2) &\iff a_1 \oplus_\mathbb{N} b_2 =a_2 \oplus_\mathbb{N} b_1 \\
(a_1, a_2) \oplus_\mathbb{Z} (b_1, b_2) &= (a_1 \oplus_\mathbb{N} b_1, a_2 \oplus_\mathbb{N} b_2) \\
0_\mathbb{Z} &= [(0_\mathbb{N}, 0_\mathbb{N})]_\sim \\
\ominus_\mathbb{Z} (a_1, a_2) &= (a_2, a_1) \\
(a_1, a_2) \odot_\mathbb{Z} (b_1, b_2) &= ((a_1 \odot_\mathbb{N} b_1) \oplus_\mathbb{N} (a_2 \odot_\mathbb{N} b_2), (a_1 \odot_\mathbb{N} b_2) \oplus_\mathbb{N} (a_2 \odot_\mathbb{N} b_1)) \\
1_\mathbb{Z} &= [(1_\mathbb{N}, 0_\mathbb{N})]_\sim
\end{align}
Then $(\mathbb{Z}, \oplus_\mathbb{Z}, 0_\mathbb{Z}, \ominus_\mathbb{Z}, \odot_\mathbb{Z}, 1_\mathbb{Z})$ is the ordinary ring of integers. Suppose we let
\begin{align}
\oplus_\mathbb{N} &= \max \\
0_\mathbb{N} &= 0 \\
\odot_\mathbb{N} &= + \\
1_\mathbb{N} &= 0
\end{align}
instead. This makes $(\mathbb{N}, \oplus_\mathbb{N}, 0_\mathbb{N}, \odot_\mathbb{N}, 1_\mathbb{N})$ the tropical rig of natural numbers. Call $(\mathbb{Z}, \oplus_\mathbb{Z}, 0_\mathbb{Z}, \ominus_\mathbb{Z}, \odot_\mathbb{Z}, 1_\mathbb{Z})$ the "tropical integers". Which papers, if any, have studied this structure? Does it have a geometric or easily-visualizable interpretation?
AI: Unfortunately, nothing like this works.
The starting point is the observation that the "max-version" of $\sim$ doesn't behave well at all:
It's not an equivalence relation since it's non-transitive: we have e.g. $(1,0)\sim(2,2)$ and $(2,2)\sim (0,1)$ but $(1,0)\not\sim(0,1)$.
More generally, whenever $c>a,b$ we have $(a,b)\sim(c,c)$. So the transitive closure of $\sim$ is everything.
Motivated by this specific observation we can in fact prove a general impossibility result:
Every rig homomorphism from $T=(\mathbb{N};\max,+)$ to a rig $R$ which happens to be a ring has trivial image (= sends everything to the additive identity of $R$).
The point is this: for each $m\in T$ there is some $n\in T$ such that $m+n=n$. But then if $f:T\rightarrow R$ is a rig homomorphism we have $f(m)+f(n)=f(n)$, so since $R$ happens to be a ring we have $f(m)=0$. (In fact, all we need about $R$ is that it have the cancellation property for addition.)
|
H: Show that $P:\mathbb{R}^k\to\mathbb{R}$ by $P(x_1,...,x_k)=\sum_{(i_1,...,i_k)\in I}c(i_1,...,i_k)x_1^{i_1}\cdot\cdot\cdot x_k^{i_k}$ is continuous.
Let $k\geq 1$, let $I$ be a finite subset of $\mathbb{N}^k$ and let $c:I\to \mathbb{R}$ be a function. Show that $P:\mathbb{R}^k\to\mathbb{R}$ by $P(x_1,...,x_k)=\sum_{(i_1,...,i_k)\in I}c(i_1,...,i_k)x_1^{i_1}\cdot\cdot\cdot x_k^{i_k}$ is continuous.
So for base case $k=1$ I have:
$P:\mathbb{R}\to\mathbb{R}$ by $P(x)=\sum_{i\in I} c(i)x^i$ is just a polynomial which are continuous.
So assume that this holds for some $k\in \mathbb{N}$
Then I want to prove that $P(x_1,...,x_{k+1})=\sum_{(i_1,...,i_{k+1})\in I}c(i_1,...,i_{k+1})x_1^{i_1}\cdot\cdot\cdot x_{k+1}^{i_{k+1}}$ is continuous
I think the sum means for example if $k=2$ then $P(x_1,x_2)=\sum_{i\in I}\sum_{j\in I} c(i,j)x_1^i x_2^j$?
I want to say then that since the function $c(i_1,...,i_{k+1})$ is a function on a discrete space, it is continuous, and since $x^i$ is continuous for each $i\in I$, I can use that the product of continuous functions is continuous to show that each $c(i_1,...,i_{k+1})x_1^{i_1}\cdot\cdot\cdot x_{k+1}^{i_{k+1}}$ is continuous; since the sum of continuous functions is continuous, $P$ is then continuous.
The hint for the question says to use that the direct sum of continuous functions is continuous, but I don't see how that relates to this function since it's from $X^k\to \mathbb{R}$, unless I'm not interpreting the notation correctly.
AI: $(x_1,x_2,...,x_k) \to x_i$ is continuous for each $i$, and this implies that $(x_1,x_2,...,x_k) \to x_i^{j}$ is continuous for all $i$ and $j$. Any constant multiple of these is also continuous, and a finite sum of continuous functions is continuous. That finishes the proof.
Example: $x_1x_2^{2}-5x_1^7x_2^{9}$ is obtained by adding $x_1x_2^{2}$ and $-5x_1^7x_2^{9}$. The first term is continuous because it is the product of $x_1$ and $x_2^{2}$. Similarly the second term is also continuous.
|
H: What is the relationship between $p$-norm and 2-norm for any $p$-norm?
I know that there are some general inequalities between the $2$-norm and $1$-norm or $\infty$-norm
Suppose I am given an arbitrary $p$-norm, obviously $p \geq 1$
What can we say about the inequality between $\|x\|_2$ and $\|x\|_p$?
AI: The answer depends on the underlying measure space. The inequality $\|x\|_p\leq\|x\|_r$ does not always hold for $p>r$. For instance, if $X=(0,1)$ with the Lebesgue measure,
then it is not true that $\|f\|_2\leq \|f\|_1$. For example, the function $x^{-1/2}$ belongs to $L_1(0,1)$ but not to $L_2(0,1)$. In fact, if $(X,\Sigma,\mu)$ is a probability space, i.e., $\mu(X)=1$, then
$\|f\|_r\leq \|f\|_p$ if $r<p$, which is the reverse inequality of that suggested in the first comment above.
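For the finite-dimensional case the question seems to ask about (vectors $x\in\mathbb{R}^n$, i.e. counting measure on $n$ points), the standard two-sided comparisons are
$$\|x\|_p\leq\|x\|_2\leq n^{\frac12-\frac1p}\|x\|_p\quad(2\leq p\leq\infty),\qquad\|x\|_2\leq\|x\|_p\leq n^{\frac1p-\frac12}\|x\|_2\quad(1\leq p\leq2),$$
both following from Hölder's inequality together with the monotonicity of $p\mapsto\|x\|_p$.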
|
H: Not sure where my argument breaks down: Let $A \subset \mathbb{R}$ be a countable set. Prove that $\mathbb{R} \setminus A$ is uncountable.
$\textit{Proof.}$ Since A is countable, we know it has the same cardinality as $\mathbb{N}$, or $\mid A \mid = \mid \mathbb{N} \mid$. Additionally, since $\mathbb{N} \subset \mathbb{R}$, it is enough to prove that $\mathbb{R} \setminus \mathbb{N}$ has the same cardinality as $\mathbb{R}$, an uncountable set. We will use the Schroeder - Bernstein Theorem:
$f: \mathbb{R} \setminus \mathbb{N} \to \mathbb{R}$, $f(x) = x$
$g:\mathbb{R} \to \mathbb{R} \setminus \mathbb{N}$, $g(x) = \frac{1}{e^{x} + 2}$
As we have defined two injections, by the Schroeder-Bernstein theorem there exists a bijection between $\mathbb{R} \setminus \mathbb{N}$ and $\mathbb{R}$. Thus, $\mid \mathbb{R} \setminus \mathbb{N} \mid = \mid \mathbb{R} \mid$, so $\mathbb{R} \setminus \mathbb{N}$ is uncountable. Thus, $\mathbb{R} \setminus A$ is uncountable. $\square$
AI: If the complement were countable, then the set of reals would be the union of two countable sets, hence countable, a contradiction. (As for where your argument breaks down: from $|A| = |\mathbb{N}|$ it does not immediately follow that $|\mathbb{R} \setminus A| = |\mathbb{R} \setminus \mathbb{N}|$, and a countable $A$ may also be finite, so proving that $\mathbb{R} \setminus \mathbb{N}$ is uncountable does not settle the general case.)
|
H: Show $\forall c>0$ have $\int^b_af(x)dx=c\int^{b/c}_{a/c}f(cx)dx$ from the definition of integral
Let $f:[a,b]\to\mathbb{R}$ be an integrable function. Prove the following, using only the definition of the integral:
$$\text{For any}~c>0,\int^b_af(x)dx=c\int^{b/c}_{a/c}f(cx)dx$$
Hint: A careful choice of notation is essential in solving this problem, you should consistently write $P$ to denote a partition of $[a,b]$ and $P'$ a partition of $[a/c,b/c].$ You may want to choose $P$ and $P'$ to be related in some way. With this notation, you can also write $m_j,M_j$ to refer to the inf and sup of $f(x)$ for $x$ in the $j$th interval of $P$, and $m_j',M_j'$
for the inf and sup of $f(cx)$ in the $j$th interval of
$P'$.
$($The question is from this online note$)$
This is a short summary of the integral definition
Def. (Integrable function, single variable). A function $f:[a,b]\to\mathbb{R}$ is integrable if it is bounded and $\underline{I^b_a}f=\overline{I^b_a}f.$ When this holds, we define
$$\int_a^bf(x)dx=\underline{I^b_a}f=\overline{I^b_a}f,$$
the integral of $f$ over $[a,b]$.
Here $\underline{I^b_a}f=\sup_PL_Pf$ and $\overline{I^b_a}f=\inf_PU_Pf$,
where $P$ is a partition of $[a,b]$, $L_P f=\sum_{j=1}^Jm_j\,\text{length}(I_j)$ and $U_Pf=\sum_{j=1}^JM_j\,\text{length}(I_j)$,
and $m_j=\inf\{f(x):x\in I_j\}\hspace{5ex}M_j=\sup\{f(x):x\in I_j\}$
My thought
Based on my understanding, the definition can be written as
\begin{align}
\int_a^bf(x)dx=&\sup\left\{\sum_{i=1}^{|P|-1}\left[\inf_{x\in\left[x_i,x_{i+1}\right]}f(x)\right](x_{i+1}-x_i):\text{$P$ is a partition of $[a,b]$}\right\}\\
=&\inf\left\{\sum_{i=1}^{|P|-1}\left[\sup_{x\in\left[x_i,x_{i+1}\right]}f(x)\right](x_{i+1}-x_i):\text{$P$ is a partition of $[a,b]$}\right\}\\
\int_{a/c}^{b/c}f(cx)dx=&\sup\left\{\sum_{i=1}^{|P'|-1}\left[\inf_{x\in\left[x_i,x_{i+1}\right]}f(cx)\right](x_{i+1}-x_i):\text{$P'$ is a partition of $\left[\frac{a}{c},\frac{b}{c}\right]$}\right\}\\
=&\inf\left\{\sum_{i=1}^{|P'|-1}\left[\sup_{x\in\left[x_i,x_{i+1}\right]}f(cx)\right](x_{i+1}-x_i):\text{$P'$ is a partition of $\left[\frac{a}{c},\frac{b}{c}\right]$}\right\}
\end{align}
However, I still can't see how to write this proof, could someone help me.
AI: Consider the partition of $[a,b]$ as $P=\{a=x_0,x_1,...,x_{n-1},x_n=b\}$
Hence the corresponding partition of $[a/c,b/c]$ is $\{a/c=x_0/c,x_1/c,...,x_{n-1}/c,x_n/c=b/c\}$
Let $M_j=\sup \{f(s): x_{j-1}\le s\le x_j\}, m_j=\inf \{f(s): x_{j-1}\le s\le x_j \}$
Let $M_j'=\sup \{f(cs): x_{j-1}/c\le s\le x_j/c\}, m_j'=\inf\{f(cs): x_{j-1}/c\le s\le x_j/c\}$
Do you see why $M_j=M_j'$ and $m_j=m_j'$?
The upper (Darboux) sum for $f(t)$ over $P$ is $\sum_{j=1}^{n}M_j(x_j-x_{j-1})=\sum_{j=1}^{n}cM_j' (x_j/c-x_{j-1}/c)$, where $\sum_{j=1}^{n}M_j' (x_j/c-x_{j-1}/c)$ is the upper sum of $f(ct)$ over the corresponding partition of $[a/c,b/c]$, etc.
Can you take it from here?
|
H: Two types of proofs for the conditional expectation.
I am reviewing the proof that the conditional expectation of $e$ (the conditional expectation function error) given $X$ is zero. This is:
$$e= Y-m(X)$$
$$E(e|X)=E(Y|X)-E(m(X)|X)$$
$$E(e|X)=E(Y|X)-E(Y|X)$$
$$E(e|X)=0$$
where $m(X)$ is $E(Y|X)$; $X$ and $Y$ are random variables.
When I check the justification for why $E(m(X)|X) = m(X)$, I find two types of proof:
Conditioning Theorem:
If $E|g(x)y|<\infty$ then $E(g(x)y|x)=g(x)E(y|x)$.
Proof:
$$E(g(x)y|x)=\int_{-\infty}^\infty g(x)y f(y|x)dy=g(x)\int_{-\infty}^\infty y f(y|x)dy=g(x)E(y|x)$$
Stability Conditional Expectation:
If X is a random variable, then $E(f(X)∣X)=f(X)$
Proof:
If $f(X)$ is $σ(X)$-measurable, then it fulfills the three properties of the definition of conditional expectation, and by uniqueness the equality holds almost surely.
My question is:
What is the relationship between these two proofs? Which is more rigorous? Which is more appropriate to the context of the problem I am dealing with?
AI: The second one is the right one. It works in general. The first one assumes existence of densities etc. Avoid using density functions unless you are told that the random variables have absolutely continuous distributions.
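For reference, the three defining properties alluded to in the second proof, for a candidate $Z$ for $E(W\mid X)$, are: (i) $Z$ is $\sigma(X)$-measurable; (ii) $E|Z|<\infty$; (iii) $E[Z\mathbf{1}_A]=E[W\mathbf{1}_A]$ for every $A\in\sigma(X)$. Taking $W=f(X)$ and $Z=f(X)$ (assuming $E|f(X)|<\infty$), properties (i) and (iii) hold trivially, and almost-sure uniqueness gives $E(f(X)\mid X)=f(X)$.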
|
H: $q-p$ is a projection when $pq = p$
Consider the following theorem in the book "$C^*$-algebras and operator theory" written by Murphy.
Questions: (1) Is the theorem talking about orthogonal projections (as defined in the text above), or simply projections?
(2) How to prove $(2) \implies (6)?$ I assume that the theorem talks about orthogonal projections, and then I have difficulties. I can prove that $(q-p)^2 = q-p$ but I do not succeed in proving that $q-p$ is orthogonal (when $p,q$ are).
AI: The theorem refers to orthogonal projections. In Operator Algebras it is standard to reserve "projection" for orthogonal projection, and "idempotent" when one wants to talk in general.
The key result needed is that an idempotent is orthogonal if and only if it is selfadjoint. This is trivial: if $p$ is an orthogonal projection, then we have $px\perp (1-p)x$ for all $x$; this means that, for all $x,y$,
$$
0=\langle px,(1-p)y\rangle=\langle (1-p)^*px,y\rangle.
$$
Thus $0=(1-p)^*p=(1-p^*)p=p-p^*p$. Then $p=p^*p$ is selfadjoint. Conversely, if $p$ is a selfadjoint idempotent, then $(1-p)p=0$ by the above computation, and thus $px\perp (1-p)y$ for all $x,y$ and $p$ is orthogonal.
So, if $p,q$ are orthogonal projections with $pq=q$, then $p-q$ is a selfadjoint idempotent and thus an orthogonal projection.
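Spelled out: from $pq=q$ we get $qp=(pq)^*=q^*=q$, hence
$$(p-q)^2=p^2-pq-qp+q^2=p-q-q+q=p-q,\qquad(p-q)^*=p^*-q^*=p-q,$$
so $p-q$ is a selfadjoint idempotent and therefore an orthogonal projection.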
|
H: Question on Hölder's inequality when it's equal
I'm reading this proof where they are trying to show that Hölder's inequality becomes an equality iff $|f|^p$ and $|g|^q$ are multiples of each other.
I'm stuck in this step:
$$\int\vert fg \vert\leq \Vert f \Vert_p \Vert g \Vert_q \int\left( \frac{\vert f \vert^p}{p\Vert f \Vert_p^p} + \frac{\vert g \vert^q}{q\Vert g \Vert_q^q}\right)=\Vert f \Vert_p \Vert g \Vert_q.$$
From here, we see that equality in Hölder's inequality holds iff
$$\frac{\vert fg \vert}{\Vert f \Vert_p \Vert g \Vert_q}=\frac{\vert f \vert^p}{p\Vert f \Vert_p^p} + \frac{\vert g \vert^q}{q\Vert g \Vert_q^q}, \text{ a.e.}$$
I'm not sure how they got the forward direction. Suppose Hölder's inequality holds with equality, so
$$\int\vert fg \vert = \Vert f \Vert_p \Vert g \Vert_q \int\left( \frac{\vert f \vert^p}{p\Vert f \Vert_p^p} + \frac{\vert g \vert^q}{q\Vert g \Vert_q^q}\right)$$
Rearranging gives me
$$\int \frac{\vert fg\vert}{\Vert f \Vert_p \Vert g \Vert_q }= \int\left( \frac{\vert f \vert^p}{p\Vert f \Vert_p^p} + \frac{\vert g \vert^q}{q\Vert g \Vert_q^q}\right)$$
This only tells me that the integrals are equal, but how does that tell me the integrands are equal almost everywhere? I don't see how the equation above tells me $\frac{\vert fg \vert}{\Vert f \Vert_p \Vert g \Vert_q}=\frac{\vert f \vert^p}{p\Vert f \Vert_p^p} + \frac{\vert g \vert^q}{q\Vert g \Vert_q^q}, \text{ a.e.}$
AI: This is actually explained in the comments. If the integral of a nonnegative function is zero, then the function must be zero almost everywhere. To see why the relevant integrand is nonnegative, note that by Young's inequality,
$$ \frac{\vert fg\vert}{\Vert f \Vert_p \Vert g \Vert_q } = \frac{|f|}{\Vert f \Vert_p}\cdot \frac{|g|}{\Vert g \Vert_q} \leq \frac{1}{p} \left(\frac{|f|}{\Vert f \Vert_p} \right)^p + \frac{1}{q} \left(\frac{|g|}{\Vert g \Vert_q} \right)^q = \frac{\vert f \vert^p}{p\Vert f \Vert_p^p} + \frac{\vert g \vert^q}{q\Vert g \Vert_q^q}. $$
Therefore the difference of the two sides is a nonnegative function with zero integral, and hence it is zero almost everywhere.
|
H: Find formula for linear transformation given matrix and bases
Let T: $\mathbb P_2\to \mathbb R^3$ be the linear transformation with matrix $[T]_{B,A}=\begin{bmatrix}1&1&-1\cr 0&-1&-1\cr -1&0&1 \end{bmatrix}$ relative to the bases $A = \{1,\,2-3x,\,1+x^2\}$ and $B = \{(1,1,1),(1,1,0),(1,0,0)\}$. Find the formula for the linear transformation $T$.
I don't know what the [T] is supposed to stand for and how to use that information. I'm pretty confident that the linear transformation is supposed to map from B to A and we just need to find the formula to do so.
I know that a similar question has been asked on Finding Linear Transformation with bases and matrix, but it didn't explain the steps at all.
AI: $[T]_{B,A}$ means that if you multiply the matrix by the vector of coefficients relative to $A$, you get the vector of coefficients relative to $B$. Reciprocally, notice that applying the linear transformation to an element of the basis $A$ gives you some vector in the span of the basis $B$; the column vectors of the matrix $[T]$ carry the information of how to write these images as linear combinations of the basis $B$.
For example, $1=(1,0,0)$ in $A$ and if you multiply by $[T]$ you get $(1,0,-1)$ which corresponds to $(1,1,1)+0\cdot (1,1,0)-(1,0,0)=(0,1,1)$ and so $T(1)=(0,1,1).$ Similarly $T(2-3x)=(0,0,1).$
Can you get $T(1+x^2)$?
|
H: Number of chains with size $k$ in a symmetric chain partition of subsets of $\{1,2,…,n\}$.
In a partition of the subsets of $\{1,2,…,n\}$ into symmetric chains, how many chains have exactly $k$ subsets in them?
For a chain $A_1\subseteq A_2\subseteq ...A_l$ in the partition, we have $|A_1|+|A_l|=n$ and $|A_i|+1=|A_{i+1}|$.
I tried to solve it using a recursive rule like $(n,k)=(n-1,k-1)+(n-1,k+1)$, and got $(n,k)=\sum_{i=0}^{n-1}{n-1 \choose i}(1,k-(n-1)+2i)$. However, my solution gets a wrong result in some cases, since it doesn't account for the $(1,2)$'s under $(x,0)$ in the binary tree.
I've got a solution from classmates (as shown in the picture), but we didn't understand where $2|A_1|+\ell-1=n$ comes from (thanks to phichar, now we know).
But I'm not sure how we can make sure the subsets of size $\lfloor n/2\rfloor$ and $\lceil n/2\rceil$ always appear together (ohh, because of the symmetry; I'm an idiot).
Any idea about the problem? Thanks!
AI: Notice that $A_1$ is contained in $A_{\ell}$, and every step that you take (there are $\ell-1$ of them) adds a new element; hence $|A_{\ell}|=|A_1|+\ell -1$. Adding $|A_1|$ to both sides, you get $$|A_1|+|A_{\ell}|=2|A_1|+\ell-1,$$ and combined with the symmetry condition $|A_1|+|A_{\ell}|=n$ this is exactly $2|A_1|+\ell-1=n$.
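(In case it helps with the original counting question: a chain with exactly $\ell$ sets therefore starts at a set of size $k=\frac{n-\ell+1}{2}$, which requires $\ell\equiv n+1\pmod2$, and in a symmetric chain decomposition the number of chains starting at level $k$ is $\binom{n}{k}-\binom{n}{k-1}$.)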
|
H: Is the Random Variable's Expectation the optimal Solution for the Mean Squared Error?
Let's assume we compute the Mean Squared Error between an estimated, but fixed, variable $\hat{x}$ and a dataset $\{x_1, x_2,...,x_N\}$, sampled from a non-Gaussian distributed random variable $\mathcal{X}$:
$$
\frac{1}{N} \sum^N_{i=1} (\hat{x} - x_i)^2.
$$
My intuition is that the variable $\hat{x}$ that minimizes this error is the expectation of the distribution of $\mathcal{X}$. Is this correct? And if yes, how to prove it?
AI: Almost. For a fixed finite sample, the exact minimizer is the empirical mean $\bar{x} := \frac{1}{N}\sum_{i=1}^N x_i$; if the samples have the same law as a random variable $X$, then by the law of large numbers $\bar{x} \to \mathbb{E}[X]$ as $N \to \infty$, so your intuition is right in the limit.
Let $a\in\mathbb{R}$ be any number. Then
$$
\sum_{i=1}^N (a - x_i)^2 = \sum_{i=1}^N \left[(a - \bar{x})^2 + 2(a - \bar{x})(\bar{x} - x_i) + (\bar{x} - x_i)^2\right].
$$
Now, the first term in the sum is nonnegative, and the middle term vanishes, because
$$
\sum_{i=1}^N (\bar{x} - x_i) = N\bar{x} - \sum_{i=1}^N x_i = 0.
$$
The last term is just $N$ times the mean squared error of the empirical mean. All in all, we get
$$
\dfrac{1}{N} \sum_{i=1}^N (a - x_i)^2 \geq \dfrac{1}{N} \sum_{i=1}^N (\bar{x} - x_i)^2
$$
with equality if and only if $a = \bar{x}$.
|
H: Are Semiregularity and Complete Hausdorff Properties preserved by Products
By preserved by products I mean - $\prod X_{\alpha}$ has property $P$ iff $X_{\alpha}$ has property $P$ for all $\alpha$ in index set
Also, $X$ is Completely Hausdorff if for $x\neq y$ in $X$, $\exists$ continuous function $f:X\to I$ with $f(x) = 0$, $f(y) = 1$.
Are Semiregularity and Completely Hausdorff preserved by products? If not, then is any direction of the iff statement above true?
AI: Suppose that $X_\alpha$ is completely Hausdorff for each $\alpha\in A$, and let $x,y\in X=\prod_{\alpha\in A}X_\alpha$ be distinct points; there is an $\alpha\in A$ such that $x_{\alpha}\ne y_{\alpha}$, and there is a continuous $f_\alpha:X_\alpha\to[0,1]$ such that $f_\alpha(x_\alpha)=0$ and $f_\alpha(y_\alpha)=1$. Now define
$$f:X\to[0,1]:z\mapsto f_\alpha(z_\alpha)\;;$$
if $\pi_\alpha:X\to X_\alpha$ is the projection map, $f=f_\alpha\circ\pi_\alpha$. Clearly $f$ is continuous, $f(x)=0$, and $f(y)=1$. Thus, $X$ is completely Hausdorff.
Conversely, if $X$ is completely Hausdorff and non-empty, then each $X_\alpha$ is completely Hausdorff: complete Hausdorffness is evidently hereditary, and if we fix $x\in X$, the subset
$$\big\{y\in X:y_\beta=x_\beta\text{ for all }\beta\in A\setminus\{\alpha\}\big\}$$
of $X$ is homeomorphic to $X_\alpha$.
Now suppose that each $X_\alpha$ is semiregular, and let $\mathscr{B}_\alpha$ be a base of regular open sets for $X_\alpha$. Then $X$ has a base $\mathscr{B}$ whose elements are the sets, $\prod_{\alpha\in A}U_\alpha$ such that $U_\alpha=X_\alpha$ for all but finitely many $\alpha\in A$, and $U_\alpha\in\mathscr{B}_\alpha$ whenever $U_\alpha\ne X_\alpha$. Let $B=\prod_{\alpha\in A}U_\alpha\in\mathscr{B}$, and let $F=\{\alpha\in A:U_\alpha\ne X_\alpha\}$. It’s easy to check that the sets $\pi_\alpha^{-1}[U_\alpha]$ for $\alpha\in F$ are regular open in $X$; $B=\bigcap_{\alpha\in F}\pi_\alpha^{-1}[U_\alpha]$, and the intersection of finitely many regular open sets is regular open, so $B$ is regular open, and $X$ is semiregular.
I am not at the moment sure about the other direction, since semiregularity is not hereditary.
|
H: Is $K_a$ a subspace of the vector space $H=\text{Hom}(V,V)$?
Let $\text{Hom}(V,V) = H$ be the set of linear transformations from $V$ to $V$. For $a \in V$, let $K_a \subset H$ be the set of all $T \in H$ with $T(a)=0.$ Is $K_a$ a subspace? Does there exist an $a$ such that $K_a=H$? Let $a_1, a_2, …, a_n$ be a basis $B$ for $V$. What is $\cap_{a_i\in B}K_{a_i}$?
Some thoughts that I have:
I think that $K_a$ is a set of transformations where it transforms all $a$'s into 0's. So $K_a$ is a subspace - I would be able to prove that it is additive and homogeneous. (I am not sure how exactly, however.) I do not think there exists an $a$ where $K_a = H$ because $H$ would never be a subset of $K_a$ for any $a$. Any transformation in $H$ that maps an element to a non-zero element would not exist in $K_a$. Thus, is the intersection also 0? (Also do not know how to prove this.)
Some hints that my professor gave me were what is $K_0$ and whether there are any linear transformations where $T(0)\neq0$.
Thanks!
AI: To prove something is a subspace of H, you must prove the properties of a vector space still hold and that it is a subset of H. By definition, $K_a \subset H$, so if you can show this, a lot of the properties transfer down. The main things left to prove are closure (does addition and scalar multiplication keep you in the subspace) and that zero exists. Keep reading if you want the answer, but everything above this should be enough if you just want hints:
Let $T = 0$ be the linear transformation that maps everything from $V$ to the $0$ in $V$. By definition, $T(a) = 0$, so it is in $K_a$.
Let $T_1, T_2 \in K_a, \alpha_1, \alpha_2 \in F$, where $F$ is whatever scalar field you are working on. Consider $\alpha_1 T_1 + \alpha_2 T_2$:
$(\alpha_1 T_1 + \alpha_2 T_2)(a) = \alpha_1 T_1(a) + \alpha_2 T_2(a) = \alpha_1 (0) + \alpha_2 (0) = 0 \therefore \alpha_1 T_1 + \alpha_2 T_2 \in K_a$
The above works because of linearity. Now, for the second part, if you pick an $a$ such that for every $T \in H$, $T(a) = 0$, then $K_a = H$. The only $a$ that would satisfy this is $0$: every linear map sends $0$ to $0$, so $K_0 = H$, while for $a \neq 0$ you can extend $\{a\}$ to a basis of $V$ and define a linear map with $T(a) \neq 0$.
|
H: Conventional to write the derivative of $|x|$ wrt $x$ as $\frac{x}{|x|}$?
This might be a naive question, but it sometimes confuses me.
It's known that the derivative of $|x|$ is $\frac{x}{|x|}$ (for $x \neq 0$). Is it conventional that the absolute value appears in the denominator, rather than writing $\frac{|x|}{x}$?
AI: Yes, it is simply by convention; for $x \neq 0$ the two expressions are equal, since $|x|^2 = x^2$ gives $\frac{x}{|x|} = \frac{|x|}{x}$. I think the usual form comes from the definition $$|x|=\sqrt{x^2}.$$
Rewriting and differentiating using the chain rule, we have the following:
\begin{align}
\frac{d}{dx}\left(|x|\right)&=\frac{d}{dx}\left(\sqrt{x^2}\right)\\
&=\frac{2x}{2\sqrt{x^2}}\\
&=\frac{x}{\sqrt{x^2}}\\
&=\frac{x}{|x|}.
\end{align}
|
H: For integral domains $R$ and $S$, if $S \simeq R^n$ as $R$-modules, does $\mathrm{Quot}(S) / \mathrm{Quot}(R)$ have degree $n$?
Let $\phi : R \rightarrow S$ be an injective morphism of integral domains. Then, $\phi$ extends to a morphism $\mathrm{Quot}(R) \rightarrow \mathrm{Quot}(S)$, so corresponds to a field extension $\mathrm{Quot}(S) / \mathrm{Quot}(R)$.
Now suppose that as $R$-modules, $S \simeq R^n$ for some positive integer $n$. Does the field extension then have degree $n$?
My idea is that there are elements $e_1, ... , e_n \in S$ that correspond to the standard basis vectors of $R^n$. Then, I expect that $e_1, ... , e_n$ will be a basis of $\mathrm{Quot}(S)$ over $\mathrm{Quot}(R)$.
Consider $S \otimes_R \mathrm{Quot}(R) = R^n \otimes_R \mathrm{Quot}(R) = \mathrm{Quot}(R)^n$ as $R$-modules. Multiplication gives a morphism $S \otimes_R \mathrm{Quot}(R) \rightarrow \mathrm{Quot}(S)$. However, I can't figure out if this is an isomorphism, and I'm suspecting that it's not, so maybe the answer is no.
AI: Surprisingly, the answer is yes. Set $U = R \setminus \{0\}$. Then there is a natural isomorphism of $R$-modules $S \otimes_{R} \mathrm{Frac}(R)
\to U^{-1}S$ given by multiplication, and up to this identification, the multiplication map $S \otimes_{R} \mathrm{Frac}(R) \to \mathrm{Frac}(S)$ is just the inclusion map $\varphi \colon U^{-1}S \to \mathrm{Frac}(S)$. This map is clearly $R$-linear, hence $\mathrm{Frac}(R)$-linear, and so it remains to show that $\varphi$ is an isomorphism, as you say. It is injective, so we focus on showing that $\varphi$ is surjective.
This is tantamount to showing that for any $s/t \in \mathrm{Frac}(S)$, there exists some $x \in S, r \in R \setminus \{0\}$ such that $s/t = x/r$, i.e. $sr = tx$. Since the map $R \to S$ is finite, it is integral, so let $p(X) \in R[X]$ be a monic polynomial such that $p(t) = 0$. Note that we may assume that $p(X)$ has nonzero constant term; if it didn't, then we would have $p(X) = X^{k}q(X)$ for some $k \in \mathbb{N}, q(X) \in R[X]$ monic, and since $S$ is a domain, $p(t) = t^{k}q(t) = 0$ forces $q(t) = 0$.
Write $p(X) = Xq(X) - r$ for some nonzero $r \in R$. Then $tq(t) - r = 0$, so $tq(t) = r$. But then $tq(t)s = sr$, so take $x = q(t)s$, which completes the proof.
|
H: Family of pairwise disjoint dense sets in the complement of a meager set
Let $M$ be a meager subset of $\mathbb R$. I want to construct the following family in $\mathbb R\setminus M $
$$F:= \{A_{r}^\xi\colon r\in\mathbb R \ \&\ \xi<c\}$$ such that all elements of $F$ are pairwise disjoint dense sets and $\bigcup F$ does not contain any perfect set. I want to try to construct it by transfinite induction. My first challenge is how to do the transfinite induction over both $r$ and $\xi$. If they were two ordinal numbers, as in $A_{\xi}^{\eta}$, then we could define a well-ordering $\leq$ on $c\times c$ ($c$ is the cardinality of $\mathbb R$) by putting $\langle\alpha,\beta\rangle\leq\langle\gamma,\delta\rangle$ if and only if $$\max\{\alpha,\beta\}<\max\{\gamma,\delta\},$$ or $$\max\{\alpha,\beta\}=\max\{\gamma,\delta\}\ \text{and}\ \alpha<\gamma,$$ or $$\max\{\alpha,\beta\}=\max\{\gamma,\delta\},\ \alpha=\gamma,\ \text{and}\ \beta\leq\delta.$$ But in my case I have $r\in\mathbb R$. What should I do? My thought process for the proof is to enumerate all open intervals and all perfect sets, of which there are continuum many, and to construct the sets by picking one element from each open interval while avoiding the elements of the perfect sets at the same time. But the challenge is that when I reach the inductive step I have no control over how to keep the sets pairwise disjoint, since I am doing this proof in ZFC. I like transfinite induction but sometimes it is challenging for me. Any hint or idea how this can be done would be useful. Thank you in advance
AI: Significantly revised to correct an oversight and for greater clarity.
Let $\mathscr{B}$ be a base for $\Bbb R\setminus M$, let $\mathscr{C}$ be the family of uncountable closed subsets of $\Bbb R\setminus M$, and let $\mathscr{S}=\mathscr{B}\cup\mathscr{C}$; note that $\mathscr{C}$ contains all of the perfect subsets of $\Bbb R\setminus M$. Let $T=\Bbb R\times\mathscr{S}\times\mathfrak{c}$; $|T|=\mathfrak{c}$, so we can index $T$ as $T=\{\langle r_\xi,S_\xi,\alpha_\xi\rangle:\xi<\mathfrak{c}\}$. Now you can construct your sets $A_r^\alpha$ by straightforward transfinite recursion on $\xi<\mathfrak{c}$.
(Note: Transfinite induction is a proof technique: what you’re doing here is transfinite recursion.)
Suppose that $\eta<\mathfrak{c}$, and you’ve already chosen points $x_\xi,y_\xi\in S_\xi$ for $\xi<\eta$ so that all of these points are distinct. You’ve chosen fewer than $\mathfrak{c}$ points, and $|S_\eta|=\mathfrak{c}$, so you can choose distinct
$$x_\eta,y_\eta\in S_\eta\setminus\big(\{x_\xi:\xi<\eta\}\cup\{y_\xi:\xi<\eta\}\big)$$
and continue the recursion. (You do need to know here that every member of $\mathscr{S}$ has cardinality $\mathfrak{c}$.)
Now for each $\langle r,\alpha\rangle\in\Bbb R\times\mathfrak{c}$ let $A_r^\alpha=\{x_\xi:r_\xi=r\text{ and }\alpha_\xi=\alpha\}$.
For each $B\in\mathscr{B}$ there is a $\xi<\mathfrak{c}$ such that $\langle r_\xi,S_\xi,\alpha_\xi\rangle=\langle r,B,\alpha\rangle$, and $x_\xi\in A_r^\alpha\cap B$, so $A_r^\alpha$ is dense in $\Bbb R\setminus M$.
For each $C\in\mathscr{C}$ there is a $\xi<\mathfrak{c}$ such that $\langle r_\xi,S_\xi,\alpha_\xi\rangle=\langle r,C,\alpha\rangle$; by construction $x_\xi\in A_r^\alpha\cap C$ and $y_\xi\in C\setminus A_r^\alpha$, so $A_r^\alpha$ is a Bernstein set and contains no perfect set.
The points $x_\xi$ for $\xi<\mathfrak{c}$ are all distinct, so $\{A_r^\alpha:\langle r,\alpha\rangle\in\Bbb R\times\mathfrak{c}\}$ is a pairwise disjoint family.
|
H: Showing a Result of the Logistic Equation
The logistic equation is given by $$\frac{d N}{dt}=rN\left(1-\frac{N}{K}\right)\tag{1},$$ where $K$ is the carrying capacity and $r$ is the intrinsic growth rate. I am trying to show for the logistic equation that $$r=-\frac{d}{dt}\left(\ln\left(\frac{K-N}{N}\right)\right), \tag{2}$$ so that the closeness of the population size to the carrying capacity determines its rate of approach to $K$. I solved equation $(1)$ in an attempt to show the required result, as given below.
\begin{align}
\frac{dN}{dt}&=rN\left(1-\frac{N}{K}\right) \\
\frac{dN}{N(K-N)}&=\frac{r}{K} \ dt \\
\int\left(\frac{1/K}{N}+\frac{1/K}{K-N}\right) \ dN&=\int \frac{r}{K} \ dt \\
\frac{1}{K}\left(\ln(N)-\ln(K-N)\right)&=\frac{r}{K}t+C, \ \ C\in\mathbb{R} \\
\ln\left(\frac{K-N}{N}\right)&=-rt-KC.
\end{align}
I am unsure how to proceed from this point.
AI: You're really close to finishing off.
Differentiating both sides gives us:
$$\frac{\mathrm{d}}{\mathrm{d}t}\ln\left(\frac{K-N}{N}\right)=-r$$
|
H: Question about function of bounded variation's derivative
$\text { Let } u_{n}(x)=\frac{1}{n^{2}} e^{-n^{2} x^{2}}, x \in \mathbb{R}$
(i) Calculate Var $u_{n}$
(ii) Let $u(x):=\sum_{n=1}^{\infty} u_{n}(x), x \in \mathbb{R}.$ Prove that $u \in BPV(\mathbb{R})$
(iii) Prove that $\sum_{n=1}^{\infty} u_{n}^{\prime}(x)$ does not converge uniformly in $[-1,1]$
(iv) Find a formula for $u^{\prime}$.
Well, I have solved (i) and (ii).
For (iv): since $\sum_{n=1}^{\infty} u_{n}^{\prime}(x)$ does not converge uniformly on $[-1,1]$, we can't use the term-by-term differentiation theorem. On the other hand, we observe that $u(x)$ is monotone on $(-\infty,0]$ and $[0,+\infty)$, so we can use Fubini's term-by-term differentiation theorem (see https://en.wikipedia.org/wiki/Fubini%27s_theorem_on_differentiation) on $[-n,1-n]$ and $[n-1,n]$; then we obtain that $u^{\prime}=\sum_{n=1}^{\infty} u_{n}^{\prime}(x)$ a.e. in $\mathbb{R}$. But we cannot find a formula for $u^{\prime}$ on the whole real line in this way.
Can someone help me solve (iii) and (iv)? Thanks in advance.
AI: (iii) Note that for $x \in [0,1]$,
$$\left|\sum_{k=n+1}^\infty u_k'(x) \right| \geqslant \sum_{k=n+1}^{2n} 2xe^{-k^2x^2}\geqslant n \cdot 2xe^{-4n^2x^2},$$
and, since $\frac{1}{n} \in [0,1]$,
$$\sup_{x \in [0,1]}\left|\sum_{k=n+1}^\infty u_k'(x) \right| \geqslant n \cdot 2 \cdot \frac{1}{n}e^{-4n^2(1/n)^2}=2e^{-4}$$
Since the RHS does not converge to $0$ as $n \to \infty$, the Cauchy criterion is violated, and the series fails to converge uniformly on $[0,1]$ and, therefore, on $[-1,1]$.
|
H: Linear Algebra problem involving $v^{\perp}$
Let $v,w$ be non-zero vectors in $\mathbb R^3$. Assume that the set of vectors orthogonal to them is a plane through the origin. Then each of them is a scalar multiple of the other.
As usual, set $v,w=(\alpha_1,\beta_1,\gamma_1),(\alpha_2,\beta_2,\gamma_2)$ respectively. Then, $v^{\perp}$ is a plane through the origin with normal$=v$ and $w^{\perp}$ is also a plane through the origin with normal $=w$. We know that they are the same plane, and hence the normals must be multiples of each other, hence $(\alpha_1,\beta_1,\gamma_1)=\lambda(\alpha_2,\beta_2,\gamma_2)$ and hence, $v=\lambda w$.
Is it fine? Or some more details?
Thanks.
AI: Suppose that $v$ and $w$ are not multiples of each other. Then they are linearly independent and hence span a plane. The set of vectors orthogonal to this plane in $\mathbb{R}^3$ is a line, which contradicts the assumption. Hence $v$ and $w$ are multiples of each other.
|
H: What is the maximum integral value of $\frac{\beta-\alpha}{\tan^{-1}\beta-\tan^{-1}\alpha}$
If $0<\alpha<\beta<\sqrt3$ and if
$$k=\frac{\beta-\alpha}{\tan^{-1}\beta-\tan^{-1}\alpha}$$
then find maximum value of $\lfloor k\rfloor$.
My Attempt:
Let $\beta=\tan B$ and $\alpha=\tan A$
$$k=\frac{\beta-\alpha}{\tan^{-1}\beta-\tan^{-1}\alpha}=\frac{\tan B-\tan A}{B-A}\frac{1}{\cos A\cos B}$$
$$\Longrightarrow k=\frac{\sin(A-B)}{A-B}\frac{1}{\cos A\cos B}<1\frac{1}{\cos^2 B}=\sec^2B=1+\tan^2B<1+3$$
So I get my answer as $3$.
But is it correct? Is there any way we can narrow down the values which $k$ may take?
AI: Let $\beta=\tan(b)$ and $\alpha=\tan(a)$, i.e. $b=\arctan\beta$ and $a=\arctan\alpha$. Then
$$
k = \frac{\tan(b)-\tan(a)}{b-a}
$$Geometrically, this is the slope of the secant line of $y=\tan(x)$ on $[a,b]$. Tangent is convex on $[0,\pi/3]$ (note $\arctan(\sqrt{3})=\pi/3$), so this slope is less than its derivative at $\pi/3$, which is $\sec^2(\pi/3)=4$. Since we have strict inequality, we conclude the maximum $\lfloor k\rfloor $ can be is $3$; it is attained, for example, at $\alpha=1.7,\ \beta=1.72$, where $k\approx 3.92$.
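A quick numerical check (the sample values of $\alpha,\beta$ are only illustrative):

    import math

    def k(alpha, beta):
        return (beta - alpha) / (math.atan(beta) - math.atan(alpha))

    print(k(1.0, 1.01))   # ~2.01, so floor(k) = 2 here
    print(k(1.7, 1.72))   # ~3.92, so floor(k) = 3 is attained near sqrt(3)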
|
H: Find $\lim_{x\to 0} \frac{\sqrt{ax+b}-1}{x}=1$
My answer
Let $\sqrt{ax+b}=y$
Then
$$\lim_{y\to \sqrt b} \frac{(y-1)a}{y^2-b}$$
Let $b=1$
Then $$\lim _{y\to 1} \frac{a}{\frac{y^2-1}{y-1}}$$
$$=\frac a2 =1$$
$$a=2$$
The answer is correct, but this relies on assuming $b=1$, and that doesn’t seem appropriate. What is the correct answer for this?
AI: $$\frac{\sqrt{ax+b}-1}{x}=\frac{ax+b-1}{x(\sqrt{ax+b}+1)}.$$
Now, we see that we need $b=1$ (otherwise, the limit does not exist) and $\frac{a}{\sqrt{b}+1}=1,$
which gives also $a=2$.
If $b\neq1$, then for $x\rightarrow0$ we obtain:
$$\frac{\sqrt{ax+b}-1}{x}=\frac{ax+b-1}{x(\sqrt{ax+b}+1)}=\frac{a}{\sqrt{ax+b}+1}+\frac{b-1}{x(\sqrt{ax+b}+1)}.$$
We see that for $b>0$ $$\frac{a}{\sqrt{ax+b}+1}\rightarrow\frac{a}{\sqrt{b}+1},$$ but
$$
\lim_{x\rightarrow0}\frac{b-1}{x(\sqrt{ax+b}+1)}$$ does not exist.
|
H: When trying to find the number of unique pairs in $n$ elements why does the combination formula output a different value from $n(n-1)/2$
I was recently doing a problem on LeetCode that involved counting the number of pairs of dominoes that could be formed from a set. And I have a question about the math behind a certain section of it. So essentially if you have $N$ number of elements and need to calculate how many pairs you can form from that $N$ number of elements it seems to be the right answer to use:
$$
\frac{n(n-1)}{2}
$$
Rather than using the combination formula:
$$
\frac{n!}{r!(n-r)!}
$$
Where $r$ would equal $2$. They seemed to have different outputs, although it seemed to be correct at lower input sizes. However, once it got larger, they seemed to have different answers. Would anyone care to explain when and why to use these in certain scenarios? It seems to me they should be equivalent when setting $r = 2$, but apparently not.
Edit:
Thanks for the help everybody! It turned out to be an integer overflow problem from the factorial elements of the calculation. I'm a bit rusty on my proofs, and I appreciate everybody's explanations of how the two are equivalent, as that totally makes sense!
My original function did this:
    private static int NumOfCombinations(int n, int r)
    {
        // Computes n! / (r! (n-r)!) directly; n! overflows a 32-bit int for n >= 13,
        // which caused the discrepancy at larger inputs.
        int nFac = Factorial(n);
        int rFac = Factorial(r);
        int diffFac = Factorial(n - r);
        return nFac / (diffFac * rFac);
    }
While my new function does this (much more efficient too):
    private static int Combination(int n, int r)
    {
        int rFac = Factorial(r);
        // Multiply only the top r factors n * (n-1) * ... * (n-r+1)
        // instead of the full n!, keeping intermediate values much smaller.
        int numerator = 1;
        int nFacs = n;
        while (nFacs > n - r)
        {
            numerator *= nFacs;
            nFacs -= 1;
        }
        return numerator / rFac;
    }
AI: Both of these are actually equivalent. One definition often given in earlier classes is that
$$\binom n r = \frac{n!}{r!(n-r)!}$$
However, an alternative definition also arises that is at least easier to grasp for mental arithmetic: on the top, you calculate the factorial as normal ($n! = n(n-1)(n-2)\cdots$ and so on) until you have $r$ factors up there. And then you just divide that by $r!$. That is,
$$\binom n r = \frac{n(n-1)(n-2)\cdots(n-r+1)}{r!}$$
These are clearly equivalent computations: expand both $n!$ and $(n-r)!$ as products in the original definition to see the cancellations. Of course, this also means that
$$\binom n 2 = \frac{n!}{2!(n-2)!} = \frac{n(n-1)}{2!}$$
(Note that $2!=2$.)
Whatever difference between these two formulas you've been noticing has been a calculation error. Sorry to inform you.
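Since the question's code overflowed 32-bit integers, here is the same sanity check in Python, whose integers have arbitrary precision (the test values are arbitrary):

    import math

    for n in (10, 100, 100_000):
        assert math.comb(n, 2) == n * (n - 1) // 2  # the two formulas agree exactly
    print("ok")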
|
H: Law of cosines, Ptolemy's, angle chasing on an isosceles triangle inscribed in a circle
From HMMT:
Triangle $\triangle PQR$, with $PQ=PR=5$ and $QR=6$, is inscribed in circle $\omega$. Compute the radius of the circle with center on $QR$ which is tangent to both $\omega$ and $PQ$.
I haven't made much progress. I've set $QS=x$ and $SR=y$ to try for the Law of Cosines, since we know $\cos\angle QSR$, but that really hasn't led anywhere. With Ptolemy's, I've found that $PS=\displaystyle\frac{5(x+y)}{6}$, but unfortunately $PS$ isn't collinear with anything useful (like the line connecting the centers). I also haven't really been able to use the tangent properties.
Hints beyond what I've done or any useful insights would be greatly appreciated!
AI: Given that $|PQ|=|PR|=5,\ |QR|=6$, the area, the height and the circumradius of $\triangle PQR$ are $S=12$, $|PF|=4$ and $R_0=\tfrac{25}8$, respectively; here $F$ denotes the midpoint of $QR$ and $O$ the circumcenter.
Let $\angle PQR=\alpha$ and $\angle FOE=\phi$, where $E$ is the tangency point of the sought circle with $\omega$.
Assuming that the center $O_t$ of the circle lies on $QR$, we must have $|DO_t|=|EO_t|=r$, where $D$ is the tangency point on $PQ$.
\begin{align}
\sin\alpha&=\frac{|PF|}{|PQ|}
=\frac45
,\\
|OF|&=|PF|-R_0=\tfrac78
\tag{1}\label{1}
.
\end{align}
We have two conditions for $r$ and $\phi$: condition \eqref{2} below, which expands into \eqref{3} since $|QO_t|=r/\sin\alpha$ and $|FO_t|=(R_0-r)\sin\phi$, together with condition \eqref{4}:
\begin{align}
|QO_t|+|FO_t|&=\tfrac12\,|QR|
\tag{2}\label{2}
,\\
\frac r{\sin\alpha}
+
(R_0-r)\sin\phi
&=3
\tag{3}\label{3}
,\\
(R_0-r)\cos\phi&=|OF|
\tag{4}\label{4}
.
\end{align}
Excluding $\phi$ from \eqref{3},\eqref{4} via $\sin^2\phi+\cos^2\phi=1$, i.e.
\begin{align}
\Big(3-\tfrac{5r}4\Big)^2+\Big(\tfrac78\Big)^2&=(R_0-r)^2
,
\end{align}
and using the known values, we get
\begin{align}
r&=\frac{20}9
.
\end{align}
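For what it's worth, the eliminated equation can be solved symbolically (a sympy sketch of the equation obtained from (3) and (4) above):

    import sympy as sp

    r = sp.symbols('r', positive=True)
    eq = sp.Eq((3 - sp.Rational(5, 4) * r) ** 2 + sp.Rational(7, 8) ** 2,
               (sp.Rational(25, 8) - r) ** 2)
    print(sp.solve(eq, r))  # [20/9]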
|
H: yes/ No :Is $A \cong B ?$
Let $A$ and $B$ be subspaces of $\mathbb{R}$ given by $A = (0, 1) \cup (3, 4)$ and $B = (0, 1) \cup (1, 2)$. Is $A \cong B ?$
My attempt: yes, because both $A$ and $B$ are disconnected sets, each a disjoint union of two open intervals.
Is it true?
Note :$\cong$ mean homeomorphism
AI: You're right, but you should give a more rigorous argument.
Hint: can you think of an explicit homeomorphism between the two sets? You mention they're both a disjoint union of open intervals. Great! Can you pair up the left and right intervals by explicit functions? Good luck!
I hope this helps ^_^
|
H: Ideals $(X^2+1)$ and $(X^2+1, 7)$ of polynomial ring $\mathbb{Z}[X]$
How can I show that generated ideals $(X^2+1)$ and $(X^2+1, 7)$ of polynomial ring $\mathbb{Z}[X]$ are a prime ideal and a maximal ideal, respectively?
AI: To show $(X^2+1)$ is a prime ideal, imagine you took two polynomials with integer coefficients, $P(X)$ and $Q(X)$, and $P(X)\cdot Q(X) = (X^2+1)R(X)$ for some other polynomial $R(X)$. The polynomial $X^2+1$ is monic of degree $2$ and has no roots in $\mathbb Q$, so it is irreducible in $\mathbb Z[X]$ (and $\mathbb Z[X]$ is a UFD, where irreducible elements are prime)... Can you take it from here?
To show that $\mathbb Z[X]\big/(X^2+1,7)$ is a field (and hence that $(X^2+1,7)$ is maximal), first you can prove that as rings
$$
\frac{\mathbb Z[X]}{(X^2+1,7)}\cong \frac{\big(\mathbb Z\big/(7)\big)[X]}{(X^2+1)},
$$
and that $F=\mathbb Z/(7)$ is a field. As $-1$ is not a square in $F$, $F[X]\big/(X^2+1)\cong F(i)$ is a field extension of degree $2$; in particular $F(i)$ is a field, so $(X^2+1,7)$ is maximal.
|
H: How could be given two ratios are equivalent?
I'm currently learning Maths and got interested in Ratios. Currently, I'm going through equivalent ratio lesson and found that to be magical somehow.
I am impressed that two given ratios can have the same value, but I don't know how it is possible. I know the rule and can say whether they are equivalent or not, but still I don't understand how it works behind the scenes.
Example
Super Salad Dressing is made with 8 mL of oil for every 3 mL of vinegar.
I found that based on rule, 80ml and 30 ml = 8mL and 3mL, if I write it mathematically,
it would be 80:30 == 8:3
If I compare both of them physically, they are not equivalent because 80 Ml larger than 8 Ml and even tho It's amazing that they are equivalent.
AI: If I compare both of them physically, they are not equivalent
If you compare which physically?
because 80 Ml larger than 8 Ml
There's no reason to compare the $80$ ml to the $8$ ml. $80$ ml is $\frac 83$ times bigger than $30$ ml. So they are in a ratio of $8:3$. And $8$ ml is $\frac 83$ times bigger than $3$ ml. So they are in a ratio of $8:3$. And the Pacific Ocean at $704,000,000$ cubic kilometers is $\frac 83$ times bigger than the Indian Ocean at $264,000,000$ cubic kilometers. So they are in a ratio of $8:3$.
A ratio compares the sizes of two different things in proportion to each other. The absolute size doesn't matter.
If you are trying to compare $80$ ml to $8$ ml, they are in a $10:1$ proportion. And that is the same proportion that $30$ ml is to $3$ ml.
If you compare the oil to vinegar there is always $\frac 83$ more oil than vinegar no matter what size your recipe is.
And if you are comparing the two different recipes: the bigger recipe is $10$ times bigger than the smaller recipe. So the bigger recipe will have $10$ times as much oil, $10$ times as much vinegar, $10$ times as many eggs, etc.
and even tho It's amazing that they are equivalent.
Actually it's very dull and mundane and would be very weird if they weren't.
|
H: Degree of a determinant
Let $p$ be a prime number and $n\in\mathbb N$. Consider the determinant
$$M_n = \begin{vmatrix}\frac1{x^{p^{n+1}}-x}&\frac1{x^{p^{n+1}}-x^p}\\ \frac1{x^{p^{n+2}}-x}&\frac1{x^{p^{n+2}}-x^p}\end{vmatrix} \in \mathbb F_p(x)$$
Numerical computations suggest that
$$\deg(M_n)=p-(p+2)p^{n+1}$$
Is it true? Is yes, does anyone have an idea to prove it?
AI: Assuming that by degree you mean degree of the numerator minus the degree of the denominator, this is a trivial calculation. The determinant is $$\frac{1}{(x^{p^{n+1}}-x)(x^{p^{n+2}}-x^p)}-\frac{1}{(x^{p^{n+2}}-x)(x^{p^{n+1}}-x^p)}.$$ Multiplying the denominators of these fractions to combine them gives a denominator of degree $2(p^{n+1}+p^{n+2})$. In the combined numerator, the $x^{p^{n+1}+p^{n+2}}$ terms will cancel, leaving the highest degree term in the numerator as $x^{p^{n+2}+p}$. So the degree of the numerator minus the degree of the denominator is $$(p^{n+2}+p)-2(p^{n+1}+p^{n+2})=p-(p+2)p^{n+1}.$$
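One can corroborate the degree count for small parameters with sympy (here computed over $\Bbb Q$ rather than $\Bbb F_p$, which is harmless since the surviving leading coefficients are $\pm1$; the choice $p=2$, $n=1$ is mine):

    import sympy as sp

    x = sp.symbols('x')
    p, n = 2, 1
    a, b = p ** (n + 1), p ** (n + 2)
    M = sp.together(1 / ((x**a - x) * (x**b - x**p)) - 1 / ((x**b - x) * (x**a - x**p)))
    num, den = sp.fraction(M)
    print(sp.degree(sp.expand(num), x) - sp.degree(sp.expand(den), x))  # -14
    print(p - (p + 2) * p ** (n + 1))                                   # -14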
|
H: Continuity of a Map on a Quotient
I am trying to understand the continuity of a map on a quotient from Loring Tu's An Introduction to Manifolds (Second Edition, page no. 72). It starts as follows.
My Question
What does it mean that $f:S \to Y$ is constant on each equivalence class? In other words, the definition of $f$ is not clear to me.
AI: By the phrase "$f$ is constant on each equivalence class," the author means that whenever $x \sim y$ one has $f(x) = f(y)$.
|
H: $18$ mice were placed in $3$ groups, with all groups equally large. In how many ways can the mice be placed into $3$ groups?
In my textbook, the given answer is $18!/(6!)^3$.
But my teacher's answer is $18!/(3!)(6!)^3$.
He solved like -
Please review the attached answer and let me know which one answer is correct?
Thank in advance!
AI: It depends on whether the groups are distinguishable. For example, if you put $6$ mice each into $3$ different-colored boxes, then your textbook is correct; if you put them into identical boxes, then your teacher is correct.
Take a smaller case, say $4$ mice and $2$ boxes, and think it through. You might want to write down all the possibilities to give you clarity.
The answer clearly mentions that, in the way the textbook has treated it, the order in which the groups are formed matters (i.e. they are distinguishable: you can label them $1,2,3$). Both values are computed below.
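A quick Python evaluation of the two candidate answers:

    from math import factorial

    print(factorial(18) // factorial(6) ** 3)                   # 17153136, labeled groups
    print(factorial(18) // (factorial(6) ** 3 * factorial(3)))  # 2858856, unlabeled groups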
|
H: I want to know why is my idea wrong in theorem
In theorem : Let $~\mathbb{F}~$ be a field and $f(x)$ a nonconstant polynomial of degree $n$ in $\mathbb{F}[x]$. Then there exists a splitting field $\mathbb{K}$ of $f(x)$ over $\mathbb{F}$ s.t. $~[\mathbb{K}:\mathbb{F}] \leqslant n!~$ where $~[\mathbb{K}:\mathbb{F}]~$ is the dimension of $\mathbb{K}$ over $\mathbb{F}$.
$\bullet~$ $\textbf{My question:}~$ Why it is $n!$ not $(n+1)$.
$\bullet~$ $\textbf{Idea:}~$ Let $~\mathbb{K}=\mathbb{F}(u_1,u_2,...,u_n)~$ where $~u_i~$ is a root of $f(x)$ for all $i$. If the set $\{1_\mathbb{F},u_1,u_2,...,u_n\}$ is linearly independent then $\dim(\mathbb K)=n+1$; if not, $\dim(\mathbb K)\leqslant n+1$.
AI: $F(u_1,\cdots,u_n)$ is generated by $1$ and $u_1,\cdots,u_n$ as a field extension of $F$ (in the algebraic case, as an $F$-algebra as well), but not as an $F$-vector space, and here you mean dimension as an $F$-vector space. If the extension is algebraic, then $F(u_1,\cdots,u_n)$ is generated as an $F$-vector space by the monomials $u_1^{\alpha_1}u_2^{\alpha_2}\cdots u_n^{\alpha_n}$, where $(\alpha_1,\cdots,\alpha_n)\in\Bbb N^n$.
|
H: Is $\sup$ and $\inf$ of a set single?
Let's say we have a set and its $\sup$ is $30$; whether the $\max$ exists or not, is it safe to say $30$ is the ONLY $\sup$?
AI: The supremum and the infimum of a set are each unique. This can be proven.
Take a set $S$, with supremum $\alpha$. Suppose the supremum is not unique, and $S$ has a second supremum, $\beta$. From the definition, then, $\alpha, \beta$ are the least upper bounds. That is, in the set $U(S) := \{ x \mid x \ge s \; \forall s \in S \}$ of upper bounds of $S$, $\alpha \le x$ $\forall x \in U(S)$, and the same is true for $\beta$. However, $\alpha,\beta \in U(S)$ as well (after all, the "least" upper bound is still an upper bound). Thus, in particular, $\alpha \le \beta$ and $\beta \le \alpha$ (they have to be less than or equal to all of the other upper bounds, including each other). However, this implies $\alpha = \beta$, which shows that the supremum is actually unique.
You can perform a similar argument to show uniqueness of the infimum.
|
H: Must an epsilon neighbourhood be small?
Is there any requirement or restriction that an epsilon neighbourhood $V_{\epsilon}(a)$ be small? Could I use $\epsilon = 1$ if my needs demand it?
AI: As long as $\epsilon > 0$, it can be whatever you need it to be. It doesn't mean that your proof of a given theorem will work when $\epsilon = 1$. I can't guarantee that, of course. But there really is nothing criminal about choosing $\epsilon = 1$.
It's really the same logic as with functions defined on $\mathbb{R}$. You can choose whatever $\epsilon > 0$ that you desire. When doing proofs, you do tend to keep it arbitrary but fixed though.
|
H: What does it mean for an expression to be "finite but not infinitesimal"?
Also according to the definition of a positive infinitesimal:
a hyperreal number b is positive infinitesimal if b is positive but less than every positive real number.
So how are real numbers other than 0 able to have infinitesimals around them on the hyperreal line?
AI: Well, I suppose that if $a\in\mathbb{R}$ and $\eta$ is a positive infinitesimal then the hyperreal number $a+\eta$ is bigger than $a$ but smaller than any real bigger than $a$.
Yet $a+\eta$ is neither real nor infinitesimal.
The situation is analogous to that of complex numbers: any complex number is a sum of a real number and an imaginary number, but (most) complex numbers are neither real, nor imaginary.
|
H: What is the definition of value?
Imagine a number line. Each number has a value, but how is value defined? For rational numbers this is self evident, but what about real numbers? For example, we know that $\pi>e$, because $\pi>3$ while $e<3$. So here we have a rational number between them. This however doesn't tell us the value of either $\pi$ or $e$. Do we have to use asymptotically approaching rational numbers to define the value of a real number through a procedure with an infinite number of steps? Or is there a different method?
AI: Yes, as you say, in general, we have to use asymptotically approaching rational numbers to define the value of a real number through a procedure with an infinite number of steps. Of course, because rational numbers, such as $3$, are also real numbers, sometimes a real number can be defined finitely.
|
H: Proof for Cauchy's Definition of Centre of Curvature.
Cauchy defined the center of curvature as the intersection point of normals drawn to two infinitely close points on a curve. Is there any way to prove this?
I am unable to get a starting point for this.
AI: I'll assume you are dealing with a biregular plane curve $\gamma=(x,y)\colon(-\epsilon,\epsilon)\to\mathbb{R}^2$ with unit speed. The normal line at $\gamma(0)=(x_0,y_0)$ is (self-explanatory notation)
$$
(\dot x_0,\dot y_0)\cdot(x,y)=x_0\dot x_0+y_0\dot y_0
$$
and normal line at $\gamma(h)$ is
$$
(\dot x_h,\dot y_h)\cdot(x,y)=x_h\dot x_h+y_h\dot y_h
$$
so their intersection is
$$
(x,y)=(x_0,y_0)+(-\dot y_0,\dot x_0)\lambda
$$
where
$$
\lambda=\frac{(x_h-x_0)\dot x_h+(y_h-y_0)\dot y_h}{(-\dot y_0,\dot x_0)\cdot(\dot x_h,\dot y_h)}.
$$
But the numerator is $h(\dot x_0^2+\dot y_0^2)+o(h)=h+o(h)$ and the denominator is $h(\dot x_0\ddot y_0-\ddot x_0\dot y_0)+o(h)$, so $\lambda$ agrees with the (signed) radius of curvature in the limit and you recover the formula for the center of curvature.
|
H: How I can write a negligible compact set?
My problem:
Suppose $E$ is a negligible compact set, i.e. $E \subset \mathbb{R}$ is compact and $|E|=0$ where $|.|$ denote the Lebesgue measure. Can I write $E=\bigcap_{k \in \mathbb{N}} U_k$ where $U_k$ is open and $|U_k| \leq 2^{-k}$?
My attempt:
I tried taking, for every $n \in \mathbb{N}$ and every $x \in E$, the ball $B(x,\frac 1 n )$, using the compactness and then taking the intersection of the finite unions of these balls. But I am not sure if this works.
AI: The answer is YES. There exist open sets $V_k$ such that $E \subseteq V_k$ and $|V_k| <\frac 1 {2^{k}}$. Also, any closed set is a countable intersection of open sets. Let $E =\cap_k W_k$ with $W_k$ open. Take $U_k =V_k \cap W_k$.
[We can take $W_k=\{x: d(x,E) <\frac 1 k\}$].
|
H: Is it possible to show, $S_4$ has no subgroup isomorphic to $Q_8$ using homomorphisms?
Question: Prove that $S_4$ has no subgroup isomorphic to $Q_8$?
My attempt: If $Q_8$ is isomorphic to a subgroup of $S_4$ then there exists a homomorphism $f: S_4\rightarrow Q_8$ such that,
$\frac{S_4}{\ker f}≈Q_8$ (Am I correct?)
this implies $|\ker f|$ must be equal to $3$. But $S_4$ has no normal subgroup of order $3$ and so... statement follows...
Is my attempt correct?
AI: The quaternion group has six elements of order four, and they all have the same square. $S_4$ also has six elements of order four, the six four-cycles, but they don't all have the same square. So, $S_4$ can't have a subgroup isomorphic to the quaternion group.
|
H: Is knowing a sheaf equivalent to knowing all the stalks?
We have a topological space $(X, \tau)$, and a sheaf of functions $F: \tau \rightarrow \textbf{Set}$ on $X$. Now I build a new sheaf of stalks of $F$, defined as $G(U) \equiv \prod_{x \in U} F_x$. I feel that this $G(U)$ ought to carry the same information as $F$ --- after all, what does a sheaf contain other than the data at the stalks? However, I'm unable to think of how to either prove or disprove this!
Roughly, I believe that one can reconstruct $F$ from $G$ using the idea of compatible germs in $G$. The elements of $F(U)$ will be those tuples $(x_i)_{i \in I} \in G(U)$ such that the $x_i$ germs are compatible with each other.
Is this intuition correct? If so, why don't we just "define" a sheaf this way, rather than going through the pain of $\text{Sheaf} \rightarrow \text{Stalk}$?
AI: The sheaf contains the data of how the stalks are "glued together".
There is actually a precise result to that effect : if you take a sheaf $F$, then there is an essentially unique space $Y$ together with a local homeomorphism $p:Y\to X$ such that on each open $U$, $F(U) \cong \{s: U\to Y$ continuous $\mid p\circ s = \mathrm{id}_U\}$
This space $Y$ is called the etale space of $F$ and it satisfies $p^{-1}(x)\cong F_x$ for each $x\in X$
Now you can see that a general element in $\prod_{x\in U}F_x$ is unlikely to be in $F(U)$ : if all of them were in it, it would mean that almost any map $s:U\to Y$ which satisfies $p(s(u)) = u$ would automatically be continuous !
So the obstruction to $F\to G$ being an isomorphism is this sort of continuity condition, and this somehow corresponds to gluing the stalks together "along the topology of $X$"
Let's find a concrete example : let $\exp : \mathbb C \to \mathbb C^*$, and consider $F$ on $\mathbb C^*$ such that for each $U$, $F(U)$ is the set of complex continuous logarithms on $U$ (so continuous $f: U\to \mathbb C$ with $\exp\circ f =id_U$). One can compute $F_x \cong \mathbb Z$ : all the possible logarithms of $x$ differ from one another by integer multiples of $2\pi i$
On the other hand if you fix a sufficiently small open set (one where there is a logarithm), then $F(U)\cong \mathbb Z$ as well ! So very far from $\prod_{x\in U}\mathbb Z$
You can see here that the difference between the two lies in the fact that for $F$, one requires the logarithm to be continuous
|
H: Relation between covariance and uncorrelation/independence
I need to have an explanation... If I have a covariance equal to zero, the random variables are only uncorrelated, or are uncorrelated and independent at the same time?
Thanks
AI: They are uncorrelated but not independent, in general.
Conversely, if the variables are independent, they are also uncorrelated.
EDIT 2:
If the model is Gaussian (to be more precise, if $(X,Y)$ are jointly Gaussian), uncorrelatedness and independence are the same.
Proof:
$$f_{XY}(x,y)=\frac{1}{2\pi\sigma_{X}\sigma_{Y}\sqrt{1-\rho^2}}e^{-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_{X})^2}{\sigma_{X}^2}-2\rho\frac{(x-\mu_{X})(y-\mu_{Y})}{\sigma_{X}\sigma_{Y}}+\frac{(y-\mu_{Y})^2}{\sigma_{Y}^2}\right]}$$
Setting $\rho=0$, which means that $X$ and $Y$ are uncorrelated, we immediately get
$$f_{XY}(x,y)=f_X(x)f_Y(y)$$
which means independence as well.
|
H: Expectation value of $Y= \lfloor\frac{X}{2}\rfloor+1$
I need help to solve this question:
Let us define $X \sim \text{Geo}(2/3)$, $Y=\Big\lfloor \frac{X}{2} \Big\rfloor + 1$.
Find $\mathbb{E}[Y]$=?
I cant seem to get the infinite sum correctly, if someone can give me any tip or guidance would be greatly appreciated!
AI: Hint: $$Y=\begin{cases}k,&\text{if } X=2(k-1)\\k,& \text{if } X=2(k-1)+1\end{cases}$$ So, for any $k\in \mathbb N_{>0}$, $$P(Y=k)=P(X=2(k-1))+P(X=2(k-1)+1)$$
|
H: Multivariate Taylor series with Hessian evaluated at a linear combination of $x$ and $\Delta x$
I found the following theorem, but I don't understand it and was unable to prove it. Is it true? Is there a proof for it?
Theorem: Let $f : R^d → R$ be such that $f$ is twice-differentiable and
has continuous derivatives in an open ball $B$ around the point $x ∈ R^d$. Then for any small enough $∆x ∈ R^d$ such that
$x + ∆x$ is also contained in the ball $B$, we have the following:
$$
f(x + \Delta x) = f(x) + \Delta x^T\nabla f|_x + \frac{1}{2}(\Delta x)^T (\nabla^2f|_w)(\Delta x)
$$
Where $(\nabla^2f|_w)$ is the Hessian of $f$ evaluated at a point $w ∈ R^d$
that lies on the line connecting $x$ and $x + ∆x$
I understand that this is a second-order Taylor expansion of $f$ about $x$, and I understand why it is in this form.
But, I don't get is why the Hessian can be evaluated at the point $w$ rather than at $x$. If it is a Taylor expansion about $x$, shouldn't all derivatives be evaluated at $x$? Why is this expansion valid?
For reference, this is where I found the theorem: https://www.cs.princeton.edu/courses/archive/fall18/cos597G/lecnotes/lecture3.pdf
On page 2.
AI: The short answer is that the term with the Hessian of $f$ is an error-correcting (remainder) term. Thus the theorem states that if $f$ is twice differentiable, you can approximate your function by an (affine) linear function (your first two terms, with the zero- and first-order differentials), and the error of that approximation evaluated at any point $x+\Delta x$ is given by the Hessian of $f$ evaluated at a certain point on the line between $x$ and $x+\Delta x$.
You will find the same idea in the one-dimensional statement about Taylor series. This is sometimes referred to as the Lagrange form (of the remainder).
|
H: Bounded sequence in $L^{\infty}$ has weakly convergent subsequence
It would be great if someone could name me a reference where I can find a proof for the following statement, thank you! :)
Given is a bounded sequence $(f_{n})_{n\in\mathbb{N}}\in L^{\infty}$. Then there exists a $f\in L^{\infty}$ and a subsequence $f_{n_{k}}$ of $f_{n}$ that convergeces weakly* to $f$ in $L^{\infty}$.
AI: You did not specify the basic measure space, but I will assume that $L^{1}$ is separable and you are looking at $L^{\infty}$ as the dual of $L^{1}$. In this case the Banach-Alaoglu Theorem tells you that any closed ball in $L^{\infty}$ is compact in the weak* topology. Further, separability of $L^{1}$ makes this ball metrizable. Hence every sequence in it has a weak* convergent subsequence.
|
H: Open convex subset of compact-open topology
Let $X$ be a locally-compact metric space, $E$ be a Banach space, and equip $C(X,E)$ with the compact-open topology. Suppose that $Z\subseteq C(X,E)$ is such that $\operatorname{span}(Z)$ is dense in $C(X,E)$. Then, is $co(Z)\triangleq \left\{\sum_{i=1}^n k_iz_i:\,k_i \in [0,1],\, \sum_{i=1}^n k_i=1,\, z_i \in Z,\, n \in \mathbb{N}\right\}$ dense in a non-empty open subset of $C(X,E)$?
AI: The trivial answer first: If $X=\{x\}$ is a singleton and $E=\mathbb R$ then $C(X,E)=\mathbb R$, and the singleton $\{1\}\subset \mathbb R$ is a closed convex set with empty interior whose span is dense.
More interesting is the following: There are many Banach spaces which are (weakly) compactly generated, i.e., there is a (weakly) compact absolutely convex set with dense linear span; for example, every separable Banach space is compactly generated. But a compact subset of an infinite dimensional Banach space never has interior points (and if a weakly compact set has interior points then the Banach space is reflexive).
|
H: $\int_0^1 \frac{x^p}{1-x^q}\; dx=∞$
When $p>-1, \;q>0$, I want to prove
$$\int_0^1 \frac{x^p}{1-x^q}\; dx = \infty.$$
Any help would be appreciated. I observed with graphing software that this is true, but I cannot prove it. I want to find some function $\varphi(x)$ which satisfies
$$\int_0^1 \varphi(x)dx = \infty,$$
such that
$ \frac{x^p}{1-x^q}>\varphi(x)$.
Thank you.
AI: Hint: It is enough to show that $\int_{1/2}^{1} \frac 1 {1-x^{q}}dx=\infty$ (because $\inf \{x^{p}: \frac 1 2 \leq x \leq 1\} >0$). Expand $\frac 1 {1-x^{q}}$ as $\sum_n x^{nq}$ and integrate term by term. Compare with $\sum \frac 1 n$.
|
H: The characteristic of a ring with unity
Let $R$ be a ring with unity. Suppose $n\cdot 1\neq 0$ for all $n\in\mathbb N$. Here $n\cdot 1$ means the sum of $n$ unities. I would like to prove that $R$ has characteristic $0$. To this end, I want to show that we cannot have $n\cdot a=0$ for all $a\in R$ for some $n\in\mathbb N$. Assume for contradiction that $n\cdot a=0$. Oberve that
$$n\cdot a=a\cdot(n\cdot 1).$$
Could you please tell how to do next in order to reach a contradiction? Thank you very much.
AI: For contradiction you have to start with: let there exists a positive integer $n$ such that $n\cdot a=0$ for all $a\in R$. Now put $a=1$ to get the contradiction.
|
H: Existence of open convex set separating a closed, compact convex set and a point
If $(V, ||\cdot||)$ is a normed vector space and $A \subseteq V$ is compact (and therefore closed since the topology on $V$ induced by $||\cdot||$ is Hausdorff) and convex, and $x \in V \backslash A$, then does there exist an open, convex subset $U$ of $V$ such that $A \subseteq U$ but $x \notin U$?
I'm thinking that there is such a set $U$, if we define $ \varepsilon := \inf_{a \in A} ||a - x||$ (is this infimum necessarily finite?), then does $U := A + \frac 1 2 B(0, \varepsilon)$ satisfy these criteria?
AI: If $(V, \|\cdot\|)$ is a normed vector space and $A \subseteq C$ is compact (and therefore closed since the topology on $V$ induced by $\|\cdot\|$ is Hausdorff) and convex, and $x \in V \backslash A$, then does there exist an open, convex subset $U$ of $V$ such that $A \subseteq U$ but $x \notin U$?
Yes. We can even use (one of) the Hahn-Banach separation theorem(s) to deduce that there is a hyperplane strictly separating these sets. This means we can even do it with weakly open sets.
I'm thinking that there is such a set $U$, if we define $ \varepsilon := \inf_{a \in A} \|a - x\|$ (is this infimum necessarily finite?)...
Yes again. Note that $\|a - x\|$ is bounded below by $0$, hence the infimum is not $-\infty$. So long as $A \neq \emptyset$ (which, presumably, it is), the infimum is not $+\infty$. So, the infimum should be finite.
.. then does $U := A + \frac 1 2 B(0, \varepsilon)$ satisfy these criteria?
Yes once more. Every point in $u \in U$ satisfies $\inf_{a \in A} \|u - a\| < \varepsilon / 2$. Indeed, any $u \in U$ can be expressed in the form $a' + \frac{1}{2}b$, where $b \in B(0, \varepsilon)$. In particular, this means,
$$\frac{\varepsilon}{2} > \frac{1}{2}\|b\| = \|u - a'\| \ge \inf_{a \in A} \|u - a\|.$$
Note that this does not hold for $x$.
|
H: Calculate: $\lim _{x\to \infty }\left(\frac{\sqrt{x^3+4}+\sin \left(x\right)}{\sqrt{x^3+2x^2+7x+11}}\right)$.
Calculate:
$$\lim _{x\to \infty }\left(\frac{\sqrt{x^3+4}+\sin \left(x\right)}{\sqrt{x^3+2x^2+7x+11}}\right)$$
Here's my attempt:
I first tried to split the statement up. I first let $f(x)=\left(\frac{\sqrt{x^3+4}+\sin \left(x\right)}{\sqrt{x^3+2x^2+7x+11}}\right)$, $f_1(x)=\left(\frac{\sqrt{x^3+4}}{\sqrt{x^3+2x^2+7x+11}}\right)$ and $f_2(x)=\left(\frac{\sin \left(x\right)}{\sqrt{x^3+2x^2+7x+11}}\right)$. We know that $\lim_{x \to \infty}f(x)=\lim_{x \to \infty}f_1(x)+\lim_{x \to \infty}f_2(x)$. Now we can find the limit of $f_1(x)$ and $f_2(x)$, which will give us our solution. Therefore, we have:
\begin{align}
\lim_{x \to \infty}f_1(x)&=\lim_{x \to \infty}\left(\frac{\sqrt{x^3+4}}{\sqrt{x^3+2x^2+7x+11}}\right)\\
&= {\sqrt{\lim_{x \to \infty} \frac{x^{3} + 4}{x^{3} + 2 x^{2} + 7 x + 11}}} \\
&= \sqrt{\lim_{x \to \infty} \frac{1 + \frac{4}{x^{3}}}{1 + \frac{2}{x} + \frac{7}{x^{2}} + \frac{11}{x^{3}}}} \\
&= \sqrt{\frac{\lim_{x \to \infty}\left(1 + \frac{4}{x^{3}}\right)}{\lim_{x \to \infty}\left(1 + \frac{2}{x} + \frac{7}{x^{2}} + \frac{11}{x^{3}}\right)}} \\
&= \ ... \\
&= 1
\end{align}
\begin{align}
\lim_{x \to \infty}f_2(x)&=\lim_{x \to \infty}\left(\frac{\sin \left(x\right)}{\sqrt{x^3+2x^2+7x+11}}\right) \\
&= \ ... \\
&= 0
\end{align}
For the limit of $f_2(x)$, I left out the actual working out, but you can either use the squeeze theorem or the fact that $\sin(x)$ is bounded while the denominator tends to $\infty$, which means that the overall function converges to $0$. According to the Squeeze Theorem, since $- \frac{1}{\sqrt{x^{3} + 2 x^{2} + 7 x + 11}} \leq \frac{\sin{\left(x \right)}}{\sqrt{x^{3} + 2 x^{2} + 7 x + 11}} \leq \frac{1}{\sqrt{x^{3} + 2 x^{2} + 7 x + 11}}$ and both the functions on the left and right converge to $0$, so does $f_2(x)$.
Therefore: $$\lim _{x\to \infty }\left(\frac{\sqrt{x^3+4}+\sin \left(x\right)}{\sqrt{x^3+2x^2+7x+11}}\right)=1$$
However, I'm not sure if this is $100\%$ correct since neither Symbolab nor EMathHelp could determine the solution.
AI: $$\frac{\sqrt{x^3+4}+\sin x}{\sqrt{x^3+2x^2+7x+11}}=\frac{\sqrt{x^3+4}}{\sqrt{x^3+2x^2+7x+11}}+\frac{\sin x}{\sqrt{x^3+2x^2+7x+11}}=$$
$$=\frac{\sqrt{1+\frac4{x^3}}}{\sqrt{1+2\frac1x+\frac7{x^2}+\frac{11}{x^3}}}+\frac{\sin x}{\sqrt{x^3+2x^2+7x+11}}\xrightarrow[x\to\infty]{}1+0=1$$
The second term's limit is zero, as it is the product of a bounded function and a function whose limit is zero
|
H: Projecting 3D points onto 2D coordinate system of a plane
This is a rather basic question though I could not find a post with the answer.
I have a set of points in 3D. Let us define one of them $s$ as an observation point. we calculate the 3D Euclidean distances and choose the furthest point from $s$, denoted $r$. I want to span a 2D plane where the normal is $\overrightarrow{n}=\frac{\overrightarrow{sr}}{\left\|\overrightarrow{sr}\right\|}$. $\overrightarrow{sr}$ denotes the vector from point s to point r. I want to project the remaining points onto that plane and get their 2D values with the origin of the new system at $r$.
From all other projection answers I could find, I will get a 3D coordinate after the projection. My final goal is to calculate the 2D convex hull on the projected plane. How do I make the transition here?
AI: To convert the projected 3D points into 2D coordinates you first need to define a 2D coordinate system which is contained in your plane. For this you need to define the base vectors $\overrightarrow{e_x}$ and $\overrightarrow{e_y}$ of your coordinate frame. I assume that you would want a right-handed orthonormal base.
First you need to define your base vector $\overrightarrow{e_x}$. For this you can choose any unit vector, which is contained in your plane (orthogonal to $\overrightarrow{n}$, length 1).
One possibility would be to define the first basis vector $\overrightarrow{e_x}$ via $\overrightarrow{r}$:
$$\overrightarrow{e_x} = \frac{\overrightarrow{r} \times \overrightarrow{n}}{||\overrightarrow{r} \times \overrightarrow{n}||_2}$$
Where $\times$ denotes the vector cross product. This method works unless $\overrightarrow{r}$ and $\overrightarrow{s}$ are parallel.
Once you have found such a first basis vector $e_x$, you can derive the right-handed basis vector $e_y$ by:
$$ \overrightarrow{e_y} = \frac{\overrightarrow{n} \times \overrightarrow{e_x}}{||\overrightarrow{n} \times \overrightarrow{e_x}||_2}$$
We now denote by $p'$ the projection of an arbitrary point $p$ onto the defined plane, in 3D coordinates. We can then define the 3D vector from the origin $r$ of the new coordinate system to the point $p'$ by:
$$\overrightarrow{rp'} = \overrightarrow{p'} - \overrightarrow{r}$$
To get the 2D coordinates of that point you simply project the 3D vector of $p'$ onto our derived 3D vectors for $e_x$ and $e_y$ using the scalar product:
$$ p'_{\ 2D} = \begin{pmatrix} \overrightarrow{rp'} \cdot \overrightarrow{e_x} \\ \overrightarrow{rp'} \cdot \overrightarrow{e_y} \end{pmatrix} $$
Finally you could reformulate the above equation into matrix form as follows:
$$ p'_{\ 2D} = K \ \overrightarrow{rp'} = \begin{pmatrix} \overrightarrow{e_x}^T \\ \overrightarrow{e_y}^T \end{pmatrix} \overrightarrow{rp'} = \begin{pmatrix} e_{x,1} & e_{x,2} & e_{x,3} \\ e_{y,1} & e_{y,2} & e_{y,3} \end{pmatrix} \overrightarrow{rp'} $$
Where K denotes the projection matrix which you can simply apply to any 3D point in the defined plane. With this the transition to a 2D coordinate for a projected point is complete.
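Here is a compact numpy sketch of the whole construction (the function name and sample data are mine); the planar hull can then be computed with scipy.spatial.ConvexHull on the returned 2D coordinates:

    import numpy as np

    def project_to_plane_2d(points, s, r):
        # 2D coordinates (origin at r) of 3D points projected onto the plane
        # through r with normal n = (r - s)/|r - s|
        n = (r - s) / np.linalg.norm(r - s)
        e_x = np.cross(r, n)                  # any in-plane direction; fails if r || s
        e_x /= np.linalg.norm(e_x)
        e_y = np.cross(n, e_x)                # unit length by construction
        K = np.vstack([e_x, e_y])             # 2x3 projection matrix
        d = points - r
        p_proj = points - np.outer(d @ n, n)  # 3D projections p' onto the plane
        return (p_proj - r) @ K.T             # 2D coordinates in the (e_x, e_y) frame

    pts = np.random.rand(10, 3)
    uv = project_to_plane_2d(pts, s=np.array([0.0, 0.0, 1.0]), r=np.array([1.0, 0.0, 0.0]))
    print(uv)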
|
H: Ideals in a UFD
Consider the ideal $I=(ux,uy,vx,uv)$ in the polynomial Ring $\mathbb Q[u,v,x,y]$, where $u,v,x,y$ are indeterminates. Prove that every prime Ideal containing I contains the Ideal $(x,y)$ or the Ideal $(u,v)$.
I am not able to choose the correct combinations of products of the four indeterminates to arrive at the answer.
AI: By definition, if $\mathfrak p\subseteq\Bbb Q[u,v,x,y]$ is a prime ideal and $ab\in\mathfrak p$ then either $a\in\mathfrak p$ or $b\in\mathfrak p$.
If $(ux,uy,vx,uv)\subseteq\mathfrak p$ then $ux\in\mathfrak p$, so $u\in\mathfrak p$ or $x\in\mathfrak p$. We then get some possibilities:
If $u\in\mathfrak p$, then $uy,uv\in\mathfrak p$ automatically, and we only need to worry about $vx$: either $v\in\mathfrak p$, which yields the prime ideal $P_1=(u,v)$, or $x\in\mathfrak p$, which yields the prime ideal $P_2=(u,x)$.
If instead $x\in\mathfrak p$ but $u\notin\mathfrak p$, then $vx\in\mathfrak p$ automatically, and $uy\in\mathfrak p$ and $uv\in\mathfrak p$ force $y\in\mathfrak p$ and $v\in\mathfrak p$, which yields the prime ideal $P_3=(x,y,v)$.
All of the ideals $P_1,P_2,P_3$ are prime, since each quotient $\Bbb Q[u,v,x,y]/P_i$ is an integral domain. In particular, $P_2=(u,x)$ is a prime ideal containing $I$ which contains neither $(x,y)$ nor $(u,v)$, so your proposition as stated is not correct.
It may be, as suggested in the comments, that the ideal is in fact $$I=(ux,uy,vx,vy)$$ Then we would have $I=(u,v)\cap (x,y)=(u,v)(x,y)$, and since a prime ideal containing a product of two ideals must contain one of them, every prime containing $I$ contains $(u,v)$ or $(x,y)$.
|
H: Given a convergent sum $\sum_{n=1}^{\infty}a_n$, prove/disprove: $\sum_{n=1}^{\infty}a_n(1-a_n)$
Given a convergent sum $\sum_{n=1}^{\infty}a_n \ $, prove/disprove: $\sum_{n=1}^{\infty}a_n(1-a_n)$ is convergent
My Attempt:
By dividing the question into cases: for the first case, $\sum_{n=1}^{\infty}a_n$ converges absolutely, and then it's pretty easy to prove that $\sum_{n=1}^{\infty}(a_n)^2$ converges, therefore $\sum_{n=1}^{\infty}a_n - \sum_{n=1}^{\infty}(a_n)^2$ converges and we're done.
In the second case, $\sum_{n=1}^{\infty}a_n$ is conditionally convergent, and now it's not clear that $\sum_{n=1}^{\infty}(a_n)^2$ converges. for example, let $a_n = \frac{(-1)^n}{\sqrt{n}}$, then $\sum a_n$ converges, but $\sum (a_n)^2 = \sum \frac{1}{n}$ which diverges.
I've made another attempt and tried to use Abel theorem.
$\sum {a_n}$ converges, then I've tried to prove that the sequence $\{(1-a_n)\}_{n=1}^{\infty}$ is monotonic and bounded. Clearly, $\{1-a_n\}$ is bounded as $\lim_{n\to \infty} (1-a_n) = 1$, but I have no idea if it's even possible to prove that this sequence is monotonic, as there is no information given on the positivity/negativity of $\{a_n\}$, but only that $\sum a_n$ converges.
AI: I think you already have a complete answer.
For the example of $a_n=\frac{(-1)^n}{\sqrt n}$, we know $$\sum_{n=1}^Ma_n(1-a_n)=\sum_{n=1}^Ma_n-\sum_{n=1}^M(a_n)^2$$ The first sum is bounded by $$L=\sum_{n=1}^\infty a_n<\infty$$ but $\sum_{n=1}^M(a_n)^2$ tends to infinity as $M\to\infty$. Thus as $M\to\infty$ we have $$\sum_{n=1}^Ma_n(1-a_n)\to-\infty$$ So in this case (convergent, but not absolutely convergent) it is disproved.
|
H: Prove or disprove: $\ker (TS)=\{0\}$ $\implies$ $\ker (S)=\{0\}$
Prove or disprove: $\ker (TS)=\{0\}$ $\implies$ $\ker (S)=\{0\}$.
$\bullet~$ $\textbf{My attempt:}$
(not necessarily)
we will define
$$S:\mathbb{R}^2 \rightarrow \mathbb{R}^2 ~\text{ and }~ T:\mathbb{R}^2 \rightarrow\mathbb{R}^2$$
from the given
$$\ker(TS)=\{0\} \implies T\circ S(\vec v)=\vec 0$$
let's say
$S(v)=Av$ where $A = \begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix}$ and $v = (0,1)^T$; then $Av=\begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix}\begin{pmatrix}0\\ 1\end{pmatrix}=\vec 0$, so
$$T\circ S(0,1)=T(S(0,1))=T(\vec 0)=\vec 0 .$$
Thus, $\ker (TS)=\{0\} \not\Rightarrow\ker (S)=\{0\}.$
AI: If $g\circ f$ is injective, then $f$ is necessarily injective. A linear map is injective if and only if its kernel is trivial. This also proves implication in title. For linear maps, it suffices to note that $$\{0\}\subseteq\mathrm{Ker}(S) \subseteq \mathrm{Ker}(TS) = \{0\}.$$
|
H: How is the MSE calculated?
I have trouble with the following problem:
Consider the independent random variables $[X_1,\ldots,X_{19}]$ For each
$i$ their probability mass function is given by $p(X_i=-1)$ = $p(X_i= 1) = p$ and $p(X_i= 0) = 1-2p$. We use $T = \frac{1}{38}\sum_{i=1}^{19}Xi^2 $ as an estimator for p. Compute its
MSE if the real value of the parameter is p=0.1
The answer is 1/475
I tried, but I got the wrong answer and I don't know why. I know the MSE is:
$Var(T) + (E[T]-p)^2$
$Var(T)$ = $Var(\frac{1}{38}\sum_{i=1}^{19}X_i^2)$ = $(\frac{19}{38})^2 Var(X_i^2)$ =$\frac{1}{4}Var(X_i^2)$
$Var(X_i^2) = E[X_i^4] - (E[X^2]^2) = (1*0.1 + 1*0.1) - (1*0.1 + 1*0.1)^2 = 0.2 - 0.04 = 0.16$
So $Var(T) = 0.25*0.16 - 0.04$
$E[T] $ = $E[\frac{1}{38}\sum_{i=1}^{19}X_i^2]$ = $\frac{1}{2}E[X_i^2]$
$E[Xi^2]$ = $(-1)^2 * 0.1 + 1 * 0.1 = 0.2$
So $E[T] = 0.1$
$MSE = 0.04 + (0.1-0.1)^2 \ne \frac{1}{475}$
Can I get feedback on my method?
Thanks,
Ter
AI: There is an error in $\mathrm{Var}[T]$:
$$\mathrm{Var}\left(\frac{1}{38}\sum_{i=1}^{19}X_i^2\right)
\stackrel? = \left(\frac{19}{38}\right)^2 Var\left(X_i^2\right).$$
This should be
$$\mathrm{Var}\left(\frac{1}{38}\sum_{i=1}^{19}X_i^2\right)
= \left(\frac{1}{38}\right)^2 \mathrm{Var}\left(\sum_{i=1}^{19}X_i^2\right)
= \frac{19}{38^2} Var\left(X_i^2\right).$$
Your calculation gives the variance if all the $X_i$ are equal, not independent.
That is, it is
$\mathrm{Var}\left(\frac{1}{38} \times 19X_i^2\right).$
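A Monte Carlo sketch (sample size and seed are arbitrary) confirms the corrected value $1/475\approx 0.0021$:

    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.1
    # X_i in {-1, 0, 1} with P(-1) = P(1) = p and P(0) = 1 - 2p
    X = rng.choice([-1, 0, 1], p=[p, 1 - 2 * p, p], size=(200_000, 19))
    T = (X ** 2).sum(axis=1) / 38
    print(((T - p) ** 2).mean(), 1 / 475)  # both ~0.0021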
|
H: Probability with a diagnostics tests
Question: A virus has been spread around a population. The prevalence of this virus is 84%. A diagnostic test, with a specificity of 94% and sensitivity of 15%, has been introduced. If a patient is drawn randomly from the population, what is the probability that:
a) a person has the virus, given that they tested positive?
b) a person has the virus, given that they tested negative?
(As a follow up) Will the positive results in this test be mostly false positives?
Attempts and Ideas: I'm nearly certain that Bayes' Theorem applies:
$$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$
although I am unsure how the information given relates to this theorem. Perhaps $P(A)$ is testing positive and $P(B)$ is actually being positive?
Any help or guidance is greatly appreciated!
AI: First of all, let's define what the Sensitivity and Specificity of a test are:
Sensitivity is defined as
$$\mathbb{P}[T^+|D]$$
Specificity is defined as
$$\mathbb{P}[T^-|\overline{D}]$$
Where $T^+,T^-$ indicate positive and negative test result while $D$ is "disease"
Second, let's take (as an example) $10{,}000$ persons and see what is happening with the given probabilities; the tally works out as follows:
$$\begin{array}{c|cc|c}
 & T^+ & T^- & \text{total}\\\hline
D & 1260 & 7140 & 8400\\
\overline{D} & 96 & 1504 & 1600\\\hline
\text{total} & 1356 & 8644 & 10000
\end{array}$$
What you are requested to calculate is
(a)
$$\mathbb{P}[D|T^+]=\frac{1260}{1356}\approx 92.92\%$$
and
(b)
$$\mathbb{P}[D|T^-]=\frac{7140}{8644}\approx 82.60\%$$
|
H: Prove $\int_{a} ^{b} f(x) \, dx=\lim_{x\to b^-} g(x) - \lim_{x\to a^+} g(x) $
Can anyone tell me how to prove the following theorem.
Let $f:[a, b] \to\mathbb {R} $ be Riemann integrable on $[a, b] $ and let $g:(a, b) \to\mathbb {R} $ be such that $g'(x) =f(x) $ for all $x\in(a, b) $. Then the limits $$\lim_{x\to a^{+} } g(x), \lim_{x\to b^{-}} g(x) $$ exist and we have $$\int_{a} ^{b} f(x) \, dx=\lim_{x\to b^-} g(x) - \lim_{x\to a^+} g(x) $$
AI: Hint. By integrability, $|f|$ is bounded on $[a,b]$ by some constant $M\geq 0$. It follows that the function $g$ is uniformly continuous in $(a,b)$: for $a<x\leq y<b$:
$$|g(x)-g(y)|\leq \left|\int_x^y f(x)\,dx\right|\leq M|x-y|$$
which implies that the limits $\lim_{x\to a^{+} } g(x)$, $\lim_{x\to b^{-}} g(x)$ exist.
By the same reason, as $r\to 0^+$,
$$0\leq \left|\int_{a} ^{a+r} f(x)\,dx\right|\leq \int_{a} ^{a+r} |f(x)|\,dx\leq Mr\to 0$$
and
$$0\leq\left|\int_{b-r} ^{b} f(x)\,dx\right|\leq \int_{b-r} ^{b} |f(x)|\,dx\leq Mr\to 0.$$
Can you take it from here?
|
H: What is the difference between $A^TA$ and $A^2$?
What is the difference between squaring a matrix by multiplying it by its transpose and squaring it by multiplying it by itself. When I am asked to square a matrix, which method is preferred?
AI: In general we have $A^TA \ne A^2.$
Example: $A=\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$
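A two-line numerical check of this example (numpy is used only for the matrix products):

    import numpy as np

    A = np.array([[0, 1], [0, 0]])
    print(A.T @ A)  # [[0 0], [0 1]]
    print(A @ A)    # [[0 0], [0 0]]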
|
H: Is orthogonal complement in a Hilbert space unique?
Let $H$ be a Hilbert space and let $X \leq H$ where $X$ is a closed subspace of $H$. Suppose that there is another (closed?) subspace $Y$ with $X \oplus Y = H$ and $X \perp Y$. Is it true that $Y = X^\perp?$
Attempt:
Let $y \in Y$. Then $\langle y,x \rangle = 0$ for all $x \in X$, so $y \in X^\perp$. Hence $Y \subseteq X^\perp$.
Conversely, assume $y \in X^\perp$. Then $\langle y,x \rangle =0$ for all $x \in X$. Write $y = x_1 + y_1$ with $x_1 \in X, y_1 \in Y$. We wish to show that $x_1 = 0$. Then I'm stuck.
Is the statement even true?
AI: Let $y \in X^{\perp}$. We can write $y=u+v$ with $u \in X$ and $v \in Y$. Note that $v \in X^{\perp}$. Now $u=y-v \in X^{\perp}$. In particular $u \perp u$ so $u=0$. Hence $y=v \in Y$.
|
H: How to solve this equation involving natural log
I have $2$ related questions:
I know the value of $\ln(x) / \ln(y)$, say it is $v$; how can I find the value of $x/y$?
If $\ln(x) = v_1$ and $\ln(y) = v_2$ , what is $x/y$ ?
Thanks for your help. Apologies if these are very basic questions.
AI: Note $ln(x) =v_1\implies x= e^{v_1}$ similarly $y= e^{v_2}$
Thus $x\over y$ $ = \frac{e^{v_1}}{e^{v_2}}=e^{{v_1}-{v_2}}$
|
H: Motivation of the definition of topology
In general topology the the definition of topology is the following:
Let X be a non empty set. A set $\tau$ of subsets of $X$ is said to be a topology on $X$ if
$X \in \tau$ and $\emptyset \in \tau$
The union of any (finite or infinite) number of sets in $\tau$ belongs to $\tau$
The intersection of finitely many elements of $τ$ is an element of $τ$.
My question is, why do we define the topology on a set this way?
Why does the finite or infinite union of sets in $\tau$ belong to $\tau$, but only the finite intersection of elements of $\tau$ belongs to $\tau$?
And why do we need to have that $X \in \tau$ and $\emptyset \in \tau$?
What is the motivation for this definition?
AI: You ask a good question. I agree with the two answers above but wanted to add something. As a mathematician coming to these definitions they do seem arbitrary and perhaps lacking in justification. I found myself asking the same question about the axioms for a category. I think what happens is that mathematicians find lots of interesting examples of spaces (e.g. the metric spaces given above). They then develop a set of axioms for a new thing (a topological space). The axioms are chosen so they are strong enough to prove many interesting theorems but weak enough to admit loads of interesting and varied examples as topological space (for example the particular point topology https://en.wikipedia.org/wiki/Particular_point_topology in which a single point set is compact but its closure is not compact (if the space is infinite) - crazy!). Note that the axioms of topology evolved (some people included a separation axiom https://en.wikipedia.org/wiki/History_of_the_separation_axioms). I think groups, categories, rings and fields are all a bit like this. This is quite different from natural numbers, real numbers and geometry which all seem to be understood by us intuitively. A difficult question is whether ZF Set Theory is of the first type (axioms chosen by humans) or the second (things that 'really' exist or at least are consistent with some intuition, real or conceptual).
|
H: "bracelet type" Combinatorics
This question seems ok but I'm having real difficulty working out the answer using the method they provided. It's so hard to keep track of all the options. Does anyone know of a better, more algebraic method?
AI: Two things can be done to a bracelet without changing it: rotating it and flipping it upside down. We want to calculate the number of possible bracelets without counting two bracelets that are identical up to rotation and flipping twice.
First, it is always possible to rotate the bracelet so that the yellow bead will be on top. It is left to determine the color of the 6 other beads, out of which 2 are red and four green.
Let's name the beads (clockwise) by 1-2-3-4-5-6.
Note that 1-6, 2-5 and 3-4 are the symmetric pairs of beads (so when flipping the bracelet, the beads in positions 1 and 6 change places, as do those in 2-5 and in 3-4).
In total, there are ${6 \choose 2}$ options to choose the red beads. Out of which, in 3 options, the red beads are in symmetric places.
In the rest ${6 \choose 2}-3$ options, each bracelet is counted twice due to this symmetry. Hence, divide by $2$ and the final answer is:
$$3+\tfrac{{6 \choose 2}-3}{2}=9$$
We can also list the options. If there is a red bead in place #1, the other red can be in any of the places, so there are 5 options.
If there is a red bead in place #2, the other red can be only in places 3-5, because if it were in 6, it would be symmetric to the previous case with a bead at 1 and 5. So in total there are 3 options here.
If there is a red bead in place #3, the only other place for the other red is #4, because other places were already taken into account in previous options.
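The count can also be confirmed by brute force (a Python sketch; positions are numbered 1-6 and the flip sends position $i$ to $7-i$, as above):

    from itertools import combinations

    def canonical(reds):
        flipped = tuple(sorted(7 - i for i in reds))
        return min(tuple(sorted(reds)), flipped)

    print(len({canonical(c) for c in combinations(range(1, 7), 2)}))  # 9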
|
H: Question about proof characterisation partial isometry
Consider the following fragment in the text "$C^*$-algebras and operator theory by Murphy":
Could someone explain why the marked step is true? I don't see how this follows from $\Vert u(x) \Vert^2 = \Vert u^* u(x) \Vert^2$. Thanks in advance.
AI: If $\|u(x)\|^2 = \|u^*u(x)\|^2$ for all $x$ then $\|u(1-u^*u)\,(x)\|^2=\|u^*u(1-u^*u)\,(x)\|^2=\|(u^*u-(u^*u)^2)\,(x)\|^2$. But $u^*u$ is a projection so what is left here is $\|u(1-u^*u)\,(x)\|^2=\|(u^*u-u^*u)\,(x)\|^2=0$ for all $x$.
|
H: Kernel of homomorphism $\mathbb{K}[X,Y]\rightarrow \mathbb{K}[t^2,t^3]$, $X\mapsto t^2$, $Y\mapsto t^3$
I'm trying to show that the kernel of the homomorphism
$$\varphi: \mathbb{K}[X,Y]\rightarrow \mathbb{K}[t^2,t^3],$$
$$X\mapsto t^2,$$
$$Y\mapsto t^3,$$
is the ideal $I=\langle Y^2-X^3\rangle$ in $\mathbb{K}[X,Y]$, where $\mathbb{K}$ is a field.
I can show that $I$ is contained in $\ker \varphi$, but not the other way around. For a given polynomial $p(X,Y)$ such that $\varphi(p(X,Y))=0$, I wrote it as
$$
p(X,Y)= \sum_{i=1}^n a_iX^i + \sum_{j=1}^mb_jY^j + \sum_{r,s>0}c_{rs}X^rY^S +d,
$$
but that didn't help me as much as I hoped. At least I can show that $d=0$, but that was it. The change of variables made by $\varphi$ suggested some integer relationships between $i$, $j$ and $r,s$ but I could not conclude anything.
This was a part of an example seen in a Commutative Algebra online course under the section of Integral Extensions, with a spoiler from what would be seen in an Algebraic Geometry course (I can provide a link to the Youtube video, if needed, though I believe it's not necessary).
AI: Assume that $I \neq \operatorname{Ker}(\varphi)$. Then there exists some $p(X,Y) \in \operatorname{Ker}(\varphi) \setminus I$ (as you have already shown one inclusion). As $Y^2-X^3$ is a monic polynomial in $(\mathbb{K}[X])[Y]$ we may assume that $p$ has degree $0$ or $1$ in $Y$ (otherwise add suitable element of $I$). This means we have only to consider
$$ p(X,Y) = a(X) Y + b(X), $$
where $a,b\in \mathbb{K}[X]$. Now note that the degree of $\varphi(a(X)Y)$ is odd (if $a$ is not the zero polynomial) and the degree of $\varphi(b(X))$ is even. Hence, this can only vanish if $a$ is the zero polynomial. But then we get that $\varphi(b(X))$ vanishes and hence, $b$ must be the zero polynomial as well. Then we get the contradiction $0 = p(X,Y) \in \operatorname{Ker}(\varphi) \setminus I.$
|
H: Let $f:\mathbb{R}\to(0,\infty)$ be a differentiable function such that $f'(x)=f(f(x))$ for all $x\in\mathbb{R}$. Show that no such function exists.
What i have done is very small.
$$f'(x)=f(f(x))\implies f(f'(x))=f(f(f(x)))$$Now $$f(f(f(x)))=f'(f(x))$$Hence$$f(f'(x))=f'(f(x))$$Now I am blank. What should I do to complete the proof?
AI: Note that under the hypothesis $f$ is increasing. So, $f(f(x))>f(0)$ for all $x\in\Bbb R$. So, $f'(x)$ has a lower bound which is $f(0)$.
Hence $f(x)<f(0)+xf(0)=(1+x)f(0)$ for all $x<0$. So, for $x\leq -1$ we have $f(x)\leq 0$, contradiction.
|
H: Show $\sigma(x) \in \{0,1\}$ if $x \in \{0,1\}$
Let $A$ be a unital $C^*$-algebra and $x \in A$ an element with $x^* = x=x^2 = x^3$. I want to show that $\sigma_A(x) \subseteq \{0,1\}$.
Attempt:
We know that $\sigma_A(x)^2=\sigma_A(x^2) = \sigma_A(x^3) = \sigma_A(x)^3$ so I guess the statement somehow follows from $\lambda^2 = \lambda^3 \implies \lambda \in \{0,1\}$.
Let $\lambda \in \sigma_A(x)$. Then $\lambda^2 \in \sigma_A(x)^3$, so there is $\mu \in \sigma_A(x)$ with $\lambda^2 = \mu^3$. Similarly, there is $\eta$ with $\lambda^3 = \eta^2$. I feel like I'm missing something basic...
AI: If $P$ is a polynomial then $\sigma(P(x))=P(\sigma(x)):= \{ P(\lambda)\mid \lambda\in \sigma(x)\}$.
In particular if $x=x^2$ then $x-x^2=0$ hence for every $\lambda\in \sigma(x)$ you have that $\lambda-\lambda^2=0$. The only complex numbers satisfying this equation are however $0$ and $1$ (as this is a quadratic polynomial it can have at most two solutions), so $\sigma(x)\subseteq \{0,1\}$.
|
H: Impossible to pack Circles without gaps
It is intuitively apparent that circles cannot be packed without any gaps. I thought this would be easy to prove, but it turned out not to be, at least for me.
I have $2$ versions for this question, which likely to have opposite answers.
$1:$ Is it possible to pack finitely many circles (of radius larger than $0$) of the same size within a finite region?
$2:$ Is it possible to pack circles (of radius larger than $0$) within a finite region? (This means we can shrink the circles as small as we want, and there can be infinitely many of them.)
For $1$, I thought it is obviously impossible, since no matter how we arrange the circles, there is always some room not included within the circles. I thought it would be easy to prove until I realised that there can be more ways than I thought to arrange the circles. (See the pictures: or maybe this is already a proof?)
For $2$, I think this is possible, just like packing any shape with rectangles as in a Riemann integral, but I have not come up with a proof.
I think these are not obvious questions and need some tools, which geometrists may have but I do not. Any ideas and suggestions will be appreciated.
AI: Put in any first circle C. Notice that if you have a finite number of circles, there is no way to contain every point in the neighborhood of a point on the circumference of C.
Check out this link: Filling a rectangle with an infinite amount of circles
|
H: Are there doubly (left- and right-) perfect sets with Lebesgue measure zero?
This question seems natural enough that the answer should be known, but I was unable to find a reference.
Call a subset $C$ of $\Bbb R$ left- (respectively, right-) perfect if it is perfect and if every point in $C$ is a limit point from the left (respectively, from the right). Are there any subsets of $\Bbb R$ that are Lebesgue-null, right-perfect and left-perfect?
Clearly, the standard Cantor set is neither right- nor left-perfect.
AI: The only left-perfect sets are $\emptyset$, $\mathbb{R}$, and closed half-lines of the form $(-\infty, b]$. So the only sets which are right-perfect and left-perfect are $\emptyset$ and $\mathbb{R}$, and the only set fitting the criteria of your question is $\emptyset$.
To see this, suppose $y \notin C$. If there exists $x \in C$ with $x > y$, we can let $x_0 = \inf\{x \in C : x > y\}$. Then $x_0 \in C$ since perfect sets are closed, but there are no points of $C$ between $y$ and $x_0$, so $x_0$ is not a limit point from the left, a contradiction. Thus $[y, \infty) \subset C^c$. It follows that $C^c$ is a half-line which is open since $C$ is closed, so $C^c = (b, \infty)$ for some $b$, or else $C^c = \mathbb{R}$ or $\emptyset$.
|
H: Using exclusion and inclusion method to calculate number of functions
Let $A =\{1,2,3,4,5,6\}$.
I want to find the number of functions $f : A \rightarrow A$ such that
$|f^{-1}(i)|=i$ for every $i\in \{1,2,3\}$.
Does anyone have an idea? I tried a lot but didn't get to the answer.
Thank you
AI: You want $f$ to map one value into $1$, two values into $2$ and three values into $3$.
$f^{-1}[\{1\}]$ is a singleton containing any value between $1$ and $6$.
$f^{-1}[\{2\}]$ is a set containing any two values taken from $\{1,\ldots,6\} \setminus f^{-1}[\{1\}]$.
$f^{-1}[\{3\}]$ is the uniquely determined set given by $\{1,\ldots,6\} \setminus (f^{-1}[\{1\}] \cup f^{-1}[\{2\}])$.
Hence, the total number of possibilities is $\binom{6}{1} \cdot \binom{5}{2} \cdot \binom{3}{3} = 60$.
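A brute-force check over all $6^6$ functions (a Python sketch) confirms the count:

    from itertools import product

    A = range(1, 7)
    count = sum(1 for f in product(A, repeat=6) if all(f.count(i) == i for i in (1, 2, 3)))
    print(count)  # 60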
|
H: Infinite sum $\sum ^{\infty }_{n=1}\frac {n( n+1)^{2}}{(n-1)!}x^{n}$
I want to calculate the following.
$$\sum ^{\infty }_{n=1}\frac {n( n+1)^{2}}{(n-1)!}x^{n}=?$$
I know that
$$\sum ^{\infty }_{n=1}\frac {x^{n}}{(n-1)!}=e^{x}x.$$
(By the way, I checked the answer with WolframAlpha!)
Please tell me how to solve it.
AI: Hint:
Write $$n(n+1)^2=(n-1)(n-2)(n-3)+a(n-1)(n-2)+b(n-1)+c\ \ \ \ (1)$$
so that $$\dfrac{n(n+1)^2}{(n-1)!}x^n=x^4\cdot\dfrac{x^{n-4}}{(n-4)!}+ax^3\cdot\dfrac{x^{n-3}}{(n-3)!}+bx^2\cdot\dfrac{x^{n-2}}{(n-2)!}+cx\cdot\dfrac{x^{n-1}}{(n-1)!}$$
Now put $n=1$, $n=2$ and $n=3$ in $(1)$ to find $a,b,c$ (one finds $c=4$, $b=14$, $a=8$)
Finally $$\sum_{r=0}^\infty\dfrac{y^r}{r!}=e^y$$
See also: Evaluate the series $\lim\limits_{n \to \infty} \sum\limits_{i=1}^n \frac{n+2}{2(n-1)!}$
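Carrying the hint out (with $a=8$, $b=14$, $c=4$) gives $e^x x(x^3+8x^2+14x+4)$, which a quick numerical comparison supports (the evaluation point is chosen arbitrarily):

    from math import exp, factorial

    def lhs(x, terms=60):
        return sum(n * (n + 1) ** 2 / factorial(n - 1) * x ** n for n in range(1, terms))

    def rhs(x):
        return exp(x) * x * (x ** 3 + 8 * x ** 2 + 14 * x + 4)

    print(lhs(0.7), rhs(0.7))  # agree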
|
H: Can someone explain this small probability contradiction?
The question is: A service organization in a large town organizes a raffle. One thousand raffle tickets are sold for $\$1$ each. Each has an equal chance of winning. First prize is $\$300$, second prize is $\$200$, and third prize is $\$100$. What is that chance of winning something?
First way: $X= \{100, 200, 300\}$, $P(X) = P(100)+ P(200)+ P(300)$ as there is one such ticket for each
=> $P(X) = 0.001 + 0.001 + 0.001 = 0.003$
Second. when the results are going to be announced, the chance for each ticket to be the first prize winner is $0.001$. to be the second prize winner the chance is, $0.999$ (not to be the first prize winner + $1/999$ (for being the second) in the same way the chance of winning the 3rd position is $998/999 \times1/998$.
=> $P(X) = 0.001 + 999/1000 \cdot 1/999 + 998/999 \cdot 1/998 =0.003001\dots$
Does watching the show where they announce the names of the winners increase the chance of winning? :D
AI: The second way of calculating is more complex than needed. If you carry it out correctly, then you get the same outcome:
$P\left(W_{1}\right)=\frac{1}{1000}$
$P\left(W_{2}\right)=P\left(W_{1}^{\complement}\cap W_{2}\right)=P\left(W_{1}^{\complement}\right)P\left(W_{2}\mid W_{1}^{\complement}\right)=\frac{999}{1000}\frac{1}{999}=\frac{1}{1000}$
$P\left(W_{3}\right)=P\left(W_{1}^{\complement}\cap W_{2}^{\complement}\cap W_{3}\right)=P\left(W_{1}^{\complement}\cap W_{2}^{\complement}\right)P\left(W_{3}\mid W_{1}^{\complement}\cap W_{2}^{\complement}\right)=\frac{998}{1000}\frac{1}{998}=\frac{1}{1000}$
$P\left(W_{1}\cup W_{2}\cup W_{3}\right)=P\left(W_{1}\right)+P\left(W_{2}\right)+P\left(W_{3}\right)=\frac{3}{1000}$
Here $W_i$ denotes the event of winning the $i$-th prize.
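If you are still suspicious, a Monte Carlo simulation (a sketch) confirms that the chance is $3/1000$ no matter how the draw is narrated:

```python
import random

# Draw three distinct winning tickets out of 1000 and record how often a
# fixed ticket (say ticket 0) wins anything; the frequency approaches 3/1000.
trials = 1_000_000
wins = sum(0 in random.sample(range(1000), 3) for _ in range(trials))
print(wins / trials)  # about 0.003
```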
|
H: Find the points closest to two lines using least squares method
Given are two lines $g(t)=a+bt$ and $h(s)=c+ds$ with $a,b,c,d \in \mathbb R^3$. I need to find the points where the two lines are closest using the least squares method. However I am unable to find a solution for this problem.
Intuitively those points are where $g(t)-h(s)$ is as small as possible, but I don't know how to translate this into my understanding of a least squares problem. In my understanding what the least squares method does is it fits a line as close as possible to a set of given points. However given two lines this set of points seems to be infinite and therefore I don't know what that line should be. I think where I am stuck is that my understanding of the least squares method is too specific and limited to fitting straight lines.
AI: As you say, you're looking for a point where $g(t) - h(s)$ is as small as possible. Let's unpack what that means.
If the two lines actually crossed, we would be looking for a point where
$$bt - ds + (a-c) = 0$$
This can be written in the equivalent form as the matrix equation
$$\begin{bmatrix} b & -d \end{bmatrix} \begin{bmatrix} t \\ s \end{bmatrix} = c - a$$
If we introduce the notation $A = \begin{bmatrix} b & -d \end{bmatrix}$, $\vec{x} = \begin{bmatrix} t \\ s \end{bmatrix}$, $\vec{b_0} = \begin{bmatrix} c - a \end{bmatrix}$, you want to solve
$$A \vec{x} = \vec{b_0}$$
But this equation doesn't have any solutions, because the lines don't actually cross. Another way of expressing this fact is that $\vec{b_0}$ is not in $\operatorname{im} A$.
This is where least squares comes in. Since $\vec{b_0}$ is not in $\operatorname{im} A$, we find the point $\vec{b^*}$ that is in $\operatorname{im} A$ and is as close to $\vec{b_0}$ as possible. This point can be found by projection onto $\operatorname{im} A$:
$$\vec{b^*} = \operatorname{proj}_{\operatorname{im} A} \vec{b_0}$$
Then we want to solve the least-squares equation
$$A \vec{x^*} = \operatorname{proj}_{\operatorname{im} A}\vec{b_0}$$
Can you take it from here?
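In case it helps, here is a minimal numerical sketch (with made-up endpoints $a,b,c,d$; `numpy`'s least-squares routine performs the projection implicitly):

```python
import numpy as np

# Lines g(t) = a + b t and h(s) = c + d s; solve [b, -d] [t; s] = c - a
# in the least-squares sense to get the parameters of the closest points.
a, b = np.array([0., 0., 0.]), np.array([1., 0., 0.])
c, d = np.array([0., 1., 1.]), np.array([0., 1., 0.])

A = np.column_stack([b, -d])                      # 3x2 matrix [b, -d]
(t, s), *_ = np.linalg.lstsq(A, c - a, rcond=None)

p, q = a + t * b, c + s * d                       # closest points on each line
print(p, q, np.linalg.norm(p - q))                # here: distance 1.0
```

(If the lines are parallel, $A$ is rank-deficient and `lstsq` returns the minimum-norm choice of $(t,s)$ among the infinitely many closest pairs.)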
|
H: Weak law of large numbers for epsilon sequence that tends to 0
Assume a $S_n=\sum_{i=1}^n X_i$ fulfills a weak law of large numbers, i.e. for every $\epsilon>0$,
$$P\left(|\tfrac{1}{n}S_n - \mu| > \epsilon\right) \rightarrow 0$$
Does there exist a sequence $(\epsilon_n)_n$ with $\epsilon_n \rightarrow 0$ such that
$$P\left(|\tfrac{1}{n}S_n - \mu| > \epsilon_n\right) \rightarrow 0$$
holds? Or do I need to make further assumptions about the $X_i$s?
AI: Yes, and it really has nothing to do with law of large numbers.
Lemma: Suppose $Y_n$ are random variables with $\mathbb{P}(Y_n>\epsilon)\to 0$ for all $\epsilon>0$, then there exists a sequence $\epsilon_n\downarrow 0$ such that $\mathbb{P}(Y_n>\epsilon_n)\to 0$.
Proof: For every $m\geq 1$, there exists $N_m\in\mathbb{N}$ such that $\mathbb{P}(Y_n>2^{-m})<\frac1m$ for all $n\geq N_m$. WLOG $N_{m+1}>N_m$; set $\epsilon_n=1$ for $n<N_1$ and $\epsilon_{N_m}=\epsilon_{N_m+1}=\dots=\epsilon_{N_{m+1}-1}=2^{-m}$. Then $\epsilon_n\downarrow 0$ and $\mathbb{P}(Y_n>\epsilon_n)<\frac1m$ for all $n\geq N_m$, which gives the desired result.
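For intuition (this is an illustration, not part of the proof): for i.i.d. Bernoulli$(1/2)$ variables, Chebyshev gives $\mathbb{P}(|\tfrac1n S_n-\tfrac12|>\epsilon_n)\le \frac{1}{4n\epsilon_n^2}$, so any $\epsilon_n\to 0$ shrinking more slowly than the $n^{-1/2}$ fluctuation scale works, e.g. $\epsilon_n=n^{-0.45}$:

```python
import numpy as np

# Empirical frequency of |S_n/n - 1/2| > eps_n for eps_n = n^(-0.45);
# Chebyshev bounds it by n^(-0.1)/4, which tends to 0.
rng = np.random.default_rng(0)
for n in (100, 10_000, 1_000_000):
    means = rng.binomial(n, 0.5, size=4000) / n   # 4000 copies of S_n/n
    eps = n ** -0.45
    print(n, np.mean(np.abs(means - 0.5) > eps))
```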
|
H: How to evaluate $\int \frac{\sqrt{1+x}+\sqrt{1-x}}{\sqrt{1+x}-\sqrt{1-x}} dx$?
I am trying to evaluate
$$\int \frac{\sqrt{1+x}+\sqrt{1-x}}{\sqrt{1+x}-\sqrt{1-x}} dx \quad (1)$$
The typical way to confront this kind of integral is to use conjugates, i.e.:
$$\int \frac{\sqrt{1+x}+\sqrt{1-x}}{\sqrt{1+x}-\sqrt{1-x}} dx = $$
$$ \int \left(\frac{\sqrt{1+x}+\sqrt{1-x}}{\sqrt{1+x}-\sqrt{1-x}}\right)\left(\frac{\sqrt{1+x}+\sqrt{1-x}}{\sqrt{1+x}+\sqrt{1-x}}\right)\left(\frac{\sqrt{1+x}-\sqrt{1-x}}{\sqrt{1+x}-\sqrt{1-x}}\right) dx = $$
$$\int \left(\frac{(\sqrt{1+x})^2-(\sqrt{1-x})^2}{(\sqrt{1+x})^2-(\sqrt{1-x})^2}\right)\left(\frac{\sqrt{1+x}-\sqrt{1-x}}{\sqrt{1+x}+\sqrt{1-x}}\right)dx = $$
$$\int 1\cdot\left(\frac{\sqrt{1+x}-\sqrt{1-x}}{\sqrt{1+x}+\sqrt{1-x}}\right)dx $$
That's a dead end.
I also tried other conjugate approaches (only the numerator, only the denominator etc) with no better luck.
Any ideas?
AI: Multiplying by the conjugate is not a dead end. I'm not sure why you multiplied by the conjugate of the numerator and denominator, you can easily evaluate this integral just by multiplying by the conjugate of the denominator:
$$I=\int \left(\frac{\sqrt{1+x}+\sqrt{1-x}}{\sqrt{1+x}-\sqrt{1-x}}\right)\left(\frac{\sqrt{1+x}+\sqrt{1-x}}{\sqrt{1+x}+\sqrt{1-x}}\right) dx$$
$$I=\int \frac{{(1+x)}+(1-x)+2\sqrt{1-x^2}}{2x} dx$$
$$I=\ln{|x|}+\int \frac{\sqrt{1-x^2}}{x} \; dx$$
Let $x=\sin{\theta}$ to evaluate the integral:
$$I=\ln{|x|}+\int \frac{\cos^2{\theta}}{\sin{\theta}} \; d\theta $$
$$I=\ln{|x|}+\int \csc{\theta} \; d\theta - \int \sin{\theta} \; d\theta$$
$$I=\ln{|x|}-\ln{\big | \csc{\theta}+\cot{\theta}\big |}+ \cos{\theta}+C$$
Substitute $\theta$ back for $x$:
$$I=2\ln{|x|}-\ln{\big | \sqrt{1-x^2}+1\big |} + \sqrt{1-x^2}+C$$
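You can sanity-check the antiderivative with sympy (a sketch; differentiating $I$ should recover the integrand on $(-1,1)$):

```python
import sympy as sp

x = sp.symbols('x', real=True)
integrand = (sp.sqrt(1 + x) + sp.sqrt(1 - x)) / (sp.sqrt(1 + x) - sp.sqrt(1 - x))
F = 2*sp.log(sp.Abs(x)) - sp.log(sp.sqrt(1 - x**2) + 1) + sp.sqrt(1 - x**2)
dF = sp.diff(F, x)
for v in (0.3, -0.5, 0.9):
    print(v, float(dF.subs(x, v)), float(integrand.subs(x, v)))  # columns match
```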
|
H: How to prove $\int_{S^{n-1}}f(x\cdot \omega)\,d\omega=\int_{S^{n-1}}f(-x\cdot \omega)\,d\omega$
Let $f:\mathbb{R}^{n}\rightarrow \mathbb{C}$ be a continuous function.
I Believe that
$$\int_{S^{n-1}}f(x\cdot \omega)\,d\omega=\int_{S^{n-1}}f(-x\cdot \omega)\,d\omega.\qquad \qquad (1)$$
The reason is $-\omega$ is the unit vector antipodal to $\omega$, and then when we integrate on $\mathbb{S}^{n-1}$ we are summing the same integrand over the same surface.
How can we show this rigorously ?
Well, we change variables $\bar{\omega}=-\omega$, and then the Jacobian is $(-1)^n$. But we do not worry about the minus sign since we can reorient the sphere in the "correct" direction and obtain (1).
How can we write down the math?
AI: The measure $d\omega$ is surface area measure on $S^{n-1}.$ This measure is rotation invariant. I.e., if $T$ is a rotation (aka orthogonal transformation), then
$$\int_Sf(\omega)\,d\omega = \int_S f(T(\omega))\,d\omega$$
for all such $T.$ Since $T(z)=-z$ is orthogonal (it is a genuine rotation only when $n$ is even, but surface measure is invariant under all of $O(n)$, reflections included), we get the result.
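A Monte Carlo check on $S^2$ (a sketch; uniform points on the sphere come from normalized Gaussians) illustrates the invariance:

```python
import numpy as np

# Both averages estimate the same surface integral, since w and -w have the
# same (uniform) distribution on the sphere.
rng = np.random.default_rng(1)
w = rng.normal(size=(200_000, 3))
w /= np.linalg.norm(w, axis=1, keepdims=True)
x = np.array([0.3, -1.2, 0.7])
f = lambda t: np.exp(np.sin(t))              # any continuous test function
print(f(w @ x).mean(), f(-(w @ x)).mean())   # agree up to Monte Carlo error
```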
|
H: A question about cardinality
Let $A$ be a subgroup of $\mathbb{R}$.
Assume $|\mathbb{R}\setminus A|=\aleph_0$.
Is it true that for every $a,b\in\mathbb{R}$, $a<b$
$(a,b] \cap A \neq \varnothing$ ?
I think it's true, but I'm not really sure. Can anyone shed light on this?
AI: Since $|\mathbb{R}\setminus A|=\aleph_0$, the set $\mathbb{R}\setminus A$ is countable. (Note that the subgroup structure of $A$ is not even needed.)
Suppose there were some $a,b\in\mathbb{R}$ with $a<b$ such that $(a,b] \cap A = \varnothing$. Then $\mathbb{R}\setminus A$ would contain the interval $(a,b]$, and hence uncountably many elements of $\mathbb{R}$, which is impossible since $\mathbb{R}\setminus A$ is countable.
So the assumption is wrong. Hence for every $a,b\in\mathbb{R}$ with $a<b$,
$(a,b] \cap A \neq \varnothing$.
|
H: denominator with standard part 0
(i) Why do we have to simplify the numerator and denominator, rather than just substituting the standard part $st(c)$ right away, since it is given as $4$?
(ii) Also what does this mean (more specifically, what is the idea behind the bold part):
However, since c $\neq$ 4 the fraction is defined, and it can be simplified by factoring the numerator and denominator
AI: (i) If you try to apply standard part rules directly you run into erroneous equations. For example, if you distribute the standard part map over the fraction you get
$$
st\big(\frac{c^{2}+2c-24}{c^{2}-16}\big)=\frac{st(c^{2}+2c-24)}{st(c^{2}-16)}\tag{1}
$$
Or maybe you try to do all possible distributions at once and get
$$
st\big(\frac{c^{2}+2c-24}{c^{2}-16}\big)=\frac{st(c)^{2}+2st(c)-24}{st(c)^{2}-16}\tag{2}
$$
But since $st(c)=4$, we get zero in the denominators on the right-hand sides of both (1) and (2). So this doesn't make sense, and the approach fails.
(ii) On the other hand, since $c^{2}\neq 16$, the nonstandard fraction
$$
\frac{c^{2}+2c-24}{c^{2}-16}\tag{3}
$$
is a well-defined nonstandard number. This is what they mean by saying the fraction is defined. They go on to simplify $(3)$ to
$$
\frac{c+6}{c+4}\tag{4}
$$
Now the "direct approach" to evaluating the standard part (that we tried and failed to do in (i)) will work because the standard part of the denominator in $(4)$ is nonzero.
An analogy: You can liken this to certain limits in which one gets rid of $\frac{0}{0}$ by canceling common factors. For example consider
$$
\lim_{x\to 4} \frac{x^{2}+2x-24}{x^{2}-16}
$$
If you try to "plug in $4$" you get $\frac{0}{0}$. This is the analogy of the failed attempt in (i).
On the other hand we can factor and simplify:
$$
\lim_{x\to 4}\frac{x^{2}+2x-24}{x^{2}-16} = \lim_{x\to 4}\frac{(x+6)(x-4)}{(x+4)(x-4)} = \lim_{x\to 4}\frac{x+6}{x+4}=\frac{10}{8}.
$$
In the last step, plugging in $4$ causes no problems. This is the analogy of the successful approach in (ii).
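The limit analogy is easy to verify with sympy (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
expr = (x**2 + 2*x - 24) / (x**2 - 16)
print(sp.cancel(expr))       # (x + 6)/(x + 4), the simplified form
print(sp.limit(expr, x, 4))  # 5/4, i.e. 10/8
```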
|
H: How to get the point at which a line passes through another line perpendicular to it
I have two line segments that are perpendicular to each other and intersect. I want to know the point of intersection, given one endpoint of one segment and both endpoints of the other. I also know that the other endpoint of the first segment lies on the second segment (and therefore is the point of intersection).
AI: Let $\mathbf{b},\,\mathbf{c}$ be the known endpoints of one segment and $\mathbf{a}$ be the known endpoint of the other segment.
For a point $\mathbf{x}$ on the first segment we have $\mathbf{x}=t\mathbf{b}+(1-t)\mathbf{c}$ (see here, point 3, for why).
And we have $$(\mathbf{x}-\mathbf{a})\cdot (\mathbf{b}-\mathbf{c})=0$$
$$(t\mathbf{b}+(1-t)\mathbf{c}-\mathbf{a})\cdot (\mathbf{b}-\mathbf{c})=0$$
$$t(\mathbf{b}-\mathbf{c})^2+(\mathbf{c}-\mathbf{a})\cdot (\mathbf{b}-\mathbf{c})=0$$
$$t=
\frac{(\mathbf{a}-\mathbf{c})\cdot (\mathbf{b}-\mathbf{c})}{
(\mathbf{b}-\mathbf{c})^2}$$
And thus $$\mathbf{x}=\mathbf{c}+
\frac{(\mathbf{a}-\mathbf{c})\cdot (\mathbf{b}-\mathbf{c})}{
(\mathbf{b}-\mathbf{c})^2}(\mathbf{b}-\mathbf{c})$$
Okay, here is the geometric intuition behind the answer, in case working with bare position vectors doesn't feel natural:
$\overrightarrow{CX}=\mathbf{x}-\mathbf{c}$ is the unit vector in the direction of $\overrightarrow{CB}=\mathbf{b}-\mathbf{c}$ (i.e. $\frac{\mathbf{b}-\mathbf{c}}{|\mathbf{b}-\mathbf{c}|}$) multiplied by the length of the projection of $\overrightarrow{CA}$ onto $\overrightarrow{CB}$.
The length of the projection is $$|\overrightarrow{CA}|\cos\angle ACB=
|\overrightarrow{CA}|
\frac{\overrightarrow{CA}\cdot\overrightarrow{CB}}{
|\overrightarrow{CA}|\cdot|\overrightarrow{CB}|}=
\frac{\overrightarrow{CA}\cdot\overrightarrow{CB}}{
|\overrightarrow{CB}|}$$ so the overall result is
$$\mathbf{x}-\mathbf{c}=
\frac{(\mathbf{a}-\mathbf{c})\cdot(\mathbf{b}-\mathbf{c})}{|\mathbf{b}-\mathbf{c}|}
\cdot
\frac{\mathbf{b}-\mathbf{c}}{|\mathbf{b}-\mathbf{c}|}$$
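A direct translation into code (a sketch; `foot_of_perpendicular` below is a hypothetical helper implementing the formula):

```python
import numpy as np

def foot_of_perpendicular(a, b, c):
    """Point x on the line through b and c with (x - a) perpendicular to (b - c)."""
    bc = b - c
    t = np.dot(a - c, bc) / np.dot(bc, bc)
    return c + t * bc

a = np.array([1.0, 2.0])
b, c = np.array([0.0, 0.0]), np.array([4.0, 0.0])
print(foot_of_perpendicular(a, b, c))  # [1. 0.], straight below a
```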
|
H: what is the negation of $\exists^{=1}x\phi(x)$?
I need to understand the negation of counting quantifiers, so in my understanding negation of $\exists^{=n}x \phi(x)$ should be $\exists^{<n}x \neg \phi(x) \lor \exists^{>n}x \neg \phi(x) $. Hence, the negation for $\exists^{=1}x\phi(x)$ is given as follows ?
$$\exists^{<1}x \neg \phi(x) \lor \exists^{>1}x \neg \phi(x)$$
Which can be written as follows ?
$$\phi(x) \lor \exists^{>1} x \neg \phi(x)$$
Am I doing it right ?
AI: It is a good idea to say it out loud:
$\exists ^{=1} x\phi(x)$ means "there is exactly one $x$ such that $\phi(x)$". So the negation is that "there is not exactly one $x$ such that $\phi(x)$". In other words, either there are less than one $x$ such that $\phi(x)$ or there are more than one $x$ such that $\phi(x)$. So your answer (the first formulation) is nearly right, except that you have incorrectly negated $\phi(x)$.
The same ideas apply to any $n$.
As for the second formulation, $\exists^{<1}x \phi(x)$ is the same as $\neg \exists x \phi(x)$, which is the same as $\forall x \neg\phi(x)$. Sometimes people write $P(x)$ to mean the same as $\forall x P(x)$, but I think it would be confusing in this context.
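Over a finite domain you can check the corrected negation mechanically (a sketch):

```python
# "Exactly one x with phi(x)" fails precisely when there are fewer than one
# or more than one witnesses -- phi itself is not negated.
def count(domain, phi):
    return sum(1 for x in domain if phi(x))

domain = range(10)
for phi in (lambda x: x == 7, lambda x: x % 3 == 0, lambda x: x > 100):
    c = count(domain, phi)
    assert (not c == 1) == (c < 1 or c > 1)
print("checked on three predicates")
```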
|
H: if $f(\frac{x+y}{2}) =\frac{f(x)+f(y)}{2}$ then find $|f(2)|$
If the functional equation $$f\bigg(\frac{x+y}{2}\bigg) =\frac{f(x)+f(y)}{2}$$ holds for all real $x$ and $y$, and $f'(0)$ exists and equals $-1$, find $|f(2)|$.
My work
I tried to find $f'(x)$ so that i can intregrate it and then find $f(x)$
$f'(x)=\lim_{h \to 0} \frac{f(x+h)-f(x)}{h}$
which gave me this equation:
$f'(x)=\lim_{h \to 0} \frac{f(2x)-2f(x)+1+f(2h)-1}{2h}$
I could not proceed further
and the answer is $2$
AI: Replace $(x,y)$ by $(2x,2h)$ and by $(2x,0)$ in the functional equation to get $f(x+h)=\frac{f(2x)+f(2h)}{2}$ and $f(x)=\frac{f(2x)+f(0)}{2}$, so $f(x+h)-f(x)=\frac{f(2h)-f(0)}{2}$. Hence
$$f'(x)=\lim_{h \to 0}\frac{f(2h)-f(0)}{2h}=f'(0)=-1 \quad \text{for every } x.$$
Integrating, $f(x)=f(0)-x$, so $f(2)=f(0)-2$.
(Also, sorry for the mistakes in the question. I'm new, so this all was a bit hard and I made silly mistakes; the intended problem also gives $f(0)=0$, which is what forces the stated answer.)
With $f(0)=0$ we get $f(2)=-2$, and so $|f(2)|=2$.
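A quick numerical confirmation that $f(x)=-x$ fits all the conditions (a sketch):

```python
import random

# f(x) = -x satisfies Jensen's equation, f'(0) = -1, f(0) = 0, |f(2)| = 2.
f = lambda x: -x
for _ in range(5):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(f((x + y) / 2) - (f(x) + f(y)) / 2) < 1e-12
print(abs(f(2)))  # 2.0
```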
|
H: Proof that $f\big(f(x)\big)$ exists, where $f(x)=\frac{4x+3}{6x-4}$
$$f(x)=\frac{4x+3}{6x-4}$$
$\operatorname {Dom}f= \Bbb{R}\setminus\left\{\frac23\right\}$
$\operatorname {Ran}f=\Bbb R$
Since $\operatorname {Ran} f$ is not a subset of $\operatorname {Dom} f$,
therefore, $f\circ f$ does not exist.
Where have I gone wrong?
AI: Note that the image of $f$ is not the whole of $\mathbb{R}$. There is no value of $x$ such that $f(x)=\frac 2 3$.
However, this does not prevent $f(f(x))$ existing for all values of $x$ apart from $x= \frac 2 3$. Indeed, for all other values of $x$ we have $f(f(x))=x$.
The fact that the domain and the image of $f$ happen to be the same in this case has nothing to do with the existence of $f(f(x))$. Consider
$g(x)=\frac 1 {x+1}$
The domain of $g$ is $\mathbb{R} \backslash \{ -1 \}$; the image of $g$ is $\mathbb{R} \backslash \{0\}$; and
$g(g(x)) = \frac {x+1}{x+2}$
with domain $\mathbb{R} \backslash \{-1,-2 \}$.
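A symbolic check with sympy (a sketch) confirms both claims:

```python
import sympy as sp

x = sp.symbols('x')
f = lambda t: (4*t + 3) / (6*t - 4)
print(sp.cancel(f(f(x))))                            # x, so f is an involution
print(sp.solve(sp.Eq(f(x), sp.Rational(2, 3)), x))   # [], 2/3 is never attained
```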
|
H: Eigenvalues of a matrix containing an unknown matrix
How to find eigenvalues of a symmetric matrix
$$B = \begin{bmatrix}
2\mathrm{I_{m}} & A^\intercal \\\\ A & 0
\end{bmatrix}$$
without knowing anything about $A$ besides that $A \in \mathbb{R}^{n \times m}$ ?
Finding the determinant of
$$B - \lambda \mathrm{I} = \begin{bmatrix}
(2-\lambda) \mathrm{I_{m}} & A^\intercal \\\\
A & -\lambda \mathrm{I_n}
\end{bmatrix}$$
analytically seems infeasible, as it's an $( n+m ) \times ( n+m )$ matrix with possibly large $n+m$. Is this correct, or is there something special that would make it doable? I tried to proceed with matrices as scalars, i.e. apply the Leibniz formula anyway, which gave $(\lambda^2 - 2\lambda)\mathrm{I_m}=A^{T}A$, but I'm pretty sure this approach is wrong.
Can you think of any other way to obtain the eigenvalues of $B$, without finding the determinant ?
edit: $B$ may be invertible or not (I guess it depends on $A$), both cases are relevant for me.
AI: Suppose $\begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$ is an eigenvector with eigenvalue $\lambda$. Then
\begin{align} 2 v_1 + A^T v_2 &= \lambda v_1 \\
A v_1 &= \lambda v_2
\end{align}
from which you conclude that
$$ A^T A v_1 = \lambda ^2 v_1 - 2 \lambda v_1 .$$
So computing the eigenvalues of $B$ boils down to computing the eigenvalues of $A^T A$, and this is equivalent to finding the singular values of $A$. Thus there is no short cut.
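A numerical illustration (a sketch; it assumes $A$ has full column rank, and the extra $n-m$ zero eigenvalues come from eigenvectors with $v_1=0$ and $v_2\in\ker A^T$, a case the derivation above skips):

```python
import numpy as np

# Eigenvalues of B should be 1 +/- sqrt(1 + mu) over eigenvalues mu of A^T A,
# plus n - m zeros from ker(A^T).
rng = np.random.default_rng(2)
n, m = 4, 3
A = rng.normal(size=(n, m))
B = np.block([[2 * np.eye(m), A.T],
              [A, np.zeros((n, n))]])

mus = np.linalg.eigvalsh(A.T @ A)
pred = np.concatenate([1 + np.sqrt(1 + mus), 1 - np.sqrt(1 + mus),
                       np.zeros(n - m)])
print(np.sort(np.linalg.eigvalsh(B)))
print(np.sort(pred))                     # same list, up to rounding
```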
|
H: Does Lindeberg's condition imply $s_n \to \infty$?
Lindeberg's theorem states that if we have a sequence of independent random variables $X_j: \Omega \to \mathbb{R}$ with zero mean, variance $\sigma_j^2$, and distribution $\alpha_j$, and we define $s_n^2 = \sigma_1^2 + \dots + \sigma_n^2$, then the distribution of $\frac{X_1+\dots +X_n}{s_n}$ converges weakly to the standard normal ${\bf if}$ for all $\varepsilon>0$,
$$
\lim_{n\to \infty} \frac{1}{s_n^2}\sum\limits_{i=1}^n \int\limits_{|x| > \varepsilon s_n} x^2 d\alpha_i(x) = 0. \qquad (\star)
$$
If $\phi_i$ is the characteristic function of some $X_i$, the proofs I see use some order of the Taylor expansion for $\log\phi_i\left(\frac{t}{s_n}\right)$ where $|t|<T$. I guess this tacitly assumes that $s_n$ diverges? However, I could not verify whether $(\star)$ implies this condition. Is this easy to see?
AI: Assume $s_n \to s$ with $s \in (0,\infty)$.
To start, suppose $\sigma_j>0$ for all $j$. Now look at $\varepsilon=\frac{1}{2} \frac{\sigma_1}{s}$. Since $s_n \leq s$, we have $\varepsilon s_n \leq \frac{1}{2} \sigma_1$, so the first integral in the sum satisfies $$\int_{|x| > \varepsilon s_n} x^2\, d\alpha_1(x) \geq \int_{|x| > \sigma_1/2} x^2\, d\alpha_1(x) \geq \sigma_1^2 - \left(\tfrac{\sigma_1}{2}\right)^2 = \tfrac{3}{4}\sigma_1^2 > 0,$$ i.e. it remains bounded away from zero, and there is no exploding factor in the denominator to send the whole thing to zero (since $1/s_n^2 \to 1/s^2$). So the Lindeberg condition will fail.
If $\sigma_1=0$, replace $\sigma_1$ in the above with whichever positive $\sigma_k$ you like.
If instead $s_n \to 0$ then $\sigma_j \equiv 0$, in which case the conclusion of the CLT doesn't hold (but this case is trivial, of course).
|
H: Let $f$ be measurable then prove a certain set is measurable.
Question: Suppose $f:\mathbb{R}\rightarrow\mathbb{R}$ is measurable. Prove $\{(x,y)\in\mathbb{R^2}:f(x)\geq f(y)\}$ is measurable.
My thoughts: I was hoping that there would be a way to "pull back" into an open set, since open sets in $\mathbb{R}$ are measurable. I was trying to just use the definition of a measurable function, but I am just getting thrown off by the $f(x)\geq f(y)$ bit, and how that affects everything. Also, I was wondering if you could provide any insight as to dealing with problems like this. In particular, if given a measurable function $f$, what is a good way of going about dealing with questions regarding a set with elements depending on $f$...
Any thoughts, suggestions, etc. are greatly appreciated! Thank you!
AI: We have that the function $\phi:\mathbb{R}^2\to \mathbb{R}$, where $\phi(x,y)=x-y$, is measurable since it is continuous. So is the function $F(x,y)=(f(x),f(y))$ (from $\mathbb{R}^2$ to $\mathbb{R}^2$), since $f$ is measurable. Then $\phi\circ F:\mathbb{R}^2\to \mathbb{R}$ is measurable. Now, \begin{align}
\{(x,y)\in\mathbb{R^2}:f(x)\geq f(y)\}
&=
\{(x,y)\in\mathbb{R^2}:f(x)-f(y)\geq 0\}
\\&=
\{(x,y)\in\mathbb{R^2}: \ \phi(f(x),f(y))\geq 0 \}
\\&=
\{(x,y)\in\mathbb{R^2}:\ \phi\circ F(x,y)\geq0\}
\\&=
(\phi\circ F)^{-1}\big([0,+\infty)\big)
\end{align}
and thus it is a measurable subset of $\mathbb{R}^2$.
|
H: If $\frac{1}{1+a}+\frac{1}{1+b}+\frac{1}{1+c}\le 1$, prove that $(1+a^2)(1+b^2)(1+c^2)\ge 125$.
QUESTION: Let $a,b,c$ be positive real numbers such that
$$\cfrac{1}{1+a}+\cfrac{1}{1+b}+\cfrac{1}{1+c}\le 1$$
Prove that $$(1+a^2)(1+b^2)(1+c^2)\ge 125$$ When does equality hold?
MY APPROACH: Firstly, let's try to squeeze out all the information we can from what is given. $$\frac{(1+b)(1+c)+(1+a)(1+c)+(1+a)(1+b)}{(1+a)(1+b)(1+c)}≤1$$ Multiplying this out, we get $$3+2(a+b+c)+(ab+bc+ca)≤1+(a+b+c)+(ab+bc+ca)+abc$$ $$\implies 2+(a+b+c)≤abc$$ Also, since $$1≥\sum_{cyc}\frac{1}{1+a}$$ Therefore by AM-GM,
$$1≥\sum_{cyc}\frac{1}{1+a}≥\frac{3}{\sqrt[3]{(1+a)(1+b)(1+c)}}$$
$$\implies (1+a)(1+b)(1+c)≥27$$
That's all I ended up in.. At first, I thought Hölder's inequality could be employed, but that too requires the sum of the powers to be $=1$.. and that is not going to be useful in $(1+a^2)(1+b^2)(1+c^2)$ , since here the sum of powers add up to $3$..
I don't know what to do next.. Any help will be much appreciated..
AI: As $$abc\geq a+b+c+2\overset{AM-GM}{\geq} 4\sqrt[4]{2abc}\implies abc\geq 8$$Now,$$\prod_{cyc} (1+a^2)=\prod_{cyc} \left(1+\frac{a^2} 4+\frac{a^2} 4+\frac{a^2} 4+\frac{a^2} 4\right)\overset{AM-GM}{\geq}\prod_{cyc} 5\times \left(\frac{a}{2}\right)^{8/5}=125\times(abc/8)^{8/5}\geq 125$$
Equality holds throughout iff $a=b=c=2$: then $\frac1{1+a}+\frac1{1+b}+\frac1{1+c}=1$, each AM-GM step is tight (the four terms $a,b,c,2$ are equal, and $1=a^2/4$), and $(1+a^2)^3=125$.
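A trivial spot check of the equality case (a sketch):

```python
# At a = b = c = 2 the constraint is tight and the product is exactly 125.
a = b = c = 2
print(1/(1+a) + 1/(1+b) + 1/(1+c))   # 1.0
print((1+a*a) * (1+b*b) * (1+c*c))   # 125
```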
|
H: Tricolorations of a flag
Suppose that there are 6 colors of the rainbow: red, orange, yellow, green, blue, and purple, in that order (indigo is not included). Chan wants to design a vertical tricolor (a flag with three equal vertical stripes of three different colors) for a club. If Chan uses only the 6 rainbow colors and the three colors must appear in red-to-purple rainbow order from left to right, then how many different tricolors are possible?
I know that the first section of the flag has 4 choices: red, orange, yellow, and green, because the 2 sections after it need to come after the color in rainbow order. So, the second section has 4-1 = 3 choices, and the third section has 3-1= 2 choices. But, I'm not sure if I am finding the right number of choices for sections 1 and 2 of the flag.
AI: Assuming we want three different colors, choosing which three of the six colors are used completely determines the flag since they must be arranged in red-to-purple rainbow order. For instance, choosing red, yellow, and green means the tricolor will be red, yellow, green, in that order. Hence, there are
$$\binom{6}{3}$$
possible tricolor flags.
|
H: prove that : $\sum_{j=1}^{n} P(A_{j}) = E\{\sum_{j=1}^{n}(I_{A_{j}}-P(A_{j})) I_{\bigcup_{i=1}^{n} A_{i}}\}+\sum_{j=1}^{n} P(A_{j})(1-\beta) $
let $A_1, A_2, \dots,A_n$ be probability events.
Set :
$$\beta=1-P\left(\bigcup_{k=1}^{n} A_{k}\right)$$
assume that $\beta > 0$ and prove that :
$$\sum_{j=1}^{n} P\left(A_{j}\right) = E\left\{\sum_{j=1}^{n}\left(I_{A_{j}}-P\left(A_{j}\right)\right) I_{\bigcup_{i=1}^{n} A_{i}}\right\}+\sum_{j=1}^{n} P\left(A_{j}\right)(1-\beta) $$
where $I$ denotes the indicator function.
I feel like I need to use the property $\sum_{j=1}^{n} P\left(A_{j}\right)=\sum_{j=1}^{n} P\left\{A_{j}, \bigcup_{i=1}^{n} A_{i}\right\}$ and then use some conditioning, but I don't know how to proceed. Any help will be greatly appreciated, thanks!
AI: Let $A = \cup_{i=1}^n A_i$ and note that $A_i \subset A$ for each $i = 1, ..., n$, so $I_{A_i} I_A = I_{A_i}$. Also, $1 - \beta = P(A)$. No conditioning needed.
\begin{align*}
&\quad E\left[\sum_{j=1}^n (I_{A_j} - P(A_j))I_A)\right] + \sum_{j=1}^n P(A_j)(1-\beta)\\
&=\sum_{j=1}^n E[I_{A_j}I_A - P(A_j)I_A)] + P(A)\sum_{j=1}^n P(A_j)\\
&=\sum_{j=1}^n [P(A_j) - P(A_j)P(A)] + P(A)\sum_{j=1}^n P(A_j)\\
&= (1-P(A)) \sum_{j=1}^nP(A_j) + P(A)\sum_{j=1}^n P(A_j) \\
&= \sum_{j=1}^n P(A_j)
\end{align*}
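A Monte Carlo check (a sketch with three arbitrary overlapping events): once the probabilities are replaced by sample means, the two sides agree to machine precision, because the key identity $I_{A_j}I_A = I_{A_j}$ holds pointwise.

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.random(1_000_000)
events = [u < 0.3, (u > 0.2) & (u < 0.5), u > 0.9]  # A_1, A_2, A_3
IA = np.logical_or.reduce(events)                    # indicator of the union
p = [e.mean() for e in events]                       # sample P(A_j)
beta = 1 - IA.mean()                                 # note beta > 0 here
lhs = sum(p)
rhs = np.mean(sum((e - pj) * IA for e, pj in zip(events, p))) + sum(p) * (1 - beta)
print(lhs, rhs)
```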
|