H: Prove $\lim_{n\to \infty}\int\limits_0^1 \sin(x^n)dx=0$
Prove $$\lim_{n\to \infty}\int\limits_0^1 \sin(x^n)dx=0$$
I found that $\sin(x^n)$ converges uniformly to $f(x)=0$ on $[0, a]$ for every $0<a<1$.
Therefore I know that for every $0<a<1$, $\lim_{n\to \infty}\int\limits_0^a \sin(x^n)dx = \int\limits_0^a 0dx=0$
But how do I show it is true for $\int_0^1$?
I know there are some advanced techniques which allow swapping the order of limits which could solve this but most of them I probably did not study yet, so if you use such techniques please specify. Most basic solution would be best.
AI: You do not need any convergence theorem. Just use the triangle inequality and the fact that $|\sin(t)|\leq |t|$ to obtain the inequalities $0\leq\left|\int_0^1\sin\left(x^n\right)\space\text{d}x\right|\leq\int_0^1x^n\space\text{d}x$. The integral on the right yields $\frac{1}{n+1}$. Now take $n\to \infty$ and squeeze.
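As a numerical sanity check (a sketch in Python, assuming numpy and scipy are available), the integral indeed stays between $0$ and $\frac{1}{n+1}$:

```python
import numpy as np
from scipy.integrate import quad

for n in [1, 5, 25, 125]:
    val, _ = quad(lambda x: np.sin(x**n), 0, 1)     # integrate sin(x^n) over [0, 1]
    print(n, round(val, 6), round(1 / (n + 1), 6))  # val <= 1/(n+1) -> 0
```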
|
H: Metric space where every bounded sequence is contained in a compact
Let $(X,d)$ be a metric space such that, for every bounded sequence $(x_n)$ taking values in $X$, there exists a compact set $K\subseteq X$ such that $x_n \in K$ for all $n$. Do such spaces have a particular name?
This includes the case of metric spaces with the Heine--Borel property, i.e., spaces where every closed bounded set is compact. In our case the closure of the image $\{x_n:n \ge 1\}$ is closed and bounded, hence compact. However, is the above class strictly larger than the latter?
AI: The two classes are the same. Suppose every bounded sequence is contained in some compact set. Let $K$ be closed and bounded. Then any sequence in $K$ is contained in some compact set and this implies that there is a convergent subsequence. But then $K $ is sequentially compact which implies that it is compact.
|
H: Chi-squared distribution question
Here is a problem that I just cannot figure out how to answer:
$Z_1, Z_2, \cdots, Z_7$ are all independent standard normal random variables, i.e. $N(0,1)$. What is the following probability?
$$
P\left(Z_1^2 + Z_2^2 + Z_3^2 \leq 4.943\,(Z_4^2 + Z_5^2 + Z_6^2 + Z_7^2)\right)
$$
AI: Expanding on my comment: since a $\chi_\nu^2$ distribution has PDF $\frac{1}{2^{\nu/2}\Gamma(\nu/2)}x^{\nu/2-1}\exp\frac{-x}{2}$ for $x\ge0$, the PDF of $X:=A/B$ with $A:=\sum_{i=1}^3Z_i^2,\,B:=\sum_{i=4}^7Z_i^2$ is$$\frac{x^{1/2}}{4\sqrt{2\pi}}\int_0^\infty b^{5/2}\exp\frac{-b(1+x)}{2}db=\frac{15}{4}x^{1/2}(1+x)^{-7/2}$$for $x\ge0$. With $x=\tan^2t$, you can verify as a sanity check that this PDF integrates to $1$:$$\int_0^\infty x^{1/2}(1+x)^{-7/2}dx=\int_0^{\pi/2}2\sin^2t\cos^3tdt=\operatorname{B}(3/2,\,2)=\frac{\Gamma(3/2)\Gamma(2)}{\Gamma(7/2)}=\frac{4}{15}.$$Finally,$$\int_0^{4.943}\frac{15}{4}x^{1/2}(1+x)^{-7/2}dx\approx0.949991.$$
Edit: as @kimchilover notes, this is equivalent to an $F(3,\,4)$-distributed variable being $\le4.943\times\frac43$ (since $F=(A/3)/(B/4)=\tfrac43\,A/B$), so an $F$-distribution table gives the probability as $\approx0.95$.
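Both routes can be checked numerically (a sketch assuming scipy is available):

```python
from scipy import stats
from scipy.integrate import quad

# Integrate the derived density (15/4) x^(1/2) (1+x)^(-7/2) up to 4.943:
p_direct, _ = quad(lambda x: 3.75 * x**0.5 * (1 + x) ** -3.5, 0, 4.943)
# F formulation: A/B <= 4.943  <=>  F = (A/3)/(B/4) <= 4.943 * 4/3
p_f = stats.f.cdf(4.943 * 4 / 3, 3, 4)
print(p_direct, p_f)  # both ~ 0.95
```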
|
H: Prove that field $\mathbb{Z}[x]/(x,3)$ is isomorphic to field $\mathbb{Z}/3\mathbb{Z}$.
I don't realize what to do. Could you give me some hints or point to solution?
I tried to construct surjective homomorphism with $Ker(\phi)=(x, 3)$ by intuition to use First Isomorphism Theorem for Rings, but couldn't find such.
AI: Hint: You know such a homomorphism should send $x$ to $0$ (because $x$ is in the kernel), and $1$ to $1$ (because $1$ is not in the kernel, and there is no other way to make $\phi(1)^2=\phi(1)$). There aren't many choices left after that.
|
H: Variance of Poisson distribution multiplied by a constant
Assume I have a Poisson distribution $x=Poisson(X)$. It is clear that the mean and the variance will be equal to X.
In the case of $B \cdot \mathrm{Poisson}(X)$, with $B$ a constant, I would like to see a small proof showing why the variance is $B^2\cdot X$ and the mean is $B\cdot X$.
Many thanks!! :)
AI: I'm a bit confused by your notation but there's nothing unique about the Poisson in this regard.
Say you have a random variable X, with mean E(X) and variance V(X)
Then let $Y=cX$, for some constant c.
$\displaystyle{E(Y)=\sum_{y} y P(Y=y)=\sum_{x} cx P(cX=cx)=\sum_{x} cx P(X=x)= c\sum_{x} x P(X=x)=cE(X)}$
Then $V(Y) = E(Y^2) - (E(Y))^2 = E(c^2X^2) - c^2(E(X))^2 = c^2(E(X^2)-(E(X))^2) = c^2V(X)$
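A small simulation illustrating both rules (Python with numpy assumed; the values of $\lambda$ and $B$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, B = 3.0, 2.5
y = B * rng.poisson(lam, size=1_000_000)  # B * Poisson(lambda)
print(y.mean(), B * lam)    # mean ~ B * lambda
print(y.var(), B**2 * lam)  # variance ~ B^2 * lambda
```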
|
H: Area of a trapezium inscribed in a circle?
A circle, having center at $(2, 3)$ and radius $6$, crosses $y$-axis at the points $P$ and $Q$. The straight line with equation $x= 1$ intersects the radii $CP$ and $CQ$ at points $R$ and $S$ respectively. Find the area of the trapezium $PQSR$.
I am getting stuck finding the lengths of the parallel sides.
As area of a trapezium is $A=\frac{a+b}{2} \cdot h$
here $a = SR$, $b = PQ$.
How to find $h$?
AI: Using the Pythagorean theorem you can find $PQ$.
$$PQ = 2\cdot\sqrt{r^2 - d^2},$$ where $r$ is the radius of the circle and $d = HC$ is the distance from the center $C$ to the $y$-axis ($H$ being the foot of the perpendicular from $C$ to $PQ$).
$$RS = \frac{PQ}{2},$$ because the triangles $PCQ$ and $RCS$ are similar and $PC = 2RC.$
So you can plug in the values and find the area of the trapezoid by using $A = h\cdot\frac{PQ + RS}{2}$.
$$r = 6, \quad d = HC = 2, \quad h =\frac{HC}{2} = 1, \quad PH = 4\sqrt 2, \quad PQ = 8\sqrt 2, \quad RS = 4\sqrt 2,$$
$$A = 6\sqrt 2$$
|
H: Maximum number of possible intersections between tangent line and function $x^3$
I'm struggling with a problem but can't find a way how to solve it:
Calculate maximum number of possible intersections between tangent line of function $f(x) = x^3$ and function $f(x) = x^3$ where $x \in \mathbb{R}$.
I know I should use derivatives but I don't have a clue how.
AI: Pick a point $x_0$, and let $g(x)=x_0^3+3x_0^2(x-x_0)$ be the expression for the tangent of $f$ at $x_0$ (the actual expression for $g$ isn't that important).
We want the number of roots of the cubic equation $f(x)-g(x)=0$. We know $x=x_0$ is a root, because a tangent intersects the graph it is tangent to. However, we actually know that this is at least a double root, because the tangent is a tangent (said with derivatives, $x_0$ is a root of $(f(x)-g(x))'=0$ as well)
Since we have a cubic equation, there can be at most three solutions. And since at least two of them are at $x_0$, there is at most room for one more.
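The factorization can be checked symbolically (a sketch assuming sympy is available): $f(x)-g(x)=(x-x_0)^2(x+2x_0)$, a double root at $x_0$ plus at most one further intersection at $x=-2x_0$, so the maximum number of intersection points is $2$ (attained whenever $x_0\neq 0$).

```python
import sympy as sp

x, x0 = sp.symbols('x x0')
f = x**3
g = x0**3 + 3 * x0**2 * (x - x0)  # tangent line to x^3 at x0
print(sp.factor(f - g))           # (x - x0)**2 * (x + 2*x0)
```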
|
H: Determine whether series converge.
Determine if the series $\Sigma^{\infty}_{n=1} \ln\left(1+\frac{(-1)^n}{n} \right)$ converges.
$$\Sigma^{\infty}_{n=1} \ln\left(1+\frac{(-1)^n}{n} \right) = \Sigma^{\infty}_{n=1} (\ln(n +(-1)^n) - \ln n ) \\ S_n = \ln(0) - \ln(1) + \ln(3) - \ln(2) + \ln(2) - \ln(3) + \ln(5) - \ln(4) + ... +\ln(n+(-1)^n) - \ln n $$
Looks like all terms cancel out except $\ln(n + (-1)^n)$. So then $\lim_{n \to \infty} S_n = \infty \rightarrow$ the series diverges, but I don't know what I can do with the term $\ln(0)$ and whether I can manipulate the series like that. So any advice would be welcome!
AI: If the sum starts at $n=2$ then it converges to $0$; the $n=1$ term is $\ln(0)$, which is undefined, so the series as written cannot start at $n=1$.
Let us evaluate the partial sum from $n=2$ to $n=2m+1$, pairing each even index $2k$ with the odd index $2k+1$ that follows it:
$$\sum_{k=1}^{m}\left(\ln\left(1+\frac{1}{2k}\right)+\ln\left(1-\frac{1}{2k+1}\right)\right)=\sum_{k=1}^{m}\left(\ln\frac{2k+1}{2k}+\ln\frac{2k}{2k+1}\right)=0,$$
so the partial sum up to any odd index is exactly $0$. If instead we stop at the even index $n=2m$, the final odd term is missing and the telescoping leaves $S_{2m}=\ln\frac{2m+1}{2m}$, which tends to $\ln(1)=0$ as $m\to\infty$.
Hence both subsequences $S_{2m}$ and $S_{2m+1}$ of the sequence of partial sums $S_{n}=\sum_{r=2}^{n} \ln(1+\frac{(-1)^{r}}{r})$ converge to $0$, and so the sequence of partial sums converges; we can say that the series converges to $0$.
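A quick numeric check of both subsequences of partial sums (Python, starting the sum at $n=2$):

```python
import math

s, partials = 0.0, []
for n in range(2, 12):
    s += math.log(1 + (-1) ** n / n)
    partials.append(round(s, 6))
print(partials)  # even cutoffs give ln((n+1)/n) -> 0, odd cutoffs give exactly 0
```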
|
H: If $X$ is a $2\times 3$ matrix such that $|XX'|\not =0$ and $A=I-X(X'X)^{-1}X'$, then find $A^2$
$$A=I-X(X^{-1}X'^{-1})X'$$
$$A=I-I=O$$
So $A^2=O$
But the correct answer is $A$. What am I doing wrong?
AI: $A^2=(I-X(X'X)^{-1}X')(I-X(X'X)^{-1}X') =$
$ I^2-IX(X'X)^{-1}X'-X(X'X)^{-1}X'I+X(X'X)^{-1}X'X(X'X)^{-1}X'=$
$I-2X(X'X)^{-1}X'+X(X'X)^{-1}X'X(X'X)^{-1}X'=$
$I-2X(X'X)^{-1}X'+X(X'X)^{-1}X'=I-X(X'X)^{-1}X'=A$.
Note that $X'X(X'X)^{-1} = I$
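A numeric check of the idempotency (Python with numpy assumed). Note that for $(X'X)^{-1}$ to exist, $X$ must have full column rank, e.g. a $3\times 2$ matrix as below; the $2\times 3$ shape in the question would call for $XX'$ instead.

```python
import numpy as np

X = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, 1.0]])  # 3x2, full column rank
A = np.eye(3) - X @ np.linalg.inv(X.T @ X) @ X.T
print(np.allclose(A @ A, A))  # True: A is idempotent
```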
|
H: Why can't we have these alternative cross product relationships?
I am currently studying Introduction to Electrodynamics, fourth edition, by David J. Griffiths. Chapter 1.1.2 Vector Algebra: Component Form says the following:
$$\mathbf{\hat{x}} \times \mathbf{\hat{x}} = \mathbf{\hat{y}} \times \mathbf{\hat{y}} = \mathbf{\hat{z}} \times \mathbf{\hat{z}} = \mathbf{0},$$
$$\mathbf{\hat{x}} \times \mathbf{\hat{y}} = - \mathbf{\hat{y}} \times \mathbf{\hat{x}} = \mathbf{\hat{z}},$$
$$\mathbf{\hat{y}} \times \mathbf{\hat{z}} = -\mathbf{\hat{z}} \times \mathbf{\hat{y}} = \mathbf{\hat{x}},$$
$$\mathbf{\hat{z}} \times \mathbf{\hat{x}} = -\mathbf{\hat{x}} \times \mathbf{\hat{z}} = \mathbf{\hat{y}}.$$
These signs pertain to a right-handed coordinate system ($x$-axis out of the page, $y$-axis to the right, $z$-axis up, or any rotated version thereof). In a left-handed system ($z$-axis down), the signs would be reversed: $\mathbf{\hat{x}} \times \mathbf{\hat{y}} = - \mathbf{\hat{z}}$, and so on. We shall use right-handed systems exclusively.
I don't understand why we can't also have that
$$\mathbf{\hat{x}} \times \mathbf{\hat{y}} = \mathbf{\hat{y}} \times - \mathbf{\hat{x}} = \mathbf{\hat{z}},$$
$$\mathbf{\hat{y}} \times \mathbf{\hat{z}} = \mathbf{\hat{z}} \times - \mathbf{\hat{y}} = \mathbf{\hat{x}},$$
$$\mathbf{\hat{z}} \times \mathbf{\hat{x}} = \mathbf{\hat{x}} \times - \mathbf{\hat{z}} = \mathbf{\hat{y}}.$$
I would greatly appreciate it if people would please take the time to explain this.
AI: We do also have those "alternative cross product relationships" as well.
The cross-product is compatible with scalar multiplication
(in particular, multiplication by $-1$),
so $\mathbf{-\hat y}\times\mathbf{\hat x}=\mathbf{\hat y}\times -\mathbf {\hat x}$, for example.
|
H: $\mathbb Z [x]$ module structure on $\mathbb Z$
For each integer $n \in \mathbb Z$, define the ring homomorphism
$$φ_n :\mathbb Z [x]\to \mathbb Z, \ \ φ_n(f)=f(n).$$
This provides a $\mathbb Z[x]$-module structure on $\mathbb Z$ given by $$f ◦ a = f(n) · a$$ for all $f \in \mathbb Z[x]$ and $a ∈ \mathbb Z$. Given two integers $m, n \in \mathbb Z$, how can we compute the tensor product $$\mathbb Z ⊗_{\mathbb Z[x]} \mathbb Z?$$ Here the left and right modules are respectively determined by the homomorphisms $φ_n$ and $φ_m$.
I believe that the answer is related to integers $n$ and $m$ but I cannot constitute a rigorous argument. Any suggestions?
AI: Let $\mathbb Z_n$ denote $\mathbb Z$ with the module structure given by $\phi_n$.
Note that, since $x\cdot(m-n) = m\cdot(m-n) \in (m-n)\mathbb Z$, the submodule $(m-n)\mathbb Z_m$ of $\mathbb Z_m$ is identical to the ideal $(m-n)\mathbb Z$ of $\mathbb Z$.
Regard the following map:
$$\mathbb Z_m\times \mathbb Z_n \to \mathbb Z_m/(m-n)\mathbb Z_m$$
$$(a, b) \mapsto ab+(m-n)\mathbb Z_m$$
This is bilinear with respect to $\mathbb Z[x]$ because for $a\in\mathbb Z_m$ and $b\in\mathbb Z_n$:
$$(xa)b+(m-n)\mathbb Z_m = mab+(m-n)\mathbb Z_m = x(ab)+(m-n)\mathbb Z_m$$
$$a(xb)+(m-n)\mathbb Z_m = nab+(m-n)\mathbb Z_m = mab+(m-n)\mathbb Z_m = x(ab)+(m-n)\mathbb Z_m$$
Therefore, one gets an induced homomorphism of $\mathbb Z[x]$-modules:
$$f:\mathbb Z_m\otimes_{\mathbb Z[x]} \mathbb Z_n \to \mathbb Z_m/(m-n)\mathbb Z_m$$
Obviously, this is surjective. Let $\alpha\in\ker f$. Then, there is $a\in\mathbb Z$ such that $\alpha = a(1\otimes 1)$.
Then, $0 = f(\alpha) = a+(m-n)\mathbb Z_m$, which means that $a = k(m-n)$ for some $k\in\mathbb Z$. Therefore:
$$\alpha = k(m-n)(1\otimes 1) = (km\otimes 1)-(k\otimes n) = ((xk)\otimes 1)-(k\otimes (x1)) = 0$$
Thus $f$ is injective. In conclusion:
$$\mathbb Z_m\otimes_{\mathbb Z[x]} \mathbb Z_n \cong \mathbb Z_m/(m-n)\mathbb Z_m$$
|
H: What is the probability of equal number of Heads when coin is tossed large number of times (say 10000)
As per basic probability theory, the probability of getting an equal number of heads and tails when a coin is tossed $10000$ times is
= $\frac{\binom{n}{r}}{2^n} = \frac{\binom{10000}{5000}}{2^{10000}}$
This value is not equal to or close to $0.5$
But if one does the same experiment simulation, then probability will be close to $0.5$ (with $10000$ tosses)
Can you explain why is there a difference in probability ?
AI: What is close to $1/2$ in a simulation is the observed fraction of heads, not the probability of exactly equal counts; the distinction is one of relative error. Since
$$
\frac{\binom{n}{n/2}}{2^n}\approx \sqrt{\frac{2}{\pi n}}
$$
by the Stirling approximation, the probability of hitting exactly $n/2$ heads goes to $0$ (about $0.008$ for $n=10000$), while the probability that the number of heads falls within $O(\sqrt{n})$ of $n/2$, i.e. within a vanishing relative error of the exact $1/2$ proportion, goes to $1$ as $n$ increases.
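A quick simulation sketch of this distinction (Python with numpy assumed; the thresholds are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 10_000, 20_000
heads = rng.binomial(n, 0.5, size=trials)       # 20k experiments of 10k tosses
print(np.mean(heads == n // 2))                 # ~ 0.008: exact equality is rare
print(np.mean(np.abs(heads / n - 0.5) < 0.01))  # ~ 0.95: the fraction is near 1/2
```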
|
H: Cosine of a standard normal distribution
Let $X \sim \mathcal{N}(0,1) $ have a standard normal distribution. How do I compute
$$\mathbb{E}[\cos(X)] \text{ ?}$$
I know that $$\mathbb{E}[\sin(X)] = 0$$
because $$f(x)=\sin(x) \frac{e^{-x^2/2}}{\sqrt{2 \pi}}$$
is an odd function. Can I use this fact to compute $\mathbb{E}[\cos(X)]$?
AI: Since the characteristic function of a standard normal is $\Bbb Ee^{itX}=e^{-t^2/2}$, taking real parts at $t=1$ gives $\Bbb E\cos X=\Re\left.\Bbb Ee^{itX}\right|_{t=1}=e^{-1/2}$.
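A Monte Carlo sanity check (Python with numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)
print(np.cos(x).mean(), np.exp(-0.5))  # both ~ 0.6065
```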
|
H: Prove: $R$ is local $\iff$ $R$ has exactly one maximal ideal.
We have that a commutative ring $R$ with $1$ is called local if $R − R^×$ is an ideal of $R$. I have to prove the following:
$R$ is local $\iff$ $R$ has exactly one maximal ideal.
We have that every ideal $I \subseteq R$ is contained in a unique maximal ideal. $R$ is local, so the only maximal ideal is $M$. So $I \subseteq M$. The homomorphism $\phi: R \rightarrow R/I$ tells me that the ideals of $R/I$ are all contained in $\phi(M)$. $\phi(M)$ is a maximal ideal in $R/I$, so it is also the unique maximal ideal in $R/I$, since any other maximal ideal is contained in it.
So $R/I$ has a unique maximal ideal $\phi(M)$ and hence it is local.
I am not sure if this is a complete proof of the above statement. I would be grateful for any help.
AI: Assume $R$ is local and let $M=R\setminus R^\times$. By assumption $M$ is an ideal. It is also maximal because any ideal which properly contains it must contain invertible elements and hence is all of $R$. Now let $I\subseteq R$ be any proper ideal. Then all the elements of $I$ must be non-invertible and hence $I\subseteq M$. So this shows any proper ideal is contained in $M$, hence it must be the only maximal ideal of $R$.
Second direction: suppose there is exactly one maximal ideal and let's call it $M$. We will show that $M=R\setminus R^\times$. Obviously $M\subseteq R\setminus R^\times$. Now suppose there is a non-invertible element $y\in R\setminus R^\times$ such that $y\notin M$. Then define:
$I=\{ry: r\in R\}$
It is easy to see that $I$ is an ideal of $R$. It is also a proper ideal because $1\notin I$ (because $y$ is not invertible). Then $I$ is contained in some maximal ideal. Since by assumption $M$ is the only maximal ideal we must have $I\subseteq M$. But this means $y\in M$, a contradiction. Hence we must have $M=R\setminus R^\times$, so $R$ is indeed local.
|
H: When can we use the Weierstrass M-test and when not?
For $x \in \mathbb{R}$ consider the series
$$
S = \sum_{n=1}^\infty \frac{2x}{x^2+n^2}
$$
Then I have to show that $S$ converges uniformly. I have been told on here before that we are not able to use the Weierstrass M-test when the convergent series we find depends on $x$. But how does this change if, for example, we let $x \in [-K,K]$ where $0< K < \infty$? I am studying for my analysis exam and I have some answers for questions like this where the professor has provided the following answer:
Let $x \in [-K,K]$ where $0 < K < \infty$. Then we have
$$
\left| \frac{2x}{x^2+n^2} \right| \leq \frac{2|x|}{n^2} \leq \frac{2K}{n^2}
$$
where $\sum_{n=1}^{\infty} \frac{2K}{n^2}$ is well known to be convergent. It follows from the Weierstrass M-test that $S$ converges uniformly on $[-K,K]$ where $0 < K < \infty$.
But I simply don't see how this differs from just letting $x \in \mathbb{R}$ and getting
$$
\left| \frac{2x}{x^2+n^2} \right| \leq \frac{2|x|}{n^2}
$$
Why doesn't the Weierstrass M-test work in this instance? Is it because $\mathbb{R}$ is not a compact set?
AI: To conclude from the M-test that $\sum f_n(x)$ is uniformly convergent you have to produce a sequence of positive constants $c_n$ such that $\sum c_n <\infty$ and $|f_n(x)| \leq c_n$ for all $n$ and all $x$. In the present case $\frac {2 |x|} {n^{2}}$ does not admit a bound of this type because there is no bound for $|x|$. Hence the bound $\frac {2 |x|} {n^{2}}$ does not yield the conclusion.
EDIT: For uniform convergence on $[-K,K]$ your inequality is good enough.
|
H: Vanishing of cohomology of affine scheme
In EGA I 5.1, more specifically the proof of 5.1.9, which states that $X$ is affine iff the closed subscheme defined by a quasi-coherent sheaf of ideals $\mathscr{I}$ such that $\mathscr{I}^n = 0$ for some $n$ is also affine, it is proved in a nice way that the first cohomology of any quasi-coherent sheaf on an affine scheme vanishes, and I feel like the same idea can be used to prove the general statement about the vanishing of higher cohomology. I have seen no text where it is exposed this way, which makes me wonder whether it is correct or not.
Let $X$ be an affine scheme (say, $X = \mathrm{Spec}(A)$) and $\mathscr{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
By the results proved in EGA I section 1.4, there is actually an equivalence of categories between quasi-coherent $\mathcal{O_X}$-modules and $A$-modules (unlike Hartshorne's, EGA proves this without proving the vanishing of $H^1$, even though it is implied by what follows), and this equivalence is also an equivalence of abelian categories (it respects kernel, cokernel...). This equivalence is induced by the global section functor.
(Edit: the mistake might be here; EGA only shows all of this for the $\tilde{}$ functor, and not for $\Gamma$, so the vanishing of at least $H^1$ would be needed to get a full equivalence of abelian categories. In that case, the following would show that, at least, the vanishing of $H^i$ follows formally from that of $H^1$, which would seem reasonable then.)
The "global section" functor $\mathscr{F} \mapsto \mathrm{\Gamma}(F, X)$ is naturally isomorphic to the functor $\mathrm{Hom}_{\mathcal{O}_X-\mathrm{mod}}(\mathcal{O}_X, - )$ (can be shown with the previous equivalence and the fact that $\mathrm{Hom}_{A-\mathrm{mod}}(A, B) \cong B$.)
So its $i$-th derived functor must be naturally isomorphic to $Ext^i_{\mathcal{O}_X}(\mathcal{O}_X, \mathscr{F})$.
Using the equivalence of abelian categories above (which must respect the Ext functor as an equivalence of abelian categories!), the latter is isomorphic to $Ext^i_{A}(A, \mathscr{F}(X))$. But the latter is zero: since $A$ is a free $A$-module, the Ext must vanish, and we're done.
I have repeatedly heard that vanishing of cohomology in the affine case is non-trivial and that it is a big fact/theorem. In Hartshorne it looks like (I haven't read chapter III in much detail yet) it is proved in another way, with extra finiteness hypotheses, and he claims that the general case is harder; yet I feel like the method above exhibits it just as a categorical trick and some homological algebra on modules. So I think I am missing something in the above, but I can't see what. So:
Is the above correct?
If not, where is the (possibly stupid) mistake I made?
Edit:
I just realized there is possibly a problem when I identify $Ext^i_{A}(A, \mathscr{F}(X))$ with $Ext^i_{\mathcal{O}_X}(\mathcal{O}_X, \mathscr{F})$ by arguing that equivalent abelian categories should have equivalent Ext. This seems intuitive, since the construction of Ext can be made purely "abelian-categorical", yet I feel like when I would flesh out the details, I would end up with a spectral sequence that would degenerate iff the higher derived functors of $\Gamma$ are $0$, which is actually what I am looking for. I am quite unsure at this point.
Another thing I am thinking is that we may use the equivalence of abelian category to prove that actually $\mathcal{O}_X$ is projective in the category of quasi-coherent $\mathcal{O}_X$-modules on $X$, using the equivalence of abelian categories. And then we could finish $Ext^i_{\mathcal{O}_X}(\mathcal{O}_X, \mathscr{F}) = 0$ the same way.
Showing $\mathcal{O}_X$ is projective would not be so hard, since it is $\tilde{A}$ and $A$ is projective; this would be finished by transposing a lifting problem from sheaves to $A$-modules (as per the above edit, transporting the lifting problem would require at least the vanishing of $H^1$ since it involves epimorphisms).
AI: There are several mistakes in your arguments, and also true facts. I'll try to list them all:
1. $\Gamma(X,.)\simeq\operatorname{Hom}_{\mathcal{O}_X-\mathrm{mod}}(\mathcal{O}_X,.)$. This is true and does NOT require the equivalence on affine schemes. In fact, this natural isomorphism holds in any context, on any ringed space (even ringed topos).
2. $H^1(X,.)=0$ so $H^i(X,.)=0$ for all $i>0$. This is true, and a general fact in homological algebra. If $F$ is a left exact functor with $R^1F=0$, then by the long exact sequence you see that $F$ is actually exact. It follows that it has no higher derived functors.
3. If $F:\mathcal{A}\to\mathcal{B}$ is an equivalence of categories between abelian categories, then $F$ preserves $\operatorname{Ext}$. This is true and easy to prove using that an equivalence of categories is exact and preserves injectives.
4. It seems you make a confusion between $QCoh(X)$ and $\mathcal{O}_X-\mathrm{mod}$. We have an equivalence of categories $QCoh(\operatorname{Spec}A)\simeq A-\mathrm{mod}$ but these categories are NOT equivalent to $\mathcal{O}_{\operatorname{Spec}A}-\mathrm{mod}$.
5. A very subtle point in sheaf cohomology: $H^i(X,.)$ is the derived functor of $\Gamma(X,.):\mathcal{O}_X-\mathrm{mod}\to\mathcal{A}b$. It can be shown that it is also the derived functor of $\Gamma(X,.):\mathcal{A}b(X)\to\mathcal{A}b$ (where $\mathcal{A}b(X)$ is the category of abelian sheaves on $X$). But it is NOT the derived functor of $\Gamma(X,.):QCoh(X)\to\mathcal{A}b$. So this is confusing, since the three functors are denoted the same way and the inclusion functor $QCoh(X)\to\mathcal{O}_X-\mathrm{mod}$ is never written. (Note that the inclusion functor is exact! But it may not preserve injectives nor send them to flabby sheaves.)
Using 4. and 5.: on an affine scheme $X=\operatorname{Spec}A$, you have an equivalence of categories $QCoh(X)\simeq A$-mod (and not $\mathcal{O}_X$-mod). The equivalence is given by $M\mapsto \tilde{M}$ in one direction and by $\mathcal{F}\mapsto \Gamma(X,\mathcal{F})$ in the other. This implies that $\Gamma(X,.)$ is exact (and so has no higher derived functor) as a functor $QCoh(X)\to\mathcal{A}b$ (and not $\mathcal{O}_X$-mod). Unfortunately, this (vanishing) derived functor is not the sheaf cohomology.
To expand a bit on 5., see this post : Godement Resolution don't see $\mathcal{O}_X$-module structure
|
H: $R$ be local and suppose $x ∈ R$ satisfies $x^2 = x$
If we let $R$ be local and suppose $x ∈ R$ satisfies $x^2 = x$, then I have to show that $x = 0$ or $x = 1$.
A commutative ring $R$ with $1$ is called local if $R − R^×$ is an ideal of $R$.
If we let $x, y ∈ R$, then I was thinking of expanding the equation $(x + y)^2 = x + y$. But I am not sure how to proceed from here, or whether this is even the right way to tackle this problem.
I would be grateful for any help.
AI: Rearranging, we have
$$0= x^2 - x = x(x-1)$$
If $x\neq 0,1$, then $x$ and $x-1$ are both nonzero zero divisors, and in particular non-units. Since $R$ is local, they must both be in the unique maximal ideal. But that means
$$x - (x-1) = 1$$
is in that maximal ideal, so the maximal ideal isn't proper, a contradiction.
So it is only possible that $x=0$ or $x=1$, which of course both satisfy that relation.
|
H: Converting base 2 to Base 10?
How is $0.11 \cdot 2^{1}$ converted to $1.5$ in base 10? I am unable to understand the theory behind this, since I believe $0.11 \cdot 2^{1}$ corresponds to $1.1$, but I don't see the conversion process.
AI: $0.11\cdot 2^1$ does indeed correspond to $1.1$. In binary. Which is to say, it means
$$
1\cdot 2^0 + 1\cdot 2^{-1}
$$
as per standard positional number system conventions. Writing this out in base ten does get you $1.5$.
Usually, when we write numbers in different bases close to one another, there is potential for such a misunderstanding. In those cases, it's common to see a subscript denoting which base we're using (this subscript uses base ten). So with that, the above paragraph would look like:
$0.11_2\cdot 2^1$ does indeed correspond to $1.1_2$. In binary. Which is to say, it means
$$
1\cdot 2^0 + 1\cdot 2^{-1}
$$
as per standard positional number system conventions. Writing this out in base ten does get you $1.5_{10}$.
|
H: Finding discrete logarithm of composite numbers
I started to learn the discrete logarithm. The definition says: suppose that $p$ is a prime number, $r$ is a primitive root modulo $p$, and $a$ is an integer between $1$ and $p-1$ inclusive. If $r^e \equiv a \pmod p$ and $e$ is in $[0,p-1]$, then $e$ is the discrete logarithm of $a$ modulo $p$ to the base $r$.
However, when I searched on Google, I encountered a discrete logarithm calculator. When I entered inputs into it, it gave unexpected results. For example, I wrote $3^x \equiv 7 \pmod{10}$ and it returned $x = 3$. This was true, but there was a contradiction with the definition because $10$ is not a prime number. Another example is $3^x \equiv 5 \pmod{11}$; the contradiction with the definition is that $3$ is not a primitive root (generator) of $11$. However, the calculator answered it correctly with $x = 3$.
My question is: is there any deficiency in my definition, or is the calculator wrong? Moreover, if my definition is correct, why did the calculator answer the questions correctly? And can composite numbers have discrete logs?
AI: The basic definition is that if $r^e \equiv a \mod n$, $e$ is a discrete logarithm of $a$ mod $n$ to base $r$. The theory is simpler if $n$ is a prime and $r$ is a primitive root mod $n$, but there is no reason not to consider the problem when $n$ is composite or $r$ is not a primitive root. Of course existence may not be guaranteed.
|
H: Pairing, Hahn-Banach theorem
I'd like to solve the following problem for part (ii). $X^*$ denotes the space of bounded linear functionals on the normed vector space $X$, and $\left<\ ,\right>$ stands for the pairing between a space and its dual. I intend to proceed using part (i); thus, I consider two cases separately. On the one hand, $x$ may come from $X\setminus \bar{Y}$. In this case, we use part (i) to find $x^*\in Y^\perp$ s.t. $||x^*||=1$ and $x^*(x)=\inf_{y\in Y}||x-y||$. But how does the absolute value of the pairing attain its maximum? On the other hand, $x$ may be included in the closure of $Y$.
AI: In the case $x \not \in \overline{Y}$, you have found an $x_0^* \in Y^\perp$ such that $\inf_{y \in Y} \|x - y\| = x_0^*(x)$. If you can show that
$$\sup_{x^* \in Y^\perp, \|x^*\| = 1} |\langle x, x^* \rangle | \leq \inf_{y \in Y} \|x - y\|$$
then it will follow that the $\sup$ is in fact a $\max$ which is attained at $x_0^*$. This is straightforward since for $x^* \in Y^\perp$ with $\|x^*\| = 1$ and $y \in Y$,
$$|\langle x, x^* \rangle | = | \langle x - y, x^* \rangle | \leq \|x - y\|.$$
Since $y$ was arbitrary, taking the $\inf$ then gives the result.
It remains to deal with the case $x \in \overline{Y}$. In this case $\inf_{y \in Y} \|x - y\| = 0$. So we need to show that for $x^* \in Y^\perp$, $x^*(x) = 0$ also. To do this, pick a sequence $x_n$ in $Y$ such that $x_n \to x$. Then since $x^*$ is continuous,
$$x^*(x) = \lim_{n \to \infty} x^*(x_n) = \lim_{n \to \infty} 0 = 0$$
as desired.
|
H: About non-normal nilpotent subgroups
Suppose that all non-normal abelian subgroups of a finite group $G$ are cyclic. What can I say about non-normal nilpotent subgroups of $G$?
Is it true that such subgroups are cyclic?
I appreciate your help.
AI: The finite $p$-groups all of whose non-normal abelian subgroups are cyclic have been classified; see here.
I would think that among these $p$-groups we should find a counterexample.
What about the generalized quaternion group $Q_{32}$, where we know that all abelian subgroups are cyclic, and the nilpotent subgroup $Q_8$ is not normal and not cyclic? See here.
|
H: Show that $\lnot\exists x\in A P(x) \equiv\forall x\in A\lnot P(x)$
I need to show that: $$\lnot\exists x\in A P(x) \equiv\forall x\in A\lnot P(x)$$
when I start from LHS, should I write(1): $$\lnot\exists x (x\in A \to P(x))$$
or(2): $$\lnot\exists x (x\in A \land P(x))$$
because after simplification in first case I get: $$\forall x(x\in A \land\lnot P(x))$$
whereas in the second case I get: $$\forall x(x\in A \to\lnot P(x))$$
I'm confused, because I know that in general: $$(p\to q) \equiv \lnot(p\land\lnot q)$$
therefore I don't know whether assuming (1) or (2) is right.
Could you please clarify that for me?
AI: The logical forms of the statements $\exists \, x \!\in\! A \, P$ and $\forall \,x \!\in\! A \, P$ are respectively:
\begin{align}
\exists x \, (x \!\in\! A \land P) \qquad \text{and} \qquad \forall x (x \!\in\! A \to P).
\end{align}
In your case, then, $\lnot \exists \, x \!\in\! A \, P(x)$ stands for $\lnot \exists x (x \in A \land P(x))$, which by De Morgan is equivalent to $\forall x \,\lnot (x \in A \land P(x))$, which by De Morgan (and the fact that $p \to q \equiv \lnot p \lor q$) is equivalent to $\forall x \, (x \in A \to \lnot P(x))$, which is the logical form of $\forall \, x \!\in\! A \, \lnot P(x)$.
|
H: Finding the $\lim \limits_{x \to 0} {1 - \cos(x)\over \sin(x) \ln(1+x)}$ using Taylor's series.
I am a bit stuck. This is what I have so far and I am not sure how to simplify it further:
$${{x^2\over 2} - o(x^4)\over (x - {x^3 \over 6} + o(x^5))(x - {x^2 \over 2} +o(x^3))} $$
How do I proceed further?
AI: What you did is absolutely correct, and I can start from there: $$\lim_{x\to 0}\frac{1-\cos x}{\sin(x)\ln(1+x)}=\lim_{x\to 0}\frac{\frac{x^2}{2}-\frac{x^4}{24}}{(x-\frac{x^3}{6})(x-\frac{x^2}{2}+\frac{x^3}{3})}.$$ Now take $x^2$ common in both numerator and denominator (one $x$ from each factor in the denominator) and compare the constant coefficients of numerator and denominator; this gives the value of the limit, since all other powers of $x$ tend to zero as $x$ tends to zero. In this case the value of the limit is $$\lim_{x\to 0}\frac{1-\cos x}{\sin(x)\ln(1+x)}=\frac{1}{2}.$$ Hope that helps!
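The limit can also be confirmed symbolically (assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit((1 - sp.cos(x)) / (sp.sin(x) * sp.log(1 + x)), x, 0))  # 1/2
```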
|
H: Showing $a$ is a primitive root modulo $p \Leftrightarrow -a$ is a primitive root modulo $p$ if $p \equiv 1 \ (\mathrm{mod} \ 4)$
Let $p$ be a prime number with $p \equiv 1 \ (\mathrm{mod} \ 4)$ and $a \in \mathbb{N}$ with $p \nmid a$.
How to show that
$a$ is a primitive root modulo $p \Leftrightarrow -a$ is a primitive root modulo $p$?
I tried to:
$\Rightarrow:$ Let $a$ be a primitive root modulo $p$.
Then $\mathrm{ord}_p(a)=\varphi(p)$.
So $a^{\varphi(p)-1} \neq a^{\varphi(p)}=1$.
Since $p \nmid a$, it follows that $a \neq mp$ for all $m \in \mathbb{Z}$.
$p \equiv 1 \ (\mathrm{mod} \ 4) \Rightarrow 4 \vert (p-1)$
Here I don't know how to continue.
How to show this equivalence?
AI: You only need to show the $\Rightarrow$ implication, since $-(-a)=a$.
What you know is that $a^i\ne 1$ for all $1\le i <p-1$, and you want to know the same for $-a$.
But $$(-a)^i=(-1)^ia^i=\begin{cases} a^i & \text{ if $i$ is even} \\ -(a^i) & \text{ if $i$ is odd} \end{cases}$$
so you only need to be sure that $a^i\ne -1$ for odd $i$. But $a^i=-1$ implies that $a^i$ has order $2$, so $i=\frac{p-1}2$, which is even if $p\equiv 1 \pmod 4$, contradicting that $i$ is odd.
|
H: Function with local extreme value at a point but not monotone in any of its neighborhood
In a question a function is given as-
$f(x)=\begin{cases}x^4(2+\sin(\frac{1}{x})) & \text{if } x\neq 0 \\[2ex] 0 & \text{if } x=0\end{cases}$
In this function, $f'(0)$ comes out to be $0$ (and $f$ has a minimum at $0$), and in every neighborhood of $0$, $f'(x)$ has both positive and negative values, which implies that in every neighborhood of $0$, $f(x)$ is not monotonic.
It is easy to prove.
If $f$ is continuous and has an extreme value at zero, then there should exist a neighborhood on the left in which $f$ decreases monotonically to $f(0)$ and a neighborhood on the right in which $f$ increases monotonically from $f(0)$, or vice versa.
But this example suggests that there does not exist any neighbourhood of $0$ in which the function becomes monotone.
Calculations suggest that $f$ has a minimum at $0$ and $f'(x)$ changes its sign in every neighborhood of $0$. But why does $f$ have a minimum at $0$? It seems somewhat surprising to me.
AI: Maybe a picture will help. Your function does indeed have a minimum at $0$; in fact it is always between $x^4$ and $3 x^4$, but the graph is so wiggly as $x \to 0$ that there are no intervals $(0,\epsilon)$ or $(-\epsilon, 0)$ on which it is monotonic.
|
H: A filter that has no countable basis
Consider a straight line $L$ in the plane $\mathbb{R}^{2}$. The filter of neighborhoods of $L$ in $\mathbb{R}^{2}$ is the filter formed by the sets which contain an open set containing $L$. Prove that there is no basis of this filter which is countable.
First of all, I noticed that this family really defines a filter. Indeed, let
$$\mathscr{F}=\{A \subset \mathbb{R}^2: A \supset U \supset L, U=U^\circ\}$$
$\emptyset \notin \mathscr{F}$ since this set cannot contain an open set containing L.
Obviously the intersection of two sets belonging to the family also belongs to the family $\mathscr{F}$.
If $A \in \mathscr{F}$ then there exists an open subset of $\mathbb{R}^{2}$ such that $A \supset U \supset L$. Thus, if $B \supset A$ the same set $U$ satisfies the desired properties, that is, $B \supset U \supset L.$
Therefore, $\mathscr{F}$ is a filter.
However, I had no idea how to show that this filter has no countable basis.
AI: We can assume $L=\mathbb{R}\times\{0\}$.
Let $(B_n)_{n\in\mathbb{N}}$ be a family of elements of $\mathscr{F}$. For $k\in\mathbb{N}$ and $n\in\mathbb{N}$, there must exist $\epsilon_{nk}>0$ such that $\{k\}\times[-\epsilon_{nk},\epsilon_{nk}]\subset B_n$. One can define a set $A$ which is a member of $\mathscr{F}$, but does not contain $(n,\epsilon_{nn})$ for any $n\in\mathbb{N}$. This set $A$ cannot contain any of the $B_n$.
|
H: The proper ideals of a quotient ring of a Dedekind domain $\mathcal{O}/\mathfrak{p}^{n+1}$
I don't understand what it says in the hint of EXERCISE 5 on page 23 of this pdf.
For example, $\mathcal O = \mathbb Z$ and $\mathfrak a = (4)$.
$\mathbb Z/(4) = \mathbb Z/4\mathbb Z = \{[0],[1],[2],[3]\}$
and $\mathfrak a = (4) = \{ 4n \mid n \in \mathbb Z\} = (2)^2$, the ideal generated by the products $2a \cdot 2b$ with $2a,2b \in (2)$.
So the proper ideal of $\mathbb Z/(4)$ is $(2)/(4)$.
What is $(2)/(4)$?
AI: $(2)/(4)$ is a quotient of abelian groups as well as of $\mathcal O$-modules
(i.e. there is a natural $\mathcal O$-action on the quotient group: for $c\in (2)$, $c+(4)\in (2)/(4)$, $n\in \mathcal O$, we have $n\cdot(c+(4)) = nc+(4)$),
as well as an ideal of the quotient ring $\mathcal O/(4)$
(so $(2)/(4)$ is also an $\mathcal O/(4)$-module).
|
H: $\sqrt {144} = 12$ How can I find it without calculator?
$$12\times 12 = 144$$ If there's a question that asks me to find square roots manually, how will I find that?
I know a way that's really long and boring. I actually know about the "Tree" method. Is there any other interesting and effective way to find square root?
AI: To get the square root of $x$ without calculator use the following steps:
1. Make a rough estimate or guess.
2. Divide $x$ by your estimate.
3. Take the average of the answer from step 2 and your estimate; this will become your new estimate.
(Repeat steps 2 and 3 until you reach the accuracy you want.)
In equation form...where:
$x$ is the number you want to find the square root of,
$a_1$ is your initial estimate,
$a_2$ is your first calculated improved estimate,
$a_3$ is your next calculated improved estimate,
...
\begin{equation}
a_1 = \text{initial estimate or guess} \\
a_2 = \frac{(x/a_1 + a_1) }{2}\\
a_3 = \frac{(x/a_2 + a_2)}{2} \\
a_4 = \frac{(x/a_3 + a_3)}{2} \\
...
\end{equation}
Further, suppose you want to calculate the square root of a positive number $M$.
Assume $x=\sqrt M$; then
$f(x)=x^2-M=0$.
Now use the Newton-Raphson method, which for this equation is given by
\begin{equation}
x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}
\end{equation}
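Note that with $f(x)=x^2-M$ and $f'(x)=2x$, the Newton-Raphson step simplifies to exactly the averaging rule above: $x_{n+1}=\frac{x_n + M/x_n}{2}$. A minimal Python sketch of that iteration (the function name and tolerance are my own choices):

```python
def manual_sqrt(x: float, estimate: float, tol: float = 1e-12) -> float:
    """Average the estimate with x / estimate until it stabilizes.

    The initial estimate must be positive.
    """
    while abs(estimate * estimate - x) > tol:
        estimate = (x / estimate + estimate) / 2
    return estimate

print(manual_sqrt(144, 10))  # 12.0 after a handful of iterations
```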
|
H: Probability of a negative random variable in normal distribution.
It is given that $X \sim \mathcal{N}(1.5,(3.2)^2)$.
Find the probability that a randomly chosen value of $X$ is less than $-2.4$.
Solution:
$Pr(X<-2.4)=1-Pr(X<2.4)=1-Pr\left[Z < \cfrac{2.4-1.5}{3.2}\right]=1-Pr[Z < 0.28]=1-0.61026=0.389$
where $Z \sim \mathcal N(0,1)$
($0.61026$ value is obtained from the z table).
Is my answer right? It is multiple choice question and my answer doesn't match the options. Please correct me if wrong.
AI: The first step of your solution is the problem: $P(X<-2.4)=1-P(X<2.4)$ would require $X$ to be symmetric about $0$, but $X$ has mean $1.5$. Standardizing directly,
$P(X<-2.4)=P(\frac{X-1.5}{3.2} <\frac{-2.4-1.5}{3.2})=P(Z<-1.21875)\cong 1-0.88877 =0.11123$
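Checking with scipy (assumed available):

```python
from scipy.stats import norm

print(norm.cdf(-2.4, loc=1.5, scale=3.2))  # ~ 0.111
```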
|
H: Expectation of Stopping Time for a Brownian Motion with a drift
Let $a,b>0$ and define the stopping time $T_{a,b}$ for Brownian Motion as
$$T_{a,b}:=\inf\{t>0:B(t)=at-b\}$$
Compute $E[T_{a,b}]$.
My idea:
I think $E[T_{a,b}]=\infty$.
If that was not the case, then by Wald´s Lemma
$E[T_{a,b}]=Var(B(T_{a,b}))=Var(aT_{a,b}-b)=a^2Var(T_{a,b})$ for all $b>0$.
But by reflection principle, for any fixed $t>0$:
$$P(T_{a,b}<t)=P(\exists s \le t: B(s)=as-b)\le P(\exists s \le t: B(s)<at-b) = 2\cdot P(B(t)<at-b)$$
and that converges to $0$ as $b \rightarrow -\infty$.
Could that help me conclude $E[T_{a,b}]=\infty$ and is that assumption even correct in the first place?
AI: For fixed $a,b>0$ set $f(t) := at-b$. Clearly, $f(0)=-b<0 = B_0$, and therefore it follows from the continuity of the sample paths of Brownian motion that
$$\mathbb{P}(T_{a,b} \geq t) \leq \mathbb{P}(B_t\geq f(t)).$$
For sufficiently large $t$, we have $f(t)=at-b>0$, and using $B_t \sim N(0,t)$ we find that
$$\mathbb{P}(T_{a,b} \geq t) \leq \sqrt{\frac{t}{2\pi}}\, \frac{1}{(at-b)} \exp \left(- \frac{(at-b)^2}{2t} \right)$$
for large $t$. The right-hand side decays exponentially as $t \to \infty$, and therefore it follows that
$$\int_0^{\infty} t^k \mathbb{P}(T_{a,b} \geq t) \, dt < \infty$$
for all $k \geq 1$. This implies that $\mathbb{E}(T_{a,b}^k)<\infty$ for all $k \geq 1$, i.e. $T_{a,b}$ has finite moments of arbitrary order.
To compute the moments explicitly, you can use the Laplace transform $\mathbb{E}e^{-\lambda T_{a,b}}$ of $T_{a,b}$. First compute the Laplace transform (see this question) and then differentiate with respect to $\lambda$ and let $\lambda \to 0$.
|
H: Difference between a number and a set with one number
I am studying Analysis in Tao's textbook and he mentions the following:
The set $\{3, \{3, 4\}, 4\}$ is a set of three distinct elements, one of which happens to itself be a set of two elements. However, not all objects are sets; for instance, we typically do not consider a natural number such as $3$ to be a set. (The more accurate statement is that natural numbers can be the cardinalities of sets, rather than necessarily being sets themselves)
but my question is wouldn't $\{3\}$ be a set of cardinality one? Is this just a matter of notation (if we include the curly braces then we consider it a set and if not then it's just a natural number)?
AI: $\{ 3 \}$ is indeed a set of cardinality $1$. Whether $3$ is a set depends on your definition of the natural numbers. In some systems they are sets. I don't know what definition Tao uses.
|
H: Sum $\sum_{k=1}^n{k(k-1)}$
I have a series as
$$n(n-1)+(n-1)(n-2)+(n-2)(n-3)+ \ldots +n\ \text{terms}$$
How to provide a summation of this series? I think it would be similar to:
$$\sum_{i=0}^{N}(n-i)(n-(i+1)).$$
AI: This is
$$\sum_{k=1}^n{k(k-1)}=\sum_{k=1}^n{(k^2-k)}=\sum_{k=1}^n k^2-\sum_{k=1}^nk = \frac{n(n+1)(2n+1)}6-\frac{n(n+1)}2 = \frac{(n-1)n(n+1)}{3}$$
|
H: Expected value of different dice roll results given number of rolls
A die is rolled $N$ times where $N\sim \operatorname{Pois}(\lambda)$. $X =$ number of different results in the rolls. For example: $(1,1,6,1)$ has 2 different results and $(1,2,3,4)$ has 4. Calculate $E(X)$
Define an indicator: $$I_i = \begin{cases} 1, & \text{$i$ was rolled at least once out of $n$ rolls} \\ 0, & \text{otherwise} \end{cases}$$
Then: $$X = \sum_{i=1}^{6} I_i$$
$$P(I_i = 1 \mid N=n) = 1 - \left(\frac{5}{6}\right)^n$$
$$E(X) = E(E(X\mid N)) = E\left(E\left(\sum_{i=1}^6 I_i\mid N\right)\right) = E\left(6\left(1-\left(\frac{5}{6}\right)^N\right)\right) \\ = 6-6E\left(\left(\frac{5}{6}\right)^N\right)$$
$$ = 6 - 6\left[\sum_{n=0}^\infty \left(\frac{5}{6}\right)^n\left(\frac{e^{-\lambda} (\lambda)^n}{n!}\right)\right] = 6 - 6\left[\sum_{n=0}^\infty \left(\frac{e^{-\frac{5}{6}\lambda}\left(\frac{5}{6}\lambda\right)^n}{n!} \right) e^{-\frac{1}{6}}\right] = 6-6e^{-\frac{1}{6}} = 0.92$$
where the last line is the sum of all probabilities of a random variable $Y\sim \operatorname{Pois}(\frac{5}{6}\lambda)$, which sums up to $1.$
This is an odd result since it didn't depend on $\lambda$ and it's too small as is.
Where is my mistake?
AI: The problem is that $\mathrm e^{-\lambda}\ne\mathrm e^{-\frac56\lambda}\mathrm e^{-\frac16}$; rather, $\mathrm e^{-\lambda}=\mathrm e^{-\frac56\lambda}\mathrm e^{-\frac16\lambda}$. With that correction, $E\left(\left(\frac56\right)^N\right)=\mathrm e^{-\lambda/6}$, so $E(X)=6-6\mathrm e^{-\lambda/6}$, which does depend on $\lambda$.
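A simulation of the corrected formula (Python with numpy assumed; $\lambda=4$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, trials = 4.0, 100_000
distinct = [len(set(rng.integers(1, 7, size=n))) for n in rng.poisson(lam, trials)]
print(np.mean(distinct), 6 - 6 * np.exp(-lam / 6))  # both ~ 2.92
```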
|
H: Show that $f(x)=\lfloor x\rfloor$ is discontinuous exactly at all $\xi \in \mathbb{Z}$
Hello, I again had a little task: to show what is described in my title.
It would be helpful if someone could again check my reasoning and whether my attempt was successful or not :) and if not, give me some constructive advice :)
Also, I am looking for someone who could more frequently look over my exercises, since I am currently studying mechanical engineering and want to switch to maths, but I can make the switch only in winter, so I am teaching myself maths, and there are never solutions to the tasks in textbooks :(
My attempt:
Let $f:\mathbb{R}\longrightarrow \mathbb{R}:x \mapsto \lfloor x\rfloor$
Let's first choose any $\xi_1,\xi_2 \in \mathbb{Z}$ with $|\xi_2-\xi_1|=1$.
Let $\xi_1 < \xi_2$. We show that $f$ is continuous at every $x_0 \in (\xi_1,\xi_2)$:
$\forall x_0 \in (\xi_1,\xi_2)\,\,\forall \epsilon >0\,\, \exists \delta>0 \,\,\forall x \in (\xi_1,\xi_2):(|x-x_0|<\delta)\Longrightarrow(|f(x)-f(x_0)|<\epsilon)$
We know that $\forall x,y \in(\xi_1,\xi_2):|f(x)-f(y)|=0$
So our $\epsilon$-$\delta$-criterion always holds in the given interval! This tells us that
$f$ is continuous on any open interval between two adjacent integers.
Now we just need to show that at any $\xi \in \mathbb{Z}$ the function is discontinuous.
$\lim\limits_{x \searrow \xi_2}\lfloor x\rfloor=\xi_2$ and $\lim\limits_{x \nearrow \xi_2}\lfloor x\rfloor=\xi_1 \Longrightarrow \lim\limits_{x \searrow \xi_2}\lfloor x\rfloor\ne \lim\limits_{x \nearrow \xi_2}\lfloor x\rfloor \Longrightarrow f$ is discontinuous at any $\xi \in \mathbb{Z}$
AI: You can spare some work, but you seem to have the right ideas.
Let $\zeta\in\mathbb{Z}$. Then, over the interval $(\zeta,\zeta+1)$ the function is constant and its value is $\zeta$. Since the interval is open, the function is continuous over it.
In different words: if $x$ is not an integer, there is a full neighborhood of $x$ where the function is constant, so $f$ is certainly continuous at $x$.
Thus it only remains to establish continuity or not at $\zeta$. For $\zeta<x<\zeta+1$ we have $f(x)=\zeta$; for $\zeta-1<x<\zeta$, we have $f(x)=\zeta-1$.
Therefore
$$
\lim_{x\to\zeta^-}f(x)=\zeta-1\ne\lim_{x\to\zeta^+}f(x)=\zeta
$$
|
H: Which derivative to use for an integral?
Does anyone know of a rule to tell which derivative to use when faced with an integral like this? ($C$ is any constant, e.g. $2400$ or $4$, etc.)
$$\int \:\frac{C}{x}\mathrm{d}x$$
I know that $\ln(x) = \int\frac{1}{x}\mathrm{d}x$ so then $\int \:\frac{C}{x}\mathrm{d}x$ could equal $C\ln(x)$.
I also know that $\frac{1}{x}$ can be $x^{-1}$.
Both the $\ln(x)$ and $x^{-1}$ forms give different answers, so employing the wrong one is a problem. So how do we know which one to use and in what situation?
An example: $\int \frac{2.6}{x}\,\mathrm{d}x$. Which "derivative method" (excuse my lack of proper terminology) would I use if I am trying to get the integral?
AI: If we try to integrate $x^{-1}$ using the power rule it replaces the function with a constant, because $x^0=1$. Since it is a constant, its derivative is not actually $x^{-1}$, so we know it doesn't work. Since $-1$ is the only exponent that doesn't follow the power rule, it's the only special case you have to remember: every $\int \frac{C}{x}\,\mathrm{d}x$ uses the logarithm, so your example gives $\int \frac{2.6}{x}\,\mathrm{d}x = 2.6\ln|x| + \text{constant}$.
|
H: commenting on whether $f'(x)$ is even or odd.
The question is as follows:
Let $f(x)$ be a differentiable function such that for all $x,y \in \Bbb R$, $$f(x-y),f(x),f(y),f(x+y)$$ are in AP; then comment on whether $f'(x)$ is even or odd (given $f(0) \neq 0$).
My approach :
Since the given sequence is an AP, hence we can say that
$$2f(x) = f(x-y)+f(y) \text{ and } 2f(y) = f(x)+f(x+y)$$
replacing $x$ and $y$ in $2f(x) = f(x-y)+f(y)$ and comparing with $2f(y) = f(x)+f(x+y)$ gives
$$f(x+y)=f(x-y)$$ putting $x=0$ gives $f(y)=f(-y)$ making $f'(x)$ an odd function.
But since,
$$2f(x) = f(x-y)+f(y)$$ now partially differentiating with respect to $y$ and putting $y=0$ we get $f'(x)=f'(0)$
Also $$2f(y) = f(x)+f(x+y)$$ now partially differentiating with respect to $y$ and putting $y=0$ we get $f'(x)=2f'(0)$ now this is a clear contradiction to previous equation.
So is $f'(x)$ odd or not? Kindly help me out.
AI: The fact that $f(x-y),f(x),f(y),f(x+y)$ is an Arithmetic Progression implies that $$f(x-y)+f(x+y)=f(x)+f(y)$$
Substituting $y=0$ gives us $$2f(x)=f(x)+f(0)\to f(x)=f(0)$$This tells us that the function is constant, i.e., $f'(x)=0$, so it is even and odd.
Your work is perfectly fine. Your last equation implies that $f'(x)=0$, but I think the proof I've presented is a bit simpler.
|
H: Integral of a function with complex numbers
I'm trying to understand how the Fourier Transform can give the coefficients of epicycloids for any arbitrary closed shape.
Given a "periodic function on the complex plane $f(t)$" (as I understand it, it's a function that returns a complex number for any $t$), one might find the coefficients of epicycloids (phase, frequency, amplitude) up to a a prescribed order $n$ using
$$\frac{1}{2\pi}\int^{2\pi}_{0} f(t) e^{-nti} dt$$
I'm not sure how I should compute that integral. Let's say I'd do it "manually", by splitting the interval $[0,2\pi]$ into fragments of length $\varepsilon$ and summing the areas of the rectangles (their height would be defined by $f(t)$, for which I know the values on that interval). Would that work? Can I simply use the imaginary part of the complex number to get the height of the rectangle?
It's actually the first time I see an integral with complex number, and I'm not sure if approaching it the way I described it makes sense or not.
AI: Fourier coefficients are complex numbers, so you expect to get complex numbers. (It is perhaps helpful to think of these numbers as encoding the magnitude (at all times) and phase (at time $0$) of the corresponding frequency components.)
We use Euler's formula to rewrite the exponential in your integral in terms of sine and cosine.
$$ \mathrm{e}^{\mathrm{i}\theta} = \cos \theta + \mathrm{i}\,\sin \theta \text{,} $$
so (using even-odd properties in the last line)
\begin{align*}
\mathrm{e}^{\mathrm{i} (-nt)}
&= \cos(-nt) + \mathrm{i}\,\sin(-nt) \\
&= \cos(nt) - \mathrm{i} \,\sin(nt) \text{.}
\end{align*}
Now let's rewrite your function to track the real and imaginary parts in separate integrals, turning both integrals into more familiar real integrals.
\begin{align*}
\frac{1}{2\pi} &\int_0^{2\pi} \; f(t) \mathrm{e}^{\mathrm{i} (-nt)} \,\mathrm{d}t \\
&= \frac{1}{2\pi} \int_0^{2\pi} \; f(t) (\cos(nt) - \mathrm{i} \,\sin(nt)) \,\mathrm{d}t \\
&= \frac{1}{2\pi} \int_0^{2\pi} \; f(t) \cos(nt) \,\mathrm{d}t - \frac{\mathrm{i}}{2\pi} \int_0^{2\pi} \; f(t)\sin(nt)) \,\mathrm{d}t \text{.}
\end{align*}
These are the Fourier cosine and sine transforms. If $f$ is real-valued, these two integrands are real valued everywhere, so your thought of partitioning and summing areas of rectangles works. For each $n$ you will get a complex coefficient $u_n + \mathrm{i}\, v_n$. To get the amplitude and phase, convert to the polar representation, $r_n \mathrm{e}^{\mathrm{i}\,\theta_n} = u_n + \mathrm{i}\, v_n$.
For complex-valued $f$, split it into real and imaginary parts:
Let $g = \Re f$ and $h = \Im f$ so that $g$ and $h$ are both real functions and $f(t) = g(t) + \mathrm{i}\, h(t)$. Then
\begin{align*}
\frac{1}{2\pi} &\int_0^{2\pi} f(t) \mathrm{e}^{\mathrm{i}(-nt)} \,\mathrm{d}t \\
&= \frac{1}{2\pi} \int_0^{2\pi} (g(t) + \mathrm{i}\, h(t)) (\cos(nt) - \mathrm{i} \,\sin(nt)) \,\mathrm{d}t \\
&= \frac{1}{2\pi}\left(
\int_0^{2\pi} g(t)\cos(nt) \,\mathrm{d}t
- \mathrm{i}\int_0^{2\pi} g(t)\sin(nt) \,\mathrm{d}t
+ \int_0^{2\pi} h(t)\sin(nt)\,\mathrm{d}t
+ \mathrm{i}\int_0^{2\pi} h(t)\cos(nt) \,\mathrm{d}t
\right) \text{,}
\end{align*}
and each of these four integrands is a real function so your model of finding area under a curve applies to each.
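A minimal sketch of the rectangle-sum idea in Python (numpy assumed; the test curve and sample count are my own choices, and the uniform grid makes the Riemann sum just a mean):

```python
import numpy as np

def fourier_coefficient(samples: np.ndarray, n: int) -> complex:
    """Approximate (1/2pi) * integral of f(t) e^{-i n t} dt by a Riemann sum."""
    t = np.linspace(0, 2 * np.pi, len(samples), endpoint=False)
    return np.mean(samples * np.exp(-1j * n * t))  # mean = sum * dt / (2*pi)

t = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
f = np.cos(t) + 2j * np.sin(t)  # an ellipse traced in the complex plane
for n in (-1, 0, 1):
    c = fourier_coefficient(f, n)
    print(n, round(abs(c), 6), round(float(np.angle(c)), 6))  # r_n and theta_n
```

For this test curve the exact coefficients are $c_1 = 1.5$, $c_{-1} = -0.5$ and $c_0 = 0$, so the sum recovers amplitude $0.5$ with phase $\pi$ at $n=-1$.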
|
H: A vowel is followed by a consonant 80% of the time and a consonant is followed by a vowel 50% of the time. Proportion between cs and vs?
A text is such that the probability that a vowel is followed by a
consonant is 80% and a consonant is followed by a vowel 50% of the
time. What is the proportion of vowels in the text?
I don't know how to answer this without knowing how many letters there are in the text.
I can have vowels in two ways: either a vowel is followed by a vowel, or a consonant is followed by a vowel. The probability of the former is .2 and the probability of the latter is .5. If I do .5 + .2 I get .7, which isn't any of the given options.
The answer is 5/13. Help?
AI: To reiterate what I said in the comments, this problem is about Markov Chains, and more specifically about "Regular" Markov Chains. Full details should be included in the textbook you are using or in your class notes and it is beyond the scope of this site to provide a full and complete overview on a topic like this. Including all necessary definitions and proofs would generally take up a whole chapter in a textbook.
The final results to mention without proof are that given a Regular Markov Chain like this, regardless the initial state we will after enough time approach the "steady state," a scenario where further transitions do not continue to change the state. That is, the proportions of consonants and vowels will have reached a sort of equilibrium point.
That is, given a stochastic matrix $A$ and any initial state vector $v$ we have some steady state vector $\bar{v}$ such that $A\bar{v}=\bar{v} = \lim\limits_{n\to\infty}A^n v$ and further $\lim\limits_{n\to\infty} A^n$ has every column equal to $\bar{v}$.
Finding $\bar{v}$ is a matter of solving the matrix equation $A\bar{v}=\bar{v}$. Subtracting the one side to the other and including the identity matrix so we can factor yields $(A-I)\bar{v}=0$. In other words, $\bar{v}$ is in the kernel of $A-I$, so we can use standard Linear Algebra techniques to continue.
Alternately worded, we can avoid phrasing things in terms of matrices and solve the corresponding system of equations. Here, letting $v$ be the proportion of letters which are vowels and $c$ the proportion of letters which are consonants, we have the following system of equations:
$$\begin{cases} v = .2v + .5c\\c = .8v+.5c\\v+c=1\end{cases}$$
Note how the first two lines were crafted from the given information in the problem statement. Note also how the first two lines both convey the same information... if you were to move things to their own side, both lines are equivalent to $.8v = .5c$. Finally, note the final line that $v+c=1$. This last line comes about since we are discussing proportions or probabilities, or however you wish to describe things.
So, solving this by whatever method you choose, you will arrive at $v=\frac{5}{13}$
By substitution: $.8v=.5c$ gives $c = 1.6v$, and with $v+c=1$ we have $v+1.6v=2.6v=1$, so $v=\frac{1}{2.6}=\frac{10}{26}=\frac{5}{13}$
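Numerically, the steady state is the eigenvector of the (column-stochastic) transition matrix for eigenvalue $1$; a sketch with numpy:

```python
import numpy as np

# Columns: current letter type [vowel, consonant]; rows: next letter type.
A = np.array([[0.2, 0.5],
              [0.8, 0.5]])
w, v = np.linalg.eig(A)
steady = np.real(v[:, np.isclose(w, 1)].ravel())
steady /= steady.sum()  # normalize so the proportions sum to 1
print(steady)  # [0.3846, 0.6154] = [5/13, 8/13]
```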
|
H: Difficulty interpreting high order derivatives in $\mathbb{R}^n$
If $f:U\subseteq\mathbb{R}^m\to \mathbb{R}^n$ is differentiable function then its derivative
$$
f':U\to M_{n\times m}(\mathbb{R})\simeq\mathcal{L}(\mathbb{R}^m;\mathbb{R}^n)
$$
can be seen, for each $x\in U$, as a linear transformation $f'(x):\mathbb{R}^m\to\mathbb{R}^n$.
If $f$ is of class $\mathcal{C}^2$ then its second order derivative is a function
$$
f'':U\to\mathcal{L}(\mathbb{R}^m\times\mathbb{R}^m;\mathbb{R}^n)
$$
that carries each $x\in U$ into a bilinear transformation $f''(x):\mathbb{R}^m\times\mathbb{R}^m\to\mathbb{R}^n$
and inductively if $f$ is of class $\mathcal{C}^k$ its $k$th derivative is a function
$$
f^{(k)}:U\to\mathcal{L}(\mathbb{R}^m\times\cdots\times\mathbb{R}^m;\mathbb{R}^n).
$$
Maybe I'm lacking some linear algebra background, but using the isomorphism $\mathcal{L}_2(\mathbb{R}^m\times\mathbb{R}^m;\mathbb{R}^n)\simeq\mathcal{L}(\mathbb{R}^m;\mathcal{L}(\mathbb{R}^m;\mathbb{R}^n))$ and its correspondent in the $k$-linear case I can see that the $k$th derivative is a $k$-linear map. But I don't see how to relate (in the case of the second derivative, to simplify) the second partial derivatives $\frac{\partial^2 f_i}{\partial x_j\partial x_k}(x)$ to the matrix of $f''(x)$ as a bilinear map. In the case of a bilinear form it would be an $m\times m$ matrix, but that's not the case here.
Can someone please explain me better?
P.S.: I have seen this question but I didn't understand his notation in the last part of the answer (and the first part which answers that question I'm ok with).
AI: Note that in the case that $f$ has codomain $\Bbb R^n$, $f''(x)$ is a bilinear form with codomain $\Bbb R^n$. As such, $f''(x)$ cannot (in its totality) be represented by a matrix. Instead, $f''(x)$ is a third order tensor and would more naturally be presented as $3$-dimensional array.
The $i$th coordinate of the output of $f''(x)$, however, is a bilinear map to $\Bbb R$, which means that it can be represented as a matrix. The matrix corresponding to this bilinear map is the same as the matrix corresponding to $f_i''(x)$, where $f_i$ denotes the $i$th component of $f(x)$. In particular, $f_i''(x)$ is represented by the Hessian matrix of $f_i$.
You can accordingly think of $f''(x)$ as a $3$-dimensional array where each of $n$ $2$-dimensional layers is a Hessian of one of the components $f_i$ of $f$.
Here's a proof that we have $f''(x)(u,v) = \frac{\partial^2 f}{\partial u \partial v}$ for vectors $u,v$. If we go back to the $\mathcal{L}(\mathbb{R}^m;\mathcal{L}(\mathbb{R}^m;\mathbb{R}^n))$ definition, $f''(x)(v)(\cdot)$ is defined such that for $h \in \Bbb R$,
$$
f'(x + hv)(u) = f'(x)(u) + h\,f''(x)(v)(u) + o(h).
$$
On the other hand, we already know that $f'(x)(u) = \frac{\partial f}{\partial u}(x)$. With that, we have
$$
\frac{\partial f}{\partial u}(x + vh) = \frac{\partial f}{\partial u}(x) + h\,f''(x)(v)(u) + o(h) \implies\\
f''(x)(v)(u) = \frac 1h\left(\frac{\partial f}{\partial u}(x + vh) - \frac{\partial f}{\partial u}(x) + o(h)\right).
$$
If we take a limit as $h \to 0$, the conclusion follows.
|
H: Arithmetic of quotients
I was wondering whether, for a ring $R$ and two elements $a,b$, $R/(a,b)$ is isomorphic to $(R/(a))/(b)$ (with the parentheses representing the smallest ideal containing the respective element(s)).
I have seen this trick applied to polynomial rings on multiple occasions.
Thank you
AI: The Third Isomorphism Theorem tells you that for a ring $R$, and ideals $I\leq K\leq R$, you have that $K/I$ is an ideal of $R/I$ and that
$$\frac{R/I}{K/I}\cong \frac{R}{K}.$$
We can apply this to your situation by taking $I=(a)$, $J=(b)$, and $K=I+J=(a,b)$.
Note that the smallest ideal containing $b+(a)$ in $R/(a)$ is in fact the ideal $((b)+(a))/(a)=(a,b)/(a)=K/(a)$. Thus, you are asking whether
$\frac{R/(a)}{K/(a)}$ is isomorphic to $R/K$. And then the Third Isomorphism Theorem tells you immediately that the answer is “yes”.
|
H: Show $2$ is not prime in $\{f\in\mathbb{Q}[x]:f(m)\in\mathbb{Z}\text{ for all }m\in\mathbb{N}\}$
Consider the subring $\text{Int}(\mathbb{Z})=\{f\in\mathbb{Q}[x]:f(m)\in\mathbb{Z}\text{ for all }m\in\mathbb{N}\}$. Show $2$ is irreducible but not prime in $\text{Int}(\mathbb{Z})$.
I was able to show $2$ is irreducible in $\text{Int}(\mathbb{Z})$ by writing $2=fg$ for some non-unit polynomials $f,g\in\text{Int}(\mathbb{Z})$, which must be constant by comparing degrees, and plugging in $1$ to both sides. This gives $2=f(1)g(1)=fg$ where $f,g\in\mathbb{Z}$. Since $2$ is irreducible in $\mathbb{Z}$ and the units of $\mathbb{Z}$ coincide with the units of $\text{Int}(\mathbb{Z})$, we obtain a contradiction. However, I am not sure about showing $2$ is not prime. I need to find two elements of $\text{Int}(\mathbb{Z})$ whose product is divisible by $2$, but neither factor is divisible by $2$. I am sure that these factors will have to be some kind of higher degree polynomials, but I am scratching my head here. One thing that might be helpful is the fact that
$$
\binom{x}{n}=\frac{x(x-1)\cdots(x-n+1)}{n!}\in\text{Int}(\mathbb{Z}),
$$
but I am not sure how this applies here, if at all (it was mentioned in the problem statement). Any help is appreciated.
AI: It's simple enough to see that for all integers $x$, $x(x-1)$ is even. The polynomials $x$ and $x-1$ are not divisible by $2$ in $\text{Int}(\mathbb{Z})$, since each takes odd values at some integers. However, the product $x(x-1)$ is divisible by $2$ in $\text{Int}(\mathbb{Z})$, since
$$x(x-1)=2\binom {x}2$$
Intuitively, you can factor $2$ out of a polynomial that always takes even values on integers, because it will result in a polynomial that takes integer values on integers. Its constituents in the product need not always take on even values though, as long as at least one is even for each integer.
|
H: Show that $h(t) =\int_0^t \int_0^s f(x) dxds$ is strictly convex
I am working on the following exercise:
Let $f: (0,1) \rightarrow \mathbb{R}_{\ge0}$, be a continuous function such that the set $\{x \in (0,1) \mid f(x) = 0\}$ is a nowhere dense set. Prove that the iterated integral $$h(t) =\int_0^t \int_0^s f(x) dxds$$ is strictly convex.
I do not see how to prove this. Could you give me a hint.
AI: Prove $h'(t)=\displaystyle\int_0^t f(x)dx$ is strictly increasing.
Note that any interval $(a,b) \subseteq (0,1)$ contains an open interval $(a',b')$ such that $(a',b') \cap \{x \in (0,1) \mid f(x)=0 \} = \emptyset$, i.e., $f(x) > 0$ for all $x \in (a',b')$; this is exactly what it means for the zero set to be nowhere dense.
|
H: Cauchy-Euler equation - wronskian and $e^u$ substitution
Wikipedia says the following about the Cauchy-Euler equation:
$$a_{n} x^n y^{(n)}(x) + a_{n-1} x^{n-1} y^{(n-1)}(x) + \cdots + a_0 y(x) = 0$$
is called Cauchy-Euler equation. Substitution $x=e^u$ (where $u = \ln(x)$; for $x < 0$ we replace $x$ with $|x|$ to extend the domain to $\mathbb{R}_0$) may be used to reduce this equation to a linear differential equation with constant coefficients.
How does it work? I tried plugging $x = e^{\ln x}$ into it:
$$
a_ne^{n\ln x}y^{(n)}(e^{\ln x}) + \dots + a_1e^{\ln x} y'(e^{\ln x}) + a_0y(e^{\ln x})=0 \\
a_nx^ny^{(n)}(x) + a_{n-1}x^{n-1}y^{(n-1)}(x) + \dots + a_1x y'(x) + a_0y(x)=0
$$
but I don't see how this would lead me anywhere.
Another question popped up - what would the Wronskian of such an equation look like in general? All I can find are Wronskians of concrete examples.
AI: Define $z(u)=y(e^u)$. Then $$z'(u)=y'(e^u)e^u=xy'(x).$$
Similarly for higher-order derivatives.
For example, the equation $$xy'(x)+y(x)=0$$ becomes $$z'(u)+z(u)=0.$$
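If you want to double-check the reduction mechanically, here is a small symbolic verification (my own sketch; the test function $y = x^3\ln x$ is arbitrary) that $xy'(x)$ and $x^2y''(x)$ become constant-coefficient expressions in $z$:

```python
import sympy as sp

u = sp.symbols('u', real=True)
x = sp.symbols('x', positive=True)
y = x**3 * sp.log(x)                 # arbitrary concrete test function
z = y.subs(x, sp.exp(u))             # z(u) = y(e^u)

# x*y'(x) at x = e^u equals z'(u):
assert sp.simplify((x * sp.diff(y, x)).subs(x, sp.exp(u)) - sp.diff(z, u)) == 0
# x^2*y''(x) at x = e^u equals z''(u) - z'(u):
assert sp.simplify((x**2 * sp.diff(y, x, 2)).subs(x, sp.exp(u))
                   - (sp.diff(z, u, 2) - sp.diff(z, u))) == 0
```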
|
H: How many points are needed to uniquely specify a box?
In $\mathbb{R}$, to specify a line segment, we need two points. In $\mathbb{R}^2$, to specify a given rectangle, all we need are three points. Then the fourth point is determined, and there is only one rectangle with those three points as vertices. In $\mathbb{R}^3$, to specify a cuboid, we can do it with just $4$ vertices (e.g. pick any vertex and its three adjacent vertices). Then there is only one cuboid that can have these $4$ vertices.
In general, what is the minimum number of points/vertices needed to uniquely specify a given box in $\mathbb{R}^n$? Is it $n + 1$?
AI: In general, a parallelotope (the generalization of parallelograms to many dimensions) can be specified by $n+1$ points: a vertex $u$ and all the vertices $v_1,\ldots,v_n$ adjacent to that vertex - and, as long as there is no hyperplane on which all of these points lie, any collection of $n+1$ vertices really does extend to a parallelotope (since, essentially, one can just take the Minkowski sum of the edge vectors from $u$ to each $v_i$ to get a volume).
It seems like a box, for you, is just a parallelotope in which the faces are perpendicular to one another - they can be specified the same way, except they additionally require that the edge vectors from $u$ to each $v_i$ be pairwise perpendicular - which doesn't really bring down the number of vertices required, but it means that, if we specify $u$ then $v_1$, we would know that $v_2$ lies on some specific hyperplane, then that $v_3$ lives on a codimension two subspace, and so on - bringing down the degrees of freedom. In general, you have $n$ dimensions of choice for $u$ and $v_1$, then $n-1$ for $v_2$ and $n-2$ for $v_3$ and so on, until you only have a single dimension of choice for $v_n$.
All told, you get $n(n+1)$ degrees of freedom in choosing a parallelotope in $n$ dimensions, but only $\frac{n(n+3)}2=n+(1+2+3+\ldots+n)$ degrees of freedom in choosing a box - which is probably a more meaningful measurement than "number of points required."
|
H: Proving a linear form to be continuous
In the Hilbert space $L^2(\mathbb{R})$ I have seen that the following form is linear; however, I need to check if it is continuous and find the associated vector using the Riesz–Fréchet theorem. I have tried to prove that it is bounded but I was not able to do it.
$$F(f) = \int_0^1 f(x^2)xdx$$
AI: You can simply make a change of variable $y=x^2$. So
$$F(f) = \frac{1}{2}\int_0^1 f(y)dy$$
which is bounded by the Cauchy–Schwarz inequality: $|F(f)| \le \frac{1}{2}\lVert f\rVert_{L^2}$. The representing vector in the Riesz–Fréchet theorem is the function equal to $1/2$ on $[0,1]$ and $0$ elsewhere.
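For a quick numerical illustration of the change of variables (my own addition; the test function is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: np.cos(3 * t) + t**2            # arbitrary test function
lhs, _ = quad(lambda x: f(x**2) * x, 0, 1)    # int_0^1 f(x^2) x dx
rhs, _ = quad(f, 0, 1)                        # int_0^1 f(y) dy
print(lhs, rhs / 2)                           # the two values agree
```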
|
H: Question about $x \in \text{Img}(X)$
When I studied the expectation of a discrete random variable $X$, I saw it is defined as $$\mathbb{E}(X) = \sum_{x \in \text{Img}(X)} x \cdot \mathbb{P}(X=x)$$
What exactly does $x \in \text{Img}(X)$ mean? Does it mean $x$ can be all possible values of $X$?
AI: The set of all possible values the random variable $X$ can take on. For instance, if $X$ represents rolling a die, $\text{Img}(X)$ would be $\{1,2,3,4,5,6\}$.
|
H: Set Theoretic naming for a union property
I am introducing a new operation for countable sets, say $\mathcal{X}(\cdot)$ is the operation. I have proved that for two sets $S_1, S_2$ we have:
$$\mathcal X(S_1 \cup S_2) = \mathcal{X}(S_1) \cup \mathcal{X}(S_2).$$
How can I name this property in set-theory language?
AI: It is distributive over unions.
Much like how $a\times(b+c)=a\times b+a\times c$ is the distributivity of $\times$ over $+$.
|
H: Compactness of the set $S=\{\sin(2^nx)\mid n\in\mathbb{N}\}$.
Let $$S=\{\sin(2^nx)\mid n\in\mathbb{N}\}$$ in the metric space $L^2([-\pi,\pi])$ of Lebesgue square integrable functions on $[-\pi,\pi]$, with metric defined as $$ d(f,g)=\left[\int_{-\pi}^{\pi}(f-g)^{2}dx\right]^{\frac{1}{2}}.$$ Which of the following are true?
$1.$ $S$ is bounded.
$2.$ $S$ is closed.
$3.$ $S$ is compact.
I tried as follows .
According to me, the set $S$ is bounded as $\int_{-\pi}^{\pi}[\sin(2^nx)-\sin(2^mx)]^2\,dx \leq 4\pi.$ For closedness, let $f_n\rightarrow f$ where $f_n\in S$. Then $\int_{-\pi}^{\pi}(f_n-f)^2\rightarrow 0.$ Now how to proceed further. Please help. Thanks.
AI: All the elements of $S$ have norm equal to $\sqrt{\pi}$, so $S$ is bounded.
Distinct elements of $S$ are orthogonal in $L^2$, so given two distinct elements $f_i=\sin(2^i x)$ and $f_j = \sin(2^j x)$, you have $d(f_i,f_j)= \sqrt{2\pi}$. This means that there are no accumulation points in $S$, so the set is closed.
The set is not compact: consider the cover made of open balls of radius $\sqrt{2\pi}/2$ around each point of $S$. These balls are pairwise disjoint (any two distinct points of $S$ are at distance $\sqrt{2\pi}$), so each ball contains exactly one point of $S$ and no finite subcover exists, as required in the definition of compactness.
|
H: Solving a three variable exponential equation
I have the following function:
$$
y = a\,x^b + c
$$
And I have a finite set of already calculated values for the x and the y. Using this data set I need to calculate the values of a, b, and c. Is there a way to do so?
I didn't manage to get an answer using what I remembered from solving multivariable equations, since I cannot simplify the equation with respect to $b$.
Additional information about the specific use case:
In the data set, the value of x is in the range [0-256] inclusive
and the value of y is in the range [0-65535] inclusive and both are natural numbers. However not all values are necessarily available. For example the set might start at 10 and continues to 250.
a, b, and c are decimal numbers with b always being bigger than 0.
The set contains around 256 pairs of (x, y) essentially
resulting in around 256 equations that can be used to solve this. $[(x_1,y_1)-(x_{256},y_{256})]$
AI: Note that for any $i,j,k,l$,
$$ \dfrac{y_i - y_j}{y_k - y_l} = \frac{x_i^b - x_j^b}{x_k^b - x_l^b} $$
In principle this allows $b$ to be determined, but in most cases this can only be done
numerically. However, if you're doing it numerically in any case, you might as well
use a numerical algorithm for a nonlinear least squares fit on the whole dataset.
If there are $i,j,k,l$ such that $x_i/x_k = x_j/x_l \ne 1$ (call this common ratio $t$), then you can simplify the above equation to $$ \dfrac{y_i - y_j}{y_k - y_l} = t^b$$
so that $b = \log_t((y_i - y_j)/(y_k - y_l))$. Then knowing $b$ you have
$a = (y_i - y_j)/(x_i^b - x_j^b)$ and $c = y_i - a x_i^b$.
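If you do go the numerical route, a nonlinear least-squares fit is only a few lines with SciPy. This is a sketch with synthetic stand-in data (the parameter values and noise level here are my own invention, not the asker's dataset):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    return a * np.power(x, b) + c

rng = np.random.default_rng(0)
x = np.arange(10, 251, dtype=float)                  # stand-in for the x range
y = model(x, 2.5, 1.7, 100.0) + rng.normal(0, 5, x.size)

# p0 is the initial guess; nonlinear fits can be sensitive to it (b > 0 here)
(a, b, c), _ = curve_fit(model, x, y, p0=(1.0, 1.0, 0.0))
print(a, b, c)                                       # close to 2.5, 1.7, 100.0
```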
|
H: what is the limit of $\sum_{n>0}^\infty p^n\cos(n\zeta)$
Let $p\in \mathbb{R}$ with $|p|<1$ and $\zeta \in \mathbb{R}$.
$\sum_{n>0}^\infty p^n\cos(n\zeta)$ and $\sum_{n>0}^\infty p^n\sin(n\zeta)$
I need to find the values of these series; I need them for a theorem that I am proving, but I could not find them.
AI: Call these sums $C,\,S$ so$$1+C+iS=\sum_{n\ge0}(p\exp i\zeta)^n=\frac{1}{1-p\exp i\zeta}=\frac{1-p\cos\zeta+ip\sin\zeta}{1-2p\cos\zeta+p^2}.$$Hence$$C=\frac{p(\cos\zeta-p)}{1-2p\cos\zeta+p^2},\qquad S=\frac{p\sin\zeta}{1-2p\cos\zeta+p^2}.$$
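A quick numeric spot-check of these closed forms (my own addition; the values of $p$ and $\zeta$ are arbitrary):

```python
import numpy as np

p, zeta = 0.6, 1.3
n = np.arange(1, 2000)
C = np.sum(p**n * np.cos(n * zeta))
S = np.sum(p**n * np.sin(n * zeta))
den = 1 - 2 * p * np.cos(zeta) + p**2
print(C, p * (np.cos(zeta) - p) / den)   # partial sum vs closed form
print(S, p * np.sin(zeta) / den)
```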
|
H: Prove that set is closed and ideal of another set
Let $(X, d)$ be a compact metric space. For a given $x_0 \in X$, define $C_{x_0}(X,\mathbb{R})$ by
${C}_{x_0} = \{ f \in \mathcal{C}(X,d): f(x_0) = 0 \}$.
I need to prove that $C_{x_0}(X,\mathbb{R})$ is closed in $\mathcal{C}(X,\mathbb{R})$ and that $C_{x_0}(X,\mathbb{R})$ is an ideal of $\mathcal{C}(X,R)$. How do I prove this?
AI: Consider the linear functional $\ell$ given by $\ell(f) = f(x_0)$. Then $\ell$ is continuous and $C_{x_0} = \ell^{-1}(\{0\})$ is closed as the preimage of a closed set under a continuous map.
For the ideal property, first note that $C_{x_0}$ is a subspace of $C(X, \mathbb{R})$ (by the point-wise definition of addition and scalar multiplication). Furthermore, we have $(fg)(x_0) = (gf)(x_0) = f(x_0)g(x_0) = 0$ for all $f \in C_{x_0}$ and $g \in C(X, \mathbb{R})$, so $fg \in C_{x_0}$ and $C_{x_0}$ is indeed an ideal.
|
H: Find $x(t)$ given $\frac{dx}{dt}$ and $\frac{dy}{dt}$
For $0 \leq t \leq 1$ , a particle is moving along a curve so that its position at time $t$ is $(x(t),y(t))$. At time $t=0$ , the particle is at position $(0,0)$ .
We are given that
$$\frac{dx}{dt} = \frac{t}{\sqrt {1+t^2}}$$
$$\frac{dy}{dt} = \sqrt {\frac{1-t^2}{1+t^2}}$$
Find $x(t)$
Here is my work:
$$x(t)=\int \frac{dx}{dt}= \int \frac{t}{\sqrt{1+t^2}}dt$$
Substituting:
$$u=t^2+1 \implies du= 2t dt$$
We get;
$$x(t) = \int \frac{t}{\sqrt{1+t^2}}dt = \sqrt{t^2+1}+C$$
Since it is asking for $x(t)$ and not $y(t)$ I believe I'm ok
AI: Your answer is alright except for the $+ C$, notice that the question gave you that $x(0)=y(0)=0$, so that gives
$$x(t)=\sqrt{t^2+1}-1$$
|
H: Finding the remainder of $123^{456}+78^9$ divided by $7$
Find the remainder when $123^{456}+78^9$ is divided by $7$.
This would be $123^{456}+78^9\pmod7,$ but is there any way to find some slick congruence? I don't really understand how to find congruences.
AI: $123^{456}\equiv1\bmod 7$ by Fermat's little theorem that $a^{p-1}\equiv1\bmod p$ if $a\not\equiv0\bmod p$.
$78^9\equiv1^9=1\bmod 7$, because $78=11\times7+1$.
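As a one-line machine check (my own addition), Python's three-argument pow confirms both congruences, so the requested remainder is $1+1=2$:

```python
print(pow(123, 456, 7), pow(78, 9, 7))   # 1 1, hence 123^456 + 78^9 ≡ 2 (mod 7)
```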
|
H: growth of population- solving initial value problem
I want to solve the initial value problem
$$ P'= \alpha P^{ \beta} , \space \space P(0)=P_0 > 0 $$
with
$ \alpha \in \mathbb{R}^{ +} ,\beta \in [1, \infty[ $
and show that there exists a time $T$ for $\beta >1$ such that
$$ \lim_{ t \rightarrow T} P(t)= \infty $$ and determine this $T$.
The differential equation is separable, so I can solve
$$ \int_{P_0}^P \frac{1}{ \alpha P(x) } dx = \int_0^t 1 dx $$
$$ \leftrightarrow \frac{x^{1-b}}{a(1-b)} |_{P_0}^P = x |_0^t $$
$$ \leftrightarrow \frac{P^{1-b}}{a(1-b)} - \frac{P_o^{1-b}}{a(1-b)} = t $$
is this right so far? how can I solve it explicit and determine $T$??
thank you in advance for any help!
AI: Looks correct so far, besides notation (the power $\beta$ is missing in one equation; one $dx$ should be a $dP$; $a$, $b$ should be $\alpha$, $\beta$). In the particular case $\beta = 1$, we have
$P(t) = P_0 e^{\alpha t}$. For $\beta>1$, the solution of this power-law growth model reads
$$
\begin{aligned}
P(t) &= \sqrt[1-\beta]{P_0^{1-\beta} + \alpha (1-\beta)t} \\
& = \left(P_0^{1/\gamma} + \tfrac{\alpha}\gamma t\right)^\gamma
\end{aligned}
\qquad\text{where}\qquad \gamma = \tfrac1{1-\beta} < 0 .
$$
The power function $x\mapsto x^\gamma$ (with $\gamma<0$) is singular at the origin. Thus, the above solution becomes infinite at the time $t=T$ which makes the base of the power vanish, i.e. $T = -\frac{\gamma}{\alpha}P_0^{1/\gamma} > 0$.
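As a numeric illustration (my own sketch): for $\alpha=1$, $\beta=2$, $P_0=1$ the formula predicts $T=1$, and integrating the ODE up to just below $T$ shows the solution blowing up:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, P0 = 1.0, 2.0, 1.0
gamma = 1 / (1 - beta)                        # gamma = -1 here
T = -gamma / alpha * P0**(1 / gamma)          # predicted blow-up time: T = 1
sol = solve_ivp(lambda t, P: alpha * P**beta, (0, 0.99 * T), [P0], rtol=1e-10)
print(T, sol.y[0, -1])                        # P(0.99) is already about 100
```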
|
H: Astroid parametric equation derivation
I know, that the astroid with equation
$$
x^\frac{2}{3}+y^\frac{2}{3}=a^\frac{2}{3}
$$
can be parametrically expressed as:
$$
x=a\cos^3(t)
$$
$$
y=a\sin^3(t).
$$
I was wondering if there is a way to derive the parametric equations from the formula at the top.
AI: Usually when we see the equation
$$x^m + y^m = a^m$$
we quickly think of the parameterization
$$x = a \cos^{2 /m} (\theta), ~ y = a \sin^{2/m} (\theta), ~ 0 \leq \theta \leq 2 \pi$$
because it reminds us of the trigonometric identity $\cos^2 \theta + \sin^2 \theta = 1$. In the above, we have chosen $x$ and $y$ such that $x^m = a^m \cos^2 \theta$, and $y^m = a^m \sin^2 \theta$.
|
H: Monotonicity of the Lebesgue Integral
I am working through Baby Rudin, and encountered the following remark:
If $f$ and $g\in\mathcal{L}(\mu)$ on $E$, and if $f(x)\leq g(x)$ for $x\in E$, then
$$\int_{E}fd\mu\leq \int_E gd\mu.$$
Given a measurable function $f$, $\int_{E}fd\mu$ is always defined, but $f$ only belongs to $\mathcal{L}(\mu)$ when $$\Bigg|\int_{E}fd\mu\Bigg|<\infty.$$
(Edit: as harfe pointed out, it should be $\int_{E}|f|d\mu$ instead).
My question is, does the remark remain true if we only assume $f$ and $g$ to be measurable instead of being in $\mathcal{L}(\mu)$? It seems to be true, because of two cases:
If $\int_{E}fd\mu=-\infty$, then there is nothing to show.
If $\int_{E}fd\mu=\infty$, then it must follow (?) that $\int_{E}gd\mu=\infty$ as well, so we get $\infty\leq \infty$.
Any help appreciated.
Edit: I just realized that in my situation, I only need $f$ measurable and $g\in \mathcal{L}(\mu)$, but comments at any level of generality are welcome.
AI: First, your definition of $\mathcal L(\mu)$ is not correct in my opinion.
It is usually defined as the set of measurable functions $f$ such that
$$
\int_E |f|\,\mathrm d\mu<\infty.
$$
Your argument for the cases
$\int_E f\,\mathrm d\mu=-\infty$
and
$\int_E f\,\mathrm d\mu=\infty$
is correct.
However, there is another possibility if $f$ is not in $\mathcal L(\mu)$.
It can happen that the integral is not defined or does not exist.
This is the case if
$$
\int_E f^+\,\mathrm d\mu=\infty
\quad\text{and}\quad
\int_E f^-\,\mathrm d\mu=\infty,
$$
where $f^+,f^-$ are the positive and negative part of $f$
(using the Notation as in Baby Rudin).
This is the reason why the author adds the conditions $f,g\in\mathcal L(\mu)$.
|
H: Minimizing Type I and Type II errors Simultaneously
I am trying to understand why I can't minimize Type I error and Type II error simultaneously. A non-randomized hypothesis test is essentially a deterministic partition of the range space of our random variable:
$$R_0=\{x\in R: \text{reject null}\}\hspace{4mm}R_1=\{x\in R: \text{accept null}\}$$
where $R_0\cup R_1=R$
If I try to minimize the event of a type I Error then I can essentially remove some $x$ from $R_0$ to $R_1$ then have the new partition of $R$ as $R_0^{'}\cup R_1^{'}=R$
if
$$P_{\theta_0}(x\in R_0)=\alpha$$
$$P_{\theta_a}(x\in R_1)=\beta$$
then the new rejection region, $R_0^{'}$ post-transferring of x will have
$$P_{\theta_0}(x\in R_0^{'})\leq\alpha$$
$$P_{\theta_a}(x\in R_1^{'})\geq\beta$$
Thus, in attempting to reduce the probability of a Type I error I have increased the probability of a Type II error. But I have read that this trade-off only holds for a fixed sample size; why is that? Does an increase in sample size decrease both error probabilities simultaneously?
AI: In all hypothesis tests, we calculate a $p$-value based on a test statistic calculated from our sample. Usually, we determine a threshold (significance level) $\alpha$ such that if $p \leq \alpha$, we reject the null hypothesis $H_0$. Note that our $p$-value is calculated under the assumption that $H_0$ is true, and represents the probability of observing a test statistic at least as extreme as the one calculated from our sample, given that $H_0$ is true, so
$$P(\text{type I error}) = P(\text{reject }H_0 \text{ when } H_0 \text{ is true}) = \alpha$$
as we only reject $H_0$ when $p \leq \alpha$. The point to note here is that the probability of a type I error only depends on the significance level.
The probability of a type II error will depend on the population parameters and the sample size, since we make such an error when our observed sample fails to lie in what you have called $R_0$ - the $p$-values which will force us to reject $H_0$. In most hypothesis tests, the distribution of the test statistic calculated for a sample becomes more and more concentrated as we increase the sample size. For example, in hypothesis tests for the population mean, increasing the sample size decreases the standard deviation of the sample mean. Hence, the test is more sensitive to extreme values of the calculated test statistic and is more likely to give extreme $p$-values (upon which we would reject the null hypothesis). If you have a specific hypothesis testing procedure in mind, I would be happy to flesh out the details for that procedure for you.
The point to note here is that the probability of a type II does not only depend on the significance level, and in nearly all cases decreases with sample size. So one can simultaneously minimize the two errors by decreasing the significance level and increasing the sample size.
|
H: Prove by contradiction that $A\cup B=B\cup A.$
For arbitrary sets $A$ and $B$, assume the opposite of our conclusion: $A\cup B \neq B \cup A$.
Thus, either there exists $x\in (A \cup B)$ such that $x \notin (B\cup A)$ or there exists $x'\in (B\cup A)$ such that $x' \notin (A\cup B)$.
Without loss of generality, assume the first condition, that there exists $x \in (A\cup B)$ such that $x \notin (B\cup A)$. Then $x\notin B$ and $x\notin A$, or in other words, $x\notin A$ and $x\notin B$.
So, $x\notin (A\cup B)$, a contradiction.
From this antilogy, it follows that $A\cup B = B\cup A$.
AI: This works, but I recommend a direct proof instead: for all $x$,
$$\begin{align}
x\in A\cup B&\iff x\in A\lor x\in B\\
&\iff x\in B\lor x\in A\\
&\iff x\in B\cup A.
\end{align}$$
|
H: Divisibility of $a_{24}$ by 7. ($a_n=\underbrace{999\cdots9 }_{n \text{ times}})$
Question: By which number is $a_{24}$ divisible?
Where $a_n=\underbrace{999\cdots9 }_{n \text{ times}}$
The solution says the answer is $7$. Here's what is given:
$$a_{24}=\underbrace{999\cdots9 }_{24 \text{ times}}$$
$$=9(\underbrace{\underline{111} \ \ \underline{111}\ \ \underline{111} \ \cdots \ \ \underline{111})}_{8 \text{ similar sets}}$$
Now differences of each set is $0$. Hence $a_{24}$ is divisible by $7$.
Now what I don't understand is what are they implying when they say "difference of each set is $0$" . Also , why does this imply that the number is entirely divisible by $7$?
Also I know the divisibility rule of $7$ to be: Double the last digit, subtract the obtained number from whatever remains after removing the last digit and then check if the final number obtained is divisible by 7.
This process can go lengthy for this question here. Is there any way to solve it quicker?
AI: To address what you don’t understand about the solution given, you should become aware that there is another rule for divisibility by $7$ besides the one you mentioned. This rule is to alternately add and subtract $3$-digit chunks of the number starting with the last $3$ digits and testing whether the result is divisible by $7$. For example, $7003010$ is divisible by $7$ because $10-3+7$ is. This rule works because $7$ divides $1001.$ (By the way, it works for $11$ and $13$ too.) Using this rule, it becomes obvious that any number written as a string of $n$ $1$s, where $n$ is a multiple of $6$, is divisible by $7$.
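The rule is also easy to check by machine; here is a small script (my own addition) applying the alternating three-digit-chunk rule to $a_{24}$:

```python
def alt_chunks(n):
    """Alternating sum of 3-digit chunks, taken from the right."""
    s, sign, total = str(n), 1, 0
    while s:
        s, chunk = s[:-3], s[-3:]
        total += sign * int(chunk)
        sign = -sign
    return total

a24 = int('9' * 24)
print(alt_chunks(a24), a24 % 7)   # 0 and 0: divisible by 7
```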
|
H: If $f$ is differentiable on $[a,+\infty)$ and $\lim_{x\to\infty}(f(x)+f'(x))=c\in\mathbb{R}$, prove that $\lim_{x\to\infty}f(x)=c$.
I figured if I proved that $f'(x)=0$ then that would be it because, if the first derivative is zero, then the function has a constant value at every point of its domain. But I don't know how to prove that $f'(x)=0$ using the given condition about limits. I thought about $\lim_{x\to\infty}(f(x)+f'(x))=\lim_{x\to\infty}f(x)+\lim_{x\to\infty}f'(x)$ but I failed to find out why $\lim_{x\to\infty}f'(x)$ equals zero.
AI: Here is just a thought, so correct me if I'm wrong:
We can have 3 cases :
(i) $\lim_{x \to \infty} f'(x)>0$
This implies the function keeps increasing as we approach $\infty$, so as $x$ grows without bound, $f(x)$ blows up to $\infty$.
Thus, $\lim_{x \to \infty} f(x)+f'(x) = (\text{something blowing up to } \infty) + (\text{something positive}) \neq c$
This can definitely not reach a constant $c \in \mathbb{R}$.
(ii) $\lim_{x \to \infty} f'(x)<0$
Similar to case (i), here $f(x)$ will blow down to $-\infty.$ Thus, $\lim_{x \to \infty} f(x) + f'(x) = (-\infty) + (\text{something negative}) \neq c \in \mathbb{R}$
(iii) The only case we are left with is $\lim_{x \to \infty} f'(x)=0$
We can see two sub-cases here:
(a) $$\lim_{x \to \infty} f(x)= c$$
Checking this against the question, it is satisfied, so this is a valid answer.
(b) $$\lim_{x \to \infty} f(x)= \pm \infty$$
This would again not satisfy the condition of $\lim_{x \to \infty} f(x) +f'(x) = c$
So, I believe every possible case is accounted for, and only one case is satisfying the conditions, so
For $$\lim_{x \to \infty } f(x) + f'(x) = c \in \mathbb{R}$$
It is necessary that $\lim_{x \to \infty} f'(x) =0$ and $\lim_{x \to \infty}f(x)=c$
|
H: How to find a vector given the scalar triple product
Scalar triple product formula: $a (b \times c)$
If I know the value of the product and I know what vectors $b$ and $c$ are, how do I find what vector $a$ is equal to?
AI: Unfortunately, you can't. Your real question is even simpler: if you know $\mathbf{x}$ and $\mathbf{a}\cdot\mathbf{x} = y$, can you solve for $\mathbf{a}$?
The issue is that the dot product is not an invertible operation, so there are lots of ways to choose $\mathbf{a}$ such that $\mathbf{a}\cdot\mathbf{x} = y$. Think about it in the two-dimensional case: let $\mathbf{a} = [a_1, a_2]$ and $\mathbf{x} = [x_1, x_2]$. The hypothesis is that we know $x_1$, $x_2$, $y$, and that
$$
a_1x_1 + a_2x_2 = y,
$$
but this is a single equation with two unknowns ($a_1$ and $a_2$), which has infinitely many solutions. If that seems odd, verify that $\mathbf{a} = [0, \tfrac{y}{x_2}]$ and $\mathbf{a} = [\frac{y}{x_1}, 0]$ are both solutions.
|
H: Cramer-Rao lower bound for exponential distribution
Given a sample $X_1,\dots , X_n$ from a population $X\sim \operatorname {Exp} (\lambda )$, I have to calculate Cramér–Rao bounds for the estimation of $\lambda$ and $\frac 1 \lambda$; I also must determine whether there are estimators that attain the bounds.
Now, we have that $\frac {\partial}{\partial \lambda} \ln(\lambda e ^{-\lambda x}) = \frac 1 \lambda -x$, and so $\mathbb E[ (\frac 1 \lambda -x)^2]=\frac 1 {\lambda^2}$, since it is the variance by definition. So in the case that we are estimating $\lambda$, the Cramér–Rao bound is $\frac {\lambda^2} n$, while in the other case the bound is $\frac {\lambda^2} n \cdot \frac 1 {\lambda^4}= \frac 1 {n\lambda^2}$. It is clear that the sample mean is an estimator for $\frac 1 \lambda$ with variance exactly $\frac 1 {n\lambda^2}$; however, I don't know how to proceed in the case of estimating $\lambda$. If an estimator $T_n$ for $\lambda$ attained the Cramér–Rao bound, then we would have $\sum_i\frac {\partial}{\partial \lambda} \ln(\lambda e ^{-\lambda x_i}) =K (n,\lambda) ( T_n -\lambda )$; so that $\sum_i (\frac 1 \lambda -x_i)= K (n,\lambda) ( T_n -\lambda )$. Since we want $\lambda$ and not $\frac 1 \lambda$, the only way is to multiply everything by $\lambda^2$; however, with this operation we can't obtain an estimator from the $x_i$, because we will still have a dependence on $\lambda$. I'm not sure about the last statement: did I actually prove that there are no estimators that attain the Cramér–Rao bound for $\lambda$? Thanks in advance for your help.
AI: Yes you did.
The lower bound for unbiased estimators of $\lambda$ is $V(T)\geq\frac{\lambda^2}{n}$.
Using Lehmann-Scheffé Lemma you can find the UMVUE estimator of $\lambda$
$\hat{\lambda}=\frac{n-1}{\sum_i X_i}$
Its Variance is $V(\frac{n-1}{\sum_i X_i})=\frac{\lambda^2}{n-2}$ (for $n>2$) so, as often happens, the optimum estimator does not reach the Cramér Rao lower bound.
If interested I can show all the calculations...but you can find them in many books, e.g. Mood Graybill Boes, Chapter 7
|
H: Question about unclear definition of Ackermann-Péter function in Stanford Encyclopedia of Philosophy
I'm reading Recursive Functions at Stanford Encyclopedia of Philosophy (section 1.4). The following paragraph defines function β which is then used to define variant of Ackermann-Péter function:
What exactly is the type of Iter?
The base case Iter(ϕ, 0) suggests it's (N → N) → N → (N → N), but the inductive step has type (N → N) → N → N.
Assuming it's the former and Iter just composes ϕ with itself, the next paragraph doesn't make sense to me because Iter should have return type N → N, while β should be N → (N → N), so you cannot compose β with itself.
What am I misunderstanding? Thanks.
Update: I've checked the Wikipedia article on the Ackermann function and it turns out that the correct definition is as follows:
$$β(0) = S$$
$$β(x + 1) = Iter(β(x))$$
where:
$$Iter(ϕ)(0) = ϕ(1)$$
$$Iter(ϕ)(x + 1) = ϕ(Iter(ϕ)(x))$$
AI: You're not misunderstanding, there are a few errors in the text. Your first type signature is right, and the "step" would be more clearly written with a $\circ$ rather than brackets, i.e. $\operatorname{iter}(\phi, n+1) = \phi \circ \operatorname{iter}(\phi, n)$
It's clear there's something wrong at the point that they have an equation with an x on one side but not the other. It should be $\operatorname{iter}(\phi, n)(x) = \phi^{n+1}(x)$
Similarly the definition of $\beta(0)$ is confusing at best. $\beta(0)(x) = x+1$ would be correct, or $\beta(0) = \lambda x.x + 1$
I'd try another source if I were you.
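That said, the corrected equations from the update transcribe directly into code; this sketch (my own addition) shows the intended types and the rapid growth:

```python
def S(x):                    # the successor function, beta(0)
    return x + 1

def Iter(phi):               # Iter : (N -> N) -> (N -> N)
    def it(x):
        return phi(1) if x == 0 else phi(it(x - 1))
    return it

def beta(n):                 # beta : N -> (N -> N)
    f = S
    for _ in range(n):
        f = Iter(f)
    return f

print([beta(i)(3) for i in range(4)])   # [4, 5, 9, 61]: Ackermann-style growth
```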
|
H: Doesn't $0.\overline9=1$ lead to consequences like $a-0.\overline01=a$ and $2=1.\overline931415926$?
I'm just starting to learn calculus, but this was the first idea presented:
$$0.\overline9=1$$
This would mean that this is true:
$$a-0.\overline01=a$$
When I thought about it, then I realised that if the above was true, then this other statement must be considered true: $$((((a-0.\overline01)-0.\overline01)-0.\overline01)...)=(a-0.\overline01)-0.\overline01=a-0.\overline01=a$$
This is confusing because it presents a lot of weird consequences. One such consequence would be a different behavior of open and closed intervals, since numbers that I previously considered to be different, are now considered to be the same. For instance, $2$ is now equal to $1.\overline931415926$.
I can't help but think that I am missing a crucial piece of information.
AI: If you're just beginning to learn calculus, this will certainly seem a bit confusing! The key is to understand that when we write something such as $0.\overline 9$ we mean something very specific.
In this case, we need a precise definition of what "a decimal expansion with an infinite sequence of nines" even means. One way we can do this is to define it as the infinite series such that: $0.\overline9 = \frac{9}{10} + \frac{9}{100} + ...$
If you are familiar with series, you'll notice this is a geometric series. If you are just starting calculus I'll assume you haven't yet met notions of convergence of sequences and series, but it will suffice to simply state that there is a precise notion of these things. In this sense, the geometric series I described above is equal to $1$, by definition of what it means for an infinite series to converge.
However, consider $0.\overline0 1$. How exactly would we define this? It is not obvious to me, and I don't see a good way to do this. In this sense we cannot apply the usual notions of convergence of series because we're not even sure how to express this 'decimal' as such an infinite series!
The key takeaway is that when we make certain claims in mathematics, they mean very specific things, and they refer to specific notions. In this case we first of all need to define what we mean by $0.\overline 9$, (in this case define it as an infinite series), and then we need to define what it means for an infinite series to converge. For completeness, we say that an infinite series converges if and only if the sequence of partial sums converges. (But note there are also different ways to define convergence in different situations).
With this key takeaway in mind, the very idea of $0.\overline0 1$ is not clear, and hence proving further statements from it is not valid. If you can come up with a precise working definition of what you mean by the symbols $0.\overline0 1$, that's a different story! But as presented, there is no way to define it in the same way as $0.\overline 9$
|
H: $\lVert x_k \rVert \rightarrow \infty \quad \Longrightarrow \quad f(x_k) \rightarrow+\infty$ implies existence of global minimiser
Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a continuous function sucht that for every sequence $(x_k)_k \subset \mathbb{R}^n$ holds
$$\lVert x_k \rVert \rightarrow \infty \quad \Longrightarrow \quad f(x_k) \rightarrow+\infty.$$
Show that at least one global minimiser exists.
I do not see how to prove this. The only thing I could think of would be to somehow use compactness, but I do not see how to apply that here. Could you give me a hint?
AI: $\newcommand{\argmin}[1]{\underset{#1}{\operatorname{arg min}}}$Consider a sequence $(x_n)_{n\in\Bbb N}$ such that $x_n\in\argmin{\lVert x\rVert\le n}{f(x)}$. Such a sequence exists because $f$ has a minimum on every closed bounded set.
Notice that $(x_n)_{n\in\Bbb N}$ must be bounded because $f(x_n)$ is decreasing (and therefore it cannot diverge to $\infty$). Therefore there is some $M\in\Bbb N$ such that $\lVert x_n\rVert\le M$ for all $n$, and thus $$\inf_{x\in\Bbb R^n}f(x)=\inf_{n\in\Bbb N}\min_{\lVert x\rVert\le n}f(x)=\inf_{n\in\Bbb N}f(x_n)\ge \inf_{\lVert x\rVert\le M} f(x)=f(x_M)$$ Since trivially $f(x_M)\ge\inf_{x\in\Bbb R^n}f(x)$, equality holds throughout, so $x_M$ is a global minimiser.
|
H: Prove that $\mathbb{Q}[X]/I\cong Q\times Q$
Let $f(X)=(X^2-2)(X^4-X)$ and $g(X)=(X^2-1)X\in \mathbb{Q}[X]$.
Let $I=(f,g)$ the ideal generated by $f$ and $g$.
Prove that $\mathbb{Q}[X]/I\cong Q\times Q$
Using the reasoning of this answer I have computed that $I=(X(X-1))$.
The next step is to define an epimorphism $\phi: \mathbb{Q}[X]\longrightarrow Q\times Q$, see that $\ker \phi = I$ and apply the First Isomorphism Theorem. But I am not sure about what is $Q$, I think that it is the field of fractions of $\mathbb{Q}[X]$ and I am not clear about how to define $\phi$.
AI: Hint: the evaluation map $P \mapsto P(a)$, from $\mathbb{Q}[X]$ onto $\mathbb{Q}$, has kernel $\langle X-a \rangle$ (in particular, the $Q$ in the statement is just $\mathbb{Q}$).
So try to set $\phi(P)= (P(\alpha), P(\beta))$ in such a way that the kernel becomes exactly
$$\langle X(X-1) \rangle= \langle X\rangle \cap \langle X-1 \rangle$$
|
H: Poisson kernel equivalence
I'm currently reading "Real and Complex Analysis" by Rudin.
It was shown that
$$
\sum_{n = -\infty}^{\infty} r^{|n|} e^{in \theta} = \frac{1 - r^2}{1 - 2r \cos{\theta} + r^2 } \quad 0 \leq r < 1,\; \theta \in \mathbb{R}
$$
I'm having trouble verifying this equality. The textbook just notes that the left-hand side (series) is the real part of some complex series.
However, I'd like to see how I can split up the summation such that I can clearly see the geometric series.
AI: $$
\sum_{n = -\infty}^{\infty} r^{|n|} e^{in \theta} = \sum_{n = 0}^{\infty} r^{n} e^{in \theta} + \sum_{n = 1}^{\infty} r^{n} e^{-in \theta}
= \frac{1}{1-re^{i \theta}} + \frac{re^{-i \theta}}{1-re^{-i \theta}}
$$
is the sum of two geometric series. The right-hand side then simplifies to
$$
\frac{1-r^2}{(1-re^{i \theta})(1-re^{-i \theta})} = \frac{1-r^2}{1-2r\cos(\theta) + r^2}
$$
because $e^{i \theta} + e^{-i \theta} = 2 \operatorname{Re}(e^{i \theta}) = 2 \cos(\theta)$.
Alternatively combine the terms for $n$ and $-n$ in the given sum:
$$
\sum_{n = -\infty}^{\infty} r^{|n|} e^{in \theta} = 1 + \sum_{n=1}^\infty r^n(e^{in \theta} + e^{-in \theta}) = 1 + 2 \sum_{n=1}^\infty \operatorname{Re} (r^ne^{in\theta})
= \operatorname{Re} \left( 1 + 2 \sum_{n=1}^\infty (re^{i\theta})^n
\right)
$$
and then evaluate
$$
1 + 2 \sum_{n=1}^\infty (re^{i\theta})^n
= 1 + \frac{2re^{i\theta}}{1-re^{i\theta}}
= \frac{1+re^{i\theta}}{1-re^{i\theta}}
= \frac{1-r^2 + 2ir \sin(\theta)}{1 - 2r\cos(\theta) + r^2} \, .
$$
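A quick numeric spot-check of the identity (my own addition; $r$ and $\theta$ are arbitrary):

```python
import numpy as np

r, theta = 0.7, 2.1
n = np.arange(-500, 501)
lhs = np.sum(r**np.abs(n) * np.exp(1j * n * theta)).real
rhs = (1 - r**2) / (1 - 2 * r * np.cos(theta) + r**2)
print(lhs, rhs)   # agree to machine precision
```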
|
H: Find $g(x)$ if $f(x)= \begin{cases} -1, & -2 \leq x \leq 0 \\ x-1, & 0 < x \leq 2 \end{cases}$ and $g(x) = |f(x)| + f(|x|)$
$$f:[-2,2] \rightarrow \Bbb R$$
$$\text {and }f(x)= \begin{cases} -1, & -2 \leq x \leq 0 \\ x-1, & 0 < x \leq 2 \end{cases}$$
And, let $g(x)$ be equal to $|f(x)|+f(|x|)$
We need to find the value of $g(x)$ (define it).
I begin by finding the value of $|f(x)|$ first :
$$|f(x)|=\begin{cases} |-1|, & -2 \leq x \leq 0 \\ |x-1|, & 0 < x \leq 2 \end{cases}=\begin{cases} 1, & -2 \leq x \leq 0 \\ |x-1|, & 0 < x \leq 2\end{cases}$$
Now, to determine what $|x-1|$ would be, we need to determine whether $(x-1)$ is positive or negative or zero.
If $x-1 \geq 0$, $|x-1| = x-1$ and if $x-1 < 0$, then $|x-1| = -(x-1) = 1-x$
If $x-1 \geq 0$, then $x \geq 1$ and if $x - 1 < 0$, then $x < 1$. We can now split the condition $0 < x \leq 2$ from the earlier definition of $|f(x)|$ as $0<x<1$ and $1 \leq x \leq 2$, where $0<x<1 \implies |x-1| = 1-x$ and $1 \leq x \leq 2 \implies |x-1| = x-1$
So, $$ |f(x)| = \begin{cases} 1, & -2 \leq x \leq 0 \\ 1-x, & 0<x<1 \\ x-1, & 1 \leq x \leq 2 \end{cases}$$
Now, we need to define $f(|x|)$. Here's what I do :
$$f(|x|) = \begin{cases} -1, & -2 \leq x \leq 0 \\ |x|-1, & 0<|x| \leq 2 \end{cases}$$
Now, is the next step that I do here correct or even necessary?
Now, $|x|$ can never be negative but can be zero when $x=0$. So, the condition $-2 \leq |x| \leq 0$ can be replaced by $x = 0$ from which we obtain the following definition for $f(|x|)$:
$$f(|x|) = \begin{cases} -1, & x=0 \\ |x|-1, & 0 < x \leq 2 \end{cases} = \begin{cases} |x|-1, & 0 \leq |x| \leq 2 \end{cases}$$
I did the last one because we observe that $0$ is mapped to $-1$ and $|0|-1 = -1$, so it will still be mapped to $-1$ if we put it in the second condition. So, basically, for any value of $x$ that is a part of domain of $f$, $f(|x|) = |x|-1$.
Now, we add $|f(x)|$ and $f(|x|)$ to obtain $g(x)$.
$$g(x) = |f(x)| + f(|x|) = \begin{cases} |x|-1+1, & -2 \leq x \leq 0 \\ |x|-1+1-x, & 0 < x < 1 \\ |x|-1+x-1, & 1 \leq x \leq 2\end{cases}$$.
In the first condition, the value of $x$ is either negative or $0$, so if it is negative, $|x| = -x$ and otherwise it is $0$, which is also equal to $-0$; this means that in the first condition, $|x|$ can be substituted by $-x$. In the second and third conditions, the value of $x$ is always positive, so $|x| = x$. So, we can substitute $|x|$ by $x$ in the second and third conditions and arrive at the definition of $g(x)$, that is:
$$g(x) = \begin{cases} -x-1+1, & -2 \leq x \leq 0 \\ x-1+1-x, & 0 < x < 1 \\ x-1+x-1, & 1 \leq x \leq 2 \end{cases} = \begin{cases} -x, & -2 \leq x \leq 0 \\ 0, & 0 < x < 1 \\ 2x-2, & 1 \leq x \leq 2 \end{cases}$$
Now, I want to know if this process is correct and if there is some alternative, better approach to this problem.
Sorry for the long post, I thought that showing my line of reasoning would make the question better.
Thanks
AI: Your final answer appears to be correct (though I have not checked all the steps of your reasoning). Alternatively, you may try to exploit symmetry by dividing into three cases: $x=0$, $-2\leq x<0$, and $0<x\leq 2$. Then it is easy to see how to get $g(x)$: for $x$ in the second case, $|x|$ belongs to the third case, so $f(|x|)$ is already computed there. The rest is left to you.
|
H: Show the feasible set of convex functions is convex.
Problem Statement
Let $g_{1},\dots, g_{M}$ be convex functions and let $b_{1},\dots, b_{M}$ be real numbers. Show that $F:= \{ \boldsymbol{x} \in \mathbb{R}^{d} : g_{i}(\boldsymbol{x}) \leq b_{i}, $ for all $i = 1,\dots , M \}$ is a convex set.
Attempted Proof
We endeavour to prove that the given set, $F:= \{ \boldsymbol{x} \in \mathbb{R}^{d} : g_{i}(\boldsymbol{x}) \leq b_{i}\ \forall i = 1,\dots , M \}$, is convex, where $\{g_{i}\}_{i=1}^{M}$ are convex functions and $\{b_{i}\}_{i=1}^{M}$ are real numbers.
We know, from the definition of a convex set that the feasible set $F:= \{ \boldsymbol{x} \in \mathbb{R}^{d} : g_{i}(\boldsymbol{x}) \leq b_{i}, $ for all $i = 1,\dots , M \}$ is convex on the condition that all the functions $\{g_{i}\}_{i=1}^{M}$ are convex.
Explanation
I am trying to show that the set is convex by taking two points $\boldsymbol{x}$ and $\boldsymbol{y}$ which are in the set and showing that for any $\lambda \in [0,1]$, the point $\lambda \boldsymbol{x} + (1-\lambda) \boldsymbol{y}$ is also in the set, where a point is in the set if it satisfies all the constraints.
The problem is that I am struggling to turn my first attempt at a proof into a more rigorous argument using two points in the set as described above. I would like to use the definition of a convex function, i.e. a convex function $f$ is one that satisfies, for $0 \leq \lambda \leq 1$,
\begin{eqnarray*}
f(\lambda \boldsymbol{x} +(1-\lambda)\boldsymbol{y} ) \leq \lambda f(\boldsymbol{x}) + (1-\lambda)f(\boldsymbol{y}) \text{.}
\end{eqnarray*}
I have found similar questions with regards to the sum or quotient of convex functions online but I have not come across a proof of this with a good solution so far. Could someone please show me how they would formulate this proof.
This question was similar:
From convex set to convex function
AI: Let $g_1, g_2, \ldots, g_M$ be convex functions from $\mathbb{R}^d \to \mathbb{R}$ and $b_1, b_2, \ldots, b_M \in \mathbb{R}$. We want to show that $F := \{ x \in \mathbb{R}^d : g_i(x) \leq b_i, \, i \in 1, 2, \ldots, M\}$ is convex.
To start, suppose that $x_1 \in F$ and $x_2 \in F$ are arbitrary points in $F$ and that $t \in (0,1)$. We want to show that $x_3 = tx_1 + (1-t)x_2 \in F$.
First, consider an arbitrary $i \in 1,2,\ldots, M$. By definition of the convexity of $g_i$, we know that $g_i(x_3) = g_i(tx_1 + (1-t) x_2) \leq tg_i(x_1) + (1-t)g_i(x_2)$. By definition of $F$, we know that $tg_i(x_1) + (1-t)g_i(x_2) \leq tb_i + (1-t)b_i = b_i$. Thus we have that $g_i(x_3) \leq b_i$, and we can repeat this argument for all $i$. Thus $x_3 \in F$. Hence $F$ is a convex set.
|
H: Limit of sequence of series $a_n=\frac{1}{n}\sum_{j=1}^n\sqrt{\frac{j}{n}}$
Given the series (sequence of series, really) $$a_n = \frac{1}{n}\sum_{j=1}^n \sqrt{\frac{j}{n}}$$
what is its limit as $n\rightarrow \infty$? (I happen to know its 2/3, but how does one show it?)
(Note I've seen a few solutions using something called 'Generalized Harmonics' - these, however, do not figure in any of my text books, so I'd like to avoid using them, if possible)
P.s. I realize another recent post deals with the same sequence (I take it the questioner goes to the same university as I). But that post was with regards to showing $a_n$ to be an upper sum of $\int_0^1 \sqrt{x}$, which is an issue of partitions and suprema. I felt my question was sufficiently different that I did not want to relegate it to a comment in the other thread.
AI: The norm of a partition $P = \{ a = x_0 < x_1 < \dotsc < x_n = b \}$ of an interval $[a, b]$ is defined as $\Vert P \Vert = \max \{x_i - x_{i-1} : i = 1, \dotsc, n \}.$
Given a bounded function $f \colon [a, b] \to \mathbb{R}$ and a partition $P = \{ a = x_0 < x_1 < \dotsc < x_n = b \}$ of $[a, b],$ for each choice of values $s_i \in [x_{i-1}, x_i],$ the sum
$$\sum_{i=1}^n f(s_i)(x_i - x_{i-1})$$
is said to be a Riemann sum of $f$ associated to $P.$
One can prove that if $f$ is Riemann-integrable in $[a, b]$ and $(P_n)$ is a sequence of partitions of $[a, b]$ with $\Vert P_n \Vert \to 0$ as $n \to \infty,$ then for any sequence $(\mathcal{S}_n)$ of Riemann sums of $f$ with $\mathcal{S}_n$ associated to $P_n$ for each $n,$ it holds that
$$\lim_{n \to \infty} \mathcal{S}_n = \int_a^b f.$$
In your case, just apply this result to the function $f \colon [0, 1] \to \mathbb{R}$ given by $f(x) = \sqrt{x}$ (which is continuous, hence Riemann integrable), the sequence of partitions $(P_n)$ with
$$P_n = \left\{ \frac{i}{n} : i = 0, \dotsc, n \right\} = \left\{ 0 < \frac{1}{n} < \frac{2}{n} < \dotsb < 1 \right\}$$
(which satisfies $\Vert P_n \Vert = 1/n \to 0$) and the choice of values $s_i = i/n \in [(i-1)/n, i/n]$ for each $n.$
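Numerically, the convergence to $\int_0^1\sqrt x\,dx = 2/3$ is easy to watch (my own addition):

```python
import numpy as np

for n in (10, 100, 10_000):
    j = np.arange(1, n + 1)
    print(n, np.sqrt(j / n).mean())   # a_n = (1/n) * sum sqrt(j/n) -> 2/3
```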
|
H: Prove the following two statements about differentiability are equivalent
Let $E$ be a subset of $\textbf{R}$, $f:E\to\textbf{R}$ be a function, $x_{0}\in E$, and $L\in\textbf{R}$. Then the following two statements are equivalent
(a) $f$ is differentiable at $x_{0}$, and $f'(x_{0}) = L$
(b) We have $\displaystyle\lim_{x\to x_{0};x\in E-\{x_{0}\}}\frac{|f(x) - (f(x_{0}) + L(x-x_{0}))|}{|x-x_{0}|} = 0$
My solution
According to the definition of derivative, one has that
\begin{align*}
\lim_{x\to x_{0}}\frac{f(x) - f(x_{0})}{x-x_{0}} = L = \lim_{x\to x_{0}} L & \Leftrightarrow \lim_{x\rightarrow x_{0}}\left(\frac{f(x) - f(x_{0})}{x-x_{0}} - L\right) = 0\\\\
& \Leftrightarrow \lim_{x\rightarrow x_{0}}\frac{f(x) - (f(x_{0}) + L(x-x_{0}))}{x-x_{0}} = 0\\\\
& \Leftrightarrow \lim_{x\rightarrow x_{0}}\frac{|f(x) - (f(x_{0}) + L(x-x_{0}))|}{|x-x_{0}|} = 0
\end{align*}
Could someone tell me if it is correct to make use of equivalences, or should I break the proof into two implications?
AI: There are many equivalent ways to define differentiability.
If (as I suppose) your definition of "$f$ is differentiable at $x_0$" is that the limit $$\lim_{x\to x_0}\frac{f(x)-f(x_0)}{x -x_0}$$ exists and is denoted by $f'(x_0)$, then yes, your proof is correct.
Note that in your last equivalence you are implicitly using two (obvious) facts:
for any $a \in \mathbb{R}$, if $|a| = 0$ then $a = 0$;
$\lim_{x \to x_0} |g(x)| = |\lim_{x \to x_0} g(x)|$.
|
H: Operator norm inequality $\|XY\|\geq\frac{\|X\|}{\|Y^{-1}\|}$
Let $X, Y$ and $Y^{-1}$ be linear operators on a normed space. How to prove the inequality $$\|XY\|\geq\frac{\|X\|}{\|Y^{-1}\|}?$$
I already know that $\|XY\|\leq\|X\|\|Y\|$ but I don't see how I can use this fact right now.
AI: Simply note that $$\|XY\|\cdot\|Y^{-1}\|\geq \|XYY^{-1}\|=\|X\|,$$ via the operator norm inequality.
|
H: Why are the $p$-oldforms $f(z)$ and $f(pz)$ linearly independent at level $\Gamma_0(pN)$?
Let $f$ be a newform (normalized eigenform) of weight $k$ and level $\Gamma_0(N)$. Fix $p$ not dividing $N$ and set $f_p(z)=f(pz)$.
Viewing $f$ and $f_p$ at level $\Gamma_0(pN)$, why are they linearly independent?
Here is what I'm thinking: Suppose there is some nontrivial relation
$$
af+bf_p=0.
\tag{1}$$ The Hecke eigenvalues for $f$ are the same at level $pN$, except possibly at $p$. Thus, for any $\ell\neq p$, applying $T_\ell-a_\ell(f)$ to equation (1) shows that $T_\ell f_p=a_\ell(f)f_p$. Thus, $f$ and $f_p$ have the same Hecke eigenvalues outside $p$.
Now, at this point, I have heard mumblings that the "multiplicity one theorem" shows that $f=f_p$, but I can't seem to find a suitable reference. Simply Googling "multiplicity one" in this context leads to many high-brow results... Diamond and Shurman do not state the multiplicity one theorm in their book (my main reference thus far), but they do cite Miyake's book as a reference. I looked up the theorem there (I believe it is Theorem 4.6.12), and it seems to require that $f$ and $f_p$ both be of level $N$, which (I am fairly certain) they are not. Can someone clarify this for me and possibly provide a good reference for the classical multiplicity one theorem?
AI: From the Fourier series it is obvious that $f \ne cf(pz)$.
$$f(z)=\sum_n a(n) e^{2i\pi n z},\qquad f(pz)=\sum_n a(n) e^{2i\pi np z}=\sum_n a(\frac{n}p)1_{p|n} e^{2i\pi n z}$$
The sequence $a(n)$ is not a scalar multiple of $a(\frac{n}p)1_{p|n}$.
The Fourier series is uniquely defined by the function : $$a(n)=\int_{iy}^{iy+1}f(z)e^{-2i\pi nz}dz$$
|
H: How do I prove that the sum of reciprocals of the first $2^n$ natural numbers is always greater than $\frac{n+1}{2}$
How can I prove this inequality
\begin{equation}
\sum_{r=1}^{2^n} \frac{1}{r}\geq \frac{n+1}{2}
\end{equation}
without using induction? I want to gain some insight into summation inequalities.
AI: For non-negative integers $j$ we have,
$$\sum_{k=1}^{2^j} \frac{1}{2^j+k}\geq\sum_{k=1}^{2^j}\frac{1}{2^{j+1}}=\frac{2^j}{2^{j+1}}=\frac{1}{2}$$
Now, for $n\geq1$,
$$\sum_{r=1}^{2^n}\frac{1}{r}=1+\sum_{j=0}^{n-1}\left( \sum_{k=1}^{2^j}\frac{1}{2^j+k}\right)\geq\left(1+\frac{n}{2}\right)=\frac{n+2}{2}$$
Equality holds for $n=1$.
This was the key observation of Nicole Oresme in his proof of divergence of the harmonic series $$\sum_{m=1}^{\infty}\frac{1}{m}$$
Check this link https://en.m.wikipedia.org/wiki/Nicole_Oresme
Probably this post will help Denis James.
|
H: Calculating $E[Z(Z-1)(Z-2)(Z-3)]$ where $Z$ is Poisson
If $Z$ is a random variable that follows a Poisson distribution, with expected value $E(Z) = 2.5$.
I need to solve:
$E[Z(Z-1)(Z-2)(Z-3)]$
So What I thought to do is first:
$E[(Z^2-Z)(Z^2-2Z)(Z^2-3Z)]$
$[E(Z^2)-E(Z)][E(Z^2)-2E(Z)][E(Z^2)-3E(Z)]$
From here I'm not sure how I can insert the $E(Z) = 2.5$ if what I get in the end is $E(Z^2)$.
AI: The Poisson distribution has probability generating function $G_Z(t):=E(t^Z)=\sum_{k\ge0}e^{-\lambda}\frac{(t\lambda)^k}{k!}=e^{\lambda(t-1)}$. Differentiating $4$ times,$$E(Z(Z-1)(Z-2)(Z-3)t^{Z-4})=G_Z^{(4)}(t)=\lambda^4e^{\lambda(t-1)}.$$Set $t=1$ so $E(Z(Z-1)(Z-2)(Z-3))=\lambda^4=39.0625$.
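A Monte Carlo spot-check of this value (my own addition):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.poisson(2.5, size=2_000_000).astype(float)
print((z * (z - 1) * (z - 2) * (z - 3)).mean(), 2.5**4)   # both about 39.06
```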
|
H: On $\sum_{x=0}^\infty x^n r^x$
I am looking for either a closed form or recursive expression for $$\sum_\limits{x=0}^\infty x^n r^x\ $$ that does not include differential operators, where $n \in \mathbb N_0$. It is clear to me that $$\sum_\limits{x=0}^\infty x^n r^x = r \cfrac{\text d}{\text dr}\sum_\limits{x=0}^\infty x^{n-1} r^x \ $$ and from this I found some patterns in the first few expressions:
$$\sum_{x=0}^\infty xr^x = \cfrac r {(1-r)^2} \\ \sum_{x=0}^\infty x^2 r^x = \cfrac {r+r^2} {(1-r)^3} \\ \sum_{x=0}^\infty x^3 r^x = \cfrac {r^3+4r^2+r} {(1-r)^4} \\ \sum_{x=0}^\infty x^4 r^x = \cfrac {r^4+11r^3+11r^2+r} {(1-r)^5} \\ \sum_{x=0}^\infty x^5 r^x = \cfrac {r^5+26r^4+66r^3+26r^2+r} {(1-r)^6} \\ \sum_{x=0}^\infty x^6 r^x = \cfrac {r^6 + 57r^5+302r^4+302r^3+57r^2+r} {(1-r)^7} \\ \sum_{x=0}^\infty x^7 r^x = \cfrac {r^7+120r^6 + 1191 r^5 + 2416r^4+1191r^3+120r^2+r} {(1-r)^8}$$
It's interesting that the denominator increases by a factor of $1-r$ each time, and another interesting thing is that the coefficients of row $n$ add up to $n!$. I haven't been able, though, to leverage this information in a way that spits out the individual coefficients. Is my goal of finding a closed form/recursive formula for these possible? Thanks.
AI: Terminology ... polylogarithm
$$
\operatorname{Li}_s(z) = \sum_{k=1}^\infty \frac{z^k}{k^s}
$$
So your question asks about the polylogarithm $\operatorname{Li}_{-n}(r)$ (for $n \ge 1$ the $x=0$ term of your sum vanishes, so the two agree).
That page shows $\operatorname{Li}_s(z)$ for $s=-1,-2,-3,-4$, the first four
examples you computed.
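A numeric spot-check of, say, the $n=3$ closed form from the question (my own addition):

```python
import numpy as np

r = 0.4
x = np.arange(400, dtype=float)       # the tail beyond 400 is negligible here
lhs = np.sum(x**3 * r**x)
rhs = (r**3 + 4 * r**2 + r) / (1 - r)**4
print(lhs, rhs)                       # agree to machine precision
```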
|
H: $(x,y,1)$ is perpendicular to $(1,2,-7)$. What is the relationship between $x$ and $y$?
$(x,y,1)$ is perpendicular to $(1,2,-7)$. What is the relationship between $x$ and $y$?
I think that there is no relationship between $x$ and $y$ because they could be any number. I know that the dot product of the two vectors must be $0$ but that hasn't helped me get anywhere.
AI: Given two non-null vectors $u,v$, they are orthogonal ("perpendicular") if and only if
$$u\cdot v=0.$$
So given $(x,y,1)$ and $(1,2,-7)$, that gives us the equation:
$$(x,y,1)\cdot (1,2,-7) = x + 2y -7=0$$
So the relationship between $x$ and $y$ is
$$\boxed{x=7-2y}$$
|
H: Is this representation isomorphic to the group?
Let's consider a representation $\rho: G \rightarrow GL(V)$ such that there is a bijection between the linear operators in the image of $\rho$ and the elements of $G$. Is the image of $\rho$ isomorphic to $G$?
AI: No. Consider the representation $\rho:GL(n,\Bbb C)\times \Bbb C^*\to GL(n,\Bbb C)$ sending $(A,z)\mapsto zI_n$. $GL(n,\Bbb C)\times\Bbb C^*$ and the image of $\rho$ have the cardinality of $\Bbb C$, but the former is not abelian when $n>1$.
|
H: Show that $\lvert \lvert (x,y) \rvert \rvert \rightarrow \infty$ implies $f(x,y) = x^2-4xy+4y^2+y^4-2y^3+y^2 \rightarrow \infty$
I am working on the following exercise:
Consider the function
$$f(x,y) = x^2-4xy+5y^2+y^4-2y^3.$$
Show that $$\lvert \lvert (x,y) \rvert \rvert \rightarrow \infty \quad \Longrightarrow \quad f(x,y) \rightarrow+\infty.$$
I do not see how I could prove this in a "nice way". Is there any way to avoid a lot of case distinctions, so to avoid considering each case like
$$x \rightarrow +\infty, y \rightarrow -\infty $$
$$x \rightarrow -\infty, y \rightarrow -\infty $$
and so on separately?
AI: Hint:
$$f(x,y) = x^2-4xy+4y^2+y^4-2y^3+y^2=(x-2y)^2+y^2(y-1)^2$$
If $||\cdot||$ is the Euclidean norm, then $||(x,y)||^2=x^2+y^2$ and easily we can see that $$f(x,y)=||(x-2y,y-y^2)||^2=||(x,y)-(2y,y^2)||^2\geq(||(x,y)||-||(2y,y^2)||)^2$$ (by the reverse triangle inequality).
|
H: Simple function that returns a Gaussian curve?
I need a simple function whose output is a Gaussian curve, as a map $\mathbb{R} \to [0,1]$.
Any tips? Thanks.
AI: $$f_\sigma(x)=e^{-\frac{x^2}{2\sigma^2}}$$
Here $\sigma$ is controlling the width of the Gaussian
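In code this is a one-liner; here is a sketch (my own addition; note the range is really $(0,1]$, since the exponential never reaches $0$):

```python
import numpy as np

def gaussian(x, sigma=1.0):
    """Bell curve mapping R into (0, 1], with peak value 1 at x = 0."""
    return np.exp(-x**2 / (2 * sigma**2))

print(gaussian(np.array([-2.0, 0.0, 2.0])))
```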
|
H: If $\sum_{n>0}^\infty c_{n}z^{n}$ has a radius of convergence R then the series $\sum_{n>0}^\infty \frac {c_n}{n!}z^n$ has a radius of convergence $+\infty$.
Suppose the series $\sum_{n>0}^\infty c_{n}z^{n}$ has a radius of convergence $R \in \mathbb{R}^{+}$. Justify that the series $\sum_{n>0}^\infty \frac {c_n}{n!}z^n$ has a radius of convergence $+\infty$.
I need to use this fact for a theorem that I am proving but could not prove it
AI: There are many ways to see why this holds. Let us just work with the fundamentals. Think for a moment: what is it that you are trying to show? You want to show that no matter what $x\in \mathbb{C}$ you choose, the series $\sum\limits_{n}|c_n|\frac{|x|^n}{n!}<\infty.$
Given all that we know, let us just try to bound this series (hoping that we will succeed). The first thing to note is that if I fix $r<R,$ then $\sum_n |c_n|r^n<\infty$ (why?). Using this, let us rewrite our series as follows.
\begin{align*}
\sum_n |c_n|\frac{|x|^n}{n!}&=\sum_n (|c_n|r^n)\frac{(|x|/r)^n}{n!}.
\end{align*}
Take a pause at this moment and stare at the last expression for a minute or two. It will dawn on you that for sufficiently large $n,$ we have $\frac{(|x|/r)^n}{n!}<1.$
As a consequence, you can now see that, except for at most finitely many terms, we have $|c_n|\frac{|x|^n}{n!}\le |c_n|r^n.$ Since we already know that $\sum_n|c_n|r^n<\infty,$ it follows that $$\sum_n |c_n|\frac{|x|^n}{n!}< \infty.$$
There are of course slicker ways to see why this is true, but I prefer doing things from first principles whenever possible because it is more revealing. And after a while you won't need to worry about these kinds of issues, because you will just see it.
|
H: throwing 2 dice probability
I am throwing $2$ dice and I want the sum of the two dice to be $> 8$.
Each die shows $1$ to $6$, so to get a sum $>8$ we need $9$ to $12$:
$ 6 + 6$ $ 5 + 6$ $ 4 + 6$
$ 6 + 5$ $ 5 + 5$ $ 4 + 5$
$ 6 + 4$ $ 5 + 4$ $ 3 + 6$
$ 6 + 3$
so it should be that you only have $4$ options out of $6$ for the first dice and then it should be $2$ for the other dice because we can always find $2$ numbers to combine with.
For example, if we land on a $4$ we have either $5$ or $6$ to combine with; however, if we land on a $6$ we have $4$ options to combine with, so the count of $2$ is not always right.
$4/6 * 2/6 = 8/36$
In my book it says it should be $10/36$ what am I doing wrong ?
AI: You are correct that there are four possibilities for the first roll ($3$, $4$, $5$, or $6$). But then depending on the first roll there are different numbers of possibilities for the second roll. If you roll $3$ on the first roll, the only possibility for the second roll is $6$. If you roll $4$ on the first roll, then the second roll could be $5$ or $6$. And so on.
Try to finish it up from here!
|
H: Deriving polar coordinate form of ellipse. Issue with length of a distance to a foci.
I am reading through Spivak on how to obtain the polar coordinate form of the ellipse. I'm given the following diagram:
All I'm trying to do is establish that the distance between $(x,y)$ and $(-2\epsilon a, 0)$ is as is stated, i.e: $2a - r$
I didn't think this would be much of an issue. I actually attempted to apply the usual trigonometric ideas on right triangles getting the expression:
$$(x - (-2 \epsilon a))^{2} + y^{2}$$
expand things out and take a square root, and voilà, I would be done... But that is not actually happening. To get rid of $\epsilon$ I could just assume it to be $1$, but when I work out the algebra it leaves me stuck with the following:
$$4a^{2} + 4xa + x^{2} + y^{2}$$
I can see where the $r$ term will appear, but it won't be a negative. As well I don't see how it will simplify to $2a$.
I did try to reverse engineer things by using the expression $(2a - r)^{2}$ to try and derive the original expression, but what I got from that derivation was:
$$(2a - r)^{2} = 4a^{2} - 4ar + r^{2}$$
where $r^{2} = x^{2} + y^{2}$
Messing around with the first and second expressions and subtracting them from one another I am left with notion that $x = -r$
But based on everything I did in the first expression I don't see how that can come about... Some help would be appreciated.
AI: In an ellipse the sum of the distances from the foci is constant.
Traditionally we say that this distance is $2a.$ And $2a$ is also the length of the major axis.
The fact that the distance from this focus to $(x,y)$ is $2a-r$ is a direct consequence of the definition.
$\epsilon$ is not going to cancel out in the algebra. $\epsilon$ defines the shape of the ellipse, and $a$ defines the size. As $\epsilon$ approaches $1$ the ellipse becomes more and more stretched out; at $\epsilon = 1$ the curve can no longer close and is, in fact, a parabola.
As $\epsilon$ approaches $0$ the foci move right next to each other and the figure becomes a circle.
Where do you go from here?
$(x+2\epsilon a)^2 + y^2= (2a - r)^2\\
x^2 + y^2 + 4x\epsilon a + 4\epsilon^2 a^2 = 4a^2 - 4ar + r^2\\
x^2 + y^2 = r^2\\
x\epsilon + \epsilon^2 a = a - r\\
x\epsilon = a(1-\epsilon^2) - r\\
x = r\cos \theta\\
r(\epsilon\cos\theta + 1) = a(1-\epsilon^2)\\
r = \frac {a(1-\epsilon^2)}{\epsilon\cos\theta + 1}$
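A quick numeric check of this polar form (my own addition; the values of $a$, $\epsilon$, $\theta$ are arbitrary) confirms the focal-distance property it was derived from:

```python
import numpy as np

a, eps, theta = 2.0, 0.5, 1.1
r = a * (1 - eps**2) / (1 + eps * np.cos(theta))
x, y = r * np.cos(theta), r * np.sin(theta)
# distance from (x, y) to the second focus (-2*eps*a, 0) should be 2a - r
print(np.hypot(x + 2 * eps * a, y), 2 * a - r)   # both about 2.777
```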
|
H: Proving intersection of POsets is a POset - Reflexive
I need to prove that the intersection of 2 POsets R and S is a POset.
So we basically want to prove that if $R$ and $S$ are POsets then $R \cap S$ is reflexive, transitive and anti-symmetric.
The problem is in the reflexive, I proved it like this:
We need to prove that $\forall a \in A ~~~ <a,a> \in R \cap S$
Because $R$ and $S$ are POsets, if $R \cap S = \emptyset$ then the intersection is reflexive (vacuously)
Else, because $R$ and $S$ are POsets, if $<a,b> \in R \cap S$ then $<a,a> \in R$ and $<b,b> \in S$ because they are reflexive.
And thus $<a,a> , <b,b> \in R \cap S$ and thus it is reflexive.
In the comments the professor said: Reflexive is not correct, you
wrote that one thing is in R and other thing is in S so it is in the
intersection (?) (-4 pts)
I don't know why I am wrong here, I would appreciate if you could point me to the mistake
Note: this does not have to be 'well written', we are first year students..I just need to find the logical mistake, not the semantic.
Thank you!
AI: Hint: from the comments, there is no constraint on the underlying sets of the posets $R$ and $S$. As you have noted, you can prove reflexivity and antisymmetry of $R \cap S$ without constraining the underlying set (or field) of this relation. For reflexivity, what you have to prove is that for every $x$ in the field of the relation $R \cap S$, $\langle x, x \rangle \in R \cap S$. Your argument is right if you define $A$ to be the field of $R \cap S$ (although you might want to improve the phrasing - you have correctly identified the conditions for $a$ or $b$ to be in the field of $R \cap S$). I don't think your professor's comments are very helpful: the right comment is "what's $A$?". Maybe he or she thought they'd asked a question about posets $R$ and $S$ over the same field, but, apparently, they did not.
|
H: Sum of orthogonal projection operators
Let $X$ be a Hilbert space. I am trying to see that if $P$ and $Q$ are orthogonal projection operators, then the following are equivalent:
$(1)\ \operatorname{Im} Q\subseteq \operatorname{Ker} P$
$(2)\ P+Q$ is an orthogonal projection operator.
I was able to do $(1) \implies (2)$, but I can't seem to manage the other direction, so any tips are appreciated. What I tried revolves around this:
From the condition that $P+Q$ is a projection operator I know that $PQ=-QP$, but I can't seem to see why this will have to be zero. I don't know if looking at the decompositions of the space $X$ would be the right way to approach this; I also tried using the fact that the operator is positive, but got nowhere.
Also, another thing I am trying to see is that $Im(P+Q)$ is the closure of the subspace generated by $Im P$ and $Im Q$, and any advice here would be appreciated too. I was able to show that $cl(span\{Im P+Im Q\})\subseteq Im(P+Q)$.
Thanks in advance !
AI: Let us take $f\in Im(Q)$ and hence $Qf=f$ because $Q$ is a projection. Now apply $P$ on both sides to obtain $$Pf=PQf=-QPf.$$
Apply $Q$ on both sides and use that $Q^2=Q$ to obtain $QPf=-QPf.$ Which gives $QPf=0$ and hence $PQf=0.$ Now again using the fact that $Qf=f$ we get $Pf=0$ which yields the claim.
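For intuition, here is a minimal finite-dimensional illustration in Python (the dimension and the one-dimensional ranges are arbitrary choices, not part of the general proof):

    import numpy as np

    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # orthonormal columns
    u, v = U[:, :1], U[:, 1:2]                        # u is orthogonal to v

    P = u @ u.T   # orthogonal projection onto span{u}
    Q = v @ v.T   # orthogonal projection onto span{v}, so Im(Q) lies in Ker(P)

    S = P + Q
    print(np.allclose(S @ S, S))  # True: P + Q is again a projection
    print(np.allclose(P @ Q, 0))  # True: PQ = 0, matching the argument above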
|
H: Rademacher complexity of linear function class with compact parameter space and support
Consider the class of functions $\mathcal{F}$ of the form $f_\theta(x) = \langle x,\theta \rangle$, where $\theta \in \{\theta \in R^d : ||\theta||_1 \leq r\}$ and $x$ has support $\{x \in R^d : ||x||_\infty \leq M\}$.
Give an upper bound for the Rademacher complexity:
$$R_n(\mathcal{F}) = \frac{1}{n} E \sup_{||\theta||_1 \leq r} \left|\sum_i^n \epsilon_i \langle x_i, \theta \rangle\right|$$
where the $\epsilon_i$ are symmetric i.i.d. Bernoulli (Rademacher) variables on $\{-1,1\}$.
My approach:
Since the $\ell_1$ and $\ell_\infty$ norms are dual norms, we have the bound on the inner product $|\langle x_i, \theta \rangle| \leq ||x_i||_\infty||\theta||_1 \leq Mr$.
And by Jensen's inequality:
$$E\left| \sum_i^n \epsilon_i \right| \leq \sqrt{E\left( \sum_i^n \epsilon_i \right)^2} = \sqrt{n}$$
Where the equality follows by independence.
So we have the final upper bound: $Mr/\sqrt{n}$ which is better than the solution found here: http://web.stanford.edu/class/stats300b/Exercises/Solutions/ex6-sol.pdf (problem 7.3b) which contains an additional $\sqrt{\log 2d}$ factor (by an argument around subgaussianity).
I could use some feedback on my derivation.
AI: In the derivation above, the step $|\sum_i \epsilon_i \langle x_i, \theta\rangle| \leq ||\theta||_1 ||x||_\infty |\sum_i \epsilon_i|$ is not valid: the inner products $\langle x_i, \theta\rangle$ depend on $i$ and can be negative, so a single uniform factor cannot be pulled out of the signed sum. This is exactly where the additional $\sqrt{\log 2d}$ factor in the cited solution comes from.
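To see the failure concretely, here is a minimal counterexample (the numbers are an arbitrary illustration with $d = 1$, $\theta = 1$):

    x = [1.0, -1.0]          # x_1, x_2 with mixed signs
    eps = [1, -1]            # one realization of the Rademacher signs
    theta = 1.0

    lhs = abs(sum(e * xi * theta for e, xi in zip(eps, x)))   # |sum_i eps_i <x_i, theta>|
    rhs = abs(theta) * max(abs(xi) for xi in x) * abs(sum(eps))
    print(lhs, rhs)          # 2.0 and 0.0: lhs > rhs, so the proposed bound fails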
|
H: Let $X,Y\sim U(0,1)$ and suppose $X$ and $Y$ Are independent. How to find explicitly the distribution of $Z=XY$ Using conditional expectation?
I found this problem interesting but I don't see clearly how to solve it. Any hint or ideas on how to get this one started?
AI: For example like that:
Take any $t \in (0,1)$. Then: $$\mathbb P(XY \le t) = \mathbb E [ \mathbb P(XY \le t\mid Y)] = \mathbb E\big[\mathbb P(Xy \le t)\big|_{y=Y}\big] = \mathbb E\Big[\Big(\tfrac{t}{y}1_{\{y \ge t\}}+1_{\{y<t\}}\Big)\Big|_{y=Y}\Big]$$
$$= \mathbb E\Big[\tfrac{t}{Y}1_{\{Y \ge t\}} + 1_{\{Y < t\}}\Big] = t \int_{t}^1\frac{dy}{y}+\int_0^t dy = t-t\ln(t)$$
Which gives us $F_Z(t) = \begin{cases} 0 & t \le 0 \\ t-t\ln(t) & t \in(0,1) \\ 1 & t \ge 1\end{cases}$
Where $1_A$ means the indicator function and I used the fact that $\mathbb E[f(X,Y)|Y] = \mathbb E[f(X,y)]|_{y = Y}$ when $X,Y$ are independent and $f$ is such that $\mathbb E[|f(X,Y)|] < \infty$.
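A quick Monte Carlo check of this CDF (the sample size and the test points are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10**6
    z = rng.random(n) * rng.random(n)   # X*Y with X, Y iid U(0,1)

    for t in (0.1, 0.5, 0.9):
        print(t, (z <= t).mean(), t - t * np.log(t))  # empirical vs t - t*ln(t)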
|
H: Area between curves with respect to $x$ or $y$
Sketch the region enclosed by the curves given below. Decide whether to integrate with respect to $x$ or $y$. Then find the area of the region. $y=3\cos(x)$ , $y=3−\frac{6x}{\pi}$
Need help with solving this question. I did $\int_0^{\pi/2} [3\cos(x)- (3−\frac{6x}{\pi})]dx$ and got the answer $0.6438$, but it says the answer is wrong (I calculated in radians).
Any tips on where I am wrong?
AI: You are missing the other bounded region, from $x=\pi/2$ to $x=\pi$.
Notice that the line $y=3-\frac{6x}{\pi}$ intersects the curve $y=3\cos x$ at three distinct points: $x=0,\frac{\pi}{2}, \pi$. Therefore you can split the bounded area into two parts and integrate with the proper limits as follows:
$$\int_0^{\pi/2}(3\cos x-3+\frac{6x}{\pi})\ dx+\int_{\pi/2}^{\pi}(3-\frac{6x}{\pi}-3\cos x)\ dx$$
$$=3-\frac{3\pi}{4}+3-\frac{3\pi}{4}=6-\frac{3\pi}{2}\approx 1.28761102$$
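A numeric check of the two pieces (note the first piece is $3-\frac{3\pi}{4}\approx 0.6438$, the number you originally obtained):

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: 3 * np.cos(x) - (3 - 6 * x / np.pi)
    a1, _ = quad(f, 0, np.pi / 2)                    # first region, ~0.6438
    a2, _ = quad(lambda x: -f(x), np.pi / 2, np.pi)  # second region
    print(a1 + a2, 6 - 3 * np.pi / 2)                # both ~1.28761102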
|
H: General rule of thumb to convert between number systems.
I would like to know what is a general rule of thumb to convert between the following number systems:
Decimal
Octal
Hexadecimal
Binary
I know if I want to convert from octal to hexadecimal, I first convert the octal number to binary with groups of 3 bits and then I regroup the binary number to groups of 4 bits, starting from the right to left. Finally, I convert the binary number to hexadecimal by adding up the bit values.
I know if I want to convert from decimal to binary, I can just divide the decimal number by 2 and keep track of the remainder until the quotient is 0. The answer will be the remainders from the most significant bit to the least significant bit.
Because there are so many number systems to convert between, decimal to binary to octal to hexadecimal etc, I sometimes forget which method to use for which conversion.
I figured that if I need to convert from decimal, I just need to divide by the base of the number system which I'm converting to. For example, from decimal to binary, divide by 2 and keep track of the remainders for each calculation, and from decimal to octal divide by 8, etc. This makes it easy to understand if I convert from decimal. I need some rule like this for all conversions.
How can I always know which method to use? Is there a general rule of thumb?
Thank you.
AI: The conversions between binary, octal, and hex can be done by groups of digits like you have done because those bases are all powers of $2$. The hardest is between octal and hex because four octal digits correspond to three hex digits, but you can either keep track of that or go through binary.
Generally, any other base conversion must be done with one of the two methods used between decimal and binary: either divide repeatedly by the target base and keep track of the remainders, or subtract the largest power of the target base that fits into the number and keep track of which powers you used. These also work between the bases that are powers of two, but the special method of keeping groups of bits together is easier.
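Here is the repeated-division method as a short Python sketch (the digit alphabet covers bases up to 16):

    DIGITS = "0123456789ABCDEF"

    def to_base(n: int, base: int) -> str:
        """Convert a non-negative integer to a digit string in the given base."""
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            n, r = divmod(n, base)    # quotient and remainder
            digits.append(DIGITS[r])  # remainders come least significant first
        return "".join(reversed(digits))

    print(to_base(255, 2), to_base(255, 8), to_base(255, 16))  # 11111111 377 FF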
|
H: Calculate maximum of Gini impurity
Gini impurity is defined as :
$$G = 1 - \sum_{i=1}^{k}p_i^2$$
In my book of statistics it's written that the maximum occurs when $p_1 = p_2 = \cdots = p_k = \frac{1}{k}$, but there is no derivation. How can one prove that the maximum occurs in this case? I tried something, but without success.
AI: Use the AM-QM inequality:
$\frac{\sum_{i=1}^k p_i^2}{k} \ge \left(\frac{\sum_{i=1}^k p_i}{k}\right)^2=\frac{1}{k^2}$ with equality when $p_1=p_2=\ldots=p_k=\frac{1}{k}$.
Since $\sum_{i=1}^k p_i = 1$, this gives $\sum_{i=1}^k p_i^2 \ge \frac{1}{k}$, hence $G = 1-\sum_{i=1}^k p_i^2 \le 1-\frac{1}{k}$, with equality exactly at the uniform distribution.
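A quick numeric check (random probability vectors drawn from a Dirichlet distribution; $k=5$ is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(0)
    k = 5
    gini = lambda p: 1 - np.sum(p**2)

    p_random = rng.dirichlet(np.ones(k), size=10000)
    print(max(gini(p) for p in p_random))  # always < 1 - 1/k
    print(gini(np.full(k, 1 / k)))         # the maximum 1 - 1/k = 0.8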
|
H: Recovering the covariance matrix from directional variance
How can one find the covariance matrix $K$ (in some basis) of dimension $n$, given the projected variance of $K$ along $V$ for every $n$-dimensional vector $V$?
In other words, if we know the variance along any direction, how can a covariance matrix be recovered?
AI: Your question is equivalent to "if I know the value of $v^\top K v$ for every vector $v$, how can I find $K$?" (Check that you understand that this is equivalent to what you are asking.)
Then the $(i,i)$th entry of $K$ is $e_i^\top K e_i$ where $e_i$ is the $i$th standard basis vector.
Note also that $(e_i + e_j)^\top K (e_i + e_j) = e_i^\top K e_i + e_j^\top K e_j + 2 e_i^\top K e_j$ which allows you to solve for $e_i^\top K e_j$, which is the $(i,j)$ entry of $K$.
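A sketch of this polarization recovery in Python (the hidden $K$ here is a random symmetric positive semidefinite matrix, purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    K = A @ A.T                  # the hidden covariance matrix
    q = lambda v: v @ K @ v      # the directional-variance oracle we are given

    n = K.shape[0]
    I = np.eye(n)
    K_rec = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # q(e_i + e_j) = q(e_i) + q(e_j) + 2 e_i^T K e_j
            K_rec[i, j] = (q(I[i] + I[j]) - q(I[i]) - q(I[j])) / 2

    print(np.allclose(K_rec, K))  # True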
|
H: Can an average of an overall set be equal to the average of a subset?
I read in a publication that the average salary for lawyers in America is $\bar x=\$163,595$. Of these salaries, the average for men is $\bar x_m=\$183,687$, and for women it is $\bar x_w=\$163,595$. I'm thinking: how is it possible for the average of the women's salaries to equal the average of the entire set?
Note that $\bar x_w = \bar x < \bar x_m$, and $\bar x_w,\, \bar x,\, \bar x_m > 0$.
Can we prove that this is possible/impossible? Can we find 1 simple example where $\bar x_w = \bar x$ (given $\bar x_w,\, \bar x,\, \bar x_m > 0$, and $\bar x_m > \bar x$) is possible?
AI: This is the calculation behind Ned's comment.
Let $m$ be the number of men, $w$ the number of women and $a_m$, $a_w$ respective averages. Then, the overall average is
$$a = \frac{ma_m + wa_w}{m+w}.$$ If $a = a_w$, from the above we get $ma_m + wa_w = (m+w)a_w$, i.e. $m(a_m-a_w) = 0$. Thus, either $m = 0$ or $a_m = a_w$.
Moreover, if $a_m \geq a_w$, then
$$a_ w = \frac{ma_w + wa_w}{m + w} \leq \frac{ma_m + wa_w}{m + w} \leq \frac{ma_m + wa_m}{m + w} = a_m.$$
|
H: If $X$ is a random variable, and $Y= 2X$, then why isn't it enough to multiply the density function of $X$ by $2$ to find the density function of $Y$?
This may be a dumb question, and I've tried searching online for answers, but I can't seem to wrap my head around it.
So say I have a random variable $X$ and $Y = 2X$. Now I want to find the density function $f_Y(y)$. Why can't I just multiply $f_X(x)$ by $2$ to get it? I understand how to derive $f_Y(y)$ by taking the derivative of $F_Y(y) = P(Y \lt y)$, substituting $2X$ for $Y$, etc. But I don't understand why. What does $Y=2X$ actually mean? Isn't it simply multiplying all the values that $X$ is described by, which is $X$'s density function, by 2? Or is it doing something else entirely?
Again, sorry if this is a very basic question, just can't seem to fully grasp this concept.
AI: A good way of feeling this in your bones is to note the difference between $f(2x)$ and $2f(x)$. We are not at all guaranteed for these to equal one another.
When you consider the CDF $F_Y(y)$ it works that:
$$F_Y(y) = P(Y \le y) = P(2X \le y)=P(X\le y/2)=F_X(y/2)$$
Differentiating gives $f_Y(y) = \frac{1}{2} f_X\!\left(\frac{y}{2}\right)$: the argument gets rescaled and an extra factor of $\frac{1}{2}$ appears, which is exactly what naive multiplication misses.
A random variable is just a function. It takes things like events and stamps a number on it. If you get into sigma algebras and measure theory you’ll encounter this in detail.
Think of it like stamping numbers on cows in a field. Cowboy X randomly stamps numbers with certain likelihoods, say a 40% likelihood of stamping a 6. Cowboy Y stamps according to cowboy X, but he always stamps a number twice as large as what X stamped. When X stamps a 6, we know Y stamps a 12, and since the likelihood of X stamping a 6 is 40%, the likelihood of Y stamping a 12 is also 40%. We're not inherently changing the structure of the distribution with Y = 2X; we're just stretching the values out, just like when you take $f(x)$ and compute $f(2x)$.
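A simulation makes the factor of $\frac12$ visible (here $X \sim N(0,1)$, an arbitrary choice, and the density of $Y=2X$ is estimated near $y_0=1$):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    y = 2 * rng.standard_normal(10**6)   # samples of Y = 2X

    y0, h = 1.0, 0.01
    est = (np.abs(y - y0) < h).mean() / (2 * h)   # histogram density estimate at y0
    print(est, 0.5 * norm.pdf(y0 / 2), 2 * norm.pdf(y0))
    # est matches 0.5*f_X(y0/2) ~ 0.176, not 2*f_X(y0) ~ 0.484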
|
H: Locally compact subsests of R
I'm looking for an example of two locally compact subsets of the real line $\mathbb R$ whose union isn't locally compact.
I know that in general such a union need not be locally compact, as we can take $\left \{ (x,y):x>0 \right \}\cup \left \{ (0,0) \right \}$ in the plane, where the origin obviously does not have any compact neighborhood. But I don't see how it can happen in $\mathbb R$.
Thanks!
AI: For $n\in\Bbb Z^+$ let $I_n=\left(\frac1{2n},\frac1{2n-1}\right)$, let $A=\bigcup_{n\ge 1}I_n$, and let $B=\{0\}$. Then $A$ and $B$ are locally compact, but $A\cup B$ is not locally compact at $0$. It’s really the same basic idea as your example in the plane.
|
H: Given a sequence of i.i.d. random variables, prove a result involving conditional expectation by means of symmetry argument
Let $(X_n)_{n\geq1}$ be an i.i.d. sequence with $\mathbb{E}\{|X_1|\}<\infty$. Let $S_n=X_1+\cdots+X_n$ and $\mathcal{F}_{-n}=\sigma(S_n,S_{n+1},\ldots)$. Then, one can state that $$M_{-n}=\mathbb{E}\{X_1\mid \mathcal{F}_{-n}\}$$ is a backwards martingale and, by Law of Iterated Expectation, $\mathbb{E}\{M_{-n}\}=\mathbb{E}\{X_1\}$.
At this point, how can one prove that
by symmetry for $1\leq j\leq n$:$$\mathbb{E}\{X_1\mid \mathcal{F}_{-n}\}=\mathbb{E}\{X_j \mid \mathcal{F}_{-n}\}\hspace{0.2cm} \text{ a.s.}$$
AI: Hint: Define a permutation $f$ on $\mathbb N$ by $f(1)=j, f(j)=1$ and $f(k)=k$ for $k \notin \{1,j\}$. Let $Y_n=X_{f(n)}$. Then $(X_n)$ and $(Y_n)$ are identically distributed. This implies that $\mathbb E (X_1\mid\mathcal F_{-n})=\mathbb E (X_j\mid\mathcal F_{-n})$, because for any $E \in \mathcal F_{-n}$ we have $\mathbb E X_1 1_E=\mathbb E Y_1 1_E=\mathbb E X_j 1_E$. [Note that $\mathcal F_{-n}$ does not change under the permutation $f$.]
|
H: If $(\alpha_i,\alpha_j)=(\beta_i,\beta_j)$ for $1\le i,j\le n$, is $L(\alpha_1, \dots, \alpha_n)\cong L(\beta_1, \dots, \beta_n)$?
Let $V$ be an inner product space over $\mathbb{R}$ with dimension $n$. Let $S$ be a subset of $V$ and $L(S)$ denote the span of $S$. If $(\alpha_i,\alpha_j)=(\beta_i,\beta_j)$ for every $i, j\in \{1, ...,n\}$, is it then true that
$$L(\alpha_1, ..., \alpha_n)\cong L(\beta_1, ..., \beta_n)?$$
When $n=1$, the result is trivial.
The Gram matrix may be useful to prove ${\rm dim}\, L(\alpha_1, ..., \alpha_n)={\rm dim}\, L(\beta_1, ..., \beta_n)$, but I do not know how to use it in detail. Can someone give me some hints for this problem?
AI: One approach is as follows: note that the spaces in question are the images of the maps $A,B:\Bbb R^n \to V$ given by
$$
A(x_1,\dots,x_n) = x_1 \alpha_1 + \cdots + x_n \alpha_n, \quad
B(x_1,\dots,x_n) = x_1 \beta_1 + \cdots + x_n \beta_n.
$$
Using the given information, conclude that $A^*A$ and $B^*B$ are the same map over $\Bbb R^n$ (where $A^*$ denotes the adjoint map of $A$).
Let $U$ denote the common column space of $A^*A$ and $B^*B$. Using the fact that $A$ and $A^*A$ have the same rank, show that $A|_U$ defines an isomorphism between $U$ and the image of $A$, and $B|_U$ defines an isomorphism between $U$ and the image of $B$.
Another approach: with $A,B$ as above, note that $\|Ax\|^2 = \|Bx\|^2$ for all $x \in \Bbb R^n$. It follows that $\ker(A) = \ker(B)$, so that $\dim \ker(A) = \dim \ker(B)$, so that $\dim\operatorname{im}(A) = \dim \operatorname{im}(B)$.
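A small numeric illustration of the statement (here $B$ is manufactured from $A$ by an orthogonal map, which forces equal Gram matrices; the sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 4))
    A[:, 3] = A[:, 0] + A[:, 1]          # make the vectors linearly dependent
    Qm, _ = np.linalg.qr(rng.standard_normal((5, 5)))
    B = Qm @ A                           # same pairwise inner products as A

    print(np.allclose(A.T @ A, B.T @ B))                         # equal Gram matrices
    print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B))  # spans have equal dimension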
|
H: Application of Burnside's Lemma on the vertices of a cube
Eight congruent equilateral triangles, each of a different color, are used to construct a regular octahedron. How many distinguishable ways are there to construct the octahedron? (Two colored octahedrons are considered distinguishable if neither can be rotated to look just like the other.)
The original problem statement is that we wish to color the faces of an octahedron, such that each face is a different color. I thought that considering a cube's 8 vertices instead of an octahedron's 8 faces would be conceptually easier, so it suffices to count the number of ways to color the vertices of a cube, such that each is a different color. I've also recently learned about Burnside's Lemma, so I decided to try it out. Here is my attempt:
Let $G$ be the group of cube orientations. There are 8 ways to fix one point. Now the cube can "spin" horizontally, so we fix another one of the non-opposite points, which there are 6 of. So, $|G|=48$. There are $8!$ ways to color the cube if identities under rotation are distinct, so if we let $X$ be the set of these colorings, then $|X|=8!$.
We know by Burnside's Lemma that $$|X\backslash G| = \frac{1}{|G|}\sum_{g \in G}|X^g|,$$
where $g$ is one of the rotations in $G$.
Because none of the colors are identical, $|X^g|=0$ if $g$ is not the identity, so the only one we count is the identity, where $X^g = X$. So, our solution is $8!/48=840$.
My concern is that this seems intuitively like a really big number, and I'm also not very familiar with Burnside's Lemma (or group theory in general). In particular, the way I fixed the two points seems very sketchy to me. Is my solution correct?
EDIT: Apparently the answer is 1680. I am unsure now of why my calculation is incorrect.
AI: By "$8$ ways to fix one point" I assume you mean there are $8$ choices of where to send a given vertex to using a symmetry of the cube. That is not how the word "fix" works. In the context of group actions and symmetry, "fix" means "keep in place without moving," the exact opposite of how you're using the word!
Also, you can't spin the remaining $6$ non-opposite points amongst each other arbitrarily; to realize all of those permutations you would have to use reflections that swap the opposite points, so you can't reason that way. Instead, note the three vertices adjacent to the first-chosen one can be permuted in any of $3!=6$ ways using rotations to cycle them or the three reflections which preserve the chosen vertex (through planes formed using diagonal lines emanating from the chosen vertex along cube faces).
This gives $|G|=8\cdot6=48$, so your number was right.
There are other ways to calculate $|G|$ too. For instance, using flags. A flag is a choice of vertex, edge and face all incident to each other (if you shrink the face, it would look kind of like a flag I guess?). There is exactly one symmetry relating any two flags, so $|G|$ is how many flags there are. There are $8$ choices for a vertex, followed by $3$ choices for an incident edge, followed by $2$ choices for an incident face, giving $|G|=8\cdot3\cdot2$.
In any case your calculation of $|X/G|$ is correct, if your symmetries include reflections.
If your symmetries are only rotations, then $G$ is smaller. There are only $3$ ways to rotate around a given corner, and $8$ corners, yielding $|G|=3\cdot8$ rotations.
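One can verify the rotation count and the resulting Burnside count directly (representing rotations as signed permutation matrices with determinant $+1$ acting on the vertices $(\pm1,\pm1,\pm1)$):

    import itertools
    from math import factorial
    import numpy as np

    rotations = []
    for perm in itertools.permutations(range(3)):
        for signs in itertools.product((1, -1), repeat=3):
            R = np.zeros((3, 3))
            for i, (p, s) in enumerate(zip(perm, signs)):
                R[i, p] = s
            if round(np.linalg.det(R)) == 1:  # keep rotations, drop reflections
                rotations.append(R)

    print(len(rotations))                  # 24 rotations of the cube
    # with 8 distinct colors only the identity fixes a coloring, so Burnside gives:
    print(factorial(8) // len(rotations))  # 1680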
|
H: Prove that $\left(\dfrac{b}{a}+\dfrac{d}{c}\right)\cdot\left(\dfrac{a}{b}+\dfrac{c}{d}\right)\geq4$ with $a>0, b>0 , c> 0$ and $d>0.$
Prove that
$\left(\dfrac{b}{a}+\dfrac{d}{c}\right)\cdot\left(\dfrac{a}{b}+\dfrac{c}{d}\right)\geq4$
with $a>0, b>0 , c> 0$ and $d>0.$
My attempt:
$$\begin{align*}\left(\dfrac{b}{a}+\dfrac{d}{c}\right)\cdot\left(\dfrac{a}{b}+\dfrac{c}{d}\right)& = \dfrac{abcd+b^2c^2+a^2d^2+abcd}{abcd}\\
& =\dfrac{b^2c^2+a^2d^2+2abcd}{abcd}\\
&=\dfrac{(ad)^2+(bc)^2+2(ad)(bc)}{abcd}\\
&=\dfrac{(ad+bc)^2}{abcd}\end{align*}$$
I don't know how to continue from this.
Can someone help me?
AI: Use AM-GM: $\frac{ad + bc}{2} \ge \sqrt{abcd}$. Squaring both sides gives $(ad+bc)^2 \ge 4abcd$, i.e. $\frac{(ad+bc)^2}{abcd} \ge 4$, which is exactly your last expression. A tiny tip: if everything is positive and you have an inequality to prove, at least consider AM-GM.
|
H: For all $p\in [0,1]$ there is a constant $M(p)$ such that $|f(z)|<M(p)\exp\left(\frac{|z|}{pR}\right)$
Let $f(z)=\sum_{n>0} \frac{c_n}{n!}z^n$. Prove that for all $p\in [0,1]$ there is a constant $M(p)$ such that $|f(z)|<M(p)\exp\left(\frac{|z|}{pR}\right)$, where $R$ is the radius of convergence of $\sum_{n>0} c_n z^n$ and $z\in \mathbb{C}$.
I need to use this result for an important theorem I'm working on, but I couldn't prove it with an easy method.
AI: Let $g(z)=\sum_{n>0} c_n z^n$ with radius of convergence $R$ and $h(z)=e^z$, then $f=g*h=h*g$ where $*$ is the Hadamard convolution and $f$ is entire unless $R=0$
We can write $2\pi if(z)=\int_{|\zeta|=r}g(z/\zeta)h(\zeta)d\zeta/\zeta, |z|/r<R$ or also
$2\pi if(z)=\int_{|\zeta|=r}h(z/\zeta)g(\zeta)d\zeta/\zeta, r<R$
Assuming $R$ finite non-zero, $r=pR<R$ so using the second expression we get:
$2\pi|f(z)|\le \int_{|\zeta|=pR}|h(z/\zeta)||g(\zeta)||d\zeta/\zeta|$ and $|h(z/\zeta)| \le e^{|z|/(pR)}, |g(\zeta)| \le M_g(pR), \int_{|\zeta|=pR}|d\zeta/\zeta|=2\pi$, so we get the required inequality:
$|f(z)| \le M_g(pR)e^{|z|/(pR)}$ so we can take $M_p=M_g(pR)$
If $R =\infty$ we can take $pR$ to be any positive number, so the result is that $f$ is of order at most $(1,0)$, namely $|f(z)| \le C_ae^{a|z|}, a>0$ arbitrary
(for example, take $c_n=n!/(2n)!$ and the corresponding $f$ is $\cosh \sqrt z$ which is of order $1/2$)
For $R=0$ we cannot really say anything: $f$ may or may not converge, its radius of convergence can be arbitrary, its order can be arbitrary when it is entire, and so on.
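A numeric illustration of the inequality (taking $c_n = 1$ for all $n$, so $g(z) = z/(1-z)$, $R = 1$, $f(z) = e^z - 1$, and $M_g(p) = p/(1-p)$; the choice $p = 1/2$ and the sample points are arbitrary):

    import numpy as np

    p = 0.5
    M_g = p / (1 - p)   # max of |z/(1-z)| on the circle |z| = p

    rng = np.random.default_rng(0)
    z = 5 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

    lhs = np.abs(np.exp(z) - 1)        # |f(z)|
    rhs = M_g * np.exp(np.abs(z) / p)  # M(p) * exp(|z| / (p R)) with R = 1
    print(np.all(lhs <= rhs))          # True on all sample points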
|
H: Find all functions $f(x) = c[T(f)](x)$ for a linear transformation $T$
I'm working through Apostol's calculus vol 2, and I'm not sure how to finish the following question (2.4, #29, part f):
Let $V$ be the set of all real functions continuous on the interval of $[-\pi, \pi]$ and let $T: V \to V$ be the linear transformation defined as:
$$
[T(f)](x) = g(x) = \int_{-\pi}^{\pi} (1 + \cos(x-t))f(t)\, dt
$$
Find all real $c \neq 0$ and all nonzero $f$ in $V$ such that $T(f) = cf$.
So far, I'm sure that the constant functions $C_k(x) =k$ satisfy the relationship with $c = 2\pi$, since:
$$
\begin{align}
[T(f)](x) &= \int_{-\pi}^{\pi} f(t) dt \\
&+ \cos(x)\int_{-\pi}^{\pi} \cos(t) f(t) dt \\ &+\sin(x)\int_{-\pi}^{\pi} \sin(t) f(t) dt
\end{align}
$$
and for $C_k$, the second two terms are $0$.
But I'm not sure how to prove that these are the only functions that satisfy the relationship. I think the second two terms are always going to be $0$ -- since if there were a $\sin(x)$ or $\cos(x)$ term in $f(x)$, those two integrals would be $0$.
AI: Assume $f$ and $c\in \mathbb{R}$, $c\ne0$ satisfy $T(f) = cf$. Then we can write $$ f(x) = \frac{1}{c} T(f)(x) = \frac{1}{c} \int_{-\pi}^\pi \left( 1 +\cos(x - t) \right) f(t) dt. $$ We put this into the definition of $T(f)$ again:
\begin{align*} cf(x) = (Tf)(x) &= \int_{-\pi}^\pi \left( 1 +\cos(x - u) \right) f(u) du \\
& = \int_{-\pi}^\pi \left( 1 +\cos(x - u) \right) \left(\frac{1}{c} \int_{-\pi}^\pi \left( 1 +\cos(u - t) \right) f(t) dt \right)du .
\end{align*}
Be aware of the use of variables: $\int_a^b f(x)dx = \int_a^b f(u) du$; I changed the names of the integration variables here.
This leads to the following:
\begin{align*} c^2 f(x) &= \int_{-\pi}^\pi \int_{-\pi}^\pi \left( 1 +\cos(x - u) \right)\left( 1 +\cos(u - t) \right) f(t)\, dt\, du \\
&\stackrel{*}{=} \int_{-\pi}^\pi f(t)\int_{-\pi}^\pi \left( 1 +\cos(x - u) \right)\left( 1 +\cos(u - t) \right) du\, dt \\
& \stackrel{**}{=} \int_{-\pi}^\pi f(t)\left( 2\pi + \pi\cos(x - t) \right) dt = 2\pi \int_{-\pi}^\pi f(t)\, dt + \pi \int_{-\pi}^\pi \cos(x-t) f(t)\, dt.
\end{align*}
Here $(*)$ is Fubini's theorem, and $(**)$ is the calculation $\int_{-\pi}^\pi \left( 1 +\cos(x - u) \right)\left( 1 +\cos(u - t) \right) du = 2\pi + \pi \cos(x-t)$, using $\int_{-\pi}^\pi \cos(x-u)\, du = \int_{-\pi}^\pi \cos(u-t)\, du = 0$ and the product-to-sum formula.
Now write $I = \int_{-\pi}^\pi f(t)\, dt$ and $g(x) = \int_{-\pi}^\pi \cos(x-t) f(t)\, dt$, so that $cf = I + g$ and $c^2 f = 2\pi I + \pi g$. Eliminating $g$ gives $(c^2 - \pi c) f(x) = \pi I$ for all $x$.
If $c \ne \pi$, then $f$ is constant, and a nonzero constant $f = k$ satisfies $T(f) = 2\pi k$, so $c = 2\pi$.
If $c = \pi$, then $I = 0$ and $\pi f = g$, i.e. $f(x) = \frac{1}{\pi}\int_{-\pi}^\pi \cos(x-t) f(t)\, dt = \alpha \cos x + \beta \sin x$ for some constants $\alpha, \beta$; conversely, a direct computation gives $T(\cos) = \pi \cos$ and $T(\sin) = \pi \sin$.
In other words, the eigenvalues of $T$ are $c = 2\pi$, with the nonzero constant functions as eigenfunctions, and $c = \pi$, with the nonzero functions $\alpha \cos x + \beta \sin x$ as eigenfunctions.
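As a numeric sanity check of this spectrum, one can discretize the integral operator (the grid size is an arbitrary choice):

    import numpy as np

    t = np.linspace(-np.pi, np.pi, 4001)

    def T(f, x):
        return np.trapz((1 + np.cos(x - t)) * f(t), t)

    xs = np.linspace(-np.pi, np.pi, 7)
    print(np.allclose([T(lambda u: np.ones_like(u), x) for x in xs], 2 * np.pi))  # T(1) = 2*pi
    print(np.allclose([T(np.cos, x) for x in xs], np.pi * np.cos(xs)))            # T(cos) = pi*cos
    print(np.allclose([T(np.sin, x) for x in xs], np.pi * np.sin(xs)))            # T(sin) = pi*sin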
|