H: How to find the constant value for this differential equation
Given
$$
\dot{y} = \frac{1}{4}y(1-\frac{1}{20}y), \quad y(0)=1
$$
using separable approach, we get
$$
4\ln\left( \frac{|y|}{|y-20|} \right) = t+C
$$
Is it possible to find the value of $C$? The natural logarithm is undefined at negative values.
AI: Whether your solution is correct or not, substituting the initial condition works like this,
$$
4\ln\left( \frac{|1|}{|1-20|} \right) = 0+C
$$
Perhaps you forgot that the absolute value $|\cdot|$ inside the $\log$ takes care of negative arguments.
|
H: Estimating $\int_{0}^{1}\sqrt {1 + \frac{1}{3x}} \ dx$.
I'm trying to solve this:
Which of the following is the closest to the value of this integral?
$$\int_{0}^{1}\sqrt {1 + \frac{1}{3x}} \ dx$$
(A) 1
(B) 1.2
(C) 1.6
(D) 2
(E) The integral doesn't converge.
I've found a lower bound by manually calculating $\int_{0}^{1} \sqrt{1+\frac{1}{3}} \ dx \approx 1.1547$. This eliminates option (A). I also see no reason why the integral shouldn't converge. However, to pick an option out of (B), (C) and (D) I need to find an upper bound too. Ideas? Please note that I'm not supposed to use a calculator to solve this.
From GRE problem sets by UChicago
AI: Starting from
$$\int_0^1\sqrt{1+{1\over3x}}\,dx=2\int_0^1\sqrt{t^2+{1\over3}}\,dt$$
(from the substitution $x=t^2$) as in Yves Daoust's answer, integration by parts gives
$$\int_0^1\sqrt{t^2+{1\over3}}\,dt=t\sqrt{t^2+{1\over3}}\Big|_0^1-\int_0^1{t^2\over\sqrt{t^2+{1\over3}}}\,dt={2\over\sqrt3}-\int_0^1{t^2+{1\over3}-{1\over3}\over\sqrt{t^2+{1\over3}}}\,dt$$
hence
$$2\int_0^1\sqrt{t^2+{1\over3}}\,dt={2\over\sqrt3}+{1\over3}\int_0^1{dt\over\sqrt{t^2+{1\over3}}}={2\over\sqrt3}+{1\over\sqrt3}\int_0^1{dt\over\sqrt{3t^2+1}}$$
Since $1\le\sqrt{3t^2+1}\le2$ for $0\le t\le1$, we have
$${1\over2}\le\int_0^1{dt\over\sqrt{3t^2+1}}\le1$$
Thus
$${2\over\sqrt3}+{1\over2\sqrt3}\le2\int_0^1\sqrt{t^2+{1\over3}}\,dt\le{2\over\sqrt3}+{1\over\sqrt3}$$
Now
$${2\over\sqrt3}+{1\over2\sqrt3}={5\sqrt3\over6}=\sqrt{75\over36}\gt\sqrt2\gt1.4$$
and
$${2\over\sqrt3}+{1\over\sqrt3}=\sqrt3\lt\sqrt{3.24}=1.8$$
Consequently
$$1.4\lt\int_0^1\sqrt{1+{1\over3x}}\,dx\lt1.8$$
and thus (C) is the correct answer.
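For readers who want to confirm the estimate numerically, here is a small Python sketch (assuming NumPy/SciPy are available) that evaluates the substituted, singularity-free form $2\int_0^1\sqrt{t^2+\tfrac13}\,dt$:
import numpy as np
from scipy.integrate import quad
value, err = quad(lambda t: 2*np.sqrt(t**2 + 1/3), 0, 1)
print(value)   # roughly 1.59, closest to option (C) 1.6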
|
H: Solving the differential equations as shown below
I recently came across a question in which we had to to solve the set of differential equations:
$10dx/dt+x+y/2=0 $ and $6d(x-y)/dt= y$
I tried a lot to solve these equations but I was unable to do so. I tried adding them eliminating $t$, but I couldn't even solve the resulting differential equation. Substituting $y=mx$ resulted in a messy calculation. Would someone please help me to solve this question?
AI: Hint:
This is a linear system of differential equations. Rewrite it in standard form:
\begin{cases}
x'= -\dfrac 1{10}x-\dfrac 1{20}y \\[1ex]
y'=-\dfrac 1{10}x -\dfrac{13}{60}y
\end{cases}
(substituting $x'$ from the first equation into the second), which you can write in matrix form:
$$\begin{bmatrix}x'\\y'\end{bmatrix}=A\begin{bmatrix}x\\ y\end{bmatrix}, \qquad A=\begin{bmatrix}-\frac 1{10} & -\frac 1{20}\\ -\frac 1{10} & -\frac{13}{60}\end{bmatrix}$$
The solution is given by
$$\begin{bmatrix}x(t)\\ y(t)\end{bmatrix}=\exp(At)\begin{bmatrix}x(0)\\ y(0)\end{bmatrix}$$
so all you have to do is compute the exponential of $At$, which requires determining a basis of eigenvectors.
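As an illustration only (not part of the original hint), here is a short numerical sketch with NumPy/SciPy; the initial condition is an arbitrary choice:
import numpy as np
from scipy.linalg import expm
A = np.array([[-1/10, -1/20],
              [-1/10, -13/60]])
print(np.linalg.eig(A))            # eigenvalues/eigenvectors used to build exp(At)
x0 = np.array([1.0, 0.0])          # illustrative initial condition [x(0), y(0)]
t = 2.0
print(expm(A*t) @ x0)              # [x(t), y(t)]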
|
H: Locus of a point with constant distance ratio $e$ to two circles.
Please help to obtain.. in as elegant a form as possible.. the locus of point $P$ equations if its distances to two circles:
$$ (x-h)^2 + y^2 = a^2;\;(x+h)^2 + y^2 = b^2 ;\;$$
are in a constant ratio $e,$ or
$$\dfrac{ \sqrt{(x + h)^2 + y^2} - a} { \sqrt{(x - h)^2 + y^2} - b}=e.$$
I am looking at a generalization to conic sections with $ (0,\infty)$ domains for circle radii $(a,b)$.
When $ a=0,b=0 $ we have Apollonius Circles as loci. When $ a > 0,b >0 $ the fixed base focal points of Apollonius Circles are expanded to bigger circles and the circles themselves become distorted as shown below.
Required to obtain a parametrization of the Ovals or equations in terms of any known functions.
The set of new Ovals shown have been obtained by contour plots. Distances are positive when normal distance from point P is outside a circle ( and so $e$), and negative when inside.
EDIT1:
Constants used for plot of new Ovals loci when $(2h < a+b)$ for intersecting circles:
$$\;h=1.8,\;a=3,\;b=2,\;e= (-2,-1,0,1,4).$$
In particular I wish to know if the case of equal distances $e=-1$ is an ellipse or if the circle centers are its foci.
AI: Let the center of $C1$ be $(x1,y1)$ and radius $r1$.
The center of $C2$ is $(x2,y2)$ and radius $r2$.
Z is the locus $(x,y)$.
The distance from $Z$ to the center of $C1$ is:
$$D1 = \sqrt{(x - x1)^2 + (y-y1)^2} \tag{1}$$
The point of $C1$ closest to $Z$ lies on the ray from the center of $C1$ through $Z$, on the (extended) radius.
The point on the perimeter is $P1$.
The distance from the perimeter of $C1$ to $Z$ is $L1$, from $P1$ to $Z$:
$$L1 = \left\lvert \sqrt{(x - x1)^2 + (y-y1)^2} - r1 \right\rvert \tag{2}$$
Similarly for $C2$:
$$L2 = \left\lvert \sqrt{(x - x2)^2 + (y-y2)^2} - r2 \right\rvert \tag{3}$$
$Z$ is at a distance ratio $e$ : $L1 = e L2$
$$\left\lvert \sqrt{(x - x1)^2 + (y-y1)^2} - r1 \right\rvert = e \left\lvert \sqrt{(x - x2)^2 + (y-y2)^2} - r2 \right\rvert \tag{4}$$
$(x1,y1) = (h,0)$ , $r1 = |a|$.
$(x2,y2) = (-h,0)$, $r2 = |b|$
The sign changes if $Z$ is inside the circle.
Let
$S1 = \sqrt{(x - x1)^2 + (y-y1)^2} \tag{5}$
$S2 = \sqrt{(x - x2)^2 + (y-y2)^2} \tag{6}$
For $Z$ outside both circles the absolute signs are both $+$.
$S1 - r1 = e(S2 - r2) \tag{7}$
$S1-e S2 = r1 - e r2 \tag{8}$
$(S1-e S2)^2 = (r1 - er2)^2 \tag{9}$
$S1^2 + e^2 S2^2 -2 e S1 S2 = (r1 - e r2)^2 \tag{10}$
$S1^2 + e^2 S2^2 - (r1 - e r2)^2 = 2 e S1 S2 \tag{11}$
$(S1^2 + e^2 S2^2 - (r1 - e r2)^2)^2 = 4 e^2 S1^2 S2^2 \tag{12}$
$(S1^2 + e^2 S2^2 - (r1 - e r2)^2)^2 - 4 e^2 S1^2 S2^2 = 0 \tag{13}$
Maxima code:
S1 : sqrt((x-x1)^2 + (y - y1)^2);
S2 : sqrt((x-x2)^2 + (y - y2)^2);
R : r1 - e*r2;
E1 : (S1^2 + e^2*S2^2 - R^2)^2 - 4*e^2*S1^2*S2^2;
E2 : expand(E1);
E3 : facsum(E2,y,y^2,y^3,y^4,x,x^2,x^3,x^4);
E4 : subst(b,r2,subst(a,r1,subst(0,y2,subst(-h,x2,subst(0,y1,subst(h,x1,E2))))));
E5 : facsum(E4,y,y^2,y^3,y^4,x,x^2,x^3,x^4);
tex(E5);
Finally with all the substitutions:
$$\left(e-1\right)^2\,\left(e+1\right)^2\,y^4+2\,\left(e-1\right)^2\,
\left(e+1\right)^2\,x^2\,y^2+4\,\left(e-1\right)\,\left(e+1\right)\,
\left(e^2+1\right)\,h\,x\,y^2+2\,\left(e^4\,h^2-2\,e^2\,h^2+h^2-b^2
\,e^4+2\,a\,b\,e^3-b^2\,e^2-a^2\,e^2+2\,a\,b\,e-a^2\right)\,y^2+
\left(e-1\right)^2\,\left(e+1\right)^2\,x^4+4\,\left(e-1\right)\,
\left(e+1\right)\,\left(e^2+1\right)\,h\,x^3+2\,\left(3\,e^4\,h^2+2
\,e^2\,h^2+3\,h^2-b^2\,e^4+2\,a\,b\,e^3-b^2\,e^2-a^2\,e^2+2\,a\,b\,e
-a^2\right)\,x^2+4\,\left(e-1\right)\,\left(e+1\right)\,h\,\left(e^2
\,h^2+h^2-b^2\,e^2+2\,a\,b\,e-a^2\right)\,x+\left(e\,h-h-b\,e+a
\right)\,\left(e\,h-h+b\,e-a\right)\,\left(e\,h+h-b\,e+a\right)\,
\left(e\,h+h+b\,e-a\right) = 0$$
Maxima can solve quartics:
Yall : solve(E5,y);
Y1 : part(Yall,1)^2;
Y2 : part(Yall,2)^2;
Y3 : part(Yall,3)^2;
Y4 : part(Yall,4)^2;
tex(Y1)$
tex(Y2)$
tex(Y3)$
tex(Y4)$
$$y^2={{2\,b\,e^2\,\sqrt{-4\,e^2\,h\,x+4\,h\,x+b^2\,e^2-2\,a\,b\,e+a^
2}-2\,a\,e\,\sqrt{-4\,e^2\,h\,x+4\,h\,x+b^2\,e^2-2\,a\,b\,e+a^2}-e^4
\,x^2+2\,e^2\,x^2-x^2-2\,e^4\,h\,x+2\,h\,x-e^4\,h^2+2\,e^2\,h^2-h^2+
b^2\,e^4-2\,a\,b\,e^3+b^2\,e^2+a^2\,e^2-2\,a\,b\,e+a^2}\over{\left(e
^2-1\right)^2}}$$
$$y^2={{2\,b\,e^2\,\sqrt{-4\,e^2\,h\,x+4\,h\,x+b^2\,e^2-2\,a\,b\,e+a^
2}-2\,a\,e\,\sqrt{-4\,e^2\,h\,x+4\,h\,x+b^2\,e^2-2\,a\,b\,e+a^2}-e^4
\,x^2+2\,e^2\,x^2-x^2-2\,e^4\,h\,x+2\,h\,x-e^4\,h^2+2\,e^2\,h^2-h^2+
b^2\,e^4-2\,a\,b\,e^3+b^2\,e^2+a^2\,e^2-2\,a\,b\,e+a^2}\over{\left(e
^2-1\right)^2}}$$
$$y^2={{-2\,b\,e^2\,\sqrt{-4\,e^2\,h\,x+4\,h\,x+b^2\,e^2-2\,a\,b\,e+a
^2}+2\,a\,e\,\sqrt{-4\,e^2\,h\,x+4\,h\,x+b^2\,e^2-2\,a\,b\,e+a^2}-e^
4\,x^2+2\,e^2\,x^2-x^2-2\,e^4\,h\,x+2\,h\,x-e^4\,h^2+2\,e^2\,h^2-h^2
+b^2\,e^4-2\,a\,b\,e^3+b^2\,e^2+a^2\,e^2-2\,a\,b\,e+a^2}\over{\left(
e^2-1\right)^2}}$$
$$y^2={{-2\,b\,e^2\,\sqrt{-4\,e^2\,h\,x+4\,h\,x+b^2\,e^2-2\,a\,b\,e+a
^2}+2\,a\,e\,\sqrt{-4\,e^2\,h\,x+4\,h\,x+b^2\,e^2-2\,a\,b\,e+a^2}-e^
4\,x^2+2\,e^2\,x^2-x^2-2\,e^4\,h\,x+2\,h\,x-e^4\,h^2+2\,e^2\,h^2-h^2
+b^2\,e^4-2\,a\,b\,e^3+b^2\,e^2+a^2\,e^2-2\,a\,b\,e+a^2}\over{\left(
e^2-1\right)^2}}$$
Note that there is an $x$ under the square root.
Does this mean the general case is not a conic section?
For $a= 0$, $b = 0$
Maxima code:
E6: subst(0,b,subst(0,a,E5));
tex(E6);
$$\left(e-1\right)^2\,\left(e+1\right)^2\,y^4+2\,\left(e-1\right)^2\,
\left(e+1\right)^2\,x^2\,y^2+4\,\left(e-1\right)\,\left(e+1\right)\,
\left(e^2+1\right)\,h\,x\,y^2+2\,\left(e^4\,h^2-2\,e^2\,h^2+h^2
\right)\,y^2+\left(e-1\right)^2\,\left(e+1\right)^2\,x^4+4\,\left(e-
1\right)\,\left(e+1\right)\,\left(e^2+1\right)\,h\,x^3+2\,\left(3\,e
^4\,h^2+2\,e^2\,h^2+3\,h^2\right)\,x^2+4\,\left(e-1\right)\,\left(e+
1\right)\,h\,\left(e^2\,h^2+h^2\right)\,x+\left(e\,h-h\right)^2\,
\left(e\,h+h\right)^2 = 0$$
A circle is expected for the previous equation:
Try to fit the form : ${a}^2((y -y_0)^2 + (x-x_0)^2 - r^2)^2 = 0$ to it:
Expand this expression and equate the coefficients of each $x^ny^m$:
Maxima code (all variables are solved by calculation):
X1 : a^2*( (y - y0)^2 + (x - x0)^2 - r^2)^2;
X2 : expand(X1);
X3 : facsum(X2,y,y^2,y^3,y^4,x,x^2,x^3,x^4);
X4 : X3 - E6;
X5 : facsum(X4,y,y^2,y^3,y^4,x,x^2,x^3,x^4);
X6 : subst(0,y0,X5);
Yx0 : solve(part(X6,3),x0);
X7 : facsum(subst(part(Yx0,1,2),x0,X6),y,y^2,y^3,y^4,x,x^2,x^3,x^4);
Yr2 : solve(part(X7,1,6),[r^2]);
X8 : subst(part(Yr2,1,2),r^2,X7);
X9 : facsum(expand(X8),y,y^2,y^3,y^4,x,x^2,x^3,x^4);
Ya2 : solve(expand(part(X9,1,1)/a^2/y^4),[a^2]);
X10 : subst(part(Ya2,1,2),a^2,expand(X9));
X11 : facsum(expand(X10),y,y^2,y^3,y^4,x,x^2,x^3,x^4);
Yx0 : ratsimp(subst(part(Ya2,1,2),a^2,Yx0));
Yr2 : ratsimp(subst(part(Ya2,1,2)^2,a^4,Yr2));
EQN : y^2 + (x - part(Yx0,1,2))^2 = part(Yr2,1,2);
The substitutions were:
$$y0 = 0 \tag{14}$$
$${\it x_0}=-{{\left(e^2+1\right)\,h}\over{e^2-1}} \tag{15}$$
$$r^2={{4\,e^2\,h^2}\over{e^4-2\,e^2+1}} \tag{16}$$
$$a^2=e^4-2\,e^2+1 \tag{17}$$
The result equation is:
$$y^2+\left(x+{{\left(e^2+1\right)\,h}\over{e^2-1}}\right)^2={{4\,e^2
\,h^2}\over{e^4-2\,e^2+1}} \tag{18}$$
Centers $(h,0)$ and $(0,0)$ and inverting $\displaystyle e \rightarrow \frac1{e} \:$ produced the standard form:
$$\boxed{ y^2+\left(x+{{he^2}\over{1-e^2}}\right)^2={{e^2\,h^2}\over{(1 - e^2)^2}}} \tag{19}$$
Maxima code for centers $(0,0)$ and $(h,0)$, with $e\,L1 = L2$:
S1 : sqrt((x-x1)^2 + (y - y1)^2);
S2 : sqrt((x-x2)^2 + (y - y2)^2);
R : e*r1 - r2;
E1 : (e^2*S1^2 + S2^2 - R^2)^2 - 4*e^2*S1^2*S2^2;
E2 : expand(E1);
E3 : facsum(E2,y,y^2,y^3,y^4,x,x^2,x^3,x^4);
E4 : subst(b,r2,subst(a,r1,subst(0,y2,subst(0,x2,subst(0,y1,subst(h,x1,E2))))));
E5 : facsum(E4,y,y^2,y^3,y^4,x,x^2,x^3,x^4);
tex(E5);
E6: subst(0,b,subst(0,a,E5));
tex(E6);
X1 : a^2*( (y - y0)^2 + (x - x0)^2 - r^2)^2;
X2 : expand(X1);
X3 : facsum(X2,y,y^2,y^3,y^4,x,x^2,x^3,x^4);
X4 : X3 - E6;
X5 : facsum(X4,y,y^2,y^3,y^4,x,x^2,x^3,x^4);
X6 : subst(0,y0,X5);
Yx0 : solve(part(X6,3),x0);
X7 : facsum(subst(part(Yx0,1,2),x0,X6),y,y^2,y^3,y^4,x,x^2,x^3,x^4);
Yr2 : solve(part(X7,1,6),[r^2]);
X8 : subst(part(Yr2,1,2),r^2,X7);
X9 : facsum(expand(X8),y,y^2,y^3,y^4,x,x^2,x^3,x^4);
Ya2 : solve(expand(part(X9,1,1)/a^2/y^4),[a^2]);
X10 : subst(part(Ya2,1,2),a^2,expand(X9));
X11 : facsum(expand(X10),y,y^2,y^3,y^4,x,x^2,x^3,x^4);
Yx0 : ratsimp(subst(part(Ya2,1,2),a^2,Yx0));
Yr2 : ratsimp(subst(part(Ya2,1,2)^2,a^4,Yr2));
EQN : y^2 + (x - part(Yx0,1,2))^2 = part(Yr2,1,2);
tex(%);
$$y^2+\left(x-{{e^2\,h}\over{e^2-1}}\right)^2={{e^2\,h^2}\over{e^4-2 \,e^2+1}} \tag{20}$$
|
H: How do you draw a $K_{m.n}$ graph?
$K_{3,3}$ is a complete bipartite graph with $6$ nodes split into $2$ groups of $3$ nodes. All of nodes in one group are connected to all of the nodes in the other groups, but not with nodes in the same group. Here's what it looks like:
However, what is a graph of $K_{m,n}$ supposed to look like? From my understanding the graph will have $m+n$ many nodes, $m$ nodes with a degree of $n$ and $n-m$ nodes with degree $n$. For example, $K_{2,3}$ looks like:
Is my understanding correct? If not, please explain how $K_{m,n}$ is supposed to look like.
AI: Here is respectively what $K_{1,3}$, $K_{3,5}$ and $K_{4,8}$ look like.
In each graph, the two sets of the bipartition are on the left and on the right respectively.
Your $K_{2,3}$ graph is correct but not your $K_{3,3}$ since it misses $3$ edges.
|
H: Calculate Distance (Not Squared) between two vectors using Inner Product
I'm stuck on a Inner Product question:
Calculate the distance (not squared) between $x=[4,\ 2,\ 1]$ and $y=[0,\ 1,\ 0]$
using inner product defined as
Can someone kindly help with the solution?
AI: Hint: Once you have computed the vector $v = x - y$, compute the square of the distance as
$$
\|x - y\|^2 = \|v\|^2 = \langle v,v \rangle = v^T\pmatrix{2&1&0\\1&2&-1\\0&-1&2}v.
$$
Because this is the square of the distance, you must find the square root of the resulting number to get your answer.
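A small numerical sketch of this computation (assuming NumPy, and that the matrix above is the one defining the inner product):
import numpy as np
M = np.array([[2, 1, 0],
              [1, 2, -1],
              [0, -1, 2]])
x = np.array([4, 2, 1])
y = np.array([0, 1, 0])
v = x - y
dist_squared = v @ M @ v       # <v, v> with respect to this inner product
print(np.sqrt(dist_squared))   # sqrt(42) ≈ 6.48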
|
H: Proving $f(x) = x + \frac{x^2}{2} + \frac{x^3}{3} + \frac{x^4}{4} + ... $ is continuous for a fixed $x_0 \in (-1,1)$ by using the Weierstrass M-test
I am trying to prove that $f(x) = x + \frac{x^2}{2} + \frac{x^3}{3} + \frac{x^4}{4} + ...$ is continuous for a fixed $x_0 \in (-1,1)$ by using the Weierstrass M-test. Now, the solution in the book is not what I expected, and not even close to what I had in my own mind. So, can someone verify whether I am totally off with my approach to solving this?
Here was my suggestion:
Let $M_n = x^n$, then for each $f_n(x) = \frac{x^n}{n}$ we have that $f_n(x) \leq M_n$. We know that $\sum_{n=1}^{\infty} M_n$ converges since it is a geometric series, hence $\sum_{n=1}^{\infty} f_n(x)$ converges uniformly by Weierstrass M-test, and by the Continuous Limit Theorem f(x) is continuous at $x_0$.
I guess there is a mistake here somewhere...could anybody point it out for me?
AI: I really think you can prove it like this: just choose $a \in (-1, 1)$ such that $x_0 \in (-a, a)$. Then for your $f_n$ the inequality $\lvert f_n(x) \rvert \leq \frac{a^n}{n} =: M_n$ holds for all $x \in [-a, a]$. Obviously $\displaystyle \sum_{n = 1}^{\infty} M_n < \infty$ is true (just estimate it by a geometric series). Hence $\displaystyle \sum_{k = 1}^n f_n(x)$ converges uniformly on $[-a, a]$, i.e. $f \lvert_{[-a, a]}$ is continuous. But as $x_0 \in [-a, a]$, this is all we wanted.
|
H: How $N(2\zeta^{2n})=2^{p-1}$?
Let $\zeta$ be a $p$-th root of unity, let $\mathbb{Z}[\zeta]$ be the number ring generated by $\zeta$, and let $N$ be the norm function.
Why or how $N(2\zeta^{2n})=2^{p-1}$?
The source of the problem is -
AI: The norm is a homomorphism from the multiplicative group $\mathbb Q[\zeta]^\ast$ to $\mathbb Q^\ast$,
so $N(2\zeta^{2n})=N(2)N(\zeta^{2n}),$ and $N(2)=2^{p-1}$ and $N(\zeta^{2n})=1$.
$[\mathbb Q[\zeta]:\mathbb Q]=p-1$, and $N(2)$ is the product of the $p-1$ conjugates of $2$, which are all $2$,
since $2$ is in the base field. $\zeta^n$ is a unit, so its norm is $\pm1$, so $N(\zeta^{2n})=1.$
|
H: If each element in $A$ is greater than all elements in a unique subset of $B$, is the average of $A$ greater than average of $B$?
Suppose I have two sets $A = \{a_1, ..., a_n\}$ and $B = \{ b_1, ..., b_m\}$ of non-negative real numbers and where $a_1\geq a_2 \geq ... \geq a_n$, and $|A| < |B|$.
Now, suppose $B$ can be split into $n$ disjoint subsets $B_1,...,B_n$ such that
\begin{align}
|B_i| = \left\{ \begin{array}{cc}
\big\lfloor\frac{|B|}{|A|}\big\rfloor & \hspace{5mm} \text{if } i \text{ is odd} \\
\big\lceil\frac{|B|}{|A|}\big\rceil & \hspace{5mm} \text{if } i \text{ is even} \\
\end{array} \right.
\end{align}
and for all $i \in [1, n]$, $$\ \ a_i \geq b \text{ for all } b \in B_i$$
Then is it true that
$$\frac{\sum_{a \in A} a}{|A|} \geq \frac{\sum_{b \in B} b}{|B|}?$$
AI: No. The problem is rounding: even numbered sets may have more elements. For instance:
$$
A=\{1,1,0\},B=\{1,1,1,0\}
$$
Then, $|B_1|=1,|B_2|=2,|B_3|=1$, so we may choose $B_1=\{1\},B_2=\{1,1\},B_3=\{0\}$ and $a_1=1\ge 1=b_1$, $a_2=1\ge 1=b_2=b_3$, $a_3=0\ge 0=b_4$ but
$$
\frac{1}{|A|}\sum_{a\in A} a = \frac{2}{3} \ngeq \frac{3}{4}=\frac{1}{|B|}\sum_{b\in B} b
$$
|
H: If a 'distance function' does not possess triangle inequality property, would the limit of a converging sequence still be unique?
Let $X$ be a set and let $d:X\times X\to \mathbb{R}$ be a function that satisfies positivity, that is, $d(x,y)\geq 0$ and $d(x,y)=0 \iff x=y.$ Moreover suppose it satisfies the symmetry property, that is, $d(x,y)=d(y,x).$ However it does not satisfy the triangle inequality.
Obviously if triangle inequality was to be satisfied then this will make $(X,d)$ a metric space and subsequently every converging sequence will have a unique limit. Hence I am just curious if this property is taken away, can there still be examples such that every converging sequence has a unique limit with respect to this function $d$?
I hope I explained my question sufficiently clear, many thanks in advance!
AI: Let $d(x,y) = (x-y)^2$ on $\Bbb R$, which satisfies the first two axioms but not the triangle inequality, because:
$$d(0,2)=4\not≤2=d(0,1)+d(1,2)$$
however limits are still unique, in fact you have the same limits as the usual metric $d(x,y)=|x-y|$, since $(x_n-y)^2\to0$ iff $|x_n-y|\to0$ (continuity of the root on positive numbers).
There exist "metrics" failing the triangle inequality without unique limits:
Let $$d(x,y)=\begin{cases}0 & x=y \\ \frac1{|xy|}
&x\neq y\end{cases}$$
on the positive integers; then any sequence $x_n\to\infty$ converges to every positive integer, so limits are not unique.
|
H: How do I evaluate the line integral $\int _c \mathbf{F}\cdot\mathrm{d}\mathbf{r}$
How do I evaluate the line integral $$\int _c \mathbf{F}\cdot\mathrm{d}\mathbf{r}$$ where $\mathbf{F} = x^2\mathbf{i} + 2y^2\mathbf{j}$ and $c$ is the curve given by $\mathbf{r}(t)=t^2\mathbf{i} + t\mathbf{j}$ for $t \in [0,1]$.
I have started with: $\int \mathbf{F}(t^2\mathbf{i} + t\mathbf{j})\cdot \frac{\mathrm{d}\mathbf{r}}{\mathrm{d}t}\,\mathrm{d}t$ but I'm not sure if this is right.
AI: In order to work out the integral, you first need the derivatives of the coordinates of the curve. Set $\mathbf{r} = x\mathbf{i} + y \mathbf{j} = (x,y)$. Then $x(t) = t^2$ and $y(t) = t$, thus $ \dot{x} = 2t$ and $\dot{y} = 1$. Then
$$
\begin{split}
\int_c \mathbf{F} \cdot \mathrm{d}\mathbf{r} &= \int_0^1 \left(x^2\frac{\mathrm{d}x}{\mathrm{d}t} + 2y^2\frac{\mathrm{d}y}{\mathrm{d}t}\right)\mathrm{d}t \cr
&= \int_0^1 (t^4 \times 2t + 2t^2 \times 1)\,\mathrm{d}t \cr
&= \int_0^1 (2t^5 + 2t^2)\,\mathrm{d}t \cr
&= \frac{1}{3} + \frac{2}{3} = 1
\end{split}
$$
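A quick symbolic check of this value (a sketch assuming SymPy):
import sympy as sp
t = sp.symbols('t')
x, y = t**2, t                                            # the curve r(t) = (t^2, t)
integrand = x**2*sp.diff(x, t) + 2*y**2*sp.diff(y, t)     # F(r(t)) . r'(t)
print(sp.integrate(integrand, (t, 0, 1)))                 # -> 1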
|
H: Find the sum: $\sum_{n=1}^{20}\frac{(n^2-1/2)}{(n^4+1/4)}$
Hint: this is a telescoping series sum (I have no prior knowledge of partial fraction decomposition)
Attempt: I tried to complete the square but the numerator had an unsimplifiable term. So I couldn't find a pattern. I just need a hint on how to convert this into a telescoping series.
AI: You say that you have not been able to see a patterm
$$S_p=\sum_{n=1}^{p}\frac{(n^2-\frac 12)}{(n^4+\frac 14)}$$ generates the sequence
$$\left\{\frac{2}{5},\frac{8}{13},\frac{18}{25},\frac{32}{41},\frac{50}{61},\cdots\right\}$$ The numerators seem to be $2p^2$.
Now, subtract $1$ from each denominator to have
$$\left\{4,12,24,40,60,\cdots\right\}$$ which seem to be $2p(p+1)$.
So, if I am not wrong
$$S_p=\frac{2p^2}{2p(p+1)+1}$$
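A quick exact check of this guessed closed form with Python's fractions module (a sketch, not a proof):
from fractions import Fraction
def term(n):
    return Fraction(2*(2*n*n - 1), 4*n**4 + 1)   # (n^2 - 1/2)/(n^4 + 1/4) with cleared denominators
def S(p):
    return sum(term(n) for n in range(1, p + 1))
for p in range(1, 30):
    assert S(p) == Fraction(2*p*p, 2*p*(p + 1) + 1)
print(S(20))   # 800/841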
|
H: Why every field is a local ring, but the ring of integers $\mathbb{Z}$ not?
It is said that $\mathbb{Z}$ is trivially not a local ring, because the sum of any two non-units must be a non-unit in a local ring and, for example, $-2+3=1$. But why is every field said to be a local ring? Isn't exactly this rule violated for fields too (if we think, for example, of the field of rational numbers)?
AI: Remember that an element $x$ of a ring is a unit if there is some $y$ such that $xy=yx=1$. In other words, the units are the elements with inverses wrt multiplication in the ring.
If $F$ is a field then an element $x$ of $F$ is a non-unit iff $x=0$. So the sum of any two non-units in $F$ is again a non-unit in $F$. So fields don't violate this characterization of local rings.
For example, in $\mathbb{Q}$, $2$ and $-3$ are units with inverses $1/2$ and $-1/3$ respectively.
|
H: trace of $(I_m + AA^T)^{-1}$ and $(I_n + A^TA)^{-1}$ for real matrix A
Let $A$ be $m \times n$ real matrix.
(1) Show that $X=I_m + AA^T$ and $Y=I_n+A^TA$ are invertible.
(2) Find the value of $tr(X^{-1}) - tr(Y^{-1}) $
attempt for (1):
$AA^T$ is a real symmetric matrix, therefore it can be diagonalized. Let $\lambda$ be an eigenvalue of $AA^T$ and $v$ an eigenvector. Then $0\leq \| A^Tv \|^2=v^TAA^Tv=\lambda v^Tv$ so $\lambda \geq0$. This shows that all eigenvalues of $X$ are positive, therefore $X$ is invertible. The proof for $Y$ is similar.
But I cannot solve (2) from this result. All I know is that $X^{-1}$ and $Y^{-1}$ makes sense.
AI: Hints: For any matrices $A$ and $B$ that can be multiplied in both orders, $AB $ and $B A$ have the same nonzero eigenvalues. The eigenvalues of $(I+AB)^{-1}$ (if that exists) are the reciprocals of the eigenvalues of $I+AB$.
|
H: Maximum value of the expression $E=\sin\theta+\cos\theta+\sin2\theta$.
Find the maximum value of the expression $E=\sin\theta+\cos\theta+\sin2\theta$.
My approach is as follows: let $E=\sin\theta+\cos\theta+\sin2\theta$; squaring, we get
$E^2=1+\sin^22\theta+\sin2\theta+2\sin2\theta(\sin\theta+\cos\theta)$, but I am not able to proceed from here.
AI: Following @Batominovski's hint $$\left(\sin x+\cos x+\frac12\right)^2-\frac54=2\sin x\cos x+\sin x+\cos x$$ and the maximum value of $\sin x+\cos x$ is $\sqrt2$. Hence
$$\left(\sqrt2+\frac12\right)^2-\frac54=\sqrt2+1.$$
(The minimum is $-\dfrac54$ because the squared expression can vanish. There is also a local maximum with value $\left(-\sqrt2+\dfrac12\right)^2-\dfrac54=-\sqrt2+1$.)
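A quick numerical confirmation of these values (a rough sketch with NumPy):
import numpy as np
theta = np.linspace(0, 2*np.pi, 200001)
E = np.sin(theta) + np.cos(theta) + np.sin(2*theta)
print(E.max(), np.sqrt(2) + 1)   # both ≈ 2.4142
print(E.min())                   # ≈ -1.25 = -5/4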
|
H: Classify all groups of a given order
I have to classify all groups of order 87 and 121. How can I do that? I saw other posts but there is no unified approach that helped me so far...
AI: $\gcd(87, \varphi(87)) = 1$ so there is only one group of order 87: $C_{87}$.
$121 = 11^2$ so there are exactly two groups of this order, $C_{121}$ and $C_{11} \times C_{11}$.
Lemmas used:
Lemma 1: $\gcd(n,\varphi(n)) = 1$ iff there is a unique group of order $n$: $C_n$.
Proof: http://alpha.math.uga.edu/~pete/Jungnickel92.pdf
Lemma 2: $n = p^2$ implies there are two groups of order $n$: $C_{p^2}$ and $C_p \times C_p$.
Proof: https://kconrad.math.uconn.edu/blurbs/grouptheory/groupsp2.pdf
|
H: What is the definition of nodal singularity of an algebraic curve?
What is the definition of nodal singularity of an algebraic curve ?
I got the following definition from here:
A nodal singularity of an algebraic curve is one of the forms parameterized by the equation $xy=0$. A nodal curve is a curve with a nodal singularity.
However, this parametrization $xy=0$ is not clear to me.
Can you please explain it?
AI: It means that the completion of the local ring at the point is isomorphic to $k[[x, y]]/(xy)$. Intuitively, if you zoom way in it looks like the letter $X$ at the bad point.
|
H: Is the commutator subgroup of a subgroup the same as the commutator subgroup of the group intersected with that subgroup?
I might be overthinking this, but anyway:
Let $G$ be a group and $H$ a subgroup. Let $K'$ be the commutator subgroup of $H$, i.e. $K' = \langle [x, y] \mid x, y \in H \rangle$. Is it true that $K' = G' \cap H$?
Attempt: I believe that $K' \subset G' \cap H$, because if $k \in K'$ then $k$ is a product of commutators of elements of $H$, so $k \in G'$. By closure, $k$ is in $H$, so $k \in G' \cap H$. I'm uncertain about the $\supset$ direction.
AI: This is false. Take $G$ a non-abelian simple group. Then its commutator subgroup is all of $G$. So letting $H$ be any nontrivial abelian (edit: oops! thanks to the commenters) subgroup of $G$ gives a counterexample.
|
H: I want to know radius of convergence. $\sum ^{\infty }_{n=0}\left\{ 3+\left( -1\right) ^{n}\right\} x^{n}$
$$\sum ^{\infty }_{n=0}\left\{ 3+\left( -1\right) ^{n}\right\} x^{n}$$
I used the ratio test to find the radius of convergence.
$$\lim _{n\rightarrow \infty }\left| \dfrac {\left\{ 3+\left( -1\right) ^{n+1}\right\} x^{n+1}}{\left\{ 3+\left( -1\right) ^{n}\right\} x^{n}}\right|=\lim _{n\rightarrow \infty }\left| \dfrac {\left\{ 3+\left( -1\right) ^{n+1}\right\} x}{\left\{ 3+\left( -1\right) ^{n}\right\} }\right|$$
But I don't know what to do next. Please tell me how to proceed.
AI: Hint. In this case you should use the root test (the general form is with $\limsup$):
$$R=\frac{1}{\limsup_{n\to\infty}\sqrt[n]{|a_n|}}.$$
Now note that
$$a_n=\begin{cases}
4&\text{ if $n$ is even,}\\
2&\text{ if $n$ is odd.}
\end{cases}$$
Can you take it from here?
|
H: Determine that a series is rational
Determine whether $$\sum_{n=1}^\infty 1/10^{n!} $$ is rational. I have tried thinking about decimal representations such as that of $1/11$, and the fact that this sum is equal to $0.1100010....1...........1$ etc, but I don't know if the distance between $1$'s increases fast enough (or if it even matters) for this to converge to a rational number.
AI: That sequence of digits is not eventually periodic, whereas the decimal expansion of every rational number is eventually periodic. Therefore, that number is irrational. There are many similar examples, such as $\displaystyle\sum_{n=1}^\infty\frac1{10^{n^2}}$.
|
H: Inverse of block anti-diagonal matrix
Let $A \in \mathbb R^{n\times n}$ be an invertible block anti-diagonal matrix (with $d$ blocks), i.e.
$$
A = \begin{pmatrix} & & & A_1 \\ & & A_2 & \\ & \cdot^{\textstyle \cdot^{\textstyle \cdot}} & & \\ A_d\end{pmatrix},
$$
with all square blocks $A_1, \ldots, A_d$ invertible. Is there a formula for its inverse?
In the diagonal case, it is just the diagonal block matrix with the inverses of the blocks, is there an equivalent for the anti-diagonal case?
AI: I think this is the answer with all the blocks invertible.
$$
A = \begin{pmatrix} & & & A_1 \\ & & A_2 & \\ & \dots & & \\ A_d\end{pmatrix},
$$
$$
B = \begin{pmatrix} & & & A_d^{-1} \\ & & A_{d-1}^{-1} & \\ & \dots & & \\ A_1^{-1}\end{pmatrix},
$$
we have
$$AB=I$$
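A quick numerical check of this block formula (a sketch with NumPy; the block sizes and random blocks are arbitrary, shifted by a multiple of the identity so they are safely invertible):
import numpy as np
rng = np.random.default_rng(0)
sizes = [2, 3, 1]
blocks = [rng.standard_normal((k, k)) + 3*np.eye(k) for k in sizes]   # A_1, ..., A_d
n = sum(sizes)
A = np.zeros((n, n))
B = np.zeros((n, n))
r = 0
for i, Ai in enumerate(blocks):
    k = sizes[i]
    c = sum(sizes[i+1:])                 # column offset putting A_i on the anti-diagonal
    A[r:r+k, c:c+k] = Ai
    B[c:c+k, r:r+k] = np.linalg.inv(Ai)  # A_i^{-1} goes in the mirrored block of B
    r += k
print(np.allclose(A @ B, np.eye(n)))     # True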
|
H: Is this space subspace of $[0,1]^{\mathbb{N}}$ Polish?
Let $D := \{ x \in [0, 1]^{\mathbb{N}} \mid \forall n: x_n = 1 \Rightarrow x_{n+1} = 1 \}$
be the set of sequences in $[0, 1]$ such that if $x_n = 1$ for some $n$ then $x_{n+m} = 1$ for all $m \geq 0$.
I know that $[0, 1]^{\mathbb{N}}$ is Polish. Is $D$ with the subspace topology also Polish?
I can represent $D$ as a countable union of pairwise disjoint closed sets
$D = (\{ 1 \} \times \{ 1 \} \times \dots) \cup ([0, 1) \times \{ 1 \} \times \{ 1 \} \times \dots) \cup ([0, 1) \times [0, 1) \times \{ 1 \} \times \{ 1 \} \dots) \cup \dots \cup ([0, 1) \times [0, 1) \times \dots)$
but there is no theorem stating that such subsets are necessarily Polish.
I can also represent $D$ as a countable intersection as follows: translate the condition $x_n = 1 \Rightarrow x_{n+1} = 1$ into the set $A = ([0, 1) \times [0, 1]) \cup (\{ 1 \} \times \{ 1 \})$. Then
$D = (A \times [0, 1] \times [0, 1] \times \dots) \cap ([0, 1] \times A \times [0,1] \times [0,1] \times \dots) \cap ([0,1] \times [0,1] \times A \times [0,1] \times [0,1] \times \dots) \cap \dots$.
To show that $D$ is Polish it is enough to prove that $A$ is Polish. (Then the sets $A \times [0,1] \times \dots$ and so on are all Polish (as countable products of Polish spaces are Polish) and therefore their countable intersection $D$ is Polish.) $A$ with the Euclidean metric is not complete. Is it possible to construct a compatible complete metric on $A$?
AI: Since $D$ is defined by a universal quantifier over a countable set, it is most natural to write it as a countable intersection $\bigcap_n D_n$. Since $G_\delta$ subsets of a Polish space are Polish, it will be enough that $D_n$ is $G_\delta$ for each $n\in\mathbb{N}$.
We have
$$
\begin{align*}
D &= \{ x \in [0, 1]^{\mathbb{N}} \mid \forall n: x_n = 1 \Rightarrow x_{n+1} = 1 \}\\
&= \bigcap_n \{ x \in [0, 1]^{\mathbb{N}} \mid x_n = 1 \Rightarrow x_{n+1} = 1 \} \\
&= \bigcap_n \{ x \in [0, 1]^{\mathbb{N}} \mid x_n = 1 \}^{\mathsf{c}} \cup \{ x \in [0, 1]^{\mathbb{N}} \mid x_{n+1} = 1 \}.
\end{align*}
$$
Now, the first set in the union is open and the second is closed. Their intersection $D_n$ is then $G_\delta$ and we are done.
|
H: Simplify fraction $4x/(x-1)$ to $ 4+(4/(x-1))$
I have put the fraction into Symbolab, which gives a step-by-step explanation of why this is correct, but I am unable to grasp how this is possible.
AI: Adding $+4$ and $-4$ to the numerator, for $x\in \Bbb R \setminus \{1\}$ you have:
$$\frac{4x}{x-1}=\frac{(4x-4)+4}{x-1}=\frac{4x-4}{x-1}+\frac4{x-1}=4\frac{x-1}{x-1}+\frac4{x-1}=4+\frac4{x-1}$$
|
H: Compute mass function of $U=X+2Y$
Let $X$ and $Y$ be random variables with joint density $f_{X,Y}(x,y)=\frac{xy}{96}I_{R}(x,y)$,
where $R:=\{(x,y):0<x<4,1<y<5\}$. Let $U:=X+2Y$; its distribution function is
$$F_U(u)=P(U\leq u) = \int\int_{\{(x,y):x+2y\leq u\}} f_{X,Y}(x,y) \,dx\,dy =$$
$$\int\int_{\{(x,y):x+2y\leq u\}} cxyI_R(x,y) \,dx\,dy =\int\int_{\{(x,y):x+2y\leq u\}\cap R} cxy \,dx\,dy$$
If we draw $\{(x,y):x+2y\leq u\}\cap R$ we can see we have to consider several cases.
One case is $2<u<6$.
$$F_U(u) = \int\int_{\{(x,y):x+2y\leq u\}\cap R} cxy \,dx\,dy = \int_0^{u-2} \int_1^{\frac{u-x}{2}} cxy \, dx \, dy$$
The second case is $6<u<10$; in this case I think the integral is
$$\int_0^4\int_1^{\frac{u-4}{2}}cxy \, dy\,dx + \int_0^4\int_{{\frac{u-4}{2}}}^{\frac{u-x}{2}}cxy\,dy\,dx = u^2/96 -u/18 +1/24$$
But the solution is $(3u-8)/144$. I don't know why my second integral is badly set up.
AI: Did you try to differentiate your result?
To be more precise, in the support $u \in [6;10)$ the integral is the following
$$F_U(u)=F_U(6)+\int_0^4 \int_{\frac{6-x}{2}}^{\frac{u-x}{2}}f(x,y)dxdy=...=F_U(6)+\frac{3u^2-16u-12}{288}$$
Thus the density, obtained simply by differentiating $F$, is
$$f_U(u)=\frac{3u-8}{144}$$
as stated.
|
H: Understanding what is wrong in a limit development
I have the following limit:
$$\lim_{x\to -\infty} \frac{\sqrt{4x^2-1}}{x}$$
I know that the result is $-2$ and I know how to achieve it. However on the first try I made the following development and I still can't see what I am doing wrong:
$$\mathbf1)\lim_{x\to-\infty} \frac{\sqrt {4x^2-1}}{x}$$
$$\mathbf2)\lim_{x\to-\infty} \frac{4x^2-1}{x\sqrt{4x^2-1}}$$
$$\mathbf3)\lim_{x\to-\infty} \frac{x^2(4-\frac{1}{x^2})}{x^2(\frac{1}{x})\sqrt{\frac{4x^2-1}{x^4}}}$$
$$\mathbf4)\lim_{x\to-\infty} \frac{(4-\frac{1}{x^2})}{(\frac{1}{x})\sqrt{\frac{4}{x^2}-\frac{1}{x^4}}}$$
The denominator goes to zero and I am left with $\frac{4}{0}= \infty$.
Where is the mistake?
AI: You made a mistake from $(2)$ to $(3)$.
$\sqrt{4x^2-1}=x^2\sqrt{\dfrac{4x^2-1}{x^4}}$
so $x \sqrt{4x^2-1}=x^2(x)\sqrt{\dfrac{4x^2-1}{x^4}}$, not $x^2\left(\frac1x\right)\sqrt{\dfrac{4x^2-1}{x^4}}$
|
H: IMO $2001$ problem $2$
Let $a,b,c \in \mathbb{R}_+^*$. Prove that $$\frac{a}{\sqrt{a^2+8bc}} + \frac{b}{\sqrt{b^2+8ca}}+ \frac{c}{\sqrt{c^2+8ab}} \geqslant 1.$$
I tried to follow the proposed solution for this which depended on Hölder's inequality, but I'm a bit confused about how they came up with the expression. How I remember Hölder's is that it states that $$\sum_{i=1}^n |x_iy_i| \leqslant (\sum_{i=1}^n|x_i|^p)^{1/p}(\sum_{i=1}^n|y_i|^q)^{1/q}$$
and we need the same Conjugate property as in Young's inequality $\frac{1}{p} + \frac{1}{q} =1.$
What they had was $$(\sum \frac{a}{\sqrt{a^2+8bc}})(\sum \frac{a}{\sqrt{a^2+8bc}})(\sum a(a^2+8bc)) \geqslant (a+b+c)^3.$$
From here it was quite straightforward, but any clarification on how we can get this result from Hölder's would be appreciated.
AI: Hölder's inequality for two sequences is the following.
Let $a_1$, $a_2$,..., $a_n$, $b_1$, $b_2$,..., $b_n$, $\alpha$ and $\beta$ be positive numbers. Then:
$$(a_1+a_2+...+a_n)^{\alpha}(b_1+b_2+...+b_n)^{\beta}\geq$$$$\geq\left(\left(a_1^{\alpha}b_1^{\beta}\right)^{\frac{1}{\alpha+\beta}}+\left(a_2^{\alpha}b_2^{\beta}\right)^{\frac{1}{\alpha+\beta}}+...+\left(a_n^{\alpha}b_n^{\beta}\right)^{\frac{1}{\alpha+\beta}}\right)^{\alpha+\beta}.$$
For positives $a$, $b$ and $c$ by Holder we obtain:
$$\left(\sum_{cyc}\frac{a}{\sqrt{a^2+8bc}}\right)^2\sum_{cyc}a(a^2+8bc)\geq$$
$$\geq\left(\sum_{cyc}\sqrt[3]{\left(\frac{a}{\sqrt{a^2+8bc}}\right)^2a(a^2+8bc)}\right)^3=(a+b+c)^3.$$
In our case $n=3$, $\alpha=2$ and $\beta=1$.
|
H: Projection of space curve shortens
Let $C$ be a rectifiable, open curve in $\mathbb{R}^3$,
and let $|C|$ be its length.
Orthogonally project $C$ to a plane $\Pi$ (e.g., the $xy$-plane).
Call the projected curve $C_{\perp}$, and its length $|C_{\perp}|$.
I would like to claim $|C_{\perp}| \le |C|$.
I would appreciate either a simple proof, or a reference.
This may be so well-known that it is hard to cite a reference.
(I only need it in $\mathbb{R}^3$, but it should hold in any dimension.)
AI: If you are dealing with rectifiable curves, you are taking polygons with vertices
on $C$ and looking at the limit of the lengths of the polygons as the points
become closer. But if you project a line segment to $\{z=0\}$ its length cannot increase,
so the projected polygons are no longer than the original.
|
H: Block matrix of tensor product
$K$ is a field, $K^n$ is a vector space with $(e_1, \ldots, e_n)$. The tensor product $K^n \otimes K^n$ has the basis $\mathcal {B} = (e_1 \otimes e_1, \ldots, e_1 \otimes e_n, e_2 \otimes e_1,, \ldots, e_2 \otimes e_n, e_3 \otimes e_1, \ldots, e_n \otimes e_n)\,.$
Look at the matrices $A,B \in M(n \times n; K)$. We write $A \otimes B : K^n \otimes K^n \to K^n \otimes K^n$. How does the block matrix look?
a. $A \otimes B = \begin {pmatrix} b_{11} A & b_{12} A & \cdots & b_{1n} A \\ b_{21} A & b_{22} A & \cdots & b_{2n} A \\ \vdots & \vdots &\ddots & \vdots \\ b_{n1} A & b_{n2} A & \cdots & b_{nn} A \end {pmatrix}$
b. $A \otimes B = \begin {pmatrix} a_{11} B & a_{12} B & \cdots & a_{1n} B \\ a_{21} B & a_{22} B & \cdots & a_{2n} B \\ \vdots & \vdots &\ddots & \vdots \\ a_{n1} B& a_{n2} B & \cdots & a_{nn} B \end {pmatrix}$
c. $A \otimes B = \begin {pmatrix} A & \cdots & A & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ A & \cdots & A & 0 & \cdots & 0 \\ 0 & \cdots & 0 & B & \cdots & B \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & B & \cdots & B \end {pmatrix}$
AI: Actually, the matrix form is just a representation of an abstract object. Let $C=A \otimes B$. It is obvious that $C$ is a tensor with 4 indices.
$$C_{ijkl}=A_{ij} B_{kl}$$
If you take a look at your cases you will see that the correct answer is b).
By the way, a) represents $B \otimes A$.
|
H: Construct the joint probability mass function of $X$ and $Y$
Two fair dice are thrown. Let $X$ be the random variable that represents the
maximum obtained on any of the two dice and $Y$ the one which denotes the sum of what was obtained in both dice. Construct the joint probability mass function of $X$ and $Y$.
Any suggestions for how to define that function?
AI: Sketch for a solution (here $X$ and $Y$ temporarily denote the results of the two individual dice): first find the probability mass function of $Z:=(X,Y)$ (namely $f_Z$), and from here you can find the probability mass function of $R:=(\max\{X,Y\},X+Y)$ (namely $f_R$) by the relation
$$
f_R(a,b)=\Pr [\max\{X,Y\}= a, X+Y= b]=\sum_{(j,k)\in A}f_{Z} (j,k)\\ A:=\{(j,k)\in \{1,\ldots ,6\}^2:\max\{j,k\}= a \,\land\, j+k=b \}\\
=\{(j,k)\in\{1,\ldots 6\}^2:\max\{j,k\}=a\,\land\, \min\{j,k\}=b-a\}
$$
As the support space is tiny you can do it using a table of $6\times 11$ entries (rows for the values of $\max\{X,Y\}$ and columns for the values of $X+Y$), noticing that $f_R(a,b)=0$ when $b>2a$ or $a\geqslant b$, for example.
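Since there are only $36$ equally likely outcomes, the table can also be produced by brute-force enumeration; a small Python sketch:
from collections import Counter
from fractions import Fraction
pmf = Counter()
for d1 in range(1, 7):
    for d2 in range(1, 7):
        pmf[(max(d1, d2), d1 + d2)] += Fraction(1, 36)   # (max, sum) of the two dice
for (m, s), p in sorted(pmf.items()):
    print(m, s, p)   # e.g. P(max = 3, sum = 5) = 1/18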
|
H: Problem with the parametrisation of this surface integral
I am facing troubles in understanding (read: "guessing") the correct way to parametrise this integral:
$$\int_{\Sigma} \dfrac{1}{\sqrt{1 + x^2 + y^2}}\ \text{d}\sigma$$
Where $\Sigma = \{(x, y, z)\in\mathbb{R}^3; x^2 + y^2 \leq 1; z = \sin^2(x^2+y^2)\}$
Is there a cool simple way to solve this?
I thought about the usual polar coordinates but it becomes rather strange.
Thank you!
Idea
Maybe it's going to be a real malarkey but I thought again about polar/cylindrical:
$$x = R\cos\theta$$
$$y = R\sin\theta$$
$$z = \sin^2(R^2)$$
Yet the ranges would become
$$R\in(-1, 1) ~~~~~~~ \theta \in (0, 2\pi) ~~~~~~~ z\in (0, 1)$$
Hence
$$\int_0^1 \text{d}z \int_{-1}^1 \frac{1}{\sqrt{1+R^2}}\ \text{d}R \int_0^{2\pi} \text{d}\theta = 4\pi\text{arcsinh}(1)$$
Yes/no?
AI: I think you're miscalculating the integration limits of $z$. If you consider the change of coordinates $x = R\cos(\varphi)$, $y = R\sin(\varphi)$ and $z = z$, then the determinant of the corresponding Jacobian matrix is given by
\begin{align*}
|J| =
\begin{vmatrix}
\cos(\varphi) & -R\sin(\varphi) & 0\\
\sin(\varphi) & R\cos(\varphi) & 0\\
0 & 0 & 1
\end{vmatrix} = R\cos^{2}(\varphi) + R\sin^{2}(\varphi) = R
\end{align*}
Therefore we get that
\begin{align*}
\int_{\Sigma}\frac{1}{\sqrt{1+x^{2}+y^{2}}}\mathrm{d}\sigma = \int_{0}^{1}\int_{0}^{2\pi}\int_{0}^{\sin^{2}(R^{2})}\frac{R}{\sqrt{1+R^{2}}}\mathrm{d}z\mathrm{d}\varphi\mathrm{d}R
\end{align*}
Hopefully this helps.
EDIT
Let us consider the parametrization of $\Sigma$: $\displaystyle\sigma(u,v) = (u,v,\sin^{2}(u^{2}+v^{2})) = \left(u,v,\frac{1-\cos(2u^{2}+2v^{2})}{2}\right)$ whose domain is $D = \{(x,y)\in\textbf{R}^{2}:x^{2}+y^{2}\leq 1\}$. Then its partial derivatives are given by
\begin{align*}
\begin{cases}
\sigma_{u} = (1,0,2u\sin(2u^{2}+2v^{2}))\\\\
\sigma_{v} = (0,1,2v\sin(2u^{2}+2v^{2}))
\end{cases}
\end{align*}
Then we obtain the following integral:
\begin{align*}
\int_{\Sigma}\frac{1}{\sqrt{1+x^{2}+y^{2}}}\mathrm{d}\sigma & = \int_{D}\frac{1}{\sqrt{1+u^{2}+v^{2}}}\left\|\sigma_{u}\times\sigma_{v}\right\|\mathrm{d}u\mathrm{d}v\\\\
& = \int_{D}\frac{\sqrt{1 + 4(u^{2}+v^{2})\sin^{2}(2u^{2}+2v^{2})}}{\sqrt{1+u^{2}+v^{2}}}\mathrm{d}u\mathrm{d}v
\end{align*}
|
H: How do you prove that the derivative $\tan^{-1}(x)$ is equal to $\frac{1}{1+x^2}$ geometrically
How do you prove that the derivative of $\tan^{-1}(x)$ is equal to $\frac{1}{1+x^2}$ geometrically?
I figured it out by working it out using implicit differentiation.
I also found how to plot a semicircle using $\cos^2(x)+\sin^2(x)=1$: the graph of $(1-x^2)^{0.5}$ is a semicircle, because if you wanted to find $\sin(x)$ from $\cos(x)$ you would compute $(1-\cos^2(x))^{0.5}$. The reason it is only a semicircle is that to get the whole circle you need both the positive and the negative square roots.
I saw that $1$ and the $x^2$ and thought you could visually see that the derivative of $\tan^{-1}(x)$ is $\frac{1}{1+x^2}$ but I couldn't find any way so far.
AI: Tried to do geometry, but ended up doing a hand-wave-y first-principles approach. It can be made rigorous though.
The angle addition formulae have geometrical proofs.
$$\text{We want to find }\ \ \ (\dagger) := \frac{1}{\delta x} (\tan ^{-1}(x + \delta x) - \tan ^{-1} x) \ \ \ \text{ as } \delta x \rightarrow 0.$$
Recall that $$\tan(A+B)= \frac{\tan A + \tan B}{1-\tan A \tan B} \ ,$$
we can substitute $u = \tan A$ and $v = \tan B$ to get
$$\tan^{-1}u + \tan^{-1} v = \tan^{-1} \frac{u+v}{1-uv} \ .$$
Therefore $$(\dagger) = \frac{1}{\delta x}\tan^{-1}\frac{(x+ \delta x) + (-x)}{1 - (x+\delta x)(-x)}$$
$$ = \frac{1}{\delta x}\tan^{-1}\frac{\delta x}{1 + x^2 + x\delta x}$$
$$\rightarrow \frac{1}{1+x^2} \text{ by a small-angle approximation.}$$
|
H: Is every factorial totient?
A positive integer $\ n\ $ is called totient , if there is a positive integer $\ m\ $ such that $\ \varphi(m)=n\ $ holds , where $\ \varphi(m)\ $ is the totient function.
Is $\ k!\ $ totient for every positive integer $\ k\ $?
For $\ 2\le k\le 200\ $ I could find positive integers $\ a,b\ $ with $\ a\cdot b=k!\ $ such that $\ a+1\ $ and $\ b+1\ $ are both (proven) primes. If we rely on the BPSW-test, I arrived at $\ k=500\ $.
Heuristically, we should be able to find $\ a,b\ $ in every case, but I think this cannot be proven. Is there another way to prove that every factorial is totient ?
Here the PARI/GP code searching a solution :
gp > for(n=2,20,s=n!;t=0;gef=0;while(gef==0,t=t+1;if(Mod(s,t)==0,if(isprime(t+1,2)==1,if(isprime(s/t+1,2)==1,z=(t+1)*(s/t+1);gef=1;print(n," ",z))))))
2 6
3 14
4 39
5 183
6 905
7 7563
8 60483
9 393133
10 4233607
11 79833602
12 526901771
13 9340531203
14 101708006407
15 1438441804811
16 31384184832003
17 414968666112007
18 6499379367936067
19 123488207990784067
20 2513998741782528031
gp >
AI: Yes, look at https://artofproblemsolving.com/community/c6h140361. Basically, we can choose $$n=\frac{k!\cdot k\#}{\varphi( k\#) } ,$$
where $k\# = \prod_{p \leq k} p$ is the primorial. To see why this works, let
$$
k!=p_1^{e_1}p_2^{e_2}\cdots p_r^{e_r}
$$
be the unique prime factorization of $k!$; then $k\#=p_1p_2\cdots p_r$ and $\varphi(k\#)=(p_1-1)(p_2-1)\cdots (p_r-1)$. The numbers $p_i-1$ are distinct and each is at most $k-1$, so their product $\varphi(k\#)$ divides $(k-1)!$ and hence $k!$. Furthermore, let $\varphi(k\#)=p_1^{l_1}p_2^{l_2}\cdots p_r^{l_r}$; then $$n=p_1^{e_1+1-l_1}p_2^{e_2+1-l_2}\cdots p_r^{e_r+1-l_r},$$
and so it's easy to verify $\varphi(n)=p_1^{e_1}p_2^{e_2}\cdots p_r^{e_r}=k!.$
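A quick check of this construction for small $k$ (a sketch assuming SymPy's totient; the range of $k$ is kept small only to keep the factorizations cheap):
from math import factorial, prod
from sympy import primerange, totient
for k in range(2, 13):
    primorial = prod(primerange(2, k + 1))            # k# = product of primes <= k
    n = factorial(k) * primorial // totient(primorial)
    assert totient(n) == factorial(k)
print("phi(n) = k! verified for k = 2, ..., 12")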
|
H: Increasing sequence of sigma-algebras
On a non-empty set $E$, let $(\mathcal{E}_n)$ be an increasing sequence of sigma-algebras, i.e. such that, for every $n \leq m$, $\mathcal{E}_n \subseteq \mathcal{E}_m$. Let us denote by $\mathcal{E}$ its limit, i.e.
$$
\mathcal{E} = \sigma\left(\bigcup_{n\geq 0} \mathcal{E}_n\right)
$$
Is it true that, for each $A\in\mathcal{E}$, there is an increasing sequence $(A_n)$ with $A_n \in \mathcal{E}_n$, $A_n \subseteq A_m$ when $n \leq m$, and $A = \bigcup_{n \ge 0} A_n$?
AI: It is not true.
Define partitions $\mathcal P_n:=\left\{\left[k\cdot2^{-n},(k+1)\cdot2^{-n}\right)\mid k\in\mathbb Z\right\}$ on $\mathbb R$.
Let $\mathcal E_n$ denote the collection of subsets of $\mathbb R$ that can be written as a union of elements of $\mathcal P_n$.
Then $(\mathcal E_n)_n$ is an increasing sequence of $\sigma$-algebras.
Every singleton is an element of $\sigma(\bigcup_{n=0}^{\infty}\mathcal E_n)$.
This is because $\{x\}=\bigcap_{n=0}^{\infty}P_n(x)$ where $P_n(x)$ denotes the unique element of $\mathcal P_n$ that contains $x$ as an element.
However, there is no way to write $\{x\}=\bigcup_{n=1}^{\infty} A_n$ where $A_n\in\mathcal E_n$.
This is because the $\mathcal E_n$ do not contain singletons.
|
H: When can we say that a sequence is bigger or smaller than another sequence
Let's say there are two sequences $\{a_n\}$, $\{b_n\}$. I often see the inequality
$$\{a_n\} > \{b_n\}.$$
I don't understand how can we make such comparison to determine if a sequence is bigger or smaller than another one.
AI: Usually we talk about infinite sequences, and then $\{a_n\}$ is said to be bigger than $\{b_n\}$ if and only if there are only finitely many $n$ such that
$$b_n>a_n$$
If you insist on talking about finite sequences, then I think it is fair to compare only the terms at indices that exist in both sequences.
|
H: How to evaluate $\int_{0}^{\pi} \sin ^{n}(\eta) d \eta$?
I have encountered the following integral:
$$\int_{0}^{\pi} \sin ^{n}(\eta) d \eta=\underbrace{\left[\left(\sin ^{n-1}(\eta)\right)(-\cos (\eta))\right]_{\eta=0}^{\pi}}_{=0} -\int_{0}^{\pi}\left((n-1) \sin ^{n-2}(\eta) \cos (\eta)\right)(-\cos (\eta)) d \eta$$
But I do not see how the integral evaluates to the right-hand side of the above equation. I am trying to do integration by parts, but in vain. Could you please show me the missing steps?
The right hand side can then be simplified to look like:
$$=(n-1) \int_{0}^{\pi} \cos ^{2}(\eta) \sin ^{n-2}(\eta) d \eta$$
AI: The key observation here is that you can write $\sin^{n}(\eta) = \sin^{n-1}(\eta)\sin(\eta).$ Then, to integrate by parts, we let
\begin{alignat}{3}
u &= \sin^{n-1}(\eta) &&\implies du &&= (n-1)\sin^{n-2}(\eta)\cos(\eta)\,d\eta\\
dv &= \sin(\eta)\,d\eta &&\implies v &&= -\cos(\eta).
\end{alignat}
Then, we get
$$\int_{0}^{\pi}\sin^{n}(\eta)\,d\eta = \left[\sin^{n-1}(\eta)(-\cos(\eta))\right]\bigg|_{0}^{\pi} -\int_{0}^{\pi}\left((n-1) \sin ^{n-2}(\eta) \cos (\eta)\right)(-\cos (\eta))\, d\eta.$$
|
H: Finite Summations can be Interchanged
In the proof of the associativity of matrix multiplication, the reason for one of the steps is given as "finite summations can be interchanged." What is meant by this statement?
AI: In other words, they mean the following: if $c_{ij}$ is a number for every $i = 1,\dots,m$ and $j = 1,\dots,n$, then
$$
\sum_{i=1}^m \left(\sum_{j=1}^n c_{ij}\right) = \sum_{j=1}^n \left(\sum_{i=1}^m c_{ij}\right).
$$
Note that in most texts, the parentheses around the inner sum are not written explicitly (but are implied).
That these sums are equal is an "obvious" consequence of the commutativity of addition.
For instance, we can compare the two expressions in the case of $m = 2$ and $n = 3$. We have
$$
\sum_{i=1}^2 \sum_{j=1}^3 c_{ij} = \sum_{i=1}^2 (c_{i1} + c_{i2} + c_{i3}) = (c_{11} + c_{12} + c_{13}) + (c_{21} + c_{22} + c_{23}).
$$
On the other hand,
$$
\sum_{j=1}^3 \sum_{i=1}^2 c_{ij} = \sum_{j=1}^3 (c_{1j} + c_{2j}) =
(c_{11} + c_{21}) + (c_{12} + c_{22}) + (c_{13} + c_{23}).
$$
Clearly, since the same terms are being added in both sums, the two sums are equal.
The particular step, in detail:
$$
\begin{align}
\sum_{j=1}^n a_{ij}\left(\sum_{k=1}^p b_{jk}c_{kl}\right) &=
\sum_{j=1}^n \left(\sum_{k=1}^p a_{ij}b_{jk}c_{kl}\right)
\\ &=
\sum_{k=1}^p \left(\sum_{j=1}^n a_{ij}b_{jk}c_{kl}\right)
=
\sum_{k=1}^p \left(\sum_{j=1}^n a_{ij}b_{jk}\right)c_{kl}.
\end{align}
$$
|
H: Identicality, equality and linearity of i.i.d random variables. What are some examples if iid r.vs are unequal among themselves?
I need to understand some underlying concepts and facts regarding i.i.d random variables.
The problem is $$\textrm{Suppose } \mathrm{X_1, X_2, X_3 } \textrm{ are i.i.d positive valued r.v.s.}\\ \textrm{Define } \mathrm{Y_i=\frac{X_i}{X_1+X_2+X_3} \textrm{, i=1,2,3. Find the correlation between } Y_1 \textrm{ and } Y_3.}$$
Here is the first approach to solve, $$\mathrm{\sum_{i=1}^{3} Y_i=1 \textrm{ and since X_i's are iid, hence each of } Y_i=\frac{1}{3}. Therefore, \mathbb{E}(Y_i)=\frac{1}{3} \textrm{ for i=1,2,3.}}\\ \mathrm{\textrm{To find } \mathbb{E}(Y_{1}^{2}) \textrm{ and } \mathbb{E}(Y_{3}^{2}) \textrm{ we have }}\\ \mathrm{1=\mathbb{E}\bigg[\frac{(X_1+X_2+X_3)^2}{(X_1+X_2+X_3)^2}\bigg]}\\=\mathrm{\mathbb{E} \bigg[\frac{3X_{1}^{2}}{(X_1+X_2+X_3)^2}+\frac{2(X_1X_2+X_2X_3+X_3X_1)}{(X_1+X_2+X_3)^2}\bigg]}\\=\mathrm{\mathbb{E} \bigg[3Y_{1}^{2}+2(Y_1Y_2+Y_2Y_3+Y_3Y_1)\bigg]}\\=\mathrm{\mathbb{E} \bigg[3Y_{1}^{2}+2[Y_1(1-Y_1)+Y_2Y_3]\bigg]}\\=\mathrm{\mathbb{E} \bigg[Y_{1}^{2}+2Y_1+Y_2Y_3\bigg]}\\=\mathrm{\mathbb{E} \bigg[Y_{1}^{2}+2Y_1+Y_2Y_3\bigg]}\\=\mathrm{\mathbb{E} \bigg[Y_{1}^{2}-\frac{2Y_1}{3}+\frac{1}{9}+\frac{8}{9}+\frac{8Y_1}{3}+Y_2Y_3-1\bigg]}\\=\mathrm{\mathbb{V}(Y_1)+\mathbb{E} \bigg[\frac{8}{9}+\frac{8Y_1}{3}+Y_2Y_3-1\bigg]}\\=\mathrm{0+\mathbb{E}[\frac{7}{9}+Y_2Y_3]}\\\Longrightarrow \mathrm{\mathbb{E}[Y_2Y_3]=\frac{2}{9}}\\\Longrightarrow \mathrm{\mathbb{E}[Y_2Y_3]=\mathbb{E}[Y_2+Y_3]}$$
Hence I need to know that
-What can we conclude from this solution?
My second approach is as follows :
$$\mathrm{\textrm{If Y_i's are not independent then they are indentical only, } cov(Y_1, Y_1+Y_2+Y_3)=cov(Y_1, 1)=0}\\\Longrightarrow \mathrm{\mathbb{V}(Y_1)+cov(Y_1,Y_2)+cov(Y_1,Y_3)=0}\\ \mathrm{\textrm{Now due to identicality if the covariance } (Y_1,Y_2) = \textrm{ covariance } (Y_1,Y_3) \textrm{ and , }\mathbb{V}(Y_1)=\mathbb{V}(Y_3) \textrm{, then my problem is solved.}}$$
From here, if the $Y_i$'s are identically distributed then their variances are equal to each other and their covariances are equal to each other. Hence the r.v.s $Y_i$ are equal due to identicality. Is this a correct statement?
Any help or explanation is valuable and highly appreciated.
Here is a post from this website where the given hint says that the i.i.d. r.v.s are equal in distribution but unequal among themselves. Is there any example available?
AI: Hint:
By symmetry, $\text{Cov}(Y_1, Y_3) = \text{Cov}(Y_1, Y_2)$ so
$$\eqalign{\text{Cov}(Y_1,Y_3) &= \frac{1}{2} \text{Cov}(Y_1,Y_2+Y_3)\cr
&= \frac{1}{2} \text{Cov}(Y_1, 1-Y_1)\cr}$$
Express this in terms of $\text{Var}(Y_1)$...
|
H: Interested in a closed form for this recursive sequence.
Consider the following game: you start with $ n $ coins. You flip all of your coins. Any coins that come up heads you "remove" from the game, while any coins that come up tails you keep in the game. You continue this process until you have removed all coins from the game. Let your score $ s $ be defined as the number of rounds of flipping before the game was over (including the last flip, say).
Let $ a_n $ be the expected value of the score when you start with $ n $ coins. It is not hard to see that
$$
a_n = \frac{1}{2^n - 1} \left(1 + \sum_{m=0}^{n-1}{{n\choose m} (a_m + 1)}\right)
$$
Are there any generating functionology wizards out there who know if this could be turned into a closed-form formula for $ a_n $?
Note, this question was asked here: You are flipping n fair coins, putting aside those that come up heads after each flip. What's the expected number of rounds?, but the answer is entirely unsatisfactory; they only provide a heuristic which is asymptotically correct.
Edit: here is a quick mathematica experiment:
Edit 2: While investigating the first differences of this function, we find a lovely pattern. I plotted the following function
$$
g(n) := n \left( \frac{1}{\log(2)} - n (a_{n+1} - a_{n}) \right)
$$
resulting in the following plot, clearly oscillating about $ 1/\log(2) $:
(Here, "dev[n]" is the first-difference function $ a_{n+1} - a_n $.)
AI: Based on OEIS, this seems to be https://oeis.org/A158466. There you can see some closed-form formulas (if you consider finite sum closed, which is quite usual), for example
$$
a_n = \sum_{k=1}^{n}(-1)^{k+1}\binom{n}{k}\frac{2^k}{2^k-1}.
$$
You should be able to verify it by plugging it into the recursive formula.
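Here is a quick exact check of that verification with Python fractions (a sketch; it simply compares the recursion in the question with the OEIS closed form):
from fractions import Fraction
from functools import lru_cache
from math import comb
@lru_cache(maxsize=None)
def a_rec(n):
    if n == 0:
        return Fraction(0)
    s = 1 + sum(comb(n, m) * (a_rec(m) + 1) for m in range(n))
    return s / (2**n - 1)
def a_closed(n):
    return sum(Fraction((-1)**(k + 1) * comb(n, k) * 2**k, 2**k - 1) for k in range(1, n + 1))
for n in range(1, 16):
    assert a_rec(n) == a_closed(n)
print(a_rec(3))   # 22/7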
|
H: Plane, two lines and distance problem
I have been working on this exercise and am kinda struggling with it. This is the exercise and what I have done so far. Any tips would be greatly appreciated!
Given the plane $x+y=0$ and two lines:
$p_1: \frac{x}{3} = \frac{y+1}{1} = \frac{z-3}{-2}$ and $p_2$ (given as the intersection of 2 planes): $ y=z+2$ & $x=1$, find the line $q$ that is parallel to the given plane and intersects $p_1$ and $p_2$ in two points whose distance is 3.
This is what I have so far. First, I transformed $p_2$ to canonical form: cross product of two normal vectors of given planes will give a direction vector for $p_2$ . $\vec{n_1} =(0,1,-1)$ and $\vec{n_2}=(1,0,0)$
Now for the cross product: $\vec{n_1} \times \vec{n_2} = (0,-1,-1).$
I can now choose a point that satisfies both planes of $p_2$. For e.g. $A=(1,3,1)$. By the formula, now I have a canonical line form: $p_2:\frac{x-1}{0}=\frac{y-3}{-1}=\frac{z-1}{-1}$
I also have the info about distance. Let $T_1=(x_1,y_1,z_1)$ and $T_2=(x_2,y_2,z_2)$. Their distance is 3 so I have: $\sqrt{(x_2-x_1)^2+(y_2-y_1)^2+(z_2-z_1)^2}=3$. After squaring I get: $(x_2-x_1)^2+(y_2-y_1)^2+(z_2-z_1)^2=9$.
Now I already have an equation with 6 unknowns. What can I do now to find the line $q$ ? I can also find form of $q$ because I know that normal vector to the $x+y=0$ is also normal vector to the $q$. Should I write $q$ in a form of $k,l,m$ or in a form of two unknown points? I also can write two determinants of $q$ and $p_1$,$p_2$ and set them equal to 0 but I don't get enough info.
Can someone please give me a hint or help? Sorry if formatting is not good, I'm trying my best.
Have a nice day!
AI: Say the point where it intersects $p_1$ is $$P_1=\left(3a,a-1,-2a+3\right)$$and where it intersects $p_2$ is $$P_2=\left(1,-b+3,-b+1\right)$$Then we know the vector $\vec{P_1P_2}$ is parallel to $q$, which is further perpendicular to the normal vector of $x+y=0$. So, $$(3a-1, a+b-4,-2a+b+2)\cdot(1,1,0) = 0 \\ 3a-1 +a+b-4=0 \\ \implies 4a+b=5$$ Also, $|P_1P_2| =3$, i.e. $$\sqrt{(3a-1)^2 +(a+b-4)^2 +(-2a+b+2)^2} =3$$ Solve the two equations to get $a,b$ and then you have two points through which $q$ passes, and you can write its equation.
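A sketch of the remaining computation with SymPy (just to carry out the last step; the symbols match the answer above):
import sympy as sp
a, b = sp.symbols('a b', real=True)
P1 = sp.Matrix([3*a, a - 1, -2*a + 3])
P2 = sp.Matrix([1, -b + 3, -b + 1])
v = P1 - P2
eqs = [sp.Eq(v.dot(sp.Matrix([1, 1, 0])), 0),   # P1P2 perpendicular to the normal of x + y = 0
       sp.Eq(v.dot(v), 9)]                      # |P1P2|^2 = 9
print(sp.solve(eqs, [a, b]))                    # two solutions: (a, b) = (1, 1) and (7/9, 17/9)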
|
H: Clarification on the use of subsequences to prove that in a metric space a sequence in a compact subset admits a convergent subsequence in the subset
Lemma: Let $(X,\tau)$ be a topological space and let $K \subseteq X$ be compact. If $E\subseteq K$ is infinite then $Der(E)\neq \emptyset$, where $Der(E)$ is the set of accumulation points of $E$.
Theorem
Let $(X,d)$ be a metric space and let $K \subseteq X$ be compact in $(X,\tau_d)$. Let $p_n$ be a sequence in $K$; then there exists a subsequence of $p_n$ converging to an element of $K$.
proof:
Let $E=\{p_n| n \in \mathbb{N}\}$ be the image set of the sequence.
$(\alpha)$ If $E$ is finite, there exists a strictly increasing sequence $n_k$ in $\mathbb{N}$ such that $p_{n_i}=p_{n_j}$, for each $i,j \in \mathbb{N}$. Now the subsequence $p_{n_k}$ is constant and therefore convergent.
Now let us suppose that $E$ is infinite. Using the lemma, $E$ has an accumulation point $p_0$, and $p_0 \in K$ because $K$ is closed.
$(\beta)$ Then $\forall k \in \mathbb{N}$ there exists $p_{n_k} \neq p_0$ such that $p_{n_k} \in B_d(p_0, \frac{1}{k})$. Now the subsequence $p_{n_k}$ converges to $p_0$
The use of subsequences confuses me a bit, and I don't see the point:
In $(\alpha)$, why can't I just say:
If $E$ is finite, there exists $n_0 \in \mathbb{N}$ such that $\forall n > n_0$ $p_{n}=p_{0}$ constant, that is the sequence is $\{ P_1, P_2,...P_{n_0}, P_0,P_0, P_0...\}$
And in $(\beta)$, why can't I just say:
$\forall k \in \mathbb{N}$ there exists $p_{k} \neq p_0$ such that $p_{k} \in B_d(p_0, \frac{1}{k})$
AI: For $(\alpha)$, we can’t say that the sequence is constant from some point on because it needn’t be true: consider the real sequence $\langle(-1)^n:n\in\Bbb N\rangle$, which alternates between $1$ and $-1$.
As for $(\beta)$, your suggestion essentially renames some of the points of the original sequence. Originally $p_2$, say, was the second term of the sequence; if the second term of the sequence was not in $B_d\left(p_0,\frac1k\right)$, your $p_2$ might actually be the original $p_{100}$. This is at best confusing. Your source follows the usual practice: in my example, it would say that $n_2=100$. The point retains its original name, $p_{100}$, but the $n_2$ subscript makes it clear that it’s the second point of the subsequence.
Your source is a bit sloppy, however. In order for $\langle p_{n_k}:k\ge 1\rangle$ to be a subsequence of the original sequence, the indices $n_k$ must be strictly increasing. It should therefore say that there is an $n_1$ such that $p_{n_1}\in B_d(p_0,1)$, and for each $k>1$ there is an $n_k>n_{k-1}$ such that $p_{n_k}\in B_d\left(p_0,\frac1k\right)$.
|
H: Show that $(\sum a_{n}^{3} \sin n)$ converges given $\sum{a_n}$ converges
Given that $\sum a_{n}$ converges $\left(a_{n}>0\right) ;$ Then $(\sum a_{n}^{3} \sin n)$ is
My approach:
Since, $\sum a_{n}$ converges, we have $\lim _{n \rightarrow \infty} n \cdot a_{n}$ converges.
i.e. $\left|n \cdot a_{n}\right| \leq 1$ for $n \geq K(\text { say })$
$\Rightarrow n \cdot a_{n}<1 \quad\left[\because a_{n}>0\right]$
$\Rightarrow a_{n}<\frac{1}{n}$
$\therefore a_{n}^{3}<\frac{1}{n^{3}}$
$\Rightarrow a_{n}^{3} \sin n \leq \frac{1}{n^{3}} \sin n \leq \frac{1}{n^{3}}$
$\Rightarrow \sum a_{n}^{3} \sin n \leq \sum \frac{1}{n^{3}}$
$\because \mathrm{RHS}$ converges so LHS will also converge.
Any other or better approach will be highly appreciated; please correct me if I am wrong.
AI: As $\{a_n\}$ is a positive sequence such that $\sum a_n$ converges, we have $0 \le a_n \le 1$ for $n$ large enough, say $n \ge M$.
Then for $n \ge M$
$$0 \le \vert a_n^3 \sin n \vert \le a_n^3 \le a_n$$
Hence $\sum a_n^3 \sin n$ converges absolutely.
Also, a series $\sum a_n$ can be convergent while the sequence $\{n a_n\}$ diverges. Consider for example $a_n$ equal to $0$ if $n$ is not a square and equal to $1/n$ otherwise.
|
H: Linear programming word problem
A sports event for a school has 300 tickets. They'll sell tickets to students for $5$ dollars and to teachers for $6$ dollars. School rules say that there must be at least $1$ teacher for every $5$ students on the trip. The school also wants to have at least twice as many students as teachers on the trip. There are $110$ seats on the school-bus that ticketholders must use to ride to the event. Each seat can fit either 2 teachers or 3 students. To how many teachers should the school sell tickets to maximize revenue (and such that all ticketholders fit on the bus)?
Let $x=$ number of students and $y=$ number of teachers, objective function: $5x+6y$
Constraints:
$x+y\leq300$
$5y\geq x$
$x\geq2y$
$\frac{1}{3}x+\frac{1}{2}y\leq110$
Maximum: $5x+6y=1560$ at $(240,60)$ hence teachers: $60$
Can anyone check if my work is correct or not?
AI: Yes, this is correct, although you might want to explicitly impose lower bounds $x \ge 0$ and $y \ge 0$. The dual variables $(3,0,0,6)$ provide a short proof that $1560$ is an upper bound on the objective value:
$$5x+6y = 3(x+y)+6\left(\frac{1}{3}x+\frac{1}{2}y\right) \le 3(300) + 6(110) = 1560$$
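If you want to double-check the optimum numerically, here is a small Python sketch using `scipy.optimize.linprog` (assuming SciPy is available; the variable names are mine):

```python
# Maximize 5x + 6y subject to the four constraints above (x = students, y = teachers).
from scipy.optimize import linprog

c = [-5, -6]                        # linprog minimizes, so negate the objective
A_ub = [[1, 1],                     # x + y <= 300
        [1, -5],                    # x <= 5y
        [-1, 2],                    # 2y <= x
        [1/3, 1/2]]                 # x/3 + y/2 <= 110
b_ub = [300, 0, 0, 110]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)              # expected: approximately [240. 60.] and 1560.0
```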
|
H: Determining a Laurent series with trigonometric functions
I could use some help with the Laurent series around $z_{0}=0$ for all $z\in\mathbb{C}\backslash\{n\cdot\pi;\;n\in\mathbb{N}\} $ of the following function:
$$f(z)=\frac{e^{\sin(z)}-\cos(z)-z}{\sin^{2}(z)} $$
In particular, I'm having problems trying to figure out the coefficients of the Laurent series and whether or not I can just use the respective Taylor or Fourier series.
AI: Let’s expand near $0$
$$\begin{align}
\sin{z}&=z-{z^3\over 6}+\cdots +(-1)^k{z^{2k+1}\over(2k+1)!}+\cdots\\
\cos{z}&=1-{z^2\over 2}+\cdots +(-1)^k{z^{2k}\over (2k)!}+\cdots
\end{align}$$
I cannot see a simple expression for the general term, but assume we only want the expansion up to order $2$. Since the numerator and the denominator both start at order $2$, they must each be expanded up to order $4$; in particular the $z^3$ and $z^4$ terms of the exponential series have to be kept. One has
$$\begin{align}
\sin^2{z}&=\left(z-{z^3\over 6}\right)^2+o(z^4)=z^2-{z^4\over 3}+o(z^4)\\
e^{\sin{z}}&=1+ \left(z-{z^3\over 6}\right)+ {1\over 2}\left(z-{z^3\over 6}\right)^2+{1\over 6}\left(z-{z^3\over 6}\right)^3+{1\over 24}\left(z-{z^3\over 6}\right)^4+o(z^4)\\
&=1+z+{z^2\over 2}-{z^4\over 8}+o(z^4)\\
\cos{z}&=1-{z^2\over 2}+{z^4\over 24}+o(z^4)
\end{align}$$
Putting all the pieces together,
$${e^{\sin{z}}-\cos{z}-z\over \sin^2{z}}={z^2-{z^4\over 6}+o(z^4)\over z^2-{z^4\over 3}+o(z^4)}={1-{z^2\over 6}+o(z^2)\over 1-{z^2\over 3}+o(z^2)}=1+{z^2\over 6}+o(z^2)$$
In particular the singularity at $0$ is removable, so the Laurent series around $0$ is in fact a Taylor series.
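For a quick sanity check of the expansion (not part of the derivation), one can let SymPy compute the series; this is just a sketch assuming SymPy is installed:

```python
# Series of (e^{sin z} - cos z - z)/sin^2 z around z = 0.
from sympy import symbols, exp, sin, cos

z = symbols('z')
f = (exp(sin(z)) - cos(z) - z) / sin(z)**2
print(f.series(z, 0, 3))   # expected: 1 + z**2/6 + O(z**3)
```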
|
H: What is the need to include the "additive identity exists" axiom in the set of vector space axioms?
A vector space is a set V along with an addition on V and a scalar multiplication on V such that the following properties hold, according to S. Axler's Linear Algebra Done Right:
Commutativity
Associativity
Existence of additive identity
Existence of additive inverse
Multiplicative identity
Distributive Properties
Now, by definition, an addition of two elements in the set V should also be a member of V. Using that fact, and (4) from the list above, shouldn't (3) be provable?
Note: A few more moments of thought made me question my statement. What is $0$? Unless (3) is taken, does (4) make any sense? Am I on the right track?
Also, S. Axler says, about subspaces, that they have to satisfy three conditions for them to be considered a subspace (excluding the condition that they have to be a subset of a vector space, which would ensure that the other properties (distributive, commutative) are satisfied $\forall v \space \epsilon\space Subspace$).
$0 \space\epsilon\space Subspace$
Scalar multiplication is closed.
Addition is closed.
In addition to this, he also says we can replace (1) here with a similar condition:
The subspace is non-empty.
He says that since scalar multiplication is closed within the subspace, and that $0v = 0$ (the proof of this involves the additive inverse axiom and the fact that the "Zero" produced by the inverse CAN be added to a vector from $V$; Does this zero have to exist within $V$ for us to define "addition" between this zero and a vector from $V$?) for any $v\space\epsilon\space Subspace$, this would imply that $0\space \epsilon \space Subspace$.
So here, is this replacement possible only because (3) from the vector space axioms defines "zero"?
Note: I tried my best to organize my thoughts and questions, but something seems amiss. I know my questions aren't sequential and coherent but I am struggling to understand which part of this is the head and which one's the tail.
AI: (1) "Does ${4}$ and the addition of two vectors being in ${V}$ imply ${(3)}$?" No. The definitions are intertwined. In order to define an additive inverse, you need a definition of the identity element, since its definition is based on the identity element. By definition, if you take a vector ${v}$, its additive inverse, ${(-v)}$ is the vector ${\in V}$ such that
$${v + (-v) = 0_V}$$
where ${0_V}$ is the additive identity vector in the vector space. If we have no concept of what ${0_V}$ is, the definition for this additive inverse doesn't really make sense. It's like asking "what does blue mean?" without knowing the concept of a colour.
(2) Exactly. Given any subset of ${V}$, (call it ${U}$. That is ${U\subseteq V}$) it's either empty or non-empty. If it's empty, it's not a subspace (since if it's empty it does not contain ${0_V}$, the zero vector). If it's non-empty, then there must exist at least one vector in the subspace. Take any vector ${u \in U}$. Then we have
$${0_F\times u = 0_V}$$
And since the subset is closed under scalar multiplication, it follows that any non-empty subset of ${V}$ that is closed under scalar multiplication and vector addition contains ${0_V}$. Hence the first condition, that ${0_V}$ lies in the subspace, can be replaced with the subspace being non-empty. In this context, both statements are equivalent.
Ultimately - you cannot get away without defining ${0}$. If you don't define it - how do you then define additive invertibility?
Edit: As @ArturoMagidin has pointed out in the comments - it is possible to come up with a different set of axioms (by different I mean that you replace $4$) that don't include $0$, but that satisfies all other conditions. This is different from the standard vector space axioms, but is pretty cool - so you should check it out! :D
|
H: Sanity check: is this simple formula for pseudoinverse of $[\mathbf{U} \cdots \mathbf{U}]$ correct?
Let $\mathbf{U}$ be some matrix, and then consider the "block row vector"
$$ \underbrace{[\mathbf{U} \cdots \mathbf{U}]}_{N \text{ times}} \,. $$
Claim: The pseudoinverse of this is the "block column vector"
$$ \frac{1}{N}\begin{bmatrix} \mathbf{U}^\dagger \\ \vdots \\ \mathbf{U}^\dagger \end{bmatrix} = \begin{bmatrix} \frac{1}{N}\mathbf{U}^\dagger \\ \vdots \\ \frac{1}{N}\mathbf{U}^\dagger \end{bmatrix} $$
Proof (?) of claim: I believe I was able to show that this Ansatz satisfies the four properties which uniquely define the pseudoinverse of a matrix by using the following two "lemmas"
$$ \begin{bmatrix} \mathbf{F}_1 \cdots \mathbf{F}_N \end{bmatrix} \begin{bmatrix} \mathbf{G}_1 \\ \vdots \\ \mathbf{G}_N \end{bmatrix} = \sum_{n=1}^N \mathbf{F}_n \mathbf{G_n} $$
$$ \begin{bmatrix} \mathbf{D}_1 \\ \vdots \\ \mathbf{D}_N \end{bmatrix} \begin{bmatrix} \mathbf{E}_1 \cdots \mathbf{E}_N \end{bmatrix} = \begin{bmatrix} \mathbf{D}_1 \mathbf{E_1} & \mathbf{D}_1 \mathbf{E}_2 & \cdots \\ \vdots & \ddots & \vdots \\ \mathbf{D}_N \mathbf{E}_1 &\cdots &\mathbf{D}_N \mathbf{E}_N \end{bmatrix}$$
Then the proof seems to be simply applying those principles and then using the facts that $\mathbf{U}^\dagger$ is the pseudoinverse of $\mathbf{U}$ (e.g. $\mathbf{U}^\dagger \mathbf{U} \mathbf{U}^\dagger = \mathbf{U}^\dagger$). Is this correct?
AI: I can't be sure that your proof of the properties was correct, but your claim indeed holds.
The result has a very simple proof if we use the fact that
$$
(A \otimes B)^+ = A^+ \otimes B^+,
$$
where $\otimes$ denotes the Kronecker product. Now, let $\mathbf 1$ denote the column-vector with a $1$ for every entry. It follows that
$$
\pmatrix{\mathbf U & \cdots & \mathbf U}^+ = (\mathbf 1^T \otimes \mathbf U)^+ =
(\mathbf 1^T)^+ \otimes \mathbf U^+ = \left(\frac 1N \mathbf 1\right)\otimes \mathbf U^+ = \frac 1N \pmatrix{\mathbf U^+ \\ \vdots \\ \mathbf U^+}.
$$
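As an aside, the claim is easy to test numerically; here is a small NumPy sketch (random $\mathbf U$, $N=3$):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((4, 3))
N = 3

block_row = np.hstack([U] * N)                        # [U U U]
candidate = np.vstack([np.linalg.pinv(U)] * N) / N    # (1/N) [U^+; ...; U^+]

print(np.allclose(np.linalg.pinv(block_row), candidate))   # expected: True
```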
|
H: Understanding one step in the Neyman-Pearson lemma proof
In Georgii's book they state:
Given $(\chi,\mathcal{F},P_0,P_1)$ with simple hypothesis and alternative and $0<\alpha<1\ $ a given significance level. Then:
$(a)$ There exists a Neyman-Pearson-test $\phi$ with $E_0(\phi)=\alpha$.
They begin the proof like this:
Let $c$ be any $\alpha$-fractile (i.e. a $(1-\alpha)$-quantile) of $P_0\circ R^{-1}$, where $R$ is the likelihood ratio. By definition we have:
$$P_0(R>c)\leq \alpha \quad P_0(R\geq c)\geq \alpha$$
Exactly this is the part I don't get. How come we have $P_0(R\geq c)\geq \alpha$? $P_0(R>c)\leq \alpha$ is exactly by definition of the $1-\alpha$-quantil but how come taking one more point into consideration we have that it becomes greater than $\alpha$. Could anyone explain to me how that is derived?
AI: Here are some details. Define $\beta(x)=P_0(R>x)$, so that $1-\beta(x)$ is a c.d.f., thus non-decreasing and right-continuous. Thus, given $\alpha\in(0,1)$, there is $c$ s.t.
$$1-\beta(c^-)\le 1-\alpha\le 1-\beta(c),$$ where $c^-$ denotes the left limit. Using these inequalities, $P_0(R>c)=\beta(c)\le\alpha$ and
$$P_0(R\ge c)=\beta(c^-)\ge \alpha.$$
|
H: Power of orthogonal matrix
Suppose $U$ is an orthogonal matrix, and $D$ is a diagonal matrix. Let $I$ denote the identity matrix. Let $k$ be a positive integer.
I think the following holds:
$$(I - UDU^T)^k = U(I - D)^kU^T$$
But I got a little lost while writing out the steps
\begin{align*}
(I - UDU^T)^k &= (UIU^T - UDU^T)^k\\
&= (U(I - D)U^T)^k\\
&= ?
\end{align*}
What exactly are $(U^T)^k$ and $U^k$?
AI: I am not sure why you are asking about $(U^T)^k$ and $U^k$, but recall that it does not generally hold that $(ABC)^k = A^kB^kC^k$; we would need to have more information about $A,B,C$ (for instance, that they commute).
We can prove the result inductively. Note that
$$
\begin{align}
(U(I - D)U^T)^k &= (U(I - D)U^T)^{k-1}U(I - D)U^T
\\ &=
[(U(I - D)U^T)^{k-1}]U(I - D)U^T
\\ & =
\color{red}{[U(I - D)^{k-1}U^T]}U(I - D)U^T
\\ & =
U(I - D)^{k-1}\color{red}{[U^TU]}(I - D)U^T
\\ & = U\color{red}{(I - D)^{k-1}(I - D)}U^T = U(I - D)^kU^T.
\end{align}
$$
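A quick numerical check of the identity (NumPy sketch, with a random orthogonal $U$ and diagonal $D$):

```python
import numpy as np

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # orthogonal U from a QR factorization
D = np.diag(rng.standard_normal(4))
I = np.eye(4)
k = 5

lhs = np.linalg.matrix_power(I - U @ D @ U.T, k)
rhs = U @ np.linalg.matrix_power(I - D, k) @ U.T
print(np.allclose(lhs, rhs))   # expected: True
```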
|
H: How do we solve the ODE $y''= \frac{1}{ \cosh (y')}$?
I want to solve
$$y''= \frac{1}{ \cosh (y')} $$
$y(0)=1, y'(0)=0 $
Can I do it by substituting $ y'=z $, $ y''=z' $
and solving
$$ z'= \frac{1}{ \cosh z}$$
$$ \Leftrightarrow \frac{dz}{dx}= \frac1{ \cosh z } $$
$$ \Leftrightarrow \int \cosh z dz = \int 1 dx $$
$$ \Leftrightarrow \sinh z = x+s $$
how do I proceed here?
AI: Apply the substitution $y'=z$ and proceed as follows
$$\sinh z = x+s$$ $$\implies z=\sinh^{-1}(x+s)$$ $$\implies y'=\sinh^{-1}(x+s).$$
Then, $y'(0)=0\implies 0=\sinh^{-1}(s) \implies s=0.$ Therefore $y' =\sinh^{-1}(x)$. Integrating both sides and using the identity
$$\int \sinh^{-1}(ax)\,dx=x\sinh^{-1}(ax)-\frac{\sqrt{a^2x^2+1}}{a}+C,$$
gives
$$y=x\sinh^{-1}(x)-\sqrt{x^2+1}+C.$$
Now evaluate $y(0)=1$ to find $C$.
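Since $y(0)=1$ gives $C=2$, one can also compare the closed form against a numerical integration of the original ODE; here is a SciPy sketch (the function names are mine):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, state):                     # state = (y, y'),  with y'' = 1/cosh(y')
    y, yp = state
    return [yp, 1.0 / np.cosh(yp)]

sol = solve_ivp(rhs, (0, 1), [1.0, 0.0], dense_output=True, rtol=1e-10, atol=1e-12)
closed_form = 1.0 * np.arcsinh(1.0) - np.sqrt(2.0) + 2   # y(1) from the formula above
print(sol.sol(1.0)[0], closed_form)    # the two values should agree closely
```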
|
H: Family vs. Child when a girl is chosen, what is the probability that the second child is a girl, textbook clarification?
In a family with two children, what are the chances, if one of the children is a girl, that both children are girls?
I was able to understand the difference between selecting a child and a family, for the conditional probability question above. However, there is a follow-up paragraph about this question that I do not understand. In my textbook Fundamentals of Probability with Stochastic Processes, it says
Is the sentence "In fact, the probability that a family with two girls is selected equals
twice the probability that a family with one girl is selected." referring to when a child is selected, rather than when a family itself is selected? The entire paragraph was very confusing -- I did not understand the context that the author is explaining. How can P(girl, girl) = 1/2? In what context is the last sentence from?
AI: Suppose there are four families with two children each. One family has a (boy,boy), one a (boy,girl), a (girl,boy) and a (girl,girl). Out of these 8 children, you pick one, which turns out to be a girl. There are four ways to pick a girl, and in half the cases that girl is from the (girl,girl) family.
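A short Python simulation sketch that contrasts the two sampling schemes may help: picking a child at random gives $1/2$, while conditioning on the family having at least one girl gives $1/3$.

```python
import random

trials = 10**6
child = [0, 0]    # [girl picks, girl picks coming from a (girl, girl) family]
family = [0, 0]   # [families with at least one girl, families with two girls]

for _ in range(trials):
    kids = [random.choice("BG"), random.choice("BG")]
    if random.choice(kids) == "G":
        child[0] += 1
        child[1] += kids == ["G", "G"]
    if "G" in kids:
        family[0] += 1
        family[1] += kids == ["G", "G"]

print(child[1] / child[0], family[1] / family[0])   # roughly 0.5 and 0.333
```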
|
H: If $X_1, \ldots, X_t$ is a sequence of iid r.v.'s, what is the expected $T=t$ such that $X_t>X_1$?
If $X_1, \ldots, X_t$ is a sequence of iid r.v.'s that are say, indexed by time, what is the expected $T=t$ such that $X_t>X_1$? Is it enough to know they are i.i.d or do we need a distribution on them?
AI: (Here I am assuming you have an infinite iid sequence $X_1, X_2, \dots$ and are setting $T = \min\{t : X_t > X_1\}$.)
Let $F$ be the cdf of the $X_t$s. Conditioning on $X_1$, the events $\{X_t > X_1\}$ are conditionally independent and have conditional probability $P(X_t > X_1 \mid X_1) = 1-F(X_1)$. So $T$ conditionally has a geometric distribution (plus one since $t=2$ is the first possible success) with success probability $1-F(X_1)$, thus $E[T \mid X_1] = 1 + \frac{1}{1-F(X_1)}$. This gives
$$E[T] = 1 + {E}\left[\frac{1}{1-F(X_1)}\right].$$
When $X_1$ has a continuous distribution, $F(X_1) \sim U(0,1)$ and so this yields
$$E[T] = 1 + \int_0^1 \frac{1}{1-x}\,dx = \infty.$$
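Another way to see the divergence for continuous distributions: $P(T>n)=P(X_1=\max(X_1,\dots,X_n))=1/n$ by symmetry, so $E[T]=\sum_{n\ge 0}P(T>n)$ behaves like the harmonic series. A small NumPy sketch is consistent with this:

```python
import numpy as np

rng = np.random.default_rng(0)
trials, n = 200_000, 50
x = rng.random((trials, n))
p_hat = np.mean(x[:, 0] == x.max(axis=1))   # estimate of P(X_1 is the largest of the first n)
print(p_hat, 1 / n)                          # both roughly 0.02
```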
|
H: Find the volume between $z=\sqrt{x^{2}+y^{2}}$ and $x^2+y^2+z^2=2$ in spherical cordinates
I am asked to find the volume of the region trapped above the cone $z=\sqrt{x^{2}+y^{2}}$ and below the sphere $x^2+y^2+z^2=2$.
When I checked the solution I noticed that it was written as $$V=\int_{0}^{2 \pi} \int_{0}^{\frac{\pi}{4}} \int_{0}^{\sqrt{2}} r^{2} \sin \theta \,d r \,d \theta \,d \varphi$$ and my question is why the bounds of $\theta$ go from $0$ to $\frac{\pi}{4}$ and not from $0$ to $\pi$.
Why is $0$ to $\pi$ wrong? I just can't picture the scenario in my head.
AI: If you sketch the region (the "ice-cream-cone" shape above the cone and inside the sphere), you can see that the polar angle $\theta$ only runs from $0$ to $\frac{\pi}{4}$.
We can also obtain this $\frac{\pi}{4}$ algebraically if our change of variable is:
$x =r\sin(\theta)\cos(\varphi)$
$y =r\sin(\theta)\sin(\varphi)$
$z=r\cos(\theta)$
we will obtain $z=\sqrt{x^{2}+y^{2}}\Rightarrow r\cos(\theta)=r\sin(\theta)\Rightarrow \tan(\theta)=1\Rightarrow\theta=\frac{\pi}{4}$
Also, $\theta$ can never range over $[0,2\pi]$: with this convention the polar angle satisfies $\theta \in [0,\pi]$, and it is the azimuthal angle $\varphi$ that runs over $[0,2\pi]$. Here the region only reaches down to the cone, so $\theta$ already stops at $\frac{\pi}{4}$.
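If it helps, here is a short sketch (SymPy plus a Monte Carlo estimate) confirming that these bounds give the volume of the region above the cone and inside the sphere:

```python
import numpy as np
from sympy import symbols, sin, sqrt, pi, integrate

r, t, p = symbols('r t p')
V = integrate(r**2 * sin(t), (r, 0, sqrt(2)), (t, 0, pi/4), (p, 0, 2*pi))
print(V)   # expected: 4*pi*(sqrt(2) - 1)/3, about 1.735

rng = np.random.default_rng(0)
pts = rng.uniform(-np.sqrt(2), np.sqrt(2), size=(10**6, 3))
x, y, z = pts.T
inside = (x**2 + y**2 + z**2 <= 2) & (z >= np.sqrt(x**2 + y**2))
print(inside.mean() * (2 * np.sqrt(2))**3)   # should be close to the value above
```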
|
H: "Pedantic" derivation of geodesic equation using pullback bundles
I'm trying to get more comfortable with manipulations involving connections and vector fields so I've tried to derive the geodesic equations without having to resort to any familiarities using standard calculus, everything computed "properly" from the definitions.
For a Riemannian manifold $(M,g)$ I have a curve $\gamma : \mathbb{R} \rightarrow M$ which in local coordinates can be written as $\gamma(t) = \left(x^1(t), \ldots, x^n(t)\right)$. If I wish $\gamma(t)$ to be a geodesic then I want its tangent vector to be auto-parallel.
The tangent vector is given by the pushforward of coordinate vector field on $\mathbb{R}$ $``\hspace{01mm}\dot{\gamma}(t)\hspace{-0.5mm}" := \gamma_*\left(\frac{\partial}{\partial t}\right) = \dot{x}^i(t) \frac{\partial}{\partial x^i}$
I want $\nabla_\dot\gamma \dot{\gamma} = 0$, but this expression is misleading since the vector field $\dot{\gamma}(t)$ only exists along the image of $\gamma(t)$, but we can consider the pullback of $M$ by $\gamma$ and take the connection and vector bundle with us. If $\nabla$ is the Levi-Civita connection on $(M,g)$ denote $\widetilde{\nabla}$ as its pullback connection by $\gamma$
Then $\gamma$ is geodesic if $\widetilde{\nabla}_\frac{\partial}{\partial t}\gamma_*\left(\frac{\partial}{\partial t}\right) = 0$. At this point I start to get stuck, I have the following definition from a worked exam question that inspired me to do this exercise:
I'm not quite sure if I'm in the lucky situation where $\gamma_*\left(\frac{\partial}{\partial t}\right)$ is already of the form $v \circ u$, and I'm not so sure what this even means, if my bundle is the tangent bundle of $M$, then my sections $e_i = \frac{\partial}{\partial x^i}$ are vector fields, how does one compose a vector field with a map?
I think this has something to do with where we are evaluating $\gamma_*\left(\frac{\partial}{\partial t}\right)f = \left.\dot{x}^i(t) \frac{\partial f}{\partial x^i}\right|_{\gamma(t)}$, that is, $e_i = \left.\frac{\partial}{\partial x^i}\right|_p$ for $p \in M$ whereas $e_i \circ \gamma = \left.\frac{\partial}{\partial x^i}\right|_{\gamma(t)}$. I'm not sure how to properly justify this but it certainly feels more correct that $\gamma_*\left(\frac{\partial}{\partial t}\right)$ should be "evaluating" on $\gamma(t)$ rather than any old $p$ since the whole point of this pullback stuff was to differentiate along the curve.
If we accept the above handwaving then my calculation is as follows:
$$\widetilde{\nabla}_\frac{\partial}{\partial t}\gamma_*\left(\frac{\partial}{\partial t}\right) := \nabla_{\gamma_*\left(\frac{\partial}{\partial t}\right)}\gamma_*\left(\frac{\partial}{\partial t}\right)
= \nabla_{\dot{x}^i(t) \frac{\partial}{\partial x^i}}\dot{x}^j(t) \frac{\partial}{\partial x^j}$$
Using $C^\infty(M)$-linearity of a connection in the lower argument and the Leibniz rule gives
$$ = \dot{x}^i(t) \nabla_{\frac{\partial}{\partial x^i}}\dot{x}^j(t) \frac{\partial}{\partial x^j} = \dot{x}^i(t)\frac{\partial}{\partial x^i}\left(\dot{x}^j\right)\frac{\partial}{\partial x^j} + \dot{x}^i(t)\dot{x}^j(t)\nabla_{\frac{\partial}{\partial x^i}}\frac{\partial}{\partial x^j}$$
The second term is $\dot{x}^i(t)\dot{x}^j(t)\Gamma_{ij}^k\frac{\partial}{\partial x^k}$ which starts to look on the right tracks, but I have no idea what to do to the first term to get a second time derivative, and if my approach is even correct.
Apologies for the wall of equations, but I wanted to get down all my thoughts and where my confusions lie, I am looking for how to finish the derivation and an explanation of all this stuff with pullback bundles and correct any misunderstandings I have. Thanks in advance.
AI: This is a good question. Here's how to do it: given coordinates $(x^1,\ldots, x^n)$ around (some particular point) $\gamma(t)$, where $\gamma\colon I \to M$, we have that $$\dot{\gamma}(t) = \sum_{i=1}^n \dot{x}^i(t)\frac{\partial}{\partial x^i}\bigg|_{\gamma(t)}$$for all $t \in I$. Then indeed $\nabla_{\dot{\gamma}(t)}\dot{\gamma}$ does not immediately makes sense, but we have the pull-back bundle $\gamma^*(TM) \to I$, and a connection $\gamma^*\nabla$. Then $\partial/\partial t$ is a vector field on the "base manifold" $I$, and $\gamma_\ast(\partial/\partial t) = \dot{\gamma}$, and this is what allows us to use the defining property of $\gamma^*\nabla$: $$\begin{align}\frac{D\gamma'}{{\rm d}t}(t) &= (\gamma^*\nabla)_{(\partial/\partial t)|_t}(\dot{\gamma}) = (\gamma^*\nabla)_{(\partial/\partial t)|_t}\left(\sum_{j=1}^n \dot{x}^j \left(\frac{\partial}{\partial x^j}\circ \gamma\right)\right) \\ &= \sum_{j=1}^n \ddot{x}^j(t) \frac{\partial}{\partial x^j}\bigg|_{\gamma(t)}+ \sum_{j=1}^n \dot{x}^j(t) (\gamma^*\nabla)_{(\partial/\partial t)|_t}\left(\frac{\partial}{\partial x^j}\circ \gamma\right) \\ &= \sum_{k=1}^n \ddot{x}^k(t) \frac{\partial}{\partial x^k}\bigg|_{\gamma(t)} + \sum_{j=1}^n \dot{x}^j(t) \nabla_{\dot{\gamma}(t)}\frac{\partial}{\partial x^j} \\ &= \sum_{k=1}^n \ddot{x}^k(t) \frac{\partial}{\partial x^k}\bigg|_{\gamma(t)} + \sum_{j=1}^n \dot{x}^j(t) \nabla_{\sum_{i=1}^n \dot{x}^i(t) (\partial/\partial x^i)|_{\gamma(t)}}\frac{\partial}{\partial x^j} \\ &= \sum_{k=1}^n \ddot{x}^k(t) \frac{\partial}{\partial x^k}\bigg|_{\gamma(t)} + \sum_{i,j=1}^n \dot{x}^i(t)\dot{x}^j(t) \nabla_{(\partial/\partial x^i)|_{\gamma(t)}}\frac{\partial}{\partial x^j} \\ &= \sum_{k=1}^n \ddot{x}^k(t) \frac{\partial}{\partial x^k}\bigg|_{\gamma(t)} + \sum_{i,j,k=1}^n \Gamma_{ij}^k(\gamma(t))\dot{x}^i(t)\dot{x}^j(t) \frac{\partial}{\partial x^k}\bigg|_{\gamma(t)} \\ &= \sum_{k=1}^n \left(\ddot{x}^k(t) + \sum_{i,j=1}^n \Gamma_{ij}^k(\gamma(t))\dot{x}^i(t)\dot{x}^j(t)\right)\frac{\partial}{\partial x^k}\bigg|_{\gamma(t)}.\end{align}$$
|
H: limit of subsequence where $X_n - X_{n-1}\rightarrow 0$.
suppose $X_{n}$ is a sequence of real numbers such that $X_{n} - X_{n-1} \rightarrow 0$.
prove that the set of subsequential limits is empty, a single point, or an interval.
.
I know that the set of subsequential limits is the set of all limits of subsequences of $\{P_{n}\}$, $n=1,2,\dots$
{$P_{n}$} is the sequence in metric space (X,d).
.
My effort in this regard is as follows:
Suppose it has two such limit points, say $a$ and $b$.
We want to prove that all points between these two points are also limit points.
Consider a point between these two points, say $c$ $(a<c<b)$.
Now we have to consider the radius of the neighborhood around this point.
.
Now I can not calculate this radius correctly and I do not know how the rest of the question will be proved.
Please help me!
AI: Suppose there are $J$ and $M$ such that $J<M$ and they both are limits of some subsequences. Suppose then, there exist $K$ and $L$ such that $J<K<L<M$ and no $x\in (K,L)$ is a limit point of any subsequence. This contradicts assumptions:
if $J$ is a limit point of some subsequence, then there must be infinitely many terms in $(X_i)_{i\in \mathbb N}$ belonging to any, arbitrarily small neighborhood of $J$;
similary there must be infinitely many terms arbitrarily close to $M$;
so, frankly speaking, the sequence must at least bounce between $J$ and $M$, and it must do so infinitely many times;
however, convergence of differences to $0$ implies that for any $\varepsilon>0$ there exists an index $m$ such that for each $i>m$ differences are smaller than $\varepsilon$: $|X_i-X_{i-1}|<\varepsilon$;
but for $\varepsilon < L-K$ it would force some – and actually infinitely many – terms of the sequence to fall inside the forbidden $(K,L)$ interval.
So, if there exist subsequences in $(X_i)_{i\in \mathbb N}$ with different limits $J<M$, then each point of interval $[J,M]$ is an accumulation point of the sequence, in other words terms of $(X_i)$ are dense in the interval, hence each point in the interval is a limit of some subsequence.
|
H: Computing the quantity $ \frac{{x}\cdot{y}}{{\|x\|\|y\|}}$ in terms of $a$ where $x=(1,0)$ and $y=(a,-2)$
Let $x=(1,0)$ and $y=(a,-2)$ be two vectors $ℝ^2$, where $a$ is a real number. Then compute the quantity
$$
\frac{{x}\cdot{y}}{{\|x\|\|y\|}}$$
in terms of $a$.
My work so far:
$$x\cdot y=1\cdot a+0\cdot(-2)=a$$
And so
$$\|x\|=\sqrt{{1^2}+{a^2}}=1+a$$
$$\|y\|=\sqrt{{0^2}+{(-2)^2}}=2$$
Thus
$$\cos(\theta)\frac{a}{\sqrt{{2a+2}}}$$
$$\theta=\cos^{-1}\frac{a}{\sqrt{2a+2}}$$
And I'm stuck here. Where did I go wrong to compute the quantity in terms of a? Would there be any better ways of tackling this?
AI: Your computation that $\;\mathbf x\cdot\mathbf y=a\;$ was okay, but
$||\mathbf x||=||(1,0)||=\sqrt{1^2+0^2}=1,$ and $||\mathbf y||=||(a,-2)||=\sqrt{a^2+(-2)^2}=\sqrt{a^2+4},$
so $\dfrac{\mathbf{x}\cdot\mathbf{y}}{\mathbf{||x||||y||}}
= \dfrac a{\sqrt{a^2+4}}.$
Also, as pointed out in the comments, you should be aware that
$\sqrt{1^2+a^2}$ does not generally equal $1+a$. (To see that, try squaring them both.)
|
H: The neighborhood $U$ of the identity generates a connected Lie group $G$. Proof check.
I want to check if my proof is correct.
Consider the subgroup $H$ of $G$ which generated by $U$. Then it's enough to show that $H=G$. But, if $H\neq G$, then $G$ can be written as a disjoint union of cosets $gH$ i.e. $G=\cup_{g\in G} gH$. So, it's enough to show that $gH$ is open since in that case we will write $G$ as a disjoint union of two non-empty open sets $H$ and $\cup_{g\neq e\in G}gH$ i.e.
$$G=H\cup(\cup_{g\neq e\in G}gH)$$
Which contradicts the fact that $G$ is connected.
To show that $gH$ is open it's enough to show that $H$ is open since $gH$ is just the image of $L_g(H)$ where $L_g:G\to G$ is an automorphism of $G$ given by $L_g(a)=ga$. But, $H$ is open since $H=\cup_{n\in\mathbb{N}} U^n$.
Does it work?
AI: What you have proved is that connected Lie groups have no proper, non-trivial, open subgroups. Which is true. And then applied it to the special case $H=\langle U\rangle$. I see no obvious problem with it.
|
H: Limit Problem Involving Number Sets
Let $\mathbb{N} = \left\{ 1, 2, 3, ... \right\}$.
For each $n \in \mathbb{N}$, let $A_n$ be a finite set of real numbers.
Assume $\forall m, n \in \mathbb{N}, ~m \neq n \Rightarrow A_m \cap A_n = \emptyset~$.
Assume $\forall \varepsilon > 0, \exists x \in \mathbb{R}, (\exists n \in \mathbb{N}, x \in A_n) \wedge 0 < |x - 7| < \varepsilon$.
I can tell that as $\varepsilon \rightarrow 0,~ x \rightarrow 7$.
Since this predicate can satisfy any $\varepsilon > 0$, there are an infinite number of elements $x$ that approach 7 but never reach 7.
Since all the sets are finite, the only way $x \rightarrow 7$ is if $n \rightarrow \infty$, where $x \in A_n$.
But so far, I can't prove this formally.
How do I prove $\displaystyle\lim_{x \rightarrow 7} \! ~n = \infty$ ?
AI: Let $x_N$ denote the element of $\bigcup\limits_{i=1}^N A_i$ closest to (but not equal[1]) $7$, and $\varepsilon_N = |x_N - 7|$.
Then for any $\varepsilon \in (0, \varepsilon_N)$ there is no $x$ in first $N$ sets which differs less than by $\varepsilon$ from $7$.
In other words, one must take more and more $A_i$ sets to find better and better approximation of $7$ in them.
Note 1. See the comment by Brian M. Scott.
|
H: Password counting: number of $8$-character alphanumeric passwords in which at least two characters are digits
I had an exam yesterday and wanted to confirm if I am correct, one of the questions was:
Suppose there is a string that has 8 characters composed of alphanumeric characters (A-Z, 0-9); how many combinations are there where at least 2 of the characters are digits?
I calculated that there are 36 characters (26 letters and 10 digits) and, as the length of the string is 8, $36^8$ combinations. If I wanted to calculate how many of these combinations contain at least one digit: $36^8-(36-26)^8$. So, to calculate the answer for at least two digits, is it correct to state that the formula is $36^8-(10^8+10^8)$?
AI: Consider binomial expansion of
$$(26+10)^8=\sum\limits_{k=0}^8 {8\choose k}\,10^k\, 26^{8-k}$$ where each term corresponds to the number of passwords with exactly $k$ digits and $8-k$ letters.
So your answer will be (as mentioned in Alex's answer) $$36^8-26^8-{8\choose 1}\cdot 10\cdot 26^7.$$
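A one-off Python check that this closed form agrees with summing the binomial terms for $k\ge 2$:

```python
from math import comb

direct = 36**8 - 26**8 - comb(8, 1) * 10 * 26**7
by_terms = sum(comb(8, k) * 10**k * 26**(8 - k) for k in range(2, 9))
print(direct, by_terms, direct == by_terms)   # both counts should match, and print True
```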
|
H: If $f,g\in\mathcal C^1[0,1],\,f$ monotone, and $g(x)>g(1)=g(0)$ on $(0,1)$, then $\int_0^1 f(x)g'(x)\,dx=0$ if and only if $f$ is constant
The Problem: Let $f,g$ be continuously differentiable on $[0,1],\,f$ monotone, and $g(x)>g(0)=g(1)$ on $(0,1).$ Prove that
$$\int_0^1 f(x)g'(x)\,dx=0\quad\text{if and only if }f\text{ is constant.}$$
My Thoughts: I first try the easy direction. So suppose that $f$ is constant, hence there is some $c\in\mathbb R$ such that $f(x)=c$ for all $x\in[0,1]$. Then the fundamental theorem of calculus implies that
$$\int_0^1 f(x)g'(x)\,dx=\int_0^1 cg'(x)\,dx=c[g(1)-g(0)]=0.$$
However, I am having difficulty with the other direction. I tried applying integration by parts in the following way
$$0=\int_0^1 f(x)g'(x)\,dx=g(0)\left[f(1)-f(0)\right]-\int_0^1 f'(x)g(x)\,dx.$$
Then the Mean Value Theorem implies that there is some $d\in(0,1)$ such that
$$f'(d)=\frac{1}{g(0)}\int_0^1 f'(x)g(x)\,dx.$$
I did the above with the idea of showing that $f(1)=f(0)$, which would yield the conclusion. But, I have been stuck for a long time at this point.
Could anyone please give me a hint on how to get going from the point I am at, or if the above is not a correct path, just a small hint on how to start on the right path?
Thank you for your time, and really appreciate all feedback.
AI: You're almost there. Since $f$ is monotone, either $f'\geq 0$ or $f'\leq 0$ on all of $[0,1]$. Let us say the latter is true (otherwise replace $f$ by $-f$). Then, since $g(x)\geq g(0)$ and $f'(x)\leq 0$, we have $f'(x)g(x)\leq f'(x)g(0)$ on $[0,1]$, and
\begin{align*}
0&=g(0)[f(1)-f(0)]-\int_0^1 f'(x)g(x)dx\\
&\geq g(0)[f(1)-f(0)]-\int_0^1f'(x)g(0)dx\\
&=0\end{align*}
so the inequality in the middle is actually an equality. This means that
$$\int_0^1f'(x)g(x)dx=\int_0^1f'(x)g(0)dx$$
or equivalently
$$\int_0^1 f'(x)(g(x)-g(0))dx=0.$$
The function $x\mapsto f'(x)(g(x)-g(0))$ is non-positive on $[0,1]$ and has integral $0$, so it must be $0$ on $[0,1]$. Since $g(x)\neq g(0)$ on $(0,1)$ then $f'=0$ on $(0,1)$, and by the Mean Value Theorem $f$ is constant.
|
H: What does it mean to say data points in a complementary cumulative distribution plot are correlated?
While studying, I came across the following quote:
"A more serious disadvantage is that successive points on a cumulative distribution plot are correlated — the cumulative distribution function in general only changes a little from one point to the next, so adjacent values are not at all independent."
What does it mean to say that data points on a plot are correlated?
AI: Newman is not being precise here, but the way I understand it is that the error on the empirical cCDF (relative to the true cCDF value) will be correlated for nearby points. Whereas for a histogram, the error of the empirical frequency of nearby bins are independent, except for some global effects. (The error in the complementary cumulative is related to the sum of the errors above that point in the histogram, so you can see why it is correlated.)
As he indicates, this makes least squares regression a suboptimal method for extracting the power law, though there are deeper problems than this.
|
H: Non Abelian Normal Field Extension with Abelian Subextensions
It is known that a subextension $L/F/K$ of an abelian (Galois) field extension $L/K$ is also abelian. The converse is not true: even when assuming that $L/K$ is Galois and $L/F$ and $F/K$ are abelian, $L/K$ might not be abelian.
I am looking for an explicit counterexample of such a Galois non-abelian $L/K$ with abelian subextensions $L/F$ and $F/K$. I understand that these cases might arise when the corresponding extensions of the Galois groups are something like
$$
1\rightarrow C_3 \rightarrow S_3,
$$
but I couldn't find such a construction.
AI: Let $L$ be the splitting field of the polynomial $f=x^3-2$ over $\mathbb{Q}$. This polynomial is irreducible, its discriminant is negative and in particular not a square of a rational number. So $\operatorname{Gal}(L/\mathbb{Q})\cong S_3$. Also, it is easy to see that $L=\mathbb{Q}\left(\sqrt[3]{2}, e^{\frac{2\pi i}{3}}\right)$. So $M=\mathbb{Q}\left(e^{\frac{2\pi i}{3}}\right)$ is an intermediate field, the extensions $L/M, M/\mathbb{Q}$ are Galois extensions and the Galois groups have order less than $6$, so they are Abelian.
|
H: Give a function uniformly continuous with respect to one metric and not with respect to another, while both induce the same topology
I would very much appreciate an example to the above question or some hints to construct one.
Such a function should not exist in normed vector spaces: if the topologies induced by two norms are equal, the norms are equivalent, and vice versa.
AI: Try $X=(0,1)$ (the open interval), $f(x)=1/x$, and two metrics on $X$, $d_1(x,y)=\lvert\frac{1}{x}-\frac{1}{y}\rvert$, and $d_2(x,y)=\lvert x-y\rvert$.
$f(x)$ is uniformly continuous under $d_1$; for $\epsilon>0$, choose $\delta = \epsilon$, then for any $x,y$ such that $d_1(x,y)<\delta$ $\lvert f(x)-f(y) \rvert = \lvert 1/x-1/y \rvert < \delta =\epsilon$.
Clearly $f(x)$ is not uniformly continuous on $d_2$.
|
H: Constructing a locally integrable function
Let $\epsilon\in(0,1)$ and $F^{\epsilon}:\mathbb{R}^2\to\mathbb{R}$ defined by
$$F^{\epsilon}(x)=\log(|x|^2+\epsilon^2)$$
How can I construct a $g \in L^1_{loc}({\mathbb{R}}^2)$ such that
$$|F^{\epsilon}(x)|\leq g(x), \ \ \forall x\in \mathbb{R}^2, \ \ \epsilon\in(0,1). $$
If $||x||>1$, it's easy to see that $g(x)=\log(|x|^2+1)$ do the job. My problem is near the origin.
AI: What about $g(x) =|\log(|x|^2)|+\log(2),$ for $|x|\leq 1$?
Clearly $g(x)\in L^1_{loc}({\mathbb R}^2)$ (use polar coordinates to check) and $$|F^{\epsilon}(x)|\leq g(x),\forall x{\rm ~with~}|x|\leq 1,\epsilon\in (0,1).$$
|
H: General solution to eigenvalue problem when $\lambda $ is negative.
I have a simple one for you guys.
So I was reading this PDEs book which regularly discusses the eigenvalue problem
$F''(x)+\lambda F(x)=0$.
For $\lambda =-\mu^2 $, i.e negative eigenvalues, that is if
$F''(x)-\mu^2F(x)=0$
then the general solution is
$F(x)=Ae^{\mu x}+Be^{-\mu x}\quad\quad(1)$.
The author then expresses the solution as
$F(x)=C\cosh\mu x+D\sinh\mu x\quad\quad (2)$
for convenience in applying the BCs.
How did he convert the exponential solution (1) into the solution (2) involving hyperbolic functions ?
Thanks.
AI: Let $Ae^{\mu x}+Be^{-\mu x}\equiv C\frac{e^{\mu x}+e^{-\mu x}}{2}+
D\frac{e^{\mu x}-e^{-\mu x}}{2}$ then
$$\begin{cases}
A=\frac{C+D}{2}\\B=\frac{C-D}{2}
\end{cases}$$
or, in an alternative notation
$$\begin{cases}
A+B=C\\A-B=D
\end{cases}$$
so the arbitrary constants $A,B$ in (1) and the constants $C,D$ in (2) are different constants, related by the linear change above.
|
H: Tournament of 32 teams, highest rank always wins
$32$ teams, ranked $1$ through $32$, enter a basketball tournament that works as follows: the teams are randomly paired and in each pair, the team that loses is out of the competition. The remaining $16$ teams are randomly paired, and so on, until there is a winner. A higher-ranked team always wins against a lower-ranked team. If the probability that the team ranked $3$ (the third-best team) is one of the last four teams remaining can be written in simplest form as $\frac{m}{n}$, compute $m+n$.
(Source: PUMAC 2016 Combinatorics A)
My attempt:
The only way team $3$ doesn't get in the top $4$ is if it gets beaten by either team $1$ or $2$. We use casework and complementary counting.
Case 1: Team $3$ gets beaten by team $1$ or $2$ in the round of $32$ = $\frac{2}{31}$
Case 2: Team $3$ gets beaten by team $1$ or $2$ in the round of $16$ = $\frac{2}{15}$, but we also add the probability that $1$ and $2$ got matched up in the round of $32$. This is because there are two "subcases" in case $2$, so we add the probability of both. This is $\frac{1}{\binom{32}{2}}$ = $\frac{2}{15} + \frac{1}{496}$.
Case 3: Team $3$ gets beaten by team $1$ or $2$ in the round of $8$ = $\frac{2}{7}$, but we add the probability that $1$ and $2$ got matched up in the round of $16$. This probability is $\frac{1}{\binom{16}{2}}$ due to the same logic, but we have to multiply by $\frac{495}{496}$ because there is a $\frac{1}{496}$ chance that either $1$ or $2$ won't make it to the round of $16$. This is $\frac{2}{7}+\frac{1}{120} \cdot \frac{495}{496}$.
Adding and using complementary probability gets us an answer of $\frac{205777}{416640}$, so $m+n = 622417$.
However, the answer key makes this problem much simpler. Here's the explanation:
This is the same as putting the teams in a bracket-style tournament at random. The probability
that the teams ranked $1$ and $2$ are not in the same quarter of the draw as the team ranked $3$
is the relevant probability, and it is $\frac{24 \cdot 23}{31\cdot 30} = \frac{92}{155}, m+n = 247$.
How did they get such a simple probability? I'm also completely confused on how they got the numerator. Denominator I can understand, but I just can't figure out how they got the numerator. Is it from $4!$, and if so, how? Also, the wording is a bit unclear to me; they say "not in the same quarter of the draw as the team ranked $3$", which I'm not quite understanding. And why is my answer wrong? I used casework and complementary counting but where did I err? Thanks in advance.
AI: After you assign team number $3$ a slot in the draw, there are $31$ slots remaining. Of those, $7$ are in the same quadrant of the draw as team $3$ so $24$ are not. Assign team number $1$ to one of those $31$ slots, and you'll still be "in the ball game" $\frac{24}{31}$ of the time.
Now that you've assigned two teams (team numbers $1$ and $3$), you need to assign team number $2$. There are $30$ remaining slots. Assuming you're still in the ball game, $7$ of those remaining slots are in the same quadrant as team number $3$, and the remaining $23$ are not. Thus, assuming that team numbers $1$ and $3$ are in different quadrants, the probability that teams $2$ and $3$ also are in different quadrants of the draw is $\frac{23}{30}$.
You win if both probabilities come to pass and they're independent, so your final probability is $\frac{24 \cdot 23}{31 \cdot 30}$ reduced to lowest terms.
Your calculations of cases past Case $1$ are incorrect because you can't add the probability of being beaten by either team number $1$ or $2$ to the probability that team numbers $1$ and $2$ have already played each other. You have to multiply $\frac 27$ (in Case $2$) by the probability that team numbers $1$ and $2$ have not played each other, and then add that to the product of $\frac 17$ by the probability that they have.
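If it helps convince you, here is a short Monte Carlo sketch of the bracket argument (the better-ranked team has the smaller number, so the minimum of each pair advances):

```python
import random

def team3_in_final_four():
    slots = list(range(1, 33))
    random.shuffle(slots)                 # teams placed into a random bracket
    while len(slots) > 4:
        slots = [min(a, b) for a, b in zip(slots[::2], slots[1::2])]
    return 3 in slots

trials = 200_000
p_hat = sum(team3_in_final_four() for _ in range(trials)) / trials
print(p_hat, 92 / 155)                    # both roughly 0.594
```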
|
H: Finding the angle between vectors $\mathbf x$ and $\mathbf y$ in radians
Two unit vectors $\mathbf{x}$ and $\mathbf{y}$ in $\Bbb R^n$ satisfy $\mathbf{x}\cdot\mathbf{y}=\frac{\sqrt{2}}{2}$. How would I go about finding the angle, in radians, between $\mathbf{x}$ and $\mathbf{y}$?
As I don't know the $\mathbf{x}$ and $\mathbf{y}$ unit vectors, would the unit circle be useful here? For instance, using $\frac{\sqrt{2}}{2}$ and plugging those values into $\dfrac{\mathbf{x}\cdot\mathbf{y}}{\mathbf{\|x\| \|y\|}}$ to find the angle?
AI: No, I believe the unit circle is not really involved here.
It is simple. You already know the $cosinus$ of the angle $\theta$ between the two vectors. It is this expression:
$$cos(\theta) = \dfrac{\mathbf{x}\cdot\mathbf{y}}{\mathbf{||x||\cdot ||y||}}$$
Just plug in the numbers in this formula. Thus you get:
$$cos(\theta) = \frac{\sqrt{2}/2}{1.1}$$
And once you know that $a = cos(\theta) = \sqrt{2}/2$,
find $\theta = \arccos(a) = \arccos(\sqrt{2}/2) = \pi / 4$
|
H: Convergence and Comparison of Topology
Let $(X,\mathcal{T}_1)$ and $(X,\mathcal{T}_2)$ be a topological space endowed with two different topologies. If any convergent net $\{x_v\}$ in $(X,\mathcal{T}_1)$ is convergent in $(X,\mathcal{T}_2)$, does it imply that $\mathcal{T}_1\supseteq\mathcal{T}_2$?
AI: The question can be reworded in this way: suppose the identity mapping maps $\mathcal T_1$-convergent nets to $\mathcal T_2$-convergent nets. Does it follow that it is a continuous map $(X,\mathcal T_1)\to (X,\mathcal T_2)$?
This is obviously true if all the $\mathcal T_1$-limits of a given net are also its $\mathcal T_2$-limits (even if there are more $\mathcal T_2$-limits; it is enough to consider nets of $\mathcal T_1$-neighbourhoods), but in general, this need not be the case.
This is a (superficially) special case of another question asked here. In particular, the answer is yes if $\mathcal T_2$ is Hausdorff (or $T_2$, pardon the pun). In fact, I believe this is true even if $\mathcal T_2$ is just $T_1$.
To see this, take any $\mathcal T_1$-convergent net $(x_i)_{i\in I}$ and let $x$ be its $\mathcal T_1$-limit. We need to show that $x$ is also its $\mathcal T_2$-limit. Consider the net $(y_{i,j})_{i\in I,j\in \{0,1\}}$ where $y_{i,0}=x_i$ and $y_{i,1}=x$. Then clearly $(y_{i,j})_{i,j}$ is still a net $\mathcal T_1$-convergent to $x$, and if $\mathcal T_2$ is $T_1$, then it is not $\mathcal T_2$-convergent to any point other than $x$ (because it contains a cofinal net constant at $x$). Thus, since it is $\mathcal T_2$-convergent, it is $\mathcal T_2$-convergent to $x$. But then it follows that its cofinal subnet $(x_i)_i$ is also convergent to $x$, and we are done.
The answer is no in general: directly lifting the example given there, if you take $X=[0,1]$, take $\mathcal T_1$ to be the Euclidean topology and take $\mathcal T_2$ to be the topology such that the open sets are exactly $\{[0,1],\{1\},\emptyset\}$, then every net is $\mathcal T_2$-convergent to every point in $[0,1)$ (so, in particular, every $\mathcal T_1$-convergent net is trivially $\mathcal T_2$-convergent). But $\{1\}\in \mathcal T_2\setminus \mathcal T_1$.
The answer is also no if $\mathcal T_2$ is just $T_0$: if you take $X=\{0,1\}$, $\mathcal T_1=\{X,\emptyset,\{0\}\}$, $\mathcal T_2=\{X,\emptyset,\{1\}\}$, then again, all nets are convergent in both topologies, but there is no containment.
I am not sure whether there is a counterexample where $\mathcal T_1$ is Hausdorff (or even $T_1$) and $\mathcal T_2$ is $T_0$.
|
H: Can the functions be chosen so that they increase/decrease monotonically?
Assume we are in the interval [a,b] and we have a function $f\in L^1([a,b])$. Then since the continuous functions are dense in $L^1([a,b])$ we can choose a sequence $f_n$ of continuous functions such that they converge to $f$ in $L^1([a,b])$. I have two questions.
The first is easy. I assume we can choose a subsequence so that it also converges Lebesgue-a.e.?
This is more difficult. Can we choose the sequence such that $|f_m(x)|\le |f_{m+1}(x)|$ Lebesgue a.e?
AI: 1. That is true. This is often implied by an intermediate step of the proof of the completeness of $L^1$.
2. We show that such a choice is not always possible.
Choose a measurable set $E\subseteq[0,1]$ such that $0 < \frac{\operatorname{Leb}(E\cap[a,b])}{\operatorname{Leb}([a,b])}<1$ for any $0 \leq a < b \leq 1$, where $\operatorname{Leb}$ is the Lebesgue measure. An important consequence of this property is that, if $F$ is another measurable set such that $\operatorname{Leb}(E\setminus F)=0$, then $F$ is dense in $[0, 1]$.
Now we let $f = \mathbf{1}_{E} + 2\cdot\mathbf{1}_{[0,1]\setminus E}$. We claim that this $f$ serves as a counter-example.
Assume otherwise, so that there exists a sequence of continuous functions $f_n$ such that $|f_n|$ is increasing in $n$ a.e. and $f_n \to f$ in $L^1$. Since $|f_n| \to |f| = f$ in $L^1$, we may assume that $f_n$'s are non-negative. Also, by passing to a subsequence if necessary, we may assume that $f_n$ converges pointwise a.e. Then for each $n$, we have $f_n \leq f$ a.e. By the property of $E$, this implies that the set $\{f_n \leq 1 \}$ is dense in $[0, 1]$, and so, $f_n \leq 1$ on all of $[0,1]$ by the continuity of $f_n$. This contradicts the fact that $f \neq 1$ as elements in $L^1$, and therefore the claim is proved. $\square$
|
H: Evaluating $\lim\limits_{x\to \infty}\left(\frac{20^x-1}{19x}\right)^{\frac{1}{x}}$
Evaluate the limit $\lim\limits_{x\to \infty}\left(\dfrac{20^x-1}{19x}\right)^{\frac{1}{x}}$.
My Attempt
$$\lim_{x\to \infty}\left(\frac{20^x-1}{19x}\right)^{\frac{1}{x}}=\lim_{x\to \infty}\left(\frac{(1+19)^x-1}{19x}\right)^{\frac{1}{x}}\\=\lim_{x\to \infty}\left(1+\frac{x-1}{1·2}(19)+\frac{(x-1)(x-2)}{1·2·3}(19)^2+\cdots\right)^{\frac{1}{x}}$$
After this I could not proceed. The answer given is $20$.
AI: We can see that our limit is actually of the indeterminate form of ${\infty}^0$, which we can use L'hopital's on.
We will work with $y=(\frac{20^x-1}{19x})^{\frac{1}{x}}$ for now.
Taking the ln of both sides,
$$\ln(y) = \frac{1}{x}\cdot \ln(\frac{20^x-1}{19x})$$
We now take
$$\lim_{x\to \infty}\ln(y) = \lim_{x\to \infty}\frac{1}{x}\cdot \ln(\frac{20^x-1}{19x}) = \lim_{x\to \infty}\frac{1}{x}\cdot [\ln(20^x-1)-\ln(19x)]$$
We know $\frac{\ln(19x)}{x}\to 0$ as $x\to\infty$, and $\ln(20^x-1)=x\ln 20+\ln\!\left(1-20^{-x}\right)$, where the second term divided by $x$ also tends to $0$. So our expression
$$=\lim_{x\to \infty}\frac{1}{x}\cdot \ln(20^x)=\lim_{x\to \infty}\frac{1}{x}\cdot x\cdot \ln(20) = \ln(20)$$
Just to recap, we now have $\lim_{x\to \infty} \ln(y) = \ln(20)$.
We can say that $\lim_{x\to \infty} (\frac{20^x-1}{19x})^{\frac{1}{x}} = \lim_{x\to \infty} y = \lim_{x\to \infty} e^{\ln(y)}$
By (Why is $\lim\limits_{x\to\infty} e^{\ln(y)} = e^{\,\lim\limits_{x\to\infty} \ln(y)}$?),
We can say that $\lim_{x\to \infty} e^{\ln(y)} = e^{\lim_{x\to \infty} \ln(y)} = e^{\ln(20)} = 20$
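A quick numerical look (working with logarithms to avoid overflow) shows the expression creeping up toward $20$, although rather slowly:

```python
import math

for x in [10, 100, 1000, 10000]:
    log_val = (math.log(20**x - 1) - math.log(19 * x)) / x
    print(x, math.exp(log_val))   # tends to 20 as x grows
```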
|
H: If $ a-b \mid ax-by$, then $\gcd(x,y) \ne1$?
So is this true for positive integers $a,b,x,y>1$ with $a>b$ and $x>y$?
AI: Well $a-b|ax - bx$ and $a-b|ax-by$ so $a-b|(ax-bx) - (ax-by)=b(y-x)$ and ... why not?
So we can have $a-b \mid b$, for example $a=8,\ b=6$, and then $x,y$ can be anything.... say $x = 57,\ y=56$. Indeed $8-6\mid 8\cdot 57-6\cdot 56$.
And even if $\gcd(a,b) = 1$ we can still have $a-b\mid x-y$ with no issue. Let $x = 81$, $y=73$, $a=13$ and $b=9$. Why the heck not? $13-9\mid 13\cdot 81 -9\cdot 73$. Whoa! wait is that true? $13\cdot 81-9\cdot 73 = 4\cdot 81 + 9\cdot 81-9\cdot 73 =4\cdot 81 + 9\cdot 8$.... yep, seems to be true.
|
H: Proving finite-stabilizers of a tensor group action
Let $G$ be a group with subgroup $H$ of finite index. Let $X$ be a $G$-set, (ie. G acts on $X$), then we can define the tensor $G$-set $$G \otimes_H X := (G \times X) / \simeq $$ where the equivalence relation is defined as $(gh,x) \simeq (g,hx)$ for all $g \in G, h \in H, x \in X$. There is a $G$-action on this set: $$ g' (g, x) := (g'g, x)$$
I am trying to verify the following statement: if $X$ has finite $G$-stabilizers, then so does $G\otimes_H X$. Here is what I've done so far:
Suppose for contradiction that $(g, x) \in G \otimes_H X$ has infinitely many $G$-stabilizers. Since $H \subset G$ is of finite index, we can choose some finite transversal $\left \{ s_1, s_2, ... , s_n \right \} \subset G$, so that for any $g \in G$, there exists some $s_k$ and $h \in H$ such that $g = s_k h$. Since the cosets partition $G$, then each $G$-stabilizer of $(g, x)$ lies in some coset $s_j H$. Furthermore, since there are finitely many cosets and infinitely many $G$-stabilizers, then there must be some $j \leq n$ such that there are infinitely many $G$-stabilizers in the coset $s_jH$. So we have $g_1, g_2, g_3, ... \in s_jH$ such that $(g_i g, x) \simeq (g, x) $. Furthermore. we can express $g = s_k h$ so that these equivalences become: $ (g_i s_k, hx) \simeq (g_i s_k h, x) \simeq (s_k h, x) \simeq (s_k, hx) $, so replacing $x$ by $hx$, we can make the following statement:
There exists $s_j$ and $s_k$ in the transversal such that there exists $g_1, g_2, g_3, ... \in s_j H$ and $(g_i s_k, x) \simeq (s_k, x) \hspace{4pt} \forall i \in \mathbb{N}$
Or equivalently:
there exists $s_j$ and $s_k$ in the transversal such that there exists a sequence $h_1, h_2, h_3, ... \in H$ with $$(s_j h_1 s_k, x) \simeq (s_j h_2 s_k, x) \simeq (s_j h_3 s_k, x) \simeq ... \simeq (s_k, x)$$
Because $s_k$ can only be expressed as a product of elements in $G$ and $H$ in the trivial way (ie. $s_k = s_k * 1$), it seems as though these elements $ (s_k, x)$ are in some sense $\textit{irreducible}$, and this seems like it could offer up some sort of contradiction, but I'm not really sure how to conclude. Does this look like the right approach in the first place? Any help would be appreciated.
AI: You need to prove that every pair in the tensor product has a finite stabilizer. You have established that every pair up to equivalence has the form $(s_i, x)$. Now suppose that this point has infinite stabilizer $g_1,g_2,...$ Since there are only finitely many cosets we can assume that all $g_k$ are in the same coset $s_jH$, $g_k=s_jh_k$, $k=1,2,...$. then $g_k(s_i, x)=(s_j h_ks_i, x)=(s_is_m, h_k'x)=(s_i,hh_k'x)$ where $h_ks_i=s_mh_k'$ (again we can assume the same $m$ for all $k$) and $s_js_m=s_ih$ for some $h\in H$. But this means that $hh_k'$ stabilizes $x$ in $X$ for all $k$. Since the stabilizer of $x$ is finite, we have many equalities $h_k'=h_l'$, a contradiction.
|
H: Is a simple and solvable group cyclic?
Is not enough with the simple hypothesis?
If $g\neq e$ then $\{e\}\neq \left\langle g \right\rangle$; also, $\left\langle g \right\rangle$ is a normal subgroup of $G$. If $G$ is a simple group and $\left\langle g \right\rangle$ is not the trivial subgroup, then $G=\left\langle g \right\rangle$, thus $G$ is cyclic.
I don't see where is the solvable needed, or what my mistake is.
AI: Nah there are simple groups that are not cyclic. The flaw in your argument is that it's not necessarily true that a cyclic subgroup $\langle g \rangle$ is normal in the group.
Proving the claim in your titles comes down to unraveling definitions. If you have a simple group $G$, then $G$ only has a composition series $\{e\} \hookrightarrow G$. But if $G$ is solvable, then by definition $G/\{e\} \cong G$ must be abelian. Since $G$ is abelian any subgroup $\langle g \rangle$ must be normal, but since $G$ is simple $\langle g \rangle$ can't be a proper subgroup, so $G = \langle g \rangle$ is cyclic.
|
H: Inequality with exponent
I was trying to prove the inequality $\exp(\frac{1}{\pi})+\exp(\frac{1}{e})\geq 2 \exp(\frac{1}{3})$. My attempt was to use the AM-GM inequality: $\exp(\frac{1}{\pi})+\exp(\frac{1}{e})\geq2 \exp(\frac{1}{2\pi e})$.
AI: Since $$\pi+e<6,$$ by AM-GM and Cauchy-Schwarz we obtain:
$$e^{\frac{1}{\pi}}+e^{\frac{1}{e}}\geq2\sqrt{e^{\frac{1}{\pi}}\cdot e^{\frac{1}{e}}}=2\sqrt{e^{\frac{1}{\pi}+\frac{1}{e}}}\geq2\sqrt{e^{\frac{4}{\pi+e}}}>2\sqrt{e^{\frac{4}{6}}}=2e^{\frac{1}{3}}.$$
|
H: evaluation of infinite series expansion
$(1)$ How can i find $\displaystyle \sum^{\infty}_{n=0}b_{n}x^n,$ If
$$\sum^{\infty}_{n=0}b_{n}x^n=\bigg(\sum^{\infty}_{n=0}x^{n}\bigg)^2$$
$(2)$ How can i find $\displaystyle \sum^{\infty}_{n=0}c_{n}x^n,$ If
$$\bigg(\sum^{\infty}_{n=0}c_{n}x^n\bigg)\cdot \bigg(\sum^{\infty}_{n=0}x^{n}\bigg)=1$$
What i try::
For $(1)$
$$\sum^{\infty}_{n=0}b_{n}x^n=\frac{x^2}{(1-x)^2}=\frac{(x^2-2x+1)+2x-2+1}{(1-x)^2}$$
$$=1-\frac{1}{1-x}+\frac{1}{(1-x)^2}$$
I did not understand how can i solve further. Help me please
For $(2)$
$$\bigg(\frac{1}{1-x}\bigg)\cdot \sum^{\infty}_{n=0}c_{n}x^n=1$$
$$\sum^{\infty}_{n=0}c_{n}x^n=1-x$$
Help me please, Thanks
AI: HINT:
For $|x|<1$, $\sum_{n=0}^\infty x^n=\frac1{1-x}$ and $\sum_{n=0}^\infty nx^{n-1}=\frac1{(1-x)^2}$.
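If you want to check where the hints lead, here is a quick SymPy sketch (assuming SymPy is available):

```python
from sympy import symbols, series

x = symbols('x')
geo = 1 / (1 - x)                 # sum of x^n for |x| < 1
print(series(geo**2, x, 0, 6))    # 1 + 2*x + 3*x**2 + 4*x**3 + 5*x**4 + 6*x**5 + O(x**6)
print(series(1 / geo, x, 0, 6))   # 1 - x, so c_0 = 1, c_1 = -1 and all other c_n = 0
```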
|
H: Eigenvalues of $n^2 \times n^2$ matrix with $(n-1)^2$ along diagonal and $1$ or $1-n$ elsewhere depending on adjacencies.
I reduced a confounding and challenging problem to the task of proving an unweildy inequality. Luckily, I managed to reduce the inequality to proving that a certain quadratic form is positive semidefinite. This in turn, I observed was equivalent to a certain matrix having non-negative eigenvalues. Unfortunately, now I am stuck. The problem in its most reduced form is as such:
Write the integers $1, \dots, n^2$ in a square. Let $A_n$ be an $n^2
\times n^2$ matrix where $a_{ij} = \begin{cases} (n-1)^2, i = j \\ 1-n,
\, i,j \text{ adjacent} \\ 1,
else\end{cases}$ where "adjacent" is defined as being in the same row or column in the square that was constructed. Prove that all eigenvalues of $A_n$ are
non-negative.
We have $A_2 = \begin{bmatrix} 1 & -1 & -1 & 1 \\ -1 & 1 & 1 & -1 \\ -1 & 1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}, A_3 = \begin{bmatrix} 4 & -2 & -2 & -2 & 1 & 1 & -2 & 1 & 1 \\ -2 & 4 & -2 & 1 & -2 & 1 & 1 & -2 & 1 \\ -2 & -2 & 4 & 1 & 1 & -2 & 1 & 1 & -2 \\ -2 & 1 & 1 & 4 & -2 & -2 & -2 & 1 & 1 \\ 1 & -2 & 1 & -2 & 4 & -2 & 1 & -2 & 1 \\ 1 & 1 & -2 & -2 & -2 & 4 & 1 & 1 & -2 \\ -2 & 1 & 1 & -2 & 1 & 1 & 4 & -2 & -2 \\ 1 & -2 & 1 & 1 & -2 & 1 & -2 & 4 & -2 \\ 1 & 1 & -2 & 1 & 1 & -2 & -2 & -2 & 4\end{bmatrix}.$ I'm not gonna bother writing out $A_4$ or any larger matrices. It would take too much time to even input the matrix into Wolfram-Alpha.
We see that $A_2$ has rank $1$ and trace $4,$ so its eigenvalues are $4,0,0,0.$ Unfortunately, $A_3$ is not so easy to analyze, although we can see it has rank $\le 8$ and trace $36.$ Using https://matrixcalc.org/en/vectors.html, I found that it had eigenvalues $0,9$ with multiplicity $5, 4$ respectively. How could we show that in general, $A_n$ has eigenvalues $0, n^2$ with multiplicity $2n-1, (n-1)^2$ respectively? Would there be any way of arriving at this conjecture without a computer?
Update: Some info about the structure of the eigenvectors of $A_3.$ We will represent eigenvectors as $3 \times 3$ matrices for simplicity. For the eigenvalue $0,$ any row or column can be all $1$s with everything else being a $0.$ For $9,$ certain (not all) $2 \times 2$ rectangles with $1$ on one diagonal and $-1$ on the other will work. For example, $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}, \begin{bmatrix} 1 & -1 & 0 \\ 0 & 0 & 0 \\ -1 & 1 & 0 \end{bmatrix}$ correspond to the eigenvalues $0, 9$ respectively. It is easy to generalize the eigenvectors for $0$ to obtain $2n-1$ eigenvectors, but I don't see how the $(n-1)^2$ eigenvectors for $n^2$ generalize, and the fact that the eigenvectors for $9$ actually work lacks a natural explanation.
Second update: Let $J_n$ be the all ones matrix. We can write $A_3 = \begin{bmatrix} -2J' & J' & J' \\ J' & -2J' & J' \\ J' & J' & -2J' \end{bmatrix} = J'' - 3K$ where $J' = J-3I, J''$ is the all $J'$s block matrix of appropriate size, and $K$ is the block diagonal matrix with $J'$ on the diagonal. The spectrum of $J_n$ is well-known to be $\{n^{(1)}, 0^{(n-1)} \},$ so the spectrum of $J-kI$ is also easily found. Perhaps there is some trick with block matrices that will allow for an easy solution.
AI: Notice that the condition that $i$ and $j$ are adjacent is exactly the same thing as saying that either $\lceil i/n\rceil = \lceil j/n\rceil$ or $i\equiv j$ mod $n$. Let $I_n$ denote the identity matrix and $J_n$ denote the $n\times n$ matrix of all $1$s. It can be shown that
$$ A_n=(J_n-nI_n)^{\otimes 2},$$
where $A^{\otimes 2}$ denotes the Kronecker square, i.e. the Kronecker product $A\otimes A$. It follows from properties of the Kronecker product that the eigenvalues of $A_n$ are exactly the pairwise products $\lambda_i\lambda_j$ of the eigenvalues of $J_n-nI_n$ (for a symmetric $B=Q\Lambda Q^T$ one has $B\otimes B=(Q\otimes Q)(\Lambda\otimes\Lambda)(Q\otimes Q)^T$). Since $J_n$ has eigenvalues $n$ (once, with the all-ones eigenvector) and $0$ (with multiplicity $n-1$), the matrix $J_n-nI_n$ has eigenvalues $0$ and $-n$, so every pairwise product is either $0$ or $n^2$. In particular all eigenvalues of $A_n$ are non-negative, and counting multiplicities, $n^2$ occurs $(n-1)^2$ times and $0$ occurs $n^2-(n-1)^2=2n-1$ times, exactly as conjectured.
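A short NumPy sketch that builds $A_n$ as this Kronecker square and confirms the spectrum for $n=3$:

```python
import numpy as np

n = 3
B = np.ones((n, n)) - n * np.eye(n)    # J_n - n I_n
A = np.kron(B, B)                       # A_n from the question
print(np.round(np.sort(np.linalg.eigvalsh(A)), 6))
# expected: 0 with multiplicity 2n-1 = 5 and n^2 = 9 with multiplicity (n-1)^2 = 4
```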
|
H: Factor Theorem in $\mathbb{Z}_m[x]$
All the numbers I mentioned below are integers.
Question:
In $\mathbb{Z}_m[x]$, if $f(c_1) = 0$ and $f(c_2) = 0$, it does not always follow that $(x - c_1)(x - c_2) \mid f(x).$ What hypothesis on $c_1, c_2$ is needed to make that true?
I think when $c_1$ and $c_2$ are in different congruent class, the statement is true. But I don't know how to rigorously prove it.
My attempt: Suppose $f(x)$ is expressed in its simplest form (i.e. all the coefficients lie between $0$ and $m-1$). Then $f(c_1)-ms_1=0$ and $f(c_2)-ms_2=0$ for some integers $s_1,s_2$. I want to find a polynomial $g(x)$ such that $g(c_1)=s_1,g(c_2)=s_2$, where all the coefficients of $g(x)$ are integers. Thus, we can apply the Factor Theorem in $\mathbb{Z}[x]$ to $f(x)-g(x)m$.
However, I don't know whether my idea is correct, and I don't know how to find such $g(x)$.
AI: The usual division algorithm works in any polynomial ring for monic divisors. This gives $$f(x) = (x-c_2)g(x)$$ To repeat this step again, you need $g(c_1)=0$ but all we know is $$0=f(c_1)=(c_1-c_2)g(c_1)$$ This will imply $g(c_1)=0$ if $c_1-c_2\in\mathbb{Z}_m$ is not a zero divisor, so $$\gcd(c_1-c_2,m)=1$$ is probably the most simple condition.
|
H: Question about the radius of convergence in complex analysis
This is related to this question. Tell me if this should be in the original question.
Is it possible for a complex function to have both conditions below?
Let $0<r<R$ and $a>0$ be a real number that satisfies $a+r<R$.
Condition 1: Power series expansion at $z=0$ has a radius of convergence $R$.
Condition 2: Power series expansion at $z=a$ has a radius of convergence $r$.
(Edit: It seems like I should swap R and r to make it consistent with the original question, but it might confuse people who have already read the question so I will leave it as it is.)
My attempt:
I think this is impossible. This is because a disk with radius $R$ and center $z=0$ (we call it a disk A) contains a disk with radius $r$ and center $z=a$(we call it a disk B) like an image below.
This is the case $r=1$,$a=2$ and $R=4$. So, the radius of convergence is the distance to the nearest singularity. By condition 2, this means that there is a singularity on $|z-a|=r$. However, by condition 1, the function is analytic in $|z|<R$. This means that the function is also analytic on $|z-a|=r$. This is a contradiction. So it is impossible.
My concern about this argument:
Is the radius of convergence really the distance to the nearest singularity?
Does the condition 1 really mean that the function is analytic in $|z|<R$?
I am not sure about those 2 question, so I am not really confident about this argument.
Is this correct? If not, is it possible for a complex function to have both conditions?
AI: More precisely, the radius of convergence is the radius of the largest open disk centred at the expansion point on which there is an analytic function that coincides with your function near the expansion point. If the series around $0$ has radius of convergence $R$,
then the corresponding analytic function is analytic in the disk around $0$ of radius $R$, and therefore in the disk around $a$ of radius $R - |a|$.
However there is a loophole: who says that function is the same as the function you are expanding around $a$? You might have chosen a function that has a branch cut that comes between $0$ and $a$ (with branch point outside the big disk). If the branch cut had been chosen differently, you'd have a function analytic in the big disk. But the way you chose it, the function near $z=a$ is on a different branch, and this branch might have a singularity near $z=a$ that the other branch does not. For example, this can occur with a function of the form
$$ f(z) = \frac{1}{\sqrt{z-p} - q} $$
where $p$ is in the first quadrant and you choose the principal branch of the square root.
EDIT: For concreteness, let's take
$$ f(z) = \frac{1}{\sqrt{z} + 1 - 10 i} $$
with the principal branch.
It has a branch point at $0$. Note that the principal branch of $\sqrt{z}$ has real part $\ge 0$, so the denominator is never $0$. However, other branches may have a pole at $z = (1-10i)^2 = -99-20i$. The Taylor series around $z=-99-20i$ has radius $101$ (the distance to the branch point at $0$).
But the Taylor series around $z=-99+20i$ has radius only $40$, because the analytic continuation in a disk around $-99+20i$ will run into a pole at $-99-20i$.
|
H: Upper bound of line integral along simple closed curve.
Let $U$ be an open set in $\Bbb C$ and $f\in H(U)$.
Fix a point $z\in U$.
Consider the line integral $$\displaystyle \oint_{\partial D(z,\varepsilon)} \frac{f(\zeta)-f(z)}{\zeta-z}\,d\zeta$$
Since $\frac{f(\zeta)-f(z)}{\zeta-z}$ is continuous on $U-\{z\}$ and $[0,2\pi]$ is compact (parametrizing the circle by $\zeta=z+\varepsilon e^{i\theta}$),
$$\sup_{\theta\in [0,2\pi]} \left\lvert\frac{f(\zeta)-f(z)}{\zeta-z} \right\rvert$$ exists and is equal to $$\max_{\theta\in [0,2\pi]} \left\lvert\frac{f(\zeta)-f(z)}{\zeta-z} \right\rvert$$
Hence, $$\left\lvert\displaystyle \oint_{\partial D(z,\varepsilon)} \frac{f(\zeta)-f(z)}{\zeta-z}\,d\zeta\right\rvert \le \sup_{\theta\in [0,2\pi]} \left\lvert\frac{f(\zeta)-f(z)}{\zeta-z} \right\rvert \, 2\pi\varepsilon \to 0 \;\;\text{as } \varepsilon \to 0$$
We get $\displaystyle \oint_{\partial D(z,\varepsilon)} \frac{f(\zeta)-f(z)}{\zeta-z}\,d\zeta=0$.
Is my argument correct?
Thanks for your helping.
AI: You have demonstrated that
$$
\lim_{\varepsilon \to 0}\oint_{\partial D(z,\varepsilon)} \frac{f(\zeta)-f(z)}{\zeta-z} \, d\zeta = 0 \, .
$$
In order to show that the integral itself it zero you would have to show that it is independent of $\varepsilon$. That is correct but requires some justification.
It may be easier to use Cauchy's integral theorem: The function
$$
g(\zeta) = \frac{f(\zeta)-f(z)}{\zeta-z}
$$
is holomorphic in $U \setminus \{ z \}$ with a removable singularity at $\zeta =z$ (because it has a limit there). It follows that $g$ can be extended holomorphically over $z$, and then Cauchy's integral theorem states that
$$
\oint_{\partial D(z,\varepsilon)} \frac{f(\zeta)-f(z)}{\zeta-z} \, d\zeta = \oint_{\partial D(z,\varepsilon)} g(\zeta) \, d\zeta = 0 \, .
$$
|
H: Least squares method to get the fit formula
This is my first post here. For one of my projects I need to do a temperature compensation according to the distance; browsing, I found an article called "High precision infrared temperature measurement system based on distance compensation" that does exactly what I need. I have tried to replicate their methodology with their data, but I am unable to obtain the same result.
According to the article, a relationship is made between the distance and an adjustment parameter called Y (equation 1 in the image); my problem now is how to solve the equations presented in order to obtain the parameters $a_0, a_1, a_2$ shown in equation 4.
Article capture
Data
Thank you very much to whoever can help me.
AI: You are correct : there is a problem somewhere.
If we use the data
$$\left(
\begin{array}{cc}
d & T \\
0 & 33.37 \\
10 & 33.11 \\
20 & 32.87 \\
30 & 32.70 \\
40 & 32.32 \\
50 & 31.82 \\
60 & 31.32
\end{array}
\right)$$ and the model is either
$$\frac T {T_0}=b_0+b_1\,d +b_2\,d^2$$ the exact results are
$$b_0=\frac{34988}{34965}\approx 1.0006578$$
$$b_1=-\frac{43}{116550}\approx -0.00036894037$$
$$b_2=-\frac{73}{6993000}\approx -0.000010439010$$ which differ from the numbers they give.
If, as it seems, they fit
$$\frac {T_0}T=c_0+c_1\,d +c_2\,d^2$$ the exact results are
$$c_0=\frac{11681771821960765499}{11687496483739052592}\approx 0.99951019$$
$$c_1=\frac{50500159478371189}{155833286449854034560}\approx 0.00032406529$$
$$c_2=\frac{2449273119119}{201708529727558400}\approx 0.000012142635$$ which also differ from the numbers they give.
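For reference, the same least-squares fits can be reproduced numerically (a sketch using numpy; `T0` denotes the temperature at $d=0$):

    import numpy as np

    d = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
    T = np.array([33.37, 33.11, 32.87, 32.70, 32.32, 31.82, 31.32])
    T0 = T[0]

    b2, b1, b0 = np.polyfit(d, T / T0, deg=2)    # fit T/T0 = b0 + b1*d + b2*d^2
    c2, c1, c0 = np.polyfit(d, T0 / T, deg=2)    # fit T0/T = c0 + c1*d + c2*d^2
    print(b0, b1, b2)    # ≈ 1.0006578, -3.6894e-04, -1.0439e-05
    print(c0, c1, c2)    # ≈ 0.9995102,  3.2407e-04,  1.2143e-05

Either way, the coefficients do not match the ones reported in the article.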
|
H: On properties of quotients in the abelian category
Suppose $A$ is an abelian group and $B$ a subgroup. Moreover, suppose $A/B = C$ is a free abelian group. Then $A$ is isomorphic to $B \oplus C$. Is there an elementary proof of this without the use of category theory? Thanks.
AI: If we think of abelian groups as $\mathbb Z$-modules then a free abelian group is just a free $\mathbb Z$-module. It's a standard result in homological algebra that if the last module in a short exact sequence is projective (free modules are projective) then the sequence splits.
The proof is very concrete. Let $f\colon A \to C$ be the map with kernel $B$ that factors to an isomorphism $A/B \simeq C$. As $C$ is free we can define a map $g\colon C \to A$ that sends each basis element $x \in C$ to some choice of element in $f^{-1}(x)$. Then $f \circ g$ is the identity on $C$. Now the image of $g$ and the kernel of $f$ are submodules of $A$. Prove that these submodules generate $A$ and have trivial intersection, so $A = \mathrm{im} \ g \oplus \ker f$. Then note that the image is isomorphic to $C$ and the kernel is $B$.
|
H: Evaluate the Improper Integral(help)
I encountered the following integral while solving a log-normal distribution question. Initially, I thought that since the integrand is an odd function, the integral evaluates to zero. But since it is an improper integral, I think we cannot argue that so simply. Upon further inspection, I also found that the indefinite integral has no elementary form. How to evaluate it then?
$$\int_{-\infty}^{+\infty}(\sin{2\pi x}) e^{-x^{2}/2}dx $$
AI: Here you can use the fact that the integrand is odd, because the improper integral exists; so in calculating the improper integral you can compute $\int_{-r}^r (\cdots)$ and then take $r\to \infty$.
More explicitly, note that for all $x$, $\left|\sin(2\pi x) e^{-x^2/2}\right| \leq e^{-x^2/2}$, and $\int_{-\infty}^{\infty} e^{-x^2/2}\, dx < \infty$. Therefore $\int_{-\infty}^{\infty}\left|\sin(2\pi x) e^{-x^2/2}\right|\,dx < \infty$. Now recall that absolute convergence of integral implies regular convergence. This means the following limit (which is the definition of improper integral) exists:
\begin{align}
\int_{-\infty}^{\infty}\sin(2\pi x) e^{-x^2/2}:= \lim_{r\to \infty}\int_{0}^{r}\sin(2\pi x) e^{-x^2/2}\, dx + \lim_{\alpha\to \infty}\int_{-\alpha}^{0}\sin(2\pi x) e^{-x^2/2}\, dx
\end{align}
Because this limit exists, we can show that
\begin{align}
\int_{-\infty}^{\infty}\sin(2\pi x) e^{-x^2/2}\, dx &= \lim_{r \to \infty} \int_{-r}^r \sin(2\pi x)e^{-x^2}\, dx \\
&= 0,
\end{align}
where the second equality is because for each $r$, oddness of integrand implies the integral is $0$, so the result is $0$ even after the limit.
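A quick numerical check (a sketch with scipy; the Gaussian factor decays fast enough for quad to handle the infinite limits) confirms the value is $0$ up to roundoff:

    import numpy as np
    from scipy.integrate import quad

    val, err = quad(lambda x: np.sin(2 * np.pi * x) * np.exp(-x**2 / 2),
                    -np.inf, np.inf)
    print(val, err)    # val is on the order of 1e-16, i.e. zero within the error estimate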
|
H: Prove that four vectors of the three dimensional Euclidean space are always linearly dependant.
Could anyone check my proof?
Statement: Prove that four vectors of the three dimensional Euclidean space are always linearly dependant.
Proof: A group of vectors are linear dependent if their determinant is zero. Suppose we have three linearly independent vectors,$\textbf{u},\textbf{v},\textbf{w}$, in $\Bbb{R}^3$. Now consider the subspace $(a,b,c,0)$ of $\Bbb{R}^4$ which is isomorphic to $\Bbb{R}^3$. We map the three vectors to that subspace. If we add another vector $\textbf{x}$ to $(a,b,c,0)$, which is the same as adding another vector to $\Bbb{R}^3$, we see that the determinant of the four vectors is equal to zero. Therefore, four vectors in three dimensional Euclidean space are always linearly dependent.
Proof (attempt again): Suppose we have four linearly independent vectors $\textbf{u},\textbf{v},\textbf{w}$ and $\textbf{x}$ in $\Bbb{R}^3$. Then $a_{1}\textbf{u}+a_{2}\textbf{v}+a_{3}\textbf{w}+a_{4}\textbf{x}=\textbf{0}$ only when $a_{1}=a_{2}=a_{3}=a_{4}=0$.
However,
\begin{bmatrix}u_{x} & v_{x} & w_{x} & x_{x}\\u_{y} & v_{y} & w_{y} & x_{y}\\u_{z} & v_{z} & w_{z} & x_{z}\end{bmatrix}
becomes
\begin{bmatrix}u_{x} & v_{x} & w_{x} & x_{x}\\0& v'_{y} & w'_{y} & x'_{y}\\0 & 0 & w'_{z} & x'_{z}\end{bmatrix}
by carrying out row operations.
Assuming that the vectors are not zero vectors this shows that there is another non-zero solution for $a_4$ other than zero which is $\frac{-w'_{z}}{x'_{z}}$.
Therefore any four vectors in $\Bbb{R}^3$ are linearly dependent.
AI: Talking about determinants does not make much sense here, since one needs a square matrix to start with, and four $3$-dimensional vectors will not give a square matrix.
However, you can work with a similar argument. If you have four linearly independent vectors, that means any three of them are also linearly independent, say $a,b,c$.
Then $a,b,c$ form a basis of $\Bbb R^3$, so the fourth vector, the one you did not include among those three, can be written as a linear combination of $a,b,c$, contradicting the assumed independence.
I will leave the details of the proof/arguments to you.
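For intuition, the rank argument is easy to see numerically (a sketch using numpy/scipy; a random $3\times 4$ matrix almost surely has rank exactly $3$, and its null space supplies the nonzero coefficients):

    import numpy as np
    from scipy.linalg import null_space

    M = np.random.randn(3, 4)              # columns: four vectors in R^3
    a = null_space(M)[:, 0]                # a nonzero coefficient vector with M @ a = 0
    print(np.linalg.matrix_rank(M))        # at most 3
    print(np.allclose(M @ a, 0), np.linalg.norm(a) > 0)   # True True: the columns are dependent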
|
H: Given that $f\leq g$ a.e then how to show that Essential sup $f\leq $ Essential sup $g$?
Given that $f\leq g$ a.e then how to show that Essential sup $f\leq $ Essential sup $g$?
$$\text{ess} \sup f=\inf\{b\in \mathbb R\mid \mu(\{x:f(x)>b\})=0\}$$
From the definition, it is clear that inequality holds. But how to prove rigorously given fact.
Any Help / Hint will be appreciated.
AI: If $\mu (g >b)=0$ then $\mu (f>b)=0$. [Indeed $(f >b ) \subseteq (g>b) \cup (f>g)$, and a union of two sets of measure $0$ has measure $0$.]
The infimum of $A$ is $\leq$ the infimum of $B$ if $B \subseteq A$.
Notation: $(f>b)=\{x\in \mathbb R: f(x) >b\}$, etc.
|
H: Prove that two vertices of a directed graph can reach one another if and only if their strongly connected components can as well
Hi, I am struggling to prove the following statement regarding strongly connected components in a directed graph. Any help would be appreciated, thanks in advance.
Prove that, in a directed graph, a vertex $u$ can reach a vertex $v$ if and only if the strongly connected component of $u$ can reach the strongly connected component of $v$.
AI: Define the relation $x\to y$ to be true when there is a path that starts in $x$ and ends in $y$ (via other vertices if necessary). Define also $x\leftrightarrow y$ to mean $x\to y$ and $y\to x$ (not necessarily the reverse path).
Then $x\leftrightarrow y$ is an equivalence relation (check!), and the equivalence classes $SC(x)$ are called strong components.
If $u\to v$, and $x\in SC(u)$, $y\in SC(v)$, then there are paths $x\to u$, $u\to x$, $y\to v$, and $v\to y$. So $x\to u\to v\to y$ gives a path from $x$ to $y$, hence $x\to y$.
For the converse, suppose $SC(u)$ reaches $SC(v)$. Then there are $x\in SC(u)$, $y\in SC(v)$ such that $x\to y$. But then $u\to x\to y\to v$ connects $u$ to $v$.
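The equivalence is easy to test on examples (a sketch using networkx; `condensation` builds the DAG of strongly connected components and stores the vertex-to-component map in its 'mapping' graph attribute):

    import networkx as nx

    G = nx.DiGraph([(1, 2), (2, 1), (2, 3), (3, 4), (4, 3)])
    C = nx.condensation(G)                  # DAG of strongly connected components
    comp_of = C.graph['mapping']            # vertex -> index of its component

    for u in G:
        for v in G:
            assert nx.has_path(G, u, v) == nx.has_path(C, comp_of[u], comp_of[v])
    print("u reaches v  <=>  SC(u) reaches SC(v)")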
|
H: Locus of midpoint of line with endpoints always on x and y axis.
I came across the following question:
A line segment of length 6 moves in such a way that its endpoints remain on the x-axis and y-axis. What is the equation of the locus of its midpoint?
And I proceeded with the following:
Let (x,y) be the midpoint of the line segment.
From the description of the question we can see that the locus will be symmetric about the x-axis as well as the y-axis. So, I just solved for the first quadrant.
From the figure I saw that
$\left(y+\sqrt{\left(3^{2}-x^{2}\right)}\right)^{2}+\left(x+\sqrt{\left(3^{2}-y^{2}\right)}\right)^{2}=6^{2}$
, since (x,y) splits the segment into two parts which are of length 3 each.
Solving this I arrived at $y\sqrt{9-x^{2}}+x\sqrt{9-y^{2}}=9$.
Upon doing some probing I found that this is the equation for the the part of the circle $x^{2}+y^{2}=9$ in the first quadrant.
However, trying to plot the original equation $y\sqrt{9-x^{2}}+x\sqrt{9-y^{2}}=9$ in Desmos, does not render any graph. Upon selecting values such as 8.999 (or something closer to 9) for the RHS, I am able to get some sort of approximation of the circle equation to be rendered.
Link to graph.
I was wondering, what was wrong with the equation that caused it not to get rendered. Is there an issue with the equation or is it related to some technicality in Desmos.
AI: If you consider $\angle ABC = \theta$ in your figure, having $AB=6 \implies CB = 6\cos\theta, \ CA = 6\sin\theta$, and $C$ being the origin, assuming the line moves about in the positive quadrant, we get
$B=(6\cos\theta,0), \ A = (0,6\sin\theta) \implies \text{midpoint of }AB =(3\cos\theta,3\sin\theta), \ 0\le\theta\le \dfrac{\pi}2 $
So the midpoint has co-ordinates $(x,y)=(3\cos\theta,3\sin\theta)$ with $\theta \in \left[0,\dfrac{\pi}2\right] \\ \implies x^2+y^2=9$ but restricted only to the quarter in the first quadrant.
I am adding this as an answer because this way you don't come across the formula you have presented. I tried out some graphs on Desmos by beginning with $$x\sqrt{9-y^2} + y\sqrt{9-x^2}=9 \qquad (1)$$ which, as you have observed doesn't yield a graph, but playing around with the components in this formula, the following do produce graphs $$x\sqrt{9-y^2} + y\sqrt{9-x}=9 \\ x\sqrt{9-y} + y\sqrt{9-x^2}=9 \\ x\sqrt{9-y} + y\sqrt{9-x}=9$$ and the following don't $$\sqrt{9-y^2} + \sqrt{9-x^2}=9 \\ \sqrt{9-y} + \sqrt{9-x}=9 \\ \sqrt{9-y} + \sqrt{9-x^2}=9$$
Coming back to your formula, note that $$x^2+y^2=9 \implies \sqrt{9-y^2}=|x|\ne x$$ so, if Desmos had to graph your formula $(1)$, it should ideally graph both $$x|x|+y|y|=9\text{ and } x^2+y^2=9$$ since squaring adds unnecessary roots as Mick has noted, which have some amount of overlap but these are not entirely the same, so Desmos is kind of confused, I guess, as to which one you want, this definitely has to do with the graphing algorithm.
Point of interest: $$x\sqrt{9-y^2} + y\sqrt{9-x^2}=9$$ leads to both $$x^2+y^2=9 \text{ and } x|x|+y|y|=9$$ and the above two formulae overlap exactly in the quarter of the circle in the first quadrant.
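One way to see why a plotter struggles with the original equation (a numerical sketch with numpy): on $[0,3]^2$ the left-hand side never exceeds $9$, and by Cauchy-Schwarz it touches $9$ exactly on the quarter circle. So $x\sqrt{9-y^2}+y\sqrt{9-x^2}-9$ never changes sign, and a renderer that looks for sign changes finds no curve to draw, which is presumably what happens in Desmos.

    import numpy as np

    x = np.linspace(0, 3, 1001)
    X, Y = np.meshgrid(x, x)
    G = X * np.sqrt(9 - Y**2) + Y * np.sqrt(9 - X**2)
    print(G.max())                          # ~9, attained only near x^2 + y^2 = 9
    print((G > 9 + 1e-9).any())             # False: the level set {G = 9} is never crossed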
|
H: Does every infinite graph contain a maximal clique?
The original problem is stated in terms of the tolerance relation (reflexive and symmetric, but not necessarily transitive): Is every tolerance subset contained in a maximal tolerance subset?
For a set $X$ with a tolerance relation $r \subset X \times X$, a subset $U \subset X$ is said to be a tolerance subset if $(a, b) \in r$ for any $a, b \in U$. A tolerance subset is maximal if it is not contained in any other tolerance subset.
The answer to the corresponding question for the equivalence relation is yes, but things get harder if transitivity is removed.
I tried to prove it and found it sufficient to show the existence of one maximal tolerance subset, i.e. every (possibly infinite) undirected graph contains a maximal clique.
Thank you!
AI: This follows from Zorn's lemma: any single vertex is a clique, and if cliques are ordered by inclusion, the union of the cliques in any chain is an upper bound (it is again a clique, since any two of its vertices lie together in some clique of the chain).
Without Zorn, it is not true. For any equivalence relation you can define a graph where two elements are connected iff they are not equivalent, and then a maximal clique is precisely a set of representatives for the equivalence classes, which needs the axiom of choice.
|
H: If a matrix has linearly independent columns, does it automatically have a left inverse?
If a matrix has linearly independent columns, does it automatically have a left inverse?
So I know the opposite is true. That is, if a matrix has a left inverse, that means that the columns of the matrix are linearly independent. Was wondering if a matrix has linearly independent columns, does that automatically mean it has a left inverse?
Thanks!
AI: Yes, it does mean that. There are several ways to see this, but here is one:
If the matrix is $m\times n$, then the columns being linearly independent means the matrix has rank $n$. Thus the $m$ rows span an $n$-dimensional subspace of $\Bbb R^n$, which must be $\Bbb R^n$ itself. In particular, that means that there are linear combinations of the rows that make up each of the basis vectors.
The $k$th row of any left inverse will be the coefficients of such a linear combination for the $k$th basis vector, and any matrix consisting of such rows will be a left inverse. (In general, in a matrix product $AB=C$, the $k$th row in $C$ is a linear combination of the rows in $B$ given by the coefficients in the $k$th row of $A$. Also, more commonly, the $k$th column in $C$ will be a linear combination of the columns of $A$ given by the coefficients in the $k$th column of $B$.)
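Concretely, when the columns are independent the Gram matrix $A^\intercal A$ is invertible, and $L=(A^\intercal A)^{-1}A^\intercal$ is one explicit left inverse (a sketch using numpy; this is a standard choice, not the only one):

    import numpy as np

    A = np.random.randn(5, 3)                  # almost surely has independent columns
    L = np.linalg.inv(A.T @ A) @ A.T
    print(np.allclose(L @ A, np.eye(3)))       # True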
|
H: $A = (a_{ij})$ in the matrix definition
I have the following matrix definition
An m × n (read “m by n”) matrix A over a set S is a rectangular array
of elements of S arranged into m rows and n columns: (an $m \times n$ array of the entries $a_{ij}$ is shown here)
We write $A = (a_{ij})$.
What is the meaning of $A = (a_{ij})$? $a_{ij}$ is an elements in the matrix, what's the point it writing this equality?
AI: Some times you want to talk about the matrix as a whole. Then you use $A$. Some times you want to talk about the elements. Then you use $a_{ij}$. The point of writing the equality is to formally establish that they are, ultimately, just two different notations for the same thing.
|
H: What is the probability to form a triangle with the three pieces of the stick?
On a stick $1$ meter long a point $X \sim U[0,1]$ is marked at random. Given $X=x$, a second point $Y\sim U[x,1]$ is also marked.
1) Find the density of $(X,Y)$ showing the domain.
$$\rightarrow \quad f_{XY}(x,y)=\frac{1}{1-x}\mathbb{I}_{[0,1]}(x)\mathbb{I}_{[x<y<1]}(y)$$
2) Say if $X$ and $Y$ are independent or not, and compute $\operatorname{Cov}(X,Y)$.
$$\rightarrow f_Y(y)=-\log(1-y)\mathbb{I}_{[0,1]}(y)\Rightarrow f_X(x)f_Y(y)\neq f_{XY}(x,y)\\
\Rightarrow X\text{ and }Y\text{ are not independent}$$
$$\rightarrow \operatorname{Cov}(X,Y)=-\frac{1}{6}$$
3) Now suppose we break the stick at the points $X$ and $Y$ and form a triangle with the pieces we obtain. Remembering that in a triangle the sum of the lengths of two sides must be greater than the length of the third side, what is the probability of forming a triangle with the three pieces of the stick?
I'm stuck on point 3). How would you fix it?
Thanks in advance for any help.
AI: Since the sum of the lengths of any two sides must be greater than the third side, each piece must be shorter than $0.5$, so the probability is
$$\mathbb{P}[Y-X<\frac{1}{2};X<\frac{1}{2};Y>\frac{1}{2}]$$
Graphically:
In formula:
$$\int_0^{\frac{1}{2}} \frac{1}{1-x}\left(\int_{\frac{1}{2}}^{x+\frac{1}{2}} dy\right)dx=\frac{2\ln 2-1}{2}\approx 0.19$$
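A Monte Carlo simulation agrees with this value (a sketch using numpy; the triangle inequality for the three pieces is equivalent to every piece being shorter than $1/2$):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10**6
    X = rng.uniform(0, 1, n)
    Y = rng.uniform(X, 1)                       # Y ~ U[X, 1] given X
    pieces = np.column_stack([X, Y - X, 1 - Y])
    print(np.mean(pieces.max(axis=1) < 0.5))    # ≈ (2 ln 2 - 1)/2 ≈ 0.193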
|
H: Distribution of XY with X and Y Bernoulli distributed
I have a Problem with this exercise:
$X,Y:\Omega \to \{0,1\}$ are random variables with $X$~Bernoulli($\frac{1}{2}$) and $Y$~Bernoulli($\frac{3}{4}$). We also know that $P(X=Y=0)=\frac{1}{4}$
I already showed that $X$ and $Y$ are not independent. Now I want to determine the distribution of $XY$.
Therefore I have to calculate $\rho(1)=P(X=1 \cup Y=1)-P(X=1 \cap Y=1)=P(X=1)+P(Y=1)-P(X=Y=1)=\frac{1}{2}+\frac{3}{4}-\frac{3}{4}=\frac{1}{2}$
Because $P(X=Y=1)=1-P(X=Y=0)=\frac{3}{4}$ is this true?
Therefore I thought $\rho(0)$ musst be $\frac{1}{2}$ because $\rho(0)+\rho(1)=1$
But if I try to calculate $\rho(1)=P(X=1 \cup Y=0)+P(X=0 \cup Y=1)+P(X=0 \cup Y=0)=P(X=1)+P(Y=0)+P(X=0)+P(Y=1)+P(X=0)+P(Y=0)-P(X=Y=0)$
But that's definitely not right.
I hope you can help me
AI: $$P(XY=0)=P((X=0)\cup (Y=0))$$ $$=P(X=0)+P(Y=0) -P(X=Y=0)=\frac 1 2 +\frac 1 4 -\frac 1 4=\frac 1 2.$$
And $P(XY=1)=1-P(XY=0)=1-\frac 1 2 =\frac 1 2$.
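For completeness, the whole joint table is pinned down by the given marginals together with $P(X=Y=0)=\frac14$ (a small sketch in Python reconstructing it and the law of $XY$):

    p = {(0, 0): 0.25}
    p[(0, 1)] = 0.5 - p[(0, 0)]      # P(X=0) = 1/2
    p[(1, 0)] = 0.25 - p[(0, 0)]     # P(Y=0) = 1/4, so P(X=1, Y=0) = 0
    p[(1, 1)] = 1 - sum(p.values())  # = 1/2
    print(p)
    print("P(XY=1) =", p[(1, 1)], " P(XY=0) =", 1 - p[(1, 1)])    # 1/2 and 1/2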
|
H: Find the minimum of the set $A=\left\{\int_0^1(t^2 - at-b)^2 dt\, : \,a,b \in \mathbb{R}\right\}$.
Let $$A=\left\{\int_0^1(t^2 - at-b)^2 dt\, : \,a,b \in \mathbb{R}\right\}\,.$$ Find the minimum of $A$.
$\textbf{My attempt:}$
Well, we have
$ 0 \leq\int_0^1(t^2 - at-b)^2 dt = \frac{1}{5} - \frac{a}{2} + \frac{a^2-b}{3} + ab+b^2$.
OK, I can see this as a function $f: \mathbb{R}^2 \to \mathbb{R}$ given by $f(a,b) = \frac{1}{5} - \frac{a}{2} + \frac{a^2-b}{3} + ab+b^2$; then I need to find $(a,b)$ such that $f(a,b)$ is minimal.
But I don't know how I can do that... Can you give me a hint?
AI: Your calculation is incorrect. For $a,b\in\mathbb{R}$, if $f(a,b):=\displaystyle\int_0^1\,(t^2-at-b)^2\,\text{d}t$, then
$$f(a,b)=\frac{a^2}{3}+ab+b^2-\frac{a}{2}-\frac{{\color{red}2}b}{3}+\frac{1}{5}\,.$$
Thus,
$$f(a,b)=\frac{1}{3}\,\left(a+\frac{3(2b-1)}{4}\right)^2+\frac{1}{4}\left(b+\frac{1}{6}\right)^2+\frac{1}{180}$$
for all $a,b\in\mathbb{R}$. This shows that $f(a,b)\geq \dfrac1{180}$ for each pair $(a,b)\in\mathbb{R}^2$. The inequality becomes an equality if and only if
$$a+\frac{3(2b-1)}{4}=0\text{ and }b+\frac{1}{6}=0\,,$$
which is equivalent to
$$(a,b)=\left(1,-\frac16\right)\,.$$
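This is easy to confirm symbolically (a sketch using sympy):

    import sympy as sp

    a, b, t = sp.symbols('a b t', real=True)
    f = sp.integrate((t**2 - a*t - b)**2, (t, 0, 1))
    crit = sp.solve([sp.diff(f, a), sp.diff(f, b)], [a, b])
    print(crit)                        # {a: 1, b: -1/6}
    print(sp.simplify(f.subs(crit)))   # 1/180, the minimum of A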
|
H: Detail of Proof of Theorem 6.17 in Probability Theory (A. Klenke)
There is a part of the proof of Theorem 6.17 that I don't understand.
Definition 6.16. A family $\mathcal{F} \subset \mathcal{L}^1(\mu)$ is called uniformly integrable if
$$ \inf_{0 \leq g \in \mathcal{L}^1(\mu)} \sup_{f \in \mathcal{F}} \int (|f| - g)^+ d\mu = 0. $$
Theorem 6.17. The family $\mathcal{F} \subset \mathcal{L}^1(\mu)$ is uniformly integrable if and only if
$$\inf_{0 \leq \widetilde{g} \in \mathcal{L}^1(\mu)} \sup_{f \in \mathcal{F}} \int_{\{|f| > \widetilde{g}\}} |f| d\mu = 0$$.
If $\mu(\Omega) < \infty$, then uniform integrability is equivalent to either of the following two conditions:
(i) $\inf_{a \in [0, \infty)} \sup_{f \in \mathcal{F}} \int (|f| - a)^+ d\mu = 0$,
(ii) $\inf_{a \in [0, \infty)} \sup_{f \in \mathcal{F}} \int_{\{|f| > a\}} |f| d\mu = 0$.
I got to the last part, which is to show that uniform integrability + $\mu(\Omega) < \infty$ imply (ii).
The idea of the proof: Assume $\mathcal{F}$ is uniformly integrable. Previous part of the theorem already showed that this is equivalent to $\inf_{0 \leq \widetilde{g} \in \mathcal{L}^1(\mu)} \sup_{f \in \mathcal{F}} \int_{\{|f| > \widetilde{g}\}} |f| d\mu = 0$. For any $\varepsilon> 0$, there exists $0 \leq \widetilde{g}_\varepsilon \in \mathcal{L}^1(\mu)$ satisfying $\sup_{f\in \mathcal{F}} \int_{\{|f| > \widetilde{g}_\varepsilon\}} |f| d\mu \leq \varepsilon$. Choose $a_\varepsilon$ such that $\int_{\{\widetilde{g}_{\varepsilon/2} > a_\varepsilon\}} \widetilde{g}_{\varepsilon/2} d\mu < \varepsilon / 2$. Then
$$\int_{\{|f| > a_\varepsilon\}} |f| d\mu \leq \int_{\{|f| > \widetilde{g}_{\varepsilon/2}\}} |f|d\mu + \int_{\{\widetilde{g}_{\varepsilon/2} > a_\varepsilon\}} \widetilde{g}_{\varepsilon/2} d\mu \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.$$
My question: Where was the $\mu(\Omega) < \infty$ assumption used?
AI: It is used to show that $(i)$ implies uniform integrability. More precisely, since $\mu (\Omega) < +\infty$, every constant function is in $\mathcal{L}^1(\mu)$, and thus the infimum taken over all $\mathcal{L}^1(\mu)$ functions is less than or equal to the infimum taken over all constant functions.
|
H: Hard Differential Equation
Can anyone help me to find the solution of this ODE : $$4(y')^2-y^2+4=0.$$
I've tried to find its solution by putting $y = e^{at}$ (for the homogeneous solution) and $y = 2$ (for a particular solution). My final solution is $$y = c_1 e^{0.5t} + c_2 e^{-0.5t} + 2,$$
but it didn't match the solution of this dude in this video link [time stamp 2:20, where the question is shown in the video].
I've even plotted his solution and my solution in a graph plotter, but there is a slight difference between the graphs.
Please, explain in full detail if you know how.
AI: The general form of equation
$$
A^2-B^2=1
$$
can be parametrized as $A=\pm\cosh(u)$, $B=\sinh(u)$ similar to a circle equation. If $(A,B)$ change smoothly, but remain on this curve, then also $u$ is a smooth function.
Here that gives $$y(x)=\pm2\cosh(u(x)), ~~ y'(x)=\sinh(u(x))$$ from that parametrization. Now take the derivative of the first equation and compare with the second, giving $$u'(x)=\pm\tfrac12\implies u(x)=\pm\tfrac12x+c.$$ This already solves the problem completely
$$
y(x)=\pm2\cosh(\tfrac12x+C).
$$
As to your attempt, that is not possible as the equation is not linear. You can get a linear equation by taking the derivative
$$
2y'(4y''-y)=0.
$$
Excluding the constant solutions and solutions with constant segments, the second factor indeed has the general solution
$$
y=c_1e^{\frac12 x}+c_2e^{-\frac12 x}.
$$
Inserting back into the original equation results in
$$
(c_1^2e^{x}-2c_1c_2+c_2^2e^{-x})-(c_1^2e^{x}+2c_1c_2+c_2^2e^{-x})+4=0
\\
\implies c_1c_2=1,~~c_1=\pm e^C,~c_2=\pm e^{-C}
$$
which again produces the solution.
The coefficients in the video give just another parametrization of the coefficient pair. $c_1=\frac12c$ and $c_2=2c^{-1}$ still satisfy $c_1c_2=1$.
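A quick symbolic check that the closed form solves the original equation (a sketch using sympy):

    import sympy as sp

    x, C = sp.symbols('x C', real=True)
    y = 2 * sp.cosh(x / 2 + C)
    print(sp.simplify(4 * sp.diff(y, x)**2 - y**2 + 4))   # 0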
|
H: The operator norm $\|L\|$
Let $C_0([0, 1])$ be a subspace of $C([0, 1])$, the space of real-valued continuous functions on the interval $[0, 1]$, defined by
$C_0 ([0, 1]) = \left\{ f \in C([0, 1]) \mid \int_0^1 f(t) dt = 0 \right\}$
, and define the norm as $\| f \|_\infty = \sup_{x \in [0, 1]} |f(x)|$.
Then, define linear operator $L: C_0 ([0, 1]) \to C ([0, 1])$ as
$(Lf)(x) = \int_0^x (x-t)f(t)\, dt\quad (x \in [0, 1])$
I can show that $L$ is bounded by using some inequalities, but what is the operator norm $||L||$?
So far, I hypothesize that $\|L\| = \frac{1}{4}$, by considering the definition $\|L\| = \sup\{\|Lf\|_\infty: f \in C_0 ([0, 1]) \text{ with } \|f\|_\infty = 1\}$, and then thinking of a continuous function that is very close to this one:
$f(x) =
\left\{ \begin{array}{ll}
1 & (0 \leq x \leq 1/2) \\
-1 & (1/2 < x \leq 1)
\end{array} \right.$
(I know this is not even continuous, but I'm thinking of an intuitive way to estimate $\|L\|$ by thinking of a function $f$ that satisfies $\|f\|_\infty = 1$, and would give the maximum of $\|Lf\|_\infty$.)
And then I get the $\frac{1}{4}$by calculating (assume $x > 1/2$)
$\int_0^x (x-t)f(t)\, dt = \int_0^{1/2} (x - t)\, dt + \int_{1/2}^x (t-x)\, dt = -\frac{1}{2}x^2 + x -\frac{1}{4}$
and finding the maximum value of the result ($\frac{1}{4}$ at $x = 1$)
Where do I go from here? How can I give a more mathematical approach to calculating $\|L\|$? Thank you in advance.
AI: Hints: Let $f_n(x)=-n(x-\frac 1 2 +\frac 1 n)+1$ for $\frac 1 2 -\frac 1 n \leq x \leq \frac 1 2 +\frac 1 n$, $1$ for $x \leq \frac 1 2 -\frac 1 n$ and $-1$ for $x \geq \frac 1 2 +\frac 1 n$. Then $f_n$ is continuous and $\int f_n(x)\,dx=0$. Observe also that $Lg$ can be defined for any integrable function $g$, and $|Lg(x)| \leq \int|g(t)|\,dt$ for all $x$. Show that $\int |f_n(x)-f(x)|\,dx \to 0$ (where $f$ is the discontinuous function you have introduced). Conclude that $Lf_n \to Lf$ uniformly. Can you finish?
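A rough numerical illustration of these hints (a sketch using scipy; it evaluates $\|Lf_n\|_\infty$ on a grid, and the values approach $1/4$ as $n$ grows):

    import numpy as np
    from scipy.integrate import quad

    def f_n(t, n):                       # the continuous approximations from the hint
        if t <= 0.5 - 1/n:
            return 1.0
        if t >= 0.5 + 1/n:
            return -1.0
        return -n * (t - 0.5 + 1/n) + 1.0

    def Lf(x, n):
        return quad(lambda t: (x - t) * f_n(t, n), 0, x)[0]

    xs = np.linspace(0, 1, 201)
    for n in (10, 100, 1000):
        print(n, max(abs(Lf(x, n)) for x in xs))    # tends to 0.25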
|
H: Matrix norm inequality $\| Bx\| \geq |\lambda| \| x \|$ for a real symmetric $B$
For a symmetric invertible matrix $B \in \mathbb{R}^{n \times n}$ with eigenvalues $\lambda_1, ..., \lambda_n \in \mathbb{R}$, it holds for all $x \in \mathbb{R}^{n}$ that
$$\|Bx\| \geq |\lambda| \; \|x\|$$
where $\lambda$ is the eigenvalue of smallest absolute value. I.e., denoting the absolutely smallest eigenvalue by $\lambda _{s} = \min_{\lambda \in \left\{ \lambda_1, ..., \lambda_n\right\} } |\lambda|$, we have
$$\|Bx\| \geq \lambda _{s} \|x\|$$
Since $B$ is symmetric, spectral theorem applies and there exists a unique orthonormal basis formed by eigenvectors $v_{1}, \dots, v_{n}$ of $B$. The spectral decomposition of $B$ is:
$$B = \sum_{i=1}^{n} \lambda _{i} v_{i}v_{i}^\intercal$$
The outer products $v_{i}v_{i}^\intercal$ are the orthogonal projections onto one-dimensional $\lambda _{i}$-eigenspace.
Now, I know there's a proof:
$$\|Bx\|^{2} = \sum_{i=1}^{n} \lambda _{i}^{2} ( v_{i}^\intercal x )^{2} \geq \min_{j\in\left\{ 1,..,n \right\}} \lambda _{j}^{2}\sum_{i=1}^{n} (v_{i}^\intercal x) ^{2} = \min_{j \in \left\{ 1,..,n \right\} } \lambda _{j}^{2} \|x\|^{2}$$
But I'm lost at two points:
Why does
$$\|Bx\|^{2} = \sum_{i=1}^{n} \lambda _{i}^{2} ( v_{i}^\intercal x )^{2}$$ hold? When I substitute $B$ I get
$$\left\lVert \left( \sum_{i=1}^{n} \lambda _{i} v_{i} v_{i}^\intercal \right) x \right\rVert ^{2} = \dots?$$
I tried to write it out, but it gets ugly and doesn't lead to the stated equivalence. Maybe I'm missing some identity which would make it simple..
Why does
$$ \sum_{i=1}^{n} ( v_{i}^\intercal x ) ^{2} = ||x||^{2}$$
I also looked at the linked question (Matrix norm inequality : $\| Ax\| \leq |\lambda| \|x\|$, proof verification), but I can't see why (s)he obtained $x^{*} A^{*} A x=x^{*} U^{*} \Lambda^{*} \Lambda U x$. In my (real) case I write out the decomposition as $B= Q \Lambda Q^{-1}$ so this would give $x^\intercal B^\intercal B x = x^\intercal Q \Lambda ^\intercal \Lambda Q^\intercal x $, not $x^\intercal Q^\intercal \Lambda ^\intercal \Lambda Q x$. The latter would be the case if $B = Q^{-1} \Lambda Q $, not $B = Q \Lambda Q^{-1} $, but I think that $Q \Lambda Q^{-1} \not = Q^{-1} \Lambda Q$. Afterwards it's also confusing whether I could just say that my orthogonal matrix is the isometry there and $y=Qx$ and $\|y\| = \|x\|$ hold.
How to prove it the way as in the linked question? (only the "easy" symmetric case)
AI: The following might be a helpful approach to consider:
As pointed out we have:
$$
B=\sum_{i=1}^{n}\lambda_{i}v_{i}v_{i}^{T}
$$
Then,
$$
Bx=\sum_{i=1}^{n}\lambda_{i}v_{i}v_{i}^{T}x
$$
$$
\lVert Bx \rVert^{2}= \left(Bx\right)^{T}\left(Bx\right)=\left(\sum_{i=1}^{n}\lambda_{i}v_{i}v_{i}^{T}x\right)^{T} \left(\sum_{i=1}^{n}\lambda_{i}v_{i}v_{i}^{T}x\right)
$$
$$
\lVert Bx \rVert^{2}=\left(\sum_{i=1}^{n}\lambda_{i}x^{T}v_{i}v_{i}^{T}\right) \left(\sum_{i=1}^{n}\lambda_{i}v_{i}v_{i}^{T}x\right)
$$
Now, due to the orthonormality of the $v_{i}$ ($v_{i}^{T}v_{j}=1$ for $i=j$, else $0$) we obtain:
$$
\lVert Bx \rVert^{2}=\sum_{i=1}^{n}\lambda_{i}^{2}\left(x^{T}v_{i}\right)^{2}=\sum_{i=1}^{n}\lambda_{i}^{2}\left(v_{i}^{T}x\right)^{2}
$$
noting that $x^{T}v_{i}=v_{i}^{T}x$, since they are just scalars.
I hope this helps.
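A numerical illustration of the inequality (a sketch using numpy; a random symmetric matrix is almost surely invertible):

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.standard_normal((4, 4))
    B = (M + M.T) / 2                        # random symmetric matrix
    lam_s = np.abs(np.linalg.eigvalsh(B)).min()
    x = rng.standard_normal(4)
    print(np.linalg.norm(B @ x) >= lam_s * np.linalg.norm(x))    # True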
|
H: Brownian Motion at hitting time defined as an infimum
I'm reading a book on Brownian Motion, and they define the hitting time as
$$T_x = \inf\{t > 0 : B(t) = x \}$$
Later on they state that $B(T_x)=x$.
Why would they use inf instead of min? With inf, if we have infinite amount of crossings/hits at x in finite time, couldn't we have $B(T_x)\neq x$?
AI: When they define $T_x$ they haven't yet proved that the minimum is attained so they define it as infimum. But then we can use continuity of paths to prove that the infimum is actually a minimum. The fact that $B(T_x)=x$ follows from continuity of Brownian paths and the fact that the paths attain the value $x$ at some time $t$.
|
H: Find the range of the function $f(x)=\sqrt {\log (\sin^{-1}x +\frac 23 \cos ^{-1} x)}$
For the inner function
$$\sin^{-1} x +\frac 23 (\frac{\pi}{2}-\sin^{-1} x)$$
$$\frac 13 (\pi +\sin^{-1}x)$$
$$\frac{-\sin^{-1} x}{3}$$
Since it is inside a log function which is inside a square root
$$-\frac{\sin^{-1} x}{3} \ge 1$$
$$\sin^{-1}x \le -3$$ which looks wrong, and is in fact wrong, because the answer is $[0,\sqrt{\log (\frac{\pi}{2})}]$
Where am I going wrong?
AI: You’re wrong in saying that $$\pi+\sin^{-1} x =-\sin^{-1}x $$ We have, $$-\frac{\pi}{2} \le \sin^{-1} x \le \frac{\pi}{2} \\ \frac{\pi}{6} \le \frac 13 (\pi +\sin^{-1} x ) \le \frac{\pi}{2} $$Since $\log(x)$ is increasing, $${\log\frac{\pi}{6}} \le f^2(x) \le \log\frac{\pi}{2} $$ Now we need to take the square root. For this to be defined we must have the argument inside $\log$ to be $\ge 1$, and so the range comes out to be $$0\le f(x)\le \sqrt{\log\frac{\pi}{2}} $$
|
H: Calculate line integral $\int_{\ell} y \cos x d \ell$
I am asked to calculate the integral $$\int_{\ell} y \cos x \,d \ell$$ where $\ell$ is the graph of the function $\phi(x)=\sin(x)$ on the domain $x \in [0,\frac{\pi}{2}]$.
So what I understand is that $\frac{d\ell}{dx}=\cos(x)$, so I can substitute it in my integral, getting
$$\int_{0}^{\frac{\pi}{2}} y \cos x\cdot \cos x dx$$
but the answer should be : $$\int_{0}^{\frac{\pi}{2}} \sin x \cos x \sqrt{1+\cos ^{2} x} d x$$
It is not clear to me how the variable $y$ just changed to $\sin x$ and how $d\ell$ is calculated.
AI: Not quite true. What you call $d\ell/dx$ is, in fact, $d\phi/dx$. The length element $d\ell$ is the length of the graph curve when we step $dx$ in the $x$ direction and $d\phi$ in the $y$ direction. These two steps form (approximately) the legs of a right triangle, so the hypotenuse $d\ell$ can be calculated from the Pythagorean theorem
$$
d\ell=\sqrt{d\phi^2+dx^2}=dx\sqrt{\phi'(x)^2+1}=dx\sqrt{\cos^2(x)+1}.
$$
Finally, $y(x)$ along the graph is exactly $\phi(x)=\sin(x)$.
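For what it is worth, the resulting integral can be evaluated with the substitution $u=\cos x$, giving $\int_0^1 u\sqrt{1+u^2}\,du=\frac{2\sqrt2-1}{3}\approx 0.6095$; a quick numerical check (a sketch with scipy) agrees:

    import numpy as np
    from scipy.integrate import quad

    val, _ = quad(lambda x: np.sin(x) * np.cos(x) * np.sqrt(1 + np.cos(x)**2),
                  0, np.pi / 2)
    print(val, (2 * np.sqrt(2) - 1) / 3)    # both ≈ 0.6095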
|
H: Direct sum of eigenspaces
This is a problem in Chapter 4, Algebra, Michael Artin, 2nd.
Let $T$ be a linear operator on a finite dimensional vector space $V$, such that $T^2=I$. Prove that for any vector $v$ in $V$, $v-Tv$ is either an eigenvector with eigenvalue $-1$ or the zero vector. Prove that $V$ is the direct sum of the eigenspaces $V^{(1)}$ and $V^{(-1)}$. The eigenspace $V^{(\lambda)}$ is the set of eigenvectors of $T$ with eigenvalue $\lambda$.
I am currently having a problem proving that $V$ is the direct sum of the eigenspaces $V^{(1)}$ and $V^{(-1)}$.
I find that the set of all eigenvalues $\Lambda$ of $T$ is $\Lambda = \{1, -1\}$
and $V^{(1)}$ and $V^{(-1)}$ are $T$-invariant subspaces of $V$. Also $V^{(1)} \cap V^{(-1)} = \{0\}.$
But I am stuck here. Thanks.
AI: Hint: $x=[(\frac x 2-T(\frac x 2)]+[(\frac x 2+T(\frac x 2)]$. Show that the first term belongs to $V^{(-1)}$ and the second term belongs to $V^{(1)}$.
|
H: Geometric multiplicity of eigenvalues in a diagonal block matrix
I'm trying to prove that the geometric multiplicity of an eigenvalue in a diagonal block matrix is the sum of the geometric multiplicities of the eigenvalue with respect to every block. I know that if I have a diagonal block matrix with $k$ blocks and I take the associated endomorphism (having fixed a basis) $f:V \to V$, then $V=W_1 \oplus ... \oplus W_k$, where $W_i$ is an $f$-invariant subspace $\forall i$. So I can consider the induced endomorphisms on each $W_i$ (let's call it $f_i$), and clearly the eigenspace of an eigenvalue $\lambda$ with respect to $f_i$ is $V_\lambda \cap W_i$, where $V_\lambda$ is the eigenspace of $\lambda$ with respect to $f$. Clearly these "induced eigenspaces" are still in direct sum, so I can use the Grassmann relation to obtain:
$$\sum_{i=1}^{k} \dim(V_\lambda \cap W_i)=\dim((V_\lambda \cap W_1)\oplus \cdots \oplus (V_\lambda \cap W_k))$$
The first member is the sum of the geometric multiplicities with respect to every block, so I have to prove that:
$$(V_\lambda \cap W_1)\oplus \cdots \oplus (V_\lambda \cap W_k)=V_\lambda$$
I'm having some troubles with this last step, could you help me please?
AI: We must show that $V_{\lambda} \subseteq (V_\lambda \cap W_1)\oplus \cdots \oplus (V_\lambda \cap W_k)$. So, suppose that $x \in V_\lambda$.
Because $V = W_1 \oplus \cdots \oplus W_k$, there exist $x_j \in W_j$ (for $j = 1,\dots,k$) such that $x = x_1 + \cdots + x_k$. Because $f(x) = \lambda x$, we have
$$
f_1(x_1) + \cdots + f_k(x_k) = f(x_1 + \cdots + x_k) = \lambda(x_1 + \cdots + x_k) = \lambda x_1 + \cdots + \lambda x_k.
$$
Because $f_j(x_j) \in W_j$ for each $j$ and because $W_1 \oplus \cdots \oplus W_k$ is a direct sum, we have
$$
f_1(x_1) + \cdots + f_k(x_k) = \lambda x_1 + \cdots + \lambda x_k \implies f_j(x_j) = \lambda x_j, \quad j = 1 ,\dots,k.
$$
So, it is indeed the case that $x \in (V_\lambda \cap W_1)\oplus \cdots \oplus (V_\lambda \cap W_k)$.
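A small numerical illustration of the statement being proved (a sketch using scipy; the geometric multiplicity is computed as the dimension of the null space of $A-\lambda I$):

    import numpy as np
    from scipy.linalg import block_diag, null_space

    A1 = np.array([[2., 1.], [0., 2.]])     # eigenvalue 2, geometric multiplicity 1
    A2 = np.array([[2., 0.], [0., 3.]])     # eigenvalue 2, geometric multiplicity 1
    A = block_diag(A1, A2)

    def geo_mult(M, lam):
        return null_space(M - lam * np.eye(M.shape[0])).shape[1]

    print(geo_mult(A1, 2) + geo_mult(A2, 2), geo_mult(A, 2))    # 2 2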
|
H: Finding real $(x,y)$ solutions that satisfies a system of equation.
I was given:
$x + y^2 = y^3 ...(i) \\ y + x^2 = x^3...(ii)$
And was asked to find real $(x,y)$ solutions that satisfy the system.
I subtracted $(ii)$ from $(i)$:
$x^3 - y^3 + y^2 - x^2 + x - y = 0$
Then factored it out so I have:
$(x-y)(x^2 + xy + y^2 - x - y + 1) = 0$
Multiplying it by two, I get:
$(x-y)(2x^2 + 2xy + 2y^2 - 2x - 2y + 2) = 0 \\
(x-y)((x^2 - 2x + 1) + (y^2 - 2y + 1) + (x^2 + 2xy + y^2)) = 0 \\
(x-y)((x-1)^2 + (y-1)^2 + (x+y)^2) = 0$
I noticed that a solution exists only if $x=y$ because there are no real solutions for $x$ and $y$ that satisfies $(x-1)^2 + (y-1)^2 + (x+y)^2 = 0$.
Substituting $x=y$ into the first equation, I get:
$y(y^2-y-1)=0$ where the roots are $y= 0, \frac{1+\sqrt{5}}{2}, \frac{1-\sqrt{5}}{2}$. Hence, the real solutions of $(x,y)$ that satisfy are:
$(x,y) = (0,0), (\frac{1+\sqrt{5}}{2},\frac{1+\sqrt{5}}{2}), (\frac{1-\sqrt{5}}{2}, \frac{1-\sqrt{5}}{2})$.
What I would like to ask is: Is there a better way to solve the question? It's from a local university entrance test, where this kind of question is meant to be done in under 3 minutes. It took me a while to work through the algebra above.
Someone in a local forum said something about symmetric systems, which implies that no solution exists for $x \neq y$. How do I know if a system is a symmetric one? (I never heard of this in high school here...) I would love to see a resource for this!
AI: You can begin by noting that the $2$ functions are inverses of each other (and only involve odd non-zero exponents). Using the fact that inverse functions are reflections in the line $y=x$, we can now see that the intersection points must be along the line $y=x$. Substituting $y$ into $x$ or vice-versa, we obtain the equation you get and obtain the solutions you got. That would only take about 3 minutes. I hope that helps :)
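A quick numerical spot-check of the three solutions (a sketch using numpy):

    import numpy as np

    phi = (1 + np.sqrt(5)) / 2
    for x in (0.0, phi, 1 - phi):            # 1 - phi = (1 - sqrt(5))/2
        y = x
        print(np.isclose(x + y**2, y**3), np.isclose(y + x**2, x**3))   # True True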
|