H: Proof verification: A certain process of redistribution stops after a finite number of steps.
QUESTION: There are $n\ge 3$ girls in a class sitting around a circular table, each having some apples with her. Every time the teacher notices a girl having more apples than both of her neighbours combined, the teacher takes away one apple from that girl and gives one apple each to her neighbours. Prove that this process stops after a finite number of steps.
(Assume that, the teacher has an abundant supply of apples.)
MY ANSWER:
We model the girls as gears. Now, let any gear which has more apples than both of its immediate neighboring gears combined rotate clockwise, and consequently the neighbors rotate counterclockwise.
(Note: The gears rotate only in groups of $3$ and the rotation of any group does not affect the other groups)
Any clockwise rotation decreases the number of apples by $1$ and any counter rotation increases the number by $1$.
We define a group of $3$ gears to be in a stationary state if the gear that is trapped on both sides has at most as many apples as the sum of its neighboring gears. In such a case, the group does not rotate, and remains stationary.
Now, firstly, since we are considering positive integers, any group must come to a stationary state after a finite number of rotations.
Define $\Omega_k = a_{1k}+a_{2k}+a_{3k}+\cdots+a_{nk}$ as the sum of the number of apples at the $k^{th}$ step. Here each $a_{ik}$ denotes the number of apples possessed by the $i^{th}$ girl, at the $k^{th}$ step.
Define $\Delta_k=\max(a_{1k},a_{2k},\ldots,a_{nk})$ as the maximum number of apples possessed by some girl at the $k^{th}$ step.
Say, $\Delta_0=a_j$, for some $j\in\{a\in\Bbb{N} : 1\leq a\leq n\}$ (where $\Delta_0$ represents the initial step)
Define $V(a_g)$ to be the maximum number of apples possessed by some girl other than girl $g$, i.e. the maximum over the set excluding girl $g$.
$\color{red}{\text{Claim:}}$ $\Delta_k\leq{a_j}$, $\forall k \in \{1,2,3,\ldots,n\}$
$\color{red}{\text{Proof:}}$ Let us start the process with the group $(a_{j-1},a_j,a_{j+1})$.
We have already proved that the number of rotations for this group to attain a stationary state is finite. Let us say that after the $m^{th}$ step,
$a_{jm}<V(a_j)$
From this step onwards until the completion of the last step (say $p$) of this group, $\Delta_k=V(a_j)$, where $m\leq{k}\leq{p}$
And for all $k<m$, clearly $\Delta_k=a_j$.
Therefore, we see that in the whole process the value of $\Delta$ never increases.
So, following the same pattern, we can say that for any group which attains a stationary state, the value of $\Delta$ either remains the same or decreases by $1$.
$\therefore \Delta_k\leq{a_j}$, $\forall k \in \{1,2,3,\ldots,n\}$
This completes the proof of our claim. $\blacksquare$
Hence, we can say, $\Delta_1\geq\Delta_2\geq\cdots\geq\Delta_n$.
This clearly proves $\Delta$ is a non-increasing function.
But we also observe that the value of the sum $\Omega$ increases by $1$ after every step.
$\Omega_{k}= a_{1k}+a_{2k}+\cdots+a_{nk}$
$\Omega_{k}<\underbrace{\Delta_{k}+\Delta_{k}+\cdots+\Delta_{k}}_{n\text{ times}}$
$\implies \Omega_{k}<n\,\Delta_{k}$.
$\implies \Omega_{k}<n\,\Delta_{0}$
But $\Delta_{0}$ is a constant, while $\Omega$ increases by $1$ at every step.
Hence, for this inequality to hold true, $\Omega$ cannot increase indefinitely, and therefore the process must terminate after a finite number of steps.
Q.E.D. $\square$
Is my proof correct? If not, can someone please prove it in a more elegant way?
AI: A simpler argument:
For each configuration $c$ we define the $\textit{unfairness}$ function by $$F(c)=\sum \max(0, a_i-(a_{i-1}+a_{i+1}))$$
Here, of course, $a_i$ is the number of apples the $i^{th}$ girl currently has and the indices are handled cyclically.
Then each iteration of the "smoothing" operation lowers $F$, hence the whole thing must halt after at most $F(c)$ iterations, and we are done.
Note: to see that one iteration of smoothing lowers $F$, let $$F_i(c)=\max(0,\,a_i-(a_{i-1}+a_{i+1}))$$ and consider one girl, $\#3$, say, who has more apples than her neighbors combined. Then, of course, we have $F_3(c)=a_3-(a_2+a_4)>0$. When we smooth we leave all the $a_i$ the same except that $a_3'=a_3-1$, $a_2'=a_2+1$ and $a_4'=a_4+1$. Now we need to look at each term in the sum to see if it might have increased. Of course $F_3(c)$ has dropped by either $1$ or $2$. What about the other terms that may have changed? Well, to compute $F_2(c')$ we remark that $a_3>a_2+a_4$ implies that $a_3>a_2+1$ (since each girl has some apples) so $a_3\ge a_2+2$ so $a_3'=a_3-1\ge a_2+1=a_2'$. It follows that $F_2(c')=0$ so it did not increase. The same argument applies to $F_4(c')$ and, as these are the only ones that might have increased, we are done.
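For anyone who wants to experiment, here is a minimal simulation of the process (a sketch of my own; the setup, names, and positive starting values are assumptions, with positivity mattering for the $F_2$ argument above):

```python
import random

def unfairness(a):
    """F(c) = sum_i max(0, a_i - (a_{i-1} + a_{i+1})), indices cyclic."""
    n = len(a)
    return sum(max(0, a[i] - (a[i - 1] + a[(i + 1) % n])) for i in range(n))

def smooth_until_stable(a):
    """Apply the teacher's move until no girl exceeds her neighbours combined."""
    a = list(a)
    steps = 0
    while True:
        n = len(a)
        i = next((i for i in range(n) if a[i] > a[i - 1] + a[(i + 1) % n]), None)
        if i is None:
            return a, steps
        before = unfairness(a)
        a[i] -= 1
        a[i - 1] += 1
        a[(i + 1) % n] += 1
        assert unfairness(a) < before  # F drops by 1 or 2, as argued above
        steps += 1

random.seed(0)
start = [random.randint(1, 30) for _ in range(7)]  # positive counts, as assumed
final, steps = smooth_until_stable(start)
print(start, "->", final, "in", steps, "steps")
```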
|
H: What is the name of this property of a ring?
Let $R$ be a ring. For all $x,y\in R$, there exists a $z\in R$ such that $xy=yz$.
Does this property have a name? I do not assume that multiplication has an identity or that it is commutative, but I do assume that multiplication is associative
AI: Such rings (where $\forall x,y \in R\, \exists z \in R\, xy=yz$) are called "right duo rings". In a right duo ring, every right ideal is a two-sided ideal.
|
H: Change of basis matrix from $\alpha$ to $\beta$ or from $\beta$ to $\alpha$?
In Peterson and Sochacki's Linear Algebra and Differential Equations they define (in section 5.3) the change of basis matrix from $\alpha$ to $\beta$, $[I]_{\beta}^{\alpha}$, as the matrix whose columns are the $\alpha$-coordinates of the $\beta$ vectors. This matrix transforms $\beta$-coordinates into $\alpha$ coordinates so I think it should be called the change of basis matrix from $\beta$ to $\alpha$.
Is there a good reason to call it the change of basis matrix from $\alpha$ to $\beta$?
AI: Yes, there is an excellent reason: you can obtain the matrix $A'$ of the associated linear map in basis $\beta$ from the matrix $A$ of this linear map in basis $\alpha$.
Indeed, let's denote $X, Y,\dots$ the column matrix of vectors in basis $\alpha$ and $X', Y',\dots,\:$ their column matrix in basis $\beta$. If $Y $ is the column matrix of coordinates of the image of a vector with column vector of coordinates $X$ in basis $\alpha$, we have the relation
$$Y=AX.\tag1$$
Now, denoting $P$ the change of basis matrix, the column matrices of the same vectors, in bases $\alpha$ and $\beta$ are linked through the relations
$$X=PX',\qquad Y=PY',$$
so that $(1)$ can be written as
$$PY'=A(PX')\iff Y'=P^{-1}A(PX')$$
which shows that the matrix of the linear map in basis $\beta$ has become
$$A'= P^{-1}AP.$$
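As a quick numerical sanity check (a minimal sketch with an arbitrary map and basis of my own choosing, not from the book):

```python
import numpy as np

# A: matrix of a linear map in basis alpha (here: the standard basis).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# P: change of basis matrix from alpha to beta; its columns are the
# alpha-coordinates of the beta basis vectors.
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])

A_prime = np.linalg.inv(P) @ A @ P  # matrix of the same map in basis beta

# Check: for beta-coordinates X', the image has beta-coordinates A' X'.
# Compare against computing in alpha-coordinates via X = P X'.
X_beta = np.array([2.0, 5.0])
Y_alpha = A @ (P @ X_beta)            # image, in alpha-coordinates
Y_beta = np.linalg.solve(P, Y_alpha)  # convert back to beta-coordinates
assert np.allclose(Y_beta, A_prime @ X_beta)
print(A_prime)
```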
|
H: Find all $z$ such that $|\cos z|^2+|\sin z|^2=4$
I need to solve for $z$ with $|\cos z|^2+|\sin z|^2=4$
I know $\cos z =\frac{1}{2}(e^{iz}+e^{-iz})$ and $\sin z = \frac{1}{2i}(e^{iz}-e^{-iz})$ but I'm not sure if this is helpful because I don't know how to split it into $\operatorname{Re}(z)$ and $\operatorname{Im}(z)$ to find $|\cos z|$ and $|\sin z|$
AI: Since\begin{align}|\cos z|^2+|\sin z|^2&=\cos(z)\overline{\cos(z)}+\sin(z)\overline{\sin(z)}\\&=\cos(z)\cos\left(\overline z\right)+\sin(z)\sin\left(\overline z\right)\\&=\cos\left(z-\overline z\right)\\&=\cos\bigl(2\operatorname{Im}(z)i\bigr)\\&=\cosh(2\operatorname{Im}z),\end{align}all you have to do is to solve the equation $\cosh(2\operatorname{Im}z)=4$. Can you take it from here?
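A numerical spot check of the identity and of the resulting solutions $\operatorname{Im} z=\pm\frac12\operatorname{arccosh}(4)$ (with arbitrary test values of my own):

```python
import cmath, math

# Check the identity |cos z|^2 + |sin z|^2 = cosh(2 Im z)
z = 1.3 - 0.7j
lhs = abs(cmath.cos(z))**2 + abs(cmath.sin(z))**2
assert math.isclose(lhs, math.cosh(2 * z.imag))

# Solutions of cosh(2 Im z) = 4: Im z = +/- acosh(4)/2, Re z arbitrary
y = math.acosh(4) / 2
for x in (0.0, 1.0, -2.5):
    for s in (+1, -1):
        w = complex(x, s * y)
        assert math.isclose(abs(cmath.cos(w))**2 + abs(cmath.sin(w))**2, 4.0)
print("Im z = ±", y)
```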
|
H: Proving monotonocity and convergence for sequences $(s_n)$ and $(t_n)$
Let $X:=(x_n:n\in \mathbb N)$ be a bounded sequence, and for each $n\in \mathbb N$ let $s_n:=\sup\{x_k:k\geq n\}$ and $t_n:=\inf\{x_k:k\geq n\}$.
Prove that $(s_n)$ and $(t_n)$ are monotone and convergent.
My approach: Now as $X$ is bounded, it is evident that $\inf X\leq x_n\leq \sup X\ \forall\ n\in \mathbb N$
Let $X_n=(x_k:k\geq n)$ or $X_n$ be $\text{n-tail}$ of $X$
Thus $s_n=\sup X_n$ and $t_n=\inf X_n$
Now as $X_k$ is finite, $\sup X_k\in X_k$ and $\inf X_k\in X_k$
Also it is evident that $X_n\subset X_{n-1}\subset X_{n-2}\subset \ldots \subset X_2\subset X\ \forall\ n\in \mathbb N$
$\therefore \sup X_n\leq \sup X_{n-1}\leq \sup X_{n-2}\leq\ldots\leq\sup X_2\leq \sup X\ \forall\ n\in \mathbb N$
Therefore $s_n$ is decreasing and $s_n\leq \sup X\ \forall\ n\in \mathbb N$
Similarly it can be proved that $t_n$ is increasing and $t_n\geq \inf X\ \forall\ n\in \mathbb N$
Therefore both $s_n$ and $t_n$ are monotone and convergent.
Please check this method for any mistakes.
Also $t_n\leq x_n\leq s_n\ \forall\ n\in \mathbb N$ means that $t_n$ and $s_n$ are lower and upper bounds for $X$ respectively, thus $t_n\leq \inf X$ and $s_n\geq \sup X$ which is in contradiction to what has been given before in the proof. I am doubtful of this statement as the upper and lower bounds are not fixed but change for every $n\in \mathbb N$ or are dependent on $n$. Is that allowed?
Please correct me wherever I have committed an error.
Thanks
AI: It’s not true that $X_k$ is necessarily finite. For instance, take $x_n=\left(-\frac12\right)^n$: this is clearly a bounded sequence, and all of its points are distinct, so each tail contains infinitely many distinct points.
But you have observed the key point, which is that if $X_n=\{x_k:k\ge n\}$, then $X_n\supseteq X_{n+1}$ for each $n\in\Bbb N$: from this it is immediate that $t_n\le t_{n+1}$ and $s_n\ge s_{n+1}$ and hence that $\langle s_n:n\in\Bbb N\rangle$ and $\langle t_n:n\in\Bbb N\rangle$ are monotone. If you already know that a bounded, monotone sequence converges, you’re practically done at that point.
No, in general it’s not true $t_n$ and $s_n$ are bounds on the whole sequence. Again you can look at my example at the beginning of the answer: for instance, $t_3=-\frac18$, which is bigger than $x_1=-\frac12$ and therefore not a lower bound for the whole sequence.
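To make this concrete, a small numeric illustration of the example above (my own sketch, approximating each tail with finitely many terms):

```python
# Brian's example: x_n = (-1/2)**n for n >= 1; s_n and t_n are
# approximated using a long finite tail.
N = 60
x = [(-0.5) ** n for n in range(1, N + 1)]

s = [max(x[n:]) for n in range(N // 2)]   # s_{n+1} = sup of the tail
t = [min(x[n:]) for n in range(N // 2)]   # t_{n+1} = inf of the tail

for n in range(5):
    print(f"s_{n+1} = {s[n]:+.6f}   t_{n+1} = {t[n]:+.6f}")

# s is non-increasing, t is non-decreasing, and t_3 = -1/8 > x_1 = -1/2,
# so t_3 is not a lower bound for the whole sequence.
assert all(s[i] >= s[i + 1] for i in range(len(s) - 1))
assert all(t[i] <= t[i + 1] for i in range(len(t) - 1))
assert t[2] == -0.125 > x[0]
```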
|
H: Is it true that $\frac{\ln(a)}2=\ln(\sqrt{a})$ for $a>0$? In particular, is $\frac{\ln(2)}{2}=\ln(\sqrt2)$?
I believe the following two identities are correct. For some reason, they look wrong to me. Are they?
$$ \frac{ \ln \left( 2 \right) } { 2 } = \ln( \sqrt{2} ) $$
$$ \frac{ \ln \left( a \right) } { 2 } = \ln( \sqrt{a} ) $$
The second one being valid for all $a > 0$.
AI: They are both correct. To prove them, use the logarithm property $\ln\left(a^b\right)=b\ln(a)$, for $a\gt0$.
This can be rewritten as
$$b\ln(a)=\ln\left(a^b\right),\;\;\;\text{for }a\gt0$$
$\frac{\ln(a)}{2}$ can be written as $\frac12\ln(a)$, and $a^{(1/2)}\equiv\sqrt a$.
You can finish it from here.
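A quick numerical spot check (my own, with arbitrary test values):

```python
import math

# Spot check: ln(a)/2 == ln(sqrt(a)) for a few positive a
for a in (2.0, 0.5, 10.0, 1e-6):
    assert math.isclose(math.log(a) / 2, math.log(math.sqrt(a)))
```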
|
H: Approximation of $\frac {n-c \choose k} {n\choose k} $ using a radical
Is there a good approximation for
$\frac {n-c \choose k} {n\choose k} $, given large parameter n (k,c can be large as well, but k,c<n/2)?
I think I can express it as a simple exponential function $f(n,k,c)$, but I am not sure.
I proved that this expression equals
$\prod_{j=0}^{k-1}\left(1-\frac{c}{n-j}\right)$
but I can't continue further.
AI: Using the factorial definition, we can rewrite this as $$\frac{(n-c)!\,(n-k)!}{n!\,(n-(c+k))!}$$
and then use falling factorials to reduce to $$\frac{(n-k)^{\underline c}\,(n-c)^{\underline k}\,(n-(c+k))!^2}{n^{\underline{c+k}}\,(n-(c+k))!^2}=\frac{(n-k)^{\underline c}\,(n-c)^{\underline k}}{n^{\underline{c+k}}}$$
$n^\underline t$ is a polynomial of degree $t$. Hence, we may apply what I refer to as Domination Leads To Irrelevancy: $$\lim_{x\to\infty}\bigg[\frac{\sum_{k=0}^m a_k x^k}{\sum_{k=0}^n b_k x^k}\bigg]=\lim_{x\to\infty}\bigg[\frac{a_m}{b_n}x^{m-n}\bigg]=\begin{cases} \frac{a_mb_n}{|a_mb_n|}\infty & m>n \\ \frac{a_m}{b_n} & m=n \\ 0 & m<n \end{cases}$$
to see that the value approaches $1$ for sufficiently large $n$, as both polynomials are of degree $c+k$ and our leading coefficients are both $1$.
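A numeric illustration (my own sketch, with arbitrary fixed $c$ and $k$) comparing the exact ratio with the product form from the question:

```python
from math import comb, prod

def ratio(n, c, k):
    """Exact value of C(n-c, k) / C(n, k)."""
    return comb(n - c, k) / comb(n, k)

c, k = 4, 7
for n in (10**2, 10**3, 10**4, 10**5):
    r = ratio(n, c, k)
    p = prod(1 - c / (n - j) for j in range(k))
    print(f"n={n:>6}  ratio={r:.6f}  product={p:.6f}")
# Both tend to 1 for fixed c and k, matching the degree argument above.
```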
|
H: Do I need Axiom of Choice for ZF construction of the Natural Numbers?
Using ZF axioms I have constructed the natural numbers like so:
0 = ∅
1 = {0} = {∅}
2 = {0,1} = {∅,{∅}}
3 = {0,1,2} = {∅,{∅},{∅,{∅}}}
4 = {0,1,2,3} = {∅,{∅},{∅,{∅}},{∅,{∅},{∅,{∅}}}}
etc.
I am trying to determine if I used the axiom of choice for the representation. For the number $0$ I obviously did not since there is only one set with no elements. However, for the other sets, say for $2$, there is an infinite number of choices. i.e. $\{ \; \{\emptyset\}, \{\{\{\emptyset\}\}\} \; \}$ is another set with two elements in it.
However if we define the way we choose our sets through a successor property:
$S(X) = X \cup \{X\}$
then we get the representative sets above. Because we used a property to pick our representation I don't think the Axiom of Choice is needed here. That is, we could use this property as a choice function.
Is this reasoning correct?
AI: I am not sure what you mean by "we could use this property as a choice function", but ignoring that point you are correct.
Since you have explicitly defined $S(X) = X \cup \{X\}$ there is no choice involved here. You can very easily define any natural number $n$ as $S^n(\emptyset)$, which is an expression that you can expand and write down.
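If it helps to see this concretely, here is a small sketch of my own (using Python frozensets as stand-ins for hereditarily finite sets) of the choice-free construction $S(X)=X\cup\{X\}$:

```python
# Iterate S(X) = X ∪ {X} starting from the empty set.
def successor(x):
    return x | frozenset({x})

def von_neumann(n):
    x = frozenset()          # 0 = ∅
    for _ in range(n):
        x = successor(x)     # n+1 = S(n) = n ∪ {n}
    return x

three = von_neumann(3)
assert len(three) == 3            # 3 = {0, 1, 2}
assert von_neumann(2) in three    # 2 ∈ 3
assert von_neumann(2) < three     # 2 ⊂ 3 (proper-subset test on frozensets)
```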
If you are taking a set theory class, though, it is worth noting that you have only constructed each individual number $n$ in this way. You have not constructed the set $\omega$ of all natural numbers. In order to build $\omega$ you need the axiom of infinity, and indeed $\omega$ itself (in the metatheory) provides a model of "most of ZF", with the axiom of infinity conspicuously absent. For more details see Kunen's (new) "Set Theory".
I hope this helps ^_^
|
H: Can you find a function that follows a rule while rational and is differentiable everywhere? (relates to the Pythagorean theorem)
Can you find a function that follows a rule while rational and is differentiable?
let's call this function $\alpha(x)$
$x$'s simplest form is $\frac{a}{b}$ when $x$ is a fraction.
When $a^2+b^2=c^2$ for some integer $c$, then $\alpha(x)$ is a rational number;
when $a^2+b^2\neq c^2$ for every integer $c$, then $\alpha(x)$ is an irrational number;
and when $x$ isn't a fraction, $\alpha(x)$ can be either.
$\alpha(x)$ must be differentiable everywhere
AI: $$\alpha(x)=\sqrt{x^2+1}$$
satisfies the conditions above:
when $x=\frac ab$ then $\sqrt{x^2+1}=\frac{\sqrt{a^2+b^2}}{|b|}$, and it's rational when there is an integer $c$ such that $a^2+b^2=c^2$ and irrational when no such $c$ exists (however, I'm not sure if I should prove that too).
$$\left(\sqrt{x^2+1}\right)'=\frac{x}{\sqrt{x^2+1}}$$
is defined everywhere on $\mathbb{R}$ as $x^2+1>0$.
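A small sanity check of the rationality criterion (my own sketch using exact rational arithmetic):

```python
from fractions import Fraction
from math import isqrt

# For x = a/b in lowest terms, sqrt(x^2 + 1) = sqrt(a^2 + b^2)/|b| is
# rational exactly when a^2 + b^2 is a perfect square.
def alpha_is_rational(x: Fraction) -> bool:
    a, b = x.numerator, x.denominator   # Fraction is already in lowest terms
    s = a * a + b * b
    return isqrt(s) ** 2 == s

assert alpha_is_rational(Fraction(3, 4))      # 9 + 16 = 25 = 5^2
assert alpha_is_rational(Fraction(-5, 12))    # 25 + 144 = 169 = 13^2
assert not alpha_is_rational(Fraction(1, 2))  # 1 + 4 = 5, not a square
```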
|
H: Definition of subset
I am new to set theory, and even though I grasp the concept, I am having trouble with the formal definitions, specially with the subset one. The statement $A\subseteq B$ can be written as $\forall x(x\in A\rightarrow x\in B)$. Now, let $A = \left\{1, 2\right\}$ and $B = \left\{3, 4\right\}$ and $x=5$. Then, both $x\in A$ and $x\in B$ will be false, and, therefore, make the conditional true, which would make $x$ be part of the subset $A$ of $B$, but it is not. What am I getting wrong?
AI: Making the conditional true does not make $x$ part of the subset $A$. The conditional is not defining membership in the set $A$. It is defining the condition for $A \subseteq B$. The condition says that each element $x$ must satisfy $x \in A \to x \in B$. You've verified the condition for a single element $x=5$, which is fine. But that doesn't mean that 5 is in $A$.
|
H: Why is my value for the length of daylight wrong?
I was watching a YouTube video where it showed how the length of daylight changes depending on the time of year, and I was curious and wanted to try calculating how long the daylight is at the Tropic of Cancer (23.5 degrees latitude) during the winter solstice, apparently 10 hours and 33 minutes or so according to the video. Here is the timestamp for reference.
This is my work (the yellow blobs represent 23.5 degrees and the pink blobs 43 degrees):
$\sin(66.5 \text{ degrees}) = (\text{yellow leg + orange leg}) / r$ implies $0.917060r = \text{yellow leg + orange leg}$
$\cos(66.5 \text{ degrees}) = \text{purple leg} / r$ implies $0.398749r = \text{purple leg}$
$\tan(23.5 \text{ degrees}) = \text{orange leg / purple leg}$ implies $0.434812 \cdot \text{ purple leg} = \text{orange leg}$
Subbing in the value we already got from the purple leg, we get $0.173381r = \text{orange leg}$
That means the orange leg is $0.173381r/ 0.917060r$ fraction of the yellow and orange leg, about $0.189061784$. This represents how much extra darkness there is along the line.
Since this darkness is on both sides of the globe, I multiply it by two, to get $0.37812$.
So the daylight is about $37.81$% shorter, down from $12$ hours to about $7.46$ hours. Way off compared to the video's $10$ hours $33$ minutes.
Where is my mistake?
AI: The purple line at latitude $\alpha$ is $r\sin\alpha$
Then the orange line is $r\sin\alpha\tan\alpha$
The radius of the latitude circle is $r\cos\alpha$.
Hence the orange line divided by the radius is $\tan^2\alpha $
Now if the angle between 6 o'clock and sunrise is $\beta$, we have $\sin\beta=\tan^2\alpha$ and so obtain a daytime length of
$$ \left(1-\frac{\arcsin\tan^2\alpha}{90^\circ}\right)\cdot{12\,\text{h}}=\arccos\tan^2\alpha\cdot\frac{12\,\text{h}}{90^\circ}$$
For $\alpha=23.5^\circ$, this gives me $10.55$ hours, or $10:32:49$.
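For reference, the final computation as a few lines of Python (my own transcription of the formula above):

```python
import math

# Daylight length (hours) from arccos(tan^2(alpha)) * 12h / 90°.
# (tan^2 appears because latitude and axial tilt are both 23.5° here;
# the general sunrise formula uses tan(latitude) * tan(declination).)
alpha = math.radians(23.5)
daylight = math.degrees(math.acos(math.tan(alpha) ** 2)) * 12 / 90
h = int(daylight)
m = int((daylight - h) * 60)
s = round(((daylight - h) * 60 - m) * 60)
print(f"{daylight:.4f} h = {h}:{m:02d}:{s:02d}")   # ≈ 10.5469 h = 10:32:49
```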
|
H: Prove that $\int(x^2-1)^n \,dx = \frac{x(x^2-1)^n}{2n+1} - \frac{2n}{2n+1}\int(x^2-1)^{n-1} \,dx$
I have tried to solve the problem mainly with the LS of the equation.
I can not seem to get rid of the x variable within the resultant integrand.
ex. after the first integration by parts I am left with:
$x(x^2-1)^n - 2n\int x^2(x^2-1)^{n-1} \,dx$
Thanks for all the help in advance!
AI: Hint: Just write $\int x^{2}(x^{2}-1)^{n-1}dx$ as $\int (x^{2}-1+1)(x^{2}-1)^{n-1}dx=\int (x^{2}-1)^{n}dx+\int (x^{2}-1)^{n-1}dx$ and transfer one term to the left side.
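For completeness, carrying the hint through: integration by parts gives
$$\int(x^2-1)^n \,dx = x(x^2-1)^n - 2n\int(x^2-1)^{n} \,dx - 2n\int(x^2-1)^{n-1} \,dx,$$
so moving the first integral on the right to the left side,
$$(2n+1)\int(x^2-1)^n \,dx = x(x^2-1)^n - 2n\int(x^2-1)^{n-1} \,dx,$$
and dividing by $2n+1$ yields the claimed formula.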
|
H: Distribution of the mean of Brownian motions
Consider $Z_n = \sum_{k=1}^{n} \frac{B_k}{n}$ where each $B_k \sim N(0, k)$ is the value of a Brownian motion at time $k$. I'm trying to compute the distribution of $Z_n$. Obviously $EZ_n=0$. For $\operatorname{Var} Z_n$, since $\operatorname{Cov}(B_s,B_t) = \min\{s,t\},$ I derive
\begin{align}
\operatorname{Var} Z_n &= \frac{1}{n^2}\bigg(\sum_k\operatorname{Var} B_k + 2\sum_{i<j}\operatorname{Cov}(B_i, B_j)\bigg) \\
&= \frac{1}{n^2}\bigg(\binom{n}{2} + 1(n-1) + 2(n-2) + \ldots+(n-1)\bigg) \\
&= \frac{1}{n^2}\bigg(\binom{n}{2} + \binom{n+1}{3}\bigg) \\
&= \frac{1}{2}\bigg(1-\frac{1}{n}\bigg) + \frac{n}{6}\bigg(1-\frac{1}{n^2}\bigg)
\end{align}
So $Z_n \sim N\bigg(0,\ \frac{1}{2}\bigg(1-\frac{1}{n}\bigg) + \frac{n}{6}\bigg(1-\frac{1}{n^2}\bigg)\bigg)$
There are two things that are a bit confusing: I never used the fact that $B_t - B_s \sim N(0, t-s)$ and the sum of dependent normal rvs is not always normal. I'm not sure how to use these facts though.
AI: Linear combinations of jointly normal random variables are always normal. Once you use the fact that $EB_tB_s=\min \{t,s\}$ you don't need the fact that $B_t-B_s \sim N(0,t-s)$.
|
H: Terence Tao Analysis I Proposition 4.4.5
In the book the proof for
Proposition 4.4.5: For every rational number $\epsilon > 0$, there exists a non-negative rational number $x$ such that $x^2 < 2 < (x + \epsilon)^2$
Proof:
Let $\epsilon > 0$ be rational. Suppose for the sake of contradiction that there is no non-negative rational number $x$ for which $x^2 < 2 < (x + \epsilon)^2$. This means that whenever $x$ is non-negative and $x^2 < 2$, we must also have $(x + \epsilon)^2 < 2$ (note that $(x + \epsilon)^2$ cannot equal $2$ because no such rational exists according to Proposition 4.4.4). Since $0^2 < 2$, we thus have $\epsilon^2 < 2$, which then implies $(2\epsilon)^2 < 2$, and indeed a simple induction shows that $(n\epsilon)^2 < 2$ for every natural number $n$. But by Proposition 4.4.1 we can find an integer $n$ such that $n>2/\epsilon$, which implies that $(n\epsilon)^2 > 4 > 2$, contradicting the claim that $(n\epsilon)^2 < 2$ for every natural number $n$.
My question is that:
When Tao says "Since $0^2 < 2$, we thus have $\epsilon^2 < 2$", is he saying that, because of the assumed non-existence of a non-negative $x$ satisfying the condition, $x^2 < 2$ holds when $x=0$ (as $0^2 < 2$), and so $(x + \epsilon)^2 < 2$ becomes $(0 + \epsilon)^2 < 2$, i.e. $\epsilon^2 < 2$?
How was the induction done to show that $(n\epsilon)^2 < 2$ for every natural number $n$ using the fact that $\epsilon^2 < 2$
Why did Tao use an integer $n$ such that $n>2/\epsilon$?
Proposition 4.4.1 is (Interspersing of integers by rationals). Let $x$ be a rational number. Then there exists an integer $n$ such that $n \leq x < n+1$.
AI: Yes. "Let $\epsilon > 0$ be rational. Suppose for the sake of contradiction that there is no non-negative rational number $x$ for which $x^2 < 2 < (x + \epsilon)^2$. This means that whenever $x$ is non-negative and $x^2 < 2$, we must also have $(x + \epsilon)^2 < 2$." Now take $x = 0$.
Under the above assumption, we want to prove that if $\epsilon > 0$, then $(n\epsilon)^2 < 2$ for all $n$. For $n=1$ this has been proved in 1. Now assume it is true for some $n \ge 1$, i.e. $(n\epsilon)^2 < 2$. Since $x= n\epsilon$ is a non-negative rational number such that $x^2 < 2$, we get $((n+1)\epsilon)^2 =(x + \epsilon)^2 < 2$. By the way, we could start the induction with $n=0$, which is a trivial case. Then step 1. would be obsolete.
To obtain a contradiction, we have to find $n$ such that $(n\epsilon)^2 \ge 2$. By 4.4.1 there is an integer $m$ such that $m \le 2/\epsilon < m+1$. Let $n = m+1$. Then $n\epsilon > 2$, thus $(n\epsilon)^2 > 4 > 2$.
|
H: Convergence of $\int^{\pi/2}_0 x\sqrt{\sec x}dx$
At $x=\pi/2$, $\sec x$ goes to infinity while the factor $x$ stays bounded, so $x\sqrt{\sec x}$ goes to infinity. It seems to diverge, but the solution says it converges. I don't know how to prove it. I cannot find the antiderivative of this function or suitable functions to apply the comparison test. Any help, thanks!
AI: Hint: $\frac x {\sqrt {\cos x}}=\frac x {\sqrt {\sin (\frac {\pi} 2 -x)}}$. Make the change of variable $y=\frac {\pi} 2 -x$ and use the fact that $\sin y \sim y$ for $y$ near $0$.
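To spell the comparison out (one way to finish, not necessarily the intended one): with $y=\frac{\pi}{2}-x$,
$$\int^{\pi/2}_0 x\sqrt{\sec x}\,dx=\int^{\pi/2}_0\frac{\frac{\pi}{2}-y}{\sqrt{\sin y}}\,dy\le\frac{\pi}{2}\int^{\pi/2}_0\frac{dy}{\sqrt{\sin y}},$$
and since $\sin y\ge \frac{2}{\pi}y$ on $[0,\frac{\pi}{2}]$, the last integrand is at most $\sqrt{\frac{\pi}{2y}}$, whose integral over $(0,\frac{\pi}{2}]$ converges.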
|
H: Evaluating $\int _0^{\infty }W\left(\frac{1}{x^3}\right)\:\mathrm{d}x$
How can I evaluate $\displaystyle\int _0^{\infty }W\left(\frac{1}{x^3}\right)\:\mathrm{d}x$ in an easy manner? I managed to end up with this:
$$3\int _0^{\infty }\frac{W\left(\frac{1}{x^3}\right)}{W\left(\frac{1}{x^3}\right)+1}\:\mathrm{d}x$$
But how can I continue?
$W(x)$ is the Lambert $W$ function.
AI: You can generalize this integral
$$\underbrace{\int _0^{\infty }W\left(\frac{1}{x^n}\right)\:dx}_{t=W\left(\frac{1}{x^n}\right)}$$
$$=\frac{1}{n}\int _0^{\infty }t^{1-\frac{1}{n}}e^{-\frac{t}{n}}\:dt+\frac{1}{n}\int _0^{\infty }t^{-\frac{1}{n}}e^{-\frac{t}{n}}\:dt=n^{1-\frac{1}{n}}\Gamma \left(2-\frac{1}{n}\right)+n^{-\frac{1}{n}}\Gamma \left(1-\frac{1}{n}\right)$$
$$=-n^{-\frac{1}{n}}\Gamma \left(-\frac{1}{n}\right)$$
So
$$\boxed{\int _0^{\infty }W\left(\frac{1}{x^n}\right)\:dx=-n^{-\frac{1}{n}}\Gamma \left(-\frac{1}{n}\right)}$$
So for your integral, letting $n=3$ gets
$$\int _0^{\infty }W\left(\frac{1}{x^3}\right)\:dx=-3^{-\frac{1}{3}}\Gamma \left(-\frac{1}{3}\right)=3^{\frac{2}{3}}\Gamma \left(\frac{2}{3}\right)\approx2.816678$$
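A numerical check of the boxed formula (my own sketch; the integral is split at $1$ so quad copes with the logarithmic blow-up at $0$):

```python
import numpy as np
from scipy.special import lambertw, gamma
from scipy.integrate import quad

n = 3
f = lambda x: lambertw(1 / x**n).real
# integrable logarithmic blow-up near 0, fast decay at infinity
val = quad(f, 0, 1, limit=200)[0] + quad(f, 1, np.inf, limit=200)[0]
closed = -n ** (-1 / n) * gamma(-1 / n)
print(val, closed)          # both ≈ 2.816678
assert abs(val - closed) < 1e-4
```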
|
H: Is it possible to subdivide a regular polygon of side-length $n$ into equilateral polygons of side-length $1$?
Suppose I have a regular polygon whose sides each measure $n$. I want to cut it up into smaller equilateral (but not necessarily regular) polygons whose sides each measure $1$.
Is this possible? If yes, what's a simple (easy to implement) algorithm that can generate the subdivision?
AI: You can use this strategy for regular polygons with an internal angle greater than $120^\circ$ (that is, with $7$ or more sides): use $n$ equilateral triangles of side length $1$ to cover each side of the polygon, so that the uncovered region also forms an equilateral polygon of side length $1$.
Here's an example for a regular heptagon:
The importance of the internal angle being greater than $120^\circ$ is, of course, so that the equilateral triangles don't overlap at the corners.
If your regular polygon has $3$, $4$ or $6$ sides, the situation is easy to handle. If it has $5$ sides, a similar strategy will work, so the answer to your question is: yes, it's always possible.
|
H: If $|f(x)-f(y)|\le (x-y)^2$, prove that $f$ is constant
(Baby Rudin Chapter 5 Exercise 1)
Let $f$ be defined for all real $x$, and suppose that
\begin{equation}\tag{1}
|f(x)-f(y)|\le (x-y)^2
\end{equation}
Prove that $f$ is constant.
My attempt:
Let $f$ be defined for all real-valued inputs. Let $x \in \mathbb{R}$ and $y \in \mathbb{R} \smallsetminus \{ x \}$, and suppose that (1) holds.
Then, we have:
\begin{align*}
\left| \dfrac{f(x)-f(y)}{x-y}\right| \le (x-y)
\end{align*}
As $x\to y, \lim\limits_{x \to y}\left| \dfrac{f(x)-f(y)}{x-y}\right| \le 0$. Since it cannot be that $\left|f'(y)\right| < 0$, we have that $\left|f'(y)\right| = 0 \implies f'(y) = 0$.
Can someone please read over my proof and let me know if it is correct?
AI: Your deduction that $$\frac{|f(x)-f(y)|}{|x-y|}\le x-y$$ is incorrect because it would lead to $$|f(x)-f(y)|\le |x-y|\cdot(x-y)\ne (x-y)^2$$ To make it work, you may want to deduce that $$\frac{|f(x)-f(y)|}{|x-y|}\le |x-y|$$
Your solution is otherwise correct.
|
H: Evaluating a binomial summation
I'm interested in evaluating the following summation, where the value of $n$ is known:
$$\sum_{i = 0}^{2n} \sum_{j = \max(0, i - n)}^{\min(i, n)} {i \choose j}.$$
In case you're wondering where the summation comes from, it is the answer to the following question: "How many binary strings of length $\leq 2n$ can you form with no more than $n$ ones and $n$ zeros?". The summation in $i$ fixes the length of the string, and the summation in $j$ fixes the number of ones we use.
By splitting the summation from $i = 0$ to $i = n$ and $i = n + 1$ to $i = 2n$, I am able to rewrite the sum as follows:
$$\sum_{i = 0}^{n}\sum_{j = 0}^{i} {i\choose j} + \sum_{i = n + 1}^{2n} \sum_{j = i - n}^{n} {i\choose j}.$$
Call the two summations $S_1$ and $S_2$ respectively. By the sum of binomial coefficients identity, I can evaluate $S_1$ as follows:
$$S_1 = \sum_{i = 0}^{n}\sum_{j = 0}^{i} {i\choose j} = \sum_{i = 0}^{n} 2^{i} = 2^{n + 1} - 1.$$
Now, I'm having trouble evaluating $S_2$. I've tried writing out the terms to find patterns. I've also tried using the hockey stick identity with no luck. I've also tried switching the order of summation, but this also led me nowhere.
Can someone please help me solve this problem or provide me with a hint?
When $n = 2$, the summation evaluates to $19$. When $n = 3$, the summation evaluates to $69$. When $n = 4$, my computer program gave me $251$.
I think this is OEIS A030662, which has a few closed forms, but I want to find it myself. One interesting closed form is ${2n\choose n} - 1$.
Thank you
AI: As you mentioned, the formula is ${2(n+1) \choose n+1} - 1$.
What we want to compute is
$$\sum_{i=0}^n \sum_{j=0}^n {i+j \choose i}$$
The proof is just to repeatedly use
$${n \choose k} = {n-1 \choose k} + {n-1 \choose k-1}$$
Let's expand our answer:
\begin{align*}
{2n+2 \choose n+1}
&= {2n + 1 \choose n+1} + {2n+1 \choose n}\\
&= {2n + 1 \choose n+1} + {2n \choose n} + {2n \choose n-1} \\
&= {2n + 1 \choose n+1} + {2n \choose n} + {2n-1 \choose n-1} + {2n-1 \choose n-2} \\
&= {2n + 1 \choose n+1} + {2n \choose n} + \cdots + {n+1 \choose 1} + {n+1 \choose 0} \\
&= \sum_{i=1}^{n+1} {n + i \choose i} + {n+1 \choose 0} \\
&= \sum_{i=0}^{n} {n + i + 1 \choose i + 1} + 1
\end{align*}
Now let's expand each term inside the sum:
\begin{align*}
{n + i + 1 \choose i + 1}
&= {n + i \choose i} + {n + i \choose i + 1}\\
&= {n + i \choose i} + {n + i - 1 \choose i} + {n + i - 1 \choose i + 1}\\
&= {n + i \choose i} + {n + i - 1 \choose i} + \cdots + {i + 1 \choose i} + {i + 1 \choose i + 1}\\
&= \sum_{j=1}^n {j + i \choose i} + {i+1 \choose i+1} \\
&= \sum_{j=0}^n {j + i \choose i},
\end{align*}
as required.
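A brute-force check of the closed form against the original counting problem (my own sketch, feasible only for small $n$):

```python
from math import comb
from itertools import product

def brute(n):
    """Count binary strings of length <= 2n with at most n ones and n zeros."""
    count = 0
    for length in range(2 * n + 1):
        for s in product("01", repeat=length):
            if s.count("1") <= n and s.count("0") <= n:
                count += 1
    return count

for n in range(1, 5):
    assert brute(n) == comb(2 * n + 2, n + 1) - 1  # n=2 -> 19, n=3 -> 69, n=4 -> 251
print("verified for n = 1..4")
```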
P.S.: I didn't check (too much work), but I think this proof can be generalized for an arbitrary number of symbols, with each symbol having its own maximum number of usages $c_i$. In particular, for $3$ symbols the result should be something like this:
$${c_1 + c_2 + c_3 + 3 \choose c_3 + 1} {c_1 + c_2 + 2 \choose c_2 + 1} - 1$$
|
H: Is there an "algebraic" way to construct the reals?
It's possible to construct $\mathbb{Q}$ from $\mathbb{Z}$ by constructing $\mathbb{Z}$'s field of fractions, and it's possible to construct $\mathbb{C}$ from $\mathbb{R}$ by adjoining $\sqrt{-1}$ to $\mathbb{R}$.
In both cases, the construction is done purely algebraically. I.e. we only rely on the operations of our given structure to build the new structure. But at no point do we have to rely on the order properties of $\mathbb{Z}$ or $\mathbb{R}$ to get to $\mathbb{Q}$ or $\mathbb{C}$.
Every construction of $\mathbb{R}$ that I'm familiar with ultimately comes down to endowing $\mathbb{Q}$ with its usual order, and then imposing the completeness axiom on it to recover the rest of the real numbers.
Is it possible to get to $\mathbb{R}$ from $\mathbb{Q}$ without relying on the ordering properties of $\mathbb{Q}$?
Alternatively (relatedly?): There is the notion of a greatest common divisor for an arbitrary ring. This notion doesn't rely on any ordering properties; just algebraic ones. Is it possible to recover an order relation on $\mathbb{Q}$ using the GCD relation on $\mathbb{Z}$, then to impose completeness on $\mathbb{Q}$ and obtain $\mathbb{R}$, and then subsequently re-cast completeness in some algebraic manner? Thus defining $\mathbb{R}$ in purely algebraic terms?
AI: The "order vs. algebra" issue is really a red herring here: in each of the structures $\mathbb{Z},\mathbb{Q},\mathbb{R}$, the order can in fact be recovered from the algebra alone!
In $\mathbb{R}$ we have $a\ge b$ iff there is some $c$ such that $c^2+b=a$.
In $\mathbb{Z}$ we have that $a\ge b$ iff there are $w,x,y,z$ such that $w^2+x^2+y^2+z^2+b=a$ (via Legendre).
In $\mathbb{Q}$ we first use the definability of $\mathbb{Z}$ inside $\mathbb{Q}$ (which is quite nontrivial). From that define the nonnegative integers as those which can be written as the sum of the squares of four integers, and then observe that $a\ge b$ iff for some positive integer $c$ the product $c(a-b)$ is a nonnegative integer. (Actually I'm pretty sure there's an easier way to algebraically define the ordering on $\mathbb{Q}$, but meh.)
Each of the definitions above is a definition in the sense of first-order logic; I'm ignoring the technicalities here, but the term is worth mentioning. Interestingly, $\mathbb{R}$ - despite its mathematical complexity in many senses - is actually quite simple from the logical perspective, and for instance neither $\mathbb{Z}$ nor $\mathbb{Q}$ are definable in $\mathbb{R}$. Bigger $\not=$ more structurally complicated!
The real issue is a "sets vs. objects" issue: in each of the constructions $\mathbb{Z}\leadsto\mathbb{Q}$ and $\mathbb{R}\leadsto\mathbb{C}$ we basically have that members of the new structure correspond to "simple combinations" of members of the old structure (e.g. appropriate ordered pairs perhaps modulo an appropriate equivalence relation), whereas in the construction $\mathbb{Q}\leadsto\mathbb{R}$ something weirder happens - objects in the new structure are "one type higher" than objects in the old structure. This is unavoidable on pure cardinality grounds: there are more reals than there are finite tuples of rationals. And this cardinality obstacle turns into a serious logical distinction via the downwards Lowenheim-Skolem theorem, which implies that there is no way to build $\mathbb{R}$ from $\mathbb{Q}$ via the machinery of first-order logic alone.
So there is indeed something essentially new about the construction $\mathbb{Q}\leadsto\mathbb{R}$, but it's not really about the ordering per se - it's more subtle than that. Rather, it's about the more general fact that (topological) completeness of any kind is fundamentally about sets/sequences rather than individual (or finite tuples of) elements of the structure.
|
H: How many different ways to fill a nonnegative integer matrix with fixed column and row sums
Given an $m$ by $n$ matrix, what is the general, closed-form formula to calculate how many different ways we can fill this matrix with nonnegative integers given the required sums of each row, $r_1, r_2, ..., r_m$ and of each column, $c_1, c_2, ... c_n$?
Example:
╭───┬───┬───╮
│ 1 │ 0 │ 2 │ =3
├───┼───┼───┤
│ 0 │ 2 │ 0 │ =2
└───┴───┴───┘
=1 =2 =2
is one solution to a 2-by-3 matrix where $c_1=1, c_2=2, c_3=2, r_1=3, r_2=2$
Edit:
I'm looking for some combinatorial properties I can take advantage of from this closed-form formula, similar to the multinomial theorem.
AI: If you insist that the "nonnegative numbers" you fill your matrix with are in fact nonnegative integers, this is a well studied problem. The objects you're describing often go by the name contingency tables (or frequency tables) and we term the specified row and column sums the margins of the table. Computing the number of contingency tables for given margins is an important problem in a lot of different branches of mathematics.
Specifically, we are interested in the following. Given two margin constraints, $r = (r_1,\dots,r_m)$ and $c = (c_1,\dots,c_n)~$ (on the rows and columns respectively) consider the set of all $m \times n$ tables satisfying these constraints:
$$
\Sigma_{m,n}(r,c) = \left\{ A = (a_{ij})_{i=1,j=1}^{m,n} \in \mathbb{Z}_{\ge 0}^{m \times n} : \sum_{i=1}^m a_{ij} = c_j \text{ and } \sum_{j=1}^n a_{ij} = r_i\right\}.
$$
The problem is then to find the cardinality $\left| \Sigma_{m,n}(r,c) \right|$.
You can see a brief overview of the problem (and a recursive algorithm to answer your question) in "Counting and Enumerating Frequency Tables with Given Margins" by Francesca Greselin. Therein it is mentioned that exact answers are already known in some very special cases. If we want to do the general calculation efficiently there are (randomized) approximation algorithms known when we impose additional constraints, but one should not hope to find an efficiently computed closed form expression in the general case due to complexity assumptions. I think a good bit of evidence for the difficulty of the problem is taken from the paper: given margins $r = c = (15,15,15,15,15)$ there are $1.9208 \ldots \times 10^{50}$ tables. There are also constructions relating this computation to other well known combinatorial questions for which we do not expect there to be a computationally efficient answer.
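For small margins, a row-by-row recursion is easy to implement; here is a memoized sketch of my own (the naive recursion, not Greselin's refinement of it):

```python
from functools import lru_cache

def count_tables(row_sums, col_sums):
    """Count nonnegative-integer m x n tables with the given margins."""
    if sum(row_sums) != sum(col_sums):
        return 0

    @lru_cache(maxsize=None)
    def rec(rows_left, cols):
        if not rows_left:
            return 1 if all(c == 0 for c in cols) else 0
        r, rest = rows_left[0], rows_left[1:]

        # distribute r among the columns, bounded by the remaining sums
        def fill(j, remaining, cols):
            if j == len(cols):
                return rec(rest, cols) if remaining == 0 else 0
            total = 0
            for a in range(min(remaining, cols[j]) + 1):
                total += fill(j + 1, remaining - a,
                              cols[:j] + (cols[j] - a,) + cols[j + 1:])
            return total

        return fill(0, r, cols)

    return rec(tuple(row_sums), tuple(col_sums))

print(count_tables((3, 2), (1, 2, 2)))  # -> 5; includes the example table above
```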
|
H: Laplacian of a function has the same sign of the function itself
Here is the problem:
Let $U \subset \mathbb{R}^n$ be a connected open set with regular boundary, and $f:\mathbb{R}\to\mathbb{R}$ a function such that $tf(t)\geq0$ for all $t\in\mathbb{R}$. Show that every solution $u\in C^2(\overline{U})$ of the problem:
\begin{equation}
\begin{cases}
\Delta u=f(u) \hspace{0.1in} \text{in } U \\
\frac{\partial u}{\partial \hat{n}} = 0\hspace{0.1in} \text{on } \partial U
\end{cases}
\end{equation}
is necessarily constant, where $\hat{n}$ denotes the unit normal to $\partial U$. Furthermore, state an additional condition on $f$ that guarantees that $u$ vanishes identically on $U$.
My idea was to use the Divergence Theorem, which in this case implies that:
\begin{equation}
\int_U \Delta u = \int_{\partial U} \frac{\partial u}{\partial \hat{n}} = 0
\end{equation}
Hence:
\begin{equation}
\int_U f\circ u = 0
\end{equation}
Now, the condition on $f$ just means that $f(t)$ has the same sign as $t$, and at $0$ continuity might fail. And, since $U$ is connected, if $u$ is not constant, it assumes every value between each two distinct values it assumes.
I tried to use these facts to get a contradiction with the vanishing of the last integral, but I got stuck. Any help or hints are welcome.
AI: I'll assume your $\Delta$ takes the sign convention $\operatorname{div}\circ\operatorname{grad}$ rather than $-\operatorname{div}\circ\operatorname{grad}$.
By Green's first identity/Divergence theorem/...,
$$
\int_U (\underbrace{u\Delta u}_{=uf(u)\geq 0}+\underbrace{\lvert\nabla u\rvert^2}_{\geq 0})=\int_U\nabla\cdot(u\nabla u)=\int_{\partial U}u\frac{\partial u}{\partial n}=0,
$$
so we must have $\nabla u=0$ on $U$. Can you see how to finish this and do the second part?
|
H: How to solve $\int\frac{1}{\sqrt {2x} - \sqrt {x+4}} \, \mathrm{dx} $?
$$\int\frac{1}{\sqrt {2x} - \sqrt {x+4}} \, \mathrm{dx}$$
I have tried $u$-substitution and multiplying by the conjugate and then apply $u$-substitution. For the $u$-substitution, I have set $u$ equal to each square root term, set $u$ equal to the entire denominator, and set $u$ equal to each expression in the radical.
However, all my attempts have just made the integral more complex without an obvious way to simplify. Can someone provide insight please? Thank you.
AI: Multiplying by the conjugate and applying a couple of substitutions does work.
\begin{align*}
\int\frac{1}{\sqrt {2x} - \sqrt {x+4}} \, \mathrm{d}x &=\int \underbrace{\frac{\sqrt{2x}}{x-4}}_{\sqrt{x} \to u} + \underbrace{\frac{\sqrt{x+4}}{x-4}}_{\sqrt{x+4} \to t} \; \mathrm{d}x\\
&=2\sqrt{2} \int \frac{u^2}{u^2-4} \; \mathrm{d}u+ 2\int \frac{t^2}{t^2-8} \; \mathrm{d}t \\
&=2\sqrt{2} \int \frac{u^2-4+4}{u^2-4} \; \mathrm{d}u+ 2\int \frac{t^2-8+8}{t^2-8} \; \mathrm{d}t \\
&=2\sqrt{2}u +2\sqrt{2} \ln{\bigg |\frac{u-2}{u+2}\bigg |} + 2t +2\sqrt{2}\ln{\bigg |\frac{t-2\sqrt{2}}{t+2\sqrt{2}}\bigg |}+\mathrm{C} \\
&=2\sqrt{2x} +2\sqrt{2} \ln{\bigg |\frac{\sqrt{x}-2}{\sqrt{x}+2}\bigg |} + 2\sqrt{x+4} +2\sqrt{2}\ln{\bigg |\frac{\sqrt{x+4}-2\sqrt{2}}{\sqrt{x+4}+2\sqrt{2}}\bigg |}+\mathrm{C} \\
\end{align*}
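As a sanity check (my own sketch), one can differentiate the result numerically and compare with the integrand at a few points away from the singularity at $x=4$:

```python
import math

def F(x):
    """The antiderivative found above (constant dropped)."""
    u, t = math.sqrt(x), math.sqrt(x + 4)
    r2 = math.sqrt(2)
    return (2 * math.sqrt(2 * x) + 2 * r2 * math.log(abs((u - 2) / (u + 2)))
            + 2 * t + 2 * r2 * math.log(abs((t - 2 * r2) / (t + 2 * r2))))

def f(x):
    return 1 / (math.sqrt(2 * x) - math.sqrt(x + 4))

# central finite difference F'(x) ≈ f(x)
for x in (1.0, 9.0, 25.0):
    h = 1e-6
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert math.isclose(deriv, f(x), rel_tol=1e-5)
print("antiderivative checked")
```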
|
H: Using partial information to factor $x^6+3x^5+5x^4+10x^3+13x^2+4x+1.$
I wish to find exact expressions for all roots of $p(x)=x^6+3x^5+5x^4+10x^3+13x^2+4x+1.$ By observing that for the roots $x_0 \pm iy_0, x_0 \approx -0.15883609808599033632, y_0 \approx 0.27511219196092896700,$ we have that $x_0$ is the unique real root of $r(x) = x^3+12x^2+8x+1,$ I was able to prove that all roots of the original sextic can be expressed in radicals. The process is as follows:
Divide $p(x+iy)$ by $r(x)$ to get $\frac{1}{8}x^3 + \frac{3}{16}x^2 + x\left(\frac{7}{32}-\frac{15y^2}{8}\right) + \left(\frac{95}{32}-\frac{15y^2}{16}\right) + \frac{R(x,y)}{r(x)}$ where $R(x,y) = A(y)x^2 + B(y)x + C(y)$ and $A(y) = 15y^4 - \frac{15y^2}{4} - \frac{201}{16}, B(y) = 15y^8 - 30y^6 + 12y^4 + \frac{75y^2}{8} - \frac{767}{32}, C(y) = -y^6+5y^4-\frac{193y^2}{16}-\frac{63}{32}.$
The equation $R(x_0, y_0) = 0$ is a quartic in $y_0^2,$ which we can solve exactly to obtain $y_0^2$ and hence $y_0.$
Polynomial division reduces $p(x)$ to a quartic, and now we apply the quartic formula again to find the other $4$ roots.
However, I don't want to perform the rest of the computations. Is there a cleaner way to use the observation that $r(x_0) = 0,$ perhaps in the realm of abstract algebra?
AI: The hint.
Use the following:
$$x^6+3x^5+5x^4+10x^3+13x^2+4x+1=\left(x^3+\frac{3}{2}x^2-2x-1\right)^2+\frac{3}{4}x^2(3x+4)^2.$$
Now, you can get all roots of the polynomial. Just solve two cubic equations.
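One can verify the identity (and the resulting factorization over $\mathbb{C}$ into the two cubics) symbolically; a sketch of my own:

```python
import sympy as sp

x = sp.symbols('x')
p = x**6 + 3*x**5 + 5*x**4 + 10*x**3 + 13*x**2 + 4*x + 1
A = x**3 + sp.Rational(3, 2)*x**2 - 2*x - 1
sos = A**2 + sp.Rational(3, 4) * x**2 * (3*x + 4)**2
assert sp.expand(sos - p) == 0

# The sum of squares factors over C as a product of two cubics:
c1 = A + sp.sqrt(3) / 2 * sp.I * x * (3*x + 4)
c2 = A - sp.sqrt(3) / 2 * sp.I * x * (3*x + 4)
assert sp.expand(c1 * c2 - p) == 0
```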
|
H: Does $\mathrm{SO}_n \cong T^1\mathbb{S}^{n-1}$ for all $n \in \mathbb{Z}_+$?
Let $\mathbb{S}^{n-1}$ be the $(n-1)$-dimensional sphere and let $T^1\mathbb{S}^{n-1}$ be its unit tangent bundle. I have just learnt that $\mathrm{SO}_3 \cong T^1\mathbb{S}^2$. Here $\cong$ means 'homeomorphism'. Does it hold for all $n$ ?
A well-known method to show $\mathrm{SO}_3 \cong T^1\mathbb{S}^2$ is that one take point $p \in \mathbb{S}^2$ as the first column and $q \in T\mathbb{S}^2$ as the second column. Then the third column is $p \times q$, and this proves to be a homeomorphism.
AI: The dimensions do not match; $\dim\mathrm{UT}S^{n-1} = 2n-3$ but $\dim SO(n)=\binom{n}{2}$.
For $n=4$: since $S^3$ is parallelizable we have $\mathrm{UT}S^3=S^3\times S^2$, but $SO(4)=(S^3\times S^3)/S^0$ (where we quotient the Lie group $S^3\times S^3$ by the diagonal copy of $S^0=\{\pm1\}$).
In general, the unit tangent bundle $S^{n-2}\to\mathrm{UT}S^{n-1}\to S^{n-1}$ is like a twisted version of the direct product $S^{n-2}\times S^{n-1}$ (like how a Mobius band is a twisted version of a cylinder $S^1\times I$), whereas there are bundles $SO(n-1)\to SO(n)\to S^{n-1}$ (pick a point $p\in S^{n-1}$ and apply rotations to it; the fibers are cosets of $p$'s stabilizer). This means, loosely speaking, $SO(n)$ is like a very twisted version of $S^1\times S^2\times\cdots\times S^{n-1}$. Indeed, there is a bundle map $SO(n)\to \mathrm{UT}S^{n-1}$ given by projecting to the first two columns (as you describe). The fibers are cosets of $SO(n-2)$ (which stabilizes two orthogonal vectors).
|
H: What does it mean when we say '$f$ is a function taking values on $\mathbb{R}$ $\cup$ {$-\infty, \infty$}'?
I was recently reading this post and noticed some terminology I am not familiar with. The title of the post is "Why is convex conjugate defined on functions taking values on extended real line?"
What does it mean when we say 'a function $f : X \to \mathbb{R}$ $\cup$ {$-\infty, \infty$} taking values on the extended real line'? Does saying "$f$ is a function taking values on the extended real line (or any set, in a general sense)" mean the function values are elements of $\mathbb{R}$ $\cup$ {$-\infty, \infty$}?
AI: Yes, it is just as you say. A function goes from one set to another. The text is defining what set the range is contained in, which is just as you think $\Bbb R \cup \{-\infty,\infty\}$
The motivation is that there are sets of reals that do not have a supremum in the reals. The set of natural numbers, for example, does not. If we add $\pm \infty$ to the reals, we have a compact set in which every subset has a least upper bound. For the naturals, it is $+\infty$.
|
H: Finding the equation of 4 circles given 3 tangents, one of which is oblique
The question is to find the equation of the four circles tangent to the x-axis, the y-axis and the line $x+y=2$. I have drawn out a diagram and have identified the 4 circles but I am stuck on how to find their equations.
AI: There are many ways to approach this. For example, if you know about the incircle and excircles of triangles, then just follow that construction or other known facts (such as: the point of tangency of the excircle $\Gamma_A$ is at distance $s=\frac12(a+b+c)$ from $A$, the in-/ex-centres are on the angle bisectors, the inradius is $r=\Delta/s$, the exradius is $r_A=\Delta/(s-a)$, etc.).
However, you can also do it purely algebraically:
A circle $x^2+y^2+Dx+Ey+F=0$ is tangent to $Ax+By+C=0$ if and only if solving the simultaneous equation gives repeated roots. Eliminating $x$ (assuming $A\neq 0$, otherwise change $x\leftrightarrow y$) gives
$$
(A^2+B^2) y^2 + (A^2 E-ABD+2BC) y + A^2 F - A C D + C^2=0
$$
and we want the discriminant to be zero:
$$
(A^2 E-ABD+2BC)^2-4(A^2+B^2)(A^2 F - A C D + C^2)=0.
$$
Expanding, you end up with
$$\require{cancel}
A^2 E^2 - 4 A^2 F - 2 A B D E + 4 A C D + B^2 D^2 - 4 B^2 F + 4 B C E - 4 C^2 = 0
$$
(remember we assumed $A\neq 0$). So the four circles satisfy
\begin{align*}
D^2 - 4 F&=0 &&x\text{-axis}\colon y=0\\
E^2 - 4 F&=0 &&y\text{-axis}\colon x=0\\
\cancel{E^2 - 4 F} - 2 D E - 8 D + \cancel{D^2 - 4 F} - 8 E - 16&=0 &&x+y-2=0.\\
\end{align*}
Id est, we want to solve
\begin{align*}
D^2 - 4 F&=0\\
E^2 - 4 F&=0\\
D E + 4 D + 4 E + 8&=0.\\
\end{align*}
From the first two equations, we have $D^2=E^2=4F$, so $D=\pm E$. Substituting into the third we have $D^2+8D+8=0$ (case $D=E$), or $-D^2+8=0$ (case $D=-E$). So we have $D=E=-4\pm2\sqrt2$ or $D=-E=\pm2\sqrt2$, hence $F=\frac14D^2=\dots$.
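If you want to let a CAS do the last step, here is a sketch of my own that solves the same system and reports the four circles:

```python
import sympy as sp

D, E, F = sp.symbols('D E F', real=True)
sols = sp.solve([D**2 - 4*F, E**2 - 4*F, D*E + 4*D + 4*E + 8],
                [D, E, F], dict=True)
for s in sols:
    cx, cy = -s[D] / 2, -s[E] / 2          # centre of x^2+y^2+Dx+Ey+F=0
    r = sp.sqrt(cx**2 + cy**2 - s[F])
    print(f"centre ({sp.simplify(cx)}, {sp.simplify(cy)}), "
          f"radius {sp.simplify(r)}")
```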
|
H: I don't know why this is the answer: $f\left( x\right) =\lim _{n\rightarrow +\infty }f_{n}\left( x\right) =0$
$n$ is a positive integer. The interval is $[0,3]$.
$$f_{n}\left( x\right) =\begin{cases}n^{2}x\left( 0\leq x\leq \dfrac {1}{n}\right) \\ n\left( 2-nx\right) \left( \dfrac {1}{n} <x\leq \dfrac {2}{n}\right) \\ 0\left( \dfrac {2}{n} <x\leq 3\right) \end{cases}$$
I want to calculate the following formula.
$$f\left( x\right) =\lim _{n\rightarrow +\infty }f_{n}\left( x\right)$$
I didn't know how to do this, so I researched, and found the following answer.
$$f\left( x\right) =\lim _{n\rightarrow +\infty }f_{n}\left( x\right) =0$$
I don't know why this is the answer. Please explain.
AI: This is a bit tricky at the beginning. The key idea here is that the interval $\left(\frac 2n, 3\right]$ grows monotonically as $n\to\infty$ and will eventually include all the points inside $\left(0,3\right]$. Notice how $0$ will never be included (no matter how big $n$ is).
With that in mind, if you fix an $x$, then it's clear from the function definition that, after a certain $n_0$, $f_n(x) = 0$ for each $n\ge n_0$.
In particular, this is an example of pointwise but not uniform convergence (because $\sup_{x}f_n(x) = f_n(1/n) = n$ diverges).
|
H: If $p$ is not a limit point of $E$, then it has a neighborhood with at most 1 point of $E$ — why at most 1 point?
I have a doubt about the following proof from PMA Rudin:
Suppose $E \subset K$, and let $q \in K$ be a point which is NOT a limit point of $E$. Then $q$ has a neighborhood $N$ s.t. it has at most one point of $E$ in it (namely $q$ itself, in case $q \in E$).
In other words, $(N - \{q\}) \cap E = \emptyset$
How can this be true?
I can’t help but think of the scenario where q is in E, then q is on the “border” of the set E. In that case no matter what neighborhood we construct, although it may not be completely inside E, it will intersect E non-trivially.
I can’t help but think of this in terms of 2D euclidean space where a neighborhood is a perfect circle with radius d and we are trying to shrink d as much as possible.
BTW this is coming from the proof for Theorem 2.37.
AI: By definition $q$ is a limit point of $E$ if every neighbourhood $U$ of $q$ contains $e\in E,e\neq q$.
Take the negation of the definition: $q$ is not a limit point if there exists a neighbourhood $U$ which does not contain an element $e\in E$ with $e\neq q$. Thus if $U$ contains an element of $E$ it is necessarily $q$.
https://en.wikipedia.org/wiki/Limit_point#Definition
|
H: Given $y = x^3 − 2x$ for $x \geq 0$, find the equation of the tangent line to $y$ where the absolute value of the slope is minimized.
I tried finding the derivative of this, and promptly got $y=$ about $0.816$, but I have no idea how to put that into equation form or if I'm even correct.
AI: I assume you already got $y' = 3x^2 - 2$. This is the slope of the curve $y$.
As per the question, you need to find where the absolute value of the slope is minimized, which in
this case is $0$, attained at $x = \sqrt{\frac{2}{3}}$ (where $x \ge 0$).
Substituting x in your curve, you get the equation of the tangent line as the slope is zero.
$y + \frac{4\sqrt2}{3\sqrt3} = 0$
|
H: Nonstandard models of PA
Reading The Incompleteness Phenomenon, by Goldstern and Judah, they show there are nonstandard models of PA by adding a constant greater than any natural number. They then show that any countable model consists of the standard naturals followed by a dense linear order of copies of the integers (though we can't find the origin in any of these copies). The demonstration relies on the fact that $\forall a,b [a\lt b\implies \exists c (c+c=a+b \vee c+c=a+S(b))]$ and that every number except $0$ has a predecessor. There is no mention of multiplication in the argument, so we could delete the two multiplication axioms from PA and get the same restriction on the set of models. Does multiplication not restrict the models of PA in any way?
AI: I've heavily edited my original answer. The answer below is really two separate answers, each of which appeals to a different kind of intuition. The first is just about the very broad nature of the question, the idea being that the models-of-arithmetic language just makes things feel mysterious when they really aren't; the second answer goes into more detail, but may be less comprehensible until the first is read.
An algebraic take
At its core, here's what's going on:
We have three "big classes" $X,Y,Z$ with three distinguished subclasses $A\subseteq X,B\subseteq Y,$ and $C\subseteq Z$. We also have "forgetful" maps $f:X\rightarrow Z$, $g:Y\rightarrow Z$, and $h:X\rightarrow Y$.
We're told that $f[A]=g[B]=C$, and that $h[A]\subseteq B$. However, despite this we can't figure out much about the relationship between $A$ and $B$. (Perhaps it would be more appropriate to say that we can't from this alone figure out much about the relationship between $A$ and $h^{-1}[B]$ or about the relationship between $B$ and $h[A]$.)
Specifically, $X,Y,$ and $Z$ are the classes of ordered semirings, ordered monoids, and linear orders respectively; $A,B$, and $C$ are the classes of countable models of $\mathsf{PA}$, countable models of Presburger arithmetic $\mathsf{Pres}$, and linear orders isomorphic to either $\mathbb{N}$ or $\mathbb{N}+\mathbb{Z}\cdot\mathbb{Q}$ respectively; and $f,g,$ and $h$ are the "underlying order of an ordered ring," "underlying order of an ordered monoid," and "underlying ordered monoid of an ordered ring" constructions, respectively. And indeed the relationship between $A$ and $B$ is extremely complicated, no matter how you cut it.
In particular, the answer to your question (rephrased for clarity)
Does multiplication [...] restrict the models of PA in any way?
is it most certainly does, and we can see this purely combinatorially as the fact that $A$ is "much smaller than" (and more importantly, much more complicated than) $h^{-1}[B]$.
A more logic-y flavor
We have to distinguish between a model and one of its reducts. The key passage is:
They then show that any countable model consists of the standard naturals followed by a dense linear order of copies of the integers
That's not quite true! Rather, the underlying linear order of a countable nonstandard model of $\mathsf{PA}$ is isomorphic to $\mathbb{N}+\mathbb{Z}\cdot\mathbb{Q}$. But a model of $\mathsf{PA}$ is much more than just a linear order, and we can't recover that additional structure from the order alone.
For example, let $M$ be a countable nonstandard model of $\mathsf{PA}+Con(\mathsf{PA})$ and let $N$ be a countable (necessarily nonstandard) model of $\mathsf{PA}+\neg Con(\mathsf{PA})$. The "$\{<\}$-reducts" of $M$ and $N$ are isomorphic, but $M$ and $N$ themselves aren't even elementarily equivalent.
So all that is true is that $\mathsf{PA}$ and Presburger arithmetic each have the same ordertypes of countable models. But this is a very weak fact. In particular, it doesn't rule out $(i)$ the existence of countable models of Presburger arithmetic which cannot be expanded to models of $\mathsf{PA}$ or $(ii)$ the existence of countable models of Presburger arithmetic which can be expanded to models of $\mathsf{PA}$ in multiple distinct - even non-isomorphic - ways.
We can prove that $(i)$ holds via computability theory. Presburger arithmetic has computable nonstandard models (e.g. take the $\{+\}$-reduct of the set of polynomials over $\mathbb{Q}$ with nonnegative leading coefficient and constant term in $\mathbb{Z}$). But the $\{+\}$-reduct of a $\mathsf{PA}$-model can never be computable, by Tennenbaum's theorem (note that this uses the stronger version of the theorem).
I don't immediately see how to show that $(ii)$ holds, but if memory serves it does; I'll add the argument (or counter-argument, if I'm wrong) when I find it.
|
H: Subgroup of a ring closed under multiplication?
Let $(R, +, \cdot)$ be a ring with identity $1$. Let $G \subset R$ be a group under addition. Then $G$ is a subset of $R$, so we can perform $R$'s multiplication on elements of $G$. Will multiplication in $G$ always be closed? What are some counterexamples?
If $R = \mathbb{Z}$, the only subgroups of $\mathbb{Z}$ are of the form $n \mathbb{Z}$ for nonnegative integer $n$. These are all closed under multiplication. What if we let $R = \mathbb{R}$?
EDIT: there are counterexamples when $R = \mathbb{R}$. What about when $R$ is a noncommutative ring? Also, by "closed multiplication in $G$" I mean multiplication of elements of $G \subset R$ will remain in $G$, not multiplication of elements of $G$ with elements of $R \setminus G$. I.e. $G$ should qualify as a magma with respect to $R$'s multiplication.
AI: Let $R=\mathbb{C}$ the complex numbers. Then the imaginary line $\{xi|x\in \mathbb{R}\}$ is not closed under multiplication.
Similarly let $R=\mathbb{H}$ the quaternions. This is non-commutative (to answer the OP's edit). Again the imaginary line $\{xi|x\in \mathbb{R}\}$ is not closed under multiplication.
|
H: Show how to assume any matrix is upper triangular, and the concept of a basis in matrices?
How can you show that any given matrix can be assumed to be upper triangular? And what does the concept of a basis have to do with upper triangular matrices (or matrices in general)?
AI: A matrix represents a linear map with respect to a particular basis. I.e. if you have a vector space, and you transform it to a different vector space using a linear map, there is a correspondence between bases of the domain and co-domain of the map, and matrices. In particular, any $n\times n$ matrix can be 'viewed' as a linear map from an n-dimensional vector space to itself, transforming the standard basis (by scaling, rotating or shearing them).
An endomorphism is 'triangulable' (i.e. similar to a triangular matrix, or the linear map it represents is represented by a triangular matrix) iff its characteristic polynomial can be written as a product of linear factors (not necessarily distinct). So in an algebraically closed field such as $\mathbb{C}$, every matrix is similar to a triangular matrix.
|
H: Using Correspondence Theorem for Rings
I was trying to solve a problem involving local rings and I did the following, which seems to lead to a contradiction but I cannot find where I have messed up:
For a field $\mathbb{K}$ we know that $\mathcal{M}=(x)/(x^3)$ is the unique maximal ideal in the quotient ring $\mathbb{K}[x]/(x^3)$. Let $q: \mathbb{K}[x] \rightarrow \mathbb{K}[x]/(x^3)$ be the quotient map, $\mathcal{I} = (x)$ and $\mathcal{J} = (x^3,1+x)$. Then $q(\mathcal{J}) \subseteq q(\mathcal{I})$ since $q(\mathcal{I}) =\mathcal{M}$ is a unique maximal ideal. Finally this means that $q^{-1}(q(\mathcal{J}))\subseteq q^{-1}(q(\mathcal{I})) \implies \mathcal{J} \subseteq \mathcal{I}$, which is false.
Where have I gone wrong?
AI: $q(\mathcal{J})$ is not contained in any maximal ideal. You may be confused because every proper ideal is contained in a maximal ideal. The point of course is that $q(\mathcal{J})$ is not a proper ideal.
|
H: Using ${\rm Lip}1$ to show that $C[0,1]$ is separable
I am studying for my PhD qualifying exams by going through the problems in Carothers, and I have come across this problem.
For each $n$, show that $$\{ f \in {\rm Lip}1 : \rVert f \lVert_{{\rm Lip}1} \leq n \}$$ is a compact subset of $C[0,1]$. Use this to give another proof that $C[0,1]$ is separable.
Here, ${\rm Lip}1 = \cup_{K=1}^{\infty} {\rm Lip}_K1$ where ${\rm Lip}_K1$ are the Lipschitz functions with Lipschitz constant $K$ of order 1.
I have proven these sets are compact but don't know how that helps. I know that ${\rm Lip}1$ is dense in $C[0,1]$ so I only need to show that ${\rm Lip}1$ is countable. I was thinking since ${\rm Lip}1$ is the countable union of the sets that I showed were compact, if I could show those compact sets themselves were countable I would be done, but I don't know how to do that.
AI: Any compact set in a metric space is separable. Hence ${\rm Lip}1$, being a countable union of such compact sets, is separable, and so is its closure.
If $D_n$ is a countable dense set in $\{f\in {\rm Lip}1:\|f\|_{{\rm Lip}1} \leq n\}$ then $\cup_n D_n$ is a countable dense set in $C[0,1]$.
|
H: Composition Functions (Advanced Functions)
Question: a) Given the functions $f(x) = x + 2$ and $g(x) = 3^x$, determine an equation for $(f \circ g)(x)$ and $(g \circ f)(x)$.
b) Determine all values for $x$ for which $f(g(x)) = g(f(x))$.
*For part a), I got the equation:
$(f \circ g)(x) = 3^x + 2$
and
$(g \circ f)(x) = 3^{x+2}$
But I don't know if it's right or not. I'd appreciate if anyone can tell me if I got it right or not. And if not, what I did wrong. I believe it's right because I used the simplified formula of $f(g(x))$ and $g(f(x))$. But still, I just want to make sure that I'm correct.
For part b) I don't know how to solve the question at all. Here's what I started with:
$3^x + 2 = 3^{x+2}$
But I don't know how to continue on. I don't know how to get rid of the 2. I'm trying to figure out a way where I can get all the bases to be the same so I can solve for x. But I don't think it's going to work like that and I don't know what else to do. I'd be thankful if anyone can help me out!
AI: Your answer for the first part is correct. For solving $3^{x}+2=3^{x+2}$, write this as $3^{x} (3^{2}-1)=2$ or $3^{x}=\frac 1 4$. Take logarithm to finish.
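Explicitly, $x=\log_3\frac14=-\log_3 4=-\frac{2\ln 2}{\ln 3}\approx -1.262$.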
|
H: Show that $f(x)=\frac{1}{\sqrt{x}}$ is uniformly continuous on the domain $(1,\infty)$ but not on the domain $(0,1)$.
Show that $f(x)=\frac{1}{\sqrt{x}}$ is uniformly continuous on the domain $(1,\infty)$ but not on the domain $(0,1)$.
$\def\verts#1{\left\vert#1\right\vert}$
My Attempt
First we show that $\forall\varepsilon>0,\exists\delta>0$ such that $\forall x,y\in (1,\infty)$, if $\verts{x-y}<\delta$ then $\verts{\frac{1}{\sqrt{x}}-\frac{1}{\sqrt{y}}}<\varepsilon$
Since $x,y\in(1,\infty)$, we have $0<\frac{1}{\sqrt{x}},\frac{1}{\sqrt{y}}<1$, i.e. $\verts{\frac{1}{\sqrt{x}}-\frac{1}{\sqrt{y}}}<1$. If $1<|x-y|$, let $\delta=\varepsilon$; then $\verts{\frac{1}{\sqrt{x}}-\frac{1}{\sqrt{y}}}<\varepsilon$. If $\verts{x-y}\le1$ then $\dots$ here I'm stuck on the second case.
Next is to show $\exists\varepsilon>0$,$\forall\delta>0,\exists x,y\in(0,1)$ such that $\verts{x-y}<\delta$ and $\verts{\frac{1}{\sqrt{x}}-\frac{1}{\sqrt{y}}}\ge\varepsilon$, not sure how to prove this, could someone help me.
AI: As in the hint from @BrianMoehring, if $n^2>1/\varepsilon$ then $|1/n^2-1/(n+1)^2|<1/n^2<\varepsilon$ and $|\sqrt {1/(1/n^2)}-\sqrt {1/(1/(n+1)^2)}|=1.$
If $a<b$ and $f:(a,b)\to \Bbb R$ is uniformly continuous on $(a,b)$ then $f$ is bounded on $(a,b).$
For we may take $r>0$ such that $|f(x)-f(y)|<1$ whenever $x,y\in (a,b)$ with $|x-y|<r.$ Now take some (any) $x_0\in (a,b).$ Consider some $n_0\in \Bbb N$ such that $x_0-n_0r/2\le a$ and $x_0+n_0r/2\ge b.$
Then $|f(x)|<n_0+|f(x_0)|$ for all $x\in (a,b).$
E.g. if $n_0> 2$ and $x_0+(r/2)<x\le x_0+2(r/2)<b$ then $$|f(x)|\le$$ $$\le |f(x)-f(x_0+2(r/2))|+{}$$ $${}+|f(x_0+2(r/2))-f(x_0+r/2)|+{}$$ $${}+|f(x_0+r/2)-f(x_0)|+{}$$ $${}+|f(x_0)|<$$ $$<3+|f(x_0)|\le n_0+|f(x_0)|.$$
So if $f$ is unbounded on $(a,b),$ e.g. if $f(x)=1/\sqrt x$ and $(a,b)=(0,1),$ then $f$ cannot be uniformly continuous on $(a,b).$
|
H: Function of bounded variation whose reciprocal is not of bounded variation
Problem: Find an example of a positive function $f: [0,1] \to \mathbb{R}_{>0}$ that is of bounded variation, whose reciprocal $1/f$ is integrable but not of bounded variation.
One necessary condition for $f$ is that $\inf_{x \in [0,1]} f(x)=0$, but I don't know how to proceed further.
AI: You have more or less resolved this with the observation that $\inf_{x \in [0,1]} f(x)=0$. Just take the simplest such function, e.g. $$f(x)= \begin{cases}\sqrt{x},\qquad x\neq0,\\1,\qquad x=0.\end{cases}$$
Apart from your observation, the only consideration is making sure that the reciprocal is integrable.
Note that any continuous $f$ with the property $\inf_{x \in [0,1]} f(x)=0$ will not be positive, since a continuous function on $[0,1]$ attains its infimum.
|
H: Find the number of possible passwords that can be created
Suppose that you are assigned an e-mail account and you need to create your own password. The format of a password is three digits followed by five uppercase letters, such that neither a digit nor a letter can be used repeatedly. Find the number of possible passwords that can be created.
a. $5683392000$
b. $947232000$
c. $47361600$
d. $7893600$
AI: There are 10 digits, and 26 letters.
The digits
For the first digit, there are 10 options.
For the second digit, there are 9 options, because you have used one already.
For the third digit there are 8 options for the same reason.
The uppercase letters
For the first letter, there are 26 options.
For the second, there are 25, because you have already used a letter,
...and keep on going until you reach the 5th letter.
You can use the rule of multiplication to find the answer.
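Explicitly: $$10\cdot9\cdot8\cdot26\cdot25\cdot24\cdot23\cdot22 = 720\cdot 7893600 = 5683392000,$$ which is option (a).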
Hope this helps!
Selena
|
H: Solve for $(p,q)\in\mathbb{Z}$, $\frac{p}{\sqrt{3}-1}+\frac{1}{\sqrt{3}+1}=q+3\sqrt{3}$
The question says "find integers $p$ and $q$ such that $\frac{p}{\sqrt{3}-1}+\frac{1}{\sqrt{3}+1}=q+3\sqrt{3}$.
I tried solving it but couldn't quite get the grasp of it.
It's solved. Thank you.
AI: Hint: Assuming you mean $\frac{p}{\sqrt{3}-1}+\frac{1}{\sqrt{3}+1}=q+3\sqrt{3}$ you can multiply the terms in the left hand side by $\frac{\sqrt{3}+1}{\sqrt{3}+1}$ (first term) and $\frac{\sqrt{3}-1}{\sqrt{3}-1}$ (second) and then group terms in $\sqrt{3}$. You should get $p=5$ and $q=2$.
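In detail, the rationalizations give $$\frac{p(\sqrt{3}+1)}{2}+\frac{\sqrt{3}-1}{2}=\frac{p-1}{2}+\frac{p+1}{2}\sqrt{3},$$ and comparing with $q+3\sqrt{3}$ yields $\frac{p+1}{2}=3$ and $q=\frac{p-1}{2}$.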
|
H: Find coordinates of a point Q on the graph $\sin (x) + \cos (y) = 0.5$ given that the gradient of its tangent is perpendicular to point P.
Note:
Point $P$ is on the $y$-axis and above the $x$-axis
$\frac{-\pi}{6}\le x \le\frac{7\pi}{6}$
$\frac{-2\pi}{3}\le y\le\frac{2\pi}{3}$
What I have done so far:
Solving for $P$:
$$x = 0
\\ \sin (0) + \cos (y) = 0.5
\\ 0 + \cos (y) = 0.5
\\ y= \pm\frac{\pi}{3} $$
For $P$, $y \gt 0$
$\therefore y = \frac{\pi}{3}$
Solving for $\frac{dy}{dx}$:
$$\sin(x) + \cos(y) = 0.5
\\ \cos(x) - \sin(y)\frac{dy}{dx} = 0$$
$\therefore \frac{dy}{dx} = \frac{\cos(x)}{\sin (y)}$
Derivative at $P$:
$$\frac{dy}{dx} = \frac{\cos(x)}{\sin (y)}
= \frac{\cos(0)}{\sin(\frac{\pi}{3})}
= \frac{2}{\sqrt3}$$
As for the gradient of the tangent line at $Q$ is perpendicular to that at $P$:
$\frac{dy}{dx} = \frac{-\sqrt3}{2}$
How do I solve for the coordinates of $Q$ after this?
AI: Good work. Note the point $Q$ lies on the curve as well as the tangent line. So:
$$\begin{cases}\frac{\cos x}{\sin y}=-\frac{\sqrt{3}}{2}\\ \sin x+\cos y=0.5 \end{cases} \Rightarrow$$
From the first:
$$\cos^2x=\frac34(1-\cos^2y)\Rightarrow \cos y=\pm\sqrt{1-\frac43\cos^2x}$$
Now sub it to the second:
$$1-\frac43\cos^2x=\frac14-\sin x+\sin^2x\Rightarrow \\
4\sin^2x+12\sin x-7=0 \Rightarrow \sin x=\frac12$$
(the other root, $\sin x=-\frac72$, is impossible).
Referring to the given constraints, the final answer is:
$$x=\pi-\frac{\pi}{6}=\frac{5\pi}{6},\qquad y=\frac{\pi}{2}$$
|
H: Parametric representation of the intersection of spheres
Goal:
I am trying to find the curve of intersection of two spheres.
$\begin{align*}x^2+y^2+z^2 &= 9 \\ (x-3)^2+y^2+(z-1)^2 &= 4 \end{align*}$
What I have done:
One of the ways of achieving this is to do the following.
Eliminate one variable, in this case $y$, and obtain an $xz$-relation.
Simplify down to a linear expression $6x+2z-15=0$. This is the plane that contains the circle where they intersect.
Set $x=t$ for some parameter $t$, and then find $z(t)$ and then $y(t)$.
Parameterization complete.
I get $y(t)$ being a plus/minus root since $y$ is defined implicitly, and below is the diagram showing both spheres and half of the circle of where they intersect.
Problem:
There has to be a better way that uses some kind of trigonometric parameterization in the above example. How can I do this without using spherical coordinates?
AI: Observe that:
The center of the circle lies on the line connecting the 2 centers of the sphere.
Hence, find the radius of the circle: $ r^2 + s^2 = 9, r^ 2 + (\sqrt{10}-s)^2 = 4 $.
Hence, find the center of the circle: $\frac{s}{\sqrt{10}}(3,0,1)$.
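To finish, here is one possible numerical sketch of the resulting trigonometric parametrization (the choice of in-plane basis $u,v$ and all variable names are mine, not from the answer): solving the two equations gives $s=\frac{15}{2\sqrt{10}}$, center $\left(\frac94,0,\frac34\right)$ and $r^2=\frac{27}{8}$.

```python
# Parametrize the intersection circle and check it lies on both spheres.
import numpy as np

n = np.array([3.0, 0.0, 1.0]) / np.sqrt(10)   # unit normal of the circle's plane
c = (15 / (2 * np.sqrt(10))) * n              # center of the circle, = (9/4, 0, 3/4)
r = np.sqrt(9 - 15**2 / 40)                   # radius, r^2 = 27/8
u = np.array([0.0, 1.0, 0.0])                 # unit vector orthogonal to n
v = np.cross(n, u)                            # completes the orthonormal pair in the plane

t = np.linspace(0, 2 * np.pi, 7)
pts = c + r * (np.outer(np.cos(t), u) + np.outer(np.sin(t), v))
print(np.allclose((pts**2).sum(axis=1), 9))                # on the first sphere
print(np.allclose(((pts - [3, 0, 1])**2).sum(axis=1), 4))  # on the second sphere
```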
|
H: Confused about cyclic groups
The cyclic group $\mathbb Z_6$ has a subgroup of order $3$, so I can deduce that $\mathbb Z_3$ is a subgroup of $\mathbb Z_6$. On one hand, it seems false to me because the group operations of the above groups are not the same. On the other hand, the subgroup of order $3$ is isomorphic to $\mathbb Z_3$. I am confused...
AI: No, you cannot deduce that $\Bbb Z_3$ is a subgroup of $\Bbb Z_6$. What you can deduce is that $\Bbb Z_6$ has a subgroup which is isomorphic to $\Bbb Z_3$.
|
H: if the lcm is simply the product, then the integers are pairwise prime
I am trying to prove that
let $n_1,\ldots,n_k \in \Bbb Z\setminus\{0\}$. then $\gcd(n_i,n_j)=1 \forall i\neq j$ iff $\operatorname{lcm}(n_1,\ldots,n_k)=n_1\cdots n_k$
I can prove the "$\Rightarrow$" direction by the fact that $\gcd(n_1,n_2)\operatorname{lcm}(n_1,n_2)=n_1n_2$ and by induction on $k.$
But I do not know if the converse is true or not. It is obvious when $k=2$, as $\gcd(n_1,n_2)\operatorname{lcm}(n_1,n_2)=n_1n_2$. But I got stuck at extending $k$ from $2$ to any natural number.
Any suggestion will be appreciated
AI: Suppose $g:=\gcd(n_i,n_j)>1$ for some $i\neq j$.
Note that $\frac {n_1 \cdots n_k} {g} < n_1 \cdots n_k$ is a common multiple of $n_1, \ldots ,n_k$, which implies $\text{lcm}(n_1, \ldots ,n_k)\leq\frac {n_1 \cdots n_k} {g}<n_1 \cdots n_k$
|
H: Problem related with semicircles on sides of triangle and common tangents through semicircles.
On the sides $ BC,CA,AB $ of a triangle $ABC$ semicircles $c_1,c_2,c_3$ are described externally.
If $t_1,t_2,t_3$ are the lengths of common tangents of $c_2,c_3;\;c_3,c_1$ and $c_1,c_2$ then $t_1t_2t_3$ in terms of semiperimeter and area of triangle is?
If we are able to find $t_1$ in terms of sides then $t_2 ,t_3$ will also be found similarly. We can then use relation $\Delta=\sqrt{s(s-a)(s-b)(s-c)}$.
But how to find $t_1$?
AI: From the hints, consider evaluating $t_1$: I found the right-angled triangle with legs $t_1, \frac{b-c}{2}$ and hypotenuse $\frac{a}{2}$, so by Pythagoras' theorem
$(t_1)^2=\left(\frac{a}{2}\right)^2-\left(\frac{b-c}{2}\right)^2$, i.e. $(t_1)^2=\frac{(a-b+c)(a+b-c)}{4}=(s-b)(s-c)$.
Similarly $(t_2)^2=(s-c)(s-a)$,
$(t_3)^2=(s-a)(s-b)$
so $t_1t_2t_3=(s-a)(s-b)(s-c)=\frac {\Delta^2} {s}$
|
H: Showing the equation of a circle given diameter and Euclidean geometry
If $AB$ is the diameter of a circle and $P$ another point on the circumference, Euclidean geometry tells us that angle $APB = 90˚$. Use this fact to show that the equation of a circle whose diameter has endpoints $A(x_1,y_1)$ and $B(x_2,y_2)$ is $(x-x_1)(x-x_2)+(y-y_1)(y-y_2)=0$.
I have tried doing it using the Midpoint and Radius but got stuck in the middle of algebra. Is that the correct way? How do I use the Euclidean fact?
AI: Let $P(x,y)$ be any point on the circle. It's easily shown that
$$\vec{AP}=(x-x_1,y-y_1),\vec{BP}=(x-x_2,y-y_2). $$
Since $\angle APB=\dfrac{\pi}{2}$, we know that $\vec{AP}\cdot\vec{BP}=0$, which is the same as
$$(x-x_1)(x-x_2)+(y-y_1)(y-y_2)=0. $$
Geometrically, one finds $k_{AP}\cdot k_{BP}=-1$ when $\angle APB=\dfrac{\pi}{2}$, so
$$\dfrac{y-y_1}{x-x_1}\cdot\dfrac{y-y_2}{x-x_2}=-1, $$
which can be simplified to the preceding result.
|
H: How to solve $x^{x^{x^x}} = 1/3^{\sqrt{48}}$
How to solve
$$x^{x^{x^x}} = \frac{1}{3^{\sqrt{48}}}$$
Attempt :
Let $x^{x^{\cdots}} = y$
$$\begin{align}
x^y &=y\\
y\ln(x) &= \ln(y)\\
-\ln(x) &= -\ln(y)e^{-\ln(y)}\\
-\ln(y) &= W(-\ln(x))\\
y &= e^{-W(-\ln(x))}
\end{align}$$
I'll stop this. Am I doing this right? I know the original question is not a continued power. By the way, I'm assuming $x^{x^{x^x}} = x^{x^{\cdots}}$. But is this allowed?
Some advice and help are needed.
Thanks
AI: As said in comment, in the real domain, there is no zero for the function
$$f(x)=x^{x^{x^x}}-3^{-\sqrt{48}}$$ the first derivative
$$f'(x)=x^{x^x+x^{x^x}-1} \left(x^x \log (x) (x \log (x) (\log (x)+1)+1)+1\right)$$ vanishes close to $x_*\sim 0.275$ (this is a minimum by the second derivative test) and $f(x_*) \sim 0.593$.
If the problem was
$$g(x)=x^{x^{x^x}}-3^{\sqrt{48}}$$ it would be a very different story. Plotting
$$h(x)=\log \left(\log \left(\log \left(x^{x^{x^x}}\right)\right)\right)-\log \left(\log
\left(4 \sqrt{3} \log (3)\right)\right)$$ shows almost a straight line around $x=2$.
Newton's method will work like a charm
$$\left(
\begin{array}{cc}
n & x_n \\
0 & 2.0000000 \\
1 & 1.9447990 \\
2 & 1.9466308 \\
3 & 1.9466333
\end{array}
\right)$$
Edit
Back to the original equation, there are at least two complex roots which are
$$x_\pm=-0.332844\pm 0.291254\, i$$
In comments, I have asked how I found these roots. In a preliminary step, I looked at the function
$$F(a)=\Im(f(-a(1+i)))^2+\Re(f(-a(1+i)))^2$$ and noticed that for $a \sim \frac \pi {10}$ the result was very small ($F\left(\frac{\pi }{10}\right)\sim 1.37 \times 10^{-6}$).
Now, Newton iterations
$$\left(
\begin{array}{cc}
n & x_n \\
0 & -0.31415927-0.31415927\, i \\
1 & -0.32787054-0.30029210\, i \\
2 & -0.33258317-0.29293379\, i \\
3 & -0.33287291-0.29129953\, i \\
4 & -0.33284380-0.29125412\, i \\
5 & -0.33284375-0.29125414\, i
\end{array}
\right)$$
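As a quick numerical check of the real root of $g$ (a sketch assuming the mpmath library is available; findroot uses a secant iteration by default):

```python
# Verify the real solution of x^(x^(x^x)) = 3^sqrt(48) found above.
from mpmath import mp, mpf, findroot

mp.dps = 15
g = lambda x: x**(x**(x**x)) - mpf(3)**mp.sqrt(48)
print(findroot(g, 2))  # ~1.9466333
```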
|
H: Law of large numbers applicability
Can you define a sequence $(X_n)$ so that for every such $n$ we have $X_n\sim\mathsf{Geo}\left(\frac{1}{7n}\right)$, and a constant $c\in\mathbb{R}$ such that
$$\displaystyle{\mathbb{P}\left[ \lim_{n \to \infty}\frac{X_n}{n} = c \right] = 1}$$
if so, find $c$.
I believe there is no such $c$ but I'm not sure how to prove it.
AI: The characteristic function $\phi_n$ of $X_n$ is given by $\phi_n(t)=\frac {p_n} {1-(1-p_n)e^{it}}$ where $p_n=\frac1 {7n}$ (taking the support to be $\{0,1,2,\dots\}$; the other convention only adds a factor $e^{it}$). The characteristic function of $\frac{X_n}{n}$ is $\phi_n(t/n)$, and expanding $e^{it/n}=1+\frac{it}{n}+O(n^{-2})$ shows $$\phi_n(t/n)=\frac{1/(7n)}{1-\left(1-\frac{1}{7n}\right)e^{it/n}}\xrightarrow{n\to\infty}\frac{1/7}{1/7-it}=\frac{1}{1-7it},$$ the characteristic function of an exponential distribution with mean $7$. Hence $\frac{X_n}{n}$ converges in distribution to a non-degenerate limit. Almost sure convergence to a constant $c$ would force convergence in distribution to $c$, contradicting uniqueness of the distributional limit, so no such $c$ exists.
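A simulation sketch of this limit (assuming numpy; all names here are illustrative):

```python
# Sample X_n ~ Geo(1/(7n)) for a large n; X_n / n should look Exponential
# with mean 7, so its sample mean and standard deviation are both ~7.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
samples = rng.geometric(1 / (7 * n), size=200_000) / n
print(samples.mean(), samples.std())
```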
|
H: In how many ways can a cup of frozen yogurt be dressed up?
Three of $16$ toppings can be selected for dressing up a cup of frozen yogurt. Find the number of ways a cup of frozen yogurt can be dressed up.
Select one:
a. $3360$
b. $560$
c. $48$
d. $45$
AI: Note: this is assuming that the order does matter
You have $16$ possible toppings that can be selected and you can only select $3$.
When you pick your first topping, you will have $16$ options.
As you have already chosen $1$ out of these $16$ toppings, when you pick your second topping, you will have one less option, i.e. your second topping will have $15$ options.
With this same logic, you have $14$ options for your last topping choice.
The total number of ways is given by: $16 × 15 × 14 = 3360$ options.
If order doesn't matter (i.e. combinations), then the answer is simply $16C3 = 560$ ways.
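A one-line check of both counts (assuming Python 3.8+ for math.perm):

```python
import math
print(math.perm(16, 3))  # 3360, ordered selections
print(math.comb(16, 3))  # 560, unordered selections
```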
|
H: Find the sum $\sum_{n=0}^{49} \sin((2n+1)x) $
This was an exercise in a chapter of a textbook on product to sum and sum to product trigonometric identities. The following question was asked with the given hint:
$$\sum_{n=0}^{49} \sin((2n+1)x) $$
Hint: multiply this sum by $2\sin(x)$
My attempt
$$\sum_{n=0}^{49} \sin((2n+1)x)=1/2\csc(x)\sum_{n=0}^{49} 2\sin(x)\sin((2n+1)x) $$
Using identity $2\sin(A)\sin(B)=\cos(A-B)-\cos(A+B)$
$$1/2\csc(x)\sum_{n=0}^{49} 2\sin(x)\sin((2n+1)x)=1/2\csc(x)\sum_{n=0}^{49} \cos(2nx)-\cos((2n+2)x)$$
How do I continue from here?
AI: $$S_n=\sum_{k=0}^{n} \sin ((2k+1)x)=\Im \sum_{k=0}^{n} e^{i(2k+1)x}=\Im\, e^{ix}\sum_{k=0}^{n} e^{2ikx} =\Im\, e^{ix}\, \frac{e^{2i(n+1)x}-1}{e^{2ix}-1}=\Im\, e^{i(n+1)x}\, \frac{e^{i(n+1)x}-e^{-i(n+1)x}}{e^{ix}-e^{-ix}}=\Im\, e^{i(n+1)x}\,\frac{\sin ((n+1)x)}{\sin x}$$ $$\implies S_n=\frac{\sin^2 ((n+1)x)}{\sin x}$$
So finally, $$S_{49}=\frac{\sin^2 (50 x)}{\sin x}$$
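The route started in the question leads to the same place: the sum telescopes, $$\sum_{n=0}^{49}\bigl[\cos(2nx)-\cos((2n+2)x)\bigr]=\cos 0-\cos(100x)=2\sin^2(50x),$$ and multiplying by the prefactor $\frac{1}{2}\csc(x)$ again gives $\frac{\sin^2 (50x)}{\sin x}$.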
|
H: Understanding the $\gamma$ rate of the SIR model
In the SIR model, I am trying to understand the intuition of the $\gamma$ parameter. This parameter is the recovery rate; $\gamma$ is fixed and biologically determined. Some authors consider for the USA:
$$\gamma = \frac{1}{18}$$
and then assert that $\frac{1}{\gamma} = 18$ is meant to match an average illness duration of 18 days.
I can't understand how the inverse of 18 days can represent a recovery rate.
For example, if $\gamma = 5 \hbox{ recovered}/\hbox{day}$, how can $1/\gamma = 1 \hbox{ day}/5 \hbox{ recovered}$ represent a recovery rate?
Can someone explain to me what is the intuition behind this relationship between illness days and recovery rate?
AI: The explanation lies in the exponential decay assumption of the process.
If $\gamma$ is the recovery rate, then the infection population behaves like $I'(t) = -\gamma I(t)$ (e.g. only focus on the removal/recovery rate for now). Solving the differential equation leads to $I(t) = I(0)e^{-\gamma t}$, which has a mean lifetime/removal time of $\frac{1}{\gamma}$, see this Wiki. Hence the reason why the duration is the inverse of the removal rate.
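Equivalently, an individual's time to removal $T$ is $\mathrm{Exp}(\gamma)$, whose mean is $$\mathbb{E}[T]=\int_0^\infty t\,\gamma e^{-\gamma t}\,dt=\frac{1}{\gamma},$$ so $\gamma=\frac{1}{18}$ per day corresponds to an average illness duration of $18$ days.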
|
H: Choosing pairs and singles out of $n$ students
Given $n$ different students, find the number of ways to divide them to $k$ pairs, and $n-2k$ "singles". No order in pairs/singles.
So my idea was to first choose the $n-2k$ singles, then, out of all possible pairs among the $2k$ students who are now determined, choose $k$ pairs, i.e. $\binom{n}{n-2k} \binom{\binom{2k}{2}}{k}$, but this does not seem to work. Where is my mistake? The solution my teacher gave is $\frac{n!}{(n-2k)!k!2^k}$
AI: Your $k$ pairs may overlap, so you are not dividing into $k$ pairs.
The correct way of selecting $k$ pairs is to order the $2k$ elements in one of $(2k)!$ ways, and pair off adjacent elements (first and second, third and fourth, etc.). Note that we get the same pairing by swapping the two elements within any adjacent pair in our ordering. Also note that rearranging the $k$ adjacent pairs in any of $k!$ ways also results in the same pairing. Thus we have $$\frac{(2k)!}{k!2^k}$$ ways of pairing up the $2k$ elements. Here by adjacent we mean an odd-positioned element and its successor.
Multiplying by the factor you already calculated, for the number of ways of choosing the singles, results in your teachers answer.
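As a sanity check, take $n=4$, $k=1$: the formula gives $\frac{4!}{2!\,1!\,2^1}=6$, matching the $\binom{4}{2}=6$ direct choices of the single pair.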
|
H: Show that the derived set of $A$ in a subspace $(Y, \mathcal{O_Y})$ is equal to $A^d \cap Y$.
I am reading "Set Theory and General Topology" by Fuichi Uchida.
There is the following problem in this book:
Let $(X, \mathcal{O})$ be a topological space.
Let $(Y, \mathcal{O_Y})$ be a subspace of $(X, \mathcal{O})$.
Let $A \subset Y$.
Show that the derived set of $A$ in a subspace $(Y, \mathcal{O_Y})$ is equal to $A^d \cap Y$, where $A^d$ is the derived set of $A$ in $(X, \mathcal{O})$.
My answer is the following:
The derived set of $A$ in a subspace $(Y, \mathcal{O_Y})$ is equal to $\{x \in Y \mid x \in \overline{A - \{x\}}\}$.
$A^d = \{x \in X \mid x \in \overline{A - \{x\}}\}$.
So, it is obvious that the derived set of $A$ in a subspace $(Y, \mathcal{O_Y})$ is equal to $A^d \cap Y$.
The author's answer is the following:
Let $O \subset X$ and $y \in Y$.
Then $Y \cap O \cap (A-\{y\}) = O \cap (A-\{y\})$ holds.
For any open set $O$ which contains $y$, if $Y \cap O \cap (A-\{y\}) \neq \emptyset$, then $y$ is an accumulation point of $A$ in $(Y, \mathcal{O_Y})$.
For any open set $O$ which contains $y$, if $O \cap (A-\{y\}) \neq \emptyset$, then $y$ is an accumulation point of $A$ in $(X, \mathcal{O})$.
So, the derived set of $A$ in a subspace $(Y, \mathcal{O_Y})$ is equal to $A^d \cap Y$.
Is my answer correct or not?
I think the author's answer is not simple.
Why did the author write the above answer?
AI: Your answer is not correct because you're using the closure in $X$ and not in $Y$. What you have to show is that the closure of $A$ in $Y$ (as a subset of $Y$), which I will denote by $cl_Y(A)$, is equal to $cl_X(A) \cap Y$, where $cl_X(A)$ is the closure of $A$ in $X$. When you write something like "$x \in \overline{A - \{x\}}$" it is not clear which closure you are referring to.
|
H: Does the center of a perfect group not contain all elements of prime order?
Let $G$ be a finite perfect group (i.e. $G=G'$) and $Z(G)$ be its center.
I don't know whether this statement is correct:
There exists an element $x$ of prime order such that $x\notin Z(G)$.
A quick check on CFSG gives that this holds for every (quasi-)simple group. But what if $G$ is a general finite perfect group? Or is there any further descriptions on the center of perfect groups?
Another description on this question is (also I don't know if this holds):
Let $H$ be a center-less (insoluble) group (i.e. $Z(H)=1$). Then there always exists a prime divisor $p$ of $|H|$ such that the $p$-part of the Schur multiplicator of $H$ is trivial.
Is there any result on both?
AI: Short version: if $p$ is odd and all elements of order $p$ are central in $G$, then $G$ has a normal $p$-complement, i.e., a normal $p'$-subgroup $K$ such that $|G:K|$ is a power of $p$. This follows from Theorem 5.3.10 from Gorenstein, which states that if $p$ is odd and a $p'$-automorphism of a $p$-group $P$ acts trivially on $\Omega_1(P)$ then it is the identity.
Thus, if $G$ has this property for any odd prime then $G$ is not perfect, because it has a $p$-quotient.
Original post follows, which says things about $p=2$, and I leave here for posterity, and for noting that I completely forgot about that theorem from Gorenstein's book.
I originally thought you meant every prime dividing $|G|$. This question is a lot harder than I first thought.
Notice that the property that all elements of prime order being central is inherited by subgroups. In particular, if $H$ is a normal subgroup of $G$ then the soluble residual of $H$, i.e., the last term in the derived series for $H$, satisfies your conditions.
Let's start with $p=2$, and let $G$ be a counterexample to your claim. Bob Griess proved in the 1978 paper Finite groups whose involutions lie in the center that if all involutions of $G$ lie in the centre of $G$ and $O_{2'}(G)=1$ then the soluble residual of $G$ is a direct product of $\mathrm{SL}_2(q)$s, or a central extension of $A_7$.
Let $H$ be the normal subgroup $O_{2'}(G)X$, where $X$ is one of the direct factors, and let $H_1=H^{(\infty)}$ be its soluble residual. Then $H_1$ is also a
counterexample, and is non-trivial as it has a simple composition factor. Thus $H_1=G$, and we may assume that $X=G/O_{2'}(G)=\mathrm{SL}_2(q)$ or $A_7$.
Now I have to leave, but my current plan is to choose a prime $p$ such that the Sylow $p$-subgroup of $X$ is cyclic, quotient out by $O_{p'}(G)$, and show that the Sylow $p$-subgroup doesn't have the property. I will return! Unless someone else solves it first.
EDIT!!! I should have read Bob's paper more. Remark: if $p$ is an odd prime and $G$ has this property for elements of order $p$ then $G$ is $p$-nilpotent.
|
H: Prove that $\frac{1}{a_1 + 1} + \frac{1}{a_2 + 1} + \dots + \frac{1}{a_n + 1} < 2$ for all $n \ge 1.$
The sequence $a_n$ is defined by $a_1 = \frac{1}{2}$ and $a_n = a_{n - 1}^2 + a_{n - 1}$ for $n \ge 2.$
Prove that $\frac{1}{a_1 + 1} + \frac{1}{a_2 + 1} + \dots + \frac{1}{a_n + 1} < 2$ for all $n \ge 1.$
AI: You have
$$\frac{a_k}{a_{k-1}}=a_{k-1}+1$$ and
$$a_k - a_{k-1}=a_{k-1}^2$$
Hence,
$$\sum_{k=1}^n\frac 1{a_k+1}= \sum_{k=1}^n\frac{a_{k}}{a_{k+1}}= \sum_{k=1}^n\frac{a_{k}^2}{a_{k}a_{k+1}}$$
$$= \sum_{k=1}^n\frac{a_{k+1}-a_k}{a_{k}a_{k+1}}=\sum_{k=1}^n\left(\frac{1}{a_{k}} - \frac 1{a_{k+1}}\right)$$
I leave the last step up to you.
|
H: Expressing $\cos(\varphi)$ using $ z=e^{i \varphi} $
I need to express $ \cos(\varphi) $ with $z = e^{i \varphi}$ in order to use the Cauchy integral formula on the following integral:
$ \int^{2 \pi}_0 \frac{1}{3+2\cos(\varphi)} \,d \varphi $
I got:
$ \int_0^{2\pi} \frac{e^{-i \varphi}}{3+2\cos(\varphi)}\, e^{i \varphi}\, d \varphi = -i \int_{|z|=1} \frac{z^{-1}}{3+2\cos(\varphi)}\, dz $
But I don't know what to do with the cos.
AI: $\cos \phi =\frac {z+\overline z} 2=\frac {z+\frac 1z} 2$ when $|z|=1$.
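A sketch of how this finishes the computation: with $d\varphi=\frac{dz}{iz}$,
$$\int_0^{2\pi}\frac{d\varphi}{3+2\cos\varphi}=\oint_{|z|=1}\frac{1}{3+z+\frac1z}\,\frac{dz}{iz}=\frac1i\oint_{|z|=1}\frac{dz}{z^2+3z+1}.$$ The only pole inside the unit circle is $z_0=\frac{-3+\sqrt5}{2}$, with residue $\frac{1}{2z_0+3}=\frac{1}{\sqrt5}$, so the integral equals $\frac{2\pi i}{i\sqrt5}=\frac{2\pi}{\sqrt5}$.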
|
H: If $\sum_{r=0}^{n-1}\log _2\left(\frac{r+2}{r+1}\right)= \prod_{r = 10}^{99}\log _r(r+1)$, then find $n$.
If \begin{align}\sum_{r=0}^{n-1}\log _2\left(\frac{r+2}{r+1}\right) = \prod_{r = 10}^{99}\log _r(r+1).\end{align}
then find $n$.
I found this question in my 12th grade textbook and I just can't wrap my head around it. I tried splitting the logarithms by the division rule, which didn't work so I'm out of ideas.
AI: It is:
$$\sum_{r=0}^{n-1}\log _2\left(\frac{r+2}{r+1}\right) = \log_2 \frac21+\log_2\frac32+\cdots+\log_2\frac{n+1}{n}=\log_2\left(\frac21\cdot\frac32\cdots\frac{n+1}{n}\right)=\log_2(n+1)$$
$$\prod_{r = 10}^{99}\log _r(r+1)=\log_{10}11\cdot\log_{11}{12}\cdots\log_{99}{100}=\frac{\log_211}{\log_210}\cdot\frac{\log_212}{\log_211}\cdots\frac{\log_2100}{\log_299}=\frac{\log_2100}{\log_210}=2$$
Can you find $n$?
Wolfram answer.
|
H: Convex set contains line segment between that connects its interior and closure is contained in its interior
In this question, one of the answer (by Dimitris) uses the following lemma.
Lemma. Let $A\subset \mathbb{R}^N$ be a convex set. Suppose $\text{Int}A \ne \emptyset$. If $x \in \text{Int} A$ and $y \in \text{Cl}A$, then $[x, y) \subset \text{Int}A$, where
$$
[x,y) := \{y+\lambda(x-y)\mid\lambda\in(0,1]\}.
$$
(I changed the statement in the case of $\mathbb{R}^N$ since it suffices for my purpose.)
I am having trouble in proving this lemma. Here is my attempt.
Pick any $x \in \text{Int} A$ and $y \in \text{Cl}A$. If $y \in \text{Int}A$, then the result follows because $A$ is convex and thus $\text{Int}A$ is as well. Suppose $y \in \text{Bdry}A:=\text{Cl}A \setminus \text{Int}A$. Pick any $z \in [x,y)$. Then, there exists $\lambda\in(0,1]$ such that $z = y + \lambda(x-y)$. If $z \notin \text{Int}A$, then $z\in \text{Bdry}A$ since $z \in \text{Cl}A$ by the convexity of $A$ and $\text{Cl} A$.
Although it is intuitively clear that $z,y\in\text{Bdry}A$ is a contradiction, how can I prove it?
AI: Case 1: assume also $y\in A$.
Since $x \in \text{Int} A$, there exists $\rho>0$ such that $B_\rho(x)\subset \text{Int} A$.
Let $z = (1-\lambda) y + \lambda x$, $\lambda\in (0,1)$.
Then $B_{\lambda\rho}(z) = z + B_{\lambda\rho}(0) = (1-\lambda)y + \lambda B_\rho(x) \subset A$, so that $z$ is an interior point of $A$.
Case 2: assume $y\in \text{Cl} A$.
The ball $B_{\rho\lambda/(1-\lambda)}(y)$ contains a point $a\in A$; in particular, $a = \frac{\lambda}{1-\lambda} u + y$, for some $u\in B_\rho$.
Hence, $z = (1-\lambda) a + \lambda(x-u)\in A$. By case 1, we conclude that $z\in \text{Int} A$.
|
H: How to evaluate $\int x^5 (1+x^2)^{\frac{2}{3}}\ dx$?
I am trying to evaluate $\int x^5 (1+x^2)^{\frac{2}{3}} dx$
This is apparently a binomial integral of the form $\int x^m (a+bx^k)^ndx$. Therefore, we can use Euler's substitutions in order to evaluate it. Since $\dfrac{m+1}{k} = \dfrac{5+1}{2} = 3 \in \mathbb{Z}$ we will use the substitution: $$ a+bx^k = u^{\frac{1}{n}}$$
Therefore,
$$ u^3 = 1 + x^2 \iff x = \sqrt{u^3 +1} \text{ (Mistake here; check the comments.) } $$ $$\iff dx = \frac{3u^2}{2\sqrt{u^3+1}}\,du$$
So the new integral is,
$$ \int x^5 (1+x^2)^{\frac{2}{3}} dx = \frac{3}{2} \int u^4 (u^3+1)^{\frac{7}{6}} du$$
Instead of simplifying the integral, the substitution did nothing by keeping it at the same form, with different values on the variables $m,k,n$.
I tried to substitute once again and it doesn't seem to lead in any known paths, anytime soon.
Any ideas on how this could be evaluated?
AI: HINT:
Let $1+x^2=t^3\implies 2xdx=3t^2dt$ or $xdx=\frac{3}{2}t^2dt$
$$\int x^5(1+x^2)^{2/3}dx=\int (x^2)^2(1+x^2)^{2/3}xdx$$
$$=\int (t^3-1)^2(t^3)^{2/3}\ \frac{3t^2}{2}dt$$
$$=\frac32\int(t^3-1)^2t^4 dt $$
$$=\frac{3}{2}\int (t^{10}-2t^7+t^4)dt$$
|
H: Find the values of α and β for which this series converges.
The given series is,
$$\sum\limits_{n\geq 1}(\sqrt{n+1}-\sqrt{n})^{\alpha}(\ln(1+1/n))^{\beta}$$
With $\alpha, \beta \in \mathbb{R}$.
I don't know how to begin, noreven which criterion use to find the values of $\alpha$ and $\beta$ for which this series converges.
AI: If you multiply the first term by $\frac{\sqrt{n+1}+\sqrt{n}}{\sqrt{n+1}+\sqrt{n}}$ and use the Maclaurin series expansion for the second term, you can compare the sum to
$$
\frac{1}{2^{\alpha}}\sum_{n=1}^{\infty}\frac{1}{n^{\frac{\alpha}{2}+\beta}}
$$
which converges for $\frac{\alpha}{2}{+\beta}>1$
|
H: Proving a graph doesn't have a perfect matching
Consider the following graph:
Find a perfect matching or prove one doesn't exist.
I don't think a perfect matching exists here, as the vertices $a_2, a_3$ and $a_4$ are problematic to us, but I'm having some trouble proving this. Using Hall's theorem, we can prove that a matching of a certain cardinality doesn't exist, but how am I supposed to know the cardinality of the perfect matching in order to prove my claim? Can someone give me a hint how to apply the theorem here?
EDIT: Can I assume that the cardinality of the perfect matching $|M| = 2$, as the smallest vertex cover is {$a_5, a_4$}, and then find two vertices that break Hall's condition?
AI: Suppose a perfect matching $M$ exists. Note that $b_2$ has degree $2$, so either $\{b_2,a_2\}\in M$ or $\{b_2,a_5\}\in M$.
Case I. $\{b_2,a_2\}\in M$. Then, $\{b_3,a_5\}\in M$. Hence, $\{b_5,a_6\}\in M$. Now, $b_6$ cannot be paired.
Case II. $\{b_2,a_5\}\in M$. Hence, $\{b_5,a_6\}\in M$. Now, $b_6$ cannot be paired.
|
H: Given $dy/dx=-x/y$, how can I solve $d^2 y / dx^2$
The part that confuses me is the square being next to the $d$ vs. being next to the variable. My intuition tells me $d^2x$ should be equivalent to $(dx)^2$ and $dx^2$ should be $d(x^2)$ (if that notation is valid).
AI: We have $$\frac{dy}{dx} = -\frac xy \\ \frac{d^2y}{dx^2} = \frac{y\cdot(-1)-(-x)\frac{dy}{dx}}{y^2}=\frac{-y+x\cdot -\frac xy}{y^2} =\frac{-y^2-x^2}{y^3}$$ Note that $d^2x \ne (dx)^2 \ne d(x^2) $, it is just a fancy way to say that we are taking the derivative of the derivative.
|
H: How do I find the indefinite integral for $\int \frac{6}{2x-x^2}dx$?
How can I integrate this $\int\frac{6}{2x-x^2}dx$ ?
I know I have to integrate it using logarithms, I just don't know how to do this one.
Can someone help me out?
AI: HINT:
Apply partial fraction decomposition method.
$$\int\frac{6}{2x-x^2}dx=3\int\left[\frac{1}{x}+\frac{1}{(2-x)}\right]dx$$
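Each term then integrates to a logarithm:
$$3\int\left[\frac{1}{x}+\frac{1}{2-x}\right]dx=3\ln|x|-3\ln|2-x|+C=3\ln\left|\frac{x}{2-x}\right|+C.$$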
|
H: Is $\limsup\sqrt[n]{|a_{n+1}|}=\limsup\sqrt[n]{|a_n|}$?
Let $\{a_n\}_{n=1}^\infty$ be an arbitrary sequence of complex numbers. Does the equality
\begin{equation}
\limsup\sqrt[n]{|a_{n+1}|}=\limsup\sqrt[n]{|a_n|}
\end{equation}
hold?
I'm sure that this is the case, and it may seem a silly question, but it has been bothering me for hours. I have tried using the fact that $$\sqrt[n]{|a_{n+1}|}=\left(\sqrt[n+1]{|a_{n+1}|}\right)^{\frac{n+1}{n}},$$ and other properties of fractional powers such as $\sqrt[n]{a}<a$ if and only if $a>1$. My main idea was trying to use the squeeze theorem, but I'm stuck.
AI: For each $r \in \mathbb{Z}$, define
$$\alpha(r) = \limsup_{n\to\infty} |a_{n+r}|^{1/n} \in [0, \infty].$$
Let $r, s \in \mathbb{Z}$ be arbitrary. Then we can find a subsequence $(n(k))_{k\geq 1}$ such that $|a_{n(k)+r}|^{1/n(k)}$ tends to $\alpha(r)$. Write $n'(k) = n(k)+r-s$.
Suppose that $\alpha(r) < \infty$. Then
$$|a_{n'(k)+s}|^{1/n'(k)} = \Bigl( |a_{n(k)+r}|^{1/n(k)} \Bigr)^{n(k)/(n(k)+r-s)} \xrightarrow{k\to\infty} \alpha(r)^{1} = \alpha(r), $$
and so, $\alpha(r)$ is a limit point of $|a_{n+s}|^{1/n}$. This proves $\alpha(r) \leq \alpha(s)$.
Suppose that $\alpha(r) = \infty$. Then we have $|a_{n(k)+r}|^{1/n(k)} \geq 1$ and $\frac{n(k)}{n(k)+r-s} \geq \frac{1}{2}$ for any sufficiently large $k$. So, as $k\to\infty$,
$$ |a_{n'(k)+s}|^{1/n'(k)} = \Bigl( |a_{n(k)+r}|^{1/n(k)} \Bigr)^{n(k)/(n(k)+r-s)} \geq \Bigl( |a_{n(k)+r}|^{1/n(k)} \Bigr)^{1/2} \xrightarrow{k\to\infty} +\infty. $$
This shows that $\alpha(s) = \infty$ as well.
Combining altogether, $\alpha(r) \leq \alpha(s)$ holds unconditionally. However, since $r$ and $s$ are arbitrary, this implies that $\alpha$ is constant. Therefore the desired claim is proved.
|
H: why $\int_{0}^ {x} = \int_{\frac{-1}{2}}^{y}$ and $\int_{0}^{x} = \int_{y}^{1} ?$
I have some confusion in integration. My confusion is marked in the red and green circles as given below.
I'm not getting why $$\int_{0}^ {x} = \int_{\frac{-1}{2}}^{y}$$ and $$\int_{0}^{x} = \int_{y}^{1} ?$$
I'm not getting how it's derived.
AI: Hint: Try to plot these areas and look firstly from $Ox$, then from $Oy$ axes.
More formally: we should prove
$$\left\{ \begin{array}{}
0 \leqslant x \leqslant 1 \\
0 \leqslant y \leqslant x
\end{array} \right\} = \left\{ \begin{array}{}
0 \leqslant y \leqslant 1 \\
y \leqslant x \leqslant 1
\end{array} \right\}
$$
and then obtain
$$\int\limits_{0}^{1}\int\limits_{0}^{x} dy\,dx= \int\limits_{0}^{1}\int\limits_{y}^{1}dx\,dy$$
Same for second.
|
H: Open subgroup of ring of ring of integers
I am trying to understand following lemma from Milne's Class Field Theory: https://www.jmilne.org/math/CourseNotes/CFT.pdf#X.3.2.3 (the link will take you directly to the said lemma)
Let $L$ be a finite Galois extension of $K$ with Galois group $G$. Then there
exists an open subgroup $V$ of $\mathcal O _L$ stable under $G$ such that $H^r(G,V) =0$ for all $r > 0$.
In the proof given, we obtain, using the Normal basis theorem, basis $\{x_{\tau}\in \mathcal O _L |\tau \in G\}$ for $L/K$. We define $$V=\sum_{\tau \in G } \mathcal O _K x_{\tau} $$
I want to see that this subgroup $V$ is open in $\mathcal O _L $. Here is my attempt:
Each $ x_\tau=u_\tau \pi_L^{i_\tau}$ where $u_\tau$ is a unit in $\mathcal O_L$ and $\pi _L $ its uniformizer. Define $i=\text{max}\{i_\tau\ | \ \tau \in G \} $. Then for some fixed $0<c <1 $, consider $U=B(0,c^{i}) $.
What I would like to claim is that $U \subset V $, as if $0$ has open nbhd contained in $V$ then any point has, and so $V$ is open. But what I observe is that, in fact, we can fit $V$ inside ball of any radius by multipying the $x_\tau $ by $\pi _K $. I wonder if this is somehow useful?
Feel free to give any reference. Any help is appreciated.
Update: This question was asked here: Open subgroup of $ \mathcal{O}_L $
But I don't understand why $\mathcal O _K$ is open in $L$.
AI: Note that the $x_\tau$ form a $K$-basis of $L$. For each $a\in L$ there are unique
$b_\tau$ in $K$ with $a=\sum_\tau b_\tau x_\tau$. Each $K$-linear map $\phi_\tau:a\mapsto b_\tau$
is continuous.
Now $\mathcal{O}_L$ is compact, so each $\phi_\tau(\mathcal{O}_L)$ is bounded.
There are only finitely many $\tau$, so there is an integer $M\ge0$ such that $\phi_\tau(\mathcal{O}_L)
\subseteq\pi^{-M}\mathcal{O}_K$ where $\pi$ is a uniformiser in $K$.
Let $U=\pi^M\mathcal{O}_L$. Then $U$ is an open subgroup of $\mathcal{O}_L$.
Also each element of $U$ equals $\pi^M a=\sum_{\tau}\pi^M\phi_\tau(a)x_\tau\in V$ where $a\in\mathcal{O}_L$, $\pi^M\phi_\tau(a)\in\mathcal{O}_K$, and $V$ is your $V$. So $U\subseteq V$. As $U$ is a subgroup of
$V$, and $U$ is open, then so is $V$ (it's a union of cosets of $U$).
|
H: Relation between measure and outer measure
Let $(S, \Sigma, \mu)$ be a measure space and $\mu^*$ be an outer measure on $P(S)$ defined by $$\mu^*(A) = \inf \{\sum_{i = 0}^\infty \mu(E_i) \ | \ (E_i)_{i = 0}^\infty \textrm{ is a measurable cover of }A \}$$
I want to prove that for all $A \subseteq S$, there is a measurable set $E$ such that $A \subseteq E$ and $\mu^*(A) = \mu(E)$. As hint I have to prove first that $\mu^*(A) = \inf \{ \mu(E) \ | \ E \in \Sigma, A \subseteq E\}$.
For any measurable cover $(E_i)$ of $A$, we have that $\bigcup E_i$ is measurable and contains $A$, and furthermore $\mu(\bigcup E_i) \leq \sum \mu(E_i)$, so indeed $\inf \{ \mu(E) \ | \ E \in \Sigma, A \subseteq E\} \leq \mu^*(A)$. For the other direction it suffices to show that $\mu^*(A)$ is a lower bound for $\{\mu(E) \ | \ E \in \Sigma, A \subseteq E\}$, but I am stuck on proving that.
Assuming the statement from the hint, we just need to show that $\{\mu(E) \ | \ E \in \Sigma, A \subseteq E\}$ contains its infimum. We can either explicitely construct an $E$ such that $\mu(E)$ is its minimum, or think of some topology on $\Sigma$ that makes $\{E \in \Sigma \ | \ A \subseteq E\}$ compact and $\mu$ continuous, but I don't get further than that either.
AI: $\mu^{*}$ is monotone: $\mu^{*}(E) \leq \mu^{*}(F)$ if $E \subseteq F$, and $\mu^{*}(E)\le\mu(E)$ for $E\in\Sigma$ (cover $E$ by itself). Hence $\mu^{*}(A)$ is a lower bound for $\{\mu(E): E \in \Sigma, A \subseteq E\}$.
There exists a sequence $(E_n)$ in $\Sigma$ such that $A \subseteq E_n$ for all $n$ and $\mu(E_n) \to \mu^{*}(A)$. Verify that $\mu(E)=\mu^{*}(A)$ where $E=\cap_n E_n$.
|
H: If $T:(\mathbb{R}^2,\|\cdot\|_p) \to (\mathbb{R}^2,\|\cdot\|_q)$ is an onto linear isometry, then must it be $p=q$?
Question: Let $p,q\in [1,\infty)$ and suppose that that $T:(\mathbb{R}^2,\|\cdot\|_p) \to (\mathbb{R}^2,\|\cdot\|_q)$ is an onto linear isometry. Must it be $p=q$?
I think it is true as isometry preserves extreme points.
However, it would be good if there is an elementary arguments.
AI: At first I thought the answer was clearly yes; now I'm not sure. If we allow $q=\infty$ the answer is no: the unit balls of $L^1$ and $L^\infty$ are both squares, so you can transform one to the other. Sure enough, $T(x,y)=(x+y,x-y)$, $p=1$, $q=\infty$ is a counterexample.
Lemma. If $x,y\in\Bbb R$ then $\max(|x+y|,|x-y|)=|x|+|y|$.
Proof: If you don't see anything more elegant just consider the four cases determined by the sign of $x$ and $y$.
|
H: Integrating a function where the denominator is the square root of a second order polynomial
Below is a problem I did. The answer I got differed from the book by a factor of $2$. An online calculator get the book's answer. I would like to know where I went wrong.
Problem:
Evaluate the following integral:
$$ \int \frac{dx}{ \sqrt{9x^2 - 6x + 5} } $$
Answer:
Let $I$ be the integral we are trying to evaluate.
\begin{align*}
I &= \int \frac{dx } { 3 \sqrt{x^2 - \left( \frac{2}{3} \right) x + \frac{5}{9} } } \\
3I &= \int \frac{dx } { \sqrt{ \left( x - \frac{1}{3} \right)^2 + \left( \frac{2}{3} \right) ^ 2 } } \\
u &= x - \frac{1}{3} \\
du &= dx \\
3I &= \int \frac{du } { \sqrt{ u^2 + \left( \frac{2}{3} \right) ^ 2 } } \\
u &= \left( \frac{2}{3} \right) \tan \left( \theta \right) \\
du &= \left( \frac{2}{3} \right) \sec^2 \left( \theta \right) \,\, d\theta \\
%
3I &= \int \frac{ \left( \frac{2}{3} \right) \sec^2 \left( \theta \right) \,\, d\theta }
{ \sqrt{ \left( \frac{4}{9} \right)\tan^2 \left( \theta \right) + \left( \frac{2}{3} \right) ^ 2 } } \\
3I &= \int \frac{ \left( \frac{2}{3} \right) \sec^2 \left( \theta \right) \,\, d\theta }
{ \left( \frac{2}{3} \right)\sqrt{ \tan ^2 \left( \theta \right) + 1 } } \\
3I &= \int \frac{ \sec^2 \theta \,\, d\theta } { \sec \theta } \\
3I &= \int \sec \theta \,\, d\theta
\end{align*}
\begin{align*}
3I &= \ln{| \sec \theta + \tan \theta |} + C_1 \\
3I &= \ln{| \sqrt{ \tan^2 \theta + 1 } + \tan \theta |} + C_1 \\
3I &= \ln{\Bigg| \sqrt{ \left( \frac{9}{4} \right) u^2 + 1 } + \left( \frac{3}{2}\right) u \Bigg| } + C_1 \\
3I &= \ln{\Bigg| \sqrt{ \left( \frac{9}{4} \right) \left( x - \frac{1}{3} \right) ^2 + 1 } +
\left( \frac{3}{2}\right) \left( x - \frac{1}{3} \right) \Bigg| } + C_1 \\
I &= \left( \frac{1}{3}\right) \ln{\Bigg| \sqrt{ \left( \frac{9}{4} \right) \left( x - \frac{1}{3} \right) ^2 + 1 } +
\left( \frac{3}{2}\right)x - \left( \frac{1}{2} \right) \Bigg| } + C \\
%
I &=
\left( \frac{1}{3}\right) \ln{\Bigg| \sqrt{ \left( \frac{9}{4} \right)
\left( x^2 - \left( \frac{2}{3} \right) x + \frac{1}{9} \right) + 1 } +
\left( \frac{3}{2}\right)x - \left( \frac{1}{2} \right) \Bigg| } + C \\
%
I &=
\left( \frac{1}{3}\right) \ln{\Bigg| \sqrt{ \left( \frac{9}{4}\right) x^2 - \left( \frac{3}{2} \right) x + \frac{5}{4} } +
\left( \frac{3}{2}\right)x - \left( \frac{1}{2} \right) \Bigg| } + C \\
I &= \left( \frac{1}{6}\right) \ln{\Bigg| \sqrt{ 9x^2 - 6x + 5 } + 3x - 1 \Bigg| } + C \\
\end{align*}
The book's answer is:
$$ \left( \frac{1}{3} \right) \ln{| \sqrt{9x^2 - 6x + 5} + 3x - 1 |} + C $$
Based upon comments from the group, I updated the last step. My answer is now:
$$ I = \left( \frac{1}{3}\right) \ln{\Bigg|
\frac{ \sqrt{ 9x^2 - 6x + 5 } + 3x - 1 }{2} \Bigg| } + C $$
However, this answer is still wrong. I would like to know where I went wrong.
AI: This integral fits into the standard form $$\int \frac{dx}{\sqrt{a^2+x^2}} = \ln|x + \sqrt{x^2 + a^2}| + c$$
Your integral can be rewritten as:
$$I = \int\frac{dx}{\sqrt{(3x-1)^2 + 4}}$$
Let $3x-1 = t \implies 3dx = dt$
$$I = \frac{1}{3}\int\frac{dt}{\sqrt{t^2 + 4}}$$
Can you take it from here?
EDIT: As somebody pointed out in the comments, you got the correct answer, but made a mistake in the final few steps.
$$\frac{1}{3}\ln\left| \frac{\sqrt{9x^2 - 6x + 5} + 3x-1}{2}\right| \ne \frac{1}{6}\ln\left|\sqrt{9x^2 - 6x + 5} + 3x-1\right|$$
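(Note that the two antiderivatives do agree as families: $\frac{1}{3}\ln\left|\frac{u}{2}\right|=\frac{1}{3}\ln|u|-\frac{1}{3}\ln 2$, and the constant $-\frac{1}{3}\ln 2$ is absorbed into $C$. So the updated answer is in fact equivalent to the book's.)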
|
H: How to solve $(1-x^2)y''-2xy'+2y=0$
I have not taken a course in differential equations but I decided to try and tackle this question I saw and solve for the general solution because why not. That said, I have some observations about the differential equation $(1-x^2)y'' - 2xy'+2y=0$.
$y=Ax$ is a solution to the equation however, I am unsure if I lose any information because $y''=\frac{\,d^2y}{\,dx^2}Ax=0$. When I set $y(x)=Axu(x)=Aux$,(I think this is how you solve second order with variable coefficients) I got to the point $$\begin{align*} u''(x-x^3)+2u'(1-2x^2) &= u''(x-x^3)+2u'(1-3x^2)+2u'x^2\\&=\frac{\,d}{\,dx}\big[u'(x-x^3)\big]+2u'(1-x^2)\\&=0 \end{align*}$$
Then I let $u'=w$ and got $w'(x-x^3)=-w(3-5x^2)$, separated variables and integrated to get $u=Ce^{\frac{1}{x^3(1-x)(1+x)}}$ and $y=ACxe^{\frac{1}{x^3(1-x)(1+x)}}$
I am not sure if this is correct, but it seems reasonable.
Also, $(1-x^2)y''-2xy' + 2y = \frac{\,d}{\,dx}[y'(1-x^2)] + 2y=0$, and this didn't seem to get me anywhere but, by factoring out $(1-x^2)$, I was able to make some more inferences:
For $x\ne \pm 1$,
$$\begin{align*}\bigg(y''-\frac{2xy'}{1-x^2}+\frac{2y}{1-x^2}\bigg)&=\bigg(\frac{\,d}{\,dx}y'+2\frac{\,d}{\,dx}\bigg[\frac{yx}{\sqrt{1-x^2}}\bigg]\bigg)\\&=\frac{\,d}{\,dx}\bigg(y'+2\frac{yx}{\sqrt{1-x^2}}\bigg)\\&=0 \end{align*}$$
I wasn't sure if I could, but I went ahead and integrated both sides with respect to x to get $$y'+2\frac{yx}{\sqrt{1-x^2}}=C_1 $$
I then assumed $C_1=0$ to turn it into a separable differential equation because I don't know what to do with the above expression and , as expected, upon checking, this was the incorrect answer.
I was just curious how to solve this for the general solution and tried to take on a challenging problem to pass some time. Thanks.
AI: Alpha identifies it as Legendre's equation and gives the solution
$$y(x) = c_1 x + c_2 \left(\frac{x}{2}\bigl(\log(1+x) - \log(1 - x)\bigr) - 1\right)$$
It offers step by step if you have the right account
|
H: Prove that $\bigcap\mathcal H\subseteq(\bigcap\mathcal F)\cup(\bigcap\mathcal G)$.
Not a duplicate of
Prove that $∩\mathcal H ⊆ (∩\mathcal F) ∪ (∩\mathcal G)$.
This is exercise $3.5.17$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$:
Suppose $\mathcal F$, $\mathcal G$, and $\mathcal H$ are nonempty families of sets and for every $A\in\mathcal F$ and every $B\in\mathcal G$, $A\cup B\in\mathcal H$. Prove that $\bigcap\mathcal H\subseteq(\bigcap\mathcal F)\cup(\bigcap\mathcal G)$.
Here is my proof:
Let $x$ be an arbitrary element of $\bigcap\mathcal H$. Now we consider two different cases.
Case $1.$ Suppose $x\in\bigcap\mathcal F$. Therefore $x\in (\bigcap\mathcal F)\cup(\bigcap\mathcal G)$.
Case $2.$ Suppose $x\notin \bigcap\mathcal F$. So we can choose some $A_0$ such that $A_0\in\mathcal F$ and $x\notin A_0$. From $\forall A\in\mathcal F\forall B\in\mathcal G(A\cup B\in\mathcal H)$ and $A_0\in\mathcal F$, it follows that $A_0\cup B\in\mathcal H$ for every $B\in\mathcal G$. Since $x\in\bigcap\mathcal H$, $x\in A_0\cup B$ for every $B\in\mathcal G$. Since $x\notin A_0$, $x\in B$ for every $B\in\mathcal G$ and so $x\in\bigcap \mathcal G$. Thus $x\in (\bigcap\mathcal F)\cup(\bigcap\mathcal G)$.
Since the above cases are exhaustive, $x\in (\bigcap\mathcal F)\cup(\bigcap\mathcal G)$. Therefore if $x\in\bigcap\mathcal H$ then $x\in (\bigcap\mathcal F)\cup(\bigcap\mathcal G)$. Since $x$ is arbitrary, $\forall x\Bigr(x\in\bigcap\mathcal H\rightarrow x\in (\bigcap\mathcal F)\cup(\bigcap\mathcal G)\Bigr)$ and so $\bigcap\mathcal H\subseteq(\bigcap\mathcal F)\cup(\bigcap\mathcal G)$. $Q.E.D.$
Is my proof valid$?$
Thanks for your attention.
AI: Your proof is okay.
It is more handsome though to prove the contrapositive statement:$$x\notin\left(\bigcap\mathcal{F}\right)\cup\left(\bigcap\mathcal{G}\right)\implies x\notin\bigcap\mathcal{H}$$
Proof:
If $x\notin\left(\bigcap\mathcal{F}\right)\cup\left(\bigcap\mathcal{G}\right)$
then there exists some $A\in\mathcal{F}$ with $x\notin A$ and some $B\in\mathcal{G}$
with $x\notin B$.
Then $x\notin A\cup B$, while $A\cup B\in\mathcal{H}$, so
we conclude that $x\notin\bigcap\mathcal{H}$.
|
H: Does a function which is oscillating have to have not-continuous derivative?
If $f(x)$ is a differentiable function which is not $0$ everywhere and has the property that on any interval around $0$, $f$ is neither entirely positive nor entirely negative, then it can be proven that $f(0)=0$.
An example of such a function is
$$\begin{cases}x^2\sin({1\over x}) & \text{ for }x\neq 0,\\ 0 &\text{ for }x=0.\end{cases}$$
All such functions I have seen so far satisfy this.
Q: Is it true that the derivative of such a function cannot be continuous or is there a counter-example?
I feel that such a function exists and have tried a few examples but have been unable to find one.
AI: $f(x)=x^3 \sin\left(\frac{1}{x}\right)$ (extended by $f(0)=0$) has a continuous derivative and satisfies your criteria.
|
H: Why do I get Lie(matrix Lie group)=$\mathfrak{gl}_n(\mathbb{R})$?
There is something I don't understand about matrix Lie groups. They are defined as closed subgroups of $GL_n(\mathbb{R})$, but we can show that a closed subgroup of a topological group is open as well. So if $H$ is a matrix Lie group, $H$ is open in $GL_n(\mathbb{R})$, so the tangent spaces of $H$ and $GL_n(\mathbb{R})$ are isomorphic, and hence their Lie algebras are the same (i.e. $\mathfrak{gl}_n(\mathbb{R})$). But it seems this is not true: if we take $H=SO(n)$, its Lie algebra is formed by the antisymmetric matrices, not all matrices. How do I correct this reasoning?
AI: It is not true that every closed subgroup of $GL_n(\Bbb R)$ is also open. For instance, $\{\operatorname{Id}_n\}$ is a closed subgroup which is not open.
|
H: Triple negation implies double negation elimination?
I have a question about intuitionistic logic regarding the relationship between the triple negation elimination rule, i.e. $\neg\neg\neg A\leftrightarrow \neg A$, and the double negation elimination. We know from Brouwer (1925) that $\neg\neg\neg A\leftrightarrow \neg A$ is true in intuitionistic logic. Let us define $B$ to be $\neg A$. Substitute $B$ and get $\neg\neg B\leftrightarrow B$. So does that mean the double negation elimination rule can still be applied in some cases in intuitionistic logic? Or am I missing something?
AI: does that mean the double negation elimination rule can still be applied in some cases in intuitionistic logic?
Yes, exactly right. Specifically, the double negation elimination rule $\lnot \lnot B \to B$ can be applied whenever $B$ is a negation, that is $B = \lnot A$ for some $A$, for the reasons you describe.
In general, many specific instances of double negation elimination are provable in intuitionistic logic. Another example is where $B$ is equivalent to $\bot$: then we can show that $\lnot \lnot B \to B$. But the statement $\lnot \lnot P \to P$ for a general predicate $P$ is not provable.
|
H: Finding area of $D =\{(x,y): x^2+y^2 \geq 1 , y \geq x-1 , y \leq 1, x \geq 0\}$
I want to calculate $$\iint_{D} dx \,dy$$ given that $$ D =\left\{(x,y): x^2+y^2 \geq 1 , y \geq x-1 , y \leq 1, x \geq 0\right\}$$
My attempt : I used polar cordinates $r,\theta$ such that $x =r \cos \theta , y= r \sin \theta$ and from $x^2+y^2 \geq 1$ we derive that $r \geq 1$ also from $ y \leq 1$ we derive that $ r \leq \frac{1}{\sin \theta}$ which implies that $ 0 \leq \theta \leq \pi$ with the additional condition that $ x \geq 0 $ we get that $ 0 \leq \theta \leq \frac{\pi}{2}$ but i don't know what to do with $y \geq x-1$ ??
AI: HINT
Draw a nice plot to see the area required and calculate
$$\int_0^1\int_{\sqrt{1-y^2}}^{y+1}dxdy$$
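which evaluates to
$$\int_0^1\left(y+1-\sqrt{1-y^2}\right)dy=\left[\frac{y^2}{2}+y\right]_0^1-\frac{\pi}{4}=\frac{3}{2}-\frac{\pi}{4}.$$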
|
H: How to simplify Kronecker delta with einstein summation?
I am trying to prove a vector identity. I have to prove the following.
I am a bit confused how to simplify the following part:
$$\delta_{il} \delta_{jm} x_{j}y_{l}z_{m}$$
Any input is appreciated.
AI: Remember Kronecker $\delta$ "is" the identity matrix, so contracting one suffix has the effect of replacing suffix. So you get
$$\require{color}
{\color{blue}\delta_{il}}{\color{red}\delta_{jm}x_j}{\color{blue}y_l}z_m={\color{red}x_m}{\color{blue}y_i}z_m=y_i(x\cdot z).
$$
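A numerical sanity check (a sketch assuming numpy; the vectors are arbitrary):

```python
# Verify delta_il * delta_jm * x_j * y_l * z_m = y_i * (x . z) with einsum.
import numpy as np

rng = np.random.default_rng(1)
x, y, z = rng.random((3, 4))
delta = np.eye(4)
lhs = np.einsum("il,jm,j,l,m->i", delta, delta, x, y, z)
rhs = y * (x @ z)
print(np.allclose(lhs, rhs))  # True
```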
|
H: How do I prove a set is not simply connected?
The set in question is the unit open disc $U$ with the origin omitted. I know that this set is not simply connected as the path, say the circle centred at the origin with a radius of 1/2 is not homotopic to any point in $U$. However, how do I mathematically express this?
Any help would be greatly appreciated.
AI: In a simply connected domain $D \subset \Bbb C$ we have $\oint_\gamma f(z) \, dz = 0$ for all functions $f$ holomorphic in $D$ and all (rectifiable) closed curves $\gamma$ in $D$. That is because the integral is invariant under the homotopy which transforms $\gamma$ to a single point. (See also Cauchy's integral theorem.)
But for $D = \{ z \mid 0 < |z| < 1 \}$ we have
$$
\frac{1}{2 \pi i} \oint_{|z| = 1/2} \frac{dz}{z} = 1 \ne 0
$$
as you can calculate easily. The integral is in fact the winding number of the circle with radius $1/2$ with respect to the point zero.
|
H: Convex solution set
Let $y, z, a \in \mathbb{R}$. Is it true, that the solutions $x \in \mathbb{R}$ of the inequality
$$
\lvert x - y \rvert - \lvert x - z \rvert \leq a
$$
form a convex set (i.e. an interval)? I don't even know if this is true but I have not found a counterexample yet.
I tried to prove this by choosing solutions $x, \tilde{x} \in \mathbb{R}$ and verifying that that $x_t := tx + (1-t)\tilde{x}$ is a solution for $t \in [0, 1]$, too. The latter gave me problems: I tried to add $0 = t\lvert x-y \rvert - t \lvert x -y \rvert + (1-t) \lvert \tilde{x} - z \rvert - (1-t) \lvert \tilde{x} - z \rvert$ and use the reverse triangle inequality but that did not get me anywhere.
AI: Hint
If $y=z$, then the set $J$ of solutions of the inequality is either the empty set or $\mathbb R$, depending on the sign of $a$. Hence it is an interval in that case.
Without loss of generality, you can suppose $y\lt z$ for the rest of the analysis.
Then deal with the three cases $x \le y\lt z$, $y <x \le z$ and $x >z$ to get rid of the absolute values. For each cases, the solution set is the intersection of two intervals. Hence is an interval.
This enables to graph the map $x \mapsto \vert x-y \vert - \vert x-z \vert$ which is constant, then linearly growing then constant. Hence the level sets of this map are intervals or the emptyset.
|
H: How do I find all functions $F$ with $F(x_1) − F(x_2) \le (x_1 − x_2)^2$ for all $x_1, x_2$?
In calculus class we were given this so-called "coffin problem" originally from Moscow State University.
Find all real functions $F(x)$, having the property that for any $x_1$ and $x_2$ the
following inequality holds:
$$F(x_1) − F(x_2) \le (x_1 − x_2)^2$$
I have the solution to this problem, which is supposed to make the question very intuitive once you see it. However, I still do not quite understand it, and I would appreciate your help.
Solution:
The inequality implies
$$\frac{F(x_1) − F(x_2)}{|x_1 − x_2|} \le |x_1 − x_2|,$$
so the derivative of $F$ at any point $x_2$ exists and is equal to zero. Therefore,
by the fundamental theorem of calculus, the constant functions are exactly the
functions with the desired property.
Based on this solution, I substituted $x_1=x_2+h$ and took the limit as $h$ approaches zero, therefore by first principles, the derivative of $F(x)$ at $x_2$ is less than or equal to zero. Where do I proceed from here?
AI: The intended solution seems to be something like the following:
Fix some $x \in \mathbb{R}$. By assumption, for any $h \in \mathbb{R}$ (in particular, for any very small value of $h$), taking $x_1 = x+h$ and $x_2 = x$, and then again with the roles of $x_1$ and $x_2$ swapped, we get
$$ \pm\big(F(x+h) - F(x)\big) \le \big( (x+h) - x \big)^2
\implies \frac{|F(x+h) - F(x)|}{|h|} \le |h|. $$
Take $h$ to zero, do a little algebra (the limit passes into the absolute value, since the absolute value function is continuous at $0$), and apply the Squeeze Theorem to get
$$ \Bigg\lvert \underbrace{\lim_{h\to 0} \frac{F(x+h) - F(x)}{h}}_{=F'(x),\text{ if it exists}} \Bigg\rvert
\le \lim_{h\to 0} |h|
= 0.$$
This implies that $F$ is differentiable at $x$, and that $F'(x) = 0$. But $x$ was chosen arbitrarily, so $F$ is differentiable everywhere and $F' \equiv 0$. Therefore $F$ is a constant function.
|
H: Product of two absolutely continuous measures
Suppose we have two probability measures $Q$ and $R$ on $\mathbb{R^{n}}$, both absolutely continuous w.r.t. $P$, with $\frac{dQ}{dP}$, $\frac{dR}{dP}$ given. Is there a way to define a multiplication, for instance via a Cauchy-product-like construction, such that we get a new absolutely continuous measure $S$ with $\frac{dQ}{dP}\cdot \frac{dR}{dP}=\frac{dS}{dP}$?
Feel free to impose additional conditions if needed.
AI: Yes, there is a way to define a multiplication of two measures absolutely continuous with respect to the same measure. By the Radon-Nikodym theorem, since $Q$ and $R$ are absolutely continuous with respect to $P$, there exist two $P$-measurable functions $q$ and $r$ such that
$$
Q(\mathrm{d}x) = q(x)\,P(\mathrm{d}x) \ \text{ and } \ R(\mathrm{d}x) = r(x)\,P(\mathrm{d}x)
$$
(i.e. $q(x) = \tfrac{\mathrm{d}Q}{\mathrm{d}P}$ and $r(x) = \tfrac{\mathrm{d}R}{\mathrm{d}P}$ in your notation).
One can then define the function $s(x) = q(x)\,r(x)$ (pointwise multiplication of functions, almost everywhere with respect to $P$), which is also $P$-measurable, and then the measure $Q\cdot R$, defined for any measurable set $A$ by
$$
(Q·R)(A) := S(A) := \int_A s(x)\, P(\mathrm{d}x)
$$
$$
If $q\,r\in L^1(P)$, then $Q·R$ is absolutely continuous with respect to $P$.
|
H: Does $\int_0^{\pi \over 2} \lfloor \tan(x) \rfloor\, dx$ converge?
My book follows the following method
Let
$$I=\int_0^{\pi \over 2} \lfloor \tan(x) \rfloor\, dx.$$
Then using King's rule $$\int_{a}^{b}f(x)dx=\int_{a}^{b}f(a+b-x)dx$$ we have
$$I=\int_0^{\pi \over 2} \lfloor \tan(\pi- x)\rfloor\, dx=\int_0^{\pi \over 2} \lfloor \tan(- x)\rfloor\, dx$$
Adding the above two we get:
$$2I =\int_0^{\pi\over 2} \left[\lfloor\tan(- x)\rfloor+\lfloor\tan (x)\rfloor\right]\,dx$$
Now since $ \lfloor x\rfloor +\lfloor -x\rfloor=-1$ when $x$ is not an integer and $0$ otherwise, the integral becomes
$$2I=\int_0^{\pi\over 2} -1\, dx$$
which then gives $I = {-\pi \over 4}$
But Wolfram Alpha says that the integral does not converge.
My Question:
Is my book correct? If not where is the error in above calculations?
AI: Your application of King's rule is wrong; it gives$$I=\int_0^{\pi/2}\lfloor\tan(\color{blue}{\pi/2}-x)\rfloor dx.$$To prove $I$ diverges, note$$I\ge\int_0^{\pi/2}(\tan x-1)dx=[\ln|\sec x|-x]_0^{\pi/2}=\infty.$$
|
H: Prove, by contradiction, that if $a$ and $b$ are nonzero real numbers, and $a<\frac{1}{a}<b<\frac{1}{b}$, then $a<-1$
Prove, by contradiction, that if a and b are nonzero real numbers, and $a<1/a<b<1/b$ then $a<−1$.
I understand that the first step is to assume that $a<1/a<b<1/b$ is true. Therefore, the hypotheses makes up:
$a ≠ 0$
$b ≠ 0$
$a<1/a<b<1/b$ is true
From here, to prove by contradiction, we find the negated version of $a < -1$, which is $a \ge -1$.
Now we must find a contradiction within these statements, but I am struggling to pinpoint that contradiction.
FYI Answer:
I have accepted the completed answer for those who are looking for a quick answer but do check the other one for in-depth
analysis as well.
AI: Suppose $-1\le a$, i.e. $a\in [-1,\infty)$.
Note that $a$ cannot be larger than $1$ or be in $(-1,0)$; otherwise $\frac{1}{a}<a$. And $a$ cannot be $-1$ or $1$; otherwise $\frac{1}{a}=a$. Therefore $a\in(0,1)$ and $1<\frac{1}{a}$.
But there is $b$ such that $1<\frac{1}{a}<b$, therefore $\frac{1}{b}<1<b$, which contradicts the assumption that $b<\frac{1}{b}$.
|
H: $\pi(v)=x+y$ implies $v=v_1+v_2,\; \pi(v_1)=x,\pi(v_2)=y$
Let $V$ be a vector space over some field $k$, let $U\subset V$ be a subspace of $V$ and consider the quotient space $V/U$, along with the projection $\pi:V\to V/U,\, x\mapsto x+U$.
Does it hold that $\pi(v)=x+y$ implies $v=v_1+v_2$ with $ \pi(v_1)=x,\pi(v_2)=y$?
My approach: $x+y=\pi(\tilde{x})+\pi(\tilde{y})=\pi(\tilde{x}+\tilde{y})$ for $\tilde{x},\tilde{y}\in V$. How do I continue from here in order to show that $v$ splits in the above way?
AI: The answer is positive.
Suppose that $\pi(v) = x+y$. As $x \in V/U$ and $\pi$ is surjective, there exists $v_1^\prime \in V$ such that $\pi(v_1^\prime)=x$. Similarly for $y$ with $v_2^\prime$.
We then have $\pi(w)=0$ where $ w = v-(v_1^\prime +v_2^\prime)$. Which means that $ w $ belongs to $U$.
So $v=(v_1^\prime+w)+v_2^\prime$ and $v_1=v_1^\prime+w$, $v_2=v_2^\prime$ solve the problem as $\pi(v_1^\prime) =\pi(v_1)$.
|
H: Does my claim work for a set?
I don't have enough reputation to comment on this neat answer, so I ask here for some hints/answers. I take the set $A = \{0,1,2,3\}$, and so according to that answer, $A \sim \mathbb{Z}_4$. My question is: does that particular bijection allow one to say that the subset $\{0,2\}$ of $A$ is isomorphic to the subgroup $\{0,2\}$ of $\mathbb{Z}_4$?
I asked this because I don't know the explicit rule of the bijection. Thanks.
AI: If $A\sim\Bbb Z_4$ means that $A$ is isomorphic to $\Bbb Z_4$, then the statement doesn't make sense, since $A$ is simply a set, not a group. Given any bijection $b\colon A\longrightarrow\Bbb Z_4$, you can use it to define a group structure on $A$ by transport of structure: if $a_1,a_2\in A$, $a_1\oplus a_2=b^{-1}\bigl(b(a_1)+b(a_2)\bigr)$. Then $(A,\oplus)$ is a group which is isomorphic to $(\Bbb Z_4,+)$. And $\{0,2\}$ may be or may not be a subgroup of $(A,\oplus)$; it depends upon the choice of $b$.
|
H: Will optimal point of a convex function $f(x)$ under linear constraint lie on boundary?
I have a function $f(x)$ that is a positive definite quadratic function, and I have linear constraints. Will the optimum lie on the boundary?
My feeling is that the answer is "No", it need not lie on the boundary, but I am unable to give a solid proof due to my weak calculus (especially when it comes to higher dimensions).
Can anyone help me visualize this in $1$ dimension (taking $f(x)=x^2$) and then move towards higher dimensions?
AI: It might lie on the boundary though it need not.
Let's consider a one dimensional example.
$\min x^2$ subject to $x \ge 0$.
The optimal solution is clearly $x=0$ which is on the boundary.
$\min x^2$ subject to $x \ge -1$.
Then the optimal solution is not on the boundary.
In general, given $x^TAx+b^Tx$ with $A$ positive definite, differentiating gives the gradient $2Ax+b$, so we can check whether the stationary point (the solution of $2Ax+b=0$) lies in the interior of the feasible region. If it does not, then the optimal solution must be on the boundary.
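The two one-dimensional examples above can also be checked numerically; a minimal sketch (the starting point `x0` is arbitrary):

```python
from scipy.optimize import minimize

f = lambda x: x[0] ** 2  # positive definite quadratic

# Constraint x >= 0: the unconstrained minimizer 0 sits exactly on the boundary.
r1 = minimize(f, x0=[1.0], bounds=[(0, None)])
# Constraint x >= -1: the minimizer 0 is an interior point of the feasible set.
r2 = minimize(f, x0=[1.0], bounds=[(-1, None)])
print(r1.x, r2.x)  # both ≈ [0.]; it lies on the boundary only in the first case
```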
|
H: Definite integral of $\int_{-2}^{2} \frac{5}{(x^2+4)^2}\,dx$ using the substitution of $x=2\tanθ$.
Can someone help with the integral $\int_{-2}^{2} \frac{5}{(x^2+4)^2}\,dx$?
I'm supposed to find the definite integral for this using the substitution $x=2\tanθ$.
This is what I've done so far:
$$\longrightarrow \frac{dx}{dθ}=2\sec^2θ$$
$$\longrightarrow x=2 \rightarrow θ=\frac{\pi}{4}$$
$$\longrightarrow x=-2 \rightarrow θ=-\frac{\pi}{4}$$
$$\therefore \int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} \frac{10\sec^2θ}{(4\tan^2θ+4)^2}\,dθ$$
Using $$t=\tan^2θ+1,$$
$$=\int_{2}^{2} \frac{10}{32t^2\sqrt{t-1}}\,dt$$
After this, I don't know how to finish. Does anyone know how to finish?
*Note: The first substitution is the one the exercise is telling me to use, the other is one I used myself.
AI: Instead of applying another substitution, just simplify the expression after your first substitution.
$$\int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} \frac{10\sec^2{\theta}}{{\left(4\tan^2{\theta}+4\right)}^2} \; d \theta $$ $$=\int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} \frac{10\sec^2{\theta}}{16{\left(\tan^2{\theta}+1\right)}^2} \; d \theta$$
$$=\frac{5}{8}\int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} \frac{\sec^2{\theta}}{{\sec^4{\theta}}} \; d \theta$$
$$=\frac{5}{8}\int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} \cos^2{\theta} \; d \theta$$
Using the power-reduction formula for $\cos^2{\theta}$:
$$=\frac{5}{16} \left(\theta+\frac{1}{2}\sin{(2\theta)}\right) \bigg\rvert_{-\frac{\pi}{4}}^{\frac{\pi}{4}}$$
$$=\frac{5\left(\pi+2\right)}{32}$$
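As a quick numerical cross-check (a sketch using `scipy`), the value $\frac{5(\pi+2)}{32}\approx 0.8034$ matches direct quadrature:

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: 5 / (x**2 + 4)**2, -2, 2)
print(val, 5 * (np.pi + 2) / 32)  # both ≈ 0.80337
```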
|
H: If $kx^2-4x+3k+1>0$ for at least one $x>0$ and if $k\in S$, find $S$
Options: $A)~ (1,\infty)~~ B)~(0,\infty)~~C)~(-1,\infty)~~D)~(-\frac 14 , \infty)$
Obviously
$$16-4(k)(3k+1)<0$$
$$k\in (-\infty, -\frac 43)\cup (1,\infty)$$
And also $k>0$ so the answer should be A)
The answer is, however, A, B, D
I think it has something to do with 'at least one $x>0$', but I don't know what exactly the question means by that.
AI: If $k\ge 0$, then the said condition will always be true, as we get an upward-facing parabola (or a line, for the $k=0$ case).
If $k\lt 0$, then we need the discriminant to be $\gt 0$, for otherwise the quadratic always assumes negative values, i.e. $$16-4k(3k+1) \gt 0 \\ \implies k\in \left(-\frac 43, 1\right ) $$
Now, since the quadratic must be positive for at least one positive $x$, at least one of the roots must be positive (can you see why?), so $$\frac{4+\sqrt{16-4k(3k+1)}}{2k} \gt 0 $$
But this is never true for $k\lt 0$. Although $$\frac{4-\sqrt{16-4k(3k+1)}}{2k} \gt 0 $$ is true for $$k\in \left(-\frac 13,0\right)$$
Taking the union of the two cases, we must have that $$\boxed{k\in \left(-\frac 13, \infty\right)} $$ Only options (A),(B),(D) are subsets of this set.
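A brute-force scan (a sketch; the grid of $x$ values and the sample $k$'s are arbitrary) agrees with the boundary at $k=-\frac 13$:

```python
import numpy as np

# For each k, test whether k*x^2 - 4x + 3k + 1 > 0 holds for at least one x > 0.
xs = np.linspace(1e-6, 100, 200_000)
for k in [-0.50, -1/3, -0.33, 0.00, 0.50, 2.00]:
    ok = bool(np.any(k * xs**2 - 4 * xs + 3 * k + 1 > 0))
    print(f"k = {k:+.3f}: {ok}")   # False for k <= -1/3, True for k > -1/3
```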
|
H: Prove that $\phi: G / F \rightarrow \operatorname{Sym}(X)$ is a monomorphism
I'm doing this exercise 11 in textbook Algebra by Saunders MacLane and Garrett Birkhoff.
If $G$ acts on $X$, and $F$ consists of those $g \in G$ fixing every $x \in X$, prove that $F \trianglelefteq G$. If $p: G \rightarrow G / F$ is the projection, prove that there is a unique action of $G / F$ on $X$ with $(p g) x=g x$. If $\phi$ maps $p g$ to the permutation $x \mapsto g x$ on $X$, prove that $\phi: G / F \rightarrow \operatorname{Sym}(X)$ is a monomorphism.
Because the authors mentioned "the permutation $x \mapsto g x$ on $X$", I tried to prove that $x \mapsto g x$ is bijective, but to no avail. Could you please elaborate on the correctness of this exercise?
AI: If $gx=gy$, then multiplying both sides by $g^{-1}$, we have $x=y$, so the map is injective. And given an arbitrary $y\in X$, take $x=g^{-1}y$; then $gx=y$, so the map is surjective. Hence $x\mapsto gx$ is indeed a bijection.
|
H: How I can describe $v^{\perp}$ as $Rw$?
Let $v=(\alpha , \beta)\in \mathbb{R}^2$ be nonzero. Describe $v^{\perp}$ as $\mathbb{R}w$ for a suitable $w$.
I am thinking of considering the orthogonal complement: a plane would have an equation of the form $ax+by+cz=0$, and this should be perpendicular to the basis vectors.
But how?
AI: It seems that you are looking for a rotation matrix among the $2\times 2$ matrices that transforms the vector $v$ into another vector rotated by $\pi/2$. Ergo, $u=Rv$ where $u$ is perpendicular to $v$. Following the comment of Tanner, you can set up $(-\beta \ \alpha)' = R \: (\alpha \ \beta)'$, where the apostrophes mean transpose. The components of the matrix are $R_{11}=0$, $R_{12}=-1$, $R_{21}=1$ and $R_{22}=0$. You can normalize $u$ to get a basis for the orthogonal space (the $v^{\perp}$ in your post).
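A two-line check of the perpendicularity (a sketch with an arbitrary sample vector):

```python
import numpy as np

alpha, beta = 3.0, -2.0                  # any nonzero v = (alpha, beta)
R = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation by pi/2
u = R @ np.array([alpha, beta])          # u = (-beta, alpha)
print(u, u @ np.array([alpha, beta]))    # dot product is 0.0: u spans v_perp
```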
|
H: Proving continuity at the end points of the extension of a continuous function
(Baby Rudin Chapter 4 Exercise 5)
I want to follow up on my previous question. (Previously, I asked if I needed to even prove that $g$ is continuous on the endpoints of $E$. Here, I want to ask about the actual methodology of proving that $g$ is continuous on the endpoints.)
Suppose $f$ is a real, continuous function defined on the closed set $E \subset \mathbb{R}^1$. Prove that there exists a real, continuous function $g$ on $\mathbb{R}^1$ such that $g(x) = f(x)$ for all $x \in E$.
My attempt:
Define $g$ as:
$g(x) =
\begin{cases}
f(x) & \text{if $x \in E$} \\
f(a_i)+(x-a_i)\frac{f(b_i)-f(a_i)}{b_i-a_i} & \text{if $x \in (a_i, b_i)$} \\
f(b_0) & \text{if $x \in (b_0, +\infty)$} \\
f(a_0) & \text{if $x \in (-\infty, a_0)$}
\end{cases}$
Clearly, $g$ is an extension of $f$ on $\mathbb{R}^1$ and it remains to show that $g$ is continuous on $\mathbb{R}^1$. [Then I show that $g$ is continuous on all points of $E^c$]
Then, to show that $g$ is continuous when $x \in E$: it is clear from the definition of $g$ that $g$ is continuous if $x$ is an interior point of $E$, and it remains to show that $g$ is continuous if $x=a_i$ or $x=b_i$ for some $i$ (the endpoints of $E$).
Now, it suffices to prove that $g$ is continuous if $x=a_i$ (the $x=b_i$ case is essentially identical). My attempt to prove that $g$ is continuous if $x=a_i$ so far:
If $x=a_i$ for some $i$, since $g$ is linear in $(a_i, b_i)$ by construction and $a_i \in E$, we have
\begin{equation*}
g(a_i+) = \lim\limits_{u \to a_i+}g(u) = f(a_i) = g(a_i)
\end{equation*}
This is where I got stuck. Of course, I want to prove that $$g(a_i-)=g(a_i)$$ but I don't know how this can be proven. Unfortunately, I can't adopt the same methodology as I did in showing that $ g(a_i+) =g(a_i)$. How can I complete this proof?
AI: Fix $\epsilon>0$. Since $f$ is continuous, there is a $\delta>0$ such that $|f(x)-f(a_i)|<\epsilon$ for all $x\in E\cap(a_i-\delta,a_i]$. If $E\cap(a_i-\delta,a_i)=\varnothing$, then $a_i=b_j$ for some $j$, and proving continuity of $g$ from the left at $b_j$ is like proving continuity from the right at $a_i$, which you know how to do. Otherwise, fix $x_0\in E\cap(a_i-\delta,a_i)$; I claim that $|g(x)-g(a_i)|<\epsilon$ for all $x\in(x_0,a_i]$.
This is immediate for $x\in E\cap(x_0,a_i]$. If $x\notin E$, then $x\in(a_j,b_j)\subseteq(x_0,a_i)$ for some $j$, and by construction $g(x)$ lies between $f(a_j)$ and $f(b_j)$. Thus,
$$|g(x)-g(a_i)|\le\max\{|f(a_j)-f(a_i)|,|f(b_j)-f(a_i)|\}<\epsilon\;,$$
as claimed.
|
H: An Inequality in von Neumann algebras
In Section $9.9$ of the book 'Lectures on von Neumann algebras' by Strătilă and Zsidó, I do not see how they obtain the following inequality:
Given a positive self-adjoint linear operator $A$ in the Hilbert space $\mathcal{H}$, we have $a=(1+A)^{-1}\in \mathcal{B}(\mathcal{H}) \text{ and } 0\leq a \leq 1$. For any natural number $n$, let ${\chi}_n$ be the characteristic function of the set $((n+1)^{-1},+\infty)$. Let us define $e_n={\chi}_n(a)$.
Problem: Then there exists a unique $a_n\in\mathcal{R}(\{a\})$ ($\mathcal{R}(\{a\})$ stands for the von Neumann algebra generated by the element $a$) such that $e_n\leq a_n\leq (n+1)e_n$ and $e_n=aa_n$.
Thanks in advance for any help.
AI: Let $f(t)=\tfrac1t\,1_{\bigl(\tfrac1{n+1},\infty\bigr)}(t)$. Define
$$
a_n=f(a).
$$
Then, since $t\,f(t)=\chi_n(t)$,
$$
e_n=aa_n.
$$
As
$$
\,1_{\big(\tfrac1{n+1},1\big]}(t)\leq \tfrac1t\,1_{\big(\tfrac1{n+1},1\big]}(t)\leq (n+1)\,1_{\big(\tfrac1{n+1},1\big]}(t),
$$
we get
$$
e_n\leq a_n\leq (n+1)\,e_n.
$$
Uniqueness: if $aa_n=ab_n$, then $a(a_n-b_n)=0$, and hence $f(a)\,(a_n-b_n)=0$ for all continuous $f$; as $\{f(a):\ f\ \text{continuous}\}$ is dense in $\mathcal R(\{a\})$, we get that $a_n-b_n=0$.
|
H: Find a basis $ {\{b_1,\cdots, b_n}\} $ of $ \mathbb{C}^n $ such that $ \langle b_j, b_k \rangle = 1 $ whenever $ j \neq k $
Find a basis $ {\{b_1,\cdots, b_n}\} $ of $ \mathbb{C}^n $ such that $ \langle b_j, b_k \rangle = 1 $ whenever $ j \neq k $ where $\left\langle (x_1,\cdots,x_n),(y_1,\cdots,y_n)\right\rangle:=\sum_{k=1}^{n}\overline{x_k}y_k.$
Friends, I have not been able to solve this problem. I feel like I should use the Gram–Schmidt orthonormalization process; is that correct? Could you give me a hint?
AI: Hint: start with an orthonormal basis, and then add one basis vector to all the others.
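A concrete instance of the hint (a sketch in dimension $n=4$; the entries are real, so conjugation is trivial):

```python
import numpy as np

n = 4
B = np.eye(n)
B[:, 1:] += B[:, [0]]            # add the first basis vector to all the others
G = B.conj().T @ B               # Gram matrix <b_j, b_k> of the columns
print(np.linalg.matrix_rank(B))  # 4: the columns still form a basis
print(G)                         # every off-diagonal entry equals 1
```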
|
H: Eigenvectors matrix multiplied by its transpose $\boldsymbol{\chi} \boldsymbol{\chi}^T $
Let $V$ be a set of datapoints, and assume that each point can be represented by a vertex. Then, given a similarity matrix $ \mathbf{M}$, we define a graph $G = (V, \mathbf{M})$ generated using $k$-nearest neighbors.
Let $\mathbf{D}$ denote the diagonal degree matrix of $\mathbf{M}$. Then, we define the normalized weight matrix $\mathbf{W}$ using $\mathbf{D}$, so that
\begin{equation*}
\mathbf{W}= \mathbf{D}^{-\frac{1}{2}} \mathbf{M} \mathbf{D}^{-\frac{1}{2}}.
\end{equation*}
Therefore, we define the normalized Laplacian matrix of $G$ as
\begin{equation*}
\mathbf{L} = \mathbf{I} - \mathbf{W}= \mathbf{I}- \mathbf{D}^{-\frac{1}{2}} \mathbf{M} \mathbf{D}^{-\frac{1}{2}},
\end{equation*}
where $\mathbf{I}$ is the identity matrix.
Since the normalized Laplacian matrix $\mathbf{L}$ is a positive semi-definite matrix, the matrix $\mathbf{L}$ is decomposed into an orthogonal set of eigenvectors $\mathbf{U}=[u_1,...u_n]$ and eigenvalues $\mathbf{\Lambda}=[\lambda_1,...,\lambda_n]$ represented as follows
\begin{equation*}
\mathbf{L}= \mathbf{U}\Lambda \mathbf{U}^{T}.
\end{equation*}
In the graph setting, the eigenvalues of $ \mathbf{L}$ can be treated as graph frequencies, and are always situated in the interval $[0, 2]$ for $\mathbf{L}$.
......................
Now, let $\mathbf{H}=\mathbf{U} h(\mathbf{\Lambda}) \mathbf{U}^{T}$ where $h(\mathbf{\Lambda})$=diag$(h(\lambda_1),...,h(\lambda_n))$.
I assume that $h(\lambda)=1$ if $\lambda \leq \lambda_k$ and $h(\lambda)=0$ otherwise.
Therefore, $\mathbf{U} h(\mathbf{\Lambda}) \mathbf{U}^{T}= \boldsymbol{\chi} \boldsymbol{\chi}^T $, where $\boldsymbol{\chi} \in \mathbb{R}^{n\times k}$ contains the first $k$ eigenvectors of the normalized Laplacian $\mathbf{L}$.
My questions:
I know that $UU^{T}=\mathbf{I}$ but what about $\boldsymbol{\chi} \boldsymbol{\chi}^T $ that contains the first $k$ eigenvectors of the normalized Laplacian $\mathbf{L}$ ?
Assume that I sum up all elements of the matrix $\boldsymbol{\chi} \boldsymbol{\chi}^T $ (call it $S$). Is there any condition to ensure that the sum $S$ decreases or takes the smallest possible value?
Is there any relation between $S$, $\boldsymbol{\chi} \boldsymbol{\chi}^T $ and the topology of the graph?
AI: I know that $UU^{T}=\mathbf{I}$ but what about $\boldsymbol{\chi} \boldsymbol{\chi}^T $ that contains the first $k$ eigenvectors of the normalized Laplacian $\mathbf{L}$ ?
Because $\chi^T\chi = I_k$, $\chi\chi^T$ is the orthogonal projection onto the span of the columns of $\chi$, i.e. the first $k$ eigenvectors of $\mathbf{L}$.
Assume that I sum up all elements of the matrix $\boldsymbol{\chi} \boldsymbol{\chi}^T $ (call it $S$). Is there any condition to ensure that the sum $S$ decreases or takes the smallest possible value?
Let $\chi_k$ denote the matrix $\chi$ constructed above with $k$ columns. Let $e = (1,1,\dots,1)^T$. The sum of the elements of $\chi_k\chi_k^T$ is equal to
$$
S_k = e^T[\chi_k\chi_k^T]e = [e^T\chi_k][\chi_k^Te] = (\chi_k^Te)^T(\chi_k^Te) = \sum_{j=1}^k (e^Tv_j)^2,
$$
where $v_j$ denotes the $j$th unit eigenvector (i.e. the $j$th column of $\chi_k$). Note that $e^Tv_j$ is the sum of the entries of $v_j$. With that, it is clear that $S$ increases as $k$ increases since each term in the sum is non-negative.
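These identities are easy to verify numerically; a minimal sketch on a small path graph (the choice of graph is arbitrary):

```python
import numpy as np

# Path graph on 4 vertices: adjacency M, degrees D, normalized Laplacian L.
M = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D_inv_sqrt = np.diag(1 / np.sqrt(M.sum(axis=1)))
L = np.eye(4) - D_inv_sqrt @ M @ D_inv_sqrt

lam, U = np.linalg.eigh(L)   # eigenvalues ascending, orthonormal eigenvectors
e = np.ones(4)
for k in range(1, 5):
    chi = U[:, :k]           # first k eigenvectors
    P = chi @ chi.T          # orthogonal projection onto their span
    print(k, np.allclose(P @ P, P), e @ P @ e)  # S_k never decreases in k
```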
Is there any relation between $S$, $\boldsymbol{\chi} \boldsymbol{\chi}^T $ and the topology of the graph?
There is no relation to the topology of the graph that I can see. If you're interested in a more thorough answer, then this question should probably be asked as its own, separate post.
|
H: Martingale constructed from a random walk
I have been trying to solve this problem for a while now, but I am not arriving at a solution. Could anyone help or give me a hint?
Let $S_n=\sum_{i=1}^{n}X_i$ be a random walk on $\mathbb{Z}$ with $S_0=0$, $\mathbb{P}(X_i=1)=\frac{2}{3}$, $\mathbb{P}(X_i=-1)=\frac{1}{3}$. For which strictly positive constant $c\neq1$ is $M_n:= c^{S_n}$ a martingale?
AI: Here is how you get it. Note that
$$
E(c^{S_{n+1}}\mid \mathcal F_n) = E(c^{S_n}\cdot c^{X_{n+1}} \mid \mathcal F_n)= c^{S_n} E(c^{X_{n+1}}),
$$
and for $(M_n)$ to be a martingale this must equal $c^{S_n}$.
Therefore, you need that $E(c^{X_{n+1}})=1$. Hence,
$$
E(c^{X_{n+1}})= \frac{2c}{3}+\frac{1}{3c}= 1
$$
Solve the above equation and find $c$.
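(Multiplying through by $3c$ gives $2c^2-3c+1=0$, i.e. $(2c-1)(c-1)=0$, so $c=\frac12$ since $c\neq 1$.) A quick simulation check, a sketch with arbitrary sample sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
c = 0.5                              # the root of 2c/3 + 1/(3c) = 1 other than c = 1
print(c * 2/3 + 1/(3*c))             # 1.0, as required

steps = rng.choice([1, -1], p=[2/3, 1/3], size=(500_000, 10))
S = steps.cumsum(axis=1)             # 500k sample paths of the walk, 10 steps each
print((c ** S).mean(axis=0))         # each column mean ≈ 1 = c^{S_0}: martingale
```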
|
H: Sequence in $\Bbb Q$ that converges to $0$ under $d_E$ and to $1$ under $d_2$
I am looking for a sequence in $\Bbb Q$ that converges to $0$ under the Euclidian metric and to $1$ under the 2-adic metric $d_2$. This metric is defined by the 2-adic absolute value as $d_2(x, y) = |x-y|_2 = 2^{-\text{ord}_2(x-y)}$ for $x ≠ y$. As $|3^{-n}|_2 = 2^{-0} = 1$ for all $n ∈ \Bbb N$, this sequence seems to "be the constant sequence $(1)_{n ∈ \Bbb N}$ w.r.t. the 2-adic metric", so my guess would be that the sequence $(3^{-n})_{n ∈ \Bbb N}$ will suffice. However, I run into a problem when I try to run the proof neatly. Indeed, what I want to show is the following:
$$(\forall \varepsilon ∈ \Bbb R_{>0})(\exists N ∈ \Bbb N)(\forall n ∈ \Bbb N)\left(n ≥ N \Rightarrow d_2(1, 3^{-n}) = 2^{-\text{ord}_2(3^{-n} - 1)} < \varepsilon\right),
$$
but this suddenly isn't so clear. The value of $\text{ord}_2(3^{-n} - 1)$ really seems to depend on $n$ in a non-trivial way. Am I even right that this sequence will suffice? If so, how do I prove it correctly?
AI: You can take$$x_n=1-\frac{2^n}{2^n-1}.$$It clearly converges to $0$ with respect to the Euclidean distance. Besides, $d_2(x_n,1)=2^{-n}$ and so…
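A small computation (a sketch using exact rationals) confirms both convergences for $x_n=-\frac{1}{2^n-1}$:

```python
from fractions import Fraction

def ord2(q):
    """2-adic valuation of a nonzero rational q."""
    n, d, v = q.numerator, q.denominator, 0
    while n % 2 == 0:
        n //= 2; v += 1
    while d % 2 == 0:
        d //= 2; v -= 1
    return v

for n in [1, 2, 5, 10, 20]:
    x = 1 - Fraction(2**n, 2**n - 1)         # x_n = -1/(2^n - 1)
    print(n, float(x), 2.0 ** -ord2(x - 1))  # -> 0 Euclidean; d_2(x_n, 1) = 2^{-n}
```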
|
H: why this operator $T$ is always diagonalizable?
Let $V = \mathbb{R}^3$ and let $B=(v_1,v_2,v_3)$ be an ordered basis for $V$.
Let $T:V \to V$ be a linear operator whose representation matrix with respect to the basis $B$ is $$[T]_B^B = {\left[\begin{array}{ccc} 3 & 0 & 8 \\ 0 & 0 & -1 \\ 8 & -1 & 5 \end{array}\right]}.$$
Why is it true that $T$ is always diagonalizable?
I do not understand how I can conclude anything about eigenvectors; that is the only way I can think of approaching this kind of question.
AI: I will try to answer this question in a little bit more general setting.
One of the common definitions of diagonalizability is that a linear operator $A\colon V\to V$ is diagonalizable if and only if there exists a basis of $V$ consisting of eigenvectors of $A$. Over $\mathbb{C}$, the most general class of linear operators satisfying this property are the so-called normal operators. We say that a linear operator $A$ is a normal operator if and only if it commutes with its Hermitian adjoint: $AA^*=A^*A$.
Since a linear operator on $V$ can be expressed in a matrix form we also have a notion of a normal matrix, i.e. the matrix $A$ is normal if and only if $AA^*=A^*A$, where $A^*$ is the conjugate transpose of $A$. Note that for the real case $A^*=A^T$, so a real matrix $A$ is normal if and only if $AA^T=A^TA$. This property is invariant under the change of basis since
$$AA^T=A^TA\Longleftrightarrow P^{-1}AA^TP=P^{-1}A^TAP$$
for any invertible matrix $P$.
In this particular problem it's easy to note that $T$ is represented by a symmetric matrix in the given basis $\mathcal{B}$, i.e. $([T]_{\mathcal{B}})^T=[T]_{\mathcal{B}}$, so $T$ is normal, since $TT^T=T^2=T^TT$. Moreover, by the spectral theorem every real symmetric matrix is orthogonally diagonalizable, which is why $T$ is diagonalizable.
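One can also just check numerically; a minimal sketch:

```python
import numpy as np

T = np.array([[3, 0, 8],
              [0, 0, -1],
              [8, -1, 5]], dtype=float)
lam, P = np.linalg.eigh(T)   # valid because T is real symmetric
print(lam)                   # three real eigenvalues
print(np.allclose(P @ np.diag(lam) @ P.T, T))  # True: T = P Λ P^T, P orthogonal
```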
|
H: Interpretation and use of the logarithmic scale for high school students
Often when we discuss logarithms in high school we also talk about the so-called logarithmic scale.
On the logarithmic scale, the distance from $1$ to $2$ is the same as the distance from $2$ to $4$, or from $4$ to $8$, as in the image below.
There are many applications of the logarithmic scale: the Weber–Fechner law, the sound intensity perceived by our ears, the brightness of a star, etc. What is the best explanation to help students build a logarithmic scale and understand its utility?
AI: If I remember correctly, this video does a really good job at this sort of thing: https://www.youtube.com/watch?v=CfW845LNObM .
What you have shown above is just a different way of looking at what exactly a function does. For example, let's say we had
$${f(x) = x + 1}$$
This just "moves the number line left 1 unit" - but it does not affect the distance between two numbers. The numbers $0$ and $1$ have a distance of $1$ between them, and ${f(0)}$ and ${f(1)}$ has a distance of ${f(1)-f(0)=2-1=1}$ between them also - the distances stay the same!
Another example is
$${f(x) = 2x}$$
In this case, the distances between numbers are affected. Take the same example of $0$ and $1$ - ${f(0) = 0, f(1) = 2}$, and so ${f(0)}$ and ${f(1)}$ have a distance of $2$ between them - it's double the distance! And you can show that ${\left|f(y) - f(x)\right|=2\left|y-x\right|}$. The distance between numbers always gets doubled under this "transformation". So this function $f$ is a transformation that does stretch the number line.
So the Logarithmic scale is just showing you how the distances between two numbers get changed under ${\log(x)}$ as a transformation. And because ${\log(x)}$ grows so slowly - we expect distances between numbers under the transformation to get closer and closer together - just as you see in the picture.
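For instance, you can let students compute the distances themselves; a sketch (base $2$ is chosen so that every doubling comes out to distance exactly $1$):

```python
import numpy as np

for a, b in [(1, 2), (2, 4), (4, 8), (100, 200)]:
    print(f"|{b}-{a}| = {b - a:>3},  |log2({b}) - log2({a})| = {np.log2(b) - np.log2(a)}")
# Every doubling has the same length (1.0) on the logarithmic scale,
# even though the Euclidean gaps 1, 2, 4, 100 keep growing.
```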
We have some special names for how distances get changed too. Lipschitz mappings are mappings such that
$${|f(y)-f(x)|\leq k|x-y|}$$
For some constant $k$, and Contraction mappings are Lipschitz mappings with ${0\leq k<1}$. In fact one of the (in my opinion) coolest theorems ever is the contraction mapping theorem - but this is getting quite complicated :)
|
H: About the hypotheses of Schauder Theorem
I know that Schauder's Theorem says: $T: E \to F$ is a compact operator iff $T^{*}: F^{*} \to E^{*}$ is a compact operator.
My doubt is: what are the hypotheses about $E$ and $F$? Is it enough that they are just normed spaces or do they need to be Banach spaces? Or $E$ normed and $F$ Banach?
appreciate...
AI: As far as I know, the definition of a compact operator $T:E \to F$ usually requires that both $E$ and $F$ are Banach spaces. The main reason is that if $(E, d)$ is a complete metric space, then a subset $S \subseteq E$ is relatively compact if and only if $S$ is totally bounded. This allows us to have several equivalent definitions for a compact operator $T:E \to F$ when $E$ and $F$ are Banach spaces:
For each bounded set $B \subseteq E$, $T(B)$ is relatively compact in $F$.
If $(\xi_n)_{n=1}^{\infty}$ is a bounded sequence in $E$, then $(T(\xi_n))_{n=1}^\infty$ admits a convergent subsequence in $F$.
$T\Big(\overline{B_1^E(0)}\Big)$ is relatively compact in $F$.
$T\Big(\overline{B_1^E(0)}\Big)$ is totally bounded.
I have never worked with compact operators between non-Banach spaces. One of the main reasons is that if $F$ is not Banach you lose definition 4 above. This definition is important because an immediate consequence is that any compact operator is bounded.
Let's take a look at how the proof of Schauder's Theorem actually uses the equivalent definitions 2 and 4 above and therefore it's implicitly assumed that $E$ and $F$ are Banach spaces.
Sketch of proof of Schauder's Theorem
$(\Rightarrow)$ To show that if $T$ is compact then $T^*$ is compact, one shows $T^*\Big(\overline{B_1^{F^*}(0)}\Big)$ is totally bounded using the fact that $T\Big(\overline{B_1^E(0)}\Big)$ is. This can be done by a clever $\varepsilon/3$ argument.
$(\Leftarrow)$ Conversely, if $T^*$ is compact we now know that $T^{**}$ is compact (same proof as above which again relies on definition 4 of compact operators). Now, recall that there is an isometric embedding map $\widehat{\cdot}: E \hookrightarrow E^{**}$ given by
$$
\widehat{\xi}(\varphi) := \varphi(\xi) \ \ \forall \xi \in E \ \forall \varphi \in E^*
$$
Now, we use definition 2 above: Let $(\xi_n)_{n=1}^\infty$ be a bounded sequence in $E$. Then, $(\widehat{\xi_n})_{n=1}^\infty$ is a bounded sequence in $E^{**}$. Since $T^{**}$ is compact, it follows that $(T^{**}(\widehat{\xi_n}))_{n=1}^{\infty}$ admits a convergent subsequence in $F^{**}$. Now notice that for any $\sigma \in F^*$, we have
$$
T^{**}(\widehat{\xi})(\sigma) = \widehat{\xi}(T^*(\sigma)) = \widehat{\xi}(\sigma \circ T) = \sigma(T(\xi))=\widehat{T(\xi)}(\sigma)
$$
That is, $T^{**}(\widehat{\xi})=\widehat{T(\xi)}$ and therefore $\|T^{**}(\widehat{\xi_n})\|=\|T(\xi_n)\|$, which implies that $(T(\xi_n))_{n=1}^\infty$ admits a convergent subsequence, so $T$ is in fact compact. $\blacksquare$
|
H: Number of ternary strings of length n such that number of 0s is greater than or equal to number of occurrences of any other digit
I understand how to count this for a binary string of a fixed length using combinations, so I think the way to go with this problem is to use an exponential generating function for each digit in $\{0, 1, 2\}$ when counting the solutions. For example, if I want to count the number of ternary strings with an even number of $0$s, we can use $\left(1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \frac{x^6}{6!} + \cdots\right)$ for the $0$s, $\left(1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\right)$ for the $1$s, and $\left(1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\right)$ for the $2$s, and then we can combine the exponential generating functions in the following way:
[image: exponential generating functions with an odd number of $0$'s]
I am not sure how I would account for more $0$s than any other digit, though. Thank you!
AI: I don't think generating functions give the best approach. You should exploit the symmetry of the situation instead. There are $3^n$ ternary strings of length $n$. In how many of them is $0$ a winner, that is, in how many of them are there at least as many $0$'s as $1$'s or $2$'s? If we count all the winners, $\frac13$ of them will be $0$, $\frac13$ of them will be $1$, and $\frac13$ of them will be $2$, by symmetry.
The only problem is that some strings have $2$ or $3$ winners. Therefore, the problem reduces to counting the number of two-way ties and three-way ties.
For example, let $n=3$. There are $27$ strings. In $6$ of them there is a three-way tie. There are no two-way ties. In $21$ cases there is a single winner. The total number of winners is $21+3\cdot6=39$, so $0$ comes in first (including ties) $$\frac{39}3=13$$ times.
Can you do the general case? (Note that three-way ties are possible only when $n$ is divisible by $3$.)
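A brute-force count (a sketch; fine for small $n$, since it enumerates all $3^n$ strings) confirms the value $13$ for $n=3$:

```python
from itertools import product

def zero_wins(n):
    """Length-n ternary strings in which 0 occurs at least as often as 1 and as 2."""
    return sum(1 for s in product((0, 1, 2), repeat=n)
               if s.count(0) >= s.count(1) and s.count(0) >= s.count(2))

for n in range(1, 8):
    print(n, zero_wins(n))   # n = 3 gives 13, matching the hand count above
```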
|
H: What is the meaning of a probability distribution parameter?
Named probability distributions are often explicitly presented as having a specific number of parameters. For example, even though the Poisson distribution PMF equation $p_K (k) = \frac{\lambda^k}{k!e^\lambda}$ has two variables, $k$ and $\lambda$, only $\lambda$ is said to be a parameter. It seems that the observed value of the random variable listed in the LHS of the PMF equation does not count as a parameter, but all other variables do. In other notation, the LHS might be written as $f(k; \lambda, [$other parameters, there aren't any for Poisson$])$, where the semicolon is used instead of a comma to delimit the observed value variables and the parameter variables.
Is this divide really a matter of conceptual importance in Probability Theory, or is it merely a decision made arbitrarily? Presented with only the RHS, $\frac{\lambda^k}{k!e^\lambda}$, I would classify $k$ and $\lambda$ symmetrically as independent variables, and the geometry expressed would be a surface in $3$-space. If we wanted to visualize the geometry as a curve in $2$-space, we'd take a slice of the surface holding either $k$ or $\lambda$ constant. In software, we could add a slider for the user to manipulate whichever variable we chose to hold constant within our slice. Thus, both variables may vary, but the curve only displays one variation at a time.
On one hand, it seems to me that the parameters of a probability distribution are nothing more than the set of variables assigned to sliders. If I decide to display $\lambda$ on the horizontal axis of the plane and create a slider for $k$, then I have represented the same information as the usual Poisson distribution PMF. Have I successfully switched the parameter and non-parameter of the Poisson distribution PMF by doing so? Is it still a Poisson distribution, or is $\lambda$ being a parameter essential to the nature of the Poisson distribution?
On the other hand, the set of parameters of a named distribution is often stated even when it is algebra rather than geometry under immediate consideration. If I have the wrong notion of a parameter above, then what is it really that distinguishes a parameter from a non-parameter?
AI: The probability mass function $p_{\lambda}$ is probabilistically interpreted as follows: if a random variable $X$ has distribution $\text{Poisson}(\lambda)$, then
$$ \Pr(X = k) = p_{\lambda}(k). $$
Swapping the roles of $\lambda$ and $k$ gives a statement which doesn't make sense: If a random variable $X$ has distribution $\text{Poisson}(k)$ then
$$ \Pr(X = \lambda) = p_{k}(\lambda). $$
You can see that the roles of $\lambda$ and $k$ are rather different. $\lambda$ indexes different probability distributions. On the other hand $k$ is a dummy variable used in the probability mass function. It is conceptually important that these two are treated differently.
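The asymmetry also shows up in how statistical libraries organize their APIs; a sketch using `scipy.stats` (the value $\lambda=3$ is an arbitrary choice):

```python
from scipy.stats import poisson

lam = 3.0             # the parameter: it picks out which Poisson distribution we mean
dist = poisson(lam)   # a "frozen" distribution, indexed by lambda
print([round(dist.pmf(k), 4) for k in range(5)])
print(sum(dist.pmf(k) for k in range(200)))  # ≈ 1.0: summing over k gives 1,
# whereas summing p_lambda(k) over lambda for fixed k has no such interpretation.
```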
|