H: Humphreys' Lemma 4.3 for Cartan's Criterion
I'm having trouble with this proof (see bottom). At the fourth-to-last line it says
$$tr(xy)=0 \implies \sum_{i=1}^na_if(a_i)=0$$
but $$tr(xy)=tr(sy)+tr(ny)=\sum_{i=1}^na_if(a_i)+tr(ny)$$
how do we know $tr(ny)=0$? The endomorphism $n$ is nilpotent and $y$ is diagonalizable, but this alone is not sufficient to infer $tr(ny)=0$. Maybe $n$ commutes with $y$?
AI: Let's review where Jordan decompositions come from. Put $x$ into Jordan normal
form with respect to some basis, then $s$ consists of its diagonal entries and
$n$ of its non-diagonal entries. Then $n$ is strictly upper-triangular.
The matrix $y$ is chosen to be diagonal in this basis. Therefore $ny$ is strictly
upper-triangular, and so has trace zero.
Alternatively $xy$ is upper-triangular with diagonal entries $a_if(a_i)$
and so its trace is $\sum_i a_if(a_i)$.
|
H: Why are check digits for UPCs computed modulo 10, whereas for ISBNs, modulo 11?
Recently I was reading a mathematics book and encountered the fact that UPC codes have a check digit at the end that is computed modulo 10, whereas ISBN numbers have a check digit that is computed modulo 11.
What puzzles me is why 10 is used as the modulus in the UPC check digit calculation, whereas 11 is used for the ISBN. What would happen if we instead used 10 for the ISBN check digit calculation?
AI: If you changed one of the digits by 5 (e.g. changed a 7 to a 2) and that digit was in a spot with an even weight, that change would not be caught using a mod 10 checksum. So the checksum would no longer be able to catch all single-digit errors. Since 11 is prime, it does not have that problem, but does have the disadvantage of occasionally needing to use a non-digit symbol for the checkdigit.
The UPC uses odd weights for each digit, so all single-digit errors are still caught even though it uses modulus 10. However, the difference of the weights of adjacent digits is even, so it won't catch adjacent transpositions if the digits differ by 5.
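For concreteness, here is a minimal Python sketch of both schemes (the digit lists are made-up examples, not real product codes). The last two lines exhibit the adjacent-transposition blind spot described above: swapping neighbouring digits that differ by 5 leaves the UPC checksum unchanged.

```python
def upc_check_digit(digits11):
    # UPC-A: weights alternate 3,1,3,1,... (both odd), so every
    # single-digit error shifts the sum mod 10 and is caught.
    s = sum(d * (3 if i % 2 == 0 else 1) for i, d in enumerate(digits11))
    return (-s) % 10

def isbn10_check_digit(digits9):
    # ISBN-10: weights 10,9,...,2; since 11 is prime every weight is
    # invertible mod 11, but the check symbol may be 10, written 'X'.
    s = sum(d * w for d, w in zip(digits9, range(10, 1, -1)))
    c = (-s) % 11
    return 'X' if c == 10 else str(c)

a = [1, 6, 3, 0, 0, 0, 2, 9, 1, 4, 5]
b = [6, 1, 3, 0, 0, 0, 2, 9, 1, 4, 5]   # adjacent digits 1,6 transposed
print(upc_check_digit(a) == upc_check_digit(b))  # True: error goes undetected
```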
|
H: Maps into products
Theorem: Let $f:A\to X×Y$ be given by the equation
$$f(a)=(f_1(a),f_2(a)).$$
Then f is continuous if and only if the functions
$f_1:A\to X$ and $f_2:A\to Y$ are continuous.
How do I prove the $\Leftarrow$ direction? I want to prove it like this: take an open set in $X×Y$ and show that its inverse image is open in $A$.
AI: If $W$ is open in $X \times Y$ then there exist open sets $U_i$ in $X$, $V_i$ in $Y$ such that $W =\bigcup_i (U_i \times V_i)$. Now $f^{-1}[W]=\bigcup_i \left(f_1^{-1}[U_i] \cap f_2^{-1}[V_i]\right)$ which is open.
|
H: Assume that $ \pi $ is a projection operator in $ E $. Prove that $ \pi^{*} $ is a projection operator in $ E^{*} $.
Suppose $ \pi: E \rightarrow E $ and $ \pi^{*}: E^{*} \rightarrow E^{*} $ are dual mappings. Assume that $ \pi $ is a projection operator in $ E $. Prove that $ \pi^{*} $ is a projection operator in $ E^{*} $.
Definition: suppose that $E,E^*$ and $F,F^*$ are two pair of dual spaces and $\varphi : E \rightarrow F $, $\varphi^*: E^* \leftarrow F^*$ are linear mappings. The mappings $\varphi $ and $\varphi^*$ are called dual if $\langle y^* , \varphi (x) \rangle = \langle \varphi^*(y^*) , x \rangle$, $y^* \in F^*$, $x \in E $. (Recall that $\langle x,y \rangle$ is called the scalar product of $x $ and $y$).
Definition: A linear transformation $\varphi :E \rightarrow E $ is called a projection operator in $E $, if $\varphi^2 = \varphi $
This exercise seemed interesting to me and I want to find a proof. I must show that $ {\pi^ *}^2 = \pi^*$. I have that
\begin{equation}
\begin{split}
\langle y^*, \pi(x) \rangle &= \langle y^*, \pi^2(x) \rangle = \langle \pi^*(y^*), x \rangle
\end{split}
\end{equation}
$y^{*} \in E^{*}, x \in E$, by definition, but could not obtain the conclusion. Could you give me a suggestion? Please.
Perhaps the fact should be used that if $\varphi $ is a projection operator in $E $, then $E = \ker\varphi \oplus \operatorname{im}\varphi$
AI: Take any $x \in E$ and any $y^* \in E^*$; we have
$$\begin{aligned}
\langle \left(\left(\pi^*\right)^2 - \pi^*\right)(y^*), x \rangle & = \langle (\pi^* \circ \pi^*)(y^*), x \rangle-\langle \pi^*(y^*), x \rangle\\
&=\langle \pi^*(y^*), \pi(x) \rangle-\langle y^*, \pi(x) \rangle\\
&=\langle y^*, (\pi \circ \pi)(x) \rangle-\langle y^*, \pi(x) \rangle\\
&=\langle y^*, (\pi \circ \pi)(x) - \pi(x)\rangle\\
&=0
\end{aligned}$$
because $\pi$ is a projection operator. This allows us to conclude that $\pi^*$ is itself also a projection operator.
|
H: Sigma fields and measures
I am studying basic measure theory, and am stuck with the following problem:
Let Ω = {(i, j) : i, j ∈ {1, . . . , 6}}, F = P(Ω) and define the
random variables $X_1$(i, j) = i and $X_2$(i, j) = j.
Is X1 + X2 measurable with respect to either σ($X_1$) or σ($X_2$)?
The given answer to this is no because :
For example, take B = {12}. Then
{ω ∈ Ω : $X_1$(ω) + $X_2$(ω) ∈ B} = {(i, j) ∈ Ω : $X_1$(i, j) + $X_2$(i, j) = 12}
= {(i, j) ∈ Ω : i + j = 12} = {(6, 6)},
which is in neither of the two sigma fields σ($X_1$), σ($X_2$).
The part that troubles me is that from what I understood, σ($X_1$) is the list of all unions of the atoms [{(i,1),(i,2)...(i,6)} for i in range (1-6 included)], which does include (6,6) in one of its atoms.
Where is my confusion ?
Thank you for reading!
AI: $X_1(i,j)=k$ means $i=k$ and $j$ arbitrary. This means $\sigma (X_1)$ consists of sets of the form $A \times \{1,2,3,4,5,6\}$ with $A \subseteq \{1,2,3,4,5,6\}$. So $\{(6,6)\}$ is not in $\sigma (X_1)$.
|
H: Log simplification
Question Image
Now this is a question of asymptotic analysis in algorithms. My question is about $H(n)= n^\frac{1}{\log n}$: first I take the log of this expression; can I apply the power rule here, so that it becomes $\frac{1}{\log n} \log n$, and the answer is just 1?
AI: Your image has $\log_2 n$ rather than $\log n$. If you take $\log_2$ of both sides you have
$$\log_2 H(n) = \frac{1}{\log_2 n}\log_2 n = 1.$$
But that means $H(n) = 2^1 = 2$. So if you're concluding that $H(n)$ is constant, you're correct.
|
H: Derivative of $p\left( x \right) = \frac{1}{{\sqrt {2\pi } }}\int\limits_x^\infty {{{\mathop{\rm e}\nolimits} ^{ - \frac{{{u^2}}}{2}}}du} $?
I have
$p\left( x \right) = \frac{1}{{\sqrt {2\pi } }}\int\limits_x^\infty {{{\mathop{\rm e}\nolimits} ^{ - \frac{{{u^2}}}{2}}}du} $
$y = - p\left( x \right)\log p\left( x \right) - (1 - p\left( x \right))\log (1 - p\left( x \right))$
I found a solution for the derivative of $y$ at $x=0$ is,
$\frac{{dy\left( 0 \right)}}{{dx}} = \frac{2}{\pi }$
But I was not able to show that. It comes out zero for me every time.
AI: Since you edited the question, I edited my answer.
FIRST PART
We will show that
$$Q'(x) = -\frac{1}{\sqrt{ 2 \pi}} \operatorname{e}^{-\frac{x^2}{2}} $$
The main tool here is the Fundamental theorem of Calculus.
First method
$$
\begin{split}
\frac{\mathrm d}{\mathrm d x} \left[ \frac{1}{\sqrt{ 2 \pi}} \int_x^\infty {{{\mathop{\rm e}\nolimits} ^{ - \frac{{{u^2}}}{2}}}\,\mathrm du} \right] &= \frac{1}{\sqrt{ 2 \pi}} \frac{\mathrm d}{\mathrm d x} \left[\int_x^\infty {{{\mathop{\rm e}\nolimits} ^{ - \frac{{{u^2}}}{2}}}\,\mathrm du}\right]\\
&\overset{t=-u}{=}\frac{1}{\sqrt{ 2 \pi}} \frac{\mathrm d}{\mathrm d x} \left[\int_{-\infty}^{-x} {{{\mathop{\rm e}\nolimits} ^{ - \frac{{{t^2}}}{2}}}\,\mathrm dt}\right]\\
&=-\frac{1}{\sqrt{ 2 \pi}} {{\mathop{\rm e}\nolimits} ^{ - \frac{{{x^2}}}{2}}}\\
\end{split}
$$
Second method (essentially the same)
Let $G(x)$ be a primitive of $\mathrm{e}^{-\frac{x^2}{2}}$, i.e. $G'(x) = \mathrm{e}^{-\frac{x^2}{2}}$; therefore
$$
\begin{split}
\frac{1}{\sqrt{ 2 \pi}} \frac{\mathrm d}{\mathrm d x} \left[ \int_x^\infty {{{\mathop{\rm e}\nolimits} ^{ - \frac{{{u^2}}}{2}}}\,\mathrm du} \right] &= \frac{1}{\sqrt{ 2 \pi}} \frac{\mathrm d}{\mathrm d x}\left[G(+\infty)-G(x)\right] \\
&= -\frac{1}{\sqrt{ 2 \pi}} G'(x) \\
&= -\frac{1}{\sqrt{ 2 \pi}} \operatorname{e}^{-\frac{x^2}{2}}
\end{split}
$$
Third method
Remembering that $\int_{-\infty}^{+\infty}\operatorname e^{-\frac{u^2}{2}}\, \mathrm du = \sqrt{2\pi}$ we can write
$$
\int_{x}^{+\infty}\operatorname e^{-\frac{u^2}{2}}\, \mathrm du = \int_{-\infty}^{+\infty}\operatorname e^{-\frac{u^2}{2}}\, \mathrm du -\int_{-\infty}^x \operatorname e^{-\frac{u^2}{2}}\, \mathrm du = \sqrt{2\pi} - \int_{-\infty}^x \operatorname e^{-\frac{u^2}{2}}\, \mathrm du
$$
And then apply FTC.
SECOND PART
According to your notation we have
$$C(x) := 2 \log 2 -2h\left(Q\left(\sqrt x\right)\right) $$
If we evaluate $\dot C$, thanks to the chain rule we have
$$
\begin{split}
\dot C (x) &= -2h'\left(Q\left(\sqrt x\right)\right) \cdot Q'\left(\sqrt x\right) \cdot \frac{1}{2 \sqrt x} \\
&= -\log \left[ \frac{Q\left(\sqrt x\right)}{1-Q\left(\sqrt x\right)} \right]\cdot \frac{1}{\sqrt{ 2 \pi}} \operatorname{e}^{-\frac{x}{2}} \cdot \frac{1}{\sqrt x}\\
&= \frac{\operatorname{e}^{-\frac{x}{2}}}{\sqrt{ 2 \pi}} \cdot \frac{\log \left( 1 - Q\left(\sqrt x\right)\right) -\log Q\left(\sqrt x\right) }{\sqrt x}
\end{split}
$$
Since $Q(0)=\frac{1}{2}$, we obtain an indeterminate form $\frac{0}{0}$. Can you handle it from here?
THIRD PART
We have to evaluate
$$\lim_{x \to 0}\frac{\log \left( 1 - Q\left(\sqrt x\right)\right) -\log Q\left(\sqrt x\right) }{\sqrt x} \overset{y=\sqrt x}{=} \lim_{y \to 0}\frac{\log \left( 1 - Q(y) \right) -\log Q(y) }{y}$$
to be continued...
|
H: Bounded partial derivatives on convex set implies uniform continuity
Let $f:\mathbb{R^2} \rightarrow \mathbb{R}$, where $A \subset \mathbb{R^2}$ is convex and $f_x, f_y$ are bounded on $A$. How does one actually show that $f$ is uniformly continuous on $A$? I imagine that I have to use the Mean Value Theorem like this:
$$|f(x_0, y_0) - f(x,y)| \leq |f(x_0,y_0) - f(x, y_0)| + |f(x, y_0) - f(x, y)| \leq M|x-x_0| + M|y-y_0|$$
where $M$ is some bound on partial derivatives. How do I get this bound from a convex set?
I have one more question. Why do I need set $A$ to be convex? I don't see how to get a counter-example.
AI: Let $M$ be an upper bound of both $f_x$ and $f_y$. Take $(x_1,x_2),(y_1,y_2)\in A$. For each $t\in[0,1]$, let$$\varphi(t)=f\bigl((1-t)(x_1,x_2)+t(y_1,y_2)\bigr).$$Then\begin{align*}\varphi'(t)&=Df_{(1-t)(x_1,x_2)+t(y_1,y_2)}((y_1,y_2)-(x_1,x_2))\\&=f_x((1-t)(x_1,x_2)+t(y_1,y_2)).(y_1-x_1)+\\&\phantom=+f_y((1-t)(x_1,x_2)+t(y_1,y_2)).(y_2-x_2)\end{align*}and therefore$$|\varphi'(t)|\leqslant M\bigl(|y_1-x_1|+|y_2-x_2|\bigr)\leqslant\sqrt2M\|(y_1,y_2)-(x_1,x_2)\|.$$So, by the mean value theorem,$$|f(y_1,y_2)-f(x_1,x_2)|=|\varphi(1)-\varphi(0)|\leqslant\sqrt2M\|(y_1,y_2)-(x_1,x_2)\|.$$Since this occurs for each two points of $A$, then, given $\varepsilon>0$, if you take $\delta=\frac\varepsilon{\sqrt2M}$, you have$$\|(y_1,y_2)-(x_1,x_2)\|<\delta\implies\bigl|f(y_1,y_2)-f(x_1,x_2)\bigr|<\varepsilon.$$
If $A$ is not convex, you may take $A=\{(x,y)\in\Bbb R^2\mid y\neq0\}$ and $f(x,y)=\operatorname{sgn}(y)$. Then $f_x$ and $f_y$ are bounded, but $f$ is not uniformly continuous.
|
H: Trying to understand the chain rule for partial derivatives
So I've been studying the chain rule for partial derivatives recently and I'm having an extremely difficult time wrapping my head around it, since the formulation of the chain rule in my textbook is very hard for me to understand. In an attempt to understand how it works I've been going over some of the exercises in the back of the book to see if I could at least apply it. I came to an exercise that looks rather simple, but I'm not sure how to solve it. The exercise is as follows:
Consider a differentiable function $f:\mathbb{R^2}\rightarrow\mathbb{R}$, now find the partial derivative of the function:
$F:(x,y)\rightarrow f(2x,3y)$ (State the result in terms of the partial derivatives of $f$)
Now in terms of what I want to find I'm having some doubts around one of the deriavtives. What I find is:
$\frac{\partial F(x,y)}{\partial x}=\frac{\partial f}{\partial y_1}(2x,3y)\frac{\partial(2x)}{\partial x}+\frac{\partial f}{\partial y_2}(2x,3y)\frac{\partial(3y)}{\partial y}$
My issue here is that I can't really figure out what $y_1$ and $y_2$ are supposed to be, and the formulation in my textbook is really confusing, I'd really appreciate if anyone could clarify this for me.
AI: Maybe some other choice of letters can help. Imagine that $f: \mathbb{R}^2\to \mathbb{R}$ is defined by $(u,v) \mapsto f(u,v)$ and $F(x,y)=f(2x, 3y)$. The chain rule says that
$$
\frac{\partial F}{\partial x} = \frac{\partial f}{\partial u} \frac{\partial u}{\partial x} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial x}
$$
$$
\frac{\partial F}{\partial y} = \frac{\partial f}{\partial u} \frac{\partial u}{\partial y} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial y},
$$
where $u = 2x,\ v=3y$.
|
H: Picking colored balls randomly.
I created this post yesterday but I have a further enquiry that I will post here.
Among 13 balls we have: 5 blue, 4 red, 4 green.
We randomly select 6 balls without replacement, what is the probability of having 2 blue, 2 red and 1 green? (The color of the last ball does not matter)
I think that this could be done by taking:
${5 \choose 2}\times{4 \choose 2}\times{4 \choose 1}\times{8 \choose 1} = 1920$
as the total number of combinations that have 2 blue, 2 red and 1 green, then dividing by the total number of 6-ball hands, ${13 \choose 6} = 1716$.
Obviously there is an issue here, as $\frac{1920}{1716}>1$, but I'm not sure what's wrong.
AI: The problem with that hypergeometric-distribution way of solving it is the $\binom{8}{1}$. That would suggest that there are $5+4+4+8 = 21$ balls in total.
You could for example distinguish between the cases for the color of the last ball:
${5\choose 3} \cdot {4\choose 2} \cdot {4 \choose 1} + {5\choose 2} \cdot {4\choose 3} \cdot {4 \choose 1} + {5\choose 2} \cdot {4\choose 2} \cdot {4 \choose 2}$
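As a sanity check, this corrected count is easy to brute-force in Python (a minimal sketch; "the colour of the last ball does not matter" is read as "at least 2 blue, at least 2 red, at least 1 green"):

```python
from itertools import combinations
from math import comb

balls = ['B']*5 + ['R']*4 + ['G']*4   # label the 13 balls
favorable = 0
for hand in combinations(range(13), 6):
    colors = [balls[i] for i in hand]
    if colors.count('B') >= 2 and colors.count('R') >= 2 and colors.count('G') >= 1:
        favorable += 1

by_cases = (comb(5,3)*comb(4,2)*comb(4,1)
            + comb(5,2)*comb(4,3)*comb(4,1)
            + comb(5,2)*comb(4,2)*comb(4,2))
print(favorable, by_cases, comb(13, 6))  # 760 760 1716
print(favorable / comb(13, 6))           # ~0.443
```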
|
H: Is it true that if an ideal $I$ of a ring $R$ can be written as the product of ideals $J$ and $K$ then $I \subseteq J$ and $I \subseteq K$?
I just proof-read someone's proof, and in it the assumption is used that if $I$ is an ideal of a ring $R$ such that $I = JK$ for some other ideals $J$ and $K$, then $I \subseteq J$ and $I \subseteq K$. The proof claims that this follows from the definition of the product of two ideals, but I cannot see why this should be true. Is it perhaps true under certain circumstances (in this case, for example, $I$ was a principal ideal, and $R$ was a Dedekind domain)? Or is the assumption just wrong, and is the proof therefore plainly false? Or is it just true, and is it something I just don't see?
AI: The ideal $JK$ is the additive subgroup generated by elements $ab$ with $a\in J$ and $b\in K$. All these $ab$ lie in $J$, since $a\in J$ and $b\in R$ and $J$ is an ideal. As $J$ is closed under addition, all elements of $JK$ lie in $J$.
|
H: Find a matrix $A$ such that $B=AA^T$
I'm trying to write a fokker-planck equation as an SDE. I know my diffusion matrix, $D(\mathbf{x})$, where $D = \frac{1}{2}\sigma \sigma^T$.
How can I find this sigma, to then use in the SDE?
Edit: I don't think the fokker-planck bit is relevant to the question, it's just there for context (unless someone comes along and suggests a better way of going about it, which would be much appreciated!)
Edit 2: I've seen another answer that gives the answer if the matrix is positive definite. However, the entries in my matrix can vary, and I think it's unlikely all the eigenvalues will remain positive
AI: If the matrix $B$ is not positive semi-definite, then there is no such matrix $A$.
For suppose that
$$
Bv = c v
$$
for some negative number $c$, and unit vector $v$, and suppose that such an $A$ exists. Then
\begin{align}
\langle Bv, v \rangle &= c < 0.
\end{align}
But we can also say
\begin{align}
\langle Bv, v \rangle &=
\langle AA^tv, v \rangle \\
&=(AA^tv)^t v \\
&=(v^t A^t A) v \\
&=(v^t A^t) (A v) \\
&= \langle Av, Av \rangle \\
&= \| Av \|^2 \ge 0\\
\end{align}
which is a contradiction.
So: for the matrix $B$ to have a "square root" like $A$ requires that $B$ be positive semidefinite. By the way, since $AA^t$ is symmetric, it also requires that $B$ be symmetric.
Those other proofs you saw: they weren't just being stupid by requiring symmetric positive semi-definiteness. They were actually proving the strongest theorem possible.
For actually finding the matrix $A$, @celtschk's suggestion is spot-on. The reason it didn't work in your test case is that your input matrix, $B$, wasn't symmetric. When it IS symmetric, you get
$$
B = U D V^t
$$
but it'll turn out that $V$ is actually equal to $U$, and $U^t U = I$, so you get
\begin{align}
B &= U D U^t
\end{align}
at which point you can define $E = D^\frac12$, and note that $E$ is diagonal so that $E^t = E$, and thus
\begin{align}
B
&= U D U^t\\
&= U E^2 U^t\\
&= U E^t E U^t\\
&= U E^t I E U^t\\
&= U E^t (U^t U) E U^t\\
&= (U E^t U^t) (U E U^t)\\
&= S^t S
\end{align}
where $S = UEU^t$.
You can shove the factor of $\frac12$ in there wherever you'd like it.
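Here is a minimal numpy sketch of this recipe. It uses `numpy.linalg.eigh` (the eigendecomposition for symmetric matrices) instead of a general SVD, and the input $B$ is generated to be symmetric positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
B = M @ M.T                               # symmetric PSD by construction

w, U = np.linalg.eigh(B)                  # B = U diag(w) U^T
E = np.diag(np.sqrt(np.clip(w, 0, None))) # clip guards tiny negative round-off
S = U @ E @ U.T                           # symmetric "square root" of B

print(np.allclose(S @ S.T, B))            # True: B = S^T S = S S^T (S symmetric)
```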
|
H: integral calculation mistake
I try to solve this:
$
\int_{0}^{\pi /2}\sin x \cos x\sqrt{1+\cos^{2}x } dx
$
This is what I do:
$ \cos x = t;\ -\sin x\, dx = dt;\ dx = -\frac{dt}{\sqrt{1-t^{2}}} $
$ \int\frac{-\sqrt{1-t^{2}}\cdot t\cdot\sqrt{1+t^{2}}}{-\sqrt{1-t^{2}}}\, dt $
$-\int_{0}^{1} t * \sqrt{1+t^{2}} dt$
$1+t^2 = a; 2tdt = da; tdt = da/2$
$-\int_{0}^{2}\sqrt{a}da = - \frac{2\sqrt{2}}{3}$
But the answer is $\frac{2\sqrt{2}}{3} - \frac{1}{3}$
Where do I make the mistake?
AI: After your first substitution you should have $-\int_{1}^{0}t\sqrt{1+t^{2}}dt$= $\int_{0}^{1}t\sqrt{1+t^{2}}dt$.
The limits for your second substitution should be $a=1$ (at $t=0$) and $a=2$ (at $t=1$), so we have $\frac{1}{2}\int_{1}^{2} \sqrt{a}\,da=\frac{1}{3}a^{\frac{3}{2}}\Big|^{a=2}_{a=1}=\frac{2\sqrt{2}}{3}-\frac{1}{3}$ as required.
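A quick numerical check of the corrected value (a throwaway scipy sketch):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.sin(x) * np.cos(x) * np.sqrt(1 + np.cos(x)**2),
              0, np.pi / 2)
print(val, 2*np.sqrt(2)/3 - 1/3)   # both ~0.6095
```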
|
H: New Criteria of Isomorphism + Abelian Group
The following questions were suggested by my friend, while we were studying fundamental group theory. We had no exact ideas of the way to approach the problems.
Questions
(1) Let $G$ and $H$ be groups such that $|H|=|G|$. If we can make a bijection $\phi:G\to H$ such that for $\forall a\in G$, $ord(a)=ord(\phi(a))$, then does $G\cong H$?
(2) Let $G$ be a group. If all of the elements in $G$ except $e$ have the same order, then is $G$ an abelian group?
It was not easy to find counter-examples for either suggestion. So we wanted to prove these statements instead, but that was again too difficult. We also failed to find these types of questions on the internet. Are there theories or propositions related to these problems? Thanks.
AI: (1) Let $G$ and $H$ be groups such that $|H|=|G|$. If we can make a bijection $\phi:G\to H$ such that for $\forall a\in G$, $ord(a)=ord(\phi(a))$, then does $G\cong H$?
Yes, if both are finite abelian. This follows from the classification of finite abelian groups, see here.
No, if both are infinite, even abelian. The simple counterexample is $\mathbb{Z}$ and $\mathbb{Z}^2$.
No, if one of them is non-abelian, even when both finite: the Heisenberg group over $\mathbb{Z}_p$ and the corresponding $(\mathbb{Z}_p)^n$.
For a deeper discussion on the subject read this: Is a finite group uniquely determined by the orders of its elements?
(2) Let $G$ be a group. If all of the elements in $G$ except $e$ have the same order, then is $G$ an abelian group?
No. Neither in finite nor in infinite case (with finite order). For finite case we have the already mentioned Heisenberg group, for infinite case the Tarski monster group.
|
H: Why is this not an equivalence relation on real functions? $(\exists c\in\mathbb{R})(\forall x\in\mathbb{R})|f(x)-g(x)|=c$
Say we have the following relation on the set of all functions $\mathbb{R} \to \mathbb{R}$
$$(\exists c \in \mathbb{R})(\forall x \in \mathbb{R})|f(x) - g(x)| = c$$
I'm having trouble understanding why this relation isn't an equivalence relation.
I know that the relation is reflexive, as $f(x) - f(x) = 0$, $0 \in \mathbb{R}$.
But I'm having trouble when it comes to symmetry and transitivity.
AI: Say $f(x)= -1$ if $x <0$ and $f(x) = 1$ if $x \geqslant 0$. Then $|f(x) - 0| = 1$ and $f$ is related to the null function.
The constant function $g(x) = 1$ is also related to the null function.
But $f$ and $g$ are not related as $|f(x)-g(x)|$ is non-constant!
I think if you restrict your relation to continuous functions, you avoid this kind of behavior and maybe it would define an equivalence relation on this set.
|
H: Turning a fraction with repeating decimals into a mixed number: why doesn't this work?
Problem:
Turn $\frac{0.\overline{48}}{0.\overline{15}} $ into a mixed number.
My solution:
$0.\overline{15}$ goes into $0.\overline{48}$ 3 times, with a remainder of $0.\overline{48} - 3 \times 0.\overline{15} = 0.\overline{48}-0.\overline{45}= 0.\overline{03}$
Let $x=0.\overline{03}$. Then $100x -x = 3.\overline{03} - 0.\overline{03} = 3$, so $99 x = 3$, hence $x = \frac{3}{99}= \frac{1}{33}$. Since the remainder is $\frac{1}{33}$, the mixed number I'm looking for is $3\frac{1}{33}$, but the book gives $3\frac{1}{5}$ as the result. Where am I wrong? Does it have anything to do with the way I multiplied $0.\overline{15}$ by 3?
AI: When you want to compute, say,
$$
\frac{25}{7}
$$
you say "$7$ goes into $25$ three times, with a remainder of $4$."
But does that mean that
$$
\frac{25}{7} = 3 + 4?
$$
Not at all! It means that
$$
\frac{25}{7} = 3 + \frac{4}{7}.
$$
By analogy, in your case, you have
$$
\frac{0.\overline{48}}{0.\overline{15}}
$$
is $3$, with a remainder of $0.\overline{03}$, which means that
\begin{align}
\frac{0.\overline{48}}{0.\overline{15}} = 3 + \frac{0.\overline{03}}{0.\overline{15}}
\end{align}
You still have to simplify that last fraction but that's relatively easy: You can write
\begin{align}
\frac{0.\overline{03}}{0.\overline{15}}
&= \frac{1}{10} \frac{0.\overline{30}}{0.\overline{15}} \\
&= \frac{1}{10} 2 \\
&= \frac{2}{10}\\
&= \frac{1}{5},
\end{align}
although your text may have some other way of reducing that to get the same answer --- I just happened to notice that the "3" and the "15" could be made to cancel nicely if it was "30" and "15" instead.
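If you want to double-check the whole computation with exact arithmetic, Python's `fractions` module does it in three lines (using $0.\overline{48}=48/99$ and $0.\overline{15}=15/99$):

```python
from fractions import Fraction

x = Fraction(48, 99) / Fraction(15, 99)      # = 48/15 = 16/5
whole, rem = divmod(x.numerator, x.denominator)
print(whole, Fraction(rem, x.denominator))   # 3 1/5
```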
|
H: Space of meromorphic functions is not finitely generated
During a lesson of the course on Riemann Surfaces our lecturer made the following remark, saying this could be proved as an exercise:
The space $\mathcal{M}(X)$ of meromorphic functions on a compact Riemann surface, if not empty, is not finite dimensional (as a vector space over $\mathbb{C}$).
This is a bit confusing: this was stated before Riemann-Roch, whence the "if not empty". Anyway, holomorphic functions are always also meromorphic, so this is in any case a problem. Besides, the space of meromorphic functions which are not holomorphic is not a vector space, so what is meant here? I have adjusted the statement as follows (still in a "pre-Riemann-Roch" setting):
Let $X$ be a compact Riemann surface. If there exists a function $f \in \mathcal{M}(X)-\mathcal{O}(X)$, then $\mathcal{M}(X)$ is infinite dimensional. Does this hold true? How can this be proved (without Riemann-Roch)?
I have thought of going by induction on the number of generators. I am able to prove directly that it cannot be generated by two elements: suppose these two elements are called $g$ and $h$; one of the two, say $g$, is a constant. Then if $f$ (the same as above) has a pole of order $n$ at a point $p$, also $h$ must have a pole of order $n$ at $p$; now $h^2$ is also meromorphic but cannot be written as a linear combination of $g$ and $h$, since $\mathrm{ord}_p h^2 = 2n$.
Now I'd like to prove that if $\mathcal{M}(X)$ is generated by $n$ functions, it is also generated by $n-1$ functions; but how?
Alternatively: every "polynomial in $f$", that is any expression of the form $a_0+ a_1 f + \ldots + a_n f^n$, where powers are respect to point-wise multiplication, is again in $\mathcal{M}(X)$, if this map $\mathbb{C}[z] \to \mathcal{M}(X)$ is injective, we are done. But is it?
AI: As long as you have a meromorphic function $f$ on a compact Riemann surface that is not constant, then $f$ will have a pole, say of order $m$ at a point $P$. Then the $f^n$
are linearly independent over $\Bbb C$ since $f^n$ has a pole of order $mn$ at $P$,
and functions with poles of different orders at $P$ are linearly independent.
|
H: Linear algebra - diagonalizable matrix: find matrices P and D such that A = PDP^-1
Provide a P and a diagonal matrix D such that A = PDP^-1
Given:
$$A=\begin{bmatrix} -1-5i & 1+2i & 1+7i \\ -4-14i & 3+6i & 1+19i \\ -6+4i & 3-2i & 5-5i \end{bmatrix}$$
$\lambda=1-i,\ 2-3i,\ 4$
The matrix P would be: ____
The matrix D would be: ____
So I'm struggling to figure out how I would find the P and D matrices from matrix A. To find P, I thought I would have to find the eigenbasis of A which gave me:
$$P=\begin{bmatrix} -6+4i & 3-2i & 5-5i \\ 0 & 1/2-1/2i & -12/13+31/13i \\ 0 & 0 & -48/13+20/13i \end{bmatrix}$$
And I thought for matrix D the general rule would just be to plug in the given eigenvalues diagonally in an empty matrix:
$$D=\begin{bmatrix} 1-i & 0 & 0 \\ 0 & 2-3i & 0 \\ 0 & 0 & 4 \end{bmatrix}$$
I'm told my answer is wrong, however. I'm not exactly sure if both my matrices are wrong or if it's just one of them, and I don't quite understand how I would achieve the correct values. I would appreciate any help or a push in the right direction!
AI: To find the columns of $P$, find the eigenvectors associated with each eigenvalue of $A$.
So for example, to find the first column of $P$, we find the eigenvector associated with the first eigenvalue, $\lambda = 1 - i$. That is, we have to find a non-zero solution to the equation $(A - \lambda I)x = 0$. One approach to this is to row-reduce the matrix $A - \lambda I$. Doing this gives us
$$
\left(\begin{array}{ccc} -2-4{}\mathrm{i} & 1+2{}\mathrm{i} & 1+7{}\mathrm{i}\\ -4-14{}\mathrm{i} & 2+7{}\mathrm{i} & 1+19{}\mathrm{i}\\ -6+4{}\mathrm{i} & 3-2{}\mathrm{i} & 4-4{}\mathrm{i} \end{array}\right)
\leadsto
\left(\begin{array}{ccc} 1 & -\frac{1}{2} & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{array}\right).
$$
Let $R$ denote the row-reduced form on the right. We want a non-zero solution to $Rx = 0$. We can write this system as
$$
\begin{cases}
x_1 - \frac 12 x_2 = 0\\
x_3 = 0
\end{cases} \implies
\begin{cases}
x_2 = 2x_1\\
x_3 = 0.
\end{cases}
$$
So, taking $x_1 = 1$ gives us the vector $x = (1,2,0)$, which we can take as the first column of our matrix $P$.
Following a similar procedure leads to $(-2 + i,\ -5 + 2i,\ 1)$ as the second column and $(2i,\ -1 + 5i,\ 1)$ as the third column.
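A quick numpy check that these columns really diagonalize $A$ (verifying $AP = PD$, equivalently $A = PDP^{-1}$):

```python
import numpy as np

A = np.array([[-1-5j,  1+2j,  1+19j*0 + 1+7j - (1+7j) + (1+7j)],  # see below
              ])
# Written out plainly to avoid typos:
A = np.array([[-1-5j,  1+2j,  1+7j],
              [-4-14j, 3+6j,  1+19j],
              [-6+4j,  3-2j,  5-5j]])
P = np.array([[1, -2+1j, 2j],
              [2, -5+2j, -1+5j],
              [0,  1,    1]])
D = np.diag([1-1j, 2-3j, 4])

print(np.allclose(A @ P, P @ D))                  # True
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True
```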
|
H: Truncated Fourier series
Let $f\in L^2[0,2\pi]$. Suppose that $\exists k\in\mathbb{N}$ s.t. the Fourier coefficients $a_n,b_n$ of $f$ vanish for $n\geq k$. In this situation, can we conclude that $f(x)=\frac{1}{2}a_0+\sum_{n=1}^{k-1}(a_n\cos nx+b_n\sin nx)$ for a.e. $x\in[0,2\pi]$? It seems that we cannot use the Fejer theorem without assuming continuity. Any idea? Thanks.
AI: Let $(S_j)$ be the sequence of partial sums of the Fourier series. Then $S_j \to f$ in $L^{2}$. Hence there is a subsequence which converges to $f$ almost everywhere. But $S_j =\sum\limits_{n=1}^{k-1}(a_n\cos(nx)+b_n \sin (nx))$ for all $j$ sufficiently large. Hence $f =\sum\limits_{n=1}^{k-1}(a_n\cos(nx)+b_n \sin (nx))$ almost everywhere.
|
H: Existence of limits and colimits in a pointed category.
I am reading Mark Hovey's model category theory.
In the first chapter, on page $4$, we have a category $\mathcal C$ which has all small limits and colimits. He claims that the pointed category ${\mathcal C}_*$ has arbitrary limits and colimits. He says:
Indeed, if $F : I \to {\mathcal C}_∗$ is a functor from a small category $I$ to ${\mathcal C}_∗$, the limit of $F$ as a functor to $\mathcal{C}$ is naturally
an element of $\mathcal C_∗$ and is the limit there.
I am not able to follow the argument. How did he think of $F$ as a functor to $\mathcal{C}$? Did he forget the base point?
AI: Yes, you can simply compose $F$ with the forgetful functor $U: C_*\to C$.
The claim is that this forgetful functor creates limits.
The argument is actually pretty simple : let $*\to \lim(U\circ F)$ be defined by the universal property of the limit applied to $*\to U\circ F$. This defines an object $L\in C_*$, and it's easy to check that the maps $UL\to UF(i)$ are pointed maps, indeed $*\to \lim(U\circ F)\to UF(i)$ is by definition the basepoint of $F(i)$, so we actually get a cone $L\to F$.
Checking that it's a limit is straightforward.
|
H: Why must this density function be greater than zero almost everywhere?
In Klenke's Probability book, in Example 8.31, he states
Why is it that $f(x)>0$ a.a.? There are several densities for which we have $f(x)=0$ outside a compact set... For example, we can see the densities of the Beta distribution as zero outside the $[0,1]$ interval.
AI: The term 'almost everywhere' requires a reference measure. You are taking the reference measure as Lebesgue measure, but the author is taking $P_X$ as the reference measure. Since $P_X (\{x: f (x)=0\})=\int_{\{x: f (x)=0\}} f(x)dx=0$ it follows that $f>0$ almost everywhere w.r.t. $P_X$.
|
H: I have to calculate this Dirac integral: $I=\int_{-1}^{0}\delta(4t+1)dt$
How can I evaluate the following integral?
$$ I=\int_{-1}^{0}\delta(4t+1) \, dt $$
Here is my working out so far:
\begin{align*}
I
&=\int_{-1}^{0} \delta(4t+1) \, dt \\
&=\int \delta \cdot (4t+1) \, dt \\
&=\int \delta \cdot 4\cdot \left(t+\tfrac{1}{4}\right) \, dt
\end{align*}
Noting $u=t+\frac{1}{4}$, $du = dt$, we have:
\begin{align*}
4\int \delta\cdot u\cdot du
&= 4\left[\left(\delta \cdot \frac{u^{2}}{2}\right) - \left(\delta\cdot\frac{u^{2}}{2}\right)\right] \\
&= 4\left[\left(\delta \cdot \frac{0^{2}}{2}\right) - \left(\delta\cdot\frac{-1^{2}}{2}\right)\right] \\
\end{align*}
AI: Well, it is not hard to show that:
$$\int\delta\left(\text{a}x+\text{b}\right)\space\text{d}x=\frac{\theta\left(\text{a}x+\text{b}\right)}{\text{a}}+\text{C}\tag1$$
Where $\theta(\cdot)$ is the Heaviside step function.
So, in your case you will get:
$$\int_{-1}^0\delta\left(4x+1\right)\space\text{d}x=\frac{\theta\left(4\cdot0+1\right)}{4}-\frac{\theta\left(4\cdot(-1)+1\right)}{4}=$$
$$\frac{\theta\left(1\right)}{4}-\frac{\theta\left(-3\right)}{4}=\frac{1}{4}-\frac{0}{4}=\frac{1}{4}\tag2$$
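One way to make this concrete is to replace $\delta$ by a narrow Gaussian approximation $\delta_\varepsilon$ and watch the integral converge to $1/4$ (a minimal scipy sketch; the `points` hint tells the quadrature where the spike at $t=-1/4$ sits):

```python
import numpy as np
from scipy.integrate import quad

def delta_eps(x, eps):
    # Gaussian nascent delta function of width eps
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

for eps in (0.1, 0.01, 0.001):
    val, _ = quad(lambda t: delta_eps(4*t + 1, eps), -1, 0, points=[-0.25])
    print(eps, val)   # tends to 0.25
```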
|
H: How do I find the real general solution of this third order ODE?
I have this inhomogenous ODE
$$y'''-y''+y'-y= 2e^{ \omega x} $$
where $ \omega \in \mathbb{R} $
I want to find the real (!) general solution to it.
The problem starts in the beginning:
The characteristic polynomial is $ \lambda^3- \lambda^2 + \lambda - 1 = 0 $
so $ \lambda_1= 1, \lambda_2= i, \lambda_3=-i $
How do I get to the real solution? any help very appreciated!
AI: For $\omega\neq1$, to find a particular integral (PI) we look for a solution of the form $y_{PI}(x)=Ce^{\omega x}$, and substituting into the differential equation gives $C= \frac{2}{(\omega -1)({\omega}^{2}+1)}$. Then, using Euler's formula $e^{i\theta}=\cos(\theta)+i\sin(\theta)$, we have the general solution $y(x)=\frac{2e^{\omega x}}{(\omega -1)({\omega}^{2}+1)}+c_{1}\cos(x)+c_{2}\sin(x)+c_{3}e^{x}$.
For $\omega=1$ we look for a particular integral of the form $y=Bxe^{x}$, and substituting into the ODE gives $B=1$; hence the general solution in this case is $y(x)=c_{1}\cos(x)+c_{2}\sin(x)+c_{3}e^{x}+xe^{x}$.
|
H: Sections on a finite union of principal open subsets in affine $n$-space
This is exercise 2.5.12 of Liu's Algebraic Geometry.
Let $k$ be a field. Let $X = \bigcup_{i=1}^rD(f_i)$ be a finite union of principal open subsets of $\mathbb{A}_k^n$. Show that $\mathcal{O}_{\mathbb{A}_k^n}(X) = k[T_1,\dots,T_n]_f$ where $f = \mathrm{gcd}(f_1,\dots,f_r)$.
Can anyone help me solve this? Thank you.
I have some progress:
First, note that $X \subset D(f)$. So we have restriction maps
$$
\mathcal{O}_{\mathbb{A}_k^n}(D(f)) \to \mathcal{O}_{\mathbb{A}_k^n}(X) \to
\mathcal{O}_{\mathbb{A}_k^n}(D(f_i))
$$
Since $\mathbb{A}_k^n$ is an integral scheme, all the above restriction maps are injective.
It suffices to show $\mathcal{O}_{\mathbb{A}_k^n}(D(f)) \to \mathcal{O}_{\mathbb{A}_k^n}(X)$ is
surjective. Elements of $\mathcal{O}_{\mathbb{A}_k^n}(X)$ are in one-to-one correspondence
with elements $(a_1,\dots,a_r) \in \prod_{i=1}^r\mathcal{O}_{\mathbb{A}_k^n}(D(f_i))$ satisfying
$a_i|_{D(f_if_j)} = a_j|_{D(f_if_j)}$ for all $i,j \in [r]$. So, it suffices to find $a \in \mathcal{O}_{\mathbb{A}_k^n}(D(f))$
satisfying $a|_{D(f_i)} = a_i$ for all $i \in [r]$.
Suppose $a_i = g_i/f_i^u \in \mathcal{O}_{\mathbb{A}_k^n}(D(f_i)) = k[T_1,\dots,T_n]_{f_i}$.
(Because there are finitely many $a_i$, $u$ can be chosen independent of $i$.)
$a_i|_{D(f_if_j)} = a_j|_{D(f_if_j)}$ then means
$$
\frac{g_if_j^u}{(f_if_j)^u} = \frac{g_jf_i^u}{(f_if_j)^u}, \quad \mathrm{i.e.,} \quad
g_if_j^u = g_jf_i^u
$$
All the rings above can be thought of as subrings of $k(T_1,\dots,T_n)$. So, in $k(T_1,\dots,T_n)$, we
have
$$
\frac{g_i}{f_i^u} = \frac{g_j}{f_j^u}
$$
Here I got stuck. I cannot find $g/f^l$ to represent $g_i/f_i^u$ simultaneously.
AI: The main point that you are not using is that $k[x_1,\ldots,x_n]$ is a UFD.
Thus, any element $\alpha$ of $k(x_1,\ldots,x_n)$ can be written in a unique way (up to scalars) as a quotient $\frac{a}{b}$ with $a, b \in k[x_1,\ldots,x_n]$ having no common irreducible divisor.
Then, for any nonzero polynomial $g$, $\alpha \in k[x_1,\ldots,x_n]_g$ iff $b$ divides some power of $g$, that is, if every irreducible factor of the denominator of $\alpha$ occurs in $g$.
Can you finish the proof using that?
|
H: Confusion over a trigonometric function
I found 2 solutions
But they say it is:
It feels like there's a mistake in the shift here; I tested it on Desmos and the function doesn't reflect the perihelion and aphelion years. Do you guys have a different understanding of the shift here? Thanks :)
AI: You're confused because you are looking at the wrong section of the graph. Your $t$ is the time before/after the year 2000. So, your horizontal axis should be set to something like $(-50, 150)$, rather than something like $(1950,2150)$.
So, their answer is correct.
|
H: Evaluate $\int_0^1 \ln{\left(\Gamma(x)\right)}\cos^2{(\pi x)} \; {\mathrm{d}x}$
I have stumbled across the following integral and have struck a dead end...
$$\int_0^1 \ln{\left(\Gamma(x)\right)}\cos^2{(\pi x)} \; {\mathrm{d}x}$$
Where $\Gamma(x)$ is the Gamma function.
I tried expressing $\Gamma(x)$ as $(x-1)!$ then using log properties to split the integral. Maybe there should be a summation in combination with the integral?? I believe this integral has a closed form but I would like help finding it.
AI: The key to evaluating this integral is to utilize Euler's reflection formula (whose proof can be looked up elsewhere), by substituting $u=1-x$ so that the Gamma function "disappears":
$$I=\int_0^1 \ln{\left(\Gamma(1-u)\right)}\cos^2{(\pi u)} \; \mathrm{d}u$$
Now, add the original integral:
\begin{align*}
2I&=\int_0^1 \ln{\left(\Gamma(x)\Gamma(1-x)\right)}\cos^2{(\pi x)} \; \mathrm{d}x \\
I&=\frac{1}{2} \int_0^1 \ln{\left(\frac{\pi}{\sin{(\pi x)}}\right)}\cos^2{(\pi x)} \; \mathrm{d}x \\
I&\overset{\pi x \to x}=\frac{1}{2 \pi} \int_0^{\pi} \ln{\left(\frac{\pi}{\sin{(x)}}\right)}\cos^2{(x)} \; \mathrm{d}x \\
&=\frac{\ln{\pi}}{2 \pi} \int_0^{\pi} \cos^2{(x)} \; \mathrm{d}x-\frac{1}{2 \pi} \int_0^{\pi} \cos^2{(x)} \ln{\left({\sin{(x)}}\right)} \; \mathrm{d}x \\
&= \frac{\ln{\pi}}{4}- \frac{1}{4 \pi}\underbrace{ \int_0^{\pi} \ln{(\sin{x})} \; \mathrm{d}x}_{I_1} - \frac{1}{4 \pi}\underbrace{ \int_0^{\pi} \cos{(2x)} \ln{(\sin{x})} \; \mathrm{d}x}_{I_2}\\
\end{align*}
Now, to calculate $I_1$, use symmetry and let $u=\frac{\pi}{2}-x$, then add the two integrals:
\begin{align*}
I_1&=\int_0^{\frac{\pi}{2}} \ln{(\sin{u})} +\ln{(\cos{u})}\; \mathrm{d}u \\
I_1&=\int_0^{\frac{\pi}{2}} \ln{(\sin{(2u)})}-\ln{2} \; \mathrm{d}u\\
I_1&=\frac{I_1}{2}-\frac{\pi\ln{2}}{2}\\
I_1 &= -\pi\ln{2}\\
\end{align*}
Now, to calculate $I_2$
\begin{align*}
I_2&=2\int_0^{\frac{\pi}{2}} \cos{(2x)} \ln{(\sin{x})} \; \mathrm{d}x\\
&\overset{\sin{x} \to x}=2\int_0^1\frac{\left(1-2x^2\right)\ln{x}}{\sqrt{1-x^2}} \; \mathrm{d}x \\
&=2\int_0^1\frac{\ln{x}}{\sqrt{1-x^2}} \; \mathrm{d}x - 2\int_0^1 \frac{2x^2\ln{x}}{\sqrt{1-x^2}} \; \mathrm{d}x \\
&=-2\int_0^1 \frac{\arcsin{x}}{x} \mathrm{d}x+ 2\int_0^1 \frac{\arcsin{x}-2x\sqrt{1-x^2}}{x} \; \mathrm{d}x \\
&=-2\int_0^1 \frac{\arcsin{x}}{x} \; \mathrm{d}x+2\int_0^1\frac{\arcsin{x}}{x} \; \mathrm{d}x-2\int_0^1 \sqrt{1-x^2} \; \mathrm{d}x \\
&=-\frac{\pi}{2}\\
\end{align*}
Therefore,
\begin{align*}
\int_0^1 \ln{\left(\Gamma(x)\right)}\cos^2{(\pi x)} \; \mathrm{d}x&=\frac{\ln{\pi}}{4}-\frac{1}{4\pi} \left(-\pi \ln{2}-\frac{\pi}{2}\right) \\
&= \boxed{\frac{\ln{(2\pi)}}{4}+\frac{1}{8}}\\
\end{align*}
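A numerical sanity check with scipy (`gammaln` is $\ln\Gamma$, valid here since $\Gamma>0$ on $(0,1)$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

val, _ = quad(lambda x: gammaln(x) * np.cos(np.pi * x)**2, 0, 1)
print(val, np.log(2 * np.pi) / 4 + 1 / 8)   # both ~0.5845
```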
|
H: Hatcher Theorem 2.13 - is the subspace $X$ of its cone $CX$ a deformation retract of some neighborhood in $CX$?
Hatcher's Theorem 2.13 says
If $X$ is a space and $A$ is a nonempty closed subspace that is a
deformation retract of some neighborhood in $X$, then there is an
exact sequence $$\cdots \to \widetilde{H}_n(A) \xrightarrow{i_*}
\widetilde{H}_n(X) \xrightarrow{j_*} \widetilde{H}_n(X/A)
\xrightarrow{\partial} \widetilde{H}_{n-1}(A) \to \cdots \to
\widetilde{H}_0(X/A) \to 0$$where $i$ is the inclusion
$A\hookrightarrow X$ and $j$ is the quotient map $X\to X/A$.
I am currently working on a problem where I would like to apply this exact sequence to the pair $(CX,X)$. I know that $CX$ is contractible, but I couldn't figure out whether this implies that $X$ is a deformation retract of some neighborhood in $CX$.
Can someone help me on this?
AI: The cone $CX$ is $X \times [0,1]$ modulo identifying the entire end $X \times \{1\}$ to a single point. Therefore a neighborhood of $X$ in $CX$ is $X \times [0,1/2)$, which deformation retracts onto $X$ via $(x,t,u) \mapsto (x,(1-u)t)$ for $x \in X$, $0 \leq t < 1/2$, and $0 \leq u \leq 1$.
|
H: Limit $\lim_{x \to -2^- } \frac{a - e^\frac{1}{x+2}}{2e^\frac{1}{x+2} - 1}$
According to my friend you put $x=-2-h$ directly and say that it becomes like $e^{\frac{-1}{0}}$, with the exponent tending to $-\infty$; he says that this is justified because $x$ is tending to that limit. However, from what I've learned you can't directly put $x=0$ when you have functions like $\frac{1}{x}$.
However, I do know that we can evaluate limits of the form $\frac{0}{0}$ because they are
'indeterminate' rather than undefined. Where exactly is the gap in my knowledge, and how do I solve this limit in a more rigorous way (Taylor series or l'Hôpital) without using facts like $\lim_{x \to 0^{+}} \frac{1}{x}= \infty$?
Edit: the minus sign on the two means the exponent is approached from the negative side;
I'm saying $-2$ from the left side, I don't know how to write that in LaTeX.
AI: My suggestion would be to substitute $x+2=-1/t$, so your limit becomes
$$
\lim_{t\to\infty}\frac{a-e^{-t}}{2e^{-t}-1}
$$
Now the numerator has limit $a$ and the denominator has limit $-1$, so you get $a/(-1)=-a$.
It would be different if the limit is for $x\to-2^+$. With the substitution $x+2=1/t$, the limit becomes
$$
\lim_{t\to\infty}\frac{a-e^t}{2e^t-1}=\lim_{t\to\infty}\frac{ae^{-t}-1}{2-e^{-t}}=-\frac{1}{2}
$$
|
H: If $A \in SL(d,\mathbb{Z})$ does the same hold for $A^{-1}$?
If I consider an element $A \in SL(2,\mathbb{Z})$, then I have that $A^{-1}\in SL(2,\mathbb{Z})$. I can see this because the inverse of $A$ is obtained by moving the coefficients of the matrix around or changing their signs.
Does the same hold for an element of $SL(d,\mathbb{Z})$? I cannot convince myself of that. Could you explain this to me or give me a counterexample?
I cannot convince myself of this even by thinking of lattice automorphisms of $\mathbb{Z}^d$.
AI: The cofactors of $A$ are obtained by performing ring operations on the entries of $A$. The inverse of $A$ is then the matrix of cofactors of $A$ divided by the determinant of $A$. Since the determinant of $A$ is $\pm 1$, all of the entries of $A^{-1}$ will lie in $\mathbb{Z}$ and the determinant will be the same as that of $A$.
Check out this link for more information.
|
H: Fourier coefficients, $\sum_{n=1}^\infty(|a_n|+|b_n|)<\infty$
Suppose $f$ is absolutely continuous on $[0,2\pi]$ with $f'\in L^2[0,2\pi]$ and $f(0)=f(2\pi)$. I would like to prove
$$\sum_{n=1}^\infty(|a_n|+|b_n|)<\infty.$$
By using Parseval's identity, I have shown that
$$\frac{1}{\pi}\int_0^{2\pi}|f'|^2=\sum_{n=1}^\infty n^2(|a_n|^2+|b_n|^2).$$
Does this help? Thanks.
AI: By Cauchy-Schwarz,
$$ \sum_n |a_n| \leq \left( \sum_n n^2|a_n|^2 \right)^{1/2} \left( \sum_n \frac{1}{n^2} \right)^{1/2} \leq \|f'\|_{L^2} \left(\sum_n \frac{1}{n^2}\right)^{1/2} < \infty.$$ Do similar estimate for $\sum_n |b_n|$ and the proof is complete.
|
H: Finding Mass of Object Given Density
I need to find the mass of an object that lies above the disk $x^2 +y^2 \le 1$ in the $x$-$y$ plane and below the sphere $x^2 + y^2 + z^2 = 4$, if its density is $\rho(x, y, z)=2z$.
I know that the mass will be $\iiint_R 2z$ $dV$, and I just need to determine the region $R$ which bounds the object. If I were to use spherical coordinates, I'd have $(r, \theta, \phi)$ where $0 \le r \le 1$ (since the radial distance is restricted by the disk), $0 \le \theta \le 2\pi$ (since we can complete a full revolution just fine), however I am unsure how to determine the upper limit of $\phi$.
Am I going in the right direction using spherical coordinates? And how would I find the upper limit of $\phi$? Thanks.
AI: It's fine to use spherical coordinates, but I think that it is more natural to use cylindrical ones:\begin{align}\iiint_V2z\,\mathrm dx\,\mathrm dy\,\mathrm dz&=\int_0^{2\pi}\int_0^1\int_0^{\sqrt{4-r^2}}2zr\,\mathrm dz\,\mathrm dr\,\mathrm d\theta\\&=2\pi\int_0^1r(4-r^2)\,\mathrm dr.\end{align}Can you take it from here?
|
H: Definition of Wave map on manifolds
Let $u: V \rightarrow M$, where $(V,g)$ is a Lorentzian manifold and $(M,h)$ is a Riemannian manifold.
The wave equation is defined as $g \cdot \nabla^2 u$.
As far as I see, $\nabla^2 u \in \Gamma(T^{*}V \otimes T^{*}V \otimes TM)$, since $\nabla_{\partial_{\alpha}} u= \partial_{\alpha} u \in \Gamma(T^{*}V \otimes TM)$. Is that correct?
But then, how can you apply $g$ on this?
AI: This mapping is a harmonic map: if the mapping $u$ satisfies $g \cdot \nabla^2 u=0$ then in local coordinates
\begin{align}
g^{\alpha \beta}\nabla_\alpha \partial_{\beta}u^A &= g^{\alpha \beta}(\partial^2_{\alpha \beta}u^A - \Gamma^\lambda_{\alpha \beta}\partial_{\lambda} u^A+ \Gamma^A_{BC}\partial_{\alpha}u^B \partial_{\beta}u^C)\\
&=0
\end{align}
And this is how you apply $g$, as you say.
|
H: How can I construct these homeomorphisms?
From Rotman's Algebraic Topology:
If $X$ is a polyhedron and $x \in X$, there exists a triangulation $(K,h)$ of $X$ with $x = h(v)$ for some vertex $v$ of $K$.
I'm having difficulty figuring out how to work this out, or even understanding how it's possible; for a simple example take $K = \Delta^2$ and $X = D^2$. How would you construct a homeomorphism if $p_0$, a vertex of $\Delta^2$, is mapped to the center of $D^2$?
Any suggestions?
AI: In your example you cannot take $K = \Delta^2$, you have to refine it, in other words subdivide the simplices that appear. I hope the following picture is enough to answer your question (the little cross is $x$):
Essentially, let's say that you have a triangulation but $x$ is not already a vertex. Then $x$ is in the interior of some simplex $\sigma$, so you can subdivide $\sigma$ to get a new triangulation with more simplices such that $x$ is a vertex. However $\sigma$ is not necessarily of maximum dimension (consider the case where $x$ is in the boundary of $D^2$ but not at one of the vertices of the triangle, for example), so you may need to subdivide the higher-dimensional simplices that touch $\sigma$, then subdivide the higher-dimensional simplices that touch them, and so on:
|
H: Norm that comes from inner product and quadratic function
Let $Y$ be a normed real vector space with the norm $||.||$, I am trying to see that this norm comes from an inner product on $Y$ if and only if for any $y,y'\in Y$ the function $Q(t)=||y+ty'||^2$ is quadratic on $\mathbb{R}.$
Now the way I am trying to do this is using the parallelogram law, but I am getting nowhere. Suppose that the function is quadratic; then for $x,y$ we will have $||x+ty||^2=a_2t^2 +a_1t+a_0$ and $||x+y||^2+||x-y||^2=2(a_0+a_2)=2(||x||^2+a_2)$. Now the problem is that I can't quite figure out what $a_2$ is. If I consider $||y+tx||^2=b_2t^2+b_1t+b_0$ I can see that we will have $b_1=a_1$ and $a_0+a_2=b_0+b_2$ (just take $t=1$ and $t=-1$), where $b_0=||y||^2$, but I can't see why $a_2$ would be $||y||^2=b_0$.
Any help is aprecciated.
AI: $\newcommand{nrm}[1]{\left\lVert {#1}\right\rVert}$You have that, for all $t\ne 0$, $\nrm{x+ty}^2=t^2\nrm{ \frac1tx+y}^2=b_2+ t b_1+t^2b_0$, and therefore $b_0=a_2$, which completes your proof.
|
H: Sum of six numbers from 1 to 4 divisible by 5 (and generalization.)
Find the probability that 6 positive integers from 1 to 4 are chosen such that their sum is divisible by 5.
In other words, you could have that $[1, 4, 3, 1, 2, 3], [2, 2, 3, 3, 2, 3], \text{and } [3, 2, 2, 3, 3, 2]$ are three separate outcomes. In mathematical terms the question is asking for $a + b + c + d + e + f \equiv 0 \pmod{5}$ where $a, b, c, d, e, f = 1, 2, 3, \text{or } 4$.
I was confused on how to solve it. First, I tried using casework and finding the number of 6-tuplets that added to 10, 15, and 20 but found there were too many to keep track. I suppose if I really had to, I could bash it out that way, but I would like to know if there's an elegant way to do this problem.
I know there's a simple formula to find the number of ways you can sum $x_1+x_2+\dots+x_n = k$ (where $x_n$ is a non-negative integer and order matters.) It's just $\binom{n+k-1}{n-1}$. However, I want to know if there's a way to generalize for a specific set of numbers, in this case, $1$ to $4$. For example, a formula for the number of positive integer solutions to each of the $x_i$ for $x_1+x_2+x_3+x_4+x_5+x_6 = 10$ would be great. (And for $15$ and $20$, but if there's a formula for $10$ it should work for $15$ and $20$ too.)
Hopefully there is a much easier way to do this problem than just bashing out all the combinations, and if there isn't, is there still an easier way? Thanks in advance.
-FruDe
AI: Let $p_n$ be the desired probability for $n$ tosses. The desired answer is $p_6$. Clearly $p_1=0$.
We will work recursively.
If the first $n-1$ tosses sum to something not divisible by $5$ then there is a unique choice for the last toss. If they sum to a multiple of $5$ then no selection will work for the $n^{th}$ toss. It follows that $$p_n=\frac 14\times (1-p_{n-1})$$
The rest is now straight forward, even with pencil and paper. We get $$p_6=\frac {205}{1024}$$
Just as a sanity check, note that $p=\frac 15$ is a fixed point for that recursion. Indeed the process converges to $\frac 15$ very rapidly. That certainly makes sense (after a bunch of tosses it seems likely that all remainders $\pmod 5$ should be equally probable).
Note: It is not difficult to verify that the $p_n$ are given by:
$$p_n=1-\frac {4^n+(-1)^{n+1}}{5\times 4^{n-1}}$$
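Both the recursion and the claimed value are easy to confirm with exact arithmetic, and a brute force over all $4^6$ equally likely tuples agrees (a short Python check):

```python
from fractions import Fraction
from itertools import product

p = Fraction(0)              # p_1 = 0
for _ in range(5):           # advance the recursion to p_6
    p = (1 - p) / 4
print(p)                     # 205/1024

hits = sum(1 for t in product(range(1, 5), repeat=6) if sum(t) % 5 == 0)
print(Fraction(hits, 4**6))  # 205/1024
```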
|
H: What is the derivative of $F[\mathbf{v}]=\mathbf{v}^T\mathbf{v}$?
How does one attack a derivative of this type?
$$
\frac{\partial }{\partial (\mathbf{v})} \mathbf{v}^T\mathbf{v}
$$
$$
\begin{align}
\frac{\partial }{\partial (\mathbf{v})} \mathbf{v}^T\mathbf{v}&=\left(\frac{\partial }{\partial (\mathbf{v})} \mathbf{v}^T\right)\mathbf{v}+ \mathbf{v}^T \frac{\partial }{\partial (\mathbf{v})} \mathbf{v}\\
&=\left(\frac{\partial }{\partial (\mathbf{v})} \mathbf{v}^T\right)\mathbf{v}+ \mathbf{v}^T
\end{align}
$$
I am uncertain how to treat the part $\left(\frac{\partial }{\partial (\mathbf{v})} \mathbf{v}^T\right)\mathbf{v}$?
Is $\mathbf{v}^T$ constant with respect to $\mathbf{v}$? --- doubtful.
What is then the derivative of a transpose of a vector?
AI: $$\nabla(\mathbf v^T\mathbf v)=\nabla(x^2+y^2+z^2)=2(x\,\mathbf i+y\,\mathbf j+z\,\mathbf k)=2\,\mathbf v.$$
|
H: Is there a name for a group-like structure under a unary operation?
It seems to me that the set,
$$
S=\{ \sin, \cos, -\sin, -\cos \}
$$
forms something like a cyclic group under differentiation. But I understand groups to be defined to have a binary operation.
Is there a name for a group-like structure under a unary operation?
AI: To me the right context seems to be that the set $M$ consisting of the differentiation operator and its powers is a monoid, and the set you gave is a set with the monoid $M$ acting upon it.
So I would advise you to see semigroup and monoid actions.
|
H: Simple Moving Average Value
I'm trying to create a 7-day moving average column for my sales. I am having trouble comprehending the notion of moving averages, as I'm not sure which date the moving average value should be associated with.
Data
+--------+-------+----+
| date | sales | MA |
+--------+-------+----+
| 1-Jan | 5 | |
| 2-Jan | 10 | |
| 3-Jan | 15 | |
| 4-Jan | 10 | |
| 5-Jan | 20 | |
| 6-Jan | 40 | |
| 7-Jan | 25 | |
| 8-Jan | 30 | |
| 9-Jan | 40 | |
| 10-Jan | 20 | |
| 11-Jan | 50 | |
| 12-Jan | 10 | |
+--------+-------+----+
Question:
If I take the MA of, say, January 8th, which is the sales average between Jan 2nd and Jan 8th (inclusive), I'd compute $\approx 21.43$. Should this value be associated with the row for Jan 2nd or Jan 8th? If both answers are accepted, what are the different notions of using either one?
AI: Associate it with January 8th. Moving averages in cases like this always correspond to the previous 7 days, meaning the latest day in the series is the day to which the average is associated.
Take moving averages in stock prices for an example. If the 7-day simple moving average of stock ABC on June 16, 2020, is \$50, that means that the average price of ABC's stock over the previous 7 trading days is \$50. If we associate moving averages with future dates then we wouldn't be able to calculate the June 16 moving average for 7 more trading days, making it unusable.
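This trailing-window convention is also what standard tooling does. For instance, pandas' `rolling` labels each 7-day mean with the window's last date (a sketch using the table above; the year is made up, since the data doesn't specify one):

```python
import pandas as pd

df = pd.DataFrame(
    {"sales": [5, 10, 15, 10, 20, 40, 25, 30, 40, 20, 50, 10]},
    index=pd.date_range("2020-01-01", periods=12),
)
df["MA"] = df["sales"].rolling(window=7).mean()
print(df)   # first 6 rows are NaN; the Jan 8 row shows 150/7 ~ 21.43
```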
|
H: Given two $3$-distinct-digit-natural numbers. Prove that the probability that at least one of both is a multiple of $10$ is $16/81\simeq 0.197530..$ .
Given two $3$-distinct-digit-natural numbers. Prove that the probability that at least one of both is a multiple of $10$ is $$16/81\simeq 0.197530..$$
My observation is https://www.wolframalpha.com/input/?i=9%21%2810%21-9%21-9%21%29%2Fbinomial%2810%21-9%21%2C2%29, but why?
AI: Consider one such number.
There are $9\cdot 9 \cdot 8$ three-digit numbers with distinct digits. (Pick first, then pick second, then pick third)
$9\cdot 8$ of them are multiples of 10. (Pick first, then pick second)
Hence $\frac19$ chance of getting it.
Therefore the chance to get exactly one (among two) is $\binom21 \cdot \frac19 \cdot \frac89 = \frac{16}{81}.$
Hence your question might be wrong, because we didn’t add the probability that the two are both multiples of 10.
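A brute-force check of this point, assuming (as above) that the two numbers are chosen independently and uniformly (a short Python sketch):

```python
from fractions import Fraction

nums = [n for n in range(100, 1000) if len(set(str(n))) == 3]
mult10 = [n for n in nums if n % 10 == 0]
q = Fraction(len(mult10), len(nums))   # 72/648 = 1/9

print(q)                  # 1/9
print(2 * q * (1 - q))    # exactly one multiple of 10: 16/81
print(1 - (1 - q)**2)     # at least one: 17/81
```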
|
H: Proving/Disproving there are always two uncountable sets whose intersection is uncountable.
I have been trying to prove the following:
Let $\mathcal{C}$ be an uncountable family of uncountable subsets of $\mathbb{R}$. Either prove or disprove that there are always two sets in $\mathcal{C}$ whose intersection is an uncountable set.
My intuition tells me that the statement is true and that it is connected to the axiom of choice. Although, no matter what I try, it doesn't seem to go anywhere.
AI: The statement is false as noted by @Hanul Jeon in comments.
Consider the following uncountable collection of uncountable disjoint subsets of $\mathbb{R}^2$:
$$\mathcal{U}=\big\{\{x\}\times\mathbb{R}\ \big|\ x\in\mathbb{R}\big\}$$
Then consider any bijection $f:\mathbb{R}^2\to\mathbb{R}$ and note that
$$f(\mathcal{U})=\big\{f(U)\ \big|\ U\in\mathcal{U}\big\}$$
is an uncountable collection of pairwise disjoint uncountable subsets of $\mathbb{R}$.
|
H: Structure constants in Poisson algebras
I am currently studying Poisson algebras. How can structure constants be defined for Poisson algebras?
AI: If your Poisson algebra $A$ is generated as an algebra by $x^1,\ldots,x^n$, then the Poisson bracket of arbitrary elements of $A$ can be expressed (using bi-linearity and the bi-derivation property) in terms of the structure functions $P^{ij}=\{x^i, x^j\} \in A$.
Of course these are antisymmetric $P^{ji} =-P^{ij}$ and the Jacobi identity for $(f,g,h) =(x^i,x^j,x^k)$ leads to $$\sum_{\ell=1}^n P^{i\ell}\partial_\ell P^{jk} + P^{j\ell}\partial_\ell P^{ki}+P^{k\ell}\partial_\ell P^{ij}=0 \qquad \text{ for all }\quad 1 \leqslant i <j<k\leqslant n.$$
Here I used the abbreviation $\partial_\ell = \frac{\partial}{\partial x^\ell}$.
Conversely, a set of structure functions $P^{ij}$ satisfying these equations can be used to define a Poisson bracket, $$\{f,g\}=\sum_{i,j=1}^n P^{ij}\cdot \partial_i f \cdot \partial_j g$$
If you put the structure functions into a matrix, it is called the Poisson matrix.
In the linear case $P^{ij} = \sum_k c^{ij}_k x^k$ the structure functions are completely determined by the constants $c^{ij}_k$, but a general Poisson algebra cannot be captured using only a finite set of constants.
For a reference, see e.g. the book Poisson Structures by Laurent-Gengoux, Pichereau, and Vanhaecke, in particular section 1.2.2.
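To make the two displayed formulas concrete, here is a small sympy sketch for the linear Poisson structure of $\mathfrak{so}(3)$, with the structure functions packed into the Poisson matrix $P^{ij}$; it checks the cyclic Jacobi condition and recovers $\{x,y\}=z$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]
P = sp.Matrix([[0,  z, -y],     # {x,y} = z, {y,z} = x, {z,x} = y
               [-z, 0,  x],
               [y, -x,  0]])

def jacobi(i, j, k):            # the displayed cyclic condition
    return sp.simplify(sum(
        P[i, l]*sp.diff(P[j, k], X[l])
        + P[j, l]*sp.diff(P[k, i], X[l])
        + P[k, l]*sp.diff(P[i, j], X[l]) for l in range(3)))

def bracket(f, g):              # {f,g} = sum_{i,j} P^{ij} d_i f d_j g
    return sp.expand(sum(P[i, j]*sp.diff(f, X[i])*sp.diff(g, X[j])
                         for i in range(3) for j in range(3)))

print(jacobi(0, 1, 2))   # 0
print(bracket(x, y))     # z
```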
|
H: Solving System of ODEs Using Matrix Diagonalisation
I have been given the matrix $A = \begin{bmatrix} -3 & -2 & 2 \\ 0 & 2 & 0 \\ -4 & -1 & 3 \\\end{bmatrix}$.
I firstly needed to find the matrix $P$ that diagonalises it, so I found the eigenvalues of $A$, the corresponding eigenvectors and then constructed $P$ with columns being those eigenvectors. So, $P=\begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 2 & 1 \\\end{bmatrix}$ (with the corresponding diagonal matrix $\begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \\\end{bmatrix}$).
I need to use these results to solve the following system of ODEs:
$$x_1' =-3x_1 -2x_2 +2x_3$$
$$x_2' =2x_2$$
$$x_3' =-4x_1 -x_2 +3x_3$$
where $x_i'=\frac{dx_i}{dt}$.
Now I recognise that the coefficients in the system of ODEs correspond directly to the values of matrix $A$, however I am not sure how to use diagonalisation to solve it. I guess you could set up an equation $\mathbf{y} = A\mathbf{x}$, where $\mathbf{y}$ is the column vector $[x_1', x_2', x_3']^T$ and $\mathbf{x}$ is the column vector $[x_1, x_2, x_3]^T$, however I'm not sure where to go from here. I'm guessing I would use $P$?
Any guidance would be greatly appreciated.
AI: Using the fact that $A=PDP^{-1}$ we can write the system as x'=$A$x=$PDP^{-1}$x or $(P^{-1}x)^{'}=DP^{-1}x$ so setting $y=P^{-1}x$ we solve $y'=Dy$ (which is easily solved). Then once we know y, use x=$P$y to find x.
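A small sympy check of the whole pipeline, verifying both the factorization $A=PDP^{-1}$ and that $x=Py$ with $y_i=c_i e^{\lambda_i t}$ solves the system:

```python
import sympy as sp

t, c1, c2, c3 = sp.symbols('t c1 c2 c3')
A = sp.Matrix([[-3, -2, 2], [0, 2, 0], [-4, -1, 3]])
P = sp.Matrix([[1, 1, 0], [0, 0, 1], [1, 2, 1]])
D = sp.diag(-1, 1, 2)

print(sp.simplify(P * D * P.inv() - A))   # zero matrix: A = P D P^{-1}

y = sp.Matrix([c1*sp.exp(-t), c2*sp.exp(t), c3*sp.exp(2*t)])  # solves y' = Dy
x = P * y
print(sp.simplify(x.diff(t) - A * x))     # zero vector: x' = Ax
```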
|
H: Basic question about the definition of the homology of a spectrum
The general definition of the homology of a spectrum $E$ with coefficients in an abelian group $G$ is $$H_*(E;G):=\pi_*(E\wedge HG)$$
and I always see people using the equality $$H_*(E;G)=\mathrm{colim}_nH_{*+n}(E(n);G)$$
and say that it is easily seen to be the same. I tried to write things down but I can't see why these two things coincide. Is it not obvious, or have I missed something?
AI: Every spectrum $\{E_n\}$ is the same as $\operatorname{hocolim} \Sigma ^{-k} \Sigma ^\infty E_k$. If we have a spectrum $F$ and we want $E$'s homology with respect to $F$ we do as you say and consider $\pi_*(E \wedge F)$. By the above remark, this is the same as $\pi_*(\operatorname{hocolim} \Sigma ^{-k} \Sigma ^\infty E_k)$. So we have that $F_*(E)=\pi_*(E \wedge F)=\pi_*(\operatorname{hocolim} \Sigma ^{-k} \Sigma ^\infty E_k \wedge F)$.
Now, morally, since smashing is left adjoint to taking function spectra it commutes with homotopy colimits (I can't find a reference for this immediately. It should follow from left Quillen functors preserving homotopy colimits, but you can work it out by hand in this case using the handicraft definition of smash product here), so we have one further equality $\pi_*(\operatorname{hocolim} \Sigma ^{-k} \Sigma ^\infty E_k \wedge F)=\pi_*(\operatorname{hocolim}(\Sigma^{-k} (\Sigma^\infty E_k \wedge F)))$. Now $\pi_*$ commutes with directed homotopy colimits in spectra for the same reason it does in spaces, the sphere spectrum is built out of finite spaces. This means we have the equality $\pi_*(\operatorname{hocolim}(\Sigma^{-k} (\Sigma^\infty E_k \wedge F)))=\operatorname{colim} \pi_*(\Sigma^{-k} (\Sigma^\infty E_k \wedge F))= \operatorname{colim} F_*(E_k)$.
|
H: Elementary operation on determinant, but actually basic algebra
If $\begin{vmatrix} -1 & a & a \\ b & -1 & b \\ c & c & -1 \end{vmatrix} =0$ then what's the value of $$\frac{1}{1+a}+\frac{1}{1+b} +\frac{1}{1+c}$$
I just expanded the Determinant, to get
$$ab+bc+ac+2abc=1$$
Which further leads to $$\frac{1+a}{a}+\frac{1+b}{b}+ \frac{1+c}{c}= 1+\frac{1}{abc}$$
The solution in the book uses elementary operations, but is it possible to transform the "original" determinant equation into the required form?
This maybe a duplicate question, but I couldn't find it.
AI: With $$c=\frac{1-ab}{a+b+2ab}$$ we get $$\frac{1}{1+a}+\frac{1}{1+b}+\frac{1}{1+\frac{1-ab}{a+b+2ab}}=2$$ after a few simplifications.
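A quick symbolic check of this simplification (a sketch of mine with sympy, not part of the original answer):

```python
# Substitute c = (1-ab)/(a+b+2ab), obtained from ab+bc+ca+2abc = 1,
# and confirm that the requested sum collapses to 2.
import sympy as sp

a, b = sp.symbols('a b')
c = (1 - a*b) / (a + b + 2*a*b)
print(sp.simplify(1/(1 + a) + 1/(1 + b) + 1/(1 + c)))  # 2
```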
|
H: Support of a measure and Lebesgue decomposition
Let $X=[0,1]^n$ endowed with the Euclidean norm and $\mathcal B$ the Borel $\sigma$-algebra on $X$.
Let $\lambda$ be the Lebesgue measure and $\mu$ be a finite measure on $(X, \mathcal B)$ with full support (following wikipedia's definition).
Let $\nu_{ac}$ and $\nu_{s}$ be such that $\mu=\nu_{ac}+\nu_{s}$, where $\nu_{ac}$ is absolutely continuous and $\nu_{s}$ and $\lambda$ are mutually singular.
Question 1: Does $\nu_{ac}$ have full support? Is it possible for its support to have Lebesgue measure zero?
Question 2: Are there known results ensuring that $\nu_{ac}$ has full support?
AI: Say $r_n$ is an enumeration of a dense subset of $[0,1]$. Let $$\nu=\sum_n2^{-n}\delta_{r_n}.$$Then $\nu$ has full support and $\nu_{ac}=0$.
|
H: Prove there exists $\alpha \ge 0$ s.t $\int_0^\alpha f(x)dx =\int_0^\infty g(x)dx$ given that $f,g\ge 0$, $F(x)$ diverges and $G(x)$ converges
This is one of the problems we got as an assignment:
if $f(x),g(x)$ are two integrable functions on $[0,t]$ for any $0<t\in \Bbb{R}$.
and suppose that:
$f(x)\ge 0,\ g(x)\ge 0$, for all $x\ge 0$
$\int_0^\infty f(x)dx$ diverges and $\int_0^\infty g(x)dx$ converges.
Prove that there exists some $\alpha \ge 0$ such that $\int_0^\alpha f(x)dx =\int_0^\infty g(x)dx$
So I obviously see that if $\int_0^\infty g(x)dx=0$ then $\alpha =0$
I also know that both functions are non-negative, so the integrals $t\mapsto\int_0^t f(x)dx$ and $t\mapsto\int_0^t g(x)dx$ are increasing.
Therefore if $\int_0^\infty g(x)dx = S$ then $S\ge0$.
But now I don't understand how this gets me to the value of $\alpha$ that I'm trying to get to...
Also, with integrals I try to somehow visualize things, and I don't understand how this is true at all. I mean, if $\int_0^\infty f(x)dx$ diverges, how can I find such a specific value? If it diverges it can "start diverging" at any point..
AI: If $\int_0^{\infty}g(x)dx=0$,then clearly $\alpha=0$
If not , then let $\int_0^{\infty}g(x)dx=L\gt 0$ (since $g\ge 0$)
Now consider $F(x)=\int_0^{x}f(t)dt$ .
Then $F(0)=0$ and $F(x)$ is continuous, and $\displaystyle\lim_{x\to \infty}F(x)=\infty$. (Why? Because $F(x)$ is increasing and $\int_0^\infty f(x)dx$ diverges, so $F$ is unbounded.)
Can you do from here?
|
H: Can a set with a cardinality $\mathbb R^{\mathbb R}$ be ordered?
$\mathbb Z$ has a natural order. $\mathbb R$ has one too. $\mathbb R \times \mathbb R$ can be ordered by first comparing the left index and then, if left-equal, comparing the right index. That scheme for ordering can be extended to $\mathbb R^{n}$ for any $n$ in the obvious way. It can be extended to $\mathbb {R^N}$ by defining equality as when the process of comparing successive tuple elements never terminates with a greater-than or a less-than. However, moving up to $\mathbb{R^R}$ my confidence wanes, and it becomes no longer clear that such a process would be well-defined. For example, if I were to try and encode the comparison process in the language of functions like this:
$f > g$ iff for the least $x$ for which $f(x)\neq g(x)$, $f(x)\gt g(x)$
$f < g$ iff for the least $x$ for which $f(x)\neq g(x)$, $f(x)\lt g(x)$
$f = g$ iff for all $x$ $f(x)=g(x)$
Then it would be unclear what the comparison of $\sin(x)$ and $\cos(x)$ should be, because there is no least $x$ for which they are unequal.
Can an order like this be constructed for sets as big as $\mathbb{R^R}$? Of course I mean to rule out trivial orders like "everything is equal." Two elements of the set should only be equal if they are actually the same element.
AI: The question is what exactly do you mean by this.
Assuming the axiom of choice, every set can be well-ordered, and in particular linearly ordered. The statement "every set can be linearly ordered" is itself weaker than the axiom of choice, but assuming it, we can of course prove that there exists a linear ordering on any set, in particular $\Bbb{R^R}$.
But we can also understand this question as "can we define an explicit total order on $\Bbb{R^R}$?", which we can also understand as "Does $\sf ZF$ prove that $\Bbb{R^R}$ can be linearly ordered?"
And the answer to that is negative. It is consistent that there is a subset of $\Bbb{R^R}$ which cannot be linearly ordered. In some models of $\sf ZF$ we can even exhibit such a set explicitly: $\sf ZF$ does not prove that $\Bbb{R/Q}$ can be linearly ordered, and we can consider it as a set of functions from $\Bbb R$ to itself by noting that $\Bbb{R/Q\subseteq\mathcal P(R)}$ and identifying each subset of $\Bbb R$ with its characteristic function.
As to your "secretly soft question", there is no notion of "strongest order", because linear orders are exactly the maximal partial orders on a set, so a partial order is either linear or can be extended by at least one more comparison.
|
H: Is there a polynomial of degree 2 such that $f (0) = 1, f (2) = 2$ and $f (3) = 2$?
I was wondering if there exists a polynomial with these properties.
I've been trying, but since I've only just started studying polynomials I don't know how to get to the correct answer.
Any help?
AI: Yes, there is. Write your polynomial as $ax^2+bx+c$ and substitute in the values, giving
$$a0^2+b0+c=1\\a2^2+b2+c=2\\a3^2+b3+c=2$$
The first gives $c=1$, leaving you two equations in two unknowns to solve.
|
H: Can we have a graph with 15 vertices in which every vertex is connected to exactly 5 other vertices?
Imagine a city that has 15 public phones. Is it possible to connect them to each other with cables so that every phone is connected to exactly 5 other phones?
I tried to draw this graph with 15 vertices, but
I could only fill 14 vertices and the last vertex's degree was 4.
Is there a rigorous way to prove whether it's possible?
AI: Hint In any graph, the number of vertices of odd degree must be even.
|
H: What is the sum of the products of digits of all three digit numbers?
How do I proceed? All approaches are welcomed.
AI: We can break the numbers into blocks of $100$. For the numbers from $100$ to $199$, the product of the last two digits is multiplied by $1$; similarly, for $200$-$299$ it is multiplied by $2$, and so on. (Any number with a $0$ among its digits contributes $0$, so the endpoints of each block do not matter.) Note that the sum of the products of the last two digits is the same in every block; call it $P_2$. Hence, we need to evaluate $\sum\limits_{k = 1}^{9} k\cdot P_2 = 45 P_2$. Now we can apply the same technique to $P_2$ by breaking it into blocks of $10$: consider the numbers $11$-$19$, $21$-$29$, and so on. For $11$-$19$ the products have the factor $1$ in common; for $21$-$29$, the factor $2$ is common to all the terms. Taking the common factor out, each block contributes that factor times $45$. Hence, $P_2 = \sum\limits_{k = 1}^{9} k \cdot 45 = 2025$. Hence, the answer is $45 \cdot 2025 = 45^3 = 91125$.
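A brute-force check of that closed form (my own throwaway script, not part of the answer):

```python
# Sum the digit products of every three-digit number directly.
total = sum((n // 100) * (n // 10 % 10) * (n % 10) for n in range(100, 1000))
print(total, total == 45**3)  # 91125 True
```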
|
H: Is $a-a=0$ defined or can it be proved by using any axioms?
Following is a partial proof for the trichotomy of integers from Terence Tao's book Real Analysis:
Lemma 4.1.5 (Trichotomy of integers).
Let $x$ be an integer. Then
exactly one of the following three statements is true:
(a) $x$ is zero;
(b) $x$ is equal to a positive natural number n; or
(c) $x$ is the negation -n of a positive natural number n.
Proof. We first show that at least one of (a), (b), (c) is true. By
definition, $x$ = $a-b$ for some natural numbers $a, b$. We have three
cases: $a > b$, $a = b$, or $a < b$.
If $a > b$ then $a = b + c$ for some
positive natural number $c$, which means that $a-b = c-0 = c$,
which is (b).
If $a= b$, then $a-b =a-a= 0-0 = 0$ which is (a).
If $a < b$, then $b > a$, so that $b-a = n$ for some natural number $n$ by the previous reasoning, and thus $a-b = -n$, which
is (c).
Can anyone explain the below statement
If $a= b$, then $a-b =a-a= 0-0 = 0$
which is (a).
How is $a-a=0-0$? I understand this might be a trivial question, but I am also new to real analysis. Any help would be greatly appreciated.
AI: Answering this question calls for a careful look at Tao's text, in particular Definition 4.1.1:
An integer is an expression of the form $a-b$, where $a$ and $b$ are
natural numbers. Two integers are considered to be equal, $a-b = c-d$,
if and only if $a + d = c + b$.
(There is a footnote attached to "expression" elaborating on the notion of equivalence relation on ordered pairs of natural numbers.) That's how one gets from $a-a$ to $0-0$: by Definition 4.1.1, $a-a=0-0$ precisely because $a+0=0+a$.
|
H: Differential Equations Integrating y by x
This may be a bit of a silly question, but when solving a differential equation by finding an integrating factor, is it possible to integrate a function of $y$ and $x$ with respect to $x$? I understand that in multivariable calculus the $y$ would be treated as a constant, but I am not sure why the same does not apply in differential equations. For example...
For the differential equation $y' + y =xy^3$
The way to solve it would be to multiply the whole equation by $-2y^{-3} $ then solve for the integrating factor which would be $e^{-2x}$.
But I'm wondering why we even need to get rid of the y terms on the right side of the equations. Can we solve the equations as such...
$y' + y = xy^3$
$y'e^x + e^xy =xy^3e^x$
$\int d(ye^x) = \int xy^3e^x dx$
I understand that this method is incorrect, but I am having trouble understanding why separation of variables is absolutely necessary in differential equations.
AI: $$y'e^x + e^xy =xy^3e^x$$
Then make it separable:
$$(ye^x)' =xy^3e^x$$
$$\dfrac {d(ye^x)}{y^3e^{3x}} =xe^{-2x}dx$$
And integrate:
$$\int \dfrac {d(ye^x)}{(ye^{x})^3} =\int xe^{-2x}dx$$
Otherwise you can't evaluate the integral $$I=\int y^3xe^xdx$$
because $y$ is not a constant but a function of the variable $x$.
|
H: Controlling the convergence of a series
I have a sequence of real numbers $(a_n)_{n \in \mathbb N}$ such that each $a_n$ is positive and the $a_n$s decrease monotonically with limit zero. Is there any way to control the convergence of the series
$$\sum_{n=0}^{\infty} e^{in\varphi} a_n$$
for $\varphi \in \mathbb R \setminus 2\pi \mathbb Z$?
It looks like most of the assumptions to apply Dirichlet's test hold, but I cannot control the sum of the $e^{in\varphi}$ in a good way to ensure the criterion applies. Other tests that I know fail. I also thought this looks like the Fourier series of some function, but I am not sure whether there exist results from the general theory that might help. I also do not see any good counterexample: each sequence $(a_n)$ I can think of yields either a trivial or a very complicated series. This looks like a very natural series to consider, but I have not found much online on it.
AI: It is easy enough to show that the partial sums of $e^{in \varphi}$ are bounded. In particular, we have
$$
\left|\sum_{n=0}^N e^{in \varphi}\right| = \left|\sum_{n=0}^N (e^{i \varphi})^n\right| = \left|\frac{e^{i(N+1)\varphi} - 1}{e^{i \varphi} - 1}\right| \leq
\frac{|e^{i(N+1)\varphi}| + |1|}{\left|e^{i \varphi} - 1\right|} = \frac{2}{\left|e^{i \varphi} - 1\right|}.
$$
So, since $e^{i\varphi}\ne 1$ for $\varphi\in\mathbb R\setminus 2\pi\mathbb Z$, the bound is finite and Dirichlet's test can be applied.
|
H: Find all possible pairs of digits $(a,b)$ such that the six-digit number $5a4bb2$ is divisible by $9$
Find all possible pairs of digits $(a,b)$ such that the six-digit number $5a4bb2$ is divisible by $9$
I tried to answer this question, but when I answered $(1,3)$, $(3,2)$, $(5,1)$, I got a 3/6 on the question. Can you kind people help me?
Okay so some of you guys helped me. Thank you. Now I have $(1,3)$, $(3,2)$, $(5,1)$, $(7,0)$, and $(0,8)$. I want to give my thanks to lulu and ty.
AI: The divisibility test for $9$ gives $$11+a+2b=9n$$ for some $n \in \mathbb N$.
On rearranging, $b=\dfrac{9n-11-a}{2}$
Since there's a $2$ in the denominator, the numerator must be even.
Let's start with the maximum value of $a$, i.e. $a=9$:
$$b=\dfrac{9n-20}{2} $$
Here $n$ must be even (why?). It can't be $2$, since that would make $b$ negative,
and $$9n-20\le 18 \implies n \le 4$$ (or else $b>9$), so $n=4$ is in fact the only possible value. Thus, $b=8$.
If you choose $a=8$,
$$b=\dfrac{9n-19}{2} $$ and this time $n$ must be odd (why?).
Similarly, you can work out all the cases; each possible value of $a$ gives at most $2$ values of $b$ (that I know because I checked all possible values).
$(a, b)=(9,8), (8,4), (7,0), (7,9), (6,5), (5,1), (4,6), (3,2), (2,7), (1,3), (0,8)$
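For completeness, the pairs can also be enumerated directly (an illustrative sketch of mine):

```python
# The digit sum of 5a4bb2 is 11 + a + 2b; keep the pairs divisible by 9.
pairs = [(a, b) for a in range(10) for b in range(10)
         if (11 + a + 2*b) % 9 == 0]
print(pairs)  # the eleven pairs listed above
```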
|
H: Which items to buy if the best one will always be stolen?
The Problem:
A store sells $N$ items. Each item $i$ is priced at $p_i \ge 0$ and you value the $i$th item at $x_i \ge p_i$. You can carry at most two items.
To complicate matters, when you leave the store you will be attacked by a bully who will steal the item you value most among the items you have bought.
(If you buy no items the bully leaves you alone; if you buy one item he steals it and if you buy items $i$ and $j$, say, with $x_i \ge x_j$ he steals item $i$).
Which items do you buy?
I am wondering whether this problem has an explicit, "simple" solution. If not, is it possible to prove that there is no simple solution?
The only way of solving it I can come up with is brute force:
Example with $N=3$:
$$x_1 = 10, x_2 = 5, x_3 = 4$$
$$p_1 = 3, p_2 = 2, p_3 = 1.$$
If you buy two items you can keep either item 2 or item 3 since item 1 would always be stolen.
If you want to keep item 2 you must also buy item 1 and so you get $5-(3+2)=0$
If you want to keep item 3 you must also buy either item 1 or item 2. In the first case you get $4-3-1=0$; in the second case you get $4-1-2=1.$
If you buy only one item it will be stolen but you will have paid a positive price for it, so you get less than zero.
If you buy no item you get zero.
So in this example the optimal thing to do is to buy items 2 and 3.
AI: For each $i$ let $q_i=\min\{p_j\mid x_j>x_i\}$, the price of the cheapest item that would shield item $i$ (set $q_i=\infty$ if no such $j$ exists, since then item $i$ would always be stolen). Then the problem is to maximize $x_i-p_i-q_i$ over $i$, buying nothing (payoff $0$) if every such value is negative.
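A small sketch of this rule in Python (the function name and the convention that a shield must be strictly more valued are my own reading of the problem):

```python
# best_purchase returns (best payoff, index of the item kept); payoff 0 and
# keep=None mean buying nothing is optimal.
def best_purchase(x, p):
    best, keep = 0, None
    for i in range(len(x)):
        shields = [p[j] for j in range(len(x)) if x[j] > x[i]]
        if not shields:
            continue                        # item i would always be stolen
        value = x[i] - p[i] - min(shields)  # min(shields) is q_i
        if value > best:
            best, keep = value, i
    return best, keep

print(best_purchase([10, 5, 4], [3, 2, 1]))  # (1, 2): keep item 3, buy item 2
```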
|
H: Rank of matrix $M$
Let $M$ be a matrix in $M(m,k,\mathbb{K})$ and $B$ a matrix in $M(m,l,\mathbb{K})$. What is a sufficient condition for $\operatorname{rank}(\lbrack M\mid B \rbrack ) = \operatorname{rank}(M)$?
AI: (Assuming I've understood your notation correctly)
You need each column of $B$ to be a linear combination of columns of $M$.
|
H: Finding $|f(4)|$ given that $f$ is a continuous function satisfying $f(x)+f(2x+y)+5xy=f(3x-y)+2x^2+1\forall x,y\in\mathbb{R}$
The question is simply to find $|f(4)|$ given that $f$ is a continuous function and satisfies the following functional equation $\forall x,y \in \mathbb{R}$.
$$f(x)+f(2x+y)+5xy=f(3x-y)+2x^2+1\forall x,y\in\mathbb{R}$$
Here is what I have done so far. If we put $x=y=0$, we get that $f(0)=1$ and if we let $x=0$, we can find that $f(x)$ is an even function. Any ideas on how to proceed. Thanks.
AI: Hint Take an arbitrary $x$ and pick $y$ such that $2x+y=3x-y$.
|
H: How can I say a set has measure $1$?
Suppose $(\Omega,\mathscr{E},\mathbb{P})$ is a measure space such that $\mathbb{P}(\Omega)=1$.
Suppose $A_i \in \mathscr{E}$ for $1 \leq i \leq n$ where $n \in \mathbb{N}$.
Suppose $\mathbb{P}(\bigcup_{1 \leq i \leq n} A_i)=1$.
Can I conclude that there exists $1 \leq i \leq n$ such that $\mathbb{P}(A_i)=1$? If not, how can I found a counterexample?
I know the inclusion-exclusion principle, but I do not know if we can use it here and how.
In addition, are there conditions (e.g. $(\Omega,\mathscr{E},\mathbb{P})$ has no atoms) under which the statement holds?
AI: No. Take $\mu$ the Lebesgue measure on $[0,1]$ and choose some $n \in \mathbb{N}$ with $n\ge 2$. Then, consider
$$A_k = \left[\frac{k-1}{n}, \frac{k}{n}\right], \text{ for } 1 \leq k \leq n.$$
Then $\mu(A_k) = \frac{1}{n} < 1$, for all $k \in \{1, \dots, n\}$, but $\displaystyle\cup_{k=1}^{n} A_k = [0,1]$, so that $\mu(\cup_{k=1}^{n} A_k) = 1$.
|
H: How to solve this parametric logarithmic limit of sequence?
Trying to figure out how to solve this limit:
$$\lim_{n\to \infty} \frac{\ln(n^a+1)}{\ln (n)}, \quad \text{with } a \in \mathbb{R}$$
This is what I tried so far:
$ \lim_{n\to \infty} \frac{\ln(n^a+1)}{\ln (n)} = \lim_{n\to \infty} \frac{\ln(n^a(1 + \frac{1}{n^a}))}{\ln (n)} = \lim_{n\to \infty} \frac{\ln (n^a) + \ln(1+ \frac{1}{n^a})}{\ln(n)} = \lim_{n\to \infty} \frac{\ln(n^a)}{\ln(n)} + \frac{\ln(1+ \frac{1}{n^a})}{\ln(n)}$
But don't know how to go further.
AI: Hint: For $a>0$ we get
$$\frac{a\ln(n)+\ln\left(1+\frac{1}{n^a}\right)}{\ln(n)} \to a,$$
for $a=0$ the expression is $\frac{\ln 2}{\ln(n)} \to 0$, and for $a<0$ note that $n^a\to 0$, so
$$\frac{\ln\left(1+n^a\right)}{\ln(n)} \to 0.$$
|
H: Combination of reflection symmetries in $\mathbb{E^4}$
Is the combination of point reflection symmetry (https://en.wikipedia.org/wiki/Point_reflection) and hyperplane (or axial, using Hodge duality) reflection symmetry (https://en.wikipedia.org/wiki/Reflection_symmetry) in $\mathbb{E^4}$ possible for a given orientation of $\mathbb{E^4}$?
I think point reflections for point $P$ in $2n$ dimensions give 180º rotations in $n$ orthogonal planes intersecting in $P$ that preserve space orientation.
AI: I think you're asking whether hyperplane / axial reflections in $\mathbb{E}^4$ preserve orientation. If that's the question, then the answer is no, by a very standard argument: after conjugating by a rotation if needed, those reflections can be written in matrix form as $\mathrm{diag}(1, -1, -1, -1)$ or $\mathrm{diag}(-1, 1, 1, 1)$. Since the determinants of these matrices are $-1$, they both invert orientation. (By contrast, note that the point reflection matrix is $\mathrm{diag}(-1, -1, -1, -1)$ and thus preserves orientation, just as you suspected, since the determinant is $1$.)
|
H: Prove that $4^n-3^n\gt 2n^2$ for all $n\ge 3$
I found this problem in a textbook, I confirmed it works when n = 3, and followed up with the inductive step,
$$4^{n+1}-3^{n+1}\gt2(n+1)^2$$
but I'm stuck at
$$4^n\cdot4-3^n\cdot3\gt2n^2+4n+2$$
Induction is a new thing for me, so please excuse any mistakes, thanks.
AI: Suppose $2n^2<4^n-3^n$ and $n\geq 3$.
\begin{align}
2(n+1)^2&=2n^2+4n+2\\
&<8n^2\\
&<4(4^n-3^n)\\
&=4^{n+1}-4\cdot 3^n\\
&<4^{n+1}-3^{n+1}.
\end{align}
|
H: Vic can beat Harold by $1/10$ of a mile in a $2$ mile race. Harold can beat Charlie by $1/5$ of a mile in a $2$ mile race. Very Confused.
Vic can beat Harold by $1/10$ of a mile in a $2$ mile race. Harold can beat Charlie by $1/5$ of a mile in a $2$ mile race. If Vic races Charlie how far ahead will he finish?
Now I don't know the correct answer, but I've done the problem in $2$ different ways and got $2$ different answers.
Let V be Vic, H be Harold and C be Charlie.
In V and H race V covers $2$ miles while H covers $2 - 1/10 = 1.9$ miles. In H and C race H covers $2$ miles while C covers $2 - 1/5 = 1.8$ miles
If H covers $1.9$ miles then C covers $(1.8)*(1.9)/2 = 1.71$ miles. Hence V is $2-1.71=0.29$ miles ahead.
Second method:
$V/H=2/1.9$
$H/C=2/1.8$
$V/C=200/171$
Since $V$ runs for 2 miles, $C$ runs $2(170)/200=1.7$ miles Hence V is ahead by $2-1.7=0.3$ miles.
Now there were five options when I searched the problem online, but in the book there weren't any options.
(A)
$0.15$ miles
(B)
$0.22$ miles
(C)
$0.25$ miles
(D)
$0.29$ miles
(E)
$0.33$ miles
But I don't understand if my second method is wrong or not, since I've used it successfully to solve problems like this before.
AI: It looks like your error was changing $171$ to $170$ when you inverted $V/C=200/171$ to obtain what should have been $2(171)/200=1.71$. The subtraction would have then given the correct answer, $2-1.71=0.29$, again.
|
H: Permutations on $[2n]$ with relative ($\!\!\bmod n$) restrictions
Question: Let $\mathfrak{S}_{2n}$ be the permutations on $[2n]=\{1,2,\ldots, 2n\}$. Let $$\mathcal{J}_n=\{\sigma\in \mathfrak{S}_{2n} \mid \sigma(i) \not\equiv \sigma(i+n)\mod n, \text{ for all $i\in [n]$}\}.$$
Prove that $$|\mathcal{J}_n|=\sum_{k=0}^n\frac{(-2)^k(n!)^2(2n-2k)!}{k!(n-k)!}.$$
Approach: I believe that this problem can be solved with rook polynomials. The issue I am having is that in all the examples I've seen, the restrictions for the permutations are of the form $\sigma(i)\neq j$. In such cases, determining the board for which to compute the rook polynomial on is straightforward. However, in this problem the restrictions are relative i.e. of the form $\sigma(i)\not\equiv \sigma(i+n)\bmod n$ and so I'm not sure what the correct board is going to be? I would naively think that the board should be $2n\times 2n$, and we place the restrictions on $(i,i+n\bmod n)$ but the summation in the final formula seems to suggest an $n\times n$ board. Any advice or alternate approaches would be appreciated, perhaps i'm missing a straightforward exponential generating function approach.
Second Approach: As mentioned in the comments, the relative positions perhaps make the rook polynomial approach ineffective, as suggested inclusion/exclusion should be the right tool. With this in mind we remark that we can suggestively rewrite the solution as $$|\mathcal{J}_n|=\sum_{k=0}^n (-1)^k\binom{n}{k}\left(2^kn!(2(n-k))!\right),$$ and interpreting $2^kn!(2(n-k))!$ as the number of permutations with at least some set of properties then the answer would be the number of permutation with none of the properties... would the $i$th property be $\sigma(i)\equiv \sigma(i+n)\mod n$?
AI: There appears to be a missing exponent in the denominator: the $(n-k)!$ ought to be squared.
For $k\in[n]$ let $A_k$ be the set of permutations $\sigma$ such that $\sigma(k)\equiv\sigma(k+n)\pmod n$. There are $2n$ choices for $\langle\sigma(k),\sigma(k+n)\rangle$ and $(2n-2)!$ permutations of the remaining $2n-2$ members of $[2n]$, so $|A_k|=2n(2n-2)!$. If $\varnothing\ne I\subseteq[n]$, and $|I|=k$, then
$$\left|\bigcap_{i\in I}A_i\right|=2^k\frac{n!}{(n-k)!}(2n-2k)!\;,$$
so
$$\begin{align*}
\left|\bigcup_{k=1}^nA_k\right|&=\sum_{\varnothing\ne I\subseteq[n]}(-1)^{|I|+1}\left|\bigcap_{k\in I}A_k\right|\\
&=\sum_{k=1}^n(-1)^{k+1}2^k\binom{n}k\frac{n!}{(n-k)!}(2n-2k)!\\
&=\sum_{k=1}^n(-1)^{k+1}2^k\frac{n!^2(2n-2k)!}{k!(n-k)!^2}\;,
\end{align*}$$
and the number of good permutations is
$$\begin{align*}
(2n)!-\sum_{k=1}^n(-1)^{k+1}2^k\frac{n!^2(2n-2k)!}{k!(n-k)!^2}&=(2n)!+\sum_{k=1}^n(-1)^k2^k\frac{n!^2(2n-2k)!}{k!(n-k)!^2}\\
&=\sum_{k=0}^n(-1)^k2^k\frac{n!^2(2n-2k)!}{k!(n-k)!^2}\\
&=\sum_{k=0}^n\frac{(-2)^kn!^2(2n-2k)!}{k!(n-k)!^2}\;.
\end{align*}$$
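A brute-force confirmation for small $n$ (my own check, not part of the proof):

```python
from itertools import permutations
from math import factorial

def direct(n):
    # Count permutations s of [2n] with s(i) != s(i+n) (mod n) for all i.
    return sum(1 for s in permutations(range(1, 2*n + 1))
               if all(s[i] % n != s[i + n] % n for i in range(n)))

def formula(n):
    return sum((-2)**k * factorial(n)**2 * factorial(2*n - 2*k)
               // (factorial(k) * factorial(n - k)**2) for k in range(n + 1))

for n in (1, 2, 3):
    print(n, direct(n), formula(n))  # the two counts agree
```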
|
H: Prove that $\displaystyle{\lim_{n \to \infty}a_n ^{1/k}}= a^{1/k}$ if $a_n \ge 0$ for all $n$ and $\displaystyle{\lim_{n \to \infty}a_n}= a$
Prove that $\displaystyle{\lim_{n \to \infty}a_n ^{1/k}}= a^{1/k}$ if
$a_n \ge 0$ for all $n$ and $\displaystyle{\lim_{n \to \infty}a_n}= a$
The book tells me to use $x^k - y^k = (x-y)(x^{k-1}+x^{k-2}y+ \dots + y^{k-1})$ with $x= a_n ^{1/k}$ and $y= a^{1/k}$. I did that and I rearranged to get an expression for $|a_n ^{1/k}-a^{1/k}|$. After that I divided the problem into two cases: $a=0$ and $a>0$. If $a>0$ then there exist positive integers $m$ and $N$ such that $a_n >m$ for all $n>N$. Using this, we can find a lower bound $L$ for the denominator of the expression for $|a_n ^{1/k}-a^{1/k}|$ for $n > N$. Then $|a_n ^{1/k}-a^{1/k}|< |a_n-a|\frac{1}{L}$ and the rest is easy. However, I can't deal with the case when $a=0$. How can I complete this proof?
AI: For $a=0$ you may reason as follows. If $a_n\rightarrow0$, for any $\varepsilon>0$ there is $n_\varepsilon$ such that $a_n < \min(1,\varepsilon^k)$ for all $n\geq n_\varepsilon$. Thus
$$
a^{1/k}_n < \varepsilon
$$
for all $n\geq n_\varepsilon$. This means that $\lim_na^{1/k}_n=0$.
|
H: If $\hat{f}(k)=0$ for all $k <0$, then $f(x)\geq0$ for all $x$
I just started learning about the Fourier series, is this statement true or false?
Looking at $\mathcal {R}(-\pi,\pi).$
If $\hat{f}(k)=0$ for all $k <0$, then $f(x)\geq0$ for all $x$.
AI: Notice that $\widehat{f}(k)=\frac{1}{2\pi}\int^\pi_{-\pi}e^{-iky}f(y)\,dy=\overline{\widehat{f(-k)}}$
So if $f$ is real, your condition implies that $\widehat{f}(k)=0$ for all $k$ and so $f=f(0)=\widehat{f}(0)$ a.s. So, unless $f(0)\geq0$, the answer to your question is general is NO.
|
H: Associative algebras with commutative multiplication?
I.e. the bilinear map/product is not only associative, but also commutative. I am looking for examples of unital associative algebras, so they should be a vector space and a ring, not a vector space and a rng.
One example, inspired by When is matrix multiplication commutative?, is the set of all diagonal matrices. Generalizing this, we could look at all $n \times n$ matrices over $\mathbb{R}$ that share a common eigenbasis (in the case of all diagonal matrices, this eigenbasis forms $I_n$), though they need not have the same eigenvalues $\in \mathbb{R}$. Are there other examples?
AI: Another class of examples: continuous $\mathbb C$-valued functions on some topological space. And various subalgebras of that where some restrictions are placed on the functions, e.g. analytic functions on some domain in $\mathbb C$.
|
H: How to find the domain and range for the composition $g\circ f$, i.e. $g(f(x,t),t)$?
I have the following:
\begin{align}
\frac{\partial }{\partial t} f(x,t)&=g(f(x,t),t) \tag 1\\
f(x,0)&= x \tag 2
\end{align}
where $g:\mathbb R^{n+1}\to \mathbb R^n$
The domain and range for $f$ is not stated, but I assume $f:\mathbb R^{n+1}\to \mathbb R^n$?
However, in the RHS we have a function composition of $g$ and $f$, i.e. $g\circ f$. But the composition is not valid if $f:\mathbb R^{n+1}\to \mathbb R^n$ according to the definition (Wikipedia):
The functions $f:X \rightarrow Y$ and $g:Y\rightarrow Z$ are composed to yield a function... The resulting composite function is denoted $g\circ f : X \rightarrow Z$, defined by $(g\circ f)(x) = g(f(x))$.
How can I "see" the domain and range for $f$ and $g\circ f$ from $(1)-(2)$?
AI: Since domain of $g$ is $\Bbb{R}^{n+1}$, this means the vector $(f(x,t), t)$ must be in $\Bbb{R}^{n+1}$. Consequently, $f(x,t) \in \Bbb{R}^n$. So the co-domain for $f$ is $\Bbb{R}^{n}$. From the second equation given, we also get that $x \in \Bbb{R}^n$.
But observe that the input vector $(x,t)$ will now be in $\Bbb{R}^{n+1}$.
Finally we can say that $f:\Bbb{R}^{n+1} \rightarrow \Bbb{R}^n$.
The composition map $g \circ f$ is NOT defined. Note that $g(f(x,t), t)$ is NOT the composition of $g$ and $f$ because there is an extra argument $t$ in the input for $g$.
Example
Let $g:\Bbb{R}^3 \rightarrow \Bbb{R}^2$ be defined as $g(p,q,r)=(p+q,r)$ and let $f:\Bbb{R}^3 \rightarrow \Bbb{R}^2$ be defined as $f(a,b,c)=(a^2,b+c)$.
Let $\mathbf{x}=(x_1,x_2)$
\begin{align*}
g(f(\mathbf{x},t), t)&=g(f(x_1,x_2,t)\,, \,t )\\
&=g((x_1^2,x_2+t), \, \, t)\\
&=g(x_1^2,\,\, x_2+t, \,\, t)\\
&=(x_1^2+x_2+t, \,\, t)
\end{align*}
|
H: Why is it called numerical integration when we numerically solve differential equations?
This has been bugging me literally for years.
When numerically simulating a system of differential equations (e.g., with Runge-Kutta or Euler methods), we are using the derivative to estimate the value of the function at the next time step. Why is this called numerical integration or integration rather than simply numerical simulation or function estimation or something?
I have not found this nomenclature discussed, and would love to see the origins. I am probably Googling wrong and missing something obvious. My guess is that from the fundamental theorem, an operation that brings you from $\dot{x}$ to x is by definition integration, so we are technically doing numerical integration?
I suppose it could be that all of the same terms are involved as when you calculate the integral using a Riemann sum (or related techniques). But for the differential equation we are not calculating the area but the value of the function at the next time step, so it doesn't seem like an integral in that sense.
AI: Integration is the general term for the resolution of a differential equation.
You probably know the simple case of antiderivatives,
$$\int f(x)\,dx$$ which in fact solve the ODE $$y'(x)=f(x)$$ via an integral.
The same term is used when you solve, say
$$y'(x)=y(x)+5,$$
giving
$$y(x)=ce^x+5.$$
You integrate the equation. Sometimes, the solution itself is called an integral.
You can integrate by analytical methods, and also by numerical methods.
|
H: Finding points $X=(x,y)$ and directions where directional derivative of $f(x,y)=3x^2+y^2$ is maximum. The points are to be taken on $x^2+y^2=1$
Let the directional vector be $d=(a,b)$ so that $a^2+b^2=1$
Directional derivative of $f$ along $d$ is $ f'(X,d)=\nabla f.d=(6x,2y).(a,b)$
$=6xa+2yb=6\sin \theta\sin\phi+2\cos\theta\cos\phi=4+2\cos(\theta-\phi)$, where let $x=\cos\theta,a =\cos\phi$
Hence, $f'(X,d)|_{max}=6$, which occurs when $\theta -\phi =2n\pi\implies$that is, when $x=a,y=b$
Is this correct? I am confused because I saw the solution online and $f'(X,d)|_{max}=8$.
Can someone please help me understand how it is possible? Thanks in advance.
AI: You basically try to solve the following problem: Find the max value of $g(a,b,x,y) = 6ax+2by$ such that $a^2+b^2=1=x^2+y^2$. Simple. Apply the CS inequality: $|g(a,b,x,y)|^2 \le (36a^2+4b^2)(x^2+y^2)= 36a^2+4b^2= 4+32a^2\le 4+32=36\implies |g| \le \sqrt{36}=6$. This is the max value you are seeking and it is attained when $a = \pm 1, b = 0\implies g = \pm 6x=6\implies x =\pm 1, y=0$. Note that if $a = 1, x = 1$, and $a = -1, x = -1$.
|
H: Polynomial interpolation of a polynomial
Let's say we start with a polynomial like
$$ f(x) = a_{1} x^n + a_{2} x^{n-1} + \cdots + a_{n} $$
then we take $2n$ points on this function, and we try to find the interpolating polynomial through those $2n$ points (and so we will look for a polynomial of degree $2n-1$).
So now my question is, is this new polynomial equal to the first one?
My question comes from the statement "the polynomial interpolation will find the best polynomial that fits those points", but I can't understand how it would find a better polynomial than the original one using $2n$ monomials (maybe it will set the other coefficients, from $a_{n+1}$ to $a_{2n}$, to zero?).
AI: Given $k+1$ distinct points $x_0,x_1,\ldots,x_k$ and $k+1$ values $b_0,b_1,\ldots,b_k$, there is one, and only one, polynomial $p(x)$, either equal to $0$ or of degree at most $k$ such that $p(x_i) = b_i$ for $i=0,1,\ldots,k$.
The existence can be established using, for example, Lagrange interpolation.
For the uniqueness clause, if $p(x)$ and $q(x)$ are two polynomials, each either the zero polynomial or of degree at most $k$, with $p(x_i)=q(x_i)=b_i$ for each $i$, then $p-q$ is either the zero polynomial or has degree at most $k$ and has at least $k+1$ roots (namely, $x_0,x_1,\ldots,x_k$). Since a nonzero polynomial of degree at most $k$ has at most $k$ roots, it follows that $p-q$ is the zero polynomial, so $p=q$.
That means that if you start with a nonconstant polynomial of degree $n$, and then try to use interpolation to find a polynomial that agrees with the given polynomial at $2n$ points, then (since $2n-1\geq n$ for nonconstant polynomials) the polynomial you get out of the interpolation process is the one you started with, because that one works and has degree at most $2n-1$, and there is at most one that works and has degree at most $2n-1$. So it’s the one you already have.
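To see this in action (an illustration of mine with numpy, not part of the argument): sample a cubic ($n=3$) at $2n=6$ points and fit a polynomial of degree at most $2n-1=5$; the extra coefficients come out as zero.

```python
import numpy as np

coeffs = [2.0, 0.0, -1.0, 3.0]        # f(x) = 2x^3 - x + 3, degree n = 3
xs = np.arange(6, dtype=float)        # 2n = 6 distinct sample points
ys = np.polyval(coeffs, xs)

fit = np.polyfit(xs, ys, deg=5)       # interpolant of degree <= 2n-1 = 5
print(np.round(fit, 6))               # [ 0.  0.  2.  0. -1.  3.]
```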
|
H: Finding extreme values using chain rule in multivariate function
We are given the function
$$
F(x,y) = (x^2 + y^2)^2 - 2(x^2 - y^2)
$$
with the condition $F(x, y(x)) = 0$ for
$$
y: (0, \sqrt{2}) \to \mathbb{R}, x \mapsto y(x)
$$
The objective is to compute all extreme points and classify them into max/min.
So far I've computed the partial derivatives and got an expression for $y'$
$$
\begin{align*}
\partial_x F(x,y) &= 4x(x^2 + y^2 - 1) \\
\partial_y F(x,y) &= 4y(x^2 + y^2 + 1)
\end{align*}
$$
So using the chain rule we get
$$
y' = - \frac{\partial_x F(x, y(x))}{\partial_y F(x,y(x))} = - \frac{x(x^2 + [y(x)]^2 - 1)}{y(x) (x^2 + [y(x)]^2 + 1)}
$$
So it looks like $y' = 0 \iff x = 0 \lor x^2 + [y(x)]^2 - 1 = 0$.
Now because of the constraints given on $y(x)$, $x \in (0, \sqrt{2}) \implies x \neq 0$.
Thus the only possible solution is $x^2 + [y(x)]^2 - 1 = 0$ which implies $x^2 + [y(x)]^2 + 1 = 2$ and $y(x) \neq 0$.
But how can I compute the exact $x$ value from this?
AI: Using $$y^2=1-x^2$$ we get $$1=2(x^2-(1-x^2)),$$ i.e. $x^2=\frac34$.
|
H: Find all $a\in\mathbb{N}$ such that $3a+6$ divides $a^2+11$
Find all $a\in\mathbb{N}$ such that $3a+6$ divides $a^2+11$
This problem has stumped me. I don't even know where to begin solving it. I know the solutions will be all $a$ such that
$$\frac{a^2+11}{3a+6}=k$$
with $k\in\mathbb{Z}$
But I really don't know how to follow from here
AI: If $3a+6$ divides $a^2+11$, then $3a+6$ divides $3a^2+33$.
$3a+6$ also divides $3a^2+6a$, so this means $3a+6$ divides $6a-33$, or $a+2$ divides $2a-11$.
$a+2$ also divides $2a+4$, so this means $a+2$ divides $15$.
$15$ does not have many factors, so I leave you to check them.
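A one-line check of the remaining candidates (my own, not part of the answer): $a+2\in\{3,5,15\}$ gives $a\in\{1,3,13\}$, and a direct test shows which survive.

```python
# a + 2 divides 15, so a <= 13; test the divisibility condition directly.
print([a for a in range(1, 14) if (a*a + 11) % (3*a + 6) == 0])  # [13]
```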
|
H: Is the unit ball of a dense set dense?
Let $(X,||\cdot||)$ be a normed vector space, and let $Y \subset X$ be a dense subset of $X$. Does it follow that $\{y: y \in Y, ||y|| \leq 1\}$ is dense in $\{x: x \in X, ||x|| \leq 1\}$?
AI: Let $B$ stand for the unit ball. Then the answer is yes if $closure(interior(B)) = B$, which holds automatically in a normed space.
Let $x \in B$, and let $\varepsilon > 0$. We have to show that there is a $y \in Y \cap B$ within $\varepsilon$ of $x$.
Since $Y$ is dense in $X$, for any $\delta > 0$ there is a $z \in Y$ within $\delta$ of $x$.
Thus, if $x \in interior(B)$, then we can pick $\delta = \min \{1-\|x\|,\varepsilon\}$ small enough so that $z \in B$, and hence $z \in B \cap Y$.
If $x$ has $||x|| = 1$, then we will reduce it to the first case as follows: Pick another point $x' \in interior(B)$ within $\varepsilon /2$ of $x$, which exists because $closure(interior(B)) = B$. Now since $x'$ is in the interior, we can use the first case to find a point $z' \in B \cap Y$ within $\varepsilon /2$ of $x'$. By triangle inequality, this is within $\varepsilon$ of $x$.
Note that we did not need the full strength of $Y$ being dense in $X$, just that it's dense in $B$.
|
H: Prove an inequality $\ln(1-1/x)<2/(1-2x)$
I need some help to prove this inequality:
$$\ln(1-1/x) < \frac{2}{1-2x}$$
with
$$x > 1$$
I did plot the curve of $\ln(1-1/x)-2/(1-2x)$ and it's always in minus.
Many thanks in advance!
AI: Let $$f(x)=\ln(1-1/x)-\frac{2}{1-2x} \implies f'(x)=\frac{1}{x(x-1)(1-2x)^2}>0 \quad \text{for } x>1.$$ So $f(x)$ is an increasing function for $x>1$, hence $f(x)<\lim_{t\to\infty}f(t)=0$, and the result follows.
|
H: Weak limit of non-negative functions is non-negative (without Mazur)
Let $\Omega \subseteq \mathbb R^2$ be compact subset.
Suppose that $g_n \ge 0$ lie in $L^1(\Omega)$ and that $g_n$ converges weakly in $L^1$ to $g$.
Is there a way to prove that $g \ge 0$ a.e. on $\Omega$ without using Mazur's lemma?
I guess what I have in mind is the following:
We have
$$\int_{\Omega} g f =\lim_{n \to \infty} \int_{\Omega} g_n f\ge 0 $$
for any $f \ge 0$ be in $L^{\infty}(\Omega)$.
Does this property imply that $g$ is non-negative? I think that there should be a way of showing this but I am not sure how...
AI: Take $f = \mathbf{1}_{g\leq 0} \in L^\infty$. Then
$$0\leq\int_\Omega gf = \int_{g \leq 0} g \leq 0.$$
So the function $gf$ is nonpositive and its integral is zero hence it vanishes a.e. This implies that $g \geq 0$ a.e.
|
H: Find the probability of not owning $x$ or $y$ or both
Bit confused as to how to work this out.
If the probability of $x$ is $0.35$
and the probability of $y$ is $0.6$
and the probability of $x$ and $y$ is $0.26$
How do I go working out the probability of not owning $x$ or $y$ or both?
AI: Let's note the following Venn diagram, where the data are shown in %: $X$ only is $9$, $Y$ only is $34$, the overlap $X\cap Y$ is $26$, and outside both circles is $31$. As you can see,
the probability of being "out of $X$", say out of the red, is $34+31=100-35=65\%$,
the probability of being "out of $Y$", say out of the blue, is $31+9=100-60=40\%$,
and the probability of being out of both circles is $31\%$.
|
H: Calculating number of items in a summed series - non-repeating connections between points
I would like to calculate the number of possible connections in a set of points, and although I can express the idea in a mathematical formula, I don't know how to "reduce" it to a working calculation.
Let's say I have points A to F that I want to interconnect. I can illustrate the task like so:
A (B C D E F)
B (C D E F)
C (D E F)
D (E F)
E (F)
F --
We start with A, connecting to B C D E F, then move on to B and connect to C D E F, omitting A as we already have the A-B connection from the previous step. And so on until F, where we create no new connections, given that all points are already connected to F.
Mathematically I can state this summing as:
(N-1)+(N-2)+(N-3) ... + (N-(N-1))
That's as far as I can get. I'm not a mathematician and don't know how to go from there to a solution.
I also suspect that what I'm doing here is a known problem in math. Does it have a name that I could look up?
AI: Okay, I'll make the comment an answer. The derivation of the arithmetic-progression sum is here.
Another way you can count pairs of elements of a set of cardinality $n$ is ${n\choose 2}$, the number of ways to choose $2$ items out of $n$ distinct items irrespective of order.
The formula for the number of $k$-combinations is $${n\choose k}=\frac{n!}{k!(n-k)!},$$
and for $k=2$ this gives the same formula $${n\choose 2}=\frac{n(n-1)}{2}.$$
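Since the question comes from programming, here is a tiny demonstration (mine) that the formula matches direct enumeration:

```python
from itertools import combinations

points = list("ABCDEF")
n = len(points)
print(len(list(combinations(points, 2))))  # 15 enumerated connections
print(n * (n - 1) // 2)                    # 15 from the formula
```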
|
H: If a divides $b-1$ and a divides $c-1$ then a divides $bc-1$
I am wondering if the proof I did for this problem is correct.
We know $a$ divides $b-1$,
so $b-1=at$ for some integer $t$;
also $a$ divides $c-1$,
so $c-1=ar$ for some integer $r$.
So then $b=at+1$ and $c=ar+1$, and therefore
$bc-1=(at+1)(ar+1)-1$
$bc-1=a^2tr+at+ar+1-1$
so then, subtracting the $1$,
$bc-1=a^2tr+at+ar=a(atr+t+r),$
so $a$ is a factor of $bc-1$. Proof complete.
AI: $$ (b-1)(c-1) +(b-1) + (c-1) \; \; = \; \; bc-1 $$
|
H: Show that if $X$ and $Y$ are independent with the same exponential distribution, then $Z= |X - Y|$ has the same exponential distribution
$$P\left(Z\le z\right)=P\left(\left|X-Y\right|\le z\right)=P\left(-z\le X-Y\le z\right)=P\left(Y-z\le X\le Y+z\right)$$
This means that, because $\space f(x,y)=f(x)f(y) \space$ as they are independent, we get to calculate this integral:
$$\int _0^{\infty }\:\int _{y-z}^{y+z}\:\left(\lambda e^{-\lambda x}\right)^2dxdy$$
We get
$$\left(e^{\left(-2\cdot \lambda \cdot z\right)}\cdot \left(e^{\left(4\cdot \lambda \cdot z\right)}-1\right)\right)/4$$
Which is not what we want
So where's the mistake?
AI: The main error arises from not considering when the interval $[y-z, y+z]$ is not a subset of $[0,\infty)$. That is to say, when $y < z$, then $y-z < 0$. So you need to take this into account when integrating over $x$. The second error is in writing the integrand as the square of the marginal density $f_X(x)$, when the outer integral is with respect to $y$.
The way to set up the integral is
$$\begin{align*}
&\Pr[Y - z \le X \le Y + z] \\
&\quad = \int_{y=0}^z \int_{x=0}^{y+z} f_{X \mid Y}(x \mid y) f_Y(y) \, dx \, dy + \int_{y=z}^\infty \int_{x=y-z}^{y+z} f_{X \mid Y}(x \mid y) f_Y(y) \, dx \, dy\end{align*}$$ where $f_{X \mid Y}(x \mid y) = f_X(x)$ because $X$ and $Y$ are independent.
As the above is not sufficiently detailed for the audience, I will proceed with a full explanation as follows.
Note we want to compute $\Pr[Y - z \le X \le Y + z]$ for IID $$X, Y \sim \operatorname{Exponential}(\lambda)$$ with rate parametrization $$f_X(x) = \lambda e^{-\lambda x}, \quad x \ge 0, \\ f_Y(y) = \lambda e^{-\lambda y}, \quad y \ge 0.$$
We will do this the mechanical way as requested, then show an alternative computation that is easier.
First note $$\begin{align*}
&\Pr[Y - z \le X \le Y + z] \\
&= \Pr[0 \le X \le Y+z \mid Y \le z]\Pr[Y \le z] + \Pr[Y - z \le X \le Y + z \mid Y > z]\Pr[Y > z],
\end{align*}$$ where we conditioned the probability on the event $Y > z$. This gives us the aforementioned sum of double integrals.
Next, we consider the following idea: let $$f_{X,Y}(x,y) = f_X(x) f_Y(y) = \lambda^2 e^{-\lambda (x+y)}, \quad x, y \ge 0$$ be the joint density. We want to compute $F_Z(z) = \Pr[|X - Y| \le z]$. To do this, we want to integrate the joint density over the region for which $|X - Y| \le z$, when $X, Y \ge 0$; i.e., when $(X,Y)$ is in the first quadrant of the $(X,Y)$ coordinate plane. Notably, when $Y = X$, then $|X-Y| = 0$, and as the distance away from this line increases, $|X-Y|$ increases. So this region comprises a "strip" of width $\sqrt{2} z$ centered over $Y = X$ in the first quadrant. We can also see this by simply sketching the region bounded by the lines $X \ge 0$, $Y \ge 0$, $Y \le X+z$, $Y \ge X-z$.
But because this region is symmetric about $Y = X$, and the joint density is also symmetric about this line, the integral can be written symmetrically: $$\int_{x=0}^\infty \int_{y=x}^{x+z} f_{X,Y}(x,y) \, dy \, dx + \int_{y=0}^\infty \int_{x=y}^{y+z} f_{X,Y}(x,y) \, dx \, dy,$$ and these two pieces are equal in value. This avoids the computation of two separate double integrals with different values.
To really solidify the point, here are some diagrams of the regions of integration. The region described in the first setup looks like this:
The blue region is the first integral, and the orange is the second (where I've only plotted up to $x, y \le 10$ since obviously a plot to $\infty$ is not possible), for the choice $z = 2$. This illustrates why we must split the region into two parts, because if we integrated $x$ on $[y-z, y+z]$ for the blue region, you'd be including a portion in the second quadrant that is not allowed. But this is not the only way to split up the region:
This is the second approach I described. The blue region is the first integral, and the orange region is the second. This is possible because the order of integration of one integral is the reverse of the order in the other, whereas in the first setup, the order of integration is the same (horizontal strips). Here, we integrate the blue region in vertical strips, and the orange region in horizontal strips. And because of the symmetry, we don't even have to compute both--the contribution of the orange region is equal to the contribution of the blue.
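A Monte Carlo sanity check of the conclusion (my own sketch; note that numpy parametrizes the exponential by the scale $1/\lambda$):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, size = 1.5, 1_000_000
x = rng.exponential(1/lam, size)
y = rng.exponential(1/lam, size)
z = np.abs(x - y)

# Empirical CDF of |X - Y| versus 1 - exp(-lam t); the columns should match.
for t in (0.2, 0.5, 1.0, 2.0):
    print(t, (z <= t).mean().round(4), round(1 - np.exp(-lam*t), 4))
```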
|
H: Problem from Discrete Mathematics and its application for Rosen section 4.4
This exercise outlines a proof of Fermat’s little theorem.
a) Suppose that a is not divisible by the prime p. Show that no two of
the integers 1 · a, 2 · a, . . . , (p − 1)a are congruent modulo p.
b) Conclude from part (a) that the product of $1, 2, \ldots, p-1$ is
congruent modulo $p$ to the product of $a, 2a, \ldots, (p-1)a$. Use this
to show that $(p-1)! \equiv a^{p-1}(p-1)! \pmod{p}$.
I already solved part (a), but I couldn't solve part (b).
So can anyone please help me solve it, i.e. part (b)?
Note: I can't use Fermat's little theorem to prove it, as this problem is itself a step in proving Fermat's little theorem.
AI: b) The sets $\{1,2,\dots,p-1\}$ and $\{a,2a,\dots,(p-1)a\}$ are the same modulo $p$ by item (a). So the products of their elements must be congruent: $(p-1)! \equiv a\cdot 2a \cdot {\dots} \cdot (p-1)a \equiv a^{p-1}(p-1)! \pmod p$. (Since $\gcd((p-1)!,p)=1$, cancelling $(p-1)!$ then yields $a^{p-1}\equiv 1 \pmod p$, which is Fermat's little theorem.)
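A quick numeric illustration of part (b) (mine, for one choice of $p$ and $a$):

```python
from math import factorial

p, a = 11, 4
# Both residues are 10, i.e. (p-1)! = a^(p-1) (p-1)!  (mod p).
print(factorial(p - 1) % p, (a**(p - 1) * factorial(p - 1)) % p)  # 10 10
```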
|
H: For $f \in L^1_{\text{loc}}(\Bbb{R}^d)$ the average $x \mapsto \frac1{\lambda(B(x,r))} \int_{B(x,r)} f(y)\,dy$ is measurable
Let $f \in L^1_{\text{loc}}(\Bbb{R}^d)$. For $x \in \Bbb{R}^d$ and $r > 0$ define the average of $f$ over the ball $B(x,r)$ as
$$(A_rf)(x) := \frac1{\lambda(B(x,r))} \int_{B(x,r)} f(y)\,dy.$$
I need to prove that for fixed $r> 0$ the map $x \mapsto A_r f$ is measurable.
The proof given to me is the following:
Fix $r > 0$ and denote $\lambda(B(0,1)) = \omega_d$. Then $\lambda(B(x,r)) = \omega_d r^d$ so
$$(A_rf)(x) := \frac1{\lambda(B(x,r))} \int_{B(x,r)} f(y)\,dy = \frac{1}{\omega_d r^d} \int_{\Bbb{R}^d} \chi_{\{(x,y) : \|x-y\| < r\}}(x,y)f(y)\,dy$$
and measurability follows from Fubini's theorem.
My attempt to understand this:
The idea seems to be to show that $\frac{1}{\omega_d r^d}\chi_{\{(x,y) : \|x-y\| < r\}}(x,y)f(y) \in L^1(\Bbb{R}^d \times \Bbb{R}^d)$ and then by Fubini the iterated integral
$$\int_{x \in \Bbb{R}^d} \left(\frac{1}{\omega_d r^d} \int_{\Bbb{R}^d} \chi_{\{(x,y) : \|x-y\| < r\}}(x,y)f(y)\,dy\right)\,dx $$
is well-defined and the integrand is measurable.
Using Tonelli's theorem, it suffices to show that the iterated integral exists:
\begin{align}
\int_{y \in \Bbb{R}^d} \left(\frac{1}{\omega_d r^d} \int_{\Bbb{R}^d} \chi_{\{(x,y) : \|x-y\| < r\}}(x,y)|f(y)|\,dx\right)\,dy &= \frac1{\omega_d r^d} \int_{y \in \Bbb{R}^d} \left(\int_{x \in \Bbb{R}^d} \chi_{\{(x,y) : \|x-y\| < r\}}(x,y)\,dx\right)|f(y)|\,dy\\
&= \frac1{\omega_d r^d} \int_{y \in \Bbb{R}^d} \left(\int_{x \in \Bbb{R}^d} \chi_{B(y,r)}(x)\,dx\right)|f(y)|\,dy\\
&= \frac1{\omega_d r^d} \int_{y \in \Bbb{R}^d} \lambda(B(y,r))|f(y)|\,dy\\
&= \frac1{\omega_d r^d} \int_{y \in \Bbb{R}^d} \omega_dr^d|f(y)|\,dy\\
&= \int_{y \in \Bbb{R}^d} |f(y)|\,dy
\end{align}
However, this doesn't have to be finite since $f \in L^1_{\text{loc}}(\Bbb{R}^d)$, not in $L^1(\Bbb{R}^d)$. What am I missing here?
AI: It should be easier than that. Sketch: for fixed $r$, $A_rf(x)$ is in fact continuous in $x$ (hence measurable). To see why, fix $x$ and for $y$ close to $x,$ note
$$\int_{B(y,r)}f - \int_{B(x,r)}f = \int_{B(y,r)\setminus B(x,r)}f - \int_{B(x,r)\setminus B(y,r)}f.$$
In absolute value, this is bounded above by
$$\int_{B(y,r)\setminus B(x,r)}|f|+ \int_{B(x,r)\setminus B(y,r)}|f|.$$
Now $f$ is in $L^1(B(x,2r))$ and the measure of both domains of integration $\to 0$ as $y\to x.$ This should give it, by absolute continuity of the integral.
|
H: Has the given statement regarding Set Theory been correctly stated in the form of a logical statement using logical symbols?
Let's assume that we have a statement, stated in words as follows:
If $A \subseteq B$, then for any given value of $x$ such that $x \in A$, it will follow that $x \in B$.
Now, let's assume that we have to write this as a single logical statement, how would we do this? The first thing that came to my mind was :
$$A \subseteq B \implies x \in A \implies x \in B$$
But, this feels more like : "If $A$ is a subset of $B$, then $x$ will belong to $A$ and $B$". It feels like we're stating this for an already given value of $x$ rather than for any value of $x$ such that it belongs to $A$.
Another method I thought of was by using parenthesis, but I don't know if it's right. This is what I thought of :
$$A \subseteq B \implies (x \in A \implies x \in B)$$
The one mentioned below seems to be the best to me. I use the $\forall$ symbol in it. In the statement in words, it's stated that "for any given value of $x$ such that...", which according to me can be replaced by "for all values of $x$ such that..." to obtain this :
$$A \subseteq B \implies \forall x \in A \implies x \in B$$
I want to know which one of these are right and which one (not necessarily out of these) is the most appropriate one to use.
Thanks!
AI: In this case we have a definition for subset in terms of set theory. Think of definitions as "if and only if" i.e. $\iff$. In terms of symbols, we have:
$$
A \subseteq B \iff \big( x \in A \implies x \in B\big).
$$
Hope this clears it up.
|
H: Exercises identifying types of differential equations
I am preparing for a test and want to know if I can identify the different types of differential equations. Are there any tests online? I have searched but couldn't find any exercises of this type.
AI: There is a big number of different families of differential equations, so other than general advice there is probably not a lot one individual resource/person will do for you. You should probably make sure you are familiar with general properties of Differential Equations, such as order, degree, etc. (you can find such definitions with a simple Google search). You also have
(1) Ordinary Differential Equations (judging by the tags, this is the one you are interested in), where the equation has one independent variable. Some examples include:
- Separable ODEs
- First Order Linear ODEs
- First Order Homogeneous ODEs
- Second Order Linear Constant Coefficient Homogeneous ODEs
- Second Order Linear Homogeneous (not necessarily constant coefficient) ODEs
- Exact ODEs
And many, many more. These are just some that pop to mind straight away, with relatively nice solution methods (apart from the non-constant-coefficient second order linear homogeneous ODE, which is not necessarily nice to solve).
(2) Partial Differential Equations, where we have more than one independent variable. Again, there are lots of different types, and the methods of solution tend to be quite different (quite a simple method of solution being something like separation of variables, in which we reduce the problem to a system of ODEs).
Ultimately, Differential Equations are much harder to solve than equations you have probably previously encountered, and most in fact do not have nice solutions. This is why we like to characterize them - it allows us to place names to things we can solve, and what we readily have methods available for.
|
H: Find intersection between line and ellipsoid
I want to find points $\space P(x,y,z) \space$ where a line intersect an ellipsoid with $$P = P_{1}+t(P_{2}-P_{1})$$
Here is where I'm stuck:
The ellipsoid can be described as:
$$\frac{(x-x_{3})^{2}}{a^{2}} + \frac{(y-y_{3})^{2}}{b^{2}} + \frac{(z-z_{3})^{2}}{c^{2}} = 1$$
after substitution:
$$\frac{(x_{2} - x_{1})^{2}t^{2}-2t(x_{2}-x_{1})(x_{3}-x_{1})+(x_{3}-x_{1})^{2}}{a^{2}} + \frac{(y_{2} - y_{1})^{2}t^{2}-2t(y_{2}-y_{1})(y_{3}-y_{1})+(y_{3}-y_{1})^{2}}{b^{2}} + \frac{(z_{2} - z_{1})^{2}t^{2}-2t(z_{2}-z_{1})(z_{3}-z_{1})+(z_{3}-z_{1})^{2}}{c^{2}} - 1 = 0$$
Substitution gives a quadratic equation of the form:
$$kt^{2} + lt + m = 0$$
Any help would be appreciated.
AI: Translate the ellipsoid to origin, by subtracting $(x_3, y_3, z_3)$ from $P1$ and $P2$, so the line is parametrised as
$$\vec{p} = \vec{v}_0 + t \vec{v}_1$$
where
$$\begin{aligned}
\vec{v}_0 = (\chi_0, \gamma_0, \zeta_0) &= (x_1 - x_3, y_1 - y_3, z_1 - z_3) \\
\vec{v}_1 = (\chi_1, \gamma_1, \zeta_1) &= (x_2 - x_1, y_2 - y_1, z_2 - z_1) \\
\end{aligned}$$
and the ellipsoid is
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1$$
Substituting $\vec{p}$ into the ellipsoid yields
$$\left(\frac{\chi_0 + t \chi_1}{a}\right)^2 + \left(\frac{\gamma_0 + t \gamma_1}{b}\right)^2 + \left(\frac{\zeta_0 + t \zeta_1}{c}\right)^2 = 1$$
Expand the terms and collect the coefficients for powers of $t$ and you get
$$\begin{aligned}
\left( \displaystyle \frac{\chi_1^2}{a^2} + \frac{\gamma_1^2}{b^2} + \frac{\zeta_1^2}{c^2} \right) & t^2 ~ + \\
\left( \displaystyle \frac{2 \chi_0 \chi_1}{a^2} + \frac{2 \gamma_0 \gamma_1}{b^2} + \frac{2 \zeta_0 \zeta_1}{c^2} \right) & t ~ + \\
\left( \frac{\chi_0^2}{a^2} + \frac{\gamma_0^2}{b^2} + \frac{\zeta_0^2}{c^2} - 1 \right) & ~ = 0 \\
\end{aligned}$$
This is a simple quadratic equation in $t$, which can have 0, 1, or 2 real roots $t$.
Remember that we only translated the coordinate system so that they were relative to the center of the ellipsoid, but we didn't scale or rotate the coordinate system. So, after you have found $t$, you can find the point it refers to, in the original coordinate system, using your original parametrised line,
$$P = P1 + t ( P2 - P1 )$$
i.e.
$$\left\lbrace ~ \begin{aligned}
x &= x_1 + t \chi_1 \\
y &= y_1 + t \gamma_1 \\
z &= z_1 + t \zeta_1 \\
\end{aligned} \right.$$
for each intersection point $t$.
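Here is the recipe transcribed into a short Python function (my own sketch; it rescales coordinates by $(a,b,c)$ so the ellipsoid becomes the unit sphere, which gives exactly the quadratic above):

```python
import numpy as np

def line_ellipsoid(p1, p2, center, abc):
    """Intersection points of the line P1 + t(P2-P1) with the ellipsoid."""
    p1, p2, center, abc = map(np.asarray, (p1, p2, center, abc))
    v0 = (p1 - center) / abc          # translated and scaled start point
    v1 = (p2 - p1) / abc              # scaled direction
    k = v1 @ v1                       # coefficients of k t^2 + l t + m = 0
    l = 2 * (v0 @ v1)
    m = v0 @ v0 - 1
    disc = l*l - 4*k*m
    if disc < 0:
        return []                     # no real intersection
    ts = sorted({(-l - disc**0.5) / (2*k), (-l + disc**0.5) / (2*k)})
    return [p1 + t * (p2 - p1) for t in ts]

# Unit sphere at the origin, line along the x-axis:
print(line_ellipsoid([-2, 0, 0], [2, 0, 0], [0, 0, 0], [1, 1, 1]))
# [array([-1., 0., 0.]), array([1., 0., 0.])]
```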
|
H: If $A\neq\emptyset$ there does not exist a set $S$ whose elements are all the sets equipotent to $A$
Statement
If $A\neq\emptyset$ there is no set $S$ containing all sets equipotent to $A$.
My text suggests proving that if $S$ were a set then $\bigcup S$ would be the set of all sets, which is not a set, but unfortunately I am not able to carry out this argument. So could someone help me and show how to use this argument, please?
AI: Hint: Suppose $x$ is any set not in $A$. Let $a \in A$ be a fixed element of $A$ (which exists by the hypothesis that $A \neq \emptyset$). Then $(A \setminus \{a\}) \cup \{x\}$ is a set equipotent to $A$ containing $x$. Hence every set would belong to some member of $S$, so $\bigcup S$ would contain every set.
|
H: Explain a confusing bound for the integral of a decreasing function.
I am reading a solution of an exercise. In the solution, it says the following:
Consider $g(x,t):=\frac{x}{(1+tx^{2})t^{\alpha}}$, where $x\in (0,\infty)$, $t=1,2,3,\cdots$ and $\alpha>\frac{1}{2}$. Then, since for fixed $x$, $g(x,t)$ is decreasing in $t$, we must have $$\dfrac{x}{(1+(n+1)x^{2})(n+1)^{\alpha}}\leq\int_{n}^{n+1}g(x,t)dt\leq \dfrac{x}{(1+nx^{2})n^{\alpha}}.$$
I am really doubtful about this bound. Why is this true? This is basically claiming that for a decreasing function $g(x)$, it satisfies $$g(n+1)\leq \int_{n}^{n+1}g(x)dx\leq g(n).$$ How can we connect the area beneath the curve and the values of the function at the two endpoints in such a way?
I can neither understand nor believe this... Thanks in advance for any explanation and confirmation!
AI: If $g(x)$ is decreasing, for all $x \in [n, n+1]$, $g(n+1) \leq g(x) \leq g(n)$. Then we have $$g(n+1) = g(n+1) \int_{n}^{n+1} 1 dx = \int_{n}^{n+1} g(n+1) dx \leq \int_{n}^{n+1} g(x) dx \leq \int_{n}^{n+1} g(n) dx = g(n).$$
|
H: Geometry problem I am having trouble to solve
Prove that $AC = \sqrt{ab}$
$a$ is $AB$; $b$ is $CD$; the dot is the centre of the circle. $ABCD$ is a trapezoid, meaning $AB \parallel DC$.
My attempt at solving:
According to this rule (the tangent-secant power of a point),
$$MA^2 = MB \cdot MC$$
I can apply this rule and say that $DA^2 = b\cdot DE$. If I manage to prove that $DE = a$, I solve the problem, because if $DE = a$, that means that $DA = BE$, which leads to $BE = AC$, because both are diagonal of the equilateral trapezoid (ABCE) in the circle.
AI: Let $X$ be on a ray $DA$ beyond $A$
$\angle CDA = \angle BAX = \angle ACB$ (tangent-chord)
$\angle DAC = \angle ABC$ (tangent-chord)
So triangles $\Delta ADC$ and $\Delta BCA$ are similar, so: $${AC\over AB} = {DC\over AC}\implies AC^2 = AB\cdot DC$$
and thus the conclusion follows.
|
H: Distribution of $\frac{2X_1 - X_2-X_3}{\sqrt{(X_1+X_2+X_3)^2 +\frac{3}{2} (X_2-X_3)^2}}$ when $X_1,X_2,X_3\sim N(0,\sigma^2)$
Given that $X_1, X_2, X_3 $ are independent random variables form $N(0, \sigma^2 )$, I have to indicate that the statistic given below has a $t$ distribution or not.
\begin{equation}
\frac{2X_1 - X_2-X_3}{\sqrt{(X_1+X_2+X_3)^2 +\frac{3}{2} (X_2-X_3)^2}}
\end{equation}
In my attempt of solving this problem:
I start by showing that we can write the numerator as $a^TX$, where $a^T = (2, -1, -1)$ and $X^T= (X_1, X_2, X_3)$. Thus we have that $a^TX \sim N(0, a^T(\sigma^2 I)a)= N(0, 6\sigma^2)$. And so $\frac{1}{\sqrt{6\sigma^2}} a^TX \sim N(0,1)$ or $\frac{1}{\sqrt{6\sigma^2}}(2X_1-X_2-X_3)\sim N(0,1)$.
Next, we know that $(X_1+X_2+X_3) \sim N(0, 3\sigma^2)$. This implies that $\frac{1}{\sqrt{3\sigma^2}}(X_1+X_2+X_3) \sim N(0,1)$ and thus,$\frac{1}{{3\sigma^2}}(X_1+X_2+X_3)^2 \sim \chi^2(1)$. Similarly, $\frac{1}{2\sigma^2}(X_2-X_3)^2 \sim \chi^2(1)$. Therefore, $\frac{1}{{3\sigma^2}}(X_1+X_2+X_3)^2 + \frac{1}{2\sigma^2}(X_2-X_3)^2 \sim \chi^2(2)$ or $\frac{1}{{3\sigma^2}}\left((X_1+X_2+X_3)^2 + \frac{3}{2}(X_2-X_3)^2 \right) \sim \chi^2(2)$.
As a third step I have to show that $\frac{1}{\sqrt{6\sigma^2}}(2X_1-X_2-X_3)$ and $ \frac{1}{{3\sigma^2}}\left((X_1+X_2+X_3)^2 + \frac{3}{2}(X_2-X_3)^2 \right)$ are independent and I am not sure how to show that. Any help would be appreciated.
AI: Consider the orthogonal transformation
$$\begin{pmatrix}Y_1 \\ Y_2 \\ Y_3\end{pmatrix}=\begin{pmatrix}\frac{2}{\sqrt 6} &-\frac1{\sqrt 6} & -\frac1{\sqrt 6} \\ \frac1{\sqrt 3} & \frac1{\sqrt 3} & \frac1{\sqrt 3}\\ 0 & \frac1{\sqrt 2} & -\frac1{\sqrt 2} \end{pmatrix}\begin{pmatrix}X_1 \\ X_2 \\ X_3\end{pmatrix}$$
So if $Y=(Y_1,Y_2,Y_3)^T$ and $X=(X_1,X_2,X_3)^T$, then $X\sim N(0,\sigma^2 I_3)\implies Y\sim N(0,\sigma^2 I_3)$.
Therefore,
$$T=\frac{2X_1 - X_2-X_3}{\sqrt{(X_1+X_2+X_3)^2 +\frac{3}{2} (X_2-X_3)^2}}=\frac{\sqrt 6Y_1}{\sqrt{3Y_2^2+3Y_3^2}}$$
I think you can take it from here.
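The punchline, checked by simulation (my own sketch using scipy; by the answer's reduction, $T=\sqrt2\,Y_1/\sqrt{Y_2^2+Y_3^2}$ should be $t$-distributed with $2$ degrees of freedom):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
X = rng.normal(0, 2.0, size=(500_000, 3))         # sigma = 2, say
T = (2*X[:, 0] - X[:, 1] - X[:, 2]) / np.sqrt(
    X.sum(axis=1)**2 + 1.5*(X[:, 1] - X[:, 2])**2)

# Kolmogorov-Smirnov test against t(2); expect a p-value consistent with t(2).
print(stats.kstest(T, stats.t(df=2).cdf).pvalue)
```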
|
H: Find a convergent sequence with $\sum \limits_{n=0}^{\infty} a_n = \sum \limits_{n=0}^{\infty}a_n^2$
Given $(a_n)_{n\in \mathbb N_0}$ with $a_n>0$, find a convergent sequence $(a_n)$ with $\sum \limits_{n=0}^{\infty} a_n = \sum \limits_{n=0}^{\infty}a_n^2$, where $\sum \limits_{n=0}^{\infty} a_n$ and $\sum \limits_{n=0}^{\infty}a_n^2$ also have to converge.
An alternating sequence with $(-1)^n$ came to mind, since $(-1)^0 = (-1)^{2n}$, but such a sequence is not positive, and as of now I can't think of anything that makes $a_n$ a convergent sequence.
AI: Let
$$
a_n=\frac{c}{2^n},
$$
where $c>0,$ a constant to be determined later.
Then
$$
\sum_{n=0}^\infty a_n =\sum_{n=0}^\infty \frac{c}{2^n}=\cdots=2c,
$$
while
$$
\sum_{n=0}^\infty a_n^2 =\sum_{n=0}^\infty \frac{c^2}{4^n}=\cdots=\frac{4c^2}{3}.
$$
Clearly, for $c=3/2$,
$$
\sum_{n=0}^\infty a_n =\sum_{n=0}^\infty a_n^2=3.
$$
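A quick numerical confirmation of the choice $c=3/2$ (my addition; truncating each geometric series at a large index):

    c = 1.5
    s1 = sum(c / 2**n for n in range(200))        # sum of a_n   -> 2c = 3
    s2 = sum((c / 2**n)**2 for n in range(200))   # sum of a_n^2 -> 4c^2/3 = 3
    print(s1, s2)   # both print 3.0, up to floating-point rounding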
|
H: Is there any $C^\infty$ monotonically non-decreasing function $f$ which satisfies the conditions below?
As stated in the above title, is there any $C^\infty$ monotonically non-decreasing function $f: \mathbb{R} \rightarrow \mathbb{R}$ such that $f((-\infty, -2]) = \{-1\}, f([2, \infty)) = \{1\}$ and $f(x) = x $ near $0$?
[My approach to this problem for now]
It is well-known that if you define $a: \mathbb{R} \rightarrow \mathbb{R}$ such that $a(t) = e^{-\frac{1}{t}} (t > 0), a(t) = 0 (t \leq 0)$, $a$ is $C^\infty$ function.
So, when you define $g: \mathbb{R} \rightarrow \mathbb{R}$ as $g(t) = \frac{a(t)}{a(t) + a(1-t)}$, you can easily see that $g$ is a $C^\infty$ monotonically non-decreasing function such that $g((-\infty, 0]) = \{0\}$, $g([1, \infty)) = \{1\}$.
If you define $h: \mathbb{R} \rightarrow \mathbb{R}; t \mapsto g(t) - g(-t)$, then $h$ is a $C^\infty$ monotonically non-decreasing function such that $h((-\infty, -2]) = \{-1\}$, $h([2, \infty)) = \{1\}$. However, it seems that "$h(t) = t$ on $(-\epsilon, \epsilon)$" does not hold for any $\epsilon > 0$. How can I go any further?
AI: Let
$$
g(x)=\left\{
\begin{array}{ccc}
0 & \text{if} & x\le 0,\\
\mathrm{e}^{-1/x^2} & \text{if} & x>0.
\end{array}
\right.
$$
Then it is not hard to show that $g\in C^\infty(\mathbb R)$.
Next, set
$$
h(x)=\int_{-\infty}^x g\left(t+\tfrac14\right)g\left(\tfrac14-t\right)\,dt.
$$
Then $h\in C^\infty(\mathbb R)$; moreover $h(x)>0$ for $x>-\tfrac14$, and $h$ is constant for $x\ge \tfrac14$. Say $h\left(\tfrac14\right)=a>0$.
Next, set $j(x)=h(x+1)h(1-x)/a$. Note that the support of $j$ is $\left[-\tfrac54,\tfrac54\right]\subset[-2,2]$, that $j$ is even, and that $j\equiv a$ on $\left[-\tfrac34,\tfrac34\right]$; a short computation also gives $\int_{\mathbb R}j(t)\,dt=2a$.
A sought-for function is then
$$
f(x)=-1+\frac{1}{a}\int_{-\infty}^x j(t)\,dt.
$$
Indeed, $f$ is smooth and non-decreasing, and $f(x)=-1$ for $x\le-\tfrac54$; since
$$
\frac{1}{a}\int_{-\infty}^{\infty} j(t)\,dt=2,
$$
we get $f(x)=1$ for $x\ge\tfrac54$. Finally, $f'(x)=j(x)/a=1$ on $\left[-\tfrac34,\tfrac34\right]$, and $f(0)=0$ because $j$ is even, so $f(x)=x$ near $0$.
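A numerical spot-check of this construction (my addition; a sketch using quadrature, with $j/a$ computed directly so that all quantities are of order one — slow, but adequate for a few sample points):

    import math
    from scipy.integrate import quad

    def g(x):
        return math.exp(-1.0 / x**2) if x > 0 else 0.0

    def h(x):
        # Integral of the bump g(t + 1/4) g(1/4 - t), supported on (-1/4, 1/4).
        # epsabs=0 forces quad to work to relative accuracy, since the bump's
        # values are extremely small.
        if x <= -0.25:
            return 0.0
        val, _ = quad(lambda t: g(t + 0.25) * g(0.25 - t),
                      -0.25, min(x, 0.25), epsabs=0, epsrel=1e-10)
        return val

    a = h(0.25)          # plateau value of h (tiny but positive)

    def j_over_a(x):
        # j(x)/a = (h(x+1)/a) * (h(1-x)/a): equals 1 on [-3/4, 3/4],
        # vanishes outside [-5/4, 5/4], and integrates to 2 over R.
        return (h(x + 1.0) / a) * (h(1.0 - x) / a)

    def f(x):
        val, _ = quad(j_over_a, -1.25, min(x, 1.25),
                      epsabs=0, epsrel=1e-8, limit=200)
        return -1.0 + val

    for x in (-2.0, -0.5, 0.0, 0.3, 2.0):
        print(x, f(x))   # expect -1, -0.5, 0, 0.3, 1 (approximately)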
|
H: Use Chebyshev's inequality to find the parameter
We roll a symmetrical die $200$ times. $X$ is a random variable counting the number of times the face $6$ appears. Using Chebyshev's inequality, find $c>0$ so that the probability $$Pr(X\in(a-c, a+c))$$ is at least $0.85$.
My attempt:
$$Pr(a-c<X<a+c)\geq0.85$$
$$1-Pr(a-c<X<a+c)\leq1-0.85$$
$$1-Pr(|X-a|<c)\leq 0.15$$
$$Pr(|X-a|\geq c)\leq 0.15$$
$E[X]=200\cdot1/6, \sigma^2=200\cdot5/36$
Now:
$$0.15=\frac{\sigma^2}{c^2}$$
And from that we get $c>0$.
The thing is my colleague got an answer with $c$ being an interval. Now I'm not sure which one of our solutions is correct (or maybe neither is).
AI: Define the indicator variables
$$
X_n = \begin{cases}
1 & \text{if roll } n \text{ shows a } 6,\\
0 & \text{otherwise.}
\end{cases}
$$
Then $\mathbf{E}X_n = P(X_n=1) = \frac{1}{6}$, $\mathbf{E}X_n^2 = \frac{1}{6}$, and $\mathbf{Var}\,X_n = \frac{1}{6} - \frac{1}{6^2} = \frac{5}{36}$. Now define
$$
S_n = \sum_{k=1}^{200}X_k
$$
By independence, from this you can find $\mathbf{E}S_n$ and $\mathbf{Var}\,S_n$. Now, I assume $a$ in your example is $\mathbf{E}S_n$. In that case,
$$
P(a - c < S_n < a + c) = P(|S_n - a|<c) = 1-P(|S_n - a|\ge c) \geq 1- \frac{\mathbf{Var}\, S_n}{c^2} \geq 0.85
$$
Now solve for $c$, keep in mind $c>0$.
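For concreteness (my addition, assuming $a=\mathbf{E}S_n$ as above): here $\mathbf{Var}\,S_n = 200\cdot\frac{5}{36}=\frac{250}{9}$, so the condition becomes
$$
1-\frac{250/9}{c^2}\ge 0.85 \iff c\ge\sqrt{\frac{250}{9\cdot 0.15}}\approx 13.61.
$$
Chebyshev only gives a lower bound, so every $c\ge 13.61$ works; this is why an answer may legitimately come out as an interval of valid values of $c$ rather than a single number.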
|
H: What's the name for $r=\cos^3\theta$ (alternatively $x^2+y^2=x^{3/2}$)?
The curve appeared while solving this question.
I tried to look up both $r=\cos^3\theta$ and $x^2+y^2=x^{3/2}$,
and even clicked almost all the links here on wolfram.com.
This image is for $r=4\cos^3\theta$.
Thanks in advance.
AI: It's in the Wolfram source you cite above: it's a folium. See https://mathworld.wolfram.com/Folium.html
|
H: Are there more numbers in this sequence?
Are there more numbers in this sequence that starts with $2$, $4$, $\dots$?
A number is in this sequence if the sum of all its factors equals the product of $\operatorname{prime}(d)$ over its digits $d$ (where $\operatorname{prime}(d)$ denotes the $d$-th prime).
So $2$ is in this list because $1+2=\operatorname{prime}(2)$, and $4$ is in this list because $1+2+4=\operatorname{prime}(4)$.
$12$ isn't in this sequence because $1+2+3+4+6+12$ is not equal to $\operatorname{prime}(1)\times\operatorname{prime}(2)$.
** For $10$ we would have just $\operatorname{prime}(1)$: we ignore the digit $0$.
So $120$ would give $\operatorname{prime}(1)\times\operatorname{prime}(2)$.
All I want to know is: are there more numbers in this sequence, and are there infinitely many more?
AI: There are no more. Note that since the $9$th prime is $23$, the sum of the primes of the digits of an $n$-digit number is at most $23n$. But the sum of the factors of an $n$-digit number is at least $10^{n-1}$ (since the number itself is a factor). For $n \ge 3$, $10^{n-1} > 23 n$, so there can't be any terms with $3$ or more digits. And by a direct search we find that the only ones with $1$ or $2$ digits are $2$ and $4$.
You might do better if you used the product rather than sum of primes of the digits: $2, 4, 148, 484$ would work (and I don't know if there are more; that's all up to $10^6$).
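A brute-force check of both readings (my sketch; the question's examples use the product of the digit-primes, while the bound above uses their sum), ignoring zero digits as in the question:

    from sympy import prime, divisors
    from math import prod

    P = {str(d): prime(d) for d in range(1, 10)}   # d-th prime for each digit

    hits_sum, hits_prod = [], []
    for n in range(1, 10**6):                      # takes a few minutes; lower
        ds = [P[ch] for ch in str(n) if ch != '0'] # the bound for a quick run
        s = sum(divisors(n))                       # sum of all factors of n
        if s == sum(ds):
            hits_sum.append(n)
        if s == prod(ds):
            hits_prod.append(n)

    print(hits_sum)    # [2, 4]
    print(hits_prod)   # [2, 4, 148, 484]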
|
H: Why the integrals and derivatives do not kill each other in case of Thomae function?
The following is the Thomae function:
$$ f(x) = \begin{cases}
\frac{1}{q} & \text{if } x = \frac{p}{q},\ p \in \mathbb{Z},\ q \in \mathbb{N},\ \gcd(p, q) = 1, \\
0 & \text{if } x\in \mathbb{Q}^c \text{ or } x=0.
\end{cases} $$
My professor said that integrals and derivatives do not kill each other for the Thomae function, because $\int_{0}^{x} f(t)\,dt = 0$ and so $\frac {d}{dx} \int_{0}^{x} f(t)\,dt = 0$, which differs from $f(x)$ at nonzero rational $x$.
My question is:
I tried to calculate $\int_{0}^{x} f(t)dt$ and I got $\frac{x}{q}$ and not $0.$ Could anyone show me the detailed calculation of this please?
AI: For each partition $P$ of $[0,x]$, the lower sum of $f$ with respect to $P$ is $0$. Therefore, if $f$ is integrable, $\int_0^xf(t)\,\mathrm dt=0$.
And $f$ is integrable since it is bounded and the set of points at which it is discontinuous is countable.
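As a numerical illustration (my addition, not a proof): midpoint Riemann sums of the Thomae function on $[0,1]$, computed with exact rationals, shrink towards $0$ as the partition is refined.

    from fractions import Fraction

    def thomae(x: Fraction) -> Fraction:
        # For rational x = p/q in lowest terms, f(x) = 1/q; and f(0) = 0.
        return Fraction(0) if x == 0 else Fraction(1, x.denominator)

    def midpoint_sum(n: int) -> Fraction:
        # Midpoints (2k+1)/(2n) are rational, so each value is exactly 1/q.
        return sum(thomae(Fraction(2*k + 1, 2*n)) for k in range(n)) / n

    for n in (10, 100, 1000, 10000):
        print(n, float(midpoint_sum(n)))   # the sums tend towards 0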
|
H: Does the alternating composition of sines and cosines converge to a constant?
Let $f(x) = \cos(\sin(x))$ and let $c(f, n)(x)$ denote the function $\underbrace{f\circ f\circ...\circ f}_{n \text{ times}}$. For example, $c(f, 1)(x) = f(x)$, $c(f, 2)(x) = f(f(x))$ and so on.
My question is: does $c(f, n)(x)$ approach any constant function if $n \to +\infty$? I graphed this for some values of $n$ and the function seems to approach some value a little bit over $0.76$. Does anyone have any insight as to whether that is true? If so, what value is it approaching and why?
Any sort of help or material helps; this question has been stuck in my head for quite some time now! Thanks in advance!
AI: For the limit $f$ it must hold:
$f(x)=\cos(\sin(f(x)))$.
Taking the derivative (assuming $f$ differentiable) implies:
$f'(x)=-\sin(\sin(f(x)))\cos(f(x))f'(x)$
implying that either $f'(x)=0$ or $1=-\sin(\sin(f(x)))\cos(f(x))$.
The last equation can only hold if $\lvert\cos(f(x))\rvert=1$, i.e. $f(x)=k\pi$ for some $k\in \mathbb{Z}$, but this implies $\sin(\sin(f(x)))=0$, a contradiction. So $f$ must be constant (or not differentiable).
You can show that a constant solution exists by applying the Banach fixed-point theorem to the iteration
$x_{n+1}=\cos(\sin(x_n))$
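Numerically (my addition), the fixed point is easy to approximate by this very iteration, and it matches the value observed in the question, a little over $0.76$:

    import math

    # Iterate x_{n+1} = cos(sin(x_n)); the map is a contraction on R
    # (its derivative is bounded in absolute value by sin(1) < 1),
    # so this converges from any starting point.
    x = 0.0
    for _ in range(100):
        x = math.cos(math.sin(x))
    print(x)   # approximately 0.76817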
|
H: Extending edge colourings
Suppose that $\Gamma$ is a connected locally finite graph with uniformly bounded degree, i.e. there is a $d \in \mathbb{N}$ such that for every $v \in V\Gamma$ we have $\deg(v) \leq d$. By the de Bruijn–Erdős theorem, such a graph has a legal edge colouring using at most $d+1$ colours.
My question is the following: can a legal colouring of a finite connected subgraph always be extended to a legal colouring of the whole graph? More formally: given a finite connected subgraph $\Delta \leq \Gamma$ and a legal edge colouring $F \colon E\Delta \to \{1, \dots, d+1\}$, is there a legal colouring $\tilde{F} \colon E\Gamma \to \{1, \dots, d+1\}$ such that $\tilde{F}\restriction_{E\Delta} = F$?
I am assuming that the graph $\Gamma$ is vertex-transitive, if that helps.
AI: No; in the (vertex-transitive, cubic) graph below, there is no way to colour the dashed edge using any of the $4$ colours already present.
If you would like a strictly infinite example, the same idea works if you take a little piece of an infinite ladder.
|
H: Gauss sums and Dirichlet characters
Currently, I'm attending an Analytic Number Theory course, and in the lecture notes I've come across the following statement:
Does anyone know how to prove this, or at least can give me a reference? Moreover, I'm also confused by the variable $a$, which appears in (5.4) and in (5.5) again. They should be different, right?
Would be very happy with any hint! :)
Best,
python15
AI: The Dirichlet characters $\bmod q$ form an orthogonal basis of the space $$\{ v\in \Bbb{C}^q : v(a)=0 \text{ whenever } \gcd(a,q)\ne 1\}.$$
With $v(a) = e(a/q)\,1_{\gcd(a,q)=1}$, we then have $$\tau(\chi)=\langle v,\overline{\chi}\rangle, \qquad v=\sum_{\chi\bmod q}\overline{\chi}\,\frac{ \langle v,\overline{\chi}\rangle}{\|\overline{\chi}\|^2}.$$
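Unwinding this (my addition; presumably this is what (5.4)–(5.5) assert): since $\|\overline{\chi}\|^2=\sum_{a\bmod q}|\chi(a)|^2=\varphi(q)$, evaluating the expansion at a residue $a$ with $\gcd(a,q)=1$ gives
$$
e\!\left(\frac{a}{q}\right)=\frac{1}{\varphi(q)}\sum_{\chi\bmod q}\overline{\chi}(a)\,\tau(\chi).
$$
As for the second question: the $a$ inside the Gauss sum $\tau(\chi)=\sum_{a\bmod q}\chi(a)e(a/q)$ is a bound summation variable, so yes, it is distinct from the free residue $a$ on the left.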
|
H: $\int_{0}^{4a} f(x)\;dx=4\cdot \int_{0}^{a} f(x)\;dx$?
Let $f:\mathbb{R} \longrightarrow \mathbb{R}_+$ be an integrable function. If for some $a \in \mathbb{R}$ we have
$$f(4a-x)=f(x),\; \forall \; x \in \mathbb{R}$$
then is it true that
$$\int_{0}^{4a} f(x)\;dx=4\cdot \int_{0}^{a} f(x)\;dx?$$
I think it's true and I couldn't think of a counterexample.
I know that if $f(2a-x)=f(x)$ then $\displaystyle \int_{0}^{2a} f(x)\;dx=2\cdot \int_{0}^{a} f(x)\;dx$ holds.
AI: Take $$f(x)=\sin^2(x)\qquad\text{and}\qquad a=\frac{\pi}{4}$$
(note that $f\ge 0$, as required). Then
$$(\forall x\in \Bbb R)\;\; \;\;f(4a-x)=\sin^2(\pi-x)=\sin^2(x)=f(x)$$
but
$$\int_0^{4a}f=\int_0^\pi\sin^2(x)\,dx=\frac{\pi}{2}$$
and
$$4\int_0^af=4\int_0^{\frac{\pi}{4}}\sin^2(x)\,dx=\frac{\pi}{2}-1\ne \frac{\pi}{2}$$
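A quick numerical check of this counterexample (my addition):

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: np.sin(x)**2
    a = np.pi / 4

    lhs = quad(f, 0, 4*a)[0]        # pi/2     ~ 1.5708
    rhs = 4 * quad(f, 0, a)[0]      # pi/2 - 1 ~ 0.5708
    print(lhs, rhs)                 # clearly not equal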
|
H: Diffeomorphism from $\mathbb{R}^m\to\mathbb{R}^n$
I have a question about diffeomorphism between $\mathbb{R}^m$ and $\mathbb{R}^n$.
From this page of the internet we have the following definition:
Let $U\subseteq\mathbb{R}^m$ and $V\subseteq\mathbb{R}^n$. A function
$F:U\to V$ is called a Diffeomorphism from $U$ to $V$ if $F$ has the
following properties:
a) $F:U\to V$ is bijective.
b) $F:U\to V$ is smooth.
c) $F^{−1}:V\to U$ is smooth.
But in this post, it is proven that there is no diffeomorphism between $\mathbb{R}^2$ and $\mathbb{R}^3$. In fact, the spaces $\mathbb{R}^m$ and $\mathbb{R}^n$ are not diffeomorphic when $m \neq n$. Therefore, there cannot be a diffeomorphism between $\mathbb{R}^m$ and $\mathbb{R}^n$. But by this definition, as the symbol $\subseteq$ is used, it implies that the open sets $U$ and $V$ can be $\mathbb{R}^m$ and $\mathbb{R}^n$. So, the definition is "wrong", in the sense that there is no diffeomorphism between $\mathbb{R}^m$ and $\mathbb{R}^n$?
Would the definition be correct if the symbol $\subset$ was used? That is, is it possible to construct diffeomorphism between open sets of $\mathbb{R}^m$ and $\mathbb{R}^n$?
AI: Suppose that $m\neq n$ and that $U\subset \mathbb{R}^m$, $V\subset \mathbb{R}^n$ are nonempty open sets; then $U$ and $V$ are not diffeomorphic.
Proof: first note that if $U$ and $V$ are diffeomorphic then they are necessarily locally diffeomorphic; that is, if $f:U\to V$ is a diffeomorphism then the restriction of $f$ to any open ball in $U$ is an embedding (meaning it is a diffeomorphism onto its image). Say we pick $g:=f|_{\mathbb B}$ for some open ball $\mathbb B\subset U$.
Also note that being diffeomorphic is an equivalence relation, because a composition of diffeomorphisms is again a diffeomorphism (which follows from the chain rule). Moreover there exist standard diffeomorphisms between any open ball and the entire space; that is, any open ball $\mathbb B\subset \mathbb{R}^m$ and $\mathbb{R}^m$ are diffeomorphic. Therefore the question reduces to showing that $\mathbb{R}^m$ and $Y:=\operatorname{img}(g)$ are not diffeomorphic.
Now suppose that $h: \mathbb{R}^m\to Y$ is a diffeomorphism. Since the inclusion $i:Y \hookrightarrow \mathbb{R}^n$ is smooth, the composition $i\circ h:\mathbb{R}^m \to \mathbb{R}^n$ is a differentiable embedding. But it follows from the matrix representation of the Fréchet derivative at a point $x\in \mathbb{R}^m$ of any differentiable map $d:\mathbb{R}^m\to \mathbb{R}^n$ that, if $m\neq n$, then $\partial d(x)$ is not invertible; therefore $i\circ h$ cannot be locally invertible at any point. However, $i$ is locally invertible at every point of $Y$ (onto its image), so by the chain rule $h$ is not locally invertible at any point. Hence $h$ cannot be a diffeomorphism, and neither can our original function $f$. $\Box$
|
H: Show that the orbits are given by ellipses $\omega^{2}x^{2}+v^{2}=C$, where $C$ is any non- negative constant.
I am working through the textbook Nonlinear Dynamics and Chaos by Strogatz, chapter 5, question 5.1.1(a). I have answered the question but would like to check whether I performed the integration step correctly.
Question
Show that the orbits are given by ellipses $\omega^{2}x^{2}+v^{2}=C$, where $C$ is any non- negative constant.
Answer
Starting with dividing one equation with the other:
\begin{equation}
\Rightarrow \frac{\dot{x}}{\dot{v}} = \frac{v}{- \omega^{2} x} \\
-\omega^{2} x \dot{x} = v \dot{v}
\\ \omega^{2} x \dot{x} + v \dot{v} = 0 \text{.}
\end{equation}
Then integrating (The integral of 0 is constant D):
\begin{equation}
\int \omega^{2} x \dot{x} \text{ }dx + \int v \dot{v} \text{ }dv = \int 0 \text{ } dx \\
\frac{1}{2} \omega^{2} x^{2} + \frac{1}{2} v^{2} = D \\
\omega^{2} x^{2} + v^{2} = 2 D \\
\omega^{2} x^{2} + v^{2} = C
\end{equation}
Summary
Could someone please verify this proof, particularly with regards to the $\int \omega^{2} x \dot{x} \text{ }dx + \int v \dot{v} \text{ }dv = \int 0 \text{ } dx $ line. Did I do this properly?
AI: Your derivatives (written with a dot above) are with respect to the time variable $t$.
So, when you integrate, you should integrate with respect to $t$:
$$\int \omega^2x\dot{x}\,dt+\int v\dot{v}\,dt=\int 0\,dt$$
Since $\dot{x}\,dt=dx$ and $\dot{v}\,dt=dv$, this yields the same result, $\tfrac{1}{2}\omega^2x^2+\tfrac{1}{2}v^2=D$; only the intermediate step needed fixing.
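As a numerical cross-check (my addition, assuming the system is the harmonic oscillator $\dot x = v$, $\dot v=-\omega^2 x$): integrate the equations and verify that $\omega^2x^2+v^2$ stays constant along the orbit.

    import numpy as np
    from scipy.integrate import solve_ivp

    omega = 2.0

    def rhs(t, y):
        x, v = y
        return [v, -omega**2 * x]

    # Start at (x, v) = (1, 0), so C = omega^2 * 1 + 0 = 4.
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0],
                    rtol=1e-10, atol=1e-12, dense_output=True)
    x, v = sol.sol(np.linspace(0.0, 10.0, 5))
    print(omega**2 * x**2 + v**2)   # every entry is ~ 4.0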
|