H: $(n-1)$-th derivative of a complex polynomial
I cannot wrap my head around the $(n-1)$-th derivative of the polynomial $(z-2)^{n+1}$.
$$ \frac{d^{n-1}}{dz^{n-1}}(z-2)^{n+1} = \frac{(n+1)!}{2}(z-2)^2, \quad z \in \mathbb C.$$
I get why the term $(z-2)^2$ is there, the problem is with $(n+1)!/2$. Why divide by $2$?
AI: The complex derivative works like the real one in this case:
\begin{align*}
\frac{d}{dz}(z-2)^{n+1} & =(n+1)(z-2)^n\\
(n+1)\frac{d}{dz}(z-2)^n & = (n+1)n(z-2)^{n-1} \\
& \vdots\\
(n+1)n\cdot\dots \cdot4\frac{d}{dz}(z-2)^3 & = \frac{(n+1)!}{2!}(z-2)^2 \\
\frac{(n+1)!}{2!}\frac{d}{dz}(z-2)^2 & = (n+1)!(z-2).
\end{align*}
Notice that the first derivative corresponds to having the exponent $n = (n+1)-1$ on $(z-2)$ on the right hand side of the first line. Thus at the $n$-th derivative you will have exponent $(n+1)-n=1$, meaning that the last line gives you the $n$-th derivative.
EDIT after the edit: for the $(n-1)$-th derivative just look at the line before the last.
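A concrete sanity check of the displayed formula (a minimal sketch assuming sympy is available), taking for instance $n=5$:

```python
import sympy as sp

z = sp.symbols('z')
n = 5                                               # any concrete n works here
deriv = sp.diff((z - 2)**(n + 1), z, n - 1)         # the (n-1)-th derivative of (z-2)^(n+1)
expected = sp.factorial(n + 1) / 2 * (z - 2)**2     # (n+1)!/2 * (z-2)^2
print(sp.simplify(deriv - expected))                # prints 0
```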
|
H: If sets $A, B$ in Euclidean space are closed sets, they have the same boundary and their interior's intersection is non-empty, can we say $A=B$?
If sets $A, B$ in Euclidean space are closed sets, they have the same boundary and their interiors intersection is non-empty, can we say $A=B$? Any suggestions and comments are welcome!
AI: No. For example, $A=\overline{\bigcup_{n\in\mathbb{Z}} [\arctan(2n-1),\arctan(2n)]}\cup [10,11]$, $B=\overline{[-\pi/2,\pi/2]-\bigcup [\arctan(2n-1),\arctan(2n)]}\cup[10,11]$.
To come up with such an example, note we can WLOG forget the interior intersection is nonempty by adding a requirement $A,B$ bounded (since we can take union with a closed ball far far away) and $\mathbb{R}$ is homeomorphic to a bounded open interval. So we want to come up with two different nonempty closed sets in $\mathbb{R}$ with the same boundary, which is easy enough: $\bigcup[2n-1,2n]$ and $\bigcup [2n,2n+1]$ both have boundary $\mathbb{Z}$.
|
H: Evaluation of a limit.
Find $\lim_{n\to \infty}\frac{1}{2^n} \Bigg\{ \dfrac{1}{\sqrt{1-\frac{1}{2^n}}}+\dfrac{1}{\sqrt{1-\frac{2}{2^n}}}+\cdots+\dfrac{1}{\sqrt{1-\frac{2^n-1}{2^n}}}\Bigg\}$
Evaluation of this limit using integration as the limit of a sum doesn't work here. Is there any other way of doing this problem? Please help.
AI: Hint: This is just a subsequence of a sum approximating $\int_0^{1} \frac 1 {\sqrt {1-x}}dx$. So the limit is $2$.
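A quick numerical check of the hint in plain Python (the chosen values of $n$ are arbitrary); the partial sums creep up to $2$:

```python
# Evaluate the sum for a few values of n; the values approach 2.
for n in (5, 10, 15, 18):
    N = 2**n
    s = sum(1 / (1 - k / N) ** 0.5 for k in range(1, N)) / N
    print(n, s)
```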
|
H: Extreme points of a function at domain ends
Consider the following function: $f(x) = x\sqrt{9-x^2}$
$f'(x) = \frac{-2x^2+9}{\sqrt{9-x^2}}$ and $D(f) = [-3,3]$ therefore the critical points of the function are $x_{c_i} = \left\{ -3, -\frac{3\sqrt2}{2}, \frac{3\sqrt2}{2}, 3 \right\}$
Apparently the points $\{-\frac{3\sqrt2}{2}, \frac{3\sqrt2}{2}\}$ are the global minimum and global maximum respectively.
But what about the domain ends $\{-3, 3\}$? Are they considered to be saddle points, local minimums, or local maximums and why?
AI: The two domain ends are not considered saddle points since, by definition, a saddle point requires the first derivative to be zero (necessary but not sufficient). The derivative $f'(x)$ is not defined at the domain ends, as can be seen from your formula (division by zero!).
The two endpoints can, however, be described as local extrema (a local minimum and a local maximum). This is the case since a distance $d$ can be found such that $f(x_{end})$ is the minimal/maximal value $f$ takes on the interval $[x_{end}-d, x_{end}+d]$.
|
H: $\mathbb{Z}_p=\varprojlim \mathbb{Z}/p^{n}\mathbb{Z}$ is uncountable.
The ring of $p$-adic integers is given by $\mathbb{Z}_p=\varprojlim \mathbb{Z}/p^{n}\mathbb{Z}$. From this description how can we conclude that $\mathbb{Z}_p$ is uncountable ?
It follows from the description that each nonzero element of $\mathbb{Z}_p$ is of infinite order (as an element of component wise additive group $\mathbb{Z}_p$). But I cannot produce any contradiction assuming countability of $\mathbb{Z}_p$. Help me.
AI: Each element of $\Bbb Z_p$ has a "base-$p$" expansion
$$a_0+pa_1+p^2a_2+\cdots$$
with each $a_j\in\{0,1,\ldots,p-1\}$. Now use the Cantor diagonalisation argument.
|
H: $[X,Y]=0 \implies \exp(X+Y)=\exp(X)\exp(Y)$
I am trying to show that if $[X,Y]=0$ then the exponential map $\exp : Lie(G)\to G$ is such that
$$\exp(X+Y)=\exp(X)\exp(Y), \forall X,Y\in Lie(G).$$
The hint is to show that $\gamma : t\mapsto \exp(tX)\exp(tY)$ is a one-parameter subgroup, which I can show :
\begin{align}
\gamma(t+s)=&\exp((t+s)X)\exp((t+s)Y)\\
=&\exp(tX)\exp(sX)\exp(tY)\exp(sY)\\
=&\exp(tX)\exp(tY)\exp(sX)\exp(sY)\\
=&\gamma(t)\gamma(s)
\end{align}
and
\begin{align}
(\gamma(t))^{-1}&=(\exp(tX)\exp(tY))^{-1}\\
&=\exp(tY)^{-1}\exp(tX)^{-1}\\
&=\exp(-tY)\exp(-tX)\\
&=\exp(-tX)\exp(-tY)\\
&=\gamma(-t)
\end{align}
Note I used a result of a previous exercise that says that $[X,Y]=0$ iff $\exp(tX)\exp(sY)=\exp(sY)\exp(tX), \forall s,t \in \mathbb{R}$
But how to use that $\gamma$ is a one parameter subgroup to conclude?
AI: Two Hints:
Observe that all one-parameter subgroups are linked with the exponential map (how?)
Recall that the differential of the group multiplication acts like the sum, i.e.
$$
d_{(e,e)}(\cdot)((v, w)) = v + w
$$
where we identify $T_{(e,e)}(G \times G) = T_e G \times T_e G$.
I will upgrade these hints to a full proof once you have solved the problem
|
H: Prove binomial coefficient
Equality of Binomial coefficients.
I was wondering why these two binomial coefficients ($x$ is just a placeholder):
$\binom{x}{ k!(n+1-k)!}$ = $\binom{x}{k!(n-k)!}$.
Both lead to $\binom{x}{k}$.
Does the answer come from Pascal's triangle?
I don't get it. Why is it equal?
AI: This is not an identity.
For example, if $x=6, k=1, n=3$ then I would have thought $\binom{x}{ k!(n+1-k)!}=\binom{6}{ 1!\times3!}=\binom{6}{ 6}=1$ while $\binom{x}{ k!(n-k)!}=\binom{6}{ 1!\times2!}=\binom{6}{ 2}=15$ and $\binom{x}{k}=\binom{6}{1}=6$
So let's instead treat $\binom{x}{ k!(n+1-k)!} = \binom{x}{ k!(n-k)!}$ as an equation.
This requires one of:
$x < k!(n-k)!$ and $n \ge k$
which would give $0=0$. That is not particularly interesting
$x=k!$ and $n=k$
so $k!(n+1-k)! = k!(n-k)!=k!$ to give $1 = 1$
$x=k!(n-k)!(n+2-k)$ and $n \ge k$
so $k!(n+1-k)! = x - k!(n-k)!$ to give $\binom{k!(n-k)!(n+2-k)}{ k!(n+1-k)!} = \binom{k!(n-k)!(n+2-k)}{ k!(n-k)!}$
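A quick plain-Python check of the counterexample values above (using `math.comb`):

```python
from math import comb, factorial

x, k, n = 6, 1, 3
print(comb(x, factorial(k) * factorial(n + 1 - k)))  # C(6, 6) = 1
print(comb(x, factorial(k) * factorial(n - k)))      # C(6, 2) = 15
print(comb(x, k))                                    # C(6, 1) = 6
```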
|
H: If $a^3=b^3$ then $a=b$ for all $a,b\in \mathbb R$ direct proof
While I was working through the first chapter of Spivak's Calculus, "Basic Properties of Numbers", where he introduces the Field Axioms and Order Axioms (P1 - P12), I thought about a problem which is not stated in the book, but could be formulated like this:
Prove that if $a^3=b^3$ then $a=b$ for all $a,b\in \mathbb R$ using P1 - P12.
I worked out a proof by contraposition in the following way:
Suppose $a\neq b$.
$a<b$:
Let $0<a$: (i) $(a<b \land 0<a)\implies a^2<ab$; $(a^2<ab \land 0<a)\implies a^3<a^2b$. (ii) $(a<b \land 0<b)\implies ab<b^2$; $(ab<b^2 \land 0<b)\implies ab^2<b^3$. (iii) $(ab<b^2 \land 0<a)\implies a^2b<ab^2$.
It follows from (i), (ii) and (iii) that $a^3<b^3$.
Let $a=0$: (i) with proof on pg. 7 (Spivak, Calculus, 3rd ed.; $a0=0$ ), $a^3=0$. (ii) $(0<b \land 0<b)\implies 0<b^2$; $(0<b^2 \land 0<b)\implies 0<b^3$.
It follows from (i) and (ii) that $a^3<b^3$.
Let $a<0$ and $0<b$: (i) If $a<0$ then $0<a^2$, which is proven in the text of Chapter 1. $(0<a^2 \land 0>a)\implies a^3<0$, which is proven in a Problem of Chapter 1. (ii) $0<b^3$ (see above).
It follows from (i) and (ii) that $a^3<b^3$.
Let $a<0$ and $0=b$:(i) $a^3<0$ (see above). (ii) with proof on pg. 7 (Spivak, Calculus, 3rd ed.; $a0=0$ ), $b^3=0$.
It follows from (i) and (ii) that $a^3<b^3$.
Let $b<0$:(i) $(a<b \land 0>a)\implies a^2>ab$; $(a^2>ab \land 0>a)\implies a^3<a^2b$.(ii) $(a<b \land 0>b)\implies ab>b^2$; $(ab>b^2 \land 0>b)\implies ab^2<b^3$.(iii) $(ab>b^2 \land 0>a)\implies a^2b<ab^2$.
It follows from (i), (ii) and (iii) that $a^3<b^3$.
$b<a$: Analogous to 1.
I fail to see how this can be shown with a direct proof. What would a direct proof of this problem look like?
AI: $a^{3}-b^{3} =(a-b)(a^{2}+ab+b^{2})$. So, it is enough to show that $a^{2}+ab+b^{2}>0$. For this we have $-ab \leq \frac {a^{2}+b^{2}} 2$ : This is just a restatement of $(a+b)^{2} \geq 0$. Hence $a^{2}+ab+b^{2} \geq \frac {a^{2}+b^{2}} 2 >0$ (except when $a=b=0$ but we already have $a=b$ in this case).
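A quick symbolic check of the two algebraic facts used here (a minimal sketch assuming sympy):

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)
print(sp.expand((a - b) * (a**2 + a*b + b**2)))      # a**3 - b**3
# a^2 + ab + b^2 = (a^2 + b^2)/2 + (a + b)^2/2, hence positive unless a = b = 0
print(sp.simplify(a**2 + a*b + b**2 - (a**2 + b**2)/2 - (a + b)**2 / 2))  # 0
```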
|
H: Inverse of sum of inverses of matrices
Is it somehow possible to reformulate the following equation into something easier to calculate:
$$(A^{-1}+ B^{-1})^{-1}$$
A and B are both square real matrices: $A, B \in \mathbb{R}^{n \times n}$,
and are positive definite and therefore invertible.
AI: Note that
$$
A^{-1}(A + B)B^{-1}
= A^{-1}AB^{-1} + A^{-1}BB^{-1} = B^{-1} + A^{-1}.
$$
That is, we have
$$
A^{-1} + B^{-1} = A^{-1}(A + B)B^{-1} \implies
(A^{-1} + B^{-1})^{-1} = B(A + B)^{-1}A.
$$
If you prefer, this is also equal to $A(A + B)^{-1}B$.
Note that because $A,B$ are positive definite, $A + B$ is also positive definite and therefore invertible.
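For a quick numerical sanity check of the identity (a minimal sketch assuming numpy; the random seed and matrix size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)   # positive definite
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)   # positive definite
lhs = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))
rhs = B @ np.linalg.inv(A + B) @ A
alt = A @ np.linalg.inv(A + B) @ B
print(np.allclose(lhs, rhs), np.allclose(lhs, alt))             # True True
```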
|
H: Proving that if $\forall n\in\mathbb N,\exists x_n \in \mathbb R: |x_n - a| < \frac{1}{n}$, then $a \in \bar S$
This exercise is in my general topology textbook:
Let $S$ be a non-empty subset of $\mathbb R$ and $a \in \mathbb R$. Prove that $a \in \bar S$ if and only if $\forall n\in\mathbb N,\exists x_n \in S: |x_n - a| < \frac{1}{n}$.
So I proved that if $\forall n\in\mathbb N,\exists x_n \in S: |x_n - a| < \frac{1}{n}$, then $a \in \bar S$. But I'm having trouble proving that if $a \in \bar S$, then $\forall n\in\mathbb N,\exists x_n \in S: |x_n - a| < \frac{1}{n}$.
My approach:
Let $a \in \bar S$, then $a \in S$ or $a \in S'$. If $a \in S$ it's trivial that $\forall n\in\mathbb N,\exists x_n \in S: |x_n - a| < \frac{1}{n}$, so let's focus on the case where $a \in S'$.
So let $a$ be a limit point of the set $S$. Let $A \in \tau$ be an arbitrary open set such that $a \in A$, then, because $a$ is a limit point: $\exists b \in S: b \in A \wedge b \neq a$.
Because $A$ is open, it is a union of open intervals: $a \in A = \bigcup_{i \in I} ]\alpha_i,\beta_i[$ where $I$ is an index set. Because $a \in A$, there exists an index set $J \subseteq I$ such that: $a \in ]\alpha_j,\beta_j[, j \in J$.
Let $\varepsilon = \text{min}\{|a - \alpha_j|, |a - \beta_j|\}$, then $]a- \frac{\varepsilon}{n}; a + \frac{\varepsilon}{n}[\ \subseteq\ ]\alpha_j,\beta_j[$, and $]a- \frac{\varepsilon}{n}; a +\frac{\varepsilon}{n}[ \in \tau$.
So, because $a$ is a limit point: $\exists b_n \in S: b_n \in ]a- \frac{\varepsilon}{n}; a +\frac{\varepsilon}{n}[ \ \wedge \ b_n \neq a$
This means that $|b_n - a| < \frac{\varepsilon}{n}$
I don't know how to continue from here. Intuitively I see that it makes sense that $|b_n - a|$ can get as close to 0 as we want, but how can I change that $\frac{\varepsilon}{n}$ to $\frac{1}{n}$?
AI: If $a \in S'$ then $(a-\frac 1 n,a+\frac1 n)$ is an open set containing $a$. Hence there exists $x_n \in S$ such that $x _n \in (a-\frac 1 n,a+\frac1 n)\setminus \{a\}$ and this gives $|x_n-a| <\frac 1 n$.
|
H: Find $\min \{ x+y: x+2y \ge 5, 4x+y\ge6\}$
Find $\min \{ x+y: x+2y \ge 5, 4x+y\ge6\}$ Could anyone tell me what is the answer? Is it zero?
I drew all the lines: $x+y=0$ which intersect the 2nd line at $(2,-2)$.
and with the first line at $(-5,5)$.
AI: The constraints can be written (with $s:=x+y$)$$s\ge5-y,\\s\ge\frac{6+3y}4.$$
As one of the bounds is decreasing and the other growing, the optimum is achieved when they are equal,
$$y=2,\\s=3.$$
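For what it's worth, the same value comes out of a standard LP solver (a small sketch assuming scipy is available):

```python
from scipy.optimize import linprog

# minimize x + y subject to x + 2y >= 5 and 4x + y >= 6 (rewritten as <= constraints)
res = linprog(c=[1, 1],
              A_ub=[[-1, -2], [-4, -1]],
              b_ub=[-5, -6],
              bounds=[(None, None), (None, None)])
print(res.fun, res.x)   # 3.0 at (1.0, 2.0)
```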
|
H: $f: A \rightarrow R^{n} .$ Show if $f^{\prime}(a, u)$ exists, then $f^{\prime}(a ; c u)$ exists and equals $c f^{\prime}(\mathrm{a} ; \mathrm{u}).$
Definition. Let $A \subset R^{m} ;$ let $f: A \rightarrow R^{n} .$ Suppose $A$ contains a neighborhood of a. Given $\mathbf{u} \in \mathbf{R}^{m}$ with $\mathbf{u} \neq \mathbf{0},$ define
$$
f^{\prime}(\mathbf{a} ; \mathbf{u})=\lim _{t \rightarrow 0} \frac{f(\mathbf{a}+t \mathbf{u})-f(\mathbf{a})}{t}
$$
provided the limit exists. This limit depends both on a and on $\mathbf{u}$; it is called the directional derivative of $f$ at a with respect to the vector $\mathbf{u}$. calculus, one usually requires $\mathbf{u}$ to be a unit vector, but that is not necessary.)
Question: Let $A \subset R^{m} ;$ let $f: A \rightarrow R^{n}$. Show that if $f^{\prime}(a, u)$ exists, then $f^{\prime}(a ; c u)$ exists and equals $c f^{\prime}(\mathrm{a} ; \mathrm{u}).$
I couldn't show it. Can you give me a hint?
AI: Just make the substitution $s=tc.$ (Thanks @Kavi Rama Murthy for the hint)
Let $t\neq 0$ and $c\neq 0.$
Since $f'(a;u)$ exists, we have
$$
f^{\prime}(\mathbf{a} ; \mathbf{u})=\lim _{s \rightarrow 0} \frac{f(\mathbf{a}+s \mathbf{u})-f(\mathbf{a})}{s}=\lim _{t \rightarrow 0} \frac{f(\mathbf{a}+tc \mathbf{u})-f(\mathbf{a})}{tc}=\frac{1} {c} \lim _{t \rightarrow 0} \frac{f(\mathbf{a}+tc \mathbf{u})-f(\mathbf{a})}{t}=\frac {1} {c} f'(a;cu)
$$
Hence, we get $cf'(a;u)=f'(a;cu).$
|
H: Uncontrolable subset and stabilizability of a linear dynamical system
I'm reading "Control System Design" by G. Goodwin, and I can't wrap my head around his definitions of uncontrolable subspace and stabilizability of a controlled dynamical system.
Consider a linear dynamical system of state $X\in\mathbb{R}^n$, controlled by an input vector $U\in\mathbb{R}^m$ : $$\dot{X}=AX+BU$$
We know that the controllable subset $R$ of the state space is the image of the controllability matrix $\mathfrak{C}$ : $$\mathfrak{C}=\begin{bmatrix}B&AB&..&A^{n-1}B\end{bmatrix}$$
And as such, its dimension is the rank $r$ of this matrix. Now let's assume this system isn't entirely controllable: $$r<n$$
Now, we can create a basis of this controllable subset $R$ by taking $r$ linearly independent columns $(v_1,..,v_r)$ of $\mathfrak{C}$ : $$R=\text{span}(v_1,..,v_r)$$
We now complete this basis of $R$ with $n-r$ vectors $(v_{r+1},..,v_n)$ of $\mathbb{R}^n$ in order to form a basis of $\mathbb{R}^n$. By doing this, we end up with a basis $(v_1,..,v_n)$ of $\mathbb{R}^{n}$ such that the $r$ first vectors $(v_1,..,v_r)$ of this basis span the controllable subset, and the $n-r$ next vectors $(v_{r+1},..,v_n)$ are necessarily uncontrollable directions of the state space.
Now, the author defines the uncontrollable subspace of the state space as the span of those last $n-r$ vectors, the ones that are necessarily uncontrollable directions: $$I=\text{span}(v_{r+1},..,v_n)$$
Now, to me this "uncontrollable space" is already a really strange notion. It doesn't include all the uncontrollable states of the state space, only a part of them. And this uncontrollable subspace depends on the arbitrary choice of the $n-r$ vectors used to complete the basis.
Like, if the state space is $\mathbb{R}^3$ (3D), and say $r=2$, the controllable subspace $R$ is a specific plane of $\mathbb{R}^3$ (no matter what pair of vectors you use as a basis for this plane, it remains the same plane). The set of uncontrollable states is then the complement of this plane in $\mathbb{R}^3$, which is not a linear subspace of $\mathbb{R}^3$. What Goodwin does is take an arbitrary line of $\mathbb{R}^3$ passing through the origin and not included in the controllable plane, and say "this is the uncontrollable subset $I$ of the state space".
I mean, yes we have $\mathbb{R}^3=R\oplus I$ (meaning that by summing elements of $R$ and $I$ we can reach all of $\mathbb{R}^3$), but beyond that this "uncontrollable subspace" is so arbitrary and limited that I can't see its utility.
And then comes the definition of stabilizability: this system is said to be stabilizable iff its uncontrollable subspace is stable, that is to say if there are no eigenvectors of $A$ associated with the unstable eigenvalues of $A$ (eigenvalues with a positive real part) within the uncontrollable subset.
I can't wrap my head around it. Even if his uncontrollable subset is stable, that doesn't mean there isn't any unstable direction (eigenvector of $A$ associated with an unstable eigenvalue of $A$) that is outside of this subspace, and still isn't in the controllable subset (in my 3D example, another line of $\mathbb{R}^3$, that is neither the arbitrarily defined uncontrollable subset $I$ nor contained within the controllable plane). This is precisely because the author's definition of the uncontrollable subset is so limited and arbitrary.
I'd rather be inclined to think that a proper definition would more likely be: this system is said to be stabilizable iff all of its unstable directions are within the controllable plane (so that you can control all of the unstable modes of the system) - a definition that doesn't even require defining an uncontrollable subset.
The author then even goes further: if we transfer the representation of the system to the new basis, the new matrix $A$ takes the following block structure: $$\overline{A}=\begin{bmatrix}A_{ctrl}&A_{1,2}\\0&A_{\textit{not-ctrl}}\end{bmatrix}$$
Which I agree with. He then says that the previous definition is equivalent to saying that this system is stabilizable iff the eigenvalues of $A_{\textit{not-ctrl}}$ are stable. But what if the eigenvalues of $A_{\textit{not-ctrl}}$ are all stable, but $\overline{A}$ still has unstable eigenvalues that aren't eigenvalues of $A_{\textit{ctrl}}$ either? To me, this feels like he implies that the eigenvalues of $\overline{A}$ (the same as those of $A$) are exactly the eigenvalues of $A_{ctrl}$ and those of $A_{\textit{not-ctrl}}$.
But just like there are directions of $\mathbb{R}^3$ that aren't the arbitrarily defined uncontrollable line $I$ nor contained within the controllable plane $R$, $\overline{A}$ can have eigenvalues of its own that aren't eigenvalues of either $A_{ctrl}$ or $A_{\textit{not-ctrl}}$, notably because there is a coupling block $A_{1,2}$.
I feel like I'm missing something obvious here, and I can't find it. Any possible explanation ?
AI: As you have said, the uncontrollable subspace forms a complement to the controllable subpace. In other words, $\Bbb R^n = R \oplus I$. That is, we can uniquely decompose every vector $x \in \Bbb R^n$ into the form $x = x_R + x_I$, with the "controllable component" $x_R \in R$ and "uncontrollable component" $x_I \in I$. A state $x \in \Bbb R^n$ is controllable if and only if its uncontrollable component is zero.
It is useful to have such a decomposition because the nature of the "state-update" matrix $A$ is completely determined by its behavior over these separate subspaces, since for any $x = x_R + x_I$, we have
$$
Ax = A(x_R + x_I) = Ax_R + Ax_I.
$$
Here is a continuous-time example. Suppose that we have
$$
A = \pmatrix{a_1 & 0\\0 & a_2},\quad B = \pmatrix{1\\0}, \quad C = \pmatrix{1&1}, \quad D = 0.
$$
It is easy to verify that our controllable subspace of $\Bbb R^2$ is the $x_1$-axis, i.e. the span of $(1,0)$. Any other one-dimensional subspace can be selected as the uncontrollable subspace, but it is convenient to take $I$ to be the span of $(0,1)$, since this space happens to be invariant under $A$ (note: such a complement is not always available).
Suppose that the initial state is given by $x(0) = (x_1,x_2)$. It is easy to see that for input $u(t)$, the state and output will be
$$
x(t) = \left(e^{a_1t}x_1 + e^{a_1t}\int_0^t e^{-a_1\tau}u(\tau)\,d\tau, \quad x_2 e^{a_2t}\right),
\\
y(t) = \left[e^{a_1t}x_1 + e^{a_1t}\int_0^t e^{-a_1\tau}u(\tau)\,d\tau\right] + x_2 e^{a_2t}.
$$
The first component of the sum, which corresponds to the controllable component of $x(t)$, can be stabilized with a suitable input. The second component, which corresponds to the uncontrollable component of $x(t)$, cannot be stabilized in this way. We could also say that the component $x_2 e^{a_2t}$ is itself an autonomous trajectory of the system: it transpires independently of the input.
We see from the above that the output is only stabilizable (i.e. can be "steered" so that $y(t) \to 0$) if $e^{a_2t} \to 0$.
Correspondingly, we see that $a_2$ is an eigenvalue of $A$ whose eigenvector $(0,1)$ is an element of the uncontrollable subspace $I$.
Suppose that we keep $v_1 = (1,0)$ as the basis for $R$, but instead take $v_2 = (1,1)$ as a basis for $I$. We find that
$$
Av_1 = a_1 v_1 + 0v_2, \\
A v_2 = \pmatrix{a_1\\a_2} = (a_1 - a_2)v_1 + a_2 v_2.
$$
So, the matrix of $A$ relative to the basis $\{v_1,v_2\}$ is
$$
\bar A = \pmatrix{a_1 & a_1 - a_2\\0 & a_2}.
$$
We indeed find that the eigenvalue $a_2$ of $A$ is associated with our uncontrollable subspace $I$. It is tricky, however, to figure out exactly what "associated with $I$" really means here.
One way to make sense of it is this. If we define the projection map $P_I(x_R + x_I) = x_I$, then we could say that the eigenvalue $\lambda$ of $A$ is "associated with $I$" if it is an eigenvalue of the map $T:I \to I$ defined by $T(x) = P_I(Ax)$.
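A small symbolic check of the controllable subspace in the example above (a sketch assuming sympy; the symbols $a_1, a_2$ mirror the matrices introduced earlier):

```python
import sympy as sp

a1, a2 = sp.symbols('a1 a2')
A = sp.Matrix([[a1, 0], [0, a2]])
B = sp.Matrix([[1], [0]])
ctrb = B.row_join(A * B)   # the controllability matrix [B  AB]
print(ctrb)                # Matrix([[1, a1], [0, 0]])
print(ctrb.rank())         # 1, so the controllable subspace is span((1, 0))
```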
|
H: Vector Calculus Gradient - are my answers correct?
Could someone please clarify if I have the correct answers for these following questions?
[Assume $r=x$i +yj+zk and $a=a_1$i +$a_2$j+$a_3$k for some constants $a_1,a_2,a_3$]
$\nabla f$ for $f=\cos(x)+3y^2\sin^3 z$
Answer: $\nabla f = -sin(x)i+6ysin^3(z)j+9y^2sin^2(z)cos(z)k$ [edited]
$\nabla f$ for $f=r\cdot r$
Answer: $\nabla f =\nabla(x^2i+y^2j+z^2k)+(2xy+2xz+2yz) $ [edited]
$\nabla \cdot (a\times r - r)$
Answer: $\nabla \cdot ((a_2z-a_3y)i-(a_1z-a_3x)j+(a_1y-a_2x)k)-(xi+yj+zk)=0$
AI: The gradient of f(x,y,z) is $\nabla f= \frac{\partial f}{\partial x}\vec{i}+ \frac{\partial f}{\partial y}\vec{j}+ \frac{\partial f}{\partial z}\vec{k}$.
In (i) $f(x,y,z)= \cos(x)+ 3y^2 \sin^3(z)$ so $\frac{\partial f}{\partial x}= -\sin(x)$, $\frac{\partial f}{\partial y}= 6y \sin^3(z)$, and $\frac{\partial f}{\partial z}= 9y^2\sin^2(z)\cos(z)$
So $\nabla f= -\sin(x)\vec{i}+ 6y \sin^3(z)\vec{j}+ 9y^2\sin^2(z)\cos(z)\vec{k}$
I don't know how you got "$\cos(i)$"! I'm hoping it was a typo but I am afraid you decided that the derivative of cos(x) with respect to x was "cos(1)" and then just replaced the "1" with "i"!
|
H: 'Fake' identity regarding the closure in the subspace topology
I have the following argument which I encountered, and can't seem to find why it's not true:
Let $X$ be a topological space and let $A$ and $B$ be nonempty subsets of $X$. Then $\overline{A\cap B}^A=\overline{A\cap B}^X\cap A$.
"Proof"
$\overline{A\cap B}^A =\cap \{ F\subset A: \; F\supseteq A\cap B, F \text{ is closed in } A \}$.
Since $F$ is closed in $A$ if and only if $F=F'\cap A$ where $F'$ is closed in $X$. Hence
$\overline{A\cap B}^A= \cap \{ F'\cap A: \; F'\supseteq A\cap B, F' \text{ is closed }\}= \Big( \cap \{ F': \; F'\supseteq A\cap B, F' \text{ is closed }\}\Big) \cap A $.
And using an equivalent characterization of the closure,
$ \cap \{ F': \; F'\supseteq A\cap B, F' \text{ is closed }\}=\overline{A\cap B} $.
And finally we conclude
$ \overline{A\cap B}^A=\overline{A\cap B}^X\cap A $.
I can't seem to find where this argument falters, but this thread makes me think that it's wrong, and would appreciate any pointers.
AI: I am not sure whether this answers your question but in general if $D\subseteq A$ then $\overline D^A=A\cap\overline D$.
Proof:
Observe that $A\cap\overline D$ is closed in $A$ with $D\subseteq A\cap\overline D$ telling us that $\overline D^A\subseteq A\cap\overline D$.
Conversely $\overline D^A$ is closed in $A$ so that $\overline D^A=A\cap F$ for some closed set $F$.
Then from $D\subseteq\overline D^A$ it follows that $D\subseteq F$ and consequently $\overline D\subseteq F$.
Then $A\cap\overline D\subseteq A\cap F=\overline D^A$.
Applying this on $D:=A\cap B$ we find the statement in your question.
|
H: Find poles and residues of \begin{equation} f(z)=z^2/(\cosh z-1) \end{equation}
I understand that the point
\begin{equation}
z=0
\end{equation}
is an obvious singularity, but since it's also a root of multiplicity 2 of the numerator the residue of the given function at 0 will be 0.
As for the other singularities we have
\begin{equation}
\cosh z=1
\end{equation}
or
\begin{equation}
\cos(iz)=1
\end{equation}
and finally we find the singularities
\begin{equation}
iz=2n\pi
\end{equation}
or
\begin{equation}
z_n=-i2n\pi
\end{equation}
for any integer $n$.
As I have said, the residue at 0 is 0, but the other singularities are all simple poles, and therefore the residue at those points will be
\begin{equation}
\text{Res} f(z_n)=\lim_{z\to z_n} \frac{(z-z_n)z^2}{\cosh z-1}
\end{equation}
This is a 0/0 case and we can use L'Hospital's rule, and we get
\begin{equation}
\lim_{z\to z_n} \frac{3z^2-2z_nz}{\sinh z}
\end{equation}
Now this is no longer in indeterminate form, but we have \begin{equation}
a/0
\end{equation}
Clearly I must have done something wrong, but I cannot seem to grasp what that something is. Any help will be greatly appreciated.
AI: $z_n$ is a pole of order $2$ since $e^{z}+e^{-z}-2=(e^{z/2}-e^{-z/2})^{2}$. So you have to multiply by $(z-z_n)^{2}$ and compute the derivative at $z=z_n$.
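As a quick sanity check (a sketch assuming sympy), the expansion at $z=0$ confirms that the origin is a removable singularity, consistent with the residue $0$ there:

```python
import sympy as sp

z = sp.symbols('z')
f = z**2 / (sp.cosh(z) - 1)
print(sp.series(f, z, 0, 4))   # 2 - z**2/6 + O(z**4): no negative powers, so residue 0 at z = 0
```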
|
H: Integrate $\Omega=\int_{-\infty}^{\infty}\frac{\operatorname{arccot}(x)}{x^4+x^2+1}dx$
A friend of mine got me the problem proposed by Vasile Mircea Popa from Romania, which was published in the Romanian mathematical Magazine. The problem is to find:
$$\Omega=\int_{-\infty}^{\infty}\frac{\operatorname{arccot}(x)}{x^4+x^2+1}dx$$
As per Wolfram Alpha, the evaluated value is found to be $0$. The reason is that $\operatorname{arccot}(x)=-\operatorname{arccot}(-x)$ for all $x\neq0$, i.e. $\operatorname{arccot}$ is taken to be an odd function.
However, the next answer obtained is $\frac{\pi^2}{ 2\sqrt{3}}$ where the relations $\text{arccot}(x)=\frac{\pi}{2}-\operatorname{arctan}(x)\cdots(1)$ is used keeping in the view of principal branch of $\operatorname{arccot}(x)$. The works is as follows: $$\Omega=\int_{-\infty}^{\infty}\frac{\frac{\pi}{2}-\operatorname{arctan}(x)}{x^4+x^2+1}dx=\frac{\pi}{2}\int_{-\infty}^{\infty}\frac{dx}{x^4+x^2+1}-\underbrace{\int_{-\infty}^{\infty}\frac{\operatorname{arctan}(x)}{x^4+x^2+1}}_{\text{odd function}}dx\\\overbrace{=}^{xy=1}\frac{\pi}{2}\int_{-\infty}^{\infty}\frac{x^2 dx}{x^4+x^2+1}=\frac{\pi}{2}\int_{-\infty}^{\infty}\frac{dx}{\left(x-\frac{1}{x}\right)^{2}+3}$$
then, by Cauchy Schlömilch transformation
(Special case of Glasser's Masters theorem) we obtain $$\Omega= \frac{\pi}{2}\int_{-\infty}^{\infty}\frac{dx}{x^2+3}=\frac{\pi^2}{2\sqrt{3}}$$
Note that former integral can be solved without using aforementioned theorem, by the partial fraction of $x^4+x^2+1=(x^2+x+1)(x^2-x+1)$.
My question is, Which of the above work is correct?
In my view the first work is correct. In the second working,
Is the use of Maclaurin series done correctly?
AI: There are two main definitions of $\text{arccot}$ that people use. The one employed by WolframAlpha is that $\text{arccot}$ is the inverse of $\cot:\left(-\dfrac{\pi}{2},+\dfrac{\pi}{2}\right]\to\mathbb{R}$, so that $\text{arccot}$ is an odd function on $\mathbb{R}_{\neq 0}$. As in Botond's now deleted answer (which, I hope, would be undeleted),
$$\arctan(x)+\text{arccot}(x)=\dfrac{\pi}{2}\,\text{sign}(x)$$
for all $x\in\mathbb{R}$, where the sign function $\text{sign}:\mathbb{R}\to\{-1,+1\}$ uses the convention that $\text{sign}(0)=1$. For a plot of this version of $\text{arccot}$, see here.
The other definition is that $\text{arccot}$ is the inverse of $\cot:(0,\pi)\to \mathbb{R}$, which makes $\text{arccot}$ satisfy
$$\arctan(x)+\text{arccot}(x)=\dfrac{\pi}{2}\text{ for all }x\in\mathbb{R}\,.$$
This is the definition I prefer because this version of $\text{arccot}$ is continuous and differentiable (see a plot here). Furthermore, it aligns with other inverse trigonometric function identities:
$$\arcsin(x)+\arccos(x)=\dfrac{\pi}{2}\text{ for all }x\in[-1,+1]$$
and
$$\text{arcsec}(x)+\text{arccsc}(x)=\dfrac{\pi}{2}\text{ for all }x\in(-\infty,-1]\cup[+1,+\infty)\,.$$
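A quick numerical cross-check of the two conventions (a sketch assuming numpy and scipy): under the continuous convention $\operatorname{arccot}(x)=\frac{\pi}{2}-\arctan(x)$ the integral matches $\frac{\pi^2}{2\sqrt3}$, while under the odd convention the integrand is odd and the integral is $0$ by symmetry.

```python
import numpy as np
from scipy.integrate import quad

val = quad(lambda x: (np.pi / 2 - np.arctan(x)) / (x**4 + x**2 + 1), -np.inf, np.inf)[0]
print(val, np.pi**2 / (2 * np.sqrt(3)))   # both about 2.84913
```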
|
H: Show that at least one of $x(t),y(t),z(t)$ must be the constant solution at the origin.
Let $A$ be an invertible $3\times3$ matrix, and consider the equation
$\dot{x}(t)=Ax(t)$ Suppose there are three solutions
$x(t),y(t),z(t)$ with the properties:
$\lim\limits_{t\to\infty}x(t)=0$
$\lim\limits_{t\to-\infty}y(t)=0$
$z(4\pi)=z(0)$
Show that at least one of $x(t),y(t),z(t)$ must be the constant
solution at the origin.
My Attempt:
I first thought about this as a problem that should involve the stable manifold theorem, but later realized, that you need the flow $\phi_{t}(x)$ for that. But given here is just a solution.
And then I thought of coming up with an answer by simple manipulation of limits, like $z(0)=z(\lim\limits_{t\to\infty}x(t))=\lim\limits_{t\to\infty}z(x(t))...$ But it was also not helpful.
Appreciate your help
AI: Hint The solution $z$ is necessarily $4\pi$-periodic. Let us suppose that $z$ is not constant. Let $P\subset \bf{R}^3$ be the subspace of initial conditions at $t=0$ that generate $4\pi$-periodic solutions. As $z(0)\in P$, we know that $\dim(P)\ge 1$, but the derivative of a $4\pi$-periodic solution is also $4\pi$-periodic, which implies that $u\in P\Longrightarrow A u\in P$; in other words, $P$ is stable under $A$. If we had $\dim(P)=1$, it would imply that $A z(0) = \lambda z(0)$, hence $z(t) = e^{\lambda t}z(0)$, hence $\lambda=0$ by periodicity, which is impossible because $z$ would be constant.
It follows that $\dim(P)\ge 2$. Now let $\lambda$ be a real eigenvalue of $A$ (there is at least one) and $V_\lambda$ its (stable) eigenspace. If $\lambda\neq 0$ one must have ${\bf R}^3=P\oplus V_\lambda$. By decomposing $x(0)$ and $y(0)$ along this decomposition, one gets that if they are not zero, $\lambda$ must be simultaneously negative and positive, a contradiction. If $\lambda=0$, then $P={\bf R}^3$, but then $x$ and $y$ must be $0$ because they are periodic and tend to $0$.
Alternate strategy
Assume that $z$ is not constant
If $z, z', z''$ are independent, then all the solutions are $4\pi$-periodic and $x$ and $y$ must vanish.
Otherwise, $W=\text{span}(z(0), z'(0))$ is a 2 dimensional subspace of ${\bf R}^3$, stable by $A$, containing no eigenvector of $A$.
Write ${\bf R}^3 = W\oplus V_\lambda$ and conclude as above.
|
H: Expected value of the number of white balls in a box
In a box there are $k$ white and $l$ black balls, and next to the box there are $m$ white and $m$ black balls. Out of the box one ball is randomly taken out and then returned to the box also with $m$ more balls of the same color.
a) Find expected value of the number of white balls that are now in the box
b) If now we are picking a ball out of the box, what is the probability that it will be white?
b) Hypotheses:
$H_{1}:$ I have taken a white ball out of the box
$H_{2}:$ I have taken a black ball out of the box
Let A be the probability we are looking for:
$P(A)=P(A\mid H_{1})P(H_{1})+P(A\mid H_{2})P(H_{2})=\frac{k+m}{k+m+l}\frac{k}{k+l}+\frac{k}{k+m+l}\frac{l}{k+l}$
a) I am not really sure how to solve this, I was thinking of a random variable $X$ that will represent number of white balls in a box, and it can only take values $k$ or $k+m$?
AI: You are on the right track and can apply:
$$\mathbb EX=\mathbb E[X\mid H_1]P(H_1)+\mathbb E[X\mid H_2]P(H_2)$$
Just as you did with finding $P(A)$.
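A small Monte Carlo sanity check of part (a), with arbitrary (hypothetical) values of $k, l, m$; plugging into the displayed formula gives $k+\frac{mk}{k+l}$:

```python
import random

k, l, m, trials = 3, 5, 2, 200_000                # hypothetical values of k, l, m
total = 0
for _ in range(trials):
    white_drawn = random.random() < k / (k + l)   # P(H1) = k/(k+l)
    total += (k + m) if white_drawn else k        # number of white balls now in the box
print(total / trials, k + m * k / (k + l))        # simulation vs exact value k + mk/(k+l)
```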
|
H: If $f(x) = 2\tan^{-1}x + \sin^{-1} \frac{2x}{1+x^2}$, find the values of $x$ for which the function is independent of $x$
For $x>1$,
$$2\tan^{-1} x = \pi -\sin^{-1} \frac{2x}{1+x^2}$$
For $x<-1$,
$$2\tan^{-1} x =-\pi -\sin^{-1} \frac{2x}{1+x^2}$$
So $x\in (-\infty, -1) \cup (1, \infty)$
But the given answer is only $(1,\infty)$
Is there any specific reason why we don’t consider the left part, or is it just a mistake?
AI: Your answer is correct! It's vividly seen in the graph below:
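Since the graph is not reproduced here, a quick numerical check (a sketch assuming numpy) shows the expression is constantly $\pi$ on $(1,\infty)$ and constantly $-\pi$ on $(-\infty,-1)$, which supports including both intervals:

```python
import numpy as np

for x in (1.5, 3.0, 10.0, -1.5, -3.0, -10.0):
    print(x, 2 * np.arctan(x) + np.arcsin(2 * x / (1 + x**2)))   # pi for x > 1, -pi for x < -1
```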
|
H: Integrate: $\int \frac{x}{\left(x^2-4x-13\right)^2}dx$.
Integrate:
$$\int \frac{x}{\left(x^2-4x-13\right)^2}dx$$
Here's my attempt:
I first completed the squares for the denominator:
$$x^2-4x-13=(x-2)^2-17 \implies \int \frac{x}{\left(\left(x-2\right)^2-17\right)^2}dx$$
I then used $u$-subsituition:
$$u=x-2 \implies \int \frac{u+2}{\left(u^2-17\right)^2}du = \int \frac{u}{\left(u^2-17\right)^2}du+\int \frac{2}{\left(u^2-17\right)^2}du$$
The first part of the new integral is quite simple:
$$\int \frac{u}{\left(u^2-17\right)^2}du=\frac{-1}{2(u^2-17)}$$
Then I did the second part:
$$\int \frac{2}{\left(u^2-17\right)^2}du = -\frac{1}{2\left(u^2-17\right)}+2\left(\frac{1}{68\sqrt{17}}\ln \left|u+\sqrt{17}\right|-\frac{1}{68\left(u+\sqrt{17}\right)}-\frac{1}{68\sqrt{17}}\ln \left|u-\sqrt{17}\right|-\frac{1}{68\left(u-\sqrt{17}\right)}\right) = -\frac{1}{2\left(\left(x-2\right)^2-17\right)}+2\left(\frac{1}{68\sqrt{17}}\ln \left|x-2+\sqrt{17}\right|-\frac{1}{68\left(x-2+\sqrt{17}\right)}-\frac{1}{68\sqrt{17}}\ln \left|x-2-\sqrt{17}\right|-\frac{1}{68\left(x-2-\sqrt{17}\right)}\right) = -\frac{1}{2\left(x^2-4x-13\right)}+2\left(\frac{1}{68\sqrt{17}}\ln \left|x-2+\sqrt{17}\right|-\frac{1}{68\left(x-2+\sqrt{17}\right)}-\frac{1}{68\sqrt{17}}\ln \left|x-2-\sqrt{17}\right|-\frac{1}{68\left(x-2-\sqrt{17}\right)}\right) + C, C \in \mathbb{R}$$
Is this working out correct? I'm not really sure how WolframAlpha works, so I didn't check it on there.
AI: Here is an alternative method to integrate as follows
$$\int \frac{x}{\left(x^2-4x-13\right)^2}dx$$
$$=\int\frac12 \frac{(2x-4)+4}{\left(x^2-4x-13\right)^2}dx$$
$$=\frac12\int \frac{2x-4}{\left(x^2-4x-13\right)^2}dx+\frac12\int\frac{4}{\left(x^2-4x-13\right)^2}dx$$
$$=\frac12\int \frac{d(x^2-4x-13)}{\left(x^2-4x-13\right)^2}+2\int\frac{d(x-2)}{\left((x-2\right)^2-17)^2}$$
using reduction formula: $\color{blue}{\int \frac{dt}{(t^2+a)^n}=\frac{t}{2(n-1)a(t^2+a)^{n-1}}+\frac{2n-3}{2(n-1)a}\int\frac{dt}{(t^2+a)^{n-1}}} $,
$$=\frac12 \frac{-1}{\left(x^2-4x-13\right)}+2\left(\frac{(x-2)}{2(-17)((x-2)^2-17)}+\frac{1}{2(-17)}\int \frac{d(x-2)}{(x-2)^2-17}\right)$$
using standard formula: $\color{blue}{\int \frac{dt}{t^2-a^2}=\frac{1}{2a}\ln\left|\frac{t-a}{t+a}\right|}$,
$$=-\frac{1}{2\left(x^2-4x-13\right)}-\frac{(x-2)}{17(x^2-4x-13)}-\frac{1}{34\sqrt{17}}\ln\left|\frac{x-2-\sqrt{17}}{x-2+\sqrt{17}}\right|+C $$
$$=-\frac{2x+13}{34(x^2-4x-13)}-\frac{1}{34\sqrt{17}}\ln\left|\frac{x-2-\sqrt{17}}{x-2+\sqrt{17}}\right|+C $$
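A quick symbolic check of the final line (a sketch assuming sympy; the absolute value inside the logarithm is dropped, which does not affect the derivative):

```python
import sympy as sp

x = sp.symbols('x')
s17 = sp.sqrt(17)
F = -(2*x + 13) / (34 * (x**2 - 4*x - 13)) - sp.log((x - 2 - s17) / (x - 2 + s17)) / (34 * s17)
print(sp.simplify(sp.diff(F, x) - x / (x**2 - 4*x - 13)**2))   # 0
```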
|
H: Conditional Probability - how to find $P(A'|B')$
I have been trying to answer this problem but it appears I have the incorrect approach.
Given events $A,B$ with $P(A)=0.5$, $P(B)=0.7$, and $P(A\cap B)=0.3$, find: $P(A'|B')$
I know $P((A\cup B)') = 1 - P(A\cup B)$
I also know that $P(A\cup B) = P(A) + P(B) - P(A\cap B) = 0.5 + 0.7 - 0.3 = 0.9$
Hence $P((A\cup B)') = 1 - P(A\cup B) = 1 - 0.9 = 0.1$
$P(A'|B') = 0.1 / (1-0.7) = 0.3$ but that appears to be the wrong answer. Where am I going wrong?
Thank you in advance!
AI: The only mistake I can find is that $\frac{0.1}{1-0.7}$ is $\frac{1}{3}$ which is $0.333...$ According to me the answer is $\frac{1}{3}$
|
H: Find the greatest integer less than $3^\sqrt{3}$ without using a calculator and prove the answer is correct.
Find the greatest integer less than $3^\sqrt{3}$ without using a calculator and prove the answer is correct.
I'm puzzled on how to solve this problem; any help is appreciated. There were hints about turning the exponents into fractions and picking fractions between: $3^x < 3^\sqrt3 <3^y$
Then I simplified: $x< \sqrt3<y$
$x^2< 3<y^2$
$\sqrt2^2<3<\sqrt4^2$
So $x=\sqrt2$ and $y=\sqrt4=2$
$3^\sqrt2 < 3^\sqrt3 <3^2$
AI: Since $3 = \frac{48}{16} <\frac{49}{16}$, you have $\sqrt{3}<\frac{7}{4}.$ So you might try to take $y=\frac{7}{4}.$ It's easy to calculate $3^7 = 2187$ which is close to $2401 = 7^4.$ So $3^7<7^4$ and you have $3^{7/4}<7.$ So your answer is $6$ or less.
Try $x=5/3$ to get the lower bound.
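The integer comparisons behind these bounds can be spot-checked in plain Python:

```python
print(3**7 < 7**4)            # True: 2187 < 2401, so 3**(7/4) < 7
print(3**5 > 6**3)            # True: 243 > 216,  so 3**(5/3) > 6
print(5/3 < 3**0.5 < 7/4)     # True: 5/3 < sqrt(3) < 7/4
```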
|
H: Is there a simple proof that a non-invertible matrix reduces to give a zero row?
Let $A$ be a square matrix that is non-invertible. I was wondering if there is a simple proof that we can apply elementary row operations to get a zero row. (For a matrix $C$ to be invertible, I mean there is $B$ such that $CB = BC = I$.)
I can prove this using elementary column operations but I would like a more direct proof that doesn’t appeal to column operations or the fact that row rank equals column rank, or anything to do with transposes, or the existence of RREF, or determinants, etc. The difficulty seems to be that elementary row operations are applied on the row space, whereas invertibility is sort of defined in terms of the column space.
You could also use (but it is preferred not to) facts like: A matrix $C$ being invertible is equivalent to null space of $C$ being zero (i.e. injective) is equivalent to $C$ being surjective.
AI: If your matrix $A$ is $n\times n$, the search for an inverse matrix is the same as solving the $n$ linear systems
$$
Ax=e_i\qquad (i=1,2,\dots,n)
$$
where $e_i$ is the $i$-th column of the identity matrix. If the matrix is not invertible, then at least one of those systems must have no solution, say it's the one for $e_i$.
Performing row reduction on the augmented matrix $[A\mid e_i]$ yields that the last column must be a pivot column. If the pivot is on row $j$, then the row reduction of $A$ has a zero $j$-th row.
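A tiny concrete illustration (a sketch assuming sympy, with a hypothetical singular matrix): its reduced form has a zero row, matching the argument above.

```python
import sympy as sp

A = sp.Matrix([[1, 2], [2, 4]])   # a non-invertible (singular) 2x2 matrix
print(A.rref())                   # (Matrix([[1, 2], [0, 0]]), (0,)): a zero row appears
```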
|
H: Factor $x^5-5x^3+4x$
I am trying to factor $x^5-5x^3+4x$ so that I can find the roots. I know from the answers section that the roots are where $x = 0, 1, -1, 2$ and $-2$.
I'm stuck, here's as far as I got:
$$
x^5-5x^3+4x =
x(x^4-5x^2+4)
$$
Let $u = x^2$ and just focus on the term on the right (drop the first $x$ for now):
$$x^4-5x^2+4 = u^2-5u+4.$$
Master term is $a \times c = 1 \times 4 = 4$.
Seeking a pair of numbers that sum to the middle term $-5$ and whose product is $4$:
\begin{align}
1 \times -4 &= -4, &\text{sum} &= -3 \\
4 \times -1 &= -4, &\text{sum} &= 3 \\
2 \times -2 &= -4, &\text{sum} &= 0
\end{align}
???
I'm not sure how to proceed since I cannot find a pair of numbers that satisfy the condition.
Have I gone wrong somewhere? How can I factor $u^2-5u+4$?
AI: $$x^5-5x^3+4x=x(x^4-5x^2+4)$$$$=x(x^4-4x^2-x^2+4)$$$$=x(x^2(x^2-4)-(x^2-4))$$$$=x((x^2-4)(x^2-1))$$
$$=x(x+2)(x-2)(x+1)(x-1)$$
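For completeness, sympy confirms the factorization and the roots (a minimal sketch assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.factor(x**5 - 5*x**3 + 4*x))    # x*(x - 2)*(x - 1)*(x + 1)*(x + 2)
print(sp.solve(x**5 - 5*x**3 + 4*x, x))  # roots 0, 1, -1, 2, -2
```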
|
H: find function $f$ such that $f(x)=xf(x-1)$ and $f(1) = 1$
Find function $f$ such that $f(x)=xf(x-1)$ and $f(1) = 1$.
I can prove that there is just one function as $f$ (see Proof1).
I know that there exists a pi function $\Pi(z) = \int_0^\infty e^{-t} t^z\, dt$ that fits as $f$ so it is the only solution.
My problem is, although I know the answer to be $\Pi(z)$ I can not reach it myself. I mean how can I find $f$ without knowing the answer before? how can I reach $\Pi(z)$ from the equation $f(x)=xf(x-1)$ and $f(1) = 1$?
Proof1
suppose $f(x)$ and $g(x)$ are two solutions to $f(x)=xf(x-1)$ and $f(1) = 1$
$$
\frac{f(x)}{g(x)} = h(x) \Rightarrow \frac{f'(x)}{g(x)} - \frac{f(x)g'(x)}{g^2(x)} = h'(x) \\
\Rightarrow \frac{f(x-1) + xf'(x-1)}{g(x)} - \frac{f(x)}{g(x)}\frac{g(x-1) + xg'(x-1)}{g(x)} = h'(x) \\
\Rightarrow \frac{f(x-1)}{g(x)}+\frac{xf'(x-1)}{g(x)} - \frac{f(x)}{g(x)}(\frac{g(x-1)}{g(x)}+\frac{ xg'(x-1)}{g(x)}) = h'(x) \\
\Rightarrow \frac{1}{x}\frac{f(x)}{g(x)}+\frac{xf'(x-1)}{xg(x-1)} - \frac{f(x)}{g(x)}(\frac{1}{x}+\frac{xg'(x-1)}{xg(x-1)}) = h'(x) \\
\Rightarrow \left [\frac{1}{x}\frac{f(x)}{g(x)} - \frac{f(x)}{g(x)}\frac{1}{x}\right ] + \left [\frac{xf'(x-1)}{xg(x-1)} - \frac{f(x)}{g(x)}\frac{xg'(x-1)}{xg(x-1)}\right ] = h'(x) \\
\Rightarrow 0 + \left [\frac{\not{x}f'(x-1)}{\not{x}g(x-1)}\frac{g(x-1)}{g(x-1)} - \frac{\not{x}f(x-1)}{\not{x}g(x-1)}\frac{\not{x}g'(x-1)}{\not{x}g(x-1)}\right ] = h'(x) \\
\Rightarrow \frac{f'(x-1)g(x-1) - g'(x-1)f(x-1)}{g^2(x-1)} = h'(x) \\
\Rightarrow h'(x-1) = h'(x) \\
$$
now we know that $h(x)$ must be a line. But $h(x) = h(x+n)$ $^{h(x+1) = \frac{f(x+1)}{g(x+1)} = \frac{{x +1}f(x)}{{x +1}g(x)} = \frac{f(x)}{g(x)} = h(x)}$. So $h(x)$ must be a horizontal line so it is a constant. so:
$$
\frac{f(x)}{g(x)} = c, \frac{f(1)}{g(1)} = 1 \Rightarrow \frac{f(x)}{g(x)} = 1 \Rightarrow f(x) = g(x)
$$
AI: No, it is not true. Given any solution $f$ and any function $g$ that is periodic with period $1$ (i.e. $g(x+1) = g(x)$) and $g(1)=1$, then $h(x) = f(x) g(x)$ also satisfies your equations. For example, you could take $g(x) = 1 + \sin(2\pi x)$.
|
H: Significance of Codomain of a Function
We know that the Range of a function is the set of all values a function will output.
While Codomain is defined as "a set that includes all the possible values of a given function."
By knowing the the range we can gain some insights about the graph and shape of the functions. For example consider $$f(x)=e^x$$By knowing that the range of the function is $(0,\infty)$,we can conclude that the graph lies above the $X-axis$.
My Questions
Does knowing codomain of a function give any insight/information about the function?
Every function has a specific range and it is universal; is this true for the codomain also?
What I am trying to say is that range of $\sin x$ is $[-1,1]$.While as per my understanding codomain is $\mathbb R$ (Real Numbers).But defining codomain of $\sin x$ as say $(-2,2)$ is not going to change anything. So $(-2,2)$ is a valid codomain for $\sin x$. Am I right?
What compelled mathematicians to define codomain, why were they not happy with the concept of range only?
AI: What I am trying to say is that range of $\sin x$ is $(-1,1)$.
You made a mistake here. The range of sine is a closed interval, which we denote with $[-1, 1]$, not an open one $(-1,1)$.
While as per my understanding codomain is $\Re$(real numbers).
Yup, real numbers. But they are usually denoted with $\mathbb R$ (LaTeX/MathJax \mathbb R), not $\Re$ (\Re).
But defining codomain of $\sin x$ as say $(-2,2)$ is not going to change anything.
You're wrong. Redefining the codomain may change properties of a function. Giving the sine function a codomain of $(-2,2)$ doesn't change it much, but giving it $[-1,1]$ changes a lot:
$$\sin : \mathbb R \to [-1,1]$$
is a surjection (a function "onto"), while
$$\sin : \mathbb R \to \mathbb R$$
is not.
Redefining a domain may change the function's properties, too:
$$\sin : \left[0, \tfrac\pi 2\right] \to \mathbb R$$
is an injection (a function "into"), while
$$\sin : \left[0, \pi\right] \to \mathbb R$$
is not.
To answer specifically the last sentence from the question:
What compelled mathematicians to define codomain, why were they not happy with the concept of range only?
Here I copy what I previously added in the comment below:
we need codomains, because we sometimes need to consider functions, whose definition is known together with a codomain, but the range is unknown. Sometimes we do not even have the definition, only some properties are known and we are satisfied with knowing the codomain without narrowing it to the range ("suppose $f$ is a real-valued function such that...; show $f$ is constant" – we know the codomain is $\mathbb R$ and we just need to show the range is one-point, not necessarily which one).
Expansion:
Be also aware that the range of a function may be hard to describe. For continuous real functions we consider at schools the range is often an interval or a sum of intervals – but those are special cases. There are functions with much less regular ranges.
For example see this question at Math.SE: Show that the function f is continuous only at the irrational points for a function described also at Wikipedia: Thomae's function – it is defined on real numbers, but its range is a set of reciprocals of all natural numbers and zero: $$\mathbb R \to \{\tfrac 1n:n\in\mathbb N\}\cup\{0\}.$$
One can easily declare a function whose range is literally any predefined nonempty set $S\subseteq\mathbb R$ – just choose any $s\in S$ and define: $$f:x\mapsto \begin{cases}x&\text{if }x\in S,\\s&\text{otherwise.}\end{cases}$$
In more general approach the range can be even harder to describe analytically.
Consider a function, whose parameter is real and values are pairs of real numbers (or complex numbers, which is equivalent to the latter thanks to the complex plane by Jean-Robert Argand). If the function is continuous, its range is a curve on a plane. For example if the function is a position of a projectile in terms of a height and a distance, we get a complete trajectory. It's not too likely one will need to compare such trajectories – we will usually be interested in the maximum height and a maximum distance reachable under some conditions, but not the whole shape. Anyway, it is possible. But how would you compare a curve of a ballistic trajectory to a simple square? ...to a Koch snowflake? ...to the Warsaw Circle? ...or to a Heighway dragon?
And how about non-continuous functions, or those defined on some subsets of $\mathbb R$, whose ranges may become any figure on the plane, for example a family of concentric circles intersected by a family of parallel lines? ...or the interior of an annulus?
Things get even more weird if the 'target space' of a function is some more complex set, like a space of integer sequences, a space of real matrices $5\times 5$, a space of real functions integrable over a unit interval, and so on. You don't always need to know the range of a function, often it's just enough to know what its codomain is.
|
H: A stronger inequality than in AoPS.
For $x,y,z >0.$ Prove$:$
$$\sum {\frac {y+z}{x}}+{\frac {1728 {x}^{ 3}{y}^{3}{z}^{3}}{ \left( x+y \right) ^{2} \left( y+z \right) ^{2} \left( z+x \right) ^{2} \left( x+y+z \right) ^{3}}} \geqslant 4\sum {\frac {x}{y+z }}+1$$
I check when $xyz=0$ and $x=y$ and see it's true. So I guess it's true.
So I try to get it in $uvw$ form as follow$:$
$$-26244{u}^{7}{v}^{2}{w}^{3}+19683{u}^{6}{v}^{6}+2916{u}^{6}{w}^{6}+4374{u}^{5}{v}^{4}{w}^{3}-2673{u}^{4}{v}^{2}{w}^{6}+216{u}^{3}{w}^{9}+1728{w}^{12} \geqslant 0$$
Then I don't know how to finish the proof. BW does not help here.
I'm not sure about this inequality. I found it when I prove this inequality.
Help me please. Thanks for a real lot!
AI: $uvw$ helps!
Let $x+y+z=3u$, $xy+xz+yz=3v^2$, where $v>0$, and $xyz=w^3$.
Thus, we need to prove that:
$$\frac{9uv^2-3w^3}{w^3}+\frac{1728w^9}{(9uv^2-w^3)^2\cdot27u^3}\geq\frac{4(27u^3-27uv^2+3w^3+9uv^2-3w^3+3w^3)}{9uv^2-w^3}+1$$ or
$$\frac{9uv^2}{w^3}+\frac{64w^9}{u^3(9uv^2-w^3)^2}\geq\frac{4(27u^3-9uv^2+2w^3)}{9uv^2-w^3}$$ or $f(w^3)\geq0,$ where
$$f(w^3)=64w^{12}+8u^3w^9+108u^6w^6-99u^2v^4w^6-972u^7v^2w^3+162u^5v^4w^3+729u^6v^6.$$
But since by Maclaurin $$u\geq v\geq w,$$ we obtain: $$f'(w^3)=256w^9+24u^3w^6+216u^6w^3-198u^2v^4w^3-972u^7v^2+162u^5v^4<0,$$
which says that $f$ decreases and it's enough to prove our inequality for a maximal value of $w^3$,
which happens for equality case of two variables.
Since our inequality is homogeneous and symmetric, it's enough to assume $y=z=1,$ which gives:
$$(x-1)^2(x-2)^2(x^4+10x^3+37x^2+20x+4)\geq0$$ and we are done!
Now we see why BW does not help: the equality occurs also for $(x,y,z)=(2,1,1).$
|
H: Inequality on a domain and on compact subsets of the domain
Let $u \in L^1_{loc}(\Omega)$ on a bounded domain $\Omega$. Suppose
$$u > 0 \quad \text{a.e. on compact subsets of $\Omega$}.$$
Does this imply that
$$u > 0 \quad \text{a.e. on $\Omega$}?$$
This is trivially true, isn't it? Because we can exhaust almost every point of $\Omega$ by compact subsets. The two statements are equivalent?
AI: Yes it does. It's immediate from the inner regularity of Lebesgue measure $\lambda^d$.
|
H: Does $\mathrm{Log}(\zeta)$ extend meromorphically past $\Re(s)=1$?
Let $\zeta(s)=\sum_{n\ge 1}n^{-s}$ be the Riemann zeta function. It is well-known that the infinite sum
$$\mathrm{Log}(\zeta)=-\sum_{p\text{ prime}}\log(1-p^{-s})$$
converges to an analytic function on the right half-plane $\Re(s)>1$.
Question. Is it known whether this function also admits meromorphic continuation
to $\Re(s)>1-\delta$ for some $\delta>0$?
I assume the answer to this question should be well-documented, but I couldn't find it so far. Would very much appreciate a reference or a proof of existence or non-existence of meromorphic continuation.
Thank you!
AI: The obstacles to defining a logarithm of a meromorphic function $f$ are the poles and zeros of $f$. That is, if $U$ is a simply-connected open region on which $f$ is analytic and has no zeros, it has an analytic logarithm in $U$. Conversely, if $f$ has zeros or poles in $U$, it can't have a meromorphic logarithm there (note that if $g$ has a
pole at some point, $\exp(g)$ has an essential singularity there). In the case of $\zeta$, we know it has a simple pole at $s = 1$, so $\log(\zeta(s))$ can't be defined as a meromorphic function in a neighbourhood of $1$.
|
H: Does $\ker T\cap {\rm Im}\,T=\{0\}$ imply $V=\ker T\oplus{\rm Im}\,T$?
Let $T: V\rightarrow V$ be a linear operator of the vector space $V$.
We write $V=U\oplus W$, for subspaces $U,W$ of $V$, if $U\cap W=\{0\}$ and $V=U+W$.
If we assume $\dim V<\infty$, then by the rank-nullity theorem, $\ker T\cap {\rm Im}\,T=\{0\}$ implies $V=\ker T\oplus {\rm Im}\,T$.
However, my question is about the case $\dim V$ is infinite. Is it still true? What if $T$ has a minimal polynomial?
Thanks.
AI: Let $\mathbb{K}$ be the base field. If $T:V\to V$ is such that $\ker(T)\cap\text{im}(T)=0$ and there exists $p(X)\in\mathbb{K}[X]$ such that $p(T)=0$, then $$V=\ker(T)\oplus\text{im}(T)\,.$$
By choosing $p(X)$ to be the monic polynomial of the lowest possible degree, we may assume that $0$ is a simple root of $p(X)$ (this is due to the assumption that $\ker(T)\cap\text{im}(T)=0$, and if the minimal polynomial of $T$ is not divisible by $X$, which is possible, then we simply multiply the minimal polynomial of $T$ by $X$). That is,
$$p(X)=X^n+a_{n-1}X^{n-1}+a_{n-2}X^{n-2}+\ldots+a_2X^2+a_1X$$
for some $a_1,a_2,\ldots,a_{n-2},a_{n-1}\in\mathbb{K}$ with $a_1\neq 0$.
Write $q(X):=X^{n-1}+a_{n-1}X^{n-2}+a_{n-2}X^{n-3}+\ldots+a_2X+a_1$. Note that
$$1=\frac{1}{a_1}\,q(X)+r(X)\,X\,,$$
where
$$r(X):=-\frac{1}{a_1}\,X^{n-2}-\frac{a_{n-1}}{a_1}\,X^{n-3}-\frac{a_{n-2}}{a_1}\,X^{n-4}-\ldots-\frac{a_3}{a_1}\,X-\frac{a_2}{a_1}\,.$$
Therefore,
$$\text{id}_V=\frac{1}{a_1}\,q(T)+r(T)\,T\,.$$
Fix $v\in V$. We get
$$v=\text{id}_V(v)=\left(\frac{1}{a_1}\,q(T)+r(T)\,T\right)v=\frac{1}{a_1}\,q(T)v+r(T)\,Tv\,.$$
Observe that $q(T)v\in \ker(T)$ and $Tv\in\ker\big(q(T)\big)$ (as $X\,q(X)=q(X)\,X=p(X)$ is the minimal polynomial of $T$). This implies
$$V=\ker(T)\oplus\ker\big(q(T)\big)\,.$$
We want to prove that
$$\text{im}(T)=\ker\big(q(T)\big)\,.$$
The direction $\text{im}(T)\subseteq\ker\big(q(T)\big)$ is clear because $q(X)\,X=p(X)$. We shall prove the reversed inclusion. Suppose that $v\in\ker\big(q(T)\big)$. Thus,
$$T^{n-1}v+a_{n-1}\,T^{n-2}v+a_{n-2}\,T^{n-3}v+\ldots+a_2Tv+a_1v=0\,.$$
This gives
$$v=T\left(-\frac{1}{a_1}\,T^{n-2}v-\frac{a_{n-1}}{a_1}\,T^{n-3}v-\frac{a_{n-2}}{a_1}\,T^{n-4}v-\ldots-\frac{a_2}{a_1}\,v\right)\in \text{im}(T)\,.$$
|
H: $H$ normal iff $Lie(H)$ is an ideal.
I was reading a proof of the following theorem.
Theorem 20.28 (Ideals and Normal Subgroups). Let $G$ be a connected Lie group,
and suppose $H \subset G$ is a connected Lie subgroup. Then $H$ is a normal subgroup of
$G$ if and only if $\operatorname{Lie}(H):=\mathfrak{h}$ is an ideal in $\operatorname{Lie}(G):=\mathfrak{g}$.
At some point it is proven that $(ad X)^kY\in \mathfrak{h}$ for all $k$. From this the person writing the proof says $\sum\limits_{k=0}^{\infty}\frac{1}{k!}(adX)^kY\in \mathfrak{h}$. But why is this still true in the limiting process? Are we using the fact that a finite dimensional subspace of a vector space is closed? So maybe if $G$ has infinite dimension this result is not true anymore?
AI: Yes, the idea is to use the fact that vector subspaces of finite-dimensional real vector spaces are always closed subsets. In infinite dimension, I suppose that it depends upon how is it that you define infinite-dimensional Lie groups.
|
H: Iterates of $\frac{\sqrt{2}x}{\sqrt{x^2 +1}}$ converge to $\text{sign}(x)$.
In this post, a comment states that if $f(x):= \dfrac{\sqrt{2}x}{\sqrt{x^2 +1}}$ and $F_n:=\underbrace{f\circ \dots\circ f}_{n\text{ times}}$, then the pointwise limit $\lim\limits_{n \to \infty} F_n$ is equal to the sign function
$$
\text{sign}(x):=\begin{cases}
1 & : x>0\\
0 & : x=0\\
-1 & : x<0
\end{cases}.
$$
When $x=0$, the limit is clear since $f(0)=0$ is a fixed point.
When $x>0$, I have the bound $0<F_n(x)<f(x)^{-2^n}\to 1$.
I guess the way to go about showing that the (pointwise) limit holds is by establishing a lower limit for $x>0$ (and arguing similarly for $x<0$). However, I can't manage to figure what that lower limit should look like...
AI: You can find the $n^{th}$ iterate of $f$ explicitly.
$$f(x)=\dfrac{\sqrt2x}{\sqrt{x^2+1}},$$
$$f(f(x))=\frac{\sqrt2\dfrac{\sqrt2x}{\sqrt{x^2+1}}}{\sqrt{\dfrac{2x^2}{x^2+1}+1}}=\frac{2x}{\sqrt{3x^2+1}},$$
$$f(f(f(x)))=\frac{\sqrt2\dfrac{2x}{\sqrt{3x^2+1}}}{\sqrt{\dfrac{4x^2}{3x^2+1}+1}}=\frac{2\sqrt2x}{\sqrt{7x^2+1}}$$
and more generally,
$$f^{(n)}(x)=\frac{2^{n/2}x}{\sqrt{(2^n-1)x^2+1}}=\frac x{\sqrt{(1-2^{-n})x^2+2^{-n}}}.$$
Then as $n$ tends to infinity,
$$f^{(\infty)}(x)=\frac x{|x|}.$$
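A quick numerical iteration (plain Python, arbitrary starting points) illustrates the convergence to $\operatorname{sign}(x)$:

```python
def f(x):
    return 2**0.5 * x / (x**2 + 1)**0.5

for x0 in (0.1, 5.0, -0.3):
    x = x0
    for _ in range(60):
        x = f(x)
    print(x0, x)   # approaches 1, 1, -1 respectively
```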
|
H: Can the step in gradient descent be negative?
For a single-variable function, gradient descent works just by repeatedly computing the next x that is closer and closer to the minimum of the function.The formula is
$$x_{i+1}:=x_i-t\nabla f\vert_{x= x_i} $$
But can "t" be negative or is it just positive? Does the gradient lead the way to the minimum so t is just a number to specify the magnitude of the step?
Thanks!
AI: Your intuition is correct: $t$ specifies the magnitude of the step. If you make the step size negative, you're now walking backwards, away from the minimum. This is equivalent to gradient descent of the function $-f$.
There are cases when it is useful to vary step size. When step size is similar to the distance to the minimum, $x_i$ will "zig-zag" around the minimum without settling on it. So you might begin a gradient descent algorithm with a large step size to find the general neighborhood of a minimum, then gradually decrease the step size to refine your estimate.
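A minimal sketch in Python (with a hypothetical objective $f(x)=(x-3)^2$) illustrating that a positive step size walks toward the minimizer while a negative one walks away from it:

```python
def grad(x):              # derivative of f(x) = (x - 3)**2
    return 2 * (x - 3)

x, t = 0.0, 0.1           # positive step size: move toward the minimum at x = 3
for _ in range(100):
    x = x - t * grad(x)
print(x)                  # ~3.0; with t < 0 the iterates move away from 3 and diverge
```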
|
H: Is cardinality a number?
It's easy to find definitions such as
If A and B are sets (finite or infinite) A and B have the same cardinality (written $|A|=|B|)$ if there is a bijection between them.
and equally easy to find statements such as
The cardinality of a finite set is equal to the number of elements in it.
If cardinality is not a number, how is the second statement to be understood? Where and how does the transition from 'no cardinality is not a number' to 'yes cardinality is a number' occur?
AI: By definition, $|A|$ is the smallest ordinal number that is equinumerous with $A$. The finite ordinal numbers are the natural numbers, so the cardinality of a finite set is a natural number by definition.
Infinite cardinals are analogous to natural numbers. Whether they are actually numbers is not a meaningful question, because "number" is not a well-defined term in mathematics. You could also ask whether $\infty$ is a number, or whether infinitesimals are numbers, etc., and those also are not clear questions. They are numbers in the sense that mathematicians refer to them as such, as in the phrase "cardinal number"; but I doubt you will find any formal definition of "number" that includes them.
|
H: Proof verification: polynomials $\mathbb R[X]$ are a vector space that is not isomorphic to its dual
I have never seen an elementary proof of the fact that $V$ need not be isomorphic to $V^*$ that does not require some set theoretic background. I came up with this [most likely incorrect] argument, which does not seem to depend so much on set theoretic arguments, except for basic ideas of cardinality. I wish to have this particular proof vetted.
Consider the vector space of polynomials $V \equiv \mathbb R[X]$ as an $\mathbb R$ vector space.
The set $B_V \equiv \{x^i : i \in \mathbb N \}$ is a basis for the vector space $V$: given any polynomial $p(x) \in \mathbb R[X]$, the polynomial $p$ has only finitely many non-zero coefficients. Thus $p(x)$ must be of the form $p(x) = \sum_{i \in \text{nonzero-powers}(p)} a_i x^i$ where the index set $\text{nonzero-powers}(p)$ has finite cardinality. Hence we can write any polynomial $p(x)$ as a finite linear combination of elements from the set $B_V$.
Next, consider the dual space $V^* \equiv \{ f : \mathbb R[X] \rightarrow \mathbb R \mid f \text{ is a linear function} \}$. We have the elements $eval_r$ which evaluate a polynomial at point $r \in \mathbb R$ as elements of $V^*$.
More formally,
$eval_r(p) \equiv p(r); \forall r \in \mathbb R, eval_r \in V^*$.
All the $eval_r$ are linearly independent. Intuitively, this is because we cannot pin down the value of all polynomials by evaluating them at some finite number of points.
More formally, suppose that $\sum_{i \in I} a_i eval_{r_i} = 0$ for some finite index set $I$. This gives us a way to extrapolate $eval_{r_{i_0}}$ from the other $eval_{r_i}$. However, this is absurd, since the value of a polynomial of degree $2|I|$ is not determined by its value at $|I|$ points. Hence all the $eval_r$ are linearly independent.
This means that we have a linearly independent set $L_{V^*} \equiv \{ eval_r : r \in \mathbb R \}$ whose cardinality is that of $|\mathbb R|$.
Wrapping up, we have that the basis of $V$, $B_V$ has cardinality $|\mathbb N|$. A linearly independent set of $V^*$, whose cardinality is a lower bound on the cardinality of $V^*$, has cardinality $|\mathbb R|$. Hence the vector spaces cannot be isomorphic since the cardinality of their bases are different.
Is this correct?
AI: This is indeed correct. Note that you don't need to talk about a basis of $V^*$ (something whose existence depends on the Axiom of Choice): simply the fact that $V^*$ has an uncountable linearly independent set, while $V$ does not, establishes that they can't be isomorphic.
|
H: Confusion about proving logical implication statements
I've got four statements* which I'm meant to evaluate as being either true or false.
a. If 25 is a multiple of 5, then 30 is divisible by 10.
b. If 25 is a multiple of 4, then 30 is divisible by 10.
c. If 25 is a multiple of 5, then 30 is divisible by 7.
d. If 25 is a multiple of 4, then 30 is divisible by 7.
While I'm inclined to just say they're all false, since I can't find any way that the then statements follow from the if statements, I can't help but feel as if I'm missing something, in that we're meant to assume the if statements are correct.
The best I can guess is that by accepting the conditionals, the number system must be warped in some way through which the following then statements are analyzed.
Anyone able to provide any insight?
AI: Welcome to MSE.
You’re doing a classic problem in logic. The idea is that, from a false statement, you can derive any other statement.
Hence we have the truth table for the statement “if P, then Q”. We call this $S$:
$S = (P \Rightarrow Q)$
$$\begin{array}{cc|c}
P & Q & S\\
\hline
\text{F} & \text{T} & \text{T}\\
\text{F} & \text{F} & \text{T}\\
\text{T} & \text{T} & \text{T}\\
\text{T} & \text{F} & \text{F}
\end{array}$$
Can you answer your question now?
|
H: Integrate $\int \frac{\tan ^3\left(\ln \left(x\right)\right)}{x}dx$
Integrate:
$$\int \frac{\tan ^3\left(\ln \left(x\right)\right)}{x}dx$$
My attempt:
$$u=\ln(x) \implies \int \tan ^3\left(u\right)du=\int \tan ^2\left(u\right)\tan \left(u\right)du=\int \left(-1+\sec ^2\left(u\right)\right)\tan \left(u\right)du$$
I'm having trouble after this section. My initial idea was to use $u$-subsituition again. So,
$$v=\sec(u) \implies \int \frac{-1+v^2}{v}dv=\int \:-\frac{1}{v}+vdv=-\ln \left|v\right|+\frac{v^2}{2}=-\ln \left|\sec \left(\ln \left(x\right)\right)\right|+\frac{\sec ^2\left(\ln \left(x\right)\right)}{2} + c, c \in \mathbb{R}$$
I'm pretty sure this is correct. I'm curious if there is any other way to solve this (hopefully in an easier manner?)?
AI: Note that$$\int\tan^3x\,\mathrm dx=\int\frac{\sin^3x}{\cos^3 x}\,\mathrm dx=\int\frac{\sin(x)\bigl(1-\cos^2(x)\bigr)}{\cos^3x}\,\mathrm dx.$$So, if you do $\cos x=t$ and $-\sin x\,\mathrm dx=\mathrm dt$, you get$$-\int\frac{1-t^2}{t^3}\,\mathrm dt.$$
|
H: Solution verification: Picking stones consecutively puzzle
Puzzle: Two players consecutively pick 1, 2, 3 or 4 stones from a stack of 101 stones. The player who picks the last stone wins. Suppose that both players play perfectly; does the first or second player win? What about the situation where the one who picks the last stone loses?
My answer: We first look at the situation where the person that picks the last stone loses. Let player one choose an arbitrary amount of stones and make sure as player two that you pick an amount such that the sum of the picked stones that round is 5 (note that this is always possible, no matter what player 1 chooses). After repeating this process 20 rounds, 100 stones are picked and it's player one's turn. Player 1 can now only choose 1 and pick the last stone; conclusion: player 2 wins.
I am not quite sure what strategy would work for the case that the player who picks the last stone wins. Any help would be appreciated.
AI: Almost the same strategy works: suppose that player $1$'s first move is to pick one stone. Then there are exactly $100$ stones left, and at this point player one can make use of the strategy that you described above. Since $100$ is divisible by $5$, player $1$ is sure to win the game.
|
H: dot product of direction cosine vector
I have been reading a scientific paper and they defined Di as the direction cosine vector, [ cos(latitude of point i)cos(longitude of point i) , cos(latitude of point i)sin(longitude of point i), sin(latitude of point i)]'. The same was done for another coordinate j. Next the dot product of those two vectors (Di and Dj) was taken but I'm not sure what this actually represents. I would appreciate any help on what the result of the dot product is.
This is the link of the paper, the part I am referring to is 2.2.1 Spatial weights matrix: W page 21, equation 16 contains di and dj from my question. I am currently trying to use this method but I am unable to obtain 's'. I have used equation 17 to obtain theta (b) but i have used theta as the Angular distance equation from this link but I'm not sure if i should be using the di dot dj rather than the angular distance equation.
Sorry if this is too lengthy and unclear
AI: The direction cosine vectors are normalized to unity, $|\vec D_i|=|\vec D_j|=1$. Then using the formula for dot product: $$\vec D_i\cdot\vec D_j=|\vec D_i||\vec D_j|\cos\alpha_{ij}=\cos\alpha_{ij}$$
Here $\alpha_{ij}$ is the angle between the two vectors.
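If a small numerical example helps (Python sketch; the two latitude/longitude pairs are made up), this builds the two direction cosine vectors and recovers the angle $\alpha_{ij}$, i.e. the angular (great-circle) distance, from the dot product:

import math

def direction_cosines(lat_deg, lon_deg):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

D_i = direction_cosines(52.0, 4.0)    # hypothetical point i
D_j = direction_cosines(48.0, 11.0)   # hypothetical point j

dot = sum(a * b for a, b in zip(D_i, D_j))  # equals cos(alpha_ij)
alpha = math.acos(dot)                      # angular distance in radians
print(dot, math.degrees(alpha))

So taking the arccosine of the dot product gives the angular distance between the two points.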
|
H: every trail from start node having the same end node
given a directed graph (can have cycles) with:
an arbitrary number of nodes
an arbitrary number of edges
that satisfies the condition that there is (at least) one trail (i.e. a walk where no edge is repeated) that visits all nodes.
Would this be a true statement:
Every trail (again, can not repeat edges) from a given starting node will have the same ending node. This could be either an open walk (start and end nodes differ) or a closed walk (start and end nodes are the same). However, the walk must satisfy the condition that it cannot end until there are no available edges to continue walking on.
Note that even though the same edge cannot be walked more than once, nodes may be visited multiple times. I know this may not satisfy the definition of "trail", but it fits the problem I have.
Examples:
trivial case: the graph A->B, B->A. Given A as a start node, the end node is always A.
slightly more complex example:
Given A as start node, C is the end node.
Is there a counterexample where there are two trails (open or closed) that end in different nodes? Or, conversely, is there a proof/name for this graph property?
Disclaimer: I'm not very experienced in math or graph theory, this is a problem that I encountered while programming.
AI: Remove the edge from $B$ to $A$ in your second graph. Then from $A$ you have trails
$$A\to B\to C\to B$$
and
$$A\to C\to B\to C\;.$$
I am assuming here that the edge $B\to C$ is considered different from the edge $C\to B$, but if that is not the case, this is still a counterexample.
|
H: Why is the answer of $\frac{ab}{a+b}$ always smaller than the smallest number substituted?
If $\frac{ab} {a+b} = y$, where $a$ and $b$ are greater than zero, why is $y$ always smaller than the smallest number substituted?
Say $a=2$ , $b=4$ (smallest number here is $2$. Thus, the answer would be smaller than $2$)
$\frac{2\cdot4}{ 2+4} = 1.\bar 3$
I got this equation from physics. It's for getting total resistance, and our teacher told us not to waste time on it in MCQs because the answer will always be smaller than the smallest number. But I can't explain to myself in words or by intuition why this happens. Any help??
AI: Another way to think about it: Assuming $0<a\leq b$, divide the top and bottom of your fraction by $b$ to get
$$\frac{a}{\frac{a}{b}+1}.$$
$a$ is the smaller number and you're dividing it by a number greater than one, so the result is smaller than $a$.
|
H: calculate $ \intop_{a}^{b}\left(x-a\right)^{n}\left(x-b\right)^{n}dx $
I need to calculate $ \intop_{a}^{b}\left(x-a\right)^{n}\left(x-b\right)^{n}dx $ this.
Now, this exercise came with hints. I have followed the hints and proved :
$ \intop_{-1}^{1}\left(1-x^{2}\right)^{n}dx=\prod_{k=2}^{n}\frac{2k}{2k+1}\cdot\frac{8}{3} $
And now the last hint is to use the result I got from the last integral, together with a linear substitution. Still, I couldn't figure out how to do it.
Thanks in advance.
AI: Substitute $x=\frac12[(b-a)y+(b+a)]$ to get
$$ \intop_{a}^{b}\left(x-a\right)^{n}\left(x-b\right)^{n}dx
= (-1)^n\left( \frac{b-a}2\right)^{2n+1}I_n
$$
where, by integration by parts,
$$I_n=\int_{-1}^1(1-y^2)^ndy=\frac{2n}{2n+1}I_{n-1},\>\>\> I_0=2
$$
|
H: Show that $E = [0,1]$ is not open.
Here is my proof. Not sure if I'm doing it correctly. We recall that $E$ is open provided every point of $E$ is an interior point. We then have $\forall p \in E$, there exists a neighborhood of $p$, $N$, such that $N \subset E$. Suppose that $E$ is open and consider the point $p=1$ in $E$. Since $E$ is open, then there exists a neighborhood, $N_r(1) = \{q : \ |q-1| < r\}$, such that $N_r(1) \subset E$. Here the topological space under consideration is $R$ with the usual real line topology.
Now, since $N_r(1) \subset E$ and $E$ is open, then each $p' \in N_r(1)$ is an interior point, that is, there exists a neighborhood, $N_{r'}(p') \subset E$. We will show that there exists a neighborhood, $N \subset N_r(1)$ for which $N \not\subset E.$ We consider three cases.
If $r \in (0,1)$, then we can take the neighborhood, $N_{r^2}(1+r^2)$. For example, if $r=\frac{1}{2}$, then
$$N_{\frac{1}{2}}(1) = \{q: \ |q-1| < \frac{1}{2}\} =\{q : \ q\in (\frac{1}{2}, \frac{3}{2})\},$$
$$N_{\frac{1}{4}}(5/4) = \{q: \ |q-\frac{5}{4}| < \frac{1}{4}\} =\{q : \ q\in (1, \frac{3}{2})\}.$$
Notice that $N_{\frac{1}{4}}(5/4) \subset N_{\frac{1}{2}}(1)$, but $N_{\frac{1}{4}}(5/4) \not\subset E$. On the other hand, if $r=1$, then we can take $N_{\frac{1}{4}}(\frac{3}{2})$, where
$$N_{\frac{1}{4}}(\frac{3}{2}) = \{q: \ |q-\frac{3}{2}| < \frac{1}{4}\} =\{q : \ q\in (\frac{5}{4}, \frac{7}{4})\}.$$
Clearly, $$N_{\frac{1}{4}}(\frac{3}{2}) \subset N_1(1) = \{q: |q-1| < 1\} =\{q : \ q\in (0, 2)\},$$ but again $N_{\frac{1}{4}}(\frac{3}{2}) \not\subset E$. Finally, for $r > 1$, We can take $N_{\frac{1}{r^2}}(1+\frac{1}{r^2})$. For example, if $r=2$, then $N_2(1) = \{q : |q-1|<2\} = \{q: q \in (-1,3)\}$ and $N_{\frac{1}{4}}(\frac{5}{4}) \subset N_2(1)$. Again, we have that $N_{\frac{1}{4}}(\frac{5}{4}) \not\subset E$. Does this argument shows that $E$ is not open?
AI: This is way too complicated. Let $N_r(1)$ be any basic nbhd of $1$. Then $1+\frac{r}2\in N_r(1)\setminus E$, so $N_r(1)\nsubseteq E$. Thus, $1$ is a point of $E$ that is not an interior point of $E$, so $E$ is not open.
|
H: prove or disprove: if $\sum_{n=0}^\infty a_n$ converges, then $\sum_{n=0}^\infty (-1)^n a_n^2$ converges
if $\sum_{n=0}^\infty a_n$ converges, then $\sum_{n=0}^\infty (-1)^n a_n^2$ converges
I think I'm supposed to disprove it, but I can't think of anything.
None of the usual approaches disproved it.
I know that because $\sum_{n=0}^\infty a_n$ converges, $a_n\to0$, and that means that $a_n^2\to0$. Also, assuming that the sequence does not contain $i$, $a_n^2$ must be positive. The only thing preventing me from using Leibniz's Test is that I don't know for sure that $a_n^2$ is monotonically decreasing.
Things I tried to disprove it:
Alternating sequences, with or without $i$
Engineering a sequence that is made from even/odd sub-sequences
$a_n$ as a series itself
AI: $a_{2n}=(-1)^n/(\sqrt n +1), a_{2n+1}=0$ gives a counterexample, as $\sum a_n$ converges but $\sum (-1)^na_n^2$ diverges.
|
H: Prove that $13\sqrt{2}$ is irrational.
I am currently a beginner at proofs and I am having trouble proving this problem...
I know that the square root of $2$ is irrational because the square root of $2$ can be expressed as $\frac{p}{q}$ and once both sides are squared it is true that both $p$ and $q$ are even which is a contradiction to the assumption that they have no common factors.
I am having trouble proving that $13$ times the square root of $2$ is irrational, though, and any help would be greatly appreciated! Since we are not dealing with the square root of $13$, I do not know how to start, since we cannot set it equal to $\frac{p}{q}$.
Thank you in advance!
AI: If $13\sqrt{2}$ were rational then it would be of the form $a/b$ for $a,b$ integers ($b\neq 0$). But then $\sqrt{2}=(a/b)/13=a/(13b)$ would be rational.
|
H: Is $S^{-1}R$ a free $R$-module?
As in the title, let $R$ be a (commutative unitary) ring and $S\subset R$ a multiplicatively closed subset. Then since there is a canonical map $\tau:R \to S^{-1}R$, which in general need not be injective or surjective, $S^{-1}R$ is a $R$-module by restriction of scalars. Is it free?
AI: Not necessarily.
For example, $\Bbb Q$ is the localization of $\Bbb Z$ with respect to the multiplicative set $\Bbb Z\backslash\{0\}$, but $\Bbb Q$ is not free as $\Bbb Z$-module.
To see this, note that in a (nonzero) free $\Bbb Z$-module, there must exist elements that are not divisible by $2$ (e.g. a basis element). However every element in $\Bbb Q$ is divisible by $2$.
|
H: How to study a function (e.g $\frac{e^x-e^{-x}}{2}$) whose $\{1,2\}$-th derivatives have complex solutions?
I am wondering how one can study a function whose roots exist only in the complex plane $\mathbb{C}$.
A similar question has been asked here, but it is quite different from this one.
For example's sake let:
$$f(x) = \frac{e^x-e^{-x}}{2} $$
Now let's take a look at its first derivative:
$$f'(x)=\frac{e^x+e^{-x}}{2} =0 \quad (1)$$
Apparently there are only complex roots for $(1)$: $\:x=\frac{1}{2}i\left(2\pi n+\pi \right), n\in \mathbb{Z}$
(Similarly for the second derivative, in order to study the curvature.)
The graph suggests that the curve $f(x)$ has:
One saddle point at $x_0 = 0$.
The function is increasing $\forall x \in D(f)$ (except, of course, the saddle point)
It has an inflection point at $x_0 = 0$ and it is concave down and up respectively.
How can these facts be stated algebraically?
AI: The first derivative is used to discuss monotonicity of the function and find stationary points (local extrema or saddle point). What matters is the sign of the first derivative (and the zeroes).
The second derivative is used to qualify the extrema and detect the inflection points. What matters is the sign of the second derivative (and the zeroes).
When you study the function on the real axis, what happens in the complex plane is completely irrelevant. In fact, it is not deemed to exist.
E.g.,
$$\frac{x^3}3+x$$ has no stationary point because $$x^2+1$$ remains strictly positive.
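For the specific function in the question this works out as follows: $$f'(x)=\frac{e^x+e^{-x}}{2}\ge 1>0\quad\text{for all }x\in\mathbb R,$$ so $f$ is strictly increasing on all of $\mathbb R$ and has no stationary points (the real equation $f'(x)=0$ has no solution), while $$f''(x)=\frac{e^x-e^{-x}}{2}$$ is negative for $x<0$ and positive for $x>0$, so $f$ is concave down on $(-\infty,0)$, concave up on $(0,\infty)$, with an inflection point at $x_0=0$.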
|
H: Solve for $x$ with exponents
I am trying to solve an equation to find a value of $x$ like this:
$(1.08107)^{98/252}=(1.08804+x)^{23/252}(1.08804+2x)^{37/252}(1.08804+3x)^{38/252}$
That is pretty straightforward using Excel Solver, but I am not quite grasping how to do it by hand.
The result is $-0.00323$.
Thanks in advance.
AI: We can use a root finding algorithm, like Newton's Method.
Our function is given by
$$f(x) = 1.03078 -(x+1.08804)^{23/252} (2 x+1.08804)^{37/252} (3 x+1.08804)^{19/126}$$
The Newton iteration is given by
$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \dfrac{
1.03078 -(x+1.08804)^{23/252} (2 x+1.08804)^{37/252} (3 x+1.08804)^{19/126} }{\left(-\dfrac{23 (2 x+1.08804)^{37/252} (3 x+1.08804)^{19/126}}{252 (x+1.08804)^{229/252}}-\dfrac{37 (x+1.08804)^{23/252} (3 x+1.08804)^{19/126}}{126 (2 x+1.08804)^{215/252}}-\dfrac{19 (x+1.08804)^{23/252} (2 x+1.08804)^{37/252}}{42 (3 x+1.08804)^{107/126}}\right)}$
Starting at $x_0 = 1$, we arrive at
$$x \approx -0.003235904357553754$$
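For what it's worth, here is a minimal numerical sketch (Python) of the same idea, using a finite-difference derivative instead of the long closed-form derivative above, and the exact right-hand side $1.08107^{98/252}$ rather than the rounded constant $1.03078$:

target = 1.08107 ** (98 / 252)         # exact right-hand side

def f(x):
    return ((1.08804 + x) ** (23 / 252)
            * (1.08804 + 2 * x) ** (37 / 252)
            * (1.08804 + 3 * x) ** (38 / 252)) - target

x, h = 0.0, 1e-8                       # start near 0, where all three factors are positive
for _ in range(50):
    fx = f(x)
    dfx = (f(x + h) - fx) / h          # forward-difference approximation of f'(x)
    x_next = x - fx / dfx
    if abs(x_next - x) < 1e-12:
        break
    x = x_next

print(x)                               # approximately -0.0032359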
|
H: Solving the functional equation $f(x)f(y)=c\,f(\sqrt{x^{2}+y^{2}})$
Find all probability density function $f:\mathbb{R}\to\mathbb{R}$ such that there exists a constant $c\in\mathbb{R}$ for which $$f(x)f(y)=c\,f(\sqrt{x^{2}+y^{2}})\text{ for all }x,y\in\mathbb{R}\,.$$
The following is part of a derivation of the Gaussian distribution, from Jaynes' book "Probability theory - The logic of science" (p. 201)
Where the $f$ in question is a probability density function
The fact that $\log(\frac{f(x)}{f(0)}) = ax^2$ is a solution is obvious, but why is it the only possible one?
AI: Let $f:\mathbb{R}\to\mathbb{R}$ be a Lebesgue-measurable function such that there exists $g:\mathbb{R}_{\geq 0}\to\mathbb{R}$ for which
$$f(x)\,f(y)=g\left(\sqrt{x^2+y^2}\right)\tag{*}$$
for all $x,y\in\mathbb{R}$. Then, note that by plugging in $y:=0$, we have
$$f(x)\,f(0)=g\big(|x|\big)\text{ for every }x\in\mathbb{R}\,.\tag{#}$$
If $f(0)=0$, then $g\equiv 0$, whence $f(x)\,f(y)=0$ for every $x,y\in\mathbb{R}$. This shows that $f\equiv 0$ (which is not a probability density function).
We now assume that $c:=f(0)$ is nonzero. From (*) and (#),
$$f(x)\,f(y)=c\,f\left(\sqrt{x^2+y^2}\right)\text{ for all }x,y\in\mathbb{R}\,.$$
Note that, by (#), $f$ is an even function. Hence, it suffices to solve for $f(x)$ when $x\geq 0$. Define $h:\mathbb{R}_{\geq 0}\to\mathbb{R}$ to be $h(t):=\dfrac{1}{c}\,f(\sqrt{t})$ for all $t\geq 0$. Then,
$$h(s)\,h(t)=h(s+t)\text{ for all }s,t\geq 0\,.$$
That is,
$$h(t)=h\left(\frac{t}{2}+\frac{t}{2}\right)=\left(h\left(\frac{t}{2}\right)^{\vphantom{a^a}}\right)^2\geq 0\,.$$
If $h(\tau)=0$ has a solution $\tau\geq 0$, then note that
$$h(s+\tau)=h(s)\,h(\tau)=0\text{ for all }s\geq 0\,,$$
whence $h(t)=0$ for all $t\geq \tau$.
If $h(t)=0$ for all $t\geq 0$, then $f(x)=0$ for all $x\geq 0$. This contradicts the assumption that $f(0)\neq 0$. If there exists $t\geq 0$ such that $h(t)\neq 0$, then let $$\sigma:=\inf\big\{t\geq 0\,\big|\,h(t)=0\big\}\,.$$
Note that $\sigma\leq \tau$. If $\sigma>0$, then note that
$$0=h(2\sigma)=\big(h(\sigma)\big)^2\,,$$
whence $h(\sigma)=0$. That is,
$$0=h(\sigma)=\left(h\left(\frac{\sigma}{2}\right)^{\vphantom{a^a}}\right)^2\,,$$
implying $h\left(\dfrac{\sigma}{2}\right)=0$. By the same argument as the previous paragraph, $h(t)=0$ for all $t\geq \dfrac{\sigma}{2}$. This contradicts the definition of $\sigma$. Thus, $\sigma=0$. Now,
$$h(0)=h(0+0)=\big(h(0)\big)^2$$
implies that $h(0)=0$ or $h(0)=1$. Since $\sigma=0$, we get $h(0)=1$, leading to a solution
$$h(t)=\left\{\begin{array}{ll}1&\text{if }t=0\,,\\0&\text{if }t>0\,.\end{array}\right.$$
That is,
$$f(x)=\left\{\begin{array}{ll}c&\text{if }x=0\,,\\0&\text{if }x\neq 0\,.\end{array}\right.$$
(This makes $f$ a measurable function, but this solution is not a probability density function.)
From now on, we assume that $h(t)>0$ for all $t\geq 0$. Define
$$\eta(t):=\ln\big(h(t)\big)\text{ for }t\ge 0\,.$$
Then,
$$\eta(s+t)=\eta(s)+\eta(t)\text{ for all }s,t\geq 0\,.\tag{@}$$
This is Cauchy's functional equation. Since $f$ is measurable, $h$ is also measurable, and so is $\eta$. Therefore, there exists a constant $a\in\mathbb{R}$ for which
$$\eta(t)=at\text{ for each }t\geq 0\,.$$
That is,
$$h(t)=\exp(at)\text{ for each }t\geq 0\,,$$
whence
$$f(x)=c\,\exp(ax^2)\text{ for all }x\in\mathbb{R}\,.$$
Now, if $f$ is a probability density function, $a=-b$ for some $b>0$, and $c=\sqrt{\dfrac{b}{\pi}}$.
Remark. There are solutions $f$ that are not Lebesgue measurable. All such solutions are given by $$f(x):=c\,\exp\big(\eta(x^2)\big)$$ for all $x\in\mathbb{R}$, where $\eta:\mathbb{R}_{\geq 0}\to\mathbb{R}$ is any solution to (@) which is not Lebesgue measurable. To find such $\eta$, you need the Axiom of Choice.
|
H: How to solve for $x$ given $x⇔A$ in a truth table?
How can I solve for $x$ in terms of A, B and C given the truth table below?
$$\begin{array}{ccc|c}
A & B & C & x ⇔ A\\
\hline
0 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
0 & 1 & 1 & 1\\
1 & 0 & 0 & 0\\
1 & 0 & 1 & 1\\
1 & 1 & 0 & 0\\
1 & 1 & 1 & 0
\end{array}$$
The main way I tried to solve this was by simplifying the truth table into its ANF and then seeing if I could move things around.
So from $(A \land \lnot B \land C) \lor (\lnot A \land B \land C)$ to $(A \land C) \oplus (B \land C)$ but then I got stuck because I didn't know how to get A onto its own in the formula.
The way that I eventually managed to solve it was intuitively but it took forever and it was a lot of guesswork:
$$
(((\lnot A \lor \lnot B) \land (A \lor B) \land \lnot C) ⇔ B) ⇔ A
$$
$$
\therefore x = (((\lnot A \lor \lnot B) \land (A \lor B) \land \lnot C) ⇔ B)
$$
If this question doesn't obey some stylistic convention, I'm happy to edit it. I'm sure it's not professional but I am a hobbyist not a mathematician.
AI: By comparing corresponding truth values for $A$ and $(x{\iff}A)$, you can infer the corresponding truth values for $x$:
If $(x{\iff}A)=1$, then $x=A$, else $x=A'$.
Hence we can extend the truth table to include a column for $x$:
$$\begin{array}{ccc|c|c}
A & B & C & x{\iff}A&x\\
\hline
0 & 0 & 0 & 0&1\\
0 & 0 & 1 & 0&1\\
0 & 1 & 0 & 0&1\\
0 & 1 & 1 & 1&0\\
1 & 0 & 0 & 0&0\\
1 & 0 & 1 & 1&1\\
1 & 1 & 0 & 0&0\\
1 & 1 & 1 & 0&0\\
\hline
\end{array}$$
which allows us to write
$$
x=A'B'C'+A'B'C+A'BC'+AB'C
$$
with one term for each of the $4$ rows for which $x=1$.
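A quick brute-force check (Python sketch) that this expression for $x$ reproduces the required column for $x\iff A$:

from itertools import product

# Target column of (x <=> A) from the given truth table, indexed by (A, B, C).
target = {(0,0,0): 0, (0,0,1): 0, (0,1,0): 0, (0,1,1): 1,
          (1,0,0): 0, (1,0,1): 1, (1,1,0): 0, (1,1,1): 0}

for A, B, C in product((0, 1), repeat=3):
    x = ((not A and not B and not C) or (not A and not B and C) or
         (not A and B and not C) or (A and not B and C))   # A'B'C' + A'B'C + A'BC' + AB'C
    assert int(x == bool(A)) == target[(A, B, C)]

print("expression matches the table")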
|
H: How to tell if a system of ordinary differential equations is homogeneous?
Suppose that I have the following set of 2 equations:
$\frac{d(x(t))}{dt} = 5tx(t) + 2y(t)$
$\frac{d(y(t))}{dt} = 5ty(t)+2x(t)$
I have read that this is a system of "Linear first order non-constant coefficient homogeneous" differential equations. I understand everything except the homogeneous part.
I understand that if you have something like:
$M(x,y)dx + N(x,y)dy = 0$ then this equation is homogeneous if $M$ and $N$ are both homogeneous functions of the same degree but I don't really know how to apply this definition to the set of equations I wrote above.
So, in general, how to tell if a system of ordinary differential equations is homogeneous?
AI: In a system of linear differential equations, the only terms allowed are the dependent variables or their derivatives, multiplied by functions of the independent variable, and functions of the independent variable (without any dependent variables). The system is homogeneous if there are no terms without dependent variables or their derivatives. That's the case in your system. If there were other terms such as $1$ or $\sin(t)$ that don't involve a dependent variable $x$ or $y$ or its derivative, the system would not be homogeneous.
|
H: True or false: Suppose $p$ and $q$ are propositions. Then $\lnot(p\implies q) \equiv p \land q.$
I am not very familiar with truth tables, but I think that the $\lnot$ should get distributed among both $p$ and $q$, making the problem $\lnot p \implies \lnot q$, which is not the same as $p\land q$, making the statement false.
I know that $\lnot q \implies \lnot p$ is the contrapositive of $p \implies q$ which is also equivalent to $\lnot p$ or $q$, and if we switch the $p$ and $q$ it will still make it false.
If anyone can confirm my answer or give more of an explanation that would be great as I am very lost!
Thank you to all of the help in advance, it is very appreciated.
AI: It is false.
Consider when both $p$ and $q$ are true. Then the RHS is true, whereas, since $p\implies q$ is true, the LHS is false.
|
H: Sum of Continued Fractions
Let $x$ be a positive integer.
Consider the following sum (maybe there is a better notation with continued fractions, but I am not aware of it):
$\frac{1}{x} + \frac{1}{x- \frac{1}{x}} + \frac{1}{x-\frac{1}{x}-\frac{1}{x-\frac{1}{x}}} +\frac{1}{x-\frac{1}{x}-\frac{1}{x-\frac{1}{x}} - \frac{1}{x- \frac{1}{x} - \frac{1}{x-\frac{1}{x}}}} + \dots $
My question is: How many summands do I need such that the sum is at least $x$? $x^2$ is a trivial upper bound here, because every summand is $\geq \frac{1}{x}$. Is there a way to get a better bound here?
AI: The number of steps needed, $s(x)$, is the sequence A140949 in OEIS. Not sure about the background. Also there is not much information there. Perhaps the author Gil Broussard can tell you more (assuming that he is not you).
However it does look like there is some pattern going on here.
The value of $s(x)$ is very close to $x^2 / 2$. In fact, the sequence $(2s(x) - x^2)_{x \geq 0}$ looks like this:
$$0, 1, 2, 3, 4, 3, 4, 3, 4, 5, 4, 5, 4, 5, 4, 5, 4, 5, 4, 5, 4, 5, 4, 5, 4, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7, 6, 7$$
This looks quite interesting and is probably a logarithmic growth. Thus $x^2 / 2$ is a fairly good approximation of $s(x)$. Not sure about the theory behind.
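For anyone who wants to reproduce these numbers, a minimal sketch (Python, plain floating point, which is fine for small $x$ but could in principle give off-by-one results for large $x$):

def s(x):
    # number of summands needed before the nested sum reaches x
    total, n = 0.0, 0
    while total < x:
        total += 1.0 / (x - total)   # the next summand is 1 / (x - partial sum so far)
        n += 1
    return n

print([s(x) for x in range(1, 11)])
print([2 * s(x) - x * x for x in range(1, 11)])   # compare with the sequence above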
Also not sure what happens for non-integer real numbers $x$.
I will update this answer if I am motivated enough to look deeper into the problem. That is, if nobody gives a better reference.
It would also help if you could tell us your motivation, e.g. where does this problem come from.
|
H: Find surface area of part of cylinder.
I ran into trouble when trying to find the surface area of the part of the cylinder $x^2+z^2=4$ bounded by another cylinder $x^2+y^2=4$. I simply used the traditional double-integral approach, changing into polar coordinates to calculate
$$
\iint\limits_{x^2+y^2=4}
\sqrt{\left(\frac{\partial z}{\partial x}\right)^2+
\left(\frac{\partial z}{\partial y}\right)^2+1} \,dx\,dy
=
\int_0^{2\pi}\int_{0}^{2} \frac{2r}{\sqrt{4-(r\cos\theta)^2}} \,dr\,d\theta
$$ and eventually this integral diverges. Could anyone tell me where I went wrong? Thanks a lot.
AI: Per my comment, the integral simplifies down to
$$\int_0^{2\pi} \frac{4}{1+|\sin\theta|}\:d\theta$$
after doing the $r$ integral. The easiest way to compute this integral then would be to exploit symmetries
$$ = \int_0^\pi \frac{8}{1+\sin\theta}\:d\theta = \int_0^\pi \frac{8}{\cos^2\left(\frac{\theta}{2}\right)+\sin^2\left(\frac{\theta}{2}\right)+2\cos\left(\frac{\theta}{2}\right)\sin\left(\frac{\theta}{2}\right)}\:d\theta$$
$$= \int_0^\pi \frac{8}{\left[\cos\left(\frac{\theta}{2}\right)+\sin\left(\frac{\theta}{2}\right)\right]^2}\:d\theta = 16\int_0^\pi \frac{\frac{1}{2}\sec^2\left(\frac{\theta}{2}\right)}{\left[1+\tan\left(\frac{\theta}{2}\right)\right]^2}\:d\theta$$
$$ = \frac{-16}{1+\tan\left(\frac{\theta}{2}\right)}\Biggr|_0^{\pi^-} = 16$$
And then the final answer for the problem would be $32$, doubled to account for both sides of the cylinder.
|
H: Solving $n(4n+3)=2^m-1$ in positive integers
Find all positive integers $m$ and $n$ such that $$n(4n+3)=2^m-1\,.$$
This is an interesting equation which was sent to me by a friend (probably found online). I have been scratching my head about whether or not this has a unique solution in positive integers which I have found to be $(n,m)=(1,3)$. My first approach was trying to work some mod casework but it hasn't been really helpful and the only thing I found which I guess was worth, is that $$n \equiv 1\pmod{ 8},\forall \ m \geq 3\,.$$ I have noticed that $2^m-1$ for $m=3$ is prime and so I conjecture that whenever $2^m-1$ is not a prime then the equation will not have a solution in positive integers but I am stuck on proving this last statement. Any hints will be appreciated.
AI: Rewrite it as a quadratic equation in $n$: $4n^2+3n -2^m+1 = 0\implies \triangle = 3^2-4(4)(1-2^m)= 9-16+16\cdot 2^m=2^{m+4}-7=k^2\implies k^2+7=2^{m+4} $. This problem has appeared in an article by author J.Cremona of Nottingham UK, and in that article, they proved that the only possible solutions are: $m + 4 = 3,4,5,7,15$. I leave this for you to finish.
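A quick brute-force sketch (Python) of this reduction, searching small $m$ for square discriminants that give a positive integer $n$; for $m<100$ it only finds $(n,m)=(1,3)$, consistent with the solution found in the question:

from math import isqrt

for m in range(1, 100):
    disc = 2 ** (m + 4) - 7                    # discriminant of 4n^2 + 3n - (2^m - 1) = 0
    k = isqrt(disc)
    if k * k == disc and (k - 3) % 8 == 0 and (k - 3) // 8 > 0:
        print("n =", (k - 3) // 8, ", m =", m)   # n = (k - 3) / 8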
Reference:
1) https://www.researchgate.net/publication/266524000_On_the_Diophantine_equation_x_2_7y_m
|
H: Lee's Intro to Topology, generating the same topology
Suppose $M$ is a set and $d, d^\prime$ are two different metrics on
$M$. Prove that $d$, and $d'$ generate the same topology on $M$ if and
only if the following condition is satisfied: for every $x \in M$ and
every $r > 0$, there exists positive real numbers $r_1, r_2$ such that
$B_{r_1}^{(d^\prime)}(x) \subset B_r^{(d)}(x)$ and $B_{r_2}^{(d)}(x)
\subset B_r^{(d^\prime)}(x)$.
My attempt:
Suppose $d$ and $d'$ generate the same topology on $M$. The properties of a topology are that $\emptyset$ and $M$ are open subsets of $M$. An open subset means there exists an open ball $B_r^{(d)}(x) = \{y\in M : d(x,y) < r\}$ around each $x$.
The issue for me is choosing these $r_1, r_2$. I'm fairly new to topology and metric spaces so forgive me if this seems novice, but it is difficult to wrap my head around this and a lot of posts are using terms I'm unfamiliar with.
AI: I’ll do one direction and give you a chance to try your hand at the other direction once you’ve seen the kinds of reasoning involved. Suppose that for each $x\in M$ and $r>0$ there are $r_1,r_2>0$ such that $B_{r_1}^{(d')}(x)\subseteq B_r^{(d)}(x)$ and $B_{r_2}^{(d)}(x)\subseteq B_r^{(d')}(x)$; we’ll show that $d$ and $d'$ generate the same topology on $M$.
Let $\tau$ be the topology on $M$ generated by $d$ and $\tau'$ the topology generated by $d'$; we’ll show that $\tau=\tau'$ by showing that $\tau\subseteq\tau'$ and $\tau'\subseteq\tau$, so let $U\in\tau$. Because $\tau$ is generated by $d$, for each $x\in U$ there is an $r(x)>0$ such that $B_{r(x)}^{d}(x)\subseteq U$. By hypothesis for each $x\in U$ there is an $r_1(x)>0$ such that $B_{r_1(x)}^{(d')}(x)\subseteq B_{r(x)}^{(d)}(x)$. But then
$$U\subseteq\bigcup_{x\in U}B_{r_1(x)}^{(d')}(x)\subseteq\bigcup_{x\in U}B_{r(x)}^{(d)}(x)\subseteq U\;,$$
so $U=\bigcup_{x\in U}B_{r_1(x)}^{(d')}(x)$, which by definition is in $\tau'$. $U$ was an arbitrary member of $\tau$, so we’ve shown that $\tau\subseteq\tau'$. The proof that $\tau'\subseteq\tau$ is almost identical: start with an arbitrary $U\in\tau'$ and follow the same path, simply reversing the rôles of $d$ and $d'$ and this time using the fact that for each $x\in M$ and $r>0$ there is an $r_2>0$ such that $B_{r_2}^{(d)}(x)\subseteq B_r^{(d')}(x)$.
For the other direction you’ll assume that $\tau=\tau'$ and show that for each $x\in M$ and $r>0$ there are $r_1,r_2>0$ such that $B_{r_1}^{(d')}(x)\subseteq B_r^{(d)}(x)$ and $B_{r_2}^{(d)}(x)\subseteq B_r^{(d')}(x)$. Use the fact that by definition $B_r^{(d)}(x)\in\tau$, so by hypothesis $B_r^{(d)}(x)$ is also in $\tau'$. Similarly, $B_r^{(d')}(x)\in\tau'$ by definition, so by hypothesis $B_r^{(d')}(x)\in\tau$ as well.
|
H: Why do some partial fractions have x or a variable in the numerator and others don't?
Why do rational expressions like $\left(\frac{1}{(x-2)^3}\right)$ not have $x$ in the numerators of their partial fractions, while a rational expression like $\left(\frac{1}{(x^2+2x+3)^2}\right)$ does have $x$ in the numerator of its partial fraction?
AI: You can integrate $\frac{1}{(x-2)^k}$ by itself, so it is not necessary to break it into the form $\frac{A}{x-2} + \frac{Bx+C}{(x-2)^2} + \dots.$ On the other hand, $\frac{1}{(x^2+2x+3)^k}$ does not have a simple antiderivative, so we must decompose it into the form $\frac{Ax+B}{x^2+2x+3} + \frac{Cx+D}{(x^2+2x+3)^2} + \dots$
The most general explanation is that $x^2+2x+3$ has complex roots while $x-2$ has real roots. If you allow complex numbers, you can break $\frac{1}{(x^2+2x+3)^k}$ into partial fractions where the numerator is always a constant instead of a term of the form $Ax+B.$
|
H: All numbers that are less than four units from zero
Looking at a simple algebra question which is to graph "All numbers that are less than four units from zero."
The knee jerk response is to draw a number line with an open circle at -4 and a line to the left pointing at negative infinity. A second interpretation could be to highlight all numbers that are less than those numbers that are four units from zero. In which case that would be the entire number line.
All numbers that are more than four units from zero would clearly be x < -4 and x > 4.
Is there a mathematical standard on how the "all units less than" language should be interpreted?
AI: You've parsed the input as:
All numbers that are less than (four units from zero)
but it is probably intended as:
All numbers that are (less than four units) from zero
or in other words, all numbers whose distance from zero is less than four units.
|
H: Solution verification: sum at least $4N/7$ times odd
Problem: Let $a_{j}$,$b_{j}$,$c_{j}$ be whole numbers for $ 1 \leq j \leq N$. Suppose that for each $j$ at least one of $a_{j}$,$b_{j}$,$c_{j}$ is odd. Show that there are whole numbers $r$,$s$ and $t$ such that the sum $$r\cdot a_{j} + s\cdot b_{j} + t\cdot c_{j}$$ (for $ 1 \leq j \leq N$) is in at least $4N/7$ cases odd.
My attempt: Choose $r=s=t=1$. To get an odd sum with 3 numbers for which one is odd for sure you must have that: 2 are even and the other odd or all three are odd. The total amount of possible even-odd combinations for 3 numbers is equal to ${3 \choose 1} +{3 \choose 2} +{3 \choose 3}=7$ (1 odd v 2 odd v 3 odd), so with $r=s=t=1$ we see that in $4/7$ cases, the sum is odd for a fixed $j$. In total we have thus that in $4N/7$ circumstances the sum is odd.
My doubts: So, while reading the question I thought this would seem like a classic pigeonhole principle question and at first I had the intention to go that route, but when I was trying some random cases I came across this with $r=s=t=1$ and with this all the restrictions hold. What do you guys think about my solution?
AI: Let $$u_1 = (0,0,1)$$ $$u_2 = (0,1,0)$$ $$u_3 = (1,0,0)$$ $$u_4 = (0,1,1)$$ $$u_5 = (1,0,1)$$ $$u_6 = (1,1,0)$$ $$u_7 = (1,1,1)$$
and let for each $i\in \{1,2,...,n\}$ define $v_i = (a_i,b_i,c_i)$. ''Connect'' $u_j$ with $v_i$ iff $u_j\cdot v_i\equiv _2 1$ and we count the number of all connections.
It is easy to see that each $v_i$ has degree exactly $4$, so the number of all connections is $4n$. This means that one $u_j$ from the set $\{u_1,u_2,...u_7\}$ must be connected with at least $4n/7$ vectors from $\{v_1,...v_n\}$. So if we take $r,s,t$ to be the coordinates of $u_j$, we are done.
|
H: How can I use the fact that $6=2\cdot 3=(\sqrt{10}-2)(\sqrt{10}+2)$ to prove $Z[\sqrt{10}]$ is not a UFD?
How can I use the fact that $6=2\cdot 3=(\sqrt{10}-2)(\sqrt{10}+2)$ to prove $Z[\sqrt{10}]$ is not a UFD?
They are different ways of factoring 6 into irreducibles?
AI: Hints:
That's exactly the idea, but showing that these elements are irreducible must be proven. It must also be proven that for example $2$ is not equal to $u(\sqrt{10}-2)$ for some unit $u \in \Bbb{Z}[\sqrt{10}].$ So you should also determine the units of $\Bbb{Z}[\sqrt{10}]$.
|
H: Which is the correct integration using an integrating factor?
When shown equation $(1)$, I have two different answers for its integration, one mine and one from a colleague, and I am uncertain which is the correct one.
$$\left( \frac{\partial r}{\partial T}\right)_{E/T}- r\frac{c_0}{T}= - \frac{c_0}{T} \tag{1}$$
where the subscript indicates a constant ratio of $E/T$ throughout the calculations.
My take on it:
Using an integrating factor $I= e^{\int P dT}$ where $P=Q= \frac{c_0}{T}$.
Following the rule for this method:
$$I r = \int^{T_f}_{T_0} I Q dT $$
$$\left( \frac{T_f}{T_0}\right)^{-c_0}r= \int^{T_f}_{T_0} \left( \frac{T_f}{T_0}\right)^{-c_0} \left( \frac{-c_0}{T} \right) dT + f(E/T)$$
Because $E/T$ is seen as a constant and would be differentiated to $0$, I added a function of this term, $f(E/T)$, to my calculation.
$$r=\ln \left( \frac{T_f}{T_0}\right)^{-c_0}+ \left(\frac{T_0}{T_f}\right)^{-c_0} f(E/T)$$
My colleague's take on it:
I don't understand where his answer comes from, but he said he used the same process of integration, using an integrating factor and choosing $K$ as the constant term.
$$r = -e^{\int_{T_0}^{T} c_0/T^\prime dT^\prime} \int \frac{c_0}{T^\prime} e^{-\int c_0/T^{\prime \prime} dT^{\prime \prime}} dT^\prime - K e^{\int_{T_0}^{T} c_0/T^\prime dT^\prime}$$
Which is the correct integration using an integrating factor?
AI: The flaw is in your method, not (necessarily) your colleague's (although this really depends on what the bounds are for his antiderivatives, it's really hard to tell from the clutter in his notation). You treat the integrating factor like a constant, when it really should have been
$$I = \left(\frac{T}{T_0}\right)^{-c_0} \neq \left(\frac{T_f}{T_0}\right)^{-c_0}$$
As a sanity check, nontrivial integrating factors should always be varying functions, not constants (otherwise, what's the point of an integrating factor?)
|
H: What do you call a "circle-ish" polygon with 256 sides?
I would like to find an image of such a geometric shape or generate one myself, but I am not sure what to look for.
AI: It's a dihectapentacontahexagon.
For a more general answer, see List of polygons - Wikipedia.
The late John H. Conway mentions a few other such names here.
|
H: Prove or disprove the following: If $n^3 − 5$ is an odd integer, then $n$ is even.
Prove or disprove the following proposition: If $n^3 − 5$ is an odd integer, then $n$ is even.
I know that $n$ must be even in order for $n^3 - 5$ to be odd which means I have to prove the statement.. possibly with a contradiction? I have been able to successfully start the proof but I am unsure of where to go from here. Any help would be greatly appreciated!
Proof: Suppose $n$ is an odd integer which can be expressed as $n=2k+1$, and $n^3-5$ is also an odd integer.
$$(2k+1)^3 - 5 = (2k+1)(2k+1)(2k+1) - 5 = 8k^3 + 12k^2 + 6k + 1 - 5...$$
Where should I go from here? Thanks!
AI: You could say if $n=2k+1$ then $n^3-5=8k^3+12k^2+6k-4$ is even (being a sum of even numbers), which proves it by contrapositive, but I find it simpler to say if $n^3-5$ is odd, then $n^3$ is even, so $n$ is even.
|
H: How could a path be homotopic to a point
I have been following Serge Lang's Introduction to Complex Analysis at a graduate level and I met this theorem. I want to ask: what does it mean for a function to be homotopic to a point? I am only familiar with paths being homotopic to each other, but not to a point. Here is the extract.
If anybody could shed some light on this, I would really appreciate it.
AI: A path homotopy $f\to g$ is a homotopy $H=H(x,t)$ such that $H(x,0)=f(x)$ and $H(x,1)=g(x).$ A path $f$ is homotopic to a point $c$ if it is path homotopic to the constant path $c(x)=c$ for each $x.$
Consider this example: let $f(x)=(\sin x,\cos x)$ be the path which traces out the unit circle in $\mathbb R^2$. Then the path homotopy $H(x,t)= tf(x)$ has $H(x,1)=f(x)$ and $H(x,0)=0$ for each $x.$ It follows that $f$ is homotopic to the point $0$ since it is homotopic to the constant path at $0.$
|
H: Tricky Schwarz Lemma Type Qualifying Exam Question
I have a question from a past complex analysis qualifying exam that I would like some help on. Let $f(z)$ be an analytic function on the unit disk $\mathbb{D}$ such that $\lvert f(z)\rvert \leq 1$. Suppose that $z_1,z_2,...,z_n\in \mathbb{D}$ are zeroes of $f(z)$.
Show that
\begin{align}\lvert f(0)\rvert \leq \prod_{j=1}^n \lvert z_j\rvert,\end{align}
where each $z_j$ is repeated in the product on the right hand side corresponding to its multiplicity. For instance, if $z_1$ has multiplicity 3, then we would see $\lvert z_1\rvert^3$ on the right hand side.
This is what I have so far. Suppose that $f(z_j)=0$ with an order of $m_j\geq 1$. Letting $g_j:= f\circ \psi_j$, where $\psi_j$ is the FLT $\psi_j(z)=\frac{z+z_j}{1+\overline{z_j}z}$, we have that $g_j$ is analytic on $\mathbb{D}$ with a zero of order $m_j$ at $z=0$ and satisfies $\lvert g_j(z)\rvert \leq 1$. Thus, $\lvert g_j(z)\rvert \leq \lvert z\rvert ^{m_j}$ by the Schwarz Lemma, and in particular $\lvert g_j(-z_j)\rvert = \lvert f(0)\rvert \leq \lvert z_j\rvert^{m_j}$. However, I don't know how to combine this over all of the $z_j$ to get the desired inequality.
AI: Hint: Use induction on $n$. Let $\phi_j (z)=\frac {z-z_j}{1-\overline {z_j}z} =\psi_j^{-1}(z)$ and consider $\frac {f{(z)}} {\phi_1(z)....\phi_{n-1}(z)}$. Use MMP to show that this is bounded in modulus by $1$.
|
H: Non constant harmonic function on $\mathbb C$
If $u(z)$ is a non-constant real-valued harmonic function on $\mathbb C $ then prove that there exists $\{z_n\} \subset\mathbb C $ with $z_n\to \infty $ and $u(z_n) \to 0$ as $n \to\infty$.
Non constant harmonic functions are surjective and unbounded. They do not attain maximum or minimum on the plane. But how do we formalise this to get the sequence $\{z_n\}$ as demanded in the problem? Please help.
AI: By Liouville's theorem, $u$ is not bounded above, and $u$ is not bounded below. But $u$ is continuous, so there is $z_0$ with $u(z_0) = 0$. Next, for a positive integer $n$, let $C_n = \{z : |z - z_0| = n\}$. The value $u(z_0)$
is the average of $u$ on $C_n$.
So there is a point $z_n \in C_n$ with $u(z_n) = 0$. [Either $u$ is identically zero on $C_n$ or $u$ has both positive and negative values, and therefore a zero value.] Now
$$
|z_n| \ge |z_n-z_0| - |z_0| = n-|z_0|,
$$
so $|z_n| \to \infty$. But $u(z_n) = 0$.
|
H: Basic Understanding of Construction of Spectral Sequences - Limit Page
I have always wanted to learn spectral sequences, and I finally found some time to do so.
However, I have some problems to understand the very basics of the construction (the answer is probably obvious, but I don't see it). I found the following paper which I will follow, since I find it very detailed:
http://homepages.math.uic.edu/~mholmb2/serre.pdf
My question concerns "The Limit Page"-section on page 2. I understood how he constructed the $E^2$-page:
$$E^2=Z^1/B^1,$$
but for some reason not the $E^3$-page. This is taken from the paper, which is the part I can't understand:
Write $\overline{Z}:=\ker d^2$; it is a subgroup of $E^2$, whence, by the correspondence theorem, it can be written as $Z^2/B^1$, where $Z^2$ is a subgroup of $Z^1$. Similarly, write $\overline{B^2}=\operatorname{im} d^2$, which is isomorphic to $B^2/B^1$, where $B^2$ is a subgroup of $Z^2$...
So, let me try to make sense of the above quote, and then someone maybe can correct me if I say something wrong.
Since $d^2$ is a homomorphism, $\ker d^2$ is a subgroup of $E^2$.
I actually can't figure out why the correspondence theorem implies that $\overline{Z^2}=Z^2/B^1$, for some subgroup $Z^2$ of $Z^1$.
The correspondence theorem says, according to Wikipedia; If $N$ is a normal subgroup of $G$, then there exists a bijection from the set of all subgroups $A$ of $G$ containing $N$, onto the set of all subgroups of the quotient group $G/N$. That is, they define a map $$\phi(A)=A/N.$$
So, if I let $N=\ker d^2$ in the above result, then I would have something like $$\phi(A)=A/\ker d^2,$$
which doesn't really look correct to me. Maybe I want to use some property of the differential to conclude at what he does in what I quoted? I am quite confused right now, so I would be really happy if someone could help me to understand why $\overline{Z^2}=Z^2/B^1$ and why $\overline{B^2}=B^2/B^1$.
Best wishes,
Joel
Correspondence Theorem: https://en.wikipedia.org/wiki/Correspondence_theorem_(group_theory)
AI: The correspondence theorem is applied to the subgroup $B^1$ of $Z^1$, so that subgroups of $E^2=Z^1/B^1$ correspond to subgroups of $Z^1$ containing $B^1$.
|
H: How many strings of six lowercase letters from the English alphabet contain the letter a?
How many strings of six lowercase letters from the English alphabet
contain the letter a?
The answer to this problem is:
$26^6-25^6$ by the principle of inclusion-exclusion
My question is: why can't we calculate it in this way: $6*26^5$, i.e., as the number of possible positions for the letter multiplied by the number of ways to select the other 5 letters of the six-letter string?
AI: The reason is because $26^5$ is the list of all strings of length five, and while there are six different ways to insert an "a" into each of those strings, some of those strings already have an "a" or more than one "a" in them. We don't want to use $25^5 * 6$ either as this doesn't include strings that have more than one "a". The easiest way is to count all the strings of length six ($26^6$) and remove those that don't have an "a" in them ($25^6$)
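A quick sanity check of the inclusion-exclusion count on a smaller case where brute force is feasible (Python sketch; length $3$ over a $4$-letter alphabet):

from itertools import product

letters, n = "abcd", 3
brute = sum(1 for s in product(letters, repeat=n) if "a" in s)
formula = len(letters) ** n - (len(letters) - 1) ** n   # 4^3 - 3^3
print(brute, formula)   # both 37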
|
H: What's the gradient of a vector field?
Imagine I have the following function
$$ \vec{f}(\vec{x}) = x \vec{x}, x = | \vec{x} |, \vec{x} \in R^3 $$
That is, the function is essentially a quadratic function, but contains a vector direction as well. Intuitively from single variable calculus I would expect the gradient $ \nabla \vec{f} = (\partial \vec{f}/ \partial x_1,\partial \vec{f}/ \partial x_2,\partial \vec{f}/ \partial x_3) $ to be proportional to $2x$, however I also would expect it to be a 3x3 matrix.
My most naive attempt would be to do
$$ \vec{f} = x_1^2 \vec{e}_1 + x_2^2 \vec{e}_2 + x_3^2 \vec{e}_3 $$
and say that
$$ \nabla \vec{f} = 2 x_1 \vec{e}_1 + 2 x_2 \vec{e}_2 + 2 x_3 \vec{e}_3 $$
But it would mean that every gradient w.r.t. a vector would always be a diagonal matrix, which seems wrong to me. What I really want to create is the Jacobian $ \partial \vec{f}_i / \partial x_j $ but I think I get a little bit confused about what I do with the base vectors $ \vec{e_i} $ during the partial derivative.
AI: You sound like you just want the Jacobian but are trying to force it into a single gradient vector, and I think the main part you are getting confused on is how to differentiate that initial function $f$. Don't bother thinking of them as vectors for a second and just go through them normally for multivariable calculus, treating all variables which you are not differentiating with respect to as just constants:
$\vec{f} = [f_1, f_2, f_3]^T$
$f_1(\vec{x}) = (x_1^2 + x_2^2 + x_3^2)^{1/2} * x_1$
$\frac{\partial f_1(\vec{x})}{\partial x_1} = (x_1^2 + x_2^2 + x_3^2)^{1/2} +\frac{x_1}{2(x_1^2 + x_2^2 + x_3^2)^{1/2}} * 2*x_1 = (x_1^2 + x_2^2 + x_3^2)^{1/2} +\frac{x_1^2}{(x_1^2 + x_2^2 + x_3^2)^{1/2}}$ by the chain rule.
By analogy, you can get $\frac{\partial f_2(\vec{x})}{\partial x_2}$ and $\frac{\partial f_3(\vec{x})}{\partial x_3}$. For the others, let's look at $\frac{\partial f_1(\vec{x})}{\partial x_2}$ for an example:
$\frac{\partial f_1(\vec{x})}{\partial x_2} = \frac{x_1*x_2}{(x_1^2 + x_2^2 + x_3^2)^{1/2}}$
To summarize, if $i = j$:
$\frac{\partial f_i(\vec{x})}{\partial x_j} = x +\frac{x_j^2}{x}$
Else:
$\frac{\partial f_i(\vec{x})}{\partial x_j} = \frac{x_i*x_j}{x}$
You can then plug this into a Jacobian form
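If it helps, here is a small symbolic check of that summary (a sketch using SymPy):

import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3", real=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)        # r = |x|
f = sp.Matrix([r * x1, r * x2, r * x3])   # f(x) = |x| * x
J = sp.simplify(f.jacobian([x1, x2, x3]))
print(J)   # diagonal entries r + xi**2/r, off-diagonal entries xi*xj/r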
|
H: If $(a+bi)$ is a unit in $\mathbb{Z} [i]$, then $N(a+bi) = 1$?
How does one prove that, if some $a+bi \in \mathbb{Z}[i]$ is a unit, then the norm of $a+bi$, $N(a+bi) = 1$ without simply checking each unit? I've tried a few different things (applying the Well-Ordering Principle considering only the norms, trying to use the fact that $(a+bi)$ has a multiplicative inverse and distributing out, etc) but I haven't gotten anywhere. Even a hint would be great!
AI: Suppose $\alpha=a+bi$ is a unit. Then there exists
$\beta \in \Bbb{Z}[i]$ so that $\alpha \beta=1$. Then $$N(\alpha)N(\beta)=N(\alpha\beta)=N(1)=1.$$ So, $N(\alpha)=N(\beta)\in \{\pm 1\}$. Now, $N(\alpha)=\alpha\overline{\alpha}=(a+bi)(a-bi)=a^2+b^2\ge 0$. So, $N(\alpha)=1$.
|
H: Well-definition of the quotient norm
Consider $X$ a normed space with norm $\|\cdot\|$ and $M$ a closed subspace of $X$. In the quotient space $X/M$ we define the quotient norm:
$$|||\hat{x}||| = \inf_{y\in M} \|\hat{x}+y\|, \quad \hat{x}\in X/M.$$
I'm trying to prove the well-definition of this norm, that is, given $\hat{x_1}$, $\hat{x_2}\in X/M$ such that $x_1-x_2\in M$, it must be verified that $|||\hat{x_1}|||=|||\hat{x_2}|||$.
By means of the triangle inequality property of the norm $\|\cdot\|$ I managed to show that
$$|||\hat{x_1}|||-|||\hat{x_2}||| \leq \|x_1-x_2\|,$$
but that doesn't help to conclude what I want. I would really appreciate if someone can please help me with this.
AI: Suppose $\hat{x}_1-\hat{x}_2=m\in M$. Since $M$ is a subspace, taking the $\inf$ over arbitrary $z\in M$ yields the same result as taking the $\inf$ over $m-y$ for arbitrary $y\in M$ (since $m-y$ is still in $M$ and any value at $z$ is achieved when $y$ is $m-z$). Using this change of variables ($z$ for $m-y$),
$$|||\hat{x}_1|||=\inf_{z\in M}\|\hat{x}_1-z\|=\inf_{y\in M}\|\hat{x}_1-m+y\|=\inf_{y\in M}\|\hat{x}_2+y\|=|||\hat{x}_2|||$$
|
H: Link between a particular function and cos(nt)
There was a particular question we were given that I can't work out. The question's fine but the difficulty is the extra bit. So the original question is:
Define the polynomials $f_n$ by
$$ f_0(x) = 1, $$
$$ f_1(x) = x, $$
$$ f_n(x) = 2x f_{n-1}(x) - f_{n-2}(x) \quad (n = 2,3,\ldots). $$
Let $x$ be a real number. Prove that $f_n(-x) = (-1)^n f_n(x)$.
The question itself is quite simple using the minimal criminal idea.
But there's an extra bit which asked: "What is the connection between $f_n(x)$ and $\cos(nt)$?"
I can't figure it out! It's only optional so I don't have to do it but for completeness purposes I've tried and I can't figure it out. I've asked all my friends and they're stuck too! Any help appreciated!
AI: If you compute a few of the values of $f_n(x)$, i.e. for a few values of $n=2,3,4$, the familiarity of the formulae for $\cos(nx)$ in terms of $\cos(x)$ would lead you to the answer.
$f_2(x)=2xf_1(x)-f_0(x) = 2x^2-1,\ ( \text{Note } \cos(2\theta)=2\cos^2(\theta)-1) \\ f_3(x)=2xf_2(x)-f_1(x)=2x(2x^2-1)-x=4x^3-3x, \ (\cos(3\theta)=4\cos^3(\theta)-3\cos(\theta))$
So, taking $x=\cos(\theta), f_n(x)=\cos(n\theta)$, which you can prove by induction easily, over $n$.
Assuming $f_1(x)=\cos(\theta), f_2(x), \cdots,f_k(x)$ to be the representations of $f_t(x)=cos(t\theta)$ (base case for $k=2$ checked above), $$f_{k+1}(x) = 2xf_{k}(x) -f_{k-1}(x) = 2\cos(\theta)\cos(k\theta)-\cos((k-1)\theta)\\ =2\cos(\theta)\cos(k\theta)-\cos(k\theta - \theta)\\=2\cos(\theta)\cos(k\theta)-\cos(k\theta)\cos(\theta)-\sin(k\theta)\sin(\theta) \\ =\cos(k\theta)\cos(\theta)-\sin(k\theta)\sin(\theta) \\ = \cos(k\theta+\theta) = \cos((k+1)\theta)$$ proves the statement, by the strong version of mathematical induction.
The recursive definition of $f_n(x)$ you've been given is the way Chebyshev polynomials of the first kind are defined, and they are quite important in numerical methods, particularly among the class of orthogonal polynomials.
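A quick numerical check of the identity $f_n(\cos t)=\cos(nt)$ using the recursion directly (Python sketch):

import math

def f(n, x):
    a, b = 1.0, x                 # f_0(x) = 1, f_1(x) = x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2 * x * b - a   # f_{k+1}(x) = 2x f_k(x) - f_{k-1}(x)
    return b

t = 0.7                           # arbitrary test angle
for n in range(6):
    print(n, f(n, math.cos(t)), math.cos(n * t))   # the two columns agree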
|
H: Joint probability for 2 uniform distributions
$X$ and $Y$ are independent and uniformly distributed on the interval $(0, 1)$. If $U = X + Y$ and $V = X/Y$,
find the joint density for $U$ and $V$ and the marginal densities for $U$ and $V$.
Given that $$f_{UV}(uv) = f_X(x)f_Y(y)*|J|$$
Where I get $|J| =\frac{vu+1}{(1+v)^{3}}$.
I get the joint distribution to be
$$f_{UV}(uv) = \frac{vu+1}{(1+v)^{3}}, 0<u<1+v, \frac{u}{1+v}<u<1+\frac{u}{1+v}$$
I'm trying to get the marginals for $U$ and $V$ but I'm having trouble setting my limits.
So far I have$$f_V(v) = \int_0^{1+v} f_{UV}(uv) du$$
and I'm not sure how to set the limits for$$f_U(u) = \int f_{UV}(uv) dv$$
How can I find the right limits for it? I'm getting a bit confused with it.
AI: The transformation is $Y=U/(V+1), X=UV/(V+1)$
So the Jacobian should be:
$$\begin{align}J_{U,V}&=\begin{bmatrix}1/(V+1) & -U/(V+1)^2 \\ V/(V+1) & U/(V+1)^2\end{bmatrix} \\ \lVert J_{U,V}\rVert&=\left\lvert\dfrac{U}{(V+1)^2}\right\rvert\end{align}$$ …
The support for $U,V$ should be $0<U<2$ (since $U=X+Y$) and also $0<U/(V+1)<1$ and $0<UV/(V+1)<1$ (from the support).
So
$$f_{\small U,V}(u,v) = \dfrac{u}{(v+1)^2}(\mathbf 1_{0<v<1,0<u<v+1}+\mathbf 1_{1\leqslant v, 0<u<(v+1)/v})$$
…
It would seem easier to find the marginals from first principles. For instance:
$$\begin{align}f_{\small X,X+Y}(x,u) &= f_{\small X}(x) f_{\small Y}(u-x)\\&= (\mathbf 1_{0<u<1, 0<x<u}+\mathbf 1_{1\leqslant u< 2, u-1<x<1})\\[2ex]\therefore\quad f_{\small U}(u)&= u\mathbf 1_{0<u<1}+(2-u)\mathbf 1_{1\leqslant u < 2}\end{align}$$
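As a cross-check, a small Monte Carlo sketch (Python/NumPy) of the triangular marginal density of $U$ obtained above:

import numpy as np

rng = np.random.default_rng(0)
x, y = rng.random(1_000_000), rng.random(1_000_000)
u = x + y

hist, edges = np.histogram(u, bins=40, range=(0, 2), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
f_u = np.where(mid < 1, mid, 2 - mid)   # the triangular density derived above
print(np.max(np.abs(hist - f_u)))       # small (Monte Carlo error only)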
|
H: Combinatorics Analysis of Game "Olaf Hits The Dragon With His Sword"
I recently encountered the short tabletop roleplaying game Olaf Hits The Dragon With His Sword and I've been trying to do a combinatorics analysis of its dice mechanics. The full rules are available behind the link I've provided but I'll summarize them here.
There are six colors of dice and two players.
Any number of sides can be used for the dice as long as they are all the same.
The players make a series of choices which add dice to a pool and then all of the dice in the pool are rolled to determine the outcome.
There are seven possible outcomes - each color can win or the pool can be emptied.
The game proceeds in three phases:
In the first phase each player picks a color and adds a die of that color to the pool. The players can pick the same color.
In the second phase each player picks three different colors and adds a die of each chosen color to the pool. These colors can include a repeat of the color chosen in the first round.
The eight dice in the pool are rolled. The color with the single highest die wins. If multiple colors are tied then discard all dice showing the highest result and repeat until there is a winner or the pool is empty.
Given these rules I'd like to answer a couple of questions:
How many pools are possible?
For a given pool and choice of number of sides what is the probability of each outcome?
For each of the seven outcomes what's the pool that maximizes the probability it occurs?
I'm not sure how to approach the first two questions. For the third question it seems clear that the pool which maximizes the probability that a given color wins is the pool with 4 dice of that color and 4 dice which each have a different color. However I'm not sure what pool will maximize the probability that the pool is emptied.
I'd appreciate any help with solving this.
AI: For the first, you can have a maximum of four of one color, which only occurs if they both pick that color in the first round and then both pick it as one of the three in the second round. If there are four of one color, there cannot be three of any other color, but the remaining dice could be split $(2,2), (2,1,1), (1,1,1,1)$. If there are at most three of one color, any distribution is possible. To get the number of possibilities for a given color distribution, take the number of permutations of the six colors, taken as many at a time as there are colors used, and divide by the factorials of the numbers of any repeated counts. For a distribution of $3,2,1,1,1$ we have $\frac {6\cdot 5 \cdot 4 \cdot 3 \cdot 2}{3!}=120$ possibilities. We just have to run down the list.
$$\begin {array} {r|r} \text{colors}&\text{number}\\
\hline 4,2,1,1&180\\
4,1,1,1,1&30\\
3,3,2&60\\
3,3,1,1&90\\
3,2,2,1&180\\
3,2,1,1,1&120\\
3,1,1,1,1,1&6\\
2,2,2,2&15\\
2,2,2,1,1&60\\
2,2,1,1,1,1&15\\
\hline \text{Total}& 756\end {array}$$
To get the probability of each outcome, I would just write a program to model it. I am sure that $4,1,1,1,1$ maximizes the winning chance for a color. It is better than $4,2,1,1$ because presumably if the $2$ dice tie for the highest number they win, while if they are different colors they will be thrown out and the $4$ might win.
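Here is a minimal simulation sketch (Python) of the resolution rule plus a Monte Carlo estimate of the outcome probabilities for a given pool and number of sides. One reading assumption baked in: if every die showing the current highest value has the same color, that color wins.

import random
from collections import Counter

def resolve(pool, sides):
    # pool: list of color labels, one per die; returns the winning color or None
    rolls = [(random.randint(1, sides), color) for color in pool]
    while rolls:
        top = max(value for value, _ in rolls)
        top_colors = {color for value, color in rolls if value == top}
        if len(top_colors) == 1:
            return top_colors.pop()                      # a single color holds the highest die
        rolls = [(v, c) for v, c in rolls if v != top]   # discard all tied highest dice
    return None                                          # pool emptied

def estimate(pool, sides, trials=100_000):
    counts = Counter(resolve(pool, sides) for _ in range(trials))
    return {outcome: count / trials for outcome, count in counts.items()}

pool = ["red"] * 4 + ["blue", "green", "yellow", "white"]   # a 4,1,1,1,1 pool
print(estimate(pool, sides=6))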
|
H: How is the ideal $(x,y)$ isomorphic to $k[x, y]$ as $k[x, y]$-modules?
Let $R = k[x, y]$ where $k$ is a field and consider the ideal $I = (x, y)$ as a $R$-module.
Consider the $R$-module homomorphism $\varphi : R^2 \to I$ given by $\varphi(a, b) = ax + by$.
Prove that the kernel of $\varphi$ is the set $\{(−cy, cx) \mid c ∈ R\}$, and show that $\ker \varphi$ is isomorphic to $R$ as a $R$-module.
Deduce an isomorphism $R^2/R \cong I$.
I've figured everything out, but I'm bothered by the last statement.
We have $I \cong R^2/ \ker \varphi \cong R^2/R$ since $R \cong \ker \varphi$.
However, isn't $R^2/R \cong R$?
This would mean $I \cong R$, but $I$ is an $R$-module generated by two elements whereas $R$ is an $R$-module generated by one element.
Am I not understanding something?
AI: Certainly, it is true that there are isomorphisms between $R^2/M$ and $R$ where $M$ is a free rank one submodule of $R^2.$ For example, $R^2/(R\oplus 0)$ or $R^2/(0\oplus R)$ are both isomorphic to $R.$ You are also correct that $I\not\cong R.$
The issue here is that there are many different ways to embed $R$ as a submodule of $R^2$ (equivalently, there are many $R$-submodules $M\subseteq R^2$ which are isomorphic to $R$), and the quotients of $R^2$ by different such $M$ need not be isomorphic. Indeed, as you've shown, there exists such an $M$ with $R^2/M\cong I\not\cong R$! So, writing $R^2/R$ is a little misleading.
An example of this phenomenon which might be more familiar is the following. Consider the ring $\Bbb{Z},$ and note $$\Bbb{Z}/\Bbb{Z}\cong 0$$ while on the other hand we have $$\Bbb{Z}/2\Bbb{Z}\not\cong0.$$ However, $\Bbb{Z}\cong2\Bbb{Z}$ as $\Bbb{Z}$-modules, via $n\mapsto 2n.$
So, given two $R$-modules $M$ and $N$ such that there exist $R$-submodules $N'$ and $N''$ of $M$ with $N\cong N'\cong N'',$ we cannot unambiguously write $M/N.$ We need to specify the embedding $N\to M$ which we are using to consider $N$ as a submodule of $M$!
|
H: yes/ No Is $T$ is linear transformation?
Let $T : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ be the transformation defined by $T(x,y)= (x+1,y+1)$.
Now my question is: is $T$ a linear transformation?
My attempt: the main concept of a linear transformation is that $T = 0$ somewhere, and if we put $x=y=-1$ then $T(x,y)=(0,0)$.
So I think it will be a linear transformation.
Again from another definition of linear transformation we have
$T(x,y) + T(z,w)= T(x+ z, y + w)$
$(x +1 , y+1) + ( z+1 , w +1 ) = ( x+ z+ 1, y+w+1 )$
The other properties also seem to be satisfied,
so again $T$ is a linear transformation.
Is this correct or not?
AI: No. In order for $T$ to be linear you would have to have$$(x+1,y+1)+(z+1,w+1)=(x+z+1,y+w+1).$$But, in fact,$$(x+1,y+1)+(z+1,w+1)=(x+z+2,y+w+2).$$
And you can just say that $T$ is not linear because $T(0,0)\ne(0,0)$.
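For a concrete numerical check (added for illustration): $T(1,0)+T(0,1)=(2,1)+(1,2)=(3,3)$, while $T\big((1,0)+(0,1)\big)=T(1,1)=(2,2)\neq(3,3)$, so additivity already fails.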
|
H: Coefficient Estimators of $\frac{1}{x^{2}}$ Weighted Least Squares Linear Regression
I have a feeling there should be a mathematical formula for determining the estimators of the coefficients of a $\frac{1}{x^{2}}$ Weighted Linear Regression.
I was able to derive the estimators ($a$ and $b$) for the non-weighted linear regression ($y=ax+b$). I did it by minimizing $\sum \epsilon^2 $ where $\epsilon = y - (ax+b) $
I partially differentiated $\sum \epsilon^2 $ with respect to $a$ and $b$ and set the results equal to zero. At the end I got the estimators as:
$$a = \frac{n\sum xy - \sum x \sum y}{n\sum {x}^2 - {(\sum x)}^2} $$
and
$$b = \frac{\sum y - a\sum x}{n} $$
Now my question is this:
If I decide to use a $\frac{1}{x^{2}}$ weight on the linear regression $y=ax+b$, is there a way to minimize the least squares of the weighted error (ie minimize $\sum \frac{1}{x^{2}} \ [y - (ax+b)]^2 $ ) and come up with a simple mathematical relationship to determine the estimators $a$ and $b$ ?
AI: If I decide to use a $\frac{1}{x^{2}}$ weight on the linear regression $y=ax+b$, is there a way to minimize the least squares of the weighted error (i.e. minimize $\sum \frac{1}{x^{2}} \ [y - (ax+b)]^2 $ ) and come up with a simple mathematical relationship to determine the estimators $a$ and $b$ ?
Why not? You have two equations in two variables. Just solve them.
We have to minimize $S=\sum \frac 1{x_i^2}[y_i-(ax_i+b)]^2=\sum \left[\frac {y_i}{x_i}-\left(a+\frac {b}{x_i}\right)\right]^2$
\begin{align*}
\frac{\partial S}{\partial a}&=0\Rightarrow\sum \left[\frac {y_i}{x_i}-\left(a+\frac {b}{x_i}\right)\right]=0\Rightarrow \boxed{a=\frac{\sum\frac {y_i}{x_i}-b\sum\frac {1}{x_i}}{n}}\tag{1}\\
\frac{\partial S}{\partial b}&=0\Rightarrow\sum \frac1{x_i}\left[\frac {y_i}{x_i}-\left(a+\frac {b}{x_i}\right)\right]=0\Rightarrow a\sum\frac1{x_i}+b\sum\frac1{x_i^2}=\sum\frac {y_i}{x_i^2}\tag{2}\\
\end{align*}
Putting the value of $a$ from $(1)$ in $(2)$, we get
\begin{align*}
\left(\frac{\sum\frac {y_i}{x_i}-b\sum\frac {1}{x_i}}{n}\right)\sum\frac1{x_i}+b\sum\frac1{x_i^2}&=\sum\frac {y_i}{x_i^2}\\
\sum\frac {y_i}{x_i}\sum\frac1{x_i}-b\left(\sum\frac {1}{x_i}\right)^2+nb\sum\frac1{x_i^2}&=n\sum\frac {y_i}{x_i^2}\\
\boxed{b=\frac{n\sum\frac {y_i}{x_i^2}-\sum\frac {y_i}{x_i}\sum\frac1{x_i}}{n\sum\frac1{x_i^2}-\left(\sum\frac {1}{x_i}\right)^2}}\\
\end{align*}
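For a quick numerical sanity check of these boxed formulas (an illustrative sketch in Python/NumPy with synthetic data; not part of the derivation above): minimizing $S$ is the same as running ordinary least squares of $y_i/x_i$ against $1/x_i$, with intercept $a$ and slope $b$, so the two computations should agree.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, size=50)
y = 2.5 * x + 4.0 + rng.normal(scale=0.3, size=50)   # made-up data from y = a x + b + noise
n = len(x)

# closed-form weighted estimators from the formulas above
b = (n * np.sum(y / x**2) - np.sum(y / x) * np.sum(1 / x)) / \
    (n * np.sum(1 / x**2) - np.sum(1 / x)**2)
a = (np.sum(y / x) - b * np.sum(1 / x)) / n

# cross-check: OLS of y/x on 1/x gives slope b and intercept a
slope, intercept = np.polyfit(1 / x, y / x, 1)
print(a, intercept)   # these should match
print(b, slope)       # these should match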
|
H: 3D Parametric Equation changing over time
I can create a 3D parametric equation of a spiral but I'm having trouble getting the angle of "descent" to also change over time.
$$x=u\sin(u)\cos(v)$$
$$y=u\cos(u)\cos(v)$$
$$z=-u\sin(v)$$
The Octave code I have so far seems close, I'm just not sure how to "tweak" it.
The image it creates is:
clc
close all
clear all
u=linspace(0,4*pi,100);
v=linspace(0,pi,100);
[u,v]=meshgrid(u,v);
x=u.*sin(u).*cos(v);
y=u.*cos(u).*cos(v);
z=-u.*sin(v);
figure(1)
mesh(x,y,z);
view([-57,32])
h=gca;
get(h,'FontSize')
set(h,'FontSize',14)
xlabel('X','fontSize',14);
ylabel('Y','fontSize',14);
zlabel('Z','fontsize',14);
title('3D Parametric Equation Lily impeller','fontsize',14)
fh = figure(1);
set(fh, 'color', 'white');
The image I'm trying to recreate is the Lily Impeller, and how its growth pattern takes shape over time.
Here's a video of what I'm trying to model/animate the growth pattern of.
https://youtu.be/by0JhirtO-0?t=224
I was thinking that the descending curves in the $-Z$ direction may need to be at a $60^\circ$ angle or so but I couldn't come up with a way of how to do this.
AI: As the curve moves radially out from $(0,0)$ reduce $z$ at a $60^{\circ}$ angle.
z=-u.*sin(v) .- sin(60/180 * pi)*(sqrt((x).^2 + (y).^2));
This subtracts a $60^{\circ}$ angle cone from the $z$ value.
surf(x,y,z)
You can adjust the angle and scale to match the image / your taste.
Depending on how you define the angle:
z=-u.*sin(v) .- cos(60/180 * pi)*(sqrt((x).^2 + (y).^2));
clc
close all
clear all
u=linspace(0,4*pi,100);
v=linspace(0,pi,100);
[u,v]=meshgrid(u,v);
x=u.*sin(u).*cos(v);
y=u.*cos(u).*cos(v);
z=-u.*sin(v) .- sin(60/180 * pi)*(sqrt((x).^2 + (y).^2));
figure(1)
mesh(x,y,z);
view([-57,32])
h=gca;
get(h,'FontSize')
set(h,'FontSize',14)
xlabel('X','fontSize',14);
ylabel('Y','fontSize',14);
zlabel('Z','fontsize',14);
title('3D Parametric Equation Lily impeller','fontsize',14)
fh = figure(1);
set(fh, 'color', 'white');
|
H: Arc length of the Inverse Function
I am looking to prove that the arc length of the inverse function $f^{-1}(x)$ on the interval $[f(0),f(a)]$ is the same as the arc length of the function $f(x)$ on the interval $[0,a]$. One may see that this is true visually, as it comes from flipping the function across the line $y=x$, which does not change its length, but I want to show it analytically. Start with the arc length formula,
$$\int_{f(0)}^{f(a)} \sqrt{1+\left(\frac{d}{dx}\left[f^{-1}(x)\right]\right)^2}dx.$$
Use the general form for the derivative of the inverse function and simplify to the following:
$$\int_{f(0)}^{f(a)}\frac{\sqrt{1+f'(x)^2}}{f'(x)}\,dx.$$
The Substitution $y=f^{-1}(x)\,\Rightarrow dy = dx/f'(x)$ then gets very close,
$$\int_0^a \sqrt{1+f'(f(y))^2}\,dy,$$
but this is not the arc length formula I am after. What I want is
$$\int_0^a \sqrt{1+f'(y)^2}\,dy.$$
I have been thinking about this for longer than I care to admit, and haven't figured out what went wrong. Any help is very much appreciated.
AI: It helps to use the notation
$$L = \int_{\min\{f(0),f(a)\}}^{\max\{f(0),f(a)\}}\sqrt{1+[(f^{-1})'(y)]^2}\:dy$$
and we'll use the same substitution $y = f(x)$:
$$dy \to |f'(x)|\:dx$$
and
$$(f^{-1})'(f(x)) = \frac{1}{f'(x)} $$
gives us
$$L = \int_0^a \sqrt{1+\frac{1}{[f'(x)]^2}}\cdot |f'(x)|\:dx = \int_0^a\sqrt{[f'(x)]^2+1}\:dx$$
We take the absolute value because that is what the change-of-variables (Jacobian) formula prescribes; it guarantees that the orientation of the bounds is the same before and after the substitution (either increasing or decreasing for both).
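As a numerical sanity check of this identity (an illustrative sketch assuming SciPy is available; the choice $f(x)=e^x$, $a=1$ is arbitrary), both integrals below should return the same value:
import numpy as np
from scipy.integrate import quad

a = 1.0
# arc length of f(x) = e^x on [0, a]
length_f, _ = quad(lambda x: np.sqrt(1 + np.exp(x)**2), 0, a)
# arc length of f^{-1}(y) = ln y on [f(0), f(a)], using (f^{-1})'(y) = 1/y
length_inv, _ = quad(lambda y: np.sqrt(1 + (1 / y)**2), np.exp(0), np.exp(a))
print(length_f, length_inv)   # both approximately 2.0035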
|
H: limits and L'Hospital's rule
$$\lim_{x\to 0^{+}} \frac{1+x\ln x}{x}$$
I think it is an indeterminate form, so I applied the L'H rule, differentiated the numerator and denominator, and got the limit as $x$ tends to $0$ from the right of $\ln (x) + 1$, which clearly depends on the $\ln$ function. We know that the $\ln$ function tends to negative infinity as $x \to 0^{+}$, but when I put this function into WolframAlpha I get positive infinity as the limit. Can somebody explain where I am going wrong?
AI: To apply L'Hôpital, you need your limit to be of the form $0/0$ or $\infty/\infty$. Your limit is neither: the numerator converges to $1$ and the denominator converges to $0$. As the limit is one-sided, with $x>0$, the limit equals $+\infty$.
If you don't know the limit of $x\ln x$, one way to do this limit is to substitute $x=1/t$. Then the limit becomes
$$
\lim_{t\to\infty}\frac{\ln 1/t}{t}=-\lim_{t\to\infty}\frac{\ln t}t=0
$$
(this last one, you can do by L'Hôpital if you need to).
|
H: Can we say that $\text {tr}\ (A) = 0\ $?
Let $A$ be an $n \times n$ real matrix with $A^3 + A = 0.$ Can we say that $\text {tr}\ (A) = 0\ $?
I think it's true but can't prove it. Any help will be highly appreciated.
Thanks in advance.
AI: The minimal polynomial of $A$ must divide $x^3+x$.
Then the real Jordan form of $A$ can have, consequently, two kinds of blocks:
0 blocks of dimension 1
2-dimensional blocks associated to rotations of $\pi/2$
In both cases the trace is equal to 0.
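For instance, a concrete example of the second kind of block (added for illustration) is
$$J=\begin{pmatrix}0&-1\\1&0\end{pmatrix},\qquad J^2=-I,\qquad J^3+J=-J+J=0,\qquad \operatorname{tr} J=0.$$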
Recalling that the trace of $A$ is invariant under conjugation, you are done.
Observe that this is not true over the complex numbers: $A=i I$ satisfies $A^3+A=0$ but $\operatorname{tr} A \neq 0$.
|
H: Result of $\int_{-\infty}^{\infty}\frac{\cos(ax)}{e^x+e^{-x}}\,dx$
I'm trying to verify whether the result of this integral is equal to
$$
\frac{\pi\cosh\!\left(a\frac{\pi}{2}\right)}{\cosh(a\pi)+1}.
$$
I'm trying to solve it using the residue at $\frac{i\pi}{2}$ but couldn't find a worked solution to compare against.
Any help is most appreciated.
AI: Notice that $$\int_{-\infty}^{\infty} \frac{e^{iax}}{e^x + e^{-x}}\,dx= \int_{-\infty}^{\infty} \frac{\cos(ax)}{e^x + e^{-x}}\, dx + i\int_{-\infty}^{\infty} \frac{\sin(ax)}{e^x + e^{-x}}\,dx$$
Therefore, let's compute the left-hand side and take the real part of the result. Take the rectangular contour whose base lies on the $x$-axis and whose top side sits at height $\operatorname{Im} z = \pi$ (that way we capture our pole at $i\pi/2$ but none of the other poles). We'll call this contour $\Gamma$.
Now, we have 4 separate line integrals for each side of the rectangle, but using the ML-Inequality you can show that the line integrals over the two vertical sides of the rectangle go to 0.
We're left with $$\int_{\Gamma} \frac{e^{iaz}}{e^z + e^{-z}} dz= \lim_{R\to\infty} \Bigg(\int_{-R}^{R} \frac{e^{iax}}{e^x + e^{-x}} dx+ \int_{R+i\pi}^{-R+i\pi} \frac{e^{iaz}}{e^z + e^{-z}}dz\Bigg)$$
For the right-most integral, we can use the parametrization $\gamma(t)=i\pi + t$, where $-R\leq t \leq R$. After some algebra, we get that:
$$\int_{\Gamma} \frac{e^{iaz}}{e^z + e^{-z}} dz= (1+e^{-a\pi})\int_{-\infty}^{\infty}\frac{e^{iax}}{e^x + e^{-x}} dx$$
We compute the left-hand side using the residue theorem to get:
$$\pi e^{-a\pi/2} =(1+e^{-a\pi})\int_{-\infty}^{\infty}\frac{e^{iax}}{e^x + e^{-x}} dx$$
Which means that $$\int_{-\infty}^{\infty}\frac{e^{iax}}{e^x + e^{-x}} dx = \frac{\pi e^{-a\pi/2}}{1+e^{-a\pi}} $$
We can take the real part of both sides, but the right-hand side is already purely real-valued, so we conclude that
$$\int_{-\infty}^{\infty} \frac{\cos(ax)}{e^x + e^{-x}} dx = \frac{\pi e^{-a\pi/2}}{1+e^{-a\pi}}$$
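For comparison with the hyperbolic-cosine form quoted in the question (an extra step added here): since $\cosh(a\pi)+1=2\cosh^2\!\left(\frac{a\pi}{2}\right)$,
$$\frac{\pi e^{-a\pi/2}}{1+e^{-a\pi}}=\frac{\pi}{e^{a\pi/2}+e^{-a\pi/2}}=\frac{\pi}{2\cosh\!\left(\frac{a\pi}{2}\right)}=\frac{\pi\cosh\!\left(\frac{a\pi}{2}\right)}{\cosh(a\pi)+1},$$
so the two expressions agree.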
|
H: Rudin proof for compact subsets $\{ K_{\alpha} \}$ (theorem 2.36) — Contrapositive or contradiction?
I am having doubts about Theorem 2.36 pasted below. I was able to follow all the steps individually, but I don’t see how this is a proof by contradiction. It seems to me that it is a proof by contrapositive.
That is because we are assuming the intersection of all $\{ K_{\alpha} \}$ is empty, and then showing that the intersection of an arbitrary finite subcollection $K_1 \cap K_{\alpha_{1}} \cap \cdot\cdot\cdot \cap K_{\alpha_{n}} = \phi$.
What am I missing here? Why is this not proof by contrapositive?
AI: Whether one sees it as a proof by contradiction or as a proof of the contrapositive depends on whether one assumes that the $K_\alpha$ have the finite intersection property. If one makes that assumption, then the argument is indeed a proof by contradiction: the further assumption that the intersection of all of the $K_\alpha$ is empty leads to a conclusion that contradicts the starting assumption. That further assumption must therefore be false.
If one does not make that initial assumption, the same argument proves that the $K_\alpha$ do not have the finite intersection property and thus demonstrates the contrapositive of the stated result. I prefer to view it as a proof of the contrapositive, but both views are logically defensible, and Rudin evidently chose to adopt the other view.
Added: The difference is in the logical structure of the proof of an implication $A\implies B$. In a direct proof we assume $A$ and somehow derive $B$. In a proof of the contrapositive we assume $\neg B$ and somehow prove $\neg A$; this direct proof of the contrapositive of course establishes the logically equivalent original implication $A\implies B$.
In a proof by contradiction we assume $A$ and $\neg B$ and somehow derive a contradiction. This shows that $A$ and $\neg B$ cannot both be true, so if $A$ is true, then $\neg B$ must be false and therefore $B$ must be true. It sometimes happens that in deriving the contradiction we never really use the assumption that $A$ is true: we actually use only the assumption $\neg B$ and get our contradiction (with the assumption $A$) by deriving $\neg A$ from $\neg B$. In that case, which is what we have here, we’ve presented what could have been a direct proof of the contrapositive in the logical form of a proof by contradiction, and technically that form makes it a proof by contradiction. It just didn’t have to be one. We didn’t use the assumption $A$ at any point in the actual mechanics of the argument: it served only to be contradicted.
|
H: Image of morphism of sheaves
Suppose I have a morphism of sheaves $f : E^{\oplus4} \to I_p$ on $X$ a degree $3$ Fano threefold, where $E$ is a rank $2$ vector bundle on $X$ and $I_p$ is an ideal sheaf of a point $p \in X$. I'm interested in knowing the image of $f$.
For each of the direct summands, I have a morphism $f_i : E \to I_p$ and I know that the image of such a morphism is $I_{D_i}$ where $D_i$ is a degree $5$ curve containing $p$. Is there a way in which I can use this information to find the image of $f$? I'm hoping that this makes $\mathrm{im}(f) = I_p$, but I'm not sure if this is true, and if so, how to proceed.
Thanks.
AI: To compute the image you can replace $I_p$ by $\mathcal{O}_X$ and the copies of $E$ by $I_{D_i}$. Then your question becomes: what is the image of
$$
I_{D_1} \oplus I_{D_2} \oplus I_{D_3} \oplus I_{D_4} \to \mathcal{O}_X.
$$
The image of the direct sum of ideal sheaves is, of course, the sum of the corresponding ideals, i.e., the ideal of the corresponding intersection of schemes. So, it proves that the image you are interested in is $I_Z$, where
$$
Z = D_1 \cap D_2 \cap D_3 \cap D_4.
$$
In particular, if you want to prove it is $I_p$, you need to check that the scheme-theoretic intersection of your quintic curves is the point $p$.
|
H: Rudin's PMA: Theorem 3.29 Proof
Theorem 3.29: If $p>1$,
$$
\sum_{n=2}^{\infty}\frac{1}{n(\log\ n)^p}
$$
converges; if $p\leq1$, the series diverges.
Proof: The monotonicity of the logarithmic function implies that $\{\log n\}$ increases. Hence $\{1/(n \log n)\}$ decreases, and we can apply Theorem 3.27 to the series above; this leads us to the series
$$
\sum_{k=1}^{\infty}2^k\cdot \frac{1}{2^k(\log\ 2^k)^p}=\sum_{k=1}^{\infty}\frac{1}{(k\log\ 2)^p}=\frac{1}{(\log\ 2)^p}\sum_{k=1}^{\infty}\frac{1}{k^p}
$$
and Theorem 3.29 follows from Theorem 3.28.
I have two questions:
(1) How can we get the decrease of $\{1/(n \log n)\}$ from the increase of $\{\log n\}$? I didn't see any connection between them.
(2) The author said that we can apply Theorem 3.27 to the series. However, in order to apply Theorem 3.27, I think we need to show that $\{1/(n (\log n)^p)\}$ is decreasing. But I don't know how to do that.
Theorem 3.27: Suppose $a_1\geq a_2\geq\cdots\geq0$. Then the series $\sum_{n=1}^{\infty} a_n$ converges if and only if the series
$$
\sum_{k=0}^{\infty}2^ka_{2^k}=a_1+2a_2+4a_4+8a_8+\cdots
$$
converges.
AI: The function $g(x)=x(\log x)^p$ is positive and increasing on $(1,\infty)$ when $p\ge 0$ (and it is still eventually increasing when $p<0$). From that, it follows that $f(x)=\frac{1}{x(\log x)^p}$ is (eventually) decreasing, which is what both of your questions require.
The convergence of the series can then be analyzed either by the integral test or Cauchy's condensation theorem, as you proposed in your problem.
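If it helps to see the monotonicity explicitly (a small computation, not part of the original answer): for $x>1$,
$$\frac{d}{dx}\Big(x(\log x)^p\Big)=(\log x)^p+p(\log x)^{p-1}=(\log x)^{p-1}\big(\log x+p\big),$$
which is positive for all $x>1$ when $p\ge 0$, and for all $x>e^{-p}$ when $p<0$. Hence $x(\log x)^p$ is (eventually) increasing, so $1/(n(\log n)^p)$ is (eventually) decreasing; since dropping finitely many terms does not affect convergence, this is enough to apply Theorem 3.27.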
|
H: Does $\mathbb{Q}$ have the finite-closed topology?
Let $\mathbb{Q}$ be the set of all rational numbers with the usual topology
Does $\mathbb{Q}$ have the finite-closed topology?
My attempt: I think yes.
The finite-closed topology means the cofinite topology. We know that in the cofinite topology $(\mathbb{R}, T)$, the only closed sets are the finite ones and $\mathbb{R}$ itself.
Similarly, we can say that in $(\mathbb{Q}, T)$, where $T$ is the cofinite topology, the only closed sets are the finite ones and $\mathbb{Q}$ itself.
Is this true?
AI: $\{1,\frac 1 2 ,\frac 1 3...\}\cup \{0\}$ is an infinite closed set in $\mathbb Q$ (with its usual topology) which is not all of $\mathbb Q$, so the usual topology on $\mathbb Q$ is not the cofinite topology.
|
H: How to convert from a flattened 3D index to a set of coordinates?
I have a flattened 3D array in row-major format with an index, $I$, defined as $I = x + y D_x + z D_x D_y$, where $x$, $y$, and $z$ are the indices and $D_x$, $D_y$, and $D_z$ are the dimensions of the 3D array. How can I obtain the original set of coordinates from the index $I$?
(This question has been asked a number of other places on StackExchange with no actual derivation of the solution.)
AI: We start with the standard definition of integer division: $a = qd +r$ with $0 \le r < d$, where $a$ is the dividend, $q$ is the quotient, $d$ is the divisor, and $r$ is the remainder. From here, we define the integer division operator $div(a, d)=q$ and the modulo operator $mod(a, d) = r$.
Starting with our index,
$$I = x + y D_x + z D_x D_y$$
Rewriting to fit the division form above,
$$I = D_x (y + z D_y) + x$$
We can now solve for $x$:
$$mod(I, D_x) = x$$
Now for $y$ and $z$, we extract the inner expression:
$$div(I, D_x) = y + D_y z$$
But this has the same form as the expression we just solved for $x$. Thus,
$$mod(div(I, D_x), D_y) = y$$
$$div(div(I, D_x), D_y) = z$$
Note: this derivation extends quite easily to higher dimensions.
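For completeness, here is the derivation written out as code (an illustrative Python sketch; the function names are my own, and nonnegative indices are assumed so that // and % behave exactly like the div and mod operators above):
def flatten(x, y, z, Dx, Dy):
    # flattened index as defined in the question: I = x + y*Dx + z*Dx*Dy
    return x + y * Dx + z * Dx * Dy

def unflatten(I, Dx, Dy):
    # recover (x, y, z) from the flattened index
    x = I % Dx
    y = (I // Dx) % Dy
    z = (I // Dx) // Dy
    return x, y, z

# round-trip check over a small 4 x 3 x 5 array
Dx, Dy, Dz = 4, 3, 5
for z in range(Dz):
    for y in range(Dy):
        for x in range(Dx):
            assert unflatten(flatten(x, y, z, Dx, Dy), Dx, Dy) == (x, y, z)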
|
H: Holomorphic in an open set containing the closed unit disc $\mathbb D$ and real on the boundary of $\mathbb D$
If f(z) is holomorphic in an open set containing the closed unit disc, and if f($e^{i\theta})$ is real for all $\theta$ in $\mathbb R$, then prove that f(z) is constant.
I came across this problem in the exercises of a book on Complex Analysis. I need two clarifications about this problem. I think the word "connected" is missing in the problem in the domain of f(z). Is the problem true for all open sets or only for domains? For the domain of f for which the problem is true, how do I prove the result? What is given is that the function is real on the boundary of the unit disc $\mathbb D$. This means that the imaginary part of f say v is zero on $\partial\mathbb D$. From here how do I prove that f is constant? Please help.
AI: It is obviously false if the domain is not connected: Take $f(z)=1$ for $|z|<2$ and $f(z)=0$ for $|z-10|<1$.
For the proof when the domain is connected consider $g(z)=e^{if(z)}$ and note that $|g(z)|=1$ for all $z$ with $|z|=1$. Do you know how to show that $g$ is a constant? [This has been proved many times on this site].
|
H: area being finite in $\int_{0}^\infty \alpha\frac{x}{\beta+x}dx<\infty.$
For $x>0$, for what values of $\alpha$ and $\beta$, do we have:
$$\int_{0}^\infty \alpha\frac{x}{\beta+x}dx<\infty.$$
This is known as the saturation-growth model specification in nonlinear regression.
I would like to know for what values the area under the curve is finite or infinite when $x$ takes strictly positive values.
How do we obtain the threshold for this? What is the best way to go about this?
AI: $\alpha \frac x {\beta +x} \to \alpha $ as $x \to \infty$, so the integrand does not even tend to $0$ unless $\alpha = 0$. Hence the integral can be finite only when $\alpha =0$ (in which case it equals $0$).
|
H: Integrate: $\int \:x\left(\frac{1-x^2}{1+x^2}\right)^2dx$.
Integrate:
$$\int \:x\left(\frac{1-x^2}{1+x^2}\right)^2dx$$
My attempt:
$$\text{Let} \ u = x, v'=\left(\frac{1-x^2}{1+x^2}\right)^2\\$$
\begin{align}
\int \:x\left(\frac{1-x^2}{1+x^2}\right)^2dx & = x\left(x-2\arctan \left(x\right)+\frac{2x}{1+x^2}\right)-\int \frac{3x+x^3-2\arctan \left(x\right)-2x^2\arctan \left(x\right)}{1+x^2}dx\\
& = x\left(x-2\arctan \left(x\right)+\frac{2x}{1+x^2}\right)-\left(\frac{1}{2}x^2-2x\arctan \left(x\right)+\ln \left|x^2+1\right|-2\ln \left|\frac{1}{\sqrt{x^2+1}}\right|+\frac{1}{2}\right)\\ & = \frac{1}{2}x^2+\frac{2x^2}{x^2+1}-\ln \left|x^2+1\right|+2\ln \left|\frac{1}{\sqrt{x^2+1}}\right|-\frac{1}{2}+C,C \in \mathbb{R}
\end{align}
I left out the small details, otherwise this post would be quite long.
I tried to do this with $u$-substitution but I'm not sure how I can do that here.
AI: Let $x^2=t\implies xdx=\frac{dt}{2}$
$$\int x\left(\frac{1-x^2}{1+x^2}\right)^2dx=\int \left(\frac{1-t}{1+t}\right)^2\frac{dt}{2}$$
$$=\frac12\int \left(\frac{2}{1+t}-1\right)^2 dt$$
$$=\frac12\int \left(1+\frac{4}{(1+t)^2}-\frac{4}{1+t}\right) dt$$
$$=\frac12 \left(t-\frac{4}{1+t}-4\ln|1+t|\right)+C$$
$$=\frac{x^2}{2}-\frac{2}{1+x^2}-2\ln(1+x^2)+C$$
|
H: Cyclic Field Extensions of sum of radicals
Given a field $K$ of characteristic 0, which contains a primitive root of order 3,
I would like to show that the extension $K(\sqrt2+3^{\frac{1}{3}})/K$ is cyclic.
My attempt was to look at the "bigger" extension $K(\sqrt2, 3^{\frac{1}{3}})/K$.
The problem is that I don't know its size, because $\sqrt2$ might be in $K$, for example.
But I do know that its Galois group embeds in $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$
by the diamond theorem.
Anyway, we still have to show that the initial extension is Galois, and we know it's enough to show its normality, but I can't figure it out.
Any help would be appreciated,
Thanks.
AI: Both $K(\sqrt2)/K$ and $K(\sqrt[3]3)/K$ are Kummer extensions, the latter because $K$ contains
a primitive cube root of unity. The first has Galois group trivial or $C_2$; the second
has Galois group trivial or $C_3$. Those Galois groups have coprime orders,
so their compositum $K(\sqrt2,\sqrt[3]3)$ is Galois over $K$ with group
$G=\text{Gal}(K(\sqrt2)/K)\times\text{Gal}(K(\sqrt[3]3)/K)$. In all possible cases
$G$ is cyclic.
The subfield $K(\sqrt2+\sqrt[3]3)$ is Galois with group a quotient of $G$
(by the Galois correspondence, since $G$ is Abelian). A quotient of a cyclic group
is cyclic.
|
H: How to easily identify how many distinct roots a polynomial has?
For example, I have $a(x) = 4 x^4 + 5 x^2 + 7 x + 2 $.
Using Descartes' rule, I know $a(x)$ has at most $2$ negative real roots. As imaginary roots come in pairs, $a(x)$ will have either $0$ or $2$ negative real roots. I also imagine the possible graph of $4 x^4 + 5 x^2 + 7 x $; it should have a single minimum. So, if the minimum value of $4 x^4 + 5 x^2 + 7 x$ is $\le -2$, then there can be a real root, otherwise both roots will be imaginary. Now I see $a(0) = 2, a(-1) = 4,$ so there can be a root between $0$ and $-1$. By trying out $-0.5$, I get a root, as $a(-0.5) = 0$. So I decided that $a(x)$ will have two real roots, one of which is $-1/2$.
But this interpretation proved to be wrong when I plotted the graph: $a(x)$ has a repeated real root at $-1/2$. In fact $ a(x) = (2 x+1)^2(x^2 -x +2)$.
I want to know if there is an easier way to identify how many distinct real roots (or whether there are any repeated roots) a polynomial has, without using any tool for graph plotting or factorizing. In the above example, if the root had been $-23/245$ in place of $-1/2$, then I wouldn't have been able to identify it by inspection either.
AI: A complex number $x_0$ is a root of multiplicity $m>1$ of $a(x)$ if and only if $a(x)=(x-x_0)^mb(x)$ for some polynomial $b(x)$ with $b(x_0)\not=0$. By considering the derivative of $a(x)$ we find that
$$a'(x)=m(x-x_0)^{m-1}b(x)+(x-x_0)^{m}b'(x)=(x-x_0)^{m-1}(mb(x)+(x-x_0)b'(x)).$$
Hence $x_0$ is a root of $a'(x)$ of multiplicity $m-1>0$.
So a multiple root of $a(x)$ can be found by checking whether $a(x)$ and $a'(x)$ have a common root, i.e. by finding the roots of the greatest common divisor of $a(x)$ and $a'(x)$.
In your case $a(x)=4 x^4 + 5 x^2 + 7 x + 2$, and it follows that
$$\gcd(a(x),a'(x))=2x+1.$$
Therefore $x=-1/2$ is a multiple root of $a(x)$ (of multiplicity $2$).
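If a computer algebra system is available, the same check is short (a sketch assuming SymPy; the returned gcd may be normalized differently, e.g. up to a constant factor):
from sympy import symbols, gcd, diff, factor

x = symbols('x')
a = 4*x**4 + 5*x**2 + 7*x + 2
print(gcd(a, diff(a, x)))   # expect 2*x + 1 (up to a constant), so x = -1/2 is a repeated root
print(factor(a))            # (2*x + 1)**2 * (x**2 - x + 2)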
|
H: Proof that $\frac{d(\sin x)}{dx} = \cos x$ for $\frac{\pi}{2} < x < \pi$
Let us assume that $x$ is an angle that lies in the second quadrant i.e. $\dfrac{\pi}{2} < x < \pi$.
We have to prove that $\dfrac{d(\sin x)}{dx} = \cos x$. I will use the unit circle to prove this. The method will be like the one used by Grant Sanderson of 3Blue1Brown in this video, which is a part of his Essence of Calculus series.
The angles are measured in radians. In the diagram below, I have marked the angle $x$ and $dx$ on the unit circle. The angle $dx$ approaches $0$, so it is very very small but for the sake of clarity, I have made it considerably large.
Now, since $dx$ is actually really small, we can approximate arc $AB$ as a straight line approximately perpendicular to $OA$. We are measuring the angles in radians and we have a unit circle, so its radius is $1 \text{ unit}$. Hence, the length of arc (now line segment) $AB$ is $r\theta$, where $\theta$ is $\angle AOB$ i.e. $dx$ and $r = 1 \text{ unit}$. So, $AB = 1 \cdot dx = dx$.
Now, $d(\sin x) = \sin(x+dx)-\sin x$ which is the change in the ordinate of $A$ and $B$.
Now, $AP = d(\sin x)$ and $AB = dx$. Also, $\triangle APB \sim \triangle AOQ$. So, $\angle BAP = \angle OAQ = \pi - x$. $\cos(\angle BAP) = \dfrac{AP}{AB} = \dfrac{d(\sin x)}{dx}$. And $\cos(\angle BAP) = \cos (\pi - x) = -\cos x$
So, $\dfrac{d(\sin x)}{dx} = -\cos x$ which is not the case at all, since $\dfrac{d(\sin x)}{dx}$ will be negative as $\sin(x+dx) < \sin x$ but the sign of $-\cos x$ will be positive as $\cos x < 0$.
So, what mistake did I make here?
According to me, the mistake was in assuming that $AP = d(\sin x)$. I think that $AP$ should be $|d(\sin x)|$. And as $d(\sin x) < 0 \implies |d(\sin x)| = -d(\sin x)$. This fixes everything but I still want to verify if this indeed is the cause of the error.
Thanks!
PS : Let me know if I should justify why $\triangle APB \sim \triangle OQA$ to make the question clearer.
PPS : It is necessary to prove that differentiating $\sin x$ with respect to $x$ gives $\cos x$ for all 4 quadrants when proving using the unit circle, right?
AI: Note that $$d(\sin x)= \sin(x+dx)-\sin x \lt 0$$as $\sin x$ is decreasing in $\left(\frac{\pi}{2},\pi\right)$. You’re absolutely right in spotting your mistake. When you claim $$AP=d(\sin x)$$ , you’re assigning a negative value to a length, and so you do need an absolute sign there. $$AP=|d(\sin x)| =-d(\sin x)$$
|
H: Trigonometry Ferris Wheel Question
Question: Suppose you wanted to model a Ferris wheel using a sine function that took $60$ seconds to complete one revolution. The Ferris wheel must start $0.5\,\textrm{m}$ above ground. Provide an equation of such a sine function that will ensure that the Ferris wheel's minimum height off the ground is $0.5\,\textrm{m}$. Explain why your equation works.
Format of equation: $y = a\sin k(x - d) + c$
I need help because I don't understand how to determine the equation.
For example: how am I suppose to find the amplitude and the $c$ value if I only have the minimum value and not the maximum value? Also, I don't understand how to find the phase shift (value of $d$) if I am only given the minimum value and the period. Please help!
AI: Since the question states that the Ferris wheel must start half a meter off of the ground, we can make our phase shift $d=0$. The minimum height is then achieved where the argument of the sine, $kx$, equals $\frac{\pi}{2}n$ with $n$ every other odd integer starting at $n=3$ (i.e. $\frac{3\pi}{2}, \frac{7\pi}{2}, \dots$), because the sine function is $-1$ at those values and so is at its minimum.
Next, sine functions ,$y= a\sin{k(x-d)}+c$, are $2\pi$ periodic, meaning that it takes $2\pi$ radians, or $1$ period, to get back to your initial starting point. The period, $T$, is given as $60$ seconds. Using the formula for the period of a sine and cosine function, $T=\frac{2\pi}{|k|}$, we find that $|k|=\frac{\pi}{30}$. The absolute value signs are not really necessary, but period is typically always positive and $k$ can be positive or negative.
Now to find the amplitude. No speed was specified, nor was the radius of the Ferris Wheel, and the only way I see to solve this is to let $a=r$ where $r$ is the radius of the Ferris Wheel.
Finally, we need that when $\sin{k(x-d)}=-1$, $y=0.5$. Setting $y=-r+c=0.5$, we see that $c=r+0.5$.
$$y=r\sin{\frac{\pi}{30}x} \,+ r+\frac{1}{2}$$
|
H: Problem on ratio wrt time distance
This is a rough translation from a local language so please bear with it,
Say we have two people, police $p1$ and thief $p2$.
$p1$ takes $4$ $steps$ and $p2$ takes $5$ $steps$ in the same amount of time. Also the distance $p1$ covers in $6$ $steps$ is equal to the distance $p2$ covers in $8$ $steps$.
What is the ratio of their velocities?
What I tried:
Let $p1$ and $p2$ cover $d1$ and $d2$ distance in 4 and 5 steps, also let say it takes $t1$ time for $p1$ to take 6 steps and $t2$ time for $p2$ to take 8 steps,
so $d1/d2 = 4/5$ and $t1/t2 = 6/8$, so I get $(d1/t1)/(d2/t2) = 16/15$
In my friend circle we are getting another result $15/16$ (if required I will add the procedure).
I just can't wrap my head around the relation of steps with the other units.
AI: Let the distance covered by $p_1$ in $6$ steps or $p_2$ in $8$ steps be $L$.
Then after one time unit, $p_1$ has travelled $\frac{4L}{6}$, and $p_2$ has travelled $\frac{5L}{8}$.
Velocity = Distance / Time, so the ratio of velocities is:
$$
\frac{p_1}{p_2}
=\frac{\frac{4L}{6}}{\frac{5L}{8}}
=\frac{\frac{2}{3}}{\frac{5}{8}}
=\frac{2}{3}\cdot\frac{8}{5}
=\frac{16}{15}
$$
|
H: Isometry on inner product space
$V$ is an inner product vector space. If a transformation $T\colon V\to V$ satisfies $\langle T(x), T(y)\rangle = \langle x, y\rangle$ for every vector $x, y \in V$, prove or disprove that $T$ is linear.
Seems true, but can't prove it. Tried plugging $x+y$ into $x,y$ and got
$\langle T(x+y), T(x+y)\rangle = \langle T(x)+T(y),T(x)+T(y)\rangle$
but this does not lead to the conclusion. Also, I got that $T$ is one-to-one. Does anyone know the answer? Any help is appreciated!
AI: It's true. Take an orthonormal basis $(e_i)$ for $V$ (say $V$ is finite-dimensional, so that the expansions below are finite sums). Then $(Te_i)$ is an orthonormal set of $\dim V$ vectors, hence a basis. By orthonormal expansion, we have $$v = \sum_i \langle v,e_i\rangle e_i$$for all $v\in V$. Similarly we have $$Tv = \sum_i \langle Tv,Te_i\rangle Te_i.$$But $\langle v,e_i\rangle = \langle Tv,Te_i\rangle$. So $T$ is linear.
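If one prefers not to assume finite-dimensionality (or to avoid choosing a basis altogether), here is an alternative argument under the same hypothesis: expand
$$\|T(ax+by)-aTx-bTy\|^2$$
using (sesqui)linearity of the inner product. Every term is an inner product of vectors in the image of $T$, so by hypothesis it equals the corresponding term with $T$ removed; the whole expression therefore equals $\|(ax+by)-ax-by\|^2=0$, i.e. $T(ax+by)=aTx+bTy$.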
|
H: Identities in first order logic
When given the statement
$$(\forall x)~\big(A(x)\to(B(x)\to C)\big),$$
can I rewrite it as
$$(\forall x)~\big((A(x)\wedge B(x))\to C\big)?$$
Also, if so, how can I prove it, please?
Thanks!
AI: Yes, indeed, $(\forall x)~(A(x)\to(B(x)\to C)) \iff (\forall x)~((A(x)\wedge B(x))\to C)$
We may derive $C$ when we assume that $(\forall x)~(A(x)\to(B(x)\to C))$ and, for an arbitrary entity $c$, that $A(c)\wedge B(c)$.
Therefore $(\forall x)~(A(x)\to(B(x)\to C)) \implies (\forall x)~((A(x)\wedge B(x))\to C)$
We may derive $C$ when we assume that $(\forall x)~((A(x)\wedge B(x))\to C)$, and, for an arbitrary entity $c$, that $A(c)$ and $B(c)$.
Therefore $(\forall x)~(A(x)\to(B(x)\to C)) \impliedby (\forall x)~((A(x)\wedge B(x))\to C)$
|
H: Having trouble deciphering this function space notation
I am working through some fluid mechanics and I can't seem to find a precise definition anywhere for this function space notation:
$$z(\alpha,t) \in C^1 \left( [-T,T]; C[0,2\pi] \right)$$
I am specifically looking at the bottom of page 332 (Proposition 8.6) in Majda and Bertozzi's Vorticity and Incompressible Flow. I have seen several other examples of this notation such as
$$L^\infty (0,T;C^{1,\alpha})$$
and
$$C(0,T;H^1).$$
I think I get the general idea - for instance $L^\infty (0,T;C^{1,\alpha})$ is something like the set of functions $f(t,x)$ that are $L^\infty$ in time on the interval $[0,T]$ and $C^{1,\alpha}$ in space (could be wrong about that), but it also seems that the Hölder norm is interacting with the $L^\infty$ norm somehow.
Does anybody have a good definition of these kind of spaces or know of a reference that defines them thoroughly?
AI: It means that, for each $t \in [-T,T]$, the slice $z(\cdot,t)$ is an element of $C[0,2\pi]$ (a continuous function of $\alpha$), and that the map $t \mapsto z(\cdot,t)$, viewed as a map from $[-T,T]$ into the Banach space $C[0,2\pi]$, is continuously differentiable with respect to the sup norm of $C[0,2\pi]$. You would read it as "a $C^1$ function on $[-T,T]$, with values in $C[0,2\pi]$".
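More generally (this is standard, though not spelled out in the book you cite): for a Banach space $X$, such as $C[0,2\pi]$, $C^{1,\alpha}$, or $H^1$, these spaces carry the norms
$$\|u\|_{C([0,T];X)}=\sup_{0\le t\le T}\|u(t)\|_X,\qquad \|u\|_{L^\infty(0,T;X)}=\operatorname*{ess\,sup}_{0<t<T}\|u(t)\|_X,$$
so an element of $L^\infty(0,T;C^{1,\alpha})$ is a (measurable) map $t\mapsto u(t)\in C^{1,\alpha}$ whose Hölder norm is essentially bounded in $t$; that is how the two norms interact. For a careful treatment, see the section on spaces involving time (Bochner spaces) in Evans, Partial Differential Equations.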
|
H: norm of a functional attains in real part
I tried to prove that, when $X$ is a Banach space, it holds that $$\sup \operatorname{Re} f(B_X)=\|f\|,$$ where $f \in X^*$ is a functional and the norm is defined by $\|f\|=\sup\{ |f(x)|: x \in B_X \}$.
One of the inequalities is straightforward, but I do not see the other one.
AI: If $z=re^{i\theta}$ is any complex number ($r \geq 0, \theta \in \mathbb R$) then we can write $|z|=\operatorname{Re} (cz)$ where $c=e^{-i\theta}$. Note that $|c|=1$. Taking $z=f(x)$ where $\|x\| \leq 1$, we see that $|f(x)|= \operatorname{Re} f(y)$ for $y=cx$, which satisfies $\|y\|\leq 1$. This proves the other inequality.
|